From 47238af85b36ddc85f527b7468eb8d8853eb8d54 Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 17:25:38 +0800
Subject: [PATCH 01/51] init overlap text projects

---
 contrib/Overlap-Recovery/README.md | 260 +++++++++++++++++++++++++++++
 contrib/Overlap-Recovery/inference |   1 +
 2 files changed, 261 insertions(+)
 create mode 100644 contrib/Overlap-Recovery/README.md
 create mode 120000 contrib/Overlap-Recovery/inference

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
new file mode 100644
index 000000000..7ce21175c
--- /dev/null
+++ b/contrib/Overlap-Recovery/README.md
@@ -0,0 +1,260 @@
+# Overlap-Recovery: Overlapped Text Restoration Reference Design
+
+## 1 Introduction
+
+This sample uses an in-house algorithm to restore overlapped text and is provided for reference. The system runs on the Ascend 310 card. This repository implements the upstream stage of the overlapped text recognition task (Overlap-CRNN): it separates overlapped text and outputs a mask for each text instance.
+
+### 1.1 Supported Products
+
+The design was verified on the Atlas 300 (Model 3010) and also supports the Atlas 200 RC and Atlas 500 platforms. For product photos and hardware specifications, see the *Atlas 300 AI Accelerator Card User Guide (Model 3010)*. Because the hardware platform is an Atlas 800 AI server (Model 3010) equipped with an Atlas 300 card, and such servers are normally reached over the network, a client such as a laptop or PC is required to access the server; the user interface is generally displayed on the client.
+
+### 1.2 Supported Versions
+
+To query the version number, run the following command in the Ascend product environment:
+
+```
+npu-smi info
+```
+
+### 1.3 Software Architecture
+
+The software implements a text restoration system; the functions of its subsystems are described in Table 1.1. The overlapped text restoration subsystem recovers overlapped text and produces a mask for each text instance. This design adopts a segmentation-based algorithm and introduces an overlap-region-aware module to recover the overlapped text instances. The modules of the design are listed in Table 1.2.
+
+Table 1.1 Subsystem functions:
+
+| No. | Subsystem | Description |
+| :--: | :----------------: | :----------------------------------------------------------: |
+| 1 | Overlapped text restoration subsystem | Produces the masks of the overlapped text instances; the results are then passed to a downstream text recognition model. |
+
+Table 1.2 Module functions:
+
+| No. | Module | Description |
+| :--: | :--------: | :----------------------------------------------------------: |
+| 1 | Image input | Reads the image (JPG/PNG) with the Pillow library. |
+| 2 | Image decoding | Decodes the image with the Pillow library. |
+| 3 | Image resizing | The model takes a fixed-size input, so the input image is resized while keeping its aspect ratio. |
+| 4 | Text restoration | After resizing, the buffered data is fed to the text restoration model. This design uses an in-house algorithm for restoration. |
+| 5 | Result visualization | Visualizes the prediction for a single image with the Pillow library. |
+
+### 1.4 Code Directory Structure
+
+The project is named `Overlap-Recovery`; its root directory is organized as follows:
+
+```text
+├── train      # training code
+├── inference  # inference code
+```
+
+The `Overlap-Recovery/train` directory is organized as follows:
+
+TODO
+
+The `Overlap-Recovery/inference` directory is organized as follows:
+
+```text
+├── eval.py              # accuracy evaluation
+├── eval_utils.py        # helper functions for metric computation
+├── load_ann.py          # test-set loading
+├── load_img_data.py     # image-data loading
+├── ominfer.py           # single-image inference
+├── export.py            # exports the ckpt model to ONNX format
+├── preprocess_utils.py  # helper functions for image preprocessing
+├── README.md
+├── models               # model files of different formats
+│   ├── best_iou.onnx
+│   ├── best_iou.ckpt
+│   └── best_iou.om
+├── dataset              # test dataset
+│   ├── img
+│   └── annotation.json
+```
+
+### 1.5 Pipeline Flowchart
+
+The processing pipeline is shown below:
+
+![image-20221201214655261](./流程图.png)
+
+### 1.6 Features and Applicable Scenarios
+
+The restoration model in this sample targets text in ordinary images and reports the IoU metric of the text regions in the test images.
+
+Restoration works well when the text in the image is clearly legible, the layout is tidy, and the characters are of moderate size.
+
+Restoration quality degrades when the text is blurry, the layout is irregular, or the characters are small.
+
+## 2 Environment Dependencies
+
+The software dependencies and versions are listed below.
+
+The recommended OS is Ubuntu 18.04 or CentOS 7.6.
+
+Training environment dependencies and versions:
+
+TODO
+
+Inference environment dependencies and versions:
+
+| Software | Version |
+| ------------------- | ----------- |
+| MindX SDK | 3.0RC3 |
+| Ascend-CANN-toolkit | 5.1.RC2 |
+| ubuntu | 18.04.1 LTS |
+| python | 3.9.2 |
+| cv2 | 4.5.5.64 |
+| numpy | 1.23.1 |
+| pillow | 9.1.0 |
+| mmcv-full | 1.7.0 |
+
+Before running the inference project, set the environment variables:
+
+- Environment variables
+
+```
+. ${sdk_path}/set_env.sh
+. ${ascend_toolkit_path}/set_env.sh
+```
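+To quickly confirm that the environment is usable, you can run a short check such as the sketch below (illustrative only; it relies solely on imports and calls that already appear in this repository's inference code):
+
+```python
+import numpy as np
+import cv2
+import mmcv
+from PIL import Image
+from mindx.sdk import base  # fails if set_env.sh has not been sourced
+
+base.mx_init()  # global resource initialization, same call used by eval.py
+print("MindX SDK environment OK")
+```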
+## 3 Model Training
+
+**Step 1** Download the official CRNN model code from the Ascend ModelZoo: https://www.hiascend.com/zh/software/modelzoo/models/detail/C/c4945b2fc8aa47f6af9b4f2870e41062/1
+
+**Step 2** To adapt it to our task, make the following modifications:
+
+1. **default_config.yaml**
+
+   ```yaml
+   model_version: "V2" # use V2 for GPU training
+   label_dict: "PATH/TO/ch_sim_en_digit_symble.txt" # path to your own dictionary
+   max_text_length: 12
+   class_num: 6703
+   blank: 6702
+   train_dataset_path: "" # training dataset path
+   train_eval_dataset: "synth" # use the name "synth"
+   train_eval_dataset_path: "" # evaluation dataset path
+   ```
+
+2. **dataset.py**
+
+   Replace line 41:
+
+   ```python
+   letters = [letter for letter in config1.label_dict]
+   ```
+
+   with:
+
+   ```python
+   letters = []
+   with open(config1.label_dict, 'r') as f:
+       for line in f:
+           letter = line.strip('\n')
+           letters.append(letter)
+   ```
+
+3. **metric.py**
+
+   Replace the dictionary on line 18:
+
+   ```python
+   label_dict = "abcdefghijklmnopqrstuvwxyz0123456789"
+   ```
+
+   with the following (`dict_path` is the dictionary `ch_sim_en_digit_symble.txt` you prepared yourself; it can be found in this repository):
+
+   ```python
+   label_dict = []
+   with open("[dict_path]", 'r') as f:
+       for line in f:
+           letter = line.strip('\n')
+           label_dict.append(letter)
+   ```
+
+**Step 3** For the training procedure, follow the official code: https://www.hiascend.com/zh/software/modelzoo/models/detail/C/c4945b2fc8aa47f6af9b4f2870e41062/1
+
+## 4 Model Conversion
+
+Training in section 3 produces a ckpt model file. Before running the project, convert the ckpt file to an ONNX model with `export.py`, then convert the ONNX model to an om model with ATC inside this repository.
+
+The model conversion tool (ATC) is documented here: https://support.huawei.com/enterprise/zh/doc/EDOC1100234054
+
+The steps are as follows:
+
+1. Place the trained ckpt model file in the `Overlap-Recovery/train/models` folder on the server.
+
+2. Enter the `Overlap-Recovery/train` folder, set the `ckpt_file_path` and `file_name` parameters in `export.py` to your own paths, and run:
+
+   ```
+   python export.py
+   ```
+
+3. Copy the generated ONNX model to the inference server and place it under `Overlap-Recovery/inference/models`.
+
+4. On the inference server, run the following command (set `onnx_model_path` and `output_model_path` to your own paths):
+
+   ```
+   atc --model=[onnx_model_path] --framework=5 --output=[output_model_path] --soc_version=Ascend310 --input_shape="input:1,3,1472,1472"
+   ```
+
+5. This command generates the model file `[output_model].om` required by the project in the current directory. On completion the terminal prints:
+
+   ```
+   ATC start working now, please wait for a moment.
+   ATC run success, welcome to the next use.
+   ```
+
+which indicates that the command executed successfully.
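+If ATC reports an error instead, a quick way to rule out a broken export is to sanity-check the ONNX file first with a sketch like the one below (an optional helper, not part of the repository; it assumes the `onnx` Python package is installed, and the file name follows the directory layout in section 1.4):
+
+```python
+import onnx
+
+onnx_model = onnx.load("models/best_iou.onnx")  # adjust to your exported file
+onnx.checker.check_model(onnx_model)            # raises if the graph is malformed
+for graph_input in onnx_model.graph.input:
+    dims = [d.dim_value for d in graph_input.type.tensor_type.shape.dim]
+    print(graph_input.name, dims)  # should match the --input_shape passed to ATC
+```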
+The related models can be downloaded from: http://xxx.zip. Place them according to the directory layout described above.
+
+## 5 Model Inference
+
+With the om model file saved under `Overlap-Recovery/inference/models/`, proceed as follows:
+
+**Step 1** Put any image to be predicted in the current directory (`./Overlap-Recovery/inference`) and rename it `test.jpg`.
+
+**Step 2** Obtain the om model as described in the model conversion section and place it under `Overlap-Recovery/inference/models/`. If you use the model provided by this repository instead of converting your own, no file changes are needed; otherwise adjust the configuration in `ominfer.py`: set `model_path` to the actual om model path, set `img_prefix` and `img_name` to the actual test image path, and set `save_path` to the directory where the visualized images should be saved.
+
+**Step 3** Run the whole project from the command line:
+
+```
+python ominfer.py
+```
+
+**Step 4** The run produces a `test` folder; the visualized predictions are saved in it.
+
+## 6 Accuracy Evaluation
+
+**Step 1** Prepare a dataset of the same format under `Overlap-Recovery/inference/dataset/` (a test dataset is provided; place it according to the directory layout: http://xxx.zip)
+
+**Step 2** Run the whole project from the command line:
+
+```
+python eval.py
+```
+
+The model meets the accuracy target on the test set: the final accuracy is 80%, which satisfies the requirement (acc ≥ 80%).
+
+![image-20221202155839483](./测试结果.png)
\ No newline at end of file
diff --git a/contrib/Overlap-Recovery/inference b/contrib/Overlap-Recovery/inference
new file mode 120000
index 000000000..1c18255ac
--- /dev/null
+++ b/contrib/Overlap-Recovery/inference
@@ -0,0 +1 @@
+../../../Overlap_SDK/
\ No newline at end of file
-- 
Gitee

From c3ef1a9450c7e765865a25ab77944bfccf44ac5b Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 17:31:20 +0800
Subject: [PATCH 02/51] add inference code

---
 contrib/Overlap-Recovery/inference            |    1 -
 .../inference/.idea/.gitignore                |    8 +
 .../inference/.idea/Overlap_SDK.iml           |   11 +
 .../inference/.idea/deployment.xml            |   21 +
 .../inspectionProfiles/Project_Default.xml    |   18 +
 .../inspectionProfiles/profiles_settings.xml  |    6 +
 .../Overlap-Recovery/inference/.idea/misc.xml |    4 +
 .../inference/.idea/modules.xml               |    8 +
 .../inference/dataset/.gitkeep                |    0
 contrib/Overlap-Recovery/inference/eval.py    |  156 +++
 .../Overlap-Recovery/inference/eval_utils.py  |  151 +++
 .../Overlap-Recovery/inference/load_ann.py    |   55 +
 .../inference/load_img_data.py                |   49 +
 .../inference/models/.gitkeep                 |    0
 contrib/Overlap-Recovery/inference/ominfer.py |  148 +++
 .../inference/preprocess_utils.py             | 1022 +++++++++++++++++
 contrib/Overlap-Recovery/inference/test.jpg   |  Bin 0 -> 265282 bytes
 contrib/Overlap-Recovery/inference/test/0.png |  Bin 0 -> 6186 bytes
 contrib/Overlap-Recovery/inference/test/1.png |  Bin 0 -> 6092 bytes
 .../Overlap-Recovery/inference/test/input.jpg |  Bin 0 -> 265282 bytes
 .../\346\265\201\347\250\213\345\233\276.png" |  Bin 0 -> 11469 bytes
 ...3\350\257\225\347\273\223\346\236\234.png" |  Bin 0 -> 15416 bytes
 22 files changed, 1657 insertions(+), 1 deletion(-)
 delete mode 120000 contrib/Overlap-Recovery/inference
 create mode 100644 contrib/Overlap-Recovery/inference/.idea/.gitignore
 create mode 100644 contrib/Overlap-Recovery/inference/.idea/Overlap_SDK.iml
 create mode 100644 contrib/Overlap-Recovery/inference/.idea/deployment.xml
 create mode 100644 contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/Project_Default.xml
 create mode 100644 contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/profiles_settings.xml
 create mode 100644 contrib/Overlap-Recovery/inference/.idea/misc.xml
 create mode 100644 contrib/Overlap-Recovery/inference/.idea/modules.xml
 create mode 100644 contrib/Overlap-Recovery/inference/dataset/.gitkeep
 create mode 100644 contrib/Overlap-Recovery/inference/eval.py
 create mode 100644 contrib/Overlap-Recovery/inference/eval_utils.py
 create mode 100644 contrib/Overlap-Recovery/inference/load_ann.py
 create mode 100644 contrib/Overlap-Recovery/inference/load_img_data.py
 create mode 100644 contrib/Overlap-Recovery/inference/models/.gitkeep
 create mode 100644
contrib/Overlap-Recovery/inference/ominfer.py create mode 100644 contrib/Overlap-Recovery/inference/preprocess_utils.py create mode 100644 contrib/Overlap-Recovery/inference/test.jpg create mode 100644 contrib/Overlap-Recovery/inference/test/0.png create mode 100644 contrib/Overlap-Recovery/inference/test/1.png create mode 100644 contrib/Overlap-Recovery/inference/test/input.jpg create mode 100644 "contrib/Overlap-Recovery/inference/\346\265\201\347\250\213\345\233\276.png" create mode 100644 "contrib/Overlap-Recovery/inference/\346\265\213\350\257\225\347\273\223\346\236\234.png" diff --git a/contrib/Overlap-Recovery/inference b/contrib/Overlap-Recovery/inference deleted file mode 120000 index 1c18255ac..000000000 --- a/contrib/Overlap-Recovery/inference +++ /dev/null @@ -1 +0,0 @@ -../../../Overlap_SDK/ \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/.gitignore b/contrib/Overlap-Recovery/inference/.idea/.gitignore new file mode 100644 index 000000000..73f69e095 --- /dev/null +++ b/contrib/Overlap-Recovery/inference/.idea/.gitignore @@ -0,0 +1,8 @@ +# Default ignored files +/shelf/ +/workspace.xml +# Datasource local storage ignored files +/dataSources/ +/dataSources.local.xml +# Editor-based HTTP Client requests +/httpRequests/ diff --git a/contrib/Overlap-Recovery/inference/.idea/Overlap_SDK.iml b/contrib/Overlap-Recovery/inference/.idea/Overlap_SDK.iml new file mode 100644 index 000000000..4ddc51fb8 --- /dev/null +++ b/contrib/Overlap-Recovery/inference/.idea/Overlap_SDK.iml @@ -0,0 +1,11 @@ + + + + + + + + + + \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/deployment.xml b/contrib/Overlap-Recovery/inference/.idea/deployment.xml new file mode 100644 index 000000000..0342ed18d --- /dev/null +++ b/contrib/Overlap-Recovery/inference/.idea/deployment.xml @@ -0,0 +1,21 @@ + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/Project_Default.xml b/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/Project_Default.xml new file mode 100644 index 000000000..9ab1e9045 --- /dev/null +++ b/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/Project_Default.xml @@ -0,0 +1,18 @@ + + + + \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/profiles_settings.xml b/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/profiles_settings.xml new file mode 100644 index 000000000..105ce2da2 --- /dev/null +++ b/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/profiles_settings.xml @@ -0,0 +1,6 @@ + + + + \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/misc.xml b/contrib/Overlap-Recovery/inference/.idea/misc.xml new file mode 100644 index 000000000..68775bbcb --- /dev/null +++ b/contrib/Overlap-Recovery/inference/.idea/misc.xml @@ -0,0 +1,4 @@ + + + + \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/modules.xml b/contrib/Overlap-Recovery/inference/.idea/modules.xml new file mode 100644 index 000000000..81b7b7a6d --- /dev/null +++ b/contrib/Overlap-Recovery/inference/.idea/modules.xml @@ -0,0 +1,8 @@ + + + + + + + + \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/dataset/.gitkeep b/contrib/Overlap-Recovery/inference/dataset/.gitkeep new file mode 100644 index 000000000..e69de29bb diff --git a/contrib/Overlap-Recovery/inference/eval.py b/contrib/Overlap-Recovery/inference/eval.py 
new file mode 100644
index 000000000..a64c31c47
--- /dev/null
+++ b/contrib/Overlap-Recovery/inference/eval.py
@@ -0,0 +1,156 @@
+# -*- coding: utf-8 -*-
+
+import warnings
+warnings.filterwarnings('ignore')
+
+import numpy as np
+from PIL import Image
+from mindx.sdk import base
+from mindx.sdk.base import Tensor, Model
+
+from eval_utils import evaluate_metric
+from load_ann import load_annotations
+from load_img_data import load_img_data
+
+
+class OverlapDataset:
+
+    def __init__(self, annotation_file, img_prefix, seg_prefix):
+        self.data_list = load_annotations(annotation_file, img_prefix, seg_prefix)
+        self.img_prefix = img_prefix
+        self.seg_prefix = seg_prefix
+        self.sample_num = len(self.data_list)
+        print(f"There are totally {self.sample_num} samples")
+
+    def __len__(self):
+        return self.sample_num
+
+    def __getitem__(self, item):
+        data_item = self.data_list[item]
+        img_name = data_item['filename']
+        img_tensor, img_meta = load_img_data(img_name, self.img_prefix)  # hwc -> chw
+        img_meta['seg_map_path'] = data_item['seg_map_path']
+        return img_tensor, img_meta
+
+
+def prepare_model(model_path, device_id):
+    base.mx_init()  # global resource initialization
+    model = Model(model_path, device_id)  # create the model object
+    return model
+
+
+def postprocess(scaled_mask_preds, cls_score):
+    num_imgs = 1
+    segm_results = []
+    segm_scores = []
+    for img_id in range(num_imgs):
+        cls_score_per_img = cls_score[img_id]  # num_det, 1
+        topk_indices = np.argsort(cls_score_per_img.flatten())[::-1][:4]
+        scores_per_img = cls_score_per_img.flatten()[topk_indices]
+        mask_indices = topk_indices
+        masks_per_img = scaled_mask_preds[img_id][mask_indices]  # b, num_det, h, w
+        seg_masks = masks_per_img > 0.5
+        seg_result, segm_score = segm2result(seg_masks, scores_per_img)
+        segm_results.append(seg_result)
+        segm_scores.append(segm_score)
+    # bs, num_det, h, w
+    segm_results = np.stack(segm_results)
+    # bs, num_det, 1
+    segm_scores = np.stack(segm_scores)
+    return segm_results, segm_scores
+
+
+def segm2result(mask_preds, cls_scores):
+    segm_result = []
+    seg_scores = []
+    num_ins = mask_preds.shape[0]  # num_dets, h, w
+    for idx in range(num_ins):
+        segm_result.append(mask_preds[idx])
+        seg_scores.append(cls_scores[idx])
+    # here we only have one class (text)
+    segm_result = np.stack(segm_result)  # num_det, h, w
+    seg_scores = np.stack(seg_scores)  # num_det
+    return segm_result, seg_scores
+
+
+if __name__ == '__main__':
+    # dataset
+    ann_file = './dataset/annotation.json'
+    img_prefix = './dataset'
+    seg_mask_prefix = './dataset'
+    dataset = OverlapDataset(ann_file, img_prefix, seg_mask_prefix)
+    sample_num = dataset.sample_num
+    dataset = iter(dataset)
+
+    # model
+    device_id = 1  # device chip ID
+    model_path = "models/best_iou.om"  # path to the om model
+    model = prepare_model(model_path, device_id)
+
+    # inference
+    results = []
+    img_metas_list = []
+    for idx in range(sample_num):
+        resizeImg, img_meta = next(dataset)
+        print(f'sample {idx}')
+
+        # prepare image
+        resizeImg = np.expand_dims(resizeImg, 0)  # add batch dim, 1,3,h,w
+        resizeImg = np.ascontiguousarray(resizeImg)
+        imageTensor = Tensor(resizeImg)  # inputs must be wrapped as SDK Tensor objects
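+        # Note on the MindX SDK calling convention used below (descriptive
+        # comments added for clarity; consult the SDK documentation for the
+        # authoritative contract): the host-side Tensor is moved to the device
+        # with to_device(device_id), Model.infer takes a list of Tensors, and
+        # each output must be copied back with to_host() before it can be
+        # wrapped as a numpy array.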
+        imageTensor.to_device(device_id)  # IMPORTANT: move the tensor to the device first; call this on its own
+        imageTensorList = [imageTensor]  # model.infer expects a list of Tensors
+
+        # forward
+        outputs = model.infer(imageTensorList)
+
+        # preds Tensor to numpy
+        outputs_np = []
+        for i in range(len(outputs)):
+            outputs[i].to_host()
+            n = np.array(outputs[i])
+            outputs_np.append(n)
+
+        # (1, 4, h, w), (1, 4, 1)
+        pred_masks, pred_scores = outputs_np[0], outputs_np[1]
+        # (1, 4, h, w), (1, 4)
+        pred_masks, pred_scores = postprocess(pred_masks, pred_scores)
+
+        # remove padding area
+        # (1, 4, h, w), (1, 4)
+        resize_shape = img_meta['img_shape'][:2]  # h, w
+        pred_masks = pred_masks[:, :, :resize_shape[0], :resize_shape[1]]
+
+        # rescale to original size
+        ori_size = img_meta['ori_shape'][:2]  # h, w
+        pred_masks = pred_masks[0]  # remove batch dim
+        rescaled_masks = []
+        for ins_idx in range(pred_masks.shape[0]):
+            # PIL cannot build an image from a bool array, so cast to uint8 first
+            img = pred_masks[ins_idx].astype(np.uint8)
+            pil_image = Image.fromarray(img)
+            pil_image = pil_image.resize((ori_size[1], ori_size[0]))
+            resized_img = np.array(pil_image)
+            rescaled_masks.append(resized_img)
+        rescaled_masks = np.stack(rescaled_masks)
+
+        rescaled_masks = np.expand_dims(rescaled_masks, 0)
+        result = (pred_scores, rescaled_masks)
+        results.append(result)
+        img_metas_list.append(img_meta)
+    # evaluate
+    eval_res = evaluate_metric(results, img_metas_list, score_thresh=0.2)
+    text_iou = np.around(eval_res["text_iou"], decimals=3)
+    print("==============================")
+    print("Accuracy evaluation results:")
+    print(f'text_iou: {text_iou * 100}%')
+    print("==============================")
\ No newline at end of file
diff --git a/contrib/Overlap-Recovery/inference/eval_utils.py b/contrib/Overlap-Recovery/inference/eval_utils.py
new file mode 100644
index 000000000..121353313
--- /dev/null
+++ b/contrib/Overlap-Recovery/inference/eval_utils.py
@@ -0,0 +1,151 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import numpy as np
+import cv2
+
+
+def cal_mask_IoU(mask_a, mask_b, check_valid=False):
+    if check_valid:
+        assert len(np.unique(mask_a)) <= 2
+        assert len(np.unique(mask_b)) <= 2
+    a_bool = mask_a.astype(bool)
+    b_bool = mask_b.astype(bool)
+    intersection_area = (a_bool & b_bool).sum()
+    union_area = (a_bool | b_bool).sum()
+    if union_area == 0:
+        return 0
+    return intersection_area / union_area
+
+
+def cal_overlap_mask(mask_list):
+    if len(mask_list) < 2:
+        return None
+    mask_list_bool = [x.astype(bool) for x in mask_list]
+    overlap_mask = np.zeros_like(mask_list_bool[0])
+    for ii in range(len(mask_list_bool) - 1):
+        for jj in range(ii + 1, len(mask_list_bool)):
+            cur_olp = mask_list_bool[ii] & mask_list_bool[jj]
+            overlap_mask = overlap_mask | cur_olp
+    return overlap_mask
+
+
+def cal_union_mask(mask_list):
+    if len(mask_list) < 1:
+        return None
+    mask_list_bool = [x.astype(bool) for x in mask_list]
+    union_mask = np.zeros_like(mask_list_bool[0])
+    for mask_bool in mask_list_bool:
+        union_mask = union_mask | mask_bool
+    return union_mask
+
+
+def eval_func(box_scores, masks, img_meta, score_thresh=0.2, iou_thresh=0.5):
+    # prepare gt
+    gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in img_meta['seg_map_path']]
+    for mask_ in gt_masks:
+        # ground-truth masks are expected to be single-channel binary maps
+        assert len(mask_.shape) == 2, f"unexpected gt mask shape: {mask_.shape}"
+    gt_text = cal_union_mask(gt_masks)
+    gt_overlap = cal_overlap_mask(gt_masks)
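+    # Worked micro-example of the two helpers above (descriptive comment only):
+    # for A = [[1, 1], [0, 0]] and B = [[0, 1], [0, 1]], cal_union_mask gives
+    # [[1, 1], [0, 1]] (pixels covered by any instance) and cal_overlap_mask
+    # gives [[0, 1], [0, 0]] (pixels covered by at least two instances).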
+    # prepare predict of overlap and text area
+
+    # keep predictions whose score exceeds the threshold
+    box_scores = box_scores[0]  # remove batch dim
+    valid_idx = []
+    for ins_idx, score in enumerate(box_scores):
+        if score > score_thresh:
+            valid_idx.append(ins_idx)
+    pred_masks = [masks[0][_] for _ in valid_idx]
+    if len(pred_masks) == 0:
+        pred_overlap = np.zeros_like(masks[0][0])
+        pred_text = np.zeros_like(masks[0][0])
+    elif len(pred_masks) == 1:
+        pred_overlap = np.zeros_like(masks[0][0])
+        pred_text = cal_union_mask(pred_masks)
+    else:
+        pred_overlap = cal_overlap_mask(pred_masks)
+        pred_text = cal_union_mask(pred_masks)
+
+    if len(gt_masks) > 1:
+        # calculate metrics
+        intersection_text = (pred_text & gt_text).sum()
+        union_text = (pred_text | gt_text).sum()
+        intersection_overlap = (pred_overlap & gt_overlap).sum()
+        union_overlap = (pred_overlap | gt_overlap).sum()
+    else:
+        intersection_text = 0
+        union_text = 0
+        intersection_overlap = 0
+        union_overlap = 0
+
+    # prepare predict of text instance
+    # filter out invalid prediction
+    valid_idx = []
+    for ins_idx, score in enumerate(box_scores):
+        if score > score_thresh:
+            valid_idx.append(ins_idx)
+    match_matrix = np.zeros((len(valid_idx), len(gt_masks)), dtype=bool)
+    for ins_idx in range(len(valid_idx)):
+        for gt_ins_idx in range(len(gt_masks)):
+            if match_matrix[:, gt_ins_idx].sum() > 0:
+                continue
+            # calculate IoU
+            if cal_mask_IoU(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > iou_thresh:
+                match_matrix[ins_idx, gt_ins_idx] = True
+                break
+    # calculate instance-wise mIoU
+    text_ins_miou = 0
+    if match_matrix.sum() > 0:
+        for ins_idx in range(max(match_matrix.shape)):
+            if ins_idx >= match_matrix.shape[0]:
+                # miss det
+                continue
+            else:
+                if ins_idx >= match_matrix.shape[1] or match_matrix[ins_idx].sum() == 0:
+                    # wrong det
+                    continue
+                else:
+                    pred_mask = masks[0][valid_idx[ins_idx]].astype(bool)
+                    gt_idx = match_matrix[ins_idx].nonzero()[0][0]
+                    gt_mask = gt_masks[gt_idx].copy()
+                    cur_iou = cal_mask_IoU(pred_mask, gt_mask)
+                    text_ins_miou += cur_iou
+    return (intersection_text, union_text, intersection_overlap, union_overlap), \
+           text_ins_miou, max(match_matrix.shape)
+
+
+def evaluate_metric(results,
+                    img_metas,
+                    score_thresh=0.2,
+                    iou_thrs=0.5,
+                    ):
+
+    intersection_text = 0
+    union_text = 0
+    intersection_overlap = 0
+    union_overlap = 0
+    text_ins_miou_list = []
+    total_ins_num = 0
+    for idx, ((box_scores, masks), img_meta) in enumerate(zip(results, img_metas)):
+        # structure:
+        # box_scores: List[ numpy_array with shape (num_ins, 1*score) * num_classes ]
+        # masks: List[ List[ numpy_array_bool with shape (h, w) * num_ins ] * num_classes ]
+
+        overall_iou_metrics, text_ins_miou, ins_num = eval_func(box_scores, masks, img_meta, score_thresh, iou_thrs)
+        intersection_text += overall_iou_metrics[0]
+        union_text += overall_iou_metrics[1]
+        intersection_overlap += overall_iou_metrics[2]
+        union_overlap += overall_iou_metrics[3]
+        text_ins_miou_list.append(text_ins_miou)
+        total_ins_num += ins_num
+
+    metric_results = dict(
+        text_iou=intersection_text / union_text if union_text > 0 else 0.0,
+    )
+
+    return metric_results
diff --git a/contrib/Overlap-Recovery/inference/load_ann.py b/contrib/Overlap-Recovery/inference/load_ann.py
new file mode 100644
index 000000000..026c3543b
--- /dev/null
+++ b/contrib/Overlap-Recovery/inference/load_ann.py
@@ -0,0 +1,55 @@
+# -*- coding: utf-8 -*-
+
+import json
+import os.path as osp
+import imagesize
+
+
+def load_annotations(ann_file, img_prefix, seg_prefix):
+    """Load annotations for the Overlap dataset."""
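+    # Expected annotation.json layout (inferred from the fields read below;
+    # the file and label values are illustrative, and the assert further down
+    # requires exactly these three keys per record):
+    # [
+    #   {"img_name": "0.jpg",
+    #    "data_type": "real",
+    #    "texts": [{"bbox": [x, y, w, h], "mask": "0_text_0.png", "label": "..."},
+    #              ...]},
+    #   ...
+    # ]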
+    data_list = []
+    img_dir = img_prefix
+    seg_dir = seg_prefix
+    if osp.isfile(ann_file):
+        with open(ann_file, 'r', encoding='utf-8') as f:
+            info_list = json.load(f)
+        for info_ in info_list:
+            assert len(info_) == 3, f"Invalid line: {info_}"
+            img_name = info_['img_name']
+            data_info = dict(img_path=osp.join(img_dir, img_name))
+            data_info['data_type'] = info_['data_type']
+            data_info['filename'] = img_name
+            width, height = imagesize.get(data_info['img_path'])
+            data_info['width'] = width
+            data_info['height'] = height
+            seg_map_path = []
+            text_labels = []
+            bboxes = []
+            # should follow a pre-defined order, e.g., from top layer to bottom
+            for text_ins in info_['texts']:
+                x, y, w, h = text_ins['bbox']
+                bbox = [x, y, x + w, y + h]
+                bboxes.append(bbox)
+                seg_map_path.append(osp.join(seg_dir, text_ins["mask"]))
+                text_labels.append(text_ins['label'])
+            data_info['bboxes'] = bboxes
+            data_info['seg_map_path'] = seg_map_path
+            data_info['text_labels'] = text_labels
+            data_list.append(data_info)
+    else:
+        raise NotImplementedError
+    return data_list
+
+
+if __name__ == '__main__':
+    ann_file = './dataset/annotation.json'
+    img_prefix = './dataset'
+    seg_prefix = './dataset'
+    data_list = load_annotations(ann_file, img_prefix, seg_prefix)
+    print(len(data_list))
\ No newline at end of file
diff --git a/contrib/Overlap-Recovery/inference/load_img_data.py b/contrib/Overlap-Recovery/inference/load_img_data.py
new file mode 100644
index 000000000..b9e2e0eca
--- /dev/null
+++ b/contrib/Overlap-Recovery/inference/load_img_data.py
@@ -0,0 +1,49 @@
+# -*- coding: utf-8 -*-
+
+from preprocess_utils import build_processor
+
+
+# other tested scales: (736, 736), (1472, 1472)
+img_scale = (768, 768)
+img_norm_cfg = dict(
+    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
+test_pipeline = [
+    dict(type='LoadImageFromFile'),
+    dict(
+        type='MultiScaleFlipAug',
+        img_scale=img_scale,
+        flip=False,
+        transforms=[
+            dict(type='Resize', keep_ratio=True),
+            dict(type='RandomFlip'),
+            dict(type='Normalize', **img_norm_cfg),
+            dict(type='Pad', size=img_scale),
+            dict(type='HWCToCHW', keys=['img']),
+            # dict(type='ImageToTensor', keys=['img']),  # HWCToCHW + ToTensor
+            dict(type='Collect', keys=['img']),
+        ])
+]
+
+preprocessor = build_processor(test_pipeline)
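+# (descriptive note) For an H0 x W0 input and img_scale=(768, 768), the pipeline
+# above resizes with the aspect ratio kept so that the long side fits 768,
+# normalizes with the mean/std above, pads to a fixed 768 x 768 canvas, and
+# transposes HWC -> CHW. img_metas records ori_shape / img_shape / pad_shape so
+# that padding can be stripped and masks rescaled after inference.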
+
+def load_img_data(img_name, img_prefix=None):
+
+    img_info = {'filename': img_name}
+    img_data = {'img_prefix': img_prefix, 'img_info': img_info}
+
+    resized_img_data = preprocessor(img_data)
+    resizeImg = resized_img_data['img']
+    img_metas = resized_img_data['img_metas']
+    return resizeImg[0], img_metas[0]
+
+
+if __name__ == '__main__':
+    img_prefix = './'
+    img_name = 'test.jpg'
+    resizeImg, img_metas = load_img_data(img_name, img_prefix)
+    print(img_metas)
+    print(f"ori_shape: {img_metas['ori_shape']} "
+          f"resize_shape: {img_metas['img_shape']} "
+          f"padded_shape: {img_metas['pad_shape']}")
diff --git a/contrib/Overlap-Recovery/inference/models/.gitkeep b/contrib/Overlap-Recovery/inference/models/.gitkeep
new file mode 100644
index 000000000..e69de29bb
diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py
new file mode 100644
index 000000000..af5d5f066
--- /dev/null
+++ b/contrib/Overlap-Recovery/inference/ominfer.py
@@ -0,0 +1,148 @@
+# -*- coding: utf-8 -*-
+# @Author: Wenwen Yu
+# @Email: yuwenwen62@gmail.com
+# @Created Time: 11/29/22 11:20 AM
+
+import warnings
+warnings.filterwarnings('ignore')
+
+import os
+import shutil
+import numpy as np
+from PIL import Image
+from mindx.sdk import base
+from mindx.sdk.base import Tensor, Model
+
+from load_img_data import load_img_data
+
+device_id = 1  # device chip ID
+model_path = "models/best_iou.om"  # path to the om model
+img_prefix = './'
+img_name = 'test.jpg'
+save_path = './'
+
+base.mx_init()  # global resource initialization
+model = Model(model_path, device_id)  # create the model object
+
+
+def om_infer_one(img_name, img_prefix=None, vis_dir=None, score_thr=0.4):
+    resizeImg, img_meta = load_img_data(img_name, img_prefix)  # hwc -> chw
+    ori_filename = img_meta['ori_filename']
+    abs_filename = img_meta['filename']
+    print(f"ori_filename: {img_meta['ori_filename']}")
+    print(f"filename: {img_meta['filename']}")
+    # h, w, c
+    print(f"ori_shape: {img_meta['ori_shape']} "
+          f"resize_shape: {img_meta['img_shape']} "
+          f"padded_shape: {img_meta['pad_shape']}")
+    resizeImg = np.expand_dims(resizeImg, 0)  # add batch dim, 1,3,h,w
+    resizeImg = np.ascontiguousarray(resizeImg)
+    imageTensor = Tensor(resizeImg)  # inputs must be wrapped as SDK Tensor objects
+    imageTensor.to_device(device_id)  # IMPORTANT: move the tensor to the device first; call this on its own
+    imageTensorList = [imageTensor]  # model.infer expects a list of Tensors
+    outputs = model.infer(imageTensorList)
+
+    inputs = []
+    for i in range(len(outputs)):
+        outputs[i].to_host()
+        n = np.array(outputs[i])
+        inputs.append(n)
+
+    # (1, 4, h, w), (1, 4) / (1, 4, 1)
+    pred_masks, pred_scores = inputs[0], inputs[1]
+    pred_masks, pred_scores = postprocess(pred_masks, pred_scores)
+    print(f"pred_masks_shape: {pred_masks.shape} pred_score_shape: {pred_scores.shape}")
+
+    print(f"original pred unique value: {np.unique(pred_masks)}")
+
+    # remove padding area
+    # (1, 4, 1472, 1472), (1, 4)
+    resize_shape = img_meta['img_shape'][:2]  # h, w
+    pred_masks = pred_masks[:, :, :resize_shape[0], :resize_shape[1]]
+
+    ori_size = img_meta['ori_shape'][:2]  # h, w
+
+    # remove batch dim
+    # (4, h, w), (4)
+    pred_masks, pred_scores = pred_masks[0], pred_scores[0]
+
+    img_id = os.path.basename(ori_filename).split('.')[0]
+    if vis_dir is not None:
+        save_dir = os.path.join(vis_dir, img_id)
+        if not os.path.exists(save_dir):
+            os.makedirs(save_dir)
+        shutil.copyfile(abs_filename, os.path.join(save_dir, f"input.{os.path.basename(ori_filename).split('.')[1]}"))
+    for instance_idx in range(pred_masks.shape[0]):
+        # (h, w)
+        text_instance = pred_masks[instance_idx]
+        pred_score = pred_scores[instance_idx]
+
+        if pred_score < score_thr:
+            continue
+
+        text_instance = text_instance.astype(np.uint8)
+        area = np.sum(text_instance)
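+        # (descriptive note) text_instance is a binarized {0, 1} map at model
+        # resolution and 'area' counts its foreground pixels. Below it is
+        # scaled to {0, 255} so Pillow can save it, then resized back to the
+        # original image size (PIL's resize takes (w, h) order).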
print(f"pred_text_instance: {instance_idx+1} pred_score: {pred_score} unique value: {np.unique(text_instance)} area: {area}") + + pred_mask = Image.fromarray(text_instance * 255) + # import pdb;pdb.set_trace() + pred_mask = pred_mask.resize((ori_size[1], ori_size[0]))# w,h + + if vis_dir is not None: + save_file = os.path.join(save_dir, f'{instance_idx}.png') + pred_mask.save(save_file, bit=1) + print(f'pred text mask saving to {save_file}') + + + +def postprocess(scaled_mask_preds, cls_score): + num_imgs = 1 + segm_results = [] + segm_scores = [] + for img_id in range(num_imgs): + cls_score_per_img = cls_score[img_id] # num_det, 1 + topk_indices =np.argsort(cls_score_per_img.flatten())[::-1][:4] + scores_per_img = cls_score_per_img.flatten()[topk_indices] + mask_indices = topk_indices + masks_per_img = scaled_mask_preds[img_id][mask_indices] # b, num_det, h,w + seg_masks = masks_per_img > 0.5 + seg_result, segm_score = segm2result(seg_masks, scores_per_img) + segm_results.append(seg_result) + segm_scores.append(segm_score) + # bs, num_det, h, w + segm_results = np.stack(segm_results) + # bs, num_det, 1 + segm_scores = np.stack(segm_scores) + return segm_results, segm_scores + +def segm2result(mask_preds, cls_scores): + segm_result = [] + seg_scores = [] + num_ins = mask_preds.shape[0] # num_dets, h, w + for idx in range(num_ins): + segm_result.append(mask_preds[idx]) + seg_scores.append(cls_scores[idx]) + # here we only have one classes (text) + segm_result = np.stack(segm_result) # num_det, h, w + seg_scores = np.stack(seg_scores) # num_det + return segm_result, seg_scores + + +if __name__ == '__main__': + om_infer_one(img_name, img_prefix, vis_dir=save_path) diff --git a/contrib/Overlap-Recovery/inference/preprocess_utils.py b/contrib/Overlap-Recovery/inference/preprocess_utils.py new file mode 100644 index 000000000..6c3240ab5 --- /dev/null +++ b/contrib/Overlap-Recovery/inference/preprocess_utils.py @@ -0,0 +1,1022 @@ +# -*- coding: utf-8 -*- +# @Author: Wenwen Yu +# @Email: yduwenwen62@gmail.com +# @Created Time: 11/29/22 12:12 PM + +import collections +import warnings +import os.path as osp + +import numpy as np +import mmcv +from mmcv.utils import Registry, build_from_cfg + +# from mindx.sdk.base import Tensor + +PIPELINES = Registry('pipeline') + +@PIPELINES.register_module() +class LoadImageFromFile: + """Load an image from file. + + Required keys are "img_prefix" and "img_info" (a dict that must contain the + key "filename"). Added or updated keys are "filename", "img", "img_shape", + "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), + "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). + + Args: + to_float32 (bool): Whether to convert the loaded image to a float32 + numpy array. If set to False, the loaded image is an uint8 array. + Defaults to False. + color_type (str): The flag argument for :func:`mmcv.imfrombytes`. + Defaults to 'color'. + file_client_args (dict): Arguments to instantiate a FileClient. + See :class:`mmcv.fileio.FileClient` for details. + Defaults to ``dict(backend='disk')``. + """ + + def __init__(self, + to_float32=False, + color_type='color', + channel_order='bgr', + file_client_args=dict(backend='disk')): + self.to_float32 = to_float32 + self.color_type = color_type + self.channel_order = channel_order + self.file_client_args = file_client_args.copy() + self.file_client = None + + def __call__(self, results): + """Call functions to load image and get image meta information. 
+ + Args: + results (dict): Result dict from :obj:`mmdet.CustomDataset`. + + Returns: + dict: The dict contains loaded image and meta information. + """ + + if self.file_client is None: + self.file_client = mmcv.FileClient(**self.file_client_args) + + if results['img_prefix'] is not None: + filename = osp.join(results['img_prefix'], + results['img_info']['filename']) + else: + filename = results['img_info']['filename'] + + img_bytes = self.file_client.get(filename) + img = mmcv.imfrombytes( + img_bytes, flag=self.color_type, channel_order=self.channel_order) + if self.to_float32: + img = img.astype(np.float32) + + results['filename'] = filename + results['ori_filename'] = results['img_info']['filename'] + results['img'] = img + results['img_shape'] = img.shape + results['ori_shape'] = img.shape + results['img_fields'] = ['img'] + return results + + def __repr__(self): + repr_str = (f'{self.__class__.__name__}(' + f'to_float32={self.to_float32}, ' + f"color_type='{self.color_type}', " + f"channel_order='{self.channel_order}', " + f'file_client_args={self.file_client_args})') + return repr_str + +@PIPELINES.register_module() +class Compose: + """Compose multiple transforms sequentially. + + Args: + transforms (Sequence[dict | callable]): Sequence of transform object or + config dict to be composed. + """ + + def __init__(self, transforms): + assert isinstance(transforms, collections.abc.Sequence) + self.transforms = [] + for transform in transforms: + if isinstance(transform, dict): + transform = build_from_cfg(transform, PIPELINES) + self.transforms.append(transform) + elif callable(transform): + self.transforms.append(transform) + else: + raise TypeError('transform must be callable or a dict') + + def __call__(self, data): + """Call function to apply transforms sequentially. + + Args: + data (dict): A result dict contains the data to transform. + + Returns: + dict: Transformed data. + """ + + for t in self.transforms: + data = t(data) + if data is None: + return None + return data + + def __repr__(self): + format_string = self.__class__.__name__ + '(' + for t in self.transforms: + str_ = t.__repr__() + if 'Compose(' in str_: + str_ = str_.replace('\n', '\n ') + format_string += '\n' + format_string += f' {str_}' + format_string += '\n)' + return format_string + +@PIPELINES.register_module() +class MultiScaleFlipAug: + """Test-time augmentation with multiple scales and flipping. + + An example configuration is as followed: + + .. code-block:: + + img_scale=[(1333, 400), (1333, 800)], + flip=True, + transforms=[ + dict(type='Resize', keep_ratio=True), + dict(type='RandomFlip'), + dict(type='Normalize', **img_norm_cfg), + dict(type='Pad', size_divisor=32), + dict(type='ImageToTensor', keys=['img']), + dict(type='Collect', keys=['img']), + ] + + After MultiScaleFLipAug with above configuration, the results are wrapped + into lists of the same length as followed: + + .. code-block:: + + dict( + img=[...], + img_shape=[...], + scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] + flip=[False, True, False, True] + ... + ) + + Args: + transforms (list[dict]): Transforms to apply in each augmentation. + img_scale (tuple | list[tuple] | None): Images scales for resizing. + scale_factor (float | list[float] | None): Scale factors for resizing. + flip (bool): Whether apply flip augmentation. Default: False. + flip_direction (str | list[str]): Flip augmentation directions, + options are "horizontal", "vertical" and "diagonal". 
If + flip_direction is a list, multiple flip augmentations will be + applied. It has no effect when flip == False. Default: + "horizontal". + """ + + def __init__(self, + transforms, + img_scale=None, + scale_factor=None, + flip=False, + flip_direction='horizontal'): + self.transforms = Compose(transforms) + assert (img_scale is None) ^ (scale_factor is None), ( + 'Must have but only one variable can be set') + if img_scale is not None: + self.img_scale = img_scale if isinstance(img_scale, + list) else [img_scale] + self.scale_key = 'scale' + assert mmcv.is_list_of(self.img_scale, tuple) + else: + self.img_scale = scale_factor if isinstance( + scale_factor, list) else [scale_factor] + self.scale_key = 'scale_factor' + + self.flip = flip + self.flip_direction = flip_direction if isinstance( + flip_direction, list) else [flip_direction] + assert mmcv.is_list_of(self.flip_direction, str) + if not self.flip and self.flip_direction != ['horizontal']: + warnings.warn( + 'flip_direction has no effect when flip is set to False') + if (self.flip + and not any([t['type'] == 'RandomFlip' for t in transforms])): + warnings.warn( + 'flip has no effect when RandomFlip is not in transforms') + + def __call__(self, results): + """Call function to apply test time augment transforms on results. + + Args: + results (dict): Result dict contains the data to transform. + + Returns: + dict[str: list]: The augmented data, where each value is wrapped + into a list. + """ + + aug_data = [] + flip_args = [(False, None)] + if self.flip: + flip_args += [(True, direction) + for direction in self.flip_direction] + for scale in self.img_scale: + for flip, direction in flip_args: + _results = results.copy() + _results[self.scale_key] = scale + _results['flip'] = flip + _results['flip_direction'] = direction + data = self.transforms(_results) + aug_data.append(data) + # list of dict to dict of list + aug_data_dict = {key: [] for key in aug_data[0]} + for data in aug_data: + for key, val in data.items(): + aug_data_dict[key].append(val) + return aug_data_dict + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(transforms={self.transforms}, ' + repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' + repr_str += f'flip_direction={self.flip_direction})' + return repr_str + +@PIPELINES.register_module() +class Resize: + """Resize images & bbox & mask. + + This transform resizes the input image to some scale. Bboxes and masks are + then resized with the same scale factor. If the input dict contains the key + "scale", then the scale in the input dict is used, otherwise the specified + scale in the init method is used. If the input dict contains the key + "scale_factor" (if MultiScaleFlipAug does not give img_scale but + scale_factor), the actual scale will be computed by image shape and + scale_factor. + + `img_scale` can either be a tuple (single-scale) or a list of tuple + (multi-scale). There are 3 multiscale modes: + + - ``ratio_range is not None``: randomly sample a ratio from the ratio \ + range and multiply it with the image scale. + - ``ratio_range is None`` and ``multiscale_mode == "range"``: randomly \ + sample a scale from the multiscale range. + - ``ratio_range is None`` and ``multiscale_mode == "value"``: randomly \ + sample a scale from multiple scales. + + Args: + img_scale (tuple or list[tuple]): Images scales for resizing. + multiscale_mode (str): Either "range" or "value". 
+ ratio_range (tuple[float]): (min_ratio, max_ratio) + keep_ratio (bool): Whether to keep the aspect ratio when resizing the + image. + bbox_clip_border (bool, optional): Whether to clip the objects outside + the border of the image. In some dataset like MOT17, the gt bboxes + are allowed to cross the border of images. Therefore, we don't + need to clip the gt bboxes in these cases. Defaults to True. + backend (str): Image resize backend, choices are 'cv2' and 'pillow'. + These two backends generates slightly different results. Defaults + to 'cv2'. + interpolation (str): Interpolation method, accepted values are + "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' + backend, "nearest", "bilinear" for 'pillow' backend. + override (bool, optional): Whether to override `scale` and + `scale_factor` so as to call resize twice. Default False. If True, + after the first resizing, the existed `scale` and `scale_factor` + will be ignored so the second resizing can be allowed. + This option is a work-around for multiple times of resize in DETR. + Defaults to False. + """ + + def __init__(self, + img_scale=None, + multiscale_mode='range', + ratio_range=None, + keep_ratio=True, + bbox_clip_border=True, + backend='cv2', + interpolation='bilinear', + override=False): + if img_scale is None: + self.img_scale = None + else: + if isinstance(img_scale, list): + self.img_scale = img_scale + else: + self.img_scale = [img_scale] + assert mmcv.is_list_of(self.img_scale, tuple) + + if ratio_range is not None: + # mode 1: given a scale and a range of image ratio + assert len(self.img_scale) == 1 + else: + # mode 2: given multiple scales or a range of scales + assert multiscale_mode in ['value', 'range'] + + self.backend = backend + self.multiscale_mode = multiscale_mode + self.ratio_range = ratio_range + self.keep_ratio = keep_ratio + # TODO: refactor the override option in Resize + self.interpolation = interpolation + self.override = override + self.bbox_clip_border = bbox_clip_border + + @staticmethod + def random_select(img_scales): + """Randomly select an img_scale from given candidates. + + Args: + img_scales (list[tuple]): Images scales for selection. + + Returns: + (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, \ + where ``img_scale`` is the selected image scale and \ + ``scale_idx`` is the selected index in the given candidates. + """ + + assert mmcv.is_list_of(img_scales, tuple) + scale_idx = np.random.randint(len(img_scales)) + img_scale = img_scales[scale_idx] + return img_scale, scale_idx + + @staticmethod + def random_sample(img_scales): + """Randomly sample an img_scale when ``multiscale_mode=='range'``. + + Args: + img_scales (list[tuple]): Images scale range for sampling. + There must be two tuples in img_scales, which specify the lower + and upper bound of image scales. + + Returns: + (tuple, None): Returns a tuple ``(img_scale, None)``, where \ + ``img_scale`` is sampled scale and None is just a placeholder \ + to be consistent with :func:`random_select`. 
+ """ + + assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 + img_scale_long = [max(s) for s in img_scales] + img_scale_short = [min(s) for s in img_scales] + long_edge = np.random.randint( + min(img_scale_long), + max(img_scale_long) + 1) + short_edge = np.random.randint( + min(img_scale_short), + max(img_scale_short) + 1) + img_scale = (long_edge, short_edge) + return img_scale, None + + @staticmethod + def random_sample_ratio(img_scale, ratio_range): + """Randomly sample an img_scale when ``ratio_range`` is specified. + + A ratio will be randomly sampled from the range specified by + ``ratio_range``. Then it would be multiplied with ``img_scale`` to + generate sampled scale. + + Args: + img_scale (tuple): Images scale base to multiply with ratio. + ratio_range (tuple[float]): The minimum and maximum ratio to scale + the ``img_scale``. + + Returns: + (tuple, None): Returns a tuple ``(scale, None)``, where \ + ``scale`` is sampled ratio multiplied with ``img_scale`` and \ + None is just a placeholder to be consistent with \ + :func:`random_select`. + """ + + assert isinstance(img_scale, tuple) and len(img_scale) == 2 + min_ratio, max_ratio = ratio_range + assert min_ratio <= max_ratio + ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio + scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) + return scale, None + + def _random_scale(self, results): + """Randomly sample an img_scale according to ``ratio_range`` and + ``multiscale_mode``. + + If ``ratio_range`` is specified, a ratio will be sampled and be + multiplied with ``img_scale``. + If multiple scales are specified by ``img_scale``, a scale will be + sampled according to ``multiscale_mode``. + Otherwise, single scale will be used. + + Args: + results (dict): Result dict from :obj:`dataset`. + + Returns: + dict: Two new keys 'scale` and 'scale_idx` are added into \ + ``results``, which would be used by subsequent pipelines. 
+ """ + + if self.ratio_range is not None: + scale, scale_idx = self.random_sample_ratio( + self.img_scale[0], self.ratio_range) + elif len(self.img_scale) == 1: + scale, scale_idx = self.img_scale[0], 0 + elif self.multiscale_mode == 'range': + scale, scale_idx = self.random_sample(self.img_scale) + elif self.multiscale_mode == 'value': + scale, scale_idx = self.random_select(self.img_scale) + else: + raise NotImplementedError + + results['scale'] = scale + results['scale_idx'] = scale_idx + + def _resize_img(self, results): + """Resize images with ``results['scale']``.""" + for key in results.get('img_fields', ['img']): + if self.keep_ratio: + img, scale_factor = mmcv.imrescale( + results[key], + results['scale'], + return_scale=True, + interpolation=self.interpolation, + backend=self.backend) + # the w_scale and h_scale has minor difference + # a real fix should be done in the mmcv.imrescale in the future + new_h, new_w = img.shape[:2] + h, w = results[key].shape[:2] + w_scale = new_w / w + h_scale = new_h / h + else: + img, w_scale, h_scale = mmcv.imresize( + results[key], + results['scale'], + return_scale=True, + interpolation=self.interpolation, + backend=self.backend) + results[key] = img + + scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], + dtype=np.float32) + results['img_shape'] = img.shape + # in case that there is no padding + results['pad_shape'] = img.shape + results['scale_factor'] = scale_factor + results['keep_ratio'] = self.keep_ratio + + def _resize_bboxes(self, results): + """Resize bounding boxes with ``results['scale_factor']``.""" + for key in results.get('bbox_fields', []): + bboxes = results[key] * results['scale_factor'] + if self.bbox_clip_border: + img_shape = results['img_shape'] + bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) + bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) + results[key] = bboxes + + def _resize_masks(self, results): + """Resize masks with ``results['scale']``""" + for key in results.get('mask_fields', []): + if results[key] is None: + continue + if self.keep_ratio: + results[key] = results[key].rescale(results['scale']) + else: + results[key] = results[key].resize(results['img_shape'][:2]) + + def _resize_seg(self, results): + """Resize semantic segmentation map with ``results['scale']``.""" + for key in results.get('seg_fields', []): + if self.keep_ratio: + gt_seg = mmcv.imrescale( + results[key], + results['scale'], + interpolation='nearest', + backend=self.backend) + else: + gt_seg = mmcv.imresize( + results[key], + results['scale'], + interpolation='nearest', + backend=self.backend) + results[key] = gt_seg + + def __call__(self, results): + """Call function to resize images, bounding boxes, masks, semantic + segmentation map. + + Args: + results (dict): Result dict from loading pipeline. + + Returns: + dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ + 'keep_ratio' keys are added into result dict. 
+ """ + + if 'scale' not in results: + if 'scale_factor' in results: + img_shape = results['img'].shape[:2] + scale_factor = results['scale_factor'] + assert isinstance(scale_factor, float) + results['scale'] = tuple( + [int(x * scale_factor) for x in img_shape][::-1]) + else: + self._random_scale(results) + else: + if not self.override: + assert 'scale_factor' not in results, ( + 'scale and scale_factor cannot be both set.') + else: + results.pop('scale') + if 'scale_factor' in results: + results.pop('scale_factor') + self._random_scale(results) + + self._resize_img(results) + self._resize_bboxes(results) + self._resize_masks(results) + self._resize_seg(results) + return results + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(img_scale={self.img_scale}, ' + repr_str += f'multiscale_mode={self.multiscale_mode}, ' + repr_str += f'ratio_range={self.ratio_range}, ' + repr_str += f'keep_ratio={self.keep_ratio}, ' + repr_str += f'bbox_clip_border={self.bbox_clip_border})' + return repr_str + +@PIPELINES.register_module() +class RandomFlip: + """Flip the image & bbox & mask. + + If the input dict contains the key "flip", then the flag will be used, + otherwise it will be randomly decided by a ratio specified in the init + method. + + When random flip is enabled, ``flip_ratio``/``direction`` can either be a + float/string or tuple of float/string. There are 3 flip modes: + + - ``flip_ratio`` is float, ``direction`` is string: the image will be + ``direction``ly flipped with probability of ``flip_ratio`` . + E.g., ``flip_ratio=0.5``, ``direction='horizontal'``, + then image will be horizontally flipped with probability of 0.5. + - ``flip_ratio`` is float, ``direction`` is list of string: the image will + be ``direction[i]``ly flipped with probability of + ``flip_ratio/len(direction)``. + E.g., ``flip_ratio=0.5``, ``direction=['horizontal', 'vertical']``, + then image will be horizontally flipped with probability of 0.25, + vertically with probability of 0.25. + - ``flip_ratio`` is list of float, ``direction`` is list of string: + given ``len(flip_ratio) == len(direction)``, the image will + be ``direction[i]``ly flipped with probability of ``flip_ratio[i]``. + E.g., ``flip_ratio=[0.3, 0.5]``, ``direction=['horizontal', + 'vertical']``, then image will be horizontally flipped with probability + of 0.3, vertically with probability of 0.5. + + Args: + flip_ratio (float | list[float], optional): The flipping probability. + Default: None. + direction(str | list[str], optional): The flipping direction. Options + are 'horizontal', 'vertical', 'diagonal'. Default: 'horizontal'. + If input is a list, the length must equal ``flip_ratio``. Each + element in ``flip_ratio`` indicates the flip probability of + corresponding direction. 
+ """ + + def __init__(self, flip_ratio=None, direction='horizontal'): + if isinstance(flip_ratio, list): + assert mmcv.is_list_of(flip_ratio, float) + assert 0 <= sum(flip_ratio) <= 1 + elif isinstance(flip_ratio, float): + assert 0 <= flip_ratio <= 1 + elif flip_ratio is None: + pass + else: + raise ValueError('flip_ratios must be None, float, ' + 'or list of float') + self.flip_ratio = flip_ratio + + valid_directions = ['horizontal', 'vertical', 'diagonal'] + if isinstance(direction, str): + assert direction in valid_directions + elif isinstance(direction, list): + assert mmcv.is_list_of(direction, str) + assert set(direction).issubset(set(valid_directions)) + else: + raise ValueError('direction must be either str or list of str') + self.direction = direction + + if isinstance(flip_ratio, list): + assert len(self.flip_ratio) == len(self.direction) + + def bbox_flip(self, bboxes, img_shape, direction): + """Flip bboxes horizontally. + + Args: + bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) + img_shape (tuple[int]): Image shape (height, width) + direction (str): Flip direction. Options are 'horizontal', + 'vertical'. + + Returns: + numpy.ndarray: Flipped bounding boxes. + """ + + assert bboxes.shape[-1] % 4 == 0 + flipped = bboxes.copy() + if direction == 'horizontal': + w = img_shape[1] + flipped[..., 0::4] = w - bboxes[..., 2::4] + flipped[..., 2::4] = w - bboxes[..., 0::4] + elif direction == 'vertical': + h = img_shape[0] + flipped[..., 1::4] = h - bboxes[..., 3::4] + flipped[..., 3::4] = h - bboxes[..., 1::4] + elif direction == 'diagonal': + w = img_shape[1] + h = img_shape[0] + flipped[..., 0::4] = w - bboxes[..., 2::4] + flipped[..., 1::4] = h - bboxes[..., 3::4] + flipped[..., 2::4] = w - bboxes[..., 0::4] + flipped[..., 3::4] = h - bboxes[..., 1::4] + else: + raise ValueError(f"Invalid flipping direction '{direction}'") + return flipped + + def __call__(self, results): + """Call function to flip bounding boxes, masks, semantic segmentation + maps. + + Args: + results (dict): Result dict from loading pipeline. + + Returns: + dict: Flipped results, 'flip', 'flip_direction' keys are added \ + into result dict. 
+ """ + + if 'flip' not in results: + if isinstance(self.direction, list): + # None means non-flip + direction_list = self.direction + [None] + else: + # None means non-flip + direction_list = [self.direction, None] + + if isinstance(self.flip_ratio, list): + non_flip_ratio = 1 - sum(self.flip_ratio) + flip_ratio_list = self.flip_ratio + [non_flip_ratio] + else: + non_flip_ratio = 1 - self.flip_ratio + # exclude non-flip + single_ratio = self.flip_ratio / (len(direction_list) - 1) + flip_ratio_list = [single_ratio] * (len(direction_list) - + 1) + [non_flip_ratio] + + cur_dir = np.random.choice(direction_list, p=flip_ratio_list) + + results['flip'] = cur_dir is not None + if 'flip_direction' not in results: + results['flip_direction'] = cur_dir + if results['flip']: + # flip image + for key in results.get('img_fields', ['img']): + results[key] = mmcv.imflip( + results[key], direction=results['flip_direction']) + # flip bboxes + for key in results.get('bbox_fields', []): + results[key] = self.bbox_flip(results[key], + results['img_shape'], + results['flip_direction']) + # flip masks + for key in results.get('mask_fields', []): + results[key] = results[key].flip(results['flip_direction']) + + # flip segs + for key in results.get('seg_fields', []): + results[key] = mmcv.imflip( + results[key], direction=results['flip_direction']) + return results + + def __repr__(self): + return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' + + +@PIPELINES.register_module() +class Pad: + """Pad the image & masks & segmentation map. + + There are two padding modes: (1) pad to a fixed size and (2) pad to the + minimum size that is divisible by some number. + Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", + + Args: + size (tuple, optional): Fixed padding size. + size_divisor (int, optional): The divisor of padded size. + pad_to_square (bool): Whether to pad the image into a square. + Currently only used for YOLOX. Default: False. + pad_val (dict, optional): A dict for padding value, the default + value is `dict(img=0, masks=0, seg=255)`. 
+ """ + + def __init__(self, + size=None, + size_divisor=None, + pad_to_square=False, + pad_val=dict(img=0, masks=0, seg=255)): + self.size = size + self.size_divisor = size_divisor + if isinstance(pad_val, float) or isinstance(pad_val, int): + warnings.warn( + 'pad_val of float type is deprecated now, ' + f'please use pad_val=dict(img={pad_val}, ' + f'masks={pad_val}, seg=255) instead.', DeprecationWarning) + pad_val = dict(img=pad_val, masks=pad_val, seg=255) + assert isinstance(pad_val, dict) + self.pad_val = pad_val + self.pad_to_square = pad_to_square + + if pad_to_square: + assert size is None and size_divisor is None, \ + 'The size and size_divisor must be None ' \ + 'when pad2square is True' + else: + assert size is not None or size_divisor is not None, \ + 'only one of size and size_divisor should be valid' + assert size is None or size_divisor is None + + def _pad_img(self, results): + """Pad images according to ``self.size``.""" + pad_val = self.pad_val.get('img', 0) + for key in results.get('img_fields', ['img']): + if self.pad_to_square: + max_size = max(results[key].shape[:2]) + self.size = (max_size, max_size) + if self.size is not None: + padded_img = mmcv.impad( + results[key], shape=self.size, pad_val=pad_val) + elif self.size_divisor is not None: + padded_img = mmcv.impad_to_multiple( + results[key], self.size_divisor, pad_val=pad_val) + results[key] = padded_img + results['pad_shape'] = padded_img.shape + results['pad_fixed_size'] = self.size + results['pad_size_divisor'] = self.size_divisor + + def _pad_masks(self, results): + """Pad masks according to ``results['pad_shape']``.""" + pad_shape = results['pad_shape'][:2] + pad_val = self.pad_val.get('masks', 0) + for key in results.get('mask_fields', []): + results[key] = results[key].pad(pad_shape, pad_val=pad_val) + + def _pad_seg(self, results): + """Pad semantic segmentation map according to + ``results['pad_shape']``.""" + pad_val = self.pad_val.get('seg', 255) + for key in results.get('seg_fields', []): + results[key] = mmcv.impad( + results[key], shape=results['pad_shape'][:2], pad_val=pad_val) + + def __call__(self, results): + """Call function to pad images, masks, semantic segmentation maps. + + Args: + results (dict): Result dict from loading pipeline. + + Returns: + dict: Updated result dict. + """ + self._pad_img(results) + self._pad_masks(results) + self._pad_seg(results) + return results + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(size={self.size}, ' + repr_str += f'size_divisor={self.size_divisor}, ' + repr_str += f'pad_to_square={self.pad_to_square}, ' + repr_str += f'pad_val={self.pad_val})' + return repr_str + + +@PIPELINES.register_module() +class Normalize: + """Normalize the image. + + Added key is "img_norm_cfg". + + Args: + mean (sequence): Mean values of 3 channels. + std (sequence): Std values of 3 channels. + to_rgb (bool): Whether to convert the image from BGR to RGB, + default is true. + """ + + def __init__(self, mean, std, to_rgb=True): + self.mean = np.array(mean, dtype=np.float32) + self.std = np.array(std, dtype=np.float32) + self.to_rgb = to_rgb + + def __call__(self, results): + """Call function to normalize images. + + Args: + results (dict): Result dict from loading pipeline. + + Returns: + dict: Normalized results, 'img_norm_cfg' key is added into + result dict. 
+ """ + for key in results.get('img_fields', ['img']): + results[key] = mmcv.imnormalize(results[key], self.mean, self.std, + self.to_rgb) + results['img_norm_cfg'] = dict( + mean=self.mean, std=self.std, to_rgb=self.to_rgb) + return results + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})' + return repr_str + +@PIPELINES.register_module() +class ImageToTensor: + """Convert image to :obj:`torch.Tensor` by given keys. + + The dimension order of input image is (H, W, C). The pipeline will convert + it to (C, H, W). If only 2 dimension (H, W) is given, the output would be + (1, H, W). + + Args: + keys (Sequence[str]): Key of images to be converted to Tensor. + """ + + def __init__(self, keys): + self.keys = keys + + def __call__(self, results): + """Call function to convert image in results to :obj:`torch.Tensor` and + transpose the channel order. + + Args: + results (dict): Result dict contains the image data to convert. + + Returns: + dict: The result dict contains the image converted + to :obj:`torch.Tensor` and transposed to (C, H, W) order. + """ + for key in self.keys: + img = results[key] + if len(img.shape) < 3: + img = np.expand_dims(img, -1) + img = img.transpose(2, 0, 1) # HWC-> CHW + img = np.ascontiguousarray(img) + # img = (to_tensor(img)).contiguous() + img = to_tensor(img) + results[key] = img + return results + + def __repr__(self): + return self.__class__.__name__ + f'(keys={self.keys})' + + +@PIPELINES.register_module() +class HWCToCHW: + """Convert image to :obj:`torch.Tensor` by given keys. + + The dimension order of input image is (H, W, C). The pipeline will convert + it to (C, H, W). If only 2 dimension (H, W) is given, the output would be + (1, H, W). + + Args: + keys (Sequence[str]): Key of images to be converted to Tensor. + """ + + def __init__(self, keys): + self.keys = keys + + def __call__(self, results): + """Call function to convert image in results to :obj:`torch.Tensor` and + transpose the channel order. + + Args: + results (dict): Result dict contains the image data to convert. + + Returns: + dict: The result dict contains the image converted + to :obj:`torch.Tensor` and transposed to (C, H, W) order. + """ + for key in self.keys: + img = results[key] + if len(img.shape) < 3: + img = np.expand_dims(img, -1) + img = img.transpose(2, 0, 1) # HWC-> CHW + img = np.ascontiguousarray(img) + # img = (to_tensor(img)).contiguous() + # img = to_tensor(img) + results[key] = img + return results + + def __repr__(self): + return self.__class__.__name__ + f'(keys={self.keys})' + + +def to_tensor(data): + """Convert objects of various python types to :obj:`torch.Tensor`. + + Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, + :class:`Sequence`, :class:`int` and :class:`float`. + + Args: + data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to + be converted. + """ + # mindspore Tensor + # return Tensor(data) + raise NotImplementedError + +@PIPELINES.register_module() +class Collect: + """Collect data from the loader relevant to the specific task. + + This is usually the last stage of the data loader pipeline. Typically keys + is set to some subset of "img", "proposals", "gt_bboxes", + "gt_bboxes_ignore", "gt_labels", and/or "gt_masks". + + The "img_meta" item is always populated. The contents of the "img_meta" + dictionary depends on "meta_keys". By default this includes: + + - "img_shape": shape of the image input to the network as a tuple \ + (h, w, c). 
+            bottom/right if the batch tensor is larger than this shape.
+
+        - "scale_factor": a float indicating the preprocessing scale
+
+        - "flip": a boolean indicating if image flip transform was used
+
+        - "filename": path to the image file
+
+        - "ori_shape": original shape of the image as a tuple (h, w, c)
+
+        - "pad_shape": image shape after padding
+
+        - "img_norm_cfg": a dict of normalization information:
+
+            - mean - per channel mean subtraction
+            - std - per channel std divisor
+            - to_rgb - bool indicating if bgr was converted to rgb
+
+    Args:
+        keys (Sequence[str]): Keys of results to be collected in ``data``.
+        meta_keys (Sequence[str], optional): Meta keys to be converted to
+            ``mmcv.DataContainer`` and collected in ``data[img_metas]``.
+            Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape',
+            'pad_shape', 'scale_factor', 'flip', 'flip_direction',
+            'img_norm_cfg')``
+    """
+
+    def __init__(self,
+                 keys,
+                 meta_keys=('filename', 'ori_filename', 'ori_shape',
+                            'img_shape', 'pad_shape', 'scale_factor', 'flip',
+                            'flip_direction', 'img_norm_cfg')):
+        self.keys = keys
+        self.meta_keys = meta_keys
+
+    def __call__(self, results):
+        """Call function to collect keys in results.
+
+        The keys in ``meta_keys`` are gathered into a plain ``img_metas``
+        dict; the :obj:`mmcv.DataContainer` wrapping is bypassed in this port.
+
+        Args:
+            results (dict): Result dict contains the data to collect.
+
+        Returns:
+            dict: The result dict contains the following keys
+
+                - keys in ``self.keys``
+                - ``img_metas``
+        """
+
+        data = {}
+        img_meta = {}
+        for key in self.meta_keys:
+            img_meta[key] = results[key]
+        # data['img_metas'] = DC(img_meta, cpu_only=True)
+        data['img_metas'] = img_meta
+        for key in self.keys:
+            data[key] = results[key]
+        return data
+
+    def __repr__(self):
+        return self.__class__.__name__ + \
+            f'(keys={self.keys}, meta_keys={self.meta_keys})'
+
+
+def build_processor(test_pipelines):
+    return Compose(test_pipelines)
+    # return build_from_cfg(test_pipelines, PIPELINES)
diff --git a/contrib/Overlap-Recovery/inference/test.jpg b/contrib/Overlap-Recovery/inference/test.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..01251aade0065872fcd8d7c374301db18b176846
GIT binary patch
literal 265282
[base85-encoded binary JPEG data omitted]
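
A minimal sketch of the probability bookkeeping that `RandomFlip.__call__` performs in the hunk above; the direction list and `flip_ratio` are hypothetical values, and `None` stands for "do not flip":

```python
import numpy as np

# Hypothetical settings: two flip directions sharing a total flip_ratio.
direction, flip_ratio = ['horizontal', 'vertical'], 0.6

direction_list = direction + [None]  # None means non-flip
single_ratio = flip_ratio / (len(direction_list) - 1)
flip_ratio_list = [single_ratio] * (len(direction_list) - 1) + [1 - flip_ratio]
# flip_ratio_list == [0.3, 0.3, 0.4]: p(horizontal), p(vertical), p(no flip)
cur_dir = np.random.choice(direction_list, p=flip_ratio_list)
```

The total flip probability is split evenly across the requested directions, and the leftover mass goes to the no-flip outcome.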
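
And a sketch of how `Normalize`, `Pad`, `HWCToCHW` and `Collect` chain together for a single image, assuming the classes above are in scope; the mean/std values and `size_divisor=32` are illustrative placeholders, not necessarily the project's real preprocessing configuration:

```python
import numpy as np

# A hypothetical 600x800 BGR image; the loading step normally fills in
# 'filename', 'ori_shape' and the other meta keys before these run.
results = dict(img=np.zeros((600, 800, 3), dtype=np.uint8),
               img_fields=['img'], img_shape=(600, 800, 3))

for transform in [Normalize(mean=[123.675, 116.28, 103.53],
                            std=[58.395, 57.12, 57.375], to_rgb=True),
                  Pad(size_divisor=32),          # 600x800 -> 608x800
                  HWCToCHW(keys=['img']),
                  Collect(keys=['img'],
                          meta_keys=('img_shape', 'pad_shape',
                                     'img_norm_cfg'))]:
    results = transform(results)

print(results['img'].shape)               # (3, 608, 800)
print(results['img_metas']['pad_shape'])  # (608, 800, 3)
```

`Pad` with `size_divisor=32` rounds each spatial dimension up to the next multiple of 32 (600 becomes 608, 800 is already divisible), which is why `pad_shape` differs from the input shape.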
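
`build_processor` simply wraps the list in `Compose`; if this port's `Compose` behaves like the mmcv/mmdet original (instantiating each registered transform from a config dict through `PIPELINES`), the same chain can be declared as data. That dict-resolving behaviour is an assumption here, not something the hunk above shows:

```python
# Assumption: Compose(test_pipelines) builds each dict via the PIPELINES
# registry, as in mmcv/mmdet.
pipeline_cfg = [
    dict(type='Normalize', mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375], to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='HWCToCHW', keys=['img']),
    dict(type='Collect', keys=['img'],
         meta_keys=('img_shape', 'pad_shape', 'img_norm_cfg')),
]
processor = build_processor(pipeline_cfg)  # -> Compose([...]) under the hood
```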
zZEXTHUx)3JmBhl1%?imGB%iTG+}zz3_qn`ES!|mZi3!4n^s>(%5<_3AElM^)6pPIP zTt%f|RQN~?J9Db$u9->s>nEtJy%BoE9bf(Rtd{Tt^Z>`5imhwrb>5DIU|HDrf}F?% zia|1!dKZ&{EwHCIPB~EWB6#RjG?^CmI&faov!%-mY{Xl*X!m$+@F=|LNQ~&Ski;XQD%nH&^BjV-jkG%F z&UI}fUaAw*hxTDBDJhs;oW`bqaTLT5kC|ffYim-Hgqpk&Bn^Y!;8*u8UloEf6}>?` zj|00G6|*(if^gIT5%097@eRfSEs-=aLI~{b2*}`dd);j#pXRb)!=!)Dhp7a}s?tIa z{hO(`XPTxmEj4ZlN zseV6e%p9%Gtq4Cce%zkU#3w^NLfR1`akskcM;&PjoH&zP+88luU!4MX*PQAW#b|+7 zZ>~Yis2~Z24{n1mx~o8;fxfT;Dl2OMzroOP41%7VfZ;6TYaam!ZG4$ux=_eaT^N$N;L(&AsN0Y4BM0(=i#Dbl+-|i@5JFl4 zdb#C>U>4DR>uyE-JvA*s{;v{L{blFF1)n< z<%GxP5(`dZioKvON;TBpo*brz&?Psw!TuN^Ns^dgckij%!+ zQ)HH-PLpmxWWWU2ZkG&Y0#C`tdlT~;?TY#gV+s+Le4w&3JXlq>ETzKFv{IF}|45f! zGD0!itrt zEDt$EB9dZHk6dmLRM65UEN_#A%>^VZZ2h?8AYLl=bNsnqQ4S5|m4#JK3x=7E=M?qKRPTs>O0+H^M46u!nG8>P>+xqHOYT4|UJEVM`;dKt`c0T&P(_T*Zq z4Y=4S-X&SZNDdATyc!SkifxCD&&11LX;@m)1Sue1O~hjTj`20M#8+|{$u*XX-n<&L zwJ{P)<=T4<$=FbfR~OGsT$3)yLEI`#0#+cH618(}Bv#EM#_vAG(1n}ETzM6u7gN9A ztmFLdF1&mevYLA3HsweWgXL{Y4M$H>LX6Nla$~9`M@ zez%gpN!8Z^LASMHtE8kRE_%j^77>L z3^`zApl3O_5MedcYImH+j=OMdD9@F0cAOa|PIyBrK!!6`zW@U?fDsQ0yuF&?XtKD6 zx%&wgQBMt01S{|_t^&)^56%uj8Ix2hc9P&s$XHYBXC^*o=wohxV%Etu04(dq ztvN%v6KoeBre?=wL|{XY5Kw`XSs6E|fn#Q_i)20LY;8_F0K(s-8#&rpE3ODTz^LcN z{UwZS=aGyi#Rs!X2BpcVw~1zBV+*PYWckqDXqQT=xK;y(z7PM9p9R3eIQ~w*epMTH zZ)Xo{Jvi79sFVUmEru&*;~WLRA}r00>?89~uj!jRcC0J&0$DWRA5iGlR~LSQ;<%w} zkN9h*Ist*|6FsvKnZNEAvsXW!uILAlLVn6#u_>L3#*+#7Y!Ohh^WTf~Pf>E10H-{6 zlZr>(>vO=7W;0bA-#@7ANieK3NR^fK5-SP|Nj1|qFdR@u0Cw!!hqB}2LY^2`{>M*W zNVVlv1^=Z;5zj9IxAw7LoSyMY0HbBXR;;K4aiNP<5YP8&M+;tdJMv|J{CL~Bv9(yf z^wW~0`~}K2`V|6kpd~`9+aSRVld=VvxR3RXtcul}4=<(v&?UjZ0Iq-rXGub5$6xJ66%!}r$ zA0X1t$6tN{YaV=4mJr1PrYm@FWv^&vAwbnC@9pm!xq^>+HXa{(hORV#MbbcsLZ*vD zrG}59T=avP&5hmBGtJD#BeVuY*&OCYPRye>IP!ykZh3Wsuh)^wlhK2i(PY{N&p!1r zw#!W8_O4HaRCc+yBN+y|S+K4UFhJq^)}1d^8xVHK2eTv2WY7wpl5-%&x~($lIS%aI zWG~AYgX;PuMaZXW2}mu7q=&*5Yqn8Xl1hew9*^Tc|LHGIZu6>QyKqzEKK9k;h)i8^ zMSzAti(r6rP5lEm(}zMnHsPmavh5#reg!)IHU0)At+<6?!EbKyD?}NEoRw&|YNl*R z?i+>yqiYKU>=mAe{Bx{zayxlD$amRjiV~{_sGKw)liE&^6&1ju-~&d)cA-(onV?I- z6VX`_VgEb}a$Kw$(zD}H2b82`C`2+#r_WjMBQh0FCmlggCP;J6J zE;#s95QZLsUbIv;nZi-{zPuqdK}a|0-PsmozM5wtT53shl z@Pr85&4;(D_$?AqGiC^=p87~tBcA` zQ%EvZ(raI)S>?D4pC-Jw5`b<_LM;_zA_$16FleTrI2MI6;$1&edwYr_P4haL8V)9` zS44e$Cd9r8S~-+YJ{N0%Xt@U$CKd$pB0>j#KyfcJ9gTs-HHFc$mftng&0oY%Ah$~b zzWZhY6do~UkAnkDLM;v-pu~|agdi$k>(v zfFzaybzGaTrL2S>n7HF*dV`>vp%K5j?*dE^n4c8f*;sG>?gxQx(WaV%#6i!H3=(AG z7)dc&E~W@Y0a`mF_|;T2e0ZMv(_cijufzZ}!x4-}IKi(gC5C~Cs4HT=Qb~=uVkiI(4i+VdTLiyr ztM7403tS)%!F0||Lbg?i`tT6-7XO6?`k_D@l$IXUv-i5TOxoxcuDDLKzb9D+a{#)i zjn#!mfXm}L^e#WS0BXPbxHeM&*T!%8n@{%M`)KTGU-O@G{APQ0aLP>+xI6d*> zf)q?zlN0I&YU{z)c^R8DZfK@w69fS$r05Y8A^=pfPbANss8;VfykUWR1g-#4@u00mcH_%}p0B2__r1@{24Ed-BcB(qLz@9^}{l z(?9>n9UyN4{1HyUr6fThD2NIwwpE!iHVr-%xWLkQqM?g3S~v z*jg_aK2}Nt)6^0I=o9NO9zE+0ZlD@U3QPwU!bUTpR+=@@%XP&A?O73Do(xman|Cp? zq{*mLWY;&<9lsSO#8l`xqWu61;rx?0K;>6PJ)38fI?K=yn+hzhP1)B9@xTNq zeAOgn65l2UhAE|UWl_dY>XY!%2))RAr2ZJZIBKtd*G~m#d)n)cG-_V~5&hU+eI@(2 z8DH(Fqy4n6=#kR-oOd3#bxtGGB##42DZ z5D9(3ijWKa!f0h^Ssmcu8@1M4ANZ)nV7bDHGTTrNMZTD{2~~z)i_}dlxlGd~U}5w7 zn7I!S0HvZ}Ur+@Rpt?g{@@%CI_p9{Di9_&Fa z+n%LP)aR8Rs1f*0*j9cS$o$Nj|M~_g3UqYzq_GKGxPrK z6DZ^ZYYGJ+HnyfVPywOYuVl~ugH2Qm^Q&_+%M`%zOWgM&PX^;7jBmk66G(#raTZHh zpmVz$KKe;$?!mAo7M4$t=;YB zP3IDP*IV@c_&<#qmp4!vy2lG^t_d%M9mFy<{=4WMd zQX6I4@zqiBjEc=F55inT)G32(R+9orBnMy>m#?6TH-nV91m}tr^j>$&Q8NgmnJKs~ zc9l1=S7sV@mAKyQ_6I=m6-*4x+hI*w^95Ssc>HOwrId^D6GfKHyo?PTX#L*k{6K5! zt9MZkHJ}}XBqr{4IIodlY6epZ112%Vr{bP~>K|`DgV>%tig_8814!O%dXO4~dL=Y! 
zgSC1Kk#cvNBn1&hc8mio-^Q|^)rgpXmI>~3>8OJ|>~KSl>{5fm1|^siox2E=tT>V?hQ#9KLr5 zhP7T?B!}$odY+Y??p&zXhvzY!fOYHaMRnk94Nf%#TiMiKu@*?1K4+l|ZYMAXAq@Zm zQz^&~YX{Fwxk8nqj_hDn!{N)`!s1%gaRrdfc8nK(7AC88MhJ_O}<2Xn42?SYiGXxT5;${ ztU|n<-7SU?%>a5UkttbRW~i+cOUsfKg=U@P{S*N;Q>G+i_XDCG62*=w<8EYVVX80qZCdt}V;V5emkLFoH&w>Opfy_DVz)r3d#^KkK zITFb3?`EJ@<>)QvRmhK= zLc;*ZB(fxsp4b#WA&U4&_?>%B4SwW$j><``09h)$>#STkY!NUuHZY~(=`xZ{xwUh1 z$QS&!y^X?C#MJn8PO5nd+c!>~0FCu4td0Ia!dz3Ylcvv0t&SJ z1w@K(FB~2oy3emdBJ-bsI6kFw7rN4?55?2tg~?;xN&I&c#5C{|xOttf5EwwgBzl{u zkJT}dl1`Jo@iZNVIw*M^EHp=Nfu_3BgPZOcm9AVHu&KH4ulp{7@5#I7HhFz_wDAvr zIN}`;Oy<@E8B{IA!VDGUDQGPjZ#57@Jxa7X>3L;!IG*~S|Cj&c zo$Wc!hqPPV-Q7aQcqkg;!=QY0I$hpgWeb!!F)z%|d;t(t0bd|m?zn3%5Gw+zrUTm0 z28e*#SNDB@MOvM?^9I0D=8?a2b6RqA?Wxpx^=gVFS2!Yv4?bSuK?{*DB^;$g-FgMk zIM@jQ0mmFA;8)NC2c!>Bo|U>?K{Y(L)Y%1Iz4_IH-Vjeply@Kf1m+l-SgR#2V9$yW znp8aRt@?NE0vFF$h@P)+?3EG$R33uLJp9XFe^$qyjKo zfwBTB1W%@C?g!5|cLQNBsUq@nt~$4l$>4JE*H;&f=Bj-bk_6QPd{HLYj@JRV8Elld zT~FfE_<4CH=mLle5wTYkcVLs5yAf8)XGP2xwVHTM%^$9OkYW{ZlgR;aHw9U*RNzTx|-~yonY( z$idN!C^@}X?f&ptferh99mv5Iz}cFuVK^YMEPK`7dp=T>&V)TU$i{ms>L)I8`B9cy zLL$WCh_h^7jYu}PHXTH&b^vPq?zmS-EaUI7AFCKlNqO^%1S5gE z5`KC%3;1kl6pXT~tp9M!<4s9-CcS8oZ52`)h5hs<_PD9IFKq^<*>k(P%8s1j+!r+! zr*f`|qrjfV3U%!sefW?6^sll%;f#U7F8B!o)*Y-&$}oI@5&i=XUEb6%HC%^Mey}G& z8~_1liMVNwzdS40={pWo1?_Rk_t(cj?>fOlwACWEl|F zVo5*%0Yva$K=6@2hTsdIz%~&C@QJVi2R3%t$g(t=rKkJ$U2Ct(eP64J-*Zk&LC)=( zlPAyezR&x-&%Ok*Q8;FE5wD{RLv>!)jFpo9{J4-1?)YhQqqx1LpBNfRY*J(5kVbxa zsd(Mit509LUoj_}m}F)f0&n2+!GU9A@FCz@I}7JyvK|ZOf`Du_))%q&IzO?*fIMpE zE&kb!{L+inG_$GxiVSP-MiW7t#i*#-q;iOf6()*<;A^%vAp~plk4J=&V{U^!Y`Vn( zqaA<&u*xpcs3th(207iouxULqMfT#NMD4ce96%)4;@EUdgcyX1rb}tQP{PNenU^IN zP40?UuU@$ea#{j&wCJw4=#71z=Hh6`Qtp57APj~0uTKVq={~BQQzUc!=&K(IjlVqSAEweCf)Pa)8WByiSv(hydHVQ5I?f}-6YKE_r;o;%_TT0 zHZ3t9Ym{WP{h|JA(z1SN<(AOdW0-ew8yURx(A>`6#mx{bEE50Nus^a1%e&?3{_gq5 z5AWW+V<~_3>=~;aY?DRVXf(y!pMU&FOGNtTy>_72{rP3&rYON6!5R0m0^UaS}N!iDI{?*G@@(^RUY&;;- z`X`bP@=+nY#(Hpmey$+<@xj5CaP(G7LoyCO!i|a0!7VF;g`Fw4$KJ3x34ivd3zFa( zk_~Z93Hp$mJ(l1S{ClqJm(r^YBiou@KV^Ri9Yixx5io%{lsGVPK#I}`7`1{t2ASv* z5ri{;ip$`GCA+vfSOymR;S0Qp>zwSef?yuL+La;hAD6e5|9KGv4SCk$ejo1PwV1|DmjaCy8DP zXXieOAy!nSt?sHWL84%2QZ;BnkYGHUVjAVGOu0X}^QL;8cuIHd86FA<@B*Wh04iM6XF~2IhM2 za!uevB971)2pLdfXCrcHA-vyRLuWuqcm+^0FO3Z{(6(SLpA89;myV6tyRPhaqw7-2 zo0$QI50RB6Dwu@9sjWOHs}nOxZy;D-nAHwF0Uydf;0MPHHltD3_B_~cCeh~;eh!+Y zanf}l8?Ufe7_Gk^%jn~Fq7}t7wLsXg8{5$h%mo%yE~Ag*6ci3TiYpT*+}Md~Tdb^t z*?Rr9m{?yK!9b(a9dMAS`~2~Iyk1nE9uCwvZTEPEQOwdoV?RpQ%_&l+>S?wHScn;$ zJ*8yu{tCDds4s{cH=SWl;Acd=Kis!Q`?c*x^*Hk+?oLsFxp;2+O1l$*ErKhQKcp!d z9+W2lS$$Y!7i=kL121u1!it)UuD~P|eztzGJV;keD#$%U8tmtE^c@~p7><~zI*`$s zxMp*)GC7Vcnc&bmuh^mpjc&D3_z1l0E>rW>ZGN3L#9y>Ox-R zo`l81WJZ7x2$9I$)^)`MnOx(t^y4R&#YUoe?-Yej;YRj=cu@MH=%(!lgT_>cbd5Bt-r&5dECPw|#1F-Gm) z3R6B!?{Q7k3V)jt{J;wxi3U$ma2Cu$Emo2RhGV1fiohHjZz}>Nd_dT|2E(Z+R%$L; zLU&oLMRJyH45K)bqlHhFDPPS_)li=7f!;u^?y02DT6JHAa_>ay3nHgwiBF@p5WTUZtoi{+Z(}OBE1z+P}Kqzd0Y`nK`a;u z-Ev1J@_>zFw-(~(YCKqEmc??dP(3|Aza;-rKCEdPu13Pel0SU-07};V^z`)GZ@)D= z$BjlkYVi>o)(hAUh2~MTHM}pZZPS$iD$PsLPQ<}A`{KUr@1}VZuUBzTh1x`NDb_VL1wTR;_vzBzw7NY9KTvFGP+;` z&`g6g{GOo&sY|5qXl-eOa56wrq*9+KoN_ zb;MH(yHHb!;ES389V;Ti9fda@`Z}|XCmZMU=g*NEr8*2$wgF|WQds{d|G{tn{=fGH zEqAb~uW#YAQibNez(fkulTM?c-HQ$JUQc$!e*g1}ODf(*0=^=@)`X zUFog(4Gv*@UOyLWGp-RI9=YF~e)#8Wof?s`6H6zqrt z@PpcG*9EW3Z7S%i@K@s0eq!Uh=SR&fq%t8B3gBsE%^)O zJPz_Z_G(d^SOe(E=*9*Z8rbl8;=R$YGDKDz(ag#+?UpMD90=20td}P+V=|Ii5uDXO zl0y3YSP`|M6u=4o%B>!Sf~nM;T}Ei#fm%U3i}|?XJ(_HX_!yE6Kw$uJOVprC^uNRs zODVwecD>vx1GMf{Qmifl4ErTH&=8U>T^ZAWtmCfe!LTOhiWg6U` z3PQ(6Z{EL`)X*_RBO~WX=w8E6$9MLzI^`t+iyS2{{54}riM4FPKc1a 
z05l+?sdYOoY#+o-O@tdDu}qRLCzCJ$fCw9zI8Ss#sHT*o^=zUGpk?LB2B>*hMnh8! z;D<;XikiF9DF-#+fk+k=4!Fz3<_qSy;BqDA&{vj>a5oB4L*}k;Z#dIeO57gc;NH#)_k{xdu)1U$ zk;S!$GzKL@9Y=|!L_9T32Ps%Qfp}_Pa7mb^N<5rH!~=FYeWg3y@lEGf%lvd2P2R8;kWPr{!FJ zpM#Z&l@hyKsY*Xr2go`~mGa_x?^D=2V&LfpnxJ} z15Gg!*^9{zB~OLH3ceb%$NIR&DktVRu_CcmeD@sSvIl&~mFC4dmy(tMN1#PS1j1e} z0;tk#%A?*DEaber7F1M#jenACDrEuecGR-bXkznHe_4B2jivh#!|~RgLUq4+MpBms zY(@8*r&vBY$nN^$i!Y4lyYIfU#!3yM&Oiq;KnD~=UF*#ZMZ>$gx_bBajkRYxmLevj zmHF+D|M+)azTCdP;)jfIMx*`qSHBj}MQ`ws49O2KzNI(F}1@C+y5(-1$u?&)mpg;z;JHi}V0MA@$%n+8J z%YcC~$aK;Y4O9WyF#*K~oPAZ6t-8k)W7W8(DF8J_MWJ+L&_?4xn2NQdJ?pdvL>`5+ zdbq~|1;~*4Jr;o`sDAn7m!Ev{2~7rMAm_>18TpQM?2Uy|1X(!tQUx?k?f$;!X&z<3 zq*3T1cv@`*ue-Zdl~qN9J$tI%mTaWP*D-q=qnvQWg9C(klVA&7IO5?2{gpL9(%hKjLNFEAlC^*+C>%W2giP?<%2; zniP!Y-@m<(iO*{Y38^C-4~*{uY7!qU^Yu!E?s&wGM7zgaXY>+pwo4R`aDAl7XLppk z8jjkr5cLB;y(e{TWtr^E0`ZlW&vjs}MZ`cqf*s)+@I|g`I8C@VHjuaFd0UADDjFh7 z^}5KHErFu2jAx7E)@~BF*VsQR?@Hh$tD)zO>}Iv6__7+2<*I)x-7S}obIo~jJESH5 zW~8^r?BZ#2`LxNi-^1Bzym%0)I$1vU6GOw)0QFB4fJwCnvjN#m7(l5^=+TKpP#XjgHct@ZKGuuV zzfDEyGTN-zk+c@S;!0TEa*N*^#YCHI6`RTyH#bnQ@th1TDHBDsfW}(-iGju>nMV5k zF;9u?2>rO`v!14tvUl69-s{)ju}jvQ4b&tyWJGpFipj_Gk6Z+V7Vl<A!jxklyj(hP$d001BWNklJVp$<&Td8tfg|nbjLS5-DOq|9HAU-B;lY;|jBrplEIBAugbl^iqka!6chbParw|b~aqh8Uv zyV;ccM!MTPeg5o1ae;&{qFfgr&y%-r*Y}U8y#3CdEumwIjMUff`qx+e|N6iD4>uQU zb#jm?M0UNg6*0K8$>Gt#@%G`bzI%6daVenY^z`iZj<$UxP`z=o-|vo5`JK(EW}(g_ z+YSxA)`&DW1Vuot*KNI8sdO2kI(rX7VT*rXF8%Z{Cb1tKB0R@R4ACDnj|5kqCoO9X{29XA;A=z~uB ztmk2zNj61^NyWDGRX`GI9MWh~xj+!SB}>D@tdvTsCKC7wH6!shOr3VM784s$AW+g7 z>#HEOXc5F99b4mRTTuQlet8jvY_|AU*-OKaVB)0dfJxMR3jndiU;~u_U`y zjNjHuy~vF^i;M6CmPHJpmmA(CfQJp}4zKj;t?fEj-ct&H6(7MA=ANRY}=O zv_xvsS|_w9)1Y37ZN#c$;#Lq?63kK8HI!|01P!5!i>bP?@ICgd>y5!8g2rSP6ZG*Y zR%U;#dwdfy>R)P6FDP*0=za=TcVJuJ^}ZtyqdfEiBaiw_}r zi2E>_;Sd0g2Ri-g*WbV&?sn$iH}BpvdgCJS_44LMb?2Y{5W zjJl#}l!#UlD%Fw~u?ZVVReTS?I;o z&c;%8^IHW7fp!4)X36SCCr-{wAwVKtZf@?-*0Zy-*6r=(r3N?|<@f1E_K)EyUz}6y zx^-`?aw`~i^p;pnWCp2(Di%b)ZgCx9@xW-6cw&jd6;Nx86S}aS8xQ2Br)@Iue93t0$K=(YKsYLQL_l8BCNR>BgYyE zHJ2Bv8IlttL=6_U@9#y$7zQ$mUSOGwO%Xsjn}KBhB=jm`GFR5iS4X=)A5HEa&d)zE z>#*XPEFCTuHBrn4!?EBjL$&M-bv#hzJyl*tzxeg{ua2IW+BY*OSR@0-p7DkCY3ori zy0B&h+R4PYkF5sV7;H>1Qc;qQvGmD&Dz_uBBI^A_g-yO+jDSjQ&k)aLAB+Y=ti`+t zT&f50#LHm?u^>i*DTs4>6{RpC8{o^DvU|V;-Ucn-Dt049CkleybTs7akyIcjQ_q{% zuOWTCSW^*19Y{_)b+TcI$Jzxpn?9Y;1BA&voIxJwr(!1JYVwcyZm*jaC=@?Kd@ zL+h1gjZMsEEnC27z4yL-51uB=8bI10b+Lm)dGVzH-vF9f>zSlOo#;EfY}T9p`M>@% zg#v1OXWxADbF7-chw`#8+e8F;`Y3*EFxHwN|0Zz-Io@v~aNWbhTC-l0!?!`03M-P( z02zERUAnc<P`Q)|M61dAIu??*v{sN2iPOb7$hNr1&abP z0U}{7#>V4qyYiCPcPg6?Ep_5iL9zWaCa#M(~SAI-_g^kKM45@sXCGEA5dEDXW{ zP+(PX-xGyDK(AUgYBSh)fh&15H(((=9o?i5!S6+npr^8j_C( zEq%I!;le&f~NMH|Z z_Vx}HOgJ_A5-sV1*$jk1Sr36GpkOz8l)5AKh9q%p>~T4a*v--o>4~$~CH&KxRnJFr zvy%BQFQj!9u8SUwCINw+Z=_&U}@+GzL~6RB)DMU;#rwwRL2rdUMLl1D1TNf~;1_Ui8DoB!b-{O|wGfAya=s`b^h zw;E2E*`&U(onI)do0&BaPm879@4tPGFREuQyret0XWu8%7>3H098z==6=?dCG2fK% z2IJv%Sel~BF-lX2?O`lmh1P|&2y-oboWrDCq>eRjHMQRtTOH zZn{~JnxoPLYjqHd#PVh*Nv4UoaKzEJv#it?D~>MsN?nv_5)m;3vn_SQ!ixgl5^ zvocu0<%6Jy>|SlZ-BogMon3@akwO5mkp`^P_2>fOz@nRo4NLz65LCnKSyu}Ar<3C^ zkB*zn5TponjMJPphhgZ!huaNh`ai9s9#2NTn`s5k)jGFV*&pL_a6O|fA(*F`Ri^$_SIUrN*XX3 zk+=ymq&Z0h$|!k&so-1I!t*gPPK98MIrL#~sK)jz`ph%&1D)zGoCcqLvZn#%r=NX+ zXICPtYJ|JHC49OPHpXdeSlD~f7F>s_5!4luM#qBOn+)s%Hppg)k&v*M99H*#b#l#TXkfDGbTlG(unY58fiv26ps{@NR4cR~px zxz4Jf29s2!e}1I4GF6ObQp#@U_|X(GRC21kQ;RO48>?r}4uS!2&Jz#oHC=v5G1Ab~ zdJ*?^bu;Y^m${wlPPwYM=x{I;Ept#mk}oXJB&zA>)mf@sTwQZ|V!2fHyBleV8;xeY zP%hlI?z)3sNvYc0HqOi>QYeuK3lxOKcvLNJpB8mlUfB1qTsH_#vlO&PEcr4XGrZfX=NekS!{4bs_b#A?)gMOWf_B|N1!*d 
z%nxa?B4`^80a8ttgpAojD4=RWph}IkFws*h7CY+UiwN)!=uE#-saL9XVk6VGF_7zt zKAIW}O5s(5dE^5t3b71^USb*oX^ez`<0rOh_-fTbgV2}knO>*eF{u(;pME|=tanSH zz9P%b!yuPm5IxD@)qHwK-y)G93O?ILpsKKCtxhE0Lze<9 zrE;UbuRa)^tX!^fXp83Zm1jf^aiN#E0V5DwX1&%~=6DVy;_~Wgd#%R5uF5*!c{LX&K+rD;l}gJi>QpY8>w?L z#;u*_Q&(A+6@v&}J-0;?;(iyu$C_#kH#e8~1zCarvp-PlelAo3|BI1tAZjIAZMSW5W2@}seq8*TI z;t#6LE0#D^y+v4!4j&Lyt{BVvH*YQrm7^hJ{9*;B>`sbLEuX?lLCK3inJ2}EcDHvL zit8!l9%AgK=2sejMHY#)*VKCv9YGVcw~#qIQjHtJ9Jai#JnKESwLm$l4*w zv@{D1y@L*3TtIl%3kK~Zpq1-XT2Bu3u(B1Y5&0tCh8^w_9`kvrb7IAd&6necdF-Z(#)I<7x$AB~pf!Mr$LRc4RX`BQbYEcrKks`h5Z=~I3DxYwEOEV75I z+;L}|Z;$h>aX}z=e_i_6&5R#1Dhe}7>C$2o2eHgXP!j#ckxv9khJj1}Y}zVMTl5FB z^>`#LC6EQ_5L(vjOIk=uH7lDYbUs3^K{&NvnLa+mg+rj>AE`B}lBav!@lH`gGP2$3 z?z&a2?Vmh*ChGd`PKwOV$q|vA@Etkn0u&>1+A1)ANvNVoaaV?Yd%s#!p^_&M1gh1eq&Ara<#i2uWL>4Zv!BZDgpjMp0owTGXPdwHR-N|tU@>Y~ z`_aSwT`Q_f?N%3$%CzgxD33&<)?70M(x~dRI5Wy9A~XtOW%Eh&+FX#XieSVHn>L6G z{I(~aw!`3=h?Km(!$#@y zwy%sAl1G~mPGet%y3?q!!AmB){lV($(HrGMMw{e6YU}L zT4ENg|K6T-$}xU{7c#Y-_Yb}rt?lCWs4O581oH~3N|Z?QaR7-u&4)CUelQ-NFy19X zuIIB>*lym7p0ThM3uUpWz2WR9U%%*0H~P!%Z@#@&d4f7yDXTQq5|Ji|WiLv}WuxWX zR<^NU%Wf?MiapF$FHURUe0RO`lkCpMgO)C_y_2-+y=r4M%H7=DhES3dzOmaM_d8ty zIXg0+I3n-HYM&GkN+?0@7Az(QR0tUODtbUuJ5o)T%Uorn=SzJ&A=qEqnu@JJA4@FY0L zRcQ&Z`Jz&6(i)N-X{BS$Bg&jUGpAqBU8nO5BXMu27IH3$)F$~xAvN|SD+WvOkEm9g zVKxld*$t6T?5m+pZ5|&S)!7w!Mso>Y1(w1-DU9BJgt<3Tv`?ypbgFT}&?4mf)A%ru0 zsLQKRg}Sa{t&W`K#8^tDaKz*u;0Z*GgdaZrJR%n5Q25Hcg?ZiYi1J^rS_tEgoigP> z-LNS|ngkkV5%C)*IT}1M;wibiLi~MF>4;2YRUtmpE!|bqJVv-ykEdPczcS;MPJg~J zkgr}5|9*L=RnjA+3L5aW2om;QrXOJ!v*kDnUu4UoukOOMv)ioiuO50IuS6APYdaP7 z9nWuDvt9j+3HL~lmA55QV1oI}_VsteQ!j}d48#Hki=d0_q9vlp&A z%orsp-#SSHzKKK7a6FH(*hE!N)LaWKU5ZH!>V$(+n3Y_am00u0ttRvngDDvZ>`2v; zSrds9N$8D(-hi0^1%-Nw&<~W3+)y^Rz`4T4faj^D zCFTuZL>r+rx|(?)TFE+(n3u%jh2tdNB#RAK0m7!SsQ!j|Ca4@@?iaRN!aJe~)FFBY zql;>Ka4&RoR8i6E)6>Ja-Hv1|7SOdV9wQt%FfyqXAJ||AqD|Whe7LU4NZ+f*ECN2#n_d~J0 zz3ZKv9vT=<8#VM;dBYq*%i?8Ei%*3098CQ+9kX!krH=hg;NVL#>jQcL zt03KuEjh^e6{GSMIY@4dT|sC@7U7U%7I+WCqgh21OvR^R7g~Vft8Pw8WbS07hlkQw z_3zWGrm|b(nB|V5X;I2V@kdN2SB;v-*>Kf*HG$5nN#6zoj9!E;AVwhU*RW3tEsakU zlH-TRfXpLtj`CyY{?70Icw=MU>vVTZ)sy2x6m>8XZxdnYB=KpK1|W0oU{h$cC-{mI za)bTjfBa9m@Y$9LJ|VjVzJws)12O>R?FN%V41A2>Z@xx#QN)zp`AL-hK0(e71V!|Wq#xPrfp}aW(bEVlJa8nhYBhW*nBEBu&pQa$tR3Aj}38vv8y~ z5~cIv#CYZnm??}c+!jCwjSb}aw=qRux3?wu!N3J9sccPfIbmwi zeT3Z{zr)R(^)$GeELyCZ4+waXzaHzaDVr>P@w?X-JXHxxuy=x_N7X&Hc(y)1Br96T zv$jTIx7hV6T#;GrDGE>OZ^Q67EmZE{*odAT|g&705#kHfo9k?7* zyB(zHZTiN;O5w%(2WTIu3m79ugwP12tYV(ShDjQOEpP+v;))N6L>k6oN}E)@k(U{= zGHEgxn67uBJhVU?$F??gIAHGHi&*&!QT^oO05qITfD_bQjuuA*Df0oYazV;lQ|ODho^6$Bx2v0(ore|N)9Ce)PZcM0lwGWr*J7GEv!-GeEGBpuQ_3tl~+l^B?LJ zy4_kjZ|>F}E_#v4o2gg03(h%)_qSnHqGDA_aINi$A@H<^AM%w_e?9-;MU`(GiqilpziAHfE0! 
zfZ!Dhg0U0#y1G`+m9tbFjuTzg`KTOGYS4K__p!YilIvlJC{nl6flvWUW9I`!3KkX* zLlT5zpZ zPH1Sbd>v7>uNu2Vh3~ z7Tj}huPhH=olrw1MLZ!()LsKvI#+b*2VSOw*9kzfJB0Z^c3$p^!Thj^NkE^s*=iB*3zQ5%>vQ0ewNH+MHeMSu%P z+2Xvyd_`|kxGfNxhW`=WKp044!l+ z`{a{PK$iY)RjSRV1@k&T!Dbp&TlV6hr2 zs5XVWQ9p_X4@=4a9D}>Xok-dsa1mCbAkBIrfNi&SSg9Tm!nQG)dP0%f3+ZWj8A&&PQB z2}~gaGk!FAyOa*m=q@G}^(k%4O{- znYPpZC1pyuCxG7Gt{!OOq@5q`U`gsEK9Y-PyOrinu}XO0o%(S8@#yG8G@zI@Ny*X{ z3|psq`VAB^7jzH}iizZ+IStX(y4wPVK}gvnXy&L9;+cbZAxm^P(+}MtFjmYY-H6UX ztO&*oN>C-m-jt?0wj&(-H~=4?pp0@iChYG;gH$6Rp@(6*VEjQ;ePyIbq4(3|la6Qac(5bV`D;E)YQNTnoy)i8(nYtwUGfg}M$>iNr#fAc@3CERboD z@d?K;8k(nTI#RbOE8kg3y@Fc2Z~*m)Rw8Tl#KPJMdh$0AJ!XGqc{765CdcmJDj4 ziY{>MuPHlod3ot}OX(Hnn98rF=GbKp$ZBxN7*ZmGL^59(2APjxhV%=y8nr{AH0}Gr z#$*2E_=T{h{b`ZuAm%y3m(*l|J;Wj| z3itt!U>kLvG%D;04Wj!K1SOU@27TrhKc%7^!9$j;2w%!*LjV9E07*naR2@t@h999d zu{>5C!SWCFvTYJM{s-C6IK3CojPD}itKl`ZO9Ue@GC=`5#Xb@bMm_~WPlc~$jZk_r zJhtQ(&w0~?&F1bap~G>i$26N!;v4GwxZVe>fW-yi<@o*k_wWhh^h_B8CPj566W!eE zrX0!^t%XoelCt0Iv1AJeTKh?GCq2g7oe%CB)`ZypkkvF9jzaYe(_CnvZt)^_rI>A~ z@XRcnG#Hce)!_Y_9)ql(qGkxMVC>qpML)8q~8^Vdk+gFV^aZu z$fH=>NCt2na6~zK+F;o8`g-Gtap>nif47;_QOu@p&`qs~3^*YhSdkAB@a)-hRKqd$ zR^Q);X03Lpbf>4MMo1FGB^pi4H6zy*G?HP5IRfUmjC+Z+<#rc90z(31K=IYFMKMR> zxf{`FTAulu9TWm#nlv_N(G!IE!7c}GVC8xb2gX!Ob`S32ujiT|82J_Q(`ZZqka>~Z zh&=fz-4u=i$BKBqw-~kyQfiV4k_MJ8P_m#Qqma~smS;rJ2=2kbgH*u}2H^<17qz@t zfAh^d1V_J9lLa@#&11+D?t^A2qS`Yk4nbW+gGhvSI~?FrmYzS8Z>TF}n3`<_v0)T$ z1D1U2)o49g6;MK@C#uvZ-OfMtF91=GcP($EJYHz-u~)8bKS(Ct@~sDSS27 zVG0(T?ZLqzT0{4xkhu(D@w%Pj)^4;mSG_z{SvX4q;7}UXHYra6vG8vRewD$&4F&{> z9-VFwkRINdKnGhXMYWVK$JD6aVy`I2H^Ri zo$4#z>*ADc7^L%#y~t&j#!rbtNfr;|xoHynygI$zfwc%2CbqToz%ar$mE>!Km}@}8 z(@=!Kz<|vb_D^e~--T-k?AO9R&i|FWSl33@QJo6OqE)!a%H~^t01L3YPW6;(%C?(0sx66*U=kX%@o4 zN{3D*fx=OX6w0QY8BpZWCv`L+#G}0ZG{(j<7^|LaxZ=fy%DbYxn+8P#HDLEjo#1N$ z(4)g6F)hfBw=*8fAYS)3I1K}Q(&bEH<(l-QiMR|af)*~2k`4}u5gg?b)llQdzTJvyyWwOm?KbnIt=V9`m=w#0)%r8VA=h(3TzlI)we@1QT7+?u zEgOhJvR<#z2n&}AN01(-qPeEyGNZK_#EB6gJsgJ+gD8lXhkelJ>g(uOA=W`e(#vPH z?M?Y8T+lPuBr$(Vae@tyeW+3#Le@M+@vaW(`0f4|zL|WxeD%BdmA#YK3VaR7l8Z z^i2RaR{|pvW01bGb3r_}m=Tb=29X6=3yqTz1xPrQD!3q*9HEP7rH!2YbA@>5S9Xf+ z!1jrJC>ArCXcDRt`%x*6D|59DMQV|UBozuc*A}nQcqXBnMG&8R{^FQM4Q=2R?(P9# z4i46r5yTiw=jh;2=2ir`K0e69e|FkrXZJ?cs_b?a5;}zgWarTq*BJisiC86yEF#?X zyBG7C5IfTe$`uBUkW|nJ=w#M_kRE%lkMQ_^^B3Q6Vk75C^Cz)h=$eJ=PX+M6^2aB2 zkp{z-P)wpZ;9CehKHW#@)8Z`=)DZdVc8zVD2$WHV)j|NW^GGscZH_5QQz8umH?<xQch5-~t9tfP+D*O%1CL$z4gJ%*a;7HJ~zXn%ALVn<97C4!H z#96@{;*rZdX`JNby9+2h4T2#==I)q24N9n-F#rW9Iq;Kv?ObV#$_NUHPPABM2QK;1u_phm>Qak65l23(CqxZ!gwsOjmD+S{H0*6cC&@DzF0pG zNVWSD zTtf0qL>1}4>M@7m()Bim@f&l{DWYXGug!e*l@|*Bv6uE-5VTCA;Cc-%S`Yh8_$COL zm$|@Q%j|Z)l385s*l0c9d20+T1Pz$oZi^U<<{3WL8z@`=fbb)=N)?msA@B*@Jqx67$*V_GrDqk5p6-9 zvouBs5k15f^#MSuX^o7>Pe$t)Niukf9TZ3voZnpk(eMB4 z%g>viyf~vNAbjGnnu3n9(HO=YpPilAW>S>M=U;sNBhpZ(XMOXn_QU`E&;C-nf|?0n z0P{ zLBiNzg3c$~+cm)Gn|+Hj(IuH3Tzq`2a%KGFcDEpK9_4EaXMm(_n+O}7wMKYt1b#Zw zHb}xfZ#NQ`5%UyU76k{<&CnW|gW~Wel}rr-BjOOlWork0M_mjI93WDzpZi0L8C^rzc@cf;7rpP zJjXGu6mYyEg@$1#`eMYji*NRrfBrB3LO6CQsRSXIQ-Ec&!5pWMgtnD=srwMq^kJir zGspnR0fbOsFt`RI2%Lli-US-WCuBm`aJd7IR|6@OcHjrc@Wf0(3D~WX<5YkHBa0Fi zfk^Pw@g)kE*%yvs4q*a5&?0zY92gr4h7{i3P?&pBrzOK=y%NeRxQCtJ)=QJmyp()@ ze*V#_kqB`)=w}#0j6fJN!%s_ZJb>!> zlu2BLvcb5&Sj{SX^26k`N&XGFp97n5Oyq@$7=|@61xw~Q+y=1LC!K&LF&|bJd%#bU zz`{JM+VRt>ke8ENi-MOKdaL|uI3e*T(u+Ko4z8o;S6{xg@yL=MZTlS@>=P;NG{(x$ zKl?)Ts)s`+VLA3>^4c|&((2kOaJMJ~$K*940gfZ_*oH(|JHT4bL2;lq1eihM^YlQOM&A`)yy_pPh>^yzQ^t)B_3!oNU-NK3o|cMe9; ziQ8M@0=>Gr#DXoGedXhUj=jjHvR4rfCy|5nC%>lDfdc9pSEd)6GK8>^96YUr22@T@ 
zPeBG6*;lBRzvvI6G44g}5jv#fsOeYRf1%#guy-etos=)1l@z239`y2Rq}dxW4+DPv?YE$3A&eQlAm9ML zJx3B}I1vi~fUWjaVNeVkpHjpsdT2kbD^l5J?c1x5gwOEAsR_4~a7ZU3eoEX8XPt4H zKJUhK5UAfs2o9S&}R z;0LujH=EQ(T+L+FMe&@7=q1XaV&OG~T{_i)vUVBO(NVQs`CLEcu|9m`Cgp>R==!l5WDc zyZswxW0Z@vW;p8jiMT+RqaKmT^yxV|I312} zTCMlu1FMyNU8E4)a!Gx;6Y>20QJ)?B)n}hDmwx#q%1El&{m?6c4O^&G=~ z_39N?X|h&?zYTWTu_88VRXbMTrFD|r6VaH?Q!adB&`sYOAj8u(F61pP%cD_t2(Bx29QdGoV z#i~UW5Ueh7-a4q3O`WTtNQq>(>^D{0HkuCr0?<B^OKV^@!s6gtQwzv z`laZ$B*mh6d~^)MaVD^$4KpR9C&qTFy;B8cWYZB>v>f^UM6!_Tl~k zwEOnkZ!v7W^!QX1vMUsGr!S7haD6;~E%WmE%M;%Fey8X95@=f`YR;q$Dqg3Qspfu=}uAePy7NQ3A^ z##9=MTn0kBwZTrQ@XUGoBT&HAPzbdqiZyL>4T%1Z)>88D)jez0xfO>UzWS&A%veYSL5 zh)S;X6opT0DsLfo8$8DVj&3s|x9p`97jQOTUN9lJpa3;}wY}*v@QmGpw0LxM5cRkD zJO$$=&J}yZdL!BRg9Ty;L0fxzn(}s(q9>R!4ceFDj#wZo*;%*se!hN^R8`#A{*%A= z)BpCr`-Rw;%Hi?5iw_8*cngjcMl`Qz?jKb4$`@DXufO|_G@#UL2mQSx`wVXGCO`V( zSlvRmXkRg2?PB)iPhH9ZS(ozJQn(#`epgo(Ir-31hA`Yf!3A;_N)Zo}XtWjNFEDi~ zVO{Tra63Vq`AQivd3bpyR?;Z3gNh&uQ>UiwTtQV2DjjO-UlJM_$}d|S5pHKE{P}1Z zfM7q4K?(A(&YhebU0+^G=p<$y936=m6?+P@hlfpLzHfE7KxD(6o}PVp{nq}m+(aMv zIvGy+YCTXeB*URu5EsbxRv9mGUUOx`re&E3S>Cc+3-85AZ8YM!j0E585|G_)zQq zPHZ8s27lFLHV@odla%wD8|#aYA7`tobU8hRVhJ7|&XVe!-+%wT_Y#kwsO3QZ5L|y< zL%}`LcIwpXY)W@W_~bf_7ZWys;&!Q4rRzjN5nGi^sk}Xe!Nv}n`|9*MuWY`Y_QiVh zUQ~`^wIv3qJL_U}o)dCp1l$KyQ<7?GBsS}XK>3@VPiZ&m1Omx!oeP5p(t`GD`JL5d zg=7W2APoqZQkyWF29e4no8EP#eXsy*68VLVgfL-a_WI~G%c1N-bN~yql-^5m#`6e; z-mcoXxv;Xr8JUBitvrq{w5v)A!YFbH1ZSY0g7r7(Gc+` zXov(tET1aPU}fz<9)vyZsx{c!-a9^eVb@%qUqCkWUG~RVvaNb&&8Bqlo69Q^uml9Z zLP(Sioy7H!72BN9EMj|24p znL{>YU0+__e{xFd<*~D_U^w+e$4)8b_i2$8Z$?=H%ve3ut^u~))M zKbR`WVnU(5NLq*GPXc72myCB=njN5EmlVL`*o6`UbH(GYzIyTJzi#jCl{Rw4`}?TZ zS93n6Wo91>5l(>rUtfK+auR^)h7Xxlr`6_OI@mwlTz9UoJF_2cAJ$8p??51{*T&n< zZnyf6%3>j>I&|H%2jdU`a|gA9>x=8RZ{NvN$rrc9!(Y%s!e|_3E@en?&!S<6>z*i~ zXAbv~S}Clt!~`N|l&m+qz`v*$5;lhvUz529-Ek102p3uWiw%%r>vN0Xo_G)#o%NAoAqQl zwJzCwk=Cq5lMZ!_=8?b<(P%cIDEKo~WbWUxTkI50cZ=JDQCBb@L#XP2$}LB_=*E*+ zDaCZyI?zq=MNk?cSgi)s>g|oFv&3%1+^h+*HtNytNh}Z=F`4&ANEmIYa&WiFOj=MHoyM%d9~3;te<_C8x(0wkL3LsiGad;& z+&>S3`@?u;2YF;_{-uAwsZQ@o+{xC4N(Zb%ZH_sa0CrYyaKFqTsnuRL{3iFETlR<3 z=Vu>3MCED7efI2mf6%kq20=$L!h!y(MeU-G3$n6y%8(u4KRQ_28 z0s%~|3U4tr2 z^~k!F2eh|$B-(H^y3rA$$oKCH3%g`X{D`=&KP7I+;xfRro z&=BmfqO6NzagkMqxQitcD9Ki5v`CP0OzcIF(*qNt5>yltz!UcZGR557IlMd^U$9e5 z$0}evKmqhJS-kYMYKd@Qy4L~g7TYuO(K^>{ib2^ zT{cS077$To?dV3 z`ah_HaC56j+|;N-%O5d&Mup9x_z6MxyLHk&ONRrrNhgEgd2C-{K2E%C_6zHzh5;7J z(vle)jV4z^LU*c_vak(qo{5f!rARa6R+MLF#$`plh&jsJ1bfd&Pt#K&bQhyS9s+@) zVn-+z&O5fu=1WR)3^;JzW!~sl>+N>O30OjPi18({VbH=uKq-}3Tw@^}!%>Pv;wwdz z0tF2iHd`JOhZYH&kEKE&qT^1a9Eohk-u#OgIWh~4)Mb22=`#%sxv>@9h4a~a-s^#0 z>L;=iWQEf&U%qr}6wRD3)Y)kq>|+E>CGOy7;~AWdyNzo8@BZHBtHtE{;!9aFdDdgwPTdKg5j8OiEM4Ah=fJ;eKv{wKHY{4Ue#fpQUW8m^A#}*5!BtX@p zA-quL9wIQ(p3I$R7zC__U+iHpOyOe%T^Y-&ekeP>^u-G9UuCwzL#cOuvhxj z;f=D6&?mAvN=WZV%XTW^b5$5OigwdyCT*QmJmax|auqsMsnYI9 z34o9XFkqWEiK*0pT&*C@)ACh$^4k(!Z$&4)e0d0sFk0(tE_NM)>sX*4gg^iA!HO7| znbKDvGde&QQ77=tJ9VhGOXZsBQ;`6f@L(iu_Eo)j#fYvqWE3E(47fYhkF!HcP zBEfce_@>~Bgg-AhJls1z-jjSOVvqgd^yCFtN6K_W+&66z+{j9Q{`u!11x?9u$e$}1}8-K{rmSo&tk)Ea&d9#MY&Rh zs9#^-c=Apqylq3Zo0jTy-y%+8!s{{w1(j0*hnm;4PZK4qUIRKs}R^-2Fs~MI91~A!u#byON=V5m*c;hR(+hn2Mk14Md%`}n}C-1U}25o!IYUGiMi5;kKJ#6eU1n;wKdi_F*vL!%EVFi^aNLc2x})*p*T4AqvgZ2&{3YAUcP+k*uBsm%ui2_bvz~GvOto}#W(W{*iD>q9NpMVbAMl5qR6>; zoIZQj{Lzm-Bj2R4TZlN`nG!*;GJV>LU1p&zw?GlocQPi7bmuZ}-hR+DND_KzHCUv2 zoms}lGGQPx-qg-6HM~?o1VDCx+RO7G&tbamx4W$=GbUbZ6>qMoW%93U%11y^Q?tuV z_Uh^i8hbDO=H0t@WQ|%vpiuxzlj2yr-NN1&APrSW%|v~zC_t`Cg{-wmi$T-jiQqjA z+oVaUUpnZw_jU_E`tlQYN{M3Cy%O#w*oUzNdbkBt`NCZ%t&8s;W}~xxHT5Jv_m`7l 
zy|TGi%*=*8k)Vy;t;4#?*eeb1oIlwwJ^Oy>jND3@npwtHKqMj^I4kfxv_J4m0?JwQm+QI0Y7u=wV57U(Y^Xm$ zOzaJtu$9AgeDw$m?-8&gimMkZ57}ZQRWBr?ZiE1m(PSphd?qyL|M>0QlY~d!&>x()5?^|%Sa)y*?G`|?pi@tD*45OI@?3;5$uJk<$5_i za6)PpiYFNWl8M5GI({;8>*29nD%K;H-+Y+J-Fu)Dkg)h#yoEb3P%wfuz0DsYiMs%s zHqAXqr({kwC^ibg;88_rbM5~k`KI=y2t?HR089~-p-@N0MkLvI6qp+T+M!yWN#YNO z2m3m9GD7mT@OI=MO5$6|3UR%rK%M#7>Aq~!$F2Q$mjKf@;=R57(O`T%xZcyWtDGuhAe!H*GqjHo2?XRx;RCSt3Ww%@!udZ*d0i8C9u5oE>s)HaLUM41kUQ&Go z!~_Di80ct?`+KV;U`ut1Bs*wnWsUsl5#HOw09Zh$zu^wC;vR(>+PUs!d|l*A%Mz=NMSAUJ{qWj9};rxK}8M*$<1>WHCWIs8QrPu-C(wjY%X4W@?hC3C&&+ zKBMt2%`18ay^Qdljn!;CxR%Hde-O3EeVRw=36TO)acf(B^<5ZY@I^ao@1SAhpnzgP zS!1a3N?VAg-d6CA6?e@}ux9$x{XF-8d3&Ut5tgV}ekQYoc-gRq5mQZO;V;zkF&;DJ zE%4H_)5fQtyjsje1EN{cGnjz(^vHUivCuHT-63ZzbcP+x`D7^KQ_(cS{7^2mkIZ3| zVLM%gArbVfhrcq~62d)T3@PqakB%$<=l}Ijf9uo#*m8=(kQxZL%?0`mCvtT~dlpDb zHlg@A+IY~@q%*qkXUg8@dUHN9D!PscQ>OJ0ybwp}erjr&A9$leUY!bXaLn_eH?m_? z#%XwteON#lHkjH%n^#v^uS$JB*t*GXB!cLw8=(fQAz=^+yRJ~ol_IwUsdoEH3V5VD-Vv+K)bev0txDN1co!E!n5$L6d>SD*PZB zBuzsc!x>-C>l#>*Ne`r1HFUy!tV{fxkM5C0eCtHZP(+}5;(isISDg&zc zR!L-g?8l_|9?WFH@{kB+A&A?CfxfAt5?C8n2v|#U3?3BS7p#?PQ;gxVOrmU#M~EX$ zEXcZUtg)~4Mq%AeTFPhReWSsuH|Xu}>wK}rM$Uh$f^VbV7!F#OmzN~CMzx_=IU{v0 zyL+;Kv>Ii;`R2RP=+&#|HP#=dZMr|jg@Zwy>)SW)ZAhhbC0@?^W6Mlb3dyuRlI_<} zk3AAISmA=qp-1*Mj8tvn>#vU^F6KcSHGv8OnONL7SHL8ynx}43gk6mlONfpDO3?P? zv!M|~`y+b7g8zKCENeQ_XpseBR1=}a zYNTe@`-d71_NW2`VY-@OwA(F|g>#4~Q zDbyCzKr`yS<0UR(gwr*Wfkk{dG%ghLy-`%U6%|eO)ARwIbPFmY#B)wQ%qRlg;Jo7~ zX0;VfU<e>}(qka#zw@&n|JIMcepuhz^Mc8@1*|jv zigd_`zuz5cQWCu+viXwC2L=g_!ys7Ej$GG(q&|ka>kOU+H}G|oJ1TSlBV9yJ9K8h2)KrX*#vJHh5EHPAS|m)Go7)@t4a(D(K-@kO+GTHUoPR42A01+^jg(4oavgc`*)>kAw6$-I9jw3E)bmM`N7iK^F$K zx`_ONkpI?UXM|hKR^_4yuCZW!#U+z+JJFZLAVXG5LSgM4<9CMW38Y2^+xuv@YS}6#C<;94 z6!ID+=sA{^&%xcv@6IZm1O!gxFY!&XEg<3_-JU zN}H%?i{~>O$A*lZJyYwK%J3M`iu0M`{0qKh)l_i<^z}i|$Tq4kz)H7{%x1`6NSQA? 
zwYmi!_SzC!NT#Gt&ir1xS8MF=*Y+cZNy&}%;`7g*|K#h>C&T_uZrQ9gS(?oYn-M98 zfhe^=nl2Zkei}Ebn}8d>YDj>HfHI&94YK@(OU|@6LaXCZFPB>*^-+3LWe;U(MB$(j z-U}zVMPg6pBG-MndScqb)4)nR0(cUa(4l~EWWdQRw9{-#$@Wp_nDqv^0({#8f>P9o zu*wre?LRfji+dAkv`JNC4o_wlJcRTy`f6H^&+2|3wksikKjwQ zX-{ftI?>7yNxJy({=VHRlq(8_)%L37mWRz%p)3MoO<-STa+O-0a`;Do__zO$fAYU> zDwtbu-nY7~X-lHk{{CsVheh7W(JNOE1dYO+WUJTExx4mc`SZ^2{I>YT{qaOlc=zN` zuWYgKVGC8Qj^$fyDDB!D5qfr11c=Ou_1<@ED0Rlvb_}!3Y%?l80a2t>n_pIgiY;xV z7@A_qK#m%t#xA*lgEVR^()PHNp$VkomWaT41JeirH;zYbVx=Jqt03W}=m2dskiXv& zVuu( zl*2!P=RqXM1Hn-xxCAliCAOIH?C%~hTqi*gs|u{z*`2A7DD$bHRb6Jhl#wch3vFUq z1ucmp*CKwy!ps&qefo}fTWW>eWLmRjwO0u*LRP7;}s&*^?b&u9IgEFM>y zBALE2rKA3EJegO^HPkl3Ve+DKCDtqx<6(%gFa*o@79_?8>oVn_MCt~01n)$Wxz0LQ zcej^UgQL$kij_uI1&>7tSe24sA=dg4v=pY&?RQn;#V3Si5<$kZ2}#5=MLa8fV={`~ zllBJPDCBDGU}?K^t^Adc4K^#@A9OsBxL&71A*rB>J=kBeG^nc{WePvhIj%Yl6_|5k> zuisvK5r2{_Lgdi{nQc^MQNUEzS*6P0AGs{F76U^YU@Xo!MHt94!}3<;5sJDZj5CqXo-TJV$6s)#z#;(?hL`9eJe?sBWs zbQ5xiaL;5#hqZGn_%YD2LmET`u3%J2zE|*OoEGh4NR-SV1eP~q9bK)$Wt#F^&n12B zR`!jO?a6s$o%uo(pF(_lrQNSTfBrB2#jj>h^TKZI;r5j%^Xb9ip}qRy>f`;*KzjMn z>DkTorPME5IJc8yeP|6dvfV0HD|&7-35wWKfs>>!5a@A+dleK(8{Ly%M||#jU=MpE zw3qu8Iq| zko|=G9Y9YcQ1Hb_0_*{eTxKD3C!A=)?9>*eCy~{o-R}mST8`elc5sqi=kcPa)dNG^ zat76jOLKK|H|Gz-xmsMkvMd!S1y`OPSRNGUlTkYy_p5ms!ffjtO3JJ$5l!XSAuMgq zCJ**O=K`vLtOYDepA2u@EjL&x6`j>mnf-je+8D5BEgu>O4T9+T2gzx&ao5r~J#*2t>mFV@ErOvkPLTJHED^ZsVU zZ_3`F>?eN4`pgCu2>rAj%t2-Tev~kx&hajWDml(vbso!&G8y(^HL35tT1=PSXo$H{ zEma5scQ@_%L}9lRkT*;O^k82iOPxnVhHbML^b3bbeq7L3RhrSV3|b+3?n z51j~MXr-kD6B~-#Dd{2;mPSL8`Hzc~c9c>UOHEtK_=44%EL20rYvsn$Uw@7hI`EHNM_vG9%%Kjxt6~n0`{&qUO#Q0DVDXmR@EZi3kU>x)?|s#SaJxw z#8To9-z0{=R*H&=ZFfGsq}N21AYliO54wi3Il~ez;=~k{OPT>jy;p8cW=z<_%t>A& zYw`&*af#WRPCL38qREZ=fmk?Y3tYdK^kpH%SX{vrlocOz>h$?@{4Z!agF%!gQyqXp zJFJw1?JS4mJFqS7)O9;pO}nI+t-KhXs=UZQ`L|ze?NkK>MGlf~Yn!iR&xxQ4Q>vVj z$G!jQfBVHRfBBuHtZ?FvuCN5Af<_~l#=JZASeubTB1JT0|AL@!1=C0pq1LeAA)hD- zn>|T|I7rFyXDd^Ys!u>dR7R8DBTM#1UR17HKD0RgTffliLB^Xo5;NCFih3j>pmsY0k)>=+ZGny?RP`yttjAy+Y@<@H zh`E%8%#261Bo4b5n(P)6A9F%T6Uw=jtt(*6jXRk!&`6rK{TK&vc-sJ_WmX~{vK@nC z1f)j@U4~MoSU&?77RnmZX&dpw|HspP?YOqB2Vy3RWU&~ERScC^-tio@WUHl?)KDFm8%UaK<6V$RXuKY7r^LM;|+a?UaS z@#h$$)k>-}mZ$X!A2=OCZ`#M!0}m`}kx-=SEX-UJY1l*NNE36>>1?a=GpxqFt;JIu zXe12xuoSSC6n&JA3kXq^L#@Z;Nw~;c=NOl9kuj)FQF>&;@Hj>}Nbo#@az&WQQ_zXv z_)xTj*C9`{68j?U=};^pxTRiN9~zQ?F;B;&2JEy#n`L{VD}EARq*>ZE4UuG`%LwST znk^JktA*Daep5?n=(R@O8yJry?9O`%U^K0%Fx`wv7*Mjplv#)*nA*g?#`IBI4AQo8 zFPGuVpK?6bL2`$>^BrDG^WF-8v$dR8w5x#_9!~QXWIZ`Ocl~JC6$voKRtzbcSE78)uCKqEsPg>9b1^B0VsnfP?ku2o?v4YRMXoqDY>09U z3X1L*02q=m(miEWB!eXJQ3BA=(FlA*a!PpalCqMZ^_ZW*h>T1#Ck`6aU3MEo+}#u{ zaSt9HR=^pyIxj|(IFm8k$4AF6X}o)TogN3B#xq9QsJtAYp-p z_w@8&w7UXj=yZE~M<~8|^M-E_!b4!_t7r+_DDCnBWac6r78`h8Jv$$cQ>~{VH={fV znlCXAcO_pk4+Mqi!)~n>AtksO1sps|>8&hBY6CV{KWPAqsq%ak zNJ(-d^0ye~3mMdDhp!2g_#tU^9;ynBG=id-FMPwVcAe$}L^_*DX^8lTd_U6&`Ty00 zR0NpBGonRePVH2KY-1CnM~U^Mm)vCV3w23TAL^{YYLYUC(G`o@&zt)e{UXC;Ta2X& zhi{K@z{t=Yd@Vo@X#A3VkTS${Sr(L19jDn6cItjawX)3^XFk&LHoc7SiVms+_aUp`06LEap$XF$?(9VNfIw6PQwL2%EK>+?aVEYMYRp{~&YvD}y zPa35l6593;la2Spk(L!nXH&jT68rMv@bsjP13-HGSI8G*ARBp^FE6i{aZjB#X~07V zH~6T0H_3RBma}|pc6gcfw9%G;_en#W<7&e-$eaDe^N1gLdGDk>SFjpN`P_bJH#>Cn zxv~;&CL)3Sz_INP2uK2aN~Qyr6F-ur`-uhEww>-?whtk8pMZ>edb!rW_*2q(kbv7| zSdo+dU;#b>P;f(y$%7&;iERAIx3B}|7QM#Pp_=48kt#A(!KhH2t0W$C1@P9$>OpsO z49@v_rFxLfDZJ05mglt#Q_kgi%*WkwzElZEN_WF1Fv&BQZ?`(vH}{Q`^Ye?V-f(nx z{~(~rl%|`tpE_9G#pR_R;owFjR85kiqfl74xQ0CO2#!V9$VC`(Ma%>h=HjnZ%F($l22Gly;?htKgxhzCO;0EF>5-0;(T=z;zDm3Xiu^yZJ07=JMlhxy`LJXLf^ zXb?OjI--uX3IQqk$^LuZx5ns%?ly zBReJ@+FOh^iI() 
zxtDKwzw<4-A*ys|_qSja?YsXO$YATeKk-MolH%gZS8}D#&bWEO+_@k$9Kbly)6*KJ zJe!(=qYZWqx2G75hV5dh?w=4huTF%|OZqCZ0M%=ABFRYN%8sNy`$gTu=)IUDK@w*s zvngFU(xEv}!lOOQWws#xM&|8k9KG*e@WR$G4UoYkq+S@&((yJbB31?_Nfe_o>=d=c zGX`sfZMg`w*N^EPF~u_MEOXQ3aheJlxn$%&Ak$5C zKSBfO(L4pm5GTPKEV+7<7QjH0!k?Y)Py|#QGlU(1n&NygM^gOR+Tu7p5p*ordThq8 zKHR;Bm)_&}e2}6d)bM9ldVWxF0(T*?-NACdLY=5!2(b~B4SuDp`FaAr$R-30xJ#8w z&^`tM;6aeg;aOIsq78>ard%jMf$Vqrf^LO9K}68ATBZov*abVvxfl4>xzVPYFgizx6*@l(K|xTd8gio(mlLa06fxch(n|deYZOm2Ggaw!F*F*M z0riKW0I2=p5FLS1$b_Wx738zunD7WJ&z}Z?{ct_+5TVo3+6 z9hS56S(lRiM}Md*$uX;vwR%d4y||S8K@EQP^7YlT=h(;XowBvf#rYMnaeaM5mYg&i z+EU!zwrQM4#|`WK{Q2w6&8>#uwQ8MS21pbLas^-LJas5&R@>|J-oJlui`XZJ8+Nam zCEuTwdkRBDqBb+vt}#&T8noev| zJG}BB_(Ms@E8*BqwBtIc1_kQAij=Tg^sMP@BB}&YWMqW(8bUm231N4^-GjeJ+hEZT zN}shc@1DB4VQxu5#*F7^V8$_($X#qbW|T?yaAjiH8L`|W-fYK1x#oEGzy0Rr*{PV^ zy|1VigwG#7e88_Q-WKfIm_;8tcQ;}0dQa3z#LzJo552u>sSIF@QccuTCk6!hL9`oA zjjY1x)bI&20tYYbf^O?Xyu9H~2Rhr_iA?bEp7cf0R_It=X0-?t^t|`E_faL9yeuZ! z8uAD;H1Y{!9!$J&7rqZ>Ziri8rKE%tgus(TEun&z61DdZ!UxgfXzm2%9gHR242C_v z5%y~$+XO$9BIOY>>F$k^HE^JRz#`%hD{z?uLH!nEJ65jpkZ6f9Ojxd*V=h+O#B@<_ zT*zKDjSSTjbhEfgwG=e`10!jB##mK4{4|Va_Kdw{-_e^La z!gq*=)mReFlc>M6LpYbvH>~~Fu(;} zshUoPJ__J2)0fO(BzMN*DM@rD#{^=LFttcJGUQDP7`EHN645eW8!|~$aS39?Xy_V| zX4N$!z_YnIyOmw>Gg&61Z@tnL>VZ8sj4(mS#kCMXFBa(US4QX z@%U9RDqTB2DOL*azr5A@N|)c4KfM0s?au_a2xPrl?+iPFzq|;#CK?0SHT!Hc)?Fqy z%h8EeQz7wV-V5^Co+EU&UQ=PbceJ)4wFm+D)rn$8;3U76eB*d2G?*zffzZ{0hKz&b z<@gJ@a<)asIwBUl@#WpYz+G_k6O<|t#i zShKGP?P9OUm(zw9)wT?u9`_x8v|*>ovLWv8M=wr~jbigJ*j-q3Ac-R7V(5X9mp}aH z|MH*x_QU_VUUE9=CKr(`J`gq;2ElTTsFXLMLY5yNTddV1mTkWBo4@@nmkv~9tSAoU z`-<3Dg&3Q*m7H@OG)mijlL+}QqaC*nMR_uV~<+?id7H0#9k2)O7X)RcLy2R1;OE+}MfR>V4O4YakX0Fga;;(Ngt^npUnB z7hi9G``h0xr*f}jQsm9e4QTI9AF^>xSrTm7=?W;;nghEEF`L!(q7VWMxFL%VBXGFU z0+52GC|#=Wcz~*l1sOgv-y_>z*%mE|gkttdKRK&b9wkfO_Qgk(UJYcdMi(Y=>kH}D zH@Eka0N%WL_38bmckkYF_p(cLG>w2zW`26s&?N|W%cXL^`tirf+E%U^CERL%`e)y~ zdHtMQyIMKo4)65ZG-2h%1o?VSBnu_#w0*Rl?oGR`T;cHS^wsG}b?|pLxq`;PoH=HU zJ~2NL8Zq#AcgM&I$3;X+qOQveB>m*k1_zOefAv>?MG1_#;6y&>Ph^_zG;*R1Bn$}9 z;hnG#ngTB4l7ilnQ3%(7(`EnwAOJ~3K~(;1K9-v|n>XfjIzFBsiV>$K#zY^&I+bvu zbOc4halN8ekzU=|eZI7ByFHHKvr(yk;cv zeRO)1m&9l0g=x-oDSTeA)(P-@3&@8ZwhHyv+#pz_Uv-qI9G-bke{$~BO-uu~w<$%R z_7eW)srF(QP>Vke=cIi2T;~&z)R^2hoHyPmOMn(0nbf<)6>mH^DPCQ2bR~u@Lp!Ef z7@^^SIS?R+KM~86cgSWZ-un}$zz$J{w_j)LQ!^X6D`XL<69;!}hJU^5@F1muk{UyH zyReM7a>MXO5wuC?SAO6RpLB2CQ&J4E+%>Vavw}giPK6zWr**~~IlOyK`cCjgu7EWjS^*67d(@2%<5jhu^7ea%QQZdleQ(3=~4N0v0_uv2E%-+3y zuMoTNU9_;xv7)A}&ZtYh)o8?QN;w!bk7qAmajEmGiSqf}N=9<-k_dtQLRadd)YByH zBCA_RJX1pi(vDROX$;czE1Wr!JLK9qJ-$}1lEe)odo11H} zVy;;&?t`&Pd1ol>gsT*PFPi%Yb&Hww@BR4W`+xX{x9`62@5{`jmiGrETh?4oTqmMI zTPEE(|BDm}-9b?3GCVEOP=r@cqnGYL|KnM`{sNY**B_MB(Z12Hz&KdyWrqxjYG>pu zyqp&wWmM$LYENp#Fg_>ii0wB{Ib;@`2sggdw_o-Xnqs=M9R#9f4rY9^qK5PR`YDU# z$+5W}6Qh37-{bvq%@5ISM>|(hcZiMJu)9?{dT=NUwE(F=i9@0T|L|hX5KOA_(k6VY zkMd?^_6cpv>^9D3+LOV+&q*jtrj9;?YnT5X6P8uC{aUO_eeHMBynmj=N{w`c5ldmkia6%LnrO{h#pF6 zF*nQQI#rRCioASKEin&Eji*`e=Xd`kV6eX*ji#||?8L?( z6scyB77OPHTgwPeBma6f(|%w+UcY#LuFC=M=ZyD02nz?tJVlR>cj7#`{gRrnyRToH-JFr=Tu&Mf5~EI6Sz^2SDBr}c_cK(jcJx<~ zxJlL8ZtxT^*u`2Tr0#}I&`eP|m>@tq-{|eP?(Xl+zkOZ#`Z(I#FN>t}D)Uc@JxuL$dH%-V_Bg`1PJi-*T+LnLVlTKYOG z0}iCt$PKnrepjCymg){y4g6T-7T<`eYx=E+?(FQEcg@+l%|YQ;YRbp$5^ND2p$jpv zLKuiN0uC}qRAsIoG4!0~Mp0UlK|7Aw!=L?B1Y|3|(bdJi|2Bz#?gZ ziK;=YYA(=H9KK0e|IujF=Dpaix%;UU+v~56Z+3ciG5B=*@k8`n5p2-UoX&1CdBUJg zV3Q-a(Ycn9XI1mZN0T{7=+AHe@nApo{K_1#wR~9~59w$0(Wg#xz8Dk1#!y*!cNj$1 zsGkhR{QD4lDYswSS1%ySAK zU4UXKg&MC;hr^%D3qBu3LXlO;MVoYBWB-CMPUl?6Zy$0s;YxDu1GY)Tp62Q?P6XD?bpwpm4 
zRpA_Hn(sVy6d{e2aA}v>Q5Y9FU@QG1Qa!`gw1UlDU0oot+dF-)LwO}ypaIufDB_${ zfe$e~-Ad-Zn)DVTW|q_a=|aMsTms@P^u`DoGB%fK)r6{Hxw-*^<3QpV)QZS;wrq~t zkQL6gVmfCwbKAVe^mX$~?`NWUa-w$dvq@0Va8FgAu0$IB0}F78mBEBodz8!bHidVC z&x50nwMu*E--EDVMaqPAG?(9{MQ)`0O;ZtW)Gb8VB5?yARF1+YY-^5QNSG{%(+p2l zC0(F!pkKWw5FP9wl4)-ukcxw5wmbF>++;VVC{=fNUr@bVa`@hVOP7my*~%YYDn0y>P$ z9cX$mg?iRFzoaQpxO|s`jX@wWkG5}64gHsYth?;(7fauL_uZ!t?;alRxYNJ+_6-nm zWL3Y}KRG?)Q3X{2zf%5iMVr;vd2jivAHMmIe{=Rf{Ez?BY%{oe{#u7B?MASRBhxxc zwY$4p{g1_3Xz0>Jj5L7Tf+Z%E|NQ=)?W(pmee;2N#Z`bsXx(N7nrzHN5o#M4Ol6RaOr!^!Wy|NirrYx8ndu=0&C^AiMH zyn6m70z#b*MktNEacp+O$gkPOQ$Dw#F&G)+p*$oP7te5H2msZ!21X#d|Yb|WCTf)2xb8g@COY9{_HCS1%QP$KuG>Kl;bI= zHJYSFdC+)ikU(B?rkM!2rSe-Ey;SnZFpM@9U?O=%tqFD?G@F*UEe<0aRc}2##O{`9 z!wU+M%y?JOwTEKgqEM(lIMKlYgnx|4N8!b*mtVeq`T3W(oS4VeQ$pwIX`ot-iO+>X zD@vu+x*t!*bo{O!QkR#Pf{Q)QpmZd62|5jWBcAsNbCwJDoq2+C`D(q<>whtu19dBj zzqz^gXiVHI=*a#Ti#KoHyyY86!Z1V*CLjPSis594K&%Oll(S_Foqi_rHX!qc%XQ^9;do`Nf>1%&-c zpA|k)RK{g!!Gl$p#k@CcruZM8ImZ-Qw9xggD&w{WC#>eX?#wZ3Nr`1yg({|6`w^5AcnAH zmZwQuz7n=;SPKzl@`RHn!H|&50iUB!tYEZ`x`C6%tMc~v?|=J2u>Z7izD-wdzT7)G zd8_i*+a2`;+lt)tQsusV|KZbn1s}5YDFU}}yAN;?CJf^hPSs8p~qf_MZB|sPMLy*Aa zJu>rk%9?8g9z%}`c?tB9$_yTWFZl#6fG@YYC#q7!j_It9>bTapZ$AOLu+Q1)nXbXN zch{+0`q{H*!@;!OYAd?6Bs$h(q%a6|M(&*Rpx(W?Jm9U`%vy93JXeo4jV%YFtjSaa z=yCsW!%XHqz_`QIBl&U&$%O42{@H`Vp;SVO-guJ7Y@{tPBlPWRQ&144U=D#C~d z;L0&fP*&b{*QiDnPLWj_di{h`jS{&p+yb9Ce;C_k+1|^PSyjS}(nJkIIBBHrJ1I$k zkwRWRoQ($2*K?n4zLMVR@R%ub?KNsY&~M!){y4g9

$#Bi{~%Qx$8VO1v6h=y^s|C(=Ts(+ld;!s)CyBs@HrhiA723ovUIUm5U7phW(*h z6ka$5D3m zWRD314bVrc*;c1gIby-D*)j6-8xBT_ps0YPfc@0V7iT!9V(DUjuY9OOXb-ty9EYVd z$;l=_36-6kUh1~g?B4Ou*Q+IYN2%?Da^}~sU!$pCetMe}^hIy-8`)l_O4sYQe|sDC zAK->rfdHnQ+9PE%W1La%s+frzp(!GTq#CDvj<_a#+yNRQM(*L z0varC<#T!fXxo~Vz+S+o=eg3K+=AG{!jd~j&o`f7UkIOb)y%oQto;5=w{;(dio7ja zOtn5k;*`wogZ-rRTCq_^YnEP8^y{mYg9v}*ffUp z^7WhNtJ(SsTOk{wkwdKe)Ft?FNd7D|lPn{3etrlm)Nam%pefQ7=%rqCFh=0z9Z?nxe$4>mbb_(+8{K_?m!?Ma@%v2l7U zG}4Cu%JgtNp?a+6y0p4*|DK;+io&X_@OY^NQmg6c#Kuv@HT~s^A)cyNr6CB1H{+xZ zC``&!dkimHI3S4eh~zQWDQEE{>q{)V#LJXKs&@xweLi=(kje483|fG!H(@NZU1(cd z!JUZ5u<^+W@s)17X^z}ULpUkQ`|*oV1QHbsZt)J|%)V=7Y!ZJ7Urkch;mAFRHrDBr z>2Vn=Iyha7nbGhr(nVk!O@~tBO_+g3tk-JR8BXCu8}$>NySXGF!dX(bY-|o2UJUCb zT09hdL6h7KEJ@~x`by*#GN{2O$yfWWf&l6SI=ssAA;Q;Hg^4{)<2Su6j(rLzQi@ z){}Z&Nz>;~AJ5NDC0Pma8EX~l5J&K0-fA-o*uDqM&AZ!MtEYd#g4Sb88klKRx^1_c zSo}g!0Nq>UvDF)}*+Rq}!~_eNEj5-3X-U*i6-Z(&Y)$D=>^>02U&3NfLXUu{BxA0T9+Fb@i{FRr?UjY)n!6F*E z4_)I|SmTr(eH>^~9F8>H9H>FqO7GUC`{9p!Jxq?!z(#SZ+U&g-;1~`LzW6F2Ooii z{9_=eEf8Gg9wH7jH&6$oNRG?l@!=kmvKYW|F~!ct#h<((gRR7tpPt$~zW?IIOM2-+!opZ=j(I<{c=N=`=t9%prLAgX$lG71ouE3kwUOLac*QU?;(%HV-b$l zA}RC_4=RhJ#iX+~p&+0g;k{h9Hykfgneyd0cXmBF{4_i2m%i#CFi#yiR#6;`DxKO!Y?%hd&+3v3^Rcg!ohr9cJyI4Kc(lbslQlFlA zx@Frku0?JT=fLGp3FZ*#GOZ%#$fvlppIqA1X9kxOw{gS;G~W2!!BMGUmn=uT7k;5R zAP11p^^*qVnn_Zh+@C4vx%_4x=VJR!7;J>etu2TUIJ~^E6hMKC-h;0*5L00aO9qp_ zr$MFgq3icmC7@4lI=+~-+vlEq=^2Dd5GP$6vjEwCZCUdGI zF4}8{?vS)i#GW{sgj^f(ieodrLri8zi&Rl1!rc3juZ7dS}&W~aeSES z{f-4VatwyQpizfyAW5kVOai!YA`o){#U!l>NFqc)|DZZyRbs<&Dn5xTk{3|>Byp;T z8)3Hw>XoNkKls``a8c`=A1>od(ps#C-iWr*jIUUmWqX4RTm$YTqN>CGK+oeRF0j&J zNo}d_MmS4~i9Md*-8bpbFhmBtfrs2*c%#gBa}4;6(YRg#B8Th@?hX>^u{Y2T!2oLx z3$s-p`h8}9wpe01KoTh8bWpODfRC3AhlLUy4`c74Hbfn3GzW=)Tqu#;f+q&3!0`OE z^y?pu@Qe4nQ{WWU=v;h!Q<+Tt>`bFm;iyRS&=1_;*#7t+Q#`FS_BWYHXPDb$>ZNKS zO(z^~$LnhTsCICinn@Dc9OX?ItG0Wi`^T0!80FIwOiDXuVF=~U%h^CHRw3VfvASMO zdYwC!HR>_anT=K@!muVum?y!lZzIK$JXrhUira|66b^)}`IiJvlv-;k9|buH;^+W4 z#w-BDp#TcZfg#SaBr?=Ts&z z30cW1k5r3@J-Eq^vT#_y1X!>FLs0cA(dce|@*Vkvv;Y%49djs`1-G+^#rD-105uBQ z5tQ+(j?a;K!|&?7BU5ficr0hieTI@$GyBqNbisaL0c`fYyFBb%HFan^bRH@x?}Q- zh=6}Ut<(~wEykWMue~!4tR|3BTjt8K?0x4 z@N3M-x_#aKe8qF5E=Z%4%2hos)v9Qoy&TqlG&b@hboNN?O>Py(;FyE4+rPY20hlDr z7jqTY3=Zie+VV6E3m=S3gU~-eiUuseI=zSG?%-&#Z`zI90d*0v+cqcT6&Q&eD>&l!j`AousHQ}LAD_6Z~DDmDm{z{KP1*JbvF20YZ` zd``mo8O$P?-w+5_IdqLTAmfck_$#3K=Sr|jm=SHBw=ePsv>%9`V;VBdHSEhhP7D3) z<>cVSi1W#}HBx znUS1(aiI#FL7?#d{Nlnxg~y!Bt5+|b$iw4scR-oyAn`9~LrWt5lcX@$MZ6a( z0>1S_U<@r`WFbEw7--``bR*v6L?p!Q76yj*u5(_vh0X5dhxZDfv1Z5m;>8P}?=akD zK7k32&o(`aP2+KnzR`#icbUJs$?x!oiau4193^5HrGLH-g)ERfum#EGo{p3(gB<{e z-2|KEbKZSU0mSsNA(E9?l7P-q_e2YDD^Kxe_r%1KOBBzD4rUm>!N`M<(qb*AjpNYH z<3sy(D=K-+=ki?eQ339Eu(3AZfQzzP&d=(+n)j#A z(U!y!1U}a{x1Op!4+W#!pRg4pu{CBXl26)VM!6VAiaPVeL@~&my%D{_JlgjC>adWN z?YCvdr?(duQi|p2+(O7;;?VN(afhN7z~yBNKKJrn?JYtE$mC4Q_!;VUM?DRBhS@}j z4pY*;h$Ms5(NY#DU=v^>XSl@8xvORI=S0&?bcIMO2rlUwpNl}_5;&J{Vcg_d9BY9~ zBRWA?zjMJ@tN{_DB;Rs9ie}Ps;?p99%?jn(X0P1rZ3c_{_TYGvFE#svySC}H(e8x{ zyQ368qd8ITy48Aj{du87b0PaQ8a#G8L5kU&yR@VT>p`a{82doRT~><0rAHGjJFwu@ zu7wB|=<}U6EGml-Pz3bw0%}qpkaxfOB2w}pP`k`_{_}fhlo&vfZZ7)+DBRpV0hU$j zG7QWG{s0u<3|o*YjKa75)ncMVS&_S2gPV^q7p4ddKo5@1)m$u_?1oRBIm`8BUvV&* zyqpIb3b6s!bd{%$wAb_z&)rnL$PhEC*G#R--re2PV5%n#V~x74J7<1+)ZpRza({z3 z;DPZ=y6x#gmOW5kTGi9ypMHA(U;k;XesrXLuC|8!KgII#Q*Ws;KT5VLH0g}N#d~YE zTKH#YUOQ6sp~memo?kJ|IS4cnV2mV?g*^hMJBx%3ll%+W7C`4=a4V7I=vW%Hp?_7wU>|M^m?RAbE{l$Mdo*1|!g zU*$Tp$=3H-q-0Vzzxe0>_OjKPD&_*y;qQ$&gXqt^trUD|M$%pFkBP>3r_D&vfG8SS zaRnqijw?1yX(bJ^N&-BIYdjJDI0kvS$>-*;o*nj2rA0x5-s)e>VgZ!vna5X0AK_bxND*xG8P|#X7Uh~2R}CN 
z3wuw+MKS;YAOJ~3K~(eF-b+)_b!Zb5lP-8zRG)7sjXtVc6`Tzw9a7Di=~0C73I4&s z9weWKZcH^dDLjB44yg}R6}l8_L7B)yaEqY8EMS6#$u!xm z9D_>Cc=lLKltCu$xqAh|c_=(J}?F0{!MeEL;RLx{mHawapgy&=$OZa&|K6NEzssl;|W0QSDZjIAYOIN-)R% zVjwH8bae9d{U?scZ?7tU{73%s9lW-H-*yI>o2Qu zbnNB1F~%E)xE^HgA0E(@9%0y#_4qt*2zB%!jQua8Lrwuj0g;~ua>EuQClVfFy0f7wralOiEKsqu7ZM9XQfN5XC=+JhMTaZ9O0MJi z;XB&(L=jAzN+wB|j?QRoLM@4eLn6QcC?R07M44^Gj8aPp0_J(#O0|Y6lC17F>ng=Z z=0l0gR}erNxvgWVqb z3{0}%Bv&Q%N@ojgtnTGSL~5Xu_TKW!MJQ(Wk?~4G;#*D3;VOmEv6TuMDuBgU=90D| zXq$p*F{=CRf_Ig~(wYXy(4=*PxR<9;O1-BM@f&Z_`ZMqGMYMFLQ3$yg%GV^u(1eh) zjNv?HIamp?1f;A{YExpXg$tPYrwEASGR)9kCJbjOb2JJLU&ZIS`CjEIb$$yoG_un? zJlkllpK`NEfKe|$S&YQ?ofjgPz(&bkmA1eB%PYjlgHB+}26iygb}42$8sW0Nhdc2Y z8C)bRjASuE{d(g>EYJ|PSLb@7$kD*R`Ho1V@?_1A6zYX;DJp`VI`_F$>8yV8?dyyG z?fKu4oQx{r94{+QkbLY#kSfP}_x3INga)A3 zU3VcoAA`PxtyYuBf$bMpmyFqNr^h1v?lPjy2b+>~A|1#rYTvMl7420riog z^9ctUolQ>-N(IjtxkB4ZQVk=jgm&Ptlukkg=e@sw&@<%uixV=Z1$VJBnczk-8bFE~CD}c#G8EN`1wA5T1L8DDVXEbZjP{yoPvRzu;_?B zr-!eVka>gAJfAsGdw+9t2c%N{PAVm>ujMc8%_BW}vL9nIoje|>Hcj|FpNWPm^QEL& z2#%8mVR*v>#b@+8-()1E#tTfKJ?;$?g6GN@2B}C1qvh>K`?={P{NgxvILxEl8xDpd z+*%15_Y+x_)a1#Vkna->x!u;G%NVjl}bOJQt~JKH&jMDW&e0%^Qzl zR^4e@lQMUt22w&ay+PG6q$?b0%k@d)Tx(rsVkR$!$#YdC0N2~x*~OW(K>=PZFlj;X zebnmD_GWUf_|8oECM31oXMRoeGQfI@huLf}ZW7Y|#bDIZ!A5|YewH=A>EEm6GaG)F<%Km@nr4w(6gYn0} ztw7eWQ3!vzABI4ZZG}REEu1wVbu`Tt!wIXzaWc@^b02Q4`Y~E0Y#I>=jWwb|XbCQQ z*pOtR@D%p!LKVdV$G@nmf6VnBo`5lxmLOQ1EcTn5>}>y3uNEnrdY{izjcU2dh@19o zykb?l8Jku$SsdmIMHA+4wawLHwW7i3qQ$MoDHcJy>_fLZoX@o6)Sxh`2%{|uZL6dR zDX~F8FC3M22a-wZIc~*2m)LOhE(6LU%V1j0AK-W|pPjP8fBryZ!pmrfLrEwfRsZ?F z`Zrg<{)5Th{#w{i&!VC3WNYWkAua0d)S%GOVGTvop*_x7N}b_tkQWDxD%cd(P)tr5 zwZb|-`g}Lv>=h~Fgs=`A(-}L!pkZ=YCs#yLt5?yLvP67bp$0cG+uAYV_QE}yhS?6` zfm)a~7%q>RBgWvOJrfC^MCP}e^RD$HsVbEuZb>h`so0U^^ytPP#bW6JOK2>0K={C) zydVa_mY_!%yStZPDOeT|WT|XXf@0{Ti2<|-4=V8nwX6qUCjVe^IH}QWTCA0ILnEOr z@Sqeq7jd^=a+w6D)Fbu#gcv8(N+dVJV=YL8|q)pn;xS|1Ix zF;Jo=dt5BaWK~u=Lqz{sk4|TmLsog1Y1v7dWCYr@VNi4+*;g^Hns!+0RF3h$L@aH-}tVFkHVCb%KeHR9>V4Znp~F!aFvk$THE&(`BLk z6a$lluQ&*vI$f>OG4w(u>gOPX&Tod7k}F7(@5@TYgNGz;z#o;@)m!FXxI{R*(|e25 zzM(erR8hty-D}E08eSuIqe&sAZZl}jHPnwCKMi?LzocwVjprB&+AWH@RLo?wch>Fu zjM|Rvy7OrCvIL);LT>udAhDl_7VFu|M!F*c=boTRP=LT+#RX|QfuND}Eoa{eC6Ogz!AY-F z*yD`S_R0O-)+Vtjl3!(w>NRZ-a5g~PjX!a7RHn}(#_r}M6K|~)Tlxjf?(5Bi`WSqW zY5-<1$r@PAv>V|#cjRXE&l_ADia2Q%42~60K=1Npx zP5Q6~P*!{vVA^%SO}JqKiXB)2*Mp#pqz%>->1rJG;ZH~$uWT1|2;9~WbV$*TY!LJ? 
z6njVSF4h!?JXU%jhl*nWeydV-_Eacg;MLZ^o$?@a+FTwFR;78Wv^}Vf*ZU8HWq+Aj z<}3Ymwlzsj(xp}Ys54Jr>l&qssZ{T;_S&=6gIGrCFjuMGcVd#K1K@0EK0fjaOX9jX zmwP%xtM+*H{5car`69eIIc?yTQu}b3>JhUha{Ogx>%j^Yd)+5+X;f>x(Hh&I)Qf-e zN5AIKf4IMe$K~Vt(b=UTWtZRoIMxIsn^|RZx4QJxsmqo2WcIn)9z^i8*qtq}P0?Yz zX0l`MQ=R2R7CmdVeH>1jqiJWx&|mA2{Q2>|dG}zDJ-;g4hLBP22u^}t;gDoH>#ib| z96b^Sni1`bUilYw>po=z0i~o5xz&cw5>Ck!GT0IbyGVLGC+Wpb5Ho@;9059sQ^TJ< zd*)cIOR4GW*ROZTMgSrcQuQl?m1jXN(Ss?Gm1i@P|N2Kz|El?PVHOZ z3{Rkk>usgcFy<>{6-^NyMl~2Vwp9;Ux_=bs2$3lV54yUQvrulr;uIHQ;_N)>mdOuc z?8b~U0RO_Lb!4E1M}4V8m>8OZE71Xk7Uda^sY7Aet($F{8Oh6Q0su?FfT8C9Idv^x z&B!p}k~WHQ%ZcK*zqq)B)wCbtwsg#_4-bKDUTJlz;#H3zIoa%zbnzel@x81wor-KT z)D)SV8n2bgK7?}M04!uGdQVD`il{)?k=WzCBxRI+ny+zkDt(b}xd)!ww}3w|lJR0) zSOm1R7=-V0_e?Geiwea#3vOy~luXanvqwhnOBdvjoO6-!uz+j#pObjCeAvT!$oZ(6Ye_IXQ3wWDbIotDJA=$uu?|T1G6g zYBLoQGc=>Ez-YGIE1rzEhg!Fc^%*&?=F3F`J~+;e$s&{(@xr`EXs@o#)m#;)EQ*VZ z=Ok}aMD9AmvZz(FKnx%PdGMhI+8t7$zgk%GK1PK2 z6$Id+mpuYnvT;g4gsT&@3?CAFv*p;Mx-lS#^tp(^LwM|6UI0)iU~p$&5yzoIa3nJm z9WnBQVqV_hoamIjBCZu)IVMk01%J{v$zyDaZ2<}3A!(EdB~mBKCbWyt#uyIqscG3{ zA_eu(Hsqf;Y>_Y%0e=1C3*95yT1qSy`h`S0Zn=V!Y2*Ct<+rc271I0#x2HCG4_Hx- zTw-I+fBE@d^M#Yf6>_i!wKY#w*_Pw`Ae2Sa{YGBa<^0ZOkSjAMxwTuZbzMY`Jh^;)b8h*w~Mpx;qUF&UWj9V`f3QrqEtParq&4pKKyTV?K3?B`xxRyxoRE*br!Ti(w$n-7gr(aXvX|tQFEuC!jf`i5CuiyfKa!777Q>_qJZxpTQY@9da)FZ(!soXl;LS4G73kH zb@BMm|NY+#B^s4@A#6_y2q$KJX?H$V}PQkgA+>^Uxh$a zmq4lm;GHyE4?GJiPHMyrW0fXSEL;e6-`(l*IT9|QNH$xo+v_j#mh`;n z84cVM!DiGQEVtwSWXR3-=FRtf;r-5Sqh3(>FEyKUT3k@13KsQvgIDw7k>RZwWm7cf zl9Dw&@I8(9m?ybXyt>d%adLH0d;Ri?bj44#@#J%bB-|s#a-k+WK+8f|06GaNdz8zh zj^Et)`la0L-~9atqk6PLMw>jedd`GLMk;x*CZr}n8(Nnf8tQTs5`=Pq5>cuD`DdcJ znE*b1$S0$6R(?E-4efyJmoMrlqX!HyzI*qMn~ru&*syOXni$CW`I%%BNvPcUP8@@y z=}ObDH7@x)hq5cVQfil~UKq!TD_gmKnrcnVu+Q=w=3_>u{7<>v$GRap&6h9V)M`4I z(Guo0*(ztJL?EA$q_oSk3nrOxxTC6{o)G+We0^l=)vDc5f8@~f$M1=hkQ$rR>fmgc zs`genUZ4TZTz+{n`EQ(9dC`MqfiL#W;nL(Tq$78d#+IQ#MNR>Sw%cS^sYSCper!8& z11afS(!nXs0Yx}zW;jcZj-QLDx$coH3|dQn$gMYlxRjpIBw}WgFj5uVDdJ#wd_-)^ zT@JUfE>L(rw0Fu>0mUTCsNFWctfoXiX+_W7a$Y%VNa;@GR=6C{8tLME-A&FQFA?s+ z?}f7{)6nu#|bsQ9Fb&CE`#2*ns%R+>nkcnT*h)7ZTl!Zr%=( z*#gY?R~)CiY8F_yvLH9(T~LjT@pLcGlJ04pQXk!i?zSU#8~htfZ)=#C7Gu*PknUO*+s58%b$(?+pqQK8&xUm zy|l=++A6~b8SExSFEIf9!7(t^Plx5To&x+ox@gE#E+3BYm(fW1uFTb;&PY7JrP77b zwzS#B{OaoJ%{Si?lGoQabR56xMR;@l7)2_LI_1C=Ao3C!k=YN;2RKhjWR-T>GMJ;Y z9FOTYuNzlqm3mz@UgPWS{h$2nzplP~MV6>8dmMC9s!#U!_0#`&ciSRLGFcS?A8u~= zG_wX|&1cumhwgNUukAJ zUx#LdyowolKw90REGP$Y)W@MBxxtJ85>qIi$k+CYRqDu`6irGr6Aq9oSI=nPmw

zAdid7OX*w^yBrIu^l3I`>`M7j^nF!w3s#?p;by)^9{jlX*vJsh-d=13&-Q68KNp7B_ zWCxJzKmPcuTCKkG2Y|3M3gaLX`W*OM4{F5${2GL|D`Ha*hoxz&EEU}dK60H*^CI4) zA)BLxQ$?new1cQwak8!lObp4n_%*zAsYsV)aN$7AU`8qpo&n%`3v0HUfp%J}>6w$* zE?B41tEZpOy-c>hwSmWYh=W4caSgGP^oFN0=l)%ZKXj&Vr&wEIFMp?n3PdU$JWmi+Cj50%kk(6JdZL%$CRpHw6EV%g6MW z@MZ65CYm)Jj~JKCf;RH|0wJVu5GP~;WwR!wQyQo7q^zOvG+%VT|KZ!mrh$qwvzJ#_ z7qjII)IwP(m)S=hW0i~>BKiu|wtgSIp3jf-q}`y^iHu=q!=l!>RyTF114mh?4x6iv z-bl3scf68OMMXtm0Kuvej=nhmmd3}^>$zM{7%N31(KI71%^nlP3qU{^gr8)FC^vLB zEI{a${-iuYKkXBYDf%xdGC#maQXG*67!oa^yX}csEDn#W7v;mJ$5sm2M|YWL*NMA$6YxUdR_UntIZ7K8a(R{f5`N5UC=}CXs z8qZD2l{DPyJoUyBhoat`_jW-H;1JihHx6ISTAJI<{q4p;S%`1y z$#7(qtf8I;Q|cEl(gKsVmHb4EHr#Nei1h_&~()cVggFBqUf=>OCBs zHA+VX4fYPY{qzri`4`WB^()hV!oBu57EWZ~W8Vut>pkt_%Z;Pz4Si}p;e~C~VvJHd z$#AV7#gypoi1y8K(nz)2&CUaK{{0tke*RzoDNlpj&N9VXac|M;Ub8eS#hNo4j5-j& z1Ph&Ga4rHKCSdNI89bw0xinVq*$S&A-Gwf)Y`0rb0+J&#j_dVw2+48ko4XsB;vlgV ztimW?^(>i`Dvf>=ZzuSnr#M`t7C=0%Hl)khBI0Qc5lBfwGTt}>p~tWWBSI6p8s#B; zLTiS;W?!Ltn;uCiR_vFg0mp=RP6*dSEl26jeR}Fv9HKTHwXSalO%|zi$90;*zL?;N zIcnomyDK)2B2;=yB42DrDw+hkhMm!u&!152ci%oQlva@9Ue7){^i|hG_2K*FoUr~ zyg?slt{qp^x%oQOfd7S})RDe^J8*~EWS}KX0$~Os5W=kZOOkf)Cr*WjbG{`x1z6jf zI8;L;bA~==J-W$M-{y<1vsN20?gs7;IY&F)O(JR=77*&^2H4% zJcQltY7^S(-qu`gD^cuoR4zOSNAc+Js2>WuPIS*;B~w%$Z+?mJb21IH-I9xDh@wcE zP)D&lN%dFTd}+T@+gg(%hIa~6VJP3raWaf77-lEM*K76^|A`Y`2=>-78VxOcx@&+`iLO2u*ik=7Jx zY5Az{CZ8la@DdC*9af3yl~XRW_hS=F1%w_H@wS-vAphIJ3*2xN`BV6hQYLOP zHjq?wv;PkY?fN1gDH!oj%{HIT!DTn7*&pf5y z`lKa$?U)}p+wVTXSC@tQGn2R#onYiroAU&*#7*2ytvbsZsiPJ)0Dc4xppWu>ZeN^} zxR_7{!toBv*i2)j4K>CtcwnVG@W*I;$2beyWjf52!h!CdFwA$I4c~G&VZ~q=EOwq+ zs*hx`aKNsVNp807s>+p4qG0tMl+>GFDo}ab1*^TPVsfYbPg) z2jB~4_w4yIA-hkX-xbr_v*YUhr%&pv7~d>g zqJu(DzkpE(mz}O-VzG=hv`S0V5|8sjZ!8)Vtu4Lb$(!LB5X;(`?X1m|F5<2B;M3PL zT{?JFqb7H0ngLZS=H_AC^i7+@Ym#^uT7r8Zz)=Yqbn(;tG&_mnVb~RpR^8+irxwMI zB!1G)G95#cPEY4wqp|Vc>({Sg%k`bc&Ld&)*B387ef-QWsnn{kpS{#ptKH*(P5to0 z54SgWU%q^Xt*_sFd;8@lSRLBRsHfkX__6w~gh8hZTDE`N`3HClBuNTk-uw{ zjHxeH8nx32tPD3#=BRpBEU%@wtHg}Kj4R>yAZLTYyW`NzsE?m+<$_iQ)9Ac8T}N9# zuJvr8+I<`fp=&29$B+}F*_LCbJ3ypb&@T8R;5m);+VF_2aR$|=ZZvWit3Ek9+vv{U z=KWt5!fPI}cNos9pc81x!OR3|NAuWhrS>L_LcSVhQ8BG(Z%?bSch{pBL7U6|`0Y1e zzB~*E6Ds-1Y0czsFC>Y-lkrd-xbS^OKQZI<9kEZ?l$>u4)vq2eC7TvV-*SN~BOBKw zE68IrduS^(ONs#DM*`v-%)Ug=j5q|Q2XlN8?2tbb6uU5fV=i5a+)`Aq1ZeI?L$~;G zsb68o;EnI!zo(BRiPpi0(wm_OxKib(-n1ixdT=})natZ6Z*m#{7z1f*lnIQ8AA?Rj z>&>)_Hu8MX=|4;#^Vxkva^HS>e|vYsv(%9G_av~|HN^Mo<%?$L{^s^8{qot8pUq+b z03ZNKL_t*L6;=4|_6|oqKRbtDw|Ac;Ji@vlimlEFD*E|MD zLRN#O})!yOh*ZrZ&6y5O)_&S%j#WhIz&T01eXfT)7^odqSSKL}tX@!@UIm@L#Vt?d!Y#-J_Nu2qpN4eA7tQ z(ei84YCbA?5Z&Xrq%4NwlynmjPK;c|q#-tNA~t?@dW6zjDySL~K0Gb{{^Za8_@{Q^ z;!pp>!P%Lqup_$FY{|i~&1Y^N9#>PX8LFrCRUpyLfYb z{Z-x?mO%v`^#;vB%Sel>^AibdfA_!q&x#3ajru{kGG6OS`*>WeLevVb&x|y46yYZn1h;&GZ%4O+YM=yA*>Uq0rlWk%hyoNLvFF9Eu#$ zm(3Z?`8DY8UKza!=GWKP{87P%VopW8P=`!LKk<`<``}glkr8r96m&=gtNg0z1ILRN zuW(MzA4i6 z%h!*Yyo|d&GwoaLYt=TqYI<#OE`92M(U$Y|>u>MAe);>~e5f9#=i~X;>kl)zbB9q# zA4TAoC2==?4jA|>wvJ${rKarfAjZ0B}9>Wc6o`536U7p zD&K*(os$az84d(mVgMS#V~Eo|gGVW0)9U5nc0Q!2#n-_i$;yFsdaU46s2;~6lDRO0 z$7QzgExq6jIN(BDEbqj;!5P50LOIGLzgix0-sQ-6k19WW_~82Flrq6L;yAY4jO^Y| zpB#j_b_dP&SGq_^JCw2}G1n%HZ_)?HN2l7YcDnZxMC!E$clU6}!>@@}P4YFz6jKyn z+}Y!%%ly^VH@cmF`t)(JUL0}Wot$zNS{}FFVM$6lEnx<4E-tPfVxlX?y#Dr^I(u}- zPfV8P*`22L7P+JS^4Y*)m>@JXwdYX42tUTDhmul47-kSS)D;=%aA%Z=$Uk_D5JMD8Q1~4p1j6 ze@QKr03Z>rsY2K+DSE~@Q}1r8{dH-)ozYL_)$6H==liF(otTQzE|``($ST1lWt0;L zr)R-9xQ6HvdWb#T-pD+W3vyID%Cgi(X@9=hReh>OV1EDV^5XjX z%iZ1hjS-T^DmV39mA}XAhhVG?uac>&Q(0o(k 
z_IryYpcPEVl<4%6rp0@@6P)RZ$-nKqRBrYMvxmnR6o@$Co?fJ>&dF%0!s(z4@`4YuqIyVsE&Yh4sER~shteL-1BAL1^CkC!^`-PoLBzN@)@# zCxbJa=|!V_ci*llR7iz$iQgE?Nsi-8dgWP;Oj$YbC3-ZrKN2Op_VJ=5s?SZlx zSSQfOlSu?(M3Hrynt%8Z9$Napsq<+(swzP#F`$3xNw`t2ID9KCwVZOHCrV@j73Gb=*bX#PxFJ{DB3ROuQjlf~>R* zkH&qHj5+#+`T|%S>HvittsLm+_xNErqJ=hrm}`}rZn5H1F)1WG%LxtOVVc}|kaDo7 zN+l9QVJUOQnBO2{TpMuScn#fTUJ6!6l_jMatlbk0dq}(eDAR*&$B!|e_7s&~U7q@r zhP*<=%KmwxVtUlqTL%DTsc+5udqe_dkthJkO{`R8S&%M0X?y67^#-3{!%kaq84M#- z;e}9}z0m|IceWrX4XY5Hync-a_BA|}`)>>mFb1S#is`dT4~W+S%h418((to#5Gtnb)sh`-Dae@2QP` z`keYNfAp}J?{%7T$)!tIipLMP+o#?vUwyt;|M6~oaMzni+{%_JPviOLyE_;mbRn4k z_2!PXr9zTd2mU!rd`V1O?+2zM*&{82(~M0=;M1Q38-~2e{P@gQ`G_2s$SNJfEERrQ z4~dD>!6Y!j3MI-oY1rfJSu#+HGOiAR1q_MQC<@F_IS-F&nXF{!;^T6!_U(_hqlm~pAM8xY_RpJTpC0x%S5)uQMsZ$L#x}B-mXg~OzVvXR!#{lX7`$( z-ZekGd#Mrqc%l}wKWL`Z^JVFJ>=kb1Vm@Cnvt&FCmoX<#b_tggEt64vR6sSwyj zwQ;K5Ir$3*Iz+)A*b;R-c8w)@*`tGov|et75-0%_6x9TNO$bbOTl4~fauh&c#IO`} zKtF0B0_NoRErjH!%e?V|bKY~KvfZhniS6Q-$aq8@iWZqxP}E}F)_PzO)O9zAWIgg- zd7|{cCnuCAx`xxqt-gM_{`~oiD-CrY_ZP-{=MVJ3EcJW3gz~x+IGVWMzh1xJ?kz5! zpRo%+-8>Fwo1Avbo9&m|yQcZ-%A%L6_n*JqHCqT?T>Gg%Xmw+}f(VH&b9&Lq$fbmi z)S$7U(rK-L5Q6dsTZgyOs3Y2`Fq?-u5{FjISS%Qx%O>J$6IzneC76#<;5c>^d@82n z;OU_m01-1WfkyE1P|OVkQZbZ`uJ1%aDCtS48O-U_pzU*K^U$zX1%Czx z=th+f0yL7&>d39(NZ{vGcW z6&XTwG=4oC%DKWM03#c`2U(hwTzv_1HYzMkQ;_V$3YNB@%Eh<}6+is#qtOtEiE-yG z9GR1Ta9F~cW%z-6k{f2|z2v9`%i0-rp1QW|dW4bq1_e~8Gn|jkIwD3=&0P#Bu?ENB zCyYc1*cNmrRCI=esXhTlAPx9Nv>bR{V=GK>b~Z(c$#UFXlw0#sZ&NaLS)J8P>M*69 z-Okei=gcD^6!U?~J1vVa5IoS{gR27~o}FJ1%pBPmV&k+CDIeO05IGb&ErwzsreOyf z8y~2}vmRO={)ZlJVqWh?!l7JS=t+&TM@mn>)84KYjVgO3 zBOZ8{D=4a{mR=PTxXT*zMoC~e+-lW^Mu7}@OoOXU<}LP(?3}(1F)&Jnm@>rvsLO*Y zucO`3?TVjrx+85SyMF=-Q9r}Za+NSTeeCI$U`x* zS!vnehDg@ zN;AIV6HJDD_AGh?Q_|jeFMZuLR5vT}XrJ>{V45E7XdIK%1bo-{-0^t{Us;mR(JxFu zl4y-z!0xEb2$_NuoI7}Tn7{t!jn@vszU#ebxq~yZYk1{Di|HFfARxa_h(;!=6@r7W zgCxKmHyREIBZ4_3phKtcaz0C@TBY*0y>)xOUpc)b#SNW$>JG9$rNveGyK~X;e5m9x4G8 zS306*%F4XLtc|ASi`%hox{`#Qh71l}cn31siI*8CI=r)2~@)RSHWHP0` ziFR<~D!KUK`_s#d6Y^aRFySwy53Yb+`0J8sQNTx9Kx+&R&m#G}cOEfgOKqwSx=98e zJa&ZMJ3^oXY+?Zz&n4IaLnJ$iLS$`gg5$3cbMI z%^axKuU5IM&+SZ8(rl$c)9#N4ZT*j#hq=6M-ErfljaF;W8QZ+%8GVst_h+Sxb3>v% zr`COvH?@*Wi=X1j7H>=eldMbI~aVPxF?2@ucwZ_o0Yy9=1y0|6N;`tQN%rYxF`#Sg8n{Q!=u1s ziQklK{-1;5N&V!kaf112lZ)3&2?I~G2RTio468^*AwSr3A;bN#bv;kbPERi6CC)}& zlYa?oX}VaVv}%6E(xGmK8*WcRM7A%M+V4GSxluhb@N_;N^;lfmal|B)wKhdd%^iS9 zIhEscnl_3(Aw>s@6k#2Cz>IzU=8X^i3OGAMbH`=WV7U_B0**Na4tJ+0*g~)yoWaZ+ z%5boi(5!J%*Z3>4N4XjzE4$&m}Dm1be4k$E(s|g?Y@D zq7xB>USz;itGhF%aT3x2ogEz$Jj=Vg8yOJi7Y$|bF_w6xhQlcPT(8|3K6Q^Izpl5M zN6&}j6RpDJzE9`<){}Ue&iZpOQd0Amd#W1~1~p^+XHp6QaO;oF;f*@SF^B!d~IGIX^!o zM5usx9QY%volrqU-V^p{N@lJ1V1*EX(NkQr6EM~Z`QQHkOx;dV0EtoP`{5=Ws|-5-b7)M8hIPFMwfR*uL=hF{~@!_{K0T_`;AOc!3Sc226<} zsbNp|On29om0RrlzWDc?qy;B>Y9b>e&Uw%KKI{M4e*P6%N>&U99hN0CN%X;-o?ia} zq7%V@^j-uGvv?L)c(Nvufx~k82*WkKk7pgC_wk*_bw>|E&M4}no{7f`^SaN@p6oM4 zi}ngbQSc&hj#vOA+=hI0zALTDXRG4zCM9@!C@Q5v2HOBh9BN6X9=KS}ZM>1 zq4oQWJ4`1saST%^>uO^@kv_uJ@?k!sQEv}$sM$gxSCV!t6$pPJ>15o>*jyIw#B4rL zo?uJshfQdOWZ+Vve0YFzBCCd8p&Gq#f>hLj#?=_rvaG}Qcx~w%S@P9uOH5cITq+Mp zAKG!iz>`<&^R|@PC5xH(F4rm?8tMkO#DEL~pw-G%`6T&$MmD;sB!+^V*)<~+S@6h| zQ8rpe9Z7^V^nZ?L#yBX82o7L|#Ei2~h{otK3=K=mX3Wy3Oi=hD^dR zrpnR07C%uREN_m6`$+15WVoyiQL&3>tVE~@4DLDhviM1tE=UiYWK>~vAx=x)N`$CS}knrWbCMl?p_5)!Kv40+8hb2<=B$B9FfU-$_N zB$ehMCnA6qk)}PyWBnNAak3+#_933j*kid&V#u=?4dhJ&yAaWe9lZ&$h?;k4P|{Yz zBzd=@Wdc!!y-%i9F3nc?S-##&o!+hLolRw3s?2vQNsX2l+p>@32T-=jYIsHbDyzCv zA7#yv270J$U?6sem|+^#eu0!7*#UhhK-7u7G0*V8VDe>@l;RvY=yks%3Nit<@=5|Izm(Qd@Wy-6Ta4@`K(6to2MNs7Ia8MWrBiO+?* 
zj}qK5%&~+%rMft4_l%gz7LrNsL@Ni23MYbK<+i!5p#K_Vgs>}vnPCHp&Xj+Lc^KZC zoN#0f8;RJPbtuo|*=eGBL_9RBk@|0g1H{TMhTCIlrX zhM1gUoBoRZ<8j;|6Jb|^+txAo1iJN41C`ojj~`i*M$MM1yQRs7LzuE-^9&$a=TObr zG^dja2O+YI5T2Bf1y?O$*$^FeEI>9PJwp;>@{WX3mS2*jnm?ens1_Psx{%`_-w7B} z!nYij9kT|%S}MEPzeK-3dErlvB2jxg*{t#{>PKo+_ruZrK%0Yv{b6=CJ9e*H0jnZx zG!TBtGI-q5Q6`#LfZx+GVh*+23VFV$tC%UPYvtF*0@M_QHtTxNexZq6fI7pi!#XGMr`ua@#LVW zVYp`F$jdQeedC8T72nZ$ifV?im zZ`z}jCvkUBtV{(Myjh{ zV+`_e!q+x(D8d^Nb)M^DAi+;RPu0ko@aTC0i~z?u(L!d*9UCYrh0#Hy z3ig(BQC4Qh5wPOYe4%D;f>_I|kFOUiilNZ$#~>5jON~WF|8m|uZfaj~e}5+*C)}Mf zK&&mLKp0PRS-XJ2nMDk7A4dG>AS_1Bac48CwNT8UuOLbhQGjV0WI9J8?nS&ST=03u z1kX*0EKvhA$9qElo}yNZK34>BVm0Kp*_h`@blcy28g)9M@U>Z;?0r!pl8E>w{^zQs zW7?7^!ak*(({z>Q`TexKNmb_iWnX>jUMSkWMnwv!6|>Xab~1{F6Zc)84Do5Q#@5>| zXPl@yDl~q?l(?M)TkGw+(VI75ukDH!ReilI8{-p&9f8_us?P@%rvWmvHy{agGQ)v0 zW_&UcDzwg~W+TrsOZM(cE^E2TiR`SZ_qqDGbS}GPJl`3n2A_uSj9P7dN;O>v=qvh5 z&vVjO1U-!cTY_zL)+-pXx7LNU{x?2GIlL>a(ddBim8{EbJ;%Y`u$jSOK2@mbL-8If6txB}xtw=xn0H9$cUAN36u zY$j=C2YMlHrt42Go|GZVHivZe>M^39D5VTwns*bDc6|?t4)ntYe;?nw5z9 z($FC>H|zS*fm!vPj-UuHpD;&{E^i=td$^lps;bsA{_c1(TM)WF+f_@5^h65+$YL;hk2Fmg~drMo|F|SAaLNchqs7M*Z@> zHI!nWKB&My*{NM{JA7Llc!8-VQ-{n=m+)#yI6KlfwDd!lU!K&O2H6jRwGa(3Mr$nIgxt(#7*vTbj zcQC1-mJ~8zP3^IaOvJ(5MW%x4Y=pp?w98$act7xn+RM)u8c=FeXa(ZbYfsC- z@vGMfq=>qZg2lL%3&@p3z*Zi!zheE7)9)!fHrte(3OOCWMd~ecEPjI491&fbd?!Ap zvDe?EmxNQ|PmyCvrSAzZw6dwRxate@JqD#!f%`E%>^V@pXb!b+sJ^U#71 zRSpanf=F(E|KI)hQnicWI?u}mAXSd)tR_@z^)Bm<;2KK>D7&Ucc+`yxS*KPI*6T!q z4n(&=+Y%XcK+&#k6NkF$4>)iV^W~tqFP}!ynPyT^T$?n4a}nPP`b#(}o)`b)84sg)=UBawC_?R#i zdi5mI$_3B7Mq*r5doM458>m^(k;j~jq^>sv$%}w(kh&wz-8q{8DWq% zKD@-9@-TMTZrhgR5u&KrS$K_N?YT|!bpDEB5ht=hEsn|iPjv2uzdD+mn;XaEQd-5a zM(sRIfnBUx&6~+7ghKFndW9Y7wJW=nf1h5EMP7I)_eE>{-&+v?03ZNKL_t&`$&<1z zl9|h5oe9eW}I zSP27;gcbRfpq}TRMlyo3Q=o3Q-L?;B7sm`zc?~d_7au)0XKg&_vRxa;N8Xv}bYM2Q zlCOsDRa&+$t4b{@Xjs;MR9f%`$Opk9xAq6mjWKOD1JtTQXoc|?LkaC9RgWi#C=F2` zTm734j~K*rIg8j_;YI6?A0oxRvFSr4`w5OkU!fAC-omxXD~6-%$G^$UV+2DHvIS5D zC&p|2Y3!cI_u66{lV1~N)FXq5GR1z}P$s{7Id>8>p=w7HiM%-1`@YV*wP}dmDgrXm zI*ex>?OB6=XhuV?cV)=9lR5d(*vM!^XPWu<~9dpO*b z#+xi-F^MhW4D`8y*?OO7uzY-Dpjbz})C$#f8b zhQxz&$UR4Am!4Qj^xB1!@9M;;=|>Fl!rZl5%r#|sV>$+s&v;=s&^ECET|lD0EZHBR z=3LBBi|w;_F5%?D$cn@vHVXqY3ulOa{VSP_8DiY~Qn3?Z9Ui5PKulyYbjbUrHPBX@ zweHi1RSy7KJ-WR}3`weOl-tZu9stL;P!jmXT|Hm5>=`8fBWG2UJ)rWqCSHoS&cAK3t4Fo*D=~ZROe7ISO+fyW+B< zM~>00VX0(|PN2?0$P|ka0`Np|y7_<<=VlwlhLwuuE-^BaPQ?;V&Y^k=;>p&z?9&E1 zs@vOJWXck^*MJu}xV)}xv8=%cg9r0{_YeMF_4zXp%t-rm!bU!lrrDxK%kWHtFT+}l zE(s_mNF<>lsb%O9X#y>YX~qKG=wpq;qk~c}?d{O~g@5QE@$H_U6k+&vwPe<22XWH#YT;QsvX( zR8jhzkSf-yS}8ERPp_^@2M4Nf^mO5hL$287di#)sY)?%_UO#@|(8*&Ck4h@QV#evv za(2N4R)(oNgL1NtL0gbI>LUeD6N=*f5^3^ul>s?xrW7Xyt%O&JDU%%%%$5;nh!{B~9+ysP2 zv^mNbnvs}U8;zW!SA676@okcU3l8h-?1F^CBT8+UTSxU1XcKta1Wd_9`Wq;20ko>W zA$6Sq;vvU<9nD~6q$R!o=H`)#C&#h3c8NBct%4bOV;aqAVN=KoI!4a1v`JE!xVB-g z5cp-hi^g>l5t&d>32q#=9&f`eEr>QLOZ*@f1&%e_Z#3*0I;n4$klK7UYMZ|hBYxEg z8{66^llXIF9*)B-UXz%LYRN>PuGad3{Fvp$P^Nl4+Iq{XsxdB8U;gYH&W{Dd%45`$ z0p5}PDTo$A4Hj>2Q8Etlb&?C(#3tj=sb;6P_>Mlw7?;9;_|?tWe6m}qffvkEE40bV@X`j&t{gs#*;mbxIXcV;hQ)s@BY;r{7&0r;9F&*<{Y%xKe zK49^5RbqvPsSR4hkd|S$sR1niq|ziGyC`3lT`Q}@9pGt3k+)|1;l>%)1yr>0(rXvk zRj<^v*Xd73$Ia8L^J_U2ygIF_Ff}2@N`Uk}fBBs8;1twGoy$!ywWkm5+&8tsIH(4m zpYDqX^n z9vPk4$EZ7HexN`3g51KeM8M0SDA|qJjaOm2Y5qLTbk~hGYgKil<-O`Z*GJ|w+?(5P zz!m_&XD?o8J>gKJ&EIx+B(bdIi@)&Q-}=p6wLDy#HE^`amES&o811JrD^&ZS zIcFx!<6}rOk%=cMHDN=crCpZZfGYIn=|QH{D>P>5qs2B) zjBWO27D#~EbG8ImDh9Far5p8i<-=F9&s-N}(tO3ka(6yp)$89wUJUg0S%eW&tCm)y z{@yg-`9hUi;i%EQ`9{BuQ_Wz~ERP4LQ>~vzvx2qy1{FvZ-Xfq&HB3(ZYW=u%cb83k 
[GIT binary patch: base85-encoded image data omitted]

Cbd)AHP_U7QKch%(4f@B3LqTtF1oC>XQ z+}ZxI$bl)6Shl)!n~vZj&TTJE^lxaxUy}S@h5d}idsjx$8TzQiSDg{HcM!k&&>AC!y zH!laDep8rz_^a>TeeWVu*rM{($Y9rO)tTyp-V-?3vng4e^Xu*jw+BgSayHat}M8cNcJ$z{nbp7*!FbL2kaw9!YYl&jLH&5 za)@r5nCwc9!&4&3{mDHz$2AG?lfUBF%bUcDH${)FWT)9sYJr8y!$FU{Z}eMo4}K6| ziRN%KVIuph#p9SC<5Q%F(<}p+BTo@wGJhj*{eceWcKM8_^%xrLkm;4^cx)(HrV%uw zX_C}TiiRUNYPs&4lNrW@8vcpsHEj|_UD;Yn>+&z~v6#hyLQ$N&|ix6{O zKH&!3^l3PVV1gKdk9L#+dy;=u`-FNlg;~0>Ts$vU)1$JHa;?8B3=WsW{Hs;rR?lEa zqUrY7xw-k~;c@o(krBnI;a`5ZAJD?x^GgHyY4B)5TEbAveSh#Q8zc`PivtW3$n?Yd zDl_@fW#%^9Yo?>ej9NDNWi>49-<^&SlYw2yJNCT&@otrBj#J%3x+I~!UP7-y*=jQx zykx2nJie%&Sh&f7RR zw4Bm$Z#D0I`mDM8n;*Vu?mlYEq#cYHEgWQagEeEz4TrQ?O!`>TfmlG`PzQXx|Bw!+ z7Ev)Slclv=ZdA}^N>)hDi`bf=kDu--2!6VNn3_TRg8t3C>?z%?>uWKp6Sp3{RK^6IJuY4r2H(GYqt+-Op0S-tZ!uta zFf~tFDHG`-{Z)Wy2f(cqdnR}=pO_=L94^wRk2E%|X-=+w{ptDmRLJf!dixfD@Fsdm z>{H~v-T(MecS}`Ao%{PwPmjGo^lCVo%l4olh>I%Uv4DS8q2LCcSE-_mU-=cI6~d`E z7!V&Mn$0VRp?#_}&-G>J1E>NtN~P3p)bZ$m!>Di7*K7oRrKMvoeR1CW_T#rgd8)-A zBg%@4$IHfH{Eu(DKmV+KR~*##y18uZk;5#wyhx+yX1~HflOXLB-X`&rumC}iF+69r z;0CZn)YjPSYonbKuzJE$v|nt%_T|J6I=U3uBa*%WC2vZ2nJ>LG1h~pdac2U3;AVJY z%9+Lc)W`q))sKFnigPA7&7WdK|4_&Av&0BJO}-Wkn@kC^YPbse}dAbgV1mPUdv zXF8I)B!KDz0-2UE$<1m*tdmoqAQf!}aVmR{2DMZuYu9FPdNL|Ru_oPMK}NJNYU82{ zXYvoZXF^U-6Qd~;9!zNYNm_!MboC-^{j`8>N5>R!BEa!L&wY>jioei+WRUBe95J8u zrn~Yo*1kw!5h6U4j=dbI0amzJuKYS{E3z%DL=>g-eWkyxkFu{Oxw~nmJy;0=WVr*f z#)P8@VWd*>5(HA_6q(V90?#lOjtwbUx^NM_wL8takd>57e!spcZ~yQ-{gcbX+uBBv zGen}tXN{9fTpxCy6t2d@6y=DEu`5gu-BtPJx2xjQLOKgeW)&Pjw2g--aA^$9fh9tG zdoIZm$8O1sQZbLl$LN{YB2iC|>#yTPHMSA6F5Jq`&8VS&`z(5&(_`H?*XFq=V zS!G%~Vp0)o)V_%gWbhR0W`gV1FqwgT`*DBS-xL6Hw(QzeUff)~%G9s}OtI+;H`bC* zfE8}fcvE=pSqO#ZFXi|3^s=7WpB0X6?2a7d)k5@@NhDBuODvKn#*Nr7Tu= z6HE;;R%-0E1IG*6wW?Jx-j!I3Fg6TKP-eSA!PutM{l6YF&-<#ZHRabI!Y_#OmjuBk z{bxy3JiC{&zR!>J!b-2UT3t&OQ4zvUB*C{8cEdlo&Lqv=mmYjP$0}Vi$>yD7E7c+# zpbVOj3b0AkC9zFqV%GedC5ra#al9$~c2=CHYkG3moL-zRCiY=HO&xVucZ4em!rq(SeOi7T6lCgKAePC`-Cn78eC~ zRRjk=?b3gd$trmQxSLI13Hz`2p1Y0Iv)HQbQaPq-Wk7ThO6G?0`i0jF!BWFCa zt>*ul$;~&VMe*f*>gD}H0jxf=)kUT>!LPcR2Dum>4JRRIreiCm==OFzC|-cvOG3e@ zvYTi1T6*xCKfl}m@V11@;jmrouwe?Rqbr~!tsi_J2s(nwLevDC6sZ^kfEIwK%Y!5y zZG%K!hq$yKxogQDG-`lhkY;3YSoYyOcn0QFfrfD5_$P))Jx3vy`eRwWWR8&+6!ncnlM^fzm!b&ViTB+EDd84;#5wNML?m~_;E#xu zn{ZR!v^d?TCkX->-{EW<%vAag_w6`2DIxy=vrFimlRt%X>dBI4b~BM3aq|w`-|1KH zoDQ58GCO&B=k)9E-icw2;zw8ZevD`$$XvwDxS&6rF5=h;5yet!fXP3wqsxK~TqrMd zWvTo9e1y;uxqff78H1}A&vv;>gm;fTL>T*WrT4r*-H7QlLQG2xPcvc4PQ77wjV4n% zi3Q|&KRx$rg;J|jgUTIF`bMyL$eb1O&Gg_qFWO&R?3>3v8Y|R!rizINDkS!6Ky4Sd zDxgdz<#J87<|czg*-V$B6q9M4kp?C_J!L$mS4?*j>r9M=2={r%$ap#MD;%>x7F$aM zbA_-LNH3q!sT^bNbh8A@M!&b7E{CJuS!VsCHy2-Z<*<9Z^c*XVeW`m&)Q<7J>SdT$ z6V0Rq;H!d#k4C(NX6Jm`pFBN2nI7G%WHqsRlT%~`VeRWUhJu#Q{a)}UrGiu(lb~iy zpO$B@E6w!*tof<3-Zwxtv%1#n+reoJoGl4-M7dw!66o1VxS26A?Wi;oyi`iXjT5Ag z#AMnpYtkrK+5sR^f)C`UQmzc*bCHwkUaDc46VVTZlR`|US22os06^r#c`G(3fnTUB zbAqdDgigG5puG7i|Lf<)+qY*`@?rJ_{eE_SGY0{}b(jtCg(y2{FC$%h9`s=7=)oyD z!01&bQwgE+FjG$#+W1a~@R|@FwQ9dNdh9*Jk92GLXY2=K03?Dh@H4eCiy_+^X*_kf zQ?*b$eCmH#?>%<0TTZujkJVjG$rwj5myf5q9f8R$};h$7EOC#cksSk7m^etVNWJ_A+LmWuVz zsl|v(XL28iY}*k|Ix5={TG00XXj;^R5_Wm@3qeXs62e=j^26OxfgiEAT=GRT0D@3Z z$bEXcmKmvLMNb!;D`Y}gMS*~iTr5dsx3UtfWPkEb$iyUr$n^w6$R>>>EO96mJBNI$ zm8;(GjeOyuQ?EZiJ@pb)WUUDec>FZu1#WJe6=yu$^R)7p?appKTn;{U^3#9*b^YgW zYTf)2tOm&&BLV3Gcf>zLOI$&AKzMItbduAJaEGE}3rg@>Z?c`O}Ud7%B{9)u2RR$N(g7fyM#yZaL$#kCGRW_6FedpC~jYCYc zuL*{FUq!XU3zaM9$}V;*;XbB$H}=F_=g?!NuBIXvJNtsFMe`Lm;p0xT=G>EGpLM>=x^LJai4A$G z2!BfpBynnL=2R7EAY4;C(Z_wnr9xo_4_c(fEfo5k-68C2PTY!!roim_P+jCM$Eowt zt~%Z2XS?H4m<0$PPRd{?nKo{6)Cn22Tk?uT4Q!c9M`;@V+LVUnh(m~D)<&n8dQnS# 
z(Gtncwu)S&sR&NhS!kR{z`uKbMbZWuzaU`*>xr<|%}15g-^R&1Gslso*w zSUJHlbl}3xnIU7DR z8#RIyR(^PTszjsQbAYQ)@4po%glj zc_hdB%YCXR$-|C3Db3 zw$Afl;(NDQ$kbd2Trd{toP74V|77^YLe(dNfT549R14KgafX|0KC3DAsh>>+>mD8| z`OICbT2E;%dDyQeWe^TDoN%#g4p!BzO~!;jz+T-d!YAe*Gn1u-bVPhZ{9o}YJ*qU@ z-7|z1snX+ouSbeTU*w4;5Rd^HG^aeC0@^eJS{Bm4YO&=3V?$Ue(pGy0P7!R9AIBeEG0qV>k*=iNJ2d6Y-ll9dG?SCDmr4cC};+MksjV`lr8UTIaOnM zf+pL4cw1cl*}KM%U!b%cWjDRuYN*E3u$myNo{wTjvF4Lf7{3X>N4nRVeE<*ZnljqGC=J%9cxj-gk?$W=&VU9ma+ERxL1A9y4g_6y%G3FE%2{RutT0 zn_Zci_wQQyH#Z>BN_?QVMagh91k=Ab6WD<5rjw9077K=NN98A3Z$3RK5zepxXuxji zb3&ZBX2A=taOoIjtWR=|>=vdFg^??Y7&1BOeL=7UT^2bN=Md(Jy>4E({oyANx*>9v zKPT`UW^JgkgqzX(zR_n+5}Rd45ClJWxJNM&d>tq?sbwNdY>jj}i3MY5>YeON$E3QQ zQf%x@{V z+jY99?Yz3$(&eo1=KZ8TMTRR9Q%MiNO~yk{Z2BMZrbEQ%vifICN8^X}vlXH=0lCE- z*I_1ees(F_`Otp=d@7`hArMBtIjf}p$>-gd`aq4JteA561ZpcalL}rx-Y&Z+*+T@Y z2NdAfBB{L0R5g!-;IVDunNuPLiYemdjnjLMPs98BH{;L0vniN)(ip@L0zl?FH?B;G z&DN7rdVkTnc;6qOwy39OKe)>O>}Ba9Gt8}?S>F@T2HhQ~7V0{W5|GO=C{`Nb-fngA zTc20b-AeZF2YXmgmfYp##Z$lk@buV#CMcFh+B!1DjwwZoBQ~j|$T4+sCYe9chK$?Z ztd_H-^xseWLSdP!OEK;3@?&PR3QFv3O?BorKa1o9aZ^}TZVrt%<9X(r!3=U-NPjvA$S|#Y?y`*1nQEc!Iqa$Q;a+dBBLppb5<>0nzKV=oo=A2 zQ1cFkCSx%j@MUGBRST|rpQM8JS65eRHha39c@a`sbjTnT>ZXI>HEH%7ITb2d8qcRl zC6EC<2W$q8pvWF+^*rk{BHIc$CR6AP9|w3unfyHI%PwCxdD^?D`(8C$dvkU<)$?@t z@Upi0>pwXE@!L{r{+sc9=;I>Nv%xf*h+vMOIE$RWwSyU!D|`|`%F2|T+K^Bw&PU2f2_|_r41Y~THOabP=V9-hbDFTzi8#OkKm-z?0fHcCF-fel zX?)URnc?7sz4y%Ybk5no^)|>k8-t$ix8Lv&D?aO4 z9TH!r%FtFCH~b_$8-x(@g*LgdT8`|9Yyr<0@cmp&3lx(?bi53@Pq2~B7h?cyI@eW2W!o)l#ayYqM(swTQPXy1XqR1|pDH|;jJN8OQV1K%|kVJ_qbA53Y zRf+KSc-P}&=)y`!4))vZ&@R-~Rap5QhbGPtSHWN<`A4=iHF;9k@*%b1J@sZbl zjEK|v$oqbK?c`oxA0SbR{IB4>iN}jloq+oJXB#8lva9K!h^&I0T*|vbZ z0ALihXSo^(7Kfn-KCxtojB13x|ISl%yw|GR5gPKcSX~r&v#izk8Z4ewMo$cKt3Uaq z{KNjaau{MON`s@QEFsxBDY}2!SmuRI5tgFxK$<+-Wh!6Kj&F~RLlN6+*f=~4BNS^h zP==Ode9L`$-wik!rz*f*;HrqnbmPdXl!oXD`!r%adOq(r!9$WRA-1I7g^3tg5@7x_ zD%n(#IV`JTJ4=CD5c1}hA}5GVEx_W|NvMH>X%CO>P4$Cux?ZtZMUJM2|+&=5XsrtZN#=s9-I z@KE^m`EFi*qcU`+zxQcdsaIUxN~5Ob-gC~a761Ss07*naREzcILcVLiELTd+T2RU} zC&eDD0ZbSEY-G0IZtZ@2pBrSCWhLX%4DY!vi9v({+@lqa0n3|&Mbpk zi=ZicGl#T4WV-lj1QWE;i!z`nJ~iZijE33uW6W(fe|mg>8WZyuk4!%Cpfxg(?hH?o z&w+JD>O1%NJZYBFFUzMN_DbLD9Q$dRMq!ZDZZl_w68LI7*1MUZm-$wt^Ej6%MpaWT za1_?V4BoH8Yjtx^!}p?0LJzfeN53gO)|&}@a0FF~toxDz41z}ic>)7~jbZuD#6_I8 zeydXEKPmm&mA+HExm7{98N!uZ@*-b(qXHg;M95}dZqN=Jn|MBUfCd!O{){py;i$0Z zr)P^!ySw}}QTEgTn_;q8%9O4^*JC~v(( zYC4}Z3+qCfG84X;pVD9K>gnSWA=jG@C><$H5=l$9l+PgDp*AY5ezxBE>S4Xub+*;J zbL+#0$qp6a9=TQb_GzrFWkn6P+f`og^J>l>`JmN9CbQgaxJ$IYPH{SH+H0!VKseuY zS}KM9@3&uI-T7g+@Oz)%9j1SMIveNmrH~r!A23%mQu#)$@i-gH)t@s}^;X+uW3tUb z_^Q8o9F9eL*?7Hji%leTZ{;-F9i@m->5H?ySSzKokz^Lbtg6rFcN|_+_qV0(X{&%w zlH_35c~aY3Zq>?^0E=EON`7?Co`)qwTa&P<4PdZ6jX5-T*vz?>_O0lifE5A6RA)^u z_c$hyCuuBLQniE;O)6G?Zn8K$JUp;630XT-X_MGj6aZ8}tG_rWCaKNKt@M6?A%n-Z zz%SwAFbxV8AP^DW)U;Yu7_i>^ckcw6DKI7daxFF{iQFwU2&jiCcQezeH&45FrTtf* z-xq)PzVw~j{cir{Z1&LU_KNA^Y&4Wj!htVcL_T?R>#cZ7xgqiM5GDs_^0bMqB?HKl z+n;hY7Ci28yGkA8Jw}%-dJp!hITiViQj@TUGCD&`-7L;q^b;P|Vs?7jz}*_pO@Lz> zykrzU$LBh#c!0gNmNr;N^-cw1RMb4SuMrSJ9-J4jM9WsW4V1qHwcy29YSO7;sarlH6Z>U!{n~-SOrI9i#y}M z%GdqfVHQE+qJQp!sSELswP3S0}r4;GUjMU+*;r4aAQm6 z$}bsqRBI|k?CJ4xUcD?n$qfJWw+4mXdpL{;P!_VQ<0_0K7$h~SJZ|Dl4c4joH_OJh zbb~hZo8d^kUa5{me5-wc)FMIyW$%W?Y6+5yS`$Q2G80J*l2A#(&2rO9FMjn=PP1MH z0i;~NElPZt>A?jMC2EQEnqP9ds2t+2U|WQR1Bl>6now4=T}puL=q(e6NVI-4J=eY% zpBA}ZYcy@u7cCW7<9Eiq><$qf|7OkV?9qrkO5?_TH1C%8YLP&XI zl_jifXj4MfE!dWHW(_o(^EYttz? 
zvunKiCja64U2S{F;P6+ih%1aVP}cZxZ0|=kc3Da!Ds~1~0th zcW8y=O=t6vzU5ELk*uUzd@9Irm8V`)S6&lu!bHw&YNN~|QOMHBG&Jfm!co#NV8?sy zdJpC1%_7&QR30)PpIaXr=9zM@!=B@3{P-el-fpjd+Z#+qqlo(H^IAZtwbvGkX$I;l z*c7pW?Z)sq<9a-->_)%UJO0H_-*k_|;&C3qxRb-Bm*=rMg>PBcn582)7i|k&0>O|B z;4am8zpelJI9TMYBx9c%O`Z+6^<+$~tG}ODDOBO$3Od4mCMq|0#NPOvr(xKrF>i{qsqc>b$fCa3f96Ipi+x9 z$vBG%SxBk(=70ZcGi9e1ii6u*xfZjm)~DO)M&S~ULae&72&s$La7S{;hS3mM+!k9= zdp=2(-4w~k=1`h!iVw5g_)tIB?$KYab|8#ID4{bADnsL~E{}RrJuN(`_nI-dl!9a= z3w&we>m>sPhIvdeL|IaqCOqV7x#r#9z8j4Wh2;s|OPxMb9~!Hr-XbfL-1~-k-0KgX z#Mid#PN7sUR>tDAm@(7xLtPvR=2Yh3?*5a<+4ema`O0mo`O&5@c-Z8|$4b8U$uf60 zP7U7gTj~C{F74O9o~|DkQ5;xncAg1f7?Dio#mm=J+Gsqo?LU6~dOR6D@0Rdb_uZQ; zBFmBvuZ8E?lpT80s=G8VT577_seh-F{zsqYZ_~pHWOg*eo}(SfT#EHFrjW&y&f0Bn zUwxum^H=l2FmqqM`;2w-OoyP^AM^wQw5Ap3?sf(!@t>b3(p$w6y5foiwX)1Qw#7UW z$SnYP{WSiIpWLUWZ&kfJOqiFx6|L9IfNX`mhVZrAfhw^|v*pQI|Dd0AaRe>k_6uVu z^*yA@i&J5psZqVNLvf@%*XkIt?-ZXq6+O67d6(rfnbX2Wd-cUWG##P#ux6m+V`4=& z5mq*@lLZ)2joAChtv6digrxA$7d8~qO{a?(K$73e{P@)eTZz%6A>Jn+eMCJo4B-4b zT7RCGpHx;qz0d#Tg|<=a`S@)(uF-^By**x3!AJed%I3?%z#5e6+7gS1>I;BVy0pNn zaS}oh;|Lo3s{$=lI!9fZPq}TTILr6GINdxS@K%I|jPUOPLsPK$ggGUdhgIGCfHGS_o$>OuI(0d_tG(ChVG!wX+eG(_lw$cfmSn$1O9se15E3Y~NHO6~Ek`fzB!&)%k6FOo>zE?>;{Zw8jCRnIsx z6b9rv!I}(u&5kn~M%BoBOrzvYk8FRc(dDEveh_)HVEOQ2BOI5Pr~mx(H|3wd&ec!P zM@$sz!m`0CXz4PYxdNV9(ME)m!XgBWF<)%C=m=>TWvc&bntfDLm_u6$7ZhmsK(JU4 z7r=H%tsocjMUE_SK*5_4S_-DWUM+EjfMmo^`n_A0=z5nu<{J6N-MP?w7&Cm;S#)#d z*6dhZraOi9C+YgD@xHjwOSja|HeSxpt><0qP<)ZAzg!)nIxTt|G_9*0vnYu(V*W)H#?q9HMbJnJ1mrsFSDfr!1Q4y z{bhkH9nZ+&Sz+QzmD;mIdAP|vVXRKoc(;pGp?iS#SvJ1yRSr*r!G%X2F= zj%t?x`3kb=Qbmqrw~8LDNCCrL`eBj%dR2b9^xn=-b4(l9AC%DII)eu*0h~+b^W!8i za=7|M^w=ocPtz;borc-$yZ+71Wj}8LZ7etW=Ie(=`u#Gs%Cypr7prvrd8cbNll#-@ zq58PY&Cm5z?M0@6$G$WLQ0J znh|s&%FTYGQk|MvHjPiCJ(ooXb$wA<+b!ol>=qh1NxhZY$>~uxqdDAG#)k?vpk=!8 ze5sF5)lIs&&9ugQfzbN&RNLmd&)f1c)mrK6mG7<6t=Xx*$#xILn{}px2x_{`mpk{F zTIa9Fmv`rzV*x?4%1~X=PVIhAf1mIZcaja2Cas|hR8%Op^_)=3#xrJ>w;OH3k7s}1 zYkt_}Ozjc!ZwtLs?QUNhyq&2&>s;b;Z7_jly1mWarD`u1`}_=0x%6URxqsZK{4el( zT_t_o$p=OXI946u1*#IMj%%J;R+0zfzeJ;u8BF5nv1@($Bhr?M+)hMm)YZE&Z~B)w z7D9QJU%+l*pv;#IP3HTPNcWUZLNr;-f;zrPZxQLCRL)WAPslsQ9MM&#m`wH&2~TQW zzsnAGnB^Tc8KQSZ>6K7gO7TGssDTp#30GoIIy=a0=+N+2#VDfVsmnWB=*Bml{dxh% zxQySSu#z9g`^b^;^vPG;wAbG8k@Nex9zYy>9eKYbCgU|+up?FM%3aI983K14c-$mT znk!VmlV}X#5GSj(db?%zY1e+2d$n!6&Kxa^=?a=iJ7m;Il*cixw}ETP_d1o{v5c%I zg?6LoaD7C3EhALQ2@Nwhz_&7_n@6Ws#iQLs!zy5JQ^=IE3eAHZTp~N)1Wu8>3_@eMO@l>j{ zmPpWYJGDg1mA0fDQbM5EQn4xSq@Y?=`=^8MI&>feUu2-t}y}|Q* zstLHrxSKiT_U|4ah!r%wnt|+KK-tcM3ER~3*Hb*<((DrGhsANC4bADrE~hV(ZVhm$ z--aY88D>qhwu`i9uurfPE`|K-@R`jOj=??UAeeVag&02O!CU2Zl_ZpHp}bW zLBShIWCwsB5z5(2tQOu~)PIZ~O>Vs``qgps9H4f!&}~+qcDg~&EUugW;QcB4cAu`N zTk2ZieX$wS5h8Ck<|93vy7X^fsVRM!%y^F2=V&7}`RmnIfizIOVUcQ5X$uPV`1CN; z#VVb8)$H3T52I%q#o{+j0Cg11OW4!~pa`5&d2+^Jihbs)WdH@s!&>ebk7;-{s8LZi59RRe}MrLi<7C9+{x9`AE|iIkzhZ*6f&=}yJ>p%w?BB9`N^wd zM`tgAvaKF1n&1_uli_MLPkam+3Ti?viw*`Y)vpXsQ%P2ME;8ADDlR7X4Jn}{Xl(qi zs+|b+ei*iDHj^?*p?CIwxH3sGKM@pyiHVT@O(h9!L=F>dl!Dq#lgRi^CN=_e&g?5! 
zIp(n6$T7a1Omko;@DA(E9C(fRW`w+hM4g!&@Mm8ON8zH`L(7R2H2KXcp`3`k>#exm zVAG7^TUIu*x$nwy;PlwJV`||nRU7u@;WjFh%8CAALk9lG{KT_c5i>g!W zJu0UY4bHx;$*$xJA{63H+F4g|FCt`re!8oie!Exv^PjwI9Dc3Ab)i}U`PWg&3KH7p zxa)UB!;ow8ee>s$#p)XMN_{wR65iM+XDvEwlr2Ts`=b>^z0U+nT9r^^XK*c;Wj({(;vOQ&COS%_vgbh zQ>w!(zR-43yOEoUB%IKJ$lrAob*(s;(z`d!^cU0J)1Lcvy=Z7V3er_g zpKf_(#0=ia6@3-$5!E}u0wEbft)}r(x7kOhtn(&Ip>sDFjwZ?jIFT>9xA{Wq>1nh( z7F+FpC0BqBkQeKAKP=}{!)Is+O-(bk*304B_l4ziKiB^JwvjGQf}A6tYqW4bd*b2g zQW%sAA9wTr$6x;~I)vN73-Hlp)Vu78Ri<%qGJakJg7Yl7JzS`9`9@e)m>%QW23t(p zUMk`nG?IYD&1y^Uk*`J@7&)MN+J3u6IbpBCw;|va8M>^G(d4KsJDWdy6c(GvpK-h* z=fK>gR2b%bRfts%J>$+|1ETYR;Gx=_GHVY00$47!yDY}v?`6K*&1|O+7bx`La0_Ki zuAtvkPA_IiHi_Y9a{W`fHeH^V2o!jV>GWtinQzzJMgnRwo<*%6LnfcoPKL@Uh#MBv zzR2$KK@l2=oSe~Lvdw0>-r}%cyhlz_D>V9}?Ot<~^ckmn#gcd&vu$tdo!T~?dN04I zGi0G%s1@F-MdQ~NyHe}&L!0Gux0w0(osBK7)fryD+a+atPVU^WWUx zPliJ=B5OcU*F6rOpRF=|v`We;5skG0Mbb ziqhN`zZU>wcz`&F_N_e7^M&Zww^HLY^=gp&t9nvZl)r^CxRz?+p;(3 zo8#Qmmvy~j*b!d)_a2tJG zpxF*Q4?*z=Y><=0dp8-|HtwvJw&n6J7o&2xlY4d8H8q+G9@A>?2f*X;kt}O!QzFp5`xEoVWaa|RZKfQ2v&tst&8o&K%v$`#N#=)eQf3D>i1LU1pQdcQy zB!In>KRt=p4!cI%s1tuCr?n0w*8?5Exg*_Bfx2lQ3F)XH6izT=2kWQSrnsK>rT6KN z)3?9#v?+~N*jNP<^1?b1KH%Rqn&D({UN##G1R`ZJ2E38dJ2JF2FJu~6(u`_t$`61a zoB2)QcyoFBi{I|P$WPG|Nu{mV@6KmSM$k{95Z~+al2t&O3uK-qJf2saI$~CZ6_ftb z9A`d!KJ}*OqApQp8*7CQR7N6`#&F8_5_%5wbK-jjF2@j2Eq;6LF@?x{Wc-busGWEV zqWDrK-2#Qwd1Lw1n*Z&8{lnW|epWlrzm#FYDOxBqXEdH-4&_*w!lI_qOktj|E-5p> zgj%7Rho_1rNJ%rLbU}ROufKVoWjd*9kGV}@&_Uu&TzmXZBK2<4CRcnOH6n_cjKXZ8 zt8G>n+dT!d#}tvP6omP~WLJBRS5ds&oy((=TH8`JaKvy)NIIn3vWh=WGmUU^>8Zt) z0Y_fx;C@rx{piD7J3T6FpEJ8zxd_x7tpj<@_%PV;Urm8+@~1*W5XwhxgE^+VbJ!Yk zD#av?zg&PepvCNUyXsaYy>gkz8)2)TEf)my>sN26G#u;-c2&csGWq#3kf5z@7mQ~( zd7929?M{d7F&t{!w#PNCea2`$m9b+(;2yPyKzOb5R#2`TpLCP6ifVP$B4zAp5iwJ< zzXq7qHDtdTmUr*|?H_*rgMMl`{WX?hXde|+Y3bed*nWonm3zZ4X=^m^KAGiS{?+g- z^;PNIFoL9Qr11K^j@6}E92!rJ7~GOPoArYefy*+PPT1D{PP>7m9o_O_(v~@DfW7Io zh_LloR(_OAgE}xVu-(Fqt_^`$zMgfd^ORj2_4jEklM5vULJoF1?8a`DKFuq;_docc zmD)VNeSAMo6*dyi+QN#bL;P=55=OP2x}GVp-v-T$WE{eF;W3?ly8Aa77-sjPTTu-aN~GO(IN~(yHt9Q&KYl5Lry8T(5)4{ zW#hKqeDO5lyW|?!@XE!p%$C5f+FsqAVJqZFM7i6oB@C4nlfE7G)?fzzmoDKCAlH4BW`Bn(?&#S zVx2RD)F}pMJi+dLh7GQCa>hF$E0RKZ=OJn$F&6qPN=k*xCW~;2XS5m-8sdCb~Bs zGVs~~^$=Hb>y&dOgLwHTEUGCheSTJ(cmHsCIXrgO=Q2@#U{|tMWdG=JX$M&jSqEw4 zgsIfo!>5aC^NbeaWT3%h8xjo`ga}OWxcsbj`JI=A-+5CiY)5U}cL%K-wRJl1FFii2 zp-JsUzoU#6hx}y=3+QYD1O~5HWi0>zAOJ~3K~y83EADdrFV}#jw$?Sqh^$22W{>G)|T25;OghuIm<$_m$13-EB9wXrxc5Ni@vn2dh1| znL(BJmx9)|w&1(PWwwjD7O4E`WK4Uo-QDd{!$a%)yw~+UGPtVb0<0A@U&dgCAp>5V zFPP5I4DhayHFgp+Ps}YW39P_M~s~qAluN5&$PVKia z-_6~-*R4#wlwF-N?@whOyDCRiZg9)WIk*_jPFoum9q-gQ=wCwUu-`&w2z@BNwwscg z6069<_S;N}*4x^akUxbSZU2ZU*w{ga2Co~)bAAjtAlpZrM}mRREp%))#8Tlh4=HGw zXuzx$jaj6uS)544JL%uLg8r|+#oT|#!i}Gpwz3%q6 zUKTKGqz)URk5Ki*WH=>(;Z4*$$+u%j3ges&Z>JkK!qQBJa}#AQv5yUEQXd(KRU+QM zNHY`KNhm0hz!82o{#G%!A!KnPjm?f}awHMN2Lc83eL~V#yzeWfhOkO>6@3=Y;)fQd7eP;~qq zMA|XFNMX99$9B7p{Ehxf0JT2#<9$b7C7?g$THIgq2@BM)hu7v>bc~w=PQ5yJ)|m%} z56Gk%3{vFnsC}PO{X_dZ&xh8-41z^4wPuA)*gX4`DUw+oDrL;ol4Ns4|Nwt`#b$Bnzt&lnLHA@tjTM5qQ zlA;x;;w68NmSNmwy6+AzC+D`JERqjI1dWVn6kj;5eA2`U{w7W7HAp6 zV$M81XEVlG+vfE`vU&`}v)>lh!{2#X`fq;vqI-GPN4>;-D2uC3&%0%?BC*D8L zAmu4P$H-Kv&=ZZ7x6VZ5Hrh3Sw(0ndy+^krTd!O#iv9%L*@v4sq9ec*#1>%SW|e8d zt=Gv4|55(1?4%YSw@)uh)B1T)k=|petnFJ_Bynb;^oD0vl1(B?{shw zy#n@TCVTx?DjY@=-1M7zx6P2|ugED8Z{iLjnA(hbiV&JCNyBtJb{cH}Uq2$N2`Wa< zd9G^;)%|HP`pL`MzxdVXh57%J-aUplujo6Xh1j2V#CWnALxG^1wAv@T+%WS(cWqsC z-;ZaCH0@VOBcwsFHoe}!z7U)dh!YCycbho2v9i?~HC^N+@zJsvmrlD2W*i`{biUfY zqY|`XupRPAD>Dv-aG5P7UPAtoKS|Cr5Fd;;=m0`Vc+VEM!K(SR3d% 
zE3nIybT!%x62+vTL?bl^&c{58ji7Ip^o%}NMCmCF@PHw-qpumpO{HY3889R=Iy_?w zadN&0;KTkD)HTJ32UxpiR+d;lZ24Ny1_rR{vs6Mp9SO6Tob#$;JE0P+$wMKDbR*_C0&WDsPhFh>*O<+ug;GmM;3TOLTDbYbE*tTRBf zr4kAO){=fbguO+J5;zDx*K3_WQOG?&XF%I(43S@dyEc7O;~Qi zcOfASF9(U~##;vgGr~k7&7pYK_BHZE z4NOG3uOtottAHH}oD~Rio6%%C#-}lxNij)Su8Cvfi_XlN@xIrN@xE_`)({R#I>Cn~ zaCHZ`Kc6|5AJ_0F$Qt{fhJz4tug>LQn<_bPgw8@9C6{;J@pwX%GPQ`&WDovEE5?i_ zcXqu+59WotC$uM+T!V^O5KMW(idJA03mCey0G1?uUx+F|CZT0J8SHAqamn`Qm%I0? z#{5{!R_lUf&>K+2VG4$s;CqyLn3e)*6n_9`k6Uz{7&O=#@~-}$`9RPrd1{t2+WdV| zKmX`uc~Dq)YkBn{tMwGcRJ3|^nhQYMl)9@n+Pn%OCL1QK-NOMwm0cxJix>4-ZFsJn zDg%}rK}8K~C)(Q_^x2?Kp-C{!O(wujflV}wv*`pq4`&v%5eMXCKDTIqe`e#+e1dSZ z+-S6wOtZz#>qTI+N+THN0wff3utF;PS z9{mP$740vtmOQi#z)guEri+d~<@RBvUVA@${^seaTyJy-15U<+SjO%M z#C6X{(H1dDQDYUf?UJra!Fha2i)v?)jnsej zv(JjVVGeIi`Y6$-e4VNYwQkXe*%`YKpi-krdZq+9P-_M%9CZ*{k+|%ECIYxtYjO=| zOLEjp1>M=euVd9}fe~@uH83d)fr8hInrhLK5{&GNhv8_A{H%fZQ8EA(H_VQ`^YAPo zhWWe9(k8_L?S%HFiYL=4V2jD8q8Q1`>FCq@_9s0oPpj4QqYP)SJAeuUJm+>LO`Rgt z1D319qS6ya2$>~{B0ghZvqzCE)azApMtOsA0&=sWXk7r-H?L@u3B33=uLX;{6H0>6He zCnvmDEldsP%fo2$G{G*DMItnt40nrhuaf!XwwB$EPRDtT!GPM zANgdi+ir_PP(+9z7fylGLipCN=RUgSE^cg0ag<0rr6TG>f-zjUIagC43P>zIE-n8g z5$?v5r5Ii`T?Z;1J&?k;4l_0mv2$%-j(W&)33&q5pN=6MY-%!@lqSRJ2=0V%k!8r< zwM1!pAMtglD^qN-d0~q&%Ok=W-8jOFkVJP&_M=@!r8(i%#qlTE&MJyriyLt#zI;_g zTrC?RB6LY5yx)GDg+TOuACUqCkQ?>SR~#F+AE*ErevoCxQLwAnOUYLPAI%Y_tJpbV zX+~iR0Y>OWPiev%G&2Z7@_Oe5(JfNq9J{ItbIN_z8G#B z3$^F%9wG=mZp8|Wi${Dw>#U)X_PuHF_EKsqVvKki=07dW4f2+X3nU4{{s0gpdk;AHWd}-AvZ~P-{tNf z4u!owDGaRuXq7p?6msA<>gf07y{!Tv6qKv#3WZxU-N6lGN0GbXz(*-81g3526$=&C zrgpNa@x#Z(^`CrN`00yOZTG&MTJe6wLiXELl^-UFLbmeN1k#f<6?;bng;8%7tQ<)& zONz%P!EGqo40-sHgbDwCa3cWs)$@FlE}b&eXeBHN2*6U!Ot5Oru2rJHArm45O4Qiv z0z+!JvlQNVGMIccKtO#qGZQ7;^H|prul0Qd#^~t1aac{ES}^t@aSF2+CD8BaO~DQ$_vK=yLeZ|l&d z0;OJ;OCHF9tM9W5`u(<$_joE}d%1i4(UUxn5SXq6ZK%r3Xgsaeu{kzEoNGn1n%&;t z&}ze{NA8H^iuf;PIXXzmFWo6ir6k^J?)AI#r35cn?FNZD$PvCLndQlRnp(X3y!iN+ zKOWo_=Y`!cf8Nx}#l=kaJi>47HhA7yBoNUFOOrZez8qD3HeZd(um9s0k8V(gfNC9y z*`zpMEun6W-<8V<+z)Ey?Q8~c07Eh8^zEYQbWU3J`hA)WRVDEZ!=`*vL=mrDqj@Dc zWoS!N;R(OBWwuxf$R@tIg0QHZA=v0S$o~Y+%B&#P$!xb8W(HCQraBKbk`qD6N*FRn z5&?2bvu=xpE$069JZz}TWsb;)7TBGdBCorlPD{V50YqM}Pud|c!Je4O;svqOYQm1% zhJt`1)sggN**iahPIlw2C8e08{*AvGdgP^5Zx9WheR;Hdk zmVu?g7T4_HTIHIO1E*<1fJb^c_IpHaHp5{BO`( z&XE1r|KvUl-#Ax%O!RooKl0nT{J3xDBB#7|itsu&LUMt*Hsl1D%ZPQgTRr;CYX&kY zPYGFDM#pzu();cpm?aayyPF2bp~^fq{SgiMVAs&>xJA7B;-?^hJ26v)x_PQeesy6p zrs|9A>(S}v-MTSPHD>4ZI!)*-dJ zdWV+|^V<6T@4qhnjX|{HXD(Pb!1VxNyRiv=RiMKaf{gyv8bKi06&>t{k;TiKY+x z*x)O)pakqv7xK^KlJX|bc!*KdDwf^(*VE(Ff#oJM7JXCznGjW0)Ye5EhKbzL>6!|842XP;u~un8^|rfxac-^)tpGlogz{%dGH^c|azRKW&A}I2YkJ(p__3Q` z{p!=&y>QD`)DeD0#W?zLlJwN-+=SPQQY8QvRg^l*L+b0tVY<@FS6Xs5LgzFDeQLr9 zHPOI&y|WZdk}}CjV?vRHwzc}~85>6)Md%Cs+Vsge^30Rzv!|kQ2t$&9;Qj<&g;Hm~ zNllApNQx?6sW&R6s+H?FE3@8!C9#g?T^?n%wwlb)|CZE{t|$HM{LjDB_`z)komBRy z9BQQoh-pAZNaB+;gi63tb(XR%MMfgWoI2#2kLTw9`+kZJdZW4T5${L>43WQ%jT-XG z)6fWXL$RVGXF;5sYH78DR6Cr0jY9&z5b*;R=$kjMh=M?_qx!GasTRVTagDZsZ75(& zc^ejP#u6AKIyJC5%Z#_hY}6s7id=&mqTUF7cnirKqit7eyTiL;@q{AcA}54Wk%fxLxzfC-&)~Fp z1i#4#E1HLGc2Jy2$oMm4c5oz%fZ?Y>S{)=~1PdYK{ZLx4Oxh|(_q>pXdQB_8=A#%C z$wknB5Dy&W=p}q=omW4)FTXD8r*b|vZvM|N9=F+Us@N}9JM^^u4TVz?A`|Z0s*n%5 z0UN-Sad&T&qoo2*<#brfS%k`-giLk&HV61flx^D&U)t*Yl`F`(}DMKFlZa^aBGK+|ZZo95H6^Mzs#RadG@@tf$x(JJi@aUQ3%1JuEwqV?fM@O=99bK=Ka#r&2$^N~^;!f3vG^GIhev`L%XHU}?qU4nW)v z02)-aqKl%{r=4yAjcU{VO70omZmV)S$PVcsB6w40@3XmzsoiaH_ubav7oRpiZl0QX zUC~l&_>k0@r3^U+8xZ6WkX8H@H3!>B8f)}K*Yh)Yj8WDbSmc15YdPZPIrQ9ZgkUC1 z+9S=eOEsp4yga!ug0nFT#N^#qVqlgIYd)Ma%fU2^gC}Q$5JslhY70fu?8tr#!i$ta 
z*PsKjZBXv2KIL~5w&RJzo7nTbevA3`=EmkrOn=>x0hnunBwQ{S{=i?DnxLO3YRL+{ z0@!xqa!r>QPvD$kvWcec^nEA0{DaSLe)dtLnO-9htrbyijj)&+I%GLY-6=#&&Xrn& zxxjL?@fr#YOQrGVG~Q>|>B_nFN|})4XsbCuOwM1~$OH9mTGi5s0@0d2UmSG%LMTBV zRV?&E*ew&NMZVK&aSo#7{Cv8kQuwM3KPdzu|UrbM9}+ztSCo#tmH=h!5jNpb-sn%fOHvMERR93ld$z$}X| zhEx)Y5Vx?jSQ{(LLJ3a2Y435|pvU+zHkAAdds!^g!tk1^>vwuYg%<)cZTG3~94nSk zRxzqy25{GIM^tmbgkz{lrJ6|pXbaCGJjl+lv~(yS5OJB%GC&a8)!*OW3N?=;z#?Bw zqo*-KQ1uEhUzZNwzb$l96P0xAg}-@PzTI`#d8MFv!wC8oeFgNLq{M>WHasv9ACi&a zq8-Bq2{=~hJ4Utv^nzgJPuOB4p>-}&B`e{NI7M`D55^jyW9LMRYW$^w%jfm$((ISt zY7g@3^JYvk^ZeYcvy-;evmFDW4tp*j!4@}79+|8su3gJS(_vvKZ$Ko1XGJAHbx6Wk z#&WLqrxhMPjc(0mVHfI(foONp(a8$Y>4RQzd)j=d<|f!JPGd#13z6Xyh@5&N@S_$! zc^x$_l$}$V9Xjc%=lo`Y`L4sfHc+7j{-=NOIuZ|tFxsC{Ww8PX5#yDF-hPwPk6@wN zbpVX0=n3?&YdG~)Wf=v@H|D3OvVxtOu1QW6mvNZmX&|Fx+?~kzy>HkV-jQ0&M0F7> z&-@8H3EZ>mB+J29gCRr@kW_V?xf`Ab!##hcwaC;Ksm$|w9^9m=2BN334IApMpj}u% z=1K4fN-)+JSQ3THHBwBUr;}0GFX>AghIRVx`_0`y`q7Jzn&3fC!Y|j>fi%6iF=6O= zlSXnnuccThhB(okF)VJKmnt=i5HVfqe6z^oAlg-5UCJGRF-y2m;;+<&{v%S9$-`Df zEksl+?Q13Nc++8i?~lEHPoT_Z)tkg9a9M&CMlbR7p|l^}Wu|}dN$tBm3Fxm&Y%kDO z+E=*A?FF3D7KX+?$5a#L0Vw4GaE$~1W*smlLJcOA)yawv=!h?gV{l-Yz4-Ra)gTA^ zYKZfqp#Bb^3RDd^M7*foZp|lSf?jWIK=4><5&SIxz*uwSmzDJ5Mjf3W(Hr}= z(e0tFlvwj1)v|K1+HkUeJDgQ}cOq=tpo+fh>&|euKIc0)X!*Y{Ob)qgj^5Odi8xwvzKDukj-Q%+&oEjhF;(ck7oo zH_wklArLH1>fL(T^&fn@`;UJ7x_KVu_LJ0pg*@`KTMEfoLXDQvRudQPfu-wWTbioq zS5!6skRCg3*`@sTuKu)*zEd%Arqrs1ad%r*hBP7zI*1Z&Bs601mM2ApA)spl%STuP z`x$bb(c^u4jl?naVo)B02h9qWogxI8lT;?1YKXlFjXEBUI2kOJ+j_Qjc=vhx_)&G+ zZ{z`XC&%2XczYBP3g23yI4he)uS5*(xbJl_?&!a(YS+XC6Eiq98%(rWm>3ppS*7P^==I2n79j*$}I&A!(@|KP6B%|npQZKPDW z{mq(zRifSm%`)sv*}%5riox8xo|rA1adqSW@FAHyDU^%7iW7R@f-$zqRLYtK2aIp|zHVFQ(GRKaeFtzOd>2U2T{)*2mqpJ0%m0?> z5-o6}zLm^2wdSqRKMWIcHq^H_yy@(Ol*hZC!qXZ`uU(R~y^1qeqDAB0VylS8>;f)X zQ9s4Si#y}5&^ior87awe$^cg{14`-|Plzf4V<$Tf*|Fhpkc_!%w*AT5lT1c$lj&!g zFQ)1IyZH>b_T`%w&kG&kqvvX`-8b@qsM&1?{Wgo3A&=<3-atfDWSL%-A~w0z=qa!5VGZnKaXsh$pv1fRi2DQ_-gTleiF4DNiWxg6;;i5u6(m? 
z3{Ra|s<}GlC-7`=#wCHJf=r?cxyX>$8;Y8Ro0_?-eJ}J6pClWsq55n-cEfDKcB5qv z=q4+M&@H2j8UN_!^79XhpR~?}{R}V>XiTiHR>=cMGVv9!6e>lezcLD~`&T-`h+Z%g7ovk4eZ?q+1QFSeXKpeXVsT>QdZ2rgyX9=Dbc9S0cH`bBW1@8%FI8 z1k#Du-!kdqd!k9=MV3X95m%30rollg1k@`f%~q)guBoWwsc1omybifw^V<(7ts>l^ zl;W_wD<8hqzI>;BY$HJ9daW0tKP=61rK)?h^BCfCwFgT~&*rnIX0d>hS#k*wAY1KQ z4EkcpgulA!k|EliY>wM11W~Ox+tG5!8GH3=bB+Uj6PNzzqc=KX%;wMX&M$9!`cKcs zL;W;f-`_}_&h<>(ZQs0lY3n~fkC@)=PM57co~m(Cq8J@C5{A&F8>>+7B7=*KAPIcx zX7bq0%>VE^U7+y6=~36@MiJZG1evySK9+M9|0Shj*K~f#dk4EnASn3 z8-1Fhe|G+EaASM(V<;}NKCI6Dfo0^m_~Bxbww|3&)lny`Y1`0l_3em#WmC2D>?ili zJkP!CGIJt`@Pg*n>B3*X8!l149`WEg!2u00*~^kqy|)p~HxPxQGF3uCj8xG(PYwiK z#I;#R$z?q*lsYTS5*YN6dig?jbP9-dZ@|SP&KYURj&<~sOX8o*>_w%OuIE_AN#8a^X@Erjcxn^k|0luW+ zP2>o~y9_c>K#Y=dRyQeUBwd$4BMB2B&ZLM8v>Wz9R$3mEw@Q-B*GL?hNyaPr!ddy< z-ttXpo=<5h#>o*Lz_*XSHZT#_Skvt-H#FwMJNCHK8W!4u&X!M&*UT3^XQ79d0b~`u zH_1T&PJyC$(B7 zf|HTBQGTe2J%+28q3Df~5TIFhQo*2y^VJ>~)4utgRq^F$lhx#$mkuaRVKAXgRkV8& z5EDv@O6qJ1o={&FCF2^gKvQa`5v9(8oLW?$ixtQHvnbBf!V1|AL1NYD&H5v!13PKwtA zLug?0!9izPlTYhQ>2D{w`7yVT(9|X$=Jk=jG;^#g>h{9iu`u$n&q)e`IPqOB_MqJ) zj{+kt>1DCkn@!OYEKNcPXgx*$C=#o>)5c-|V?DX@KlXZE6UUz*t@_>eUYoM<7r%I2 z{P9as;^9#ZU@miee~)=(x;cI|OBVoKK%>8#_f(YiU_&t2-569Fk<%aC2=gLRVtK+7 z)@y_sXmSK(h`y^sNn*%VMBqVbZLuWkXb~p{a)AmYI^3%GzNKX@czD(-sp7}w=m{&0 z*fMk>9fWJMIWbBjXW=`_YZm&|7q@$uKDL8BOJt&pg8m`D#P-Ng+j+NK0#6Szi(kIU z^i!kDW?asRLE%eg=at@%ghCpKJUM7$IjvgT!`j6*UbD$%H`M&C%-!9cQB5c_vV`?j z*1Na+`+FPhc@z}--CmDB?O8~Gj`jnMBzmD&Y`gCxvzPcLDNrJl;@of#T7p-37<1j;E43#iA=8CQomWwW2ka@jIZ0~p+2bepv7Dg2CA^yd`QzWi#vzR`i;zEwk{>#6-v(C9ti6p%yh7B%hY&zW@XEtaOoWf&-gi4xO(MHY}a~ua4 z?HnIJB9ED~M5X(2$MBbKrP=5)^qB$HYpdCX9dmQwSB2%7(hsFDKHl3tY88o{} z+z*SHm=AzOrtxBwzWs9DT$f*;tGDx0=1Kis`dn|$?Br$E7zzTbb|{lY1(E>f}v5-*K8^J z6PypfRqVl(5)VUX|niuj?%`sfvhT_)AXCy`ROk{?R1qkFNZ=JGPQ~}e3$lhhf$UoN|8Mn z_4byXfoBf(1X}3+6@|C&-hPm&V8hhLFN;05Rc-K!?370BrdsQdhzf|KX8qNm3nxYs zK=Hl#;9F>D%u?$xbh-BOm82hs4gcpwkYY2b+8-(F&z8R#JZ!Y0r(;( zL14|URO7-P>~-33Xj*$&YR#J$mullb{&->ReUfwmjOT1y*nF^~{ zFJFnN5?GS(cY|A+$~@a?_x)RwZ*HKZ`iZcUYO?uc)Tl0oi~UG{L=cHZl0f(f3nVz^ zC~C`q>le?r`PI)}#Bl(afKrwVVjVzo zlU&vSQ-*gH{>pxzkb#DS&WAwJB<3cVMbU|S65VksK5;f0TC-0K2046t_mZ6v;kcA; zW?Yf-Za%G@H^2B+_h&CtwZj7uqtT_!)YSG z(GX$g86yLy<(Ib2MxSwXJXqIWOZyQ$=8GUDU5O@%bh$suz|2Ojr3Ax<-|EO76o8Bh zo8!jYAf_yP@ozIMT$yai00gsitpT|fy2tTqMXXhlxlz}d&~C*DC$>zMt1B1t2w#oL zA@RIUR7<#PqzYB%eG^6T8wx-6dkD%v3jC%&OZI>HA3kzz{+Q$7gFJ?-#(9$jgYR?A zs4@-*6-tkhzj6@1Lhrdua*e2VwCr8d3N}t;uWKZVm(XFTgS3j`MK=zUp|L#uHdhVA z)>Td=ZZ4#@mvST&bg-~ckgrH4XIGc%GI#qHXa2E+p>}mCPJr7%xlys_Qd5{BFp^ZZ z-|5qr1fLvByuuD`b8B(sOXc%Pko}*dmEOUeZZ`K&@syLU_6O7jL zh#|}PesRx!Ct4aRf!f6@FE%& zk3j6QP+q=zDb7Rhdgt!;o-R*tK`HJX8$a5xFV^@7xRqm6SNr9C`S_#3NkA;K9cNCP zaxn^$l}R(nI7`MRytDNh09bIsin4P(2z|d_Of|y29rT2QAOgR67@?Pngh~YK;0HZU zo&yu0_JEIUIrgGnL!60>pJGjZTBwH-lLHfMCpf8;4?4iJRC6~ef-4lP!;F$xpw zhu4AEeE>XLtF2~Ann3q9AM}ciouEqLkG|c0U0kG>?`q{#qpl6BXxvK7v)yh`X>^7H zrW%&d3?3(rb$SnokVGj#*B(|Z*f~6~%z_B(seg`f~rTQ!~Qs&F_KMx=1{zj`kxVe3LdLj#B zwbvWooISs9W;Q>0)A(WcRNfD@vSP%5^{keYVp^Jj?3oe}A!}mp| z*b?g%7tPobO@yJ;qitpMjPU*a3#N6vKAv8`dBc~IYawnW2ziHgx>UUHHnDQR*x~2S z6+V4^TbO_KrZD;C$E9B7#DC;K2n)*>+R^MP`!9*tW_h46Y~ zXdz5tJto>e{2022QgFNjC?vz;BPWU7^soQPAac?>H>%@;bqE~A zJZD(AvvARU$9`pWcs!@@WuXS^z~XjE!kG4sTiHkc5ZR-<@hpY|*eQ~57oo@Zn1#V_yfb$NP5b`hGhUu{&Qip6wMu&hVk1Tp5C1OeB2s1>qwcYJ7Xw|TZ|@(=PA4{18TKF zR3-9TWny3FJKVLb-Yt>fEf<@7ZpE8ysa(2bQ3XZFO7TrjE;gUuTu{x9*5!Qj0Rtp@ zjbI7l7EWOFg1}OI*+K~nQgF=@khBXaXM9);ta710HY|}!SkROtU|Qu^xw@15)ZZe# ze^ovF)1SSmZ69=3LlLSCv#1df&sO16voV=_?zY!vf8j=kEyy)E6CQK7;0luRh}@Mr zK0S?Jz4!zM?ClfA;3NEgx7T|6{!3SR-E7^~>mxlOL;9RPc=Pgk@}L=gKsjpqDXd|0 
zAsiw}z#UV4`Sf&uGhj0gAKpU5H7dn3r+1-pf2|i%9U0enqVpqhRm>dgnm!et|NOK1 zpM2h~9v+Unk*Z3-=?zvW@*P-!W(c*%<%PA5PCDw9|l zQxa>t`SvH@g7IM2d$bq#FRVI3#4XVuBD$-2)aB^O26;?UK$oF*CLaxv{dQa2|LXIb zk1MOvVGQbGy{Lw_{YwJifH6Ea+B+JCmNr_wi8vg$Vsu~zs5Ni3v;QE0fZx)84n>qeq;gw1v2n z0Jdk*4Hhd8bBF%m-{WynStB>ehQ+St_<)>6w9wFRJN>@csQo2|!Pg~*!i8FIZYr72 zn}_eV0TdT)UGezPgqQ?zx!&nE@Hjn4sO1`+o7WnMx)V!Z=SNFIxZYl?H&G6YT1V4p z9u;{_)7S;%GGS<_X?+si_}gf9gnefhP%VK!2)s=|lPLopc78wW?7sZxzxsA%{dKU5 zK-mGXX=fS!3d>s@hk3mSk5i;9csj5&F?KzuOo4~y0&;?DWV-Dm88JZYjdwvYsRCE+ zkh$$?o1)ZgXaT@u3EyJCyM>rML7y9vn)S~$V5oH3K$4+xgdT&c7voG)S!vl&8=-rm zU?YA!qmTTF`OpK#BPVQQej5kBsRO_Ln0mjdmRm>j0@wolR9_%KorvlqSgAS6!--tO z>j=suKq;>sWBAyZp3s}}N8D1P!jv#Lh!t@!QW&DnDMg!Ho~M@F_tmz9yYOhN3d;;U zCQrS%Cu>mt|Cp~&Q|)iIou}08bE>;9wlN!;F~%ULJTQ-Oe~dCn8y2f-D4a=Eq>`vf zm2;qVgBw&&${Lcogl=7rpW21=tK$04KW|!Io#LjP*K=Da7O95NO~eLbIwLEHEy_4X zQ#i7TW;Q+^(j{TTj1KdcgNwd?b1J@HW#4Wp^U|B>jRBKtCFSYlOW;`-i&nh_K` z$BQ7#a))XL3zn_d92xVHEhNC!FblZD4o>2i(Qb#)Wxf(K5QKvy=aug6ZeAsoy@Hn5 zDhj9ypk}+Z^6T4|1eU5&+SHib-zrxErQH>?1^fa`LIDKYLJr1`RTyrRUc7i=c|ggs z^zUx(e0?4XlC=QE@e+;Quqbu}AI_`7{L5dw%Jp)rd(XZQL1Hi}*b_qGXfn4LtyTy_ zQQ(Bvr6@BPpLiSTbI{em`CLeC;JYdtZ1nWrTq^>RI%0-%!X&FutHUA-aY73QMM_nJ zGFy%#QpPm`E)>v)hz!*DH?GJh5{?ziW2Xf~P^kx|HFmBTCVT>edNx(cdf&=_+`fEN zoc;6v;=}KEPq*c5_B<_?SjsAXW%uWjI{xtoY*Q1W^h5xhJOvb&vtbT;3`s7fZn1DZ zNN^m~$ZnKjWrY;g`YxxwFQZ^eJeh4tgdlDULP&f|;V{H^7cI^|21k**`BNg)FsUoBcJ)ta=(>xwMbD`?lLu2>&QEm4@bzjd= z5-clw^=R1yc`GI)sF0?vouzkAPgx79w3>B6 z0w=;NS1naE;uFOUfmjHrGyz1V*;bu~Y@d{2d^qU#RKJfVW6#p-bVY$k6-P3wqD)_c z${H|>C6hP(ELzf+kP^SNN9+#!+G;0rLKvP;x&XKz(F?;$ z?RsjDBgGg9J+={H78Gl0!N_0<@=vxUC4@spFY3R08fU1lQf$V+T zpY=(&UXMv)R{~2(5JeSFps|wNOl*iiY*AyBsOvhRM_uv>4 zWNIAdE6IHZZB_Q){P(Yf8s$iwJ*fbVYp@(6BN99T5pGMYbvGAN>qNGO-`@AzDO`q` zPd1Y;96FCgxrp@fT}1Ivk@86P2gx_1rexBhF*?yq29iYcw(#LJeajR!I2n_zh?;P? z&A-!&CI9iP_iUCnauJ@@Ote_R0-H9(L zk+#9o#R@K8zK9{Ml$yLlHVCB+WK4LncOq+=vo2K&DLl2@RYZ%i#M0!7&|X4q8*~9Zgv@|5KK%09&U#{C$TrZYK#+^#{r=f_v1|(jiCC~z#P&B0)hdQ|J=mYueJ|`Sa$uYWIR9cx1IW*(u%7t#3F~ zZ^Frgie~uFr+O2wn0_umDErMiw!FP|!y)cjoO3NC&JfbQ-16r5_UAXdzx?steSVqQ z48uL9aWAZLj63#=Y(xZsWtAdUERw~L-$lBMbqt{-aOC_@qqyCGSq0)mH7RLp&|bqY z&B8@k1$`nAGg>X4m#ciOF=!9w%51FE3jk!t20`Uc!@^TZigIHGHD17mMRBnb-Q7*E z$Bj^$!vJF*`A3C_qINJW21b!j$VAB8lKVnDi@EgGpq%==efs`Q9vHWH0p<@nRH~XG z5|Ih&7J1)q@_8B`i-yGy$Uka;=%GWlq9!9Q=rVxPgkwUGfGz26wLd&*`ggJ%M3q_h zDCQV%Yv#T9xSWk-7MloOP1%r0Cq;o##H|s~B14Kuyl>L0rUr2q{@~e6`JczfE`e~G zUB%nAbP0D0+VfiNtar9wL%sp1VliU|3T`%#BhSD2>EQ5_52}UJk{52S1XN3grM{k{9+ht*YTJ0k+QRDy3@7H1Pca{C^~sxVxZ zBJs^)BWVeDI?@s)k*qFhNUvqo>=$40HoOw(lMODO2{XpXS*;;5(xOC3FxYohzt(v) zv8OH;9(b14u9~4lU5obvpnqnY=4nmZgaD*7uBydFDL^~)dP_c{Lfo4l_jc6_8 zp%E;S?7#bGuYFnYfx@BXC=6eVTz~Xj(%gvS6W^y5oSHE54rj{c6W1)L1^mPj35U!T z8|FL_LoUSo^orZ!@zGWO-;72MCL(Nj7XsY`XiQv2M^$DTU8b5Z+THixpKrdo+%D3s zgt|uGqrBvi@Qc{38n(!%5h|**QZQikt>m#TG`2w<)a1OFE|5RS4h6h*F)n3JuWG5! 
z$_t^em-$IO6D_{%Ozx*(H1MURZ@JLP49u|F5`M-A_#|W}_82GF2r?cGnxT-R)TCf} zscdT>e6y+idU9OIfb(^dU=f$ad6*cuhy-ozMy8h~NP3bpYUinDYi75ZVJs9iXjKa9 z*|43O{o%)zUwl+n%#}Mr-JsPVKmeRY4xz~Y|Dx`s$<^z;4>*9v3^b-5Zr_=-C`zR4 zqGBaVL=OumT11;HvPzZrUgWD~=}+OREaHt_mADc`1Mj^(V@6}>{Qd`IrHj>kFZ#v% zp7We%9yKivLr)b?L!^nlpWd2|et(kol#$l)eJNZmJyxPhq?QQSWuV~{*i9PhBwbxp zLwS^>vS`SqqLEC+*Vr?HH|s&K`Nd~%+b6r8v0{(Xc01n!|NNIcAysG=OjjgVXnLw9UL0~4s?g34xO!RqmH+`TmXOPV`&$bEtKB=0K@ zH>@e8ZKauX5_>{I6Yi<$ooSjji%;@J2a_v`Eo5Y-DgsT@&81jkxY#1^%oq<_+&1>J zFE8tV^Yz((`1(wr{EO;zP#3(uPtEqhV1)<}9QJfD)F{`=JyUxuDC7rn@f)9Cf*qY>3!4wZ0JDoh$w zF(|R)n1TifK+wslDxQ zY?I#OW=JJp1T$4ntoF70^msLt4Lj75=fm|U0eU+V)HEU6vdv<1NYjUVMwWR`)~^kq zyplNV#Mtd$PwsRs=;&?0%kEIEI@N5el}cVx{NdqN4_Q|ApT21Q`!5>f{k3>czo#fQ zNkUGF+@hl&b$YRu!dJtineTL5h>9H{ja(O>TWnMfp84|?na4kryj%)vWI#QpMv))M zw{j0|hJ3K)=szhAv4iN)bkIUy$BaeK;dG423z^-M2SzSP2S^1jO~t#|D?ay366N?g zFms@Y!N=I(qFJjhYM=bwe|gFKMi_9TF~Ow1c@V{%>TS6w!3+G;a`UyaJuJv-!U&3t zieHrsO>3Qq9QRloDq_h{jm1}DOeAxTlu8N~@#6&uj^5BzX;E)U^zyW3YoK%3!}4ib z8{c=n`mXZ&U$~w(`j|XxR>Z%|{YpbghrS&4l(?S^jo7;fO)MIQQVGM#nFaNhT5Z)9 zyV>;^gI&!g`z*aE5zr!6X%_3yIgyI@ zQS|5l03ZNKL_t*ZIzvNbV6)E{+O4O?+0DtT|2u2kD5s^&bN!{3);Nr3fz3v({`jEf zh9Nxk$L_=99ribpG}CB{8H%WPo;PpLM{{jC77y*!?O%U6`t?s+=cWYBu4Nff<1yBh z?)Qv~P?65Us11h`R_}JDu0_GQb1*7~cB1r}3e_vMJ9A~^CTY_R0NIX+OR*ZSBUCZ=o~*wZ*M^lfk-11e8jZwt1WTSHy^5Dc zV=0_Y2Mg>+tevLD?axw-b2-!L#*-HYYqz(LSMB{(bN_PC z`gC)1H(Op@yc7FZo~GpCLS~yVFW87rT=uHEw6dUoXVcBVIyjUunv?I9lw3zmF zl-zAPl^wU*yJ6+qAJ_l-$MtWQ3heX1k;fD%+NGo7M`f=tR}@I|YsWJ4yj09-%cdx3aoXMwJDgvwhnriGAdZR` zqjRZgi|M`6oHPBwME2)Xs=$cHUys!LTg>9sLCPLpj#44@EkvlyZhjZT(>*8Z%0 za>~KMnI3(-K_-h9c{@ZDT*J?^uPvD-gI0wdwelH!aUASP=!fFZF#?MS2{tQ%$>&CR zShEzUVXuo>6Bs0om@Rp!ak2O{m%q%K+4MmU^2KO$dv^oRA)y#M5@9aWHZ+{ENc^30 zV`>I$l;A4?lMoW1ILsfK%TNF6RqHQ)q#?%#Zdi~6Kx>s=#T7ajHKpOzmv{L7WO(Lj z+zCzyxM&I79?}!3G5a;&>Ownf59PcPp=P7w;shSKNCdJz?T*WlbwY#GQ85`Vv?yA& zdk$N3C>zU%S0@j@dRw__^Rj6SgYzWb&K*SiOz@lsz9eWcQ*d2Kg0~*oPEqKo*0Eji zC{P7QGJu$mXjkVS*(RSY(LW9*ZiCXB0|ypUQJtNjlPEN$GmJX3O%_;6c(*hvZA~z} zx_F^1zG!I}9c;$x(I)f5%W!qP{a7+d(pUxp_i)k*ll1jSq^P-ECQT)isVq$tJ%8|y z5IffD&qxIKu|78-46n%-(l5m4Y?|-$(!AsNIanM;QL&-$wlHf%Wv#ds$yCH4?|~L! zdvJ$>{9ybca%w3NM3H&v%ah@67rp;EAN^s~VUKc^s=u>UxM>H_98LftN+xnnb-P+j zA&H?v@&TxR+Mft8CR(W5%KdGpy1nRBhllB3eKGi(A9p9*tk}cik)KC`9%Wc*g$Qd< zE#uOX-UqxONiVQpW|pnlMv1i?q^08;m;y+S#xIuD(f{~V`z`-mr>`LOQT?C<7_(9| z-QyCb#k4ekZ3@536o7xy64P{^X_F{=+}=9cKJO$HOus&_{rXQYe$<~fS08AOvenb! zlm%Upw5{2U{7xL74!|S(#D>OkxFZfVbwPdFHl^FV06pTM@jx1$e(Bql>c!e_(PBFTYzo(t2hOoZ#gWD#}OGzI^_+9+;x1?L9U{IdgMG-$(2XrNEatMjq^ z*M%|3yUjQ6$A9&nnmb&cpr2Q!U2npkfEg8pV9@(zqo{yaYxerK55Wy<8u8Y0=_X zS0#B$<`PbjLiIS-+(0lL75yVmfPrikIyB?I$biIUqhF@WFcQN+JUd<`wviN2ud^h- z-5jyB)(1AtAzk}~pUuPlVDg?rwD2lhGtBo-Tu!XzwNE|0*kU0I){SdlwnWeU-sDM3 zfcVrr$kSbi=azT7(^hc->+lsfU~L|4{kB2Oc%SFnDx=JIIp2_9uick;ojO1sHoSzz zu-iO7ojkPuXpi#d~6QRo+ z%51C$cFWtE41uQ=>#@I||Lcq0Z+`ybwDPc;ed6$;YFh+|qTAY&?m&^o*f(xf_7~uD z2-qDMfr0=0$<|1F`|^32YtX-5xBvThm5-a2uZcGEse#@^gK-uI-f=L7J!+FCW{7Qq zSyZYexUoO2A0{cOlN_|FZI@rY?EK|t^`D(>yW6{dli^quIHCJTITAxL{h`_%49^qB zc)Y7^7ZmVny}8IhiHfp}R*j3;&w_a&O)D;0{J1P0u>Clo%txR)(4e+S8I(z~-HX7l z1cVzr=ERV(WrwE33rDKobf>ncCp%xZ*vVOA|Gp<6_v{kFOK`n>h!q&=yvYV(hS z>YDG)P~_eGR{N}$k-4-s4)ipo69;Mn1on#My&G14^VQ(0J||M-Mad8&8DN5BO`-!K zSv4lvBjBfaM;dr^x)YP(7+Aci>6&)XRWtPr83QiAYoq1geVi$fcRio9pT9h7{^YFs zy0d&g+P_z|c34!sSY&S(Yk(qSAUwl9W{41za4CEgtEV$RR8R< z`iqy1U%YA>x%Tb*#`*Jy*7~MfW#gIxVj{pi9#*B)f}+Jm&X9@%BJonUIfF%%f@vj? 
z6c};47_e$P5v^MsLR%7)MOD;=2)vzQI^s0^vYvR_Bl4eMB!us`M_kfp?WdQ$!~5}R zHKRMD3cPsvlIkO6oz{HX>+$@ADes=%yn0E4*0$Mox~z2dhrg$Hq8xr3S~-ht*li}( z!q^vx7~4Nz!Rdel!3Ewlu5s^vr{S^9|yv zw!f%v-}ay13~KL(l{ej;VM6`qRZ|-XjdmXI3}#U6%rK7fM}ML{vR~&j+&}rJ;rr1G z3%xTM=3&aw{rd?KY{ho$8p*Xx1ZLoL>V zjJ7MrLq5v>=c%LKNdi4<9DX#a{`9i`qP-Y*xX_s&+_o8ukjl!CtIi-gyiW83q?EPy zt2d+8tJ9PI$?Em6_I6mGUH?H>4N9;j5p4Wm5pSG20Hm`(`gf!t+M>9b{6_=SCmO3} z;=#{~37(4W#i94B)Ah}N`u3-1wd=~_Q}N3kN&}Agk*FH@PiVnRBZ5Q;U^Eacl&_kD zS7uIJJ_JT#(o`irH}fUq8pX4Y#Rd!!whN~1elY@!H(HL&x>8R9IdKMYX?c|N@h8G9%w zm2F6flA@h294%QsHQ$#G=xe;FKKBm4kFx7{9id|GLK&G!ys_CM6XUxO#u_`03-H46DEX+4-Bce5RRv1r8jM z06M+o&PHWB?6-yOnC(J7&dkyBCz9e|ZEF}d=4{RE6=38;j<{B!E@R6G;mW34=?<0cH&Ws8?dP9; zbFm#(9%=@gQFV3k+7#D>_WQ-{N!Wuhq7-P*P(tiVD^d`0dEy=B&#i`SsRZA;xsh;K)z;J^|ErJg{q+d$x}b6(5NvcX_`<* zvx5i7T(2Q&ge2auZtov!>uImv9#2~L<{%EofByQVb*6nHMYhrX=a1X#jK|d{^e}y% z;Zhfq!Oy;Y@%;JwPW7oi2n=8#>@4CjS12mTY?d4cra9PMrZuz&7>xzvRbNauEOMo(Nx0b0 z*jFZAQFj_o3yC0a8l9iKypS^ce-_n;ilVN==e^E{k19GI&YJ4wOfF0|#ANv|oy~fr zuiN2yAWv%bBq=1y(eu2%y6IN7&;4CrGvn1ASLo^4NOyzobULhVzk1)6s&Q7it#9Xp zetWT{)R2*RT&b-JTZoL@~g>*>Se)8Vb#%^XfYP2(XHO=ee2eP!S_vmPvcKqE-0x&wa6Gj8E%{yRRWl|1 zBl=NgrwGw(;f}VS_2#&7->seuDyyH2Tc3}qP$}8e1zj+L*?}qTS2GH}aEUw?HMAK9 zqD95?{ORU@^=h4O-@m%q)MSFZ>z#kBUVNM%uCH(Kx6j5ITFNSe9#UVbJ&b~z_xbH8 zjw3OZ>QhzKB_2IpeLOf3CU3iWt^ENL`C|WWuzS9r$!tRx<`pmzgeOq4>?cY)r-Fc^ z*+pfrc$%2)jOuCJ8v=icXT8xDr)|1f=Y4{GJaOJDcW>c7}D zX@~6o>OcPdugVknsBkJzj%%b?ERjpa;vp3?3DLDj+gl`R=^N&YcH;hM$7LFGps8Yr zJDEuAq}iN!Z!)P1zfSzAKMV#@UNl2xZ=oWViL`xldw#p^R!(Hm|hQw|h?LLB)&5FR4ZF_nl@_0_Ffp~-S$ zA>?%jroEJrv2VT@ySkN*IRLnPZEGZM2rOq)v$ zP_!XHE+Atul4v>iG&7ZT7LtD2*irnRX*o{P@Bw%fz}gbe1A3tXXsH_?ww-cfKq@Cf zVi7d%OkiC3QpMoTW_NsfzuZl?Rf7xpMIqR7IZbLtCw)RQmTa|j=FyoF*~Fo5K7R0C6d-RxX5h(o&NFkSam~^wKP!IX^k5#aSVbbioQ;P8 z84d@|%x=0ZXN2OV&gkNPzLp3=`ZwCAT8|4i^0`4Hx=s2Ldquy`lkA~dF^DFp{XX_0 z`#wRYB$A*hV2~9EKOKrSmrTl>e%plZ#h|W_&31XGc!5uhRd;=TjhWaJ9~9+MCB5DCQL!9c?1t1B zoXyIhu}W*SZCy@RO73}aX%8hX;N8Ynri(Kky%bwUVbB=qVNkJS6vm1!+eK;^5S>X1 z+g@#V(bpY%Dw}{`E|xtBXXPai9$rCBm}c(~>9{+4$Bsb8PhrkUMud3Kk*($Ja>JYv zt2&UOXi-eIiLoo7o<6Efj~sH3x6F;Jz(%@5&sl~G?KLmQiiH(`+T06@6!9NdZIj1G z%d)wZ;410#lismf20*kX_xJI1ywQcv;~XyeqA1!cV0QH%{@c%8`LZ9Y3{x&MEZyxp zSw~OfuNXcK*n@m-5mx5+@xt}dWdyPw;$|=LMSpd(PyE@V{D7L@@P@9+%i-y^0Ncn8 zO1W4x%Hbx$nA#n_+RVY`u;I#+2_ccrmH^k2NwPw;lH>;P12K54gTyz1$y>D1MOuVW z!pkkDR&Ap0Wh|(*A8j&UrwFLs0u#%#{@5-_3=ir{oUwkT z(EZ^{zX{POwuNR$LDw-I`EWT~nKdb=VylFwvz*}yO%~b~r4lK`#mTON(f+Cm8ncr{ zhJ3A=YA@R5Q4SRe*)Jk2wrxgO1GV@(2y=0GCgBIMk2(e5yUku4nWf@!_^E4eFVvVC zD2Vv-)f){NT~o1T8NlX}@~6HvZB4-$kj*Uy*kFRDXPqJ8iRC~F9Lis1zkm<4AB`un zDjb&QvxIP^;7#l~o(D}}2*b3nzAmRb{aOv?(W3S1uA?`efslhH&&(Pf|JmSZjP#$xj0 zd)A}TCk$80ju)R8FHgFD`0fv~QQ7mwnvcvT;Y9eDtv*2!S;G(2WR}X`?kFV<0K26E zU}=h;p7yq&ah4C&{QK>5zms9BGO4AM%FSokm_ib|4L=5T8CsNuV=-l0wZ*kn0l`Tcq4=<06cPbi3ve$ z;Po!gkSWvXoKfL`*H3{8q7G-n7A?Rhc$BBdfhE@~<7yt`7AtwlJY#@KnIO(NprD*^ zCK^ZVh=NCsSw^f&Pw+sklzr}fpz{igt14d-lQBfq(CSA_- z4|LD*+UNFVJ)UrWmTm~gf0DVrNK+^EeH#Qo%rlum`JHiL)`lTFpoH-qB`Hk~`K#J^ zVgp^X!#vGW-Qe?_@BfzxVDy|RlSjOSG2REh7^_>4K=e<%#x?Zy2-i;9n|xxcc~gn6 zV1MqkbP(hX^kcb};%1tr*Sigv&?hp?vRW-jiOEvI>{D+stizThV;R0wC`Zz*WEG34 zLkMEE(-sxb;zHlhH8e~p?r9!>&t@Z3zFtuQRuG8%$lMp}O&1M!(Htp}VbdT_D# z=QSUp$0$6|?+3lp&;{|VSws?dQm1m-8&M{AJAIs1U6Yxj*WYvanyraQBs|lZEhfB` zb{qZNT;75EkBr!YombwCGBBa24iU*d^jM$!&`}dh&MU}4 z2t%SM+4-yg0fJ{7qae8<+7aSV>rpCT(tF>NFb>LA>Vz!-NMW{x39dp}v96MF4r@6} zB}OM%gMjxWF%<-#5O{xx9w|noHrfn-$G{)o2X_#x|Qfju#?dGtg;@R8r0+f`z{b>!VPRUkp!4wFdlmV?wn$7BX>f>_-V{=Hf0_oW4@ zq`}7S>7G$8RHOJ+=PNvmuUL+w6lC;bd7QxLDD@>LO~ri&ZqTj1RXcd-b9SU$7%cs` 
z9s3ox3%>|fd`}2gQsfz#LjjGV>9JHd56wN=q{iI>Eo8Kq!NbE%9|B~fN)&$-l|s+D znoN2VGk6YQi`~zWD8q|B%}|FS#&dz zd@3D&ymi63XLJ#UP*}u5g1+ghQX5#w$dK1j{-HB8_Zs@y6`hg*;6XW}F46{hXYDUR z(O&xOl~_e;!;w&Y)F%`j)*B$fcHw%dQ?OcQ5XObqVLcR!Y^}}z9UT#CL7@>O7KSOw zz5|Nng_{b)0V=f`fc&<@GTaOW0j3j2u;5>4JlB3gZzX1Kk$3`=#X3%VY>wSZeN62j zF%YH74&n^&vlj{&*`8i%-W~3Ij3p9RUjmH)!F!v(aj>P5(z~rBJBA!Vx!j7UH zAl@WArqeL1973RIyqLEa(#X!lj(kYQh@d*T8XQRZ;Yv(cL3fyi?)?>AC71=c7Tijy zgeO8Zsxeo7JowrqpXkNL?n{Ug>_Aax%p969G8edBOhqr?e#PDF!h-GBn%(c8u5oY| zG;og72_8yTG5gk!VjoT(QzP!5umM>+W&6^p`7sfVR% zOXOR`vSxRZUdgI)Kso@jd|VSb2B)*nTv#y1WHHVin|6`crC~lKZxe@?#v-`H1wjirHd;3( z(TT_hvC*@!EN4|lkNDHY`6Q4NOs<)u1#>XaBqIQqv065a`SD!)@eX>me!$1&Su1VY zZ>Qt86LAVOO6zA~J5k~Uv|T5qdx7$!$ona%r8J5V>Ce&O&BDw(F$KZU+r7y}wrp1$rlp%=fw=a?_7%rxXrFs{D zTQFd{c1F&H>~|v0h?Q*p*6GLRT8D9KlOF^aZsJ%Wt=Dr5AT+EKHIGMqs{(55RfkQZ zpmbXEr(PFzR;Yv6EBfuU*(HzUiqHE!YN8%ix)@ZM8HmUVr7t1ba%#kk0jUl>>ZXutTv;Feg`B z01-(;n82&aWC$(6-P>v2$QHIBI^7Up1wQN=Q9Bgp;T{+6K?w9s<}Br-5at?ms&TVmuU$}F;3`Xzi-p;WPzW!iZG;;dqDwh7wBt^xKu$K9974W7=$96N5`c&{ zLA{{ySOYq>nFJb1OVzJJj~Oi!#%`T{e0cI-|Ms(^y#j*RhIRWh>}QjDgggAxZ`v zie(v%g|jJjq8MCt>Xl6jse_<7Rf@y$7IM_=z>y%C1DP|ouWMAML{ac`{sBCzy z{W>b2gO7a`3f8_&S5dko(v|&@Srl%x9&+Wh*AHhadX|OBlvjBVskyl!{bo%~)?Q2@ zRcEz|X3aU4@aa~koEdL^u;-b_N#~@ErhC>4fs|elcT6T5Nxndu?o=28SOQ7ljGY4D z+mv-62GF)Kw~$K$IOtECB%({`(({jF+jDG!^B*0YMCyM1J7feq%#k zErjMb*9si=A%wx2Kp6EBmRuc(jW&niQ6t)O1Ov@xH3{X74I!mZ1{X2T=^9dT=-B}a z{rv0&?p-nwD*7X7n`F9==Z{qyBA4{BVD9i6c%e4t>vng{iMm~}0+)@p&i^{T+~8a~ zf2Y~cO5y!IvkkTT5a-p32D_neIyl7;Q9-1A;CdONlL`dSExIIXV5cLTlmoEnYUHgImo=HMb8|8f=bpjiy`{HzN;GkM4 zZ34Eu-lQs(5KZG&FpA#^sF#o5p^HXS7ZCmTBs~ho!o=Qsad$BLU7J3`Wm7W|D@_R` zLqT{(((>!87we^_u5|pZ+*xLUhA(t}iWd)ot=S9F80{g)niyD7T*6sMah%}D0peZg zcDN9O5*S&0O*BYhs^VeGcoMDMGKM50(Ufk4200ZVqR;VqC9~!R82yJdlgynuPzskS zE14{tCEby-NGy?~sh-+l+=}uKdwh3_5;-Ip~!YC?Kc6BY6)_UYYe%piJSQ{F44U_OGpV*#l``jmf z@LZw}T^+BB0>?>`&wX;dEyAu$PCh;}OuR@Yd`i+Ssvt=45f7vAVC4p}g@9=mY3!VV zQmz>R@i-}mRIgBzo{t2OH?qtDGaBd)Zi7r3GT~*eVGw44j*E+v>NLRpQ5yQZdU3%h z;9<8=v|*hzP#dB96DAn$@n~^s+qIRjfs$WQK$LI`R}X&SfRFYKKZZ?W3)yTq63;Iy zASv_IB$NZYNHVw=Jo7nMcpjk z6x#*d2xgQF>#%#*Sb&9*gT#_+LITKa%65DOXY-Y#fOmqf9lUj76aJ|iYaBA@T)3~t zo=rbNPxCq#q>le+*0%3;7!8KS)I*lu!ZnPo@*3MSsM6$pF1;W3S4%;KOAf-bN6`d0 zPax+?c{Kp=`M_-a&P$``+6X?7jQD7U*6a!1Yo#FUjTTaD8_sEeOyY)5j(Q~}+F8@m z{8OS3^2V0Qk6wQW_|fRoSZHWFI4P2|c+o5i^Ilx3UnW3sY(VEh@R!sCu%HJ+=ugfOMp?G|j|{@+Kaii3 z5=a;1N|QmKqcW;SjMRfB@M&>IeL=C%Om-s1jagHe=s1NPa~#Dovrh(9AuMAH!Z_l7 zDZa}vO?jW#& zR_W9<3%gA#nxqEcRmZKtc_I@mlI0YFOtcj6og=w|M(|L`fKe3s4Xg+`&$KnVm1|G( zVEbYxl5_sfgeZ+({3(s;=$*KM+jO5PP7ZwaNd=O}(lET5YE}HV*j=J=KVVC26e<_$ zfJF!ylDM!)YNh@lOj}gu=FuQ37p~|QK4IHIvNoZP&4Du9ZAZJs5S>JJ@~@e3#dI>K z1`+UK>qPSsDbTyLcF)9c+(W!sWP)x%Bq9yV;x7`jq5?98SOqEbJlp;9XwII5`Iuf_ReqXX}FN+-$g+0+Wgyc4_^cA4-<26>} z^{x?HXzze-Rba5GM4x0ABp@{d2eR&jY+ml%Mp67!Wu-b(;Wy?Qgl!JhETB+^3wn67% zQ|@trePFphhkujth2PK}?snVp^2blBx2rB*YW3wIr92z)wA@4FcmoPSlCY2Z;Oztv zXPxEZ7Z`yP#i@bFDT*&U*&JRq#<)l=#3F%ggnA+wBie-OaqY@kZsHbsRC;isnWAFb z6uA<6#gf35rmeY z^{q*#DZXzQRkmMZZ}OZ{FOB=tWtFW(vxyNK>6K(CL`C&;9Kn#1w*ziIv`qjB56zY% z3xm?7GR&r+2LKlS>rC^{;9?0?`j#iaZul>V4QRH2d~|q=j7})R!xMM0Yb)V2rc0-l z@of4Q6y|bA_>EAvzju|3!VjLdg;1E9a!o8Ltesm%s|?x}h;Xq2_gI_z5ewY@f6J2qC1EcP Q+5i9m07*qoM6N<$f&tDsF8}}l literal 0 HcmV?d00001 diff --git a/contrib/Overlap-Recovery/inference/test/0.png b/contrib/Overlap-Recovery/inference/test/0.png new file mode 100644 index 0000000000000000000000000000000000000000..2de2d1e7760222f308b326d9f1c3cf49648125db GIT binary patch literal 6186 zcmc&(RaX=Ypj@R>LApcflx~skZk7~~?v@awQ@TXDm+r2W?rvB*mnD~_?swnrKe*@2 
znKKXbG*5G8qSRI8a4^X*0RRAwg1odQ0PvRTKaD{Dj{sv3JXQdJ7^WaCsqK?<{>R2l zM}{a^4IlsQEH^TWE6CmmE82b_98(@^Bjh5azE%H=dTnntD+>)Kof4kRTNWBk92UCh zFdaMtV;KU1_k$?FipF0lEfO>%{@#-}nf|7iKZ243yf4N(A1D6^xgU&^0zzoe|1S!1 zQp(7`$d8Pm2Cs*dWlMgfw_y_`i2+;8(fj@uN?o)!etEs$8jBeb=iJ#nT&q<+v_iwb z+9D=eZ8dp#g&vNPy9Vm^4~03zm&lDMVbt0-eUv(=JI!o7yPiH-u=S8#=}A7NMm*X% z*^Uw$@hRHnXS{Rfpr?9PLk0*!Ay+9)j!WoRs+3*?yxmN+wzid~D?jwcg@rTb&k0TP zdwS6F>^H8ad?ot-F@{Y9SK2J7!gD&P(^S>uk3~UAt?0j0gjm6(45o1LQ3=)(R?=EyunJ_W^UGh(I1DiF;V5KA<+<_V`W_w2EC^0+Gsz%AcdIEOV5KIc{9Z1 zME~<-UVDZ2n_gy!s0|Jv__s`bfuTvI-mNDiDA}sKJ~p*NOy^h2{A%B3i0E{(Mil!| zZg^^DQfLbEciGW}wYph%o>8}=%~sP>o#@@thPGSrCRvZ&Leh0|U+oDk#|2i1DGEUN zG;jv`(E@|3_E(n@9TiVvbPAZBNGe;~mod9enk&PGj$V(Ah?P00g}hvx9{EM-HW&VH zu^%mhTf0l=er$A>C}FwQ46nRz3UOU(hL^QXNEe_OT}IcLLLkr}83;~#m=Bip!;$}! zQJCAdhEaXkZb^+^qW{vo8R zs{1r8SFt1HJ?o~A5po8&>{4U5j!kwoD^K4*Pn!A*KpD{rRZ-jp>NfzIN41J@V*3kU z;>;)MXk#+ad|ZZH)=MqxQrNOxUFD zP9@f`>)Ku{(0N--)_(up9OR$U`aO&#J%`%bk>kjt%5D-BF-HM7(%jnPup$8O0`3*R zXQ2T;79wb+9&T+RZ9nJGe0u`#EZx-y_OS-KPog0+nch1|+bnG6(zh5* z<4&TR=Jc|E36<+8jRoCJOb=4j`Tl(V%7qKujFAu`_*_^i>w%f~w^8Z9ugXEuB<&DKG`LO8A3qbjd1FB&l1N<=%uJ2zo{18Y! zlrt1!AFlDxbczjaH28ZdZfYk!2eI}(#h7aPLoAOGJyg=A%5ueTxRezXvJ!`qJ;AzP zfkMWwfyX#mY2|ufwb@$8P2l4n0Bqk+Xl_H1&D4< zcEG#87#2ltz&^H;8q1ysa~g2+T64KxZbI$zw>v3>E2|ZM#Fx5=+yEya4Dvls=|&b? z8h4<#fMCCM)-H*t$LTX>-gn;oL&s}g0t3pKrg^@kutWsS2=j`WX*e3cqawW3S7=Jr zfXRg_w)8j9e{^IYsNcb~`9YzfJ2uV74~BXBIb#x%i+5-3rWcy8*Q~MwVmcjT|P zS<}haKElgwaV0%pM%(hZpUrXyXWzKr8shOk?Nh5bI5IV*cCxc&b(E~E{+cp^?^4KC z`%YNVaAHC}<|^N~>P%;opI=Bsbj-d_ka7Qu18)ty;m=LD4i!@Wzy<4NL!KdN(72%9 z83wz7n~2cGlw@07awsaAxji&*nx?E!&D=)d!DrCtoCX0hx;zQcu~4~OE{Xt~4Q z_wCGb%3Iv=%5i|6PCirxl`c<~gL?fTE$C0z@h2=8mJVg3B=Ot=E2>RTc4CiU73%8& zkSm>=k+S`CsZC6!sx9I2R zpLAY)Sp}rMh+RTp-$cPEO3*i zp?6I%B%@1GGVwQ?b6VIb?oUjSI*=^^JD5~%<5#J3Ya$G;RQ}qk zL4u@b!p^U)4=bAyn?0%Y+y?dqE)?Z-cZWQ8u%-p64%szAoGNw3n-?%y(dEx zL;oXJ=oHL+0-4!nNtoa&u${+I5BI;6t}o=En%L}^vivU{L11%3Xm`J^o~K6wUW>nd zI9~b#=I7+jC;Y1@H^U0sZ<`W=I+Xvw%|R*$*i~3IckdWf-J03O&^f3Sam5U@rq2m!$yTWz z4>o3llKWAZDSN$@Owa)Ts9G1+{6k`fVpxbuvo((OZ(>y3Z^|BwZ6o6fm|Md1*a6}? 
zzz%G{_P%wuj1K2`1R=abTwz~#{h0Bc*v|3?96#!pQl`B)PSPn_VaU>d?ZhO`+Bt*T z1y-B*s(xQ%)yyeYL$}^neAielf0CbJkeam8q)%)tYH=Vgwxb$Y!z`vOx-N79qpX`B z*eGX}D=wqV1^7VfnEe}{Z2pxrnZCMrT7~j5J03jV2N79-j-tl?QoC_SJ`br-eu{?S zb%n&pja0vAo;m`@X%F%{q?!?FEMH{Q%s09?Yp_mZ+TkqnF6>SMS7 zPO7t4c^qgvyxYCod?m=-j#_}uc1+2V(K&kzdm?jow-_&p7V~FYDxDD9s#aH6^ZTC< zkaRS6N-%PmxFzUuEkWks&MJWKqFneohD@J9=i#NhtucZFB} z3n|bQ;MiTy!JHr6@ANy%;`shXO_GJZITLn*p2{vU*QCw0-|4I`lDDEJd8v8 zDFQX%l~L?-oO_ceVUOQEWCOzcO|eMf#JX87pw!biA+gpd^Y5}80N`;p>oiDE-=D{2 zOa@l_r0kJn9|r;na4WAGT)eCHT)4LJ5x4^=9Q`>M2(IVLmeWO}MrqY!VIez2nZSVk z2JD;Y)|c0kD))5(>EU!GisfV=uB#F!=~*qWxPCTyx2qGAClWw3sAG^ng0J?L*nCLf z=(N{r(^ynrZLikr3%Ax9N$*?JO*}SBr__Bjvj0l@uHeiG0Tb6TINDOu+f^Kz(ZlI8G_4vP=@t=ODzom z`10a7Abv?YGy|%uuV*@Yz zh^?|6qCk|oG%YpkS!a%az)uh1@3=Exb3NjoIMLi_FN4&~UuWTROvVP!@mUWvjrHUC zZlFjQzP7MaZGGBls3{%^1KzFrAK3G9T%}U~QEhe0)Qi;vPUKyx`8AN+ch%NX)sus% z+aY-882MJ1b#s=U6z?}frhcjuJ9~h4zxk$dkwALso)hNjU9oFuPy zd+m*AsO%YOqL`E2&saVCurvf{9(;#3X|?mf7jljwnLE-IX{stov&rIZ%bT7?6S!Ve z3Pt|gxYR+Qu@S1|BOAC?{0|1B9~Xj2>rkd#kq*xm|3o_2aoc~fy)_2ZsDt}<5U^gL@T^RiDPBU^#a+I!}?z^1_iFD(=P z?m2Z)KV7U6AIz#=Z){ewPVwC+VZi0?uDy^!cy=6e0VM^07n-zZebY-?ie~I!p(a$r z9CQ;+5;AOu)g#`96uuSG#+>2of%AbW`X@WFg&7d)_hcH zga&D38g*cko`|Q8k>ofkCa8+D{Rx)p`JPx%iGg82O{>@~EH2uCw@9yNd5>e|`}6tG zD-VnAuh4ypM=26#`Dm%3xIxQWg+o_tFDBQ$NxOJ?1*JX7(=H-rB+_$v3Og=m~{<;toUNO*YCU-$dB_|$eh zY#0{#FY9Z}87SmW8rOfGIGm9zirM|sFoDGx`>i33Cr1buTWk`p>k9qZz2@FCEe`xr z6C#MZWv=1V=rlL-brn5kt_%$MDZc$dm8*tBzPoj+V~HX15hH`+X^nn^;VR3=CG%V) zSBcX#SfnzK5~Nz+_Bh-pt3_~6C6i!PAxpwz)G5StA;!igb$EA@%2pyLMZveJ81}oo zBy^eH>{6BlMR={v$fwO}Iza}=SlxIOPY|4)MarufSzKS~{`Ia{?CeB%+$S?T4gFZD zM+=k_5TI=LcOV=)vedBmbE9*$U0>Xq)f<4e0M_OhV`cfqvsnEB>)}L|mEMy1$(o?H zB3R1vAlu2JC+W$rFVK}9no&t(b4ZEHwd3RQAXmz>mnpsOrIn5{nh7(P^u}sVMM`3= zY|cc1OLd*iW;hp*yAlU}{Mo=(jFcDcI2Ql>$MJ)Q%SS4#jw}*Vg!$toGLyYQH$-H) zrg4Qxl$nH)dU5zEY;E-<^CldRA$~p3 zg}!HWI*^(I6H2EKg`a!z~`Phh^pFQd42&_`}8_HJzjvHFzTjer~YCC%)xU5rK+|GMJH_i!;c!|&iE-wBZ@on`h*s3AqHnZ(I*>qG9A-H z%VaxA;KyxoyAu55Q>@9?tl}|bC5+dKl8I^#5zP%+wfga^#Yr(Z#|8C%@9Rg9{@c&& zJZr^P&;z_%_V-jy|LXbmTF3o#5kBlPlWmQCuUm;k+^zRdeK6jt=7uvjGNDFiF718JbR;7bpwSX zxq*zXd0LqRBiFI&&R_bBfZ9Z)887YdUL0q6sG@&)wx5zmc}iTjm;GwhB3tBxQ}!C0 zfZdF_sA2a80f@V`@oD(1GYkCZPVss^C_hzW#qP$Jph>v#WwU)R086|HU22EtM=Lp2 zBEtCiT*rD=Lr|83(xpx9IC!*GW%Oq#ZLgC`%cxK8Ey?-xPoKeqoFa<+!}ECAnvuQ* z$~}6l9P!a22P;9hJm^@IvLn{Qy#UuZtSnySeF8)Rq}Q~;;owYE;`5w|dGV^UdAIjB zFtWQDr2Ll}%(e?SP-I#!n-xY*}>l z>rLNoWzdm?t|-O_fW+|uEiLt?Jgg3y2|#|w--$)lsHymei6&E@R4^@M5;8X6-lY@h z__;m~N4{q39t-2P6T1?Q$4A{l)vd2RPxd_5yqE~pA^sk(mHL6#2Qfk=@Wl?ptUC;A z?XX-T_{lAd;SNsGCBtdd*bU|AYy50#6Uj#LY`;bk_M>hH3t+O0Nq;fNv`#<>b0HMc z_6-Wi^8=IcPX0g-S{z~X5TEsLVsy4I<%&)dk3WQB_Lb^dxg0x@WUu3p>umVzW>AYJ z%D=;5=hgF}={ks7@bwr#FUrwr(`ua14+;K-Y3)@C}&u~i;p{rttG&vX8Q zB3NP*ZPPH3F38lL%^VyXUFPC;#2-Sa?ja)GPS;CVg-!`fhlB z2L9#`IpR%~SI{@qF9(B*0KituoE9Ss(9a|1mxDb=RJzYe!8Y>`Rw^X@C~BlcF;1WO z^(po}InQQ`c#A*2>GP$u=YN4Kg8>ts&+)2b;f0}_gp|f4#kREHC9Pd1*!hu@%#Trqes0WRS$!< UGW|MP{`Uk>kWrPc{%RKTKmI-^uK)l5 literal 0 HcmV?d00001 diff --git a/contrib/Overlap-Recovery/inference/test/1.png b/contrib/Overlap-Recovery/inference/test/1.png new file mode 100644 index 0000000000000000000000000000000000000000..d4b178a02bb51bc48a3ed7fee7c1a461ead13745 GIT binary patch literal 6092 zcmdUzRaX=Yw1x+1q)TZ*K%^Uip@)zdx27HT>FyYEVCaVP zo%1iwv-aBSeJ}RyzIY-uz={NT)OY{@fZ(GNNDBaXLH}RA!})IkMm9+}0DxDCA3-uY z-kHcOs5!a(Y?m3B6u&qwMDF`uKGTr`F$qXs216b{&b2A1h22m_vrXRgmQquM4U1&< zC9x(6CS&~44zUh*H(P%0i%8lH!Z)}d2dA$UEBB9E{=*k*^`t8OpmeUP%xvzq6b~Rn zjPd_uvOqT#fc#WSwdm*Pn-vC$IAZI)0Do2m|A(kGF?+@o+j#+Mxx7Z9bL(;AtADo!~* 
zq_b4DwAuXJDdY|kY(IJvLVQ;EL3Bn8XTdG|J|mp6>JNW)e9fRz%t6WIx#d+om;h8F z{s~OvQhmm+8`_h&tzxLZnQRj#JC&p_i=rtei3hwjBnoJNzbah)ZDz}sHt>F|UMT#- ztDoG;Sik#PT272TlY8R%eO%2S17?a%WxIjis5@)8yKxz-L=@Zp=pPy?**s*P(i!P6 z@mC?t0PiBD9|fytIFFOTD}hQwI}lSZ{2PTPJiXi5tdvoB2K~>LMu8jVuyoEU%U%yS@RLKda=u*ZYLCzp(@!e z0_$yK+E?8!ZG0^{G2#NJJ;gX-A+j!_?h+xCjg-rqy-2t>5SAjLU^K6upjDXB6M=#R!(ML&=X7OYWs}9Ph7q;7r4euj#IrSMja!@@^uL# z6^szm5vAEjNQZep}N+6gXW9J{@6$Xnj5b?!=nzwZVF5;MT%`Uosa=f&2KWih+$nAg@k zy7EGJ1&Gpp3P2Fg(R2*Y5c4DYV#`N&Y=@g{S*8dPGY=D)FHPEZtX$I@x!534Z{ z?tH$N3aqy_R_G|dnib5L3#Fv(c)b730HTXq(`Fk&;M{ZBEkY=@JPK{)-2w)J9@!;Y z_+Q!RXek-qrGD+uKrQTIVU*jN9KVajytk&8u*EGHSMEQKW>KHXjDc(Eami-dg-J=I zCP5u4erZ`m>j!9%(jrv7RQ{;b8TSd-j{`pq@BBl{$(t?+`IFmM(hZvo zZvM3zXtR2`iNEx*Ey!9XCggvK7I6B<&LG)D+T7d}kcJLYAtQ7Nkd~*T6RbSas$bH; z;q~Npa{t4=DtrxOfmGo$+>t8_s^V7Hl`rCwwrvme0svioDO5p2^!^a9iCU{5FY3L* zCoQTiwjusjvbjBWCH}G00klA+Lr8uRy-#^X5u)E^W1#}6O&!Mvvq5+eq`2uh+&j`o z6j7Qg0|j9e)64Vu@(pv&uulTp$Dgyge`L7F_}h3`_!YWsZUi`wFIoGvRksVn{2c^J zFBfquwo;$96Uxyo(@BjtO{ePQOdU|L?Dx4a#`a=x@xCCCB$2dQ}}zUu^ca7 z|D`xTxA{}nhWWzQ(M_Cg_Db6iYM+M?`u*%W38oezbij}O_c?`rus)gRzXa`FKi?u9 ziU^`0_fls^s}mmTxqrSoD%D&)gtSpSEH8yjaM4*1-AyCg5~tiwm7 z)zNNJaUm7THxdgxnktwvZkdxh|M1R=#j{?+8%g%1*}I@rKkf`BdwnGyHO9N+Ll{@rRwkEmx=2qdAYy}whjYJ+nfT6P|f$D{1ePrO%h-JK{sE&4!=}DZNw+nsW+}8@NzIhcL^Fvri9xl5r$A zrjV<$I+o29gLl9FP<2gi_rdvx)NSnS8Vg>2ZqJHmeUqJoM^`SnycquvHn=b;Yfob= zVQ*Y-C9xOz7o3`U%Bd^sY#Yv8P-hY(u&72>ZDIbvWXur%Ku> zORXHC$fLw!$3Z@d{cJndXf#cCDrG*~N(~KfKxKlmwQk?W9nI&$Ge3D3i@8?r3LzFs z72(q(u`uJbXZ(h&A_w`H{G}Ay*^Bnbr@|#uo!GKes|C0cH5X*A?G51nZ9Kd8t7|AKuJ@UxE>}6ME}BG_Rs3k( zXn*5dtC#a{k$4)S${S_JB`FcR;u1dE_)=#Z8&FQ?0VzvYQMSs(-_F*z^OAJfk7^R9 zQg)TWW7XsCzyG2^SYxSQ$;@TFcO#Esj*+@@*lAO_h9Yy02kYNJl+sq_zu!<}V3pBw zzCh5CZ`yi!=7vSYk2X@$+wo5GXq4L`&k|5dlg0B}QT$GWI%=J~22mI@5NyVz25k3b z-Ap=0=_B+*I_oe3J>dIkmRm;`9lqMfne{O(=c3bCWlqHDONnd{B;-YX)^;cza~lH4 zuk+F!-)R1ljE~8Yo;nMM!>wRfwvO=Lo!DjFldIRX#}mIIcs5I-x%lOMdb1@x_n^); zRcEXK-lyzDF|%`xMz5v`cc_4MKk&;nqP7U-e)%tirKLgpH^}15faKNaRt=bOs$Cg% zD1Bs4r28B@4s0L4IL7k9{5tMJUt4)<%?-vokrV&`21~`-kgeX8 z%2r6kX=7z{a;X2hGsm7e_hlFHfHJ4>!{KTq33Bin>|!ln3k8{F(8$hqLtDD`ao`Nb z>(&f+rSJCd$Mx5e$2yi!IegaCY|3Ub?dv#juKUKhA28C(*yAc*o{EnL`M0*(V^-x6 zKiL-mi;zbwBiEnZ{cb^0`z32Gn?as?!Z4#e!O|snVdI|>m{MGmzKz-guU}b<0cNISOQ65}ub;o{603_b)Q93BdCs7-U*fg!3qLu>B4bg%~WKdl;`8lj)>$csFw zBEEB^TXL)nU@d5c4SY=@eZ=V@pgCG~aXyZEz;Td6SU+T%vHMfTEYCFtClDL5UcCaP5uyxeyEOXIiV^TC zLy2CVso2HoQLDSNmlMNu^fWFc$y2%*$Vt%J*XP$#CsfHQ=@z`qZm>E@U`K8lDuLI+ zad6Wl%lRtMKZE`RyYWt7k^R;2Y<_F;m&%)ZeSa!_Pl+UDo;=4gQ7W1g5M5e}=Eonj zuW+rHQJdTTP6w0IPUO8rNctY&^I9K7hyrVz_G{nx>l8ovw4 z6)9l~#T#43Q=JpzM}E@26a}`wlqmIeq!Sv7@QHGFbIy(&ER}REFH=FkzF;F)B#l=j zKn6sytqcVjjm9sw8ugBocZg)2Cl9%;^aJ)lsj;Qe7H0AH!%+`j5ZU!Y1U$#oH88e8vB$Ol8#(}>DxibZvZC1j z%eIAT8`~fJk?Kgrkm2YX`tn}?YDiMaPDsn+UrP1|K9(5DF?MyZ`gplC1+_A4FQ@rF zo5LG@x45ZEi`8;3jzbt5+oaF&cD1c^IM?gATz<_#L!d}qy)e}^#%P@EsC}fqm&!wj z`y0B(O`Y|e=KF^CThV44?!d>W;3#pIHp3T+uVDcMRutFD-P?)if;!yAP>Oy;Qu^3Q zm*pt6P^$ly5_zw9RVEICVea?oRe^^hrQS-y;J%O;C>VONg{;3`A4|Mq8Rjl1_1*c_ zZ-NOpnPA0!w(qsqQ$Y}wk!y20Gxbm0D{;E56cW8^jB!6#8s7FO`{kq+XTY@Pa5B#L z{{7HRg|7X%L~Rjqox5wyY!qATupXa;x@?3}n27!PE(@u49wl~68kiB$ZP-6CkN4q= zul1}@ttzGSw5$HBQ6tgMkiwU)kZ6O*>6m{S z7_F7xo#0=Gp2lsI){_jk5VOgYvXPw9o*LKt2n(lyN#^?4!Iyg$BE_VsRA^Z;dzs0x zH!FgeDf-k}zscel>Ml$_h`f`({bbQ&+FpzAfNvphtTNFYe`D>%O711Rs&n14Zj%u} zDm(ajBPy}iVxOej(fgaw45>#bO8PX>ON*ML@#XR}YfddaAlH}_AibLwS+(C6H81eA zUk1png<3fY(BLb^LVh{%8iB+6?6EgUw}u$GpvftLMmukj>M%^ejVJ>km(bCT7GP3+ z5nsAS)Q=AE(Clnfm|WZPL!S*E5eTtcZj+Z7-Q$5sw&#~p-5cDJiTxfGs&lBc6FVc3 z?HC?T5GB_Py)-oBQIL(S5L6LtH@7QNJn3#LyXR!I&u^ZyN<9DE9ch%ZeWk5pK$vdf 
[GIT binary patch data omitted — base85-encoded image payload, not human-readable]

literal 0
HcmV?d00001

diff --git a/contrib/Overlap-Recovery/inference/test/input.jpg b/contrib/Overlap-Recovery/inference/test/input.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..01251aade0065872fcd8d7c374301db18b176846
GIT binary patch
literal 265282

[GIT binary patch data omitted — base85-encoded payload of the test image input.jpg, not human-readable]
zPeBG6*;lBRzvvI6G44g}5jv#fsOeYRf1%#guy-etos=)1l@z239`y2Rq}dxW4+DPv?YE$3A&eQlAm9ML zJx3B}I1vi~fUWjaVNeVkpHjpsdT2kbD^l5J?c1x5gwOEAsR_4~a7ZU3eoEX8XPt4H zKJUhK5UAfs2o9S&}R z;0LujH=EQ(T+L+FMe&@7=q1XaV&OG~T{_i)vUVBO(NVQs`CLEcu|9m`Cgp>R==!l5WDc zyZswxW0Z@vW;p8jiMT+RqaKmT^yxV|I312} zTCMlu1FMyNU8E4)a!Gx;6Y>20QJ)?B)n}hDmwx#q%1El&{m?6c4O^&G=~ z_39N?X|h&?zYTWTu_88VRXbMTrFD|r6VaH?Q!adB&`sYOAj8u(F61pP%cD_t2(Bx29QdGoV z#i~UW5Ueh7-a4q3O`WTtNQq>(>^D{0HkuCr0?<B^OKV^@!s6gtQwzv z`laZ$B*mh6d~^)MaVD^$4KpR9C&qTFy;B8cWYZB>v>f^UM6!_Tl~k zwEOnkZ!v7W^!QX1vMUsGr!S7haD6;~E%WmE%M;%Fey8X95@=f`YR;q$Dqg3Qspfu=}uAePy7NQ3A^ z##9=MTn0kBwZTrQ@XUGoBT&HAPzbdqiZyL>4T%1Z)>88D)jez0xfO>UzWS&A%veYSL5 zh)S;X6opT0DsLfo8$8DVj&3s|x9p`97jQOTUN9lJpa3;}wY}*v@QmGpw0LxM5cRkD zJO$$=&J}yZdL!BRg9Ty;L0fxzn(}s(q9>R!4ceFDj#wZo*;%*se!hN^R8`#A{*%A= z)BpCr`-Rw;%Hi?5iw_8*cngjcMl`Qz?jKb4$`@DXufO|_G@#UL2mQSx`wVXGCO`V( zSlvRmXkRg2?PB)iPhH9ZS(ozJQn(#`epgo(Ir-31hA`Yf!3A;_N)Zo}XtWjNFEDi~ zVO{Tra63Vq`AQivd3bpyR?;Z3gNh&uQ>UiwTtQV2DjjO-UlJM_$}d|S5pHKE{P}1Z zfM7q4K?(A(&YhebU0+^G=p<$y936=m6?+P@hlfpLzHfE7KxD(6o}PVp{nq}m+(aMv zIvGy+YCTXeB*URu5EsbxRv9mGUUOx`re&E3S>Cc+3-85AZ8YM!j0E585|G_)zQq zPHZ8s27lFLHV@odla%wD8|#aYA7`tobU8hRVhJ7|&XVe!-+%wT_Y#kwsO3QZ5L|y< zL%}`LcIwpXY)W@W_~bf_7ZWys;&!Q4rRzjN5nGi^sk}Xe!Nv}n`|9*MuWY`Y_QiVh zUQ~`^wIv3qJL_U}o)dCp1l$KyQ<7?GBsS}XK>3@VPiZ&m1Omx!oeP5p(t`GD`JL5d zg=7W2APoqZQkyWF29e4no8EP#eXsy*68VLVgfL-a_WI~G%c1N-bN~yql-^5m#`6e; z-mcoXxv;Xr8JUBitvrq{w5v)A!YFbH1ZSY0g7r7(Gc+` zXov(tET1aPU}fz<9)vyZsx{c!-a9^eVb@%qUqCkWUG~RVvaNb&&8Bqlo69Q^uml9Z zLP(Sioy7H!72BN9EMj|24p znL{>YU0+__e{xFd<*~D_U^w+e$4)8b_i2$8Z$?=H%ve3ut^u~))M zKbR`WVnU(5NLq*GPXc72myCB=njN5EmlVL`*o6`UbH(GYzIyTJzi#jCl{Rw4`}?TZ zS93n6Wo91>5l(>rUtfK+auR^)h7Xxlr`6_OI@mwlTz9UoJF_2cAJ$8p??51{*T&n< zZnyf6%3>j>I&|H%2jdU`a|gA9>x=8RZ{NvN$rrc9!(Y%s!e|_3E@en?&!S<6>z*i~ zXAbv~S}Clt!~`N|l&m+qz`v*$5;lhvUz529-Ek102p3uWiw%%r>vN0Xo_G)#o%NAoAqQl zwJzCwk=Cq5lMZ!_=8?b<(P%cIDEKo~WbWUxTkI50cZ=JDQCBb@L#XP2$}LB_=*E*+ zDaCZyI?zq=MNk?cSgi)s>g|oFv&3%1+^h+*HtNytNh}Z=F`4&ANEmIYa&WiFOj=MHoyM%d9~3;te<_C8x(0wkL3LsiGad;& z+&>S3`@?u;2YF;_{-uAwsZQ@o+{xC4N(Zb%ZH_sa0CrYyaKFqTsnuRL{3iFETlR<3 z=Vu>3MCED7efI2mf6%kq20=$L!h!y(MeU-G3$n6y%8(u4KRQ_28 z0s%~|3U4tr2 z^~k!F2eh|$B-(H^y3rA$$oKCH3%g`X{D`=&KP7I+;xfRro z&=BmfqO6NzagkMqxQitcD9Ki5v`CP0OzcIF(*qNt5>yltz!UcZGR557IlMd^U$9e5 z$0}evKmqhJS-kYMYKd@Qy4L~g7TYuO(K^>{ib2^ zT{cS077$To?dV3 z`ah_HaC56j+|;N-%O5d&Mup9x_z6MxyLHk&ONRrrNhgEgd2C-{K2E%C_6zHzh5;7J z(vle)jV4z^LU*c_vak(qo{5f!rARa6R+MLF#$`plh&jsJ1bfd&Pt#K&bQhyS9s+@) zVn-+z&O5fu=1WR)3^;JzW!~sl>+N>O30OjPi18({VbH=uKq-}3Tw@^}!%>Pv;wwdz z0tF2iHd`JOhZYH&kEKE&qT^1a9Eohk-u#OgIWh~4)Mb22=`#%sxv>@9h4a~a-s^#0 z>L;=iWQEf&U%qr}6wRD3)Y)kq>|+E>CGOy7;~AWdyNzo8@BZHBtHtE{;!9aFdDdgwPTdKg5j8OiEM4Ah=fJ;eKv{wKHY{4Ue#fpQUW8m^A#}*5!BtX@p zA-quL9wIQ(p3I$R7zC__U+iHpOyOe%T^Y-&ekeP>^u-G9UuCwzL#cOuvhxj z;f=D6&?mAvN=WZV%XTW^b5$5OigwdyCT*QmJmax|auqsMsnYI9 z34o9XFkqWEiK*0pT&*C@)ACh$^4k(!Z$&4)e0d0sFk0(tE_NM)>sX*4gg^iA!HO7| znbKDvGde&QQ77=tJ9VhGOXZsBQ;`6f@L(iu_Eo)j#fYvqWE3E(47fYhkF!HcP zBEfce_@>~Bgg-AhJls1z-jjSOVvqgd^yCFtN6K_W+&66z+{j9Q{`u!11x?9u$e$}1}8-K{rmSo&tk)Ea&d9#MY&Rh zs9#^-c=Apqylq3Zo0jTy-y%+8!s{{w1(j0*hnm;4PZK4qUIRKs}R^-2Fs~MI91~A!u#byON=V5m*c;hR(+hn2Mk14Md%`}n}C-1U}25o!IYUGiMi5;kKJ#6eU1n;wKdi_F*vL!%EVFi^aNLc2x})*p*T4AqvgZ2&{3YAUcP+k*uBsm%ui2_bvz~GvOto}#W(W{*iD>q9NpMVbAMl5qR6>; zoIZQj{Lzm-Bj2R4TZlN`nG!*;GJV>LU1p&zw?GlocQPi7bmuZ}-hR+DND_KzHCUv2 zoms}lGGQPx-qg-6HM~?o1VDCx+RO7G&tbamx4W$=GbUbZ6>qMoW%93U%11y^Q?tuV z_Uh^i8hbDO=H0t@WQ|%vpiuxzlj2yr-NN1&APrSW%|v~zC_t`Cg{-wmi$T-jiQqjA z+oVaUUpnZw_jU_E`tlQYN{M3Cy%O#w*oUzNdbkBt`NCZ%t&8s;W}~xxHT5Jv_m`7l 
zy|TGi%*=*8k)Vy;t;4#?*eeb1oIlwwJ^Oy>jND3@npwtHKqMj^I4kfxv_J4m0?JwQm+QI0Y7u=wV57U(Y^Xm$ zOzaJtu$9AgeDw$m?-8&gimMkZ57}ZQRWBr?ZiE1m(PSphd?qyL|M>0QlY~d!&>x()5?^|%Sa)y*?G`|?pi@tD*45OI@?3;5$uJk<$5_i za6)PpiYFNWl8M5GI({;8>*29nD%K;H-+Y+J-Fu)Dkg)h#yoEb3P%wfuz0DsYiMs%s zHqAXqr({kwC^ibg;88_rbM5~k`KI=y2t?HR089~-p-@N0MkLvI6qp+T+M!yWN#YNO z2m3m9GD7mT@OI=MO5$6|3UR%rK%M#7>Aq~!$F2Q$mjKf@;=R57(O`T%xZcyWtDGuhAe!H*GqjHo2?XRx;RCSt3Ww%@!udZ*d0i8C9u5oE>s)HaLUM41kUQ&Go z!~_Di80ct?`+KV;U`ut1Bs*wnWsUsl5#HOw09Zh$zu^wC;vR(>+PUs!d|l*A%Mz=NMSAUJ{qWj9};rxK}8M*$<1>WHCWIs8QrPu-C(wjY%X4W@?hC3C&&+ zKBMt2%`18ay^Qdljn!;CxR%Hde-O3EeVRw=36TO)acf(B^<5ZY@I^ao@1SAhpnzgP zS!1a3N?VAg-d6CA6?e@}ux9$x{XF-8d3&Ut5tgV}ekQYoc-gRq5mQZO;V;zkF&;DJ zE%4H_)5fQtyjsje1EN{cGnjz(^vHUivCuHT-63ZzbcP+x`D7^KQ_(cS{7^2mkIZ3| zVLM%gArbVfhrcq~62d)T3@PqakB%$<=l}Ijf9uo#*m8=(kQxZL%?0`mCvtT~dlpDb zHlg@A+IY~@q%*qkXUg8@dUHN9D!PscQ>OJ0ybwp}erjr&A9$leUY!bXaLn_eH?m_? z#%XwteON#lHkjH%n^#v^uS$JB*t*GXB!cLw8=(fQAz=^+yRJ~ol_IwUsdoEH3V5VD-Vv+K)bev0txDN1co!E!n5$L6d>SD*PZB zBuzsc!x>-C>l#>*Ne`r1HFUy!tV{fxkM5C0eCtHZP(+}5;(isISDg&zc zR!L-g?8l_|9?WFH@{kB+A&A?CfxfAt5?C8n2v|#U3?3BS7p#?PQ;gxVOrmU#M~EX$ zEXcZUtg)~4Mq%AeTFPhReWSsuH|Xu}>wK}rM$Uh$f^VbV7!F#OmzN~CMzx_=IU{v0 zyL+;Kv>Ii;`R2RP=+&#|HP#=dZMr|jg@Zwy>)SW)ZAhhbC0@?^W6Mlb3dyuRlI_<} zk3AAISmA=qp-1*Mj8tvn>#vU^F6KcSHGv8OnONL7SHL8ynx}43gk6mlONfpDO3?P? zv!M|~`y+b7g8zKCENeQ_XpseBR1=}a zYNTe@`-d71_NW2`VY-@OwA(F|g>#4~Q zDbyCzKr`yS<0UR(gwr*Wfkk{dG%ghLy-`%U6%|eO)ARwIbPFmY#B)wQ%qRlg;Jo7~ zX0;VfU<e>}(qka#zw@&n|JIMcepuhz^Mc8@1*|jv zigd_`zuz5cQWCu+viXwC2L=g_!ys7Ej$GG(q&|ka>kOU+H}G|oJ1TSlBV9yJ9K8h2)KrX*#vJHh5EHPAS|m)Go7)@t4a(D(K-@kO+GTHUoPR42A01+^jg(4oavgc`*)>kAw6$-I9jw3E)bmM`N7iK^F$K zx`_ONkpI?UXM|hKR^_4yuCZW!#U+z+JJFZLAVXG5LSgM4<9CMW38Y2^+xuv@YS}6#C<;94 z6!ID+=sA{^&%xcv@6IZm1O!gxFY!&XEg<3_-JU zN}H%?i{~>O$A*lZJyYwK%J3M`iu0M`{0qKh)l_i<^z}i|$Tq4kz)H7{%x1`6NSQA? 
zwYmi!_SzC!NT#Gt&ir1xS8MF=*Y+cZNy&}%;`7g*|K#h>C&T_uZrQ9gS(?oYn-M98 zfhe^=nl2Zkei}Ebn}8d>YDj>HfHI&94YK@(OU|@6LaXCZFPB>*^-+3LWe;U(MB$(j z-U}zVMPg6pBG-MndScqb)4)nR0(cUa(4l~EWWdQRw9{-#$@Wp_nDqv^0({#8f>P9o zu*wre?LRfji+dAkv`JNC4o_wlJcRTy`f6H^&+2|3wksikKjwQ zX-{ftI?>7yNxJy({=VHRlq(8_)%L37mWRz%p)3MoO<-STa+O-0a`;Do__zO$fAYU> zDwtbu-nY7~X-lHk{{CsVheh7W(JNOE1dYO+WUJTExx4mc`SZ^2{I>YT{qaOlc=zN` zuWYgKVGC8Qj^$fyDDB!D5qfr11c=Ou_1<@ED0Rlvb_}!3Y%?l80a2t>n_pIgiY;xV z7@A_qK#m%t#xA*lgEVR^()PHNp$VkomWaT41JeirH;zYbVx=Jqt03W}=m2dskiXv& zVuu( zl*2!P=RqXM1Hn-xxCAliCAOIH?C%~hTqi*gs|u{z*`2A7DD$bHRb6Jhl#wch3vFUq z1ucmp*CKwy!ps&qefo}fTWW>eWLmRjwO0u*LRP7;}s&*^?b&u9IgEFM>y zBALE2rKA3EJegO^HPkl3Ve+DKCDtqx<6(%gFa*o@79_?8>oVn_MCt~01n)$Wxz0LQ zcej^UgQL$kij_uI1&>7tSe24sA=dg4v=pY&?RQn;#V3Si5<$kZ2}#5=MLa8fV={`~ zllBJPDCBDGU}?K^t^Adc4K^#@A9OsBxL&71A*rB>J=kBeG^nc{WePvhIj%Yl6_|5k> zuisvK5r2{_Lgdi{nQc^MQNUEzS*6P0AGs{F76U^YU@Xo!MHt94!}3<;5sJDZj5CqXo-TJV$6s)#z#;(?hL`9eJe?sBWs zbQ5xiaL;5#hqZGn_%YD2LmET`u3%J2zE|*OoEGh4NR-SV1eP~q9bK)$Wt#F^&n12B zR`!jO?a6s$o%uo(pF(_lrQNSTfBrB2#jj>h^TKZI;r5j%^Xb9ip}qRy>f`;*KzjMn z>DkTorPME5IJc8yeP|6dvfV0HD|&7-35wWKfs>>!5a@A+dleK(8{Ly%M||#jU=MpE zw3qu8Iq| zko|=G9Y9YcQ1Hb_0_*{eTxKD3C!A=)?9>*eCy~{o-R}mST8`elc5sqi=kcPa)dNG^ zat76jOLKK|H|Gz-xmsMkvMd!S1y`OPSRNGUlTkYy_p5ms!ffjtO3JJ$5l!XSAuMgq zCJ**O=K`vLtOYDepA2u@EjL&x6`j>mnf-je+8D5BEgu>O4T9+T2gzx&ao5r~J#*2t>mFV@ErOvkPLTJHED^ZsVU zZ_3`F>?eN4`pgCu2>rAj%t2-Tev~kx&hajWDml(vbso!&G8y(^HL35tT1=PSXo$H{ zEma5scQ@_%L}9lRkT*;O^k82iOPxnVhHbML^b3bbeq7L3RhrSV3|b+3?n z51j~MXr-kD6B~-#Dd{2;mPSL8`Hzc~c9c>UOHEtK_=44%EL20rYvsn$Uw@7hI`EHNM_vG9%%Kjxt6~n0`{&qUO#Q0DVDXmR@EZi3kU>x)?|s#SaJxw z#8To9-z0{=R*H&=ZFfGsq}N21AYliO54wi3Il~ez;=~k{OPT>jy;p8cW=z<_%t>A& zYw`&*af#WRPCL38qREZ=fmk?Y3tYdK^kpH%SX{vrlocOz>h$?@{4Z!agF%!gQyqXp zJFJw1?JS4mJFqS7)O9;pO}nI+t-KhXs=UZQ`L|ze?NkK>MGlf~Yn!iR&xxQ4Q>vVj z$G!jQfBVHRfBBuHtZ?FvuCN5Af<_~l#=JZASeubTB1JT0|AL@!1=C0pq1LeAA)hD- zn>|T|I7rFyXDd^Ys!u>dR7R8DBTM#1UR17HKD0RgTffliLB^Xo5;NCFih3j>pmsY0k)>=+ZGny?RP`yttjAy+Y@<@H zh`E%8%#261Bo4b5n(P)6A9F%T6Uw=jtt(*6jXRk!&`6rK{TK&vc-sJ_WmX~{vK@nC z1f)j@U4~MoSU&?77RnmZX&dpw|HspP?YOqB2Vy3RWU&~ERScC^-tio@WUHl?)KDFm8%UaK<6V$RXuKY7r^LM;|+a?UaS z@#h$$)k>-}mZ$X!A2=OCZ`#M!0}m`}kx-=SEX-UJY1l*NNE36>>1?a=GpxqFt;JIu zXe12xuoSSC6n&JA3kXq^L#@Z;Nw~;c=NOl9kuj)FQF>&;@Hj>}Nbo#@az&WQQ_zXv z_)xTj*C9`{68j?U=};^pxTRiN9~zQ?F;B;&2JEy#n`L{VD}EARq*>ZE4UuG`%LwST znk^JktA*Daep5?n=(R@O8yJry?9O`%U^K0%Fx`wv7*Mjplv#)*nA*g?#`IBI4AQo8 zFPGuVpK?6bL2`$>^BrDG^WF-8v$dR8w5x#_9!~QXWIZ`Ocl~JC6$voKRtzbcSE78)uCKqEsPg>9b1^B0VsnfP?ku2o?v4YRMXoqDY>09U z3X1L*02q=m(miEWB!eXJQ3BA=(FlA*a!PpalCqMZ^_ZW*h>T1#Ck`6aU3MEo+}#u{ zaSt9HR=^pyIxj|(IFm8k$4AF6X}o)TogN3B#xq9QsJtAYp-p z_w@8&w7UXj=yZE~M<~8|^M-E_!b4!_t7r+_DDCnBWac6r78`h8Jv$$cQ>~{VH={fV znlCXAcO_pk4+Mqi!)~n>AtksO1sps|>8&hBY6CV{KWPAqsq%ak zNJ(-d^0ye~3mMdDhp!2g_#tU^9;ynBG=id-FMPwVcAe$}L^_*DX^8lTd_U6&`Ty00 zR0NpBGonRePVH2KY-1CnM~U^Mm)vCV3w23TAL^{YYLYUC(G`o@&zt)e{UXC;Ta2X& zhi{K@z{t=Yd@Vo@X#A3VkTS${Sr(L19jDn6cItjawX)3^XFk&LHoc7SiVms+_aUp`06LEap$XF$?(9VNfIw6PQwL2%EK>+?aVEYMYRp{~&YvD}y zPa35l6593;la2Spk(L!nXH&jT68rMv@bsjP13-HGSI8G*ARBp^FE6i{aZjB#X~07V zH~6T0H_3RBma}|pc6gcfw9%G;_en#W<7&e-$eaDe^N1gLdGDk>SFjpN`P_bJH#>Cn zxv~;&CL)3Sz_INP2uK2aN~Qyr6F-ur`-uhEww>-?whtk8pMZ>edb!rW_*2q(kbv7| zSdo+dU;#b>P;f(y$%7&;iERAIx3B}|7QM#Pp_=48kt#A(!KhH2t0W$C1@P9$>OpsO z49@v_rFxLfDZJ05mglt#Q_kgi%*WkwzElZEN_WF1Fv&BQZ?`(vH}{Q`^Ye?V-f(nx z{~(~rl%|`tpE_9G#pR_R;owFjR85kiqfl74xQ0CO2#!V9$VC`(Ma%>h=HjnZ%F($l22Gly;?htKgxhzCO;0EF>5-0;(T=z;zDm3Xiu^yZJ07=JMlhxy`LJXLf^ zXb?OjI--uX3IQqk$^LuZx5ns%?ly zBReJ@+FOh^iI() 
zxtDKwzw<4-A*ys|_qSja?YsXO$YATeKk-MolH%gZS8}D#&bWEO+_@k$9Kbly)6*KJ zJe!(=qYZWqx2G75hV5dh?w=4huTF%|OZqCZ0M%=ABFRYN%8sNy`$gTu=)IUDK@w*s zvngFU(xEv}!lOOQWws#xM&|8k9KG*e@WR$G4UoYkq+S@&((yJbB31?_Nfe_o>=d=c zGX`sfZMg`w*N^EPF~u_MEOXQ3aheJlxn$%&Ak$5C zKSBfO(L4pm5GTPKEV+7<7QjH0!k?Y)Py|#QGlU(1n&NygM^gOR+Tu7p5p*ordThq8 zKHR;Bm)_&}e2}6d)bM9ldVWxF0(T*?-NACdLY=5!2(b~B4SuDp`FaAr$R-30xJ#8w z&^`tM;6aeg;aOIsq78>ard%jMf$Vqrf^LO9K}68ATBZov*abVvxfl4>xzVPYFgizx6*@l(K|xTd8gio(mlLa06fxch(n|deYZOm2Ggaw!F*F*M z0riKW0I2=p5FLS1$b_Wx738zunD7WJ&z}Z?{ct_+5TVo3+6 z9hS56S(lRiM}Md*$uX;vwR%d4y||S8K@EQP^7YlT=h(;XowBvf#rYMnaeaM5mYg&i z+EU!zwrQM4#|`WK{Q2w6&8>#uwQ8MS21pbLas^-LJas5&R@>|J-oJlui`XZJ8+Nam zCEuTwdkRBDqBb+vt}#&T8noev| zJG}BB_(Ms@E8*BqwBtIc1_kQAij=Tg^sMP@BB}&YWMqW(8bUm231N4^-GjeJ+hEZT zN}shc@1DB4VQxu5#*F7^V8$_($X#qbW|T?yaAjiH8L`|W-fYK1x#oEGzy0Rr*{PV^ zy|1VigwG#7e88_Q-WKfIm_;8tcQ;}0dQa3z#LzJo552u>sSIF@QccuTCk6!hL9`oA zjjY1x)bI&20tYYbf^O?Xyu9H~2Rhr_iA?bEp7cf0R_It=X0-?t^t|`E_faL9yeuZ! z8uAD;H1Y{!9!$J&7rqZ>Ziri8rKE%tgus(TEun&z61DdZ!UxgfXzm2%9gHR242C_v z5%y~$+XO$9BIOY>>F$k^HE^JRz#`%hD{z?uLH!nEJ65jpkZ6f9Ojxd*V=h+O#B@<_ zT*zKDjSSTjbhEfgwG=e`10!jB##mK4{4|Va_Kdw{-_e^La z!gq*=)mReFlc>M6LpYbvH>~~Fu(;} zshUoPJ__J2)0fO(BzMN*DM@rD#{^=LFttcJGUQDP7`EHN645eW8!|~$aS39?Xy_V| zX4N$!z_YnIyOmw>Gg&61Z@tnL>VZ8sj4(mS#kCMXFBa(US4QX z@%U9RDqTB2DOL*azr5A@N|)c4KfM0s?au_a2xPrl?+iPFzq|;#CK?0SHT!Hc)?Fqy z%h8EeQz7wV-V5^Co+EU&UQ=PbceJ)4wFm+D)rn$8;3U76eB*d2G?*zffzZ{0hKz&b z<@gJ@a<)asIwBUl@#WpYz+G_k6O<|t#i zShKGP?P9OUm(zw9)wT?u9`_x8v|*>ovLWv8M=wr~jbigJ*j-q3Ac-R7V(5X9mp}aH z|MH*x_QU_VUUE9=CKr(`J`gq;2ElTTsFXLMLY5yNTddV1mTkWBo4@@nmkv~9tSAoU z`-<3Dg&3Q*m7H@OG)mijlL+}QqaC*nMR_uV~<+?id7H0#9k2)O7X)RcLy2R1;OE+}MfR>V4O4YakX0Fga;;(Ngt^npUnB z7hi9G``h0xr*f}jQsm9e4QTI9AF^>xSrTm7=?W;;nghEEF`L!(q7VWMxFL%VBXGFU z0+52GC|#=Wcz~*l1sOgv-y_>z*%mE|gkttdKRK&b9wkfO_Qgk(UJYcdMi(Y=>kH}D zH@Eka0N%WL_38bmckkYF_p(cLG>w2zW`26s&?N|W%cXL^`tirf+E%U^CERL%`e)y~ zdHtMQyIMKo4)65ZG-2h%1o?VSBnu_#w0*Rl?oGR`T;cHS^wsG}b?|pLxq`;PoH=HU zJ~2NL8Zq#AcgM&I$3;X+qOQveB>m*k1_zOefAv>?MG1_#;6y&>Ph^_zG;*R1Bn$}9 z;hnG#ngTB4l7ilnQ3%(7(`EnwAOJ~3K~(;1K9-v|n>XfjIzFBsiV>$K#zY^&I+bvu zbOc4halN8ekzU=|eZI7ByFHHKvr(yk;cv zeRO)1m&9l0g=x-oDSTeA)(P-@3&@8ZwhHyv+#pz_Uv-qI9G-bke{$~BO-uu~w<$%R z_7eW)srF(QP>Vke=cIi2T;~&z)R^2hoHyPmOMn(0nbf<)6>mH^DPCQ2bR~u@Lp!Ef z7@^^SIS?R+KM~86cgSWZ-un}$zz$J{w_j)LQ!^X6D`XL<69;!}hJU^5@F1muk{UyH zyReM7a>MXO5wuC?SAO6RpLB2CQ&J4E+%>Vavw}giPK6zWr**~~IlOyK`cCjgu7EWjS^*67d(@2%<5jhu^7ea%QQZdleQ(3=~4N0v0_uv2E%-+3y zuMoTNU9_;xv7)A}&ZtYh)o8?QN;w!bk7qAmajEmGiSqf}N=9<-k_dtQLRadd)YByH zBCA_RJX1pi(vDROX$;czE1Wr!JLK9qJ-$}1lEe)odo11H} zVy;;&?t`&Pd1ol>gsT*PFPi%Yb&Hww@BR4W`+xX{x9`62@5{`jmiGrETh?4oTqmMI zTPEE(|BDm}-9b?3GCVEOP=r@cqnGYL|KnM`{sNY**B_MB(Z12Hz&KdyWrqxjYG>pu zyqp&wWmM$LYENp#Fg_>ii0wB{Ib;@`2sggdw_o-Xnqs=M9R#9f4rY9^qK5PR`YDU# z$+5W}6Qh37-{bvq%@5ISM>|(hcZiMJu)9?{dT=NUwE(F=i9@0T|L|hX5KOA_(k6VY zkMd?^_6cpv>^9D3+LOV+&q*jtrj9;?YnT5X6P8uC{aUO_eeHMBynmj=N{w`c5ldmkia6%LnrO{h#pF6 zF*nQQI#rRCioASKEin&Eji*`e=Xd`kV6eX*ji#||?8L?( z6scyB77OPHTgwPeBma6f(|%w+UcY#LuFC=M=ZyD02nz?tJVlR>cj7#`{gRrnyRToH-JFr=Tu&Mf5~EI6Sz^2SDBr}c_cK(jcJx<~ zxJlL8ZtxT^*u`2Tr0#}I&`eP|m>@tq-{|eP?(Xl+zkOZ#`Z(I#FN>t}D)Uc@JxuL$dH%-V_Bg`1PJi-*T+LnLVlTKYOG z0}iCt$PKnrepjCymg){y4g6T-7T<`eYx=E+?(FQEcg@+l%|YQ;YRbp$5^ND2p$jpv zLKuiN0uC}qRAsIoG4!0~Mp0UlK|7Aw!=L?B1Y|3|(bdJi|2Bz#?gZ ziK;=YYA(=H9KK0e|IujF=Dpaix%;UU+v~56Z+3ciG5B=*@k8`n5p2-UoX&1CdBUJg zV3Q-a(Ycn9XI1mZN0T{7=+AHe@nApo{K_1#wR~9~59w$0(Wg#xz8Dk1#!y*!cNj$1 zsGkhR{QD4lDYswSS1%ySAK zU4UXKg&MC;hr^%D3qBu3LXlO;MVoYBWB-CMPUl?6Zy$0s;YxDu1GY)Tp62Q?P6XD?bpwpm4 
zRpA_Hn(sVy6d{e2aA}v>Q5Y9FU@QG1Qa!`gw1UlDU0oot+dF-)LwO}ypaIufDB_${ zfe$e~-Ad-Zn)DVTW|q_a=|aMsTms@P^u`DoGB%fK)r6{Hxw-*^<3QpV)QZS;wrq~t zkQL6gVmfCwbKAVe^mX$~?`NWUa-w$dvq@0Va8FgAu0$IB0}F78mBEBodz8!bHidVC z&x50nwMu*E--EDVMaqPAG?(9{MQ)`0O;ZtW)Gb8VB5?yARF1+YY-^5QNSG{%(+p2l zC0(F!pkKWw5FP9wl4)-ukcxw5wmbF>++;VVC{=fNUr@bVa`@hVOP7my*~%YYDn0y>P$ z9cX$mg?iRFzoaQpxO|s`jX@wWkG5}64gHsYth?;(7fauL_uZ!t?;alRxYNJ+_6-nm zWL3Y}KRG?)Q3X{2zf%5iMVr;vd2jivAHMmIe{=Rf{Ez?BY%{oe{#u7B?MASRBhxxc zwY$4p{g1_3Xz0>Jj5L7Tf+Z%E|NQ=)?W(pmee;2N#Z`bsXx(N7nrzHN5o#M4Ol6RaOr!^!Wy|NirrYx8ndu=0&C^AiMH zyn6m70z#b*MktNEacp+O$gkPOQ$Dw#F&G)+p*$oP7te5H2msZ!21X#d|Yb|WCTf)2xb8g@COY9{_HCS1%QP$KuG>Kl;bI= zHJYSFdC+)ikU(B?rkM!2rSe-Ey;SnZFpM@9U?O=%tqFD?G@F*UEe<0aRc}2##O{`9 z!wU+M%y?JOwTEKgqEM(lIMKlYgnx|4N8!b*mtVeq`T3W(oS4VeQ$pwIX`ot-iO+>X zD@vu+x*t!*bo{O!QkR#Pf{Q)QpmZd62|5jWBcAsNbCwJDoq2+C`D(q<>whtu19dBj zzqz^gXiVHI=*a#Ti#KoHyyY86!Z1V*CLjPSis594K&%Oll(S_Foqi_rHX!qc%XQ^9;do`Nf>1%&-c zpA|k)RK{g!!Gl$p#k@CcruZM8ImZ-Qw9xggD&w{WC#>eX?#wZ3Nr`1yg({|6`w^5AcnAH zmZwQuz7n=;SPKzl@`RHn!H|&50iUB!tYEZ`x`C6%tMc~v?|=J2u>Z7izD-wdzT7)G zd8_i*+a2`;+lt)tQsusV|KZbn1s}5YDFU}}yAN;?CJf^hPSs8p~qf_MZB|sPMLy*Aa zJu>rk%9?8g9z%}`c?tB9$_yTWFZl#6fG@YYC#q7!j_It9>bTapZ$AOLu+Q1)nXbXN zch{+0`q{H*!@;!OYAd?6Bs$h(q%a6|M(&*Rpx(W?Jm9U`%vy93JXeo4jV%YFtjSaa z=yCsW!%XHqz_`QIBl&U&$%O42{@H`Vp;SVO-guJ7Y@{tPBlPWRQ&144U=D#C~d z;L0&fP*&b{*QiDnPLWj_di{h`jS{&p+yb9Ce;C_k+1|^PSyjS}(nJkIIBBHrJ1I$k zkwRWRoQ($2*K?n4zLMVR@R%ub?KNsY&~M!){y4g9

$#Bi{~%Qx$8VO1v6h=y^s|C(=Ts(+ld;!s)CyBs@HrhiA723ovUIUm5U7phW(*h z6ka$5D3m zWRD314bVrc*;c1gIby-D*)j6-8xBT_ps0YPfc@0V7iT!9V(DUjuY9OOXb-ty9EYVd z$;l=_36-6kUh1~g?B4Ou*Q+IYN2%?Da^}~sU!$pCetMe}^hIy-8`)l_O4sYQe|sDC zAK->rfdHnQ+9PE%W1La%s+frzp(!GTq#CDvj<_a#+yNRQM(*L z0varC<#T!fXxo~Vz+S+o=eg3K+=AG{!jd~j&o`f7UkIOb)y%oQto;5=w{;(dio7ja zOtn5k;*`wogZ-rRTCq_^YnEP8^y{mYg9v}*ffUp z^7WhNtJ(SsTOk{wkwdKe)Ft?FNd7D|lPn{3etrlm)Nam%pefQ7=%rqCFh=0z9Z?nxe$4>mbb_(+8{K_?m!?Ma@%v2l7U zG}4Cu%JgtNp?a+6y0p4*|DK;+io&X_@OY^NQmg6c#Kuv@HT~s^A)cyNr6CB1H{+xZ zC``&!dkimHI3S4eh~zQWDQEE{>q{)V#LJXKs&@xweLi=(kje483|fG!H(@NZU1(cd z!JUZ5u<^+W@s)17X^z}ULpUkQ`|*oV1QHbsZt)J|%)V=7Y!ZJ7Urkch;mAFRHrDBr z>2Vn=Iyha7nbGhr(nVk!O@~tBO_+g3tk-JR8BXCu8}$>NySXGF!dX(bY-|o2UJUCb zT09hdL6h7KEJ@~x`by*#GN{2O$yfWWf&l6SI=ssAA;Q;Hg^4{)<2Su6j(rLzQi@ z){}Z&Nz>;~AJ5NDC0Pma8EX~l5J&K0-fA-o*uDqM&AZ!MtEYd#g4Sb88klKRx^1_c zSo}g!0Nq>UvDF)}*+Rq}!~_eNEj5-3X-U*i6-Z(&Y)$D=>^>02U&3NfLXUu{BxA0T9+Fb@i{FRr?UjY)n!6F*E z4_)I|SmTr(eH>^~9F8>H9H>FqO7GUC`{9p!Jxq?!z(#SZ+U&g-;1~`LzW6F2Ooii z{9_=eEf8Gg9wH7jH&6$oNRG?l@!=kmvKYW|F~!ct#h<((gRR7tpPt$~zW?IIOM2-+!opZ=j(I<{c=N=`=t9%prLAgX$lG71ouE3kwUOLac*QU?;(%HV-b$l zA}RC_4=RhJ#iX+~p&+0g;k{h9Hykfgneyd0cXmBF{4_i2m%i#CFi#yiR#6;`DxKO!Y?%hd&+3v3^Rcg!ohr9cJyI4Kc(lbslQlFlA zx@Frku0?JT=fLGp3FZ*#GOZ%#$fvlppIqA1X9kxOw{gS;G~W2!!BMGUmn=uT7k;5R zAP11p^^*qVnn_Zh+@C4vx%_4x=VJR!7;J>etu2TUIJ~^E6hMKC-h;0*5L00aO9qp_ zr$MFgq3icmC7@4lI=+~-+vlEq=^2Dd5GP$6vjEwCZCUdGI zF4}8{?vS)i#GW{sgj^f(ieodrLri8zi&Rl1!rc3juZ7dS}&W~aeSES z{f-4VatwyQpizfyAW5kVOai!YA`o){#U!l>NFqc)|DZZyRbs<&Dn5xTk{3|>Byp;T z8)3Hw>XoNkKls``a8c`=A1>od(ps#C-iWr*jIUUmWqX4RTm$YTqN>CGK+oeRF0j&J zNo}d_MmS4~i9Md*-8bpbFhmBtfrs2*c%#gBa}4;6(YRg#B8Th@?hX>^u{Y2T!2oLx z3$s-p`h8}9wpe01KoTh8bWpODfRC3AhlLUy4`c74Hbfn3GzW=)Tqu#;f+q&3!0`OE z^y?pu@Qe4nQ{WWU=v;h!Q<+Tt>`bFm;iyRS&=1_;*#7t+Q#`FS_BWYHXPDb$>ZNKS zO(z^~$LnhTsCICinn@Dc9OX?ItG0Wi`^T0!80FIwOiDXuVF=~U%h^CHRw3VfvASMO zdYwC!HR>_anT=K@!muVum?y!lZzIK$JXrhUira|66b^)}`IiJvlv-;k9|buH;^+W4 z#w-BDp#TcZfg#SaBr?=Ts&z z30cW1k5r3@J-Eq^vT#_y1X!>FLs0cA(dce|@*Vkvv;Y%49djs`1-G+^#rD-105uBQ z5tQ+(j?a;K!|&?7BU5ficr0hieTI@$GyBqNbisaL0c`fYyFBb%HFan^bRH@x?}Q- zh=6}Ut<(~wEykWMue~!4tR|3BTjt8K?0x4 z@N3M-x_#aKe8qF5E=Z%4%2hos)v9Qoy&TqlG&b@hboNN?O>Py(;FyE4+rPY20hlDr z7jqTY3=Zie+VV6E3m=S3gU~-eiUuseI=zSG?%-&#Z`zI90d*0v+cqcT6&Q&eD>&l!j`AousHQ}LAD_6Z~DDmDm{z{KP1*JbvF20YZ` zd``mo8O$P?-w+5_IdqLTAmfck_$#3K=Sr|jm=SHBw=ePsv>%9`V;VBdHSEhhP7D3) z<>cVSi1W#}HBx znUS1(aiI#FL7?#d{Nlnxg~y!Bt5+|b$iw4scR-oyAn`9~LrWt5lcX@$MZ6a( z0>1S_U<@r`WFbEw7--``bR*v6L?p!Q76yj*u5(_vh0X5dhxZDfv1Z5m;>8P}?=akD zK7k32&o(`aP2+KnzR`#icbUJs$?x!oiau4193^5HrGLH-g)ERfum#EGo{p3(gB<{e z-2|KEbKZSU0mSsNA(E9?l7P-q_e2YDD^Kxe_r%1KOBBzD4rUm>!N`M<(qb*AjpNYH z<3sy(D=K-+=ki?eQ339Eu(3AZfQzzP&d=(+n)j#A z(U!y!1U}a{x1Op!4+W#!pRg4pu{CBXl26)VM!6VAiaPVeL@~&my%D{_JlgjC>adWN z?YCvdr?(duQi|p2+(O7;;?VN(afhN7z~yBNKKJrn?JYtE$mC4Q_!;VUM?DRBhS@}j z4pY*;h$Ms5(NY#DU=v^>XSl@8xvORI=S0&?bcIMO2rlUwpNl}_5;&J{Vcg_d9BY9~ zBRWA?zjMJ@tN{_DB;Rs9ie}Ps;?p99%?jn(X0P1rZ3c_{_TYGvFE#svySC}H(e8x{ zyQ368qd8ITy48Aj{du87b0PaQ8a#G8L5kU&yR@VT>p`a{82doRT~><0rAHGjJFwu@ zu7wB|=<}U6EGml-Pz3bw0%}qpkaxfOB2w}pP`k`_{_}fhlo&vfZZ7)+DBRpV0hU$j zG7QWG{s0u<3|o*YjKa75)ncMVS&_S2gPV^q7p4ddKo5@1)m$u_?1oRBIm`8BUvV&* zyqpIb3b6s!bd{%$wAb_z&)rnL$PhEC*G#R--re2PV5%n#V~x74J7<1+)ZpRza({z3 z;DPZ=y6x#gmOW5kTGi9ypMHA(U;k;XesrXLuC|8!KgII#Q*Ws;KT5VLH0g}N#d~YE zTKH#YUOQ6sp~memo?kJ|IS4cnV2mV?g*^hMJBx%3ll%+W7C`4=a4V7I=vW%Hp?_7wU>|M^m?RAbE{l$Mdo*1|!g zU*$Tp$=3H-q-0Vzzxe0>_OjKPD&_*y;qQ$&gXqt^trUD|M$%pFkBP>3r_D&vfG8SS zaRnqijw?1yX(bJ^N&-BIYdjJDI0kvS$>-*;o*nj2rA0x5-s)e>VgZ!vna5X0AK_bxND*xG8P|#X7Uh~2R}CN 
z3wuw+MKS;YAOJ~3K~(eF-b+)_b!Zb5lP-8zRG)7sjXtVc6`Tzw9a7Di=~0C73I4&s z9weWKZcH^dDLjB44yg}R6}l8_L7B)yaEqY8EMS6#$u!xm z9D_>Cc=lLKltCu$xqAh|c_=(J}?F0{!MeEL;RLx{mHawapgy&=$OZa&|K6NEzssl;|W0QSDZjIAYOIN-)R% zVjwH8bae9d{U?scZ?7tU{73%s9lW-H-*yI>o2Qu zbnNB1F~%E)xE^HgA0E(@9%0y#_4qt*2zB%!jQua8Lrwuj0g;~ua>EuQClVfFy0f7wralOiEKsqu7ZM9XQfN5XC=+JhMTaZ9O0MJi z;XB&(L=jAzN+wB|j?QRoLM@4eLn6QcC?R07M44^Gj8aPp0_J(#O0|Y6lC17F>ng=Z z=0l0gR}erNxvgWVqb z3{0}%Bv&Q%N@ojgtnTGSL~5Xu_TKW!MJQ(Wk?~4G;#*D3;VOmEv6TuMDuBgU=90D| zXq$p*F{=CRf_Ig~(wYXy(4=*PxR<9;O1-BM@f&Z_`ZMqGMYMFLQ3$yg%GV^u(1eh) zjNv?HIamp?1f;A{YExpXg$tPYrwEASGR)9kCJbjOb2JJLU&ZIS`CjEIb$$yoG_un? zJlkllpK`NEfKe|$S&YQ?ofjgPz(&bkmA1eB%PYjlgHB+}26iygb}42$8sW0Nhdc2Y z8C)bRjASuE{d(g>EYJ|PSLb@7$kD*R`Ho1V@?_1A6zYX;DJp`VI`_F$>8yV8?dyyG z?fKu4oQx{r94{+QkbLY#kSfP}_x3INga)A3 zU3VcoAA`PxtyYuBf$bMpmyFqNr^h1v?lPjy2b+>~A|1#rYTvMl7420riog z^9ctUolQ>-N(IjtxkB4ZQVk=jgm&Ptlukkg=e@sw&@<%uixV=Z1$VJBnczk-8bFE~CD}c#G8EN`1wA5T1L8DDVXEbZjP{yoPvRzu;_?B zr-!eVka>gAJfAsGdw+9t2c%N{PAVm>ujMc8%_BW}vL9nIoje|>Hcj|FpNWPm^QEL& z2#%8mVR*v>#b@+8-()1E#tTfKJ?;$?g6GN@2B}C1qvh>K`?={P{NgxvILxEl8xDpd z+*%15_Y+x_)a1#Vkna->x!u;G%NVjl}bOJQt~JKH&jMDW&e0%^Qzl zR^4e@lQMUt22w&ay+PG6q$?b0%k@d)Tx(rsVkR$!$#YdC0N2~x*~OW(K>=PZFlj;X zebnmD_GWUf_|8oECM31oXMRoeGQfI@huLf}ZW7Y|#bDIZ!A5|YewH=A>EEm6GaG)F<%Km@nr4w(6gYn0} ztw7eWQ3!vzABI4ZZG}REEu1wVbu`Tt!wIXzaWc@^b02Q4`Y~E0Y#I>=jWwb|XbCQQ z*pOtR@D%p!LKVdV$G@nmf6VnBo`5lxmLOQ1EcTn5>}>y3uNEnrdY{izjcU2dh@19o zykb?l8Jku$SsdmIMHA+4wawLHwW7i3qQ$MoDHcJy>_fLZoX@o6)Sxh`2%{|uZL6dR zDX~F8FC3M22a-wZIc~*2m)LOhE(6LU%V1j0AK-W|pPjP8fBryZ!pmrfLrEwfRsZ?F z`Zrg<{)5Th{#w{i&!VC3WNYWkAua0d)S%GOVGTvop*_x7N}b_tkQWDxD%cd(P)tr5 zwZb|-`g}Lv>=h~Fgs=`A(-}L!pkZ=YCs#yLt5?yLvP67bp$0cG+uAYV_QE}yhS?6` zfm)a~7%q>RBgWvOJrfC^MCP}e^RD$HsVbEuZb>h`so0U^^ytPP#bW6JOK2>0K={C) zydVa_mY_!%yStZPDOeT|WT|XXf@0{Ti2<|-4=V8nwX6qUCjVe^IH}QWTCA0ILnEOr z@Sqeq7jd^=a+w6D)Fbu#gcv8(N+dVJV=YL8|q)pn;xS|1Ix zF;Jo=dt5BaWK~u=Lqz{sk4|TmLsog1Y1v7dWCYr@VNi4+*;g^Hns!+0RF3h$L@aH-}tVFkHVCb%KeHR9>V4Znp~F!aFvk$THE&(`BLk z6a$lluQ&*vI$f>OG4w(u>gOPX&Tod7k}F7(@5@TYgNGz;z#o;@)m!FXxI{R*(|e25 zzM(erR8hty-D}E08eSuIqe&sAZZl}jHPnwCKMi?LzocwVjprB&+AWH@RLo?wch>Fu zjM|Rvy7OrCvIL);LT>udAhDl_7VFu|M!F*c=boTRP=LT+#RX|QfuND}Eoa{eC6Ogz!AY-F z*yD`S_R0O-)+Vtjl3!(w>NRZ-a5g~PjX!a7RHn}(#_r}M6K|~)Tlxjf?(5Bi`WSqW zY5-<1$r@PAv>V|#cjRXE&l_ADia2Q%42~60K=1Npx zP5Q6~P*!{vVA^%SO}JqKiXB)2*Mp#pqz%>->1rJG;ZH~$uWT1|2;9~WbV$*TY!LJ? 
z6njVSF4h!?JXU%jhl*nWeydV-_Eacg;MLZ^o$?@a+FTwFR;78Wv^}Vf*ZU8HWq+Aj z<}3Ymwlzsj(xp}Ys54Jr>l&qssZ{T;_S&=6gIGrCFjuMGcVd#K1K@0EK0fjaOX9jX zmwP%xtM+*H{5car`69eIIc?yTQu}b3>JhUha{Ogx>%j^Yd)+5+X;f>x(Hh&I)Qf-e zN5AIKf4IMe$K~Vt(b=UTWtZRoIMxIsn^|RZx4QJxsmqo2WcIn)9z^i8*qtq}P0?Yz zX0l`MQ=R2R7CmdVeH>1jqiJWx&|mA2{Q2>|dG}zDJ-;g4hLBP22u^}t;gDoH>#ib| z96b^Sni1`bUilYw>po=z0i~o5xz&cw5>Ck!GT0IbyGVLGC+Wpb5Ho@;9059sQ^TJ< zd*)cIOR4GW*ROZTMgSrcQuQl?m1jXN(Ss?Gm1i@P|N2Kz|El?PVHOZ z3{Rkk>usgcFy<>{6-^NyMl~2Vwp9;Ux_=bs2$3lV54yUQvrulr;uIHQ;_N)>mdOuc z?8b~U0RO_Lb!4E1M}4V8m>8OZE71Xk7Uda^sY7Aet($F{8Oh6Q0su?FfT8C9Idv^x z&B!p}k~WHQ%ZcK*zqq)B)wCbtwsg#_4-bKDUTJlz;#H3zIoa%zbnzel@x81wor-KT z)D)SV8n2bgK7?}M04!uGdQVD`il{)?k=WzCBxRI+ny+zkDt(b}xd)!ww}3w|lJR0) zSOm1R7=-V0_e?Geiwea#3vOy~luXanvqwhnOBdvjoO6-!uz+j#pObjCeAvT!$oZ(6Ye_IXQ3wWDbIotDJA=$uu?|T1G6g zYBLoQGc=>Ez-YGIE1rzEhg!Fc^%*&?=F3F`J~+;e$s&{(@xr`EXs@o#)m#;)EQ*VZ z=Ok}aMD9AmvZz(FKnx%PdGMhI+8t7$zgk%GK1PK2 z6$Id+mpuYnvT;g4gsT&@3?CAFv*p;Mx-lS#^tp(^LwM|6UI0)iU~p$&5yzoIa3nJm z9WnBQVqV_hoamIjBCZu)IVMk01%J{v$zyDaZ2<}3A!(EdB~mBKCbWyt#uyIqscG3{ zA_eu(Hsqf;Y>_Y%0e=1C3*95yT1qSy`h`S0Zn=V!Y2*Ct<+rc271I0#x2HCG4_Hx- zTw-I+fBE@d^M#Yf6>_i!wKY#w*_Pw`Ae2Sa{YGBa<^0ZOkSjAMxwTuZbzMY`Jh^;)b8h*w~Mpx;qUF&UWj9V`f3QrqEtParq&4pKKyTV?K3?B`xxRyxoRE*br!Ti(w$n-7gr(aXvX|tQFEuC!jf`i5CuiyfKa!777Q>_qJZxpTQY@9da)FZ(!soXl;LS4G73kH zb@BMm|NY+#B^s4@A#6_y2q$KJX?H$V}PQkgA+>^Uxh$a zmq4lm;GHyE4?GJiPHMyrW0fXSEL;e6-`(l*IT9|QNH$xo+v_j#mh`;n z84cVM!DiGQEVtwSWXR3-=FRtf;r-5Sqh3(>FEyKUT3k@13KsQvgIDw7k>RZwWm7cf zl9Dw&@I8(9m?ybXyt>d%adLH0d;Ri?bj44#@#J%bB-|s#a-k+WK+8f|06GaNdz8zh zj^Et)`la0L-~9atqk6PLMw>jedd`GLMk;x*CZr}n8(Nnf8tQTs5`=Pq5>cuD`DdcJ znE*b1$S0$6R(?E-4efyJmoMrlqX!HyzI*qMn~ru&*syOXni$CW`I%%BNvPcUP8@@y z=}ObDH7@x)hq5cVQfil~UKq!TD_gmKnrcnVu+Q=w=3_>u{7<>v$GRap&6h9V)M`4I z(Guo0*(ztJL?EA$q_oSk3nrOxxTC6{o)G+We0^l=)vDc5f8@~f$M1=hkQ$rR>fmgc zs`genUZ4TZTz+{n`EQ(9dC`MqfiL#W;nL(Tq$78d#+IQ#MNR>Sw%cS^sYSCper!8& z11afS(!nXs0Yx}zW;jcZj-QLDx$coH3|dQn$gMYlxRjpIBw}WgFj5uVDdJ#wd_-)^ zT@JUfE>L(rw0Fu>0mUTCsNFWctfoXiX+_W7a$Y%VNa;@GR=6C{8tLME-A&FQFA?s+ z?}f7{)6nu#|bsQ9Fb&CE`#2*ns%R+>nkcnT*h)7ZTl!Zr%=( z*#gY?R~)CiY8F_yvLH9(T~LjT@pLcGlJ04pQXk!i?zSU#8~htfZ)=#C7Gu*PknUO*+s58%b$(?+pqQK8&xUm zy|l=++A6~b8SExSFEIf9!7(t^Plx5To&x+ox@gE#E+3BYm(fW1uFTb;&PY7JrP77b zwzS#B{OaoJ%{Si?lGoQabR56xMR;@l7)2_LI_1C=Ao3C!k=YN;2RKhjWR-T>GMJ;Y z9FOTYuNzlqm3mz@UgPWS{h$2nzplP~MV6>8dmMC9s!#U!_0#`&ciSRLGFcS?A8u~= zG_wX|&1cumhwgNUukAJ zUx#LdyowolKw90REGP$Y)W@MBxxtJ85>qIi$k+CYRqDu`6irGr6Aq9oSI=nPmw

zAdid7OX*w^yBrIu^l3I`>`M7j^nF!w3s#?p;by)^9{jlX*vJsh-d=13&-Q68KNp7B_ zWCxJzKmPcuTCKkG2Y|3M3gaLX`W*OM4{F5${2GL|D`Ha*hoxz&EEU}dK60H*^CI4) zA)BLxQ$?new1cQwak8!lObp4n_%*zAsYsV)aN$7AU`8qpo&n%`3v0HUfp%J}>6w$* zE?B41tEZpOy-c>hwSmWYh=W4caSgGP^oFN0=l)%ZKXj&Vr&wEIFMp?n3PdU$JWmi+Cj50%kk(6JdZL%$CRpHw6EV%g6MW z@MZ65CYm)Jj~JKCf;RH|0wJVu5GP~;WwR!wQyQo7q^zOvG+%VT|KZ!mrh$qwvzJ#_ z7qjII)IwP(m)S=hW0i~>BKiu|wtgSIp3jf-q}`y^iHu=q!=l!>RyTF114mh?4x6iv z-bl3scf68OMMXtm0Kuvej=nhmmd3}^>$zM{7%N31(KI71%^nlP3qU{^gr8)FC^vLB zEI{a${-iuYKkXBYDf%xdGC#maQXG*67!oa^yX}csEDn#W7v;mJ$5sm2M|YWL*NMA$6YxUdR_UntIZ7K8a(R{f5`N5UC=}CXs z8qZD2l{DPyJoUyBhoat`_jW-H;1JihHx6ISTAJI<{q4p;S%`1y z$#7(qtf8I;Q|cEl(gKsVmHb4EHr#Nei1h_&~()cVggFBqUf=>OCBs zHA+VX4fYPY{qzri`4`WB^()hV!oBu57EWZ~W8Vut>pkt_%Z;Pz4Si}p;e~C~VvJHd z$#AV7#gypoi1y8K(nz)2&CUaK{{0tke*RzoDNlpj&N9VXac|M;Ub8eS#hNo4j5-j& z1Ph&Ga4rHKCSdNI89bw0xinVq*$S&A-Gwf)Y`0rb0+J&#j_dVw2+48ko4XsB;vlgV ztimW?^(>i`Dvf>=ZzuSnr#M`t7C=0%Hl)khBI0Qc5lBfwGTt}>p~tWWBSI6p8s#B; zLTiS;W?!Ltn;uCiR_vFg0mp=RP6*dSEl26jeR}Fv9HKTHwXSalO%|zi$90;*zL?;N zIcnomyDK)2B2;=yB42DrDw+hkhMm!u&!152ci%oQlva@9Ue7){^i|hG_2K*FoUr~ zyg?slt{qp^x%oQOfd7S})RDe^J8*~EWS}KX0$~Os5W=kZOOkf)Cr*WjbG{`x1z6jf zI8;L;bA~==J-W$M-{y<1vsN20?gs7;IY&F)O(JR=77*&^2H4% zJcQltY7^S(-qu`gD^cuoR4zOSNAc+Js2>WuPIS*;B~w%$Z+?mJb21IH-I9xDh@wcE zP)D&lN%dFTd}+T@+gg(%hIa~6VJP3raWaf77-lEM*K76^|A`Y`2=>-78VxOcx@&+`iLO2u*ik=7Jx zY5Az{CZ8la@DdC*9af3yl~XRW_hS=F1%w_H@wS-vAphIJ3*2xN`BV6hQYLOP zHjq?wv;PkY?fN1gDH!oj%{HIT!DTn7*&pf5y z`lKa$?U)}p+wVTXSC@tQGn2R#onYiroAU&*#7*2ytvbsZsiPJ)0Dc4xppWu>ZeN^} zxR_7{!toBv*i2)j4K>CtcwnVG@W*I;$2beyWjf52!h!CdFwA$I4c~G&VZ~q=EOwq+ zs*hx`aKNsVNp807s>+p4qG0tMl+>GFDo}ab1*^TPVsfYbPg) z2jB~4_w4yIA-hkX-xbr_v*YUhr%&pv7~d>g zqJu(DzkpE(mz}O-VzG=hv`S0V5|8sjZ!8)Vtu4Lb$(!LB5X;(`?X1m|F5<2B;M3PL zT{?JFqb7H0ngLZS=H_AC^i7+@Ym#^uT7r8Zz)=Yqbn(;tG&_mnVb~RpR^8+irxwMI zB!1G)G95#cPEY4wqp|Vc>({Sg%k`bc&Ld&)*B387ef-QWsnn{kpS{#ptKH*(P5to0 z54SgWU%q^Xt*_sFd;8@lSRLBRsHfkX__6w~gh8hZTDE`N`3HClBuNTk-uw{ zjHxeH8nx32tPD3#=BRpBEU%@wtHg}Kj4R>yAZLTYyW`NzsE?m+<$_iQ)9Ac8T}N9# zuJvr8+I<`fp=&29$B+}F*_LCbJ3ypb&@T8R;5m);+VF_2aR$|=ZZvWit3Ek9+vv{U z=KWt5!fPI}cNos9pc81x!OR3|NAuWhrS>L_LcSVhQ8BG(Z%?bSch{pBL7U6|`0Y1e zzB~*E6Ds-1Y0czsFC>Y-lkrd-xbS^OKQZI<9kEZ?l$>u4)vq2eC7TvV-*SN~BOBKw zE68IrduS^(ONs#DM*`v-%)Ug=j5q|Q2XlN8?2tbb6uU5fV=i5a+)`Aq1ZeI?L$~;G zsb68o;EnI!zo(BRiPpi0(wm_OxKib(-n1ixdT=})natZ6Z*m#{7z1f*lnIQ8AA?Rj z>&>)_Hu8MX=|4;#^Vxkva^HS>e|vYsv(%9G_av~|HN^Mo<%?$L{^s^8{qot8pUq+b z03ZNKL_t*L6;=4|_6|oqKRbtDw|Ac;Ji@vlimlEFD*E|MD zLRN#O})!yOh*ZrZ&6y5O)_&S%j#WhIz&T01eXfT)7^odqSSKL}tX@!@UIm@L#Vt?d!Y#-J_Nu2qpN4eA7tQ z(ei84YCbA?5Z&Xrq%4NwlynmjPK;c|q#-tNA~t?@dW6zjDySL~K0Gb{{^Za8_@{Q^ z;!pp>!P%Lqup_$FY{|i~&1Y^N9#>PX8LFrCRUpyLfYb z{Z-x?mO%v`^#;vB%Sel>^AibdfA_!q&x#3ajru{kGG6OS`*>WeLevVb&x|y46yYZn1h;&GZ%4O+YM=yA*>Uq0rlWk%hyoNLvFF9Eu#$ zm(3Z?`8DY8UKza!=GWKP{87P%VopW8P=`!LKk<`<``}glkr8r96m&=gtNg0z1ILRN zuW(MzA4i6 z%h!*Yyo|d&GwoaLYt=TqYI<#OE`92M(U$Y|>u>MAe);>~e5f9#=i~X;>kl)zbB9q# zA4TAoC2==?4jA|>wvJ${rKarfAjZ0B}9>Wc6o`536U7p zD&K*(os$az84d(mVgMS#V~Eo|gGVW0)9U5nc0Q!2#n-_i$;yFsdaU46s2;~6lDRO0 z$7QzgExq6jIN(BDEbqj;!5P50LOIGLzgix0-sQ-6k19WW_~82Flrq6L;yAY4jO^Y| zpB#j_b_dP&SGq_^JCw2}G1n%HZ_)?HN2l7YcDnZxMC!E$clU6}!>@@}P4YFz6jKyn z+}Y!%%ly^VH@cmF`t)(JUL0}Wot$zNS{}FFVM$6lEnx<4E-tPfVxlX?y#Dr^I(u}- zPfV8P*`22L7P+JS^4Y*)m>@JXwdYX42tUTDhmul47-kSS)D;=%aA%Z=$Uk_D5JMD8Q1~4p1j6 ze@QKr03Z>rsY2K+DSE~@Q}1r8{dH-)ozYL_)$6H==liF(otTQzE|``($ST1lWt0;L zr)R-9xQ6HvdWb#T-pD+W3vyID%Cgi(X@9=hReh>OV1EDV^5XjX z%iZ1hjS-T^DmV39mA}XAhhVG?uac>&Q(0o(k 
z_IryYpcPEVl<4%6rp0@@6P)RZ$-nKqRBrYMvxmnR6o@$Co?fJ>&dF%0!s(z4@`4YuqIyVsE&Yh4sER~shteL-1BAL1^CkC!^`-PoLBzN@)@# zCxbJa=|!V_ci*llR7iz$iQgE?Nsi-8dgWP;Oj$YbC3-ZrKN2Op_VJ=5s?SZlx zSSQfOlSu?(M3Hrynt%8Z9$Napsq<+(swzP#F`$3xNw`t2ID9KCwVZOHCrV@j73Gb=*bX#PxFJ{DB3ROuQjlf~>R* zkH&qHj5+#+`T|%S>HvittsLm+_xNErqJ=hrm}`}rZn5H1F)1WG%LxtOVVc}|kaDo7 zN+l9QVJUOQnBO2{TpMuScn#fTUJ6!6l_jMatlbk0dq}(eDAR*&$B!|e_7s&~U7q@r zhP*<=%KmwxVtUlqTL%DTsc+5udqe_dkthJkO{`R8S&%M0X?y67^#-3{!%kaq84M#- z;e}9}z0m|IceWrX4XY5Hync-a_BA|}`)>>mFb1S#is`dT4~W+S%h418((to#5Gtnb)sh`-Dae@2QP` z`keYNfAp}J?{%7T$)!tIipLMP+o#?vUwyt;|M6~oaMzni+{%_JPviOLyE_;mbRn4k z_2!PXr9zTd2mU!rd`V1O?+2zM*&{82(~M0=;M1Q38-~2e{P@gQ`G_2s$SNJfEERrQ z4~dD>!6Y!j3MI-oY1rfJSu#+HGOiAR1q_MQC<@F_IS-F&nXF{!;^T6!_U(_hqlm~pAM8xY_RpJTpC0x%S5)uQMsZ$L#x}B-mXg~OzVvXR!#{lX7`$( z-ZekGd#Mrqc%l}wKWL`Z^JVFJ>=kb1Vm@Cnvt&FCmoX<#b_tggEt64vR6sSwyj zwQ;K5Ir$3*Iz+)A*b;R-c8w)@*`tGov|et75-0%_6x9TNO$bbOTl4~fauh&c#IO`} zKtF0B0_NoRErjH!%e?V|bKY~KvfZhniS6Q-$aq8@iWZqxP}E}F)_PzO)O9zAWIgg- zd7|{cCnuCAx`xxqt-gM_{`~oiD-CrY_ZP-{=MVJ3EcJW3gz~x+IGVWMzh1xJ?kz5! zpRo%+-8>Fwo1Avbo9&m|yQcZ-%A%L6_n*JqHCqT?T>Gg%Xmw+}f(VH&b9&Lq$fbmi z)S$7U(rK-L5Q6dsTZgyOs3Y2`Fq?-u5{FjISS%Qx%O>J$6IzneC76#<;5c>^d@82n z;OU_m01-1WfkyE1P|OVkQZbZ`uJ1%aDCtS48O-U_pzU*K^U$zX1%Czx z=th+f0yL7&>d39(NZ{vGcW z6&XTwG=4oC%DKWM03#c`2U(hwTzv_1HYzMkQ;_V$3YNB@%Eh<}6+is#qtOtEiE-yG z9GR1Ta9F~cW%z-6k{f2|z2v9`%i0-rp1QW|dW4bq1_e~8Gn|jkIwD3=&0P#Bu?ENB zCyYc1*cNmrRCI=esXhTlAPx9Nv>bR{V=GK>b~Z(c$#UFXlw0#sZ&NaLS)J8P>M*69 z-Okei=gcD^6!U?~J1vVa5IoS{gR27~o}FJ1%pBPmV&k+CDIeO05IGb&ErwzsreOyf z8y~2}vmRO={)ZlJVqWh?!l7JS=t+&TM@mn>)84KYjVgO3 zBOZ8{D=4a{mR=PTxXT*zMoC~e+-lW^Mu7}@OoOXU<}LP(?3}(1F)&Jnm@>rvsLO*Y zucO`3?TVjrx+85SyMF=-Q9r}Za+NSTeeCI$U`x* zS!vnehDg@ zN;AIV6HJDD_AGh?Q_|jeFMZuLR5vT}XrJ>{V45E7XdIK%1bo-{-0^t{Us;mR(JxFu zl4y-z!0xEb2$_NuoI7}Tn7{t!jn@vszU#ebxq~yZYk1{Di|HFfARxa_h(;!=6@r7W zgCxKmHyREIBZ4_3phKtcaz0C@TBY*0y>)xOUpc)b#SNW$>JG9$rNveGyK~X;e5m9x4G8 zS306*%F4XLtc|ASi`%hox{`#Qh71l}cn31siI*8CI=r)2~@)RSHWHP0` ziFR<~D!KUK`_s#d6Y^aRFySwy53Yb+`0J8sQNTx9Kx+&R&m#G}cOEfgOKqwSx=98e zJa&ZMJ3^oXY+?Zz&n4IaLnJ$iLS$`gg5$3cbMI z%^axKuU5IM&+SZ8(rl$c)9#N4ZT*j#hq=6M-ErfljaF;W8QZ+%8GVst_h+Sxb3>v% zr`COvH?@*Wi=X1j7H>=eldMbI~aVPxF?2@ucwZ_o0Yy9=1y0|6N;`tQN%rYxF`#Sg8n{Q!=u1s ziQklK{-1;5N&V!kaf112lZ)3&2?I~G2RTio468^*AwSr3A;bN#bv;kbPERi6CC)}& zlYa?oX}VaVv}%6E(xGmK8*WcRM7A%M+V4GSxluhb@N_;N^;lfmal|B)wKhdd%^iS9 zIhEscnl_3(Aw>s@6k#2Cz>IzU=8X^i3OGAMbH`=WV7U_B0**Na4tJ+0*g~)yoWaZ+ z%5boi(5!J%*Z3>4N4XjzE4$&m}Dm1be4k$E(s|g?Y@D zq7xB>USz;itGhF%aT3x2ogEz$Jj=Vg8yOJi7Y$|bF_w6xhQlcPT(8|3K6Q^Izpl5M zN6&}j6RpDJzE9`<){}Ue&iZpOQd0Amd#W1~1~p^+XHp6QaO;oF;f*@SF^B!d~IGIX^!o zM5usx9QY%volrqU-V^p{N@lJ1V1*EX(NkQr6EM~Z`QQHkOx;dV0EtoP`{5=Ws|-5-b7)M8hIPFMwfR*uL=hF{~@!_{K0T_`;AOc!3Sc226<} zsbNp|On29om0RrlzWDc?qy;B>Y9b>e&Uw%KKI{M4e*P6%N>&U99hN0CN%X;-o?ia} zq7%V@^j-uGvv?L)c(Nvufx~k82*WkKk7pgC_wk*_bw>|E&M4}no{7f`^SaN@p6oM4 zi}ngbQSc&hj#vOA+=hI0zALTDXRG4zCM9@!C@Q5v2HOBh9BN6X9=KS}ZM>1 zq4oQWJ4`1saST%^>uO^@kv_uJ@?k!sQEv}$sM$gxSCV!t6$pPJ>15o>*jyIw#B4rL zo?uJshfQdOWZ+Vve0YFzBCCd8p&Gq#f>hLj#?=_rvaG}Qcx~w%S@P9uOH5cITq+Mp zAKG!iz>`<&^R|@PC5xH(F4rm?8tMkO#DEL~pw-G%`6T&$MmD;sB!+^V*)<~+S@6h| zQ8rpe9Z7^V^nZ?L#yBX82o7L|#Ei2~h{otK3=K=mX3Wy3Oi=hD^dR zrpnR07C%uREN_m6`$+15WVoyiQL&3>tVE~@4DLDhviM1tE=UiYWK>~vAx=x)N`$CS}knrWbCMl?p_5)!Kv40+8hb2<=B$B9FfU-$_N zB$ehMCnA6qk)}PyWBnNAak3+#_933j*kid&V#u=?4dhJ&yAaWe9lZ&$h?;k4P|{Yz zBzd=@Wdc!!y-%i9F3nc?S-##&o!+hLolRw3s?2vQNsX2l+p>@32T-=jYIsHbDyzCv zA7#yv270J$U?6sem|+^#eu0!7*#UhhK-7u7G0*V8VDe>@l;RvY=yks%3Nit<@=5|Izm(Qd@Wy-6Ta4@`K(6to2MNs7Ia8MWrBiO+?* 
zj}qK5%&~+%rMft4_l%gz7LrNsL@Ni23MYbK<+i!5p#K_Vgs>}vnPCHp&Xj+Lc^KZC zoN#0f8;RJPbtuo|*=eGBL_9RBk@|0g1H{TMhTCIlrX zhM1gUoBoRZ<8j;|6Jb|^+txAo1iJN41C`ojj~`i*M$MM1yQRs7LzuE-^9&$a=TObr zG^dja2O+YI5T2Bf1y?O$*$^FeEI>9PJwp;>@{WX3mS2*jnm?ens1_Psx{%`_-w7B} z!nYij9kT|%S}MEPzeK-3dErlvB2jxg*{t#{>PKo+_ruZrK%0Yv{b6=CJ9e*H0jnZx zG!TBtGI-q5Q6`#LfZx+GVh*+23VFV$tC%UPYvtF*0@M_QHtTxNexZq6fI7pi!#XGMr`ua@#LVW zVYp`F$jdQeedC8T72nZ$ifV?im zZ`z}jCvkUBtV{(Myjh{ zV+`_e!q+x(D8d^Nb)M^DAi+;RPu0ko@aTC0i~z?u(L!d*9UCYrh0#Hy z3ig(BQC4Qh5wPOYe4%D;f>_I|kFOUiilNZ$#~>5jON~WF|8m|uZfaj~e}5+*C)}Mf zK&&mLKp0PRS-XJ2nMDk7A4dG>AS_1Bac48CwNT8UuOLbhQGjV0WI9J8?nS&ST=03u z1kX*0EKvhA$9qElo}yNZK34>BVm0Kp*_h`@blcy28g)9M@U>Z;?0r!pl8E>w{^zQs zW7?7^!ak*(({z>Q`TexKNmb_iWnX>jUMSkWMnwv!6|>Xab~1{F6Zc)84Do5Q#@5>| zXPl@yDl~q?l(?M)TkGw+(VI75ukDH!ReilI8{-p&9f8_us?P@%rvWmvHy{agGQ)v0 zW_&UcDzwg~W+TrsOZM(cE^E2TiR`SZ_qqDGbS}GPJl`3n2A_uSj9P7dN;O>v=qvh5 z&vVjO1U-!cTY_zL)+-pXx7LNU{x?2GIlL>a(ddBim8{EbJ;%Y`u$jSOK2@mbL-8If6txB}xtw=xn0H9$cUAN36u zY$j=C2YMlHrt42Go|GZVHivZe>M^39D5VTwns*bDc6|?t4)ntYe;?nw5z9 z($FC>H|zS*fm!vPj-UuHpD;&{E^i=td$^lps;bsA{_c1(TM)WF+f_@5^h65+$YL;hk2Fmg~drMo|F|SAaLNchqs7M*Z@> zHI!nWKB&My*{NM{JA7Llc!8-VQ-{n=m+)#yI6KlfwDd!lU!K&O2H6jRwGa(3Mr$nIgxt(#7*vTbj zcQC1-mJ~8zP3^IaOvJ(5MW%x4Y=pp?w98$act7xn+RM)u8c=FeXa(ZbYfsC- z@vGMfq=>qZg2lL%3&@p3z*Zi!zheE7)9)!fHrte(3OOCWMd~ecEPjI491&fbd?!Ap zvDe?EmxNQ|PmyCvrSAzZw6dwRxate@JqD#!f%`E%>^V@pXb!b+sJ^U#71 zRSpanf=F(E|KI)hQnicWI?u}mAXSd)tR_@z^)Bm<;2KK>D7&Ucc+`yxS*KPI*6T!q z4n(&=+Y%XcK+&#k6NkF$4>)iV^W~tqFP}!ynPyT^T$?n4a}nPP`b#(}o)`b)84sg)=UBawC_?R#i zdi5mI$_3B7Mq*r5doM458>m^(k;j~jq^>sv$%}w(kh&wz-8q{8DWq% zKD@-9@-TMTZrhgR5u&KrS$K_N?YT|!bpDEB5ht=hEsn|iPjv2uzdD+mn;XaEQd-5a zM(sRIfnBUx&6~+7ghKFndW9Y7wJW=nf1h5EMP7I)_eE>{-&+v?03ZNKL_t&`$&<1z zl9|h5oe9eW}I zSP27;gcbRfpq}TRMlyo3Q=o3Q-L?;B7sm`zc?~d_7au)0XKg&_vRxa;N8Xv}bYM2Q zlCOsDRa&+$t4b{@Xjs;MR9f%`$Opk9xAq6mjWKOD1JtTQXoc|?LkaC9RgWi#C=F2` zTm734j~K*rIg8j_;YI6?A0oxRvFSr4`w5OkU!fAC-omxXD~6-%$G^$UV+2DHvIS5D zC&p|2Y3!cI_u66{lV1~N)FXq5GR1z}P$s{7Id>8>p=w7HiM%-1`@YV*wP}dmDgrXm zI*ex>?OB6=XhuV?cV)=9lR5d(*vM!^XPWu<~9dpO*b z#+xi-F^MhW4D`8y*?OO7uzY-Dpjbz})C$#f8b zhQxz&$UR4Am!4Qj^xB1!@9M;;=|>Fl!rZl5%r#|sV>$+s&v;=s&^ECET|lD0EZHBR z=3LBBi|w;_F5%?D$cn@vHVXqY3ulOa{VSP_8DiY~Qn3?Z9Ui5PKulyYbjbUrHPBX@ zweHi1RSy7KJ-WR}3`weOl-tZu9stL;P!jmXT|Hm5>=`8fBWG2UJ)rWqCSHoS&cAK3t4Fo*D=~ZROe7ISO+fyW+B< zM~>00VX0(|PN2?0$P|ka0`Np|y7_<<=VlwlhLwuuE-^BaPQ?;V&Y^k=;>p&z?9&E1 zs@vOJWXck^*MJu}xV)}xv8=%cg9r0{_YeMF_4zXp%t-rm!bU!lrrDxK%kWHtFT+}l zE(s_mNF<>lsb%O9X#y>YX~qKG=wpq;qk~c}?d{O~g@5QE@$H_U6k+&vwPe<22XWH#YT;QsvX( zR8jhzkSf-yS}8ERPp_^@2M4Nf^mO5hL$287di#)sY)?%_UO#@|(8*&Ck4h@QV#evv za(2N4R)(oNgL1NtL0gbI>LUeD6N=*f5^3^ul>s?xrW7Xyt%O&JDU%%%%$5;nh!{B~9+ysP2 zv^mNbnvs}U8;zW!SA676@okcU3l8h-?1F^CBT8+UTSxU1XcKta1Wd_9`Wq;20ko>W zA$6Sq;vvU<9nD~6q$R!o=H`)#C&#h3c8NBct%4bOV;aqAVN=KoI!4a1v`JE!xVB-g z5cp-hi^g>l5t&d>32q#=9&f`eEr>QLOZ*@f1&%e_Z#3*0I;n4$klK7UYMZ|hBYxEg z8{66^llXIF9*)B-UXz%LYRN>PuGad3{Fvp$P^Nl4+Iq{XsxdB8U;gYH&W{Dd%45`$ z0p5}PDTo$A4Hj>2Q8Etlb&?C(#3tj=sb;6P_>Mlw7?;9;_|?tWe6m}qffvkEE40bV@X`j&t{gs#*;mbxIXcV;hQ)s@BY;r{7&0r;9F&*<{Y%xKe zK49^5RbqvPsSR4hkd|S$sR1niq|ziGyC`3lT`Q}@9pGt3k+)|1;l>%)1yr>0(rXvk zRj<^v*Xd73$Ia8L^J_U2ygIF_Ff}2@N`Uk}fBBs8;1twGoy$!ywWkm5+&8tsIH(4m zpYDqX^n z9vPk4$EZ7HexN`3g51KeM8M0SDA|qJjaOm2Y5qLTbk~hGYgKil<-O`Z*GJ|w+?(5P zz!m_&XD?o8J>gKJ&EIx+B(bdIi@)&Q-}=p6wLDy#HE^`amES&o811JrD^&ZS zIcFx!<6}rOk%=cMHDN=crCpZZfGYIn=|QH{D>P>5qs2B) zjBWO27D#~EbG8ImDh9Far5p8i<-=F9&s-N}(tO3ka(6yp)$89wUJUg0S%eW&tCm)y z{@yg-`9hUi;i%EQ`9{BuQ_Wz~ERP4LQ>~vzvx2qy1{FvZ-Xfq&HB3(ZYW=u%cb83k 
zZ2ZIcK)1|jkB_ohb)FsF%92lxTXjVlGRQODYy(Rq-2^8B=}QVlF`{3?&B>2QI@c_9i3yPa+VW77{5ImjcC$fx??_~JsEmygjQzE~+80$$eKmP!@&Bg$pk z7NWhnyuQDE(2$TAyS%!1Xx)wmV;N+Zm(O%KR=`gkKEJ;9Ag(4X=e27Bn$SbNX^?2- zwmEUu2?-^*qWz_M9Vv*ip6K{mJ;oqW9_S6OW49p_kRneS5?puQ@WZX?+87oM z5!yZ=^UdCnWqy2|-IWak(x0MiLcb7Rq|mkX*d>aApxGj}K@9l9l9Vbku+yFsXNhV! zBA11UIWMP)jQ#2vMRpyxq#;+9i|c(Q6I!s)&MY0s1|2L+QOH!;=4i+x<6rhcZnjg& zhZiEfWy4pCfDp5zKQ>f!2*8T6@m$(C1Fkj5v|cNBjX79uB(PKtiWpG)O3*QW)PVr< zVuiJ$W)j8evN%tbcNrrCqG+Em(LIS=N0~MtWRgjD6yPQTel#EL=LirRcbUwlSLPG7 z`NNUXI4t)K4KCgjRxG#66&KDg~F)UCC9O| z2h)Tzu9=K`VTF6f$jC%2tYmqCSO~dzd{5{i;j=4G(J=6i61X);@T%^nqpKr0BXl`TLUgM~KakhDP^B|B+1v*Xd!tTtVCSBuRC^4-+h*%k_ z((G$dVZSw+WC+#w`A>g#aq-N4K0L}&(2iOP&E%nVM=|NYEeHl*(u_||3!V4)ctbo-H~fQ19{iGwJoMfAv@DJkOs$XPg=!=ecCG0GIBKiOptSzIbVMA8u~O z{hk(y7bi!uqKRh1RrY3z5y3(->D2kGW)yd>ny+8Are&p;x%C{>J>QJmy}Lew zt{e^qvu|$RtAr<9ObHP~2I?kYh_=VQ)G7<&2Tl^!?@d*io0%b&!^GA@f8acZ_YbMt z+l*0_2H%e7+J&2F4!|OsOkOapB}~q6Q37=nw~n#vIdX!fZ=aBT#4uD5B*rK{vPH~` zmBC_cpj}=cylCY_I0|R>hQi~K^;E0%XYYkTavE`JMf%g9eIZGU5m3n#Z{B~f6-pCy zk%W*<3{KBvUOs<$bMwJ*V<$N0!_7}|jgL9atZ`uSi`GVcjFKEVi7H^dHs4)}yFc}M ziU8JdVz2QC26n2)$0w~8v|T|y!z*u&t|*hzj%MR@xJm=!i5a$oOF~ zQi@&}tf}d|FK-(id)SC#X387p<2t`t3fbDeZ0oTXW@|B0tsWn`;Q`1;(p&l=h;t>q z-JrvHkT3HpAp0acPLT@nFr7qgga;%j{LA!Y!B~T8Hi`9&Lz!XIp56!T<<1-jks6b@ zrWv*T5})&oo=(;*Z8K`itZ!`PX|ftLwA!Y`CY+c%RM2366FMZvx4mq85}CWFS1+{u z*Hgzu=s73Ok3s&gUcIu;@DI|_N{3Gg5@4dT?K~kz5Brn3_z3P<|BJ2ph*^7O>K zqPt#;v8qZR$9LLYolRhs=OBTvC!L!Q^<44%;@}RSv`1Ls>iYS3GVedO*!8vuMOlfC z8JEv4QU8bA5jjLdBNA3LXM$wSr?MmKPjm@&Vh3y{;bzn5Fh9v_8Zh>O4Zs=*V=1)z zhF6HcB2rBigxN`pgMU7V%4GoldM_iGIQ1>@T=#|gL0p9e{^!` z`PB2s&ZyLwlQHU2UJqrbXlmFsUPcv)Xp%;FlbajBv|x4mPyfmP%(b3~W6}`H+>^HC z$ioBPe0V1j14avaeT))%cVt4c|ICz3pP$O26T6FgTm%$2Y8Y7p`r#+qC=?J;@~|-j z8x`~2YqpU9&5ToDv7yM+=IHj1#1HG{I_E~?C=>uMDsDl>+B2V?U7r=2uk{dIuP@Fo z<+*iQUB!dd!`in#`PS#3e+eIT(nTEs5tl|ax$Nol7Z<B6Yb zC+Jir#ukaG%)2crPsW6f-D?J~;2P=05p&tZ0u5eMNHiJlutMMp&WUcGp!xZ=ajdqm1}Cs^ez z&G$-fq5TJ)hesH(^p~siXK!DBqtjz0pZV6oX}9&*o=;cv>Gjq5{bt>3_cpWX$rX|` zMR=Y!*p3xI_{~-QgNY->d7fHnE=5id;y^D}7!Gk#jtQH2MpDMVP-!f{df~6L<|45+ z5{klKq36gV02|?Wcc*;?V@^~4WgJ`3}M(H{Yrd>sAVVN zDw0kJFY$5~gS`$8AxEqysRE3wFnk$uOOzq}dse=t4_&zjUV|CbdPMGHs>)N;)<|DZ z$I5Me^Udq|gIOxW&27(n^*+10zZH-s|>^ zAy6h0bDP&!FbrG&1@Jf*Pdk{+Q{Q~^l>v-ckKASxAW7oLsomV%I3v=@t~x$`11E_4 z)@ilKNEf%^82#P50TFRnKO6KP`fYAd^kvZAk@mKTT&7^8g-R)gX$(4>^cc|O!qH)K zRnYcLy9$!68NV(%6wtRn`8F}}_U#*PdaYqd&*j%&{hW()asI5)JO>GinQPK0`>y?{ zhv}qLNFAS=4znvzqC&gVMkYuC(X!mRK2pUn zydjbGM(J^E!Ihw|0(Iqh_wO^{}%u|qm znua42u96t3bKnO`X8QBdZ7EXrUGatNfBq*|Xq_D$B#>a23~)LhKQOVhMr1f!F{Yw# za1?Flg@{a|<>I{Y9H#Sa zqcI%g{PbLm8KiK0qPMiP5MwLG!V;f+@+pLJGTSIW)&ehLKr3@YD}p81-|mU%eCy*+ zEncn{9NNRwDo28`zxe6F-~I7tXN|1(pp>62W10GLLYnec8~E0vSt9Mu_wu~1403Uu zNWzi2UOuE#*=SKy8XwWZQf5TKr!!vD-xfox zuwL;PrH($B;)xrlws$Yc$SLINqLym0a(#Nq3e_mEw4R>kQpMeJ&}kP^nX9v_LM9Ik z;gvTpPxmSitsX3v*(p2P&UWbd(@!pc@5evX!@`JchgdFD1;6g2`|jcV;Q1Gev-g{m zMY@*CmNg-|p*&&V#fk@_ufNLMn&nKRKH0G>aA7TdYW~&_UcNZXm9s0z01^}Y#G;DLaEREr z?Nm_3D|d7@JF!+vn1yLYFbS>UA4JWX03W%dUnv=#%=3ln&E5FtZ>Ra<3AzeUiWdmr ze)U3ZfD{**JHL8ndY$j`Z!~<9=_|K@Rrb9fe7|-7V38Emw_lZh{9C_RtANzx>yH6M ziaa=$7LSV5-mB&z^+{I26*G)6JmwO&Gv$fU08fyTVh=Rj0&2MvgL`(@D*W}&9;LSF zw;t(;n$m0(d7KNgGfmmwdXiS_I@-<@C7hpJHxBjX6ABKa%EWTW3vOukfh3velG z9as@&M1r&9=zz7UMc-jq5^Two4j+hK(J9TM`GsqReUWC*&`2!><#w{=bH+M>6QOff zCu(GHgI=zY>qTJ<9$PZgIiu$AHjj(n`Hs2adriJ1YdHoS$78`;^q2nl*50xoC{K)pUpD%8jr{kR7ao zy0wjt>H6w2iE}Uh@DDzgxG)=Z%i;2y)Y%XXkjVlzFN3YJpdRdH;T5q*Rx)UT0A{7A z{M}`W{ZZ!2pG|H)Ow7~byL8plEEMOL#ESh&H&#n|@u#*pWTX#dgyj3ZkxQC$L}h08 
zs~`W)_wvO`dsX>j(zsh!OZ5{1L1R6l0U(SH^-b4;$qH0ub^Cl2rArO280kP*9po|> z=2}gbTT?q%^D?K|j9pQ&Z0Y#q8B!b%`f~7}UtXvg78tYsv$IpDEK$cu;(EPOukN2; zRDSsVXP4*Nqeim;0-6AC9-p{19RMO?G?4bB3N+xbenf=QZ45G{eF7{#gI3)uoW1yw zc^PSh_{ep_GGNrFz$*C&GZ5lQ%mUzq$k8(d)N4sYfhRD|DE4Z!dJF*J$ykV(aa51< zAH&S|rh7X`{~${0g!zwMz5kRrrRqX2FFo@?6s0U2X*FL1wI&zmwQ@-VxBFG}bu7u& zW^zUWUZuhfhZkw3Pc%?;(qWOk|Fl_WZ$rb}7y2|ZzGwM?Ut*~c{D+6OPE=};nda}_ zyo=ZzhewBI8Jo%_7#$Yu8n~0aD~Ev34B`tNO{hTCG3O&h z+tHy;#@dEW?8-s?2#dV@-ggPV(os{w>HE7!-5;yscxUG}&<6?DUpip&j31?Y)aa?sP3d-fs3N9im>tJ8-kafqLZSz#%zFO2bl!;`G|XY+&no#fabxyLls#)y zRK5LhqwU$tPrh~gejx*u<1W8Pn1NU!uay-!JZ=yZuUv(3NY;qat)yfqZ${cI=CiWq z>3)W=bF^dhaiq*8HCq$YY&C}2xnZDx?Kb-e(7biNx}xnKy}gmcj!qmmyHf8F&nH7y zmj@vb55le#q`t{F*$!bl#Jy*(tkhsQZ(31WzrpJqTqZUK?Y|FFr3;&8+5Jrp>1jdh% zMT)zd%~SbDC`!;>IY4BzMfdna<&7wULx3?FCnrO_r6Vn^DEKTed4K-^Uy<%#%5Qta zX}dqXK0C})ETUotqVUq9FHwp85K1kI82~OKn&cgM!UY7Ur!{B^w+3$tmh>GT&WkVr z$ix$o51zuBz4k)xfTMZ)_N@;&bY!3>3xcVYQfAm1qbazdt(;JkorL16%hqH(h5|%O zlBz#;m7Eq}Og>sckMqnD=91ugqBIfu@ZTzSBjmE1R|bUJUgN1 zafp7pjgV%PE1hJxc%`IsDyY-zWVCJGv9M8!;M5Xz?N@5^ThmA?jCqlINQZCEnwLa6 z9C5mBB*h9l$^=Mi$O>OG8bt9R=@pg)_r(HWR^rQMDxzdtfhpLB))N9>*C0Rp1;iD_ zGM@NWRtsU{s)meCGK`2`pQnaUxBx(i)&y#cJQPw{0y4p_pO0CMvI;HHpIr;nQd_tI z-+CQeko+G%$=68vr(c5vk{kG>oei5P{)?OYlYjZq8jbho`As?sXwek?EW;H|l&e)z z09eI0C+P9Ei+Jq|o>S~dt|998K1JaysCVhU{XXzor;`)C#6{QKrqVdn>RyqR?mNw@ zOeDr&WDNjSNa3*BGCOIA(hguO2K?DOT8=OxPdRtdglZYt^zJX~%9fgE5Lie`iEMc~ z6YrIUQpacf7>YVyP~KI#?e($i(_>1wqU2Rxc^1+;(or-=kr)#&2)iOnTN@!8e$ z@BHAG(zQCJ?bv1W0Q=Oi=1YQrf$<1Ys)yvd35JTP@ZJJqX8N6++6Pj=ve|BMp-#I^ zT4{wD**M*Sf&-IMo}V@T`t!e-4o1up*4$(;zQ1d;C{;)5Sqw}dERGweR@fW$mOJ$> zzJDM#Hy`T`d$^A-9pa!++Qz zO7vU|yNQjdi$Yu@Zoru0h{SYbTf_zoZYf4XokZP6OQl&AdQso%e6uRjIImg}L)K_$ifp~ZSkhEkf2rqzCY;weZ1SF+Q6 zynkrX*eR)V9fPGgw?~MOO@@u?u+sA3{c?cXO~(+NQ?@8sCqUauak*9$v{0OUP}BAU z|}dmT!^jGJD!tZHv;%`8 zmq%u6Eql_S@wk!2D!oyY?h>0>rZ??08mZw_#s--X?LPJFaBDxr5rKdzWI6yv*)$MH zTHKL)U%16GFuQc>Mx7lkW1^}LWGSZ+l161PpF{~;=`(P0nlq5P$l(XIDlgtx5-8vv*uGH_A!~Z9SsEGirLKNN%PPD zp9f|*BRzCL+Wg|Eyl&KMn|btj&EzZE|4@EOU`+5rxoqkKTG z@>k^KTctcAXY=%VUqLZKg|1AiARxlj15v8dM9~(14j;3s#L75GLWR*Zi75BLGLs_7 zUhgu_qOzpOpLyb5K8x28aUuYdg82YFSE?UkYbZSD6=w=UM;rm5-*a73^!yM`Sn z_Mlu2{^oi7M<~lG1|h2*Q5?VMuV+e*K!g z`{H?6sa9hO69a{Cf3kI|r2`L+axehAi$U!Hjag$WaFq}(2i^Dh1Ws1wXxIuRIryp( z4YQFHw_$bB<}U6`RrZGHT0&Xj@ERI{3?s#kAIED68w=6U1LSMcOW*E06h|N`<;pJN zynMwvgRGKqhrx9T?|I)|Dc}dKcm!|qFtkfWt9bGWjG~h8P$UHGXr?*%WtA?PXzzYx zVVDk2waxkHw!2Uu&ce@T4WATo(29-n3kf@5&1@1!f82U}U^rGv1o2)2HH-)*paqVk*1*z__dw1I7U~e>EN}OYo$$Tl_HMFd4MIESuW{)HYopl5wx3m;W5~QNrVPx1xqCa-C7p5HIO#Hr*1h%}59YNWD^t6Lnc3#v42vMimO;T>87omw zK>rbE9z#mE@PoWO6#$VcRpFPp5%Ym+2wB4#kIrP}njos06>9mgcBCnp4$X2^ag=>m z9fnX`-<>DZz8Y)-kU-P4%7{RD`I?Ey8S^K;uC!`)9+*X)Cz6 zTQ(9(BZngN89~TlELE;P{m%7gzo3PJ&hMfq{1`?x7d?!0aYJd^hTVqsFB?VJ)p05; z7>g6yqCr18;nS-MP`sc~uokz54dTch@YVBYIF>(O+U&2-&WTznWX_ojj1aXUoQ_9q zJoT!bR$HT*#_^e+rknX>Je)+rwOkCo;mAOBnU9Tko7ztZ|MgM6e7W2dZtpuT1#{)2 zvmWvPFwty7hiYch)0&Lx$Fx!narSM`phgvyk6@MiXphZR&m|od!Wx0{< z0gRASLgYv$v5gLwq(M2sEg^IwLEP&Q@X3$YPo@ll;g>Mk{V(3k@mG|Wg<0+Fbhk|i zY3;nKmAZ&`qJ={MxeW3~WBdj*24gYCjnE87IsCQgc<9GsP`mei#v0f+%msQBlfTzg zzm#HAyOd>w{S*oAOU{Q5ZSo0KaUqiMr8vxs0ppFsp^eLU+7;k1(T)TC@OVp4NE-79 zHYUn%GRX_(x4nfb9vu+`U_NW`nzyIkNxGxCP@Wg*AafNj(i_PZW4hcz7Yu1Jkcgv7 ztzJ1~sTMWI!G++KRrGO@DHNG9CK`k_t^viqOpfLmG%;zt%r(>*b#1JAM;8_tiIwQ} zve>9k0@`J{#k_X@lRrJ&wKZR>TG>%Udtu7nMnTl??qx>6$|`z`0pX>aI2;*W&XT6I zS7B#x)~7?+{XS2+;w%{CQN8^A?|v5jEeyUjS-eJe$9znW@5fn$QFU-2z7lFVT1UqK zW9R5Jk&1S)Y>MG{_r;et+f;?2BWiD-nP_;AP7lXjSUM0|e}W4YBQR`oFu#y0yO6jt_glvs#yx)Ij)cjApqzrQ#3Eo*wE8pp120s7(P>XTIU 
zF{Ua_qFG^u2TVrQ<+`k|`K?exS&O^*^qZgjWjV8J z)bwP77V@6w^7UEF0fU2cVs5>zMXb|F{irNIT)wZMv#=bBN2?+9$)sY*3V9P2Q3NB? zt=Y8(+?*F3nD*I~|GcK_y>A}o6D`@Gc#x9K)&%IzrOk=jSlYY+|i;esX+#4_%%P zBWt3{8^t8+0!XA(RQNuUp_GZNfzE6Ip{jlRyWjuK-_Gd+CWxs^TkVho^r#L?HbIiV z&uB%^P7I(=CUFVLJ5(FScr*%&s4qc9Iipe?qH8WPv%QfGSBx5s~xb zPSKn~)^Y%JJ`=YwO~woA4GBkM70esP7JCLQMZ78Kp+Q}0u~;cfC#U)T zL{8CtYF9luzUV~gkXx`czeg5AviVvl9YD(EGUz@-wqT+NT@uX%MUffBnvfv|PzvYH zzIZ>oi>hQ|B-u0(#Pp5^Epi>$&#NuM_a@h~J!}&5*$?iBDvcL4aAnULTh9I@<2)-9`HP<0&p;pdx}o%uHSvs zE{W{Is{S5MF$hxFGH8r5p+Z3GfjoFbp_pe!Hsk>BVVg3bp4|^@maU^p)FKm|bXHpK z1qT=Mk*l;oIV54Mi23YXWQRE^xo7AkFnzEDuF*Aek5D>ZQ0bLi9xTQl)m?Yvee*6-w6Hvjw{ZnjC zNau*fBo1!;AMGAYMt_<0rAeh!U|}a(F2sF8a7oi<`Y@OojONTjhRXd&dIZ6LJdZbt zC$?mebxxVe_%2sn-=1Sdm+0k7xpu znuM?n_GlpsF%_cO#@Iv}-L?kgLNCW_mbBV?h6y<@At&@G;Q$x*86w~FVNT`deo(e8 zc37UvM)*^LTG3%)aWgZ0nhhln<2!%rR~pY>K;ShD)*;GykP~9~3~hKE$3@aO-mo^C z>w=&uoiS;{fc}zRt*DVB5|z*DbqeO8_IW!9)Oq#0i*D!PEWI<7&iJokXW)1=SCGq~ zSj4KT^#f3;OT<*F z^~3f62F<6+7OQ1=oNiRy5yc;HK2sAI-rV_!8|oi3Q7xq8BU1XbgFrT<1#?_Z8fkl| z`JKA%hX+lI2&JUUqWw-VlZ3#CjDR&0Y2v%wP5;{G5cB?*G!aRpS=96T%^%@^#JhPP z=qoKtYa{(HC-Kw^J=lDmhqE(_o~x8IN=<=%dpsUM%=s<^bQm$Y9fPKlRUA%>#Lxu9 zOP0~bZcUV9qKH&QJ>A*3^TBI0!{qX>RS|Xp;Oq&Kh=}suIM}Q;22*&>EUZw0Rj@;( zjVaDJYRF%qHPICib-S*nGGj%xz)4PS~w zh~T(t;^)dS|D*rl_fC#vMNSqIWi(=F0*xCCZ~ZLApm1hK(yQBQWBo}N+Qe^|>a)!( zkR+*K^5$;(zy9xU7W1K6B@qf64mn|f5OBC$FkDwKPPJxKm{b-TdCOmuL~bZg^N>$K z%IpftLdB>K59ASm)aezito3hRZ3kpQxlorUg}5l6a`|YgSbEXbP^)g-?&`2V$L~>B zix9{Jt0f1B)F&&!uXZ)kHWHVP=RCKTvqsdw(u$~-7;ss)XNr>z)B)2JTL|BS| zPnu6eL|u>2XYe|MXqOYOGB0lZ;e*UT(Se{cf><$CPRBL6F7kszpxHD>mO#rwS2F!J zY(#Qk`lOx;sqrTFuuSP^0Az&EE8Y>9p@Ov02uQ8UpGC61a2vns&tug3Vr*Bdc9z6RJ?D&EylR;Sxx-g+TT?)1xhk^3xR{Pllt;;Yb=4oG&{r>0rA#E z*}4L{e=(NYIza@CT#9kBz6E|kKKJ}F1$ zE1SWm=`)@I0`bJIj7-VmC&lQ^RpdtrAo0`KR$c+{=Gj)ss@ZmvxpjCVbmBco;K~loYHUq;FmxC?)S$q)ok7YE~RKg zA*Gp}ondmGBt=4}lAx*}^?(a$r4C=YC9sHoV+d?E!Z4QPN4^KpHN-6)tAFA)PSBXD zZM!q+v>D;f+$NE`a0G8iOELYBUP3`^o?s`sNarndaDFG**{1*)JR{opi{i3tSukt<(5yMTlB z`t8YPhG9-~`FE}My47k#nQZB?bvKtNwOEnz8iQyyz%09b9&-wuzT^xR9m|okk1TBw zU@dG8MNeswHscs*_&N7eyoe@~_lyY961N=yl7JXu9?^@)!dw$|#gj&nT~<0td@=%K z;ihmF?Mx!FBZY$`^)X&Fd%L3CN=NZu}K)*OQf9JWTq7c;qAnK zxro8X4o7EijBf%d_Y4{m-f^Jy(c)zzL)lgkd1NTvUOa@Zn+?!hU$jVt= z1ZIFbkueye#fjiS%X1LU8Wp*qt2r75T!7(=<=Dz`BPLX)WA*sTRWP&X2?3TAf%}e= z#zHQNxDiUQW#Y&9*Hify+XRjjS}lY!7*3sjt(vPy*K0}kH$j5)~p><44tJ5(TfWTVYz9bZ4NkEVTP*nTvqFa$dzSyVTE4M zT)q-Od_ez^P~>l+iGs-@sYE4`{YoD-e(>wRDQA{miB1Mlv5&tPKLPH!j?cQLMqW0@ zH^Ex!ej+_jE(`4F;DD0_ZxXX$*EUa%y~!P>21Wk)%#@Ryp=Zs8e9>XM#l|>|mPFy( zqT_2Z@v@+(2DCc^k8Mjxug&`6@~n1PUC(!e&e#WvhxO5>{4lL73m-3XSNG$U3Jyyt z9TfW}_w^q`S*6o@=@@6p4z);b-FPu)0m4M2!2y(|ZAo;nfj*;kSfu+W)m3UNP`F$V z`7E=$`=!jHz`0kwVVCl8zFHwU8_2Hvb z4y2^%(?w1;8v1wf{D7rffFuGIs&HmBt+;eSE(R8FPtX! 
zkhCLG^a zNEC9>)+SpoR$7V7PtTu4H`^o!@i|`(Aq0NpJf3{V22PGp4iK{%)bpX_Q)Z{F~YIc8gYe)y{2u z{#%aUujlV~+uL-i#VO5e=mKZhNhHz#F+>$}|LI&H#+aTrO4}iOr+}1!nTgY`lW^*% zKYd5Lm<5MtRQ6Jnw%z%LIt1U#O#FDtMnVU~?sv9PtJ91{78oeOzqU_uY2D$cpNA+q>2y>wzie z;>PYtCONMkEC#)~P8REpQC}_|;ecJlnEe+raJPGQedT%iI(QizZCh-S39M3V8b`1S z?O(UWEI^{Ro@;#%g3?GN5P-iIMoFE95r4PGt&$s7uZ=+sr{{^`pW!n%rjD zb^A?0pW%hn&M=l_A|$=WrOQWQK6xe=CO|h>`@fZA6a=ckp<(;_kU!u$@rK}P72t=; zN+b2&EkZ{pjZv`*IgWX}u)n_Vm{E)$-%>K>S7b8MG#T~fV1vB` zmlQw<7+3-NAo8%HFZ%J?rJi29K8fkFSPRellXo1Fd-|@*J3gFW{RMUxj*IRrd`v~G zQO3_P3ytV2)IqEygjJ+n`wHpro#gI`jZLpd#(~_nI7OC$AJ6D1=s)T_#=4nnqAF~) zGP;T_%m%b$2pv5WbMAt;kv0pCUg4{~c=3`74F$VT9lw0_OTYEo<&!fe0S5$%F$b=v zVH7na<(j_kh4l#AYPJD&l`Y7ijpvneo6C!5@QV>Kiw8qqDrC0+?0neVH(iy!|)6k>{NwxrZLG~jMFE}Ow;Tl zMz8FMd?lwp=WseS>jg|i9U8Ukxthl(!mvZV9g{|Zb^w*TxNRY1t%}eTbxJa5H|o6V z2h~?Et_51|zWDMuS9o?TWzHC?9+`e}dWJiMf@MU6txz(rI!MA{5z2@JJGl#&#q**N zgz9lTkPdQAmOv-9!K6+Nd7uYWA-f`sV$={_iX-3eiP$Xi>9RJ#pPab&T{9M(7!4l5 z`J!z1b`O8@gXvir?wK512oTUDRC z^>DW`5n$Fh6s;R=*JF|0Z+-Mi3q=7|9tN?0s75{Vm-i7%1*Hgd*vc1<*X z)ZRl4s6V8nfcc@-WxMg3Ia@t#JB-@UjA&uJEOZSvuWWqnPyVBBDITVK!WLlFobD>q z3CLRXREbYi&yaP@D%%e3wWM%aA_FTV48Fj&p;}M_EiGI!n-BFn1gN1P(c&kY-6|79 z>Te2q@i0oUxykCEcquOI;4w_{6OIn*54MOOc2?!pN0-&IQL^*vt4m)ru+wJIwCoOq zWi@O=E{n5v9q*Gd|_s->z(TO%>JF@i$4ypbXb1wk#J8$MKkht2JYiMUmBb zW~A-{mI!a-+$i7kP?pW`3?R#+CZ#_@vWd$QLj3)^_aJ9>J!GPMeENgm{;{M#j<-S~ z=AT*@dlWDi2xV!`uBmORO2aR)ui66}V}~dl6Xiv-+Ew3hayZIfIWt>i#{~VTHA!wp zSSkzRe8DB3U7SgPu{K|XV6d3S!cp5TKL#I4p&-1@pIuN|EEF0-`JrCxk}%toYOYRCYIbV9}^21z?KwSV~0mav2kHMLg*p{QGF)>~vNwmZTvJrpg*3 zkSJd5gE55}fRk1LZCyBOedR_UOHxR>Y^ll#xU+Hmcb?_V1HTKr^{QEOH;S9?+OZ z3!!HC^U?n_>}z(QTN7${*oP4(KEJ)arIx*>O|02U{Obq(`yy3kWD7^#IBj4wvLO%a zhYdwvEUqUYA*`GvwjYIiYI|(Xc5_fOrtn}qkvV8^WSzue-3cCOx)q(@5~tZB&(AOL z&XY(Y^x1C#xn+tC!$~u#XXjXki^w318aELvnpG5GE-gpdIXny00>O&xdA5yOW?+ua zV8pv55o#E{n-UkcDC=)DNOfAh@jz2QxM^lAN-$9YhE-#_v#IGClzT=-A+gS2_k*cH zM65LZ=7}WfN`$_ec_s!!5MiTg88nNzdC9A4Ss1ATAhD9h(TR*Bow8Hg*>C>ZcMlJu z8>fy2iKXk5L&zX_KVCl>8Hhg;Q4%OG!7Ze!1M{`Pj{&CBdFp@rKYns_{7L!nGzn)e zl`SHsh>dE{kd?d3Qvtj=37zG{9uD-afA?PR&e}k?m)Qa%vonBq5?o9eVkppw$b%Ra z;Iw`l9VSRb(W2fC-m9)`s&#AqoN5_NLUtn$y0|+Y_C_NyNMFu$@XSFly93+~d+Ra-c zlSO}yeW+;wYqVQDs1@oBG%r3U%Pu$Hq-NYREb8c~dEIxULZCL$j!*&U5%|PF_e%m_CvV{_ zJB?=}>gdPfFrJS95@j7*x@ZRjK5fZ_(;n z@~PbH_gW_`TsaZNm4-YpZR;A&TJfpQ8pEPHy^1c-?^6~NMCRMw=y%S0%zkI%> zA6h|8^|E?#IUTQm^>=??eqZh6^j%w5Rn~|;93{yBN{E?Joc+GE;z6Gx54%*hapb~9 z%)+B_K5UCkrMlrWv2g+;xu~&{VNS{uQ~(`TW6em%RJzN-QOp>kBr^SCsvX8mnw!#d zPcCbuF|iY6VGASdHr!|WV3I0L@{LmE8T(wDN=R2U?X0>}9%o4flGHz(-?|oO%e-`f z$F3}ZOormWZ9UkQYN2%b>{(Y*^Jv8Dq6n;tyXjxdAYJkP{cCxCg5LUsP%{y1e! 
[GIT binary patch data omitted: base85-encoded image payload, likely the flowchart ./流程图.png referenced in the README]
zE3nIybT!%x62+vTL?bl^&c{58ji7Ip^o%}NMCmCF@PHw-qpumpO{HY3889R=Iy_?w zadN&0;KTkD)HTJ32UxpiR+d;lZ24Ny1_rR{vs6Mp9SO6Tob#$;JE0P+$wMKDbR*_C0&WDsPhFh>*O<+ug;GmM;3TOLTDbYbE*tTRBf zr4kAO){=fbguO+J5;zDx*K3_WQOG?&XF%I(43S@dyEc7O;~Qi zcOfASF9(U~##;vgGr~k7&7pYK_BHZE z4NOG3uOtottAHH}oD~Rio6%%C#-}lxNij)Su8Cvfi_XlN@xIrN@xE_`)({R#I>Cn~ zaCHZ`Kc6|5AJ_0F$Qt{fhJz4tug>LQn<_bPgw8@9C6{;J@pwX%GPQ`&WDovEE5?i_ zcXqu+59WotC$uM+T!V^O5KMW(idJA03mCey0G1?uUx+F|CZT0J8SHAqamn`Qm%I0? z#{5{!R_lUf&>K+2VG4$s;CqyLn3e)*6n_9`k6Uz{7&O=#@~-}$`9RPrd1{t2+WdV| zKmX`uc~Dq)YkBn{tMwGcRJ3|^nhQYMl)9@n+Pn%OCL1QK-NOMwm0cxJix>4-ZFsJn zDg%}rK}8K~C)(Q_^x2?Kp-C{!O(wujflV}wv*`pq4`&v%5eMXCKDTIqe`e#+e1dSZ z+-S6wOtZz#>qTI+N+THN0wff3utF;PS z9{mP$740vtmOQi#z)guEri+d~<@RBvUVA@${^seaTyJy-15U<+SjO%M z#C6X{(H1dDQDYUf?UJra!Fha2i)v?)jnsej zv(JjVVGeIi`Y6$-e4VNYwQkXe*%`YKpi-krdZq+9P-_M%9CZ*{k+|%ECIYxtYjO=| zOLEjp1>M=euVd9}fe~@uH83d)fr8hInrhLK5{&GNhv8_A{H%fZQ8EA(H_VQ`^YAPo zhWWe9(k8_L?S%HFiYL=4V2jD8q8Q1`>FCq@_9s0oPpj4QqYP)SJAeuUJm+>LO`Rgt z1D319qS6ya2$>~{B0ghZvqzCE)azApMtOsA0&=sWXk7r-H?L@u3B33=uLX;{6H0>6He zCnvmDEldsP%fo2$G{G*DMItnt40nrhuaf!XwwB$EPRDtT!GPM zANgdi+ir_PP(+9z7fylGLipCN=RUgSE^cg0ag<0rr6TG>f-zjUIagC43P>zIE-n8g z5$?v5r5Ii`T?Z;1J&?k;4l_0mv2$%-j(W&)33&q5pN=6MY-%!@lqSRJ2=0V%k!8r< zwM1!pAMtglD^qN-d0~q&%Ok=W-8jOFkVJP&_M=@!r8(i%#qlTE&MJyriyLt#zI;_g zTrC?RB6LY5yx)GDg+TOuACUqCkQ?>SR~#F+AE*ErevoCxQLwAnOUYLPAI%Y_tJpbV zX+~iR0Y>OWPiev%G&2Z7@_Oe5(JfNq9J{ItbIN_z8G#B z3$^F%9wG=mZp8|Wi${Dw>#U)X_PuHF_EKsqVvKki=07dW4f2+X3nU4{{s0gpdk;AHWd}-AvZ~P-{tNf z4u!owDGaRuXq7p?6msA<>gf07y{!Tv6qKv#3WZxU-N6lGN0GbXz(*-81g3526$=&C zrgpNa@x#Z(^`CrN`00yOZTG&MTJe6wLiXELl^-UFLbmeN1k#f<6?;bng;8%7tQ<)& zONz%P!EGqo40-sHgbDwCa3cWs)$@FlE}b&eXeBHN2*6U!Ot5Oru2rJHArm45O4Qiv z0z+!JvlQNVGMIccKtO#qGZQ7;^H|prul0Qd#^~t1aac{ES}^t@aSF2+CD8BaO~DQ$_vK=yLeZ|l&d z0;OJ;OCHF9tM9W5`u(<$_joE}d%1i4(UUxn5SXq6ZK%r3Xgsaeu{kzEoNGn1n%&;t z&}ze{NA8H^iuf;PIXXzmFWo6ir6k^J?)AI#r35cn?FNZD$PvCLndQlRnp(X3y!iN+ zKOWo_=Y`!cf8Nx}#l=kaJi>47HhA7yBoNUFOOrZez8qD3HeZd(um9s0k8V(gfNC9y z*`zpMEun6W-<8V<+z)Ey?Q8~c07Eh8^zEYQbWU3J`hA)WRVDEZ!=`*vL=mrDqj@Dc zWoS!N;R(OBWwuxf$R@tIg0QHZA=v0S$o~Y+%B&#P$!xb8W(HCQraBKbk`qD6N*FRn z5&?2bvu=xpE$069JZz}TWsb;)7TBGdBCorlPD{V50YqM}Pud|c!Je4O;svqOYQm1% zhJt`1)sggN**iahPIlw2C8e08{*AvGdgP^5Zx9WheR;Hdk zmVu?g7T4_HTIHIO1E*<1fJb^c_IpHaHp5{BO`( z&XE1r|KvUl-#Ax%O!RooKl0nT{J3xDBB#7|itsu&LUMt*Hsl1D%ZPQgTRr;CYX&kY zPYGFDM#pzu();cpm?aayyPF2bp~^fq{SgiMVAs&>xJA7B;-?^hJ26v)x_PQeesy6p zrs|9A>(S}v-MTSPHD>4ZI!)*-dJ zdWV+|^V<6T@4qhnjX|{HXD(Pb!1VxNyRiv=RiMKaf{gyv8bKi06&>t{k;TiKY+x z*x)O)pakqv7xK^KlJX|bc!*KdDwf^(*VE(Ff#oJM7JXCznGjW0)Ye5EhKbzL>6!|842XP;u~un8^|rfxac-^)tpGlogz{%dGH^c|azRKW&A}I2YkJ(p__3Q` z{p!=&y>QD`)DeD0#W?zLlJwN-+=SPQQY8QvRg^l*L+b0tVY<@FS6Xs5LgzFDeQLr9 zHPOI&y|WZdk}}CjV?vRHwzc}~85>6)Md%Cs+Vsge^30Rzv!|kQ2t$&9;Qj<&g;Hm~ zNllApNQx?6sW&R6s+H?FE3@8!C9#g?T^?n%wwlb)|CZE{t|$HM{LjDB_`z)komBRy z9BQQoh-pAZNaB+;gi63tb(XR%MMfgWoI2#2kLTw9`+kZJdZW4T5${L>43WQ%jT-XG z)6fWXL$RVGXF;5sYH78DR6Cr0jY9&z5b*;R=$kjMh=M?_qx!GasTRVTagDZsZ75(& zc^ejP#u6AKIyJC5%Z#_hY}6s7id=&mqTUF7cnirKqit7eyTiL;@q{AcA}54Wk%fxLxzfC-&)~Fp z1i#4#E1HLGc2Jy2$oMm4c5oz%fZ?Y>S{)=~1PdYK{ZLx4Oxh|(_q>pXdQB_8=A#%C z$wknB5Dy&W=p}q=omW4)FTXD8r*b|vZvM|N9=F+Us@N}9JM^^u4TVz?A`|Z0s*n%5 z0UN-Sad&T&qoo2*<#brfS%k`-giLk&HV61flx^D&U)t*Yl`F`(}DMKFlZa^aBGK+|ZZo95H6^Mzs#RadG@@tf$x(JJi@aUQ3%1JuEwqV?fM@O=99bK=Ka#r&2$^N~^;!f3vG^GIhev`L%XHU}?qU4nW)v z02)-aqKl%{r=4yAjcU{VO70omZmV)S$PVcsB6w40@3XmzsoiaH_ubav7oRpiZl0QX zUC~l&_>k0@r3^U+8xZ6WkX8H@H3!>B8f)}K*Yh)Yj8WDbSmc15YdPZPIrQ9ZgkUC1 z+9S=eOEsp4yga!ug0nFT#N^#qVqlgIYd)Ma%fU2^gC}Q$5JslhY70fu?8tr#!i$ta 
z*PsKjZBXv2KIL~5w&RJzo7nTbevA3`=EmkrOn=>x0hnunBwQ{S{=i?DnxLO3YRL+{ z0@!xqa!r>QPvD$kvWcec^nEA0{DaSLe)dtLnO-9htrbyijj)&+I%GLY-6=#&&Xrn& zxxjL?@fr#YOQrGVG~Q>|>B_nFN|})4XsbCuOwM1~$OH9mTGi5s0@0d2UmSG%LMTBV zRV?&E*ew&NMZVK&aSo#7{Cv8kQuwM3KPdzu|UrbM9}+ztSCo#tmH=h!5jNpb-sn%fOHvMERR93ld$z$}X| zhEx)Y5Vx?jSQ{(LLJ3a2Y435|pvU+zHkAAdds!^g!tk1^>vwuYg%<)cZTG3~94nSk zRxzqy25{GIM^tmbgkz{lrJ6|pXbaCGJjl+lv~(yS5OJB%GC&a8)!*OW3N?=;z#?Bw zqo*-KQ1uEhUzZNwzb$l96P0xAg}-@PzTI`#d8MFv!wC8oeFgNLq{M>WHasv9ACi&a zq8-Bq2{=~hJ4Utv^nzgJPuOB4p>-}&B`e{NI7M`D55^jyW9LMRYW$^w%jfm$((ISt zY7g@3^JYvk^ZeYcvy-;evmFDW4tp*j!4@}79+|8su3gJS(_vvKZ$Ko1XGJAHbx6Wk z#&WLqrxhMPjc(0mVHfI(foONp(a8$Y>4RQzd)j=d<|f!JPGd#13z6Xyh@5&N@S_$! zc^x$_l$}$V9Xjc%=lo`Y`L4sfHc+7j{-=NOIuZ|tFxsC{Ww8PX5#yDF-hPwPk6@wN zbpVX0=n3?&YdG~)Wf=v@H|D3OvVxtOu1QW6mvNZmX&|Fx+?~kzy>HkV-jQ0&M0F7> z&-@8H3EZ>mB+J29gCRr@kW_V?xf`Ab!##hcwaC;Ksm$|w9^9m=2BN334IApMpj}u% z=1K4fN-)+JSQ3THHBwBUr;}0GFX>AghIRVx`_0`y`q7Jzn&3fC!Y|j>fi%6iF=6O= zlSXnnuccThhB(okF)VJKmnt=i5HVfqe6z^oAlg-5UCJGRF-y2m;;+<&{v%S9$-`Df zEksl+?Q13Nc++8i?~lEHPoT_Z)tkg9a9M&CMlbR7p|l^}Wu|}dN$tBm3Fxm&Y%kDO z+E=*A?FF3D7KX+?$5a#L0Vw4GaE$~1W*smlLJcOA)yawv=!h?gV{l-Yz4-Ra)gTA^ zYKZfqp#Bb^3RDd^M7*foZp|lSf?jWIK=4><5&SIxz*uwSmzDJ5Mjf3W(Hr}= z(e0tFlvwj1)v|K1+HkUeJDgQ}cOq=tpo+fh>&|euKIc0)X!*Y{Ob)qgj^5Odi8xwvzKDukj-Q%+&oEjhF;(ck7oo zH_wklArLH1>fL(T^&fn@`;UJ7x_KVu_LJ0pg*@`KTMEfoLXDQvRudQPfu-wWTbioq zS5!6skRCg3*`@sTuKu)*zEd%Arqrs1ad%r*hBP7zI*1Z&Bs601mM2ApA)spl%STuP z`x$bb(c^u4jl?naVo)B02h9qWogxI8lT;?1YKXlFjXEBUI2kOJ+j_Qjc=vhx_)&G+ zZ{z`XC&%2XczYBP3g23yI4he)uS5*(xbJl_?&!a(YS+XC6Eiq98%(rWm>3ppS*7P^==I2n79j*$}I&A!(@|KP6B%|npQZKPDW z{mq(zRifSm%`)sv*}%5riox8xo|rA1adqSW@FAHyDU^%7iW7R@f-$zqRLYtK2aIp|zHVFQ(GRKaeFtzOd>2U2T{)*2mqpJ0%m0?> z5-o6}zLm^2wdSqRKMWIcHq^H_yy@(Ol*hZC!qXZ`uU(R~y^1qeqDAB0VylS8>;f)X zQ9s4Si#y}5&^ior87awe$^cg{14`-|Plzf4V<$Tf*|Fhpkc_!%w*AT5lT1c$lj&!g zFQ)1IyZH>b_T`%w&kG&kqvvX`-8b@qsM&1?{Wgo3A&=<3-atfDWSL%-A~w0z=qa!5VGZnKaXsh$pv1fRi2DQ_-gTleiF4DNiWxg6;;i5u6(m? 
z3{Ra|s<}GlC-7`=#wCHJf=r?cxyX>$8;Y8Ro0_?-eJ}J6pClWsq55n-cEfDKcB5qv z=q4+M&@H2j8UN_!^79XhpR~?}{R}V>XiTiHR>=cMGVv9!6e>lezcLD~`&T-`h+Z%g7ovk4eZ?q+1QFSeXKpeXVsT>QdZ2rgyX9=Dbc9S0cH`bBW1@8%FI8 z1k#Du-!kdqd!k9=MV3X95m%30rollg1k@`f%~q)guBoWwsc1omybifw^V<(7ts>l^ zl;W_wD<8hqzI>;BY$HJ9daW0tKP=61rK)?h^BCfCwFgT~&*rnIX0d>hS#k*wAY1KQ z4EkcpgulA!k|EliY>wM11W~Ox+tG5!8GH3=bB+Uj6PNzzqc=KX%;wMX&M$9!`cKcs zL;W;f-`_}_&h<>(ZQs0lY3n~fkC@)=PM57co~m(Cq8J@C5{A&F8>>+7B7=*KAPIcx zX7bq0%>VE^U7+y6=~36@MiJZG1evySK9+M9|0Shj*K~f#dk4EnASn3 z8-1Fhe|G+EaASM(V<;}NKCI6Dfo0^m_~Bxbww|3&)lny`Y1`0l_3em#WmC2D>?ili zJkP!CGIJt`@Pg*n>B3*X8!l149`WEg!2u00*~^kqy|)p~HxPxQGF3uCj8xG(PYwiK z#I;#R$z?q*lsYTS5*YN6dig?jbP9-dZ@|SP&KYURj&<~sOX8o*>_w%OuIE_AN#8a^X@Erjcxn^k|0luW+ zP2>o~y9_c>K#Y=dRyQeUBwd$4BMB2B&ZLM8v>Wz9R$3mEw@Q-B*GL?hNyaPr!ddy< z-ttXpo=<5h#>o*Lz_*XSHZT#_Skvt-H#FwMJNCHK8W!4u&X!M&*UT3^XQ79d0b~`u zH_1T&PJyC$(B7 zf|HTBQGTe2J%+28q3Df~5TIFhQo*2y^VJ>~)4utgRq^F$lhx#$mkuaRVKAXgRkV8& z5EDv@O6qJ1o={&FCF2^gKvQa`5v9(8oLW?$ixtQHvnbBf!V1|AL1NYD&H5v!13PKwtA zLug?0!9izPlTYhQ>2D{w`7yVT(9|X$=Jk=jG;^#g>h{9iu`u$n&q)e`IPqOB_MqJ) zj{+kt>1DCkn@!OYEKNcPXgx*$C=#o>)5c-|V?DX@KlXZE6UUz*t@_>eUYoM<7r%I2 z{P9as;^9#ZU@miee~)=(x;cI|OBVoKK%>8#_f(YiU_&t2-569Fk<%aC2=gLRVtK+7 z)@y_sXmSK(h`y^sNn*%VMBqVbZLuWkXb~p{a)AmYI^3%GzNKX@czD(-sp7}w=m{&0 z*fMk>9fWJMIWbBjXW=`_YZm&|7q@$uKDL8BOJt&pg8m`D#P-Ng+j+NK0#6Szi(kIU z^i!kDW?asRLE%eg=at@%ghCpKJUM7$IjvgT!`j6*UbD$%H`M&C%-!9cQB5c_vV`?j z*1Na+`+FPhc@z}--CmDB?O8~Gj`jnMBzmD&Y`gCxvzPcLDNrJl;@of#T7p-37<1j;E43#iA=8CQomWwW2ka@jIZ0~p+2bepv7Dg2CA^yd`QzWi#vzR`i;zEwk{>#6-v(C9ti6p%yh7B%hY&zW@XEtaOoWf&-gi4xO(MHY}a~ua4 z?HnIJB9ED~M5X(2$MBbKrP=5)^qB$HYpdCX9dmQwSB2%7(hsFDKHl3tY88o{} z+z*SHm=AzOrtxBwzWs9DT$f*;tGDx0=1Kis`dn|$?Br$E7zzTbb|{lY1(E>f}v5-*K8^J z6PypfRqVl(5)VUX|niuj?%`sfvhT_)AXCy`ROk{?R1qkFNZ=JGPQ~}e3$lhhf$UoN|8Mn z_4byXfoBf(1X}3+6@|C&-hPm&V8hhLFN;05Rc-K!?370BrdsQdhzf|KX8qNm3nxYs zK=Hl#;9F>D%u?$xbh-BOm82hs4gcpwkYY2b+8-(F&z8R#JZ!Y0r(;( zL14|URO7-P>~-33Xj*$&YR#J$mullb{&->ReUfwmjOT1y*nF^~{ zFJFnN5?GS(cY|A+$~@a?_x)RwZ*HKZ`iZcUYO?uc)Tl0oi~UG{L=cHZl0f(f3nVz^ zC~C`q>le?r`PI)}#Bl(afKrwVVjVzo zlU&vSQ-*gH{>pxzkb#DS&WAwJB<3cVMbU|S65VksK5;f0TC-0K2046t_mZ6v;kcA; zW?Yf-Za%G@H^2B+_h&CtwZj7uqtT_!)YSG z(GX$g86yLy<(Ib2MxSwXJXqIWOZyQ$=8GUDU5O@%bh$suz|2Ojr3Ax<-|EO76o8Bh zo8!jYAf_yP@ozIMT$yai00gsitpT|fy2tTqMXXhlxlz}d&~C*DC$>zMt1B1t2w#oL zA@RIUR7<#PqzYB%eG^6T8wx-6dkD%v3jC%&OZI>HA3kzz{+Q$7gFJ?-#(9$jgYR?A zs4@-*6-tkhzj6@1Lhrdua*e2VwCr8d3N}t;uWKZVm(XFTgS3j`MK=zUp|L#uHdhVA z)>Td=ZZ4#@mvST&bg-~ckgrH4XIGc%GI#qHXa2E+p>}mCPJr7%xlys_Qd5{BFp^ZZ z-|5qr1fLvByuuD`b8B(sOXc%Pko}*dmEOUeZZ`K&@syLU_6O7jL zh#|}PesRx!Ct4aRf!f6@FE%& zk3j6QP+q=zDb7Rhdgt!;o-R*tK`HJX8$a5xFV^@7xRqm6SNr9C`S_#3NkA;K9cNCP zaxn^$l}R(nI7`MRytDNh09bIsin4P(2z|d_Of|y29rT2QAOgR67@?Pngh~YK;0HZU zo&yu0_JEIUIrgGnL!60>pJGjZTBwH-lLHfMCpf8;4?4iJRC6~ef-4lP!;F$xpw zhu4AEeE>XLtF2~Ann3q9AM}ciouEqLkG|c0U0kG>?`q{#qpl6BXxvK7v)yh`X>^7H zrW%&d3?3(rb$SnokVGj#*B(|Z*f~6~%z_B(seg`f~rTQ!~Qs&F_KMx=1{zj`kxVe3LdLj#B zwbvWooISs9W;Q>0)A(WcRNfD@vSP%5^{keYVp^Jj?3oe}A!}mp| z*b?g%7tPobO@yJ;qitpMjPU*a3#N6vKAv8`dBc~IYawnW2ziHgx>UUHHnDQR*x~2S z6+V4^TbO_KrZD;C$E9B7#DC;K2n)*>+R^MP`!9*tW_h46Y~ zXdz5tJto>e{2022QgFNjC?vz;BPWU7^soQPAac?>H>%@;bqE~A zJZD(AvvARU$9`pWcs!@@WuXS^z~XjE!kG4sTiHkc5ZR-<@hpY|*eQ~57oo@Zn1#V_yfb$NP5b`hGhUu{&Qip6wMu&hVk1Tp5C1OeB2s1>qwcYJ7Xw|TZ|@(=PA4{18TKF zR3-9TWny3FJKVLb-Yt>fEf<@7ZpE8ysa(2bQ3XZFO7TrjE;gUuTu{x9*5!Qj0Rtp@ zjbI7l7EWOFg1}OI*+K~nQgF=@khBXaXM9);ta710HY|}!SkROtU|Qu^xw@15)ZZe# ze^ovF)1SSmZ69=3LlLSCv#1df&sO16voV=_?zY!vf8j=kEyy)E6CQK7;0luRh}@Mr zK0S?Jz4!zM?ClfA;3NEgx7T|6{!3SR-E7^~>mxlOL;9RPc=Pgk@}L=gKsjpqDXd|0 
zAsiw}z#UV4`Sf&uGhj0gAKpU5H7dn3r+1-pf2|i%9U0enqVpqhRm>dgnm!et|NOK1 zpM2h~9v+Unk*Z3-=?zvW@*P-!W(c*%<%PA5PCDw9|l zQxa>t`SvH@g7IM2d$bq#FRVI3#4XVuBD$-2)aB^O26;?UK$oF*CLaxv{dQa2|LXIb zk1MOvVGQbGy{Lw_{YwJifH6Ea+B+JCmNr_wi8vg$Vsu~zs5Ni3v;QE0fZx)84n>qeq;gw1v2n z0Jdk*4Hhd8bBF%m-{WynStB>ehQ+St_<)>6w9wFRJN>@csQo2|!Pg~*!i8FIZYr72 zn}_eV0TdT)UGezPgqQ?zx!&nE@Hjn4sO1`+o7WnMx)V!Z=SNFIxZYl?H&G6YT1V4p z9u;{_)7S;%GGS<_X?+si_}gf9gnefhP%VK!2)s=|lPLopc78wW?7sZxzxsA%{dKU5 zK-mGXX=fS!3d>s@hk3mSk5i;9csj5&F?KzuOo4~y0&;?DWV-Dm88JZYjdwvYsRCE+ zkh$$?o1)ZgXaT@u3EyJCyM>rML7y9vn)S~$V5oH3K$4+xgdT&c7voG)S!vl&8=-rm zU?YA!qmTTF`OpK#BPVQQej5kBsRO_Ln0mjdmRm>j0@wolR9_%KorvlqSgAS6!--tO z>j=suKq;>sWBAyZp3s}}N8D1P!jv#Lh!t@!QW&DnDMg!Ho~M@F_tmz9yYOhN3d;;U zCQrS%Cu>mt|Cp~&Q|)iIou}08bE>;9wlN!;F~%ULJTQ-Oe~dCn8y2f-D4a=Eq>`vf zm2;qVgBw&&${Lcogl=7rpW21=tK$04KW|!Io#LjP*K=Da7O95NO~eLbIwLEHEy_4X zQ#i7TW;Q+^(j{TTj1KdcgNwd?b1J@HW#4Wp^U|B>jRBKtCFSYlOW;`-i&nh_K` z$BQ7#a))XL3zn_d92xVHEhNC!FblZD4o>2i(Qb#)Wxf(K5QKvy=aug6ZeAsoy@Hn5 zDhj9ypk}+Z^6T4|1eU5&+SHib-zrxErQH>?1^fa`LIDKYLJr1`RTyrRUc7i=c|ggs z^zUx(e0?4XlC=QE@e+;Quqbu}AI_`7{L5dw%Jp)rd(XZQL1Hi}*b_qGXfn4LtyTy_ zQQ(Bvr6@BPpLiSTbI{em`CLeC;JYdtZ1nWrTq^>RI%0-%!X&FutHUA-aY73QMM_nJ zGFy%#QpPm`E)>v)hz!*DH?GJh5{?ziW2Xf~P^kx|HFmBTCVT>edNx(cdf&=_+`fEN zoc;6v;=}KEPq*c5_B<_?SjsAXW%uWjI{xtoY*Q1W^h5xhJOvb&vtbT;3`s7fZn1DZ zNN^m~$ZnKjWrY;g`YxxwFQZ^eJeh4tgdlDULP&f|;V{H^7cI^|21k**`BNg)FsUoBcJ)ta=(>xwMbD`?lLu2>&QEm4@bzjd= z5-clw^=R1yc`GI)sF0?vouzkAPgx79w3>B6 z0w=;NS1naE;uFOUfmjHrGyz1V*;bu~Y@d{2d^qU#RKJfVW6#p-bVY$k6-P3wqD)_c z${H|>C6hP(ELzf+kP^SNN9+#!+G;0rLKvP;x&XKz(F?;$ z?RsjDBgGg9J+={H78Gl0!N_0<@=vxUC4@spFY3R08fU1lQf$V+T zpY=(&UXMv)R{~2(5JeSFps|wNOl*iiY*AyBsOvhRM_uv>4 zWNIAdE6IHZZB_Q){P(Yf8s$iwJ*fbVYp@(6BN99T5pGMYbvGAN>qNGO-`@AzDO`q` zPd1Y;96FCgxrp@fT}1Ivk@86P2gx_1rexBhF*?yq29iYcw(#LJeajR!I2n_zh?;P? z&A-!&CI9iP_iUCnauJ@@Ote_R0-H9(L zk+#9o#R@K8zK9{Ml$yLlHVCB+WK4LncOq+=vo2K&DLl2@RYZ%i#M0!7&|X4q8*~9Zgv@|5KK%09&U#{C$TrZYK#+^#{r=f_v1|(jiCC~z#P&B0)hdQ|J=mYueJ|`Sa$uYWIR9cx1IW*(u%7t#3F~ zZ^Frgie~uFr+O2wn0_umDErMiw!FP|!y)cjoO3NC&JfbQ-16r5_UAXdzx?steSVqQ z48uL9aWAZLj63#=Y(xZsWtAdUERw~L-$lBMbqt{-aOC_@qqyCGSq0)mH7RLp&|bqY z&B8@k1$`nAGg>X4m#ciOF=!9w%51FE3jk!t20`Uc!@^TZigIHGHD17mMRBnb-Q7*E z$Bj^$!vJF*`A3C_qINJW21b!j$VAB8lKVnDi@EgGpq%==efs`Q9vHWH0p<@nRH~XG z5|Ih&7J1)q@_8B`i-yGy$Uka;=%GWlq9!9Q=rVxPgkwUGfGz26wLd&*`ggJ%M3q_h zDCQV%Yv#T9xSWk-7MloOP1%r0Cq;o##H|s~B14Kuyl>L0rUr2q{@~e6`JczfE`e~G zUB%nAbP0D0+VfiNtar9wL%sp1VliU|3T`%#BhSD2>EQ5_52}UJk{52S1XN3grM{k{9+ht*YTJ0k+QRDy3@7H1Pca{C^~sxVxZ zBJs^)BWVeDI?@s)k*qFhNUvqo>=$40HoOw(lMODO2{XpXS*;;5(xOC3FxYohzt(v) zv8OH;9(b14u9~4lU5obvpnqnY=4nmZgaD*7uBydFDL^~)dP_c{Lfo4l_jc6_8 zp%E;S?7#bGuYFnYfx@BXC=6eVTz~Xj(%gvS6W^y5oSHE54rj{c6W1)L1^mPj35U!T z8|FL_LoUSo^orZ!@zGWO-;72MCL(Nj7XsY`XiQv2M^$DTU8b5Z+THixpKrdo+%D3s zgt|uGqrBvi@Qc{38n(!%5h|**QZQikt>m#TG`2w<)a1OFE|5RS4h6h*F)n3JuWG5! 
z$_t^em-$IO6D_{%Ozx*(H1MURZ@JLP49u|F5`M-A_#|W}_82GF2r?cGnxT-R)TCf} zscdT>e6y+idU9OIfb(^dU=f$ad6*cuhy-ozMy8h~NP3bpYUinDYi75ZVJs9iXjKa9 z*|43O{o%)zUwl+n%#}Mr-JsPVKmeRY4xz~Y|Dx`s$<^z;4>*9v3^b-5Zr_=-C`zR4 zqGBaVL=OumT11;HvPzZrUgWD~=}+OREaHt_mADc`1Mj^(V@6}>{Qd`IrHj>kFZ#v% zp7We%9yKivLr)b?L!^nlpWd2|et(kol#$l)eJNZmJyxPhq?QQSWuV~{*i9PhBwbxp zLwS^>vS`SqqLEC+*Vr?HH|s&K`Nd~%+b6r8v0{(Xc01n!|NNIcAysG=OjjgVXnLw9UL0~4s?g34xO!RqmH+`TmXOPV`&$bEtKB=0K@ zH>@e8ZKauX5_>{I6Yi<$ooSjji%;@J2a_v`Eo5Y-DgsT@&81jkxY#1^%oq<_+&1>J zFE8tV^Yz((`1(wr{EO;zP#3(uPtEqhV1)<}9QJfD)F{`=JyUxuDC7rn@f)9Cf*qY>3!4wZ0JDoh$w zF(|R)n1TifK+wslDxQ zY?I#OW=JJp1T$4ntoF70^msLt4Lj75=fm|U0eU+V)HEU6vdv<1NYjUVMwWR`)~^kq zyplNV#Mtd$PwsRs=;&?0%kEIEI@N5el}cVx{NdqN4_Q|ApT21Q`!5>f{k3>czo#fQ zNkUGF+@hl&b$YRu!dJtineTL5h>9H{ja(O>TWnMfp84|?na4kryj%)vWI#QpMv))M zw{j0|hJ3K)=szhAv4iN)bkIUy$BaeK;dG423z^-M2SzSP2S^1jO~t#|D?ay366N?g zFms@Y!N=I(qFJjhYM=bwe|gFKMi_9TF~Ow1c@V{%>TS6w!3+G;a`UyaJuJv-!U&3t zieHrsO>3Qq9QRloDq_h{jm1}DOeAxTlu8N~@#6&uj^5BzX;E)U^zyW3YoK%3!}4ib z8{c=n`mXZ&U$~w(`j|XxR>Z%|{YpbghrS&4l(?S^jo7;fO)MIQQVGM#nFaNhT5Z)9 zyV>;^gI&!g`z*aE5zr!6X%_3yIgyI@ zQS|5l03ZNKL_t*ZIzvNbV6)E{+O4O?+0DtT|2u2kD5s^&bN!{3);Nr3fz3v({`jEf zh9Nxk$L_=99ribpG}CB{8H%WPo;PpLM{{jC77y*!?O%U6`t?s+=cWYBu4Nff<1yBh z?)Qv~P?65Us11h`R_}JDu0_GQb1*7~cB1r}3e_vMJ9A~^CTY_R0NIX+OR*ZSBUCZ=o~*wZ*M^lfk-11e8jZwt1WTSHy^5Dc zV=0_Y2Mg>+tevLD?axw-b2-!L#*-HYYqz(LSMB{(bN_PC z`gC)1H(Op@yc7FZo~GpCLS~yVFW87rT=uHEw6dUoXVcBVIyjUunv?I9lw3zmF zl-zAPl^wU*yJ6+qAJ_l-$MtWQ3heX1k;fD%+NGo7M`f=tR}@I|YsWJ4yj09-%cdx3aoXMwJDgvwhnriGAdZR` zqjRZgi|M`6oHPBwME2)Xs=$cHUys!LTg>9sLCPLpj#44@EkvlyZhjZT(>*8Z%0 za>~KMnI3(-K_-h9c{@ZDT*J?^uPvD-gI0wdwelH!aUASP=!fFZF#?MS2{tQ%$>&CR zShEzUVXuo>6Bs0om@Rp!ak2O{m%q%K+4MmU^2KO$dv^oRA)y#M5@9aWHZ+{ENc^30 zV`>I$l;A4?lMoW1ILsfK%TNF6RqHQ)q#?%#Zdi~6Kx>s=#T7ajHKpOzmv{L7WO(Lj z+zCzyxM&I79?}!3G5a;&>Ownf59PcPp=P7w;shSKNCdJz?T*WlbwY#GQ85`Vv?yA& zdk$N3C>zU%S0@j@dRw__^Rj6SgYzWb&K*SiOz@lsz9eWcQ*d2Kg0~*oPEqKo*0Eji zC{P7QGJu$mXjkVS*(RSY(LW9*ZiCXB0|ypUQJtNjlPEN$GmJX3O%_;6c(*hvZA~z} zx_F^1zG!I}9c;$x(I)f5%W!qP{a7+d(pUxp_i)k*ll1jSq^P-ECQT)isVq$tJ%8|y z5IffD&qxIKu|78-46n%-(l5m4Y?|-$(!AsNIanM;QL&-$wlHf%Wv#ds$yCH4?|~L! zdvJ$>{9ybca%w3NM3H&v%ah@67rp;EAN^s~VUKc^s=u>UxM>H_98LftN+xnnb-P+j zA&H?v@&TxR+Mft8CR(W5%KdGpy1nRBhllB3eKGi(A9p9*tk}cik)KC`9%Wc*g$Qd< zE#uOX-UqxONiVQpW|pnlMv1i?q^08;m;y+S#xIuD(f{~V`z`-mr>`LOQT?C<7_(9| z-QyCb#k4ekZ3@536o7xy64P{^X_F{=+}=9cKJO$HOus&_{rXQYe$<~fS08AOvenb! zlm%Upw5{2U{7xL74!|S(#D>OkxFZfVbwPdFHl^FV06pTM@jx1$e(Bql>c!e_(PBFTYzo(t2hOoZ#gWD#}OGzI^_+9+;x1?L9U{IdgMG-$(2XrNEatMjq^ z*M%|3yUjQ6$A9&nnmb&cpr2Q!U2npkfEg8pV9@(zqo{yaYxerK55Wy<8u8Y0=_X zS0#B$<`PbjLiIS-+(0lL75yVmfPrikIyB?I$biIUqhF@WFcQN+JUd<`wviN2ud^h- z-5jyB)(1AtAzk}~pUuPlVDg?rwD2lhGtBo-Tu!XzwNE|0*kU0I){SdlwnWeU-sDM3 zfcVrr$kSbi=azT7(^hc->+lsfU~L|4{kB2Oc%SFnDx=JIIp2_9uick;ojO1sHoSzz zu-iO7ojkPuXpi#d~6QRo+ z%51C$cFWtE41uQ=>#@I||Lcq0Z+`ybwDPc;ed6$;YFh+|qTAY&?m&^o*f(xf_7~uD z2-qDMfr0=0$<|1F`|^32YtX-5xBvThm5-a2uZcGEse#@^gK-uI-f=L7J!+FCW{7Qq zSyZYexUoO2A0{cOlN_|FZI@rY?EK|t^`D(>yW6{dli^quIHCJTITAxL{h`_%49^qB zc)Y7^7ZmVny}8IhiHfp}R*j3;&w_a&O)D;0{J1P0u>Clo%txR)(4e+S8I(z~-HX7l z1cVzr=ERV(WrwE33rDKobf>ncCp%xZ*vVOA|Gp<6_v{kFOK`n>h!q&=yvYV(hS z>YDG)P~_eGR{N}$k-4-s4)ipo69;Mn1on#My&G14^VQ(0J||M-Mad8&8DN5BO`-!K zSv4lvBjBfaM;dr^x)YP(7+Aci>6&)XRWtPr83QiAYoq1geVi$fcRio9pT9h7{^YFs zy0d&g+P_z|c34!sSY&S(Yk(qSAUwl9W{41za4CEgtEV$RR8R< z`iqy1U%YA>x%Tb*#`*Jy*7~MfW#gIxVj{pi9#*B)f}+Jm&X9@%BJonUIfF%%f@vj? 
z6c};47_e$P5v^MsLR%7)MOD;=2)vzQI^s0^vYvR_Bl4eMB!us`M_kfp?WdQ$!~5}R zHKRMD3cPsvlIkO6oz{HX>+$@ADes=%yn0E4*0$Mox~z2dhrg$Hq8xr3S~-ht*li}( z!q^vx7~4Nz!Rdel!3Ewlu5s^vr{S^9|yv zw!f%v-}ay13~KL(l{ej;VM6`qRZ|-XjdmXI3}#U6%rK7fM}ML{vR~&j+&}rJ;rr1G z3%xTM=3&aw{rd?KY{ho$8p*Xx1ZLoL>V zjJ7MrLq5v>=c%LKNdi4<9DX#a{`9i`qP-Y*xX_s&+_o8ukjl!CtIi-gyiW83q?EPy zt2d+8tJ9PI$?Em6_I6mGUH?H>4N9;j5p4Wm5pSG20Hm`(`gf!t+M>9b{6_=SCmO3} z;=#{~37(4W#i94B)Ah}N`u3-1wd=~_Q}N3kN&}Agk*FH@PiVnRBZ5Q;U^Eacl&_kD zS7uIJJ_JT#(o`irH}fUq8pX4Y#Rd!!whN~1elY@!H(HL&x>8R9IdKMYX?c|N@h8G9%w zm2F6flA@h294%QsHQ$#G=xe;FKKBm4kFx7{9id|GLK&G!ys_CM6XUxO#u_`03-H46DEX+4-Bce5RRv1r8jM z06M+o&PHWB?6-yOnC(J7&dkyBCz9e|ZEF}d=4{RE6=38;j<{B!E@R6G;mW34=?<0cH&Ws8?dP9; zbFm#(9%=@gQFV3k+7#D>_WQ-{N!Wuhq7-P*P(tiVD^d`0dEy=B&#i`SsRZA;xsh;K)z;J^|ErJg{q+d$x}b6(5NvcX_`<* zvx5i7T(2Q&ge2auZtov!>uImv9#2~L<{%EofByQVb*6nHMYhrX=a1X#jK|d{^e}y% z;Zhfq!Oy;Y@%;JwPW7oi2n=8#>@4CjS12mTY?d4cra9PMrZuz&7>xzvRbNauEOMo(Nx0b0 z*jFZAQFj_o3yC0a8l9iKypS^ce-_n;ilVN==e^E{k19GI&YJ4wOfF0|#ANv|oy~fr zuiN2yAWv%bBq=1y(eu2%y6IN7&;4CrGvn1ASLo^4NOyzobULhVzk1)6s&Q7it#9Xp zetWT{)R2*RT&b-JTZoL@~g>*>Se)8Vb#%^XfYP2(XHO=ee2eP!S_vmPvcKqE-0x&wa6Gj8E%{yRRWl|1 zBl=NgrwGw(;f}VS_2#&7->seuDyyH2Tc3}qP$}8e1zj+L*?}qTS2GH}aEUw?HMAK9 zqD95?{ORU@^=h4O-@m%q)MSFZ>z#kBUVNM%uCH(Kx6j5ITFNSe9#UVbJ&b~z_xbH8 zjw3OZ>QhzKB_2IpeLOf3CU3iWt^ENL`C|WWuzS9r$!tRx<`pmzgeOq4>?cY)r-Fc^ z*+pfrc$%2)jOuCJ8v=icXT8xDr)|1f=Y4{GJaOJDcW>c7}D zX@~6o>OcPdugVknsBkJzj%%b?ERjpa;vp3?3DLDj+gl`R=^N&YcH;hM$7LFGps8Yr zJDEuAq}iN!Z!)P1zfSzAKMV#@UNl2xZ=oWViL`xldw#p^R!(Hm|hQw|h?LLB)&5FR4ZF_nl@_0_Ffp~-S$ zA>?%jroEJrv2VT@ySkN*IRLnPZEGZM2rOq)v$ zP_!XHE+Atul4v>iG&7ZT7LtD2*irnRX*o{P@Bw%fz}gbe1A3tXXsH_?ww-cfKq@Cf zVi7d%OkiC3QpMoTW_NsfzuZl?Rf7xpMIqR7IZbLtCw)RQmTa|j=FyoF*~Fo5K7R0C6d-RxX5h(o&NFkSam~^wKP!IX^k5#aSVbbioQ;P8 z84d@|%x=0ZXN2OV&gkNPzLp3=`ZwCAT8|4i^0`4Hx=s2Ldquy`lkA~dF^DFp{XX_0 z`#wRYB$A*hV2~9EKOKrSmrTl>e%plZ#h|W_&31XGc!5uhRd;=TjhWaJ9~9+MCB5DCQL!9c?1t1B zoXyIhu}W*SZCy@RO73}aX%8hX;N8Ynri(Kky%bwUVbB=qVNkJS6vm1!+eK;^5S>X1 z+g@#V(bpY%Dw}{`E|xtBXXPai9$rCBm}c(~>9{+4$Bsb8PhrkUMud3Kk*($Ja>JYv zt2&UOXi-eIiLoo7o<6Efj~sH3x6F;Jz(%@5&sl~G?KLmQiiH(`+T06@6!9NdZIj1G z%d)wZ;410#lismf20*kX_xJI1ywQcv;~XyeqA1!cV0QH%{@c%8`LZ9Y3{x&MEZyxp zSw~OfuNXcK*n@m-5mx5+@xt}dWdyPw;$|=LMSpd(PyE@V{D7L@@P@9+%i-y^0Ncn8 zO1W4x%Hbx$nA#n_+RVY`u;I#+2_ccrmH^k2NwPw;lH>;P12K54gTyz1$y>D1MOuVW z!pkkDR&Ap0Wh|(*A8j&UrwFLs0u#%#{@5-_3=ir{oUwkT z(EZ^{zX{POwuNR$LDw-I`EWT~nKdb=VylFwvz*}yO%~b~r4lK`#mTON(f+Cm8ncr{ zhJ3A=YA@R5Q4SRe*)Jk2wrxgO1GV@(2y=0GCgBIMk2(e5yUku4nWf@!_^E4eFVvVC zD2Vv-)f){NT~o1T8NlX}@~6HvZB4-$kj*Uy*kFRDXPqJ8iRC~F9Lis1zkm<4AB`un zDjb&QvxIP^;7#l~o(D}}2*b3nzAmRb{aOv?(W3S1uA?`efslhH&&(Pf|JmSZjP#$xj0 zd)A}TCk$80ju)R8FHgFD`0fv~QQ7mwnvcvT;Y9eDtv*2!S;G(2WR}X`?kFV<0K26E zU}=h;p7yq&ah4C&{QK>5zms9BGO4AM%FSokm_ib|4L=5T8CsNuV=-l0wZ*kn0l`Tcq4=<06cPbi3ve$ z;Po!gkSWvXoKfL`*H3{8q7G-n7A?Rhc$BBdfhE@~<7yt`7AtwlJY#@KnIO(NprD*^ zCK^ZVh=NCsSw^f&Pw+sklzr}fpz{igt14d-lQBfq(CSA_- z4|LD*+UNFVJ)UrWmTm~gf0DVrNK+^EeH#Qo%rlum`JHiL)`lTFpoH-qB`Hk~`K#J^ zVgp^X!#vGW-Qe?_@BfzxVDy|RlSjOSG2REh7^_>4K=e<%#x?Zy2-i;9n|xxcc~gn6 zV1MqkbP(hX^kcb};%1tr*Sigv&?hp?vRW-jiOEvI>{D+stizThV;R0wC`Zz*WEG34 zLkMEE(-sxb;zHlhH8e~p?r9!>&t@Z3zFtuQRuG8%$lMp}O&1M!(Htp}VbdT_D# z=QSUp$0$6|?+3lp&;{|VSws?dQm1m-8&M{AJAIs1U6Yxj*WYvanyraQBs|lZEhfB` zb{qZNT;75EkBr!YombwCGBBa24iU*d^jM$!&`}dh&MU}4 z2t%SM+4-yg0fJ{7qae8<+7aSV>rpCT(tF>NFb>LA>Vz!-NMW{x39dp}v96MF4r@6} zB}OM%gMjxWF%<-#5O{xx9w|noHrfn-$G{)o2X_#x|Qfju#?dGtg;@R8r0+f`z{b>!VPRUkp!4wFdlmV?wn$7BX>f>_-V{=Hf0_oW4@ zq`}7S>7G$8RHOJ+=PNvmuUL+w6lC;bd7QxLDD@>LO~ri&ZqTj1RXcd-b9SU$7%cs` 
z9s3ox3%>|fd`}2gQsfz#LjjGV>9JHd56wN=q{iI>Eo8Kq!NbE%9|B~fN)&$-l|s+D znoN2VGk6YQi`~zWD8q|B%}|FS#&dz zd@3D&ymi63XLJ#UP*}u5g1+ghQX5#w$dK1j{-HB8_Zs@y6`hg*;6XW}F46{hXYDUR z(O&xOl~_e;!;w&Y)F%`j)*B$fcHw%dQ?OcQ5XObqVLcR!Y^}}z9UT#CL7@>O7KSOw zz5|Nng_{b)0V=f`fc&<@GTaOW0j3j2u;5>4JlB3gZzX1Kk$3`=#X3%VY>wSZeN62j zF%YH74&n^&vlj{&*`8i%-W~3Ij3p9RUjmH)!F!v(aj>P5(z~rBJBA!Vx!j7UH zAl@WArqeL1973RIyqLEa(#X!lj(kYQh@d*T8XQRZ;Yv(cL3fyi?)?>AC71=c7Tijy zgeO8Zsxeo7JowrqpXkNL?n{Ug>_Aax%p969G8edBOhqr?e#PDF!h-GBn%(c8u5oY| zG;og72_8yTG5gk!VjoT(QzP!5umM>+W&6^p`7sfVR% zOXOR`vSxRZUdgI)Kso@jd|VSb2B)*nTv#y1WHHVin|6`crC~lKZxe@?#v-`H1wjirHd;3( z(TT_hvC*@!EN4|lkNDHY`6Q4NOs<)u1#>XaBqIQqv065a`SD!)@eX>me!$1&Su1VY zZ>Qt86LAVOO6zA~J5k~Uv|T5qdx7$!$ona%r8J5V>Ce&O&BDw(F$KZU+r7y}wrp1$rlp%=fw=a?_7%rxXrFs{D zTQFd{c1F&H>~|v0h?Q*p*6GLRT8D9KlOF^aZsJ%Wt=Dr5AT+EKHIGMqs{(55RfkQZ zpmbXEr(PFzR;Yv6EBfuU*(HzUiqHE!YN8%ix)@ZM8HmUVr7t1ba%#kk0jUl>>ZXutTv;Feg`B z01-(;n82&aWC$(6-P>v2$QHIBI^7Up1wQN=Q9Bgp;T{+6K?w9s<}Br-5at?ms&TVmuU$}F;3`Xzi-p;WPzW!iZG;;dqDwh7wBt^xKu$K9974W7=$96N5`c&{ zLA{{ySOYq>nFJb1OVzJJj~Oi!#%`T{e0cI-|Ms(^y#j*RhIRWh>}QjDgggAxZ`v zie(v%g|jJjq8MCt>Xl6jse_<7Rf@y$7IM_=z>y%C1DP|ouWMAML{ac`{sBCzy z{W>b2gO7a`3f8_&S5dko(v|&@Srl%x9&+Wh*AHhadX|OBlvjBVskyl!{bo%~)?Q2@ zRcEz|X3aU4@aa~koEdL^u;-b_N#~@ErhC>4fs|elcT6T5Nxndu?o=28SOQ7ljGY4D z+mv-62GF)Kw~$K$IOtECB%({`(({jF+jDG!^B*0YMCyM1J7feq%#k zErjMb*9si=A%wx2Kp6EBmRuc(jW&niQ6t)O1Ov@xH3{X74I!mZ1{X2T=^9dT=-B}a z{rv0&?p-nwD*7X7n`F9==Z{qyBA4{BVD9i6c%e4t>vng{iMm~}0+)@p&i^{T+~8a~ zf2Y~cO5y!IvkkTT5a-p32D_neIyl7;Q9-1A;CdONlL`dSExIIXV5cLTlmoEnYUHgImo=HMb8|8f=bpjiy`{HzN;GkM4 zZ34Eu-lQs(5KZG&FpA#^sF#o5p^HXS7ZCmTBs~ho!o=Qsad$BLU7J3`Wm7W|D@_R` zLqT{(((>!87we^_u5|pZ+*xLUhA(t}iWd)ot=S9F80{g)niyD7T*6sMah%}D0peZg zcDN9O5*S&0O*BYhs^VeGcoMDMGKM50(Ufk4200ZVqR;VqC9~!R82yJdlgynuPzskS zE14{tCEby-NGy?~sh-+l+=}uKdwh3_5;-Ip~!YC?Kc6BY6)_UYYe%piJSQ{F44U_OGpV*#l``jmf z@LZw}T^+BB0>?>`&wX;dEyAu$PCh;}OuR@Yd`i+Ssvt=45f7vAVC4p}g@9=mY3!VV zQmz>R@i-}mRIgBzo{t2OH?qtDGaBd)Zi7r3GT~*eVGw44j*E+v>NLRpQ5yQZdU3%h z;9<8=v|*hzP#dB96DAn$@n~^s+qIRjfs$WQK$LI`R}X&SfRFYKKZZ?W3)yTq63;Iy zASv_IB$NZYNHVw=Jo7nMcpjk z6x#*d2xgQF>#%#*Sb&9*gT#_+LITKa%65DOXY-Y#fOmqf9lUj76aJ|iYaBA@T)3~t zo=rbNPxCq#q>le+*0%3;7!8KS)I*lu!ZnPo@*3MSsM6$pF1;W3S4%;KOAf-bN6`d0 zPax+?c{Kp=`M_-a&P$``+6X?7jQD7U*6a!1Yo#FUjTTaD8_sEeOyY)5j(Q~}+F8@m z{8OS3^2V0Qk6wQW_|fRoSZHWFI4P2|c+o5i^Ilx3UnW3sY(VEh@R!sCu%HJ+=ugfOMp?G|j|{@+Kaii3 z5=a;1N|QmKqcW;SjMRfB@M&>IeL=C%Om-s1jagHe=s1NPa~#Dovrh(9AuMAH!Z_l7 zDZa}vO?jW#& zR_W9<3%gA#nxqEcRmZKtc_I@mlI0YFOtcj6og=w|M(|L`fKe3s4Xg+`&$KnVm1|G( zVEbYxl5_sfgeZ+({3(s;=$*KM+jO5PP7ZwaNd=O}(lET5YE}HV*j=J=KVVC26e<_$ zfJF!ylDM!)YNh@lOj}gu=FuQ37p~|QK4IHIvNoZP&4Du9ZAZJs5S>JJ@~@e3#dI>K z1`+UK>qPSsDbTyLcF)9c+(W!sWP)x%Bq9yV;x7`jq5?98SOqEbJlp;9XwII5`Iuf_ReqXX}FN+-$g+0+Wgyc4_^cA4-<26>} z^{x?HXzze-Rba5GM4x0ABp@{d2eR&jY+ml%Mp67!Wu-b(;Wy?Qgl!JhETB+^3wn67% zQ|@trePFphhkujth2PK}?snVp^2blBx2rB*YW3wIr92z)wA@4FcmoPSlCY2Z;Oztv zXPxEZ7Z`yP#i@bFDT*&U*&JRq#<)l=#3F%ggnA+wBie-OaqY@kZsHbsRC;isnWAFb z6uA<6#gf35rmeY z^{q*#DZXzQRkmMZZ}OZ{FOB=tWtFW(vxyNK>6K(CL`C&;9Kn#1w*ziIv`qjB56zY% z3xm?7GR&r+2LKlS>rC^{;9?0?`j#iaZul>V4QRH2d~|q=j7})R!xMM0Yb)V2rc0-l z@of4Q6y|bA_>EAvzju|3!VjLdg;1E9a!o8Ltesm%s|?x}h;Xq2_gI_z5ewY@f6J2qC1EcP Q+5i9m07*qoM6N<$f&tDsF8}}l literal 0 HcmV?d00001 diff --git "a/contrib/Overlap-Recovery/inference/\346\265\201\347\250\213\345\233\276.png" "b/contrib/Overlap-Recovery/inference/\346\265\201\347\250\213\345\233\276.png" new file mode 100644 index 0000000000000000000000000000000000000000..b624fb4a3f6f51fd6ac2fd9633811d5e25eca5cd GIT binary patch literal 11469 
zcmchdWl&sAx3;lhfuO-%gS&fh1_@4Z7$ms6hd>|ioNZnIS9F=6`9H3`P&BujoMT!?mO+oW%2`U*r6I z^YowrGfHzQKJz5(rF;HOmOKBtM|7qpD{Ej7+rWL{8alMc^u)jJF}CGI^<6YS53sz* z02w5#9N2$(I~MQXv|9=nZ}8GJNLT=b7p}rdWHLZ3=poU!zQ~;aQ{vwIzfIg?vlaBe zLKsVK`_5&>pZEjjJ7RHhG3U~Ox80Hcd8F@ImN=SFHOJR!m9~rh1-CiVM3m-ui)}*3 zm0a>#g9#Al4dPZD&3>ZjExIuyN1iD4uuMy&y`sg$!_f4(Sn(`72TDS_*R;Xo&5V-J z`IM?6Uyc?X&O|xKsNiV~fVm&6qLXM!aD(LoKN-cTKOyhWc*NeyY0(7NAc{mIxq+LX zog88qka(R_-_}!#*6bej_2xb-tk1?wd0VA-6T}-{Pt_!dHfoI|6MVtJexO zR_q%RGw|&;cNhuk?dXD~)qE6saS>Kocavw(pdtHek8HT$gt zIq8KaiF&ScO9!rXsm$k7G;{&aB>@7ptv2sQ_h&^OY+p%}s;TEbKa+qF_mOWk9Zcv% z6S-^nTN;+8{41gDpz-h_Hsj7D!{?%)c|p}Ac#Bb}#v8rc?8&kiB%$F$kvx9kue;QJ zm6MOnN=v;T!fcGpnv#oj7>iB-gZ^&$Ueoh_HU%wPsZJvgw}%;XIkt1$)L74H8%M`0 z)EwqHKRb`)*3RFMCX|z-eC`QiB$SDwmhGQmuFD=6PT$LP%U3(@|=7!m{SbAWdp|6(-8p&oU*2qfouMzLPrP1 z&Fol3ZFM~j^XK+Aq=O5t(+mgr^pSrp)@yX7Yl2tXnbtJ$8RnWHa+y+wwXFJ1|Lh|y zHLGBm_90!Z8dqAZ1(ICiEe4=NWCxTtSCy313av&M;=whMJcj;$7L4y@xA0fd4=oXTZnq91;-f1AEhYn z2(_9vw`Gn6d6rw$iMozuUQ1RVA#!j2AGyh%k9!C+ zu7}JOcR}lEP$@SlJa!j=SI;4c!g*4&W;a{zx7Mn?5+!zD4fy-J#41_;MU*RIhyZL;F9aYK3k-Ryz&*Y^^gM3QndqE z>OTGiMly>g^grJQCAAqgJz1Q3FF;#0sVGqMf3#deLIK-!3mp7{ExWmFee}6}6WIbM zn&vn_mvxxS|L$2ENesLmeXiTGuOyz^`d<6$*o`oRRgfxgz1v`sCUTGnuu+ZuZ>%_g z)z7b)e_d~;#IpO{fk`u5=PjnMsFIm+6%Fy8er10}RU-D`EKh_wJ7g`^8}jVq<{)GQ ziaj=rKUr!!V4C7fe%BrZ4i92TP4-v7)TcA3C+}gQePy{th?!yEfiyhlW7reJr7TGh zd~9WKs}1tLU0AW*HhlFbA=Y!6ox{E2PlZVkt|U}Fql6_QPxPVudjmB$4G+E$e;-bL z!6L_}L87$v;l{G5o3#tC1XMXsS~nIe>yg)^YQ|ZPVvj6eoue%y4%0-*Qf|*h*oG8E zX!~nzJ&Z*d(tw5R`QaXeI?Ikn?Fgq|hrvseN+S1LB9GTA4;OV*!nx$24;ag*>6cB3 zkn}VWB`=of8>cWgpG_c$T(wW@ux&y{wk;Ol<1iM#>p|IM((68H_mZQdPBhwM$T!3+ zAAi@inBuoy58LUxuLa58XqnmImEABHT+8j3kV`SYVyT7qXoftsl^vS!T8G#j)rxX! z0u!fj+nmU})^#42%zz{}g^RVHJr_`Jn2RQ`S6-0z{#xX+essU}@qvx{n*tIytC8=1 z&CL5*vYFhvraKR1U#-aO>QV1ajxNFDP$iJMds1|tM7K^WH+Ow?w9E}p(;6z3RT+u? 
z<~vxKKmAI+j(Y^b2T|g0Dv}sVe|7xg6J0;9_sGVnLVpuUjR zraHCyCsye~?28n6Z=TS;MJgJn5%tFyfVSiLw5F=e^e3KA$|`+hEL%|oDZ(qa%a<_` z;-DmxbjKgGp|pAwlZQ`p#pA{qHj1sT|wixF={4q(eO{z>*w6MW(6`3j%}NpqeZf{FWSgi5FH4 zLrCn~pt(Ob(DjCpz0E$q9`*CRzd1)OrLT*^3AHj`w}ib|_Oe61>?a!sC@B?{6F{=C zHwBroSFU<^979`{J^z-J+RMMS?*8F~s-PeT#>_({&bgh|fertNte?Pdp5arAB_8&H zu(cX$vR{@q%_JCvPaGQnw{m~r)(9sXjC53=0XTz;by%>eqn0d52Yg~>Ys4`B;C%v_ zXQuLm)zuvtPeud|Q=^R3d?C$2!$uW7&NZV8#1BbeMJ>DN~-QfX;!36|ij{}-5 z6TlcK%*UgzB76GtR{Ey|k2mPAD(x$yM-WZd3-;futcI}QXP=FjFN!=IN3L9txE-bZ za2{l;gW82uId$Bk`Dc0&p7=f(=z8ZNgqt=5VbSD%kQEAB_Mz9xZ7OZz6>2`tEymQ} z+*wSjmdu(z&4)f^gf@2YJLCL+!*GSy+lp0S*+sAtP}jJuenXZ#cod|Ay7T#AZs!T4Cii9-;q0@Ha^$kwR3GaF)z{+UpxzYQqxO~KY0 zT9MxJs|>Hy$vST9uLko}>=ApPMKH-FWN=Hvr>LLy?uShTM~bv?qn51W+Bs<5O_%v{ zMJHWhDG6ev7`xR|jhUr(O(2dsE7J4}gbbp%vmwhLx$OtozE6}_XGM62R@G;ksAuK` zTjvc^`y+xdpo0nzBZm+sJvLMA6EMbXTEEhuFZDYLa^Bh^WAJ!0WG=+>COI@T5dwQ` zrEQUw&%F_ z1N0}{fSGkGzPHJMt}*xVFnyDV*EggNH5Tx$jlYvGnyg#-><7%N*#+wd#X+}AhrVXq zK8tLNTIoLkn?egPphkUd%%hmttM|;f3}dh4P=!%}_;iMXA9G_sy?u9s%a#B@AAhQ} zl=?%B+`4Us$7rq>A~9~0_WQNUQtn^eJ!(tE)C?2Sg6kL;R}o!|Rt|>pr&$8ngU4_Z z?2ce2=Gd)orx@bH-a5#E{iDBU^%KM8)SGRX&P`ylEXH$yOxu@_}?+QlAg|J$mNb(%hD1d!H6T^NshC zBu+zYZ6+rNWylPCX!^QB+!Euch2oqVTmx-wQc}w1?%NoRp!Es_9H4#HQ~lna7W>`* zHE7(c1jv_v?>zd#`WKy)5$bIJkv{opc37lN@ASB{8YF=kBD%8zeho*DC1&fVp|;W^ z{ZL$uZ`{{Wq{yurDjhfb3E608Fqk1s-;B}go!Ht1_&{gnM#Dh&-p^U=aq>O?$#JCL zoeN}fW9(zKf^DK<);CK@ptHb=wGgPi_z@)TMYKs;Ek8OdAv#oZ32DqF$CiB?I}W-G z&)oup`(4f9%>`Zbu#%pxM(VEu$j{B3{S zU2F%hofc{~?^b2p7w<#k3sDd^){D!|Xk~fAG{)=F?38G^@?~{~XNZGP>nuILV2)0z zQ2yqXL*DrS^P5Ym$I(7&Hb31Nw`5hBs;1LO<-3-4u>Fp{%k90U|4;GYAyiHd9=1`> z-&NM$>vQE*-DjZ2&fmDI5s|ZBxOR?zCJoY3%S<@+Kj6nUWU#ikIN<>&zNH4U>2NSC zLM9d|!dcUb%{@(kBs6v}KHp~z8?7?7qd&b7dMhn1@J&$CPD3u6zq}Wr zH?AG;^vMO@DKn#3H*FObmh#{4Mf_$LGUAl5aIJKp13m6LLmxF<%YbO}zS~A7t>ouauT@aRe=lt#4!J zD<1ma;4?sDN(fc3huvJxv_6~z3~na6Z+=@FCDl@Lx#r%te{Vla6kioZN%(EdKuVXCF!0?XVG>Tn3rTlJ9nA+W?CI=(*jP8p+4^U+Q2E2u{ zy{J-j!ox^|C;@uojIdEvi`%`9zriMp5jw`!L1SB+DmXXg55TXlLd@3E30E3UqHSM{ zBrDe*y5h#F$o^v?+F4Nh>Vlf&PnOovdJkvZp%e9A{>8#VFbSzB^+x8~J&_Z8%7#rg zR9I3}XDn>*H0Y{&T zY@;;0Zm~kXWohDJ&mgtxpt3?|P0Igy$B$nSjXy(5p*iVymPOjVWrn}+D_VVmlWy?` z?8`5b^ZHMR$zW zZLleIp=ET6*!z>dgZL?1W$nx$gTLT(GUu3mdqtrj8k=--gSRCp)^?DatoZ5xIx@Q) zDIHfqd_5JP55=a4yCk3vi!#wZc0I)~Bv8Y4No&P2`JsgRlmSHw~p-J!%PH06=YE8TirnV?=P zo@*;@$5t_*)_QhPg)(d3pWm<3C6VaiYs6K7O@4gw*187Z_4nBV|7=L8hOqbd{WC<`f z-@iB5HH3@uuDuu!DTWjYPcq}Gi&iil_IGuH2iDP;r z@aROA7BFP|=C*1NIk=EwxO26B!-QA#+b3}jSzxwh`OsQB2?mKS&*T)leu&@AJl#p< z!u;I0TUg|7yp3uWK)i)SLUar+ZT2_ncSjvD*-|1o*+cT6#C>b{V`Oj8j{KD|AsxbD zS2GTCTCKWrT1q%MldLewxWM}8rm=&2J&4fqlG`*k?j#ThN;b^KM>?IInIBE zP2trG!^Ozac6+g(THQNZLh*dB*!e1Q-SaxkXax~`jrp2MpVG|mh7DjHB!pBUWUza?$xiihot5)^gWHuD=XFajsiO+6YP0VY6%739# z`yGLf(yEe!f;UN|=ttIa^p2tn8MM_U4CdjSTv0hDymj)htAy8K{aiPaO;yO~f zB?`m8v-~!xX*mK0`(bLAI;@9lkyBRG_2egj$2Qff!T^R!unw)oT@0uEqd=SKGt)u3MIMkIPr^#JH>5qeCu}lSJqv-t`G$Ob1=U1`fX{ z%Dxdc$APsmz%ms)A(F=`y)65Y_xB#!%`ECVAy;at?JUlBwUKOhWb^hwoj|^zU>k0M zecAdD^QN|84Hnr-?zw1!*LoLR_|BWmsv#na=G1!&nlfXN$uT|1w2fYb=|UD#mxHM= zHDyAz)~AaqS>8AmH9D|E@pszXnWpRs;vliJf#9``M1v7sqEpOKwZRYPEO)Fw_hk`{ zO_%HDVt?$axKbzE1rL;K2JPoWKJDi%%<6OKlHShtE^0;37|el+Ex)SUWN%f5!+JX- zb7EYT@WISJbzdTgR|XID_H1s-k@dveC{$?7Qtr;~f_XkwjV zgy}XCQxLLDuS<@x7!?GXwI>m8tZu&=G;oSsFTtZhJ%wY7>)A2?<|sl&1L(lBDkcmj z*&@+1F;ZZr0miZCc9oP_a!d{htXwOmn;S{wzcGMD^Jkkp5x4gDoM~KsYqisv+j}BJ zggUI6aieBt`*7a)8s$&B)x}IW%RbLS_+j_w@Wv#dBEp*4b*Eghh)&6cOxJCg=p4T3 z*9Es?y4X{1Z@?Q`NAg;-m`hJwypWc2n6R3kct7O7e&VIAnS}zjeBlr#9UR{VQ#j*r zOwSJ&m_~k64s_~qtp1+M9jvVdr7EBTbY4&mp`s3Q3L`!4i;k0ry$Hb~UswO-)Pqc4 
z%T1RTH7I;%LD_jTSBsW3RiXLYV7ohl!-Tb#Etpj%mM1|Fx~i<1d_W4gE*Tw+(bX>m zljJ$XD<0FKK%{0u_%76?^GO4MY< z4Ot|Rf>ICl?RWX0p==tl1fv=>{PG+W^HD7Z+l0hT%7&ORT<3nLcpSq7G5Nbf*4W&2 zw@L;7il5SWDgqyR8NrRM3F#3WGQ+tT;>?2K+3zN(*s|Ab5a0p5n$l4;TqhW4w{mD( zW$V7NLSjF6d8z6pZoHF)ErYnw9KfH(3xnjdWD{{6KbC5S=Y^iza zj^lA0AQsT)C6RoLXpeVHKHyko>@{7> zgrH?I+*q#Ok&iw~u};4tFSYKZ#*u&nyzmJq`HZ*l7Q3%}-GYn*x)TTYn}amUacyP| zL6Pb&(_TO04GaEAz_3~T@b_s~_V0`_AUxyO9Ovw^loFd38WL={!);MGval?#VSIG1?Kkm{J!flNY0;AH|jP`~PjIhN`AQS|H`y#TMdhXpO^EcqQ|r55jp z--IHL-FZOs$o-o6c$)E$idaTYk=bj$NG_djP2O&)!KF|{VF8ckgKU8}h{scj~|(C~2g99WOI|RJ)kh z^}oc2uHDPxOs>p9&NML+^3~8v^OVOW7ECT4{iS#z6O@A}-fOQ7whN4o@^0~XSTX`c zf9MU42RqxQ{o6eW>hA;izjw<&C5zcBKn)A*;*ykjZM z%#Ap+lJMg%&C#;ne2K)b`0J*~I?ELv<}toFbi3nHpH!gT_2P95MKXsKc1pwOa$aN$ z;_i2SefqrB6cRsAC!m4I1 z|CCMMxJm!kiSLVp$n1}h2p1IsU{n(Cg%1i1-F}QuEbnqC2tbf63Sz1Ll+;bdM1^`> zTtfoc|NDtngT<_ktz%llva#~M*)l!FDPs5Tbkq(l=toGi8S0m{9=J>*L24?mO3fu* zvPoKksC;b2=`51-eA!zgYlCYd`85Y)DH~9=4%e(qq(gIg=I@n2QU0qq?X13e{_vkVc4xFqWe>)~q z`eh~(U3V^;a~v`X?oq;%(knX4HVlp4MbFkUSOY-Hq2}^Qmt9$;X6lAx*JobEBc1Y+ z@_)RiXMuvEZ*iowQXc_5Wn~i;dz)(>J=0jLN*a>1yvpvuNHFJ1jGDgIY4hhd zFc*t_B;QTGbz5xOx^UM(4>HlajBD}rw{iy3tO1V`*=Mh78xPR6a#B5i2Fdn}>y9vS zU7d!pbI~AnKSbo}okYL^3mekgeJOIyK*vkGDdJ2*=M;=yp{?~_{xENyu}-;fF~(T< zFNFhGYGt5fKnK}Tf{96U*XeZQSKHFYcsq1nZ5&)F!>=a57qk8~3UD|8Tlk`!ZtAEF zN%?6LoJqk7IV{X(MFRP7R_QMp!_{(HiO*DFE^x3N1B>$-UfPBc4-s?#!Fh>R%%EOBQ?||zj+Ciq zLb9WJ>6#Lg^MXD!Ib6Emi9AHmaKH60n1(wdfQ%|*LNP%? z)F02aWaP)td-lL8s$3-Yff?r1Nj~M$VMIe!(Ez07F3h{B8ei|Vj(8RZ4AW}#&nkG8y-V0Mn)(z2zp8SIlrB37b-b5m zc;oSbUDirj8apEHSH}eowj5Yp2{|e#TC~Xh_fVC!RLC66#OZr99S5y%vL2xZ2gi}Y z!Ay4=@J*SuTZ9$IvDh@ZotCf5L(LK^^rN>p=3^k2Gq8V!UGysnI6#&Fg~I89ijuFI&S@%+B%zH@ekSOM|zy@)V&aNMrvZ%8m1CZbfKt*(m5~CdEk2s zv^%o#{NhbjV8ZgCs~73N(ao9s@fHIWpicJA(pSTHE%$&6nQSVesSnh}5yAw|la*n9 z4)>9xzRseeOdQ(Yv*2P0>JMos_XLk)e#AsAO$b#Q0^|8ETKaOg3wDK(CHYR;F)Oen zPkqbb~CPfI`I*72K);rKk=Qyx+&bMXWGS_r3q zm1mrqt`n`rMN^Q%$D&byDT}+pjlCJ&(1o}`Sbnbn(W0QHdo##ZZO10Hn{>9gE1d`B#BiVxDwRKV9 zekDTEA=5h~XrWA6BKgCA`i4S6j!${dx&bcCA6%ie)Ho_r(%ODNpzi5tb`V+s;h5NtX;dXs&u9e%jRK|U>~F!478zcJD7=Wz{+$4LyYS-r#5fRe9~rkQgm=;@1^PvXw?C^@ z${m8dfn&qHSk!ohI*RgPr(I&^Sdt}GG zqL$k@_$3{|#Tq+>r{KFQ1Y^W|MKQTP2XDM@2_P^==jmh`F|gU=n`Ue9&@veWmPI>W zVT3YmoH3<*cV+OcrTCUo|2Emv6{e#7_?MqNN2dM0KH+}vf}&@8>;;}`3Wh8-e%`z| z3)UVAKO^hH*2OVS>swe(*!a~<>l2Kag16MDS@M^C!0VhZ*D<0NA`gn>9P5iE>$>bk z)Po}zZ$}p&A^W$iM^`fx_@^>%2PZ(OZ|i+7kghw$L>VmmqhTGh@^hnlIA!;^CLk+< zAKC54#qSBq@xud~zR<*m7yn6Eyu%oUh2!yQg(3%0*61Y(P}4O%0r ziD8IeP`wt96jKYQ{irVAMK(MiEoovyuAi2S!3^wVWt`R)9Fz|TTGf*ahl1|t2`5yx z>rxKG-Mee@us=PLhekF(#-f%CS6ga9ms_0?zF(x{c|dpE;s{k4SMfKiqHk>rP>WduQENd@etFRnAYGYQy*P^JS=+ z^>P#=4(_?9YnVRa8G~VtuZogQE9Nsc)YgZHy`^4v%{bU+!Yb|NtZ4S;(*VWu3Hv?9yKT6> z-z7R8-6>4751sMzyf}}17|`s0NqSUazT1F9^xKSU_30yBO0Iz|V!CgAytfmM?I2B@2Sy6X9e6?f zqIPRNtodTvAMZ{t zm)us)|IU=XhaQ?TeGndj?7X}wO?ITKc#bqgbOr7kFLcL4(Xy&Fr3D0p=zJq7Q=$+U zFZB$AX&?n`w3fI-4pWbBmTGMlTp^#-VSEm3PlOI3+{Au)=bMzqFV^@RO;}MSEUz@; zp43N%TegZl0S_Ah%b)-Zf*i|3)A3S`hZiPKW%Exi;F|_l&P^WAlYkM?03r2jZoB0n zWD|4m_5q0Z_Aj@&A>h3Xr5z%!Rrtm;{L4z<4V!*<4EoS6NAT3u__kN*%yU7!P449 zic(17Cic5K)~U7bU)eXBerl*g2=O#y;P-@PJ^rW`>~u=_k-43f_78f4W=x%!{7=zt zwjTX&y##^jo7(;<4&J*U9v0YT(*t#@$C{iTsNFuwNTMjfQl)_$&2x`rk(Up(mV{Kf z@8l##z?=FJLr)FrUsV3zx+O+p94-<*Y&s=*cskEELi;-#FmArm;PUsarmn+U5N0g?WQWVuc?|W5z@H_zcBbtzrmGQ%x{3{!*ju6kKnGIc(Uc^WS3vmdatO)s~nP{vi zO(93DW%kwr$(CZQHi3j+2g)j_r~+HO+257A{0$Xh-VW*DnQ8y5fp`|sp4>H z1dP)`Rdk{Sq^TluK^N6cA5lShZuG0ROA=W3G4*=tDVO_;*ZGLwd^$Ii8#F)+Ri1cE zj1_nk`B<`87(^pda*SpkRRBUv0CYA0BR-$nP+T+=R0HCyH{G2bn7FQvbF}?)^;17O 
zO(0_e5-2i=Lp5PyCh(a7$OBxTcnUOtayeUmCV7wAQV4-y@CJLiX%*nX??~$cIM9Pme({S0 z9`j$rnOvh|`=y9RhZ^5PaVf`BP43J-^G~0*$tbkhb_QTzXoaz{X$hDo*Dq@-4V`3y z+r)FQk)a#d1_?w!LU{)%S%uND%bUgFU)pgqnEOIjB>Q0Df>)6C3>@7=+NAQc$YG9z z;LM@8gi}UhLp2chhWa-OFjJ@0P8Tgw>6_VP#|^9Rk3Ez^_1o*h8*4aPqrP0j5~}w` z7Gar;PzlR~Y!d0C;sZ4Bj7BzUG8j^)GJ@db1_NSM&}rkhLG~;PIYH6S*im}2N6etbdjT09YJ~eHwJG48=yi01;-nVXF~Sc zFwtfpCG`ZF8|mdzO~flN9GMN>B6RE_8l>&~Vyfuz^C2lpk_hCJ4@8Hk_6K2^JdKM} zZg;2!ICcUA`aqZ{D-Ebd!HSmhI`E&%Y@i`{2*FPUpm0^P0`Z}v&>4M(13+#lM1&H! zSLg-isK8znzFZZmeYgTxM6G*@#L+!K91);ES8sI7PRQ`@(Gl^5V^0XSJQbc9wrnW> zJo?uW%=#_Y7n19gCc=A|Si)(b1V;o8Di|@$p|kh-I@pO8jPI~U82-`V?jJc{?9nijelg0 zxKz1axqTH|524LRpDg+^sWQK#m6{(ktlY#Cnm7*mj3CpPQ<$3r9eiONK0bt1KHC#c z!fw05?qWY7KRrFM`z?q1g6!0P*ZI8Mc3o26WdnXXb*6q{vg}a`pCABhleXwfEkle6 zKiW2;;0lX9xYXG}~;1=w=Ls$VC zyF?DdQL21^hLMMxYog3l;OgZMFmc`xvpSWtokC6<)vy95`p zn8~P(36WA%jSw60dIHsf`T?aA3TJ#Xk$VC{365v1j}T)@*ja&C1sWI1gTSfWN-6I&Y+&h<(B332ZYWXa@H1pbR7ABrEAy;9Bg4l{AhY$z+vJ%EFUZR8v%8$q^2U>Du=E$Z9>B#Jew@BGRHh>Z1 zMTk$6FC_U&Bv9g0#Hq+_2`|ZV33bU_6ZsN;kCL01J27x((vV{$O^kYvj*O{~$&clK z&-x}Eg+Z1?TrgQeQ+`pvqCltIL788vrD9!$$eNZVsVkr>|D!OkY^%ISQCXQ;`A!9= zj$A3bu2t_cV2!f{p(UQhHVZFSI9nolE^W?jF3ZB*!v4gi18p-uPiCf=zF2)uevV+C zzk+!Y?|1td$b;wE;aSU>`fq(J1r{3C73^eI7?yDs1WTQzlerQLElVdB&n#!w3D$V4 zt@(=uBa7JisKxO)t*RFB^R6Xk+oaZ1w}e)KP6_YM$M2VN5W@`S4wyc}Lc<8dmSX7$ ziX&R#)yb90RZa5cs^#kH+RLKsdhTI%?N^MOZsfzwMoyWllZTVRli8E<12JI@;n3l< zSZd4{XZ25B0}t;HAzox&Zx41SzVqg%kf-LSb&GWi{ssX!D_Ak?8FuxnJI#VK4!OHq z11|&8VFb8-%Sf!v%rR^^rfr)yNrw4}4T=$ogBHPa-;42$%Nlwe%Wkpv_lGyh;ZXHa z6UVTVQORdCJ80Obvs?0h_n)z{vcR*ZX4)~^vVPc@*`k}PS+s6n1akh~7~q-miS=yq ztP71C;)q^F|BdcW`#lXm4J+-OW=5;$XJe~Wy_Sip`NO2vOqR8-fv&CZc9`A1IrCK6 z{80z(+|mHsgj@0@8u#M#Z@PinjjKjpt7|j*#%+^rwjhRn=MWE(!w78z7RXlN9^n?@ zaFPYajfUBV>Dxnz=)^G+^*A2xKiSFE%B>3s<=(~3;<0$D994#k*5=&iJg|CP&%{%Rh z!UM{4?8)ZkckgsJe#dq4kp45+J?#nq+T6Hi1Zw00bK%tZXy9)3XyIz&kL>N{+4`0F z*~ne>z4w9r75&Zj=~s&F#I(t^X)l}uOacThw8zZt-sCO|xKSWcAX(rK!8JigL7ZTG zh-~n3NOj1ku(q&C7)uzkkg(8Nxb7j=y+7@$8(R%m#>X+X(8ZxcA+~Vd-|R5x;O#NP z<8Zj`9@^fJ(~$QfIwX+BVs-Fd2*`h57C7a1`@vj;QT^Z+>geI>VORZQwS7`}J^CA4XhU^dg$eANsDMTcbxq+tV`YJ(o80Y&5sG zL{^+`qw(j5h=Nn$Vr6s9{e;rzJS>0wtLMke$HaVE3!S@mk0-cesK?M$svos`-Mp4+ zeU+3?Tztyx<(`+g1X0|^~#NXqZdOhBiS{4k(C_Eqv-pKB1$#_7U|dp~+!uN7Mu{$t!Rl15OD zPtUdGU$i5lpIlOTZ1Orbee`|mNscV9V~eS$^ucB2)Y&{}CWmW+&wDRtrhWKM?>JuX zrcJ?p?$YRzY*|{|&;8>=qFUSf()4NKL7k_Lr8(A;+wb@F*_d_U^0dyeUatqqp4j?S zTlPw@1SQo^(6FsuD*XNpU*_4v(Y1d9eypB zZVhhLs?V!`)TQ^#`8S>am{>pRsptv6cG~jw)Bhv*HE2cTrIr zumULc3uE2_SRXK@+8q)5w-;o-C4z!tam4XHU0KJv6b#}nNE`T+V(w~+DM;a{KhrP5 z8^YFv=B~iP=l;NFhTd`JqZFTj^MsA5hNPLSED#kS4Fv=Sj12@1NC5*bK46^xKP?7K z2?YAja}Xe)a7!Stf6K@L?tecCfa~vn{&NQ@0QqkT;N}9*|E7W5{uUMRa3TZTpdBPM zoq>SR0cmtHU`Zv?8z3P1?~)>dDjvX>x{y9-qNoH=fk<8_-A`NIstVt!C<&!rN+L9X zM8VNYC_(ZA=2fC12w#Dfz)GI`)+Jfw?9dEN}4@~cCN2)uD|Iw zHZgOPDUSCa;wL5$5<)?N1O*TxjUY?lj*QR`V1R)E3ko11fqet%pFO}31f&2D2=xdF z;b`@I#5|$@`4KFTR1B)>cZCpeBxsB%{tO}>myEewKAEL?n4|;aeqkLLMr_CnWn0GW zta?ziSig@9Aice|l^R*-dN!Mu1Oc&TJ=Q`lmsN$C)yynR{=b6XbS6DcsDFu)81j(= z()PC3SK`*2&3Jc8_l}mE5Y<>;Jg!iE6u%H|j?-&gW>qFCFL*FqL?il%5 zd#gtd+HB)1C`tdP?R~w-NX1BBpL|SQ|7kmLHWZX_6Lg9S>VNurV-F_dZ6O?XU*0F6 zoH#N!lfjwJR=X`TKMj}szseQ=*RKo(2gcr(Zfex0@s#hy>S}Sn#*(YcCAgVaf}*hy zeaYG^NGHYr*Fs~VUqkOpkF_#lL`0{;7cEg0Xm01|n?1hafS$1ZuiXEoC*)yfb&-oG z|Fy|GA}}M&ZLd8s$-j?ofPoDy_yWjf|1}z*fT6|bldOpdz@~pq%LlOA6y$74Tf5(FVZ+@FS5gGcgTM$A$2Lv#%5C|lb|6Ac_I3WM^ zM$Sb1?_d%jA%z`*TFi_5TjAH=y#Ka{80mjJYX=nINIpQaS^xMMph5yb{{1?8G~(Yj z`v?eNW+4_xqW)XqmoOorWmwIJ4m`E=63u;obnMB}4~7n>!>VekxJF#gCgp1N?1cv$ z_LD!Q6LEhkxW;nX*}L!O?@}lU1EzEM-b6OZrBY5J*UpEBq`#F)rBcn#k&K+|4up5C 
z95UK=H0vA#^Lp&G=h?+Dx}ATbm&)hI$>qY7lF#sqkHqH{yggl?Q?Lvaqmd zX$*SB-&AOo0-s&Ymot}P+*`S3(@B|p7q% z1TbfW7nn>Abdp*e?yBX&Fs+>Y>q)Vl%g&BrhvPnZ%h!yx+>_#LgD=j5ao5Y`N@TO& zrI>Nzg6A>&x-yhBV;-cL89wYYIQxmx{RLF;APA~SlnGQIAt z@~=;Ku+Y%Z_KKG_C>hu<6bd<+a=Ebi5Q4JVbzLq>N!zPySbX(+2elS0(;*9VI<2EZ zwZ#n>iKGy#o4c#6ZfSPAEvGXU-8Gh~3m9es>Qcin|8xxAEhZKdN%>#fCXmcN?kaH@ z;F(Nj4({1;`5cnoPZ!SKy^6)=60|fMOE;gxVY1p5lNN>+TO<^;V1c8^42;cAT}pMj z)mhSEY5Aqa@Y${Ng~8%bGUF2M8OUsQ&u~99TGAdb`7OFcAs-2tbUQmJ2dNEZc+~53 z-Cea)1<~9xRr517vF}C3f|EPD;av~S^Wlr|dAG`*I(j6xu0 z7h6Rq%v9>+WL42~h&j7AE<8I1scoviSy>(I86Da8Hift)a=BVe%qt;cvpidOjKPA+ zgN-pDGaJl?J!DrH{PIyL@=$y={v_5$vDd|CzhR-1i~2?@(_*de?nOhYuh8jMM|EUW z6W}U#N_xX{h#%V%`T9IotX0^aU)tfCsH%>pv) z_HedX&}b@!6)kfdomP7@I*U%PHw}w9gB>`IHlgVG*_p}g4xGz}+RpXf{}NrsI2sCA zszmTW$iv(sREcP_dJ9UB_eti>CFH+*(nGw7n!m(9_P+pDh&f~Lr0gy#NMUZ56H42y zw%>pgOL2HBxjsvq%L|k1^=Csdtx6!7RBAyqPhlR9^?(y4*hmPCn z(NwmS)ayYqehP&=9i(uu)%~`=y)s|k#&r{z#~$Wm&4zT4TYmZZ56y19T(_%xf-3Ew zc%+G!{?|qKSWL4o&c~{monL>@M#Wz|wjGBjw`I>} zX*8Lsf7f@qDR66@j?V43Vr8}1s_2`~B*rS;ToYB;WG0AL@ zaF>+Vse+rS-1c+}!423wzQ+i;2)xDP-j6=mukvy_sKaopHu?It|kHpPPW&7=4L!q|IjlM z&hS4yuN(0=>o4~WDxNO{1)@5hnX$FBaeHk73xh(g*;LY)i~lwM8?TmDf50xn z^}~*?aq#DAlRB^4#FDMcRgU-hz)~G+xpfUw4!eJYWrlb>9=onKy@7A!(C|TiIdX~O ztTQ~0LU8#x74-GvCHE5fJ%f?_A3Az1olSSEoiak`>(WxW+_h6Wa`S|0Z^d0se>Y9?$iS#;-SfX0A6gurFuG>5osIyc4mnYG` z0Raz*fssW;N0NGQIt^<7(hvW+T@rCTUQ1X(U59QJ$5N+_8S33pH~|-&b4J#>z3sD& zEv=fz*n0PaI2a5niRU?%i){N%CYFoHR5RA2bbSQXE2s`x7rf|pO!h^M1g`?}hY~8= zS0Q1w><{T`PyKm1!y@}JlnE>~SnG+&W}Ae$2&^?bvl4DiQ>@(P?9AA5-ex}E9tW>3 zc5jRIRYFKG_PBsS;QOaI-sTm9`=nG1{#jC;hWAe{UKVT^z+xqwTrO9xTjLc2(=iUg z!T<+ic&+ibW5nWpgR?=a1wuH;eE~ozIb@~hT`0qdYCHQQ@uakPnw{5whR;I?NO%u? zo7rr+aH4hc20-lE1gFxIuyke+JLb*I%I;{DV7MfPt(vz8o(w9DDW#kB(>X##K1rE){VS>Oic_>F;Vesp;n{A{Nl#RBl z&1bGsLN~t&3C&ZDy&hLGtE~5n!YD4;Xv%z;jHf8-s+%vvRv&aE(y|cEy`rW@uBdMe zRB4kOPN(WJI(OE>54XyD#eFDLgu7pC$SV8gO@Aiso_XrOg}#1u3O8T z1PgqQ2#74~eozxH=Z$Lpp^f4v0163TuJ89Do0+^0K5%v5^!3S`+^dbv76Byq!{k1A zVJ0*`wPVKYw)X1WYk0p5K07mW1F(s>7UC0hN$WR2`?}0~u>F)D{1R|IS1Aty|FiD5 zftSXjR0g;0-M9|r?XMvqu+}}~ZBg#2Bs=4$BAs4mG?0~Cm5t3>#X-aVEv;>QQ$y7|io}%eR~@52jggln+!m*d>eE9O2bkyY1U?rLW5d z&gYc7N(x3Ge7X5ivp3K#~-fNP@1ipX;apHc;A%DGt#XJKM1B zzP@_Bw&k{XM@+5+eVaEmb`N&G@W9m#nevfk$f##(77Tt>LUB|1Ie&8Pr=X0sd2k5lZI9_w7+>PTNc-f8S$}SZ&UB^-uafS7Wj^@|#268LxqUcguy=P8?N( zCAY+vo#y#3|0%jsnarTl*)EruCzrys1u$tHiRSWrhNPIQne}~JHkPd6=e#g_gSupChIE-JkO^>$$1W3ws__Ms&5UJ#UkSQo; z1BTjZ*3TDC5Iy|Vvgyg^(MGy42W(8Th~I|c+BJMZtA>fng;sU8#}^CMUbkY~dB{(v z@zzhTc<^`C!M+c5+{W^DK3wCfCHNc0l`P|<#X6o0SsWaZ0_yg6<4`o1v{bQvfgOnSs0rKSv0VKE(>u^6vPzNV zLTGi^=5IkxKLUXW5mYSQZl@{3+|w$%%=z6<^P91S5K+aC^7C0uGx9f&!aYPVSpj)` zX=gT(sxKGYv+1~ZUJv<6O4Zi^4pX=w0Ux10GGXiIjfHRHZ}h88rlsID{Z>oOjJd}^ zKegdda7!TE(PI@P*Gw%c7fO^otdy-YsExNrwAY3jut;G0tuIjh47lAdcPj|WV}`~v zUmQZBe!Gx_(f7>?5Mc>*8Ck4ozwdnfJVlrG=fA<%6JvV@0;L{{PH}#RFPG02{jyw- zYP|g!8{_wUYaXZCzGmBU)-7nYFTz-ZD&!Un z-9R%lQr~kvuD)R zOe{YJD7$Fi06JWI|sEZ046VMc1P2ID1bOKETArflFfc~cyhO)%5tv1R3$n$PrLV(>EW877Cm@;ZgUU} zZ9!T#d!?UIHiV>B&oh1};*a13)eW+EjzlUI8axi4mSp(Sbf1F0%~A$7k7{_UQ&%@0 zZt5W2?^<19Ro$O<#g&p=X>hwqJ5&+1l}6um1Z0mSvjRF0Qv!P<$uVd8h-Y=V+nBzY zHu)$VO=6`wze~l*fh>n9KL-W`7yh!k&Hc%(h=fBtX%sn`!3H{aN%6_?+aw@xY)%)7yaM-< ziG^11B*$9g*S`5pg1v=6uiic1$OcaT(ZkvWZoTcZl4?D~jGOR*7gN(BnQYl!a45eO80v9J$Uu>(WI`#9Y>MpOO&Z<6n4n4*67^q*T@_B z7N%q0D-El(UlYk5QIO^UKm20Y$K_6^Yb3=47p{N@TDaEyZf8Ho+-Wmb(nN-ZXK$pNt_?A#;&*O571IWdo(X+dxLQubx^i&o{#5A zLF@%&?b`Zx>0EMF*MkTws3lm{0}x>is!jZi{*aGYIonjUqqbXqPKG7d3)}{zzz_&! 
zU(DW={qAKxTnDfI!F)?vW1MB^v%9NJ@2X{_V6Z4&~B3V0Q1`6g^?TKTM%FRZbbNgrrt zm2}V^Vvsj6>1p=CXqur(qote|l?EyB)H5>O4wrFG^lRYCIXgNg`Rfn0^hdDXp3?<0 z)C`P%)?c@nw+Iasktdf0Kes;;*cvQf&u#jE&jk)dgUJqv(wu7Mcs6(usTTKa7!+%D zz9YP->VWIIrS=c$dUqby!`b#B3^Cxt>^xykSbDm+0jv84>~0u*wbJG4mlx{$$~kk4 ztoMUQ>wY4%S;d1ZLm~vlb}zBku;IUbi0FZR1B6QMid1(wG0XJ0VKCqN8i@J7x_(yT z493U6lE%=``H*21^AqhDSYUo*LSHAVh)Ot~KodN~)Fkz6u2!k+va&O!>yYP|P^;C+ z`*S>195pYZA+iT@v>x&5c+zLMZbdC~Q<`aMyWKF+yd5-FKXCrz6fS&9>t@6#)eB6` z*`P>b_;mbiIyt2T!ulBz)o(818wK|&w)0}dwdz(wp)h~B3i6ylfgacJo0+ty{V5?c zt%=cAvmtEDZq33>6YvCFTw{EfV?BtAxaTS~o?+piCek!^d;FnyMfR9jMuVds%RBux zm2Wx38Q4yk4C)~Xnr@V66vs6qRI{(R&zLe;RvU~pJ)&Ffqflm-1pZ6{mF?P%UMOZy zuo^?1PD57lg}-(*Vk6ih#-MwZCx_Z~|ifd_r`5bugYY{D#%3XB!Nfkxv zO_U3D0N|lO&fg$D@7}hLfvJHJqy%BsRhkOL`fIe;HPyFf=K!vf4u>04*eLj{$sn*R zgz~E(xh5{BpZe;dD?c;`z)nC*!p>p#K|MBd-D4ip3yTHpkH~dA7*}aFWhI3yiI~iW zbyR_i+lKE`KbG{ENlWI6!!Ef9+JJ1Axx$j=3RVSlpiE-_3MHB%5{1Q`AtAAVN+y@C zDv5-q={`wi?~nNP{e_U(^v4>@W2nW8G>0}D4L@~bBGBU^4;G|~Y5p&bKDf}d2F8%=tHb?NHet>Ci3RRLi$sOk#+CjRwii#EM)Wt;>7 zv)Q708B;p|!4GVU7f4}Z$%yrTJPn6IlK{uj=)VyuU-1A=#Oot0@WXw6DYvpmNS;kB z96(@6uan_=~r!-Ma>Bda!xoq;M9la=_e#!e8hF^;-Qy@H?k8H)Oo zD;7EyLt!|WHN{S&ojDH(KZ6k1kloHmp{m@Y;HO>47q*3gLkiTq(9Z>?M>8{f*Cl$^9FjXs@ee<<3HLFP_n-Z{^Z>hPs zqV+eJ3)_QZ@6kH&Cg*AugA3t&lC_Y$5GsesUhCyLY6vqO&-0GFxRv<+wz#IX^9{`> ztX#vXN;1;>a8>@t@|c+6=bXhFwNCf+Ce0=<)QppZgakUIDbg~il@6TZpT?BGc8e1n zH*$fYx3Dw|5>aPr?eohSRSrUdMlrRflzru&>Mj1TIYpI!=H27_9m6<8G{|PRqQGVT z)BZ;!hlyDudx<|L|6=BXecoWih?&>E`SU`6Q@}sSdW0hY${*r5_#^P2LV+g$pk2$Q zfXWxNWP}S~6BrTNc@+LrCm|OA+egQ% zkjnp;3}B24U}(k+*TJX!mqo$@Fe}jrDy8!Oz7kWMP|BEd`1!>6C2+OTT=iEReNJaV z5ZuTghlAUHDj~H1w37GN@%5XgywKt+UrrPJY6dFP;r=4)Z%zki^y$_Bn~h_RbUYA@ zI=%McZ^+xIViD>uIs>mMF->J8TSpt)UpFe7=iU*W8a+OifOwHDCOGu5$a%qdLrhBSIhWf-8EtS_1=kq+}}ANh75pNja*0_Zw#BvrelMP zrqN|o>^OgFQfX9m7?}^2 zv+ws+Qt>{pmltIsdw8J|W;dJH{#O3?t`Q2knhbSd%xvRJJ^H z=J7I*mJW^Xh)R`rkz-Ed-CEOi)?MOMtb06|qb5YxTyQXU!bj29<@J#D(AqKVCsnoh z{r5|0bh@fseFb*;1f&=;rsosSkmVKIF1;?~^^jP5vB%el3Bd|JHxkiKe7 zr_w@&n9bhhUfrMKU!p7dq;ttQSF0FU+%2Tq36Z=3kfc_3`PpoB&0~WS z0GHB=u9#To%|X#2H~cMkhfc3FaD1k0Bu7|zip*QYk4{qxh=?+ozMGGx1im&Ig3oj; z`u8DmIXzd{6~#rOQk3QKcA`ES7f@jX0vFZKl6@B5!RY9{9LxnY^hB&s1HqeV?%WEC z#Qw5!BFmF!0&;FLDbZ3oT`q;pc`{2T^C>Yg3u@otHq)u3%rM_OTH810PQYk?dOZuk z$e`5rtoOT$Ic!#tV%iZ#yF~36NR7)(NC6;92Z%p6+rbQGueDIo`OE9wQUEl#=*1aB z>97h-S2cR65vPlC_8TVVCq5Sr+m;|II;6p#VycDe6SeqX%o_VIW<8o%%;NtGMSNz4 z_wAd!1aPz42l+IL&GfWSy!yKsjEN;kvQ1A{JDxY%C~hv1uvZm;SXnC`O=E=xaAV_h z7ns_=WIrC*t(K3_&R(Z+?#IeODPzb`WM#8i6(=w5Pno4Iizo6;ayk(8<3^RbZHBDG z7r0ugP&3*nq=u3f)Ajwt03nLn{Mqo7KT{feg~ptal{^Gh`&drsD%!%I#k$^&_3;=K zj>VnsQqHF1>@d{^CC|{A%}#gA<7wV9@7KMV!Jyt3v7l|6ku>1o+O>a=8)x#S8gN<la+#KO#O#cL&6(Z+qt|Y0~1$MUe?$R`k7GZ_3@ar9G2w!xi&|hCjTX%6J~3-+a#6C-e>NgE0xPCx5YKs z`)$54 zUS%^YiZd6ASM;X_Z}`)%Jj3lCUdu^$eh$g!SZsdbcBi9)ZOt`KjN49Ry#2dHb3eCc zyQSvHsy_pR-{rjf+Zv4D{^s4OogGn?*VrOO@0d&VXe{^IsV1c8)>7Md#@ zz+kD96x~~Q&uW5WL6iUz-z8MGBi1DaUdZndx@gLp)`OU4&`-!*_Keqfv&|{)Lm25j z!>%p!zE%f%gb4M`T9g&*caP%c&#OA&kye#XGgdA%udJ{pP`*c5K?4{1?)M<3U)K{= zqj}x0^Fy%di`|o&Op3}|`9Ydr*Um+3*b!aZmCi|62f0Y46sDC>`L?atWr`sdb+f=G$ z;<9e=Y*^zxYH;Y0BwKDMh>;ErK}0A~AQrj!ESF#=z%USRosx;c)WVSjVL|+6WoZ_6 z4mmk!E|Wh+el@hXWreCHdHyl7tY=$`&hLCoUqsmIG#FoI(3$}z*dadvfmu+`&_=$Y zeyn&)Q1V0G%T>K<>u@qh=LI#*AbN?Nt3`QUgA#}AV^UFMsj2i*wl#uAqYVfiYhE8K z;^ek3>VBPZzsM!0!~UO*m=~-lq)^u&*?P7(JU->Fs#`qU+K>PtEv{hQ-A=hQ>hR<{ z8)g8rjZUdGa*ykGB{1$v%?Iw%ohOL)yMcv=nvp(<7{!{0;(adqYhJQF{{$TbJeM?u zzQM0mcL7EgTgeE=6Y$w{TWw@N9pnx89z151%~yKuVW9^Mv=C8^o^_P3)mekT?EVe&^)=_`uO@jy^_*hwK zRL#z{okV$u3_=BqmP(zuZJRBkoUhmOrQK`n>usLYw3>}cuRgE9EtZ=?nyI2@hL-L* 
zdBaF}PdnP+LFdBQkD|_K#MHvCNUGxWFpJ<{<)z9TiQ3Q}g0?tVXta7E@YWiOhC#Ps zxgJr850+i|#QkHqVaSM{jQdge8;I#OJJn^UxaJfN$5%i{@qNE;ba{r9xK7yK&NVA} z=sgUGDlG|^b61;q#ONEzwm4{Up9ZDZa_eC3@BwjdN0oOAtTE(@U>;r zXm==WW;q;SqU}1hl|ua) z1S(akd15zz1|I5F${9BM5#pXZ;HeROn{5sqi4AsExeEuzIA zLVu(BJE)LwM~AzOCS;uVo2Fmnz1;oyb1~a~>Ob{&Jv?sc8J;0L=Xw5$B!3fFUOGVlnt>vEn=r_We9nZAvwx zzPignV`RxmY#!Atb^P!=z-t+xb>#?mcWH@CYL|mrKcv+=fP1;Mwo$A5OcwJiItxDfH@5NYgSI zSV+hyU`CL@@hEFP2n^aCD$r=hwX+c*HeUWq4X3W3q%9Fnq8wyDR#k8k z(d(&3@TJ+};?)E0Oa(8&m~CZOW<1iLrVM1+Y=>>QCjI!b>d)||;OrR6>jiUqU*XT@ zl-Lz9yRgcyw%3ixr+ES_Zp7w?;eKebDG!6ZZVk(7Tf&Ni0TY2p$jTKbg{~u>d^IpF zgum8NN^2&dA>PQ+k$y_H{0NvTSusz6laVyY49$M1$>_^u{2yH|Kozmc0M3efWg zTk^!vyIrgd%&81J`PI$T$Qm7DBlAHZV9V9(Ob*0=?k|QYDBZ0V;tKjc@Rj#qXp7U- z=cOP)0cin;yituK+aeI~6`_-f?I%+BhBg0HXkH^rq%=BkE_c3cJ(*eoC!b$j=tQSd z(+tx&Kp5*k(#Yfv-@;btHXV*mE2+bj<~f;?8u^M=Of)jajmG$fP{`TR82va#T!!k0 zlVpofjep+4px1Gcxc_9}!wUYpXxf7n#TE|PuN++dgC!t=y~6{?Q;GZITqMVo>-XyM zv4%HH;K$K_|F6n#8;TS&Os!tPFTW?m&F8VQf#qL`A|T>Ii5X`k(AInvI$nk5sAA@Q zV?wYb`Bxo?6$myk%QHae{x9aR#|46qSH1r_G4}lL6BGmhOo1}}GmRbXFR($VNAg#S z>aXX?`47nO|J2V0Qa74E#L!?Qa1Gu<{}0iu%>Trte?_x|UL`OHcw8uay()DVn_Vs( zUhARpSGO661j9TYaaCTNa+Xh+!uy%SLmFmJ8-ud|<%6^&DClKtX=HL)l|BbRhWmfi zc*5Ss(+H=JvToME{2LmcU+O2-X_sG(yKqr z92?y*dDR|S_(Q8Z4h3>0eO5?#rs5c&aozZz#&x7EY~qadoa87H3gRRWu?pki=&9ats^WiyxPAJ_ z5h5}V%Z4LOj?lu<<^(Ew;(sG29AHDex!C%;k8I4f1fq}c46`Z+4cl1E$MeGv*^+CS zhA97&42u!a@!WP~c$-gVVSKuCx83@^u&>NdtBef|*=JC&*$~;U1E&Udt^?dX5AU8~Q8fM?n_vN=A}S!)*xJ&|;D+^jxU`d9F~ zB>h+LyH%QtW%57uu5rpfiecoj)%Yy<#IJE z0^k$2GikJcriu|JFl>4Bbot0manT(siq-bx;YLVQLf6l~GoBEHu}{%x2bmEngJ=A& z=ob{zi3%`D8Dj-l9Mdh%*vjS-k3IjYf3I(n*8c9iWKbf=BauaRMF*)R$r|DSaIwZ8 z1S^0rK>^7_=;|(H3KeD?`h_>NknEt2{+bR-gd2p-YlLTQv z9KnGJ+7c0E_h7J@rZYqV6;1BZ%A Date: Mon, 12 Dec 2022 17:35:52 +0800 Subject: [PATCH 03/51] add train directory --- contrib/Overlap-Recovery/README.md | 4 ++-- contrib/Overlap-Recovery/train/.gitkeep | 0 2 files changed, 2 insertions(+), 2 deletions(-) create mode 100644 contrib/Overlap-Recovery/train/.gitkeep diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index 7ce21175c..b64aff773 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -78,7 +78,7 @@ TODO 实现流程图如下图所示: -![image-20221201214655261](./流程图.png) +![image-20221201214655261](./inference/流程图.png) @@ -257,4 +257,4 @@ python eval.py 模型在测试集上的精度达标,最终模型的的acc为80.%,满足精度要求(acc≥80%)。 -![image-20221202155839483](./测试结果.png) \ No newline at end of file +![image-20221202155839483](./inference/测试结果.png) \ No newline at end of file diff --git a/contrib/Overlap-Recovery/train/.gitkeep b/contrib/Overlap-Recovery/train/.gitkeep new file mode 100644 index 000000000..e69de29bb -- Gitee From 3643ba9318722e5a3e749d1864573dee9b5f0d71 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Mon, 12 Dec 2022 17:52:04 +0800 Subject: [PATCH 04/51] modified filename --- contrib/Overlap-Recovery/inference/eval.py | 13 ++++--------- contrib/Overlap-Recovery/inference/ominfer.py | 3 +-- 2 files changed, 5 insertions(+), 11 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/eval.py b/contrib/Overlap-Recovery/inference/eval.py index a64c31c47..de90277cb 100644 --- a/contrib/Overlap-Recovery/inference/eval.py +++ b/contrib/Overlap-Recovery/inference/eval.py @@ -75,21 +75,16 @@ def segm2result(mask_preds, cls_scores): if __name__ == '__main__': # dataset - ann_file = './dataset2/annotation.json' - img_prefix= './dataset2' - seg_mask_prefix = './dataset2' + ann_file = './dataset/annotation.json' + img_prefix= 
+    seg_mask_prefix = './dataset'
     dataset = OverlapDataset(ann_file, img_prefix, seg_mask_prefix)
     sample_num = dataset.sample_num
     dataset = iter(dataset)
 
     # model
     device_id = 1 # 芯片ID
-    # model_path = "models/best_miou.om" # 模型的路径
-    # model_path = "models/best_miou_pynative_3.om" # 模型的路径
-    # model_path = "models/best_iou_recheck3.om" # 模型的路径
-    model_path = "models/best_iou_recheck_ckpt_test.om" # 模型的路径
-    # model_path = "models/best_miou_graph_mode.om" # 模型的路径
-    # model_path = "models/best_iou.om" # 模型的路径
+    model_path = "models/best_iou.om" # 模型的路径
     model = prepare_model(model_path, device_id)
 
     # inference
diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py
index af5d5f066..e9603af68 100644
--- a/contrib/Overlap-Recovery/inference/ominfer.py
+++ b/contrib/Overlap-Recovery/inference/ominfer.py
@@ -18,8 +18,7 @@ from PIL import Image
 import shutil
 
 device_id = 1 # 芯片ID
-model_path = "models/best_miou_pynative_3.om" # 模型的路径
-# model_path = "models/best_iou.om" # 模型的路径
+model_path = "models/best_iou.om" # 模型的路径
 img_prefix = './'
 img_name = 'test.jpg'
 # img_name = '200.jpg'
--
Gitee

From 05d99c422d6f6e310fa8e7e3975e0122445b0cbc Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 17:59:23 +0800
Subject: [PATCH 05/51] update readme

---
 contrib/Overlap-Recovery/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
index b64aff773..d84366a5b 100644
--- a/contrib/Overlap-Recovery/README.md
+++ b/contrib/Overlap-Recovery/README.md
@@ -210,7 +210,7 @@ TODO
 4. 进入推理服务器执行命令(修改`onnx_model_path`和`output_model_path`参数为自己的路径):
 
    ```
-   atc --model=[air_model_path] --framework=5 --output=[output_model_path] --soc_version=Ascend310 --input_shape="input:1,3,1472,1472"
+   atc --model=[air_model_path] --framework=5 --output=[output_model_path] --soc_version=Ascend310 --input_shape="input:1,3,768,768"
    ```
 
 5. 执行该命令会在当前目录下生成项目需要的模型文件`[output_model].om`。执行后终端输出为:
--
Gitee

From 33ee469ea93e90acf8a0bf6e00be82148a5c72e9 Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 18:03:10 +0800
Subject: [PATCH 06/51] update readme

---
 contrib/Overlap-Recovery/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
index d84366a5b..2814bd8b3 100644
--- a/contrib/Overlap-Recovery/README.md
+++ b/contrib/Overlap-Recovery/README.md
@@ -210,7 +210,7 @@ TODO
 4. 进入推理服务器执行命令(修改`onnx_model_path`和`output_model_path`参数为自己的路径):
 
    ```
-   atc --model=[air_model_path] --framework=5 --output=[output_model_path] --soc_version=Ascend310 --input_shape="input:1,3,768,768"
+   atc --model=[air_model_path] --framework=5 --output=[output_model_path] --soc_version=Ascend310 --input_shape="img:1,3,768,768"
    ```
 
 5. 执行该命令会在当前目录下生成项目需要的模型文件`[output_model].om`。执行后终端输出为:
--
Gitee

From 4f1b0066b87fe6c4cbb45bc15c6bddde92ed9e8b Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 18:08:14 +0800
Subject: [PATCH 07/51] update readme

---
 contrib/Overlap-Recovery/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
index 2814bd8b3..29154b575 100644
--- a/contrib/Overlap-Recovery/README.md
+++ b/contrib/Overlap-Recovery/README.md
@@ -255,6 +255,6 @@ python ominfer.py
 python eval.py
 ```
 
-模型在测试集上的精度达标,最终模型的的acc为80.%,满足精度要求(acc≥80%)。
+模型在测试集上的精度达标,最终模型的的acc为84.2%,满足精度要求(acc≥80%)。
 
 ![image-20221202155839483](./inference/测试结果.png)
\ No newline at end of file
--
Gitee

From 06b327460142a26a9a559425710d3d6076b8cd66 Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 18:16:29 +0800
Subject: [PATCH 08/51] update readme

---
 contrib/Overlap-Recovery/README.md            | 4 ++--
 contrib/Overlap-Recovery/inference/ominfer.py | 3 ---
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
index 29154b575..64d0d69e0 100644
--- a/contrib/Overlap-Recovery/README.md
+++ b/contrib/Overlap-Recovery/README.md
@@ -222,7 +222,7 @@ TODO
 
 表示命令执行成功。
 
-相关模型的下载链接如下:http://xxx.zip。
+相关模型的下载链接如下:[models.zip](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/Overlap-Recovery/models.zip)。
 将模型按照提供的文件夹目录放至即可。
 
 ## 5 模型推理
@@ -247,7 +247,7 @@ python ominfer.py
 
 ## 6 测试精度
 
-**步骤1** 在`Overlap-Recovery/inference/dataset/`路径下准备相同格式的数据集(已提供测试用的数据集,按照文件目录放至即可:http://xxx.zip)
+**步骤1** 在`Overlap-Recovery/inference/dataset/`路径下准备相同格式的数据集(已提供测试用的数据集,按照文件目录放至即可:[dataset.zip](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/Overlap-CRNN/dataset.zip))
 
 **步骤2** 在命令行输入 如下命令运行整个工程:
 
diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py
index e9603af68..ed4c6d470 100644
--- a/contrib/Overlap-Recovery/inference/ominfer.py
+++ b/contrib/Overlap-Recovery/inference/ominfer.py
@@ -21,7 +21,6 @@ device_id = 1 # 芯片ID
 model_path = "models/best_iou.om" # 模型的路径
 img_prefix = './'
 img_name = 'test.jpg'
-# img_name = '200.jpg'
 save_path = './'
 
 base.mx_init() # 全局资源初始化
@@ -51,8 +50,6 @@ def om_infer_one(img_name, img_prefix=None, vis_dir=None, score_thr=0.4):
         outputs[i].to_host()
         n = np.array(outputs[i])
         inputs.append(n)
-        # tensor = BTensor(n) # 后处理需要使用baseTensor类型来构建,文档不全
-        # inputs.append(base.batch([tensor] * 2, keep_dims=True))
 
     # (1, 4, h, w), (1,4) / (1, 4, 1)
     pred_masks, pred_scores = inputs[0], inputs[1]
--
Gitee

From 2d98e174ec407d5ce33162516f1a5d0e74f5adf7 Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 19:24:58 +0800
Subject: [PATCH 09/51] update readme

---
 contrib/Overlap-Recovery/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
index 64d0d69e0..ac23fd9ea 100644
--- a/contrib/Overlap-Recovery/README.md
+++ b/contrib/Overlap-Recovery/README.md
@@ -2,7 +2,7 @@
 
 ## 1 介绍
 
-本开发样例使用自研算法完成重叠文本的还原任务,供用户参考。 本系统基于昇腾Ascend310卡。本仓库是重叠文本识别任务(Overlap-CRNN)的上游任务,即完成对重叠文本还原并输出文本实例的mask。
+本开发样例使用自研算法完成重叠文本的还原任务,供用户参考。 本系统基于昇腾Ascend310卡。本仓库是重叠文本识别任务([Overlap-CRNN](https://gitee.com/ascend/mindxsdk-referenceapps/tree/master/contrib/Overlap-CRNN))的上游任务,即完成对重叠文本还原并输出文本实例的mask。
 
 ### 1.1 支持的产品
 
@@ -193,7 +193,7 @@ TODO
 
`转换成ONNX模型文件,然后在本代码仓下通过ATC将ONNX转换成om模型。
 
-模型转换工具(ATC)相关介绍如下:https://support.huawei.com/enterprise/zh/doc/EDOC1100234054
+模型转换工具(ATC)相关介绍如下:[ATC介绍](https://support.huawei.com/enterprise/zh/doc/EDOC1100234054)
 
 具体步骤如下:
 
@@ -205,7 +205,7 @@ TODO
    python export.py
    ```
 
-3. 将生成的ONNX模型转移到推理服务器,放至在`Overlap-CRNN/inference/models`路径下。
+3. 将生成的ONNX模型转移到推理服务器,放至在`Overlap-Recovery/inference/models`路径下。
 
 4. 进入推理服务器执行命令(修改`onnx_model_path`和`output_model_path`参数为自己的路径):
 
--
Gitee

From a0af056b9200bec67ff25ce5e4f2096816a57ad0 Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 19:26:56 +0800
Subject: [PATCH 10/51] update readme

---
 contrib/Overlap-Recovery/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
index ac23fd9ea..8e2f7936a 100644
--- a/contrib/Overlap-Recovery/README.md
+++ b/contrib/Overlap-Recovery/README.md
@@ -255,6 +255,6 @@ python ominfer.py
 python eval.py
 ```
 
-模型在测试集上的精度达标,最终模型的acc为84.2%,满足精度要求(acc≥80%)。
+模型在测试集上的精度达标,最终模型的精度为84.2%,满足精度要求(≥80%)。
 
 ![image-20221202155839483](./inference/测试结果.png)
\ No newline at end of file
--
Gitee

From 1a0a944ab242005e33f7801837a85148ffb91d55 Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 19:33:07 +0800
Subject: [PATCH 11/51] add gitignore

---
 contrib/Overlap-Recovery/inference/.gitignore | 143 ++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 contrib/Overlap-Recovery/inference/.gitignore

diff --git a/contrib/Overlap-Recovery/inference/.gitignore b/contrib/Overlap-Recovery/inference/.gitignore
new file mode 100644
index 000000000..fd6c89636
--- /dev/null
+++ b/contrib/Overlap-Recovery/inference/.gitignore
@@ -0,0 +1,143 @@
+# Created by .ignore support plugin (hsz.mobi)
+### Python template
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+#  Usually these files are written by a python script from a template
+#  before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+#   For a library or package, you might want to ignore these files since the code is
+#   intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+#   However, in case of collaboration, if having platform-specific dependencies or dependencies
+#   having no cross-platform support, pipenv may install dependencies that don't work, or not
+#   install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow +__pypackages__/ + +# Celery stuff +celerybeat-schedule +celerybeat.pid + +# SageMath parsed files +*.sage.py + +# Environments +.env +.venv +env/ +venv/ +ENV/ +env.bak/ +venv.bak/ + +# Spyder project settings +.spyderproject +.spyproject + +# Rope project settings +.ropeproject + +# mkdocs documentation +/site + +# mypy +.mypy_cache/ +.dmypy.json +dmypy.json + +# Pyre type checker +.pyre/ + +# pytype static type analyzer +.pytype/ + +# Cython debug symbols +cython_debug/ + +.idea +.DS_Store \ No newline at end of file -- Gitee From 4b30cf34841337270f33e611ec9432b28aeb111a Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Mon, 12 Dec 2022 19:42:19 +0800 Subject: [PATCH 12/51] ignore .idea --- contrib/Overlap-Recovery/inference/.gitignore | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/.gitignore b/contrib/Overlap-Recovery/inference/.gitignore index fd6c89636..37b662b10 100644 --- a/contrib/Overlap-Recovery/inference/.gitignore +++ b/contrib/Overlap-Recovery/inference/.gitignore @@ -139,5 +139,5 @@ dmypy.json # Cython debug symbols cython_debug/ -.idea -.DS_Store \ No newline at end of file +#.idea +#.DS_Store \ No newline at end of file -- Gitee From e63c29367bc4855a71ee36b1bcd24fe504583bc4 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Mon, 12 Dec 2022 19:44:47 +0800 Subject: [PATCH 13/51] ignore .idea --- .../inference/.idea/.gitignore | 8 ------- .../inference/.idea/Overlap_SDK.iml | 11 ---------- .../inference/.idea/deployment.xml | 21 ------------------- .../inspectionProfiles/Project_Default.xml | 18 ---------------- .../inspectionProfiles/profiles_settings.xml | 6 ------ .../Overlap-Recovery/inference/.idea/misc.xml | 4 ---- .../inference/.idea/modules.xml | 8 ------- 7 files changed, 76 deletions(-) delete mode 100644 contrib/Overlap-Recovery/inference/.idea/.gitignore delete mode 100644 contrib/Overlap-Recovery/inference/.idea/Overlap_SDK.iml delete mode 100644 contrib/Overlap-Recovery/inference/.idea/deployment.xml delete mode 100644 contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/Project_Default.xml delete mode 100644 contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/profiles_settings.xml delete mode 100644 contrib/Overlap-Recovery/inference/.idea/misc.xml delete mode 100644 contrib/Overlap-Recovery/inference/.idea/modules.xml diff --git a/contrib/Overlap-Recovery/inference/.idea/.gitignore b/contrib/Overlap-Recovery/inference/.idea/.gitignore deleted file mode 100644 index 73f69e095..000000000 --- a/contrib/Overlap-Recovery/inference/.idea/.gitignore +++ /dev/null @@ -1,8 +0,0 @@ -# Default ignored files -/shelf/ -/workspace.xml -# Datasource local storage ignored files -/dataSources/ -/dataSources.local.xml -# Editor-based HTTP Client requests -/httpRequests/ diff --git a/contrib/Overlap-Recovery/inference/.idea/Overlap_SDK.iml b/contrib/Overlap-Recovery/inference/.idea/Overlap_SDK.iml deleted file mode 100644 index 4ddc51fb8..000000000 --- a/contrib/Overlap-Recovery/inference/.idea/Overlap_SDK.iml +++ /dev/null @@ -1,11 +0,0 @@ - - - - - - - - - - \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/deployment.xml b/contrib/Overlap-Recovery/inference/.idea/deployment.xml deleted file mode 100644 index 0342ed18d..000000000 --- a/contrib/Overlap-Recovery/inference/.idea/deployment.xml +++ /dev/null @@ -1,21 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git 
a/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/Project_Default.xml b/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/Project_Default.xml deleted file mode 100644 index 9ab1e9045..000000000 --- a/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/Project_Default.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - - \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/profiles_settings.xml b/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/profiles_settings.xml deleted file mode 100644 index 105ce2da2..000000000 --- a/contrib/Overlap-Recovery/inference/.idea/inspectionProfiles/profiles_settings.xml +++ /dev/null @@ -1,6 +0,0 @@ - - - - \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/misc.xml b/contrib/Overlap-Recovery/inference/.idea/misc.xml deleted file mode 100644 index 68775bbcb..000000000 --- a/contrib/Overlap-Recovery/inference/.idea/misc.xml +++ /dev/null @@ -1,4 +0,0 @@ - - - - \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/.idea/modules.xml b/contrib/Overlap-Recovery/inference/.idea/modules.xml deleted file mode 100644 index 81b7b7a6d..000000000 --- a/contrib/Overlap-Recovery/inference/.idea/modules.xml +++ /dev/null @@ -1,8 +0,0 @@ - - - - - - - - \ No newline at end of file -- Gitee From 70212890c1161cd53ad8c12fe35637bbe941a105 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Mon, 12 Dec 2022 19:45:43 +0800 Subject: [PATCH 14/51] add .gitignore --- contrib/Overlap-Recovery/inference/.gitignore | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/.gitignore b/contrib/Overlap-Recovery/inference/.gitignore index 37b662b10..fd6c89636 100644 --- a/contrib/Overlap-Recovery/inference/.gitignore +++ b/contrib/Overlap-Recovery/inference/.gitignore @@ -139,5 +139,5 @@ dmypy.json # Cython debug symbols cython_debug/ -#.idea -#.DS_Store \ No newline at end of file +.idea +.DS_Store \ No newline at end of file -- Gitee From 1c5ed6776cc855d074d2ec3ede4819bca352bed3 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Mon, 12 Dec 2022 19:53:33 +0800 Subject: [PATCH 15/51] clean inference code --- contrib/Overlap-Recovery/inference/eval.py | 8 -------- .../Overlap-Recovery/inference/eval_utils.py | 4 ---- contrib/Overlap-Recovery/inference/load_ann.py | 14 +++----------- contrib/Overlap-Recovery/inference/ominfer.py | 15 --------------- .../inference/preprocess_utils.py | 18 +++++++++--------- 5 files changed, 12 insertions(+), 47 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/eval.py b/contrib/Overlap-Recovery/inference/eval.py index de90277cb..c15203f15 100644 --- a/contrib/Overlap-Recovery/inference/eval.py +++ b/contrib/Overlap-Recovery/inference/eval.py @@ -2,9 +2,7 @@ import warnings warnings.filterwarnings('ignore') - from PIL import Image -import shutil import numpy as np from mindx.sdk import base from mindx.sdk.base import Tensor, Model, Size, log, ImageProcessor, post, BTensor @@ -38,8 +36,6 @@ def prepare_model(model_path, device_id): model = Model(model_path, device_id) # 创造模型对象 return model - - def postprocess(scaled_mask_preds, cls_score): num_imgs = 1 segm_results = [] @@ -117,8 +113,6 @@ if __name__ == '__main__': # (1, 4, h, w), (1, 4) pred_masks, pred_scores = postprocess(pred_masks, pred_scores) - # print(f"pred_masks: {pred_masks.shape} pred_score: {pred_masks.shape}") - # remove padding area # (1, 4, h, w), (1,4) resize_shape = img_meta['img_shape'][:2] # h,w @@ -130,9 +124,7 
@@ if __name__ == '__main__': rescaled_masks = [] for idx in range(pred_masks.shape[0]): img = pred_masks[idx] - # text_instance = img.astype(np.uint8) pil_image = Image.fromarray(img) - # pil_image = pil_image.resize((ori_size[1], ori_size[0]), Image.Resampling.BILINEAR) pil_image = pil_image.resize((ori_size[1], ori_size[0])) resized_img = np.array(pil_image) rescaled_masks.append(resized_img) diff --git a/contrib/Overlap-Recovery/inference/eval_utils.py b/contrib/Overlap-Recovery/inference/eval_utils.py index 121353313..415ceade3 100644 --- a/contrib/Overlap-Recovery/inference/eval_utils.py +++ b/contrib/Overlap-Recovery/inference/eval_utils.py @@ -131,10 +131,6 @@ def evaluate_metric(results, text_ins_miou_list = [] total_ins_num = 0 for idx, ((box_scores, masks), img_meta) in enumerate(zip(results, img_metas)): - # structure: - # box_scores: List[ numpy_array with shape (num_ins, 1*score) * num_classes ] - # masks: List[ List[ numpy_array_bool with shape (h, w) * num_ins ] * num_classes ] - overall_iou_metrics, text_ins_miou, ins_num = eval_func(box_scores, masks, img_meta, score_thresh, iou_thrs) intersection_text += overall_iou_metrics[0] union_text += overall_iou_metrics[1] diff --git a/contrib/Overlap-Recovery/inference/load_ann.py b/contrib/Overlap-Recovery/inference/load_ann.py index 026c3543b..1a64aac11 100644 --- a/contrib/Overlap-Recovery/inference/load_ann.py +++ b/contrib/Overlap-Recovery/inference/load_ann.py @@ -31,25 +31,17 @@ def load_annotations(ann_file, img_prefix, seg_prefix): bboxes.append(bbox) seg_map_path.append(osp.join(seg_dir, text_ins[f"mask"])) text_labels.append(text_ins['label']) - # for key_ in self.key_list: - # x, y, w, h = info_[f"{key_}_bbox"] - # bbox = [x, y, x + w, y + h] - # bboxes.append(bbox) - # seg_map_path.append(osp.join(seg_dir, info_[f"{key_}_mask_bin"])) - # text_labels.append(info_[f"{key_}_label"]) data_info['bboxes'] = bboxes data_info['seg_map_path'] = seg_map_path data_info['text_labels'] = text_labels - # removed - # data_info['key_list'] = self.key_list data_list.append(data_info) else: raise NotImplementedError return data_list if __name__ == '__main__': - ann_file = '/home/yuliang2/overlap_qualified_data_1129/annotation.json' - img_prefix= '/home/yuliang2/overlap_qualified_data_1129' - seg_prefix = '/home/yuliang2/overlap_qualified_data_1129' + ann_file = './dataset/annotation.json' + img_prefix= './dataset' + seg_prefix = './dataset' data_list = load_annotations(ann_file, img_prefix, seg_prefix) print(len(data_list)) \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py index ed4c6d470..61be089f6 100644 --- a/contrib/Overlap-Recovery/inference/ominfer.py +++ b/contrib/Overlap-Recovery/inference/ominfer.py @@ -7,12 +7,9 @@ import warnings warnings.filterwarnings('ignore') import os -from base64 import decode import numpy as np from mindx.sdk import base from mindx.sdk.base import Tensor, Model, Size, log, ImageProcessor, post, BTensor -import cv2 -import mmcv from load_img_data import load_img_data from PIL import Image import shutil @@ -36,14 +33,12 @@ def om_infer_one(img_name, img_prefix=None, vis_dir=None, score_thr=0.4): print(f"ori_shape: {img_meta['ori_shape']} " f"resize_shape: {img_meta['img_shape']} " f"padded_shape: {img_meta['pad_shape']}") - # import pdb;pdb.set_trace() resizeImg = np.expand_dims(resizeImg, 0) # add batch dim, 1,3,h,w resizeImg = np.ascontiguousarray(resizeImg) imageTensor = Tensor(resizeImg) # 
推理前需要转换为tensor的List,使用Tensor类来构建。 imageTensor.to_device(device_id) # !!!!!重要,需要转移至device侧,该函数单独执行 imageTensorList = [imageTensor] # 推理前需要转换为tensor的List outputs = model.infer(imageTensorList) - # import pdb;pdb.set_trace() inputs = [] for i in range(len(outputs)): @@ -55,10 +50,6 @@ def om_infer_one(img_name, img_prefix=None, vis_dir=None, score_thr=0.4): pred_masks, pred_scores = inputs[0], inputs[1] pred_masks, pred_scores = postprocess(pred_masks, pred_scores) print(f"pred_masks_shape: {pred_masks.shape} pred_score_shape: {pred_scores.shape}") - - # np.save("pred_npy_res/om_output_mask.npy", pred_masks.astype(np.uint8)) - # np.save('pred_npy_res/om_output_score.npy', pred_scores) - print(f"original pred unique value: {np.unique(pred_masks)}") # remove padding area @@ -67,7 +58,6 @@ def om_infer_one(img_name, img_prefix=None, vis_dir=None, score_thr=0.4): pred_masks = pred_masks[:, :, :resize_shape[0], :resize_shape[1]] ori_size = img_meta['ori_shape'][:2] # h, w - # import pdb;pdb.set_trace() # remove batch dim # (4, h, w), (4) @@ -88,16 +78,11 @@ def om_infer_one(img_name, img_prefix=None, vis_dir=None, score_thr=0.4): if pred_score < score_thr: continue - # print(pred_score) - # print(np.unique(text_instance)) - # import pdb;pdb.set_trace() - text_instance = text_instance.astype(np.uint8) area = np.sum(text_instance) print(f"pred_text_instance: {instance_idx+1} pred_score: {pred_score} unique value: {np.unique(text_instance)} area: {area}") pred_mask = Image.fromarray(text_instance * 255) - # import pdb;pdb.set_trace() pred_mask = pred_mask.resize((ori_size[1], ori_size[0]))# w,h if vis_dir is not None: diff --git a/contrib/Overlap-Recovery/inference/preprocess_utils.py b/contrib/Overlap-Recovery/inference/preprocess_utils.py index 6c3240ab5..c950ec156 100644 --- a/contrib/Overlap-Recovery/inference/preprocess_utils.py +++ b/contrib/Overlap-Recovery/inference/preprocess_utils.py @@ -847,7 +847,7 @@ class Normalize: @PIPELINES.register_module() class ImageToTensor: - """Convert image to :obj:`torch.Tensor` by given keys. + """Convert image to :obj:`Tensor` by given keys. The dimension order of input image is (H, W, C). The pipeline will convert it to (C, H, W). If only 2 dimension (H, W) is given, the output would be @@ -861,7 +861,7 @@ class ImageToTensor: self.keys = keys def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and + """Call function to convert image in results to :obj:`Tensor` and transpose the channel order. Args: @@ -869,7 +869,7 @@ class ImageToTensor: Returns: dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. + to :obj:`Tensor` and transposed to (C, H, W) order. """ for key in self.keys: img = results[key] @@ -888,7 +888,7 @@ class ImageToTensor: @PIPELINES.register_module() class HWCToCHW: - """Convert image to :obj:`torch.Tensor` by given keys. + """Convert image to :obj:`Tensor` by given keys. The dimension order of input image is (H, W, C). The pipeline will convert it to (C, H, W). If only 2 dimension (H, W) is given, the output would be @@ -902,7 +902,7 @@ class HWCToCHW: self.keys = keys def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and + """Call function to convert image in results to :obj:`Tensor` and transpose the channel order. Args: @@ -910,7 +910,7 @@ class HWCToCHW: Returns: dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. 
+ to :obj:`Tensor` and transposed to (C, H, W) order. """ for key in self.keys: img = results[key] @@ -928,13 +928,13 @@ class HWCToCHW: def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. + """Convert objects of various python types to :obj:`Tensor`. - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, + Supported types are: :class:`numpy.ndarray`, :class:`Tensor`, :class:`Sequence`, :class:`int` and :class:`float`. Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to + data (Tensor | numpy.ndarray | Sequence | int | float): Data to be converted. """ # mindspore Tensor -- Gitee From a2048a95e0bb5f7862c47cd17a22d6b0762e93da Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Mon, 12 Dec 2022 20:03:21 +0800 Subject: [PATCH 16/51] clean inference code --- contrib/Overlap-Recovery/inference/eval.py | 38 ++++++++++--------- .../inference/load_img_data.py | 5 +-- contrib/Overlap-Recovery/inference/ominfer.py | 19 +++++----- 3 files changed, 32 insertions(+), 30 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/eval.py b/contrib/Overlap-Recovery/inference/eval.py index c15203f15..43a2e54c7 100644 --- a/contrib/Overlap-Recovery/inference/eval.py +++ b/contrib/Overlap-Recovery/inference/eval.py @@ -69,18 +69,13 @@ def segm2result(mask_preds, cls_scores): return segm_result, seg_scores -if __name__ == '__main__': +def eval(ann_file, img_prefix, seg_mask_prefix, model_path, device_id): # dataset - ann_file = './dataset/annotation.json' - img_prefix= './dataset' - seg_mask_prefix = './dataset' dataset = OverlapDataset(ann_file, img_prefix, seg_mask_prefix) sample_num = dataset.sample_num dataset = iter(dataset) # model - device_id = 1 # 芯片ID - model_path = "models/best_iou.om" # 模型的路径 model = prepare_model(model_path, device_id) # inference @@ -92,11 +87,11 @@ if __name__ == '__main__': print(f'sample {idx}') # prepare image - resizeImg = np.expand_dims(resizeImg, 0) # add batch dim, 1,3,h,w + resizeImg = np.expand_dims(resizeImg, 0) # add batch dim, 1,3,h,w resizeImg = np.ascontiguousarray(resizeImg) - imageTensor = Tensor(resizeImg) # 推理前需要转换为tensor的List,使用Tensor类来构建。 - imageTensor.to_device(device_id) # !!!!!重要,需要转移至device侧,该函数单独执行 - imageTensorList = [imageTensor] # 推理前需要转换为tensor的List + imageTensor = Tensor(resizeImg) # 推理前需要转换为tensor的List,使用Tensor类来构建。 + imageTensor.to_device(device_id) # !!!!!重要,需要转移至device侧,该函数单独执行 + imageTensorList = [imageTensor] # 推理前需要转换为tensor的List # forward outputs = model.infer(imageTensorList) @@ -109,18 +104,18 @@ if __name__ == '__main__': outputs_np.append(n) # (1, 4, h, w), (1, 4, 1) - pred_masks, pred_scores = outputs_np[0], outputs_np[1] + pred_masks, pred_scores = outputs_np[0], outputs_np[1] # (1, 4, h, w), (1, 4) - pred_masks, pred_scores = postprocess(pred_masks, pred_scores) + pred_masks, pred_scores = postprocess(pred_masks, pred_scores) # remove padding area # (1, 4, h, w), (1,4) - resize_shape = img_meta['img_shape'][:2] # h,w + resize_shape = img_meta['img_shape'][:2] # h,w pred_masks = pred_masks[:, :, :resize_shape[0], :resize_shape[1]] # rescaled to original size - ori_size = img_meta['ori_shape'][:2] # h,w - pred_masks = pred_masks[0]# removed batch dim + ori_size = img_meta['ori_shape'][:2] # h,w + pred_masks = pred_masks[0] # removed batch dim rescaled_masks = [] for idx in range(pred_masks.shape[0]): img = pred_masks[idx] @@ -135,9 +130,18 @@ if __name__ == '__main__': results.append(result) img_metas_list.append(img_meta) # evaluate - eval_res = 
evaluate_metric(results, img_metas_list, score_thresh=0.2,) + eval_res = evaluate_metric(results, img_metas_list, score_thresh=0.2, ) text_iou = np.around(eval_res["text_iou"], decimals=3) print("==============================") print("精度测试结果如下:") print(f'text_iou: {text_iou * 100}%') - print("==============================") \ No newline at end of file + print("==============================") + + +if __name__ == '__main__': + ann_file = './dataset/annotation.json' #标签路径 + img_prefix = './dataset' #图片根路径 + seg_mask_prefix = './dataset' #mask根路径 + device_id = 1 # 芯片ID + model_path = "models/best_iou.om" # 模型的路径 + eval(ann_file, img_prefix, seg_mask_prefix, model_path, device_id) \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/load_img_data.py b/contrib/Overlap-Recovery/inference/load_img_data.py index b9e2e0eca..2eb5f76a8 100644 --- a/contrib/Overlap-Recovery/inference/load_img_data.py +++ b/contrib/Overlap-Recovery/inference/load_img_data.py @@ -3,8 +3,7 @@ from preprocess_utils import build_processor -# img_scale = (736, 736) -# img_scale = (1472, 1472) + img_scale = (768, 768) img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) @@ -40,7 +39,7 @@ def load_img_data(img_name, img_prefix=None): if __name__ == '__main__': - img_prefix = '/home/yuliang2/overlap_text/data' + img_prefix = './dataset/img' img_name = '2.jpg' resizeImg, img_metas = load_img_data(img_name, img_prefix) print(img_metas) diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py index 61be089f6..fccc15bed 100644 --- a/contrib/Overlap-Recovery/inference/ominfer.py +++ b/contrib/Overlap-Recovery/inference/ominfer.py @@ -14,16 +14,10 @@ from load_img_data import load_img_data from PIL import Image import shutil -device_id = 1 # 芯片ID -model_path = "models/best_iou.om" # 模型的路径 -img_prefix = './' -img_name = 'test.jpg' -save_path = './' +def om_infer_one(img_name, model_path, device_id, img_prefix=None, vis_dir=None, score_thr=0.4): + base.mx_init() # 全局资源初始化 + model = Model(model_path, device_id) # 创造模型对象 -base.mx_init() # 全局资源初始化 -model = Model(model_path, device_id) # 创造模型对象 - -def om_infer_one(img_name, img_prefix=None, vis_dir=None, score_thr=0.4): resizeImg, img_meta = load_img_data(img_name, img_prefix) # hwc-chw ori_filename = img_meta['ori_filename'] abs_filename = img_meta['filename'] @@ -126,4 +120,9 @@ def segm2result(mask_preds, cls_scores): if __name__ == '__main__': - om_infer_one(img_name, img_prefix, vis_dir=save_path) + device_id = 1 # 芯片ID + model_path = "models/best_iou.om" # 模型的路径 + img_prefix = './' + img_name = 'test.jpg' + save_path = './' + om_infer_one(img_name, model_path, device_id, img_prefix, vis_dir=save_path) -- Gitee From 4fc33ed60235c5b39f215e1ad4d45c838ac6d1d9 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Mon, 12 Dec 2022 20:34:41 +0800 Subject: [PATCH 17/51] update readme --- contrib/Overlap-Recovery/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index 8e2f7936a..ee861e179 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -6,7 +6,7 @@ ### 1.1 支持的产品 -本系统采用Atlas300-3010作为实验验证的硬件平台,并支持Atlas200RC以及Atlas500的硬件平台.具体产品实物图和硬件参数请参见《Atlas 300 AI加速卡 用户指南(型号 3010)》。由于采用的硬件平台为含有Atlas 300的Atlas 800 AI服务器 (型号3010),而服务器一般需要通过网络访问,因此需要通过笔记本或PC等客户端访问服务器,而且展示界面一般在客户端。 
+本系统采用Atlas300-3010作为实验验证的硬件平台,并支持Atlas200RC以及Atlas500的硬件平台,具体产品实物图和硬件参数请参见《Atlas 300 AI加速卡 用户指南(型号 3010)》。由于采用的硬件平台为含有Atlas 300的Atlas 800 AI服务器 (型号3010),而服务器一般需要通过网络访问,因此需要通过笔记本或PC等客户端访问服务器,而且展示界面一般在客户端。 ### 1.2 支持的版本 @@ -86,7 +86,7 @@ TODO 本案例中的还原模型适用于常规图像的文本,并可以返回测试图像的文本区域的IOU指标。 -本模型在以下几种情况去噪效果良好:图像中文字清晰可见、排版工整、字符大小适中等。 +本模型在以下几种情况还原效果良好:图像中文字清晰可见、排版工整、字符大小适中等。 在以下几种情况去噪效果不太好:图像中文字模糊、排版随意、字符较小等。 -- Gitee From f7c71d3dcbf6b34e0b5e0e010eee4ed2d5a25073 Mon Sep 17 00:00:00 2001 From: HamPerdredes Date: Mon, 12 Dec 2022 21:11:57 +0800 Subject: [PATCH 18/51] init train code --- contrib/Overlap-Recovery/train/LICENSE | 201 +++++ contrib/Overlap-Recovery/train/__init__.py | 4 + contrib/Overlap-Recovery/train/eval.py | 160 ++++ .../train/onnx_model_wrappers.py | 171 ++++ contrib/Overlap-Recovery/train/onnx_test.py | 11 + .../Overlap-Recovery/train/pytorch2onnx.py | 343 ++++++++ contrib/Overlap-Recovery/train/readme_pre.py | 13 + .../train/resource_utils/resnet50_dict.json | 1 + .../train/scripts/convert_resnet.sh | 43 + .../Overlap-Recovery/train/scripts/train.sh | 1 + .../Overlap-Recovery/train/src/__init__.py | 4 + .../train/src/dataset/__init__.py | 6 + .../train/src/dataset/base_dataset.py | 323 +++++++ .../train/src/dataset/build_dataset.py | 19 + .../train/src/dataset/data_process.py | 814 ++++++++++++++++++ .../train/src/dataset/real_dataset.py | 298 +++++++ .../train/src/dataset/synth_dataset.py | 297 +++++++ .../train/src/dataset/utils.py | 349 ++++++++ .../train/src/deoccluder/__init__.py | 6 + .../src/deoccluder/custom_cells/__init__.py | 10 + .../custom_cells/custom_assigner.py | 243 ++++++ .../deoccluder/custom_cells/custom_blocks.py | 274 ++++++ .../deoccluder/custom_cells/custom_losses.py | 110 +++ .../custom_cells/custom_match_cost.py | 217 +++++ .../custom_cells/custom_operations.py | 104 +++ .../custom_cells/custom_samplers.py | 126 +++ .../train/src/deoccluder/deoccluder_r50.py | 277 ++++++ .../train/src/deoccluder/fpn_neck.py | 121 +++ .../train/src/deoccluder/resnet.py | 136 +++ .../train/src/deoccluder/roi/__init__.py | 4 + .../deoccluder/roi/custom_kernel_iter_head.py | 325 +++++++ .../roi/custom_kernel_update_head.py | 293 +++++++ .../src/deoccluder/roi/kernel_update_head.py | 339 ++++++++ .../src/deoccluder/roi/kernel_updator.py | 91 ++ .../train/src/deoccluder/rpn/__init__.py | 4 + .../train/src/deoccluder/rpn/kernel_head.py | 582 +++++++++++++ .../src/deoccluder/rpn/positional_encoding.py | 155 ++++ .../deoccluder/rpn/semantic_fpn_wrapper.py | 282 ++++++ .../train/src/deoccluder/utils.py | 44 + .../train/src/model_utils/__init__.py | 0 .../train/src/model_utils/configs/__init__.py | 6 + .../src/model_utils/configs/config_base.py | 128 +++ .../src/model_utils/configs/config_model.py | 146 ++++ .../train/src/model_utils/device_adapter.py | 27 + .../train/src/model_utils/local_adapter.py | 36 + .../train/src/model_utils/moxing_adapter.py | 122 +++ .../train/src/utils/pth2ckpt.py | 59 ++ contrib/Overlap-Recovery/train/train.py | 118 +++ 48 files changed, 7443 insertions(+) create mode 100644 contrib/Overlap-Recovery/train/LICENSE create mode 100644 contrib/Overlap-Recovery/train/__init__.py create mode 100644 contrib/Overlap-Recovery/train/eval.py create mode 100644 contrib/Overlap-Recovery/train/onnx_model_wrappers.py create mode 100644 contrib/Overlap-Recovery/train/onnx_test.py create mode 100644 contrib/Overlap-Recovery/train/pytorch2onnx.py create mode 100644 contrib/Overlap-Recovery/train/readme_pre.py create mode 100644 
contrib/Overlap-Recovery/train/resource_utils/resnet50_dict.json create mode 100644 contrib/Overlap-Recovery/train/scripts/convert_resnet.sh create mode 100644 contrib/Overlap-Recovery/train/scripts/train.sh create mode 100644 contrib/Overlap-Recovery/train/src/__init__.py create mode 100644 contrib/Overlap-Recovery/train/src/dataset/__init__.py create mode 100644 contrib/Overlap-Recovery/train/src/dataset/base_dataset.py create mode 100644 contrib/Overlap-Recovery/train/src/dataset/build_dataset.py create mode 100644 contrib/Overlap-Recovery/train/src/dataset/data_process.py create mode 100644 contrib/Overlap-Recovery/train/src/dataset/real_dataset.py create mode 100644 contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py create mode 100644 contrib/Overlap-Recovery/train/src/dataset/utils.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/__init__.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/__init__.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/resnet.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/roi/__init__.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/rpn/__init__.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py create mode 100644 contrib/Overlap-Recovery/train/src/deoccluder/utils.py create mode 100644 contrib/Overlap-Recovery/train/src/model_utils/__init__.py create mode 100644 contrib/Overlap-Recovery/train/src/model_utils/configs/__init__.py create mode 100644 contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py create mode 100644 contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py create mode 100644 contrib/Overlap-Recovery/train/src/model_utils/device_adapter.py create mode 100644 contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py create mode 100644 contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py create mode 100644 contrib/Overlap-Recovery/train/src/utils/pth2ckpt.py create mode 100644 contrib/Overlap-Recovery/train/train.py diff --git a/contrib/Overlap-Recovery/train/LICENSE b/contrib/Overlap-Recovery/train/LICENSE new file mode 100644 index 000000000..261eeb9e9 --- /dev/null 
+++ b/contrib/Overlap-Recovery/train/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. 
Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/contrib/Overlap-Recovery/train/__init__.py b/contrib/Overlap-Recovery/train/__init__.py
new file mode 100644
index 000000000..7c8d0d8c3
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/__init__.py
@@ -0,0 +1,4 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time    : 2022/11/24 23:14
+# @Author  : WeiHua
diff --git a/contrib/Overlap-Recovery/train/eval.py b/contrib/Overlap-Recovery/train/eval.py
new file mode 100644
index 000000000..31d11f5c8
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/eval.py
@@ -0,0 +1,160 @@
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Evaluation for De-Occluder"""
+import os
+import time
+import numpy as np
+from loguru import logger
+from tqdm import tqdm
+
+from src.model_utils.configs.config_base import config
+from src.model_utils.device_adapter import get_device_id, get_device_num
+from src.deoccluder import CustomKNet
+from src.dataset import build_dataset
+
+import mindspore as ms
+from mindspore import context
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+from mindspore.common import set_seed
+from mindspore import dataset as de
+
+
+set_seed(1)
+
+def eval_func(eval_set, ckpt_path, config, src_eval_set):
+    """De-Occluder evaluation."""
+
+    net = CustomKNet(config.model)
+    param_dict = load_checkpoint(ckpt_path)
+    load_param_into_net(net, param_dict, strict_load=False)
+    non_match_keys = []
+    matched_keys = []
+    for _, param in net.parameters_and_names():
+        if param.name in param_dict:
+            matched_keys.append(param.name)
+        else:
+            non_match_keys.append(param.name)
+    net.set_train(False)
+
+    eval_iter = 0
+    total = eval_set.get_dataset_size()
+
+    print("\n========================================\n")
+    print("total images num: ", total)
+    print("Processing, please wait a moment.")
+    results = []
+    for data in tqdm(eval_set.create_dict_iterator(output_numpy=True, num_epochs=1), total=total):
+        eval_iter = eval_iter + 1
+        for key in data.keys():
+            data[key] = ms.Tensor(data[key])
+        # run net
+        output = net(**data)
+        results.append(output[0])
+    print(src_eval_set.evaluate(results, metric='segm_with_each'))
+
+
+def modelarts_process():
+    """ modelarts process """
+    def unzip(zip_file, save_dir):
+        import zipfile
+        s_time = time.time()
+        if not os.path.exists(os.path.join(save_dir, config.modelarts_dataset_unzip_name)):
+            zip_isexist = zipfile.is_zipfile(zip_file)
+            if zip_isexist:
+                fz = 
zipfile.ZipFile(zip_file, 'r') + data_num = len(fz.namelist()) + print("Extract Start...") + print("unzip file num: {}".format(data_num)) + data_print = int(data_num / 100) if data_num > 100 else 1 + i = 0 + for file in fz.namelist(): + if i % data_print == 0: + print("unzip percent: {}%".format(int(i * 100 / data_num)), flush=True) + i += 1 + fz.extract(file, save_dir) + print("cost time: {}min:{}s.".format(int((time.time() - s_time) / 60),\ + int(int(time.time() - s_time) % 60))) + print("Extract Done") + else: + print("This is not zip.") + else: + print("Zip has been extracted.") + + if config.need_modelarts_dataset_unzip: + zip_file_1 = os.path.join(config.data_path, config.modelarts_dataset_unzip_name + ".zip") + save_dir_1 = os.path.join(config.data_path) + + sync_lock = "/tmp/unzip_sync.lock" + + # Each server contains 8 devices as most + if get_device_id() % min(get_device_num(), 8) == 0 and not os.path.exists(sync_lock): + print("Zip file path: ", zip_file_1) + print("Unzip file save dir: ", save_dir_1) + unzip(zip_file_1, save_dir_1) + print("===Finish extract data synchronization===") + try: + os.mknod(sync_lock) + except IOError: + pass + + while True: + if os.path.exists(sync_lock): + break + time.sleep(1) + + print("Device: {}, Finish sync unzip data from {} to {}.".format(get_device_id(), zip_file_1, save_dir_1)) + print("#" * 200, os.listdir(save_dir_1)) + print("#" * 200, os.listdir(os.path.join(config.data_path, config.modelarts_dataset_unzip_name))) + + config.coco_root = os.path.join(config.data_path, config.modelarts_dataset_unzip_name) + config.checkpoint_path = os.path.join(config.output_path, config.ckpt_path) + config.ann_file = os.path.join(config.coco_root, config.ann_file) + + +def eval_(): + device_target = config.device_target + # context.set_context(mode=context.GRAPH_MODE, device_target=device_target, device_id=get_device_id()) + context.set_context(mode=context.PYNATIVE_MODE, device_target=device_target, device_id=get_device_id(), ) + + print("Start create eval dataset!") + + # It will generate mindrecord file in config.mindrecord_dir + if not os.path.exists(config.mindrecord_dir): + os.makedirs(config.mindrecord_dir) + # create_mindrecord_dir(prefix, config.mindrecord_dir, mindrecord_file) + logger.add(os.path.join(config.mindrecord_dir, time.asctime(time.localtime()).replace(' ', '_') + ".log")) + + # prepare dataset + eval_set_cls = build_dataset(config.data['test']) + collect_pipe = config.data['test']['pipeline'][-1] + column_names = list(collect_pipe['keys']) + list(collect_pipe['meta_keys']) + print(column_names) + eval_set = de.GeneratorDataset(eval_set_cls, + column_names=column_names, + num_parallel_workers=config.data['workers_per_gpu'], + shuffle=False) + eval_set = eval_set.batch(1, drop_remainder=False) + + print("Start Eval!") + print("ckpt_path=", config.checkpoint_path) + eval_func(eval_set, config.checkpoint_path, config, eval_set_cls) + + +if __name__ == '__main__': + eval_() diff --git a/contrib/Overlap-Recovery/train/onnx_model_wrappers.py b/contrib/Overlap-Recovery/train/onnx_model_wrappers.py new file mode 100644 index 000000000..4d046c5e6 --- /dev/null +++ b/contrib/Overlap-Recovery/train/onnx_model_wrappers.py @@ -0,0 +1,171 @@ +# -*- coding: utf-8 -*- +# @Author: Wenwen Yu +# @Email: yuwenwen62@gmail.com +# @Created Time: 11/15/22 3:47 PM + +# Copyright (c) OpenMMLab. All rights reserved. 
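+#
+# This wrapper lets the exported ONNX graph be called through the same
+# (img, img_metas) interface as the original detector: forward_test() runs
+# ONNX Runtime through io_binding, and forward() reshapes the raw
+# (masks, scores) outputs into per-image results.
+#
+# A minimal usage sketch (file name and class list are illustrative only):
+#   model = ONNXRuntimeDetector('weight.onnx', class_names=('text',), device_id=0)
+#   results = model(img_list, img_metas=img_meta_list, return_loss=False)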
+import os.path as osp
+import warnings
+
+import numpy as np
+import torch
+
+from mmdet.core import bbox2result
+from mmdet.models import BaseDetector
+
+
+class DeployBaseDetector(BaseDetector):
+    """DeployBaseDetector."""
+
+    def __init__(self, class_names, device_id):
+        super(DeployBaseDetector, self).__init__()
+        self.CLASSES = class_names
+        self.device_id = device_id
+
+    def simple_test(self, img, img_metas, **kwargs):
+        raise NotImplementedError('This method is not implemented.')
+
+    def aug_test(self, imgs, img_metas, **kwargs):
+        raise NotImplementedError('This method is not implemented.')
+
+    def extract_feat(self, imgs):
+        raise NotImplementedError('This method is not implemented.')
+
+    def forward_train(self, imgs, img_metas, **kwargs):
+        raise NotImplementedError('This method is not implemented.')
+
+    def val_step(self, data, optimizer):
+        raise NotImplementedError('This method is not implemented.')
+
+    def train_step(self, data, optimizer):
+        raise NotImplementedError('This method is not implemented.')
+
+    def forward_test(self, *, img, img_metas, **kwargs):
+        raise NotImplementedError('This method is not implemented.')
+
+    def async_simple_test(self, img, img_metas, **kwargs):
+        raise NotImplementedError('This method is not implemented.')
+
+    def forward(self, img, img_metas, return_loss=True, **kwargs):
+        outputs = self.forward_test(img, img_metas, **kwargs)
+        batch_masks, seg_scores = outputs  # (bs, num_det, h, w), (bs, num_det)
+        batch_size = img[0].shape[0]
+        img_metas = img_metas[0]
+        results = []
+        rescale = kwargs.get('rescale', True)
+        for i in range(batch_size):
+            masks = batch_masks[i]  # (num_det, h, w)
+            score = seg_scores[i]  # (num_det,)
+            seg_per_cls = []
+            score_per_cls = []
+            score_per_cls.append(score)  # (num_det)
+            # (num_det, h, w); num_cls is 1 (only text), so masks are indexed directly
+            for ins_idx in range(masks.shape[0]):
+                img_h, img_w = img_metas[i]['img_shape'][:2]
+                ori_h, ori_w = img_metas[i]['ori_shape'][:2]
+                mask = masks[ins_idx, :img_h, :img_w]
+                if rescale:
+                    mask = mask.astype(np.float32)
+                    mask = torch.from_numpy(mask)
+                    mask = torch.nn.functional.interpolate(
+                        mask.unsqueeze(0).unsqueeze(0), size=(ori_h, ori_w))
+                    # torch tensors are converted back with .numpy(), not
+                    # MindSpore's .asnumpy()
+                    mask = mask.squeeze(0).squeeze(0).detach().numpy()
+                seg_per_cls.append(mask)
+            results.append((score_per_cls, [seg_per_cls]))
+        return results
+
+
+def forward1(self, img, img_metas, return_loss=True, **kwargs):
+    outputs = self.forward_test(img, img_metas, **kwargs)
+    batch_masks, bbox_results = outputs  # (bs, num_cls, num_det, h, w), (bs, num_cls, num_det, 5)
+    batch_size = img[0].shape[0]
+    img_metas = img_metas[0]
+    results = []
+    rescale = kwargs.get('rescale', False)
+    for i in range(batch_size):
+        masks = batch_masks[i]  # (num_cls, num_det, h, w)
+        bbox_and_score = bbox_results[i]  # (num_cls, num_det, 5)
+        seg_per_cls = []
+        bbox_per_cls = []
+        bbox_per_cls.append(bbox_and_score.squeeze(0).detach().numpy())  # (num_det, 5)
+        # (num_det, h, w)
+        masks = masks[0]  # num_cls is 1 (only text), so we get it from idx 0
+        for ins_idx in range(masks.shape[0]):
+            img_h, img_w = img_metas[i]['img_shape'][:2]
+            ori_h, ori_w = img_metas[i]['ori_shape'][:2]
+            mask = masks[ins_idx, :img_h, :img_w]
+            if rescale:
+                mask = 
torch.nn.functional.interpolate( + mask, size=(ori_h, ori_w)) + mask = mask.detach().numpy() + + if mask.dtype != np.bool: + mask = mask >= 0.5 + seg_per_cls.append(mask) + results.append((bbox_per_cls, [seg_per_cls])) + return results + + + +class ONNXRuntimeDetector(DeployBaseDetector): + """Wrapper for detector's inference with ONNXRuntime.""" + + def __init__(self, onnx_file, class_names, device_id): + super(ONNXRuntimeDetector, self).__init__(class_names, device_id) + import onnxruntime as ort + + # get the custom op path + ort_custom_op_path = '' + try: + from mmcv.ops import get_onnxruntime_op_path + ort_custom_op_path = get_onnxruntime_op_path() + except (ImportError, ModuleNotFoundError): + warnings.warn('If input model has custom op from mmcv, \ + you may have to build mmcv with ONNXRuntime from source.') + session_options = ort.SessionOptions() + # register custom op for onnxruntime + if osp.exists(ort_custom_op_path): + session_options.register_custom_ops_library(ort_custom_op_path) + sess = ort.InferenceSession(onnx_file, session_options) + providers = ['CPUExecutionProvider'] + options = [{}] + is_cuda_available = ort.get_device() == 'GPU' + if is_cuda_available: + providers.insert(0, 'CUDAExecutionProvider') + options.insert(0, {'device_id': device_id}) + + sess.set_providers(providers, options) + + self.sess = sess + self.io_binding = sess.io_binding() + self.output_names = [_.name for _ in sess.get_outputs()] + self.is_cuda_available = is_cuda_available + + def forward_test(self, imgs, img_metas, **kwargs): + input_data = imgs[0] + # set io binding for inputs/outputs + device_type = 'cuda' if self.is_cuda_available else 'cpu' + if not self.is_cuda_available: + input_data = input_data.cpu() + self.io_binding.bind_input( + name='input', + device_type=device_type, + device_id=self.device_id, + element_type=np.float32, + shape=input_data.shape, + buffer_ptr=input_data.data_ptr()) + + for name in self.output_names: + self.io_binding.bind_output(name) + # run session to get outputs + self.sess.run_with_iobinding(self.io_binding) + ort_outputs = self.io_binding.copy_outputs_to_cpu() + return ort_outputs + + diff --git a/contrib/Overlap-Recovery/train/onnx_test.py b/contrib/Overlap-Recovery/train/onnx_test.py new file mode 100644 index 000000000..f5b01fd7e --- /dev/null +++ b/contrib/Overlap-Recovery/train/onnx_test.py @@ -0,0 +1,11 @@ +# -*- coding: utf-8 -*- +# @Author: Wenwen Yu +# @Email: yuwenwen62@gmail.com +# @Created Time: 11/14/22 11:47 PM + +import onnx + +onnx_file = '/home/whua/code/overlap_text/logs/knet/default_synth_v0_3x_4proposal/weight.onnx' +model = onnx.load(onnx_file) +print([input.name for input in model.graph.input]) +print([output.name for output in model.graph.output]) \ No newline at end of file diff --git a/contrib/Overlap-Recovery/train/pytorch2onnx.py b/contrib/Overlap-Recovery/train/pytorch2onnx.py new file mode 100644 index 000000000..ed092eadc --- /dev/null +++ b/contrib/Overlap-Recovery/train/pytorch2onnx.py @@ -0,0 +1,343 @@ +# Copyright (c) OpenMMLab. All rights reserved. 
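+#
+# Export flow: build the detector from a config/checkpoint pair, trace it with
+# a dummy input of the requested shape, optionally simplify the graph with
+# onnxsim, and (with --verify) compare ONNX Runtime outputs against PyTorch.
+#
+# A typical invocation for this project (paths are illustrative; all flags are
+# defined in parse_args() below):
+#   python pytorch2onnx.py <config.py> <checkpoint.pth> \
+#       --output-file weight.onnx --shape 736 736 --verify --simplify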
+import argparse
+import os.path as osp
+import warnings
+from functools import partial
+
+import numpy as np
+import onnx
+import torch
+from mmcv import Config, DictAction
+
+from mmdet.core.export import build_model_from_cfg, preprocess_example_input
+# the local wrapper below intentionally shadows mmdet's own ONNXRuntimeDetector
+from mmdet.core.export.model_wrappers import ONNXRuntimeDetector as UnusedWrapper
+from onnx_model_wrappers import ONNXRuntimeDetector
+
+
+def pytorch2onnx(model,
+                 input_img,
+                 input_shape,
+                 normalize_cfg,
+                 opset_version=11,
+                 show=False,
+                 output_file='tmp.onnx',
+                 verify=False,
+                 test_img=None,
+                 do_simplify=False,
+                 dynamic_export=None,
+                 skip_postprocess=False):
+
+    input_config = {
+        'input_shape': input_shape,
+        'input_path': input_img,
+        'normalize_cfg': normalize_cfg
+    }
+    # prepare input
+    one_img, one_meta = preprocess_example_input(input_config)
+    one_meta['scale_factor'] = one_meta['scale_factor'].tolist()
+    one_meta.pop('show_img')
+
+    pad_H, pad_W = 736, 736
+    one_meta['batch_input_shape'] = (pad_H, pad_W)
+
+    img_list, img_meta_list = [one_img], [[one_meta]]
+
+    if skip_postprocess:
+        warnings.warn('Not all models support export onnx without post '
+                      'process, especially two stage detectors!')
+        model.forward = model.forward_dummy
+        torch.onnx.export(
+            model,
+            one_img,
+            output_file,
+            input_names=['input'],
+            export_params=True,
+            keep_initializers_as_inputs=True,
+            do_constant_folding=True,
+            verbose=show,
+            opset_version=opset_version)
+
+        print(f'Successfully exported ONNX model without '
+              f'post process: {output_file}')
+        return
+
+    # replace original forward function
+    origin_forward = model.forward
+    model.forward = partial(
+        model.forward,
+        img_metas=img_meta_list,
+        return_loss=False,
+        rescale=False)
+
+    output_names = ['masks', 'scores']
+    # if model.with_mask:
+    #     output_names.append('masks')
+    input_name = 'input'
+    dynamic_axes = None
+    if dynamic_export:
+        dynamic_axes = {
+            input_name: {
+                0: 'batch',
+                2: 'height',
+                3: 'width'
+            },
+            'masks': {
+                0: 'batch',
+                1: 'num_cls',
+                2: 'num_dets'
+            },
+            'bbox': {
+                0: 'batch',
+                1: 'num_cls',
+                2: 'num_dets'
+            },
+        }
+    torch.onnx.export(
+        model,
+        img_list,
+        output_file,
+        input_names=[input_name],
+        output_names=output_names,
+        export_params=True,
+        keep_initializers_as_inputs=True,
+        do_constant_folding=True,
+        verbose=show,
+        opset_version=opset_version,
+        dynamic_axes=dynamic_axes)
+
+    model.forward = origin_forward
+    if do_simplify:
+        import onnxsim
+
+        from mmdet import digit_version
+
+        min_required_version = '0.4.0'
+        assert digit_version(onnxsim.__version__) >= digit_version(
+            min_required_version
+        ), f'Requires to install onnxsim>={min_required_version}'
+
+        model_opt, check_ok = onnxsim.simplify(output_file)
+        if check_ok:
+            onnx.save(model_opt, output_file)
+            print(f'Successfully simplified ONNX model: {output_file}')
+        else:
+            warnings.warn('Failed to simplify ONNX model.')
+    print(f'Successfully exported ONNX model: {output_file}')
+
+    if verify:
+        # check by onnx
+        onnx_model = onnx.load(output_file)
+        onnx.checker.check_model(onnx_model)
+
+        # wrap onnx model
+        onnx_model = ONNXRuntimeDetector(output_file, model.CLASSES, 0)
+        if dynamic_export:
+            # scale up to test dynamic shape
+            h, w = [int((_ * 1.5) // 32 * 32) for _ in input_shape[2:]]
+            h, w = min(1344, h), min(1344, w)
+            input_config['input_shape'] = (1, 3, h, w)
+
+        if test_img is None:
+            input_config['input_path'] = input_img
+
+        # prepare input once again
+        one_img, one_meta = preprocess_example_input(input_config)
+
+        one_meta['scale_factor'] = one_meta['scale_factor'].tolist()
+        one_meta.pop('show_img')
+
+        if dynamic_export:
+            pad_H, pad_W = h, w
+            one_meta['batch_input_shape'] = (pad_H, pad_W)
+
+        img_list, img_meta_list = [one_img], [[one_meta]]
+
+        # get pytorch output
+        with torch.no_grad():
+            pytorch_results = model(
+                img_list,
+                img_metas=img_meta_list,
+                return_loss=False,
+                rescale=True)[0]
+
+        img_list = [_.cuda().contiguous() for _ in img_list]
+        if dynamic_export:
+            img_list = img_list + [_.flip(-1).contiguous() for _ in img_list]
+            img_meta_list = img_meta_list * 2
+        # get onnx output
+        onnx_results = onnx_model(
+            img_list, img_metas=img_meta_list, return_loss=False)[0]
+
+        # compare a part of the results
+        for scores in pytorch_results[0]:
+            # drop the fake bboxes in pytorch_results and keep only the scores
+            new_scores = scores[:, -1]
+            new_pytorch_res = [[new_scores], pytorch_results[1]]
+            # compare_pairs = list(zip(onnx_results, pytorch_results))
+            compare_pairs = list(zip(onnx_results, new_pytorch_res))
+            err_msg = 'The numerical values are different between Pytorch' + \
+                      ' and ONNX, but it does not necessarily mean the' + \
+                      ' exported ONNX model is problematic.'
+            # check the numerical value
+            # [(scores, masks), ...]
+            for type_idx, (onnx_res, pytorch_res) in enumerate(compare_pairs):
+                for idx, (o_res, p_res) in enumerate(zip(onnx_res, pytorch_res)):
+                    np.testing.assert_allclose(
+                        o_res, p_res, rtol=1e-03, atol=1e-05, err_msg=err_msg)
+        print('The numerical values are the same between Pytorch and ONNX')
+
+
+def parse_normalize_cfg(test_pipeline):
+    transforms = None
+    for pipeline in test_pipeline:
+        if 'transforms' in pipeline:
+            transforms = pipeline['transforms']
+            break
+    assert transforms is not None, 'Failed to find `transforms`'
+    norm_config_li = [_ for _ in transforms if _['type'] == 'Normalize']
+    assert len(norm_config_li) == 1, '`norm_config` should only have one'
+    norm_config = norm_config_li[0]
+    return norm_config
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(
+        description='Convert MMDetection models to ONNX')
+    parser.add_argument('config', help='test config file path')
+    parser.add_argument('checkpoint', help='checkpoint file')
+    parser.add_argument('--input-img', type=str, help='Images for input')
+    parser.add_argument(
+        '--show',
+        action='store_true',
+        help='Show onnx graph and detection outputs')
+    parser.add_argument('--output-file', type=str, default='tmp.onnx')
+    parser.add_argument('--opset-version', type=int, default=12)
+    parser.add_argument(
+        '--test-img', type=str, default=None, help='Images for test')
+    parser.add_argument(
+        '--dataset',
+        type=str,
+        default='coco',
+        help='Dataset name. This argument is deprecated and will be removed \
+        in future releases.')
+    parser.add_argument(
+        '--verify',
+        action='store_true',
+        help='verify the onnx model output against pytorch output')
+    parser.add_argument(
+        '--simplify',
+        action='store_true',
+        help='Whether to simplify onnx model.')
+    parser.add_argument(
+        '--shape',
+        type=int,
+        nargs='+',
+        default=[800, 1216],
+        help='input image size')
+    parser.add_argument(
+        '--mean',
+        type=float,
+        nargs='+',
+        default=[123.675, 116.28, 103.53],
+        help='mean value used for preprocess input data. This argument \
+        is deprecated and will be removed in future releases.')
+    parser.add_argument(
+        '--std',
+        type=float,
+        nargs='+',
+        default=[58.395, 57.12, 57.375],
+        help='variance value used for preprocess input data. '
+        'This argument is deprecated and will be removed in future releases.')
+    parser.add_argument(
+        '--cfg-options',
+        nargs='+',
+        action=DictAction,
+        help='Override some settings in the used config, the key-value pair '
+        'in xxx=yyy format will be merged into config file. If the value to '
+        'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
+        'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
+        'Note that the quotation marks are necessary and that no white space '
+        'is allowed.')
+    parser.add_argument(
+        '--dynamic-export',
+        action='store_true',
+        help='Whether to export onnx with dynamic axis.')
+    parser.add_argument(
+        '--skip-postprocess',
+        action='store_true',
+        help='Whether to export model without post process. Experimental '
+        'option. We do not guarantee the correctness of the exported '
+        'model.')
+    args = parser.parse_args()
+    return args
+
+
+if __name__ == '__main__':
+    args = parse_args()
+    warnings.warn('Arguments like `--mean`, `--std`, `--dataset` would be \
+        parsed directly from config file and are deprecated and \
+        will be removed in future releases.')
+
+    # assert args.opset_version == 11, 'MMDet only support opset 11 now'
+
+    try:
+        from mmcv.onnx.symbolic import register_extra_symbolics
+    except ModuleNotFoundError:
+        raise NotImplementedError('please update mmcv to version>=v1.0.4')
+    register_extra_symbolics(args.opset_version)
+
+    cfg = Config.fromfile(args.config)
+    if args.cfg_options is not None:
+        cfg.merge_from_dict(args.cfg_options)
+
+    if args.shape is None:
+        img_scale = cfg.test_pipeline[1]['img_scale']
+        input_shape = (1, 3, img_scale[1], img_scale[0])
+    elif len(args.shape) == 1:
+        input_shape = (1, 3, args.shape[0], args.shape[0])
+    elif len(args.shape) == 2:
+        input_shape = (1, 3) + tuple(args.shape)
+    else:
+        raise ValueError('invalid input shape')
+    # build the model and load checkpoint
+    model = build_model_from_cfg(args.config, args.checkpoint,
+                                 args.cfg_options)
+
+    if not args.input_img:
+        args.input_img = osp.join(osp.dirname(__file__), '../../demo/demo.jpg')
+
+    normalize_cfg = parse_normalize_cfg(cfg.test_pipeline)
+
+    # convert model to onnx file
+    pytorch2onnx(
+        model,
+        args.input_img,
+        input_shape,
+        normalize_cfg,
+        opset_version=args.opset_version,
+        show=args.show,
+        output_file=args.output_file,
+        verify=args.verify,
+        test_img=args.test_img,
+        do_simplify=args.simplify,
+        dynamic_export=args.dynamic_export,
+        skip_postprocess=args.skip_postprocess)
+
+    # Following strings of text style are from colorama package
+    bright_style, reset_style = '\x1b[1m', '\x1b[0m'
+    red_text, blue_text = '\x1b[31m', '\x1b[34m'
+    white_background = '\x1b[107m'
+
+    msg = white_background + bright_style + red_text
+    msg += 'DeprecationWarning: This tool will be deprecated in the future. 
' + msg += blue_text + 'Welcome to use the unified model deployment toolbox ' + msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' + msg += reset_style + warnings.warn(msg) diff --git a/contrib/Overlap-Recovery/train/readme_pre.py b/contrib/Overlap-Recovery/train/readme_pre.py new file mode 100644 index 000000000..1d0f6581c --- /dev/null +++ b/contrib/Overlap-Recovery/train/readme_pre.py @@ -0,0 +1,13 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/30 23:09 +# @Author : WeiHua + +""" +ipdb +opencv-python +imagesize +loguru +pip install mmcv==0.2.14 + +""" \ No newline at end of file diff --git a/contrib/Overlap-Recovery/train/resource_utils/resnet50_dict.json b/contrib/Overlap-Recovery/train/resource_utils/resnet50_dict.json new file mode 100644 index 000000000..2561cf7ef --- /dev/null +++ b/contrib/Overlap-Recovery/train/resource_utils/resnet50_dict.json @@ -0,0 +1 @@ +{"conv1.weight": "conv1.weight", "bn1.running_mean": "bn1.moving_mean", "bn1.running_var": "bn1.moving_variance", "bn1.weight": "bn1.gamma", "bn1.bias": "bn1.beta", "layer1.0.conv1.weight": "layer1.0.conv1.weight", "layer1.0.bn1.running_mean": "layer1.0.bn1.moving_mean", "layer1.0.bn1.running_var": "layer1.0.bn1.moving_variance", "layer1.0.bn1.weight": "layer1.0.bn1.gamma", "layer1.0.bn1.bias": "layer1.0.bn1.beta", "layer1.0.conv2.weight": "layer1.0.conv2.weight", "layer1.0.bn2.running_mean": "layer1.0.bn2.moving_mean", "layer1.0.bn2.running_var": "layer1.0.bn2.moving_variance", "layer1.0.bn2.weight": "layer1.0.bn2.gamma", "layer1.0.bn2.bias": "layer1.0.bn2.beta", "layer1.0.conv3.weight": "layer1.0.conv3.weight", "layer1.0.bn3.running_mean": "layer1.0.bn3.moving_mean", "layer1.0.bn3.running_var": "layer1.0.bn3.moving_variance", "layer1.0.bn3.weight": "layer1.0.bn3.gamma", "layer1.0.bn3.bias": "layer1.0.bn3.beta", "layer1.0.downsample.0.weight": "layer1.0.downsample.0.weight", "layer1.0.downsample.1.running_mean": "layer1.0.downsample.1.moving_mean", "layer1.0.downsample.1.running_var": "layer1.0.downsample.1.moving_variance", "layer1.0.downsample.1.weight": "layer1.0.downsample.1.gamma", "layer1.0.downsample.1.bias": "layer1.0.downsample.1.beta", "layer1.1.conv1.weight": "layer1.1.conv1.weight", "layer1.1.bn1.running_mean": "layer1.1.bn1.moving_mean", "layer1.1.bn1.running_var": "layer1.1.bn1.moving_variance", "layer1.1.bn1.weight": "layer1.1.bn1.gamma", "layer1.1.bn1.bias": "layer1.1.bn1.beta", "layer1.1.conv2.weight": "layer1.1.conv2.weight", "layer1.1.bn2.running_mean": "layer1.1.bn2.moving_mean", "layer1.1.bn2.running_var": "layer1.1.bn2.moving_variance", "layer1.1.bn2.weight": "layer1.1.bn2.gamma", "layer1.1.bn2.bias": "layer1.1.bn2.beta", "layer1.1.conv3.weight": "layer1.1.conv3.weight", "layer1.1.bn3.running_mean": "layer1.1.bn3.moving_mean", "layer1.1.bn3.running_var": "layer1.1.bn3.moving_variance", "layer1.1.bn3.weight": "layer1.1.bn3.gamma", "layer1.1.bn3.bias": "layer1.1.bn3.beta", "layer1.2.conv1.weight": "layer1.2.conv1.weight", "layer1.2.bn1.running_mean": "layer1.2.bn1.moving_mean", "layer1.2.bn1.running_var": "layer1.2.bn1.moving_variance", "layer1.2.bn1.weight": "layer1.2.bn1.gamma", "layer1.2.bn1.bias": "layer1.2.bn1.beta", "layer1.2.conv2.weight": "layer1.2.conv2.weight", "layer1.2.bn2.running_mean": "layer1.2.bn2.moving_mean", "layer1.2.bn2.running_var": "layer1.2.bn2.moving_variance", "layer1.2.bn2.weight": "layer1.2.bn2.gamma", "layer1.2.bn2.bias": "layer1.2.bn2.beta", "layer1.2.conv3.weight": "layer1.2.conv3.weight", "layer1.2.bn3.running_mean": 
"layer1.2.bn3.moving_mean", "layer1.2.bn3.running_var": "layer1.2.bn3.moving_variance", "layer1.2.bn3.weight": "layer1.2.bn3.gamma", "layer1.2.bn3.bias": "layer1.2.bn3.beta", "layer2.0.conv1.weight": "layer2.0.conv1.weight", "layer2.0.bn1.running_mean": "layer2.0.bn1.moving_mean", "layer2.0.bn1.running_var": "layer2.0.bn1.moving_variance", "layer2.0.bn1.weight": "layer2.0.bn1.gamma", "layer2.0.bn1.bias": "layer2.0.bn1.beta", "layer2.0.conv2.weight": "layer2.0.conv2.weight", "layer2.0.bn2.running_mean": "layer2.0.bn2.moving_mean", "layer2.0.bn2.running_var": "layer2.0.bn2.moving_variance", "layer2.0.bn2.weight": "layer2.0.bn2.gamma", "layer2.0.bn2.bias": "layer2.0.bn2.beta", "layer2.0.conv3.weight": "layer2.0.conv3.weight", "layer2.0.bn3.running_mean": "layer2.0.bn3.moving_mean", "layer2.0.bn3.running_var": "layer2.0.bn3.moving_variance", "layer2.0.bn3.weight": "layer2.0.bn3.gamma", "layer2.0.bn3.bias": "layer2.0.bn3.beta", "layer2.0.downsample.0.weight": "layer2.0.downsample.0.weight", "layer2.0.downsample.1.running_mean": "layer2.0.downsample.1.moving_mean", "layer2.0.downsample.1.running_var": "layer2.0.downsample.1.moving_variance", "layer2.0.downsample.1.weight": "layer2.0.downsample.1.gamma", "layer2.0.downsample.1.bias": "layer2.0.downsample.1.beta", "layer2.1.conv1.weight": "layer2.1.conv1.weight", "layer2.1.bn1.running_mean": "layer2.1.bn1.moving_mean", "layer2.1.bn1.running_var": "layer2.1.bn1.moving_variance", "layer2.1.bn1.weight": "layer2.1.bn1.gamma", "layer2.1.bn1.bias": "layer2.1.bn1.beta", "layer2.1.conv2.weight": "layer2.1.conv2.weight", "layer2.1.bn2.running_mean": "layer2.1.bn2.moving_mean", "layer2.1.bn2.running_var": "layer2.1.bn2.moving_variance", "layer2.1.bn2.weight": "layer2.1.bn2.gamma", "layer2.1.bn2.bias": "layer2.1.bn2.beta", "layer2.1.conv3.weight": "layer2.1.conv3.weight", "layer2.1.bn3.running_mean": "layer2.1.bn3.moving_mean", "layer2.1.bn3.running_var": "layer2.1.bn3.moving_variance", "layer2.1.bn3.weight": "layer2.1.bn3.gamma", "layer2.1.bn3.bias": "layer2.1.bn3.beta", "layer2.2.conv1.weight": "layer2.2.conv1.weight", "layer2.2.bn1.running_mean": "layer2.2.bn1.moving_mean", "layer2.2.bn1.running_var": "layer2.2.bn1.moving_variance", "layer2.2.bn1.weight": "layer2.2.bn1.gamma", "layer2.2.bn1.bias": "layer2.2.bn1.beta", "layer2.2.conv2.weight": "layer2.2.conv2.weight", "layer2.2.bn2.running_mean": "layer2.2.bn2.moving_mean", "layer2.2.bn2.running_var": "layer2.2.bn2.moving_variance", "layer2.2.bn2.weight": "layer2.2.bn2.gamma", "layer2.2.bn2.bias": "layer2.2.bn2.beta", "layer2.2.conv3.weight": "layer2.2.conv3.weight", "layer2.2.bn3.running_mean": "layer2.2.bn3.moving_mean", "layer2.2.bn3.running_var": "layer2.2.bn3.moving_variance", "layer2.2.bn3.weight": "layer2.2.bn3.gamma", "layer2.2.bn3.bias": "layer2.2.bn3.beta", "layer2.3.conv1.weight": "layer2.3.conv1.weight", "layer2.3.bn1.running_mean": "layer2.3.bn1.moving_mean", "layer2.3.bn1.running_var": "layer2.3.bn1.moving_variance", "layer2.3.bn1.weight": "layer2.3.bn1.gamma", "layer2.3.bn1.bias": "layer2.3.bn1.beta", "layer2.3.conv2.weight": "layer2.3.conv2.weight", "layer2.3.bn2.running_mean": "layer2.3.bn2.moving_mean", "layer2.3.bn2.running_var": "layer2.3.bn2.moving_variance", "layer2.3.bn2.weight": "layer2.3.bn2.gamma", "layer2.3.bn2.bias": "layer2.3.bn2.beta", "layer2.3.conv3.weight": "layer2.3.conv3.weight", "layer2.3.bn3.running_mean": "layer2.3.bn3.moving_mean", "layer2.3.bn3.running_var": "layer2.3.bn3.moving_variance", "layer2.3.bn3.weight": "layer2.3.bn3.gamma", "layer2.3.bn3.bias": 
"layer2.3.bn3.beta", "layer3.0.conv1.weight": "layer3.0.conv1.weight", "layer3.0.bn1.running_mean": "layer3.0.bn1.moving_mean", "layer3.0.bn1.running_var": "layer3.0.bn1.moving_variance", "layer3.0.bn1.weight": "layer3.0.bn1.gamma", "layer3.0.bn1.bias": "layer3.0.bn1.beta", "layer3.0.conv2.weight": "layer3.0.conv2.weight", "layer3.0.bn2.running_mean": "layer3.0.bn2.moving_mean", "layer3.0.bn2.running_var": "layer3.0.bn2.moving_variance", "layer3.0.bn2.weight": "layer3.0.bn2.gamma", "layer3.0.bn2.bias": "layer3.0.bn2.beta", "layer3.0.conv3.weight": "layer3.0.conv3.weight", "layer3.0.bn3.running_mean": "layer3.0.bn3.moving_mean", "layer3.0.bn3.running_var": "layer3.0.bn3.moving_variance", "layer3.0.bn3.weight": "layer3.0.bn3.gamma", "layer3.0.bn3.bias": "layer3.0.bn3.beta", "layer3.0.downsample.0.weight": "layer3.0.downsample.0.weight", "layer3.0.downsample.1.running_mean": "layer3.0.downsample.1.moving_mean", "layer3.0.downsample.1.running_var": "layer3.0.downsample.1.moving_variance", "layer3.0.downsample.1.weight": "layer3.0.downsample.1.gamma", "layer3.0.downsample.1.bias": "layer3.0.downsample.1.beta", "layer3.1.conv1.weight": "layer3.1.conv1.weight", "layer3.1.bn1.running_mean": "layer3.1.bn1.moving_mean", "layer3.1.bn1.running_var": "layer3.1.bn1.moving_variance", "layer3.1.bn1.weight": "layer3.1.bn1.gamma", "layer3.1.bn1.bias": "layer3.1.bn1.beta", "layer3.1.conv2.weight": "layer3.1.conv2.weight", "layer3.1.bn2.running_mean": "layer3.1.bn2.moving_mean", "layer3.1.bn2.running_var": "layer3.1.bn2.moving_variance", "layer3.1.bn2.weight": "layer3.1.bn2.gamma", "layer3.1.bn2.bias": "layer3.1.bn2.beta", "layer3.1.conv3.weight": "layer3.1.conv3.weight", "layer3.1.bn3.running_mean": "layer3.1.bn3.moving_mean", "layer3.1.bn3.running_var": "layer3.1.bn3.moving_variance", "layer3.1.bn3.weight": "layer3.1.bn3.gamma", "layer3.1.bn3.bias": "layer3.1.bn3.beta", "layer3.2.conv1.weight": "layer3.2.conv1.weight", "layer3.2.bn1.running_mean": "layer3.2.bn1.moving_mean", "layer3.2.bn1.running_var": "layer3.2.bn1.moving_variance", "layer3.2.bn1.weight": "layer3.2.bn1.gamma", "layer3.2.bn1.bias": "layer3.2.bn1.beta", "layer3.2.conv2.weight": "layer3.2.conv2.weight", "layer3.2.bn2.running_mean": "layer3.2.bn2.moving_mean", "layer3.2.bn2.running_var": "layer3.2.bn2.moving_variance", "layer3.2.bn2.weight": "layer3.2.bn2.gamma", "layer3.2.bn2.bias": "layer3.2.bn2.beta", "layer3.2.conv3.weight": "layer3.2.conv3.weight", "layer3.2.bn3.running_mean": "layer3.2.bn3.moving_mean", "layer3.2.bn3.running_var": "layer3.2.bn3.moving_variance", "layer3.2.bn3.weight": "layer3.2.bn3.gamma", "layer3.2.bn3.bias": "layer3.2.bn3.beta", "layer3.3.conv1.weight": "layer3.3.conv1.weight", "layer3.3.bn1.running_mean": "layer3.3.bn1.moving_mean", "layer3.3.bn1.running_var": "layer3.3.bn1.moving_variance", "layer3.3.bn1.weight": "layer3.3.bn1.gamma", "layer3.3.bn1.bias": "layer3.3.bn1.beta", "layer3.3.conv2.weight": "layer3.3.conv2.weight", "layer3.3.bn2.running_mean": "layer3.3.bn2.moving_mean", "layer3.3.bn2.running_var": "layer3.3.bn2.moving_variance", "layer3.3.bn2.weight": "layer3.3.bn2.gamma", "layer3.3.bn2.bias": "layer3.3.bn2.beta", "layer3.3.conv3.weight": "layer3.3.conv3.weight", "layer3.3.bn3.running_mean": "layer3.3.bn3.moving_mean", "layer3.3.bn3.running_var": "layer3.3.bn3.moving_variance", "layer3.3.bn3.weight": "layer3.3.bn3.gamma", "layer3.3.bn3.bias": "layer3.3.bn3.beta", "layer3.4.conv1.weight": "layer3.4.conv1.weight", "layer3.4.bn1.running_mean": "layer3.4.bn1.moving_mean", "layer3.4.bn1.running_var": 
"layer3.4.bn1.moving_variance", "layer3.4.bn1.weight": "layer3.4.bn1.gamma", "layer3.4.bn1.bias": "layer3.4.bn1.beta", "layer3.4.conv2.weight": "layer3.4.conv2.weight", "layer3.4.bn2.running_mean": "layer3.4.bn2.moving_mean", "layer3.4.bn2.running_var": "layer3.4.bn2.moving_variance", "layer3.4.bn2.weight": "layer3.4.bn2.gamma", "layer3.4.bn2.bias": "layer3.4.bn2.beta", "layer3.4.conv3.weight": "layer3.4.conv3.weight", "layer3.4.bn3.running_mean": "layer3.4.bn3.moving_mean", "layer3.4.bn3.running_var": "layer3.4.bn3.moving_variance", "layer3.4.bn3.weight": "layer3.4.bn3.gamma", "layer3.4.bn3.bias": "layer3.4.bn3.beta", "layer3.5.conv1.weight": "layer3.5.conv1.weight", "layer3.5.bn1.running_mean": "layer3.5.bn1.moving_mean", "layer3.5.bn1.running_var": "layer3.5.bn1.moving_variance", "layer3.5.bn1.weight": "layer3.5.bn1.gamma", "layer3.5.bn1.bias": "layer3.5.bn1.beta", "layer3.5.conv2.weight": "layer3.5.conv2.weight", "layer3.5.bn2.running_mean": "layer3.5.bn2.moving_mean", "layer3.5.bn2.running_var": "layer3.5.bn2.moving_variance", "layer3.5.bn2.weight": "layer3.5.bn2.gamma", "layer3.5.bn2.bias": "layer3.5.bn2.beta", "layer3.5.conv3.weight": "layer3.5.conv3.weight", "layer3.5.bn3.running_mean": "layer3.5.bn3.moving_mean", "layer3.5.bn3.running_var": "layer3.5.bn3.moving_variance", "layer3.5.bn3.weight": "layer3.5.bn3.gamma", "layer3.5.bn3.bias": "layer3.5.bn3.beta", "layer4.0.conv1.weight": "layer4.0.conv1.weight", "layer4.0.bn1.running_mean": "layer4.0.bn1.moving_mean", "layer4.0.bn1.running_var": "layer4.0.bn1.moving_variance", "layer4.0.bn1.weight": "layer4.0.bn1.gamma", "layer4.0.bn1.bias": "layer4.0.bn1.beta", "layer4.0.conv2.weight": "layer4.0.conv2.weight", "layer4.0.bn2.running_mean": "layer4.0.bn2.moving_mean", "layer4.0.bn2.running_var": "layer4.0.bn2.moving_variance", "layer4.0.bn2.weight": "layer4.0.bn2.gamma", "layer4.0.bn2.bias": "layer4.0.bn2.beta", "layer4.0.conv3.weight": "layer4.0.conv3.weight", "layer4.0.bn3.running_mean": "layer4.0.bn3.moving_mean", "layer4.0.bn3.running_var": "layer4.0.bn3.moving_variance", "layer4.0.bn3.weight": "layer4.0.bn3.gamma", "layer4.0.bn3.bias": "layer4.0.bn3.beta", "layer4.0.downsample.0.weight": "layer4.0.downsample.0.weight", "layer4.0.downsample.1.running_mean": "layer4.0.downsample.1.moving_mean", "layer4.0.downsample.1.running_var": "layer4.0.downsample.1.moving_variance", "layer4.0.downsample.1.weight": "layer4.0.downsample.1.gamma", "layer4.0.downsample.1.bias": "layer4.0.downsample.1.beta", "layer4.1.conv1.weight": "layer4.1.conv1.weight", "layer4.1.bn1.running_mean": "layer4.1.bn1.moving_mean", "layer4.1.bn1.running_var": "layer4.1.bn1.moving_variance", "layer4.1.bn1.weight": "layer4.1.bn1.gamma", "layer4.1.bn1.bias": "layer4.1.bn1.beta", "layer4.1.conv2.weight": "layer4.1.conv2.weight", "layer4.1.bn2.running_mean": "layer4.1.bn2.moving_mean", "layer4.1.bn2.running_var": "layer4.1.bn2.moving_variance", "layer4.1.bn2.weight": "layer4.1.bn2.gamma", "layer4.1.bn2.bias": "layer4.1.bn2.beta", "layer4.1.conv3.weight": "layer4.1.conv3.weight", "layer4.1.bn3.running_mean": "layer4.1.bn3.moving_mean", "layer4.1.bn3.running_var": "layer4.1.bn3.moving_variance", "layer4.1.bn3.weight": "layer4.1.bn3.gamma", "layer4.1.bn3.bias": "layer4.1.bn3.beta", "layer4.2.conv1.weight": "layer4.2.conv1.weight", "layer4.2.bn1.running_mean": "layer4.2.bn1.moving_mean", "layer4.2.bn1.running_var": "layer4.2.bn1.moving_variance", "layer4.2.bn1.weight": "layer4.2.bn1.gamma", "layer4.2.bn1.bias": "layer4.2.bn1.beta", "layer4.2.conv2.weight": 
"layer4.2.conv2.weight", "layer4.2.bn2.running_mean": "layer4.2.bn2.moving_mean", "layer4.2.bn2.running_var": "layer4.2.bn2.moving_variance", "layer4.2.bn2.weight": "layer4.2.bn2.gamma", "layer4.2.bn2.bias": "layer4.2.bn2.beta", "layer4.2.conv3.weight": "layer4.2.conv3.weight", "layer4.2.bn3.running_mean": "layer4.2.bn3.moving_mean", "layer4.2.bn3.running_var": "layer4.2.bn3.moving_variance", "layer4.2.bn3.weight": "layer4.2.bn3.gamma", "layer4.2.bn3.bias": "layer4.2.bn3.beta", "fc.weight": "fc.weight", "fc.bias": "fc.bias"} \ No newline at end of file diff --git a/contrib/Overlap-Recovery/train/scripts/convert_resnet.sh b/contrib/Overlap-Recovery/train/scripts/convert_resnet.sh new file mode 100644 index 000000000..14cefecfb --- /dev/null +++ b/contrib/Overlap-Recovery/train/scripts/convert_resnet.sh @@ -0,0 +1,43 @@ +#!/bin/bash +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================ + +echo "==============================================================================================================" +echo "Please run the script as: " +echo "bash convert_resnet.sh PTH_PATH CKPT_PATH" +echo "for example: bash convert_resnet.sh resnet50-19c8e357.pth pretrained_resnet50.ckpt" +echo "==============================================================================================================" + +PTH_PATH=$1 +CKPT_PATH=$2 +PROJECT_DIR=$(cd "$(dirname "$0")" || exit; pwd) +DICT_FILE=/home/whua/code/overlap_text/KNet_Huawei/K-Net-mindspore/resnet50_dict.json + +if [ $# != 2 ] +then + echo "Please specify the pth of PyTorch and ckpt of Mindspore" + echo "Please try again" + exit +fi + +LOG_DIR=/home/whua/code/overlap_text/KNet_Huawei/K-Net-mindspore/logs +echo $PROJECT_DIR + +python /home/whua/code/overlap_text/KNet_Huawei/K-Net-mindspore/src/utils/pth2ckpt.py \ + --pth-path $PTH_PATH \ + --ckpt-path $CKPT_PATH \ + --dict-file $DICT_FILE > $LOG_DIR/convert_resnet.log 2>&1 & + +echo "The convert_resnet.log file is at /logs/convert_resnet.log" diff --git a/contrib/Overlap-Recovery/train/scripts/train.sh b/contrib/Overlap-Recovery/train/scripts/train.sh new file mode 100644 index 000000000..3ec8c4f38 --- /dev/null +++ b/contrib/Overlap-Recovery/train/scripts/train.sh @@ -0,0 +1 @@ +CUDA_VISIBLE_DEVICES=0 python train.py diff --git a/contrib/Overlap-Recovery/train/src/__init__.py b/contrib/Overlap-Recovery/train/src/__init__.py new file mode 100644 index 000000000..7c8d0d8c3 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/__init__.py @@ -0,0 +1,4 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/24 23:14 +# @Author : WeiHua diff --git a/contrib/Overlap-Recovery/train/src/dataset/__init__.py b/contrib/Overlap-Recovery/train/src/dataset/__init__.py new file mode 100644 index 000000000..d0ea8125b --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/dataset/__init__.py @@ -0,0 +1,6 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/30 14:31 
+# @Author : WeiHua + +from .build_dataset import build_dataset diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py new file mode 100644 index 000000000..14858cfeb --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py @@ -0,0 +1,323 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/30 14:32 +# @Author : WeiHua + +import os.path as osp +import warnings + +import numpy as np +from terminaltables import AsciiTable +from .data_process import PipelineFunc + + +class CustomDataset: + """Custom dataset for detection. + + The annotation format is shown as follows. The `ann` field is optional for + testing. + + .. code-block:: none + + [ + { + 'filename': 'a.jpg', + 'width': 1280, + 'height': 720, + 'ann': { + 'bboxes': (n, 4) in (x1, y1, x2, y2) order. + 'labels': (n, ), + 'bboxes_ignore': (k, 4), (optional field) + 'labels_ignore': (k, 4) (optional field) + } + }, + ... + ] + + Args: + ann_file (str): Annotation file path. + pipeline (list[dict]): Processing pipeline. + classes (str | Sequence[str], optional): Specify classes to load. + If is None, ``cls.CLASSES`` will be used. Default: None. + data_root (str, optional): Data root for ``ann_file``, + ``img_prefix``, ``seg_prefix`` if specified. + test_mode (bool, optional): If set True, annotation will not be loaded. + filter_empty_gt (bool, optional): If set true, images without bounding + boxes of the dataset's classes will be filtered out. This option + only works when `test_mode=False`, i.e., we never filter images + during tests. + """ + + CLASSES = None + + PALETTE = None + + def __init__(self, + ann_file, + pipeline, + classes=None, + data_root=None, + img_prefix='', + seg_prefix=None, + seg_suffix='.png', + test_mode=False, + filter_empty_gt=True): + self.ann_file = ann_file + self.data_root = data_root + self.img_prefix = img_prefix + self.seg_prefix = seg_prefix + self.seg_suffix = seg_suffix + self.test_mode = test_mode + self.filter_empty_gt = filter_empty_gt + self.CLASSES = self.get_classes(classes) + + # join paths if data_root is specified + if self.data_root is not None: + if not osp.isabs(self.ann_file): + self.ann_file = osp.join(self.data_root, self.ann_file) + if not (self.img_prefix is None or osp.isabs(self.img_prefix)): + self.img_prefix = osp.join(self.data_root, self.img_prefix) + if not (self.seg_prefix is None or osp.isabs(self.seg_prefix)): + self.seg_prefix = osp.join(self.data_root, self.seg_prefix) + # load annotations (and proposals) + self.data_infos = self.load_annotations(self.ann_file) + + self.proposals = None + + # filter images too small and containing no annotations + if not test_mode: + valid_inds = self._filter_imgs() + self.data_infos = [self.data_infos[i] for i in valid_inds] + if self.proposals is not None: + self.proposals = [self.proposals[i] for i in valid_inds] + # set group flag for the sampler + self._set_group_flag() + + # processing pipeline + # self.pipeline = Compose(pipeline) + self.pipeline = self.build_pipeline(pipeline) + + def __len__(self): + """Total number of samples of data.""" + return len(self.data_infos) + + def build_pipeline(self, pipeline): + return PipelineFunc(pipeline) + + def load_annotations(self, ann_file): + """Load annotation from annotation file.""" + raise NotImplementedError + + def get_ann_info(self, idx): + """Get annotation by index. + + Args: + idx (int): Index of data. + + Returns: + dict: Annotation info of specified index. 
+ """ + + return self.data_infos[idx]['ann'] + + def get_cat_ids(self, idx): + """Get category ids by index. + + Args: + idx (int): Index of data. + + Returns: + list[int]: All categories in the image of specified index. + """ + + return self.data_infos[idx]['ann']['labels'].astype(np.int).tolist() + + def pre_pipeline(self, results): + """Prepare results dict for pipeline.""" + results['img_prefix'] = self.img_prefix + results['seg_prefix'] = self.seg_prefix + results['proposal_file'] = None + results['bbox_fields'] = [] + results['mask_fields'] = [] + results['seg_fields'] = [] + + def _filter_imgs(self, min_size=32): + """Filter images too small.""" + if self.filter_empty_gt: + warnings.warn( + 'CustomDataset does not support filtering empty gt images.') + valid_inds = [] + for i, img_info in enumerate(self.data_infos): + if min(img_info['width'], img_info['height']) >= min_size: + valid_inds.append(i) + return valid_inds + + def _set_group_flag(self): + """Set flag according to image aspect ratio. + + Images with aspect ratio greater than 1 will be set as group 1, + otherwise group 0. + """ + self.flag = np.zeros(len(self), dtype=np.uint8) + for i in range(len(self)): + img_info = self.data_infos[i] + if img_info['width'] / img_info['height'] > 1: + self.flag[i] = 1 + + def _rand_another(self, idx): + """Get another random index from the same group as the given index.""" + pool = np.where(self.flag == self.flag[idx])[0] + return np.random.choice(pool) + + def __getitem__(self, idx): + """Get training/test data after pipeline. + + Args: + idx (int): Index of data. + + Returns: + dict: Training/test data (with annotation if `test_mode` is set \ + True). + """ + + if self.test_mode: + return self.prepare_test_img(idx) + while True: + data = self.prepare_train_img(idx) + if data is None: + idx = self._rand_another(idx) + continue + return data + + def prepare_train_img(self, idx): + """Get training data and annotations after pipeline. + + Args: + idx (int): Index of data. + + Returns: + dict: Training data and annotation after pipeline with new keys \ + introduced by pipeline. + """ + + img_info = self.data_infos[idx] + ann_info = self.get_ann_info(idx) + results = dict(img_info=img_info, ann_info=ann_info) + if self.proposals is not None: + results['proposals'] = self.proposals[idx] + self.pre_pipeline(results) + return self.pipeline(results) + + def prepare_test_img(self, idx): + """Get testing data after pipeline. + + Args: + idx (int): Index of data. + + Returns: + dict: Testing data after pipeline with new keys introduced by \ + pipeline. + """ + + img_info = self.data_infos[idx] + results = dict(img_info=img_info) + if self.proposals is not None: + results['proposals'] = self.proposals[idx] + self.pre_pipeline(results) + return self.pipeline(results) + + @classmethod + def get_classes(cls, classes=None): + """Get class names of current dataset. + + Args: + classes (Sequence[str] | str | None): If classes is None, use + default CLASSES defined by builtin dataset. If classes is a + string, take it as a file name. The file contains the name of + classes where each line contains one class name. If classes is + a tuple or list, override the CLASSES defined by the dataset. + + Returns: + tuple[str] or list[str]: Names of categories of the dataset. 
+ """ + if classes is None: + return cls.CLASSES + raise NotImplementedError + # if isinstance(classes, str): + # # take it as a file path + # class_names = mmcv.list_from_file(classes) + # elif isinstance(classes, (tuple, list)): + # class_names = classes + # else: + # raise ValueError(f'Unsupported type {type(classes)} of classes.') + # + # return class_names + + def get_cat2imgs(self): + """Get a dict with class as key and img_ids as values, which will be + used in :class:`ClassAwareSampler`. + + Returns: + dict[list]: A dict of per-label image list, + the item of the dict indicates a label index, + corresponds to the image index that contains the label. + """ + if self.CLASSES is None: + raise ValueError('self.CLASSES can not be None') + # sort the label index + cat2imgs = {i: [] for i in range(len(self.CLASSES))} + for i in range(len(self)): + cat_ids = set(self.get_cat_ids(i)) + for cat in cat_ids: + cat2imgs[cat].append(i) + return cat2imgs + + def format_results(self, results, **kwargs): + """Place holder to format result to dataset specific output.""" + + def evaluate(self, *args, **kwargs): + raise NotImplementedError + + def __repr__(self): + """Print the number of instance number.""" + dataset_type = 'Test' if self.test_mode else 'Train' + result = (f'\n{self.__class__.__name__} {dataset_type} dataset ' + f'with number of images {len(self)}, ' + f'and instance counts: \n') + if self.CLASSES is None: + result += 'Category names are not provided. \n' + return result + instance_count = np.zeros(len(self.CLASSES) + 1).astype(int) + # count the instance number in each image + for idx in range(len(self)): + label = self.get_ann_info(idx)['labels'] + unique, counts = np.unique(label, return_counts=True) + if len(unique) > 0: + # add the occurrence number to each class + instance_count[unique] += counts + else: + # background is the last index + instance_count[-1] += 1 + # create a table with category count + table_data = [['category', 'count'] * 5] + row_data = [] + for cls, count in enumerate(instance_count): + if cls < len(self.CLASSES): + row_data += [f'{cls} [{self.CLASSES[cls]}]', f'{count}'] + else: + # add the background number + row_data += ['-1 background', f'{count}'] + if len(row_data) == 10: + table_data.append(row_data) + row_data = [] + if len(row_data) >= 2: + if row_data[-1] == '0': + row_data = row_data[:-2] + if len(row_data) >= 2: + table_data.append([]) + table_data.append(row_data) + + table = AsciiTable(table_data) + result += table.table + return result + diff --git a/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py new file mode 100644 index 000000000..9958c9a5d --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py @@ -0,0 +1,19 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/30 22:27 +# @Author : WeiHua + +from .real_dataset import RealOverlapDataset +from .synth_dataset import SynthOverlapDataset +import mindspore.dataset as de + + +CUSTOM_DATASETS = { + 'RealOverlapDataset': RealOverlapDataset, + 'SynthOverlapDataset': SynthOverlapDataset +} + + +def build_dataset(cfg): + data_type = cfg.pop('type') + return CUSTOM_DATASETS[data_type](**cfg) diff --git a/contrib/Overlap-Recovery/train/src/dataset/data_process.py b/contrib/Overlap-Recovery/train/src/dataset/data_process.py new file mode 100644 index 000000000..3d1737d71 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/dataset/data_process.py @@ -0,0 +1,814 @@ +#!/usr/bin/env python +# 
-*- coding: utf-8 -*- +# @Time : 2022/11/30 22:30 +# @Author : WeiHua + +from os import path as osp +import cv2 +import numpy as np +import warnings +import mmcv +import mindspore as ms +from .utils import BitmapMasks +from collections.abc import Sequence + + +class DataContainer(object): + """A container for any type of objects. + + Typically tensors will be stacked in the collate function and sliced along + some dimension in the scatter function. This behavior has some limitations. + 1. All tensors have to be the same size. + 2. Types are limited (numpy array or Tensor). + + We design `DataContainer` and `MMDataParallel` to overcome these + limitations. The behavior can be either of the following. + + - copy to GPU, pad all tensors to the same size and stack them + - copy to GPU without stacking + - leave the objects as is and pass it to the model + - pad_dims specifies the number of last few dimensions to do padding + """ + + def __init__(self, + data, + stack=False, + padding_value=0, + cpu_only=False, + pad_dims=2): + self._data = data + self._cpu_only = cpu_only + self._stack = stack + self._padding_value = padding_value + assert pad_dims in [None, 1, 2, 3] + self._pad_dims = pad_dims + + def __repr__(self): + return '{}({})'.format(self.__class__.__name__, repr(self.data)) + + @property + def data(self): + return self._data + + @property + def datatype(self): + if isinstance(self.data, ms.Tensor): + return self.data.type() + else: + return type(self.data) + + @property + def cpu_only(self): + return self._cpu_only + + @property + def stack(self): + return self._stack + + @property + def padding_value(self): + return self._padding_value + + @property + def pad_dims(self): + return self._pad_dims + + def size(self, *args, **kwargs): + return self.data.size(*args, **kwargs) + + def dim(self): + return self.data.dim() + + +class LoadImageFromFile: + """Load an image from file.""" + + def __init__(self, + to_float32=False, + color_type='color', + channel_order='bgr'): + self.to_float32 = to_float32 + self.color_type = color_type + self.channel_order = channel_order + + def __call__(self, results): + if results['img_prefix'] is not None: + filename = osp.join(results['img_prefix'], + results['img_info']['filename']) + else: + filename = results['img_info']['filename'] + + img = cv2.imread(filename) + if self.to_float32: + img = img.astype(np.float32) + + results['filename'] = filename + results['ori_filename'] = results['img_info']['filename'] + results['img'] = img + results['img_shape'] = img.shape + results['ori_shape'] = img.shape + results['img_fields'] = ['img'] + return results + + def __repr__(self): + repr_str = (f'{self.__class__.__name__}(' + f'to_float32={self.to_float32}, ' + f"color_type='{self.color_type}', " + f"channel_order='{self.channel_order}' ") + return repr_str + + +class CustomLoadAnnotations: + """Customized load multiple types of annotations.""" + + def __init__(self, + with_bbox=True, + with_label=True, + with_mask=False): + self.with_bbox = with_bbox + self.with_label = with_label + self.with_mask = with_mask + + def _load_bboxes(self, results): + ann_info = results['ann_info'] + results['gt_bboxes'] = ann_info['bboxes'].copy() + + gt_bboxes_ignore = ann_info.get('bboxes_ignore', None) + if gt_bboxes_ignore is not None: + results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy() + results['bbox_fields'].append('gt_bboxes_ignore') + results['bbox_fields'].append('gt_bboxes') + + gt_is_group_ofs = ann_info.get('gt_is_group_ofs', None) + if gt_is_group_ofs is not 
None: + results['gt_is_group_ofs'] = gt_is_group_ofs.copy() + + return results + + def _load_labels(self, results): + results['gt_labels'] = results['ann_info']['labels'].copy() + results['text_labels'] = results['ann_info']['text_labels'].copy() + return results + + def _load_masks(self, results): + h, w = results['img_info']['height'], results['img_info']['width'] + gt_masks = [cv2.imread(_, cv2.IMREAD_UNCHANGED) for _ in results['ann_info']['masks']] + gt_masks = [mask // 255 for mask in gt_masks] + gt_masks = BitmapMasks(gt_masks, h, w) + results['gt_masks'] = gt_masks + results['mask_fields'].append('gt_masks') + return results + + def __call__(self, results): + if self.with_bbox: + results = self._load_bboxes(results) + if results is None: + return None + if self.with_label: + results = self._load_labels(results) + if self.with_mask: + results = self._load_masks(results) + return results + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(with_bbox={self.with_bbox}, ' + repr_str += f'with_label={self.with_label}, ' + repr_str += f'with_mask={self.with_mask}, ' + return repr_str + + +class Resize: + """Resize images & bbox & mask.""" + + def __init__(self, + img_scale, + multiscale_mode='range', + keep_ratio=True, + bbox_clip_border=True, + interpolation='bilinear', + override=False): + if isinstance(img_scale, list): + self.img_scale = img_scale + else: + self.img_scale = [img_scale] + + assert multiscale_mode in ['value', 'range'] + + self.multiscale_mode = multiscale_mode + self.keep_ratio = keep_ratio + self.interpolation = interpolation + self.override = override + self.bbox_clip_border = bbox_clip_border + + def _random_scale(self, results): + if len(self.img_scale) == 1: + scale, scale_idx = self.img_scale[0], 0 + else: + raise NotImplementedError + results['scale'] = scale + results['scale_idx'] = scale_idx + + def _resize_img(self, results): + """Resize images with ``results['scale']``.""" + for key in results.get('img_fields', ['img']): + if self.keep_ratio: + img, scale_factor = mmcv.imrescale( + results[key], + results['scale'], + return_scale=True, + interpolation=self.interpolation) + # the w_scale and h_scale has minor difference + # a real fix should be done in the mmcv.imrescale in the future + new_h, new_w = img.shape[:2] + h, w = results[key].shape[:2] + w_scale = new_w / w + h_scale = new_h / h + else: + img, w_scale, h_scale = mmcv.imresize( + results[key], + results['scale'], + return_scale=True, + interpolation=self.interpolation) + results[key] = img + + scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], + dtype=np.float32) + results['img_shape'] = img.shape + # in case that there is no padding + results['pad_shape'] = img.shape + results['scale_factor'] = scale_factor + results['keep_ratio'] = self.keep_ratio + + def _resize_bboxes(self, results): + """Resize bounding boxes with ``results['scale_factor']``.""" + for key in results.get('bbox_fields', []): + bboxes = results[key] * results['scale_factor'] + if self.bbox_clip_border: + img_shape = results['img_shape'] + bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) + bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) + results[key] = bboxes + + def _resize_masks(self, results): + """Resize masks with ``results['scale']``""" + for key in results.get('mask_fields', []): + if results[key] is None: + continue + if self.keep_ratio: + results[key] = results[key].rescale(results['scale']) + else: + results[key] = results[key].resize(results['img_shape'][:2]) + 
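+    # Added illustrative note (not from the original source): with
+    # keep_ratio=True, `scale_factor` is stored as the 4-vector
+    # [w_scale, h_scale, w_scale, h_scale] so it can be multiplied
+    # directly onto (x1, y1, x2, y2) boxes in _resize_bboxes, e.g.:
+    #
+    #   box = np.array([[10., 20., 100., 200.]], dtype=np.float32)
+    #   box * np.array([0.5, 0.5, 0.5, 0.5], dtype=np.float32)
+    #   # -> [[ 5. 10. 50. 100.]]
+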
+ def __call__(self, results): + if 'scale' not in results: + if 'scale_factor' in results: + img_shape = results['img'].shape[:2] + scale_factor = results['scale_factor'] + assert isinstance(scale_factor, float) + results['scale'] = tuple( + [int(x * scale_factor) for x in img_shape][::-1]) + else: + self._random_scale(results) + else: + if not self.override: + assert 'scale_factor' not in results, ( + 'scale and scale_factor cannot be both set.') + else: + results.pop('scale') + if 'scale_factor' in results: + results.pop('scale_factor') + self._random_scale(results) + + self._resize_img(results) + self._resize_bboxes(results) + self._resize_masks(results) + if len(results.get('seg_fields', [])) > 0: + raise NotImplementedError + # self._resize_seg(results) + return results + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(img_scale={self.img_scale}, ' + repr_str += f'multiscale_mode={self.multiscale_mode}, ' + repr_str += f'keep_ratio={self.keep_ratio}, ' + repr_str += f'bbox_clip_border={self.bbox_clip_border})' + return repr_str + + +class RandomFlip: + """Flip the image & bbox & mask.""" + + def __init__(self, flip_ratio=None, direction='horizontal'): + if isinstance(flip_ratio, list): + assert mmcv.is_list_of(flip_ratio, float) + assert 0 <= sum(flip_ratio) <= 1 + elif isinstance(flip_ratio, float): + assert 0 <= flip_ratio <= 1 + elif flip_ratio is None: + pass + else: + raise ValueError('flip_ratios must be None, float, ' + 'or list of float') + self.flip_ratio = flip_ratio + + valid_directions = ['horizontal', 'vertical', 'diagonal'] + if isinstance(direction, str): + assert direction in valid_directions + elif isinstance(direction, list): + assert mmcv.is_list_of(direction, str) + assert set(direction).issubset(set(valid_directions)) + else: + raise ValueError('direction must be either str or list of str') + self.direction = direction + + if isinstance(flip_ratio, list): + assert len(self.flip_ratio) == len(self.direction) + + def bbox_flip(self, bboxes, img_shape, direction): + assert bboxes.shape[-1] % 4 == 0 + flipped = bboxes.copy() + if direction == 'horizontal': + w = img_shape[1] + flipped[..., 0::4] = w - bboxes[..., 2::4] + flipped[..., 2::4] = w - bboxes[..., 0::4] + elif direction == 'vertical': + h = img_shape[0] + flipped[..., 1::4] = h - bboxes[..., 3::4] + flipped[..., 3::4] = h - bboxes[..., 1::4] + elif direction == 'diagonal': + w = img_shape[1] + h = img_shape[0] + flipped[..., 0::4] = w - bboxes[..., 2::4] + flipped[..., 1::4] = h - bboxes[..., 3::4] + flipped[..., 2::4] = w - bboxes[..., 0::4] + flipped[..., 3::4] = h - bboxes[..., 1::4] + else: + raise ValueError(f"Invalid flipping direction '{direction}'") + return flipped + + def __call__(self, results): + if 'flip' not in results: + if isinstance(self.direction, list): + # None means non-flip + direction_list = self.direction + [None] + else: + # None means non-flip + direction_list = [self.direction, None] + + if isinstance(self.flip_ratio, list): + non_flip_ratio = 1 - sum(self.flip_ratio) + flip_ratio_list = self.flip_ratio + [non_flip_ratio] + else: + non_flip_ratio = 1 - self.flip_ratio + # exclude non-flip + single_ratio = self.flip_ratio / (len(direction_list) - 1) + flip_ratio_list = [single_ratio] * (len(direction_list) - + 1) + [non_flip_ratio] + + cur_dir = np.random.choice(direction_list, p=flip_ratio_list) + + results['flip'] = cur_dir is not None + if 'flip_direction' not in results: + results['flip_direction'] = cur_dir + if results['flip']: + # flip image + for 
key in results.get('img_fields', ['img']): + results[key] = mmcv.imflip( + results[key], direction=results['flip_direction']) + # flip bboxes + for key in results.get('bbox_fields', []): + results[key] = self.bbox_flip(results[key], + results['img_shape'], + results['flip_direction']) + # flip masks + for key in results.get('mask_fields', []): + results[key] = results[key].flip(results['flip_direction']) + + # flip segs + for key in results.get('seg_fields', []): + results[key] = mmcv.imflip( + results[key], direction=results['flip_direction']) + return results + + def __repr__(self): + return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' + + +class Normalize: + """Normalize the image.""" + + def __init__(self, mean, std, to_rgb=True): + self.mean = np.array(mean, dtype=np.float32) + self.std = np.array(std, dtype=np.float32) + self.to_rgb = to_rgb + + def __call__(self, results): + for key in results.get('img_fields', ['img']): + results[key] = mmcv.imnormalize(results[key], self.mean, self.std, + self.to_rgb) + results['img_norm_cfg'] = dict( + mean=self.mean, std=self.std, to_rgb=self.to_rgb) + return results + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})' + return repr_str + + +class Pad: + """Pad the image & masks & segmentation map.""" + + def __init__(self, + size=None, + size_divisor=None, + pad_to_square=False, + pad_val=dict(img=0, masks=0, seg=255), + pad_ins_num=4, + eval_model=False): + self.size = size + self.size_divisor = size_divisor + if isinstance(pad_val, float) or isinstance(pad_val, int): + warnings.warn( + 'pad_val of float type is deprecated now, ' + f'please use pad_val=dict(img={pad_val}, ' + f'masks={pad_val}, seg=255) instead.', DeprecationWarning) + pad_val = dict(img=pad_val, masks=pad_val, seg=255) + assert isinstance(pad_val, dict) + self.pad_val = pad_val + self.pad_to_square = pad_to_square + self.pad_ins_num = pad_ins_num + self.eval_model = eval_model + + if pad_to_square: + assert size is None and size_divisor is None, \ + 'The size and size_divisor must be None ' \ + 'when pad2square is True' + else: + assert size is not None or size_divisor is not None, \ + 'only one of size and size_divisor should be valid' + assert size is None or size_divisor is None + + def _pad_img(self, results): + """Pad images according to ``self.size``.""" + pad_val = self.pad_val.get('img', 0) + for key in results.get('img_fields', ['img']): + if self.pad_to_square: + max_size = max(results[key].shape[:2]) + self.size = (max_size, max_size) + if self.size is not None: + padded_img = mmcv.impad( + results[key], shape=self.size, pad_val=pad_val) + elif self.size_divisor is not None: + padded_img = mmcv.impad_to_multiple( + results[key], self.size_divisor, pad_val=pad_val) + results[key] = padded_img + results['pad_shape'] = padded_img.shape + results['pad_fixed_size'] = self.size + results['pad_size_divisor'] = self.size_divisor + + def _pad_masks(self, results): + """Pad masks according to ``results['pad_shape']``.""" + pad_shape = results['pad_shape'][:2] + pad_val = self.pad_val.get('masks', 0) + for key in results.get('mask_fields', []): + results[key] = results[key].pad(pad_shape, pad_val=pad_val) + + def _pad_seg(self, results): + """Pad semantic segmentation map according to + ``results['pad_shape']``.""" + pad_val = self.pad_val.get('seg', 255) + for key in results.get('seg_fields', []): + results[key] = mmcv.impad( + results[key], shape=results['pad_shape'][:2], 
pad_val=pad_val)
+
+    def __call__(self, results):
+        self._pad_img(results)
+        if self.eval_model:
+            return results
+        self._pad_masks(results)
+        self._pad_seg(results)
+        # padding instance number to predefined
+        to_pad = self.pad_ins_num - results['gt_bboxes'].shape[0]
+        if to_pad > 0:
+            results['gt_bboxes'] = np.concatenate([results['gt_bboxes'],
+                                                   np.zeros((to_pad, 4), dtype=np.float32)],
+                                                  axis=0)
+            # np.long is deprecated in NumPy >= 1.20; use an explicit dtype
+            results['gt_labels'] = np.concatenate([results['gt_labels'],
+                                                   -np.ones((to_pad,), dtype=np.int64)])
+            gt_masks = results['gt_masks'].masks
+            h, w = gt_masks.shape[1:]
+            gt_masks = np.concatenate([gt_masks,
+                                       np.zeros((to_pad, h, w), dtype=gt_masks.dtype)],
+                                      axis=0)
+            results['gt_masks'] = BitmapMasks(gt_masks, h, w)
+
+        return results
+
+    def __repr__(self):
+        repr_str = self.__class__.__name__
+        repr_str += f'(size={self.size}, '
+        repr_str += f'size_divisor={self.size_divisor}, '
+        repr_str += f'pad_to_square={self.pad_to_square}, '
+        repr_str += f'pad_val={self.pad_val})'
+        return repr_str
+
+
+def to_tensor(data):
+    """Convert objects of various python types to :obj:`mindspore.Tensor`."""
+
+    if isinstance(data, ms.Tensor):
+        return data
+    elif isinstance(data, np.ndarray):
+        return ms.Tensor.from_numpy(data)
+    elif isinstance(data, Sequence) and not mmcv.is_str(data):
+        return ms.Tensor(data)
+    elif isinstance(data, int):
+        return ms.Tensor([data], dtype=ms.int64)
+    elif isinstance(data, float):
+        return ms.Tensor([data], dtype=ms.float32)
+    else:
+        raise TypeError(f'type {type(data)} cannot be converted to tensor.')
+
+
+class DefaultFormatBundle:
+    """Default formatting bundle."""
+
+    def __init__(self,
+                 img_to_float=True,
+                 pad_val=dict(img=0, masks=0, seg=255)):
+        self.img_to_float = img_to_float
+        self.pad_val = pad_val
+
+    def __call__(self, results):
+        if 'img' in results:
+            img = results['img']
+            if self.img_to_float is True and img.dtype == np.uint8:
+                # Normally, image is of uint8 type without normalization.
+                # At this time, it needs to be forced to be converted to
+                # float32, otherwise the model training and inference
+                # will be wrong. Only used for YOLOX currently.
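+                # (Added note, not in the original: LoadImageFromFile reads
+                # images with cv2.imread, which yields uint8 HWC arrays, so
+                # this cast makes the CHW tensor built later in this method
+                # float32.)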
+ img = img.astype(np.float32) + # add default meta keys + results = self._add_default_meta_keys(results) + if len(img.shape) < 3: + img = np.expand_dims(img, -1) + img = np.ascontiguousarray(img.transpose(2, 0, 1)) + results['img'] = DataContainer( + to_tensor(img), padding_value=self.pad_val['img'], stack=True) + for key in ['proposals', 'gt_bboxes', 'gt_bboxes_ignore', 'gt_labels']: + if key not in results: + continue + results[key] = DataContainer(to_tensor(results[key])) + if 'gt_masks' in results: + results['gt_masks'] = DataContainer( + results['gt_masks'], + padding_value=self.pad_val['masks'], + cpu_only=True) + return results + + def _add_default_meta_keys(self, results): + img = results['img'] + results.setdefault('pad_shape', img.shape) + results.setdefault('scale_factor', 1.0) + num_channels = 1 if len(img.shape) < 3 else img.shape[2] + results.setdefault( + 'img_norm_cfg', + dict( + mean=np.zeros(num_channels, dtype=np.float32), + std=np.ones(num_channels, dtype=np.float32), + to_rgb=False)) + return results + + def __repr__(self): + return self.__class__.__name__ + \ + f'(img_to_float={self.img_to_float})' + + +class Collect: + """Collect data from the loader relevant to the specific task.""" + + def __init__(self, + keys, + meta_keys=('filename', 'ori_filename', 'ori_shape', + 'img_shape', 'pad_shape', 'scale_factor', 'flip', + 'flip_direction', 'img_norm_cfg'), + eval_mode=False): + self.keys = keys + self.meta_keys = meta_keys + self.eval_mode = eval_mode + + def __call__(self, results): + data = {} + img_meta = {} + out_data = [] + for key in self.meta_keys: + img_meta[key] = results[key] + data['img_metas'] = DataContainer(img_meta, cpu_only=True) + for key in self.keys: + data[key] = results[key] + # return data + for key in self.keys: + if self.eval_mode: + out_data.append(results[key]) + continue + if key == 'gt_masks': + out_data.append(results[key].data.masks) + else: + out_data.append(results[key].data.asnumpy()) + flip_map = { + 'horizontal': 0, + 'vertical': 1, + 'diagonal': 2 + } + for key in self.meta_keys: + if key == 'flip_direction': + if isinstance(results[key], type(None)): + out_data.append(-1) + else: + out_data.append(flip_map[results[key]]) + else: + out_data.append(results[key]) + return tuple(out_data) + + def __repr__(self): + return self.__class__.__name__ + \ + f'(keys={self.keys}, meta_keys={self.meta_keys})' + + +class MultiScaleFlipAug: + """Test-time augmentation with multiple scales and flipping. + + An example configuration is as followed: + + .. code-block:: + + img_scale=[(1333, 400), (1333, 800)], + flip=True, + transforms=[ + dict(type='Resize', keep_ratio=True), + dict(type='RandomFlip'), + dict(type='Normalize', **img_norm_cfg), + dict(type='Pad', size_divisor=32), + dict(type='ImageToTensor', keys=['img']), + dict(type='Collect', keys=['img']), + ] + + After MultiScaleFLipAug with above configuration, the results are wrapped + into lists of the same length as followed: + + .. code-block:: + + dict( + img=[...], + img_shape=[...], + scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] + flip=[False, True, False, True] + ... + ) + + Args: + transforms (list[dict]): Transforms to apply in each augmentation. + img_scale (tuple | list[tuple] | None): Images scales for resizing. + scale_factor (float | list[float] | None): Scale factors for resizing. + flip (bool): Whether apply flip augmentation. Default: False. + flip_direction (str | list[str]): Flip augmentation directions, + options are "horizontal", "vertical" and "diagonal". 
If + flip_direction is a list, multiple flip augmentations will be + applied. It has no effect when flip == False. Default: + "horizontal". + """ + + def __init__(self, + transforms, + img_scale=None, + scale_factor=None, + flip=False, + flip_direction='horizontal'): + self.transforms = PipelineFunc(transforms) + assert (img_scale is None) ^ (scale_factor is None), ( + 'Must have but only one variable can be set') + if img_scale is not None: + self.img_scale = img_scale if isinstance(img_scale, + list) else [img_scale] + self.scale_key = 'scale' + assert mmcv.is_list_of(self.img_scale, tuple) + else: + self.img_scale = scale_factor if isinstance( + scale_factor, list) else [scale_factor] + self.scale_key = 'scale_factor' + + self.flip = flip + self.flip_direction = flip_direction if isinstance( + flip_direction, list) else [flip_direction] + assert mmcv.is_list_of(self.flip_direction, str) + if not self.flip and self.flip_direction != ['horizontal']: + warnings.warn( + 'flip_direction has no effect when flip is set to False') + if (self.flip + and not any([t['type'] == 'RandomFlip' for t in transforms])): + warnings.warn( + 'flip has no effect when RandomFlip is not in transforms') + + def __call__(self, results): + """Call function to apply test time augment transforms on results. + + Args: + results (dict): Result dict contains the data to transform. + + Returns: + dict[str: list]: The augmented data, where each value is wrapped + into a list. + """ + + aug_data = [] + flip_args = [(False, None)] + if self.flip: + flip_args += [(True, direction) + for direction in self.flip_direction] + for scale in self.img_scale: + for flip, direction in flip_args: + _results = results.copy() + _results[self.scale_key] = scale + _results['flip'] = flip + _results['flip_direction'] = direction + data = self.transforms(_results) + aug_data.append(data) + # list of dict to dict of list + aug_data_dict = {key: [] for key in aug_data[0]} + for data in aug_data: + for key, val in data.items(): + aug_data_dict[key].append(val) + return aug_data_dict + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(transforms={self.transforms}, ' + repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' + repr_str += f'flip_direction={self.flip_direction})' + return repr_str + + +class ImageToTensor: + """Convert image to :obj:`torch.Tensor` by given keys. + + The dimension order of input image is (H, W, C). The pipeline will convert + it to (C, H, W). If only 2 dimension (H, W) is given, the output would be + (1, H, W). + + Args: + keys (Sequence[str]): Key of images to be converted to Tensor. + """ + + def __init__(self, keys): + self.keys = keys + + def __call__(self, results): + """Call function to convert image in results to :obj:`torch.Tensor` and + transpose the channel order. + + Args: + results (dict): Result dict contains the image data to convert. + + Returns: + dict: The result dict contains the image converted + to :obj:`torch.Tensor` and transposed to (C, H, W) order. 
+        """
+        for key in self.keys:
+            img = results[key]
+            if len(img.shape) < 3:
+                img = np.expand_dims(img, -1)
+            results[key] = to_tensor(img.transpose(2, 0, 1))
+        return results
+
+    def __repr__(self):
+        return self.__class__.__name__ + f'(keys={self.keys})'
+
+
+CUSTOM_PIPELINES = {
+    'LoadImageFromFile': LoadImageFromFile,
+    'CustomLoadAnnotations': CustomLoadAnnotations,
+    'Resize': Resize,
+    'RandomFlip': RandomFlip,
+    'Normalize': Normalize,
+    'Pad': Pad,
+    'DefaultFormatBundle': DefaultFormatBundle,
+    'Collect': Collect,
+    'MultiScaleFlipAug': MultiScaleFlipAug,
+    'ImageToTensor': ImageToTensor
+}
+
+
+class PipelineFunc:
+    """Compose the transforms described by a list of pipeline configs."""
+
+    def __init__(self, pipelines):
+        self.pipelines = []
+        for pipe in pipelines:
+            # each config dict is consumed here: 'type' selects the
+            # transform class, the remaining keys become its kwargs
+            pipe_type = pipe.pop('type')
+            self.pipelines.append(CUSTOM_PIPELINES[pipe_type](**pipe))
+
+    def __call__(self, results):
+        for pipe in self.pipelines:
+            results = pipe(results)
+        return results
diff --git a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py
new file mode 100644
index 000000000..6d8e0c794
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py
@@ -0,0 +1,298 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/11/30 14:25
+# @Author : WeiHua
+
+import json
+import os
+import os.path as osp
+from tqdm import tqdm
+
+import cv2
+import numpy as np
+import imagesize
+
+from .base_dataset import CustomDataset
+from .utils import cal_mask_IoU, cal_overlap_mask, cal_union_mask
+
+
+class RealOverlapDataset(CustomDataset):
+    """Custom real-world overlap dataset for text de-occlusion."""
+    CLASSES = ('text', )
+
+    def __init__(self, score_thresh=0.5, iou_thresh=0.5, res_flags=None, **kwargs):
+        self.score_thresh = score_thresh
+        self.iou_thresh = iou_thresh
+        self.res_flags = res_flags
+        super(RealOverlapDataset, self).__init__(**kwargs)
+
+    def load_annotations(self, ann_file):
+        """Load annotations of the real overlap dataset (a JSON list)."""
+        data_list = []
+        img_dir = self.img_prefix
+        seg_dir = self.seg_prefix
+        if osp.isfile(ann_file):
+            with open(ann_file, 'r', encoding='utf-8') as f:
+                info_list = json.load(f)
+            for info_ in info_list:
+                assert len(info_) == 3, f"Invalid annotation entry: {info_}"
+                img_name = info_['img_name']
+                data_info = dict(img_path=osp.join(img_dir, img_name))
+                data_info['data_type'] = info_['data_type']
+                data_info['filename'] = img_name
+                width, height = imagesize.get(data_info['img_path'])
+                data_info['width'] = width
+                data_info['height'] = height
+                seg_map_path = []
+                text_labels = []
+                bboxes = []
+                # should follow a pre-defined order, e.g., from top layer to bottom
+                for text_ins in info_['texts']:
+                    x, y, w, h = text_ins['bbox']
+                    bbox = [x, y, x + w, y + h]
+                    bboxes.append(bbox)
+                    seg_map_path.append(osp.join(seg_dir, text_ins['mask']))
+                    text_labels.append(text_ins['label'])
+                data_info['bboxes'] = bboxes
+                data_info['seg_map_path'] = seg_map_path
+                data_info['text_labels'] = text_labels
+                data_list.append(data_info)
+        else:
+            raise NotImplementedError
+        return data_list
+
+    def _filter_imgs(self, min_size=32):
+        """Filter out images that are too small or have no ground truth."""
+        valid_inds = []
+        for i, img_info in enumerate(self.data_infos):
+            if self.filter_empty_gt and (
+                    len(img_info['seg_map_path']) == 0
+                    or len(img_info['text_labels']) == 0):
+                continue
+            if min(img_info['width'], img_info['height']) >= min_size:
+                valid_inds.append(i)
+        return valid_inds
+
+    def get_ann_info(self, idx):
+        data_info = self.data_infos[idx]
+        # TODO: the ignore flag is not supported for now
+        ann = dict(
+            bboxes=np.array(data_info['bboxes'], dtype=np.float32),
+            labels=np.zeros(len(data_info['bboxes']), dtype=np.int64),
+            text_labels=data_info['text_labels'],
+            bboxes_ignore=np.zeros((0, 4), dtype=np.float32),
+            masks=data_info['seg_map_path'],
+            seg_map=data_info['seg_map_path']
+        )
+        return ann
+
+    def vis_result(self, img_idx, scores, masks, vis_dir='/home/whua/vis'):
+        os.makedirs(vis_dir, exist_ok=True)
+        if len(scores) == 0:
+            return
+        valid_idx = []
+        for idx, score in enumerate(scores):
+            if score > self.score_thresh:
+                valid_idx.append(idx)
+        if len(valid_idx) > 0:
+            img = cv2.imread(self.data_infos[img_idx]['img_path'])
+            img_name = self.data_infos[img_idx]['img_path'].split('/')[-1].split('.')[0]
+            cv2.imwrite(os.path.join(vis_dir, f"{img_name}.jpg"), img)
+            for idx, ins_idx in enumerate(valid_idx):
+                save_name = f"{img_name}_{idx}.jpg"
+                canvas = np.zeros_like(img)
+                canvas[masks[ins_idx]] = img[masks[ins_idx]]
+                cv2.imwrite(os.path.join(vis_dir, save_name), canvas)
+
+    def eval_func(self, idx, box_scores, masks):
+        # prepare ground truth: binary masks loaded from disk
+        gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in self.data_infos[idx]['seg_map_path']]
+        gt_text = cal_union_mask(gt_masks)
+        gt_overlap = cal_overlap_mask(gt_masks)
+        # prepare predictions of the overlap and text areas
+        box_info = box_scores[0]
+        if len(box_info) < 2:
+            raise RuntimeError(
+                f'Expected at least 2 predicted instances, got {len(box_info)}')
+        # keep only predictions whose score exceeds the threshold
+        valid_idx = []
+        for ins_idx, box_ in enumerate(box_info):
+            if box_[-1] > self.score_thresh:
+                valid_idx.append(ins_idx)
+        pred_masks = [masks[0][i] for i in valid_idx]
+        if len(pred_masks) == 0:
+            pred_overlap = np.zeros_like(masks[0][0])
+            pred_text = np.zeros_like(masks[0][0])
+        elif len(pred_masks) == 1:
+            pred_overlap = np.zeros_like(masks[0][0])
+            pred_text = cal_union_mask(pred_masks)
+        else:
+            pred_overlap = cal_overlap_mask(pred_masks)
+            pred_text = cal_union_mask(pred_masks)
+        if len(gt_masks) > 1:
+            # calculate area-level metrics
+            intersection_text = (pred_text & gt_text).sum()
+            union_text = (pred_text | gt_text).sum()
+            intersection_overlap = (pred_overlap & gt_overlap).sum()
+            union_overlap = (pred_overlap | gt_overlap).sum()
+        else:
+            intersection_text = 0
+            union_text = 0
+            intersection_overlap = 0
+            union_overlap = 0
+
+        # self.vis_result(idx, box_info[:, 4].tolist(), masks[0])
+
+        # match each valid prediction to at most one ground-truth mask
+        match_matrix = np.zeros((len(valid_idx), len(gt_masks)), dtype=bool)
+        for ins_idx in range(len(valid_idx)):
+            for gt_ins_idx in range(len(gt_masks)):
+                if match_matrix[:, gt_ins_idx].sum() > 0:
+                    continue
+                # calculate IoU
+                if cal_mask_IoU(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > self.iou_thresh:
+                    match_matrix[ins_idx, gt_ins_idx] = True
+                    break
+        # calculate instance-wise mIoU
+        text_ins_miou = 0
+        if match_matrix.sum() > 0:
+            for ins_idx in range(max(match_matrix.shape)):
+                if ins_idx >= match_matrix.shape[0]:
+                    # missed detection
+                    continue
+                if ins_idx >= match_matrix.shape[1] or match_matrix[ins_idx].sum() == 0:
+                    # wrong detection
+                    continue
+                pred_mask = masks[0][valid_idx[ins_idx]].astype(bool)
+                gt_idx = match_matrix[ins_idx].nonzero()[0][0]
+                gt_mask = gt_masks[gt_idx].copy()
+                cur_iou = cal_mask_IoU(pred_mask, gt_mask)
+                text_ins_miou += cur_iou
+        return (intersection_text, union_text,
intersection_overlap, union_overlap), \ + text_ins_miou, max(match_matrix.shape) + + def evaluate(self, + results, + metric='segm', + logger=None, + jsonfile_prefix=None, + classwise=False, + proposal_nums=(100, 300, 1000), + iou_thrs=None, + metric_items=None): + """Evaluation in COCO protocol. + + Args: + results (list[list | tuple]): Testing results of the dataset. + metric (str | list[str]): Metrics to be evaluated. Options are + 'bbox', 'segm', 'proposal', 'proposal_fast'. + logger (logging.Logger | str | None): Logger used for printing + related information during evaluation. Default: None. + jsonfile_prefix (str | None): The prefix of json files. It includes + the file path and the prefix of filename, e.g., "a/b/prefix". + If not specified, a temp file will be created. Default: None. + classwise (bool): Whether to evaluating the AP for each class. + proposal_nums (Sequence[int]): Proposal number used for evaluating + recalls, such as recall@100, recall@1000. + Default: (100, 300, 1000). + iou_thrs (Sequence[float], optional): IoU threshold used for + evaluating recalls/mAPs. If set to a list, the average of all + IoUs will also be computed. If not specified, [0.50, 0.55, + 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. + Default: None. + metric_items (list[str] | str, optional): Metric items that will + be returned. If not specified, ``['AR@100', 'AR@300', + 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be + used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', + 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when + ``metric=='bbox' or metric=='segm'``. + + Returns: + dict[str, float]: COCO style evaluation metric. + """ + metric = metric if isinstance(metric, str) else metric[0] + # allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] + allowed_metrics = ['segm', 'segm_multi', 'segm_with_each'] + if metric not in allowed_metrics: + raise KeyError(f'metric {metric} is not supported') + + assert len(results) == self.__len__() + if metric in ['segm', 'segm_with_each']: + intersection_text = 0 + union_text = 0 + intersection_overlap = 0 + union_overlap = 0 + text_ins_miou_list = [] + total_ins_num = 0 + if metric == 'segm_with_each': + qualifier_list = [] + for idx, (box_scores, masks) in tqdm(enumerate(results)): + # structure: + # box_scores: List[ numpy_array with shape (num_ins, 5=4*coord+1*score) * num_classes ] + # masks: List[ List[ numpy_array_bool with shape (h, w) * num_ins ] * num_classes ] + + overall_iou_metrics, text_ins_miou, ins_num = self.eval_func(idx, box_scores, masks) + intersection_text += overall_iou_metrics[0] + union_text += overall_iou_metrics[1] + intersection_overlap += overall_iou_metrics[2] + union_overlap += overall_iou_metrics[3] + text_ins_miou_list.append(text_ins_miou) + total_ins_num += ins_num + if metric == 'segm_with_each': + # hard-code + if text_ins_miou / ins_num > 0.75: + qualifier_list.append(dict( + img_path=self.data_infos[idx]['img_path'], + score=text_ins_miou / ins_num, + iou=overall_iou_metrics[0] / (overall_iou_metrics[1] + 1e-6) + )) + if metric == 'segm_with_each': + # hard-code + with open('/home/whua/overlap_real_qualifiers.json', 'w', encoding='utf-8') as saver: + json.dump(qualifier_list, saver, ensure_ascii=False) + metric_results = dict( + text_iou=intersection_text / union_text, + overlap_iou=intersection_overlap / union_overlap, + text_ins_miou=np.sum(text_ins_miou_list) / total_ins_num + ) + else: + assert len(self.res_flags) == len(results[0]) + metric_results = dict() + for 
flag_idx, flag in enumerate(self.res_flags):
+                intersection_text = 0
+                union_text = 0
+                intersection_overlap = 0
+                union_overlap = 0
+                text_ins_miou_list = []
+                total_ins_num = 0
+                for idx in tqdm(range(len(results))):
+                    # structure:
+                    # box_scores: List[ numpy_array with shape (num_ins, 5=4*coord+1*score) * num_classes ]
+                    # masks: List[ List[ numpy_array_bool with shape (h, w) * num_ins ] * num_classes ]
+                    box_scores, masks = results[idx][flag_idx]
+                    overall_iou_metrics, text_ins_miou, ins_num = self.eval_func(idx, box_scores, masks)
+                    intersection_text += overall_iou_metrics[0]
+                    union_text += overall_iou_metrics[1]
+                    intersection_overlap += overall_iou_metrics[2]
+                    union_overlap += overall_iou_metrics[3]
+                    text_ins_miou_list.append(text_ins_miou)
+                    total_ins_num += ins_num
+
+                metric_results[flag] = dict(
+                    text_iou=intersection_text / (union_text + 1e-6),
+                    overlap_iou=intersection_overlap / (union_overlap + 1e-6),
+                    text_ins_miou=np.sum(text_ins_miou_list) / total_ins_num
+                )
+
+        return metric_results
+
+
diff --git a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py
new file mode 100644
index 000000000..905edcbf8
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py
@@ -0,0 +1,297 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/11/30 14:25
+# @Author : WeiHua
+
+import json
+import os
+import os.path as osp
+from tqdm import tqdm
+
+import cv2
+import numpy as np
+import imagesize
+
+from .base_dataset import CustomDataset
+from .utils import cal_mask_IoU, cal_overlap_mask, cal_union_mask
+
+
+class SynthOverlapDataset(CustomDataset):
+    """Custom Synthetic Overlap dataset for text de-occlusion."""
+    CLASSES = ('text', )
+
+    def __init__(self, score_thresh=0.5, iou_thresh=0.5, res_flags=None, **kwargs):
+        self.score_thresh = score_thresh
+        self.iou_thresh = iou_thresh
+        self.res_flags = res_flags
+        super(SynthOverlapDataset, self).__init__(**kwargs)
+
+    def load_annotations(self, ann_file):
+        """Load annotations of the Synth Overlap dataset (JSON lines)."""
+        data_list = []
+        img_dir = self.img_prefix
+        seg_dir = self.seg_prefix
+        if osp.isfile(ann_file):
+            with open(ann_file, 'r', encoding='utf-8') as f:
+                lines = f.readlines()
+            for line in lines:
+                info_ = json.loads(line.strip())
+                assert len(info_) == 2, f"Invalid line: {line}"
+                img_name = info_['img_name']
+                data_info = dict(img_path=osp.join(img_dir, img_name))
+                data_info['filename'] = img_name
+                width, height = imagesize.get(data_info['img_path'])
+                data_info['width'] = width
+                data_info['height'] = height
+                seg_map_path = []
+                text_labels = []
+                bboxes = []
+                # should follow a pre-defined order, e.g., from top layer to bottom
+                for text_ins in info_['texts']:
+                    x, y, w, h = text_ins['bbox']
+                    bbox = [x, y, x + w, y + h]
+                    bboxes.append(bbox)
+                    seg_map_path.append(osp.join(seg_dir, text_ins['mask_bin']))
+                    text_labels.append(text_ins['label'])
+                data_info['bboxes'] = bboxes
+                data_info['seg_map_path'] = seg_map_path
+                data_info['text_labels'] = text_labels
+                data_list.append(data_info)
+        else:
+            raise NotImplementedError
+        return data_list
+
+    def _filter_imgs(self, min_size=32):
+        """Filter out images that are too small or have no ground truth."""
+        valid_inds = []
+        for i, img_info in enumerate(self.data_infos):
+            if self.filter_empty_gt and (
+                    len(img_info['seg_map_path']) == 0
+                    or len(img_info['text_labels']) == 0):
+                continue
+            if min(img_info['width'], img_info['height']) >= min_size:
+                valid_inds.append(i)
+        return valid_inds
+
+    def get_ann_info(self, idx):
+        data_info = self.data_infos[idx]
+        # TODO: the ignore flag is not supported for now
+        ann = dict(
+            bboxes=np.array(data_info['bboxes'], dtype=np.float32),
+            labels=np.zeros(len(data_info['bboxes']), dtype=np.int64),
+            text_labels=data_info['text_labels'],
+            bboxes_ignore=np.zeros((0, 4), dtype=np.float32),
+            masks=data_info['seg_map_path'],
+            seg_map=data_info['seg_map_path']
+        )
+        return ann
+
+    def vis_result(self, img_idx, scores, masks, vis_dir='/home/whua/vis'):
+        os.makedirs(vis_dir, exist_ok=True)
+        if len(scores) == 0:
+            return
+        valid_idx = []
+        for idx, score in enumerate(scores):
+            if score > self.score_thresh:
+                valid_idx.append(idx)
+        if len(valid_idx) > 0:
+            img = cv2.imread(self.data_infos[img_idx]['img_path'])
+            img_name = self.data_infos[img_idx]['img_path'].split('/')[-1].split('.')[0]
+            cv2.imwrite(os.path.join(vis_dir, f"{img_name}.jpg"), img)
+            for idx, ins_idx in enumerate(valid_idx):
+                save_name = f"{img_name}_{idx}.jpg"
+                canvas = np.zeros_like(img)
+                canvas[masks[ins_idx]] = img[masks[ins_idx]]
+                cv2.imwrite(os.path.join(vis_dir, save_name), canvas)
+
+    def eval_func(self, idx, box_scores, masks):
+        # prepare ground truth: binary masks loaded from disk
+        gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in self.data_infos[idx]['seg_map_path']]
+        gt_text = cal_union_mask(gt_masks)
+        gt_overlap = cal_overlap_mask(gt_masks)
+        # prepare predictions of the overlap and text areas
+        box_info = box_scores[0]
+        if len(box_info) < 2:
+            raise RuntimeError(
+                f'Expected at least 2 predicted instances, got {len(box_info)}')
+        # keep only predictions whose score exceeds the threshold
+        valid_idx = []
+        for ins_idx, box_ in enumerate(box_info):
+            if box_[-1] > self.score_thresh:
+                valid_idx.append(ins_idx)
+        pred_masks = [masks[0][i] for i in valid_idx]
+        if len(pred_masks) == 0:
+            pred_overlap = np.zeros_like(masks[0][0])
+            pred_text = np.zeros_like(masks[0][0])
+        elif len(pred_masks) == 1:
+            pred_overlap = np.zeros_like(masks[0][0])
+            pred_text = cal_union_mask(pred_masks)
+        else:
+            pred_overlap = cal_overlap_mask(pred_masks)
+            pred_text = cal_union_mask(pred_masks)
+        if len(gt_masks) > 1:
+            # calculate area-level metrics
+            intersection_text = (pred_text & gt_text).sum()
+            union_text = (pred_text | gt_text).sum()
+            intersection_overlap = (pred_overlap & gt_overlap).sum()
+            union_overlap = (pred_overlap | gt_overlap).sum()
+        else:
+            intersection_text = 0
+            union_text = 0
+            intersection_overlap = 0
+            union_overlap = 0
+
+        # self.vis_result(idx, box_info[:, 4].tolist(), masks[0])
+
+        # match each valid prediction to at most one ground-truth mask
+        match_matrix = np.zeros((len(valid_idx), len(gt_masks)), dtype=bool)
+        for ins_idx in range(len(valid_idx)):
+            for gt_ins_idx in range(len(gt_masks)):
+                if match_matrix[:, gt_ins_idx].sum() > 0:
+                    continue
+                # calculate IoU
+                if cal_mask_IoU(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > self.iou_thresh:
+                    match_matrix[ins_idx, gt_ins_idx] = True
+                    break
+        # calculate instance-wise mIoU
+        text_ins_miou = 0
+        if match_matrix.sum() > 0:
+            for ins_idx in range(max(match_matrix.shape)):
+                if ins_idx >= match_matrix.shape[0]:
+                    # missed detection
+                    continue
+                if ins_idx >= match_matrix.shape[1] or match_matrix[ins_idx].sum() == 0:
+                    # wrong detection
+                    continue
+                pred_mask = masks[0][valid_idx[ins_idx]].astype(bool)
+                gt_idx = match_matrix[ins_idx].nonzero()[0][0]
+                gt_mask = gt_masks[gt_idx].copy()
+                cur_iou = cal_mask_IoU(pred_mask, gt_mask)
+                text_ins_miou += cur_iou
+        return (intersection_text, union_text, intersection_overlap, union_overlap), \
+            text_ins_miou, max(match_matrix.shape)
+
+    def evaluate(self,
+                 results,
+                 metric='segm',
+                 logger=None,
+                 jsonfile_prefix=None,
+                 classwise=False,
+                 proposal_nums=(100, 300, 1000),
+                 iou_thrs=None,
+                 metric_items=None):
+        """Evaluation in COCO protocol.
+
+        Args:
+            results (list[list | tuple]): Testing results of the dataset.
+            metric (str | list[str]): Metrics to be evaluated. Options are
+                'bbox', 'segm', 'proposal', 'proposal_fast'.
+            logger (logging.Logger | str | None): Logger used for printing
+                related information during evaluation. Default: None.
+            jsonfile_prefix (str | None): The prefix of json files. It includes
+                the file path and the prefix of filename, e.g., "a/b/prefix".
+                If not specified, a temp file will be created. Default: None.
+            classwise (bool): Whether to evaluate the AP for each class.
+            proposal_nums (Sequence[int]): Proposal number used for evaluating
+                recalls, such as recall@100, recall@1000.
+                Default: (100, 300, 1000).
+            iou_thrs (Sequence[float], optional): IoU threshold used for
+                evaluating recalls/mAPs. If set to a list, the average of all
+                IoUs will also be computed. If not specified, [0.50, 0.55,
+                0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used.
+                Default: None.
+            metric_items (list[str] | str, optional): Metric items that will
+                be returned. If not specified, ``['AR@100', 'AR@300',
+                'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be
+                used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75',
+                'mAP_s', 'mAP_m', 'mAP_l']`` will be used when
+                ``metric=='bbox' or metric=='segm'``.
+
+        Returns:
+            dict[str, float]: COCO style evaluation metric.
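+
+        Note: although the signature above mirrors the COCO-style
+        interface, this dataset only accepts 'segm', 'segm_multi' and
+        'segm_with_each' (see ``allowed_metrics`` in the body below).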
+        """
+        metric = metric if isinstance(metric, str) else metric[0]
+        # only segmentation-style metrics are supported here
+        allowed_metrics = ['segm', 'segm_multi', 'segm_with_each']
+        if metric not in allowed_metrics:
+            raise KeyError(f'metric {metric} is not supported')
+
+        assert len(results) == self.__len__()
+        if metric in ['segm', 'segm_with_each']:
+            intersection_text = 0
+            union_text = 0
+            intersection_overlap = 0
+            union_overlap = 0
+            text_ins_miou_list = []
+            total_ins_num = 0
+            if metric == 'segm_with_each':
+                qualifier_list = []
+            for idx, (box_scores, masks) in tqdm(enumerate(results)):
+                # structure:
+                # box_scores: List[ numpy_array with shape (num_ins, 5=4*coord+1*score) * num_classes ]
+                # masks: List[ List[ numpy_array_bool with shape (h, w) * num_ins ] * num_classes ]
+
+                overall_iou_metrics, text_ins_miou, ins_num = self.eval_func(idx, box_scores, masks)
+                intersection_text += overall_iou_metrics[0]
+                union_text += overall_iou_metrics[1]
+                intersection_overlap += overall_iou_metrics[2]
+                union_overlap += overall_iou_metrics[3]
+                text_ins_miou_list.append(text_ins_miou)
+                total_ins_num += ins_num
+                if metric == 'segm_with_each':
+                    # hard-coded qualifier threshold
+                    if text_ins_miou / ins_num > 0.8:
+                        qualifier_list.append(dict(
+                            img_path=self.data_infos[idx]['img_path'],
+                            score=text_ins_miou / ins_num,
+                            iou=overall_iou_metrics[0] / (overall_iou_metrics[1] + 1e-6)
+                        ))
+            if metric == 'segm_with_each':
+                # hard-coded output path
+                with open('/home/whua/overlap_qualifiers.json', 'w', encoding='utf-8') as saver:
+                    json.dump(qualifier_list, saver, ensure_ascii=False)
+            metric_results = dict(
+                text_iou=intersection_text / union_text,
+                overlap_iou=intersection_overlap / union_overlap,
+                text_ins_miou=np.sum(text_ins_miou_list) / total_ins_num
+            )
+        else:
+            assert len(self.res_flags) == len(results[0])
+            metric_results = dict()
+            for flag_idx, flag in enumerate(self.res_flags):
+                intersection_text = 0
+                union_text = 0
+                intersection_overlap = 0
+                union_overlap = 0
+                text_ins_miou_list = []
+                total_ins_num = 0
+                for idx in tqdm(range(len(results))):
+                    # structure:
+                    # box_scores: List[ numpy_array with shape (num_ins, 5=4*coord+1*score) * num_classes ]
+                    # masks: List[ List[ numpy_array_bool with shape (h, w) * num_ins ] * num_classes ]
+                    box_scores, masks = results[idx][flag_idx]
+                    overall_iou_metrics, text_ins_miou, ins_num = self.eval_func(idx, box_scores, masks)
+                    intersection_text += overall_iou_metrics[0]
+                    union_text += overall_iou_metrics[1]
+                    intersection_overlap += overall_iou_metrics[2]
+                    union_overlap += overall_iou_metrics[3]
+                    text_ins_miou_list.append(text_ins_miou)
+                    total_ins_num += ins_num
+
+                metric_results[flag] = dict(
+                    text_iou=intersection_text / (union_text + 1e-6),
+                    overlap_iou=intersection_overlap / (union_overlap + 1e-6),
+                    text_ins_miou=np.sum(text_ins_miou_list) / total_ins_num
+                )
+
+        return metric_results
+
+
diff --git a/contrib/Overlap-Recovery/train/src/dataset/utils.py b/contrib/Overlap-Recovery/train/src/dataset/utils.py
new file mode 100644
index 000000000..2acbdeefd
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/dataset/utils.py
@@ -0,0 +1,349 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/10/25 23:53
+# @Author : WeiHua
+
+import numpy as np
+import mmcv
+import mindspore as ms
+# torch and mmcv's roi_align are required by BitmapMasks.crop_and_resize below
+import torch
+from mmcv.ops import roi_align
+
+
+def cal_mask_IoU(mask_a, mask_b, check_valid=False):
+    if check_valid:
+        assert len(np.unique(mask_a)) <= 2
+        assert len(np.unique(mask_b)) <= 2
+    a_bool = mask_a.astype(bool)
+    b_bool = mask_b.astype(bool)
+    intersection_area = (a_bool & b_bool).sum()
+    union_area = (a_bool | b_bool).sum()
+    if union_area == 0:
+        return 0
+    return intersection_area / union_area
+
+
+def cal_overlap_mask(mask_list):
+    if len(mask_list) < 2:
+        return None
+    mask_list_bool = [x.astype(bool) for x in mask_list]
+    overlap_mask = np.zeros_like(mask_list_bool[0])
+    for ii in range(len(mask_list_bool) - 1):
+        for jj in range(ii + 1, len(mask_list_bool)):
+            cur_olp = mask_list_bool[ii] & mask_list_bool[jj]
+            overlap_mask = overlap_mask | cur_olp
+    return overlap_mask
+
+
+def cal_union_mask(mask_list):
+    if len(mask_list) < 1:
+        return None
+    mask_list_bool = [x.astype(bool) for x in mask_list]
+    union_mask = np.zeros_like(mask_list_bool[0])
+    for mask_bool in mask_list_bool:
+        union_mask = union_mask | mask_bool
+    return union_mask
+
+
+class BitmapMasks:
+    """This class represents masks in the form of bitmaps.
+
+    Args:
+        masks (ndarray): ndarray of masks in shape (N, H, W), where N is
+            the number of objects.
+        height (int): height of masks
+        width (int): width of masks
+    """
+
+    def __init__(self, masks, height, width):
+        self.height = height
+        self.width = width
+        if isinstance(masks, ms.Tensor):
+            len_mask = masks.shape[0]
+        else:
+            len_mask = len(masks)
+        if len_mask == 0:
+            self.masks = np.empty((0, self.height, self.width), dtype=np.uint8)
+        else:
+            if isinstance(masks, ms.Tensor):
+                self.masks = masks.asnumpy()
+            else:
+                assert isinstance(masks, (list, np.ndarray))
+                if isinstance(masks, list):
+                    assert isinstance(masks[0], np.ndarray)
+                    assert masks[0].ndim == 2  # (H, W)
+                else:
+                    assert masks.ndim == 3  # (N, H, W)
+
+                self.masks = np.stack(masks).reshape(-1, height, width)
+                assert self.masks.shape[1] == self.height
+                assert self.masks.shape[2] == self.width
+
+    def __getitem__(self, index):
+        """Index the BitmapMask.
+
+        Args:
+            index (int | ndarray): Indices in the format of integer or ndarray.
+
+        Returns:
+            :obj:`BitmapMasks`: Indexed bitmap masks.
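+
+        A hypothetical example::
+
+            >>> bitmap = BitmapMasks(np.zeros((3, 8, 8), dtype=np.uint8), 8, 8)
+            >>> len(bitmap[0:2])  # slicing keeps the (N, H, W) layout
+            2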
+ """ + masks = self.masks[index].reshape(-1, self.height, self.width) + return BitmapMasks(masks, self.height, self.width) + + def __iter__(self): + return iter(self.masks) + + def __repr__(self): + s = self.__class__.__name__ + '(' + s += f'num_masks={len(self.masks)}, ' + s += f'height={self.height}, ' + s += f'width={self.width})' + return s + + def __len__(self): + """Number of masks.""" + return len(self.masks) + + def rescale(self, scale, interpolation='nearest'): + """See :func:`BaseInstanceMasks.rescale`.""" + if len(self.masks) == 0: + new_w, new_h = mmcv.rescale_size((self.width, self.height), scale) + rescaled_masks = np.empty((0, new_h, new_w), dtype=np.uint8) + else: + rescaled_masks = np.stack([ + mmcv.imrescale(mask, scale, interpolation=interpolation) + for mask in self.masks + ]) + height, width = rescaled_masks.shape[1:] + return BitmapMasks(rescaled_masks, height, width) + + def resize(self, out_shape, interpolation='nearest'): + """See :func:`BaseInstanceMasks.resize`.""" + if len(self.masks) == 0: + resized_masks = np.empty((0, *out_shape), dtype=np.uint8) + else: + resized_masks = np.stack([ + mmcv.imresize( + mask, out_shape[::-1], interpolation=interpolation) + for mask in self.masks + ]) + return BitmapMasks(resized_masks, *out_shape) + + def flip(self, flip_direction='horizontal'): + """See :func:`BaseInstanceMasks.flip`.""" + assert flip_direction in ('horizontal', 'vertical', 'diagonal') + + if len(self.masks) == 0: + flipped_masks = self.masks + else: + flipped_masks = np.stack([ + mmcv.imflip(mask, direction=flip_direction) + for mask in self.masks + ]) + return BitmapMasks(flipped_masks, self.height, self.width) + + def pad(self, out_shape, pad_val=0): + """See :func:`BaseInstanceMasks.pad`.""" + if len(self.masks) == 0: + padded_masks = np.empty((0, *out_shape), dtype=np.uint8) + else: + padded_masks = np.stack([ + mmcv.impad(mask, shape=out_shape, pad_val=pad_val) + for mask in self.masks + ]) + return BitmapMasks(padded_masks, *out_shape) + + def crop(self, bbox): + """See :func:`BaseInstanceMasks.crop`.""" + assert isinstance(bbox, np.ndarray) + assert bbox.ndim == 1 + + # clip the boundary + bbox = bbox.copy() + bbox[0::2] = np.clip(bbox[0::2], 0, self.width) + bbox[1::2] = np.clip(bbox[1::2], 0, self.height) + x1, y1, x2, y2 = bbox + w = np.maximum(x2 - x1, 1) + h = np.maximum(y2 - y1, 1) + + if len(self.masks) == 0: + cropped_masks = np.empty((0, h, w), dtype=np.uint8) + else: + cropped_masks = self.masks[:, y1:y1 + h, x1:x1 + w] + return BitmapMasks(cropped_masks, h, w) + + def crop_and_resize(self, + bboxes, + out_shape, + inds, + device='cpu', + interpolation='bilinear', + binarize=True): + """See :func:`BaseInstanceMasks.crop_and_resize`.""" + if len(self.masks) == 0: + empty_masks = np.empty((0, *out_shape), dtype=np.uint8) + return BitmapMasks(empty_masks, *out_shape) + + # convert bboxes to tensor + if isinstance(bboxes, np.ndarray): + bboxes = torch.from_numpy(bboxes).to(device=device) + if isinstance(inds, np.ndarray): + inds = torch.from_numpy(inds).to(device=device) + + num_bbox = bboxes.shape[0] + fake_inds = torch.arange( + num_bbox, device=device).to(dtype=bboxes.dtype)[:, None] + rois = torch.cat([fake_inds, bboxes], dim=1) # Nx5 + rois = rois.to(device=device) + if num_bbox > 0: + gt_masks_th = torch.from_numpy(self.masks).to(device).index_select( + 0, inds).to(dtype=rois.dtype) + targets = roi_align(gt_masks_th[:, None, :, :], rois, out_shape, + 1.0, 0, 'avg', True).squeeze(1) + if binarize: + resized_masks = (targets >= 
0.5).cpu().numpy() + else: + resized_masks = targets.cpu().numpy() + else: + resized_masks = [] + return BitmapMasks(resized_masks, *out_shape) + + def expand(self, expanded_h, expanded_w, top, left): + """See :func:`BaseInstanceMasks.expand`.""" + if len(self.masks) == 0: + expanded_mask = np.empty((0, expanded_h, expanded_w), + dtype=np.uint8) + else: + expanded_mask = np.zeros((len(self), expanded_h, expanded_w), + dtype=np.uint8) + expanded_mask[:, top:top + self.height, + left:left + self.width] = self.masks + return BitmapMasks(expanded_mask, expanded_h, expanded_w) + + def translate(self, + out_shape, + offset, + direction='horizontal', + fill_val=0, + interpolation='bilinear'): + """Translate the BitmapMasks. + + Args: + out_shape (tuple[int]): Shape for output mask, format (h, w). + offset (int | float): The offset for translate. + direction (str): The translate direction, either "horizontal" + or "vertical". + fill_val (int | float): Border value. Default 0 for masks. + interpolation (str): Same as :func:`mmcv.imtranslate`. + + Returns: + BitmapMasks: Translated BitmapMasks. + """ + if len(self.masks) == 0: + translated_masks = np.empty((0, *out_shape), dtype=np.uint8) + else: + translated_masks = mmcv.imtranslate( + self.masks.transpose((1, 2, 0)), + offset, + direction, + border_value=fill_val, + interpolation=interpolation) + if translated_masks.ndim == 2: + translated_masks = translated_masks[:, :, None] + translated_masks = translated_masks.transpose( + (2, 0, 1)).astype(self.masks.dtype) + return BitmapMasks(translated_masks, *out_shape) + + def shear(self, + out_shape, + magnitude, + direction='horizontal', + border_value=0, + interpolation='bilinear'): + """Shear the BitmapMasks. + + Args: + out_shape (tuple[int]): Shape for output mask, format (h, w). + magnitude (int | float): The magnitude used for shear. + direction (str): The shear direction, either "horizontal" + or "vertical". + border_value (int | tuple[int]): Value used in case of a + constant border. + interpolation (str): Same as in :func:`mmcv.imshear`. + + Returns: + BitmapMasks: The sheared masks. + """ + if len(self.masks) == 0: + sheared_masks = np.empty((0, *out_shape), dtype=np.uint8) + else: + sheared_masks = mmcv.imshear( + self.masks.transpose((1, 2, 0)), + magnitude, + direction, + border_value=border_value, + interpolation=interpolation) + if sheared_masks.ndim == 2: + sheared_masks = sheared_masks[:, :, None] + sheared_masks = sheared_masks.transpose( + (2, 0, 1)).astype(self.masks.dtype) + return BitmapMasks(sheared_masks, *out_shape) + + def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): + """Rotate the BitmapMasks. + + Args: + out_shape (tuple[int]): Shape for output mask, format (h, w). + angle (int | float): Rotation angle in degrees. Positive values + mean counter-clockwise rotation. + center (tuple[float], optional): Center point (w, h) of the + rotation in source image. If not specified, the center of + the image will be used. + scale (int | float): Isotropic scale factor. + fill_val (int | float): Border value. Default 0 for masks. + + Returns: + BitmapMasks: Rotated BitmapMasks. 
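+
+        Example with illustrative values::
+
+            >>> masks = BitmapMasks(np.ones((2, 32, 32), dtype=np.uint8), 32, 32)
+            >>> rotated = masks.rotate((32, 32), angle=15.0)
+            >>> rotated.masks.shape  # shape follows out_shape
+            (2, 32, 32)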
+ """ + if len(self.masks) == 0: + rotated_masks = np.empty((0, *out_shape), dtype=self.masks.dtype) + else: + rotated_masks = mmcv.imrotate( + self.masks.transpose((1, 2, 0)), + angle, + center=center, + scale=scale, + border_value=fill_val) + if rotated_masks.ndim == 2: + # case when only one mask, (h, w) + rotated_masks = rotated_masks[:, :, None] # (h, w, 1) + rotated_masks = rotated_masks.transpose( + (2, 0, 1)).astype(self.masks.dtype) + return BitmapMasks(rotated_masks, *out_shape) + + @property + def areas(self): + """See :py:attr:`BaseInstanceMasks.areas`.""" + return self.masks.sum((1, 2)) + + def to_ndarray(self): + """See :func:`BaseInstanceMasks.to_ndarray`.""" + return self.masks + + def to_tensor(self, dtype): + """See :func:`BaseInstanceMasks.to_tensor`.""" + return ms.Tensor(self.masks, dtype=dtype) + + def get_bboxes(self): + num_masks = len(self) + boxes = np.zeros((num_masks, 4), dtype=np.float32) + x_any = self.masks.any(axis=1) + y_any = self.masks.any(axis=2) + for idx in range(num_masks): + x = np.where(x_any[idx, :])[0] + y = np.where(y_any[idx, :])[0] + if len(x) > 0 and len(y) > 0: + # use +1 for x_max and y_max so that the right and bottom + # boundary of instance masks are fully included by the box + boxes[idx, :] = np.array([x[0], y[0], x[-1] + 1, y[-1] + 1], + dtype=np.float32) + return boxes diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/__init__.py b/contrib/Overlap-Recovery/train/src/deoccluder/__init__.py new file mode 100644 index 000000000..b9f04e432 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/__init__.py @@ -0,0 +1,6 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/24 22:06 +# @Author : WeiHua + +from .deoccluder_r50 import CustomKNet, TrainModelWrapper diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/__init__.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/__init__.py new file mode 100644 index 000000000..63d2b7648 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/__init__.py @@ -0,0 +1,10 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/25 15:06 +# @Author : WeiHua + +from .custom_operations import CustomResizeBilinear, normal_init, multi_apply +from .custom_blocks import ConvModule, FFN, MultiheadAttention +from .custom_losses import build_loss +from .custom_samplers import build_sampler +from .custom_assigner import build_assigner diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py new file mode 100644 index 000000000..872362f43 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py @@ -0,0 +1,243 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/28 17:32 +# @Author : WeiHua + +try: + from scipy.optimize import linear_sum_assignment +except ImportError: + linear_sum_assignment = None +import numpy as np +import mindspore as ms +from mindspore import nn, ops +from .custom_match_cost import build_match_cost +from .custom_operations import NiceRepr + + +class AssignResult(NiceRepr): + """Stores assignments between predicted and truth boxes. Code inherited from mmdetection. + + Attributes: + num_gts (int): the number of truth boxes considered when computing this + assignment + + gt_inds (LongTensor): for each predicted box indicates the 1-based + index of the assigned truth box. 0 means unassigned and -1 means + ignore. 
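+            For example, ``gt_inds = [0, 2, -1]`` marks the first
+            prediction as background, matches the second prediction to
+            the second ground-truth box, and ignores the third.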
+ + max_overlaps (FloatTensor): the iou between the predicted box and its + assigned truth box. + + labels (None | LongTensor): If specified, for each predicted box + indicates the category label of the assigned truth box. + """ + + def __init__(self, num_gts, gt_inds, max_overlaps, labels=None): + self.num_gts = num_gts + self.gt_inds = gt_inds + self.max_overlaps = max_overlaps + self.labels = labels + # Interface for possible user-defined properties + self._extra_properties = {} + + @property + def num_preds(self): + """int: the number of predictions in this assignment""" + return len(self.gt_inds) + + def set_extra_property(self, key, value): + """Set user-defined new property.""" + assert key not in self.info + self._extra_properties[key] = value + + def get_extra_property(self, key): + """Get user-defined property.""" + return self._extra_properties.get(key, None) + + @property + def info(self): + """dict: a dictionary of info about the object""" + basic_info = { + 'num_gts': self.num_gts, + 'num_preds': self.num_preds, + 'gt_inds': self.gt_inds, + 'max_overlaps': self.max_overlaps, + 'labels': self.labels, + } + basic_info.update(self._extra_properties) + return basic_info + + def __nice__(self): + """str: a "nice" summary string describing this assign result""" + parts = [] + parts.append(f'num_gts={self.num_gts!r}') + if self.gt_inds is None: + parts.append(f'gt_inds={self.gt_inds!r}') + else: + parts.append(f'gt_inds.shape={tuple(self.gt_inds.shape)!r}') + if self.max_overlaps is None: + parts.append(f'max_overlaps={self.max_overlaps!r}') + else: + parts.append('max_overlaps.shape=' + f'{tuple(self.max_overlaps.shape)!r}') + if self.labels is None: + parts.append(f'labels={self.labels!r}') + else: + parts.append(f'labels.shape={tuple(self.labels.shape)!r}') + return ', '.join(parts) + + def add_gt_(self, gt_labels): + """Add ground truth as assigned results. + + Args: + gt_labels (torch.Tensor): Labels of gt boxes + """ + # self_inds = torch.arange( + # 1, len(gt_labels) + 1, dtype=ms.int32, device=gt_labels.device) + self_inds = ms.Tensor(np.arange( + 1, len(gt_labels) + 1), dtype=ms.int32) + self.gt_inds = ops.concat([self_inds, self.gt_inds]) + + self.max_overlaps = ops.concat( + [self.max_overlaps.new_ones(len(gt_labels)), self.max_overlaps]) + + if self.labels is not None: + self.labels = ops.concat([gt_labels, self.labels]) + + +class MaskHungarianAssigner(nn.Cell): + """Computes one-to-one matching between predictions and ground truth.""" + + def __init__(self, + cls_cost=dict(type='ClassificationCost', weight=1.), + mask_cost=dict(type='SigmoidCost', weight=1.0), + dice_cost=dict(), + boundary_cost=None, + topk=1): + super(MaskHungarianAssigner, self).__init__() + self.cls_cost = build_match_cost(cls_cost) + self.mask_cost = build_match_cost(mask_cost) + self.dice_cost = build_match_cost(dice_cost) + if boundary_cost is not None: + self.boundary_cost = build_match_cost(boundary_cost) + else: + self.boundary_cost = None + self.topk = topk + + def assign(self, + bbox_pred, + cls_pred, + gt_bboxes, + gt_labels, + img_meta=None, + gt_bboxes_ignore=None, + eps=1e-7): + """Computes one-to-one matching based on the weighted costs. + + This method assign each query prediction to a ground truth or + background. The `assigned_gt_inds` with -1 means don't care, + 0 means negative sample, and positive number is the index (1-based) + of assigned gt. + The assignment is done in the following steps, the order matters. + + 1. assign every prediction to -1 + 2. 
compute the weighted costs + 3. do Hungarian matching on CPU based on the costs + 4. assign all to 0 (background) first, then for each matched pair + between predictions and gts, treat this prediction as foreground + and assign the corresponding gt index (plus 1) to it. + + Args: + bbox_pred (Tensor): Predicted boxes with normalized coordinates + (cx, cy, w, h), which are all in range [0, 1]. Shape + [num_query, 4]. + cls_pred (Tensor): Predicted classification logits, shape + [num_query, num_class]. + gt_bboxes (Tensor): Ground truth boxes with unnormalized + coordinates (x1, y1, x2, y2). Shape [num_gt, 4]. + gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). + img_meta (dict): Meta information for current image. + gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are + labelled as `ignored`. Default None. + eps (int | float, optional): A value added to the denominator for + numerical stability. Default 1e-7. + + Returns: + :obj:`AssignResult`: The assigned result. + """ + assert gt_bboxes_ignore is None, \ + 'Only case when gt_bboxes_ignore is None is supported.' + num_gts, num_bboxes = gt_bboxes.shape[0], bbox_pred.shape[0] + + # 1. assign -1 by default + assigned_gt_inds = ms.numpy.full((num_bboxes, ), -1, dtype=ms.int64) + assigned_labels = ms.numpy.full((num_bboxes, ), -1, dtype=ms.int64) + if num_gts == 0 or num_bboxes == 0: + # No ground truth or boxes, return empty assignment + if num_gts == 0: + # No ground truth, assign all to background + assigned_gt_inds[:] = 0 + return AssignResult( + num_gts, assigned_gt_inds, None, labels=assigned_labels) + + # 2. compute the weighted costs + # classification and bboxcost. + if self.cls_cost.weight != 0 and cls_pred is not None: + cls_cost = self.cls_cost(cls_pred, gt_labels) + else: + cls_cost = 0 + if self.mask_cost.weight != 0: + reg_cost = self.mask_cost(bbox_pred, gt_bboxes) + else: + reg_cost = 0 + if self.dice_cost.weight != 0: + dice_cost = self.dice_cost(bbox_pred, gt_bboxes) + else: + dice_cost = 0 + if self.boundary_cost is not None and self.boundary_cost.weight != 0: + b_cost = self.boundary_cost(bbox_pred, gt_bboxes) + else: + b_cost = 0 + cost = cls_cost + reg_cost + dice_cost + b_cost + + # 3. do Hungarian matching on CPU using linear_sum_assignment + # cost = cost.detach().cpu() + cost = cost.asnumpy() + if linear_sum_assignment is None: + raise RuntimeError('Please run "pip install scipy" ' + 'to install scipy first.') + if self.topk == 1: + matched_row_inds, matched_col_inds = linear_sum_assignment(cost) + else: + topk_matched_row_inds = [] + topk_matched_col_inds = [] + for i in range(self.topk): + matched_row_inds, matched_col_inds = linear_sum_assignment( + cost) + topk_matched_row_inds.append(matched_row_inds) + topk_matched_col_inds.append(matched_col_inds) + cost[matched_row_inds] = 1e10 + matched_row_inds = np.concatenate(topk_matched_row_inds) + matched_col_inds = np.concatenate(topk_matched_col_inds) + + matched_row_inds = ms.Tensor.from_numpy(matched_row_inds) + matched_col_inds = ms.Tensor.from_numpy(matched_col_inds) + + # 4. 
assign backgrounds and foregrounds + # assign all indices to backgrounds first + assigned_gt_inds[:] = 0 + # assign foregrounds based on matching results + assigned_gt_inds[matched_row_inds] = matched_col_inds + 1 + assigned_labels[matched_row_inds] = gt_labels[matched_col_inds] + return AssignResult( + num_gts, assigned_gt_inds, None, labels=assigned_labels) + + +CUSTOM_ASSIGNER = { + 'MaskHungarianAssigner': MaskHungarianAssigner +} + + +def build_assigner(cfg): + assigner_type = cfg.pop('type') + return CUSTOM_ASSIGNER[assigner_type](**cfg) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py new file mode 100644 index 000000000..dfbae908b --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py @@ -0,0 +1,274 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/25 15:26 +# @Author : WeiHua + +import warnings +import mindspore as ms +from mindspore import nn, ops +from src.model_utils.configs.config_base import config + + +class ConvModule(nn.Cell): + def __init__(self, + in_channels, + out_channels, + kernel_size=1, + padding=0, + stride=1, + groups=1, + dilation=1, + conv_cfg=None, + norm_cfg=None, + act_cfg=None): + super().__init__() + if norm_cfg is not None: + bias = False + else: + bias = True + self.conv = nn.Conv2d(in_channels, + out_channels, + kernel_size, + stride=stride, + pad_mode='pad', + padding=padding, + group=groups, + dilation=dilation, + has_bias=bias) + + self.norm = None + if norm_cfg: + if norm_cfg['type'] == 'BN': + self.norm = nn.BatchNorm2d(out_channels, momentum=0.9) + elif norm_cfg['type'] == 'GN': + self.norm = nn.GroupNorm(norm_cfg['num_groups'], out_channels) + elif norm_cfg['type'] == 'LN': + self.norm = nn.LayerNorm(norm_cfg['normalized_shape']) + else: + raise TypeError('Unknown normalization layer') + + self.act = None + if act_cfg: + if act_cfg['type'] == 'ReLU': + self.act = nn.ReLU() + elif act_cfg['type'] == 'Sigmoid': + self.act = nn.Sigmoid() + else: + raise TypeError('Unknown activation layer') + + def construct(self, x): + out = self.conv(x) + if self.norm is not None: + out = self.norm(out) + if self.act is not None: + out = self.act(out) + return out + + +class FFN(nn.Cell): + """Implements feed-forward networks (FFNs) with identity connection. + + Args: + embed_dims (int): The feature dimension. Same as + `MultiheadAttention`. Defaults: 256. + feedforward_channels (int): The hidden dimension of FFNs. + Defaults: 1024. + num_fcs (int, optional): The number of fully-connected layers in + FFNs. Default: 2. + act_cfg (dict, optional): The activation config for FFNs. + Default: dict(type='ReLU') + ffn_drop (float, optional): Probability of an element to be + zeroed in FFN. Default 0.0. + add_identity (bool, optional): Whether to add the + identity connection. Default: `True`. + dropout_layer (obj:`ConfigDict`): The dropout_layer used + when adding the shortcut. + """ + + def __init__(self, + embed_dims=256, + feedforward_channels=1024, + num_fcs=2, + act_cfg=dict(type='ReLU'), + ffn_drop=0., + dropout_layer=None, + add_identity=True): + super(FFN, self).__init__() + assert num_fcs >= 2, 'num_fcs should be no less ' \ + f'than 2. got {num_fcs}.' 
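+        # The layers built below form the standard Transformer FFN:
+        # (num_fcs - 1) blocks of Dense -> activation -> Dropout that map
+        # into feedforward_channels, then a final Dense (plus dropout)
+        # projecting back to embed_dims, optionally added to the input
+        # as an identity shortcut.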
+ self.embed_dims = embed_dims + self.feedforward_channels = feedforward_channels + self.num_fcs = num_fcs + self.act_cfg = act_cfg + if act_cfg and act_cfg.get('type') == 'ReLU': + self.activate = nn.ReLU() + else: + raise RuntimeError(f"Not support cfg: {act_cfg}") + + layers = [] + in_channels = embed_dims + for _ in range(num_fcs - 1): + layers.append( + nn.SequentialCell( + nn.Dense(in_channels, feedforward_channels), self.activate, + nn.Dropout(ffn_drop) if ffn_drop > 0 else nn.Identity())) + in_channels = feedforward_channels + layers.append(nn.Dense(feedforward_channels, embed_dims)) + layers.append(nn.Dropout(ffn_drop) if ffn_drop > 0 else nn.Identity()) + self.layers = nn.SequentialCell(*layers) + self.dropout_layer = nn.Dropout() if dropout_layer else nn.Identity() + self.add_identity = add_identity + + def construct(self, x, identity=None): + """Forward function for `FFN`. + + The function would add x to the output tensor if residue is None. + """ + out = self.layers(x) + if not self.add_identity: + return self.dropout_layer(out) + if identity is None: + identity = x + return identity + self.dropout_layer(out) + + +class MultiheadAttention(nn.Cell): + """A wrapper for ``torch.nn.MultiheadAttention``. + + This module implements MultiheadAttention with identity connection, + and positional encoding is also passed as input. + + Args: + embed_dims (int): The embedding dimension. + num_heads (int): Parallel attention heads. + attn_drop (float): A Dropout layer on attn_output_weights. + Default: 0.0. + proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. + Default: 0.0. + dropout_layer (obj:`ConfigDict`): The dropout_layer used + when adding the shortcut. + init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. + Default: None. + batch_first (bool): When it is True, Key, Query and Value are shape of + (batch, n, embed_dim), otherwise (n, batch, embed_dim). + Default to True for mindspore. + """ + + def __init__(self, + embed_dims, + num_heads, + attn_drop=0., + proj_drop=0., + dropout_layer=0., + init_cfg=None, + batch_first=True, + num_proposals=4, + **kwargs): + super().__init__(init_cfg) + self.embed_dims = embed_dims + self.num_heads = num_heads + self.batch_first = batch_first + batch_size = config.data['samples_per_gpu'] + self.attn = nn.transformer.MultiHeadAttention( + batch_size, num_proposals, num_proposals, embed_dims, num_heads, + attention_dropout_rate=attn_drop, **kwargs) + if proj_drop > 0: + self.proj_drop = nn.Dropout(proj_drop) + else: + self.proj_drop = nn.Identity() + self.num_proposals = num_proposals + self.dropout_layer = nn.Dropout(dropout_layer) if dropout_layer > 0 else nn.Identity() + + def construct(self, + query, + key=None, + value=None, + identity=None, + query_pos=None, + key_pos=None, + attn_mask=None, + key_padding_mask=None, + **kwargs): + """Forward function for `MultiheadAttention`. + + **kwargs allow passing a more general data flow when combining + with other operations in `transformerlayer`. + + Args: + query (Tensor): The input query with shape [num_queries, bs, + embed_dims] if self.batch_first is False, else + [bs, num_queries embed_dims]. + key (Tensor): The key tensor with shape [num_keys, bs, + embed_dims] if self.batch_first is False, else + [bs, num_keys, embed_dims] . + If None, the ``query`` will be used. Defaults to None. + value (Tensor): The value tensor with same shape as `key`. + Same in `nn.MultiheadAttention.forward`. Defaults to None. + If None, the `key` will be used. 
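+                (In this wrapper, omitting ``key`` and ``value`` therefore
+                reduces the call to self-attention over ``query``.)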
+ identity (Tensor): This tensor, with the same shape as x, + will be used for the identity link. + If None, `x` will be used. Defaults to None. + query_pos (Tensor): The positional encoding for query, with + the same shape as `x`. If not None, it will + be added to `x` before forward function. Defaults to None. + key_pos (Tensor): The positional encoding for `key`, with the + same shape as `key`. Defaults to None. If not None, it will + be added to `key` before forward function. If None, and + `query_pos` has the same shape as `key`, then `query_pos` + will be used for `key_pos`. Defaults to None. + attn_mask (Tensor): ByteTensor mask with shape [num_queries, + num_keys]. Same in `nn.MultiheadAttention.forward`. + Defaults to None. + key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys]. + Defaults to None. + + Returns: + Tensor: forwarded results with shape + [num_queries, bs, embed_dims] + if self.batch_first is False, else + [bs, num_queries embed_dims]. + """ + + if key is None: + key = query + if value is None: + value = key + if identity is None: + identity = query + if key_pos is None: + if query_pos is not None: + # use query_pos if key_pos is not available + if query_pos.shape == key.shape: + key_pos = query_pos + else: + warnings.warn(f'position encoding of key is' + f'missing in {self.__class__.__name__}.') + if query_pos is not None: + query = query + query_pos + if key_pos is not None: + key = key + key_pos + + # Because the dataflow('key', 'query', 'value') of + # ``torch.nn.MultiheadAttention`` is (num_query, batch, + # embed_dims), We should adjust the shape of dataflow from + # batch_first (batch, num_query, embed_dims) to num_query_first + # (num_query ,batch, embed_dims), and recover ``attn_output`` + # from num_query_first to batch_first. 
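+        # e.g. with batch_first=True, a (batch, num_query, embed_dims)
+        # query is transposed to (num_query, batch, embed_dims) before
+        # the attention call and transposed back afterwards.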
+ if self.batch_first: + query = query.transpose((1, 0, 2)) + key = key.transpose((1, 0, 2)) + value = value.transpose((1, 0, 2)) + B, N, _ = query.shape + else: + N, B, _ = query.shape + out = self.attn( + query_tensor=query, + key_tensor=key, + value_tensor=value, + attention_mask=ops.ones((B, N, N), ms.float32))[0] + + if self.batch_first: + out = out.transpose((1, 0, 2)) + + return identity + self.dropout_layer(self.proj_drop(out)) + diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py new file mode 100644 index 000000000..df90b197e --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py @@ -0,0 +1,110 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/25 16:37 +# @Author : WeiHua + +import mindspore as ms +import numpy as np +from mindspore import nn, ops + + +class BinaryCrossEntropy(nn.Cell): + def __init__(self, loss_weight=1, reduction='mean', use_sigmoid=True): + super(BinaryCrossEntropy, self).__init__() + self.bce_loss = ops.binary_cross_entropy_with_logits + self.reduction = reduction + self.loss_weight = loss_weight + self.use_sigmoid = use_sigmoid + assert self.use_sigmoid + + def construct(self, pred, label): + return self.loss_weight * ops.binary_cross_entropy_with_logits( + pred, label, ops.ones(pred.shape, ms.float32), + ops.ones(pred.shape, ms.float32), reduction=self.reduction) + + +class FocalLoss(nn.Cell): + def __init__(self, gamma=2.0, loss_weight=1, reduction='sum', use_sigmoid=True): + super(FocalLoss, self).__init__() + self.focal_loss = nn.FocalLoss(gamma=gamma, reduction=reduction) + self.loss_weight = loss_weight + self.use_sigmoid = use_sigmoid + + def construct(self, pred, label, avg_factor): + return self.loss_weight * self.focal_loss(pred, label) / avg_factor + + +class SigmoidFocalClassificationLoss(nn.Cell): + """" + Sigmoid focal-loss for classification. + + Args: + gamma (float): Hyper-parameter to balance the easy and hard examples. Default: 2.0 + alpha (float): Hyper-parameter to balance the positive and negative example. Default: 0.25 + + Returns: + Tensor, the focal loss. 
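+
+    With p = sigmoid(logits) and one-hot label y, the code below forms
+    p_t = y * p + (1 - y) * (1 - p) and returns
+    loss_weight * alpha_t * (1 - p_t)^gamma * BCE_with_logits(logits, y),
+    where alpha_t = y * alpha + (1 - y) * (1 - alpha).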
+ """ + def __init__(self, gamma=2.0, alpha=0.25, loss_weight=1.0, use_sigmoid=True): + super(SigmoidFocalClassificationLoss, self).__init__() + self.sigmiod_cross_entropy = ops.SigmoidCrossEntropyWithLogits() + self.sigmoid = ops.Sigmoid() + self.pow = ops.Pow() + self.onehot = ops.OneHot() + self.on_value = ms.Tensor(1.0, ms.float32) + self.off_value = ms.Tensor(0.0, ms.float32) + self.gamma = gamma + self.alpha = alpha + self.loss_weight = loss_weight + self.use_sigmoid = use_sigmoid + + def construct(self, logits, label): + label = self.onehot(label, ops.shape(logits)[-1], self.on_value, self.off_value) + sigmiod_cross_entropy = self.sigmiod_cross_entropy(logits, label) + sigmoid = self.sigmoid(logits) + label = ops.cast(label, ms.float32) + p_t = label * sigmoid + (1 - label) * (1 - sigmoid) + modulating_factor = self.pow(1 - p_t, self.gamma) + alpha_weight_factor = label * self.alpha + (1 - label) * (1 - self.alpha) + focal_loss = modulating_factor * alpha_weight_factor * sigmiod_cross_entropy + return self.loss_weight * focal_loss + + +class DiceLoss(nn.Cell): + def __init__(self, loss_weight=1, use_sigmoid=True): + super(DiceLoss, self).__init__() + self.dice_loss = nn.DiceLoss() + self.loss_weight = loss_weight + self.use_sigmoid = use_sigmoid + assert self.use_sigmoid + self.sigmoid = ops.Sigmoid() + + def construct(self, pred, label): + return self.loss_weight * self.dice_loss(self.sigmoid(pred), label) + + +class CLSBCELoss(nn.Cell): + def __init__(self, loss_weight=1, use_sigmoid=True, reduction='mean'): + super(CLSBCELoss, self).__init__() + # self.bce_loss = nn.BCELoss(reduction=reduction) + self.bce_loss = nn.CrossEntropyLoss(reduction=reduction) + self.loss_weight = loss_weight + self.use_sigmoid = use_sigmoid + self.sigmoid = ops.Sigmoid() + + def construct(self, pred, label): + return self.loss_weight * self.bce_loss(pred, label) + # return self.loss_weight * self.bce_loss(self.sigmoid(pred), label) + +CUSTOM_LOSSES = { + 'BinaryCrossEntropy': BinaryCrossEntropy, + 'FocalLoss': FocalLoss, + 'DiceLoss': DiceLoss, + 'CLSBCELoss': CLSBCELoss, + 'SigmoidFocalClassificationLoss': SigmoidFocalClassificationLoss +} + + +def build_loss(loss_cfg: dict): + loss_type = loss_cfg.pop('type') + return CUSTOM_LOSSES[loss_type](**loss_cfg) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py new file mode 100644 index 000000000..14783d2e9 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py @@ -0,0 +1,217 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/28 17:46 +# @Author : WeiHua + +import mindspore as ms +from mindspore import nn, ops +from .custom_operations import Einsum + + +class FocalLossCost: + """FocalLossCost. + + Args: + weight (int | float, optional): loss_weight + alpha (int | float, optional): focal_loss alpha + gamma (int | float, optional): focal_loss gamma + eps (float, optional): default 1e-12 + binary_input (bool, optional): Whether the input is binary, + default False. 
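+
+    The returned cost is ``pos_cost - neg_cost`` for every
+    (prediction, ground-truth) pair, where both terms carry the usual
+    focal-loss weighting (see ``_focal_loss_cost`` below).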
+ + Examples: + from mmdet.core.bbox.match_costs.match_cost import FocalLossCost + import torch + self = FocalLossCost() + cls_pred = torch.rand(4, 3) + gt_labels = torch.tensor([0, 1, 2]) + factor = torch.tensor([10, 8, 10, 8]) + self(cls_pred, gt_labels) + tensor([[-0.3236, -0.3364, -0.2699], + [-0.3439, -0.3209, -0.4807], + [-0.4099, -0.3795, -0.2929], + [-0.1950, -0.1207, -0.2626]]) + """ + + def __init__(self, + weight=1., + alpha=0.25, + gamma=2, + eps=1e-12, + binary_input=False): + self.weight = weight + self.alpha = alpha + self.gamma = gamma + self.eps = eps + self.binary_input = binary_input + + def _focal_loss_cost(self, cls_pred, gt_labels): + """ + Args: + cls_pred (Tensor): Predicted classification logits, shape + (num_query, num_class). + gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). + + Returns: + torch.Tensor: cls_cost value with weight + """ + cls_pred = cls_pred.sigmoid() + neg_cost = -(1 - cls_pred + self.eps).log() * ( + 1 - self.alpha) * cls_pred.pow(self.gamma) + pos_cost = -(cls_pred + self.eps).log() * self.alpha * ( + 1 - cls_pred).pow(self.gamma) + # cls_cost = pos_cost[:, gt_labels] - neg_cost[:, gt_labels] + gt_numpy = gt_labels.asnumpy() + cls_cost = ms.Tensor(pos_cost.asnumpy()[:, gt_numpy]) - ms.Tensor(neg_cost.asnumpy()[:, gt_numpy]) + return cls_cost * self.weight + + def _mask_focal_loss_cost(self, cls_pred, gt_labels): + """ + Args: + cls_pred (Tensor): Predicted classfication logits + in shape (num_query, d1, ..., dn), dtype=torch.float32. + gt_labels (Tensor): Ground truth in shape (num_gt, d1, ..., dn), + dtype=torch.long. Labels should be binary. + + Returns: + Tensor: Focal cost matrix with weight in shape\ + (num_query, num_gt). + """ + cls_pred = cls_pred.flatten(1) + gt_labels = gt_labels.flatten(1).astype(ms.float32) + n = cls_pred.shape[1] + cls_pred = cls_pred.sigmoid() + neg_cost = -(1 - cls_pred + self.eps).log() * ( + 1 - self.alpha) * cls_pred.pow(self.gamma) + pos_cost = -(cls_pred + self.eps).log() * self.alpha * ( + 1 - cls_pred).pow(self.gamma) + + einsum = ops.Einsum('nc,mc->nm') + cls_cost = einsum((pos_cost, gt_labels)) + einsum((neg_cost, (1 - gt_labels))) + # cls_cost = Einsum('nc,mc->nm', pos_cost, gt_labels) + \ + # Einsum('nc,mc->nm', neg_cost, (1 - gt_labels)) + return cls_cost / n * self.weight + + def __call__(self, cls_pred, gt_labels): + """ + Args: + cls_pred (Tensor): Predicted classfication logits. + gt_labels (Tensor)): Labels. + + Returns: + Tensor: Focal cost matrix with weight in shape\ + (num_query, num_gt). + """ + if self.binary_input: + return self._mask_focal_loss_cost(cls_pred, gt_labels) + else: + return self._focal_loss_cost(cls_pred, gt_labels) + + +class DiceCost(object): + """DiceCost. 
+
+    Args:
+        weight (int | float, optional): loss_weight
+        pred_act (bool): Whether to activate the prediction
+            before calculating cost
+        act_mode (str): Activation applied when ``pred_act`` is True,
+            either 'sigmoid' or 'softmax'.
+        eps (float): Small constant to stabilize the division.
+    """
+
+    def __init__(self,
+                 weight=1.,
+                 pred_act=False,
+                 act_mode='sigmoid',
+                 eps=1e-3):
+        self.weight = weight
+        self.pred_act = pred_act
+        self.act_mode = act_mode
+        self.eps = eps
+
+    def dice_loss(self, pred, target, eps=1e-3):
+        pred = pred.reshape(pred.shape[0], -1)
+        target = target.reshape(target.shape[0], -1).astype(ms.float32)
+        # einsum saves 10x memory
+        # a = torch.sum(pred[:, None] * target[None, ...], -1)
+        a = Einsum('nh,mh->nm', pred, target)
+        b = ops.reduce_sum(pred * pred, 1) + eps
+        c = ops.reduce_sum(target * target, 1) + eps
+        d = (2 * a) / (b[:, None] + c[None, ...])
+        # the constant 1 does not affect the matching, so it is omitted
+        return -d
+
+    def __call__(self, mask_preds, gt_masks):
+        """
+        Args:
+            mask_preds (Tensor): Predicted masks with shape
+                [num_query, H, W].
+            gt_masks (Tensor): Ground truth masks with shape
+                [num_gt, H, W].
+
+        Returns:
+            Tensor: dice_cost value with weight
+        """
+        if self.pred_act and self.act_mode == 'sigmoid':
+            mask_preds = mask_preds.sigmoid()
+        elif self.pred_act:
+            mask_preds = mask_preds.softmax(axis=0)
+        dice_cost = self.dice_loss(mask_preds, gt_masks, self.eps)
+        return dice_cost * self.weight
+
+
+class MaskCost(object):
+    """MaskCost.
+
+    Args:
+        weight (int | float, optional): loss_weight
+    """
+
+    def __init__(self, weight=1., pred_act=False, act_mode='sigmoid'):
+        self.weight = weight
+        self.pred_act = pred_act
+        self.act_mode = act_mode
+
+    def __call__(self, cls_pred, target):
+        """
+        Args:
+            cls_pred (Tensor): Predicted masks with shape
+                [num_query, H, W].
+            target (Tensor): Ground truth masks with shape (num_gt, H, W).
+
+        Returns:
+            Tensor: cls_cost value with weight
+        """
+        if self.pred_act and self.act_mode == 'sigmoid':
+            cls_pred = cls_pred.sigmoid()
+        elif self.pred_act:
+            cls_pred = cls_pred.softmax(axis=0)
+
+        _, H, W = target.shape
+        # flatten_cls_pred = cls_pred.view(num_proposals, -1)
+        # einsum is ~10 times faster than matmul
+        pos_cost = Einsum('nhw,mhw->nm', cls_pred, target)
+        neg_cost = Einsum('nhw,mhw->nm', 1 - cls_pred, 1 - target)
+        cls_cost = -(pos_cost + neg_cost) / (H * W)
+        return cls_cost * self.weight
+
+
+CUSTOM_MATCH_COST = {
+    'FocalLossCost': FocalLossCost,
+    'DiceCost': DiceCost,
+    'MaskCost': MaskCost
+}
+
+
+def build_match_cost(cfg):
+    cost_type = cfg.pop('type')
+    return CUSTOM_MATCH_COST[cost_type](**cfg)
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py
new file mode 100644
index 000000000..964ccde95
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py
@@ -0,0 +1,104 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/11/25 15:01
+# @Author : WeiHua
+
+from functools import partial
+import numpy as np
+import warnings
+import mindspore as ms
+import mindspore.nn as nn
+from mindspore.common import initializer as init
+
+
+class NiceRepr:
+    """Inherit from this class and define ``__nice__`` to "nicely" print your
+    objects.
+
+    Defines ``__str__`` and ``__repr__`` in terms of the ``__nice__`` function.
+    Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``.
+    If the inheriting class has a ``__len__`` method, then the default
+    ``__nice__`` method will return its length.
+
+    Code inherited from mmdetection.
+    """
+
+    def __nice__(self):
+        """str: a "nice" summary string describing this module"""
+        if hasattr(self, '__len__'):
+            # It is a common pattern for objects to use __len__ in __nice__
+            # As a convenience we define a default __nice__ for these objects
+            return str(len(self))
+        else:
+            # In all other cases force the subclass to overload __nice__
+            raise NotImplementedError(
+                f'Define the __nice__ method for {self.__class__!r}')
+
+    def __repr__(self):
+        """str: the string of the module"""
+        try:
+            nice = self.__nice__()
+            classname = self.__class__.__name__
+            return f'<{classname}({nice}) at {hex(id(self))}>'
+        except NotImplementedError as ex:
+            warnings.warn(str(ex), category=RuntimeWarning)
+            return object.__repr__(self)
+
+    def __str__(self):
+        """str: the string of the module"""
+        try:
+            classname = self.__class__.__name__
+            nice = self.__nice__()
+            return f'<{classname}({nice})>'
+        except NotImplementedError as ex:
+            warnings.warn(str(ex), category=RuntimeWarning)
+            return object.__repr__(self)
+
+
+class CustomResizeBilinear(nn.ResizeBilinear):
+    def __init__(self, size=None, scale_factor=None, align_corners=False, **kwargs):
+        super(CustomResizeBilinear, self).__init__(**kwargs)
+        self.size = size
+        self.scale_factor = scale_factor
+        self.align_corners = align_corners
+
+    def construct(self, x, **kwargs):
+        return super(CustomResizeBilinear, self).construct(
+            x, self.size, self.scale_factor, self.align_corners)
+
+
+def normal_init(cell: nn.Cell,
+                init_gain: float = 0.02,
+                mean: float = 0,
+                bias: float = 0) -> None:
+    if hasattr(cell, 'weight') and cell.weight is not None:
+        cell.weight.set_data(init.initializer(
+            init.Normal(init_gain, mean), cell.weight.shape))
+    if hasattr(cell, 'bias') and cell.bias is not None:
+        cell.bias.set_data(init.initializer(bias, cell.bias.shape))
+
+
+def multi_apply(func, *args, **kwargs):
+    """Apply function to a list of arguments.
+
+    Note:
+        This function applies ``func`` to multiple inputs and maps the
+        multiple outputs of ``func`` into different lists. Each list
+        contains the same type of outputs corresponding to different
+        inputs.
+
+    Args:
+        func (Function): A function that will be applied to a list of
+            arguments
+
+    Returns:
+        tuple(list): A tuple containing multiple lists, each list contains \
+            a kind of returned results by the function
+    """
+    pfunc = partial(func, **kwargs) if kwargs else func
+    map_results = map(pfunc, *args)
+    return tuple(map(list, zip(*map_results)))
+
+
+def Einsum(format, x, y):
+    # host-side numpy einsum: not differentiable, so it is only used where
+    # gradients are not required (e.g. match-cost computation)
+    return ms.Tensor(np.einsum(format, x.asnumpy(), y.asnumpy()), dtype=x.dtype)
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py
new file mode 100644
index 000000000..40ff5d7a6
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py
@@ -0,0 +1,126 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/11/25 22:59
+# @Author : WeiHua

+import numpy as np
+import mindspore as ms
+from mindspore import ops, nn
+from .custom_operations import NiceRepr
+
+
+class MaskSamplingResult(NiceRepr):
+    """Mask sampling result, the mask counterpart of bbox sampling."""
+
+    def __init__(self, pos_inds, neg_inds, masks, gt_masks, assign_result,
+                 gt_flags):
+        self.pos_inds = pos_inds
+        self.neg_inds = neg_inds
+        if pos_inds.shape[0] == 0:
+            H, W = masks.shape[-2:]
+            self.pos_masks = np.zeros((0, H, W))
+        else:
+            self.pos_masks = masks[pos_inds]
+        if neg_inds.shape[0] == 0:
+            H, W = masks.shape[-2:]
+            self.neg_masks = np.zeros((0, H, W))
+        else:
+            self.neg_masks = masks[neg_inds]
+        self.pos_is_gt = gt_flags[pos_inds]
+
+        self.num_gts = gt_masks.shape[0]
+        self.pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1
+        size = ops.Size()
+        if size(gt_masks) == 0:
+            # hack for index error case
+            assert self.pos_assigned_gt_inds.numel() == 0
+            self.pos_gt_masks = ms.numpy.empty_like(gt_masks)
+        else:
+            self.pos_gt_masks = gt_masks[self.pos_assigned_gt_inds]
+
+        if assign_result.labels is not None:
+            self.pos_gt_labels = assign_result.labels[pos_inds]
+        else:
+            self.pos_gt_labels = None
+
+    @property
+    def masks(self):
+        """Tensor: concatenated positive and negative masks"""
+        return ops.concat([self.pos_masks, self.neg_masks])
+
+    @property
+    def bboxes(self):
+        """Tensor: concatenated positive and negative boxes (kept for
+        interface parity; requires ``pos_bboxes``/``neg_bboxes`` to be set)"""
+        return ops.concat([self.pos_bboxes, self.neg_bboxes])
+
+    def __nice__(self):
+        data = self.info.copy()
+        data['pos_masks'] = data.pop('pos_masks').shape
+        data['neg_masks'] = data.pop('neg_masks').shape
+        parts = [f"'{k}': {v!r}" for k, v in sorted(data.items())]
+        body = '    ' + ',\n    '.join(parts)
+        return '{\n' + body + '\n}'
+
+    @property
+    def info(self):
+        """Returns a dictionary of info about the object."""
+        return {
+            'pos_inds': self.pos_inds,
+            'neg_inds': self.neg_inds,
+            'pos_masks': self.pos_masks,
+            'neg_masks': self.neg_masks,
+            'pos_is_gt': self.pos_is_gt,
+            'num_gts': self.num_gts,
+            'pos_assigned_gt_inds': self.pos_assigned_gt_inds,
+        }
+
+
+class MaskPseudoSampler(nn.Cell):
+    """A pseudo sampler that does not perform actual sampling."""
+
+    def __init__(self, **kwargs):
+        super(MaskPseudoSampler, self).__init__()
+
+    def sample(self, assign_result, masks, gt_masks, **kwargs):
+        """Directly returns the positive and negative indices of samples.
+
+        Args:
+            assign_result (:obj:`AssignResult`): Assigned results
+            masks (Tensor): Predicted masks
+            gt_masks (Tensor): Ground truth masks
+
+        Returns:
+            :obj:`MaskSamplingResult`: sampler results
+        """
+        inds_numpy = assign_result.gt_inds.asnumpy()
+        pos_inds = ms.Tensor(np.unique(np.nonzero(inds_numpy > 0)[0]))
+        neg_inds = ms.Tensor(np.unique(np.nonzero(inds_numpy == 0)[0]))
+
+        zeros = ops.Zeros()
+        gt_flags = zeros((masks.shape[0], ), ms.uint8)
+
+        sampling_result = MaskSamplingResult(pos_inds, neg_inds, masks,
+                                             gt_masks, assign_result,
+                                             gt_flags)
+        return sampling_result
+
+
+CUSTOM_SAMPLER = {
+    'MaskPseudoSampler': MaskPseudoSampler
+}
+
+
+def build_sampler(cfg: dict):
+    sampler_type = cfg.pop('type')
+    return CUSTOM_SAMPLER[sampler_type](**cfg)
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py
new file mode 100644
index 000000000..fce73681b
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py
@@ -0,0 +1,277 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/11/24 22:06
+# @Author : WeiHua
+
+import mindspore as ms
+from mindspore import nn, ops
+from mindspore import load_checkpoint, load_param_into_net
+from mindspore.nn.optim import Adam
+from mindspore.context import ParallelMode
+from mindspore.parallel._auto_parallel_context import auto_parallel_context
+from mindspore.communication.management import get_group_size
+from src.model_utils.configs.config_base import Config
+from ..dataset.utils import BitmapMasks
+from .resnet import resnet50
+from .fpn_neck import FeatPyramidNeck
+from .rpn.kernel_head import ConvKernelHead
+from .roi.custom_kernel_iter_head import CustomKernelIterHead
+from .utils import sem2ins_masks
+
+class CustomKNet(nn.Cell):
+    def __init__(self, config):
+        super(CustomKNet, self).__init__()
+        self.config = Config(config)
+        self.mask_assign_stride = self.config.mask_assign_stride
+        # build backbone (resnet-50)
+        self.backbone = resnet50(pretrained=False)
+        # build FPN
+        self.neck = FeatPyramidNeck(**self.config.neck,
+                                    feature_shapes=self.config.feature_shapes)
+        # build RPN head
+        self.rpn_head = ConvKernelHead(**self.config.rpn_head)
+
+        # build ROI head
+        self.roi_head = CustomKernelIterHead(**self.config.roi_head)
+
+        self.interpolate = nn.ResizeBilinear()
+
+        self.is_model_export = False
+
+        self.reduce_sum = ops.ReduceSum()
+
+        self.cnt = 0
+
+    def load_r50(self, ckpt_path, prefix='backbone'):
+        param_dict = load_checkpoint(ckpt_path)
+        if prefix:
+            prefix_param_dict = dict()
+            for key, val in param_dict.items():
+                prefix_param_dict[f"{prefix}.{key}"] = val
+            param_dict = prefix_param_dict
+        load_param_into_net(self.backbone, param_dict)
+
+    def extract_feat(self, img):
+        """Directly extract features from the backbone+neck."""
+        x = self.backbone(img)
+        x = self.neck(x)
+        return x
+
+    def forward_train(self,
+                      img,
+                      img_metas,
+                      gt_bboxes=None,
+                      gt_labels=None,
+                      gt_bboxes_ignore=None,
+                      gt_masks=None,
+                      gt_semantic_seg=None):
+        assert gt_masks is not None
+
+        # gt_masks and gt_semantic_seg are not padded when forming batch
+        gt_masks_tensor = []
+        gt_sem_seg = []
+        gt_sem_cls = []
+        # batch_input_shape should be the same across images
+        pad_H, pad_W = img_metas[0]['batch_input_shape']
+        assign_H = pad_H // self.mask_assign_stride
+        assign_W = pad_W // self.mask_assign_stride
+
+        for i, gt_mask in enumerate(gt_masks):
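+            # pad each instance mask to the batch input shape, then
+            # downsample it by mask_assign_stride so the training targets
+            # align with the resolution at which masks are assigned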
mask_tensor = gt_mask.to_tensor(ms.float32) + if gt_mask.width != pad_W or gt_mask.height != pad_H: + # pad_wh = (0, pad_W - gt_mask.width, 0, pad_H - gt_mask.height) + # mask_tensor = F.pad(mask_tensor, pad_wh, value=0) + pad_wh = ((0, 0), (0, pad_H - gt_mask.height), (0, pad_W - gt_mask.width)) + pad_op = nn.Pad(paddings=pad_wh) + mask_tensor = pad_op(mask_tensor) + + if gt_semantic_seg is not None: + # gt_semantic seg is padded by 255 and + # zero indicating the first class + sem_labels, sem_seg = sem2ins_masks( + gt_semantic_seg[i], + num_thing_classes=self.num_thing_classes) + if sem_seg.shape[0] == 0: + gt_sem_seg.append( + mask_tensor.new_zeros( + (mask_tensor.shape[0], assign_H, assign_W))) + else: + gt_sem_seg.append( + self.interpolate( + sem_seg[None], (assign_H, assign_W), + align_corners=False)[0]) + gt_sem_cls.append(sem_labels) + + else: + gt_sem_seg = None + gt_sem_cls = None + if mask_tensor.shape[0] == 0: + gt_masks_tensor.append( + mask_tensor.new_zeros( + (mask_tensor.shape[0], assign_H, assign_W))) + else: + gt_masks_tensor.append( + self.interpolate( + mask_tensor[None], (assign_H, assign_W), + align_corners=False)[0]) + gt_masks = gt_masks_tensor + x = self.extract_feat(img) + rpn_results = self.rpn_head.forward_train(x, img_metas, gt_masks, + gt_labels, gt_sem_seg, + gt_sem_cls) + + (rpn_losses, proposal_feats, x_feats, mask_preds, + cls_scores) = rpn_results + losses = self.roi_head.forward_train( + x_feats, + proposal_feats, + mask_preds, + cls_scores, + img_metas, + gt_masks, + gt_labels, + gt_bboxes_ignore=gt_bboxes_ignore, + gt_bboxes=gt_bboxes, + gt_sem_seg=gt_sem_seg, + gt_sem_cls=gt_sem_cls, + imgs_whwh=None) + + losses.update(rpn_losses) + total_loss = None + for key, val in losses.items(): + if isinstance(total_loss, ms.Tensor): + total_loss += val + else: + total_loss = val + self.cnt += 1 + if self.cnt % 10 == 0: + print(losses) + return total_loss + + def simple_test(self, img, img_metas, rescale=False): + x = self.extract_feat(img) + rpn_results = self.rpn_head.simple_test_rpn(x, img_metas) + (proposal_feats, x_feats, mask_preds, cls_scores, + seg_preds) = rpn_results + segm_results = self.roi_head.simple_test( + x_feats, + proposal_feats, + mask_preds, + cls_scores, + img_metas, + imgs_whwh=None, + rescale=rescale) + list_segm_results = [] + for segm in segm_results: + list_segm_results.append(list(segm)) + return list_segm_results + + def construct(self, img, gt_bboxes=None, gt_label=None, gt_masks=None, ori_shape=None, img_shape=None, + pad_shape=None, scale_factor=None, flip=None, flip_direction=None): + if self.training: + # pack inputs + img_metas = [] + h, w = img.shape[-2:] + batch_input_shape = (h, w) + gt_bboxes_list = [] + gt_label_list = [] + gt_masks_list = [] + for idx in range(img.shape[0]): + img_meta = { + 'ori_shape': ori_shape[idx], + 'img_shape': img_shape[idx], + 'pad_shape': pad_shape[idx], + 'scale_factor': scale_factor[idx], + 'flip': flip[idx], + 'flip_direction': flip_direction[idx], + 'batch_input_shape': batch_input_shape + } + img_metas.append(img_meta) + num_ins = (gt_label[idx] == 0).sum().astype(ms.int64) + gt_bboxes_list.append(gt_bboxes[idx, :num_ins]) + gt_label_list.append(gt_label[idx, :num_ins]) + gt_masks_list.append(BitmapMasks(gt_masks[idx, :num_ins], h, w)) + return self.forward_train(img, img_metas, + gt_bboxes=gt_bboxes_list, + gt_labels=gt_label_list, + gt_masks=gt_masks_list) + else: + if self.is_model_export: + return self.model_export(img) + else: + # pack inputs + img_metas = [] + h, w = img.shape[-2:] + 
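+                # evaluation path: flip-related meta keys are omitted
+                # because no flip augmentation is applied at test time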
batch_input_shape = (h, w) + for idx in range(img.shape[0]): + img_meta = { + 'ori_shape': ori_shape[idx], + 'img_shape': img_shape[idx], + 'pad_shape': pad_shape[idx], + 'scale_factor': scale_factor[idx], + 'batch_input_shape': batch_input_shape + } + img_metas.append(img_meta) + return self.simple_test(img, img_metas, True) + + def model_export(self, img): + # pack fake inputs + img_metas = [] + h, w = img.shape[-2:] + batch_input_shape = (h, w) + for idx in range(img.shape[0]): + img_meta = { + 'ori_shape': img.shape[1:], + 'img_shape': img.shape[1:], + 'pad_shape': img.shape[1:], + 'scale_factor': [1, 1], + 'batch_input_shape': batch_input_shape + } + img_metas.append(img_meta) + + x = self.extract_feat(img) + # print('*'*20) + proposal_feats, x_feats, mask_preds, cls_scores, seg_preds = self.rpn_head.onnx_export(x) + # mask_preds = self.rpn_head.onnx_export(x) + # return mask_preds + + scaled_mask_preds, cls_score = self.roi_head.onnx_export(x_feats, + proposal_feats, + mask_preds, + cls_scores, + img_metas, + ) + return scaled_mask_preds, cls_score + + +class TrainModelWrapper(nn.Cell): + + def __init__(self, network): + super(TrainModelWrapper, self).__init__() + self.network = network + self.network.set_train() + self.trainable_params = network.trainable_params() + self.weights = ms.ParameterTuple(self.trainable_params) + self.optimizer = Adam(self.trainable_params, learning_rate=0.0001, eps=1e-8) + self.hyper_map = ops.HyperMap() + self.grad = ops.GradOperation(get_by_list=True) + self.reducer_flag = False + self.grad_reducer = None + self.parallel_mode = ms.get_auto_parallel_context("parallel_mode") + if self.parallel_mode in [ParallelMode.DATA_PARALLEL, ParallelMode.HYBRID_PARALLEL]: + self.reducer_flag = True + if self.reducer_flag: + mean = ms.get_auto_parallel_context("gradients_mean") + if auto_parallel_context().get_device_num_is_set(): + degree = ms.get_auto_parallel_context("device_num") + else: + degree = get_group_size() + self.grad_reducer = nn.DistributedGradReducer( + self.optimizer.parameters, mean, degree) + + def construct(self, *args, **kwargs): + total_loss = self.network(*args, **kwargs) + grads = self.grad(self.network, self.weights)(*args, **kwargs) + if self.reducer_flag: + grads = self.grad_reducer(grads) + return ops.depend(total_loss, self.optimizer(grads)) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py b/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py new file mode 100644 index 000000000..98aec5e56 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py @@ -0,0 +1,121 @@ +# Copyright 2020-2021 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================ +"""Feature pyramid network. 
(inherited from MaskRCNN in model zoo)""" + +import numpy as np +import mindspore.nn as nn +from mindspore.ops import operations as P +from mindspore.common.tensor import Tensor +from mindspore.common import dtype as mstype +from mindspore.common.initializer import initializer +from mindspore import context + + +def bias_init_zeros(shape): + """Bias init method.""" + return Tensor(np.array(np.zeros(shape).astype(np.float32)), dtype=mstype.float32) + +def _conv(in_channels, out_channels, kernel_size=3, stride=1, padding=0, pad_mode='pad'): + """Conv2D wrapper.""" + shape = (out_channels, in_channels, kernel_size, kernel_size) + weights = initializer("XavierUniform", shape=shape, dtype=mstype.float32) + shape_bias = (out_channels,) + biass = bias_init_zeros(shape_bias) + return nn.Conv2d(in_channels, out_channels, + kernel_size=kernel_size, stride=stride, padding=padding, + pad_mode=pad_mode, weight_init=weights, has_bias=True, bias_init=biass) + +class FeatPyramidNeck(nn.Cell): + """ + Feature pyramid network cell, usually uses as network neck. + + Applies the convolution on multiple, input feature maps + and output feature map with same channel size. if required num of + output larger then num of inputs, add extra maxpooling for further + downsampling; + + Args: + in_channels (tuple) - Channel size of input feature maps. + out_channels (int) - Channel size output. + num_outs (int) - Num of output features. + + Returns: + Tuple, with tensors of same channel size. + + Examples: + neck = FeatPyramidNeck([100,200,300], 50, 4, config.feature_shapes) + input_data = (normal(0,0.1,(1,c,1280//(4*2**i), 768//(4*2**i)), + dtype=np.float32) \ + for i, c in enumerate(config.fpn_in_channels)) + x = neck(input_data) + """ + + def __init__(self, + in_channels, + out_channels, + num_outs, + feature_shapes): + super(FeatPyramidNeck, self).__init__() + + if context.get_context("device_target") == "Ascend": + self.cast_type = mstype.float16 + else: + self.cast_type = mstype.float32 + + self.num_outs = num_outs + self.in_channels = in_channels + self.fpn_layer = len(self.in_channels) + + assert not self.num_outs < len(in_channels) + + self.lateral_convs_list_ = [] + self.fpn_convs_ = [] + + for _, channel in enumerate(in_channels): + l_conv = _conv(channel, out_channels, kernel_size=1, stride=1, + padding=0, pad_mode='valid').to_float(self.cast_type) + fpn_conv = _conv(out_channels, out_channels, kernel_size=3, stride=1, + padding=0, pad_mode='same').to_float(self.cast_type) + self.lateral_convs_list_.append(l_conv) + self.fpn_convs_.append(fpn_conv) + self.lateral_convs_list = nn.layer.CellList(self.lateral_convs_list_) + self.fpn_convs_list = nn.layer.CellList(self.fpn_convs_) + self.interpolate1 = P.ResizeBilinear(feature_shapes[2]) + self.interpolate2 = P.ResizeBilinear(feature_shapes[1]) + self.interpolate3 = P.ResizeBilinear(feature_shapes[0]) + self.cast = P.Cast() + self.maxpool = P.MaxPool(kernel_size=1, strides=2, pad_mode="same") + + def construct(self, inputs): + x = () + for i in range(self.fpn_layer): + x += (self.lateral_convs_list[i](inputs[i]),) + + y = (x[3],) + y = y + (x[2] + self.cast(self.interpolate1(y[self.fpn_layer - 4]), self.cast_type),) + y = y + (x[1] + self.cast(self.interpolate2(y[self.fpn_layer - 3]), self.cast_type),) + y = y + (x[0] + self.cast(self.interpolate3(y[self.fpn_layer - 2]), self.cast_type),) + + z = () + for i in range(self.fpn_layer - 1, -1, -1): + z = z + (y[i],) + + outs = () + for i in range(self.fpn_layer): + outs = outs + (self.fpn_convs_list[i](z[i]),) + + 
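+        # when more outputs are required than FPN levels, create the extra
+        # ones by max-pooling (stride 2) the coarsest feature map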
for i in range(self.num_outs - self.fpn_layer): + outs = outs + (self.maxpool(outs[3]),) + return outs diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py b/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py new file mode 100644 index 000000000..822149fab --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py @@ -0,0 +1,136 @@ +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================ + +""" +resnet-50 backbone, code inherited from model zoo. +""" + +import mindspore.nn as nn +from mindspore.common import initializer +from mindspore import Parameter +import mindspore +from mindspore import load_checkpoint, load_param_into_net +from src.model_utils.configs.config_base import config + + +class Bottleneck(nn.Cell): + expansion = 4 + + def __init__(self, inplanes, planes, stride=1, downsample=None): + super(Bottleneck, self).__init__() + self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, has_bias=False) + self.bn1 = nn.BatchNorm2d(planes) + + self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, + has_bias=False, pad_mode='pad') + self.bn2 = nn.BatchNorm2d(planes) + + self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, has_bias=False) + self.bn3 = nn.BatchNorm2d(planes * self.expansion) + + self.relu = nn.ReLU() + self.downsample = downsample + self.stride = stride + + def construct(self, x): + residual = x + + out = self.conv1(x) + out = self.bn1(out) + out = self.relu(out) + + out = self.conv2(out) + out = self.bn2(out) + out = self.relu(out) + + out = self.conv3(out) + out = self.bn3(out) + + if self.downsample is not None: + residual = self.downsample(x) + + out += residual + out = self.relu(out) + + return out + + +class ResNet(nn.Cell): + """ + A ResNet-50 model without final fully connected layer + """ + def __init__(self, block, layers): + self.inplanes = 64 + super(ResNet, self).__init__() + self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, has_bias=False, pad_mode='pad') + self.bn1 = nn.BatchNorm2d(64) + self.relu = nn.ReLU() + self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2) + self.layer1 = self._make_layer(block, 64, layers[0]) + self.layer2 = self._make_layer(block, 128, layers[1], stride=2) + self.layer3 = self._make_layer(block, 256, layers[2], stride=2) + self.layer4 = self._make_layer(block, 512, layers[3], stride=2) + self.avgpool = nn.AvgPool2d(kernel_size=7, stride=1) + self.pad = nn.Pad(paddings=((0, 0), (0, 0), (1, 1), (1, 1)), mode='CONSTANT') + + for m in self.cells(): + if isinstance(m, nn.Conv2d): + m.weight = Parameter(initializer.initializer( + init=initializer.HeNormal(mode='fan_out', nonlinearity='relu'), + shape=m.weight.shape, dtype=mindspore.float32), name=m.weight.name) + + def _make_layer(self, block, planes, blocks, stride=1): + downsample = None + if stride != 1 or self.inplanes != planes * block.expansion: + downsample = 
nn.SequentialCell([ + nn.Conv2d(self.inplanes, planes * block.expansion, + kernel_size=1, stride=stride, has_bias=False), + nn.BatchNorm2d(planes * block.expansion) + ]) + + layers = [] + layers.append(block(self.inplanes, planes, stride, downsample)) + self.inplanes = planes * block.expansion + for _ in range(1, blocks): + layers.append(block(self.inplanes, planes)) + + return nn.SequentialCell(*layers) + + def construct(self, x): + x = self.conv1(x) + x = self.bn1(x) + x = self.relu(x) + x = self.pad(x) + x = self.maxpool(x) + + c2 = self.layer1(x) + c3 = self.layer2(c2) + c4 = self.layer3(c3) + c5 = self.layer4(c4) + + return c2, c3, c4, c5 + + +def resnet50(pretrained=True, **kwargs): + """Constructs a ResNet-50 model. + Args: + pretrained (bool): If True, returns a model pre-trained on ImageNet + """ + model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) + if pretrained: + param_dict = load_checkpoint(config.pretrained_r50) + load_param_into_net(model, param_dict) + + return model diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/__init__.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/__init__.py new file mode 100644 index 000000000..2b2fbdae7 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/__init__.py @@ -0,0 +1,4 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/28 20:00 +# @Author : WeiHua diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py new file mode 100644 index 000000000..251cfb53a --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py @@ -0,0 +1,325 @@ +from .custom_kernel_update_head import CustomKernelUpdateHead +from ..custom_cells import build_assigner, build_sampler + +import mindspore as ms +from mindspore import nn, ops +import numpy as np + +class CustomKernelIterHead(nn.Cell): + + def __init__(self, + num_stages=6, + recursive=False, + assign_stages=5, + stage_loss_weights=(1, 1, 1, 1, 1, 1), + proposal_feature_channel=256, + merge_cls_scores=False, + post_assign=False, + hard_target=False, + num_proposals=100, + num_thing_classes=80, + mask_assign_stride=4, + mask_head=dict(), + mask_out_stride=4, + train_cfg=None, + test_cfg=None, + **kwargs): + super(CustomKernelIterHead, self).__init__() + assert mask_head is not None + assert len(stage_loss_weights) == num_stages + self.num_stages = num_stages + self.stage_loss_weights = stage_loss_weights + self.proposal_feature_channel = proposal_feature_channel + self.merge_cls_scores = merge_cls_scores + self.recursive = recursive + self.post_assign = post_assign + self.mask_out_stride = mask_out_stride + self.hard_target = hard_target + self.assign_stages = assign_stages + self.num_thing_classes = num_thing_classes + self.mask_assign_stride = mask_assign_stride + self.num_proposals = num_proposals + self.train_cfg = train_cfg + self.test_cfg = test_cfg + if mask_head is not None: + self.init_mask_head(None, mask_head) + self.init_assigner_sampler() + self.init_weights() + + def init_assigner_sampler(self): + """Initialize assigner and sampler for each stage.""" + self.mask_assigner = [] + self.mask_sampler = [] + if self.train_cfg is not None: + for idx, rcnn_train_cfg in enumerate(self.train_cfg): + self.mask_assigner.append( + build_assigner(rcnn_train_cfg['assigner'])) + self.current_stage = idx + self.mask_sampler.append( + build_sampler(rcnn_train_cfg['sampler'])) + + def init_weights(self): + for i in 
range(self.num_stages): + self.mask_head[i].init_weights() + + def init_mask_head(self, mask_roi_extractor, mask_head): + """Initialize mask head and mask roi extractor. + + Args: + mask_roi_extractor (dict): Config of mask roi extractor. + mask_head (dict): Config of mask in mask head. + """ + self.mask_head = nn.CellList() + if not isinstance(mask_head, list): + mask_head = [mask_head for _ in range(self.num_stages)] + assert len(mask_head) == self.num_stages + for head in mask_head: + self.mask_head.append(CustomKernelUpdateHead(**head)) + if self.recursive: + for i in range(self.num_stages): + self.mask_head[i] = self.mask_head[0] + + def _mask_forward(self, stage, x, object_feats, mask_preds, img_metas): + mask_head = self.mask_head[stage] + cls_score, mask_preds, object_feats = mask_head( + x, object_feats, mask_preds, img_metas=img_metas) + if mask_head.mask_upsample_stride > 1 and (stage == self.num_stages - 1 + or self.training): + interpolate = nn.ResizeBilinear() + scaled_mask_preds = interpolate( + mask_preds, + scale_factor=mask_head.mask_upsample_stride, + align_corners=False) + else: + scaled_mask_preds = mask_preds + mask_results = dict( + cls_score=cls_score, + mask_preds=mask_preds, + scaled_mask_preds=scaled_mask_preds, + object_feats=object_feats) + + return mask_results + + @property + def apply_kernel_occlusion(self): + return self.mask_head[0].apply_kernel_occlusion + + @property + def occ_pair_num(self): + return 2 * self.mask_head[0].pair_num + + def construct(self, *inputs, **kwargs): + if self.training: + return self.forward_train(*inputs, **kwargs) + else: + return self.simple_test(*inputs, **kwargs) + + def forward_train(self, + x, + proposal_feats, + mask_preds, + cls_score, + img_metas, + gt_masks, + gt_labels, + gt_bboxes_ignore=None, + imgs_whwh=None, + gt_bboxes=None, + gt_sem_seg=None, + gt_sem_cls=None): + + num_imgs = len(img_metas) + if self.mask_head[0].mask_upsample_stride > 1: + interpolate = nn.ResizeBilinear() + prev_mask_preds = interpolate( + ops.stop_gradient(mask_preds), + scale_factor=self.mask_head[0].mask_upsample_stride, + align_corners=False) + else: + prev_mask_preds = ops.stop_gradient(mask_preds) + + if cls_score is not None: + prev_cls_score = ops.stop_gradient(cls_score) + else: + prev_cls_score = [None] * num_imgs + + if self.hard_target: + gt_masks = [x.bool().astype(ms.float32) for x in gt_masks] + else: + gt_masks = gt_masks + + object_feats = proposal_feats + all_stage_loss = {} + all_stage_mask_results = [] + assign_results = [] + for stage in range(self.num_stages): + mask_results = self._mask_forward(stage, x, object_feats, + mask_preds, img_metas) + all_stage_mask_results.append(mask_results) + if self.apply_kernel_occlusion: + mask_preds = mask_results['mask_preds'][:, :-self.occ_pair_num] + else: + mask_preds = mask_results['mask_preds'] + scaled_mask_preds = mask_results['scaled_mask_preds'] + cls_score = mask_results['cls_score'] + object_feats = mask_results['object_feats'] + + if self.post_assign: + if self.apply_kernel_occlusion: + prev_mask_preds = ops.stop_gradient(scaled_mask_preds[:, :-self.occ_pair_num]) + else: + prev_mask_preds = ops.stop_gradient(scaled_mask_preds) + prev_cls_score = ops.stop_gradient(cls_score) + + sampling_results = [] + if stage < self.assign_stages: + assign_results = [] + for i in range(num_imgs): + if stage < self.assign_stages: + mask_for_assign = prev_mask_preds[i][:self.num_proposals] + if prev_cls_score[i] is not None: + cls_for_assign = prev_cls_score[ + i][:self.num_proposals, 
:self.num_thing_classes] + else: + cls_for_assign = None + assign_result = self.mask_assigner[stage].assign( + mask_for_assign, cls_for_assign, gt_masks[i], + gt_labels[i], img_metas[i]) + assign_results.append(assign_result) + if self.apply_kernel_occlusion: + sampling_result = self.mask_sampler[stage].sample( + assign_results[i], scaled_mask_preds[i, :-self.occ_pair_num], gt_masks[i]) + else: + sampling_result = self.mask_sampler[stage].sample( + assign_results[i], scaled_mask_preds[i], gt_masks[i]) + sampling_results.append(sampling_result) + mask_targets = self.mask_head[stage].get_targets( + sampling_results, + gt_masks, + gt_labels, + self.train_cfg[stage], + True, + gt_sem_seg=gt_sem_seg, + gt_sem_cls=gt_sem_cls) + + single_stage_loss = self.mask_head[stage].loss( + object_feats, + cls_score, + scaled_mask_preds, + *mask_targets, + imgs_whwh=imgs_whwh) + for key, value in single_stage_loss.items(): + all_stage_loss[f's{stage}_{key}'] = value * \ + self.stage_loss_weights[stage] + + if not self.post_assign: + if self.apply_kernel_occlusion: + prev_mask_preds = ops.stop_gradient(scaled_mask_preds[:, :-self.occ_pair_num]) + else: + prev_mask_preds = ops.stop_gradient(scaled_mask_preds) + prev_cls_score = ops.stop_gradient(cls_score) + + return all_stage_loss + + def simple_test(self, + x, + proposal_feats, + mask_preds, + cls_score, + img_metas, + imgs_whwh=None, + rescale=False): + + # Decode initial proposals + num_imgs = len(img_metas) + + object_feats = proposal_feats + scaled_mask_preds = None + for stage in range(self.num_stages): + mask_results = self._mask_forward(stage, x, object_feats, + mask_preds, img_metas) + object_feats = mask_results['object_feats'] + cls_score = mask_results['cls_score'] + mask_preds = mask_results['mask_preds'] + scaled_mask_preds = mask_results['scaled_mask_preds'] + + num_classes = self.mask_head[-1].num_classes + results = [] + + if self.mask_head[-1].loss_cls.use_sigmoid: + cls_score = cls_score.sigmoid() + else: + cls_score = cls_score.softmax(-1)[..., :-1] + + for img_id in range(num_imgs): + cls_score_per_img = cls_score[img_id] + scores_per_img, topk_indices = ops.TopK(sorted=True)( + cls_score_per_img.view(-1), self.test_cfg['max_per_img']) + mask_indices = topk_indices // num_classes + labels_per_img = topk_indices % num_classes + masks_per_img = scaled_mask_preds[img_id][mask_indices] + single_result = self.mask_head[-1].get_seg_masks( + masks_per_img, labels_per_img, scores_per_img, + self.test_cfg, img_metas[img_id]) + results.append(single_result) + return results + + def onnx_export(self, + x, + proposal_feats, + mask_preds, + cls_score, + img_metas, + ): + + + # Decode initial proposals + num_imgs = len(img_metas) + # num_proposals = proposal_feats.size(1) + + object_feats = proposal_feats + scaled_mask_preds = None + for stage in range(self.num_stages): + cls_score, mask_preds, scaled_mask_preds, object_feats = self._mask_forward_export(stage, x, object_feats, + mask_preds, img_metas) + + return scaled_mask_preds, cls_score + + def segm2result_onnx(self, mask_preds, det_labels, cls_scores): + num_classes = self.num_classes + # bbox_result = None + segm_result = [[] for _ in range(num_classes)] + seg_scores = [[] for _ in range(num_classes)] + + mask_preds = mask_preds.detach() # num_det, h,w + det_labels = det_labels.detach() #class id + cls_scores = cls_scores.detach() + + num_ins = mask_preds.shape[0] # num_dets, h, w + for idx in range(num_ins): + segm_result[det_labels[idx]].append(mask_preds[idx]) + 
seg_scores[det_labels[idx]].append(cls_scores[idx]) + # here we only have one classes (text) + segm_result = segm_result[0] # num_cls, num_det, h, w + segm_result = ms.ops.stack(segm_result) # num_det, h, w + seg_scores = seg_scores[0] # num_cls, num_det + seg_scores = ms.ops.stack(seg_scores) # num_det + + return segm_result, seg_scores + + def _mask_forward_export(self, stage, x, object_feats, mask_preds, img_metas): + mask_upsample_stride = 2 + mask_head = self.mask_head[stage] + cls_score, mask_preds, object_feats = mask_head( + x, object_feats, mask_preds, img_metas=img_metas) + if mask_upsample_stride > 1 and (stage == self.num_stages - 1 + or self.training): + interpolate = nn.ResizeBilinear() + scaled_mask_preds = interpolate( + mask_preds, + scale_factor=mask_upsample_stride, + align_corners=False) + else: + scaled_mask_preds = mask_preds + + return cls_score, mask_preds, scaled_mask_preds, object_feats diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py new file mode 100644 index 000000000..9ed2fee07 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py @@ -0,0 +1,293 @@ +import numpy as np + +from .kernel_update_head import KernelUpdateHead +import mindspore as ms +from mindspore import nn, ops +from ..custom_cells import build_loss + + +class CustomKernelUpdateHead(KernelUpdateHead): + def __init__(self, apply_kernel_occlusion=False, kernel_occlusion_cfg=None, **kwargs): + super(CustomKernelUpdateHead, self).__init__(**kwargs) + self.apply_kernel_occlusion = apply_kernel_occlusion + if apply_kernel_occlusion: + self.init_kernel_occlusion(kernel_occlusion_cfg) + + def init_kernel_occlusion(self, kernel_occlusion_cfg): + # prepare config + self.num_proposals = kernel_occlusion_cfg.get('num_proposals') + assert self.num_proposals >= 2 + self.pair_list = [] + for ii in range(self.num_proposals - 1): + for jj in range(ii + 1, self.num_proposals): + self.pair_list.append(ii) + self.pair_list.append(jj) + self.pair_num = len(self.pair_list) // 2 + self.pair_manner = kernel_occlusion_cfg.get('pair_manner', 'sum') + print(f"Manner of merging kernel pair: {self.pair_manner}") + assert self.pair_manner in ['sum', 'cat'] + # prepare layer and init weights + if self.pair_manner == 'sum': + self.union_fc = nn.Dense(self.in_channels, self.in_channels) + self.interact_fc = nn.Dense(self.in_channels, self.in_channels) + else: + self.union_fc = nn.Dense(2 * self.in_channels, self.in_channels) + self.interact_fc = nn.Dense(2 * self.in_channels, self.in_channels) + self.apply_occ_union = kernel_occlusion_cfg['u_mask_loss']['loss_weight'] > 0 or kernel_occlusion_cfg['u_dice_loss']['loss_weight'] > 0 + self.occ_union_mask_loss = build_loss(kernel_occlusion_cfg.get('u_mask_loss').copy()) + self.occ_interact_mask_loss = build_loss(kernel_occlusion_cfg.get('i_mask_loss').copy()) + self.occ_union_dice_loss = build_loss(kernel_occlusion_cfg.get('u_dice_loss').copy()) + self.occ_interact_dice_loss = build_loss(kernel_occlusion_cfg.get('i_dice_loss').copy()) + + def kernel_occlusion(self, obj_feat): + """ + Apply Kernel Occlusion operation. 
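+
+        For every proposal pair in ``pair_list``, a "union" kernel and an
+        "interaction" kernel are produced via ``union_fc`` and
+        ``interact_fc``; the masks predicted by these extra kernels are
+        supervised by the occlusion losses in ``loss``.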
+ + :param + :obj_feat : Tensor with shape (B, N, K * K, C), where K is convolution kernel size + """ + b, n, _, c = obj_feat.shape + # B, N, K * K, C -> B, N, K * K * C + kernels = obj_feat.reshape(b, n, -1) + assert n == self.num_proposals + # B, pair_num, 2, K * K * C + kernel_pairs = kernels[:, self.pair_list].reshape(b, self.pair_num, 2, -1) + if self.pair_manner == 'sum': + # B, pair_num, K * K * C + kernel_pairs = kernel_pairs.sum(axis=2) + else: + # B, pair_num, 2 * K * K * C + kernel_pairs = kernel_pairs.reshape(b, self.pair_num, -1) + # union and interact kernels + # B, 2 * pair_num, K * K * C -> B, 2 * pair_num, K * K, C + ui_kernels = ops.concat([ + self.union_fc(kernel_pairs), self.interact_fc(kernel_pairs) + ], axis=1).reshape(b, 2 * self.pair_num, -1, c) + + return ui_kernels + + def construct(self, + x, + proposal_feat, + mask_preds, + prev_cls_score=None, + mask_shape=None, + img_metas=None): + N, num_proposals = proposal_feat.shape[:2] + if self.feat_transform is not None: + x = self.feat_transform(x) + C, H, W = x.shape[-3:] + + mask_h, mask_w = mask_preds.shape[-2:] + if mask_h != H or mask_w != W: + gather_mask = self.interpolate( + mask_preds, size=(H, W), align_corners=False) + else: + gather_mask = mask_preds + + + # sigmoid_masks = gather_mask.sigmoid() + sigmoid_masks = ms.ops.sigmoid(gather_mask) + nonzero_inds = sigmoid_masks > self.hard_mask_thr + sigmoid_masks = nonzero_inds.astype(ms.float32) + + # einsum is faster than bmm by 30% + einsum = ops.Einsum('bnhw,bchw->bnc') + x_feat = einsum((sigmoid_masks, x)) + + # obj_feat in shape [B, N, C, K, K] -> [B, N, C, K*K] -> [B, N, K*K, C] + proposal_feat = proposal_feat.reshape(N, num_proposals, + self.in_channels, + -1).transpose(0, 1, 3, 2) + obj_feat = self.kernel_update_conv(x_feat, proposal_feat) + + # [B, N, K*K, C] -> [B, N, K*K*C] -> [N, B, K*K*C] + obj_feat = obj_feat.reshape(N, num_proposals, -1).transpose(1, 0, 2) + obj_feat = self.attention_norm(self.attention(obj_feat)) + # [N, B, K*K*C] -> [B, N, K*K*C] + obj_feat = obj_feat.transpose(1, 0, 2) + + # obj_feat in shape [B, N, K*K*C] -> [B, N, K*K, C] + obj_feat = obj_feat.reshape(N, num_proposals, -1, self.in_channels) + + # FFN + if self.with_ffn: + obj_feat = self.ffn_norm(self.ffn(obj_feat)) + + cls_feat = obj_feat.sum(-2) + mask_feat = obj_feat + + ui_pair_num = None + if self.apply_kernel_occlusion and self.training: + ui_kernels = self.kernel_occlusion(obj_feat) + ui_pair_num = ui_kernels.shape[1] + mask_feat = ops.concat([mask_feat, ui_kernels], axis=1) + + for cls_layer in self.cls_fcs: + cls_feat = cls_layer(cls_feat) + for reg_layer in self.mask_fcs: + mask_feat = reg_layer(mask_feat) + + cls_score = self.fc_cls(cls_feat).view(N, num_proposals, -1) + # [B, N, K*K, C] -> [B, N, C, K*K] + mask_feat = self.fc_mask(mask_feat).transpose(0, 1, 3, 2) + + if (self.mask_transform_stride == 2 and self.feat_gather_stride == 1): + mask_x = self.interpolate( + x, scale_factor=0.5, align_corners=False) + H, W = mask_x.shape[-2:] + else: + mask_x = x + # [B, N, C, K*K] -> [B*N, C, K, K] + if self.apply_kernel_occlusion and self.training: + tmp_num = num_proposals + ui_pair_num + mask_feat = mask_feat.reshape(N, tmp_num, C, + self.conv_kernel_size, + self.conv_kernel_size) + else: + mask_feat = mask_feat.reshape(N, num_proposals, C, + self.conv_kernel_size, + self.conv_kernel_size) + # [B, C, H, W] -> [1, B*C, H, W] + new_mask_preds = [] + for i in range(N): + new_mask_preds.append( + ops.conv2d( + mask_x[i:i + 1], + mask_feat[i], + 
padding=int(self.conv_kernel_size // 2))) + + new_mask_preds = ops.concat(new_mask_preds, axis=0) + if self.apply_kernel_occlusion and self.training: + new_mask_preds = new_mask_preds.reshape(N, num_proposals + ui_pair_num, H, W) + else: + new_mask_preds = new_mask_preds.reshape(N, num_proposals, H, W) + if self.mask_transform_stride == 2: + new_mask_preds = self.interpolate( + new_mask_preds, + scale_factor=2, + align_corners=False) + + if mask_shape is not None and mask_shape[0] != H: + new_mask_preds = self.interpolate( + new_mask_preds, + size=mask_shape, + mode='bilinear') + + return cls_score, new_mask_preds, obj_feat.transpose(0, 1, 3, 2).reshape( + N, num_proposals, self.in_channels, self.conv_kernel_size, + self.conv_kernel_size) + + def loss(self, + object_feats, + cls_score, + mask_pred, + labels, + label_weights, + mask_targets, + mask_weights, + imgs_whwh=None, + reduction_override=None, + **kwargs): + if not self.apply_kernel_occlusion: + return super(CustomKernelUpdateHead, self).loss( + object_feats, + cls_score, + mask_pred, + labels, + label_weights, + mask_targets, + mask_weights, + imgs_whwh=imgs_whwh, + reduction_override=reduction_override, + **kwargs + ) + assert mask_pred.shape[1] > 2 * self.pair_num + losses = super(CustomKernelUpdateHead, self).loss( + object_feats, + cls_score, + mask_pred[:, :-2 * self.pair_num], + labels, + label_weights, + mask_targets, + mask_weights, + imgs_whwh=imgs_whwh, + reduction_override=reduction_override, + **kwargs + ) + b, _, h, w = mask_pred.shape + occ_mask_pred = mask_pred[:, -2 * self.pair_num:] + # determine positive indexes + bg_class_ind = self.num_classes + # note in spare rcnn num_gt == num_pos + pos_inds = (labels >= 0).astype(ms.int32) & (labels < bg_class_ind).astype(ms.int32) + pos_inds = pos_inds.reshape(b, -1) + mask_targets = mask_targets.reshape(b, -1, h, w) + # select gt pairs + pred_union_inds = [] + pred_interact_inds = [] + occ_union_targets = [] + occ_interact_targets = [] + for batch_idx in range(b): + num_valid = pos_inds[batch_idx].sum().asnumpy().item() + if num_valid <= 1: + continue + valid_inds = ops.nonzero(pos_inds[batch_idx]).view(-1).asnumpy().tolist() + valid_inds = sorted(valid_inds) + valid_target_pairs = [] + union_pred_pair = [] + iteract_pred_pair = [] + for ii in range(num_valid - 1): + for jj in range(ii + 1, num_valid): + valid_target_pairs.append(ii) + valid_target_pairs.append(jj) + # get corresponding index in pair list + a, b = valid_inds[ii], valid_inds[jj] + idx_in_pair = (self.num_proposals - 1 + self.num_proposals - a) * a // 2 + b - a - 1 + union_pred_pair.append([batch_idx, idx_in_pair]) + iteract_pred_pair.append([batch_idx, idx_in_pair + self.pair_num]) + # check if this code contain bug + candidate_pair_list = np.array(self.pair_list).reshape(-1, 2) + assert candidate_pair_list[idx_in_pair][0] == a and candidate_pair_list[idx_in_pair][1] == b + # union_of_img1, interact_of_img1, union_of_img2, ... 
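+            # idx_in_pair above maps the sorted pair (a, b) to its position
+            # in pair_list: rows 0..a-1 contribute (N-1) + (N-2) + ... + (N-a)
+            # = (2N - a - 1) * a / 2 entries (N = num_proposals), plus the
+            # offset (b - a - 1) inside row a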
+ pred_union_inds += union_pred_pair + pred_interact_inds += iteract_pred_pair + # prepare gt + # num_pair, 2, h, w -> we apply hard target for occlusion target + mask_target = mask_targets[batch_idx, ms.Tensor(np.nonzero(pos_inds[batch_idx].asnumpy())[0])][ + valid_target_pairs].reshape(-1, 2, h, w).astype(ms.bool_) + # 2 * num_pair, h, w + # union_wo_interaction, interaction area + union_area = mask_target[:, 0].astype(ms.int32) | mask_target[:, 1].astype(ms.int32) + interaction_area = mask_target[:, 0].astype(ms.int32) & mask_target[:, 1].astype(ms.int32) + # union without interaction area + # occ_union_targets.append(union_area ^ interaction_area) + occ_union_targets.append(union_area) + occ_interact_targets.append(interaction_area) + if len(occ_interact_targets) == 0: + losses.update(loss_occ_mask=occ_mask_pred.sum() * 0, + loss_occ_dice=occ_mask_pred.sum() * 0) + return losses + # select prediction + occ_union_targets = ops.concat(occ_union_targets, axis=0).astype(ms.float32) + occ_interact_targets = ops.concat(occ_interact_targets, axis=0).astype(ms.float32) + occ_union_preds = occ_mask_pred[[x[0] for x in pred_union_inds], + [x[1] for x in pred_union_inds]] + occ_interact_preds = occ_mask_pred[[x[0] for x in pred_interact_inds], + [x[1] for x in pred_interact_inds]] + if self.apply_occ_union: + loss_occ_union_mask = self.occ_union_mask_loss(occ_union_preds, occ_union_targets) + loss_occ_union_dice = self.occ_union_dice_loss(occ_union_preds, occ_union_targets) + loss_occ_interact_mask = self.occ_interact_mask_loss(occ_interact_preds, occ_interact_targets) + loss_occ_interact_dice = self.occ_interact_dice_loss(occ_interact_preds, occ_interact_targets) + losses.update( + loss_occ_mask=0.5*(loss_occ_union_mask+loss_occ_interact_mask), + loss_occ_dice=0.5*(loss_occ_union_dice+loss_occ_interact_dice) + ) + else: + losses.update( + loss_occ_mask=self.occ_interact_mask_loss(occ_interact_preds, occ_interact_targets), + loss_occ_dice=self.occ_interact_dice_loss(occ_interact_preds, occ_interact_targets) + ) + + return losses diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py new file mode 100644 index 000000000..e8acd0e0e --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py @@ -0,0 +1,339 @@ +import numpy as np + +import mindspore as ms +from mindspore import nn, ops +from mindspore.common import initializer as init +from mindspore.communication.management import GlobalComm, get_group_size + +from ..custom_cells import (build_loss, multi_apply, ConvModule, FFN, MultiheadAttention) +from .kernel_updator import KernelUpdator + + +class KernelUpdateHead(nn.Cell): + + def __init__(self, + num_classes=80, + num_ffn_fcs=2, + num_heads=8, + num_cls_fcs=1, + num_mask_fcs=3, + feedforward_channels=2048, + in_channels=256, + out_channels=256, + dropout=0.0, + mask_thr=0.5, + ffn_act_cfg=dict(type='ReLU', inplace=True), + conv_kernel_size=3, + feat_transform_cfg=None, + hard_mask_thr=0.5, + kernel_init=False, + with_ffn=True, + mask_out_stride=4, + relative_coors=False, + relative_coors_off=False, + feat_gather_stride=1, + mask_transform_stride=1, + mask_upsample_stride=1, + num_thing_classes=80, + num_stuff_classes=53, + mask_assign_stride=4, + ignore_label=255, + thing_label_in_seg=0, + kernel_updator_cfg=dict(), + loss_mask=dict( + type='CrossEntropyLoss', use_mask=True, loss_weight=1.0), + loss_dice=dict(type='DiceLoss', loss_weight=3.0), + loss_cls=dict( 
+ type='FocalLoss', + use_sigmoid=True, + gamma=2.0, + alpha=0.25, + loss_weight=2.0), + num_proposals=4): + super(KernelUpdateHead, self).__init__() + self.num_classes = num_classes + self.loss_cls = build_loss(loss_cls) + self.loss_mask = build_loss(loss_mask) + self.loss_dice = build_loss(loss_dice) + + self.in_channels = in_channels + self.out_channels = out_channels + self.mask_thr = mask_thr + self.fp16_enabled = False + self.dropout = dropout + + self.num_heads = num_heads + self.hard_mask_thr = hard_mask_thr + self.kernel_init = kernel_init + self.with_ffn = with_ffn + self.mask_out_stride = mask_out_stride + self.relative_coors = relative_coors + self.relative_coors_off = relative_coors_off + self.conv_kernel_size = conv_kernel_size + self.feat_gather_stride = feat_gather_stride + self.mask_transform_stride = mask_transform_stride + self.mask_upsample_stride = mask_upsample_stride + + self.num_thing_classes = num_thing_classes + self.num_stuff_classes = num_stuff_classes + self.mask_assign_stride = mask_assign_stride + self.ignore_label = ignore_label + self.thing_label_in_seg = thing_label_in_seg + + self.attention = MultiheadAttention(in_channels * conv_kernel_size**2, + num_heads, dropout, num_proposals=num_proposals) + # self.attention_norm = build_norm_layer( + # dict(type='LN'), in_channels * conv_kernel_size**2)[1] + self.attention_norm = nn.LayerNorm([in_channels * conv_kernel_size ** 2]) + + self.kernel_update_conv = KernelUpdator(**kernel_updator_cfg) + + if feat_transform_cfg is not None: + kernel_size = feat_transform_cfg.pop('kernel_size', 1) + self.feat_transform = ConvModule( + in_channels, + in_channels, + kernel_size, + stride=feat_gather_stride, + padding=int(feat_gather_stride // 2), + **feat_transform_cfg) + else: + self.feat_transform = None + + if self.with_ffn: + self.ffn = FFN( + in_channels, + feedforward_channels, + num_ffn_fcs, + act_cfg=ffn_act_cfg, + dropout_layer=dropout) + # self.ffn_norm = build_norm_layer(dict(type='LN'), in_channels)[1] + self.ffn_norm = nn.LayerNorm([in_channels]) + + self.cls_fcs = nn.CellList() + for _ in range(num_cls_fcs): + self.cls_fcs.append( + nn.Dense(in_channels, in_channels, has_bias=False)) + self.cls_fcs.append( + nn.LayerNorm([in_channels])) + self.cls_fcs.append(nn.ReLU()) + + if self.loss_cls.use_sigmoid: + self.fc_cls = nn.Dense(in_channels, self.num_classes) + else: + self.fc_cls = nn.Dense(in_channels, self.num_classes + 1) + + self.mask_fcs = nn.CellList() + for _ in range(num_mask_fcs): + self.mask_fcs.append( + nn.Dense(in_channels, in_channels, has_bias=False)) + self.mask_fcs.append(nn.LayerNorm([in_channels])) + self.mask_fcs.append(nn.ReLU()) + + self.fc_mask = nn.Dense(in_channels, out_channels) + self.allreduce = ops.AllReduce(ops.ReduceOp.SUM, GlobalComm.WORLD_COMM_GROUP) + self.interpolate = nn.ResizeBilinear() + + def init_weights(self): + """Use xavier initialization for all weight parameter and set + classification head bias as a specific value when use focal loss.""" + self.init_parameters_data() + for _, m in self.cells_and_names(): + if isinstance(m, nn.Conv2d): + n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels + m.weight.set_data(ms.Tensor(np.random.normal(0, np.sqrt(2. 
/ n), + m.weight.data.shape).astype("float32"))) + if m.bias is not None: + m.bias.set_data( + ms.Tensor(np.zeros(m.bias.data.shape, dtype="float32"))) + elif isinstance(m, nn.BatchNorm2d): + m.gamma.set_data( + ms.Tensor(np.ones(m.gamma.data.shape, dtype="float32"))) + m.beta.set_data( + ms.Tensor(np.zeros(m.beta.data.shape, dtype="float32"))) + elif isinstance(m, nn.Dense): + m.weight.set_data( + ms.Tensor(np.random.normal(0, 0.001, m.weight.data.shape).astype("float32"))) + if m.has_bias: + m.bias.set_data( + ms.Tensor(np.random.normal(0, 0.001, m.bias.data.shape).astype("float32"))) + if self.loss_cls.use_sigmoid: + self.fc_cls.bias.set_data(init.initializer(0.01, self.fc_cls.bias.shape)) + if self.kernel_init: + print('mask kernel in mask head is normal initialized by std 0.01') + # nn.init.normal_(self.fc_mask.weight, mean=0, std=0.01) + self.fc_mask.weight.set_data(init.initializer( + init.Normal(0.01, 0), self.fc_mask.weight.shape)) + + def construct(self, *inputs, **kwargs): + raise NotImplementedError + + def loss(self, + object_feats, + cls_score, + mask_pred, + labels, + label_weights, + mask_targets, + mask_weights, + imgs_whwh=None, + reduction_override=None, + **kwargs): + + losses = dict() + bg_class_ind = self.num_classes + # note in spare rcnn num_gt == num_pos + pos_inds = (labels >= 0).astype(ms.int32) & (labels < bg_class_ind).astype(ms.int32) + num_pos = pos_inds.sum().astype(ms.float32) + # avg_factor = reduce_mean(num_pos).clamp_(min=1.0) + + num_preds = mask_pred.shape[0] * mask_pred.shape[1] + assert mask_pred.shape[0] == cls_score.shape[0] + assert mask_pred.shape[1] == cls_score.shape[1] + + if cls_score is not None: + get_size = ops.Size() + if get_size(cls_score) > 0: + avg_factor = labels.astype(ms.float32).asnumpy().sum() + H, W = cls_score.shape[:2] + losses['loss_cls'] = self.loss_cls( + cls_score.reshape(-1, 1), + labels.reshape(-1)).sum() / avg_factor + if mask_pred is not None: + bool_pos_inds = pos_inds.astype(ms.bool_) + # 0~self.num_classes-1 are FG, self.num_classes is BG + # do not perform bounding box regression for BG anymore. 
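+            # only positive (matched) proposals contribute to the mask and
+            # dice losses; with no positives, zero-valued losses keep the
+            # computation graph connected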
+ H, W = mask_pred.shape[-2:] + if bool_pos_inds.any(): + candi_index = ops.nonzero(bool_pos_inds).squeeze(-1) + pos_mask_pred = mask_pred.reshape(num_preds, H, + W)[candi_index] + pos_mask_targets = mask_targets[candi_index] + losses['loss_mask'] = self.loss_mask(pos_mask_pred, + pos_mask_targets) + losses['loss_dice'] = self.loss_dice(pos_mask_pred, + pos_mask_targets) + else: + losses['loss_mask'] = mask_pred.sum() * 0 + losses['loss_dice'] = mask_pred.sum() * 0 + + return losses + + def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask, + pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls, + cfg): + + num_pos = pos_mask.shape[0] + num_neg = neg_mask.shape[0] + num_samples = num_pos + num_neg + H, W = pos_mask.shape[-2:] + # original implementation uses new_zeros since BG are set to be 0 + # now use empty & fill because BG cat_id = num_classes, + # FG cat_id = [0, num_classes-1] + labels = ms.numpy.full((num_samples, ), + self.num_classes, + dtype=ms.int64) + new_zeros = ops.Zeros() + label_weights = new_zeros((num_samples, self.num_classes), pos_mask.dtype) + mask_targets = new_zeros((num_samples, H, W), pos_mask.dtype) + mask_weights = new_zeros((num_samples, H, W), pos_mask.dtype) + if num_pos > 0: + labels[pos_inds] = pos_gt_labels + pos_weight = 1.0 if cfg['pos_weight'] <= 0 else cfg['pos_weight'] + label_weights[pos_inds] = pos_weight + pos_mask_targets = pos_gt_mask + mask_targets[pos_inds] = pos_mask_targets + mask_weights[pos_inds] = 1 + + if num_neg > 0: + label_weights[neg_inds] = 1.0 + + return labels, label_weights, mask_targets, mask_weights + + def get_targets(self, + sampling_results, + gt_mask, + gt_labels, + rcnn_train_cfg, + concat=True, + gt_sem_seg=None, + gt_sem_cls=None): + + pos_inds_list = [res.pos_inds for res in sampling_results] + neg_inds_list = [res.neg_inds for res in sampling_results] + pos_mask_list = [res.pos_masks for res in sampling_results] + neg_mask_list = [res.neg_masks for res in sampling_results] + pos_gt_mask_list = [res.pos_gt_masks for res in sampling_results] + pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results] + if gt_sem_seg is None: + # me: fix hard-code bug + num_imgs = len(sampling_results) + gt_sem_seg = [None] * num_imgs + gt_sem_cls = [None] * num_imgs + + labels, label_weights, mask_targets, mask_weights = multi_apply( + self._get_target_single, + pos_inds_list, + neg_inds_list, + pos_mask_list, + neg_mask_list, + pos_gt_mask_list, + pos_gt_labels_list, + gt_sem_seg, + gt_sem_cls, + cfg=rcnn_train_cfg) + if concat: + labels = ops.concat(labels, 0) + label_weights = ops.concat(label_weights, 0) + mask_targets = ops.concat(mask_targets, 0) + mask_weights = ops.concat(mask_weights, 0) + return labels, label_weights, mask_targets, mask_weights + + def rescale_masks(self, masks_per_img, img_meta): + h, w, _ = img_meta['img_shape'] + expand_dims = ops.ExpandDims() + masks_per_img = self.interpolate( + expand_dims(masks_per_img, 0).sigmoid(), + size=img_meta['batch_input_shape'], + align_corners=False) + + masks_per_img = masks_per_img[:, :, :h, :w] + ori_shape = img_meta['ori_shape'] + seg_masks = self.interpolate( + ms.Tensor(masks_per_img.asnumpy()), + size=tuple(ori_shape[:2].asnumpy().tolist()), + align_corners=False).squeeze(0) + return seg_masks + + def get_seg_masks(self, masks_per_img, labels_per_img, scores_per_img, + test_cfg, img_meta): + # resize mask predictions back + seg_masks = self.rescale_masks(masks_per_img, img_meta) + seg_masks = seg_masks > test_cfg['mask_thr'] + bbox_result, 
segm_result = self.segm2result(seg_masks, labels_per_img, + scores_per_img) + return bbox_result, segm_result + + def segm2result(self, mask_preds, det_labels, cls_scores): + num_classes = self.num_classes + bbox_result = None + segm_result = [[] for _ in range(num_classes)] + mask_preds = mask_preds.asnumpy() + det_labels = det_labels.asnumpy() + cls_scores = cls_scores.asnumpy() + + num_ins = mask_preds.shape[0] + # fake bboxes + bboxes = np.zeros((num_ins, 5), dtype=np.float32) + bboxes[:, -1] = cls_scores + bbox_result = [bboxes[det_labels == i, :] for i in range(num_classes)] + for idx in range(num_ins): + segm_result[det_labels[idx]].append(mask_preds[idx]) + return bbox_result, segm_result + + def get_seg_masks_onnx(self, masks_per_img, labels_per_img, scores_per_img, + test_cfg, img_meta): + # resize mask predictions back + seg_masks = self.rescale_masks(masks_per_img, img_meta) + seg_masks = seg_masks > test_cfg.mask_thr + return seg_masks diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py new file mode 100644 index 000000000..d4ba7295a --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py @@ -0,0 +1,91 @@ +import mindspore as ms +from mindspore import nn, ops + + +class KernelUpdator(nn.Cell): + + def __init__(self, + in_channels=256, + feat_channels=64, + out_channels=None, + input_feat_shape=3, + gate_sigmoid=True, + gate_norm_act=False, + activate_out=False, + act_cfg=dict(type='ReLU', inplace=True)): + super(KernelUpdator, self).__init__() + self.in_channels = in_channels + self.feat_channels = feat_channels + self.out_channels_raw = out_channels + self.gate_sigmoid = gate_sigmoid + self.gate_norm_act = gate_norm_act + self.activate_out = activate_out + if isinstance(input_feat_shape, int): + input_feat_shape = [input_feat_shape] * 2 + self.input_feat_shape = input_feat_shape + self.act_cfg = act_cfg + self.out_channels = out_channels if out_channels else in_channels + + self.num_params_in = self.feat_channels + self.num_params_out = self.feat_channels + self.dynamic_layer = nn.Dense( + self.in_channels, self.num_params_in + self.num_params_out) + self.input_layer = nn.Dense( + self.in_channels, self.num_params_in + self.num_params_out) + self.input_gate = nn.Dense(self.in_channels, self.feat_channels) + self.update_gate = nn.Dense(self.in_channels, self.feat_channels) + if self.gate_norm_act: + self.gate_norm = nn.LayerNorm([self.feat_channels]) + + self.norm_in = nn.LayerNorm([self.feat_channels]) + self.norm_out = nn.LayerNorm([self.feat_channels]) + self.input_norm_in = nn.LayerNorm([self.feat_channels]) + self.input_norm_out = nn.LayerNorm([self.feat_channels]) + + if act_cfg and act_cfg['type'] == 'ReLU': + self.activation = nn.ReLU() + else: + self.activation = nn.Identity() + self.fc_layer = nn.Dense(self.feat_channels, self.out_channels) + self.fc_norm = nn.LayerNorm([self.out_channels]) + + def construct(self, update_feature, input_feature): + update_feature = update_feature.reshape(-1, self.in_channels) + num_proposals = update_feature.shape[0] + parameters = self.dynamic_layer(update_feature) + param_in = parameters[:, :self.num_params_in].view( + -1, self.feat_channels) + param_out = parameters[:, -self.num_params_out:].view( + -1, self.feat_channels) + input_feats = self.input_layer( + input_feature.reshape(num_proposals, -1, self.feat_channels)) + input_in = input_feats[..., :self.num_params_in] + input_out = input_feats[..., 
-self.num_params_out:] + + expand_dims = ops.ExpandDims() + gate_feats = input_in * expand_dims(param_in, -2) + if self.gate_norm_act: + gate_feats = self.activation(self.gate_norm(gate_feats)) + + input_gate = self.input_norm_in(self.input_gate(gate_feats)) + update_gate = self.norm_in(self.update_gate(gate_feats)) + if self.gate_sigmoid: + # input_gate = input_gate.sigmoid() + input_gate = ms.ops.sigmoid(input_gate) + # update_gate = update_gate.sigmoid() + update_gate = ms.ops.sigmoid(update_gate) + param_out = self.norm_out(param_out) + input_out = self.input_norm_out(input_out) + + if self.activate_out: + param_out = self.activation(param_out) + input_out = self.activation(input_out) + + # param_out has shape (batch_size, feat_channels, out_channels) + features = update_gate * expand_dims(param_out, -2) + input_gate * input_out + + features = self.fc_layer(features) + features = self.fc_norm(features) + features = self.activation(features) + + return features diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/__init__.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/__init__.py new file mode 100644 index 000000000..7c8d0d8c3 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/__init__.py @@ -0,0 +1,4 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/24 23:14 +# @Author : WeiHua diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py new file mode 100644 index 000000000..aa8cef6d8 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py @@ -0,0 +1,582 @@ +import numpy as np +import mindspore as ms +from mindspore import nn, ops +from mindspore.common import initializer as init +from mindspore.communication.management import GlobalComm, get_group_size +from .semantic_fpn_wrapper import SemanticFPNWrapper +from ..custom_cells import (ConvModule, normal_init, build_loss, multi_apply, + build_sampler, build_assigner) +from src.model_utils.configs import Config + +from mindspore import log as logger + + +def bias_init_with_prob(prior_prob: float) -> float: + """initialize conv/fc bias value according to a given probability value.""" + bias_init = float(-np.log((1 - prior_prob) / prior_prob)) + return bias_init + + +class ConvKernelHead(nn.Cell): + + def __init__(self, + num_proposals=100, + in_channels=256, + out_channels=256, + num_heads=8, + num_cls_fcs=1, + num_seg_convs=1, + num_loc_convs=1, + att_dropout=False, + localization_fpn=None, + conv_kernel_size=1, + norm_cfg=dict(type='GN', num_groups=32), + semantic_fpn=True, + train_cfg=None, + num_classes=80, + xavier_init_kernel=False, + kernel_init_std=0.01, + use_binary=False, + proposal_feats_with_obj=False, + loss_mask=None, + loss_seg=None, + loss_cls=None, + loss_dice=None, + loss_rank=None, + feat_downsample_stride=1, + feat_refine_stride=1, + feat_refine=True, + with_embed=False, + feat_embed_only=False, + conv_normal_init=False, + mask_out_stride=4, + hard_target=False, + num_thing_classes=80, + num_stuff_classes=53, + mask_assign_stride=4, + ignore_label=255, + thing_label_in_seg=0, + cat_stuff_mask=False, + **kwargs): + super(ConvKernelHead, self).__init__() + self.num_proposals = num_proposals + self.num_cls_fcs = num_cls_fcs + self.train_cfg = Config(train_cfg) + self.in_channels = in_channels + self.out_channels = out_channels + self.num_classes = num_classes + self.proposal_feats_with_obj = proposal_feats_with_obj + self.sampling = False + 
self.localization_fpn = SemanticFPNWrapper(**localization_fpn)
+        self.semantic_fpn = semantic_fpn
+        self.norm_cfg = norm_cfg
+        self.num_heads = num_heads
+        self.att_dropout = att_dropout
+        self.mask_out_stride = mask_out_stride
+        self.hard_target = hard_target
+        self.conv_kernel_size = conv_kernel_size
+        self.xavier_init_kernel = xavier_init_kernel
+        self.kernel_init_std = kernel_init_std
+        self.feat_downsample_stride = feat_downsample_stride
+        self.feat_refine_stride = feat_refine_stride
+        self.conv_normal_init = conv_normal_init
+        self.feat_refine = feat_refine
+        self.with_embed = with_embed
+        self.feat_embed_only = feat_embed_only
+        self.num_loc_convs = num_loc_convs
+        self.num_seg_convs = num_seg_convs
+        self.use_binary = use_binary
+        self.num_thing_classes = num_thing_classes
+        self.num_stuff_classes = num_stuff_classes
+        self.mask_assign_stride = mask_assign_stride
+        self.ignore_label = ignore_label
+        self.thing_label_in_seg = thing_label_in_seg
+        self.cat_stuff_mask = cat_stuff_mask
+
+        if loss_mask is not None:
+            self.loss_mask = build_loss(loss_mask)
+        else:
+            self.loss_mask = loss_mask
+
+        if loss_dice is not None:
+            self.loss_dice = build_loss(loss_dice)
+        else:
+            self.loss_dice = loss_dice
+
+        if loss_seg is not None:
+            self.loss_seg = build_loss(loss_seg)
+        else:
+            self.loss_seg = loss_seg
+
+        if loss_cls is not None:
+            self.loss_cls = build_loss(loss_cls)
+        else:
+            self.loss_cls = loss_cls
+
+        if loss_rank is not None:
+            self.loss_rank = build_loss(loss_rank)
+        else:
+            self.loss_rank = loss_rank
+
+        if self.train_cfg:
+            self.assigner = build_assigner(self.train_cfg.assigner)
+            # use PseudoSampler when sampling is False
+            if self.sampling and hasattr(self.train_cfg, 'sampler'):
+                sampler_cfg = self.train_cfg.sampler
+            else:
+                sampler_cfg = dict(type='MaskPseudoSampler')
+            self.sampler = build_sampler(sampler_cfg)
+        self._init_layers()
+        self.allreduce = ops.AllReduce(ops.ReduceOp.SUM, GlobalComm.WORLD_COMM_GROUP)
+        self.init_weights()
+        self.sigmoid = ops.Sigmoid()
+
+    def _init_layers(self):
+        """Initialize a sparse set of proposal boxes and proposal features."""
+        self.init_kernels = nn.Conv2d(
+            self.out_channels,
+            self.num_proposals,
+            self.conv_kernel_size,
+            padding=int(self.conv_kernel_size // 2),
+            has_bias=False)
+
+        if self.semantic_fpn:
+            if self.loss_seg.use_sigmoid:
+                self.conv_seg = nn.Conv2d(self.out_channels, self.num_classes,
+                                          1)
+            else:
+                self.conv_seg = nn.Conv2d(self.out_channels,
+                                          self.num_classes + 1, 1)
+
+        if self.feat_downsample_stride > 1 and self.feat_refine:
+            self.ins_downsample = ConvModule(
+                self.in_channels,
+                self.out_channels,
+                3,
+                stride=self.feat_refine_stride,
+                padding=1,
+                norm_cfg=self.norm_cfg)
+            self.seg_downsample = ConvModule(
+                self.in_channels,
+                self.out_channels,
+                3,
+                stride=self.feat_refine_stride,
+                padding=1,
+                norm_cfg=self.norm_cfg)
+
+        self.loc_convs = nn.CellList()
+        for i in range(self.num_loc_convs):
+            self.loc_convs.append(
+                ConvModule(
+                    self.in_channels,
+                    self.out_channels,
+                    1,
+                    norm_cfg=self.norm_cfg))
+
+        self.seg_convs = nn.CellList()
+        for i in range(self.num_seg_convs):
+            self.seg_convs.append(
+                ConvModule(
+                    self.in_channels,
+                    self.out_channels,
+                    1,
+                    norm_cfg=self.norm_cfg))
+
+    def init_weights(self):
+        self.localization_fpn.init_weights()
+
+        if self.feat_downsample_stride > 1 and self.conv_normal_init:
+            logger.info('Initialize convs in KPN head by normal std 0.01')
+            for conv in [self.loc_convs, self.seg_convs]:
+                # cells_and_names() yields (name, cell) pairs
+                for _, m in conv.cells_and_names():
+                    if 
isinstance(m, nn.Conv2d): + normal_init(m, init_gain=0.01) + + if self.semantic_fpn: + bias_seg = bias_init_with_prob(0.01) + if self.loss_seg.use_sigmoid: + normal_init(self.conv_seg, init_gain=0.01, bias=bias_seg) + else: + normal_init(self.conv_seg, mean=0, init_gain=0.01) + if self.xavier_init_kernel: + logger.info('Initialize kernels by xavier uniform') + self.init_kernels.weight.set_data( + init.initializer(init.XavierUniform(), self.init_kernels.weight.shape)) + else: + logger.info( + f'Initialize kernels by normal std: {self.kernel_init_std}') + normal_init(self.init_kernels, mean=0, init_gain=self.kernel_init_std) + + def _decode_init_proposals(self, img, img_metas): + num_imgs = len(img_metas) + localization_feats = self.localization_fpn(img) + if isinstance(localization_feats, list): + loc_feats = localization_feats[0] + else: + loc_feats = localization_feats + for conv in self.loc_convs: + loc_feats = conv(loc_feats) + if self.feat_downsample_stride > 1 and self.feat_refine: + loc_feats = self.ins_downsample(loc_feats) + mask_preds = self.init_kernels(loc_feats) + + if self.semantic_fpn: + if isinstance(localization_feats, list): + semantic_feats = localization_feats[1] + else: + semantic_feats = localization_feats + for conv in self.seg_convs: + semantic_feats = conv(semantic_feats) + if self.feat_downsample_stride > 1 and self.feat_refine: + semantic_feats = self.seg_downsample(semantic_feats) + else: + semantic_feats = None + + if semantic_feats is not None: + seg_preds = self.conv_seg(semantic_feats) + else: + seg_preds = None + + + + proposal_feats = self.init_kernels.weight.clone() + proposal_feats = proposal_feats[None].broadcast_to((num_imgs, ) + proposal_feats.shape) + + if semantic_feats is not None: + x_feats = semantic_feats + loc_feats + else: + x_feats = loc_feats + + if self.proposal_feats_with_obj: + sigmoid_masks = self.sigmoid(mask_preds) + nonzero_inds = sigmoid_masks > 0.5 + if self.use_binary: + sigmoid_masks = nonzero_inds.astype(ms.float32) + else: + sigmoid_masks = nonzero_inds.astype(ms.float32) * sigmoid_masks + einsum = ops.Einsum('bnhw,bchw->bnc') + obj_feats = einsum((sigmoid_masks, x_feats)) + else: + obj_feats = None + + cls_scores = None + + if self.proposal_feats_with_obj: + proposal_feats = proposal_feats + obj_feats.view( + num_imgs, self.num_proposals, self.out_channels, 1, 1) + + if self.cat_stuff_mask and not self.training: + mask_preds = ops.concat( + [mask_preds, seg_preds[:, self.num_thing_classes:]], axis=1) + stuff_kernels = self.conv_seg.weight[self. 
+                                                 num_thing_classes:].clone()
+            stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape)
+            proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1)
+
+        return proposal_feats, x_feats, mask_preds, cls_scores, seg_preds
+
+    def forward_train(self,
+                      img,
+                      img_metas,
+                      gt_masks,
+                      gt_labels,
+                      gt_sem_seg=None,
+                      gt_sem_cls=None):
+        """Forward function in training stage."""
+        num_imgs = len(img_metas)
+        results = self._decode_init_proposals(img, img_metas)
+        (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) = results
+        if self.feat_downsample_stride > 1:
+            interpolate = nn.ResizeBilinear()
+            scaled_mask_preds = interpolate(
+                mask_preds,
+                scale_factor=self.feat_downsample_stride,
+                align_corners=False)
+            if seg_preds is not None:
+                scaled_seg_preds = interpolate(
+                    seg_preds,
+                    scale_factor=self.feat_downsample_stride,
+                    align_corners=False)
+            else:
+                scaled_seg_preds = None
+        else:
+            scaled_mask_preds = mask_preds
+            scaled_seg_preds = seg_preds
+
+        if self.hard_target:
+            gt_masks = [x.bool().astype(ms.float32) for x in gt_masks]
+
+        sampling_results = []
+        if cls_scores is None:
+            detached_cls_scores = [None] * num_imgs
+        else:
+            detached_cls_scores = ops.stop_gradient(cls_scores)
+        for i in range(num_imgs):
+            assign_result = self.assigner.assign(ops.stop_gradient(scaled_mask_preds[i]),
+                                                 detached_cls_scores[i],
+                                                 gt_masks[i], gt_labels[i],
+                                                 img_metas[i])
+            sampling_result = self.sampler.sample(assign_result,
+                                                  scaled_mask_preds[i],
+                                                  gt_masks[i])
+            sampling_results.append(sampling_result)
+
+        mask_targets = self.get_targets(
+            sampling_results,
+            gt_masks,
+            self.train_cfg,
+            True,
+            gt_sem_seg=gt_sem_seg,
+            gt_sem_cls=gt_sem_cls)
+
+        losses = self.loss(scaled_mask_preds, cls_scores, scaled_seg_preds,
+                           proposal_feats, *mask_targets)
+
+        if self.cat_stuff_mask and self.training:
+            mask_preds = ops.concat(
+                [mask_preds, seg_preds[:, self.num_thing_classes:]], axis=1)
+            stuff_kernels = self.conv_seg.weight[self.
+                                                 num_thing_classes:].clone()
+            stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape)
+            proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1)
+
+        return losses, proposal_feats, x_feats, mask_preds, cls_scores
+
+    def loss(self,
+             mask_pred,
+             cls_scores,
+             seg_preds,
+             proposal_feats,
+             labels,
+             label_weights,
+             mask_targets,
+             mask_weights,
+             seg_targets,
+             reduction_override=None,
+             **kwargs):
+        losses = dict()
+        bg_class_ind = self.num_classes
+        # note: in sparse R-CNN, num_gt == num_pos
+        pos_inds = (labels >= 0).astype(ms.int32) & (labels < bg_class_ind).astype(ms.int32)
+        num_preds = mask_pred.shape[0] * mask_pred.shape[1]
+        if cls_scores is not None:
+            raise NotImplementedError
+
+        bool_pos_inds = pos_inds.astype(ms.bool_)
+        # 0~self.num_classes-1 are FG, self.num_classes is BG
+        # do not perform bounding box regression for BG anymore.
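+        # gather the predictions that were matched to ground-truth instances;
+        # when nothing is matched, fall back to zero-valued losses below so
+        # that every loss term stays defined for the optimizer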
+ H, W = mask_pred.shape[-2:] + if bool_pos_inds.sum(): + candi_index = ops.nonzero(bool_pos_inds).squeeze(-1) + pos_mask_pred = mask_pred.reshape(num_preds, H, W)[candi_index] + pos_mask_targets = mask_targets[candi_index] + losses['loss_rpn_mask'] = self.loss_mask(pos_mask_pred, + pos_mask_targets) + losses['loss_rpn_dice'] = self.loss_dice(pos_mask_pred, + pos_mask_targets) + + if self.loss_rank is not None: + raise NotImplementedError + + else: + losses['loss_rpn_mask'] = mask_pred.sum() * 0 + losses['loss_rpn_dice'] = mask_pred.sum() * 0 + if self.loss_rank is not None: + losses['loss_rank'] = mask_pred.sum() * 0 + + if seg_preds is not None: + if self.loss_seg.use_sigmoid: + losses['loss_rpn_seg'] = self.loss_seg(seg_preds.squeeze(1), seg_targets.astype(ms.float32)) + else: + raise NotImplementedError + + return losses + + def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask, + pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls, + cfg): + num_pos = pos_mask.shape[0] + num_neg = neg_mask.shape[0] + num_samples = num_pos + num_neg + H, W = pos_mask.shape[-2:] + # original implementation uses new_zeros since BG are set to be 0 + # now use empty & fill because BG cat_id = num_classes, + # FG cat_id = [0, num_classes-1] + labels = ms.numpy.full((num_samples, ), + self.num_classes, + dtype=ms.int64) + new_zeros = ops.Zeros() + type_ = pos_mask.dtype + label_weights = new_zeros((num_samples, ), type_) + mask_targets = new_zeros((num_samples, H, W), type_) + mask_weights = new_zeros((num_samples, H, W), type_) + seg_targets = ms.numpy.full((H, W), + self.num_classes, + dtype=ms.int64) + + if gt_sem_cls is not None and gt_sem_seg is not None: + gt_sem_seg = gt_sem_seg.bool() + for sem_mask, sem_cls in zip(gt_sem_seg, gt_sem_cls): + seg_targets[sem_mask] = sem_cls.astype(ms.int64) + if num_pos > 0: + labels[pos_inds] = pos_gt_labels + pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight + label_weights[pos_inds] = pos_weight + mask_targets[pos_inds] = pos_gt_mask + mask_weights[pos_inds] = 1 + for i in range(num_pos): + seg_targets[pos_gt_mask[i].astype(ms.bool_)] = pos_gt_labels[i] + + if num_neg > 0: + label_weights[neg_inds] = 1.0 + + return labels, label_weights, mask_targets, mask_weights, seg_targets + + def get_targets(self, + sampling_results, + gt_mask, + rpn_train_cfg, + concat=True, + gt_sem_seg=None, + gt_sem_cls=None): + pos_inds_list = [res.pos_inds for res in sampling_results] + neg_inds_list = [res.neg_inds for res in sampling_results] + pos_mask_list = [res.pos_masks for res in sampling_results] + neg_mask_list = [res.neg_masks for res in sampling_results] + pos_gt_mask_list = [res.pos_gt_masks for res in sampling_results] + pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results] + if gt_sem_seg is None: + # me: fix hard-code bug. 
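+            # pad the per-image semantic annotations with None so that
+            # multi_apply still receives one entry per image instead of a
+            # hard-coded batch size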
+ num_imgs = len(sampling_results) + gt_sem_seg = [None] * num_imgs + gt_sem_cls = [None] * num_imgs + results = multi_apply( + self._get_target_single, + pos_inds_list, + neg_inds_list, + pos_mask_list, + neg_mask_list, + pos_gt_mask_list, + pos_gt_labels_list, + gt_sem_seg, + gt_sem_cls, + cfg=rpn_train_cfg) + (labels, label_weights, mask_targets, mask_weights, + seg_targets) = results + if concat: + labels = ops.concat(labels, 0) + label_weights = ops.concat(label_weights, 0) + mask_targets = ops.concat(mask_targets, 0) + mask_weights = ops.concat(mask_weights, 0) + seg_targets = ops.stack(seg_targets, 0) + return labels, label_weights, mask_targets, mask_weights, seg_targets + + def simple_test_rpn(self, img, img_metas): + """Forward function in testing stage.""" + return self._decode_init_proposals(img, img_metas) + + def forward_dummy(self, img, img_metas): + """Dummy forward function. + + Used in flops calculation. + """ + return self._decode_init_proposals(img, img_metas) + + def onnx_export(self, x): + """Test without augmentation. + Args: + x (tuple[Tensor]): Features from the upstream network, each is + a 4D-tensor. + img_metas (list[dict]): Meta info of each image. + Returns: + Tensor: dets of shape [N, num_det, 5]. + """ + + # rpn_results = self.simple_test_rpn(x, img_metas) + rpn_results = self._decode_init_proposals_export(x) + + # return rpn_results + + (proposal_feats, x_feats, mask_preds, cls_scores, + seg_preds) = rpn_results + return proposal_feats, x_feats, mask_preds, cls_scores, seg_preds + + def _decode_init_proposals_export(self, img): + num_imgs = 1 + # localization_feats = self.localization_fpn(img) + localization_feats = self.localization_fpn.model_export(img) + + if isinstance(localization_feats, list): + loc_feats = localization_feats[0] + else: + loc_feats = localization_feats + for conv in self.loc_convs: + loc_feats = conv(loc_feats) + if self.feat_downsample_stride > 1 and self.feat_refine: + loc_feats = self.ins_downsample(loc_feats) + mask_preds = self.init_kernels(loc_feats) + + # return mask_preds + + if self.semantic_fpn: + if isinstance(localization_feats, list): + semantic_feats = localization_feats[1] + else: + semantic_feats = localization_feats + for conv in self.seg_convs: + semantic_feats = conv(semantic_feats) + if self.feat_downsample_stride > 1 and self.feat_refine: + semantic_feats = self.seg_downsample(semantic_feats) + else: + semantic_feats = None + + if semantic_feats is not None: + seg_preds = self.conv_seg(semantic_feats) + else: + seg_preds = None + + + # proposal_feats = self.init_kernels.weight.clone() + tmp_feat = np.array(self.init_kernels.weight).astype(np.float32) + # proposal_feats = ms.Tensor(np.copy(tmp_feat), dtype=self.init_kernels.weight.dtype) + # # proposal_feats = proposal_feats[None].broadcast_to((num_imgs, ) + proposal_feats.shape) + # proposal_feats = ms.ops.broadcast_to(proposal_feats[None], (num_imgs, ) + proposal_feats.shape) + tmp_feat = np.broadcast_to(tmp_feat[None], (num_imgs, ) + tmp_feat.shape) + proposal_feats = ms.Tensor(np.copy(tmp_feat), dtype=self.init_kernels.weight.dtype) + + if semantic_feats is not None: + x_feats = semantic_feats + loc_feats + else: + x_feats = loc_feats + + if self.proposal_feats_with_obj: + # sigmoid_masks = mask_preds.sigmoid() + sigmoid_masks = self.sigmoid(mask_preds) + nonzero_inds = sigmoid_masks > 0.5 + if self.use_binary: + sigmoid_masks = nonzero_inds.astype(ms.float32) + else: + sigmoid_masks = nonzero_inds.astype(ms.float32) * sigmoid_masks + einsum = 
ops.Einsum('bnhw,bchw->bnc')
+            obj_feats = einsum((sigmoid_masks, x_feats))
+        else:
+            obj_feats = None
+
+        cls_scores = None
+
+        if self.proposal_feats_with_obj:
+            proposal_feats = proposal_feats + obj_feats.view(
+                num_imgs, self.num_proposals, self.out_channels, 1, 1)
+
+        if self.cat_stuff_mask and not self.training:
+            mask_preds = ops.concat(
+                [mask_preds, seg_preds[:, self.num_thing_classes:]], axis=1)
+            stuff_kernels = self.conv_seg.weight[self.
+                                                 num_thing_classes:].clone()
+            # stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape)
+            stuff_kernels = ms.ops.broadcast_to(stuff_kernels[None], (num_imgs, ) + stuff_kernels.shape)
+            proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1)
+
+        return proposal_feats, x_feats, mask_preds, cls_scores, seg_preds
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py
new file mode 100644
index 000000000..aec93bad4
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py
@@ -0,0 +1,155 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/11/25 0:35
+# @Author : WeiHua
+import math
+import numpy as np
+import mindspore as ms
+from mindspore import nn
+from mindspore import ops
+
+
+class SinePositionalEncoding(nn.Cell):
+    """Position encoding with sine and cosine functions.
+
+    See `End-to-End Object Detection with Transformers
+    <https://arxiv.org/abs/2005.12872>`_ for details.
+
+    Args:
+        num_feats (int): The feature dimension for each position
+            along x-axis or y-axis. Note the final returned dimension
+            for each position is 2 times of this value.
+        temperature (int, optional): The temperature used for scaling
+            the position embedding. Defaults to 10000.
+        normalize (bool, optional): Whether to normalize the position
+            embedding. Defaults to False.
+        scale (float, optional): A scale factor that scales the position
+            embedding. The scale will be used only when `normalize` is True.
+            Defaults to 2*pi.
+        eps (float, optional): A value added to the denominator for
+            numerical stability. Defaults to 1e-6.
+        offset (float): Offset added to the embedding before the
+            normalization is applied. Defaults to 0.
+    """
+
+    def __init__(self,
+                 num_feats,
+                 temperature=10000,
+                 normalize=False,
+                 scale=2 * math.pi,
+                 eps=1e-6,
+                 offset=0.):
+        super(SinePositionalEncoding, self).__init__()
+        if normalize:
+            assert isinstance(scale, (float, int)), 'when normalize is set,' \
+                'scale should be provided and in float or int type, ' \
+                f'found {type(scale)}'
+        self.num_feats = num_feats
+        self.temperature = temperature
+        self.normalize = normalize
+        self.scale = scale
+        self.eps = eps
+        self.offset = offset
+
+    def construct(self, mask):
+        """Forward function for `SinePositionalEncoding`.
+
+        Args:
+            mask (Tensor): ByteTensor mask. Non-zero values representing
+                ignored positions, while zero values means valid positions
+                for this image. Shape [bs, h, w].
+
+        Returns:
+            pos (Tensor): Returned position embedding with shape
+                [bs, num_feats*2, h, w].
+        """
+        # For convenience of exporting to ONNX, it's required to convert
+        # `masks` from bool to int.
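+        # cumulative sums over the valid (non-masked) pixels give each
+        # position its row/column index, which is then scaled into the
+        # sin/cos frequency bands below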
+ mask = mask.astype(ms.int32) + not_mask = 1 - mask # logical_not + y_embed = not_mask.cumsum(1, dtype=ms.float32) + x_embed = not_mask.cumsum(2, dtype=ms.float32) + + if self.normalize: + y_embed = (y_embed + self.offset) / \ + (y_embed[:, -1:, :] + self.eps) * self.scale + x_embed = (x_embed + self.offset) / \ + (x_embed[:, :, -1:] + self.eps) * self.scale + dim_t = ms.Tensor(np.arange(self.num_feats), dtype=ms.float32) + # dim_t = torch.arange( + # self.num_feats, dtype=torch.float32, device=mask.device) + dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats) + pos_x = x_embed[:, :, :, None] / dim_t + pos_y = y_embed[:, :, :, None] / dim_t + # use `view` instead of `flatten` for dynamically exporting to ONNX + B, H, W = mask.shape + sin = ops.Sin() + cos = ops.Cos() + pos_x = ops.stack( + (sin(pos_x[:, :, :, 0::2]), cos(pos_x[:, :, :, 1::2])), + axis=4).view(B, H, W, -1) + pos_y = ops.stack( + (sin(pos_y[:, :, :, 0::2]), cos(pos_y[:, :, :, 1::2])), + axis=4).view(B, H, W, -1) + pos = ops.concat((pos_y, pos_x), axis=3).transpose((0, 3, 1, 2)) + return pos + + def model_export(self, mask): + """Forward function for `SinePositionalEncoding`. + + Args: + mask (Tensor): ByteTensor mask. Non-zero values representing + ignored positions, while zero values means valid positions + for this image. Shape [bs, h, w]. + + Returns: + pos (Tensor): Returned position embedding with shape + [bs, num_feats*2, h, w]. + """ + # For convenience of exporting to ONNX, it's required to convert + # `masks` from bool to int. + mask = mask.astype(ms.int32) + not_mask = 1 - mask # logical_not + + tmp_not_mask = np.array(not_mask, dtype=np.int32) + y_embed = np.cumsum(tmp_not_mask, axis=1, dtype=np.float32) + # y_embed = ms.Tensor(y_embed, dtype=ms.float32) + x_embed = np.cumsum(tmp_not_mask, axis=2, dtype=np.float32) + # x_embed = ms.Tensor(x_embed, dtype=ms.float32) + + if self.normalize: + y_embed = (y_embed + self.offset) / \ + (y_embed[:, -1:, :] + self.eps) * self.scale + x_embed = (x_embed + self.offset) / \ + (x_embed[:, :, -1:] + self.eps) * self.scale + # dim_t = ms.Tensor(np.arange(self.num_feats), dtype=ms.float32) + dim_t = np.arange(self.num_feats).astype(np.float32) + # dim_t = torch.arange( + # self.num_feats, dtype=torch.float32, device=mask.device) + # dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats) + dim_t = self.temperature**(2 * (dim_t / 2) / self.num_feats) + pos_x = x_embed[:, :, :, None] / dim_t + pos_y = y_embed[:, :, :, None] / dim_t + # use `view` instead of `flatten` for dynamically exporting to ONNX + B, H, W = mask.shape + + tmp_pos_x = pos_x + tmp_pos_y =pos_y + tmp_pos_x = np.stack((np.sin(tmp_pos_x[:,:,:,0::2]), np.cos(tmp_pos_x[:,:,:,1::2])), axis=4).reshape(B,H,W,-1) + tmp_pos_y = np.stack((np.sin(tmp_pos_y[:,:,:,0::2]), np.cos(tmp_pos_y[:,:,:,1::2])), axis=4).reshape(B,H,W,-1) + tmp_pos = np.concatenate((tmp_pos_y, tmp_pos_x),axis=3).transpose((0,3,1,2)) + pos = ms.Tensor(tmp_pos, dtype=ms.float32) + + return pos + + def __repr__(self): + """str: a string that describes the module""" + repr_str = self.__class__.__name__ + repr_str += f'(num_feats={self.num_feats}, ' + repr_str += f'temperature={self.temperature}, ' + repr_str += f'normalize={self.normalize}, ' + repr_str += f'scale={self.scale}, ' + repr_str += f'eps={self.eps})' + return repr_str diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py new file mode 100644 index 000000000..6d85304eb 
--- /dev/null +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py @@ -0,0 +1,282 @@ +import mindspore as ms +from mindspore import nn, ops +from mindspore import log as logger +from ..custom_cells import CustomResizeBilinear, ConvModule, normal_init +from .positional_encoding import SinePositionalEncoding +import numpy as np + +class SemanticFPNWrapper(nn.Cell): + """Implementation of Semantic FPN used in Panoptic FPN. + + Args: + in_channels ([type]): [description] + feat_channels ([type]): [description] + out_channels ([type]): [description] + start_level ([type]): [description] + end_level ([type]): [description] + cat_coors (bool, optional): [description]. Defaults to False. + fuse_by_cat (bool, optional): [description]. Defaults to False. + conv_cfg ([type], optional): [description]. Defaults to None. + norm_cfg ([type], optional): [description]. Defaults to None. + """ + + def __init__(self, + in_channels, + feat_channels, + out_channels, + start_level, + end_level, + cat_coors=False, + positional_encoding=None, + cat_coors_level=3, + fuse_by_cat=False, + return_list=False, + upsample_times=3, + with_pred=True, + num_aux_convs=0, + act_cfg=dict(type='ReLU', inplace=True), + out_act_cfg=dict(type='ReLU'), + conv_cfg=None, + norm_cfg=None): + super(SemanticFPNWrapper, self).__init__() + + self.in_channels = in_channels + self.feat_channels = feat_channels + self.start_level = start_level + self.end_level = end_level + assert start_level >= 0 and end_level >= start_level + self.out_channels = out_channels + self.conv_cfg = conv_cfg + self.norm_cfg = norm_cfg + self.act_cfg = act_cfg + self.cat_coors = cat_coors + self.cat_coors_level = cat_coors_level + self.fuse_by_cat = fuse_by_cat + self.return_list = return_list + self.upsample_times = upsample_times + self.with_pred = with_pred + if positional_encoding is not None: + self.positional_encoding = SinePositionalEncoding(**positional_encoding) + else: + self.positional_encoding = None + self.convs_all_levels = nn.CellList() + for i in range(self.start_level, self.end_level + 1): + convs_per_level = nn.SequentialCell() + if i == 0: + if i == self.cat_coors_level and self.cat_coors: + chn = self.in_channels + 2 + else: + chn = self.in_channels + if upsample_times == self.end_level - i: + one_conv = ConvModule( + chn, + self.feat_channels, + 3, + padding=1, + conv_cfg=self.conv_cfg, + norm_cfg=self.norm_cfg, + act_cfg=self.act_cfg) + # convs_per_level.add_module('conv' + str(i), one_conv) + convs_per_level.append(one_conv) + else: + for i in range(self.end_level - upsample_times): + one_conv = ConvModule( + chn, + self.feat_channels, + 3, + padding=1, + stride=2, + conv_cfg=self.conv_cfg, + norm_cfg=self.norm_cfg, + act_cfg=self.act_cfg) + # convs_per_level.add_module('conv' + str(i), one_conv) + convs_per_level.append(one_conv) + self.convs_all_levels.append(convs_per_level) + continue + + for j in range(i): + if j == 0: + if i == self.cat_coors_level and self.cat_coors: + chn = self.in_channels + 2 + else: + chn = self.in_channels + one_conv = ConvModule( + chn, + self.feat_channels, + 3, + padding=1, + conv_cfg=self.conv_cfg, + norm_cfg=self.norm_cfg, + act_cfg=self.act_cfg) + # convs_per_level.add_module('conv' + str(j), one_conv) + convs_per_level.append(one_conv) + if j < upsample_times - (self.end_level - i): + one_upsample = CustomResizeBilinear( + scale_factor=2, align_corners=False) + convs_per_level.append(one_upsample) + continue + + one_conv = ConvModule( + self.feat_channels, + self.feat_channels, 
+                    3,
+                    padding=1,
+                    conv_cfg=self.conv_cfg,
+                    norm_cfg=self.norm_cfg,
+                    act_cfg=self.act_cfg)
+                # convs_per_level.add_module('conv' + str(j), one_conv)
+                convs_per_level.append(one_conv)
+                if j < upsample_times - (self.end_level - i):
+                    one_upsample = CustomResizeBilinear(
+                        scale_factor=2, align_corners=False)
+                    convs_per_level.append(one_upsample)
+
+            self.convs_all_levels.append(convs_per_level)
+
+        if fuse_by_cat:
+            in_channels = self.feat_channels * len(self.convs_all_levels)
+        else:
+            in_channels = self.feat_channels
+
+        if self.with_pred:
+            self.conv_pred = ConvModule(
+                in_channels,
+                self.out_channels,
+                1,
+                padding=0,
+                conv_cfg=self.conv_cfg,
+                act_cfg=out_act_cfg,
+                norm_cfg=self.norm_cfg)
+
+        self.num_aux_convs = num_aux_convs
+        self.aux_convs = nn.CellList()
+        for i in range(num_aux_convs):
+            self.aux_convs.append(
+                ConvModule(
+                    in_channels,
+                    self.out_channels,
+                    1,
+                    padding=0,
+                    conv_cfg=self.conv_cfg,
+                    act_cfg=out_act_cfg,
+                    norm_cfg=self.norm_cfg))
+
+    def init_weights(self):
+        logger.info('Use normal initialization for semantic FPN')
+        # cells_and_names() yields (name, cell) pairs
+        for _, m in self.cells_and_names():
+            if isinstance(m, nn.Conv2d):
+                normal_init(m, init_gain=0.01)
+
+    def generate_coord(self, input_feat):
+        x_range = ops.linspace(ms.Tensor(-1, dtype=ms.float32),
+                               ms.Tensor(1, dtype=ms.float32),
+                               input_feat.shape[-1])
+        y_range = ops.linspace(ms.Tensor(-1, dtype=ms.float32),
+                               ms.Tensor(1, dtype=ms.float32),
+                               input_feat.shape[-2])
+        y, x = ops.meshgrid((y_range, x_range))
+        y = y.broadcast_to((input_feat.shape[0], 1, -1, -1))
+        x = x.broadcast_to((input_feat.shape[0], 1, -1, -1))
+        coord_feat = ops.concat([x, y], 1)
+        return coord_feat
+
+    def construct(self, inputs):
+        mlvl_feats = []
+        for i in range(self.start_level, self.end_level + 1):
+            input_p = inputs[i]
+            if i == self.cat_coors_level:
+                if self.positional_encoding is not None:
+                    new_zeros = ops.Zeros()
+                    ignore_mask = new_zeros(
+                        (input_p.shape[0], input_p.shape[-2],
+                         input_p.shape[-1]), ms.bool_)
+
+                    positional_encoding = self.positional_encoding(ignore_mask)
+                    input_p = input_p + positional_encoding
+                if self.cat_coors:
+                    coord_feat = self.generate_coord(input_p)
+                    input_p = ops.concat([input_p, coord_feat], 1)
+
+            mlvl_feats.append(self.convs_all_levels[i](input_p))
+
+        if self.fuse_by_cat:
+            feature_add_all_level = ops.concat(mlvl_feats, axis=1)
+        else:
+            feature_add_all_level = sum(mlvl_feats)
+
+        if self.with_pred:
+            out = self.conv_pred(feature_add_all_level)
+        else:
+            out = feature_add_all_level
+
+        if self.num_aux_convs > 0:
+            outs = [out]
+            for conv in self.aux_convs:
+                outs.append(conv(feature_add_all_level))
+            return outs
+
+        if self.return_list:
+            return [out]
+        else:
+            return out
+
+
+    def model_export(self, inputs):
+        mlvl_feats = []
+        for i in range(self.start_level, self.end_level + 1):
+            input_p = inputs[i]
+            if i == self.cat_coors_level:
+                if self.positional_encoding is not None:
+                    new_zeros = ops.Zeros()
+                    ignore_mask = new_zeros(
+                        (input_p.shape[0], input_p.shape[-2],
+                         input_p.shape[-1]), ms.bool_)
+
+                    positional_encoding = self.positional_encoding.model_export(ignore_mask)
+                    input_p = input_p + positional_encoding
+                if self.cat_coors:
+                    coord_feat = self.generate_coord_export(input_p)
+                    input_p = ops.concat([input_p, coord_feat], 1)
+
+            mlvl_feats.append(self.convs_all_levels[i](input_p))
+
+        if self.fuse_by_cat:
+            feature_add_all_level = ops.concat(mlvl_feats, axis=1)
+        else:
+            feature_add_all_level = sum(mlvl_feats)
+
+        if self.with_pred:
+            out = self.conv_pred(feature_add_all_level)
+        else:
+            out = 
feature_add_all_level
+
+        if self.num_aux_convs > 0:
+            outs = [out]
+            for conv in self.aux_convs:
+                outs.append(conv(feature_add_all_level))
+            return outs
+
+        if self.return_list:
+            return [out]
+        else:
+            return out
+
+    def generate_coord_export(self, input_feat):
+        x_range = ops.linspace(ms.Tensor(-1, dtype=ms.float32),
+                               ms.Tensor(1, dtype=ms.float32),
+                               input_feat.shape[-1])
+        y_range = ops.linspace(ms.Tensor(-1, dtype=ms.float32),
+                               ms.Tensor(1, dtype=ms.float32),
+                               input_feat.shape[-2])
+        y, x = ops.meshgrid((y_range, x_range))
+
+        # numpy's broadcast_to does not accept -1 in the target shape, so the
+        # full (batch, 1, h, w) shape is spelled out explicitly here
+        target_shape = (input_feat.shape[0], 1,
+                        input_feat.shape[-2], input_feat.shape[-1])
+        tmp_y = np.broadcast_to(y.asnumpy(), target_shape)
+        y = ms.Tensor(np.ascontiguousarray(tmp_y), dtype=y.dtype)
+
+        tmp_x = np.broadcast_to(x.asnumpy(), target_shape)
+        x = ms.Tensor(np.ascontiguousarray(tmp_x), dtype=x.dtype)
+
+        coord_feat = ops.concat([x, y], 1)
+        return coord_feat
\ No newline at end of file
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/utils.py b/contrib/Overlap-Recovery/train/src/deoccluder/utils.py
new file mode 100644
index 000000000..a8063dc00
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/utils.py
@@ -0,0 +1,44 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/11/30 0:36
+# @Author : WeiHua
+
+import mindspore as ms
+from mindspore import nn, ops
+
+def sem2ins_masks(gt_sem_seg,
+                  num_thing_classes=80):
+    """Convert semantic segmentation mask to binary masks
+
+    Args:
+        gt_sem_seg (ms.Tensor): Semantic masks to be converted.
+            [0, num_thing_classes-1] is the classes of things,
+            [num_thing_classes:] is the classes of stuff.
+        num_thing_classes (int, optional): Number of thing classes.
+            Defaults to 80.
+
+    Returns:
+        tuple[ms.Tensor]: (mask_labels, bin_masks).
+            Mask labels and binary masks of stuff classes.
+    """
+    unique = ops.Unique()
+    # ops.Unique returns a (values, indices) pair; only the values are needed
+    classes, _ = unique(gt_sem_seg)
+    masks = []
+    labels = []
+
+    for i in classes:
+        # skip ignore class 255 and "thing classes" in semantic seg
+        if i == 255 or i < num_thing_classes:
+            continue
+        labels.append(i)
+        masks.append(gt_sem_seg == i)
+
+    if len(labels) > 0:
+        stack = ops.Stack()
+        labels = stack(labels)
+        masks = ops.concat(masks)
+    else:
+        # MindSpore tensors have no new_zeros(); build empty tensors explicitly
+        zeros = ops.Zeros()
+        labels = zeros((0,), ms.int64)
+        masks = zeros((0, gt_sem_seg.shape[-2], gt_sem_seg.shape[-1]),
+                      ms.float32)
+    return labels.astype(ms.int64), masks.astype(ms.float32)
diff --git a/contrib/Overlap-Recovery/train/src/model_utils/__init__.py b/contrib/Overlap-Recovery/train/src/model_utils/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/__init__.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/__init__.py
new file mode 100644
index 000000000..9f86fda73
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/__init__.py
@@ -0,0 +1,6 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/11/25 0:17
+# @Author : WeiHua
+
+from .config_base import Config
diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py
new file mode 100644
index 000000000..0db0f43ae
--- /dev/null
+++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py
@@ -0,0 +1,128 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Time : 2022/11/24 21:54
+# @Author : WeiHua
+
+from pprint import pformat
+from .config_model import model
+
+class Config:
+    """
+    Configuration namespace. Convert dictionary to members.
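+
+    A minimal usage sketch:
+
+        cfg = Config(dict(lr=0.01))
+        cfg.lr              # -> 0.01
+        cfg.get('wd', 0.0)  # -> 0.0, fallback for missing keys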
+ """ + def __init__(self, cfg_dict): + for k, v in cfg_dict.items(): + setattr(self, k, v) + # if isinstance(v, (list, tuple)): + # setattr(self, k, [Config(x) if isinstance(x, dict) else x for x in v]) + # else: + # setattr(self, k, Config(v) if isinstance(v, dict) else v) + + def get(self, attr_name, default_value=None): + return getattr(self, attr_name, default_value) + + def __str__(self): + return pformat(self.__dict__) + + def __repr__(self): + return self.__str__() + + +synth_data_root = "data/overlap_text/opt_4ins_250k/" +# real_data_root = "data/overlap_text/overlap_qualified_data_1129/" +real_data_root = "data/overlap_text/formal_v1/" +# real_data_root = "data/overlap_text/opt_4ins_250k/" +img_scale = (768, 768) +img_norm_cfg = dict( + mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) +train_pipeline = [ + dict(type='LoadImageFromFile'), + dict(type='CustomLoadAnnotations', with_bbox=True, with_mask=True), + dict(type='Resize', img_scale=img_scale, keep_ratio=True), + dict(type='RandomFlip', flip_ratio=0.5), + # # visualization tool + # dict(type='CustomVisualize'), + dict(type='Normalize', **img_norm_cfg), + dict(type='Pad', size_divisor=768), # 32 + dict(type='DefaultFormatBundle'), + dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'], + meta_keys=('ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', + 'flip_direction'), + ), +] +test_pipeline = [ + dict(type='LoadImageFromFile'), + dict(type='Resize', img_scale=img_scale, keep_ratio=True), + dict(type='Normalize', **img_norm_cfg), + dict(type='Pad', size_divisor=768, eval_model=True), # 32 + dict(type='ImageToTensor', keys=['img']), + dict(type='Collect', keys=['img'], + meta_keys=('ori_shape', 'img_shape', 'pad_shape', 'scale_factor'), + eval_mode=True), + # dict( + # type='MultiScaleFlipAug', + # img_scale=img_scale, + # flip=False, + # transforms=[ + # dict(type='Resize', keep_ratio=True), + # dict(type='RandomFlip'), + # dict(type='Normalize', **img_norm_cfg), + # dict(type='Pad', size_divisor=768), # 32 + # dict(type='ImageToTensor', keys=['img']), + # dict(type='Collect', keys=['img']), + # ]) +] +config_dict = dict( + model=model, + pre_trained="", + data=dict( + samples_per_gpu=8, # 8 + workers_per_gpu=8, # 8 + train=dict( + type='SynthOverlapDataset', + ann_file=synth_data_root + 'train_gt.jsonl', + img_prefix=synth_data_root, + seg_prefix=synth_data_root, + pipeline=train_pipeline), + # type='RealOverlapDataset', + # ann_file=real_data_root + 'annotation.json', + # img_prefix=real_data_root, + # seg_prefix=real_data_root, + # pipeline=train_pipeline), + val=dict( + type='RealOverlapDataset', + ann_file=real_data_root + 'annotation.json', + img_prefix=real_data_root, + seg_prefix=real_data_root, + pipeline=test_pipeline, + test_mode=True), + test=dict( + type='RealOverlapDataset', + # type='SynthOverlapDataset', + ann_file=real_data_root + 'test_500.json', + # ann_file=real_data_root + 'val_gt.jsonl', + # ann_file=real_data_root + 'annotation.json', + img_prefix=real_data_root, + seg_prefix=real_data_root, + pipeline=test_pipeline, + test_mode=True) + ), + train_cfg=dict( + total_epoch=60, + optimizer='Adam', + lr=0.00005, + lr_power=4e-10, + wd=0.05, + save_iterval=1, + ckpt_max=10000, + ), + device_target='GPU', + mindrecord_dir='/home/whua/code/logs/ms_knet/dice4_occ_ep60', + pretrained_r50='data/resnet50.ckpt', + do_eval=False, + run_distribute=False, + enable_modelarts=False, + checkpoint_path='/home/whua/knet_eval_ckpt.ckpt' +) + +config = 
Config(config_dict) diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py new file mode 100644 index 000000000..de62e6647 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py @@ -0,0 +1,146 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# @Time : 2022/11/24 21:57 +# @Author : WeiHua + +num_stages = 3 +num_proposals = 4 +conv_kernel_size = 1 +num_classes = 1 +kernel_occlusion_cfg = dict( + num_proposals=num_proposals, + pair_manner='sum', + u_mask_loss=dict( + type='BinaryCrossEntropy', loss_weight=1.0), + i_mask_loss=dict( + type='BinaryCrossEntropy', loss_weight=1.0), + u_dice_loss=dict(type='DiceLoss', loss_weight=4.0), + i_dice_loss=dict(type='DiceLoss', loss_weight=4.0), +) +model = dict( + mask_assign_stride=4, + # origin size: 768 * 768 + feature_shapes=[[192, 192], [96, 96], [48, 48], [24, 24], [12, 12]], + backbone=dict( + layer_nums=[3, 4, 6, 3], + in_channels=[64, 256, 512, 1024], + out_channels=[256, 512, 1024, 2048]), + neck=dict( + in_channels=[256, 512, 1024, 2048], + out_channels=256, + num_outs=4), + rpn_head=dict( + # type='ConvKernelHead', + conv_kernel_size=conv_kernel_size, + feat_downsample_stride=2, + feat_refine_stride=1, + feat_refine=False, + use_binary=True, + num_loc_convs=1, + num_seg_convs=1, + conv_normal_init=True, + localization_fpn=dict( + # type='SemanticFPNWrapper', + in_channels=256, + feat_channels=256, + out_channels=256, + start_level=0, + end_level=3, + upsample_times=2, + positional_encoding=dict(num_feats=128, normalize=True), + cat_coors=False, + cat_coors_level=3, + fuse_by_cat=False, + return_list=False, + num_aux_convs=1, + norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)), + num_proposals=num_proposals, + proposal_feats_with_obj=True, + xavier_init_kernel=False, + kernel_init_std=1, + num_cls_fcs=1, + in_channels=256, + num_classes=num_classes, + feat_transform_cfg=None, + loss_seg=dict( + type='BinaryCrossEntropy', + loss_weight=1.0 + ), + # loss_seg=dict( + # type='FocalLoss', + # gamma=2.0, + # loss_weight=1.0), + loss_mask=dict( + type='BinaryCrossEntropy', loss_weight=1.0), + loss_dice=dict(type='DiceLoss', loss_weight=4.0), + train_cfg=dict( + assigner=dict( + type='MaskHungarianAssigner', + cls_cost=dict(type='FocalLossCost', weight=2.0), + dice_cost=dict(type='DiceCost', weight=4.0, pred_act=True), + mask_cost=dict(type='MaskCost', weight=1.0, pred_act=True)), + sampler=dict(type='MaskPseudoSampler'), + pos_weight=1 + ), + test_cfg=None, + ), + roi_head=dict( + type='CustomKernelIterHead', + num_stages=num_stages, + stage_loss_weights=[1] * num_stages, + proposal_feature_channel=256, + mask_head=[ + dict( + # type='CustomKernelUpdateHead', + kernel_occlusion_cfg=kernel_occlusion_cfg, + apply_kernel_occlusion=True, + num_classes=num_classes, + num_ffn_fcs=2, + num_heads=8, + num_cls_fcs=1, + num_mask_fcs=1, + feedforward_channels=2048, + in_channels=256, + out_channels=256, + dropout=0.0, + mask_thr=0.5, + conv_kernel_size=conv_kernel_size, + mask_upsample_stride=2, + ffn_act_cfg=dict(type='ReLU', inplace=True), + with_ffn=True, + feat_transform_cfg=dict( + conv_cfg=dict(type='Conv2d'), act_cfg=None), + kernel_updator_cfg=dict( + in_channels=256, + feat_channels=256, + out_channels=256, + input_feat_shape=3, + act_cfg=dict(type='ReLU', inplace=True)), + loss_mask=dict( + type='BinaryCrossEntropy', loss_weight=1.0), + loss_dice=dict( + type='DiceLoss', loss_weight=4.0), + 
loss_cls=dict( + type='SigmoidFocalClassificationLoss', + loss_weight=2.0), + num_proposals=num_proposals + ) for _ in range(num_stages) + ], + train_cfg=[ + dict( + assigner=dict( + type='MaskHungarianAssigner', + cls_cost=dict(type='FocalLossCost', weight=2.0), + dice_cost=dict(type='DiceCost', weight=4.0, pred_act=True), + mask_cost=dict(type='MaskCost', weight=1.0, + pred_act=True)), + sampler=dict(type='MaskPseudoSampler'), + pos_weight=1) for _ in range(num_stages) + ], + test_cfg=dict( + max_per_img=num_proposals, + mask_thr=0.5, + merge_stuff_thing=dict( + iou_thr=0.5, stuff_max_area=4096, instance_score_thr=0.3)) + ), + ) diff --git a/contrib/Overlap-Recovery/train/src/model_utils/device_adapter.py b/contrib/Overlap-Recovery/train/src/model_utils/device_adapter.py new file mode 100644 index 000000000..53c5e070f --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/model_utils/device_adapter.py @@ -0,0 +1,27 @@ +# Copyright 2021 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================ + +"""Device adapter for ModelArts""" + +from .configs.config_base import config + +if config.enable_modelarts: + from .moxing_adapter import get_device_id, get_device_num, get_rank_id, get_job_id +else: + from .local_adapter import get_device_id, get_device_num, get_rank_id, get_job_id + +__all__ = [ + "get_device_id", "get_device_num", "get_rank_id", "get_job_id" +] diff --git a/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py b/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py new file mode 100644 index 000000000..769fa6dc7 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py @@ -0,0 +1,36 @@ +# Copyright 2021 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# ============================================================================ + +"""Local adapter""" + +import os + +def get_device_id(): + device_id = os.getenv('DEVICE_ID', '0') + return int(device_id) + + +def get_device_num(): + device_num = os.getenv('RANK_SIZE', '1') + return int(device_num) + + +def get_rank_id(): + global_rank_id = os.getenv('RANK_ID', '0') + return int(global_rank_id) + + +def get_job_id(): + return "Local Job" diff --git a/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py b/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py new file mode 100644 index 000000000..bf8df5aff --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py @@ -0,0 +1,122 @@ +# Copyright 2021 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================ + +"""Moxing adapter for ModelArts""" + +import os +import functools +from mindspore import context +from mindspore.profiler import Profiler +from .configs.config_base import config + +_global_sync_count = 0 + +def get_device_id(): + device_id = os.getenv('DEVICE_ID', '0') + return int(device_id) + + +def get_device_num(): + device_num = os.getenv('RANK_SIZE', '1') + return int(device_num) + + +def get_rank_id(): + global_rank_id = os.getenv('RANK_ID', '0') + return int(global_rank_id) + + +def get_job_id(): + job_id = os.getenv('JOB_ID') + job_id = job_id if job_id != "" else "default" + return job_id + +def sync_data(from_path, to_path): + """ + Download data from remote obs to local directory if the first url is remote url and the second one is local path + Upload data from local directory to remote obs in contrast. + """ + import moxing as mox + import time + global _global_sync_count + sync_lock = "/tmp/copy_sync.lock" + str(_global_sync_count) + _global_sync_count += 1 + + # Each server contains 8 devices as most. + if get_device_id() % min(get_device_num(), 8) == 0 and not os.path.exists(sync_lock): + print("from path: ", from_path) + print("to path: ", to_path) + mox.file.copy_parallel(from_path, to_path) + print("===finish data synchronization===") + try: + os.mknod(sync_lock) + except IOError: + pass + print("===save flag===") + + while True: + if os.path.exists(sync_lock): + break + time.sleep(1) + + print("Finish sync data from {} to {}.".format(from_path, to_path)) + + +def moxing_wrapper(pre_process=None, post_process=None): + """ + Moxing wrapper to download dataset and upload outputs. 
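+
+    A minimal usage sketch:
+
+        @moxing_wrapper()
+        def main():
+            ...  # training entry point; runs after input data has been synced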
+ """ + def wrapper(run_func): + @functools.wraps(run_func) + def wrapped_func(*args, **kwargs): + # Download data from data_url + if config.enable_modelarts: + if config.data_url: + sync_data(config.data_url, config.data_path) + print("Dataset downloaded: ", os.listdir(config.data_path)) + if config.checkpoint_url: + sync_data(config.checkpoint_url, config.load_path) + print("Preload downloaded: ", os.listdir(config.load_path)) + if config.train_url: + sync_data(config.train_url, config.output_path) + print("Workspace downloaded: ", os.listdir(config.output_path)) + + context.set_context(save_graphs_path=os.path.join(config.output_path, str(get_rank_id()))) + config.device_num = get_device_num() + config.device_id = get_device_id() + if not os.path.exists(config.output_path): + os.makedirs(config.output_path) + + if pre_process: + pre_process() + + if config.enable_profiling: + profiler = Profiler() + + run_func(*args, **kwargs) + + if config.enable_profiling: + profiler.analyse() + + # Upload data to train_url + if config.enable_modelarts: + if post_process: + post_process() + + if config.train_url: + print("Start to copy output directory") + sync_data(config.output_path, config.train_url) + return wrapped_func + return wrapper diff --git a/contrib/Overlap-Recovery/train/src/utils/pth2ckpt.py b/contrib/Overlap-Recovery/train/src/utils/pth2ckpt.py new file mode 100644 index 000000000..c8b4d23c3 --- /dev/null +++ b/contrib/Overlap-Recovery/train/src/utils/pth2ckpt.py @@ -0,0 +1,59 @@ +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# ============================================================================ + +""" +```bash +# 将PyTorch的resnet50预训练模型转化为Mindspore的预训练模型 +# 同时请将src/config.py中的PRETRAINED_RESNET_50改成PTH_PATH +bash scripts/convert_resnet.sh [PTH_PATH] [CKPT_PATH] +# example: bash scripts/convert_resnet.sh resnet50-19c8e357.pth pretrained_resnet50.ckpt +``` +""" + +"""pth --> ckpt""" +import argparse +import json + +from mindspore import Tensor +from mindspore.train.serialization import save_checkpoint + +import torch + + +parser = argparse.ArgumentParser(description="trans pth to ckpt") +parser.add_argument('--pth-path', type=str, default='resnet50-19c8e357.pth', help="The path of pth file") +parser.add_argument('--ckpt-path', type=str, default='pretrained_resnet50.ckpt', help='The path to save ckpt file') +parser.add_argument('--dict-file', type=str, required=True, help='dict file') + +args = parser.parse_args() + +pth_dict = torch.load(args.pth_path) + + +with open(args.dict_file, 'r') as f: + name_dict = json.load(f) + +new_param_list = [] + +for pth_name, ckpt_name in name_dict.items(): + param_dict = {} + data = pth_dict[pth_name] + param_dict['name'] = ckpt_name + param_dict['data'] = Tensor(data.detach().numpy()) + new_param_list.append(param_dict) + + +save_checkpoint(new_param_list, args.ckpt_path) +print(f'The ckpt file is saved in {args.ckpt_path}') diff --git a/contrib/Overlap-Recovery/train/train.py b/contrib/Overlap-Recovery/train/train.py new file mode 100644 index 000000000..05106ea3a --- /dev/null +++ b/contrib/Overlap-Recovery/train/train.py @@ -0,0 +1,118 @@ +"""train model.""" + +import time +import os +import numpy as np + +from src.model_utils.configs.config_base import config +from src.model_utils.device_adapter import get_device_id, get_device_num +from src.dataset import build_dataset +from src.deoccluder import CustomKNet, TrainModelWrapper +from loguru import logger + +import mindspore.common.dtype as mstype +from mindspore import context, Tensor, Parameter +from mindspore.train.callback import CheckpointConfig, ModelCheckpoint, TimeMonitor, LossMonitor +from mindspore.train import Model +from mindspore.train.serialization import load_checkpoint, load_param_into_net +from mindspore.common import set_seed +from mindspore import dataset as de + + +# set fixed seed +set_seed(1) + + +def load_pretrained_ckpt(net, load_path, device_target): + param_dict = load_checkpoint(load_path) + + if config.pretrain_epoch_size == 0: + key_mapping = {'down_sample_layer.1.beta': 'bn_down_sample.beta', + 'down_sample_layer.1.gamma': 'bn_down_sample.gamma', + 'down_sample_layer.0.weight': 'conv_down_sample.weight', + 'down_sample_layer.1.moving_mean': 'bn_down_sample.moving_mean', + 'down_sample_layer.1.moving_variance': 'bn_down_sample.moving_variance', + } + for oldkey in list(param_dict.keys()): + if not oldkey.startswith(('backbone', 'end_point', 'global_step', + 'learning_rate', 'moments', 'momentum')): + data = param_dict.pop(oldkey) + newkey = 'backbone.' 
+ oldkey
+                param_dict[newkey] = data
+                oldkey = newkey
+            for k, v in key_mapping.items():
+                if k in oldkey:
+                    newkey = oldkey.replace(k, v)
+                    param_dict[newkey] = param_dict.pop(oldkey)
+                    break
+
+    for item in list(param_dict.keys()):
+        if not (item.startswith('backbone') or item.startswith('rcnn_mask')):
+            param_dict.pop(item)
+
+    if device_target == 'GPU':
+        for key, value in param_dict.items():
+            tensor = Tensor(value, mstype.float32)
+            param_dict[key] = Parameter(tensor, key)
+
+    load_param_into_net(net, param_dict)
+    return net
+
+def train_model():
+    device_target = config.device_target
+    context.set_context(mode=context.PYNATIVE_MODE, device_target=device_target, device_id=get_device_id())
+
+    logger.info("Start training!")
+    rank = 0
+
+    logger.info("Start creating the dataset!")
+
+    # It will generate mindrecord files in config.mindrecord_dir
+    if rank == 0 and not os.path.exists(config.mindrecord_dir):
+        os.makedirs(config.mindrecord_dir)
+    if rank == 0:
+        logger.add(os.path.join(config.mindrecord_dir, time.asctime(time.localtime()).replace(' ', '_') + ".log"))
+
+    # prepare dataset
+    train_set = build_dataset(config.data['train'])
+    collect_pipe = config.data['train']['pipeline'][-1]
+    column_names = list(collect_pipe['keys']) + list(collect_pipe['meta_keys'])
+    train_set = de.GeneratorDataset(train_set,
+                                    column_names=column_names,
+                                    num_parallel_workers=config.data['workers_per_gpu'],
+                                    shuffle=False)
+    train_set = train_set.batch(config.data['samples_per_gpu'], drop_remainder=True)
+
+    # Prepare model
+    net = CustomKNet(config.model)
+    net = net.set_train()
+    net.load_r50(config.pretrained_r50)
+    net = TrainModelWrapper(net)
+    # load checkpoint or pretrained model
+    load_path = config.pre_trained
+    if load_path != "":
+        logger.info(f"Loading pretrained checkpoint from {load_path}")
+        net = load_pretrained_ckpt(net=net, load_path=load_path, device_target=device_target)
+
+    # Derive steps per epoch for checkpoint scheduling.
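+    # NOTE: the checkpoint interval below is capped via
+    # min(500, steps_per_epoch * save_iterval), so a checkpoint is written
+    # at least once every 500 steps on long epochs.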
+    steps_per_epoch = train_set.get_dataset_size()
+
+    # Create model
+    model = Model(net)
+
+    # Callbacks
+    time_cb = TimeMonitor(data_size=10)
+    loss_cb = LossMonitor(per_print_times=10)
+
+    # Save-checkpoint callback
+    ckpt_config = CheckpointConfig(save_checkpoint_steps=min(500, steps_per_epoch * config.train_cfg['save_iterval']),
+                                   keep_checkpoint_max=config.train_cfg['ckpt_max'])
+    ckpt_cb = ModelCheckpoint(prefix="KNet_Deoccluder_SGD",
+                              directory=config.mindrecord_dir + "/card" + str(rank),
+                              config=ckpt_config)
+    cb = [time_cb, loss_cb, ckpt_cb]
+    model.train(config.train_cfg['total_epoch'], train_set, callbacks=cb, dataset_sink_mode=False)
+
+
+if __name__ == '__main__':
+    train_model()
-- 
Gitee


From 3d98a170e992c7de38d6d986d69e4d3c4864d37d Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 21:25:49 +0800
Subject: [PATCH 19/51] update infer

---
 contrib/Overlap-Recovery/inference/.gitignore |  3 ++-
 contrib/Overlap-Recovery/inference/ominfer.py | 19 ++++++++++++++++++-
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/contrib/Overlap-Recovery/inference/.gitignore b/contrib/Overlap-Recovery/inference/.gitignore
index fd6c89636..6f57c0b0b 100644
--- a/contrib/Overlap-Recovery/inference/.gitignore
+++ b/contrib/Overlap-Recovery/inference/.gitignore
@@ -140,4 +140,5 @@ dmypy.json
 cython_debug/
 
 .idea
-.DS_Store
\ No newline at end of file
+.DS_Store
+ominfer_testcase.py
\ No newline at end of file
diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py
index fccc15bed..5ec5ffd39 100644
--- a/contrib/Overlap-Recovery/inference/ominfer.py
+++ b/contrib/Overlap-Recovery/inference/ominfer.py
@@ -13,11 +13,29 @@ from mindx.sdk.base import Tensor, Model, Size, log, ImageProcessor, post, BTens
 from load_img_data import load_img_data
 from PIL import Image
 import shutil
+import cv2
 
 
 def om_infer_one(img_name, model_path, device_id, img_prefix=None, vis_dir=None, score_thr=0.4):
+
+    if not os.path.exists(model_path):
+        print("The model file does not exist!")
+        print("Please place the model in ./Overlap-Recovery/models/")
+        exit()
+
     base.mx_init()  # 全局资源初始化
     model = Model(model_path, device_id)  # 创造模型对象
+    if not os.path.exists(img_name):
+        print("The input image does not exist!")
+        print("Please place the image in ./Overlap-Recovery/")
+        exit()
+
+    if cv2.imread(img_name) is None:
+        print("=============!Error!================")
+        print("Failed to decode the input image, please check it!")
+        print("====================================")
+        exit()
+
     resizeImg, img_meta = load_img_data(img_name, img_prefix)  # hwc-chw
     ori_filename = img_meta['ori_filename']
     abs_filename = img_meta['filename']
@@ -85,7 +103,6 @@ def om_infer_one(img_name, model_path, device_id, img_prefix=None, vis_dir=None,
     print(f'pred text mask saving to {save_file}')
 
 
-
 def postprocess(scaled_mask_preds, cls_score):
     num_imgs = 1
     segm_results = []
-- 
Gitee


From a9990b1cc9a3fbfb8efcd50c034ca5d7c5f99a97 Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Mon, 12 Dec 2022 22:38:25 +0800
Subject: [PATCH 20/51] modified infer

---
 contrib/Overlap-Recovery/inference/ominfer.py | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py
index 5ec5ffd39..4439d79c6 100644
--- a/contrib/Overlap-Recovery/inference/ominfer.py
+++ b/contrib/Overlap-Recovery/inference/ominfer.py
@@ -19,18 +19,18 @@ def om_infer_one(img_name, model_path, device_id, img_prefix=None, vis_dir=None,
 
     if not os.path.exists(model_path):
         print("The model file does not exist!")
-        print("Please place the model in ./Overlap-Recovery/models/")
+        print("Please place the model in ./Overlap-Recovery/inference/models/")
         exit()
 
     base.mx_init()  # 全局资源初始化
     model = Model(model_path, device_id)  # 创造模型对象
 
-    if not os.path.exists(img_name):
+    if not os.path.exists(os.path.join(img_prefix, img_name)):
         print("The input image does not exist!")
-        print("Please place the image in ./Overlap-Recovery/")
+        print("Please place the image in ./Overlap-Recovery/inference/")
         exit()
 
-    if cv2.imread(img_name) is None:
+    if cv2.imread(os.path.join(img_prefix, img_name)) is None:
         print("=============!Error!================")
         print("Failed to decode the input image, please check it!")
         print("====================================")
         exit()
@@ -143,3 +143,4 @@ if __name__ == '__main__':
     img_name = 'test.jpg'
     save_path = './'
     om_infer_one(img_name, model_path, device_id, img_prefix, vis_dir=save_path)
+
-- 
Gitee


From 3c5e034d195ab67807b1e032d3bd23e10e75d43c Mon Sep 17 00:00:00 2001
From: HamPerdredes
Date: Mon, 12 Dec 2022 23:00:16 +0800
Subject: [PATCH 21/51] update config

---
 .../src/model_utils/configs/config_base.py    | 30 ++-----------------
 1 file changed, 2 insertions(+), 28 deletions(-)

diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py
index 0db0f43ae..d86b748b4 100644
--- a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py
+++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py
@@ -13,10 +13,6 @@ class Config:
     def __init__(self, cfg_dict):
         for k, v in cfg_dict.items():
             setattr(self, k, v)
-            # if isinstance(v, (list, tuple)):
-            #     setattr(self, k, [Config(x) if isinstance(x, dict) else x for x in v])
-            # else:
-            #     setattr(self, k, Config(v) if isinstance(v, dict) else v)
 
     def get(self, attr_name, default_value=None):
         return getattr(self, attr_name, default_value)
@@ -29,9 +25,7 @@ class Config:
 
 
 synth_data_root = "data/overlap_text/opt_4ins_250k/"
-# real_data_root = "data/overlap_text/overlap_qualified_data_1129/"
-real_data_root = "data/overlap_text/formal_v1/"
-# real_data_root = "data/overlap_text/opt_4ins_250k/"
+real_data_root = "data/overlap_text/overlap_test_data/"
 img_scale = (768, 768)
 img_norm_cfg = dict(
     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
@@ -59,18 +53,6 @@ test_pipeline = [
     dict(type='Collect', keys=['img'],
          meta_keys=('ori_shape', 'img_shape', 'pad_shape',
                     'scale_factor'), eval_mode=True),
-    # dict(
-    #     type='MultiScaleFlipAug',
-    #     img_scale=img_scale,
-    #     flip=False,
-    #     transforms=[
-    #         dict(type='Resize', keep_ratio=True),
-    #         dict(type='RandomFlip'),
-    #         dict(type='Normalize', **img_norm_cfg),
-    #         dict(type='Pad', size_divisor=768),  # 32
-    #         dict(type='ImageToTensor', keys=['img']),
-    #         dict(type='Collect', keys=['img']),
-    #     ])
 ]
 config_dict = dict(
     model=model,
@@ -84,11 +66,6 @@ config_dict = dict(
             img_prefix=synth_data_root,
             seg_prefix=synth_data_root,
             pipeline=train_pipeline),
-            # type='RealOverlapDataset',
-            # ann_file=real_data_root + 'annotation.json',
-            # img_prefix=real_data_root,
-            # seg_prefix=real_data_root,
-            # pipeline=train_pipeline),
         val=dict(
             type='RealOverlapDataset',
             ann_file=real_data_root + 'annotation.json',
@@ -98,10 +75,7 @@ config_dict = dict(
             test_mode=True),
         test=dict(
            type='RealOverlapDataset',
-            # type='SynthOverlapDataset',
-            ann_file=real_data_root + 'test_500.json',
-            # ann_file=real_data_root + 'val_gt.jsonl',
-            # 
ann_file=real_data_root + 'annotation.json', + ann_file=real_data_root + 'annotation.json', img_prefix=real_data_root, seg_prefix=real_data_root, pipeline=test_pipeline, -- Gitee From 3448e1b4b4facecb15f14554ad8e0d6147f11427 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 00:30:25 +0800 Subject: [PATCH 22/51] model export add --- contrib/Overlap-Recovery/train/.gitignore | 144 ++++++++++++++++++ contrib/Overlap-Recovery/train/export.py | 33 ++++ .../Overlap-Recovery/train/models/.gitkeep | 0 .../custom_cells/custom_assigner.py | 3 +- .../deoccluder/custom_cells/custom_blocks.py | 34 ++++- .../train/src/deoccluder/deoccluder_r50.py | 8 +- .../deoccluder/roi/custom_kernel_iter_head.py | 22 ++- .../roi/custom_kernel_update_head.py | 10 +- .../src/deoccluder/roi/kernel_update_head.py | 8 +- .../train/src/deoccluder/rpn/kernel_head.py | 15 +- 10 files changed, 245 insertions(+), 32 deletions(-) create mode 100644 contrib/Overlap-Recovery/train/.gitignore create mode 100644 contrib/Overlap-Recovery/train/export.py create mode 100644 contrib/Overlap-Recovery/train/models/.gitkeep diff --git a/contrib/Overlap-Recovery/train/.gitignore b/contrib/Overlap-Recovery/train/.gitignore new file mode 100644 index 000000000..6f57c0b0b --- /dev/null +++ b/contrib/Overlap-Recovery/train/.gitignore @@ -0,0 +1,144 @@ +# Created by .ignore support plugin (hsz.mobi) +### Python template +# Byte-compiled / optimized / DLL files +__pycache__/ +*.py[cod] +*$py.class + +# C extensions +*.so + +# Distribution / packaging +.Python +build/ +develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +wheels/ +share/python-wheels/ +*.egg-info/ +.installed.cfg +*.egg +MANIFEST + +# PyInstaller +# Usually these files are written by a python script from a template +# before PyInstaller builds the exe, so as to inject date/other infos into it. +*.manifest +*.spec + +# Installer logs +pip-log.txt +pip-delete-this-directory.txt + +# Unit test / coverage reports +htmlcov/ +.tox/ +.nox/ +.coverage +.coverage.* +.cache +nosetests.xml +coverage.xml +*.cover +*.py,cover +.hypothesis/ +.pytest_cache/ +cover/ + +# Translations +*.mo +*.pot + +# Django stuff: +*.log +local_settings.py +db.sqlite3 +db.sqlite3-journal + +# Flask stuff: +instance/ +.webassets-cache + +# Scrapy stuff: +.scrapy + +# Sphinx documentation +docs/_build/ + +# PyBuilder +.pybuilder/ +target/ + +# Jupyter Notebook +.ipynb_checkpoints + +# IPython +profile_default/ +ipython_config.py + +# pyenv +# For a library or package, you might want to ignore these files since the code is +# intended to run in multiple environments; otherwise, check them in: +# .python-version + +# pipenv +# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. +# However, in case of collaboration, if having platform-specific dependencies or dependencies +# having no cross-platform support, pipenv may install dependencies that don't work, or not +# install all needed dependencies. +#Pipfile.lock + +# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow +__pypackages__/ + +# Celery stuff +celerybeat-schedule +celerybeat.pid + +# SageMath parsed files +*.sage.py + +# Environments +.env +.venv +env/ +venv/ +ENV/ +env.bak/ +venv.bak/ + +# Spyder project settings +.spyderproject +.spyproject + +# Rope project settings +.ropeproject + +# mkdocs documentation +/site + +# mypy +.mypy_cache/ +.dmypy.json +dmypy.json + +# Pyre type checker +.pyre/ + +# pytype static type analyzer +.pytype/ + +# Cython debug symbols +cython_debug/ + +.idea +.DS_Store +ominfer_testcase.py \ No newline at end of file diff --git a/contrib/Overlap-Recovery/train/export.py b/contrib/Overlap-Recovery/train/export.py new file mode 100644 index 000000000..d7adbfec4 --- /dev/null +++ b/contrib/Overlap-Recovery/train/export.py @@ -0,0 +1,33 @@ +# -*- coding: utf-8 -*- +# @Author: Wenwen Yu +# @Email: yuwenwen62@gmail.com +# @Created Time: 12/5/22 3:19 PM + + +""" export model to 'AIR', 'ONNX' and 'MINDIR' """ + +import numpy as np +import mindspore as ms +from mindspore import Tensor, context, load_checkpoint, export, load_param_into_net +from src.deoccluder import CustomKNet +from src.model_utils.configs.config_base import config +from src.model_utils.device_adapter import get_device_id + +context.set_context(mode=context.PYNATIVE_MODE, device_target="CPU", device_id= get_device_id()) + +def best_model_export(): + ckpt_file_path = './models/best_iou.ckpt' + file_name = 'best_iou.onnx' + config.data['samples_per_gpu'] = 1 + + net = CustomKNet(config.model) + load_checkpoint(ckpt_file_path, net=net) + net.set_train(False) + net.is_model_export = True + + input_data = Tensor(np.zeros([1, 3, 768, 768]), ms.float32) + export(net, input_data, file_name=file_name, file_format='ONNX') # 'AIR', 'ONNX' and 'MINDIR' + print(f'save to {file_name}') + +if __name__ == '__main__': + best_model_export() diff --git a/contrib/Overlap-Recovery/train/models/.gitkeep b/contrib/Overlap-Recovery/train/models/.gitkeep new file mode 100644 index 000000000..e69de29bb diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py index 872362f43..b75e34148 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py @@ -204,8 +204,7 @@ class MaskHungarianAssigner(nn.Cell): # cost = cost.detach().cpu() cost = cost.asnumpy() if linear_sum_assignment is None: - raise RuntimeError('Please run "pip install scipy" ' - 'to install scipy first.') + raise NotImplementedError('Please run "pip install scipy" to install scipy first.' 
) if self.topk == 1: matched_row_inds, matched_col_inds = linear_sum_assignment(cost) else: diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py index dfbae908b..0b1b10a0d 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py @@ -111,12 +111,15 @@ class FFN(nn.Cell): layers.append( nn.SequentialCell( nn.Dense(in_channels, feedforward_channels), self.activate, - nn.Dropout(ffn_drop) if ffn_drop > 0 else nn.Identity())) + )) in_channels = feedforward_channels layers.append(nn.Dense(feedforward_channels, embed_dims)) - layers.append(nn.Dropout(ffn_drop) if ffn_drop > 0 else nn.Identity()) + # layers.append(nn.Dropout(ffn_drop) if ffn_drop > 0 else nn.Identity()) self.layers = nn.SequentialCell(*layers) - self.dropout_layer = nn.Dropout() if dropout_layer else nn.Identity() + if dropout_layer: + self.dropout_layer = nn.Dropout() + else: + self.dropout_layer = None # nn.Identity() self.add_identity = add_identity def construct(self, x, identity=None): @@ -126,10 +129,14 @@ class FFN(nn.Cell): """ out = self.layers(x) if not self.add_identity: - return self.dropout_layer(out) + if self.dropout_layer is not None: + out = self.dropout_layer(out) + return out if identity is None: identity = x - return identity + self.dropout_layer(out) + if self.dropout_layer is not None: + out = self.dropout_layer(out) + return identity + out class MultiheadAttention(nn.Cell): @@ -175,9 +182,14 @@ class MultiheadAttention(nn.Cell): if proj_drop > 0: self.proj_drop = nn.Dropout(proj_drop) else: - self.proj_drop = nn.Identity() + # self.proj_drop = nn.Identity() + self.proj_drop = None self.num_proposals = num_proposals - self.dropout_layer = nn.Dropout(dropout_layer) if dropout_layer > 0 else nn.Identity() + + if dropout_layer > 0: + self.dropout_layer = nn.Dropout(dropout_layer) + else: + self.dropout_layer = None # nn.Identity() def construct(self, query, @@ -270,5 +282,11 @@ class MultiheadAttention(nn.Cell): if self.batch_first: out = out.transpose((1, 0, 2)) - return identity + self.dropout_layer(self.proj_drop(out)) + if self.proj_drop is not None: + out = self.proj_drop(out) + + if self.dropout_layer is not None: + out = self.dropout_layer(out) + return identity + out + # return identity + self.dropout_layer(self.proj_drop(out)) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py index fce73681b..ac2d6f1c7 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py @@ -40,7 +40,7 @@ class CustomKNet(nn.Cell): self.reduce_sum = ops.ReduceSum() - self.cnt = 0 + # self.cnt = 0 def load_r50(self, ckpt_path, prefix='backbone'): param_dict = load_checkpoint(ckpt_path) @@ -143,9 +143,9 @@ class CustomKNet(nn.Cell): total_loss += val else: total_loss = val - self.cnt += 1 - if self.cnt % 10 == 0: - print(losses) + # self.cnt += 1 + # if self.cnt % 10 == 0: + # print(losses) return total_loss def simple_test(self, img, img_metas, rescale=False): diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py index 251cfb53a..2726b1926 100644 --- 
a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py
@@ -41,6 +41,7 @@ class CustomKernelIterHead(nn.Cell):
         self.num_proposals = num_proposals
         self.train_cfg = train_cfg
         self.test_cfg = test_cfg
+        self.interpolate = nn.ResizeBilinear()
         if mask_head is not None:
             self.init_mask_head(None, mask_head)
         self.init_assigner_sampler()
@@ -283,13 +284,20 @@ class CustomKernelIterHead(nn.Cell):
             cls_score, mask_preds, scaled_mask_preds, object_feats = self._mask_forward_export(stage, x, object_feats,
                                                                                                mask_preds, img_metas)
 
+        cls_score = ms.ops.sigmoid(cls_score)
+
+        # Rescale scaled_mask_preds to the batched image shape, (B, num_det, H/4, W/4) -> (B, num_det, H, W)
+        scaled_mask_preds = self.interpolate(
+            ms.ops.sigmoid(scaled_mask_preds),
+            size=(768, 768),  # hard code
+            align_corners=False)
+
         return scaled_mask_preds, cls_score
 
     def segm2result_onnx(self, mask_preds, det_labels, cls_scores):
-        num_classes = self.num_classes
-        # bbox_result = None
-        segm_result = [[] for _ in range(num_classes)]
-        seg_scores = [[] for _ in range(num_classes)]
+
+        segm_result = []
+        seg_scores = []
 
         mask_preds = mask_preds.detach()  # num_det, h, w
         det_labels = det_labels.detach()  # class id
@@ -297,12 +305,10 @@ class CustomKernelIterHead(nn.Cell):
 
         num_ins = mask_preds.shape[0]  # num_dets, h, w
         for idx in range(num_ins):
-            segm_result[det_labels[idx]].append(mask_preds[idx])
-            seg_scores[det_labels[idx]].append(cls_scores[idx])
+            segm_result.append(mask_preds[idx])
+            seg_scores.append(cls_scores[idx])
 
         # here we only have one class (text)
-        segm_result = segm_result[0]  # num_cls, num_det, h, w
         segm_result = ms.ops.stack(segm_result)  # num_det, h, w
-        seg_scores = seg_scores[0]  # num_cls, num_det
         seg_scores = ms.ops.stack(seg_scores)  # num_det
 
         return segm_result, seg_scores
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py
index 9ed2fee07..758366a7b 100644
--- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py
@@ -92,8 +92,14 @@ class CustomKernelUpdateHead(KernelUpdateHead):
         sigmoid_masks = nonzero_inds.astype(ms.float32)
 
         # einsum is faster than bmm by ~30%, but it is replaced with reshape + bmm here for model export
-        einsum = ops.Einsum('bnhw,bchw->bnc')
-        x_feat = einsum((sigmoid_masks, x))
+        b, n, h, w = sigmoid_masks.shape
+        _, c, _, _ = x.shape
+        sigmoid_masks = ms.ops.reshape(sigmoid_masks, (b, n, h*w))
+        tmp_x_feats = ms.ops.reshape(x, (b, c, h*w))
+        tmp_x_feats = ms.ops.transpose(tmp_x_feats, (0, 2, 1))
+        x_feat = ms.ops.bmm(sigmoid_masks, tmp_x_feats)
+
+        # x_feat = Einsum('bnhw,bchw->bnc', sigmoid_masks, x)
 
         # obj_feat in shape [B, N, C, K, K] -> [B, N, C, K*K] -> [B, N, K*K, C]
         proposal_feat = proposal_feat.reshape(N, num_proposals,
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py
index e8acd0e0e..623bf16f0 100644
--- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py
@@ -293,7 +293,7 @@ class KernelUpdateHead(nn.Cell):
         h, w, _ = img_meta['img_shape']
         expand_dims = ops.ExpandDims()
         masks_per_img = self.interpolate(
-            expand_dims(masks_per_img, 0).sigmoid(),
+            ms.ops.sigmoid(expand_dims(masks_per_img, 0)),
size=img_meta['batch_input_shape'], align_corners=False) @@ -331,9 +331,7 @@ class KernelUpdateHead(nn.Cell): segm_result[det_labels[idx]].append(mask_preds[idx]) return bbox_result, segm_result - def get_seg_masks_onnx(self, masks_per_img, labels_per_img, scores_per_img, + def get_seg_masks_onnx(self, masks_per_img, test_cfg, img_meta): - # resize mask predictions back - seg_masks = self.rescale_masks(masks_per_img, img_meta) - seg_masks = seg_masks > test_cfg.mask_thr + seg_masks = masks_per_img > 0.5 return seg_masks diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py index aa8cef6d8..28b94ed97 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py @@ -544,7 +544,7 @@ class ConvKernelHead(nn.Cell): # # proposal_feats = proposal_feats[None].broadcast_to((num_imgs, ) + proposal_feats.shape) # proposal_feats = ms.ops.broadcast_to(proposal_feats[None], (num_imgs, ) + proposal_feats.shape) tmp_feat = np.broadcast_to(tmp_feat[None], (num_imgs, ) + tmp_feat.shape) - proposal_feats = ms.Tensor(np.copy(tmp_feat), dtype=self.init_kernels.weight.dtype) + proposal_feats = ms.Tensor(np.copy(tmp_feat), dtype=self.init_kernels.weight.dtype) if semantic_feats is not None: x_feats = semantic_feats + loc_feats @@ -559,8 +559,17 @@ class ConvKernelHead(nn.Cell): sigmoid_masks = nonzero_inds.astype(ms.float32) else: sigmoid_masks = nonzero_inds.astype(ms.float32) * sigmoid_masks - einsum = ops.Einsum('bnhw,bchw->bnc') - obj_feats = einsum((sigmoid_masks, x_feats)) + # einsum = ops.Einsum('bnhw,bchw->bnc') + # obj_feats = einsum((sigmoid_masks, x_feats)) + b, n, h, w = sigmoid_masks.shape + _, c, _, _ = x_feats.shape + tmp_sigmoid_masks = ms.ops.reshape(sigmoid_masks, (b, n, h*w)) + tmp_x_feats = ms.ops.reshape(x_feats, (b, c, h*w)) + tmp_x_feats = ms.ops.transpose(tmp_x_feats, (0, 2, 1)) + obj_feats = ms.ops.bmm(tmp_sigmoid_masks, tmp_x_feats) + + # obj_feats = Einsum('bnhw,bchw->bnc', sigmoid_masks, x_feats) + # obj_feats = torch.einsum('bnhw,bchw->bnc', sigmoid_masks, x_feats) else: obj_feats = None -- Gitee From 1ded19f19c5f2b50ad10bfb7a2c535d940bab2d6 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 00:38:05 +0800 Subject: [PATCH 23/51] clean file --- .../Overlap-Recovery/train/pytorch2onnx.py | 343 ------------------ 1 file changed, 343 deletions(-) delete mode 100644 contrib/Overlap-Recovery/train/pytorch2onnx.py diff --git a/contrib/Overlap-Recovery/train/pytorch2onnx.py b/contrib/Overlap-Recovery/train/pytorch2onnx.py deleted file mode 100644 index ed092eadc..000000000 --- a/contrib/Overlap-Recovery/train/pytorch2onnx.py +++ /dev/null @@ -1,343 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import os.path as osp -import warnings -from functools import partial - -import numpy as np -import onnx -import torch -from mmcv import Config, DictAction - -from mmdet.core.export import build_model_from_cfg, preprocess_example_input -from mmdet.core.export.model_wrappers import ONNXRuntimeDetector as UnusedWrapper -from onnx_model_wrappers import ONNXRuntimeDetector - - -def pytorch2onnx(model, - input_img, - input_shape, - normalize_cfg, - opset_version=11, - show=False, - output_file='tmp.onnx', - verify=False, - test_img=None, - do_simplify=False, - dynamic_export=None, - skip_postprocess=False): - - input_config = { - 'input_shape': input_shape, - 'input_path': input_img, - 'normalize_cfg': normalize_cfg - } - # prepare input - one_img, one_meta = preprocess_example_input(input_config) - # import pdb;pdb.set_trace() - # debug - one_meta['scale_factor'] = one_meta['scale_factor'].tolist() - one_meta.pop('show_img') - - pad_H, pad_W = 736, 736 - one_meta['batch_input_shape'] = (pad_H, pad_W) - # one_meta = dict() - - img_list, img_meta_list = [one_img], [[one_meta]] - - if skip_postprocess: - warnings.warn('Not all models support export onnx without post ' - 'process, especially two stage detectors!') - model.forward = model.forward_dummy - torch.onnx.export( - model, - one_img, - output_file, - input_names=['input'], - export_params=True, - keep_initializers_as_inputs=True, - do_constant_folding=True, - verbose=show, - opset_version=opset_version) - - print(f'Successfully exported ONNX model without ' - f'post process: {output_file}') - return - - # replace original forward function - origin_forward = model.forward - model.forward = partial( - model.forward, - img_metas=img_meta_list, - return_loss=False, - rescale=False) - - output_names = ['masks', 'scores'] - # if model.with_mask: - # output_names.append('masks') - input_name = 'input' - dynamic_axes = None - if dynamic_export: - dynamic_axes = { - input_name: { - 0: 'batch', - 2: 'height', - 3: 'width' - }, - 'masks': { - 0: 'batch', - 1: 'num_cls', - 2: 'num_dets' - }, - 'bbox': { - 0: 'batch', - 1: 'num_cls', - 2: 'num_dets' - }, - } - torch.onnx.export( - model, - img_list, - output_file, - input_names=[input_name], - output_names=output_names, - export_params=True, - keep_initializers_as_inputs=True, - do_constant_folding=True, - verbose=show, - opset_version=opset_version, - dynamic_axes=dynamic_axes) - - model.forward = origin_forward - import ipdb - # ipdb.set_trace() - if do_simplify: - import onnxsim - - from mmdet import digit_version - - min_required_version = '0.4.0' - assert digit_version(onnxsim.__version__) >= digit_version( - min_required_version - ), f'Requires to install onnxsim>={min_required_version}' - - model_opt, check_ok = onnxsim.simplify(output_file) - if check_ok: - onnx.save(model_opt, output_file) - print(f'Successfully simplified ONNX model: {output_file}') - else: - warnings.warn('Failed to simplify ONNX model.') - print(f'Successfully exported ONNX model: {output_file}') - - if verify: - # check by onnx - onnx_model = onnx.load(output_file) - onnx.checker.check_model(onnx_model) - - # wrap onnx model - onnx_model = ONNXRuntimeDetector(output_file, model.CLASSES, 0) - if dynamic_export: - # scale up to test dynamic shape - h, w = [int((_ * 1.5) // 32 * 32) for _ in input_shape[2:]] - h, w = min(1344, h), min(1344, w) - input_config['input_shape'] = (1, 3, h, w) - - if test_img is None: - input_config['input_path'] = input_img - - # prepare input once again - one_img, one_meta = 
preprocess_example_input(input_config) - - one_meta['scale_factor'] = one_meta['scale_factor'].tolist() - one_meta.pop('show_img') - - if dynamic_export: - pad_H, pad_W = h, w - one_meta['batch_input_shape'] = (pad_H, pad_W) - - img_list, img_meta_list = [one_img], [[one_meta]] - - # get pytorch output - with torch.no_grad(): - pytorch_results = model( - img_list, - img_metas=img_meta_list, - return_loss=False, - rescale=True)[0] - - img_list = [_.cuda().contiguous() for _ in img_list] - if dynamic_export: - img_list = img_list + [_.flip(-1).contiguous() for _ in img_list] - img_meta_list = img_meta_list * 2 - # get onnx output - onnx_results = onnx_model( - img_list, img_metas=img_meta_list, return_loss=False)[0] - - # print(onnx_results) - # compare a part of result - - for scores in pytorch_results[0]: - new_scores =scores[:, -1] # remove pytorch_results fake bboxes, keep scores - new_pytorch_res = [[new_scores], pytorch_results[1]] - # compare_pairs = list(zip(onnx_results, pytorch_results)) - compare_pairs = list(zip(onnx_results, new_pytorch_res)) - err_msg = 'The numerical values are different between Pytorch' + \ - ' and ONNX, but it does not necessarily mean the' + \ - ' exported ONNX model is problematic.' - # check the numerical value - # [(scores, masks) ,...,] - for type_idx, (onnx_res, pytorch_res) in enumerate(compare_pairs): - for idx, (o_res, p_res) in enumerate(zip(onnx_res, pytorch_res)): - np.testing.assert_allclose( - o_res, p_res, rtol=1e-03, atol=1e-05, err_msg=err_msg) - print('The numerical values are the same between Pytorch and ONNX') - - -def parse_normalize_cfg(test_pipeline): - transforms = None - for pipeline in test_pipeline: - if 'transforms' in pipeline: - transforms = pipeline['transforms'] - break - assert transforms is not None, 'Failed to find `transforms`' - norm_config_li = [_ for _ in transforms if _['type'] == 'Normalize'] - assert len(norm_config_li) == 1, '`norm_config` should only have one' - norm_config = norm_config_li[0] - return norm_config - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert MMDetection models to ONNX') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--input-img', type=str, help='Images for input') - parser.add_argument( - '--show', - action='store_true', - help='Show onnx graph and detection outputs') - parser.add_argument('--output-file', type=str, default='tmp.onnx') - parser.add_argument('--opset-version', type=int, default=12) - parser.add_argument( - '--test-img', type=str, default=None, help='Images for test') - parser.add_argument( - '--dataset', - type=str, - default='coco', - help='Dataset name. This argument is deprecated and will be removed \ - in future releases.') - parser.add_argument( - '--verify', - action='store_true', - help='verify the onnx model output against pytorch output') - parser.add_argument( - '--simplify', - action='store_true', - help='Whether to simplify onnx model.') - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=[800, 1216], - help='input image size') - parser.add_argument( - '--mean', - type=float, - nargs='+', - default=[123.675, 116.28, 103.53], - help='mean value used for preprocess input data.This argument \ - is deprecated and will be removed in future releases.') - parser.add_argument( - '--std', - type=float, - nargs='+', - default=[58.395, 57.12, 57.375], - help='variance value used for preprocess input data. 
' - 'This argument is deprecated and will be removed in future releases.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='Override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--dynamic-export', - action='store_true', - help='Whether to export onnx with dynamic axis.') - parser.add_argument( - '--skip-postprocess', - action='store_true', - help='Whether to export model without post process. Experimental ' - 'option. We do not guarantee the correctness of the exported ' - 'model.') - args = parser.parse_args() - return args - - -if __name__ == '__main__': - args = parse_args() - warnings.warn('Arguments like `--mean`, `--std`, `--dataset` would be \ - parsed directly from config file and are deprecated and \ - will be removed in future releases.') - - # assert args.opset_version == 11, 'MMDet only support opset 11 now' - - try: - from mmcv.onnx.symbolic import register_extra_symbolics - except ModuleNotFoundError: - raise NotImplementedError('please update mmcv to version>=v1.0.4') - register_extra_symbolics(args.opset_version) - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - if args.shape is None: - img_scale = cfg.test_pipeline[1]['img_scale'] - input_shape = (1, 3, img_scale[1], img_scale[0]) - elif len(args.shape) == 1: - input_shape = (1, 3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = (1, 3) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - # import pdb;pdb.set_trace() - # build the model and load checkpoint - model = build_model_from_cfg(args.config, args.checkpoint, - args.cfg_options) - - if not args.input_img: - args.input_img = osp.join(osp.dirname(__file__), '../../demo/demo.jpg') - - normalize_cfg = parse_normalize_cfg(cfg.test_pipeline) - - # convert model to onnx file - pytorch2onnx( - model, - args.input_img, - input_shape, - normalize_cfg, - opset_version=args.opset_version, - show=args.show, - output_file=args.output_file, - verify=args.verify, - test_img=args.test_img, - do_simplify=args.simplify, - dynamic_export=args.dynamic_export, - skip_postprocess=args.skip_postprocess) - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + red_text - msg += 'DeprecationWarning: This tool will be deprecated in future. 
'
-    msg += blue_text + 'Welcome to use the unified model deployment toolbox '
-    msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy'
-    msg += reset_style
-    warnings.warn(msg)
-- 
Gitee


From 6fe0ea200adb4ab15e80dd1b32b2e1e14bd50f3a Mon Sep 17 00:00:00 2001
From: HamPerdredes
Date: Tue, 13 Dec 2022 00:44:58 +0800
Subject: [PATCH 24/51] update readme of train

---
 contrib/Overlap-Recovery/README.md            |  71 +++-
 contrib/Overlap-Recovery/train/LICENSE        | 201 ----------
 .../train/onnx_model_wrappers.py              | 171 ---------
 contrib/Overlap-Recovery/train/onnx_test.py   |  11 -
 .../Overlap-Recovery/train/pytorch2onnx.py    | 343 ------------------
 contrib/Overlap-Recovery/train/readme_pre.py  |  13 -
 6 files changed, 69 insertions(+), 741 deletions(-)
 delete mode 100644 contrib/Overlap-Recovery/train/LICENSE
 delete mode 100644 contrib/Overlap-Recovery/train/onnx_model_wrappers.py
 delete mode 100644 contrib/Overlap-Recovery/train/onnx_test.py
 delete mode 100644 contrib/Overlap-Recovery/train/pytorch2onnx.py
 delete mode 100644 contrib/Overlap-Recovery/train/readme_pre.py

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
index ee861e179..34cc7515e 100644
--- a/contrib/Overlap-Recovery/README.md
+++ b/contrib/Overlap-Recovery/README.md
@@ -51,7 +51,62 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示
 
 其中,`Overlap-Recovery/train`工程目录如下图所示:
 
-TODO
+```python
+├── eval.py  #精度测试
+├── train.py  #模型训练主函数
+├── __init__.py
+├── src  #模型源码及相关辅助函数
+│   ├── __init__.py
+│   ├── dataset  #数据集加载、预处理等相关函数
+│   │   ├── __init__.py
+│   │   ├── base_dataset.py  #dataset类的基类
+│   │   ├── build_dataset.py  #提供接口构造dataset对象
+│   │   ├── data_process.py  #数据预处理相关函数
+│   │   ├── real_dataset.py  #用于测试数据的dataset类
+│   │   ├── synth_dataset.py  #用于训练数据的dataset类
+│   │   ├── utils.py  #dataset构造所需的辅助函数
+│   ├── deoccluder  #去重叠算法相关代码
+│   │   ├── __init__.py
+│   │   ├── deoccluder_r50.py  #模型主结构代码
+│   │   ├── fpn_neck.py  # FPN模块代码
+│   │   ├── resnet.py  # resnet-50 backbone代码
+│   │   ├── utils.py  # 辅助函数
+│   │   ├── rpn  # kernel初始化相关
+│   │   │   ├── __init__.py
+│   │   │   ├── kernel_head.py  # kernel初始化相关函数
+│   │   │   ├── positional_encoding.py  # 位置编码函数
+│   │   │   ├── semantic_fpn_warpper.py  # 语义fpn编码
+│   │   ├── roi  # kernel更新相关
+│   │   │   ├── __init__.py
+│   │   │   ├── custom_kernel_iter_head.py  # kernel迭代函数
+│   │   │   ├── custom_kernel_update_head.py  # kernel更新函数
+│   │   │   ├── kernel_update_head.py  # kernel更新函数基类
+│   │   │   ├── kernel_updator.py  # kernel更新辅助函数
+│   │   ├── custom_cells  # 算法组件
+│   │   │   ├── __init__.py
+│   │   │   ├── custom_assigner.py  # 标签分配函数
+│   │   │   ├── custom_blocks.py  # 自定义模块
+│   │   │   ├── custom_losses.py  # 自定义损失函数
+│   │   │   ├── custom_match_cost.py  # 自定义匹配代价评估函数
+│   │   │   ├── custom_operations.py  # 自定义算子
+│   │   │   ├── custom_samplers.py  # 自定义采样函数
+│   ├── model_utils  # 模型训练相关代码
+│   │   ├── __init__.py
+│   │   ├── device_adapter.py
+│   │   ├── local_adapter.py
+│   │   ├── moxing_adapter.py
+│   │   ├── configs  # 配置文件函数
+│   │   │   ├── __init__.py
+│   │   │   ├── config_base.py
+│   │   │   ├── config_model.py
+│   ├── utils  # 将pytorch权重转为mindspore权重
+│   │   └── pth2ckpt.py
+├── scripts  # 脚本文件
+│   ├── convert_resnet.sh  # 将pytorch的resnet权重转为mindspore权重
+│   └── train.sh  # 训练指令
+├── resource_utils  # 转换pytorch权重所需的相关材料
+│   └── resnet50_dict.json
+```
 
 
 其中,`Overlap-Recovery/inference`工程目录如下图所示:
@@ -100,7 +155,19 @@
 
 其中训练环境依赖软件和版本如下表:
 
-TODO
+| 软件名称 | 版本 |
+| ------------------- | ----------- |
+| MindX SDK | 3.0RC3 |
+| Ascend-CANN-toolkit | 5.1.RC2 |
+| ubuntu | 18.04.1 LTS |
+| python | 3.9.2 |
+| opencv-python | 4.6.0.66 |
+| numpy | 1.23.1 |
+| pillow | 9.1.0 |
+| mmcv | 0.2.14 |
+| loguru | 0.2.14 |
+| tqdm | 4.64.1 |
+| imagesize | 1.4.1 |
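+
+其中Python第三方库可参考如下命令安装(此命令仅为示例,具体版本请以上表为准;MindX SDK、Ascend-CANN-toolkit等需按官方文档单独安装):
+
+```
+pip install opencv-python numpy pillow mmcv loguru tqdm imagesize
+```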
其中推理环境依赖软件和版本如下表: diff --git a/contrib/Overlap-Recovery/train/LICENSE b/contrib/Overlap-Recovery/train/LICENSE deleted file mode 100644 index 261eeb9e9..000000000 --- a/contrib/Overlap-Recovery/train/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. 
Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/contrib/Overlap-Recovery/train/onnx_model_wrappers.py b/contrib/Overlap-Recovery/train/onnx_model_wrappers.py deleted file mode 100644 index 4d046c5e6..000000000 --- a/contrib/Overlap-Recovery/train/onnx_model_wrappers.py +++ /dev/null @@ -1,171 +0,0 @@ -# -*- coding: utf-8 -*- -# @Author: Wenwen Yu -# @Email: yuwenwen62@gmail.com -# @Created Time: 11/15/22 3:47 PM - -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings - -import numpy as np -import torch - -from mmdet.core import bbox2result -from mmdet.models import BaseDetector - - -class DeployBaseDetector(BaseDetector): - """DeployBaseDetector.""" - - def __init__(self, class_names, device_id): - super(DeployBaseDetector, self).__init__() - self.CLASSES = class_names - self.device_id = device_id - - def simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def aug_test(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def extract_feat(self, imgs): - raise NotImplementedError('This method is not implemented.') - - def forward_train(self, imgs, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def val_step(self, data, optimizer): - raise NotImplementedError('This method is not implemented.') - - def train_step(self, data, optimizer): - raise NotImplementedError('This method is not implemented.') - - def forward_test(self, *, img, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def async_simple_test(self, img, img_metas, **kwargs): - raise NotImplementedError('This method is not implemented.') - - def forward(self, img, img_metas, return_loss=True, **kwargs): - outputs = self.forward_test(img, img_metas, **kwargs) - batch_masks, seg_scores = outputs # (bs, num_cls num_det, h, w) (bs, num_cls, num_det, 5) - # batch_dets, batch_labels = outputs[:2] - # batch_masks = outputs[2] if len(outputs) == 3 else None - batch_size = img[0].shape[0] - img_metas = img_metas[0] - results = [] - rescale = kwargs.get('rescale', True) - for i in range(batch_size): - masks = batch_masks[i] # (num_det, h, w) - score = seg_scores[i] # (num_det,) - seg_per_cls = [] - score_per_cls = [] - score_per_cls.append(score) # ( num_det) - # (num_det, h, w) - # masks = masks[0] # get num_cls is 1(onlh text), so we get it from idx 0 - for ins_idx in range(masks.shape[0]): - img_h, img_w = img_metas[i]['img_shape'][:2] - ori_h, ori_w = img_metas[i]['ori_shape'][:2] - mask = masks[ins_idx, :img_h, :img_w] - if rescale: - mask = mask.astype(np.float32) - mask = torch.from_numpy(mask) - mask = torch.nn.functional.interpolate( - mask.unsqueeze(0).unsqueeze(0), size=(ori_h, ori_w)) - mask = mask.squeeze(0).squeeze(0).detach().asnumpy() - # if mask.dtype != np.bool: - # mask = mask >= 0.5 - seg_per_cls.append(mask) - results.append((score_per_cls, [seg_per_cls])) - return results - -def forward1(self, img, img_metas, return_loss=True, **kwargs): - outputs = self.forward_test(img, img_metas, **kwargs) - batch_masks, bbox_results = outputs # (bs, 
num_cls num_det, h, w) (bs, num_cls, num_det, 5) - # batch_dets, batch_labels = outputs[:2] - # batch_masks = outputs[2] if len(outputs) == 3 else None - batch_size = img[0].shape[0] - img_metas = img_metas[0] - results = [] - rescale = kwargs.get('rescale', False) - for i in range(batch_size): - masks = batch_masks[i] # (num_cls num_det, h, w) - bbox_and_score = bbox_results[i] # num_cls, num_det, 5) - seg_per_cls = [] - bbox_per_cls = [] - bbox_per_cls.append(bbox_and_score.squeeze(0).detach().numpy()) # (num_det, 5) - # (num_det, h, w) - masks = masks[0] # get num_cls is 1(onlh text), so we get it from idx 0 - for ins_idx in range(masks.shape[0]): - img_h, img_w = img_metas[i]['img_shape'][:2] - ori_h, ori_w = img_metas[i]['ori_shape'][:2] - mask = masks[ins_idx, :img_h, :img_w] - if rescale: - mask = torch.nn.functional.interpolate( - mask, size=(ori_h, ori_w)) - mask = mask.detach().numpy() - - if mask.dtype != np.bool: - mask = mask >= 0.5 - seg_per_cls.append(mask) - results.append((bbox_per_cls, [seg_per_cls])) - return results - - - -class ONNXRuntimeDetector(DeployBaseDetector): - """Wrapper for detector's inference with ONNXRuntime.""" - - def __init__(self, onnx_file, class_names, device_id): - super(ONNXRuntimeDetector, self).__init__(class_names, device_id) - import onnxruntime as ort - - # get the custom op path - ort_custom_op_path = '' - try: - from mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() - except (ImportError, ModuleNotFoundError): - warnings.warn('If input model has custom op from mmcv, \ - you may have to build mmcv with ONNXRuntime from source.') - session_options = ort.SessionOptions() - # register custom op for onnxruntime - if osp.exists(ort_custom_op_path): - session_options.register_custom_ops_library(ort_custom_op_path) - sess = ort.InferenceSession(onnx_file, session_options) - providers = ['CPUExecutionProvider'] - options = [{}] - is_cuda_available = ort.get_device() == 'GPU' - if is_cuda_available: - providers.insert(0, 'CUDAExecutionProvider') - options.insert(0, {'device_id': device_id}) - - sess.set_providers(providers, options) - - self.sess = sess - self.io_binding = sess.io_binding() - self.output_names = [_.name for _ in sess.get_outputs()] - self.is_cuda_available = is_cuda_available - - def forward_test(self, imgs, img_metas, **kwargs): - input_data = imgs[0] - # set io binding for inputs/outputs - device_type = 'cuda' if self.is_cuda_available else 'cpu' - if not self.is_cuda_available: - input_data = input_data.cpu() - self.io_binding.bind_input( - name='input', - device_type=device_type, - device_id=self.device_id, - element_type=np.float32, - shape=input_data.shape, - buffer_ptr=input_data.data_ptr()) - - for name in self.output_names: - self.io_binding.bind_output(name) - # run session to get outputs - self.sess.run_with_iobinding(self.io_binding) - ort_outputs = self.io_binding.copy_outputs_to_cpu() - return ort_outputs - - diff --git a/contrib/Overlap-Recovery/train/onnx_test.py b/contrib/Overlap-Recovery/train/onnx_test.py deleted file mode 100644 index f5b01fd7e..000000000 --- a/contrib/Overlap-Recovery/train/onnx_test.py +++ /dev/null @@ -1,11 +0,0 @@ -# -*- coding: utf-8 -*- -# @Author: Wenwen Yu -# @Email: yuwenwen62@gmail.com -# @Created Time: 11/14/22 11:47 PM - -import onnx - -onnx_file = '/home/whua/code/overlap_text/logs/knet/default_synth_v0_3x_4proposal/weight.onnx' -model = onnx.load(onnx_file) -print([input.name for input in model.graph.input]) -print([output.name for output in 
model.graph.output]) \ No newline at end of file diff --git a/contrib/Overlap-Recovery/train/pytorch2onnx.py b/contrib/Overlap-Recovery/train/pytorch2onnx.py deleted file mode 100644 index ed092eadc..000000000 --- a/contrib/Overlap-Recovery/train/pytorch2onnx.py +++ /dev/null @@ -1,343 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os.path as osp -import warnings -from functools import partial - -import numpy as np -import onnx -import torch -from mmcv import Config, DictAction - -from mmdet.core.export import build_model_from_cfg, preprocess_example_input -from mmdet.core.export.model_wrappers import ONNXRuntimeDetector as UnusedWrapper -from onnx_model_wrappers import ONNXRuntimeDetector - - -def pytorch2onnx(model, - input_img, - input_shape, - normalize_cfg, - opset_version=11, - show=False, - output_file='tmp.onnx', - verify=False, - test_img=None, - do_simplify=False, - dynamic_export=None, - skip_postprocess=False): - - input_config = { - 'input_shape': input_shape, - 'input_path': input_img, - 'normalize_cfg': normalize_cfg - } - # prepare input - one_img, one_meta = preprocess_example_input(input_config) - # import pdb;pdb.set_trace() - # debug - one_meta['scale_factor'] = one_meta['scale_factor'].tolist() - one_meta.pop('show_img') - - pad_H, pad_W = 736, 736 - one_meta['batch_input_shape'] = (pad_H, pad_W) - # one_meta = dict() - - img_list, img_meta_list = [one_img], [[one_meta]] - - if skip_postprocess: - warnings.warn('Not all models support export onnx without post ' - 'process, especially two stage detectors!') - model.forward = model.forward_dummy - torch.onnx.export( - model, - one_img, - output_file, - input_names=['input'], - export_params=True, - keep_initializers_as_inputs=True, - do_constant_folding=True, - verbose=show, - opset_version=opset_version) - - print(f'Successfully exported ONNX model without ' - f'post process: {output_file}') - return - - # replace original forward function - origin_forward = model.forward - model.forward = partial( - model.forward, - img_metas=img_meta_list, - return_loss=False, - rescale=False) - - output_names = ['masks', 'scores'] - # if model.with_mask: - # output_names.append('masks') - input_name = 'input' - dynamic_axes = None - if dynamic_export: - dynamic_axes = { - input_name: { - 0: 'batch', - 2: 'height', - 3: 'width' - }, - 'masks': { - 0: 'batch', - 1: 'num_cls', - 2: 'num_dets' - }, - 'bbox': { - 0: 'batch', - 1: 'num_cls', - 2: 'num_dets' - }, - } - torch.onnx.export( - model, - img_list, - output_file, - input_names=[input_name], - output_names=output_names, - export_params=True, - keep_initializers_as_inputs=True, - do_constant_folding=True, - verbose=show, - opset_version=opset_version, - dynamic_axes=dynamic_axes) - - model.forward = origin_forward - import ipdb - # ipdb.set_trace() - if do_simplify: - import onnxsim - - from mmdet import digit_version - - min_required_version = '0.4.0' - assert digit_version(onnxsim.__version__) >= digit_version( - min_required_version - ), f'Requires to install onnxsim>={min_required_version}' - - model_opt, check_ok = onnxsim.simplify(output_file) - if check_ok: - onnx.save(model_opt, output_file) - print(f'Successfully simplified ONNX model: {output_file}') - else: - warnings.warn('Failed to simplify ONNX model.') - print(f'Successfully exported ONNX model: {output_file}') - - if verify: - # check by onnx - onnx_model = onnx.load(output_file) - onnx.checker.check_model(onnx_model) - - # wrap onnx model - onnx_model = 
ONNXRuntimeDetector(output_file, model.CLASSES, 0) - if dynamic_export: - # scale up to test dynamic shape - h, w = [int((_ * 1.5) // 32 * 32) for _ in input_shape[2:]] - h, w = min(1344, h), min(1344, w) - input_config['input_shape'] = (1, 3, h, w) - - if test_img is None: - input_config['input_path'] = input_img - - # prepare input once again - one_img, one_meta = preprocess_example_input(input_config) - - one_meta['scale_factor'] = one_meta['scale_factor'].tolist() - one_meta.pop('show_img') - - if dynamic_export: - pad_H, pad_W = h, w - one_meta['batch_input_shape'] = (pad_H, pad_W) - - img_list, img_meta_list = [one_img], [[one_meta]] - - # get pytorch output - with torch.no_grad(): - pytorch_results = model( - img_list, - img_metas=img_meta_list, - return_loss=False, - rescale=True)[0] - - img_list = [_.cuda().contiguous() for _ in img_list] - if dynamic_export: - img_list = img_list + [_.flip(-1).contiguous() for _ in img_list] - img_meta_list = img_meta_list * 2 - # get onnx output - onnx_results = onnx_model( - img_list, img_metas=img_meta_list, return_loss=False)[0] - - # print(onnx_results) - # compare a part of result - - for scores in pytorch_results[0]: - new_scores =scores[:, -1] # remove pytorch_results fake bboxes, keep scores - new_pytorch_res = [[new_scores], pytorch_results[1]] - # compare_pairs = list(zip(onnx_results, pytorch_results)) - compare_pairs = list(zip(onnx_results, new_pytorch_res)) - err_msg = 'The numerical values are different between Pytorch' + \ - ' and ONNX, but it does not necessarily mean the' + \ - ' exported ONNX model is problematic.' - # check the numerical value - # [(scores, masks) ,...,] - for type_idx, (onnx_res, pytorch_res) in enumerate(compare_pairs): - for idx, (o_res, p_res) in enumerate(zip(onnx_res, pytorch_res)): - np.testing.assert_allclose( - o_res, p_res, rtol=1e-03, atol=1e-05, err_msg=err_msg) - print('The numerical values are the same between Pytorch and ONNX') - - -def parse_normalize_cfg(test_pipeline): - transforms = None - for pipeline in test_pipeline: - if 'transforms' in pipeline: - transforms = pipeline['transforms'] - break - assert transforms is not None, 'Failed to find `transforms`' - norm_config_li = [_ for _ in transforms if _['type'] == 'Normalize'] - assert len(norm_config_li) == 1, '`norm_config` should only have one' - norm_config = norm_config_li[0] - return norm_config - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert MMDetection models to ONNX') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--input-img', type=str, help='Images for input') - parser.add_argument( - '--show', - action='store_true', - help='Show onnx graph and detection outputs') - parser.add_argument('--output-file', type=str, default='tmp.onnx') - parser.add_argument('--opset-version', type=int, default=12) - parser.add_argument( - '--test-img', type=str, default=None, help='Images for test') - parser.add_argument( - '--dataset', - type=str, - default='coco', - help='Dataset name. 
This argument is deprecated and will be removed \ - in future releases.') - parser.add_argument( - '--verify', - action='store_true', - help='verify the onnx model output against pytorch output') - parser.add_argument( - '--simplify', - action='store_true', - help='Whether to simplify onnx model.') - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=[800, 1216], - help='input image size') - parser.add_argument( - '--mean', - type=float, - nargs='+', - default=[123.675, 116.28, 103.53], - help='mean value used for preprocess input data.This argument \ - is deprecated and will be removed in future releases.') - parser.add_argument( - '--std', - type=float, - nargs='+', - default=[58.395, 57.12, 57.375], - help='variance value used for preprocess input data. ' - 'This argument is deprecated and will be removed in future releases.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='Override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--dynamic-export', - action='store_true', - help='Whether to export onnx with dynamic axis.') - parser.add_argument( - '--skip-postprocess', - action='store_true', - help='Whether to export model without post process. Experimental ' - 'option. We do not guarantee the correctness of the exported ' - 'model.') - args = parser.parse_args() - return args - - -if __name__ == '__main__': - args = parse_args() - warnings.warn('Arguments like `--mean`, `--std`, `--dataset` would be \ - parsed directly from config file and are deprecated and \ - will be removed in future releases.') - - # assert args.opset_version == 11, 'MMDet only support opset 11 now' - - try: - from mmcv.onnx.symbolic import register_extra_symbolics - except ModuleNotFoundError: - raise NotImplementedError('please update mmcv to version>=v1.0.4') - register_extra_symbolics(args.opset_version) - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - if args.shape is None: - img_scale = cfg.test_pipeline[1]['img_scale'] - input_shape = (1, 3, img_scale[1], img_scale[0]) - elif len(args.shape) == 1: - input_shape = (1, 3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = (1, 3) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - # import pdb;pdb.set_trace() - # build the model and load checkpoint - model = build_model_from_cfg(args.config, args.checkpoint, - args.cfg_options) - - if not args.input_img: - args.input_img = osp.join(osp.dirname(__file__), '../../demo/demo.jpg') - - normalize_cfg = parse_normalize_cfg(cfg.test_pipeline) - - # convert model to onnx file - pytorch2onnx( - model, - args.input_img, - input_shape, - normalize_cfg, - opset_version=args.opset_version, - show=args.show, - output_file=args.output_file, - verify=args.verify, - test_img=args.test_img, - do_simplify=args.simplify, - dynamic_export=args.dynamic_export, - skip_postprocess=args.skip_postprocess) - - # Following strings of text style are from colorama package - bright_style, reset_style = '\x1b[1m', '\x1b[0m' - red_text, blue_text = '\x1b[31m', '\x1b[34m' - white_background = '\x1b[107m' - - msg = white_background + bright_style + 
red_text - msg += 'DeprecationWarning: This tool will be deprecated in future. ' - msg += blue_text + 'Welcome to use the unified model deployment toolbox ' - msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy' - msg += reset_style - warnings.warn(msg) diff --git a/contrib/Overlap-Recovery/train/readme_pre.py b/contrib/Overlap-Recovery/train/readme_pre.py deleted file mode 100644 index 1d0f6581c..000000000 --- a/contrib/Overlap-Recovery/train/readme_pre.py +++ /dev/null @@ -1,13 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# @Time : 2022/11/30 23:09 -# @Author : WeiHua - -""" -ipdb -opencv-python -imagesize -loguru -pip install mmcv==0.2.14 - -""" \ No newline at end of file -- Gitee From 90e2dedd708c2dbef1c69b18d1a6bfdf1d7f0873 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 00:51:09 +0800 Subject: [PATCH 25/51] update readme --- contrib/Overlap-Recovery/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index 34cc7515e..cf915ff68 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -53,7 +53,8 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 ```pytnon ├── eval.py #精度测试 -├── train.py #模型训练主函数 +├── train.py #模型训练主函数 +├── export.py #将ckpt模型导出为onnx格式的模型 ├── __init__.py ├── src #模型源码及相关辅助函数 │ ├── __init__.py @@ -117,7 +118,6 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 ├── load_ann.py #加载测试集 ├── load_img_data.py #加载图片数据 ├── ominfer.py #单张图片推理 -├── export.py #将ckpt模型导出为onnx格式的模型 ├── preprocess_utils.py #加载图片做预处理的辅助函数 ├── README.md ├── models #不同类型的模型文件 -- Gitee From c31b1175f8b127715d509c82a45be81a317886a1 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 01:15:40 +0800 Subject: [PATCH 26/51] update readme --- contrib/Overlap-Recovery/README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index cf915ff68..93f112c62 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -160,7 +160,8 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 | MindX SDK | 3.0RC3 | | Ascend-CANN-toolkit | 5.1.RC2 | | ubuntu | 18.04.1 LTS | -| python | 3.9.2 | +| python | 3.9.2 | +| MindSpore | 1.9.0 | | opencv-python | 4.6.0.66 | | numpy | 1.23.1 | | pillow | 9.1.0 | -- Gitee From 550ab83d53cc97de057d31e05e7c9e1da23f0d78 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 12:49:10 +0800 Subject: [PATCH 27/51] update readme --- contrib/Overlap-Recovery/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index 93f112c62..cb99210e0 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -278,7 +278,7 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 4. 进入推理服务器执行命令(修改`onnx_model_path`和`output_model_path`参数为自己的路径): ``` - atc --model=[air_model_path] --framework=5 --output=[output_model_path] --soc_version=Ascend310 --input_shape="img:1,3,768,768" + atc --model=[onnx_model_path] --framework=5 --output=[output_model_path] --soc_version=Ascend310 --input_shape="img:1,3,768,768" ``` 5. 
执行该命令会在当前目录下生成项目需要的模型文件`[output_model].om`。执行后终端输出为: @@ -309,7 +309,7 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 python ominfer.py ``` -**步骤4** 运行结束输出`test`文件夹,预测的可视化结果保存在`test`文件夹下。 +**步骤4** 运行结束输出`test`文件夹,预测的mask可视化结果保存在`test`文件夹下。 -- Gitee From 36cecdc34549fceba381d6e8b7035af2bf29c855 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 13:20:21 +0800 Subject: [PATCH 28/51] update readme --- contrib/Overlap-Recovery/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index cb99210e0..bad9411b1 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -36,7 +36,7 @@ npu-smi info | 2 | 图像解码 | 通过Pillow第三方库对图像解码。 | | 3 | 图像放缩 | 模型的输入为固定尺寸,所以需要对输入图片进行等比例放缩。 | | 4 | 文字还原 | 在图像放缩后,将缓存区数据送入文字还原模型。本方案选用自研算法进行文本还原 | -| 5 | 结果可视化 | 通过Pillow库可视化单张图像的识别。 | +| 5 | 结果可视化 | 通过Pillow库可视化单张图像的预测的文本实例mask。 | -- Gitee From a2f4af376dae1fb68dd4f8dfe8e54b4c9a4242fb Mon Sep 17 00:00:00 2001 From: HamPerdredes Date: Tue, 13 Dec 2022 16:59:12 +0800 Subject: [PATCH 29/51] clean code and update readme --- contrib/Overlap-Recovery/README.md | 66 ++++----------- contrib/Overlap-Recovery/train/eval.py | 84 +++---------------- .../train/scripts/convert_resnet.sh | 8 +- .../deoccluder/custom_cells/custom_blocks.py | 2 +- .../src/model_utils/configs/config_base.py | 15 ++-- .../src/model_utils/configs/config_model.py | 5 -- contrib/Overlap-Recovery/train/train.py | 1 + 7 files changed, 36 insertions(+), 145 deletions(-) diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index 34cc7515e..134d06e55 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -196,62 +196,26 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 ## 3 模型训练 -**步骤1** 从昇腾社区的modelzoo中下载官方CRNN模型代码:https://www.hiascend.com/zh/software/modelzoo/models/detail/C/c4945b2fc8aa47f6af9b4f2870e41062/1 - -**步骤2** 为适配我们的任务要求,做如下修改: - -1. **default_config.yaml** - - ```yaml - model_version: "V2" # GPU训练使用V2 - label_dict: "PATH/TO/ch_sim_en_digit_symble.txt" # 使用自己的字典的路径 - max_text_length: 12 - class_num: 6703 - blank: 6702 - train_dataset_path: "" # 训练数据集路径 - train_eval_dataset: "synth" # 名称使用synth - train_eval_dataset_path: "" # 测试数据路径 - ``` - -2. **dataset.py** - - 将第41行的: - - ```python - letters = [letter for letter in config1.label_dict] - ``` - - 修改为: - - ```python - letters = [] - with open(config1.label_dict, 'r') as f: - for line in f: - letter = line.strip('\n') - letters.append(letter) - f.close() - ``` - -3. 
**metric.py**
 
-   将第18行的字典
-
-   ```python
-   label_dict = "abcdefghijklmnopqrstuvwxyz0123456789"
-   ```
-
-   修改为( `dict_path `为自行准备的字典 `ch_sim_en_digit_symble.txt `,可在本仓库下找到):
-
-   ```
-   label_dict = []
-   with open("[dict_path]", 'r') as f:
-       for line in f:
-           letter = line.strip('\n')
-           label_dict.append(letter)
-       f.close()
-   ```
-
-**步骤3** 训练步骤参考官方代码https://www.hiascend.com/zh/software/modelzoo/models/detail/C/c4945b2fc8aa47f6af9b4f2870e41062/1
+**步骤1** 从pytorch官方下载[resnet-50预训练权重](https://download.pytorch.org/models/resnet50-19c8e357.pth)，并利用脚本转换成mindspore支持的格式：
+
+```
+sh train/scripts/convert_resnet.sh PATH-TO-PYTORCH-WEIGHT PATH-TO-MINDSPORE-WEIGHT
+```
 
+**步骤2** 准备数据集并修改相关config参数
+
+    在```train/src/model_utils/configs/config_base.py```中，修改```pretrained_r50```参数为转换后的backbone权重路径，
+并参考测试数据的格式准备好训练数据，修改```synth_data_root```和```real_data_root```等参数。此外，可通过修改```mindrecord_dir```
+参数设置输出路径。
 
 
+**步骤3** 准备环境并训练模型
+
+    参考环境要求配置好运行环境后，通过```python train/train.py```命令进行模型训练。
 
+**步骤4** 使用mindspore模型直接推理
+
+    修改```train/src/model_utils/configs/config_base.py```中```checkpoint_path```参数为checkpoint的保存路径，并运行```python train/eval.py```即可
 
 
 
diff --git a/contrib/Overlap-Recovery/train/eval.py b/contrib/Overlap-Recovery/train/eval.py
index 31d11f5c8..1e1d63d5e 100644
--- a/contrib/Overlap-Recovery/train/eval.py
+++ b/contrib/Overlap-Recovery/train/eval.py
@@ -21,7 +21,7 @@ from loguru import logger
 from tqdm import tqdm
 
 from src.model_utils.configs.config_base import config
-from src.model_utils.device_adapter import get_device_id, get_device_num
+from src.model_utils.device_adapter import get_device_id
 from src.deoccluder import CustomKNet
 from src.dataset import build_dataset
 
@@ -36,7 +36,7 @@ set_seed(1)
 
 def eval_func(eval_set, ckpt_path, config, src_eval_set):
     """MaskRcnn evaluation."""
-
+    config.train = False
     net = CustomKNet(config.model)
     param_dict = load_checkpoint(ckpt_path)
     load_param_into_net(net, param_dict, strict_load=False)
@@ -52,9 +52,9 @@
 
     eval_iter = 0
     total = eval_set.get_dataset_size()
-    print("\n========================================\n")
-    print("total images num: ", total)
-    print("Processing, please wait a moment.")
+    logger.info("\n========================================\n")
+    logger.info(f"total images num: {total}")
+    logger.info("Processing, please wait a moment.")
     results = []
     for data in tqdm(eval_set.create_dict_iterator(output_numpy=True, num_epochs=1), total=total):
         eval_iter = eval_iter + 1
@@ -63,96 +63,32 @@
         # run net
         output = net(**data)
         results.append(output[0])
-    import ipdb
-    ipdb.set_trace()
-    # print(src_eval_set.evaluate(results))
-    print(src_eval_set.evaluate(results, metric='segm_with_each'))
-
-
-def modelarts_process():
-    """ modelarts process """
-    def unzip(zip_file, save_dir):
-        import zipfile
-        s_time = time.time()
-        if not os.path.exists(os.path.join(save_dir, config.modelarts_dataset_unzip_name)):
-            zip_isexist = zipfile.is_zipfile(zip_file)
-            if zip_isexist:
-                fz = zipfile.ZipFile(zip_file, 'r')
-                data_num = len(fz.namelist())
-                print("Extract Start...")
-                print("unzip file num: {}".format(data_num))
-                data_print = int(data_num / 100) if data_num > 100 else 1
-                i = 0
-                for file in fz.namelist():
-                    if i % data_print == 0:
-                        print("unzip percent: {}%".format(int(i * 100 / data_num)), flush=True)
-                    i += 1
-                    fz.extract(file, save_dir)
-                print("cost time: {}min:{}s.".format(int((time.time() - s_time) / 60),\
-                    int(int(time.time() - s_time) % 60)))
-                print("Extract Done")
-            else:
- 
print("This is not zip.") - else: - print("Zip has been extracted.") - - if config.need_modelarts_dataset_unzip: - zip_file_1 = os.path.join(config.data_path, config.modelarts_dataset_unzip_name + ".zip") - save_dir_1 = os.path.join(config.data_path) - - sync_lock = "/tmp/unzip_sync.lock" - - # Each server contains 8 devices as most - if get_device_id() % min(get_device_num(), 8) == 0 and not os.path.exists(sync_lock): - print("Zip file path: ", zip_file_1) - print("Unzip file save dir: ", save_dir_1) - unzip(zip_file_1, save_dir_1) - print("===Finish extract data synchronization===") - try: - os.mknod(sync_lock) - except IOError: - pass - - while True: - if os.path.exists(sync_lock): - break - time.sleep(1) - - print("Device: {}, Finish sync unzip data from {} to {}.".format(get_device_id(), zip_file_1, save_dir_1)) - print("#" * 200, os.listdir(save_dir_1)) - print("#" * 200, os.listdir(os.path.join(config.data_path, config.modelarts_dataset_unzip_name))) - - config.coco_root = os.path.join(config.data_path, config.modelarts_dataset_unzip_name) - config.checkpoint_path = os.path.join(config.output_path, config.ckpt_path) - config.ann_file = os.path.join(config.coco_root, config.ann_file) + logger.info(src_eval_set.evaluate(results)) def eval_(): device_target = config.device_target - # context.set_context(mode=context.GRAPH_MODE, device_target=device_target, device_id=get_device_id()) - context.set_context(mode=context.PYNATIVE_MODE, device_target=device_target, device_id=get_device_id(), ) + context.set_context(mode=context.PYNATIVE_MODE, device_target=device_target, device_id=get_device_id()) - print("Start create eval dataset!") + logger.info("Start create eval dataset!") # It will generate mindrecord file in config.mindrecord_dir if not os.path.exists(config.mindrecord_dir): os.makedirs(config.mindrecord_dir) - # create_mindrecord_dir(prefix, config.mindrecord_dir, mindrecord_file) logger.add(os.path.join(config.mindrecord_dir, time.asctime(time.localtime()).replace(' ', '_') + ".log")) # prepare dataset eval_set_cls = build_dataset(config.data['test']) collect_pipe = config.data['test']['pipeline'][-1] column_names = list(collect_pipe['keys']) + list(collect_pipe['meta_keys']) - print(column_names) eval_set = de.GeneratorDataset(eval_set_cls, column_names=column_names, num_parallel_workers=config.data['workers_per_gpu'], shuffle=False) eval_set = eval_set.batch(1, drop_remainder=False) - print("Start Eval!") - print("ckpt_path=", config.checkpoint_path) + logger.info("Start Eval!") + logger.info("ckpt_path=", config.checkpoint_path) eval_func(eval_set, config.checkpoint_path, config, eval_set_cls) diff --git a/contrib/Overlap-Recovery/train/scripts/convert_resnet.sh b/contrib/Overlap-Recovery/train/scripts/convert_resnet.sh index 14cefecfb..ffc29c394 100644 --- a/contrib/Overlap-Recovery/train/scripts/convert_resnet.sh +++ b/contrib/Overlap-Recovery/train/scripts/convert_resnet.sh @@ -23,7 +23,7 @@ echo "========================================================================== PTH_PATH=$1 CKPT_PATH=$2 PROJECT_DIR=$(cd "$(dirname "$0")" || exit; pwd) -DICT_FILE=/home/whua/code/overlap_text/KNet_Huawei/K-Net-mindspore/resnet50_dict.json +DICT_FILE=$PROJECT_DIR/../resource_utils/resnet50_dict.json if [ $# != 2 ] then @@ -32,12 +32,12 @@ then exit fi -LOG_DIR=/home/whua/code/overlap_text/KNet_Huawei/K-Net-mindspore/logs -echo $PROJECT_DIR +LOG_DIR=$PROJECT_DIR/../logs -python /home/whua/code/overlap_text/KNet_Huawei/K-Net-mindspore/src/utils/pth2ckpt.py \ +python 
$PROJECT_DIR/../src/utils/pth2ckpt.py \ --pth-path $PTH_PATH \ --ckpt-path $CKPT_PATH \ --dict-file $DICT_FILE > $LOG_DIR/convert_resnet.log 2>&1 & echo "The convert_resnet.log file is at /logs/convert_resnet.log" + diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py index 0b1b10a0d..7dc37f934 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py @@ -175,7 +175,7 @@ class MultiheadAttention(nn.Cell): self.embed_dims = embed_dims self.num_heads = num_heads self.batch_first = batch_first - batch_size = config.data['samples_per_gpu'] + batch_size = config.data['samples_per_gpu'] if config.train else 1 self.attn = nn.transformer.MultiHeadAttention( batch_size, num_proposals, num_proposals, embed_dims, num_heads, attention_dropout_rate=attn_drop, **kwargs) diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py index d86b748b4..cc0e65a16 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py @@ -1,8 +1,3 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# @Time : 2022/11/24 21:54 -# @Author : WeiHua - from pprint import pprint, pformat from .config_model import model @@ -24,8 +19,8 @@ class Config: return self.__str__() -synth_data_root = "data/overlap_text/opt_4ins_250k/" -real_data_root = "data/overlap_text/overlap_test_data/" +synth_data_root = "root-directory-to-train-data" +real_data_root = "root-directory-to-test-data" img_scale = (768, 768) img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) @@ -91,12 +86,12 @@ config_dict = dict( ckpt_max=10000, ), device_target='GPU', - mindrecord_dir='/home/whua/code/logs/ms_knet/dice4_occ_ep60', - pretrained_r50='data/resnet50.ckpt', + mindrecord_dir='path-for-saving-logs-and-files', + pretrained_r50='path-to-pretrained-model', do_eval=False, run_distribute=False, enable_modelarts=False, - checkpoint_path='/home/whua/knet_eval_ckpt.ckpt' + checkpoint_path='path-to-checkpoint-model' ) config = Config(config_dict) diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py index de62e6647..531391405 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py @@ -1,8 +1,3 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# @Time : 2022/11/24 21:57 -# @Author : WeiHua - num_stages = 3 num_proposals = 4 conv_kernel_size = 1 diff --git a/contrib/Overlap-Recovery/train/train.py b/contrib/Overlap-Recovery/train/train.py index 05106ea3a..f243908e1 100644 --- a/contrib/Overlap-Recovery/train/train.py +++ b/contrib/Overlap-Recovery/train/train.py @@ -84,6 +84,7 @@ def train_model(): train_set = train_set.batch(config.data['samples_per_gpu'], drop_remainder=True) # Prepare model + config.train = True net = CustomKNet(config.model) net = net.set_train() net.load_r50(config.pretrained_r50) -- Gitee From 27f32bd8922798162f4fe50cd4ea36bed5fca727 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 17:18:32 +0800 Subject: [PATCH 30/51] update readme --- contrib/Overlap-Recovery/README.md | 
16 +++++++++++-----
 contrib/Overlap-Recovery/inference/.gitignore |  7 ++++++-
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
index 191770a85..76ab7db88 100644
--- a/contrib/Overlap-Recovery/README.md
+++ b/contrib/Overlap-Recovery/README.md
@@ -210,13 +210,19 @@ sh train/scripts/convert_resnet.sh PATH-TO-PYTORCH-WEIGHT PATH-TO-MINDSPORE-WEIG
 参数设置输出路径。
 
 
-**步骤3** 准备环境并训练模型
-
-    参考环境要求配置好运行环境后，通过```python train/train.py```命令进行模型训练。
+**步骤3** 按照环境依赖要求配置好训练所需运行环境后，执行如下命令启动模型训练。
+
+  ```
+  python train/train.py
+  ```
 
-**步骤4** 使用mindspore模型直接推理
+**步骤4** 使用训练好的mindspore模型直接推理
 
-    修改```train/src/model_utils/configs/config_base.py```中```checkpoint_path```参数为checkpoint的保存路径，并运行```python train/eval.py```即可
+    修改```train/src/model_utils/configs/config_base.py```中```checkpoint_path```参数为checkpoint的保存路径，执行如下命令推理。
+
+  ```
+  python train/eval.py
+  ```
 
 
 
diff --git a/contrib/Overlap-Recovery/inference/.gitignore b/contrib/Overlap-Recovery/inference/.gitignore
index 6f57c0b0b..5bad0a4ad 100644
--- a/contrib/Overlap-Recovery/inference/.gitignore
+++ b/contrib/Overlap-Recovery/inference/.gitignore
@@ -141,4 +141,9 @@ cython_debug/
 .idea
 .DS_Store
 
-ominfer_testcase.py
\ No newline at end of file
+ominfer_testcase.py
+eval_test_ckpt.py
+eval_utils_in.py
+#./test/0.png
+#./test/1.png
+#test/input.jpg
\ No newline at end of file
-- 
Gitee
From 07add8e7a095376b8f18a63f82ba8b7b7a799696 Mon Sep 17 00:00:00 2001
From: wenwenyu
Date: Tue, 13 Dec 2022 17:22:48 +0800
Subject: [PATCH 31/51] update readme

---
 contrib/Overlap-Recovery/README.md | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md
index 76ab7db88..4ddaf1fe0 100644
--- a/contrib/Overlap-Recovery/README.md
+++ b/contrib/Overlap-Recovery/README.md
@@ -273,9 +273,10 @@ sh train/scripts/convert_resnet.sh PATH-TO-PYTORCH-WEIGHT PATH-TO-MINDSPORE-WEIG
 
 **步骤2** 按照模型转换获取om模型，放置在`Overlap-Recovery/inference/models/`路径下。若未自行转换模型，使用的是仓库提供的模型，则无需修改相关文件，否则修改`ominfer.py`中相关配置，将`model_path`对象的路径改成实际的om模型的路径；`img_prefix`和`img_name`对象的路径改成实际的测试图片的路径；`save_path`对象设置成需要保存可视化图像的路径。
 
-**步骤3** 在命令行输入 如下命令运行整个工程:
+**步骤3** 在命令行输入如下命令运行单张图片模型推理:
 
-``` 
+``` 
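+# 补充注释（非原补丁内容）：以下命令假设在 Overlap-Recovery 目录下执行；
+# 若 om 模型或测试图片不在默认路径，请先按上文步骤2修改 ominfer.py 中的路径配置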
进入推理服务器执行命令(修改`onnx_model_path`和`output_model_path`参数为自己的路径): - ``` + ``` + cd inference/models atc --model=[onnx_model_path] --framework=5 --output=[output_model_path] --soc_version=Ascend310 --input_shape="img:1,3,768,768" ``` -- Gitee From 2091ca6d4bac0c0bf9d540f0c8314dd102a632dc Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 20:19:40 +0800 Subject: [PATCH 33/51] reformat inference code --- contrib/Overlap-Recovery/inference/eval.py | 67 ++++--- .../Overlap-Recovery/inference/eval_utils.py | 9 +- .../Overlap-Recovery/inference/load_ann.py | 26 +-- .../inference/load_img_data.py | 25 +-- contrib/Overlap-Recovery/inference/ominfer.py | 69 ++++--- .../inference/preprocess_utils.py | 176 ++++++++---------- 6 files changed, 164 insertions(+), 208 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/eval.py b/contrib/Overlap-Recovery/inference/eval.py index 43a2e54c7..fd316c58f 100644 --- a/contrib/Overlap-Recovery/inference/eval.py +++ b/contrib/Overlap-Recovery/inference/eval.py @@ -1,22 +1,23 @@ # -*- coding: utf-8 -*- import warnings -warnings.filterwarnings('ignore') from PIL import Image import numpy as np from mindx.sdk import base from mindx.sdk.base import Tensor, Model, Size, log, ImageProcessor, post, BTensor - from eval_utils import evaluate_metric from load_ann import load_annotations from load_img_data import load_img_data +warnings.filterwarnings('ignore') + +DEVICE_ID = 1 # 芯片ID + class OverlapDataset: - def __init__(self, annotation_file, img_prefix, seg_prefix): - self.data_list = load_annotations(annotation_file, img_prefix, seg_prefix) - # self.data_list = self.data_list[:4] # for debug - self.img_prefix = img_prefix + def __init__(self, annotation_file, img_prefix_path, seg_prefix): + self.data_list = load_annotations(annotation_file, img_prefix_path, seg_prefix) + self.img_prefix = img_prefix_path self.seg_prefix = seg_prefix self.sample_num = len(self.data_list) print(f"There are totally {self.sample_num} samples") @@ -31,18 +32,20 @@ class OverlapDataset: img_meta['seg_map_path'] = data_item['seg_map_path'] return img_tensor, img_meta + def prepare_model(model_path, device_id): base.mx_init() # 全局资源初始化 model = Model(model_path, device_id) # 创造模型对象 return model + def postprocess(scaled_mask_preds, cls_score): num_imgs = 1 segm_results = [] segm_scores = [] for img_id in range(num_imgs): cls_score_per_img = cls_score[img_id] # num_det, 1 - topk_indices =np.argsort(cls_score_per_img.flatten())[::-1][:4] + topk_indices = np.argsort(cls_score_per_img.flatten())[::-1][:4] scores_per_img = cls_score_per_img.flatten()[topk_indices] mask_indices = topk_indices masks_per_img = scaled_mask_preds[img_id][mask_indices] # b, num_det, h,w @@ -56,6 +59,7 @@ def postprocess(scaled_mask_preds, cls_score): segm_scores = np.stack(segm_scores) return segm_results, segm_scores + def segm2result(mask_preds, cls_scores): segm_result = [] seg_scores = [] @@ -69,47 +73,43 @@ def segm2result(mask_preds, cls_scores): return segm_result, seg_scores -def eval(ann_file, img_prefix, seg_mask_prefix, model_path, device_id): +def evaluate(ann_file, img_prefix, seg_mask_prefix, model_path): # dataset dataset = OverlapDataset(ann_file, img_prefix, seg_mask_prefix) sample_num = dataset.sample_num dataset = iter(dataset) # model - model = prepare_model(model_path, device_id) + model = prepare_model(model_path, DEVICE_ID) # inference results = [] img_metas_list = [] for idx in range(sample_num): - resizeImg, img_meta = next(dataset) - # print(img_meta) + resize_img, img_meta = 
next(dataset)
         print(f'sample {idx}')
 
         # prepare image
-        resizeImg = np.expand_dims(resizeImg, 0) # add batch dim, 1,3,h,w
-        resizeImg = np.ascontiguousarray(resizeImg)
-        imageTensor = Tensor(resizeImg)  # 推理前需要转换为tensor的List，使用Tensor类来构建。
-        imageTensor.to_device(device_id) # !!!!!重要，需要转移至device侧，该函数单独执行
-        imageTensorList = [imageTensor] # 推理前需要转换为tensor的List
+        resize_img = np.expand_dims(resize_img, 0)  # add batch dim, 1,3,h,w
+        resize_img = np.ascontiguousarray(resize_img)
+        image_tensor = Tensor(resize_img)  # 推理前需要转换为tensor的List，使用Tensor类来构建。
+        image_tensor.to_device(DEVICE_ID)  # !!!!!重要，需要转移至device侧，该函数单独执行
+        imageTensorList = [image_tensor]  # 推理前需要转换为tensor的List
 
         # forward
         outputs = model.infer(imageTensorList)
 
         # preds Tensor to numpy
         outputs_np = []
-        for i in range(len(outputs)):
-            outputs[i].to_host()
-            n = np.array(outputs[i])
-            outputs_np.append(n)
+        for item in outputs:
+            item.to_host()  # to_host为原地搬运（与原实现语义一致），不依赖其返回值
+            outputs_np.append(np.array(item))
 
-        # (1, 4, h, w), (1, 4, 1)
-        pred_masks, pred_scores = outputs_np[0], outputs_np[1]
-        # (1, 4, h, w), (1, 4)
-        pred_masks, pred_scores = postprocess(pred_masks, pred_scores)
+        pred_masks, pred_scores = outputs_np[0], outputs_np[1]  # (1, 4, h, w), (1, 4, 1)
+        pred_masks, pred_scores = postprocess(pred_masks, pred_scores)  # (1, 4, h, w), (1, 4)
 
         # remove padding area
-        # (1, 4, h, w), (1,4)
         resize_shape = img_meta['img_shape'][:2] # h,w
         pred_masks = pred_masks[:, :, :resize_shape[0], :resize_shape[1]]
 
         ori_size = img_meta['ori_shape'][:2] # h,w
         pred_masks = pred_masks[0] # removed batch dim
         rescaled_masks = []
-        for idx in range(pred_masks.shape[0]):
-            img = pred_masks[idx]
+        for tmp_idx in range(pred_masks.shape[0]):
+            img = pred_masks[tmp_idx]
             pil_image = Image.fromarray(img)
             pil_image = pil_image.resize((ori_size[1], ori_size[0]))
             resized_img = np.array(pil_image)
@@ -131,7 +131,7 @@
         img_metas_list.append(img_meta)
     # evaluate
     eval_res = evaluate_metric(results, img_metas_list, score_thresh=0.2, )
-    text_iou = np.around(eval_res["text_iou"], decimals=3)
+    text_iou = np.around(eval_res.get("text_iou", 0), decimals=3)
     print("==============================")
     print("精度测试结果如下:")
     print(f'text_iou: {text_iou * 100}%')
@@ -139,9 +139,8 @@
 
 if __name__ == '__main__':
-    ann_file = './dataset/annotation.json' #标签路径
-    img_prefix = './dataset' #图片根路径
-    seg_mask_prefix = './dataset' #mask根路径
-    device_id = 1  # 芯片ID
-    model_path = "models/best_iou.om"  # 模型的路径
-    eval(ann_file, img_prefix, seg_mask_prefix, model_path, device_id)
\ No newline at end of file
+    ANN_FILE_PATH = './dataset/annotation.json'  # 标签路径
+    IMG_PREFIX_PATH = './dataset'  # 图片根路径
+    SEG_MASK_PREFIX_PATH = './dataset'  # mask根路径
+    INFER_MODEL_PATH = "models/best_iou.om"  # 模型的路径
+    evaluate(ANN_FILE_PATH, IMG_PREFIX_PATH, SEG_MASK_PREFIX_PATH, INFER_MODEL_PATH)
\ No newline at end of file
diff --git a/contrib/Overlap-Recovery/inference/eval_utils.py b/contrib/Overlap-Recovery/inference/eval_utils.py
index 415ceade3..dee4985b5 100644
--- a/contrib/Overlap-Recovery/inference/eval_utils.py
+++ b/contrib/Overlap-Recovery/inference/eval_utils.py
@@ -4,7 +4,8 @@
 import numpy as np
 import cv2
 
 
-def cal_mask_IoU(mask_a, mask_b, check_valid=False):
+def cal_mask_iou(mask_a, mask_b, check_valid=False):
     if check_valid:
         assert len(np.unique(mask_a)) <= 2
         assert 
len(np.unique(mask_b)) <= 2 @@ -42,7 +43,6 @@ def cal_union_mask(mask_list): def eval_func(box_scores, masks, img_meta, score_thresh=0.2, iou_thresh=0.5): # prepare gt - # import pdb;pdb.set_trace() gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in img_meta['seg_map_path']] for mask_ in gt_masks: if len(mask_.shape) > 2: @@ -95,7 +95,7 @@ def eval_func(box_scores, masks, img_meta, score_thresh=0.2, iou_thresh=0.5): if match_matrix[:, gt_ins_idx].sum() > 0: continue # calculate IoU - if cal_mask_IoU(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > iou_thresh: + if cal_mask_iou(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > iou_thresh: match_matrix[ins_idx, gt_ins_idx] = True break # calculate instance-wise mIoU @@ -113,11 +113,12 @@ def eval_func(box_scores, masks, img_meta, score_thresh=0.2, iou_thresh=0.5): pred_mask = masks[0][valid_idx[ins_idx]].astype(np.bool) gt_idx = match_matrix[ins_idx].nonzero()[0][0] gt_mask = gt_masks[gt_idx].copy() - cur_iou = cal_mask_IoU(pred_mask, gt_mask) + cur_iou = cal_mask_iou(pred_mask, gt_mask) text_ins_miou += cur_iou return (intersection_text, union_text, intersection_overlap, union_overlap), \ text_ins_miou, max(match_matrix.shape) + def evaluate_metric(results, img_metas, score_thresh=0.2, diff --git a/contrib/Overlap-Recovery/inference/load_ann.py b/contrib/Overlap-Recovery/inference/load_ann.py index 1a64aac11..bfd178960 100644 --- a/contrib/Overlap-Recovery/inference/load_ann.py +++ b/contrib/Overlap-Recovery/inference/load_ann.py @@ -4,13 +4,14 @@ import json import os.path as osp import imagesize -def load_annotations(ann_file, img_prefix, seg_prefix): + +def load_annotations(ann_file_path, img_prefix_path, seg_prefix_path): """Load annotation from Overlap""" - data_list = [] - img_dir = img_prefix - seg_dir = seg_prefix - if osp.isfile(ann_file): - with open(ann_file, 'r', encoding='utf-8') as f: + data_result_list = [] + img_dir = img_prefix_path + seg_dir = seg_prefix_path + if osp.isfile(ann_file_path): + with open(ann_file_path, 'r', encoding='utf-8') as f: info_list = json.load(f) for info_ in info_list: assert len(info_) == 3, f"Invalid line: {info_}" @@ -18,7 +19,7 @@ def load_annotations(ann_file, img_prefix, seg_prefix): data_info = dict(img_path=osp.join(img_dir, img_name)) data_info['data_type'] = info_['data_type'] data_info['filename'] = img_name - width, height = imagesize.get(data_info['img_path']) + width, height = imagesize.get(data_info.get('img_path', '')) data_info['width'] = width data_info['height'] = height seg_map_path = [] @@ -34,14 +35,7 @@ def load_annotations(ann_file, img_prefix, seg_prefix): data_info['bboxes'] = bboxes data_info['seg_map_path'] = seg_map_path data_info['text_labels'] = text_labels - data_list.append(data_info) + data_result_list.append(data_info) else: raise NotImplementedError - return data_list - -if __name__ == '__main__': - ann_file = './dataset/annotation.json' - img_prefix= './dataset' - seg_prefix = './dataset' - data_list = load_annotations(ann_file, img_prefix, seg_prefix) - print(len(data_list)) \ No newline at end of file + return data_result_list diff --git a/contrib/Overlap-Recovery/inference/load_img_data.py b/contrib/Overlap-Recovery/inference/load_img_data.py index 2eb5f76a8..6535bd131 100644 --- a/contrib/Overlap-Recovery/inference/load_img_data.py +++ b/contrib/Overlap-Recovery/inference/load_img_data.py @@ -19,30 +19,19 @@ test_pipeline = [ dict(type='Normalize', **img_norm_cfg), dict(type='Pad', size=img_scale), dict(type='HWCToCHW', keys=['img']), - 
# dict(type='ImageToTensor', keys=['img']), # HWCToCHW + ToTensor dict(type='Collect', keys=['img']), ]) ] preprocessor = build_processor(test_pipeline) -def load_img_data(img_name, img_prefix=None): - img_info = {'filename':img_name} - img_data = {'img_prefix':img_prefix, 'img_info': img_info} +def load_img_data(img_name_path, img_prefix_path=None): + + img_info = {'filename':img_name_path} + img_data = {'img_prefix':img_prefix_path, 'img_info': img_info} resized_img_data = preprocessor(img_data) - resizeImg = resized_img_data['img'] - img_metas = resized_img_data['img_metas'] - return resizeImg[0], img_metas[0] - # return resizeImg, img_metas - -if __name__ == '__main__': - - img_prefix = './dataset/img' - img_name = '2.jpg' - resizeImg, img_metas = load_img_data(img_name, img_prefix) - print(img_metas) - print(f"ori_shape: {img_metas['ori_shape']} " - f"resize_shape: {img_metas['img_shape']} " - f"padded_shape: {img_metas['pad_shape']}") + resize_img = resized_img_data.get('img', '') + img_metas = resized_img_data.get('img_metas', '') + return resize_img[0], img_metas[0] diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py index 4439d79c6..7a594615c 100644 --- a/contrib/Overlap-Recovery/inference/ominfer.py +++ b/contrib/Overlap-Recovery/inference/ominfer.py @@ -3,40 +3,42 @@ # @Email: yuwenwen62@gmail.com # @Created Time: 11/29/22 11:20 AM -import warnings -warnings.filterwarnings('ignore') - import os +import shutil +import warnings import numpy as np +import cv2 from mindx.sdk import base from mindx.sdk.base import Tensor, Model, Size, log, ImageProcessor, post, BTensor from load_img_data import load_img_data from PIL import Image -import shutil -import cv2 +warnings.filterwarnings('ignore') -def om_infer_one(img_name, model_path, device_id, img_prefix=None, vis_dir=None, score_thr=0.4): +DEVICE_ID = 1 # 芯片ID +MODEL_PATH = "models/best_iou.om" # 模型的路径 - if not os.path.exists(model_path): +def om_infer_one(img_name_path, img_prefix=None, vis_dir=None, score_thr=0.4): + + if not os.path.exists(MODEL_PATH): print("The input model path is empty!!!") print("plz place the model in ./Overlap-Recovery/inference/models/") exit() base.mx_init() # 全局资源初始化 - model = Model(model_path, device_id) # 创造模型对象 + model = Model(MODEL_PATH, DEVICE_ID) # 创造模型对象 - if not os.path.exists(os.path.join(img_prefix, img_name)): + if not os.path.exists(os.path.join(img_prefix, img_name_path)): print("The input image path is empty!!!") print("plz place the image in ./Overlap-Recovery/inference/") exit() - if cv2.imread(os.path.join(img_prefix, img_name)) is None: + if cv2.imread(os.path.join(img_prefix, img_name_path)) is None: print("=============!Error!================") print("The input image is empty, plz check out!") print("====================================") exit() - resizeImg, img_meta = load_img_data(img_name, img_prefix) # hwc-chw + resize_img, img_meta = load_img_data(img_name_path, img_prefix) # hwc-chw ori_filename = img_meta['ori_filename'] abs_filename = img_meta['filename'] print(f"ori_filename: {img_meta['ori_filename']}") @@ -45,45 +47,40 @@ def om_infer_one(img_name, model_path, device_id, img_prefix=None, vis_dir=None, print(f"ori_shape: {img_meta['ori_shape']} " f"resize_shape: {img_meta['img_shape']} " f"padded_shape: {img_meta['pad_shape']}") - resizeImg = np.expand_dims(resizeImg, 0) # add batch dim, 1,3,h,w - resizeImg = np.ascontiguousarray(resizeImg) - imageTensor = Tensor(resizeImg) # 推理前需要转换为tensor的List,使用Tensor类来构建。 - 
imageTensor.to_device(device_id) # !!!!!重要，需要转移至device侧，该函数单独执行
-    imageTensorList = [imageTensor] # 推理前需要转换为tensor的List
+    resize_img = np.expand_dims(resize_img, 0)  # add batch dim, 1,3,h,w
+    resize_img = np.ascontiguousarray(resize_img)
+    image_tensor = Tensor(resize_img)  # 推理前需要转换为tensor的List，使用Tensor类来构建。
+    image_tensor.to_device(DEVICE_ID)  # !!!!!重要，需要转移至device侧，该函数单独执行
+    imageTensorList = [image_tensor]  # 推理前需要转换为tensor的List
 
     outputs = model.infer(imageTensorList)
     inputs = []
-    for i in range(len(outputs)):
-        outputs[i].to_host()
-        n = np.array(outputs[i])
-        inputs.append(n)
+    for item in outputs:
+        item.to_host()  # to_host为原地搬运（与原实现语义一致），不依赖其返回值
+        inputs.append(np.array(item))
 
-    # (1, 4, h, w), (1,4) / (1, 4, 1)
-    pred_masks, pred_scores = inputs[0], inputs[1]
+    pred_masks, pred_scores = inputs[0], inputs[1]  # (1, 4, h, w), (1,4) / (1, 4, 1)
     pred_masks, pred_scores = postprocess(pred_masks, pred_scores)
     print(f"pred_masks_shape: {pred_masks.shape} pred_score_shape: {pred_scores.shape}")
     print(f"original pred unique value: {np.unique(pred_masks)}")
 
     # remove padding area
-    # (1, 4, 1472, 1472), (1,4)
     resize_shape = img_meta['img_shape'][:2] # h, w
     pred_masks = pred_masks[:, :, :resize_shape[0], :resize_shape[1]]
 
     ori_size = img_meta['ori_shape'][:2] # h, w
 
     # remove batch dim
-    # (4, h, w), (4)
-    pred_masks, pred_scores = pred_masks[0], pred_scores[0]
+    pred_masks, pred_scores = pred_masks[0], pred_scores[0]  # (4, h, w), (4)
 
     img_id = os.path.basename(ori_filename).split('.')[0]
     if vis_dir is not None:
         save_dir = os.path.join(vis_dir, img_id)
         if not os.path.exists(save_dir):
-            # os.mkdir(save_dir)
             os.makedirs(save_dir)
         shutil.copyfile(abs_filename,
                         os.path.join(save_dir, f"input.{os.path.basename(ori_filename).split('.')[1]}"))
 
     for instance_idx in range(pred_masks.shape[0]):
-        # (h,w)
         text_instance = pred_masks[instance_idx]
         pred_score = pred_scores[instance_idx]
@@ -92,7 +89,8 @@
         area = np.sum(text_instance)
-        print(f"pred_text_instance: {instance_idx+1} pred_score: {pred_score} unique value: {np.unique(text_instance)} area: {area}")
+        print(f"pred_text_instance: {instance_idx+1} pred_score: {pred_score} "
+              f"unique value: {np.unique(text_instance)} area: {area}")
 
         pred_mask = Image.fromarray(text_instance * 255)
         pred_mask = pred_mask.resize((ori_size[1], ori_size[0]))# w,h
@@ -109,7 +107,7 @@ def postprocess(scaled_mask_preds, cls_score):
     segm_scores = []
     for img_id in range(num_imgs):
         cls_score_per_img = cls_score[img_id] # num_det, 1
-        topk_indices =np.argsort(cls_score_per_img.flatten())[::-1][:4]
+        topk_indices = np.argsort(cls_score_per_img.flatten())[::-1][:4]
         scores_per_img = cls_score_per_img.flatten()[topk_indices]
         mask_indices = topk_indices
         masks_per_img = scaled_mask_preds[img_id][mask_indices] # b, num_det, h,w
@@ -123,9 +121,10 @@ def postprocess(scaled_mask_preds, cls_score):
     segm_scores = np.stack(segm_scores)
     return segm_results, segm_scores
 
+
 def segm2result(mask_preds, cls_scores):
     segm_result = []
-    seg_scores = []
+    seg_scores = []
     num_ins = mask_preds.shape[0]  # num_dets, h, w
     for idx in range(num_ins):
         segm_result.append(mask_preds[idx])
@@ -137,10 +136,8 @@
 
 if __name__ == '__main__':
-    device_id = 1  # 芯片ID
-    model_path = "models/best_iou.om"  # 模型的路径
-    img_prefix = './'
-    img_name = 'test.jpg'
-    save_path = './'
-    om_infer_one(img_name, model_path, device_id, img_prefix, vis_dir=save_path)
+    INFER_IMG_PREFIX = 
'./' + IMG_NAME = 'test.jpg' + SAVE_PATH = './' + om_infer_one(IMG_NAME, INFER_IMG_PREFIX, vis_dir=SAVE_PATH) diff --git a/contrib/Overlap-Recovery/inference/preprocess_utils.py b/contrib/Overlap-Recovery/inference/preprocess_utils.py index c950ec156..2847e9f97 100644 --- a/contrib/Overlap-Recovery/inference/preprocess_utils.py +++ b/contrib/Overlap-Recovery/inference/preprocess_utils.py @@ -11,10 +11,9 @@ import numpy as np import mmcv from mmcv.utils import Registry, build_from_cfg -# from mindx.sdk.base import Tensor - PIPELINES = Registry('pipeline') + @PIPELINES.register_module() class LoadImageFromFile: """Load an image from file. @@ -35,11 +34,7 @@ class LoadImageFromFile: Defaults to ``dict(backend='disk')``. """ - def __init__(self, - to_float32=False, - color_type='color', - channel_order='bgr', - file_client_args=dict(backend='disk')): + def __init__(self, to_float32=False, color_type='color', channel_order='bgr', file_client_args=dict(backend='disk')): self.to_float32 = to_float32 self.color_type = color_type self.channel_order = channel_order @@ -87,6 +82,7 @@ class LoadImageFromFile: f'file_client_args={self.file_client_args})') return repr_str + @PIPELINES.register_module() class Compose: """Compose multiple transforms sequentially. @@ -135,6 +131,7 @@ class Compose: format_string += '\n)' return format_string + @PIPELINES.register_module() class MultiScaleFlipAug: """Test-time augmentation with multiple scales and flipping. @@ -179,28 +176,20 @@ class MultiScaleFlipAug: "horizontal". """ - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): + def __init__(self, transforms, img_scale=None, scale_factor=None, flip=False, flip_direction='horizontal'): self.transforms = Compose(transforms) assert (img_scale is None) ^ (scale_factor is None), ( 'Must have but only one variable can be set') if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] + self.img_scale = img_scale if isinstance(img_scale, list) else [img_scale] self.scale_key = 'scale' assert mmcv.is_list_of(self.img_scale, tuple) else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] + self.img_scale = scale_factor if isinstance(scale_factor, list) else [scale_factor] self.scale_key = 'scale_factor' self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] + self.flip_direction = flip_direction if isinstance(flip_direction, list) else [flip_direction] assert mmcv.is_list_of(self.flip_direction, str) if not self.flip and self.flip_direction != ['horizontal']: warnings.warn( @@ -248,6 +237,7 @@ class MultiScaleFlipAug: repr_str += f'flip_direction={self.flip_direction})' return repr_str + @PIPELINES.register_module() class Resize: """Resize images & bbox & mask. @@ -294,15 +284,8 @@ class Resize: Defaults to False. 
""" - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True, - bbox_clip_border=True, - backend='cv2', - interpolation='bilinear', - override=False): + def __init__(self, img_scale=None, multiscale_mode='range', ratio_range=None, keep_ratio=True, bbox_clip_border=True, + backend='cv2', interpolation='bilinear', override=False): if img_scale is None: self.img_scale = None else: @@ -323,7 +306,6 @@ class Resize: self.multiscale_mode = multiscale_mode self.ratio_range = ratio_range self.keep_ratio = keep_ratio - # TODO: refactor the override option in Resize self.interpolation = interpolation self.override = override self.bbox_clip_border = bbox_clip_border @@ -486,23 +468,6 @@ class Resize: else: results[key] = results[key].resize(results['img_shape'][:2]) - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - else: - gt_seg = mmcv.imresize( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - results[key] = gt_seg - def __call__(self, results): """Call function to resize images, bounding boxes, masks, semantic segmentation map. @@ -540,6 +505,23 @@ class Resize: self._resize_seg(results) return results + def _resize_seg(self, results): + """Resize semantic segmentation map with ``results['scale']``.""" + for key in results.get('seg_fields', []): + if self.keep_ratio: + gt_seg = mmcv.imrescale( + results[key], + results['scale'], + interpolation='nearest', + backend=self.backend) + else: + gt_seg = mmcv.imresize( + results[key], + results['scale'], + interpolation='nearest', + backend=self.backend) + results[key] = gt_seg + def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(img_scale={self.img_scale}, ' @@ -549,6 +531,7 @@ class Resize: repr_str += f'bbox_clip_border={self.bbox_clip_border})' return repr_str + @PIPELINES.register_module() class RandomFlip: """Flip the image & bbox & mask. @@ -613,40 +596,6 @@ class RandomFlip: if isinstance(flip_ratio, list): assert len(self.flip_ratio) == len(self.direction) - def bbox_flip(self, bboxes, img_shape, direction): - """Flip bboxes horizontally. - - Args: - bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) - img_shape (tuple[int]): Image shape (height, width) - direction (str): Flip direction. Options are 'horizontal', - 'vertical'. - - Returns: - numpy.ndarray: Flipped bounding boxes. - """ - - assert bboxes.shape[-1] % 4 == 0 - flipped = bboxes.copy() - if direction == 'horizontal': - w = img_shape[1] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - elif direction == 'vertical': - h = img_shape[0] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - elif direction == 'diagonal': - w = img_shape[1] - h = img_shape[0] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - else: - raise ValueError(f"Invalid flipping direction '{direction}'") - return flipped - def __call__(self, results): """Call function to flip bounding boxes, masks, semantic segmentation maps. 
@@ -702,6 +651,40 @@ class RandomFlip: results[key], direction=results['flip_direction']) return results + def bbox_flip(self, bboxes, img_shape, direction): + """Flip bboxes horizontally. + + Args: + bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) + img_shape (tuple[int]): Image shape (height, width) + direction (str): Flip direction. Options are 'horizontal', + 'vertical'. + + Returns: + numpy.ndarray: Flipped bounding boxes. + """ + + assert bboxes.shape[-1] % 4 == 0 + flipped = bboxes.copy() + if direction == 'horizontal': + w = img_shape[1] + flipped[..., 0::4] = w - bboxes[..., 2::4] + flipped[..., 2::4] = w - bboxes[..., 0::4] + elif direction == 'vertical': + h = img_shape[0] + flipped[..., 1::4] = h - bboxes[..., 3::4] + flipped[..., 3::4] = h - bboxes[..., 1::4] + elif direction == 'diagonal': + w = img_shape[1] + h = img_shape[0] + flipped[..., 0::4] = w - bboxes[..., 2::4] + flipped[..., 1::4] = h - bboxes[..., 3::4] + flipped[..., 2::4] = w - bboxes[..., 0::4] + flipped[..., 3::4] = h - bboxes[..., 1::4] + else: + raise ValueError(f"Invalid flipping direction '{direction}'") + return flipped + def __repr__(self): return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' @@ -723,11 +706,7 @@ class Pad: value is `dict(img=0, masks=0, seg=255)`. """ - def __init__(self, - size=None, - size_divisor=None, - pad_to_square=False, - pad_val=dict(img=0, masks=0, seg=255)): + def __init__(self, size=None, size_divisor=None, pad_to_square=False, pad_val=dict(img=0, masks=0, seg=255)): self.size = size self.size_divisor = size_divisor if isinstance(pad_val, float) or isinstance(pad_val, int): @@ -774,14 +753,6 @@ class Pad: for key in results.get('mask_fields', []): results[key] = results[key].pad(pad_shape, pad_val=pad_val) - def _pad_seg(self, results): - """Pad semantic segmentation map according to - ``results['pad_shape']``.""" - pad_val = self.pad_val.get('seg', 255) - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], shape=results['pad_shape'][:2], pad_val=pad_val) - def __call__(self, results): """Call function to pad images, masks, semantic segmentation maps. @@ -796,6 +767,14 @@ class Pad: self._pad_seg(results) return results + def _pad_seg(self, results): + """Pad semantic segmentation map according to + ``results['pad_shape']``.""" + pad_val = self.pad_val.get('seg', 255) + for key in results.get('seg_fields', []): + results[key] = mmcv.impad( + results[key], shape=results['pad_shape'][:2], pad_val=pad_val) + def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(size={self.size}, ' @@ -845,6 +824,7 @@ class Normalize: repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})' return repr_str + @PIPELINES.register_module() class ImageToTensor: """Convert image to :obj:`Tensor` by given keys. @@ -877,7 +857,6 @@ class ImageToTensor: img = np.expand_dims(img, -1) img = img.transpose(2, 0, 1) # HWC-> CHW img = np.ascontiguousarray(img) - # img = (to_tensor(img)).contiguous() img = to_tensor(img) results[key] = img return results @@ -918,8 +897,6 @@ class HWCToCHW: img = np.expand_dims(img, -1) img = img.transpose(2, 0, 1) # HWC-> CHW img = np.ascontiguousarray(img) - # img = (to_tensor(img)).contiguous() - # img = to_tensor(img) results[key] = img return results @@ -937,10 +914,10 @@ def to_tensor(data): data (Tensor | numpy.ndarray | Sequence | int | float): Data to be converted. 
""" - # mindspore Tensor - # return Tensor(data) + # return Tensor(data) mindspore Tensor raise NotImplementedError + @PIPELINES.register_module() class Collect: """Collect data from the loader relevant to the specific task. @@ -1007,7 +984,6 @@ class Collect: img_meta = {} for key in self.meta_keys: img_meta[key] = results[key] - # data['img_metas'] = DC(img_meta, cpu_only=True) data['img_metas'] = img_meta for key in self.keys: data[key] = results[key] @@ -1017,6 +993,6 @@ class Collect: return self.__class__.__name__ + \ f'(keys={self.keys}, meta_keys={self.meta_keys})' + def build_processor(test_pipelines): return Compose(test_pipelines) - # return build_from_cfg(test_pipelines, PIPELINES) -- Gitee From 66ba155ebe4f186ad29948716d077e03fdb787c1 Mon Sep 17 00:00:00 2001 From: HamPerdredes Date: Tue, 13 Dec 2022 21:52:30 +0800 Subject: [PATCH 34/51] clean code --- contrib/Overlap-Recovery/train/__init__.py | 2 - contrib/Overlap-Recovery/train/eval.py | 7 +- contrib/Overlap-Recovery/train/export.py | 1 + .../Overlap-Recovery/train/src/__init__.py | 2 - .../train/src/dataset/__init__.py | 2 - .../train/src/dataset/base_dataset.py | 32 ++------- .../train/src/dataset/build_dataset.py | 10 +-- .../train/src/dataset/data_process.py | 20 ++---- .../train/src/dataset/real_dataset.py | 52 +++++++-------- .../train/src/dataset/synth_dataset.py | 52 +++++++-------- .../train/src/dataset/utils.py | 11 ++-- .../train/src/deoccluder/__init__.py | 2 - .../src/deoccluder/custom_cells/__init__.py | 2 - .../custom_cells/custom_assigner.py | 38 +++++------ .../deoccluder/custom_cells/custom_blocks.py | 27 ++------ .../deoccluder/custom_cells/custom_losses.py | 5 +- .../custom_cells/custom_match_cost.py | 43 +++++------- .../custom_cells/custom_operations.py | 4 +- .../custom_cells/custom_samplers.py | 23 +++---- .../train/src/deoccluder/deoccluder_r50.py | 33 +++------- .../train/src/deoccluder/fpn_neck.py | 2 + .../train/src/deoccluder/resnet.py | 28 ++++---- .../train/src/deoccluder/roi/__init__.py | 2 - .../deoccluder/roi/custom_kernel_iter_head.py | 31 +++------ .../roi/custom_kernel_update_head.py | 48 +++++++------- .../src/deoccluder/roi/kernel_update_head.py | 62 +++++++---------- .../src/deoccluder/roi/kernel_updator.py | 17 ++--- .../train/src/deoccluder/rpn/__init__.py | 2 - .../train/src/deoccluder/rpn/kernel_head.py | 56 ++++------------ .../src/deoccluder/rpn/positional_encoding.py | 54 +++++++-------- .../deoccluder/rpn/semantic_fpn_wrapper.py | 36 ++++------ .../train/src/deoccluder/utils.py | 6 +- .../train/src/model_utils/configs/__init__.py | 2 - .../src/model_utils/configs/config_base.py | 66 +++++++++---------- .../src/model_utils/configs/config_model.py | 39 +++++------ .../train/src/model_utils/local_adapter.py | 1 + .../train/src/model_utils/moxing_adapter.py | 10 +-- .../train/src/utils/pth2ckpt.py | 13 +--- contrib/Overlap-Recovery/train/train.py | 1 + 39 files changed, 322 insertions(+), 522 deletions(-) diff --git a/contrib/Overlap-Recovery/train/__init__.py b/contrib/Overlap-Recovery/train/__init__.py index 7c8d0d8c3..4f96c1580 100644 --- a/contrib/Overlap-Recovery/train/__init__.py +++ b/contrib/Overlap-Recovery/train/__init__.py @@ -1,4 +1,2 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/24 23:14 -# @Author : WeiHua diff --git a/contrib/Overlap-Recovery/train/eval.py b/contrib/Overlap-Recovery/train/eval.py index 1e1d63d5e..a416b30ed 100644 --- a/contrib/Overlap-Recovery/train/eval.py +++ b/contrib/Overlap-Recovery/train/eval.py @@ -33,10 +33,11 @@ from 
mindspore import dataset as de set_seed(1) +config.train = False + def eval_func(eval_set, ckpt_path, config, src_eval_set): """MaskRcnn evaluation.""" - config.train = False net = CustomKNet(config.model) param_dict = load_checkpoint(ckpt_path) load_param_into_net(net, param_dict, strict_load=False) @@ -83,12 +84,12 @@ def eval_(): column_names = list(collect_pipe['keys']) + list(collect_pipe['meta_keys']) eval_set = de.GeneratorDataset(eval_set_cls, column_names=column_names, - num_parallel_workers=config.data['workers_per_gpu'], + num_parallel_workers=1, shuffle=False) eval_set = eval_set.batch(1, drop_remainder=False) logger.info("Start Eval!") - logger.info("ckpt_path=", config.checkpoint_path) + logger.info(f"ckpt_path = {config.checkpoint_path}") eval_func(eval_set, config.checkpoint_path, config, eval_set_cls) diff --git a/contrib/Overlap-Recovery/train/export.py b/contrib/Overlap-Recovery/train/export.py index d7adbfec4..661874a35 100644 --- a/contrib/Overlap-Recovery/train/export.py +++ b/contrib/Overlap-Recovery/train/export.py @@ -15,6 +15,7 @@ from src.model_utils.device_adapter import get_device_id context.set_context(mode=context.PYNATIVE_MODE, device_target="CPU", device_id= get_device_id()) + def best_model_export(): ckpt_file_path = './models/best_iou.ckpt' file_name = 'best_iou.onnx' diff --git a/contrib/Overlap-Recovery/train/src/__init__.py b/contrib/Overlap-Recovery/train/src/__init__.py index 7c8d0d8c3..4f96c1580 100644 --- a/contrib/Overlap-Recovery/train/src/__init__.py +++ b/contrib/Overlap-Recovery/train/src/__init__.py @@ -1,4 +1,2 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/24 23:14 -# @Author : WeiHua diff --git a/contrib/Overlap-Recovery/train/src/dataset/__init__.py b/contrib/Overlap-Recovery/train/src/dataset/__init__.py index d0ea8125b..ca7286c87 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/__init__.py +++ b/contrib/Overlap-Recovery/train/src/dataset/__init__.py @@ -1,6 +1,4 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/30 14:31 -# @Author : WeiHua from .build_dataset import build_dataset diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py index 14858cfeb..7dd9288ca 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/30 14:32 -# @Author : WeiHua import os.path as osp import warnings @@ -52,16 +50,8 @@ class CustomDataset: PALETTE = None - def __init__(self, - ann_file, - pipeline, - classes=None, - data_root=None, - img_prefix='', - seg_prefix=None, - seg_suffix='.png', - test_mode=False, - filter_empty_gt=True): + def __init__(self, ann_file, pipeline, classes=None, data_root=None, img_prefix='', + seg_prefix=None, seg_suffix='.png', test_mode=False, filter_empty_gt=True): self.ann_file = ann_file self.data_root = data_root self.img_prefix = img_prefix @@ -69,7 +59,7 @@ class CustomDataset: self.seg_suffix = seg_suffix self.test_mode = test_mode self.filter_empty_gt = filter_empty_gt - self.CLASSES = self.get_classes(classes) + self.CLASSES = self.GetClasses(classes) # join paths if data_root is specified if self.data_root is not None: @@ -94,18 +84,19 @@ class CustomDataset: self._set_group_flag() # processing pipeline - # self.pipeline = Compose(pipeline) self.pipeline = self.build_pipeline(pipeline) def __len__(self): """Total number of samples of 
data.""" return len(self.data_infos) - def build_pipeline(self, pipeline): + @staticmethod + def build_pipeline(pipeline): return PipelineFunc(pipeline) def load_annotations(self, ann_file): """Load annotation from annotation file.""" + print(self.ann_file, ann_file) raise NotImplementedError def get_ann_info(self, idx): @@ -227,7 +218,7 @@ class CustomDataset: return self.pipeline(results) @classmethod - def get_classes(cls, classes=None): + def GetClasses(cls, classes=None): """Get class names of current dataset. Args: @@ -243,15 +234,6 @@ class CustomDataset: if classes is None: return cls.CLASSES raise NotImplementedError - # if isinstance(classes, str): - # # take it as a file path - # class_names = mmcv.list_from_file(classes) - # elif isinstance(classes, (tuple, list)): - # class_names = classes - # else: - # raise ValueError(f'Unsupported type {type(classes)} of classes.') - # - # return class_names def get_cat2imgs(self): """Get a dict with class as key and img_ids as values, which will be diff --git a/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py index 9958c9a5d..80122a961 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py @@ -1,11 +1,8 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/30 22:27 -# @Author : WeiHua from .real_dataset import RealOverlapDataset from .synth_dataset import SynthOverlapDataset -import mindspore.dataset as de CUSTOM_DATASETS = { @@ -16,4 +13,9 @@ CUSTOM_DATASETS = { def build_dataset(cfg): data_type = cfg.pop('type') - return CUSTOM_DATASETS[data_type](**cfg) + if data_type not in CUSTOM_DATASETS: + raise KeyError(f"Not support dataset type: {data_type}") + try: + return CUSTOM_DATASETS[data_type](**cfg) + except Exception as e: + raise RuntimeError(e) diff --git a/contrib/Overlap-Recovery/train/src/dataset/data_process.py b/contrib/Overlap-Recovery/train/src/dataset/data_process.py index 3d1737d71..3a4fec369 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/data_process.py +++ b/contrib/Overlap-Recovery/train/src/dataset/data_process.py @@ -1,16 +1,14 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/30 22:30 -# @Author : WeiHua from os import path as osp +import warnings +from collections.abc import Sequence import cv2 import numpy as np -import warnings import mmcv import mindspore as ms from .utils import BitmapMasks -from collections.abc import Sequence class DataContainer(object): @@ -34,10 +32,8 @@ class DataContainer(object): data, stack=False, padding_value=0, - cpu_only=False, pad_dims=2): self._data = data - self._cpu_only = cpu_only self._stack = stack self._padding_value = padding_value assert pad_dims in [None, 1, 2, 3] @@ -57,10 +53,6 @@ class DataContainer(object): else: return type(self.data) - @property - def cpu_only(self): - return self._cpu_only - @property def stack(self): return self._stack @@ -129,7 +121,8 @@ class CustomLoadAnnotations: self.with_label = with_label self.with_mask = with_mask - def _load_bboxes(self, results): + @staticmethod + def _load_bboxes(results): ann_info = results['ann_info'] results['gt_bboxes'] = ann_info['bboxes'].copy() @@ -565,8 +558,7 @@ class DefaultFormatBundle: if 'gt_masks' in results: results['gt_masks'] = DataContainer( results['gt_masks'], - padding_value=self.pad_val['masks'], - cpu_only=True) + padding_value=self.pad_val['masks']) return results def _add_default_meta_keys(self, results): 
@@ -606,7 +598,7 @@ class Collect: out_data = [] for key in self.meta_keys: img_meta[key] = results[key] - data['img_metas'] = DataContainer(img_meta, cpu_only=True) + data['img_metas'] = DataContainer(img_meta) for key in self.keys: data[key] = results[key] # return data diff --git a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py index 6d8e0c794..1b809c997 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/30 14:25 -# @Author : WeiHua import json import os @@ -13,7 +11,7 @@ import numpy as np import imagesize from .base_dataset import CustomDataset -from .utils import cal_mask_IoU, cal_overlap_mask, cal_union_mask +from .utils import CalMaskIou, CalOverlapMask, CalUnionMask class RealOverlapDataset(CustomDataset): @@ -40,7 +38,10 @@ class RealOverlapDataset(CustomDataset): data_info = dict(img_path=osp.join(img_dir, img_name)) data_info['data_type'] = info_['data_type'] data_info['filename'] = img_name - width, height = imagesize.get(data_info['img_path']) + try: + width, height = imagesize.get(data_info['img_path']) + except Exception as e: + raise RuntimeError(e) data_info['width'] = width data_info['height'] = height seg_map_path = [] @@ -61,20 +62,8 @@ class RealOverlapDataset(CustomDataset): raise NotImplementedError return data_list - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: - if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - def get_ann_info(self, idx): data_info = self.data_infos[idx] - # todo: Not support ignore flag for now. 
ann = dict( bboxes=np.array(data_info['bboxes'], dtype=np.float32), labels=np.zeros(len(data_info['bboxes']), dtype=np.int64), @@ -85,6 +74,17 @@ class RealOverlapDataset(CustomDataset): ) return ann + def _filter_imgs(self, min_size=32): + """Filter images too small or without ground truths.""" + valid_inds = [] + for i, img_info in enumerate(self.data_infos): + if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: + if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: + continue + if min(img_info['width'], img_info['height']) >= min_size: + valid_inds.append(i) + return valid_inds + def vis_result(self, img_idx, scores, masks, vis_dir='/home/whua/vis'): if not os.path.exists(vis_dir): os.mkdir(vis_dir) @@ -107,8 +107,8 @@ class RealOverlapDataset(CustomDataset): def eval_func(self, idx, box_scores, masks): # prepare gt ~ hard code gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in self.data_infos[idx]['seg_map_path']] - gt_text = cal_union_mask(gt_masks) - gt_overlap = cal_overlap_mask(gt_masks) + gt_text = CalUnionMask(gt_masks) + gt_overlap = CalOverlapMask(gt_masks) # prepare predict of overlap and text area box_info = box_scores[0] if len(box_info) < 2: @@ -126,10 +126,10 @@ class RealOverlapDataset(CustomDataset): pred_text = np.zeros_like(masks[0][0]) elif len(pred_masks) == 1: pred_overlap = np.zeros_like(masks[0][0]) - pred_text = cal_union_mask(pred_masks) + pred_text = CalUnionMask(pred_masks) else: - pred_overlap = cal_overlap_mask(pred_masks) - pred_text = cal_union_mask(pred_masks) + pred_overlap = CalOverlapMask(pred_masks) + pred_text = CalUnionMask(pred_masks) if len(gt_masks) > 1: # calculate metrics intersection_text = (pred_text & gt_text).sum() @@ -142,8 +142,6 @@ class RealOverlapDataset(CustomDataset): intersection_overlap = 0 union_overlap = 0 - # self.vis_result(idx, box_info[:, 4].tolist(), masks[0]) - # prepare predict of text instance # filter out invalid prediction valid_idx = [] @@ -156,7 +154,7 @@ class RealOverlapDataset(CustomDataset): if match_matrix[:, gt_ins_idx].sum() > 0: continue # calculate IoU - if cal_mask_IoU(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > self.iou_thresh: + if CalMaskIou(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > self.iou_thresh: match_matrix[ins_idx, gt_ins_idx] = True break # calculate instance-wise mIoU @@ -174,7 +172,7 @@ class RealOverlapDataset(CustomDataset): pred_mask = masks[0][valid_idx[ins_idx]].astype(np.bool) gt_idx = match_matrix[ins_idx].nonzero()[0][0] gt_mask = gt_masks[gt_idx].copy() - cur_iou = cal_mask_IoU(pred_mask, gt_mask) + cur_iou = CalMaskIou(pred_mask, gt_mask) text_ins_miou += cur_iou return (intersection_text, union_text, intersection_overlap, union_overlap), \ text_ins_miou, max(match_matrix.shape) @@ -287,9 +285,7 @@ class RealOverlapDataset(CustomDataset): total_ins_num += ins_num metric_results[flag] = dict( - text_iou=intersection_text / (union_text + 1e-6), - overlap_iou=intersection_overlap / (union_overlap + 1e-6), - text_ins_miou=np.sum(text_ins_miou_list) / total_ins_num + text_iou=intersection_text / (union_text + 1e-6) ) return metric_results diff --git a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py index 905edcbf8..be4c5b8b9 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/30 14:25 
-# @Author : WeiHua import json import os @@ -13,7 +11,7 @@ import numpy as np import imagesize from .base_dataset import CustomDataset -from .utils import cal_mask_IoU, cal_overlap_mask, cal_union_mask +from .utils import CalMaskIou, CalOverlapMask, CalUnionMask class SynthOverlapDataset(CustomDataset): @@ -40,7 +38,10 @@ class SynthOverlapDataset(CustomDataset): img_name = info_['img_name'] data_info = dict(img_path=osp.join(img_dir, img_name)) data_info['filename'] = img_name - width, height = imagesize.get(data_info['img_path']) + try: + width, height = imagesize.get(data_info['img_path']) + except Exception as e: + raise RuntimeError(e) data_info['width'] = width data_info['height'] = height seg_map_path = [] @@ -61,20 +62,8 @@ class SynthOverlapDataset(CustomDataset): raise NotImplementedError return data_list - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: - if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - def get_ann_info(self, idx): data_info = self.data_infos[idx] - # todo: Not support ignore flag for now. ann = dict( bboxes=np.array(data_info['bboxes'], dtype=np.float32), labels=np.zeros(len(data_info['bboxes']), dtype=np.int64), @@ -85,6 +74,17 @@ class SynthOverlapDataset(CustomDataset): ) return ann + def _filter_imgs(self, min_size=32): + """Filter images too small or without ground truths.""" + valid_inds = [] + for i, img_info in enumerate(self.data_infos): + if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: + if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: + continue + if min(img_info['width'], img_info['height']) >= min_size: + valid_inds.append(i) + return valid_inds + def vis_result(self, img_idx, scores, masks, vis_dir='/home/whua/vis'): if not os.path.exists(vis_dir): os.mkdir(vis_dir) @@ -107,8 +107,8 @@ class SynthOverlapDataset(CustomDataset): def eval_func(self, idx, box_scores, masks): # prepare gt ~ hard code gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in self.data_infos[idx]['seg_map_path']] - gt_text = cal_union_mask(gt_masks) - gt_overlap = cal_overlap_mask(gt_masks) + gt_text = CalUnionMask(gt_masks) + gt_overlap = CalOverlapMask(gt_masks) # prepare predict of overlap and text area box_info = box_scores[0] if len(box_info) < 2: @@ -126,10 +126,10 @@ class SynthOverlapDataset(CustomDataset): pred_text = np.zeros_like(masks[0][0]) elif len(pred_masks) == 1: pred_overlap = np.zeros_like(masks[0][0]) - pred_text = cal_union_mask(pred_masks) + pred_text = CalUnionMask(pred_masks) else: - pred_overlap = cal_overlap_mask(pred_masks) - pred_text = cal_union_mask(pred_masks) + pred_overlap = CalOverlapMask(pred_masks) + pred_text = CalUnionMask(pred_masks) if len(gt_masks) > 1: # calculate metrics intersection_text = (pred_text & gt_text).sum() @@ -142,8 +142,6 @@ class SynthOverlapDataset(CustomDataset): intersection_overlap = 0 union_overlap = 0 - # self.vis_result(idx, box_info[:, 4].tolist(), masks[0]) - # prepare predict of text instance # filter out invalid prediction valid_idx = [] @@ -156,7 +154,7 @@ class SynthOverlapDataset(CustomDataset): if match_matrix[:, gt_ins_idx].sum() > 0: continue # calculate IoU - if cal_mask_IoU(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > 
self.iou_thresh: + if CalMaskIou(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > self.iou_thresh: match_matrix[ins_idx, gt_ins_idx] = True break # calculate instance-wise mIoU @@ -174,7 +172,7 @@ class SynthOverlapDataset(CustomDataset): pred_mask = masks[0][valid_idx[ins_idx]].astype(np.bool) gt_idx = match_matrix[ins_idx].nonzero()[0][0] gt_mask = gt_masks[gt_idx].copy() - cur_iou = cal_mask_IoU(pred_mask, gt_mask) + cur_iou = CalMaskIou(pred_mask, gt_mask) text_ins_miou += cur_iou return (intersection_text, union_text, intersection_overlap, union_overlap), \ text_ins_miou, max(match_matrix.shape) @@ -287,9 +285,7 @@ class SynthOverlapDataset(CustomDataset): total_ins_num += ins_num metric_results[flag] = dict( - text_iou=intersection_text / (union_text + 1e-6), - overlap_iou=intersection_overlap / (union_overlap + 1e-6), - text_ins_miou=np.sum(text_ins_miou_list) / total_ins_num + text_iou=intersection_text / (union_text + 1e-6) ) return metric_results diff --git a/contrib/Overlap-Recovery/train/src/dataset/utils.py b/contrib/Overlap-Recovery/train/src/dataset/utils.py index 2acbdeefd..3b16f2824 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/utils.py +++ b/contrib/Overlap-Recovery/train/src/dataset/utils.py @@ -1,14 +1,13 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/10/25 23:53 -# @Author : WeiHua import numpy as np import mmcv import mindspore as ms -def cal_mask_IoU(mask_a, mask_b, check_valid=False): +# def cal_mask_IoU(mask_a, mask_b, check_valid=False): +def CalMaskIou(mask_a, mask_b, check_valid=False): if check_valid: assert len(np.unique(mask_a)) <= 2 assert len(np.unique(mask_b)) <= 2 @@ -21,7 +20,8 @@ def cal_mask_IoU(mask_a, mask_b, check_valid=False): return intersection_area / union_area -def cal_overlap_mask(mask_list): +# def cal_overlap_mask(mask_list): +def CalOverlapMask(mask_list): if len(mask_list) < 2: return None mask_list_bool = [x.astype(np.bool) for x in mask_list] @@ -33,7 +33,8 @@ def cal_overlap_mask(mask_list): return overlap_mask -def cal_union_mask(mask_list): +# def cal_union_mask(mask_list): +def CalUnionMask(mask_list): if len(mask_list) < 1: return None mask_list_bool = [x.astype(np.bool) for x in mask_list] diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/__init__.py b/contrib/Overlap-Recovery/train/src/deoccluder/__init__.py index b9f04e432..145b98302 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/__init__.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/__init__.py @@ -1,6 +1,4 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/24 22:06 -# @Author : WeiHua from .deoccluder_r50 import CustomKNet, TrainModelWrapper diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/__init__.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/__init__.py index 63d2b7648..5721ba23e 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/__init__.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/__init__.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/25 15:06 -# @Author : WeiHua from .custom_operations import CustomResizeBilinear, normal_init, multi_apply from .custom_blocks import ConvModule, FFN, MultiheadAttention diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py index b75e34148..d9ac38d8e 100644 --- 
a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/28 17:32 -# @Author : WeiHua try: from scipy.optimize import linear_sum_assignment @@ -50,23 +48,6 @@ class AssignResult(NiceRepr): assert key not in self.info self._extra_properties[key] = value - def get_extra_property(self, key): - """Get user-defined property.""" - return self._extra_properties.get(key, None) - - @property - def info(self): - """dict: a dictionary of info about the object""" - basic_info = { - 'num_gts': self.num_gts, - 'num_preds': self.num_preds, - 'gt_inds': self.gt_inds, - 'max_overlaps': self.max_overlaps, - 'labels': self.labels, - } - basic_info.update(self._extra_properties) - return basic_info - def __nice__(self): """str: a "nice" summary string describing this assign result""" parts = [] @@ -86,14 +67,29 @@ class AssignResult(NiceRepr): parts.append(f'labels.shape={tuple(self.labels.shape)!r}') return ', '.join(parts) + @property + def info(self): + """dict: a dictionary of info about the object""" + basic_info = { + 'num_gts': self.num_gts, + 'num_preds': self.num_preds, + 'gt_inds': self.gt_inds, + 'max_overlaps': self.max_overlaps, + 'labels': self.labels, + } + basic_info.update(self._extra_properties) + return basic_info + + def get_extra_property(self, key): + """Get user-defined property.""" + return self._extra_properties.get(key, None) + def add_gt_(self, gt_labels): """Add ground truth as assigned results. Args: gt_labels (torch.Tensor): Labels of gt boxes """ - # self_inds = torch.arange( - # 1, len(gt_labels) + 1, dtype=ms.int32, device=gt_labels.device) self_inds = ms.Tensor(np.arange( 1, len(gt_labels) + 1), dtype=ms.int32) self.gt_inds = ops.concat([self_inds, self.gt_inds]) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py index 7dc37f934..dab453ecc 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/25 15:26 -# @Author : WeiHua import warnings import mindspore as ms @@ -10,17 +8,8 @@ from src.model_utils.configs.config_base import config class ConvModule(nn.Cell): - def __init__(self, - in_channels, - out_channels, - kernel_size=1, - padding=0, - stride=1, - groups=1, - dilation=1, - conv_cfg=None, - norm_cfg=None, - act_cfg=None): + def __init__(self, in_channels, out_channels, kernel_size=1, padding=0, stride=1, + groups=1, dilation=1, conv_cfg=None, norm_cfg=None, act_cfg=None): super().__init__() if norm_cfg is not None: bias = False @@ -85,15 +74,11 @@ class FFN(nn.Cell): when adding the shortcut. """ - def __init__(self, - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - act_cfg=dict(type='ReLU'), - ffn_drop=0., - dropout_layer=None, - add_identity=True): + def __init__(self, embed_dims=256, feedforward_channels=1024, num_fcs=2, + act_cfg=None, ffn_drop=0., dropout_layer=None, add_identity=True): super(FFN, self).__init__() + if isinstance(act_cfg, type(None)): + act_cfg = dict(type='ReLU') assert num_fcs >= 2, 'num_fcs should be no less ' \ f'than 2. got {num_fcs}.' 
self.embed_dims = embed_dims diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py index df90b197e..b1ce3e9e9 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/25 16:37 -# @Author : WeiHua import mindspore as ms import numpy as np @@ -86,7 +84,6 @@ class DiceLoss(nn.Cell): class CLSBCELoss(nn.Cell): def __init__(self, loss_weight=1, use_sigmoid=True, reduction='mean'): super(CLSBCELoss, self).__init__() - # self.bce_loss = nn.BCELoss(reduction=reduction) self.bce_loss = nn.CrossEntropyLoss(reduction=reduction) self.loss_weight = loss_weight self.use_sigmoid = use_sigmoid @@ -94,7 +91,7 @@ class CLSBCELoss(nn.Cell): def construct(self, pred, label): return self.loss_weight * self.bce_loss(pred, label) - # return self.loss_weight * self.bce_loss(self.sigmoid(pred), label) + CUSTOM_LOSSES = { 'BinaryCrossEntropy': BinaryCrossEntropy, diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py index 14783d2e9..16824071f 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/28 17:46 -# @Author : WeiHua import mindspore as ms from mindspore import nn, ops @@ -33,12 +31,8 @@ class FocalLossCost: [-0.1950, -0.1207, -0.2626]]) """ - def __init__(self, - weight=1., - alpha=0.25, - gamma=2, - eps=1e-12, - binary_input=False): + def __init__(self, weight=1., alpha=0.25, gamma=2, + eps=1e-12, binary_input=False): self.weight = weight self.alpha = alpha self.gamma = gamma @@ -60,11 +54,25 @@ class FocalLossCost: 1 - self.alpha) * cls_pred.pow(self.gamma) pos_cost = -(cls_pred + self.eps).log() * self.alpha * ( 1 - cls_pred).pow(self.gamma) - # cls_cost = pos_cost[:, gt_labels] - neg_cost[:, gt_labels] gt_numpy = gt_labels.asnumpy() cls_cost = ms.Tensor(pos_cost.asnumpy()[:, gt_numpy]) - ms.Tensor(neg_cost.asnumpy()[:, gt_numpy]) return cls_cost * self.weight + def __call__(self, cls_pred, gt_labels): + """ + Args: + cls_pred (Tensor): Predicted classfication logits. + gt_labels (Tensor)): Labels. + + Returns: + Tensor: Focal cost matrix with weight in shape\ + (num_query, num_gt). + """ + if self.binary_input: + return self._mask_focal_loss_cost(cls_pred, gt_labels) + else: + return self._focal_loss_cost(cls_pred, gt_labels) + def _mask_focal_loss_cost(self, cls_pred, gt_labels): """ Args: @@ -88,25 +96,8 @@ class FocalLossCost: einsum = ops.Einsum('nc,mc->nm') cls_cost = einsum((pos_cost, gt_labels)) + einsum((neg_cost, (1 - gt_labels))) - # cls_cost = Einsum('nc,mc->nm', pos_cost, gt_labels) + \ - # Einsum('nc,mc->nm', neg_cost, (1 - gt_labels)) return cls_cost / n * self.weight - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classfication logits. - gt_labels (Tensor)): Labels. - - Returns: - Tensor: Focal cost matrix with weight in shape\ - (num_query, num_gt). 
- """ - if self.binary_input: - return self._mask_focal_loss_cost(cls_pred, gt_labels) - else: - return self._focal_loss_cost(cls_pred, gt_labels) - class DiceCost(object): """DiceCost. diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py index 964ccde95..777b3546f 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py @@ -1,11 +1,9 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/25 15:01 -# @Author : WeiHua from functools import partial -import numpy as np import warnings +import numpy as np import mindspore as ms import mindspore.nn as nn from mindspore.common import initializer as init diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py index 40ff5d7a6..5cd66e50d 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/25 22:59 -# @Author : WeiHua import numpy as np import mindspore as ms @@ -24,18 +22,17 @@ class MaskSamplingResult(NiceRepr): })> """ - def __init__(self, pos_inds, neg_inds, masks, gt_masks, assign_result, - gt_flags): + def __init__(self, pos_inds, neg_inds, masks, gt_masks, assign_result, gt_flags): self.pos_inds = pos_inds self.neg_inds = neg_inds if pos_inds.shape[0] == 0: - H, W = masks.shape[-2:] - self.pos_masks = np.zeros((0, H, W)) + height, width = masks.shape[-2:] + self.pos_masks = np.zeros((0, height, width)) else: self.pos_masks = masks[pos_inds] if neg_inds.shape[0] == 0: - H, W = masks.shape[-2:] - self.neg_masks = np.zeros((0, H, W)) + height, width = masks.shape[-2:] + self.neg_masks = np.zeros((0, height, width)) else: self.neg_masks = masks[neg_inds] self.pos_is_gt = gt_flags[pos_inds] @@ -60,11 +57,6 @@ class MaskSamplingResult(NiceRepr): """torch.Tensor: concatenated positive and negative boxes""" return ops.concat([self.pos_masks, self.neg_masks]) - @property - def bboxes(self): - """torch.Tensor: concatenated positive and negative boxes""" - return ops.concat([self.pos_bboxes, self.neg_bboxes]) - def __nice__(self): data = self.info.copy() data['pos_masks'] = data.pop('pos_masks').shape @@ -73,6 +65,11 @@ class MaskSamplingResult(NiceRepr): body = ' ' + ',\n '.join(parts) return '{\n' + body + '\n}' + @property + def bboxes(self): + """torch.Tensor: concatenated positive and negative boxes""" + return ops.concat([self.pos_bboxes, self.neg_bboxes]) + @property def info(self): """Returns a dictionary of info about the object.""" diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py index ac2d6f1c7..a535d1609 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/24 22:06 -# @Author : WeiHua import mindspore as ms from mindspore import nn, ops @@ -18,6 +16,7 @@ from .rpn.kernel_head import ConvKernelHead from .roi.custom_kernel_iter_head import CustomKernelIterHead from .utils import sem2ins_masks + class 
CustomKNet(nn.Cell): def __init__(self, config): super(CustomKNet, self).__init__() @@ -40,8 +39,6 @@ class CustomKNet(nn.Cell): self.reduce_sum = ops.ReduceSum() - # self.cnt = 0 - def load_r50(self, ckpt_path, prefix='backbone'): param_dict = load_checkpoint(ckpt_path) if prefix: @@ -57,14 +54,8 @@ class CustomKNet(nn.Cell): x = self.neck(x) return x - def forward_train(self, - img, - img_metas, - gt_bboxes=None, - gt_labels=None, - gt_bboxes_ignore=None, - gt_masks=None, - gt_semantic_seg=None): + def forward_train(self, img, img_metas, gt_bboxes=None, gt_labels=None, + gt_bboxes_ignore=None, gt_masks=None, gt_semantic_seg=None): assert gt_masks is not None # gt_masks and gt_semantic_seg are not padded when forming batch @@ -72,16 +63,14 @@ class CustomKNet(nn.Cell): gt_sem_seg = [] gt_sem_cls = [] # batch_input_shape shoud be the same across images - pad_H, pad_W = img_metas[0]['batch_input_shape'] - assign_H = pad_H // self.mask_assign_stride - assign_W = pad_W // self.mask_assign_stride + pad_h, pad_w = img_metas[0]['batch_input_shape'] + assign_H = pad_h // self.mask_assign_stride + assign_W = pad_w // self.mask_assign_stride for i, gt_mask in enumerate(gt_masks): mask_tensor = gt_mask.to_tensor(ms.float32) - if gt_mask.width != pad_W or gt_mask.height != pad_H: - # pad_wh = (0, pad_W - gt_mask.width, 0, pad_H - gt_mask.height) - # mask_tensor = F.pad(mask_tensor, pad_wh, value=0) - pad_wh = ((0, 0), (0, pad_H - gt_mask.height), (0, pad_W - gt_mask.width)) + if gt_mask.width != pad_w or gt_mask.height != pad_h: + pad_wh = ((0, 0), (0, pad_h - gt_mask.height), (0, pad_w - gt_mask.width)) pad_op = nn.Pad(paddings=pad_wh) mask_tensor = pad_op(mask_tensor) @@ -143,9 +132,6 @@ class CustomKNet(nn.Cell): total_loss += val else: total_loss = val - # self.cnt += 1 - # if self.cnt % 10 == 0: - # print(losses) return total_loss def simple_test(self, img, img_metas, rescale=False): @@ -230,10 +216,7 @@ class CustomKNet(nn.Cell): img_metas.append(img_meta) x = self.extract_feat(img) - # print('*'*20) proposal_feats, x_feats, mask_preds, cls_scores, seg_preds = self.rpn_head.onnx_export(x) - # mask_preds = self.rpn_head.onnx_export(x) - # return mask_preds scaled_mask_preds, cls_score = self.roi_head.onnx_export(x_feats, proposal_feats, diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py b/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py index 98aec5e56..16756331b 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py @@ -27,6 +27,7 @@ def bias_init_zeros(shape): """Bias init method.""" return Tensor(np.array(np.zeros(shape).astype(np.float32)), dtype=mstype.float32) + def _conv(in_channels, out_channels, kernel_size=3, stride=1, padding=0, pad_mode='pad'): """Conv2D wrapper.""" shape = (out_channels, in_channels, kernel_size, kernel_size) @@ -37,6 +38,7 @@ def _conv(in_channels, out_channels, kernel_size=3, stride=1, padding=0, pad_mod kernel_size=kernel_size, stride=stride, padding=padding, pad_mode=pad_mode, weight_init=weights, has_bias=True, bias_init=biass) + class FeatPyramidNeck(nn.Cell): """ Feature pyramid network cell, usually uses as network neck. 
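
The `forward_train` hunk in `deoccluder_r50.py` above replaces the commented-out `F.pad` call with MindSpore's `nn.Pad`, padding each instance mask on the bottom/right to the batch's `batch_input_shape` before the masks are reduced by `mask_assign_stride` (4 in this repo's defaults) for target assignment. A minimal standalone sketch of that padding step, assuming MindSpore is installed; the sizes and the two-instance mask below are illustrative placeholders, not values taken from this patch:

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn

# Hypothetical batch input shape and mask tensor, for illustration only.
pad_h, pad_w = 768, 768                               # img_metas[0]['batch_input_shape']
mask = ms.Tensor(np.ones((2, 600, 500)), ms.float32)  # (num_instances, H, W)

num_inst, h, w = mask.shape
if h != pad_h or w != pad_w:
    # Same padding spec as forward_train: leave the instance axis untouched,
    # pad only the bottom (pad_h - h) and right (pad_w - w) with zeros.
    pad_wh = ((0, 0), (0, pad_h - h), (0, pad_w - w))
    mask = nn.Pad(paddings=pad_wh)(mask)

print(mask.shape)  # (2, 768, 768)
```

After this padding, `forward_train` computes `assign_H = pad_h // self.mask_assign_stride` (and likewise for the width) so that mask assignment runs at the reduced resolution.
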
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py b/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py index 822149fab..388805169 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py @@ -91,6 +91,20 @@ class ResNet(nn.Cell): init=initializer.HeNormal(mode='fan_out', nonlinearity='relu'), shape=m.weight.shape, dtype=mindspore.float32), name=m.weight.name) + def construct(self, x): + x = self.conv1(x) + x = self.bn1(x) + x = self.relu(x) + x = self.pad(x) + x = self.maxpool(x) + + c2 = self.layer1(x) + c3 = self.layer2(c2) + c4 = self.layer3(c3) + c5 = self.layer4(c4) + + return c2, c3, c4, c5 + def _make_layer(self, block, planes, blocks, stride=1): downsample = None if stride != 1 or self.inplanes != planes * block.expansion: @@ -108,20 +122,6 @@ class ResNet(nn.Cell): return nn.SequentialCell(*layers) - def construct(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.pad(x) - x = self.maxpool(x) - - c2 = self.layer1(x) - c3 = self.layer2(c2) - c4 = self.layer3(c3) - c5 = self.layer4(c4) - - return c2, c3, c4, c5 - def resnet50(pretrained=True, **kwargs): """Constructs a ResNet-50 model. diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/__init__.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/__init__.py index 2b2fbdae7..4f96c1580 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/__init__.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/__init__.py @@ -1,4 +1,2 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/28 20:00 -# @Author : WeiHua diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py index 2726b1926..b2209b4e7 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py @@ -1,31 +1,20 @@ -from .custom_kernel_update_head import CustomKernelUpdateHead -from ..custom_cells import build_assigner, build_sampler - import mindspore as ms from mindspore import nn, ops import numpy as np +from .custom_kernel_update_head import CustomKernelUpdateHead +from ..custom_cells import build_assigner, build_sampler + + class CustomKernelIterHead(nn.Cell): - def __init__(self, - num_stages=6, - recursive=False, - assign_stages=5, - stage_loss_weights=(1, 1, 1, 1, 1, 1), - proposal_feature_channel=256, - merge_cls_scores=False, - post_assign=False, - hard_target=False, - num_proposals=100, - num_thing_classes=80, - mask_assign_stride=4, - mask_head=dict(), - mask_out_stride=4, - train_cfg=None, - test_cfg=None, - **kwargs): + def __init__(self, num_stages=6, recursive=False, assign_stages=5, stage_loss_weights=(1, 1, 1, 1, 1, 1), + proposal_feature_channel=256, merge_cls_scores=False, post_assign=False, hard_target=False, + num_proposals=100, num_thing_classes=80, mask_assign_stride=4, mask_head=None, mask_out_stride=4, + train_cfg=None, test_cfg=None, **kwargs): super(CustomKernelIterHead, self).__init__() - assert mask_head is not None + if isinstance(mask_head, type(None)): + mask_head = dict() assert len(stage_loss_weights) == num_stages self.num_stages = num_stages self.stage_loss_weights = stage_loss_weights diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py index 
758366a7b..1a21b7a73 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py @@ -1,8 +1,8 @@ import numpy as np - -from .kernel_update_head import KernelUpdateHead import mindspore as ms from mindspore import nn, ops + +from .kernel_update_head import KernelUpdateHead from ..custom_cells import build_loss @@ -33,7 +33,8 @@ class CustomKernelUpdateHead(KernelUpdateHead): else: self.union_fc = nn.Dense(2 * self.in_channels, self.in_channels) self.interact_fc = nn.Dense(2 * self.in_channels, self.in_channels) - self.apply_occ_union = kernel_occlusion_cfg['u_mask_loss']['loss_weight'] > 0 or kernel_occlusion_cfg['u_dice_loss']['loss_weight'] > 0 + self.apply_occ_union = kernel_occlusion_cfg['u_mask_loss']['loss_weight'] > 0 or \ + kernel_occlusion_cfg['u_dice_loss']['loss_weight'] > 0 self.occ_union_mask_loss = build_loss(kernel_occlusion_cfg.get('u_mask_loss').copy()) self.occ_interact_mask_loss = build_loss(kernel_occlusion_cfg.get('i_mask_loss').copy()) self.occ_union_dice_loss = build_loss(kernel_occlusion_cfg.get('u_dice_loss').copy()) @@ -66,22 +67,17 @@ class CustomKernelUpdateHead(KernelUpdateHead): return ui_kernels - def construct(self, - x, - proposal_feat, - mask_preds, - prev_cls_score=None, - mask_shape=None, - img_metas=None): - N, num_proposals = proposal_feat.shape[:2] + def construct(self, x, proposal_feat, mask_preds, prev_cls_score=None, + mask_shape=None, img_metas=None): + n_sample, num_proposals = proposal_feat.shape[:2] if self.feat_transform is not None: x = self.feat_transform(x) - C, H, W = x.shape[-3:] + chn, height, width = x.shape[-3:] mask_h, mask_w = mask_preds.shape[-2:] - if mask_h != H or mask_w != W: + if mask_h != height or mask_w != width: gather_mask = self.interpolate( - mask_preds, size=(H, W), align_corners=False) + mask_preds, size=(height, width), align_corners=False) else: gather_mask = mask_preds @@ -102,19 +98,19 @@ class CustomKernelUpdateHead(KernelUpdateHead): # x_feat = Einsum('bnhw,bchw->bnc', sigmoid_masks, x) # obj_feat in shape [B, N, C, K, K] -> [B, N, C, K*K] -> [B, N, K*K, C] - proposal_feat = proposal_feat.reshape(N, num_proposals, + proposal_feat = proposal_feat.reshape(n_sample, num_proposals, self.in_channels, -1).transpose(0, 1, 3, 2) obj_feat = self.kernel_update_conv(x_feat, proposal_feat) # [B, N, K*K, C] -> [B, N, K*K*C] -> [N, B, K*K*C] - obj_feat = obj_feat.reshape(N, num_proposals, -1).transpose(1, 0, 2) + obj_feat = obj_feat.reshape(n_sample, num_proposals, -1).transpose(1, 0, 2) obj_feat = self.attention_norm(self.attention(obj_feat)) # [N, B, K*K*C] -> [B, N, K*K*C] obj_feat = obj_feat.transpose(1, 0, 2) # obj_feat in shape [B, N, K*K*C] -> [B, N, K*K, C] - obj_feat = obj_feat.reshape(N, num_proposals, -1, self.in_channels) + obj_feat = obj_feat.reshape(n_sample, num_proposals, -1, self.in_channels) # FFN if self.with_ffn: @@ -134,29 +130,29 @@ class CustomKernelUpdateHead(KernelUpdateHead): for reg_layer in self.mask_fcs: mask_feat = reg_layer(mask_feat) - cls_score = self.fc_cls(cls_feat).view(N, num_proposals, -1) + cls_score = self.fc_cls(cls_feat).view(n_sample, num_proposals, -1) # [B, N, K*K, C] -> [B, N, C, K*K] mask_feat = self.fc_mask(mask_feat).transpose(0, 1, 3, 2) if (self.mask_transform_stride == 2 and self.feat_gather_stride == 1): mask_x = self.interpolate( x, scale_factor=0.5, align_corners=False) - H, W = mask_x.shape[-2:] + height, width = mask_x.shape[-2:] else: mask_x = x # [B, N, 
C, K*K] -> [B*N, C, K, K] if self.apply_kernel_occlusion and self.training: tmp_num = num_proposals + ui_pair_num - mask_feat = mask_feat.reshape(N, tmp_num, C, + mask_feat = mask_feat.reshape(n_sample, tmp_num, chn, self.conv_kernel_size, self.conv_kernel_size) else: - mask_feat = mask_feat.reshape(N, num_proposals, C, + mask_feat = mask_feat.reshape(n_sample, num_proposals, chn, self.conv_kernel_size, self.conv_kernel_size) # [B, C, H, W] -> [1, B*C, H, W] new_mask_preds = [] - for i in range(N): + for i in range(n_sample): new_mask_preds.append( ops.conv2d( mask_x[i:i + 1], @@ -165,23 +161,23 @@ class CustomKernelUpdateHead(KernelUpdateHead): new_mask_preds = ops.concat(new_mask_preds, axis=0) if self.apply_kernel_occlusion and self.training: - new_mask_preds = new_mask_preds.reshape(N, num_proposals + ui_pair_num, H, W) + new_mask_preds = new_mask_preds.reshape(n_sample, num_proposals + ui_pair_num, height, width) else: - new_mask_preds = new_mask_preds.reshape(N, num_proposals, H, W) + new_mask_preds = new_mask_preds.reshape(n_sample, num_proposals, height, width) if self.mask_transform_stride == 2: new_mask_preds = self.interpolate( new_mask_preds, scale_factor=2, align_corners=False) - if mask_shape is not None and mask_shape[0] != H: + if mask_shape is not None and mask_shape[0] != height: new_mask_preds = self.interpolate( new_mask_preds, size=mask_shape, mode='bilinear') return cls_score, new_mask_preds, obj_feat.transpose(0, 1, 3, 2).reshape( - N, num_proposals, self.in_channels, self.conv_kernel_size, + n_sample, num_proposals, self.in_channels, self.conv_kernel_size, self.conv_kernel_size) def loss(self, diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py index 623bf16f0..5c6500298 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py @@ -11,46 +11,30 @@ from .kernel_updator import KernelUpdator class KernelUpdateHead(nn.Cell): - def __init__(self, - num_classes=80, - num_ffn_fcs=2, - num_heads=8, - num_cls_fcs=1, - num_mask_fcs=3, - feedforward_channels=2048, - in_channels=256, - out_channels=256, - dropout=0.0, - mask_thr=0.5, - ffn_act_cfg=dict(type='ReLU', inplace=True), - conv_kernel_size=3, - feat_transform_cfg=None, - hard_mask_thr=0.5, - kernel_init=False, - with_ffn=True, - mask_out_stride=4, - relative_coors=False, - relative_coors_off=False, - feat_gather_stride=1, - mask_transform_stride=1, - mask_upsample_stride=1, - num_thing_classes=80, - num_stuff_classes=53, - mask_assign_stride=4, - ignore_label=255, - thing_label_in_seg=0, - kernel_updator_cfg=dict(), - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0), - loss_dice=dict(type='DiceLoss', loss_weight=3.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=2.0), - num_proposals=4): + def __init__(self, num_classes=80, num_ffn_fcs=2, num_heads=8, num_cls_fcs=1, + num_mask_fcs=3, feedforward_channels=2048, in_channels=256, + out_channels=256, dropout=0.0, mask_thr=0.5, ffn_act_cfg=None, + conv_kernel_size=3, feat_transform_cfg=None, hard_mask_thr=0.5, + kernel_init=False, with_ffn=True, mask_out_stride=4, + relative_coors=False, relative_coors_off=False, feat_gather_stride=1, + mask_transform_stride=1, mask_upsample_stride=1, num_thing_classes=80, + num_stuff_classes=53, mask_assign_stride=4, ignore_label=255, + 
thing_label_in_seg=0, kernel_updator_cfg=None, loss_mask=None, + loss_dice=None, loss_cls=None, num_proposals=4): super(KernelUpdateHead, self).__init__() + # init dict-like arguments + if isinstance(ffn_act_cfg, type(None)): + ffn_act_cfg = dict(type='ReLU', inplace=True) + if isinstance(kernel_updator_cfg, type(None)): + kernel_updator_cfg = dict() + if isinstance(loss_mask, type(None)): + loss_mask = dict(type='CrossEntropyLoss', use_mask=True, loss_weight=1.0) + if isinstance(loss_dice, type(None)): + loss_dice = dict(type='DiceLoss', loss_weight=3.0) + if isinstance(loss_cls, type(None)): + loss_cls = dict(type='FocalLoss', use_sigmoid=True, gamma=2.0, + alpha=0.25, loss_weight=2.0) + self.num_classes = num_classes self.loss_cls = build_loss(loss_cls) self.loss_mask = build_loss(loss_mask) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py index d4ba7295a..6b21d610b 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py @@ -4,16 +4,11 @@ from mindspore import nn, ops class KernelUpdator(nn.Cell): - def __init__(self, - in_channels=256, - feat_channels=64, - out_channels=None, - input_feat_shape=3, - gate_sigmoid=True, - gate_norm_act=False, - activate_out=False, - act_cfg=dict(type='ReLU', inplace=True)): + def __init__(self, in_channels=256, feat_channels=64, out_channels=None, input_feat_shape=3, + gate_sigmoid=True, gate_norm_act=False, activate_out=False, act_cfg=None): super(KernelUpdator, self).__init__() + if isinstance(act_cfg, type(None)): + act_cfg = dict(type='ReLU', inplace=True) self.in_channels = in_channels self.feat_channels = feat_channels self.out_channels_raw = out_channels @@ -42,7 +37,7 @@ class KernelUpdator(nn.Cell): self.input_norm_in = nn.LayerNorm([self.feat_channels]) self.input_norm_out = nn.LayerNorm([self.feat_channels]) - if act_cfg and act_cfg['type'] == 'ReLU': + if act_cfg and act_cfg.get('type', 'None') == 'ReLU': self.activation = nn.ReLU() else: self.activation = nn.Identity() @@ -70,9 +65,7 @@ class KernelUpdator(nn.Cell): input_gate = self.input_norm_in(self.input_gate(gate_feats)) update_gate = self.norm_in(self.update_gate(gate_feats)) if self.gate_sigmoid: - # input_gate = input_gate.sigmoid() input_gate = ms.ops.sigmoid(input_gate) - # update_gate = update_gate.sigmoid() update_gate = ms.ops.sigmoid(update_gate) param_out = self.norm_out(param_out) input_out = self.input_norm_out(input_out) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/__init__.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/__init__.py index 7c8d0d8c3..4f96c1580 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/__init__.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/__init__.py @@ -1,4 +1,2 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/24 23:14 -# @Author : WeiHua diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py index 28b94ed97..b5a5c7258 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py @@ -1,14 +1,13 @@ import numpy as np import mindspore as ms from mindspore import nn, ops +from mindspore import log as logger from mindspore.common import initializer as init from mindspore.communication.management import GlobalComm, 
get_group_size +from src.model_utils.configs import Config from .semantic_fpn_wrapper import SemanticFPNWrapper from ..custom_cells import (ConvModule, normal_init, build_loss, multi_apply, build_sampler, build_assigner) -from src.model_utils.configs import Config - -from mindspore import log as logger def bias_init_with_prob(prior_prob: float) -> float: @@ -19,46 +18,19 @@ def bias_init_with_prob(prior_prob: float) -> float: class ConvKernelHead(nn.Cell): - def __init__(self, - num_proposals=100, - in_channels=256, - out_channels=256, - num_heads=8, - num_cls_fcs=1, - num_seg_convs=1, - num_loc_convs=1, - att_dropout=False, - localization_fpn=None, - conv_kernel_size=1, - norm_cfg=dict(type='GN', num_groups=32), - semantic_fpn=True, - train_cfg=None, - num_classes=80, - xavier_init_kernel=False, - kernel_init_std=0.01, - use_binary=False, - proposal_feats_with_obj=False, - loss_mask=None, - loss_seg=None, - loss_cls=None, - loss_dice=None, - loss_rank=None, - feat_downsample_stride=1, - feat_refine_stride=1, - feat_refine=True, - with_embed=False, - feat_embed_only=False, - conv_normal_init=False, - mask_out_stride=4, - hard_target=False, - num_thing_classes=80, - num_stuff_classes=53, - mask_assign_stride=4, - ignore_label=255, - thing_label_in_seg=0, - cat_stuff_mask=False, - **kwargs): + def __init__(self, num_proposals=100, in_channels=256, out_channels=256, num_heads=8, + num_cls_fcs=1, num_seg_convs=1, num_loc_convs=1, att_dropout=False, + localization_fpn=None, conv_kernel_size=1, norm_cfg=None, semantic_fpn=True, + train_cfg=None, num_classes=80, xavier_init_kernel=False, kernel_init_std=0.01, + use_binary=False, proposal_feats_with_obj=False, loss_mask=None, loss_seg=None, + loss_cls=None, loss_dice=None, loss_rank=None, feat_downsample_stride=1, + feat_refine_stride=1, feat_refine=True, with_embed=False, feat_embed_only=False, + conv_normal_init=False, mask_out_stride=4, hard_target=False, num_thing_classes=80, + num_stuff_classes=53, mask_assign_stride=4, ignore_label=255, thing_label_in_seg=0, + cat_stuff_mask=False, **kwargs): super(ConvKernelHead, self).__init__() + if isinstance(norm_cfg, type(None)): + norm_cfg = dict(type='GN', num_groups=32) self.num_proposals = num_proposals self.num_cls_fcs = num_cls_fcs self.train_cfg = Config(train_cfg) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py index aec93bad4..7ec8788ef 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py @@ -1,7 +1,5 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/25 0:35 -# @Author : WeiHua import math import numpy as np import mindspore as ms @@ -34,13 +32,8 @@ class SinePositionalEncoding(nn.Cell): Default: None """ - def __init__(self, - num_feats, - temperature=10000, - normalize=False, - scale=2 * math.pi, - eps=1e-6, - offset=0.): + def __init__(self, num_feats, temperature=10000, normalize=False, + scale=2 * math.pi, eps=1e-6, offset=0.): super(SinePositionalEncoding, self).__init__() if normalize: assert isinstance(scale, (float, int)), 'when normalize is set,' \ @@ -78,24 +71,32 @@ class SinePositionalEncoding(nn.Cell): x_embed = (x_embed + self.offset) / \ (x_embed[:, :, -1:] + self.eps) * self.scale dim_t = ms.Tensor(np.arange(self.num_feats), dtype=ms.float32) - # dim_t = torch.arange( - # self.num_feats, dtype=torch.float32, device=mask.device) dim_t = 
self.temperature**(2 * (dim_t // 2) / self.num_feats) pos_x = x_embed[:, :, :, None] / dim_t pos_y = y_embed[:, :, :, None] / dim_t # use `view` instead of `flatten` for dynamically exporting to ONNX - B, H, W = mask.shape + batch_size, height, width = mask.shape sin = ops.Sin() cos = ops.Cos() pos_x = ops.stack( (sin(pos_x[:, :, :, 0::2]), cos(pos_x[:, :, :, 1::2])), - axis=4).view(B, H, W, -1) + axis=4).view(batch_size, height, width, -1) pos_y = ops.stack( (sin(pos_y[:, :, :, 0::2]), cos(pos_y[:, :, :, 1::2])), - axis=4).view(B, H, W, -1) + axis=4).view(batch_size, height, width, -1) pos = ops.concat((pos_y, pos_x), axis=3).transpose((0, 3, 1, 2)) return pos + def __repr__(self): + """str: a string that describes the module""" + repr_str = self.__class__.__name__ + repr_str += f'(num_feats={self.num_feats}, ' + repr_str += f'temperature={self.temperature}, ' + repr_str += f'normalize={self.normalize}, ' + repr_str += f'scale={self.scale}, ' + repr_str += f'eps={self.eps})' + return repr_str + def model_export(self, mask): """Forward function for `SinePositionalEncoding`. @@ -115,9 +116,7 @@ class SinePositionalEncoding(nn.Cell): tmp_not_mask = np.array(not_mask, dtype=np.int32) y_embed = np.cumsum(tmp_not_mask, axis=1, dtype=np.float32) - # y_embed = ms.Tensor(y_embed, dtype=ms.float32) x_embed = np.cumsum(tmp_not_mask, axis=2, dtype=np.float32) - # x_embed = ms.Tensor(x_embed, dtype=ms.float32) if self.normalize: y_embed = (y_embed + self.offset) / \ @@ -126,30 +125,21 @@ class SinePositionalEncoding(nn.Cell): (x_embed[:, :, -1:] + self.eps) * self.scale # dim_t = ms.Tensor(np.arange(self.num_feats), dtype=ms.float32) dim_t = np.arange(self.num_feats).astype(np.float32) - # dim_t = torch.arange( - # self.num_feats, dtype=torch.float32, device=mask.device) - # dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats) dim_t = self.temperature**(2 * (dim_t / 2) / self.num_feats) pos_x = x_embed[:, :, :, None] / dim_t pos_y = y_embed[:, :, :, None] / dim_t # use `view` instead of `flatten` for dynamically exporting to ONNX - B, H, W = mask.shape + batch_size, height, width = mask.shape tmp_pos_x = pos_x tmp_pos_y =pos_y - tmp_pos_x = np.stack((np.sin(tmp_pos_x[:,:,:,0::2]), np.cos(tmp_pos_x[:,:,:,1::2])), axis=4).reshape(B,H,W,-1) - tmp_pos_y = np.stack((np.sin(tmp_pos_y[:,:,:,0::2]), np.cos(tmp_pos_y[:,:,:,1::2])), axis=4).reshape(B,H,W,-1) + tmp_pos_x = np.stack( + (np.sin(tmp_pos_x[:, :, :, 0::2]), np.cos(tmp_pos_x[:, :, :, 1::2])), axis=4 + ).reshape(batch_size, height, width, -1) + tmp_pos_y = np.stack( + (np.sin(tmp_pos_y[:, :, :, 0::2]), np.cos(tmp_pos_y[:, :, :, 1::2])), axis=4 + ).reshape(batch_size, height, width, -1) tmp_pos = np.concatenate((tmp_pos_y, tmp_pos_x),axis=3).transpose((0,3,1,2)) pos = ms.Tensor(tmp_pos, dtype=ms.float32) return pos - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_feats={self.num_feats}, ' - repr_str += f'temperature={self.temperature}, ' - repr_str += f'normalize={self.normalize}, ' - repr_str += f'scale={self.scale}, ' - repr_str += f'eps={self.eps})' - return repr_str diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py index 6d85304eb..a880ea2c9 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py @@ -1,9 +1,10 @@ +import numpy as np import 
mindspore as ms from mindspore import nn, ops from mindspore import log as logger from ..custom_cells import CustomResizeBilinear, ConvModule, normal_init from .positional_encoding import SinePositionalEncoding -import numpy as np + class SemanticFPNWrapper(nn.Cell): """Implementation of Semantic FPN used in Panoptic FPN. @@ -20,25 +21,16 @@ class SemanticFPNWrapper(nn.Cell): norm_cfg ([type], optional): [description]. Defaults to None. """ - def __init__(self, - in_channels, - feat_channels, - out_channels, - start_level, - end_level, - cat_coors=False, - positional_encoding=None, - cat_coors_level=3, - fuse_by_cat=False, - return_list=False, - upsample_times=3, - with_pred=True, - num_aux_convs=0, - act_cfg=dict(type='ReLU', inplace=True), - out_act_cfg=dict(type='ReLU'), - conv_cfg=None, - norm_cfg=None): + def __init__(self, in_channels, feat_channels, out_channels, start_level, end_level, + cat_coors=False, positional_encoding=None, cat_coors_level=3, fuse_by_cat=False, + return_list=False, upsample_times=3, with_pred=True, num_aux_convs=0, act_cfg=None, + out_act_cfg=None, conv_cfg=None, norm_cfg=None): super(SemanticFPNWrapper, self).__init__() + # init dict-like arguments + if isinstance(act_cfg, type(None)): + act_cfg = dict(type='ReLU', inplace=True) + if isinstance(out_act_cfg, type(None)): + out_act_cfg = dict(type='ReLU') self.in_channels = in_channels self.feat_channels = feat_channels @@ -76,10 +68,9 @@ class SemanticFPNWrapper(nn.Cell): conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) - # convs_per_level.add_module('conv' + str(i), one_conv) convs_per_level.append(one_conv) else: - for i in range(self.end_level - upsample_times): + for ii in range(self.end_level - upsample_times): one_conv = ConvModule( chn, self.feat_channels, @@ -89,7 +80,6 @@ class SemanticFPNWrapper(nn.Cell): conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) - # convs_per_level.add_module('conv' + str(i), one_conv) convs_per_level.append(one_conv) self.convs_all_levels.append(convs_per_level) continue @@ -108,7 +98,6 @@ class SemanticFPNWrapper(nn.Cell): conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) - # convs_per_level.add_module('conv' + str(j), one_conv) convs_per_level.append(one_conv) if j < upsample_times - (self.end_level - i): one_upsample = CustomResizeBilinear( @@ -124,7 +113,6 @@ class SemanticFPNWrapper(nn.Cell): conv_cfg=self.conv_cfg, norm_cfg=self.norm_cfg, act_cfg=self.act_cfg) - # convs_per_level.add_module('conv' + str(j), one_conv) convs_per_level.append(one_conv) if j < upsample_times - (self.end_level - i): one_upsample = CustomResizeBilinear( diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/utils.py b/contrib/Overlap-Recovery/train/src/deoccluder/utils.py index a8063dc00..e7a2f907f 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/utils.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/utils.py @@ -1,13 +1,11 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/30 0:36 -# @Author : WeiHua import mindspore as ms from mindspore import nn, ops -def sem2ins_masks(gt_sem_seg, - num_thing_classes=80): + +def sem2ins_masks(gt_sem_seg, num_thing_classes=80): """Convert semantic segmentation mask to binary masks Args: diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/__init__.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/__init__.py index 9f86fda73..6a2dd1797 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/configs/__init__.py +++ 
b/contrib/Overlap-Recovery/train/src/model_utils/configs/__init__.py @@ -1,6 +1,4 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- -# @Time : 2022/11/25 0:17 -# @Author : WeiHua from .config_base import Config diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py index cc0e65a16..a74e8b24e 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py @@ -1,6 +1,7 @@ from pprint import pprint, pformat from .config_model import model + class Config: """ Configuration namespace. Convert dictionary to members. @@ -9,74 +10,73 @@ class Config: for k, v in cfg_dict.items(): setattr(self, k, v) - def get(self, attr_name, default_value=None): - return getattr(self, attr_name, default_value) - def __str__(self): return pformat(self.__dict__) def __repr__(self): return self.__str__() + def get(self, attr_name, default_value=None): + return getattr(self, attr_name, default_value) + -synth_data_root = "root-directory-to-train-data" -real_data_root = "root-directory-to-test-data" -img_scale = (768, 768) -img_norm_cfg = dict( +SYNTH_DATA_ROOT = "root-directory-to-train-data" +REAL_DATA_ROOT = "root-directory-to-test-data" +IMG_SCALE = (768, 768) +IMG_NORM_CFG = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ +TRAIN_PIPELINE = [ dict(type='LoadImageFromFile'), dict(type='CustomLoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=img_scale, keep_ratio=True), + dict(type='Resize', img_scale=IMG_SCALE, keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), - # # visualization tool - # dict(type='CustomVisualize'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=768), # 32 + dict(type='Normalize', **IMG_NORM_CFG), + dict(type='Pad', size_divisor=768), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'], meta_keys=('ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction'), ), ] -test_pipeline = [ +TEST_PIPELINE = [ dict(type='LoadImageFromFile'), - dict(type='Resize', img_scale=img_scale, keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=768, eval_model=True), # 32 + dict(type='Resize', img_scale=IMG_SCALE, keep_ratio=True), + dict(type='Normalize', **IMG_NORM_CFG), + dict(type='Pad', size_divisor=768, eval_model=True), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img'], meta_keys=('ori_shape', 'img_shape', 'pad_shape', 'scale_factor'), eval_mode=True), ] -config_dict = dict( +CONFIG_DICT = dict( model=model, pre_trained="", data=dict( - samples_per_gpu=8, # 8 - workers_per_gpu=8, # 8 + samples_per_gpu=8, + workers_per_gpu=8, train=dict( type='SynthOverlapDataset', - ann_file=synth_data_root + 'train_gt.jsonl', - img_prefix=synth_data_root, - seg_prefix=synth_data_root, - pipeline=train_pipeline), + ann_file=SYNTH_DATA_ROOT + 'train_gt.jsonl', + img_prefix=SYNTH_DATA_ROOT, + seg_prefix=SYNTH_DATA_ROOT, + pipeline=TRAIN_PIPELINE), val=dict( type='RealOverlapDataset', - ann_file=real_data_root + 'annotation.json', - img_prefix=real_data_root, - seg_prefix=real_data_root, - pipeline=test_pipeline, + ann_file=REAL_DATA_ROOT + 'annotation.json', + img_prefix=REAL_DATA_ROOT, + seg_prefix=REAL_DATA_ROOT, + pipeline=TEST_PIPELINE, test_mode=True), test=dict( 
type='RealOverlapDataset', - ann_file=real_data_root + 'annotation.json', - img_prefix=real_data_root, - seg_prefix=real_data_root, - pipeline=test_pipeline, + ann_file=REAL_DATA_ROOT + 'annotation.json', + img_prefix=REAL_DATA_ROOT, + seg_prefix=REAL_DATA_ROOT, + pipeline=TEST_PIPELINE, test_mode=True) ), train_cfg=dict( + # usually we only need to train 1 epoch to reach the desired performance total_epoch=60, optimizer='Adam', lr=0.00005, @@ -94,4 +94,4 @@ config_dict = dict( checkpoint_path='path-to-checkpoint-model' ) -config = Config(config_dict) +config = Config(CONFIG_DICT) diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py index 531391405..5db0c0a73 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py @@ -1,9 +1,9 @@ -num_stages = 3 -num_proposals = 4 -conv_kernel_size = 1 -num_classes = 1 +NUM_STAGES = 3 +NUM_PROPOSALS = 4 +CONV_KERNEL_SIZE = 1 +NUM_CLASSES = 1 kernel_occlusion_cfg = dict( - num_proposals=num_proposals, + num_proposals=NUM_PROPOSALS, pair_manner='sum', u_mask_loss=dict( type='BinaryCrossEntropy', loss_weight=1.0), @@ -25,8 +25,7 @@ model = dict( out_channels=256, num_outs=4), rpn_head=dict( - # type='ConvKernelHead', - conv_kernel_size=conv_kernel_size, + conv_kernel_size=CONV_KERNEL_SIZE, feat_downsample_stride=2, feat_refine_stride=1, feat_refine=False, @@ -35,7 +34,6 @@ model = dict( num_seg_convs=1, conv_normal_init=True, localization_fpn=dict( - # type='SemanticFPNWrapper', in_channels=256, feat_channels=256, out_channels=256, @@ -49,22 +47,18 @@ model = dict( return_list=False, num_aux_convs=1, norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)), - num_proposals=num_proposals, + num_proposals=NUM_PROPOSALS, proposal_feats_with_obj=True, xavier_init_kernel=False, kernel_init_std=1, num_cls_fcs=1, in_channels=256, - num_classes=num_classes, + num_classes=NUM_CLASSES, feat_transform_cfg=None, loss_seg=dict( type='BinaryCrossEntropy', loss_weight=1.0 ), - # loss_seg=dict( - # type='FocalLoss', - # gamma=2.0, - # loss_weight=1.0), loss_mask=dict( type='BinaryCrossEntropy', loss_weight=1.0), loss_dice=dict(type='DiceLoss', loss_weight=4.0), @@ -81,15 +75,14 @@ model = dict( ), roi_head=dict( type='CustomKernelIterHead', - num_stages=num_stages, - stage_loss_weights=[1] * num_stages, + num_stages=NUM_STAGES, + stage_loss_weights=[1] * NUM_STAGES, proposal_feature_channel=256, mask_head=[ dict( - # type='CustomKernelUpdateHead', kernel_occlusion_cfg=kernel_occlusion_cfg, apply_kernel_occlusion=True, - num_classes=num_classes, + num_classes=NUM_CLASSES, num_ffn_fcs=2, num_heads=8, num_cls_fcs=1, @@ -99,7 +92,7 @@ model = dict( out_channels=256, dropout=0.0, mask_thr=0.5, - conv_kernel_size=conv_kernel_size, + conv_kernel_size=CONV_KERNEL_SIZE, mask_upsample_stride=2, ffn_act_cfg=dict(type='ReLU', inplace=True), with_ffn=True, @@ -118,8 +111,8 @@ model = dict( loss_cls=dict( type='SigmoidFocalClassificationLoss', loss_weight=2.0), - num_proposals=num_proposals - ) for _ in range(num_stages) + num_proposals=NUM_PROPOSALS + ) for _ in range(NUM_STAGES) ], train_cfg=[ dict( @@ -130,10 +123,10 @@ model = dict( mask_cost=dict(type='MaskCost', weight=1.0, pred_act=True)), sampler=dict(type='MaskPseudoSampler'), - pos_weight=1) for _ in range(num_stages) + pos_weight=1) for _ in range(NUM_STAGES) ], test_cfg=dict( - max_per_img=num_proposals, + 
max_per_img=NUM_PROPOSALS, mask_thr=0.5, merge_stuff_thing=dict( iou_thr=0.5, stuff_max_area=4096, instance_score_thr=0.3)) diff --git a/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py b/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py index 769fa6dc7..8a1b1fa1f 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py @@ -17,6 +17,7 @@ import os + def get_device_id(): device_id = os.getenv('DEVICE_ID', '0') return int(device_id) diff --git a/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py b/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py index bf8df5aff..2e9502ad0 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py @@ -21,7 +21,8 @@ from mindspore import context from mindspore.profiler import Profiler from .configs.config_base import config -_global_sync_count = 0 +_GLOBAL_SYNC_COUNT = 0 + def get_device_id(): device_id = os.getenv('DEVICE_ID', '0') @@ -43,6 +44,7 @@ def get_job_id(): job_id = job_id if job_id != "" else "default" return job_id + def sync_data(from_path, to_path): """ Download data from remote obs to local directory if the first url is remote url and the second one is local path @@ -50,9 +52,9 @@ def sync_data(from_path, to_path): """ import moxing as mox import time - global _global_sync_count - sync_lock = "/tmp/copy_sync.lock" + str(_global_sync_count) - _global_sync_count += 1 + global _GLOBAL_SYNC_COUNT + sync_lock = "/tmp/copy_sync.lock" + str(_GLOBAL_SYNC_COUNT) + _GLOBAL_SYNC_COUNT += 1 # Each server contains 8 devices as most. if get_device_id() % min(get_device_num(), 8) == 0 and not os.path.exists(sync_lock): diff --git a/contrib/Overlap-Recovery/train/src/utils/pth2ckpt.py b/contrib/Overlap-Recovery/train/src/utils/pth2ckpt.py index c8b4d23c3..cb4a9ab59 100644 --- a/contrib/Overlap-Recovery/train/src/utils/pth2ckpt.py +++ b/contrib/Overlap-Recovery/train/src/utils/pth2ckpt.py @@ -13,23 +13,12 @@ # limitations under the License. 
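The `sync_data` change above renames the module-level counter but keeps the lock-file handshake: on each server, only the first device copies data down from remote storage while its peers poll for the lock file. A minimal sketch of that handshake, with a plain `copy_fn` standing in for the moxing copy call and the polling loop assumed from the usual ModelZoo helper:

```python
import os
import time

_GLOBAL_SYNC_COUNT = 0

def sync_once(copy_fn, from_path, to_path, device_id, device_num):
    """One device per server performs the copy; the rest wait on a lock file."""
    global _GLOBAL_SYNC_COUNT
    sync_lock = "/tmp/copy_sync.lock" + str(_GLOBAL_SYNC_COUNT)
    _GLOBAL_SYNC_COUNT += 1
    # Each server contains 8 devices at most, so device 0 of each group copies.
    if device_id % min(device_num, 8) == 0 and not os.path.exists(sync_lock):
        copy_fn(from_path, to_path)
        open(sync_lock, 'w').close()  # publish completion to the other devices
    while not os.path.exists(sync_lock):  # peers block until the copy finishes
        time.sleep(1)
```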
# ============================================================================ -""" -```bash -# 将PyTorch的resnet50预训练模型转化为Mindspore的预训练模型 -# 同时请将src/config.py中的PRETRAINED_RESNET_50改成PTH_PATH -bash scripts/convert_resnet.sh [PTH_PATH] [CKPT_PATH] -# example: bash scripts/convert_resnet.sh resnet50-19c8e357.pth pretrained_resnet50.ckpt -``` -""" - -"""pth --> ckpt""" import argparse import json - +import torch from mindspore import Tensor from mindspore.train.serialization import save_checkpoint -import torch parser = argparse.ArgumentParser(description="trans pth to ckpt") diff --git a/contrib/Overlap-Recovery/train/train.py b/contrib/Overlap-Recovery/train/train.py index f243908e1..eb3c1780c 100644 --- a/contrib/Overlap-Recovery/train/train.py +++ b/contrib/Overlap-Recovery/train/train.py @@ -58,6 +58,7 @@ def load_pretrained_ckpt(net, load_path, device_target): load_param_into_net(net, param_dict) return net + def train_model(): device_target = config.device_target context.set_context(mode=context.PYNATIVE_MODE, device_target=device_target, device_id=get_device_id()) -- Gitee From a520029df267e0ad2913a15e3e44e7a3ed89e1cb Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 22:35:10 +0800 Subject: [PATCH 35/51] reformat inference code --- contrib/Overlap-Recovery/inference/eval.py | 25 +++--- .../Overlap-Recovery/inference/eval_utils.py | 6 +- contrib/Overlap-Recovery/inference/ominfer.py | 23 ++--- .../inference/preprocess_utils.py | 83 ++++++++++--------- 4 files changed, 70 insertions(+), 67 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/eval.py b/contrib/Overlap-Recovery/inference/eval.py index fd316c58f..4cac21ca5 100644 --- a/contrib/Overlap-Recovery/inference/eval.py +++ b/contrib/Overlap-Recovery/inference/eval.py @@ -11,6 +11,10 @@ from load_img_data import load_img_data warnings.filterwarnings('ignore') DEVICE_ID = 1 # 芯片ID +ANN_FILE_PATH = './dataset/annotation.json' # 标签路径 +IMG_PREFIX_PATH = './dataset' # 图片根路径 +SEG_MASK_PREFIX_PATH = './dataset' # mask根路径 +INFER_MODEL_PATH = "models/best_iou.om" # 模型的路径 class OverlapDataset: @@ -94,19 +98,18 @@ def evaluate(ann_file, img_prefix, seg_mask_prefix, model_path): resize_img = np.ascontiguousarray(resize_img) image_tensor = Tensor(resize_img) # 推理前需要转换为tensor的List,使用Tensor类来构建。 image_tensor.to_device(DEVICE_ID) # !!!!!重要,需要转移至device侧,该函数单独执行 - imageTensorList = [image_tensor] # 推理前需要转换为tensor的List + image_tensor_list = [image_tensor] # 推理前需要转换为tensor的List # forward - outputs = model.infer(imageTensorList) + outputs = model.infer(image_tensor_list) # preds Tensor to numpy - outputs_np = [] - for item in outputs: - item = item.to_host() - item = np.array(item) - outputs_np.append(item) + outputs[0].to_host() + outputs[0] = np.array(outputs[0]) + outputs[1].to_host() + outputs[1] = np.array(outputs[1]) - pred_masks, pred_scores = outputs_np[0], outputs_np[1] # (1, 4, h, w), (1, 4, 1) + pred_masks, pred_scores = outputs[0], outputs[1] # (1, 4, h, w), (1, 4, 1) pred_masks, pred_scores = postprocess(pred_masks, pred_scores) # (1, 4, h, w), (1, 4) # remove padding area @@ -139,8 +142,4 @@ def evaluate(ann_file, img_prefix, seg_mask_prefix, model_path): if __name__ == '__main__': - ANN_FILE_PATH = './dataset/annotation.json' # 标签路径 - IMG_PREFIX_PATH = './dataset' # 图片根路径 - SEG_MASK_PREFIX_PATH = './dataset' # mask根路径 - iINFER_MODEL_PATH = "models/best_iou.om" # 模型的路径 - evaluate(ANN_FILE_PATH, IMG_PREFIX_PATH, SEG_MASK_PREFIX_PATH, iINFER_MODEL_PATH) \ No newline at end of file + evaluate(ANN_FILE_PATH, 
IMG_PREFIX_PATH, SEG_MASK_PREFIX_PATH, INFER_MODEL_PATH) \ No newline at end of file diff --git a/contrib/Overlap-Recovery/inference/eval_utils.py b/contrib/Overlap-Recovery/inference/eval_utils.py index dee4985b5..28efdba28 100644 --- a/contrib/Overlap-Recovery/inference/eval_utils.py +++ b/contrib/Overlap-Recovery/inference/eval_utils.py @@ -90,12 +90,12 @@ def eval_func(box_scores, masks, img_meta, score_thresh=0.2, iou_thresh=0.5): if score > score_thresh: valid_idx.append(ins_idx) match_matrix = np.zeros((len(valid_idx), len(gt_masks)), dtype=np.bool) - for ins_idx in range(len(valid_idx)): - for gt_ins_idx in range(len(gt_masks)): + for ins_idx, tmp_valid_idx in enumerate(valid_idx): + for gt_ins_idx, tmp_gt_mask in enumerate(gt_masks): if match_matrix[:, gt_ins_idx].sum() > 0: continue # calculate IoU - if cal_mask_iou(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > iou_thresh: + if cal_mask_iou(masks[0][tmp_valid_idx], tmp_gt_mask) > iou_thresh: match_matrix[ins_idx, gt_ins_idx] = True break # calculate instance-wise mIoU diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py index 7a594615c..759a8e76a 100644 --- a/contrib/Overlap-Recovery/inference/ominfer.py +++ b/contrib/Overlap-Recovery/inference/ominfer.py @@ -16,6 +16,10 @@ warnings.filterwarnings('ignore') DEVICE_ID = 1 # 芯片ID MODEL_PATH = "models/best_iou.om" # 模型的路径 +INFER_IMG_PREFIX = './' +IMG_NAME = 'test.jpg' +SAVE_PATH = './' + def om_infer_one(img_name_path, img_prefix=None, vis_dir=None, score_thr=0.4): @@ -51,16 +55,16 @@ def om_infer_one(img_name_path, img_prefix=None, vis_dir=None, score_thr=0.4): resize_img = np.ascontiguousarray(resize_img) image_tensor = Tensor(resize_img) # 推理前需要转换为tensor的List,使用Tensor类来构建。 image_tensor.to_device(DEVICE_ID) # !!!!!重要,需要转移至device侧,该函数单独执行 - imageTensorList = [image_tensor] # 推理前需要转换为tensor的List - outputs = model.infer(imageTensorList) + image_tensor_list = [image_tensor] # 推理前需要转换为tensor的List + outputs = model.infer(image_tensor_list) - inputs = [] - for item in outputs: - item = item.to_host() - item = np.array(item) - inputs.append(item) + # preds Tensor to numpy + outputs[0].to_host() + outputs[0] = np.array(outputs[0]) + outputs[1].to_host() + outputs[1] = np.array(outputs[1]) - pred_masks, pred_scores = inputs[0], inputs[1] # (1, 4, h, w), (1,4) / (1, 4, 1) + pred_masks, pred_scores = outputs[0], outputs[1] # (1, 4, h, w), (1,4) / (1, 4, 1) pred_masks, pred_scores = postprocess(pred_masks, pred_scores) print(f"pred_masks_shape: {pred_masks.shape} pred_score_shape: {pred_scores.shape}") print(f"original pred unique value: {np.unique(pred_masks)}") @@ -136,8 +140,5 @@ def segm2result(mask_preds, cls_scores): if __name__ == '__main__': - INFER_IMG_PREFIX = './' - IMG_NAME = 'test.jpg' - SAVE_PATH = './' om_infer_one(IMG_NAME, INFER_IMG_PREFIX, vis_dir=SAVE_PATH) diff --git a/contrib/Overlap-Recovery/inference/preprocess_utils.py b/contrib/Overlap-Recovery/inference/preprocess_utils.py index 2847e9f97..367e79f07 100644 --- a/contrib/Overlap-Recovery/inference/preprocess_utils.py +++ b/contrib/Overlap-Recovery/inference/preprocess_utils.py @@ -34,10 +34,12 @@ class LoadImageFromFile: Defaults to ``dict(backend='disk')``. 
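A recurring change in this file is replacing dict-valued defaults such as `file_client_args=dict(backend='disk')` with `None` plus an in-body fallback. The motivation is the usual shared-mutable-default pitfall; a self-contained illustration:

```python
class Loader:
    """Anti-pattern: the default dict is built once and shared by every call."""
    def __init__(self, file_client_args={'backend': 'disk'}):
        self.file_client_args = file_client_args

class SafeLoader:
    """Refactored form used in the patch: default to None, copy a fresh dict."""
    def __init__(self, file_client_args=None):
        file_client_args = file_client_args or dict(backend='disk')
        self.file_client_args = file_client_args.copy()

a, b = Loader(), Loader()
a.file_client_args['backend'] = 'memcached'
print(b.file_client_args['backend'])  # memcached: b was mutated through a

c, d = SafeLoader(), SafeLoader()
c.file_client_args['backend'] = 'memcached'
print(d.file_client_args['backend'])  # disk: instances stay independent
```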
""" - def __init__(self, to_float32=False, color_type='color', channel_order='bgr', file_client_args=dict(backend='disk')): + def __init__(self, to_float32=False, color_type='color', + channel_order='bgr', file_client_args=None): self.to_float32 = to_float32 self.color_type = color_type self.channel_order = channel_order + file_client_args = file_client_args or dict(backend='disk') self.file_client_args = file_client_args.copy() self.file_client = None @@ -284,8 +286,8 @@ class Resize: Defaults to False. """ - def __init__(self, img_scale=None, multiscale_mode='range', ratio_range=None, keep_ratio=True, bbox_clip_border=True, - backend='cv2', interpolation='bilinear', override=False): + def __init__(self, img_scale=None, multiscale_mode='range', ratio_range=None, + keep_ratio=True, bbox_clip_border=True, backend='cv2', interpolation='bilinear', override=False): if img_scale is None: self.img_scale = None else: @@ -458,16 +460,6 @@ class Resize: bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) results[key] = bboxes - def _resize_masks(self, results): - """Resize masks with ``results['scale']``""" - for key in results.get('mask_fields', []): - if results[key] is None: - continue - if self.keep_ratio: - results[key] = results[key].rescale(results['scale']) - else: - results[key] = results[key].resize(results['img_shape'][:2]) - def __call__(self, results): """Call function to resize images, bounding boxes, masks, semantic segmentation map. @@ -505,6 +497,25 @@ class Resize: self._resize_seg(results) return results + def _resize_masks(self, results): + """Resize masks with ``results['scale']``""" + for key in results.get('mask_fields', []): + if results[key] is None: + continue + if self.keep_ratio: + results[key] = results[key].rescale(results['scale']) + else: + results[key] = results[key].resize(results['img_shape'][:2]) + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(img_scale={self.img_scale}, ' + repr_str += f'multiscale_mode={self.multiscale_mode}, ' + repr_str += f'ratio_range={self.ratio_range}, ' + repr_str += f'keep_ratio={self.keep_ratio}, ' + repr_str += f'bbox_clip_border={self.bbox_clip_border})' + return repr_str + def _resize_seg(self, results): """Resize semantic segmentation map with ``results['scale']``.""" for key in results.get('seg_fields', []): @@ -522,15 +533,6 @@ class Resize: backend=self.backend) results[key] = gt_seg - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'multiscale_mode={self.multiscale_mode}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'keep_ratio={self.keep_ratio}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - @PIPELINES.register_module() class RandomFlip: @@ -651,6 +653,9 @@ class RandomFlip: results[key], direction=results['flip_direction']) return results + def __repr__(self): + return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' + def bbox_flip(self, bboxes, img_shape, direction): """Flip bboxes horizontally. @@ -685,9 +690,6 @@ class RandomFlip: raise ValueError(f"Invalid flipping direction '{direction}'") return flipped - def __repr__(self): - return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' - @PIPELINES.register_module() class Pad: @@ -706,9 +708,10 @@ class Pad: value is `dict(img=0, masks=0, seg=255)`. 
""" - def __init__(self, size=None, size_divisor=None, pad_to_square=False, pad_val=dict(img=0, masks=0, seg=255)): + def __init__(self, size=None, size_divisor=None, pad_to_square=False, pad_val=None): self.size = size self.size_divisor = size_divisor + pad_val = pad_val or dict(img=0, masks=0, seg=255) if isinstance(pad_val, float) or isinstance(pad_val, int): warnings.warn( 'pad_val of float type is deprecated now, ' @@ -746,13 +749,6 @@ class Pad: results['pad_fixed_size'] = self.size results['pad_size_divisor'] = self.size_divisor - def _pad_masks(self, results): - """Pad masks according to ``results['pad_shape']``.""" - pad_shape = results['pad_shape'][:2] - pad_val = self.pad_val.get('masks', 0) - for key in results.get('mask_fields', []): - results[key] = results[key].pad(pad_shape, pad_val=pad_val) - def __call__(self, results): """Call function to pad images, masks, semantic segmentation maps. @@ -767,13 +763,12 @@ class Pad: self._pad_seg(results) return results - def _pad_seg(self, results): - """Pad semantic segmentation map according to - ``results['pad_shape']``.""" - pad_val = self.pad_val.get('seg', 255) - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], shape=results['pad_shape'][:2], pad_val=pad_val) + def _pad_masks(self, results): + """Pad masks according to ``results['pad_shape']``.""" + pad_shape = results['pad_shape'][:2] + pad_val = self.pad_val.get('masks', 0) + for key in results.get('mask_fields', []): + results[key] = results[key].pad(pad_shape, pad_val=pad_val) def __repr__(self): repr_str = self.__class__.__name__ @@ -783,6 +778,14 @@ class Pad: repr_str += f'pad_val={self.pad_val})' return repr_str + def _pad_seg(self, results): + """Pad semantic segmentation map according to + ``results['pad_shape']``.""" + pad_val = self.pad_val.get('seg', 255) + for key in results.get('seg_fields', []): + results[key] = mmcv.impad( + results[key], shape=results['pad_shape'][:2], pad_val=pad_val) + @PIPELINES.register_module() class Normalize: -- Gitee From f1f98d30ea808eadfa6499a94782b95a5c9b7a74 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 23:25:09 +0800 Subject: [PATCH 36/51] reformat inference code --- .../inference/preprocess_utils.py | 160 +++++++++--------- 1 file changed, 77 insertions(+), 83 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/preprocess_utils.py b/contrib/Overlap-Recovery/inference/preprocess_utils.py index 367e79f07..320283090 100644 --- a/contrib/Overlap-Recovery/inference/preprocess_utils.py +++ b/contrib/Overlap-Recovery/inference/preprocess_utils.py @@ -178,7 +178,8 @@ class MultiScaleFlipAug: "horizontal". 
""" - def __init__(self, transforms, img_scale=None, scale_factor=None, flip=False, flip_direction='horizontal'): + def __init__(self, transforms, img_scale=None, scale_factor=None, flip=False): + flip_direction='horizontal' self.transforms = Compose(transforms) assert (img_scale is None) ^ (scale_factor is None), ( 'Must have but only one variable can be set') @@ -287,7 +288,11 @@ class Resize: """ def __init__(self, img_scale=None, multiscale_mode='range', ratio_range=None, - keep_ratio=True, bbox_clip_border=True, backend='cv2', interpolation='bilinear', override=False): + keep_ratio=True): + bbox_clip_border = True + backend = 'cv2' + interpolation = 'bilinear' + override = False if img_scale is None: self.img_scale = None else: @@ -417,6 +422,52 @@ class Resize: results['scale'] = scale results['scale_idx'] = scale_idx + def __call__(self, results): + """Call function to resize images, bounding boxes, masks, semantic + segmentation map. + + Args: + results (dict): Result dict from loading pipeline. + + Returns: + dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ + 'keep_ratio' keys are added into result dict. + """ + + if 'scale' not in results: + if 'scale_factor' in results: + img_shape = results['img'].shape[:2] + scale_factor = results['scale_factor'] + assert isinstance(scale_factor, float) + results['scale'] = tuple( + [int(x * scale_factor) for x in img_shape][::-1]) + else: + self._random_scale(results) + else: + if not self.override: + assert 'scale_factor' not in results, ( + 'scale and scale_factor cannot be both set.') + else: + results.pop('scale') + if 'scale_factor' in results: + results.pop('scale_factor') + self._random_scale(results) + + self._resize_img(results) + self._resize_bboxes(results) + self._resize_masks(results) + self._resize_seg(results) + return results + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(img_scale={self.img_scale}, ' + repr_str += f'multiscale_mode={self.multiscale_mode}, ' + repr_str += f'ratio_range={self.ratio_range}, ' + repr_str += f'keep_ratio={self.keep_ratio}, ' + repr_str += f'bbox_clip_border={self.bbox_clip_border})' + return repr_str + def _resize_img(self, results): """Resize images with ``results['scale']``.""" for key in results.get('img_fields', ['img']): @@ -460,43 +511,6 @@ class Resize: bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) results[key] = bboxes - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ - 'keep_ratio' keys are added into result dict. 
- """ - - if 'scale' not in results: - if 'scale_factor' in results: - img_shape = results['img'].shape[:2] - scale_factor = results['scale_factor'] - assert isinstance(scale_factor, float) - results['scale'] = tuple( - [int(x * scale_factor) for x in img_shape][::-1]) - else: - self._random_scale(results) - else: - if not self.override: - assert 'scale_factor' not in results, ( - 'scale and scale_factor cannot be both set.') - else: - results.pop('scale') - if 'scale_factor' in results: - results.pop('scale_factor') - self._random_scale(results) - - self._resize_img(results) - self._resize_bboxes(results) - self._resize_masks(results) - self._resize_seg(results) - return results - def _resize_masks(self, results): """Resize masks with ``results['scale']``""" for key in results.get('mask_fields', []): @@ -507,15 +521,6 @@ class Resize: else: results[key] = results[key].resize(results['img_shape'][:2]) - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'multiscale_mode={self.multiscale_mode}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'keep_ratio={self.keep_ratio}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - def _resize_seg(self, results): """Resize semantic segmentation map with ``results['scale']``.""" for key in results.get('seg_fields', []): @@ -656,19 +661,8 @@ class RandomFlip: def __repr__(self): return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' - def bbox_flip(self, bboxes, img_shape, direction): - """Flip bboxes horizontally. - - Args: - bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) - img_shape (tuple[int]): Image shape (height, width) - direction (str): Flip direction. Options are 'horizontal', - 'vertical'. - - Returns: - numpy.ndarray: Flipped bounding boxes. - """ - + @staticmethod + def bbox_flip(bboxes, img_shape, direction): assert bboxes.shape[-1] % 4 == 0 flipped = bboxes.copy() if direction == 'horizontal': @@ -731,6 +725,28 @@ class Pad: 'only one of size and size_divisor should be valid' assert size is None or size_divisor is None + def __call__(self, results): + """Call function to pad images, masks, semantic segmentation maps. + + Args: + results (dict): Result dict from loading pipeline. + + Returns: + dict: Updated result dict. + """ + self._pad_img(results) + self._pad_masks(results) + self._pad_seg(results) + return results + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(size={self.size}, ' + repr_str += f'size_divisor={self.size_divisor}, ' + repr_str += f'pad_to_square={self.pad_to_square}, ' + repr_str += f'pad_val={self.pad_val})' + return repr_str + def _pad_img(self, results): """Pad images according to ``self.size``.""" pad_val = self.pad_val.get('img', 0) @@ -749,20 +765,6 @@ class Pad: results['pad_fixed_size'] = self.size results['pad_size_divisor'] = self.size_divisor - def __call__(self, results): - """Call function to pad images, masks, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Updated result dict. 
- """ - self._pad_img(results) - self._pad_masks(results) - self._pad_seg(results) - return results - def _pad_masks(self, results): """Pad masks according to ``results['pad_shape']``.""" pad_shape = results['pad_shape'][:2] @@ -770,14 +772,6 @@ class Pad: for key in results.get('mask_fields', []): results[key] = results[key].pad(pad_shape, pad_val=pad_val) - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(size={self.size}, ' - repr_str += f'size_divisor={self.size_divisor}, ' - repr_str += f'pad_to_square={self.pad_to_square}, ' - repr_str += f'pad_val={self.pad_val})' - return repr_str - def _pad_seg(self, results): """Pad semantic segmentation map according to ``results['pad_shape']``.""" -- Gitee From 5b7739f2b1bfcdec52dbe9356ddacbc65ab349d1 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 23:43:25 +0800 Subject: [PATCH 37/51] reformat inference code --- contrib/Overlap-Recovery/inference/ominfer.py | 3 - .../inference/preprocess_utils.py | 71 +++++++++---------- 2 files changed, 34 insertions(+), 40 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py index 759a8e76a..5a265c065 100644 --- a/contrib/Overlap-Recovery/inference/ominfer.py +++ b/contrib/Overlap-Recovery/inference/ominfer.py @@ -1,7 +1,4 @@ # -*- coding: utf-8 -*- -# @Author: Wenwen Yu -# @Email: yuwenwen62@gmail.com -# @Created Time: 11/29/22 11:20 AM import os import shutil diff --git a/contrib/Overlap-Recovery/inference/preprocess_utils.py b/contrib/Overlap-Recovery/inference/preprocess_utils.py index 320283090..dab7c36eb 100644 --- a/contrib/Overlap-Recovery/inference/preprocess_utils.py +++ b/contrib/Overlap-Recovery/inference/preprocess_utils.py @@ -1,7 +1,4 @@ # -*- coding: utf-8 -*- -# @Author: Wenwen Yu -# @Email: yduwenwen62@gmail.com -# @Created Time: 11/29/22 12:12 PM import collections import warnings @@ -179,7 +176,7 @@ class MultiScaleFlipAug: """ def __init__(self, transforms, img_scale=None, scale_factor=None, flip=False): - flip_direction='horizontal' + flip_direction = 'horizontal' self.transforms = Compose(transforms) assert (img_scale is None) ^ (scale_factor is None), ( 'Must have but only one variable can be set') @@ -389,39 +386,6 @@ class Resize: scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) return scale, None - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. - If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. - Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into \ - ``results``, which would be used by subsequent pipelines. - """ - - if self.ratio_range is not None: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - def __call__(self, results): """Call function to resize images, bounding boxes, masks, semantic segmentation map. 
@@ -468,6 +432,39 @@ class Resize: repr_str += f'bbox_clip_border={self.bbox_clip_border})' return repr_str + def _random_scale(self, results): + """Randomly sample an img_scale according to ``ratio_range`` and + ``multiscale_mode``. + + If ``ratio_range`` is specified, a ratio will be sampled and be + multiplied with ``img_scale``. + If multiple scales are specified by ``img_scale``, a scale will be + sampled according to ``multiscale_mode``. + Otherwise, single scale will be used. + + Args: + results (dict): Result dict from :obj:`dataset`. + + Returns: + dict: Two new keys 'scale` and 'scale_idx` are added into \ + ``results``, which would be used by subsequent pipelines. + """ + + if self.ratio_range is not None: + scale, scale_idx = self.random_sample_ratio( + self.img_scale[0], self.ratio_range) + elif len(self.img_scale) == 1: + scale, scale_idx = self.img_scale[0], 0 + elif self.multiscale_mode == 'range': + scale, scale_idx = self.random_sample(self.img_scale) + elif self.multiscale_mode == 'value': + scale, scale_idx = self.random_select(self.img_scale) + else: + raise NotImplementedError + + results['scale'] = scale + results['scale_idx'] = scale_idx + def _resize_img(self, results): """Resize images with ``results['scale']``.""" for key in results.get('img_fields', ['img']): -- Gitee From 69ebb0ab7145e4517b8ba870aade2cedd5a78465 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Tue, 13 Dec 2022 23:56:08 +0800 Subject: [PATCH 38/51] reformat inference code --- .../inference/preprocess_utils.py | 92 +++++++++---------- 1 file changed, 46 insertions(+), 46 deletions(-) diff --git a/contrib/Overlap-Recovery/inference/preprocess_utils.py b/contrib/Overlap-Recovery/inference/preprocess_utils.py index dab7c36eb..29bd7d9ed 100644 --- a/contrib/Overlap-Recovery/inference/preprocess_utils.py +++ b/contrib/Overlap-Recovery/inference/preprocess_utils.py @@ -314,6 +314,52 @@ class Resize: self.override = override self.bbox_clip_border = bbox_clip_border + def __call__(self, results): + """Call function to resize images, bounding boxes, masks, semantic + segmentation map. + + Args: + results (dict): Result dict from loading pipeline. + + Returns: + dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ + 'keep_ratio' keys are added into result dict. + """ + + if 'scale' not in results: + if 'scale_factor' in results: + img_shape = results['img'].shape[:2] + scale_factor = results['scale_factor'] + assert isinstance(scale_factor, float) + results['scale'] = tuple( + [int(x * scale_factor) for x in img_shape][::-1]) + else: + self._random_scale(results) + else: + if not self.override: + assert 'scale_factor' not in results, ( + 'scale and scale_factor cannot be both set.') + else: + results.pop('scale') + if 'scale_factor' in results: + results.pop('scale_factor') + self._random_scale(results) + + self._resize_img(results) + self._resize_bboxes(results) + self._resize_masks(results) + self._resize_seg(results) + return results + + def __repr__(self): + repr_str = self.__class__.__name__ + repr_str += f'(img_scale={self.img_scale}, ' + repr_str += f'multiscale_mode={self.multiscale_mode}, ' + repr_str += f'ratio_range={self.ratio_range}, ' + repr_str += f'keep_ratio={self.keep_ratio}, ' + repr_str += f'bbox_clip_border={self.bbox_clip_border})' + return repr_str + @staticmethod def random_select(img_scales): """Randomly select an img_scale from given candidates. 
@@ -386,52 +432,6 @@ class Resize: scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) return scale, None - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ - 'keep_ratio' keys are added into result dict. - """ - - if 'scale' not in results: - if 'scale_factor' in results: - img_shape = results['img'].shape[:2] - scale_factor = results['scale_factor'] - assert isinstance(scale_factor, float) - results['scale'] = tuple( - [int(x * scale_factor) for x in img_shape][::-1]) - else: - self._random_scale(results) - else: - if not self.override: - assert 'scale_factor' not in results, ( - 'scale and scale_factor cannot be both set.') - else: - results.pop('scale') - if 'scale_factor' in results: - results.pop('scale_factor') - self._random_scale(results) - - self._resize_img(results) - self._resize_bboxes(results) - self._resize_masks(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'multiscale_mode={self.multiscale_mode}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'keep_ratio={self.keep_ratio}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - def _random_scale(self, results): """Randomly sample an img_scale according to ``ratio_range`` and ``multiscale_mode``. -- Gitee From 51cb9353cf63a93c41e7b5561df91cd57ec2e42d Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Wed, 14 Dec 2022 00:16:50 +0800 Subject: [PATCH 39/51] reformat inference code --- contrib/Overlap-Recovery/inference/eval.py | 15 +++++++++++++++ contrib/Overlap-Recovery/inference/ominfer.py | 15 +++++++++++++++ 2 files changed, 30 insertions(+) diff --git a/contrib/Overlap-Recovery/inference/eval.py b/contrib/Overlap-Recovery/inference/eval.py index 4cac21ca5..74c81438d 100644 --- a/contrib/Overlap-Recovery/inference/eval.py +++ b/contrib/Overlap-Recovery/inference/eval.py @@ -1,5 +1,20 @@ # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import warnings from PIL import Image import numpy as np diff --git a/contrib/Overlap-Recovery/inference/ominfer.py b/contrib/Overlap-Recovery/inference/ominfer.py index 5a265c065..b55c85249 100644 --- a/contrib/Overlap-Recovery/inference/ominfer.py +++ b/contrib/Overlap-Recovery/inference/ominfer.py @@ -1,5 +1,20 @@ # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import os import shutil import warnings -- Gitee From 06252331a9a4e7e584d3962478bf8f83d668b77c Mon Sep 17 00:00:00 2001 From: HamPerdredes Date: Wed, 14 Dec 2022 02:30:54 +0800 Subject: [PATCH 40/51] clean code --- .../train/src/dataset/base_dataset.py | 33 +-- .../train/src/dataset/build_dataset.py | 4 +- .../train/src/dataset/data_process.py | 33 +-- .../train/src/dataset/real_dataset.py | 48 +-- .../train/src/dataset/synth_dataset.py | 48 +-- .../train/src/dataset/utils.py | 47 +-- .../train/src/deoccluder/deoccluder_r50.py | 19 +- .../deoccluder/roi/custom_kernel_iter_head.py | 90 +++--- .../roi/custom_kernel_update_head.py | 7 +- .../src/deoccluder/roi/kernel_update_head.py | 62 ++-- .../src/deoccluder/roi/kernel_updator.py | 10 +- .../train/src/deoccluder/rpn/kernel_head.py | 278 +++++++++--------- .../src/deoccluder/rpn/positional_encoding.py | 34 +-- .../deoccluder/rpn/semantic_fpn_wrapper.py | 24 +- .../src/model_utils/configs/config_base.py | 3 - .../src/model_utils/configs/config_model.py | 1 - 16 files changed, 351 insertions(+), 390 deletions(-) diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py index 7dd9288ca..b8cb27d27 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py @@ -35,31 +35,22 @@ class CustomDataset: Args: ann_file (str): Annotation file path. pipeline (list[dict]): Processing pipeline. - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Default: None. - data_root (str, optional): Data root for ``ann_file``, - ``img_prefix``, ``seg_prefix`` if specified. test_mode (bool, optional): If set True, annotation will not be loaded. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes of the dataset's classes will be filtered out. This option - only works when `test_mode=False`, i.e., we never filter images - during tests. 
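The `__getitem__`/`_rand_another` pairing that survives this cleanup implements a retry loop: when the pipeline rejects a training sample by returning `None`, the dataset resamples a random index from the same aspect-ratio group instead of raising. Condensed into a runnable sketch, with preloaded samples standing in for the real annotation pipeline:

```python
import numpy as np

class RetryDataset:
    """Resample from the same aspect-ratio group when a sample is rejected."""
    def __init__(self, samples, flags):
        self.samples = samples           # stand-in for annotated images
        self.flag = np.asarray(flags)    # aspect-ratio group id per sample

    def _rand_another(self, idx):
        pool = np.where(self.flag == self.flag[idx])[0]
        return np.random.choice(pool)

    def prepare_train_img(self, idx):
        return self.samples[idx]         # the real class runs the pipeline here

    def __getitem__(self, idx):
        while True:
            data = self.prepare_train_img(idx)
            if data is None:             # e.g. every GT box was filtered out
                idx = self._rand_another(idx)
                continue
            return data

ds = RetryDataset([None, {'img': 'ok'}], flags=[0, 0])
print(ds[0])  # retries until it lands on index 1: {'img': 'ok'}
```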
""" CLASSES = None PALETTE = None - def __init__(self, ann_file, pipeline, classes=None, data_root=None, img_prefix='', - seg_prefix=None, seg_suffix='.png', test_mode=False, filter_empty_gt=True): + def __init__(self, ann_file, pipeline, img_prefix='', test_mode=False): self.ann_file = ann_file - self.data_root = data_root + self.data_root = None self.img_prefix = img_prefix - self.seg_prefix = seg_prefix - self.seg_suffix = seg_suffix + self.seg_prefix = img_prefix + self.seg_suffix = '.png' self.test_mode = test_mode - self.filter_empty_gt = filter_empty_gt - self.CLASSES = self.GetClasses(classes) + self.filter_empty_gt = True + self.CLASSES = self.get_classes(None) # join paths if data_root is specified if self.data_root is not None: @@ -155,11 +146,6 @@ class CustomDataset: if img_info['width'] / img_info['height'] > 1: self.flag[i] = 1 - def _rand_another(self, idx): - """Get another random index from the same group as the given index.""" - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - def __getitem__(self, idx): """Get training/test data after pipeline. @@ -180,6 +166,11 @@ class CustomDataset: continue return data + def _rand_another(self, idx): + """Get another random index from the same group as the given index.""" + pool = np.where(self.flag == self.flag[idx])[0] + return np.random.choice(pool) + def prepare_train_img(self, idx): """Get training data and annotations after pipeline. @@ -218,7 +209,7 @@ class CustomDataset: return self.pipeline(results) @classmethod - def GetClasses(cls, classes=None): + def get_classes(cls, classes=None): """Get class names of current dataset. Args: diff --git a/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py index 80122a961..c5c95e98c 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py @@ -17,5 +17,5 @@ def build_dataset(cfg): raise KeyError(f"Not support dataset type: {data_type}") try: return CUSTOM_DATASETS[data_type](**cfg) - except Exception as e: - raise RuntimeError(e) + except KeyError: + raise RuntimeError(KeyError) diff --git a/contrib/Overlap-Recovery/train/src/dataset/data_process.py b/contrib/Overlap-Recovery/train/src/dataset/data_process.py index 3a4fec369..a21c1fb62 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/data_process.py +++ b/contrib/Overlap-Recovery/train/src/dataset/data_process.py @@ -138,20 +138,12 @@ class CustomLoadAnnotations: return results - def _load_labels(self, results): + @staticmethod + def _load_labels(results): results['gt_labels'] = results['ann_info']['labels'].copy() results['text_labels'] = results['ann_info']['text_labels'].copy() return results - def _load_masks(self, results): - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = [cv2.imread(_, cv2.IMREAD_UNCHANGED) for _ in results['ann_info']['masks']] - gt_masks = [mask // 255 for mask in gt_masks] - gt_masks = BitmapMasks(gt_masks, h, w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - return results - def __call__(self, results): if self.with_bbox: results = self._load_bboxes(results) @@ -163,6 +155,16 @@ class CustomLoadAnnotations: results = self._load_masks(results) return results + @staticmethod + def _load_masks(results): + h, w = results['img_info']['height'], results['img_info']['width'] + gt_masks = [cv2.imread(_, cv2.IMREAD_UNCHANGED) for _ in results['ann_info']['masks']] + 
gt_masks = [mask // 255 for mask in gt_masks] + gt_masks = BitmapMasks(gt_masks, h, w) + results['gt_masks'] = gt_masks + results['mask_fields'].append('gt_masks') + return results + def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(with_bbox={self.with_bbox}, ' @@ -176,23 +178,20 @@ class Resize: def __init__(self, img_scale, - multiscale_mode='range', keep_ratio=True, - bbox_clip_border=True, - interpolation='bilinear', - override=False): + interpolation='bilinear'): if isinstance(img_scale, list): self.img_scale = img_scale else: self.img_scale = [img_scale] - + multiscale_mode = 'range' assert multiscale_mode in ['value', 'range'] self.multiscale_mode = multiscale_mode self.keep_ratio = keep_ratio self.interpolation = interpolation - self.override = override - self.bbox_clip_border = bbox_clip_border + self.override = False + self.bbox_clip_border = True def _random_scale(self, results): if len(self.img_scale) == 1: diff --git a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py index 1b809c997..8bbc31cd5 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py @@ -11,7 +11,7 @@ import numpy as np import imagesize from .base_dataset import CustomDataset -from .utils import CalMaskIou, CalOverlapMask, CalUnionMask +from .utils import cal_mask_iou, cal_overlap_mask, cal_union_mask class RealOverlapDataset(CustomDataset): @@ -40,8 +40,8 @@ class RealOverlapDataset(CustomDataset): data_info['filename'] = img_name try: width, height = imagesize.get(data_info['img_path']) - except Exception as e: - raise RuntimeError(e) + except KeyError: + raise RuntimeError(KeyError) data_info['width'] = width data_info['height'] = height seg_map_path = [] @@ -74,17 +74,6 @@ class RealOverlapDataset(CustomDataset): ) return ann - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: - if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - def vis_result(self, img_idx, scores, masks, vis_dir='/home/whua/vis'): if not os.path.exists(vis_dir): os.mkdir(vis_dir) @@ -104,11 +93,22 @@ class RealOverlapDataset(CustomDataset): canvas[masks[ins_idx]] = img[masks[ins_idx]] cv2.imwrite(os.path.join(vis_dir, save_name), canvas) + def _filter_imgs(self, min_size=32): + """Filter images too small or without ground truths.""" + valid_inds = [] + for i, img_info in enumerate(self.data_infos): + if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: + if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: + continue + if min(img_info['width'], img_info['height']) >= min_size: + valid_inds.append(i) + return valid_inds + def eval_func(self, idx, box_scores, masks): # prepare gt ~ hard code gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in self.data_infos[idx]['seg_map_path']] - gt_text = CalUnionMask(gt_masks) - gt_overlap = CalOverlapMask(gt_masks) + gt_text = cal_union_mask(gt_masks) + gt_overlap = cal_overlap_mask(gt_masks) # prepare predict of overlap and text area box_info = box_scores[0] if len(box_info) < 2: @@ -126,10 +126,10 @@ class RealOverlapDataset(CustomDataset): pred_text = 
np.zeros_like(masks[0][0]) elif len(pred_masks) == 1: pred_overlap = np.zeros_like(masks[0][0]) - pred_text = CalUnionMask(pred_masks) + pred_text = cal_union_mask(pred_masks) else: - pred_overlap = CalOverlapMask(pred_masks) - pred_text = CalUnionMask(pred_masks) + pred_overlap = cal_overlap_mask(pred_masks) + pred_text = cal_union_mask(pred_masks) if len(gt_masks) > 1: # calculate metrics intersection_text = (pred_text & gt_text).sum() @@ -149,12 +149,14 @@ class RealOverlapDataset(CustomDataset): if box_[-1] > self.score_thresh: valid_idx.append(ins_idx) match_matrix = np.zeros((len(valid_idx), len(gt_masks)), dtype=np.bool) - for ins_idx in range(len(valid_idx)): - for gt_ins_idx in range(len(gt_masks)): + num_valid = len(valid_idx) + num_gt_masks = len(gt_masks) + for ins_idx in range(num_valid): + for gt_ins_idx in range(num_gt_masks): if match_matrix[:, gt_ins_idx].sum() > 0: continue # calculate IoU - if CalMaskIou(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > self.iou_thresh: + if cal_mask_iou(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > self.iou_thresh: match_matrix[ins_idx, gt_ins_idx] = True break # calculate instance-wise mIoU @@ -172,7 +174,7 @@ class RealOverlapDataset(CustomDataset): pred_mask = masks[0][valid_idx[ins_idx]].astype(np.bool) gt_idx = match_matrix[ins_idx].nonzero()[0][0] gt_mask = gt_masks[gt_idx].copy() - cur_iou = CalMaskIou(pred_mask, gt_mask) + cur_iou = cal_mask_iou(pred_mask, gt_mask) text_ins_miou += cur_iou return (intersection_text, union_text, intersection_overlap, union_overlap), \ text_ins_miou, max(match_matrix.shape) diff --git a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py index be4c5b8b9..b81fc1fa4 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py @@ -11,7 +11,7 @@ import numpy as np import imagesize from .base_dataset import CustomDataset -from .utils import CalMaskIou, CalOverlapMask, CalUnionMask +from .utils import cal_mask_iou, cal_overlap_mask, cal_union_mask class SynthOverlapDataset(CustomDataset): @@ -40,8 +40,8 @@ class SynthOverlapDataset(CustomDataset): data_info['filename'] = img_name try: width, height = imagesize.get(data_info['img_path']) - except Exception as e: - raise RuntimeError(e) + except KeyError: + raise RuntimeError(KeyError) data_info['width'] = width data_info['height'] = height seg_map_path = [] @@ -74,17 +74,6 @@ class SynthOverlapDataset(CustomDataset): ) return ann - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: - if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - def vis_result(self, img_idx, scores, masks, vis_dir='/home/whua/vis'): if not os.path.exists(vis_dir): os.mkdir(vis_dir) @@ -104,11 +93,22 @@ class SynthOverlapDataset(CustomDataset): canvas[masks[ins_idx]] = img[masks[ins_idx]] cv2.imwrite(os.path.join(vis_dir, save_name), canvas) + def _filter_imgs(self, min_size=32): + """Filter images too small or without ground truths.""" + valid_inds = [] + for i, img_info in enumerate(self.data_infos): + if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: + if len(img_info['seg_map_path']) == 0 or 
len(img_info['text_labels']) == 0: + continue + if min(img_info['width'], img_info['height']) >= min_size: + valid_inds.append(i) + return valid_inds + def eval_func(self, idx, box_scores, masks): # prepare gt ~ hard code gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in self.data_infos[idx]['seg_map_path']] - gt_text = CalUnionMask(gt_masks) - gt_overlap = CalOverlapMask(gt_masks) + gt_text = cal_union_mask(gt_masks) + gt_overlap = cal_overlap_mask(gt_masks) # prepare predict of overlap and text area box_info = box_scores[0] if len(box_info) < 2: @@ -126,10 +126,10 @@ class SynthOverlapDataset(CustomDataset): pred_text = np.zeros_like(masks[0][0]) elif len(pred_masks) == 1: pred_overlap = np.zeros_like(masks[0][0]) - pred_text = CalUnionMask(pred_masks) + pred_text = cal_union_mask(pred_masks) else: - pred_overlap = CalOverlapMask(pred_masks) - pred_text = CalUnionMask(pred_masks) + pred_overlap = cal_overlap_mask(pred_masks) + pred_text = cal_union_mask(pred_masks) if len(gt_masks) > 1: # calculate metrics intersection_text = (pred_text & gt_text).sum() @@ -149,12 +149,14 @@ class SynthOverlapDataset(CustomDataset): if box_[-1] > self.score_thresh: valid_idx.append(ins_idx) match_matrix = np.zeros((len(valid_idx), len(gt_masks)), dtype=np.bool) - for ins_idx in range(len(valid_idx)): - for gt_ins_idx in range(len(gt_masks)): + num_valid = len(valid_idx) + num_gt_masks = len(gt_masks) + for ins_idx in range(num_valid): + for gt_ins_idx in range(num_gt_masks): if match_matrix[:, gt_ins_idx].sum() > 0: continue # calculate IoU - if CalMaskIou(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > self.iou_thresh: + if cal_mask_iou(masks[0][valid_idx[ins_idx]], gt_masks[gt_ins_idx]) > self.iou_thresh: match_matrix[ins_idx, gt_ins_idx] = True break # calculate instance-wise mIoU @@ -172,7 +174,7 @@ class SynthOverlapDataset(CustomDataset): pred_mask = masks[0][valid_idx[ins_idx]].astype(np.bool) gt_idx = match_matrix[ins_idx].nonzero()[0][0] gt_mask = gt_masks[gt_idx].copy() - cur_iou = CalMaskIou(pred_mask, gt_mask) + cur_iou = cal_mask_iou(pred_mask, gt_mask) text_ins_miou += cur_iou return (intersection_text, union_text, intersection_overlap, union_overlap), \ text_ins_miou, max(match_matrix.shape) diff --git a/contrib/Overlap-Recovery/train/src/dataset/utils.py b/contrib/Overlap-Recovery/train/src/dataset/utils.py index 3b16f2824..4f130da99 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/utils.py +++ b/contrib/Overlap-Recovery/train/src/dataset/utils.py @@ -6,8 +6,7 @@ import mmcv import mindspore as ms -# def cal_mask_IoU(mask_a, mask_b, check_valid=False): -def CalMaskIou(mask_a, mask_b, check_valid=False): +def cal_mask_iou(mask_a, mask_b, check_valid=False): if check_valid: assert len(np.unique(mask_a)) <= 2 assert len(np.unique(mask_b)) <= 2 @@ -20,8 +19,8 @@ def CalMaskIou(mask_a, mask_b, check_valid=False): return intersection_area / union_area -# def cal_overlap_mask(mask_list): -def CalOverlapMask(mask_list): +# def CalOverlapMask(mask_list): +def cal_overlap_mask(mask_list): if len(mask_list) < 2: return None mask_list_bool = [x.astype(np.bool) for x in mask_list] @@ -33,8 +32,8 @@ def CalOverlapMask(mask_list): return overlap_mask -# def cal_union_mask(mask_list): -def CalUnionMask(mask_list): +# def CalUnionMask(mask_list): +def cal_union_mask(mask_list): if len(mask_list) < 1: return None mask_list_bool = [x.astype(np.bool) for x in mask_list] @@ -172,42 +171,6 @@ class BitmapMasks: cropped_masks = self.masks[:, y1:y1 + h, x1:x1 + w] return 
BitmapMasks(cropped_masks, h, w) - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device='cpu', - interpolation='bilinear', - binarize=True): - """See :func:`BaseInstanceMasks.crop_and_resize`.""" - if len(self.masks) == 0: - empty_masks = np.empty((0, *out_shape), dtype=np.uint8) - return BitmapMasks(empty_masks, *out_shape) - - # convert bboxes to tensor - if isinstance(bboxes, np.ndarray): - bboxes = torch.from_numpy(bboxes).to(device=device) - if isinstance(inds, np.ndarray): - inds = torch.from_numpy(inds).to(device=device) - - num_bbox = bboxes.shape[0] - fake_inds = torch.arange( - num_bbox, device=device).to(dtype=bboxes.dtype)[:, None] - rois = torch.cat([fake_inds, bboxes], dim=1) # Nx5 - rois = rois.to(device=device) - if num_bbox > 0: - gt_masks_th = torch.from_numpy(self.masks).to(device).index_select( - 0, inds).to(dtype=rois.dtype) - targets = roi_align(gt_masks_th[:, None, :, :], rois, out_shape, - 1.0, 0, 'avg', True).squeeze(1) - if binarize: - resized_masks = (targets >= 0.5).cpu().numpy() - else: - resized_masks = targets.cpu().numpy() - else: - resized_masks = [] - return BitmapMasks(resized_masks, *out_shape) - def expand(self, expanded_h, expanded_w, top, left): """See :func:`BaseInstanceMasks.expand`.""" if len(self.masks) == 0: diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py index a535d1609..d80006a32 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py @@ -105,9 +105,11 @@ class CustomKNet(nn.Cell): align_corners=False)[0]) gt_masks = gt_masks_tensor x = self.extract_feat(img) - rpn_results = self.rpn_head.forward_train(x, img_metas, gt_masks, - gt_labels, gt_sem_seg, - gt_sem_cls) + rpn_results = self.rpn_head.forward_train(x, gt_masks, + gt_labels, + img_metas=img_metas, + gt_sem_seg=gt_sem_seg, + gt_sem_cls=gt_sem_cls) (rpn_losses, proposal_feats, x_feats, mask_preds, cls_scores) = rpn_results @@ -115,15 +117,16 @@ class CustomKNet(nn.Cell): x_feats, proposal_feats, mask_preds, - cls_scores, - img_metas, - gt_masks, - gt_labels, gt_bboxes_ignore=gt_bboxes_ignore, gt_bboxes=gt_bboxes, gt_sem_seg=gt_sem_seg, gt_sem_cls=gt_sem_cls, - imgs_whwh=None) + imgs_whwh=None, + img_metas=img_metas, + gt_masks=gt_masks, + gt_labels=gt_labels, + cls_score=cls_scores + ) losses.update(rpn_losses) total_loss = None diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py index b2209b4e7..182a04e83 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py @@ -8,24 +8,26 @@ from ..custom_cells import build_assigner, build_sampler class CustomKernelIterHead(nn.Cell): - def __init__(self, num_stages=6, recursive=False, assign_stages=5, stage_loss_weights=(1, 1, 1, 1, 1, 1), - proposal_feature_channel=256, merge_cls_scores=False, post_assign=False, hard_target=False, - num_proposals=100, num_thing_classes=80, mask_assign_stride=4, mask_head=None, mask_out_stride=4, - train_cfg=None, test_cfg=None, **kwargs): + def __init__(self, num_stages=6, proposal_feature_channel=256, mask_head=None, **kwargs): super(CustomKernelIterHead, self).__init__() if isinstance(mask_head, type(None)): mask_head = dict() + num_proposals = kwargs.get('num_proposals', 100) + train_cfg = 
kwargs.get('train_cfg', None)
+        test_cfg = kwargs.get('test_cfg', None)
+        mask_assign_stride = kwargs.get('mask_assign_stride', 4)
+        stage_loss_weights = kwargs.get('stage_loss_weights', (1, 1, 1, 1, 1, 1))
         assert len(stage_loss_weights) == num_stages
         self.num_stages = num_stages
         self.stage_loss_weights = stage_loss_weights
         self.proposal_feature_channel = proposal_feature_channel
-        self.merge_cls_scores = merge_cls_scores
-        self.recursive = recursive
-        self.post_assign = post_assign
-        self.mask_out_stride = mask_out_stride
-        self.hard_target = hard_target
-        self.assign_stages = assign_stages
-        self.num_thing_classes = num_thing_classes
+        self.merge_cls_scores = False
+        self.recursive = False
+        self.post_assign = False
+        self.mask_out_stride = 4
+        self.hard_target = False
+        self.assign_stages = 5
+        self.num_thing_classes = 80
         self.mask_assign_stride = mask_assign_stride
         self.num_proposals = num_proposals
         self.train_cfg = train_cfg
@@ -69,7 +71,13 @@ class CustomKernelIterHead(nn.Cell):
         for i in range(self.num_stages):
             self.mask_head[i] = self.mask_head[0]

-    def _mask_forward(self, stage, x, object_feats, mask_preds, img_metas):
+    @property
+    def apply_kernel_occlusion(self):
+        return self.mask_head[0].apply_kernel_occlusion
+
+    def _mask_forward(self, x, object_feats, mask_preds, **kwargs):
+        stage = kwargs.get('stage', None)
+        img_metas = kwargs.get('img_metas', None)
         mask_head = self.mask_head[stage]
         cls_score, mask_preds, object_feats = mask_head(
             x, object_feats, mask_preds, img_metas=img_metas)
@@ -90,10 +98,6 @@ class CustomKernelIterHead(nn.Cell):

         return mask_results

-    @property
-    def apply_kernel_occlusion(self):
-        return self.mask_head[0].apply_kernel_occlusion
-
     @property
     def occ_pair_num(self):
         return 2 * self.mask_head[0].pair_num
@@ -108,16 +112,14 @@ class CustomKernelIterHead(nn.Cell):
                       x,
                       proposal_feats,
                       mask_preds,
-                      cls_score,
-                      img_metas,
-                      gt_masks,
-                      gt_labels,
-                      gt_bboxes_ignore=None,
-                      imgs_whwh=None,
-                      gt_bboxes=None,
-                      gt_sem_seg=None,
-                      gt_sem_cls=None):
-
+                      **kwargs):
+        cls_score = kwargs.get('cls_score', None)
+        imgs_whwh = kwargs.get('imgs_whwh', None)
+        gt_sem_seg = kwargs.get('gt_sem_seg', None)
+        gt_sem_cls = kwargs.get('gt_sem_cls', None)
+        img_metas = kwargs.get('img_metas', None)
+        gt_masks = kwargs.get('gt_masks', None)
+        gt_labels = kwargs.get('gt_labels', None)
         num_imgs = len(img_metas)
         if self.mask_head[0].mask_upsample_stride > 1:
             interpolate = nn.ResizeBilinear()
@@ -143,16 +145,24 @@ class CustomKernelIterHead(nn.Cell):
         all_stage_mask_results = []
         assign_results = []
         for stage in range(self.num_stages):
-            mask_results = self._mask_forward(stage, x, object_feats,
-                                              mask_preds, img_metas)
+            mask_results = self._mask_forward(x, object_feats,
+                                              mask_preds,
+                                              stage=stage,
+                                              img_metas=img_metas)
             all_stage_mask_results.append(mask_results)
             if self.apply_kernel_occlusion:
-                mask_preds = mask_results['mask_preds'][:, :-self.occ_pair_num]
+                try:
+                    mask_preds = mask_results['mask_preds'][:, :-self.occ_pair_num]
+                except KeyError as err:
+                    raise KeyError('mask_results has no mask_preds') from err
             else:
-                mask_preds = mask_results['mask_preds']
+                try:
+                    mask_preds = mask_results['mask_preds']
+                except KeyError as err:
+                    raise KeyError('mask_results has no mask_preds') from err
-            scaled_mask_preds = mask_results['scaled_mask_preds']
-            cls_score = mask_results['cls_score']
-            object_feats = mask_results['object_feats']
+            scaled_mask_preds = mask_results.get('scaled_mask_preds', None)
+            cls_score = mask_results.get('cls_score', None)
+            object_feats = mask_results.get('object_feats', None)

             if self.post_assign:
                 if self.apply_kernel_occlusion:
@@ -226,12 +236,14 @@ class 
CustomKernelIterHead(nn.Cell): object_feats = proposal_feats scaled_mask_preds = None for stage in range(self.num_stages): - mask_results = self._mask_forward(stage, x, object_feats, - mask_preds, img_metas) - object_feats = mask_results['object_feats'] - cls_score = mask_results['cls_score'] - mask_preds = mask_results['mask_preds'] - scaled_mask_preds = mask_results['scaled_mask_preds'] + mask_results = self._mask_forward(x, object_feats, + mask_preds, + stage=stage, + img_metas=img_metas) + object_feats = mask_results.get('object_feats', None) + cls_score = mask_results.get('cls_score', None) + mask_preds = mask_results.get('mask_preds', None) + scaled_mask_preds = mask_results.get('scaled_mask_preds', None) num_classes = self.mask_head[-1].num_classes results = [] diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py index 1a21b7a73..d2a4d3346 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py @@ -67,8 +67,8 @@ class CustomKernelUpdateHead(KernelUpdateHead): return ui_kernels - def construct(self, x, proposal_feat, mask_preds, prev_cls_score=None, - mask_shape=None, img_metas=None): + def construct(self, x, proposal_feat, mask_preds, **kwargs): + mask_shape = kwargs.get('mask_shape', None) n_sample, num_proposals = proposal_feat.shape[:2] if self.feat_transform is not None: x = self.feat_transform(x) @@ -82,7 +82,6 @@ class CustomKernelUpdateHead(KernelUpdateHead): gather_mask = mask_preds - # sigmoid_masks = gather_mask.sigmoid() sigmoid_masks = ms.ops.sigmoid(gather_mask) nonzero_inds = sigmoid_masks > self.hard_mask_thr sigmoid_masks = nonzero_inds.astype(ms.float32) @@ -95,8 +94,6 @@ class CustomKernelUpdateHead(KernelUpdateHead): tmp_x_feats = ms.ops.transpose(tmp_x_feats, (0, 2, 1)) x_feat = ms.ops.bmm(sigmoid_masks, tmp_x_feats) - # x_feat = Einsum('bnhw,bchw->bnc', sigmoid_masks, x) - # obj_feat in shape [B, N, C, K, K] -> [B, N, C, K*K] -> [B, N, K*K, C] proposal_feat = proposal_feat.reshape(n_sample, num_proposals, self.in_channels, diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py index 5c6500298..be079eebc 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py @@ -11,17 +11,31 @@ from .kernel_updator import KernelUpdator class KernelUpdateHead(nn.Cell): - def __init__(self, num_classes=80, num_ffn_fcs=2, num_heads=8, num_cls_fcs=1, - num_mask_fcs=3, feedforward_channels=2048, in_channels=256, - out_channels=256, dropout=0.0, mask_thr=0.5, ffn_act_cfg=None, - conv_kernel_size=3, feat_transform_cfg=None, hard_mask_thr=0.5, - kernel_init=False, with_ffn=True, mask_out_stride=4, - relative_coors=False, relative_coors_off=False, feat_gather_stride=1, - mask_transform_stride=1, mask_upsample_stride=1, num_thing_classes=80, - num_stuff_classes=53, mask_assign_stride=4, ignore_label=255, - thing_label_in_seg=0, kernel_updator_cfg=None, loss_mask=None, - loss_dice=None, loss_cls=None, num_proposals=4): + def __init__(self, loss_dice=None, loss_cls=None, num_proposals=4, **kwargs): super(KernelUpdateHead, self).__init__() + # load arguments + num_classes = kwargs.get('num_classes', 80) + num_ffn_fcs = kwargs.get('num_ffn_fcs', 2) 
+        num_heads = kwargs.get('num_heads', 8)
+        num_cls_fcs = kwargs.get('num_cls_fcs', 1)
+        num_mask_fcs = kwargs.get('num_mask_fcs', 3)
+        feedforward_channels = kwargs.get('feedforward_channels', 2048)
+        in_channels = kwargs.get('in_channels', 256)
+        out_channels = kwargs.get('out_channels', 256)
+        dropout = kwargs.get('dropout', 0.0)
+        mask_thr = kwargs.get('mask_thr', 0.5)
+        ffn_act_cfg = kwargs.get('ffn_act_cfg', None)
+        conv_kernel_size = kwargs.get('conv_kernel_size', 3)
+        feat_transform_cfg = kwargs.get('feat_transform_cfg', None)
+        hard_mask_thr = kwargs.get('hard_mask_thr', 0.5)
+        kernel_init = kwargs.get('kernel_init', False)
+        with_ffn = kwargs.get('with_ffn', True)
+        mask_out_stride = kwargs.get('mask_out_stride', 4)
+        mask_upsample_stride = kwargs.get('mask_upsample_stride', 1)
+        mask_assign_stride = kwargs.get('mask_assign_stride', 4)
+        kernel_updator_cfg = kwargs.get('kernel_updator_cfg', None)
+        loss_mask = kwargs.get('loss_mask', None)
+
         # init dict-like arguments
         if isinstance(ffn_act_cfg, type(None)):
             ffn_act_cfg = dict(type='ReLU', inplace=True)
@@ -51,18 +65,18 @@ class KernelUpdateHead(nn.Cell):
         self.kernel_init = kernel_init
         self.with_ffn = with_ffn
         self.mask_out_stride = mask_out_stride
-        self.relative_coors = relative_coors
-        self.relative_coors_off = relative_coors_off
+        self.relative_coors = False
+        self.relative_coors_off = False
         self.conv_kernel_size = conv_kernel_size
-        self.feat_gather_stride = feat_gather_stride
-        self.mask_transform_stride = mask_transform_stride
+        self.feat_gather_stride = 1
+        self.mask_transform_stride = 1
         self.mask_upsample_stride = mask_upsample_stride
-        self.num_thing_classes = num_thing_classes
-        self.num_stuff_classes = num_stuff_classes
+        self.num_thing_classes = 80
+        self.num_stuff_classes = 53
         self.mask_assign_stride = mask_assign_stride
-        self.ignore_label = ignore_label
-        self.thing_label_in_seg = thing_label_in_seg
+        self.ignore_label = 255
+        self.thing_label_in_seg = 0

         self.attention = MultiheadAttention(in_channels * conv_kernel_size**2,
                                             num_heads, dropout, num_proposals=num_proposals)
@@ -78,8 +92,8 @@ class KernelUpdateHead(nn.Cell):
                 in_channels,
                 in_channels,
                 kernel_size,
-                stride=feat_gather_stride,
-                padding=int(feat_gather_stride // 2),
+                stride=1,
+                padding=0,
                 **feat_transform_cfg)
         else:
             self.feat_transform = None
@@ -91,7 +105,6 @@ class KernelUpdateHead(nn.Cell):
                 num_ffn_fcs,
                 act_cfg=ffn_act_cfg,
                 dropout_layer=dropout)
-            # self.ffn_norm = build_norm_layer(dict(type='LN'), in_channels)[1]
             self.ffn_norm = nn.LayerNorm([in_channels])

         self.cls_fcs = nn.CellList()
@@ -119,8 +132,6 @@ class KernelUpdateHead(nn.Cell):
         self.interpolate = nn.ResizeBilinear()

     def init_weights(self):
-        """Use xavier initialization for all weight parameter and set
-        classification head bias as a specific value when use focal loss."""
         self.init_parameters_data()
         for _, m in self.cells_and_names():
             if isinstance(m, nn.Conv2d):
@@ -145,7 +156,6 @@ class KernelUpdateHead(nn.Cell):
         self.fc_cls.bias.set_data(init.initializer(0.01, self.fc_cls.bias.shape))
         if self.kernel_init:
             print('mask kernel in mask head is normal initialized by std 0.01')
-            # nn.init.normal_(self.fc_mask.weight, mean=0, std=0.01)
             self.fc_mask.weight.set_data(init.initializer(
                 init.Normal(0.01, 0), self.fc_mask.weight.shape))
@@ -160,16 +170,12 @@ class KernelUpdateHead(nn.Cell):
                     label_weights,
                     mask_targets,
                     mask_weights,
-                    imgs_whwh=None,
-                    reduction_override=None,
                     **kwargs):
-
         losses = dict()
         bg_class_ind = self.num_classes
         # note in spare rcnn num_gt == num_pos
         pos_inds = 
(labels >= 0).astype(ms.int32) & (labels < bg_class_ind).astype(ms.int32) num_pos = pos_inds.sum().astype(ms.float32) - # avg_factor = reduce_mean(num_pos).clamp_(min=1.0) num_preds = mask_pred.shape[0] * mask_pred.shape[1] assert mask_pred.shape[0] == cls_score.shape[0] diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py index 6b21d610b..d8e3aaf73 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py @@ -4,17 +4,17 @@ from mindspore import nn, ops class KernelUpdator(nn.Cell): - def __init__(self, in_channels=256, feat_channels=64, out_channels=None, input_feat_shape=3, - gate_sigmoid=True, gate_norm_act=False, activate_out=False, act_cfg=None): + def __init__(self, in_channels=256, feat_channels=64, out_channels=None, act_cfg=None): super(KernelUpdator, self).__init__() if isinstance(act_cfg, type(None)): act_cfg = dict(type='ReLU', inplace=True) self.in_channels = in_channels self.feat_channels = feat_channels self.out_channels_raw = out_channels - self.gate_sigmoid = gate_sigmoid - self.gate_norm_act = gate_norm_act - self.activate_out = activate_out + self.gate_sigmoid = True + self.gate_norm_act = False + self.activate_out = False + input_feat_shape = 3 if isinstance(input_feat_shape, int): input_feat_shape = [input_feat_shape] * 2 self.input_feat_shape = input_feat_shape diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py index b5a5c7258..353e6905d 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py @@ -18,50 +18,47 @@ def bias_init_with_prob(prior_prob: float) -> float: class ConvKernelHead(nn.Cell): - def __init__(self, num_proposals=100, in_channels=256, out_channels=256, num_heads=8, - num_cls_fcs=1, num_seg_convs=1, num_loc_convs=1, att_dropout=False, - localization_fpn=None, conv_kernel_size=1, norm_cfg=None, semantic_fpn=True, - train_cfg=None, num_classes=80, xavier_init_kernel=False, kernel_init_std=0.01, - use_binary=False, proposal_feats_with_obj=False, loss_mask=None, loss_seg=None, - loss_cls=None, loss_dice=None, loss_rank=None, feat_downsample_stride=1, - feat_refine_stride=1, feat_refine=True, with_embed=False, feat_embed_only=False, - conv_normal_init=False, mask_out_stride=4, hard_target=False, num_thing_classes=80, - num_stuff_classes=53, mask_assign_stride=4, ignore_label=255, thing_label_in_seg=0, - cat_stuff_mask=False, **kwargs): + def __init__(self, thing_label_in_seg=0, cat_stuff_mask=False, **kwargs): super(ConvKernelHead, self).__init__() + norm_cfg = kwargs.get('norm_cfg', None) + loss_mask = kwargs.get('loss_mask', None) + loss_seg = kwargs.get('loss_seg', None) + loss_cls = kwargs.get('loss_cls', None) + loss_dice = kwargs.get('loss_dice', None) + loss_rank = kwargs.get('loss_rank', None) if isinstance(norm_cfg, type(None)): norm_cfg = dict(type='GN', num_groups=32) - self.num_proposals = num_proposals - self.num_cls_fcs = num_cls_fcs - self.train_cfg = Config(train_cfg) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_classes = num_classes - self.proposal_feats_with_obj = proposal_feats_with_obj + self.num_proposals = kwargs.get('num_proposals', 100) + self.num_cls_fcs = kwargs.get('num_cls_fcs', 1) + self.train_cfg = 
Config(kwargs.get('train_cfg', None)) + self.in_channels = kwargs.get('in_channels', 256) + self.out_channels = kwargs.get('out_channels', 256) + self.num_classes = kwargs.get('num_classes', 80) + self.proposal_feats_with_obj = kwargs.get('proposal_feats_with_obj', False) self.sampling = False - self.localization_fpn = SemanticFPNWrapper(**localization_fpn) - self.semantic_fpn = semantic_fpn + self.localization_fpn = SemanticFPNWrapper(**kwargs.get('localization_fpn', dict())) + self.semantic_fpn = kwargs.get('semantic_fpn', True) self.norm_cfg = norm_cfg - self.num_heads = num_heads - self.att_dropout = att_dropout - self.mask_out_stride = mask_out_stride - self.hard_target = hard_target - self.conv_kernel_size = conv_kernel_size - self.xavier_init_kernel = xavier_init_kernel - self.kernel_init_std = kernel_init_std - self.feat_downsample_stride = feat_downsample_stride - self.feat_refine_stride = feat_refine_stride - self.conv_normal_init = conv_normal_init - self.feat_refine = feat_refine - self.with_embed = with_embed - self.feat_embed_only = feat_embed_only - self.num_loc_convs = num_loc_convs - self.num_seg_convs = num_seg_convs - self.use_binary = use_binary - self.num_thing_classes = num_thing_classes - self.num_stuff_classes = num_stuff_classes - self.mask_assign_stride = mask_assign_stride - self.ignore_label = ignore_label + self.num_heads = kwargs.get('num_heads', 8) + self.att_dropout = kwargs.get('att_dropout', False) + self.mask_out_stride = kwargs.get('mask_out_stride', 4) + self.hard_target = kwargs.get('hard_target', False) + self.conv_kernel_size = kwargs.get('conv_kernel_size', 1) + self.xavier_init_kernel = kwargs.get('xavier_init_kernel', False) + self.kernel_init_std = kwargs.get('kernel_init_std', 0.01) + self.feat_downsample_stride = kwargs.get('feat_downsample_stride', 1) + self.feat_refine_stride = kwargs.get('feat_refine_stride', 1) + self.conv_normal_init = kwargs.get('conv_normal_init', False) + self.feat_refine = kwargs.get('feat_refine', True) + self.with_embed = kwargs.get('with_embed', False) + self.feat_embed_only = kwargs.get('feat_embed_only', False) + self.num_loc_convs = kwargs.get('num_loc_convs', 1) + self.num_seg_convs = kwargs.get('num_seg_convs', 1) + self.use_binary = kwargs.get('use_binary', False) + self.num_thing_classes = kwargs.get('num_thing_classes', 80) + self.num_stuff_classes = kwargs.get('num_stuff_classes', 53) + self.mask_assign_stride = kwargs.get('mask_assign_stride', 4) + self.ignore_label = kwargs.get('ignore_label', 255) self.thing_label_in_seg = thing_label_in_seg self.cat_stuff_mask = cat_stuff_mask @@ -103,6 +100,31 @@ class ConvKernelHead(nn.Cell): self.init_weights() self.sigmoid = ops.Sigmoid() + def init_weights(self): + self.localization_fpn.init_weights() + + if self.feat_downsample_stride > 1 and self.conv_normal_init: + logger.info('Initialize convs in KPN head by normal std 0.01') + for conv in [self.loc_convs, self.seg_convs]: + for m in conv.cells_and_names(): + if isinstance(m, nn.Conv2d): + normal_init(m, init_gain=0.01) + + if self.semantic_fpn: + bias_seg = bias_init_with_prob(0.01) + if self.loss_seg.use_sigmoid: + normal_init(self.conv_seg, init_gain=0.01, bias=bias_seg) + else: + normal_init(self.conv_seg, mean=0, init_gain=0.01) + if self.xavier_init_kernel: + logger.info('Initialize kernels by xavier uniform') + self.init_kernels.weight.set_data( + init.initializer(init.XavierUniform(), self.init_kernels.weight.shape)) + else: + logger.info( + f'Initialize kernels by normal std: {self.kernel_init_std}') + 
normal_init(self.init_kernels, mean=0, init_gain=self.kernel_init_std) + def _init_layers(self): """Initialize a sparse set of proposal boxes and proposal features.""" self.init_kernels = nn.Conv2d( @@ -154,30 +176,75 @@ class ConvKernelHead(nn.Cell): 1, norm_cfg=self.norm_cfg)) - def init_weights(self): - self.localization_fpn.init_weights() + def forward_train(self, + img, + gt_masks, + gt_labels, + **kwargs,): + """Forward function in training stage.""" + img_metas = kwargs.get('img_metas', None) + gt_sem_seg = kwargs.get('gt_sem_seg', None) + gt_sem_cls = kwargs.get('gt_sem_cls', None) + num_imgs = len(img_metas) + results = self._decode_init_proposals(img, img_metas) + (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) = results + if self.feat_downsample_stride > 1: + interpolate = nn.ResizeBilinear() + scaled_mask_preds = interpolate( + mask_preds, + scale_factor=self.feat_downsample_stride, + align_corners=False) + if seg_preds is not None: + scaled_seg_preds = interpolate( + seg_preds, + scale_factor=self.feat_downsample_stride, + align_corners=False) + else: + scaled_seg_preds = None + else: + scaled_mask_preds = mask_preds + scaled_seg_preds = seg_preds - if self.feat_downsample_stride > 1 and self.conv_normal_init: - logger.info('Initialize convs in KPN head by normal std 0.01') - for conv in [self.loc_convs, self.seg_convs]: - for m in conv.cells_and_names(): - if isinstance(m, nn.Conv2d): - normal_init(m, init_gain=0.01) + if self.hard_target: + gt_masks = [x.bool().astype(ms.float32) for x in gt_masks] + else: + gt_masks = gt_masks - if self.semantic_fpn: - bias_seg = bias_init_with_prob(0.01) - if self.loss_seg.use_sigmoid: - normal_init(self.conv_seg, init_gain=0.01, bias=bias_seg) - else: - normal_init(self.conv_seg, mean=0, init_gain=0.01) - if self.xavier_init_kernel: - logger.info('Initialize kernels by xavier uniform') - self.init_kernels.weight.set_data( - init.initializer(init.XavierUniform(), self.init_kernels.weight.shape)) + sampling_results = [] + if cls_scores is None: + detached_cls_scores = [None] * num_imgs else: - logger.info( - f'Initialize kernels by normal std: {self.kernel_init_std}') - normal_init(self.init_kernels, mean=0, init_gain=self.kernel_init_std) + detached_cls_scores = ops.stop_gradient(cls_scores) + for i in range(num_imgs): + assign_result = self.assigner.assign(ops.stop_gradient(scaled_mask_preds[i]), + detached_cls_scores[i], + gt_masks[i], gt_labels[i], + img_metas[i]) + sampling_result = self.sampler.sample(assign_result, + scaled_mask_preds[i], + gt_masks[i]) + sampling_results.append(sampling_result) + + mask_targets = self.get_targets( + sampling_results, + gt_masks, + self.train_cfg, + True, + gt_sem_seg=gt_sem_seg, + gt_sem_cls=gt_sem_cls) + + losses = self.loss(scaled_mask_preds, cls_scores, scaled_seg_preds, + proposal_feats, *mask_targets) + + if self.cat_stuff_mask and self.training: + mask_preds = ops.concat( + [mask_preds, seg_preds[:, self.num_thing_classes:]], axis=1) + stuff_kernels = self.conv_seg.weight[self. 
+ num_thing_classes:].clone() + stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape) + proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1) + results = (losses, proposal_feats, x_feats, mask_preds, cls_scores) + return results def _decode_init_proposals(self, img, img_metas): num_imgs = len(img_metas) @@ -244,77 +311,8 @@ class ConvKernelHead(nn.Cell): num_thing_classes:].clone() stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape) proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1) - - return proposal_feats, x_feats, mask_preds, cls_scores, seg_preds - - def forward_train(self, - img, - img_metas, - gt_masks, - gt_labels, - gt_sem_seg=None, - gt_sem_cls=None): - """Forward function in training stage.""" - num_imgs = len(img_metas) - results = self._decode_init_proposals(img, img_metas) - (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) = results - if self.feat_downsample_stride > 1: - interpolate = nn.ResizeBilinear() - scaled_mask_preds = interpolate( - mask_preds, - scale_factor=self.feat_downsample_stride, - align_corners=False) - if seg_preds is not None: - scaled_seg_preds = interpolate( - seg_preds, - scale_factor=self.feat_downsample_stride, - align_corners=False) - else: - scaled_seg_preds = None - else: - scaled_mask_preds = mask_preds - scaled_seg_preds = seg_preds - - if self.hard_target: - gt_masks = [x.bool().astype(ms.float32) for x in gt_masks] - else: - gt_masks = gt_masks - - sampling_results = [] - if cls_scores is None: - detached_cls_scores = [None] * num_imgs - else: - detached_cls_scores = ops.stop_gradient(cls_scores) - for i in range(num_imgs): - assign_result = self.assigner.assign(ops.stop_gradient(scaled_mask_preds[i]), - detached_cls_scores[i], - gt_masks[i], gt_labels[i], - img_metas[i]) - sampling_result = self.sampler.sample(assign_result, - scaled_mask_preds[i], - gt_masks[i]) - sampling_results.append(sampling_result) - - mask_targets = self.get_targets( - sampling_results, - gt_masks, - self.train_cfg, - True, - gt_sem_seg=gt_sem_seg, - gt_sem_cls=gt_sem_cls) - - losses = self.loss(scaled_mask_preds, cls_scores, scaled_seg_preds, - proposal_feats, *mask_targets) - - if self.cat_stuff_mask and self.training: - mask_preds = ops.concat( - [mask_preds, seg_preds[:, self.num_thing_classes:]], axis=1) - stuff_kernels = self.conv_seg.weight[self. - num_thing_classes:].clone() - stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape) - proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1) - - return losses, proposal_feats, x_feats, mask_preds, cls_scores + results = (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) + return results def loss(self, mask_pred, @@ -326,7 +324,6 @@ class ConvKernelHead(nn.Cell): mask_targets, mask_weights, seg_targets, - reduction_override=None, **kwargs): losses = dict() bg_class_ind = self.num_classes @@ -465,15 +462,13 @@ class ConvKernelHead(nn.Cell): Returns: Tensor: dets of shape [N, num_det, 5]. 
""" - - # rpn_results = self.simple_test_rpn(x, img_metas) rpn_results = self._decode_init_proposals_export(x) # return rpn_results (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) = rpn_results - return proposal_feats, x_feats, mask_preds, cls_scores, seg_preds + return rpn_results def _decode_init_proposals_export(self, img): num_imgs = 1 @@ -512,9 +507,6 @@ class ConvKernelHead(nn.Cell): # proposal_feats = self.init_kernels.weight.clone() tmp_feat = np.array(self.init_kernels.weight).astype(np.float32) - # proposal_feats = ms.Tensor(np.copy(tmp_feat), dtype=self.init_kernels.weight.dtype) - # # proposal_feats = proposal_feats[None].broadcast_to((num_imgs, ) + proposal_feats.shape) - # proposal_feats = ms.ops.broadcast_to(proposal_feats[None], (num_imgs, ) + proposal_feats.shape) tmp_feat = np.broadcast_to(tmp_feat[None], (num_imgs, ) + tmp_feat.shape) proposal_feats = ms.Tensor(np.copy(tmp_feat), dtype=self.init_kernels.weight.dtype) @@ -524,24 +516,18 @@ class ConvKernelHead(nn.Cell): x_feats = loc_feats if self.proposal_feats_with_obj: - # sigmoid_masks = mask_preds.sigmoid() sigmoid_masks = self.sigmoid(mask_preds) nonzero_inds = sigmoid_masks > 0.5 if self.use_binary: sigmoid_masks = nonzero_inds.astype(ms.float32) else: sigmoid_masks = nonzero_inds.astype(ms.float32) * sigmoid_masks - # einsum = ops.Einsum('bnhw,bchw->bnc') - # obj_feats = einsum((sigmoid_masks, x_feats)) b, n, h, w = sigmoid_masks.shape _, c, _, _ = x_feats.shape tmp_sigmoid_masks = ms.ops.reshape(sigmoid_masks, (b, n, h*w)) tmp_x_feats = ms.ops.reshape(x_feats, (b, c, h*w)) tmp_x_feats = ms.ops.transpose(tmp_x_feats, (0, 2, 1)) obj_feats = ms.ops.bmm(tmp_sigmoid_masks, tmp_x_feats) - - # obj_feats = Einsum('bnhw,bchw->bnc', sigmoid_masks, x_feats) - # obj_feats = torch.einsum('bnhw,bchw->bnc', sigmoid_masks, x_feats) else: obj_feats = None @@ -559,5 +545,5 @@ class ConvKernelHead(nn.Cell): # stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape) stuff_kernels = ms.ops.broadcast_to(stuff_kernels[None], (num_imgs, ) + stuff_kernels.shape) proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1) - - return proposal_feats, x_feats, mask_preds, cls_scores, seg_preds + results = (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) + return results diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py index 7ec8788ef..4ba71ae33 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py @@ -32,19 +32,28 @@ class SinePositionalEncoding(nn.Cell): Default: None """ - def __init__(self, num_feats, temperature=10000, normalize=False, - scale=2 * math.pi, eps=1e-6, offset=0.): + def __init__(self, num_feats, normalize=False, scale=2 * math.pi, **kwargs): super(SinePositionalEncoding, self).__init__() if normalize: assert isinstance(scale, (float, int)), 'when normalize is set,' \ 'scale should be provided and in float or int type, ' \ f'found {type(scale)}' self.num_feats = num_feats - self.temperature = temperature + self.temperature = kwargs.get('temperature', 10000) self.normalize = normalize self.scale = scale - self.eps = eps - self.offset = offset + self.eps = kwargs.get('eps', 1e-6) + self.offset = kwargs.get('offset', 0.) 
+ + def __repr__(self): + """str: a string that describes the module""" + repr_str = self.__class__.__name__ + repr_str += f'(num_feats={self.num_feats}, ' + repr_str += f'temperature={self.temperature}, ' + repr_str += f'normalize={self.normalize}, ' + repr_str += f'scale={self.scale}, ' + repr_str += f'eps={self.eps})' + return repr_str def construct(self, mask): """Forward function for `SinePositionalEncoding`. @@ -87,16 +96,6 @@ class SinePositionalEncoding(nn.Cell): pos = ops.concat((pos_y, pos_x), axis=3).transpose((0, 3, 1, 2)) return pos - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_feats={self.num_feats}, ' - repr_str += f'temperature={self.temperature}, ' - repr_str += f'normalize={self.normalize}, ' - repr_str += f'scale={self.scale}, ' - repr_str += f'eps={self.eps})' - return repr_str - def model_export(self, mask): """Forward function for `SinePositionalEncoding`. @@ -123,7 +122,6 @@ class SinePositionalEncoding(nn.Cell): (y_embed[:, -1:, :] + self.eps) * self.scale x_embed = (x_embed + self.offset) / \ (x_embed[:, :, -1:] + self.eps) * self.scale - # dim_t = ms.Tensor(np.arange(self.num_feats), dtype=ms.float32) dim_t = np.arange(self.num_feats).astype(np.float32) dim_t = self.temperature**(2 * (dim_t / 2) / self.num_feats) pos_x = x_embed[:, :, :, None] / dim_t @@ -132,14 +130,14 @@ class SinePositionalEncoding(nn.Cell): batch_size, height, width = mask.shape tmp_pos_x = pos_x - tmp_pos_y =pos_y + tmp_pos_y = pos_y tmp_pos_x = np.stack( (np.sin(tmp_pos_x[:, :, :, 0::2]), np.cos(tmp_pos_x[:, :, :, 1::2])), axis=4 ).reshape(batch_size, height, width, -1) tmp_pos_y = np.stack( (np.sin(tmp_pos_y[:, :, :, 0::2]), np.cos(tmp_pos_y[:, :, :, 1::2])), axis=4 ).reshape(batch_size, height, width, -1) - tmp_pos = np.concatenate((tmp_pos_y, tmp_pos_x),axis=3).transpose((0,3,1,2)) + tmp_pos = np.concatenate((tmp_pos_y, tmp_pos_x),axis=3).transpose((0, 3, 1, 2)) pos = ms.Tensor(tmp_pos, dtype=ms.float32) return pos diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py index a880ea2c9..26e1ba882 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py @@ -21,11 +21,17 @@ class SemanticFPNWrapper(nn.Cell): norm_cfg ([type], optional): [description]. Defaults to None. 
""" - def __init__(self, in_channels, feat_channels, out_channels, start_level, end_level, - cat_coors=False, positional_encoding=None, cat_coors_level=3, fuse_by_cat=False, - return_list=False, upsample_times=3, with_pred=True, num_aux_convs=0, act_cfg=None, - out_act_cfg=None, conv_cfg=None, norm_cfg=None): + def __init__(self, in_channels, feat_channels, out_channels, **kwargs): super(SemanticFPNWrapper, self).__init__() + start_level = kwargs.get('start_level', -1) + end_level = kwargs.get('end_level', -1) + positional_encoding = kwargs.get('positional_encoding', None) + cat_coors_level = kwargs.get('cat_coors_level', 3) + fuse_by_cat = kwargs.get('fuse_by_cat', False) + upsample_times = kwargs.get('upsample_times', 3) + num_aux_convs = kwargs.get('num_aux_convs', 0) + act_cfg = kwargs.get('act_cfg', None) + out_act_cfg = kwargs.get('out_act_cfg', None) # init dict-like arguments if isinstance(act_cfg, type(None)): act_cfg = dict(type='ReLU', inplace=True) @@ -38,15 +44,15 @@ class SemanticFPNWrapper(nn.Cell): self.end_level = end_level assert start_level >= 0 and end_level >= start_level self.out_channels = out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg + self.conv_cfg = kwargs.get('conv_cfg', None) + self.norm_cfg = kwargs.get('norm_cfg', None) self.act_cfg = act_cfg - self.cat_coors = cat_coors + self.cat_coors = kwargs.get('cat_coors', False) self.cat_coors_level = cat_coors_level self.fuse_by_cat = fuse_by_cat - self.return_list = return_list + self.return_list = kwargs.get('return_list', False) self.upsample_times = upsample_times - self.with_pred = with_pred + self.with_pred = kwargs.get('with_pred', True) if positional_encoding is not None: self.positional_encoding = SinePositionalEncoding(**positional_encoding) else: diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py index a74e8b24e..f950e67ac 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py @@ -58,20 +58,17 @@ CONFIG_DICT = dict( type='SynthOverlapDataset', ann_file=SYNTH_DATA_ROOT + 'train_gt.jsonl', img_prefix=SYNTH_DATA_ROOT, - seg_prefix=SYNTH_DATA_ROOT, pipeline=TRAIN_PIPELINE), val=dict( type='RealOverlapDataset', ann_file=REAL_DATA_ROOT + 'annotation.json', img_prefix=REAL_DATA_ROOT, - seg_prefix=REAL_DATA_ROOT, pipeline=TEST_PIPELINE, test_mode=True), test=dict( type='RealOverlapDataset', ann_file=REAL_DATA_ROOT + 'annotation.json', img_prefix=REAL_DATA_ROOT, - seg_prefix=REAL_DATA_ROOT, pipeline=TEST_PIPELINE, test_mode=True) ), diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py index 5db0c0a73..6fa4dda6c 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py @@ -102,7 +102,6 @@ model = dict( in_channels=256, feat_channels=256, out_channels=256, - input_feat_shape=3, act_cfg=dict(type='ReLU', inplace=True)), loss_mask=dict( type='BinaryCrossEntropy', loss_weight=1.0), -- Gitee From 00462f51d59caa3a62d2648f24b5478a2f0594a3 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Wed, 14 Dec 2022 14:07:41 +0800 Subject: [PATCH 41/51] add licence --- contrib/Overlap-Recovery/README.md | 4 ++-- .../Overlap-Recovery/inference/eval_utils.py | 15 +++++++++++++++ 
contrib/Overlap-Recovery/inference/load_ann.py | 15 +++++++++++++++ .../inference/preprocess_utils.py | 15 +++++++++++++++ contrib/Overlap-Recovery/train/eval.py | 2 +- contrib/Overlap-Recovery/train/export.py | 17 ++++++++++++++--- .../train/src/dataset/base_dataset.py | 15 +++++++++++++++ .../train/src/dataset/build_dataset.py | 15 +++++++++++++++ .../train/src/dataset/data_process.py | 15 +++++++++++++++ .../train/src/dataset/real_dataset.py | 15 +++++++++++++++ .../train/src/dataset/synth_dataset.py | 15 +++++++++++++++ .../Overlap-Recovery/train/src/dataset/utils.py | 15 +++++++++++++++ .../deoccluder/custom_cells/custom_assigner.py | 15 +++++++++++++++ .../deoccluder/custom_cells/custom_blocks.py | 15 +++++++++++++++ .../deoccluder/custom_cells/custom_losses.py | 15 +++++++++++++++ .../custom_cells/custom_match_cost.py | 15 +++++++++++++++ .../custom_cells/custom_operations.py | 15 +++++++++++++++ .../deoccluder/custom_cells/custom_samplers.py | 15 +++++++++++++++ .../train/src/deoccluder/deoccluder_r50.py | 15 +++++++++++++++ .../train/src/deoccluder/fpn_neck.py | 3 ++- .../train/src/deoccluder/resnet.py | 1 + .../deoccluder/roi/custom_kernel_iter_head.py | 16 ++++++++++++++++ .../deoccluder/roi/custom_kernel_update_head.py | 16 ++++++++++++++++ .../src/deoccluder/roi/kernel_update_head.py | 16 ++++++++++++++++ .../train/src/deoccluder/roi/kernel_updator.py | 16 ++++++++++++++++ .../train/src/deoccluder/rpn/kernel_head.py | 16 ++++++++++++++++ .../src/deoccluder/rpn/positional_encoding.py | 16 ++++++++++++++++ .../src/deoccluder/rpn/semantic_fpn_wrapper.py | 16 ++++++++++++++++ .../train/src/deoccluder/utils.py | 15 +++++++++++++++ .../src/model_utils/configs/config_base.py | 16 ++++++++++++++++ .../src/model_utils/configs/config_model.py | 16 ++++++++++++++++ .../train/src/model_utils/device_adapter.py | 2 +- .../train/src/model_utils/local_adapter.py | 2 +- .../train/src/model_utils/moxing_adapter.py | 2 +- contrib/Overlap-Recovery/train/train.py | 15 +++++++++++++++ 35 files changed, 437 insertions(+), 10 deletions(-) diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index 1dd1cd89b..aa04566bf 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -158,7 +158,7 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 | 软件名称 | 版本 | | ------------------- | ----------- | | MindX SDK | 3.0RC3 | -| Ascend-CANN-toolkit | 5.1.RC2 | +| Ascend-CANN-toolkit | 6.0.RC1 | | ubuntu | 18.04.1 LTS | | python | 3.9.2 | | MindSpore | 1.9.0 | @@ -176,7 +176,7 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 | 软件名称 | 版本 | | ------------------- | ----------- | | MindX SDK | 3.0RC3 | -| Ascend-CANN-toolkit | 5.1.RC2 | +| Ascend-CANN-toolkit | 6.0.RC1 | | ubuntu | 18.04.1 LTS | | python | 3.9.2 | | cv2 | 4.5.5.64 | diff --git a/contrib/Overlap-Recovery/inference/eval_utils.py b/contrib/Overlap-Recovery/inference/eval_utils.py index 28efdba28..2ae6dcd81 100644 --- a/contrib/Overlap-Recovery/inference/eval_utils.py +++ b/contrib/Overlap-Recovery/inference/eval_utils.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import numpy as np import cv2 diff --git a/contrib/Overlap-Recovery/inference/load_ann.py b/contrib/Overlap-Recovery/inference/load_ann.py index bfd178960..adcd67eae 100644 --- a/contrib/Overlap-Recovery/inference/load_ann.py +++ b/contrib/Overlap-Recovery/inference/load_ann.py @@ -1,5 +1,20 @@ # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import json import os.path as osp import imagesize diff --git a/contrib/Overlap-Recovery/inference/preprocess_utils.py b/contrib/Overlap-Recovery/inference/preprocess_utils.py index 29bd7d9ed..9c4e5dd39 100644 --- a/contrib/Overlap-Recovery/inference/preprocess_utils.py +++ b/contrib/Overlap-Recovery/inference/preprocess_utils.py @@ -1,5 +1,20 @@ # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import collections import warnings import os.path as osp diff --git a/contrib/Overlap-Recovery/train/eval.py b/contrib/Overlap-Recovery/train/eval.py index a416b30ed..e47eafb9c 100644 --- a/contrib/Overlap-Recovery/train/eval.py +++ b/contrib/Overlap-Recovery/train/eval.py @@ -1,4 +1,4 @@ -# Copyright 2020-2021 Huawei Technologies Co., Ltd +# Copyright 2022 Huawei Technologies Co., Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/contrib/Overlap-Recovery/train/export.py b/contrib/Overlap-Recovery/train/export.py index 661874a35..7dadeade6 100644 --- a/contrib/Overlap-Recovery/train/export.py +++ b/contrib/Overlap-Recovery/train/export.py @@ -1,7 +1,18 @@ # -*- coding: utf-8 -*- -# @Author: Wenwen Yu -# @Email: yuwenwen62@gmail.com -# @Created Time: 12/5/22 3:19 PM + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. """ export model to 'AIR', 'ONNX' and 'MINDIR' """ diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py index 7dd9288ca..4c9d734cc 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import os.path as osp import warnings diff --git a/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py index 80122a961..d5daca949 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/build_dataset.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + from .real_dataset import RealOverlapDataset from .synth_dataset import SynthOverlapDataset diff --git a/contrib/Overlap-Recovery/train/src/dataset/data_process.py b/contrib/Overlap-Recovery/train/src/dataset/data_process.py index 3a4fec369..abc21716b 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/data_process.py +++ b/contrib/Overlap-Recovery/train/src/dataset/data_process.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + from os import path as osp import warnings from collections.abc import Sequence diff --git a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py index 1b809c997..bbf003da0 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import json import os import os.path as osp diff --git a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py index be4c5b8b9..95e99b50f 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import json import os import os.path as osp diff --git a/contrib/Overlap-Recovery/train/src/dataset/utils.py b/contrib/Overlap-Recovery/train/src/dataset/utils.py index 3b16f2824..1edd0a1de 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/utils.py +++ b/contrib/Overlap-Recovery/train/src/dataset/utils.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + import numpy as np import mmcv import mindspore as ms diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py index d9ac38d8e..76a682ef4 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + try: from scipy.optimize import linear_sum_assignment except ImportError: diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py index dab453ecc..9dc73b213 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import warnings import mindspore as ms from mindspore import nn, ops diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py index b1ce3e9e9..7a8f5eb4d 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + import mindspore as ms import numpy as np from mindspore import nn, ops diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py index 16824071f..9b34141f8 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import mindspore as ms from mindspore import nn, ops from .custom_operations import Einsum diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py index 777b3546f..8eae0dacb 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + from functools import partial import warnings import numpy as np diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py index 5cd66e50d..e3bc3bcc8 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + import numpy as np import mindspore as ms from mindspore import ops, nn diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py index a535d1609..4d0a11160 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import mindspore as ms from mindspore import nn, ops from mindspore import load_checkpoint, load_param_into_net diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py b/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py index 16756331b..e60160f4d 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py @@ -1,4 +1,4 @@ -# Copyright 2020-2021 Huawei Technologies Co., Ltd +# Copyright 2022 Huawei Technologies Co., Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,6 +14,7 @@ # ============================================================================ """Feature pyramid network. (inherited from MaskRCNN in model zoo)""" + import numpy as np import mindspore.nn as nn from mindspore.ops import operations as P diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py b/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py index 388805169..5d8226475 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py @@ -17,6 +17,7 @@ resnet-50 backbone, code inherited from model zoo. """ + import mindspore.nn as nn from mindspore.common import initializer from mindspore import Parameter diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py index b2209b4e7..0a435e274 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py @@ -1,3 +1,19 @@ + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + import mindspore as ms from mindspore import nn, ops import numpy as np diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py index 1a21b7a73..4ab33295d 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py @@ -1,3 +1,19 @@ + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import numpy as np import mindspore as ms from mindspore import nn, ops diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py index 5c6500298..ac347e8c1 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py @@ -1,3 +1,19 @@ + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import numpy as np import mindspore as ms diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py index 6b21d610b..f21a63928 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_updator.py @@ -1,3 +1,19 @@ + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + import mindspore as ms from mindspore import nn, ops diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py index b5a5c7258..89b01d922 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py @@ -1,3 +1,19 @@ + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import numpy as np import mindspore as ms from mindspore import nn, ops diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py index 7ec8788ef..3eb0efe7e 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py @@ -1,5 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import math import numpy as np import mindspore as ms diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py index a880ea2c9..9d28b22dd 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/semantic_fpn_wrapper.py @@ -1,3 +1,19 @@ + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + import numpy as np import mindspore as ms from mindspore import nn, ops diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/utils.py b/contrib/Overlap-Recovery/train/src/deoccluder/utils.py index e7a2f907f..288ccecee 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/utils.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/utils.py @@ -1,6 +1,21 @@ #!/usr/bin/env python # -*- coding: utf-8 -*- +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import mindspore as ms from mindspore import nn, ops diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py index a74e8b24e..36edf6499 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_base.py @@ -1,3 +1,19 @@ + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + from pprint import pprint, pformat from .config_model import model diff --git a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py index 5db0c0a73..60dc454f6 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/configs/config_model.py @@ -1,3 +1,19 @@ + +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + NUM_STAGES = 3 NUM_PROPOSALS = 4 CONV_KERNEL_SIZE = 1 diff --git a/contrib/Overlap-Recovery/train/src/model_utils/device_adapter.py b/contrib/Overlap-Recovery/train/src/model_utils/device_adapter.py index 53c5e070f..e589ad560 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/device_adapter.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/device_adapter.py @@ -1,4 +1,4 @@ -# Copyright 2021 Huawei Technologies Co., Ltd +# Copyright 2022 Huawei Technologies Co., Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py b/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py index 8a1b1fa1f..6b8285ca8 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/local_adapter.py @@ -1,4 +1,4 @@ -# Copyright 2021 Huawei Technologies Co., Ltd +# Copyright 2022 Huawei Technologies Co., Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py b/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py index 2e9502ad0..d7dff5a29 100644 --- a/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py +++ b/contrib/Overlap-Recovery/train/src/model_utils/moxing_adapter.py @@ -1,4 +1,4 @@ -# Copyright 2021 Huawei Technologies Co., Ltd +# Copyright 2022 Huawei Technologies Co., Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/contrib/Overlap-Recovery/train/train.py b/contrib/Overlap-Recovery/train/train.py index eb3c1780c..fcd7ea8de 100644 --- a/contrib/Overlap-Recovery/train/train.py +++ b/contrib/Overlap-Recovery/train/train.py @@ -1,5 +1,20 @@ """train model.""" +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + import time import os import numpy as np -- Gitee From b13da459ed9c0de1a758742884fb8abe1e47bb56 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Wed, 14 Dec 2022 14:23:03 +0800 Subject: [PATCH 42/51] add licence --- .../Overlap-Recovery/inference/load_img_data.py | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/contrib/Overlap-Recovery/inference/load_img_data.py b/contrib/Overlap-Recovery/inference/load_img_data.py index 6535bd131..190095e2d 100644 --- a/contrib/Overlap-Recovery/inference/load_img_data.py +++ b/contrib/Overlap-Recovery/inference/load_img_data.py @@ -1,7 +1,21 @@ # -*- coding: utf-8 -*- -from preprocess_utils import build_processor +# Copyright 2022 Huawei Technologies Co., Ltd +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from preprocess_utils import build_processor img_scale = (768, 768) -- Gitee From 17fbdd6b0f9740eb370c92dcc4be75b1fe0f0cec Mon Sep 17 00:00:00 2001 From: HamPerdredes Date: Wed, 14 Dec 2022 18:57:22 +0800 Subject: [PATCH 43/51] clean code --- .../custom_cells/custom_assigner.py | 19 +++--- .../deoccluder/custom_cells/custom_blocks.py | 24 +++---- .../deoccluder/custom_cells/custom_losses.py | 5 +- .../custom_cells/custom_match_cost.py | 68 +++++++++---------- .../custom_cells/custom_operations.py | 3 +- .../custom_cells/custom_samplers.py | 24 ++++--- 6 files changed, 77 insertions(+), 66 deletions(-) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py index d9ac38d8e..867fd75b8 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py @@ -43,11 +43,6 @@ class AssignResult(NiceRepr): """int: the number of predictions in this assignment""" return len(self.gt_inds) - def set_extra_property(self, key, value): - """Set user-defined new property.""" - assert key not in self.info - self._extra_properties[key] = value - def __nice__(self): """str: a "nice" summary string describing this assign result""" parts = [] @@ -67,6 +62,11 @@ class AssignResult(NiceRepr): parts.append(f'labels.shape={tuple(self.labels.shape)!r}') return ', '.join(parts) + def set_extra_property(self, key, value): + """Set user-defined new property.""" + assert key not in self.info + self._extra_properties[key] = value + @property def info(self): """dict: a dictionary of info about the object""" @@ -105,12 +105,13 @@ class MaskHungarianAssigner(nn.Cell): """Computes one-to-one matching between predictions and ground truth.""" def __init__(self, - cls_cost=dict(type='ClassificationCost', weight=1.), - mask_cost=dict(type='SigmoidCost', weight=1.0), - dice_cost=dict(), boundary_cost=None, - topk=1): + topk=1, + **kwargs): super(MaskHungarianAssigner, self).__init__() + cls_cost = kwargs.get('cls_cost', dict(type='ClassificationCost', weight=1.)) + mask_cost = kwargs.get('mask_cost', dict(type='SigmoidCost', weight=1.0)) + dice_cost = kwargs.get('dice_cost', dict()) self.cls_cost = build_match_cost(cls_cost) self.mask_cost = build_match_cost(mask_cost) self.dice_cost = build_match_cost(dice_cost) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py index dab453ecc..3d9781160 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py @@ -8,9 +8,10 @@ from src.model_utils.configs.config_base import config class ConvModule(nn.Cell): - def __init__(self, in_channels, out_channels, kernel_size=1, padding=0, stride=1, - groups=1, dilation=1, conv_cfg=None, norm_cfg=None, act_cfg=None): + def __init__(self, in_channels, out_channels, kernel_size=1, **kwargs): 
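The refactor above (`MaskHungarianAssigner`, `ConvModule`, and the `FFN`/`MultiheadAttention` changes that follow) collapses long keyword lists into `**kwargs` read through `kwargs.get` with the same defaults the old signatures carried, so existing call sites keep working. A minimal sketch of the pattern; `DemoCell` and its option names are illustrative, not the patch's code:

```python
class DemoCell:
    """Sketch of the kwargs-with-defaults constructor pattern used above."""

    def __init__(self, in_channels, out_channels, kernel_size=1, **kwargs):
        # Required arguments stay positional; optional ones are pulled
        # from kwargs with the defaults the long signature used to declare.
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.stride = kwargs.get('stride', 1)
        self.padding = kwargs.get('padding', 0)
        self.norm_cfg = kwargs.get('norm_cfg', None)


# Existing call sites keep working unchanged:
cell = DemoCell(64, 128, 3, stride=2, norm_cfg=dict(type='BN'))
assert cell.stride == 2 and cell.padding == 0
```

One trade-off: a misspelled keyword is silently ignored instead of raising a `TypeError`, so the chosen defaults need to be safe.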
super().__init__() + norm_cfg = kwargs.get('norm_cfg', None) + act_cfg = kwargs.get('act_cfg', None) if norm_cfg is not None: bias = False else: @@ -18,11 +19,11 @@ class ConvModule(nn.Cell): self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, - stride=stride, + stride=kwargs.get('stride', 1), pad_mode='pad', - padding=padding, - group=groups, - dilation=dilation, + padding=kwargs.get('padding', 0), + group=kwargs.get('groups', 1), + dilation=kwargs.get('dilation', 1), has_bias=bias) self.norm = None @@ -74,8 +75,9 @@ class FFN(nn.Cell): when adding the shortcut. """ - def __init__(self, embed_dims=256, feedforward_channels=1024, num_fcs=2, - act_cfg=None, ffn_drop=0., dropout_layer=None, add_identity=True): + def __init__(self, embed_dims=256, feedforward_channels=1024, num_fcs=2, **kwargs): + act_cfg = kwargs.get('act_cfg', None) + dropout_layer = kwargs.get('dropout_layer', None) super(FFN, self).__init__() if isinstance(act_cfg, type(None)): act_cfg = dict(type='ReLU') @@ -99,13 +101,12 @@ class FFN(nn.Cell): )) in_channels = feedforward_channels layers.append(nn.Dense(feedforward_channels, embed_dims)) - # layers.append(nn.Dropout(ffn_drop) if ffn_drop > 0 else nn.Identity()) self.layers = nn.SequentialCell(*layers) if dropout_layer: self.dropout_layer = nn.Dropout() else: - self.dropout_layer = None # nn.Identity() - self.add_identity = add_identity + self.dropout_layer = None + self.add_identity = kwargs.get('add_identity', True) def construct(self, x, identity=None): """Forward function for `FFN`. @@ -167,7 +168,6 @@ class MultiheadAttention(nn.Cell): if proj_drop > 0: self.proj_drop = nn.Dropout(proj_drop) else: - # self.proj_drop = nn.Identity() self.proj_drop = None self.num_proposals = num_proposals diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py index b1ce3e9e9..3b90c7dc4 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_losses.py @@ -104,4 +104,7 @@ CUSTOM_LOSSES = { def build_loss(loss_cfg: dict): loss_type = loss_cfg.pop('type') - return CUSTOM_LOSSES[loss_type](**loss_cfg) + try: + return CUSTOM_LOSSES[loss_type](**loss_cfg) + except KeyError: + raise KeyError diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py index 16824071f..45d5950ed 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py @@ -3,7 +3,7 @@ import mindspore as ms from mindspore import nn, ops -from .custom_operations import Einsum +from .custom_operations import custom_einsum class FocalLossCost: @@ -31,13 +31,27 @@ class FocalLossCost: [-0.1950, -0.1207, -0.2626]]) """ - def __init__(self, weight=1., alpha=0.25, gamma=2, - eps=1e-12, binary_input=False): + def __init__(self, weight=1., alpha=0.25, gamma=2, **kwargs): self.weight = weight self.alpha = alpha self.gamma = gamma - self.eps = eps - self.binary_input = binary_input + self.eps = kwargs.get('eps', 1e-12) + self.binary_input = kwargs.get('binary_input', False) + + def __call__(self, cls_pred, gt_labels): + """ + Args: + cls_pred (Tensor): Predicted classfication logits. + gt_labels (Tensor)): Labels. 
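`build_loss` above, like the other builders in these patches (`build_sampler`, `build_assigner`, `build_match_cost`), follows one registry shape: pop the `type` string from the config dict and construct the registered class from the remaining keys. A self-contained sketch with a placeholder registry; note that re-raising with the offending name, rather than the bare `raise KeyError` used in the patch, would keep the failure traceable:

```python
class DiceLoss:
    """Placeholder registered class standing in for the patch's losses."""

    def __init__(self, weight=1.0):
        self.weight = weight


CUSTOM_LOSSES = {'DiceLoss': DiceLoss}


def build_loss(loss_cfg: dict):
    cfg = dict(loss_cfg)         # copy so the caller's config is not mutated
    loss_type = cfg.pop('type')  # remaining keys become constructor kwargs
    try:
        return CUSTOM_LOSSES[loss_type](**cfg)
    except KeyError:
        raise KeyError(f'unknown loss type: {loss_type!r}')


loss = build_loss(dict(type='DiceLoss', weight=2.0))
assert loss.weight == 2.0
```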
+ + Returns: + Tensor: Focal cost matrix with weight in shape\ + (num_query, num_gt). + """ + if self.binary_input: + return self._mask_focal_loss_cost(cls_pred, gt_labels) + else: + return self._focal_loss_cost(cls_pred, gt_labels) def _focal_loss_cost(self, cls_pred, gt_labels): """ @@ -58,21 +72,6 @@ class FocalLossCost: cls_cost = ms.Tensor(pos_cost.asnumpy()[:, gt_numpy]) - ms.Tensor(neg_cost.asnumpy()[:, gt_numpy]) return cls_cost * self.weight - def __call__(self, cls_pred, gt_labels): - """ - Args: - cls_pred (Tensor): Predicted classfication logits. - gt_labels (Tensor)): Labels. - - Returns: - Tensor: Focal cost matrix with weight in shape\ - (num_query, num_gt). - """ - if self.binary_input: - return self._mask_focal_loss_cost(cls_pred, gt_labels) - else: - return self._focal_loss_cost(cls_pred, gt_labels) - def _mask_focal_loss_cost(self, cls_pred, gt_labels): """ Args: @@ -128,18 +127,6 @@ class DiceCost(object): self.act_mode = act_mode self.eps = eps - def dice_loss(cls, input, target, eps=1e-3): - input = input.reshape(input.shape[0], -1) - target = target.reshape(target.shape[0], -1).astype(ms.float32) - # einsum saves 10x memory - # a = torch.sum(input[:, None] * target[None, ...], -1) - a = Einsum('nh,mh->nm', input, target) - b = ops.reduce_sum(input * input, 1) + eps - c = ops.reduce_sum(target * target, 1) + eps - d = (2 * a) / (b[:, None] + c[None, ...]) - # 1 is a constance that will not affect the matching, so ommitted - return -d - def __call__(self, mask_preds, gt_masks): """ Args: @@ -156,9 +143,20 @@ class DiceCost(object): mask_preds = mask_preds.sigmoid() elif self.pred_act: mask_preds = mask_preds.softmax(dim=0) - dice_cost = self.dice_loss(mask_preds, gt_masks, self.eps) + dice_cost = self.custom_dice_loss(mask_preds, gt_masks, self.eps) return dice_cost * self.weight + def custom_dice_loss(cls, input, target, eps=1e-3): + input = input.reshape(input.shape[0], -1) + target = target.reshape(target.shape[0], -1).astype(ms.float32) + # einsum saves 10x memory + a = custom_einsum('nh,mh->nm', input, target) + b = ops.reduce_sum(input * input, 1) + eps + c = ops.reduce_sum(target * target, 1) + eps + d = (2 * a) / (b[:, None] + c[None, ...]) + # 1 is a constance that will not affect the matching, so ommitted + return -d + class MaskCost(object): """MaskCost. 
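`DiceCost` computes every prediction-to-ground-truth Dice score in one contraction: masks are flattened to vectors and `einsum('nh,mh->nm', ...)` sums over the pixel axis, producing the full N x M intersection matrix without materializing the broadcast product `input[:, None] * target[None]` (the in-code comment puts the saving at roughly 10x memory). The same computation as `custom_dice_loss` above, transcribed to plain numpy:

```python
import numpy as np


def dice_cost(mask_preds, gt_masks, eps=1e-3):
    """Pairwise negated Dice similarity, mirroring custom_dice_loss above.

    mask_preds: (N, H, W) predicted mask probabilities in [0, 1]
    gt_masks:   (M, H, W) binary ground-truth masks
    """
    p = mask_preds.reshape(mask_preds.shape[0], -1)
    t = gt_masks.reshape(gt_masks.shape[0], -1).astype(np.float32)
    inter = np.einsum('nh,mh->nm', p, t)   # all pairwise intersections at once
    p_sq = (p * p).sum(axis=1) + eps       # per-prediction squared norm
    t_sq = (t * t).sum(axis=1) + eps       # per-target squared norm
    dice = (2.0 * inter) / (p_sq[:, None] + t_sq[None, :])
    return -dice                           # lower cost = better match


c = dice_cost(np.random.rand(4, 8, 8), np.random.rand(2, 8, 8) > 0.5)
assert c.shape == (4, 2)
```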
@@ -190,8 +188,8 @@ class MaskCost(object): _, H, W = target.shape # flatten_cls_pred = cls_pred.view(num_proposals, -1) # eingum is ~10 times faster than matmul - pos_cost = Einsum('nhw,mhw->nm', cls_pred, target) - neg_cost = Einsum('nhw,mhw->nm', 1 - cls_pred, 1 - target) + pos_cost = custom_einsum('nhw,mhw->nm', cls_pred, target) + neg_cost = custom_einsum('nhw,mhw->nm', 1 - cls_pred, 1 - target) cls_cost = -(pos_cost + neg_cost) / (H * W) return cls_cost * self.weight diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py index 777b3546f..6e1da5f2f 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py @@ -98,5 +98,6 @@ def multi_apply(func, *args, **kwargs): return tuple(map(list, zip(*map_results))) -def Einsum(format, x, y): +# def Einsum(format, x, y): +def custom_einsum(format, x, y): return ms.Tensor(np.einsum(format, x.asnumpy(), y.asnumpy()), dtype=x.dtype) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py index 5cd66e50d..4e3691546 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_samplers.py @@ -22,7 +22,10 @@ class MaskSamplingResult(NiceRepr): })> """ - def __init__(self, pos_inds, neg_inds, masks, gt_masks, assign_result, gt_flags): + def __init__(self, pos_inds, neg_inds, masks, **kwargs): + gt_masks = kwargs.get('gt_masks', None) + assign_result = kwargs.get('assign_result', None) + gt_flags = kwargs.get('gt_flags', None) self.pos_inds = pos_inds self.neg_inds = neg_inds if pos_inds.shape[0] == 0: @@ -52,11 +55,6 @@ class MaskSamplingResult(NiceRepr): else: self.pos_gt_labels = None - @property - def masks(self): - """torch.Tensor: concatenated positive and negative boxes""" - return ops.concat([self.pos_masks, self.neg_masks]) - def __nice__(self): data = self.info.copy() data['pos_masks'] = data.pop('pos_masks').shape @@ -65,6 +63,11 @@ class MaskSamplingResult(NiceRepr): body = ' ' + ',\n '.join(parts) return '{\n' + body + '\n}' + @property + def masks(self): + """torch.Tensor: concatenated positive and negative boxes""" + return ops.concat([self.pos_masks, self.neg_masks]) + @property def bboxes(self): """torch.Tensor: concatenated positive and negative boxes""" @@ -109,7 +112,9 @@ class MaskPseudoSampler(nn.Cell): gt_flags = zeros((masks.shape[0], ), ms.uint8) sampling_result = MaskSamplingResult(pos_inds, neg_inds, masks, - gt_masks, assign_result, gt_flags) + gt_masks=gt_masks, + assign_result=assign_result, + gt_flags=gt_flags) return sampling_result @@ -120,4 +125,7 @@ CUSTOM_SAMPLER = { def build_sampler(cfg: dict): sampler_type = cfg.pop('type') - return CUSTOM_SAMPLER[sampler_type](**cfg) + try: + return CUSTOM_SAMPLER[sampler_type](**cfg) + except KeyError: + raise KeyError -- Gitee From 246e0561ea1cc589f1f6dd46a1143c1d3104ab44 Mon Sep 17 00:00:00 2001 From: HamPerdredes Date: Wed, 14 Dec 2022 19:41:16 +0800 Subject: [PATCH 44/51] fix code --- contrib/Overlap-Recovery/train/eval.py | 4 +- .../train/src/dataset/base_dataset.py | 89 +++--- .../train/src/dataset/data_process.py | 46 +-- .../train/src/dataset/real_dataset.py | 81 +---- .../train/src/dataset/synth_dataset.py | 81 +---- 
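`MaskCost` above applies the same einsum trick directly to the 2-D masks: agreement on foreground pixels (`pos_cost`) and on background pixels (`neg_cost`) are each a single `'nhw,mhw->nm'` contraction, and their sum normalized by the image area scores every proposal-to-GT pair. In numpy:

```python
import numpy as np


def mask_cost(cls_pred, target, weight=1.0):
    """Pairwise mask-agreement cost, mirroring MaskCost above.

    cls_pred: (N, H, W) predicted mask probabilities in [0, 1]
    target:   (M, H, W) binary ground-truth masks
    """
    target = target.astype(np.float32)
    _, height, width = target.shape
    pos = np.einsum('nhw,mhw->nm', cls_pred, target)              # fg agreement
    neg = np.einsum('nhw,mhw->nm', 1.0 - cls_pred, 1.0 - target)  # bg agreement
    return -(pos + neg) / (height * width) * weight               # lower = better


c = mask_cost(np.random.rand(4, 16, 16), np.random.rand(2, 16, 16) > 0.5)
assert c.shape == (4, 2)
```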
.../train/src/dataset/utils.py | 101 ------ .../custom_cells/custom_assigner.py | 27 +- .../deoccluder/custom_cells/custom_blocks.py | 7 +- .../custom_cells/custom_match_cost.py | 22 +- .../custom_cells/custom_operations.py | 3 +- .../train/src/deoccluder/deoccluder_r50.py | 12 +- .../train/src/deoccluder/resnet.py | 4 +- .../deoccluder/roi/custom_kernel_iter_head.py | 16 +- .../roi/custom_kernel_update_head.py | 1 - .../src/deoccluder/roi/kernel_update_head.py | 72 ++--- .../train/src/deoccluder/rpn/kernel_head.py | 291 +++++++++--------- .../src/deoccluder/rpn/positional_encoding.py | 2 +- 17 files changed, 324 insertions(+), 535 deletions(-) diff --git a/contrib/Overlap-Recovery/train/eval.py b/contrib/Overlap-Recovery/train/eval.py index e47eafb9c..40c99e553 100644 --- a/contrib/Overlap-Recovery/train/eval.py +++ b/contrib/Overlap-Recovery/train/eval.py @@ -36,7 +36,7 @@ set_seed(1) config.train = False -def eval_func(eval_set, ckpt_path, config, src_eval_set): +def eval_func(eval_set, ckpt_path, src_eval_set): """MaskRcnn evaluation.""" net = CustomKNet(config.model) param_dict = load_checkpoint(ckpt_path) @@ -90,7 +90,7 @@ def eval_(): logger.info("Start Eval!") logger.info(f"ckpt_path = {config.checkpoint_path}") - eval_func(eval_set, config.checkpoint_path, config, eval_set_cls) + eval_func(eval_set, config.checkpoint_path, eval_set_cls) if __name__ == '__main__': diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py index 5383be456..b0ba141db 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py @@ -53,7 +53,7 @@ class CustomDataset: test_mode (bool, optional): If set True, annotation will not be loaded. """ - CLASSES = None + custom_classes = None PALETTE = None @@ -65,7 +65,7 @@ class CustomDataset: self.seg_suffix = '.png' self.test_mode = test_mode self.filter_empty_gt = True - self.CLASSES = self.get_classes(None) + self.custom_classes = self.get_classes(None) # join paths if data_root is specified if self.data_root is not None: @@ -149,18 +149,6 @@ class CustomDataset: valid_inds.append(i) return valid_inds - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. - """ - self.flag = np.zeros(len(self), dtype=np.uint8) - for i in range(len(self)): - img_info = self.data_infos[i] - if img_info['width'] / img_info['height'] > 1: - self.flag[i] = 1 - def __getitem__(self, idx): """Get training/test data after pipeline. @@ -181,10 +169,17 @@ class CustomDataset: continue return data - def _rand_another(self, idx): - """Get another random index from the same group as the given index.""" - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) + def _set_group_flag(self): + """Set flag according to image aspect ratio. + + Images with aspect ratio greater than 1 will be set as group 1, + otherwise group 0. + """ + self.flag = np.zeros(len(self), dtype=np.uint8) + for i in range(len(self)): + img_info = self.data_infos[i] + if img_info['width'] / img_info['height'] > 1: + self.flag[i] = 1 def prepare_train_img(self, idx): """Get training data and annotations after pipeline. 
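`__getitem__` above works together with `_set_group_flag` and the relocated `_rand_another` (re-added in the next hunk): each image gets a flag (1 when width > height), and when `prepare_train_img` returns `None` a replacement index is drawn from the same aspect-ratio group, so the retried sample stays shape-compatible with its batch. The resampling logic in isolation:

```python
import numpy as np

# One flag per image, as _set_group_flag computes it: 1 = landscape.
widths = np.array([640, 480, 800, 300])
heights = np.array([480, 640, 600, 500])
flag = (widths / heights > 1).astype(np.uint8)


def rand_another(idx):
    """Random index from the same aspect-ratio group, as in _rand_another."""
    pool = np.where(flag == flag[idx])[0]
    return np.random.choice(pool)


replacement = rand_another(0)
assert flag[replacement] == flag[0]
```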
@@ -205,6 +200,29 @@ class CustomDataset: self.pre_pipeline(results) return self.pipeline(results) + def _rand_another(self, idx): + """Get another random index from the same group as the given index.""" + pool = np.where(self.flag == self.flag[idx])[0] + return np.random.choice(pool) + + @classmethod + def get_classes(cls, classes=None): + """Get class names of current dataset. + + Args: + classes (Sequence[str] | str | None): If classes is None, use + default custom_classes defined by builtin dataset. If classes is a + string, take it as a file name. The file contains the name of + classes where each line contains one class name. If classes is + a tuple or list, override the custom_classes defined by the dataset. + + Returns: + tuple[str] or list[str]: Names of categories of the dataset. + """ + if classes is None: + return cls.custom_classes + raise NotImplementedError + def prepare_test_img(self, idx): """Get testing data after pipeline. @@ -223,24 +241,6 @@ class CustomDataset: self.pre_pipeline(results) return self.pipeline(results) - @classmethod - def get_classes(cls, classes=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - - Returns: - tuple[str] or list[str]: Names of categories of the dataset. - """ - if classes is None: - return cls.CLASSES - raise NotImplementedError - def get_cat2imgs(self): """Get a dict with class as key and img_ids as values, which will be used in :class:`ClassAwareSampler`. @@ -250,10 +250,10 @@ class CustomDataset: the item of the dict indicates a label index, corresponds to the image index that contains the label. """ - if self.CLASSES is None: - raise ValueError('self.CLASSES can not be None') + if self.custom_classes is None: + raise ValueError('self.custom_classes can not be None') # sort the label index - cat2imgs = {i: [] for i in range(len(self.CLASSES))} + cat2imgs = {i: [] for i in range(len(self.custom_classes))} for i in range(len(self)): cat_ids = set(self.get_cat_ids(i)) for cat in cat_ids: @@ -262,8 +262,11 @@ class CustomDataset: def format_results(self, results, **kwargs): """Place holder to format result to dataset specific output.""" + print(self, results) + raise NotImplementedError def evaluate(self, *args, **kwargs): + print(self, args) raise NotImplementedError def __repr__(self): @@ -272,10 +275,10 @@ class CustomDataset: result = (f'\n{self.__class__.__name__} {dataset_type} dataset ' f'with number of images {len(self)}, ' f'and instance counts: \n') - if self.CLASSES is None: + if self.custom_classes is None: result += 'Category names are not provided. 
\n' return result - instance_count = np.zeros(len(self.CLASSES) + 1).astype(int) + instance_count = np.zeros(len(self.custom_classes) + 1).astype(int) # count the instance number in each image for idx in range(len(self)): label = self.get_ann_info(idx)['labels'] @@ -290,8 +293,8 @@ class CustomDataset: table_data = [['category', 'count'] * 5] row_data = [] for cls, count in enumerate(instance_count): - if cls < len(self.CLASSES): - row_data += [f'{cls} [{self.CLASSES[cls]}]', f'{count}'] + if cls < len(self.custom_classes): + row_data += [f'{cls} [{self.custom_classes[cls]}]', f'{count}'] else: # add the background number row_data += ['-1 background', f'{count}'] diff --git a/contrib/Overlap-Recovery/train/src/dataset/data_process.py b/contrib/Overlap-Recovery/train/src/dataset/data_process.py index 740cdfd1e..f38361a9a 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/data_process.py +++ b/contrib/Overlap-Recovery/train/src/dataset/data_process.py @@ -153,12 +153,6 @@ class CustomLoadAnnotations: return results - @staticmethod - def _load_labels(results): - results['gt_labels'] = results['ann_info']['labels'].copy() - results['text_labels'] = results['ann_info']['text_labels'].copy() - return results - def __call__(self, results): if self.with_bbox: results = self._load_bboxes(results) @@ -171,13 +165,9 @@ class CustomLoadAnnotations: return results @staticmethod - def _load_masks(results): - h, w = results['img_info']['height'], results['img_info']['width'] - gt_masks = [cv2.imread(_, cv2.IMREAD_UNCHANGED) for _ in results['ann_info']['masks']] - gt_masks = [mask // 255 for mask in gt_masks] - gt_masks = BitmapMasks(gt_masks, h, w) - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') + def _load_labels(results): + results['gt_labels'] = results['ann_info']['labels'].copy() + results['text_labels'] = results['ann_info']['text_labels'].copy() return results def __repr__(self): @@ -187,6 +177,16 @@ class CustomLoadAnnotations: repr_str += f'with_mask={self.with_mask}, ' return repr_str + @staticmethod + def _load_masks(results): + h, w = results['img_info']['height'], results['img_info']['width'] + gt_masks = [cv2.imread(_, cv2.IMREAD_UNCHANGED) for _ in results['ann_info']['masks']] + gt_masks = [mask // 255 for mask in gt_masks] + gt_masks = BitmapMasks(gt_masks, h, w) + results['gt_masks'] = gt_masks + results['mask_fields'].append('gt_masks') + return results + class Resize: """Resize images & bbox & mask.""" @@ -257,16 +257,6 @@ class Resize: bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) results[key] = bboxes - def _resize_masks(self, results): - """Resize masks with ``results['scale']``""" - for key in results.get('mask_fields', []): - if results[key] is None: - continue - if self.keep_ratio: - results[key] = results[key].rescale(results['scale']) - else: - results[key] = results[key].resize(results['img_shape'][:2]) - def __call__(self, results): if 'scale' not in results: if 'scale_factor' in results: @@ -295,6 +285,16 @@ class Resize: # self._resize_seg(results) return results + def _resize_masks(self, results): + """Resize masks with ``results['scale']``""" + for key in results.get('mask_fields', []): + if results[key] is None: + continue + if self.keep_ratio: + results[key] = results[key].rescale(results['scale']) + else: + results[key] = results[key].resize(results['img_shape'][:2]) + def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(img_scale={self.img_scale}, ' diff --git 
a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py index 3fe27b0a4..b0c256644 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py @@ -31,7 +31,7 @@ from .utils import cal_mask_iou, cal_overlap_mask, cal_union_mask class RealOverlapDataset(CustomDataset): """Custom Synthetic Overlap dataset for text de-occlusion.""" - CLASSES = ('text', ) + custom_classes = ('text', ) def __init__(self, score_thresh=0.5, iou_thresh=0.5, res_flags=None, **kwargs): self.score_thresh = score_thresh @@ -108,17 +108,6 @@ class RealOverlapDataset(CustomDataset): canvas[masks[ins_idx]] = img[masks[ins_idx]] cv2.imwrite(os.path.join(vis_dir, save_name), canvas) - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: - if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - def eval_func(self, idx, box_scores, masks): # prepare gt ~ hard code gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in self.data_infos[idx]['seg_map_path']] @@ -194,61 +183,31 @@ class RealOverlapDataset(CustomDataset): return (intersection_text, union_text, intersection_overlap, union_overlap), \ text_ins_miou, max(match_matrix.shape) - def evaluate(self, - results, - metric='segm', - logger=None, - jsonfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Evaluation in COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. 
+ def _filter_imgs(self, min_size=32): + """Filter images too small or without ground truths.""" + valid_inds = [] + for i, img_info in enumerate(self.data_infos): + if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: + if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: + continue + if min(img_info['width'], img_info['height']) >= min_size: + valid_inds.append(i) + return valid_inds - Returns: - dict[str, float]: COCO style evaluation metric. - """ + def evaluate(self, results, metric='segm', **kwargs): metric = metric if isinstance(metric, str) else metric[0] - # allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] - allowed_metrics = ['segm', 'segm_multi', 'segm_with_each'] + allowed_metrics = ['segm', 'segm_multi'] if metric not in allowed_metrics: raise KeyError(f'metric {metric} is not supported') assert len(results) == self.__len__() - if metric in ['segm', 'segm_with_each']: + if metric in ['segm']: intersection_text = 0 union_text = 0 intersection_overlap = 0 union_overlap = 0 text_ins_miou_list = [] total_ins_num = 0 - if metric == 'segm_with_each': - qualifier_list = [] for idx, (box_scores, masks) in tqdm(enumerate(results)): # structure: # box_scores: List[ numpy_array with shape (num_ins, 5=4*coord+1*score) * num_classes ] @@ -261,18 +220,6 @@ class RealOverlapDataset(CustomDataset): union_overlap += overall_iou_metrics[3] text_ins_miou_list.append(text_ins_miou) total_ins_num += ins_num - if metric == 'segm_with_each': - # hard-code - if text_ins_miou / ins_num > 0.75: - qualifier_list.append(dict( - img_path=self.data_infos[idx]['img_path'], - score=text_ins_miou / ins_num, - iou=overall_iou_metrics[0] / (overall_iou_metrics[1] + 1e-6) - )) - if metric == 'segm_with_each': - # hard-code - with open('/home/whua/overlap_real_qualifiers.json', 'w', encoding='utf-8') as saver: - json.dump(qualifier_list, saver, ensure_ascii=False) metric_results = dict( text_iou=intersection_text / union_text, overlap_iou=intersection_overlap / union_overlap, diff --git a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py index b91f89b71..42eff337c 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py @@ -31,7 +31,7 @@ from .utils import cal_mask_iou, cal_overlap_mask, cal_union_mask class SynthOverlapDataset(CustomDataset): """Custom Synthetic Overlap dataset for text de-occlusion.""" - CLASSES = ('text', ) + custom_classes = ('text', ) def __init__(self, score_thresh=0.5, iou_thresh=0.5, res_flags=None, **kwargs): self.score_thresh = score_thresh @@ -108,17 +108,6 @@ class SynthOverlapDataset(CustomDataset): canvas[masks[ins_idx]] = img[masks[ins_idx]] cv2.imwrite(os.path.join(vis_dir, save_name), canvas) - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: - if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - def eval_func(self, idx, box_scores, masks): # prepare gt ~ hard code gt_masks = [cv2.imread(x, cv2.IMREAD_UNCHANGED) // 255 for x in self.data_infos[idx]['seg_map_path']] @@ -194,61 +183,31 @@ class SynthOverlapDataset(CustomDataset): return (intersection_text, union_text, 
intersection_overlap, union_overlap), \ text_ins_miou, max(match_matrix.shape) - def evaluate(self, - results, - metric='bbox', - logger=None, - jsonfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=None, - metric_items=None): - """Evaluation in COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - jsonfile_prefix (str | None): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - classwise (bool): Whether to evaluating the AP for each class. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float], optional): IoU threshold used for - evaluating recalls/mAPs. If set to a list, the average of all - IoUs will also be computed. If not specified, [0.50, 0.55, - 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95] will be used. - Default: None. - metric_items (list[str] | str, optional): Metric items that will - be returned. If not specified, ``['AR@100', 'AR@300', - 'AR@1000', 'AR_s@1000', 'AR_m@1000', 'AR_l@1000' ]`` will be - used when ``metric=='proposal'``, ``['mAP', 'mAP_50', 'mAP_75', - 'mAP_s', 'mAP_m', 'mAP_l']`` will be used when - ``metric=='bbox' or metric=='segm'``. + def _filter_imgs(self, min_size=32): + """Filter images too small or without ground truths.""" + valid_inds = [] + for i, img_info in enumerate(self.data_infos): + if self.filter_empty_gt and len(img_info['seg_map_path']) == 0: + if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0: + continue + if min(img_info['width'], img_info['height']) >= min_size: + valid_inds.append(i) + return valid_inds - Returns: - dict[str, float]: COCO style evaluation metric. 
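Both `evaluate` implementations (here and in `real_dataset.py` above) reduce the per-image tuples from `eval_func` by summing intersections and unions over the whole set before dividing once, so `text_iou` and `overlap_iou` are pixel-weighted dataset IoUs rather than a mean of per-image ratios, while `text_ins_miou` is averaged per instance. The reduction, with made-up per-image numbers:

```python
# Per-image tuples as produced by eval_func:
# (inter_text, union_text, inter_overlap, union_overlap); values are made up.
per_image = [
    (900, 1000, 80, 100),
    (450, 600, 30, 60),
]

inter_text = sum(m[0] for m in per_image)
union_text = sum(m[1] for m in per_image)
inter_overlap = sum(m[2] for m in per_image)
union_overlap = sum(m[3] for m in per_image)

metrics = dict(
    text_iou=inter_text / union_text,           # 0.84375, not mean(0.9, 0.75)
    overlap_iou=inter_overlap / union_overlap,  # pixel-weighted overlap IoU
)
print(metrics)
```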
- """ + def evaluate(self, results, metric='bbox',): metric = metric if isinstance(metric, str) else metric[0] - # allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast'] - allowed_metrics = ['segm', 'segm_multi', 'segm_with_each'] + allowed_metrics = ['segm', 'segm_multi'] if metric not in allowed_metrics: raise KeyError(f'metric {metric} is not supported') assert len(results) == self.__len__() - if metric in ['segm', 'segm_with_each']: + if metric in ['segm']: intersection_text = 0 union_text = 0 intersection_overlap = 0 union_overlap = 0 text_ins_miou_list = [] total_ins_num = 0 - if metric == 'segm_with_each': - qualifier_list = [] for idx, (box_scores, masks) in tqdm(enumerate(results)): # structure: # box_scores: List[ numpy_array with shape (num_ins, 5=4*coord+1*score) * num_classes ] @@ -261,18 +220,6 @@ class SynthOverlapDataset(CustomDataset): union_overlap += overall_iou_metrics[3] text_ins_miou_list.append(text_ins_miou) total_ins_num += ins_num - if metric == 'segm_with_each': - # hard-code - if text_ins_miou / ins_num > 0.8: - qualifier_list.append(dict( - img_path=self.data_infos[idx]['img_path'], - score=text_ins_miou / ins_num, - iou=overall_iou_metrics[0] / (overall_iou_metrics[1] + 1e-6) - )) - if metric == 'segm_with_each': - # hard-code - with open('/home/whua/overlap_qualifiers.json', 'w', encoding='utf-8') as saver: - json.dump(qualifier_list, saver, ensure_ascii=False) metric_results = dict( text_iou=intersection_text / union_text, overlap_iou=intersection_overlap / union_overlap, diff --git a/contrib/Overlap-Recovery/train/src/dataset/utils.py b/contrib/Overlap-Recovery/train/src/dataset/utils.py index 261f0d4ad..e9d3ef291 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/utils.py +++ b/contrib/Overlap-Recovery/train/src/dataset/utils.py @@ -198,107 +198,6 @@ class BitmapMasks: left:left + self.width] = self.masks return BitmapMasks(expanded_mask, expanded_h, expanded_w) - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Translate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - fill_val (int | float): Border value. Default 0 for masks. - interpolation (str): Same as :func:`mmcv.imtranslate`. - - Returns: - BitmapMasks: Translated BitmapMasks. - """ - if len(self.masks) == 0: - translated_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - translated_masks = mmcv.imtranslate( - self.masks.transpose((1, 2, 0)), - offset, - direction, - border_value=fill_val, - interpolation=interpolation) - if translated_masks.ndim == 2: - translated_masks = translated_masks[:, :, None] - translated_masks = translated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(translated_masks, *out_shape) - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - magnitude (int | float): The magnitude used for shear. - direction (str): The shear direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as in :func:`mmcv.imshear`. - - Returns: - BitmapMasks: The sheared masks. 
- """ - if len(self.masks) == 0: - sheared_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - sheared_masks = mmcv.imshear( - self.masks.transpose((1, 2, 0)), - magnitude, - direction, - border_value=border_value, - interpolation=interpolation) - if sheared_masks.ndim == 2: - sheared_masks = sheared_masks[:, :, None] - sheared_masks = sheared_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(sheared_masks, *out_shape) - - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """Rotate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - angle (int | float): Rotation angle in degrees. Positive values - mean counter-clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the - rotation in source image. If not specified, the center of - the image will be used. - scale (int | float): Isotropic scale factor. - fill_val (int | float): Border value. Default 0 for masks. - - Returns: - BitmapMasks: Rotated BitmapMasks. - """ - if len(self.masks) == 0: - rotated_masks = np.empty((0, *out_shape), dtype=self.masks.dtype) - else: - rotated_masks = mmcv.imrotate( - self.masks.transpose((1, 2, 0)), - angle, - center=center, - scale=scale, - border_value=fill_val) - if rotated_masks.ndim == 2: - # case when only one mask, (h, w) - rotated_masks = rotated_masks[:, :, None] # (h, w, 1) - rotated_masks = rotated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(rotated_masks, *out_shape) - @property def areas(self): """See :py:attr:`BaseInstanceMasks.areas`.""" diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py index 8650fb000..76879e4ab 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_assigner.py @@ -53,11 +53,6 @@ class AssignResult(NiceRepr): # Interface for possible user-defined properties self._extra_properties = {} - @property - def num_preds(self): - """int: the number of predictions in this assignment""" - return len(self.gt_inds) - def __nice__(self): """str: a "nice" summary string describing this assign result""" parts = [] @@ -77,10 +72,10 @@ class AssignResult(NiceRepr): parts.append(f'labels.shape={tuple(self.labels.shape)!r}') return ', '.join(parts) - def set_extra_property(self, key, value): - """Set user-defined new property.""" - assert key not in self.info - self._extra_properties[key] = value + @property + def num_preds(self): + """int: the number of predictions in this assignment""" + return len(self.gt_inds) @property def info(self): @@ -95,6 +90,11 @@ class AssignResult(NiceRepr): basic_info.update(self._extra_properties) return basic_info + def set_extra_property(self, key, value): + """Set user-defined new property.""" + assert key not in self.info + self._extra_properties[key] = value + def get_extra_property(self, key): """Get user-defined property.""" return self._extra_properties.get(key, None) @@ -143,7 +143,8 @@ class MaskHungarianAssigner(nn.Cell): gt_labels, img_meta=None, gt_bboxes_ignore=None, - eps=1e-7): + eps=1e-7, + **kwargs): """Computes one-to-one matching based on the weighted costs. This method assign each query prediction to a ground truth or @@ -213,7 +214,6 @@ class MaskHungarianAssigner(nn.Cell): cost = cls_cost + reg_cost + dice_cost + b_cost # 3. 
do Hungarian matching on CPU using linear_sum_assignment - # cost = cost.detach().cpu() cost = cost.asnumpy() if linear_sum_assignment is None: raise NotImplementedError('Please run "pip install scipy" to install scipy first.' ) @@ -251,4 +251,7 @@ CUSTOM_ASSIGNER = { def build_assigner(cfg): assigner_type = cfg.pop('type') - return CUSTOM_ASSIGNER[assigner_type](**cfg) + try: + return CUSTOM_ASSIGNER[assigner_type](**cfg) + except KeyError: + raise KeyError diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py index a2606b0ab..cfbc5111d 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_blocks.py @@ -270,14 +270,14 @@ class MultiheadAttention(nn.Cell): query = query.transpose((1, 0, 2)) key = key.transpose((1, 0, 2)) value = value.transpose((1, 0, 2)) - B, N, _ = query.shape + batch_size, num_sample, _ = query.shape else: - N, B, _ = query.shape + num_sample, batch_size, _ = query.shape out = self.attn( query_tensor=query, key_tensor=key, value_tensor=value, - attention_mask=ops.ones((B, N, N), ms.float32))[0] + attention_mask=ops.ones((batch_size, num_sample, num_sample), ms.float32))[0] if self.batch_first: out = out.transpose((1, 0, 2)) @@ -288,5 +288,4 @@ class MultiheadAttention(nn.Cell): if self.dropout_layer is not None: out = self.dropout_layer(out) return identity + out - # return identity + self.dropout_layer(self.proj_drop(out)) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py index ebb0a6f88..514f5691e 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py @@ -18,7 +18,7 @@ import mindspore as ms from mindspore import nn, ops -from .custom_operations import custom_einsum +from .custom_operations import custom_ein class FocalLossCost: @@ -158,14 +158,14 @@ class DiceCost(object): mask_preds = mask_preds.sigmoid() elif self.pred_act: mask_preds = mask_preds.softmax(dim=0) - dice_cost = self.custom_dice_loss(mask_preds, gt_masks, self.eps) + dice_cost = self.custom_dc_loss(mask_preds, gt_masks, self.eps) return dice_cost * self.weight - def custom_dice_loss(cls, input, target, eps=1e-3): + def custom_dc_loss(cls, input, target, eps=1e-3): input = input.reshape(input.shape[0], -1) target = target.reshape(target.shape[0], -1).astype(ms.float32) # einsum saves 10x memory - a = custom_einsum('nh,mh->nm', input, target) + a = custom_ein('nh,mh->nm', input, target) b = ops.reduce_sum(input * input, 1) + eps c = ops.reduce_sum(target * target, 1) + eps d = (2 * a) / (b[:, None] + c[None, ...]) @@ -200,12 +200,11 @@ class MaskCost(object): elif self.pred_act: cls_pred = cls_pred.softmax(dim=0) - _, H, W = target.shape - # flatten_cls_pred = cls_pred.view(num_proposals, -1) + _, height, width = target.shape # eingum is ~10 times faster than matmul - pos_cost = custom_einsum('nhw,mhw->nm', cls_pred, target) - neg_cost = custom_einsum('nhw,mhw->nm', 1 - cls_pred, 1 - target) - cls_cost = -(pos_cost + neg_cost) / (H * W) + pos_cost = custom_ein('nhw,mhw->nm', cls_pred, target) + neg_cost = custom_ein('nhw,mhw->nm', 1 - cls_pred, 1 - target) + cls_cost = -(pos_cost + neg_cost) / (height * width) return cls_cost * self.weight @@ 
-218,4 +217,7 @@ CUSTOM_MATCH_COST = { def build_match_cost(cfg): cost_type = cfg.pop('type') - return CUSTOM_MATCH_COST[cost_type](**cfg) + try: + return CUSTOM_MATCH_COST[cost_type](**cfg) + except KeyError: + raise KeyError diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py index 51e8d175b..c475b327b 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py @@ -113,6 +113,5 @@ def multi_apply(func, *args, **kwargs): return tuple(map(list, zip(*map_results))) -# def Einsum(format, x, y): -def custom_einsum(format, x, y): +def custom_ein(format, x, y): return ms.Tensor(np.einsum(format, x.asnumpy(), y.asnumpy()), dtype=x.dtype) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py index 4e6bc1664..43e6aa075 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/deoccluder_r50.py @@ -79,8 +79,8 @@ class CustomKNet(nn.Cell): gt_sem_cls = [] # batch_input_shape shoud be the same across images pad_h, pad_w = img_metas[0]['batch_input_shape'] - assign_H = pad_h // self.mask_assign_stride - assign_W = pad_w // self.mask_assign_stride + assign_height = pad_h // self.mask_assign_stride + assign_width = pad_w // self.mask_assign_stride for i, gt_mask in enumerate(gt_masks): mask_tensor = gt_mask.to_tensor(ms.float32) @@ -98,11 +98,11 @@ class CustomKNet(nn.Cell): if sem_seg.shape[0] == 0: gt_sem_seg.append( mask_tensor.new_zeros( - (mask_tensor.shape[0], assign_H, assign_W))) + (mask_tensor.shape[0], assign_height, assign_width))) else: gt_sem_seg.append( self.interpolate( - sem_seg[None], (assign_H, assign_W), + sem_seg[None], (assign_height, assign_width), align_corners=False)[0]) gt_sem_cls.append(sem_labels) @@ -112,11 +112,11 @@ class CustomKNet(nn.Cell): if mask_tensor.shape[0] == 0: gt_masks_tensor.append( mask_tensor.new_zeros( - (mask_tensor.shape[0], assign_H, assign_W))) + (mask_tensor.shape[0], assign_height, assign_width))) else: gt_masks_tensor.append( self.interpolate( - mask_tensor[None], (assign_H, assign_W), + mask_tensor[None], (assign_height, assign_width), align_corners=False)[0]) gt_masks = gt_masks_tensor x = self.extract_feat(img) diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py b/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py index 5d8226475..f43886baa 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/resnet.py @@ -103,8 +103,8 @@ class ResNet(nn.Cell): c3 = self.layer2(c2) c4 = self.layer3(c3) c5 = self.layer4(c4) - - return c2, c3, c4, c5 + results = (c2, c3, c4, c5) + return results def _make_layer(self, block, planes, blocks, stride=1): downsample = None diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py index e108b57e7..afe2f441d 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py @@ -70,6 +70,10 @@ class CustomKernelIterHead(nn.Cell): for i in range(self.num_stages): self.mask_head[i].init_weights() + @property + def 
apply_kernel_occlusion(self): + return self.mask_head[0].apply_kernel_occlusion + def init_mask_head(self, mask_roi_extractor, mask_head): """Initialize mask head and mask roi extractor. @@ -88,8 +92,8 @@ class CustomKernelIterHead(nn.Cell): self.mask_head[i] = self.mask_head[0] @property - def apply_kernel_occlusion(self): - return self.mask_head[0].apply_kernel_occlusion + def occ_pair_num(self): + return 2 * self.mask_head[0].pair_num def _mask_forward(self, x, object_feats, mask_preds, **kwargs): stage = kwargs.get('stage', None) @@ -114,10 +118,6 @@ class CustomKernelIterHead(nn.Cell): return mask_results - @property - def occ_pair_num(self): - return 2 * self.mask_head[0].pair_num - def construct(self, *inputs, **kwargs): if self.training: return self.forward_train(*inputs, **kwargs) @@ -243,8 +243,7 @@ class CustomKernelIterHead(nn.Cell): mask_preds, cls_score, img_metas, - imgs_whwh=None, - rescale=False): + **kwargs): # Decode initial proposals num_imgs = len(img_metas) @@ -293,7 +292,6 @@ class CustomKernelIterHead(nn.Cell): # Decode initial proposals num_imgs = len(img_metas) - # num_proposals = proposal_feats.size(1) object_feats = proposal_feats scaled_mask_preds = None diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py index dfed80aeb..bb5da55dc 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_update_head.py @@ -276,7 +276,6 @@ class CustomKernelUpdateHead(KernelUpdateHead): union_area = mask_target[:, 0].astype(ms.int32) | mask_target[:, 1].astype(ms.int32) interaction_area = mask_target[:, 0].astype(ms.int32) & mask_target[:, 1].astype(ms.int32) # union without interaction area - # occ_union_targets.append(union_area ^ interaction_area) occ_union_targets.append(union_area) occ_interact_targets.append(interaction_area) if len(occ_interact_targets) == 0: diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py index bb4aa0a87..aec765f52 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py @@ -201,7 +201,6 @@ class KernelUpdateHead(nn.Cell): get_size = ops.Size() if get_size(cls_score) > 0: avg_factor = labels.astype(ms.float32).asnumpy().sum() - H, W = cls_score.shape[:2] losses['loss_cls'] = self.loss_cls( cls_score.reshape(-1, 1), labels.reshape(-1)).sum() / avg_factor @@ -209,11 +208,11 @@ class KernelUpdateHead(nn.Cell): bool_pos_inds = pos_inds.astype(ms.bool_) # 0~self.num_classes-1 are FG, self.num_classes is BG # do not perform bounding box regression for BG anymore. 
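The occlusion branch in `CustomKernelUpdateHead` above derives its extra supervision from each ground-truth mask pair with plain bitwise ops: the pair's union (its combined extent) and its intersection (the overlapped region the kernels must disentangle); the removed line shows an earlier union-minus-intersection variant. In numpy terms:

```python
import numpy as np

# Two ground-truth instance masks of one overlapping pair (toy 4x4 case).
mask_a = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=np.int32)
mask_b = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=np.int32)

union_area = mask_a | mask_b        # target for the pair's combined extent
interaction_area = mask_a & mask_b  # target for the overlapped region

assert union_area.sum() == 6 and interaction_area.sum() == 2
```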
- H, W = mask_pred.shape[-2:] + height, width = mask_pred.shape[-2:] if bool_pos_inds.any(): candi_index = ops.nonzero(bool_pos_inds).squeeze(-1) - pos_mask_pred = mask_pred.reshape(num_preds, H, - W)[candi_index] + pos_mask_pred = mask_pred.reshape(num_preds, height, + width)[candi_index] pos_mask_targets = mask_targets[candi_index] losses['loss_mask'] = self.loss_mask(pos_mask_pred, pos_mask_targets) @@ -225,37 +224,6 @@ class KernelUpdateHead(nn.Cell): return losses - def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask, - pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls, - cfg): - - num_pos = pos_mask.shape[0] - num_neg = neg_mask.shape[0] - num_samples = num_pos + num_neg - H, W = pos_mask.shape[-2:] - # original implementation uses new_zeros since BG are set to be 0 - # now use empty & fill because BG cat_id = num_classes, - # FG cat_id = [0, num_classes-1] - labels = ms.numpy.full((num_samples, ), - self.num_classes, - dtype=ms.int64) - new_zeros = ops.Zeros() - label_weights = new_zeros((num_samples, self.num_classes), pos_mask.dtype) - mask_targets = new_zeros((num_samples, H, W), pos_mask.dtype) - mask_weights = new_zeros((num_samples, H, W), pos_mask.dtype) - if num_pos > 0: - labels[pos_inds] = pos_gt_labels - pos_weight = 1.0 if cfg['pos_weight'] <= 0 else cfg['pos_weight'] - label_weights[pos_inds] = pos_weight - pos_mask_targets = pos_gt_mask - mask_targets[pos_inds] = pos_mask_targets - mask_weights[pos_inds] = 1 - - if num_neg > 0: - label_weights[neg_inds] = 1.0 - - return labels, label_weights, mask_targets, mask_weights - def get_targets(self, sampling_results, gt_mask, @@ -293,7 +261,39 @@ class KernelUpdateHead(nn.Cell): label_weights = ops.concat(label_weights, 0) mask_targets = ops.concat(mask_targets, 0) mask_weights = ops.concat(mask_weights, 0) - return labels, label_weights, mask_targets, mask_weights + results = (labels, label_weights, mask_targets, mask_weights) + return results + + def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask, + pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls, + cfg): + + num_pos = pos_mask.shape[0] + num_neg = neg_mask.shape[0] + num_samples = num_pos + num_neg + height, width = pos_mask.shape[-2:] + # original implementation uses new_zeros since BG are set to be 0 + # now use empty & fill because BG cat_id = num_classes, + # FG cat_id = [0, num_classes-1] + labels = ms.numpy.full((num_samples, ), + self.num_classes, + dtype=ms.int64) + new_zeros = ops.Zeros() + label_weights = new_zeros((num_samples, self.num_classes), pos_mask.dtype) + mask_targets = new_zeros((num_samples, height, width), pos_mask.dtype) + mask_weights = new_zeros((num_samples, height, width), pos_mask.dtype) + if num_pos > 0: + labels[pos_inds] = pos_gt_labels + pos_weight = 1.0 if cfg['pos_weight'] <= 0 else cfg['pos_weight'] + label_weights[pos_inds] = pos_weight + pos_mask_targets = pos_gt_mask + mask_targets[pos_inds] = pos_mask_targets + mask_weights[pos_inds] = 1 + + if num_neg > 0: + label_weights[neg_inds] = 1.0 + results = (labels, label_weights, mask_targets, mask_weights) + return results def rescale_masks(self, masks_per_img, img_meta): h, w, _ = img_meta['img_shape'] diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py index 0f3cdc2cf..46794a6c6 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py @@ -141,57 +141,6 @@ class 
ConvKernelHead(nn.Cell): f'Initialize kernels by normal std: {self.kernel_init_std}') normal_init(self.init_kernels, mean=0, init_gain=self.kernel_init_std) - def _init_layers(self): - """Initialize a sparse set of proposal boxes and proposal features.""" - self.init_kernels = nn.Conv2d( - self.out_channels, - self.num_proposals, - self.conv_kernel_size, - padding=int(self.conv_kernel_size // 2), - has_bias=False) - - if self.semantic_fpn: - if self.loss_seg.use_sigmoid: - self.conv_seg = nn.Conv2d(self.out_channels, self.num_classes, - 1) - else: - self.conv_seg = nn.Conv2d(self.out_channels, - self.num_classes + 1, 1) - - if self.feat_downsample_stride > 1 and self.feat_refine: - self.ins_downsample = ConvModule( - self.in_channels, - self.out_channels, - 3, - stride=self.feat_refine_stride, - padding=1, - norm_cfg=self.norm_cfg) - self.seg_downsample = ConvModule( - self.in_channels, - self.out_channels, - 3, - stride=self.feat_refine_stride, - padding=1, - norm_cfg=self.norm_cfg) - - self.loc_convs = nn.CellList() - for i in range(self.num_loc_convs): - self.loc_convs.append( - ConvModule( - self.in_channels, - self.out_channels, - 1, - norm_cfg=self.norm_cfg)) - - self.seg_convs = nn.CellList() - for i in range(self.num_seg_convs): - self.seg_convs.append( - ConvModule( - self.in_channels, - self.out_channels, - 1, - norm_cfg=self.norm_cfg)) - def forward_train(self, img, gt_masks, @@ -262,6 +211,106 @@ class ConvKernelHead(nn.Cell): results = (losses, proposal_feats, x_feats, mask_preds, cls_scores) return results + def _init_layers(self): + """Initialize a sparse set of proposal boxes and proposal features.""" + self.init_kernels = nn.Conv2d( + self.out_channels, + self.num_proposals, + self.conv_kernel_size, + padding=int(self.conv_kernel_size // 2), + has_bias=False) + + if self.semantic_fpn: + if self.loss_seg.use_sigmoid: + self.conv_seg = nn.Conv2d(self.out_channels, self.num_classes, + 1) + else: + self.conv_seg = nn.Conv2d(self.out_channels, + self.num_classes + 1, 1) + + if self.feat_downsample_stride > 1 and self.feat_refine: + self.ins_downsample = ConvModule( + self.in_channels, + self.out_channels, + 3, + stride=self.feat_refine_stride, + padding=1, + norm_cfg=self.norm_cfg) + self.seg_downsample = ConvModule( + self.in_channels, + self.out_channels, + 3, + stride=self.feat_refine_stride, + padding=1, + norm_cfg=self.norm_cfg) + + self.loc_convs = nn.CellList() + for i in range(self.num_loc_convs): + self.loc_convs.append( + ConvModule( + self.in_channels, + self.out_channels, + 1, + norm_cfg=self.norm_cfg)) + + self.seg_convs = nn.CellList() + for i in range(self.num_seg_convs): + self.seg_convs.append( + ConvModule( + self.in_channels, + self.out_channels, + 1, + norm_cfg=self.norm_cfg)) + + def loss(self, + mask_pred, + cls_scores, + seg_preds, + proposal_feats, + labels, + label_weights, + mask_targets, + mask_weights, + seg_targets, + **kwargs): + losses = dict() + bg_class_ind = self.num_classes + # note in spare rcnn num_gt == num_pos + pos_inds = (labels >= 0).astype(ms.int32) & (labels < bg_class_ind).astype(ms.int32) + num_preds = mask_pred.shape[0] * mask_pred.shape[1] + if cls_scores is not None: + raise NotImplementedError + + bool_pos_inds = pos_inds.astype(ms.bool_) + # 0~self.num_classes-1 are FG, self.num_classes is BG + # do not perform bounding box regression for BG anymore. 
+ height, width = mask_pred.shape[-2:] + if bool_pos_inds.sum(): + candi_index = ops.nonzero(bool_pos_inds).squeeze(-1) + pos_mask_pred = mask_pred.reshape(num_preds, height, width)[candi_index] + pos_mask_targets = mask_targets[candi_index] + losses['loss_rpn_mask'] = self.loss_mask(pos_mask_pred, + pos_mask_targets) + losses['loss_rpn_dice'] = self.loss_dice(pos_mask_pred, + pos_mask_targets) + + if self.loss_rank is not None: + raise NotImplementedError + + else: + losses['loss_rpn_mask'] = mask_pred.sum() * 0 + losses['loss_rpn_dice'] = mask_pred.sum() * 0 + if self.loss_rank is not None: + losses['loss_rank'] = mask_pred.sum() * 0 + + if seg_preds is not None: + if self.loss_seg.use_sigmoid: + losses['loss_rpn_seg'] = self.loss_seg(seg_preds.squeeze(1), seg_targets.astype(ms.float32)) + else: + raise NotImplementedError + + return losses + def _decode_init_proposals(self, img, img_metas): num_imgs = len(img_metas) localization_feats = self.localization_fpn(img) @@ -330,95 +379,6 @@ class ConvKernelHead(nn.Cell): results = (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) return results - def loss(self, - mask_pred, - cls_scores, - seg_preds, - proposal_feats, - labels, - label_weights, - mask_targets, - mask_weights, - seg_targets, - **kwargs): - losses = dict() - bg_class_ind = self.num_classes - # note in spare rcnn num_gt == num_pos - pos_inds = (labels >= 0).astype(ms.int32) & (labels < bg_class_ind).astype(ms.int32) - num_preds = mask_pred.shape[0] * mask_pred.shape[1] - if cls_scores is not None: - raise NotImplementedError - - bool_pos_inds = pos_inds.astype(ms.bool_) - # 0~self.num_classes-1 are FG, self.num_classes is BG - # do not perform bounding box regression for BG anymore. - H, W = mask_pred.shape[-2:] - if bool_pos_inds.sum(): - candi_index = ops.nonzero(bool_pos_inds).squeeze(-1) - pos_mask_pred = mask_pred.reshape(num_preds, H, W)[candi_index] - pos_mask_targets = mask_targets[candi_index] - losses['loss_rpn_mask'] = self.loss_mask(pos_mask_pred, - pos_mask_targets) - losses['loss_rpn_dice'] = self.loss_dice(pos_mask_pred, - pos_mask_targets) - - if self.loss_rank is not None: - raise NotImplementedError - - else: - losses['loss_rpn_mask'] = mask_pred.sum() * 0 - losses['loss_rpn_dice'] = mask_pred.sum() * 0 - if self.loss_rank is not None: - losses['loss_rank'] = mask_pred.sum() * 0 - - if seg_preds is not None: - if self.loss_seg.use_sigmoid: - losses['loss_rpn_seg'] = self.loss_seg(seg_preds.squeeze(1), seg_targets.astype(ms.float32)) - else: - raise NotImplementedError - - return losses - - def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask, - pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls, - cfg): - num_pos = pos_mask.shape[0] - num_neg = neg_mask.shape[0] - num_samples = num_pos + num_neg - H, W = pos_mask.shape[-2:] - # original implementation uses new_zeros since BG are set to be 0 - # now use empty & fill because BG cat_id = num_classes, - # FG cat_id = [0, num_classes-1] - labels = ms.numpy.full((num_samples, ), - self.num_classes, - dtype=ms.int64) - new_zeros = ops.Zeros() - type_ = pos_mask.dtype - label_weights = new_zeros((num_samples, ), type_) - mask_targets = new_zeros((num_samples, H, W), type_) - mask_weights = new_zeros((num_samples, H, W), type_) - seg_targets = ms.numpy.full((H, W), - self.num_classes, - dtype=ms.int64) - - if gt_sem_cls is not None and gt_sem_seg is not None: - gt_sem_seg = gt_sem_seg.bool() - for sem_mask, sem_cls in zip(gt_sem_seg, gt_sem_cls): - seg_targets[sem_mask] = 
sem_cls.astype(ms.int64) - if num_pos > 0: - labels[pos_inds] = pos_gt_labels - pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight - label_weights[pos_inds] = pos_weight - mask_targets[pos_inds] = pos_gt_mask - mask_weights[pos_inds] = 1 - for i in range(num_pos): - seg_targets[pos_gt_mask[i].astype(ms.bool_)] = pos_gt_labels[i] - - if num_neg > 0: - label_weights[neg_inds] = 1.0 - - return labels, label_weights, mask_targets, mask_weights, seg_targets - def get_targets(self, sampling_results, gt_mask, @@ -456,7 +416,48 @@ class ConvKernelHead(nn.Cell): mask_targets = ops.concat(mask_targets, 0) mask_weights = ops.concat(mask_weights, 0) seg_targets = ops.stack(seg_targets, 0) - return labels, label_weights, mask_targets, mask_weights, seg_targets + results = (labels, label_weights, mask_targets, mask_weights, seg_targets) + return results + + def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask, + pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls, + cfg): + num_pos = pos_mask.shape[0] + num_neg = neg_mask.shape[0] + num_samples = num_pos + num_neg + height, width = pos_mask.shape[-2:] + # original implementation uses new_zeros since BG are set to be 0 + # now use empty & fill because BG cat_id = num_classes, + # FG cat_id = [0, num_classes-1] + labels = ms.numpy.full((num_samples, ), + self.num_classes, + dtype=ms.int64) + new_zeros = ops.Zeros() + type_ = pos_mask.dtype + label_weights = new_zeros((num_samples, ), type_) + mask_targets = new_zeros((num_samples, height, width), type_) + mask_weights = new_zeros((num_samples, height, width), type_) + seg_targets = ms.numpy.full((height, width), + self.num_classes, + dtype=ms.int64) + + if gt_sem_cls is not None and gt_sem_seg is not None: + gt_sem_seg = gt_sem_seg.bool() + for sem_mask, sem_cls in zip(gt_sem_seg, gt_sem_cls): + seg_targets[sem_mask] = sem_cls.astype(ms.int64) + if num_pos > 0: + labels[pos_inds] = pos_gt_labels + pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight + label_weights[pos_inds] = pos_weight + mask_targets[pos_inds] = pos_gt_mask + mask_weights[pos_inds] = 1 + for i in range(num_pos): + seg_targets[pos_gt_mask[i].astype(ms.bool_)] = pos_gt_labels[i] + + if num_neg > 0: + label_weights[neg_inds] = 1.0 + results = (labels, label_weights, mask_targets, mask_weights, seg_targets) + return results def simple_test_rpn(self, img, img_metas): """Forward function in testing stage.""" @@ -480,15 +481,12 @@ class ConvKernelHead(nn.Cell): """ rpn_results = self._decode_init_proposals_export(x) - # return rpn_results - (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) = rpn_results return rpn_results def _decode_init_proposals_export(self, img): num_imgs = 1 - # localization_feats = self.localization_fpn(img) localization_feats = self.localization_fpn.model_export(img) if isinstance(localization_feats, list): @@ -501,8 +499,6 @@ class ConvKernelHead(nn.Cell): loc_feats = self.ins_downsample(loc_feats) mask_preds = self.init_kernels(loc_feats) - # return mask_preds - if self.semantic_fpn: if isinstance(localization_feats, list): semantic_feats = localization_feats[1] @@ -520,8 +516,6 @@ class ConvKernelHead(nn.Cell): else: seg_preds = None - - # proposal_feats = self.init_kernels.weight.clone() tmp_feat = np.array(self.init_kernels.weight).astype(np.float32) tmp_feat = np.broadcast_to(tmp_feat[None], (num_imgs, ) + tmp_feat.shape) proposal_feats = ms.Tensor(np.copy(tmp_feat), dtype=self.init_kernels.weight.dtype) @@ -558,7 +552,6 @@ class ConvKernelHead(nn.Cell): [mask_preds, 
             stuff_kernels = self.conv_seg.weight[self.
                                                  num_thing_classes:].clone()
-            # stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape)
             stuff_kernels = ms.ops.broadcast_to(stuff_kernels[None], (num_imgs, ) + stuff_kernels.shape)
             proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1)
         results = (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds)
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py
index 98d26b9c4..7e3bfd66a 100644
--- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/positional_encoding.py
@@ -153,7 +153,7 @@ class SinePositionalEncoding(nn.Cell):
             tmp_pos_y = np.stack(
                 (np.sin(tmp_pos_y[:, :, :, 0::2]), np.cos(tmp_pos_y[:, :, :, 1::2])), axis=4
             ).reshape(batch_size, height, width, -1)
-            tmp_pos = np.concatenate((tmp_pos_y, tmp_pos_x),axis=3).transpose((0, 3, 1, 2))
+            tmp_pos = np.concatenate((tmp_pos_y, tmp_pos_x), axis=3).transpose((0, 3, 1, 2))
             pos = ms.Tensor(tmp_pos, dtype=ms.float32)
             return pos
 
-- 
Gitee

From 7497d3740dc5ca1b4485f8875363d54e238d427f Mon Sep 17 00:00:00 2001
From: HamPerdredes
Date: Wed, 14 Dec 2022 20:10:24 +0800
Subject: [PATCH 45/51] fix code

---
 .../train/src/dataset/base_dataset.py         |  50 +++--
 .../train/src/dataset/data_process.py         | 127 ++++++------
 .../train/src/dataset/real_dataset.py         |  23 +--
 .../train/src/dataset/synth_dataset.py        |  22 +--
 .../train/src/dataset/utils.py                |  31 ---
 .../custom_cells/custom_match_cost.py         |   8 +-
 .../custom_cells/custom_operations.py         |   4 +-
 .../deoccluder/roi/custom_kernel_iter_head.py |  28 +--
 .../src/deoccluder/roi/kernel_update_head.py  |  32 +--
 .../train/src/deoccluder/rpn/kernel_head.py   | 182 +++++++++---------
 10 files changed, 235 insertions(+), 272 deletions(-)

diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py
index b0ba141db..5a25884d5 100644
--- a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py
+++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py
@@ -55,8 +55,6 @@ class CustomDataset:
 
     custom_classes = None
 
-    PALETTE = None
-
     def __init__(self, ann_file, pipeline, img_prefix='', test_mode=False):
         self.ann_file = ann_file
         self.data_root = None
@@ -138,17 +136,6 @@ class CustomDataset:
         results['mask_fields'] = []
         results['seg_fields'] = []
 
-    def _filter_imgs(self, min_size=32):
-        """Filter images too small."""
-        if self.filter_empty_gt:
-            warnings.warn(
-                'CustomDataset does not support filtering empty gt images.')
-        valid_inds = []
-        for i, img_info in enumerate(self.data_infos):
-            if min(img_info['width'], img_info['height']) >= min_size:
-                valid_inds.append(i)
-        return valid_inds
-
     def __getitem__(self, idx):
         """Get training/test data after pipeline.
 
@@ -169,17 +156,16 @@ class CustomDataset:
                 continue
             return data
 
-    def _set_group_flag(self):
-        """Set flag according to image aspect ratio.
-
-        Images with aspect ratio greater than 1 will be set as group 1,
-        otherwise group 0.
-        """
-        self.flag = np.zeros(len(self), dtype=np.uint8)
-        for i in range(len(self)):
-            img_info = self.data_infos[i]
-            if img_info['width'] / img_info['height'] > 1:
-                self.flag[i] = 1
+    def _filter_imgs(self, min_size=32):
+        """Filter images too small."""
+        if self.filter_empty_gt:
+            warnings.warn(
+                'CustomDataset does not support filtering empty gt images.')
+        valid_inds = []
+        for i, img_info in enumerate(self.data_infos):
+            if min(img_info['width'], img_info['height']) >= min_size:
+                valid_inds.append(i)
+        return valid_inds
 
     def prepare_train_img(self, idx):
         """Get training data and annotations after pipeline.
@@ -200,6 +186,18 @@ class CustomDataset:
         self.pre_pipeline(results)
         return self.pipeline(results)
 
+    def _set_group_flag(self):
+        """Set flag according to image aspect ratio.
+
+        Images with aspect ratio greater than 1 will be set as group 1,
+        otherwise group 0.
+        """
+        self.flag = np.zeros(len(self), dtype=np.uint8)
+        for i in range(len(self)):
+            img_info = self.data_infos[i]
+            if img_info['width'] / img_info['height'] > 1:
+                self.flag[i] = 1
+
     def _rand_another(self, idx):
         """Get another random index from the same group as the given index."""
         pool = np.where(self.flag == self.flag[idx])[0]
@@ -265,10 +263,6 @@ class CustomDataset:
         print(self, results)
         raise NotImplementedError
 
-    def evaluate(self, *args, **kwargs):
-        print(self, args)
-        raise NotImplementedError
-
     def __repr__(self):
         """Print the number of instance number."""
         dataset_type = 'Test' if self.test_mode else 'Train'
diff --git a/contrib/Overlap-Recovery/train/src/dataset/data_process.py b/contrib/Overlap-Recovery/train/src/dataset/data_process.py
index f38361a9a..638208956 100644
--- a/contrib/Overlap-Recovery/train/src/dataset/data_process.py
+++ b/contrib/Overlap-Recovery/train/src/dataset/data_process.py
@@ -136,6 +136,17 @@ class CustomLoadAnnotations:
         self.with_label = with_label
         self.with_mask = with_mask
 
+    def __call__(self, results):
+        if self.with_bbox:
+            results = self._load_bboxes(results)
+            if results is None:
+                return None
+        if self.with_label:
+            results = self._load_labels(results)
+        if self.with_mask:
+            results = self._load_masks(results)
+        return results
+
     @staticmethod
     def _load_bboxes(results):
         ann_info = results['ann_info']
@@ -153,23 +164,6 @@ class CustomLoadAnnotations:
 
         return results
 
-    def __call__(self, results):
-        if self.with_bbox:
-            results = self._load_bboxes(results)
-            if results is None:
-                return None
-        if self.with_label:
-            results = self._load_labels(results)
-        if self.with_mask:
-            results = self._load_masks(results)
-        return results
-
-    @staticmethod
-    def _load_labels(results):
-        results['gt_labels'] = results['ann_info']['labels'].copy()
-        results['text_labels'] = results['ann_info']['text_labels'].copy()
-        return results
-
     def __repr__(self):
         repr_str = self.__class__.__name__
         repr_str += f'(with_bbox={self.with_bbox}, '
@@ -177,6 +171,12 @@ class CustomLoadAnnotations:
         repr_str += f'with_mask={self.with_mask}, '
         return repr_str
 
+    @staticmethod
+    def _load_labels(results):
+        results['gt_labels'] = results['ann_info']['labels'].copy()
+        results['text_labels'] = results['ann_info']['text_labels'].copy()
+        return results
+
     @staticmethod
     def _load_masks(results):
         h, w = results['img_info']['height'], results['img_info']['width']
@@ -247,16 +247,6 @@ class Resize:
         results['scale_factor'] = scale_factor
         results['keep_ratio'] = self.keep_ratio
 
-    def _resize_bboxes(self, results):
-        """Resize bounding boxes with ``results['scale_factor']``."""
-        for key in results.get('bbox_fields', []):
-            bboxes = results[key] * results['scale_factor']
-            if self.bbox_clip_border:
-                img_shape = results['img_shape']
-                bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1])
-                bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0])
-            results[key] = bboxes
-
     def __call__(self, results):
         if 'scale' not in results:
             if 'scale_factor' in results:
@@ -282,7 +272,6 @@ class Resize:
             self._resize_masks(results)
         if len(results.get('seg_fields', [])) > 0:
             raise NotImplementedError
-            # self._resize_seg(results)
         return results
 
     def _resize_masks(self, results):
@@ -295,6 +284,16 @@ class Resize:
             else:
                 results[key] = results[key].resize(results['img_shape'][:2])
 
+    def _resize_bboxes(self, results):
+        """Resize bounding boxes with ``results['scale_factor']``."""
+        for key in results.get('bbox_fields', []):
+            bboxes = results[key] * results['scale_factor']
+            if self.bbox_clip_border:
+                img_shape = results['img_shape']
+                bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1])
+                bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0])
+            results[key] = bboxes
+
     def __repr__(self):
         repr_str = self.__class__.__name__
         repr_str += f'(img_scale={self.img_scale}, '
@@ -333,28 +332,6 @@ class RandomFlip:
             if isinstance(flip_ratio, list):
                 assert len(self.flip_ratio) == len(self.direction)
 
-    def bbox_flip(self, bboxes, img_shape, direction):
-        assert bboxes.shape[-1] % 4 == 0
-        flipped = bboxes.copy()
-        if direction == 'horizontal':
-            w = img_shape[1]
-            flipped[..., 0::4] = w - bboxes[..., 2::4]
-            flipped[..., 2::4] = w - bboxes[..., 0::4]
-        elif direction == 'vertical':
-            h = img_shape[0]
-            flipped[..., 1::4] = h - bboxes[..., 3::4]
-            flipped[..., 3::4] = h - bboxes[..., 1::4]
-        elif direction == 'diagonal':
-            w = img_shape[1]
-            h = img_shape[0]
-            flipped[..., 0::4] = w - bboxes[..., 2::4]
-            flipped[..., 1::4] = h - bboxes[..., 3::4]
-            flipped[..., 2::4] = w - bboxes[..., 0::4]
-            flipped[..., 3::4] = h - bboxes[..., 1::4]
-        else:
-            raise ValueError(f"Invalid flipping direction '{direction}'")
-        return flipped
-
     def __call__(self, results):
         if 'flip' not in results:
             if isinstance(self.direction, list):
@@ -399,6 +376,29 @@ class RandomFlip:
                     results[key], direction=results['flip_direction'])
         return results
 
+    @staticmethod
+    def bbox_flip(bboxes, img_shape, direction):
+        assert bboxes.shape[-1] % 4 == 0
+        flipped = bboxes.copy()
+        if direction == 'horizontal':
+            w = img_shape[1]
+            flipped[..., 0::4] = w - bboxes[..., 2::4]
+            flipped[..., 2::4] = w - bboxes[..., 0::4]
+        elif direction == 'vertical':
+            h = img_shape[0]
+            flipped[..., 1::4] = h - bboxes[..., 3::4]
+            flipped[..., 3::4] = h - bboxes[..., 1::4]
+        elif direction == 'diagonal':
+            w = img_shape[1]
+            h = img_shape[0]
+            flipped[..., 0::4] = w - bboxes[..., 2::4]
+            flipped[..., 1::4] = h - bboxes[..., 3::4]
+            flipped[..., 2::4] = w - bboxes[..., 0::4]
+            flipped[..., 3::4] = h - bboxes[..., 1::4]
+        else:
+            raise ValueError(f"Invalid flipping direction '{direction}'")
+        return flipped
+
     def __repr__(self):
         return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})'
 
@@ -432,11 +432,13 @@ class Pad:
                  size=None,
                  size_divisor=None,
                  pad_to_square=False,
-                 pad_val=dict(img=0, masks=0, seg=255),
+                 pad_val=None,
                  pad_ins_num=4,
                  eval_model=False):
         self.size = size
         self.size_divisor = size_divisor
+        if isinstance(pad_val, type(None)):
+            pad_val = dict(img=0, masks=0, seg=255)
         if isinstance(pad_val, float) or isinstance(pad_val, int):
             warnings.warn(
                 'pad_val of float type is deprecated now, '
@@ -483,14 +485,6 @@ class Pad:
         for key in results.get('mask_fields', []):
             results[key] = results[key].pad(pad_shape, pad_val=pad_val)
 
-    def _pad_seg(self, results):
-        """Pad semantic segmentation map according to
-        ``results['pad_shape']``."""
-        pad_val = self.pad_val.get('seg', 255)
-        for key in results.get('seg_fields', []):
-            results[key] = mmcv.impad(
-                results[key], shape=results['pad_shape'][:2], pad_val=pad_val)
-
     def __call__(self, results):
         self._pad_img(results)
         if self.eval_model:
@@ -514,6 +508,14 @@ class Pad:
 
         return results
 
+    def _pad_seg(self, results):
+        """Pad semantic segmentation map according to
+        ``results['pad_shape']``."""
+        pad_val = self.pad_val.get('seg', 255)
+        for key in results.get('seg_fields', []):
+            results[key] = mmcv.impad(
+                results[key], shape=results['pad_shape'][:2], pad_val=pad_val)
+
     def __repr__(self):
         repr_str = self.__class__.__name__
         repr_str += f'(size={self.size}, '
@@ -545,7 +547,9 @@ class DefaultFormatBundle:
 
     def __init__(self,
                  img_to_float=True,
-                 pad_val=dict(img=0, masks=0, seg=255)):
+                 pad_val=None):
+        if isinstance(pad_val, type(None)):
+            pad_val = dict(img=0, masks=0, seg=255)
         self.img_to_float = img_to_float
         self.pad_val = pad_val
 
@@ -563,8 +567,9 @@ class DefaultFormatBundle:
             if len(img.shape) < 3:
                 img = np.expand_dims(img, -1)
             img = np.ascontiguousarray(img.transpose(2, 0, 1))
+            pad_val = self.pad_val.get('img', 0)
             results['img'] = DataContainer(
-                to_tensor(img), padding_value=self.pad_val['img'], stack=True)
+                to_tensor(img), padding_value=pad_val, stack=True)
         for key in ['proposals', 'gt_bboxes', 'gt_bboxes_ignore', 'gt_labels']:
             if key not in results:
                 continue
diff --git a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py
index b0c256644..716d62a6e 100644
--- a/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py
+++ b/contrib/Overlap-Recovery/train/src/dataset/real_dataset.py
@@ -183,17 +183,6 @@ class RealOverlapDataset(CustomDataset):
         return (intersection_text, union_text, intersection_overlap, union_overlap), \
                text_ins_miou, max(match_matrix.shape)
 
-    def _filter_imgs(self, min_size=32):
-        """Filter images too small or without ground truths."""
-        valid_inds = []
-        for i, img_info in enumerate(self.data_infos):
-            if self.filter_empty_gt and len(img_info['seg_map_path']) == 0:
-                if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0:
-                    continue
-            if min(img_info['width'], img_info['height']) >= min_size:
-                valid_inds.append(i)
-        return valid_inds
-
     def evaluate(self, results, metric='segm', **kwargs):
         metric = metric if isinstance(metric, str) else metric[0]
         allowed_metrics = ['segm', 'segm_multi']
@@ -254,5 +243,13 @@ class RealOverlapDataset(CustomDataset):
 
         return metric_results
 
-
-
+    def _filter_imgs(self, min_size=32):
+        """Filter images too small or without ground truths."""
+        valid_inds = []
+        for i, img_info in enumerate(self.data_infos):
+            if self.filter_empty_gt and len(img_info['seg_map_path']) == 0:
+                if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0:
+                    continue
+            if min(img_info['width'], img_info['height']) >= min_size:
+                valid_inds.append(i)
+        return valid_inds
diff --git a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py
index 42eff337c..67a20ba8b 100644
--- a/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py
+++ b/contrib/Overlap-Recovery/train/src/dataset/synth_dataset.py
@@ -183,17 +183,6 @@ class SynthOverlapDataset(CustomDataset):
         return (intersection_text, union_text, intersection_overlap, union_overlap), \
               text_ins_miou, max(match_matrix.shape)
 
-    def _filter_imgs(self, min_size=32):
-        """Filter images too small or without ground truths."""
-        valid_inds = []
-        for i, img_info in enumerate(self.data_infos):
-            if self.filter_empty_gt and len(img_info['seg_map_path']) == 0:
-                if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0:
-                    continue
-            if min(img_info['width'], img_info['height']) >= min_size:
-                valid_inds.append(i)
-        return valid_inds
-
     def evaluate(self, results, metric='bbox',):
         metric = metric if isinstance(metric, str) else metric[0]
         allowed_metrics = ['segm', 'segm_multi']
@@ -254,4 +243,13 @@ class SynthOverlapDataset(CustomDataset):
 
         return metric_results
 
-
+    def _filter_imgs(self, min_size=32):
+        """Filter images too small or without ground truths."""
+        valid_inds = []
+        for i, img_info in enumerate(self.data_infos):
+            if self.filter_empty_gt and len(img_info['seg_map_path']) == 0:
+                if len(img_info['seg_map_path']) == 0 or len(img_info['text_labels']) == 0:
+                    continue
+            if min(img_info['width'], img_info['height']) >= min_size:
+                valid_inds.append(i)
+        return valid_inds
diff --git a/contrib/Overlap-Recovery/train/src/dataset/utils.py b/contrib/Overlap-Recovery/train/src/dataset/utils.py
index e9d3ef291..b10a1921b 100644
--- a/contrib/Overlap-Recovery/train/src/dataset/utils.py
+++ b/contrib/Overlap-Recovery/train/src/dataset/utils.py
@@ -186,42 +186,11 @@ class BitmapMasks:
             cropped_masks = self.masks[:, y1:y1 + h, x1:x1 + w]
         return BitmapMasks(cropped_masks, h, w)
 
-    def expand(self, expanded_h, expanded_w, top, left):
-        """See :func:`BaseInstanceMasks.expand`."""
-        if len(self.masks) == 0:
-            expanded_mask = np.empty((0, expanded_h, expanded_w),
-                                     dtype=np.uint8)
-        else:
-            expanded_mask = np.zeros((len(self), expanded_h, expanded_w),
-                                     dtype=np.uint8)
-            expanded_mask[:, top:top + self.height,
-                          left:left + self.width] = self.masks
-        return BitmapMasks(expanded_mask, expanded_h, expanded_w)
-
     @property
     def areas(self):
         """See :py:attr:`BaseInstanceMasks.areas`."""
         return self.masks.sum((1, 2))
 
-    def to_ndarray(self):
-        """See :func:`BaseInstanceMasks.to_ndarray`."""
-        return self.masks
-
     def to_tensor(self, dtype):
         """See :func:`BaseInstanceMasks.to_tensor`."""
         return ms.Tensor(self.masks, dtype=dtype)
-
-    def get_bboxes(self):
-        num_masks = len(self)
-        boxes = np.zeros((num_masks, 4), dtype=np.float32)
-        x_any = self.masks.any(axis=1)
-        y_any = self.masks.any(axis=2)
-        for idx in range(num_masks):
-            x = np.where(x_any[idx, :])[0]
-            y = np.where(y_any[idx, :])[0]
-            if len(x) > 0 and len(y) > 0:
-                # use +1 for x_max and y_max so that the right and bottom
-                # boundary of instance masks are fully included by the box
-                boxes[idx, :] = np.array([x[0], y[0], x[-1] + 1, y[-1] + 1],
-                                         dtype=np.float32)
-        return boxes
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py
index 514f5691e..16ddc2604 100644
--- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_match_cost.py
@@ -161,12 +161,12 @@ class DiceCost(object):
             dice_cost = self.custom_dc_loss(mask_preds, gt_masks, self.eps)
         return dice_cost * self.weight
 
-    def custom_dc_loss(cls, input, target, eps=1e-3):
-        input = input.reshape(input.shape[0], -1)
+    def custom_dc_loss(cls, input_x, target, eps=1e-3):
+        input_x = input_x.reshape(input_x.shape[0], -1)
         target = target.reshape(target.shape[0], -1).astype(ms.float32)
         # einsum saves 10x memory
-        a = custom_ein('nh,mh->nm', input, target)
-        b = ops.reduce_sum(input * input, 1) + eps
+        a = custom_ein('nh,mh->nm', input_x, target)
+        b = ops.reduce_sum(input_x * input_x, 1) + eps
         c = ops.reduce_sum(target * target, 1) + eps
         d = (2 * a) / (b[:, None] + c[None, ...])
         # 1 is a constance that will not affect the matching, so ommitted
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py
index c475b327b..3f56ae35e 100644
--- a/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/custom_cells/custom_operations.py
@@ -113,5 +113,5 @@ def multi_apply(func, *args, **kwargs):
     return tuple(map(list, zip(*map_results)))
 
 
-def custom_ein(format, x, y):
-    return ms.Tensor(np.einsum(format, x.asnumpy(), y.asnumpy()), dtype=x.dtype)
+def custom_ein(def_format, x, y):
+    return ms.Tensor(np.einsum(def_format, x.asnumpy(), y.asnumpy()), dtype=x.dtype)
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py
index afe2f441d..5cad35b13 100644
--- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py
@@ -66,13 +66,17 @@ class CustomKernelIterHead(nn.Cell):
                 self.mask_sampler.append(
                     build_sampler(rcnn_train_cfg['sampler']))
 
+    @property
+    def apply_kernel_occlusion(self):
+        return self.mask_head[0].apply_kernel_occlusion
+
     def init_weights(self):
         for i in range(self.num_stages):
             self.mask_head[i].init_weights()
 
     @property
-    def apply_kernel_occlusion(self):
-        return self.mask_head[0].apply_kernel_occlusion
+    def occ_pair_num(self):
+        return 2 * self.mask_head[0].pair_num
 
     def init_mask_head(self, mask_roi_extractor, mask_head):
         """Initialize mask head and mask roi extractor.
@@ -91,9 +95,11 @@ class CustomKernelIterHead(nn.Cell):
         for i in range(self.num_stages):
             self.mask_head[i] = self.mask_head[0]
 
-    @property
-    def occ_pair_num(self):
-        return 2 * self.mask_head[0].pair_num
+    def construct(self, *inputs, **kwargs):
+        if self.training:
+            return self.forward_train(*inputs, **kwargs)
+        else:
+            return self.simple_test(*inputs, **kwargs)
 
     def _mask_forward(self, x, object_feats, mask_preds, **kwargs):
         stage = kwargs.get('stage', None)
@@ -118,12 +124,6 @@ class CustomKernelIterHead(nn.Cell):
 
         return mask_results
 
-    def construct(self, *inputs, **kwargs):
-        if self.training:
-            return self.forward_train(*inputs, **kwargs)
-        else:
-            return self.simple_test(*inputs, **kwargs)
-
     def forward_train(self,
                       x,
                       proposal_feats,
@@ -315,7 +315,7 @@ class CustomKernelIterHead(nn.Cell):
         seg_scores = []
 
         mask_preds = mask_preds.detach()  # num_det, h,w
-        det_labels = det_labels.detach()  #class id
+        det_labels = det_labels.detach()  # class id
         cls_scores = cls_scores.detach()
 
         num_ins = mask_preds.shape[0]  # num_dets, h, w
@@ -342,5 +342,5 @@ class CustomKernelIterHead(nn.Cell):
                 align_corners=False)
         else:
             scaled_mask_preds = mask_preds
-
-        return cls_score, mask_preds, scaled_mask_preds, object_feats
+        results = (cls_score, mask_preds, scaled_mask_preds, object_feats)
+        return results
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py
index aec765f52..cbed9dd16 100644
--- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py
@@ -264,6 +264,22 @@ class KernelUpdateHead(nn.Cell):
         results = (labels, label_weights, mask_targets, mask_weights)
         return results
 
+    def rescale_masks(self, masks_per_img, img_meta):
+        h, w, _ = img_meta['img_shape']
+        expand_dims = ops.ExpandDims()
+        masks_per_img = self.interpolate(
+            ms.ops.sigmoid(expand_dims(masks_per_img, 0)),
+            size=img_meta['batch_input_shape'],
+            align_corners=False)
+
+        masks_per_img = masks_per_img[:, :, :h, :w]
+        ori_shape = img_meta['ori_shape']
+        seg_masks = self.interpolate(
+            ms.Tensor(masks_per_img.asnumpy()),
+            size=tuple(ori_shape[:2].asnumpy().tolist()),
+            align_corners=False).squeeze(0)
+        return seg_masks
+
     def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask,
                            pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls,
                            cfg):
@@ -295,22 +311,6 @@ class KernelUpdateHead(nn.Cell):
         results = (labels, label_weights, mask_targets, mask_weights)
         return results
 
-    def rescale_masks(self, masks_per_img, img_meta):
-        h, w, _ = img_meta['img_shape']
-        expand_dims = ops.ExpandDims()
-        masks_per_img = self.interpolate(
-            ms.ops.sigmoid(expand_dims(masks_per_img, 0)),
-            size=img_meta['batch_input_shape'],
-            align_corners=False)
-
-        masks_per_img = masks_per_img[:, :, :h, :w]
-        ori_shape = img_meta['ori_shape']
-        seg_masks = self.interpolate(
-            ms.Tensor(masks_per_img.asnumpy()),
-            size=tuple(ori_shape[:2].asnumpy().tolist()),
-            align_corners=False).squeeze(0)
-        return seg_masks
-
     def get_seg_masks(self, masks_per_img, labels_per_img, scores_per_img,
                       test_cfg, img_meta):
         # resize mask predictions back
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py
index 46794a6c6..e8d4ab08e 100644
--- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py
@@ -211,6 +211,55 @@ class ConvKernelHead(nn.Cell):
         results = (losses, proposal_feats, x_feats, mask_preds, cls_scores)
         return results
 
+    def loss(self,
+             mask_pred,
+             cls_scores,
+             seg_preds,
+             proposal_feats,
+             labels,
+             label_weights,
+             mask_targets,
+             mask_weights,
+             seg_targets,
+             **kwargs):
+        losses = dict()
+        bg_class_ind = self.num_classes
+        # note in spare rcnn num_gt == num_pos
+        pos_inds = (labels >= 0).astype(ms.int32) & (labels < bg_class_ind).astype(ms.int32)
+        num_preds = mask_pred.shape[0] * mask_pred.shape[1]
+        if cls_scores is not None:
+            raise NotImplementedError
+
+        bool_pos_inds = pos_inds.astype(ms.bool_)
+        # 0~self.num_classes-1 are FG, self.num_classes is BG
+        # do not perform bounding box regression for BG anymore.
+        height, width = mask_pred.shape[-2:]
+        if bool_pos_inds.sum():
+            candi_index = ops.nonzero(bool_pos_inds).squeeze(-1)
+            pos_mask_pred = mask_pred.reshape(num_preds, height, width)[candi_index]
+            pos_mask_targets = mask_targets[candi_index]
+            losses['loss_rpn_mask'] = self.loss_mask(pos_mask_pred,
+                                                     pos_mask_targets)
+            losses['loss_rpn_dice'] = self.loss_dice(pos_mask_pred,
+                                                     pos_mask_targets)
+
+            if self.loss_rank is not None:
+                raise NotImplementedError
+
+        else:
+            losses['loss_rpn_mask'] = mask_pred.sum() * 0
+            losses['loss_rpn_dice'] = mask_pred.sum() * 0
+            if self.loss_rank is not None:
+                losses['loss_rank'] = mask_pred.sum() * 0
+
+        if seg_preds is not None:
+            if self.loss_seg.use_sigmoid:
+                losses['loss_rpn_seg'] = self.loss_seg(seg_preds.squeeze(1), seg_targets.astype(ms.float32))
+            else:
+                raise NotImplementedError
+
+        return losses
+
     def _init_layers(self):
         """Initialize a sparse set of proposal boxes and proposal features."""
         self.init_kernels = nn.Conv2d(
@@ -262,54 +311,45 @@ class ConvKernelHead(nn.Cell):
                     1,
                     norm_cfg=self.norm_cfg))
 
-    def loss(self,
-             mask_pred,
-             cls_scores,
-             seg_preds,
-             proposal_feats,
-             labels,
-             label_weights,
-             mask_targets,
-             mask_weights,
-             seg_targets,
-             **kwargs):
-        losses = dict()
-        bg_class_ind = self.num_classes
-        # note in spare rcnn num_gt == num_pos
-        pos_inds = (labels >= 0).astype(ms.int32) & (labels < bg_class_ind).astype(ms.int32)
-        num_preds = mask_pred.shape[0] * mask_pred.shape[1]
-        if cls_scores is not None:
-            raise NotImplementedError
-
-        bool_pos_inds = pos_inds.astype(ms.bool_)
-        # 0~self.num_classes-1 are FG, self.num_classes is BG
-        # do not perform bounding box regression for BG anymore.
-        height, width = mask_pred.shape[-2:]
-        if bool_pos_inds.sum():
-            candi_index = ops.nonzero(bool_pos_inds).squeeze(-1)
-            pos_mask_pred = mask_pred.reshape(num_preds, height, width)[candi_index]
-            pos_mask_targets = mask_targets[candi_index]
-            losses['loss_rpn_mask'] = self.loss_mask(pos_mask_pred,
-                                                     pos_mask_targets)
-            losses['loss_rpn_dice'] = self.loss_dice(pos_mask_pred,
-                                                     pos_mask_targets)
-
-            if self.loss_rank is not None:
-                raise NotImplementedError
-
-        else:
-            losses['loss_rpn_mask'] = mask_pred.sum() * 0
-            losses['loss_rpn_dice'] = mask_pred.sum() * 0
-            if self.loss_rank is not None:
-                losses['loss_rank'] = mask_pred.sum() * 0
-
-        if seg_preds is not None:
-            if self.loss_seg.use_sigmoid:
-                losses['loss_rpn_seg'] = self.loss_seg(seg_preds.squeeze(1), seg_targets.astype(ms.float32))
-            else:
-                raise NotImplementedError
-
-        return losses
+    def get_targets(self,
+                    sampling_results,
+                    gt_mask,
+                    rpn_train_cfg,
+                    concat=True,
+                    gt_sem_seg=None,
+                    gt_sem_cls=None):
+        pos_inds_list = [res.pos_inds for res in sampling_results]
+        neg_inds_list = [res.neg_inds for res in sampling_results]
+        pos_mask_list = [res.pos_masks for res in sampling_results]
+        neg_mask_list = [res.neg_masks for res in sampling_results]
+        pos_gt_mask_list = [res.pos_gt_masks for res in sampling_results]
+        pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results]
+        if gt_sem_seg is None:
+            # me: fix hard-code bug.
+            num_imgs = len(sampling_results)
+            gt_sem_seg = [None] * num_imgs
+            gt_sem_cls = [None] * num_imgs
+        results = multi_apply(
+            self._get_target_single,
+            pos_inds_list,
+            neg_inds_list,
+            pos_mask_list,
+            neg_mask_list,
+            pos_gt_mask_list,
+            pos_gt_labels_list,
+            gt_sem_seg,
+            gt_sem_cls,
+            cfg=rpn_train_cfg)
+        (labels, label_weights, mask_targets, mask_weights,
+         seg_targets) = results
+        if concat:
+            labels = ops.concat(labels, 0)
+            label_weights = ops.concat(label_weights, 0)
+            mask_targets = ops.concat(mask_targets, 0)
+            mask_weights = ops.concat(mask_weights, 0)
+            seg_targets = ops.stack(seg_targets, 0)
+        results = (labels, label_weights, mask_targets, mask_weights, seg_targets)
+        return results
 
     def _decode_init_proposals(self, img, img_metas):
         num_imgs = len(img_metas)
@@ -379,45 +419,9 @@ class ConvKernelHead(nn.Cell):
         results = (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds)
         return results
 
-    def get_targets(self,
-                    sampling_results,
-                    gt_mask,
-                    rpn_train_cfg,
-                    concat=True,
-                    gt_sem_seg=None,
-                    gt_sem_cls=None):
-        pos_inds_list = [res.pos_inds for res in sampling_results]
-        neg_inds_list = [res.neg_inds for res in sampling_results]
-        pos_mask_list = [res.pos_masks for res in sampling_results]
-        neg_mask_list = [res.neg_masks for res in sampling_results]
-        pos_gt_mask_list = [res.pos_gt_masks for res in sampling_results]
-        pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results]
-        if gt_sem_seg is None:
-            # me: fix hard-code bug.
-            num_imgs = len(sampling_results)
-            gt_sem_seg = [None] * num_imgs
-            gt_sem_cls = [None] * num_imgs
-        results = multi_apply(
-            self._get_target_single,
-            pos_inds_list,
-            neg_inds_list,
-            pos_mask_list,
-            neg_mask_list,
-            pos_gt_mask_list,
-            pos_gt_labels_list,
-            gt_sem_seg,
-            gt_sem_cls,
-            cfg=rpn_train_cfg)
-        (labels, label_weights, mask_targets, mask_weights,
-         seg_targets) = results
-        if concat:
-            labels = ops.concat(labels, 0)
-            label_weights = ops.concat(label_weights, 0)
-            mask_targets = ops.concat(mask_targets, 0)
-            mask_weights = ops.concat(mask_weights, 0)
-            seg_targets = ops.stack(seg_targets, 0)
-        results = (labels, label_weights, mask_targets, mask_weights, seg_targets)
-        return results
+    def simple_test_rpn(self, img, img_metas):
+        """Forward function in testing stage."""
+        return self._decode_init_proposals(img, img_metas)
 
     def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask,
                            pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls,
                            cfg):
@@ -459,10 +463,6 @@ class ConvKernelHead(nn.Cell):
         results = (labels, label_weights, mask_targets, mask_weights, seg_targets)
         return results
 
-    def simple_test_rpn(self, img, img_metas):
-        """Forward function in testing stage."""
-        return self._decode_init_proposals(img, img_metas)
-
     def forward_dummy(self, img, img_metas):
         """Dummy forward function.
 
-- 
Gitee

From e0b4c16b48f83ce6ccda51b10d931c6a2a68162e Mon Sep 17 00:00:00 2001
From: HamPerdredes
Date: Wed, 14 Dec 2022 20:47:06 +0800
Subject: [PATCH 46/51] fix code format

---
 .../train/src/dataset/base_dataset.py         | 177 +++++-----
 .../train/src/dataset/data_process.py         | 304 ++++++------------
 .../train/src/dataset/utils.py                |  10 +-
 .../train/src/deoccluder/fpn_neck.py          |   5 +-
 .../deoccluder/roi/custom_kernel_iter_head.py |  70 ++--
 .../src/deoccluder/roi/kernel_update_head.py  |  62 ++--
 .../train/src/deoccluder/rpn/kernel_head.py   | 280 ++++++++--------
 7 files changed, 401 insertions(+), 507 deletions(-)

diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py
index 5a25884d5..e95bbfb2a 100644
--- a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py
+++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py
@@ -94,6 +94,69 @@ class CustomDataset:
         """Total number of samples of data."""
         return len(self.data_infos)
 
+    def __getitem__(self, idx):
+        """Get training/test data after pipeline.
+
+        Args:
+            idx (int): Index of data.
+
+        Returns:
+            dict: Training/test data (with annotation if `test_mode` is set \
+                True).
+        """
+
+        if self.test_mode:
+            return self.prepare_test_img(idx)
+        while True:
+            data = self.prepare_train_img(idx)
+            if data is None:
+                idx = self._rand_another(idx)
+                continue
+            return data
+
+    def __repr__(self):
+        """Print the number of instance number."""
+        dataset_type = 'Test' if self.test_mode else 'Train'
+        result = (f'\n{self.__class__.__name__} {dataset_type} dataset '
+                  f'with number of images {len(self)}, '
+                  f'and instance counts: \n')
+        if self.custom_classes is None:
+            result += 'Category names are not provided. \n'
+            return result
+        instance_count = np.zeros(len(self.custom_classes) + 1).astype(int)
+        # count the instance number in each image
+        for idx in range(len(self)):
+            label = self.get_ann_info(idx)['labels']
+            unique, counts = np.unique(label, return_counts=True)
+            if len(unique) > 0:
+                # add the occurrence number to each class
+                instance_count[unique] += counts
+            else:
+                # background is the last index
+                instance_count[-1] += 1
+        # create a table with category count
+        table_data = [['category', 'count'] * 5]
+        row_data = []
+        for cls, count in enumerate(instance_count):
+            if cls < len(self.custom_classes):
+                row_data += [f'{cls} [{self.custom_classes[cls]}]', f'{count}']
+            else:
+                # add the background number
+                row_data += ['-1 background', f'{count}']
+            if len(row_data) == 10:
+                table_data.append(row_data)
+                row_data = []
+        if len(row_data) >= 2:
+            if row_data[-1] == '0':
+                row_data = row_data[:-2]
+            if len(row_data) >= 2:
+                table_data.append([])
+                table_data.append(row_data)
+
+        table = AsciiTable(table_data)
+        result += table.table
+        return result
+
     @staticmethod
     def build_pipeline(pipeline):
         return PipelineFunc(pipeline)
@@ -136,37 +199,6 @@ class CustomDataset:
         results['mask_fields'] = []
         results['seg_fields'] = []
 
-    def __getitem__(self, idx):
-        """Get training/test data after pipeline.
-
-        Args:
-            idx (int): Index of data.
-
-        Returns:
-            dict: Training/test data (with annotation if `test_mode` is set \
-                True).
-        """
-
-        if self.test_mode:
-            return self.prepare_test_img(idx)
-        while True:
-            data = self.prepare_train_img(idx)
-            if data is None:
-                idx = self._rand_another(idx)
-                continue
-            return data
-
-    def _filter_imgs(self, min_size=32):
-        """Filter images too small."""
-        if self.filter_empty_gt:
-            warnings.warn(
-                'CustomDataset does not support filtering empty gt images.')
-        valid_inds = []
-        for i, img_info in enumerate(self.data_infos):
-            if min(img_info['width'], img_info['height']) >= min_size:
-                valid_inds.append(i)
-        return valid_inds
-
     def prepare_train_img(self, idx):
         """Get training data and annotations after pipeline.
 
@@ -186,23 +218,6 @@ class CustomDataset:
         self.pre_pipeline(results)
         return self.pipeline(results)
 
-    def _set_group_flag(self):
-        """Set flag according to image aspect ratio.
-
-        Images with aspect ratio greater than 1 will be set as group 1,
-        otherwise group 0.
-        """
-        self.flag = np.zeros(len(self), dtype=np.uint8)
-        for i in range(len(self)):
-            img_info = self.data_infos[i]
-            if img_info['width'] / img_info['height'] > 1:
-                self.flag[i] = 1
-
-    def _rand_another(self, idx):
-        """Get another random index from the same group as the given index."""
-        pool = np.where(self.flag == self.flag[idx])[0]
-        return np.random.choice(pool)
-
     @classmethod
     def get_classes(cls, classes=None):
         """Get class names of current dataset.
@@ -263,46 +278,30 @@ class CustomDataset:
         print(self, results)
         raise NotImplementedError
 
-    def __repr__(self):
-        """Print the number of instance number."""
-        dataset_type = 'Test' if self.test_mode else 'Train'
-        result = (f'\n{self.__class__.__name__} {dataset_type} dataset '
-                  f'with number of images {len(self)}, '
-                  f'and instance counts: \n')
-        if self.custom_classes is None:
-            result += 'Category names are not provided. \n'
-            return result
-        instance_count = np.zeros(len(self.custom_classes) + 1).astype(int)
-        # count the instance number in each image
-        for idx in range(len(self)):
-            label = self.get_ann_info(idx)['labels']
-            unique, counts = np.unique(label, return_counts=True)
-            if len(unique) > 0:
-                # add the occurrence number to each class
-                instance_count[unique] += counts
-            else:
-                # background is the last index
-                instance_count[-1] += 1
-        # create a table with category count
-        table_data = [['category', 'count'] * 5]
-        row_data = []
-        for cls, count in enumerate(instance_count):
-            if cls < len(self.custom_classes):
-                row_data += [f'{cls} [{self.custom_classes[cls]}]', f'{count}']
-            else:
-                # add the background number
-                row_data += ['-1 background', f'{count}']
-            if len(row_data) == 10:
-                table_data.append(row_data)
-                row_data = []
-        if len(row_data) >= 2:
-            if row_data[-1] == '0':
-                row_data = row_data[:-2]
-            if len(row_data) >= 2:
-                table_data.append([])
-                table_data.append(row_data)
-
-        table = AsciiTable(table_data)
-        result += table.table
-        return result
+    def _filter_imgs(self, min_size=32):
+        """Filter images too small."""
+        if self.filter_empty_gt:
+            warnings.warn(
+                'CustomDataset does not support filtering empty gt images.')
+        valid_inds = []
+        for i, img_info in enumerate(self.data_infos):
+            if min(img_info['width'], img_info['height']) >= min_size:
+                valid_inds.append(i)
+        return valid_inds
+
+    def _set_group_flag(self):
+        """Set flag according to image aspect ratio.
+
+        Images with aspect ratio greater than 1 will be set as group 1,
+        otherwise group 0.
+        """
+        self.flag = np.zeros(len(self), dtype=np.uint8)
+        for i in range(len(self)):
+            img_info = self.data_infos[i]
+            if img_info['width'] / img_info['height'] > 1:
+                self.flag[i] = 1
+
+    def _rand_another(self, idx):
+        """Get another random index from the same group as the given index."""
+        pool = np.where(self.flag == self.flag[idx])[0]
+        return np.random.choice(pool)
diff --git a/contrib/Overlap-Recovery/train/src/dataset/data_process.py b/contrib/Overlap-Recovery/train/src/dataset/data_process.py
index 638208956..3f20d6e04 100644
--- a/contrib/Overlap-Recovery/train/src/dataset/data_process.py
+++ b/contrib/Overlap-Recovery/train/src/dataset/data_process.py
@@ -147,6 +147,13 @@ class CustomLoadAnnotations:
             results = self._load_masks(results)
         return results
 
+    def __repr__(self):
+        repr_str = self.__class__.__name__
+        repr_str += f'(with_bbox={self.with_bbox}, '
+        repr_str += f'with_label={self.with_label}, '
+        repr_str += f'with_mask={self.with_mask}, '
+        return repr_str
+
     @staticmethod
     def _load_bboxes(results):
         ann_info = results['ann_info']
@@ -164,13 +171,6 @@ class CustomLoadAnnotations:
 
         return results
 
-    def __repr__(self):
-        repr_str = self.__class__.__name__
-        repr_str += f'(with_bbox={self.with_bbox}, '
-        repr_str += f'with_label={self.with_label}, '
-        repr_str += f'with_mask={self.with_mask}, '
-        return repr_str
-
     @staticmethod
     def _load_labels(results):
         results['gt_labels'] = results['ann_info']['labels'].copy()
@@ -208,6 +208,41 @@ class Resize:
         self.override = False
         self.bbox_clip_border = True
 
+    def __call__(self, results):
+        if 'scale' not in results:
+            if 'scale_factor' in results:
+                img_shape = results['img'].shape[:2]
+                scale_factor = results['scale_factor']
+                assert isinstance(scale_factor, float)
+                results['scale'] = tuple(
+                    [int(x * scale_factor) for x in img_shape][::-1])
+            else:
+                self._random_scale(results)
+        else:
+            if not self.override:
+                assert 'scale_factor' not in results, (
+                    'scale and scale_factor cannot be both set.')
+            else:
+                results.pop('scale')
+                if 'scale_factor' in results:
+                    results.pop('scale_factor')
+                self._random_scale(results)
+
+        self._resize_img(results)
+        self._resize_bboxes(results)
+        self._resize_masks(results)
+        if len(results.get('seg_fields', [])) > 0:
+            raise NotImplementedError
+        return results
+
+    def __repr__(self):
+        repr_str = self.__class__.__name__
+        repr_str += f'(img_scale={self.img_scale}, '
+        repr_str += f'multiscale_mode={self.multiscale_mode}, '
+        repr_str += f'keep_ratio={self.keep_ratio}, '
+        repr_str += f'bbox_clip_border={self.bbox_clip_border})'
+        return repr_str
+
     def _random_scale(self, results):
         if len(self.img_scale) == 1:
             scale, scale_idx = self.img_scale[0], 0
@@ -247,33 +282,6 @@ class Resize:
         results['scale_factor'] = scale_factor
         results['keep_ratio'] = self.keep_ratio
 
-    def __call__(self, results):
-        if 'scale' not in results:
-            if 'scale_factor' in results:
-                img_shape = results['img'].shape[:2]
-                scale_factor = results['scale_factor']
-                assert isinstance(scale_factor, float)
-                results['scale'] = tuple(
-                    [int(x * scale_factor) for x in img_shape][::-1])
-            else:
-                self._random_scale(results)
-        else:
-            if not self.override:
-                assert 'scale_factor' not in results, (
-                    'scale and scale_factor cannot be both set.')
-            else:
-                results.pop('scale')
-                if 'scale_factor' in results:
-                    results.pop('scale_factor')
-                self._random_scale(results)
-
-        self._resize_img(results)
-        self._resize_bboxes(results)
-        self._resize_masks(results)
-        if len(results.get('seg_fields', [])) > 0:
-            raise NotImplementedError
-        return results
-
     def _resize_masks(self, results):
         """Resize masks with ``results['scale']``"""
         for key in results.get('mask_fields', []):
@@ -302,14 +310,6 @@ class Resize:
                 bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0])
             results[key] = bboxes
 
-    def __repr__(self):
-        repr_str = self.__class__.__name__
-        repr_str += f'(img_scale={self.img_scale}, '
-        repr_str += f'multiscale_mode={self.multiscale_mode}, '
-        repr_str += f'keep_ratio={self.keep_ratio}, '
-        repr_str += f'bbox_clip_border={self.bbox_clip_border})'
-        return repr_str
-
 
 class RandomFlip:
     """Flip the image & bbox & mask."""
@@ -428,13 +428,12 @@ class Pad:
     """Pad the image & masks & segmentation map."""
 
-    def __init__(self,
-                 size=None,
-                 size_divisor=None,
-                 pad_to_square=False,
-                 pad_val=None,
-                 pad_ins_num=4,
-                 eval_model=False):
+    def __init__(self, eval_model=False, **kwargs):
+        size = kwargs.get('size', None)
+        size_divisor = kwargs.get('size_divisor', None)
+        pad_to_square = kwargs.get('pad_to_square', False)
+        pad_val = kwargs.get('pad_val', None)
+        pad_ins_num = kwargs.get('pad_ins_num', 4)
         self.size = size
         self.size_divisor = size_divisor
         if isinstance(pad_val, type(None)):
             pad_val = dict(img=0, masks=0, seg=255)
@@ -460,6 +459,37 @@ class Pad:
             'only one of size and size_divisor should be valid'
         assert size is None or size_divisor is None
 
+    def __call__(self, results):
+        self._pad_img(results)
+        if self.eval_model:
+            return results
+        self._pad_masks(results)
+        self._pad_seg(results)
+        # padding instance number to predefined
+        to_pad = self.pad_ins_num - results['gt_bboxes'].shape[0]
+        if to_pad > 0:
+            results['gt_bboxes'] = np.concatenate([results['gt_bboxes'],
+                                                   np.zeros((to_pad, 4), dtype=np.float32)],
+                                                  axis=0)
+            results['gt_labels'] = np.concatenate([results['gt_labels'],
+                                                   -np.ones((to_pad,), dtype=np.long)])
+            gt_masks = results['gt_masks'].masks
+            h, w = gt_masks.shape[1:]
+            gt_masks = np.concatenate([gt_masks,
+                                       np.zeros((to_pad, h, w), dtype=gt_masks.dtype)],
+                                      axis=0)
+            results['gt_masks'] = BitmapMasks(gt_masks, h, w)
+
+        return results
+
+    def __repr__(self):
+        repr_str = self.__class__.__name__
+        repr_str += f'(size={self.size}, '
+        repr_str += f'size_divisor={self.size_divisor}, '
+        repr_str += f'pad_to_square={self.pad_to_square}, '
+        repr_str += f'pad_val={self.pad_val})'
+        return repr_str
+
     def _pad_img(self, results):
         """Pad images according to ``self.size``."""
         pad_val = self.pad_val.get('img', 0)
@@ -485,29 +515,6 @@ class Pad:
         for key in results.get('mask_fields', []):
             results[key] = results[key].pad(pad_shape, pad_val=pad_val)
 
-    def __call__(self, results):
-        self._pad_img(results)
-        if self.eval_model:
-            return results
-        self._pad_masks(results)
-        self._pad_seg(results)
-        # padding instance number to predefined
-        to_pad = self.pad_ins_num - results['gt_bboxes'].shape[0]
-        if to_pad > 0:
-            results['gt_bboxes'] = np.concatenate([results['gt_bboxes'],
-                                                   np.zeros((to_pad, 4), dtype=np.float32)],
-                                                  axis=0)
-            results['gt_labels'] = np.concatenate([results['gt_labels'],
-                                                   -np.ones((to_pad,), dtype=np.long)])
-            gt_masks = results['gt_masks'].masks
-            h, w = gt_masks.shape[1:]
-            gt_masks = np.concatenate([gt_masks,
-                                       np.zeros((to_pad, h, w), dtype=gt_masks.dtype)],
-                                      axis=0)
-            results['gt_masks'] = BitmapMasks(gt_masks, h, w)
-
-        return results
-
     def _pad_seg(self, results):
         """Pad semantic segmentation map according to
         ``results['pad_shape']``."""
@@ -521,14 +528,6 @@ class Pad:
             results[key] = mmcv.impad(
                 results[key], shape=results['pad_shape'][:2], pad_val=pad_val)
 
-    def __repr__(self):
-        repr_str = self.__class__.__name__
-        repr_str += f'(size={self.size}, '
-        repr_str += f'size_divisor={self.size_divisor}, '
-        repr_str += f'pad_to_square={self.pad_to_square}, '
-        repr_str += f'pad_val={self.pad_val})'
-        return repr_str
-
 
 def to_tensor(data):
     """Convert objects of various python types to :obj:`mindspore.Tensor`."""
@@ -576,10 +575,16 @@ class DefaultFormatBundle:
         if 'gt_masks' in results:
             results['gt_masks'] = DataContainer(
                 results['gt_masks'],
-                padding_value=self.pad_val['masks'])
+                padding_value=self.pad_val.get('masks', None)
+            )
         return results
 
-    def _add_default_meta_keys(self, results):
+    def __repr__(self):
+        return self.__class__.__name__ + \
+               f'(img_to_float={self.img_to_float})'
+
+    @staticmethod
+    def _add_default_meta_keys(results):
         img = results['img']
         results.setdefault('pad_shape', img.shape)
         results.setdefault('scale_factor', 1.0)
@@ -593,10 +598,6 @@ class DefaultFormatBundle:
                 to_rgb=False))
         return results
 
-    def __repr__(self):
-        return self.__class__.__name__ + \
-            f'(img_to_float={self.img_to_float})'
-
 
 class Collect:
     """Collect data from the loader relevant to the specific task."""
@@ -620,7 +621,6 @@ class Collect:
         data['img_metas'] = DataContainer(img_meta)
         for key in self.keys:
             data[key] = results[key]
-        # return data
         for key in self.keys:
             if self.eval_mode:
                 out_data.append(results[key])
@@ -639,7 +639,10 @@ class Collect:
                     if isinstance(results[key], type(None)):
                         out_data.append(-1)
                     else:
-                        out_data.append(flip_map[results[key]])
+                        try:
+                            out_data.append(flip_map[results[key]])
+                        except KeyError:
+                            raise KeyError
                 else:
                     out_data.append(results[key])
         return tuple(out_data)
@@ -649,119 +652,6 @@ class Collect:
             f'(keys={self.keys}, meta_keys={self.meta_keys})'
 
 
-class MultiScaleFlipAug:
-    """Test-time augmentation with multiple scales and flipping.
-
-    An example configuration is as followed:
-
-    .. code-block::
-
-        img_scale=[(1333, 400), (1333, 800)],
-        flip=True,
-        transforms=[
-            dict(type='Resize', keep_ratio=True),
-            dict(type='RandomFlip'),
-            dict(type='Normalize', **img_norm_cfg),
-            dict(type='Pad', size_divisor=32),
-            dict(type='ImageToTensor', keys=['img']),
-            dict(type='Collect', keys=['img']),
-        ]
-
-    After MultiScaleFLipAug with above configuration, the results are wrapped
-    into lists of the same length as followed:
-
-    .. code-block::
-
-        dict(
-            img=[...],
-            img_shape=[...],
-            scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)]
-            flip=[False, True, False, True]
-            ...
-        )
-
-    Args:
-        transforms (list[dict]): Transforms to apply in each augmentation.
-        img_scale (tuple | list[tuple] | None): Images scales for resizing.
-        scale_factor (float | list[float] | None): Scale factors for resizing.
-        flip (bool): Whether apply flip augmentation. Default: False.
-        flip_direction (str | list[str]): Flip augmentation directions,
-            options are "horizontal", "vertical" and "diagonal". If
-            flip_direction is a list, multiple flip augmentations will be
-            applied. It has no effect when flip == False. Default:
-            "horizontal".
-    """
-
-    def __init__(self,
-                 transforms,
-                 img_scale=None,
-                 scale_factor=None,
-                 flip=False,
-                 flip_direction='horizontal'):
-        self.transforms = PipelineFunc(transforms)
-        assert (img_scale is None) ^ (scale_factor is None), (
-            'Must have but only one variable can be set')
-        if img_scale is not None:
-            self.img_scale = img_scale if isinstance(img_scale,
-                                                     list) else [img_scale]
-            self.scale_key = 'scale'
-            assert mmcv.is_list_of(self.img_scale, tuple)
-        else:
-            self.img_scale = scale_factor if isinstance(
-                scale_factor, list) else [scale_factor]
-            self.scale_key = 'scale_factor'
-
-        self.flip = flip
-        self.flip_direction = flip_direction if isinstance(
-            flip_direction, list) else [flip_direction]
-        assert mmcv.is_list_of(self.flip_direction, str)
-        if not self.flip and self.flip_direction != ['horizontal']:
-            warnings.warn(
-                'flip_direction has no effect when flip is set to False')
-        if (self.flip
-                and not any([t['type'] == 'RandomFlip' for t in transforms])):
-            warnings.warn(
-                'flip has no effect when RandomFlip is not in transforms')
-
-    def __call__(self, results):
-        """Call function to apply test time augment transforms on results.
-
-        Args:
-            results (dict): Result dict contains the data to transform.
-
-        Returns:
-            dict[str: list]: The augmented data, where each value is wrapped
-                into a list.
-        """
-
-        aug_data = []
-        flip_args = [(False, None)]
-        if self.flip:
-            flip_args += [(True, direction)
-                          for direction in self.flip_direction]
-        for scale in self.img_scale:
-            for flip, direction in flip_args:
-                _results = results.copy()
-                _results[self.scale_key] = scale
-                _results['flip'] = flip
-                _results['flip_direction'] = direction
-                data = self.transforms(_results)
-                aug_data.append(data)
-        # list of dict to dict of list
-        aug_data_dict = {key: [] for key in aug_data[0]}
-        for data in aug_data:
-            for key, val in data.items():
-                aug_data_dict[key].append(val)
-        return aug_data_dict
-
-    def __repr__(self):
-        repr_str = self.__class__.__name__
-        repr_str += f'(transforms={self.transforms}, '
-        repr_str += f'img_scale={self.img_scale}, flip={self.flip}, '
-        repr_str += f'flip_direction={self.flip_direction})'
-        return repr_str
-
-
 class ImageToTensor:
     """Convert image to :obj:`torch.Tensor` by given keys.
 
@@ -807,7 +697,6 @@ CUSTOM_PIPELINES = {
     'Pad': Pad,
     'DefaultFormatBundle': DefaultFormatBundle,
     'Collect': Collect,
-    'MultiScaleFlipAug': MultiScaleFlipAug,
    'ImageToTensor': ImageToTensor
 }
 
@@ -817,7 +706,10 @@ class PipelineFunc:
         self.pipelines = []
         for pipe in pipelines:
             pipe_type = pipe.pop('type')
-            self.pipelines.append(CUSTOM_PIPELINES[pipe_type](**pipe))
+            try:
+                self.pipelines.append(CUSTOM_PIPELINES[pipe_type](**pipe))
+            except KeyError:
+                raise KeyError
 
     def __call__(self, results):
         for pipe in self.pipelines:
diff --git a/contrib/Overlap-Recovery/train/src/dataset/utils.py b/contrib/Overlap-Recovery/train/src/dataset/utils.py
index b10a1921b..70da521b5 100644
--- a/contrib/Overlap-Recovery/train/src/dataset/utils.py
+++ b/contrib/Overlap-Recovery/train/src/dataset/utils.py
@@ -118,6 +118,11 @@ class BitmapMasks:
         """Number of masks."""
         return len(self.masks)
 
+    @property
+    def areas(self):
+        """See :py:attr:`BaseInstanceMasks.areas`."""
+        return self.masks.sum((1, 2))
+
     def rescale(self, scale, interpolation='nearest'):
         """See :func:`BaseInstanceMasks.rescale`."""
         if len(self.masks) == 0:
@@ -186,11 +191,6 @@ class BitmapMasks:
             cropped_masks = self.masks[:, y1:y1 + h, x1:x1 + w]
         return BitmapMasks(cropped_masks, h, w)
 
-    @property
-    def areas(self):
-        """See :py:attr:`BaseInstanceMasks.areas`."""
-        return self.masks.sum((1, 2))
-
     def to_tensor(self, dtype):
         """See :func:`BaseInstanceMasks.to_tensor`."""
         return ms.Tensor(self.masks, dtype=dtype)
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py b/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py
index e60160f4d..430a0193f 100644
--- a/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py
+++ b/contrib/Overlap-Recovery/train/src/deoccluder/fpn_neck.py
@@ -29,8 +29,11 @@ def bias_init_zeros(shape):
     return Tensor(np.array(np.zeros(shape).astype(np.float32)), dtype=mstype.float32)
 
 
-def _conv(in_channels, out_channels, kernel_size=3, stride=1, padding=0, pad_mode='pad'):
+def _conv(in_channels, out_channels, kernel_size=3, **kwargs):
     """Conv2D wrapper."""
+    stride = kwargs.get('stride', 1)
+    padding = kwargs.get('padding', 0)
+    pad_mode = kwargs.get('pad_mode', 'pad')
     shape = (out_channels, in_channels, kernel_size, kernel_size)
     weights = initializer("XavierUniform", shape=shape, dtype=mstype.float32)
     shape_bias = (out_channels,)
diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py
index 5cad35b13..65f54521f 100644
---
a/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/custom_kernel_iter_head.py @@ -54,6 +54,18 @@ class CustomKernelIterHead(nn.Cell): self.init_assigner_sampler() self.init_weights() + @property + def apply_kernel_occlusion(self): + return self.mask_head[0].apply_kernel_occlusion + + @property + def occ_pair_num(self): + return 2 * self.mask_head[0].pair_num + + def init_weights(self): + for i in range(self.num_stages): + self.mask_head[i].init_weights() + def init_assigner_sampler(self): """Initialize assigner and sampler for each stage.""" self.mask_assigner = [] @@ -66,18 +78,6 @@ class CustomKernelIterHead(nn.Cell): self.mask_sampler.append( build_sampler(rcnn_train_cfg['sampler'])) - @property - def apply_kernel_occlusion(self): - return self.mask_head[0].apply_kernel_occlusion - - def init_weights(self): - for i in range(self.num_stages): - self.mask_head[i].init_weights() - - @property - def occ_pair_num(self): - return 2 * self.mask_head[0].pair_num - def init_mask_head(self, mask_roi_extractor, mask_head): """Initialize mask head and mask roi extractor. @@ -101,29 +101,6 @@ class CustomKernelIterHead(nn.Cell): else: return self.simple_test(*inputs, **kwargs) - def _mask_forward(self, x, object_feats, mask_preds, **kwargs): - stage = kwargs.get('stage', None) - img_metas = kwargs.get('img_metas', None) - mask_head = self.mask_head[stage] - cls_score, mask_preds, object_feats = mask_head( - x, object_feats, mask_preds, img_metas=img_metas) - if mask_head.mask_upsample_stride > 1 and (stage == self.num_stages - 1 - or self.training): - interpolate = nn.ResizeBilinear() - scaled_mask_preds = interpolate( - mask_preds, - scale_factor=mask_head.mask_upsample_stride, - align_corners=False) - else: - scaled_mask_preds = mask_preds - mask_results = dict( - cls_score=cls_score, - mask_preds=mask_preds, - scaled_mask_preds=scaled_mask_preds, - object_feats=object_feats) - - return mask_results - def forward_train(self, x, proposal_feats, @@ -328,6 +305,29 @@ class CustomKernelIterHead(nn.Cell): return segm_result, seg_scores + def _mask_forward(self, x, object_feats, mask_preds, **kwargs): + stage = kwargs.get('stage', None) + img_metas = kwargs.get('img_metas', None) + mask_head = self.mask_head[stage] + cls_score, mask_preds, object_feats = mask_head( + x, object_feats, mask_preds, img_metas=img_metas) + if mask_head.mask_upsample_stride > 1 and (stage == self.num_stages - 1 + or self.training): + interpolate = nn.ResizeBilinear() + scaled_mask_preds = interpolate( + mask_preds, + scale_factor=mask_head.mask_upsample_stride, + align_corners=False) + else: + scaled_mask_preds = mask_preds + mask_results = dict( + cls_score=cls_score, + mask_preds=mask_preds, + scaled_mask_preds=scaled_mask_preds, + object_feats=object_feats) + + return mask_results + def _mask_forward_export(self, stage, x, object_feats, mask_preds, img_metas): mask_upsample_stride = 2 mask_head = self.mask_head[stage] diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py index cbed9dd16..138dd421b 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/roi/kernel_update_head.py @@ -280,37 +280,6 @@ class KernelUpdateHead(nn.Cell): align_corners=False).squeeze(0) return seg_masks - def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask, - 
pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls, - cfg): - - num_pos = pos_mask.shape[0] - num_neg = neg_mask.shape[0] - num_samples = num_pos + num_neg - height, width = pos_mask.shape[-2:] - # original implementation uses new_zeros since BG are set to be 0 - # now use empty & fill because BG cat_id = num_classes, - # FG cat_id = [0, num_classes-1] - labels = ms.numpy.full((num_samples, ), - self.num_classes, - dtype=ms.int64) - new_zeros = ops.Zeros() - label_weights = new_zeros((num_samples, self.num_classes), pos_mask.dtype) - mask_targets = new_zeros((num_samples, height, width), pos_mask.dtype) - mask_weights = new_zeros((num_samples, height, width), pos_mask.dtype) - if num_pos > 0: - labels[pos_inds] = pos_gt_labels - pos_weight = 1.0 if cfg['pos_weight'] <= 0 else cfg['pos_weight'] - label_weights[pos_inds] = pos_weight - pos_mask_targets = pos_gt_mask - mask_targets[pos_inds] = pos_mask_targets - mask_weights[pos_inds] = 1 - - if num_neg > 0: - label_weights[neg_inds] = 1.0 - results = (labels, label_weights, mask_targets, mask_weights) - return results - def get_seg_masks(self, masks_per_img, labels_per_img, scores_per_img, test_cfg, img_meta): # resize mask predictions back @@ -341,3 +310,34 @@ class KernelUpdateHead(nn.Cell): test_cfg, img_meta): seg_masks = masks_per_img > 0.5 return seg_masks + + def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask, + pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls, + cfg): + + num_pos = pos_mask.shape[0] + num_neg = neg_mask.shape[0] + num_samples = num_pos + num_neg + height, width = pos_mask.shape[-2:] + # original implementation uses new_zeros since BG are set to be 0 + # now use empty & fill because BG cat_id = num_classes, + # FG cat_id = [0, num_classes-1] + labels = ms.numpy.full((num_samples, ), + self.num_classes, + dtype=ms.int64) + new_zeros = ops.Zeros() + label_weights = new_zeros((num_samples, self.num_classes), pos_mask.dtype) + mask_targets = new_zeros((num_samples, height, width), pos_mask.dtype) + mask_weights = new_zeros((num_samples, height, width), pos_mask.dtype) + if num_pos > 0: + labels[pos_inds] = pos_gt_labels + pos_weight = 1.0 if cfg['pos_weight'] <= 0 else cfg['pos_weight'] + label_weights[pos_inds] = pos_weight + pos_mask_targets = pos_gt_mask + mask_targets[pos_inds] = pos_mask_targets + mask_weights[pos_inds] = 1 + + if num_neg > 0: + label_weights[neg_inds] = 1.0 + results = (labels, label_weights, mask_targets, mask_weights) + return results diff --git a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py index e8d4ab08e..7ac7a9e3a 100644 --- a/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py +++ b/contrib/Overlap-Recovery/train/src/deoccluder/rpn/kernel_head.py @@ -260,57 +260,6 @@ class ConvKernelHead(nn.Cell): return losses - def _init_layers(self): - """Initialize a sparse set of proposal boxes and proposal features.""" - self.init_kernels = nn.Conv2d( - self.out_channels, - self.num_proposals, - self.conv_kernel_size, - padding=int(self.conv_kernel_size // 2), - has_bias=False) - - if self.semantic_fpn: - if self.loss_seg.use_sigmoid: - self.conv_seg = nn.Conv2d(self.out_channels, self.num_classes, - 1) - else: - self.conv_seg = nn.Conv2d(self.out_channels, - self.num_classes + 1, 1) - - if self.feat_downsample_stride > 1 and self.feat_refine: - self.ins_downsample = ConvModule( - self.in_channels, - self.out_channels, - 3, - stride=self.feat_refine_stride, - padding=1, - 
norm_cfg=self.norm_cfg) - self.seg_downsample = ConvModule( - self.in_channels, - self.out_channels, - 3, - stride=self.feat_refine_stride, - padding=1, - norm_cfg=self.norm_cfg) - - self.loc_convs = nn.CellList() - for i in range(self.num_loc_convs): - self.loc_convs.append( - ConvModule( - self.in_channels, - self.out_channels, - 1, - norm_cfg=self.norm_cfg)) - - self.seg_convs = nn.CellList() - for i in range(self.num_seg_convs): - self.seg_convs.append( - ConvModule( - self.in_channels, - self.out_channels, - 1, - norm_cfg=self.norm_cfg)) - def get_targets(self, sampling_results, gt_mask, @@ -351,77 +300,31 @@ class ConvKernelHead(nn.Cell): results = (labels, label_weights, mask_targets, mask_weights, seg_targets) return results - def _decode_init_proposals(self, img, img_metas): - num_imgs = len(img_metas) - localization_feats = self.localization_fpn(img) - if isinstance(localization_feats, list): - loc_feats = localization_feats[0] - else: - loc_feats = localization_feats - for conv in self.loc_convs: - loc_feats = conv(loc_feats) - if self.feat_downsample_stride > 1 and self.feat_refine: - loc_feats = self.ins_downsample(loc_feats) - mask_preds = self.init_kernels(loc_feats) - - if self.semantic_fpn: - if isinstance(localization_feats, list): - semantic_feats = localization_feats[1] - else: - semantic_feats = localization_feats - for conv in self.seg_convs: - semantic_feats = conv(semantic_feats) - if self.feat_downsample_stride > 1 and self.feat_refine: - semantic_feats = self.seg_downsample(semantic_feats) - else: - semantic_feats = None - - if semantic_feats is not None: - seg_preds = self.conv_seg(semantic_feats) - else: - seg_preds = None - - - - proposal_feats = self.init_kernels.weight.clone() - proposal_feats = proposal_feats[None].broadcast_to((num_imgs, ) + proposal_feats.shape) - - if semantic_feats is not None: - x_feats = semantic_feats + loc_feats - else: - x_feats = loc_feats - - if self.proposal_feats_with_obj: - sigmoid_masks = self.sigmoid(mask_preds) - nonzero_inds = sigmoid_masks > 0.5 - if self.use_binary: - sigmoid_masks = nonzero_inds.astype(ms.float32) - else: - sigmoid_masks = nonzero_inds.astype(ms.float32) * sigmoid_masks - einsum = ops.Einsum('bnhw,bchw->bnc') - obj_feats = einsum((sigmoid_masks, x_feats)) - else: - obj_feats = None + def simple_test_rpn(self, img, img_metas): + """Forward function in testing stage.""" + return self._decode_init_proposals(img, img_metas) - cls_scores = None + def forward_dummy(self, img, img_metas): + """Dummy forward function. - if self.proposal_feats_with_obj: - proposal_feats = proposal_feats + obj_feats.view( - num_imgs, self.num_proposals, self.out_channels, 1, 1) + Used in flops calculation. + """ + return self._decode_init_proposals(img, img_metas) - if self.cat_stuff_mask and not self.training: - mask_preds = ops.concat( - [mask_preds, seg_preds[:, self.num_thing_classes:]], axis=1) - stuff_kernels = self.conv_seg.weight[self. - num_thing_classes:].clone() - stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape) - proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1) - results = (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) - return results + def onnx_export(self, x): + """Test without augmentation. + Args: + x (tuple[Tensor]): Features from the upstream network, each is + a 4D-tensor. + img_metas (list[dict]): Meta info of each image. + Returns: + Tensor: dets of shape [N, num_det, 5]. 
+ """ + rpn_results = self._decode_init_proposals_export(x) - def simple_test_rpn(self, img, img_metas): - """Forward function in testing stage.""" - return self._decode_init_proposals(img, img_metas) + (proposal_feats, x_feats, mask_preds, cls_scores, + seg_preds) = rpn_results + return rpn_results def _get_target_single(self, pos_inds, neg_inds, pos_mask, neg_mask, pos_gt_mask, pos_gt_labels, gt_sem_seg, gt_sem_cls, @@ -463,28 +366,6 @@ class ConvKernelHead(nn.Cell): results = (labels, label_weights, mask_targets, mask_weights, seg_targets) return results - def forward_dummy(self, img, img_metas): - """Dummy forward function. - - Used in flops calculation. - """ - return self._decode_init_proposals(img, img_metas) - - def onnx_export(self, x): - """Test without augmentation. - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. - Returns: - Tensor: dets of shape [N, num_det, 5]. - """ - rpn_results = self._decode_init_proposals_export(x) - - (proposal_feats, x_feats, mask_preds, cls_scores, - seg_preds) = rpn_results - return rpn_results - def _decode_init_proposals_export(self, img): num_imgs = 1 localization_feats = self.localization_fpn.model_export(img) @@ -556,3 +437,122 @@ class ConvKernelHead(nn.Cell): proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1) results = (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds) return results + + def _decode_init_proposals(self, img, img_metas): + num_imgs = len(img_metas) + localization_feats = self.localization_fpn(img) + if isinstance(localization_feats, list): + loc_feats = localization_feats[0] + else: + loc_feats = localization_feats + for conv in self.loc_convs: + loc_feats = conv(loc_feats) + if self.feat_downsample_stride > 1 and self.feat_refine: + loc_feats = self.ins_downsample(loc_feats) + mask_preds = self.init_kernels(loc_feats) + + if self.semantic_fpn: + if isinstance(localization_feats, list): + semantic_feats = localization_feats[1] + else: + semantic_feats = localization_feats + for conv in self.seg_convs: + semantic_feats = conv(semantic_feats) + if self.feat_downsample_stride > 1 and self.feat_refine: + semantic_feats = self.seg_downsample(semantic_feats) + else: + semantic_feats = None + + if semantic_feats is not None: + seg_preds = self.conv_seg(semantic_feats) + else: + seg_preds = None + + + + proposal_feats = self.init_kernels.weight.clone() + proposal_feats = proposal_feats[None].broadcast_to((num_imgs, ) + proposal_feats.shape) + + if semantic_feats is not None: + x_feats = semantic_feats + loc_feats + else: + x_feats = loc_feats + + if self.proposal_feats_with_obj: + sigmoid_masks = self.sigmoid(mask_preds) + nonzero_inds = sigmoid_masks > 0.5 + if self.use_binary: + sigmoid_masks = nonzero_inds.astype(ms.float32) + else: + sigmoid_masks = nonzero_inds.astype(ms.float32) * sigmoid_masks + einsum = ops.Einsum('bnhw,bchw->bnc') + obj_feats = einsum((sigmoid_masks, x_feats)) + else: + obj_feats = None + + cls_scores = None + + if self.proposal_feats_with_obj: + proposal_feats = proposal_feats + obj_feats.view( + num_imgs, self.num_proposals, self.out_channels, 1, 1) + + if self.cat_stuff_mask and not self.training: + mask_preds = ops.concat( + [mask_preds, seg_preds[:, self.num_thing_classes:]], axis=1) + stuff_kernels = self.conv_seg.weight[self. 
+                                                 num_thing_classes:].clone()
+            stuff_kernels = stuff_kernels[None].broadcast_to((num_imgs, ) + stuff_kernels.shape)
+            proposal_feats = ops.concat([proposal_feats, stuff_kernels], axis=1)
+        results = (proposal_feats, x_feats, mask_preds, cls_scores, seg_preds)
+        return results
+
+    def _init_layers(self):
+        """Initialize a sparse set of proposal boxes and proposal features."""
+        self.init_kernels = nn.Conv2d(
+            self.out_channels,
+            self.num_proposals,
+            self.conv_kernel_size,
+            padding=int(self.conv_kernel_size // 2),
+            has_bias=False)
+
+        if self.semantic_fpn:
+            if self.loss_seg.use_sigmoid:
+                self.conv_seg = nn.Conv2d(self.out_channels, self.num_classes,
+                                          1)
+            else:
+                self.conv_seg = nn.Conv2d(self.out_channels,
+                                          self.num_classes + 1, 1)
+
+        if self.feat_downsample_stride > 1 and self.feat_refine:
+            self.ins_downsample = ConvModule(
+                self.in_channels,
+                self.out_channels,
+                3,
+                stride=self.feat_refine_stride,
+                padding=1,
+                norm_cfg=self.norm_cfg)
+            self.seg_downsample = ConvModule(
+                self.in_channels,
+                self.out_channels,
+                3,
+                stride=self.feat_refine_stride,
+                padding=1,
+                norm_cfg=self.norm_cfg)
+
+        self.loc_convs = nn.CellList()
+        for i in range(self.num_loc_convs):
+            self.loc_convs.append(
+                ConvModule(
+                    self.in_channels,
+                    self.out_channels,
+                    1,
+                    norm_cfg=self.norm_cfg))
+
+        self.seg_convs = nn.CellList()
+        for i in range(self.num_seg_convs):
+            self.seg_convs.append(
+                ConvModule(
+                    self.in_channels,
+                    self.out_channels,
+                    1,
+                    norm_cfg=self.norm_cfg))
--
Gitee


From 0d60e4d49e4045063af5b89fc6b1eb9fe5d0566d Mon Sep 17 00:00:00 2001
From: HamPerdredes
Date: Wed, 14 Dec 2022 20:59:29 +0800
Subject: [PATCH 47/51] fix code

---
 .../train/src/dataset/base_dataset.py | 36 +++++++++----------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py
index e95bbfb2a..79c2d4991 100644
--- a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py
+++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py
@@ -161,6 +161,24 @@ class CustomDataset:
     def build_pipeline(pipeline):
         return PipelineFunc(pipeline)
 
+    @classmethod
+    def get_classes(cls, classes=None):
+        """Get class names of current dataset.
+
+        Args:
+            classes (Sequence[str] | str | None): If classes is None, use
+                default custom_classes defined by builtin dataset. If classes is a
+                string, take it as a file name. The file contains the name of
+                classes where each line contains one class name. If classes is
+                a tuple or list, override the custom_classes defined by the dataset.
+
+        Returns:
+            tuple[str] or list[str]: Names of categories of the dataset.
+        """
+        if classes is None:
+            return cls.custom_classes
+        raise NotImplementedError
+
     def load_annotations(self, ann_file):
         """Load annotation from annotation file."""
         print(self.ann_file, ann_file)
@@ -218,24 +236,6 @@ class CustomDataset:
         self.pre_pipeline(results)
         return self.pipeline(results)
 
-    @classmethod
-    def get_classes(cls, classes=None):
-        """Get class names of current dataset.
-
-        Args:
-            classes (Sequence[str] | str | None): If classes is None, use
-                default custom_classes defined by builtin dataset. If classes is a
-                string, take it as a file name. The file contains the name of
-                classes where each line contains one class name. If classes is
-                a tuple or list, override the custom_classes defined by the dataset.
-
-        Returns:
-            tuple[str] or list[str]: Names of categories of the dataset.
- """ - if classes is None: - return cls.custom_classes - raise NotImplementedError - def prepare_test_img(self, idx): """Get testing data after pipeline. -- Gitee From ff614d953c835847e84f5ed5aee1cf04625be085 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Wed, 14 Dec 2022 21:11:06 +0800 Subject: [PATCH 48/51] reformate infer code --- contrib/Overlap-Recovery/inference/preprocess_utils.py | 1 + 1 file changed, 1 insertion(+) diff --git a/contrib/Overlap-Recovery/inference/preprocess_utils.py b/contrib/Overlap-Recovery/inference/preprocess_utils.py index 9c4e5dd39..34bf9516e 100644 --- a/contrib/Overlap-Recovery/inference/preprocess_utils.py +++ b/contrib/Overlap-Recovery/inference/preprocess_utils.py @@ -15,6 +15,7 @@ # limitations under the License. +# code reference mmcv and mmdet import collections import warnings import os.path as osp -- Gitee From f8de329770cc84869ddd93abfaccff9e40a4d2e4 Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Thu, 15 Dec 2022 13:45:46 +0800 Subject: [PATCH 49/51] update readme --- contrib/Overlap-Recovery/README.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index aa04566bf..6f2945ef2 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -229,24 +229,24 @@ sh train/scripts/convert_resnet.sh PATH-TO-PYTORCH-WEIGHT PATH-TO-MINDSPORE-WEIG ## 4 模型转换 -通过第三节的训练后得到ckpt模型文件,在项目运行前需要先将ckpt文件通过 `export.py `转换成ONNX模型文件,然后在本代码仓下通过ATC将ONNX转换成om模型。 +通过第三节的训练后得到ckpt模型文件,在项目运行前需要先将ckpt文件通过 `export.py `转换成ONNX模型文件,然后在本代码仓下通过ATC将ONNX转换成om模型,其中`ckpt->onnx`的转换在训练环境下进行(参考第2节所述),`onnx->om`的转换在推理环境下进行(参考第2节所述)。 模型转换工具(ATC)相关介绍如下:[ATC介绍](https://support.huawei.com/enterprise/zh/doc/EDOC1100234054) 具体步骤如下: -1. 准备好训练得到的ckpt模型文件,放至服务器上`Overlap-Recovery/train/models`文件夹下。 +1. 准备好训练得到的ckpt模型文件,放至服务器上`Overlap-Recovery/train/models`文件夹下,环境同训练环境相同(硬件包含CPU,参考第2节所述)。 -2. 进入`Overlap-Recovery/train`文件夹下,修改`export.py`文件中`ckpt_file_path`和`file_name`参数为自己的路径,执行命令: +2. 进入`Overlap-Recovery/train`文件夹下,修改`export.py`文件中`ckpt_file_path`和`file_name`参数为自己的路径,执行如下命令完成`ckpt->onnx`的模型转换: ``` cd train python export.py ``` -3. 将生成的ONNX模型转移到推理服务器,放至在`Overlap-Recovery/inference/models`路径下。 +3. 将生成的ONNX模型转移到推理服务器,放至在`Overlap-Recovery/inference/models`路径下,环境同推理环境相同(硬件为Ascend 310,参考第2节述所)。 -4. 进入推理服务器执行命令(修改`onnx_model_path`和`output_model_path`参数为自己的路径): +4. 
进入推理服务器执行如下命令(修改`onnx_model_path`和`output_model_path`参数为自己的路径)完成`onnx->om`的模型转换: ``` cd inference/models @@ -267,13 +267,13 @@ sh train/scripts/convert_resnet.sh PATH-TO-PYTORCH-WEIGHT PATH-TO-MINDSPORE-WEIG ## 5 模型推理 -当已有模型的om文件,保存在`Overlap-Recovery/inference/models/`下 +当已有模型的om文件,保存在`Overlap-Recovery/inference/models/`下,推理所需环境如第2节所述。 示例步骤如下: **步骤1** 将任意一张待预测的图片存到当前目录下(`./Overlap-Recovery/inference`),文件名修改为`test`。 -**步骤2** 按照模型转换获取om模型,放置在`Overlap-Recovery/inference/models/`路径下。若未自行转换模型,使用的是仓库提供的模型,则无需修改相关文件,否则修改`ominfer.py`中相关配置,将`model_path`对象的路径改成实际的om模型的路径;`img_prefix`和`img_name`对象的路径改成实际的测试图片的路径;`save_path`对象设置成需要保存可视化图像的路径。 +**步骤2** 按照第4节模型转换获取om模型,放置在`Overlap-Recovery/inference/models/`路径下。若未自行转换模型,使用的是仓库提供的模型,则无需修改相关文件,否则修改`ominfer.py`中相关配置,将`model_path`对象的路径改成实际的om模型的路径;`img_prefix`和`img_name`对象的路径改成实际的测试图片的路径;`save_path`对象设置成需要保存可视化图像的路径。 **步骤3** 在命令行输入 如下命令运行单张图片模型推理: -- Gitee From 73bc56fc828a81c2ff349560b29dd32f9d861fdf Mon Sep 17 00:00:00 2001 From: HamPerdredes Date: Thu, 15 Dec 2022 22:48:07 +0800 Subject: [PATCH 50/51] clean code --- contrib/Overlap-Recovery/README.md | 29 ++++++++++--------- .../train/src/dataset/base_dataset.py | 8 ++--- 2 files changed, 19 insertions(+), 18 deletions(-) diff --git a/contrib/Overlap-Recovery/README.md b/contrib/Overlap-Recovery/README.md index aa04566bf..576675e3d 100644 --- a/contrib/Overlap-Recovery/README.md +++ b/contrib/Overlap-Recovery/README.md @@ -53,7 +53,7 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 ```pytnon ├── eval.py #精度测试 -├── train.py #模型训练主函数 +├── train.py #模型训练主函数 ├── export.py #将ckpt模型导出为onnx格式的模型 ├── __init__.py ├── src #模型源码及相关辅助函数 @@ -160,7 +160,7 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 | MindX SDK | 3.0RC3 | | Ascend-CANN-toolkit | 6.0.RC1 | | ubuntu | 18.04.1 LTS | -| python | 3.9.2 | +| python | 3.9.2 | | MindSpore | 1.9.0 | | opencv-python | 4.6.0.66 | | numpy | 1.23.1 | @@ -169,6 +169,7 @@ eg:本sample工程名称为`Overlap-Recovery`,工程根目录如下图所示 | loguru | 0.2.14 | | tqdm | 4.64.1 | | imagesize | 1.4.1 | +| terminaltables | 3.1.10 | 其中推理环境依赖软件和版本如下表: @@ -210,18 +211,18 @@ sh train/scripts/convert_resnet.sh PATH-TO-PYTORCH-WEIGHT PATH-TO-MINDSPORE-WEIG 参数设置输出路径。 -**步骤3** 按照环境依赖要求配置好训练所需运行环境后,执行如下命令启动模型训练。 - - ``` - python train/train.py +**步骤3** 按照环境依赖要求配置好训练所需运行环境后,执行如下命令启动模型训练。 + + ``` + python train/train.py ``` **步骤4** 使用训练好的mindspore模型直接推理 - 修改```train/src/model_utils/config_base.py```中```checkpoint_path```参数为checkpoint的保存路径,执行如下命令推理。 - - ``` - python train/eval.py + 修改```train/src/model_utils/config_base.py```中```checkpoint_path```参数为checkpoint的保存路径,执行如下命令推理。 + + ``` + python train/eval.py ``` @@ -239,7 +240,7 @@ sh train/scripts/convert_resnet.sh PATH-TO-PYTORCH-WEIGHT PATH-TO-MINDSPORE-WEIG 2. 进入`Overlap-Recovery/train`文件夹下,修改`export.py`文件中`ckpt_file_path`和`file_name`参数为自己的路径,执行命令: - ``` + ``` cd train python export.py ``` @@ -248,7 +249,7 @@ sh train/scripts/convert_resnet.sh PATH-TO-PYTORCH-WEIGHT PATH-TO-MINDSPORE-WEIG 4. 
进入推理服务器执行命令(修改`onnx_model_path`和`output_model_path`参数为自己的路径): - ``` + ``` cd inference/models atc --model=[onnx_model_path] --framework=5 --output=[output_model_path] --soc_version=Ascend310 --input_shape="img:1,3,768,768" ``` @@ -277,7 +278,7 @@ sh train/scripts/convert_resnet.sh PATH-TO-PYTORCH-WEIGHT PATH-TO-MINDSPORE-WEIG **步骤3** 在命令行输入 如下命令运行单张图片模型推理: -``` +``` cd inference python ominfer.py ``` @@ -292,7 +293,7 @@ python ominfer.py **步骤2** 在命令行输入 如下命令运行精度测试: -``` +``` cd inference python eval.py ``` diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py index 79c2d4991..686323da8 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py @@ -157,10 +157,6 @@ class CustomDataset: result += table.table return result - @staticmethod - def build_pipeline(pipeline): - return PipelineFunc(pipeline) - @classmethod def get_classes(cls, classes=None): """Get class names of current dataset. @@ -305,3 +301,7 @@ class CustomDataset: """Get another random index from the same group as the given index.""" pool = np.where(self.flag == self.flag[idx])[0] return np.random.choice(pool) + + @staticmethod + def build_pipeline(pipeline): + return PipelineFunc(pipeline) -- Gitee From 50236bef4b59649f40e9680351559c960bf416ae Mon Sep 17 00:00:00 2001 From: wenwenyu Date: Thu, 15 Dec 2022 22:59:03 +0800 Subject: [PATCH 51/51] clean code --- contrib/Overlap-Recovery/train/src/dataset/base_dataset.py | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py index 686323da8..d8993d23a 100644 --- a/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py +++ b/contrib/Overlap-Recovery/train/src/dataset/base_dataset.py @@ -175,6 +175,10 @@ class CustomDataset: return cls.custom_classes raise NotImplementedError + @staticmethod + def build_pipeline(pipeline): + return PipelineFunc(pipeline) + def load_annotations(self, ann_file): """Load annotation from annotation file.""" print(self.ann_file, ann_file) @@ -302,6 +306,3 @@ class CustomDataset: pool = np.where(self.flag == self.flag[idx])[0] return np.random.choice(pool) - @staticmethod - def build_pipeline(pipeline): - return PipelineFunc(pipeline) -- Gitee
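
附:上述多个补丁都围绕配置驱动的预处理流水线(`CUSTOM_PIPELINES` 注册表与 `PipelineFunc`)进行重构。为便于理解这一模式,下面给出一个可独立运行的最小示意。注意:其中 `ToFloat`、`Normalize` 两个变换类、`PIPELINES` 注册表以及示例配置均为简化的假设实现,仅说明"类型名查注册表、用配置字典实例化、按序串联"的思路,并非本仓库 `preprocess_utils.py` 的真实代码:

```python
# 最小示意:配置字典列表 -> 注册表查类 -> 实例化 -> 串联执行
import numpy as np


class ToFloat:
    """示意变换:将图像转为 float32。"""

    def __call__(self, results):
        results['img'] = results['img'].astype(np.float32)
        return results


class Normalize:
    """示意变换:逐通道 (img - mean) / std 归一化。"""

    def __init__(self, mean, std):
        self.mean = np.asarray(mean, dtype=np.float32)
        self.std = np.asarray(std, dtype=np.float32)

    def __call__(self, results):
        results['img'] = (results['img'] - self.mean) / self.std
        return results


# 对应补丁中的 CUSTOM_PIPELINES:类型名 -> 变换类
PIPELINES = {'ToFloat': ToFloat, 'Normalize': Normalize}


class PipelineFunc:
    """按配置字典列表依次构建各变换,调用时按序串联执行。"""

    def __init__(self, pipelines):
        self.pipelines = []
        for pipe in pipelines:
            pipe = dict(pipe)  # 拷贝一份,避免 pop 修改调用方的配置
            pipe_type = pipe.pop('type')
            try:
                cls = PIPELINES[pipe_type]
            except KeyError:
                raise KeyError(f'unsupported pipeline type: {pipe_type!r}')
            self.pipelines.append(cls(**pipe))

    def __call__(self, results):
        for pipe in self.pipelines:
            results = pipe(results)
        return results


if __name__ == '__main__':
    cfg = [dict(type='ToFloat'),
           dict(type='Normalize', mean=[127.5] * 3, std=[127.5] * 3)]
    pipeline = PipelineFunc(cfg)
    sample = {'img': np.zeros((768, 768, 3), dtype=np.uint8)}
    out = pipeline(sample)
    print(out['img'].dtype, float(out['img'].mean()))  # float32 -1.0
```

这种注册表模式正是补丁 46 在 `PipelineFunc` 中为未知 `type` 增加 `KeyError` 处理的原因:配置文件中的类型名一旦拼写错误(例如误写已删除的 `MultiScaleFlipAug`),应当在构建流水线时就给出明确报错,而不是在运行中途失败。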