diff --git a/tutorials/source_en/beginner/quick_start.ipynb b/tutorials/source_en/advance/linear_fitting.ipynb similarity index 40% rename from tutorials/source_en/beginner/quick_start.ipynb rename to tutorials/source_en/advance/linear_fitting.ipynb index ee46ca7810c77c81460e30b6e9570dccd0d2f575..375708b66eef9572f8d4b4bb24084220465e2c56 100644 --- a/tutorials/source_en/beginner/quick_start.ipynb +++ b/tutorials/source_en/advance/linear_fitting.ipynb @@ -3,62 +3,133 @@ { "cell_type": "markdown", "source": [ - "# Simple Linear Function Fitting\n", + "# Customization Case: Simple Linear Function Fitting\n", "\n", - "`Ascend` `GPU` `CPU` `Beginner` `Whole Process`\n", - "\n", - "Author: [Yi Yang](https://github.com/helloyesterday)\n", - "\n", - "[![Download Notebook](https://gitee.com/mindspore/docs/raw/master/resource/_static/logo_notebook_en.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/tutorials/en/mindspore_linear_regression.ipynb)  [![View Source On Gitee](https://gitee.com/mindspore/docs/raw/master/resource/_static/logo_source_en.png)](https://gitee.com/mindspore/docs/blob/master/tutorials/source_en/beginner/quick_start.ipynb)" ], "metadata": {} }, { "cell_type": "markdown", + "metadata": {}, "source": [ - "## Overview\n", + "MindSpore provides users with three different levels of API: high-level, intermediate-level, and low-level. The details are described in the [Basic Introduction - Hierarchical Content section](https://www.mindspore.cn/tutorials/en/master/index.html).\n", "\n", - "Regression algorithms usually use a series of properties to predict a value, and the predicted values are consecutive. For example, the price of a house is predicted based on some given feature data of the house, such as area and the number of bedrooms; or future temperature conditions are predicted by using the temperature change data and satellite cloud images in the last week. If the actual price of the house is CNY5 million, and the value predicted through regression analysis is CNY4.99 million, the regression analysis is considered accurate. For machine learning problems, common regression analysis includes linear regression, polynomial regression, and logistic regression. In this chapter, we will use deep learning to fit a linear function $f(x) = 2x + 3$ on MindSpore.\n", + "To facilitate control of the network execution process, MindSpore provides the high-level training and inference interface `mindspore.Model`: the network is trained and inferred by specifying the neural network model to be trained together with the common training settings, and then calling the `train` and `eval` methods. At the same time, if you want to personalize a specific module, you can also call the corresponding low-level interfaces to define the training process of the network yourself.\n", "\n", - "## Environment Preparation\n", + "This chapter will use the low-level and intermediate-level APIs provided by MindSpore to fit the linear function:\n", "\n", - "Complete MindSpore running configuration."
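As a point of comparison for the high-level `mindspore.Model` interface mentioned above, the following is a minimal, self-contained sketch of driving training through `Model` instead of the low-level APIs used in the rest of this chapter. The network, loss (`nn.MSELoss`), optimizer (`nn.Momentum`) and the tiny synthetic dataset below are illustrative choices for this sketch only and are not taken from the tutorial's own cells.

```python
import numpy as np
from mindspore import Model, nn
from mindspore import dataset as ds

# A 1-in/1-out linear model, equivalent in spirit to the LinearNet defined later in this chapter.
net = nn.Dense(1, 1)
loss_fn = nn.MSELoss()
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.005, momentum=0.9)

# Tiny synthetic dataset: y = 2x + 3 plus noise, matching the function fitted below.
def gen(num=160):
    for _ in range(num):
        x = np.random.uniform(-10.0, 10.0)
        y = 2 * x + 3 + np.random.normal(0, 1)
        yield np.array([x], np.float32), np.array([y], np.float32)

train_set = ds.GeneratorDataset(list(gen()), column_names=["data", "label"]).batch(16)

# Model wraps network, loss and optimizer; train/eval drive the whole loop.
model = Model(net, loss_fn=loss_fn, optimizer=optimizer)
model.train(epoch=1, train_dataset=train_set, dataset_sink_mode=False)
```

The rest of this chapter deliberately avoids this one-call interface and instead builds the equivalent pieces by hand with the low-level and intermediate-level APIs.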
- ], - "metadata": {} + "$$f(x) = 2x + 3 \\tag{1}$$\n", + "\n", + "Before initializing the network, you need to configure the `context` parameters that control the policies executed by the program, such as configuring the static graph or dynamic graph mode and the hardware environment in which the network runs. This chapter first introduces this configuration information, and then uses the low-level and intermediate-level APIs provided by MindSpore to customize the loss function, the optimizer, the training process, metrics, and the validation process.\n", + "\n", + "## Configuration Information\n", + "\n", + "Before initializing the network, you need to configure the `context` parameter to control the policies executed by the program, such as configuring the static graph or dynamic graph mode and the hardware environment in which the network runs. This section mainly describes execution mode management and hardware management.\n", + "\n", + "### Execution Mode\n", + "\n", + "MindSpore supports both the Graph and PyNative modes of operation. The Graph mode is the default mode for MindSpore, while the PyNative mode is mainly used for purposes such as debugging.\n", + "\n", + "- Graph mode (static graph mode): The neural network model is compiled into a whole graph and then sent to the hardware for execution. This mode leverages techniques such as graph optimization to improve operational performance while facilitating large-scale deployment and cross-platform operation.\n", + "\n", + "- PyNative mode (dynamic graph mode): The individual operators in the neural network are sent one by one to the hardware for execution. This mode is convenient for users to write code and debug the neural network model.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Mode Choice\n", "\n", "By configuring the context parameter, you can control the mode in which the program runs. The main differences between the Graph and PyNative modes are:\n", "\n", "- Usage scenario: The Graph mode needs to build the network structure first, and the framework then performs whole-graph optimization and execution. It is more suitable for scenarios where the network is fixed and high performance is required. The PyNative mode, on the other hand, executes operators line by line, and supports executing single operators, ordinary functions and networks, as well as computing gradients separately.\n", "\n", "- Network execution: The Graph and PyNative modes have the same precision when running the same network and operators. Because the Graph mode uses graph optimization, computational graph sinking, and other techniques, it executes the network with higher performance and efficiency.\n", "\n", "- Code debugging: In script development and network process debugging, it is recommended to use the PyNative mode. In the PyNative mode, you can easily set breakpoints, obtain intermediate results of the network execution, and debug the network with pdb. In the Graph mode, you cannot set breakpoints; you can only specify operators to print and then view their output after the network execution is completed." ] }, { "cell_type": "markdown", "metadata": {}, "source": [
"When using the Graph mode, set the running mode in the context to `GRAPH_MODE`; you need to use the `nn.Cell` class and write the execution code in the `construct` function, or call the `@ms_function` decorator.\n", + "\n", + "#### Mode Switch\n", + "\n", + "MindSpore provides a unified coding style for static and dynamic graphs, which greatly increases the compatibility between them: users do not need to develop multiple sets of code and can switch between the static graph and dynamic graph modes with only one line of code. When switching modes, pay attention to the [constraints](https://www.mindspore.cn/docs/note/en/master/static_graph_syntax_support.html) of the target mode.\n", + "\n", + "> For example, the PyNative mode does not support data sinking.\n", + "\n", + "Set the running mode to the dynamic graph mode:" ] }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, + "metadata": {}, + "outputs": [], "source": [ "from mindspore import context\n", "\n", - "# set the mode to static image mode and the training hardware to CPU\n", - "context.set_context(mode=context.GRAPH_MODE, device_target=\"CPU\")" - ], + "context.set_context(mode=context.PYNATIVE_MODE)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When MindSpore is in the static graph mode, it can switch to the dynamic graph mode by calling `context.set_context(mode=context.PYNATIVE_MODE)`; similarly, when MindSpore is in the dynamic graph mode, it can switch to the static graph mode by calling `context.set_context(mode=context.GRAPH_MODE)`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], - "metadata": { - "ExecuteTime": { - "end_time": "2021-01-04T07:04:52.617310Z", - "start_time": "2021-01-04T07:04:51.919345Z" - } - } + "source": [ + "context.set_context(mode=context.GRAPH_MODE)" ] }, { "cell_type": "markdown", + "metadata": {}, "source": [ - "> Third-party support package: `matplotlib` and `IPython`. If this package is not installed, run the `pip install matplotlib IPython` command to install it first.\n", + "### Hardware Management\n", "\n", - "## Generating Datasets\n", + "The hardware management part mainly involves two parameters: `device_target` and `device_id`.\n", "\n", - "### Defining the Dataset Generation Function\n", + "- `device_target`: the target device to run on. `Ascend`, `GPU` and `CPU` are supported, and the value can be set according to the actual environment conditions.\n", "\n", - "`get_data` is used to generate training and test datasets. Since linear data is fitted, the required training datasets should be randomly distributed around the objective function. Assume that the objective function to be fitted is $f(x)=2x+3$. $f(x)=2x+3+noise$ is used to generate training datasets, and `noise` is a random value that complies with standard normal distribution rules." - ], - "metadata": {} + "- `device_id`: indicates the target device ID, whose value is in the range [0, `device_num_per_host` - 1]. `device_num_per_host` represents the total number of devices on the server and cannot exceed 4096. `device_id` defaults to 0. 
In the case of non-distributed mode execution, in order to avoid device conflicts, the device ID used by the program can be determined by setting `device_id`.\n", + "\n", + "The code example is as follows:\n", + "\n", + "```Python\n", + "from mindspore import context\n", + "\n", + "context.set_context(device_target=\"Ascend\", device_id=6)\n", + "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ + "## Generating the Dataset\n", + "\n", + "Define the dataset generation function `get_data` to generate the training dataset and the test dataset.\n", + "\n", + "Since linear data is being fitted, assume that the objective function to be fitted is $f(x)=2x+3$. The training dataset we need should be randomly distributed around this function, so it is generated as $f(x)=2x+3+noise$, where `noise` is a random value that follows the standard normal distribution." ] }, { "cell_type": "code", - "execution_count": 2, + "execution_count": null, + "metadata": {}, + "outputs": [], "source": [ "import numpy as np\n", "\n", @@ -68,14 +139,7 @@ " noise = np.random.normal(0, 1)\n", " y = x * w + b + noise\n", " yield np.array([x]).astype(np.float32), np.array([y]).astype(np.float32)" - ], - "outputs": [], - "metadata": { - "ExecuteTime": { - "end_time": "2021-01-04T07:04:52.623357Z", - "start_time": "2021-01-04T07:04:52.618320Z" - } - } + ] }, { "cell_type": "markdown", @@ -90,10 +154,10 @@ "source": [ "import matplotlib.pyplot as plt\n", "\n", - "eval_data = list(get_data(50))\n", + "train_data = list(get_data(50))\n", "x_target_label = np.array([-10, 10, 0.1])\n", "y_target_label = x_target_label * 2 + 3\n", - "x_eval_label, y_eval_label = zip(*eval_data)\n", + "x_eval_label, y_eval_label = zip(*train_data)\n", "\n", "plt.scatter(x_eval_label, y_eval_label, color=\"red\", s=5)\n", "plt.plot(x_target_label, y_target_label, color=\"green\")\n", @@ -104,7 +168,7 @@ { "output_type": "display_data", "data": { - "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAXkAAAEICAYAAAC6fYRZAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAqxUlEQVR4nO3dd3wVVfrH8c9DKAKCgBQRCMW2Au6iBBQUQVwrKnZB10aJqLhi12V3QWUVVGyroggW9ocFK00RkCACggREaVJFDCX0JlKSnN8fMwmXcNPIndzk5vt+vfLKnZlz5zyZXJ4czjlzxpxziIhIbCoT7QBERCQ4SvIiIjFMSV5EJIYpyYuIxDAleRGRGKYkLyISw5TkRbIxs6lm1iOfZTuYWUrQMYkcKSV5KbHMbLWZ/WFmu0O+Xol2XDkxs9vMbHq045DSpWy0AxAppMudc5OjHYRIcaWWvMQcM6tgZtvNrHnIvlp+q7+2mVU3s3FmtsnMtvmv6+fz3BXN7B3/fYuBVtmOP2pmK81sl5ktNrOr/P2nAq8Dbfz/cWz393cysx/MbKeZ/WZm/SN0GUQAJXmJQc65fcCnQNeQ3dcD3zjnNuJ97t8GGgLxwB9Afrt5+gEn+F8XAbdmO74SaAccAzwO/J+Z1XXOLQF6Ad855452zlXzy/8O3AJUAzoBd5rZlfn9WUXyoiQvJd3nfqs986unv/89oEtIuRv9fTjntjjnPnHO7XHO7QL+A7TPZ33XA/9xzm11zv0GvBx60Dn3kXNunXMuwzn3IbAcaJ3TyZxzU51zC/zyPwHvFyAWkTypT15Kuitz6JNPAiqZ2ZlAKtAC+AzAzCoBLwAXA9X98lXMLM45l55HfccDv4Vs/xp60MxuAe4HGvm7jgZq5nQyP76BQHOgPFAB+CiPGETyTS15iUl+sh6F12XTFRjnt9oBHgBOAc50zlUFzvX3Wz5OvR5oELIdn/nCzBoCbwK9gWP9LpmFIecNt+Tre8AYoIFz7hi8fvv8xCGSL0ryEsveA24AbvJfZ6qC1w+/3cxq4PWz59co4DF/8LY+cE/Iscp4iXwTgJndjtdCz5QK1Dez8tli2eqc22tmrfG6lUQiRkleSrqx2ebJf5Z5wDk3G29g83jgy5D3vAhUBDYDs4AJBajvcbwuml+AicD/QupbDAwGvsNL6KcBM0LeOwVYBGwws83+vruAJ8xsF/BvvD8iIhFjemiIiEjsUkteRCSGFTrJm1kDM0vyb/xYZGb3+vv7m9laM5vvf11a+HBFRKQgCt1dY2Z1gbrOuXlmVgWYC1yJN594t3PuuUJHKSIiR6TQ8+Sdc+vxppXhnNtlZkuAeoU9r4iIFF5EB17NrBEwDW/a2P3AbcBOIBl4wDm3Lcx7EoFEgMqVK7f805/+FLF4RERKg7lz5252ztUKdyxiSd7Mjga+wbvl+1Mzq4M3Rc0BT+J16XTL7RwJCQkuOTk5IvGIiJQWZjbXOZcQ7lhEZteYWTngE2Ckc+5TAOdcqnMu3TmXgXcXYI7rd4iISDAiMbvGgOHAEufc8yH764YUuwrv9m4RESlCkVig7GzgZmCBmc339/0D6GpmLfC6a1YDd0SgLhERKYBIzK6ZTvgFlb4o7LlFRKRwdMeriEgMU5IXEYlhSvIiIjFMSV5EJIoyXAa3fn4rHy/+OJDzK8mLiETJ5FWTiXsijhE/jqDb6FzvFT1iesariEgR25++nxNfPpHfdnqPC25ZtyWze8wOpC4leRGRIjRq0Shu+PiGrO1Z3WdxZv0zA6tPSV5EpAjs3r+bagOrke7SAbjilCv4/IbP8RYNCI6SvIhIwF6b8xp3f3F31vbiuxZzaq1Ti6RuJXkRkYBs2bOFms/WzNpOPCORNy5/o0hjUJIXEQlA/6n9efybx7O21/RZQ4NjGhR5HEryIiKRkJEBmzbxW4V9xL/UMGt3v/b96N+hf9TCUpIXESmsjAw47zx6HfMtb7Q8+CCmzQ9t5thKx0YxMN0MJSJSaIuXTsc6TstK8K+e8zSun4t6gge15EVEjphLT+eK/13CuF8nAVA2HbZPa0vlO24D5yDg6ZH5oZa8iMgRmLVmJmUGlM1K8B9e9R4H7lxHZVcWGjSADh28bpwoi8Tj/xqYWZKZLTazRWZ2r7+/hplNMrPl/vfqhQ9XRCS60jPSaTm0JW3ePhuA+O2w76k4rq/TEcqUgZkzIS3N+75pU3SDJTIt+TTgAedcU+As4G4zawo8CnztnDsJ+NrfFhEpsSasmEDZJ8syb/08ACb9cBq/vlKW8medDbVre19t20LZst732rWjHHFkHv+3Hljvv95lZkuAekBnoINf7F1gKvBIYesTESlq+9L20fDFhqT+ngpAm/ptmN5tOmUcXmu9du2D/e9JSYfvi6KIDryaWSPgdGA2UMf/AwCwAaiTw3sSgUSA+Pj4SIYjIlJo7y14j5s+vSlre07POSQcn+BtGFAnW2orU+bwfVEUsSRvZkcDnwB9nHM7Qxfdcc45M3Ph3uecGwoMBUhISAhbRkSkqO3at4uqA6tmbV9z6jV8dN1HgS8oFmkRmV1jZuXwEvxI59yn/u5UM6vrH68LbIxEXSIiQXtp1kuHJPilvZfy8fUfl7gEDxFoyZv3Uw8Hljjnng85NAa4FRjofx9d2LpERAKTkcGmNUuo/W7zrF29W/Xmv5f+N4pBFV4kumvOBm4GFpjZfH/fP/CS+ygz6w78ClwfgbpERCIvI4O+3RvxVKPfsnal3JdCvar1ohhUZERids10vOGHcM4v7PlFRIK0evtqGr/UGBp52wOSytD3w3VQtfgMnhaG7ngVkVKr2+huXoL3bX0ujr7unGIxvz1StHaNiJQ6C1IX8OfX/5y1PfSyofQ8vTv0Kj7z2yNFSV5ESg3nHJeMvISvVn4FQMWyFdn88GYqlavkFShG89sjRUleREqFGWtmcM7b52Rtf3L9J1x96tVRjKhoKMmLSExLz0jn9DdOZ8HGBQCcUP0Elty9hHJx5Q4v7D/dKZa6bDTwKiIxa9yycZR9smxWgp9yyxRW/H1Fzgn+vPOgfv1is0xwJKglLyIxZ2/aXuo9X4+tf2wFoH3D9ky5dQplLJd27aZNhy8THAN99GrJi0hMGfHjCCr+p2JWgp+XOI+pt03NPcFDsVwmOBLUkheRmLBj7w6qDaqWtd21eVfeu+a9/J/ArNgtExwJSvIiUuINnjmYByc9mLW9/J7lnFjjxIKfqJgtExwJSvIiUmKl7k7luMHHZW3fd9Z9PH/R87m8o/RRkheREunhSQ/z7Mxns7bX3b+OulXqRjGi4klJXkSKlzzmqq/atooTXj4ha3vg+QN55Bw9WTQnSvIiUnxkzlWfOdOb4ZKU5PWT+/726d8YuWBk1va2R7ZR7ahqUQi05FCSF5HiI4e56j9u+JEWb7TIKjb8iuF0O71b9OIsQZTkRaT4yJyr7rfkXa1anP9uR5JWJwFQtUJVNjywgYrlKkY50JIjIknezN4CLgM2Ouea+/v6Az2BTX6xfzjnvohEfSISo0Lmqk/742faPx
mXdejzGz6n8586RzG4kilSLfl3gFeAEdn2v+Ccey5CdYhILMlhgDWNDJp9dC7LtiwD4NSap/LTnT9Rtow6Ho5ERJY1cM5NA7ZG4lwiUgrksBjY5z9/Trkny2Ul+Gm3TWPx3YuV4Ash6CvX28xuAZKBB5xz27IXMLNEIBEgPj4+4HBEpFjINsD6x/o11B5xGrv37wbg/MbnM+nmSVhOSwvE4JLAQQlygbIhwAlAC2A9MDhcIefcUOdcgnMuoVatWgGGIyLFRshiYG9d04RKwxpnJfgfe/3I5Fsm557gY3BJ4KAE1pJ3zqVmvjazN4FxQdUlIiWMGdu//Izqzx4LeF0zf/vz3/jfVf/L+70xuiRwUAJryZtZ6P3FVwELg6pLREqWgdMH+gnes/LvK/OX4CFmlwQOSqSmUL4PdABqmlkK0A/oYGYtAAesBu6IRF0iUnKt37We458/Pmv74bYPM+iCQQU7SYwuCRyUiCR551zXMLuHR+LcIhIb7ptwHy/OfjFre8MDG6hz9BF2s8TgksBB0bwkEQnU8i3LOfmVk7O2B184mPvb3B/FiEoXJXkRCYRzjq6fdOXDRR9m7dvx6A6qVqgaxahKHyV5EYm4eevn0XJoy6ztEVeO4Oa/3BzFiEovJXkRiZgMl0H7d9ozfc10AGpWqslv9/3GUWWPCimkG5mKUpA3Q4lIKZL0SxJxT8RlJfhxXcaw6ZaFHBVXwSuQkQHr13s3MIW7kSkjA1JTwbkijz2WKcmLSKEcSD9Ak5ea0HFERwD+UucvpPXdT6fE5w4m87Q07y7V+Hj49ttDb2QC3cUaICV5ETlinyz+hPIDyvPL9l8AmNFtBvN7zSduy9ZD70r9+eeD22YQF3fojUzh7mKViFCSF5EC23NgD0cNOIprP7oWgEtOvISMf2fQtkFbr0D2u1KbNj243a4dpKTA1KkH++R1F2tgNPAqIgXyRvIb9BrfK2t7wZ0LaF67+aGFwt2VmttdqrqLNTBK8iJyuDAzYLb+sZVjnzm43ky3Ft0Y3jmXG9uz35Wa112quos1EOquEZFDhRkEHTBtwCEJfvW9q3NP8FJsqCUvIocKGQRd+9N06oc8Z7Vvu74M6DggisFJQSnJi8ih/EHQ3lW+5dVWB6cybrx/A7WqqDulpFF3jYgcYumWZVjHabzayrsp6aUvwQ0oS609UQ5Mjoha8iICeAuKXTPqGj77+bOsfTu/aUuVud9rWmMJFqmHhrwFXAZsdM419/fVAD4EGuE9NOT6cA/yFpHom7N2Dq2Htc7afu/q9+h6WletMxMDItVd8w5wcbZ9jwJfO+dOAr72t0WkGMlwGZw57MysBH98lePZ9899XoKHg9MaleBLrIgkeefcNGBrtt2dgXf91+8CV0aiLhHJpzwW/Jq0chJxT8Tx/drvAZhw0wTW3r+W8nHlizJKCViQffJ1nHPr/dcbgLDD8maWCCQCxMfHBxiOSCmSOdd95kyvPz0pyWuVA/vT93PCyyeQsjMFgITjE5jVfRZxZeJyO6OUUEUyu8Y55/Ae6B3u2FDnXIJzLqFWrVpFEY5I7Mthwa8PF35IhQEVshL8rO6zmNNzjhJ8DAuyJZ9qZnWdc+vNrC6wMcC6RCRU5oJffkt+d7VKVH28DM5va11xyhV8ft2n2ObNXneO+txjVpAt+THArf7rW4HRAdYlIqEyF/xKSeHVZ6+jysCqWQl+8V2LGX39Z1jHjlq/vRSI1BTK94EOQE0zSwH6AQOBUWbWHfgVuD4SdYlI/mzeu5Varx+Xtd2rZS+GXDbE20hNPbw7R4uDxaSIJHnnXNccDp0fifOLSMH0S+rHE9OeyNpe02cNDY5pcLBAtu4c3egUu3THq0gMWbNjDQ1fbJi13b99f/p16Hd4Qa3fXmooyYvEiMSxibw5782s7c0PbebYSsfm/Aat314qKMmLlHCLNy2m2WvNsrZfvfRV7mp1VxQjkuJESV6khHLOcfn7lzN++XgAypUpx7ZHtlG5fOUoRybFiZK8SAk0K2UWbYa3ydoede0ormt2XRQjkuJKSV6kBEnPSKfVm634YcMPADQ8piHL7lmm9WYkR0ryIiXEl8u/5NL3Ls3annzzZM5volnKkjsleZFibl/aPuJfjGfj797KIG3qt2F6t+mUMT3YTfKmT4lItOSxFDDAyJ9GctR/jspK8HN6zmFm95lK8JJvasmLREMuSwED7Nq3i6oDq2ZtX9v0WkZdOwrTTUtSQGoOiERDDksBA7w066VDEvzS3kv56LqPlODliKglLxINYdaO2fj7Ruo8d/AO1Hta38PLl7wcxSAlFijJi0RDtrVj/jGlL09PfzrrcMp9KdSrWs/b0MO0pRDUXSMShHwMqlKmDKsr/IE9USYrwQ84bwCunzs0wZ93ntZ9lyOmJC8SaTkl5myJ//bRt9P4pcZZb9v68Fb6ntv30HPl0ncvkh9K8iKRFi4xhyT+BZ1aYY8b78x/B4Chlw3F9XNUr1j98HNl9t2XLat13+WIBN4nb2argV1AOpDmnEsIuk6RqAr3QI6NG3EzZ3Bxl3QmnjgXgMrlKrPxoY1UKlcp53Np3XcppKIaeD3PObe5iOoSia4wiXnG3uWc88/0rCKfXPcxVze9Jn/n07rvUgiaXSOSl9xmt+R0zE/MaRlptBjSgkWbFgFw0jFNWNR7CeXKakExKRpF0SfvgIlmNtfMErMfNLNEM0s2s+RNGlSS4ia32S15zHwZt2wc5Z4sl5Xgk25NYlmflUrwUqTM5TbFKxIVmNVzzq01s9rAJOAe59y0cGUTEhJccnJyoPGIFEhqqpfE09K8wc+UlINdJzkc25u2l7qD67J973YA2jdsz5Rbp2i9GQmMmc3Nabwz8E+dc26t/30j8BnQOug6RSImt9ktYY69O/9dKv6nYlaCn5c4j6m3TVWCl6gJtE/ezCoDZZxzu/zXFwJPBFmnSETlNrsl5NiOqhWo9sTBRN61eVfeu+a9KAQscqigB17rAJ/5CyuVBd5zzk0IuE6RyMptdkuZMjy38n88NOmhrF3L71nOiTVOLKLgRHIXaJJ3zq0C/hJkHSLRsmH3BuoOrpu1fd9Z9/H8Rc9HMSKRw2kKpcgReHDigwz+bnDW9rr711G3St1c3iESHUryIgWwcutKTvzvwa6YQX8dxMNnPxzFiERypyQvkk+3fX4b7/74btb2tke2Ue2oatELSCQfNK9LJA8bdm/g2lHXZiX4tzu/jevnlOClRFBLXiQHzjne/fFd7v/qfvYc2MNTHZ+iz1l9qFiuYrRDE8k3JXmRMFZvX80d4+5g4sqJnBN/DsMuH8YpNU+JdlgiBaYkLxIiIz2NV5MG8dj3T2NmvHLJK9zZ6k7dsSollpK8iG9J6iJ6DGzLzGo7uXhLdV7vP5eGNRrn/UaRYkzNEyn1DqQf4Klvn6LF0DP4ufxORnwKXwzZScMDuTzMQ6SEUEteSrV56+fRfUx35m+Yz3VNr+O/b/xGncXJ0KaN9yxW5/Q0JinR1JKXUumPA3/w2OTHaP1mazbs3sCn13/KqOtGUWfiD
Fizxkvs9et7q0ump+d9QpFiSi15KXW+/fVbeoztwbIty+jWohvPXfjcwYdolynjfc2Y4SX3WbOgXTuYPt3bL1LC6FMrpcaufbu4e/zdnPvOuexP38+kmycxvPPwgwk+U+3a0KrVwe05c7ylhkVKILXkpVT4cvmX3DHuDlJ2ptDnzD4M6DiAyuUrhy9s5rXc27XzEnz2h4WIlCBK8lKy5PZQ7TC27N7EfWPv4n/LPubUmqcyo9sM2jRok3c9cXFeoi9AXSLFkbprpOTI48HZoZxzfLTwQ5o+XY/3l3zMv36J54eec/OX4DNlPixECV5KsMCTvJldbGZLzWyFmT0adH0SwzZtgpkzvQdnz5yZYz/5ul3ruHrU1Vz/SRcabD5A8lB4YuQ6KmzbWcQBi0RfoEnezOKAV4FLgKZAVzNrGmSdEsNye6g2Xut9+LzhNH21KRNWTOCZvw5i1tJ2/GVL+PIipUHQffKtgRX+YwAxsw+AzsDigOuVWJTLQ7VXbVtF4thEvv7la85teC7DLh/GSceeBFMeVL+6lGpBd9fUA34L2U7x92Uxs0QzSzaz5E2apiahMjIgNdW76zRTtn7y9Ix0Xpz1IqcNOY3v137PkE5DSLo1yUvwYcqLlDZRH3h1zg11ziU45xJq1aoV7XCkuMjHIOviTYs55+1zuO+r++jQqAOL7lpEr4ReWjFSJETQ3TVrgQYh2/X9fSK5CzfIWqcOAPvT9zNo+iAGfDuAKuWr8H9X/R83nnYjpta6yGGCTvJzgJPMrDFecu8C3BhwnRILMgdZZ848ZNA0eV0y3cd056fUn+jSvAsvXfwStStrQFUkJ4Emeedcmpn1Br4C4oC3nHOLgqxTYkS2QdY9aX/Qf2p/Bn83mOOOPo7RXUZzxSlXRDtKkWIv8DtenXNfAF8EXY/EIH/Q9JvV39BjbA9WbF1BzzN68swFz+gh2iL5pGUNpNjauW8nj0x6hNfnvk6T6k34+pav6di4Y7TDEilRlOSlWBq/bDy9xvdi3a513H/W/TzZ8UkqldOTmkQKSkleipXNezbTZ0IfRi4YSbNazfj4uo85s/6Z0Q5LpMRSkpdiwTnHh4s+5J4v72HH3h30a9+Pf7T7B+Xjykc7NJESTUleom7tzrXcOf5Oxi4bS6vjWzH8iuGcVue0aIclEhOU5CVqnHMMmzeMByc9yIH0Awy+cDD3nnkvcWXioh2aSMxQkpeoWLl1JT3H9iRpdRLnNTqPNy9/kxNqnBDtsERijpK8FKn0tAO8NOUp/jlnEOXiyjH0sqH0OKOHliQQCYiSvBROAR7Ht3DdfLo/247vq+3m8k01GDJgPvWqNcj1PSJSOFquT45cPh/Htz99P/2T+nHGG2ewqtxu3v8YRr+xg3r7NHNGJGhqycuRy2WlyEzfr/2ebqO7sWjTIm5cZLz0JdTcA7RprSc1iRQBteTlyOXyOL49B/bwwFcP0GZ4G7bv3c7YLmMYuakdNffFwVlnwfTpepCHSBFQS16OXA6P40v6JYkeY3uwatsqerXsxaALBlG1QlVI6qRH8YkUMSV5KZzMx+sBO/bu4KGJD/LmD8M4scaJTL11Ku0btQ9bVkSKhpK8HJlss2rGLB3DnePvZMPOdTz0ndF/fx0q3d0u2lGKlHqB9cmbWX8zW2tm8/2vS4OqS4pYyKyajRe0pctHN9D5g84cW+4YZr8VxzMTHZWmz/b+CIhIVAU98PqCc66F/6UHh8SKTZtwM2cw8tQ0mracxac/f8YTHZ4g+c4fSGh8dtiBWBGJDnXXSIH9VmEfd/Y6hvE1t3LWjioMS5xJszrNvYNhBmJFJHqCbsn3NrOfzOwtM6secF0SsAyXwevJr9NsSHOS6u7lxbZPMv3ZrQcTPBwcXFWCFykWCtWSN7PJwHFhDvUFhgBPAs7/PhjoFuYciUAiQHx8fGHCkQAt37KcnmN78s2v33B+4/MZevlQmlRvEu2wRCQP5pwLvhKzRsA451zz3MolJCS45OTkwOOR/EvLSOOF717g31P/TYW4Cjx/0fPc3uJ2LSgmUoyY2VznXEK4Y4H1yZtZXefcen/zKmBhUHVJMH7c8CPdx3Rn7vq5dD6lM691eo3jqxwf7bBEpACCHHh9xsxa4HXXrAbuCLAuiaB9afsYMG0AA2cMpEbFGoy6dhTXNr1WrXeREiiwJO+cuzmoc0sE5LBE8He/fUf3Md1ZsnkJN//5Zl646AWOrXRsFAMVkcLQAmWlUZglgn/f/zt9JvTh7LfOZvf+3Xxx4xeMuGqEErxICad58qVRtiWCJ//wCT2nP8zq7au5u9XdPH3+01SpUCXaUYpIBKglXxrVrg1t2rCtchm6dzuWC8ZdT7ky5Zh22zReufQVJXiRGKKWfGnkHJ/V2sJdvTLYVDmVR9s+wr879KNiuYrRjkxEIkwt+VImdXcq14+8kqv/vJg6v8Pst+J4+s/3KcGLxCgl+VLCOceIH0dw6qunMvrXr/jPykbMeSuOlk3O1kJiIjFM3TWlwJoda7hj3B1MWDGBtg3aMvyK4fypxsnwrBYSE4l1SvLFXQ7z2fP1VpfBkDlDePTrR3HO8fLFL3N367spY/5/4GrVgo0blehFYpi6a4qzMPPZ82vpxiW0H9qW3l/2pk39Niy8ayH3nHnPwQRfiHOLSMmhJF+cZZvPnp8nLR1IP8DAb5/mL680Y+Evs3l78cl8deOXNKrWqNDnFpGSR0m+OKtd23vCUj6ftPTD+h84c9iZPDblH3RaBktehds+XYVt3lzoc4tIyaQkX5yZeU9aSkmBqVO97YwMSE2FkCWi96btpe/XfWn1ZivW7VrHx9d9xCfr23Hc3lwSeLhzi0jM0cBrcZN9oDXzSUuZx847z+teadsWkpKYkeItKLZ0y1Jua3Ebgy8cTI2KNSDp6rwHbEPPLSIxSS354iSvwdCQfvTdc2bw988Safd2O/am7eWrv33F253f9hI86DF8IgIoyRcveQ2G+v3oX51chmb3luWVhW/Ru3VvFt61kAtPuDA6MYtIsaYkHw1h+tWBPAdDt+7dxm19GnHxjRlUrNeIb2//lpcveZmjyx9dhMGLSElSqCRvZteZ2SIzyzCzhGzHHjOzFWa21MwuKlyYMSS3LplcBkM/WfwJTV9tyv8tGEnfdn2Z32s+Z8efXeThi0jJUtiB14XA1cAboTvNrCnQBWgGHA9MNrOTnXPphayv5AvXJRM6+JltMHT9rvX0/rI3ny75lNOPO50Jf5tAi+NaFH3cIlIiFaol75xb4pxbGuZQZ+AD59w+59wvwAqgdWHqihnhumTCdN8453hn/js0fa0p45eNZ+D5A/m+5/dK8CJSIEFNoawHzArZTvH3HcbMEoFEgPj4+IDCKUYyu2Qypzc6d9i0yNU715A4NpFJqyZxTvw5DLt8GKfUPCXakYtICZRnkjezycBxYQ71dc6NLmwAzrmhwFCAhIQEl0fx2BDaJbNxY1b3Tfp3M3h1ytP84/unMTNevfRVeiX0OrjejIhIAeWZ5J1zfz2C864FGoRs1/f3SXZ+982SpTPo
0bUyM2f8k4tPvJjXO71Ow2oNox2diJRwQTURxwBdzKyCmTUGTgK+D6iuEu1ARhr/eeICWtwdx891yzLiyhF8ceMXXoLPaaqliEg+FXYK5VVmlgK0Acab2VcAzrlFwChgMTABuFszaw43b/08Wr3Zin9O/RdXnnoli+9azM1/uRnLXKNGSwGLSCGZK0atxISEBJecnBztMAL3x4E/ePybx3lu5nPUqlyLIZ2GcOWfrjy0UGqql+DT0ryZOCkpWmdGRMIys7nOuYRwx7RAWVEIWXRs2ppv6TGmB8u3Lqf76d159oJnqV6x+uHvyZxqmTnrRksBi8gRUJIPmt/tsjN5Bo/dWJvX6q+nUbVGTLp5En9tksuYdvapllpoTESOgJJ80DZt4ssN07njjgxSqq6nT/OeDDjzUSrXa5z3e7UUsIgUkiZgB2jLni3cMuNBLr0xgyr7YMYHlXjhpZ+p3PgUDaaKSJFQSz4Azjk+WvwRvb/ozba92/jXt0bfqY4Kth9WzIT0dK+vfeNGOC7cfWYiIpGhlnyErdu1jqs+vIobPr6B+GPimdszmScOtKOC+WvVtG3r9a+npcH116s1LyKBUks+QpxzvPXDWzww8QH2pe/jmb8+w31t7qNsmbKHDqBu2AANGnit+e++O3wVShGRCFKSj4BV21bRc2xPpvwyhXMbnsuwy4dx0rEnHSwQOoB63HFw9tmaGikiRUJJvhDSM9L57/f/pe+UvsRZHEM6DSGxZWLuC4ppaqSIFCEl+SO0aOMiuo/pzuy1s+l0UieGdBpCg2Ma5P1G0NRIESkySvIFtD99P4OmD+LJaU9StUJVRl49kq7Nu2LOeUsRqHUuIsWIknwBzFk7h+5jurNg4wK6NO/Cyxe/TK3KtQ4uJhby4A/KaOKSiESfMlE+7Dmwh4cmPsRZw89iyx9bGN1lNO9f876X4CH8c1tFRIoBteTzMHX1VHqO7cmKrSvoeUZPnr3gWY456phDC2kxMREpppTkc7Bj7w4emfwIb8x9gybVm/D1LV/TsXHH8IU1Y0ZEiikl+TDGLxvPHePuYP3u9Txw1v080aw3leo1yv1NmjEjIsVQYZ8MdZ2ZLTKzDDNLCNnfyMz+MLP5/tfrhQ81eJt+38RNn97EZe9fRvWK1fnu9hk891QylRqfrAXFRKREKmxLfiFwNfBGmGMrnXMtCnn+IuGc44OFH/D3CX9nx94d9G/fn8faPUb5zdsOH1BVa11ESpBCJXnn3BLAeyZpCZWyM4U7x9/JuGXjaF2vNcOvGE7z2s29gxpQFZESLsg++cZm9gOwE/inc+7bcIXMLBFIBIiPjw8wnENluAyGzRvGQ5Me4kD6AQZfOJh7z7yXuDJxocFpQFVESrQ8k7yZTQbCLXre1zk3Ooe3rQfinXNbzKwl8LmZNXPO7cxe0Dk3FBgK3oO88x96GCHPUs0tIa/YuoKeY3sydfVUzmt0Hm9e/iYn1DghfGENqIpICZZnknfO5fIg0hzfsw/Y57+ea2YrgZOB5AJHmF/5uOs0PSOdF2e9yL+S/kW5uHIMvWwoPc7oUaK7m0REchNId42Z1QK2OufSzawJcBKwKoi6soS76zSkBb4gdQHdx3Rnzro5XH7y5QzpNIR6VesFGpKISLQVdgrlVWaWArQBxpvZV/6hc4GfzGw+8DHQyzm3tVCR5iVzkLRs2UMGSfel7aNfUj/OGHoGq7ev5oNrPmB0l9FK8CJSKphzhesGj6SEhASXnFyIHp1sffKzU2bTfUx3Fm1axE2n3cSLF79IzUo1IxewiEgxYGZznXMJ4Y7F1h2v/iDp7/t/519J/+LFWS9Sr2o9xnUdR6eTO0U7OhGRIhdbSR6Y8ssUeo7tyaptq+jVsheDLhhE1QpVox2WiEhUxEyS3753Ow9NfIhhPwzjxBonMvXWqbRv1D7aYYmIRFVMJPnkdcl0/qAzG3Zv4OG2D9O/Q38qlqsY7bBERKIuJpJ8k+pNaFarGaO7jCbh+LBjDyIipVJMJPkaFWsw8eaJ0Q5DRKTY0eP/RERimJK8iEgMU5IXEYlhSvIiIjFMSV5EJIYpyYuIxDAleRGRGKYkLyISw4rVUsNmtgn4tRCnqAlsjlA4kaS4CkZxFYziKphYjKuhc65WuAPFKskXlpkl57SmcjQproJRXAWjuAqmtMWl7hoRkRimJC8iEsNiLckPjXYAOVBcBaO4CkZxFUypiium+uRFRORQsdaSFxGREEryIiIxrEQleTO7zswWmVmGmSVkO/aYma0ws6VmdlEO729sZrP9ch+aWfmA4vzQzOb7X6vNbH4O5Vab2QK/XHIQsWSrr7+ZrQ2J7dIcyl3sX8cVZvZoEcT1rJn9bGY/mdlnZlYth3KBX6+8fnYzq+D/flf4n6VGQcQRpt4GZpZkZov9fwP3hinTwcx2hPx+/11EseX6ezHPy/41+8nMziiCmE4JuQ7zzWynmfXJVqZIrpeZvWVmG81sYci+GmY2ycyW+9+r5/DeW/0yy83s1iMKwDlXYr6AU4FTgKlAQsj+psCPQAWgMbASiAvz/lFAF//168CdRRDzYODfORxbDdQswuvXH3gwjzJx/vVrApT3r2vTgOO6ECjrvx4EDIrG9crPzw7cBbzuv+4CfFhEv7u6wBn+6yrAsjCxdQDGFdXnKb+/F+BS4EvAgLOA2UUcXxywAe+GoSK/XsC5wBnAwpB9zwCP+q8fDfeZB2oAq/zv1f3X1Qtaf4lqyTvnljjnloY51Bn4wDm3zzn3C7ACaB1awMwM6Ah87O96F7gywHAz67weeD/IeiKsNbDCObfKObcf+ADv+gbGOTfROZfmb84C6gdZXy7y87N3xvvsgPdZOt//PQfKObfeOTfPf70LWALUC7reCOkMjHCeWUA1M6tbhPWfD6x0zhXmbvoj5pybBmzNtjv0c5RTLroImOSc2+qc2wZMAi4uaP0lKsnnoh7wW8h2Cof/AzgW2B6STMKVibR2QKpzbnkOxx0w0czmmlliwLFk6u3/l/mtHP6LmJ9rGaRueK2+cIK+Xvn52bPK+J+lHXifrSLjdxGdDswOc7iNmf1oZl+aWbMiCimv30u0P1NdyLmhFY3rBVDHObfef70BqBOmTESuW7F7kLeZTQaOC3Oor3NudFHHk5N8xtmV3Fvx5zjn1ppZbWCSmf3s/9UPJC5gCPAk3j/KJ/G6kroVpr5IxJV5vcysL5AGjMzhNBG/XiWNmR0NfAL0cc7tzHZ4Hl6XxG5/vOVz4KQiCKvY/l78cbcrgMfCHI7W9TqEc86ZWWBz2YtdknfO/fUI3rYWaBCyXd/fF2oL3n8Ty/otsHBl8i2vOM2sLHA10DKXc6z1v280s8/wugsK9Y8jv9fPzN4ExoU5lJ9rGfG4zOw24DLgfOd3SIY5R8SvVzb5+dkzy6T4v+Nj8D5bgTOzcngJfqRz7tPsx0OTvnPuCzN7zcxqOucCXYwrH7+XQD5T+XQJMM85l5r9QLSuly/VzOo659b7XVcbw5RZizdukKk+3nhkgcRKd80YoIs/86Ex3l/j70ML+IkjCbj
W33UrEOT/DP4K/OycSwl30Mwqm1mVzNd4g48Lw5WNlGz9oFflUN8c4CTzZiKVx/uv7piA47oYeBi4wjm3J4cyRXG98vOzj8H77ID3WZqS0x+lSPL7/YcDS5xzz+dQ5rjM8QEza4337zvQP0D5/L2MAW7xZ9mcBewI6aoIWo7/m47G9QoR+jnKKRd9BVxoZtX9rtUL/X0FE/TIciS/8BJTCrAPSAW+CjnWF29mxFLgkpD9XwDH+6+b4CX/FcBHQIUAY30H6JVt3/HAFyGx/Oh/LcLrtgj6+v0PWAD85H/I6maPy9++FG/2xsoiimsFXt/jfP/r9exxFdX1CvezA0/g/QECOMr/7KzwP0tNgr4+fr3n4HWz/RRynS4FemV+zoDe/rX5EW8Au20RxBX295ItLgNe9a/pAkJmxgUcW2W8pH1MyL4iv154f2TWAwf8/NUdbxzna2A5MBmo4ZdNAIaFvLeb/1lbAdx+JPVrWQMRkRgWK901IiIShpK8iEgMU5IXEYlhSvIiIjFMSV5EJIYpyYuIxDAleRGRGPb/8IUYX7twnz4AAAAASUVORK5CYII=", + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXkAAAEICAYAAAC6fYRZAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAq5ElEQVR4nO3dd3xUVfrH8c8TAkiVjlQR2wruiksEsSyIXVaxi64KSwno4i7qYuOnYF1Rsa1YUKxrQ1BpCoKCdBSQIiCCiBSp0kVKkuf3x0xCiBOSkLkzyeT7fr3mxdx7z73nyc3wzMm5555r7o6IiCSmpHgHICIiwVGSFxFJYEryIiIJTEleRCSBKcmLiCQwJXkRkQSmJC+Sg5lNNLOu+SzbxsxWBx2TyKFSkpdiy8xWmNlvZrYz2+u5eMeVGzPrZGZT4h2HlCzJ8Q5ApJAudvfx8Q5CpKhSS14SjpmVNbOtZnZitnU1w63+WmZW1cxGmdlGM9sSfl8/n8cuZ2avh/dbBJySY/tdZvaDme0ws0Vmdll4/QnAi0Cr8F8cW8Pr25nZN2a23cxWmVm/KJ0GEUBJXhKQu+8BPgSuzbb6auBLd99A6HP/GnAk0BD4DchvN09f4Ojw63ygY47tPwBnAocD9wP/M7M67r4Y6AFMd/eK7l4lXP5X4EagCtAOuMnMLs3vzyqSFyV5Ke4+DrfaM1/dwuvfATpkK3ddeB3u/ou7D3P3Xe6+A3gYaJ3P+q4GHnb3ze6+Cng2+0Z3/8Ddf3b3DHd/H1gKtMjtYO4+0d0XhMvPB94tQCwieVKfvBR3l+bSJz8BKG9mLYH1QDPgIwAzKw88BVwAVA2Xr2Rmpdw9PY/66gKrsi3/lH2jmd0I3AY0Cq+qCNTI7WDh+B4FTgTKAGWBD/KIQSTf1JKXhBRO1kMIddlcC4wKt9oBbgeOB1q6e2XgL+H1lo9DrwUaZFtumPnGzI4EXgZ6AtXDXTLfZjtupClf3wFGAA3c/XBC/fb5iUMkX5TkJZG9A1wD/C38PlMlQv3wW82sGqF+9vwaAtwdvnhbH7gl27YKhBL5RgAz+zuhFnqm9UB9MyuTI5bN7r7bzFoQ6lYSiRoleSnuRuYYJ/9R5gZ3n0nowmZd4NNs+zwNlAM2ATOAMQWo735CXTQ/Ap8Bb2WrbxEwAJhOKKH/EZiabd8vgIXAOjPbFF53M/CAme0A7iP0JSISNaaHhoiIJC615EVEElihk7yZNTCzCeEbPxaa2b/C6/uZ2Rozmxt+XVT4cEVEpCAK3V1jZnWAOu4+x8wqAbOBSwmNJ97p7k8UOkoRETkkhR4n7+5rCQ0rw913mNlioF5hjysiIoUX1QuvZtYImERo2NhtQCdgOzALuN3dt0TYJxVIBahQoULzP/zhD1GLR0SkJJg9e/Ymd68ZaVvUkryZVQS+JHTL94dmVpvQEDUHHiTUpdP5YMdISUnxWbNmRSUeEZGSwsxmu3tKpG1RGV1jZqWBYcDb7v4hgLuvd/d0d88gdBdgrvN3iIhIMKIxusaAwcBid38y2/o62YpdRuj2bhERiaFoTFB2OnADsMDM5obX3QNca2bNCHXXrAC6R6EuEREpgGiMrplC5AmVPinssUVEpHB0x6uISAJTkhcRSWBK8iIiCUxJXkQkjjI8g44fd2TooqGBHF9JXkQkTsYvH0+pB0rx5rw36Tz8oPeKHjI941VEJMb2pu/lmGePYdX20OOCm9dpzsyuMwOpS0leRCSGhiwcwjVDr8lantFlBi3rtwysPiV5EZEY2Ll3J1UerUK6pwNwyfGX8PE1HxOaNCA4SvIiIgF7/uvn+ccn/8haXnTzIk6oeUJM6laSFxEpjIwM2LgRatWCHK3yX3b9Qo3Ha2Qtp/45lZcufimm4Wl0jYjIocrIgLPOgvr1oU2b0HJYv4n9DkjwK3utjHmCB7XkRUTylltrfeNGmDYN0tJC/27cyKrD9tLw6YZZRfq27ku/Nv1iH3OYWvIiIgdzkNY6tWrBaadBcjKcdho9vr7vgAS/qfemuCZ4UJIXETm4CK31LGYwYQKL5n+OtZ3ES7MHATDwooF4X6d6+epxCno/ddeIiBxMZmt92rTQv7VqZW1ydy55vz2jvh8FQHJSMlvv3EqFMhXiFe3vKMmLiBxMuLWes09+xuoZtBrcKqvY+1e+z9VNr45XlLkqdJI3swbAm0BtQk+BGuTuz5hZNeB9oBGhJ0Nd7e5bClufiEjMJSVB7doApGek0+KVFsxZOweAhoc3ZOktSylTqkw8I8xVNPrk04Db3b0JcCrwDzNrAtwFfO7uxwKfh5dFRIqtMcvGkPxgclaCH3fDOH7q9VORTfAQncf/rQXWht/vMLPFQD2gPdAmXOwNYCJwZ2HrExGJtT1pezjy6SNZ/+t6AFrVb8WUzlNIsqI/diWqffJm1gg4GZgJ1A5/AQCsI9SdE2mfVCAVoGHDhpGKiIjEzTsL3uFvH/4ta/nrbl+TUjcljhEVTNSSvJlVBIYBvdx9e/ZJd9zdzcwj7efug4BBACkpKRHLiIjE2o49O6j8aOWs5StOuIIPrvog8AnFoi0qf2uYWWlCCf5td/8wvHq9mdUJb68DbIhGXSIiQXtmxjMHJPglPZcw9OqhmDusXw9efNqjhU7yFvpaGwwsdvcns20aAXQMv+8IDC9sXSIiQdr460bsfqPX2F4A9DylJ97XOa76cQe/87UIi0Z3zenADcACM5sbXncP8CgwxMy6AD8BRW8AqYhIWJ/P+/DIlEeyllffupp6levtLxDpztfaES81FinRGF0zBcitk+rswh5fRCRIK7au4Khnjspafuish+jzlz6/L3iQO1+LMt3xKiIlVufhnXlt7mtZy5vv2EzVclUjF87lzteiTkleRBJfjqmCF6xfwJ9e/FPW5kF/HUS35t1y33f9+lBSr127WHTRZKckLyKJLfOC6bRp+Gm
tuLBbOcb+8BkA5ZLLsemOTZQvXT73fdu0gcmTQ0n+zDNDrfmkon8TVKbiE6mIyKEIXzCdWieNpLaTsxL8sKuHsavPrtwTfLZ9gdCwyZxTDRcDasmLSEJLr1Gdk3uVZUHFNACOrno0i/+xmNKlSue9c+bF1syWfDG64JpJSV5EEtao70dx8bsXQ8XQ8hc3fM5Zjdvm/wBmMHHigX3yxeSCayYleRFJOLvTdlPvyXps/m0zAK2PbM0XHb/IfUKx3J7hCqH+9zp1Ao44OOqTF5GE8ua8Nyn3cLmsBD8ndQ4TO008eIIvhney5pda8iKSELbt3kaV/lWylq898VreueKdvHcspney5pda8iJS7A2YNuCABL/0lqX5S/Cw/+JqcnKxvLCaF7XkRaTYWr9zPUcMOCJr+dZTb+XJ8588yB4RFNM7WfNLSV5EiqU7xt3B49Mez1r++bafqVPpEC+QZnuGa6JRkheRYmX5luUc/ezRWcuPnv0od56hJ4vmRkleRIqN6z+8nrcXvJ21vOXOLVQ5rEr8AioGlORFpOgKj1+fl7GWZoNOzlo9+JLBdD65cxwDKz6U5EWkaMrIwM9qw9mNpzChUehxe5XLVmbd7esoV7pcfGMrRqL1jNdXzWyDmX2bbV0/M1tjZnPDr4uiUZeIlAyT5o0gqe3krAT/8fmvs+2ubfsTfOYUwMXoeavxEK1x8q8DF0RY/5S7Nwu/PolSXSJSnBQwGadlpHH8c8fTesRlAJywEfZNOJP2LW888JgJfJdqNEUlybv7JGBzNI4lIgmkgMn44+8+pvSDpfn+l+8BmNRxIovuXUfyhC8PHL8e6S5ViSjoPvmeZnYjMAu43d235CxgZqlAKkDDhg0DDkdEYiqfUwb8tu83aj1Ri517dwJw9lFnM+6GcVhuNyYV0+etxkOQ0xq8ABwNNAPWAgMiFXL3Qe6e4u4pNWvWDDAcEYm5nFMGuP+u2+bVb16l/CPlsxL8vB7zGH/j+NwTPOy/S3X16tBUwAl2l2o0BZbk3X29u6e7ewbwMtAiqLpEpIjKTMYrV4aSe4MGWd02W3dvxe43uozoAsD1f7oe7+v8qfafDn7MTJl3qSrBH1Rg3TVmVsfd14YXLwO+PVh5EUlQSUmh1/TpWd02j352H3fPfDiryA///IHGVRvHMcjEFZUkb2bvAm2AGma2GugLtDGzZoADK4Du0ahLRIqhcLfN2vlTqdsrDcIJ/o7T7qD/uf3jHFxii0qSd/drI6weHI1ji0gCMOPWu0/m6ZmTslatu30dtSseZFKwgz2tSfJN88mLSKCW/rIUu994euYzAAw4bwDe1/NO8BoHHxWa1kBEAuHuXDvsWt5f+H7Wum13baNy2cp575zgT2uKJbXkRSTq5qydQ9IDSVkJ/s1L38T7ev4SPCT805piSS15EYmaDM+g9eutmbJyCgA1ytdg1a2rOCz5sIIdKMGf1hRLSvIiEhUTfpxA2zfbZi2PunYU7Y5r9/uC+b2gmsBPa4olJXkRKZR96fs4/rnj+XHrjwCcVPskZqfOplRSqd8XzrygmjkdwYQJoWQugdHZFZFDNmzRMMo8VCYrwU/tPJW5PeZGTvCgicXiQC15ESmwXft2Ua1/Nfak7wHgwmMuZPR1ow8+3wxoYrE4UJIXkQJ5adZL9BjdI2t5wU0LOLHWifnbWRdUY05JXqQkiMLdo5t/20z1x6pnLXdu1pnB7Q/hxnZdUI0p9cmLJLoo3D360KSHDkjwK/614tASvMScWvIiia4Qd4+u2b6G+k/Vz1ruc2YfHmr7UFCRSgCU5EUS3SFe7Oz5SU8Gfj0wa3nDvzdQs4Ie7FPcKMmLJLoCXuxcsmkJfxj4h6zlZy54hn+2/GfQUUpAlORFSoJ8XOx0d64YcgUfffdR1rrtd22nUtlKQUcnAYrKhVcze9XMNpjZt9nWVTOzcWa2NPxv1WjUJSJRlJEB69fz9eqvSHogKSvBv3P5O3hfV4JPANEaXfM6cEGOdXcBn7v7scDn4WURKSoyMsg4qw0t7z2CFoNbAlC3Ul32/N8erv1jpOcASXEUlSTv7pOAzTlWtwfeCL9/A7g0GnWJSHSMmzOUUm0n81W90PKYdu+y5rY1lClVJr6BSVQF2SdfO9uDvNcBETsEzSwVSAVo2LBhgOGICMDe9L0c/ezRrN6+GoCUn2HGd2dS6r5r4hyZBCEmF17d3c3Mc9k2CBgEkJKSErGMiETH+9++T4dhHbKWZ3SeRssyjaFGDdiwQVMNJKAgk/x6M6vj7mvNrA6wIcC6ROQgdu7dSeX/VMYJtaMuOf4SPr7m49CEYpr+N6EF+ZscAXQMv+8IDA+wLhHJxcCvBlLpP5WyEvyimxcxvMPw/TNGavrfhBaVlryZvQu0AWqY2WqgL/AoMMTMugA/AVdHoy4RyZ9NuzZR8/H9d6j2aN6DF/76wu8LavrfhBaVJO/uuY23OjsaxxeRguk7oS8PTHoga3llr5U0OLxB5MKa/jeh6Y5XkQSycttKjnz6yKzlfq370bdN37x31PS/CUtJXiRBpI5M5eU5L2ctb+q9ierlqx9kDykJlORFirlFGxfR9PmmWcsDLxrIzafcHMeIpChRkhcpptydi9+9mNFLRwNQOqk0W+7cQoUyFeIcmRQlSvIixdCM1TNoNbhV1vKQK4dwVdOr4hiRFFVK8iLFSHpGOqe8fArfrPsGgCMPP5Lvb/le881IrnRbm0gx8enST0l+MDkrwY+/YTwr/rmcMpu2gGtGEIlMLXmRIm5P2h4aPt2QDb+GZgZpVb8VUzpPIcnRdASSJ30iRIqwt+e/zWEPH5aV4L/u9jXTukwjyZI0HYHki1ryIkXQjj07qPxo5azlK5tcyZArh+yfbwY0HYHki5K8SBHzzIxn6DW2V9bykp5LOK76cb8vqOkIJB+U5EXiLSMDNm5kQwWoPeCIrNW3tLiFZy989uD7ajoCyYOSvEg8hedyv6fsZP5z+v4RMqtvXU29yvXiGJgkCl14FYmljAxYvz5ryOOKH2ZjbSdlJfiHTrkL7+tK8BI1SvIisZL5BKb69aFNG/7+cSeOeqdF1ubNk1rR58JH4higJCIleZFYCQ95XFAtDWs7idfnvQHAoHYv4j3WUXX8VF08lagLvE/ezFYAO4B0IM3dU4KuU6Qo8po1ueCmSnxWfQsAFUpXYEPvDZQvXT7OkUkii9WF17PcfVOM6hIpcqaunMoZr50B4endh101lMubXBHfoKRE0OgakQClZaTR7MVmLNy4EIBjqx3LwpsXUrpU6ThHJiVFLPrkHfjMzGabWWrOjWaWamazzGzWRt2WLQlk1PejKP1g6awEP6HjBL6/5XsleImpWLTkz3D3NWZWCxhnZt+5+6TMje4+CBgEkJKSoqn0pNjbnbabOgPqsHX3VgBaH9maLzp+EZpvRiTGAv/Uufua8L8bgI+AFgffQ6QIyzHOPac35r5BuYfLZSX4OalzmNhpohK8xE2gnzwzq2BmlTLfA+cB3wZZp0hgcoxzJyMja9O23duw+41OwzsBcO2J1+J9nZ
PrnByfWEXCgm5e1AammNk84CtgtLuPCbhOkWDkMrXvE9OeoEr/KlnFlt6ylHeueCe0kEfLXyRogfbJu/ty4KQg6xCJmRxT+64rn0Gd+/ffvHTrqbfy5PlP7i+f2fLXQz0kjjSEUiS/sk3t+++5jzHgybpZm36+7WfqVKpzYPlILX/NGCkxpiQvUgA/bP2RY148Jmu5/zn9ueP0OyIX1kM9pAhQkhfJp04fd+KN8HwzAFvu3EKVw6rkvoMe6iFFgDoIRfKwbuc6rhxyZVaCf639a3hfp0qZynlfVM18qIcSvMSJkrxILtyd1+e+TpOBTRj1/SgeafsIu+7ZRadmnQ46nFKkKFF3jUgEK7auoPuo7nz2w2ec0fAMXrn4FY6vcfz+ArqoKsWEWvIi2WR4Bv+d+V9OfP5Epq2axnMXPseXnb48MMHD/ouqycm6qCpFmlryImGLNy6m68iuTFs1jQuOuYAX273IkVWOjFxYF1WlmFCSlxJvX/o+Hp/2OPd/eT8Vy1Tkzbb/5frTb8byunEp86KqSBGm7hop0easnUOLV1rQ54s+tD/uEhZ9dhw3tL0VO+ssXUyVhKAkLyXSb/t+4+7xd9Pi5Ras27mOD6/+kCGtn6P2l7N+NzeNSHGm7hopcSb/NJmuI7vy/S/f07lZZ5447wmqlqsaGu+uO1QlwSjJS4mxY88O7hp/F8/Pep5GVRox7oZxnNP4nP0FdDFVEpCSvJQIny79lO6jurN6+2p6tezFQ20fokKZCr8vqIupkmCU5KV4yMg4pBb2L7t+4daxt/LW/Lc4ocYJTO08lVYNWgUYqEjRoguvUvQdwhQC7s4HCz+gyfNNePfbd7n3zP/jm8vG0Kr+qcHHK1KEBJ7kzewCM1tiZsvM7K6g65MElMsTmXLz846fuXzI5Vw99GoaVG7ArC5f8cD9kyh75NGaZ0ZKnKCf8VoKGAhcCDQBrjWzJkHWKQki+2Pz8jmFgLszeM5gmgxswphlY3jsnMeY0XUGJ5WqW6AvCZFEEnRLvgWwzN2Xu/te4D2gfcB1SnGXs3vGPTTqZfVqmDgxYp/88i3LOfetc+k6sisnHXES83vMp/fpvUlOStY8M1KiBX3htR6wKtvyaqBl9gJmlgqkAjRs2DDgcKRYyG2GxwijXtIz0vnvV/+lzxd9KGWleKHdC6Q2TyXJsrVfNDRSSrC4X3h190HunuLuKTVr1ox3OFIU5LPlvWjjIs547QxuHXsrbRq1YeHNC+mR0uPABJ9JD++QEirolvwaoEG25frhdSK5y6PlvTd9L/2n9OehyQ9RqUwl/nfZ/7juj9dhSuAivxN0kv8aONbMjiKU3DsA1wVcpxQlhzi+Pbebkmb9PIsuI7owf/18OpzYgWcueIZaFdTHLpKbQLtr3D0N6AmMBRYDQ9x9YZB1ShESxUfk7dq3izvG3UHLV1qyadcmhncYzrtXvJt3gs8+SkekBAr8jld3/wT4JOh6pAiK0iPyvlzxJV1HdmXZ5mV0+3M3Hjv3MaocViXvHTO/ZDInHJswIfQXgkgJok+8BKeQQxe379nOTaNuos0bbcjwDD6/8XMGXTwofwkeCnwTlUgi0tw1EpxCDF0c/f1oeozuwc87fua2U2/jwbYPUr50+YLVn/klo6mDpQRTkpdgFXBWx027NtFrTC/eXvA2TWs2ZehVQ2lZv2XeO0ai8fEiSvJSNLg77y98n1s+vYVtu7fRt3Vf7jnzHsqUKlO4A2vqYCnhlOQl7tZsX8NNo29i5PcjOaXuKQy+ZDB/rP3HeIclkhCU5KXwDnEsvLvzypxX+Pe4f7MvfR8DzhvAv1r+i1JJpQIMVqRk0egaKZxDHAv/w+YfOPvNs0kdlUrzOs1ZcNMCbmt124EJXmPcRQpNSV4KJ7dhirkk6PSMdJ6c/iR/fOGPzF47m0F/HcTnN37O0dWOPvC4UbyRSqQkU5KXwok0Fj57gm7dGtauBXe+3fAtp716Grd/djvnND6HRTcvolvzbpHnnNEYd5GoUJ+8FE6kYYobNuxP0JMns7dRAx65th6PHL2Www87nHeveJdrml5z8AnFNMZdJCqU5KXwcg5TzEzQU6fy1RHpdG6fzsJaK7mu8eU8c9lL1ChfI+9jaoy7SFSou0aiz4xdn43m9g+60qorbD0MRs5rytvXDc1fgs+kOeBFCk0teYm6CT9OoOvIrizfspweKd3pf9K/qVz/aCVrkThQkpfcFXD8+7bd2+g9rjcvz3mZY6odw8SOE2ndqHUMAhWR3Ki7RiIr4BDGEUtG0OT5Jgz+ZjC9T+vNvB7zlOBFioDAkryZ9TOzNWY2N/y6KKi6JAD5HMK44dcNdBjagfbvtad6uerM7DqTx859rOAzRopIIILurnnK3Z8IuA4JQs4hjO6hV7jbxt15Z8E7/GvMv9i+ZzsPtHmAO8+4s/ATiolIVKm7RiLLHMK4cmUouTdokNVts2rbKi5+92Ku/+h6jq1+LN90/4Z7W98bOcFragKRuAo6yfc0s/lm9qqZVQ24Lom2pKTQa/p0SEsjY9pUXpz4BE2fb8qEFRN4+vynmfL3KTSt1TTy/pqaQCTuzAvRwjKz8cARETb1AWYAmwAHHgTquHvnCMdIBVIBGjZs2Pynn3465HgkAO7Qpg1LF0+l23UV+bLqNs4+6mwGXTyIxlUbH3zf9etDCT4tLTTtwerVoXHvhzhrpYhEZmaz3T0l0rZC9cm7+zn5DOBlYFQuxxgEDAJISUnR3/RFTJqn89TD7bhv4leUTYbB5w/m783+fvApCTJFmppAD9cWianALryaWR13XxtevAz4Nqi6JBjz1s2jy4guzF47m/bHt+f5ds9Tt1Ld/B8gr3ltMkft6MlNIoEJcnTNY2bWjFB3zQqge4B1SRTtSdvDQ5Me4tGpj1KtXDWGXDmEK5tcmb/We065zWujicdEYiKwJO/uNwR1bImCXPrFp6+aTpcRXVi8aTE3/OkGnjr/KaqXrx69ejXxmEhMqTO0JIow6uXXvb/Sa0wvTn/1dHbu3ckn133Cm5e9Gd0En0kTj4nEjOauKYly3M06/pthdJtyByu2ruAfp/yD/5z9HyqVrRTvKEUkCtSSL4nC/eJbKpaiS+fqnDvqakonlWZSp0k8d9FzSvAiCUQt+ZLIjI9e+Cc3j/6Ojb9t4q7T7uK+1vdRrnS5eEcmIlGmJF/CrN+5nls+6ckHi4dyUu2TGPW3T2het3m8wxKRgCjJlxDuzlvz36LXmF78umsrD3+ZRG8qUTr15HiHJiIBUpIvAVZuW0n3Ud0Zs2wMp9U+hcFPzeEP69MheYZuRhJJcLrwmsAyPIOBXw2k6fNNmfzTZJ694Fkmp07nD8efHppLRjcjiSQ8teQT1JJNS+g6sitTVk7h3MbnMujiQTSq0ii0UTcjiZQYSvIJZl/6PgZMH0C/if0oV7ocr7V/jY4ndTxwSoKcUw2ISMJSkk8g36z9hi4juvDNum+4/ITLGXjRQI6oGGkmaBEpKZTkE8DutN08+OWD9J/anxrlazD0qqFc0eSKeIclIkWAknwxN
3XlVLqM6MKSX5bQqVknBpw3gGrlqsU7LBEpIpTki6mde3dyz+f38NxXz9Hw8IaMvX4s5x19XrzDEpEiRkm+GBq7bCypo1JZtW0VPVv05JGzH6FimYrxDktEiiAl+WJk82+buW3sbbwx7w2Or348k/8+mdMbnh7vsESkCCvUzVBmdpWZLTSzDDNLybHtbjNbZmZLzOz8woUpwxYNo8nAJvxv/v/oc2Yf5vaYqwQvInkqbEv+W+By4KXsK82sCdABaArUBcab2XHunl7I+kqctdvW0HN4dz78cTQnH3EyY64fQ7MjmsU7LBEpJgqV5N19MRDp2Z/tgffcfQ/wo5ktA1oA0wtTX0ni7rwx9zVuHdad3yyNR1ccxe19ZpCcXCbeoYlIMRLU3DX1gFXZlleH1/2OmaWa2Swzm7Vx48aAwileVmxezvmvnsXfR3ThxLVpzHsB7nxvFcm/bIl3aCJSzOTZkjez8UCk2yb7uPvwwgbg7oOAQQApKSle2OMVZ+kZ6Qz86jnuGX0blp7BwB+PpsfPdUnaNl2TiYnIIckzybv7OYdw3DVAg2zL9cPrJBeLNy6m68iuTFs1jQtWGC+OhCN//QlWTg7NNaPJxETkEATVXTMC6GBmZc3sKOBY4KuA6irW9qXv4+FJD9PspWZ8t+k73rz0DT5ZeQZH/hqeCviII0KTiSnBi8ghKNSFVzO7DPgvUBMYbWZz3f18d19oZkOARUAa8A+NrPm9OWvn0Hl4Z+atn8fVTa/m2QuepXbF2jDhek0FLCJRYe5Fpxs8JSXFZ82aFe8wAvfbvt+4/8v7eWLaE9SsUJMX2r3ApX+4NN5hiUgxZWaz3T0l0jbd8Rpjk36aRNcRXVm6eSldTu7C4+c+TtVyVeMdlogkKCX5GNm+Zzt3j7+b52c9T6MqjRh3wzjOaXwo17RFRPJPST4GPl36Kd1HdWf19tX0atmLh9o+RIUyFeIdloiUAEryAfpl1y/cOvZW3pr/Fk1qNmFq56m0atAq3mGJSAmiJF9QGRl5jnxxdz5Y9AE9P+nJlt1buPcv99LnzD6UTS4b42BFpKQLapx8YsrIgLPOgvr1oU2b0HIOP+/4mcvev4xrhl5Dw8MbMjt1Ng+c9QBlk0rD+vVQhEYziUjiU5IviI0bYdo0SEsL/Zttrh13Z/CcwTQZ2ISxP4zlsXMeY0bXGfyp9p/y9eUgIhIEJfmCqFUrdBdqcvIBc8ks37Kcc946h64ju3LSEScxv8d8ep/em+SkcG/YQb4cRESCpD75gjCDCROy+uTTPYP/zvwvfb7oQykrxQvtXiC1eSpJluO7M/PLYdo0TTQmIjGlJF9QSUlQuzYLNyyky4guzFwzk3bHtuOFdi/Q4PAGkffJ8eWgqQpEJFaU5Atob/pe+k/pz4OTHqTybuft0Ulce/h2rEPE6fL3C385iIjEkpJ8AXy95mu6jOjCgg0L6HDMpTx780hqbs+A5OmhVrqSuIgUMbrwmg+79u2i92e9OXXwqfzy2y8M7zCcd6/7kJrNTv/dRVgRkaJELflIst3wNPGnL+k2shvLNi+j25+78fi5j3P4YYeHyqmfXUSKOCX5nMJj2rfNnsqd19XipXpraVy1MZ/f+Dltj2p7YFn1s4tIEackn9PGjYzeMIXu3TNYW3Ett5/UgwfaDaB86fLxjkxEpMAK1SdvZleZ2UIzyzCzlGzrG5nZb2Y2N/x6sfChBm/jrxv525Tb+GuHDKruhumzm/FE++eV4EWk2CpsS/5b4HLgpQjbfnD3ZoU8fjByTDLm7rz37Xv8c8w/2bZ7G/3OvI+7a1xKmRNPUl+7iBRrhWrJu/tid18SrWBiIsc8Mqs3r+CSN87nug+vo3HVxszpNou+90+kTPMWoXKaZ0ZEirEgh1AeZWbfmNmXZnZmboXMLNXMZpnZrI2xmNMlPI9MRnoag36bQtMnj+HzZeMYsLQx0zpN4USrrXlmRCRh5NldY2bjgSMibOrj7sNz2W0t0NDdfzGz5sDHZtbU3bfnLOjug4BBEHqQd/5DP0S1arHs7JPpVmcWExtlcNaP8PIIOHrHShiwWfPMiEhCyTPJu3uBH0Tq7nuAPeH3s83sB+A4YFaBIyyMHH3v6RnpPD39Ke49YwGlS1Vk0LmP0/X2t7Ed0/cndM0zIyIJJJDuGjOraWalwu8bA8cCy4OoK1c5+t4XrJ1Hq8Gt+Pf43pyzeA+LvmhCtz93wyZMhNWrYeLE/Qk9c/y7ux70ISLFWmGHUF5mZquBVsBoMxsb3vQXYL6ZzQWGAj3cfXOhIi2ocN/7Hk+jb/Jk/vxKCis2L+e9D5MY/o5Tb+LsUJnMhJ6zxa4HfYhIAijUEEp3/wj4KML6YcCwwhy70GrVYuYFJ9Kl4VwW1nL+1vQanj7/KWp8ciUk56O/PdKDPnR3q4gUMwl5x+uve3/l3gn38nTKPOpVqMuoi1+i3fF/DW3Mb3+7LsCKSAJIuCT/xY9f0G1kN5ZvWU6P5j3of25/KpetvL9Afueb0QVYEUkACZPkt+7eSu/PevPKN69wTLVjmNhxIq0btS7cQTUBmYgUcwmR5Gf9PIv277Vn3c513HHaHfRr049ypcvFOywRkbhLiCTfuGpjmtZsyvAOw0mpm5L3DiIiJURCJPlq5arx2Q2fxTsMEZEiR4//ExFJYEryIiIJTEleRCSBKcmLiCQwJXkRkQSmJC8iksCU5EVEEpiSvIhIAjMvQg/EMLONwE+FOEQNYFOUwokmxVUwiqtgFFfBJGJcR7p7zUgbilSSLywzm+XuRW5eA8VVMIqrYBRXwZS0uNRdIyKSwJTkRUQSWKIl+UHxDiAXiqtgFFfBKK6CKVFxJVSfvIiIHCjRWvIiIpKNkryISAIrVknezK4ys4VmlmFmKTm23W1my8xsiZmdn8v+R5nZzHC5982sTEBxvm9mc8OvFWY2N5dyK8xsQbjcrCBiyVFfPzNbky22i3Ipd0H4PC4zs7tiENfjZvadmc03s4/MrEou5QI/X3n97GZWNvz7XRb+LDUKIo4I9TYwswlmtij8f+BfEcq0MbNt2X6/98UotoP+Xizk2fA5m29mf45BTMdnOw9zzWy7mfXKUSYm58vMXjWzDWb2bbZ11cxsnJktDf9bNZd9O4bLLDWzjocUgLsXmxdwAnA8MBFIyba+CTAPKAscBfwAlIqw/xCgQ/j9i8BNMYh5AHBfLttWADVieP76Af/Oo0yp8PlrDJQJn9cmAcd1HpAcft8f6B+P85Wfnx24GXgx/L4D8H6Mfnd1gD+H31cCvo8QWxtgVKw+T/n9vQAXAZ8CBpwKzIxxfKWAdYRuGIr5+QL+AvwZ+DbbuseAu8Lv74r0mQeqAcvD/1YNv69a0PqLVUve3Re7+5IIm9oD77n7Hnf/EVgGtMhewMwMaAsMDa96A7g0wHAz67waeDfIeqKsBbDM3Ze7+17gPULnNzDu/pm7p4UXZwD1g6zvIPLzs7cn9NmB0Gfp7PDvOVDuvtbd54Tf7wAWA/WCrjdK2gNvesgMoIqZ1Ylh/WcDP7h7Ye6mP2TuPgnY
nGN19s9RbrnofGCcu2929y3AOOCCgtZfrJL8QdQDVmVbXs3v/wNUB7ZmSyaRykTbmcB6d1+ay3YHPjOz2WaWGnAsmXqG/2R+NZc/EfNzLoPUmVCrL5Kgz1d+fvasMuHP0jZCn62YCXcRnQzMjLC5lZnNM7NPzaxpjELK6/cS789UB3JvaMXjfAHUdve14ffrgNoRykTlvBW5B3mb2XjgiAib+rj78FjHk5t8xnktB2/Fn+Hua8ysFjDOzL4Lf+sHEhfwAvAgof+UDxLqSupcmPqiEVfm+TKzPkAa8HYuh4n6+SpuzKwiMAzo5e7bc2yeQ6hLYmf4esvHwLExCKvI/l7C190uAe6OsDle5+sA7u5mFthY9iKX5N39nEPYbQ3QINty/fC67H4h9GdicrgFFqlMvuUVp5klA5cDzQ9yjDXhfzeY2UeEugsK9Z8jv+fPzF4GRkXYlJ9zGfW4zKwT8FfgbA93SEY4RtTPVw75+dkzy6wO/44PJ/TZCpyZlSaU4N929w9zbs+e9N39EzN73sxquHugk3Hl4/cSyGcqny4E5rj7+pwb4nW+wtabWR13XxvuutoQocwaQtcNMtUndD2yQBKlu2YE0CE88uEoQt/GX2UvEE4cE4Arw6s6AkH+ZXAO8J27r4600cwqmFmlzPeELj5+G6lstOToB70sl/q+Bo610EikMoT+1B0RcFwXAHcAl7j7rlzKxOJ85ednH0HoswOhz9IXuX0pRVO4338wsNjdn8ylzBGZ1wfMrAWh/9+BfgHl8/cyArgxPMrmVGBbtq6KoOX613Q8zlc22T9HueWiscB5ZlY13LV6XnhdwQR9ZTmaL0KJaTWwB1gPjM22rQ+hkRFLgAuzrf8EqBt+35hQ8l8GfACUDTDW14EeOdbVBT7JFsu88GshoW6LoM/fW8ACYH74Q1YnZ1zh5YsIjd74IUZxLSPU9zg3/HoxZ1yxOl+RfnbgAUJfQACHhT87y8KfpcZBn59wvWcQ6mabn+08XQT0yPycAT3D52YeoQvYp8Ugroi/lxxxGTAwfE4XkG1kXMCxVSCUtA/Pti7m54vQl8xaYF84f3UhdB3nc2ApMB6oFi6bArySbd/O4c/aMuDvh1K/pjUQEUlgidJdIyIiESjJi4gkMCV5EZEEpiQvIpLAlORFRBKYkryISAJTkhcRSWD/D5FXcl3a8f5OAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] @@ -124,11 +188,11 @@ { "cell_type": "markdown", "source": [ - "In the preceding figure, the green line indicates the objective function, and the red points indicate the verification data `eval_data`.\n", + "In the preceding figure, the green line indicates the objective function, and the red points indicate the verification data `train_data`.\n", "\n", - "### Defining the Data Argumentation Function\n", + "## Loading the dataset\n", "\n", - "Use the MindSpore data conversion function `GeneratorDataset` to convert the data type to that suitable for MindSpore training, and then use `batch` and `repeat` to perform data argumentation. The operation is described as follows:\n", + "Load the dataset and process the data.\n", "\n", "- `ds.GeneratorDataset`: converts the generated data into a MindSpore dataset and saves the x and y values of the generated data to arrays of `data` and `label`.\n", "- `batch`: combines `batch_size` pieces of data into a batch.\n", @@ -159,7 +223,7 @@ { "cell_type": "markdown", "source": [ - "Use the dataset argumentation function to generate training data and view the training data format." + "Use the dataset argumentation function to generate training data and the resulting 1600 data is enhanced by defining `create_dataset` into 100 sets of 16x1 datasets." ], "metadata": {} }, @@ -173,6 +237,7 @@ "\n", "ds_train = create_dataset(data_number, batch_size=batch_number, repeat_size=repeat_number)\n", "print(\"The dataset size of ds_train:\", ds_train.get_dataset_size())\n", + "step_size = ds_train.get_dataset_size()\n", "dict_datasets = next(ds_train.create_dict_iterator())\n", "\n", "print(dict_datasets.keys())\n", @@ -201,11 +266,13 @@ { "cell_type": "markdown", "source": [ - "Use the defined `create_dataset` to perform argumentation on the generated 1600 data records and set them into 100 datasets with the shape of 16 x 1.\n", + "## Defining the Linear Neural Network Model\n", + "\n", + "The `mindspore.nn` class is the base class for building all networks and is also the basic unit of the network. When users need to customize the network, they can inherit `nn. Cell` class, and override the `init` method and the `construst` method. The `mindspore.ops` module provides an implementation of the base operator, and the `nn.Cell` module implements further encapsulation of the base operator, allowing users to flexibly use different operators as needed.\n", "\n", - "## Defining the Training Network\n", + "The following example uses `nn. Cell` builds a simple fully connected network with sample snippet code for subsequent customizations.\n", "\n", - "In MindSpore, use `nn.Dense` to generate a linear function model of single data input and single data output.\n", + "In MindSpore, use `nn.Dense` to generate single data input and the linear function model output by the single data:\n", "\n", "$$f(x)=wx+b\\tag{1}$$\n", "\n", @@ -217,8 +284,8 @@ "cell_type": "code", "execution_count": 6, "source": [ - "from mindspore.common.initializer import Normal\n", "from mindspore import nn\n", + "from mindspore.common.initializer import Normal\n", "\n", "class LinearNet(nn.Cell):\n", " def __init__(self):\n", @@ -240,7 +307,7 @@ { "cell_type": "markdown", "source": [ - "Call the network to view the initialized model parameters." + "After the network model is initialized, the initialized network function and the training dataset are visualized to understand the model functions before fitting." 
], "metadata": {} }, @@ -248,41 +315,12 @@ "cell_type": "code", "execution_count": 7, "source": [ + "from mindspore import Tensor\n", + "\n", + "# Initialize the linear regression network\n", "net = LinearNet()\n", + "# Get the network parameters w and b before training\n", "model_params = net.trainable_params()\n", - "for param in model_params:\n", - " print(param, param.asnumpy())" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "Parameter (name=fc.weight, shape=(1, 1), dtype=Float32, requires_grad=True) [[-0.0052068]]\n", - "Parameter (name=fc.bias, shape=(1,), dtype=Float32, requires_grad=True) [-0.02897885]\n" - ] - } - ], - "metadata": { - "ExecuteTime": { - "end_time": "2021-01-04T07:04:53.100773Z", - "start_time": "2021-01-04T07:04:53.086027Z" - }, - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "After initializing the network model, visualize the initialized network function and training dataset to understand the model function before fitting." - ], - "metadata": {} - }, - { - "cell_type": "code", - "execution_count": 8, - "source": [ - "from mindspore import Tensor\n", "\n", "x_model_label = np.array([-10, 10, 0.1])\n", "y_model_label = (x_model_label * Tensor(model_params[0]).asnumpy()[0][0] +\n", @@ -296,22 +334,18 @@ ], "outputs": [ { - "output_type": "display_data", - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYQAAAD8CAYAAAB3u9PLAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAp8UlEQVR4nO3deZyN5f/H8ddnZkghJFuWpFAoS5NKqUiR0q7o19431Te+pE2pJtKm0kIltG98E2mxhlJSjH3NjPjajZ3s5ly/P849xzGdWZizzPJ+Ph7zmHNf9/Zxz3E+576u674uc84hIiISF+sAREQkf1BCEBERQAlBREQ8SggiIgIoIYiIiEcJQUREgDAkBDOrbmaTzWyRmS00s65e+bNmtsbM5ng/bfMeroiIRIrl9TkEM6sCVHHOzTKz0sBM4FrgJuBv59yreY5SREQiLiGvB3DOrQPWea93mtlioGpejysiItGV5zuEww5mVhOYAjQAugN3AjuAZOBh59zWEPt0AjoBlCxZ8uzTTz89bPGIiBQFM2fO3OScq5DX44QtIZhZKeBn4Hnn3AgzqwRsAhzwHP5qpbuzO0ZiYqJLTk4OSzwiIkWFmc10ziXm9Thh6WVkZsWAr4HPnXMjAJxzG5xz6c45HzAYaBqOc4mISGSEo5eRAe8Di51z/YLKqwRtdh2wIK/nEhGRyMlzozJwAXAbMN/M5nhlTwIdzawR/iqjFcB9YTiXiIhESDh6Gf0KWIhVo/N6bBERiR49qSwiIoASgoiIeJQQREQEUEIQESmwtu7ZSs03aobteOHoZSQiIlHknKPj1x0ZtnBYWI+rOwQRkQLkozkfEdc7LpAMki5OCtuxdYcgIlIALNq4iPrv1A8sN67cmN//9TvF44vTi15hOYcSgohIPrZr/y7OePsMVu1YFShb3nU5NcvWDPu5VGUkIpJPdR7dmVIvlgokg5E3j8QluYgkA9AdgohIvjNqySiuHXZtYLnzOZ3p37Z/xM+rhCAikk+s2LaCU948JbBc/fjqLH5wMSWLl4zK+ZUQRERiyedj//o1nPfdNcxePztQvOCBBdSvWD+bHcNPCUFEJFZ8PnrdVYtna/0vUPThNR9yZ6M7YxKOEoKISLT5fEyeM5KW390ItfxFNy80vnxzDValSvb7RpASgohIFG3YsY7Kr58UWD4m3Vj3iqPcPmBDB5g8GeJi0wE0HDOmVTezyWa2yMwWmllXr/wEM5tgZine73J5D1dEpGBK96XT5rM2hyWD3z+IZ++Ncyh3MAGcg99+g40bYxZjONLQQeBh51w94DzgQTOrB/QAJjrnagMTvWURkSJnwPQBJDyXwLhl4wDol1IL1yeBc2teAA0aQLNmkJDg/12xYsziDMeMaeuAdd7rnWa2GKgKXANc4m32MfAT8HhezyciUlDMXDuTxMGJgeUWNVsw/rbxJBAHr230f/ib+auJNgYtx0hY2xDMrCbQGPgDqOQlC4D1QKVwnktEJL/avnc71V+vzs79OwNla7uvpUrpoAbjSkEfiXFxhy/HSNhaLsysFPA10M05tyN4nXPOAS6L/TqZWbKZJW+MYd2ZiEheOee4dcStlH25bCAZTLhtAi7JHZ4M8qmwJAQzK4Y/GXzunBvhFW8wsyre+ipAWqh9nXODnHOJzrnEChUqhCMcEZGo+3ze58T1juPz+Z8D8OSFT+KSHK1qtYpxZLmX5yojMzPgfWCxc65f0KpvgTuAl7zfo/J6LhGRfMPng40bWRK3hTPeqRcoPrPimcy4dwbHJBwTw+COTjjaEC4AbgPmm9kcr+xJ/Ingv2Z2D/A/4KYwnEtEJPZ8PnZfehENGk5leVCH+tQuqZx6wqmxiyuPwtHL6Fcgq2bxS/N6fBGR/Oahbx7gjUumBpaHXz6EG86/J4YRhYeeVBYRyaXvl35Puy/bBZbvm2m8u/1C7Jm7YxhV+CghiIjkYOX2lZz8xsmB5UolK5HaeSmltu+J+bMD4aSEICKShQPpB7jwwwuZvmZ6oGzu/XM5q9JZ/oUSx8cossjQFJoiIiE8P+V5ivcpHkgGg9sNxiW5Q8mgENIdgohIkF/+9wsXfXRRYPm6069j+E3DibMQ35+9rqeFpdpICUFEBNi4ayMVXz00sFycxbHhkQ2ceNyJoXfw+aBFC/
<base64-encoded PNG image data omitted>
zHPIk03ZRv545XRuvo8Qwb/0fZlYzGnFliqG6mU02s0Xe/6WuIba5xMy2B70Xnol2nF4c2f4Nze8t73rOM7MmMYixbtB1mmNmO8ysW6ZtYnI9zewDM0sLfj7LzE4wswlmluL9LpfFvnd426SY2R25OqFzLqo/wBn4H6L4CUgMKq8HzMXfSH0KsAyID7H/f4EO3uuBwANRjv814Jks1q0AToz2NQ06/7PAIzlsE+9d21pAce+a14tynJcDCd7rl4GX88P1zM21Af4NDPRedwCGxeDvXAVo4r0ujX/YmMxxXgJ8H+3YjvRvCLQFxgAGnAf8EeN444H1wMn54XoCFwFNgAVBZX2BHt7rHqH+/wAnAH95v8t5r8vldL6o3yE45xY75/4MsSrH4S7M35WpJTDcK/oYuDaC4R7GO/9NwJfROmcENAVSnXN/Oef2A0PxX/uocVkPeRJrubk21+B/34H/fXip976IGufcOufcLO/1TmAxWYwEUABcA3zi/H4HyppZlRjGcymwzDn3vxjGEOCcmwJsyVQc/B7M6jOwNTDBObfFObcV/7hzbXI6X35qQ6gKrApaDjXcRXlgW9CHSZZDYkRIc2CDcy4li/UOGG9mM70hOWKhs3fr/UEWt5K5uc7RdDf+b4ihRPt65ubaBLbx3ofb8b8vY8KrsmoM/BFi9flmNtfMxphZ/ehGFpDT3zC/vR87kPUXvvxwPQEqOefWea/XA5VCbHNU1zWcYxkFWC6Gu8hvchlzR7K/O7jQObfGzCrifzZjiZfhoxIn8C7wHP7/hM/hr966O5znz63cXE/LeciTiF/PgszMSgFfA92cczsyrZ6Fv9rjb68t6Rv8D4hGW4H5G3rtkVfjjdqcSX65nodxzjkzC9uzAxFJCC6H4S6ykJvhLjbjv6VM8L6dhW1IjJxiNrME4Hrg7GyOscb7nWZmI/FXQYT1zZ/ba2tmg4HvQ6yKyrAiubied/LPIU8yHyPi1zOT3FybjG1We++JMvjfl1FlZsXwJ4PPnXMjMq8PThDOudFm9o6Zneici+pAbbn4G+anYW6uAGY55zZkXpFfrqdng5lVcc6t86rX0kJsswZ/u0eGavjbbbOVn6qMchzuwvvgmAzc6BXdAUTrjqMVsMQ5tzrUSjMraWalM17jbziN6sitmeper8vi/DOA2ubvrVUc/y3yt9GIL4NlPeRJ8DaxuJ65uTbf4n/fgf99OCmrhBYpXpvF+8Bi51y/LLapnNG2YWZN8f9fj2riyuXf8Fvgdq+30XnA9qDqkGjLsgYgP1zPIMHvwaw+A8cBl5tZOa/q+HKvLHsxaDW/Dn991j5gAzAuaF1P/L08/gSuCCofDZzkva6FP1GkAl8Bx0Qp7o+A+zOVnQSMDoprrvezEH/VSLSv7afAfGCe96apkjlOb7kt/p4py2IUZyr++s053s/AzHHG6nqGujZAb/zJC6CE975L9d6HtWJw/S7EXy04L+gatgXuz3iPAp296zYXf8N9sxjEGfJvmClOA972rvd8gnoeRjnWkvg/4MsElcX8euJPUOuAA97n5j3426wmAinAj8AJ3raJwJCgfe/23qepwF25OZ+GrhARESB/VRmJiEgMKSGIiAighCAiIh4lBBERAZQQRETEo4QgIiKAEoKIiHj+H5p0uMSopQx0AAAAAElFTkSuQmCC", - "text/plain": [ - "
" - ] - }, - "metadata": { - "needs_background": "light" - } + "output_type": "stream", + "name": "stdout", + "text": [ + "Parameter (name=fc.weight, shape=(1, 1), dtype=Float32, requires_grad=True) [[-0.0052068]]\n", + "Parameter (name=fc.bias, shape=(1,), dtype=Float32, requires_grad=True) [-0.02897885]\n" + ] } ], "metadata": { "ExecuteTime": { - "end_time": "2021-01-04T07:04:53.242097Z", - "start_time": "2021-01-04T07:04:53.102786Z" + "end_time": "2021-01-04T07:04:53.100773Z", + "start_time": "2021-01-04T07:04:53.086027Z" }, "scrolled": true } @@ -319,32 +353,19 @@ { "cell_type": "markdown", "source": [ - "As shown in the preceding figure, the initialized model function in blue differs greatly from the objective function in green.\n", - "\n", - "## Optimizing Model Parameters\n", - "\n", - "After the neural network is defined, the deviation between the output value of the neural network and the actual value is calculated through the loss function in the forward propagation process; then the model parameters are updated through the backward propagation network, and the backward propagation minimizes the loss value through the optimizer function to obtain the optimal model parameters." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Defining the Loss Function\n", + "## Customizing the Loss Function\n", "\n", - "Define the loss function of the model. The mean squared error (MSE) method is used to determine the fitting effect. The smaller the MSE value difference, the better the fitting effect. The loss function formula is as follows:\n", + "The Loss Function is used to measure the degree to which the predicted value differs from the true value. In deep learning, model training is the process of narrowing the loss function value by constantly iterating, so the choice of the loss function during model training is very important, and defining a good loss function can help the loss function value converge faster and achieve better accuracy.\n", "\n", - "$$J(w)=\\frac{1}{2m}\\sum_{i=1}^m(h(x_i)-y^{(i)})^2\\tag{2}$$\n", + "[mindspore.nn](https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.nn.html#id13) provides a number of common loss functions for users to choose from, and also allows users to customize loss functions as needed.\n", "\n", - "Assuming that the $i$th data record in the training data is $(x_i,y^{(i)})$, parameters in formula 2 are described as follows:\n", + "When you customize the loss function class, you can inherit both the base class of the network `nn. Cell`, which can also inherit the base class of the loss function `nn. LossBase`. `nn. LossBase` provides `get_loss` method based on `nn.Cell` to sum or mean the loss values by using the `reduction` parameter to output a scalar. The mean absolute error loss function (MAE) will be defined by using the method of inheriting LosBase, and the formula of the MAE algorithm is as follows:\n", "\n", - "- $J(w)$ specifies the loss value.\n", + "$$loss= \\frac{1}{m}\\sum_{i=1}^m\\lvert y_i-f(x_i) \\rvert $$\n", "\n", - "- $m$ specifies the amount of sample data. 
In this example, the value of $m$ is `batch_number`.\n", + "In the above equation, $f(x)$ is the predicted value, $y$ is the sample true value, and $loss$ is the average of the distance between the predicted value and the true value.\n", "\n", - "- $h(x_i)$ is a predicted value obtained after the $x_i$ value of the $i$th data record is substituted into the model network (formula 1).\n", - "\n", - "- $y^{(i)}$ is the $y^{(i)}$ value (label value) of the $i$th data record." + "When using the LossBase method to customize the loss function, you need to override the `__init__` method and the `construst` method, and use the `get_loss` method to calculate the loss. The sample code is as follows:" ], "metadata": {} }, @@ -352,7 +373,17 @@ "cell_type": "code", "execution_count": 9, "source": [ - "net_loss = nn.loss.MSELoss()" + "from mindspore import nn, ops\n", + "\n", + "class MyMAELoss(nn.LossBase):\n", + " \"\"\"Define Loss\"\"\"\n", + " def __init__(self, reduction=\"mean\"):\n", + " super(MyMAELoss, self).__init__(reduction)\n", + " self.abs = ops.Abs()\n", + "\n", + " def construct(self, predict, target):\n", + " x = self.abs(predict - target)\n", + " return self.get_loss(x)" ], "outputs": [], "metadata": { @@ -365,20 +396,27 @@ { "cell_type": "markdown", "source": [ - "### Defining the Optimizer\n", + "## Customizing the Optimizer\n", + "\n", + "The optimizer is used to calculate and update network parameters during model training, and the appropriate optimizer can effectively reduce the training time and improve model performance.\n", + "\n", + "[mindspore.nn](https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.nn.html#id14) provides a number of general-purpose optimizers for users to choose, while also allowing users to customize the optimizer as needed.\n", "\n", - "The objective of the backward propagation network is to continuously change the weight value to obtain the minimum loss value. Generally, the weight update formula is used in the linear network:\n", + "When customizing the optimizer, you can inherit the optimizer base class `nn. Optimizer`, overrides `__init__` methods and `construct` methods implement updates to parameters.\n", "\n", - "$$w_{t}=w_{t-1}-\\alpha\\frac{\\partial{J(w_{t-1})}}{\\partial{w}}\\tag{3}$$\n", + "The following example implements the custom optimizer Momentum:\n", "\n", - "Parameters in formula 3 are described as follows:\n", + "$$ v_{t+1} = v_t×u+grad $$\n", "\n", - "- $w_{t}$ indicates the weight after training steps.\n", - "- $w_{t-1}$ indicates the weight before training steps.\n", - "- $\\alpha$ indicates the learning rate.\n", - "- $\\frac{\\partial{J(w_{t-1}\\ )}}{\\partial{w}}$ is the differentiation of the loss function to the weight $w_{t-1}$.\n", + "SGD algorithm with momentum:\n", "\n", - "After all weight values in the function are updated, transfer the values to the model function. This process is the backward propagation. To implement this process, the optimizer function in MindSpore is required." + "$$p_{t+1} = p_t - lr*v_{t+1}$$\n", + "\n", + "Using the SGD algorithm for Nesterov momentum:\n", + "\n", + "$$p_{t+1} = p_t-(grad+v_{t+1}*u)×lr $$\n", + "\n", + "where grad, lr, p, v, and u respectively represent gradients, learning rates, parameters, moments, and momentums." 
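To make the two update rules above concrete before turning to the MindSpore implementation in the next cell, here is a minimal NumPy sketch of a single momentum update step; the parameter, gradient, learning rate, and momentum values are illustrative assumptions rather than values used elsewhere in this tutorial.

```python
import numpy as np

# Illustrative values (assumed for this sketch only)
p = np.array([1.0])     # parameter p_t
v = np.array([0.0])     # moment v_t
grad = np.array([0.5])  # gradient
lr, u = 0.01, 0.9       # learning rate and momentum

v_next = v * u + grad                      # v_{t+1} = v_t * u + grad
p_momentum = p - lr * v_next               # SGD with momentum
p_nesterov = p - (grad + v_next * u) * lr  # SGD with Nesterov momentum

print(v_next, p_momentum, p_nesterov)      # [0.5] [0.995] [0.9905]
```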
], "metadata": {} }, @@ -386,7 +424,36 @@ "cell_type": "code", "execution_count": 10, "source": [ - "opt = nn.Momentum(net.trainable_params(), learning_rate=0.005, momentum=0.9)" + "from mindspore import Tensor, Parameter\n", + "from mindspore import nn, ops\n", + "from mindspore import dtype as mstype\n", + "\n", + "class MyMomentum(nn.Optimizer):\n", + " \"\"\"Define the optimizer\"\"\"\n", + " def __init__(self, params, learning_rate, momentum=0.9, use_nesterov=False):\n", + " super(MyMomentum, self).__init__(learning_rate, params)\n", + " self.momentum = Parameter(Tensor(momentum, mstype.float32), name=\"momentum\")\n", + " self.use_nesterov = use_nesterov\n", + " self.moments = self.parameters.clone(prefix=\"moments\", init=\"zeros\")\n", + " self.assign = ops.Assign()\n", + "\n", + " def construct(self, gradients):\n", + " \"\"\"The construct input is a gradient, which automatically passes in gradients during training\"\"\"\n", + " lr = self.get_lr()\n", + " # The weight parameter to be updated\n", + " params = self.parameters\n", + " for i in range(len(params)):\n", + " # Update moments value\n", + " self.assign(self.moments[i], self.moments[i] * self.momentum + gradients[i])\n", + " if self.use_nesterov:\n", + " # Using the SGD algorithm for Nesterov momentum:\n", + " update = params[i] - (self.moments[i] * self.momentum + gradients[i]) * lr\n", + " else:\n", + " # SGD algorithm with momentum\n", + " update = params[i] - self.moments[i] * lr\n", + " # Update the params value as update value\n", + " self.assign(params[i], update)\n", + " return params" ], "outputs": [], "metadata": { @@ -399,9 +466,13 @@ { "cell_type": "markdown", "source": [ - "### Building a Complete Network\n", + "## Customizing the Training Process\n", + "\n", + "`mindspore. Model` provides the interface of `train` and `eval` to facilitate users to use during training, but this interface cannot be applied to all scenarios, such as multi-data and multi-label scenarios, where the users need to define their own training process. This section uses linear regression examples to briefly describe the custom training process. First define the loss network, connecting the forward network to the loss function, and then define the training process, which generally inherits `nn.TrainOneStepCell`. `nn.TrainOneStepCell` encapsulates the loss network and optimizer to implement a backpropagation network and to update the weight parameters.\n", "\n", - "After forward propagation and backward propagation are defined, call the `Model` function in MindSpore to associate the previously defined networks, loss functions, and optimizer function to form a complete computing network." + "### Defining the Loss Function\n", + "\n", + "Define the loss network `MyWithLossCell`, which connects the forward network to the loss function." 
], "metadata": {} }, @@ -409,9 +480,23 @@ "cell_type": "code", "execution_count": 11, "source": [ - "from mindspore import Model\n", + "class MyWithLossCell(nn.Cell):\n", + " \"\"\"Define the loss function\"\"\"\n", + "\n", + " def __init__(self, backbone, loss_fn):\n", + " \"\"\"The forward network and the loss function are passed in as parameters when instantiated\"\"\"\n", + " super(MyWithLossCell, self).__init__(auto_prefix=False)\n", + " self.backbone = backbone\n", + " self.loss_fn = loss_fn\n", + "\n", + " def construct(self, data, label):\n", + " \"\"\"Connecting the forward network and the loss function\"\"\"\n", + " out = self.backbone(data)\n", + " return self.loss_fn(out, label)\n", "\n", - "model = Model(net, net_loss, opt)" + " def backbone_network(self):\n", + " \"\"\"The backbone network to be encapsulated\"\"\"\n", + " return self.backbone" ], "outputs": [], "metadata": { @@ -424,13 +509,9 @@ { "cell_type": "markdown", "source": [ - "## Training the Network\n", + "### Defining the Training Process\n", "\n", - "To make the entire training process easier to understand, the test data, objective function, and model network of the training process need to be visualized. The following defines a visualization function which is called after each training step to display a fitting process of the model network.\n", - "\n", - "### Defining the Visualization Function\n", - "\n", - "Defining the Visualization function `plot_model_and_datasets` to visualize the test data, objective function and network model fitting function." + "Define the training process `MyTrainStep`, which inherits `nn.TrainOneStepCell`. `nn.TrainOneStepCell` encapsulates the loss network and optimizer, performs the acquisition of gradient by `ops.GradOperation` operator when performing training and updates the weights through the optimizer." ], "metadata": {} }, @@ -438,24 +519,20 @@ "cell_type": "code", "execution_count": 12, "source": [ - "import matplotlib.pyplot as plt\n", - "import time\n", + "class MyTrainStep(nn.TrainOneStepCell):\n", + " \"\"\"Define the training process\"\"\"\n", "\n", - "def plot_model_and_datasets(net, eval_data):\n", - " weight = net.trainable_params()[0]\n", - " bias = net.trainable_params()[1]\n", - " x = np.arange(-10, 10, 0.1)\n", - " y = x * Tensor(weight).asnumpy()[0][0] + Tensor(bias).asnumpy()[0]\n", - " x1, y1 = zip(*eval_data)\n", - " x_target = x\n", - " y_target = x_target * 2 + 3\n", + " def __init__(self, network, optimizer):\n", + " \"\"\"Parameter initialization\"\"\"\n", + " super(MyTrainStep, self).__init__(network, optimizer)\n", + " self.grad = ops.GradOperation(get_by_list=True)\n", "\n", - " plt.axis([-11, 11, -20, 25])\n", - " plt.scatter(x1, y1, color=\"red\", s=5)\n", - " plt.plot(x, y, color=\"blue\")\n", - " plt.plot(x_target, y_target, color=\"green\")\n", - " plt.show()\n", - " time.sleep(0.2)" + " def construct(self, data, label):\n", + " \"\"\"Construct the training process\"\"\"\n", + " weights = self.weights\n", + " loss = self.network(data, label)\n", + " grads = self.grad(self.network, weights)(data, label)\n", + " return loss, self.optimizer(grads)" ], "outputs": [], "metadata": { @@ -468,9 +545,9 @@ { "cell_type": "markdown", "source": [ - "### Defining the Callback Function\n", + "### Defining the Drawing Function\n", "\n", - "MindSpore provides tools to customize the model training process. The following calls the visualization function in `step_end` to display the fitting process. 
`display.clear_output` is used to clear the printed content to achieve dynamic fitting effect." + "Define drawing function `plot_model_and_datasets` plot test data, the objective function, and the network model fitting function, and view the loss value." ], "metadata": {} }, @@ -478,17 +555,32 @@ "cell_type": "code", "execution_count": 13, "source": [ - "from IPython import display\n", - "from mindspore.train.callback import Callback\n", + "import matplotlib.pyplot as plt\n", + "import time\n", "\n", - "class ImageShowCallback(Callback):\n", - " def __init__(self, net, eval_data):\n", - " self.net = net\n", - " self.eval_data = eval_data\n", "\n", - " def step_end(self, run_context):\n", - " plot_model_and_datasets(self.net, self.eval_data)\n", - " display.clear_output(wait=True)" + "def plot_model_and_datasets(net, data, loss):\n", + " weight = net.trainable_params()[0]\n", + " bias = net.trainable_params()[1]\n", + " x = np.arange(-10, 10, 0.1)\n", + " y = x * Tensor(weight).asnumpy()[0][0] + Tensor(bias).asnumpy()[0]\n", + " x1, y1 = zip(*data)\n", + " x_target = x\n", + " y_target = x_target * 2 + 3\n", + "\n", + " plt.axis([-11, 11, -20, 25])\n", + " # Raw data\n", + " plt.scatter(x1, y1, color=\"red\", s=5)\n", + " # Predicted data\n", + " plt.plot(x, y, color=\"blue\")\n", + " # Fitting function\n", + " plt.plot(x_target, y_target, color=\"green\")\n", + " # Print the loss value\n", + " plt.title(f\"Loss:{loss}\")\n", + "\n", + " plt.show()\n", + " time.sleep(0.2)\n", + " display.clear_output(wait=True)" ], "outputs": [], "metadata": { @@ -501,14 +593,9 @@ { "cell_type": "markdown", "source": [ - "## Performing Training\n", + "### Executing the Training\n", "\n", - "After the preceding process is complete, use the training parameter `ds_train` to train the model. In this example, `model.train` is called. The parameters are described as follows:\n", - "\n", - "- `epoch`: Number of times that the entire dataset is trained.\n", - "- `ds_train`: Training dataset.\n", - "- `callbacks`: Required callback function during training.\n", - "- `dataset_sink_mode`: Dataset offload mode, which supports the Ascend and GPU computing platforms. In this example, this parameter is set to False for the CPU computing platform." + "Use the training data `ds_train` train the training network `train_net` and visualize the training process." 
], "metadata": {} }, @@ -516,15 +603,24 @@ "cell_type": "code", "execution_count": 14, "source": [ - "epoch = 1\n", - "imageshow_cb = ImageShowCallback(net, eval_data)\n", - "\n", - "model.train(epoch, ds_train, callbacks=[imageshow_cb], dataset_sink_mode=False)\n", + "from IPython import display\n", "\n", - "plot_model_and_datasets(net, eval_data)\n", - "print(net.trainable_params())\n", - "for net_param in net.trainable_params():\n", - " print(net_param, net_param.asnumpy())" + "# Loss function\n", + "loss_func = MyMAELoss()\n", + "# Optimizer\n", + "opt = MyMomentum(net.trainable_params(), 0.01)\n", + "# Construct the loss network\n", + "net_with_criterion = MyWithLossCell(net, loss_func)\n", + "# Construct the training network\n", + "train_net = MyTrainStep(net_with_criterion, opt)\n", + "\n", + "for data in ds_train.create_dict_iterator():\n", + " # Perform training and update the weights\n", + " train_net(data['data'], data['label'])\n", + " # Loss values\n", + " loss = net_with_criterion(data['data'], data['label'])\n", + " # Visualize the training process\n", + " plot_model_and_datasets(train_net, train_data, loss)" ], "outputs": [ { @@ -559,11 +655,17 @@ { "cell_type": "markdown", "source": [ - "After the training is complete, the weight parameters of the final model are printed. The value of weight is close to 2.0 and the value of bias is close to 3.0. As a result, the model training meets the expectation.\n", + "## Customizing evaluation metrics\n", "\n", - "## Saving and Loading Models\n", + "When the training task is over, it is often necessary to evaluate the metrics evaluation function to evaluate the quality of the model. The metrics are commonly evaluation index confusion matrix, Accuracy, Precision, Recall, etc.\n", "\n", - "Save the above trained model parameters to a CheckPoint (ckpt for short) file, and then load the model parameters into the network for subsequent inference." + "The [mindspore.nn](https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.nn.html#id16) module provides common evaluation functions, and users can also define their own evaluation indicators as needed. Customizing Metrics functions need to inherit from the `nn.Metric` parent class and reimplement the `clear`, `update`, and `eval` methods in the parent class. The average absolute error (MAE) algorithm is shown in the following equation, and the following is an example of a simple MAE to introduce these three functions and how to use them.\n", + "\n", + "$$ MAE=\\frac{1}{n}\\sum_{i=1}^n\\lvert ypred_i - y_i \\rvert$$\n", + "\n", + "- `clear`: Initialize the relevant internal parameters.\n", + "- `update`: Receive network prediction outputs and labels, calculate errors, and update internal evaluation results. Generally after each step is calculated, the statistical values are updated.\n", + "- `eval`: Calculate the final assessment result, generally at the end of an epoch." 
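As a quick numeric illustration of the MAE formula above, the following sketch uses arbitrarily chosen predicted and true values that do not come from the linear-fitting data:

```python
import numpy as np

y_pred = np.array([2.5, 0.0, 2.1])   # assumed predictions
y_true = np.array([3.0, -0.5, 2.0])  # assumed true values

# MAE = (1/n) * sum(|y_pred_i - y_i|) = (0.5 + 0.5 + 0.1) / 3
mae = np.abs(y_pred - y_true).mean()
print(mae)  # ~0.3667
```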
], "metadata": { "ExecuteTime": { @@ -576,17 +678,36 @@ "cell_type": "code", "execution_count": 15, "source": [ - "from mindspore import save_checkpoint, load_checkpoint, load_param_into_net\n", + "class MyMAE(nn.Metric):\n", + " \"\"\"Define metric\"\"\"\n", "\n", - "# save model parameters in ckpt file\n", - "save_checkpoint(net, \"./linear.ckpt\")\n", - "# store the model parameters in the param_dict dictionary\n", - "param_dict = load_checkpoint(\"./linear.ckpt\")\n", - "# view model parameters\n", - "for param in param_dict:\n", - " print(param, \":\", param_dict[param].asnumpy())\n", - "# load parameters into the network\n", - "load_param_into_net(net, param_dict)" + " def __init__(self):\n", + " super(MyMAE, self).__init__()\n", + " self.clear()\n", + "\n", + " def clear(self):\n", + " \"\"\"Initialize the variables abs_error_sum and samples_num\"\"\"\n", + " self.abs_error_sum = 0\n", + " self.samples_num = 0\n", + "\n", + " def update(self, *inputs):\n", + " \"\"\"Update abs_error_sum and samples_num\"\"\"\n", + " if len(inputs) != 2:\n", + " raise ValueError('Mean absolute error need 2 inputs (y_pred, y), but got {}'.format(len(inputs)))\n", + " # Convert Tensor to NumPy for subsequent calculations\n", + " y_pred = inputs[0].asnumpy()\n", + " y = inputs[1].asnumpy()\n", + " # Calculates the absolute error between the predicted value and the true value\n", + " error_abs = np.abs(y.reshape(y_pred.shape) - y_pred)\n", + " self.abs_error_sum += error_abs.sum()\n", + " # The total number of the samples\n", + " self.samples_num += y.shape[0]\n", + "\n", + " def eval(self):\n", + " \"\"\"Calculate the final assessment results\"\"\"\n", + " if self.samples_num == 0:\n", + " raise RuntimeError('Total samples num must not be 0.')\n", + " return self.abs_error_sum / self.samples_num" ], "outputs": [ { @@ -604,41 +725,104 @@ }, { "cell_type": "markdown", + "metadata": {}, "source": [ - "## Inference\n", + "## Customizing the validation process\n", "\n", - "Use `model.predict` to predict the output." - ], - "metadata": {} + "The mindspore.nn module provides an evaluation network wrapper function [nn.WithEvalCell](https://www.mindspore.cn/docs/api/en/master/api_python/nn/mindspore.nn.WithEvalCell.html#mindspore.nn.WithEvalCell), because `nn.WithEvalCell` has only two input `data` and `label`, which is not suitable for multi-data or multi-label scenarios, so it is necessary to customize the evaluation network. 
For custom evaluation networks in multi-label scenarios, please refer to the [Custom Evaluation and Training section](https://www.mindspore.cn/tutorials/zh-CN/master/advance/train/train_eval.html#Customizingtheevaluationnetwork).\n", + "\n", + "The following example implements a simple customization evaluation network `MyWithEvalCell`, entering inputting data `data` and `label`:" + ] }, { "cell_type": "code", - "execution_count": 16, + "execution_count": null, + "metadata": {}, + "outputs": [], "source": [ - "from mindspore import dtype\n", + "class MyWithEvalCell(nn.Cell):\n", + " \"\"\"Define the validation process\"\"\"\n", "\n", - "# predict the result with an input of 2\n", - "pre_x = Tensor([[2]], dtype=dtype.float32)\n", - "pre_y = model.predict(pre_x)\n", - "print(\"predict result:\", pre_y)" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "predict result: [[6.967516]]\n" - ] - } - ], - "metadata": {} + " def __init__(self, network):\n", + " super(MyWithEvalCell, self).__init__(auto_prefix=False)\n", + " self.network = network\n", + "\n", + " def construct(self, data, label):\n", + " outputs = self.network(data)\n", + " return outputs, label" + ] }, { "cell_type": "markdown", + "metadata": {}, "source": [ - "When the input is 2, substitute the formula $f(x) = 2x + 3$, and the theoretical output is f(2)=7. The predicted output is very close to 7, as expected." - ], - "metadata": {} + "Perform inference and evaluation:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "data_number = 160\n", + "batch_number = 16\n", + "repeat_number = 1\n", + "# Obtain the validation data\n", + "ds_eval = create_dataset(data_number, batch_size=batch_number, repeat_size=repeat_number)\n", + "# Define the evaluation network\n", + "eval_net = MyWithEvalCell(net)\n", + "eval_net.set_train(False)\n", + "# Define the evaluation metrics\n", + "mae = MyMAE()\n", + "\n", + "# Execute the inference process\n", + "for data in ds_eval.create_dict_iterator():\n", + " output, eval_y = eval_net(data['data'], data['label'])\n", + " mae.update(output, eval_y)\n", + "\n", + "mae_result = mae.eval()\n", + "print(\"MAE: \", mae_result)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Output the evaluation error, and MAE and the model on the training set effect is about the same." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Saving and Exporting the Model\n", + "\n", + "Save the above trained model parameters to the CheckPoint (ckpt) file, and then export the CheckPoint file as a MindIR format file for cross-platform inference." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "from mindspore import save_checkpoint, load_checkpoint, export\n", + "\n", + "# Save the model parameters in a ckpt file\n", + "save_checkpoint(net, \"./linear.ckpt\")\n", + "# Save the model parameters in the param_dict dictionary\n", + "param_dict = load_checkpoint(\"./linear.ckpt\")\n", + "# View the model parameters\n", + "for param in param_dict:\n", + " print(param, \":\", param_dict[param].asnumpy())\n", + "\n", + "# Define a linear network\n", + "net1 = LinearNet()\n", + "input_np = np.random.uniform(0.0, 1.0, size=[1, 1]).astype(np.float32)\n", + "export(net1, Tensor(input_np), file_name='linear', file_format='MINDIR')" + ] } ], "metadata": { diff --git a/tutorials/source_en/beginner/basic_process_deep_learning.md b/tutorials/source_en/beginner/basic_process_deep_learning.md deleted file mode 100644 index f971d09470630c3ff4a739f7719fa206119b4b21..0000000000000000000000000000000000000000 --- a/tutorials/source_en/beginner/basic_process_deep_learning.md +++ /dev/null @@ -1,325 +0,0 @@ -# Quick Start for Beginners - -`Ascend` `GPU` `CPU` `Beginner` `Whole Process` - - - -The following describes the basic functions of MindSpore to implement common tasks in deep learning. For details, see links in each section. - -## Configuring the Running Information - -MindSpore uses `context.set_context` to configure the information required for running, such as the running mode, backend information, and hardware information. - -Import the `context` module and configure the required information. - -```python -import os -import argparse -from mindspore import context - -parser = argparse.ArgumentParser(description='MindSpore LeNet Example') -parser.add_argument('--device_target', type=str, default="CPU", choices=['Ascend', 'GPU', 'CPU']) - -args = parser.parse_known_args()[0] -context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target) -``` - -This example runs in graph mode. You can configure hardware information as required. For example, if the code runs on the Ascend AI processor, set `--device_target` to `Ascend`. This rule also applies to the code running on the CPU and GPU. For details about the parameters, see [context.set_context](https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.context.html). - -## Downloading the Dataset - -The MNIST dataset used in this example consists of 10 classes of 28 x 28 pixels grayscale images. It has a training set of 60,000 examples, and a test set of 10,000 examples. - -Click [here](http://yann.lecun.com/exdb/mnist/) to download and unzip the MNIST dataset and place the dataset according to the following directory structure. The following example code downloads and unzips the dataset to the specified location. 
- -```python -import os -import requests - -def download_dataset(dataset_url, path): - filename = dataset_url.split("/")[-1] - save_path = os.path.join(path, filename) - if os.path.exists(save_path): - return - if not os.path.exists(path): - os.makedirs(path) - res = requests.get(dataset_url, stream=True, verify=False) - with open(save_path, "wb") as f: - for chunk in res.iter_content(chunk_size=512): - if chunk: - f.write(chunk) - -train_path = "datasets/MNIST_Data/train" -test_path = "datasets/MNIST_Data/test" - -download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte", train_path) -download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte", train_path) -download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte", test_path) -download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte", test_path) -``` - -The directory structure of the dataset file is as follows: - -```text - ./datasets/MNIST_Data - ├── test - │ ├── t10k-images-idx3-ubyte - │ └── t10k-labels-idx1-ubyte - └── train - ├── train-images-idx3-ubyte - └── train-labels-idx1-ubyte - - 2 directories, 4 files -``` - -## Data Processing - -Datasets are crucial for model training. A good dataset can effectively improve training accuracy and efficiency. -MindSpore provides the API module `mindspore.dataset` for data processing to store samples and labels. Before loading a dataset, we usually process the dataset. `mindspore.dataset` integrates common data processing methods. - -Import `mindspore.dataset` and other corresponding modules in MindSpore. - -```python -import mindspore.dataset as ds -import mindspore.dataset.transforms.c_transforms as C -import mindspore.dataset.vision.c_transforms as CV -from mindspore.dataset.vision import Inter -from mindspore import dtype as mstype -``` - -Dataset processing consists of the following steps: - -1. Define the `create_dataset` function to create a dataset. -2. Define the data augmentation and processing operations to prepare for subsequent mapping. -3. Use the map function to apply data operations to the dataset. -4. Perform shuffle and batch operations on data. - -```python -def create_dataset(data_path, batch_size=32, repeat_size=1, - num_parallel_workers=1): - # Define the dataset. - mnist_ds = ds.MnistDataset(data_path) - resize_height, resize_width = 32, 32 - rescale = 1.0 / 255.0 - shift = 0.0 - rescale_nml = 1 / 0.3081 - shift_nml = -1 * 0.1307 / 0.3081 - - # Define the mapping to be operated. - resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR) - rescale_nml_op = CV.Rescale(rescale_nml, shift_nml) - rescale_op = CV.Rescale(rescale, shift) - hwc2chw_op = CV.HWC2CHW() - type_cast_op = C.TypeCast(mstype.int32) - - # Use the map function to apply data operations to the dataset. - mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers) - mnist_ds = mnist_ds.map(operations=[resize_op, rescale_op, rescale_nml_op, hwc2chw_op], input_columns="image", num_parallel_workers=num_parallel_workers) - - - # Perform shuffle, batch and repeat operations. 
- buffer_size = 10000 - mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) - mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True) - mnist_ds = mnist_ds.repeat(count=repeat_size) - - return mnist_ds -``` - -In the preceding information, `batch_size` indicates the number of data records in each group. Assume that each group contains 32 data records. - -> MindSpore supports multiple data processing and argumentation operations. For details, see [Processing Data](https://www.mindspore.cn/docs/programming_guide/en/master/pipeline.html) and [Data Augmentation](https://www.mindspore.cn/docs/programming_guide/en/master/augmentation.html). - -## Creating a Model - -To use MindSpore for neural network definition, inherit `mindspore.nn.Cell`. `Cell` is the base class of all neural networks (such as `Conv2d-relu-softmax`). - -Define each layer of a neural network in the `__init__` method in advance, and then define the `construct` method to complete the forward construction of the neural network. According to the LeNet structure, define the network layers as follows: - -```python -import mindspore.nn as nn -from mindspore.common.initializer import Normal - -class LeNet5(nn.Cell): - """ - Lenet network structure - """ - def __init__(self, num_class=10, num_channel=1): - super(LeNet5, self).__init__() - # Define the required operation. - self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid') - self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid') - self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02)) - self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02)) - self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02)) - self.relu = nn.ReLU() - self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2) - self.flatten = nn.Flatten() - - def construct(self, x): - # Use the defined operation to construct a forward network. - x = self.conv1(x) - x = self.relu(x) - x = self.max_pool2d(x) - x = self.conv2(x) - x = self.relu(x) - x = self.max_pool2d(x) - x = self.flatten(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.relu(x) - x = self.fc3(x) - return x - -# Instantiate the network. -net = LeNet5() -``` - -## Optimizing Model Parameters - -To train a neural network model, a loss function and an optimizer need to be defined. - -Loss functions supported by MindSpore include `SoftmaxCrossEntropyWithLogits`, `L1Loss`, and `MSELoss`. The following uses the cross-entropy loss function `SoftmaxCrossEntropyWithLogits`. - -```python -# Define the loss function. -net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') -``` - -> For more information about using loss functions in mindspore, see [Loss Functions](https://www.mindspore.cn/tutorials/en/master/optimization.html#loss-functions). - -MindSpore supports the `Adam`, `AdamWeightDecay`, and `Momentum` optimizers. The following uses the `Momentum` optimizer as an example. - -```python -# Define the optimizer. -net_opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9) -``` - -> For more information about using an optimizer in mindspore, see [Optimizer](https://www.mindspore.cn/tutorials/en/master/optimization.html#optimizer). - -## Training and Saving the Model - -MindSpore provides the callback mechanism to execute custom logic during training. The following uses `ModelCheckpoint` provided by the framework as an example. -`ModelCheckpoint` can save the network model and parameters for subsequent fine-tuning. 
- -```python -from mindspore.train.callback import ModelCheckpoint, CheckpointConfig -# Set model saving parameters. -config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10) -# Use model saving parameters. -ckpoint = ModelCheckpoint(prefix="checkpoint_lenet", config=config_ck) -``` - -The `model.train` API provided by MindSpore can be used to easily train the network. `LossMonitor` can monitor the changes of the `loss` value during the training process. - -```python -# Import the library required for model training. -from mindspore.nn import Accuracy -from mindspore.train.callback import LossMonitor -from mindspore import Model -``` - -```python -def train_net(model, epoch_size, data_path, repeat_size, ckpoint_cb, sink_mode): - """Define a training method.""" - # Load the training dataset. - ds_train = create_dataset(os.path.join(data_path, "train"), 32, repeat_size) - model.train(epoch_size, ds_train, callbacks=[ckpoint_cb, LossMonitor(125)], dataset_sink_mode=sink_mode) -``` - -`dataset_sink_mode` is used to control whether data is offloaded. Data offloading means that data is directly transmitted to the device through a channel to accelerate the training speed. If `dataset_sink_mode` is True, data is offloaded. Otherwise, data is not offloaded. - -Validate the generalization capability of the model based on the result obtained by running the test dataset. - -1. Read the test dataset using the `model.eval` API. -2. Use the saved model parameters for inference. - -```python -def test_net(model, data_path): - """Define a validation method.""" - ds_eval = create_dataset(os.path.join(data_path, "test")) - acc = model.eval(ds_eval, dataset_sink_mode=False) - print("{}".format(acc)) -``` - -Set `train_epoch` to 1 to train the dataset in one epoch. In the `train_net` and `test_net` methods, the previously downloaded training dataset is loaded. `mnist_path` is the path of the MNIST dataset. - -```python -train_epoch = 1 -mnist_path = "./datasets/MNIST_Data" -dataset_size = 1 -model = Model(net, net_loss, net_opt, metrics={"Accuracy": Accuracy()}) -train_net(model, train_epoch, mnist_path, dataset_size, ckpoint, False) -test_net(model, mnist_path) -``` - -Run the following command to execute the script: - -```bash -python lenet.py --device_target=CPU -``` - -Where, - -`lenet.py`: You can paste the preceding code to lenet.py (excluding the code for downloading the dataset). Generally, you can move the import part to the beginning of the code, place the definitions of classes, functions, and methods after the code, and connect the preceding operations in the main method. - -`--device_target=CPU`: specifies the running hardware platform. The parameter value can be `CPU`, `GPU`, or `Ascend`, depending on the actual running hardware platform. - -Loss values are displayed during training, as shown in the following. Although loss values may fluctuate, they gradually decrease and the accuracy gradually increases in general. Loss values displayed each time may be different because of their randomicity. -The following is an example of loss values output during training: - -```text -epoch: 1 step: 125, loss is 2.3083377 -epoch: 1 step: 250, loss is 2.3019726 -... -epoch: 1 step: 1500, loss is 0.028385757 -epoch: 1 step: 1625, loss is 0.0857362 -epoch: 1 step: 1750, loss is 0.05639569 -epoch: 1 step: 1875, loss is 0.12366105 -{'Accuracy': 0.9663477564102564} -``` - -The model accuracy data is displayed in the output content. 
In the example, the accuracy reaches 96.6%, indicating a good model quality. As the number of network epochs (`train_epoch`) increases, the model accuracy will be further improved. - -## Loading the Model - -```python -from mindspore import load_checkpoint, load_param_into_net -# Load the saved model for testing. -param_dict = load_checkpoint("checkpoint_lenet-1_1875.ckpt") -# Load parameters to the network. -load_param_into_net(net, param_dict) -``` - -> For more information about loading a model in mindspore, see [Loading the Model](https://www.mindspore.cn/tutorials/en/master/save_load_model.html#loading-the-model). - -## Validating the Model - -Use the generated model to predict the classification of a single image. The procedure is as follows: - -> The predicted images will be generated randomly, and the results may be different each time. - -```python -import numpy as np -from mindspore import Tensor - -# Define a test dataset. If batch_size is set to 1, an image is obtained. -ds_test = create_dataset(os.path.join(mnist_path, "test"), batch_size=1).create_dict_iterator() -data = next(ds_test) - -# `images` indicates the test image, and `labels` indicates the actual classification of the test image. -images = data["image"].asnumpy() -labels = data["label"].asnumpy() - -# Use the model.predict function to predict the classification of the image. -output = model.predict(Tensor(data['image'])) -predicted = np.argmax(output.asnumpy(), axis=1) - -# Output the predicted classification and the actual classification. -print(f'Predicted: "{predicted[0]}", Actual: "{labels[0]}"') -``` - -```text - Predicted: "6", Actual: "6" -``` diff --git a/tutorials/source_en/beginner/infer.md b/tutorials/source_en/beginner/infer.md new file mode 100644 index 0000000000000000000000000000000000000000..8ccdf1c3865c5a4de574e12fcee695ebe288d2ee --- /dev/null +++ b/tutorials/source_en/beginner/infer.md @@ -0,0 +1,408 @@ +# Inference and Deployment + + + +This chapter uses the `mobilenet_v2` network fine-tuning approach in MindSpore Vision to develop an AI application (classification of the dog and the croissants) and deploy the trained network model to the Android phone to perform inference and deployment functions. + +## Data Preparation and Loading + +### Downloading the dataset + +First, you need to download the [dog and croissants classification dataset](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/beginner/DogCroissants.zip) used in this case, which has two categories, dog and croissants, and each class has about 150 training images, 20 verification images, and 1 inference image. 
+ +The specific dataset is as follows: + +![datset-dog](https://gitee.com/mindspore/docs/raw/tutorials-develop/tutorials/source_zh_cn/beginner/images/datset_dog.png) + +Use the `DownLoad` interface in MindSpore Vision to download and extract the dataset to the specified path, and the sample code is as follows: + +```python +from mindvision.dataset import DownLoad + +dataset_url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/beginner/DogCroissants.zip" +path = "./datasets" + +dl = DownLoad() +# Download and extract the dataset +dl.download_and_extract_archive(dataset_url, path) +``` + +The directory structure of the dataset is as follows: + +```text +datasets +└── DogCroissants + ├── infer + │ ├── croissants.jpg + │ └── dog.jpg + ├── train + │ ├── croissants + │ └── dog + └── val + ├── croissants + └── dog +``` + +### Loading the Dataset + +Define the `create_dataset` function to load the dog and croissants dataset, perform image enhancement operations on the dataset, and set the dataset batch_size size. + +```python +import mindspore.dataset as ds +import mindspore.dataset.vision.c_transforms as transforms + +def create_dataset(path, batch_size=10, train=True, image_size=224): + dataset = ds.ImageFolderDataset(path, num_parallel_workers=8, class_indexing={"croissants": 0, "dog": 1}) + + # Image augmentation operation + mean = [0.485 * 255, 0.456 * 255, 0.406 * 255] + std = [0.229 * 255, 0.224 * 255, 0.225 * 255] + if train: + trans = [ + transforms.RandomCropDecodeResize(image_size, scale=(0.08, 1.0), ratio=(0.75, 1.333)), + transforms.RandomHorizontalFlip(prob=0.5), + transforms.Normalize(mean=mean, std=std), + transforms.HWC2CHW() + ] + else: + trans = [ + transforms.Decode(), + transforms.Resize(256), + transforms.CenterCrop(image_size), + transforms.Normalize(mean=mean, std=std), + transforms.HWC2CHW() + ] + + dataset = dataset.map(operations=trans, input_columns="image", num_parallel_workers=8) + # Sets the size of the batch_size and discards if the number of samples last fetched is less than batch_size + dataset = dataset.batch(batch_size, drop_remainder=True) + return dataset +``` + +Load the training dataset and validation dataset for subsequent model training and validation. + +```python +# Load the training dataset +train_path = "./datasets/DogCroissants/train" +dataset_train = create_dataset(train_path, train=True) + +# Load the validation dataset +val_path = "./datasets/DogCroissants/val" +dataset_val = create_dataset(val_path, train=False) +``` + +## Model Training + +In this case, we use a pre-trained model to fine-tune the model on the classification dataset of the dog and croissants, and convert the trained CKPT model file to the MINDIR format for subsequent deployment on the phone side. + +> Model training currently only supports running in the Linux environment. + +### Principles of the MobileNet V2 Model + +MobileNet network is a lightweight CNN network focused on mobile, embedding or IoT devices proposed by the Google team in 2017. Compared to the traditional convolutional neural network, MobileNet network uses depthwise separable convolution idea in the premise of a small reduction in accuracy, which greatly reduces the model parameters and amount of operation. And the introduction of width coefficient and resolution coefficient makes the model meet the needs of different application scenarios. 
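To see why the depthwise separable convolution mentioned above reduces the parameter count so sharply, the following back-of-the-envelope comparison counts the weights of a single 3x3 convolution layer (biases ignored); the channel sizes are illustrative assumptions and are not taken from the MobileNet architecture itself.

```python
# Weight count of one 3x3 convolution layer, standard vs. depthwise separable
k, c_in, c_out = 3, 32, 64  # kernel size and channel counts (assumed for illustration)

standard = k * k * c_in * c_out          # ordinary convolution
separable = k * k * c_in + c_in * c_out  # depthwise 3x3 + pointwise 1x1 convolution

print(standard, separable, round(standard / separable, 1))  # 18432 2336 7.9
```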
+ +Since there is a large amount of loss when the Relu activation function processes low-dimensional feature information in the MobileNet network, the MobileNet V2 network proposes to use the inverted residual block and Linear Bottlenecks to design the network, to improve the accuracy of the model and make the optimized model smaller. + +![mobilenet](https://gitee.com/mindspore/docs/raw/tutorials-develop/tutorials/source_zh_cn/beginner/images/mobilenet.png) + +The Inverted residual block structure in the figure first uses 1x1 convolution for upswing, uses 3x3 DepthWise convolution, and finally uses 1x1 convolution for dimensionality reduction, which is in contrast to the Residual block structure. The Residual block first uses 1x1 convolution for dimensionality reduction, uses 3x3 convolution, and finally uses 1x1 convolution for upswing. + +> For detailed contents, refer to [MobileNet V2 thesis](https://arxiv.org/pdf/1801.04381.pdf). + +### Downloading the Pre-trained Model + +Download the [ckpt file of the MobileNetV2 pre-trained model](https://download.mindspore.cn/vision/classification/mobilenet_v2_1.0_224.ckpt) required for the case and the width coefficient of the pre-trained model, and the input image size is (224, 224). The downloaded pre-trained model is saved in the current directory. Use the `DownLoad` in MindSpore Vision to download the pre-trained model file to the current directory, and the sample code is as follows: + +```python +from mindvision.dataset import DownLoad + +models_url = "https://download.mindspore.cn/vision/classification/mobilenet_v2_1.0_224.ckpt" + +dl = DownLoad() +# Download the pre-trained model file +dl.download_url(models_url) +``` + +### MobileNet V2 Model Fine-tuning + +This chapter uses MobileNet V2 pretrained model for fine-tuning, and uses the classification dataset of the dog and croissants to retrain the model by deleting the last parameter of the 1x1 convolution layer for classification in the MobileNet V2 pretrained model, to update the model parameter. 
+ +```python +import mindspore.nn as nn +from mindspore.train import Model +from mindspore import load_checkpoint, load_param_into_net + +from mindvision.classification.models import mobilenet_v2 +from mindvision.engine.loss import CrossEntropySmooth + +# Build a model with a target classification number of 2 and an image input size of (224,224) +network = mobilenet_v2(num_classes=2, resize=224) + +# Save the model parameter in param_dict +param_dict = load_checkpoint("./mobilenet_v2_1.0_224.ckpt") + +# Obtain the parameter name of the last convolutional layer of the mobilenet_v2 network +filter_list = [x.name for x in network.head.classifier.get_parameters()] + +# Delete the last convolutional layer of the pre-trained model +def filter_ckpt_parameter(origin_dict, param_filter): + for key in list(origin_dict.keys()): + for name in param_filter: + if name in key: + print("Delete parameter from checkpoint: ", key) + del origin_dict[key] + break + +filter_ckpt_parameter(param_dict, filter_list) + +# Load the pre-trained model parameters as the network initialization weight +load_param_into_net(network, param_dict) + +# Define the optimizer +network_opt = nn.Momentum(params=network.trainable_params(), learning_rate=0.01, momentum=0.9) + +# Define the loss function +network_loss = CrossEntropySmooth(sparse=True, reduction="mean", smooth_factor=0.1, classes_num=2) + +# Define evaluation metrics +metrics = {"Accuracy": nn.Accuracy()} + +# Initialize the model +model = Model(network, loss_fn=network_loss, optimizer=network_opt, metrics=metrics) +``` + +```text +[WARNING] ME(375486:140361546602304,MainProcess): [mindspore/train/serialization.py:644] 2 parameters in the 'net' are not loaded, because they are not in the 'parameter_dict'. +[WARNING] ME(375486:140361546602304,MainProcess): [mindspore/train/serialization.py:646] head.classifier.weight is not loaded. +[WARNING] ME(375486:140361546602304,MainProcess): [mindspore/train/serialization.py:646] head.classifier.bias is not loaded. +Delete parameter from checkpoint: head.classifier.weight +Delete parameter from checkpoint: head.classifier.bias +Delete parameter from checkpoint: moments.head.classifier.weight +Delete parameter from checkpoint: moments.head.classifier.bias +``` + +> Due to the model fine-tuning, the above WARNING needs to remove the parameters of the last convolutional layer of the pre-trained model, so loading the pre-trained model will show that the `head.classifier` parameter is not loaded. The `head.classifier` parameter will use the initialization value when the model was built. + +### Model Training and Evaluation + +Train and evaluate the network, and use the `mindvision.engine.callback.ValAccMonitor` interface in MindSpore Vision to print the loss value and the evaluation accuracy of the training. After the training is completed, save the CKPT file with the highest evaluation accuracy, `best.ckpt`, in the current directory. 
+ +```python +from mindvision.engine.callback import ValAccMonitor +from mindspore.train.callback import TimeMonitor + +num_epochs = 10 + +# Model training and validation, after the training is completed, save the CKPT file with the highest evaluation accuracy, `best.ckpt`, in the current directory +model.train(num_epochs, + dataset_train, + callbacks=[ValAccMonitor(model, dataset_val, num_epochs), TimeMonitor()]) +``` + +```text +-------------------- +Epoch: [ 1 / 10], Train Loss: [0.388], Accuracy: 0.975 +epoch time: 7390.423 ms, per step time: 254.842 ms +-------------------- +Epoch: [ 2 / 10], Train Loss: [0.378], Accuracy: 0.975 +epoch time: 1876.590 ms, per step time: 64.710 ms +-------------------- +Epoch: [ 3 / 10], Train Loss: [0.372], Accuracy: 1.000 +epoch time: 2103.431 ms, per step time: 72.532 ms +-------------------- +Epoch: [ 4 / 10], Train Loss: [0.346], Accuracy: 1.000 +epoch time: 2246.303 ms, per step time: 77.459 ms +-------------------- +Epoch: [ 5 / 10], Train Loss: [0.376], Accuracy: 1.000 +epoch time: 2164.527 ms, per step time: 74.639 ms +-------------------- +Epoch: [ 6 / 10], Train Loss: [0.353], Accuracy: 1.000 +epoch time: 2191.490 ms, per step time: 75.569 ms +-------------------- +Epoch: [ 7 / 10], Train Loss: [0.414], Accuracy: 1.000 +epoch time: 2183.388 ms, per step time: 75.289 ms +-------------------- +Epoch: [ 8 / 10], Train Loss: [0.362], Accuracy: 1.000 +epoch time: 2219.950 ms, per step time: 76.550 ms +-------------------- +Epoch: [ 9 / 10], Train Loss: [0.354], Accuracy: 1.000 +epoch time: 2174.555 ms, per step time: 74.985 ms +-------------------- +Epoch: [ 10 / 10], Train Loss: [0.364], Accuracy: 1.000 +epoch time: 2190.957 ms, per step time: 75.550 ms +================================================================================ +End of validation the best Accuracy is: 1.000, save the best ckpt file in ./best.ckpt +``` + +### Visualizing Model Predictions + +Define the `visualize_model` function, use the model with the highest validation accuracy described above to make predictions about the input images and visualize the predictions. + +```python +import matplotlib.pyplot as plt +import numpy as np +from PIL import Image + +from mindspore import Tensor + +def visualize_model(path): + image = Image.open(path).convert("RGB") + image = image.resize((224, 224)) + plt.imshow(image) + + # Normalization processing + mean = np.array([0.485 * 255, 0.456 * 255, 0.406 * 255]) + std = np.array([0.229 * 255, 0.224 * 255, 0.225 * 255]) + image = np.array(image) + image = (image - mean) / std + image = image.astype(np.float32) + + # Image channel switches (h, w, c) to (c, h, w) + image = np.transpose(image, (2, 0, 1)) + + # Extend the data dimension to (1,c, h, w) + image = np.expand_dims(image, axis=0) + + # Define and load the network + net = mobilenet_v2(num_classes=2, resize=224) + param_dict = load_checkpoint("./best.ckpt") + load_param_into_net(net, param_dict) + model = Model(net) + + # Model prediction + pre = model.predict(Tensor(image)) + result = np.argmax(pre) + + class_name = {0: "Croissants", 1: "Dog"} + plt.title(f"Predict: {class_name[result]}") + return result + +image1 = "./datasets/DogCroissants/infer/croissants.jpg" +plt.figure(figsize=(15, 7)) +plt.subplot(1, 2, 1) +visualize_model(image1) + +image2 = "./datasets/DogCroissants/infer/dog.jpg" +plt.subplot(1, 2, 2) +visualize_model(image2) + +plt.show() +``` + +### Model Export + +After the model is trained, the network model (i.e. 
CKPT file) after the training is completed is converted to MindIR format for subsequent inference on the phone side. The `export` interface generates `mobilenet_v2_1.0_224.mindir` files in the current directory. + +```python +from mindspore import export, Tensor + +# Define and load the network parameters +net = mobilenet_v2(num_classes=2, resize=224) +param_dict = load_checkpoint("best.ckpt") +load_param_into_net(net, param_dict) + +# Export the model from the ckpt format to the MINDIR format +input_np = np.random.uniform(0.0, 1.0, size=[1, 3, 224, 224]).astype(np.float32) +export(net, Tensor(input_np), file_name="mobilenet_v2_1.0_224", file_format="MINDIR") +``` + +## Inference and Deployment on the Phone Side + +To implement the inference function of the model file on the phone side, the steps are as follows: + +- Convert file format: Convert MindIR file format to the MindSpore Lite recognizable file on the Android phone; + +- Application deployment: Deploy the app APK on the phone side, that is, download a MindSpore Vision suite Android APK; and + +- Application experience: After finally importing the ms model file to the phone side, experience the recognition function of the dog and croissants. + +### Converting the file format + +Use the [conversion tool](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html) applied on the use side, and convert the mobilenet_v2_1.0_224.mindir file generated during the training process into a file format recognizable by the MindSpore Lite end-side inference framework mobilenet_v2_1.0_224.ms file. + +The specific model file format conversion method is as follows: + +1. Use MindSpore Lite Converter to convert file formats in the Linux, in the [Linux-x86_64 tool downloading link](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html). + +```shell +# Set the path of the package after downloading and extracting, {converter_path}is the path to the extracted toolkit, PACKAGE_ROOT_PATH is set +export PACKAGE_ROOT_PATH={converter_path} + +# Include the dynamic-link libraries required by the conversion tool in the environment variables LD_LIBRARY_PATH +export LD_LIBRARY_PATH=${PACKAGE_ROOT_PATH}/tools/converter/lib:${LD_LIBRARY_PATH} + +# Execute the conversion command in mindspore-lite-linux-x64/tools/converter/converter +./converter_lite --fmk=MINDIR --modelFile=mobilenet_v2_1.0_224.mindir --outputFile=mobilenet_v2_1.0_224 +``` + +2. Use MindSpore Lite Converter under Windows to convert file formats, in the [Windows-x64 tool downloading link](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) + +```shell +# Set the path of the package after downloading and extracting, {converter_path}is the path to the extracted toolkit, PACKAGE_ROOT_PATH is the environment variable that is set +set PACKAGE_ROOT_PATH={converter_path} + +# Include the dynamic-link libraries required by the conversion tool in the environment variables PATH +set PATH=%PACKAGE_ROOT_PATH%\tools\converter\lib;%PATH% + +# Execute the conversion command in mindspore-lite-win-x64\tools\converter\converter +call converter_lite --fmk=MINDIR --modelFile=mobilenet_v2_1.0_224.mindir --outputFile=mobilenet_v2_1.0_224 +``` + +After the conversion is successful, `CONVERTL RESULT SUCCESS:0` is printed, and the `mobilenet_v2_1.0_224.ms` file is generated in the current directory. + +> For other environments to download MindSpore Lite Converter, see [Download MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html). 
+ +### Application Deployment + +Download [Android apps APK](https://gitee.com/mindspore/vision/releases/) of the MindSpore Vision Suite and install the APK on your phone, whose app name appears as `MindSpore Vision`. + +> MindSpore Vision APK is mainly used as an example of a visual development tool, providing basic UI functions such as taking pictures and selecting pictures, and providing AI application DEMO such as classification, detection, and face recognition. + +After opening the APP and clicking on the `classification` module on the home page, you can click the middle button to take a picture and get the picture, or click the image button in the upper sidebar to select the picture album for the image classification function. + +![main](https://gitee.com/mindspore/docs/raw/tutorials-develop/tutorials/source_zh_cn/beginner/images/app1.png) + +By default, the MindSpore Vision `classification` module has a built-in universal AI network model to identify and classify images. + +![result](https://gitee.com/mindspore/docs/raw/tutorials-develop/tutorials/source_zh_cn/beginner/images/app2.png) + +### Application Experience + +Finally, the custom network model `mobilenet_v2_1.0_224.ms` trained above is deployed to the Android phone side to experience the recognition function of dog and croissants. + +#### Customizing the Model Label Files + +Customizing model deployment requires the following format to define the information for the network model, that is, customizing the label files, and creating a json format label file that must be named after `custom.json` on the local computer side. + +```text +"title": 'dog and croissants', +"file": 'mobilenet_v2_1.0_224.ms', +"label": ['croissants', 'dag'] +``` + +The Json label file should contain three Key value fields of `title`, `file`, and `label`, the meaning of which is as follows: + +- title: customize the module titles (dog and croissants); +- file: the name of the model file converted above; and +- label: `array` information for customizing the label. + +#### Labels and Model Files Deployed to the Phone + +By pressing the `classification` button on the home page of the `MindSpore Vision APK`, you can enter the customization classification mode and select the tags and model files that need to be deployed. + +In order to achieve the recognition function of the mobile phone between dog and croissants, the label file `custom.json` file and the model file `mobilenet_v2_1.0_224.ms` should be placed together in the specified directory on the mobile phone. Here to take the `Android/data/Download/` folder as an example, you need to put the tag file and the model file at the same time in the above mobile phone directory first, as shown in the figure, then click the customize button, and the system file function will pop up. You can click the open file in the upper left corner, and then find the directory address where the Json tag file and the model file are stored, and select the corresponding Json file. + +![step](https://gitee.com/mindspore/docs/raw/tutorials-develop/tutorials/source_zh_cn/beginner/images/app3.png) + +After the label and model file are deployed to the mobile phone, you can click the middle button to take a picture to get the picture, or click the image button in the upper sidebar to select the picture album for the image, and you can classify the dog and the croissants. 
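For reference, assuming the APK expects standard JSON syntax, a well-formed version of the `custom.json` example shown earlier would be a single object with double-quoted strings and a label array that matches the class names used during training (`croissants` and `dog`):

```text
{
    "title": "dog and croissants",
    "file": "mobilenet_v2_1.0_224.ms",
    "label": ["croissants", "dog"]
}
```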
+
+![result1](https://gitee.com/mindspore/docs/raw/tutorials-develop/tutorials/source_zh_cn/beginner/images/app4.png)
+
+> This chapter only covers the simple deployment process on the phone side. For more information about inference, please refer to [MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/index.html).
+
+
+
+
+
+
+
diff --git a/tutorials/source_en/beginner/quick_start.md b/tutorials/source_en/beginner/quick_start.md
new file mode 100644
index 0000000000000000000000000000000000000000..564e72ee80006ca97077de3ac4602ae316a9d949
--- /dev/null
+++ b/tutorials/source_en/beginner/quick_start.md
@@ -0,0 +1,206 @@
+# Quickstart: Handwritten Digit Recognition
+
+This section runs through the basic process of deep learning in MindSpore, using the LeNet5 network model as an example to implement common tasks in deep learning.
+
+## Downloading and Processing the Dataset
+
+Datasets are very important for model training, and good datasets can effectively improve training accuracy and efficiency. The MNIST dataset used in the example consists of 28×28 grayscale images of 10 classes. The training dataset contains 60,000 images, and the test dataset contains 10,000 images.
+
+![mnist](https://gitee.com/mindspore/docs/raw/tutorials-develop/tutorials/source_zh_cn/beginner/images/mnist.png)
+
+> You can download the dataset from the [MNIST dataset download page](http://yann.lecun.com/exdb/mnist/), unzip it, and place it according to the directory structure shown below.
+
+The MindSpore Vision suite provides the Mnist module for downloading and processing the MNIST dataset, and the following sample code downloads, extracts, and processes the dataset to a specified location:
+
+```python
+from mindvision.dataset import Mnist
+
+# Download and process the MNIST dataset
+download_train = Mnist(path="./mnist", split="train", batch_size=32, repeat_num=1, shuffle=True, resize=32, download=True)
+
+download_eval = Mnist(path="./mnist", split="test", batch_size=32, resize=32, download=True)
+
+dataset_train = download_train.run()
+dataset_eval = download_eval.run()
+```
+
+Parameter description:
+
+- path: dataset path.
+- split: dataset type, supporting train, test, and infer, which defaults to train.
+- batch_size: the data size set for each training batch, which defaults to 32.
+- repeat_num: the number of times the dataset is traversed during training, which defaults to 1.
+- shuffle: whether the dataset needs to be randomly shuffled (optional parameter).
+- resize: the size of the output image, which defaults to 32*32.
+- download: whether to download the dataset, which defaults to False.
+
+The directory structure of the downloaded dataset files is as follows:
+
+```text
+./mnist/
+├── test
+│   ├── t10k-images-idx3-ubyte
+│   └── t10k-labels-idx1-ubyte
+└── train
+    ├── train-images-idx3-ubyte
+    └── train-labels-idx1-ubyte
+```
+
+## Building the Model
+
+According to the network structure of LeNet, LeNet has 7 layers excluding the input layer: 2 convolutional layers, 2 sub-sampling layers, and 3 fully connected layers.
+
+![](https://gitee.com/mindspore/docs/raw/tutorials-develop/tutorials/source_zh_cn/beginner/images/lenet.png)
+
+The MindSpore Vision suite provides the LeNet network model interface `lenet`, which defines the network model as follows:
+
+```python
+from mindvision.classification.models import lenet
+
+network = lenet(num_classes=10, pretrained=False)
+```
+
+## Defining the Loss Function and the Optimizer
+
+To train a neural network model, you need to define a loss function and an optimizer function.
+
+- The loss function here uses the cross-entropy loss function `SoftmaxCrossEntropyWithLogits`.
+- The optimizer here uses `Momentum`.
+
+```python
+import mindspore.nn as nn
+from mindspore.train import Model
+
+# Define the loss function
+net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
+
+# Define the optimizer function
+net_opt = nn.Momentum(network.trainable_params(), learning_rate=0.01, momentum=0.9)
+```
+
+## Training and Saving the Model
+
+Before starting training, you need to declare whether MindSpore should save intermediate processes and results during training. Therefore, the `ModelCheckpoint` interface is used to save the network model and parameters for subsequent fine-tuning.
+
+```python
+from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
+
+# Set the model saving parameters
+config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10)
+
+# Apply the model saving parameters
+ckpoint = ModelCheckpoint(prefix="lenet", directory="./lenet", config=config_ck)
+```
+
+The `model.train` interface provided by MindSpore makes it easy to train the network, and `LossMonitor` monitors the change of the `loss` value during training.
+
+```python
+from mindvision.engine.callback import LossMonitor
+
+# Initialize the model parameters
+model = Model(network, loss_fn=net_loss, optimizer=net_opt, metrics={'accuracy'})
+
+# Train the network model
+model.train(10, dataset_train, callbacks=[ckpoint, LossMonitor(0.01, 1875)])
+```
+
+```text
+Epoch:[ 0/ 10], step:[ 1875/ 1875], loss:[0.314/0.314], time:2237.313 ms, lr:0.01000
+Epoch time: 3577.754 ms, per step time: 1.908 ms, avg loss: 0.314
+Epoch:[ 1/ 10], step:[ 1875/ 1875], loss:[0.031/0.031], time:1306.982 ms, lr:0.01000
+Epoch time: 1307.792 ms, per step time: 0.697 ms, avg loss: 0.031
+Epoch:[ 2/ 10], step:[ 1875/ 1875], loss:[0.007/0.007], time:1324.625 ms, lr:0.01000
+Epoch time: 1325.340 ms, per step time: 0.707 ms, avg loss: 0.007
+Epoch:[ 3/ 10], step:[ 1875/ 1875], loss:[0.021/0.021], time:1396.733 ms, lr:0.01000
+Epoch time: 1397.495 ms, per step time: 0.745 ms, avg loss: 0.021
+Epoch:[ 4/ 10], step:[ 1875/ 1875], loss:[0.028/0.028], time:1594.762 ms, lr:0.01000
+Epoch time: 1595.549 ms, per step time: 0.851 ms, avg loss: 0.028
+Epoch:[ 5/ 10], step:[ 1875/ 1875], loss:[0.007/0.007], time:1242.175 ms, lr:0.01000
+Epoch time: 1242.928 ms, per step time: 0.663 ms, avg loss: 0.007
+Epoch:[ 6/ 10], step:[ 1875/ 1875], loss:[0.033/0.033], time:1199.938 ms, lr:0.01000
+Epoch time: 1200.627 ms, per step time: 0.640 ms, avg loss: 0.033
+Epoch:[ 7/ 10], step:[ 1875/ 1875], loss:[0.175/0.175], time:1228.845 ms, lr:0.01000
+Epoch time: 1229.548 ms, per step time: 0.656 ms, avg loss: 0.175
+Epoch:[ 8/ 10], step:[ 1875/ 1875], loss:[0.009/0.009], time:1237.200 ms, lr:0.01000
+Epoch time: 1237.969 ms, per step time: 0.660 ms, avg loss: 0.009
+Epoch:[ 9/ 10], step:[ 1875/ 1875], loss:[0.000/0.000], time:1287.693 ms, lr:0.01000
+Epoch time: 1288.413 ms, per step time: 0.687 ms, avg loss: 0.000
+```
+
+The loss value is printed during training and will fluctuate, but in general it gradually decreases while the accuracy gradually increases. The loss values obtained by different runs have a certain randomness and are not necessarily exactly the same.
+
+Validate the generalization capability of the trained model on the test dataset:
+
+1. Use the `model.eval` interface to read in the test dataset.
+2. Use the saved model parameters for inference.
+
+```python
+acc = model.eval(dataset_eval)
+
+print("{}".format(acc))
+```
+
+```text
+{'accuracy': 0.9903846153846154}
+```
+
+The model accuracy can be seen in the printed information. In the example it reaches more than 95%, which indicates that the model quality is good. As the number of network iterations increases, the model accuracy increases further.
+
+## Loading the Model
+
+```python
+from mindspore import load_checkpoint, load_param_into_net
+
+# Load the saved model for testing
+param_dict = load_checkpoint("./lenet/lenet-1_1875.ckpt")
+# Load the parameters into the network
+load_param_into_net(network, param_dict)
+```
+
+```text
+[]
+```
+
+> For more information about loading a model in MindSpore, see [Loading the Model](https://www.mindspore.cn/tutorials/en/master/save_load_model.html#loading-the-model).
+
+## Validating the Model
+
+Use the generated model to predict the classification of sample images. The procedure is as follows:
+
+> The predicted images are selected randomly, and the results may be different each time.
+
+```python
+import numpy as np
+from mindspore import Tensor
+import matplotlib.pyplot as plt
+
+mnist = Mnist("./mnist", split="train", batch_size=6, resize=32)
+dataset_infer = mnist.run()
+ds_test = dataset_infer.create_dict_iterator()
+data = next(ds_test)
+images = data["image"].asnumpy()
+labels = data["label"].asnumpy()
+
+plt.figure()
+for i in range(1, 7):
+    plt.subplot(2, 3, i)
+    plt.imshow(images[i-1][0], interpolation="None", cmap="gray")
+plt.show()
+
+# Predict the classification of the images by using the model.predict function
+output = model.predict(Tensor(data['image']))
+predicted = np.argmax(output.asnumpy(), axis=1)
+
+# Output the predicted classification versus the actual classification
+print(f'Predicted: "{predicted}", Actual: "{labels}"')
+```
+
+![img](data:image/png;base64,...)
+
+```text
+Predicted: "[4 6 2 3 5 1]", Actual: "[4 6 2 3 5 1]"
+```
+
+As you can see from the printed results above, the predicted values are exactly the same as the target values.
\ No newline at end of file