diff --git a/docs/en/docs/Kernel/tiered-reliability-memory-management-user-guide.md b/Archive-en/Kernel/tiered-reliability-memory-management-user-guide.md similarity index 60% rename from docs/en/docs/Kernel/tiered-reliability-memory-management-user-guide.md rename to Archive-en/Kernel/tiered-reliability-memory-management-user-guide.md index 47598fab7930fbc361a22a9f391c8f24a219dd4d..1abb912342cc3b1793fddb1b3fb02c54d6e5a5fd 100644 --- a/docs/en/docs/Kernel/tiered-reliability-memory-management-user-guide.md +++ b/Archive-en/Kernel/tiered-reliability-memory-management-user-guide.md @@ -26,14 +26,14 @@ This section describes the general constraints of this feature. Each subfeature 1. During kernel-mode development, pay attention to the following points when allocating memory: - - If the memory allocation API supports the specified `gfp_flag`, only the memory allocation whose `gfp_flag` contains `__GFP_HIGHMEM` and `__GFP_MOVABLE` forcibly allocates the common memory range or redirects to the reliable memory range. Other `gfp_flags` do not intervene. + - If the memory allocation API supports the specified `gfp_flag`, only the memory allocation whose `gfp_flag` contains `__GFP_HIGHMEM` and `__GFP_MOVABLE` forcibly allocates the common memory range or redirects to the reliable memory range. Other `gfp_flags` do not intervene. - - High-reliability memory is allocated from slab, slub, and slob. (If the memory allocated at a time is greater than `KMALLOC_MAX_CACHE_SIZE` and `gfp_flag` is set to a common memory range, low reliable memory may be allocated.) + - High-reliability memory is allocated from slab, slub, and slob. (If the memory allocated at a time is greater than `KMALLOC_MAX_CACHE_SIZE` and `gfp_flag` is set to a common memory range, low reliable memory may be allocated.) 2. 
During user-mode development, pay attention to the following points when allocating memory: - - After the attribute of a common process is changed to a key process, the highly reliable memory is used only in the actual physical memory allocation phase (page fault). The attribute of the previously allocated memory does not change, and vice versa. Therefore, the memory allocated when a common process is started and changed to a key process may not be highly reliable memory. Whether the configuration takes effect can be verified by querying whether the physical address corresponding to the virtual address belongs to the highly reliable memory range. - - User-space allocators in libc and similar libraries (ptmalloc in glibc, tcmalloc, and the DPDK allocators) cache freed memory, such as glibc chunks, to improve performance. However, this caching makes the user-visible allocation behavior inconsistent with the kernel's memory allocation. When a common process becomes a key process, memory served from the allocator cache does not gain this feature (the feature takes effect only when the kernel actually allocates memory). + - After the attribute of a common process is changed to a key process, the highly reliable memory is used only in the actual physical memory allocation phase (page fault). The attribute of the previously allocated memory does not change, and vice versa. Therefore, the memory allocated when a common process is started and changed to a key process may not be highly reliable memory. Whether the configuration takes effect can be verified by querying whether the physical address corresponding to the virtual address belongs to the highly reliable memory range. + - User-space allocators in libc and similar libraries (ptmalloc in glibc, tcmalloc, and the DPDK allocators) cache freed memory, such as glibc chunks, to improve performance. However, this caching makes the user-visible allocation behavior inconsistent with the kernel's memory allocation. When a common process becomes a key process, memory served from the allocator cache does not gain this feature (the feature takes effect only when the kernel actually allocates memory). 3. 
When an upper-layer service applies for memory, if the highly reliable memory is insufficient (triggering the native min waterline of the zone) or the corresponding limit is triggered, the page cache is preferentially released to attempt to reclaim the highly reliable memory. If the memory still cannot be allocated, the kernel selects OOM or fallback to the low reliable memory range based on the fallback switch to complete memory allocation. (Fallback indicates that when the memory of a memory management zone or node is insufficient, memory is allocated from other memory management zones or nodes.) @@ -41,34 +41,34 @@ This section describes the general constraints of this feature. Each subfeature 5. The following configuration files are introduced based on the usage of the user-mode highly reliable memory: - - **/proc/sys/vm/task_reliable_limit**: upper limit of the highly reliable memory used by key processes (including systemd). It contains anonymous pages and file pages. The SHMEM used by the process is also counted (included in anonymous pages). + - **/proc/sys/vm/task_reliable_limit**: upper limit of the highly reliable memory used by key processes (including systemd). It contains anonymous pages and file pages. The SHMEM used by the process is also counted (included in anonymous pages). - - **/proc/sys/vm/reliable_pagecache_max_bytes**: soft upper limit of the highly reliable memory used by the global page cache. The number of highly reliable page caches used by common processes is limited. By default, the system does not limit the highly reliable memory used by page caches. This restriction does not apply to scenarios such as highly reliable processes and file system metadata. Regardless of whether fallback is enabled, when a common process triggers the upper limit, the low reliable memory is allocated by default. If the low reliable memory cannot be allocated, the native process is used. 
+ - **/proc/sys/vm/reliable_pagecache_max_bytes**: soft upper limit of the highly reliable memory used by the global page cache. The number of highly reliable page caches used by common processes is limited. By default, the system does not limit the highly reliable memory used by page caches. This restriction does not apply to scenarios such as highly reliable processes and file system metadata. Regardless of whether fallback is enabled, when a common process triggers the upper limit, the low reliable memory is allocated by default. If the low reliable memory cannot be allocated, the native process is used. - - **/proc/sys/vm/shmem_reliable_bytes_limit**: soft upper limit of the highly reliable memory used by the global SHMEM. It limits the amount of highly reliable memory used by the SHMEM of common processes. By default, the system does not limit the amount of highly reliable memory used by SHMEM. High-reliability processes are not subject to this restriction. When fallback is disabled, if a common process triggers the upper limit, memory allocation fails, but OOM does not occur (consistent with the native process). + - **/proc/sys/vm/shmem_reliable_bytes_limit**: soft upper limit of the highly reliable memory used by the global SHMEM. It limits the amount of highly reliable memory used by the SHMEM of common processes. By default, the system does not limit the amount of highly reliable memory used by SHMEM. High-reliability processes are not subject to this restriction. When fallback is disabled, if a common process triggers the upper limit, memory allocation fails, but OOM does not occur (consistent with the native process). - If the above limits are reached, memory allocation fallback or OOM may occur. + If the above limits are reached, memory allocation fallback or OOM may occur. - Memory allocation caused by page faults generated by key processes in the TMPFS or page cache may trigger multiple limits. 
For details about the interaction between multiple limits, see the following table. + Memory allocation caused by page faults generated by key processes in the TMPFS or page cache may trigger multiple limits. For details about the interaction between multiple limits, see the following table. - | Whether task_reliable_limit Is Reached| Whether reliable_pagecache_max_bytes or shmem_reliable_bytes_limit Is Reached| Memory Allocation Processing Policy | - | --------------------------- | ------------------------------------------------------------ | ------------------------------------------------ | - | Yes | Yes | The page cache is reclaimed first to meet the allocation requirements. Otherwise, fallback or OOM occurs.| - | Yes | No | The page cache is reclaimed first to meet the allocation requirements. Otherwise, fallback or OOM occurs.| - | No | No | High-reliability memory is allocated first. Otherwise, fallback or OOM occurs. | - | No | Yes | High-reliability memory is allocated first. Otherwise, fallback or OOM occurs. | + | Whether task_reliable_limit Is Reached| Whether reliable_pagecache_max_bytes or shmem_reliable_bytes_limit Is Reached| Memory Allocation Processing Policy | + | --------------------------- | ------------------------------------------------------------ | ------------------------------------------------ | + | Yes | Yes | The page cache is reclaimed first to meet the allocation requirements. Otherwise, fallback or OOM occurs.| + | Yes | No | The page cache is reclaimed first to meet the allocation requirements. Otherwise, fallback or OOM occurs.| + | No | No | High-reliability memory is allocated first. Otherwise, fallback or OOM occurs. | + | No | Yes | High-reliability memory is allocated first. Otherwise, fallback or OOM occurs. | - Key processes comply with `task_reliable_limit`. If `task_reliable_limit` is greater than `tmpfs` or `pagecachelimit`, page cache and TMPFS generated by key processes still use highly reliable memory. 
As a result, the highly reliable memory used by page cache and TMPFS is greater than the corresponding limit. + Key processes comply with `task_reliable_limit`. If `task_reliable_limit` is greater than `tmpfs` or `pagecachelimit`, page cache and TMPFS generated by key processes still use highly reliable memory. As a result, the highly reliable memory used by page cache and TMPFS is greater than the corresponding limit. - When `task_reliable_limit` is triggered, if the size of the highly reliable file cache is less than 4 MB, the file cache will not be reclaimed synchronously. If the highly reliable file cache is less than 4 MB when the page cache is generated, the allocation will fall back to the low reliable memory range. If the highly reliable file cache is greater than 4 MB, the page cache is reclaimed preferentially for allocation. However, when the size is close to 4 MB, direct cache reclamation is triggered more frequently. Because the lock overhead of direct cache reclamation is high, the CPU usage is high. In this case, the file read/write performance is close to the raw disk performance. + When `task_reliable_limit` is triggered, if the size of the highly reliable file cache is less than 4 MB, the file cache will not be reclaimed synchronously. If the highly reliable file cache is less than 4 MB when the page cache is generated, the allocation will fall back to the low reliable memory range. If the highly reliable file cache is greater than 4 MB, the page cache is reclaimed preferentially for allocation. However, when the size is close to 4 MB, direct cache reclamation is triggered more frequently. Because the lock overhead of direct cache reclamation is high, the CPU usage is high. In this case, the file read/write performance is close to the raw disk performance. 6. Even if the system has sufficient highly reliable memory, the allocation may fall back to the low reliable memory range. 
- - If the memory cannot be migrated to another node for allocation, the allocation falls back to the low reliable memory range of the current node. The common scenarios are as follows: - - If the memory allocation contains `__GFP_THISNODE` (for example, transparent huge page allocation), memory can be allocated only from the current node. If the highly reliable memory of the node does not meet the allocation requirements, the system attempts to allocate memory from the low reliable memory range of the memory node. - - A process runs on a node that contains common memory by running commands such as `taskset` and `numactl`. - - A process is scheduled to a common memory node under the native scheduling mechanism of the system memory. - - High-reliability memory allocation triggers the highly reliable memory usage threshold, which also causes fallback to the low reliable memory range. + - If the memory cannot be migrated to another node for allocation, the allocation falls back to the low reliable memory range of the current node. The common scenarios are as follows: + - If the memory allocation contains `__GFP_THISNODE` (for example, transparent huge page allocation), memory can be allocated only from the current node. If the highly reliable memory of the node does not meet the allocation requirements, the system attempts to allocate memory from the low reliable memory range of the memory node. + - A process runs on a node that contains common memory by running commands such as `taskset` and `numactl`. + - A process is scheduled to a common memory node under the native scheduling mechanism of the system memory. + - High-reliability memory allocation triggers the highly reliable memory usage threshold, which also causes fallback to the low reliable memory range. 7. If tiered-reliability memory fallback is disabled, highly reliable memory cannot be expanded to low reliable memory. 
As a result, user-mode applications may not be compatible with this feature in determining the memory usage, for example, determining the available memory based on MemFree. @@ -80,14 +80,13 @@ This section describes the general constraints of this feature. Each subfeature ### Scenarios 1. The default page size (`PAGE_SIZE`) is 4 KB. -2. The lower 4 GB memory of the NUMA node 0 must be highly reliable, and the highly reliable memory size and low reliable memory size must meet the kernel requirements. Otherwise, the system may fail to start. There is no requirement on the highly reliable memory size of other nodes. However, - if a node does not have highly reliable memory or the highly reliable memory is insufficient, the per-node management structure may be located in the highly reliable memory of other nodes (because the per-node management structure is a kernel data structure and needs to be located in the highly reliable memory zone). As a result, a kernel warning is generated, for example, `vmemmap_verify` alarms are generated and the performance is affected. +2. The lower 4 GB memory of the NUMA node 0 must be highly reliable, and the highly reliable memory size and low reliable memory size must meet the kernel requirements. Otherwise, the system may fail to start. There is no requirement on the highly reliable memory size of other nodes. However, if a node does not have highly reliable memory or the highly reliable memory is insufficient, the per-node management structure may be located in the highly reliable memory of other nodes (because the per-node management structure is a kernel data structure and needs to be located in the highly reliable memory zone). As a result, a kernel warning is generated, for example, `vmemmap_verify` alarms are generated and the performance is affected. 3. Some statistics (such as the total amount of highly reliable memory for TMPFS) of this feature are collected using the percpu technology, which causes extra overhead. 
To reduce the impact on performance, there is a certain error when calculating the sum. It is normal that the error is less than 10%. 4. Huge page limit: - - In the startup phase, static huge pages are low reliable memory. By default, static huge pages allocated during running are low reliable memory. If memory allocation occurs in the context of a key process, the allocated huge pages are highly reliable memory. - - In the transparent huge page (THP) scenario, if one of the 512 4 KB pages to be combined (2 MB for example) is a highly reliable page, the newly allocated 2 MB huge page uses highly reliable memory. That is, the THP uses more highly reliable memory. - - The allocation of the reserved 2 MB huge page complies with the native fallback process. If the current node lacks low reliable memory, the allocation falls back to the highly reliable range. - - In the startup phase, 2 MB huge pages are reserved. If no memory node is specified, the load is balanced to each memory node for huge page reservation. If a memory node lacks low reliable memory, highly reliable memory is used according to the native process. + - In the startup phase, static huge pages are low reliable memory. By default, static huge pages allocated during running are low reliable memory. If memory allocation occurs in the context of a key process, the allocated huge pages are highly reliable memory. + - In the transparent huge page (THP) scenario, if one of the 512 4 KB pages to be combined (2 MB for example) is a highly reliable page, the newly allocated 2 MB huge page uses highly reliable memory. That is, the THP uses more highly reliable memory. + - The allocation of the reserved 2 MB huge page complies with the native fallback process. If the current node lacks low reliable memory, the allocation falls back to the highly reliable range. + - In the startup phase, 2 MB huge pages are reserved. If no memory node is specified, the load is balanced to each memory node for huge page reservation. 
If a memory node lacks low reliable memory, highly reliable memory is used according to the native process. 5. Currently, only the normal system startup scenario is supported. In some abnormal scenarios, kernel startup may be incompatible with the memory tiering function, for example, the kdump startup phase. (Currently, kdump can be automatically disabled. In other scenarios, it needs to be disabled by upper-layer services.) 6. In the swap-in and swap-out, memory offline, KSM, cma, and gigantic page processes, the newly allocated page types are not considered based on the tiered-reliability memory. As a result, the page types may not be defined (for example, the highly reliable memory usage statistics are inaccurate and the reliability level of the allocated memory is not as expected). @@ -113,22 +112,22 @@ When a memory error occurs in the system, the OS overwrites the unallocated low - **High-reliability memory for key processes** - 1. The abuse of the `/proc/<pid>/reliable` API may cause excessive use of highly reliable memory. - 2. The `reliable` attribute of a user-mode process can be modified by using the proc API or directly inherited from its parent process only after the process is started. `systemd (pid=1)` uses highly reliable memory. Its `reliable` attribute is useless and is not inherited. The `reliable` attribute of kernel-mode threads is invalid. - 3. The program and data segments of processes use highly reliable memory. If the highly reliable memory is insufficient, low reliable memory is used for startup. - 4. Common processes also use highly reliable memory in some scenarios, such as HugeTLB, page cache, vDSO, and TMPFS. + 1. The abuse of the `/proc/<pid>/reliable` API may cause excessive use of highly reliable memory. + 2. The `reliable` attribute of a user-mode process can be modified by using the proc API or directly inherited from its parent process only after the process is started. `systemd (pid=1)` uses highly reliable memory. Its `reliable` attribute is useless and is not inherited. The `reliable` attribute of kernel-mode threads is invalid. + 3. The program and data segments of processes use highly reliable memory. If the highly reliable memory is insufficient, low reliable memory is used for startup. + 4. Common processes also use highly reliable memory in some scenarios, such as HugeTLB, page cache, vDSO, and TMPFS. - **Overwrite of unallocated memory** - The overwrite of the unallocated memory can be executed only once and does not support concurrent operations. If this feature is executed, it will have the following impacts: + The overwrite of the unallocated memory can be executed only once and does not support concurrent operations. If this feature is executed, it will have the following impacts: - 1. This feature takes a long time. When one CPU of each node is occupied by the overwrite thread, other tasks cannot be scheduled on the CPU. - 2. During the overwrite process, the zone lock needs to be obtained. Other service processes need to wait until the overwrite is complete. As a result, the memory may not be allocated in time. - 3. In the case of concurrent execution, queuing is blocked, resulting in a longer delay. + 1. This feature takes a long time. When one CPU of each node is occupied by the overwrite thread, other tasks cannot be scheduled on the CPU. + 2. During the overwrite process, the zone lock needs to be obtained. Other service processes need to wait until the overwrite is complete. As a result, the memory may not be allocated in time. + 3. In the case of concurrent execution, queuing is blocked, resulting in a longer delay. - If the machine performance is poor, the kernel RCU stall or soft lockup alarm may be triggered, and the process memory allocation may be blocked. Therefore, this feature can be used only on physical machines if necessary. There is a high probability that the preceding problem occurs on VMs. 
+ If the machine performance is poor, the kernel RCU stall or soft lockup alarm may be triggered, and the process memory allocation may be blocked. Therefore, this feature can be used only on physical machines if necessary. There is a high probability that the preceding problem occurs on VMs. - The following table lists the reference data of physical machines. (The actual time required depends on the hardware performance and system load.) + The following table lists the reference data of physical machines. (The actual time required depends on the hardware performance and system load.) Table 1 Test data when the TaiShan 2280 V2 server is unloaded @@ -150,102 +149,102 @@ This sub-feature provides multiple APIs. You only need to perform steps 1 to 6 t 4. After the startup, you can check whether memory tiering is enabled based on the startup log. If it is enabled, the following information is displayed: - ```shell - mem reliable: init succeed, mirrored memory - ``` + ```shell + mem reliable: init succeed, mirrored memory + ``` 5. The physical address range corresponding to the highly reliable memory can be queried in the startup log. Observe the attributes in the memory map reported by the EFI. The memory range with `MR` is the highly reliable memory range. The following is an excerpt of the startup log. The memory range `mem06` is the highly reliable memory, and `mem07` is the low reliable memory. Their physical address ranges are also listed (the highly and low reliable memory address ranges cannot be directly queried in other modes). 
- ```text - [ 0.000000] efi: mem06: [Conventional Memory| |MR| | | | | | |WB| | | ] range=[0x0000000100000000-0x000000013fffffff] (1024MB) - [ 0.000000] efi: mem07: [Conventional Memory| | | | | | | | |WB| | | ] range=[0x0000000140000000-0x000000083eb6cfff] (28651MB) - ``` + ```text + [ 0.000000] efi: mem06: [Conventional Memory| |MR| | | | | | |WB| | | ] range=[0x0000000100000000-0x000000013fffffff] (1024MB) + [ 0.000000] efi: mem07: [Conventional Memory| | | | | | | | |WB| | | ] range=[0x0000000140000000-0x000000083eb6cfff] (28651MB) + ``` 6. During kernel-mode development, a page struct page can be determined based on the zone where the page is located. `ZONE_MOVABLE` indicates a low reliable memory zone. If the zone ID is smaller than `ZONE_MOVABLE`, the zone is a highly reliable memory zone. The following is an example: - ```c - bool page_reliable(struct page *page) - { - if (!mem_reliable_status() || !page) - return false; - return page_zonenum(page) < ZONE_MOVABLE; - } - ``` + ```c + bool page_reliable(struct page *page) + { + if (!mem_reliable_status() || !page) + return false; + return page_zonenum(page) < ZONE_MOVABLE; + } + ``` - In addition, the provided APIs are classified based on their functions: + In addition, the provided APIs are classified based on their functions: - 1. **Checking whether the reliability function is enabled at the code layer**: In the kernel module, use the following API to check whether the tiered-reliability memory management function is enabled. If `true` is returned, the function is enabled. If `false` is returned, the function is disabled. + 1. **Checking whether the reliability function is enabled at the code layer**: In the kernel module, use the following API to check whether the tiered-reliability memory management function is enabled. If `true` is returned, the function is enabled. If `false` is returned, the function is disabled. 
- ```c - #include - bool mem_reliable_status(void); - ``` + ```c + #include + bool mem_reliable_status(void); + ``` - 2. **Memory hot swap**: If the kernel enables the memory hot swap operation (Logical Memory hot-add), the highly and low reliable memories also support this operation. The operation unit is the memory block, which is the same as the native process. + 2. **Memory hot swap**: If the kernel enables the memory hot swap operation (Logical Memory hot-add), the highly and low reliable memories also support this operation. The operation unit is the memory block, which is the same as the native process. - ```shell - # Bring the memory online to the highly reliable memory range. - echo online_kernel > /sys/devices/system/memory/auto_online_blocks - # Bring the memory online to the low reliable memory range. - echo online_movable > /sys/devices/system/memory/auto_online_blocks - ``` + ```shell + # Bring the memory online to the highly reliable memory range. + echo online_kernel > /sys/devices/system/memory/auto_online_blocks + # Bring the memory online to the low reliable memory range. + echo online_movable > /sys/devices/system/memory/auto_online_blocks + ``` - 3. **Dynamically disabling a tiered management function**: The long type is used to determine whether to enable or disable the tiered-reliability memory management function based on each bit. + 3. **Dynamically disabling a tiered management function**: The long type is used to determine whether to enable or disable the tiered-reliability memory management function based on each bit. + -`bit0`: enables tiered-reliability memory management. + -`bit1`: disables fallback to the low reliable memory range. + -`bit2`: disables TMPFS to use highly reliable memory. + -`bit3`: disables the page cache to use highly reliable memory. - - `bit0`: enables tiered-reliability memory management. - - `bit1`: disables fallback to the low reliable memory range. - - `bit2`: disables TMPFS to use highly reliable memory. 
- - `bit3`: disables the page cache to use highly reliable memory. + Other bits are reserved for extension. If you need to change the value, call the following proc API (the permission is 600). The value range is 0-15. (The subsequent functions are processed only when `bit 0` of the general function is `1`. Otherwise, all functions are disabled.) - Other bits are reserved for extension. If you need to change the value, call the following proc API (the permission is 600). The value range is 0-15. (The subsequent functions are processed only when `bit 0` of the general function is `1`. Otherwise, all functions are disabled.) + ```shell + echo 15 > /proc/sys/vm/reliable_debug + # All functions are disabled because bit0 is 0. + echo 14 > /proc/sys/vm/reliable_debug + ``` - ```shell - echo 15 > /proc/sys/vm/reliable_debug - # All functions are disabled because bit0 is 0. - echo 14 > /proc/sys/vm/reliable_debug - ``` + This command can only be used to disable the function. This command cannot be used to enable a function that has been disabled or is disabled during running. - This command can only be used to disable the function. This command cannot be used to enable a function that has been disabled or is disabled during running. + Note: This function is used for escape and is configured only when the tiered-reliability memory management feature needs to be disabled in abnormal scenarios or during commissioning. Do not use this function as a common function. - Note: This function is used for escape and is configured only when the tiered-reliability memory management feature needs to be disabled in abnormal scenarios or during commissioning. Do not use this function as a common function. + 4. **Viewing highly reliable memory statistics**: Call the native `/proc/meminfo` API. - 4. **Viewing highly reliable memory statistics**: Call the native `/proc/meminfo` API. + -`ReliableTotal`: total size of reliable memory managed by the kernel. 
+ -`ReliableUsed`: total size of reliable memory used by the system, including the reserved memory used in the system. + -`ReliableBuddyMem`: remaining reliable memory of the buddy system. + -`ReliableTaskUsed`: highly reliable memory used by systemd and key user processes, including anonymous pages and file pages. + -`ReliableShmem`: highly reliable memory usage of the shared memory, including the total highly reliable memory used by the shared memory, TMPFS, and rootfs. + -`ReliableFileCache`: highly reliable memory usage of the read/write cache. - - `ReliableTotal`: total size of reliable memory managed by the kernel. - - `ReliableUsed`: total size of reliable memory used by the system, including the reserved memory used in the system. - - `ReliableBuddyMem`: remaining reliable memory of the buddy system. - - `ReliableTaskUsed`: highly reliable memory used by systemd and key user processes, including anonymous pages and file pages. - - `ReliableShmem`: highly reliable memory usage of the shared memory, including the total highly reliable memory used by the shared memory, TMPFS, and rootfs. - - `ReliableFileCache`: highly reliable memory usage of the read/write cache. + 5. **Overwrite of unallocated memory**: This function requires the configuration item to be enabled. - 5. **Overwrite of unallocated memory**: This function requires the configuration item to be enabled. + Enable `CONFIG_CLEAR_FREELIST_PAGE` and add the startup parameter `clear_freelist`. Call the proc API. The value can only be `1` (the permission is 0200). - Enable `CONFIG_CLEAR_FREELIST_PAGE` and add the startup parameter `clear_freelist`. Call the proc API. The value can only be `1` (the permission is 0200). + ```shell + echo 1 > /proc/sys/vm/clear_freelist_pages + ``` - ```shell - echo 1 > /proc/sys/vm/clear_freelist_pages - ``` + Note: This feature depends on the startup parameter `clear_freelist`. The kernel matches only the prefix of the startup parameter. 
Therefore, this feature also takes effect for parameters with a misspelled suffix, such as `clear_freelisttt`. - Note: This feature depends on the startup parameter `clear_freelist`. The kernel matches only the prefix of the startup parameter. Therefore, this feature also takes effect for parameters with a misspelled suffix, such as `clear_freelisttt`. + To prevent misoperations, add the kernel module parameter `cfp_timeout_ms` to indicate the maximum execution duration of the overwrite function. If the overwrite function times out, the system exits even if the overwrite operation is not complete. The default value is `2000` ms (the permission is 0644). - To prevent misoperations, add the kernel module parameter `cfp_timeout_ms` to indicate the maximum execution duration of the overwrite function. If the overwrite function times out, the system exits even if the overwrite operation is not complete. The default value is `2000` ms (the permission is 0644). + ```shell + echo 500 > /sys/module/clear_freelist_page/parameters/cfp_timeout_ms # Set the timeout to 500 ms. + ``` - ```shell - echo 500 > /sys/module/clear_freelist_page/parameters/cfp_timeout_ms # Set the timeout to 500 ms. - ``` + 6. **Checking and modifying the high and low reliability attribute of the current process**: Call the `/proc/<pid>/reliable` API to check whether the process is a highly reliable process. If the attribute is written while the process is running, child processes created afterward inherit it. If the subprocess does not require the attribute, manually modify the subprocess attribute. The systemd and kernel threads do not support the read and write of the attribute. The value can be `0` or `1`. The default value is `0`, indicating a low reliable process (the permission is 0644). - 6. **Checking and modifying the high and low reliability attribute of the current process**: Call the `/proc/<pid>/reliable` API to check whether the process is a highly reliable process. If the attribute is written while the process is running, child processes created afterward inherit it. 
If the subprocess does not require the attribute, manually modify the subprocess attribute. The systemd and kernel threads do not support the read and write of the attribute. The value can be `0` or `1`. The default value is `0`, indicating a low reliable process (the permission is 0644). + ```shell + # Change the process whose PID is 1024 to a highly reliable process. After the change, the process applies for memory from the highly reliable memory range. If the memory fails to be allocated, the allocation may fall back to the low reliable memory range. + echo 1 > /proc/1024/reliable + ``` - ```shell - # Change the process whose PID is 1024 to a highly reliable process. After the change, the process applies for memory from the highly reliable memory range. If the memory fails to be allocated, the allocation may fall back to the low reliable memory range. - echo 1 > /proc/1024/reliable - ``` + 7. **Setting the upper limit of highly reliable memory requested by user-mode processes**: Call `/proc/sys/vm/task_reliable_limit` to modify the upper limit of highly reliable memory requested by user-mode processes. The value range is \[`ReliableTaskUsed`, `ReliableTotal`], and the unit is byte (the permission is 0644). - 7. **Setting the upper limit of highly reliable memory requested by user-mode processes**: Call `/proc/sys/vm/task_reliable_limit` to modify the upper limit of highly reliable memory requested by user-mode processes. The value range is \[`ReliableTaskUsed`, `ReliableTotal`], and the unit is byte (the permission is 0644). Notes: - - - The default value is `ulong_max`, indicating that there is no limit. - - If the value is `0`, the reliable process cannot use the highly reliable memory. In fallback mode, the allocation falls back to the low reliable memory range. Otherwise, OOM occurs. - - If the value is not `0` and the limit is triggered, the fallback function is enabled. The allocation falls back to the low reliable memory range. 
If the fallback function is disabled, OOM is returned. + Notes: + - The default value is `ulong_max`, indicating that there is no limit. + - If the value is `0`, the reliable process cannot use the highly reliable memory. In fallback mode, the allocation falls back to the low reliable memory range. Otherwise, OOM occurs. + - If the value is not `0` and the limit is triggered, the fallback function is enabled. The allocation falls back to the low reliable memory range. If the fallback function is disabled, OOM is returned. ### Highly Reliable Memory for Read and Write Cache @@ -261,9 +260,9 @@ A page cache is also called a file cache. When Linux reads or writes files, the 4. FileCache statistics are first collected in the percpu cache. When the value in the cache exceeds the threshold, the cache is added to the entire system and then displayed in `/proc/meminfo`. `ReliableFileCache` does not have the preceding threshold in `/proc/meminfo`. As a result, the value of `ReliableFileCache` may be greater than that of `FileCache`. 5. Write cache scenarios are restricted by `dirty_limit` (restricted by `/proc/sys/vm/dirty_ratio`, indicating the percentage of dirty pages on a single memory node). If the threshold is exceeded, the current zone is skipped. For tiered-reliability memory, because highly and low reliable memories are in different zones, the write cache may trigger fallback of the local node and use the low reliable memory of the local node. You can run `echo 100 > /proc/sys/vm/dirty_ratio` to cancel the restriction. 6. The highly reliable memory feature for the read/write cache limits the page cache usage. The system performance is affected in the following scenarios: - - If the upper limit of the page cache is too small, the I/O increases and the system performance is affected. - - If the page cache is reclaimed too frequently, system freezing may occur.
- - If a large amount of page cache is reclaimed each time after the page cache exceeds the limit, system freezing may occur. + - If the upper limit of the page cache is too small, the I/O increases and the system performance is affected. + - If the page cache is reclaimed too frequently, system freezing may occur. + - If a large amount of page cache is reclaimed each time after the page cache exceeds the limit, system freezing may occur. #### Usage @@ -276,7 +275,7 @@ The function of limiting the page cache size depends on several proc APIs, which | API Name (Native/New) | Permission| Description | Default Value | | ------------------------------------ | ---- | ------------------------------------------------------------ | ------------------------------------------ | | `cache_reclaim_enable` (native) | 644 | Whether to enable the page cache restriction function.
Value range: `0` or `1`. If an invalid value is input, an error is returned.
Example: `echo 1 > cache_reclaim_enable`| 1 | -| `cache_limit_mbytes` (new) | 644 | Upper limit of the cache, in MB.
Value range: The minimum value is 0, indicating that the restriction function is disabled. The maximum value is the memory size in MB, for example, the value displayed by running the `free –m` command (the value of `MemTotal` in `meminfo` converted in MB).
Example: `echo 1024 \> cache_limit_mbytes`
Others: It is recommended that the cache upper limit be greater than or equal to half of the total memory. Otherwise, the I/O performance may be affected if the cache is too small.| 0 | +| `cache_limit_mbytes` (new) | 644 | Upper limit of the cache, in MB.
Value range: The minimum value is 0, indicating that the restriction function is disabled. The maximum value is the memory size in MB, for example, the value displayed by running the `free -m` command (the value of `MemTotal` in `meminfo` converted to MB).
Example: `echo 1024 \> cache_limit_mbytes`
Others: It is recommended that the cache upper limit be greater than or equal to half of the total memory. Otherwise, the I/O performance may be affected if the cache is too small.| 0 | | `cache_reclaim_s` (native) | 644 | Interval for triggering cache reclamation, in seconds. The system creates work queues based on the number of online CPUs. If there are *n* CPUs, the system creates *n* work queues. Each work queue performs reclamation every `cache_reclaim_s` seconds. This parameter is compatible with the CPU online and offline functions. If the CPU is offline, the number of work queues decreases. If the CPU is online, the number of work queues increases.
Value range: The minimum value is `0` (indicating that the periodic reclamation function is disabled) and the maximum value is `43200`. If an invalid value is input, an error is returned.
Example: `echo 120 \> cache_reclaim_s`
Others: You are advised to set the reclamation interval to several minutes (for example, 2 minutes). Otherwise, frequent reclamation may cause system freezing.| 0 | | `cache_reclaim_weight` (native) | 644 | Weight of each reclamation. Each CPU of the kernel expects to reclaim `32 x cache_reclaim_weight` pages each time. This weight applies to both reclamation triggered by the page upper limit and periodic page cache reclamation.
Value range: 1 to 100. If an invalid value is input, an error is returned.
Example: `echo 10 \> cache_reclaim_weight`
Others: You are advised to set this parameter to `10` or a smaller value. Otherwise, the system may freeze if too much memory is reclaimed at a time.| 1 | | `reliable_pagecache_max_bytes` (new) | 644 | Total amount of highly reliable memory in the page cache.
Value range: 0 to the maximum highly reliable memory, in bytes. You can call `/proc/meminfo` to query the maximum highly reliable memory. If an invalid value is input, an error is returned.
Example: `echo 4096000 \> reliable_pagecache_max_bytes`| Maximum value of the unsigned long type, indicating that the usage is not limited.| diff --git a/docs/en/docs/LLM/chatglm-cpp-user-guide.md b/Archive-en/LLM/chatglm-cpp-user-guide.md similarity index 100% rename from docs/en/docs/LLM/chatglm-cpp-user-guide.md rename to Archive-en/LLM/chatglm-cpp-user-guide.md diff --git a/Archive-en/LLM/figures/chatglm.png b/Archive-en/LLM/figures/chatglm.png new file mode 100644 index 0000000000000000000000000000000000000000..bad255b5fd2ade512d291cdd871c6f7f1262560a Binary files /dev/null and b/Archive-en/LLM/figures/chatglm.png differ diff --git a/docs/en/docs/LLM/figures/llama.png b/Archive-en/LLM/figures/llama.png similarity index 100% rename from docs/en/docs/LLM/figures/llama.png rename to Archive-en/LLM/figures/llama.png diff --git a/docs/en/docs/LLM/llama.cpp-user-guide.md b/Archive-en/LLM/llama.cpp-user-guide.md similarity index 100% rename from docs/en/docs/LLM/llama.cpp-user-guide.md rename to Archive-en/LLM/llama.cpp-user-guide.md diff --git a/docs/en/docs/LLM/LLM-user-guide.md b/Archive-en/LLM/llm-user-guide.md similarity index 100% rename from docs/en/docs/LLM/LLM-user-guide.md rename to Archive-en/LLM/llm-user-guide.md diff --git a/docs/en/docs/HCK/HCK-description-and-usage.md b/Archive-en/uncertain/HCK/HCK-description-and-usage.md similarity index 100% rename from docs/en/docs/HCK/HCK-description-and-usage.md rename to Archive-en/uncertain/HCK/HCK-description-and-usage.md diff --git a/docs/en/docs/memory-fabric/images/IntegratedDeployment.png b/Archive-en/uncertain/Memory-fabric/images/IntegratedDeployment.png similarity index 100% rename from docs/en/docs/memory-fabric/images/IntegratedDeployment.png rename to Archive-en/uncertain/Memory-fabric/images/IntegratedDeployment.png diff --git a/docs/en/docs/memory-fabric/memory-fabric-user-guide.md b/Archive-en/uncertain/Memory-fabric/memory-fabric-user-guide.md similarity index 100% rename from 
docs/en/docs/memory-fabric/memory-fabric-user-guide.md rename to Archive-en/uncertain/Memory-fabric/memory-fabric-user-guide.md diff --git a/docs/en/docs/NfsMultipath/faqs.md b/Archive-en/uncertain/NfsMultipath/faqs.md similarity index 100% rename from docs/en/docs/NfsMultipath/faqs.md rename to Archive-en/uncertain/NfsMultipath/faqs.md diff --git a/docs/en/docs/NfsMultipath/installation-and-deployment.md b/Archive-en/uncertain/NfsMultipath/installation-and-deployment.md similarity index 100% rename from docs/en/docs/NfsMultipath/installation-and-deployment.md rename to Archive-en/uncertain/NfsMultipath/installation-and-deployment.md diff --git a/docs/en/docs/NfsMultipath/introduction-to-nfs-multipathing.md b/Archive-en/uncertain/NfsMultipath/introduction-to-nfs-multipathing.md similarity index 100% rename from docs/en/docs/NfsMultipath/introduction-to-nfs-multipathing.md rename to Archive-en/uncertain/NfsMultipath/introduction-to-nfs-multipathing.md diff --git a/docs/en/docs/NfsMultipath/nfs-multipathing-user-guide.md b/Archive-en/uncertain/NfsMultipath/nfs-multipathing-user-guide.md similarity index 99% rename from docs/en/docs/NfsMultipath/nfs-multipathing-user-guide.md rename to Archive-en/uncertain/NfsMultipath/nfs-multipathing-user-guide.md index 7696f54025305a4ff0343922a5e0a3ec66e056e6..00d9177f503b3992f3b793fbfb9ec9b41a552ae3 100644 --- a/docs/en/docs/NfsMultipath/nfs-multipathing-user-guide.md +++ b/Archive-en/uncertain/NfsMultipath/nfs-multipathing-user-guide.md @@ -1,4 +1,3 @@ - # NFS Multipathing User Guide This document describes how to install, deploy, and use the multipathing feature of Network File System (NFS) clients. 
diff --git a/docs/en/docs/NfsMultipath/usage.md b/Archive-en/uncertain/NfsMultipath/usage.md similarity index 100% rename from docs/en/docs/NfsMultipath/usage.md rename to Archive-en/uncertain/NfsMultipath/usage.md diff --git a/docs/en/docs/Open-Source-Software-Notice/openEuler-Open-Source-Software-Notice.zip b/Archive-en/uncertain/Open-Source-Software-Notice/openEuler-Open-Source-Software-Notice.zip similarity index 100% rename from docs/en/docs/Open-Source-Software-Notice/openEuler-Open-Source-Software-Notice.zip rename to Archive-en/uncertain/Open-Source-Software-Notice/openEuler-Open-Source-Software-Notice.zip diff --git a/docs/en/docs/ROS/ROS.md b/Archive-en/uncertain/ROS/ROS.md similarity index 100% rename from docs/en/docs/ROS/ROS.md rename to Archive-en/uncertain/ROS/ROS.md diff --git a/docs/en/docs/ROS/appendix.md b/Archive-en/uncertain/ROS/appendix.md similarity index 100% rename from docs/en/docs/ROS/appendix.md rename to Archive-en/uncertain/ROS/appendix.md diff --git a/docs/en/docs/ROS/faqs.md b/Archive-en/uncertain/ROS/faqs.md similarity index 100% rename from docs/en/docs/ROS/faqs.md rename to Archive-en/uncertain/ROS/faqs.md diff --git a/docs/en/docs/ROS/figures/ROS-ROS2.png b/Archive-en/uncertain/ROS/figures/ROS-ROS2.png similarity index 100% rename from docs/en/docs/ROS/figures/ROS-ROS2.png rename to Archive-en/uncertain/ROS/figures/ROS-ROS2.png diff --git a/docs/en/docs/ROS/figures/ROS-demo.png b/Archive-en/uncertain/ROS/figures/ROS-demo.png similarity index 100% rename from docs/en/docs/ROS/figures/ROS-demo.png rename to Archive-en/uncertain/ROS/figures/ROS-demo.png diff --git a/docs/en/docs/ROS/figures/ROS-release.png b/Archive-en/uncertain/ROS/figures/ROS-release.png similarity index 100% rename from docs/en/docs/ROS/figures/ROS-release.png rename to Archive-en/uncertain/ROS/figures/ROS-release.png diff --git a/docs/en/docs/ROS/figures/ROS2-release.png b/Archive-en/uncertain/ROS/figures/ROS2-release.png similarity index 100% rename from 
docs/en/docs/ROS/figures/ROS2-release.png rename to Archive-en/uncertain/ROS/figures/ROS2-release.png diff --git a/docs/en/docs/ROS/figures/problem.png b/Archive-en/uncertain/ROS/figures/problem.png similarity index 100% rename from docs/en/docs/ROS/figures/problem.png rename to Archive-en/uncertain/ROS/figures/problem.png diff --git a/docs/en/docs/ROS/figures/ros-humble.png b/Archive-en/uncertain/ROS/figures/ros-humble.png similarity index 100% rename from docs/en/docs/ROS/figures/ros-humble.png rename to Archive-en/uncertain/ROS/figures/ros-humble.png diff --git a/docs/en/docs/ROS/figures/turtlesim.png b/Archive-en/uncertain/ROS/figures/turtlesim.png similarity index 100% rename from docs/en/docs/ROS/figures/turtlesim.png rename to Archive-en/uncertain/ROS/figures/turtlesim.png diff --git a/docs/en/docs/ROS/installation-and-deployment.md b/Archive-en/uncertain/ROS/installation-and-deployment.md similarity index 100% rename from docs/en/docs/ROS/installation-and-deployment.md rename to Archive-en/uncertain/ROS/installation-and-deployment.md diff --git a/docs/en/docs/ROS/introduction-to-ROS.md b/Archive-en/uncertain/ROS/introduction-to-ROS.md similarity index 100% rename from docs/en/docs/ROS/introduction-to-ROS.md rename to Archive-en/uncertain/ROS/introduction-to-ROS.md diff --git a/docs/en/docs/ROS/usage.md b/Archive-en/uncertain/ROS/usage.md similarity index 100% rename from docs/en/docs/ROS/usage.md rename to Archive-en/uncertain/ROS/usage.md diff --git a/docs/en/docs/SystemOptimization/big-data-tuning.md b/Archive-en/uncertain/SystemOptimization/big-data-tuning.md similarity index 80% rename from docs/en/docs/SystemOptimization/big-data-tuning.md rename to Archive-en/uncertain/SystemOptimization/big-data-tuning.md index cb958c1503dbfe5cfc37c34a0df91c175c971393..917b8854f281f098c3fc0a24a9a45371bb743d87 100644 --- a/docs/en/docs/SystemOptimization/big-data-tuning.md +++ b/Archive-en/uncertain/SystemOptimization/big-data-tuning.md @@ -20,9 +20,9 @@ This method 
applies to Kunpeng servers. For x86 servers, such as Intel servers, 4. Disable prefetch. - - On the BIOS, choose **Advanced** \> **MISC Config** and press **Enter**. + - On the BIOS, choose **Advanced** \> **MISC Config** and press **Enter**. - - Set **CPU Prefetching Configuration** to **Disabled** and press **F10**. + - Set **CPU Prefetching Configuration** to **Disabled** and press **F10**. ### Creating RAID 0 @@ -34,19 +34,19 @@ A RAID array can improve the overall storage and access performance of drives. I 1. Use the **storcli64_arm** file to check the RAID group creation. - ```shell - ./storcli64_arm /c0 show - ``` + ```shell + ./storcli64_arm /c0 show + ``` - Note: You can download the **storcli64_arm** file from the following URL and run it in any directory: . + Note: You can download the **storcli64_arm** file from the following URL and run it in any directory: . 2. Create a RAID 0 array. In the following example, the command is used to create a RAID 0 array for the second 1.2 TB drive. **c0** indicates the ID of the RAID controller card, and **r0** indicates RAID 0. Perform this operation on all drives except the system drive. - ```shell - ./storcli64_arm /c0 add vd r0 drives=65:1 - ``` + ```shell + ./storcli64_arm /c0 add vd r0 drives=65:1 + ``` - + ### Enabling the RAID Controller Card Cache @@ -82,28 +82,28 @@ Take the 1822 NIC as an example. The default value of **rx_buff** is **2** KB. W 1. Check the value of **rx_buff**. The default value is **2**. - ```shell - cat /sys/bus/pci/drivers/hinic/module/parameters/rx_buff - ``` + ```shell + cat /sys/bus/pci/drivers/hinic/module/parameters/rx_buff + ``` 2. Add the **hinic.conf** file to the **/etc/modprobe.d/** directory and change the value of **rx_buff** to **8**. - ```shell - options hinic rx_buff=8 - ``` + ```shell + options hinic rx_buff=8 + ``` 3. Mount the HiNIC driver again for the new parameters to take effect. 
- ```shell - rmmod hinic - modprobe hinic - ``` + ```shell + rmmod hinic + modprobe hinic + ``` 4. Check whether the **rx_buff** parameter is updated successfully. - ```shell - cat /sys/bus/pci/drivers/hinic/module/parameters/rx_buff - ``` + ```shell + cat /sys/bus/pci/drivers/hinic/module/parameters/rx_buff + ``` ### Adjusting the Ring Buffer @@ -115,30 +115,30 @@ Take the 1822 NIC as an example. The maximum ring buffer size of the NIC is **40 1. Check the default size of the ring buffer. Assume that the current NIC is named **enp131s0**. - ```shell - ethtool -g enp131s0 - ``` + ```shell + ethtool -g enp131s0 + ``` 2. Change the value of **Ring Buffer** to **4096**. - ```shell - ethtool -G enp131s0 rx 4096 tx 4096 - ``` + ```shell + ethtool -G enp131s0 rx 4096 tx 4096 + ``` 3. Confirm that the ring buffer value has been updated. - ```shell - ethtool -g enp131s0 - ``` + ```shell + ethtool -g enp131s0 + ``` - + 4. Reduce the quantity of queues. - ```shell - ethtool -L enp131s0 combined 4 - ethtool -l enp131s0 - ``` + ```shell + ethtool -L enp131s0 combined 4 + ethtool -l enp131s0 + ``` ### Enabling LRO @@ -150,23 +150,23 @@ Take the 1822 NIC as an example. The NIC supports large receive offload (LRO). Y 1. Check whether the value of the LRO parameter is set to **on**. The default value is **off**. Assume that the current NIC is named **enp131s0**. - ```shell - ethtool -k enp131s0 - ``` + ```shell + ethtool -k enp131s0 + ``` 2. Enable LRO. - ```shell - ethtool -K enp131s0 lro on - ``` - + ```shell + ethtool -K enp131s0 lro on + ``` + 3. Check whether the value of the LRO parameter is set to **on**. - ```shell - ethtool -k enp131s0 - ``` + ```shell + ethtool -k enp131s0 + ``` - + ### Binding NIC Interrupts to Cores @@ -182,83 +182,83 @@ To help the service network improve the capability of receiving and sending pack Run the following command: - ```shell - systemctl stop irqbalance.service # (Stop irqbalance. The setting will be invalid after the system restarts.) 
- systemctl disable irqbalance.service # (Disable irqbalance. The setting takes effect permanently.) - systemctl status irqbalance.service # (Check whether the irqbalance service is disabled.) - ``` + ```shell + systemctl stop irqbalance.service # (Stop irqbalance. The setting will be invalid after the system restarts.) + systemctl disable irqbalance.service # (Disable irqbalance. The setting takes effect permanently.) + systemctl status irqbalance.service # (Check whether the irqbalance service is disabled.) + ``` 2. Check the PCI device number of the NIC. Assume that the current NIC is named **enp131s0**. - ```shell - ethtool -i enp131s0 - ``` + ```shell + ethtool -i enp131s0 + ``` - + 3. Check the NUMA node to which the PCIe NIC connects. - ```shell - lspci -vvvs - ``` + ```shell + lspci -vvvs + ``` - + 4. Check the core range corresponding to the NUMA node. For example, for the Kunpeng 920 5250 processor, the core range can be 48 to 63. - + 5. Bind interrupts to cores. The Hi1822 NIC has 16 queues. Bind the interrupts to the 16 cores of the NUMA node to which the NIC connect (for example, cores 48 to 63 corresponding to NUMA node 1). - ```shell - bash smartIrq.sh - ``` + ```shell + bash smartIrq.sh + ``` - The script content is as follows: + The script content is as follows: - ```shell - #!/bin/bash - irq_list=(`cat /proc/interrupts | grep enp131s0 | awk -F: '{print $1}'`) - cpunum=48 # Change the value to the first core of the node. - for irq in ${irq_list[@]} - do - echo $cpunum > /proc/irq/$irq/smp_affinity_list - echo `cat /proc/irq/$irq/smp_affinity_list` - (( cpunum+=1 )) - done - ``` + ```shell + #!/bin/bash + irq_list=(`cat /proc/interrupts | grep enp131s0 | awk -F: '{print $1}'`) + cpunum=48 # Change the value to the first core of the node. + for irq in ${irq_list[@]} + do + echo $cpunum > /proc/irq/$irq/smp_affinity_list + echo `cat /proc/irq/$irq/smp_affinity_list` + (( cpunum+=1 )) + done + ``` 6. Check whether the core binding is successful. 
- ```shell - sh irqCheck.sh enp131s0 - ``` - - - - The script content is as follows: - - ```shell - #!/bin/bash - # NIC name - intf=$1 - log=irqSet-`date "+%Y%m%d-%H%M%S"`.log - # Number of available CPUs - cpuNum=$(cat /proc/cpuinfo |grep processor -c) - # RX and TX interrupt lists - irqListRx=$(cat /proc/interrupts | grep ${intf} | awk -F':' '{print $1}') - irqListTx=$(cat /proc/interrupts | grep ${intf} | awk -F':' '{print $1}') - # Bind the RX interrupt requests (IRQs). - for irqRX in ${irqListRx[@]} - do - cat /proc/irq/${irqRX}/smp_affinity_list - done - # Bind the TX IRQs. - for irqTX in ${irqListTx[@]} - do - cat /proc/irq/${irqTX}/smp_affinity_list - done - ``` + ```shell + sh irqCheck.sh enp131s0 + ``` + + + + The script content is as follows: + + ```shell + #!/bin/bash + # NIC name + intf=$1 + log=irqSet-`date "+%Y%m%d-%H%M%S"`.log + # Number of available CPUs + cpuNum=$(cat /proc/cpuinfo |grep processor -c) + # RX and TX interrupt lists + irqListRx=$(cat /proc/interrupts | grep ${intf} | awk -F':' '{print $1}') + irqListTx=$(cat /proc/interrupts | grep ${intf} | awk -F':' '{print $1}') + # Bind the RX interrupt requests (IRQs). + for irqRX in ${irqListRx[@]} + do + cat /proc/irq/${irqRX}/smp_affinity_list + done + # Bind the TX IRQs. + for irqTX in ${irqListTx[@]} + do + cat /proc/irq/${irqTX}/smp_affinity_list + done + ``` ## OS Tuning @@ -301,45 +301,45 @@ Comparison before and after the modification - Purpose - Use the single-queue (soft queue) mode for better performance during Spark tests. + Use the single-queue (soft queue) mode for better performance during Spark tests. - Method - Add **scsi_mod.use_blk_mq=0** to the kernel startup command line in **/etc/grub2-efi.cfg** and restart the system for the modification to take effect. + Add **scsi_mod.use_blk_mq=0** to the kernel startup command line in **/etc/grub2-efi.cfg** and restart the system for the modification to take effect. - + - Kernel I/O parameter configuration - ```shell - #! 
/bin/bash - - echo 3000 > /proc/sys/vm/dirty_expire_centisecs - echo 500 > /proc/sys/vm/dirty_writeback_centisecs - - echo 15000000 > /proc/sys/kernel/sched_wakeup_granularity_ns - echo 10000000 > /proc/sys/kernel/sched_min_granularity_ns - - systemctl start tuned - sysctl -w kernel.sched_autogroup_enabled=0 - sysctl -w kernel.numa_balancing=0 - - echo 11264 > /proc/sys/vm/min_free_kbytes - echo 60 > /proc/sys/vm/dirty_ratio - echo 5 > /proc/sys/vm/dirty_background_ratio - - list="b c d e f g h i j k l m" # Modify as required. - for i in $list - do - echo 1024 > /sys/block/sd$i/queue/max_sectors_kb - echo 32 > /sys/block/sd$i/device/queue_depth - echo 256 > /sys/block/sd$i/queue/nr_requests - echo mq-deadline > /sys/block/sd$i/queue/scheduler - echo 2048 > /sys/block/sd$i/queue/read_ahead_kb - echo 2 > /sys/block/sd$i/queue/rq_affinity - echo 0 > /sys/block/sd$i/queue/nomerges - done - ``` + ```shell + #! /bin/bash + + echo 3000 > /proc/sys/vm/dirty_expire_centisecs + echo 500 > /proc/sys/vm/dirty_writeback_centisecs + + echo 15000000 > /proc/sys/kernel/sched_wakeup_granularity_ns + echo 10000000 > /proc/sys/kernel/sched_min_granularity_ns + + systemctl start tuned + sysctl -w kernel.sched_autogroup_enabled=0 + sysctl -w kernel.numa_balancing=0 + + echo 11264 > /proc/sys/vm/min_free_kbytes + echo 60 > /proc/sys/vm/dirty_ratio + echo 5 > /proc/sys/vm/dirty_background_ratio + + list="b c d e f g h i j k l m" # Modify as required. 
+ for i in $list + do + echo 1024 > /sys/block/sd$i/queue/max_sectors_kb + echo 32 > /sys/block/sd$i/device/queue_depth + echo 256 > /sys/block/sd$i/queue/nr_requests + echo mq-deadline > /sys/block/sd$i/queue/scheduler + echo 2048 > /sys/block/sd$i/queue/read_ahead_kb + echo 2 > /sys/block/sd$i/queue/rq_affinity + echo 0 > /sys/block/sd$i/queue/nomerges + done + ``` ### Adapting JVM Parameters and Version @@ -370,21 +370,21 @@ Based on the basic Spark configuration values, obtain a group of executor parame - If you use Spark-Test-Tool to test SQL 1 to SQL 10, open the **script/spark-default.conf** file in the tool directory and add the following configuration items: - ```shell - yarn.executor.num 15 - yarn.executor.cores 19 - spark.executor.memory 44G - spark.driver.memory 36G - ``` + ```shell + yarn.executor.num 15 + yarn.executor.cores 19 + spark.executor.memory 44G + spark.driver.memory 36G + ``` - If you use HiBench to test the WordCount, TeraSort, Bayesian, and k-means scenarios, open the **conf/spark.conf** file in the tool directory and adjust the number of running cores and memory size based on the actual environment. - ```shell - yarn.executor.num 15 - yarn.executor.cores 19 - spark.executor.memory 44G - spark.driver.memory 36G - ``` + ```shell + yarn.executor.num 15 + yarn.executor.cores 19 + spark.executor.memory 44G + spark.driver.memory 36G + ``` ### Tuning Items for Dedicated Scenarios @@ -394,38 +394,38 @@ Based on the basic Spark configuration values, obtain a group of executor parame - Purpose - SQL 1 is an I/O-intensive scenario. You can tune I/O parameters for the optimal performance. + SQL 1 is an I/O-intensive scenario. You can tune I/O parameters for the optimal performance. - Method - + - Set the following I/O parameters. **sd$i** indicates the names of all drives that are tested. - + ```shell echo 128 > /sys/block/sd$i/queue/nr_requests echo 512 > /sys/block/sd$i/queue/read_ahead_kb ``` - + - Set the parameters of dirty pages in the memory. 
- + ```shell /proc/sys/vm/vm.dirty_expire_centisecs 500 /proc/sys/vm/vm.dirty_writeback_centisecs 100 ``` - + - Set the degree of parallelism is in **Spark-Test-Tool/script/spark-default.conf**. - + ```shell spark.sql.shuffle.partitions 350 spark.default.parallelism 580 ``` - + - In this scenario, set other parameters to the general tuning values in [Tuning Spark Application Parameters](#tuning-spark-application-parameters). - + ##### CPU-Intensive Scenarios: SQL 2 & SQL 7 - Purpose - ​SQL 2 and SQL 7 are CPU-intensive scenarios. You can tune Spark executor parameters for the optimal performance. + SQL 2 and SQL 7 are CPU-intensive scenarios. You can tune Spark executor parameters for the optimal performance. - Method @@ -458,41 +458,41 @@ Based on the basic Spark configuration values, obtain a group of executor parame - Purpose - SQL 3 is an I/O- and CPU-intensive scenario. You can tune Spark executor parameters and adjust I/O parameters for the optimal performance. + SQL 3 is an I/O- and CPU-intensive scenario. You can tune Spark executor parameters and adjust I/O parameters for the optimal performance. - Method - Based on the actual environment, adjust the number of running cores and memory size specified by Spark-Test-Tool in the **script/spark-default.conf** file to achieve the optimal performance. For example, for the Kunpeng 920 5220 processor, the following executor parameters are recommended for SQL 3. - + ```shell yarn.executor.num 30 yarn.executor.cores 6 spark.executor.memory 24G spark.driver.memory 36G ``` - + - Adjust the I/O prefetch value. **sd$i** indicates the names of all Spark drives. - + ```shell echo 4096 > /sys/block/sd$i/queue/read_ahead_kb ``` - + - Set the degree of parallelism is in **Spark-Test-Tool/script/spark-default.conf**. - + ```shell spark.sql.shuffle.partitions 150 spark.default.parallelism 360 ``` - + ##### CPU-Intensive Scenario: SQL 4 - Purpose - SQL 4 is a CPU-intensive scenario. 
You can tune Spark executor parameters and adjust I/O parameters for the optimal performance. + SQL 4 is a CPU-intensive scenario. You can tune Spark executor parameters and adjust I/O parameters for the optimal performance. - Method - Based on the actual environment, adjust the number of running cores and memory size specified by Spark-Test-Tool in the **script/spark-default.conf** file to achieve the optimal performance. For example, for the Kunpeng 920 5220 processor, the following executor parameters are recommended for SQL 4. + Based on the actual environment, adjust the number of running cores and memory size specified by Spark-Test-Tool in the **script/spark-default.conf** file to achieve the optimal performance. For example, for the Kunpeng 920 5220 processor, the following executor parameters are recommended for SQL 4. - Open the **script/spark-default.conf** file in the tool directory and add the following configuration items: @@ -533,12 +533,12 @@ Based on the basic Spark configuration values, obtain a group of executor parame - Purpose - WordCount is an I/O- and CPU-intensive scenario, where the mq-deadline algorithm and I/O parameter adjustment can deliver higher performance than that of the single-queue deadline scheduling algorithm. + WordCount is an I/O- and CPU-intensive scenario, where the mq-deadline algorithm and I/O parameter adjustment can deliver higher performance than that of the single-queue deadline scheduling algorithm. - Method - + - Modify the following configurations. **sd$i** indicates the names of all drives that are tested. 
- + ```shell echo mq-deadline > /sys/block/sd$i/queue/scheduler echo 512 > /sys/block/sd$i/queue/nr_requests @@ -547,75 +547,75 @@ Based on the basic Spark configuration values, obtain a group of executor parame echo 100 > /proc/sys/vm/dirty_writeback_centisecs echo 5 > /proc/sys/vm/dirty_background_ratio ``` - + - In this scenario, you can set the quantity of partitions and parallelism to three to five times of the total cluster core quantity for data sharding. This reduces the size of a single task file for better performance. You can use the following shard settings: - + ```shell spark.sql.shuffle.partitions 300 spark.default.parallelism 600 ``` - + - Based on the actual environment, adjust the number of running cores and memory size specified by HiBench in the configuration file to achieve the optimal performance. For example, for the Kunpeng 920 5220 processor, the following executor parameters are recommended for WordCount. - + ```shell yarn.executor.num 51 yarn.executor.cores 6 spark.executor.memory 13G spark.driver.memory 36G ``` - + ##### I/O- and CPU-Intensive Scenario: TeraSort - Purpose - ​TeraSort is an I/O- and CPU-intensive scenario. You can adjust I/O parameters and Spark executor parameters for the optimal performance. In addition, TeraSort requires high network bandwidth. You can tune network parameters to improve system performance. + TeraSort is an I/O- and CPU-intensive scenario. You can adjust I/O parameters and Spark executor parameters for the optimal performance. In addition, TeraSort requires high network bandwidth. You can tune network parameters to improve system performance. - Method - Modify the following configurations. **sd$i** indicates the names of all drives that are tested. 
- ```shell - echo bfq > /sys/block/sd$i/queue/scheduler - echo 512 > /sys/block/sd$i/queue/nr_requests - echo 8192 > /sys/block/sd$i/queue/read_ahead_kb - echo 4 > /sys/block/sd$i/queue/iosched/slice_idle - echo 500 > /proc/sys/vm/dirty_expire_centisecs - echo 100 > /proc/sys/vm/dirty_writeback_centisecs - ``` + ```shell + echo bfq > /sys/block/sd$i/queue/scheduler + echo 512 > /sys/block/sd$i/queue/nr_requests + echo 8192 > /sys/block/sd$i/queue/read_ahead_kb + echo 4 > /sys/block/sd$i/queue/iosched/slice_idle + echo 500 > /proc/sys/vm/dirty_expire_centisecs + echo 100 > /proc/sys/vm/dirty_writeback_centisecs + ``` - In this scenario, you can set the quantity of partitions and parallelism to three to five times of the total cluster core quantity for data sharding. This reduces the size of a single task file for better performance. Open the **conf/spark.conf** file of HiBench and use the following shard settings: - ```shell - spark.sql.shuffle.partitions 1000 - spark.default.parallelism 2000 - ``` + ```shell + spark.sql.shuffle.partitions 1000 + spark.default.parallelism 2000 + ``` - Open the **conf/spark.conf** file of HiBench and add the following executor parameters: - ```shell - yarn.executor.num 27 - yarn.executor.cores 7 - spark.executor.memory 25G - spark.driver.memory 36G - ``` - + ```shell + yarn.executor.num 27 + yarn.executor.cores 7 + spark.executor.memory 25G + spark.driver.memory 36G + ``` + - Tune network parameters. - - ```shell - ethtool -K enp131s0 gro on - ethtool -K enp131s0 tso on - ethtool -K enp131s0 gso on - ethtool -G enp131s0 rx 4096 tx 4096 - ethtool -G enp131s0 rx 4096 tx 4096 - # The TM 280 NIC supports a maximum of 9,000 MTUs. - ifconfig enp131s0 mtu 9000 up - ``` - + + ```shell + ethtool -K enp131s0 gro on + ethtool -K enp131s0 tso on + ethtool -K enp131s0 gso on + ethtool -G enp131s0 rx 4096 tx 4096 + ethtool -G enp131s0 rx 4096 tx 4096 + # The TM 280 NIC supports a maximum of 9,000 MTUs. 
+ ifconfig enp131s0 mtu 9000 up + ``` + ##### CPU-Intensive Scenario: Bayesian - Purpose - ​Bayesian is a CPU-intensive scenario. You can adjust I/O parameters and Spark executor parameters for the optimal performance. + Bayesian is a CPU-intensive scenario. You can adjust I/O parameters and Spark executor parameters for the optimal performance. - Method @@ -650,7 +650,7 @@ Based on the basic Spark configuration values, obtain a group of executor parame - Purpose - ​k-means is a CPU-intensive scenario. You can adjust I/O parameters and Spark executor parameters for the optimal performance. + k-means is a CPU-intensive scenario. You can adjust I/O parameters and Spark executor parameters for the optimal performance. - Method diff --git a/docs/en/docs/SystemOptimization/figures/mysql-tuning-flow.png b/Archive-en/uncertain/SystemOptimization/figures/mysql-tuning-flow.png similarity index 100% rename from docs/en/docs/SystemOptimization/figures/mysql-tuning-flow.png rename to Archive-en/uncertain/SystemOptimization/figures/mysql-tuning-flow.png diff --git a/Archive-en/uncertain/SystemOptimization/mysql-performance-tuning.md b/Archive-en/uncertain/SystemOptimization/mysql-performance-tuning.md new file mode 100644 index 0000000000000000000000000000000000000000..aff86ed344be9b3bd67316733015362037dfa4d2 --- /dev/null +++ b/Archive-en/uncertain/SystemOptimization/mysql-performance-tuning.md @@ -0,0 +1,358 @@ +# MySQL Performance Tuning Guide + +## Introduction + +### Tuning Guidelines + +The MySQL performance varies depending on the hardware, operating system (OS), and basic software in use. It is also affected by the design of each subsystem, algorithms used, and compiler settings. + +Observe the following guidelines when tuning the performance: The guidelines are as follows: + +During performance analysis, analyze system resource bottlenecks from multiple aspects. Poor system performance may not be caused by the system itself but by other factors. 
For example, high CPU usage may be caused by insufficient memory capacity, with CPU resources exhausted by memory scheduling.
+
+- Adjust only one parameter of a specific aspect that affects the performance at a time. If multiple parameters are adjusted at the same time, it is difficult to determine which one caused the change in performance.
+
+- During system performance analysis, the performance analysis tool itself consumes CPU and memory resources, which may worsen the resource bottleneck of the system.
+- Ensure that the program runs properly after performance tuning.
+- Performance tuning is a continuous process. The result of each tuning should be fed back into subsequent version development.
+- Performance tuning cannot compromise code readability and maintainability.
+
+### Tuning Flow
+
+Identify problems, find performance bottlenecks, and determine a tuning method based on the bottleneck level.
+
+The following figure shows the MySQL tuning flow.
+
+![](./figures/mysql-tuning-flow.png)
+
+The tuning analysis process is as follows:
+
+- In most cases, the pressure test result may fail to meet the expectation even before the full test traffic reaches the server, due to network factors such as the bandwidth, the maximum number of connections, and limits on the number of new connections.
+- Check whether the key metrics meet requirements. If not, locate the possible causes. The problem usually lies on the server (in most cases) and only rarely on the client.
+- If the problem cause is on the server, check hardware metrics such as CPU, memory, drive I/O, and network I/O. If any abnormal hardware metric is detected, further analysis is required.
+- If the hardware metrics are normal, check the database metrics, such as the wait events and memory hit ratio.
+- If hardware and database metrics are normal, check the algorithms, buffers, caches, and synchronous/asynchronous behavior.
If any abnormal metric is detected, further analysis is required.
+
+## Hardware Tuning
+
+### Purpose
+
+You can configure advanced BIOS settings for different server hardware to improve server performance.
+
+### Method
+
+This method applies to Kunpeng servers. For x86 servers, such as Intel servers, you can retain the default BIOS configurations.
+
+1. Disable the SMMU (only for Kunpeng servers).
+
+    - During the server restart, press **Delete** to access the BIOS, choose **Advanced** \> **MISC Config**, and press **Enter**.
+    - Set **Support Smmu** to **Disable**.
+
+    Note: Disable the SMMU feature only in non-virtualization scenarios. In virtualization scenarios, keep the SMMU enabled.
+
+2. Disable prefetch.
+
+    - On the BIOS, choose **Advanced** \> **MISC Config** and press **Enter**.
+
+    - Set **CPU Prefetching Configuration** to **Disabled** and press **F10**.
+
+## OS Tuning
+
+### NIC Interrupt-Core Binding
+
+#### Purpose
+
+To improve system network performance, you can disable the irqbalance service and manually bind NIC interrupts to dedicated cores to isolate NIC interrupts from service requests.
+
+#### Method
+
+The optimal number of CPUs for binding interrupts varies depending on the hardware configuration. For example, the optimal number of CPUs for binding interrupts on the Kunpeng 920 5250 processor is 5. You can observe the usage of the five CPUs to determine whether to adjust the number.
+
+The following script is used to set the optimal interrupt binding for MySQL when the Kunpeng 920 5250 processor and Huawei TM280 25G NIC are used.
**$1** indicates the NIC name, **$2** indicates the number of queues (5), and **$3** indicates the bus information (`bus-info`) of the NIC, which can be queried by running `ethtool -i <NIC name>`:
+
+```shell
+#!/bin/bash
+eth1=$1
+cnt=$2
+bus=$3
+ethtool -L $eth1 combined $cnt
+
+irq1=`cat /proc/interrupts | grep -E ${bus} | head -n$cnt | awk -F ':' '{print $1}'`
+irq1=`echo $irq1`
+cpulist=(91 92 93 94 95) # Set the cores dedicated for handling interrupts.
+c=0
+for irq in $irq1
+do
+    echo ${cpulist[c]} "->" $irq
+    echo ${cpulist[c]} > /proc/irq/$irq/smp_affinity_list
+    let "c++"
+done
+```
+
+**Note: If Gazelle is used, you do not need to use the method described in this section.**
+
+### NUMA Core Binding
+
+#### Purpose
+
+NUMA core binding reduces cross-NUMA memory access and improves system memory access performance.
+
+#### Method
+
+Based on the NIC interrupt settings in the previous section, set the NUMA core binding range to the remaining cores (0 to 90) before running the MySQL startup command. **$mysql_path** indicates the MySQL installation path.
+
+```shell
+numactl -C 0-90 -i 0-3 $mysql_path/bin/mysqld --defaults-file=/etc/my.cnf &
+```
+
+**Note: If Gazelle is used, you do not need to use the method described in this section.**
+
+### Scheduling Parameter Tuning
+
+#### Purpose
+
+In high-load scenarios, the CPU utilization cannot reach 100%. In-depth analysis of the scheduling trace of each thread shows that the kernel cannot find a proper process for migration during load balancing. As a result, the CPU is intermittently idle and load balancing fails, wasting CPU resources. You can enable the openEuler STEAL mode to further improve the CPU utilization and system performance. (**This feature is available in openEuler 20.03 SP2 and later versions.**)
+
+#### Method
+
+1. Add **sched_steal_node_limit=4** to the end of the kernel startup items in **/etc/grub2-efi.cfg**, as shown in the following figure.
+
+    ![](./figures/kernel-boot-option-parameters.png)
+
+    After the modification, reboot the system for the change to take effect.
+
+2. Set the STEAL scheduling feature as follows after the reboot:
+
+    ```shell
+    echo STEAL > /sys/kernel/debug/sched_features
+    ```
+
+### Memory Hugepage Optimization
+
+#### Purpose
+
+The translation lookaside buffer (TLB) is the high-speed cache in the CPU for the page table that stores the mapping between virtual page addresses and physical page addresses. A higher TLB hit ratio indicates better page table query performance. In a given service scenario, increasing the memory page size can improve the TLB hit ratio and access efficiency, thereby improving the server performance.
+
+#### Method
+
+- Change the memory page size of the kernel.
+
+    Run the `getconf PAGESIZE` command to check the memory page size. If the memory page size is 4096 (4 KB), you can increase the page size by changing the memory page size value of the Linux kernel. You need to recompile the kernel after modifying the kernel compilation options. The procedure is as follows:
+
+    1. Run `make menuconfig`.
+
+    2. Set **PAGESIZE** to 64K (**Kernel Features--\>Page size(64KB)**).
+
+    3. Compile and install the kernel.
+
+### Gazelle Protocol Stack Tuning
+
+#### Purpose
+
+The deep layers of the native kernel network protocol stack bring high overhead and high system call costs. The Gazelle user-mode protocol stack can be used to replace the kernel protocol stack. By hooking the POSIX interfaces, Gazelle eliminates overheads caused by system calls, thereby greatly improving the network I/O throughput of an application.
+
+#### Method
+
+1. Install the dependency packages.
+
+    Configure the Yum source of openEuler and run the `yum` command to install the dependencies.
+
+    ```shell
+    yum install dpdk libconfig numactl libboundscheck libcap gazelle
+    ```
+
+2. Install the .ko file as the **root** user.
+
+    Bind the NIC from the kernel driver to the user-mode driver. Choose one of the following .ko files as required. The MLX4 and MLX5 NICs do not need to be bound to the VFIO or UIO driver.
+
+    ```shell
+    #If the IOMMU is available
+    modprobe vfio-pci
+    #If the IOMMU is not available and the VFIO supports the no-IOMMU mode
+    modprobe vfio enable_unsafe_noiommu_mode=1
+    modprobe vfio-pci
+    #Other cases
+    modprobe igb_uio
+    ```
+
+3. Bind DPDK to the NIC.
+
+    Bind the service NIC (enp3s0 is used as an example) to the driver selected in the previous step to provide an interface for the user-mode NIC driver to access NIC resources.
+
+    ```shell
+    #Bring down the service NIC enp3s0.
+    ip link set enp3s0 down
+
+    #Use vfio-pci
+    dpdk-devbind -b vfio-pci enp3s0
+
+    #Use igb_uio
+    dpdk-devbind -b igb_uio enp3s0
+    ```
+
+4. Configure memory hugepages.
+
+    Gazelle uses hugepage memory to improve efficiency. With **root** permissions, you can reserve system hugepages of any supported size. Select a page size as required and configure sufficient hugepage memory. By default, 2 GB of hugepage memory is configured for each memory node, and the page size is 2 MB.
+
+    ```shell
+    #Statically allocate 8192 hugepages of 2 MB each: 2M * 8192 = 16G
+    echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+    #View configuration results.
+    grep Huge /proc/meminfo
+    ```
+
+5. Mount hugepage memory.
+
+    To enable the lstack process to access hugepage memory, create a mount directory and mount hugetlbfs:
+
+    ```shell
+    mkdir -p /mnt/hugepages-gazelle
+    chmod -R 700 /mnt/hugepages-gazelle
+    mount -t hugetlbfs nodev /mnt/hugepages-gazelle -o pagesize=2M
+    ```
+
+6. Use Gazelle for the application.
+
+    Use **LD_PRELOAD** to preload the dynamic library of Gazelle. Use **GAZELLE_BIND_PROCNAME** to specify the MySQL process name.
+
+    ```shell
+    GAZELLE_BIND_PROCNAME=mysqld LD_PRELOAD=/usr/lib64/liblstack.so $mysql_path/bin/mysqld --defaults-file=/etc/my.cnf --bind-address=192.168.1.10 &
+    ```
+
+    In the preceding command, the `bind-address` parameter specifies the IP address of the service NIC on the server, which must be the same as the value of **host_addr** in the Gazelle configuration file.
+
+7. Modify the Gazelle configuration file.
+
+    Modify the Gazelle configuration file **/etc/gazelle/lstack.conf** based on the hardware environment and software requirements. The following is an example:
+
+    ```shell
+    dpdk_args=["--socket-mem", "2048,2048,2048,2048", "--huge-dir", "/mnt/hugepages-gazelle", "--proc-type", "primary", "--legacy-mem", "--map-perfect"]
+
+    use_ltran=0
+    kni_switch=0
+    low_power_mode=0
+    listen_shadow=1
+
+    num_cpus="18,38,58,78"
+    host_addr="192.168.1.10"
+    mask_addr="255.255.255.0"
+    gateway_addr="192.168.1.1"
+    devices="aa:bb:cc:dd:ee:ff"
+    ```
+
+    The **--socket-mem** parameter indicates the memory allocated to each memory node. The default value is 2048 (MB); in this example, each of the four memory nodes is allocated 2 GB (2048 MB). The **--huge-dir** parameter indicates the directory to which hugepage memory is mounted. **num_cpus** records the IDs of the CPUs bound to the lstack threads. You can select CPUs by NUMA node. The **host_addr**, **mask_addr**, **gateway_addr**, and **devices** parameters indicate the IP address, subnet mask, gateway address, and MAC address of the service NIC, respectively.
+
+    For details, see the [Gazelle User Guide](https://gitee.com/openeuler/gazelle/blob/master/doc/Gazelle%E4%BD%BF%E7%94%A8%E6%8C%87%E5%8D%97.md).
+
+## MySQL Tuning
+
+### Database Parameter Tuning
+
+#### Purpose
+
+Modify database parameter settings to improve server performance.
+
+#### Method
+
+The default configuration file path is **/etc/my.cnf**.
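Several parameters in the sample configuration are sized relative to physical memory; for example, the buffer pool is generally about 60% of the server memory. As a minimal sketch of that sizing rule — the helper below, its variable names, and its rounding to whole megabytes are illustrative assumptions, not part of the original guide — the value can be derived from `/proc/meminfo`:

```shell
#!/bin/sh
# Sketch only: suggest innodb_buffer_pool_size as ~60% of physical RAM,
# following the "generally 60% of the server memory" guideline in this section.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)  # total RAM in KB
pool_mb=$((mem_kb * 60 / 100 / 1024))                  # 60% of RAM, in MB
suggestion="innodb_buffer_pool_size=${pool_mb}M"
echo "$suggestion"
```

On a 384 GB server this prints roughly `innodb_buffer_pool_size=235929M` (about 230 GB, in line with the 230G value in the sample configuration); treat the output as a starting point rather than a hard rule.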
You can use the following configuration file parameters to start the database: + +```shell +[mysqld_safe] +log-error=/data/mysql/log/mysql.log +pid-file=/data/mysql/run/mysqld.pid + +[client] +socket=/data/mysql/run/mysql.sock +default-character-set=utf8 + +[mysqld] +basedir=/usr/local/mysql +tmpdir=/data/mysql/tmp +datadir=/data/mysql/data +socket=/data/mysql/run/mysql.sock +port=3306 +user=root +default_authentication_plugin=mysql_native_password +ssl=0 # Disable SSL. +max_connections=2000 # Set the maximum number of connections. +back_log=2048 #Set the number of cached session requests. +performance_schema=OFF # Disable the performance mode. +max_prepared_stmt_count=128000 + +#file +innodb_file_per_table=on # Set one file for each table. +innodb_log_file_size=1500M # Set the log file size. +innodb_log_files_in_group=32 # Set the number of log file groups. +innodb_open_files=4000 # Set the maximum number of tables that can be opened. + +#buffers +innodb_buffer_pool_size=230G # Set the buffer pool size, which is generally 60% of the server memory. +innodb_buffer_pool_instances=16 # Set the number of buffer pool instances to improve the concurrency capability. +innodb_log_buffer_size=64M # Set the log buffer size. + +#tune +sync_binlog=1 # Set the number of sync_binlog transactions to be submitted for drive flushing each time. +innodb_flush_log_at_trx_commit=1 # Each time when a transaction is submitted, MySQL writes the data in the log buffer to the log file and flushes the data to drives. +innodb_use_native_aio=1 # Enable asynchronous I/O. +innodb_spin_wait_delay=180 # Set the spin_wait_delay parameter to prevent system spin. +innodb_sync_spin_loops=25 # Set the spin_loops loop times to prevent system spin. +innodb_spin_wait_pause_multiplier=25 # Set a multiplier value used to determine the number of PAUSE instructions in spin-wait loops. +innodb_flush_method=O_DIRECT # Set the open and write modes of InnoDB data files and redo logs. 
+innodb_io_capacity=20000 # Set the maximum IOPS of InnoDB background threads. +innodb_io_capacity_max=40000 # Set the maximum IOPS of InnoDB background threads under pressure. +innodb_lru_scan_depth=9000 # Set the number of dirty pages flushed by the page cleaner thread each time. +innodb_page_cleaners=16 # Set the number of threads for writing dirty data to drives. +table_open_cache_instances=32 # Set the maximum number of table cache instances. +table_open_cache=30000 # Set the maximum number of open tables cached in one table cache instance. + +#perf special +innodb_flush_neighbors=0 # Check all pages in the extent where the page is located. If the page is dirty, flush all pages. This parameter is disabled for SSDs. +innodb_write_io_threads=16 # Set the number of write threads. +innodb_read_io_threads=16 # Set the number of read threads. +innodb_purge_threads=32 # Set the number of undo page threads to be purged. +innodb_adaptive_hash_index=0 + +sql_mode=STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,NO_AUTO_VALUE_ON_ZERO,STRICT_ALL_TABLES +``` + +Table 1 Database parameters + +| Parameter | Description | Tuning Suggestion | +| -------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| innodb_thread_concurrency | OS threads used by InnoDB to process the user transaction requests | The default value **0** is recommended, that is, the number of concurrent threads is not limited by default. | +| innodb_read_io_threads | Number of threads that process the read requests in the request queue | Set this parameter based on the number of CPU cores and the read/write ratio. | +| innodb_write_io_threads | Number of threads that process the write requests in the request queue | Set this parameter based on the number of CPU cores and the read/write ratio. | +| innodb_buffer_pool_instances | Number of memory buffer pools. Enable multiple memory buffer pools to hash data to be buffered to different buffer pools. In this way, the memory can be read and written concurrently. | Recommended value: 8 to 32 | +| innodb_open_files | Number of files that can be opened by InnoDB in the innodb_file_per_table mode | A larger value is recommended, especially when there are a large number of tables. | +| innodb_buffer_pool_size | Size of the buffer that caches data and indexes | Recommended value: 60% of the memory | +| innodb_log_buffer_size | Size of the buffer that caches redo logs | Default value: 64 MB. Set this parameter based on the value of **innodb_log_wait**. | +| innodb_io_capacity | Maximum IOPS of InnoDB background threads | Recommended value: 75% of the total I/O QPS | +| innodb_log_files_in_group | Number of redo log groups | - | +| innodb_log_file_size | Size of the redo log file | A larger value is recommended for write-intensive scenarios. However, large size of the redo log file prolongs data restoration. 
When testing the ultimate performance in the non-production environment, increase the log file size as large as possible. In commercial scenarios, consider the data restoration time when setting this parameter. | +| innodb_flush_method | Method of flushing drives for logs and data. The options include the following:
**datasync**: The data write operation is considered complete when data is written to the buffer of the operating system. Then, the operating system flushes the data from the buffer to drives and updates the metadata of files in drives.
**O_DSYNC**: Logs are written to drives, and data files are flushed through fsync.
**O_DIRECT**: Data files are directly written from the MySQL InnoDB buffer to drives without being buffered in the operating system. The write operation is completed by the flush operation. Logs are buffered by the operating system. | **O_DIRECT** is recommended. | +| innodb_spin_wait_delay | Polling interval | Set this parameter to a value as long as there is no spin_lock hotspot function. Recommended value: **180** | +| innodb_sync_spin_loops | Number of polling times | Set this parameter to a value as long as there is no spin_lock hotspot function. Recommended value: **25** | +| innodb_spin_wait_pause_multiplier | Random number used to control the polling interval | Set this parameter to a value as long as there is no spin_lock hotspot function. Default value: **50**; recommended value: **25** to **50** | +| innodb_lru_scan_depth | Number of available pages in the LRU list | Default value: **1024**. When testing the ultimate performance in the non-production environment, you can increase the value to reduce the number of checkpoints. | +| innodb_page_cleaners | Number of threads for refreshing dirty data | Set it to the same value as **innodb_buffer_pool_instances**. | +| innodb_purge_threads | Number of threads for purging undo | - | +| innodb_flush_log_at_trx_commit | **0**: writes binlog every second no matter whether transactions are submitted.
**1**: writes the content in the log buffer to drives each time a transaction is submitted. The log files are updated to drives. This mode delivers the best security.
**2**: writes data to the operating system cache each time a transaction is submitted, and the operating system updates data to the drives. This mode delivers the best performance. | When testing the ultimate performance in a non-production environment, set this parameter to **0**. | +| innodb_doublewrite | Whether to enable the double write function | When testing the ultimate performance in a non-production environment, set this parameter to **0** to disable double write. | +| ssl | Whether to enable SSL | SSL has great impact on performance. When testing the ultimate performance in a non-production environment, set this parameter to 0 to disable SSL. In commercial scenarios, set this parameter based on customer requirements. | +| table_open_cache_instances | Number of partitions that cache table handles in MySQL | Recommended value: **16** to **32** | +| table_open_cache | Number of tables opened by Mysqld | Recommended value: **30000** | +| skip_log_bin | Whether to enable binlog | When testing the ultimate performance in a non-production environment, add this parameter to the configuration file and disable binlog (add the following content to the configuration file:
skip_log_bin
#log-bin=mysql-bin
). | +| performance_schema | Whether to enable the performance schema | When testing the ultimate performance in a non-production environment, set this parameter to **OFF** to disable the performance schema. | + +### Database Kernel Tuning + +#### Purpose + +Modify the source code of the MySQL database to improve the database performance. To use the database kernel optimization patch, you need to recompile the database. + +#### Method + +MySQL database kernel tuning involves two scenarios: online transaction processing (OLTP) and online analytical processing (OLAP). Different optimization patches are used in different scenarios. + +OLTP is a transaction-oriented processing system. It mainly processes small transactions and queries and responds quickly to user operations. For details about the OLTP kernel optimization patch, see [MySQL Fine-Grained Lock Tuning](https://www.hikunpeng.com/document/detail/en/kunpengdbs/fg/kunpengpatch_20_0001.html). + +OLAP analyzes and queries current and historical data and generates reports to support management and decision-making. For details about the OLAP kernel optimization patch, see [MySQL OLAP Parallel Optimization](https://www.hikunpeng.com/document/detail/en/kunpengdbs/fg/kunpengolap_20_0002.html). 
diff --git a/docs/en/docs/SystemOptimization/overview.md b/Archive-en/uncertain/SystemOptimization/overview.md similarity index 100% rename from docs/en/docs/SystemOptimization/overview.md rename to Archive-en/uncertain/SystemOptimization/overview.md diff --git a/docs/en/Cloud/ClusterDeployment/Kubernetes/Menu/index.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..07eab559db7490fd792b2590d47b340e396ef89d --- /dev/null +++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/Menu/index.md @@ -0,0 +1,19 @@ +--- +headless: true +--- + +- [Kubernetes Cluster Deployment Guide]({{< relref "./kubernetes.md" >}}) + - [Preparing VMs]( {{< relref "./preparing-vms.md">}}) + - [Manual Cluster Deployment]({{< relref "./deploying-a-kubernetes-cluster-manually.md" >}}) + - [Installing the Kubernetes Software Package]( {{< relref "./installing-the-kubernetes-software-package.md" >}}) + - [Preparing Certificates]({{< relref "./preparing-certificates.md" >}}) + - [Installing etcd]({{< relref "./installing-etcd.md" >}}) + - [Deploying Components on the Control Plane]({{< relref "./deploying-control-plane-components.md" >}}) + - [Deploying a Node Component]({{< relref "./deploying-a-node-component.md" >}}) + - [Automatic Cluster Deployment]({{< relref "./eggo-automatic-deployment.md" >}}) + - [Tool Introduction]({{< relref "./eggo-tool-introduction.md" >}}) + - [Deploying a Cluster]({{< relref "./eggo-deploying-a-cluster.md" >}}) + - [Dismantling a Cluster]({{< relref "./eggo-dismantling-a-cluster.md" >}}) + - [Running the Test Pod]({{< relref "./running-the-test-pod.md" >}}) + - [Kubernetes Cluster Deployment Guide Based on containerd]({{< relref "./kubernetes-cluster-deployment-guide1.md" >}}) + - [Common Issues and Solutions]({{< relref "./kubernetes-common-issues-and-solutions.md" >}}) diff --git a/docs/en/docs/Kubernetes/deploying-a-Kubernetes-cluster-manually.md 
b/docs/en/Cloud/ClusterDeployment/Kubernetes/deploying-a-kubernetes-cluster-manually.md similarity index 100% rename from docs/en/docs/Kubernetes/deploying-a-Kubernetes-cluster-manually.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/deploying-a-kubernetes-cluster-manually.md diff --git a/docs/en/docs/Kubernetes/deploying-a-node-component.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/deploying-a-node-component.md similarity index 90% rename from docs/en/docs/Kubernetes/deploying-a-node-component.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/deploying-a-node-component.md index 33e8cded11cf50ef664043bdd63fe92b1b0cecbb..7837252a72ddfda522d5dbdaf01d006afbc67f9d 100644 --- a/docs/en/docs/Kubernetes/deploying-a-node-component.md +++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/deploying-a-node-component.md @@ -1,378 +1,381 @@ -# Deploying a Node Component - -This section uses the `k8snode1` node as an example. - -## Environment Preparation - -```bash -# A proxy needs to be configured for the intranet. -$ dnf install -y docker iSulad conntrack-tools socat containernetworking-plugins -$ swapoff -a -$ mkdir -p /etc/kubernetes/pki/ -$ mkdir -p /etc/cni/net.d -$ mkdir -p /opt/cni -# Delete the default kubeconfig file. -$ rm /etc/kubernetes/kubelet.kubeconfig - -## Use iSulad as the runtime ########. -# Configure the iSulad. -cat /etc/isulad/daemon.json -{ - "registry-mirrors": [ - "docker.io" - ], - "insecure-registries": [ - "k8s.gcr.io", - "quay.io" - ], - "pod-sandbox-image": "k8s.gcr.io/pause:3.2",# pause type - "network-plugin": "cni", # If this parameter is left blank, the CNI network plug-in is disabled. In this case, the following two paths become invalid. After the plug-in is installed, restart iSulad. - "cni-bin-dir": "/usr/libexec/cni/", - "cni-conf-dir": "/etc/cni/net.d", -} - -# Add the proxy to the iSulad environment variable and download the image. 
-cat /usr/lib/systemd/system/isulad.service -[Service] -Type=notify -Environment="HTTP_PROXY=http://name:password@proxy:8080" -Environment="HTTPS_PROXY=http://name:password@proxy:8080" - -# Restart the iSulad and set it to start automatically upon power-on. -systemctl daemon-reload -systemctl restart isulad - -## If Docker is used as the runtime, run the following command: ######## -$ dnf install -y docker -# If a proxy environment is required, configure a proxy for Docker, add the configuration file http-proxy.conf, and edit the following content. Replace name, password, and proxy-addr with the actual values. -$ cat /etc/systemd/system/docker.service.d/http-proxy.conf -[Service] -Environment="HTTP_PROXY=http://name:password@proxy-addr:8080" -$ systemctl daemon-reload -$ systemctl restart docker -``` - -## Creating kubeconfig Configuration Files - -Perform the following operations on each node to create a configuration file: - -```bash -$ kubectl config set-cluster openeuler-k8s \ - --certificate-authority=/etc/kubernetes/pki/ca.pem \ - --embed-certs=true \ - --server=https://192.168.122.154:6443 \ - --kubeconfig=k8snode1.kubeconfig - -$ kubectl config set-credentials system:node:k8snode1 \ - --client-certificate=/etc/kubernetes/pki/k8snode1.pem \ - --client-key=/etc/kubernetes/pki/k8snode1-key.pem \ - --embed-certs=true \ - --kubeconfig=k8snode1.kubeconfig - -$ kubectl config set-context default \ - --cluster=openeuler-k8s \ - --user=system:node:k8snode1 \ - --kubeconfig=k8snode1.kubeconfig - -$ kubectl config use-context default --kubeconfig=k8snode1.kubeconfig -``` - -**Note: Change k8snode1 to the corresponding node name.** - -## Copying the Certificate - -Similar to the control plane, all certificates, keys, and related configurations are stored in the `/etc/kubernetes/pki/` directory. 
- -```bash -$ ls /etc/kubernetes/pki/ -ca.pem k8snode1.kubeconfig kubelet_config.yaml kube-proxy-key.pem kube-proxy.pem -k8snode1-key.pem k8snode1.pem kube_proxy_config.yaml kube-proxy.kubeconfig -``` - -## CNI Network Configuration - -containernetworking-plugins is used as the CNI plug-in used by kubelet. In the future, plug-ins such as calico and flannel can be introduced to enhance the network capability of the cluster. - -```bash -# Bridge Network Configuration -$ cat /etc/cni/net.d/10-bridge.conf -{ - "cniVersion": "0.3.1", - "name": "bridge", - "type": "bridge", - "bridge": "cnio0", - "isGateway": true, - "ipMasq": true, - "ipam": { - "type": "host-local", - "subnet": "10.244.0.0/16", - "gateway": "10.244.0.1" - }, - "dns": { - "nameservers": [ - "10.244.0.1" - ] - } -} - -# Loopback Network Configuration -$ cat /etc/cni/net.d/99-loopback.conf -{ - "cniVersion": "0.3.1", - "name": "lo", - "type": "loopback" -} -``` - -## Deploying the kubelet Service - -### Configuration File on Which Kubelet Depends - -```bash -$ cat /etc/kubernetes/pki/kubelet_config.yaml -kind: KubeletConfiguration -apiVersion: kubelet.config.k8s.io/v1beta1 -authentication: - anonymous: - enabled: false - webhook: - enabled: true - x509: - clientCAFile: /etc/kubernetes/pki/ca.pem -authorization: - mode: Webhook -clusterDNS: -- 10.32.0.10 -clusterDomain: cluster.local -runtimeRequestTimeout: "15m" -tlsCertFile: "/etc/kubernetes/pki/k8snode1.pem" -tlsPrivateKeyFile: "/etc/kubernetes/pki/k8snode1-key.pem" -``` - -**Note: The IP address of the cluster DNS is 10.32.0.10, which must be the same as the value of service-cluster-ip-range.** - -### Compiling the systemd Configuration File - -```bash -$ cat /usr/lib/systemd/system/kubelet.service -[Unit] -Description=kubelet: The Kubernetes Node Agent -Documentation=https://kubernetes.io/docs/ -Wants=network-online.target -After=network-online.target - -[Service] -ExecStart=/usr/bin/kubelet \ - -config=/etc/kubernetes/pki/kubelet_config.yaml \ - 
--network-plugin=cni \ - --pod-infra-container-image=k8s.gcr.io/pause:3.2 \ - --kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \ - --register-node=true \ - --hostname-override=k8snode1 \ - --cni-bin-dir="/usr/libexec/cni/" \ - --v=2 - -Restart=always -StartLimitInterval=0 -RestartSec=10 - -[Install] -WantedBy=multi-user.target -``` - -**Note: If iSulad is used as the runtime, add the following configuration:** - -```bash ---container-runtime=remote \ ---container-runtime-endpoint=unix:///var/run/isulad.sock \ -``` - -## Deploying kube-proxy - -### Configuration File on Which kube-proxy Depends - -```bash -cat /etc/kubernetes/pki/kube_proxy_config.yaml -kind: KubeProxyConfiguration -apiVersion: kubeproxy.config.k8s.io/v1alpha1 -clientConnection: - kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig -clusterCIDR: 10.244.0.0/16 -mode: "iptables" -``` - -### Compiling the systemd Configuration File - -```bash -$ cat /usr/lib/systemd/system/kube-proxy.service -[Unit] -Description=Kubernetes Kube-Proxy Server -Documentation=https://kubernetes.io/docs/reference/generated/kube-proxy/ -After=network.target - -[Service] -EnvironmentFile=-/etc/kubernetes/config -EnvironmentFile=-/etc/kubernetes/proxy -ExecStart=/usr/bin/kube-proxy \ - $KUBE_LOGTOSTDERR \ - $KUBE_LOG_LEVEL \ - --config=/etc/kubernetes/pki/kube_proxy_config.yaml \ - --hostname-override=k8snode1 \ - $KUBE_PROXY_ARGS -Restart=on-failure -LimitNOFILE=65536 - -[Install] -WantedBy=multi-user.target -``` - -## Starting a Component Service - -```bash -$ systemctl enable kubelet kube-proxy -$ systemctl start kubelet kube-proxy -``` - -Deploy other nodes in sequence. 
- -## Verifying the Cluster Status - -Wait for several minutes and run the following command to check the node status: - -```bash -$ kubectl get nodes --kubeconfig /etc/kubernetes/pki/admin.kubeconfig -NAME STATUS ROLES AGE VERSION -k8snode1 Ready 17h v1.20.2 -k8snode2 Ready 19m v1.20.2 -k8snode3 Ready 12m v1.20.2 -``` - -## Deploying coredns - -coredns can be deployed on a node or master node. In this document, coredns is deployed on the `k8snode1` node. - -### Compiling the coredns Configuration File - -```bash -$ cat /etc/kubernetes/pki/dns/Corefile -.:53 { - errors - health { - lameduck 5s - } - ready - kubernetes cluster.local in-addr.arpa ip6.arpa { - pods insecure - endpoint https://192.168.122.154:6443 - tls /etc/kubernetes/pki/ca.pem /etc/kubernetes/pki/admin-key.pem /etc/kubernetes/pki/admin.pem - kubeconfig /etc/kubernetes/pki/admin.kubeconfig default - fallthrough in-addr.arpa ip6.arpa - } - prometheus :9153 - forward . /etc/resolv.conf { - max_concurrent 1000 - } - cache 30 - loop - reload - loadbalance -} -``` - -Note: - -- Listen to port 53. -- Configure the Kubernetes plug-in, including the certificate and the URL of kube api. 
- -### Preparing the service File of systemd - -```bash -cat /usr/lib/systemd/system/coredns.service -[Unit] -Description=Kubernetes Core DNS server -Documentation=https://github.com/coredns/coredns -After=network.target - -[Service] -ExecStart=bash -c "KUBE_DNS_SERVICE_HOST=10.32.0.10 coredns -conf /etc/kubernetes/pki/dns/Corefile" - -Restart=on-failure -LimitNOFILE=65536 - -[Install] -WantedBy=multi-user.target -``` - -### Starting the Service - -```bash -$ systemctl enable coredns -$ systemctl start coredns -``` - -### Creating the Service Object of coredns - -```bash -$ cat coredns_server.yaml -apiVersion: v1 -kind: Service -metadata: - name: kube-dns - namespace: kube-system - annotations: - prometheus.io/port: "9153" - prometheus.io/scrape: "true" - labels: - k8s-app: kube-dns - kubernetes.io/cluster-service: "true" - kubernetes.io/name: "CoreDNS" -spec: - clusterIP: 10.32.0.10 - ports: - - name: dns - port: 53 - protocol: UDP - - name: dns-tcp - port: 53 - protocol: TCP - - name: metrics - port: 9153 - protocol: TCP -``` - -### Creating the Endpoint Object of coredns - -```bash -$ cat coredns_ep.yaml -apiVersion: v1 -kind: Endpoints -metadata: - name: kube-dns - namespace: kube-system -subsets: - - addresses: - - ip: 192.168.122.157 - ports: - - name: dns-tcp - port: 53 - protocol: TCP - - name: dns - port: 53 - protocol: UDP - - name: metrics - port: 9153 - protocol: TCP -``` - -### Confirming the coredns Service - -```bash -# View the service object. -$ kubectl get service -n kube-system kube-dns -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -kube-dns ClusterIP 10.32.0.10 53/UDP,53/TCP,9153/TCP 51m -# View the endpoint object. -$ kubectl get endpoints -n kube-system kube-dns -NAME ENDPOINTS AGE -kube-dns 192.168.122.157:53,192.168.122.157:53,192.168.122.157:9153 52m -``` +# Deploying a Node Component + +This section uses the `k8snode1` node as an example. + +## Environment Preparation + +```bash +# A proxy needs to be configured for the intranet. 
+$ dnf install -y docker iSulad conntrack-tools socat containernetworking-plugins
+$ swapoff -a
+$ mkdir -p /etc/kubernetes/pki/
+$ mkdir -p /etc/cni/net.d
+$ mkdir -p /opt/cni
+# Delete the default kubeconfig file.
+$ rm /etc/kubernetes/kubelet.kubeconfig
+
+######## Use iSulad as the runtime ########
+# Configure iSulad. Note that JSON allows neither comments nor trailing commas.
+# "pod-sandbox-image" sets the pause image. If "network-plugin" is left blank,
+# the CNI network plug-in is disabled and the two cni-* paths below are ignored.
+# After the plug-in is installed, restart iSulad.
+$ cat /etc/isulad/daemon.json
+{
+    "registry-mirrors": [
+        "docker.io"
+    ],
+    "insecure-registries": [
+        "k8s.gcr.io",
+        "quay.io"
+    ],
+    "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
+    "network-plugin": "cni",
+    "cni-bin-dir": "/usr/libexec/cni/",
+    "cni-conf-dir": "/etc/cni/net.d"
+}
+
+# Add the proxy to the iSulad environment variables so that images can be downloaded.
+$ cat /usr/lib/systemd/system/isulad.service
+[Service]
+Type=notify
+Environment="HTTP_PROXY=http://name:password@proxy:8080"
+Environment="HTTPS_PROXY=http://name:password@proxy:8080"
+
+# Restart iSulad and enable it to start automatically at boot.
+$ systemctl daemon-reload
+$ systemctl restart isulad
+
+######## If Docker is used as the runtime ########
+$ dnf install -y docker
+# If a proxy is required, configure it for Docker: create the http-proxy.conf configuration file with the following content, replacing name, password, and proxy-addr with the actual values.
+$ cat /etc/systemd/system/docker.service.d/http-proxy.conf +[Service] +Environment="HTTP_PROXY=http://name:password@proxy-addr:8080" +$ systemctl daemon-reload +$ systemctl restart docker +``` + +## Creating kubeconfig Configuration Files + +Perform the following operations on each node to create a configuration file: + +```bash +$ kubectl config set-cluster openeuler-k8s \ + --certificate-authority=/etc/kubernetes/pki/ca.pem \ + --embed-certs=true \ + --server=https://192.168.122.154:6443 \ + --kubeconfig=k8snode1.kubeconfig + +$ kubectl config set-credentials system:node:k8snode1 \ + --client-certificate=/etc/kubernetes/pki/k8snode1.pem \ + --client-key=/etc/kubernetes/pki/k8snode1-key.pem \ + --embed-certs=true \ + --kubeconfig=k8snode1.kubeconfig + +$ kubectl config set-context default \ + --cluster=openeuler-k8s \ + --user=system:node:k8snode1 \ + --kubeconfig=k8snode1.kubeconfig + +$ kubectl config use-context default --kubeconfig=k8snode1.kubeconfig +``` + +**Note: Change k8snode1 to the corresponding node name.** + +## Copying the Certificate + +Similar to the control plane, all certificates, keys, and related configurations are stored in the `/etc/kubernetes/pki/` directory. + +```bash +$ ls /etc/kubernetes/pki/ +ca.pem k8snode1.kubeconfig kubelet_config.yaml kube-proxy-key.pem kube-proxy.pem +k8snode1-key.pem k8snode1.pem kube_proxy_config.yaml kube-proxy.kubeconfig +``` + +## CNI Network Configuration + +containernetworking-plugins is used as the CNI plug-in used by kubelet. In the future, plug-ins such as calico and flannel can be introduced to enhance the network capability of the cluster. 
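The CNI configuration files that follow are plain JSON parsed by kubelet (or iSulad) when a pod sandbox is created; a malformed file typically surfaces only later as pods stuck in `ContainerCreating`. As a hedged sketch, each `.conf` file can be syntax-checked before the services are started. The directory and file below are temporary stand-ins, not the real `/etc/cni/net.d`:

```bash
# Sketch: syntax-check CNI config files before starting kubelet/iSulad.
# Uses a temporary stand-in directory instead of the real /etc/cni/net.d.
conf_dir="$(mktemp -d)"
cat > "$conf_dir/10-bridge.conf" <<'EOF'
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge"
}
EOF

status=ok
for f in "$conf_dir"/*.conf; do
    # python3 -m json.tool exits non-zero on invalid JSON.
    if ! python3 -m json.tool "$f" > /dev/null 2>&1; then
        echo "invalid JSON: $f"
        status=bad
    fi
done
echo "cni-config-check: $status"
rm -r "$conf_dir"
```

Running the same loop against the real `/etc/cni/net.d` catches stray trailing commas or comments, neither of which JSON allows.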
+
+```bash
+# Bridge Network Configuration
+$ cat /etc/cni/net.d/10-bridge.conf
+{
+    "cniVersion": "0.3.1",
+    "name": "bridge",
+    "type": "bridge",
+    "bridge": "cnio0",
+    "isGateway": true,
+    "ipMasq": true,
+    "ipam": {
+        "type": "host-local",
+        "subnet": "10.244.0.0/16",
+        "gateway": "10.244.0.1"
+    },
+    "dns": {
+        "nameservers": [
+            "10.244.0.1"
+        ]
+    }
+}
+
+# Loopback Network Configuration
+$ cat /etc/cni/net.d/99-loopback.conf
+{
+    "cniVersion": "0.3.1",
+    "name": "lo",
+    "type": "loopback"
+}
+```
+
+## Deploying the kubelet Service
+
+### Configuration File on Which Kubelet Depends
+
+```bash
+$ cat /etc/kubernetes/pki/kubelet_config.yaml
+kind: KubeletConfiguration
+apiVersion: kubelet.config.k8s.io/v1beta1
+authentication:
+  anonymous:
+    enabled: false
+  webhook:
+    enabled: true
+  x509:
+    clientCAFile: /etc/kubernetes/pki/ca.pem
+authorization:
+  mode: Webhook
+clusterDNS:
+- 10.32.0.10
+clusterDomain: cluster.local
+runtimeRequestTimeout: "15m"
+tlsCertFile: "/etc/kubernetes/pki/k8snode1.pem"
+tlsPrivateKeyFile: "/etc/kubernetes/pki/k8snode1-key.pem"
+```
+
+**Note: The cluster DNS address is 10.32.0.10, which must be an IP address within the range specified by service-cluster-ip-range.**
+
+### Compiling the systemd Configuration File
+
+```bash
+$ cat /usr/lib/systemd/system/kubelet.service
+[Unit]
+Description=kubelet: The Kubernetes Node Agent
+Documentation=https://kubernetes.io/docs/
+Wants=network-online.target
+After=network-online.target
+
+[Service]
+ExecStart=/usr/bin/kubelet \
+  --config=/etc/kubernetes/pki/kubelet_config.yaml \
+  --network-plugin=cni \
+  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
+  --kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \
+  --register-node=true \
+  --hostname-override=k8snode1 \
+  --cni-bin-dir="/usr/libexec/cni/" \
+  --v=2
+
+Restart=always
+StartLimitInterval=0
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+**Note: If iSulad is used as the runtime, add the following configuration:**
+
+```bash
+--container-runtime=remote \ +--container-runtime-endpoint=unix:///var/run/isulad.sock \ +``` + +## Deploying kube-proxy + +### Configuration File on Which kube-proxy Depends + +```bash +cat /etc/kubernetes/pki/kube_proxy_config.yaml +kind: KubeProxyConfiguration +apiVersion: kubeproxy.config.k8s.io/v1alpha1 +clientConnection: + kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig +clusterCIDR: 10.244.0.0/16 +mode: "iptables" +``` + +### Compiling the systemd Configuration File + +```bash +$ cat /usr/lib/systemd/system/kube-proxy.service +[Unit] +Description=Kubernetes Kube-Proxy Server +Documentation=https://kubernetes.io/docs/reference/generated/kube-proxy/ +After=network.target + +[Service] +EnvironmentFile=-/etc/kubernetes/config +EnvironmentFile=-/etc/kubernetes/proxy +ExecStart=/usr/bin/kube-proxy \ + $KUBE_LOGTOSTDERR \ + $KUBE_LOG_LEVEL \ + --config=/etc/kubernetes/pki/kube_proxy_config.yaml \ + --hostname-override=k8snode1 \ + $KUBE_PROXY_ARGS +Restart=on-failure +LimitNOFILE=65536 + +[Install] +WantedBy=multi-user.target +``` + +## Starting a Component Service + +```bash +systemctl enable kubelet kube-proxy +systemctl start kubelet kube-proxy +``` + +Deploy other nodes in sequence. + +## Verifying the Cluster Status + +Wait for several minutes and run the following command to check the node status: + +```bash +$ kubectl get nodes --kubeconfig /etc/kubernetes/pki/admin.kubeconfig +NAME STATUS ROLES AGE VERSION +k8snode1 Ready 17h v1.20.2 +k8snode2 Ready 19m v1.20.2 +k8snode3 Ready 12m v1.20.2 +``` + +## Deploying coredns + +coredns can be deployed on a node or master node. In this document, coredns is deployed on the `k8snode1` node. 
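As noted for the kubelet configuration, the DNS service address (10.32.0.10 in this guide) must fall within the service CIDR given to kube-apiserver through `--service-cluster-ip-range`. The following is an illustrative sketch of that check; the `10.32.0.0/24` service range is an assumption for the example, not a value stated in this guide:

```bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    old_ifs=$IFS
    IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Return success if the IP ($1) lies inside the CIDR ($2, e.g. 10.32.0.0/24).
ip_in_cidr() {
    ip_num=$(ip_to_int "$1")
    net_num=$(ip_to_int "${2%/*}")
    bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( ip_num & mask )) -eq $(( net_num & mask )) ]
}

if ip_in_cidr 10.32.0.10 10.32.0.0/24; then
    result="inside"
else
    result="outside"
fi
echo "clusterDNS 10.32.0.10 is $result 10.32.0.0/24"
```

A DNS address outside the service range would make the kube-dns Service unreachable at its clusterIP, so a check like this is a cheap safeguard before starting coredns.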
+ +### Compiling the coredns Configuration File + +```bash +$ cat /etc/kubernetes/pki/dns/Corefile +.:53 { + errors + health { + lameduck 5s + } + ready + kubernetes cluster.local in-addr.arpa ip6.arpa { + pods insecure + endpoint https://192.168.122.154:6443 + tls /etc/kubernetes/pki/ca.pem /etc/kubernetes/pki/admin-key.pem /etc/kubernetes/pki/admin.pem + kubeconfig /etc/kubernetes/pki/admin.kubeconfig default + fallthrough in-addr.arpa ip6.arpa + } + prometheus :9153 + forward . /etc/resolv.conf { + max_concurrent 1000 + } + cache 30 + loop + reload + loadbalance +} +``` + +Note: + +- Listen to port 53. +- Configure the Kubernetes plug-in, including the certificate and the URL of kube api. + +### Preparing the service File of systemd + +```bash +cat /usr/lib/systemd/system/coredns.service +[Unit] +Description=Kubernetes Core DNS server +Documentation=https://github.com/coredns/coredns +After=network.target + +[Service] +ExecStart=bash -c "KUBE_DNS_SERVICE_HOST=10.32.0.10 coredns -conf /etc/kubernetes/pki/dns/Corefile" + +Restart=on-failure +LimitNOFILE=65536 + +[Install] +WantedBy=multi-user.target +``` + +### Starting the Service + +```bash +systemctl enable coredns +systemctl start coredns +``` + +### Creating the Service Object of coredns + +```bash +$ cat coredns_server.yaml +apiVersion: v1 +kind: Service +metadata: + name: kube-dns + namespace: kube-system + annotations: + prometheus.io/port: "9153" + prometheus.io/scrape: "true" + labels: + k8s-app: kube-dns + kubernetes.io/cluster-service: "true" + kubernetes.io/name: "CoreDNS" +spec: + clusterIP: 10.32.0.10 + ports: + - name: dns + port: 53 + protocol: UDP + - name: dns-tcp + port: 53 + protocol: TCP + - name: metrics + port: 9153 + protocol: TCP +``` + +### Creating the Endpoint Object of coredns + +```bash +$ cat coredns_ep.yaml +apiVersion: v1 +kind: Endpoints +metadata: + name: kube-dns + namespace: kube-system +subsets: + - addresses: + - ip: 192.168.122.157 + ports: + - name: dns-tcp + port: 53 + 
protocol: TCP + - name: dns + port: 53 + protocol: UDP + - name: metrics + port: 9153 + protocol: TCP +``` + +### Confirming the coredns Service + +```bash +# View the service object. +$ kubectl get service -n kube-system kube-dns +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kube-dns ClusterIP 10.32.0.10 53/UDP,53/TCP,9153/TCP 51m +# View the endpoint object. +$ kubectl get endpoints -n kube-system kube-dns +NAME ENDPOINTS AGE +kube-dns 192.168.122.157:53,192.168.122.157:53,192.168.122.157:9153 52m +``` diff --git a/docs/en/docs/Kubernetes/deploying-control-plane-components.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/deploying-control-plane-components.md similarity index 100% rename from docs/en/docs/Kubernetes/deploying-control-plane-components.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/deploying-control-plane-components.md diff --git a/docs/en/docs/Kubernetes/eggo-automatic-deployment.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-automatic-deployment.md similarity index 96% rename from docs/en/docs/Kubernetes/eggo-automatic-deployment.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-automatic-deployment.md index afd915506d05010ee49497bf948e60d70702453f..e4f2dc31e5776cf8c1a625291db1a4d787804066 100644 --- a/docs/en/docs/Kubernetes/eggo-automatic-deployment.md +++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-automatic-deployment.md @@ -1,22 +1,20 @@ -# Automatic Deployment - -Manual deployment of Kubernetes clusters requires manually deploying various components. This is both time- and labor-consuming, especially during large scale Kubernetes cluster deployment, as low efficiency and errors are likely to surface. To solve the problem, openEuler launched the Kubernetes cluster deployment tool in version 21.09. This highly flexible tool provides functions such as automatic deployment and deployment process tracking of large scale Kubernetes clusters. 
- -The following describes the usage of the Kubernetes cluster automatic deployment tool. - -## Architecture Overview - - - -![](./figures/arch.png) - -The overall architecture of automatic cluster deployment is shown in the figure above. The modules are described as follows: - -- GitOps: Responsible for cluster configuration management, such as updating, creating, and deleting configurations.The cluster management function is not provided in version 21.09. -- InitCluster: The meta cluster, which functions as the central cluster to manage the other service clusters. -- eggops: Custom Resource Definitions (CRDs) and controllers used to abstract the Kubernetes clusters. -- master: The master node of Kubernetes, which provides the control plane of the cluster. -- worker: The load node of Kubernetes , which carries user services. -- ClusterA, ClusterB, and ClusterC: service clusters, which carry user services. - -If you are interested in the Kubernetes cluster deployment tool provided by openEuler, visit [https://gitee.com/openeuler/eggo](https://gitee.com/openeuler/eggo). \ No newline at end of file +# Automatic Deployment + +Manual deployment of Kubernetes clusters requires manually deploying various components. This is both time- and labor-consuming, especially during large scale Kubernetes cluster deployment, as low efficiency and errors are likely to surface. To solve the problem, openEuler launched the Kubernetes cluster deployment tool in version 21.09. This highly flexible tool provides functions such as automatic deployment and deployment process tracking of large scale Kubernetes clusters. + +The following describes the usage of the Kubernetes cluster automatic deployment tool. + +## Architecture Overview + +![](./figures/arch.png) + +The overall architecture of automatic cluster deployment is shown in the figure above. 
The modules are described as follows:
+
+- GitOps: Responsible for cluster configuration management, such as updating, creating, and deleting configurations. The cluster management function is not provided in version 21.09.
+- InitCluster: The meta cluster, which functions as the central cluster to manage the other service clusters.
+- eggops: Custom Resource Definitions (CRDs) and controllers used to abstract the Kubernetes clusters.
+- master: The master node of Kubernetes, which provides the control plane of the cluster.
+- worker: The workload node of Kubernetes, which carries user services.
+- ClusterA, ClusterB, and ClusterC: service clusters, which carry user services.
+
+If you are interested in the Kubernetes cluster deployment tool provided by openEuler, visit [https://gitee.com/openeuler/eggo](https://gitee.com/openeuler/eggo).
diff --git a/docs/en/docs/Kubernetes/eggo-deploying-a-cluster.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-deploying-a-cluster.md
similarity index 91%
rename from docs/en/docs/Kubernetes/eggo-deploying-a-cluster.md
rename to docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-deploying-a-cluster.md
index 91a0b18f3b3222b73bc0187902d659f5ed9b1e59..29b6e1f35e004cce93419962e2b8b1be0f378f5d 100644
--- a/docs/en/docs/Kubernetes/eggo-deploying-a-cluster.md
+++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-deploying-a-cluster.md
@@ -1,258 +1,256 @@
-# Deploying a Cluster
-
-This section describes how to deploy a Kubernetes cluster.
-
-## Preparing the Environment
-
-The Kubernetes cluster automatic deployment tool provided by openEuler:
-
-- Supports Kubernetes clusters deployment in various common Linux distributions, such as openEuler, CentOS, and Ubuntu.
-- Supports hybrid deployment of different CPU architectures (such as AMD64 and ARM64).
-
-### Prerequisites
-
-The following requirements must be met to use the Kubernetes cluster automatic deployment tool:
-
-- You have root permission, for cluster deployment.
-- The hostname has been configured for the hosts where Kubernetes is to be deployed. Ensure that the tar command is installed and can be used to decompress the tar.gz packages. -- SSH has been configured on the hosts where Kubernetes is to be deployed for remote access. Ensure that the password-free sudo permission is provided when a common user logs in using SSH. - -## Preparing the Installation Packages - -For offline installation, prepare dependency packages (such as etcd software packages, container engine software packages, Kubernetes cluster component software packages, network software packages, CoreDNS software packages, and required container images) based on the cluster architecture. - -Assume that the network plugin is Calico and the architecture of all hosts in the cluster is ARM64. Prepare the installation packages as follows: - -1. Download the required software packages and calico.yaml. - -2. Export the container image. - - ```shell - $ docker save -o images.tar calico/node:v3.19.1 calico/cni:v3.19.1 calico/kube-controllers:v3.19.1 calico/pod2daemon-flexvol:v3.19.1 k8s.gcr.io/pause:3.2 - ``` - -3. Store the downloaded installation packages, files, and images in the specified directory accordingly. For details about the storage format, see "Preparing the Environment." 
For example: - - ```shell - $ tree package - package - ├── bin - │ ├── bandwidth - │ ├── bridge - │ ├── conntrack - │ ├── containerd - │ ├── containerd-shim - │ ├── coredns - │ ├── ctr - │ ├── dhcp - │ ├── docker - │ ├── dockerd - │ ├── docker-init - │ ├── docker-proxy - │ ├── etcd - │ ├── etcdctl - │ ├── firewall - │ ├── flannel - │ ├── host-device - │ ├── host-local - │ ├── ipvlan - │ ├── kube-apiserver - │ ├── kube-controller-manager - │ ├── kubectl - │ ├── kubelet - │ ├── kube-proxy - │ ├── kube-scheduler - │ ├── loopback - │ ├── macvlan - │ ├── portmap - │ ├── ptp - │ ├── runc - │ ├── sbr - │ ├── socat - │ ├── static - │ ├── tuning - │ ├── vlan - │ └── vrf - ├── file - │ ├── calico.yaml - │ └── docker.service - ├── image - │ └── images.tar - └── packages_notes.md - ``` - -4. Compile packages_notes.md and declare the software package sources for users to view. - - ```shell - 1. etcd - - etcd,etcdctl - - Architecture: ARM64 - - Version: 3.5.0 - - Address: https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-arm64.tar.gz - - 2. Docker Engine - - containerd,containerd-shim,ctr,docker,dockerd,docker-init,docker-proxy,runc - - Architecture: ARM64 - - Version: 19.03.0 - - Address: https://download.docker.com/linux/static/stable/aarch64/docker-19.03.0.tgz - - 3. Kubernetes - - kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubelet,kube-proxy - - Architecture: ARM64 - - Version: 1.21.3 - - Address: https://www.downloadkubernetes.com/ - - 4. network - - bandwidth,dhcp,flannel,host-local,loopback,portmap,sbr,tuning,vrf,bridge,firewall,host-device,ipvlan,macvlan,ptp,static,vlan - - Architecture: ARM64 - - Version: 0.9.1 - - Address: https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-arm64-v0.9.1.tgz - - 5. CoreDNS - - coredns - - Architecture: ARM64 - - Version: 1.8.4 - - Address: https://github.com/coredns/coredns/releases/download/v1.8.4/coredns_1.8.4_linux_arm64.tgz - - 6. 
images.tar - - calico/node:v3.19.1 calico/cni:v3.19.1 calico/kube-controllers:v3.19.1 calico/pod2daemon-flexvol:v3.19.1 k8s.gcr.io/pause:3.2 - - Architecture: ARM64 - - Version: N/A - - Address: N/A - 7. calico.yaml - - Architecture: NA - - Version: v3.19.1 - - Address: https://docs.projectcalico.org/manifests/calico.yaml - ``` - -5. Go to the package directory and pack the downloaded software packages into packages-arm64.tar.gz. - - ```shell - $ tar -zcf package-arm64.tar.gz * - ``` - -6. Check the compressed package to ensure that the packaging is successful. - - ```shell - $ tar -tvf package/packages-arm64.tar.gz - drwxr-xr-x root/root 0 2021-07-29 10:37 bin/ - -rwxr-xr-x root/root 3636214 2021-02-05 23:43 bin/sbr - -rwxr-xr-x root/root 40108032 2021-07-28 16:40 bin/kube-proxy - -rwxr-xr-x root/root 4186218 2021-02-05 23:43 bin/vlan - -rwxr-xr-x root/root 3076118 2021-02-05 23:43 bin/static - -rwxr-xr-x root/root 3496425 2021-02-05 23:43 bin/host-local - -rwxr-xr-x root/root 3847814 2021-02-05 23:43 bin/portmap - -rwxr-xr-x root/root 9681959 2021-02-05 23:43 bin/dhcp - -rwxr-xr-x root/root 4054640 2021-02-05 23:43 bin/host-device - -rwxr-xr-x root/root 43909120 2021-07-28 16:41 bin/kube-scheduler - -rwxr-xr-x root/root 32831616 2019-07-18 02:27 bin/containerd - -rwxr-xr-x root/root 3284795 2021-02-05 23:43 bin/flannel - -rwxr-xr-x root/root 21757952 2021-06-16 05:52 bin/etcd - -rwxr-xr-x root/root 546520 2019-07-18 02:27 bin/docker-init - -rwxr-xr-x root/root 5878304 2019-07-18 02:27 bin/containerd-shim - -rwxr-xr-x root/root 4191734 2021-02-05 23:43 bin/macvlan - -rwxr-xr-x root/root 55248437 2019-07-18 02:27 bin/docker - -rwxr-xr-x root/root 376208 2019-10-27 01:42 bin/socat - -rwxr-xr-x root/root 4053707 2021-02-05 23:43 bin/bandwidth - -rwxr-xr-x root/root 4328311 2021-02-05 23:43 bin/ptp - -rwxr-xr-x root/root 3633613 2021-02-05 23:43 bin/vrf - -rwxr-xr-x root/root 3432839 2021-02-05 23:43 bin/loopback - -rwxr-xr-x root/root 109617672 2021-07-28 16:42 
bin/kubelet - -rwxr-xr-x root/root 113442816 2021-07-28 16:42 bin/kube-apiserver - -rwxr-xr-x root/root 44171264 2021-05-28 18:33 bin/coredns - -rwxr-xr-x root/root 43122688 2021-07-28 16:41 bin/kubectl - -rwxr-xr-x root/root 16711680 2021-06-16 05:52 bin/etcdctl - -rwxr-xr-x root/root 3570597 2021-02-05 23:43 bin/tuning - -rwxr-xr-x root/root 4397098 2021-02-05 23:43 bin/bridge - -rwxr-xr-x root/root 4612178 2021-02-05 23:43 bin/firewall - -rwxr-xr-x root/root 68921120 2019-07-18 02:27 bin/dockerd - -rwxr-xr-x root/root 2898746 2019-07-18 02:27 bin/docker-proxy - -rwxr-xr-x root/root 4186585 2021-02-05 23:43 bin/ipvlan - -rwxr-xr-x root/root 18446016 2019-07-18 02:27 bin/ctr - -rwxr-xr-x root/root 80752 2019-01-27 19:40 bin/conntrack - -rwxr-xr-x root/root 8037728 2019-07-18 02:27 bin/runc - drwxr-xr-x root/root 0 2021-07-29 10:39 file/ - -rw-r--r-- root/root 20713 2021-07-29 10:39 file/calico.yaml - -rw-r--r-- root/root 1004 2021-07-29 10:39 file/docker.service - drwxr-xr-x root/root 0 2021-07-29 11:02 image/ - -rw-r--r-- root/root 264783872 2021-07-29 11:02 image/images.tar - -rw-r--r-- root/root 1298 2021-07-29 11:05 packages_notes.md - ``` - - - -## Preparing the Configuration File - -Prepare the YAML configuration file used for deployment. You can run the following command to generate a configuration template and modify the generated template.yaml based on deployment requirements: - -```shell -$ eggo template -f template.yaml -``` - -You can also directly modify the default configurations using command lines. For example: - -```shell -$ eggo template -f template.yaml -n k8s-cluster -u username -p password --masters 192.168.0.1 --masters 192.168.0.2 --workers 192.168.0.3 --etcds 192.168.0.4 --loadbalancer 192.168.0.5 -``` - -## Installing the Kubernetes Cluster - -Install the Kubernetes cluster. In this example, template.yaml is the specified configuration file for deployment. 
- -```shell -$ eggo -d deploy -f template.yaml -``` - -After the installation is complete, verify whether each node in the cluster is successfully installed based on the command output. - -```shell -\------------------------------- -message: create cluster success -summary: -192.168.0.1 success -192.168.0.2 success -192.168.0.3 success -\------------------------------- -To start using cluster: cluster-example, you need following as a regular user: - -​ export KUBECONFIG=/etc/eggo/cluster-example/admin.conf -``` - -## Adding Nodes - -If the nodes in the cluster cannot meet service requirements, you can add nodes to the cluster to expand the capacity. - -- Using the command line to add a single node. The following is an example: - - ```shell - $ eggo -d join --id k8s-cluster --type master,worker --arch arm64 --port 22 192.168.0.5 - ``` - -- Using the configuration file to add multiple nodes: - - ```shell - $ eggo -d join --id k8s-cluster --file join.yaml - ``` - - Configure the nodes to be added in join.yaml. The following is an example: - - ```yaml - masters: # Configure the master node list. It is recommended that each master node is also set as a worker node. Otherwise, the master nodes may fail to directly access the pods. - - name: test0 # Name of the node, which is the node name displayed to the Kubernetes cluster. - ip: 192.168.0.2 #IP address of the node. - port: 22 # Port number for SSH login. - arch: arm64 # Architecture. Set this parameter to amd64 for x86_64. - - name: test1 - ip: 192.168.0.3 - port: 22 - arch: arm64 - workers: # Configure the worker node list. - - name: test0 # Name of the node, which is the node name displayed to the Kubernetes cluster. - ip: 192.168.0.4 #IP address of the node. - port: 22 # Port number for SSH login. - arch: arm64 # Architecture. Set this parameter to amd64 for x86_64. - - name: test2 - ip: 192.168.0.5 - port: 22 - arch: arm64 - ``` +# Deploying a Cluster + +This section describes how to deploy a Kubernetes cluster. 
+
+## Preparing the Environment
+
+The Kubernetes cluster automatic deployment tool provided by openEuler:
+
+- Supports Kubernetes cluster deployment on various common Linux distributions, such as openEuler, CentOS, and Ubuntu.
+- Supports hybrid deployment of different CPU architectures (such as AMD64 and ARM64).
+
+### Prerequisites
+
+The following requirements must be met to use the Kubernetes cluster automatic deployment tool:
+
+- You have root permission for cluster deployment.
+- The hostname has been configured for the hosts where Kubernetes is to be deployed. Ensure that the tar command is installed and can be used to decompress the tar.gz packages.
+- SSH has been configured on the hosts where Kubernetes is to be deployed for remote access. Ensure that the password-free sudo permission is provided when a common user logs in using SSH.
+
+## Preparing the Installation Packages
+
+For offline installation, prepare dependency packages (such as etcd software packages, container engine software packages, Kubernetes cluster component software packages, network software packages, CoreDNS software packages, and required container images) based on the cluster architecture.
+
+Assume that the network plugin is Calico and the architecture of all hosts in the cluster is ARM64. Prepare the installation packages as follows:
+
+1. Download the required software packages and calico.yaml.
+
+2. Export the container images.
+
+    ```shell
+    docker save -o images.tar calico/node:v3.19.1 calico/cni:v3.19.1 calico/kube-controllers:v3.19.1 calico/pod2daemon-flexvol:v3.19.1 k8s.gcr.io/pause:3.2
+    ```
+
+3. Store the downloaded installation packages, files, and images in the specified directory accordingly. For details about the storage format, see "Preparing the Environment."
For example: + + ```shell + $ tree package + package + ├── bin + │ ├── bandwidth + │ ├── bridge + │ ├── conntrack + │ ├── containerd + │ ├── containerd-shim + │ ├── coredns + │ ├── ctr + │ ├── dhcp + │ ├── docker + │ ├── dockerd + │ ├── docker-init + │ ├── docker-proxy + │ ├── etcd + │ ├── etcdctl + │ ├── firewall + │ ├── flannel + │ ├── host-device + │ ├── host-local + │ ├── ipvlan + │ ├── kube-apiserver + │ ├── kube-controller-manager + │ ├── kubectl + │ ├── kubelet + │ ├── kube-proxy + │ ├── kube-scheduler + │ ├── loopback + │ ├── macvlan + │ ├── portmap + │ ├── ptp + │ ├── runc + │ ├── sbr + │ ├── socat + │ ├── static + │ ├── tuning + │ ├── vlan + │ └── vrf + ├── file + │ ├── calico.yaml + │ └── docker.service + ├── image + │ └── images.tar + └── packages_notes.md + ``` + +4. Compile packages_notes.md and declare the software package sources for users to view. + + ```shell + 1. etcd + - etcd,etcdctl + - Architecture: ARM64 + - Version: 3.5.0 + - Address: https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-arm64.tar.gz + + 2. Docker Engine + - containerd,containerd-shim,ctr,docker,dockerd,docker-init,docker-proxy,runc + - Architecture: ARM64 + - Version: 19.03.0 + - Address: https://download.docker.com/linux/static/stable/aarch64/docker-19.03.0.tgz + + 3. Kubernetes + - kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubelet,kube-proxy + - Architecture: ARM64 + - Version: 1.21.3 + - Address: https://www.downloadkubernetes.com/ + + 4. network + - bandwidth,dhcp,flannel,host-local,loopback,portmap,sbr,tuning,vrf,bridge,firewall,host-device,ipvlan,macvlan,ptp,static,vlan + - Architecture: ARM64 + - Version: 0.9.1 + - Address: https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-arm64-v0.9.1.tgz + + 5. CoreDNS + - coredns + - Architecture: ARM64 + - Version: 1.8.4 + - Address: https://github.com/coredns/coredns/releases/download/v1.8.4/coredns_1.8.4_linux_arm64.tgz + + 6. 
images.tar + - calico/node:v3.19.1 calico/cni:v3.19.1 calico/kube-controllers:v3.19.1 calico/pod2daemon-flexvol:v3.19.1 k8s.gcr.io/pause:3.2 + - Architecture: ARM64 + - Version: N/A + - Address: N/A + 7. calico.yaml + - Architecture: NA + - Version: v3.19.1 + - Address: https://docs.projectcalico.org/manifests/calico.yaml + ``` + +5. Go to the package directory and pack the downloaded software packages into packages-arm64.tar.gz. + + ```shell + tar -zcf package-arm64.tar.gz * + ``` + +6. Check the compressed package to ensure that the packaging is successful. + + ```shell + $ tar -tvf package/packages-arm64.tar.gz + drwxr-xr-x root/root 0 2021-07-29 10:37 bin/ + -rwxr-xr-x root/root 3636214 2021-02-05 23:43 bin/sbr + -rwxr-xr-x root/root 40108032 2021-07-28 16:40 bin/kube-proxy + -rwxr-xr-x root/root 4186218 2021-02-05 23:43 bin/vlan + -rwxr-xr-x root/root 3076118 2021-02-05 23:43 bin/static + -rwxr-xr-x root/root 3496425 2021-02-05 23:43 bin/host-local + -rwxr-xr-x root/root 3847814 2021-02-05 23:43 bin/portmap + -rwxr-xr-x root/root 9681959 2021-02-05 23:43 bin/dhcp + -rwxr-xr-x root/root 4054640 2021-02-05 23:43 bin/host-device + -rwxr-xr-x root/root 43909120 2021-07-28 16:41 bin/kube-scheduler + -rwxr-xr-x root/root 32831616 2019-07-18 02:27 bin/containerd + -rwxr-xr-x root/root 3284795 2021-02-05 23:43 bin/flannel + -rwxr-xr-x root/root 21757952 2021-06-16 05:52 bin/etcd + -rwxr-xr-x root/root 546520 2019-07-18 02:27 bin/docker-init + -rwxr-xr-x root/root 5878304 2019-07-18 02:27 bin/containerd-shim + -rwxr-xr-x root/root 4191734 2021-02-05 23:43 bin/macvlan + -rwxr-xr-x root/root 55248437 2019-07-18 02:27 bin/docker + -rwxr-xr-x root/root 376208 2019-10-27 01:42 bin/socat + -rwxr-xr-x root/root 4053707 2021-02-05 23:43 bin/bandwidth + -rwxr-xr-x root/root 4328311 2021-02-05 23:43 bin/ptp + -rwxr-xr-x root/root 3633613 2021-02-05 23:43 bin/vrf + -rwxr-xr-x root/root 3432839 2021-02-05 23:43 bin/loopback + -rwxr-xr-x root/root 109617672 2021-07-28 16:42 
bin/kubelet + -rwxr-xr-x root/root 113442816 2021-07-28 16:42 bin/kube-apiserver + -rwxr-xr-x root/root 44171264 2021-05-28 18:33 bin/coredns + -rwxr-xr-x root/root 43122688 2021-07-28 16:41 bin/kubectl + -rwxr-xr-x root/root 16711680 2021-06-16 05:52 bin/etcdctl + -rwxr-xr-x root/root 3570597 2021-02-05 23:43 bin/tuning + -rwxr-xr-x root/root 4397098 2021-02-05 23:43 bin/bridge + -rwxr-xr-x root/root 4612178 2021-02-05 23:43 bin/firewall + -rwxr-xr-x root/root 68921120 2019-07-18 02:27 bin/dockerd + -rwxr-xr-x root/root 2898746 2019-07-18 02:27 bin/docker-proxy + -rwxr-xr-x root/root 4186585 2021-02-05 23:43 bin/ipvlan + -rwxr-xr-x root/root 18446016 2019-07-18 02:27 bin/ctr + -rwxr-xr-x root/root 80752 2019-01-27 19:40 bin/conntrack + -rwxr-xr-x root/root 8037728 2019-07-18 02:27 bin/runc + drwxr-xr-x root/root 0 2021-07-29 10:39 file/ + -rw-r--r-- root/root 20713 2021-07-29 10:39 file/calico.yaml + -rw-r--r-- root/root 1004 2021-07-29 10:39 file/docker.service + drwxr-xr-x root/root 0 2021-07-29 11:02 image/ + -rw-r--r-- root/root 264783872 2021-07-29 11:02 image/images.tar + -rw-r--r-- root/root 1298 2021-07-29 11:05 packages_notes.md + ``` + +## Preparing the Configuration File + +Prepare the YAML configuration file used for deployment. You can run the following command to generate a configuration template and modify the generated template.yaml based on deployment requirements: + +```shell +eggo template -f template.yaml +``` + +You can also directly modify the default configurations using command lines. For example: + +```shell +eggo template -f template.yaml -n k8s-cluster -u username -p password --masters 192.168.0.1 --masters 192.168.0.2 --workers 192.168.0.3 --etcds 192.168.0.4 --loadbalancer 192.168.0.5 +``` + +## Installing the Kubernetes Cluster + +Install the Kubernetes cluster. In this example, template.yaml is the specified configuration file for deployment. 
+ +```shell +eggo -d deploy -f template.yaml +``` + +After the installation is complete, verify whether each node in the cluster is successfully installed based on the command output. + +```shell +------------------------------- +message: create cluster success +summary: +192.168.0.1 success +192.168.0.2 success +192.168.0.3 success +------------------------------- +To start using cluster: cluster-example, you need following as a regular user: + + export KUBECONFIG=/etc/eggo/cluster-example/admin.conf +``` + +## Adding Nodes + +If the nodes in the cluster cannot meet service requirements, you can add nodes to the cluster to expand the capacity. + +- Add a single node using the command line. The following is an example: + + ```shell + eggo -d join --id k8s-cluster --type master,worker --arch arm64 --port 22 192.168.0.5 + ``` + +- Add multiple nodes using a configuration file: + + ```shell + eggo -d join --id k8s-cluster --file join.yaml + ``` + + Configure the nodes to be added in join.yaml. The following is an example: + + ```yaml + masters: # Configure the master node list. It is recommended that each master node is also set as a worker node. Otherwise, the master nodes may fail to directly access the pods. + - name: test0 # Name of the node, which is the node name displayed to the Kubernetes cluster. + ip: 192.168.0.2 # IP address of the node. + port: 22 # Port number for SSH login. + arch: arm64 # Architecture. Set this parameter to amd64 for x86_64. + - name: test1 + ip: 192.168.0.3 + port: 22 + arch: arm64 + workers: # Configure the worker node list. + - name: test0 # Name of the node, which is the node name displayed to the Kubernetes cluster. + ip: 192.168.0.4 # IP address of the node. + port: 22 # Port number for SSH login. + arch: arm64 # Architecture. Set this parameter to amd64 for x86_64.
+ - name: test2 + ip: 192.168.0.5 + port: 22 + arch: arm64 + ``` diff --git a/docs/en/docs/Kubernetes/eggo-dismantling-a-cluster.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-dismantling-a-cluster.md similarity index 98% rename from docs/en/docs/Kubernetes/eggo-dismantling-a-cluster.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-dismantling-a-cluster.md index f99f6b10593b52fa524466aa950200883e0102ac..252662f54aa8c4ce6a785ff47600e8dd270485f3 100644 --- a/docs/en/docs/Kubernetes/eggo-dismantling-a-cluster.md +++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-dismantling-a-cluster.md @@ -1,26 +1,26 @@ -# Dismantling a Cluster - -When service requirements decrease and the existing number of nodes is not required, you can delete nodes from the cluster to save system resources and reduce costs. Or, when the service does not require a cluster, you can delete the entire cluster. - -## Deleting Nodes - -You can use the command line to delete nodes from the cluster. For example, to delete all node types whose IP addresses are *192.168.0.5* and *192.168.0.6* from the k8s-cluster, run the following command: - -```shell -$ eggo -d delete --id k8s-cluster 192.168.0.5 192.168.0.6 -``` - -## Deleting the Entire Cluster - -> ![](./public_sys-resources/icon-note.gif)**NOTE:** -> -> - When a cluster is deleted, all data in the cluster is deleted and cannot be restored. Exercise caution when performing this operation. -> - Currently, dismantling a cluster does not delete the containers and the container images. However, if the Kubernetes cluster is configured to install a container engine during the deployment, the container engine will be deleted. As a result, the containers may run abnormally. -> - Some error information may be displayed when dismantling the cluster. Generally, this is caused by the error results returned during the delete operations. The cluster can still be properly dismantled. 
-> - -You can use the command line to delete the entire cluster. For example, run the following command to delete the k8s-cluster: - -```shell -$ eggo -d cleanup --id k8s-cluster -``` +# Dismantling a Cluster + +When service requirements decrease and the existing number of nodes is not required, you can delete nodes from the cluster to save system resources and reduce costs. Or, when the service does not require a cluster, you can delete the entire cluster. + +## Deleting Nodes + +You can use the command line to delete nodes from the cluster. For example, to delete all node types whose IP addresses are *192.168.0.5* and *192.168.0.6* from the k8s-cluster, run the following command: + +```shell +$ eggo -d delete --id k8s-cluster 192.168.0.5 192.168.0.6 +``` + +## Deleting the Entire Cluster + +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> +> - When a cluster is deleted, all data in the cluster is deleted and cannot be restored. Exercise caution when performing this operation. +> - Currently, dismantling a cluster does not delete the containers and the container images. However, if the Kubernetes cluster is configured to install a container engine during the deployment, the container engine will be deleted. As a result, the containers may run abnormally. +> - Some error information may be displayed when dismantling the cluster. Generally, this is caused by the error results returned during the delete operations. The cluster can still be properly dismantled. +> + +You can use the command line to delete the entire cluster. 
For example, run the following command to delete the k8s-cluster: + +```shell +$ eggo -d cleanup --id k8s-cluster +``` diff --git a/docs/en/docs/Kubernetes/eggo-tool-introduction.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-tool-introduction.md similarity index 96% rename from docs/en/docs/Kubernetes/eggo-tool-introduction.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-tool-introduction.md index 0b76260e9ffbb39b1d5e053e65d69d3a3cbe17ae..f640be1ce32265dbdffd8ca06c3c68ce1945421b 100644 --- a/docs/en/docs/Kubernetes/eggo-tool-introduction.md +++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/eggo-tool-introduction.md @@ -1,431 +1,429 @@ -# Tool Introduction - -This chapter describes the information related to the automatic deployment tool. You are advised to read this chapter before deployment. - -## Deployment Modes - -The automatic Kubernetes cluster deployment tool provided by openEuler supports one-click deployment using the CLI. The tool provides the following deployment modes: - -- Offline deployment: Prepare all required RPM packages, binary files, plugins, and container images on the local host, pack the packages into a tar.gz file in a specified format, and compile the corresponding YAML configuration file. Then, you can run commands to deploy the cluster in one-click. This deployment mode can be used when the VM cannot access the external network. -- Online deployment: Compile the YAML configuration file. The required RPM packages, binary files, plugins, and container images are automatically downloaded from the Internet during installation and deployment. In this mode, the VM must be able to access the software sources and the image repository on which the cluster depends, for example, Docker Hub. - -## Configurations - -When you use the automatic Kubernetes cluster deployment tool, use the YAML configuration file to describe the cluster deployment information. 
This section describes the configuration items and provides configuration examples. - -### Configuration Items - -- cluster-id: Cluster name, which must comply with the naming rules for the DNS names. Example: k8s-cluster - -- username: User name used to log in to the hosts using SSH where the Kubernetes cluster is to be deployed. The user name must be identical on all hosts. - -- private-key-path:The path of the key for password-free SSH login. You only need to configure either private-key-path or password. If both are configured, private-key-path is used preferentially. - -- masters: The master node list. It is recommended that each master node is also set as a worker node. Each master node contains the following sub-items. Each master node must be configured with a group of sub-items: - - name: The name of the master node, which is the node name displayed to the Kubernetes cluster. - - ip: The IP address of the master node. - - port: The port for SSH login of the node. The default value is 22. - - arch: CPU architecture of the master node. For example, the value for x86_64 CPUs is amd64. - -- workers: The list of the worker nodes. Each worker node contains the following sub-items. Each worker node must be configured with a group of sub-items: - - name: The name of the worker node, which is the node name displayed to the Kubernetes cluster. - - ip: The IP address of the master node. - - port: The port for SSH login of the node. The default value is 22. - - arch: CPU architecture of the worker node. For example, the value for x86_64 CPUs is amd64. - -- etcds: The list of etcd nodes. If this parameter is left empty, one etcd node is deployed for each master node. Otherwise, only the configured etcd node is deployed. Each etcd node contains the following sub-items. Each etcd node must be configured with a group of sub-items: - - name: The name of the etcd node, which is the node name displayed to the Kubernetes cluster. - - ip: The IP address of the etcd node. 
- - port: The port for SSH login. - - arch: CPU architecture of the etcd node. For example, the value for x86_64 CPUs is amd64. - -- loadbalance: The loadbalance node list. Each loadbalance node contains the following sub-items. Each loadbalance node must be configured with a group of sub-items: - - name: The name of the loadbalance node, which is the node name displayed to the Kubernetes cluster. - - ip: The IP address of the loadbalance node. - - port: The port for SSH login. - - arch: CPU architecture of the loadbalance node. For example, the value for x86_64 CPUs is amd64. - - bind-port: The listening port of the load balancing service. - -- external-ca: Whether to use an external CA certificate. If yes, set this parameter to true. Otherwise, set this parameter to false. - -- external-ca-path: The path of the external CA certificate file. This parameter takes affect only when external-ca is set to true. - -- service: service information created by Kubernetes. The service configuration item contains the following sub-items: - - cidr: The IP address segment of the service created by Kubernetes. - - dnsaddr: DNS address of the service created by Kubernetes - - gateway: The gateway address of the service created by Kubernetes. - - dns: The configuration item of the CoreDNS created by Kubernetes. The dns configuration item contains the following sub-items: - - corednstype: The deployment type of the CoreDNS created by Kubernetes. The value can be pod or binary. - - imageversion: The CoreDNS image version of the pod deployment type. - - replicas: The number of CoreDNS replicas of the pod deployment type. - -- network: The network configuration of the Kubernetes cluster. The network configuration item contains the following sub-items: - - podcidr: IP address segment of the Kubernetes cluster network. - - plugin: The network plugin deployed in the Kubernetes cluster - - plugin-args: The configuration file path of the network plugin of the Kubernetes cluster network. 
Example: {"NetworkYamlPath": "/etc/kubernetes/addons/calico.yaml"} - -- apiserver-endpoint: The IP address or domain name of the APIServer service that can be accessed by external systems. If loadbalance is configured, set this parameter to the IP address of the loadbalance node. Otherwise, set this parameter to the IP address of the first master node. - -- apiserver-cert-sans: The IP addresses and domain names that need to be configured in the APIServer certificate. This configuration item contains the following sub-items: - - dnsnames: The array list of the domain names that need to be configured in the APIServer certificate. - - ips: The array list of IP addresses that need to be configured in the APIServer certificate. - -- apiserver-timeout: APIServer response timeout interval. - -- etcd-token: The etcd cluster name. - -- dns-vip: The virtual IP address of the DNS. - -- dns-domain: The DNS domain name suffix. - -- pause-image: The complete image name of the pause container. - -- network-plugin: The type of the network plugin. This parameter can only be set to cni. If this item is not configured, the default Kubernetes network is used. - -- cni-bin-dir: network plugin address. Use commas (,) to separate multiple addresses. For example: /usr/libexec/cni,/opt/cni/bin. - -- runtime: The type of the container runtime. Currently, docker and iSulad are supported. - -- runtime-endpoint: The endpoint of the container runtime. This parameter is optional when runtime is set to docker. - -- registry-mirrors: The mirror site address of the image repository used for downloading container images. - -- insecure-registries: The address of the image repository used for downloading container images through HTTP. - -- config-extra-args: The extra parameters for starting services of each component (such as kube-apiserver and etcd). This configuration item contains the following sub-items: - - name: The component name. 
The value can be etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy or kubelet. - - - extra-args: The extended parameters of the component. The format is key: value. Note that the component parameter corresponding to key must be prefixed with a hyphen (-) or two hyphens (--). - - - open-ports: Configure the ports that need to be enabled additionally. The ports required by Kubernetes do not need to be configured. Other plugin ports need to be configured additionally. - - worker | master | etcd | loadbalance: The type of the node where the ports are enabled. Each configuration item contains one or more port and protocol sub-items. - - port: The port address. - - protocol: The port type. The value can be tcp or udp. - - - install: Configure the detailed information about the installation packages or binary files to be installed on each type of nodes. Note that the corresponding files must be packaged in a tar.gz installation package. The following describes the full configuration. Select the configuration items as needed. - - package-source: The detailed information about the installation package. - - type: The compression type of the installation package. Currently, only tar.gz installation packages are supported. - - dstpath: The path where the installation package is to be decompressed on the peer host. The path must be valid absolute path. - - srcpath: The path for storing the installation packages of different architectures. The architecture must correspond to the host architecture. The path must be a valid absolute path. - - arm64: The path of the installation package of the ARM64 architecture. This parameter is required if any ARM64 node is included in the configuration. - - amd64: The path of the installation package of the AMD64 architecture. This parameter is required if any x86_64 node is included in the configuration. 
- - > ![](./public_sys-resources/icon-note.gif)**NOTE**: - > - > - In the install configuration item, the sub-items of etcd, kubernetes-master, kubernetes-worker, network, loadbalance, container, image, and dns are the same, that is, name, type, dst, schedule, and TimeOut. dst, schedule, and TimeOut are optional. You can determine whether to configure them based on the files to be installed. The following uses the etcd and kubernetes-master nodes as an example. - - - etcd: The list of packages or binary files to be installed on etcd nodes. - - name: The names of the software packages or binary files to be installed. If the software package is an installation package, enter only the name and do not specify the version. During the installation, `$name*` is used for identification. Example: etcd. If there are multiple software packages, use commas (,) to separate them. - - type: The type of the configuration item. The value can be pkg, repo, bin, file, dir, image, yaml, or shell. If type is set to repo, configure the repo source on the corresponding node. - - dst: The path of the destination folder. This parameter is required when type is set to bin, file, or dir. It indicates the directory where a file or folder is stored. To prevent users from incorrectly configuring a path and deleting important files during cleanup, this parameter must be set to a path in the whitelist. For details, see "Whitelist Description." - - kubernetes-master: The list of packages or binary files to be installed on the Kubernetes master nodes. - - kubernetes-worker: The list of packages or binary files to be installed on the Kubernetes worker nodes. - - network: The list of packages or binary files to be installed for the network. - - loadbalance: The list of packages or binary files to be installed on the loadbalance nodes. - - container: The list of packages or binary files to be installed for the containers. - - image: The tar package of the container image. 
- - dns: Kubernetes CoreDNS installation package. If corednstype is set to pod, this parameter is not required. - - addition: The list of additional installation packages or binary files. - - master: The following configurations will be installed on all master nodes. - - name: The name of the software package or binary file to be installed. - - type: The type of the configuration item. The value can be pkg, repo, bin, file, dir, image, yaml, or shell. If type is set to repo, configure the repo source on the corresponding node. - - schedule: Valid only when type is set to shell. This parameter indicates when the user wants to execute the script. The value can be prejoin (before the node is added), postjoin (after the node is added), precleanup (before the node is removed), or postcleanup (after the node is removed). - - TimeOut: The script execution timeout interval. If the execution times out, the process is forcibly stopped. The default value is 30s. - - worker: The configurations will be installed on all worker nodes. The configuration format is the same as that of master under addition. - -### Whitelist Description - -The value of dst under install must match the whitelist rules. Set it to a path in the whitelist or a subdirectory of the path. The current whitelist is as follows: - -- /usr/bin -- /usr/local/bin -- /opt/cni/bin -- /usr/libexec/cni -- /etc/kubernetes -- /usr/lib/systemd/system -- /etc/systemd/system -- /tmp - -### Configuration Example - -The following is an example of the YAML file configuration. As shown in the example, nodes of different types can be deployed on a same host, but the configurations of these nodes must be the same. For example, a master node and a worker node are deployed on test0. 
- -```yaml -cluster-id: k8s-cluster -username: root -private-key-path: /root/.ssh/private.key -masters: -- name: test0 - ip: 192.168.0.1 - port: 22 - arch: arm64 -workers: -- name: test0 - ip: 192.168.0.1 - port: 22 - arch: arm64 -- name: test1 - ip: 192.168.0.3 - port: 22 - arch: arm64 -etcds: -- name: etcd-0 - ip: 192.168.0.4 - port: 22 - arch: amd64 -loadbalance: - name: k8s-loadbalance - ip: 192.168.0.5 - port: 22 - arch: amd64 - bind-port: 8443 -external-ca: false -external-ca-path: /opt/externalca -service: - cidr: 10.32.0.0/16 - dnsaddr: 10.32.0.10 - gateway: 10.32.0.1 - dns: - corednstype: pod - imageversion: 1.8.4 - replicas: 2 -network: - podcidr: 10.244.0.0/16 - plugin: calico - plugin-args: {"NetworkYamlPath": "/etc/kubernetes/addons/calico.yaml"} -apiserver-endpoint: 192.168.122.222:6443 -apiserver-cert-sans: - dnsnames: [] - ips: [] -apiserver-timeout: 120s -etcd-external: false -etcd-token: etcd-cluster -dns-vip: 10.32.0.10 -dns-domain: cluster.local -pause-image: k8s.gcr.io/pause:3.2 -network-plugin: cni -cni-bin-dir: /usr/libexec/cni,/opt/cni/bin -runtime: docker -runtime-endpoint: unix:///var/run/docker.sock -registry-mirrors: [] -insecure-registries: [] -config-extra-args: - - name: kubelet - extra-args: - "--cgroup-driver": systemd -open-ports: - worker: - - port: 111 - protocol: tcp - - port: 179 - protocol: tcp -install: - package-source: - type: tar.gz - dstpath: "" - srcpath: - arm64: /root/rpms/packages-arm64.tar.gz - amd64: /root/rpms/packages-x86.tar.gz - etcd: - - name: etcd - type: pkg - dst: "" - kubernetes-master: - - name: kubernetes-client,kubernetes-master - type: pkg - kubernetes-worker: - - name: docker-engine,kubernetes-client,kubernetes-node,kubernetes-kubelet - type: pkg - dst: "" - - name: conntrack-tools,socat - type: pkg - dst: "" - network: - - name: containernetworking-plugins - type: pkg - dst: "" - loadbalance: - - name: gd,gperftools-libs,libunwind,libwebp,libxslt - type: pkg - dst: "" - - name: 
nginx,nginx-all-modules,nginx-filesystem,nginx-mod-http-image-filter,nginx-mod-http-perl,nginx-mod-http-xslt-filter,nginx-mod-mail,nginx-mod-stream - type: pkg - dst: "" - container: - - name: emacs-filesystem,gflags,gpm-libs,re2,rsync,vim-filesystem,vim-common,vim-enhanced,zlib-devel - type: pkg - dst: "" - - name: libwebsockets,protobuf,protobuf-devel,grpc,libcgroup - type: pkg - dst: "" - - name: yajl,lxc,lxc-libs,lcr,clibcni,iSulad - type: pkg - dst: "" - image: - - name: pause.tar - type: image - dst: "" - dns: - - name: coredns - type: pkg - dst: "" - addition: - master: - - name: prejoin.sh - type: shell - schedule: "prejoin" - TimeOut: "30s" - - name: calico.yaml - type: yaml - dst: "" - worker: - - name: docker.service - type: file - dst: /usr/lib/systemd/system/ - - name: postjoin.sh - type: shell - schedule: "postjoin" -``` - -### Installation Package Structure - -For offline deployment, you need to prepare the Kubernetes software package and the related offline installation packages, and store the offline installation packages in a specific directory structure. The directory structure is as follows: - -```shell -package -├── bin -├── dir -├── file -├── image -├── pkg -└── packages_notes.md -``` - -The preceding directories are described as follows: - -- The directory structure of the offline deployment package corresponds to the package types in the cluster configuration file config. The package types include pkg, repo, bin, file, dir, image, yaml and shell. - -- The bin directory stores binary files, corresponding to the bin package type. - -- The dir directory stores the directory that needs to be copied to the target host. You need to configure the dst destination path, corresponding to the dir package type. - -- The file directory stores three types of files: file, yaml, and shell. The file type indicates the files to be copied to the target host, and requires the dst destination path to be configured. 
The yaml type indicates the user-defined YAML files, which will be applied after the cluster is deployed. The shell type indicates the scripts to be executed, and requires the schedule execution time to be configured. The execution time includes prejoin (before the node is added), postjoin (after the node is added), precleanup (before the node is removed), and postcleanup (after the node is removed). - -- The image directory stores the container images to be imported. The container images must be in a tar package format that is compatible with Docker (for example, images exported by Docker or isula-build). - -- The pkg directory stores the rpm/deb packages to be installed, corresponding to the pkg package type. You are advised to use binary files to facilitate cross-release deployment. - -### Command Reference - -To utilize the cluster deployment tool provided by openEuler, use the eggo command to deploy the cluster. - -#### Deploying the Kubernetes Cluster - -Run the following command to deploy a Kubernetes cluster using the specified YAML configuration: - -**eggo deploy** \[ **-d** ] **-f** *deploy.yaml* - -| Parameter| Mandatory (Yes/No)| Description | -| ------------- | -------- | --------------------------------- | -| --debug \| -d | No| Displays the debugging information.| -| --file \| -f | Yes| Specifies the path of the YAML file for the Kubernetes cluster deployment.| - -#### Adding a Single Node - -Run the following command to add a specified single node to the Kubernetes cluster: - -**eggo** **join** \[ **-d** ] **--id** *k8s-cluster* \[ **--type** *master,worker* ] **--arch** *arm64* **--port** *22* \[ **--name** *master1*] *IP* - -| Parameter| Mandatory (Yes/No) | Description| -| ------------- | -------- | ------------------------------------------------------------ | -| --debug \| -d | No| Displays the debugging information.| -| --id | Yes| Specifies the name of the Kubernetes cluster where the node is to be added.| -| --type \| -t | No| Specifies the 
type of the node to be added. The value can be master or worker. Use commas (,) to separate multiple types. The default value is worker.| -| --arch \| -a | Yes| Specifies the CPU architecture of the node to be added.| -| --port \| -p | Yes| Specifies the port number for SSH login of the node to be added.| -| --name \| -n | No| Specifies the name of the node to be added.| -| *IP* | Yes| Actual IP address of the node to be added.| - -#### Adding Multiple Nodes - -Run the following command to add specified multiple nodes to the Kubernetes cluster: - -**eggo** **join** \[ **-d** ] **--id** *k8s-cluster* **-f** *nodes.yaml* - -| Parameter| Mandatory (Yes/No) | Description | -| ------------- | -------- | -------------------------------- | -| --debug \| -d | No| Displays the debugging information.| -| --id | Yes| Specifies the name of the Kubernetes cluster where the nodes are to be added.| -| --file \| -f | Yes| Specifies the path of the YAML configuration file for adding the nodes.| - -#### Deleting Nodes - -Run the following command to delete one or more nodes from the Kubernetes cluster: - -**eggo delete** \[ **-d** ] **--id** *k8s-cluster* *node* \[*node...*] - -| Parameter| Mandatory (Yes/No) | Description | -| ------------- | -------- | -------------------------------------------- | -| --debug \| -d | No| Displays the debugging information.| -| --id | Yes| Specifies the name of the cluster where the one or more nodes to be deleted are located.| -| *node* | Yes| Specifies the IP addresses or names of the one or more nodes to be deleted.| - -#### Deleting the Cluster - -Run the following command to delete the entire Kubernetes cluster: - -**eggo cleanup** \[ **-d** ] **--id** *k8s-cluster* \[ **-f** *deploy.yaml* ] - -| Parameter| Mandatory (Yes/No) | Description| -| ------------- | -------- | ------------------------------------------------------------ | -| --debug \| -d | No| Displays the debugging information.| -| --id | Yes| Specifies the name of the Kubernetes 
cluster to be deleted.| -| --file \| -f | No| Specifies the path of the YAML file for the Kubernetes cluster deletion. If this parameter is not specified, the cluster configuration cached during cluster deployment is used by default. In normal cases, you are advised not to set this parameter. Set this parameter only when an exception occurs.| - -> ![](./public_sys-resources/icon-note.gif)**NOTE**: -> -> - The cluster configuration cached during cluster deployment is recommended when you delete the cluster. That is, you are advised not to set the --file | -f parameter in normal cases. Set this parameter only when the cache configuration is damaged or lost due to an exception. - -#### Querying the Cluster - -Run the following command to query all Kubernetes clusters deployed using eggo: - -**eggo list** \[ **-d** ] - -| Parameter| Mandatory (Yes/No) | Description | -| ------------- | -------- | ------------ | -| --debug \| -d | No| Displays the debugging information.| - -#### Generating the Cluster Configuration File - -Run the following command to quickly generate the required YAML configuration file for the Kubernetes cluster deployment. 
- -**eggo template** **-d** **-f** *template.yaml* **-n** *k8s-cluster* **-u** *username* **-p** *password* **--etcd** \[*192.168.0.1,192.168.0.2*] **--masters** \[*192.168.0.1,192.168.0.2*] **--workers** *192.168.0.3* **--loadbalance** *192.168.0.4* - -| Parameter| Mandatory (Yes/No) | Description | -| ------------------- | -------- | ------------------------------- | -| --debug \| -d | No| Displays the debugging information.| -| --file \| -f | No| Specifies the path of the generated YAML file.| -| --name \| -n | No| Specifies the name of the Kubernetes cluster.| -| --username \| -u | No| Specifies the user name for SSH login of the configured node.| -| --password \| -p | No| Specifies the password for SSH login of the configured node.| -| --etcd | No| Specifies the IP address list of the etcd nodes.| -| --masters | No| Specifies the IP address list of the master nodes.| -| --workers | No| Specifies the IP address list of the worker nodes.| -| --loadbalance \| -l | No| Specifies the IP address of the loadbalance node.| - -#### Querying the Help Information - -Run the following command to query the help information of the eggo command: - - **eggo help** - -#### Querying the Help Information of Subcommands - -Run the following command to query the help information of the eggo subcommands: - -**eggo deploy | join | delete | cleanup | list | template -h** - -| Parameter| Mandatory (Yes/No) | Description | -| ----------- | -------- | ------------ | -| --help\| -h | Yes| Displays the help information.| +# Tool Introduction + +This chapter describes the information related to the automatic deployment tool. You are advised to read this chapter before deployment. + +## Deployment Modes + +The automatic Kubernetes cluster deployment tool provided by openEuler supports one-click deployment using the CLI. 
The tool provides the following deployment modes: + +- Offline deployment: Prepare all required RPM packages, binary files, plugins, and container images on the local host, pack the packages into a tar.gz file in a specified format, and compile the corresponding YAML configuration file. Then, you can run commands to deploy the cluster in one click. This deployment mode can be used when the VM cannot access the external network. +- Online deployment: Compile the YAML configuration file. The required RPM packages, binary files, plugins, and container images are automatically downloaded from the Internet during installation and deployment. In this mode, the VM must be able to access the software sources and the image repository on which the cluster depends, for example, Docker Hub. + +## Configurations + +When you use the automatic Kubernetes cluster deployment tool, use the YAML configuration file to describe the cluster deployment information. This section describes the configuration items and provides configuration examples. + +### Configuration Items + +- cluster-id: Cluster name, which must comply with the naming rules for DNS names. Example: k8s-cluster + +- username: User name used to log in using SSH to the hosts where the Kubernetes cluster is to be deployed. The user name must be identical on all hosts. + +- private-key-path: The path of the key for password-free SSH login. You only need to configure either private-key-path or password. If both are configured, private-key-path is used preferentially. + +- masters: The master node list. It is recommended that each master node is also set as a worker node. Each master node must be configured with the following group of sub-items: + - name: The name of the master node, which is the node name displayed to the Kubernetes cluster. + - ip: The IP address of the master node. + - port: The port for SSH login of the node. The default value is 22.
  - arch: CPU architecture of the master node. For example, the value for x86_64 CPUs is amd64.

- workers: The worker node list. Each worker node must be configured with the following group of sub-items:
  - name: The name of the worker node, which is the node name displayed to the Kubernetes cluster.
  - ip: The IP address of the worker node.
  - port: The port for SSH login of the node. The default value is 22.
  - arch: CPU architecture of the worker node. For example, the value for x86_64 CPUs is amd64.

- etcds: The etcd node list. If this parameter is left empty, one etcd node is deployed for each master node. Otherwise, only the configured etcd nodes are deployed. Each etcd node must be configured with the following group of sub-items:
  - name: The name of the etcd node, which is the node name displayed to the Kubernetes cluster.
  - ip: The IP address of the etcd node.
  - port: The port for SSH login.
  - arch: CPU architecture of the etcd node. For example, the value for x86_64 CPUs is amd64.

- loadbalance: The loadbalance node list. Each loadbalance node must be configured with the following group of sub-items:
  - name: The name of the loadbalance node, which is the node name displayed to the Kubernetes cluster.
  - ip: The IP address of the loadbalance node.
  - port: The port for SSH login.
  - arch: CPU architecture of the loadbalance node. For example, the value for x86_64 CPUs is amd64.
  - bind-port: The listening port of the load balancing service.

- external-ca: Whether to use an external CA certificate. If yes, set this parameter to true. Otherwise, set this parameter to false.

- external-ca-path: The path of the external CA certificate file. This parameter takes effect only when external-ca is set to true.

- service: Service information created by Kubernetes.
The service configuration item contains the following sub-items:
  - cidr: The IP address segment of the service created by Kubernetes.
  - dnsaddr: The DNS address of the service created by Kubernetes.
  - gateway: The gateway address of the service created by Kubernetes.
  - dns: The configuration item of the CoreDNS created by Kubernetes. The dns configuration item contains the following sub-items:
    - corednstype: The deployment type of the CoreDNS created by Kubernetes. The value can be pod or binary.
    - imageversion: The CoreDNS image version of the pod deployment type.
    - replicas: The number of CoreDNS replicas of the pod deployment type.

- network: The network configuration of the Kubernetes cluster. The network configuration item contains the following sub-items:
  - podcidr: The IP address segment of the Kubernetes cluster network.
  - plugin: The network plugin deployed in the Kubernetes cluster.
  - plugin-args: The configuration file path of the network plugin of the Kubernetes cluster network. Example: {"NetworkYamlPath": "/etc/kubernetes/addons/calico.yaml"}

- apiserver-endpoint: The IP address or domain name of the APIServer service that can be accessed by external systems. If loadbalance is configured, set this parameter to the IP address of the loadbalance node. Otherwise, set this parameter to the IP address of the first master node.

- apiserver-cert-sans: The IP addresses and domain names that need to be configured in the APIServer certificate. This configuration item contains the following sub-items:
  - dnsnames: The array list of the domain names that need to be configured in the APIServer certificate.
  - ips: The array list of IP addresses that need to be configured in the APIServer certificate.

- apiserver-timeout: The APIServer response timeout interval.

- etcd-token: The etcd cluster name.

- dns-vip: The virtual IP address of the DNS.

- dns-domain: The DNS domain name suffix.
- pause-image: The complete image name of the pause container.

- network-plugin: The type of the network plugin. This parameter can only be set to cni. If this item is not configured, the default Kubernetes network is used.

- cni-bin-dir: The network plugin address. Use commas (,) to separate multiple addresses. For example: /usr/libexec/cni,/opt/cni/bin.

- runtime: The type of the container runtime. Currently, docker and iSulad are supported.

- runtime-endpoint: The endpoint of the container runtime. This parameter is optional when runtime is set to docker.

- registry-mirrors: The mirror site address of the image repository used for downloading container images.

- insecure-registries: The address of the image repository used for downloading container images through HTTP.

- config-extra-args: The extra parameters for starting services of each component (such as kube-apiserver and etcd). This configuration item contains the following sub-items:
  - name: The component name. The value can be etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, or kubelet.
  - extra-args: The extended parameters of the component. The format is key: value. Note that the component parameter corresponding to key must be prefixed with a hyphen (-) or two hyphens (--).

- open-ports: Configures the ports that need to be opened additionally. The ports required by Kubernetes do not need to be configured; only the ports of other plugins need to be configured additionally.
  - worker | master | etcd | loadbalance: The type of the node where the ports are opened. Each configuration item contains one or more port and protocol sub-items.
    - port: The port number.
    - protocol: The port type. The value can be tcp or udp.

- install: Configures the detailed information about the installation packages or binary files to be installed on each type of node. Note that the corresponding files must be packaged in a tar.gz installation package.
The following describes the full configuration. Select the configuration items as needed.
  - package-source: The detailed information about the installation package.
    - type: The compression type of the installation package. Currently, only tar.gz installation packages are supported.
    - dstpath: The path where the installation package is to be decompressed on the peer host. The path must be a valid absolute path.
    - srcpath: The path for storing the installation packages of different architectures. The architecture must correspond to the host architecture. The path must be a valid absolute path.
      - arm64: The path of the installation package of the ARM64 architecture. This parameter is required if any ARM64 node is included in the configuration.
      - amd64: The path of the installation package of the AMD64 architecture. This parameter is required if any x86_64 node is included in the configuration.

  > ![](./public_sys-resources/icon-note.gif)**NOTE**:
  >
  > - In the install configuration item, the sub-items of etcd, kubernetes-master, kubernetes-worker, network, loadbalance, container, image, and dns are the same, that is, name, type, dst, schedule, and TimeOut. dst, schedule, and TimeOut are optional. You can determine whether to configure them based on the files to be installed. The following uses the etcd and kubernetes-master nodes as an example.

  - etcd: The list of packages or binary files to be installed on etcd nodes.
    - name: The names of the software packages or binary files to be installed. If the software package is an installation package, enter only the name and do not specify the version. During the installation, `$name*` is used for identification. Example: etcd. If there are multiple software packages, use commas (,) to separate them.
    - type: The type of the configuration item. The value can be pkg, repo, bin, file, dir, image, yaml, or shell. If type is set to repo, configure the repo source on the corresponding node.
    - dst: The path of the destination folder. This parameter is required when type is set to bin, file, or dir. It indicates the directory where a file or folder is stored. To prevent users from incorrectly configuring a path and deleting important files during cleanup, this parameter must be set to a path in the whitelist. For details, see "Whitelist Description."
  - kubernetes-master: The list of packages or binary files to be installed on the Kubernetes master nodes.
  - kubernetes-worker: The list of packages or binary files to be installed on the Kubernetes worker nodes.
  - network: The list of packages or binary files to be installed for the network.
  - loadbalance: The list of packages or binary files to be installed on the loadbalance nodes.
  - container: The list of packages or binary files to be installed for the containers.
  - image: The tar package of the container image.
  - dns: The Kubernetes CoreDNS installation package. If corednstype is set to pod, this parameter is not required.
  - addition: The list of additional installation packages or binary files.
    - master: The following configurations will be installed on all master nodes.
      - name: The name of the software package or binary file to be installed.
      - type: The type of the configuration item. The value can be pkg, repo, bin, file, dir, image, yaml, or shell. If type is set to repo, configure the repo source on the corresponding node.
      - schedule: Valid only when type is set to shell. This parameter indicates when the user wants to execute the script. The value can be prejoin (before the node is added), postjoin (after the node is added), precleanup (before the node is removed), or postcleanup (after the node is removed).
      - TimeOut: The script execution timeout interval. If the execution times out, the process is forcibly stopped. The default value is 30s.
    - worker: The following configurations will be installed on all worker nodes.
The configuration format is the same as that of master under addition.

### Whitelist Description

The value of dst under install must match the whitelist rules. Set it to a path in the whitelist or a subdirectory of the path. The current whitelist is as follows:

- /usr/bin
- /usr/local/bin
- /opt/cni/bin
- /usr/libexec/cni
- /etc/kubernetes
- /usr/lib/systemd/system
- /etc/systemd/system
- /tmp

### Configuration Example

The following is an example of the YAML file configuration. As shown in the example, nodes of different types can be deployed on the same host, but the configurations of these nodes must be the same. For example, a master node and a worker node are deployed on test0.

```yaml
cluster-id: k8s-cluster
username: root
private-key-path: /root/.ssh/private.key
masters:
- name: test0
  ip: 192.168.0.1
  port: 22
  arch: arm64
workers:
- name: test0
  ip: 192.168.0.1
  port: 22
  arch: arm64
- name: test1
  ip: 192.168.0.3
  port: 22
  arch: arm64
etcds:
- name: etcd-0
  ip: 192.168.0.4
  port: 22
  arch: amd64
loadbalance:
  name: k8s-loadbalance
  ip: 192.168.0.5
  port: 22
  arch: amd64
  bind-port: 8443
external-ca: false
external-ca-path: /opt/externalca
service:
  cidr: 10.32.0.0/16
  dnsaddr: 10.32.0.10
  gateway: 10.32.0.1
  dns:
    corednstype: pod
    imageversion: 1.8.4
    replicas: 2
network:
  podcidr: 10.244.0.0/16
  plugin: calico
  plugin-args: {"NetworkYamlPath": "/etc/kubernetes/addons/calico.yaml"}
apiserver-endpoint: 192.168.122.222:6443
apiserver-cert-sans:
  dnsnames: []
  ips: []
apiserver-timeout: 120s
etcd-external: false
etcd-token: etcd-cluster
dns-vip: 10.32.0.10
dns-domain: cluster.local
pause-image: k8s.gcr.io/pause:3.2
network-plugin: cni
cni-bin-dir: /usr/libexec/cni,/opt/cni/bin
runtime: docker
runtime-endpoint: unix:///var/run/docker.sock
registry-mirrors: []
insecure-registries: []
config-extra-args:
  - name: kubelet
    extra-args:
      "--cgroup-driver": systemd
open-ports:
  worker:
  - port: 111
    protocol: tcp
  - port: 179
    protocol: tcp
install:
  package-source:
    type: tar.gz
    dstpath: ""
    srcpath:
      arm64: /root/rpms/packages-arm64.tar.gz
      amd64: /root/rpms/packages-x86.tar.gz
  etcd:
  - name: etcd
    type: pkg
    dst: ""
  kubernetes-master:
  - name: kubernetes-client,kubernetes-master
    type: pkg
  kubernetes-worker:
  - name: docker-engine,kubernetes-client,kubernetes-node,kubernetes-kubelet
    type: pkg
    dst: ""
  - name: conntrack-tools,socat
    type: pkg
    dst: ""
  network:
  - name: containernetworking-plugins
    type: pkg
    dst: ""
  loadbalance:
  - name: gd,gperftools-libs,libunwind,libwebp,libxslt
    type: pkg
    dst: ""
  - name: nginx,nginx-all-modules,nginx-filesystem,nginx-mod-http-image-filter,nginx-mod-http-perl,nginx-mod-http-xslt-filter,nginx-mod-mail,nginx-mod-stream
    type: pkg
    dst: ""
  container:
  - name: emacs-filesystem,gflags,gpm-libs,re2,rsync,vim-filesystem,vim-common,vim-enhanced,zlib-devel
    type: pkg
    dst: ""
  - name: libwebsockets,protobuf,protobuf-devel,grpc,libcgroup
    type: pkg
    dst: ""
  - name: yajl,lxc,lxc-libs,lcr,clibcni,iSulad
    type: pkg
    dst: ""
  image:
  - name: pause.tar
    type: image
    dst: ""
  dns:
  - name: coredns
    type: pkg
    dst: ""
  addition:
    master:
    - name: prejoin.sh
      type: shell
      schedule: "prejoin"
      TimeOut: "30s"
    - name: calico.yaml
      type: yaml
      dst: ""
    worker:
    - name: docker.service
      type: file
      dst: /usr/lib/systemd/system/
    - name: postjoin.sh
      type: shell
      schedule: "postjoin"
```

### Installation Package Structure

For offline deployment, you need to prepare the Kubernetes software package and the related offline installation packages, and store the offline installation packages in a specific directory structure.
The directory structure is as follows:

```shell
package
├── bin
├── dir
├── file
├── image
├── pkg
└── packages_notes.md
```

The preceding directories are described as follows:

- The directory structure of the offline deployment package corresponds to the package types in the cluster configuration file. The package types include pkg, repo, bin, file, dir, image, yaml, and shell.

- The bin directory stores binary files, corresponding to the bin package type.

- The dir directory stores the directory that needs to be copied to the target host. You need to configure the dst destination path, corresponding to the dir package type.

- The file directory stores three types of files: file, yaml, and shell. The file type indicates the files to be copied to the target host, and requires the dst destination path to be configured. The yaml type indicates the user-defined YAML files, which will be applied after the cluster is deployed. The shell type indicates the scripts to be executed, and requires the schedule execution time to be configured. The execution time includes prejoin (before the node is added), postjoin (after the node is added), precleanup (before the node is removed), and postcleanup (after the node is removed).

- The image directory stores the container images to be imported. The container images must be in a tar package format that is compatible with Docker (for example, images exported by Docker or isula-build).

- The pkg directory stores the rpm/deb packages to be installed, corresponding to the pkg package type. You are advised to use binary files to facilitate cross-release deployment.

### Command Reference

To use the cluster deployment tool provided by openEuler, run the eggo command to deploy the cluster.
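As a concrete illustration of the layout above, the offline package tree can be assembled and packed into the tar.gz referenced by install.package-source.srcpath roughly as follows. This is a minimal sketch: the artifact names and the packages-arm64.tar.gz output path are illustrative, not mandated by eggo.

```shell
# Sketch: build the offline package tree described above, then pack it.
# All file names below are illustrative placeholders.
mkdir -p package/bin package/dir package/file package/image package/pkg
touch package/packages_notes.md
# Place artifacts in the matching subdirectories, for example:
#   cp etcd-*.rpm package/pkg/      (pkg type)
#   cp pause.tar  package/image/    (image type)
tar -czf packages-arm64.tar.gz package
# Verify the archive layout before pointing srcpath at it.
tar -tzf packages-arm64.tar.gz
```

The archive root must be the package directory itself so that the documented subdirectories appear at the top level after decompression.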
+ +#### Deploying the Kubernetes Cluster + +Run the following command to deploy a Kubernetes cluster using the specified YAML configuration: + +**eggo deploy** \[ **-d** ] **-f** *deploy.yaml* + +| Parameter| Mandatory (Yes/No)| Description | +| ------------- | -------- | --------------------------------- | +| --debug \| -d | No| Displays the debugging information.| +| --file \| -f | Yes| Specifies the path of the YAML file for the Kubernetes cluster deployment.| + +#### Adding a Single Node + +Run the following command to add a specified single node to the Kubernetes cluster: + +**eggo** **join** \[ **-d** ] **--id** *k8s-cluster* \[ **--type** *master,worker* ] **--arch** *arm64* **--port** *22* \[ **--name** *master1*] *IP* + +| Parameter| Mandatory (Yes/No) | Description| +| ------------- | -------- | ------------------------------------------------------------ | +| --debug \| -d | No| Displays the debugging information.| +| --id | Yes| Specifies the name of the Kubernetes cluster where the node is to be added.| +| --type \| -t | No| Specifies the type of the node to be added. The value can be master or worker. Use commas (,) to separate multiple types. 
The default value is worker.| +| --arch \| -a | Yes| Specifies the CPU architecture of the node to be added.| +| --port \| -p | Yes| Specifies the port number for SSH login of the node to be added.| +| --name \| -n | No| Specifies the name of the node to be added.| +| *IP* | Yes| Actual IP address of the node to be added.| + +#### Adding Multiple Nodes + +Run the following command to add specified multiple nodes to the Kubernetes cluster: + +**eggo** **join** \[ **-d** ] **--id** *k8s-cluster* **-f** *nodes.yaml* + +| Parameter| Mandatory (Yes/No) | Description | +| ------------- | -------- | -------------------------------- | +| --debug \| -d | No| Displays the debugging information.| +| --id | Yes| Specifies the name of the Kubernetes cluster where the nodes are to be added.| +| --file \| -f | Yes| Specifies the path of the YAML configuration file for adding the nodes.| + +#### Deleting Nodes + +Run the following command to delete one or more nodes from the Kubernetes cluster: + +**eggo delete** \[ **-d** ] **--id** *k8s-cluster* *node* \[*node...*] + +| Parameter| Mandatory (Yes/No) | Description | +| ------------- | -------- | -------------------------------------------- | +| --debug \| -d | No| Displays the debugging information.| +| --id | Yes| Specifies the name of the cluster where the one or more nodes to be deleted are located.| +| *node* | Yes| Specifies the IP addresses or names of the one or more nodes to be deleted.| + +#### Deleting the Cluster + +Run the following command to delete the entire Kubernetes cluster: + +**eggo cleanup** \[ **-d** ] **--id** *k8s-cluster* \[ **-f** *deploy.yaml* ] + +| Parameter| Mandatory (Yes/No) | Description| +| ------------- | -------- | ------------------------------------------------------------ | +| --debug \| -d | No| Displays the debugging information.| +| --id | Yes| Specifies the name of the Kubernetes cluster to be deleted.| +| --file \| -f | No| Specifies the path of the YAML file for the Kubernetes cluster 
deletion. If this parameter is not specified, the cluster configuration cached during cluster deployment is used by default. In normal cases, you are advised not to set this parameter. Set this parameter only when an exception occurs.| + +> ![](./public_sys-resources/icon-note.gif)**NOTE**: +> +> - The cluster configuration cached during cluster deployment is recommended when you delete the cluster. That is, you are advised not to set the --file | -f parameter in normal cases. Set this parameter only when the cache configuration is damaged or lost due to an exception. + +#### Querying the Cluster + +Run the following command to query all Kubernetes clusters deployed using eggo: + +**eggo list** \[ **-d** ] + +| Parameter| Mandatory (Yes/No) | Description | +| ------------- | -------- | ------------ | +| --debug \| -d | No| Displays the debugging information.| + +#### Generating the Cluster Configuration File + +Run the following command to quickly generate the required YAML configuration file for the Kubernetes cluster deployment. 
+ +**eggo template** **-d** **-f** *template.yaml* **-n** *k8s-cluster* **-u** *username* **-p** *password* **--etcd** \[*192.168.0.1,192.168.0.2*] **--masters** \[*192.168.0.1,192.168.0.2*] **--workers** *192.168.0.3* **--loadbalance** *192.168.0.4* + +| Parameter| Mandatory (Yes/No) | Description | +| ------------------- | -------- | ------------------------------- | +| --debug \| -d | No| Displays the debugging information.| +| --file \| -f | No| Specifies the path of the generated YAML file.| +| --name \| -n | No| Specifies the name of the Kubernetes cluster.| +| --username \| -u | No| Specifies the user name for SSH login of the configured node.| +| --password \| -p | No| Specifies the password for SSH login of the configured node.| +| --etcd | No| Specifies the IP address list of the etcd nodes.| +| --masters | No| Specifies the IP address list of the master nodes.| +| --workers | No| Specifies the IP address list of the worker nodes.| +| --loadbalance \| -l | No| Specifies the IP address of the loadbalance node.| + +#### Querying the Help Information + +Run the following command to query the help information of the eggo command: + + **eggo help** + +#### Querying the Help Information of Subcommands + +Run the following command to query the help information of the eggo subcommands: + +**eggo deploy | join | delete | cleanup | list | template -h** + +| Parameter| Mandatory (Yes/No) | Description | +| ----------- | -------- | ------------ | +| --help\| -h | Yes| Displays the help information.| diff --git a/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/advertiseAddress.png b/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/advertiseAddress.png new file mode 100644 index 0000000000000000000000000000000000000000..b36e5c4664f2d2e5faaa23128fd4711c11e30179 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/advertiseAddress.png differ diff --git a/docs/en/docs/Kubernetes/figures/arch.png 
b/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/arch.png similarity index 100% rename from docs/en/docs/Kubernetes/figures/arch.png rename to docs/en/Cloud/ClusterDeployment/Kubernetes/figures/arch.png diff --git a/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/flannelConfig.png b/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/flannelConfig.png new file mode 100644 index 0000000000000000000000000000000000000000..dc9e7c665edd02fad16d3e6f4970e3125efcbef8 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/flannelConfig.png differ diff --git a/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/name.png b/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/name.png new file mode 100644 index 0000000000000000000000000000000000000000..dd6ddfdc3476780e8c896bfd5095025507f62fa8 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/name.png differ diff --git a/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/podSubnet.png b/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/podSubnet.png new file mode 100644 index 0000000000000000000000000000000000000000..b368f77dd7dfd7722dcf7751b3e37ec28755e42d Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/Kubernetes/figures/podSubnet.png differ diff --git a/docs/en/docs/Kubernetes/installing-etcd.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/installing-etcd.md similarity index 98% rename from docs/en/docs/Kubernetes/installing-etcd.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/installing-etcd.md index 9bd37d031107a5b3c3f880db5a90bf5783b2935a..f45e22d16464f59f32c4dec443be4dd5b1d02234 100644 --- a/docs/en/docs/Kubernetes/installing-etcd.md +++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/installing-etcd.md @@ -1,9 +1,9 @@ # Installing etcd - ## Preparing the Environment Run the following command to enable the port used by etcd: + ```bash firewall-cmd --zone=public --add-port=2379/tcp firewall-cmd --zone=public --add-port=2380/tcp @@ -13,7 +13,7 @@ 
firewall-cmd --zone=public --add-port=2380/tcp Currently, the RPM package is used for installation. -``` +```bash rpm -ivh etcd*.rpm ``` @@ -57,7 +57,7 @@ LimitNOFILE=65536 WantedBy=multi-user.target ``` -**注意:** +**Note:** - The boot setting `ETCD_UNSUPPORTED_ARCH=arm64` needs to be added to ARM64; - In this document, etcd and Kubernetes control are deployed on the same machine. Therefore, the `kubernetes.pem` and `kubernetes-key.pem` certificates are used to start etcd and Kubernetes control. @@ -68,8 +68,8 @@ WantedBy=multi-user.target Start the etcd service. ```bash -$ systemctl enable etcd -$ systemctl start etcd +systemctl enable etcd +systemctl start etcd ``` Then, deploy other hosts in sequence. @@ -86,4 +86,3 @@ $ ETCDCTL_API=3 etcdctl -w table endpoint status --endpoints=https://192.168.12 | https://192.168.122.154:2379 | f93b3808e944c379 | 3.4.14 | 328 kB | false | false | 819 | 21 | 21 | | +------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ ``` - diff --git a/docs/en/docs/Kubernetes/installing-the-Kubernetes-software-package.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/installing-the-kubernetes-software-package.md similarity index 67% rename from docs/en/docs/Kubernetes/installing-the-Kubernetes-software-package.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/installing-the-kubernetes-software-package.md index e88f1adec2524cbf79e5556ce63bee85e5d1fa7f..3fbed3d94232fd0741ecd4f0957e20948ca190af 100644 --- a/docs/en/docs/Kubernetes/installing-the-Kubernetes-software-package.md +++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/installing-the-kubernetes-software-package.md @@ -1,14 +1,11 @@ # Installing the Kubernetes Software Package - ```bash -$ dnf install -y docker conntrack-tools socat +dnf install -y docker conntrack-tools socat ``` After the EPOL source is configured, you can directly install Kubernetes through DNF. 
```bash -$ rpm -ivh kubernetes*.rpm +rpm -ivh kubernetes*.rpm ``` - - diff --git a/docs/en/Cloud/ClusterDeployment/Kubernetes/kubernetes-cluster-deployment-guide1.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/kubernetes-cluster-deployment-guide1.md new file mode 100644 index 0000000000000000000000000000000000000000..b4a4f521007057ed4c562a4e9c2582642f4ffb71 --- /dev/null +++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/kubernetes-cluster-deployment-guide1.md @@ -0,0 +1,308 @@ +# Kubernetes Cluster Deployment Guide Based on containerd + +Starting from version 1.21, Kubernetes no longer supports the Kubernetes+Docker setup for cluster deployment. This guide demonstrates how to quickly set up a Kubernetes cluster using containerd as the container runtime. For custom cluster configurations, consult the [official documentation](https://kubernetes.io/docs/home/). + +## Software Package Installation + +### 1. Installing Required Packages + +```sh +yum install -y containerd +yum install -y kubernetes* +yum install -y cri-tools +``` + +> ![](./public_sys-resources/icon-note.gif)**Note** +> +> - If Docker is already installed on the system, uninstall it before installing containerd to prevent conflicts. + +The required containerd version is 1.6.22-15 or higher. If the installed version is not supported, upgrade to version 1.6.22-15 using the following commands, or perform a manual upgrade. + +```sh +wget --no-check-certificate https://repo.openeuler.org/openEuler-24.03-LTS/update/x86_64/Packages/containerd-1.6.22-15.oe2403.x86_64.rpm +rpm -Uvh containerd-1.6.22-15.oe2403.x86_64.rpm +``` + +The package versions downloaded via `yum` in this guide are: + +```text +1. containerd + - Architecture: x86_64 + - Version: 1.6.22-15 +2. kubernetes - client/help/kubeadm/kubelet/master/node + - Architecture: x86_64 + - Version: 1.29.1-4 +3. cri-tools + - Architecture: X86_64 + - Version: 1.29.0-3 +``` + +### 2. 
Downloading CNI Components + +```sh +mkdir -p /opt/cni/bin +cd /opt/cni/bin +wget --no-check-certificate https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz +tar -xzvf ./cni-plugins-linux-amd64-v1.5.1.tgz -C . +``` + +> ![](./public_sys-resources/icon-note.gif)**Note** +> +> - The provided download link is for the AMD64 architecture. Choose the appropriate version based on your system architecture. Other versions are available in the [GitHub repository](https://github.com/containernetworking/plugins/releases/). + +### 3. Downloading CNI Plugin (Flannel) + +```sh +wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml --no-check-certificate +``` + +## Environment Configuration + +This section configures the OS environment required for Kubernetes. + +### 1. Setting the Host Name + +```sh +hostnamectl set-hostname nodeName +``` + +### 2. Configuring the Firewall + +**Method 1:** + +Configure firewall rules to open ports for etcd and the API Server, ensuring proper communication between the control plane and worker nodes. + +Open ports for etcd: + +```sh +firewall-cmd --zone=public --add-port=2379/tcp --permanent +firewall-cmd --zone=public --add-port=2380/tcp --permanent +``` + +Open ports for the API Server: + +```sh +firewall-cmd --zone=public --add-port=6443/tcp --permanent +``` + +Apply the firewall rules: + +```sh +firewall-cmd --reload +``` + +> ![](./public_sys-resources/icon-note.gif)**Note** +> +> - Firewall configuration may prevent certain container images from functioning properly. To ensure smooth operation, open the necessary ports based on the images being used. + +**Method 2:** + +Disable the firewall using the following commands: + +```sh +systemctl stop firewalld +systemctl disable firewalld +``` + +### 3. 
Disabling SELinux + +SELinux security policies may block certain operations within containers, such as writing to specific directories, accessing network resources, or executing privileged operations. This can cause critical services like CoreDNS to fail, resulting in `CrashLoopBackOff` or `Error` states. Disable SELinux using the following commands: + +```sh +setenforce 0 +sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config +``` + +### 4. Disabling Swap + +The Kubernetes scheduler allocates pods to nodes based on available memory and CPU resources. If swap is enabled on a node, the actual physical memory and logically available memory may not align, which can affect the scheduler decisions, leading to node overloading or incorrect scheduling. Therefore, disable swap: + +```sh +swapoff -a +sed -ri 's/.*swap.*/#&/' /etc/fstab +``` + +### 5. Configuring the Network + +Enable IPv6 and IPv4 traffic filtering on bridged networks using iptables, and enable IP forwarding to ensure inter-pod communication across nodes: + +```sh +$ cat > /etc/sysctl.d/k8s.conf << EOF +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +net.ipv4.ip_forward = 1 +vm.swappiness=0 +EOF +$ modprobe br_netfilter +$ sysctl -p /etc/sysctl.d/k8s.conf +``` + +## Configuring containerd + +This section configures containerd, including setting the pause image, cgroup driver, disabling certificate verification for the `registry.k8s.io` image repository, and configuring a proxy. 
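For orientation, the sed edits described next ultimately produce a config.toml containing keys along these lines. This is a hedged sketch for containerd 1.6; the exact pause image tag comes from `kubeadm config images list` and is illustrative here.

```toml
# Relevant fragment of /etc/containerd/config.toml after the edits
# (containerd 1.6 layout; the pause image tag is illustrative).
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.k8s.io".tls]
  insecure_skip_verify = true
```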
+ +First, generate the default configuration file for containerd and output it to the file specified by `containerd_conf`: + +```sh +containerd_conf="/etc/containerd/config.toml" +mkdir -p /etc/containerd +containerd config default > "${containerd_conf}" +``` + +Configure the pause image: + +```sh +pause_img=$(kubeadm config images list | grep pause | tail -1) +sed -i "/sandbox_image/s#\".*\"#\"${pause_img}\"#" "${containerd_conf}" +``` + +Set the cgroup driver to systemd: + +```sh +sed -i "/SystemdCgroup/s/=.*/= true/" "${containerd_conf}" +``` + +Disable certificate verification for the `registry.k8s.io` image repository: + +```sh +sed -i '/plugins."io.containerd.grpc.v1.cri".registry.configs/a\[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.k8s.io".tls]\n insecure_skip_verify = true' /etc/containerd/config.toml +``` + +Configure the proxy (replace "***" in `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` with your proxy information): + +```sh +$ server_path="/etc/systemd/system/containerd.service.d" +$ mkdir -p "${server_path}" +$ cat > "${server_path}"/http-proxy.conf << EOF +[Service] +Environment="HTTP_PROXY=***" +Environment="HTTPS_PROXY=***" +Environment="NO_PROXY=***" +EOF +``` + +Restart containerd to apply the configurations: + +```sh +systemctl daemon-reload +systemctl restart containerd +``` + +## Configuring crictl to Use containerd as the Container Runtime + +```sh +crictl config runtime-endpoint unix:///run/containerd/containerd.sock +crictl config image-endpoint unix:///run/containerd/containerd.sock +``` + +## Configuring kubelet to Use systemd as the Cgroup Driver + +```sh +systemctl enable kubelet.service +echo 'KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"' >> /etc/sysconfig/kubelet +systemctl restart kubelet +``` + +## Creating a Cluster Using Kubeadm (Control Plane Only) + +### 1. 
Configuring Cluster Information + +```sh +kubeadm config print init-defaults --component-configs KubeletConfiguration >> kubeletConfig.yaml +vim kubeletConfig.yaml +``` + +In the **kubeletConfig.yaml** file, configure the node name, advertise address (`advertiseAddress`), and the CIDR for the Pod network. + +**Modify `name` to match the hostname, consistent with the first step in the environment configuration:** + +![](./figures/name.png) + +**Change `advertiseAddress` to the IP address of the control plane:** + +![](./figures/advertiseAddress.png) + +**Add `podSubnet` under `Networking` to specify the CIDR range:** + +![](./figures/podSubnet.png) + +### 2. Deploying the Cluster + +Use `kubeadm` to deploy the cluster. Many configurations are generated by default (such as authentication certificates). Refer to the [official documentation](https://kubernetes.io/docs/home/) for modifications. + +**Disable the proxy (if applicable):** + +```sh +unset http_proxy https_proxy +``` + +Deploy the cluster using `kubeadm init`: + +```sh +kubeadm init --config kubeletConfig.yaml +``` + +Specify the configuration file for `kubectl`: + +```sh +mkdir -p "$HOME"/.kube +cp -i /etc/kubernetes/admin.conf "$HOME"/.kube/config +chown "$(id -u)":"$(id -g)" "$HOME"/.kube/config +export KUBECONFIG=/etc/kubernetes/admin.conf +``` + +### 3. Deploying the CNI Plugin (Flannel) + +This tutorial uses Flannel as the CNI plugin. Below are the steps to download and deploy Flannel. + +The Flannel used here is downloaded from the `registry-1.docker.io` image repository. To avoid certificate verification issues, configure the image repository to skip certificate verification in the containerd configuration file (**/etc/containerd/config.toml**). + +![](./figures/flannelConfig.png) + +Use `kubectl apply` to deploy the **kube-flannel.yml** file downloaded during the software package installation. 
+
+```sh
+kubectl apply -f kube-flannel.yml
+```
+
+> ![](./public_sys-resources/icon-note.gif)**Note**
+>
+> The control plane may have taint issues, causing the node status in `kubectl get nodes` to remain "not ready." Refer to the [official documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) to remove taints.
+
+## Joining the Cluster (Worker Nodes Only)
+
+**Disable the proxy (if applicable):**
+
+```sh
+unset http_proxy https_proxy
+```
+
+After installing and configuring the environment on worker nodes, join the cluster using the following command:
+
+```sh
+kubeadm join <control-plane-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
+```
+
+This command is generated after `kubeadm init` completes on the control plane. Alternatively, you can generate it on the control plane using the following commands:
+
+```sh
+$ kubeadm token create # Generate the token.
+$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
+    openssl dgst -sha256 -hex | sed 's/^.* //' # Get the hash.
+```
+
+After joining, check the status of worker nodes on the control plane using:
+
+```sh
+kubectl get nodes
+```
+
+If a node status shows "not ready," the Flannel plugin deployment may have failed. In this case, run the locally generated Flannel executable to complete the deployment.
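The `sha256:` value accepted by the join command is simply the SHA-256 digest of the cluster CA public key in DER form. The pipeline can be tried against any certificate; the sketch below generates a throwaway self-signed certificate (the /tmp paths and CN are illustrative, not part of a real cluster) to show the shape of the result:

```shell
# Generate a throwaway self-signed certificate as a stand-in for
# /etc/kubernetes/pki/ca.crt (demo only; paths are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null
# Same pipeline as above: public key -> DER encoding -> SHA-256 hex digest.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
```

The printed digest is 64 lowercase hex characters; against the real **ca.crt** it matches the hash that `kubeadm token create --print-join-command` emits.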
+
+**Running kubectl Commands on Worker Nodes (Optional):**
+
+To run `kubectl` commands on a worker node, copy the control plane configuration file **/etc/kubernetes/admin.conf** to the same path on the worker node, then point `kubectl` to it:
+
+```sh
+export KUBECONFIG=/etc/kubernetes/admin.conf
+```
diff --git a/docs/en/Cloud/ClusterDeployment/Kubernetes/kubernetes-common-issues-and-solutions.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/kubernetes-common-issues-and-solutions.md
new file mode 100644
index 0000000000000000000000000000000000000000..30708ab50f270ac09a41ddf5b7106d2e7c8e3a63
--- /dev/null
+++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/kubernetes-common-issues-and-solutions.md
@@ -0,0 +1,13 @@
+# Common Issues and Solutions
+
+## Issue 1: Kubernetes + Docker Deployment Failure
+
+Reason: Kubernetes deprecated dockershim, its built-in Docker Engine integration, in version 1.20 and removed it in version 1.24, so newer clusters cannot use Docker Engine as the container runtime directly.
+
+Solution: Use cri-dockerd + Docker for cluster deployment, or consider alternatives such as containerd or iSulad.
+
+## Issue 2: Unable to Install Kubernetes RPM Packages via yum on openEuler
+
+Reason: Installing Kubernetes-related RPM packages requires proper configuration of the EPOL repository in yum.
+
+Solution: Follow the repository configuration guide provided in [this link](https://forum.openeuler.org/t/topic/768) to set up the EPOL repository in your environment.
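For Issue 2, the fix comes down to a repository definition. As a sketch, an EPOL entry in **/etc/yum.repos.d/openEuler.repo** has the following shape — the `baseurl` and `gpgkey` values below are assumptions for illustration; take the exact URLs for your openEuler release from the linked guide:

```ini
[EPOL]
name=EPOL
baseurl=https://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://repo.openeuler.org/openEuler-22.03-LTS/OS/x86_64/RPM-GPG-KEY-openEuler
```

After adding the entry, run `yum clean all && yum makecache` to refresh the metadata before retrying the installation.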
diff --git a/docs/en/docs/Kubernetes/Kubernetes.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/kubernetes.md similarity index 100% rename from docs/en/docs/Kubernetes/Kubernetes.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/kubernetes.md diff --git a/docs/en/docs/Kubernetes/preparing-certificates.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/preparing-certificates.md similarity index 99% rename from docs/en/docs/Kubernetes/preparing-certificates.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/preparing-certificates.md index afe150e05387fb94dd98c089e96655e3b74f56d5..997e10e184894aa05f1d9e091e5c1e1337f70985 100644 --- a/docs/en/docs/Kubernetes/preparing-certificates.md +++ b/docs/en/Cloud/ClusterDeployment/Kubernetes/preparing-certificates.md @@ -1,4 +1,3 @@ - # Preparing Certificates **Statement: The certificate used in this document is self-signed and cannot be used in a commercial environment.** diff --git a/docs/en/docs/Kubernetes/preparing-VMs.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/preparing-vms.md similarity index 100% rename from docs/en/docs/Kubernetes/preparing-VMs.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/preparing-vms.md diff --git a/docs/en/docs/A-Tune/public_sys-resources/icon-note.gif b/docs/en/Cloud/ClusterDeployment/Kubernetes/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/A-Tune/public_sys-resources/icon-note.gif rename to docs/en/Cloud/ClusterDeployment/Kubernetes/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/Kubernetes/running-the-test-pod.md b/docs/en/Cloud/ClusterDeployment/Kubernetes/running-the-test-pod.md similarity index 95% rename from docs/en/docs/Kubernetes/running-the-test-pod.md rename to docs/en/Cloud/ClusterDeployment/Kubernetes/running-the-test-pod.md index 036ff51eb510dea02f364560d999bfca68bf2b04..4ead1ca6fae8a709fb41adf958762076dd034602 100644 --- a/docs/en/docs/Kubernetes/running-the-test-pod.md +++ 
b/docs/en/Cloud/ClusterDeployment/Kubernetes/running-the-test-pod.md @@ -1,42 +1,42 @@ -# Running the Test Pod - -## Configuration File - -```bash -$ cat nginx.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: nginx-deployment - labels: - app: nginx -spec: - replicas: 3 - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - spec: - containers: - - name: nginx - image: nginx:1.14.2 - ports: - - containerPort: 80 -``` - -## Starting the Pod - -Run the kubectl command to run Nginx. - -```bash -$ kubectl apply -f nginx.yaml -deployment.apps/nginx-deployment created -$ kubectl get pods -NAME READY STATUS RESTARTS AGE -nginx-deployment-66b6c48dd5-6rnwz 1/1 Running 0 33s -nginx-deployment-66b6c48dd5-9pq49 1/1 Running 0 33s -nginx-deployment-66b6c48dd5-lvmng 1/1 Running 0 34s -``` +# Running the Test Pod + +## Configuration File + +```bash +$ cat nginx.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 +``` + +## Starting the Pod + +Run the kubectl command to run Nginx. 
+ +```bash +$ kubectl apply -f nginx.yaml +deployment.apps/nginx-deployment created +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +nginx-deployment-66b6c48dd5-6rnwz 1/1 Running 0 33s +nginx-deployment-66b6c48dd5-9pq49 1/1 Running 0 33s +nginx-deployment-66b6c48dd5-lvmng 1/1 Running 0 34s +``` diff --git a/docs/en/Cloud/ClusterDeployment/Menu/index.md b/docs/en/Cloud/ClusterDeployment/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..408c6a2905be22d8367a8de406386e33cd446a08 --- /dev/null +++ b/docs/en/Cloud/ClusterDeployment/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- + +- [Kubernetes Cluster Deployment Guide]({{< relref "./Kubernetes/Menu/index.md" >}}) +- [iSulad + Kubernetes Cluster Deployment Guide]({{< relref "./iSulad+k8s/Menu/index.md" >}}) diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/Menu/index.md b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..98c93e3060a017a2d2497084d668608bb457117a --- /dev/null +++ b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/Menu/index.md @@ -0,0 +1,8 @@ +--- +headless: true +--- + +- [iSulad + Kubernetes Cluster Deployment Guide]({{< relref "./isulad+k8s-cluster+deployment.md" >}}) + - [iSulad+Kubernetes Environment Deployment]({{< relref "./isulad+k8s-deployment.md" >}}) + - [GitLab Deployment]({{< relref "./gitlab-deployment.md" >}}) + - [GitLab Runner Deployment and Testing]({{< relref "./gitlab-runner-deployment.md" >}}) diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/1.view-required-images.png b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/1.view-required-images.png new file mode 100644 index 0000000000000000000000000000000000000000..74cdae5726cec83d5d74b0b8bd01694fd388e342 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/1.view-required-images.png differ diff --git 
a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/13.view-cert-config.png b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/13.view-cert-config.png new file mode 100644 index 0000000000000000000000000000000000000000..8e9ce44af5a01670add1b8b2f5a7223a8bd0f35d Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/13.view-cert-config.png differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/14.import-cert.png b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/14.import-cert.png new file mode 100644 index 0000000000000000000000000000000000000000..2a1fdb24d6f5c1c9d44cbce08276289adc5c876c Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/14.import-cert.png differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/15.register-gitlab-runner.jpg b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/15.register-gitlab-runner.jpg new file mode 100644 index 0000000000000000000000000000000000000000..896f13bdc6411b719283f30d9973973950f27a1c Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/15.register-gitlab-runner.jpg differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/17.png b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/17.png new file mode 100644 index 0000000000000000000000000000000000000000..86f90a67185f532b362f4710ce8f7615cf40c9e1 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/17.png differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/18.dns-config.png b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/18.dns-config.png new file mode 100644 index 0000000000000000000000000000000000000000..46b85396db34577b67679da759b6160ee707dec5 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/18.dns-config.png differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/2.calico-config.png 
b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/2.calico-config.png new file mode 100644 index 0000000000000000000000000000000000000000..d656f86d8ce5e110cf240a58e58b05b42aba8c15 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/2.calico-config.png differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/20.yaml.png b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/20.yaml.png new file mode 100644 index 0000000000000000000000000000000000000000..4a609d864f0ca184d94e9108656a8652a6dad55d Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/20.yaml.png differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/3.png b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/3.png new file mode 100644 index 0000000000000000000000000000000000000000..7394b5f21821ce8d352c2f935c3ea3e490dc0519 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/3.png differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/4.gitlab-entrance.jpg b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/4.gitlab-entrance.jpg new file mode 100644 index 0000000000000000000000000000000000000000..d3eb0d59d6dee5051470621a4969651668687789 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/4.gitlab-entrance.jpg differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/5.view-password.jpg b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/5.view-password.jpg new file mode 100644 index 0000000000000000000000000000000000000000..2e3902815108e9e91a07c382a4aae090b7cc6fe9 Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/5.view-password.jpg differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/6.logged-in.png b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/6.logged-in.png new file mode 100644 index 0000000000000000000000000000000000000000..5f4d2c2a9a8bf337263028e859e49499155920b0 Binary files 
/dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/6.logged-in.png differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/7.image.png b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/7.image.png new file mode 100644 index 0000000000000000000000000000000000000000..26c811ae616d2fe86e7b8b75c78ef88aff83616b Binary files /dev/null and b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/figures/7.image.png differ diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/gitlab-deployment.md b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/gitlab-deployment.md new file mode 100644 index 0000000000000000000000000000000000000000..ea24d8006a11aabc4074d84867a6da7f2fea4519 --- /dev/null +++ b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/gitlab-deployment.md @@ -0,0 +1,311 @@ +# GitLab Deployment + +## Description + +GitLab deployment is required in Scenario 1 (openEuler native deployment CI/CD based on GitLab CI/CD). In Scenario 2 (openEuler native development cluster managed by GitLab CI/CD), skip this step. + +## Preparing the Server + +Prepare a machine running openEuler 20.03 LTS or later versions. + +## Starting GitLab + +Copy the required YAML files to the **/home** directory and start the related pod. +> **Note**: The YAML files related to GitLab can be obtained from the GitLab official site. + +Example YAML files are as follows. Modify them as required. 
+ +gitlab-redis.yaml + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis + namespace: default + labels: + name: redis +spec: + selector: + matchLabels: + name: redis + template: + metadata: + name: redis + labels: + name: redis + spec: + containers: + - name: redis + image: 10.35.111.11:5000/redis:latest + imagePullPolicy: IfNotPresent + ports: + - name: redis + containerPort: 6379 + volumeMounts: + - mountPath: /var/lib/redis + name: data + livenessProbe: + exec: + command: + - redis-cli + - ping + initialDelaySeconds: 30 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - redis-cli + - ping + initialDelaySeconds: 5 + timeoutSeconds: 1 + volumes: + - name: data + emptyDir: {} + +--- +apiVersion: v1 +kind: Service +metadata: + name: redis + namespace: default + labels: + name: redis +spec: + ports: + - name: redis + port: 6379 + targetPort: redis + selector: + name: redis +``` + +gitlab-postgresql.yaml + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: postgresql + namespace: default + labels: + name: postgresql +spec: + selector: + matchLabels: + name: postgresql + template: + metadata: + name: postgresql + labels: + name: postgresql + spec: + containers: + - name: postgresql + image: 10.35.111.11:5000/postgres:13.6 + imagePullPolicy: IfNotPresent + env: + - name: POSTGRES_HOST_AUTH_METHOD + value: trust + - name: DB_USER + value: gitlab + - name: DB_PASS + value: passw0rd + - name: DB_NAME + value: gitlab_production + - name: DB_EXTENSION + value: pg_trgm + ports: + - name: postgres + containerPort: 5432 + volumeMounts: + - mountPath: /var/lib/postgresql + name: data + livenessProbe: + exec: + command: + - pg_isready + - -h + - localhost + - -U + - postgres + initialDelaySeconds: 30 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - pg_isready + - -h + - localhost + - -U + - postgres + initialDelaySeconds: 5 + timeoutSeconds: 1 + volumes: + - name: data + emptyDir: {} + +--- +apiVersion: v1 +kind: 
Service +metadata: + name: postgresql + namespace: default + labels: + name: postgresql +spec: + ports: + - name: postgres + port: 5432 + targetPort: postgres + selector: + name: postgresql +``` + +gitlab.yaml + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: gitlab + namespace: default + labels: + name: gitlab +spec: + selector: + matchLabels: + name: gitlab + template: + metadata: + name: gitlab + labels: + name: gitlab + spec: + containers: + - name: gitlab + image: 10.35.111.11:5000/yrzr/gitlab-ce-arm64v8:14.3.2-ce.0 + imagePullPolicy: IfNotPresent + env: + - name: TZ + value: Asia/Shanghai + - name: GITLAB_TIMEZONE + value: Beijing + - name: GITLAB_SECRETS_DB_KEY_BASE + value: long-and-random-alpha-numeric-string + - name: GITLAB_SECRETS_SECRET_KEY_BASE + value: long-and-random-alpha-numeric-string + - name: GITLAB_SECRETS_OTP_KEY_BASE + value: long-and-random-alpha-numeric-string + - name: GITLAB_ROOT_PASSWORD + value: admin321 + - name: GITLAB_ROOT_EMAIL + value: 517554016@qq.com + - name: GITLAB_HOST + value: git.qikqiak.com + - name: GITLAB_PORT + value: "80" + - name: GITLAB_SSH_PORT + value: "22" + - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS + value: "true" + - name: GITLAB_NOTIFY_PUSHER + value: "false" + - name: GITLAB_BACKUP_SCHEDULE + value: daily + - name: GITLAB_BACKUP_TIME + value: 01:00 + - name: DB_TYPE + value: postgres + - name: DB_HOST + value: postgresql + - name: DB_PORT + value: "5432" + - name: DB_USER + value: gitlab + - name: DB_PASS + value: passw0rd + - name: DB_NAME + value: gitlab_production + - name: REDIS_HOST + value: redis + - name: REDIS_PORT + value: "6379" + ports: + - name: http + containerPort: 80 + - name: ssh + containerPort: 22 + volumeMounts: + - mountPath: /home/git/data + name: data + livenessProbe: + httpGet: + path: / + port: 80 + initialDelaySeconds: 180 + timeoutSeconds: 5 + readinessProbe: + httpGet: + path: / + port: 80 + initialDelaySeconds: 5 + timeoutSeconds: 1 + volumes: + - name: data + 
+        emptyDir: {}
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: gitlab
+  namespace: default
+  labels:
+    name: gitlab
+spec:
+  ports:
+    - name: http
+      port: 80
+      targetPort: http
+      nodePort: 30852
+    - name: ssh
+      port: 22
+      nodePort: 32353
+      targetPort: ssh
+  selector:
+    name: gitlab
+  type: NodePort
+```
+
+Start the containers.
+
+```shell
+kubectl apply -f gitlab-redis.yaml
+kubectl apply -f gitlab-postgresql.yaml
+kubectl apply -f gitlab.yaml
+```
+
+Check whether the GitLab pod is set up successfully.
+
+```shell
+kubectl get pod -A -owide
+```
+
+## Logging in to GitLab
+
+Log in to the GitLab Web UI. The address is the node IP address and the configured NodePort (30852 in the example above).
+
+![](figures/4.gitlab-entrance.jpg)
+The user name is **root**. The default password can be viewed in the password file in the container.
+
+```shell
+kubectl exec -it gitlab-lab -n default -- /bin/sh
+cat /etc/gitlab/initial_root_password
+```
+
+![](figures/5.view-password.jpg)
+
+After you log in, this page is displayed:
+
+![](figures/6.logged-in.png)
diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/gitlab-runner-deployment.md b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/gitlab-runner-deployment.md
new file mode 100644
index 0000000000000000000000000000000000000000..770f1651a7cd302b8f2750e2c353be55875896f9
--- /dev/null
+++ b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/gitlab-runner-deployment.md
@@ -0,0 +1,178 @@
+# GitLab Runner Deployment and Testing
+
+## Images and Software
+
+The following table lists the images required during installation. The version numbers are for reference only.
+
+| Image | Version |
+|------------------------------------|----------|
+| gitlab/gitlab-runner | alpine-v14.4.0 |
+| gitlab/gitlab-runner-helper | x86_64-54944146 |
+
+> If the Internet is unavailable in the environment, download the required images in advance.
+> Download the images from the Docker Hub official website.
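For the offline case, the images in the table can be staged on a host that does have Internet access and then carried over as tarballs. A minimal sketch (image names are taken from the table above; `docker` on the staging host and the tarball naming scheme are assumptions):

```shell
# Print the pull/save commands for each runner image from the table above.
# '/' and ':' are mapped to '_' to build a filesystem-safe tarball name.
images="gitlab/gitlab-runner:alpine-v14.4.0 gitlab/gitlab-runner-helper:x86_64-54944146"
for img in ${images}; do
    echo "docker pull ${img}"
    echo "docker save -o $(echo "${img}" | tr '/:' '__').tar ${img}"
done
# On each offline node, load the transferred tarballs, for example:
# docker load -i gitlab_gitlab-runner_alpine-v14.4.0.tar
```

On iSulad-based nodes, `isula load -i <tarball>` can be used the same way to import the transferred images.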
+
+## Using gitlab-runner.yaml to Start the Runner Container
+
+In the **gitlab-runner.yaml** file, change the image name. The following is an example of the YAML file. Modify it as required.
+
+```shell
+vim gitlab-runner.yaml
+```
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: gitlab-runner
+  namespace: default
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      name: gitlab-runner
+  template:
+    metadata:
+      labels:
+        name: gitlab-runner
+    spec:
+      containers:
+        - args:
+            - run
+          image: gitlab/gitlab-runner:alpine-v14.4.0
+          imagePullPolicy: IfNotPresent
+          name: gitlab-runner
+          volumeMounts:
+            - mountPath: /etc/gitlab-runner
+              name: config
+              readOnly: false
+            - mountPath: /etc/ssl/certs
+              name: cacerts
+              readOnly: true
+      restartPolicy: Always
+      volumes:
+        - hostPath:
+            path: /etc/gitlab-runner
+          name: config
+        - hostPath:
+            path: /etc/ssl/key
+          name: cacerts
+```
+
+Start the container.
+
+```shell
+# kubectl apply -f gitlab-runner.yaml
+# kubectl get pod -A -o wide
+```
+
+![image](figures/7.image.png)
+
+## Creating a Container Project That Uses User Certificates for Authentication in GitLab
+
+1. Click **New project**.
+
+2. Select **Create blank project**.
+
+3. Enter a name for the project.
+
+4. Choose **Settings** > **CI/CD** > **Runners** > **Expand**.
+
+5. Record the address and token for registering the Runner.
+
+6. Import certificate files.
+
+    Check and generate certificate files **admin.crt**, **admin.key**, and **ca.crt** on the master node.
+
+    - View certificate information.
+
+      ```shell
+      # cat /etc/kubernetes/admin.conf
+      ```
+
+      ![view-cert-config](figures/13.view-cert-config.png)
+
+    - Generate **admin.crt** by base64-decoding the `client-certificate-data` field.
+
+      ```shell
+      # echo "${client-certificate-data}" | base64 -d > admin.crt
+      ```
+
+    - Generate **admin.key** by base64-decoding the `client-key-data` field.
+
+      ```shell
+      # echo "${client-key-data}" | base64 -d > admin.key
+      ```
+
+    - Obtain the CA certificate on the manager node.
+ + ```shell + # cp /etc/kubernetes/pki/ca.crt ./ + ``` + +7. Import the three certificate files to the GitLab Runner container on the node where the Runner is running. + + > **Note**: To import the certificate files, check the node where the GitLab Runner is running, copy the certificate files to the node, and run the **isula cp** command to import the certificate files. + + ```shell + # isula cp admin.crt [Container ID]:Storage path + # isula cp admin.key [Container ID]:Storage path + # isula cp ca.crt [Container ID]:Storage path + ``` + + Note: The **isula cp** command can copy only one file at a time. + + ![import-cert](figures/14.import-cert.png) + +## Registering the GitLab Runner + +Perform registration in the GitLab Runner container. Currently, interactive registration is used. Obtain the registration information from GitLab. Choose **GitLab** > **Group runners** > **Settings** > **CI/CD** > **Runners**. + +![register-gitlab-runner](figures/15.register-gitlab-runner.jpg) + +Upload the prepared **gitlab-runner-helper** image to the private image repository in advance, go to the GitLab Runner container, and modify the configuration file. + +```shell +# cd /etc/gitlab-runner +# mkdir kubessl +# cp /home/admin.crt /etc/gitlab-runner/kubessl +# cp /home/ca.crt /etc/gitlab-runner/kubessl +# cp /home/admin.key /etc/gitlab-runner/kubessl +# vim /etc/gitlab-runner/config.toml +``` + +![](figures/17.png) + +## Adding the DNS Record of the GitLab Container to the Manager Node + +1. View the IP address of the GitLab container. + + ```shell + # kubectl get pods -Aowide + ``` + +2. Add the IP address of the GitLab container to the Kubernetes DNS configuration file. + + ```shell + # kubectl edit configmaps coredns -n kube-system + ``` + + ![dns](figures/18.dns-config.png) + +3. Restart the CoreDNS service. 
+
+    ```shell
+    # kubectl scale deployment coredns -n kube-system --replicas=0
+    # kubectl scale deployment coredns -n kube-system --replicas=2
+    ```
+
+## Testing GitLab Operation
+
+Return to the GitLab web IDE and choose **CI/CD** > **Editor** > **Create new CI/CD pipeline**.
+
+- Edit the YAML file as follows:
+
+![yaml](figures/20.yaml.png)
+
+- Choose **Pipelines** and view the status.
diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/isulad+k8s-cluster+deployment.md b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/isulad+k8s-cluster+deployment.md
new file mode 100644
index 0000000000000000000000000000000000000000..bee15e23a8aba3898d456ce3d3399435a8e54386
--- /dev/null
+++ b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/isulad+k8s-cluster+deployment.md
@@ -0,0 +1,21 @@
+# iSulad + Kubernetes Cluster Deployment Guide
+
+This document outlines the process of deploying a Kubernetes cluster with kubeadm on the openEuler OS, configuring a Kubernetes + iSulad environment, and setting up GitLab Runner. It serves as a comprehensive guide for creating a native openEuler development environment cluster.
+
+The guide addresses two primary scenarios:
+
+**Scenario 1**: A complete walkthrough for establishing a native openEuler development CI/CD pipeline from scratch using GitLab CI/CD.
+**Scenario 2**: Instructions for integrating an existing native openEuler development execution machine cluster into GitLab CI/CD.
+
+For scenario 1, the following steps are required:
+
+1. Set up the Kubernetes + iSulad environment.
+2. Deploy GitLab.
+3. Install and test GitLab Runner.
+
+For scenario 2, where a GitLab CI/CD platform is already available, the process involves:
+
+1. Configure the Kubernetes + iSulad environment.
+2. Install and test GitLab Runner.
+
+> **Note**: All operations described in this document must be executed with root privileges.
diff --git a/docs/en/Cloud/ClusterDeployment/iSulad+k8s/isulad+k8s-deployment.md b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/isulad+k8s-deployment.md
new file mode 100644
index 0000000000000000000000000000000000000000..4380c221c465a41c659c174ee0e8ad808326c754
--- /dev/null
+++ b/docs/en/Cloud/ClusterDeployment/iSulad+k8s/isulad+k8s-deployment.md
@@ -0,0 +1,406 @@
+# iSulad+Kubernetes Environment Deployment
+
+## Preparing Cluster Servers
+
+Prepare at least 3 machines running openEuler 20.03 LTS or later versions. The following table lists information about the machines.
+
+| Host Name | IP Address | OS | Role | Component |
+|-------|-------------|------------------------|----------|-----------|
+| lab1 | 197.xxx.xxx.xxx | openEuler 20.03 LTS SP3 | Control node | iSulad/Kubernetes |
+| lab2 | 197.xxx.xxx.xxx | openEuler 20.03 LTS SP3 | Worker node 1 | iSulad/Kubernetes |
+| lab3 | 197.xxx.xxx.xxx | openEuler 20.03 LTS SP3 | Worker node 2 | iSulad/Kubernetes |
+
+## Preparing Images and Software Packages
+
+The following table lists software packages and images used in the example. The versions are for reference only.
+
+| Software | Version |
+|------------------------------------|----------|
+| iSulad | 2.0.17-2 |
+| kubernetes-client | 1.20.2-9 |
+| kubernetes-kubeadm | 1.20.2-9 |
+| kubernetes-kubelet | 1.20.2-9 |
+
+| Image | Version |
+|------------------------------------|----------|
+| k8s.gcr.io/kube-proxy | v1.20.2 |
+| k8s.gcr.io/kube-apiserver | v1.20.2 |
+| k8s.gcr.io/kube-controller-manager | v1.20.2 |
+| k8s.gcr.io/kube-scheduler | v1.20.2 |
+| k8s.gcr.io/etcd | 3.4.13-0 |
+| k8s.gcr.io/coredns | 1.7.0 |
+| k8s.gcr.io/pause | 3.2 |
+| calico/node | v3.14.2 |
+| calico/pod2daemon-flexvol | v3.14.2 |
+| calico/cni | v3.14.2 |
+| calico/kube-controllers | v3.14.2 |
+
+> If you perform the deployment without an Internet connection, download the software packages, dependencies, and images in advance.
+
+- Download software packages:
+- Download images from Docker Hub:
+
+## Modifying the hosts File
+
+1. Change the host name of the machine, for example, **lab1**.
+
+    ```shell
+    hostnamectl set-hostname lab1
+    sudo -i
+    ```
+
+2. Configure host name resolution by modifying the **/etc/hosts** file on each machine.
+
+    ```shell
+    vim /etc/hosts
+    ```
+
+3. Add the following content (IP address and host name) to the **hosts** file:
+
+    ```text
+    197.xxx.xxx.xxx lab1
+    197.xxx.xxx.xxx lab2
+    197.xxx.xxx.xxx lab3
+    ```
+
+## Preparing the Environment
+
+1. Disable the firewall.
+
+    ```shell
+    systemctl stop firewalld
+    systemctl disable firewalld
+    ```
+
+2. Disable SELinux.
+
+    ```shell
+    setenforce 0
+    ```
+
+3. Disable memory swapping.
+
+    ```shell
+    swapoff -a
+    sed -ri 's/.*swap.*/#&/' /etc/fstab
+    ```
+
+4. Configure the network and enable forwarding.
+
+    ```shell
+    $ cat > /etc/sysctl.d/kubernetes.conf <<EOF
+    net.bridge.bridge-nf-call-iptables=1
+    net.bridge.bridge-nf-call-ip6tables=1
+    net.ipv4.ip_forward=1
+    EOF
+    $ modprobe br_netfilter
+    $ sysctl -p /etc/sysctl.d/kubernetes.conf
+    ```
+
+## Installing and Configuring iSulad and Kubernetes Components
+
+1. Install the software packages listed above: `yum install -y iSulad kubernetes-kubeadm kubernetes-kubelet kubernetes-client`.
+
+2. Configure iSulad in **/etc/isulad/daemon.json**. Set the registry addresses and the pod sandbox image (the registry address below is a placeholder; replace it with your registry):
+
+    ```json
+    {
+        "registry-mirrors": [
+            "docker.io"
+        ],
+        "insecure-registries": [
+            "<registry ip:port>"
+        ],
+        "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
+        "native.umask": "normal",
+        "network-plugin": "cni",
+        "cni-bin-dir": "/opt/cni/bin",
+        "cni-conf-dir": "/etc/cni/net.d",
+        "image-layer-check": false,
+        "use-decrypted-key": true,
+        "insecure-skip-verify-enforce": false,
+        "cri-runtimes": {
+            "kata": "io.containerd.kata.v2"
+        }
+    }
+    ```
+
+3. Restart the isulad service.
+
+    ```shell
+    systemctl restart isulad
+    ```
+
+### Loading the isulad Images
+
+1. Check the required system images.
+
+    ```shell
+    kubeadm config images list
+    ```
+
+    Pay attention to the versions in the output, as shown in the figure.
+    ![](figures/1.view-required-images.png)
+
+2. Pull the images using the `isula` command.
+
+    > **Note**: The versions in the following commands are for reference only. Use the versions in the preceding output.
+ + ```shell + isula pull k8simage/kube-apiserver:v1.20.15 + isula pull k8smx/kube-controller-manager:v1.20.15 + isula pull k8smx/kube-scheduler:v1.20.15 + isula pull k8smx/kube-proxy:v1.20.15 + isula pull k8smx/pause:3.2 + isula pull k8smx/coredns:1.7.0 + isula pull k8smx/etcd:3.4.13-0 + ``` + +3. Modify the tags of the pulled images. + + ```shell + isula tag k8simage/kube-apiserver:v1.20.15 k8s.gcr.io/kube-apiserver:v1.20.15 + isula tag k8smx/kube-controller-manager:v1.20.15 k8s.gcr.io/kube-controller-manager:v1.20.15 + isula tag k8smx/kube-scheduler:v1.20.15 k8s.gcr.io/kube-scheduler:v1.20.15 + isula tag k8smx/kube-proxy:v1.20.15 k8s.gcr.io/kube-proxy:v1.20.15 + isula tag k8smx/pause:3.2 k8s.gcr.io/pause:3.2 + isula tag k8smx/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0 + isula tag k8smx/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0 + ``` + +4. Remove the old images. + + ```shell + isula rmi k8simage/kube-apiserver:v1.20.15 + isula rmi k8smx/kube-controller-manager:v1.20.15 + isula rmi k8smx/kube-scheduler:v1.20.15 + isula rmi k8smx/kube-proxy:v1.20.15 + isula rmi k8smx/pause:3.2 + isula rmi k8smx/coredns:1.7.0 + isula rmi k8smx/etcd:3.4.13-0 + ``` + +5. View pulled images. + + ```shell + isula images + ``` + +### Installing crictl + +```shell +yum install -y cri-tools +``` + +### Initializing the Master Node + +Initialize the master node. + +```shell +kubeadm init --kubernetes-version v1.20.2 --cri-socket=/var/run/isulad.sock --pod-network-cidr= +``` + +- `--kubernetes-version` indicates the current Kubernetes version. +- `--cri-socket` specifies the engine, that is, isulad. +- `--pod-network-cidr` specifies the IP address range of the pods. + +Enter the following commands as prompted: + +```shell +mkdir -p $HOME/.kube +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` + +After the initialization, copy the last two lines of the output and run the copied commands on the nodes to add them to the master cluster. 
The commands can also be generated using the following command:
+
+```shell
+kubeadm token create --print-join-command
+```
+
+### Adding Nodes
+
+Paste the `kubeadm join` command generated on Master, add `--cri-socket=/var/run/isulad.sock` before `--discovery-token-ca-cert-hash`, and then run the command.
+
+```shell
+kubeadm join --token bgyis4.euwkjqb7jwuenwvs --cri-socket=/var/run/isulad.sock --discovery-token-ca-cert-hash sha256:3792f02e136042e2091b245ac71c1b9cdcb97990311f9300e91e1c339e1dfcf6
+```
+
+### Installing Calico Network Plugins
+
+1. Pull Calico images.
+
+    Configure the Calico network plugins on the Master node and pull the required images on each node.
+
+    ```shell
+    isula pull calico/node:v3.14.2
+    isula pull calico/cni:v3.14.2
+    isula pull calico/kube-controllers:v3.14.2
+    isula pull calico/pod2daemon-flexvol:v3.14.2
+    ```
+
+2. Download the configuration file on Master.
+
+    ```shell
+    wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
+    ```
+
+3. Modify **calico.yaml**.
+
+    ```yaml
+    # vim calico.yaml
+
+    # Modify the following parameters.
+
+    - name: IP_AUTODETECTION_METHOD
+      value: "can-reach=197.3.10.254"
+
+    - name: CALICO_IPV4POOL_IPIP
+      value: "CrossSubnet"
+    ```
+
+    ![](figures/2.calico-config.png)
+
+    - If the default CNI of the pod is Flannel, add the following content to **flannel.yaml**:
+
+    ```yaml
+    --iface=enp4s0
+    ```
+
+    ![](figures/3.png)
+
+4. Create a pod.
+
+    ```shell
+    kubectl apply -f calico.yaml
+    ```
+
+    - If you want to delete the configuration file, run the following command:
+
+    ```shell
+    kubectl delete -f calico.yaml
+    ```
+
+5. View pod information.
+
+    ```shell
+    kubectl get pod -A -o wide
+    ```
+
+### Checking the Master Node Information
+
+```shell
+kubectl get nodes -o wide
+```
+
+To reset a node, run the following command:
+
+```shell
+kubeadm reset
+```
diff --git a/docs/en/Cloud/ContainerEngine/DockerEngine/Menu/index.md b/docs/en/Cloud/ContainerEngine/DockerEngine/Menu/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..e3154aad9bc24f599a8c7e1ffdc3644a64e4c456
--- /dev/null
+++ b/docs/en/Cloud/ContainerEngine/DockerEngine/Menu/index.md
@@ -0,0 +1,14 @@
+---
+headless: true
+---
+
+- [Docker Container]({{< relref "./docker-container.md" >}})
+  - [Installation and Configuration]({{< relref "./installation-and-configuration-3.md" >}})
+  - [Container Management]({{< relref "./container-management-1.md" >}})
+  - [Image Management]({{< relref "./image-management-1.md" >}})
+  - [Command Reference]({{< relref "./command-reference.md" >}})
+    - [Container Engine]({{< relref "./container-engine.md" >}})
+    - [Container Management]({{< relref "./container-management-2.md" >}})
+    - [Image Management]({{< relref "./image-management-2.md" >}})
+    - [Statistics]({{< relref "./statistics.md" >}})
+  - [Common Docker Issues and Solutions]({{< relref "./docker-common-issues-and-solutions.md" >}})
\ No newline at end of file
diff --git a/docs/en/docs/Container/command-reference.md b/docs/en/Cloud/ContainerEngine/DockerEngine/command-reference.md
similarity index 100%
rename from docs/en/docs/Container/command-reference.md
rename to docs/en/Cloud/ContainerEngine/DockerEngine/command-reference.md
diff --git a/docs/en/docs/Container/container-engine.md b/docs/en/Cloud/ContainerEngine/DockerEngine/container-engine.md
similarity index 100%
rename from docs/en/docs/Container/container-engine.md
rename to docs/en/Cloud/ContainerEngine/DockerEngine/container-engine.md
diff --git a/docs/en/docs/Container/container-management-1.md b/docs/en/Cloud/ContainerEngine/DockerEngine/container-management-1.md
similarity index 96% rename from docs/en/docs/Container/container-management-1.md rename to docs/en/Cloud/ContainerEngine/DockerEngine/container-management-1.md index 5dcc5ef03c4a1dec3243b4564ef04dbce3423059..a78b1d60a564ed515a68e4541edf610fbe99e2a7 100644 --- a/docs/en/docs/Container/container-management-1.md +++ b/docs/en/Cloud/ContainerEngine/DockerEngine/container-management-1.md @@ -564,14 +564,14 @@ When the container is running, the health check status is written into the conta } ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif)**NOTE:** > ->- A maximum of five health check status records can be stored in a container. The last five records are saved. ->- Only one health check configuration item can take effect in a container at a time. The later items configured in the Dockerfile will overwrite the earlier ones. Configurations during container creation will overwrite those in images. ->- In the Dockerfile, you can set **HEALTHCHECK NONE** to cancel the health check configuration in a referenced image. When a container is running, you can set **--no-healthcheck** to cancel the health check configuration in an image. Do not configure the health check and **--no-healthcheck** parameters at the same time during the startup. ->- After a container with configured health check parameters is started, if Docker daemon exits, the health check is not executed. After Docker daemon is restarted, the container health status changes to **starting**. Afterwards, the check rules are the same as above. ->- If health check parameters are set to **0** during container image creation, the default values are used. ->- If health check parameters are set to **0** during container startup, the default values are used. +> - A maximum of five health check status records can be stored in a container. The last five records are saved. +> - Only one health check configuration item can take effect in a container at a time. 
The later items configured in the Dockerfile will overwrite the earlier ones. Configurations during container creation will overwrite those in images.
+> - In the Dockerfile, you can set **HEALTHCHECK NONE** to cancel the health check configuration in a referenced image. When a container is running, you can set **--no-healthcheck** to cancel the health check configuration in an image. Do not configure the health check and **--no-healthcheck** parameters at the same time during the startup.
+> - After a container with configured health check parameters is started, if Docker daemon exits, the health check is not executed. After Docker daemon is restarted, the container health status changes to **starting**. Afterwards, the check rules are the same as above.
+> - If health check parameters are set to **0** during container image creation, the default values are used.
+> - If health check parameters are set to **0** during container startup, the default values are used.
 
 ## Stopping and Deleting a Container
 
@@ -601,7 +601,7 @@ docker rm -f container1
 ```
 
 ### Precautions
 
-- Do not run the **docker rm –f**_XXX_ command to delete a container. If you forcibly delete a container, the **docker rm** command ignores errors during the process, which may cause residual metadata of the container. If you delete an image in common mode and an error occurs during the deletion process, the deletion fails and no metadata remains.
+- Do not run the **docker rm -f** _XXX_ command to delete a container. If you forcibly delete a container, the **docker rm** command ignores errors during the process, which may cause residual metadata of the container. If you delete an image in common mode and an error occurs during the deletion process, the deletion fails and no metadata remains.
 - Do not run the **docker kill** command. The **docker kill** command sends related signals to service processes in a container.
Depending on the signal processing policies of service processes in the container may cause the result that the signal execution cannot be performed as expected. - A container in the restarting state may not stop immediately when you run the **docker stop** command. If a container uses the restart rules, when the container is in the restarting state, there is a low probability that the **docker stop** command on the container returns immediately. The container will still be restarted with the impact of the restart rule. - Do not run the **docker restart** command to restart a container with the **--rm** parameter. When a container with the **--rm** parameter exits, the container is automatically deleted. If the container with the **--rm** parameter is restarted, exceptions may occur. For example, if both the **--rm** and **-ti** parameters are added when the container is started, the restart operation cannot be performed on the container, otherwise, the container may stop responding and cannot exit. diff --git a/docs/en/docs/Container/container-management-2.md b/docs/en/Cloud/ContainerEngine/DockerEngine/container-management-2.md similarity index 99% rename from docs/en/docs/Container/container-management-2.md rename to docs/en/Cloud/ContainerEngine/DockerEngine/container-management-2.md index dc3061cccbc065086d1fda0dcab047be930a7fd0..76bd5f9fa985d44c9566b6ad62900a417bc211ec 100644 --- a/docs/en/docs/Container/container-management-2.md +++ b/docs/en/Cloud/ContainerEngine/DockerEngine/container-management-2.md @@ -1000,7 +1000,7 @@ Example: sudo docker restart busybox ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** +>![](./public_sys-resources/icon-note.gif)**NOTE:** >During the container restart, if a process in the **D** or **Z** state exists in the container, the container may fail to be restarted. In this case, you need to analyze the cause of the **D** or **Z** state of the process in the container. 
Restart the container after the **D** or **Z** state of the process in the container is released. ## rm diff --git a/docs/en/Cloud/ContainerEngine/DockerEngine/docker-common-issues-and-solutions.md b/docs/en/Cloud/ContainerEngine/DockerEngine/docker-common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..9ed94a0c471e704c5f9bf13c64c581e462375f53 --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/DockerEngine/docker-common-issues-and-solutions.md @@ -0,0 +1,5 @@ +# Common Docker Issues and Solutions + +## Issue 1: Additional Mount Point in Docker v18.09.9 Compared to v19.03.0 and Later + +In Docker version 18.09.9, containers have an extra mount point compared to those launched in Docker v19.03.0 and later. This is because the default `ipcmode` in v18.09 is set to `shareable`, which creates an additional `shmpath` mount point. To resolve this, either update the `ipcmode` option to `private` in the Docker configuration file or upgrade to a newer Docker version. diff --git a/docs/en/docs/Container/docker-container.md b/docs/en/Cloud/ContainerEngine/DockerEngine/docker-container.md similarity index 99% rename from docs/en/docs/Container/docker-container.md rename to docs/en/Cloud/ContainerEngine/DockerEngine/docker-container.md index 0c22297da0329b232603efb8bef7f8c1b0d385ae..0adcf2fc77979022b8480448dcbbf7a6f0ad4621 100644 --- a/docs/en/docs/Container/docker-container.md +++ b/docs/en/Cloud/ContainerEngine/DockerEngine/docker-container.md @@ -1,4 +1,3 @@ # Docker Container Docker is an open-source Linux container engine that enables quick application packaging, deployment, and delivery. The original meaning of Docker is dork worker, whose job is to pack the goods to the containers, and move containers, and load containers. Similarly, the job of Docker in Linux is to pack applications to containers, and deploy and run applications on various platforms using containers. 
Docker uses Linux Container technology to turn applications into standardized, portable, and self-managed components, enabling the "build once" and "run everywhere" features of applications. Features of Docker technology include: quick application release, easy application deployment and management, and high application density.
-
diff --git a/docs/en/docs/Container/image-management-1.md b/docs/en/Cloud/ContainerEngine/DockerEngine/image-management-1.md
similarity index 43%
rename from docs/en/docs/Container/image-management-1.md
rename to docs/en/Cloud/ContainerEngine/DockerEngine/image-management-1.md
index 31f4cf824ada16d011168db5107e020df3e3bed9..cd1dc9e4dbe607907a5aedfeff0abc958fb30aa6 100644
--- a/docs/en/docs/Container/image-management-1.md
+++ b/docs/en/Cloud/ContainerEngine/DockerEngine/image-management-1.md
@@ -1,43 +1,41 @@
 # Image Management
 
-- [Image Management](#image-management-1)
+- [Image Management](#image-management)
   - [Creating an Image](#creating-an-image)
   - [Viewing Images](#viewing-images)
   - [Deleting Images](#deleting-images)
-
 ## Creating an Image
 
-You can use the **docker pull**, **docker build**,** docker commit**, **docker import**, or **docker load** command to create an image. For details about how to use these commands, see [4.6.3 Image Management](#image-management-43.md#EN-US_TOPIC_0184808261).
+You can use the **docker pull**, **docker build**, **docker commit**, **docker import**, or **docker load** command to create an image. For details about how to use these commands, see [4.6.3 Image Management](image-management-2.md).
 
 ### Precautions
 
-1. Do not concurrently run the **docker load** and **docker rmi** commands. If both of the following conditions are met, concurrency problems may occur:
+1. Do not concurrently run the **docker load** and **docker rmi** commands. If both of the following conditions are met, concurrency problems may occur:
 
-   - An image exists in the system.
- - The docker rmi and docker load operations are concurrently performed on an image. + - An image exists in the system. + - The docker rmi and docker load operations are concurrently performed on an image. Therefore, avoid this scenario. \(All concurrent operations between the image creation operations such as running the **tag**, **build**, and **load**, and **rmi** commands, may cause similar errors. Therefore, do not concurrently perform these operations with **rmi**.\) -2. If the system is powered off when docker operates an image, the image may be damaged. In this case, you need to manually restore the image. +2. If the system is powered off when docker operates an image, the image may be damaged. In this case, you need to manually restore the image. When the docker operates images \(using the **pull**, **load**, **rmi**, **build**, **combine**, **commit**, or **import** commands\), image data operations are asynchronous, and image metadata is synchronous. Therefore, if the system power is off when not all image data is updated to the disk, the image data may be inconsistent with the metadata. Users can view images \(possibly none images\), but cannot start containers, or the started containers are abnormal. In this case, run the **docker rmi** command to delete the image and perform the previous operations again. The system can be recovered. -3. Do not store a large number of images on nodes in the production environment. Delete unnecessary images in time. +3. Do not store a large number of images on nodes in the production environment. Delete unnecessary images in time. If the number of images is too large, the execution of commands such as **docker image** is slow. As a result, the execution of commands such as **docker build** or **docker commit** fails, and the memory may be stacked. In the production environment, delete unnecessary images and intermediate process images in time. -4. 
When the **--no-parent** parameter is used to build images, if multiple build operations are performed at the same time and the FROM images in the Dockerfile are the same, residual images may exist. There are two cases: - - If FROM images are incomplete, the images generated when images of FROM are running may remain. Names of the residual images are similar to **base\_v1.0.0-app\_v2.0.0**, or they are none images. - - If the first several instructions in the Dockerfile are the same, none images may remain. - +4. When the **--no-parent** parameter is used to build images, if multiple build operations are performed at the same time and the FROM images in the Dockerfile are the same, residual images may exist. There are two cases: + - If FROM images are incomplete, the images generated when images of FROM are running may remain. Names of the residual images are similar to **base\_v1.0.0-app\_v2.0.0**, or they are none images. + - If the first several instructions in the Dockerfile are the same, none images may remain. ### None Image May Be Generated -1. A none image is the top-level image without a tag. For example, the image ID of **ubuntu** has only one tag **ubuntu**. If the tag is not used but the image ID is still available, the image ID becomes a none image. -2. An image is protected because the image data needs to be exported during image saving. However, if a deletion operation is performed, the image may be successfully untagged and the image ID may fail to be deleted \(because the image is protected\). As a result, the image becomes a none image. -3. If the system is powered off when you run the **docker pull** command or the system is in panic, a none image may be generated. To ensure image integrity, you can run the **docker rmi** command to delete the image and then restart it. -4. If you run the **docker save** command to save an image and specify the image ID as the image name, the loaded image does not have a tag and the image name is **none**. +1. 
A none image is the top-level image without a tag. For example, the image ID of **ubuntu** has only one tag **ubuntu**. If the tag is not used but the image ID is still available, the image ID becomes a none image.
+2. An image is protected because the image data needs to be exported during image saving. However, if a deletion operation is performed, the image may be successfully untagged and the image ID may fail to be deleted \(because the image is protected\). As a result, the image becomes a none image.
+3. If the system is powered off when you run the **docker pull** command or the system is in panic, a none image may be generated. To ensure image integrity, you can run the **docker rmi** command to delete the image and then restart it.
+4. If you run the **docker save** command to save an image and specify the image ID as the image name, the loaded image does not have a tag and the image name is **none**.
 
 ### A Low Probability That Image Fails to Be Built If the Image Is Deleted When Being Built
 
@@ -47,7 +45,7 @@ Currently, the image build process is protected by reference counting. After an
 
 Run the following command to view the local image list:
 
-```
+```shell
 docker images
 ```
 
@@ -55,5 +53,4 @@ docker images
 
 ### Precautions
 
-Do not run the **docker rmi –f **_XXX_ command to delete images. If you forcibly delete an image, the **docker rmi** command ignores errors during the process, which may cause residual metadata of containers or images. If you delete an image in common mode and an error occurs during the deletion process, the deletion fails and no metadata remains.
-
+Do not run the **docker rmi -f** _XXX_ command to delete images. If you forcibly delete an image, the **docker rmi** command ignores errors during the process, which may cause residual metadata of containers or images. If you delete an image in common mode and an error occurs during the deletion process, the deletion fails and no metadata remains.
diff --git a/docs/en/docs/Container/image-management-2.md b/docs/en/Cloud/ContainerEngine/DockerEngine/image-management-2.md
similarity index 86%
rename from docs/en/docs/Container/image-management-2.md
rename to docs/en/Cloud/ContainerEngine/DockerEngine/image-management-2.md
index 2d7fa0077fa033caef470e3291f83ad279d39091..1ea675321ad9fd4d009ce2df72e4b24fa22a3714 100644
--- a/docs/en/docs/Container/image-management-2.md
+++ b/docs/en/Cloud/ContainerEngine/DockerEngine/image-management-2.md
@@ -1,6 +1,6 @@
 # Image Management
 
-- [Image Management](#image-management-2)
+- [Image Management](#image-management)
   - [build](#build)
   - [history](#history)
   - [images](#images)
@@ -15,11 +15,6 @@
   - [search](#search)
   - [tag](#tag)
 
-
-  
-
-
-
 ## build
 
 Syntax: **docker build \[**_options_**\]** _path_ **|** _URL_ **| -**
 
@@ -89,16 +84,12 @@ Parameter description: Common parameters are as follows. For details about more
 
 Dockerfile is used to describe how to build an image and automatically build a container. The format of all **Dockerfile** commands is _instruction_ _arguments_.
 
-  
-
 **FROM Command**
 
 Syntax: **FROM** _image_ or **FROM** _image_:_tag_
 
 Function: Specifies a basic image, which is the first command for all Dockerfile files. If the tag of a basic image is not specified, the default tag name **latest** is used.
 
-  
-
 **RUN Command**
 
 Syntax: **RUN** _command_ \(for example, **run in a shell - \`/bin/sh -c\`**\) or
 
@@ -111,78 +102,58 @@ Function: Runs any command in the image specified by the **FROM** command and
 
 **docker commit** _container\_id_
 
-  
-
 **Remarks**
 
 The number sign \(\#\) is used to comment out.
 
-  
-
 **MAINTAINER Command**
 
-Syntax: **MAINTAINER **_name_
+Syntax: **MAINTAINER** _name_
 
 Function: Specifies the name and contact information of the maintenance personnel.
-  
-
 **ENTRYPOINT Command**
 
 Syntax: **ENTRYPOINT cmd **_param1 param2..._ or **ENTRYPOINT \[**_"cmd", "param1", "param2"..._**\]**
 
 Function: Configures the command to be executed during container startup.
 
-  
-
 **USER Command**
 
-Syntax: **USER **_name_
+Syntax: **USER** _name_
 
 Function: Specifies the running user of memcached.
 
-  
-
 **EXPOSE Command**
 
 Syntax: **EXPOSE **_port_** \[**_port_**...\]**
 
 Function: Enables one or more ports for images.
 
-  
-
 **ENV Command**
 
-Syntax: **ENV**_ key value_
+Syntax: **ENV** _key value_
 
 Function: Configures environment variables. After the environment variables are configured, the **RUN** commands can be subsequently used.
 
-  
-
 **ADD Command**
 
-Syntax: **ADD**_ src dst_
+Syntax: **ADD** _src dst_
 
 Function: Copies a file from the _src_ directory to the _dest_ directory of a container. _src_ indicates the relative path of the source directory to be built. It can be the path of a file or directory, or a remote file URL. _dest_ indicates the absolute path of the container.
 
-  
-
 **VOLUME Command**
 
 Syntax: **VOLUME \["**_mountpoint_**"\]**
 
 Function: Creates a mount point for sharing a directory.
 
-  
-
 **WORKDIR Command**
 
-Syntax: **workdir **_path_
+Syntax: **WORKDIR** _path_
 
 Function: Runs the **RUN**, **CMD**, and **ENTRYPOINT** commands to set the current working path. The current working path can be set multiple times. If the current working path is a relative path, it is relative to the previous **WORKDIR** command.
 
-  
-
 **CMD command**
 
 Syntax: **CMD \[**_"executable","param1","param2"_**\]** \(This command is similar to the **exec** command and is preferred.\)
 
@@ -193,8 +164,6 @@ Syntax: **CMD \[**_"executable","param1","param2"_**\]** \(This command is sim
 
 Function: A Dockerfile can contain only one CMD command. If there are multiple CMD commands, only the last one takes effect.
-   - **ONBUILD Commands** Syntax: **ONBUILD \[**_other commands_**\]** @@ -203,34 +172,30 @@ Function: This command is followed by other commands, such as the **RUN** and The following is a complete example of the Dockerfile command that builds an image with the sshd service installed. - - - - -
FROM busybox
+```text
+FROM busybox
 ENV  http_proxy http://192.168.0.226:3128
 ENV  https_proxy https://192.168.0.226:3128
 RUN apt-get update && apt-get install -y openssh-server
 RUN mkdir -p /var/run/sshd
 EXPOSE 22
-ENTRYPOINT /usr/sbin/sshd -D
-
+ENTRYPOINT /usr/sbin/sshd -D +``` Example: -1. Run the following command to build an image using the preceding Dockerfile: +1. Run the following command to build an image using the preceding Dockerfile: - ``` - $ sudo docker build -t busybox:latest + ```shell + sudo docker build -t busybox:latest ``` -2. Run the following command to view the generated image: +2. Run the following command to view the generated image: - ``` + ```shell docker images | grep busybox ``` - ## history Syntax: **docker history \[**_options_**\]** _image_ @@ -247,15 +212,13 @@ Parameter description: Example: -``` +```shell $ sudo docker history busybox:test IMAGE CREATED CREATED BY SIZE COMMENT be4672959e8b 15 minutes ago bash 23B 21970dfada48 4 weeks ago 128MB Imported from - ``` -   - ## images Syntax: **docker images \[**_options_**\] \[**_name_**\]** @@ -274,14 +237,12 @@ Parameter description: Example: -``` +```shell $ sudo docker images REPOSITORY TAG IMAGE ID CREATED SIZE busybox latest e02e811dd08f 2 years ago 1.09MB ``` -   - ## import Syntax: **docker import URL|- \[**_repository_**\[**_:tag_**\]\]** @@ -294,7 +255,7 @@ Example: Run the following command to generate a new image for **busybox.tar** exported using the **docker export** command: -``` +```shell $ sudo docker import busybox.tar busybox:test sha256:a79d8ae1240388fd3f6c49697733c8bac4d87283920defc51fb0fe4469e30a4f $ sudo docker images @@ -302,8 +263,6 @@ REPOSITORY TAG IMAGE ID CREATED busybox test a79d8ae12403 2 seconds ago 1.3MB ``` -   - ## load Syntax: **docker load \[**_options_**\]** @@ -316,7 +275,7 @@ Parameter description: Example: -``` +```shell $ sudo docker load -i busybox.tar Loaded image ID: sha256:e02e811dd08fd49e7f6032625495118e63f597eb150403d02e3238af1df240ba $ sudo docker images @@ -328,7 +287,7 @@ busybox latest e02e811dd08f 2 years ago Syntax: **docker login \[**_options_**\] \[**_server_**\]** -Function: Logs in to an image server. 
If no server is specified, the system logs in to **https://index.docker.io/v1/** by default.
+Function: Logs in to an image server. If no server is specified, the system logs in to **https://index.docker.io/v1/** by default.
 
 Parameter description:
 
@@ -340,22 +299,22 @@ Parameter description:
 
 Example:
 
-```
-$ sudo docker login
+```shell
+sudo docker login
 ```
 
 ## logout
 
 Syntax: **docker logout \[**_server_**\]**
 
-Function: Logs out of an image server. If no server is specified, the system logs out of **https://index.docker.io/v1/** by default.
+Function: Logs out of an image server. If no server is specified, the system logs out of **https://index.docker.io/v1/** by default.
 
 Parameter description: none.
 
 Example:
 
-```
-$ sudo docker logout
+```shell
+sudo docker logout
 ```
 
 ## pull
 
@@ -370,9 +329,9 @@ Parameter description:
 
 Example:
 
-1. Run the following command to obtain the Nginx image from the official registry:
+1. Run the following command to obtain the Nginx image from the official registry:
 
-    ```
+    ```shell
     $ sudo docker pull nginx
     Using default tag: latest
     latest: Pulling from official/nginx
@@ -385,15 +344,14 @@ Example:
 
     When an image is pulled, the system checks whether the dependent layer exists. If yes, the local layer is used.
 
-2. Pull an image from a private registry.
+2. Pull an image from a private registry.
 
     Run the following command to pull the Fedora image from the private registry, for example, the address of the private registry is **192.168.1.110:5000**:
 
-    ```
-    $ sudo docker pull 192.168.1.110:5000/fedora
+    ```shell
+    sudo docker pull 192.168.1.110:5000/fedora
     ```
 
-
 ## push
 
 Syntax: **docker push** _name_**\[**_:tag_**\]**
 
@@ -404,22 +362,21 @@ Parameter description: none.
 
 Example:
 
-1. Run the following command to push an image to the private image registry at 192.168.1.110:5000.
-2. Label the image to be pushed. \(The **docker tag** command is described in the following section.\) In this example, the image to be pushed is busybox:sshd.
+1. 
Run the following command to push an image to the private image registry at 192.168.1.110:5000.
+2. Label the image to be pushed. \(The **docker tag** command is described in the following section.\) In this example, the image to be pushed is busybox:sshd.
 
-    ```
-    $ sudo docker tag ubuntu:sshd 192.168.1.110:5000/busybox:sshd
+    ```shell
+    sudo docker tag ubuntu:sshd 192.168.1.110:5000/busybox:sshd
     ```
 
-3. Run the following command to push the tagged image to the private image registry:
+3. Run the following command to push the tagged image to the private image registry:
 
-    ```
-    $ sudo docker push 192.168.1.110:5000/busybox:sshd
+    ```shell
+    sudo docker push 192.168.1.110:5000/busybox:sshd
     ```
 
     During the push, the system automatically checks whether the dependent layer exists in the image registry. If yes, the layer is skipped.
 
-
 ## rmi
 
 Syntax: **docker rmi \[**_options_**\] **_image _**\[**_image..._**\]**
 
@@ -434,8 +391,8 @@ Parameter description:
 
 Example:
 
-```
-$ sudo docker rmi 192.168.1.110:5000/busybox:sshd
+```shell
+sudo docker rmi 192.168.1.110:5000/busybox:sshd
 ```
 
 ## save
 
@@ -450,7 +407,7 @@ Parameter description:
 
 Example:
 
-```
+```shell
 $ sudo docker save -o nginx.tar nginx:latest
 $ ls
 nginx.tar
 ```
 
@@ -458,7 +415,7 @@ nginx.tar
 
 ## search
 
-Syntax: **docker search **_options_ _TERM_
+Syntax: **docker search** _options_ _TERM_
 
 Function: Searches for a specific image in the image registry.
 
@@ -472,9 +429,9 @@ Parameter description:
 
 Example:
 
-1. Run the following command to search for Nginx in the official image library:
+1. Run the following command to search for Nginx in the official image library:
 
-    ```
+    ```shell
     $ sudo docker search nginx
     NAME                 DESCRIPTION                                     STARS     OFFICIAL     AUTOMATED
     nginx                Official build of Nginx.                        11873     [OK]
@@ -485,15 +442,12 @@ Example:
     tiangolo/nginx-rtmp  Docker image with Nginx using the nginx-rtmp... 51                     [OK]
     ```
 
-  
-
-2. Run the following command to search for busybox in the private image library.
The address of the private image library must be added during the search. +2. Run the following command to search for busybox in the private image library. The address of the private image library must be added during the search. - ``` - $ sudo docker search 192.168.1.110:5000/busybox + ```shell + sudo docker search 192.168.1.110:5000/busybox ``` - ## tag Syntax: **docker tag \[**_options_**\] **_image_**\[**_:tag_**\] \[**_registry host/_**\]\[**_username/_**\]**_name_**\[**_:tag_**\]** @@ -506,7 +460,6 @@ Parameter description: Example: +```shell +sudo docker tag busybox:latest busybox:test ``` -$ sudo docker tag busybox:latest busybox:test -``` - diff --git a/docs/en/docs/Container/installation-and-configuration-3.md b/docs/en/Cloud/ContainerEngine/DockerEngine/installation-and-configuration-3.md similarity index 97% rename from docs/en/docs/Container/installation-and-configuration-3.md rename to docs/en/Cloud/ContainerEngine/DockerEngine/installation-and-configuration-3.md index 17277fac22f3490e9477a665e260887cb4b94bb3..843d03d1ac3a918236335c6c86742da6dea81a41 100644 --- a/docs/en/docs/Container/installation-and-configuration-3.md +++ b/docs/en/Cloud/ContainerEngine/DockerEngine/installation-and-configuration-3.md @@ -26,12 +26,12 @@ $ cat /etc/docker/daemon.json Re-configuring various running directories and files \(including **--graph** and **--exec-root**\) may cause directory conflicts or file attribute changes, affecting the normal use of applications. ->![](./public_sys-resources/icon-notice.gif) **NOTICE:** +>![](./public_sys-resources/icon-notice.gif)**NOTICE:** >Therefore, the specified directories or files should be used only by Docker to avoid file attribute changes and security issues caused by conflicts. - Take **--graph** as an example. 
When **/new/path/** is used as the new root directory of the daemon, if a file exists in **/new/path/** and the directory or file name conflicts with that required by Docker \(for example, **containers**, **hooks**, and **tmp**\), Docker may update the original directory or file attributes, including the owner and permission. ->![](./public_sys-resources/icon-notice.gif) **NOTICE:** +>![](./public_sys-resources/icon-notice.gif)**NOTICE:** >From Docker 17.05, the **--graph** parameter is marked as **Deprecated** and replaced with the **--data-root** parameter. ### Daemon Network Configuration @@ -46,7 +46,7 @@ The default **umask** value of the main container process and exec process is The default value of **umask** is **0027** when Docker starts a container. You can change the value to **0022** by running the **--exec-opt native.umask=normal** command during container startup. ->![](./public_sys-resources/icon-notice.gif) **NOTICE:** +>![](./public_sys-resources/icon-notice.gif)**NOTICE:** >If **native.umask** is configured in **docker create** or **docker run** command, its value is used. For details, see the parameter description in [docker create](./container-management-2.md#create) and [docker run](./container-management-2.md#run). @@ -117,7 +117,7 @@ auditctl -R /etc/audit/rules.d/audit.rules | grep docker auditctl -l | grep docker -w /var/lib/docker/ -p rwxa -k docker ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** +>![](./public_sys-resources/icon-note.gif)**NOTE:** >**-p \[r|w|x|a\]** and **-w** are used together to monitor the read, write, execution, and attribute changes \(such as timestamp changes\) of the directory. In this case, any file or directory operation in the **/var/lib/docker** directory will be recorded in the **audit.log** file. As a result, too many logs will be recorded in the **audit.log** file, which severely affects the memory or CPU usage of the auditd, and further affects the OS. 
For example, logs similar to the following will be recorded in the **/var/log/audit/audit.log** file each time the **ls /var/lib/docker/containers** command is executed: ```text @@ -253,7 +253,6 @@ You will find that the rootfs of the corresponding container cannot be found on ``` The output format is _A_ on _B_ type _C_ \(_D_\). - - _A_: block device name or **overlay** - _B_: mount point - _C_: file system type @@ -393,7 +392,7 @@ The Docker service cannot be restarted properly due to frequent startup in a sho When a system is unexpectedly powered off or system panic occurs, Docker daemon status may not be updated to the disk in time. As a result, Docker daemon is abnormal after the system is restarted. The possible problems include but are not limited to the following: -- A container is created before the power-off. After the restart, the container is not displayed when the **docker ps –a** command is run, as the file status of the container is not updated to the disk. As a result, daemon cannot obtain the container status after the restart. +- A container is created before the power-off. After the restart, the container is not displayed when the **docker ps -a** command is run, as the file status of the container is not updated to the disk. As a result, daemon cannot obtain the container status after the restart. - Before the system power-off, a file is being written. After daemon is restarted, the file format is incorrect or the file content is incomplete. As a result, loading fails. - As Docker database \(DB\) will be damaged during power-off, all DB files in **data-root** will be deleted during node restart. Therefore, the following information created before the restart will be deleted after the restart: - Network: Resources created through Docker network will be deleted after the node is restarted. 
@@ -401,5 +400,5 @@ When a system is unexpectedly powered off or system panic occurs, Docker daemon - Cache construction: The cache construction information will be deleted after the node is restarted. - Metadata stored in containerd: Metadata stored in containerd will be recreated when a container is started. Therefore, the metadata stored in containerd will be deleted when the node is restarted. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >If you want to manually clear data and restore the environment, you can set the environment variable **DISABLE\_CRASH\_FILES\_DELETE** to **true** to disable the function of clearing DB files when the daemon process is restarted due to power-off. + > ![](./public_sys-resources/icon-note.gif)**NOTE:** + > If you want to manually clear data and restore the environment, you can set the environment variable **DISABLE\_CRASH\_FILES\_DELETE** to **true** to disable the function of clearing DB files when the daemon process is restarted due to power-off. 
diff --git a/docs/en/docs/Administration/public_sys-resources/icon-note.gif b/docs/en/Cloud/ContainerEngine/DockerEngine/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/Administration/public_sys-resources/icon-note.gif rename to docs/en/Cloud/ContainerEngine/DockerEngine/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/A-Tune/public_sys-resources/icon-notice.gif b/docs/en/Cloud/ContainerEngine/DockerEngine/public_sys-resources/icon-notice.gif similarity index 100% rename from docs/en/docs/A-Tune/public_sys-resources/icon-notice.gif rename to docs/en/Cloud/ContainerEngine/DockerEngine/public_sys-resources/icon-notice.gif diff --git a/docs/en/docs/Container/statistics.md b/docs/en/Cloud/ContainerEngine/DockerEngine/statistics.md similarity index 98% rename from docs/en/docs/Container/statistics.md rename to docs/en/Cloud/ContainerEngine/DockerEngine/statistics.md index bda570d8c5a4741abd61abedcfcadf7ad012cc83..379fef5aa97f105b3cf0736d76929530329deb43 100644 --- a/docs/en/docs/Container/statistics.md +++ b/docs/en/Cloud/ContainerEngine/DockerEngine/statistics.md @@ -5,7 +5,6 @@ - [info](#info) - [version](#version) - ## events Syntax: **docker events \[**_options_**\]** @@ -22,7 +21,7 @@ Example: After the **docker events** command is executed, a container is created and started by running the **docker run** command. create and start events are output. 
-``` +```shell $ sudo docker events 2019-08-28T16:23:09.338838795+08:00 container create 53450588a20800d8231aa1dc4439a734e16955387efb5f259c47737dba9e2b5e (image=busybox:latest, name=eager_wu) 2019-08-28T16:23:09.339909205+08:00 container attach 53450588a20800d8231aa1dc4439a734e16955387efb5f259c47737dba9e2b5e (image=busybox:latest, name=eager_wu) @@ -31,8 +30,6 @@ $ sudo docker events 2019-08-28T16:23:09.924121158+08:00 container resize 53450588a20800d8231aa1dc4439a734e16955387efb5f259c47737dba9e2b5e (height=48, image=busybox:latest, name=eager_wu, width=210) ``` -   - ## info Syntax: **docker info** @@ -43,7 +40,7 @@ Parameter description: none. Example: -``` +```shell $ sudo docker info Containers: 4 Running: 3 @@ -70,8 +67,6 @@ Storage Driver: devicemapper ...... ``` -   - ## version Syntax: **docker version** @@ -82,7 +77,7 @@ Parameter description: none. Example: -``` +```shell $ sudo docker version Client: Version: 18.09.0 @@ -105,6 +100,3 @@ Server: OS/Arch: linux/arm64 Experimental: false ``` - -   - diff --git a/docs/en/Cloud/ContainerEngine/Menu/index.md b/docs/en/Cloud/ContainerEngine/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..44b74e35f0150cdfaba45453b2203b607c3d9375 --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- + +- [Docker Container Engine]({{< relref "./DockerEngine/Menu/index.md" >}}) +- [iSula Container Engine]({{< relref "./iSulaContainerEngine/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/Menu/index.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..70a28a9c14917ebb7acf4228d36bff8e73556a6f --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/Menu/index.md @@ -0,0 +1,28 @@ +--- +headless: true +--- + +- [iSulad Container Engine]({{< relref "./isulad-container-engine.md" >}}) + - 
[Installation, Upgrade, and Uninstallation]({{< relref "./installation-upgrade-Uninstallation.md" >}}) + - [Installation and Configuration]({{< relref "./installation-configuration.md" >}}) + - [Upgrade]({{< relref "./upgrade-methods.md" >}}) + - [Uninstallation]({{< relref "./uninstallation.md" >}}) + - [User Guide]({{< relref "./user-guide.md" >}}) + - [Container Management]({{< relref "./container-management.md" >}}) + - [Interconnection with the CNI Network]({{< relref "./interconnection-with-the-cni-network.md" >}}) + - [Container Resource Management]({{< relref "./container-resource-management.md" >}}) + - [Privileged Container]({{< relref "./privileged-container.md" >}}) + - [CRI-v1alpha2]({{< relref "./cri-v1alpha2.md" >}}) + - [CRI-v1]({{< relref "./cri-v1.md" >}}) + - [Image Management]({{< relref "./image-management.md" >}}) + - [Checking the Container Health Status]({{< relref "./checking-the-container-health-status.md" >}}) + - [Querying Information]({{< relref "./querying-information.md" >}}) + - [Security Features]({{< relref "./security-features.md" >}}) + - [Supporting OCI hooks]({{< relref "./supporting-oci-hooks.md" >}}) + - [Local Volume Management]({{< relref "./local-volume-management.md" >}}) + - [Interconnecting iSulad shim v2 with StratoVirt]({{< relref "./interconnecting-isula-shim-v2-with-stratovirt.md" >}}) + - [iSulad Support for cgroup v2]({{< relref "./isulad-support-for-cgroup-v2.md" >}}) + - [iSulad Support for CDI]({{< relref "./isulad-support-for-cdi.md" >}}) + - [iSulad Support for NRI]({{< relref "./isulad-support-for-nri.md" >}}) + - [Common Issues and Solutions]({{< relref "./isula-common-issues-and-solutions.md" >}}) + - [Appendix]({{< relref "./appendix.md" >}}) diff --git a/docs/en/docs/Container/appendix.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/appendix.md similarity index 84% rename from docs/en/docs/Container/appendix.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/appendix.md index 
517681ef8b4be5881a75211d4df506868e47792d..2daa3e1e5b1fe9a3f060617bed4078c7098f4783 100644 --- a/docs/en/docs/Container/appendix.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/appendix.md @@ -16,7 +16,7 @@

Description

-

login

+

login

  

  

@@ -25,11 +25,6 @@

Specifies the iSulad socket file path to be accessed.

-

--help

- -

Prints help information.

- -

-p, --password

Specifies the password for logging in to the registry.

@@ -58,18 +53,13 @@

Description

-

logout

+

logout

-H, --host

Specifies the iSulad socket file path to be accessed.

-

--help

- -

Prints help information.

- - @@ -83,18 +73,13 @@

Description

-

pull

+

pull

-H, --host

Specifies the iSulad socket file path to be accessed.

-

--help

- -

Prints help information

- - @@ -108,7 +93,7 @@

Description

-

rmi

+

rmi

  

-H, --host

@@ -116,16 +101,6 @@

Specifies the iSulad socket file path to be accessed.

-

--help

- -

Prints help information

- - -

-D, --debug

- -

Enables debugging mode.

- -

-f, --force

Forcibly removes an image.

@@ -144,23 +119,13 @@

Description

-

load

+

load

-H, --host (supported only by iSula)

Specifies the iSulad socket file path to be accessed.

-

--help

- -

Prints help information

- - -

-D, --debug

- -

Enables debugging mode.

- -

-i, --input

Specifies where to import an image. If the image is of the docker type, the value is the image package path. If the image is of the embedded type, the value is the image manifest path.

@@ -171,6 +136,11 @@

Uses the image name specified by TAG instead of the default image name. This parameter is supported when the type is set to docker.

+

-t, --type

+ +

Specifies the image type. The value can be embedded or docker (default value).

+ + @@ -184,7 +154,7 @@

Description

-

images

+

images

  

-H, --host

@@ -192,21 +162,6 @@

Specifies the iSulad socket file path to be accessed.

-

--help

- -

Prints help information

- - -

-D, --debug

- -

Enables debugging mode.

- - -

-f, --filter

- -

Filters information about specified images.

- -

-q, --quit

Displays only the image name.

@@ -226,23 +181,13 @@ -

inspect

+

inspect

-H, --host

Specifies the iSulad socket file path to be accessed.

-

--help

- -

Prints help information

- - -

-D, --debug

- -

Enables debugging mode.

- -

-f, --format

Outputs using a template.

@@ -256,105 +201,6 @@ -**Table 8** tag command parameters - - - - - - - - - - - - - - - - - -

Command

-

Parameter

-

Description

-

tag

-

-H, --host

-

Specifies the iSulad socket file path to be accessed.

-

--help

-

Prints help information

-

-D, --debug

-

Enables debugging mode.

-
- -
- -**Table 9** import command parameters - - - - - - - - - - - - - - - - - -

Command

-

Parameter

-

Description

-

import

-

-H, --host

-

Specifies the iSulad socket file path to be accessed.

-

--help

-

Prints help information

-

-D, --debug

-

Enables debugging mode.

-
- -
- -**Table 10** export command parameters - - - - - - - - - - - - - - - - - - - - -

Command

-

Parameter

-

Description

-

export

-

-H, --host

-

Specifies the iSulad socket file path to be accessed.

-

--help

-

Prints help information

-

-D, --debug

-

Enables debugging mode.

-

-o, --output

-

Outputs to a specified file.

-
- ## CNI Parameters **Table 1** CNI single network parameters @@ -403,7 +249,7 @@

phy-direct

-

ipMasq

+

ipMasq</p>

bool

diff --git a/docs/en/docs/Container/checking-the-container-health-status.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/checking-the-container-health-status.md similarity index 95% rename from docs/en/docs/Container/checking-the-container-health-status.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/checking-the-container-health-status.md index 2a134c49067912df2121037f47e438d8a6dceb32..5db4a32ce83e5d71b4330e25ecbf498c78ea3a12 100644 --- a/docs/en/docs/Container/checking-the-container-health-status.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/checking-the-container-health-status.md @@ -35,10 +35,10 @@ The configurable options are as follows: 4. If the **cmd** command fails to be executed for the number of times specified by **retries**, the container status changes to **health:unhealthy**, and the container continues the health check. 5. When the container status is **health:unhealthy**, the container status changes to **health:healthy** if a check succeeds. 6. If **--exit-on-unhealthy** is set, and the container exits due to reasons other than being killed \(the returned exit code is **137**\), the health check takes effect only after the container is restarted. -7. When the **cmd** command execution is complete or times out, iSulad daemon will record the start time, return value, and standard output of the check to the configuration file of the container. A maximum of five records can be recorded. In addition, the configuration file of the container stores health check parameters. +7. When the **cmd** command execution is complete or times out, Docker daemon will record the start time, return value, and standard output of the check to the configuration file of the container. A maximum of five records can be recorded. In addition, the configuration file of the container stores health check parameters. 8. When the container is running, the health check status is written into the container configurations. 
You can run the **isula inspect** command to view the status. -```text +```json "Health": { "Status": "healthy", "FailingStreak": 0, @@ -65,3 +65,4 @@ The configurable options are as follows: - If health check parameters are set to **0** during container startup, the default values are used. - After a container with configured health check parameters is started, if iSulad daemon exits, the health check is not executed. After iSulad daemon is restarted, the health status of the running container changes to **starting**. Afterwards, the check rules are the same as above. - If the health check fails for the first time, the health check status will not change from **starting** to **unhealthy** until the specified number of retries \(**--health-retries**\) is reached, or to **healthy** until the health check succeeds. +- Health check is not yet fully implemented for containers whose runtime is of the Open Container Initiative \(OCI\) type. Only containers whose runtime is of the LCR type are supported. 
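The retry rule in steps 4 and 5 above — the status flips to **unhealthy** only after every retry fails, and a single later success flips it back to **healthy** — can be sketched in plain shell. `health_status` is an illustrative helper written for this sketch, not part of the isula CLI:

```shell
# Sketch of the documented retry rule: the check command is retried up to
# "retries" times; the status becomes unhealthy only after every attempt
# fails, and any single success yields healthy.
# health_status is an illustrative helper, not an isula subcommand.
health_status() {
  retries=$1; shift
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@" >/dev/null 2>&1; then
      echo healthy
      return 0
    fi
    i=$((i + 1))
  done
  echo unhealthy
}

health_status 3 true    # a check command that succeeds -> healthy
health_status 3 false   # a check command that always fails -> unhealthy
```

In a real container, the same transition is visible through the `Health.Status` field of the `isula inspect` output shown above.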
diff --git a/docs/en/docs/Container/container-management.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/container-management.md similarity index 67% rename from docs/en/docs/Container/container-management.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/container-management.md index 019bb44e2db953f8d1a7ae94b64ebfefa0c1a15d..45b930ee6c5e8959afebcbe5367b76a671fef5df 100644 --- a/docs/en/docs/Container/container-management.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/container-management.md @@ -18,8 +18,8 @@ - [Displaying Resource Usage Statistics of a Container](#displaying-resource-usage-statistics-of-a-container) - [Obtaining Container Logs](#obtaining-container-logs) - [Copying Data Between a Container and a Host](#copying-data-between-a-container-and-a-host) - - [Pausing All Processes in a Container](#pausing-all-processes-in-a-container) - - [Resuming All Processes in a Container](#resuming-all-processes-in-a-container) + - [Pausing a Container](#pausing-a-container) + - [Resuming a Container](#resuming-a-container) - [Obtaining Event Messages from the Server in Real Time](#obtaining-event-messages-from-the-server-in-real-time) ## Creating a Container @@ -48,15 +48,10 @@ The following table lists the parameters supported by the **create** command.

Description

-

create

+

create

  

-

--add-host

- -

Adds the mapping between the custom host and the IP address (host:ip).

- - -

--annotation

+

--annotation

Sets annotations for the container. For example, set the native.umask parameter.

--annotation native.umask=normal #The umask value of the started container is 0022.
@@ -64,21 +59,6 @@ The following table lists the parameters supported by the  **create**  command.
 

If this parameter is not set, the umask configuration in iSulad is used.

-

--blkio-weight

- -

Specifies the block I/O (relative weight). The value ranges from 10 to 1000. The default value is 0, indicating that this function is disabled.

- - -

--blkio-weight-device

- -

Specifies the block I/O weight (relative device weight). The format is DEVICE_NAME: weight. The weight value ranges from 10 to 1000. The default value is 0, indicating that this function is disabled.

- - -

--cap-add

- -

Adds the Linux permission function.

- -

--cap-drop

Deletes Linux permissions.

@@ -89,24 +69,9 @@ The following table lists the parameters supported by the **create** command.

Specifies the cgroup parent path of the container.

-

--cpu-period

- -

Limits the period of CPU CFS.

- - -

--cpu-quota

- -

Limits the CPU CFS quota.

- - -

--cpu-rt-period

- -

Limits the real-time CPU period (in microseconds).

- - -

--cpu-rt-runtime

+

--cpuset-cpus

-

Limits the real-time running time of the CPU (in microseconds).

+

Allowed CPUs (for example, 0-3, 0, 1).

--cpu-shares

@@ -114,51 +79,16 @@ The following table lists the parameters supported by the **create** command.

CPU share (relative weight).

-

--cpus

- -

Specifies the number of CPUs.

- - -

--cpuset-cpus

- -

Specifies the CPU that can be executed. Example values: 0-3, 0, 1.

- - -

--cpuset-mems

+

--cpu-quota

-

Specifies memory that can be executed. Example values: 0-3, 0, 1.

+

Limits the CPU CFS quota.

-

--device

+

--device=[]

Adds a device to the container.

-

--device-cgroup-rule

- -

Adds a rule to the list of devices allowed by the cgroup.

- - -

--device-read-bps

- -

Limits the read rate (bytes per second) of the device.

- - -

--device-read-iops

- -

Limits the read rate (I/Os per second) of the device.

- - -

--device-write-bps

- -

Limits the write rate (bytes per second) of the device.

- - -

--device-write-iops

- -

Limits the write rate (I/Os per second) of the device.

- -

--dns

Adds a DNS server.

@@ -174,11 +104,6 @@ The following table lists the parameters supported by the **create** command.

Sets the search domain of a container.

-

--entrypoint

- -

Specifies the entry point to be run when a container is started.

- -

-e, --env

Sets environment variables.

@@ -189,9 +114,9 @@ The following table lists the parameters supported by the **create** command.

Configures environment variables using a file.

-

--env-target-file

+

--entrypoint

-

Specifies the target file path in rootfs to which environment variables are exported.

+

Entry point to run when the container is started.

--external-rootfs=PATH

@@ -211,7 +136,7 @@ The following table lists the parameters supported by the **create** command.

--help

-

Prints help information.

+

Displays help information.

--health-cmd

@@ -254,34 +179,24 @@ The following table lists the parameters supported by the **create** command.

Specifies the iSulad socket file path to be accessed.

-

--host-channel

- -

Creates the shared memory between the host and the container.

- -

-h, --hostname

Container host name.

-

--hugetlb-limit=[]

- -

Limits the huge page file. For example, --hugetlb-limit 2MB:32MB.

- -

-i, --interactive

Enables the standard input of the container even if it is not connected to the standard input of the container.

-

--ipc

+

--hugetlb-limit=[]

-

Specifies the IPC namespace.

+

Limits the size of huge-page files, for example, --hugetlb-limit 2MB:32MB.

-

--kernel-memory

+

--log-opt=[]

-

Limits the kernel memory.

+

Log driver option. By default, the container serial port log function is disabled. You can run the --log-opt disable-log=false command to enable it.

-l,--label

@@ -294,16 +209,6 @@ The following table lists the parameters supported by the **create** command.

Sets container labels using files.

-

--log-driver

- -

Records the container driver.

- - -

--log-opt=[]

- -

Log driver option. By default, the function of recording container serial port logs is disabled. You can enable it by setting --log-opt disable-log=false.

- -

-m, --memory

Memory limit.

@@ -321,7 +226,7 @@ The following table lists the parameters supported by the **create** command.

--memory-swappiness

-

The value of swappiness is a positive integer ranging from 0 to 100. The smaller the value is, the less the swap partition is used and the more the memory is used in the Linux system. The larger the value is, the more the swap space is used by the kernel. The default value is –1, indicating that the default system value is used.

+

The value of swappiness is a positive integer ranging from 0 to 100. The smaller the value is, the less the swap partition is used and the more the memory is used in the Linux system. The larger the value is, the more the swap space is used by the kernel. The default value is -1, indicating that the default system value is used.

--mount

@@ -329,39 +234,19 @@ The following table lists the parameters supported by the **create** command.

Mounts the host directory, volume, or file system to the container.

-

--name=NAME

- -

Container name.

- - -

--net=none

- -

Connects the container to the network.

- -

--no-healthcheck

Disables the health check configuration.

-

--ns-change-opt

- -

Namespace kernel parameter option of the system container.

- - -

--oom-kill-disable

- -

Disables OOM.

- - -

--oom-score-adj

+

--name=NAME

-

Adjusts the OOM preference of the host (from -1000 to 1000).

+

Container name.

-

--pid

+

--net=none

-

Specifies the PID namespace to be used.

+

Connects a container to a network.

--pids-limit

@@ -374,11 +259,6 @@ The following table lists the parameters supported by the **create** command.

Grants container extension privileges.

-

--pull

- -

Pulls the image before running.

- -

-R, --runtime

Container runtime. The parameter value can be lcr, which is case insensitive. Therefore, LCR and lcr are equivalent.

@@ -395,41 +275,11 @@ The following table lists the parameters supported by the **create** command.

For a system container, --restart on-reboot is supported.

-

--security-opt

- -

Security option.

- - -

--shm-size

- -

Size of /dev/shm. The default value is 64MB.

- - -

--stop-signal

- -

Stop signal for a container. The default value is SIGTERM.

- -

--storage-opt

Configures the storage driver option for a container.

-

--sysctl

- -

Sets the sysctl option.

- - -

--system-container

- -

Starts the system container.

- - -

--tmpfs

- -

Mounts the tmpfs directory.

- -

-t, --tty

Allocates a pseudo terminal.

@@ -445,21 +295,6 @@ The following table lists the parameters supported by the **create** command.

User name or UID, in the format of [<name|uid>][:<group|gid>].

-

--user-remap

- -

Maps users to the system container.

- - -

--userns

- -

Sets the user command space for a container when the user-remap option is enabled.

- - -

--uts

- -

Sets the PID namespace.

- -

-v, --volume=[]

Mounts a volume.

@@ -470,11 +305,6 @@ The following table lists the parameters supported by the **create** command.

Uses the mounting configuration of the specified container.

-

--workdir

- -

Sets the working directory in the container.

- - @@ -501,7 +331,7 @@ Create a container. $ isula create busybox fd7376591a9c3d8ee9a14f5d2c2e5255b02cc44cddaabca82170efd4497510e1 $ isula ps -a -STATUS PID IMAGE COMMAND EXIT_CODE RESTART_COUNT STARTAT FINISHAT RUNTIME ID NAMES +STATUS PID IMAGE COMMAND EXIT_CODE RESTART_COUNT STARTAT FINISHAT RUNTIME ID NAMES inited - busybox "sh" 0 0 - - lcr fd7376591a9c fd7376591a9c4521... ``` ## Starting a Container @@ -530,26 +360,16 @@ The following table lists the parameters supported by the **start** command.

Description

-

start

+

start

-H, --host

Specifies the iSulad socket file path to be accessed.

-

-a, --attach

- -

Connects to STDOUT and STDERR of the container.

- - -

-D, --debug

- -

Enables the debug mode.

- - -

--help

+

-R, --runtime

-

Prints help information.

+

Container runtime. The parameter value can be lcr, which is case insensitive. Therefore, LCR and lcr are equivalent.

@@ -589,7 +409,7 @@ The following table lists the parameters supported by the **run** command.

Description

-

run

+

run

--annotation

@@ -599,21 +419,6 @@ The following table lists the parameters supported by the **run** command.

If this parameter is not set, the umask configuration in iSulad is used.

-

--add-host

- -

Adds the mapping between the custom host and the IP address (host:ip).

- - -

--blkio-weight

- -

Specifies the block I/O (relative weight). The value ranges from 10 to 1000. The default value is 0, indicating that this function is disabled.

- - -

--blkio-weight-device

- -

Specifies the block I/O weight (relative device weight). The format is DEVICE_NAME: weight. The weight value ranges from 10 to 1000. The default value is 0, indicating that this function is disabled.

- -

--cap-add

Adds Linux functions.

@@ -629,24 +434,9 @@ The following table lists the parameters supported by the **run** command.

Specifies the cgroup parent path of the container.

-

--cpu-period

- -

Limits the period of CPU CFS.

- - -

--cpu-quota

- -

Limits the CPU CFS quota.

- - -

--cpu-rt-period

- -

Limits the real-time CPU period (in microseconds).

- - -

--cpu-rt-runtime

+

--cpuset-cpus

-

Limits the real-time running time of the CPU (in microseconds).

+

Allowed CPUs (for example, 0-3, 0, 1).

--cpu-shares

@@ -654,19 +444,9 @@ The following table lists the parameters supported by the **run** command.

CPU share (relative weight).

-

--cpus

- -

Specifies the number of CPUs.

- - -

--cpuset-cpus

- -

Specifies the CPU that can be executed. Example values: 0-3, 0, 1.

- - -

--cpuset-mems

+

--cpu-quota

-

Specifies memory that can be executed. Example values: 0-3, 0, 1.

+

Limits the CPU CFS quota.

-d, --detach

@@ -679,31 +459,6 @@ The following table lists the parameters supported by the **run** command.

Adds a device to the container.

-

--device-cgroup-rule

- -

Adds a rule to the list of devices allowed by the cgroup.

- - -

--device-read-bps

- -

Limits the read rate (bytes per second) of the device.

- - -

--device-read-iops

- -

Limits the read rate (I/Os per second) of the device.

- - -

--device-write-bps

- -

Limits the write rate (bytes per second) of the device.

- - -

--device-write-iops

- -

Limits the write rate (I/Os per second) of the device.

- -

--dns

Adds a DNS server.

@@ -719,11 +474,6 @@ The following table lists the parameters supported by the **run** command.

Sets the search domain of a container.

-

--entrypoint

- -

Specifies the entry point to be run when a container is started.

- -

-e, --env

Sets environment variables.

@@ -734,9 +484,9 @@ The following table lists the parameters supported by the **run** command.

Configures environment variables using a file.

-

--env-target-file

+

--entrypoint

-

Specifies the target file path in rootfs to which environment variables are exported.

+

Entry point to run when the container is started.

--external-rootfs=PATH

@@ -756,7 +506,7 @@ The following table lists the parameters supported by the **run** command.

--help

-

Prints help information.

+

Displays help information.

--health-cmd

@@ -799,11 +549,6 @@ The following table lists the parameters supported by the **run** command.

Specifies the iSulad socket file path to be accessed.

-

--host-channel

- -

Creates the shared memory between the host and the container.

- -

-h, --hostname

Container host name.

@@ -819,31 +564,6 @@ The following table lists the parameters supported by the **run** command.

Enables the standard input of the container even if it is not connected to the standard input of the container.

-

--ipc

- -

Specifies the IPC namespace.

- - -

--kernel-memory

- -

Limits the kernel memory.

- - -

-l, --label

- -

Sets a label for a container.

- - -

--lablel-file

- -

Sets the container label through a file.

- - -

--log-driver

- -

Sets the log driver. syslog and json-file are supported.

- -

--log-opt=[]

Log driver option. By default, the container serial port log function is disabled. You can run the --log-opt disable-log=false command to enable it.

@@ -866,7 +586,7 @@ The following table lists the parameters supported by the **run** command.

--memory-swappiness

-

The value of swappiness is a positive integer ranging from 0 to 100. The smaller the value is, the less the swap partition is used and the more the memory is used in the Linux system. The larger the value is, the more the swap space is used by the kernel. The default value is –1, indicating that the default system value is used.

+

The value of swappiness is a positive integer ranging from 0 to 100. The smaller the value is, the less the swap partition is used and the more the memory is used in the Linux system. The larger the value is, the more the swap space is used by the kernel. The default value is -1, indicating that the default system value is used.

--mount

@@ -874,39 +594,19 @@ The following table lists the parameters supported by the **run** command.

Mounts a host directory to a container.

-

--name=NAME

- -

Container name

- - -

--net=none

- -

Connects a container to the network.

- -

--no-healthcheck

Disables the health check configuration.

-

--ns-change-opt

- -

Namespace kernel parameter option of the system container.

- - -

--oom-kill-disable

- -

Disables OOM.

- - -

--oom-score-adj

+

--name=NAME

-

Adjusts the OOM preference of the host (from -1000 to 1000).

+

Container name.

-

--pid

+

--net=none

-

Specifies the PID namespace to be used.

+

Connects a container to a network.

--pids-limit

@@ -919,11 +619,6 @@ The following table lists the parameters supported by the **run** command.

Grants container extension privileges.

-

--pull

- -

Pulls the image before running.

- -

-R, --runtime

Container runtime. The parameter value can be lcr, which is case insensitive. Therefore, LCR and lcr are equivalent.

@@ -945,39 +640,9 @@ The following table lists the parameters supported by the **run** command.

Automatically clears a container upon exit.

-

--security-opt

+

--storage-opt

-

Security option.

- - -

--shm-size

- -

Size of /dev/shm. The default value is 64MB.

- - -

--stop-signal

- -

Stop signal for a container. The default value is SIGTERM.

- - -

--storage-opt

- -

Configures the storage driver option of a container.

- - -

--sysctl

- -

Sets the sysctl option.

- - -

--system-container

- -

Starts the system container.

- - -

--tmpfs

- -

Mounts the tmpfs directory.

+

Configures the storage driver option for a container.

-t, --tty

@@ -995,36 +660,11 @@ The following table lists the parameters supported by the **run** command.

User name or UID, in the format of [<name|uid>][:<group|gid>].

-

--user-remap

- -

Maps users to the system container.

- - -

--userns

- -

Sets the user command space for a container when the user-remap option is enabled.

- - -

--uts

- -

Sets the PID namespace.

- -

-v, --volume=[]

Mounts a volume.

-

--volumes-from=[]

- -

Uses the mounting configuration of the specified container.

- - -

--workdir

- -

Sets the working directory in the container.

### Constraints

- The entry point specified by **--entrypoint** does not exist.
- When the **--volume** parameter is used, **/dev/ptmx** will be deleted and recreated during container startup. Therefore, do not mount the **/dev** directory to that of the container. Use **--device** to mount the devices in **/dev** of the container.
- Do not use the echo option to input data to the standard input of the **run** command. Otherwise, the client will be suspended. The echo value should be directly transferred to the container as a command line parameter.

  ```shell
  [root@localhost ~]# echo ls | isula run -i busybox /bin/sh


  ^C
  [root@localhost ~]#
  ```

  The client is suspended when the preceding command is executed because the command is equivalent to inputting **ls** to **stdin**. Then EOF is read, the client stops sending data and waits for the server to exit, but the server cannot determine whether the client needs to continue sending data. As a result, the server is suspended in reading data, and both parties wait indefinitely.

  The correct execution method is as follows:

  ```shell
  [root@localhost ~]# isula run -i busybox ls
  bin
  dev
  etc
  ...
  tmp
  usr
  var
  [root@localhost ~]#
  ```

- If the root directory \(/\) of the host is used as the file system of the container, the following situations may occur during the mounting:

  > ![](./public_sys-resources/icon-notice.gif)**NOTICE:**
  > Scenario 1: Mount **/home/test1** and then **/home/test2**. In this case, the content in **/home/test1** overwrites the content in **/mnt**. As a result, the **abc** directory does not exist in **/mnt**, and mounting **/home/test2** to **/mnt/abc** fails.
  > Scenario 2: Mount **/home/test2** and then **/home/test1**. In this case, the content of **/mnt** is replaced with the content of **/home/test1** during the second mounting. In this way, the content mounted during the first mounting from **/home/test2** to **/mnt/abc** is overwritten.
  > The first scenario is not supported. For the second scenario, users need to understand the risk of data access failures.

- Exercise caution when configuring the **/sys** and **/proc** directories as writable. The **/sys** and **/proc** directories contain interfaces for Linux to manage kernel parameters and devices. Configuring these directories as writable in a container may lead to container escape.
- Exercise caution when configuring containers to share namespaces with the host. For example, using **--pid**, **--ipc**, **--uts**, or **--net** to share namespaces between the container and the host eliminates namespace isolation between them. This allows attacks on the host from within the container. For instance, using **--pid** to share the PID namespace with the host enables the container to view and kill processes on the host.
- Exercise caution when using parameters like **--device** or **-v** to mount host resources. Avoid mapping sensitive directories or devices of the host into the container to prevent sensitive information leakage.
- Exercise caution when starting containers with the **--privileged** option. The **--privileged** option grants excessive permissions to the container, which can affect the host configuration.

> ![](./public_sys-resources/icon-notice.gif)**NOTICE:**
> In high concurrency scenarios \(200 containers are concurrently started\), the memory management mechanism of Glibc may cause memory holes and large virtual memory \(for example, 10 GB\). This problem is caused by the restriction of the Glibc memory management mechanism in the high concurrency scenario, but not by memory leakage. Therefore, the memory consumption does not increase infinitely. You can set the **MALLOC\_ARENA\_MAX** environment variable to reduce the virtual memory and increase the probability of reducing the physical memory. However, this environment variable will cause the iSulad concurrency performance to deteriorate. Set this environment variable based on the site requirements.
> To balance performance and memory usage, set MALLOC_ARENA_MAX to 4. (The iSulad performance deterioration on the ARM64 server is controlled to less than 10%.)
> Configuration method:
> 1. To manually start iSulad, run the **export MALLOC_ARENA_MAX=4** command and then start iSulad.
> 2. If systemd manages iSulad, add MALLOC_ARENA_MAX=4 to the **/etc/sysconfig/iSulad** file.

### Example

Run a new container.

```shell
$ isula run -itd busybox
9c2c13b6c35f132f49fb7ffad24f9e673a07b7fe9918f97c0591f0d7014c713b
```

The following table lists the parameters supported by the **stop** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| stop | -f, --force | Forcibly stops a running container. |
| | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | -t, --time | Time for graceful stop. If the time exceeds the value of this parameter, the container is forcibly stopped. |
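For instance, the **-t** option above bounds the graceful-stop window; a hypothetical invocation (the container ID is illustrative):

```shell
# Wait up to 10 seconds for a graceful stop, then force-stop the container.
isula stop -t 10 fd7376591a9c
```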

### Example

Stop a container.

```shell
$ isula stop fd7376591a9c3d8ee9a14f5d2c2e5255b02cc44cddaabca82170efd4497510e1
fd7376591a9c3d8ee9a14f5d2c2e5255b02cc44cddaabca82170efd4497510e1
```

The following table lists the parameters supported by the **kill** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| kill | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | -s, --signal | Signal sent to the container. |
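For instance, the **-s** option above selects the signal to send; a hypothetical invocation (the container ID is illustrative):

```shell
# Send SIGTERM to the container instead of the default signal.
isula kill -s SIGTERM fd7376591a9c
```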

### Example

Kill a container.

```shell
$ isula kill fd7376591a9c3d8ee9a14f5d2c2e5255b02cc44cddaabca82170efd4497510e1
fd7376591a9c3d8ee9a14f5d2c2e5255b02cc44cddaabca82170efd4497510e1
```

The following table lists the parameters supported by the **rm** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| rm | -f, --force | Forcibly removes a running container. |
| | -H, --host | Specifies the iSulad socket file path to be accessed. |

### Constraints

- In normal I/O scenarios, it takes T1 to delete a running container in an empty environment \(with only one container\). In an environment with 200 containers \(without a large number of I/O operations and with normal host I/O\), it takes T2 to delete a running container. The specification of T2 is as follows: T2 = max \{T1 x 3, 5\}s.

### Example

Delete a stopped container.

```shell
$ isula rm fd7376591a9c3d8ee9a14f5d2c2e5255b02cc44cddaabca82170efd4497510e1
fd7376591a9c3d8ee9a14f5d2c2e5255b02cc44cddaabca82170efd4497510e1
```

### Description

To attach standard input, standard output, and standard error of the current terminal to a running container, run the **isula attach** command. Only containers whose runtime is of the LCR type are supported.

The following table lists the parameters supported by the **attach** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| attach | --help | Displays help information. |
| | -H, --host | Specifies the iSulad socket file path to be accessed. |

### Example

Attach to a running container.

```shell
$ isula attach fd7376591a9c3d8ee9a14f5d2c2e5255b02cc44cddaabca82170efd4497510e1
/ #
/ #
```

The following table lists the parameters supported by the **rename** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| rename | -H, --host | Specifies the path of the iSulad socket file to be connected. |
| | / | Renames a container. |

The following table lists the parameters supported by the **exec** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| exec | -d, --detach | Runs a command in the background. |
| | -e, --env | Sets environment variables. (Note: Currently, iSulad does not use this function.) |
| | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | -u, --user | Logs in to the container as a specified user. |
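Combining the options in this section, a hypothetical invocation runs a command in the background as a specific user (the container ID, user name, and file path are illustrative, and the option pairing is an assumption based on the parameter list above):

```shell
# Run a command in the background inside the container as user "test".
isula exec -d -u test c75284634bee touch /tmp/marker
```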
@@ -1557,9 +1130,8 @@ The following table lists the parameters supported by the **exec** command. 2. After entering the container, run the **script &** command. 3. Run the **exit** command. The terminal stops responding. - >After the isula exec command is executed to enter the container, the background program stops responding because the isula exec command is executed to enter the container and run the background while1 program. When the bash command is run to exit the process, the while1 program does not exit and becomes an orphan process, which is taken over by process 1. - >The while1 process is executed by the initial bash process fork &exec of the container. The while1 process copies the file handle of the bash process. As a result, the handle is not completely closed when the bash process exits. - >The console process cannot receive the handle closing event, epoll_wait stops responding, and the process does not exit. + After the **isula exec** command is executed to enter the container, the background program stops responding because the **isula exec** command is executed to enter the container and run the background while1 program. When Bash exits, the while1 program does not exit and becomes an orphan process, which is taken over by process 1. + The the while1 process is executed by the initial Bash process **fork &exec** of the container. The while1 process copies the file handle of the Bash process. As a result, the handle is not completely closed when the Bash process exits. The console process cannot receive the handle closing event, epoll_wait stops responding, and the process does not exit. - Do not run the **isula exec** command in the background. Otherwise, the system may be suspended. @@ -1576,28 +1148,28 @@ The following table lists the parameters supported by the **exec** command. Cause: Run the **ls /test** command using **exec**. The command output contains a line feed character. Run the**| grep "xx" | wc -l** command for the output. 
The processing result is 2 \(two lines\). ```shell - # isula exec -it container ls /test + [root@localhost ~]# isula exec -it container ls /test xx xx10 xx12 xx14 xx3 xx5 xx7 xx9 xx1 xx11 xx13 xx2 xx4 xx6 xx8 - # + [root@localhost ~]# ``` Suggestion: When running the **run/exec** command to perform pipe operations, run the **/bin/bash -c** command to perform pipe operations in the container. ```shell - # isula exec -it container /bin/sh -c "ls /test | grep "xx" | wc -l" + [root@localhost ~]# isula exec -it container /bin/sh -c "ls /test | grep "xx" | wc -l" 15 - # + [root@localhost ~]# ``` - Do not use the **echo** option to input data to the standard input of the **exec** command. Otherwise, the client will be suspended. The echo value should be directly transferred to the container as a command line parameter. ```shell - # echo ls | isula exec 38 /bin/sh - - + [root@localhost ~]# echo ls | isula exec 38 /bin/sh + + ^C - # + [root@localhost ~]# ``` The client is suspended when the preceding command is executed because the preceding command is equivalent to input **ls** to **stdin**. Then EOF is read and the client does not send data and waits for the server to exit. However, the server cannot determine whether the client needs to continue sending data. As a result, the server is suspended in reading data, and both parties are suspended. @@ -1605,7 +1177,7 @@ The following table lists the parameters supported by the **exec** command. The correct execution method is as follows: ```shell - # isula exec 38 ls + [root@localhost ~]# isula exec 38 ls bin dev etc home proc root sys tmp usr var ``` @@ -1614,7 +1186,7 @@ The following table lists the parameters supported by the **exec** command. Run the echo command in a running container. ```shell -# isula exec c75284634bee echo "hello,world" +$ isula exec c75284634bee echo "hello,world" hello,world ``` @@ -1644,7 +1216,7 @@ The following table lists the parameters supported by the **inspect** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| inspect | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | -f, --format | Output format. |
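The **-f** option above accepts a Go-template-style format string; a hypothetical invocation (the container ID is illustrative):

```shell
# Print only the State object of the container as JSON.
isula inspect -f '{{json .State}}' c75284634bee
```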

### Constraints

- Lightweight containers do not support the output in **{{.State}}** format but support the output in the **{{json .State}}** format. The **-f** parameter is not supported when the object is an image.

### Example

Query information about a container.

```shell
$ isula inspect c75284634bee
[
    {
        "Id": "c75284634beeede3ab86c828790b439d16b6ed8a537550456b1f94eb852c1c0a",
        ...
    }
]
```

The following table lists the parameters supported by the **ps** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| ps | -a, --all | Displays all containers. |
| | -H, --host | Specifies the iSulad socket file path to be accessed. |

### Example

Query information about all containers.

```shell
$ isula ps -a
ID            IMAGE                                      STATUS   PID     COMMAND  EXIT_CODE  RESTART_COUNT  STARTAT         FINISHAT  RUNTIME  NAMES
e84660aa059c  rnd-dockerhub.huawei.com/official/busybox  running  304765  "sh"     0          0              13 minutes ago  -         lcr      e84660aa059cafb0a77a4002e65cc9186949132b8e57b7f4d76aa22f28fde016
$ isula ps -a --format "table {{.ID}} {{.Image}}" --no-trunc
ID                                                                IMAGE
e84660aa059cafb0a77a4002e65cc9186949132b8e57b7f4d76aa22f28fde016  rnd-dockerhub.huawei.com/official/busybox
```

The following table lists the parameters supported by the **restart** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| restart | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | -t, --time | Time for graceful stop. If the time exceeds the value of this parameter, the container is forcibly stopped. |
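As with **stop**, the **-t** option above bounds the graceful-stop phase of a restart; a hypothetical invocation (the container ID is illustrative):

```shell
# Allow 10 seconds for the graceful stop phase before the container is restarted.
isula restart -t 10 c75284634bee
```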

### Example

Restart a container.

```shell
$ isula restart c75284634beeede3ab86c828790b439d16b6ed8a537550456b1f94eb852c1c0a
c75284634beeede3ab86c828790b439d16b6ed8a537550456b1f94eb852c1c0a
```

## Waiting for a Container to Exit

The following table lists the parameters supported by the **wait** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| wait | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | / | Blocks until the container stops and displays the exit code. |
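Because **wait** blocks and then prints the exit code, it can drive simple shell logic; the sketch below wraps it in a helper function (the function name and container name are hypothetical):

```shell
# wait_and_report blocks on `isula wait` and branches on the printed exit code.
wait_and_report() {
    code=$(isula wait "$1") || return 1
    if [ "$code" -eq 0 ]; then
        echo "container $1 exited cleanly"
    else
        echo "container $1 failed with exit code $code"
    fi
}

# Usage (hypothetical container name):
# wait_and_report mycontainer
```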

### Example

Wait for a single container to exit.

```shell
$ isula wait c75284634beeede3ab86c828790b439d16b6ed8a537550456b1f94eb852c1c0a
137
```

## Viewing Process Information in a Container

The following table lists the parameters supported by the **top** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| top | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | / | Queries the process information of a running container. |

### Example

Query process information in a container.

```shell
$ isula top 21fac8bb9ea8e0be4313c8acea765c8b4798b7d06e043bbab99fc20efa72629c
UID    PID    PPID   C  STIME  TTY    TIME      CMD
root   22166  22163  0  23:04  pts/1  00:00:00  sh
```

The following table lists the parameters supported by the **stats** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| stats | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | -a, --all | Displays all containers. (By default, only running containers are displayed.) |
| | --no-stream | Displays the first result only. Only statistics in non-stream mode are displayed. |
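Combining the options above, a hypothetical one-shot query prints a single snapshot for every container instead of streaming updates:

```shell
# Print one snapshot of resource statistics for all containers, then exit.
isula stats -a --no-stream
```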

### Example

Display resource usage statistics.

```shell
$ isula stats --no-stream 21fac8bb9ea8e0be4313c8acea765c8b4798b7d06e043bbab99fc20efa72629c
CONTAINER     CPU %  MEM USAGE / LIMIT     MEM %  BLOCK I / O      PIDS
21fac8bb9ea8  0.00   56.00 KiB / 7.45 GiB  0.00   0.00 B / 0.00 B  1
```

## Obtaining Container Logs

### Description

To obtain container logs, run the **isula logs** command. Only containers whose runtime is of the LCR type are supported.

The following table lists the parameters supported by the **logs** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| logs | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | -f, --follow | Traces log output. |
| | --tail | Displays the number of log records. |
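Following the options above, a hypothetical invocation streams new log lines as they arrive (the tail count and container ID are illustrative, and the **--tail** spelling is an assumption from the parameter list):

```shell
# Show the last 10 log records, then keep following new output.
isula logs -f --tail 10 6a144695f5da
```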

### Example

Obtain container logs.

```shell
$ isula logs 6a144695f5dae81e22700a8a78fac28b19f8bf40e8827568b3329c7d4f742406
hello, world
hello, world
hello, world
```

The following table lists the parameters supported by the **cp** command.

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| cp | -H, --host | Specifies the iSulad socket file path to be accessed. |
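**cp** also works in the host-to-container direction; a hypothetical invocation (the paths and container ID are illustrative):

```shell
# Copy a file from the host into the container's /tmp directory.
isula cp /tmp/host.file 21fac8bb9ea8:/tmp/
```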
### Example @@ -2321,16 +1812,16 @@ Copy the **/www** directory on container 21fac8bb9ea8 to the **/tmp** direct isula cp 21fac8bb9ea8:/www /tmp/ ``` -## Pausing All Processes in a Container +## Pausing a Container ### Description -The **isula pause** command is used to pause all processes in one or more containers. +To pause all processes in a container, run the **isula pause** command. Only containers whose runtime is of the LCR type are supported. ### Usage ```shell -isula pause [OPTIONS] CONTAINER [CONTAINER...] +isula pause CONTAINER [CONTAINER...] ``` ### Parameters @@ -2343,23 +1834,13 @@ isula pause [OPTIONS] CONTAINER [CONTAINER...]

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| pause | -H, --host | Specifies the iSulad socket file path to be accessed. |

### Example

Pause a running container.

```shell
$ isula pause 8fe25506fb5883b74c2457f453a960d1ae27a24ee45cdd78fb7426d2022a8bac
8fe25506fb5883b74c2457f453a960d1ae27a24ee45cdd78fb7426d2022a8bac
```

## Resuming a Container

### Description

To resume all processes in a container, run the **isula unpause** command. It is the reverse process of **isula pause**. Only containers whose runtime is of the LCR type are supported.

### Usage

```shell
isula unpause CONTAINER [CONTAINER...]
```

### Parameters

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| unpause | -H, --host | Specifies the iSulad socket file path to be accessed. |

### Example

Resume a paused container.

```shell
$ isula unpause 8fe25506fb5883b74c2457f453a960d1ae27a24ee45cdd78fb7426d2022a8bac
8fe25506fb5883b74c2457f453a960d1ae27a24ee45cdd78fb7426d2022a8bac
```

## Obtaining Event Messages from the Server in Real Time

### Description

The **isula events** command is used to obtain event messages, such as container and image lifecycle and running events, from the server in real time. Only containers whose runtime type is **lcr** are supported.

### Usage

```shell
isula events [OPTIONS]
```

**Table 1** Parameter description

| Command | Parameter | Description |
|---------|-----------|-------------|
| events | -H, --host | Specifies the iSulad socket file path to be accessed. |
| | -n, --name | Obtains event messages of a specified container. |
| | -S, --since | Obtains event messages generated since a specified time. |

### Example

Run the following command to obtain event messages from the server in real time:

```shell
isula events
```

# Container Resource Management

- [Container Resource Management](#container-resource-management)
  - [Sharing Resources](#sharing-resources)
  - [Restricting CPU Resources of a Running Container](#restricting-cpu-resources-of-a-running-container)
  - [Restricting the Memory Usage of a Running Container](#restricting-the-memory-usage-of-a-running-container)
  - [Restricting the Number of Processes or Threads that Can Be Created in a Container](#restricting-the-number-of-processes-or-threads-that-can-be-created-in-a-container)
  - [Configuring the ulimit Value in a Container](#configuring-the-ulimit-value-in-a-container)
## Sharing Resources

### Description

Containers or containers and hosts can share namespace information mutually, including PID, network, IPC, and UTS information.

### Usage

When running the **isula create/run** command, you can set the namespace parameters to share resources. For details, see the following parameter description table.

## Restricting CPU Resources of a Running Container

To restrict a container to use a specific CPU, add **--cpuset-cpus number** when starting the container:

```shell
isula run -tid --cpuset-cpus 0,2-3 busybox sh
```

> ![](./public_sys-resources/icon-note.gif)**NOTE:**
> You can check whether the configuration is successful. For details, see "Querying Information About a Single Container."
## Restricting the Memory Usage of a Running Container @@ -306,13 +280,13 @@ When running the **isula create/run** command, set **--device-read/write-bps* To limit the read/write speed of devices in the container, add **--device-write-bps/--device-read-bps :\[\]** when running the container. For example, to limit the read speed of the device **/dev/sda** in the container **busybox** to 1 Mbit/s, run the following command: ```shell -isula run -tid --device-read-bps /dev/sda:1mb busybox sh +isula run -tid --device-read-bps /dev/sda:1mb busybox sh ``` To limit the write speed, run the following command: ```shell -isula run -tid --device-write-bps /dev/sda:1mb busybox sh +isula run -tid --device-write-bps /dev/sda:1mb busybox sh ``` ## Restricting the Rootfs Storage Space of a Container @@ -345,8 +319,8 @@ This feature is implemented by the project quota function of the EXT4 file syste - Format and mount the file system. ```shell - mkfs.ext4 -O quota,project /dev/sdb - mount -o prjquota /dev/sdb /var/lib/isulad + # mkfs.ext4 -O quota,project /dev/sdb + # mount -o prjquota /dev/sdb /var/lib/isulad ``` ### Parameters @@ -368,7 +342,7 @@ When running the **create/run** command, set **--storage-opt**. - @@ -380,7 +354,7 @@ When running the **create/run** command, set **--storage-opt**. In the **isula run/create** command, use the existing parameter **--storage-opt size=**_value_ to set the quota. The value is a positive number in the unit of **\[kKmMgGtTpP\]?\[iI\]?\[bB\]?**. If the value does not contain a unit, the default unit is byte. -```console +```shell $ [root@localhost ~]# isula run -ti --storage-opt size=10M busybox / # df -h Filesystem Size Used Available Use% Mounted on @@ -425,7 +399,7 @@ overlay 10.0M 10.0M 0 100% / The kernel must support the EXT4 project quota function. When running **mkfs**, add **-O quota,project**. When mounting the file system, add **-o prjquota**. 
If any of the preceding conditions is not met, an error is reported when **--storage-opt size=**_value_ is used. - ```console + ```shell $ [root@localhost ~]# isula run -it --storage-opt size=10Mb busybox df -h Error response from daemon: Failed to prepare rootfs with error: time="2019-04-09T05:13:52-04:00" level=fatal msg="error creating read- write layer with ID "a4c0e55e82c55e4ee4b0f4ee07f80cc2261cf31b2c2dfd628fa1fb00db97270f": --storage-opt is supported only for overlay over @@ -445,7 +419,7 @@ overlay 10.0M 10.0M 0 100% / Docker fails to be started. - ```console + ```shell [root@localhost ~]# docker run -itd --storage-opt size=4k rnd-dockerhub.huawei.com/official/ubuntu-arm64:latest docker: Error response from daemon: symlink /proc/mounts /var/lib/docker/overlay2/e6e12701db1a488636c881b44109a807e187b8db51a50015db34a131294fcf70-init/merged/etc/mtab: disk quota exceeded. See 'docker run --help'. @@ -453,7 +427,7 @@ overlay 10.0M 10.0M 0 100% / The lightweight container is started properly and no error is reported. - ```console + ```shell [root@localhost ~]# isula run -itd --storage-opt size=4k rnd-dockerhub.huawei.com/official/ubuntu-arm64:latest 636480b1fc2cf8ac895f46e77d86439fe2b359a1ff78486ae81c18d089bbd728 [root@localhost ~]# isula ps @@ -467,7 +441,7 @@ overlay 10.0M 10.0M 0 100% / When a lightweight container uses the default configuration during container startup, there are few mount points. The lightweight container is created only when the directory like **/proc** or **/sys** does not exist. The image **rnd-dockerhub.huawei.com/official/ubuntu-arm64:latest** in the test case contains **/proc** and **/sys**. Therefore, no new file or directory is generated during the container startup. As a result, no error is reported during the lightweight container startup. 
To verify this process, when the image is replaced with **rnd-dockerhub.huawei.com/official/busybox-aarch64:latest**, an error is reported when the lightweight container is started because **/proc** does not exist in the image. - ```console + ```shell [root@localhost ~]# isula run -itd --storage-opt size=4k rnd-dockerhub.huawei.com/official/busybox-aarch64:latest 8e893ab483310350b8caa3b29eca7cd3c94eae55b48bfc82b350b30b17a0aaf4 Error response from daemon: Start container error: runtime error: 8e893ab483310350b8caa3b29eca7cd3c94eae55b48bfc82b350b30b17a0aaf4:tools/lxc_start.c:main:404 starting container process caused "Failed to setup lxc, @@ -476,10 +450,10 @@ overlay 10.0M 10.0M 0 100% / 5. Other description: - When using iSulad with the quota function to switch data disks, ensure that the data disks to be switched are mounted using the **prjquota** option and the mounting mode of the **/var/lib/isulad/storage/overlay2** directory is the same as that of the **/var/lib/isulad** directory. + When using iSulad with the quota function to switch data drives, ensure that the data drives to be switched are mounted using the **prjquota** option and the mounting mode of the **/var/lib/isulad/storage/overlay2** directory is the same as that of the **/var/lib/isulad** directory. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Before switching the data disk, ensure that the mount point of **/var/lib/isulad/storage/overlay2** is unmounted. + > ![](./public_sys-resources/icon-note.gif)**NOTE:** + > Before switching the data drive, ensure that the mount point of **/var/lib/isulad/storage/overlay2** is unmounted. ## Restricting the Number of File Handles in a Container @@ -531,7 +505,7 @@ isula run -ti --files-limit 1024 busybox bash 1. If the **--files-limit** parameter is set to a small value, for example, 1, the container may fail to be started. 
- ```console + ```shell [root@localhost ~]# isula run -itd --files-limit 1 rnd-dockerhub.huawei.com/official/busybox-aarch64 004858d9f9ef429b624f3d20f8ba12acfbc8a15bb121c4036de4e5745932eff4 Error response from daemon: Start container error: Container is not running:004858d9f9ef429b624f3d20f8ba12acfbc8a15bb121c4036de4e5745932eff4 @@ -539,7 +513,7 @@ isula run -ti --files-limit 1024 busybox bash Docker will be started successfully, and the value of **files.limit cgroup** is **max**. - ```console + ```shell [root@localhost ~]# docker run -itd --files-limit 1 rnd-dockerhub.huawei.com/official/busybox-aarch64 ef9694bf4d8e803a1c7de5c17f5d829db409e41a530a245edc2e5367708dbbab [root@localhost ~]# docker exec -it ef96 cat /sys/fs/cgroup/files/files.limit @@ -638,7 +612,7 @@ Use either of the following methods to configure ulimit: 2. Use daemon parameters or configuration files. - For details, see **--default-ulimits** in [Configuration Mode](./installation-configuration.md#configuration-mode). + For details, see **--default-ulimits** in [Configuration Mode](installation-configuration.md#configuration-mode). **--ulimit** can limit the following types of resources: diff --git a/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/cri-v1.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/cri-v1.md new file mode 100644 index 0000000000000000000000000000000000000000..8f4e63b1412a632fa1a9ca0a45f928669fd18455 --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/cri-v1.md @@ -0,0 +1,205 @@ +# CRI API v1 + +## Overview + +Container Runtime Interface (CRI) is the main protocol used by kubelet to communicate with container engines. +Kubernetes 1.25 and earlier versions support CRI v1alpha2 and CRI v1. Kubernetes 1.26 and later versions support only CRI v1. + +iSulad supports both [CRI v1alpha2](cri-v1alpha2.md) and CRI v1. +For CRI v1, iSulad supports the functions described in [CRI v1alpha2](cri-v1alpha2.md) and new interfaces and fields defined in CRI v1. 
+ +Currently, iSulad supports CRI v1 1.29. The API described on the official website is as follows: + +[https://github.com/kubernetes/cri-api/blob/kubernetes-1.29.0/pkg/apis/runtime/v1/api.proto](https://github.com/kubernetes/cri-api/blob/kubernetes-1.29.0/pkg/apis/runtime/v1/api.proto) + +The API description file used by iSulad is slightly different from the official API. The interfaces in this document prevail. + +## New Fields of CRI v1 + +- **CgroupDriver** + + Enum values for cgroup drivers. + + | Member| Description | + | :----------------: | :----------------: | + | SYSTEMD = 0 | systemd-cgroup driver| + | CGROUPFS = 1 | cgroupfs driver | + +- **LinuxRuntimeConfiguration** + + cgroup driver used by the container engine + + | Member | Description | + | :------------------------: | :------------------------------: | + | CgroupDriver cgroup_driver | Enum value for the cgroup driver used by the container engine| + +- **ContainerEventType** + + Enum values for container event types + + | Member | Description| + | :-------------------------: | :------------: | + | CONTAINER_CREATED_EVENT = 0 | Container creation event | + | CONTAINER_STARTED_EVENT = 1 | Container startup event | + | CONTAINER_STOPPED_EVENT = 2 | Container stop event | + | CONTAINER_DELETED_EVENT = 3 | Container deletion event | + +- **SwapUsage** + + Virtual memory usage + + | Member | Description | + | :------------------------------: | :------------------: | + | int64 timestamp | Timestamp information | + | UInt64Value swap_available_bytes | Available virtual memory bytes| + | UInt64Value swap_usage_bytes | Used virtual memory bytes| + +## New Interfaces + +### RuntimeConfig + +#### Interface Prototype + +```text +rpc RuntimeConfig(RuntimeConfigRequest) returns (RuntimeConfigResponse) {} +``` + +#### Interface Description + +Obtains the cgroup driver configuration (cgroupfs or systemd-cgroup). 
+ +#### Parameter: RuntimeConfigRequest + +None (this request message has no fields) + +#### Returns: RuntimeConfigResponse + +| Return | Description | +| :------------------------------ | :------------------------------------------------- | +| LinuxRuntimeConfiguration linux | CgroupDriver enum value for cgroupfs or systemd-cgroup| + +### GetContainerEvents + +#### Interface Prototype + +```text +rpc GetContainerEvents(GetEventsRequest) returns (stream ContainerEventResponse) {} +``` + +#### Interface Description + +Obtains the pod lifecycle event stream. + +#### Parameter: GetEventsRequest + +None (this request message has no fields) + +#### Returns: ContainerEventResponse + +| Return | Description | +| :------------------------------------------- | :-------------------------------- | +| string container_id | Container ID | +| ContainerEventType container_event_type | Container event type | +| int64 created_at | Time when the container event is generated | +| PodSandboxStatus pod_sandbox_status | Status of the pod to which the container belongs | +| repeated ContainerStatus containers_statuses | Status of all containers in the pod to which the container belongs| +## Change Description + +### CRI V1.29 + +#### [Obtaining the cgroup Driver Configuration](https://github.com/kubernetes/kubernetes/pull/118770) + +`RuntimeConfig` obtains the cgroup driver configuration (cgroupfs or systemd-cgroup). + +#### [GetContainerEvents Supports Pod Lifecycle Events](https://github.com/kubernetes/kubernetes/pull/111384) + +`GetContainerEvents` provides event streams related to the pod lifecycle. + +`PodSandboxStatus` is adjusted accordingly. `ContainerStatuses` is added to provide status information about the containers in the sandbox. + +#### [ContainerStats Virtual Memory Information](https://github.com/kubernetes/kubernetes/pull/118865) + +The virtual memory usage information `SwapUsage` is added to `ContainerStats`. 
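For illustration, a `ContainerStats` message carrying the new `SwapUsage` block might serialize to JSON roughly as follows. This is a sketch of the message shape only: the container ID and byte counts are made-up example values, and the exact field casing depends on the protobuf JSON mapping used by the client.

```json
{
  "attributes": { "id": "8e893ab48331" },
  "cpu": {
    "timestamp": 1700000000000000000,
    "usageCoreNanoSeconds": { "value": 120000000 }
  },
  "memory": {
    "timestamp": 1700000000000000000,
    "workingSetBytes": { "value": 5242880 }
  },
  "swap": {
    "timestamp": 1700000000000000000,
    "swapAvailableBytes": { "value": 104857600 },
    "swapUsageBytes": { "value": 1048576 }
  }
}
```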
+ +#### [OOMKilled Setting in the Reason Field of ContainerStatus](https://github.com/kubernetes/kubernetes/pull/112977) + +The **Reason** field in **ContainerStatus** should be set to OOMKilled when cgroup out-of-memory occurs. + +#### [Modification of PodSecurityContext.SupplementalGroups Description](https://github.com/kubernetes/kubernetes/pull/113047) + +The description is modified to optimize the comments of **PodSecurityContext.SupplementalGroups**. The behavior when the primary UID defined by the container image is not in the list is clarified. + +#### [ExecSync Output Restriction](https://github.com/kubernetes/kubernetes/pull/110435) + +The output returned by **ExecSync** is capped at 16 MB. + +## User Guide + +### Configuring iSulad to Support CRI V1 + +Configure iSulad to support CRI v1 1.29 used by the new Kubernetes version. + +For CRI 1.25 and earlier, v1alpha2 provides the same functions as v1. The new features of CRI 1.26 and later are supported only in CRI v1. +The functions and features of this upgrade are supported only in CRI v1. Therefore, you need to enable CRI v1 as follows. + +Enable CRI v1. + +Set **enable-cri-v1** in **daemon.json** of iSulad to **true** and restart iSulad. + +```json +{ + "group": "isula", + "default-runtime": "runc", + ... + "enable-cri-v1": true +} +``` + +If iSulad is installed from source, enable the **ENABLE_CRI_API_V1** compile option. + +```bash +cmake ../ -D ENABLE_CRI_API_V1=ON +``` + +### Using RuntimeConfig to Obtain the cgroup Driver Configuration + +#### systemd-cgroup Configuration + +iSulad supports both systemd and cgroupfs cgroup drivers. +By default, cgroupfs is used. You can configure iSulad to support systemd-cgroup. +iSulad supports only systemd-cgroup when the runtime is runc. In the iSulad configuration file **daemon.json**, +set **systemd-cgroup** to **true** and restart iSulad to use the systemd-cgroup driver. + +```json +{ + "group": "isula", + "default-runtime": "runc", + ... 
+ "enable-cri-v1": true, + "systemd-cgroup": true +} +``` + +### Using GetContainerEvents to Generate Pod Lifecycle Events + +#### Pod Events Configuration + +In the iSulad configuration file **daemon.json**, +set **enable-pod-events** to **true** and restart iSulad. + +```json +{ + "group": "isula", + "default-runtime": "runc", + ... + "enable-cri-v1": true, + "enable-pod-events": true +} +``` + +## Constraints + +1. The preceding new features are supported by iSulad only when the container runtime is runc. +2. cgroup out-of-memory (OOM) triggers the deletion of the cgroup path of the container. If iSulad processes the OOM event after the cgroup path is deleted, iSulad cannot capture the OOM event of the container. As a result, the **Reason** field in **ContainerStatus** may be incorrect. +3. iSulad does not support the mixed use of different cgroup drivers to manage containers. After a container is started, the cgroup driver configuration in iSulad should not change. diff --git a/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/cri-v1alpha2.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/cri-v1alpha2.md new file mode 100644 index 0000000000000000000000000000000000000000..092e0b721ee322013a19fc47d0e60826ecf68746 --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/cri-v1alpha2.md @@ -0,0 +1,1268 @@ +# CRI API v1alpha2 + +## Description + +CRI API is the container runtime API provided by Kubernetes. CRI defines service interfaces for containers and images. iSulad uses CRI API to interconnect with Kubernetes. + +The lifecycle of a container is isolated from that of an image. Therefore, two services are required. CRI API is defined using [Protocol Buffers](https://developers.google.com/protocol-buffers/) and is based on [gRPC](https://grpc.io/). + +Currently, the default CRI API version used by iSulad is v1alpha2. 
The official API description file is as follows: + +[https://github.com/kubernetes/kubernetes/blob/release-1.14/pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto](https://github.com/kubernetes/kubernetes/blob/release-1.14/pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto) + +iSulad uses the API description file of version 1.14 used by PaaS, which is slightly different from the official API. The interfaces in this document prevail. + +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> For the WebSocket streaming service of CRI API, the listening address of the server is 127.0.0.1, and the port number is 10350. The port number can be configured through the `--websocket-server-listening-port` command option or in the **daemon.json** configuration file. + +## Interfaces + +The following tables list the parameters that may be used by the interfaces. Some parameters cannot be configured. + +### Interface Parameters + +- **DNSConfig** + + Specifies the DNS servers and search domains of a sandbox. + + | Member | Description | + | :----------------------: | :--------------------------------------------------------: | + | repeated string servers | List of DNS servers of the cluster | + | repeated string searches | List of DNS search domains of the cluster | + | repeated string options | List of DNS options.| + +- **Protocol** + + Enum values of the protocols. + + | Member| Description | + | :------: | :-----: | + | TCP = 0 | TCP| + | UDP = 1 | UDP| + +- **PortMapping** + + Specifies the port mapping configurations of a sandbox. + + | Member | Description | + | :------------------: | :----------------: | + | Protocol protocol | Protocol of the port mapping | + | int32 container_port | Port number within the container | + | int32 host_port | Port number on the host | + | string host_ip | Host IP address | + +- **MountPropagation** + + Enum values for mount propagation. 
+ + | Member | Description | + | :-------------------------------: | :--------------------------------------------------: | + | PROPAGATION_PRIVATE = 0 | No mount propagation ("rprivate" in Linux) | + | PROPAGATION_HOST_TO_CONTAINER = 1 | Mounts get propagated from the host to the container ("rslave" in Linux) | + | PROPAGATION_BIDIRECTIONAL = 2 | Mounts get propagated from the host to the container and from the container to the host ("rshared" in Linux) | + +- **Mount** + + Specifies a host volume to mount into a container. (Only files and folders are supported.) + + | Member | Description | + | :--------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------: | + | string container_path | Path in the container | + | string host_path | Path on the host | + | bool readonly | Whether the configuration is read-only in the container. The default value is **false**. | + | bool selinux_relabel | Whether to set the SELinux label (not supported) | + | MountPropagation propagation | Mount propagation configuration. The value can be **0**, **1**, or **2**, corresponding to **rprivate**, **rslave**, or **rshared**. The default value is **0**. | + +- **NamespaceOption** + + | Member | Description | + | :---------------: | :------------------------: | + | bool host_network | Whether to use the network namespace of the host | + | bool host_pid | Whether to use the PID namespace of the host | + | bool host_ipc | Whether to use the IPC namespace of the host | + +- **Capability** + + Contains information about the capabilities to add or drop. + + | Member | Description | + | :-------------------------------: | :----------: | + | repeated string add_capabilities | Capabilities to add | + | repeated string drop_capabilities | Capabilities to drop | + +- **Int64Value** + + Wrapper of the int64 type. 
+ + | Member| Description| + | :----------------: | :------------: | + | int64 value | Actual int64 value | + +- **UInt64Value** + + Wrapper of the uint64 type. + + | Member| Description| + | :----------------: | :------------: | + | uint64 value | Actual uint64 value | + +- **LinuxSandboxSecurityContext** + + Specifies Linux security options for a sandbox. + + Note that these security options are not applied to containers in the sandbox and may not be applicable to a sandbox without any running process. + + | Member | Description | + | :--------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | + | NamespaceOption namespace_options | Options for namespaces of the sandbox | + | SELinuxOption selinux_options | SELinux options (not supported) | + | Int64Value run_as_user | UID to run sandbox processes | + | bool readonly_rootfs | Whether the root file system of the sandbox is read-only | + | repeated int64 supplemental_groups | User group information of process 1 in the sandbox besides the primary group | + | bool privileged | Whether the sandbox can run a privileged container | + | string seccomp_profile_path | Path of the seccomp configuration file. Valid values are:
**unconfined**: seccomp is not used.
**localhost/**: path of the configuration file installed in the system.
Full path of the configuration file.
By default, this parameter is not set, which is identical to **unconfined**.| + +- **LinuxPodSandboxConfig** + + Sets configurations related to Linux hosts and containers. + + | Member | Description | + | :------------------------------------------: | :-------------------------------------------------------------------------------------: | + | string cgroup_parent | Parent cgroup path of the sandbox. The runtime can convert it to the cgroupfs or systemd semantics as required. (Not configurable)| + | LinuxSandboxSecurityContext security_context | Security attributes of the sandbox | + | map\ sysctls | Linux sysctls configurations of the sandbox | + +- **PodSandboxMetadata** + + Stores all necessary information for building the sandbox name. The container runtime is encouraged to expose the metadata in its user interface for better user experience. For example, the runtime can construct a unique sandbox name based on the metadata. + + | Member| Description | + | :----------------: | :----------------------------: | + | string name | Sandbox name | + | string uid | Sandbox UID | + | string namespace | Sandbox namespace | + | uint32 attempt | Number of attempts to create the sandbox. The default value is **0**.| + +- **PodSandboxConfig** + + Contains all the required and optional fields for creating a sandbox. + + | Member | Description | + | :--------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------: | + | PodSandboxMetadata metadata | Metadata of the sandbox. This information uniquely identifies the sandbox, and the runtime should leverage this to ensure correct operation. 
The runtime may also use this information to improve user experience, such as by constructing a readable sandbox name.| + | string hostname | Host name of the sandbox | + | string log_directory | Directory for storing log files of containers in the sandbox | + | DNSConfig dns_config | DNS configuration of the sandbox | + | repeated PortMapping port_mappings | Port mappings of the sandbox | + | map\ labels | Key-value pairs that may be used to identify a single sandbox or a series of sandboxes | + | map\ annotations | Key-value pair holding arbitrary data. The value cannot be modified and can be queried by using **PodSandboxStatus**. | + | LinuxPodSandboxConfig linux | Options related to the Linux host | + +- **PodSandboxNetworkStatus** + + Describes the network status of the sandbox. + + | Member| Description | + | :----------------: | :-------------------: | + | string ip | IP address of the sandbox | + | string name | Name of the network interface in the sandbox | + | string network | Name of the additional network | + +- **Namespace** + + Stores namespace options. + + | Member | Description | + | :---------------------: | :----------------: | + | NamespaceOption options | Linux namespace options | + +- **LinuxPodSandboxStatus** + + Describes the status of the Linux sandbox. + + | Member | Description| + | :---------------------------: | :-------------: | + | Namespace namespaces | Sandbox namespace | + +- **PodSandboxState** + + Enum values for sandbox states. + + | Member | Description | + | :------------------: | :--------------------: | + | SANDBOX_READY = 0 | Ready state of the sandbox | + | SANDBOX_NOTREADY = 1 | Non-ready state of the sandbox | + +- **PodSandboxStatus** + + Describes the PodSandbox status. 
+ + | Member | Description | + | :---------------------------------------: | :-----------------------------------------------: | + | string id | Sandbox ID | + | PodSandboxMetadata metadata | Sandbox metadata | + | PodSandboxState state | Sandbox state | + | int64 created_at | Creation timestamps of the sandbox in nanoseconds | + | repeated PodSandboxNetworkStatus networks | Multi-plane network status of the sandbox | + | LinuxPodSandboxStatus linux | Status specific to Linux sandboxes | + | map\ labels | Key-value pairs that may be used to identify a single sandbox or a series of sandboxes | + | map\ annotations | Key-value pair holding arbitrary data. The value cannot be modified by the runtime.| + +- **PodSandboxStateValue** + + Wrapper of **PodSandboxState**. + + | Member | Description| + | :-------------------: | :-------------: | + | PodSandboxState state | Sandbox state | + +- **PodSandboxFilter** + + Filtering conditions when listing sandboxes. The intersection of multiple conditions is displayed. + + | Member | Description | + | :--------------------------------: | :--------------------------------------------------: | + | string id | Sandbox ID | + | PodSandboxStateValue state | Sandbox state | + | map\ label_selector | Sandbox labels. Only full match is supported. Regular expressions are not supported.| + +- **PodSandbox** + + Minimal data that describes a sandbox. + + | Member | Description | + | :-----------------------------: | :-----------------------------------------------: | + | string id | Sandbox ID | + | PodSandboxMetadata metadata | Sandbox metadata | + | PodSandboxState state | Sandbox state | + | int64 created_at | Creation timestamps of the sandbox in nanoseconds | + | map\ labels | Key-value pairs that may be used to identify a single sandbox or a series of sandboxes | + | map\ annotations | Key-value pair holding arbitrary data. The value cannot be modified by the runtime | + +- **KeyValue** + + Wrapper of a key-value pair. 
+ + | Member| Description| + | :----------------: | :------------: | + | string key | Key | + | string value | Value | + +- **SELinuxOption** + + SELinux labels to be applied to the container. + + | Member| Description| + | :----------------: | :------------: | + | string user | User | + | string role | Role | + | string type | Type | + | string level | Level | + +- **ContainerMetadata** + + ContainerMetadata contains all necessary information for building the container name. The container runtime is encouraged to expose the metadata in its user interface for better user experience. For example, the runtime can construct a unique container name based on the metadata. + + | Member| Description | + | :----------------: | :------------------------------: | + | string name | Name of a container | + | uint32 attempt | Number of attempts to create the container. The default value is **0**.| + +- **ContainerState** + + Enum values for container states. + + | Member | Description | + | :-------------------: | :-------------------: | + | CONTAINER_CREATED = 0 | The container is created | + | CONTAINER_RUNNING = 1 | The container is running | + | CONTAINER_EXITED = 2 | The container is in the exit state | + | CONTAINER_UNKNOWN = 3 | The container state is unknown | + +- **ContainerStateValue** + + Wrapper of ContainerState. + + | Member | Description| + | :-------------------: | :------------: | + | ContainerState state | Container state value | + +- **ContainerFilter** + + Filtering conditions when listing containers. The intersection of multiple conditions is displayed. + + | Member | Description | + | :--------------------------------: | :----------------------------------------------------: | + | string id | Container ID | + | ContainerStateValue state | Container state | + | string pod_sandbox_id | Sandbox ID | + | map\ label_selector | Container labels. Only full match is supported. 
Regular expressions are not supported.| + +- **LinuxContainerSecurityContext** + + Security configuration that will be applied to a container. + + | Member | Description | + | :--------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------- | + | Capability capabilities | Capabilities to add or drop | + | bool privileged | Whether the container is in privileged mode. The default value is **false**. | + | NamespaceOption namespace_options | Namespace options of the container | + | SELinuxOption selinux_options | SELinux context to be optionally applied (**not supported currently**) | + | Int64Value run_as_user | UID to run container processes. Only one of **run_as_user** and **run_as_username** can be specified at a time. **run_as_username** takes effect preferentially. | + | string run_as_username | User name to run container processes. If specified, the user must exist in the container image (that is, in **/etc/passwd** inside the image) and be resolved there by the runtime. Otherwise, the runtime must throw an error.| + | bool readonly_rootfs | Whether the root file system in the container is read-only. The default value is configured in **config.json**. | + | repeated int64 supplemental_groups | List of groups of the first process in the container besides the primary group | + | string apparmor_profile | AppArmor configuration file for the container (**not supported currently**) | + | string seccomp_profile_path | Seccomp configuration file for the container | + | bool no_new_privs | Whether to set the **no_new_privs** flag on the container | + +- **LinuxContainerResources** + + Resource specification for the Linux container. + + | Member | Description | + | :-------------------------- | :------------------------------------------------------------- | + | int64 cpu_period | CPU Completely Fair Scheduler (CFS) period. The default value is **0**. 
| + | int64 cpu_quota | CPU CFS quota. The default value is **0**. | + | int64 cpu_shares | CPU shares (weight relative to other containers). The default value is **0**.| + | int64 memory_limit_in_bytes | Memory limit, in bytes. The default value is **0**. | + | int64 oom_score_adj | oom-killer score. The default value is **0**. | + | string cpuset_cpus | CPU cores to be used by the container. The default value is **""**. | + | string cpuset_mems | Memory nodes to be used by the container. The default value is **""**. | + +- **Image** + + Basic information about a container image. + + | Member | Description | + | :--------------------------- | :--------------------- | + | string id | Image ID | + | repeated string repo_tags | Image tag name (**repo_tags**) | + | repeated string repo_digests | Image digest information | + | uint64 size | Image size | + | Int64Value uid | UID of the default image user | + | string username | Name of the default image user | + +- **ImageSpec** + + Internal data structure that represents an image. Currently, **ImageSpec** wraps only the container image name. + + | Member| Description| + | :----------------: | :------------: | + | string image | Container image name | + +- **StorageIdentifier** + + Unique identifier of a storage device. 
+ + | Member| Description| + | :----------------: | :------------: | + | string uuid | UUID of the device | + +- **FilesystemUsage** + + | Member | Description | + | :--------------------------- | :------------------------- | + | int64 timestamp | Timestamp at which the information was collected | + | StorageIdentifier storage_id | UUID of the file system that stores the image | + | UInt64Value used_bytes | Space size used for storing image metadata | + | UInt64Value inodes_used | Number of inodes for storing image metadata | + +- **AuthConfig** + + | Member | Description | + | :-------------------- | :------------------------------------- | + | string username | User name used for downloading images | + | string password | Password used for downloading images | + | string auth | Base64-encoded authentication information used for downloading images | + | string server_address | Address of the server for downloaded images (not supported currently) | + | string identity_token | Token information used for authentication with the image repository (not supported currently) | + | string registry_token | Token information used for interaction with the image repository (not supported currently) | + +- **Container** + + Container description information, such as the ID and state. + + | Member | Description | + | :-----------------------------: | :---------------------------------------------------------: | + | string id | Container ID | + | string pod_sandbox_id | ID of the sandbox to which the container belongs | + | ContainerMetadata metadata | Container metadata | + | ImageSpec image | Image specifications | + | string image_ref | Reference to the image used by the container. 
For most runtimes, this is an image ID.| + | ContainerState state | Container state | + | int64 created_at | Creation timestamp of the container in nanoseconds | + | map\ labels | Key-value pairs that may be used to identify a single container or a series of containers | + | map\ annotations | Key-value pair holding arbitrary data. The value cannot be modified by the runtime | + +- **ContainerStatus** + + Container status information. + + | Member | Description | + | :-----------------------------: | :-----------------------------------------------------------------------: | + | string id | Container ID | + | ContainerMetadata metadata | Container metadata | + | ContainerState state | Container state | + | int64 created_at | Creation timestamp of the container in nanoseconds | + | int64 started_at | Startup timestamp of the container in nanoseconds | + | int64 finished_at | Exit timestamp of the container in nanoseconds | + | int32 exit_code | Container exit code | + | ImageSpec image | Image specifications | + | string image_ref | Reference to the image used by the container. For most runtimes, this is an image ID. | + | string reason | Brief explanation of why the container is in its current state | + | string message | Human-readable message explaining why the container is in its current state | + | map\ labels | Key-value pairs that may be used to identify a single container or a series of containers | + | map\ annotations | Key-value pair holding arbitrary data. The value cannot be modified by the runtime. | + | repeated Mount mounts | Container mount point information | + | string log_path | Container log file path. The file is in the **log_directory** folder configured in **PodSandboxConfig**.| + +- **ContainerStatsFilter** + + Filtering conditions when listing container stats. The intersection of multiple conditions is displayed. 
+ + | Member | Description | + | :--------------------------------: | :----------------------------------------------------: | + | string id | Container ID | + | string pod_sandbox_id | Sandbox ID | + | map\ label_selector | Container labels. Only full match is supported. Regular expressions are not supported.| + +- **ContainerStats** + + Statistics about the resources used by a container. + + | Member | Description| + | :----------------------------: | :------------: | + | ContainerAttributes attributes | Container information | + | CpuUsage cpu | CPU usage | + | MemoryUsage memory | Memory usage | + | FilesystemUsage writable_layer | Usage of the writable layer | + +- **ContainerAttributes** + + Basic information about the container. + + | Member | Description | + | :----------------------------: | :-----------------------------------------------: | + | string id | Container ID | + | ContainerMetadata metadata | Container metadata | + | map\ labels | Key-value pairs that may be used to identify a single container or a series of containers | + | map\ annotations | Key-value pair holding arbitrary data. The value cannot be modified by the runtime.| + +- **CpuUsage** + + Container CPU usage. + + | Member | Description | + | :---------------------------------: | :--------------------: | + | int64 timestamp | Timestamp | + | UInt64Value usage_core_nano_seconds | CPU usage duration, in nanoseconds | + +- **MemoryUsage** + + Container memory usage. + + | Member | Description| + | :---------------------------: | :------------: | + | int64 timestamp | Timestamp | + | UInt64Value working_set_bytes | Memory usage | + +- **FilesystemUsage** + + Usage of the writable layer of the container. 
+ + | Member | Description | + | :--------------------------: | :-----------------------: | + | int64 timestamp | Timestamp | + | StorageIdentifier storage_id | Writable layer directory | + | UInt64Value used_bytes | Number of bytes occupied by the image at the writable layer | + | UInt64Value inodes_used | Number of inodes occupied by the image at the writable layer | + +- **Device** + + Host volume to mount into a container. + + | Member | Description | + | :-------------------- | :--------------------------------------------------------------------------------------------------------- | + | string container_path | Mount path within the container | + | string host_path | Mount path on the host | + | string permissions | cgroup permissions of the device (**r** allows the container to read from the specified device; **w** allows the container to write to the specified device; **m** allows the container to create device files that do not yet exist).| + +- **LinuxContainerConfig** + + Configuration specific to Linux containers. + + | Member | Description | + | :--------------------------------------------- | :---------------------- | + | LinuxContainerResources resources | Container resource specifications | + | LinuxContainerSecurityContext security_context | Linux container security configuration | + +- **ContainerConfig** + + Required and optional fields for creating a container. + + | Member | Description | + | :------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | ContainerMetadata metadata | Container metadata. This information uniquely identifies the container, and the runtime should leverage this to ensure correct operation. 
The runtime may also use this information to improve user experience, such as by constructing a readable container name. (**Required**) | + | ImageSpec image | Image used by the container. (**Required**) | + | repeated string command | Command to be executed. The default value is **"/bin/sh"**. | + | repeated string args | Arguments of the command to be executed | + | string working_dir | Current working directory of the command to be executed | + | repeated KeyValue envs | Environment variables to set in the container | + | repeated Mount mounts | Mount points in the container | + | repeated Device devices | Devices to be mapped in the container | + | map\<string, string\> labels | Key-value pairs that may be used to index and select individual resources | + | map\<string, string\> annotations | Unstructured key-value map that may be used to store and retrieve arbitrary metadata | + | string log_path | Path relative to **PodSandboxConfig.LogDirectory** for the container to store its logs (STDOUT and STDERR) on the host | + | bool stdin | Whether to enable STDIN of the container | + | bool stdin_once | Whether to immediately disconnect all data streams connected to STDIN when a data stream connected to STDIN is disconnected (**not supported currently**) | + | bool tty | Whether to use a pseudo terminal to connect to STDIO of the container | + | LinuxContainerConfig linux | Configuration specific to Linux containers | + +- **NetworkConfig** + + Runtime network configuration. + + | Member| Description | + | :----------------- | :-------------------- | + | string pod_cidr | CIDR for pod IP addresses | + +- **RuntimeConfig** + + Runtime configuration. + + | Member | Description | + | :--------------------------- | :---------------- | + | NetworkConfig network_config | Runtime network configuration | + +- **RuntimeCondition** + + Runtime condition information. 
+ + | Member| Description | + | :----------------- | :------------------------------------------ | + | string type | Runtime condition type | + | bool status | Runtime status | + | string reason | Brief description of the reason for the runtime condition change | + | string message | Human-readable message describing the reason for the runtime condition change | + +- **RuntimeStatus** + + Runtime status. + + | Member | Description | + | :----------------------------------- | :------------------------ | + | repeated RuntimeCondition conditions | Current runtime conditions | + +### Runtime Service + +The runtime service contains interfaces for operating pods and containers, and interfaces for querying the configuration and status of the runtime service. + +#### RunPodSandbox + +#### Interface Prototype + +```protobuf +rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {} +``` + +#### Interface Description + +Creates and starts a pod sandbox. The sandbox is in the ready state on success. + +#### Precautions + +1. The default image for starting the sandbox is **rnd-dockerhub.huawei.com/library/pause-$\{machine\}:3.0**, where **$\{machine\}** indicates the architecture. On x86\_64, the value of **machine** is **amd64**; on ARM64, it is **aarch64**. Currently, only the **amd64** and **aarch64** images can be downloaded from the rnd-dockerhub repository. If the images do not exist on the host, ensure that the host can download them from the rnd-dockerhub repository. +2. Container names are built from the fields in **PodSandboxMetadata**, joined by underscores (\_). Therefore, the data in metadata cannot contain underscores. Otherwise, the sandbox runs successfully, but the **ListPodSandbox** interface cannot query the sandbox. 
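The naming constraint in precaution 2 can be illustrated with a short sketch (hypothetical Python, not iSulad source; the field names and joining scheme are assumptions based on the description above):

```python
# Illustrative sketch: why underscores in metadata fields break
# name-based lookups when the name is the fields joined with "_".

def build_sandbox_name(name, namespace, uid, attempt):
    # Hypothetical reconstruction of the naming scheme described above.
    return "_".join([name, namespace, uid, str(attempt)])

def parse_sandbox_name(full_name):
    # Splitting on "_" only round-trips when no field contains "_".
    return full_name.split("_")

ok = build_sandbox_name("web", "default", "abc123", 0)
assert parse_sandbox_name(ok) == ["web", "default", "abc123", "0"]

bad = build_sandbox_name("my_web", "default", "abc123", 0)
# The parsed result now has five parts instead of four, so a query that
# expects four fields can no longer locate the sandbox.
assert len(parse_sandbox_name(bad)) == 5
```

This is why the sandbox starts successfully but **ListPodSandbox** cannot find it afterwards: creation does not validate the fields, while the query path has to split the name back apart.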
+ +#### Parameter + +| Member | Description | +| :---------------------- | :-------------------------------------------------------------------- | +| PodSandboxConfig config | Sandbox configuration | +| string runtime_handler | Runtime to use for the sandbox. Currently, **lcr** and **kata-runtime** are supported.| + +#### Returns + +| Return | Description | +| :-------------------- | :--------------------- | +| string pod_sandbox_id | The response data is returned on success.| + +#### StopPodSandbox + +#### Interface Prototype + +```protobuf +rpc StopPodSandbox(StopPodSandboxRequest) returns (StopPodSandboxResponse) {} +``` + +#### Interface Description + +Stops the pod sandbox, stops the sandbox container, and reclaims the network resources (such as IP addresses) allocated to the sandbox. If any running container belongs to the sandbox, the container must be forcibly terminated. + +#### Parameter + +| Member | Description| +| :-------------------- | :------------- | +| string pod_sandbox_id | Sandbox ID | + +#### Returns + +| Return| Description| +| :--------------- | :------------- | +| None | None | + +#### RemovePodSandbox + +#### Interface Prototype + +```text +rpc RemovePodSandbox(RemovePodSandboxRequest) returns (RemovePodSandboxResponse) {} +``` + +#### Interface Description + +Removes a sandbox. If there are any running containers in the sandbox, they must be forcibly terminated and removed. This interface must not return an error if the sandbox has already been removed. + +#### Precautions + +1. When a sandbox is deleted, the network resources of the sandbox are not deleted. Before deleting the pod, you must call **StopPodSandbox** to remove the network resources. Ensure that **StopPodSandbox** is called at least once before deleting the sandbox. +2. If the container in a sandbox fails to be deleted when the sandbox is deleted, the sandbox is deleted but the container remains. In this case, you need to manually delete the residual container. 
+ +#### Parameter + +| Member | Description| +| :-------------------- | :------------- | +| string pod_sandbox_id | Sandbox ID | + +#### Returns + +| Return| Description| +| :--------------- | :------------- | +| None | None | + +#### PodSandboxStatus + +#### Interface Prototype + +```text +rpc PodSandboxStatus(PodSandboxStatusRequest) returns (PodSandboxStatusResponse) {} +``` + +#### Interface Description + +Queries the status of the sandbox. If the sandbox does not exist, this interface returns an error. + +#### Parameter + +| Member | Description | +| :-------------------- | :-------------------------------------------------- | +| string pod_sandbox_id | Sandbox ID | +| bool verbose | Whether to return extra information about the sandbox (not configurable currently) | + +#### Returns + +| Return | Description | +| :----------------------- | :--------------------------------------------------------------------------------------------------------------------------------------- | +| PodSandboxStatus status | Sandbox status information | +| map\ info | Extra information of the sandbox. The **key** can be an arbitrary string, and **value** is in JSON format. **info** can include any debug information. When **verbose** is set to **true**, **info** cannot be empty (not configurable currently). | + +#### ListPodSandbox + +#### Interface Prototype + +```text +rpc ListPodSandbox(ListPodSandboxRequest) returns (ListPodSandboxResponse) {} +``` + +#### Interface Description + +Returns sandbox information. Conditional filtering is supported. 
+ +#### Parameter + +| Member | Description| +| :---------------------- | :------------- | +| PodSandboxFilter filter | Conditional filtering parameters | + +#### Returns + +| Return | Description | +| :------------------------ | :---------------- | +| repeated PodSandbox items | Sandboxes | + +#### CreateContainer + +#### Interface Prototype + +```text +rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {} +``` + +#### Interface Description + +Creates a container in a PodSandbox. + +#### Precautions + +- **sandbox\_config** in **CreateContainerRequest** is the same as the configuration passed to **RunPodSandboxRequest** to create the PodSandbox. It is passed again for reference. **PodSandboxConfig** is immutable and remains unchanged throughout the lifecycle of a pod. +- The container names use the field in **ContainerMetadata** and are separated by underscores (\_). Therefore, the data in metadata cannot contain underscores. Otherwise, the container runs successfully, but the **ListContainers** interface cannot query the container. +- **CreateContainerRequest** does not contain the **runtime\_handler** field. The runtime type of the created container is the same as that of the corresponding sandbox. + +#### Parameter + +| Member | Description | +| :------------------------------ | :--------------------------------- | +| string pod_sandbox_id | ID of the PodSandbox where the container is to be created | +| ContainerConfig config | Container configuration information | +| PodSandboxConfig sandbox_config | PodSandbox configuration information | + +#### Supplementary Information + +Unstructured key-value map that may be used to store and retrieve arbitrary metadata. Some fields can be transferred through this field because CRI does not provide specific parameters. 
+ +- Customization + + | Custom Key:Value| Description | + | :------------------------- | :------------------------------------------------ | + | cgroup.pids.max:int64_t | Limits the number of processes/threads in a container. (Set **-1** for unlimited.)| + +#### Returns + +| Return | Description | +| :------------------ | :--------------- | +| string container_id | ID of the created container | + +#### StartContainer + +#### Interface Prototype + +```text +rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {} +``` + +#### Interface Description + +Starts a container. + +#### Parameter + +| Member | Description| +| :------------------ | :------------- | +| string container_id | Container ID | + +#### Returns + +| Return| Description| +| :--------------- | :------------- | +| None | None | + +#### StopContainer + +#### Interface Prototype + +```text +rpc StopContainer(StopContainerRequest) returns (StopContainerResponse) {} +``` + +#### Interface Description + +Stops a running container. The graceful stop timeout can be configured. If the container has already been stopped, no error is returned. + +#### Parameter + +| Member | Description | +| :------------------ | :---------------------------------------------------- | +| string container_id | Container ID | +| int64 timeout | Waiting time before a container is forcibly stopped. The default value is **0**, indicating that the container is forcibly stopped immediately.| + +#### Returns + +None + +#### RemoveContainer + +#### Interface Prototype + +```text +rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse) {} +``` + +#### Interface Description + +Deletes a container. If the container is running, it must be forcibly stopped. If the container has already been deleted, no error is returned. 
+ +#### Parameter + +| Member | Description| +| :------------------ | :------------- | +| string container_id | Container ID | + +#### Returns + +None + +#### ListContainers + +#### Interface Prototype + +```text +rpc ListContainers(ListContainersRequest) returns (ListContainersResponse) {} +``` + +#### Interface Description + +Returns container information. Conditional filtering is supported. + +#### Parameter + +| Member | Description| +| :--------------------- | :------------- | +| ContainerFilter filter | Conditional filtering parameters | + +#### Returns + +| Return | Description| +| :---------------------------- | :------------- | +| repeated Container containers | Containers | + +#### ContainerStatus + +#### Interface Prototype + +```text +rpc ContainerStatus(ContainerStatusRequest) returns (ContainerStatusResponse) {} +``` + +#### Interface Description + +Returns container status information. If the container does not exist, an error is returned. + +#### Parameter + +| Member | Description | +| :------------------ | :-------------------------------------------------- | +| string container_id | Container ID | +| bool verbose | Whether to display additional information about the container (not configurable currently) | + +#### Returns + +| Return | Description | +| :----------------------- | :--------------------------------------------------------------------------------------------------------------------------------------- | +| ContainerStatus status | Container status information | +| map\ info | Extra information of the container. The **key** can be an arbitrary string, and **value** is in JSON format. **info** can include any debug information. 
When **verbose** is set to **true**, **info** cannot be empty (not configurable currently).| + +#### UpdateContainerResources + +#### Interface Prototype + +```text +rpc UpdateContainerResources(UpdateContainerResourcesRequest) returns (UpdateContainerResourcesResponse) {} +``` + +#### Interface Description + +Updates container resource configurations. + +#### Precautions + +- This interface is used exclusively to update the resource configuration of a container, not a pod. +- Currently, the **oom\_score\_adj** configuration of containers cannot be updated. + +#### Parameter + +| Member | Description | +| :---------------------------- | :---------------- | +| string container_id | Container ID | +| LinuxContainerResources linux | Linux resource configuration information | + +#### Returns + +None + +#### ExecSync + +#### Interface Prototype + +```text +rpc ExecSync(ExecSyncRequest) returns (ExecSyncResponse) {} +``` + +#### Interface Description + +Runs a command synchronously in a container and communicates using gRPC. + +#### Precautions + +This interface runs a single command and cannot open a terminal to interact with the container. + +#### Parameter + +| Member | Description | +| :------------------ | :------------------------------------------------------------------ | +| string container_id | Container ID | +| repeated string cmd | Command to be executed | +| int64 timeout | Timeout interval before a command to be stopped is forcibly terminated, in seconds. The default value is **0**, indicating that there is no timeout limit (**not supported currently**).| + +#### Returns + +| Return| Description | +| :--------------- | :------------------------------------- | +| bytes stdout | Captures the standard output of the command | +| bytes stderr | Captures the standard error output of the command | +| int32 exit_code | Exit code the command finished with. 
The default value is **0**, indicating success.| + +#### Exec + +#### Interface Prototype + +```text +rpc Exec(ExecRequest) returns (ExecResponse) {} +``` + +#### Interface Description + +Runs a command in the container, obtains the URL from the CRI server using gRPC, and establishes a persistent connection with the WebSocket server based on the obtained URL to interact with the container. + +#### Precautions + +This interface runs a single command and can open a terminal to interact with the container. One of **stdin**, **stdout**, or **stderr** must be true. If **tty** is true, **stderr** must be false, because multiplexing is not supported in that case: the outputs of **stdout** and **stderr** are combined into a single stream. + +#### Parameter + +| Member | Description | +| :------------------ | :------------------- | +| string container_id | Container ID | +| repeated string cmd | Command to be executed | +| bool tty | Whether to run the command in a TTY | +| bool stdin | Whether to stream standard input | +| bool stdout | Whether to stream standard output | +| bool stderr | Whether to stream standard error output | + +#### Returns + +| Return| Description | +| :--------------- | :------------------------ | +| string url | Fully qualified URL of the exec streaming server | + +#### Attach + +#### Interface Prototype + +```text +rpc Attach(AttachRequest) returns (AttachResponse) {} +``` + +#### Interface Description + +Takes over process 1 of the container, obtains the URL from the CRI server using gRPC, and establishes a persistent connection with the WebSocket server based on the obtained URL to interact with the container. 
+ +#### Parameter + +| Member | Description | +| :------------------ | :------------------ | +| string container_id | Container ID | +| bool tty | Whether to run the command in a TTY | +| bool stdin | Whether to stream standard input | +| bool stdout | Whether to stream standard output | +| bool stderr | Whether to stream standard error output | + +#### Returns + +| Return| Description | +| :--------------- | :-------------------------- | +| string url | Fully qualified URL of the attach streaming server | + +#### ContainerStats + +#### Interface Prototype + +```text +rpc ContainerStats(ContainerStatsRequest) returns (ContainerStatsResponse) {} +``` + +#### Interface Description + +Returns information about the resources occupied by a single container. Only containers whose runtime type is lcr are supported. + +#### Parameter + +| Member | Description| +| :------------------ | :------------- | +| string container_id | Container ID | + +#### Returns + +| Return | Description | +| :------------------- | :------------------------------------------------------ | +| ContainerStats stats | Container information. Information about drives and inodes can be returned only for containers started using images in oci format.| + +#### ListContainerStats + +#### Interface Prototype + +```text +rpc ListContainerStats(ListContainerStatsRequest) returns (ListContainerStatsResponse) {} +``` + +#### Interface Description + +Returns information about resources occupied by multiple containers. Conditional filtering is supported. + +#### Parameter + +| Member | Description| +| :-------------------------- | :------------- | +| ContainerStatsFilter filter | Conditional filtering parameters | + +#### Returns + +| Return | Description | +| :---------------------------- | :-------------------------------------------------------------- | +| repeated ContainerStats stats | List of container information. 
Information about drives and inodes can be returned only for containers started using images in OCI format.| + +#### UpdateRuntimeConfig + +#### Interface Prototype + +```text +rpc UpdateRuntimeConfig(UpdateRuntimeConfigRequest) returns (UpdateRuntimeConfigResponse); +``` + +#### Interface Description + +Provides standard CRI for updating pod CIDR of the network plugin. Currently, the CNI network plugins do not need to update the pod CIDR. Therefore, this interface only records access logs. + +#### Precautions + +This interface does not modify the system management information, but only records logs. + +#### Parameter + +| Member | Description | +| :--------------------------- | :---------------------- | +| RuntimeConfig runtime_config | Information to be configured for the runtime | + +#### Returns + +None + +#### Status + +#### Interface Prototype + +```text +rpc Status(StatusRequest) returns (StatusResponse) {}; +``` + +#### Interface Description + +Obtains the network status of the runtime and pod. When the network status is obtained, the network configuration is updated. + +#### Precautions + +If the network configuration fails to be updated, the original configuration is not affected. The original configuration is overwritten only when the network configuration is updated successfully. + +#### Parameter + +| Member| Description | +| :----------------- | :---------------------------------------- | +| bool verbose | Whether to display additional runtime information (not supported currently) | + +#### Returns + +| Return | Description | +| :----------------------- | :---------------------------------------------------------------------------------------------------------- | +| RuntimeStatus status | Runtime status | +| map\ info | Additional runtime information. The key of **info** can be any value, and the **value** is in JSON format and can contain any debug information. 
Additional information is displayed only when **Verbose** is set to **true**.| + +### Image Service + +Provides gRPC APIs for pulling, viewing, and removing images from the image repository. + +#### ListImages + +#### Interface Prototype + +```text +rpc ListImages(ListImagesRequest) returns (ListImagesResponse) {} +``` + +#### Interface Description + +Lists information about existing images. + +#### Precautions + +This interface is a unified interface. Images of embedded format can be queried using **cri images**. However, because embedded images are not in OCI standard, the query result has the following restrictions: + +- The displayed image ID is **digest** of **config** of the image because embedded images do not have image IDs. +- **digest** cannot be displayed because embedded images have only **digest** of **config**, not **digest** of themselves, and **digest** does not comply with OCI specifications. + +#### Parameter + +| Member| Description| +| :----------------- | :------------- | +| ImageSpec filter | Name of images to be filtered | + +#### Returns + +| Return | Description| +| :-------------------- | :------------- | +| repeated Image images | List of images | + +#### ImageStatus + +#### Interface Prototype + +```text +rpc ImageStatus(ImageStatusRequest) returns (ImageStatusResponse) {} +``` + +#### Interface Description + +Queries the details about a specified image. + +#### Precautions + +1. This interface is used to query information about a specified image. If the image does not exist, **ImageStatusResponse** is returned, in which **Image** is **nil**. +2. This interface is a unified interface. Images of embedded format cannot be queried because they do not comply with the OCI specification and lack some fields. + +#### Parameter + +| Member| Description | +| :----------------- | :------------------------------------- | +| ImageSpec image | Image name | +| bool verbose | Queries extra information. 
This parameter is not supported currently and no extra information is returned.| + +#### Returns + +| Return | Description | +| :----------------------- | :------------------------------------- | +| Image image | Image information | +| map\ info | Extra image information. This parameter is not supported currently and no extra information is returned.| + +#### PullImage + +#### Interface Prototype + +```text +rpc PullImage(PullImageRequest) returns (PullImageResponse) {} +``` + +#### Interface Description + +Downloads an image. + +#### Precautions + +You can download public images or private images using the username, password, and authentication information. The **server_address**, **identity_token**, and **registry_token** fields in **AuthConfig** are not supported. + +#### Parameter + +| Member | Description | +| :------------------------------ | :-------------------------------- | +| ImageSpec image | Name of the image to download | +| AuthConfig auth | Authentication information for downloading a private image | +| PodSandboxConfig sandbox_config | Downloads an Image in the pod context (not supported currently).| + +#### Returns + +| Return| Description | +| :--------------- | :----------------- | +| string image_ref | Information about the downloaded image | + +#### RemoveImage + +#### Interface Prototype + +```text +rpc RemoveImage(RemoveImageRequest) returns (RemoveImageResponse) {} +``` + +#### Interface Description + +Deletes a specified image. + +#### Precautions + +This interface is a unified interface. Images of embedded format cannot be deleted based on the image ID because they do not comply with the OCI specification and lack some fields. 
+ +#### Parameter + +| Member| Description | +| :----------------- | :--------------------- | +| ImageSpec image | Name or ID of the image to be deleted | + +#### Returns + +None + +#### ImageFsInfo + +#### Interface Prototype + +```text +rpc ImageFsInfo(ImageFsInfoRequest) returns (ImageFsInfoResponse) {} +``` + +#### Interface Description + +Queries information about the file systems of an image. + +#### Precautions + +The queried information is the file system information in the image metadata. + +#### Parameter + +None + +#### Returns + +| Return | Description | +| :----------------------------------------- | :------------------- | +| repeated FilesystemUsage image_filesystems | Image file system information | + +### Constraints + +1. If **log_directory** is configured in **PodSandboxConfig** when a sandbox is created, **log_path** must be specified in **ContainerConfig** when a container of the sandbox is created. Otherwise, the container may fail to be started or even deleted using CRI API. + + The actual **LOGPATH** of the container is **log_directory/log_path**. If **log_path** is not configured, the final **LOGPATH** changes to **log_directory**. + + - If the path does not exist, iSulad creates a soft link pointing to the final path of container logs when starting the container, and **log_directory** becomes a soft link. In this case, there are two situations: + + 1. If **log_path** is not configured for other containers in the sandbox, when other containers are started, **log_directory** is deleted and points to **log_path** of the newly started container. As a result, the logs of the previously started container point to the logs of the container started later. + 2. If **log_path** is configured for other containers in the sandbox, **LOGPATH** of the container is **log_directory/log_path**. 
Because **log_directory** is a soft link, if **log_directory/log_path** is used as the soft link target to point to the actual log path of the container, the container creation fails. + - If the path exists, iSulad attempts to delete the path (non-recursively) when starting the container. If the path is a folder that contains content, the deletion fails. As a result, the soft link fails to be created and the container fails to be started. When the container is deleted, the same symptom occurs. As a result, the container deletion fails. +2. If **log_directory** is configured in **PodSandboxConfig** when a sandbox is created and **log_path** is configured in **ContainerConfig** when a container is created, the final **LOGPATH** is **log_directory/log_path**. iSulad does not create **LOGPATH** recursively. Therefore, you must ensure that **dirname(LOGPATH)**, that is, the parent directory of the final log directory, exists. +3. If **log_directory** is configured in **PodSandboxConfig** when a sandbox is created, and the same **log_path** is specified in **ContainerConfig** when two or more containers are created or containers in different sandboxes point to the same **LOGPATH**, when the containers are started successfully, the log path of the container that is started later overwrites that of the container that is started earlier. +4. If the image content in the remote image repository changes and the CRI image pulling interface is used to download the image again, the image name and tag of the local original image (if it exists) change to "none." 
+ + Example: + + Local image: + + ```text + IMAGE TAG IMAGE ID SIZE + rnd-dockerhub.huawei.com/pproxyisulad/test latest 99e59f495ffaa 753kB + ``` + + After the **rnd-dockerhub.huawei.com/pproxyisulad/test:latest** image in the remote repository is updated and downloaded again: + + ```text + IMAGE TAG IMAGE ID SIZE + 99e59f495ffaa 753kB + rnd-dockerhub.huawei.com/pproxyisulad/test latest d8233ab899d41 1.42MB + ``` + + Run the `isula images` command. **REF** is displayed as **-**. + + ```text + REF IMAGE ID CREATED SIZE + rnd-dockerhub.huawei.com/pproxyisulad/test:latest d8233ab899d41 2019-02-14 19:19:37 1.42MB + - 99e59f495ffaa 2016-05-04 02:26:41 753kB + ``` + +5. The exec and attach interfaces of iSulad CRI API are implemented using WebSocket. Clients interact with iSulad using the same protocol. When using the exec or attach interface, do not transfer a large amount of data or files over the serial port. The exec or attach interface is used only for basic command interaction. If the user side does not process the data or files in a timely manner, data may be lost. In addition, do not use the exec or attach interface to transfer binary data or files. +6. The iSulad CRI API exec/attach depends on libwebsockets (LWS). It is recommended that the streaming API be used only for persistent connection interaction but not in high-concurrency scenarios, because the connection may fail due to insufficient host resources. It is recommended that the number of concurrent connections be less than or equal to 100. 
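The **LOGPATH** rules in constraints 1 to 3 can be condensed into a small sketch. This is a plain shell illustration of the documented behavior, not iSulad code, and the paths are hypothetical:

```shell
# Derive the final container log path (LOGPATH) from the sandbox's
# log_directory and the container's log_path, as described above.
resolve_logpath() {
    local log_directory="$1" log_path="$2"
    if [ -z "$log_path" ]; then
        # No log_path: LOGPATH degenerates to log_directory itself, which
        # iSulad then turns into a soft link (the risky case in constraint 1).
        echo "$log_directory"
    else
        # iSulad does not create parent directories recursively, so
        # dirname(log_directory/log_path) must already exist (constraint 2).
        echo "${log_directory%/}/${log_path}"
    fi
}

resolve_logpath /var/log/pods/sandbox1 container1/0.log
resolve_logpath /var/log/pods/sandbox1 ""
```

Note that two containers resolving to the same **LOGPATH** is exactly the overwrite situation of constraint 3, so the `log_path` values within one sandbox should be unique.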
diff --git a/docs/en/docs/Container/figures/en-us_image_0183048952.png b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/figures/en-us_image_0183048952.png similarity index 100% rename from docs/en/docs/Container/figures/en-us_image_0183048952.png rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/figures/en-us_image_0183048952.png diff --git a/docs/en/docs/Container/image-management.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/image-management.md similarity index 63% rename from docs/en/docs/Container/image-management.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/image-management.md index e0455a9616a702e1d7993db9d118c35048f0a5c1..ed2ee074cca619ab87077f7fefd3036a8d1cd837 100644 --- a/docs/en/docs/Container/image-management.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/image-management.md @@ -6,13 +6,10 @@ - [Logging Out of a Registry](#logging-out-of-a-registry) - [Pulling Images from a Registry](#pulling-images-from-a-registry) - [Deleting Images](#deleting-images) - - [Adding an Image Tag](#adding-an-image-tag) - [Loading Images](#loading-images) - [Listing Images](#listing-images) - [Inspecting Images](#inspecting-images) - [Two-Way Authentication](#two-way-authentication) - - [Importing rootfs](#importing-rootfs) - - [Exporting rootfs](#exporting-rootfs) - [Embedded Image Management](#embedded-image-management) - [Loading Images](#loading-images-1) - [Listing Images](#listing-images-1) @@ -35,7 +32,7 @@ isula login [OPTIONS] SERVER #### Parameters -For details about the parameters in the **login** command, see **Appendix** > **Command Line Parameters** > **Table 1 login command parameters**. +For details about parameters in the **login** command, see Table 1 in [Command Line Parameters](./appendix.md#command-line-parameters). 
#### Example @@ -59,7 +56,7 @@ isula logout SERVER #### Parameters -For details about the parameters in the **logout** command, see **Appendix** > **Command Line Parameters** > **Table 2 logout command parameters**. +For details about parameters in the **logout** command, see Table 2 in [Command Line Parameters](./appendix.md#command-line-parameters). #### Example @@ -77,12 +74,12 @@ Pull images from a registry to the local host. #### Usage ```shell -isula pull [OPTIONS] NAME[:TAG] +isula pull [OPTIONS] NAME[:TAG|@DIGEST] ``` #### Parameters -For details about the parameters in the **pull** command, see **Appendix** > **Command Line Parameters** > **Table 3 pull command parameters**. +For details about parameters in the **pull** command, see Table 3 in [Command Line Parameters](./appendix.md#command-line-parameters). #### Example @@ -106,7 +103,7 @@ isula rmi [OPTIONS] IMAGE [IMAGE...] #### Parameters -For details about the parameters in the **rmi** command, see **Appendix** > **Command Line Parameters** > **Table 4 rmi command parameters**. +For details about parameters in the **rmi** command, see Table 4 in [Command Line Parameters](./appendix.md#command-line-parameters). #### Example @@ -115,28 +112,6 @@ $ isula rmi rnd-dockerhub.huawei.com/official/busybox Image "rnd-dockerhub.huawei.com/official/busybox" removed ``` -### Adding an Image Tag - -#### Description - -Add an image tag. - -#### Usage - -```shell -isula tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG] -``` - -#### Parameters - -For details about the parameters in the **tag** command, see **Appendix** > **Command Line Parameters** > **Table 8 tag command parameters**. - -#### Example - -```shell -isula tag busybox:latest test:latest -``` - ### Loading Images #### Description @@ -151,7 +126,7 @@ isula load [OPTIONS] #### Parameters -For details about the parameters in the **load** command, see **Appendix** > **Command Line Parameters** > **Table 5 load command parameters**. 
+For details about parameters in the **load** command, see Table 5 in [Command Line Parameters](./appendix.md#command-line-parameters). #### Example @@ -169,19 +144,19 @@ List all images in the current environment. #### Usage ```shell -isula images [OPTIONS] +isula images ``` #### Parameters -For details about the parameters in the **images** command, see **Appendix** > **Command Line Parameters** > **Table 6 images command parameters**. +For details about parameters in the **images** command, see Table 6 in [Command Line Parameters](./appendix.md#command-line-parameters). #### Example ```shell $ isula images -REPOSITORY TAG IMAGE ID CREATED SIZE -busybox latest beae173ccac6 2021-12-31 03:19:41 1.184MB +REF IMAGE ID CREATED SIZE +rnd-dockerhub.huawei.com/official/busybox:latest e4db68de4ff2 2019-06-15 08:19:54 1.376 MB ``` ### Inspecting Images @@ -198,7 +173,7 @@ isula inspect [options] CONTAINER|IMAGE [CONTAINER|IMAGE...] #### Parameters -For details about the parameters in the **inspect** command, see **Appendix** > **Command Line Parameters** > **Table 7 inspect command parameters**. +For details about parameters in the **inspect** command, see Table 7 in [Command Line Parameters](./appendix.md#command-line-parameters). #### Example @@ -299,60 +274,6 @@ Image "my.csp-edge.com:5000/busybox" pulling Image "my.csp-edge.com:5000/busybox@sha256:f1bdc62115dbfe8f54e52e19795ee34b4473babdeb9bc4f83045d85c7b2ad5c0" pulled ``` -### Importing rootfs - -#### Description - -Import a .tar package that contains rootfs as an image. Generally, the .tar package is exported by running the **export** command or a .tar package that contains rootfs in compatible format. Currently, the .tar, .tar.gz, .tgz, .bzip, .tar.xz, and .txz formats are supported. Do not use the TAR package in other formats for import. - -#### Usage - -```shell -isula import file REPOSITORY[:TAG] -``` - -After the import is successful, the printed character string is the image ID generated by the imported rootfs. 
- -#### Parameters - -For details about the parameters in the **import** command, see **Appendix** > **Command Line Parameters** > **Table 9 import command parameters**. - -#### Example - -```shell -$ isula import busybox.tar test -sha256:441851e38dad32478e6609a81fac93ca082b64b366643bafb7a8ba398301839d -$ isula images -REPOSITORY TAG IMAGE ID CREATED SIZE -test latest 441851e38dad 2020-09-01 11:14:35 1.168 MB -``` - -### Exporting rootfs - -#### Description - -Export the content of the rootfs of a container as a TAR package. The exported TAR package can be imported as an image by running the **import** command. - -#### Usage - -```shell -isula export [OPTIONS] [ID|NAME] -``` - -#### Parameters - -For details about the parameters in the **export** command, see **Appendix** > **Command Line Parameters** > **Table 10 export command parameters**. - -#### Example - -```shell -$ isula run -tid --name container_test test sh -d7e601c2ef3eb8d378276d2b42f9e58a2f36763539d3bfcaf3a0a77dc668064b -$ isula export -o rootfs.tar d7e601c -$ ls -rootfs.tar -``` - ## Embedded Image Management ### Loading Images @@ -369,7 +290,7 @@ isula load [OPTIONS] --input=FILE --type=TYPE #### Parameters -For details about the parameters in the **load** command, see **Appendix** > **Command Line Parameters** > **Table 5 load command parameters**. +For details about parameters in the **load** command, see Table 5 in [Command Line Parameters](./appendix.md#command-line-parameters). #### Example @@ -392,14 +313,14 @@ isula images [OPTIONS] #### Parameters -For details about the parameters in the **images** command, see **Appendix** > **Command Line Parameters** > **Table 6 images command parameters**. +For details about parameters in the **images** command, see Table 6 in [Command Line Parameters](./appendix.md#command-line-parameters). 
#### Example ```shell $ isula images -REPOSITORY TAG IMAGE ID CREATED SIZE -busybox latest beae173ccac6 2021-12-31 03:19:41 1.184MB +REF IMAGE ID CREATED SIZE +test:v1 9319da1f5233 2018-03-01 10:55:44 1.273 MB ``` ### Inspecting Images @@ -416,7 +337,7 @@ isula inspect [options] CONTAINER|IMAGE [CONTAINER|IMAGE...] #### Parameters -For details about the parameters in the **inspect** command, see **Appendix** > **Command Line Parameters** > **Table 7 inspect command parameters**. +For details about parameters in the **inspect** command, see Table 7 in [Command Line Parameters](./appendix.md#command-line-parameters). #### Example @@ -439,7 +360,7 @@ isula rmi [OPTIONS] IMAGE [IMAGE...] #### Parameters -For details about the parameters in the **rmi** command, see **Appendix** > **Command Line Parameters** > **Table 4 rmi command parameters**. +For details about parameters in the **rmi** command, see Table 4 in [Command Line Parameters](./appendix.md#command-line-parameters). #### Example diff --git a/docs/en/docs/Container/installation-configuration.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/installation-configuration.md similarity index 83% rename from docs/en/docs/Container/installation-configuration.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/installation-configuration.md index c800b0dc550091c057bb26ec7b7038fa39900492..4a89dcb12b5a9b251b8fecbf5848d2b255e6531c 100644 --- a/docs/en/docs/Container/installation-configuration.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/installation-configuration.md @@ -1,9 +1,18 @@ -# Installation and Configuration +# Installation and Configuration + +This chapter covers the installation, configuration, upgrade, and removal of iSulad. + +> ![](./public_sys-resources/icon-note.gif) **Note:** +> Root privilege is required for installing, upgrading, or uninstalling iSulad. 
+ + - [Installation and Configuration](#installation-and-configuration) - [Installation Methods](#installation-methods) - [Deployment Configuration](#deployment-configuration) + + ## Installation Methods iSulad can be installed by running the **yum** or **rpm** command. The **yum** command is recommended because dependencies can be installed automatically. @@ -12,90 +21,83 @@ This section describes two installation methods. - \(Recommended\) Run the following command to install iSulad: - ```bash + ```shell sudo yum install -y iSulad ``` - If the **rpm** command is used to install iSulad, you need to download and manually install the RMP packages of iSulad and all its dependencies. To install the RPM package of a single iSulad \(the same for installing dependency packages\), run the following command: - ```bash - # sudo rpm -ihv iSulad-xx.xx.xx-xx.xxx.aarch64.rpm + ```shell + sudo rpm -ihv iSulad-xx.xx.xx-YYYYmmdd.HHMMSS.gitxxxxxxxx.aarch64.rpm ``` ## Deployment Configuration -After iSulad is installed, you can perform related configurations as required. - ### Configuration Mode The iSulad server daemon **isulad** can be configured with a configuration file or by running the **isulad --xxx** command. The priority in descending order is as follows: CLI \> configuration file \> default configuration in code. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If systemd is used to manage the iSulad process, modify the **OPTIONS** field in the **/etc/sysconfig/iSulad** file, which functions the same as using the CLI. +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> If systemd is used to manage the iSulad process, modify the **OPTIONS** field in the **/etc/sysconfig/iSulad** file, which functions the same as using the CLI. - **CLI** During service startup, configure iSulad using the CLI. 
To view the configuration options, run the following command: - ```bash - # isulad --help - isulad - + ```shell + $ isulad --help lightweight container runtime daemon Usage: isulad [global options] GLOBAL OPTIONS: - --authorization-plugin Use authorization plugin - --cgroup-parent Set parent cgroup for all containers - --cni-bin-dir The full path of the directory in which to search for CNI plugin binaries. Default: /opt/cni/bin - --cni-conf-dir The full path of the directory in which to search for CNI config files. Default: /etc/cni/net.d - --container-log-driver Set default container log driver, such as: json-file - --container-log-opts Set default container log driver options, such as: max-file=7 to set max number of container log files - --default-ulimit Default ulimits for containers (default []) - -e, --engine Select backend engine - -g, --graph Root directory of the iSulad runtime - -G, --group Group for the unix socket(default is isulad) - --help Show help - --hook-spec Default hook spec file applied to all containers - -H, --host The socket name used to create gRPC server - --image-layer-check Check layer intergrity when needed - --insecure-registry Disable TLS verification for the given registry - --insecure-skip-verify-enforce Force to skip the insecure verify(default false) - --log-driver Set daemon log driver, such as: file - -l, --log-level Set log level, the levels can be: FATAL ALERT CRIT ERROR WARN NOTICE INFO DEBUG TRACE - --log-opt Set daemon log driver options, such as: log-path=/tmp/logs/ to set directory where to store daemon logs - --native.umask Default file mode creation mask (umask) for containers - --network-plugin Set network plugin, default is null, support null and cni - -p, --pidfile Save pid into this file - --pod-sandbox-image The image whose network/ipc namespaces containers in each pod will use. 
(default "pause-${machine}:3.0") - --registry-mirrors Registry to be prepended when pulling unqualified images, can be specified multiple times - --selinux-enabled Enable selinux support - --start-timeout timeout duration for waiting on a container to start before it is killed - -S, --state Root directory for execution state files - --storage-driver Storage driver to use(default overlay2) - -s, --storage-opt Storage driver options - --tls Use TLS; implied by --tlsverify - --tlscacert Trust certs signed only by this CA (default "/root/.iSulad/ca.pem") - --tlscert Path to TLS certificate file (default "/root/.iSulad/cert.pem") - --tlskey Path to TLS key file (default "/root/.iSulad/key.pem") - --tlsverify Use TLS and verify the remote - --use-decrypted-key Use decrypted private key by default(default true) - --userns-remap User/Group setting for user namespaces - -V, --version Print the version - --websocket-server-listening-port CRI websocket streaming service listening port (default 10350) + --authorization-plugin Use authorization plugin + --cgroup-parent Set parent cgroup for all containers + --cni-bin-dir The full path of the directory in which to search for CNI plugin binaries. Default: /opt/cni/bin + --cni-conf-dir The full path of the directory in which to search for CNI config files. 
Default: /etc/cni/net.d + --default-ulimit Default ulimits for containers (default []) + -e, --engine Select backend engine + -g, --graph Root directory of the iSulad runtime + -G, --group Group for the unix socket(default is isulad) + --help Show help + --hook-spec Default hook spec file applied to all containers + -H, --host The socket name used to create gRPC server + --image-layer-check Check layer intergrity when needed + --image-opt-timeout Max timeout(default 5m) for image operation + --insecure-registry Disable TLS verification for the given registry + --insecure-skip-verify-enforce Force to skip the insecure verify(default false) + --log-driver Set daemon log driver, such as: file + -l, --log-level Set log level, the levels can be: FATAL ALERT CRIT ERROR WARN NOTICE INFO DEBUG TRACE + --log-opt Set daemon log driver options, such as: log-path=/tmp/logs/ to set directory where to store daemon logs + --native.umask Default file mode creation mask (umask) for containers + --network-plugin Set network plugin, default is null, suppport null and cni + -p, --pidfile Save pid into this file + --pod-sandbox-image The image whose network/ipc namespaces containers in each pod will use. 
(default "rnd-dockerhub.huawei.com/library/pause-${machine}:3.0") + --registry-mirrors Registry to be prepended when pulling unqualified images, can be specified multiple times + --start-timeout timeout duration for waiting on a container to start before it is killed + -S, --state Root directory for execution state files + --storage-driver Storage driver to use(default overlay2) + -s, --storage-opt Storage driver options + --tls Use TLS; implied by --tlsverify + --tlscacert Trust certs signed only by this CA (default "/root/.iSulad/ca.pem") + --tlscert Path to TLS certificate file (default "/root/.iSulad/cert.pem") + --tlskey Path to TLS key file (default "/root/.iSulad/key.pem") + --tlsverify Use TLS and verify the remote + --use-decrypted-key Use decrypted private key by default(default true) + -V, --version Print the version + --websocket-server-listening-port CRI websocket streaming service listening port (default 10350) ``` - Example: Start iSulad and change the log level to **DEBUG**. + Example: Start iSulad and change the log level to DEBUG. - ```bash - # isulad -l DEBUG + ```shell + isulad -l DEBUG ``` - **Configuration file** - The iSulad configuration files are **/etc/isulad/daemon.json** and **/etc/isulad/daemon_constants.json**. The parameters in the files are described as follows. + The iSulad configuration file is **/etc/isulad/daemon.json**. The parameters in the file are described as follows:

Parameter

Description

+

Description

Value Range

+

Value Range

Mandatory or Not

+

Mandatory or Not

--cpu-quota

Limits the CPU CFS quota in a container.

+

Limits the CPU CFS quota.

64-bit integer

--cpu-shares

Limit the CPU share (relative weight) in a container.

+

Limits the CPU share (relative weight).

64-bit integer

No

--cpu-rt-period

-

Limits the real-time CPU period in a container, in microseconds.

-

64-bit integer

-

No

-

--cpu-rt-runtime

-

Limits the real-time running time of the CPU in a container, in microseconds.

-

64-bit integer

-

No

-

--cpuset-cpus

Limits the CPU nodes used by a container.

+

Limits the CPU nodes.

Character string. The value is the number of CPUs to be set. For example, the value can be **0-3** or **0,1**.

+

Character string. The value lists the CPUs to use, for example, **0-3** or **0,1**.

No

--cpuset-mems

Limits the memory nodes used by cpuset in a container.

+

Limits the memory nodes used by cpuset in the container.

Character string. The value is the number of CPUs to be set. For example, the value can be **0-3** or **0,1**.

+

Character string. The value lists the memory nodes to use, for example, **0-3** or **0,1**.

No

Restricts the root file system (rootfs) storage space of the container.

The parsed value of rootfsSize is a positive number expressed in bytes within the int64 range. The default unit is B. You can also append a unit suffix matching ([kKmMgGtTpP])?[iI]?[bB]?$. (The minimum value is 10G in the device mapper scenario.)

+

The value of rootfsSize is parsed as a positive 64-bit integer expressed in bytes. A unit suffix matching ([kKmMgGtTpP])?[iI]?[bB]?$ may also be appended.

No

- - - - - - @@ -219,7 +210,7 @@ The iSulad server daemon **isulad** can be configured with a configuration fil - + + + + + - - - - -

Parameter

@@ -176,17 +178,6 @@ The iSulad server daemon **isulad** can be configured with a configuration fil

You can specify max-file, max-size, and log-path. max-file indicates the number of log files. max-size indicates the file size threshold that triggers log rotation. If max-file is 1, max-size is invalid. log-path specifies the directory for storing log files. The log-file-mode option sets the read/write permissions of log files. The value must be in octal format, for example, 0666.

--container-log-driver

-

"container-log": {

-

"driver": "json-file"

-

}

-

Default driver for serial port logs of the container.

-

Specify the default driver for serial port logs of all containers.

-

--start-timeout

"start-timeout": "2m"

@@ -196,7 +187,7 @@ The iSulad server daemon **isulad** can be configured with a configuration fil

None

None

+

--runtime

"default-runtime": "lcr"

When starting a container, set this parameter to specify multiple runtimes. Runtimes in this set are valid for container startup.

Runtime allowlist of a container. The customized runtimes in this set are valid. kata-runtime is used as the example.

+

Runtime whitelist of a container. The customized runtimes in this set are valid. kata-runtime is used as the example.

-p, --pidfile

@@ -266,6 +257,15 @@ The iSulad server daemon **isulad** can be configured with a configuration fil overlay2.basesize=${size} #It is equivalent to overlay2.size.

--image-opt-timeout

+

"image-opt-timeout": "5m"

+

Image operation timeout interval, which is 5m by default.

+

The value -1 indicates that the timeout interval is not limited.

+

--registry-mirrors

"registry-mirrors": [ "docker.io" ]

@@ -436,61 +436,13 @@ The iSulad server daemon **isulad** can be configured with a configuration fil

If the client specifies --websocket-server-listening-port, the specified value is used. The port number ranges from 1024 to 49151.

None

-

"cri-runtimes": {

-

"kata": "io.containerd.kata.v2"

-

}

-

Specifies the mapping of custom CRI runtimes.

-

iSulad can convert RuntimeClass to the corresponding runtime through the custom CRI runtime mapping.

-
- Configuration file **/etc/isulad/daemon_constants.json** + Example: - - - - - - - - - - - - - - - - - - -

Parameter

-

Configuration Example

-

Description

-

Remarks

-

Not supported

-

"default-host": "docker.io"

-

If an image name is prefixed with the image repository name, the image repository name will be removed when the image name is saved and displayed.

-

Generally, this parameter does not need to be modified.

-

Not supported

-

"registry-transformation": {

-

"docker.io": "registry-1.docker.io",

-

"index.docker.io": "registry-1.docker.io"

-

}

-

"key":"value" pair. The image is pulled from the repository specified by "key":"value".

-

Generally, this parameter does not need to be modified.

-
- - Example: - - ```bash - # cat /etc/isulad/daemon.json + ```shell + $ cat /etc/isulad/daemon.json { "group": "isulad", "default-runtime": "lcr", @@ -519,31 +471,19 @@ The iSulad server daemon **isulad** can be configured with a configuration fil "rnd-dockerhub.huawei.com" ], "pod-sandbox-image": "", + "image-opt-timeout": "5m", "native.umask": "secure", "network-plugin": "", "cni-bin-dir": "", "cni-conf-dir": "", "image-layer-check": false, "use-decrypted-key": true, - "insecure-skip-verify-enforce": false, - "cri-runtime": { - "kata": "io.containerd.kata.v2" - } - } - - # cat /etc/isulad/daemon.json - { - "default-host": "docker.io", - "registry-transformation":{ - "docker.io": "registry-1.docker.io", - "index.docker.io": "registry-1.docker.io" - } + "insecure-skip-verify-enforce": false } - ``` - >![](./public_sys-resources/icon-notice.gif) **NOTICE:** - >The default configuration file **/etc/isulad/daemon.json** is for reference only. Configure it based on site requirements. + > ![](./public_sys-resources/icon-notice.gif)**NOTICE:** + > The default configuration file **/etc/isulad/daemon.json** is for reference only. Configure it based on site requirements. ### Storage Description @@ -605,6 +545,13 @@ The iSulad server daemon **isulad** can be configured with a configuration fil

Real-time communication cache file, which is created during iSulad running.

+

\*

+ +

/var/lib/lcr/

+ +

Temporary directory of the LCR component.

+ +

\*

/var/lib/isulad/

@@ -621,13 +568,11 @@ The iSulad server daemon **isulad** can be configured with a configuration fil - In high concurrency scenarios \(200 containers are concurrently started\), the memory management mechanism of Glibc may cause memory holes and large virtual memory \(for example, 10 GB\). This problem is caused by the restriction of the Glibc memory management mechanism in the high concurrency scenario, but not by memory leakage. Therefore, the memory consumption does not increase infinitely. You can set **MALLOC\_ARENA\_MAX** to reducevirtual memory error and increase the rate of reducing physical memory. However, this environment variable will cause the iSulad concurrency performance to deteriorate. Set this environment variable based on the site requirements. - ```bash To balance performance and memory usage, set MALLOC_ARENA_MAX to 4. (The iSulad performance on the ARM64 server is affected by less than 10%.) Configuration method: 1. To manually start iSulad, run the export MALLOC_ARENA_MAX=4 command and then start iSulad. 2. If systemd manages iSulad, you can modify the /etc/sysconfig/iSulad file by adding MALLOC_ARENA_MAX=4. - ``` - Precautions for specifying the daemon running directories @@ -637,8 +582,8 @@ The iSulad server daemon **isulad** can be configured with a configuration fil - Log file management: - >![](./public_sys-resources/icon-notice.gif) **NOTICE:** - >Log function interconnection: logs are managed by systemd as iSulad is and then transmitted to rsyslogd. By default, rsyslog restricts the log writing speed. You can add the configuration item **$imjournalRatelimitInterval 0** to the **/etc/rsyslog.conf** file and restart the rsyslogd service. + > ![](./public_sys-resources/icon-notice.gif)**NOTICE:** + > Log function interconnection: logs are managed by systemd as iSulad is and then transmitted to rsyslogd. By default, rsyslog restricts the log writing speed. 
You can add the configuration item **$imjournalRatelimitInterval 0** to the **/etc/rsyslog.conf** file and restart the rsyslogd service. - Restrictions on command line parameter parsing @@ -650,18 +595,18 @@ The iSulad server daemon **isulad** can be configured with a configuration fil 2. When a long flag is used, the character string connected to **--** is regarded as a long flag. If the character string contains an equal sign \(=\), the character string before the equal sign \(=\) is a long flag, and the character string after the equal sign \(=\) is a parameter. - ```bash + ```shell isula run --user=root busybox ``` or - ```bash + ```shell isula run --user root busybox ``` - After an iSulad container is started, you cannot run the **isula run -i/-t/-ti** and **isula attach/exec** commands as a non-root user. -- The default path for storing temporary files of iSulad is **/var/lib/isulad/isulad_tmpdir**. If the root directory of iSulad is changed, the path is **\$isulad_root/isulad_tmpdir**. To change the directory for storing temporary files of iSulad, you can configure the **ISULAD_TMPDIR** environment variable before starting iSulad. The **ISULAD_TMPDIR** environment variable is checked during the iSulad startup. If the **ISULAD_TMPDIR** environment variable is configured, the **\$ISULAD_TMPDIR/isulad_tmpdir** directory is used as the path for storing temporary files. Do not store files or folders named **isulad_tmpdir** in **\$ISULAD_TMPDIR** because iSulad recursively deletes the **\$ISULAD_TMPDIR/isulad_tmpdir** directory when it is started to prevent residual data. In addition, ensure that only the **root** user can access the **\$ISULAD_TMPDIR** directory to prevent security problems caused by operations of other users. +- When iSulad connects to an OCI container, only kata-runtime can be used to start the OCI container. 
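The long-flag parsing restriction described above (`--flag=value` splits at the first equal sign, `--flag value` consumes the next argument) can be sketched as follows. This is an illustration of the documented rule only, not iSulad's actual parser:

```shell
# Show how a long flag and its parameter are separated:
# "--name=value" splits at the first "=", while "--name value"
# takes the following argument as the parameter.
parse_long_flag() {
    case "$1" in
        --*=*)
            echo "flag=${1%%=*} value=${1#*=}"
            ;;
        --*)
            echo "flag=$1 value=$2"
            ;;
    esac
}

parse_long_flag --user=root
parse_long_flag --user root
```

Both invocations yield the same flag/value pair, which is why `isula run --user=root busybox` and `isula run --user root busybox` are equivalent.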
### Daemon Multi-Port Binding @@ -685,7 +630,7 @@ Users can configure one or more ports in the hosts field in the **/etc/isulad/d Users can also run the **-H** or **--host** command in the **/etc/sysconfig/iSulad** file to configure a port, or choose not to specify hosts. -```text +```ini OPTIONS='-H unix:///var/run/isulad.sock --host tcp://127.0.0.1:6789' ``` @@ -695,7 +640,7 @@ If hosts are not specified in the **daemon.json** file and iSulad, the daemon - Users cannot specify hosts in the **/etc/isulad/daemon.json** and **/etc/sysconfig/iSuald** files at the same time. Otherwise, an error will occur and iSulad cannot be started. - ```bash + ```text unable to configure the isulad with file /etc/isulad/daemon.json: the following directives are specified both as a flag and in the configuration file: hosts: (from flag: [unix:///var/run/isulad.sock tcp://127.0.0.1:6789], from file: [unix:///var/run/isulad.sock tcp://localhost:5678 tcp://127.0.0.1:6789]) ``` @@ -713,7 +658,7 @@ iSulad is designed in C/S mode. By default, the iSulad daemon process listens on - Example of generating a plaintext private key and certificate - ```bash + ```shell #!/bin/bash set -e echo -n "Enter pass phrase:" @@ -756,7 +701,7 @@ iSulad is designed in C/S mode. By default, the iSulad daemon process listens on - Example of generating an encrypted private key and certificate request file - ```bash + ```shell #!/bin/bash echo -n "Enter public network ip:" @@ -828,110 +773,108 @@ Mode 1 is used for the server, and mode 2 for the client if the two-way authenti Mode 2 is used for the server and the client if the unidirectional authentication mode is used for communication. ->![](./public_sys-resources/icon-notice.gif) **NOTICE:** -> ->- If RPM is used for installation, the server configuration can be modified in the **/etc/isulad/daemon.json** and **/etc/sysconfig/iSulad** files. 
->- Two-way authentication is recommended as it is more secure than non-authentication or unidirectional authentication. ->- GRPC open-source component logs are not taken over by iSulad. To view gRPC logs, set the environment variables **gRPC\_VERBOSITY** and **gRPC\_TRACE** as required. +> ![](./public_sys-resources/icon-notice.gif)**NOTICE:** > +> - If RPM is used for installation, the server configuration can be modified in the **/etc/isulad/daemon.json** and **/etc/sysconfig/iSulad** files. +> - Two-way authentification is recommended as it is more secure than non-authentication or unidirectional authentication. +> - GRPC open-source component logs are not taken over by iSulad. To view gRPC logs, set the environment variables **gRPC\_VERBOSITY** and **gRPC\_TRACE** as required. #### Example On the server: -```bash +```shell isulad -H=tcp://0.0.0.0:2376 --tlsverify --tlscacert ~/.iSulad/ca.pem --tlscert ~/.iSulad/server-cert.pem --tlskey ~/.iSulad/server-key.pem ``` On the client: -```bash +```shell isula version -H=tcp://$HOSTIP:2376 --tlsverify --tlscacert ~/.iSulad/ca.pem --tlscert ~/.iSulad/cert.pem --tlskey ~/.iSulad/key.pem ``` ### devicemapper Storage Driver Configuration -To use the devicemapper storage driver, you need to configure a thinpool device which requires an independent block device with sufficient free space. Take the independent block device **/dev/xvdf** as an example. The configuration method is as follows: - -1. Configuring a thinpool +To use the devicemapper storage driver, you need to configure a thinpool device which requires an independent block device with sufficient free space. Take the independent block device **/dev/xvdf** as an example. The configuration method is as follows. - 1. Stop the iSulad service. +#### Configuring a Thinpool - ```bash - # systemctl stop isulad - ``` +1. Stop the iSulad service. - 2. Create a logical volume manager \(LVM\) volume based on the block device. 
+ ```shell + # systemctl stop isulad + ``` - ```bash - # pvcreate /dev/xvdf - ``` +2. Create a logical volume manager \(LVM\) volume based on the block device. - 3. Create a volume group based on the created physical volume. + ```shell + # pvcreate /dev/xvdf + ``` - ```bash - # vgcreate isula /dev/xvdf - Volume group "isula" successfully created: - ``` +3. Create a volume group based on the created physical volume. - 4. Create two logical volumes named **thinpool** and **thinpoolmeta**. + ```shell + # vgcreate isula /dev/xvdf + Volume group "isula" successfully created: + ``` - ```bash - # lvcreate --wipesignatures y -n thinpool isula -l 95%VG - Logical volume "thinpool" created. - ``` +4. Create two logical volumes named **thinpool** and **thinpoolmeta**. - ```bash - # lvcreate --wipesignatures y -n thinpoolmeta isula -l 1%VG - Logical volume "thinpoolmeta" created. - ``` + ```shell + # lvcreate --wipesignatures y -n thinpool isula -l 95%VG + Logical volume "thinpool" created. + ``` - 5. Convert the two logical volumes into a thinpool and the metadata used by the thinpool. + ```shell + # lvcreate --wipesignatures y -n thinpoolmeta isula -l 1%VG + Logical volume "thinpoolmeta" created. + ``` - ```bash - # lvconvert -y --zero n -c 512K --thinpool isula/thinpool --poolmetadata isula/thinpoolmeta - - WARNING: Converting logical volume isula/thinpool and isula/thinpoolmeta to - thin pool's data and metadata volumes with metadata wiping. - THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) - Converted isula/thinpool to thin pool. - ``` +5. Convert the two logical volumes into a thinpool and the metadata used by the thinpool. -2. Modifying the iSulad configuration files + ```shell + # lvconvert -y --zero n -c 512K --thinpool isula/thinpool --poolmetadata isula/thinpoolmeta + + WARNING: Converting logical volume isula/thinpool and isula/thinpoolmeta to + thin pool's data and metadata volumes with metadata wiping. 
+ THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) + Converted isula/thinpool to thin pool. + ``` - 1. If iSulad has been used in the environment, back up the running data first. +#### Modifying the iSulad Configuration Files - ```bash - # mkdir /var/lib/isulad.bk - # mv /var/lib/isulad/* /var/lib/isulad.bk - ``` +1. If iSulad has been used in the environment, back up the running data first. - 2. Modify configuration files. + ```shell + # mkdir /var/lib/isulad.bk + # mv /var/lib/isulad/* /var/lib/isulad.bk + ``` - Two configuration methods are provided. Select one based on site requirements. +2. Modify configuration files. - - Edit the **/etc/isulad/daemon.json** file, set **storage-driver** to **devicemapper**, and set parameters related to the **storage-opts** field. For details about related parameters, see [Parameter Description](#parameter-description). The following lists the configuration reference: + Two configuration methods are provided. Select one based on site requirements. + - Edit the **/etc/isulad/daemon.json** file, set **storage-driver** to **devicemapper**, and set parameters related to the **storage-opts** field. For details about related parameters, see [Parameter Description](#en-us_topic_0222861454_section1712923715282). The following lists the configuration reference: - ```json - { - "storage-driver": "devicemapper" - "storage-opts": [ - "dm.thinpooldev=/dev/mapper/isula-thinpool", - "dm.fs=ext4", - "dm.min_free_space=10%" - ] - } - ``` + ```json + { + "storage-driver": "devicemapper", + "storage-opts": [ + "dm.thinpooldev=/dev/mapper/isula-thinpool", + "dm.fs=ext4", + "dm.min_free_space=10%" + ] + } + ``` - - You can also edit **/etc/sysconfig/iSulad** to explicitly specify related iSulad startup parameters. For details about related parameters, see [Parameter Description](#parameter-description).
The following lists the configuration reference: + - Edit **/etc/sysconfig/iSulad** to explicitly specify related iSulad startup parameters. For details about related parameters, see [Parameter Description](#en-us_topic_0222861454_section1712923715282). The following lists the configuration reference: - ```text - OPTIONS="--storage-driver=devicemapper --storage-opt dm.thinpooldev=/dev/mapper/isula-thinpool --storage-opt dm.fs=ext4 --storage-opt dm.min_free_space=10%" - ``` + ```ini + OPTIONS="--storage-driver=devicemapper --storage-opt dm.thinpooldev=/dev/mapper/isula-thinpool --storage-opt dm.fs=ext4 --storage-opt dm.min_free_space=10%" + ``` 3. Start iSulad for the settings to take effect. - ```bash + ```shell # systemctl start isulad ``` @@ -1012,7 +955,7 @@ For details about parameters supported by storage-opts, see [Table 1](#en-us_to - If graphdriver is devicemapper and the metadata files are damaged and cannot be restored, you need to manually restore the metadata files. Do not directly operate or tamper with metadata of the devicemapper storage driver in Docker daemon. - When the devicemapper LVM is used, if the devicemapper thinpool is damaged due to abnormal power-off, you cannot ensure the data integrity or whether the damaged thinpool can be restored. Therefore, you need to rebuild the thinpool. -##### Precautions for Switching the devicemapper Storage Pool When the User Namespace Feature Is Enabled on iSula +**Precautions for Switching the devicemapper Storage Pool When the User Namespace Feature Is Enabled on iSula** - Generally, the path of the deviceset-metadata file is **/var/lib/isulad/devicemapper/metadata/deviceset-metadata** during container startup. - If user namespaces are used, the path of the deviceset-metadata file is **/var/lib/isulad/**_userNSUID.GID_**/devicemapper/metadata/deviceset-metadata**. 
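Whichever of the two configuration methods above is chosen, a syntax error in **daemon.json** prevents iSulad from starting. A minimal sketch (assuming `python3` is available on the host) that validates a candidate configuration before it is installed:

```shell
# check_daemon_json: return success only if the given file parses as JSON.
check_daemon_json() {
    python3 -m json.tool "$1" > /dev/null 2>&1
}

# Write the devicemapper configuration from this section to a temporary file
# and validate it before copying it to /etc/isulad/daemon.json.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
    "storage-driver": "devicemapper",
    "storage-opts": [
        "dm.thinpooldev=/dev/mapper/isula-thinpool",
        "dm.fs=ext4",
        "dm.min_free_space=10%"
    ]
}
EOF
check_daemon_json "$tmp" && echo "daemon.json candidate: OK"
rm -f "$tmp"
```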
diff --git a/docs/en/docs/Container/installation-upgrade-Uninstallation.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/installation-upgrade-Uninstallation.md similarity index 85% rename from docs/en/docs/Container/installation-upgrade-Uninstallation.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/installation-upgrade-Uninstallation.md index b217abab2632fcf0219fdd7879bf1132230ef335..b857929197ba92fb9abb7a7bf8168703ffe02e14 100644 --- a/docs/en/docs/Container/installation-upgrade-Uninstallation.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/installation-upgrade-Uninstallation.md @@ -1,4 +1,3 @@ # Installation, Upgrade and Uninstallation -This chapter describes how to install, configure, upgrade, and uninstall iSulad. - +This chapter describes how to install, configure, upgrade, and uninstall iSulad. diff --git a/docs/en/docs/Container/interconnecting-isula-shim-v2-with-stratovirt.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/interconnecting-isula-shim-v2-with-stratovirt.md similarity index 93% rename from docs/en/docs/Container/interconnecting-isula-shim-v2-with-stratovirt.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/interconnecting-isula-shim-v2-with-stratovirt.md index 649c5ccadabe6183ce2fa013d1640dfb61991c8e..63b546043fc5a89b5227ca51be4d8ad0903b1838 100644 --- a/docs/en/docs/Container/interconnecting-isula-shim-v2-with-stratovirt.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/interconnecting-isula-shim-v2-with-stratovirt.md @@ -4,13 +4,13 @@ shim v2 is a next-generation shim solution. Compared with shim v1, shim v2 features shorter call chains, clearer architecture, and lower memory overhead in multi-service container scenarios. iSula can run secure containers through isulad-shim or containerd-shim-kata-v2. 
The isulad-shim component is the implementation of the shim v1 solution, and the containerd-shim-kata-v2 component is the implementation of the shim v2 solution in the secure container scenario. This document describes how to interconnect iSula with containerd-shim-kata-v2. -## Interconnecting with containerd-shim-kata-v2 +## Interconnecting with containerd-shim-v2-kata ### Prerequisites -Before interconnecting iSula with containerd-shim-kata-v2, ensure that the following prerequisites are met: +Before interconnecting iSula with containerd-shim-v2-kata, ensure that the following prerequisites are met: -- iSulad and kata-containers have been installed. +- iSulad, lib-shim-v2, and kata-containers have been installed. - StratoVirt supports only the devicemapper storage driver. Therefore, you need to configure the devicemapper environment and ensure that the devicemapper storage driver used by iSulad works properly. ### Environment Setup @@ -19,11 +19,12 @@ The following describes how to install and configure iSulad and kata-containers. #### Installing Dependencies -Configure the Yum source based on the OS version and install iSulad and kata-containers as the **root** user. +Configure the YUM source based on the OS version and install iSulad, lib-shim-v2, and kata-containers as the **root** user. ```shell -yum install iSulad -yum install kata-containers +# yum install iSulad +# yum install kata-containers +# yum install lib-shim-v2 ``` #### Creating and Configuring a Storage Device @@ -115,7 +116,7 @@ III. Making the Configuration Take Effect If the following information is displayed, the configuration is successful: - ```text + ```shell Storage Driver: devicemapper ``` @@ -133,7 +134,7 @@ If containerd-shim-kata-v2 uses QEMU as the virtualization component, perform th Set **sandbox_cgroup_with_emulator** to **false**. Currently, shim v2 does not support this function. 
Other parameters are the same as the kata configuration parameters in shim v1 or use the default values. - ```text + ```toml sandbox_cgroup_with_emulator = false ``` @@ -196,7 +197,7 @@ If containerd-shim-kata-v2 uses StratoVirt as the virtualization component, perf lsmod |grep vhost_vsock ``` - Download the kernel of the required version and architecture and save it to the **/var/lib/kata/** directory. For example, download the [openeuler repo]() of the x86 architecture of openEuler 21.03. + Download the kernel of the required version and architecture and save it to the **/var/lib/kata/** directory. For example, download the [openeuler repo](https://repo.openeuler.org/) of the x86 architecture of openEuler 21.03. ```bash cd /var/lib/kata diff --git a/docs/en/docs/Container/interconnection-with-the-cni-network.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/interconnection-with-the-cni-network.md similarity index 95% rename from docs/en/docs/Container/interconnection-with-the-cni-network.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/interconnection-with-the-cni-network.md index 367ea7497103e6dfc5cf219670dd6e207619247d..26814d1de7bdf074e24de259c4129dccd9b257a5 100644 --- a/docs/en/docs/Container/interconnection-with-the-cni-network.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/interconnection-with-the-cni-network.md @@ -83,8 +83,8 @@ The following is an example of the CNI network configuration method: The CNI network configuration includes two types, both of which are in the .json file format. -- Single-network plane configuration file with the file name extension .conf or .json. For details about the configuration items, see Table 1 in the appendix. -- Multi-network plane configuration file with the file name extension .conflist. For details about the configuration items, see Table 3 in the appendix. +- Single-network plane configuration file with the file name extension .conf or .json. 
For details about the configuration items, see [Table 1](cni-parameters.md#en-us_topic_0184347952_table425023335913) in the appendix. +- Multi-network plane configuration file with the file name extension .conflist. For details about the configuration items, see [Table 3](cni-parameters.md#en-us_topic_0184347952_table657910563105) in the appendix. ### Adding a Pod to the CNI Network List @@ -94,7 +94,7 @@ If **--network-plugin=cni** is configured for iSulad and the default network p ```json "port_mappings":[ - { + { "protocol": 1, "container_port": 80, "host_port": 8080 @@ -110,10 +110,10 @@ If **--network-plugin=cni** is configured for iSulad and the default network p When StopPodSandbox is called, the interface for removing a pod from the CNI network list will be called to clear network resources. ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif)**NOTE:** > ->1. Before calling the RemovePodSandbox interface, you must call the StopPodSandbox interface at least once. ->2. If StopPodSandbox fails to call the CNI, residual network resources will be cleaned by the CNI network plugin. +> 1. Before calling the RemovePodSandbox interface, you must call the StopPodSandbox interface at least once. +> 2. If StopPodSandbox fails to call the CNI, residual network resources may exist.
## Usage Restrictions diff --git a/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/isula-common-issues-and-solutions.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/isula-common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..6eb0cd216772695d26f04cd42b749bf7883947b1 --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/isula-common-issues-and-solutions.md @@ -0,0 +1,21 @@ +# Common Issues and Solutions + +## Issue 1: Changing iSulad Default Runtime to `lxc` Causes Container Startup Error: Failed to Initialize Engine or Runtime + +**Cause**: iSulad uses `runc` as its default runtime. Switching to `lxc` without the required dependencies causes this issue. + +**Solution**: To set `lxc` as the default runtime, install the `lcr` and `lxc` packages. Then, either configure the `runtime` field in the iSulad configuration file to `lcr` or use the `--runtime lcr` flag when launching containers. Avoid uninstalling `lcr` or `lxc` after starting containers, as this may leave behind residual resources during container deletion. + +## Issue 2: Error When Using iSulad CRI V1 Interface: rpc error: code = Unimplemented desc = + +**Cause**: iSulad supports both CRI V1alpha2 and CRI V1 interfaces, with CRI V1alpha2 enabled by default. Using CRI V1 requires explicit configuration. + +**Solution**: Enable the CRI V1 interface by modifying the iSulad configuration file at **/etc/isulad/daemon.json**. + +```json +{ + "enable-cri-v1": true +} +``` + +When compiling iSulad from source, include the `cmake` option `-D ENABLE_CRI_API_V1=ON` to enable CRI V1 support.
diff --git a/docs/en/docs/Container/isulad-container-engine.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/isulad-container-engine.md similarity index 99% rename from docs/en/docs/Container/isulad-container-engine.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/isulad-container-engine.md index 54cd5ca2112776a9d584b4eb2e5132607a5dd743..1f43ba362612eb106b6c415e001823618d8c459e 100644 --- a/docs/en/docs/Container/isulad-container-engine.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/isulad-container-engine.md @@ -1,12 +1,9 @@ # iSulad Container Engine - Compared with Docker, iSulad is a new container solution with a unified architecture design to meet different requirements in the CT and IT fields. Lightweight containers are implemented using C/C++. They are smart, fast, and not restricted by hardware and architecture. With less noise floor overhead, the containers can be widely used. [Figure 1](#en-us_topic_0182207099_fig10763114141217) shows the unified container architecture. **Figure 1** Unified container architecture - ![](./figures/en-us_image_0183048952.png) - diff --git a/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/isulad-support-for-cdi.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/isulad-support-for-cdi.md new file mode 100644 index 0000000000000000000000000000000000000000..e7e3cd7ab551a524f2f898fbb3c018d4cc0ea422 --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/isulad-support-for-cdi.md @@ -0,0 +1,120 @@ +# iSulad Support for CDI + +## Overview + +Container Device Interface (CDI) is a container runtime specification used to support third-party devices. + +CDI solves the following problems: +In the past, making a container aware of a device in Linux only required exposing a single device node in the container.
However, as devices and software become more complex, vendors want to perform more operations, such as: + +- Exposing multiple device nodes to a container, mounting files from a runtime namespace to a container, or hiding procfs entries. +- Checking the compatibility between containers and devices. For example, checking whether a container can run on a specified device. +- Performing runtime-specific operations, such as virtual machines and Linux container-based runtimes. +- Performing device-specific operations, such as GPU memory cleanup and FPGA re-programming. + +In the absence of third-party device standards, vendors often have to write and maintain multiple plugins for different runtimes, or even contribute vendor-specific code directly in a runtime. In addition, the runtime does not expose the plugin system in a unified manner (or even not at all), resulting in duplication of functionality in higher-level abstractions (such as Kubernetes device plugins). + +To solve the preceding problem, CDI provides the following features: +CDI describes a mechanism that allows third-party vendors to interact with devices without modifying the container runtime. + +The mechanism is exposed as a JSON file (similar to the container network interface CNI), which allows vendors to describe the operations that the container runtime should perform on the OCI-based container. + +Currently, iSulad supports the [CDI v0.6.0](https://github.com/cncf-tags/container-device-interface/blob/v0.6.0/SPEC.md) specification. + +## Configuring iSulad to Support CDI + +Modify the **daemon.json** file as follows and restart iSulad: + +```json +{ + ... + "enable-cri-v1": true, + "cdi-spec-dirs": ["/etc/cdi", "/var/run/cdi"], + "enable-cdi": true +} +``` + +**cdi-spec-dirs** specifies the directory where CDI specifications are stored. If this parameter is not specified, the default value **/etc/cdi** or **/var/run/cdi** is used. 
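The overall shape of such a specification file can be sketched as follows. The vendor name (`vendor.example.com`), device name, and device node are hypothetical; the field names follow the CDI v0.6.0 specification linked above:

```json
{
    "cdiVersion": "0.6.0",
    "kind": "vendor.example.com/device",
    "devices": [
        {
            "name": "mydevice",
            "containerEdits": {
                "deviceNodes": [
                    {
                        "path": "/dev/vendor-dev0",
                        "type": "c",
                        "major": 240,
                        "minor": 0
                    }
                ]
            }
        }
    ]
}
```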
+ +## Examples + +### CDI Specification Example + +For details about each field, see [CDI v0.6.0](https://github.com/cncf-tags/container-device-interface/blob/v0.6.0/SPEC.md). + +```bash +$ mkdir /etc/cdi +$ cat > /etc/cdi/vendor.json < + - [Local Volume Management](#local-volume-management) - [Overview](#overview) - [Precautions](#precautions) @@ -34,6 +36,8 @@ - [Conflict Combination Rules](#conflict-combination-rules) - [Differences Between iSula and Docker](#differences-between-isula-and-docker) + + ## Overview After a container managed by iSula is destroyed, all data in the container is destroyed. If you want to retain data after the container is destroyed, a data persistence mechanism is required. iSula allows files, directories, or volumes on a host to be mounted to a container at runtime. You can write the data to be persisted to the mount point in the container. After the container is destroyed, the files, directories, and volumes on the host are retained. If you need to delete a file, directory, or volume on the host, you can manually delete the file or directory, or run the iSula command to delete the volume. Currently, the iSula supports only local volume management. Local volumes are classified into named volumes and anonymous volumes. A volume whose name is specified by a user is called a named volume. If a user does not specify a name for a volume, iSula automatically generates a name (a 64-bit random number) for the volume, that is, an anonymous volume. @@ -42,7 +46,7 @@ The following describes how to use iSula to manage local volumes. ## Precautions -- The volume name contains 2 to 64 characters and complies with the regular expression ^\[a-zA-Z0-9\]\[a-zA-Z0-9_.-\]{1,63}$. That is, the first character of the volume name must be a letter or digit, and other characters can be letters, digits, underscores (_), periods (.), and hyphens (-). 
+- The volume name contains 2 to 64 characters and complies with the regular expression `^[a-zA-Z0-9][a-zA-Z0-9_.-]{1,63}$`. That is, the first character of the volume name must be a letter or digit, and other characters can be letters, digits, underscores (_), periods (.), and hyphens (-). - During container creation, if data exists at the mount point of the container corresponding to the volume, the data is copied to the volume by default. If the iSula breaks down or restarts or the system is powered off during the copy process, the data in the volume may be incomplete. In this case, you need to manually delete the volume or the data in the volume to ensure that the data is correct and complete. ## Usage @@ -101,19 +105,16 @@ When you create and run a container, use the --mount option to mount the files, #### Parameter Description -- type: Type of data mounted to the container. The value can be **bind**, **volume**, **squashfs**, or **tmpfs**. If this parameter is not specified, the default value is **volume**. +- type: Type of data mounted to the container. The value can be bind, volume, or squashfs. If this parameter is not specified, the default value is volume. - src: Path of the file, directory, or volume to be mounted on the host. If the value is an absolute path, the file or directory on the host is mounted. If the value is a volume name, a volume is mounted. If this parameter is not specified, the volume is an anonymous volume. If a folder or volume does not exist, iSula creates a file or volume and then mounts it. The keyword src is also called source. - dst: Mount path in the container. The value must be an absolute path. The keyword dst is also called destination or target. -- KEY=VALUE: Parameter of `--mount`. The values are as follows: +- KEY=VALUE: Parameter of --mount. 
The values are as follows: -| Key | Value | -| ------------------------------ | --------------------------------------------------------------------------- | -| selinux-opts/bind-selinux-opts | z or Z. z indicates that if SELinux is enabled, the SELinux share label is added during mounting. Z indicates that if SELinux is enabled, the SELinux private label is added during mounting.| -| ro/readonly | 0/false indicates that the mount is read/write. 1/true indicates that the mount is read-only. If this parameter is not specified, the mount is read-only. The parameter is supported only when type is set to bind. | -| bind-propagation | The value is **private/rprivate/slave/rslave/shared/rshared** and functions similar to the value of `-v`. This is valid only when **type** is **bind**. | -| volume-nocopy | Data at the mount point is not copied. If this parameter is not specified, data is copied by default. In addition, if data already exists in the volume, the data will not be copied. This parameter is supported only when type is set to volume. | -| tmpfs-size | Maximum sizeof tmpfs. This is not limited by default. | -| tmpfs-mode | Permission for the mounted tmpfs. By default, the value is **777**. | +| KEY | VALUE | +| ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| selinux-opts/bind-selinux-opts | z or Z. z indicates that if SELinux is enabled, the SELinux share label is added during mounting. Z indicates that if SELinux is enabled, the SELinux private label is added during mounting. | +| ro/readonly | 0/false indicates that the mount is read/write. 1/true indicates that the mount is read-only. If this parameter is not specified, the mount is read-only. The parameter is supported only when type is set to bind. 
| +| volume-nocopy | Data at the mount point is not copied. If this parameter is not specified, data is copied by default. In addition, if data already exists in the volume, the data will not be copied. This parameter is supported only when type is set to volume. | #### Examples @@ -168,7 +169,7 @@ This command is used to query all volumes managed by iSula. Option: -- -q,--quiet: If this parameter is not specified, only the volume driver information and volume name are queried by default. If this parameter is specified, only the volume name is queried. +- -q,--quiet: If this parameter is not specified, only the volume driver information and volume name are queried by default. If this parameter is specified, only the volume name is queried. #### Examples diff --git a/docs/en/docs/Container/privileged-container.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/privileged-container.md similarity index 98% rename from docs/en/docs/Container/privileged-container.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/privileged-container.md index cd19dffa22c7fe8b214beb9b5227c1bb2db47cf6..f69f8bd9d03669fb7f577ea491626bb669126166 100644 --- a/docs/en/docs/Container/privileged-container.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/privileged-container.md @@ -1,11 +1,10 @@ # Privileged Container -- [Privileged Container](#privileged-container.) +- [Privileged Container](#privileged-container) - [Scenarios](#scenarios) - - [Usage Restrictions](#usage-restrictions-1) + - [Usage Restrictions](#usage-restrictions) - [Usage Guide](#usage-guide) - ## Scenarios By default, iSulad starts common containers that are suitable for starting common processes. However, common containers have only the default permissions defined by capabilities in the **/etc/default/isulad/config.json** directory. To perform privileged operations \(such as use devices in the **/sys** directory\), a privileged container is required.
By using this feature, user **root** in the container has **root** permissions of the host. Otherwise, user **root** in the container has only common user permissions of the host. @@ -14,15 +13,14 @@ By default, iSulad starts common containers that are suitable for starting commo Privileged containers provide all functions for containers and remove all restrictions enforced by the device cgroup controller. A privileged container has the following features: -- Secomp does not block any system call. -- The **/sys** and **/proc** directories are writable. -- All devices on the host can be accessed in the container. +- Seccomp does not block any system call. +- The **/sys** and **/proc** directories are writable. +- All devices on the host can be accessed in the container. -- All system capabilities will be enabled. +- All system capabilities will be enabled. Default capabilities of a common container are as follows: -

Capability Key | Description (HTML capability table; the table body is elided in this chunk)

@@ -104,7 +102,6 @@ Default capabilities of a common container are as follows: When a privileged container is enabled, the following capabilities are added: -

Capability Key | Description (HTML table of additional privileged-container capabilities; the table body is elided in this chunk)

@@ -233,7 +230,6 @@ When a privileged container is enabled, the following capabilities are added: iSulad runs the **--privileged** command to enable the privilege mode for containers. Do not add privileges to containers unless necessary. Comply with the principle of least privilege to reduce security risks. -``` +```shell isula run --rm -it --privileged busybox ``` - diff --git a/docs/en/docs/ApplicationDev/public_sys-resources/icon-note.gif b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/ApplicationDev/public_sys-resources/icon-note.gif rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/Administration/public_sys-resources/icon-notice.gif b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/public_sys-resources/icon-notice.gif similarity index 100% rename from docs/en/docs/Administration/public_sys-resources/icon-notice.gif rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/public_sys-resources/icon-notice.gif diff --git a/docs/en/docs/Container/querying-information.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/querying-information.md similarity index 96% rename from docs/en/docs/Container/querying-information.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/querying-information.md index 0c27a9d1d2e26769e8055e13206df1448fbbe32d..c1d22dfd2bfe84e6e4587f742e4ca5f048306741 100644 --- a/docs/en/docs/Container/querying-information.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/querying-information.md @@ -4,7 +4,6 @@ - [Querying the Service Version](#querying-the-service-version) - [Querying System-level Information](#querying-system-level-information) - ## Querying the Service Version ### Description @@ -13,7 +12,7 @@ The **isula version** command is run to query the version of the iSulad servic ### Usage -``` +```shell isula version ``` @@ -21,13 +20,13 @@ isula 
version Query the version information. -``` +```shell isula version ``` If the iSulad service is running properly, you can view the information about versions of the client, server, and **OCI config**. -``` +```text Client: Version: 1.0.31 Git commit: fa7f9902738e8b3d7f2eb22768b9a1372ddd1199 @@ -45,7 +44,7 @@ OCI config: If the iSulad service is not running, only the client information is queried and a message is displayed indicating that the connection times out. -``` +```text Client: Version: 1.0.31 Git commit: fa7f9902738e8b3d7f2eb22768b9a1372ddd1199 @@ -64,7 +63,7 @@ The **isula info** command is run to query the system-level information, numbe ### Usage -``` +```shell isula info ``` @@ -72,7 +71,7 @@ isula info Query system-level information, including the number of containers, number of images, kernel version, and operating system \(OS\). -``` +```shell $ isula info Containers: 2 Running: 0 @@ -81,7 +80,7 @@ Containers: 2 Images: 8 Server Version: 1.0.31 Logging Driver: json-file -Cgroup Driver: cgroupfs +Cgroup Driver: cgroupfs Hugetlb Pagesize: 2MB Kernel Version: 4.19 Operating System: Fedora 29 (Twenty Nine) @@ -92,4 +91,3 @@ Total Memory: 7 GB Name: localhost.localdomain iSulad Root Dir: /var/lib/isulad ``` - diff --git a/docs/en/docs/Container/security-features.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/security-features.md similarity index 72% rename from docs/en/docs/Container/security-features.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/security-features.md index 65436a67383285c7832d39d9fe42cd9176a571b9..739c9452be6b70241d36a21b321d16f48f3fabe6 100644 --- a/docs/en/docs/Container/security-features.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/security-features.md @@ -18,7 +18,7 @@ ### Scenarios -Secure computing mode \(Seccomp\) is a simple sandboxing mechanism introduced to the Linux kernel from version 2.6.23.
In some specific scenarios, you may want to perform some privileged operations in a container without starting the privileged container. You can add **--cap-add** at runtime to obtain some small-scope permissions. For container instances with strict security requirements, th capability granularity may not meet the requirements. You can use some methods to control the permission scope in a refined manner. +Secure computing mode \(seccomp\) is a simple sandboxing mechanism introduced to the Linux kernel from version 2.6.23. In some specific scenarios, you may want to perform some privileged operations in a container without starting the privileged container. You can add **--cap-add** at runtime to obtain some small-scope permissions. For container instances with strict security requirements, the capability granularity may not meet the requirements. You can use some methods to control the permission scope in a refined manner. - Example @@ -26,14 +26,11 @@ Secure computing mode \(Seccomp\) is a simple sandboxing mechanism introduced to
**--security-opt 'seccomp:unconfined'** disables the Seccomp feature. If Seccomp is disabled or the allowlist is not correctly configured, the attack surface is increased. - -- By default, Seccomp returns **SCMP_ACT_ERRNO** to system calls that are not in the allow list and opens system calls based on capabilities. Capabilities that are not in the allowlist will not be added to the container. +- Seccomp may affect performance. Before setting seccomp, evaluate the scenario and add the configuration only if necessary. ### Usage Guide @@ -43,11 +40,11 @@ Use **--security-opt** to transfer the configuration file to the container whe isula run -itd --security-opt seccomp=/path/to/seccomp/profile.json rnd-dockerhub.huawei.com/official/busybox ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** +>![](./public_sys-resources/icon-note.gif)**NOTE:** > ->1. When the configuration file is transferred to the container by using **--security-opt** during container creation, the default configuration file \(**/etc/isulad/seccomp\_default.json**\) is used. ->2. When **--security-opt** is set to **unconfined** during container creation, system calls are not filtered for the container. ->3. **/path/to/seccomp/profile.json** must be an absolute path. +> 1. When the configuration file is transferred to the container by using **--security-opt** during container creation, the default configuration file \(**/etc/isulad/seccomp\_default.json**\) is used. +> 2. When **--security-opt** is set to **unconfined** during container creation, system calls are not filtered for the container. +> 3. **/path/to/seccomp/profile.json** must be an absolute path. #### Obtaining the Default Seccomp Configuration of a Common Container @@ -59,7 +56,7 @@ isula run -itd --security-opt seccomp=/path/to/seccomp/profile.json rnd-dockerhu The **seccomp** field contains many **syscalls** fields. 
Then extract only the **syscalls** fields and perform the customization by referring to the customization of the seccomp configuration file. - ```text + ```json "defaultAction": "SCMP_ACT_ERRNO", "syscalls": [ { @@ -85,13 +82,13 @@ isula run -itd --security-opt seccomp=/path/to/seccomp/profile.json rnd-dockerhu ]... ``` -- Check the Seccomp configuration that can be identified by the LXC. +- Check the seccomp configuration that can be identified by the LXC. ```shell cat /var/lib/isulad/engines/lcr/74353e38021c29314188e29ba8c1830a4677ffe5c4decda77a1e0853ec8197cd/seccomp ``` - ```console + ```text ... waitpid allow write allow @@ -107,7 +104,7 @@ isula run -itd --security-opt seccomp=/path/to/seccomp/profile.json rnd-dockerhu #### Customizing the Seccomp Configuration File -When starting a container, use **--security-opt** to introduce the Seccomp configuration file. Container instances will restrict the running of system APIs based on the configuration file. Obtain the default Seccomp configuration of common containers, obtain the complete template, and customize the configuration file by referring to this section to start the container. +When starting a container, use **--security-opt** to introduce the seccomp configuration file. Container instances will restrict the running of system APIs based on the configuration file. Obtain the default seccomp configuration of common containers, obtain the complete template, and customize the configuration file by referring to this section to start the container. 
```shell isula run --rm -it --security-opt seccomp:/path/to/seccomp/profile.json rnd-dockerhub.huawei.com/official/busybox @@ -115,7 +112,7 @@ isula run --rm -it --security-opt seccomp:/path/to/seccomp/profile.json rnd-dock The configuration file template is as follows: -```text +```json { "defaultAction": "SCMP_ACT_ALLOW", "syscalls": [ @@ -128,22 +125,22 @@ The configuration file template is as follows: } ``` ->![](./public_sys-resources/icon-notice.gif) **NOTICE:** +>![](./public_sys-resources/icon-notice.gif)**NOTICE:** > ->- **defaultAction** and **syscalls**: The types of their corresponding actions are the same, but their values must be different. The purpose is to ensure that each syscall has a default action. Clear definitions in the syscall array shall prevail. As long as the values of **defaultAction** and **action** are different, no action conflicts will occur. The following actions are supported: -> **SCMP\_ACT\_ERRNO**: forbids calling syscalls and displays error information. -> **SCMP\_ACT\_ALLOW**: allows calling syscalls. ->- **syscalls**: array, which can contain one or more syscalls. **args** is optional. ->- **name**: syscalls to be filtered. ->- **args**: array. The definition of each object in the array is as follows: +> - **defaultAction** and **syscalls**: The types of their corresponding actions are the same, but their values must be different. The purpose is to ensure that each syscall has a default action. Clear definitions in the syscall array shall prevail. As long as the values of **defaultAction** and **action** are different, no action conflicts will occur. The following actions are supported: +> **SCMP\_ACT\_ERRNO**: forbids calling syscalls and displays error information. +> **SCMP\_ACT\_ALLOW**: allows calling syscalls. +> - **syscalls**: array, which can contain one or more syscalls. **args** is optional. +> - **name**: syscalls to be filtered. +> - **args**: array. 
The definition of each object in the array is as follows: > -> ```c -> type Arg struct { -> Index uint `json:"index"` // Parameter ID. Take open(fd, buf, len) as an example. The fd corresponds to 0 and buf corresponds to 1. -> Value uint64 `json:"value"` // Value to be compared with the parameter. -> ValueTwo uint64 `json:"value_two"` // It is valid only when Op is set to MaskEqualTo. After the bitwise AND operation is performed on the user-defined value and the value of Value, the result is compared with the value of ValueTwo. If they are the same, the action is executed. -> Op Operator `json:"op"` -> } +> ```go +> type Arg struct { +> Index uint `json:"index"` // Parameter ID. Take open(fd, buf, len) as an example. The fd corresponds to 0 and buf corresponds to 1. +> Value uint64 `json:"value"` // Value to be compared with the parameter. +> ValueTwo uint64 `json:"value_two"` // It is valid only when Op is set to MaskEqualTo. After the bitwise AND operation is performed on the user-defined value and the value of Value, the result is compared with the value of ValueTwo. If they are the same, the action is executed. +> Op Operator `json:"op"` +> } > ``` > > The value of **Op** in **args** can be any of the following: @@ -167,7 +164,7 @@ man capabilities ### Usage Restrictions -- The default capability list \(whitelist\) of the iSulad service, which is carried by common container processes by default, are as follows: +- The default capability list \(allowlist\) of the iSulad service, which is carried by common container processes by default, are as follows: ```text "CAP_CHOWN", @@ -210,14 +207,14 @@ Security-Enhanced Linux \(SELinux\) is a Linux kernel security module that provi - The introduction of SELinux affects the performance. Therefore, evaluate the scenario before setting SELinux. Enable the SELinux function for the daemon and set the SELinux configuration in the container only when necessary. 
- When you configure labels for a mounted volume, the source directory cannot be a subdirectory of **/**, **/usr**, **/etc**, **/tmp**, **/home**, **/run**, **/var**, or **/root**. ->![](./public_sys-resources/icon-note.gif) **NOTE:** +>![](./public_sys-resources/icon-note.gif)**NOTE:** > ->- iSulad does not support labeling the container file system. To ensure that the container file system and configuration directory are labeled with the container access permission, run the **chcon** command to label them. ->- If SELinux access control is enabled for iSulad, you are advised to add a label to the **/var/lib/isulad** directory before starting daemon. Files and folders generated in the directory during container creation inherit the label by default. For example: +> - iSulad does not support labeling the container file system. To ensure that the container file system and configuration directory are labeled with the container access permission, run the **chcon** command to label them. +> - If SELinux access control is enabled for iSulad, you are advised to add a label to the **/var/lib/isulad** directory before starting daemon. Files and folders generated in the directory during container creation inherit the label by default. 
For example: > -> ```shell -> chcon -R system_u:object_r:container_file_t:s0 /var/lib/isulad -> ``` +> ```shell +> chcon -R system_u:object_r:container_file_t:s0 /var/lib/isulad +> ``` ### Usage Guide @@ -250,6 +247,6 @@ Security-Enhanced Linux \(SELinux\) is a Linux kernel security module that provi $ isula run -itd -v /test:/test:z rnd-dockerhub.huawei.com/official/centos 9be82878a67e36c826b67f5c7261c881ff926a352f92998b654bc8e1c6eec370 - $ ls -Z /test + $ ls -Z /test system_u:object_r:container_file_t:s0 file ``` diff --git a/docs/en/docs/Container/supporting-oci-hooks.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/supporting-oci-hooks.md similarity index 39% rename from docs/en/docs/Container/supporting-oci-hooks.md rename to docs/en/Cloud/ContainerEngine/iSulaContainerEngine/supporting-oci-hooks.md index 76ce5759967a23f6ded80ac49bfc30c429b402a8..66fc0d2af4cc62389c43037c9e43d741ca1bcc29 100644 --- a/docs/en/docs/Container/supporting-oci-hooks.md +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/supporting-oci-hooks.md @@ -5,25 +5,24 @@ - [APIs](#apis) - [Usage Restrictions](#usage-restrictions) - ## Description The running of standard OCI hooks within the lifecycle of a container is supported. There are three types of standard hooks: -- prestart hook: executed after the **isula start** command is executed and before the init process of the container is started. -- poststart hook: executed after the init process is started and before the **isula start** command is returned. -- poststop hook: executed after the container is stopped and before the stop command is returned. +- prestart hook: executed after the **isula start** command is executed and before the init process of the container is started. +- poststart hook: executed after the init process is started and before the **isula start** command is returned. +- poststop hook: executed after the container is stopped and before the stop command is returned. 
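The three hook types above share one executable convention: the runtime invokes the hook binary with the container state JSON on stdin, and any stage-specific arguments come from the hook's **args** configuration. As a quick illustration, the sketch below is a minimal logging hook; the `log_hook` helper, the `HOOK_LOG_FILE` variable, and the simulated state JSON are hypothetical, and a real hook would read the state from stdin as noted in the comment.

```shell
#!/bin/sh
# Sketch of a minimal OCI hook executable (illustrative only). At each
# lifecycle stage the runtime passes the container state JSON on the
# hook's stdin; the stage name below would come from the "args" field
# of the hook configuration.
log_file="${HOOK_LOG_FILE:-$(mktemp)}"

log_hook() {
    stage="$1"
    state="$2"   # in a real hook, read the state from stdin: state="$(cat)"
    printf '[%s] %s\n' "$stage" "$state" >> "$log_file"
}

# Simulated invocations, one per hook type:
log_hook prestart  '{"id":"demo","status":"creating"}'
log_hook poststart '{"id":"demo","status":"running"}'
log_hook poststop  '{"id":"demo","status":"stopped"}'
cat "$log_file"
```

An executable like this could be referenced once per stage from a hook configuration file, with the stage name supplied through **args**.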
The configuration format specifications of OCI hooks are as follows: -- **path**: \(Mandatory\) The value must be a character string and must be an absolute path. The specified file must have the execute permission. -- **args**: \(Optional\) The value must be a character string array. The syntax is the same as that of **args** in **execv**. -- **env**: \(Optional\) The value must be a character string array. The syntax is the same as that of environment variables. The content is a key-value pair, for example, **PATH=/usr/bin**. -- **timeout**: \(Optional\) The value must be an integer that is greater than 0. It indicates the timeout interval for hook execution. If the running time of the hook process exceeds the configured time, the hook process is killed. +- **path**: \(Mandatory\) The value must be a character string and must be an absolute path. The specified file must have the execute permission. +- **args**: \(Optional\) The value must be a character string array. The syntax is the same as that of **args** in **execv**. +- **env**: \(Optional\) The value must be a character string array. The syntax is the same as that of environment variables. The content is a key-value pair, for example, **PATH=/usr/bin**. +- **timeout**: \(Optional\) The value must be an integer that is greater than 0. It indicates the timeout interval for hook execution. If the running time of the hook process exceeds the configured time, the hook process is killed. The hook configuration is in JSON format and usually stored in a file ended with **json**. An example is as follows: -``` +```json { "prestart": [ { @@ -59,29 +58,25 @@ Both iSulad and iSula provide the hook APIs. The default hook configurations pro The default OCI hook configurations provided by iSulad are as follows: -- Set the configuration item **hook-spec** in the **/etc/isulad/daemon.json** configuration file to specify the path of the hook configuration file. 
Example: **"hook-spec": "/etc/default/isulad/hooks/default.json"** -- Use the **isulad --hook-spec** parameter to set the path of the hook configuration file. +- Set the configuration item **hook-spec** in the **/etc/isulad/daemon.json** configuration file to specify the path of the hook configuration file. Example: **"hook-spec": "/etc/default/isulad/hooks/default.json"** +- Use the **isulad --hook-spec** parameter to set the path of the hook configuration file. The OCI hook configurations provided by iSula are as follows: -- **isula create --hook-spec**: specifies the path of the hook configuration file in JSON format. -- **isula run --hook-spec**: specifies the path of the hook configuration file in JSON format. +- **isula create --hook-spec**: specifies the path of the hook configuration file in JSON format. +- **isula run --hook-spec**: specifies the path of the hook configuration file in JSON format. The configuration for **run** takes effect in the creation phase. ## Usage Restrictions -- The path specified by **hook-spec** must be an absolute path. -- The file specified by **hook-spec** must exist. -- The path specified by **hook-spec** must contain a common text file in JSON format. -- The file specified by **hook-spec** cannot exceed 10 MB. -- **path** configured for hooks must be an absolute path. -- The file that is designated by **path** configured for hooks must exist. -- The file that is designated by **path** configured for hooks must have the execute permission. -- The owner of the file that is designated by **path** configured for hooks must be user **root**. -- Only user **root** has the write permission on the file that is designated by **path** configured for hooks. -- The value of **timeout** configured for hooks must be greater than **0**. - -    - - +- The path specified by **hook-spec** must be an absolute path. +- The file specified by **hook-spec** must exist. 
+- The path specified by **hook-spec** must contain a common text file in JSON format. +- The file specified by **hook-spec** cannot exceed 10 MB. +- **path** configured for hooks must be an absolute path. +- The file that is designated by **path** configured for hooks must exist. +- The file that is designated by **path** configured for hooks must have the execute permission. +- The owner of the file that is designated by **path** configured for hooks must be user **root**. +- Only user **root** has the write permission on the file that is designated by **path** configured for hooks. +- The value of **timeout** configured for hooks must be greater than **0**. diff --git a/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/uninstallation.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/uninstallation.md new file mode 100644 index 0000000000000000000000000000000000000000..aac45aae7ff223156d1f5c4e41eaaf736f1f24d4 --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/uninstallation.md @@ -0,0 +1,22 @@ +# Uninstallation + +To uninstall iSulad, perform the following operations: + +1. Uninstall iSulad and its dependent software packages. + - If the **yum** command is used to install iSulad, run the following command to uninstall iSulad: + + ```shell + sudo yum remove iSulad + ``` + + - If the **rpm** command is used to install iSulad, uninstall iSulad and its dependent software packages. Run the following command to uninstall an RPM package. + + ```shell + sudo rpm -e iSulad-xx.xx.xx-YYYYmmdd.HHMMSS.gitxxxxxxxx.aarch64.rpm + ``` + +2. Images, containers, volumes, and related configuration files are not automatically deleted. 
The reference command is as follows: + + ```shell + sudo rm -rf /var/lib/iSulad + ``` diff --git a/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/upgrade-methods.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/upgrade-methods.md new file mode 100644 index 0000000000000000000000000000000000000000..06e25fbcb3798e7af344e518fc945aa0198039c5 --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/upgrade-methods.md @@ -0,0 +1,24 @@ +# Upgrade Methods + +- For an upgrade between patch versions of a major version, for example, upgrading 2.x.x to 2.x.x, run the following command: + + ```shell + sudo yum update -y iSulad + ``` + +- For an upgrade between major versions, for example, upgrading 1.x.x to 2.x.x, save the current configuration file **/etc/isulad/daemon.json**, uninstall the existing iSulad software package, install the iSulad software package to be upgraded, and restore the configuration file. + +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> +> - You can run the **sudo rpm -qa |grep iSulad** or **isula version** command to check the iSulad version. +> - If you want to manually perform upgrade between patch versions of a major version, run the following command to download the RPM packages of iSulad and all its dependent libraries: +> +> ```shell +> sudo rpm -Uhv iSulad-xx.xx.xx-YYYYmmdd.HHMMSS.gitxxxxxxxx.aarch64.rpm +> ``` +> +> If the upgrade fails, run the following command to forcibly perform the upgrade: +> +> ```shell +> sudo rpm -Uhv --force iSulad-xx.xx.xx-YYYYmmdd.HHMMSS.gitxxxxxxxx.aarch64.rpm +> ``` diff --git a/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/user-guide.md b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..c15762fae637869dfe46f9e6620f49b584583187 --- /dev/null +++ b/docs/en/Cloud/ContainerEngine/iSulaContainerEngine/user-guide.md @@ -0,0 +1,6 @@ +# User Guide + +This section describes how to use iSulad. 
+ +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> All iSulad operations require root privileges. diff --git a/docs/en/Cloud/ContainerForm/Menu/index.md b/docs/en/Cloud/ContainerForm/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..b36a59d19caac688aedf7ce12ec057166f2e7ada --- /dev/null +++ b/docs/en/Cloud/ContainerForm/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- + +- [Secure Container]({{< relref "./SecureContainer/Menu/index.md" >}}) +- [System Container]({{< relref "./SystemContainer/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Cloud/ContainerForm/SecureContainer/Menu/index.md b/docs/en/Cloud/ContainerForm/SecureContainer/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..85a847a886ea4e9a1f050021e72665ac6a90f70e --- /dev/null +++ b/docs/en/Cloud/ContainerForm/SecureContainer/Menu/index.md @@ -0,0 +1,11 @@ +--- +headless: true +--- + +- [Secure Container]({{< relref "./secure-container.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment-2.md" >}}) + - [Application Scenarios]({{< relref "./application-scenarios-2.md" >}}) + - [Managing the Lifecycle of a Secure Container]({{< relref "./managing-the-lifecycle-of-a-secure-container.md" >}}) + - [Configuring Resources for a Secure Container]({{< relref "./configuring-resources-for-a-secure-container.md" >}}) + - [Monitoring Secure Containers]({{< relref "./monitoring-secure-containers.md" >}}) + - [Appendix]({{< relref "./appendix-2.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Container/appendix-2.md b/docs/en/Cloud/ContainerForm/SecureContainer/appendix-2.md similarity index 99% rename from docs/en/docs/Container/appendix-2.md rename to docs/en/Cloud/ContainerForm/SecureContainer/appendix-2.md index f5342bf482626a31862cf0e1d902874e8bc6f336..f68aa927b582f88d61ff2eb16f2b3205072373d1 100644 --- a/docs/en/docs/Container/appendix-2.md +++ 
b/docs/en/Cloud/ContainerForm/SecureContainer/appendix-2.md @@ -1,7 +1,7 @@ # Appendix -- [Appendix](#appendix-2) - - [configuration.toml](#configuration-toml) +- [Appendix](#appendix) + - [configuration.toml](#configurationtoml) - [APIs](#apis) ## configuration.toml @@ -9,7 +9,7 @@ >![](./public_sys-resources/icon-note.gif) **NOTE:** >The value of each field in the **configuration.toml** file is subject to the **configuration.toml** file in the **kata-containers-<**_version_**\>.rpm package**. You cannot set any field in the configuration file. -``` +```text [hypervisor.qemu] path: specifies the execution path of the virtualization QEMU. kernel: specifies the execution path of the guest kernel. @@ -485,4 +485,3 @@ experimental: enables the experimental feature, which does not support user-defi
- diff --git a/docs/en/docs/Container/application-scenarios-2.md b/docs/en/Cloud/ContainerForm/SecureContainer/application-scenarios-2.md similarity index 98% rename from docs/en/docs/Container/application-scenarios-2.md rename to docs/en/Cloud/ContainerForm/SecureContainer/application-scenarios-2.md index 5346a100ee36da8190c476a4751e0d414e8ca2ec..ae340c389d2b26f3d72fdaa42ff2ffc62a193e21 100644 --- a/docs/en/docs/Container/application-scenarios-2.md +++ b/docs/en/Cloud/ContainerForm/SecureContainer/application-scenarios-2.md @@ -1,4 +1,3 @@ # Application Scenarios This section describes how to use a secure container. - diff --git a/docs/en/docs/Container/configuring-resources-for-a-secure-container.md b/docs/en/Cloud/ContainerForm/SecureContainer/configuring-resources-for-a-secure-container.md similarity index 73% rename from docs/en/docs/Container/configuring-resources-for-a-secure-container.md rename to docs/en/Cloud/ContainerForm/SecureContainer/configuring-resources-for-a-secure-container.md index e6cba396b3d1a8f56c8e5b23e311a4367f5983a6..b59ee1331bdae66ad043cea584b4282b17e27ee1 100644 --- a/docs/en/docs/Container/configuring-resources-for-a-secure-container.md +++ b/docs/en/Cloud/ContainerForm/SecureContainer/configuring-resources-for-a-secure-container.md @@ -1,13 +1,12 @@ # Configuring Resources for a Secure Container - [Configuring Resources for a Secure Container](#configuring-resources-for-a-secure-container) - - [Sharing Resources](#sharing-resources) - - [Limiting Resources](#limiting-resources) - - [Limiting Memory Resources Through the Memory Hotplug Feature](#limiting-memory-resources-through-the-memory-hotplug-feature) + - [Sharing Resources](#sharing-resources) + - [Limiting Resources](#limiting-resources) + - [Limiting Memory Resources Through the Memory Hotplug Feature](#limiting-memory-resources-through-the-memory-hotplug-feature) The secure container runs on a virtualized and isolated lightweight VM. 
Therefore, resource configuration is divided into two parts: resource configuration for the lightweight VM, that is, host resource configuration; resource configuration for containers in the VM, that is, guest container resource configuration. The following describes resource configuration for the two parts in detail. - ## Sharing Resources Because the secure container runs on a virtualized and isolated lightweight VM, resources in some namespaces on the host cannot be accessed. Therefore, **--net host**, **--ipc host**, **--pid host**, and **--uts host** are not supported during startup. @@ -15,20 +14,23 @@ Because the secure container runs on a virtualized and isolated lightweight VM, When a pod is started, all containers in the pod share the same net namespace and ipc namespace by default. If containers in the same pod need to share the pid namespace, you can use Kubernetes to configure the pid namespace. In Kubernetes 1.11, the pid namespace is disabled by default. ## Limiting Resources + Limitations on sandbox resources should be configured in **configuration.toml**. Common fields are: -- **default_vcpus** :specifies the default number of virtual CPUs. -- **default_maxvcpus** :specifies the max number of virtual CPUs. -- **default_root_ports** :specifies the default number of Root Ports in SB/VM. -- **default_bridges** :specifies the default number of bridges. -- **default_memory** :specifies the size of memory. The default size is 1024 MiB. -- **memory_slots** :specifies the number of memory slots. The default number is **10**. + +- **default_vcpus**: specifies the default number of virtual CPUs. +- **default_maxvcpus**: specifies the max number of virtual CPUs. +- **default_root_ports**: specifies the default number of Root Ports in SB/VM. +- **default_bridges**: specifies the default number of bridges. +- **default_memory**: specifies the size of memory. The default size is 1024 MiB. +- **memory_slots**: specifies the number of memory slots. 
The default number is **10**. ## Limiting Memory Resources Through the Memory Hotplug Feature + Memory hotplug is a key feature for containers to allocate memory dynamically in deployment. As Kata containers are based on VMs, this feature needs support both from VMM and guest kernel. Luckily, it has been fully supported for the current default version of QEMU and guest kernel used by Kata on ARM64. For other VMMs, e.g., Cloud Hypervisor, the enablement work is under way. Apart from VMM and guest kernel, memory hotplug also depends on ACPI which depends on firmware. On x86, you can boot a VM using QEMU with ACPI enabled directly, because it boots up with firmware implicitly. For ARM64, however, you need to specify firmware explicitly. That is to say, if you are ready to run a normal Kata container on ARM64, the extra step is to install the UEFI ROM before using the memory hotplug feature. ```shell -$ pushd $GOPATH/src/github.com/kata-containers/tests -$ sudo .ci/aarch64/install_rom_aarch64.sh -$ popd +pushd $GOPATH/src/github.com/kata-containers/tests +sudo .ci/aarch64/install_rom_aarch64.sh +popd ``` diff --git a/docs/en/docs/Container/figures/relationship-between-the-secure-container-and-peripheral-components.png b/docs/en/Cloud/ContainerForm/SecureContainer/figures/relationship-between-the-secure-container-and-peripheral-components.png similarity index 100% rename from docs/en/docs/Container/figures/relationship-between-the-secure-container-and-peripheral-components.png rename to docs/en/Cloud/ContainerForm/SecureContainer/figures/relationship-between-the-secure-container-and-peripheral-components.png diff --git a/docs/en/docs/Container/figures/secure-container.png b/docs/en/Cloud/ContainerForm/SecureContainer/figures/secure-container.png similarity index 100% rename from docs/en/docs/Container/figures/secure-container.png rename to docs/en/Cloud/ContainerForm/SecureContainer/figures/secure-container.png diff --git 
a/docs/en/docs/Container/installation-and-deployment-2.md b/docs/en/Cloud/ContainerForm/SecureContainer/installation-and-deployment-2.md similarity index 44% rename from docs/en/docs/Container/installation-and-deployment-2.md rename to docs/en/Cloud/ContainerForm/SecureContainer/installation-and-deployment-2.md index dfb594ccbd6c75205036a8c9cf256ee19ab7f11a..c5d9534ef2afb5bf84bff413999eb2469e7d7122 100644 --- a/docs/en/docs/Container/installation-and-deployment-2.md +++ b/docs/en/Cloud/ContainerForm/SecureContainer/installation-and-deployment-2.md @@ -1,31 +1,29 @@ # Installation and Deployment - [Installation and Deployment](#installation-and-deployment) - - [Installation Methods](#installation-methods) - - [Prerequisites](#prerequisites) - - [Installation Procedure](#installation-procedure) - - [Deployment Configuration](#deployment-configuration) - - [Configuring the Docker Engine](#configuring-the-docker-engine) - - [iSulad Configuration](#isulad-configuration) - - [Configuration.toml](#configurationtoml) - + - [Installation Methods](#installation-methods) + - [Prerequisites](#prerequisites) + - [Installation Procedure](#installation-procedure) + - [Deployment Configuration](#deployment-configuration) + - [Configuring the Docker Engine](#configuring-the-docker-engine) + - [iSulad Configuration](#isulad-configuration) + - [Configuration.toml](#configurationtoml) ## Installation Methods ### Prerequisites -- The root permission is required for installing a Kata container. -- For better performance experience, a Kata container needs to run on the bare metal server and cannot run on VMs. -- A Kata container depends on the following components \(openEuler 1.0 version\). Ensure that the required components have been installed in the environment. To install iSulad, refer to [Installation Configuration](./installation-configuration.md). - - docker-engine - - qemu - +- The root permission is required for installing a Kata container. 
+- For better performance experience, a Kata container needs to run on the bare metal server and cannot run on VMs. +- A Kata container depends on the following components \(openEuler 1.0 version\). Ensure that the required components have been installed in the environment. To install iSulad, refer to [Installation Configuration](../../ContainerEngine/iSulaContainerEngine/installation-configuration.md). + - docker-engine + - qemu ### Installation Procedure Released Kata container components are integrated in the **kata-containers-**_version_**.rpm** package. You can run the **rpm** command to install the corresponding software. -``` +```shell rpm -ivh kata-containers-.rpm ``` @@ -35,16 +33,16 @@ rpm -ivh kata-containers-.rpm To enable the Docker engine to support kata-runtime, perform the following steps to configure the Docker engine: -1. Ensure that all software packages \(**docker-engine** and **kata-containers**\) have been installed in the environment. -2. Stop the Docker engine. +1. Ensure that all software packages \(**docker-engine** and **kata-containers**\) have been installed in the environment. +2. Stop the Docker engine. - ``` + ```shell systemctl stop docker ``` -3. Modify the configuration file **/etc/docker/daemon.json** of the Docker engine and add the following configuration: +3. Modify the configuration file **/etc/docker/daemon.json** of the Docker engine and add the following configuration: - ``` + ```json { "runtimes": { "kata-runtime": { @@ -58,27 +56,26 @@ To enable the Docker engine to support kata-runtime, perform the following steps } ``` -4. Restart the Docker engine. +4. Restart the Docker engine. - ``` + ```shell systemctl start docker ``` - ### iSulad Configuration To enable the iSulad to support the new container runtime kata-runtime, perform the following steps which are similar to those for the container engine docker-engine: -1. 
Ensure that all software packages \(iSulad and kata-containers\) have been installed in the environment. -2. Stop iSulad. +1. Ensure that all software packages \(iSulad and kata-containers\) have been installed in the environment. +2. Stop iSulad. - ``` + ```shell systemctl stop isulad ``` -3. Modify the **/etc/isulad/daemon.json** configuration file of the iSulad and add the following configurations: +3. Modify the **/etc/isulad/daemon.json** configuration file of the iSulad and add the following configurations: - ``` + ```json { "runtimes": { "kata-runtime": { @@ -92,38 +89,35 @@ To enable the iSulad to support the new container runtime kata-runtime, perform } ``` -4. Restart iSulad. +4. Restart iSulad. - ``` + ```shell systemctl start isulad ``` - ### Configuration.toml The Kata container provides a global configuration file configuration.toml. Users can also customize the path and configuration options of the Kata container configuration file. In the **runtimeArgs** field of the Docker engine, you can use **--kata-config** to specify a private file. The default configuration file path is **/usr/share/defaults/kata-containers/configuration.toml**. -The following lists the common fields in the configuration file. For details about the configuration file options, see [configuration.toml](#configuration-toml-31.md). - -1. hypervisor.qemu - - **path**: specifies the execution path of the virtualization QEMU. - - **kernel**: specifies the execution path of the guest kernel. - - **initrd**: specifies the guest initrd execution path. - - **machine\_type**: specifies the type of the analog chip. The value is **virt** for the ARM architecture and **pc** for the x86 architecture. - - **kernel\_params**: specifies the running parameters of the guest kernel. - -2. proxy.kata - - **path**: specifies the kata-proxy running path. - - **enable\_debug**: enables the debugging function for the kata-proxy process. +The following lists the common fields in the configuration file. 
For details about the configuration file options, see [configuration.toml](./appendix-2.md#configurationtoml). -3. agent.kata - - **enable\_blk\_mount**: enables guest mounting of the block device. - - **enable\_debug**: enables the debugging function for the kata-agent process. +1. hypervisor.qemu + - **path**: specifies the execution path of the virtualization QEMU. + - **kernel**: specifies the execution path of the guest kernel. + - **initrd**: specifies the guest initrd execution path. + - **machine\_type**: specifies the type of the analog chip. The value is **virt** for the ARM architecture and **pc** for the x86 architecture. + - **kernel\_params**: specifies the running parameters of the guest kernel. -4. runtime - - **enable\_cpu\_memory\_hotplug**: enables CPU and memory hot swap. - - **enable\_debug**: enables debugging for the kata-runtime process. +2. proxy.kata + - **path**: specifies the kata-proxy running path. + - **enable\_debug**: enables the debugging function for the kata-proxy process. +3. agent.kata + - **enable\_blk\_mount**: enables guest mounting of the block device. + - **enable\_debug**: enables the debugging function for the kata-agent process. +4. runtime + - **enable\_cpu\_memory\_hotplug**: enables CPU and memory hot swap. + - **enable\_debug**: enables debugging for the kata-runtime process. 
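As a sketch, the fields listed above might be combined into a minimal configuration.toml such as the following. All paths and values here are illustrative placeholders, not verified defaults; check the installed **/usr/share/defaults/kata-containers/configuration.toml** for the actual values on your system.

```toml
[hypervisor.qemu]
path = "/usr/bin/qemu-kvm"                         # illustrative QEMU path
kernel = "/var/lib/kata/kernel"                    # guest kernel path (placeholder)
initrd = "/var/lib/kata/kata-containers-initrd.img"
machine_type = "virt"                              # "virt" on ARM, "pc" on x86
kernel_params = "console=hvc0"                     # example guest kernel parameters

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"   # placeholder kata-proxy path
enable_debug = false

[agent.kata]
enable_blk_mount = true
enable_debug = false

[runtime]
enable_cpu_memory_hotplug = false
enable_debug = false
```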
diff --git a/docs/en/docs/Container/managing-the-lifecycle-of-a-secure-container.md b/docs/en/Cloud/ContainerForm/SecureContainer/managing-the-lifecycle-of-a-secure-container.md similarity index 66% rename from docs/en/docs/Container/managing-the-lifecycle-of-a-secure-container.md rename to docs/en/Cloud/ContainerForm/SecureContainer/managing-the-lifecycle-of-a-secure-container.md index c16a2ee9a8a087af3d2b3643aba0a162d85f8b2a..418fde5ad30c2759f61e041cdcc8e3558e77c2e0 100644 --- a/docs/en/docs/Container/managing-the-lifecycle-of-a-secure-container.md +++ b/docs/en/Cloud/ContainerForm/SecureContainer/managing-the-lifecycle-of-a-secure-container.md @@ -6,90 +6,82 @@ - [Deleting a Secure Container](#deleting-a-secure-container) - [Running a New Command in the Container](#running-a-new-command-in-the-container) - - - ## Starting a Secure Container You can use the Docker engine or iSulad as the container engine of the secure container. The invoking methods of the two engines are similar. You can select either of them to start a secure container. To start a secure container, perform the following steps: -1. Ensure that the secure container component has been correctly installed and deployed. -2. Prepare the container image. If the container image is busybox, run the following commands to download the container image using the Docker engine or iSulad: +1. Ensure that the secure container component has been correctly installed and deployed. +2. Prepare the container image. If the container image is busybox, run the following commands to download the container image using the Docker engine or iSulad: - ``` + ```shell docker pull busybox ``` - ``` + ```shell isula pull busybox ``` -3. Start a secure container. Run the following commands to start a secure container using the Docker engine and iSulad: +3. Start a secure container. 
Run the following commands to start a secure container using the Docker engine and iSulad: - ``` + ```shell docker run -tid --runtime kata-runtime --network none busybox ``` - ``` + ```shell isula run -tid --runtime kata-runtime --network none busybox ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >The secure container supports the CNI network only and does not support the CNM network. The **-p** and **--expose** options cannot be used to expose container ports. When using a secure container, you need to specify the **--net=none** option. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The secure container supports the CNI network only and does not support the CNM network. The **-p** and **--expose** options cannot be used to expose container ports. When using a secure container, you need to specify the **--net=none** option. -4. Start a pod. - 1. Start the pause container and obtain the sandbox ID of the pod based on the command output. Run the following commands to start a pause container using the Docker engine and iSulad: +4. Start a pod. + 1. Start the pause container and obtain the sandbox ID of the pod based on the command output. Run the following commands to start a pause container using the Docker engine and iSulad: - ``` + ```shell docker run -tid --runtime kata-runtime --network none --annotation io.kubernetes.docker.type=podsandbox ``` - ``` + ```shell isula run -tid --runtime kata-runtime --network none --annotation io.kubernetes.cri.container-type=sandbox ``` -    - - 1. Create a service container and add it to the pod. Run the following commands to create a service container using the Docker engine and iSulad: + 2. Create a service container and add it to the pod. 
Run the following commands to create a service container using the Docker engine and iSulad: - ``` + ```shell docker run -tid --runtime kata-runtime --network none --annotation io.kubernetes.docker.type=container --annotation io.kubernetes.sandbox.id= busybox ``` - ``` + ```shell isula run -tid --runtime kata-runtime --network none --annotation io.kubernetes.cri.container-type=container --annotation io.kubernetes.cri.sandbox-id= busybox ``` **--annotation** is used to mark the container type, which is provided by the Docker engine and iSulad, but not provided by the open-source Docker engine in the upstream community. - - ## Stopping a Secure Container -- Run the following command to stop a secure container: +- Run the following command to stop a secure container: - ``` + ```shell docker stop ``` -- Stop a pod. +- Stop a pod. When stopping a pod, note that the lifecycle of the pause container is the same as that of the pod. Therefore, stop service containers before the pause container. - ## Deleting a Secure Container Ensure that the container has been stopped. -``` +```shell docker rm ``` To forcibly delete a running container, run the **-f** command. -``` +```shell docker rm -f ``` @@ -97,11 +89,11 @@ docker rm -f The pause container functions only as a placeholder container. Therefore, if you start a pod, run a new command in the service container. The pause container does not execute the corresponding command. If only one container is started, run the following command directly: -``` +```shell docker exec -ti ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->1. If the preceding command has no response because another host runs the **docker restart** or **docker stop** command to access the same container, you can press **Ctrl**+**P**+**Q** to exit the operation. ->2. If the **-d** option is used, the command is executed in the background and no error information is displayed. 
The exit code cannot be used to determine whether the command is executed correctly. - +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> +> 1. If the preceding command has no response because another host runs the **docker restart** or **docker stop** command to access the same container, you can press **Ctrl**+**P**+**Q** to exit the operation. +> 2. If the **-d** option is used, the command is executed in the background and no error information is displayed. The exit code cannot be used to determine whether the command is executed correctly. diff --git a/docs/en/docs/Container/monitoring-secure-containers.md b/docs/en/Cloud/ContainerForm/SecureContainer/monitoring-secure-containers.md similarity index 99% rename from docs/en/docs/Container/monitoring-secure-containers.md rename to docs/en/Cloud/ContainerForm/SecureContainer/monitoring-secure-containers.md index 4bea45be650d38a74aea8f86d1478e8833de4f4e..f261c9cf810283a6b7c66f2b402e025138185c3b 100644 --- a/docs/en/docs/Container/monitoring-secure-containers.md +++ b/docs/en/Cloud/ContainerForm/SecureContainer/monitoring-secure-containers.md @@ -2,14 +2,13 @@ - [Monitoring Secure Containers](#monitoring-secure-containers) - ## Description In kata 2.x, events subcommand is removed and replaced by **kata-runtime metrics**, which can be used to gather metrics associated with infrastructure used to run a sandbox, including virtual machine stats, shim v2 CPU seconds and CPU stat of guest OS and so on. Metrics are organized in a Prometheus compatible format so that they can be easily uploaded to Prometheus when work with kata-monitor. ## Usage -``` +```shell kata-runtime metrics ``` @@ -22,6 +21,7 @@ When using annotation to make a container run in a specific sandbox, clients sho This command can be used to query the status of only one container. 
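Because the command emits plain Prometheus-format text, ordinary text tools can slice the output. The sketch below is illustrative only: the sample lines imitate the shape of `kata-runtime metrics` output rather than being captured from a live sandbox.

```shell
# Illustrative: count the kata_hypervisor_netdev samples in a metrics dump.
# These lines only imitate the format of `kata-runtime metrics` output.
metrics='kata_hypervisor_netdev{interface="lo",item="recv_bytes"} 0
kata_hypervisor_netdev{interface="lo",item="recv_packets"} 0
kata_shim_io_stat{item="syscr"} 0'
printf '%s\n' "$metrics" | grep -c '^kata_hypervisor_netdev'
```

The same `grep` filter works on the real command output piped directly, for example `kata-runtime metrics <sandbox-id> | grep '^kata_hypervisor_netdev'`.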
## Example + ```shell $ kata-runtime metrics e2270357d23f9d3dd424011e1e70aa8defb267d813c3d451db58f35aeac97a04 @@ -53,4 +53,4 @@ kata_hypervisor_netdev{interface="lo",item="recv_packets"} 0 kata_hypervisor_netdev{interface="lo",item="sent_bytes"} 0 kata_hypervisor_netdev{interface="lo",item="sent_carrier"} 0 kata_hypervisor_netdev{interface="lo",item="sent_colls"} 0 -``` \ No newline at end of file +``` diff --git a/docs/en/docs/Container/public_sys-resources/icon-note.gif b/docs/en/Cloud/ContainerForm/SecureContainer/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/Container/public_sys-resources/icon-note.gif rename to docs/en/Cloud/ContainerForm/SecureContainer/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/Container/secure-container.md b/docs/en/Cloud/ContainerForm/SecureContainer/secure-container.md similarity index 93% rename from docs/en/docs/Container/secure-container.md rename to docs/en/Cloud/ContainerForm/SecureContainer/secure-container.md index 760d0296ff0c680eeba4c07dc4be7e6683a7d750..af6187378b0298b719dcc167e7601c43aad8963d 100644 --- a/docs/en/docs/Container/secure-container.md +++ b/docs/en/Cloud/ContainerForm/SecureContainer/secure-container.md @@ -1,6 +1,5 @@ # Secure Container - ## Overview The secure container technology is an organic combination of virtualization and container technologies. Compared with a common Linux container, a secure container has better isolation performance. @@ -11,7 +10,6 @@ Secure containers are isolated by the virtualization layers. Containers on the s **Figure 1** Secure container architecture - ![](./figures/secure-container.png) Secure containers are closely related to the concept of pod in Kubernetes. Kubernetes is the open-source ecosystem standard for the container scheduling management platform. It defines a group of container runtime interfaces \(CRIs\). @@ -27,3 +25,6 @@ In a secure container, you can start a single container or start a pod. 
**Figure 2** Relationship between the secure container and peripheral components ![](./figures/relationship-between-the-secure-container-and-peripheral-components.png "relationship-between-the-secure-container-and-peripheral-components") +> ![](./public_sys-resources/icon-note.gif) **Note:** +> +> Root privileges are necessary for installing and operating secure containers. diff --git a/docs/en/Cloud/ContainerForm/SystemContainer/Menu/index.md b/docs/en/Cloud/ContainerForm/SystemContainer/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..81deafde519c79e68c4e4f8d62447d082fb1e532 --- /dev/null +++ b/docs/en/Cloud/ContainerForm/SystemContainer/Menu/index.md @@ -0,0 +1,19 @@ +--- +headless: true +--- + +- [System Container]({{< relref "./system-container.md" >}}) + - [Installation Guideline]({{< relref "./installation-guideline.md" >}}) + - [Usage Guide]({{< relref "./usage-guide.md" >}}) + - [Specifying Rootfs to Create a Container]({{< relref "./specifying-rootfs-to-create-a-container.md" >}}) + - [Using systemd to Start a Container]({{< relref "./using-systemd-to-start-a-container.md" >}}) + - [Reboot or Shutdown in a Container]({{< relref "./reboot-or-shutdown-in-a-container.md" >}}) + - [Configurable Cgroup Path]({{< relref "./configurable-cgroup-path.md" >}}) + - [Writable Namespace Kernel Parameters]({{< relref "./writable-namespace-kernel-parameters.md" >}}) + - [Shared Memory Channels]({{< relref "./shared-memory-channels.md" >}}) + - [Dynamically Loading the Kernel Module]({{< relref "./dynamically-loading-the-kernel-module.md" >}}) + - [Environment Variable Persisting]({{< relref "./environment-variable-persisting.md" >}}) + - [Maximum Number of Handles]({{< relref "./maximum-number-of-handles.md" >}}) + - [Security and Isolation]({{< relref "./security-and-isolation.md" >}}) + - [Dynamically Managing Container Resources \\(syscontainer-tools\\)]({{< relref "./dynamically-managing-container-resources-syscontainer-tools.md" 
>}}) + - [Appendix]({{< relref "./appendix-1.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Container/appendix-1.md b/docs/en/Cloud/ContainerForm/SystemContainer/appendix-1.md similarity index 99% rename from docs/en/docs/Container/appendix-1.md rename to docs/en/Cloud/ContainerForm/SystemContainer/appendix-1.md index 88feb8756850a0852848f376b892f699394645f9..75232351e3a83a6fa3c920cc4c110e793b8e88f9 100644 --- a/docs/en/docs/Container/appendix-1.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/appendix-1.md @@ -1,14 +1,12 @@ -## Appendix +# Appendix -- [Appendix](#appendix-1) +- [Appendix](#appendix) - [Command Line Interface List](#command-line-interface-list) - ## Command Line Interface List This section lists commands in system containers, which are different from those in common containers. For details about other commands, refer to sections related to the iSulad container engine or run the **isula _XXX_ --help** command. -

(table: Command | Parameters)


@@ -91,4 +89,3 @@ This section lists commands in system containers, which are different from those
- diff --git a/docs/en/docs/Container/configurable-cgroup-path.md b/docs/en/Cloud/ContainerForm/SystemContainer/configurable-cgroup-path.md similarity index 94% rename from docs/en/docs/Container/configurable-cgroup-path.md rename to docs/en/Cloud/ContainerForm/SystemContainer/configurable-cgroup-path.md index d5d24d9d0b195d249fc536e32f5022bfa58f0e39..c089da0dc83f4572c2b34ade900dc7f8ee096030 100644 --- a/docs/en/docs/Container/configurable-cgroup-path.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/configurable-cgroup-path.md @@ -2,14 +2,12 @@ - [Configurable Cgroup Path](#configurable-cgroup-path) - ## Function Description System containers provide the capabilities of isolating and reserving container resources on hosts. You can use the **--cgroup-parent** parameter to specify the cgroup directory used by a container to another directory, thereby flexibly allocating host resources. For example, if the cgroup parent path of containers A, B, and C is set to **/lxc/cgroup1**, and the cgroup parent path of containers D, E, and F is set to **/lxc/cgroup2**, the containers are divided into two groups through the cgroup paths, implementing resource isolation at the cgroup level. ## Parameter Description -

(table: Command | Parameter)

@@ -51,21 +49,21 @@ In addition to specifying the cgroup parent path for a system container using co ## Constraints -- If the **cgroup parent** parameter is set on both the daemon and client, the value specified on the client takes effect. -- If container A is started before container B, the cgroup parent path of container B is specified as the cgroup path of container A. When deleting a container, you need to delete container B and then container A. Otherwise, residual cgroup resources exist. +- If the **cgroup parent** parameter is set on both the daemon and client, the value specified on the client takes effect. +- If container A is started before container B, the cgroup parent path of container B is specified as the cgroup path of container A. When deleting a container, you need to delete container B and then container A. Otherwise, residual cgroup resources exist. ## Example Start a system container and specify the **--cgroup-parent** parameter. -``` +```shell [root@localhost ~]# isula run -tid --cgroup-parent /lxc/cgroup123 --system-container --external-rootfs /root/myrootfs none init 115878a4dfc7c5b8c62ef8a4b44f216485422be9a28f447a4b9ecac4609f332e ``` Check the cgroup information of the init process in the container. -``` +```shell [root@localhost ~]# isula inspect -f "{{json .State.Pid}}" 11 22167 [root@localhost ~]# cat /proc/22167/cgroup @@ -89,11 +87,10 @@ The cgroup parent path of the container is set to **/sys/fs/cgroup/**_

(table: Command | Parameter)

@@ -30,15 +26,15 @@ Services in a container may depend on some kernel modules. You can set environme ## Constraints -- If loaded kernel modules are not verified or conflict with existing modules on the host, an unpredictable error may occur on the host. Therefore, exercise caution when loading kernel modules. -- Dynamic kernel module loading transfers kernel modules to be loaded to containers. This function is implemented by capturing environment variables for container startup using isulad-tools. Therefore, this function relies on the proper installation and deployment of isulad-tools. -- Loaded kernel modules need to be manually deleted. +- If loaded kernel modules are not verified or conflict with existing modules on the host, an unpredictable error may occur on the host. Therefore, exercise caution when loading kernel modules. +- Dynamic kernel module loading transfers kernel modules to be loaded to containers. This function is implemented by capturing environment variables for container startup using isulad-tools. Therefore, this function relies on the proper installation and deployment of isulad-tools. +- Loaded kernel modules need to be manually deleted. ## Example When starting a system container, specify the **-e KERNEL\_MODULES** parameter. After the system container is started, the ip\_vs module is successfully loaded to the kernel. -``` +```shell [root@localhost ~]# lsmod | grep ip_vs [root@localhost ~]# isula run -tid -e KERNEL_MODULES=ip_vs,ip_vs_wrr --hook-spec /etc/isulad-tools/hookspec.json --system-container --external-rootfs /root/myrootfs none init ae18c4281d5755a1e153a7bff6b3b4881f36c8e528b9baba8a3278416a5d0980 @@ -51,6 +47,6 @@ libcrc32c 16384 3 nf_conntrack,nf_nat,ip_vs ``` >![](./public_sys-resources/icon-note.gif) **NOTE:** ->- isulad-tools must be installed on the host. ->- **--hooks-spec** must be set to **isulad hooks**. - +> +> - isulad-tools must be installed on the host. +> - **--hooks-spec** must be set to **isulad hooks**. 
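The `KERNEL_MODULES` value is a plain comma-separated list of module names. The sketch below is illustrative only (it is not the actual isulad-tools hook code): it shows how such a value splits into individual module names, echoing each name instead of calling `modprobe`.

```shell
# Illustrative only: split a comma-separated KERNEL_MODULES value into
# individual module names, as the startup hook conceptually does.
KERNEL_MODULES="ip_vs,ip_vs_wrr"
for mod in $(printf '%s' "$KERNEL_MODULES" | tr ',' ' '); do
    # A real hook would run: modprobe "$mod"
    echo "would load kernel module: $mod"
done
```

The loop prints one line per module, which matches the two modules (`ip_vs` and `ip_vs_wrr`) passed in the example above.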
diff --git a/docs/en/docs/Container/dynamically-managing-container-resources-(syscontainer-tools).md b/docs/en/Cloud/ContainerForm/SystemContainer/dynamically-managing-container-resources-syscontainer-tools.md similarity index 89% rename from docs/en/docs/Container/dynamically-managing-container-resources-(syscontainer-tools).md rename to docs/en/Cloud/ContainerForm/SystemContainer/dynamically-managing-container-resources-syscontainer-tools.md index 53ea3ff11e0e752e111cf48d6c6d41d075badf95..ebd34899ad152414fccf1ffc254a2ecf9a76288f 100644 --- a/docs/en/docs/Container/dynamically-managing-container-resources-(syscontainer-tools).md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/dynamically-managing-container-resources-syscontainer-tools.md @@ -1,21 +1,19 @@ # Dynamically Managing Container Resources \(syscontainer-tools\) -- [Dynamically Managing Container Resources \(syscontainer-tools\)](#dynamically-managing-container-resources-(syscontainer-tools)) +- [Dynamically Managing Container Resources (syscontainer-tools)](#dynamically-managing-container-resources-syscontainer-tools) - [Device Management](#device-management) - [NIC Management](#nic-management) - [Route Management](#route-management) - [Volume Mounting Management](#volume-mounting-management) - Resources in common containers cannot be managed. For example, a block device cannot be added to a common container, and a physical or virtual NIC cannot be inserted to a common container. In the system container scenario, the syscontainer-tools can be used to dynamically mount or unmount block devices, network devices, routes, and volumes for containers. To use this function, you need to install the syscontainer-tools first. -``` +```shell [root@localhost ~]# yum install syscontainer-tools ``` - ## Device Management ### Function Description @@ -24,7 +22,7 @@ isulad-tools allows you to add block devices \(such as disks and logical volume ### Command Format -``` +```shell isulad-tools [COMMAND][OPTIONS] [ARG...] 
``` @@ -40,7 +38,6 @@ In the preceding format: ### Parameter Description -

(table: Command | Function Description)

@@ -103,25 +100,24 @@ In the preceding format: ### Constraints -- You can add or delete devices when container instances are not running. After the operation is complete, you can start the container to view the device status. You can also dynamically add a device when the container is running. -- Do not concurrently run the **fdisk** command to format disks in a container and on the host. Otherwise, the container disk usage will be affected. -- When you run the **add-device** command to add a disk to a specific directory of a container, if the parent directory in the container is a multi-level directory \(for example, **/dev/a/b/c/d/e**\) and the directory level does not exist, isulad-tools will automatically create the corresponding directory in the container. When the disk is deleted, the created parent directory is not deleted. If you run the **add-device** command to add a device to this parent directory again, a message is displayed, indicating that a device already exists and cannot be added. -- When you run the** add-device** command to add a disk or update disk parameters, you need to configure the disk QoS. Do not set the write or read rate limit for the block device \(I/O/s or byte/s\) to a small value. If the value is too small, the disk may be unreadable \(the actual reason is the speed is too slow\), affecting service functions. -- When you run the **--blkio-weight-device** command to limit the weight of a specified block device, if the block device supports only the BFQ mode, an error may be reported, prompting you to check whether the current OS environment supports setting the weight of the BFQ block device. +- You can add or delete devices when container instances are not running. After the operation is complete, you can start the container to view the device status. You can also dynamically add a device when the container is running. +- Do not concurrently run the **fdisk** command to format disks in a container and on the host. 
Otherwise, the container disk usage will be affected. +- When you run the **add-device** command to add a disk to a specific directory of a container, if the parent directory in the container is a multi-level directory \(for example, **/dev/a/b/c/d/e**\) and the directory level does not exist, isulad-tools will automatically create the corresponding directory in the container. When the disk is deleted, the created parent directory is not deleted. If you run the **add-device** command to add a device to this parent directory again, a message is displayed, indicating that a device already exists and cannot be added. +- When you run the**add-device** command to add a disk or update disk parameters, you need to configure the disk QoS. Do not set the write or read rate limit for the block device \(I/O/s or byte/s\) to a small value. If the value is too small, the disk may be unreadable \(the actual reason is the speed is too slow\), affecting service functions. +- When you run the **--blkio-weight-device** command to limit the weight of a specified block device, if the block device supports only the BFQ mode, an error may be reported, prompting you to check whether the current OS environment supports setting the weight of the BFQ block device. ### Example -- Start a system container, and set **hook spec** to the isulad hook execution script. +- Start a system container, and set **hook spec** to the isulad hook execution script. - ``` + ```shell [root@localhost ~]# isula run -tid --hook-spec /etc/isulad-tools/hookspec.json --system-container --external-rootfs /root/root-fs none init eed1096c8c7a0eca6d92b1b3bc3dd59a2a2adf4ce44f18f5372408ced88f8350 ``` +- Add a block device to a container. -- Add a block device to a container. - - ``` + ```shell [root@localhost ~]# isulad-tools add-device ee /dev/sdb:/dev/sdb123 Add device (/dev/sdb) to container(ee,/dev/sdb123) done. 
[root@localhost ~]# isula exec ee fdisk -l /dev/sdb123 @@ -137,22 +133,21 @@ In the preceding format: /dev/sdb123p5 4096 104857599 104853504 50G 83 Linux ``` -- Update the device information. +- Update the device information. - ``` + ```shell [root@localhost ~]# isulad-tools update-device --device-read-bps /dev/sdb:10m ee Update read bps for device (/dev/sdb,10485760) done. ``` -- Delete a device. +- Delete a device. - ``` + ```shell [root@localhost ~]# isulad-tools remove-device ee /dev/sdb:/dev/sdb123 Remove device (/dev/sdb) from container(ee,/dev/sdb123) done. Remove read bps for device (/dev/sdb) done. ``` - ## NIC Management ### Function Description @@ -161,7 +156,7 @@ isulad-tools allows you to insert physical or virtual NICs on the host to a cont ### Command Format -``` +```shell isulad-tools [COMMAND][OPTIONS] ``` @@ -175,7 +170,6 @@ In the preceding format: ### Parameter Description -

(table: Command | Function Description)

@@ -221,42 +215,40 @@ In the preceding format: ### Constraints -- Physical NICs \(eth\) and virtual NICs \(veth\) can be added. -- When adding a NIC, you can also configure the NIC. The configuration parameters include **--ip**, **--mac**, **--bridge**, **--mtu**, **--qlen**. -- A maximum of eight physical NICs can be added to a container. -- If you run the **isulad-tools add-nic** command to add an eth NIC to a container and do not add a hook, you must manually delete the NIC before the container exits. Otherwise, the name of the eth NIC on the host will be changed to the name of that in the container. -- For a physical NIC \(except 1822 VF NIC\), use the original MAC address when running the **add-nic** command. Do not change the MAC address in the container, or when running the **update-nic** command. -- When using the **isulad-tools add-nic** command, set the MTU value. The value range depends on the NIC model. -- When using isulad-tools to add NICs and routes to containers, you are advised to run the **add-nic** command to add NICs and then run the **add-route** command to add routes. When using isulad-tools to delete NICs and routes from a container, you are advised to run the **remove-route** command to delete routes and then run the **remove-nic** command to delete NICs. -- When using isulad-tools to add NICs, add a NIC to only one container. +- Physical NICs \(eth\) and virtual NICs \(veth\) can be added. +- When adding a NIC, you can also configure the NIC. The configuration parameters include **--ip**, **--mac**, **--bridge**, **--mtu**, **--qlen**. +- A maximum of eight physical NICs can be added to a container. +- If you run the **isulad-tools add-nic** command to add an eth NIC to a container and do not add a hook, you must manually delete the NIC before the container exits. Otherwise, the name of the eth NIC on the host will be changed to the name of that in the container. 
+- For a physical NIC \(except 1822 VF NIC\), use the original MAC address when running the **add-nic** command. Do not change the MAC address in the container, or when running the **update-nic** command. +- When using the **isulad-tools add-nic** command, set the MTU value. The value range depends on the NIC model. +- When using isulad-tools to add NICs and routes to containers, you are advised to run the **add-nic** command to add NICs and then run the **add-route** command to add routes. When using isulad-tools to delete NICs and routes from a container, you are advised to run the **remove-route** command to delete routes and then run the **remove-nic** command to delete NICs. +- When using isulad-tools to add NICs, add a NIC to only one container. ### Example -- Start a system container, and set **hook spec** to the isulad hook execution script. +- Start a system container, and set **hook spec** to the isulad hook execution script. - ``` + ```shell [root@localhost ~]# isula run -tid --hook-spec /etc/isulad-tools/hookspec.json --system-container --external-rootfs /root/root-fs none init 2aaca5c1af7c872798dac1a468528a2ccbaf20b39b73fc0201636936a3c32aa8 ``` +- Add a virtual NIC to a container. -- Add a virtual NIC to a container. - - ``` + ```shell [root@localhost ~]# isulad-tools add-nic --type "veth" --name abc2:bcd2 --ip 172.17.28.5/24 --mac 00:ff:48:13:xx:xx --bridge docker0 2aaca5c1af7c Add network interface to container 2aaca5c1af7c (bcd2,abc2) done ``` -- Add a physical NIC to a container. +- Add a physical NIC to a container. - ``` + ```shell [root@localhost ~]# isulad-tools add-nic --type "eth" --name eth3:eth1 --ip 172.17.28.6/24 --mtu 1300 --qlen 2100 2aaca5c1af7c Add network interface to container 2aaca5c1af7c (eth3,eth1) done ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >When adding a virtual or physical NIC, ensure that the NIC is in the idle state. Adding a NIC in use will disconnect the system network. 
- + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > When adding a virtual or physical NIC, ensure that the NIC is in the idle state. Adding a NIC in use will disconnect the system network. ## Route Management @@ -266,7 +258,7 @@ isulad-tools can be used to dynamically add or delete routing tables for system ### Command Format -``` +```shell isulad-tools [COMMAND][OPTIONS] [ARG...] ``` @@ -282,7 +274,6 @@ In the preceding format: ### API Description -

(table: Command | Function Description)

@@ -334,37 +325,35 @@ In the preceding format: ### Constraints -- When using isulad-tools to add NICs and routes to containers, you are advised to run the **add-nic** command to add NICs and then run the **add-route** command to add routes. When using isulad-tools to delete NICs and routes from a container, you are advised to run the **remove-route** command to delete routes and then run the **remove-nic** command to delete NICs. -- When adding a routing rule to a container, ensure that the added routing rule does not conflict with existing routing rules in the container. +- When using isulad-tools to add NICs and routes to containers, you are advised to run the **add-nic** command to add NICs and then run the **add-route** command to add routes. When using isulad-tools to delete NICs and routes from a container, you are advised to run the **remove-route** command to delete routes and then run the **remove-nic** command to delete NICs. +- When adding a routing rule to a container, ensure that the added routing rule does not conflict with existing routing rules in the container. ### Example -- Start a system container, and set **hook spec** to the isulad hook execution script. +- Start a system container, and set **hook spec** to the isulad hook execution script. - ``` + ```shell [root@localhost ~]# isula run -tid --hook-spec /etc/isulad-tools/hookspec.json --system-container --external-rootfs /root/root-fs none init 0d2d68b45aa0c1b8eaf890c06ab2d008eb8c5d91e78b1f8fe4d37b86fd2c190b ``` +- Use isulad-tools to add a physical NIC to the system container. -- Use isulad-tools to add a physical NIC to the system container. - - ``` + ```shell [root@localhost ~]# isulad-tools add-nic --type "eth" --name enp4s0:eth123 --ip 172.17.28.6/24 --mtu 1300 --qlen 2100 0d2d68b45aa0 Add network interface (enp4s0) to container (0d2d68b45aa0,eth123) done ``` +- isulad-tools adds a routing rule to the system container. 
Format example: **\[\{"dest":"default", "gw":"192.168.10.1"\},\{"dest":"192.168.0.0/16","dev":"eth0","src":"192.168.1.2"\}\]**. If **dest** is left blank, its value will be **default**. -- isulad-tools adds a routing rule to the system container. Format example: **\[\{"dest":"default", "gw":"192.168.10.1"\},\{"dest":"192.168.0.0/16","dev":"eth0","src":"192.168.1.2"\}\]**. If **dest** is left blank, its value will be **default**. - - ``` + ```shell [root@localhost ~]# isulad-tools add-route 0d2d68b45aa0 '[{"dest":"172.17.28.0/32", "gw":"172.17.28.5","dev":"eth123"}]' Add route to container 0d2d68b45aa0, route: {dest:172.17.28.0/32,src:,gw:172.17.28.5,dev:eth123} done ``` -- Check whether a routing rule is added in the container. +- Check whether a routing rule is added in the container. - ``` + ```shell [root@localhost ~]# isula exec -it 0d2d68b45aa0 route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface @@ -372,7 +361,6 @@ In the preceding format: 172.17.28.0 0.0.0.0 255.255.255.0 U 0 0 0 eth123 ``` - ## Volume Mounting Management ### Function Description @@ -381,7 +369,7 @@ In a common container, you can set the **--volume** parameter during container ### Command Format -``` +```shell isulad-tools [COMMAND][OPTIONS] [ARG...] ``` @@ -397,7 +385,7 @@ In the preceding format: ### API Description -**Table 1**    +**Table 1**

Command

@@ -451,45 +439,42 @@ In the preceding format: ### Constraints -- When running the **add-path** command, specify an absolute path as the mount path. -- The mount point /.sharedpath is generated on the host after the mount path is specified by running the **add-path** command. -- A maximum of 128 volumes can be added to a container. -- Do not overwrite the root directory \(/\) in a container with the host directory by running the **add-path** command. Otherwise, the function is affected. +- When running the **add-path** command, specify an absolute path as the mount path. +- The mount point /.sharedpath is generated on the host after the mount path is specified by running the **add-path** command. +- A maximum of 128 volumes can be added to a container. +- Do not overwrite the root directory \(/\) in a container with the host directory by running the **add-path** command. Otherwise, the function is affected. ### Example -- Start a system container, and set **hook spec** to the isulad hook execution script. +- Start a system container, and set **hook spec** to the isulad hook execution script. - ``` + ```shell [root@localhost ~]# isula run -tid --hook-spec /etc/isulad-tools/hookspec.json --system-container --external-rootfs /root/root-fs none init e45970a522d1ea0e9cfe382c2b868d92e7b6a55be1dd239947dda1ee55f3c7f7 ``` +- Use isulad-tools to mount a directory on the host to a container, implementing resource sharing. -- Use isulad-tools to mount a directory on the host to a container, implementing resource sharing. - - ``` + ```shell [root@localhost ~]# isulad-tools add-path e45970a522d1 /home/test123:/home/test123 Add path (/home/test123) to container(e45970a522d1,/home/test123) done. ``` -- Create a file in the **/home/test123** directory on the host and check whether the file can be accessed in the container. +- Create a file in the **/home/test123** directory on the host and check whether the file can be accessed in the container. 
- ``` + ```shell [root@localhost ~]# echo "hello world" > /home/test123/helloworld [root@localhost ~]# isula exec e45970a522d1 bash [root@localhost /]# cat /home/test123/helloworld hello world ``` -- Use isulad-tools to delete the mount directory from the container. +- Use isulad-tools to delete the mount directory from the container. - ``` + ```shell [root@localhost ~]# isulad-tools remove-path e45970a522d1 /home/test123:/home/test123 Remove path (/home/test123) from container(e45970a522d1,/home/test123) done [root@localhost ~]# isula exec e45970a522d1 bash [root@localhost /]# ls /home/test123/helloworld ls: cannot access '/home/test123/helloworld': No such file or directory ``` - - diff --git a/docs/en/docs/Container/environment-variable-persisting.md b/docs/en/Cloud/ContainerForm/SystemContainer/environment-variable-persisting.md similarity index 90% rename from docs/en/docs/Container/environment-variable-persisting.md rename to docs/en/Cloud/ContainerForm/SystemContainer/environment-variable-persisting.md index f296c5c144245df505a9256b6373ace2a983f82d..c2348c0b7324d2569af565eaecb6d31d86d80430 100644 --- a/docs/en/docs/Container/environment-variable-persisting.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/environment-variable-persisting.md @@ -2,14 +2,12 @@ - [Environment Variable Persisting](#environment-variable-persisting) - ## Function Description In a system container, you can make the **env** variable persistent to the configuration file in the rootfs directory of the container by specifying the **--env-target-file** interface parameter. ## Parameter Description -

Command

Parameter

@@ -30,15 +28,15 @@ In a system container, you can make the **env** variable persistent to the con ## Constraints -- If the target file specified by **--env-target-file** exists, the size cannot exceed 10 MB. -- The parameter specified by **--env-target-file** must be an absolute path in the rootfs directory. -- If the value of **--env** conflicts with that of **env** in the target file, the value of **--env** prevails. +- If the target file specified by **--env-target-file** exists, the size cannot exceed 10 MB. +- The parameter specified by **--env-target-file** must be an absolute path in the rootfs directory. +- If the value of **--env** conflicts with that of **env** in the target file, the value of **--env** prevails. ## Example Start a system container and specify the **env** environment variable and **--env-target-file** parameter. -``` +```shell [root@localhost ~]# isula run -tid -e abc=123 --env-target-file /etc/environment --system-container --external-rootfs /root/myrootfs none init b75df997a64da74518deb9a01d345e8df13eca6bcc36d6fe40c3e90ea1ee088e [root@localhost ~]# isula exec b7 cat /etc/environment @@ -48,4 +46,3 @@ abc=123 ``` The preceding information indicates that the **env** variable \(**abc=123**\) of the container has been made persistent to the **/etc/environment** configuration file. - diff --git a/docs/en/docs/Container/installation-guideline.md b/docs/en/Cloud/ContainerForm/SystemContainer/installation-guideline.md similarity index 49% rename from docs/en/docs/Container/installation-guideline.md rename to docs/en/Cloud/ContainerForm/SystemContainer/installation-guideline.md index 738f8861408c2709f8d3250cd2228a9cc584e4f3..1c60ca4ecb1038766531b2b8eb41ae4923c998eb 100644 --- a/docs/en/docs/Container/installation-guideline.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/installation-guideline.md @@ -1,28 +1,26 @@ # Installation Guideline -1. Install the container engine iSulad. +1. Install the container engine iSulad. 
- ``` + ```shell # yum install iSulad ``` -2. Install dependent packages of system containers. +2. Install dependent packages of system containers. - ``` + ```shell # yum install isulad-tools authz isulad-lxcfs-toolkit lxcfs ``` -3. Run the following command to check whether iSulad is started: +3. Run the following command to check whether iSulad is started: - ``` + ```shell # systemctl status isulad ``` -4. Enable the lxcfs and authz services. +4. Enable the lxcfs and authz services. - ``` + ```shell # systemctl start lxcfs # systemctl start authz ``` - - diff --git a/docs/en/docs/Container/maximum-number-of-handles.md b/docs/en/Cloud/ContainerForm/SystemContainer/maximum-number-of-handles.md similarity index 92% rename from docs/en/docs/Container/maximum-number-of-handles.md rename to docs/en/Cloud/ContainerForm/SystemContainer/maximum-number-of-handles.md index a8cdb1d40bf2a63c78e36d75b8bc8207b02aeff2..08764a0a779bfa84941f5fef5bb4a9eaccedd1ec 100644 --- a/docs/en/docs/Container/maximum-number-of-handles.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/maximum-number-of-handles.md @@ -2,14 +2,12 @@ - [Maximum Number of Handles](#maximum-number-of-handles) - ## Function Description System containers support limit on the number of file handles. File handles include common file handles and network sockets. When starting a container, you can specify the **--files-limit** parameter to limit the maximum number of handles opened in the container. ## Parameter Description -

Command

Parameter

@@ -31,14 +29,14 @@ System containers support limit on the number of file handles. File handles incl ## Constraints -- If the value of **--files-limit** is too small, the system container may fail to run the **exec** command and the error "open temporary files" is reported. Therefore, you are advised to set the parameter to a large value. -- File handles include common file handles and network sockets. +- If the value of **--files-limit** is too small, the system container may fail to run the **exec** command and the error "open temporary files" is reported. Therefore, you are advised to set the parameter to a large value. +- File handles include common file handles and network sockets. ## Example To use **--files-limit** to limit the number of file handles opened in a container, run the following command to check whether the kernel supports files cgroup: -``` +```shell [root@localhost ~]# cat /proc/1/cgroup | grep files 10:files:/ ``` @@ -47,7 +45,7 @@ If **files** is displayed, files cgroup is supported. Start the container, specify the **--files-limit** parameter, and check whether the **files.limit** parameter is successfully written. -``` +```shell [root@localhost ~]# isula run -tid --files-limit 1024 --system-container --external-rootfs /tmp/root-fs empty init 01e82fcf97d4937aa1d96eb8067f9f23e4707b92de152328c3fc0ecb5f64e91d [root@localhost ~]# isula exec -it 01e82fcf97d4 bash [root@localhost ~]# cat /sys/fs/cgroup/files/files.limit @@ -56,4 +54,3 @@ Start the container, specify the **--files-limit** parameter, and check whethe ``` The preceding information indicates that the number of file handles is successfully limited in the container. 
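The files cgroup check above can be scripted as a pre-flight step before using **--files-limit**. The sketch below is illustrative only: `files_cgroup_supported` and `suggest_run_command` are hypothetical helper names, and the `isula run` invocation is printed rather than executed, so nothing here requires a running iSulad.

```shell
# Hypothetical pre-flight helper: mirror the guide's check that the files
# cgroup controller appears in /proc/1/cgroup before relying on --files-limit.
files_cgroup_supported() {
  grep -q ':files:' /proc/1/cgroup 2>/dev/null
}

# Print (do not execute) the isula invocation that would apply the limit.
suggest_run_command() {
  printf 'isula run -tid --files-limit %s --system-container --external-rootfs %s empty init\n' "$1" "$2"
}

if files_cgroup_supported; then
  suggest_run_command 1024 /tmp/root-fs
else
  echo "files cgroup not supported by this kernel; --files-limit would have no effect"
fi
```

If the kernel lacks the files controller, the limit is silently ineffective, which is why the check comes first.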
- diff --git a/docs/en/docs/DPU-OS/public_sys-resources/icon-note.gif b/docs/en/Cloud/ContainerForm/SystemContainer/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/DPU-OS/public_sys-resources/icon-note.gif rename to docs/en/Cloud/ContainerForm/SystemContainer/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/Container/reboot-or-shutdown-in-a-container.md b/docs/en/Cloud/ContainerForm/SystemContainer/reboot-or-shutdown-in-a-container.md similarity index 84% rename from docs/en/docs/Container/reboot-or-shutdown-in-a-container.md rename to docs/en/Cloud/ContainerForm/SystemContainer/reboot-or-shutdown-in-a-container.md index 84d5c380070ef178a6e525732c4f5cbf21619d0c..b5e9b77f1215e3e9054fe9abb90e92a1eae1774d 100644 --- a/docs/en/docs/Container/reboot-or-shutdown-in-a-container.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/reboot-or-shutdown-in-a-container.md @@ -1,15 +1,11 @@ # Reboot or Shutdown in a Container -- [Reboot or Shutdown in a Container](#reboot-or-shutdown-in-a-container) - - ## Function Description The **reboot** and **shutdown** commands can be executed in a system container. You can run the **reboot** command to restart a container, and run the **shutdown** command to stop a container. ## Parameter Description -

Command

Parameter

@@ -32,29 +28,28 @@ The **reboot** and **shutdown** commands can be executed in a system contain ## Constraints -- The shutdown function relies on the actual OS of the container running environment. -- When you run the **shutdown -h now** command to shut down the system, do not open multiple consoles. For example, if you run the **isula run -ti** command to open a console and run the **isula attach** command for the container in another host bash, another console is opened. In this case, the **shutdown** command fails to be executed. +- The shutdown function relies on the actual OS of the container running environment. +- When you run the **shutdown -h now** command to shut down the system, do not open multiple consoles. For example, if you run the **isula run -ti** command to open a console and run the **isula attach** command for the container in another host bash, another console is opened. In this case, the **shutdown** command fails to be executed. ## Example -- Specify the **--restart on-reboot** parameter when starting a container. For example: +- Specify the **--restart on-reboot** parameter when starting a container. For example: - ``` + ```shell [root@localhost ~]# isula run -tid --restart on-reboot --system-container --external-rootfs /root/myrootfs none init 106faae22a926e22c828a0f2b63cf5c46e5d5986ea8a5b26de81390d0ed9714f ``` +- In the container, run the **reboot** command. -- In the container, run the **reboot** command. - - ``` + ```shell [root@localhost ~]# isula exec -it 10 bash [root@localhost /]# reboot ``` Check whether the container is restarted. - ``` + ```shell [root@localhost ~]# isula exec -it 10 ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.1 0.0 21588 9504 ? Ss 12:11 0:00 init @@ -64,9 +59,9 @@ The **reboot** and **shutdown** commands can be executed in a system contain root 26 0.0 0.0 8092 3012 ? Rs+ 12:13 0:00 ps aux ``` -- In the container, run the **shutdown** command. 
+- In the container, run the **shutdown** command. - ``` + ```shell [root@localhost ~]# isula exec -it 10 bash [root@localhost /]# shutdown -h now [root@localhost /]# [root@localhost ~]# @@ -74,9 +69,7 @@ The **reboot** and **shutdown** commands can be executed in a system contain Check whether the container is stopped. - ``` + ```shell [root@localhost ~]# isula exec -it 10 bash Error response from daemon: Exec container error;Container is not running:106faae22a926e22c828a0f2b63cf5c46e5d5986ea8a5b26de81390d0ed9714f ``` - - diff --git a/docs/en/docs/Container/security-and-isolation.md b/docs/en/Cloud/ContainerForm/SystemContainer/security-and-isolation.md similarity index 97% rename from docs/en/docs/Container/security-and-isolation.md rename to docs/en/Cloud/ContainerForm/SystemContainer/security-and-isolation.md index 740d6295972c57c7a64b17dee7d54a173522ec18..33b30b65b979b8800ca3d8ed07e39ee1c607aa52 100644 --- a/docs/en/docs/Container/security-and-isolation.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/security-and-isolation.md @@ -47,10 +47,10 @@ In system containers, you can configure the **--user-remap** API parameter to ### Usage Guide ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->Before specifying the **--user-remap** parameter, configure an offset value for UIDs and GIDs of all directories and files in rootfs. The offset value should be equal to that for _uid_ and _gid_ in **--user-remap**. ->For example, run the following command to offset UIDs and GIDs of all files in the **dev** directory with 100000: ->chown 100000:100000 dev +>![](./public_sys-resources/icon-note.gif) **NOTE:** +>Before specifying the **--user-remap** parameter, configure an offset value for UIDs and GIDs of all directories and files in rootfs. The offset value should be equal to that for _uid_ and _gid_ in **--user-remap**. 
+>For example, run the following command to offset UIDs and GIDs of all files in the **dev** directory with 100000: +>chown 100000:100000 dev Specify the **--user-remap** parameter when the system container is started. @@ -136,24 +136,24 @@ You can configure the startup parameters of the iSulad container engine to speci ```shell #SERVERSIDE - + # Generate CA key openssl genrsa -aes256 -passout "pass:$PASSWORD" -out "ca-key.pem" 4096 # Generate CA openssl req -new -x509 -days $VALIDITY -key "ca-key.pem" -sha256 -out "ca.pem" -passin "pass:$PASSWORD" -subj "/C=$COUNTRY/ST=$STATE/L=$CITY/O=$ORGANIZATION/OU=$ORGANIZATIONAL_UNIT/CN=$COMMON_NAME/emailAddress=$EMAIL" # Generate Server key openssl genrsa -out "server-key.pem" 4096 - + # Generate Server Certs. openssl req -subj "/CN=$COMMON_NAME" -sha256 -new -key "server-key.pem" -out server.csr - + echo "subjectAltName = DNS:localhost,IP:127.0.0.1" > extfile.cnf echo "extendedKeyUsage = serverAuth" >> extfile.cnf - + openssl x509 -req -days $VALIDITY -sha256 -in server.csr -passin "pass:$PASSWORD" -CA "ca.pem" -CAkey "ca-key.pem" -CAcreateserial -out "server-cert.pem" -extfile extfile.cnf - + #CLIENTSIDE - + openssl genrsa -out "key.pem" 4096 openssl req -subj "/CN=$CLIENT_NAME" -new -key "key.pem" -out client.csr echo "extendedKeyUsage = clientAuth" > extfile.cnf @@ -187,10 +187,10 @@ You can configure the startup parameters of the iSulad container engine to speci - Alice can perform any container operations: **\{"name":"policy\_5","users":\["alice"\],"actions":\["container"\]\}** - Alice can perform any container operations, but the request type can only be **get**: **\{"name":"policy\_5","users":\["alice"\],"actions":\["container"\], "readonly":true\}** - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >- **actions** supports regular expressions. - >- **users** does not support regular expressions. - >- A users cannot be repeatedly specified by **users**. That is, a user cannot match multiple rules. 
+ > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > - **actions** supports regular expressions. + > - **users** does not support regular expressions. + > - A user cannot be repeatedly specified by **users**. That is, a user cannot match multiple rules. 5. After updating the configurations, configure TLS parameters on the client to connect to the container engine. That is, access the container engine with restricted permissions. @@ -202,7 +202,7 @@ You can configure the startup parameters of the iSulad container engine to speci ```shell [root@localhost ~]# mkdir -pv ~/.iSulad - [root@localhost ~]# cp -v {ca,cert,key}.pem ~/.iSulad + [root@localhost ~]# cp -v {ca,cert,key}.pem ~/.iSulad [root@localhost ~]# export ISULAD_HOST=localhost:2375 ISULAD_TLS_VERIFY=1 [root@localhost ~]# isula version ``` @@ -286,7 +286,7 @@ lxcfs-toolkit [OPTIONS] COMMAND [COMMAND_OPTIONS] 1. Install the lxcfs and lxcfs-toolkit packages and start the lxcfs service. ```shell - [root@localhost ~]# yum install lxcfs lxcfs-toolkit + [root@localhost ~]# yum install lxcfs lxcfs-toolkit [root@localhost ~]# systemctl start lxcfs ``` @@ -321,7 +321,7 @@ lxcfs-toolkit [OPTIONS] COMMAND [COMMAND_OPTIONS] CPU variant : 0x0 CPU part : 0xd08 CPU revision : 2 - + processor : 1 BogoMIPS : 100.00 cpu MHz : 2400.000 @@ -331,7 +331,7 @@ lxcfs-toolkit [OPTIONS] COMMAND [COMMAND_OPTIONS] CPU variant : 0x0 CPU part : 0xd08 CPU revision : 2 - + [root@localhost ~]# isula exec a8 free -m total used free shared buff/cache available Mem: 1024 17 997 7 8 1006 diff --git a/docs/en/docs/Container/shared-memory-channels.md b/docs/en/Cloud/ContainerForm/SystemContainer/shared-memory-channels.md similarity index 80% rename from docs/en/docs/Container/shared-memory-channels.md rename to docs/en/Cloud/ContainerForm/SystemContainer/shared-memory-channels.md index f00335a8fe96cb4b9e08c181566800601a15d63a..0739ce59b3e41e20d6034087c15b2422f0aa2a77 100644 --- a/docs/en/docs/Container/shared-memory-channels.md +++ 
b/docs/en/Cloud/ContainerForm/SystemContainer/shared-memory-channels.md @@ -1,15 +1,11 @@ # Shared Memory Channels -- [Shared Memory Channels](#shared-memory-channels) - - ## Function Description System containers enable the communication between container and host processes through shared memory. You can set the **--host-channel** parameter when creating a container to allow the host to share the same tmpfs with the container so that they can communicate with each other. ## Parameter Description - - - - - - -

Command

Parameter

@@ -20,7 +16,7 @@ System containers enable the communication between container and host processes

isula create/run

--host-channel

+

--host-channel

  • Variable of the string type. Its format is as follows:
    <host path>:<container path>:<rw/ro>:<size limit>
  • The parameter is described as follows:

    <host path>: path to which tmpfs is mounted on the host, which must be an absolute path.

    @@ -35,25 +31,25 @@ System containers enable the communication between container and host processes ## Constraints -- The lifecycle of tmpfs mounted on the host starts from the container startup to the container deletion. After a container is deleted and its occupied space is released, the space is removed. -- When a container is deleted, the path to which tmpfs is mounted on the host is deleted. Therefore, an existing directory on the host cannot be used as the mount path. -- To ensure that processes running by non-root users on the host can communicate with containers, the permission for tmpfs mounted on the host is 1777. +- The lifecycle of tmpfs mounted on the host starts from the container startup to the container deletion. After a container is deleted and its occupied space is released, the space is removed. +- When a container is deleted, the path to which tmpfs is mounted on the host is deleted. Therefore, an existing directory on the host cannot be used as the mount path. +- To ensure that processes running by non-root users on the host can communicate with containers, the permission for tmpfs mounted on the host is 1777. ## Example Specify the **--host-channel** parameter when creating a container. 
-``` -[root@localhost ~]# isula run --rm -it --host-channel /testdir:/testdir:rw:32M --system-container --external-rootfs /root/myrootfs none init -root@3b947668eb54:/# dd if=/dev/zero of=/testdir/test.file bs=1024 count=64K -dd: error writing '/testdir/test.file': No space left on device -32769+0 records in -32768+0 records out +```shell +[root@localhost ~]# isula run --rm -it --host-channel /testdir:/testdir:rw:32M --system-container --external-rootfs /root/myrootfs none init +root@3b947668eb54:/# dd if=/dev/zero of=/testdir/test.file bs=1024 count=64K +dd: error writing '/testdir/test.file': No space left on device +32769+0 records in +32768+0 records out 33554432 bytes (34 MB, 32 MiB) copied, 0.0766899 s, 438 MB/s ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->- If **--host-channel** is used for size limit, the file size is constrained by the memory limit in the container. \(The OOM error may occur when the memory usage reaches the upper limit.\) ->- If a user creates a shared file on the host, the file size is not constrained by the memory limit in the container. ->- If you need to create a shared file in the container and the service is memory-intensive, you can add the value of **--host-channel** to the original value of the container memory limit, eliminating the impact. - +>![](./public_sys-resources/icon-note.gif) **NOTE:** +> +> - If **--host-channel** is used for size limit, the file size is constrained by the memory limit in the container. \(The OOM error may occur when the memory usage reaches the upper limit.\) +> - If a user creates a shared file on the host, the file size is not constrained by the memory limit in the container. +> - If you need to create a shared file in the container and the service is memory-intensive, you can add the value of **--host-channel** to the original value of the container memory limit, eliminating the impact. 
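The `<host path>:<container path>:<rw/ro>:<size limit>` format can be sanity-checked before the value reaches `isula`. The helper below is a hypothetical sketch, not part of isulad-tools; it only enforces the broad shape documented above (both paths absolute, mode `rw` or `ro`, a size such as `32M`).

```shell
# Hypothetical validator for a --host-channel value; assumes the documented
# format <host path>:<container path>:<rw/ro>:<size limit> with absolute paths.
valid_host_channel() {
  case "$1" in
    /*:/*:rw:[0-9]*[KMG] | /*:/*:ro:[0-9]*[KMG]) return 0 ;;
    *) return 1 ;;
  esac
}

valid_host_channel "/testdir:/testdir:rw:32M" && echo "spec accepted"
valid_host_channel "testdir:/testdir:rw:32M" || echo "rejected: host path must be absolute"
```

A shell glob is only a coarse filter; isula itself remains the authority on what it accepts.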
diff --git a/docs/en/docs/Container/specifying-rootfs-to-create-a-container.md b/docs/en/Cloud/ContainerForm/SystemContainer/specifying-rootfs-to-create-a-container.md similarity index 78% rename from docs/en/docs/Container/specifying-rootfs-to-create-a-container.md rename to docs/en/Cloud/ContainerForm/SystemContainer/specifying-rootfs-to-create-a-container.md index 6a2dbfae7a81e4ef6dafbb5ce1a951e067c93d5d..8af830647eaa7de05363cc6efdceaeb5498a6217 100644 --- a/docs/en/docs/Container/specifying-rootfs-to-create-a-container.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/specifying-rootfs-to-create-a-container.md @@ -27,20 +27,19 @@ Different from a common container that needs to be started by specifying a conta ## Constraints -- The rootfs directory specified using the **--external-rootfs** parameter must be an absolute path. -- The rootfs directory specified using the **--external-rootfs** parameter must be a complete OS environment including **systemd** package. Otherwise, the container fails to be started. -- When a container is deleted, the rootfs directory specified using **--external-rootfs** is not deleted. -- Containers based on an ARM rootfs cannot run in the x86 environment. Containers based on an x86 rootfs cannot run in the ARM environment. -- You are advised not to start multiple container instances in the same rootfs. That is, one rootfs is used by only one container instance that is in the lifecycle. +- The rootfs directory specified using the **--external-rootfs** parameter must be an absolute path. +- The rootfs directory specified using the **--external-rootfs** parameter must be a complete OS environment including **systemd** package. Otherwise, the container fails to be started. +- When a container is deleted, the rootfs directory specified using **--external-rootfs** is not deleted. +- Containers based on an ARM rootfs cannot run in the x86 environment. Containers based on an x86 rootfs cannot run in the ARM environment. 
+- You are advised not to start multiple container instances in the same rootfs. That is, one rootfs is used by only one container instance that is in the lifecycle. ## Example Assuming the local rootfs path is **/root/myrootfs**, run the following command to start a system container: -``` +```shell # isula run -tid --system-container --external-rootfs /root/myrootfs none init ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The rootfs is a user-defined file system. Prepare it by yourself. For example, a rootfs is generated after the TAR package of a container image is decompressed. - +>![](./public_sys-resources/icon-note.gif) **NOTE:** +>The rootfs is a user-defined file system. Prepare it by yourself. For example, a rootfs is generated after the TAR package of a container image is decompressed. diff --git a/docs/en/docs/Container/system-container.md b/docs/en/Cloud/ContainerForm/SystemContainer/system-container.md similarity index 100% rename from docs/en/docs/Container/system-container.md rename to docs/en/Cloud/ContainerForm/SystemContainer/system-container.md diff --git a/docs/en/docs/Container/usage-guide.md b/docs/en/Cloud/ContainerForm/SystemContainer/usage-guide.md similarity index 49% rename from docs/en/docs/Container/usage-guide.md rename to docs/en/Cloud/ContainerForm/SystemContainer/usage-guide.md index df5b305b523c1d4c60c33fd4d270dc58d613be0a..d0d4336a17f0860fb5e81d68afbf5f07c1f71ae3 100644 --- a/docs/en/docs/Container/usage-guide.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/usage-guide.md @@ -1,19 +1,17 @@ # Usage Guide - System container functions are enhanced based on the iSula container engine. The container management function and the command format of the function provided by system containers are the same as those provided by the iSula container engine. -The following sections describe how to use the enhanced functions provided by system containers. 
For details about other command operations, see [iSulad Container Engine](#isulad-container-engine.md#EN-US_TOPIC_0184808037). +The following sections describe how to use the enhanced functions provided by system containers. For details about other command operations, see [iSulad Container Engine](../../ContainerEngine/iSulaContainerEngine/isulad-container-engine.md). The system container functions involve only the **isula create/run** command. Unless otherwise specified, this command is used for all functions. The command format is as follows: -``` +```shell isula create/run [OPTIONS] [COMMAND] [ARG...] ``` In the preceding format: -- **OPTIONS**: one or more command parameters. For details about supported parameters, see [iSulad Container Engine](#isulad-container-engine.md#EN-US_TOPIC_0184808037) \> [Appendix](#appendix.md#EN-US_TOPIC_0184808158) \> [Command Line Parameters](#command-line-parameters.md#EN-US_TOPIC_0189976936). -- **COMMAND**: command executed after a system container is started. -- **ARG**: parameter corresponding to the command executed after a system container is started. - +- **OPTIONS**: one or more command parameters. For details about supported parameters, see [iSulad Container Engine](../../ContainerEngine/iSulaContainerEngine/isulad-container-engine.md) \> [Appendix](../../ContainerEngine/iSulaContainerEngine/appendix.md) \> [Command Line Parameters](../../ContainerEngine/iSulaContainerEngine/appendix.md#command-line-parameters). +- **COMMAND**: command executed after a system container is started. +- **ARG**: parameter corresponding to the command executed after a system container is started. 
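The `isula create/run [OPTIONS] [COMMAND] [ARG...]` shape above can be illustrated with a small wrapper that assembles and prints the command line. This is a minimal sketch with hypothetical helper names and example option values; it prints the command instead of running it, so no container engine is needed.

```shell
# Hypothetical helper: assemble "isula <verb> [OPTIONS] [COMMAND] [ARG...]"
# and print it for inspection rather than executing it.
build_isula_cmd() {
  verb=$1; opts=$2; cmd=$3; shift 3
  printf 'isula %s %s %s' "$verb" "$opts" "$cmd"
  for arg in "$@"; do printf ' %s' "$arg"; done
  printf '\n'
}

# Example: OPTIONS select a system container, COMMAND is "none", ARG is "init".
build_isula_cmd run "-tid --system-container --external-rootfs /root/myrootfs" none init
```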
diff --git a/docs/en/docs/Container/using-systemd-to-start-a-container.md b/docs/en/Cloud/ContainerForm/SystemContainer/using-systemd-to-start-a-container.md similarity index 99% rename from docs/en/docs/Container/using-systemd-to-start-a-container.md rename to docs/en/Cloud/ContainerForm/SystemContainer/using-systemd-to-start-a-container.md index 69dc5d7bf4c77b31cf594c51af66ca0664f9fe4d..db4ddb666d2d31097f18c17299842301139c00a7 100644 --- a/docs/en/docs/Container/using-systemd-to-start-a-container.md +++ b/docs/en/Cloud/ContainerForm/SystemContainer/using-systemd-to-start-a-container.md @@ -56,7 +56,7 @@ The init process started in system containers differs from that in common contai root 16 1 0 06:49 ? 00:00:00 /usr/lib/systemd/systemd-network dbus 23 1 0 06:49 ? 00:00:00 /usr/bin/dbus-daemon --system -- root 25 0 0 06:49 ? 00:00:00 bash - root 59 25 0 06:49 ? 00:00:00 ps –ef + root 59 25 0 06:49 ? 00:00:00 ps -ef ``` - Run the **systemctl** command in the container to check the service status. The command output indicates that the service is managed by systemd. 
diff --git a/docs/en/docs/Container/writable-namespace-kernel-parameters.md b/docs/en/Cloud/ContainerForm/SystemContainer/writable-namespace-kernel-parameters.md similarity index 100% rename from docs/en/docs/Container/writable-namespace-kernel-parameters.md rename to docs/en/Cloud/ContainerForm/SystemContainer/writable-namespace-kernel-parameters.md diff --git a/docs/en/Cloud/ContainerRuntime/Kuasar/Menu/index.md b/docs/en/Cloud/ContainerRuntime/Kuasar/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..a4833dcace583f81d3a6294b2310bfd44789f5df --- /dev/null +++ b/docs/en/Cloud/ContainerRuntime/Kuasar/Menu/index.md @@ -0,0 +1,8 @@ +--- +headless: true +--- + +- [Kuasar Multi-Sandbox Container Runtime]({{< relref "./kuasar.md" >}}) + - [Installation and Configuration]({{< relref "./kuasar-install-config.md" >}}) + - [Usage Instructions]({{< relref "./kuasar-usage.md" >}}) + - [Appendix]({{< relref "./kuasar-appendix.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Container/figures/kuasar_arch.png b/docs/en/Cloud/ContainerRuntime/Kuasar/figures/kuasar_arch.png similarity index 100% rename from docs/en/docs/Container/figures/kuasar_arch.png rename to docs/en/Cloud/ContainerRuntime/Kuasar/figures/kuasar_arch.png diff --git a/docs/en/docs/Container/kuasar-appendix.md b/docs/en/Cloud/ContainerRuntime/Kuasar/kuasar-appendix.md similarity index 100% rename from docs/en/docs/Container/kuasar-appendix.md rename to docs/en/Cloud/ContainerRuntime/Kuasar/kuasar-appendix.md diff --git a/docs/en/docs/Container/kuasar-install-config.md b/docs/en/Cloud/ContainerRuntime/Kuasar/kuasar-install-config.md similarity index 91% rename from docs/en/docs/Container/kuasar-install-config.md rename to docs/en/Cloud/ContainerRuntime/Kuasar/kuasar-install-config.md index c4633b45e5fdaf9bccff8e788dc48d8ea00409dc..429eff57e9d3880937e1edfb5927e6cfe998cf40 100644 --- a/docs/en/docs/Container/kuasar-install-config.md +++ 
b/docs/en/Cloud/ContainerRuntime/Kuasar/kuasar-install-config.md @@ -6,8 +6,8 @@ - To obtain better performance experience, Kuasar must run on bare metal servers. **Currently, Kuasar cannot run on VMs.** - The running of Kuasar depends on the following openEuler components. Ensure that the dependent components of the required versions have been installed in the environment. - - iSulad (See [Installation and Configuration](./installation-configuration) of iSulad.) - - StratoVirt (See [Installing StratoVirt](../StratoVirt/Install_StratoVirt.md)) + - iSulad (See [Installation and Configuration](../../ContainerEngine/iSulaContainerEngine/installation-configuration.md) of iSulad.) + - StratoVirt (See [Installing StratoVirt](../../../Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-installation.md)) ### Procedure @@ -81,7 +81,7 @@ timeout: 10 ### Kuasar configuration -Modify the configuration file to connect Kuasar to StratoVirt. (You can use the default configuration. For details about the fields in the configuration file, see [附录](./kuasar-appendix.md ).) +Modify the configuration file to connect Kuasar to StratoVirt. (You can use the default configuration. For details about the fields in the configuration file, see [Appendix](./kuasar-appendix.md).) ```sh $ cat /var/lib/kuasar/config_stratovirt.toml diff --git a/docs/en/docs/Container/kuasar-usage.md b/docs/en/Cloud/ContainerRuntime/Kuasar/kuasar-usage.md similarity index 91% rename from docs/en/docs/Container/kuasar-usage.md rename to docs/en/Cloud/ContainerRuntime/Kuasar/kuasar-usage.md index 211dc468e9a5bab6c2298fe02dfddb7122276370..95804cd40b01d3f25ffac1ef09fa34d9c10ccc15 100644 --- a/docs/en/docs/Container/kuasar-usage.md +++ b/docs/en/Cloud/ContainerRuntime/Kuasar/kuasar-usage.md @@ -77,12 +77,12 @@ Start a Kuasar sandbox. 
c11df540f913e docker.io/library/busybox:latest 2 minutes ago Running busybox 0 5cbcf744949d8 ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >You can also run a `crictl run` command to start a pod with a service container. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > You can also run a `crictl run` command to start a pod with a service container. > - >```sh - >$ crictl run -r vmm --no-pull container-config.yaml podsandbox-config.yaml - >``` + > ```sh + > crictl run -r vmm --no-pull container-config.yaml podsandbox-config.yaml + > ``` 7. Stop and delete the container and the pod. diff --git a/docs/en/docs/Container/kuasar.md b/docs/en/Cloud/ContainerRuntime/Kuasar/kuasar.md similarity index 100% rename from docs/en/docs/Container/kuasar.md rename to docs/en/Cloud/ContainerRuntime/Kuasar/kuasar.md diff --git a/docs/en/docs/DPUOffload/public_sys-resources/icon-note.gif b/docs/en/Cloud/ContainerRuntime/Kuasar/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/DPUOffload/public_sys-resources/icon-note.gif rename to docs/en/Cloud/ContainerRuntime/Kuasar/public_sys-resources/icon-note.gif diff --git a/docs/en/Cloud/ContainerRuntime/Menu/index.md b/docs/en/Cloud/ContainerRuntime/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..fdb2e1998715cff2a5daeddf5274be339dace165 --- /dev/null +++ b/docs/en/Cloud/ContainerRuntime/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [Kuasar Multi-Sandbox Container Runtime]({{< relref "./Kuasar/Menu/index.md" >}}) diff --git a/docs/en/Cloud/HybridDeployment/Menu/index.md b/docs/en/Cloud/HybridDeployment/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..9fe52bbe8fa3d68b8f0dfd77c4da4a0570363674 --- /dev/null +++ b/docs/en/Cloud/HybridDeployment/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- + +- [Rubik User Guide]({{< relref "./rubik/Menu/index.md" >}}) +- [oncn-bwm User Guide]({{< relref 
"./oncn-bwm/Menu/index.md" >}}) diff --git a/docs/en/Cloud/HybridDeployment/oncn-bwm/Menu/index.md b/docs/en/Cloud/HybridDeployment/oncn-bwm/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..a7230441c2cc3ba9cb54ad8d3d018cfb0c3e7e3d --- /dev/null +++ b/docs/en/Cloud/HybridDeployment/oncn-bwm/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [oncn-bwm User Guide]({{< relref "./overview.md" >}}) diff --git a/docs/en/docs/oncn-bwm/overview.md b/docs/en/Cloud/HybridDeployment/oncn-bwm/overview.md similarity index 67% rename from docs/en/docs/oncn-bwm/overview.md rename to docs/en/Cloud/HybridDeployment/oncn-bwm/overview.md index 5068a6ed0285ae1cc217b022337a02a4eeb7a691..6b7391e50d4b4bba6c99bd932005ebb9339740bf 100644 --- a/docs/en/docs/oncn-bwm/overview.md +++ b/docs/en/Cloud/HybridDeployment/oncn-bwm/overview.md @@ -13,31 +13,19 @@ The oncn-bwm tool supports the following functions: - Setting the offline service bandwidth range and online service waterline - Querying internal statistics - - ## Installation -To install the oncn-bwm tool, the operating system must be openEuler 22.09. Run the **yum** command on the host where the openEuler Yum source is configured to install the oncn-bwm tool. - -```shell -# yum install oncn-bwm -``` - -This section describes how to install the oncn-bwm tool. - ### Environmental Requirements -* Operating system: openEuler 22.09 - -### Installation Procedure +- Operating system: openEuler-24.03-LTS with the Yum repository of openEuler-24.03-LTS -To install the oncn-bwm tool, do as follows: +### Installation Procedure -1. Configure the Yum source of openEuler and run the `yum` command to install oncn-bwm. 
+Run the following command: - ``` - yum install oncn-bwm - ``` +```shell +yum install oncn-bwm +``` ## How to Use @@ -55,16 +43,15 @@ The oncn-bwm tool provides the `bwmcli` command line tool to enable pod bandwidt > > Upgrading the oncn-bwm package does not affect the enabling status before the upgrade. Uninstalling the oncn-bwm package disables pod bandwidth management for all NICs. - ### Command Interfaces #### Pod Bandwidth Management -**Commands and Functions** +##### Commands and Functions | Command Format | Function | | --------------------------- | ------------------------------------------------------------ | -| **bwmcli –e** | Enables pod bandwidth management for a specified NIC.| +| **bwmcli -e** | Enables pod bandwidth management for a specified NIC.| | **bwmcli -d** | Disables pod bandwidth management for a specified NIC.| | **bwmcli -p devs** | Queries pod bandwidth management of all NICs on a node.| @@ -74,14 +61,12 @@ The oncn-bwm tool provides the `bwmcli` command line tool to enable pod bandwidt > > - Enable pod bandwidth management before running other `bwmcli` commands. - - -**Examples** +##### Examples - Enable pod bandwidth management for NICs eth0 and eth1. ```shell - # bwmcli –e eth0 –e eth1 + # bwmcli -e eth0 -e eth1 enable eth0 success enable eth1 success ``` @@ -89,7 +74,7 @@ The oncn-bwm tool provides the `bwmcli` command line tool to enable pod bandwidt - Disable pod bandwidth management for NICs eth0 and eth1. ```shell - # bwmcli –d eth0 –d eth1 + # bwmcli -d eth0 -d eth1 disable eth0 success disable eth1 success ``` @@ -107,18 +92,18 @@ The oncn-bwm tool provides the `bwmcli` command line tool to enable pod bandwidt #### Pod Network Priority -**Commands and Functions** +##### Commands and Functions | Command Format | Function | | ------------------------------------------------------------ | ------------------------------------------------------------ | -| **bwmcli –s** *path* | Sets the network priority of a pod. 
*path* indicates the cgroup path corresponding to the pod, and *prio* indicates the priority. The value of *path* can be a relative path or an absolute path. The default value of *prio* is **0**. The optional values are **0** and **-1**. The value **0** indicates online services, and the value **-1** indicates offline services.| -| **bwmcli –p** *path* | Queries the network priority of a pod. | +| **bwmcli -s** *path* *prio* | Sets the network priority of a pod. *path* indicates the cgroup path corresponding to the pod, and *prio* indicates the priority. The value of *path* can be a relative path or an absolute path. The default value of *prio* is **0**. The optional values are **0** and **-1**. The value **0** indicates online services, and the value **-1** indicates offline services.| +| **bwmcli -p** *path* | Queries the network priority of a pod. | > Note: > > Online and offline network priorities are supported. The oncn-bwm tool controls the bandwidth of pods in real time based on the network priority. The specific policy is as follows: For online pods, the bandwidth is not limited. For offline pods, the bandwidth is limited within the offline bandwidth range. -**Examples** +##### Examples - Set the priority of the pod whose cgroup path is **/sys/fs/cgroup/net_cls/test_online** to **0**. @@ -134,16 +119,14 @@ The oncn-bwm tool provides the `bwmcli` command line tool to enable pod bandwidt 0 ``` - - #### Offline Service Bandwidth Range | Command Format | Function | | ---------------------------------- | ------------------------------------------------------------ | -| **bwmcli –s bandwidth** | Sets the offline bandwidth for a host or VM. **low** indicates the minimum bandwidth, and **high** indicates the maximum bandwidth. The unit is KB, MB, or GB, and the value range is [1 MB, 9999 GB].| -| **bwmcli –p bandwidth** | Queries the offline bandwidth of a host or VM. | +| **bwmcli -s bandwidth** *low*,*high* | Sets the offline bandwidth for a host or VM. 
**low** indicates the minimum bandwidth, and **high** indicates the maximum bandwidth. The unit is KB, MB, or GB, and the value range is \[1 MB, 9999 GB].| +| **bwmcli -p bandwidth** | Queries the offline bandwidth of a host or VM. | -> Note: +> Note: > > - All NICs with pod bandwidth management enabled on a host are considered as a whole, that is, the configured online service waterline and offline service bandwidth range are shared. > @@ -151,9 +134,7 @@ The oncn-bwm tool provides the `bwmcli` command line tool to enable pod bandwidt > > - The offline service bandwidth range and online service waterline are used together to limit the offline service bandwidth. When the online service bandwidth is lower than the configured waterline, the offline services can use the configured maximum bandwidth. When the online service bandwidth is higher than the configured waterline, the offline services can use the configured minimum bandwidth. - - -**Examples** +##### Examples - Set the offline bandwidth to 30 Mbit/s to 100 Mbit/s. @@ -169,24 +150,21 @@ The oncn-bwm tool provides the `bwmcli` command line tool to enable pod bandwidt bandwidth is 31457280(B),104857600(B) ``` - - - #### Online Service Waterline -**Commands and Functions** +##### Commands and Functions | Command Format | Function | | ---------------------------------------------- | ------------------------------------------------------------ | -| **bwmcli –s waterline** | Sets the online service waterline for a host or VM. *val* indicates the waterline value. The unit is KB, MB, or GB, and the value range is [20 MB, 9999 GB].| -| **bwmcli –p waterline** | Queries the online service waterline of a host or VM. | +| **bwmcli -s waterline** | Sets the online service waterline for a host or VM. *val* indicates the waterline value. The unit is KB, MB, or GB, and the value range is [20 MB, 9999 GB].| +| **bwmcli -p waterline** | Queries the online service waterline of a host or VM. 
| > Note: > > - When the total bandwidth of all online services on a host is higher than the waterline, the bandwidth that can be used by offline services is limited. When the total bandwidth of all online services on a host is lower than the waterline, the bandwidth that can be used by offline services is increased. > - The system determines whether the total bandwidth of online services exceeds or is lower than the configured waterline every 10 ms. Then the system determines the bandwidth limit for offline services based on whether the online bandwidth collected within each 10 ms is higher than the waterline. -**Examples** +##### Examples - Set the online service waterline to 20 MB. @@ -202,16 +180,13 @@ The oncn-bwm tool provides the `bwmcli` command line tool to enable pod bandwidt waterline is 20971520(B) ``` - - #### Statistics -**Commands and Functions** +##### Commands and Functions | Command Format | Function | | ------------------- | ------------------ | -| **bwmcli –p stats** | Queries internal statistics.| - +| **bwmcli -p stats** | Queries internal statistics.| > Note: > @@ -225,8 +200,7 @@ The oncn-bwm tool provides the `bwmcli` command line tool to enable pod bandwidt > > - **offline_rate**: current offline service rate. - -**Examples** +##### Examples Query internal statistics. @@ -239,15 +213,11 @@ online_rate: 602 offline_rate: 0 ``` - - - - ### Typical Use Case To configure pod bandwidth management on a node, perform the following steps: -``` +```shell bwmcli -p devs #Query the pod bandwidth management status of the NICs in the system. bwmcli -e eth0 # Enable pod bandwidth management for the eth0 NIC. bwmcli -s /sys/fs/cgroup/net_cls/online 0 # Set the network priority of the online service pod to 0 @@ -255,3 +225,16 @@ bwmcli -s /sys/fs/cgroup/net_cls/offline -1 # Set the network priority of the of bwmcli -s bandwidth 20mb,1gb # Set the bandwidth range for offline services. bwmcli -s waterline 30mb # Set the waterline for online services. 
``` + +### Constraints + +1. Only the **root** user is allowed to run the bwmcli command. +2. Currently, this feature supports only two network QoS priorities: offline and online. +3. If the tc qdisc rules have been configured for a NIC, the network QoS function will fail to be enabled for the NIC. +4. After a NIC is removed and then inserted, the original QoS rules will be lost. In this case, you need to manually reconfigure the network QoS function. +5. When you run one command to enable or disable multiple NICs at the same time, if any NIC fails to be operated, operations on subsequent NICs will be stopped. +6. When SELinux is enabled in the environment, if the SELinux policy is not configured for the bwmcli program, some commands (such as setting or querying the waterline, bandwidth, and priority) may fail. You can confirm the failure in SELinux logs. To solve this problem, disable SELinux or configure the SELinux policy for the bwmcli program. +7. Upgrading the software package does not change the enabling status before the upgrade. Uninstalling the software package disables the function for all devices. +8. The NIC name can contain only digits, letters, hyphens (-), and underscores (_). NICs whose names contain other characters cannot be identified. +9. In actual scenarios, bandwidth limiting may cause protocol stack memory overstock. In this case, backpressure depends on transport-layer protocols. For protocols that do not have backpressure mechanisms, such as UDP, packet loss, ENOBUFS, and rate limiting deviation may occur. +10. After using bwmcli to enable the network QoS function of a certain network card, the tc command cannot be used to modify the tc rules of the network card. Otherwise, it may affect the network QoS function of the network card, leading to abnormal functionality. 
diff --git a/docs/en/Cloud/HybridDeployment/rubik/Menu/index.md b/docs/en/Cloud/HybridDeployment/rubik/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..bd48f549f55d2143c0381b550a139cbf825e9e83 --- /dev/null +++ b/docs/en/Cloud/HybridDeployment/rubik/Menu/index.md @@ -0,0 +1,8 @@ +--- +headless: true +--- + +- [Rubik User Guide]({{< relref "./overview.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment.md" >}}) + - [HTTP APIs]({{< relref "./http-apis.md" >}}) + - [Example of Isolation for Hybrid Deployed Services]({{< relref "./example-of-isolation-for-hybrid-deployed-services.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/rubik/example-of-isolation-for-hybrid-deployed-services.md b/docs/en/Cloud/HybridDeployment/rubik/example-of-isolation-for-hybrid-deployed-services.md similarity index 95% rename from docs/en/docs/rubik/example-of-isolation-for-hybrid-deployed-services.md rename to docs/en/Cloud/HybridDeployment/rubik/example-of-isolation-for-hybrid-deployed-services.md index 669a51b6ab25409f1bdc10dbdebd0cd88d208453..07f3618d70721d6524a1cd87001b140f7a93c28d 100644 --- a/docs/en/docs/rubik/example-of-isolation-for-hybrid-deployed-services.md +++ b/docs/en/Cloud/HybridDeployment/rubik/example-of-isolation-for-hybrid-deployed-services.md @@ -1,6 +1,6 @@ -## Example of Isolation for Hybrid Deployed Services +# Example of Isolation for Hybrid Deployed Services -### Environment Preparation +## Environment Preparation Check whether the kernel supports isolation of hybrid deployed services. @@ -39,17 +39,17 @@ Server: Experimental: false ``` -### Hybrid Deployed Services +## Hybrid Deployed Services **Online Service ClickHouse** -Use the clickhouse-benchmark tool to test the performance and collect statistics on performance metrics such as QPS, P50, P90, and P99. For details, see https://clickhouse.com/docs/en/operations/utilities/clickhouse-benchmark/. 
+Use the clickhouse-benchmark tool to test the performance and collect statistics on performance metrics such as QPS, P50, P90, and P99. For details, see <https://clickhouse.com/docs/en/operations/utilities/clickhouse-benchmark/>. **Offline Service Stress** Stress is a CPU-intensive test tool. You can specify the **--cpu** option to start multiple concurrent CPU-intensive tasks to increase the stress on the system. -### Usage Instructions +## Usage Instructions 1) Start a ClickHouse container (online service). @@ -110,9 +110,9 @@ function stress() function benchmark() { if [ $with_offline == "with_offline" ]; then - stress - sleep 3 - fi + stress + sleep 3 + fi clickhouse echo "Remove test containers." docker rm -f $online_container @@ -127,7 +127,7 @@ prepare benchmark ``` -### Test Results +## Test Results Independently execute the online service ClickHouse. diff --git a/docs/en/docs/Embedded/public_sys-resources/icon-note.gif b/docs/en/Cloud/HybridDeployment/rubik/figures/icon-note.gif similarity index 100% rename from docs/en/docs/Embedded/public_sys-resources/icon-note.gif rename to docs/en/Cloud/HybridDeployment/rubik/figures/icon-note.gif diff --git a/docs/en/docs/rubik/http-apis.md b/docs/en/Cloud/HybridDeployment/rubik/http-apis.md similarity index 100% rename from docs/en/docs/rubik/http-apis.md rename to docs/en/Cloud/HybridDeployment/rubik/http-apis.md diff --git a/docs/en/docs/rubik/installation-and-deployment.md b/docs/en/Cloud/HybridDeployment/rubik/installation-and-deployment.md similarity index 80% rename from docs/en/docs/rubik/installation-and-deployment.md rename to docs/en/Cloud/HybridDeployment/rubik/installation-and-deployment.md index cfb5f286fe86cae6aa4216b2e0876afe51c105ec..eb63300515c48e1622ca4cead4d352dc7427fe52 100644 --- a/docs/en/docs/rubik/installation-and-deployment.md +++ b/docs/en/Cloud/HybridDeployment/rubik/installation-and-deployment.md @@ -19,8 +19,8 @@ This chapter describes how to install and deploy the Rubik component. ### Environment Preparation -* Install the openEuler OS. 
For details, see the [_openEuler Installation Guide_](../Installation/Installation.md). -* Install and deploy Kubernetes. For details, see the _Kubernetes Cluster Deployment Guide_. +* Install the openEuler OS. For details, see the [_openEuler Installation Guide_](../../../Server/InstallationUpgrade/Installation/installation.md). +* Install and deploy Kubernetes. For details, see the [_Kubernetes Cluster Deployment Guide_](../../ClusterDeployment/Kubernetes/kubernetes-cluster-deployment-guide1.md). * Install the Docker or iSulad container engine. If the iSulad container engine is used, you need to install the isula-build container image building tool. ## Installing Rubik @@ -45,14 +45,13 @@ Rubik is deployed on each Kubernetes node as a DaemonSet. Therefore, you need to enabled=1 gpgcheck=0 ``` - + 2. Install Rubik with **root** permissions. ```shell sudo yum install -y rubik ``` - > ![](./figures/icon-note.gif)**Note**: > > Files related to Rubik are installed in the **/var/lib/rubik** directory. @@ -69,36 +68,36 @@ sudo echo 1 > /proc/sys/vm/memcg_qos_enable 1. Use the Docker or isula-build engine to build Rubik images. Because Rubik is deployed as a DaemonSet, each node requires a Rubik image. After building an image on a node, use the **docker save** and **docker load** commands to load the Rubik image to each node of Kubernetes. Alternatively, build a Rubik image on each node. The following uses isula-build as an example. The command is as follows: -```sh -isula-build ctr-img build -f /var/lib/rubik/Dockerfile --tag rubik:0.1.0 . -``` + ```sh + isula-build ctr-img build -f /var/lib/rubik/Dockerfile --tag rubik:0.1.0 . + ``` -1. On the Kubernetes master node, change the Rubik image name in the **/var/lib/rubik/rubik-daemonset.yaml** file to the name of the image built in the previous step. +2. On the Kubernetes master node, change the Rubik image name in the **/var/lib/rubik/rubik-daemonset.yaml** file to the name of the image built in the previous step. 
-```yaml -... -containers: -- name: rubik-agent - image: rubik:0.1.0 # The image name must be the same as the Rubik image name built in the previous step. - imagePullPolicy: IfNotPresent -... -``` + ```yaml + ... + containers: + - name: rubik-agent + image: rubik:0.1.0 # The image name must be the same as the Rubik image name built in the previous step. + imagePullPolicy: IfNotPresent + ... + ``` 3. On the Kubernetes master node, run the **kubectl** command to deploy the Rubik DaemonSet so that Rubik will be automatically deployed on all Kubernetes nodes. -```sh -kubectl apply -f /var/lib/rubik/rubik-daemonset.yaml -``` + ```sh + kubectl apply -f /var/lib/rubik/rubik-daemonset.yaml + ``` 4. Run the **kubectl get pods -A** command to check whether Rubik has been deployed on each node in the cluster. (The number of rubik-agents is the same as the number of nodes and all rubik-agents are in the Running status.) -```sh -$ kubectl get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -... -kube-system rubik-agent-76ft6 1/1 Running 0 4s -... -``` + ```sh + $ kubectl get pods -A + NAMESPACE NAME READY STATUS RESTARTS AGE + ... + kube-system rubik-agent-76ft6 1/1 Running 0 4s + ... + ``` ## Common Configuration Description @@ -123,9 +122,9 @@ This section describes common configurations in **config.json**. | Item | Value Type| Value Range | Description | | ---------- | ---------- | ------------------ | ------------------------------------------------------------ | -| autoConfig | Boolean | **true** or **false** | **true**: enables automatic pod awareness.
    **false**: disables automatic pod awareness.| -| autoCheck | Boolean | **true** or **false** | **true**: enables pod priority check.
    **false**: disables pod priority check.| -| logDriver | String | **stdio** or **file** | **stdio**: prints logs to the standard output. The scheduling platform collects and dumps logs.
    **file**: prints files to the log directory specified by **logDir**.| +| autoConfig | Boolean | **true** or **false** | **true**: enables automatic pod awareness.
    **false**: disables automatic pod awareness.| +| autoCheck | Boolean | **true** or **false** | **true**: enables pod priority check.
    **false**: disables pod priority check.| +| logDriver | String | **stdio** or **file** | **stdio**: prints logs to the standard output. The scheduling platform collects and dumps logs.
    **file**: prints files to the log directory specified by **logDir**.| | logDir | String | Absolute path | Directory for storing logs. | | logSize | Integer | \[10,1048576] | Total size of logs, in MB. If the total size of logs reaches the upper limit, the earliest logs will be discarded.| | logLevel | String | **error**, **info**, or **debug**| Log level. | @@ -172,27 +171,27 @@ spec: ## Restrictions -- The maximum number of concurrent HTTP requests that Rubik can receive is 1,000 QPS. If the number of concurrent HTTP requests exceeds the upper limit, an error is reported. +* The maximum number of concurrent HTTP requests that Rubik can receive is 1,000 QPS. If the number of concurrent HTTP requests exceeds the upper limit, an error is reported. -- The maximum number of pods in a single request received by Rubik is 100. If the number of pods exceeds the upper limit, an error is reported. +* The maximum number of pods in a single request received by Rubik is 100. If the number of pods exceeds the upper limit, an error is reported. -- Only one set of Rubik instances can be deployed on each Kubernetes node. Multiple sets of Rubik instances may conflict with each other. +* Only one set of Rubik instances can be deployed on each Kubernetes node. Multiple sets of Rubik instances may conflict with each other. -- Rubik does not provide port access and can communicate only through sockets. +* Rubik does not provide port access and can communicate only through sockets. -- Rubik accepts only valid HTTP request paths and network protocols: http://localhost/ (POST), http://localhost/ping (GET), and http://localhost/version (GET). For details about the functions of HTTP requests, see HTTP APIs(./http-apis.md). +* Rubik accepts only valid HTTP request paths and network protocols: (POST), (GET), and (GET). For details about the functions of HTTP requests, see HTTP APIs(./http-apis.md). -- Rubik drive requirement: 1 GB or more. +* Rubik drive requirement: 1 GB or more. 
-- Rubik memory requirement: 100 MB or more. +* Rubik memory requirement: 100 MB or more. -- Services cannot be switched from a low priority (offline services) to a high priority (online services). For example, if service A is set to an offline service and then to an online service, Rubik reports an error. +* Services cannot be switched from a low priority (offline services) to a high priority (online services). For example, if service A is set to an offline service and then to an online service, Rubik reports an error. -- When directories are mounted to a Rubik container, the minimum permission on the Rubik local socket directory **/run/Rubik** is **700** on the service side. +* When directories are mounted to a Rubik container, the minimum permission on the Rubik local socket directory **/run/Rubik** is **700** on the service side. -- When the Rubik service is available, the timeout interval of a single request is 120s. If the Rubik process enters the T (stopped or being traced) or D (uninterruptible sleep) state, the service becomes unavailable. In this case, the Rubik service does not respond to any request. To avoid this problem, set the timeout interval on the client to avoid infinite waiting. +* When the Rubik service is available, the timeout interval of a single request is 120s. If the Rubik process enters the T (stopped or being traced) or D (uninterruptible sleep) state, the service becomes unavailable. In this case, the Rubik service does not respond to any request. To avoid this problem, set the timeout interval on the client to avoid infinite waiting. -- If hybrid deployment is used, the original CPU share funtion of cgroup has the following restrictions: +* If hybrid deployment is used, the original CPU share funtion of cgroup has the following restrictions: If both online and offline tasks are running on the CPU, the CPU share configuration of offline tasks does not take effect. 
diff --git a/docs/en/docs/rubik/overview.md b/docs/en/Cloud/HybridDeployment/rubik/overview.md similarity index 89% rename from docs/en/docs/rubik/overview.md rename to docs/en/Cloud/HybridDeployment/rubik/overview.md index 7c9aa04a502613ea7eb83fff57430a096ee1e232..c9d5ca34d91708f2ac023d7c282aa416c1d413e8 100644 --- a/docs/en/docs/rubik/overview.md +++ b/docs/en/Cloud/HybridDeployment/rubik/overview.md @@ -13,5 +13,5 @@ Rubik supports the following features: This document is intended for community developers, open source enthusiasts, and partners who use the openEuler system and want to learn and use Rubik. Users must: -* Know basic Linux operations. -* Be familiar with basic operations of Kubernetes and Docker/iSulad. +- Know basic Linux operations. +- Be familiar with basic operations of Kubernetes and Docker/iSulad. diff --git a/docs/en/Cloud/ImageBuilder/Menu/index.md b/docs/en/Cloud/ImageBuilder/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..8d21056ea04d9458aaea21aa5b45bef90ed8a63d --- /dev/null +++ b/docs/en/Cloud/ImageBuilder/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [Container Image Building]({{< relref "./isula-build/Menu/index.md" >}}) diff --git a/docs/en/Cloud/ImageBuilder/isula-build/Menu/index.md b/docs/en/Cloud/ImageBuilder/isula-build/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..b57625c6eb77cdb3409cb9ea8ca8368e18a4609c --- /dev/null +++ b/docs/en/Cloud/ImageBuilder/isula-build/Menu/index.md @@ -0,0 +1,8 @@ +--- +headless: true +--- + +- [Container Image Building]({{< relref "./isula-build.md" >}}) + - [User Guide]({{< relref "./isula-build-user-guide.md" >}}) + - [Common Issues and Solutions]({{< relref "./isula-build-common-issues-and-solutions.md" >}}) + - [Appendix]({{< relref "./isula-build-appendix.md" >}}) \ No newline at end of file diff --git a/docs/en/Cloud/ImageBuilder/isula-build/figures/isula-build_arch.png 
b/docs/en/Cloud/ImageBuilder/isula-build/figures/isula-build_arch.png new file mode 100644 index 0000000000000000000000000000000000000000..911a9ae6f46988586ab49f15de282948f5470c37 Binary files /dev/null and b/docs/en/Cloud/ImageBuilder/isula-build/figures/isula-build_arch.png differ diff --git a/docs/en/Cloud/ImageBuilder/isula-build/isula-build-appendix.md b/docs/en/Cloud/ImageBuilder/isula-build/isula-build-appendix.md new file mode 100644 index 0000000000000000000000000000000000000000..b6cb4d8f323e6d3373b7282215a1e979b99509d9 --- /dev/null +++ b/docs/en/Cloud/ImageBuilder/isula-build/isula-build-appendix.md @@ -0,0 +1,91 @@ +# Appendix + +## Command Line Parameters + +**Table 1** Parameters of the `ctr-img build` command + +| **Command** | **Parameter** | **Description** | +| ------------- | -------------- | ------------------------------------------------------------ | +| ctr-img build | --build-arg | String list, which contains variables required during the build. | +| | --build-static | Key value, which is used to build binary equivalence. Currently, the following key values are included: - build-time: string, which indicates that a fixed timestamp is used to build a container image. The timestamp format is YYYY-MM-DD HH-MM-SS. | +| | -f, --filename | String, which indicates the path of the Dockerfiles. If this parameter is not specified, the current path is used. | +| | --format | String, which indicates the image format **oci** or **docker** (**ISULABUILD_CLI_EXPERIMENTAL** needs to be enabled). | +| | --iidfile | String, which indicates the ID of the image output to a local file. | +| | -o, --output | String, which indicates the image export mode and path.| +| | --proxy | Boolean, which inherits the proxy environment variable on the host. The default value is true. | +| | --tag | String, which indicates the tag value of the image that is successfully built. 
| +| | --cap-add | String list, which contains permissions required by the **RUN** instruction during the build process.| + +**Table 2** Parameters of the `ctr-img load` command + +| **Command** | **Parameter** | **Description** | +| ------------ | ----------- | --------------------------------- | +| ctr-img load | -i, --input | String, path of the local .tar package to be imported.| + +**Table 3** Parameters of the `ctr-img push` command + +| **Command** | **Parameter** | **Description** | +| ------------ | ----------- | --------------------------------- | +| ctr-img push | -f, --format | String, which indicates the pushed image format **oci** or **docker** (**ISULABUILD_CLI_EXPERIMENTAL** needs to be enabled).| + +**Table 4** Parameters of the `ctr-img rm` command + +| **Command** | **Parameter** | **Description** | +| ---------- | ----------- | --------------------------------------------- | +| ctr-img rm | -a, --all | Boolean, which is used to delete all local persistent images. | +| | -p, --prune | Boolean, which is used to delete all images that are stored persistently on the local host and do not have tags. | + +**Table 5** Parameters of the `ctr-img save` command + +| **Command** | **Parameter** | **Description** | +| ------------ | ------------ | ---------------------------------- | +| ctr-img save | -o, --output | String, which indicates the local path for storing the exported images.| +| ctr-img save | -f, --format | String, which indicates the exported image format **oci** or **docker** (**ISULABUILD_CLI_EXPERIMENTAL** needs to be enabled).| + +**Table 6** Parameters of the `login` command + +| **Command** | **Parameter** | **Description** | +| -------- | -------------------- | ------------------------------------------------------- | +| login | -p, --password-stdin | Boolean, which indicates whether to read the password through stdin; otherwise, the password is entered in interactive mode. 
| +| | -u, --username | String, which indicates the username for logging in to the image repository.| + +**Table 7** Parameters of the `logout` command + +| **Command** | **Parameter** | **Description** | +| -------- | --------- | ------------------------------------ | +| logout | -a, --all | Boolean, which indicates whether to log out of all logged-in image repositories. | + +**Table 8** Parameters of the `manifest annotate` command + +| **Command** | **Parameter** | **Description** | +| ----------------- | ------------- | ---------------------------- | +| manifest annotate | --arch | Set architecture | +| | --os | Set operating system | +| | --os-features | Set operating system feature | +| | --variant | Set architecture variant | + +## Communication Matrix + +The isula-build component processes communicate with each other through the Unix socket file. No port is used for communication. + +## File and Permission + +- All isula-build operations must be performed by the **root** user. To perform operations as a non-privileged user, you need to configure the `--group` option. + +- The following table lists the file permissions involved in the running of isula-build. + +| **File Path** | **File/Folder Permission** | **Description** | +| ------------------------------------------- | ------------------- | ------------------------------------------------------------ | +| /usr/bin/isula-build | 550 | Binary file of the command line tool. | +| /usr/bin/isula-builder | 550 | Binary file of the isula-builder process. | +| /usr/lib/systemd/system/isula-build.service | 640 | systemd configuration file, which is used to manage the isula-build service. | +| /usr/isula-build | 650 | Root directory of the isula-builder configuration file. | +| /etc/isula-build/configuration.toml | 600 | General isula-builder configuration file, including the settings of the isula-builder log level, persistency directory, runtime directory, and OCI runtime. 
| +| /etc/isula-build/policy.json | 600 | Syntax file of the signature verification policy file. | +| /etc/isula-build/registries.toml | 600 | Configuration file of each image repository, including the available image repository list and image repository blacklist. | +| /etc/isula-build/storage.toml | 600 | Configuration file of the local persistent storage, including the configuration of the used storage driver. | +| /etc/isula-build/isula-build.pub | 400 | Asymmetric encryption public key file. | +| /var/run/isula_build.sock | 660 | Local socket of isula-builder. | +| /var/lib/isula-build | 700 | Local persistency directory. | +| /var/run/isula-build | 700 | Local runtime directory. | +| /var/lib/isula-build/tmp/\[build_id\]/isula-build-tmp-*.tar | 644 | Local temporary directory for storing the images when they are exported to iSulad. | diff --git a/docs/en/Cloud/ImageBuilder/isula-build/isula-build-common-issues-and-solutions.md b/docs/en/Cloud/ImageBuilder/isula-build/isula-build-common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..3a325a91b6cba76e1c1811cd6f3630a6078b237a --- /dev/null +++ b/docs/en/Cloud/ImageBuilder/isula-build/isula-build-common-issues-and-solutions.md @@ -0,0 +1,9 @@ +# Common Issues and Solutions + +## Issue 1: isula-build Image Pull Error: Connection Refused + +When pulling an image, isula-build encounters the error: `pinging container registry xx: get xx: dial tcp host:repo: connect: connection refused`. + +This occurs because the image is sourced from an untrusted registry. + +To resolve this, edit the isula-build registry configuration file located at **/etc/isula-build/registries.toml**. Add the untrusted registry to the `[registries.insecure]` section and restart isula-build. 
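The fix above amounts to a small TOML edit. A minimal sketch of the relevant **/etc/isula-build/registries.toml** sections follows; the registry address `192.168.1.10:5000` is a hypothetical placeholder for the untrusted registry named in the error message:

```toml
# /etc/isula-build/registries.toml (sketch)

# Registries searched when an image name carries no registry prefix.
[registries.search]
registries = ["docker.io"]

# Registries contacted over plain HTTP / without TLS verification.
# Add the untrusted registry from the error message here.
[registries.insecure]
registries = ["192.168.1.10:5000"]
```

Restart the service afterwards, for example with `sudo systemctl restart isula-build`, so that the new registry list takes effect.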
diff --git a/docs/en/docs/Container/isula-build.md b/docs/en/Cloud/ImageBuilder/isula-build/isula-build-user-guide.md similarity index 79% rename from docs/en/docs/Container/isula-build.md rename to docs/en/Cloud/ImageBuilder/isula-build/isula-build-user-guide.md index 011d7acf83879834de58d96fc4ee5f1c97193480..7175dff125746c70a9aafb0dec7da918f2e8384f 100644 --- a/docs/en/docs/Container/isula-build.md +++ b/docs/en/Cloud/ImageBuilder/isula-build/isula-build-user-guide.md @@ -1,20 +1,6 @@ -# Container Image Building +# Installation -## Overview - -isula-build is a container image build tool developed by the iSula container team. It allows you to quickly build container images using Dockerfiles. - -The isula-build uses the server/client mode. The isula-build functions as a client and provides a group of command line tools for image build and management. The isula-builder functions as the server to process client management requests, and runs as a daemon process in the background. - -![isula-build architecture](./figures/isula-build_arch.png) - ->![](./public_sys-resources/icon-note.gif) **Note:** -> -> Currently, isula-build supports OCI image format ([OCI Image Format Specification](https://github.com/opencontainers/image-spec/blob/main/spec.md/)) and Docker image format ([Image Manifest Version 2, Schema 2](https://docs.docker.com/registry/spec/manifest-v2-2/)). Use the `export ISULABUILD_CLI_EXPERIMENTAL=enabled` command to enable the experimental feature for supporting OCI image format. When the experimental feature is disabled, isula-build will take Docker image format as the default image format. Otherwise, isula-build will take OCI image format as the default image format. 
- -## Installation - -### Preparations +## Preparations To ensure that isula-build can be successfully installed, the following software and hardware requirements must be met: @@ -22,11 +8,11 @@ To ensure that isula-build can be successfully installed, the following software - Supported OS: openEuler - You have the permissions of the root user. -#### Installing isula-build +### Installing isula-build Before using isula-build to build a container image, you need to install the following software packages: -##### (Recommended) Method 1: Using Yum +#### (Recommended) Method 1: Using Yum 1. Configure the openEuler Yum source. @@ -36,7 +22,7 @@ Before using isula-build to build a container image, you need to install the fol sudo yum install -y isula-build ``` -##### Method 2: Using the RPM Package +#### Method 2: Using the RPM Package 1. Obtain an **isula-build-*.rpm** installation package from the openEuler Yum source, for example, **isula-build-0.9.6-4.oe1.x86_64.rpm**. @@ -50,13 +36,13 @@ Before using isula-build to build a container image, you need to install the fol >![](./public_sys-resources/icon-note.gif) **Note:** > -> After the installation is complete, you need to manually start the isula-build service. For details about how to start the service, see [Managing the isula-build Service](#managing-the-isula-build-service). +> After the installation is complete, you need to manually start the isula-build service. For details about how to start the service, see [Managing the isula-build Service](isula-build-user-guide.md#managing-the-isula-build-service). -## Configuring and Managing the isula-build Service +# Configuring and Managing the isula-build Service -### Configuring the isula-build Service +## Configuring the isula-build Service -After the isula-build software package is installed, the systemd starts the isula-build service based on the default configuration contained in the isula-build software package on the isula-build server. 
If the default configuration file on the isula-build server cannot meet your requirements, perform the following operations to customize the configuration file: After the default configuration is modified, restart the isula-build server for the new configuration to take effect. For details, see [Managing the isula-build Service](#managing-the-isula-build-service). +After the isula-build software package is installed, systemd starts the isula-build service based on the default configuration contained in the isula-build software package on the isula-build server. If the default configuration file on the isula-build server cannot meet your requirements, perform the following operations to customize the configuration file. After the default configuration is modified, restart the isula-build server for the new configuration to take effect. For details, see [Managing the isula-build Service](isula-build-user-guide.md#managing-the-isula-build-service). Currently, the isula-build server contains the following configuration file: @@ -98,7 +84,7 @@ Currently, the isula-build server contains the following configuration file: > - Currently, only overlay2 can be used as the underlying storage driver. > - Before setting the `--group` option, ensure that the corresponding user group has been created on a local OS and non-privileged users have been added to the group. After isula-builder is restarted, non-privileged users in the group can use the isula-build function. In addition, to ensure permission consistency, the owner group of the isula-build configuration file directory **/etc/isula-build** is set to the group specified by `--group`. -### Managing the isula-build Service +## Managing the isula-build Service Currently, openEuler uses systemd to manage the isula-build service. The isula-build software package contains the systemd service files. After installing the isula-build software package, you can use the systemd tool to start or stop the isula-build service. 
You can also manually start the isula-builder software. Note that only one isula-builder process can be started on a node at a time. @@ -106,7 +92,7 @@ Currently, openEuler uses systemd to manage the isula-build service. The isula-b > > Only one isula-builder process can be started on a node at a time. -#### (Recommended) Using systemd for Management +### (Recommended) Using systemd for Management You can run the following systemd commands to start, stop, and restart the isula-build service: @@ -134,7 +120,7 @@ The systemd service file of the isula-build software installation package is sto sudo systemctl daemon-reload ``` -#### Directly Running isula-builder +### Directly Running isula-builder You can also run the `isula-builder` command on the server to start the service. The `isula-builder` command can contain flags for service startup. The following flags are supported: @@ -157,9 +143,9 @@ Start the isula-build service. For example, to specify the local persistency dir sudo isula-builder --dataroot "/var/lib/isula-build" --debug=false ``` -## Usage Guidelines +# Usage Guidelines -### Prerequisites +## Prerequisites isula-build depends on the executable file **runc** to build the **RUN** instruction in the Dockerfile. Therefore, runc must be pre-installed in the running environment of isula-build. The installation method depends on the application scenario. If you do not need to use the complete docker-engine tool chain, you can install only the docker-runc RPM package. @@ -177,7 +163,7 @@ sudo yum install -y docker-engine > > Ensure the security of OCI runtime (runc) executable files to prevent malicious replacement. -### Overview +## Overview The isula-build client provides a series of commands for building and managing container images. 
Currently, the isula-build client provides the following commands: @@ -204,7 +190,7 @@ The isula-build client provides a series of commands for building and managing c The following describes how to use these commands in detail. -### ctr-img: Container Image Management +## ctr-img: Container Image Management The isula-build command groups all container image management commands into the `ctr-img` command. The command format is as follows: @@ -212,7 +198,7 @@ The isula-build command groups all container image management commands into the isula-build ctr-img [command] ``` -#### build: Container Image Build +### build: Container Image Build The subcommand build of the `ctr-img` command is used to build container images. The command format is as follows: @@ -235,7 +221,7 @@ The `build` command contains the following flags: **The following describes the flags in detail.** -##### \--build-arg +#### \--build-arg Parameters in the Dockerfile are inherited from the commands. The usage is as follows: @@ -266,7 +252,7 @@ Storing signatures Build success with image id: 39b62a3342eed40b41a1bcd9cd455d77466550dfa0f0109af7a708c3e895f9a2 ``` -##### \--build-static +#### \--build-static Specifies a static build. That is, when isula-build is used to build a container image, differences between all timestamps and other build factors (such as the container ID and hostname) are eliminated. Finally, a container image that meets the static requirements is built. @@ -292,7 +278,7 @@ For container image build, isula-build supports the same Dockerfile. If the buil In this way, the container images and image IDs built in the same environment for multiple times are the same. -##### \--format +#### \--format This option can be used when the experiment feature is enabled. The default image format is **oci**. You can specify the image format to build. For example, the following commands are used to build an OCI image and a Docker image, respectively. 
@@ -304,7 +290,7 @@ This option can be used when the experiment feature is enabled. The default imag export ISULABUILD_CLI_EXPERIMENTAL=enabled; sudo isula-build ctr-img build -f Dockerfile --format docker . ``` -##### \--iidfile +#### \--iidfile Run the following command to output the ID of the built image to a file: @@ -325,7 +311,7 @@ $ cat testfile 76cbeed38a8e716e22b68988a76410eaf83327963c3b29ff648296d5cd15ce7b ``` -##### \-o, --output +#### \-o, --output Currently, `-o` and `--output` support the following formats: @@ -347,11 +333,11 @@ When experiment feature is enabled, you can build image in OCI image format with - `oci://registry.example.com/repository:tag`: directly pushes the successfully built image to the remote image repository in OCI image format(OCI image format should be supported by the remote repository), for example, `-o oci://localhost:5000/library/busybox:latest`. -- `oci-archive:/:image:tag`:saves the successfully built image to the local host in OCI image format, for example, `-o oci-archive:/root/image.tar:busybox:latest`。 +- `oci-archive:/:image:tag`: saves the successfully built image to the local host in OCI image format, for example, `-o oci-archive:/root/image.tar:busybox:latest`. In addition to the flags, the `build` subcommand also supports an argument whose type is string and meaning is context, that is, the context of the Dockerfile build environment. The default value of this parameter is the current path where isula-build is executed. This path affects the path retrieved by the **ADD** and **COPY** instructions of the .dockerignore file and Dockerfile. -##### \--proxy +#### \--proxy Specifies whether the container started by the **RUN** instruction inherits the proxy-related environment variables **http_proxy**, **https_proxy**, **ftp_proxy**, **no_proxy**, **HTTP_PROXY**, **HTTPS_PROXY**, and **FTP_PROXY**. The default value is **true**. 
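The proxy inheritance described above can be sketched in shell. The proxy endpoints below are hypothetical placeholders, and the build command is shown as a comment because it requires a running isula-builder daemon:

```shell
# Hypothetical proxy settings; with the default --proxy=true, isula-build
# forwards these variables into the container started by each RUN instruction.
export http_proxy="http://proxy.example.com:8080"
export no_proxy="localhost,127.0.0.1"
echo "http_proxy=${http_proxy} no_proxy=${no_proxy}"

# To build without inheriting the proxy variables (sketch):
# sudo isula-build ctr-img build --proxy=false -f Dockerfile .
```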
@@ -361,11 +347,11 @@ When a user configures proxy-related **ARG** or **ENV** in the Dockerfile, the i > > If the client and daemon are running on different terminals, the environment variables of the terminal where the daemon is running are inherited. -##### \--tag +#### \--tag Specifies the tag of the image stored on the local disk after the image is successfully built. -##### \--cap-add +#### \--cap-add Run the following command to add the permission required by the **RUN** instruction during the build process: @@ -397,7 +383,7 @@ sudo isula-build ctr-img build --cap-add CAP_SYS_ADMIN --cap-add CAP_SYS_PTRACE > - Currently, isula-build does not support a remote URL as the data source of the **ADD** instruction in the Dockerfile. > - The local tar package exported using the **docker-archive** and **oci-archive** types are not compressed, you can manually compress the file as required. -#### image: Viewing Local Persistent Build Images +### image: Viewing Local Persistent Build Images You can run the `images` command to view the images in the local persistent storage. @@ -415,7 +401,7 @@ localhost:5000/library/alpine latest a24bb4013296 2022-01 > > The image size displayed by running the `isula-build ctr-img images` command may be different from that displayed by running the `docker images` command. When calculating the image size, `isula-build` directly calculates the total size of .tar packages at each layer, while `docker` calculates the total size of files by decompressing the .tar packages and traversing the diff directory. Therefore, the statistics are different. -#### import: Importing a Basic Container Image +### import: Importing a Basic Container Image A tar file in rootfs form can be imported into isula-build via the `ctr-img import` command. @@ -445,9 +431,9 @@ mybusybox latest 173b3cf612f8 2022-01 >![](./public_sys-resources/icon-note.gif) **Note** > -> isula-build supports the import of container basic images with a maximum size of 1 GB. 
+> isula-build supports the import of container basic images with a maximum size of 1 GB. -#### load: Importing Cascade Images +### load: Importing Cascade Images Cascade images are images that are saved to the local computer by running the `docker save` or `isula-build ctr-img save` command. The compressed image package contains a layer-by-layer image package named **layer.tar**. You can run the `ctr-img load` command to import the image to isula-build. @@ -491,7 +477,7 @@ Loaded image as c07ddb44daa97e9e8d2d68316b296cc9343ab5f3d2babc5e6e03b80cd580478e > - isula-build allows you to import a container image with a maximum size of 50 GB. > - isula-build automatically recognizes the image format and loads it from the cascade image file. -#### rm: Deleting a Local Persistent Image +### rm: Deleting a Local Persistent Image You can run the `rm` command to delete an image from the local persistent storage. The command format is as follows: @@ -512,7 +498,7 @@ Deleted: sha256:78731c1dde25361f539555edaf8f0b24132085b7cab6ecb90de63d72fa00c01d Deleted: sha256:eeba1bfe9fca569a894d525ed291bdaef389d28a88c288914c1a9db7261ad12c ``` -#### save: Exporting Cascade Images +### save: Exporting Cascade Images You can run the `save` command to export the cascade images to the local disk. The command format is as follows: @@ -572,12 +558,12 @@ Storing signatures Save success with image: [busybox:latest nginx:latest] ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- Save exports an image in .tar format by default. If necessary, you can save the image and then manually compress it. ->- When exporting an image using image name, specify the entire image name in the *REPOSITORY:TAG* format. +> - Save exports an image in .tar format by default. If necessary, you can save the image and then manually compress it. +> - When exporting an image using image name, specify the entire image name in the *REPOSITORY:TAG* format. 
-#### tag: Tagging Local Persistent Images +### tag: Tagging Local Persistent Images You can run the `tag` command to add a tag to a local persistent container image. The command format is as follows: @@ -604,7 +590,7 @@ alpine v1 a24bb4013296 2020-05 --------------------------------------- ----------- ----------------- ------------------------ ------------ ``` -#### pull: Pulling an Image To a Local Host +### pull: Pulling an Image To a Local Host Run the `pull` command to pull an image from a remote image repository to a local host. Command format: @@ -624,7 +610,7 @@ Storing signatures Pull success with image: example-registry/library/alpine:latest ``` -#### push: Pushing a Local Image to a Remote Repository +### push: Pushing a Local Image to a Remote Repository Run the `push` command to push a local image to a remote repository. Command format: @@ -648,11 +634,11 @@ Storing signatures Push success with image: example-registry/library/mybusybox:latest ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- Before pushing an image, log in to the corresponding image repository. +> - Before pushing an image, log in to the corresponding image repository. -### info: Viewing the Operating Environment and System Information +## info: Viewing the Operating Environment and System Information You can run the `isula-build info` command to view the running environment and system information of isula-build. The command format is as follows: @@ -697,7 +683,7 @@ $ sudo isula-build info -H MemHeapReleased: 52.1 MB ``` -### login: Logging In to the Remote Image Repository +## login: Logging In to the Remote Image Repository You can run the `login` command to log in to the remote image repository. The command format is as follows: @@ -728,7 +714,7 @@ Enter the password in interactive mode. 
Login Succeeded ``` -### logout: Logging Out of the Remote Image Repository +## logout: Logging Out of the Remote Image Repository You can run the `logout` command to log out of the remote image repository. The command format is as follows: @@ -750,7 +736,7 @@ Example: Removed authentications ``` -### version: Querying the isula-build Version +## version: Querying the isula-build Version You can run the `version` command to view the current version information. @@ -771,15 +757,15 @@ Server: OS/Arch: linux/amd64 ``` -### manifest: Manifest List Management +## manifest: Manifest List Management The manifest list contains the image information corresponding to different system architectures. You can use the same manifest (for example, **openeuler:latest**) in different architectures to obtain the image of the corresponding architecture. The manifest contains the create, annotate, inspect, and push subcommands. >![](./public_sys-resources/icon-note.gif) **NOTE:** > -> manifest is an experiment feature. When using this feature, you need to enable the experiment options on the client and server. For details, see Client Overview and Configuring Services. +> manifest is an experiment feature. When using this feature, you need to enable the experiment options on the client and server. For details, see Client Overview and Configuring Services. -#### create: Manifest List Creation +### create: Manifest List Creation The create subcommand of the `manifest` command is used to create a manifest list. The command format is as follows: @@ -795,7 +781,7 @@ Example: sudo isula-build manifest create openeuler localhost:5000/openeuler_x86:latest localhost:5000/openeuler_aarch64:latest ``` -#### annotate: Manifest List Update +### annotate: Manifest List Update The `annotate` subcommand of the `manifest` command is used to update the manifest list. 
The command format is as follows: @@ -818,7 +804,7 @@ Example: sudo isula-build manifest annotate --os linux --arch arm64 openeuler:latest localhost:5000/openeuler_aarch64:latest ``` -#### inspect: Manifest List Inspect +### inspect: Manifest List Inspect The `inspect` subcommand of the `manifest` command is used to query the manifest list. The command format is as follows: @@ -856,7 +842,7 @@ $ sudo isula-build manifest inspect openeuler:latest } ``` -#### push: Manifest List Push to the Remote Repository +### push: Manifest List Push to the Remote Repository The manifest subcommand `push` is used to push the manifest list to the remote repository. The command format is as follows: @@ -870,11 +856,11 @@ Example: sudo isula-build manifest push openeuler:latest localhost:5000/openeuler:latest ``` -## Directly Integrating a Container Engine +# Directly Integrating a Container Engine isula-build can be integrated with iSulad or Docker to import the built container image to the local storage of the container engine. -### Integration with iSulad +## Integration with iSulad Images that are successfully built can be directly exported to the iSulad. @@ -898,7 +884,7 @@ busybox 2.0 2d414a5cad6d 2020-08-01 06:41: > - It is required that isula-build and iSulad be on the same node. > - When an image is directly exported to the iSulad, the isula-build client needs to temporarily store the successfully built image as `/var/lib/isula-build/tmp/[build_id]/isula-build-tmp-%v.tar` and then import it to the iSulad. Ensure that the /var/tmp/ directory has sufficient disk space. If the isula-build client process is killed or Ctrl+C is pressed during the export, you need to manually clear the `/var/lib/isula-build/tmp/[build_id]/isula-build-tmp-%v.tar` file. -### Integration with Docker +## Integration with Docker Images that are successfully built can be directly exported to the Docker daemon. 
@@ -920,11 +906,11 @@ busybox 2.0 2d414a5c > > The isula-build and Docker must be on the same node. -## Precautions +# Precautions This chapter describes constraints, limitations, and differences from `docker build` when using isula-builder to build images. -### Constraints or Limitations +## Constraints or Limitations 1. When exporting an image to [iSulad](https://gitee.com/openeuler/iSulad/blob/master/README.md/), a tag is necessary. 2. Because the OCI runtime, for example, **runc**, will be called by isula-builder when executing the **RUN** instruction, the integrity of the runtime binary should be guaranteed by the user. @@ -936,7 +922,7 @@ This chapter is something about constraints, limitations and differences with `d 8. When exporting an image to a tar package, only the tar compression format is currently supported by isula-builder. 9. The base image size is limited to 1 GB when importing a base image using `import`. -### Differences with "docker build" +## Differences with "docker build" `isula-build` complies with [Dockerfile specification](https://docs.docker.com/engine/reference/builder), but there are also some subtle differences between `isula-builder` and `docker build` as follows: @@ -950,95 +936,3 @@ This chapter is something about constraints, limitations and differences with `d 8. Resource restriction on a single build is not supported. If resource restriction is required, you can configure a resource limit on isula-builder. 9. `isula-builder` adds the size of each original layer tar to get the image size, but docker only uses the diff content of each layer. So the image size listed by `isula-builder images` is different. 10. Image name should be in the *NAME:TAG* format. For example **busybox:latest**, where **latest** must not be omitted. 
- -## Appendix - -### Command Line Parameters - -**Table 1** Parameters of the `ctr-img build` command - -| **Command** | **Parameter** | **Description** | -| ------------- | -------------- | ------------------------------------------------------------ | -| ctr-img build | --build-arg | String list, which contains variables required during the build. | -| | --build-static | Key value, which is used to build binary equivalence. Currently, the following key values are included: - build-time: string, which indicates that a fixed timestamp is used to build a container image. The timestamp format is YYYY-MM-DD HH-MM-SS. | -| | -f, --filename | String, which indicates the path of the Dockerfiles. If this parameter is not specified, the current path is used. | -| | --format | String, which indicates the image format **oci** or **docker** (**ISULABUILD_CLI_EXPERIMENTAL** needs to be enabled). | -| | --iidfile | String, which indicates the ID of the image output to a local file. | -| | -o, --output | String, which indicates the image export mode and path.| -| | --proxy | Boolean, which inherits the proxy environment variable on the host. The default value is true. | -| | --tag | String, which indicates the tag value of the image that is successfully built. 
| -| | --cap-add | String list, which contains permissions required by the **RUN** instruction during the build process.| - -**Table 2** Parameters of the `ctr-img load` command - -| **Command** | **Parameter** | **Description** | -| ------------ | ----------- | --------------------------------- | -| ctr-img load | -i, --input | String, path of the local .tar package to be imported.| - -**Table 3** Parameters of the `ctr-img push` command - -| **Command** | **Parameter** | **Description** | -| ------------ | ----------- | --------------------------------- | -| ctr-img push | -f, --format | String, which indicates the pushed image format **oci** or **docker** (**ISULABUILD_CLI_EXPERIMENTAL** needs to be enabled).| - -**Table 4** Parameters of the `ctr-img rm` command - -| **Command** | **Parameter** | **Description** | -| ---------- | ----------- | --------------------------------------------- | -| ctr-img rm | -a, --all | Boolean, which is used to delete all local persistent images. | -| | -p, --prune | Boolean, which is used to delete all images that are stored persistently on the local host and do not have tags. | - -**Table 5** Parameters of the `ctr-img save` command - -| **Command** | **Parameter** | **Description** | -| ------------ | ------------ | ---------------------------------- | -| ctr-img save | -o, --output | String, which indicates the local path for storing the exported images.| -| ctr-img save | -f, --format | String, which indicates the exported image format **oci** or **docker** (**ISULABUILD_CLI_EXPERIMENTAL** needs to be enabled).| - -**Table 6** Parameters of the `login` command - -| **Command** | **Parameter** | **Description** | -| -------- | -------------------- | ------------------------------------------------------- | -| login | -p, --password-stdin | Boolean, which indicates whether to read the password through stdin. or enter the password in interactive mode. 
| -| | -u, --username | String, which indicates the username for logging in to the image repository.| - -**Table 7** Parameters of the `logout` command - -| **Command** | **Parameter** | **Description** | -| -------- | --------- | ------------------------------------ | -| logout | -a, --all | Boolean, which indicates whether to log out of all logged-in image repositories. | - -**Table 8** Parameters of the `manifest annotate` command - -| **Command** | **Parameter** | **Description** | -| ----------------- | ------------- | ---------------------------- | -| manifest annotate | --arch | Set architecture | -| | --os | Set operating system | -| | --os-features | Set operating system feature | -| | --variant | Set architecture variant | - -### Communication Matrix - -The isula-build component processes communicate with each other through the Unix socket file. No port is used for communication. - -### File and Permission - -- All isula-build operations must be performed by the **root** user. To perform operations as a non-privileged user, you need to configure the `--group` option. - -- The following table lists the file permissions involved in the running of isula-build. - -| **File Path** | **File/Folder Permission** | **Description** | -| ------------------------------------------- | ------------------- | ------------------------------------------------------------ | -| /usr/bin/isula-build | 550 | Binary file of the command line tool. | -| /usr/bin/isula-builder | 550 | Binary file of the isula-builder process. | -| /usr/lib/systemd/system/isula-build.service | 640 | systemd configuration file, which is used to manage the isula-build service. | -| /usr/isula-build | 650 | Root directory of the isula-builder configuration file. | -| /etc/isula-build/configuration.toml | 600 | General isula-builder configuration file, including the settings of the isula-builder log level, persistency directory, runtime directory, and OCI runtime. 
| -| /etc/isula-build/policy.json | 600 | Syntax file of the signature verification policy file. | -| /etc/isula-build/registries.toml | 600 | Configuration file of each image repository, including the available image repository list and image repository blacklist. | -| /etc/isula-build/storage.toml | 600 | Configuration file of the local persistent storage, including the configuration of the used storage driver. | -| /etc/isula-build/isula-build.pub | 400 | Asymmetric encryption public key file. | -| /var/run/isula_build.sock | 660 | Local socket of isula-builder. | -| /var/lib/isula-build | 700 | Local persistency directory. | -| /var/run/isula-build | 700 | Local runtime directory. | -| /var/lib/isula-build/tmp/\[build_id\]/isula-build-tmp-*.tar | 644 | Local temporary directory for storing the images when they are exported to iSulad. | diff --git a/docs/en/Cloud/ImageBuilder/isula-build/isula-build.md b/docs/en/Cloud/ImageBuilder/isula-build/isula-build.md new file mode 100644 index 0000000000000000000000000000000000000000..a26ac516a723e384b765671803b5aafcb0a6be6a --- /dev/null +++ b/docs/en/Cloud/ImageBuilder/isula-build/isula-build.md @@ -0,0 +1,13 @@ +# Container Image Building + +## Overview + +isula-build is a container image build tool developed by the iSula container team. It allows you to quickly build container images using Dockerfiles. + +The isula-build uses the server/client mode. The isula-build functions as a client and provides a group of command line tools for image build and management. The isula-builder functions as the server to process client management requests, and runs as a daemon process in the background. 
+ +![isula-build architecture](./figures/isula-build_arch.png) + +>![](./public_sys-resources/icon-note.gif) **Note:** +> +> Currently, isula-build supports OCI image format ([OCI Image Format Specification](https://github.com/opencontainers/image-spec/blob/main/spec.md/)) and Docker image format ([Image Manifest Version 2, Schema 2](https://docs.docker.com/registry/spec/manifest-v2-2/)). Use the `export ISULABUILD_CLI_EXPERIMENTAL=enabled` command to enable the experimental feature for supporting OCI image format. When the experimental feature is disabled, isula-build will take Docker image format as the default image format. Otherwise, isula-build will take OCI image format as the default image format. diff --git a/docs/en/docs/Installation/public_sys-resources/icon-note.gif b/docs/en/Cloud/ImageBuilder/isula-build/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/Installation/public_sys-resources/icon-note.gif rename to docs/en/Cloud/ImageBuilder/isula-build/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/Kmesh/kmesh.md b/docs/en/Cloud/Kmesh/Kmesh/Kmesh.md similarity index 100% rename from docs/en/docs/Kmesh/kmesh.md rename to docs/en/Cloud/Kmesh/Kmesh/Kmesh.md diff --git a/docs/en/Cloud/Kmesh/Kmesh/Menu/index.md b/docs/en/Cloud/Kmesh/Kmesh/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..3e0bfa3a7858aa8d75c98afb11334de4f61e4eed --- /dev/null +++ b/docs/en/Cloud/Kmesh/Kmesh/Menu/index.md @@ -0,0 +1,10 @@ +--- +headless: true +--- + +- [Kmesh User Guide]({{< relref "./Kmesh.md" >}}) + - [Introduction to Kmesh]({{< relref "./introduction-to-kmesh.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment.md" >}}) + - [Usage]({{< relref "./usage.md" >}}) + - [Common Issues and Solutions]({{< relref "./common-issues-and-solutions.md" >}}) + - [Appendix]({{< relref "./appendix.md" >}}) diff --git a/docs/en/docs/Kmesh/appendix.md 
b/docs/en/Cloud/Kmesh/Kmesh/appendix.md similarity index 100% rename from docs/en/docs/Kmesh/appendix.md rename to docs/en/Cloud/Kmesh/Kmesh/appendix.md diff --git a/docs/en/Cloud/Kmesh/Kmesh/common-issues-and-solutions.md b/docs/en/Cloud/Kmesh/Kmesh/common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..834d2c4257a933ff8171d62a6694af0ca06ec49c --- /dev/null +++ b/docs/en/Cloud/Kmesh/Kmesh/common-issues-and-solutions.md @@ -0,0 +1,23 @@ +# Common Issues and Solutions + +## Issue 1: Kmesh Service Exits with an Error When Started in Cluster Mode without Control Plane IP Address Configuration + +![](./figures/not_set_cluster_ip.png) + +Cause: When operating in cluster mode, Kmesh requires communication with the control plane to fetch configuration details. Without the correct control plane IP address, the service cannot proceed and exits with an error. + +Solution: Follow the cluster mode setup instructions in the [Installation and Deployment](./installation-and-deployment.md) guide to properly configure the control plane IP address. + +## Issue 2: Kmesh Service Displays "Get Kube Config Error!" during Startup + +![](./figures/get_kubeconfig_error.png) + +Cause: In cluster mode, Kmesh attempts to retrieve the control plane IP address from the k8s configuration. If the kubeconfig file path is not set in the environment, the service cannot access the kubeconfig and throws this error. (Note: This issue does not occur if the control plane IP address is manually specified in the Kmesh configuration file.) 
+ +Solution: Set up kubeconfig using the following commands: + +```shell +mkdir -p $HOME/.kube +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` diff --git a/docs/en/docs/Kmesh/figures/get_kubeconfig_error.png b/docs/en/Cloud/Kmesh/Kmesh/figures/get_kubeconfig_error.png similarity index 100% rename from docs/en/docs/Kmesh/figures/get_kubeconfig_error.png rename to docs/en/Cloud/Kmesh/Kmesh/figures/get_kubeconfig_error.png diff --git a/docs/en/docs/Kmesh/figures/kmesh-arch.png b/docs/en/Cloud/Kmesh/Kmesh/figures/kmesh-arch.png similarity index 100% rename from docs/en/docs/Kmesh/figures/kmesh-arch.png rename to docs/en/Cloud/Kmesh/Kmesh/figures/kmesh-arch.png diff --git a/docs/en/docs/Kmesh/figures/not_set_cluster_ip.png b/docs/en/Cloud/Kmesh/Kmesh/figures/not_set_cluster_ip.png similarity index 100% rename from docs/en/docs/Kmesh/figures/not_set_cluster_ip.png rename to docs/en/Cloud/Kmesh/Kmesh/figures/not_set_cluster_ip.png diff --git a/docs/en/docs/Kmesh/installation-and-deployment.md b/docs/en/Cloud/Kmesh/Kmesh/installation-and-deployment.md similarity index 96% rename from docs/en/docs/Kmesh/installation-and-deployment.md rename to docs/en/Cloud/Kmesh/Kmesh/installation-and-deployment.md index b331932cae8010b38639b3ed6f057f2d5bfd5fae..d6759030bbde33aa43e3b1d76277a7e7da0ffa4a 100644 --- a/docs/en/docs/Kmesh/installation-and-deployment.md +++ b/docs/en/Cloud/Kmesh/Kmesh/installation-and-deployment.md @@ -10,7 +10,7 @@ ## Preparing the Environment -* Install the openEuler OS by referring to the [*openEuler Installation Guide*](../Installation/Installation.md). +* Install the openEuler OS by referring to the [*openEuler Installation Guide*](../../../Server/InstallationUpgrade/Installation/installation.md). * Root permissions are required for installing Kmesh. 
diff --git a/docs/en/docs/Kmesh/introduction-to-kmesh.md b/docs/en/Cloud/Kmesh/Kmesh/introduction-to-kmesh.md similarity index 100% rename from docs/en/docs/Kmesh/introduction-to-kmesh.md rename to docs/en/Cloud/Kmesh/Kmesh/introduction-to-kmesh.md diff --git a/docs/en/docs/Kmesh/usage.md b/docs/en/Cloud/Kmesh/Kmesh/usage.md similarity index 100% rename from docs/en/docs/Kmesh/usage.md rename to docs/en/Cloud/Kmesh/Kmesh/usage.md diff --git a/docs/en/Cloud/Kmesh/Menu/index.md b/docs/en/Cloud/Kmesh/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..7603d01981f063bcd41eac2f38794b75206da086 --- /dev/null +++ b/docs/en/Cloud/Kmesh/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [Kmesh User Guide]({{< relref "./Kmesh/Menu/index.md" >}}) diff --git a/docs/en/Cloud/KubeOS/KubeOS/Menu/index.md b/docs/en/Cloud/KubeOS/KubeOS/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..2a07e4768650a7a6be4a9d3c05f7df0a98a0d0c8 --- /dev/null +++ b/docs/en/Cloud/KubeOS/KubeOS/Menu/index.md @@ -0,0 +1,9 @@ +--- +headless: true +--- + +- [KubeOS User Guide]({{< relref "./kubeos-user-guide.md" >}}) + - [About KubeOS]({{< relref "./about-kubeos.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment.md" >}}) + - [Usage Instructions]({{< relref "./usage-instructions.md" >}}) + - [KubeOS Image Creation]({{< relref "./kubeos-image-creation.md" >}}) diff --git a/docs/en/docs/KubeOS/about-kubeos.md b/docs/en/Cloud/KubeOS/KubeOS/about-kubeos.md similarity index 100% rename from docs/en/docs/KubeOS/about-kubeos.md rename to docs/en/Cloud/KubeOS/KubeOS/about-kubeos.md diff --git a/docs/en/docs/KubeOS/figures/file-system-layout-of-a-container-os.png b/docs/en/Cloud/KubeOS/KubeOS/figures/file-system-layout-of-a-container-os.png similarity index 100% rename from docs/en/docs/KubeOS/figures/file-system-layout-of-a-container-os.png rename to 
docs/en/Cloud/KubeOS/KubeOS/figures/file-system-layout-of-a-container-os.png diff --git a/docs/en/docs/KubeOS/figures/kubeos-architecture.png b/docs/en/Cloud/KubeOS/KubeOS/figures/kubeos-architecture.png similarity index 100% rename from docs/en/docs/KubeOS/figures/kubeos-architecture.png rename to docs/en/Cloud/KubeOS/KubeOS/figures/kubeos-architecture.png diff --git a/docs/en/docs/KubeOS/installation-and-deployment.md b/docs/en/Cloud/KubeOS/KubeOS/installation-and-deployment.md similarity index 60% rename from docs/en/docs/KubeOS/installation-and-deployment.md rename to docs/en/Cloud/KubeOS/KubeOS/installation-and-deployment.md index b609e26dcf3e7961c630d962f49b8c7a9e678231..d18d50771b4ce36f2aaa59b2a81d879664f17a0e 100644 --- a/docs/en/docs/KubeOS/installation-and-deployment.md +++ b/docs/en/Cloud/KubeOS/KubeOS/installation-and-deployment.md @@ -25,7 +25,7 @@ This chapter describes how to install and deploy the KubeOS tool. ### Environment Preparation -- Install the openEuler system. For details, see the [*openEuler Installation Guide*](../Installation/Installation.md). +- Install the openEuler system. For details, see the [*openEuler Installation Guide*](../../../Server/InstallationUpgrade/Installation/installation.md). - Install qemu-img, bc, Parted, tar, Yum, Docker, and dosfstools. ## KubeOS Installation @@ -34,20 +34,20 @@ To install KubeOS, perform the following steps: 1. 
Configure the Yum sources openEuler 24.09 and openEuler 24.09:EPOL: - ```conf - [openEuler24.09] # openEuler 24.09 official source - name=openEuler24.09 - baseurl=http://repo.openeuler.org/openEuler-24.09/everything/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-24.09/everything/$basearch/RPM-GPG-KEY-openEuler - ``` + ```conf + [openEuler24.09] # openEuler 24.09 official source + name=openEuler24.09 + baseurl=http://repo.openeuler.org/openEuler-24.09/everything/$basearch/ + enabled=1 + gpgcheck=1 + gpgkey=http://repo.openeuler.org/openEuler-24.09/everything/$basearch/RPM-GPG-KEY-openEuler + ``` 2. Install KubeOS as the **root** user. - ```shell - # yum install KubeOS KubeOS-scripts -y - ``` + ```shell + # yum install KubeOS KubeOS-scripts -y + ``` > ![](./public_sys-resources/icon-note.gif)**NOTE**: > @@ -67,64 +67,64 @@ Before using Docker to create a container image, ensure that Docker has been ins 1. Go to the working directory. - ```shell - cd /opt/kubeOS - ``` + ```shell + cd /opt/kubeOS + ``` 2. Specify the image repository, name, and version for os-proxy. - ```shell - export IMG_PROXY=your_imageRepository/os-proxy_imageName:version - ``` + ```shell + export IMG_PROXY=your_imageRepository/os-proxy_imageName:version + ``` 3. Specify the image repository, name, and version for os-operator. - ```shell - export IMG_OPERATOR=your_imageRepository/os-operator_imageName:version - ``` + ```shell + export IMG_OPERATOR=your_imageRepository/os-operator_imageName:version + ``` 4. Compile a Dockerfile to build an image. Pay attention to the following points when compiling a Dockerfile: - - The os-operator and os-proxy images must be built based on the base image. Ensure that the base image is safe. - - Copy the os-operator and os-proxy binary files to the corresponding images. - - Ensure that the owner and owner group of the os-proxy binary file in the os-proxy image are **root**, and the file permission is **500**. 
- - Ensure that the owner and owner group of the os-operator binary file in the os-operator image are the user who runs the os-operator process in the container, and the file permission is **500**. - - The locations of the os-operator and os-proxy binary files in the image and the commands run during container startup must correspond to the parameters specified in the YAML file used for deployment. + - The os-operator and os-proxy images must be built based on the base image. Ensure that the base image is safe. + - Copy the os-operator and os-proxy binary files to the corresponding images. + - Ensure that the owner and owner group of the os-proxy binary file in the os-proxy image are **root**, and the file permission is **500**. + - Ensure that the owner and owner group of the os-operator binary file in the os-operator image are the user who runs the os-operator process in the container, and the file permission is **500**. + - The locations of the os-operator and os-proxy binary files in the image and the commands run during container startup must correspond to the parameters specified in the YAML file used for deployment. - An example Dockerfile is as follows: + An example Dockerfile is as follows: - ```text - FROM your_baseimage - COPY ./bin/proxy /proxy - ENTRYPOINT ["/proxy"] - ``` + ```text + FROM your_baseimage + COPY ./bin/proxy /proxy + ENTRYPOINT ["/proxy"] + ``` - ```text - FROM your_baseimage - COPY --chown=6552:6552 ./bin/operator /operator - ENTRYPOINT ["/operator"] - ``` + ```text + FROM your_baseimage + COPY --chown=6552:6552 ./bin/operator /operator + ENTRYPOINT ["/operator"] + ``` - Alternatively, you can use multi-stage builds in the Dockerfile. + Alternatively, you can use multi-stage builds in the Dockerfile. 5. Build the images (the os-operator and os-proxy images) to be included in the containers OS image. - ```shell - # Specify the Dockerfile path of os-proxy. 
- export DOCKERFILE_PROXY=your_dockerfile_proxy - # Specify the Dockerfile path of os-operator. - export DOCKERFILE_OPERATOR=your_dockerfile_operator - # Build images. - docker build -t ${IMG_OPERATOR} -f ${DOCKERFILE_OPERATOR} . - docker build -t ${IMG_PROXY} -f ${DOCKERFILE_PROXY} . - ``` + ```shell + # Specify the Dockerfile path of os-proxy. + export DOCKERFILE_PROXY=your_dockerfile_proxy + # Specify the Dockerfile path of os-operator. + export DOCKERFILE_OPERATOR=your_dockerfile_operator + # Build images. + docker build -t ${IMG_OPERATOR} -f ${DOCKERFILE_OPERATOR} . + docker build -t ${IMG_PROXY} -f ${DOCKERFILE_PROXY} . + ``` 6. Push the images to the image repository. - ```shell - docker push ${IMG_OPERATOR} - docker push ${IMG_PROXY} - ``` + ```shell + docker push ${IMG_OPERATOR} + docker push ${IMG_PROXY} + ``` ### Creating a KubeOS VM Image @@ -144,25 +144,25 @@ To create a KubeOS VM image, perform the following steps: 1. Go to the working directory. - ```shell - cd /opt/kubeOS/scripts - ``` + ```shell + cd /opt/kubeOS/scripts + ``` 2. Run `kbimg.sh` to create a KubeOS image. The following is a command example: - ```shell - bash kbimg.sh create vm-image -p xxx.repo -v v1 -b ../bin/os-agent -e '''$1$xyz$RdLyKTL32WEvK3lg8CXID0''' - ``` + ```shell + bash kbimg.sh create vm-image -p xxx.repo -v v1 -b ../bin/os-agent -e '''$1$xyz$RdLyKTL32WEvK3lg8CXID0''' - ``` - In the command, **xx.repo** indicates the actual Yum source file used for creating the image. You are advised to configure both the **everything** and **EPOL** repositories as Yum sources. + In the command, **xxx.repo** indicates the actual Yum source file used for creating the image. You are advised to configure both the **everything** and **EPOL** repositories as Yum sources. 
- After the KubeOS image is created, the following files are generated in the **/opt/kubeOS/scripts** directory: + After the KubeOS image is created, the following files are generated in the **/opt/kubeOS/scripts** directory: - - **system.img**: system image in raw format. The default size is 20 GB. The size of the root file system partition is less than 2,560 MiB, and the size of the Persist partition is less than 14 GiB. - - **system.qcow2**: system image in QCOW2 format. - - **update.img**: partition image of the root file system that is used for upgrade. + - **system.img**: system image in raw format. The default size is 20 GB. The size of the root file system partition is less than 2,560 MiB, and the size of the Persist partition is less than 14 GiB. + - **system.qcow2**: system image in QCOW2 format. + - **update.img**: partition image of the root file system that is used for upgrade. - The created KubeOS VM image can be used only in a VM of the x86 or AArch64 architecture. KubeOS does not support legacy boot in an x86 VM + The created KubeOS VM image can be used only in a VM of the x86 or AArch64 architecture. KubeOS does not support legacy boot in an x86 VM. ### Deploying CRD, os-operator, and os-proxy @@ -181,14 +181,14 @@ To create a KubeOS VM image, perform the following steps: 2. Deploy CRD, RBAC, os-operator, and os-proxy. Assume that the **crd.yaml**, **rbac.yaml**, and **manager.yaml** files are stored in the **config/crd**, **config/rbac**, and **config/manager** directories, respectively. Run the following commands: - ```shell - kubectl apply -f config/crd - kubectl apply -f config/rbac - kubectl apply -f config/manager - ``` + ```shell + kubectl apply -f config/crd + kubectl apply -f config/rbac + kubectl apply -f config/manager + ``` 3. After the deployment is complete, run the following command to check whether each component is started properly. If **STATUS** of all components is **Running**, the components are started properly. 
- ```shell - kubectl get pods -A - ``` + ```shell + kubectl get pods -A + ``` diff --git a/docs/en/docs/KubeOS/kubeos-image-creation.md b/docs/en/Cloud/KubeOS/KubeOS/kubeos-image-creation.md similarity index 99% rename from docs/en/docs/KubeOS/kubeos-image-creation.md rename to docs/en/Cloud/KubeOS/KubeOS/kubeos-image-creation.md index 382573d2f0d819c2c372933f3a2aa9a2d55e93fe..7deaf04ea508623df9560af1443c9d5d6ed0007d 100644 --- a/docs/en/docs/KubeOS/kubeos-image-creation.md +++ b/docs/en/Cloud/KubeOS/KubeOS/kubeos-image-creation.md @@ -60,7 +60,7 @@ kbimg is an image creation tool required for KubeOS deployment and upgrade. You ``` shell cd /opt/kubeOS/scripts -bash kbimg.sh create upgrade-image -p xxx.repo -v v1 -b ../bin/os-agent -e '''$1$xyz$RdLyKTL32WEvK3lg8CXID0''' -d your_imageRepository/imageName:version +bash kbimg.sh create upgrade-image -p xxx.repo -v v1 -b ../bin/os-agent -e '''$1$xyz$RdLyKTL32WEvK3lg8CXID0''' -d your_imageRepository/imageName:version ``` * After the creation is complete, view the created KubeOS image. @@ -107,7 +107,7 @@ docker images After the KubeOS image is created, the following files are generated in the **/opt/kubeOS/scripts** directory: * **system.qcow2**: system image in QCOW2 format. The default size is 20 GiB. The size of the root file system partition is less than 2,020 MiB, and the size of the Persist partition is less than 16 GiB. * **update.img**: partition image of the root file system used for upgrade. - + ### Creating Images and Files Required for Installing KubeOS on Physical Machines #### Precautions @@ -122,7 +122,7 @@ docker images #### Example * Modify the `00bootup/Global.cfg` file. All parameters are mandatory. Currently, only IPv4 addresses are supported. 
The following is a configuration example: - + ```shell # rootfs file name rootfs_name=kubeos.tar diff --git a/docs/en/docs/KubeOS/kubeos-user-guide.md b/docs/en/Cloud/KubeOS/KubeOS/kubeos-user-guide.md similarity index 100% rename from docs/en/docs/KubeOS/kubeos-user-guide.md rename to docs/en/Cloud/KubeOS/KubeOS/kubeos-user-guide.md diff --git a/docs/en/docs/KubeOS/public_sys-resources/icon-note.gif b/docs/en/Cloud/KubeOS/KubeOS/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/KubeOS/public_sys-resources/icon-note.gif rename to docs/en/Cloud/KubeOS/KubeOS/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/KubeOS/usage-instructions.md b/docs/en/Cloud/KubeOS/KubeOS/usage-instructions.md similarity index 96% rename from docs/en/docs/KubeOS/usage-instructions.md rename to docs/en/Cloud/KubeOS/KubeOS/usage-instructions.md index 4e115b388212ff76f7fcb39a8b055f246baa76a0..fb8a4a50778ae3b4cf9358b8738b19b7da62b066 100644 --- a/docs/en/docs/KubeOS/usage-instructions.md +++ b/docs/en/Cloud/KubeOS/KubeOS/usage-instructions.md @@ -74,10 +74,10 @@ Create a custom object of the OS type in the cluster and set the corresponding f | `clientcert` | string | Client certificate file used for two-way HTTPS authentication | This parameter is valid only when two-way HTTPS authentication is used. | Required when `mtls` is `true` | | `clientkey` | string | Client private key file used for two-way HTTPS authentication | This parameter is valid only when two-way HTTPS authentication is used. | Required when `mtls` is `true` | | `evictpodforce` | bool | Whether to forcibly evict pods during upgrade/rollback | Must be `true` or `false`. This parameter is valid only for upgrades or rollbacks. | Yes | - | `sysconfigs` | / | Configuration settings | 1. When `opstype` is `config`, only configuration is performed.
    2. When `opstype` is `upgrade/rollback`, it indicates post-upgrade/rollback configuration, meaning it takes effect after the upgrade/rollback and subsequent reboot. For detailed field descriptions, see the [Settings](#settings). | Required when `opstype` is `config` | + | `sysconfigs` | / | Configuration settings | 1. When `opstype` is `config`, only configuration is performed.
    2. When `opstype` is `upgrade/rollback`, it indicates post-upgrade/rollback configuration, meaning it takes effect after the upgrade/rollback and subsequent reboot. For detailed field descriptions, see the [Settings](#settings). | Required when `opstype` is `config` | | `upgradeconfigs`| / | Configuration settings to apply before an upgrade. | This parameter is valid for upgrades or rollbacks and takes effect before the upgrade or rollback operation. For detailed field descriptions, see the [Settings](#settings). | Optional | - | `nodeselector` | string | Label of the nodes targeted for the upgrade/configuration/rollback | This parameter is used to perform operations on nodes with specific labels, rather than all worker nodes in the cluster. The nodes targeted for the operation need to have a label with the `upgrade.openeuler.org/node-selector` key. The `nodeselector` parameter should be set to the value of this label. **Notes:** 1. When this parameter is not set or is set to `no-label`, operations are performed on nodes that do not have the `upgrade.openeuler.org/node-selector` label.
    2. When this parameter is set to `""` (an empty string), operations are performed on nodes that have the `upgrade.openeuler.org/node-selector=""` label.
    3. To ignore labels and perform operations on all nodes, set this parameter to `all-label`. | Optional | - | `timewindow` | / | Time window during which the upgrade/configuration/rollback can take place. | 1. When specifying a time window, both `starttime` and `endtime` must be specified. That is, they should either both be empty or both be non-empty.
    1. Both `starttime` and `endtime` are strings and should be in the `YYYY-MM-DD HH:MM:SS` or `HH:MM:SS` format, and both should follow the same format.
    2. When in `HH:MM:SS` format, if `starttime` is less than `endtime`, it is assumed that `starttime` refers to that time on the next day.
    3. When `timewindow` is not specified, it defaults to no time window restrictions. | Optional | + | `nodeselector` | string | Label of the nodes targeted for the upgrade/configuration/rollback | This parameter is used to perform operations on nodes with specific labels, rather than all worker nodes in the cluster. The nodes targeted for the operation need to have a label with the `upgrade.openeuler.org/node-selector` key. The `nodeselector` parameter should be set to the value of this label. **Notes:** 1. When this parameter is not set or is set to `no-label`, operations are performed on nodes that do not have the `upgrade.openeuler.org/node-selector` label.
    2. When this parameter is set to `""` (an empty string), operations are performed on nodes that have the `upgrade.openeuler.org/node-selector=""` label.
    3. To ignore labels and perform operations on all nodes, set this parameter to `all-label`. | Optional | + | `timewindow` | / | Time window during which the upgrade/configuration/rollback can take place. | 1. When specifying a time window, both `starttime` and `endtime` must be specified. That is, they should either both be empty or both be non-empty.
&nbsp;&nbsp;&nbsp;&nbsp;2. Both `starttime` and `endtime` are strings and should be in the `YYYY-MM-DD HH:MM:SS` or `HH:MM:SS` format, and both should follow the same format.<br />
&nbsp;&nbsp;&nbsp;&nbsp;3. When in `HH:MM:SS` format, if `starttime` is less than `endtime`, it is assumed that `starttime` refers to that time on the next day.<br />
&nbsp;&nbsp;&nbsp;&nbsp;4. When `timewindow` is not specified, it defaults to no time window restrictions. | Optional | | `timeinterval` | int | The time interval between each batch of tasks for the upgrade/configuration/rollback operation. | This parameter is in seconds and defines the time interval between the operator dispatching tasks. If the Kubernetes cluster is busy and cannot immediately respond to the operator's request, the actual interval may be longer than the specified time. | Optional | | `executionmode` | string | The mode in which the upgrade/configuration/rollback operation is executed. | The value can be `serial` or `parallel`. If this parameter is not set, the operation defaults to parallel mode. | Optional | @@ -109,7 +109,7 @@ Create a custom object of the OS type in the cluster and set the corresponding f ``` - Upgrade using a container image - - Before you can upgrade using a container image, you need to create a container image specifically for the upgrade process. For detailed instructions on how to create this image, see [KubeOS OCI 镜像制作](./kubeos-image-creation.md#creating-a-kubeos-oci-image) in [KubeOS Image Creation](./kubeos-image-creation.md). + - Before you can upgrade using a container image, you need to create a container image specifically for the upgrade process. For detailed instructions on how to create this image, see [Creating a KubeOS OCI Image](./kubeos-image-creation.md#creating-a-kubeos-oci-image) in [KubeOS Image Creation](./kubeos-image-creation.md). 
``` yaml apiVersion: upgrade.openeuler.org/v1alpha1 diff --git a/docs/en/Cloud/KubeOS/Menu/index.md b/docs/en/Cloud/KubeOS/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..2c346130d73ed7c59edffae2d1a1f8913d7a4254 --- /dev/null +++ b/docs/en/Cloud/KubeOS/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [KubeOS User Guide]({{< relref "./KubeOS/Menu/index.md" >}}) diff --git a/docs/en/Cloud/Menu/index.md b/docs/en/Cloud/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..5526ca59e32b4d51f507f3e181f9c504129d2a25 --- /dev/null +++ b/docs/en/Cloud/Menu/index.md @@ -0,0 +1,13 @@ +--- +headless: true +--- + +- [Container Engines]({{< relref "./ContainerEngine/Menu/index.md" >}}) +- [Container Forms]({{< relref "./ContainerForm/Menu/index.md" >}}) +- [Container Runtimes]({{< relref "./ContainerRuntime/Menu/index.md" >}}) +- [Container Image Building]({{< relref "./ImageBuilder/Menu/index.md" >}}) +- [Cloud-Native OS]({{< relref "./KubeOS/Menu/index.md" >}}) +- [Cloud Base OS]({{< relref "./NestOS/Menu/index.md" >}}) +- [Hybrid Deployment]({{< relref "./HybridDeployment/Menu/index.md" >}}) +- [Cluster Deployment]({{< relref "./ClusterDeployment/Menu/index.md" >}}) +- [Service Mesh]({{< relref "./Kmesh/Menu/index.md" >}}) diff --git a/docs/en/Cloud/NestOS/Menu/index.md b/docs/en/Cloud/NestOS/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..d537d59e82281ad59dd35492786a2c3255233977 --- /dev/null +++ b/docs/en/Cloud/NestOS/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [NestOS User Guide]({{< relref "./NestOS/Menu/index.md" >}}) diff --git a/docs/en/Cloud/NestOS/NestOS/Menu/index.md b/docs/en/Cloud/NestOS/NestOS/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..02f1ed19cd48c961d60e19baaf4ef6a67ac1faff --- /dev/null +++ b/docs/en/Cloud/NestOS/NestOS/Menu/index.md @@ -0,0 +1,7 @@ +--- +headless: true +--- + +- [NestOS User 
Guide]({{< relref "./overview.md" >}}) + - [NestOS for Container User Guide]({{< relref "./nestos-for-container-user-guide.md" >}}) + - [Feature Description]({{< relref "./feature-description.md" >}}) diff --git a/docs/en/docs/NestOS/feature-description.md b/docs/en/Cloud/NestOS/NestOS/feature-description.md similarity index 94% rename from docs/en/docs/NestOS/feature-description.md rename to docs/en/Cloud/NestOS/NestOS/feature-description.md index 2a607c82f1d5e1342727cf21b1ae942c9830b5be..f53f9cf28af8028c6aba38da47cee88d52905e27 100644 --- a/docs/en/docs/NestOS/feature-description.md +++ b/docs/en/Cloud/NestOS/NestOS/feature-description.md @@ -64,7 +64,7 @@ Zincati is an auto-update agent for NestOS hosts. It works as a client for the C ## System Initialization (Ignition) -Ignition is a distribution-agnostic provisioning utility that not only installs, but also reads configuration files (in JSON format) to provision NestOS. Configurable components include storage and file systems, systemd units, and users. +Ignition is a distribution-agnostic provisioning utility that not only installs, but also reads configuration files (in JSON format) to initialize NestOS. Configurable components include storage and file systems, systemd units, and users. Ignition runs only once during the first boot of the system (while in the initramfs). Because Ignition runs so early in the boot process, it can re-partition disks, format file systems, create users, and write files before the userspace begins to boot. As a result, systemd services are already written to disk when systemd starts, speeding the time to boot. @@ -72,7 +72,7 @@ Ignition runs only once during the first boot of the system (while in the initra Ignition is designed to be used as a provisioning tool, not as a configuration management tool. Ignition encourages immutable infrastructure, in which machine modification requires that users discard the old node and re-provision the machine. 
(2) Ignition produces the machine specified or no machine at all -Ignition does what it needs to make the system match the state described in the Ignition configuration. If for any reason Ignition cannot deliver the exact machine that the configuration asked for, Ignition prevents the machine from booting successfully. For example, if the user wanted to fetch the document hosted at **https://example.com/foo.conf** and write it to disk, Ignition would prevent the machine from booting if it were unable to resolve the given URL. +Ignition does what it needs to make the system match the state described in the Ignition configuration. If for any reason Ignition cannot deliver the exact machine that the configuration asked for, Ignition prevents the machine from booting successfully. For example, if the user wanted to fetch the document hosted at **https://example.com/foo.conf** and write it to disk, Ignition would prevent the machine from booting if it were unable to resolve the given URL. (3) Ignition configurations are declarative Ignition configurations describe the state of a system. Ignition configurations do not list a series of steps that Ignition should take. @@ -87,14 +87,14 @@ Afterburn is a one-shot agent for cloud-like platforms which interacts with prov 
Depending on the specific platform, the following services may run in the initramfs on first boot: - - setting local hostname +- setting local hostname - - injecting network command-line arguments +- injecting network command-line arguments The following features are conditionally available on some platforms as systemd service units: - - installing public SSH keys for local system users +- installing public SSH keys for local system users - - retrieving attributes from instance metadata +- retrieving attributes from instance metadata - - checking in to the provider in order to report a successful boot or instance provisioning +- checking in to the provider in order to report a successful boot or instance provisioning diff --git a/docs/en/Cloud/NestOS/NestOS/figures/figure1.png b/docs/en/Cloud/NestOS/NestOS/figures/figure1.png new file mode 100644 index 0000000000000000000000000000000000000000..b4eb9017ed202e854c076802492d8561942dfc88 Binary files /dev/null and b/docs/en/Cloud/NestOS/NestOS/figures/figure1.png differ diff --git a/docs/en/Cloud/NestOS/NestOS/figures/figure2.png b/docs/en/Cloud/NestOS/NestOS/figures/figure2.png new file mode 100644 index 0000000000000000000000000000000000000000..90049769c04e2bd494533da1613e38a5199da3d7 Binary files /dev/null and b/docs/en/Cloud/NestOS/NestOS/figures/figure2.png differ diff --git a/docs/en/Cloud/NestOS/NestOS/nestos-for-container-user-guide.md b/docs/en/Cloud/NestOS/NestOS/nestos-for-container-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..3ebf0215316b9256ce3ea53b3b111bc9b98e1631 --- /dev/null +++ b/docs/en/Cloud/NestOS/NestOS/nestos-for-container-user-guide.md @@ -0,0 +1,985 @@ +# NestOS for Container User Guide + +## 1. Introduction to NestOS + +### 1.1 Overview + +NestOS, developed by KylinSoft and incubated in the openEuler community, is a cloud-native OS designed for modern infrastructure. 
It incorporates advanced technologies like rpm-ostree support and Ignition configuration, featuring a dual-root file system with mutual backup and atomic update capabilities. The system also includes the nestos-assembler tool for streamlined integration and building. Optimized for Kubernetes and OpenStack platforms, NestOS minimizes container runtime overhead, enabling efficient cluster formation and secure operation of large-scale containerized workloads. + +This guide provides a comprehensive walkthrough of NestOS, covering its building, installation, deployment, and usage. It aims to help users maximize the system benefits for rapid and efficient configuration and deployment. + +### 1.2 Application Scenarios and Advantages + +NestOS serves as an ideal foundation for cloud environments centered around containerized applications. It resolves challenges such as fragmented operation and maintenance (O&M) practices and redundant platform development, which arise from the decoupling of container and orchestration technologies from the underlying infrastructure. By ensuring alignment between application services and the base OS, NestOS delivers consistent and streamlined O&M. + +![figure1](./figures/figure1.png) + +## 2. Environment Preparation + +### 2.1 Build Environment Requirements + +#### 2.1.1 Requirements for Building the nestos-assembler Tool + +- Use openEuler for optimal results. +- Ensure at least 5 GB of available drive space. 
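The 5 GB figure above is easy to verify before pulling the tool. A small illustrative preflight snippet (the threshold and the use of the current directory are assumptions; adjust them for your actual build directory):

```shell
# Illustrative preflight check: require at least 5 GB free in the current directory.
need_kb=$((5 * 1024 * 1024))
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')
if [ "$free_kb" -ge "$need_kb" ]; then
    echo "disk check passed (${free_kb} KB free)"
else
    echo "need at least 5 GB free, only ${free_kb} KB available" >&2
fi
```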
+ +#### 2.1.2 Requirements for Building NestOS + +| Category | Requirements | +| :----------: | :---------------------: | +| CPU | 4 vCPUs | +| Memory | 4 GB | +| Drive | Available space > 10 GB | +| Architecture | x86_64 or AArch64 | +| Others | Support for KVM | + +### 2.2 Deployment Configuration Requirements + +| Category | Recommended Configuration | Minimum Configuration | +| :----------: | :-----------------------: | :-------------------: | +| CPU | > 4 vCPU | 1 vCPU | +| Memory | > 4 GB | 512 MB | +| Drive | > 20 GB | 10 GB | +| Architecture | x86_64, aarch64 | / | + +## 3. Quick Start + +### 3.1 Quick Build + +(1) Obtain the nestos-assembler container image. + +You are advised to use the openEuler-based base image. For additional details, see [Section 6.1](#61-nestos-assembler-container-image-creation). + +```shell +docker pull hub.oepkgs.net/nestos/nestos-assembler:24.03-LTS.20240903.0-aarch64 +``` + +(2) Create a script named `nosa` and save it to `/usr/local/bin`, then make it executable. + +```shell +#!/bin/bash + +sudo docker run --rm -it --security-opt label=disable --privileged --user=root \ + -v ${PWD}:/srv/ --device /dev/kvm --device /dev/fuse --network=host \ + --tmpfs /tmp -v /var/tmp:/var/tmp -v /root/.ssh/:/root/.ssh/ -v /etc/pki/ca-trust/:/etc/pki/ca-trust/ \ + ${COREOS_ASSEMBLER_CONFIG_GIT:+-v $COREOS_ASSEMBLER_CONFIG_GIT:/srv/src/config/:ro} \ + ${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro} \ + ${COREOS_ASSEMBLER_CONTAINER_RUNTIME_ARGS} \ + ${COREOS_ASSEMBLER_CONTAINER:-nestos-assembler:your_tag} "$@" +``` + +Note: Replace the value of `COREOS_ASSEMBLER_CONTAINER` with the actual nestos-assembler container image in your environment. + +(3) Obtain nestos-config. + +Use `nosa init` to initialize the build workspace, pull the build configuration, and create the `nestos-build` directory. 
Run the following command in this directory:
+
+```shell
+nosa init https://gitee.com/openeuler/nestos-config
+```
+
+(4) Adjust build configurations.
+
+nestos-config provides default build configurations, so no additional steps are required. For customization, refer to [Section 5](#5-build-configuration-nestos-config).
+
+(5) Build NestOS images.
+
+```shell
+# Pull build configurations and update cache.
+nosa fetch
+# Generate root file system, qcow2, and OCI images.
+nosa build
+# Generate live ISO and PXE images.
+nosa buildextend-metal
+nosa buildextend-metal4k
+nosa buildextend-live
+```
+
+For detailed build and deployment steps, refer to [Section 6](#6-build-process).
+
+### 3.2 Quick Deployment
+
+Using the NestOS ISO image as an example, boot into the live environment and execute the following command to complete the installation by following the wizard:
+
+```shell
+sudo installnestos
+```
+
+For alternative deployment methods, see [Section 8](#8-deployment-process).
+
+## 4. Default Configuration
+
+| Item | Default Configuration |
+| :-------------------------: | :----------------------------------------------: |
+| Docker service | Disabled by default, requires manual activation. |
+| SSH service security policy | Supports only key-based login by default. |
+
+## 5. Build Configuration: nestos-config
+
+### 5.1 Obtaining Configuration
+
+The repository for nestos-config is located at <https://gitee.com/openeuler/nestos-config>.
+
+### 5.2 Directory Structure Explanation
+
+| Directory/File | Description |
+| :---------------: | :------------------------------------: |
+| live/* | Boot configuration for live ISO builds |
+| overlay.d/* | Custom file configurations |
+| tests/* | User-defined test case configurations |
+| *.repo | Repository configurations |
+| .yaml, manifests/ | Main build configurations |
+
+### 5.3 Key Files
+
+#### 5.3.1 .repo Files
+
+.repo files in the directory are used to configure software repositories for building NestOS. 
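As an illustration, a minimal .repo file might look like the following; the repository ID and URL are placeholders, and the ID in brackets is what `lockfile-repos` references later:

```ini
# Hypothetical repository definition for a NestOS build.
[nestos-example-repo]
name=NestOS example repository
baseurl=https://repo.example.com/openEuler/everything/
enabled=1
```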
+
+#### 5.3.2 YAML Configuration Files
+
+YAML files in the directory provide various configurations for NestOS builds. For details, refer to [Section 5.4](#54-key-fields).
+
+### 5.4 Key Fields
+
+| Field | Purpose |
+| :------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------- |
+| packages-aarch64, packages-x86_64, packages | Scope of software package integration |
+| exclude-packages | Blocklist for software package integration |
+| remove-from-packages | Files/folders to remove from specified packages |
+| remove-files | Files/folders to remove |
+| extra-kargs | Additional kernel boot parameters |
+| initramfs-args | Initramfs build parameters |
+| postprocess | Post-build scripts for the file system |
+| default-target | Default target, such as **multi-user.target** |
+| rojig.name, releasever | Image-related information (name and version) |
+| lockfile-repos | List of repository names available for builds, which must match the repository names in the repo files described in [Section 5.3.1](#531-repo-files) |
+
+### 5.5 Configurable Items
+
+#### 5.5.1 Repository Configuration
+
+(1) Edit the .repo file in the configuration directory and modify its content to the desired software repositories.
+
+```shell
+$ vim nestos-pool.repo
+[repo_name_1]
+name=xxx
+baseurl = https://ip.address/1
+enabled = 1
+
+[repo_name_2]
+name=xxx
+baseurl = https://ip.address/2
+enabled = 1
+```
+
+(2) Modify the `lockfile-repos` field in the YAML configuration file to include the corresponding repository names.
+
+Note: The repository name is the content inside `[]` in the repo file, not the `name` field.
+ +```shell +$ vim manifests/rpmlist.yaml +Modify the `lockfile-repos` field as follows: +lockfile-repos: +- repo_name_1 +- repo_name_2 +``` + +#### 5.5.2 Software Package Customization + +Modify the `packages`, `packages-aarch64`, and `packages-x86_64` fields to add or remove software packages. + +For example, adding `nano` to the `packages` field ensures that the system includes `nano` after installation. + +```shell +$ vim manifests/rpmlist.yaml +packages: +- bootupd +... +- authselect +- nano +... +packages-aarch64: +- grub2-efi-aa64 +packages-x86_64: +- microcode_ctl +- grub2-efi-x64 +``` + +#### 5.5.3 Image Name and Version Customization + +Modify the `releasever` and `rojig.name` fields in the YAML file to control the image version and name. + +```shell +$ vim manifest.yaml + +releasever: "1.0" +rojig: + license: MIT + name: nestos + summary: NestOS stable +``` + +With the above configuration, the built image format will be **nestos-1.0.$(date "+%Y%m%d").$build_num.$type**, where **build_num** is the build count and **type** is the type suffix. + +#### 5.5.4 Image Release Information Customization + +Normally, release information is provided by the integrated release package (e.g., `openeuler-release`). However, you can rewrite the **/etc/os-release** file by adding a **postprocess** script. + +```shell +$ vim manifests/system-configuration.yaml +# Add the following content to postprocess. If the content already exists, simply modify the corresponding release information. 
+postprocess:
+  - |
+    #!/usr/bin/env bash
+    set -xeuo pipefail
+    export OSTREE_VERSION="$(tail -1 /etc/os-release)"
+    date_now=$(date "+%Y%m%d")
+    echo -e 'NAME="openEuler NestOS"\nVERSION="24.03-LTS"\nID="openeuler"\nVERSION_ID="24.03-LTS"\nPRETTY_NAME="NestOS"\nANSI_COLOR="0;31"\nBUILDID="'${date_now}'"\nVARIANT="NestOS"\nVARIANT_ID="nestos"\n' > /usr/lib/os-release
+    echo -e $OSTREE_VERSION >> /usr/lib/os-release
+    cp -f /usr/lib/os-release /etc/os-release
+```
+
+#### 5.5.5 Custom File Creation
+
+Add or modify custom files in the **overlay.d** directory. This allows for customization of the image content.
+
+```shell
+mkdir -p overlay.d/15nestos/etc/test
+echo "This is a test message !" > overlay.d/15nestos/etc/test/test.txt
+```
+
+Build the image using the above configuration. After the image boots, the content of the corresponding file in the system will match the custom content added above.
+
+```shell
+[root@nosa-devsh ~]# cat /etc/test/test.txt
+This is a test message !
+```
+
+## 6. Build Process
+
+NestOS employs a containerized method to bundle the build toolchain into a comprehensive container image called nestos-assembler.
+
+NestOS enables users to create the nestos-assembler container image, simplifying the process of building various NestOS image formats in any Linux distribution environment, such as within existing CI/CD pipelines. Additionally, users can manage, debug, and automate testing of build artifacts using this image.
+
+### 6.1 nestos-assembler Container Image Creation
+
+#### 6.1.1 Prerequisites
+
+1. Prepare the base container image.
+
+   The nestos-assembler container image must be based on a base image that supports the Yum or DNF package manager. Although it can be created from any distribution base image, using an openEuler base image is recommended to reduce software compatibility issues.
+
+2. Install required software packages.
+
+   Install Docker, the essential dependency:
+
+   ```shell
+   dnf install -y docker
+   ```
+
+3. 
Clone the nestos-assembler source code repository. + +```shell +git clone --depth=1 --single-branch https://gitee.com/openeuler/nestos-assembler.git +``` + +#### 6.1.2 Building the nestos-assembler Container Image + +Using the openEuler container image as the base, build the image with the following command: + +```shell +cd nestos-assembler/ +docker build -f Dockerfile . -t nestos-assembler:your_tag +``` + +### 6.2 nestos-assembler Container Image Usage + +#### 6.2.1 Prerequisites + +1. Prepare the nestos-assembler container image. + + Once the nestos-assembler container image is built following [Section 6.1](#61-nestos-assembler-container-image-creation), it can be managed and distributed via a privately hosted container image registry. Ensure the correct version of the nestos-assembler container image is pulled before initiating the NestOS build. + +2. Create the nosa script. + + To streamline user operations, you can write a `nosa` command script. This is particularly useful as the NestOS build process involves multiple calls to the nestos-assembler container image for executing various commands and configuring numerous parameters. For quick build details, see [Section 3.1](#31-quick-build). + +#### 6.2.2 Usage Instructions + +nestos-assembler commands + +| Command | Description | +| :-------------------: | :-------------------------------------------------------------------------------------: | +| init | Initialize the build environment and configuration. See [Section 6.3](#63-build-environment-preparation) for details. | +| fetch | Fetch the latest software packages to the local cache based on the build configuration. | +| build | Build the ostree commit, which is the core command for building NestOS. | +| run | Directly start a QEMU instance, using the latest build version by default. | +| prune | Clean up historical build versions, retaining the latest three versions by default. | +| clean | Delete all build artifacts. 
Use the `--all` parameter to also clean the local cache. | +| list | List the versions and artifacts present in the current build environment. | +| build-fast | Quickly build a new version based on the previous build record. | +| push-container | Push the container image artifact to the container image registry. | +| buildextend-live | Build ISO artifacts and PXE images that support the live environment. | +| buildextend-metal | Build raw artifacts for bare metal. | +| buildextend-metal4k | Build raw artifacts for bare metal in native 4K mode. | +| buildextend-openstack | Build QCOW2 artifacts for the OpenStack platform. | +| buildextend-qemu | Build QCOW2 artifacts for QEMU. | +| basearch | Retrieve the current architecture information. | +| compress | Compress artifacts. | +| kola | Automated testing framework | +| kola-run | A wrapper for automated testing that outputs summarized results | +| runc | Mount the current build root file system in a container. | +| tag | Manage build project tags. | +| virt-install | Create an instance for the specified build version. | +| meta | Manage build project metadata. | +| shell | Enter the nestos-assembler container image. | + +### 6.3 Build Environment Preparation + +The NestOS build environment requires a dedicated empty folder as the working directory, supporting multiple builds while preserving and managing historical versions. Before setting up the build environment, ensure the build configuration is prepared (see [Section 5](#5-build-configuration-nestos-config)). + +You are advised to maintain a separate build configuration for each independent build environment. If you plan to build NestOS for various purposes, maintain multiple build configurations and their corresponding directories. This approach allows independent evolution of configurations and clearer version management. 
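The one-directory-per-configuration recommendation above can be sketched as follows; `WORKROOT` and the directory names are illustrative, not prescribed by the toolchain:

```shell
#!/bin/bash
# One empty working directory per independent build configuration.
WORKROOT="${WORKROOT:-$HOME/nestos-work}"
mkdir -p "$WORKROOT/general" "$WORKROOT/iot"
# Then initialize each directory with its own configuration, for example:
#   cd "$WORKROOT/general" && nosa init https://gitee.com/openeuler/nestos-config
#   cd "$WORKROOT/iot"     && nosa init <your-customized-config-repo>
```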
+ +#### 6.3.1 Initializing the Build Environment + +Navigate to the target working directory and run the following command to initialize the build environment: + +```shell +nosa init https://gitee.com/openeuler/nestos-config +``` + +Initialization is only required for the first build. Subsequent builds can reuse the same environment unless significant changes are made to the build configuration. + +#### 6.3.2 Build Environment Structure + +After initialization, the following folders are created in the working directory: + +**builds**: stores build artifacts and metadata. The **latest** subdirectory is a symbolic link to the most recent build version. + +**cache**: contains cached data pulled from software sources and package lists specified in the build configuration. Historical NestOS ostree repositories are also stored here. + +**overrides**: used to place files or RPM packages that should be added to the rootfs of the final artifact during the build process. + +**src**: holds the build configuration, including nestos-config-related content. + +**tmp**: used during builds and automated testing. In case of errors, you can inspect VM CLI outputs, journal logs, and other debugging information here. + +### 6.4 Build Steps + +The primary steps and reference commands for building NestOS are outlined below. + +![figure2](./figures/figure2.png) + +#### 6.4.1 Initial Build + +For the initial build, the build environment must be initialized. Refer to [Section 6.3](#63-build-environment-preparation) for detailed instructions. + +For subsequent builds, the existing build environment can be reused. Use `nosa list` to check the current versions and corresponding artifacts in the build environment. 
+
+#### 6.4.2 Updating Build Configuration and Cache
+
+After initializing the build environment, run the following command to update the build configuration and cache:
+
+```shell
+nosa fetch
+```
+
+This step validates the build configuration and pulls software packages from the configured sources to the local cache. When the build configuration changes or you want to update to the latest software versions, repeat this step. Otherwise, the build may fail or produce unexpected results.
+
+If significant changes are made to the build configuration and you want to clear the local cache and re-fetch, use:
+
+```shell
+nosa clean --all
+```
+
+#### 6.4.3 Building the Immutable Root File System
+
+The core of NestOS, an immutable OS, is its immutable root file system based on ostree technology. Run the following command to build the ostree file system:
+
+```shell
+nosa build
+```
+
+By default, the `build` command generates the ostree file system and an OCI archive. You can also include `qemu`, `metal`, or `metal4k` to simultaneously build the corresponding artifacts, equivalent to running `buildextend-qemu`, `buildextend-metal`, and `buildextend-metal4k` afterward.
+
+```shell
+nosa build qemu metal metal4k
+```
+
+To add custom files or RPM packages during the NestOS build, place them in the **rootfs/** or **rpm/** folders under the **overrides** directory before running the `build` command.
+
+#### 6.4.4 Building Various Artifacts
+
+After running the `build` command, you can use `buildextend` commands to build different types of artifacts. Details are as follows. 
+ +- Building QCOW2 images: + +```shell +nosa buildextend-qemu +``` + +- Building ISO images with a live environment or PXE boot components: + +```shell +nosa buildextend-metal +nosa buildextend-metal4k +nosa buildextend-live +``` + +- Building QCOW2 images for the OpenStack environment: + +```shell +nosa buildextend-openstack +``` + +- Building container images for container-based updates: + +When the `nosa build` command is executed, an OCI archive format image is also generated. This image can be pushed to a local or remote image registry directly. + +```shell +nosa push-container [container-image-name] +``` + +The remote image registry address must be appended to the container image name, and no `:` should appear except in the tag. If no `:` is detected, the command generates a tag in the format `{latest_build}-{arch}`. Example: + +```shell +nosa push-container registry.example.com/nestos:1.0.20240903.0-x86_64 +``` + +This command supports the following options: + +`--authfile`: specifies the authentication file for logging into the remote image registry. + +`--insecure`: bypasses SSL/TLS verification for self-signed certificates. + +`--transport`: specifies the target image push protocol. The default is `docker`. Supported options: + +- `containers-storage`: pushes to the local storage directory of container engines like Podman and CRIO. +- `dir`: pushes to a specified local directory. +- `docker`: pushes to a private or remote container image registry using the Docker API. +- `docker-archive`: exports an archive file for use with `docker load`. +- `docker-daemon`: pushes to the local storage directory of the Docker container engine. 
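The default tag rule described above ( `{latest_build}-{arch}` when no `:` is given) can be sketched as follows; the build ID and registry address are example values, and the real build ID comes from the **builds/latest** directory:

```shell
#!/bin/bash
# Sketch of the default tag generated when no ':' appears in the image name.
latest_build="1.0.20240903.0"   # example build ID, not read from a real build
arch="$(uname -m)"
tag="${latest_build}-${arch}"
echo "registry.example.com/nestos:${tag}"
```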
+
+### 6.5 Artifacts Acquisition
+
+Once the build process is complete, the artifacts are stored in the following directory within the build environment:
+
+```text
+builds/{version}/{arch}/
+```
+
+For convenience, if you are only interested in the latest build version or are using CI/CD, a **latest** symbolic link points to the most recent version directory:
+
+```text
+builds/latest/{arch}/
+```
+
+To reduce the size of the artifacts for easier transfer, you can compress them using the following command:
+
+```shell
+nosa compress
+```
+
+Note that compression removes the original files, which may disable some debugging commands. To restore the original files, use the decompression command:
+
+```shell
+nosa uncompress
+```
+
+### 6.6 Build Environment Maintenance
+
+Before or after setting up the NestOS environment, you may need to address specific requirements. The following commands are recommended for resolving these issues.
+
+#### 6.6.1 Cleaning Up Historical or Invalid Build Versions to Free Drive Space
+
+To clean up historical build versions, run:
+
+```shell
+nosa prune
+```
+
+To delete all artifacts in the current build environment, run:
+
+```shell
+nosa clean
+```
+
+If the build configuration has changed software repositories or historical caches are no longer needed, you can completely clear the current build environment cache:
+
+```shell
+nosa clean --all
+```
+
+#### 6.6.2 Temporarily Running a Build Version Instance for Debugging or Verification
+
+```shell
+nosa run
+```
+
+Use `--qemu-image` or `--qemu-iso` to specify the boot image address. For additional parameters, refer to `nosa run --help`.
+
+Once the instance starts, the build environment directory is mounted to **/var/mnt/workdir**, allowing access to the build environment.
+
+#### 6.6.3 Running Automated Tests
+
+```shell
+nosa kola run
+```
+
+This command runs predefined test cases. You can also append a specific test case name to execute it individually. 
+
+```shell
+nosa kola testiso
+```
+
+This command performs installation and deployment tests for ISO or PXE live environments, acting as a smoke test for the build process.
+
+#### 6.6.4 Debugging and Verifying nestos-assembler
+
+```shell
+nosa shell
+```
+
+This command launches a shell environment within the build toolchain container, enabling you to verify the functionality of the build toolchain environment.
+
+## 7. Deployment Configuration
+
+### 7.1 Introduction
+
+Before you deploy NestOS, it is essential to understand and prepare the necessary configurations. NestOS offers flexible configuration options through Ignition files, which can be managed using Butane. This simplifies automated deployment and environment setup for users.
+
+This section provides a detailed overview of Butane functionality and usage, along with configuration examples for various scenarios. These configurations will help you quickly set up and run NestOS, ensuring system security and reliability while meeting application needs. Additionally, we will explore how to customize images by pre-integrating Ignition files, enabling efficient configuration and deployment for specific use cases.
+
+### 7.2 Introduction to Butane
+
+Butane is a tool that converts human-readable YAML configuration files into NestOS Ignition files. It simplifies the process of writing complex configurations by allowing users to create configuration files in a more readable format, which are then converted into JSON format suitable for NestOS.
+
+NestOS has adapted Butane by adding support for the `nestos` variant and configuration specification version `v1.0.0`, corresponding to the Ignition configuration specification `v3.3.0`. This ensures configuration stability and compatibility. 
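In practice, this means every Butane configuration for NestOS begins with the following two-line header, shown here as a standalone fragment:

```YAML
variant: nestos   # NestOS variant supported by the adapted Butane
version: 1.0.0    # configuration spec v1.0.0, maps to Ignition spec v3.3.0
```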
+ +### 7.3 Butane Usage + +To install the Butane package, use the following command: + +```shell +dnf install butane +``` + +Edit **example.yaml** and execute the following command to convert it into an Ignition file **example.ign**. The process of writing YAML files will be explained in detail later: + +```shell +butane example.yaml -o example.ign -p +``` + +### 7.4 Supported Functional Scenarios + +The following configuration examples (**example.yaml**) briefly describe the main functional scenarios and advanced usage methods supported by NestOS. + +#### 7.4.1 Configuring Users, Groups, Passwords, and SSH Keys + +```YAML +variant: nestos +version: 1.0.0 +passwd: + users: + - name: nest + ssh_authorized_keys: + - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHn2eh... + - name: jlebon + groups: + - wheel + ssh_authorized_keys: + - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDC5QFS... + - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIveEaMRW... + - name: miabbott + groups: + - docker + - wheel + password_hash: $y$j9T$aUmgEDoFIDPhGxEe2FUjc/$C5A... + ssh_authorized_keys: + - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDTey7R... +``` + +#### 7.4.2 File Operations: Configuring Network Interfaces + +```YAML +variant: nestos +version: 1.0.0 +storage: + files: + - path: /etc/NetworkManager/system-connections/ens2.nmconnection + mode: 0600 + contents: + inline: | + [connection] + id=ens2 + type=ethernet + interface-name=ens2 + [ipv4] + address1=10.10.10.10/24,10.10.10.1 + dns=8.8.8.8; + dns-search= + may-fail=false + method=manual +``` + +#### 7.4.3 Creating Directories, Files, and Symbolic Links with Permissions + +```YAML +variant: nestos +version: 1.0.0 +storage: + directories: + - path: /opt/tools + overwrite: true + files: + - path: /var/helloworld + overwrite: true + contents: + inline: Hello, world! 
+      mode: 0644
+      user:
+        name: dnsmasq
+      group:
+        name: dnsmasq
+    - path: /opt/tools/transmogrifier
+      overwrite: true
+      contents:
+        source: https://mytools.example.com/path/to/archive.gz
+        compression: gzip
+        verification:
+          hash: sha512-00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
+      mode: 0555
+  links:
+    - path: /usr/local/bin/transmogrifier
+      overwrite: true
+      target: /opt/tools/transmogrifier
+      hard: false
+```
+
+#### 7.4.4 Writing systemd Services: Starting and Stopping Containers
+
+```YAML
+variant: nestos
+version: 1.0.0
+systemd:
+  units:
+    - name: hello.service
+      enabled: true
+      contents: |
+        [Unit]
+        Description=MyApp
+        After=network-online.target
+        Wants=network-online.target
+
+        [Service]
+        TimeoutStartSec=0
+        ExecStartPre=-/bin/podman kill busybox1
+        ExecStartPre=-/bin/podman rm busybox1
+        ExecStartPre=/bin/podman pull busybox
+        ExecStart=/bin/podman run --name busybox1 busybox /bin/sh -c "trap 'exit 0' INT TERM; while true; do echo Hello World; sleep 1; done"
+
+        [Install]
+        WantedBy=multi-user.target
+```
+
+### 7.5 Pre-Integration of Ignition Files
+
+The NestOS build toolchain enables users to customize images based on specific use cases and requirements. After creating the image, nestos-installer offers various features for customizing image deployment and application, such as pre-integrating Ignition files, pre-allocating installation locations, and modifying kernel parameters. Below, we introduce the main functionalities.
+
+#### 7.5.1 Pre-Integration of Ignition Files into ISO Images
+
+Prepare the NestOS ISO image locally and install the nestos-installer package. Edit **example.yaml** and use the Butane tool to convert it into an Ignition file. 
In this example, we configure a simple username and password (the password must be encrypted; the example uses `qwer1234`), as shown below:
+
+```YAML
+variant: nestos
+version: 1.0.0
+passwd:
+  users:
+    - name: root
+      password_hash: "$1$root$CPjzNGH.NqmQ7rh26EeXv1"
+```
+
+After converting the YAML file into an Ignition file, execute the following command to embed the Ignition file and specify the target drive location. Replace `xxx.iso` with the local NestOS ISO image:
+
+```shell
+nestos-installer iso customize --dest-device /dev/sda --dest-ignition example.ign xxx.iso
+```
+
+When installing using the ISO image with the embedded Ignition file, NestOS will automatically read the Ignition file and install it to the target drive. Once the progress bar reaches 100%, the system will automatically boot into the installed NestOS environment. Users can log in using the username and password configured in the Ignition file.
+
+#### 7.5.2 Pre-Integration of Ignition Files into PXE Images
+
+Prepare the NestOS PXE image locally. See [Section 6.5](#65-artifacts-acquisition) for details on obtaining the components. The remaining steps are the same as above.
+
+To simplify the process for users, nestos-installer also supports extracting PXE components from an ISO image. Execute the following command, replacing `xxx.iso` with the local NestOS ISO image:
+
+```shell
+nestos-installer iso extract pxe xxx.iso
+```
+
+This will generate the following output files:
+
+```text
+xxx-initrd.img
+xxx-rootfs.img
+xxx-vmlinuz
+```
+
+Execute the following command to pre-integrate the Ignition file and specify the target drive location:
+
+```shell
+nestos-installer pxe customize --dest-device /dev/sda --dest-ignition example.ign xxx-initrd.img --output custom-initrd.img
+```
+
+Replace `xxx-initrd.img` with `custom-initrd.img` according to the PXE installation method for NestOS. After booting, NestOS will automatically read the Ignition file and install it to the target drive. 
Once the progress bar reaches 100%, the system will automatically boot into the installed NestOS environment. Users can log in using the username and password configured in the Ignition file. + +## 8. Deployment Process + +### 8.1 Introduction + +NestOS supports multiple deployment platforms and common deployment methods, currently focusing on QCOW2, ISO, and PXE. Compared to general-purpose OS deployments, the main difference lies in how to pass custom deployment configurations characterized by Ignition files. The following sections will introduce these methods in detail. + +### 8.2 Installation Using QCOW2 Images + +#### 8.2.1 Creating a QCOW2 Instance with QEMU + +Prepare the NestOS QCOW2 image and the corresponding Ignition file (see [Section 7](#7-deployment-configuration) for details). Execute the following commands in the terminal: + +```shell +IGNITION_CONFIG="/path/to/example.ign" +IMAGE="/path/to/image.qcow2" +IGNITION_DEVICE_ARG="-fw_cfg name=opt/com.coreos/config,file=${IGNITION_CONFIG}" + +qemu-img create -f qcow2 -F qcow2 -b ${IMAGE} my-nestos-vm.qcow2 +``` + +For the AArch64 environment, execute the following command: + +```shell +qemu-kvm -m 2048 -M virt -cpu host -nographic -drive if=virtio,file=my-nestos-vm.qcow2 ${IGNITION_DEVICE_ARG} -nic user,model=virtio,hostfwd=tcp::2222-:22 -bios /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw +``` + +For the x86_64 environment, execute the following command: + +```shell +qemu-kvm -m 2048 -M pc -cpu host -nographic -drive if=virtio,file=my-nestos-vm.qcow2 ${IGNITION_DEVICE_ARG} -nic user,model=virtio,hostfwd=tcp::2222-:22 +``` + +#### 8.2.2 Creating a QCOW2 Instance with virt-install + +Assuming the libvirt service is running normally and the network uses the default subnet bound to the `virbr0` bridge, you can follow these steps to create a NestOS instance. + +Prepare the NestOS QCOW2 image and the corresponding Ignition file (see [Section 7](#7-deployment-configuration) for details). 
Execute the following commands in the terminal:
+
+```shell
+IGNITION_CONFIG="/path/to/example.ign"
+IMAGE="/path/to/image.qcow2"
+VM_NAME="nestos"
+VCPUS="4"
+RAM_MB="4096"
+DISK_GB="10"
+IGNITION_DEVICE_ARG=(--qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${IGNITION_CONFIG}")
+```
+
+**Note: When using virt-install, the QCOW2 image and Ignition file must be specified with absolute paths.**
+
+Execute the following command to create the instance:
+
+```shell
+virt-install --connect="qemu:///system" --name="${VM_NAME}" --vcpus="${VCPUS}" --memory="${RAM_MB}" --os-variant="kylin-hostos10.0" --import --graphics=none --disk="size=${DISK_GB},backing_store=${IMAGE}" --network bridge=virbr0 "${IGNITION_DEVICE_ARG[@]}"
+```
+
+### 8.3 Installation Using ISO Images
+
+Prepare the NestOS ISO image and boot it. The first boot of the NestOS ISO image will default to the Live environment, which is a volatile memory-based environment.
+
+#### 8.3.1 Installing the OS to the Target Drive Using the nestos-installer Wizard Script
+
+1. In the NestOS live environment, follow the printed instructions upon first entry. Enter the following command to automatically generate a simple Ignition file and proceed with the installation and reboot:
+
+   ```shell
+   sudo installnestos
+   ```
+
+2. Follow the terminal prompts to enter the username and password.
+
+3. Select the target drive installation location. Press **Enter** to use the default option **/dev/sda**.
+
+4. After completing the above steps, nestos-installer will begin installing NestOS to the target drive based on the provided configuration. Once the progress bar reaches 100%, the system will automatically reboot.
+
+5. After rebooting, the system will automatically enter NestOS. Press **Enter** at the GRUB menu or wait 5 seconds to boot the system. Log in using the previously configured username and password. The installation is now complete. 
+ +#### 8.3.2 Manually Installing the OS to the Target Drive Using the nestos-installer Command + +1. Prepare the Ignition file **example.ign** (see [Section 7](#7-deployment-configuration) for details). + +2. Follow the printed instructions upon first entry into the NestOS live environment. Enter the following command to begin the installation: + + ```shell + sudo nestos-installer install /dev/sda --ignition-file example.ign + ``` + + If network access is available, the Ignition file can also be retrieved via a URL, for example: + + ```shell + sudo nestos-installer install /dev/sda --ignition-file http://www.example.com/example.ign + ``` + +3. After executing the above command, nestos-installer will begin installing NestOS to the target drive based on the provided configuration. Once the progress bar reaches 100%, the system will automatically reboot. + +4. After rebooting, the system will automatically enter NestOS. Press **Enter** at the GRUB menu or wait 5 seconds to boot the system. Log in using the previously configured username and password. The installation is now complete. + +### 8.4 PXE Deployment + +The PXE installation components for NestOS include the kernel, **initramfs.img**, and **rootfs.img**. These components are generated using the `nosa buildextend-live` command (see [Section 6](#6-build-process) for details). + +1. Use the PXELINUX `KERNEL` command to specify the kernel. A simple example is as follows: + + ```shell + KERNEL nestos-live-kernel-x86_64 + ``` + +2. Use the PXELINUX `APPEND` command to specify the initrd and rootfs. A simple example is as follows: + + ```shell + APPEND initrd=nestos-live-initramfs.x86_64.img,nestos-live-rootfs.x86_64.img + ``` + + **Note: If you have pre-integrated the Ignition file into the PXE components as described in [Section 7.5](#75-pre-integration-of-ignition-files), you only need to replace it here and skip the subsequent steps.** + +3. Specify the installation location. 
For example, to use **/dev/sda**, append the following to the `APPEND` command:
+
+   ```ini
+   nestos.inst.install_dev=/dev/sda
+   ```
+
+4. Specify the Ignition file, which must be retrieved over the network. Append the corresponding URL to the `APPEND` command, for example:
+
+   ```ini
+   nestos.inst.ignition_url=http://www.example.com/example.ign
+   ```
+
+5. After booting, NestOS will automatically read the Ignition file and install the OS to the target drive. Once the progress bar reaches 100%, the system will automatically boot into the installed NestOS environment. Users can log in using the username and password configured in the Ignition file.
+
+## 9. Basic Usage
+
+### 9.1 Introduction
+
+NestOS employs an OS packaging solution based on ostree and rpm-ostree technologies, setting critical directories to read-only mode to prevent accidental modifications to core system files and configurations. Leveraging the overlay layering concept, it allows users to manage RPM packages on top of the base ostree filesystem without disrupting the initial system architecture. Additionally, it supports building OCI-format images, enabling OS version switching at the granularity of images.
+
+### 9.2 SSH Connection
+
+For security reasons, NestOS does not support password-based SSH login by default and only allows key-based authentication. This design enhances system security by mitigating risks associated with password leaks or weak password attacks.
+
+The method for establishing an SSH connection using keys in NestOS is the same as in openEuler. If users need to temporarily enable password-based login, they can follow these steps:
+
+1. Edit the additional configuration file of the SSH service:
+
+   ```shell
+   vi /etc/ssh/sshd_config.d/40-disable-passwords.conf
+   ```
+
+2. Modify the default `PasswordAuthentication` setting as follows:
+
+   ```shell
+   PasswordAuthentication yes
+   ```
+
+3. Restart the sshd service to temporarily enable password-based SSH login. 
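The three steps above can be sketched as follows. The snippet operates on a temporary copy of the drop-in file purely for illustration; on a real system you would edit the file in place as root and then restart sshd:

```shell
#!/bin/bash
# Illustrative sketch: flip PasswordAuthentication from "no" to "yes".
conf=$(mktemp)
echo "PasswordAuthentication no" > "$conf"    # default NestOS policy
sed -i 's/^PasswordAuthentication no$/PasswordAuthentication yes/' "$conf"
cat "$conf"
# On a real system, edit /etc/ssh/sshd_config.d/40-disable-passwords.conf
# instead of a temporary file, then run: systemctl restart sshd
```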
+ +### 9.3 RPM Package Installation + +**Note: Immutable OS discourages installing software packages in the runtime environment. This method is provided only for temporary debugging scenarios. For service requirements that necessitate changes to the integrated package list, rebuild the OS by updating the build configuration.** + +NestOS does not support conventional package managers like Yum or DNF. Instead, it uses rpm-ostree to manage system updates and package installations. rpm-ostree combines the advantages of image-based and package-based management, allowing users to layer and manage RPM packages on top of the base OS without disrupting its initial structure. Use the following command to install an RPM package: + +```shell +rpm-ostree install <package-name> +``` + +After installation, reboot the OS. The bootloader menu will display two branches, with the first branch being the latest by default: + +```shell +systemctl reboot +``` + +After rebooting, check the system package layering status to confirm that the package has been installed in the current version: + +```shell +rpm-ostree status -v +``` + +### 9.4 Version Rollback + +After an update or RPM package installation, the previous version of the OS deployment remains on the drive. If the update causes issues, users can manually roll back to a previous version using rpm-ostree. The specific process is as follows: + +#### 9.4.1 Temporary Rollback + +To temporarily roll back to a previous OS deployment, hold down the **Shift** key during system boot. When the bootloader menu appears, select the corresponding branch (by default, there are two branches; choose the other one). Before doing this, you can use the following command to view the two existing version branches in the current environment: + +```shell +rpm-ostree status +``` + +#### 9.4.2 Permanent Rollback + +To permanently roll back to a previous OS deployment, run the following command in the current version.
This operation sets the system deployment of the previous version as the default deployment. + +```shell +rpm-ostree rollback +``` + +Reboot to apply the changes. The default deployment option in the bootloader menu will have changed, eliminating the need for manual switching. + +```shell +systemctl reboot +``` + +## 10. Container Image-Based Updates + +### 10.1 Use Case Description + +NestOS, as a container cloud base OS based on the immutable infrastructure concept, distributes and updates the file system as a whole. This approach brings significant convenience in terms of operations and security. However, in real-world production environments, the officially released versions often fail to meet user requirements. For example, users may want to integrate self-maintained critical foundational components by default or further trim software packages to reduce system runtime overhead based on specific scenarios. Therefore, compared to general-purpose OSs, users have stronger and more frequent customization needs for NestOS. + +nestos-assembler can provide OCI-compliant container images. Beyond simply packaging and distributing the root file system, leveraging the ostree native container feature allows container cloud users to utilize familiar technology stacks. By writing a single Containerfile (Dockerfile), users can easily build customized images for integrating custom components or subsequent upgrade and maintenance tasks. + +### 10.2 Usage + +#### 10.2.1 Customizing Images + +- Basic steps + +1. Refer to [Section 6](#6-build-process) to build the NestOS container image, and use the `nosa push-container` command to push it to a public or private container image registry. +2. Write a Containerfile (Dockerfile) as shown in the following example: + + ```dockerfile + FROM registry.example.com/nestos:1.0.20240603.0-x86_64 + + # Perform custom build steps, such as installing software or copying self-built components.
+ # Here, installing the strace package is used as an example. + RUN rpm-ostree install strace && rm -rf /var/cache && ostree container commit + ``` + +3. Run `docker build` or integrate it into CI/CD to build the corresponding image. + + > Note: + > 1. NestOS does not have the yum/dnf package manager. If software packages need to be installed, use the `rpm-ostree install` command to install local RPM packages or software provided in the repository. + > 2. If needed, you can also modify the software source configurations in the `/etc/yum.repos.d/` directory. + > 3. Each meaningful build command should end with `&& ostree container commit`. From the perspective of container image build best practices, it is recommended to minimize the number of RUN layers. + > 4. During the build process, contents outside the /usr and /etc directories are cleaned up. Therefore, customization via container images is primarily suitable for software package or component updates. Do not use this method for system maintenance or configuration changes (e.g., adding users with `useradd`). + +#### 10.2.2 Deploying/Upgrading Images + +Assume that the container image built in the above steps is pushed as `registry.example.com/nestos:1.0.20240903.0-x86_64`. + +In an environment where NestOS is already deployed, execute the following command: + +```shell +sudo rpm-ostree rebase ostree-unverified-registry:registry.example.com/nestos:1.0.20240903.0-x86_64 +``` + +Reboot to complete the deployment of the customized version. + +After deployment via the container image method is complete, the update source of `rpm-ostree upgrade` switches by default from the ostree update source to the container image address. Subsequently, you can update the container image under the same tag. Using `rpm-ostree upgrade` will detect if the remote image has been updated. If changes are detected, it will pull the latest image and complete the deployment.
diff --git a/docs/en/docs/NestOS/overview.md b/docs/en/Cloud/NestOS/NestOS/overview.md similarity index 87% rename from docs/en/docs/NestOS/overview.md rename to docs/en/Cloud/NestOS/NestOS/overview.md index a247a64c9fabc01cc59173d42a213bd2d4181c52..b6bcb85ccb0c4b4fd3161b419dd02d16257891bc 100644 --- a/docs/en/docs/NestOS/overview.md +++ b/docs/en/Cloud/NestOS/NestOS/overview.md @@ -1,4 +1,3 @@ # NestOS User Guide -This document describes the installation, deployment, features, and usage of the NestOS cloud-based operating system. NestOS runs common container engines, such as Docker, iSula, PodMan, and CRI-O, and integrates technologies such as Ignition, rpm-ostree, OCI runtime, and SELinux. NestOS adopts the design principles of dual-system partitions, container technology, and cluster architecture. It can adapt to multiple basic running environments in cloud scenarios.In addition, NestOS optimizes Kubernetes and provides support for platforms such as OpenStack and oVirt for IaaS ecosystem construction. In terms of PaaS ecosystem construction, platforms such as OKD and Rancher are supported for easy deployment of clusters and secure running of large-scale containerized workloads. To download NestOS images, visit the [NestOS Repository](https://gitee.com/openeuler/NestOS). - +This document describes the installation, deployment, features, and usage of the NestOS cloud-based operating system. NestOS runs common container engines, such as Docker, iSula, PodMan, and CRI-O, and integrates technologies such as Ignition, rpm-ostree, OCI runtime, and SELinux. NestOS adopts the design principles of dual-system partitions, container technology, and cluster architecture. It can adapt to multiple basic running environments in cloud scenarios. In addition, NestOS optimizes Kubernetes and provides support for platforms such as OpenStack and oVirt for IaaS ecosystem construction.
In terms of PaaS ecosystem construction, platforms such as OKD and Rancher are supported for easy deployment of clusters and secure running of large-scale containerized workloads. To download NestOS images, see [NestOS](https://nestos.openeuler.org/). diff --git a/docs/en/docs/Container/container.md b/docs/en/Cloud/container.md similarity index 59% rename from docs/en/docs/Container/container.md rename to docs/en/Cloud/container.md index d30a929ed1d4839b78b8922d4b4808e755128878..71519947009374affc96538f66aeb68b843792bf 100644 --- a/docs/en/docs/Container/container.md +++ b/docs/en/Cloud/container.md @@ -6,9 +6,9 @@ openEuler provides software packages of iSulad and Docker container engines. The following container forms are provided for different application scenarios: -- Common containers applicable to most common scenarios -- Secure containers applicable to strong isolation and multi-tenant scenarios -- System containers applicable to scenarios where the systemd is used to manage services +- Common containers applicable to most common scenarios +- Secure containers applicable to strong isolation and multi-tenant scenarios +- System containers applicable to scenarios where the systemd is used to manage services This document describes how to install and use the container engines and how to deploy and use containers in different forms. @@ -16,5 +16,5 @@ This document describes how to install and use the container engines and how to This document is intended for openEuler users who need to install containers. You can better understand this document if you: -- Be familiar with basic Linux operations. -- Have a basic understanding of containers. +- Be familiar with basic Linux operations. +- Have a basic understanding of containers. 
diff --git a/docs/en/docs/K3s/K3s-deployment-guide.md b/docs/en/EdgeComputing/K3s/K3s-deployment-guide.md similarity index 93% rename from docs/en/docs/K3s/K3s-deployment-guide.md rename to docs/en/EdgeComputing/K3s/K3s-deployment-guide.md index e61d7082278eaae6b3dcc246d7a60517a524450e..11dd4c263eb74dc90098950b45382a115bead865 100644 --- a/docs/en/docs/K3s/K3s-deployment-guide.md +++ b/docs/en/EdgeComputing/K3s/K3s-deployment-guide.md @@ -1,7 +1,9 @@ # K3s Deployment Guide -### What Is K3s? +## What Is K3s + K3s is a lightweight Kubernetes distribution that is optimized for edge computing and IoT scenarios. The K3s provides the following enhanced features: + - Packaged as a single binary file. - Uses SQLite3-based lightweight storage backend as the default storage mechanism and supports etcd3, MySQL, and PostgreSQL. - Encapsulated in a simple launcher that handles various complex TLS and options. @@ -10,7 +12,8 @@ K3s is a lightweight Kubernetes distribution that is optimized for edge computin - Encapsulates all operations of the Kubernetes control plane in a single binary file and process, capable of automating and managing complex cluster operations including certificate distribution. - Minimizes external dependencies and requires only kernel and cgroup mounting. -### Application Scenarios +## Application Scenarios + K3s is applicable to the following scenarios: - Edge computing @@ -22,9 +25,9 @@ K3s is applicable to the following scenarios: The resources required for running K3s are small. Therefore, K3s is also suitable for development and test scenarios. In these scenarios, K3s facilitates function verification and problem reproduction by shortening cluster startup time and reducing resources consumed by the cluster. -### Deploying K3s +## Deploying K3s -#### Preparations +### Preparations - Ensure that the host names of the server node and agent node are different. 
@@ -38,20 +41,21 @@ You can run the `hostnamectl set-hostname "host name"` command to change the hos ![1661830441538](./figures/yum-install.png) -#### Deploying the Server Node +### Deploying the Server Node To install K3s on a single server, run the following command on the server node: -``` + +```shell INSTALL_K3S_SKIP_DOWNLOAD=true k3s-install.sh ``` ![1661825352724](./figures/server-install.png) -#### Checking Server Deployment +### Checking Server Deployment ![1661825403705](./figures/check-server.png) -#### Deploying the Agent Node +### Deploying the Agent Node Query the token value of the server node. The token is stored in the **/var/lib/rancher/k3s/server/node-token** file on the server node. @@ -63,17 +67,17 @@ Query the token value of the server node. The token is stored in the **/var/lib/ Add agents. Run the following command on each agent node: -``` +```shell INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken k3s-install.sh ``` > **Note:** -> +> > Replace **myserver** with the IP address of the server or a valid DNS, and replace **mynodetoken** with the token of the server node. ![1661829392357](./figures/agent-install.png) -#### Checking Agent Deployment +### Checking Agent Deployment After the installation is complete, run `kubectl get nodes` on the server node to check if the agent node is successfully registered. @@ -81,6 +85,6 @@ After the installation is complete, run `kubectl get nodes` on the server node t A basic K3S cluster is set up. -#### More +### More For details about how to use K3s, visit the K3s [official website](https://rancher.com/docs/k3s/latest/en/). 
diff --git a/docs/en/EdgeComputing/K3s/Menu/index.md b/docs/en/EdgeComputing/K3s/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..eef635b3a981067934c51b0ceeea640759705c22 --- /dev/null +++ b/docs/en/EdgeComputing/K3s/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [K3s Deployment Guide]({{< relref "./K3s-deployment-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/K3s/figures/agent-install.png b/docs/en/EdgeComputing/K3s/figures/agent-install.png similarity index 100% rename from docs/en/docs/K3s/figures/agent-install.png rename to docs/en/EdgeComputing/K3s/figures/agent-install.png diff --git a/docs/en/docs/K3s/figures/check-agent.png b/docs/en/EdgeComputing/K3s/figures/check-agent.png similarity index 100% rename from docs/en/docs/K3s/figures/check-agent.png rename to docs/en/EdgeComputing/K3s/figures/check-agent.png diff --git a/docs/en/docs/K3s/figures/check-server.png b/docs/en/EdgeComputing/K3s/figures/check-server.png similarity index 100% rename from docs/en/docs/K3s/figures/check-server.png rename to docs/en/EdgeComputing/K3s/figures/check-server.png diff --git a/docs/en/docs/K3s/figures/server-install.png b/docs/en/EdgeComputing/K3s/figures/server-install.png similarity index 100% rename from docs/en/docs/K3s/figures/server-install.png rename to docs/en/EdgeComputing/K3s/figures/server-install.png diff --git a/docs/en/docs/K3s/figures/set-hostname.png b/docs/en/EdgeComputing/K3s/figures/set-hostname.png similarity index 100% rename from docs/en/docs/K3s/figures/set-hostname.png rename to docs/en/EdgeComputing/K3s/figures/set-hostname.png diff --git a/docs/en/docs/K3s/figures/token.png b/docs/en/EdgeComputing/K3s/figures/token.png similarity index 100% rename from docs/en/docs/K3s/figures/token.png rename to docs/en/EdgeComputing/K3s/figures/token.png diff --git a/docs/en/docs/K3s/figures/yum-install.png b/docs/en/EdgeComputing/K3s/figures/yum-install.png similarity index 100% rename from 
docs/en/docs/K3s/figures/yum-install.png rename to docs/en/EdgeComputing/K3s/figures/yum-install.png diff --git a/docs/en/EdgeComputing/KubeEdge/Menu/index.md b/docs/en/EdgeComputing/KubeEdge/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..ef25e9d01b12514633b1aac11595fd3b2a3722fa --- /dev/null +++ b/docs/en/EdgeComputing/KubeEdge/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [KubeEdge User Guide]({{< relref "./overview.md" >}}) + - [KubeEdge Usage Guide]({{< relref "./kubeedge-usage-guide.md" >}}) + - [KubeEdge Deployment Guide]({{< relref "./kubeedge-deployment-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/KubeEdge/kubeedge-deployment-guide.md b/docs/en/EdgeComputing/KubeEdge/kubeedge-deployment-guide.md similarity index 100% rename from docs/en/docs/KubeEdge/kubeedge-deployment-guide.md rename to docs/en/EdgeComputing/KubeEdge/kubeedge-deployment-guide.md diff --git a/docs/en/docs/KubeEdge/kubeedge-usage-guide.md b/docs/en/EdgeComputing/KubeEdge/kubeedge-usage-guide.md similarity index 100% rename from docs/en/docs/KubeEdge/kubeedge-usage-guide.md rename to docs/en/EdgeComputing/KubeEdge/kubeedge-usage-guide.md diff --git a/docs/en/docs/KubeEdge/overview.md b/docs/en/EdgeComputing/KubeEdge/overview.md similarity index 63% rename from docs/en/docs/KubeEdge/overview.md rename to docs/en/EdgeComputing/KubeEdge/overview.md index 8b81038bda1ac5fb4618c68678351f99e1b3c63b..5b0b219c468dfdb9ce346ba1c0b86f61a13ef152 100644 --- a/docs/en/docs/KubeEdge/overview.md +++ b/docs/en/EdgeComputing/KubeEdge/overview.md @@ -1,3 +1,3 @@ # KubeEdge User Guide -This document describes how to deploy and use the KubeEdge edge computing platform for users and administrators. \ No newline at end of file +This document describes how to deploy and use the KubeEdge edge computing platform for users and administrators. 
diff --git a/docs/en/EdgeComputing/Menu/index.md b/docs/en/EdgeComputing/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..37d82ca1f35ebf059f2b1e3fb13d725ed4590740 --- /dev/null +++ b/docs/en/EdgeComputing/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [KubeEdge User Guide]({{< relref "./KubeEdge/Menu/index.md" >}}) +- [K3s User Guide]({{< relref "./K3s/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Embedded/Menu/index.md b/docs/en/Embedded/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..c72dd3e97b5557a0cc0af60cef1757708c171244 --- /dev/null +++ b/docs/en/Embedded/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- + +- [openEuler Embedded User Guide](https://pages.openeuler.openatom.cn/embedded/docs/build/html/master/index.html) +- [UniProton User Guide]({{< relref "./UniProton/Menu/index.md" >}}) diff --git a/docs/en/Embedded/UniProton/Menu/index.md b/docs/en/Embedded/UniProton/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..2c22817922dbb913bb5d3be10575a9f6237c3c2f --- /dev/null +++ b/docs/en/Embedded/UniProton/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [UniProton User Guide]({{< relref "./uniproton-user-guide.md" >}}) + - [UniProton Feature Design]({{< relref "./uniproton-feature-design.md" >}}) + - [UniProton Interfaces]({{< relref "./uniproton-interfaces.md" >}}) \ No newline at end of file diff --git a/docs/en/Embedded/UniProton/figures/FCS.png b/docs/en/Embedded/UniProton/figures/FCS.png new file mode 100644 index 0000000000000000000000000000000000000000..08e6b0a2156f7dd751ce36e2378a7242e97f89b3 Binary files /dev/null and b/docs/en/Embedded/UniProton/figures/FCS.png differ diff --git a/docs/en/Embedded/UniProton/figures/MemoryApplication.png b/docs/en/Embedded/UniProton/figures/MemoryApplication.png new file mode 100644 index 
0000000000000000000000000000000000000000..533dbdfce75046db700ac9cb1560cffaaf204dd3 Binary files /dev/null and b/docs/en/Embedded/UniProton/figures/MemoryApplication.png differ diff --git a/docs/en/Embedded/UniProton/figures/MemoryRelease.png b/docs/en/Embedded/UniProton/figures/MemoryRelease.png new file mode 100644 index 0000000000000000000000000000000000000000..954743ce27dd9b74d9ec4d0f45e1e5185ad696f0 Binary files /dev/null and b/docs/en/Embedded/UniProton/figures/MemoryRelease.png differ diff --git a/docs/en/Embedded/UniProton/figures/pend_semaphore.png b/docs/en/Embedded/UniProton/figures/pend_semaphore.png new file mode 100644 index 0000000000000000000000000000000000000000..59d8159d1ff1cecb43f59cc5d7c5a9900db8e767 Binary files /dev/null and b/docs/en/Embedded/UniProton/figures/pend_semaphore.png differ diff --git a/docs/en/Embedded/UniProton/figures/post_semaphore.png b/docs/en/Embedded/UniProton/figures/post_semaphore.png new file mode 100644 index 0000000000000000000000000000000000000000..fa08d76dafd335b60838dda08db61ccadd8c6b8d Binary files /dev/null and b/docs/en/Embedded/UniProton/figures/post_semaphore.png differ diff --git a/docs/en/Embedded/UniProton/uniproton-feature-design.md b/docs/en/Embedded/UniProton/uniproton-feature-design.md new file mode 100644 index 0000000000000000000000000000000000000000..2f3a95b333b6877403d409be5698a2b664e39e6f --- /dev/null +++ b/docs/en/Embedded/UniProton/uniproton-feature-design.md @@ -0,0 +1,156 @@ +# UniProton Feature Design + + + +- [UniProton Feature Design](#uniproton-feature-design) + - [Task Management](#task-management) + - [Event Management](#event-management) + - [Queue Management](#queue-management) + - [Hard Interrupt Management](#hard-interrupt-management) + - [Memory Management](#memory-management) + - [FSC Memory Algorithm](#fsc-memory-algorithm) + - [Core Idea](#core-idea) + - [Memory Application](#memory-application) + - [Memory Release](#memory-release) + - [Timer Management](#timer-management) + - 
[Semaphore Management](#semaphore-management) + - [Exception Management](#exception-management) + - [CPU Usage Statistics](#cpu-usage-statistics) + - [STM32F407ZGT6 Development Board Support](#stm32f407zgt6-development-board-support) + - [OpenAMP Hybrid Deployment](#openamp-hybrid-deployment) + - [POSIX Standard APIs](#posix-standard-apis) + +## Task Management + +UniProton is a single-process multi-thread operating system (OS). In UniProton, a task represents a thread. Tasks in UniProton are scheduled in preemption mode instead of time slice rotation scheduling. High-priority tasks can interrupt low-priority tasks. Low-priority tasks can be scheduled only after high-priority tasks are suspended or blocked. + +A total of 32 priorities are defined, with priority 0 being the highest and 31 being the lowest. Multiple tasks can be created at the same priority. + +The task management module of UniProton provides the following functions: + +- Creates, deletes, suspends, resumes, and delays tasks. +- Locks and unlocks task scheduling. +- Obtains the current task ID. +- Obtains and sets task private data. +- Queries the pending semaphore ID of a specified task. +- Queries the status, context, and general information of a specified task. +- Obtains and sets task priorities. +- Adjusts the task scheduling order of a specified priority. +- Registers and unregisters hooks for task creation, deletion, and switching. + +During initialization, UniProton creates an idle task with the lowest priority by default. When no task is in the running status, the idle task is executed. + +## Event Management + +The event mechanism enables communication between threads. Event communication can only be event notifications and no data is transmitted. + +As an extension of tasks, events allow tasks to communicate with each other. Each task supports 32 event types, each represented by a bit of a 32-bit value. + +UniProton can read current task events and write specified task events.
Multiple event types can be read or written at one time. + +## Queue Management + +A queue, also called message queue, is a method commonly used for inter-thread communication to store and transfer data. Data can be written to the head or tail of a queue based on the priority, but can be read only from the head of a queue. + +When creating a queue, UniProton allocates memory space for the queue based on the queue length and message unit size input by the user. The queue control block contains **Head** and **Tail** pointers, which indicate the storage status of data in a queue. **Head** indicates the start position of occupied message nodes in the queue. **Tail** indicates the end position of the occupied message nodes in the queue. + +## Hard Interrupt Management + +A hardware interrupt is a level signal that is triggered by hardware and affects system running. A hardware interrupt is used to notify the CPU of a hardware event. Hardware interrupts include maskable interrupts and non-maskable interrupts (NMIs). + +Hardware interrupts have different internal priorities, but they all have a higher priority than other tasks. When multiple hardware interrupts are triggered at the same time, the hardware interrupt with the highest priority is always responded first. Whether a high-priority hardware interrupt can interrupt a low-priority hardware interrupt that is being executed (that is, nested interrupts) depends on the chip platform. + +The OS creates a tick hardware interrupt during initialization for task delay and software timer purposes. The tick is essentially a hardware timer. + +## Memory Management + +Memory management is to dynamically divide and manage large memory areas allocated by users. When a section of a program needs to use the memory, the program calls the memory application function of the OS to obtain the memory block of a specified size. After using the memory, the program calls the memory release function to release the occupied memory. 
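The request/release cycle described above can be modeled with a minimal fixed-size block pool in C. This is an illustration only (the pool layout, names, and sizes are invented for the sketch), not UniProton's actual allocator:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy memory pool: a fixed number of fixed-size blocks chained on a
 * singly linked free list. Each idle block stores the link to the next
 * idle block inside itself, so no extra bookkeeping memory is needed. */
#define POOL_BLOCKS   8
#define POOL_BLK_SIZE 32

struct blk {
    union {
        struct blk *next;               /* valid while the block is idle */
        uint8_t payload[POOL_BLK_SIZE]; /* user data while occupied */
    } u;
};

static struct blk pool[POOL_BLOCKS];
static struct blk *free_list;

/* Chain every block onto the free list. */
static void pool_init(void)
{
    free_list = NULL;
    for (int i = 0; i < POOL_BLOCKS; i++) {
        pool[i].u.next = free_list;
        free_list = &pool[i];
    }
}

/* Memory application: pop one block, or NULL when the pool is exhausted. */
static void *pool_alloc(void)
{
    struct blk *b = free_list;
    if (b != NULL)
        free_list = b->u.next;
    return b;
}

/* Memory release: push the block back so it can be handed out again. */
static void pool_free(void *p)
{
    struct blk *b = p;
    b->u.next = free_list;
    free_list = b;
}
```

Reusing the first member of an idle block as the free-list link is the same trick the FSC control header described below plays: the field holds a next-free pointer while the block is idle and a magic number while it is occupied.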
+ +UniProton provides the FSC memory algorithm. The following table lists the advantages, disadvantages, and application scenarios of FSC. + +| Algorithm | Advantages | Disadvantages | Application Scenarios | +| :----------------------------------------------------------- | ------------------------------------------------------------ | ------------------------------ | ------------------------------------ | +| Private FSC algorithm| The memory control block information occupies a small amount of memory. The minimum 4-byte-aligned memory block size can be applied for. Adjacent memory blocks can be quickly split and merged without creating memory fragmentation.| The efficiency of memory application and release is low.| It can flexibly adapt to various product scenarios.| + +The FSC memory algorithm is described as follows: + +### FSC Memory Algorithm + +#### Core Idea + +The size of the requested memory is **uwSize**. If the size is in binary, it is expressed as **0b{0}1xxx**. **{0}** indicates that there may be one or more zeros before **1**. Regardless of the bits following the leading **1** (**xxx**), if the leading **1** is changed to **10** and every bit of **xxx** is changed to **0**, the result **10yyy** (where **yyy** is **xxx** with all bits cleared) is always greater than **1xxx**. In other words, any block whose size has its leftmost 1 one position higher is always large enough for the request. + +The subscript of the leftmost 1 can be directly obtained. The subscript values are 0 to 31 from the most significant bit to the least significant bit (BitMap), or 0 to 31 from the least significant bit to the most significant bit (uwSize). If the subscripts of the bits of the 32-bit register are 0 to 31 from the most significant bit to the least significant bit, the subscript of the leftmost 1 of 0x80004000 is 0. Therefore, we can maintain an idle linked list header array (the number of elements does not exceed 31). The subscript of the leftmost 1 of the memory block size is used as the index of the linked list header array.
That is, all memory blocks with the same subscript of the leftmost 1 are mounted to the same idle linked list. + +For example, the sizes of idle blocks that can be mounted to the linked list whose index is 2 are 4, 5, 6, and 7, and the sizes of idle blocks that can be mounted to the linked list whose index is N are 2^N to 2^(N+1)-1. + +![](./figures/FCS.png) + +#### Memory Application + +When applying for the memory of uwSize, use assembly instructions to obtain the subscript of the leftmost 1 first. Assume that the subscript is **n**. To ensure that the first idle memory block in the idle linked list meets the uwSize requirement, the search starts from the index n+1. If the idle linked list of index n+1 is not empty, the first idle block in the linked list is used. If the linked list of n+1 is empty, the linked list of n+2 is checked, and so on, until a non-empty linked list is found or the index reaches 31. + +A 32-bit BitMap global variable is defined to avoid looping over the idle linked lists to check whether each one is empty. If the idle linked list of index n is not empty, bit n of the BitMap is set to 1; otherwise, it is set to 0. Bit 31 of the BitMap is set to 1 directly during initialization. To find the first non-empty idle linked list starting from index n+1, bits 0 to n of a copy of the BitMap are cleared first, and then the subscript of the leftmost 1 of the copy is obtained. If the subscript is not equal to 31, it is the array index of the first non-empty idle linked list. + +All idle blocks are connected in series in the form of a bidirectional idle linked list. If the first idle block obtained from the linked list is large, that is, if the remaining space after a uwSize memory block is split off can satisfy at least one more allocation, the remaining idle block is added back to the corresponding idle linked list.
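The index arithmetic described above can be sketched in C. This is an illustrative model only (it uses GCC/Clang bit-scan builtins rather than the assembly instructions the text mentions, and assumes sizes stay below 2^31 so that bit 31 can serve as the sentinel):

```c
#include <stdint.h>

/* Index of the leftmost 1 of a block size, counted from the least
 * significant bit (i.e. floor(log2(size))). Free list n holds idle
 * blocks of size 2^n .. 2^(n+1)-1. The size must be non-zero. */
static int fsc_list_index(uint32_t size)
{
    return 31 - __builtin_clz(size);
}

/* First non-empty free list guaranteed to satisfy a request of `size`
 * bytes: clear bits 0..n of a copy of the bitmap, then take the
 * subscript of its lowest set bit. Bit 31 is the always-set sentinel,
 * so a result of 31 means no suitable list exists. */
static int fsc_search_index(uint32_t bitmap, uint32_t size)
{
    int n = fsc_list_index(size);
    uint32_t copy = (bitmap | (1u << 31)) & ~((2u << n) - 1u);
    return __builtin_ctz(copy);
}
```

For example, with only lists 2 and 5 non-empty, a 7-byte request (leftmost 1 at index 2) starts its search at index 3 and lands on list 5; if no list at index 3 or higher is non-empty, the sentinel index 31 is returned.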
+ +![](./figures/MemoryApplication.png) + +The memory control header records the size of the idle memory block (including the control header itself). The memory control header contains a reused member at the beginning. When a memory block is idle, it is used as a pointer to the next idle memory block. When a memory block is occupied, it stores a magic number, indicating that the memory block is not idle. To prevent the magic number from conflicting with the pointer (same as the address value), the upper and lower four bits of the magic number are 0xf. The start addresses of the allocated memory blocks are 4-byte-aligned. Therefore, no conflict occurs. + +#### Memory Release + +When the memory is released, adjacent idle blocks are combined. First, the validity of the address parameter (**pAddr**) is determined by checking the magic number in the control header. The start address of the control header of the next memory block is obtained by adding the start address to the offset value. If the next memory block is idle, the next memory block is deleted from the idle linked list to which it belongs, and the size of the current memory block is adjusted. + +To quickly find the control header of the previous memory block and determine whether the previous memory block is idle during memory release, a member is added to the memory control header to mark whether the previous memory block is idle. When the memory is applied for, the flag of the next memory block can be set to the occupied state (if the idle memory block is divided into two, and the previous memory block is idle, the flag of the current memory block is set to the idle state). When the memory is released, the flag of the next memory block is set to the idle state. When the current memory is released, if the previous memory block is marked as occupied, the previous memory block does not need to be merged; if the previous memory block is marked as idle, the previous memory block needs to be merged. 
If a memory block is idle, the flag of the next control block is set to the distance to the current control block. + + ![](./figures/MemoryRelease.png) + +## Timer Management + +UniProton provides the software timer function to meet the requirements of timing services. + +Software timers are based on the tick interrupts. Therefore, the period of a timer must be an integral multiple of the tick. The timeout scanning of the software timer is performed in the tick handler function. + +Currently, the software timer interface can be used to create, start, stop, restart, and delete timers. + +## Semaphore Management + +A semaphore is typically used to coordinate the access of a group of competing tasks to critical resources. When a mutex is required, the semaphore is used as a critical resource counter. Semaphores include intra-core semaphores and inter-core semaphores. + +The semaphore object has an internal counter that supports the following operations: + +- Pend: The Pend operation waits for the specified semaphore. If the counter value is greater than 0, it is decreased by 1 and a success message is returned. If the counter value of the semaphore is 0, the requesting task is blocked until another task releases the semaphore. The amount of time the task will wait for the semaphore is user configurable. + +- Post: The Post operation releases the specified semaphore. If no task is waiting for the semaphore, the counter is incremented by 1 and returned. Otherwise, the first task (the earliest blocked task) in the list of tasks pending for this semaphore is woken up. + +The counter value of a semaphore corresponds to the number of available resources, that is, the number of remaining mutually exclusive resources that can be occupied. The counter value can be: + +- 0, indicating that there is no accumulated post operation, and there may be a task blocked on the semaphore. + +- A positive value, indicating that there are one or more post release operations.
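The Pend/Post counter rules above can be captured in a small single-threaded C model. This is an illustration of the counting semantics only; the real kernel additionally suspends the blocked task and resumes the earliest waiter:

```c
#include <stdint.h>

/* Toy model of a counting semaphore: `count` is the number of available
 * resources, `waiters` the number of tasks that would be blocked. */
struct ksem {
    uint32_t count;
    uint32_t waiters;
};

/* Pend: take one resource immediately, or record that the caller
 * would block (returns -1 instead of actually suspending a task). */
static int sem_pend(struct ksem *s)
{
    if (s->count > 0) {
        s->count--;
        return 0; /* acquired immediately */
    }
    s->waiters++; /* the requesting task would block here */
    return -1;
}

/* Post: hand the resource straight to the earliest waiter if any,
 * otherwise accumulate it in the counter. */
static void sem_post(struct ksem *s)
{
    if (s->waiters > 0)
        s->waiters--; /* wake one blocked task; count stays unchanged */
    else
        s->count++;
}
```

Note that a Post while tasks are waiting does not increment the counter: the resource goes directly to the woken task, which is why a counter value of 0 can coexist with blocked tasks.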
+
+## Exception Management
+
+Exception takeover in UniProton is a maintenance and debugging feature that records as much information as possible when an exception occurs, to facilitate subsequent fault locating. An exception hook function is also provided so that users can perform special handling when an exception occurs. The exception takeover feature handles internal exceptions and external hardware exceptions.
+
+## CPU Usage Statistics
+
+The system CPU usage (CPU percentage, CPUP) in UniProton refers to the CPU usage of the system within a period of time. It reflects the CPU load and whether the system is idle or busy in that period. The valid range of the system CPUP is 0 to 10000, in basis points; 10000 indicates that the system is fully loaded.
+
+The thread CPUP refers to the CPU usage of a single thread and reflects whether the thread is busy or idle over a period of time. The valid range of the thread CPUP is 0 to 10000, in basis points; 10000 indicates that the thread occupies the CPU for the entire period. In a single-core system, the CPUP values of all threads (including interrupts and the idle task) sum to 10000.
+
+The system-level CPUP statistics of UniProton depend on the tick module and are implemented by sampling the idle task or idle software interrupt counters in the tick handler.
+
+## STM32F407ZGT6 Development Board Support
+
+The kernel peripheral startup process and board driver of UniProton support the STM32F407ZGT6 development board. The directory structure is as follows:
+
+├─apps # Demos based on the UniProton real-time OS
+│ └─hello_world # hello_world example program
+├─bsp # Board-level driver interconnecting with the OS
+├─build # Build scripts that produce the final image
+├─config # Configuration items for adjusting runtime parameters
+├─include # APIs provided by the UniProton real-time OS
+└─libs # Static libraries of the UniProton real-time OS.
The makefile example in the build directory already references the required header files and static libraries.
+
+## OpenAMP Hybrid Deployment
+
+OpenAMP is an open-source software framework that standardizes the interaction between environments in heterogeneous embedded systems through open-source, asymmetric multiprocessing solutions. OpenAMP consists of the following components:
+
+1. Remoteproc manages the life cycle of the slave core, the shared memory, and resources such as the buffers and vrings used for communication, and initializes RPMsg and virtio.
+2. RPMsg enables multi-core communication based on virtio.
+3. Virtio, a paravirtualization technology, uses a set of virtual I/O operations to implement driver communication between the master and slave cores.
+4. libmetal shields OS implementation details, provides common user APIs to access devices, and handles device interrupts and memory requests.
+
+## POSIX Standard APIs
+
+[UniProton supports POSIX standard APIs](./uniproton-interfaces.md). diff --git a/docs/en/Embedded/UniProton/uniproton-interfaces.md b/docs/en/Embedded/UniProton/uniproton-interfaces.md new file mode 100644 index 0000000000000000000000000000000000000000..8ce70e14d862cc3791e0bafdf2da09c1c684835e --- /dev/null +++ b/docs/en/Embedded/UniProton/uniproton-interfaces.md @@ -0,0 +1,6311 @@
+# UniProton Interfaces
+
+## Tasks
+
+### Creating and Activating a Task
+
+A task created before the OS starts (for example, in uniAppInit) is simply added to the ready queue. For a task created after the OS starts, if its priority is higher than that of the current task and task scheduling is not locked, a task switch occurs immediately and the task runs; otherwise, the task is added to the ready queue to wait for execution.
+
+**Input**: Task creation parameters, including the task name, task stack size, task priority, and task handling function.
+
+**Processing**:
+
+1. Apply for task stack space, initialize the task stack, and set the magic number at the stack top.
+2. Initialize the task context.
+3. Initialize the task control block (TCB).
+4. Activate the task. Whether the task runs immediately depends on whether the OS has started, whether the task priority is higher than that of the current task while task scheduling is unlocked, and whether the current thread is a hardware interrupt.
+
+**Output**:
+
+- Success: the task ID is returned. If the task is ready to run, it runs immediately; otherwise, it is added to the ready queue.
+
+- Failure: an error code is returned.
+
+### Deleting a Task
+
+Delete a task and release its resources.
+
+**Input**: Task ID.
+
+**Processing**:
+
+1. Check whether the task can be deleted; for example, deletion is not allowed while task scheduling is locked.
+2. If the task is blocked, remove it from the corresponding blocking queue.
+3. Release the task control block.
+4. Release the task stack space.
+5. 
Load the highest-priority task from the ready queue and, if scheduling is allowed, execute it.
+
+**Output**:
+
+- Success: if scheduling is allowed, the highest-priority task in the ready queue is executed.
+
+- Failure: an error code is returned.
+
+### Suspending a Task
+
+Suspend a task.
+
+**Input**: Task ID.
+
+**Processing**: Remove the specified task from the ready queue. If the task is in the Running state, a task switch is triggered.
+
+**Output**:
+
+- Success: the specified task is suspended.
+
+- Failure: an error code is returned.
+
+### Resuming a Task
+
+Resume a suspended task.
+
+**Input**: Task ID.
+
+**Processing**: Resume the suspended task. If the task is still delayed or blocked, only the suspended state is cleared; the task is not added to the ready queue.
+
+**Output**:
+
+- Success: the suspended state of the task is cleared.
+- Failure: an error code is returned.
+
+### Delaying a Task
+
+Delay the current task for a specified time.
+
+**Input**: Delay time.
+
+**Processing**:
+
+1. Convert the delay time into a number of OS ticks.
+2. Remove the current task from the ready queue and set it to the delayed state.
+3. Load the highest-priority task from the ready queue and execute it.
+4. In the tick interrupt handler, check whether the delay time of the task has elapsed; if so, add the task back to the ready queue.
+
+**Output**:
+
+- Success: the current task is switched out, and the highest-priority task in the ready queue is switched in.
+- Failure: an error code is returned.
+
+### Locking Task Scheduling
+
+Disable switching between tasks.
+
+**Input**: Request to lock task scheduling.
+
+**Processing**:
+
+1. If there is a pending task switch request, clear it.
+2. Increment the task scheduling lock count.
+
+**Output**: Tasks can no longer be switched.
+
+### Restoring the Lock/Unlock State of Task Scheduling
+
+Used in pairs with locking task scheduling. Whether task scheduling is unlocked depends on whether scheduling was allowed before the most recent lock.
+
+**Input**: Request to restore the lock/unlock state of task scheduling.
+
+**Processing**:
+
+1. Decrement the task scheduling lock count.
+2. If the lock count reaches 0, initiate task scheduling.
+
+**Output**: If task scheduling was allowed before the most recent lock, the highest-priority task is loaded from the ready queue and executed. Otherwise, the state is unchanged and no task switch can occur.
+
+### Checking Task PID Validity
+
+Check whether a specified task PID is valid.
+
+**Input**: Task PID.
+
+**Processing**: Check whether the input task PID exceeds the maximum task PID and whether the task has been created.
+
+**Output**:
+
+- TRUE: the task PID is valid.
+- FALSE: the task PID is invalid.
+
+### Obtaining Task Private Data
+
+Obtain the private data of the current task.
+
+**Input**: None.
+
+**Processing**: Return the task private data recorded in the task TCB.
+
+**Output**: Task private data.
+
+### Querying the Semaphore a Specified Task on the Local Core Is Pending On
+
+Query the ID of the semaphore that a specified task is pending on.
+
+**Input**: Task PID.
+
+**Processing**: Based on the task state and the task control block, determine whether the task is pending on a semaphore and, if so, which semaphore ID.
+
+**Output**:
+
+- Success: the semaphore ID is returned.
+- Failure: an error code is returned.
+
+### Querying the Task State
+
+Obtain the state of a specified task.
+
+**Input**: Task PID.
+
+**Processing**: Return the task state field recorded in the TCB of the specified task.
+
+**Output**:
+
+- Success: the task state information is returned.
+- Failure: an error code is returned.
+
+### Querying Task Context Information
+
+Obtain the context information of a specified task.
+
+**Input**: Task PID.
+
+**Processing**: Return the task context information recorded in the TCB of the specified task.
+
+**Output**:
+
+- Success: the task context information is returned.
+- Failure: an error code is returned.
+
+### Querying Basic Task Information
+
+Obtain basic task information, including the PC and SP at task switching, task state, priority, task stack size, stack top, and task name.
+
+**Input**: Task PID, and a buffer for storing the query result.
+
+**Processing**: Return the basic task information recorded in the TCB of the specified task.
+
+**Output**:
+
+- Success: the basic task information is returned.
+- Failure: an error code is returned.
+
+### Obtaining the Task Priority
+
+Obtain the priority of a specified task.
+
+**Input**: Task PID.
+
+**Processing**: 
Return the priority field recorded in the TCB of the specified task.
+
+**Output**:
+
+- Success: the task priority information is returned.
+- Failure: an error code is returned.
+
+### Setting the Task Priority
+
+Set the priority of a specified task.
+
+**Input**: Task PID and priority value.
+
+**Processing**: Store the input priority in the priority field of the specified task's TCB.
+
+**Output**:
+
+- Success: the priority of the specified task is modified.
+- Failure: an error code is returned.
+
+### Adjusting the Scheduling Order of Tasks at a Specified Priority
+
+Set the priority of a specified task and adjust the scheduling order.
+
+**Input**: The specified priority, the task whose scheduling order is to be adjusted, and a buffer for storing the ID of the task moved to the queue head.
+
+**Processing**: If the task to be adjusted is TASK_NULL_ID, the first ready task in the priority queue is moved to the tail; otherwise, the specified task is moved to the head of the priority queue.
+
+**Output**:
+
+- Success: the scheduling order of tasks at the specified priority is modified.
+- Failure: an error code is returned.
+
+## Events
+
+### Writing Events
+
+The write operation writes events of specified types to a specified task; multiple events can be written at a time.
+
+**Input**: Task ID and event number.
+
+**Processing**:
+
+1. Write the input events to the event field of the specified task.
+2. Check whether the destination task is waiting to receive events and whether the events it waits for meet the wakeup condition (that is, the events being read have occurred).
+3. If the wakeup condition is met, clear the task's event-reading state.
+4. Clear the task's timeout state.
+5. If the task is not suspended, add it to the ready queue and attempt task scheduling.
+
+**Output**:
+
+- Success: the events are written.
+- Failure: an error code is returned.
+
+### Reading Events
+
+The read operation reads one or more events according to the input event mask.
+
+**Input**: The event mask to read, the reading policy, the timeout, and a pointer for receiving the events.
+
+**Processing**:
+
+1. Record the event types to read for the current task according to the input event mask.
+2. Determine the reading mode: read all of the input events, or any one of them.
+3. Based on the reading mode, check whether the expected events are already available.
+4. Determine the waiting mode: in waiting mode, set the timeout accordingly; in non-waiting mode, the read fails immediately.
+5. If a blocking wait is required, remove the current task from the ready queue and perform task scheduling.
+6. After a successful read, clear the read event types and return them.
+
+**Output**:
+
+- Success: the events are read, and the event pointer is assigned.
+- Failure: an error code is returned.
+
+## Queues
+
+### Creating a Queue
+
+Create a queue, specifying the queue length and node size.
+
+**Input**: Number of queue nodes, size of each queue node, and a pointer to the queue ID.
+
+**Processing**:
+
+1. Apply for an idle queue resource.
+2. Apply for the memory required by the queue.
+3. Initialize the queue configuration.
+
+**Output**:
+
+- Success: the queue ID is returned.
+- Failure: an error code is returned.
+
+### Reading a Queue
+
+Read data from a specified queue.
+
+**Input**: Queue ID, buffer pointer, length pointer, and timeout.
+
+**Processing**:
+
+1. Obtain the control block of the specified queue.
+2. Acquire the queue read PEND flag and fill the buffer with queue data according to the buffer size.
+3. Update the queue head pointer.
+
+**Output**:
+
+- Success: the buffer is filled with queue data.
+- Failure: an error code is returned.
+
+### Writing a Queue
+
+Write data to a specified queue.
+
+**Input**: Queue ID, buffer pointer, buffer length, timeout, and priority.
+
+**Processing**:
+
+1. Obtain the control block of the specified queue.
+2. Acquire the queue write PEND flag, select a message node, initialize it, and copy in the data.
+3. Increment the queue read resource counter.
+
+**Output**:
+
+- Success: the data is written to the queue.
+- Failure: an error code is returned.
+
+### Deleting a Queue
+
+Delete a message queue and reclaim its resources.
+
+**Input**: Queue ID.
+
+**Processing**:
+
+1. Obtain the control block of the specified queue and ensure that the queue is not in use.
+2. 
Release the queue memory.
+
+**Output**:
+
+- Success: the queue is deleted.
+- Failure: an error code is returned.
+
+### Querying the Maximum Historical Usage of a Queue
+
+Obtain the maximum number of queue nodes ever used, from queue creation until deletion.
+
+**Input**: Queue ID and a pointer to the peak node usage.
+
+**Processing**:
+
+1. Obtain the control block of the specified queue.
+2. Assign the peak node usage to the pointer parameter.
+
+**Output**:
+
+- Success: the peak value is obtained.
+- Failure: an error code is returned.
+
+### Querying the Number of Pending Messages from a Specified Source PID
+
+Obtain, from a specified queue, the number of pending messages from a specified source PID.
+
+**Input**: Queue ID, thread PID, and a pointer to the number of pending messages.
+
+**Processing**:
+
+1. Obtain the control block of the specified queue.
+2. Traverse the queue to count the pending messages and assign the result to the pointer variable.
+
+**Output**:
+
+- Success: the number of pending messages is obtained.
+- Failure: an error code is returned.
+
+## Interrupts
+
+### Creating a Hardware Interrupt
+
+A hardware interrupt must be created before use.
+
+**Input**: Hardware interrupt creation parameters, such as the interrupt number (chip-specific), interrupt priority, and interrupt handling function.
+
+**Processing**: Set the interrupt priority and handling function according to the interrupt number.
+
+**Output**:
+
+- Success: after the interrupt is triggered, the CPU responds to it and calls back the interrupt handling function.
+- Failure: an error code is returned.
+
+### Setting Hardware Interrupt Attributes
+
+Before a hardware interrupt is created, its mode must be set: independent (#OS_HWI_MODE_ENGROSS) or combined (#OS_HWI_MODE_COMBINE).
+
+**Input**: Interrupt number and interrupt mode.
+
+**Processing**: Set the interrupt mode according to the interrupt number.
+
+**Output**:
+
+- Success: the mode of the specified interrupt is set.
+- Failure: an error code is returned.
+
+### Deleting a Hardware Interrupt
+
+Delete a hardware interrupt or event and deregister its handling function.
+
+**Input**: Interrupt number.
+
+**Processing**: Unbind the handling function from the specified interrupt number.
+
+**Output**: The interrupt is deleted. When the interrupt signal is triggered, the CPU does not respond to it.
+
+### Enabling a Hardware Interrupt
+
+Enable a specified hardware interrupt.
+
+**Input**: Interrupt number.
+
+**Processing**: Set the enable bit of the specified interrupt.
+
+**Output**: The specified interrupt is enabled. When the interrupt signal is triggered, the CPU responds to it.
+
+### Masking a Hardware Interrupt
+
+Mask a specified hardware interrupt.
+
+**Input**: Interrupt number.
+
+**Processing**: Clear the enable bit of the specified interrupt.
+
+**Output**: The specified interrupt is masked. When the interrupt signal is triggered, the CPU does not respond to it.
+
+### Restoring a Specified Hardware Interrupt
+
+Restore a specified hardware interrupt.
+
+**Input**: Interrupt number and the saved value of the interrupt enable register.
+
+**Processing**: Restore the enable bit of the specified interrupt.
+
+**Output**: The enable bit of the specified interrupt is restored to the given state.
+
+### Disabling Hardware Interrupts
+
+Disable responses to all maskable hardware interrupts.
+
+**Input**: Request to disable hardware interrupts.
+
+**Processing**:
+
+1. Record the system state for later restoration.
+2. Disable responses to all maskable hardware interrupts.
+
+**Output**:
+
+- No maskable hardware interrupt can be responded to.
+- The system state before hardware interrupts were disabled.
+
+### Restoring Hardware Interrupts
+
+Restore the disabled or enabled response state of hardware interrupts; used in pairs with disabling hardware interrupts. Whether interrupts can be responded to depends on whether the system allowed interrupt responses before the most recent disable.
+
+**Input**: The system state before the most recent disable of hardware interrupts.
+
+**Processing**: Restore the system state to what it was before the most recent disable.
+
+**Output**: The system state is restored to what it was before the most recent disable.
+
+### Responding to a Hardware Interrupt
+
+After a hardware interrupt is triggered, the CPU responds to it.
+
+**Input**: A hardware interrupt signal triggered by the hardware, with interrupts not disabled in the system.
+
+**Processing**:
+
+1. Save the current context.
+2. Call the interrupt handling function.
+3. If a task was interrupted, restore the context of the highest-priority task, which is not necessarily the interrupted task.
+4. 
If a lower-priority interrupt was preempted, restore the context of that lower-priority interrupt directly.
+
+**Output**: The hardware interrupt is responded to.
+
+### Triggering a Hardware Interrupt
+
+Trigger a specified hardware interrupt on a specified core.
+
+**Input**: Core number and interrupt number.
+
+**Processing**:
+
+1. Currently only interrupts on the local core can be triggered; if the specified core is not the local core, an error is reported.
+2. Currently only software-triggerable interrupts are supported; if the specified interrupt cannot be triggered by software, an error is reported.
+3. If both conditions are met, set the corresponding interrupt trigger register to trigger the interrupt by software.
+
+**Output**:
+
+- Success: the corresponding hardware interrupt is triggered.
+- Failure: an error code is returned.
+
+### Clearing Interrupt Bits
+
+Clear all interrupt request bits or a specified interrupt request bit.
+
+**Input**: Interrupt number.
+
+**Processing**: Clear all interrupt request bits or the specified interrupt request bit.
+
+**Output**: All interrupt request bits or the specified interrupt request bit are cleared.
+
+## Timers
+
+### Creating a Timer
+
+Create a timer based on the timer type, trigger mode, interval, handling function, and so on.
+
+**Input**:
+
+1. Creation parameter structure (including the timer type, trigger mode, interval, and handling function).
+2. A pointer for storing the returned timer handle.
+
+**Processing**: Find an idle control block and fill its fields with the input values.
+
+**Output**:
+
+- Success: the timer is created, and the returned handle can later be used to start or delete it.
+- Failure: an error code is returned.
+
+### Deleting a Timer
+
+Delete a specified timer.
+
+**Input**: Timer handle.
+
+**Processing**: Find the timer control block from the handle, clear its contents, and attach the control block to the corresponding idle linked list.
+
+**Output**:
+
+- Success: the timer is deleted.
+- Failure: an error code is returned.
+
+### Starting a Timer
+
+Start a specified timer.
+
+**Input**: Timer handle.
+
+**Processing**: For a software timer, compute the expiry time from the current tick count and the timer period, and insert the timer control block into the timer SortLink.
+
+**Output**:
+
+- Success: the timer starts counting.
+- Failure: an error code is returned.
+
+### Stopping a Timer
+
+Stop a specified timer.
+
+**Input**: Timer handle.
+
+**Processing**: For a software timer, compute and save the remaining time, then remove the timer control block from the timer SortLink.
+
+**Output**:
+
+- Success: the specified timer stops counting.
+- Failure: an error code is returned.
+
+### Restarting a Timer
+
+Restart a specified timer.
+
+**Input**: Timer handle.
+
+**Processing**: For a software timer, compute the expiry time from the current tick count and the timer period, and insert the timer control block into the timer SortLink.
+
+**Output**:
+
+- Success: the specified timer restarts counting.
+- Failure: an error code is returned.
+
+### Creating a Software Timer Group
+
+Create a software timer group, on top of which subsequent software timers are created.
+
+**Input**:
+
+1. Timer group creation parameters (mainly the clock source type and the maximum number of timers supported).
+2. 
An address for storing the returned timer group number.
+
+**Processing**: Apply for timer control block memory according to the maximum number of timers supported, and initialize it.
+
+**Output**:
+
+- Success: a tick-based software timer group is created.
+- Failure: an error code is returned.
+
+## Semaphores
+
+### Creating a Semaphore
+
+Create a semaphore and set its initial counter value.
+
+**Input**: Initial counter value and an address for storing the returned handle.
+
+**Processing**: Find an idle semaphore control block, fill in the input initial counter value, and return the semaphore ID as the handle.
+
+**Output**:
+
+- Success: the semaphore is created.
+- Failure: an error code is returned.
+
+### Deleting a Semaphore
+
+Delete a specified semaphore. If any task is blocked on the semaphore, the deletion fails.
+
+**Input**: Semaphore handle.
+
+**Processing**: For an intra-core semaphore, find the control block from the handle and check whether its blocked-task linked list is empty. If a task is blocked on the semaphore, the deletion fails; otherwise, release the control block.
+
+**Output**:
+
+- Success: the semaphore is deleted.
+- Failure: an error code is returned.
+
+### Pending on a Semaphore
+
+Apply for a specified semaphore. If its counter value is greater than 0, decrement it by 1 and return; otherwise, the task blocks, with a wait time set by an input parameter.
+
+**Input**: Semaphore handle and wait time.
+
+**Processing**:
+
+![](./figures/pend_semaphore.png)
+
+**Output**:
+
+- Success: 0 is returned.
+- Failure: an error code is returned.
+
+### Posting a Semaphore
+
+Post a semaphore, incrementing its counter value by 1. If a task is blocked on the semaphore, wake it up.
+
+**Input**: Semaphore handle.
+
+**Processing**:
+
+![](./figures/post_semaphore.png)
+
+**Output**:
+
+- Success: the semaphore is posted.
+- Failure: an error code is returned.
+
+### Resetting the Semaphore Counter Value
+
+Set the counter value of a specified semaphore. If any task is blocked on the semaphore, the operation fails.
+
+**Input**: Semaphore handle and counter value.
+
+**Processing**: Find the semaphore control block from the handle and check its blocked-task linked list. If the list is not empty, return an error; otherwise, set the counter value in the control block to the input value.
+
+**Output**:
+
+- Success: the counter value of the specified semaphore is modified.
+- Failure: an error code is returned.
+
+### Obtaining the Semaphore Counter Value
+
+Obtain the counter value of a specified semaphore.
+
+**Input**: Semaphore handle.
+
+**Processing**: Find the semaphore control block from the handle and return the counter value recorded in it.
+
+**Output**:
+
+- Success: the counter value is returned.
+- Failure: an error marker value is returned.
+
+### Obtaining the PIDs of Tasks Blocked on a Semaphore
+
+Obtain the number and PID list of tasks blocked on a specified semaphore.
+
+**Input**:
+
+1. Semaphore handle.
+2. An address for storing the number of blocked tasks.
+3. The start address of a buffer for storing the PIDs of the blocked tasks.
+4. 
The length of the buffer for storing the PIDs of the blocked tasks.
+
+**Processing**: If tasks are blocked on the specified semaphore, output the number of blocked tasks and the PID list; otherwise, set the number of blocked tasks to 0.
+
+**Output**:
+
+- Success: the number and PID list of tasks blocked on the semaphore are output.
+- Failure: an error code is returned.
+
+## Exceptions
+
+### Registering a User Exception Handling Hook
+
+Register a user exception handling hook of the defined exception handling function type, to record exception information.
+
+**Input**: A hook function of type ExcProcFunc.
+
+**Processing**: Register the user hook function with the OS framework; it is called when an exception occurs.
+
+**Output**:
+
+- Success: the hook is registered.
+- Failure: an error code is returned.
+
+## CPU Usage
+
+### Obtaining the Current CPU Usage
+
+Obtain the current CPU usage through this interface.
+
+**Input**: None.
+
+**Processing**: A statistics algorithm based on IDLE counting is used; the result has an error margin of no more than five percent.
+
+**Output**:
+
+- Success: the current CPU usage in the range [0, 10000] is returned.
+- Failure: an error code is returned.
+
+### Obtaining the CPU Usage of a Specified Number of Threads
+
+Obtain the CPU usage of the number of threads specified by the user.
+
+**Input**: Number of threads, buffer pointer, and a pointer to the actual number of threads.
+
+**Processing**:
+
+1. A statistics algorithm based on IDLE counting is used; the result has an error margin of no more than five percent.
+2. If the sampling period in the configuration items is 0, the thread-level CPUP sampling period is the interval between two calls to this interface or to PRT_CpupNow; otherwise, it is the configured OS_CPUP_SAMPLE_INTERVAL value.
+3. The output actual number of threads is no greater than the actual number of threads in the system (the number of tasks plus one interrupt thread).
+4. If a task is deleted within a sampling period, the CPUP sum of the task threads and the interrupt thread is less than 10000.
+
+**Output**:
+
+- Success: the CPU usage values are written to the buffer.
+- Failure: an error code is returned.
+
+### Setting the CPU Usage Alarm Thresholds
+
+Set the alarm threshold and alarm recovery threshold according to the user-configured CPU usage alarm threshold warn and recovery threshold resume.
+
+**Input**: Alarm threshold and recovery threshold.
+
+**Processing**: Set the CPUP alarm threshold and recovery threshold.
+
+**Output**:
+
+- Success: the thresholds are set.
+- Failure: an error code is returned.
+
+### Querying the CPUP Alarm Threshold and Recovery Threshold
+
+Query the alarm threshold and recovery threshold through the user-supplied alarm threshold pointer warn and recovery threshold pointer resume.
+
+**Input**: Alarm threshold pointer and recovery threshold pointer.
+
+**Processing**: Obtain the CPUP alarm threshold and recovery threshold and assign them to the pointer variables.
+
+**Output**:
+
+- Success: the thresholds are obtained.
+- Failure: an error code is returned.
+
+### Registering a CPUP Alarm Callback
+
+Register the CPUP alarm callback function hook configured by the user.
+
+**Input**: A CPU alarm callback function of type CpupHookFunc.
+
+**Processing**: Register the user hook function with the OS framework.
+
+**Output**:
+
+- Success: the callback is registered.
+- Failure: an error code is returned.
+
+## OS Startup
+
+### main Function Entry
+
+Function entry of the binary executable file.
+
+**Input**: None.
+
+**Output**:
+
+- Success: OK is returned.
+- Failure: an error code is returned.
+
+### User Service Entry
+
+PRT_AppInit is the user service function entry, called after the main function. Add service function code in this function.
+
+**Input**: None.
+
+**Output**:
+
+- Success: OK is returned.
+- Failure: an error code is returned.
+
+### Hardware Driver Initialization Entry
+
+PRT_HardDrvInit is the hardware driver initialization function entry, called after the main function. Add board-level driver initialization code in this function.
+
+**Input**: None.
+
+**Output**:
+
+- Success: OK is returned.
+- Failure: an error code is returned.
+
+### Hardware Boot Process Entry
+
+PRT_HardBootInit is called during OS startup, before the main function. It can be used for BSS initialization, random value setting, and so on.
+
+**Input**: None.
+
+**Output**:
+
+- Success: OK is returned.
+- Failure: an error code is returned.
+
+## openamp
+
+### Initializing openamp Resources
+ 
+Initialize the reserved memory; initialize remoteproc, virtio, and rpmsg; and establish the paired endpoints between UniProton and Linux for sending and receiving messages.
+
+**Input**: None.
+
+**Output**:
+
+- Success: initialization succeeds.
+- Failure: an error code is returned.
+
+### Message Receiving Function
+
+Receive messages and trigger an SGI interrupt.
+
+**Input**:
+
+1. A buffer of type unsigned char * for storing the message.
+2. The expected message length, of type int.
+3. A pointer of type int * for obtaining the actual message length.
+
+**Output**:
+
+- Success: the message is received.
+- Failure: an error code is returned.
+
+### Message Sending Function
+
+Send a message and an SGI interrupt.
+
+**Input**: A buffer of type unsigned char * storing the message, and the message length of type int.
+
+**Output**:
+
+- Success: the message is sent.
+- Failure: an error code is returned.
+
+### Releasing openamp Resources
+
+Release openamp resources.
+
+**Input**: None.
+
+**Output**:
+
+- Success: the resources are released.
+- Failure: an error code is returned.
+
+## POSIX Interfaces
+
+| Interface | Support |
+| :---: | :-----: |
+| [pthread_atfork](#pthread_atfork) | Not supported |
+| [pthread_attr_destroy](#pthread_attr_destroy) | Supported |
+| [pthread_attr_getdetachstate](#pthread_attr_getdetachstate) | Supported |
+| [pthread_attr_getguardsize](#pthread_attr_getguardsize) | Not supported |
+| [pthread_attr_getinheritsched](#pthread_attr_getinheritsched) | Supported |
+| [pthread_attr_getschedparam](#pthread_attr_getschedparam) | Supported |
+| [pthread_attr_getschedpolicy](#pthread_attr_getschedpolicy) | Supported |
+| [pthread_attr_getscope](#pthread_attr_getscope) | Supported |
+| [pthread_attr_getstack](#pthread_attr_getstack) | Supported |
+| [pthread_attr_getstackaddr](#pthread_attr_getstackaddr) | Supported |
+| [pthread_attr_getstacksize](#pthread_attr_getstacksize) | Supported |
+| [pthread_attr_init](#pthread_attr_init) | Supported |
+| [pthread_attr_setdetachstate](#pthread_attr_setdetachstate) | Supported |
+| [pthread_attr_setguardsize](#pthread_attr_setguardsize) | Not supported |
+| [pthread_attr_setinheritsched](#pthread_attr_setinheritsched) | Supported |
+| [pthread_attr_setschedparam](#pthread_attr_setschedparam) | Partially supported |
+| [pthread_attr_setschedpolicy](#pthread_attr_setschedpolicy) | Partially supported |
+| [pthread_attr_setscope](#pthread_attr_setscope) | Partially supported |
+| [pthread_attr_setstack](#pthread_attr_setstack) | Supported |
+| [pthread_attr_setstackaddr](#pthread_attr_setstackaddr) | Supported |
+| [pthread_attr_setstacksize](#pthread_attr_setstacksize) | Supported |
+| [pthread_barrier_destroy](#pthread_barrier_destroy) | Supported |
+| 
[pthread_barrier_init](#pthread_barrier_init) | Partially supported |
+| [pthread_barrier_wait](#pthread_barrier_wait) | Supported |
+| [pthread_barrierattr_getpshared](#pthread_barrierattr_getpshared) | Supported |
+| [pthread_barrierattr_setpshared](#pthread_barrierattr_setpshared) | Partially supported |
+| [pthread_cancel](#pthread_cancel) | Supported |
+| [pthread_cond_broadcast](#pthread_cond_broadcast) | Supported |
+| [pthread_cond_destroy](#pthread_cond_destroy) | Supported |
+| [pthread_cond_init](#pthread_cond_init) | Supported |
+| [pthread_cond_signal](#pthread_cond_signal) | Supported |
+| [pthread_cond_timedwait](#pthread_cond_timedwait) | Supported |
+| [pthread_cond_wait](#pthread_cond_wait) | Supported |
+| [pthread_condattr_destroy](#pthread_condattr_destroy) | Supported |
+| [pthread_condattr_getclock](#pthread_condattr_getclock) | Supported |
+| [pthread_condattr_getpshared](#pthread_condattr_getpshared) | Supported |
+| [pthread_condattr_init](#pthread_condattr_init) | Supported |
+| [pthread_condattr_setclock](#pthread_condattr_setclock) | Partially supported |
+| [pthread_condattr_setpshared](#pthread_condattr_setpshared) | Partially supported |
+| [pthread_create](#pthread_create) | Supported |
+| [pthread_detach](#pthread_detach) | Supported |
+| [pthread_equal](#pthread_equal) | Supported |
+| [pthread_exit](#pthread_exit) | Supported |
+| [pthread_getcpuclockid](#pthread_getcpuclockid) | Not supported |
+| [pthread_getschedparam](#pthread_getschedparam) | Supported |
+| [pthread_getspecific](#pthread_getspecific) | Supported |
+| [pthread_join](#pthread_join) | Supported |
+| [pthread_key_create](#pthread_key_create) | Supported |
+| [pthread_key_delete](#pthread_key_delete) | Supported |
+| [pthread_kill](#pthread_kill) | Not supported |
+| [pthread_mutex_consistent](#pthread_mutex_consistent) | Not supported |
+| [pthread_mutex_destroy](#pthread_mutex_destroy) | Supported |
+| [pthread_mutex_getprioceiling](#pthread_mutex_getprioceiling) | Not supported |
+| [pthread_mutex_init](#pthread_mutex_init) | Supported |
+| [pthread_mutex_lock](#pthread_mutex_lock) | Supported |
+| [pthread_mutex_setprioceiling](#pthread_mutex_setprioceiling) | Not supported |
+| [pthread_mutex_timedlock](#pthread_mutex_timedlock) | Supported |
+| [pthread_mutex_trylock](#pthread_mutex_trylock) | Supported |
+| [pthread_mutex_unlock](#pthread_mutex_unlock) | Supported |
+| [pthread_mutexattr_destroy](#pthread_mutexattr_destroy) | Supported |
+| [pthread_mutexattr_getprioceiling](#pthread_mutexattr_getprioceiling) | Not supported |
+| [pthread_mutexattr_getprotocol](#pthread_mutexattr_getprotocol) | Supported |
+| [pthread_mutexattr_getpshared](#pthread_mutexattr_getpshared) | Partially supported |
+| [pthread_mutexattr_getrobust](#pthread_mutexattr_getrobust) | Partially supported |
+| [pthread_mutexattr_gettype](#pthread_mutexattr_gettype) | Supported |
+| [pthread_mutexattr_init](#pthread_mutexattr_init) | Supported |
+| [pthread_mutexattr_setprioceiling](#pthread_mutexattr_setprioceiling) | Not supported |
+| [pthread_mutexattr_setprotocol](#pthread_mutexattr_setprotocol) | Partially supported |
+| [pthread_mutexattr_setpshared](#pthread_mutexattr_setpshared) | Not supported |
+| [pthread_mutexattr_setrobust](#pthread_mutexattr_setrobust) | Partially supported |
+| [pthread_mutexattr_settype](#pthread_mutexattr_settype) | Supported |
+| [pthread_once](#pthread_once) | Partially supported |
+| [pthread_rwlock_destroy](#pthread_rwlock_destroy) | Supported |
+| [pthread_rwlock_init](#pthread_rwlock_init) | Supported |
+| [pthread_rwlock_rdlock](#pthread_rwlock_rdlock) | Supported |
+| [pthread_rwlock_timedrdlock](#pthread_rwlock_timedrdlock) | Supported |
+| [pthread_rwlock_timedwrlock](#pthread_rwlock_timedwrlock) | Supported |
+| [pthread_rwlock_tryrdlock](#pthread_rwlock_tryrdlock) | Supported |
+| [pthread_rwlock_trywrlock](#pthread_rwlock_trywrlock) | Supported |
+| [pthread_rwlock_unlock](#pthread_rwlock_unlock) | Supported |
+| [pthread_rwlock_wrlock](#pthread_rwlock_wrlock) | Supported |
+| [pthread_rwlockattr_destroy](#pthread_rwlockattr_destroy) | Not supported |
+| [pthread_rwlockattr_getpshared](#pthread_rwlockattr_getpshared) | Partially supported |
+| [pthread_rwlockattr_init](#pthread_rwlockattr_init) | Not supported |
+| [pthread_rwlockattr_setpshared](#pthread_rwlockattr_setpshared) | Partially supported |
+| [pthread_self](#pthread_self) | Supported |
+| [pthread_setcancelstate](#pthread_setcancelstate) | Supported |
+| 
[pthread_setcanceltype](#pthread_setcanceltype) | Supported |
+| [pthread_setschedparam](#pthread_setschedparam) | Partially supported |
+| [pthread_setschedprio](#pthread_setschedprio) | Supported |
+| [pthread_setspecific](#pthread_setspecific) | Supported |
+| [pthread_sigmask](#pthread_sigmask) | Not supported |
+| [pthread_spin_init](#pthread_spin_init) | Not supported |
+| [pthread_spin_destory](#pthread_spin_destory) | Not supported |
+| [pthread_spin_lock](#pthread_spin_lock) | Not supported |
+| [pthread_spin_trylock](#pthread_spin_trylock) | Not supported |
+| [pthread_spin_unlock](#pthread_spin_unlock) | Not supported |
+| [pthread_testcancel](#pthread_testcancel) | Supported |
+| [sem_close](#sem_close) | Supported |
+| [sem_destroy](#sem_destroy) | Supported |
+| [sem_getvalue](#sem_getvalue) | Supported |
+| [sem_init](#sem_init) | Supported |
+| [sem_open](#sem_open) | Supported |
+| [sem_post](#sem_post) | Supported |
+| [sem_timedwait](#sem_timedwait) | Supported |
+| [sem_trywait](#sem_trywait) | Supported |
+| [sem_unlink](#sem_unlink) | Partially supported |
+| [sem_wait](#sem_wait) | Supported |
+| [sched_yield](#sched_yield) | Supported |
+| [sched_get_priority_max](#sched_get_priority_max) | Supported |
+| [sched_get_priority_min](#sched_get_priority_min) | Supported |
+| [asctime](#asctime) | Supported |
+| [asctime_r](#asctime_r) | Supported |
+| [clock](#clock) | Supported |
+| [clock_getcpuclockid](#clock_getcpuclockid) | Partially supported |
+| [clock_getres](#clock_getres) | Partially supported |
+| [clock_gettime](#clock_gettime) | Supported |
+| [clock_nanosleep](#clock_nanosleep) | Partially supported |
+| [clock_settime](#clock_settime) | Supported |
+| [ctime](#ctime) | Supported |
+| [ctime_r](#ctime_r) | Supported |
+| [difftime](#difftime) | Supported |
+| [getdate](#getdate) | Not supported |
+| [gettimeofday](#gettimeofday) | Supported |
+| [gmtime](#gmtime) | Supported |
+| [gmtime_r](#gmtime_r) | Supported |
+| [localtime](#localtime) | Supported |
+| [localtime_r](#localtime_r) | Supported |
+| [mktime](#mktime) | Supported |
+| [nanosleep](#nanosleep) | Supported |
+| [strftime](#strftime) | Not supported |
+| [strftime_l](#strftime_l) | Not supported |
+| [strptime](#strptime) | Supported |
+| [time](#time) | Supported |
+| [timer_create](#timer_create) | Supported |
+| [timer_delete](#timer_delete) | Supported |
+| 
[timer_getoverrun](#timer_getoverrun) | Supported |
+| [timer_gettime](#timer_gettime) | Supported |
+| [timer_settime](#timer_settime) | Supported |
+| [times](#times) | Supported |
+| [timespec_get](#timespec_get) | Supported |
+| [utime](#utime) | Not supported |
+| [wcsftime](#wcsftime) | Not supported |
+| [wcsftime_l](#wcsftime_l) | Not supported |
+| [malloc](#malloc) | Supported |
+| [free](#free) | Supported |
+| [memalign](#memalign) | Supported |
+| [realloc](#realloc) | Supported |
+| [malloc_usable_size](#malloc_usable_size) | Supported |
+| [aligned_alloc](#aligned_alloc) | Supported |
+| [reallocarray](#reallocarray) | Supported |
+| [calloc](#calloc) | Supported |
+| [posix_memalign](#posix_memalign) | Supported |
+| [abort](#abort) | Supported |
+| [_Exit](#_Exit) | Supported |
+| [atexit](#atexit) | Supported |
+| [quick_exit](#quick_exit) | Supported |
+| [at_quick_exit](#at_quick_exit) | Supported |
+| [assert](#assert) | Supported |
+| [div](#div) | Supported |
+| [ldiv](#ldiv) | Supported |
+| [lldiv](#lldiv) | Supported |
+| [imaxdiv](#imaxdiv) | Supported |
+| [wcstol](#wcstol) | Supported |
+| [wcstod](#wcstod) | Supported |
+| [fcvt](#fcvt) | Supported |
+| [ecvt](#ecvt) | Supported |
+| [gcvt](#gcvt) | Supported |
+| [qsort](#qsort) | Supported |
+| [abs](#abs) | Supported |
+| [labs](#labs) | Supported |
+| [llabs](#llabs) | Supported |
+| [imaxabs](#imaxabs) | Supported |
+| [strtol](#strtol) | Supported |
+| [strtod](#strtod) | Supported |
+| [atoi](#atoi) | Supported |
+| [atol](#atol) | Supported |
+| [atoll](#atoll) | Supported |
+| [atof](#atof) | Supported |
+| [bsearch](#bsearch) | Supported |
+| [semget](#semget) | Supported |
+| [semctl](#semctl) | Partially supported |
+| [semop](#semop) | Partially supported |
+| [semtimedop](#semtimedop) | Partially supported |
+| [msgget](#msgget) | Supported |
+| [msgctl](#msgctl) | Partially supported |
+| [msgsnd](#msgsnd) | Partially supported |
+| [msgrcv](#msgrcv) | Partially supported |
+| [shmget](#shmget) | Not supported |
+| [shmctl](#shmctl) | Not supported |
+| [shmat](#shmat) | Not supported |
+| [shmdt](#shmdt) | Not supported |
+| [ftok](#ftok) | Not supported |
+| [fstatat](#fstatat) | Supported |
+| [fchmodat](#fchmodat) | Supported |
+| [mkdir](#mkdir) | Supported |
+| [chmod](#chmod) | Supported |
+| [lstat](#lstat) | Supported |
+| [utimensat](#utimensat) | Supported |
+| [mkfifo](#mkfifo) | Supported |
+| [fchmod](#fchmod) | Supported |
+| [mknod](#mknod) | Supported |
+| [statvfs](#statvfs) | 
Supported |
+| [mkfifoat](#mkfifoat) | Supported |
+| [umask](#umask) | Supported |
+| [mknodat](#mknodat) | Supported |
+| [futimesat](#futimesat) | Supported |
+| [lchmod](#lchmod) | Supported |
+| [futimens](#futimens) | Supported |
+| [mkdirat](#mkdirat) | Supported |
+| [fstat](#fstat) | Supported |
+| [stat](#stat) | Supported |
+| [open](#open) | Supported |
+| [creat](#creat) | Supported |
+| [posix_fadvise](#posix_fadvise) | Not supported |
+| [fcntl](#fcntl) | Supported |
+| [posix_fallocate](#posix_fallocate) | Supported |
+| [openat](#openat) | Supported |
+| [scandir](#scandir) | Not supported |
+| [seekdir](#seekdir) | Supported |
+| [readdir_r](#readdir_r) | Not supported |
+| [fdopendir](#fdopendir) | Supported |
+| [versionsort](#versionsort) | Supported |
+| [alphasort](#alphasort) | Supported |
+| [rewinddir](#rewinddir) | Supported |
+| [dirfd](#dirfd) | Supported |
+| [readdir](#readdir) | Not supported |
+| [telldir](#telldir) | Supported |
+| [closedir](#closedir) | Supported |
+| [opendir](#opendir) | Supported |
+| [putwchar](#putwchar) | Supported |
+| [fgetws](#fgetws) | Supported |
+| [vfwprintf](#vfwprintf) | Supported |
+| [fscanf](#fscanf) | Supported |
+| [snprintf](#snprintf) | Supported |
+| [sprintf](#sprintf) | Supported |
+| [fgetpos](#fgetpos) | Supported |
+| [vdprintf](#vdprintf) | Supported |
+| [gets](#gets) | Supported |
+| [ungetc](#ungetc) | Supported |
+| [ftell](#ftell) | Supported |
+| [clearerr](#clearerr) | Supported |
+| [getc_unlocked](#getc_unlocked) | Supported |
+| [fmemopen](#fmemopen) | Supported |
+| [putwc](#putwc) | Supported |
+| [getchar](#getchar) | Supported |
+| [open_wmemstream](#open_wmemstream) | Supported |
+| [asprintf](#asprintf) | Supported |
+| [funlockfile](#funlockfile) | Supported |
+| [fflush](#fflush) | Supported |
+| [vfprintf](#vfprintf) | Supported |
+| [vsscanf](#vsscanf) | Supported |
+| [vfwscanf](#vfwscanf) | Supported |
+| [puts](#puts) | Supported |
+| [getchar_unlocked](#getchar_unlocked) | Supported |
+| [setvbuf](#setvbuf) | Supported |
+| [getwchar](#getwchar) | Supported |
+| [setbuffer](#setbuffer) | Supported |
+| [vsnprintf](#vsnprintf) | Supported |
+| [freopen](#freopen) | Supported |
+| [fwide](#fwide) | Supported |
+| [sscanf](#sscanf) | Supported |
+| [fgets](#fgets) | Supported |
+| [vswscanf](#vswscanf) | Supported |
+| [vprintf](#vprintf) | Supported |
+| [fputws](#fputws) | Supported |
+| [wprintf](#wprintf) | Supported 
|
+| [wscanf](#wscanf) | Supported |
+| [fputc](#fputc) | Supported |
+| [putchar](#putchar) | Supported |
+| [flockfile](#flockfile) | Supported |
+| [vswprintf](#vswprintf) | Supported |
+| [fputwc](#fputwc) | Supported |
+| [fopen](#fopen) | Supported |
+| [tmpnam](#tmpnam) | Supported |
+| [ferror](#ferror) | Supported |
+| [printf](#printf) | Supported |
+| [open_memstream](#open_memstream) | Supported |
+| [fwscanf](#fwscanf) | Supported |
+| [fprintf](#fprintf) | Supported |
+| [fgetc](#fgetc) | Supported |
+| [rewind](#rewind) | Supported |
+| [getwc](#getwc) | Supported |
+| [scanf](#scanf) | Supported |
+| [perror](#perror) | Supported |
+| [vsprintf](#vsprintf) | Supported |
+| [vasprintf](#vasprintf) | Supported |
+| [getc](#getc) | Supported |
+| [dprintf](#dprintf) | Supported |
+| [popen](#popen) | Not supported |
+| [putc](#putc) | Supported |
+| [fseek](#fseek) | Supported |
+| [fgetwc](#fgetwc) | Supported |
+| [tmpfile](#tmpfile) | Supported |
+| [putw](#putw) | Supported |
+| [tempnam](#tempnam) | Supported |
+| [vwprintf](#vwprintf) | Supported |
+| [getw](#getw) | Supported |
+| [putchar_unlocked](#putchar_unlocked) | Supported |
+| [fread](#fread) | Supported |
+| [fileno](#fileno) | Supported |
+| [remove](#remove) | Supported |
+| [putc_unlocked](#putc_unlocked) | Supported |
+| [fclose](#fclose) | Supported |
+| [feof](#feof) | Supported |
+| [fwrite](#fwrite) | Supported |
+| [setbuf](#setbuf) | Supported |
+| [pclose](#pclose) | Not supported |
+| [swprintf](#swprintf) | Supported |
+| [fwprintf](#fwprintf) | Supported |
+| [swscanf](#swscanf) | Supported |
+| [rename](#rename) | Supported |
+| [getdelim](#getdelim) | Supported |
+| [vfscanf](#vfscanf) | Supported |
+| [setlinebuf](#setlinebuf) | Supported |
+| [fputs](#fputs) | Supported |
+| [fsetpos](#fsetpos) | Supported |
+| [fopencookie](#fopencookie) | Supported |
+| [fgetln](#fgetln) | Supported |
+| [vscanf](#vscanf) | Supported |
+| [ungetwc](#ungetwc) | Supported |
+| [getline](#getline) | Supported |
+| [ftrylockfile](#ftrylockfile) | Supported |
+| [vwscanf](#vwscanf) | Supported |
+
+### Task Management
+
+#### pthread_attr_init
+
+The pthread_attr_init() function initializes a thread attributes object; pthread_attr_destroy() should be used to destroy it.
+
+**Parameters**: a pointer attr to a thread attributes structure, whose members correspond to the runtime attributes of the new thread.
+
+**Output**:
+
+- 0: initialization succeeds.
+- ENOMEM: insufficient memory to initialize the thread attributes object.
+- 
EBUSY: attr is a previously initialized but not yet destroyed thread attributes object.
+
+#### pthread_attr_destroy
+
+The pthread_attr_destroy() function destroys a thread attributes object. A destroyed attr object can be reinitialized using pthread_attr_init(); the results of referencing the object after it has been destroyed are undefined.
+
+**Parameters**: a pointer attr to a thread attributes structure.
+
+**Output**:
+
+- 0: the object is destroyed.
+- EINVAL: attr points to an uninitialized thread attributes object.
+
+#### pthread_attr_setstackaddr
+
+The pthread_attr_setstackaddr() function sets the thread creation stack address attribute in the attr object. The stack address attribute specifies the storage location used for the created thread's stack.
+
+**Input**: a pointer attr to a thread attributes structure, and the stack address stackaddr.
+
+**Output**:
+
+- 0: the attribute is set.
+- EINVAL: attr points to an uninitialized thread attributes object.
+
+#### pthread_attr_getstackaddr
+
+On success, the pthread_attr_getstackaddr() function stores the stack address attribute value in stackaddr.
+
+**Parameters**: a pointer attr to a thread attributes structure, and the stack address stackaddr.
+
+**Output**:
+
+- 0: the attribute is obtained.
+- EINVAL: attr points to an uninitialized thread attributes object.
+
+#### pthread_attr_getstacksize
+
+The pthread_attr_getstacksize() and pthread_attr_setstacksize() functions get and set, respectively, the thread creation stack size attribute (in bytes) in the attr object.
+
+**Parameters**:
+
+1. A pointer attr to a thread attributes structure.
+2. A stack size pointer stacksize, pointing to the stack size to set or get.
+
+**Output**:
+
+- 0: the attribute is obtained.
+- EINVAL: attr points to an uninitialized thread attributes object.
+
+#### pthread_attr_setstacksize
+
+Set the thread creation stack size attribute in the attr object.
+
+**Parameters**:
+
+1. A pointer attr to a thread attributes structure.
+2. 
A stack size pointer stacksize, pointing to the stack size to set or get.
+
+**Output**:
+
+- 0: the attribute is set.
+- EINVAL: the stack size is less than the minimum or exceeds the limit.
+
+#### pthread_attr_getinheritsched
+
+Get the inherit-scheduling attribute of a thread.
+
+**Parameters**:
+
+- A pointer attr to a thread attributes structure.
+- An inherit-scheduling pointer inheritsched.
+
+**Output**:
+
+- 0: the attribute is obtained.
+- EINVAL: attr points to an uninitialized thread attributes object.
+
+#### pthread_attr_setinheritsched
+
+Set the inherit-scheduling attribute of a thread. The following values can be set:
+
+- PTHREAD_INHERIT_SCHED: the thread scheduling attributes are inherited from the creating thread, and the scheduling attributes in this attr argument are ignored.
+- PTHREAD_EXPLICIT_SCHED: the thread scheduling attributes are set to the corresponding values in this attributes object.
+
+**Parameters**:
+
+- A pointer attr to a thread attributes structure.
+- The inherit-scheduling value inheritsched.
+
+**Output**:
+
+- 0: the attribute is set.
+- EINVAL: the inheritsched value is invalid, or attr points to an uninitialized thread attributes object.
+- ENOTSUP: an attempt was made to set the attribute to an unsupported value.
+
+#### pthread_attr_getschedpolicy
+
+Get the scheduling policy attribute; the SCHED_FIFO policy is supported. When threads executing with the SCHED_FIFO policy are waiting on a mutex and the mutex is unlocked, they acquire the mutex in priority order.
+
+**Parameters**:
+
+1. A pointer attr to a thread attributes structure.
+2. A scheduling policy pointer policy.
+
+**Output**:
+
+- 0: the attribute is obtained.
+- EINVAL: attr points to an uninitialized thread attributes object.
+
+#### pthread_attr_setschedpolicy
+
+Set the scheduling policy attribute; the SCHED_FIFO policy is supported. When threads executing with the SCHED_FIFO policy are waiting on a mutex and the mutex is unlocked, they acquire the mutex in priority order.
+
+**Parameters**:
+
+1. A pointer attr to a thread attributes structure.
+2. The scheduling policy policy.
+
+**Output**:
+
+- 0: the attribute is set.
+- EINVAL: the policy value is invalid, or attr points to an uninitialized thread attributes object.
+- ENOTSUP: an attempt was made to set the attribute to an unsupported value.
+
+#### pthread_attr_getdetachstate
+
+Get the detach state attribute of a thread. The detach state is either PTHREAD_CREATE_DETACHED or PTHREAD_CREATE_JOINABLE.
+
+**Parameters**:
+
+1. A pointer attr to a thread attributes structure.
+2. A detach state pointer detachstate.
+
+**Output**:
+
+- 0: the attribute is obtained.
+- EINVAL: attr points to an uninitialized thread attributes object.
+
+#### pthread_attr_setdetachstate
+
+Set the detach state attribute of a thread. The detach state must be PTHREAD_CREATE_DETACHED or PTHREAD_CREATE_JOINABLE.
+
+**Parameters**:
+
+1. A pointer attr to a thread attributes structure.
+2. The detach state detachstate.
+
+**Output**:
+
+- 0: the attribute is set.
+- EINVAL: attr points to an uninitialized thread attributes object, or the detach state value is invalid.
+
+#### pthread_attr_setschedparam
+
+pthread_attr_setschedparam() sets the priority attribute of a thread attributes object.
+
+**Parameters**:
+
+1. 
A pointer attr to a thread attributes structure.
+2. A scheduling parameter pointer schedparam.
+
+**Output**:
+
+- 0: the operation succeeds.
+- EINVAL: a parameter is invalid or attr is uninitialized.
+- ENOTSUP: the priority attribute in schedparam is not supported.
+
+#### pthread_attr_getschedparam
+
+pthread_attr_getschedparam() gets the priority attribute of a thread attributes object.
+
+**Parameters**:
+
+1. A pointer attr to a thread attributes structure.
+2. A scheduling parameter pointer schedparam.
+
+**Output**:
+
+- 0: the operation succeeds.
+- EINVAL: a parameter is invalid or attr is uninitialized.
+
+#### pthread_attr_getscope
+
+pthread_attr_getscope() gets the contention scope attribute of a thread attributes object.
+
+**Parameters**:
+
+- A pointer attr to a thread attributes structure.
+- A contention scope pointer scope.
+
+**Output**:
+
+- 0: the attribute is obtained.
+- EINVAL: the pointer is uninitialized.
+
+#### pthread_attr_setscope
+
+Set the contention scope of a thread; PTHREAD_SCOPE_SYSTEM is supported, making the thread contend for resources at the system level.
+
+**Parameters**:
+
+1. A pointer attr to a thread attributes structure.
+2. The contention scope scope.
+
+**Output**:
+
+- 0: the attribute is set.
+- EINVAL: the scope value is invalid, or attr points to an uninitialized thread attributes object.
+- ENOTSUP: an attempt was made to set the attribute to an unsupported value.
+
+#### pthread_attr_getstack
+
+pthread_attr_getstack() gets the stack information of a thread attributes object.
+
+**Parameters**:
+
+- A pointer attr to a thread attributes structure.
+- A stack address pointer stackAddr.
+- A stack size pointer stackSize.
+
+**Output**:
+
+- 0: the information is obtained.
+- EINVAL: the pointer is uninitialized.
+
+#### pthread_attr_setstack
+
+pthread_attr_setstack() sets the stack address and stack size of a thread attributes object.
+
+**Parameters**:
+
+- A pointer attr to a thread attributes structure.
+- The stack address stackAddr.
+- The stack size stackSize.
+
+**Output**:
+
+- 0: the attributes are set.
+- EINVAL: the pointer is uninitialized or a value is invalid.
+
+#### pthread_attr_getguardsize
+
+Not supported yet.
+
+#### pthread_attr_setguardsize
+
+Not supported yet.
+
+#### pthread_atfork
+
+Not supported yet.
+
+#### pthread_create
+
+The pthread_create() function creates a new thread with the attributes specified by attr. If attr is NULL, the default attributes are used. On success, pthread_create() stores the ID of the created thread in the location referenced by thread.
+
+**Parameters**:
+
+1. A pointer thread to the thread identifier.
+2. A pointer attr to a thread attributes structure.
+3. 
线程处理函数的起始地址 start_routine。 +4. 运行函数的参数 arg。 + +**Output**: + +- 0:创建成功。 +- EINVAL:attr指定的属性无效。 +- EAGAIN:系统缺少创建新线程所需的资源,或者将超过系统对线程总数施加的限制。 +- EPERM:调用者没有权限。 + +#### pthread_cancel + +取消线程的执行。pthread_cancel()函数应请求取消线程。目标线程的可取消状态和类型决定取消何时生效。当取消生效时,应调用线程的取消清理处理程序。 + +**参数**:线程的ID thread。 + +**Output**: + +- 0:取消成功。 +- ESRCH:找不到与给定线程ID相对应的线程。 + +#### pthread_testcancel + +设置可取消状态。pthread_testcancel()函数应在调用线程中创建一个取消点。如果禁用了可取消性,pthread_testcancel()函数将无效。 + +**参数**:无 + +**Output**: 无 + +#### pthread_setcancelstate + +pthread_setcancelstate() 将调用线程的可取消性状态设置为 state 中给出的值。线程以前的可取消性状态返回到oldstate所指向的缓冲区中。state状态的合法值为PTHREAD_CANCEL_ENABLE和PTHREAD_CANCEL_DISABLE。 + +**参数**: + +- 线程的可取消性状态 state。 +- 之前的可取消状态 oldstate。 + +**Output**: + +- 0:设置成功。 +- EINVAL:指定的状态不是 PTHREAD_CANCEL_ENABLE 或 PTHREAD_CANCEL_DISABLE。 + +#### pthread_setcanceltype + +pthread_setcanceltype()函数应原子地将调用线程的可取消类型设置为指定的类型,并在oldtype引用的位置返回上一个可取消类型。类型的合法值为PTHREAD_CANCEL_DEFERRED和PTHREAD_CANCEL_ASYNCHRONOUS。 + +**参数**: + +- 线程的可取消类型type。 +- 之前的可取消类型oldtype。 + +**Output**: + +- 0:设置成功。 +- EINVAL:指定的类型不是PTHREAD_CANCEL_DEFERRED或PTHREAD_CANCEL_ASYNCHRONOUS。 + +#### pthread_exit + +线程的终止可以是调用 pthread_exit,也可以是该线程的例程结束。由此可见,一个线程可以隐式退出,也可以显式调用 pthread_exit 函数来退出。pthread_exit 函数唯一的参数 value_ptr 是函数的返回代码,只要 pthread_join 中的第二个参数 value_ptr 不是NULL,这个值将被传递给 value_ptr。 + +**参数**:线程退出状态value_ptr,通常传NULL。 + +**Output**: 无 + +#### pthread_cleanup_push + +pthread_cleanup_push() 函数应将指定的取消处理程序推送到调用线程的取消堆栈上。pthread_cleanup_push必须和pthread_cleanup_pop同时使用。push后,在线程退出前执行pop,便会调用清理函数。 + +**参数**: + +1. 取消处理程序入口地址 routine。 +2. 传递给处理函数的参数 arg。 + +**Output**: 无 + +#### pthread_cleanup_pop + +pthread_cleanup_pop()应删除调用线程的取消处理程序,并可选择调用它(如果execute非零)。 + +**参数**:执行参数execute。 + +**Output**: 无 + +#### pthread_setschedprio + +pthread_setschedprio()函数应将线程 thread 的调度优先级设置为 prio 给出的值。如果 pthread_setschedprio()函数失败,则目标线程的调度优先级不更改。 + +**参数**: + +1. 线程ID:thread。 +2. 
优先级:prio。 + +**Output**: + +- 0:设置成功。 +- EINVAL:prio对指定线程的调度策略无效。 +- ENOTSUP:试图将优先级设置为不支持的值。 +- EPERM:调用者没有设置指定线程的调度策略的权限。 +- EPERM:不允许将优先级修改为指定的值。 +- ESRCH:thread指定的线程不存在。 + +#### pthread_self + +pthread_self()函数应返回调用线程的线程ID。 + +**参数**:无 + +**Output**: 返回调用线程的线程ID。 + +#### pthread_equal + +此函数应比较线程ID t1和t2。 + +**参数**: + +1. 线程ID t1。 +2. 线程ID t2。 + +**Output**: + +- 如果t1和t2相等,pthread_equal()函数应返回非零值。 +- 如果t1和t2不相等,应返回零。 +- 如果t1或t2不是有效的线程ID,则行为未定义。 + +#### sched_yield + +sched_yield()函数应强制正在运行的线程放弃处理器,并触发线程调度。 + +**参数**:无 + +**Output**: 返回0时,成功完成;否则返回-1。 + +#### sched_get_priority_max + +sched_get_priority_max()和 sched_get_priority_min()函数应分别返回指定调度策略的优先级最大值或最小值。 + +**参数**:调度策略policy。 + +**Output**: + +返回值: + +- -1:失败。 +- 成功时返回优先级最大值。 + +errno: + +- EINVAL:调度策略非法。 + +#### sched_get_priority_min + +返回指定调度策略的优先级最小值。 + +**参数**:调度策略policy。 + +**Output**: + +返回值: + +- -1:失败。 +- 成功时返回优先级最小值。 + +errno: + +- EINVAL:调度策略非法。 + +#### pthread_join + +pthread_join() 函数以阻塞的方式等待 thread 指定的线程结束。当函数返回时,被等待线程的资源被收回。如果线程已经结束,那么该函数会立即返回,并且 thread 指定的线程必须是 joinable 的。当 pthread_join()成功返回时,目标线程已终止。对指定同一目标线程的pthread_join()的多个同时调用的结果未定义。如果调用pthread_join()的线程被取消,则目标线程不应被分离。 + +**参数**: + +1. 线程ID:thread。 +2. 退出线程:返回值value_ptr。 + +**Output**: + +- 0:成功完成。 +- ESRCH:找不到与给定ID相对应的线程。 +- EDEADLK:检测到死锁或thread的值指定调用线程。 +- EINVAL:thread指定的线程不是joinable的。 + +#### pthread_detach + +实现线程分离,即主线程与子线程分离,子线程结束后,资源自动回收。 + +**参数**:线程ID:thread。 + +**Output**: + +- 0:成功完成。 +- EINVAL:thread是分离线程。 +- ESRCH:给定线程ID指定的线程不存在。 + +#### pthread_key_create + +分配用于标识线程特定数据的键。pthread_key_create 第一个参数为指向一个键值的指针,第二个参数指明了一个 destructor 函数,如果这个参数不为空,那么当每个线程结束时,系统将调用这个函数来释放绑定在这个键上的内存块。 + +**参数**: + +1. 键值的指针key。 +2. 
destructor 函数入口 destructor。 + +**Output**: + +- 0:创建成功。 +- EAGAIN:系统缺乏创建另一个线程特定数据键所需的资源,或者已超过系统对每个进程的键总数施加的限制。 +- ENOMEM:内存不足,无法创建键。 + +#### pthread_setspecific + +pthread_setspecific() 函数应将线程特定的 value 与通过先前调用 pthread_key_create()获得的 key 关联起来。不同的线程可能会将不同的值绑定到相同的键上。这些值通常是指向已保留供调用线程使用的动态分配内存块的指针。 + +**参数**: + +1. 键值key。 +2. 指针value。 + +**Output**: + +- 0:设置成功。 +- ENOMEM:内存不足,无法将非NULL值与键关联。 +- EINVAL:key的值不合法。 + +#### pthread_getspecific + +将与key关联的数据读出来,返回数据类型为 void *,可以指向任何类型的数据。需要注意的是,返回的指针虽指向关联的数据地址,但类型信息已丢失,具体使用时要对其进行强制类型转换。 + +**参数**:键值key。 + +**Output**: + +- 返回与给定 key 关联的线程特定数据值。 +- NULL:没有线程特定的数据值与键关联。 + +#### pthread_key_delete + +销毁线程特定数据键。 + +**参数**:需要删除的键key。 + +**Output**: + +- 0:删除成功。 +- EINVAL:key值无效。 + +#### pthread_getcpuclockid + +暂不支持。 + +#### pthread_getschedparam + +获取线程调度策略和优先级属性。 + +**参数**: + +1. 线程对象指针thread。 +2. 调度策略指针policy。 +3. 调度属性对象指针param。 + +**Output**: + +- 0:获取成功。 +- EINVAL:指针未初始化。 + +#### pthread_setschedparam + +设置线程调度策略和优先级属性。调度策略仅支持SCHED_FIFO。 + +**参数**: + +1. 线程对象指针thread。 +2. 调度策略policy。 +3. 调度属性对象指针param。 + +**Output**: + +- 0:设置成功。 +- EINVAL:指针未初始化。 +- ENOTSUP:设置不支持的值。 + +#### pthread_kill + +暂不支持。 + +#### pthread_once + +pthread_once() 函数使用指定的once_control变量保证init_routine函数只执行一次。当前init_routine函数不支持被取消。 + +**参数**: + +1. 控制变量control。 +2. 执行函数init_routine。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_sigmask + +暂不支持。 + +#### pthread_spin_init + +暂不支持。 + +#### pthread_spin_destroy + +暂不支持。 + +#### pthread_spin_lock + +暂不支持。 + +#### pthread_spin_trylock + +暂不支持。 + +#### pthread_spin_unlock + +暂不支持。 + +### 信号量管理 + +#### sem_init + +sem_init()函数应初始化 sem 引用的匿名信号量。初始化信号量的值应为 value。在成功调用 sem_init()后,信号量可用于后续调用 sem_wait()、sem_timedwait()、sem_trywait()、sem_post()和sem_destroy()。此信号量应保持可用,直到信号量被销毁。 + +**参数**: + +1. 指向信号量指针sem。 +2. 指明信号量的类型pshared。 +3. 
信号量值的大小value。 + +**Output**: + +- 0:初始化成功。 +- EINVAL:值参数超过{SEM_VALUE_MAX}。 +- ENOSPC:初始化信号量所需的资源已耗尽,或已达到信号量的限制。 +- EPERM:缺乏初始化信号量的权限。 + +#### sem_destroy + +sem_destroy()函数销毁 sem 指示的匿名信号量。只有使用 sem_init()创建的信号量才能使用 sem_destroy()销毁;使用命名信号量调用 sem_destroy()的效果未定义。在 sem 被另一个对 sem_init()的调用重新初始化之前,后续使用信号量 sem 的效果是未定义的。 + +**参数**:指向信号量指针sem。 + +**Output**: + +- 0:销毁成功。 +- EINVAL:sem不是有效的信号量。 +- EBUSY:信号量上当前有线程被阻止。 + +#### sem_open + +创建并初始化有名信号量。此信号量可用于后续对 sem_wait()、sem_timedwait()、sem_trywait()、sem_post()和sem_close() 的调用。 + +**参数**: + +1. 信号量名name指针。 + +2. oflag参数控制信号量是创建还是仅通过调用sem_open()访问。以下标志位可以在oflag中设置: + + - O_CREAT:如果信号量不存在,则此标志用于创建信号量。 + + - O_EXCL:如果设置了O_EXCL和O_CREAT,且信号量名称存在,sem_open()将失败。如果设置了O_EXCL而未设置O_CREAT,则效果未定义。 + +3. 如果在oflag参数中指定了O_CREAT和O_EXCL以外的标志,则效果未指定。 + +**Output**: + +- 创建并初始化成功,返回信号量地址。 +- EACCES:创建命名信号量的权限被拒绝。 +- EEXIST:已设置O_CREAT和O_EXCL,且命名信号量已存在。 +- EINTR:sem_open()操作被信号中断。 +- EINVAL:给定名称不支持sem_open(),或在oflag中指定了O_CREAT,并且值大于最大值。 +- EMFILE:当前使用的信号量描述符或文件描述符太多。 +- ENAMETOOLONG:name参数的长度超过{PATH_MAX},或者路径名组件的长度超过{NAME_MAX}。 +- ENFILE:系统中当前打开的信号量太多。 +- ENOENT:未设置O_CREAT且命名信号量不存在。 +- ENOSPC:没有足够的空间来创建新的命名信号量。 + +#### sem_close + +关闭一个命名信号量。对未命名的信号量(由sem_init() 创建的信号量)调用 sem_close() 的效果未定义。sem_close() 函数应释放系统分配给此信号量的任何系统资源。此后继续使用sem指示的信号量的影响未定义。 + +**参数**:信号量指针sem。 + +**Output**: + +- 0:关闭成功。 +- EINVAL:sem参数不是有效的信号量描述符。 + +#### sem_wait + +sem_wait()函数通过对 sem 引用的信号量执行信号量锁定操作来锁定该信号量。如果信号量值当前为零,则调用线程在锁定信号量或调用被信号中断之前,不会从对 sem_wait()的调用返回。 + +**参数**:信号量指针sem。 + +**Output**: + +- 0:操作成功。 +- EAGAIN:信号量已被锁定,无法立即被 sem_trywait()操作。 +- EDEADLK:检测到死锁条件。 +- EINTR:信号中断了此功能。 +- EINVAL:sem参数未引用有效的信号量。 + +#### sem_trywait + +只有当信号量当前未锁定时,即信号量值当前为正值,sem_trywait()函数才应锁定 sem 引用的信号量。否则它不应锁定信号量。 + +**参数**:信号量指针sem。 + +**Output**: + +- 0:操作成功。 +- EAGAIN:信号量已被锁定,无法立即被sem_trywait()操作。 +- EDEADLK:检测到死锁条件。 +- EINTR:信号中断了此功能。 +- EINVAL:sem参数未引用有效的信号量。 + +#### sem_timedwait + +sem_timedwait()函数应锁定 sem 引用的信号量,就像 
sem_wait()函数一样。如果在不等待另一个线程执行sem_post()解锁信号量的情况下无法锁定信号量,则在指定的超时到期时,此等待将终止。 + +**参数**: + +1. 信号量指针sem。 +2. 阻塞时间指针abs_timeout。 + +**Output**: + +- 0:操作成功。 +- EINVAL:线程可能会阻塞,abs_timeout 指定的纳秒值小于0或大于等于1000 million。 +- ETIMEDOUT:在指定的超时到期之前,无法锁定信号量。 +- EDEADLK:检测到死锁条件。 +- EINTR:信号中断了此功能。 +- EINVAL:sem参数未引用有效的信号量。 + +#### sem_post + +sem_post()函数应通过对 sem 引用的信号量执行信号量解锁操作,当有线程阻塞在这个信号量上时,调用这个函数会使其中一个线程不在阻塞,选择机制是有线程的调度策略决定的。 + +**参数**:信号量指针sem。 + +**Output**: + +- 0:操作成功。 +- EINVAL:sem参数未引用有效的信号量。 + +#### sem_getvalue + +sem_getvalue()函数获取 sem 引用的信号量的值,而不影响信号量的状态。获取的 sval 值表示在调用期间某个未指定时间发生的实际信号量值。 + +**参数**: + +1. 信号量指针sem。 +2. 信号量计数值指针sval。 + +**Output**: + +- 0:操作成功。 +- EINVAL:sem参数未引用有效的信号量。 + +#### sem_unlink + +sem_unlink() 函数将删除由字符串名称命名的信号量。如果信号量当前被其他进程引用,那么sem_unlink() 将不会影响信号量的状态。如果在调用sem_unlink() 时一个或多个进程打开了信号量,则信号量的销毁将被推迟,直到信号量的所有引用都被销毁了。 + +**参数**:信号量名称name。 + +**Output**: + +- 0:操作成功。 +- -1:name参数未引用有效的信号量。 + +### 互斥量管理 + +#### pthread_mutexattr_init + +pthread_mutexattr_init()函数初始化互斥锁。如果调用 pthread_mutexattr_init()指定已初始化的attr属性对象行为未定义。 + +**参数**:互斥锁属性对象指针attr。 + +**Output**: + +- 0:操作成功。 +- ENOMEM:内存不足,无法初始化互斥属性对象。 + +#### pthread_mutexattr_destroy + + 注销一个互斥锁。销毁一个互斥锁即意味着释放它所占用的资源,且要求锁当前处于开放状态。 + +**参数**:互斥锁属性对象指针attr。 + +**Output**: + +- 0:操作成功。 +- EINVAL:attr指定的值无效。 + +#### pthread_mutexattr_settype + +pthread_mutexattr_settype()函数设置互斥 type 属性。默认值为 PTHREAD_MUTEX_DEFAULT。有效的互斥类型包括: + +PTHREAD_MUTEX_NORMAL:此类型的互斥锁不会检测死锁。 + +- 如果线程在不解除互斥锁的情况下尝试重新锁定该互斥锁,则会产生死锁。 +- 如果尝试解除由其他线程锁定的互斥锁,会产生不确定的行为。 +- 如果尝试解除锁定的互斥锁未锁定,则会产生不确定的行为。 + +PTHREAD_MUTEX_ERRORCHECK:此类型的互斥锁可提供错误检查。 + +- 如果线程在不解除锁定互斥锁的情况下尝试重新锁定该互斥锁,则会返回错误。 +- 如果线程尝试解除锁定的互斥锁已经由其他线程锁定,则会返回错误。 +- 如果线程尝试解除锁定的互斥锁未锁定,则会返回错误。 + +PTHREAD_MUTEX_RECURSIVE: + +- 如果线程在不解除锁定互斥锁的情况下尝试重新锁定该互斥锁,则可成功锁定该互斥锁。 与 PTHREAD_MUTEX_NORMAL 类型的互斥锁不同,对此类型互斥锁进行重新锁定时不会产生死锁情况。多次锁定互斥锁需要进行相同次数的解除锁定才可以释放该锁,然后其他线程才能获取该互斥锁。 +- 如果线程尝试解除锁定的互斥锁已经由其他线程锁定,则会返回错误。 +- 如果线程尝试解除锁定的互斥锁未锁定,则会返回错误。 + +PTHREAD_MUTEX_DEFAULT: + +- 
如果尝试以[递归](https://baike.baidu.com/item/递归?fromModule=lemma_inlink)方式锁定此类型的互斥锁,则会产生不确定的行为。 +- 对于不是由调用线程锁定的此类型互斥锁,如果尝试对它解除锁定,则会产生不确定的行为。 +- 对于尚未锁定的此类型互斥锁,如果尝试对它解除锁定,也会产生不确定的行为。 + +**参数**: + +1. 互斥锁属性对象指针attr。 +2. 互斥锁类型type。 + +**Output**: + +- 0:操作成功。 +- EINVAL:attr指定的值无效,或type无效。 + +#### pthread_mutexattr_gettype + +pthread_mutexattr_gettype() 可用来获取由 pthread_mutexattr_settype() 设置的互斥锁的 type 属性。 + +**参数**: + +1. 互斥锁属性对象指针attr。 +2. 互斥锁类型指针type。 + +**Output**: + +- 0:操作成功。 +- EINVAL:attr指定的值无效。 + +#### pthread_mutexattr_setprotocol + +pthread_mutexattr_setprotocol() 可用来设置互斥锁属性对象的协议属性。定义的 protocol 可以为以下值之一: + +- PTHREAD_PRIO_NONE +- PTHREAD_PRIO_INHERIT +- PTHREAD_PRIO_PROTECT(当前版本暂不支持) + +**参数**: + +1. 互斥锁属性对象指针 attr。 +2. 互斥锁属性对象的协议 protocol。 + +**Output**: + +- 0:操作成功。 +- ENOTSUP:协议指定的值不支持。 +- EINVAL:attr指定的值无效。 +- EPERM:调用者没有权限。 + +#### pthread_mutexattr_getprotocol + +pthread_mutexattr_getprotocol() 获取互斥锁属性对象的协议属性。 + +**参数**: + +1. 互斥锁属性对象指针attr。 +2. 互斥锁属性对象的协议指针protocol。 + +**Output**: + +- 0:操作成功。 +- EINVAL:attr指定的值无效。 +- EPERM:调用者没有权限。 + +#### pthread_mutexattr_getprioceiling + +暂不支持。 + +#### pthread_mutexattr_setprioceiling + +暂不支持。 + +#### pthread_mutexattr_getpshared + +获取互斥锁属性对象的共享属性。当前支持PTHREAD_PROCESS_PRIVATE,互斥锁为进程内私有。 + +**参数**: + +1. 互斥锁属性对象指针attr。 +2. 共享属性指针pshared。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_mutexattr_setpshared + +暂不支持。 + +#### pthread_mutexattr_getrobust + +获取互斥锁属性对象的健壮属性。当前支持PTHREAD_MUTEX_STALLED,如果互斥锁的所有者在持有互斥锁时终止,则不会执行特殊操作。 + +**参数**: + +1. 互斥锁属性对象指针attr。 +2. 健壮属性指针robust。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_mutexattr_setrobust + +设置互斥锁属性对象的健壮属性。当前支持PTHREAD_MUTEX_STALLED。 + +**参数**: + +1. 互斥锁属性对象指针attr。 +2. 健壮属性robust。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- ENOTSUP:设置不支持的值。 + +#### pthread_mutex_init + +pthread_mutex_init()函数初始化互斥锁,属性由 attr 指定。如果 attr 为NULL,则使用默认互斥属性。 + +**参数**: + +1. 互斥锁指针mutex。 +2. 
互斥锁属性对象指针attr。 + +**Output**: + +- 0:操作成功。 +- EAGAIN:缺少初始化互斥锁所需的资源(内存除外)。 +- ENOMEM:内存不足,无法初始化互斥体。 +- EPERM:没有执行操作的权限。 +- EBUSY:互斥锁已经初始化但尚未销毁。 +- EINVAL:attr指定的值无效。 + +#### pthread_mutex_destroy + +pthread_mutex_destroy() 用于注销一个互斥锁。销毁一个互斥锁即意味着释放它所占用的资源,且要求锁当前处于开放状态。 + +**参数**:互斥锁指针mutex。 + +**Output**: + +- 0:操作成功。 +- EBUSY:锁当前未处于开放状态。 +- EINVAL:mutex指定的值无效。 + +#### pthread_mutex_lock + +当pthread_mutex_lock() 返回时,该[互斥锁](https://baike.baidu.com/item/互斥锁/841823?fromModule=lemma_inlink)已被锁定。[线程](https://baike.baidu.com/item/线程/103101?fromModule=lemma_inlink)调用该函数让互斥锁上锁,如果该互斥锁已被另一个线程锁定和拥有,则调用该线程将阻塞,直到该互斥锁变为可用为止。 + +**参数**:互斥锁指针mutex。 + +**Output**: + +- 0:操作成功。 +- EINVAL:mutex指定的值未初始化。 +- EAGAIN:无法获取互斥锁。 +- EDEADLK:当前线程已经拥有互斥锁。 + +#### pthread_mutex_trylock + +pthread_mutex_trylock() 语义与 pthread_mutex_lock() 类似,不同点在于锁已经被占据时返回 EBUSY, 而非挂起等待。 + +**参数**:互斥锁指针mutex。 + +**Output**: + +- 0,操作成功。 +- EBUSY:mutex指定的锁已经被占据。 +- EINVAL:mutex指定的值未初始化。 +- EAGAIN:无法获取互斥锁。 +- EDEADLK:当前线程已经拥有互斥锁。 + +#### pthread_mutex_timedlock + +pthread_mutex_timedlock() 语义与pthread_mutex_lock() 类似,不同点在于锁已经被占据时增加一个超时时间,等待超时返回错误码。 + +**参数**: + +1. 互斥锁指针mutex。 +2. 超时时间指针abs_timeout。 + +**Output**: + +- 0:操作成功。 +- EINVAL:mutex指定的值未初始化,abs_timeout指定的纳秒值小于0或大于等于1000 million。 +- ETIMEDOUT:等待超时。 +- EAGAIN:无法获取互斥锁。 +- EDEADLK:当前线程已经拥有互斥锁。 + +#### pthread_mutex_unlock + +释放互斥锁。 + +**参数**:互斥锁指针mutex。 + +**Output**: + +- 0:操作成功。 +- EINVAL:mutex指定的值未初始化。 +- EPERM:当前线程不拥有互斥锁。 + +#### pthread_mutex_consistent + +暂不支持。 + +#### pthread_mutex_getprioceiling + +暂不支持。 + +#### pthread_mutex_setprioceiling + +暂不支持。 + +### 读写锁编程 + +#### pthread_rwlock_init + +pthread_rwlock_init()初始化读写锁。如果 attr 为 NULL,则使用默认的读写锁属性。一旦初始化,锁可以使用任何次数,而无需重新初始化。调用 pthread_rwlock_init()指定已初始化的读写锁行为未定义。如果在没有初始化的情况下使用读写锁,则结果是未定义的。 + +**参数**: + +1. 读写锁指针rwlock。 +2. 
读写锁属性指针attr。 + +**Output**: + +- 0:操作成功。 +- EAGAIN:系统缺少初始化读写锁所需的资源(内存除外)。 +- ENOMEM:内存不足,无法初始化读写锁。 +- EPERM:没有执行操作的权限。 +- EBUSY:rwlock是以已初始化但尚未销毁的读写锁。 +- EINVAL:attr指定的值无效。 + +#### pthread_rwlock_destroy + +pthread_rwlock_destroy()函数应销毁 rwlock 引用的读写锁,并释放锁使用的资源。在再次调用pthread_rwlock_init()重新初始化锁之前,后续使用锁的行为未定义。如果在任何线程持有 rwlock 时调用pthread_rwlock_destroy()行为未定义。尝试销毁未初始化的读写锁行为未定义。 + +**参数**:读写锁指针rwlock。 + +**Output**: + +- 0:操作成功。 +- EBUSY: rwlock引用的对象被锁定时销毁该对象。 +- EINVAL:attr指定的值无效。 + +#### pthread_rwlock_rdlock + +pthread_rwlock_rdlock()函数应将读锁应用于rwlock引用的读写锁。 + +**参数**:读写锁指针rwlock。 + +**Output**: + +- 0:操作成功。 +- EINVAL:rwlock是未初始化的读写锁。 +- EAGAIN:无法获取读锁,因为已超过rwlock的最大读锁数。 +- EDEADLK:检测到死锁条件或当前线程已拥有写锁。 + +#### pthread_rwlock_tryrdlock + +pthread_rwlock_tryrdlock()函数语义与pthread_rwlock_rdlock()类似。在任何情况下,pthread_rwlock_tryrdlock()函数都不会阻塞;它会一直获取锁,或者失败并立即返回。 + +**参数**:读写锁指针rwlock。 + +**Output**: + +- 0:操作成功。 +- EINVAL:rwlock是未初始化的读写锁。 +- EAGAIN:无法获取读锁,因为已超过rwlock的最大读锁数。 +- EBUSY:无法获取读写锁以进行读取,因为写入程序持有该锁。 + +#### pthread_rwlock_timedrdlock + +pthread_rwlock_timedrdlock()语义与pthread_rwlock_rdlock()类似,不同的是在锁已经被占据时增加一个超时时间,等待超时返回错误码。 + +**参数**: + +1. 读写锁指针rwlock。 +2. 
超时时间指针abs_timeout。 + +**Output**: + +- 0:操作成功。 +- ETIMEDOUT:在指定的超时到期之前,无法获取锁。 +- EAGAIN:无法获取读锁,超过锁的最大读锁数量。 +- EDEADLK:检测到死锁条件或调用线程已在rwlock上持有写锁。 +- EINVAL:rwlock指定的锁未初始化,或者abs_timeout纳秒值小于0或大于等于1000 million。 + +#### pthread_rwlock_wrlock + +pthread_rwlock_wrlock()函数将写锁应用于 rwlock 引用的读写锁。如果没有其他线程持有读写锁 rwlock,调用线程将获得写锁。否则,线程应阻塞,直到它能够获得锁。如果调用线程在调用时持有读写锁(无论是读锁还是写锁),则调用线程可能会死锁。 + +**参数**:读写锁指针rwlock。 + +**Output**: + +- 0:操作成功。 +- EINVAL:rwlock指定的值未初始化。 +- EDEADLK:检测到死锁情况,或者当前线程已经拥有用于写入或读取的读写锁。 + +#### pthread_rwlock_trywrlock + +pthread_rwlock_trywrlock()函数类似 pthread_rwlock_wrlock(),但如果任何线程当前持有rwlock(用于读取或写入,该函数将失败)。 + +**参数**:读写锁指针rwlock。 + +**Output**: + +- 0:操作成功。 +- EBUSY:无法获取读写锁以进行写入,因为它已被锁定以进行读取或写入。 +- EINVAL:rwlock指定的值未初始化。 + +#### pthread_rwlock_timedwrlock + +pthread_rwlock_timedwrlock()语义与pthread_rwlock_wrlock()类似,不同的是在锁已经被占据时增加一个超时时间,等待超时返回错误码。 + +**参数**: + +1. 读写锁指针rwlock。 +2. 超时时间指针abs_timeout。 + +**Output**: + +- 0:操作成功。 +- ETIMEDOUT:在指定的超时到期之前,无法获取锁。 +- EAGAIN:无法获取读锁,超过锁的最大读锁数量。 +- EDEADLK:检测到死锁条件或调用线程已在rwlock上持有写锁。 +- EINVAL;rwlock指定的锁未初始化,或者abs_timeout纳秒值小于0或大于等于1000 million。 + +#### pthread_rwlock_unlock + +pthread_rwlock_unlock()函数释放rwlock引用的读写锁上持有的锁。如果读写锁rwlock未被调用线程持有,则结果未定义。 + +**参数**:读写锁指针rwlock。 + +**Output**: + +- 0:操作成功。 +- EINVAL:rwlock指定的锁未初始化。 +- EPERM:当前线程不持有读写锁。 + +#### pthread_rwlockattr_init + +暂不支持 + +#### pthread_rwlockattr_destroy + +暂不支持 + +#### pthread_rwlockattr_getpshared + +pthread_rwlockattr_getpshared() 函数从attr引用的读写锁属性对象中获取进程共享属性的值。 + +**参数**: + +1. 读写锁属性指针attr。 +2. 共享属性指针pshared。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +### pthread_rwlockattr_setpshared + +设置读写锁属性对象中进程共享属性的值。当前支持PTHREAD_PROCESS_PRIVATE,读写锁为进程私有。 + +**参数**: + +1. 读写锁属性指针attr。 +2. 
共享属性指针pshared。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- ENOTSUP:设置不支持的值。 + +### 线程屏障管理 + +#### pthread_barrier_destroy + +销毁线程屏障变量,并释放该屏障使用的任何资源。 + +**参数**:屏障变量指针b。 + +**Output**: + +- 0:操作成功。 +- EBUSY:另一个线程在使用该变量。 + +#### pthread_barrier_init + +分配线程屏障变量所需的资源,并使用attr的属性初始化屏障。如果attr为NULL,则使用默认的屏障属性。 + +**参数**: + +1. 屏障变量指针b。 +2. 屏障属性指针attr。 +3. 等待线程个数count。 + +**Output**: + +- 0:操作成功。 +- EINVAL:count为0。 +- ENOTSUP:attr指定的屏障属性不支持。 +- EAGAIN:系统缺乏初始化一个屏障所需的资源。 + +#### pthread_barrier_wait + +pthread_barrier_wait() 阻塞调用线程,直到等待的线程达到了预定的数量。 + +**参数**:屏障变量指针b。 + +**Output**: + +- 0:操作成功。 +- -1:第一个线程成功返回。 + +#### pthread_barrierattr_getpshared + +获取屏障属性的共享属性值。 + +**参数**: + +1. 屏障属性指针a。 +2. 共享属性值指针pshared。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_barrierattr_setpshared + +设置屏障属性的共享属性值。支持PTHREAD_PROCESS_PRIVATE,该屏障为进程私有的,不允许不同进程的线程访问该屏障。 + +**参数**: + +1. 屏障属性指针a。 +2. 共享属性值pshared。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- ENOTSUP:试图将属性设置为不支持的值。 + +### 条件变量管理 + +#### pthread_cond_init + +使用attr引用的属性初始化cond引用的条件变量。如果attr为NULL,则使用默认条件变量属性。 + +**参数** + +1. 条件变量指针cond。 +2. 条件变量属性指针attr。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- EAGAIN:系统缺乏初始化一个条件变量所需的资源。 + +#### pthread_cond_destroy + +销毁指定条件变量,使得该条件变量未初始化,可以使用pthread_cond_init() 重新初始化。 + +**参数**:条件变量指针cond。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- EBUSY:另一个线程在使用该变量。 + +#### pthread_cond_broadcast + +pthread_cond_broadcast()函数取消阻塞指定条件变量cond上当前阻塞的所有线程。 + +**参数**:条件变量指针cond。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_cond_signal + +pthread_cond_signal() 函数取消阻塞在指定的条件变量cond上阻塞的线程中的至少一个(如果有任何线程在cond上被阻塞)。 + +**参数**:条件变量指针cond。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_cond_timedwait + +pthread_cond_timedwait() 函数阻塞当前线程等待cond指定的条件变量,并释放互斥体指定的互斥体。只有在另一个线程使用相同的条件变量调用pthread_cond_signal() 或pthread_cond_broadcast() 后,或者如果系统时间达到指定的时间,并且当前线程重新获得互斥锁时,等待线程才会解锁。 + +**参数**: + +1. 条件变量指针cond。 +2. 互斥锁指针m。 +3. 
超时时间指针ts。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- ETIMEDOUT:阻塞超时。 + +#### pthread_cond_wait + +pthread_cond_wait() 函数与pthread_cond_timedwait() 类似,阻塞当前线程等待cond指定的条件变量,并释放 m 指定的互斥体。只有在另一个线程使用相同的条件变量调用pthread_cond_signal() 或pthread_cond_broadcast() 后,并且当前线程重新获得互斥锁时,等待线程才会解锁。 + +**参数**: + +1. 条件变量指针cond。 +2. 互斥锁指针m。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_condattr_init + +使用属性的默认值初始化条件变量属性对象attr。 + +**参数**: 条件变量属性对象指针attr。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_condattr_destroy + +pthread_condattr_destroy()函数销毁条件变量属性对象,使对象变得未初始化,可以使用pthread_condattr_init() 重新初始化。 + +**参数**:条件变量属性对象指针attr。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_condattr_getclock + +从attr引用的属性对象中获取时钟属性的值。 + +**参数**: + +1. 条件变量属性对象指针attr。 +2. 时钟属性指针clk。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_condattr_setclock + +设置attr引用的属性对象中时钟属性的值。当前支持CLOCK_REALTIME,采用系统时间。 + +**参数**: + +1. 条件变量属性对象指针attr。 +2. 时钟属性clock。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- ENOTSUP:设置不支持的值。 + +#### pthread_condattr_getpshared + +从attr引用的属性对象中获取共享属性的值。 + +**参数**: + +1. 条件变量属性对象指针attr。 +2. 共享属性指针pshared。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +#### pthread_condattr_setpshared + +设置attr引用的属性对象中共享属性的值。当前支持PTHREAD_PROCESS_PRIVATE,该条件变量为进程私有的,不允许不同进程的线程访问。 + +**参数**: + +1. 条件变量属性对象指针attr。 +2. 共享属性pshared。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- ENOTSUP:设置不支持的值。 + +### 时钟管理 + +#### asctime + +asctime() 函数将timeptr指向的tm结构体对象转换为字符串。 + +**参数**: tm结构体指针timeptr。 + +**Output**: + +- 成功则返回字符串指针。 +- 失败返回NULL。 + +#### asctime_r + +与asctime() 函数类似,将timeptr指向的tm结构体对象转换为字符串。不同的是该字符串放置在用户提供的缓冲区buf(至少包含26字节)中,然后返回buf。 + +**参数**: + +1. tm结构体指针timeptr。 +2. 字符串缓冲区buf。 + +**Output**: + +- 成功则返回字符串指针。 +- 失败返回NULL。 + +#### clock + +返回该进程所使用的处理器时间的最佳近似值。 + +**参数**:无 + +**Output**: + +- 成功则返回时间。 +- -1:失败。 + +#### clock_gettime + +clock_gettime()函数应返回指定时钟的当前值tp。 + +**参数**: + +1. 时钟类型clock_id。 +2. 
timespec结构体指针tp。 + +**Output**: + +返回值: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:clock_id不合法。 +- ENOTSUP:clock_id不支持。 + +#### clock_settime + +clock_settime()函数应将指定的clock_id设置为tp指定的值。 + +**参数**: + +1. 时钟类型clock_id。 +2. timespec结构体指针tp。 + +**Output**: + +返回值: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:clock_id不合法,或tp参数指定的纳秒值小于0或大于等于1000 million。 +- ENOTSUP:clock_id不支持。 + +#### clock_getres + +clock_getres()返回时钟的分辨率。如果参数res不为NULL,则指定时钟的分辨率应存储在res指向的位置。如果res为NULL,则不返回时钟分辨率。如果clock_settime()的时间参数不是res的倍数,则该值将被截断为res的倍数。 + +**参数**: + +1. 时钟类型clock_id。 +2. timespec结构体指针res。 + +**Output**: + +返回值: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:clock_id不合法。 +- ENOTSUP:clock_id不支持。 + +#### clock_getcpuclockid + +clock_getcpuclockid函数获取CPU时间时钟的ID,当前进程只有一个,因此无论传入的pid是什么,都返回CLOCK_PROCESS_CPUTIME_ID。 + +**参数**: + +1. 进程ID:pid。 +2. 时钟指针:clk。 + +**Output**: + +- 0:操作成功。 + +#### clock_nanosleep + +与nanosleep类似,clock_nanosleep() 允许调用线程在以纳秒精度指定的时间间隔内休眠,并可以将睡眠间隔指定为绝对值或相对值。当前支持CLOCK_REALTIME。 + +**参数**: + +1. 时钟ID:clk。 +2. 是否为绝对值:flag。 +3. 指定的时间间隔值eq。 +4. 剩余时间值:rem。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 +- EINVAL:时钟ID错误。 +- ENOTSUP:不支持的时钟ID。 + +#### nanosleep + +nanosleep()函数应导致当前线程暂停执行,直到rqtp参数指定的时间间隔过去或信号传递到调用线程。挂起时间可能比请求的长,因为参数值被四舍五入到睡眠分辨率的整数倍,或者因为系统调度了其他活动。但是,除被信号中断外,暂停时间不得小于rqtp规定的时间。 + +如果rmtp参数是非NULL,则更新其为剩余的时间量(请求的时间减去实际睡眠时间)。如果rmtp参数为NULL,则不返回剩余时间。 + +**参数**: + +1. timespec结构体指针rqtp。 +2. timespec结构体指针rmtp。 + +**Output**: + +返回值: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:rqtp参数指定的纳秒值小于0或大于等于1000 million。 +- EINTR:信号中断。 + +#### sleep + +sleep()函数应导致调用线程暂停执行,直到参数seconds指定的实时秒数过去或信号被传递到调用线程。由于系统安排了其他活动,暂停时间可能比请求的要长。 + +**参数**: 秒数seconds。 + +**Output**: + +- 0:操作成功。 +- 如果由于信号的传递而返回,则返回值应为“未睡眠”量,以秒为单位。 + +#### timer_create + +timer_create()函数使用指定的时钟clock_id作为时序基创建计时器,在timerid引用的位置返回计时器ID,用于标识计时器。在删除计时器之前,此计时器ID在调用过程中应是唯一的。 + +**参数**: + +1. 时钟类型clock_id。 +2. sigevent结构体指针evp。(仅支持SIGEV_THREAD) +3. 
定时器ID指针timerid。 + +**Output**: + +- 0:操作成功。 +- EINVAL:clock_id不合法。 +- EAGAIN:系统缺少足够的资源来满足请求。 +- EINVAL:指定的时钟ID未定义。 +- ENOTSUP:不支持创建附加到clock_id时钟上的计时器。 + +#### timer_delete + +删除定时器。 + +**参数**:定时器ID指针timerid。 + +**Output**: + +- 0:操作成功。 +- EINVAL:timerid不合法。 + +#### timer_settime + +如果value的it_value成员非0,timer_settime()函数从value参数的it_value成员设置timerid指定的计时器的到期时间。如果在调用timer_settime()时指定的计时器已启用,则此调用应将下次到期的时间重置为指定的值。如果value的it_value成员为0,则应解除计时器。 + +**参数**: + +1. 定时器ID timerid。 +2. 计时器的特征flag。 +3. itimerspec结构体指针value。 +4. itimerspec结构体指针ovalue。返回上一次计时器设置超时时间。 + +**Output**: + +- 0:操作成功。 +- EINVAL:timerid不合法。 + +#### timer_gettime + +timer_gettime() 函数存储定时器 timerid 的剩余时间以及间隔。value 的 it_value 成员包含计时器到期前的时间量,如果计时器已解除,则为零。value 的 it_interval 成员将包含 timer_settime() 上次设置的间隔时间。 + +**参数**: + +1. 定时器ID timerid。 +2. itimerspec结构体指针value。 + +**Output**: + +- 0:操作成功。 +- EINVAL:timerid不合法。 + +#### timer_getoverrun + +根据指定的定时器ID,获取定时器的超时次数。 + +**参数**: + +1. 定时器ID timerid。 +2. itimerspec结构体指针value。 + +**Output**: + +- 非负数:超时次数。 +- -1:操作失败。 + +errno: + +- EINVAL:无效ID或定时器未初始化。 + +#### times + +获取进程的执行时间。由于UniProton无用户模式/内核模式且无子进程概念,出参和返回值均为进程执行总时间。 + +**参数**: + +1. tms结构体指针ts。 + +**Output**: + +- 非负数:进程的执行时间。 + +#### ctime + +ctime() 函数将tp指向的time_t结构体对象转换为的字符串。效果等同于asctime(localtime(t))。 + +**参数**: time_t结构体指针tp。 + +**Output**: + +- 成功则返回字符串指针。 +- 失败返回NULL。 + +#### ctime_r + +ctime_r() 函数将tp指向的time_t结构体对象转换为的字符串,并将字符串放入buf指向的数组中(其大小应至少为26字节)并返回buf。 + +**参数**: + +1. tm结构体指针timeptr。 +2. 字符串缓冲区buf。 + +**Output**: + +- 成功则返回字符串指针。 +- 失败返回NULL。 + +#### difftime + +计算两个日历时间之间的差值(由第一个参数减去第二个参数)。 + +**参数**: + +1. 第一个时间值t1。 +2. 第二个时间值t0。 + +**Output**: + +- 返回时间差值。 + +#### getdate + +暂不支持 + +#### gettimeofday + +gettimeofday() 函数应获取当前时间,并将其存储在tp指向的timeval结构中。如果时区结果tz不是空指针,则行为未指定。 + +**参数**: + +1. timeval结构体指针tp。 +2. 
时区指针tz。 + +**Output**: + +- 返回0。 + +#### gmtime + +将time_t结构表示的日历时间转换为tm结构表示的时间,无时区转换。 + +**参数**:time_t结构体指针。 + +**Output**: + +返回值: + +- tm结构体指针。 + +errno: + +- EOVERFLOW:转换溢出。 + +#### gmtime_r + +与gmtime函数类似,不同的是gmtime_r会将结果放入在用户提供的tm结构体中。 + +**参数**: + +1. time_t结构体指针。 +2. tm结构体指针。 + +**Output**: + +返回值: + +- tm结构体指针。 + +errno: + +- EOVERFLOW:转换溢出。 + +#### localtime + +将time_t结构表示的日历时间转换为tm结构表示的本地时间,受时区的影响。 + +**参数**:time_t结构体指针。 + +**Output**: + +返回值: + +- tm结构体指针。 + +errno: + +- EOVERFLOW:转换溢出。 + +#### localtime_r + +与localtime函数类似,不同的是localtime_r会将结果放入在用户提供的tm结构体中。 + +**参数**: + +1. time_t结构体指针。 +2. tm结构体指针。 + +**Output**: + +返回值: + +- tm结构体指针。 + +errno: + +- EOVERFLOW:转换溢出。 + +#### mktime + +将已经根据时区信息计算好的tm结构表示的时间转换为time_t结构表示的时间戳,受时区的影响。 + +**参数**:tm结构体指针。 + +**Output**: + +返回值: + +- time_t结构体指针。 + +errno: + +- EOVERFLOW:转换溢出。 + +#### strftime + +暂不支持 + +#### strftime_l + +暂不支持 + +#### strptime + +使用format指定的格式,将buf指向的字符串解析转换为tm结构体的时间值。 + +**参数**: + +1. 时间字符串buf。 +2. 格式字符串format。 +3. tm结构体指针tp。 + +**Output**: + +- 成功则返回指针,指向解析的最后一个字符后面的字符。 +- 失败返回NULL。 + +#### time + +获取当前的日历时间,即从一个标准时间点到此时的时间经过的秒数。 + +**参数**:time_t结构体指针t。 + +**Output**: time_t结构体指针t。 + +#### timespec_get + +返回基于给定时基base的时间,由timespec结构体保存。时基通常为TIME_UTC。 + +**参数**: + +1. timespec结构体指针ts。 +2. 
时基base + +**Output**: + +- 成功则返回时基的值。 +- 失败则返回0。 + +#### utime + +暂不支持。 + +#### wcsftime + +暂不支持 + +#### wcsftime_l + +暂不支持 + +### 内存管理 + +#### malloc + +malloc()分配大小(以字节为单位)size 的未使用的空间。 + +**参数**:大小size。 + +**Output**: 分配成功时,返回指向分配空间的指针。 + +- 如果size 为0,则返回空指针或可以成功传递给 free()的唯一指针。 +- 否则它将返回一个空指针,并设置 errno 来指示错误:ENOMEM 存储空间不足。 + +#### free + +Free()函数释放ptr指向的空间,即可供进一步分配。如果ptr是空指针,则不发生任何操作。如果空间已被对free()或realloc()的调用释放,则行为未定义。 + +**参数**:指针ptr。 + +**Output**: 无 + +#### memalign + +memalign()函数将分配按align大小字节对齐,大小为len的内存空间指针。 + +**参数**:align是对齐字节数,len指定分配内存的字节大小。 + +**Output**: 成功完成后,空间大小为len的指针。 + +#### realloc + +realloc()函数将释放ptr所指向的旧对象,并返回一个指向新对象的指针,该对象的大小由size指定。并拷贝旧指针指向的内容到新指针,然后释放旧指针指向的空间。如果ptr是空指针,则realloc()对于指定的大小应等同于malloc()。 + +**参数**:旧指针地址;新指针的目标分配空间大小。 + +**Output**: 在成功完成后,realloc()将返回一个指向分配空间的指针。如果size为0,则行为不可预测。 + +#### malloc_usable_size + +malloc_usable_size()函数返回ptr所指向的块中的可用字节数。 + +**参数**:待计算内存块大小的指针。 + +**Output**: 返回ptr指向的已分配内存块中的可用字节数。如果ptr为NULL,则返回0。 + +#### aligned_alloc + +aligned_alloc()函数分配size字节未初始化的存储空间,按照alignment指定对齐。 + +**参数**:alignment指定对齐;size是分配的字节数。 + +**Output**: 返回指向新分配内存的指针。 + +#### reallocarray + +reallocarray()函数将释放ptr所指向的旧对象,并返回一个指向新对象的指针,该对象的大小由size由入参m和n决定。等同于realloc(ptr, m * n); + +**参数**:ptr待释放的指针内容,m和n代表数组的长度和单个元素的字节数。 + +**Output**: 在成功完成后返回一个指向分配空间的指针。如果size为0,则行为不可预测。 + +#### calloc + +calloc()函数将为一个数组分配未使用的空间,并将该空间应初始化为所有位0。 + +**参数**:m和n分别代表数组的元素个数或单个元素的大小。 + +**Output**: 分配成功时,返回指向分配空间的指针。失败时则行为不可预测。 + +#### posix_memalign + +posix_memalign()函数将分配按align指定的边界对齐的大小字节,并返回指向在memptr中分配的内存的指针。对齐的值应该是sizeof(void *)的2倍幂。 + +**参数**:res分配好的内存空间的首地址,align是对齐字节数,len指定分配内存的字节大小。 + +**Output**: 成功完成后,posix_memalign()将返回零;否则,将返回一个错误号来表示错误,并且不修改memptr的内容,或者将其设置为空指针。 + +### 退出管理 + +#### abort + +abort()函数触发程序的异常终止,除了信号SIGABRT没有被捕获或者返回。 + +**参数**:无 + +**Output**: 无 + +#### _Exit + +_Exit()函数终止程序。 + +**参数**:入参是0,EXIT_SUCCESS, EXIT_FAILURE或任何其他值。wait()和waitpid()只能获得最低有效的8位(即status & 0377);完整的值应该可以从waitid()和siginfo_t中获得,SIGCHLD传递给信号处理程序。 + 
+**Output**: 无 + +#### atexit + +atexit()注册一个在程终止时运行的函数。在正常的程序终止时,所有由atexit()函数注册的函数都应该按照其注册的相反顺序被调用,除非一个函数在之前注册的函数之后被调用,而这些函数在注册时已经被调用了。正常的终止发生在调用exit()或从main()返回时。 + +**参数**:函数指针,该入参函数不带参数。 + +**Output**: 成功返回0;失败返回非0。 + +#### quick_exit + +quick_exit()函数触发快速程序终止,并以后进先出(LIFO)的顺序调用由at_quick_exit注册的函数。 + +**参数**:程序退出的状态码。 + +**Output**: 无 + +#### at_quick_exit + +at_quick_exit()函数注册由func指向的函数,在快速程序终止时(通过quick_exit)调用。最多能注册32个函数。 + +**参数**:指向快速程序退出时要调用的函数的指针。 + +**Output**: 注册成功返回0,否则为非零值。 + +#### assert + +assert()宏将在程序中插入断言,它将扩展为一个void表达式。当它被执行时,如果判断条件失败。assert()将写失败特定的调用信息,并将调用abort()退出程序。 + +**参数**:判断表达式。 + +**Output**: 无 + +### stdlib接口 + +#### div + +div()函数计算int型除法的商和余数。如果余数或商不能表示,结果是未知的。 + +**参数**:int numer(分子), int denom(分母)。 + +**Output**: 结构体div_t,int型的商和余数。 + +#### ldiv + +ldiv()函数将计算long型除法的商和余数。如果余数或商不能表示,结果是未知的。 + +**参数**:long numer(分子), long denom(分母)。 + +**Output**: 结构体ldiv_t,long型的商和余数。 + +#### lldiv + +lldiv()函数将计算long long型除法的商和余数。如果余数或商不能表示,结果是未知的。 + +**参数**:long long numer(分子), long long denom(分母)。 + +**Output**: 结构体lldiv_t,long long型的商和余数。 + +#### imaxdiv + +imaxdiv()函数将计算intmax_t型除法的商和余数。如果余数或商不能表示,结果是未知的。 + +**参数**:intmax_t numer(分子), intmax_t denom(分母)。 + +**Output**: 结构体imaxdiv_t,intmax_t型的商和余数。 + +#### wcstol + +wcstol()将宽字符串转换为long型正数。输入字符串分解为三部分。 + +1. 初始的(可能为空的)空白宽字符代码序列(由iswspace()指定)。 +2. long型整数,进制的类型由base入参决定。 +3. 由一个或多个无法识别的宽字符代码组成的最终宽字符串。 + +**参数**:指向要解释的以空字符结尾的宽字符串的指针;指向宽字符的指针;解释的整数值的基数。 + +**Output**: 转换后的long型数值。如果无法进行转换,则返回0,并设置errno表示错误。如果正确值在可表示的值范围之外,则返回LONG_MIN,LONG_MAX,LLONG_MIN或LLONG_MAX,并将errno设置为ERANGE。 + +#### wcstod + +wcstod()将宽字符串转换为double型浮点数。输入字符串分解为三部分。 + +1. 初始的(可能为空的)空白宽字符代码序列(由iswspace()指定)。 +2. double型浮点数、无穷大或者NaN。 +3. 
由一个或多个无法识别的宽字符代码组成的最终宽字符串。 + +**参数**:指向要解释的以空字符结尾的宽字符串的指针;指向宽字符的指针; + +**Output**: 转换后的double型浮点数。如果越界,则可能返回±HUGE_VAL, ±HUGE_VALF或±HUGE_VALL,并将errno设置为ERANGE。 + +#### fcvt + +fcvt()将浮点数转换为要求长度的字符串,没有小数点,如果超过value的数字长度将补零。 + +**参数**:待转换的浮点数、转换后字符串的长度、小数点所在位指针、符号位指针。 + +**Output**: 转换后字符串指针。 + +#### ecvt + +ecvt()函数将浮点数转换为要求长度的字符串,没有小数点,如果超过value的数字长度不补零(与fcvt的区别)。 + +**参数**:待转换的浮点数、转换后字符串的长度、小数点所在位指针、符号位指针。 + +**Output**: 转换后字符串指针。 + +#### gcvt + +gcvt()函数将double类型的值转换为要求长度的字符串,包含小数点。 + +**参数**:待转换的浮点数,转换后字符串的长度、转换后字符串指针。 + +**Output**: 转换后字符串指针(等于函数成功调用后第三个入参的指针)。 + +#### qsort + +qsort()函数对数据表进行排序。 + +**参数**:qsort()函数将对nel对象数组进行排序,该数组的初始元素由base指向。每个对象的大小(以字节为单位)由width参数指定。如果nel参数的值为0,则不会调用comp所指向的比较函数,也不会进行重排。应用程序应确保compar所指向的比较函数不会改变数组的内容。实现可以在调用比较函数之间对数组元素重新排序,但不能改变任何单个元素的内容。 + +**Output**: 无 + +#### abs + +abs()函数计算并返回int型数值的绝对值。 + +**参数**:int整型数值。 + +**Output**: int整型数值的绝对值。 + +#### labs + +labs()函数计算并返回long型数值的绝对值。 + +**参数**:long型数值。 + +**Output**: long型数值的绝对值。 + +#### llabs + +llabs()函数计算并返回long long型数值的绝对值。 + +**参数**:long long型数值。 + +**Output**: long long型数值的绝对值。 + +#### imaxabs + +imaxabs()函数计算并返回intmax_t型的绝对值。 + +**参数**:intmax_t型数值。 + +**Output**: intmax_t型数值的绝对值。 + +#### strtol + +strtol()函数转换字符串到long型数值。这将nptr所指向的字符串的初始部分转换为long类型的表示形式。首先,它们将输入字符串分解为三部分: + +1. 一个初始的、可能为空的空白字符序列(由isspace()函数判断)。 +2. long型整数,进制的类型由base入参决定。 +3. 由一个或多个不可识别字符组成的最后字符串,包括输入字符串的终止NUL字符。 + +**参数**:待转换的字符串的指针;指向字符的指针;解释的整数值的基数。 + +**Output**: 转换后的long型。如果无法进行转换,则返回0,并设置errno表示错误。如果正确值在可表示的值范围之外,则返回LONG_MIN, LONG_MAX, LLONG_MIN或LLONG_MAX,并将errno设置为EINVAL。 + +#### strtod + +strtod()函数将字符串转换为double型。输入字符串分解为三部分。 + +1. 初始的(可能为空的)空白字符代码序列(由isspace()指定)。 +2. double型浮点数、无穷大或者NaN。 +3. 
由一个或多个无法识别的字符代码组成的最终字符串。 + +**参数**:待转换的字符串的指针;指向字符的指针; + +**Output**: 转换后的double型浮点数。如果越界,则可能返回±HUGE_VAL, ±HUGE_VALF或±HUGE_VALL,并将errno设置为EINVAL。 + +#### atoi + +atoi()函数将字符串转换为int型整数。 + +**参数**:待转换的字符串的指针。 + +**Output**: 转换后的int型数值。如果是无法显示数值,返回值不可预测。 + +#### atol + +atol()函数将字符串转换为long型整数。 + +**参数**:待转换的字符串的指针。 + +**Output**: 转换后的long型数值。如果是无法显示数值,返回值不可预测。 + +#### atoll + +atoll()函数将字符串转换为long long型整数。 + +**参数**:待转换的字符串的指针。 + +**Output**: 转换后的long long型数值。如果是无法显示数值,返回值不可预测。 + +#### atof + +atof()函数将字符串转换为double型浮点数。 + +**参数**:待转换的字符串的指针。 + +**Output**: 转换后的double型数值。如果是无法显示数值,返回值不可预测。 + +#### bsearch + +bsearch()函数二分查找一个已排序表.将搜索一个nel对象数组,该数组的初始元素由base指向,以查找与key指向的对象匹配的元素。数组中每个元素的大小由width指定。如果nel参数的值为0,则不会调用compar所指向的比较函数,也不会找到匹配项。 + +**参数**:依次为目标查找的元素,待查找的数组的指针,数组的元素个数,数组每个元素的size大小,两个元素的比较函数。 + +**Output**: 指向数组中匹配成员的指针,如果没找到则返回空指针。 + +### SystemV IPC + +#### semget + +semget()函数返回与参数key相关联的SystemV信号量集的标识符。它可用于获得先前创建的信号量集合的标识符(当flag为0且key不为IPC_PRIVATE时)或来创建一个新的集合。最多可以支持创建SEMSET_MAX_SYS_LIMIT个信号量集合,每个集合最多支持SEMSET_MAX_SEM_NUM个信号量。 + +**参数**: + +1. 键值key。 +2. 信号量的个数nsems。 +3. 信号量的创建方式和权限flag。 + +**Output**: + +- 非负数:信号量集的标识符。 +- -1: 操作失败。 + +errno: + +- EINVAL:参数错误。 +- ENOENT:信号量集不存在。 +- ENOSPC:超出最大信号量集合的限制。 +- EEXIST:flag包含了IPC_CREAT和IPC_EXCL但标识符已存在。 + +#### semctl + +semctl()函数在由semid标识的SystemV信号量集合中的第semnum个信号量上执行由cmd指定的控制操作。集合中的信号量从0开始编号。当前支持的cmd包括IPC_STAT(支持获取信号量集合中的个数)、GETALL(获取信号量集合中所有信号量的值)、GETVAL(获取单个信号量的值)和IPC_RMID(根据标识符删除信号量集合)。 + +**参数**: + +1. 信号量集合标识符semid。 +2. 信号量中的编号semnum。 +3. 要执行的操作命令cmd。 +4. 可选参数union semun结构体arg。 + +**Output**: + +- 0:操作成功。 +- -1: 操作失败。 + +errno: + +- EINVAL:参数错误。 +- EIDRM:信号量集合已删除。 +- EFAULT:arg中的buf或array指针为空。 + +#### semop + +semop()函数对semid关联的信号量集合中选定的信号量进行操作,也就是使用资源或者释放资源。具体操作由struct sembuf结构体来决定。结构体包括数组索引semnum,信号量操作(支持+1或-1,表示释放资源和使用资源)op,操作方式flag(支持IPC_NOWAIT,不阻塞操作)。当前只支持单个信号量的操作。 + +**参数**: + +1. 信号量集合标识符semid。 +2. 指向struct sembuf结构体的数组sops。 +3. 
数组个数nsops。 + +**Output**: + +- 0:操作成功。 +- -1: 操作失败。 + +errno: + +- EINVAL:参数错误。 +- ENOTSUP:操作不支持。 +- EFAULT:数组指针sops为空。 +- E2BIG:数组个数nsops超过限制。 +- EIDRM:信号量集合已删除。 +- EFBIG:某个信号量索引超过限制。 +- EAGAIN:操作无法立即进行,如flag包含了IPC_NOWAIT或超时。 + +#### semtimedop + +semtimedop()的行为与semop()相同,不同点在于增加一个超时时间,等待超时返回错误码。 + +**参数**: + +1. 信号量集合标识符semid。 +2. 指向struct sembuf结构体的数组sops。 +3. 数组个数nsops。 +4. timespec结构体指针timeout。 + +**Output**: + +- 0:操作成功。 +- -1: 操作失败。 + +errno: + +- EINVAL:参数错误。 +- ENOTSUP:操作不支持。 +- EFAULT:数组指针sops为空。 +- E2BIG:数组个数nsops超过限制。 +- EIDRM:信号量集合已删除。 +- EFBIG:某个信号量索引超过限制。 +- EAGAIN:操作无法立即进行,如flag包含了IPC_NOWAIT或超时。 + +#### msgget + +msgget()返回与参数key相关联的SystemV消息队列的标识符。它可用于获得先前创建的消息队列的标识符(当flag为0且key不为IPC_PRIVATE时)或来创建一个新的消息队列。最多支持创建MSGQUE_MAX_SYS_LIMIT个消息队列,消息队列默认大小为MSGQUE_MAX_MSG_NUM,消息大小默认为MSGQUE_MAX_MSG_SIZE。 + +**参数**: + +1. 键值key。 +2. 消息队列的创建方式和权限flag。 + +**Output**: + +- 非负数:消息队列的标识符。 +- -1: 操作失败。 + +errno: + +- EINVAL:参数错误。 +- ENOENT:消息队列不存在。 +- ENOSPC:超出最大消息队列的限制。 +- EEXIST:flag包含了IPC_CREAT和IPC_EXCL但标识符已存在。 +- ENOMEM:内存不足。 + +#### msgctl + +msgctl()在标识为msgqid的SystemV消息队列上执行cmd指定的控制操作。当前支持IPC_STAT(支持获取消息队列中的消息个数和大小)、IPC_RMID(删除消息队列)。 + +**参数**: + +1. 消息队列标识符msgqid。 +2. 消息队列控制命令cmd。 +3. 消息队列信息msqid_ds结构体buf。 + +**Output**: + +- 0:操作成功。 +- -1: 操作失败。 + +errno: + +- EINVAL:参数错误。 +- EFAULT:msqid_ds结构体指针为空。 +- EIDRM:消息队列已删除。 +- ENOTSUP:不支持的命令。 + +#### msgsnd + +msgsnd()将msgp指向的消息追加到msgqid指定的SystemV消息队列中,如果队列有足够空间,msgsnd立即执行。消息大小不超过MSGQUE_MAX_MSG_SIZE。当前flag支持IPC_NOWAIT,表示操作不等待。 + +**参数**: + +1. 消息队列标识符msgqid。 +2. 需要发送的消息msgp。 +3. 发送消息的大小msgsz。 +4. 发送方式flag。 + +**Output**: + +- 0:操作成功。 +- -1: 操作失败。 + +errno: + +- EINVAL:参数错误。 +- EFAULT:msgp指针为空。 +- EIDRM:消息队列已删除。 +- ENOTSUP:不支持的命令。 + +#### msgrcv + +msgrcv()函数将消息从msgqid指定的消息队列中移除,并放入msgp指向的缓冲区中。参数msgsz指定了缓冲区buf的大小。当前msgtype支持的值为0,flag支持IPC_NOWAIT,表示操作不等待。 + +**参数**: + +1. 消息队列标识符msgqid。 +2. 需要接收消息的缓冲区msgp。 +3. 接收消息的大小msgsz。 +4. 接收消息的类型msgtype。 +5. 
接收方式flag。 + +**Output**: + +- 0:操作成功。 +- -1: 操作失败。 + +errno: + +- EINVAL:参数错误。 +- EFAULT:msgp指针为空。 +- EIDRM:消息队列已删除。 +- ENOTSUP:不支持的命令。 +- ENOMSG:消息队列中没有请求类型的消息。 + +#### shmget + +暂不支持。 + +#### shmctl + +暂不支持。 + +#### shmat + +暂不支持。 + +#### shmdt + +暂不支持。 + +#### ftok + +暂不支持。 + +#### fstatat + +fstatat()函数根据相对路径获取相关的文件统计信息。 + +**参数**: + +1. 文件描述符fd。 +2. 路径名path。 +3. 文件信息指针st。 +4. 操作标记flag。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EFAULT:指针为空。 +- ENOENT:没有这样的文件。 +- ENOSYS:缺少相关函数。 + +#### fchmodat + +fchmodat()函数根据相对路径更改相关文件的访问权限。 + +**参数**: + +1. 文件描述符fd。 +2. 路径名path。 +3. 访问模式mode。 +4. 操作标记flag。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- ELOOP:符号链接的层数过多。 +- ENOSYS:缺少相关函数。 + +#### mkdir + +mkdir()函数根据参数路径名path创建目录。 + +**参数**: + +1. 路径名path。 +2. 访问模式mode。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- ENXIO:驱动程序问题。 +- EEXIST:目录已存在。 +- ENOSYS:缺少相关函数。 + +#### chmod + +chmod()函数用来控制相关文件的权限。 + +**参数**: + +1. 路径名path。 +2. 访问模式mode。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- ELOOP:符号链接的层数过多。 + +#### lstat + +lstat()函数根据路径名path获取相关的链接文件统计信息。 + +**参数**: + +1. 路径名path。 +2. 文件信息结构体stat。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EFAULT:参数指针为空。 +- ENOENT:没有这样的文件。 +- ENOMEM:内存不足。 + +#### utimensat + +utimensat()函数根据相对目录更改相关的文件时间戳。 + +**参数**: + +1. 文件描述符fd。 +2. 路径名pathname。 +3. 时间结构体数组times。 +4. 操作标识flags。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### mkfifo + +mkfifo()函数用于在文件系统中创建一个有名管道。 + +**参数**: + +1. 路径名pathname。 +2. 访问模式mode。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EEXIST:文件已存在。 +- ENXIO:驱动程序问题。 +- ENOSYS:缺少相关函数。 + +#### fchmod + +fchmod()函数可以修改文件描述符fd相关的文件权限。 + +**参数**: + +1. 文件描述符fd。 +2. 访问模式mode。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- ELOOP:符号链接的层数过多。 +- ENOSYS:缺少相关函数。 + +#### mknod + +mknod()函数用于建立FIFO、字符设备文件以及块设备文件等。 + +**参数**: + +1. 路径名path。 +2. 访问模式mode。 +3. 
设备号dev。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EEXIST:文件已存在。 +- ENXIO:驱动程序问题。 +- ENOSYS:缺少相关函数。 + +#### statvfs + +statvfs()函数根据路径名path获取磁盘信息。 + +**参数**: + +1. 路径名path。 +2. statvfs结构体指针buf。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EFAULT:错误地址。 +- ENOENT:不存在文件。 +- ENOMEM:内存不足。 + +#### mkfifoat + +mkfifoat()函数与mkfifo()函数类似,在参数fd表示的目录相关位置创建一个有名管道。 + +**参数**: + +1. 文件描述符fd。 +2. 路径名path。 +3. 访问模式mode。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EEXIST:文件已存在。 +- ENXIO:驱动程序问题。 +- ENOSYS:缺少相关函数。 + +#### umask + +umask()函数设置预设的文件权限。 + +**参数**: + +1. 访问权限mode。 + +**Output**: + +- 前一个访问权限:操作成功。 + +#### mknodat + +mknodat()函数用于建立FIFO、字符设备文件以及块设备文件等,在参数fd表示的目录相关位置创建。 + +**参数**: + +1. 文件描述符fd。 +2. 路径名path。 +3. 访问模式mode。 +4. 设备号dev。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EEXIST:文件已存在。 +- ENXIO:驱动程序问题。 +- ENOSYS:缺少相关函数。 + +#### futimesat + +futimesat()函数根据目录文件描述符更改相关的文件时间戳。 + +**参数**: + +1. 文件描述符fd。 +2. 路径名pathname。 +3. 时间结构体数组times。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### lchmod + +lchmod()函数用来控制相关文件的权限。 + +**参数**: + +1. 路径名pathname。 +2. 访问模式mode。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- ELOOP:符号链接的层数过多。 +- ENOSYS:缺少相关函数。 + +#### futimens + +futimens()函数根据文件描述符fd更改相关的文件时间戳。 + +**参数**: + +1. 文件描述符fd。 +2. 时间结构体数组times。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### mkdirat + +mkdirat()函数根据相对目录文件描述符创建一个新目录。 + +**参数**: + +1. 文件描述符fd。 +2. 路径名path。 +3. 访问模式mode。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- ENXIO:驱动程序问题。 +- EEXIST:目录已存在。 +- ENOSYS:缺少相关函数。 + +#### fstat + +fstat()函数根据文件描述符fd获取相关文件的信息。 + +**参数**: + +1. 文件描述符fd。 +2. stat结构体指针buf。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EBADF:无效的描述符。 +- ENOSYS:缺少相关函数。 + +#### stat + +stat()函数根据路径名获取相关文件的信息。 + +**参数**: + +1. 路径名path。 +2. 
stat结构体指针buf。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EFAULT:参数指针为空。 +- ENOENT:不存在文件。 +- ENOSYS:缺少相关函数。 + +#### open + +open()函数根据文件名pathname打开相关的文件。 + +**参数**: + +1. 文件名pathname。 +2. 文件访问模式mode。 +3. 可变参数。 + +**Output**: + +- 文件描述符fd:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- ELOOP:符号链接的层数过多。 +- EACCES:权限不足。 +- ENXIO:驱动程序问题。 + +#### creat + +creat()函数根据文件名pathname创建相关的文件。 + +**参数**: + +1. 文件名pathname。 +2. 文件访问模式mode。 + +**Output**: + +- 文件描述符fd:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- ELOOP:符号链接的层数过多。 +- EACCES:权限不足。 +- ENXIO:驱动程序问题。 + +#### posix_fadvise + +暂不支持。 + +#### fcntl + +fcntl()函数用来修改已经打开文件的属性。操作命令目前只支持F_DUPFD、F_DUPFD_CLOEXEC、F_GETFD、F_SETFD、F_GETFL、F_SETFL、F_GETPATH。 + +**参数**: + +1. 文件描述符fd。 +2. 操作命令cmd。 +3. 可变参数。 + +**Output**: + +- 文件描述符fd:操作成功。 +- -1:操作失败。 + +errno: + +- EBADF:无效的描述符。 +- ENOSYS:缺少相关函数。 +- EINVAL:参数错误。 + +#### posix_fallocate + +posix_fallocate()函数确保为文件描述符fd引用的文件分配磁盘空间,从偏移开始,持续len字节。如果文件大小小于offset+len,则文件将增大到这个大小,否则将保留文件大小不变。 + +**参数**: + +1. 文件描述符fd。 +2. 偏移offset。 +3. 长度len。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- EFBIG:长度错误。 +- EBADF:无效的描述符。 +- EROFS:文件系统仅可读。 +- ENOSYS:缺少相关函数。 + +#### openat + +openat()函数当参数传入为绝对路径时,与open()函数一致。如果openat()函数的第一个参数fd是常量AT_FDCWD时,则其后的第二个参数路径名是以当前工作目录为基址的;否则以fd指定的目录文件描述符为基址。 + +**参数**: + +1. 文件描述符fd。 +2. 文件路径path。 +3. 打开标识oflag。 +4. 访问模式mode。 + +**Output**: + +- 文件描述符fd:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- EFBIG:长度错误。 +- EBADF:无效的描述符。 +- EROFS:文件系统仅可读。 +- ENOSYS:缺少相关函数。 + +#### scandir + +暂不支持。 + +#### seekdir + +seekdir()函数用来设置参数dir目录流当前的读取位置。 + +**参数**: + +1. 目录流指针dir。 +2. 偏移位置off。 + +**Output**: 无。 + +#### readdir_r + +暂不支持。 + +#### fdopendir + +fdopendir()函数打开与目录名称相对应的目录流指针。 + +**参数**: + +1. 文件描述符fd。 + +**Output**: + +- 目录流指针dir:操作成功。 +- NULL:操作失败。 + +errno: + +- EBADF:无效的描述符。 +- ENOTDIR:不是目录。 + +#### versionsort + +versionsort()函数将dirent的名称通过strverscmp()进行比较,用于比较版本号字符串。 + +**参数**: + +1. dirent指针a。 +2. 
dirent指针b。 + +**Output**: + +- 小于,等于或大于零的整数:操作成功。 + +#### alphasort + +alphasort()函数将dirent的名称通过strcoll()进行比较,用于按字母顺序比较目录项名称。 + +**参数**: + +1. dirent指针a。 +2. dirent指针b。 + +**Output**: + +- 小于,等于或大于零的整数:操作成功。 + +#### rewinddir + +rewinddir()函数设置参数dir目录流读取位置为原来开头的读取位置。 + +**参数**: + +1. 目录流指针dir。 + +**Output**: 无。 + +#### dirfd + +dirfd()函数返回参数dir目录流相关的文件描述符fd。 + +**参数**: + +1. 目录流指针dir。 + +**Output**: + +- 整型文件描述符值:操作成功。 + +#### readdir + +暂不支持。 + +#### telldir + +telldir()函数返回一个目录流dir的当前位置。 + +**参数**: + +1. 目录流指针dir。 + +**Output**: + +- 位置整型值:操作成功。 + +#### closedir + +closedir()函数关闭一个目录流dir。 + +**参数**: + +1. 目录流指针dir。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EBADF:无效的描述符。 + +#### opendir + +opendir()函数用来打开参数name指定的目录流。 + +**参数**: + +1. 文件名name。 + +**Output**: + +- 目录流指针:操作成功。 +- NULL:操作失败。 + +errno: + +- EINVAL:参数错误。 +- ELOOP:符号链接的层数过多。 +- EACCES:权限不足。 +- ENXIO:驱动程序问题。 + +#### putwchar + +putwchar()函数将宽字符wc写入标准输出stdout。 + +**参数**: + +1. 宽字符wc。 + +**Output**: + +- wc:操作成功。 +- WEOF: 操作失败。 + +errno: + +- EILSEQ:宽字符转换错误。 + +#### fgetws + +fgetws()函数从文件流中读取最多n-1个宽字符的字符串,并增加一个终结宽空字符,保存到输入字符串ws中。 + +**参数**: + +1. 宽字符串ws。 +2. 宽字符串长度n。 +3. 文件流stream。 + +**Output**: + +- ws:操作成功。 +- NULL:操作失败。 + +#### vfwprintf + +vfwprintf()函数将format指向可变参数列表中的格式化数据的字符串写入文件流stream。 + +**参数**: + +1. 文件流stream。 +2. 指向宽字符串的指针format。 + +**Output**: + +- 成功转换字符数:操作成功。 +- -1:操作失败。 + +errno: + +- EOVERFLOW:字符串长度溢出。 +- EINVAL:参数错误。 + +#### fscanf + +fscanf()函数从文件流stream读取格式化数据到变量参数列表中。 + +**参数**: + +1. 文件流stream。 +2. 格式化输入format。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- EOF:操作失败。 + +#### snprintf + +snprintf()函数用于格式化输出字符串,并将结果写入到指定的缓冲区,并限制输出的字符数,避免缓冲区溢出。 + +**参数**: + +1. 目标字符串str。 +2. 字符数组的大小size。 +3. 格式化字符串format。 +4. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### sprintf + +sprintf()函数发送格式化输出到str所指向的字符串。 + +**参数**: + +1. 目标字符串str。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### fgetpos + +fgetpos()函数获取文件流stream的当前文件位置,并把它写入到pos。 + +**参数**: + +1. 
指向文件流对象指针stream。 +2. 指向fpos_t对象的指针。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### vdprintf + +vdprintf()函数将可变参数的格式化后的字符串输出到文件描述符中。 + +**参数**: + +1. 文件描述符fd。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### gets + +gets()函数从标准输入stdin读取一行,并存储在str所指向的字符串中。 + +**参数**: + +1. 字符串指针str。 + +**Output**: + +- 字符串指针str:操作成功。 +- NULL:操作失败。 + +#### ungetc + +ungetc()函数将字符char推入到指定的文件流stream中,以便它是下一个被读取到的字符。 + +**参数**: + +1. 被推入的字符char。 +2. 文件流指针stream。 + +**Output**: + +- 字符char:操作成功。 +- EOF:操作失败。 + +#### ftell + +ftell()函数返回给定文件流stream的当前文件位置。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 当前文件位置:操作成功。 +- -1:操作失败。 + +errno: + +- EOVERFLOW:文件位置溢出。 + +#### clearerr + +clearerr()函数清除给定文件流stream的文件结束符和错误标识符。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: 无 + +#### getc_unlocked + +getc_unlocked()函数从文件流stream读取字符char。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 字符char:操作成功。 +- EOF:文件流结束。 + +#### fmemopen + +fmemopen()函数打开一个内存流,使其可以读取或写入由buf指定的缓冲区。 + +**参数**: + +1. 缓冲区buf。 +2. 缓冲区大小size。 +3. 操作模式mode。 + +**Output**: + +- 文件流指针:操作成功。 +- NULL:操作失败。 + +errno: + +- EINVAL:参数失败。 +- ENOMEM:内存不足。 + +#### putwc + +putwc()函数将宽字符wc写入给定的文件流stream中。 + +**参数**: + +1. 宽字符wc。 +2. 文件流stream。 + +**Output**: + +- 宽字符wc:操作成功。 +- WEOF:操作失败。 + +errno: + +- EILSEQ:宽字符转换错误。 + +#### getchar + +getchar()函数从标准输入stdin获取一个字符。 + +**参数**:无 + +**Output**: + +- 字符ch:操作成功。 +- EOF:操作失败或文件末尾。 + +#### open_wmemstream + +open_wmemstream()函数打开用于写入宽字符串缓冲区的文件流。缓冲区是动态分配的。 + +**参数**: + +1. 指向缓冲区宽字符串指针ptr。 +2. 指向缓冲区大小的指针size。 + +**Output**: + +- 文件流指针:操作成功。 +- NULL:操作失败。 + +#### asprintf + +asprintf()函数将可变参数的格式化后的数据写入字符串缓冲区buffer。 + +**参数**: + +1. 存放字符串的指针buf。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### funlockfile + +funlockfile()函数将文件流解锁。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: 无 + +#### fflush + +fflush()函数刷新文件流stream的输出缓冲区。 + +**参数**: + +1. 
文件流指针stream。 + +**Output**: + +- 0:操作成功。 +- EOF:操作失败。 + +#### vfprintf + +vfprintf()函数将可变参数的格式化数据输出到文件流stream中。 + +**参数**: + +1. 文件流指针stream。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +errno: + +- EOVERFLOW:字符串长度溢出。 +- EINVAL:参数错误。 + +#### vsscanf + +vsscanf()函数从字符串中读取格式化的数据到变量参数列表中。 + +**参数**: + +1. 处理字符串的指针str。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### vfwscanf + +vfwscanf()函数从文件流中读取格式化的数据到变量参数列表中。 + +**参数**: + +1. 输入文件流stream。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### puts + +puts()函数将指定字符串写入到标准输出stdout中。 + +**参数**: + +1. 输出字符串str。 + +**Output**: + +- 字符串长度:操作成功。 +- EOF:操作失败。 + +#### getchar_unlocked + +getchar_unlocked()函数读取标准输入stdin的一个字符。 + +**参数**:无。 + +**Output**: + +- 读取字符char:操作成功。 +- EOF:操作失败。 + +#### setvbuf + +setvbuf()函数设置文件流stream的缓冲模式。当前只支持无缓冲模式。 + +**参数**: + +1. 文件流stream。 +2. 分配的缓冲区buf。 +3. 指定文件缓冲的模式mode。 +4. 缓冲区大小size。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### getwchar + +getwchar()函数从标准输入stdin获取一个宽字符。 + +**参数**:无。 + +**Output**: + +- 宽字符ch:操作成功。 +- EOF:操作失败或文件末尾。 + +#### setbuffer + +setbuffer()函数用来设置文件流stream的缓冲区。 + +**参数**: + +1. 文件流指针stream。 +2. 缓冲区buf。 +3. 缓冲区大小size。 + +**Output**: 无。 + +#### vsnprintf + +vsnprintf()函数将可变参数的格式化数据输出到字符串缓冲区中。 + +**参数**: + +1. 字符串缓冲区str。 +2. 缓冲区大小size。 +3. 格式化字符串format。 +4. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### freopen + +freopen()函数将一个新的文件名filename与给定的打开的流stream关联,同时关闭流中的旧文件。 + +**参数**: + +1. 新的文件名filename。 +2. 文件访问模式mode。 +3. 文件流指针stream。 + +**Output**: + +- 文件流指针stream:操作成功。 +- NULL:操作失败。 + +#### fwide + +fwide()函数用于设置文件流的定向。 + +**参数**: + +1. 文件流fp。 +2. 设置模式mode。 + +**Output**: + +- mode:操作成功。 + +#### sscanf + +sscanf()函数从字符串中读取格式化的数据到变量列表中。 + +**参数**: + +1. 处理字符串的指针str。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### fgets + +fgets()函数从指定的流中读取数据,每次读取n-1个字符或换行符或结束,并存储在参数str所指向的字符串内。 + +**参数**: + +1. 字符串的指针str。 +2. 读取最大字符数n. +3. 
文件流stream。 + +**Output**: + +- 返回相同的str参数:操作成功。 +- NULL:操作失败。 + +#### vswscanf + +vswscanf()函数从参数str读取格式化的宽字符数据到变量参数列表中。 + +**参数**: + +1. 字符串的指针str。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配数:操作成功。 +- -1:操作失败。 + +#### vprintf + +vprintf()函数将格式化字符串输出到标准输出stdout。 + +**参数**: + +1. 格式化字符串format。 +2. 可变参数。 + +**Output**: + +- 成功写入字符数:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- EOVERFLOW:字符串溢出。 + +#### fputws + +fputws()函数将参数宽字符串str写入指定的文件流stream。 + +**参数**: + +1. 宽字符串str。 +2. 文件流指针stream。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### wprintf + +wprintf()函数将格式化字符串format写入到标准输出stdout。 + +**参数**: + +1. 格式化字符串format。 +2. 可变参数。 + +**Output**: + +- 返回写入字符数:操作成功。 +- -1:操作失败。 + +#### wscanf + +wscanf()函数从标准输入stdin读取格式化的宽字符数据到可变变量列表。 + +**参数**: + +1. 格式化字符串format。 +2. 可变参数。 + +**Output**: + +- 返回成功匹配数:操作成功。 +- -1:操作失败。 + +#### fputc + +fputc()函数将字符c写入指定的文件流stream。 + +**参数**: + +1. 输入字符c。 +2. 文件流指针stream。 + +**Output**: + +- 写入的字符c:操作成功。 +- EOF:操作失败。 + +#### putchar + +putchar()函数将字符c写入标准输出stdout。 + +**参数**: + +1. 输入字符c。 + +**Output**: + +- 写入的字符c:操作成功。 +- EOF:操作失败。 + +#### flockfile + +flockfile()函数将指定的文件流锁定。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: 无。 + +#### vswprintf + +vswprintf()函数将格式化字符串写入大小已设置的缓冲区中。 + +**参数**: + +1. 宽字符串ws。 +2. 宽字符串大小len。 +3. 格式化字符串format。 +4. 可变参数。 + +**Output**: + +- 返回成功匹配数:操作成功。 +- -1:操作失败。 + +errno: + +- EOVERFLOW:字符串溢出。 + +#### fputwc + +fputwc()函数将参数宽字符wc写入文件流stream。 + +**参数**: + +1. 宽字符wc。 +2. 文件流指针stream。 + +**Output**: + +- 宽字符wc:操作成功。 +- WEOF:操作失败。 + +#### fopen + +fopen()函数使用给定的模式mode打开filename所指向的文件。 + +**参数**: + +1. 文件名filename。 +2. 文件访问模式mode。 + +**Output**: + +- 文件流指针:操作成功。 +- NULL:操作失败。 + +errno: + +- EINVAL:参数错误。 + +#### tmpnam + +tmpnam()函数生成并返回一个有效的临时文件名,并存储在参数缓冲区buf中。 + +**参数**: + +1. 字符串缓冲区buf。 + +**Output**: + +- 临时文件名:操作成功。 +- NULL:操作失败。 + +#### ferror + +ferror()函数测试给定文件流stream的错误标识符。 + +**参数**: + +1. 
文件流指针stream。 + +**Output**: + +- 0:文件流未出错。 +- 非零值:文件流出错。 + +#### printf + +printf()函数将格式化字符串format写入到标准输出stdout。 + +**参数**: + +1. 格式化字符串format。 +2. 可变参数。 + +**Output**: + +- 成功写入字符数:操作成功。 +- -1:操作失败。 + +#### open_memstream + +open_memstream()函数打开用于写入字符串缓冲区的文件流。缓冲区是动态分配的。 + +**参数**: + +1. 指向缓冲区宽字符串指针ptr。 +2. 指向缓冲区大小的指针size。 + +**Output**: + +- 文件流指针:操作成功。 +- NULL:操作失败。 + +#### fwscanf + +fwscanf()函数从文件流stream中读取格式化数据将值存储到可变变量列表中。 + +**参数**: + +1. 文件流指针stream。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### fprintf + +fprintf()函数将格式化字符串写入到指定文件流stream。 + +**参数**: + +1. 文件流指针stream。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +errno: + +- EOVERFLOW:字符串长度溢出。 +- EINVAL:参数错误。 + +#### fgetc + +fgetc()函数从文件流stream中读取一个字符c。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 读取的字符c:操作成功。 +- EOF:操作失败。 + +#### rewind + +rewind()函数将文件流内部指针重新指向开头。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: 无。 + +#### getwc + +getwc()函数从文件流读取一个宽字符wc。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 读取的宽字符c:操作成功。 +- WEOF:操作失败。 + +#### scanf + +scanf()函数从标准输入stdin读取格式化输入到可变变量列表中。 + +**参数**: + +1. 格式化字符串format。 +2. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### perror + +perror()函数将错误提示字符串打印出来。 + +**参数**: + +1. 错误字符串msg。 + +**Output**: 无。 + +#### vsprintf + +vsprintf()函数将格式化字符串输出到参数指定的字符串str。 + +**参数**: + +1. 字符串缓冲区str。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### vasprintf + +vasprintf()函数将格式化字符串写入动态分配的字符串缓冲区中。 + +**参数**: + +1. 字符串缓冲区指针str。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +#### getc + +getc()函数从文件流stream中读取一个字符c。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 读取的字符c:操作成功。 +- EOF:操作失败。 + +#### dprintf + +dprintf()函数将格式化字符串写入到文件描述符fd指定的文件中。 + +**参数**: + +1. 文件描述符fd。 +2. 
格式化字符串format。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- EOVERFLOW:字符串溢出。 + +#### popen + +暂不支持。 + +#### putc + +putc()函数将一个字符c写入文件流stream。 + +**参数**: + +1. 输入字符c。 +2. 文件流指针stream。 + +**Output**: + +- 写入的字符c:操作成功。 +- EOF:操作失败。 + +#### fseek + +fseek()函数设置文件流stream的位置。 + +**参数**: + +1. 文件流stream。 +2. 相对偏移量offset。 +3. 开始位置whence。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### fgetwc + +fgetwc()函数从文件流stream读取一个宽字符wc。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 宽字符wc:操作成功。 +- WEOF:操作失败。 + +errno: + +- EILSEQ:转换失败。 + +#### tmpfile + +tmpfile()函数生成一个临时文件流指针。 + +**参数**:无。 + +**Output**: + +- 临时文件流指针:操作成功。 +- NULL:操作失败。 + +#### putw + +putw()函数将一个整数w写入指定的文件流stream。 + +**参数**: + +1. 输出整数w。 +2. 文件流指针stream。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### tempnam + +tempnam()函数在指定的目录中可用于创建一个临时文件的文件名。 + +**参数**: + +1. 指定目录dir。 +2. 文件名前缀prefix。 + +**Output**: + +- 临时文件名:操作成功。 +- NULL:操作失败。 + +#### vwprintf + +vwprintf()函数将格式化宽字符串写入到标准输出stdout。 + +**参数**: + +1. 格式化字符串format。 +2. 可变参数。 + +**Output**: + +- 成功匹配字符数:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- EOVERFLOW:字符串溢出。 + +#### getw + +getw()函数从指定文件流stream读取一个整数。 + +**参数**: + +1. 文件流stream。 + +**Output**: + +- 读取的整数:操作成功。 +- EOF:操作失败。 + +#### putchar_unlocked + +putchar_unlocked()函数将一个字符c写入标准输出stdout。 + +**参数**: + +1. 写入字符c。 + +**Output**: + +- 字符c:操作成功。 +- EOF:操作失败。 + +#### fread + +fread()函数从给定文件流stream读取最多count个元素到字符串缓冲区buf。 + +**参数**: + +1. 缓冲区指针buf。 +2. 每个元素大小size。 +3. 元素个数count。 +4. 文件流指针stream。 + +**Output**: + +- 成功读取的元素个数:操作成功。 +- 0:操作失败。 + +#### fileno + +fileno()函数返回文件流stream相关的文件描述符。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 文件描述符:操作成功。 +- -1:操作失败。 + +errno: + +- EBADF:无效的描述符。 + +#### remove + +remove()函数删除给定的文件名filename。 + +**参数**: + +1. 文件名filename。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### putc_unlocked + +putc_unlocked()函数将一个字符c写入指定的文件流stream。 + +**参数**: + +1. 写入字符c。 +2. 
文件流指针stream。 + +**Output**: + +- 写入字符c:操作成功。 +- EOF:操作失败。 + +#### fclose + +fclose()函数关闭文件流stream。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 0:操作成功。 +- EOF:操作失败。 + +#### feof + +feof()函数检测文件流的文件结束符。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 1:文件结束。 +- 0:文件未结束。 + +#### fwrite + +fwrite()函数将指定的字符串str写入文件流stream。 + +**参数**: + +1. 字符串str。 +2. 元素的大小size。 +3. 元素的个数count。 +4. 文件流指针stream。 + +**Output**: + +- 返回成功写入字符数:操作成功。 +- 0:操作失败。 + +#### setbuf + +setbuf()函数打开或关闭缓冲机制。 + +**参数**: + +1. 文件流指针stream。 +2. 缓冲区buf。 + +**Output**: 无。 + +#### pclose + +暂不支持。 + +#### swprintf + +swprintf()函数将格式化的数据写入到指定的宽字符串中。 + +**参数**: + +1. 宽字符串缓冲区buf。 +2. 宽字符串大小size。 +3. 格式化字符串format。 +4. 可变参数。 + +**Output**: + +- 成功写入字符数:操作成功。 +- -1:操作失败。 + +errno: + +- EOVERFLOW:字符串溢出。 + +#### fwprintf + +fwprintf()函数将格式化宽字符串写入文件流stream。 + +**参数**: + +1. 文件流stream。 +2. 格式化宽字符串format。 +3. 可变参数。 + +**Output**: + +- 成功写入字符数:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 +- EOVERFLOW:字符串溢出。 + +#### swscanf + +swscanf()函数从宽字符串中读取格式化的数据到变量列表中。 + +**参数**: + +1. 宽字符串ws。 +2. 格式化宽字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配数:操作成功。 +- -1:操作失败。 + +#### rename + +rename()函数将旧文件名old_filename重命名为新文件名new_filename。 + +**参数**: + +1. 旧文件名old_filename。 +2. 新文件名new_filename。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +errno: + +- EINVAL:参数错误。 + +#### getdelim + +getdelim()函数从文件流中读取字符串,由参数指定的delimiter的进行分隔。 + +**参数**: + +1. 字符串缓冲区buf。 +2. 字符串缓冲区大小指针n。 +3. 分隔符delimiter。 +4. 文件流指针stream。 + +**Output**: + +- 成功写入字符数:操作成功。 +- -1:操作失败。 + +#### vfscanf + +vfscanf()函数从文件流stream读取格式化的数据到变量参数列表中。 + +**参数**: + +1. 文件流指针stream。 +2. 格式化字符串format。 +3. 可变参数。 + +**Output**: + +- 成功匹配数:操作成功。 +- -1:操作失败。 + +#### setlinebuf + +setlinebuf()函数设置文件流stream的行缓冲模式。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: 无。 + +#### fputs + +fputs()函数将字符串str写入到指定的文件流stream。 + +**参数**: + +1. 字符串str。 +2. 文件流指针stream。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### fsetpos + +fsetpos()函数将文件指针定位在参数pos指定的位置上。 + +**参数**: + +1. 文件流指针stream。 +2. 
文件位置pos。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### fopencookie + +fopencookie()函数打开一个可以自定义实现的I/O流。 + +**参数**: + +1. 结构体指针cookie。 +2. 访问模式mode。 +3. 自定义I/O流函数。 + +**Output**: + +- 文件流指针:操作成功。 +- NULL:操作失败。 + +#### fgetln + +fgetln()函数从文件流中读取一行数据,并存储在大小为*len的缓冲区中。 + +**参数**: + +1. 文件流指针stream。 +2. 缓冲区长度指针len。 + +**Output**: + +- 字符串指针:操作成功。 +- NULL:操作失败。 + +#### vscanf + +vscanf()函数从标准输入stdin将格式化的数据读入可变参数列表。 + +**参数**: + +1. 格式化字符串format。 +2. 可变参数。 + +**Output**: + +- 成功匹配数:操作成功。 +- -1:操作失败。 + +#### ungetwc + +ungetwc()函数将宽字符ch推回与文件流关联的缓冲区中,除非ch等于WEOF。 + +**参数**: + +1. 宽字符wc。 +2. 文件流指针stream。 + +**Output**: + +- 返回宽字符wc:操作成功。 +- WEOF:操作失败。 + +#### getline + +getline()函数从文件流中读取字符串,以换行符\n进行分隔。 + +**参数**: + +1. 字符串缓冲区buf。 +2. 字符串缓冲区大小指针n。 +3. 文件流指针stream。 + +**Output**: + +- 成功写入字符数:操作成功。 +- -1:操作失败。 + +#### ftrylockfile + +ftrylockfile()函数尝试进行文件锁定。 + +**参数**: + +1. 文件流指针stream。 + +**Output**: + +- 0:操作成功。 +- -1:操作失败。 + +#### vwscanf + +vwscanf()函数从标准输入stdin中读取格式化的数据到变量参数列表中。 + +**参数**: + +1. 格式化字符串format。 +2. 
可变参数。 + +**Output**: + +- 成功匹配数:操作成功。 +- -1:操作失败。 + +## C11接口 + +| 接口名 | 适配情况 | +| :---: | :-----: | +| [cnd_broadcast](#cnd_broadcast) | 支持 | +| [cnd_destroy](#cnd_destroy) | 支持 | +| [cnd_init](#cnd_init) | 支持 | +| [cnd_signal](#cnd_signal) | 支持 | +| [cnd_timedwait](#cnd_timedwait) | 支持 | +| [cnd_wait](#cnd_wait) | 支持 | +| [mtx_destroy](#mtx_destroy) | 支持 | +| [mtx_init](#mtx_init) | 支持 | +| [mtx_lock](#mtx_lock) | 支持 | +| [mtx_timedlock](#mtx_timedlock) | 支持 | +| [mtx_trylock](#mtx_trylock) | 支持 | +| [thrd_create](#thrd_create) | 支持 | +| [thrd_current](#thrd_current) | 支持 | +| [thrd_detach](#thrd_detach) | 支持 | +| [thrd_equal](#thrd_equal) | 支持 | +| [thrd_exit](#thrd_exit) | 支持 | +| [thrd_join](#thrd_join) | 支持 | +| [thrd_sleep](#thrd_sleep) | 支持 | +| [thrd_yield](#thrd_yield) | 支持 | +| [tss_create](#tss_create) | 支持 | +| [tss_delete](#tss_delete) | 支持 | +| [tss_get](#tss_get) | 支持 | +| [tss_set](#tss_set) | 支持 | + +### 条件变量管理 + +#### cnd_init + +初始化条件变量cond。同使用条件变量属性为NULL的pthread_cond_init()。 + +**参数**:条件变量指针cond。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_error:操作失败。 + +#### cnd_destroy + +销毁指定条件变量,使得该条件变量未初始化,可以使用cnd_init() 重新初始化。同pthread_cond_destroy()。 + +**参数**:条件变量指针cond。 + +**Output**: 无。 + +#### cnd_broadcast + +取消阻止当前等待cond所指向的条件变量的所有线程。如果没有线程被阻塞,则不执行任何操作并返回thrd_success。 + +**参数**:条件变量指针cond。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_error:操作失败。 + +#### cnd_signal + +取消阻塞在指定的条件变量cond上阻塞的线程中的至少一个(如果有任何线程在cond上被阻塞)。 + +**参数**:条件变量指针cond。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_error:操作失败。 + +#### cnd_timedwait + +阻塞当前线程等待cond指定的条件变量,并释放m指定的互斥锁。只有在另一个线程使用相同的条件变量调用cnd_signal() 或cnd_broadcast() 后,或者如果系统时间达到指定的时间,并且当前线程重新获得互斥锁时,等待线程才会解锁。 + +**参数**: + +1. 条件变量指针cond。 +2. 互斥锁指针m。 +3. 
超时时间指针ts。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_error:操作失败。 +- thrd_timedout:阻塞超时。 + +#### cnd_wait + +cnd_wait() 函数与cnd_timedwait() 类似,阻塞当前线程等待cond指定的条件变量,并释放m指定的互斥锁。只有在另一个线程使用相同的条件变量调用cnd_signal() 或cnd_broadcast() 后,并且当前线程重新获得互斥锁时,等待线程才会解锁。 + +**参数**: + +1. 条件变量指针cond。 +2. 互斥锁指针m。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_error:操作失败。 + +### 互斥锁管理 + +#### mtx_init + +mtx_init()函数根据属性type初始化互斥锁。 + +**参数**: + +1. 互斥锁指针mutex。 +2. 互斥锁属性type。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_error:操作失败。 + +#### mtx_destroy + +mtx_destroy() 用于注销一个互斥锁。销毁一个互斥锁即意味着释放它所占用的资源,且要求锁当前处于开放状态。 + +**参数**:互斥锁指针mutex。 + +**Output**: 无。 + +#### mtx_lock + +当mtx_lock() 返回时,该互斥锁已被锁定。线程调用该函数让互斥锁上锁,如果该互斥锁已被另一个线程锁定和拥有,则调用该线程将阻塞,直到该互斥锁变为可用为止。 + +**参数**:互斥锁指针mutex。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_error:操作失败。 + +#### mtx_timedlock + +mtx_timedlock() 语义与mtx_lock() 类似,不同点在于锁已经被占据时增加一个超时时间,等待超时返回错误码。 + +**参数**: + +1. 互斥锁指针mutex。 +2. 超时时间指针ts。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_error:操作失败。 +- thrd_timedout:等待超时。 + +#### mtx_trylock + +mtx_trylock() 语义与 mtx_lock() 类似,不同点在于锁已经被占据时返回 thrd_busy, 而非挂起等待。 + +**参数**:互斥锁指针mutex。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_busy:mutex指定的锁已经被占据。 +- thrd_error:操作失败。 + +### 任务管理 + +#### thrd_create + +thrd_create()函数创建一个执行函数为func的新线程,创建成功后,将创建的线程的ID存储在参数 thread 的位置。 + +**参数**: + +1. 指向线程标识符的指针thread。 +2. 线程处理函数的起始地址 func。 +3. 
运行函数的参数 arg。 + +**Output**: + +- thrd_success:创建成功。 +- thrd_error:attr指定的属性无效。 +- thrd_nomem:系统缺少创建新线程所需的资源。 + +#### thrd_current + +返回调用线程的线程ID。 + +**参数**:无 + +**Output**: 返回调用线程的线程ID。 + +#### thrd_detach + +实现线程分离,即主线程与子线程分离,子线程结束后,资源自动回收。 + +**参数**:线程ID:thread。 + +**Output**: + +- 0:成功完成。 +- EINVAL:thread是分离线程。 +- ESRCH:给定线程ID指定的线程不存在。 + +#### thrd_equal + +此函数应比较线程ID t1和t2。 + +**参数**: + +1. 线程ID t1。 +2. 线程ID t2。 + +**Output**: + +- 如果t1和t2相等,thrd_equal()函数应返回非零值。 +- 如果t1和t2不相等,应返回零。 +- 如果t1或t2不是有效的线程ID,则行为未定义。 + +#### thrd_exit + +线程的终止可以是调用 thrd_exit 或者该线程的例程结束。由此可看出,一个线程可以隐式退出,也可以显式调用 thrd_exit 函数来退出。thrd_exit 函数唯一的参数 value_ptr 是函数的返回代码,只要 thrd_join 中的第二个参数 value_ptr 不是NULL,这个值将被传递给 value_ptr。 + +**参数**:线程退出状态value_ptr,通常传NULL。 + +**Output**: 无 + +#### thrd_join + +thrd_join() 函数,以阻塞的方式等待 thread 指定的线程结束。当函数返回时,被等待线程的资源被收回。如果线程已经结束,那么该函数会立即返回。并且 thread 指定的线程必须是 joinable 的。当 thrd_join()成功返回时,目标线程已终止。对指定同一目标线程的thrd_join()的多个同时调用的结果未定义。如果调用thrd_join()的线程被取消,则目标线程不应被分离。 + +**参数**: + +1. 线程ID:thread。 +2. 退出线程:返回值value_ptr。 + +**Output**: + +- thrd_success:操作成功。 + +#### thrd_sleep + +至少在达到time_point指向的基于TIME_UTC的时间点之前,阻塞当前线程的执行。如果收到未被忽略的信号,睡眠可能会被中断。 + +**参数**: + +1. 应等待时间:req。 +2. 实际等待时间:rem。 + +**Output**: + +- 0:操作成功。 +- -2: 操作失败。 + +#### thrd_yield + +thrd_yield()函数应强制正在运行的线程放弃处理器,并触发线程调度。 + +**参数**:无 + +**Output**: 输出0时,成功完成;否则应返回值-1。 + +#### tss_create + +分配用于标识线程特定数据的键。tss_create 第一个参数为指向一个键值的指针,第二个参数指明了一个 destructor 函数,如果这个参数不为空,那么当每个线程结束时,系统将调用这个函数来释放绑定在这个键上的内存块。 + +**参数**: + +1. 键值的指针tss。 +2. 
destructor 函数入口 destructor。 + +**Output**: + +- thrd_success:操作成功。 +- thrd_error:操作失败。 + +#### tss_delete + +销毁线程特定数据键。 + +**参数**:需要删除的键key。 + +**Output**: 无 + +#### tss_get + +将与key关联的数据读出来,返回数据类型为 void *,可以指向任何类型的数据。需要注意的是,在使用此返回的指针时,需满足是 void 类型,虽指向关联的数据地址处,但并不知道指向的数据类型,所以在具体使用时,要对其进行强制类型转换。 + +**参数**:键值key。 + +**Output**: + +- 返回与给定 key 关联的线程特定数据值。 +- NULL:没有线程特定的数据值与键关联。 + +#### tss_set + +tss_set() 函数应将线程特定的 value 与通过先前调用 tss_create()获得的 key 关联起来。不同的线程可能会将不同的值绑定到相同的键上。这些值通常是指向已保留供调用线程使用的动态分配内存块的指针。 + +**参数**: + +1. 键值key。 +2. 指针value。 + +**Output**: + +- 0:设置成功。 + +## 其他接口 + +| 接口名 | 适配情况 | +| :---: | :-----: | +| [pthread_getattr_default_np](#pthread_getattr_default_np) | 支持 | +| [pthread_getattr_np](#pthread_getattr_np) | 支持 | +| [pthread_getname_np](#pthread_getname_np) | 支持 | +| [pthread_setattr_default_np](#pthread_setattr_default_np) | 支持 | +| [pthread_setname_np](#pthread_setname_np) | 支持 | +| [pthread_timedjoin_np](#pthread_timedjoin_np) | 支持 | +| [pthread_tryjoin_np](#pthread_tryjoin_np) | 支持 | +| [ftime](#ftime) | 支持 | +| [timegm](#timegm) | 支持 | + +### pthread_getattr_default_np + +pthread_getattr_default_np() 函数初始化attr引用的线程属性对象,使其包含用于创建线程的默认属性。 + +**参数**:线程属性对象attr。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +### pthread_setattr_default_np + +pthread_setattr_default_np() 函数用于设置创建新线程的默认属性,即当使用NULL的第二个参数调用pthread_create时使用的属性。 + +**参数**:线程属性对象attr。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +### pthread_getattr_np + +pthread_getattr_np() 函数初始化attr引用的线程属性对象,使其包含描述正在运行的线程的实际属性值。 + +**参数**: + +1. 线程ID值thread。 +2. 线程属性对象attr。 + +**Output**: + +- 0:操作成功。 +- 非0值:操作失败。 + +### pthread_getname_np + +pthread_getname_np() 函数可用于检索线程的名称。thread参数指定要检索其名称的线程。 + +**参数**: + +1. 线程ID值thread。 +2. 线程名字符串name。 +3. 字符串大小len。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +### pthread_setname_np + +pthread_setname_np() 函数可用于设置线程的名称。 + +**参数**: + +1. 线程ID值thread。 +2. 线程名字符串name。 +3. 
字符串大小len。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 + +### pthread_timedjoin_np + +类似pthread_join,如果线程尚未终止,则调用将阻塞直到abstime中指定的最大时间。如果超时在线程终止之前到期,则调用将返回错误。 + +**参数**: + +1. 线程ID值thread。 +2. 线程退出状态status。 +3. 阻塞时间指针ts。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- ETIMEDOUT:阻塞超时。 + +### pthread_tryjoin_np + +类似pthread_join,但如果线程尚未终止,将立即返回EBUSY。 + +**参数**: + +1. 线程ID值thread。 +2. 线程退出状态status。 + +**Output**: + +- 0:操作成功。 +- EINVAL:指针未初始化。 +- EBUSY:调用时线程尚未终止。 + +### ftime + +取得当前的时间和日期,由一个timeb结构体返回。 + +**参数**: + +1. timeb结构体指针tp。 + +**Output**: 无 + +### timegm + +将tm结构体表示的时间转换为自一个标准时间点以来的时间,不受本地时区的影响。 + +**参数**: + +1. tm结构体指针tp。 + +**Output**: + +返回值: + +- time_t类型表示的时间值。 +- -1: 转换失败。 + +errno: + +- EOVERFLOW:转换溢出。 + +## math数学库 + +| 接口名 | 描述 | 输入参数 | 适配情况 | +| :---: | :-----: | :-----: | :-----: | +| [acos](#acos) | 计算参数x的反余弦值,参数x的取值范围[-1, +1],返回类型double | double类型的浮点数x | 支持 | +| [acosf](#acosf) | 计算参数x的反余弦值,参数x的取值范围[-1, +1],返回类型float | float类型的浮点数x | 支持 | +| [acosl](#acosl) | 计算参数x的反余弦值,参数x的取值范围[-1, +1],返回类型long double | long double类型的浮点数x | 支持 | +| [acosh](#acosh) | 计算参数x的反双曲余弦值,返回类型double | double类型的浮点数x | 支持 | +| [acoshf](#acoshf) | 计算参数x的反双曲余弦值,返回类型float | float类型的浮点数x | 支持 | +| [acoshl](#acoshl) | 计算参数x的反双曲余弦值,返回类型long double | long double类型的浮点数x | 支持 | +| [asin](#asin) | 计算参数x的反正弦值,参数x的取值范围为[-1, +1] | double类型的浮点数x | 支持 | +| [asinf](#asinf) | 计算参数x的反正弦值,参数x的取值范围为[-1, +1] | float类型的浮点数x | 支持 | +| [asinl](#asinl) | 计算参数x的反正弦值,参数x的取值范围为[-1, +1] | long double类型的浮点数x | 支持 | +| [asinh](#asinh) | 计算参数x的反双曲正弦值,返回类型double | double类型的浮点数x | 支持 | +| [asinhf](#asinhf) | 计算参数x的反双曲正弦值,返回类型float | float类型的浮点数x | 支持 | +| [asinhl](#asinhl) | 计算参数x的反双曲正弦值,返回类型long double | long double类型的浮点数x | 支持 | +| [atan](#atan) | 计算参数x的反正切值,返回类型double | double类型的浮点数x | 支持 | +| [atanf](#atanf) | 计算参数x的反正切值,返回类型float | float类型的浮点数x | 支持 | +| [atanl](#atanl) | 计算参数x的反正切值,返回类型long double | long double类型的浮点数x | 支持 | +| [atan2](#atan2) | 计算参数y除以x的反正切值,使用两个参数的符号确定返回值的象限 | double类型的浮点数y
    double类型的浮点数x | 支持 | +| [atan2f](#atan2f) | 计算参数y除以x的反正切值,使用两个参数的符号确定返回值的象限 | float类型的浮点数y
    float类型的浮点数x | 支持 | +| [atan2l](#atan2l) | 计算参数y除以x的反正切值,使用两个参数的符号确定返回值的象限 | long double类型的浮点数y
    long double类型的浮点数x | 支持 | +| [atanh](#atanh) | 计算参数x的反双曲正切值,返回类型double | double类型的浮点数x | 支持 | +| [atanhf](#atanhf) | 计算参数x的反双曲正切值,返回类型float | float类型的浮点数x | 支持 | +| [atanhl](#atanhl) | 计算参数x的反双曲正切值,返回类型long double | long double类型的浮点数x | 支持 | +| [cbrt](#cbrt) | 计算参数x的立方根,返回类型double | double类型的浮点数x | 支持 | +| [cbrtf](#cbrtf) | 计算参数x的立方根,返回类型float | float类型的浮点数x | 支持 | +| [cbrtl](#cbrtl) | 计算参数x的立方根,返回类型long double | long double类型的浮点数x | 支持 | +| [ceil](#ceil) | 计算不小于参数x的最小整数值,返回类型double | double类型的浮点数x | 支持 | +| [ceilf](#ceilf) | 计算不小于参数x的最小整数值,返回类型float | float类型的浮点数x | 支持 | +| [ceill](#ceill) | 计算不小于参数x的最小整数值,返回类型long double | long double类型的浮点数x | 支持 | +| [copysign](#copysign) | 生成一个值,该值具有参数x的大小和参数y的符号 | double类型的浮点数x
    double类型的浮点数y | 支持 | +| [copysignf](#copysignf) | 生成一个值,该值具有参数x的大小和参数y的符号 | float类型的浮点数x
float类型的浮点数y | 支持 | +| [copysignl](#copysignl) | 生成一个值,该值具有参数x的大小和参数y的符号 | long double类型的浮点数x
long double类型的浮点数y | 支持 | +| [cos](#cos) | 计算参数x的余弦值,参数应为弧度值,返回类型double | double类型的浮点数x | 支持 | +| [cosf](#cosf) | 计算参数x的余弦值,参数应为弧度值,返回类型float | float类型的浮点数x | 支持 | +| [cosl](#cosl) | 计算参数x的余弦值,参数应为弧度值,返回类型long double | long double类型的浮点数x | 支持 | +| [cosh](#cosh) | 计算参数x的双曲余弦值,返回类型double | double类型的浮点数x | 支持 | +| [coshf](#coshf) | 计算参数x的双曲余弦值,返回类型float | float类型的浮点数x | 支持 | +| [coshl](#coshl) | 计算参数x的双曲余弦值,返回类型long double | long double类型的浮点数x | 支持 | +| [erf](#erf) | 计算参数x的高斯误差函数的值 | double类型的浮点数x | 支持 | +| [erff](#erff) | 计算参数x的高斯误差函数的值 | float类型的浮点数x | 支持 | +| [erfl](#erfl) | 计算参数x的高斯误差函数的值 | long double类型的浮点数x | 支持 | +| [erfc](#erfc) | 计算参数x的互补误差函数的值 | double类型的浮点数x | 支持 | +| [erfcf](#erfcf) | 计算参数x的互补误差函数的值 | float类型的浮点数x | 支持 | +| [erfcl](#erfcl) | 计算参数x的互补误差函数的值 | long double类型的浮点数x | 支持 | +| [exp](#exp) | 以e为基数的指数,即$e^x$的值,返回类型double | double类型的浮点数x | 支持 | +| [expf](#expf) | 以e为基数的指数,即$e^x$的值,返回类型float | float类型的浮点数x | 支持 | +| [expl](#expl) | 以e为基数的指数,即$e^x$的值,返回类型long double | long double类型的浮点数x | 支持 | +| [exp10](#exp10) | 以10为基数的指数,即$10^x$的值,返回类型double | double类型的浮点数x | 支持 | +| [exp10f](#exp10f) | 以10为基数的指数,即$10^x$的值,返回类型float | float类型的浮点数x | 支持 | +| [exp10l](#exp10l) | 以10为基数的指数,即$10^x$的值,返回类型long double | long double类型的浮点数x | 支持 | +| [exp2](#exp2) | 以2为基数的指数函数,返回类型double | double类型的浮点数x | 支持 | +| [exp2f](#exp2f) | 以2为基数的指数函数,返回类型float | float类型的浮点数x | 支持 | +| [exp2l](#exp2l) | 以2为基数的指数函数,返回类型long double | long double类型的浮点数x | 支持 | +| [expm1](#expm1) | 计算$e^x - 1$的值。如果参数x是个小值,expm1(x)函数的值比表达式$e^x - 1$更准确 | double类型的浮点数x | 支持 | +| [expm1f](#expm1f) | 计算$e^x - 1$的值。如果参数x是个小值,expm1(x)函数的值比表达式$e^x - 1$更准确 | float类型的浮点数x | 支持 | +| [expm1l](#expm1l) | 计算$e^x - 1$的值。如果参数x是个小值,expm1(x)函数的值比表达式$e^x - 1$更准确 | long double类型的浮点数x | 支持 | +| [fabs](#fabs) | 计算参数x的绝对值,返回类型double | double类型的浮点数x | 支持 | +| [fabsf](#fabsf) | 计算参数x的绝对值,返回类型float | float类型的浮点数x | 支持 | +| [fabsl](#fabsl) | 计算参数x的绝对值,返回类型long double | long double类型的浮点数x | 支持 | +| [fdim](#fdim) | 
计算参数x和参数y之间的正差值 | double类型的浮点数x
    double类型的浮点数y | 支持 | +| [fdimf](#fdimf) | 计算参数x和参数y之间的正差值 | float类型的浮点数x
    float类型的浮点数y | 支持 | +| [fdiml](#fdiml) | 计算参数x和参数y之间的正差值 | long double类型的浮点数x
long double类型的浮点数y | 支持 | +| [finite](#finite) | 如果参数x既不是无限值也不是NaN,则返回一个非零值,否则返回0 | double类型的浮点数x | 支持 | +| [finitef](#finitef) | 如果参数x既不是无限值也不是NaN,则返回一个非零值,否则返回0 | float类型的浮点数x | 支持 | +| [floor](#floor) | 计算不大于参数x的最大整数值,返回类型double | double类型的浮点数x | 支持 | +| [floorf](#floorf) | 计算不大于参数x的最大整数值,返回类型float | float类型的浮点数x | 支持 | +| [floorl](#floorl) | 计算不大于参数x的最大整数值,返回类型long double | long double类型的浮点数x | 支持 | +| [fma](#fma) | 计算表达式$(x * y) + z$的值,返回double类型 | double类型的浮点数x
    double类型的浮点数y
    double类型的浮点数z | 支持 | +| [fmaf](#fmaf) | 计算表达式$(x * y) + z$的值,返回float类型 | float类型的浮点数x
    float类型的浮点数y
    float类型的浮点数z | 支持 | +| [fmal](#fmal) | 计算表达式$(x * y) + z$的值,返回long double类型 | long double类型的浮点数x
    long double类型的浮点数y
    long double类型的浮点数z | 支持 | +| [fmax](#fmax) | 确定其参数的最大数值。如果一个参数是非数值(NaN),另一个参数是数值,fmax函数将选择数值 | double类型的浮点数x
    double类型的浮点数y | 支持 | +| [fmaxf](#fmaxf) | 确定其参数的最大数值。如果一个参数是非数值(NaN),另一个参数是数值,fmax函数将选择数值 | float类型的浮点数x
    float类型的浮点数y | 支持 | +| [fmaxl](#fmaxl) | 确定其参数的最大数值。如果一个参数是非数值(NaN),另一个参数是数值,fmax函数将选择数值 | long double类型的浮点数x
    long double类型的浮点数y | 支持 | +| [fmin](#fmin) | 返回其参数的最小数值。非数值NaN参数视为缺失数据。如果一个参数是非数值,另一个参数是数值,fmin函数将选择数值 | double类型的浮点数x
    double类型的浮点数y | 支持 | +| [fminf](#fminf) | 返回其参数的最小数值。非数值NaN参数视为缺失数据。如果一个参数是非数值,另一个参数是数值,fmin函数将选择数值 | float类型的浮点数x
    float类型的浮点数y | 支持 | +| [fminl](#fminl) | 返回其参数的最小数值。非数值NaN参数视为缺失数据。如果一个参数是非数值,另一个参数是数值,fmin函数将选择数值 | long double类型的浮点数x
    long double类型的浮点数y | 支持 | +| [fmod](#fmod) | 计算表达式x/y的浮点余数,返回double类型 | double类型的浮点数x
    double类型的浮点数y | 支持 | +| [fmodf](#fmodf) | 计算表达式x/y的浮点余数,返回float类型 | float类型的浮点数x
    float类型的浮点数y | 支持 | +| [fmodl](#fmodl) | 计算表达式x/y的浮点余数,返回long double类型 | long double类型的浮点数x
    long double类型的浮点数y | 支持 | +| [frexp](#frexp) | 将浮点数分解为规格化小数和2的整数幂,并将整数存入参数exp指向的对象中 | double类型的浮点数x
    int *类型的浮点数y | 支持 | +| [frexpf](#frexpf) | 将浮点数分解为规格化小数和2的整数幂,并将整数存入参数exp指向的对象中 | float类型的浮点数x
    int *类型的浮点数y | 支持 | +| [frexpl](#frexpl) | 将浮点数分解为规格化小数和2的整数幂,并将整数存入参数exp指向的对象中 | long double类型的浮点数x
    int *类型的浮点数y | 支持 | +| [hypot](#hypot) | 计算表达式$(x^2 + y^2)^{1/2}$的值 | double类型的浮点数x
    double类型的浮点数y | 支持 | +| [hypotf](#hypotf) | 计算表达式$(x^2 + y^2)^{1/2}$的值 | float类型的浮点数x
    float类型的浮点数y | 支持 | +| [hypotl](#hypotl) | 计算表达式$(x^2 + y^2)^{1/2}$的值 | long double类型的浮点数x
    long double类型的浮点数y | 支持 | +| [ilogb](#ilogb) | 以FLT_RADIX作为对数的底数,返回double类型x的对数的整数部分 | double类型的浮点数x | 支持 | +| [ilogbf](#ilogbf) | 以FLT_RADIX作为对数的底数,返回float类型x的对数的整数部分 | float类型的浮点数x | 支持 | +| [ilogbl](#ilogbl) | 以FLT_RADIX作为对数的底数,返回long double类型x的对数的整数部分 | long double类型的浮点数x | 支持 | +| [j0](#j0) | 计算参数x的第一类0阶贝塞尔函数 | double类型浮点数x | 支持 | +| [j0f](#j0f) | 计算参数x的第一类0阶贝塞尔函数 | float类型浮点数x | 支持 | +| [j1](#j1) | 计算参数x的第一类1阶贝塞尔函数 | double类型浮点数x | 支持 | +| [j1f](#j1f) | 计算参数x的第一类1阶贝塞尔函数 | float类型浮点数x | 支持 | +| [jn](#jn) | 计算参数x的第一类n阶贝塞尔函数 | int类型阶数
    double类型浮点数x | 支持 | +| [jnf](#jnf) | 计算参数x的第一类n阶贝塞尔函数 | int类型阶数
    float类型浮点数x | 支持 | +| [ldexp](#ldexp) | 计算参数x与2的exp次幂的乘积,即返回$x * 2^{exp}$的double类型值。 | double类型的浮点数x
    int类型的指数exp | 支持 | +| [ldexpf](#ldexpf) | 计算参数x与2的exp次幂的乘积,即返回$x * 2^{exp}$的float类型值。 | float类型的浮点数x
    int类型的指数exp | 支持 | +| [ldexpl](#ldexpl) | 计算参数x与2的exp次幂的乘积,即返回$x * 2^{exp}$的long double类型值。 | long double类型的浮点数x
    int类型的指数exp | 支持 | +| [lgamma](#lgamma) | 计算参数x伽玛绝对值的自然对数,返回double类型 | double类型的浮点数x | 支持 | +| [lgammaf](#lgammaf) | 计算参数x伽玛绝对值的自然对数,返回float类型 | float类型的浮点数x | 支持 | +| [lgammal](#lgammal) | 计算参数x伽玛绝对值的自然对数,返回long double类型 | long double类型的浮点数x | 支持 | +| [lgamma_r](#lgamma_r) | 计算参数x伽玛绝对值的自然对数,与lgamma不同在于是线程安全的 | double类型的浮点数x
int *类型符号参数 | 支持 | +| [lgammaf_r](#lgammaf_r) | 计算参数x伽玛绝对值的自然对数,与lgammaf不同在于是线程安全的 | float类型的浮点数x
int *类型符号参数 | 支持 | +| [llrint](#llrint) | 根据当前舍入模式,将参数舍入为long long int类型的最接近整数值 | double类型的浮点数x | 支持 | +| [llrintf](#llrintf) | 根据当前舍入模式,将参数舍入为long long int类型的最接近整数值 | float类型的浮点数x | 支持 | +| [llrintl](#llrintl) | 根据当前舍入模式,将参数舍入为long long int类型的最接近整数值 | long double类型的浮点数x | 支持 | +| [llround](#llround) | 将double类型x舍入为浮点形式表示的long long int型最近整数值。如果x位于两个整数中心,将向远离0的方向舍入。 | double类型的浮点数x | 支持 | +| [llroundf](#llroundf) | 将float类型x舍入为浮点形式表示的long long int型最近整数值。如果x位于两个整数中心,将向远离0的方向舍入。 | float类型的浮点数x | 支持 | +| [llroundl](#llroundl) | 将long double类型x舍入为浮点形式表示的long long int型最近整数值。如果x位于两个整数中心,将向远离0的方向舍入。 | long double类型的浮点数x | 支持 | +| [log](#log) | double类型x的自然对数函数 | double类型的浮点数x | 支持 | +| [logf](#logf) | float类型x的自然对数函数 | float类型的浮点数x | 支持 | +| [logl](#logl) | long double类型x的自然对数函数 | long double类型的浮点数x | 支持 | +| [log10](#log10) | double类型x以10为底数的对数函数 | double类型的浮点数x | 支持 | +| [log10f](#log10f) | float类型x以10为底数的对数函数 | float类型的浮点数x | 支持 | +| [log10l](#log10l) | long double类型x以10为底数的对数函数 | long double类型的浮点数x | 支持 | +| [log1p](#log1p) | 以e为底数的对数函数,计算$log_e(1 + x)$的值。如果参数x是个小值,表达式log1p(x)比表达式log(1 + x)更准确 | double类型的浮点数x | 支持 | +| [log1pf](#log1pf) | 以e为底数的对数函数,计算$log_e(1 + x)$的值。如果参数x是个小值,表达式log1p(x)比表达式log(1 + x)更准确 | float类型的浮点数x | 支持 | +| [log1pl](#log1pl) | 以e为底数的对数函数,计算$log_e(1 + x)$的值。如果参数x是个小值,表达式log1p(x)比表达式log(1 + x)更准确 | long double类型的浮点数x | 支持 | +| [log2](#log2) | double类型x以2为底数的对数函数 | double类型的浮点数x | 支持 | +| [log2f](#log2f) | float类型x以2为底数的对数函数 | float类型的浮点数x | 支持 | +| [log2l](#log2l) | long double类型x以2为底数的对数函数 | long double类型的浮点数x | 支持 | +| [logb](#logb) | double类型x以FLT_RADIX为底数的对数函数 | double类型的浮点数x | 支持 | +| [logbf](#logbf) | float类型x以FLT_RADIX为底数的对数函数 | float类型的浮点数x | 支持 | +| [logbl](#logbl) | long double类型x以FLT_RADIX为底数的对数函数 | long double类型的浮点数x | 支持 | +| [lrint](#lrint) | 根据当前舍入模式,将参数舍入为long int类型的最接近整数值 | double类型的浮点数x | 支持 | +| [lrintf](#lrintf) | 根据当前舍入模式,将参数舍入为long int类型的最接近整数值 | float类型的浮点数x | 支持 | +| [lrintl](#lrintl) | 根据当前舍入模式,将参数舍入为long int类型的最接近整数值 | long 
double类型的浮点数x | 支持 | +| [lround](#lround) | 将double类型x舍入为浮点形式表示的long int型最近整数值。如果x位于两个整数中心,将向远离0的方向舍入。 | double类型的浮点数x | 支持 | +| [lroundf](#lroundf) | 将float类型x舍入为浮点形式表示的long int型最近整数值。如果x位于两个整数中心,将向远离0的方向舍入。 | float类型的浮点数x | 支持 | +| [lroundl](#lroundl) | 将long double类型x舍入为浮点形式表示的long int型最近整数值。如果x位于两个整数中心,将向远离0的方向舍入。 | long double类型的浮点数x | 支持 | +| [modf](#modf) | 将double类型的参数value分成整数部分和小数部分,两部分与参数value具有相同的类型和符号。整数部分以浮点形式存入参数iptr指向的对象中 | double类型的浮点数value
    double *类型的指数iptr | 支持 | +| [modff](#modff) | 将float类型的参数value分成整数部分和小数部分,两部分与参数value具有相同的类型和符号。整数部分以浮点形式存入参数iptr指向的对象中 | float类型的浮点数value
    float *类型的指数iptr | 支持 | +| [modfl](#modfl) | 将long double类型的参数value分成整数部分和小数部分,两部分与参数value具有相同的类型和符号。整数部分以浮点形式存入参数iptr指向的对象中 | long double类型的浮点数value
    long double *类型的指数iptr | 支持 | +| [nan](#nan) | 返回一个double类型的非数值NaN,内容由参数tagp确定 | const char*类型tagp | 支持 | +| [nanf](#nanf) | 返回一个float类型的非数值NaN,内容由参数tagp确定 | const char*类型tagp | 支持 | +| [nanl](#nanl) | 返回一个long double类型的非数值NaN,内容由参数tagp确定 | const char*类型tagp | 支持 | +| [nearbyint](#nearbyint) | 根据当前舍入模式,将double型参数x舍入为浮点格式的double型整数值 | double类型x | 支持 | +| [nearbyintf](#nearbyintf) | 根据当前舍入模式,将float型参数x舍入为浮点格式的float型整数值 | float类型x | 支持 | +| [nearbyintl](#nearbyintl) | 根据当前舍入模式,将long double型参数x舍入为浮点格式的double型整数值 | long double类型x | 支持 | +| [nextafter](#nextafter) | 返回double类型参数x沿参数y方向的下一个可表示值 | double类型x
double类型y | 支持 | +| [nextafterf](#nextafterf) | 返回float类型参数x沿参数y方向的下一个可表示值 | float类型x
float类型y | 支持 | +| [nextafterl](#nextafterl) | 返回long double类型参数x沿参数y方向的下一个可表示值 | long double类型x
    long double类型y | 支持 | +| [nexttoward](#nexttoward) | 返回double类型参数x沿参数y方向的下一个可表示值,等价于nextafter,区别在于参数y为long double | double类型浮点数x
    long double类型浮点数y | 支持 | +| [nexttowardf](#nexttowardf) | 返回double类型参数x沿参数y方向的下一个可表示值,等价于nextafter,区别在于参数y为long double | float类型浮点数x
    long double类型浮点数y | 支持 | +| [nexttowardl](#nexttowardl) | 返回double类型参数x沿参数y方向的下一个可表示值,等价于nextafter,区别在于参数y为long double | long double类型浮点数x
    long double类型浮点数y | 支持 | +| [pow](#pow) | 计算表达式$x^y$的值 | double类型浮点数x
    double类型浮点数y | 支持 | +| [powf](#powf) | 计算表达式$x^y$的值 | float类型浮点数x
    float类型浮点数y | 支持 | +| [powl](#powl) | 计算表达式$x^y$的值 | long double类型浮点数x
    long double类型浮点数y | 支持 | +| [pow10](#pow10) | 计算表达式$10^x$的值 | double类型浮点数x | 支持 | +| [pow10f](#pow10f) | 计算表达式$10^x$的值 | float类型浮点数x| 支持 | +| [pow10l](#pow10l) | 计算表达式$10^x$的值 | long double类型浮点数x | 支持 | +| [remainder](#remainder) | 计算参数x除以y的余数,等同于drem | double类型浮点数x
    double类型浮点数y | 支持 | +| [remainderf](#remainderf) | 计算参数x除以y的余数,等同于dremf | float类型浮点数x
    float类型浮点数y | 支持 | +| [remainderl](#remainderl) | 计算参数x除以y的余数 | long double类型浮点数x
    long double类型浮点数y | 支持 | +| [remquo](#remquo) | 计算参数x和参数y的浮点余数,并将商保存在传递的参数指针quo中 | double类型浮点数x
    double类型浮点数y
int *类型商quo | 支持 | +| [remquof](#remquof) | 计算参数x和参数y的浮点余数,并将商保存在传递的参数指针quo中 | float类型浮点数x
    float类型浮点数y
int *类型商quo | 支持 | +| [remquol](#remquol) | 计算参数x和参数y的浮点余数,并将商保存在传递的参数指针quo中 | long double类型浮点数x
    long double类型浮点数y
int *类型商quo | 支持 | +| [rint](#rint) | 根据当前舍入模式,将参数x舍入为浮点形式的整数值 | double类型的浮点数x | 支持 | +| [rintf](#rintf) | 根据当前舍入模式,将参数x舍入为浮点形式的整数值 | float类型的浮点数x | 支持 | +| [rintl](#rintl) | 根据当前舍入模式,将参数x舍入为浮点形式的整数值 | long double类型的浮点数x | 支持 | +| [round](#round) | 将double类型x舍入为浮点形式表示的double型最近整数值。如果x位于两个整数中心,将向远离0的方向舍入。 | double类型的浮点数x | 支持 | +| [roundf](#roundf) | 将float类型x舍入为浮点形式表示的float型最近整数值。如果x位于两个整数中心,将向远离0的方向舍入。 | float类型的浮点数x | 支持 | +| [roundl](#roundl) | 将long double类型x舍入为浮点形式表示的long double型最近整数值。如果x位于两个整数中心,将向远离0的方向舍入。 | long double类型的浮点数x | 支持 | +| [scalb](#scalb) | 计算$x * FLT\_RADIX^{exp}$的double类型值 | double类型的浮点数x
    double类型的指数exp | 支持 | +| [scalbf](#scalbf) | 计算$x * FLT\_RADIX^{exp}$的float类型值 | float类型的浮点数x
    float类型的指数exp | 支持 | +| [scalbln](#scalbln) | 计算$x * FLT\_RADIX^{exp}$的double类型值 | double类型的浮点数x
    long类型的指数exp | 支持 | +| [scalblnf](#scalblnf) | 计算$x * FLT\_RADIX^{exp}$的float类型值 | float类型的浮点数x
    long类型的指数exp | 支持 | +| [scalblnl](#scalblnl) | 计算$x * FLT\_RADIX^{exp}$的long double类型值 | long double类型的浮点数x
    long类型的指数exp | 支持 | +| [scalbn](#scalbn) | 计算$x * FLT\_RADIX^{exp}$的double类型值 | double类型的浮点数x
    int类型的指数exp | 支持 | +| [scalbnf](#scalbnf) | 计算$x * FLT\_RADIX^{exp}$的float类型值 | float类型的浮点数x
    int类型的指数exp | 支持 | +| [scalbnl](#scalbnl) | 计算$x * FLT\_RADIX^{exp}$的long double类型值 | long double类型的浮点数x
int类型的指数exp | 支持 | +| [significand](#significand) | 用于分离浮点数x的尾数部分,返回double类型 | double类型的浮点数x | 支持 | +| [significandf](#significandf) | 用于分离浮点数x的尾数部分,返回float类型 | float类型的浮点数x | 支持 | +| [sin](#sin) | 计算参数x的正弦值,参数应为弧度值,返回double类型 | double类型的浮点数x | 支持 | +| [sinf](#sinf) | 计算参数x的正弦值,参数应为弧度值,返回float类型 | float类型的浮点数x | 支持 | +| [sinl](#sinl) | 计算参数x的正弦值,参数应为弧度值,返回long double类型 | long double类型的浮点数x | 支持 | +| [sincos](#sincos) | 同时计算参数x的正弦值和余弦值,并将结果存储在*sin和*cos,比单独调用sin和cos效率更高 | double类型的浮点数x
    double*类型的浮点数sin
    double*类型的浮点数cos | 支持 | +| [sincosf](#sincosf) | 同时计算参数x的正弦值和余弦值,并将结果存储在*sin和*cos,比单独调用sin和cos效率更高 | float类型的浮点数x
    float*类型的浮点数sin
    float*类型的浮点数cos | 支持 | +| [sincosl](#sincosl) | 同时计算参数x的正弦值和余弦值,并将结果存储在*sin和*cos,比单独调用sin和cos效率更高 | long double类型的浮点数x
    long double*类型的浮点数sin
    long double*类型的浮点数cos | 支持 | +| [sinh](#sinh) | 计算参数x的双曲正弦值,返回double类型 | double类型的浮点数x | 支持 | +| [sinhf](#sinhf) | 计算参数x的双曲正弦值,返回float类型 | float类型的浮点数x | 支持 | +| [sinhl](#sinhl) | 计算参数x的双曲正弦值,返回long double类型 | long double类型的浮点数x | 支持 | +| [sqrt](#sqrt) | 计算参数x的平方根,返回类型double | double类型的浮点数x | 支持 | +| [sqrtf](#sqrtf) | 计算参数x的平方根,返回类型float | float类型的浮点数x | 支持 | +| [sqrtl](#sqrtl) | 计算参数x的平方根,返回类型long double | long double类型的浮点数x | 支持 | +| [tan](#tan) | 计算参数x的正切值,参数应为弧度值,返回double类型 | double类型的浮点数x | 支持 | +| [tanf](#tanf) | 计算参数x的正切值,参数应为弧度值,返回float类型 | float类型的浮点数x | 支持 | +| [tanl](#tanl) | 计算参数x的正切值,参数应为弧度值,返回long double类型 | long double类型的浮点数x | 支持 | +| [tanh](#tanh) | 计算参数x的双曲正切值,返回double类型 | double类型的浮点数x | 支持 | +| [tanhf](#tanhf) | 计算参数x的双曲正切值,返回float类型 | float类型的浮点数x | 支持 | +| [tanhl](#tanhl) | 计算参数x的双曲正切值,返回long double类型 | long double类型的浮点数x | 支持 | +| [tgamma](#tgamma) | 计算参数x的伽马函数,返回double类型 | double类型的浮点数x | 支持 | +| [tgammaf](#tgammaf) | 计算参数x的伽马函数,返回float类型 | float类型的浮点数x | 支持 | +| [tgammal](#tgammal) | 计算参数x的伽马函数,返回long double类型 | long double类型的浮点数x | 支持 | +| [trunc](#trunc) | 截取参数x的整数部分,并将整数部分以浮点形式表示 | double类型的浮点数x | 支持 | +| [truncf](#truncf) | 截取参数x的整数部分,并将整数部分以浮点形式表示 | float类型的浮点数x | 支持 | +| [truncl](#truncl) | 截取参数x的整数部分,并将整数部分以浮点形式表示 | long double类型的浮点数x | 支持 | +| [y0](#y0) | 计算参数x的第二类0阶贝塞尔函数 | double类型的浮点数x | 支持 | +| [y0f](#y0f) | 计算参数x的第二类0阶贝塞尔函数 | float类型的浮点数x | 支持 | +| [y1](#y1) | 计算参数x的第二类1阶贝塞尔函数 | double类型的浮点数x | 支持 | +| [y1f](#y1f) | 计算参数x的第二类1阶贝塞尔函数 | float类型的浮点数x | 支持 | +| [yn](#yn) | 计算参数x的第二类n阶贝塞尔函数 | int类型阶数n
    double类型的浮点数x | 支持 | +| [ynf](#ynf) | 计算参数x的第二类n阶贝塞尔函数 | int类型阶数n
    float类型的浮点数x | 支持 | + +## 设备驱动 + +### register_driver + +在文件系统中注册一个字符设备驱动程序。 + +**参数**: + +1. 要创建的索引节点的路径path。 +2. file_operations结构体指针fops。 +3. 访问权限mode。 +4. 将与inode关联的私有用户数据priv。 + +**Output**: + +- 0:操作成功。 +- 负数值:操作失败。 + +#### unregister_driver + +从文件系统中删除“path”处的字符驱动程序。 + +**参数**: + +1. 要删除的索引节点的路径path。 + +**Output**: + +- 0:操作成功。 +- -EINVAL:无效的path路径。 +- -EEXIST:path中已存在inode。 +- -ENOMEM:内存不足。 + +#### register_blockdriver + +在文件系统中注册一个块设备驱动程序。 + +**参数**: + +1. 要创建的索引节点的路径path。 +2. block_operations结构体指针bops。 +3. 访问权限mode。 +4. 将与inode关联的私有用户数据priv。 + +**Output**: + +- 0:操作成功。 +- -EINVAL:无效的path路径。 +- -EEXIST:path中已存在inode。 +- -ENOMEM:内存不足。 + +#### unregister_blockdriver + +从文件系统中删除“path”处的块设备驱动程序。 + +**参数**: + +1. 要删除的索引节点的路径path。 + +**Output**: + +- 0:操作成功。 +- -EINVAL:无效的path路径。 +- -EEXIST:path中已存在inode。 +- -ENOMEM:内存不足。 + +## Shell模块 + +### SHELLCMD_ENTRY + +向Shell模块静态注册命令。 + +**参数**: + +1. 命令变量名name。 +2. 命令类型cmdType。 +3. 命令关键字cmdKey。 +4. 处理函数的入参最大个数paraNum。 +5. 命令处理函数回调cmdHook。 + +**Output**: 无 + +### osCmdReg + +向Shell模块动态注册命令。 + +**参数**: + +1. 命令类型cmdType。 +2. 命令关键字cmdKey。 +3. 处理函数的入参最大个数paraNum。 +4. 命令处理函数回调cmdHook。 + +**Output**: + +- 0:操作成功。 +- OS_ERRNO_SHELL_NOT_INIT:shell模块未初始化。 +- OS_ERRNO_SHELL_CMDREG_PARA_ERROR:无效的输入参数。 +- OS_ERRNO_SHELL_CMDREG_CMD_ERROR:无效的字符串关键字。 +- OS_ERRNO_SHELL_CMDREG_CMD_EXIST:关键字已存在。 +- OS_ERRNO_SHELL_CMDREG_MEMALLOC_ERROR:内存不足。 diff --git a/docs/en/Embedded/UniProton/uniproton-user-guide.md b/docs/en/Embedded/UniProton/uniproton-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..c37875f1eeef8571aa81c7abb916799486ad2ae7 --- /dev/null +++ b/docs/en/Embedded/UniProton/uniproton-user-guide.md @@ -0,0 +1,11 @@ +# UniProton User Guide + +## Introduction + +UniProton is an operating system (OS) for embedded scenarios provided by the openEuler community. 
It aims to build a high-quality OS platform that shields underlying hardware differences for upper-layer service software and provides powerful debugging functions. UniProton allows service software to be quickly ported to different hardware platforms, facilitates chip selection, and reduces costs for hardware procurement and software maintenance. + +This document describes the basic functions and APIs of UniProton. + +## Build Procedure + +For details about the build procedure, see . diff --git a/docs/en/Server/Administration/Administrator/Menu/index.md b/docs/en/Server/Administration/Administrator/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..07ccf3d35e2d01a365470f02e3bccc5949e8875b --- /dev/null +++ b/docs/en/Server/Administration/Administrator/Menu/index.md @@ -0,0 +1,16 @@ +--- +headless: true +--- +- [Administrator Guide]({{< relref "./administration.md" >}}) + - [Viewing System Information]({{< relref "./viewing-system-information.md" >}}) + - [Basic Configuration]({{< relref "./basic-configuration.md" >}}) + - [User and User Group Management]({{< relref "./user-and-user-group-management.md" >}}) + - [Software Package Management with DNF]({{< relref "./using-dnf-to-manage-software-packages.md" >}}) + - [Service Management]({{< relref "./service-management.md" >}}) + - [Process Management]({{< relref "./process-management.md" >}}) + - [Service Configuration]({{< relref "./configuring-services.md" >}}) + - [Configuring the Repo Server]({{< relref "./configuring-the-repo-server.md" >}}) + - [Configuring the FTP Server]({{< relref "./configuring-the-ftp-server.md" >}}) + - [Configuring the Web Server]({{< relref "./configuring-the-web-server.md" >}}) + - [Setting Up the Database Server]({{< relref "./setting-up-the-database-server.md" >}}) + - [Common Issues and Solutions]({{< relref "./common-issues-and-solutions.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Administration/administration.md 
b/docs/en/Server/Administration/Administrator/administration.md similarity index 100% rename from docs/en/docs/Administration/administration.md rename to docs/en/Server/Administration/Administrator/administration.md diff --git a/docs/en/docs/Administration/basic-configuration.md b/docs/en/Server/Administration/Administrator/basic-configuration.md similarity index 98% rename from docs/en/docs/Administration/basic-configuration.md rename to docs/en/Server/Administration/Administrator/basic-configuration.md index 7ab41eaca958a79b4a011fd4b85239b530daabbd..de4ace598bb9a10dca4c22e8a0b4c02d1e7357ef 100644 --- a/docs/en/docs/Administration/basic-configuration.md +++ b/docs/en/Server/Administration/Administrator/basic-configuration.md @@ -33,6 +33,7 @@ - [Disabling Network Drivers](#disabling-network-drivers) + ## Setting the System Locale System locale settings are stored in the /etc/locale.conf file and can be modified by the localectl command. These settings are read at system boot by the systemd daemon. @@ -42,7 +43,7 @@ System locale settings are stored in the /etc/locale.conf file and can be modifi To display the current locale status, run the following command: ```shell -$ localectl status +localectl status ``` Example command output: @@ -96,7 +97,7 @@ Keyboard layout settings are stored in the /etc/locale.conf file and can be modi To display the current keyboard layout settings, run the following command: ```shell -$ localectl status +localectl status ``` Example command output: @@ -173,7 +174,7 @@ System clock synchronized: no Your system clock can be automatically synchronized with a remote server using the Network Time Protocol (NTP). Run the following command as the user **root** to enable or disable NTP. The value of _boolean_ is **yes** or **no**, indicating that the NTP is enabled or disabled for automatic system clock synchronization. Change the value as required. 
-> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > If the remote NTP server is enabled to automatically synchronize the system clock, you cannot manually change the date and time. If you need to manually change the date or time, ensure that automatic NTP system clock synchronization is disabled. You can run the **timedatectl set-ntp no** command to disable the NTP service. ```shell @@ -188,7 +189,7 @@ timedatectl set-ntp yes #### Changing the Current Date -> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > Before changing the date, ensure that automatic NTP system clock synchronization has been disabled. Run the following command as the user **root** to change the current date. In the command, _YYYY_ indicates the year, _MM_ indicates the month, and _DD_ indicates the day. Change them as required. @@ -205,7 +206,7 @@ timedatectl set-time '2019-08-14' #### Changing the Current Time -> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > Before changing the time, ensure that automatic NTP system clock synchronization has been disabled. To change the current time, run the following command as the user **root**. In the command, _HH_ indicates the hour, _MM_ indicates the minute, and _SS_ indicates the second. Change them as required. 
@@ -306,21 +307,21 @@ date +"format" Example commands and outputs: - To display the current date and time: - + ```shell - $ date + $ date Sat Aug 17 17:26:34 CST 2019 ``` - To display the current date and time in UTC: - + ```shell $ date --utc Sat Aug 17 09:26:18 UTC 2019 ``` - To customize the output of the date command: - + ```shell $ date +"%Y-%m-%d %H:%M" 2019-08-17 17:24 diff --git a/docs/en/docs/Administration/faqs.md b/docs/en/Server/Administration/Administrator/common-issues-and-solutions.md similarity index 81% rename from docs/en/docs/Administration/faqs.md rename to docs/en/Server/Administration/Administrator/common-issues-and-solutions.md index 776e3cae0b296e6bbfd96408ce072dd1a48c6930..c232d7b8293c508bc1ab7b22756830ecf29b0de5 100644 --- a/docs/en/docs/Administration/faqs.md +++ b/docs/en/Server/Administration/Administrator/common-issues-and-solutions.md @@ -1,24 +1,6 @@ -# FAQs - - -- [FAQs](#faqs) - - [Why Is the Memory Usage of the libvirtd Service Queried by Running the systemctl and top Commands Different?](#why-is-the-memory-usage-of-the-libvirtd-service-queried-by-running-the-systemctl-and-top-commands-different) - - [An Error Occurs When stripsize Is Set to 4 During RAID 0 Volume Configuration](#an-error-occurs-when-stripsize-is-set-to-4-during-raid-0-volume-configuration) - - [Failed to Compile MariaDB Using rpmbuild](#failed-to-compile-mariadb-using-rpmbuild) - - [Failed to Start the SNTP Service Using the Default Configuration](#failed-to-start-the-sntp-service-using-the-default-configuration) - - [Installation Failure Caused by Software Package Conflict, File Conflict, or Missing Software Package](#installation-failure-caused-by-software-package-conflict-file-conflict-or-missing-software-package) - - [Failed to Downgrade libiscsi](#failed-to-downgrade-libiscsi) - - [Failed to Downgrade xfsprogs](#failed-to-downgrade-xfsprogs) - - [Failed to Downgrade elfutils](#failed-to-downgrade-elfutils) - - [CPython/Lib Detects CVE-2019-9674: Zip 
Bomb](#cpythonlib-detects-cve-2019-9674-zip-bomb) - - [ReDoS Attack Occurs Due to Improper Use of glibc Regular Expressions](#redos-attack-occurs-due-to-improper-use-of-glibc-regular-expressions) - - [An Error Is Reported When gdbm-devel Is Installed or Uninstalled During the Installation and Uninstallation of httpd-devel and apr-util-devel](#an-error-is-reported-when-gdbm-devel-is-installed-or-uninstalled-during-the-installation-and-uninstallation-of-httpd-devel-and-apr-util-devel) - - [An rpmdb Error Is Reported When Running the yum or dnf Command After the System Is Rebooted](#an-rpmdb-error-is-reported-when-running-the-yum-or-dnf-command-after-the-system-is-rebooted) - - [Failed to Run `rpmrebuild -d /home/test filesystem` to Rebuild the filesystem Package](#failed-to-run-rpmrebuild--d-hometest-filesystem-to-rebuild-the-filesystem-package) - - [An Error Is Reported When modprobe or `insmod` Is Executed With the `-f` Option](#an-error-is-reported-when-modprobe-or-insmod-is-executed-with-the--f-option) - - -## Why Is the Memory Usage of the libvirtd Service Queried by Running the systemctl and top Commands Different? +# Common Issues and Solutions + +## Issue 1: Why Is the Memory Usage of the libvirtd Service Queried by Running the systemctl and top Commands Different ### Symptom @@ -43,7 +25,7 @@ RSS in the output of the **top** command = anon\_rss + file\_rss; Shared memor In conclusion, the definition of memory usage obtained by running the **systemd** command is different from that obtained by running the **top** command. Therefore, the query results are different. -## An Error Occurs When stripsize Is Set to 4 During RAID 0 Volume Configuration +## Issue 2: An Error Occurs When stripsize Is Set to 4 During RAID 0 Volume Configuration ### Symptom @@ -57,7 +39,7 @@ The 64 KB page table can be enabled only in the scenario where **stripsize** i You do not need to modify the configuration file. 
When running the **lvcreate** command on openEuler, set **stripesize** to **64** because the minimum supported stripe size is 64 KB. -## Failed to Compile MariaDB Using rpmbuild +## Issue 3: Failed to Compile MariaDB Using rpmbuild ### Symptom @@ -91,7 +73,7 @@ After the modification: The modification disables the function of executing test cases during compilation, which does not affect the compilation and the RPM package content after compilation. -## Failed to Start the SNTP Service Using the Default Configuration +## Issue 4: Failed to Start the SNTP Service Using the Default Configuration ### Symptom @@ -105,7 +87,7 @@ The domain name of the NTP server is not added to the default configuration. Modify the **/etc/sysconfig/sntp** file and add the domain name of the NTP server in China: **0.generic.pool.ntp.org**. -## Installation Failure Caused by Software Package Conflict, File Conflict, or Missing Software Package +## Issue 5: Installation Failure Caused by Software Package Conflict, File Conflict, or Missing Software Package ### Symptom @@ -193,17 +175,17 @@ If a software package is missing, perform the following steps \(the missed softw The **python3-edk2-devel.noarch** file conflicts with the **build.noarch** file due to duplicate file names. ```shell - # yum install python3-edk2-devel.noarch build.noarch + $ yum install python3-edk2-devel.noarch build.noarch ... Error: Transaction test error: file /usr/bin/build conflicts between attempted installs of python3-edk2-devel-202002-3.oe1.noarch and build-20191114-324.4.oe1.noarch ``` -## Failed to Downgrade libiscsi +## Issue 6: Failed to Downgrade libiscsi ### Symptom -libiscsi-1.19.0-4 or later fails to be downgraded to libiscsi-1.19.0-3 or earlier. +libiscsi-1.19.4 or later fails to be downgraded to libiscsi-1.19.3 or earlier. 
```text Error: @@ -217,8 +199,8 @@ Problem: problem with installed package libiscsi-utils-1.19.0-4.oe1.x86_64 ### Possible Cause -In libiscsi-1.19.0-3 or earlier, binary files named **iscsi-xxx** are packed into the main package **libiscsi**. However, these binary files introduce improper dependency CUnit. To solve this problem, in libiscsi-1.19.0-4, these binary files are separated into the **libiscsi-utils** subpackage. The main package is weakly dependent on the subpackage. You can integrate or uninstall the subpackage during image building based on product requirements. If the subpackage is not integrated or is uninstalled, the functions of the **libiscsi** main package are not affected. -When libiscsi-1.19.0-4 or later is downgraded to libiscsi-1.19.0-3 or earlier and the **libiscsi-utils** subpackage is installed in the system, because libiscsi-1.19.0-3 or earlier does not contain **libiscsi-utils**, **libiscsi-utils** will fail to be downgraded. Due to the fact that **libiscsi-utils** depends on the **libiscsi** main package before the downgrade, a dependency problem occurs and the libiscsi downgrade fails. +In libiscsi-1.19.3 or earlier, binary files named **iscsi-xxx** are packed into the main package **libiscsi**. However, these binary files introduce improper dependency CUnit. To solve this problem, in libiscsi-1.19.4, these binary files are separated into the **libiscsi-utils** subpackage. The main package is weakly dependent on the subpackage. You can integrate or uninstall the subpackage during image building based on product requirements. If the subpackage is not integrated or is uninstalled, the functions of the **libiscsi** main package are not affected. +When libiscsi-1.19.4 or later is downgraded to libiscsi-1.19.3 or earlier and the **libiscsi-utils** subpackage is installed in the system, because libiscsi-1.19.3 or earlier does not contain **libiscsi-utils**, **libiscsi-utils** will fail to be downgraded. 
Due to the fact that **libiscsi-utils** depends on the **libiscsi** main package before the downgrade, a dependency problem occurs and the libiscsi downgrade fails. ### Solution @@ -228,7 +210,7 @@ Run the following command to uninstall the **libiscsi-utils** subpackage and the yum remove libiscsi-utils ``` -## Failed to Downgrade xfsprogs +## Issue 7: Failed to Downgrade xfsprogs ### Symptom @@ -257,7 +239,7 @@ Run the following command to uninstall the **xfsprogs-xfs_scrub** subpackage and yum remove xfsprogs-xfs_scrub ``` -## Failed to Downgrade elfutils +## Issue 8: Failed to Downgrade elfutils ### Symptom @@ -283,7 +265,7 @@ Run the following command to uninstall the elfutils-extra subpackage and then pe yum remove -y elfutils-extra ``` -## CPython/Lib Detects CVE-2019-9674: Zip Bomb +## Issue 9: CPython/Lib Detects CVE-2019-9674: Zip Bomb ### Symptom @@ -297,7 +279,7 @@ Remote attackers use zip bombs to cause denial of service, affecting target syst Add the alarm information to **zipfile** at . -## ReDoS Attack Occurs Due to Improper Use of glibc Regular Expressions +## Issue 10: ReDoS Attack Occurs Due to Improper Use of glibc Regular Expressions ### Symptom @@ -333,7 +315,7 @@ A core dump occurs on the process that uses the regular expression. The glibc re 3. After a user program detects a process exception, the user program can restart the process to restore services, improving program reliability. -## An Error Is Reported When gdbm-devel Is Installed or Uninstalled During the Installation and Uninstallation of httpd-devel and apr-util-devel +## Issue 11: An Error Is Reported When gdbm-devel Is Installed or Uninstalled During the Installation and Uninstallation of httpd-devel and apr-util-devel ### Symptom @@ -355,7 +337,7 @@ A core dump occurs on the process that uses the regular expression. The glibc re 1. Install gdbm-1.18.1-2 to upgrade gdbm. The error is rectified. 2. 
Upgrade gdbm, and then install gdbm-devel to make it depend on the gdbm of the later version. The error is rectified. -## An rpmdb Error Is Reported When Running the yum or dnf Command After the System Is Rebooted +## Issue 12: An rpmdb Error Is Reported When Running the yum or dnf Command After the System Is Rebooted ### Symptom @@ -375,7 +357,7 @@ Step 1 Run the `kill -9` command to terminate all running RPM-related commands. Step 2 Run `rm -rf /var/lib/rpm/__db.00*` to delete all db.00 files. Step 3 Run the `rpmdb --rebuilddb` command to rebuild the RPM database. -## Failed to Run `rpmrebuild -d /home/test filesystem` to Rebuild the filesystem Package +## Issue 13: Failed to Run `rpmrebuild -d /home/test filesystem` to Rebuild the filesystem Package ### Symptom @@ -391,12 +373,12 @@ Failed to run the `rpmrebuild --comment-missing=y --keep-perm -b -d /home/test f The software package creates the directory in the **%pretrans -p** phase, and modify the directory in the **%ghost** phase. If you create a file or directory in the directory and use `rpmrebuild` to build the package, the created file or directory will be included in the package. The root cause of the symptom is that **filesystem** creates the **/proc** directory in the **%pretrans** phase and modifies the directory in the **%ghost** phase, but some small processes are dynamically created during system running. As a result, `rpmrebuild` cannot include the processes in the package because they are not files or directories and fails to rebuild the package. - + ### Solution Do not use `rpmrebuild` to rebuild the **filesystem** package. 
-## An Error Is Reported When modprobe or `insmod` Is Executed With the `-f` Option +## Issue 14: An Error Is Reported When modprobe or `insmod` Is Executed With the `-f` Option ### Symptom diff --git a/docs/en/docs/Administration/configuring-services.md b/docs/en/Server/Administration/Administrator/configuring-services.md similarity index 88% rename from docs/en/docs/Administration/configuring-services.md rename to docs/en/Server/Administration/Administrator/configuring-services.md index 35ffda1bb6f4f8a2eae7527a03101872449a82fc..e33439eb2e5ad90f0eaa7013ed262100489e21fd 100644 --- a/docs/en/docs/Administration/configuring-services.md +++ b/docs/en/Server/Administration/Administrator/configuring-services.md @@ -1,4 +1 @@ # Configuring Services - - - diff --git a/docs/en/docs/Administration/configuring-the-ftp-server.md b/docs/en/Server/Administration/Administrator/configuring-the-ftp-server.md similarity index 96% rename from docs/en/docs/Administration/configuring-the-ftp-server.md rename to docs/en/Server/Administration/Administrator/configuring-the-ftp-server.md index ebc2ec1def16cc75c2c567ffdff23e474fc3539c..0083d40ed1927ef534925885ee05a0cf9532d8d2 100644 --- a/docs/en/docs/Administration/configuring-the-ftp-server.md +++ b/docs/en/Server/Administration/Administrator/configuring-the-ftp-server.md @@ -64,8 +64,8 @@ To start, stop, or restart the vsftpd service, run the corresponding command as tcp6 0 0 :::21 :::* LISTEN 19716/vsftpd ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >If the **netstat** command does not exist, run the **dnf install net-tools** command to install the **net-tools** software and then run the **netstat** command. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > If the **netstat** command does not exist, run the **dnf install net-tools** command to install the **net-tools** software and then run the **netstat** command. 
- Stopping the vsftpd services @@ -142,8 +142,8 @@ You can modify the vsftpd configuration file to control user permissions. [Tabl ### Default Configuration Description ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The configuration content in this document is for reference only. You can modify the content based on the site requirements \(for example, security hardening requirements\). +>![](./public_sys-resources/icon-note.gif) **NOTE:** +>The configuration content in this document is for reference only. You can modify the content based on the site requirements \(for example, security hardening requirements\). In the openEuler system, vsftpd does not open to anonymous users by default. Run the vim command to view the main configuration file. The content is as follows: @@ -308,7 +308,7 @@ You are advised to configure a welcome information file for the vsftpd service. Generally, users need to restrict the login permission of some accounts. You can set the restriction as required. -By default, vsftpd manages and restricts user identities based on user lists stored in two files. FTP requests from a user in any of the files will be denied. +By default, vsftpd manages and restricts user identities based on user lists stored in two files. FTP requests from a user in any of the files will be denied. - **/etc/vsftpd/user_list** can be used as an allowlist, blocklist, or invalid list, which is determined by the **userlist_enable** and **userlist_deny** parameters. - **/etc/vsftpd/ftpusers** can be used as a blocklist only, regardless of the parameters. @@ -333,8 +333,8 @@ ftp> bye 221 Goodbye. ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If the **ftp** command does not exist, run the **dnf install ftp** command as the **root** user to install the **ftp** software and then run the **ftp** command. 
+>![](./public_sys-resources/icon-note.gif) **NOTE:** +>If the **ftp** command does not exist, run the **dnf install ftp** command as the **root** user to install the **ftp** software and then run the **ftp** command. ## Configuring a Firewall @@ -427,9 +427,9 @@ Generally, the get or mget command is used to download files. ftp> mget *.* ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >- In this case, a message is displayed each time a file is downloaded. To block the prompt information, run the **prompt off** command before running the **mget \*.\*** command. - >- The files are downloaded to the current directory on the Linux host. For example, if you run the ftp command in /home/myopenEuler/, all files are downloaded to /home/myopenEuler/. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > - In this case, a message is displayed each time a file is downloaded. To block the prompt information, run the **prompt off** command before running the **mget \*.\*** command. + > - The files are downloaded to the current directory on the Linux host. For example, if you run the ftp command in /home/myopenEuler/, all files are downloaded to /home/myopenEuler/. ### Uploading a File @@ -499,5 +499,5 @@ Generally, the **delete** or **mdelete** command is used to delete a file. Run the bye command to disconnect from the server. ```shell -ftp> bye +ftp> bye ``` diff --git a/docs/en/Server/Administration/Administrator/configuring-the-repo-server.md b/docs/en/Server/Administration/Administrator/configuring-the-repo-server.md new file mode 100644 index 0000000000000000000000000000000000000000..bb90e94ef8b3f757a42ee0aaa25699023672e31c --- /dev/null +++ b/docs/en/Server/Administration/Administrator/configuring-the-repo-server.md @@ -0,0 +1,407 @@ +# Configuring the Repository Server + +>![](./public_sys-resources/icon-note.gif) **NOTE:** +> openEuler provides multiple repositories for online usage. 
For details about the repositories, see [OS Installation](../../Releasenotes/Releasenotes/os-installation.md). If you cannot obtain the openEuler repository online, you can use the ISO release package provided by openEuler to create a local openEuler repository. This section uses the **openEuler-21.09-aarch64-dvd.iso** file as an example. Replace it with the actual ISO file as required. + + + +- [Configuring the Repository Server](#configuring-the-repository-server) + - [Overview](#overview) + - [Creating or Updating a Local Repository](#creating-or-updating-a-local-repository) + - [Obtaining the ISO File](#obtaining-the-iso-file) + - [Mounting an ISO File to Create a Repository](#mounting-an-iso-file-to-create-a-repository) + - [Creating a Local Repository](#creating-a-local-repository) + - [Updating the Repository](#updating-the-repository) + - [Deploying the Remote Repository](#deploying-the-remote-repository) + - [Installing and Configuring Nginx](#installing-and-configuring-nginx) + - [Starting Nginx](#starting-nginx) + - [Deploying the Repository](#deploying-the-repository) + - [Using the Repository](#using-the-repository) + - [Configuring Repository as the Yum Repository](#configuring-repository-as-the-yum-repository) + - [Repository Priority](#repository-priority) + - [Related Commands of dnf](#related-commands-of-dnf) + + +## Overview + +Use the **openEuler-21.09-aarch64-dvd.iso** file provided by openEuler to create the repository. The following uses Nginx as an example to describe how to deploy the repository and provide the HTTP service. + +## Creating or Updating a Local Repository + +Mount the openEuler ISO file **openEuler-21.09-aarch64-dvd.iso** to create and update a repository. 
+ +### Obtaining the ISO File + +Obtain the openEuler ISO file from the following website: + +[https://repo.openeuler.org/openEuler-21.09/ISO/](https://repo.openeuler.org/openEuler-21.09/ISO/) + +### Mounting an ISO File to Create a Repository + +Run the **mount** command as the **root** user to mount the ISO file. + +The following is an example: + +```shell +mount /home/openEuler/openEuler-21.09-aarch64-dvd.iso /mnt/ +``` + +The mounted mnt directory is as follows: + +```text +. +│── boot.catalog +│── docs +│── EFI +│── images +│── Packages +│── repodata +│── TRANS.TBL +└── RPM-GPG-KEY-openEuler +``` + +In the preceding directory, **Packages** indicates the directory where the RPM package is stored, **repodata** indicates the directory where the repository metadata is stored, and **RPM-GPG-KEY-openEuler** indicates the public key for signing openEuler. + +### Creating a Local Repository + +You can copy related files in the ISO file to a local directory to create a local repository. The following is an example: + +```shell +mount /home/openEuler/openEuler-21.09-aarch64-dvd.iso /mnt/ +mkdir -p /home/openEuler/srv/repo/ +cp -r /mnt/Packages /home/openEuler/srv/repo/ +cp -r /mnt/repodata /home/openEuler/srv/repo/ +cp -r /mnt/RPM-GPG-KEY-openEuler /home/openEuler/srv/repo/ +``` + +The local repository directory is as follows: + +```text +. +│── Packages +│── repodata +└── RPM-GPG-KEY-openEuler +``` + +**Packages** indicates the directory where the RPM package is stored, **repodata** indicates the directory where the repository metadata is stored, and **RPM-GPG-KEY-openEuler** indicates the public key for signing openEuler. + +### Updating the Repository + +You can update the repository in either of the following ways: + +- Use the latest ISO file to update the existing repository. The method is the same as that for creating a repository. That is, mount the ISO file or copy the ISO file to the local directory. 
+ +- Add an RPM package to the **Packages** directory of the repository and run the **createrepo** command to update the repository. + + ```shell + createrepo --update --workers=10 ~/srv/repo + ``` + +In this command, **--update** indicates the update, and **--workers** indicates the number of threads, which can be customized. + +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> If the command output contains "createrepo: command not found", run the **dnf install createrepo** command as the **root** user to install the **createrepo** software. + +## Deploying the Remote Repository + +Install openEuler OS and deploy the repository using Nginx on openEuler OS. + +### Installing and Configuring Nginx + +1. Download the Nginx tool and install it as the **root** user. + +2. After Nginx is installed, configure /etc/nginx/nginx.conf as the **root** user. + + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The configuration content in this document is for reference only. You can configure the content based on the site requirements (for example, security hardening requirements). + + ```text + user nginx; + worker_processes auto; # You are advised to set this parameter to **core-1** . + error_log /var/log/nginx/error.log warn; # Log storage location + pid /var/run/nginx.pid; + + events { + worker_connections 1024; + } + + http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + access_log /var/log/nginx/access.log main; + sendfile on; + keepalive_timeout 65; + + server { + listen 80; + server_name localhost; # Server name (URL) + client_max_body_size 4G; + root /usr/share/nginx/repo; # Default service directory + + location / { + autoindex on; # Enable the access to lower-layer files in the directory. 
+ autoindex_exact_size on; + autoindex_localtime on; + } + + } + + } + ``` + +### Starting Nginx + +1. Run the following `systemctl` commands as the **root** user to start the Nginx service. + + ```shell + systemctl enable nginx + systemctl start nginx + ``` + +2. You can run the following command to check whether Nginx is started successfully: + + ```shell + systemctl status nginx + ``` + + - [Figure 1](#en-us_topic_0151920971_fd25e3f1d664b4087ae26631719990a71) indicates that the Nginx service is started successfully. + + **Figure 1** The Nginx service is successfully started. + ![](./figures/the-nginx-service-is-successfully-started.png "the-nginx-service-is-successfully-started") + + - If the Nginx service fails to be started, view the error information. + + ```shell + systemctl status nginx.service --full + ``` + + **Figure 2** The Nginx service startup fails + ![](./figures/nginx-startup-failure.png "nginx-startup-failure") + + As shown in [Figure 2](#en-us_topic_0151920971_f1f9f3d086e454b9cba29a7cae96a4c54), the Nginx service fails to start because the /var/spool/nginx/tmp/client\_body directory fails to be created. You need to manually create the directory as the **root** user. Solve similar problems as follows: + + ```shell + mkdir -p /var/spool/nginx/tmp/client_body + mkdir -p /var/spool/nginx/tmp/proxy + mkdir -p /var/spool/nginx/tmp/fastcgi + mkdir -p /usr/share/nginx/uwsgi_temp + mkdir -p /usr/share/nginx/scgi_temp + ``` + +### Deploying the Repository + +1. Run the following command as the **root** user to create the **/usr/share/nginx/repo** directory specified in the Nginx configuration file /etc/nginx/nginx.conf: + + ```shell + mkdir -p /usr/share/nginx/repo + ``` + +2. Run the following command as the **root** user to modify the **/usr/share/nginx/repo** directory permission: + + ```shell + chmod -R 755 /usr/share/nginx/repo + ``` + +3. Configure firewall rules as the **root** user to enable the port (port 80) configured for Nginx. 
+ + ```shell + firewall-cmd --add-port=80/tcp --permanent + firewall-cmd --reload + ``` + + Check whether port 80 is enabled as the **root** user. If the output is **yes**, port 80 is enabled. + + ```shell + firewall-cmd --query-port=80/tcp + ``` + + You can also enable port 80 using iptables as the **root** user. + + ```shell + iptables -I INPUT -p tcp --dport 80 -j ACCEPT + ``` + +4. After the Nginx service is configured, you can use the IP address to access the web page, as shown in [Figure 3](#en-us_topic_0151921017_fig1880404110396). + + **Figure 3** Nginx deployment succeeded + ![](./figures/nginx-deployment-succeeded.png "nginx-deployment-succeeded") + +5. Use either of the following methods to add the repository to the **/usr/share/nginx/repo** directory: + + - Copy related files in the image to the **/usr/share/nginx/repo** directory as the **root** user. + + ```shell + mount /home/openEuler/openEuler-21.09-aarch64-dvd.iso /mnt/ + cp -r /mnt/Packages /usr/share/nginx/repo/ + cp -r /mnt/repodata /usr/share/nginx/repo/ + cp -r /mnt/RPM-GPG-KEY-openEuler /usr/share/nginx/repo/ + chmod -R 755 /usr/share/nginx/repo + ``` + + The **openEuler-21.09-aarch64-dvd.iso** file is stored in the **/home/openEuler** directory. + + - Create a soft link for the repository in the **/usr/share/nginx/repo** directory as the **root** user. + + ```shell + ln -s /mnt /usr/share/nginx/repo/os + ``` + + **/mnt** is the created repository, and **/usr/share/nginx/repo/os** points to **/mnt** . + +## Using the Repository + +The repository can be configured as a Yum repository. Yum is a shell front-end package manager: based on the Red Hat Package Manager (RPM), it automatically downloads RPM packages from the specified server, installs them, and resolves dependencies, installing all dependent packages in one operation. 
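Once the repository is reachable, client hosts consume it through a Yum/dnf configuration file like the ones shown in the subsections that follow. As a quick sketch (the file name and local paths are just this guide's examples; on a real client the file belongs under /etc/yum.repos.d/), such a configuration can be generated and sanity-checked non-interactively:

```shell
# Sketch: write the Yum configuration for the local repository created above
# and run basic sanity checks before handing it to dnf. Paths are the examples
# used in this guide; adjust them to your system.
cat > openEuler-local.repo <<'EOF'
[base]
name=base
baseurl=file:///home/openEuler/srv/repo
enabled=1
gpgcheck=1
gpgkey=file:///home/openEuler/srv/repo/RPM-GPG-KEY-openEuler
EOF

# Confirm the section header and a file:// baseurl are present
grep -q '^\[base\]' openEuler-local.repo && \
grep -q '^baseurl=file:///' openEuler-local.repo && \
echo "repo configuration looks sane"
```

On a real client, the generated file would be copied to /etc/yum.repos.d/ and verified with `dnf repolist`.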
+ +### Configuring Repository as the Yum Repository + +You can configure the built repository as the Yum repository and create the \*\*\*.repo configuration file (the extension .repo is mandatory) in the /etc/yum.repos.d/ directory as the **root** user. You can configure the Yum repository on the local host or HTTP server. + +- Configuring the local Yum repository. + + Create the **openEuler.repo** file in the **/etc/yum.repos.d** directory and use the local repository as the Yum repository. The content of the **openEuler.repo** file is as follows: + + ```text + [base] + name=base + baseurl=file:///home/openEuler/srv/repo + enabled=1 + gpgcheck=1 + gpgkey=file:///home/openEuler/srv/repo/RPM-GPG-KEY-openEuler + ``` + + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > + > - **repoid** indicates the ID of the software repository. Repo IDs in all .repo configuration files must be unique. In the example, **repoid** is set to **base**. + > - **name** indicates the string that describes the software repository. + > - **baseurl** indicates the address of the software repository. + > - **enabled** indicates whether to enable the software source repository. The value can be **1** or **0**. The default value is **1**, indicating that the software source repository is enabled. + > - **gpgcheck** indicates whether to enable the GNU privacy guard (GPG) to check the validity and security of sources of RPM packages. **1** indicates GPG check is enabled. **0** indicates the GPG check is disabled. + > - **gpgkey** indicates the public key used to verify the signature. + +- Configuring the Yum repository for the HTTP server + + Create the **openEuler.repo** file in the **/etc/yum.repos.d** directory. 
+ + - If the repository of the HTTP server deployed by the user is used as the Yum repository, the content of **openEuler.repo** is as follows: + + ```text + [base] + name=base + baseurl=http://192.168.139.209/ + enabled=1 + gpgcheck=1 + gpgkey=http://192.168.139.209/RPM-GPG-KEY-openEuler + ``` + + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > 192.168.139.209 is an example. Replace it with the actual IP address. + + - If the openEuler repository provided by openEuler is used as the Yum repository, the content of **openEuler.repo** is as follows (the AArch64-based OS repository is used as an example): + + ```text + [base] + name=base + baseurl=http://repo.openeuler.org/openEuler-21.09/OS/aarch64/ + enabled=1 + gpgcheck=1 + gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/aarch64/RPM-GPG-KEY-openEuler + ``` + +### Repository Priority + +If there are multiple repositories, you can set the repository priority in the .repo file. If the priority is not set, the default priority is **99** . If the same RPM package exists in the sources with the same priority, the latest version is installed. **1** indicates the highest priority and **99** indicates the lowest priority. The following shows how to set the priority of **openEuler.repo** to **2**. + +```text +[base] +name=base +baseurl=http://192.168.139.209/ +enabled=1 +priority=2 +gpgcheck=1 +gpgkey=http://192.168.139.209/RPM-GPG-KEY-openEuler +``` + +### Related Commands of dnf + +The **dnf** command automatically resolves dependencies between packages during installation and upgrade. The common usage is as follows: + +```shell +dnf <command> <package_name> +``` + +Common commands are as follows: + +- Installation + + Run the following command as the **root** user. + + ```shell + dnf install <package_name> + ``` + +- Upgrade + + Run the following command as the **root** user. + + ```shell + dnf update <package_name> + ``` + +- Rollback + + Run the following command as the **root** user. 
+ + ```shell + dnf downgrade <package_name> + ``` + +- Update check + + ```shell + dnf check-update + ``` + +- Uninstallation + + Run the following command as the **root** user. + + ```shell + dnf remove <package_name> + ``` + +- Query + + ```shell + dnf search <keyword> + ``` + +- Local installation + + Run the following command as the **root** user. + + ```shell + dnf localinstall <path_to_rpm_package> + ``` + +- Historical records check + + ```shell + dnf history + ``` + +- Cache records clearing + + ```shell + dnf clean all + ``` + +- Cache update + + ```shell + dnf makecache + ``` diff --git a/docs/en/docs/Administration/configuring-the-web-server.md b/docs/en/Server/Administration/Administrator/configuring-the-web-server.md similarity index 93% rename from docs/en/docs/Administration/configuring-the-web-server.md rename to docs/en/Server/Administration/Administrator/configuring-the-web-server.md index 412f2575e868169626b87f3cfe72cda61e556d25..e622bfa3be7478e393a4d6d39aa0c6445efdae3d 100644 --- a/docs/en/docs/Administration/configuring-the-web-server.md +++ b/docs/en/Server/Administration/Administrator/configuring-the-web-server.md @@ -1,4 +1,5 @@ # Configuring the Web Server + - [Configuring the Web Server](#configuring-the-web-server) @@ -59,8 +60,8 @@ You can use the systemctl tool to manage the httpd service, including starting, Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service. ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If the running Apache HTTP server functions as a secure server, a password is required after the system is started. The password is an encrypted private SSL key. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> If the running Apache HTTP server functions as a secure server, a password is required after the system is started. The password is an encrypted private SSL key. 
#### Stopping the Service @@ -154,10 +155,10 @@ If the following information is displayed, the syntax of the configuration file Syntax OK ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- Before modifying the configuration file, back up the original file so that the configuration file can be quickly restored if a fault occurs. ->- The modified configuration file takes effect only after the web service is restarted. +> - Before modifying the configuration file, back up the original file so that the configuration file can be quickly restored if a fault occurs. +> - The modified configuration file takes effect only after the web service is restarted. ### Management Module and SSL @@ -195,12 +196,12 @@ For example, to load the asis DSO module, perform the following steps: asis_module (shared) ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->**Common httpd commands** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> **Common httpd commands** > ->- httpd -v: views the httpd version number. ->- httpd -l: views the static modules compiled into the httpd program. ->- httpd -M: views the static modules and loaded dynamic modules that have been compiled into the httpd program. +> - httpd -v: views the httpd version number. +> - httpd -l: views the static modules compiled into the httpd program. +> - httpd -M: views the static modules and loaded dynamic modules that have been compiled into the httpd program. #### Introduction to SSL @@ -336,8 +337,8 @@ You can use the systemctl tool to manage the Nginx service, including starting, Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service. ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If the running Nginx server functions as a secure server, a password is required after the system is started. The password is an encrypted private SSL key. 
+> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> If the running Nginx server functions as a secure server, a password is required after the system is started. The password is an encrypted private SSL key. #### Stopping the Service @@ -427,10 +428,10 @@ nginx -t If the command output contains **syntax is ok**, the syntax of the configuration file is correct. ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- Before modifying the configuration file, back up the original file so that the configuration file can be quickly restored if a fault occurs. ->- The modified configuration file takes effect only after the web service is restarted. +> - Before modifying the configuration file, back up the original file so that the configuration file can be quickly restored if a fault occurs. +> - The modified configuration file takes effect only after the web service is restarted. ### Management Modules @@ -465,14 +466,14 @@ After the web server is set up, perform the following operations to check whethe RX errors 0 dropped 43 overruns 0 frame 0 TX packets 2246438 bytes 203186675 (193.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 - + enp4s0: flags=4163 mtu 1500 ether 52:54:00:7d:80:9e txqueuelen 1000 (Ethernet) RX packets 149937274 bytes 44652889185 (41.5 GiB) RX errors 0 dropped 1102561 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 - + lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 diff --git a/docs/en/docs/Administration/figures/1665628542704.png b/docs/en/Server/Administration/Administrator/figures/1665628542704.png similarity index 100% rename from docs/en/docs/Administration/figures/1665628542704.png rename to docs/en/Server/Administration/Administrator/figures/1665628542704.png diff --git a/docs/en/docs/Administration/figures/creat_datadisk.png 
b/docs/en/Server/Administration/Administrator/figures/creat_datadisk.png similarity index 100% rename from docs/en/docs/Administration/figures/creat_datadisk.png rename to docs/en/Server/Administration/Administrator/figures/creat_datadisk.png diff --git a/docs/en/docs/Administration/figures/creat_datadisk1.png b/docs/en/Server/Administration/Administrator/figures/creat_datadisk1.png similarity index 100% rename from docs/en/docs/Administration/figures/creat_datadisk1.png rename to docs/en/Server/Administration/Administrator/figures/creat_datadisk1.png diff --git a/docs/en/docs/Administration/figures/d1376b2a-d036-41c4-b852-e8368f363b5e-1.png b/docs/en/Server/Administration/Administrator/figures/d1376b2a-d036-41c4-b852-e8368f363b5e-1.png similarity index 100% rename from docs/en/docs/Administration/figures/d1376b2a-d036-41c4-b852-e8368f363b5e-1.png rename to docs/en/Server/Administration/Administrator/figures/d1376b2a-d036-41c4-b852-e8368f363b5e-1.png diff --git a/docs/en/docs/Administration/figures/d1376b2a-d036-41c4-b852-e8368f363b5e.png b/docs/en/Server/Administration/Administrator/figures/d1376b2a-d036-41c4-b852-e8368f363b5e.png similarity index 100% rename from docs/en/docs/Administration/figures/d1376b2a-d036-41c4-b852-e8368f363b5e.png rename to docs/en/Server/Administration/Administrator/figures/d1376b2a-d036-41c4-b852-e8368f363b5e.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0230050789.png b/docs/en/Server/Administration/Administrator/figures/en-us_image_0230050789.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0230050789.png rename to docs/en/Server/Administration/Administrator/figures/en-us_image_0230050789.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231563132.png b/docs/en/Server/Administration/Administrator/figures/en-us_image_0231563132.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231563132.png rename to 
docs/en/Server/Administration/Administrator/figures/en-us_image_0231563132.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231563134.png b/docs/en/Server/Administration/Administrator/figures/en-us_image_0231563134.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231563134.png rename to docs/en/Server/Administration/Administrator/figures/en-us_image_0231563134.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231563135.png b/docs/en/Server/Administration/Administrator/figures/en-us_image_0231563135.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231563135.png rename to docs/en/Server/Administration/Administrator/figures/en-us_image_0231563135.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231563136.png b/docs/en/Server/Administration/Administrator/figures/en-us_image_0231563136.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231563136.png rename to docs/en/Server/Administration/Administrator/figures/en-us_image_0231563136.png diff --git a/docs/en/docs/Administration/figures/example-command-output.png b/docs/en/Server/Administration/Administrator/figures/example-command-output.png similarity index 100% rename from docs/en/docs/Administration/figures/example-command-output.png rename to docs/en/Server/Administration/Administrator/figures/example-command-output.png diff --git a/docs/en/docs/Administration/figures/login.png b/docs/en/Server/Administration/Administrator/figures/login.png similarity index 100% rename from docs/en/docs/Administration/figures/login.png rename to docs/en/Server/Administration/Administrator/figures/login.png diff --git a/docs/en/docs/Administration/figures/mariadb-logical-architecture.png b/docs/en/Server/Administration/Administrator/figures/mariadb-logical-architecture.png similarity index 100% rename from docs/en/docs/Administration/figures/mariadb-logical-architecture.png 
rename to docs/en/Server/Administration/Administrator/figures/mariadb-logical-architecture.png diff --git a/docs/en/docs/Administration/figures/nginx-deployment-succeeded.png b/docs/en/Server/Administration/Administrator/figures/nginx-deployment-succeeded.png similarity index 100% rename from docs/en/docs/Administration/figures/nginx-deployment-succeeded.png rename to docs/en/Server/Administration/Administrator/figures/nginx-deployment-succeeded.png diff --git a/docs/en/docs/Administration/figures/nginx-startup-failure.png b/docs/en/Server/Administration/Administrator/figures/nginx-startup-failure.png similarity index 100% rename from docs/en/docs/Administration/figures/nginx-startup-failure.png rename to docs/en/Server/Administration/Administrator/figures/nginx-startup-failure.png diff --git a/docs/en/docs/Administration/figures/postgres.png b/docs/en/Server/Administration/Administrator/figures/postgres.png similarity index 100% rename from docs/en/docs/Administration/figures/postgres.png rename to docs/en/Server/Administration/Administrator/figures/postgres.png diff --git a/docs/en/docs/Administration/figures/postgresql-architecture.png b/docs/en/Server/Administration/Administrator/figures/postgresql-architecture.png similarity index 100% rename from docs/en/docs/Administration/figures/postgresql-architecture.png rename to docs/en/Server/Administration/Administrator/figures/postgresql-architecture.png diff --git a/docs/en/docs/Administration/figures/the-nginx-service-is-successfully-started.png b/docs/en/Server/Administration/Administrator/figures/the-nginx-service-is-successfully-started.png similarity index 100% rename from docs/en/docs/Administration/figures/the-nginx-service-is-successfully-started.png rename to docs/en/Server/Administration/Administrator/figures/the-nginx-service-is-successfully-started.png diff --git a/docs/en/docs/Administration/process-management.md b/docs/en/Server/Administration/Administrator/process-management.md similarity index 98% 
rename from docs/en/docs/Administration/process-management.md rename to docs/en/Server/Administration/Administrator/process-management.md index 74abc6ff58335e3de37f4e3b0ba3da5ea1a09362..d91e608335102bdaf4dd71da230cec0474390062 100644 --- a/docs/en/docs/Administration/process-management.md +++ b/docs/en/Server/Administration/Administrator/process-management.md @@ -127,7 +127,7 @@ The `kill` command sends a signal to terminate running processes. By default, th Two types of syntax of the `kill` command: ```shell -kill [-s signal | -p] [-a] PID… +kill [-s signal | -p] [-a] PID... kill -l [signal] ``` @@ -268,27 +268,27 @@ minute hour day-of-month month-of-year day-of-week commands

minute

The minute of the hour at which commands will be executed. Value range: 0–59.

+

The minute of the hour at which commands will be executed. Value range: 0-59.

hour

The hour of the day at which scheduled commands will be executed. Value range: 0–23.

+

The hour of the day at which scheduled commands will be executed. Value range: 0-23.

day-of-month

The day of the month on which scheduled commands will be executed. Value range: 1–31.

+

The day of the month on which scheduled commands will be executed. Value range: 1-31.

month-of-year

The month of the year in which scheduled commands will be executed. Value range: 1–12.

+

The month of the year in which scheduled commands will be executed. Value range: 1-12.

day-of-week

The day of the week on which scheduled commands will be executed. Value range: 0–6.

+

The day of the week on which scheduled commands will be executed. Value range: 0-6.

commands

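Read together, the five time fields plus the command form one crontab line. A minimal sketch of how such an entry breaks down into the fields described above — the script path `/usr/local/bin/backup.sh` is a hypothetical placeholder, not something from this document:

```shell
# A sample crontab entry: run a (hypothetical) backup script at 02:30 every Monday.
entry='30 2 * * 1 /usr/local/bin/backup.sh'

# Split the entry into its five time fields, in the order cron reads them.
echo "$entry" | awk '{printf "minute=%s hour=%s day-of-month=%s month-of-year=%s day-of-week=%s\n", $1, $2, $3, $4, $5}'
# Prints: minute=30 hour=2 day-of-month=* month-of-year=* day-of-week=1
```

In practice such a line would be added to a user's crontab with `crontab -e`.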
diff --git a/docs/en/docs/A-Tune/public_sys-resources/icon-caution.gif b/docs/en/Server/Administration/Administrator/public_sys-resources/icon-caution.gif similarity index 100% rename from docs/en/docs/A-Tune/public_sys-resources/icon-caution.gif rename to docs/en/Server/Administration/Administrator/public_sys-resources/icon-caution.gif diff --git a/docs/en/docs/Kubernetes/public_sys-resources/icon-note.gif b/docs/en/Server/Administration/Administrator/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/Kubernetes/public_sys-resources/icon-note.gif rename to docs/en/Server/Administration/Administrator/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/ApplicationDev/public_sys-resources/icon-notice.gif b/docs/en/Server/Administration/Administrator/public_sys-resources/icon-notice.gif similarity index 100% rename from docs/en/docs/ApplicationDev/public_sys-resources/icon-notice.gif rename to docs/en/Server/Administration/Administrator/public_sys-resources/icon-notice.gif diff --git a/docs/en/docs/Administration/service-management.md b/docs/en/Server/Administration/Administrator/service-management.md similarity index 100% rename from docs/en/docs/Administration/service-management.md rename to docs/en/Server/Administration/Administrator/service-management.md diff --git a/docs/en/docs/Administration/setting-up-the-database-server.md b/docs/en/Server/Administration/Administrator/setting-up-the-database-server.md similarity index 90% rename from docs/en/docs/Administration/setting-up-the-database-server.md rename to docs/en/Server/Administration/Administrator/setting-up-the-database-server.md index b84c1dc97a4923bc3d3b86bcb3a512304f5edada..cf3ed31bf09dc0ca00100389c59ef47f58169c04 100644 --- a/docs/en/docs/Administration/setting-up-the-database-server.md +++ b/docs/en/Server/Administration/Administrator/setting-up-the-database-server.md @@ -1,5 +1,7 @@ # Setting Up the Database Server + + - [Setting Up the Database 
Server](#setting-up-the-database-server) - [PostgreSQL Server](#postgresql-server) - [Software Description](#software-description) @@ -106,13 +108,13 @@ ### Configuring the Environment ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The following environment configuration is for reference only. Configure the environment based on the site requirements. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> The following environment configuration is for reference only. Configure the environment based on the site requirements. #### Disabling the Firewall and Automatic Startup ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->It is recommended that firewall be disabled in the test environment to prevent network impact. Configure the firewall based on actual requirements. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> It is recommended that firewall be disabled in the test environment to prevent network impact. Configure the firewall based on actual requirements. 1. Stop the firewall service as the **root** user. @@ -126,8 +128,8 @@ systemctl disable firewalld ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >The automatic startup is automatically disabled as the firewall is disabled. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The automatic startup is automatically disabled as the firewall is disabled. #### Disabling SELinux @@ -139,8 +141,8 @@ #### Creating a User Group and a User ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->In the server environment, independent users are assigned to each process to implement permission isolation for security purposes. The user group and user are created for the OS, not for the database. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> In the server environment, independent users are assigned to each process to implement permission isolation for security purposes. The user group and user are created for the OS, not for the database. 1. 
Create a PostgreSQL user or user group as the **root** user. @@ -160,10 +162,10 @@ #### Creating Data Drives ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- When testing the ultimate performance, you are advised to attach NVMe SSDs with better I/O performance to create PostgreSQL test instances to avoid the impact of disk I/O on the performance test result. This section uses NVMe SSDs as an example. For details, see Step 1 to Step 4. ->- In a non-performance test, run the following command as the **root** user to create a data directory. Then skip this section. +> - When testing the ultimate performance, you are advised to attach NVMe SSDs with better I/O performance to create PostgreSQL test instances to avoid the impact of disk I/O on the performance test result. This section uses NVMe SSDs as an example. For details, see Step 1 to Step 4. +> - In a non-performance test, run the following command as the **root** user to create a data directory. Then skip this section. > `mkdir /data` 1. Create a file system \(xfs is used as an example as the **root** user. Create the file system based on the site requirements.\). If a file system has been created for a disk, an error will be reported when you run this command. You can use the **-f** parameter to forcibly create a file system. @@ -225,8 +227,8 @@ ##### Initializing the Database ->![](./public_sys-resources/icon-notice.gif) **NOTICE:** ->Perform this step as the postgres user. +> ![](./public_sys-resources/icon-notice.gif)**NOTICE:** +> Perform this step as the postgres user. 1. Switch to the created PostgreSQL user. @@ -268,8 +270,8 @@ ![](./figures/login.png) - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >You do not need to enter a password when logging in to the database for the first time. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > You do not need to enter a password when logging in to the database for the first time. 
##### Configuring the Database Accounts and Passwords @@ -695,8 +697,8 @@ postgres=# \l; You can run the **DROP DATABASE** statement or **dropdb** command to delete a database. The **dropdb** command encapsulates the **DROP DATABASE** statement and needs to be executed on the shell GUI instead of the database GUI. ->![](./public_sys-resources/icon-caution.gif) **CAUTION:** ->Exercise caution when deleting a database. Once a database is deleted, all tables and data in the database will be deleted. +> ![](./public_sys-resources/icon-caution.gif) **CAUTION:** +> Exercise caution when deleting a database. Once a database is deleted, all tables and data in the database will be deleted. ```pgsql DROP DATABASE databasename; @@ -811,13 +813,13 @@ Each storage engine manages and stores data in different ways, and supports diff ### Configuring the Environment ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The following environment configuration is for reference only. Configure the environment based on the site requirements. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> The following environment configuration is for reference only. Configure the environment based on the site requirements. #### Disabling the Firewall and Automatic Startup ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->It is recommended that firewall be disabled in the test environment to prevent network impact. Configure the firewall based on actual requirements. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> It is recommended that firewall be disabled in the test environment to prevent network impact. Configure the firewall based on actual requirements. 1. Stop the firewall service as the **root** user. @@ -831,8 +833,8 @@ Each storage engine manages and stores data in different ways, and supports diff systemctl disable firewalld ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >The automatic startup is automatically disabled as the firewall is disabled. 
+ > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The automatic startup is automatically disabled as the firewall is disabled. #### Disabling SELinux @@ -844,8 +846,8 @@ Each storage engine manages and stores data in different ways, and supports diff #### Creating a User Group and a User ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->In the server environment, independent users are assigned to each process to implement permission isolation for security purposes. The user group and user are created for the OS, not for the database. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> In the server environment, independent users are assigned to each process to implement permission isolation for security purposes. The user group and user are created for the OS, not for the database. 1. Create a MySQL user or user group as the **root** user. @@ -867,10 +869,10 @@ Each storage engine manages and stores data in different ways, and supports diff #### Creating Data Drives ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- If a performance test needs to be performed, an independent drive is required for the data directory. You need to format and mount the drive. For details, see Method 1 or Method 2. ->- In a non-performance test, run the following command as the **root** user to create a data directory. Then skip this section. +> - If a performance test needs to be performed, an independent drive is required for the data directory. You need to format and mount the drive. For details, see Method 1 or Method 2. +> - In a non-performance test, run the following command as the **root** user to create a data directory. Then skip this section. 
> `mkdir /data` ##### Method 1: Using fdisk for Drive Management as the **root** user @@ -910,12 +912,12 @@ Each storage engine manages and stores data in different ways, and supports diff ![](./figures/creat_datadisk1.png) ##### Method 2: Using LVM for Drive Management as the **root** user + +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> Install the LVM2 package in the image as follows: > ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->Install the LVM2 package in the image as follows: -> ->1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md). If the repository has been configured, skip this step. ->2. Install LVM2. +> 1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md). If the repository has been configured, skip this step. +> 2. Install LVM2. > **yum install lvm2** 1. Create a physical volume, for example, **sdb**. @@ -1022,8 +1024,8 @@ Each storage engine manages and stores data in different ways, and supports diff After the command is executed, the system prompts you to enter the password. The password is the one set in [2](#li197143190587). - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Run the **\\q** or **exit** command to exit the database. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > Run the **\\q** or **exit** command to exit the database. #### Uninstalling MariaDB @@ -1156,8 +1158,8 @@ In the preceding information: - **FOR 'username'@'hostname'**: specifies the username and hostname whose password is to be changed. This parameter is optional. - **PASSWORD\('newpassword'\)**: indicates that the **PASSWORD\(\)** function is used to set a new password. That is, the new password must be transferred to the **PASSWORD\(\)** function for encryption. ->![](./public_sys-resources/icon-caution.gif) **CAUTION:** ->The **PASSWORD\(\)** function is a unidirectional encryption function. 
Once encrypted, the original plaintext cannot be decrypted. +> ![](./public_sys-resources/icon-caution.gif) **CAUTION:** +> The **PASSWORD\(\)** function is a unidirectional encryption function. Once encrypted, the original plaintext cannot be decrypted. If the **FOR** clause is not added to the **SET PASSWORD** statement, the password of the current user is changed. @@ -1181,8 +1183,8 @@ Use the **DROP USER** statement to delete one or more user accounts and relate DROP USER 'username1'@'hostname1' [,'username2'@'hostname2']...; ``` ->![](./public_sys-resources/icon-caution.gif) **CAUTION:** ->The deletion of users does not affect the tables, indexes, or other database objects that they have created, because the database does not record the accounts that have created these objects. +> ![](./public_sys-resources/icon-caution.gif) **CAUTION:** +> The deletion of users does not affect the tables, indexes, or other database objects that they have created, because the database does not record the accounts that have created these objects. The **DROP USER** statement can be used to delete one or more database accounts and their original permissions. @@ -1308,8 +1310,8 @@ In the preceding command, **databasename** indicates the database name. You can run the **DROP DATABASE** statement to delete a database. ->![](./public_sys-resources/icon-caution.gif) **CAUTION:** ->Exercise caution when deleting a database. Once a database is deleted, all tables and data in the database will be deleted. +> ![](./public_sys-resources/icon-caution.gif) **CAUTION:** +> Exercise caution when deleting a database. Once a database is deleted, all tables and data in the database will be deleted. ```pgsql DROP DATABASE databasename; @@ -1438,13 +1440,13 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard ### Configuring the Environment ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The following environment configuration is for reference only. 
Configure the environment based on the site requirements. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> The following environment configuration is for reference only. Configure the environment based on the site requirements. #### Disabling the Firewall and Automatic Startup ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->It is recommended that firewall be disabled in the test environment to prevent network impact. Configure the firewall based on actual requirements. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> It is recommended that firewall be disabled in the test environment to prevent network impact. Configure the firewall based on actual requirements. 1. Stop the firewall service as the **root** user. @@ -1458,8 +1460,8 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard systemctl disable firewalld ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >The automatic startup is automatically disabled as the firewall is disabled. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The automatic startup is automatically disabled as the firewall is disabled. #### Disabling SELinux @@ -1471,8 +1473,8 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard #### Creating a User Group and a User ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->In the server environment, independent users are assigned to each process to implement permission isolation for security purposes. The user group and user are created for the OS, not for the database. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> In the server environment, independent users are assigned to each process to implement permission isolation for security purposes. The user group and user are created for the OS, not for the database. 1. Create a MySQL user or user group as the **root** user. 
@@ -1494,10 +1496,10 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard #### Creating Data Drives ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- If a performance test needs to be performed, an independent drive is required for the data directory. You need to format and mount the drive. For details, see Method 1 or Method 2. ->- In a non-performance test, run the following command as the **root** user to create a data directory. Then skip this section. +> - If a performance test needs to be performed, an independent drive is required for the data directory. You need to format and mount the drive. For details, see Method 1 or Method 2. +> - In a non-performance test, run the following command as the **root** user to create a data directory. Then skip this section. > `mkdir /data` ##### Method 1: Using fdisk for Drive Management as the **root** user @@ -1537,12 +1539,12 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard ![](./figures/creat_datadisk.png) ##### Method 2: Using LVM for Drive Management as the **root** user + +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> Install the LVM2 package in the image as follows: > ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->Install the LVM2 package in the image as follows: -> ->1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md). If the repository has been configured, skip this step. ->2. Install LVM2. +> 1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md). If the repository has been configured, skip this step. +> 2. Install LVM2. > **yum install lvm2** 1. Create a PV, for example, **sdb**. 
@@ -1664,8 +1666,8 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard ![](./figures/en-us_image_0231563132.png) - >![](./public_sys-resources/icon-caution.gif) **CAUTION:** - >In the configuration file, **basedir** specifies the software installation path. Change it based on actual situation. + > ![](./public_sys-resources/icon-caution.gif) **CAUTION:** + > In the configuration file, **basedir** specifies the software installation path. Change it based on actual situation. 3. Change the group and user of the **/etc/my.cnf** file to **mysql:mysql** as the **root** user. @@ -1680,8 +1682,8 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard echo export PATH=$PATH:/usr/local/mysql/bin >> /etc/profile ``` - >![](./public_sys-resources/icon-caution.gif) **CAUTION:** - >In the command, **/usr/local/mysql/bin** is the absolute path of the **bin** files in the MySQL software installation directory. Change it based on actual situation. + > ![](./public_sys-resources/icon-caution.gif) **CAUTION:** + > In the command, **/usr/local/mysql/bin** is the absolute path of the **bin** files in the MySQL software installation directory. Change it based on actual situation. 2. Run the following command as the **root** user to make the environment variables take effect: @@ -1691,8 +1693,8 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard 3. Initialize the database as the **root** user. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >The second line from the bottom contains the initial password, which will be used when you log in to the database. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The second line from the bottom contains the initial password, which will be used when you log in to the database. ```shell $ mysqld --defaults-file=/etc/my.cnf --initialize @@ -1705,8 +1707,8 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard 4. 
Start the database. - >![](./public_sys-resources/icon-caution.gif) **CAUTION:** - >Start MySQL as user **mysql** if it is the first time to start the database service. If you start MySQL as user **root**, a message will be displayed indicating that the **mysql.log** file is missing. If you start MySQL as user **mysql**, the **mysql.log** file will be generated in the **/data/mysql/log** directory. No error will be displayed if you start the database as user **root** again. + > ![](./public_sys-resources/icon-caution.gif) **CAUTION:** + > Start MySQL as user **mysql** if it is the first time to start the database service. If you start MySQL as user **root**, a message will be displayed indicating that the **mysql.log** file is missing. If you start MySQL as user **mysql**, the **mysql.log** file will be generated in the **/data/mysql/log** directory. No error will be displayed if you start the database as user **root** again. 1. Modify the file permission as the **root** user. @@ -1731,10 +1733,10 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard 5. Log in to the database. - >![](./public_sys-resources/icon-note.gif) **NOTE:** + > ![](./public_sys-resources/icon-note.gif) **NOTE:** > - >- Enter the initial password generated during database initialization \([3](#li15634560582)\). - >- If MySQL is installed by using an RPM package obtained from the official website, the **mysqld** file is located in the **/usr/sbin** directory. Ensure that the directory specified in the command is correct. + > - Enter the initial password generated during database initialization \([3](#li15634560582)\). + > - If MySQL is installed by using an RPM package obtained from the official website, the **mysqld** file is located in the **/usr/sbin** directory. Ensure that the directory specified in the command is correct. 
```shell /usr/local/mysql/bin/mysql -uroot -p -S /data/mysql/run/mysql.sock @@ -1927,8 +1929,8 @@ Use the **DROP USER** statement to delete one or more user accounts and relate DROP USER 'username1'@'hostname1' [,'username2'@'hostname2']...; ``` ->![](./public_sys-resources/icon-caution.gif) **CAUTION:** ->The deletion of users does not affect the tables, indexes, or other database objects that they have created, because the database does not record the accounts that have created these objects. +> ![](./public_sys-resources/icon-caution.gif) **CAUTION:** +> The deletion of users does not affect the tables, indexes, or other database objects that they have created, because the database does not record the accounts that have created these objects. The **DROP USER** statement can be used to delete one or more database accounts and their original permissions. @@ -2054,8 +2056,8 @@ In the preceding command, _databasename_ indicates the database name. Run the **DROP DATABASE** statement to delete a database. ->![](./public_sys-resources/icon-caution.gif) **CAUTION:** ->Exercise caution when deleting a database. Once a database is deleted, all tables and data in the database will be deleted. +> ![](./public_sys-resources/icon-caution.gif) **CAUTION:** +> Exercise caution when deleting a database. Once a database is deleted, all tables and data in the database will be deleted. 
```pgsql DROP DATABASE databasename; diff --git a/docs/en/docs/Administration/user-and-user-group-management.md b/docs/en/Server/Administration/Administrator/user-and-user-group-management.md similarity index 88% rename from docs/en/docs/Administration/user-and-user-group-management.md rename to docs/en/Server/Administration/Administrator/user-and-user-group-management.md index 32c30d32458c0966aa9a837cf533d7175afa0690..7bb0b94540785164e6b9fbcf9ad510abecbcf322 100644 --- a/docs/en/docs/Administration/user-and-user-group-management.md +++ b/docs/en/Server/Administration/Administrator/user-and-user-group-management.md @@ -51,8 +51,8 @@ For example, to create a user named userexample, run the following command as th useradd userexample ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If no prompt is displayed, the user is successfully created. After the user is created, run the **passwd** command to assign a password to the user. A new account without a password will be banned. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> If no prompt is displayed, the user is successfully created. After the user is created, run the **passwd** command to assign a password to the user. A new account without a password will be banned. To view information about the new user, run the **id** command: @@ -103,8 +103,8 @@ Retype new password: passwd: all authentication tokens updated successfully. ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If the command output contains **BAD PASSWORD: The password fails the dictionary check - it is too simplistic/sytematic**, the password is too simple and needs to be reset. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> If the command output contains **BAD PASSWORD: The password fails the dictionary check - it is too simplistic/sytematic**, the password is too simple and needs to be reset. 
### Modifying a User Account @@ -172,8 +172,8 @@ userdel Test If you also need to delete the user's home directory and all contents in the directory, run the **userdel** command with the -r option to delete them recursively. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->You are not advised to directly delete a user who has logged in to the system. To forcibly delete a user, run the **userdel -f** _Test_ command. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> You are not advised to directly delete a user who has logged in to the system. To forcibly delete a user, run the **userdel -f** _Test_ command. ### Granting Rights to a Common User @@ -223,12 +223,12 @@ The information configured in the **/etc/sudoers** file is as follows: This indicates that newuser1 on the ted1 host can run the **useradd** and **userdel** commands as the user **root**. - >![](./public_sys-resources/icon-note.gif) **NOTE:** + > ![](./public_sys-resources/icon-note.gif) **NOTE:** > - >- You can define multiple aliases in a line and separate them with colons \(:\). - >- You can add an exclamation mark \(!\) before a command or a command alias to make the command or the command alias invalid. - >- There are two keywords: ALL and NOPASSWD. ALL indicates all files, hosts, or commands, and NOPASSWD indicates that no password is required. - >- By modifying user access, you can change the access permission of a common user to be the same as that of the user **root**. Then, you can grant rights to the common user. + > - You can define multiple aliases in a line and separate them with colons \(:\). + > - You can add an exclamation mark \(!\) before a command or a command alias to make the command or the command alias invalid. + > - There are two keywords: ALL and NOPASSWD. ALL indicates all files, hosts, or commands, and NOPASSWD indicates that no password is required. 
+ > - By modifying user access, you can change the access permission of a common user to be the same as that of the user **root**. Then, you can grant rights to the common user. The following is an example of the **sudoers** file: @@ -311,8 +311,8 @@ For example, run the following command to delete user group Test: groupdel Test ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The user's primary group cannot be directly deleted. To forcibly delete a user's primary group, run the **groupdel -f** _Test_ command. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> The user's primary group cannot be directly deleted. To forcibly delete a user's primary group, run the **groupdel -f** _Test_ command. ### Adding a User to a Group or Removing a User from a Group diff --git a/docs/en/docs/Administration/using-dnf-to-manage-software-packages.md b/docs/en/Server/Administration/Administrator/using-dnf-to-manage-software-packages.md similarity index 52% rename from docs/en/docs/Administration/using-dnf-to-manage-software-packages.md rename to docs/en/Server/Administration/Administrator/using-dnf-to-manage-software-packages.md index 227a21428b84eaa44ef34573d17d6847f517c527..962cbffd828229baedb2bf673ba8f5c87fa877ac 100644 --- a/docs/en/docs/Administration/using-dnf-to-manage-software-packages.md +++ b/docs/en/Server/Administration/Administrator/using-dnf-to-manage-software-packages.md @@ -2,10 +2,10 @@ DNF is a Linux software package management tool used to manage RPM software packages. The DNF can query software package information, obtain software packages from a specified software library, automatically process dependencies to install or uninstall software packages, and update the system to the latest available version. ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- DNF is fully compatible with YUM and provides YUM-compatible command lines and APIs for extensions and plug-ins. 
->- You must have the administrator rights to use the DNF. All commands in this chapter must be executed by the administrator. +> - DNF is fully compatible with YUM and provides YUM-compatible command lines and APIs for extensions and plug-ins. +> - You must have the administrator rights to use the DNF. All commands in this chapter must be executed by the administrator. ## Configuring the DNF @@ -38,58 +38,18 @@ Common options are as follows: **Table 1** main parameter description - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

-Parameter
-Description
-cachedir
-Cache directory for storing RPM packages and database files.
-keepcache
-The options are 1 and 0, indicating whether to cache the RPM packages and header files that have been successfully installed. The default value is 0, indicating that the RPM packages and header files are not cached.
-debuglevel
-Sets debugging information generated by the DNF. The value ranges from 0 to 10. A larger value indicates more detailed debugging information. The default value is 2. The value 0 indicates that the debug information is not displayed.
-clean_requirements_on_remove
-Deletes the dependency items that are no longer used during DNF removal. If the software package is installed through the DNF instead of the explicit user request, the software package can be deleted only through clean_requirements_on_remove, that is, the software package is introduced as a dependency item. The default value is True.
-best
-The system always attempts to install the latest version of the upgrade package. If the latest version cannot be installed, the system displays the cause and stops the installation. The default value is True.
-obsoletes
-The options are 1 and 0, indicating whether to allow the update of outdated RPM packages. The default value is 1, indicating that the update is allowed.
-gpgcheck
-The options are 1 and 0, indicating whether to perform GPG verification. The default value is 1, indicating that verification is required.
-plugins
-The options are 1 and 0, indicating that the DNF plug-in is enabled or disabled. The default value is 1, indicating that the DNF plug-in is enabled.
-installonly_limit
-Sets the number of packages that can be installed at the same time by running the installonlypkgs command. The default value is 3. You are advised not to decrease the value.
+
+| Parameter | Description |
+| --------- | ----------- |
+| cachedir | Cache directory for storing RPM packages and database files. |
+| keepcache | The options are 1 and 0, indicating whether to cache the RPM packages and header files that have been successfully installed. The default value is 0, indicating that the RPM packages and header files are not cached. |
+| debuglevel | Sets debugging information generated by the DNF. The value ranges from 0 to 10. A larger value indicates more detailed debugging information. The default value is 2. The value 0 indicates that the debug information is not displayed. |
+| clean\_requirements\_on\_remove | Deletes the dependency items that are no longer used during DNF removal. If the software package is installed through the DNF instead of the explicit user request, the software package can be deleted only through clean\_requirements\_on\_remove, that is, the software package is introduced as a dependency item. The default value is **True**. |
+| best | The system always attempts to install the latest version of the upgrade package. If the latest version cannot be installed, the system displays the cause and stops the installation. The default value is **True**. |
+| obsoletes | The options are **1** and **0**, indicating whether to allow the update of outdated RPM packages. The default value is 1, indicating that the update is allowed. |
+| gpgcheck | The options are **1** and **0**, indicating whether to perform GPG verification. The default value is **1**, indicating that verification is required.
| +| plugins | The options are **1** and **0**, indicating that the DNF plug-in is enabled or disabled. The default value is **1**, indicating that the DNF plug-in is enabled. |
+| installonly\_limit | Sets the number of packages that can be installed at the same time by running the **installonlypkgs** command. The default value is 3. You are advised not to decrease the value. |

#### Configuring the repository Part

@@ -105,35 +65,21 @@ The repository part allows you to customize openEuler software source repositori
    baseurl=repository_url
    ```

-    >![](./public_sys-resources/icon-note.gif) **NOTE:**
-    >openEuler provides an online image source at [https://repo.openeuler.org/](https://repo.openeuler.org/). For example, if the openEuler 21.03 version is aarch64, the **baseurl** can be set to [https://repo.openeuler.org/openEuler-21.03/OS/aarch64/](https://repo.openeuler.org/openEuler-21.03/OS/aarch64/).
+    > ![](./public_sys-resources/icon-note.gif) **NOTE:**
+    > openEuler provides an online image source at [https://repo.openeuler.org/](https://repo.openeuler.org/). For example, if the openEuler 21.03 version is aarch64, the **baseurl** can be set to [https://repo.openeuler.org/openEuler-21.03/OS/aarch64/](https://repo.openeuler.org/openEuler-21.03/OS/aarch64/).

Common options are as follows:

**Table 2** repository parameter description

-Parameter
-Description
-name=repository_name
-Name string of a software repository.
-baseurl=repository_url
-Address of the software repository.
-  • Network location using the HTTP protocol, for example, http://path/to/repo
-  • Network location using the FTP protocol, for example, ftp://path/to/repo
-  • Local path: for example, file:///path/to/local/repo
+
+| Parameter | Description |
+| ---------------------- | ------------------------------------- |
+| name=repository_name | Name string of a software repository. |
+| baseurl=repository_url | Address of the software repository.
- Network location using the HTTP protocol, for example, `http://path/to/repo`
- Network location using the FTP protocol, for example, `ftp://path/to/repo`
- Local path: for example, `file:///path/to/local/repo` | - Configuring the .repo file in the **/etc/yum.repos.d** directory - openEuler provides multiple repo sources for users online. For details about the repo sources, see [OS Installation](../Releasenotes/installing-the-os.md). + openEuler provides multiple repo sources for users online. For details about the repo sources, see [OS Installation](../../Releasenotes/Releasenotes/os-installation.md). For example, run the following command as the **root** user to add the openeuler repo source to the openEuler.repo file. @@ -150,7 +96,7 @@ The repository part allows you to customize openEuler software source repositori gpgkey=https://repo.openeuler.org/openEuler-21.03/OS/$basearch/RPM-GPG-KEY-openEuler ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** + > ![](./public_sys-resources/icon-note.gif) **NOTE:** > > - **enabled** indicates whether to enable the software source repository. The value can be **1** or **0**. The default value is **1**, indicating that the software source repository is enabled. > - **gpgkey** is the public key used to verify the signature. @@ -334,8 +280,8 @@ The following is an example: dnf install httpd ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If the RPM package fails to be installed, see [Installation Failure Caused by Software Package Conflict, File Conflict, or Missing Software Package](./faqs.md#installation-failure-caused-by-software-package-conflict-file-conflict-or-missing-software-package). +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> If the RPM package fails to be installed, see [Installation Failure Caused by Software Package Conflict, File Conflict, or Missing Software Package](./common-issues-and-solutions.md#issue-5-installation-failure-caused-by-software-package-conflict-file-conflict-or-missing-software-package). 
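Taken together, the main-section parameters described earlier form a complete DNF base configuration. The sketch below assembles them using their documented default values (the `/var/cache/dnf` path is the usual DNF default, and the `/tmp` target is used only for illustration so the real **/etc/dnf/dnf.conf** is left untouched):

```shell
# Sketch only: a minimal [main] section built from the documented defaults.
# Written to /tmp for illustration; a real deployment would edit /etc/dnf/dnf.conf.
cat << 'EOF' > /tmp/dnf.conf.example
[main]
cachedir=/var/cache/dnf
keepcache=0
debuglevel=2
clean_requirements_on_remove=True
best=True
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
EOF
cat /tmp/dnf.conf.example
```

Repository definitions can then be kept either in the same file or, as shown above, in separate `.repo` files under **/etc/yum.repos.d**.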
### Downloading Software Packages diff --git a/docs/en/docs/Administration/viewing-system-information.md b/docs/en/Server/Administration/Administrator/viewing-system-information.md similarity index 66% rename from docs/en/docs/Administration/viewing-system-information.md rename to docs/en/Server/Administration/Administrator/viewing-system-information.md index 5ac2d113813c5dd4c05daaf1ea86141e89e9fecb..350816bcc2b399cf37456a0aa39e0de81c558b20 100644 --- a/docs/en/docs/Administration/viewing-system-information.md +++ b/docs/en/Server/Administration/Administrator/viewing-system-information.md @@ -1,14 +1,14 @@ # Viewing System Information -- View the system information. +- View the system information. - ``` - $ cat /etc/os-release + ```shell + cat /etc/os-release ``` For example, the command and output are as follows: - ``` + ```shell $ cat /etc/os-release NAME="openEuler" VERSION="21.09" @@ -18,30 +18,28 @@ ANSI_COLOR="0;31" ``` - -- View system resource information. +- View system resource information. Run the following command to view the CPU information: - ``` + ```shell # lscpu ``` Run the following command to view the memory information: - ``` - $ free + ```shell + free ``` Run the following command to view the disk information: + ```shell + fdisk -l ``` - $ fdisk -l - ``` - -- View the real-time system resource information. +- View the real-time system resource information. 
+ ```shell + top ``` - $ top - ``` \ No newline at end of file diff --git a/docs/en/Server/Administration/CompaCommand/Menu/index.md b/docs/en/Server/Administration/CompaCommand/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..54d57ca6d41b592ab9de585b2eff248816eaa170 --- /dev/null +++ b/docs/en/Server/Administration/CompaCommand/Menu/index.md @@ -0,0 +1,7 @@ +--- +headless: true +--- + +- [Compatibility Commands]({{< relref "./overview.md" >}}) + - [utshell User Guide]({{< relref "./utshell-guide.md" >}}) + - [utsudo User Guide]({{< relref "./utsudo-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/memsafety/utsudo/figures/-e.png b/docs/en/Server/Administration/CompaCommand/figures/-e.png similarity index 100% rename from docs/en/docs/memsafety/utsudo/figures/-e.png rename to docs/en/Server/Administration/CompaCommand/figures/-e.png diff --git a/docs/en/docs/memsafety/utsudo/figures/-k.png b/docs/en/Server/Administration/CompaCommand/figures/-k.png similarity index 100% rename from docs/en/docs/memsafety/utsudo/figures/-k.png rename to docs/en/Server/Administration/CompaCommand/figures/-k.png diff --git a/docs/en/docs/memsafety/utsudo/figures/-l.png b/docs/en/Server/Administration/CompaCommand/figures/-l.png similarity index 100% rename from docs/en/docs/memsafety/utsudo/figures/-l.png rename to docs/en/Server/Administration/CompaCommand/figures/-l.png diff --git a/docs/en/docs/memsafety/utsudo/figures/grep.png b/docs/en/Server/Administration/CompaCommand/figures/grep.png similarity index 100% rename from docs/en/docs/memsafety/utsudo/figures/grep.png rename to docs/en/Server/Administration/CompaCommand/figures/grep.png diff --git a/docs/en/docs/memsafety/utsudo/figures/install.png b/docs/en/Server/Administration/CompaCommand/figures/install.png similarity index 100% rename from docs/en/docs/memsafety/utsudo/figures/install.png rename to docs/en/Server/Administration/CompaCommand/figures/install.png diff 
--git a/docs/en/docs/memsafety/utshell/media/commands1.png b/docs/en/Server/Administration/CompaCommand/media/commands1.png similarity index 100% rename from docs/en/docs/memsafety/utshell/media/commands1.png rename to docs/en/Server/Administration/CompaCommand/media/commands1.png diff --git a/docs/en/docs/memsafety/utshell/media/commands2.png b/docs/en/Server/Administration/CompaCommand/media/commands2.png similarity index 100% rename from docs/en/docs/memsafety/utshell/media/commands2.png rename to docs/en/Server/Administration/CompaCommand/media/commands2.png diff --git a/docs/en/docs/memsafety/utshell/media/install-y.png b/docs/en/Server/Administration/CompaCommand/media/install-y.png similarity index 100% rename from docs/en/docs/memsafety/utshell/media/install-y.png rename to docs/en/Server/Administration/CompaCommand/media/install-y.png diff --git a/docs/en/docs/memsafety/utshell/media/install.png b/docs/en/Server/Administration/CompaCommand/media/install.png similarity index 100% rename from docs/en/docs/memsafety/utshell/media/install.png rename to docs/en/Server/Administration/CompaCommand/media/install.png diff --git a/docs/en/docs/memsafety/utshell/media/uninstall.png b/docs/en/Server/Administration/CompaCommand/media/uninstall.png similarity index 100% rename from docs/en/docs/memsafety/utshell/media/uninstall.png rename to docs/en/Server/Administration/CompaCommand/media/uninstall.png diff --git a/docs/en/docs/memsafety/overview.md b/docs/en/Server/Administration/CompaCommand/overview.md similarity index 100% rename from docs/en/docs/memsafety/overview.md rename to docs/en/Server/Administration/CompaCommand/overview.md diff --git a/docs/en/docs/memsafety/utshell/utshell_guide.md b/docs/en/Server/Administration/CompaCommand/utshell-guide.md similarity index 100% rename from docs/en/docs/memsafety/utshell/utshell_guide.md rename to docs/en/Server/Administration/CompaCommand/utshell-guide.md diff --git a/docs/en/docs/memsafety/utsudo/utsudo_user_guide.md 
b/docs/en/Server/Administration/CompaCommand/utsudo-user-guide.md similarity index 100% rename from docs/en/docs/memsafety/utsudo/utsudo_user_guide.md rename to docs/en/Server/Administration/CompaCommand/utsudo-user-guide.md diff --git a/docs/en/Server/Administration/Menu/index.md b/docs/en/Server/Administration/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..45485ef15b1407d954565b7c63d8509c4e52bf0a --- /dev/null +++ b/docs/en/Server/Administration/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [Administrator Guide]({{< relref "./Administrator/Menu/index.md" >}}) +- [sysMaster User Guide]({{< relref "./sysMaster/Menu/index.md" >}}) +- [Compatibility Commands]({{< relref "./CompaCommand/Menu/index.md" >}}) diff --git a/docs/en/Server/Administration/sysMaster/Menu/index.md b/docs/en/Server/Administration/sysMaster/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..322547cfad488f1f7231c17049bae02f1305fe3a --- /dev/null +++ b/docs/en/Server/Administration/sysMaster/Menu/index.md @@ -0,0 +1,10 @@ +--- +headless: true +--- +- [sysMaster User Guide]({{< relref "./overview.md" >}}) + - [Service Management]({{< relref "./service_management.md" >}}) + - [Installation and Deployment]({{< relref "./sysmaster_install_deploy.md" >}}) + - [Usage Instructions]({{< relref "./sysmaster_usage.md" >}}) + - [Device Management]({{< relref "./device_management.md" >}}) + - [Installation and Deployment]({{< relref "./devmaster_install_deploy.md" >}}) + - [Usage Instructions]({{< relref "./devmaster_usage.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/sysMaster/device_management.md b/docs/en/Server/Administration/sysMaster/device_management.md similarity index 100% rename from docs/en/docs/sysMaster/device_management.md rename to docs/en/Server/Administration/sysMaster/device_management.md diff --git a/docs/en/docs/sysMaster/devmaster_install_deploy.md 
b/docs/en/Server/Administration/sysMaster/devmaster_install_deploy.md similarity index 87% rename from docs/en/docs/sysMaster/devmaster_install_deploy.md rename to docs/en/Server/Administration/sysMaster/devmaster_install_deploy.md index 58ed09437161f7268276a70b11264d43067fd8c2..e12ac33f71bd262de3d8df12c51fbeebecc9b682 100644 --- a/docs/en/docs/sysMaster/devmaster_install_deploy.md +++ b/docs/en/Server/Administration/sysMaster/devmaster_install_deploy.md @@ -43,18 +43,17 @@ Currently, devmaster can be used in the VM environment. This section describes t > > devmaster must be started with the root privilege and cannot be running with udev at the same time. Before starting devmaster, stop the udev service. > - > > If udev is started by sysMaster, run the following command: - > - > ```shell - > # sctl stop udevd.service udevd-control.socket udevd-kernel.socket - > ``` - > + + ```shell + # sctl stop udevd.service udevd-control.socket udevd-kernel.socket + ``` + > If udev is started by systemd, run the following command: - > - > ```shell - > # systemctl stop systemd-udevd.service systemd-udevd systemd-udevd-kernel.socket systemd-udevd-control.socket - > ``` + + ```shell + # systemctl stop systemd-udevd.service systemd-udevd systemd-udevd-kernel.socket systemd-udevd-control.socket + ``` 4. 
Run the following command to use the `devctl` tool to trigger a device event: diff --git a/docs/en/docs/sysMaster/devmaster_usage.md b/docs/en/Server/Administration/sysMaster/devmaster_usage.md similarity index 100% rename from docs/en/docs/sysMaster/devmaster_usage.md rename to docs/en/Server/Administration/sysMaster/devmaster_usage.md diff --git a/docs/en/docs/sysMaster/figures/devmaster_architecture.png b/docs/en/Server/Administration/sysMaster/figures/devmaster_architecture.png similarity index 100% rename from docs/en/docs/sysMaster/figures/devmaster_architecture.png rename to docs/en/Server/Administration/sysMaster/figures/devmaster_architecture.png diff --git a/docs/en/docs/sysMaster/figures/sysMaster.png b/docs/en/Server/Administration/sysMaster/figures/sysMaster.png similarity index 100% rename from docs/en/docs/sysMaster/figures/sysMaster.png rename to docs/en/Server/Administration/sysMaster/figures/sysMaster.png diff --git a/docs/en/docs/sysMaster/overview.md b/docs/en/Server/Administration/sysMaster/overview.md similarity index 100% rename from docs/en/docs/sysMaster/overview.md rename to docs/en/Server/Administration/sysMaster/overview.md diff --git a/docs/en/docs/Quickstart/public_sys-resources/icon-note.gif b/docs/en/Server/Administration/sysMaster/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/Quickstart/public_sys-resources/icon-note.gif rename to docs/en/Server/Administration/sysMaster/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/sysMaster/service_management.md b/docs/en/Server/Administration/sysMaster/service_management.md similarity index 100% rename from docs/en/docs/sysMaster/service_management.md rename to docs/en/Server/Administration/sysMaster/service_management.md diff --git a/docs/en/Server/Administration/sysMaster/sysmaster_install_deploy.md b/docs/en/Server/Administration/sysMaster/sysmaster_install_deploy.md new file mode 100644 index 
0000000000000000000000000000000000000000..91a82913e8715b0188791abefe261d30dee1bc98 --- /dev/null +++ b/docs/en/Server/Administration/sysMaster/sysmaster_install_deploy.md @@ -0,0 +1,98 @@ +# Installation and Deployment + +sysmaster can be used in containers and VMs. This document uses the AArch64 architecture as an example to describe how to install and deploy sysmaster in both scenarios. + +## Software + +* OS: openEuler 23.09 + +## Hardware + +* x86_64 or AArch64 architecture + +## Installation and Deployment in Containers + +1. Install Docker. + + ```bash + yum install -y docker + systemctl restart docker + ``` + +2. Load the base container image. + + Download the container image. + + ```bash + wget https://repo.openeuler.org/openEuler-23.09/docker_img/aarch64/openEuler-docker.aarch64.tar.xz + xz -d openEuler-docker.aarch64.tar.xz + ``` + + Load the container image. + + ```bash + docker load --input openEuler-docker.aarch64.tar + ``` + +3. Build the container. + + Create a Dockerfile. + + ```bash + cat << EOF > Dockerfile + FROM openeuler-23.09 + RUN yum install -y sysmaster + CMD ["/usr/lib/sysmaster/init"] + EOF + ``` + + Build the container. + + ```bash + docker build -t openeuler-23.09:latest . + ``` + +4. Start and enter the container. + + Start the container. + + ```bash + docker run -itd --privileged openeuler-23.09:latest + ``` + + Obtain the container ID. + + ```bash + docker ps + ``` + + Use the container ID to enter the container. + + ```bash + docker exec -it <container_id> /bin/bash + ``` + +## Installation and Deployment in VMs + +1. Create an initramfs image. + To avoid the impact of systemd in the initrd phase, you need to create an initramfs image with systemd removed and use this image to enter the initrd procedure. Run the following command: + + ```bash + dracut -f --omit "systemd systemd-initrd systemd-networkd dracut-systemd" /boot/initrd_withoutsd.img + ``` + +2. Add a boot item.
+ Add a boot item to **grub.cfg**, whose path is **/boot/efi/EFI/openEuler/grub.cfg** in the AArch64 architecture and **/boot/grub2/grub.cfg** in the x86_64 architecture. Back up the original configurations and modify the configurations as follows: + + * **menuentry**: Change **openEuler (6.4.0-5.0.0.13.oe23.09.aarch64) 23.09** to **openEuler 23.09 withoutsd**. + * **linux**: Change **root=/dev/mapper/openeuler-root ro** to **root=/dev/mapper/openeuler-root rw**. + * **linux**: If Plymouth is installed, add **plymouth.enable=0** to disable it. + * **linux**: Add **init=/usr/lib/sysmaster/init**. + * **initrd**: Set to **/initrd_withoutsd.img**. +3. Install sysmaster. + + ```bash + yum install sysmaster + ``` + +4. If the **openEuler 23.09 withoutsd** boot item is displayed after the restart, the configuration is successful. Select **openEuler 23.09 withoutsd** to log in to the VM. diff --git a/docs/en/docs/sysMaster/sysmaster_usage.md b/docs/en/Server/Administration/sysMaster/sysmaster_usage.md similarity index 100% rename from docs/en/docs/sysMaster/sysmaster_usage.md rename to docs/en/Server/Administration/sysMaster/sysmaster_usage.md diff --git a/docs/en/Server/Development/ApplicationDev/Menu/index.md b/docs/en/Server/Development/ApplicationDev/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..8ded02e1cb747bcf8abd6657334f8b29eb4caead --- /dev/null +++ b/docs/en/Server/Development/ApplicationDev/Menu/index.md @@ -0,0 +1,11 @@ +--- +headless: true +--- +- [Application Development Guide]({{< relref "./application-development.md" >}}) + - [Preparing the Development Environment]({{< relref "./preparations-for-development-environment.md" >}}) + - [Using GCC for Compilation]({{< relref "./using-gcc-for-compilation.md" >}}) + - [Using LLVM/Clang for Compilation]({{< relref "./using-clang-for-compilation.md" >}}) + - [Using Make for Compilation]({{< relref "./using-make-for-compilation.md" >}}) + - [Using JDK for Compilation]({{< relref 
"./using-jdk-for-compilation.md" >}}) + - [Building an RPM Package]({{< relref "./building-an-rpm-package.md" >}}) + - [Common Issues and Solutions]({{< relref "./common-issues-and-solutions.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/ApplicationDev/application-development.md b/docs/en/Server/Development/ApplicationDev/application-development.md similarity index 100% rename from docs/en/docs/ApplicationDev/application-development.md rename to docs/en/Server/Development/ApplicationDev/application-development.md diff --git a/docs/en/docs/ApplicationDev/building-an-rpm-package.md b/docs/en/Server/Development/ApplicationDev/building-an-rpm-package.md similarity index 94% rename from docs/en/docs/ApplicationDev/building-an-rpm-package.md rename to docs/en/Server/Development/ApplicationDev/building-an-rpm-package.md index adff77c5db14f3d8e145525e3c278c41fd3e5ff4..506f801049711a452caef95011527929985ebc01 100644 --- a/docs/en/docs/ApplicationDev/building-an-rpm-package.md +++ b/docs/en/Server/Development/ApplicationDev/building-an-rpm-package.md @@ -68,6 +68,7 @@ The format of the **rpmbuild** command is rpmbuild \[_option_...\] [Table 1](#table1342946175212) describes the common rpmbuild packaging options. **Table 1** rpmbuild Packaging Options + | Option | Description | |----------|--------------| |-bp _specfile_ |Starts build from the **%prep** phase of the _specfile_ (decompress the source code package and install the patch). | @@ -99,7 +100,7 @@ The format of the **rpmbuild** command is rpmbuild \[_option_...\] |--root _DIRECTORY_ |Sets _DIRECTORY_ to the highest level. The default value is **/**, indicating the highest level. | |--recompile _sourcefile_ |Installs the specified source code package _sourcefile_, that is, start preparation, compilation, and installation of the source code package. | |--rebuild _sourcefile_ |Builds a new binary package based on `--recompile`. 
When the build is complete, the build directory, source code, and .spec file are deleted. The deletion effect is the same as that of --clean. | -|-?,--help |Displays detailed help information. | +|-?, --help |Displays detailed help information. | |--version |Displays detailed version information. | ## Building an RPM Package Locally @@ -140,7 +141,7 @@ Run the following command to create the .spec file in the **~/rpmbuild/SPECS** d ```shell cd ~/rpmbuild/SPECS -vi hello.spec +vi hello.spec ``` Write the corresponding content to the file and save the file. The following is an example of the file content. Modify the corresponding fields based on the actual requirements. @@ -213,7 +214,7 @@ fi Run the following command in the directory where the .spec file is located to build the source code, binary files, and software packages that contain debugging information: ```shell -rpmbuild -ba hello.spec +rpmbuild -ba hello.spec ``` Run the following command to view the execution result: @@ -251,10 +252,10 @@ This section describes how to build an RPM software package online on OBS. #### Building an Existing Software Package ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- If you use OBS for the first time, register an individual account on the OBS web page. ->- With this method, you must copy the modified code and commit it to the code directory before performing the following operations. The code directory is specified in the **\_service** file. +> - If you use OBS for the first time, register an individual account on the OBS web page. +> - With this method, you must copy the modified code and commit it to the code directory before performing the following operations. The code directory is specified in the **\_service** file. 
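For reference, a **\_service** file is a short XML description that tells OBS where to fetch the sources from. The sketch below uses the common `tar_scm` source service; the repository URL and revision are hypothetical placeholders, not values taken from this guide:

```xml
<services>
  <!-- Hypothetical example: fetch sources from a Git repository.
       The URL and revision below are placeholders for your own project. -->
  <service name="tar_scm">
    <param name="scm">git</param>
    <param name="url">https://example.com/your-package.git</param>
    <param name="revision">master</param>
  </service>
</services>
```

When such a file is committed, OBS runs the listed services and replaces the package sources with the fetched result, as described in the note above.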
To modify the source code of the existing software and build the modified source file into an RPM software package on the OBS web client, perform the following steps: @@ -283,8 +284,8 @@ To modify the source code of the existing software and build the modified source ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Click **Save** to save the **\_service** file. OBS downloads the source code from the specified URL to the software directory of the corresponding OBS project based on the **\_service** file description and replaces the original file. For example, the **kernel** directory of the **openEuler:Mainline** project in the preceding example. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > Click **Save** to save the **\_service** file. OBS downloads the source code from the specified URL to the software directory of the corresponding OBS project based on the **\_service** file description and replaces the original file. For example, the **kernel** directory of the **openEuler:Mainline** project in the preceding example. 7. After the files are copied and replaced, OBS automatically starts to build the RPM software package. Wait until the build is complete and view the build status in the status bar on the right. - **succeeded**: The build is successful. You can click **succeeded** to view the build logs, as shown in [Figure 2](#fig10319114217337). @@ -310,8 +311,8 @@ To add a new software package on the OBS web page, perform the following steps: **Figure 3** Deleting a software package from a subproject ![](./figures/deleting-a-software-package-from-a-subproject.png "deleting-a-software-package-from-a-subproject") - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >The purpose of creating a project by using existing software is to inherit the dependency such as the environment. Therefore, you need to delete these files. 
+ > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The purpose of creating a project by using existing software is to inherit the dependency such as the environment. Therefore, you need to delete these files. 6. Click **Create Package**. On the page that is displayed, enter the software package name, title, and description, and click **Create** to create a software package, as shown in [Figure 4](#fig6762111693811) and [Figure 5](#fig18351153518389). @@ -369,8 +370,8 @@ You have obtained the **root** permission, and have configured a repo source f dnf install osc build ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >The compilation of RPM software packages depends on build. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The compilation of RPM software packages depends on build. 2. Configure the OSC. 1. Run the following command to open the **\~/.oscrc** file: @@ -472,8 +473,8 @@ You have obtained the **root** permission, and have configured a repo source f osc buildlog standard_aarch64 aarch64 ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >You can also open the created project on the web client to view the build logs. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > You can also open the created project on the web client to view the build logs. #### Adding a Software Package @@ -498,7 +499,7 @@ To use the OSC tool of OBS to add a new software package, perform the following 3. Create a software package in your own project. For example, to add the **my-first-obs-package** software package, run the following command: ```shell - mkdir my-first-obs-package + mkdir my-first-obs-package cd my-first-obs-package ``` @@ -526,8 +527,8 @@ To use the OSC tool of OBS to add a new software package, perform the following osc buildlog standard_aarch64 aarch64 ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >You can also open the created project on the web client to view the build logs. 
+ > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > You can also open the created project on the web client to view the build logs. #### Obtaining the Software Package @@ -544,5 +545,5 @@ The parameters in the command are described as follows. You can modify the param - _standard\_aarch64_: repository name. - _aarch64_: repository architecture name. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->You can also obtain the software package built using OSC from the web page. For details, see [Obtaining the Software Package](#obtaining-the-software-package). +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> You can also obtain the software package built using OSC from the web page. For details, see [Obtaining the Software Package](#obtaining-the-software-package). diff --git a/docs/en/docs/ApplicationDev/FAQ.md b/docs/en/Server/Development/ApplicationDev/common-issues-and-solutions.md similarity index 40% rename from docs/en/docs/ApplicationDev/FAQ.md rename to docs/en/Server/Development/ApplicationDev/common-issues-and-solutions.md index f10dab063696adb8e4e563e75cae8d906003bc6d..9a409dd841557496c2e9132954a2414af79b02c8 100644 --- a/docs/en/docs/ApplicationDev/FAQ.md +++ b/docs/en/Server/Development/ApplicationDev/common-issues-and-solutions.md @@ -1,10 +1,6 @@ -# FAQ +# Common Issues and Solutions -- [FAQ](#faq) - - [Self-compilation of Some Applications Depending on the **java-devel** Package Fails](#self-compilation-of-some-applications-depending-on-the-java-devel-package-fails) - - -## Self-compilation of Some Applications Depending on the **java-devel** Package Fails +## Issue 1: Self-compilation of Some Applications Depending on the java-devel Package Fails ### Symptom @@ -12,13 +8,12 @@ The self-compilation of some applications that depend on java-devel fails when t ### Cause Analysis -To provide OpenJDK features that are updated and compatible with Java applications, the openEuler provides OpenJDK of multiple versions, such as OpenJDK 
1.8.0 and OpenJDK 11. The compilation of some applications depends on the **java-devel** package. When the **java-devel** package is installed, the system installs java-11-openjdk of a later version by default. As a result, the compilation of these applications fails. +To provide OpenJDK features that are updated and compatible with Java applications, the openEuler provides OpenJDK of multiple versions, such as OpenJDK 1.8.0 and OpenJDK 11. The compilation of some applications depends on the java-devel package. When the java-devel package is installed, the system installs java-11-openjdk of a later version by default. As a result, the compilation of these applications fails. ### Solution -You need to run the following command to install java-1.8.0-openjdk and then run the **rpmbuild** command to perform self-compilation: +You need to run the following command to install java-1.8.0-openjdk and then run the `rpmbuild` command to perform self-compilation: ```shell -# yum install java-1.8.0-openjdk - -``` \ No newline at end of file +# yum install java-1.8.0-openjdk +``` diff --git a/docs/en/docs/ApplicationDev/figures/add-file-page.png b/docs/en/Server/Development/ApplicationDev/figures/add-file-page.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/add-file-page.png rename to docs/en/Server/Development/ApplicationDev/figures/add-file-page.png diff --git a/docs/en/docs/ApplicationDev/figures/branch-confirmation-page.png b/docs/en/Server/Development/ApplicationDev/figures/branch-confirmation-page.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/branch-confirmation-page.png rename to docs/en/Server/Development/ApplicationDev/figures/branch-confirmation-page.png diff --git a/docs/en/docs/ApplicationDev/figures/create-package-page.png b/docs/en/Server/Development/ApplicationDev/figures/create-package-page.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/create-package-page.png rename to 
docs/en/Server/Development/ApplicationDev/figures/create-package-page.png diff --git a/docs/en/docs/ApplicationDev/figures/creating-a-software-package.png b/docs/en/Server/Development/ApplicationDev/figures/creating-a-software-package.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/creating-a-software-package.png rename to docs/en/Server/Development/ApplicationDev/figures/creating-a-software-package.png diff --git a/docs/en/docs/ApplicationDev/figures/deleting-a-software-package-from-a-subproject.png b/docs/en/Server/Development/ApplicationDev/figures/deleting-a-software-package-from-a-subproject.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/deleting-a-software-package-from-a-subproject.png rename to docs/en/Server/Development/ApplicationDev/figures/deleting-a-software-package-from-a-subproject.png diff --git a/docs/en/docs/ApplicationDev/figures/en-us_image_0229243671.png b/docs/en/Server/Development/ApplicationDev/figures/en-us_image_0229243671.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/en-us_image_0229243671.png rename to docs/en/Server/Development/ApplicationDev/figures/en-us_image_0229243671.png diff --git a/docs/en/docs/ApplicationDev/figures/en-us_image_0229243702.png b/docs/en/Server/Development/ApplicationDev/figures/en-us_image_0229243702.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/en-us_image_0229243702.png rename to docs/en/Server/Development/ApplicationDev/figures/en-us_image_0229243702.png diff --git a/docs/en/docs/ApplicationDev/figures/en-us_image_0229243704.png b/docs/en/Server/Development/ApplicationDev/figures/en-us_image_0229243704.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/en-us_image_0229243704.png rename to docs/en/Server/Development/ApplicationDev/figures/en-us_image_0229243704.png diff --git a/docs/en/docs/ApplicationDev/figures/en-us_image_0229243712.png 
b/docs/en/Server/Development/ApplicationDev/figures/en-us_image_0229243712.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/en-us_image_0229243712.png rename to docs/en/Server/Development/ApplicationDev/figures/en-us_image_0229243712.png diff --git a/docs/en/docs/ApplicationDev/figures/repositories-page.png b/docs/en/Server/Development/ApplicationDev/figures/repositories-page.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/repositories-page.png rename to docs/en/Server/Development/ApplicationDev/figures/repositories-page.png diff --git a/docs/en/docs/ApplicationDev/figures/rpm-software-package-download-page.png b/docs/en/Server/Development/ApplicationDev/figures/rpm-software-package-download-page.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/rpm-software-package-download-page.png rename to docs/en/Server/Development/ApplicationDev/figures/rpm-software-package-download-page.png diff --git a/docs/en/docs/ApplicationDev/figures/succeeded-page.png b/docs/en/Server/Development/ApplicationDev/figures/succeeded-page.png similarity index 100% rename from docs/en/docs/ApplicationDev/figures/succeeded-page.png rename to docs/en/Server/Development/ApplicationDev/figures/succeeded-page.png diff --git a/docs/en/docs/ApplicationDev/preparations-for-development-environment.md b/docs/en/Server/Development/ApplicationDev/preparations-for-development-environment.md similarity index 32% rename from docs/en/docs/ApplicationDev/preparations-for-development-environment.md rename to docs/en/Server/Development/ApplicationDev/preparations-for-development-environment.md index 1bc30c69702d52652cac66eecb259ee4490c26de..4f610f9fce891aebdccba8448b4af15f8b0ff790 100644 --- a/docs/en/docs/ApplicationDev/preparations-for-development-environment.md +++ b/docs/en/Server/Development/ApplicationDev/preparations-for-development-environment.md @@ -3,94 +3,30 @@ ## Environment Requirements - If physical machines (PMs) are 
used, the minimum hardware requirements of the development environment are described in [Table 1](#table154419352610).
+
+  **Table 1** Minimum hardware specifications
+
+  | Component | Minimum Hardware Specification | Description |
+  | ------------ | -------------------------------------------------------------- | ------------------------------------------------------------- |
+  | Architecture | - AArch64<br>- x86_64 | - 64-bit Arm architecture<br>- 64-bit Intel x86 architecture |
+  | CPU | - Huawei Kunpeng 920 series<br>- Intel Xeon processor | - |
+  | Memory | ≥ 4 GB (8 GB or higher recommended for better user experience) | - |
+  | Hard disk | ≥ 120 GB (for better user experience) | IDE, SATA, SAS interfaces are supported. |

- If virtual machines (VMs) are used, the minimum virtualization space required for the development environment is described in [Table 2](#table780410493819).
+
+  **Table 2** Minimum virtualization space
+
+  | Component | Minimum Virtualization Space | Description |
+  | ------------ | ----------------------------------------------------------------- | ----------- |
+  | Architecture | - AArch64<br>- x86_64 | - |
+  | CPU | Two CPUs | - |
+  | Memory | ≥ 4 GB (8 GB or higher recommended for better user experience) | - |
+  | Hard disk | ≥ 32 GB (120 GB or higher recommended for better user experience) | - |

### OS Requirements

@@ -105,152 +41,157 @@ Configure an online Yum source using the online openEuler repo source. Alternati

### Configuring an Online Yum Source by Obtaining the Online openEuler Repo Source

> ![](./public_sys-resources/icon-note.gif) **NOTE:**
-> openEuler provides multiple repo sources for users online. For details about the repo sources, see [Installing the OS](../Releasenotes/installing-the-os.md). This section uses the OS repo source file of the AArch64 architecture as an example.
+> openEuler provides multiple repo sources for users online. For details about the repo sources, see [OS Installation](../../Releasenotes/Releasenotes/os-installation.md). This section uses the OS repo source file of the AArch64 architecture as an example.

1. Go to the yum source directory and check the .repo configuration file.

-    ```shell
-    $ cd /etc/yum.repos.d
-    $ ls
-    openEuler-xxx.repo
-    ```
+   ```shell
+   $ cd /etc/yum.repos.d
+   $ ls
+   openEuler-xxx.repo
+   ```

2. Edit the **openEuler-xxx.repo** file as the **root** user. Configure the online openEuler repo source as the yum source.

-    ```shell
-    vi openEuler-xxx.repo
-    ```
-
-    Edit the **openEuler-xxx.repo** file as follows:
-
-    ```text
-    [osrepo]
-    name=osrepo
-    baseurl=http://repo.openeuler.org/openEuler-{version}/OS/{arch}/
-    enabled=1
-    gpgcheck=1
-    gpgkey=http://repo.openeuler.org/openEuler-{version}/OS/{arch}/RPM-GPG-KEY-openEuler
-    ```
-
-    > ![](./public_sys-resources/icon-note.gif) **NOTE:**
-    >
-    > - **repoid** indicates the ID of the software repository. Repoids in all .repo configuration files must be unique. In the example, **repoid** is set to **base**.
-    > - **name** indicates the string that the software repository describes.
-    > - **baseurl** indicates the address of the software repository.
-    > - **enabled** indicates whether to enable the software source repository. The value can be **1** or **0**. The default value is **1**, indicating that the software source repository is enabled.
-    > - **gpgcheck** indicates whether to enable the GNU privacy guard (GPG) to check the validity and security of sources of RPM packages. **1** indicates GPG check is enabled. **0** indicates the GPG check is disabled. If this option is not specified, the GPG check is enabled by default.
-    > - **gpgkey** indicates the public key used to verify the signature.
-
+   ```shell
+   vi openEuler-xxx.repo
+   ```
+
+   Edit the **openEuler-xxx.repo** file as follows:
+
+   ```text
+   [osrepo]
+   name=osrepo
+   baseurl=http://repo.openeuler.org/openEuler-{version}/OS/{arch}/
+   enabled=1
+   gpgcheck=1
+   gpgkey=http://repo.openeuler.org/openEuler-{version}/OS/{arch}/RPM-GPG-KEY-openEuler
+   ```
+
+   > ![](./public_sys-resources/icon-note.gif) **NOTE:**
+   >
+   > - **repoid** indicates the ID of the software repository. Repoids in all .repo configuration files must be unique. In the example, **repoid** is set to **osrepo**.
+   > - **name** indicates the string that the software repository describes.
+   > - **baseurl** indicates the address of the software repository.
+   > - **enabled** indicates whether to enable the software source repository. The value can be **1** or **0**. The default value is **1**, indicating that the software source repository is enabled.
+   > - **gpgcheck** indicates whether to enable the GNU privacy guard (GPG) to check the validity and security of sources of RPM packages. **1** indicates GPG check is enabled. **0** indicates the GPG check is disabled. If this option is not specified, the GPG check is enabled by default.
+   > - **gpgkey** indicates the public key used to verify the signature.
+
### Configuring a Local Yum Source by Mounting an ISO File

-> ![](./public_sys-resources/icon-note.gif) ********NOTE:********
-> openEuler provides multiple ISO release packages.
For details about each ISO release package, see [OS Installation](../Releasenotes/installing-the-os.md). This section does not specify the version and architecture of related files. Choose them based on the actual requirements. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> openEuler provides multiple ISO release packages. For details about each ISO release package, see [OS Installation](../../Releasenotes/Releasenotes/os-installation.md). This section does not specify the version and architecture of related files. Choose them based on the actual requirements. 1. Download the ISO release package. - - Download an ISO image using a cross-platform file transfer tool. - 1. Visit the [openEuler community](https://www.openeuler.org/en/). - 2. Choose **Downloads** > **Community Editions**. - 3. Locate the target version. Then, click **Download**. The download list is displayed. - 4. The download list includes the following architectures: - - **x86\_64**: ISO of the x86\_64 architecture. - - **AArch64**: ISO of the AArch64 architecture. - - **ARM32**: ISO for embedded devices. - - 5. Click **AArch64** and **Server**. - 6. Choose the required ISO type and click **Download** to download the openEuler release package to the local host. - 7. Click **SHA256** to copy the checksum. Save the checksum as a local verification file. - 8. Log in to the openEuler OS and create a directory for storing the release package and verification file, for example, **~/iso**. - - ```shell - mkdir ~/iso - ``` - - 9. Use a cross-platform file transfer tool (such as WinSCP) to upload the local openEuler release package and verification file to the openEuler OS. - - - Run the **wget** command to download the ISO image. - 1. Visit the [openEuler community](https://www.openeuler.org/en/). - 2. Choose **Downloads** > **Community Editions**. - 3. Locate the target version. Then, click **Download**. The download list is displayed. - 4. 
The download list includes the following architectures: - - **x86\_64**: ISO of the x86\_64 architecture. - - **AArch64**: ISO of the AArch64 architecture. - - **ARM32**: ISO for embedded devices. - - 5. Click **AArch64**. - 6. Click **Server**. - 7. Choose the required ISO type, right-click **Download**, and copy the link address. - 8. Right-click **SHA256** and copy the link address. - 9. Log in to the openEuler OS, create a directory for storing the release package and verification file, for example, **~/iso**. Then switch to the directory. - - ```shell - mkdir ~/iso - cd ~/iso - ``` - - 10. Run the **wget** command to remotely download the release package and verification file. In the command, replace **ipaddriso** with the address copied in steps 7. - - ```shell - wget ipaddriso - ``` + + Download an ISO image using a cross-platform file transfer tool. + + 1. Visit the [openEuler community](https://www.openeuler.org/en/). + 2. Choose **Downloads** > **Community Editions**. + 3. Locate the target version. Then, click **Download**. The download list is displayed. + 4. The download list includes the following architectures: + + - **x86\_64**: ISO of the x86\_64 architecture. + - **AArch64**: ISO of the AArch64 architecture. + - **ARM32**: ISO for embedded devices. + + 5. Click **AArch64** and **Server**. + 6. Choose the required ISO type and click **Download** to download the openEuler release package to the local host. + 7. Click **SHA256** to copy the checksum. Save the checksum as a local verification file. + 8. Log in to the openEuler OS and create a directory for storing the release package and verification file, for example, **~/iso**. + + ```shell + mkdir ~/iso + ``` + + 9. Use a cross-platform file transfer tool (such as WinSCP) to upload the local openEuler release package and verification file to the openEuler OS. + + Run the **wget** command to download the ISO image. + + 1. Visit the [openEuler community](https://www.openeuler.org/en/). + 2. 
Choose **Downloads** > **Community Editions**. + 3. Locate the target version. Then, click **Download**. The download list is displayed. + 4. The download list includes the following architectures: + + - **x86\_64**: ISO of the x86\_64 architecture. + - **AArch64**: ISO of the AArch64 architecture. + - **ARM32**: ISO for embedded devices. + + 5. Click **AArch64**. + 6. Click **Server**. + 7. Choose the required ISO type, right-click **Download**, and copy the link address. + 8. Right-click **SHA256** and copy the link address. + 9. Log in to the openEuler OS, create a directory for storing the release package and verification file, for example, **~/iso**. Then switch to the directory. + + ```shell + mkdir ~/iso + cd ~/iso + ``` + + 10. Run the **wget** command to remotely download the release package and verification file. In the command, replace **ipaddriso** with the address copied in steps 7. + + ```shell + wget ipaddriso + ``` 2. Release Package Integrity Check - 1. Calculate the SHA256 verification value of the openEuler release package. + 1. Calculate the SHA256 verification value of the openEuler release package. - ```shell - sha256sum openEuler-xxx-dvd.iso - ``` + ```shell + sha256sum openEuler-xxx-dvd.iso + ``` - After the command is run, the verification value is displayed. + After the command is run, the verification value is displayed. - 2. Check whether the calculated value is the same as that of the saved SHA256 value. + 2. Check whether the calculated value is the same as that of the saved SHA256 value. - If the verification values are consistent, the .iso file is not damaged. If they are inconsistent, the file is damaged and you need to obtain the file again. + If the verification values are consistent, the .iso file is not damaged. If they are inconsistent, the file is damaged and you need to obtain the file again. 3. Mount the ISO file and configure it as a repo source. 
- ```shell - mount /home/iso/openEuler-xxx-dvd.iso /mnt/ - ``` + ```shell + mount /home/iso/openEuler-xxx-dvd.iso /mnt/ + ``` - ```text - . - │── boot.catalog - │── docs - │── EFI - │── images - │── Packages - │── repodata - │── TRANS.TBL - └── RPM-GPG-KEY-openEuler - ``` + ```text + . + │── boot.catalog + │── docs + │── EFI + │── images + │── Packages + │── repodata + │── TRANS.TBL + └── RPM-GPG-KEY-openEuler + ``` - In the preceding directory, **Packages** indicates the directory where the RPM package is stored, **repodata** indicates the directory where the repo source metadata is stored, and **RPM-GPG-KEY-openEuler** indicates the public key for signing openEuler. + In the preceding directory, **Packages** indicates the directory where the RPM package is stored, **repodata** indicates the directory where the repo source metadata is stored, and **RPM-GPG-KEY-openEuler** indicates the public key for signing openEuler. 4. Go to the yum source directory and check the .repo configuration file in the directory. - ```shell - $ cd /etc/yum.repos.d - $ ls - openEuler-xxx.repo - ``` + ```shell + $ cd /etc/yum.repos.d + $ ls + openEuler-xxx.repo + ``` 5. Edit the **openEuler-xxx.repo** file as the **root** user. Configure the local openEuler repo source created in step [3](#li6236932222) as the yum source. - ```shell - vi openEuler-xxx.repo - ``` + ```shell + vi openEuler-xxx.repo + ``` - Edit the **openEuler-xxx.repo** file as follows: + Edit the **openEuler-xxx.repo** file as follows: - ```text - [localosrepo] - name=localosrepo - baseurl=file:///mnt - enabled=1 - gpgcheck=1 - gpgkey=file:///mnt/RPM-GPG-KEY-openEuler - ``` + ```text + [localosrepo] + name=localosrepo + baseurl=file:///mnt + enabled=1 + gpgcheck=1 + gpgkey=file:///mnt/RPM-GPG-KEY-openEuler + ``` ## Installing the Software Packages @@ -260,7 +201,7 @@ The software required varies in different development environments. However, the 1. 
Run the `dnf list installed | grep jdk` command to check whether the JDK software is installed. - Check the command output. If the command output contains "jdk", the JDK has been installed. If no such information is displayed, the software is not installed. + Check the command output. If the command output contains "jdk", the JDK has been installed. If no such information is displayed, the software is not installed. 2. Run `dnf clean all` to clear the cache. @@ -268,19 +209,19 @@ The software required varies in different development environments. However, the 4. Run `dnf search jdk | grep jdk` to query the JDK software package that can be installed. - View the command output and install the **java-{version}-openjdk-devel.aarch64** software package. + View the command output and install the **java-{version}-openjdk-devel.aarch64** software package. 5. Install the JDK software package as the **root** user. Run `dnf install java-{version}-openjdk-devel.aarch64`. 6. Query information about the JDK software by running `java -version`. - Check the command output. If the command output contains "openjdk version", the JDK has been correctly installed. + Check the command output. If the command output contains "openjdk version", the JDK has been correctly installed. ### Installing the rpm-build Software Package 1. Run the `dnf list installed | grep rpm-build` command to check whether the rpm-build software is installed. - Check the command output. If the command output contains "rpm-build", the software has been installed. If no such information is displayed, the software is not installed. + Check the command output. If the command output contains "rpm-build", the software has been installed. If no such information is displayed, the software is not installed. 2. Run `dnf clean all` to clear the cache. @@ -363,17 +304,17 @@ Edit the configuration file in the **.ssh** directory and save the file. 1. Run the **vim** command to open the configuration file. 
- ```shell - vim config - ``` + ```shell + vim config + ``` 2. Add the following content to the end of the file and save the file: - ```text - Host * - ForwardAgent yes - ForwardX11 yes - ``` + ```text + Host * + ForwardAgent yes + ForwardX11 yes + ``` ### Downloading and Running IntelliJ IDEA diff --git a/docs/en/docs/Releasenotes/public_sys-resources/icon-note.gif b/docs/en/Server/Development/ApplicationDev/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/Releasenotes/public_sys-resources/icon-note.gif rename to docs/en/Server/Development/ApplicationDev/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/ApplicationDev/using-clang-for-compilation.md b/docs/en/Server/Development/ApplicationDev/using-clang-for-compilation.md similarity index 100% rename from docs/en/docs/ApplicationDev/using-clang-for-compilation.md rename to docs/en/Server/Development/ApplicationDev/using-clang-for-compilation.md diff --git a/docs/en/docs/ApplicationDev/using-gcc-for-compilation.md b/docs/en/Server/Development/ApplicationDev/using-gcc-for-compilation.md similarity index 99% rename from docs/en/docs/ApplicationDev/using-gcc-for-compilation.md rename to docs/en/Server/Development/ApplicationDev/using-gcc-for-compilation.md index 84997d895bba8099d2bc4a18020fbc4f2b37ed07..71a2fb09ef9069af422378ca637a41490f0ae0bc 100644 --- a/docs/en/docs/ApplicationDev/using-gcc-for-compilation.md +++ b/docs/en/Server/Development/ApplicationDev/using-gcc-for-compilation.md @@ -308,8 +308,8 @@ If you choose to search for a DLL, to ensure that the DLL can be linked when the export LD_LIBRARY_PATH=libraryDIR:$LD_LIBRARY_PATH ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >**LD\_LIBRARY\_PATH** is an environment variable of the DLL. If the DLL is not in the default directories \(**/lib** and **/usr/lib**\), you need to specify the environment variable **LD\_LIBRARY\_PATH**. 
+ > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > **LD\_LIBRARY\_PATH** is an environment variable of the DLL. If the DLL is not in the default directories \(**/lib** and **/usr/lib**\), you need to specify the environment variable **LD\_LIBRARY\_PATH**. - Add the DLL path **libraryDIR** to **/etc/ld.so.conf** and run **ldconfig**, or use the DLL path **libraryDIR** as a parameter to run **ldconfig**. diff --git a/docs/en/docs/ApplicationDev/using-jdk-for-compilation.md b/docs/en/Server/Development/ApplicationDev/using-jdk-for-compilation.md similarity index 93% rename from docs/en/docs/ApplicationDev/using-jdk-for-compilation.md rename to docs/en/Server/Development/ApplicationDev/using-jdk-for-compilation.md index 4946b917d12b6bf483f3507c1a327deaf3a04de0..b95faa6f06b55024f2cd64ac3fe835a0cd7dfc65 100644 --- a/docs/en/docs/ApplicationDev/using-jdk-for-compilation.md +++ b/docs/en/Server/Development/ApplicationDev/using-jdk-for-compilation.md @@ -1,4 +1,5 @@ # Using JDK for Compilation + - [Using JDK for Compilation](#using-jdk-for-compilation) @@ -22,8 +23,6 @@ A Java Development Kit \(JDK\) is a software package required for Java developme ## Basics - - ### File Type and Tool For any given input file, the file type determines which tool to use for processing. The common file types and tools are described in [Table 1](#table634145764320) and [Table 2](#table103504146433). @@ -86,8 +85,8 @@ For any given input file, the file type determines which tool to use for process To generate a program from Java source code files and run the program using Java, compilation and run are required. -1. Compilation: Use the Java compiler \(javac\) to compile Java source code files \(.java files\) into .class bytecode files. -2. Run: Execute the bytecode files on the Java virtual machine \(JVM\). +1. Compilation: Use the Java compiler \(javac\) to compile Java source code files \(.java files\) into .class bytecode files. +2. 
Run: Execute the bytecode files on the Java virtual machine \(JVM\). ### Common JDK Options @@ -362,7 +361,7 @@ The package declaration statement must be added to the beginning of the source p In Java, there are two methods to use the common classes in the package provided by Java or the classes in the custom package. -- Add the package name before the name of the class to be referenced. +- Add the package name before the name of the class to be referenced. For example, name.A obj=new name.A \(\) @@ -370,11 +369,11 @@ In Java, there are two methods to use the common classes in the package provided Example: Create a test object of the Test class in the example package. - ``` + ```java example.Test test = new example.Test(); ``` -- Use **import** at the beginning of the file to import the classes in the package. +- Use **import** at the beginning of the file to import the classes in the package. The format of the **import** statement is import pkg1\[.pkg2\[.pkg3...\]\].\(classname | \*\). @@ -382,38 +381,35 @@ In Java, there are two methods to use the common classes in the package provided Example: Import the **Test** class in the **example** package. - ``` + ```java import example.Test; ``` Example: Import the entire **example** package. - ``` + ```java import example.*; ``` - ## Examples - - ### Compiling a Java Program Without a Package -1. Run the **cd** command to go to the code directory. The **~/code** directory is used as an example. The command is as follows: +1. Run the **cd** command to go to the code directory. The **~/code** directory is used as an example. The command is as follows: - ``` - $ cd ~/code + ```shell + cd ~/code ``` -2. Compile the Hello World program and save it as **HelloWorld.java**. The following uses the Hello World program as an example. The command is as follows: +2. Compile the Hello World program and save it as **HelloWorld.java**. The following uses the Hello World program as an example. 
The command is as follows: - ``` - $ vi HelloWorld.java + ```shell + vi HelloWorld.java ``` Code example: - ``` + ```java public class HelloWorld { public static void main(String[] args) { System.out.println("Hello World"); @@ -421,43 +417,42 @@ In Java, there are two methods to use the common classes in the package provided } ``` -3. Run the following command to compile the code in the code directory: +3. Run the following command to compile the code in the code directory: - ``` - $ javac HelloWorld.java + ```shell + javac HelloWorld.java ``` If no error is reported, the execution is successful. -4. After the compilation is complete, the HelloWorld.class file is generated. You can run the **java** command to view the result. The following is an example: +4. After the compilation is complete, the HelloWorld.class file is generated. You can run the **java** command to view the result. The following is an example: - ``` + ```shell $ java HelloWorld Hello World ``` - ### Compiling a Java Program with a Package -1. Run the **cd** command to go to the code directory. The **~/code** directory is used as an example. Create the **~/code/Test/my/example**, **~/code/Hello/world/developers**, and **~/code/Hi/openos/openeuler** subdirectories in the directory to store source files. +1. Run the **cd** command to go to the code directory. The **~/code** directory is used as an example. Create the **~/code/Test/my/example**, **~/code/Hello/world/developers**, and **~/code/Hi/openos/openeuler** subdirectories in the directory to store source files. + ```shell + cd ~/code + mkdir -p Test/my/example + mkdir -p Hello/world/developers + mkdir -p Hi/openos/openeuler ``` - $ cd ~/code - $ mkdir -p Test/my/example - $ mkdir -p Hello/world/developers - $ mkdir -p Hi/openos/openeuler - ``` - -2. Run the **cd** command to go to the **~/code/Test/my/example** directory and create **Test.java**. - ``` - $ cd ~/code/Test/my/example - $ vi Test.java +2. 
Run the **cd** command to go to the **~/code/Test/my/example** directory and create **Test.java**. + + ```shell + cd ~/code/Test/my/example + vi Test.java ``` The following is an example of the Test.java code: - ``` + ```java package my.example; import world.developers.Hello; import openos.openeuler.Hi; @@ -471,16 +466,16 @@ In Java, there are two methods to use the common classes in the package provided } ``` -3. Run the **cd** command to go to the **~/code/Hello/world/developers** directory and create **Hello.java**. +3. Run the **cd** command to go to the **~/code/Hello/world/developers** directory and create **Hello.java**. - ``` - $ cd ~/code/Hello/world/developers - $ vi Hello.java + ```shell + cd ~/code/Hello/world/developers + vi Hello.java ``` The following is an example of the Hello.java code: - ``` + ```java package world.developers; public class Hello { public void hello(){ @@ -489,16 +484,16 @@ In Java, there are two methods to use the common classes in the package provided } ``` -4. Run the **cd** command to go to the **~/code/Hi/openos/openeuler** directory and create **Hi.java**. +4. Run the **cd** command to go to the **~/code/Hi/openos/openeuler** directory and create **Hi.java**. - ``` - $ cd ~/code/Hi/openos/openeuler - $ vi Hi.java + ```shell + cd ~/code/Hi/openos/openeuler + vi Hi.java ``` The following is an example of the Hi.java code: - ``` + ```java package openos.openeuler; public class Hi { public void hi(){ @@ -507,25 +502,25 @@ In Java, there are two methods to use the common classes in the package provided } ``` -5. Run the **cd** command to go to the **~/code** directory and use javac to compile the source file. +5. Run the **cd** command to go to the **~/code** directory and use javac to compile the source file. 
- ``` - $ cd ~/code - $ javac -classpath Hello:Hi Test/my/example/Test.java + ```shell + cd ~/code + javac -classpath Hello:Hi Test/my/example/Test.java ``` After the command is executed, the **Test.class**, **Hello.class**, and **Hi.class** files are generated in the **~/code/Test/my/example**, **~/code/Hello/world/developers**, and **~/code/Hi/openos/openeuler** directories. -6. Run the **cd** command to go to the **~/code** directory and run the **Test** program using Java. +6. Run the **cd** command to go to the **~/code** directory and run the **Test** program using Java. - ``` - $ cd ~/code - $ java -classpath Test:Hello:Hi my/example/Test + ```shell + cd ~/code + java -classpath Test:Hello:Hi my/example/Test ``` The command output is as follows: - ``` + ```text Hello, openEuler. Hi, the global developers. ``` diff --git a/docs/en/docs/ApplicationDev/using-make-for-compilation.md b/docs/en/Server/Development/ApplicationDev/using-make-for-compilation.md similarity index 100% rename from docs/en/docs/ApplicationDev/using-make-for-compilation.md rename to docs/en/Server/Development/ApplicationDev/using-make-for-compilation.md diff --git a/docs/en/Server/Development/GCC/Menu/index.md b/docs/en/Server/Development/GCC/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..a5836d6006acce5e194f03e1c71b78bc0a4f8f36 --- /dev/null +++ b/docs/en/Server/Development/GCC/Menu/index.md @@ -0,0 +1,9 @@ +--- +headless: true +--- +- [GCC User Guide]({{< relref "./overview.md" >}}) + - [Kernel FDO User Guide]({{< relref "./kernel_FDO_user_guide.md" >}}) + - [LTO User Guide]({{< relref "./lto-user-guide.md" >}}) + - [GCC Basic Performance Optimization User Guide]({{< relref "./gcc-basic-performance-optimization-user-guide.md" >}}) + - [Alternative GCC 14 User Guide]({{< relref "./gcc-14-secondary-version-compilation-toolchain-user-guide.md" >}}) + - [PIN User Guide]({{< relref "./pin-user-guide.md" >}}) \ No newline at end of file diff --git 
a/docs/en/Server/Development/GCC/gcc-14-secondary-version-compilation-toolchain-user-guide.md b/docs/en/Server/Development/GCC/gcc-14-secondary-version-compilation-toolchain-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..947bcbc6aed1624cdfb75fbdf15619b051fc230e --- /dev/null +++ b/docs/en/Server/Development/GCC/gcc-14-secondary-version-compilation-toolchain-user-guide.md @@ -0,0 +1,139 @@ +# Background + +## Overview + +OSs prioritize robustness by adopting time-tested, stable software versions rather than the latest releases. This strategy minimizes instability risks from version changes and maintains system stability throughout the LTS cycle. Consequently, openEuler has chosen GCC 12.3.1 as its baseline for the entire 24.03 LTS lifecycle. + +This decision, however, introduces challenges. Many hardware features rely on the foundational GCC toolchain, and using an older GCC version delays the activation of new features in OS releases. Additionally, some users prefer the latest compiler versions to unlock new capabilities, which often deliver performance gains over older versions. + +To enable diverse computational features and cater to varying user needs for hardware support, openEuler 24.09 introduces the openEuler GCC Toolset. This multi-version GCC compilation toolchain, tailored for openEuler, provides a secondary GCC version higher than the system primary version, offering users a more flexible and efficient compilation environment. With the openEuler GCC Toolset 14, users can seamlessly switch between GCC versions to leverage new hardware features and benefit from the latest GCC optimizations. + +## Solution Design + +### Compilation Toolchain Features + +The GCC compilation toolchain, developed and maintained by GNU, is a collection of open source tools designed to translate high-level language code into machine language. 
Beyond GCC itself, it includes a range of auxiliary tools and libraries, collectively forming a comprehensive compilation environment. + +1. GCC compiler (such as `gcc`, `g++`, and `gfortran`): + + - Role: The GCC compiler is the heart of the toolchain, handling preprocessing and compilation to transform source code into assembly or intermediate representation. For C++ code, `g++` acts as the C++ frontend, performing compilation and automatically linking C++ standard libraries. + +2. Binutils toolset: + + - Tools: including the linker (`ld`), assembler (`as`), object file viewer (`readelf`), symbol viewer (`nm`), object file format converter (`objcopy`), disassembler (`objdump`), and size viewer (`size`) + - Role: These tools support the compilation process by converting assembly to machine code, linking object files into executables, and inspecting file details. + +3. glibc library: + + - Role: The GNU C Library (glibc) is the standard C library for GNU and Linux systems, providing essential functions like `printf` and `malloc` required for compiling C programs. + +4. Other auxiliary tools: + + - Debugger (`gdb`): assists developers in debugging executables to identify and fix errors. + - Performance Analysis Tool (`gprof`): helps analyze and optimize program performance. + +### Toolchain Selection + +The software components in the toolchain significantly influence compilation results, with GCC, binutils, and glibc being the core elements. Since glibc, the C standard library, is tightly coupled with the OS kernel version, it remains unchanged. This toolchain includes only GCC and binutils to fulfill the needs of secondary version compilation. + +The latest GCC release, gcc-14.2.0, is selected for the openEuler GCC toolset. For binutils, while openEuler 24.09 defaults to version 2.41, the latest GCC 14 recommends binutils-2.42. Thus, binutils-2.42 is chosen for this toolchain. 
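Once the packages are in place (the installation commands appear under "Installation and Deployment" below), the selected pairing can be sanity-checked from the tools themselves. A minimal sketch, assuming the toolset binaries are on the current `PATH`, for example inside a session opened with the `scl enable` command described later in this guide:

```shell
# Report the versions of the active compiler, assembler, and linker.
# Inside a gcc-toolset-14 session these should show GCC 14.2.1 and
# binutils 2.42; outside it, the system defaults (e.g. GCC 12.3.1).
gcc --version | head -n 1
as --version | head -n 1
ld --version | head -n 1
```

If the reported versions do not match the expected pairing, the session was likely started without switching to the toolset environment.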
+ +The openEuler GCC toolset incorporates gcc-14.2.0 and binutils-2.42 as the secondary version toolchain to ensure compilation environment stability and efficiency while minimizing complexity. This approach balances compilation quality and user experience. The toolchain GCC version will be updated to gcc-14.3.0 upon its release by the upstream community. + +### Architecture Design + +To differentiate from the default toolchain and prevent conflicts, this toolchain is named gcc-toolset-14. Its package names begin with the prefix `gcc-toolset-14-`, followed by the original toolchain package name. To avoid path overlap with the default **/usr** installation path, gcc-toolset-14 is installed in **/opt/openEuler/gcc-toolset-14/**. Additionally, to distinguish it from open source GCC and enable future integration of openEuler community features, the version of gcc-toolset-14-gcc is set to 14.2.1. + +The applications and libraries in gcc-toolset-14 coexist with the system default GCC version without replacing or overwriting it. They are not set as the default or preferred option. To simplify version switching and management, the scl-utils tool is introduced. Its usage and switching methods are outlined below. + +## Installation and Deployment + +### Software Requirements + +- OS: openEuler 24.09 + +### Hardware Requirements + +- AArch64 or X86_64 + +### Installation Methods + +Install the default GCC compiler, gcc-12.3.1, in **/usr**: + +```shell +yum install -y gcc gcc-c++ +``` + +Install the secondary version compilation toolchain, gcc-toolset-14, in **/opt/openEuler/gcc-toolset-14/root/usr/**: + +```shell +yum install -y gcc-toolset-14-gcc* +yum install -y gcc-toolset-14-binutils* +``` + +## Usage + +This solution uses the SCL (Software Collections) tool to manage different versions of the compilation toolchain. 
+ +### SCL + +SCL is a vital Linux tool that enables users to install and use multiple versions of applications and runtime environments safely and conveniently, preventing system conflicts. + +Key benefits of SCL include: + +1. Multi-version coexistence: allows installing and using multiple versions of software libraries, tools, and runtime environments on the same system to meet diverse needs. +2. Avoiding system conflicts: isolates different software versions to prevent conflicts with the system default version. +3. Enhancing development efficiency: provides developers with the latest toolchains and runtime environments, improving productivity. + +### Version Switching Methods + +**Install SCL:** + +```shell +yum install scl-utils scl-utils-build +``` + +**Register gcc-toolset-14:** + +```shell +## Register gcc-toolset-14. +scl register /opt/openEuler/gcc-toolset-14/ + +## Deregister gcc-toolset-14. +scl deregister gcc-toolset-14 +``` + +Use `scl list-collections` to verify that gcc-toolset-14 is successfully registered. + +**Switch to gcc-toolset-14:** + +```shell +scl enable gcc-toolset-14 bash +``` + +This command launches a new bash shell session with tools from gcc-toolset-14, replacing the system defaults. In this session, there is no need to manually switch compiler versions or paths. To exit the gcc-toolset-14 environment, type `exit` to return to the system default version. + +SCL works by automatically setting environment variables for different tool versions. For details, check the **/opt/openEuler/gcc-toolset-14/enable** file, which contains all environment variable configurations for gcc-toolset-14. If SCL is unavailable, use the following methods to switch toolchain versions: + +```shell +## Option 1: Without SCL, use a script to switch the compilation toolchain. +source /opt/openEuler/gcc-toolset-14/enable + +## Option 2: With SCL, use SCL to switch the toolchain and activate the runtime environment. 
+scl enable gcc-toolset-14 bash +``` + +## Usage Constraints + +### Compilation Scenarios + +- **Primary version**: Use the system default gcc-12.3.1 for standard compilation and building. +- **Secondary version**: When the advanced features of GCC 14 are needed for application building, use SCL to switch the bash environment to the gcc-toolset-14 compilation environment. + +### GCC 14 Secondary Version Usage Instructions + +1. The openEuler GCC toolset 14 secondary compilation toolchain offers two usage methods: + 1) **Dynamic linking**: By default, the `-lstdc++` option is automatically included for dynamic linking. This links the system dynamic library **/usr/lib64/libstdc++.so.6** and the **libstdc++_nonshared.a** static library provided by the GCC 14 secondary version. This static library contains stable C++ features introduced in GCC 14 compared to GCC 12. + 2) **Static linking**: You can use the `-static` option for static linking, which links the full-feature **libstdc++.a** static library provided by the GCC 14 secondary version. The path to this library is **/opt/openEuler/gcc-toolset-14/root/usr/lib/gcc/aarch64-openEuler-linux/14/libstdc++.a**. + +2. By default, builds use dynamic linking, which links the **libstdc++_nonshared.a** static library. To ensure system compatibility, this library only includes officially standardized C++ features. Experimental features like `-fmodules-ts` and `-fmodule-header`, which are part of C++20 module capabilities, are not included in **libstdc++_nonshared.a**. If you need these features, you should use static linking to fully link the GCC 14 secondary version static library. 
diff --git a/docs/en/Server/Development/GCC/gcc-basic-performance-optimization-user-guide.md b/docs/en/Server/Development/GCC/gcc-basic-performance-optimization-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..e65f3458ee487eb261d7f254db1aeac3b8cb4946 --- /dev/null +++ b/docs/en/Server/Development/GCC/gcc-basic-performance-optimization-user-guide.md @@ -0,0 +1,185 @@ +# GCC Basic Performance Optimization User Guide + +## Introduction + +Compiler performance optimization plays a vital role in enhancing application development efficiency, runtime performance, and maintainability. It is a significant research area in computer science and a critical component of the software development process. GCC for openEuler extends its general compilation optimization capabilities by improving backend performance techniques, including instruction optimization, vectorization enhancements, prefetching improvements, and data flow analysis optimizations. + +## Installation and Deployment + +### Software Requirements + +OS: openEuler 24.09 + +### Hardware Requirements + +AArch64 architecture + +### Software Installation + +Install GCC and related components as required. For example, to install GCC: + +```shell +yum install gcc +``` + +## Usage + +### CRC Optimization + +#### Description + +Detects CRC software loop code and generates efficient hardware instructions. + +#### Usage + +Include the `-floop-crc` option during compilation. + +Note: The `-floop-crc` option must be used alongside `-O3 -march=armv8.1-a`. + +### If-Conversion Enhancement + +#### Description + +Improves If-Conversion optimization by leveraging additional registers to minimize conflicts. + +#### Usage + +This optimization is part of the RTL if-conversion process. Enable it using the following compilation options: + +`-fifcvt-allow-complicated-cmps` + +`--param=ifcvt-allow-register-renaming=[0,1,2]` (The value controls the optimization scope.) 
+ +Note: This optimization requires the `-O2` optimization level and should be used with `--param=max-rtl-if-conversion-unpredictable-cost=48` and `--param=max-rtl-if-conversion-predictable-cost=48`. + +### Multiplication Calculation Optimization + +#### Description + +Optimizes Arm-related instruction merging to recognize 32-bit complex combinations of 64-bit integer multiplication logic and produce efficient 64-bit instructions. + +#### Usage + +Enable the optimization using the `-fuaddsub-overflow-match-all` and `-fif-conversion-gimple` options. + +Note: This optimization requires `-O3` or higher optimization levels. + +### cmlt Instruction Generation Optimization + +#### Description + +Generates `cmlt` instructions for specific arithmetic operations, reducing the instruction count. + +#### Usage + +Enable the optimization using the `-mcmlt-arith` option. + +Note: This optimization requires `-O3` or higher optimization levels. + +### Vectorization Optimization Enhancement + +#### Description + +Identifies and simplifies redundant instructions generated during vectorization, enabling shorter loops to undergo vectorization. + +#### Usage + +Enable the optimization using the parameter `--param=vect-alias-flexible-segment-len=1` (default is 0). + +Note: This optimization requires `-O3` or higher optimization levels. + +### Combined Optimization of min max and uzp1/uzp2 Instructions + +#### Description + +Identifies opportunities to optimize `min max` and `uzp1/uzp2` instructions together, reducing the instruction count to enhance performance. + +#### Usage + +Enable `min max` optimization with the `-fconvert-minmax` option. The `uzp1/uzp2` instruction optimization is automatically enabled at `-O3` or higher levels. + +Note: This optimization requires `-O3` or higher optimization levels. + +### ldp/stp Optimization + +#### Description + +Detects poorly performing `ldp/stp` instructions and splits them into two separate `ldr` and `str` instructions. 
+ +#### Usage + +Enable the optimization using the `-fsplit-ldp-stp` option. Control the search range with the parameter `--param=param-ldp-dependency-search-range=[1,32]` (default is 16). + +Note: This optimization requires `-O1` or higher optimization levels. + +### AES Instruction Optimization + +#### Description + +Identifies AES software algorithm instruction sequences and replaces them with hardware instructions for acceleration. + +#### Usage + +Enable the optimization using the `-fcrypto-accel-aes` option. + +Note: This optimization requires `-O3` or higher optimization levels. + +### Indirect Call Optimization + +#### Description + +Analyzes and optimizes indirect calls in the program, converting them into direct calls where possible. + +#### Usage + +Enable the optimization using the `-ficp -ficp-speculatively` options. + +Note: This optimization must be used with `-O2 -flto -flto-partition=one`. + +### IPA-prefetch + +#### Description + +Detects indirect memory accesses in loops and inserts prefetch instructions to minimize latency. + +#### Usage + +Enable the optimization using the `-fipa-prefetch -fipa-ic` options. + +Note: This optimization must be used with `-O3 -flto`. + +### -fipa-struct-reorg + +#### Description + +Optimizes memory layout by reorganizing the arrangement of structure members to improve cache hit rates. + +#### Usage + +Add the options `-O3 -flto -flto-partition=one -fipa-struct-reorg` to enable the optimization. + +Note: The `-fipa-struct-reorg` option requires `-O3 -flto -flto-partition=one` to be enabled globally. + +### -fipa-reorder-fields + +#### Description + +Optimizes memory layout by reordering structure members from largest to smallest, reducing padding and improving cache hit rates. + +#### Usage + +Add the options `-O3 -flto -flto-partition=one -fipa-reorder-fields` to enable the optimization. + +Note: The `-fipa-reorder-fields` option requires `-O3 -flto -flto-partition=one` to be enabled globally. 
+ +### -ftree-slp-transpose-vectorize + +#### Description + +Enhances data flow analysis for loops with consecutive memory reads by inserting temporary arrays during loop splitting. During SLP vectorization, it introduces transposition analysis for `grouped_stores`. + +#### Usage + +Add the options `-O3 -ftree-slp-transpose-vectorize` to enable the optimization. + +Note: The `-ftree-slp-transpose-vectorize` option requires `-O3` to be enabled. diff --git a/docs/en/docs/GCC/kernel_FDO_user_guide.md b/docs/en/Server/Development/GCC/kernel_FDO_user_guide.md similarity index 100% rename from docs/en/docs/GCC/kernel_FDO_user_guide.md rename to docs/en/Server/Development/GCC/kernel_FDO_user_guide.md diff --git a/docs/en/Server/Development/GCC/lto-user-guide.md b/docs/en/Server/Development/GCC/lto-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..cd188c1ca5f8c0e3c025f8b6b0562048ed0e926a --- /dev/null +++ b/docs/en/Server/Development/GCC/lto-user-guide.md @@ -0,0 +1,25 @@ +# Introduction to Link-Time Optimization + +In traditional compilation, GCC compiles and optimizes individual source files (compilation units) to generate .o object files containing assembly code. The linker then processes these `.o` files, resolving symbol tables and performing relocations to create the final executable. However, the linker, which has access to cross-file function call information, operates on assembly code and cannot perform compilation optimizations. Conversely, the compilation stage capable of optimizations lacks global cross-file information. While this approach improves efficiency by recompiling only modified units, it misses many cross-file optimization opportunities. + +Link-Time Optimization (LTO) addresses this limitation by enabling optimizations during the linking phase, leveraging cross-compilation-unit call information. To achieve this, LTO preserves the Intermediate Representation (IR) required for optimizations until linking. 
During linking, the linker invokes the LTO plugin to perform whole-program analysis, make better optimization decisions, and generate more efficient IR. This optimized IR is then converted into object files with assembly code, and the linker completes the standard linking process. + +# Enabling LTO in Version Builds + +## Background + +Many international communities have adopted LTO in their version builds to achieve better performance and smaller binary sizes. LTO is emerging as a key area for exploring compilation optimization opportunities. Starting with version 24.09, openEuler will introduce LTO in its version builds. + +## Solution + +To enable LTO during package builds, we will add `-flto -ffat-lto-objects` to the global compilation options in the macros of **openEuler-rpm-config**. The `-flto` flag enables Link-Time Optimization, while `-ffat-lto-objects` generates fat object files containing both LTO object information and the assembly information required for regular linking. During the build process, LTO object information is used for optimizations. However, since LTO object files are not compatible across GCC versions, we remove the LTO-related fields from `.o/.a` files before packaging them into `.rpm` files, retaining only the assembly code needed for regular linking. This ensures that static libraries remain unaffected. + +## Scope of Enablement + +Due to the significant differences between LTO and traditional compilation workflows, and to minimize the impact on version quality, LTO is currently enabled for only 500+ packages. The list of these packages is available in **/usr/lib/rpm/%{_vendor}/lto_white_list**. These whitelisted applications have been successfully built and passed their test suites with LTO enabled. The LTO compilation options (`-flto -ffat-lto-objects`) are applied only when building whitelisted applications; otherwise, they are omitted. 
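The fat-object mechanism can be sketched as follows. The file names are illustrative and exact section handling may differ between binutils versions; `.gnu.lto_*` is the section-name prefix under which GCC stores the IR in fat objects:

```shell
cat > add.c << 'EOF'
int add(int a, int b) { return a + b; }
EOF

# A fat LTO object carries both GIMPLE bytecode (.gnu.lto_* sections) and
# ordinary machine code, so non-LTO linkers can still consume it.
gcc -c -O2 -flto -ffat-lto-objects add.c -o add.o
readelf -S add.o | grep -q 'gnu.lto' && echo "IR sections present"

# Removing the IR before packaging, similar to what the rpm macros do,
# keeps only the regular code needed for normal linking.
objcopy --remove-section='.gnu.lto_*' add.o add_stripped.o
readelf -S add_stripped.o | grep -q 'gnu.lto' || echo "IR sections stripped"
```

Because the stripped object no longer contains version-specific IR, it links like any ordinary object file, which is why static libraries shipped in rpm packages remain usable across GCC versions.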
+ +In future innovation releases, we will work with application maintainers to expand the scope of LTO enablement. + +## Notes + +The current hot-patching mechanism is incompatible with LTO, causing hot patches to fail when LTO is enabled. We have agreed with the hot-patching team on a solution, which will be implemented in future releases. diff --git a/docs/en/docs/GCC/overview.md b/docs/en/Server/Development/GCC/overview.md similarity index 100% rename from docs/en/docs/GCC/overview.md rename to docs/en/Server/Development/GCC/overview.md diff --git a/docs/en/docs/Pin/pin-user-guide.md b/docs/en/Server/Development/GCC/pin-user-guide.md similarity index 97% rename from docs/en/docs/Pin/pin-user-guide.md rename to docs/en/Server/Development/GCC/pin-user-guide.md index f7bcd80f9767e06b01264ab27c9329526c004bea..39374fc240a23b068645eb27838ab16871c4e6a8 100644 --- a/docs/en/docs/Pin/pin-user-guide.md +++ b/docs/en/Server/Development/GCC/pin-user-guide.md @@ -11,7 +11,7 @@ ## Preparing the Environment -* Install the openEuler operating system. For details, see the [*openEuler 23.03 Installation Guide*](../Installation/Installation.md). +* Install the openEuler operating system. For details, see the [*openEuler Installation Guide*](../../InstallationUpgrade/Installation/installation.md). 
### Install the dependency diff --git a/docs/en/Server/Development/Menu/index.md b/docs/en/Server/Development/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..3679b507656b768dfb2b7cc2f466820cd92e56d0 --- /dev/null +++ b/docs/en/Server/Development/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [Application Development Guide]({{< relref "./ApplicationDev/Menu/index.md" >}}) +- [GCC User Guide]({{< relref "./GCC/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/DiversifiedComputing/DPU-OS/Menu/index.md b/docs/en/Server/DiversifiedComputing/DPU-OS/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..a4ee17c30a075ae8570c0c1e348e87a8d76ad5eb --- /dev/null +++ b/docs/en/Server/DiversifiedComputing/DPU-OS/Menu/index.md @@ -0,0 +1,8 @@ +--- +headless: true +--- + +- [DPU-OS]({{< relref "./overview.md" >}}) + - [DPU-OS Background and Requirements]({{< relref "./dpu-os-background-and-requirements.md" >}}) + - [DPU-OS Tailoring Guide]({{< relref "./dpu-os-tailoring-guide.md" >}}) + - [Verification and Deployment]({{< relref "./verification-and-deployment.md" >}}) diff --git a/docs/en/Server/DiversifiedComputing/DPU-OS/dpu-os-background-and-requirements.md b/docs/en/Server/DiversifiedComputing/DPU-OS/dpu-os-background-and-requirements.md new file mode 100644 index 0000000000000000000000000000000000000000..849d841464bf3af5100d9aa7317794ebc6e8722b --- /dev/null +++ b/docs/en/Server/DiversifiedComputing/DPU-OS/dpu-os-background-and-requirements.md @@ -0,0 +1,67 @@ +# DPU-OS Background and Requirements + +## Overview + +In data center and cloud environments, Moore's Law has reached its limits, leading to a slowdown in the growth of general-purpose CPU computing power. At the same time, network I/O speeds and performance continue to rise, creating a growing disparity between the two. 
This gap highlights the inability of current general-purpose processors to meet the demands of network, disk, and other I/O processing. In traditional data centers, a significant portion of general-purpose CPU resources is consumed by I/O and management tasks, a phenomenon known as the "Datacenter Tax." AWS estimates that this tax can consume over 30% of a data center's computing power, and in some cases, even more. + +The DPU was introduced to address this issue by offloading management, network, storage, and security tasks from the host CPU to dedicated processor chips. This offloading accelerates processing, reduces costs, and improves efficiency. Leading cloud providers like AWS, Alibaba Cloud, and Huawei Cloud have developed custom chips to handle these offloaded tasks, ensuring that 100% of data center computing resources are available for customer use. + +The DPU market is experiencing rapid growth, driven by strong demand from cloud providers and big data applications. Numerous Chinese DPU startups have also entered the market with innovative products. This growth presents challenges for cloud and big data providers, who must integrate diverse DPU products, and for DPU manufacturers, who must adapt device drivers to customer-specified operating systems. openEuler, a leading open-source operating system in China, addresses these challenges by offering DPU-OS, a solution built on openEuler that bridges the gap between DPU manufacturers and customers. Furthermore, since DPUs rely on their OS to support service acceleration, DPU-OS requires performance optimization. By leveraging openEuler, DPU-related acceleration capabilities can be embedded into DPU-OS, fostering a robust DPU software ecosystem. 
+ +## DPU-OS Requirements Analysis and Design + +### Current State of DPUs and OS Requirements + +DPUs exhibit several key characteristics and challenges: + +- Limited general-purpose processing resources + + DPUs are in the early stages of development, with hardware continuously evolving. Power constraints result in modest hardware specifications. Mainstream DPUs typically feature 8 to 24 CPU cores with limited single-core performance. Memory capacity ranges from 16 to 32GB, and local storage varies from tens to hundreds of gigabytes. The operating system running on DPUs must accommodate these constraints. + +- Varied DPU-OS installation methods + + The diversity of DPU manufacturers and products has led to multiple installation and deployment methods. These include PXE network installation, USB installation, and custom methods such as host-delivered installation images. + +- High performance requirements + + DPU application scenarios demand high performance. Compared to general-purpose server operating systems, DPU-OS may require specific kernel features or functional components. Examples include vDPA for device passthrough and live migration, vendor-specific driver support, seamless DPU process offloading, customized user-space data plane acceleration tools like DPDK/SPDK/OVS, and DPU management and monitoring tools. + +Based on these characteristics, the following requirements for DPU-OS are proposed: + +- Ultra-lightweight DPU-OS installation package + + Trim the openEuler system image to eliminate unnecessary packages and optimize system services to reduce resource overhead. + +- Customization support and tools + + Provide customization configurations and tools to enable customers or DPU manufacturers to tailor the system. openEuler offers an ISO reference implementation. + +- Customized kernel and system for peak performance + + Customize the kernel and drivers to deliver competitive features for DPUs. 
Enable hardware acceleration through tailored components and optimize system configurations for superior performance. Include DPU-related management and control tools for unified administration. + +### DPU-OS Design + +**Figure 1** Overall Design of DPU-OS + +![dpuos-arch](./figures/dpuos-arch.png) + +As illustrated in Figure 1, DPU-OS is structured into five layers: + +- **Kernel layer**: Customize the kernel configuration to remove non-essential features and modules, creating a lightweight kernel. Enable specific kernel features to deliver high-performance DPU capabilities. + +- **Driver layer**: Trim and customize openEuler native drivers, selecting the minimal required set. Integrate DPU vendor-specific drivers to natively support certain DPU hardware products. + +- **System configuration layer**: Optimize system settings through sysctl and proc configurations to ensure peak performance for DPU-related services. + +- **Peripheral package layer**: Customize and trim openEuler peripheral packages, selecting the minimal set. Provide a suite of DPU-related custom tools. + +- **System service layer**: Streamline native system service startup items to eliminate unnecessary services, minimizing runtime overhead. + +This five-layer design achieves the goal of a lightweight, high-performance DPU-OS. While this is a long-term design heavily reliant on the DPU software and hardware ecosystem, the current phase focuses on trimming using openEuler's imageTailor tool. + +For detailed steps on DPU-OS trimming, refer to the [DPU-OS Tailoring Guide](./dpu-os-tailoring-guide.md). For verification and deployment, consult the [DPU-OS Deployment and Verification Guide](./verification-and-deployment.md). + +> ![](./public_sys-resources/icon-note.gif)**Note**: +> +> Currently, DPU-OS leverages openEuler's existing kernel and peripheral packages, trimmed using the imageTailor tool to produce a lightweight OS installation image. 
Future development will integrate additional kernel and peripheral package features based on specific needs. diff --git a/docs/en/Server/DiversifiedComputing/DPU-OS/dpu-os-tailoring-guide.md b/docs/en/Server/DiversifiedComputing/DPU-OS/dpu-os-tailoring-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..05e25a8519f98d24a5bf508dae7c58a6200d7358 --- /dev/null +++ b/docs/en/Server/DiversifiedComputing/DPU-OS/dpu-os-tailoring-guide.md @@ -0,0 +1,65 @@ +# DPU-OS Tailoring Guide + +This document explains how to use imageTailor to trim the DPU-OS installation image using configuration files from the [dpu-utilities repository](https://gitee.com/openeuler/dpu-utilities/tree/master/dpuos). Follow these steps: + +## Prepare imageTailor and Required RPM Packages + +Install the imageTailor tool by referring to the [imageTailor User Guide](https://docs.openeuler.org/zh/docs/22.03_LTS/docs/TailorCustom/imageTailor%E4%BD%BF%E7%94%A8%E6%8C%87%E5%8D%97.html) and prepare the necessary RPM packages for tailoring. + +You can use the openEuler installation image as the RPM source. While **openEuler-22.03-LTS-everything-debug-aarch64-dvd.iso** contains a complete set of RPMs, it is large. Alternatively, use the RPMs from **openEuler-22.03-LTS-aarch64-dvd.iso** along with the install-scripts.noarch package. + +Obtain the `install-scripts.noarch` package from the everything repository or download it using yum: + +```bash +yum install -y --downloadonly --downloaddir=./ install-scripts +``` + +## Copy DPUOS Configuration Files + +The imageTailor tool is installed in **/opt/imageTailor** by default. Copy the DPU-OS configuration files to the appropriate paths, selecting the correct architecture directory. The DPU-OS tailoring configuration repository supports x86_64 and aarch64 architectures. 
+ +```bash +cp -rf custom/cfg_dpuos /opt/imageTailor/custom +cp -rf kiwi/minios/cfg_dpuos /opt/imageTailor/kiwi/minios/cfg_dpuos +``` + +## Modify Other Configuration Files + +- Add a line for `dpuos` configuration in **kiwi/eulerkiwi/product.conf**: + +```bash +dpuos PANGEA EMBEDDED DISK GRUB2 install_mode=install install_media=CD install_repo=CD selinux=0 +``` + +- Add a line for `dpuos` configuration in **kiwi/eulerkiwi/minios.conf**: + +```bash +dpuos kiwi/minios/cfg_dpuos yes +``` + +- Add a line for `dpuos` configuration in **repos/RepositoryRule.conf**: + +```bash +dpuos 1 rpm-dir euler_base +``` + +## Set Passwords + +Navigate to **/opt/imageTailor** and update the passwords in the following files: + +- **custom/cfg_dpuos/usr_file/etc/default/grub** + +- **custom/cfg_dpuos/rpm.conf** + +- **kiwi/minios/cfg_dpuos/rpm.conf** + +For password generation and modification, refer to the openEuler imageTailor manual section on [Configuring Initial Passwords](../../../Tools/CommunityTools/ImageCustom/imageTailor/imagetailor-user-guide.md#configuring-initial-passwords). + +## Execute the Tailoring Command + +Run the following command to perform the tailoring. 
The resulting ISO will be saved in **/opt/imageTailor/result**: + +```bash +cd /opt/imageTailor +./mkdliso -p dpuos -c custom/cfg_dpuos --sec --minios force +``` diff --git a/docs/en/Server/DiversifiedComputing/DPU-OS/figures/dpuos-arch.png b/docs/en/Server/DiversifiedComputing/DPU-OS/figures/dpuos-arch.png new file mode 100644 index 0000000000000000000000000000000000000000..453370ab07858a13a6c40f8d22e3f608e9ec6b4c Binary files /dev/null and b/docs/en/Server/DiversifiedComputing/DPU-OS/figures/dpuos-arch.png differ diff --git a/docs/en/Server/DiversifiedComputing/DPU-OS/overview.md b/docs/en/Server/DiversifiedComputing/DPU-OS/overview.md new file mode 100644 index 0000000000000000000000000000000000000000..89d83786b9a29940803a05f959d209dc6d9f1c4c --- /dev/null +++ b/docs/en/Server/DiversifiedComputing/DPU-OS/overview.md @@ -0,0 +1,11 @@ +# Overview + +This document outlines the background requirements and design principles of DPU-OS. It also details the process of creating a DPU-OS image by customizing the openEuler operating system, along with deployment and verification methods. The feature leverages the openEuler ecosystem to deliver a lightweight, high-performance DPU-OS, offering a reference implementation for data processing unit (DPU) scenarios and users. + +This document targets community developers, DPU vendors, and customers using openEuler who want to explore and adopt DPUs. Users should possess the following skills and knowledge: + +- Proficiency in basic Linux operations. + +- Understanding of Linux system construction and deployment fundamentals. + +- Familiarity with the openEuler imageTailor tool for image customization. 
diff --git a/docs/en/docs/SecHarden/public_sys-resources/icon-note.gif b/docs/en/Server/DiversifiedComputing/DPU-OS/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/SecHarden/public_sys-resources/icon-note.gif rename to docs/en/Server/DiversifiedComputing/DPU-OS/public_sys-resources/icon-note.gif diff --git a/docs/en/Server/DiversifiedComputing/DPU-OS/verification-and-deployment.md b/docs/en/Server/DiversifiedComputing/DPU-OS/verification-and-deployment.md new file mode 100644 index 0000000000000000000000000000000000000000..0a468aac1b889ad465589b008fc2acac7fc12c05 --- /dev/null +++ b/docs/en/Server/DiversifiedComputing/DPU-OS/verification-and-deployment.md @@ -0,0 +1,38 @@ +# Verification and Deployment + +Once DPU-OS is built, it can be installed and deployed for verification. Since DPU hardware is still in its early stages, you can also use VirtualBox to set up a virtual machine for deployment and testing. + +## Deploying DPU-OS on VirtualBox + +This section outlines the steps to install and deploy DPU-OS using the VirtualBox hypervisor. + +### Preparation for Verification + +Before deploying DPU-OS, ensure the following prerequisites are met: + +- Obtain the DPU-OS ISO file. +- Ensure the host machine has VirtualBox installed. + +### Initial Installation and Startup + +#### Creating a Virtual Machine + +Create a new virtual machine in VirtualBox: + +- Configure the virtual machine with at least 2 CPUs and 4GB of RAM. + +- Allocate a virtual disk with a recommended size of 60GB or larger. + +- Enable EFI boot in the system extension properties. + +- In the storage settings, select the local DPU-OS ISO file as the optical drive. + +- Customize other settings such as network or display as needed. + +#### Starting the Virtual Machine + +Start the newly created virtual machine and choose **Install from ISO** to begin the DPU-OS installation. The installation process is automated and requires no manual input. 
After installation, the system will reboot automatically. + +Select **Boot From Local Disk** to start DPU-OS. Use the password specified during the DPU-OS creation process. + +By following these steps, you can successfully deploy and verify DPU-OS locally. diff --git a/docs/en/Server/DiversifiedComputing/DPUOffload/Menu/index.md b/docs/en/Server/DiversifiedComputing/DPUOffload/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..d9bf46f1df52e8b383ab6e439a7d2b2f32b7f3cf --- /dev/null +++ b/docs/en/Server/DiversifiedComputing/DPUOffload/Menu/index.md @@ -0,0 +1,9 @@ +--- +headless: true +--- + +- [libvirt Direct Connection Aggregation Environment Establishment]({{< relref "./libvirt-direct-connection-aggregation-environment-establishment.md" >}}) + - [qtfs Shared File System]({{< relref "./qtfs-architecture-and-usage.md" >}}) + - [Imperceptible DPU Offload User Guide]({{< relref "./overview.md" >}}) + - [Imperceptible Container Management Plane Offload]({{< relref "./imperceptible-container-management-plane-offload.md" >}}) + - [Imperceptible Container Management Plane Offload Deployment Guide]({{< relref "./offload-deployment-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/DPUOffload/config/client.json b/docs/en/Server/DiversifiedComputing/DPUOffload/config/client.json similarity index 100% rename from docs/en/docs/DPUOffload/config/client.json rename to docs/en/Server/DiversifiedComputing/DPUOffload/config/client.json diff --git a/docs/en/docs/DPUOffload/config/prepare.sh b/docs/en/Server/DiversifiedComputing/DPUOffload/config/prepare.sh similarity index 100% rename from docs/en/docs/DPUOffload/config/prepare.sh rename to docs/en/Server/DiversifiedComputing/DPUOffload/config/prepare.sh diff --git a/docs/en/docs/DPUOffload/config/rexec.service b/docs/en/Server/DiversifiedComputing/DPUOffload/config/rexec.service similarity index 100% rename from docs/en/docs/DPUOffload/config/rexec.service rename to 
docs/en/Server/DiversifiedComputing/DPUOffload/config/rexec.service diff --git a/docs/en/docs/DPUOffload/config/server.json b/docs/en/Server/DiversifiedComputing/DPUOffload/config/server.json similarity index 100% rename from docs/en/docs/DPUOffload/config/server.json rename to docs/en/Server/DiversifiedComputing/DPUOffload/config/server.json diff --git a/docs/en/docs/DPUOffload/config/server_start.sh b/docs/en/Server/DiversifiedComputing/DPUOffload/config/server_start.sh similarity index 100% rename from docs/en/docs/DPUOffload/config/server_start.sh rename to docs/en/Server/DiversifiedComputing/DPUOffload/config/server_start.sh diff --git a/docs/en/docs/DPUOffload/config/whitelist b/docs/en/Server/DiversifiedComputing/DPUOffload/config/whitelist similarity index 100% rename from docs/en/docs/DPUOffload/config/whitelist rename to docs/en/Server/DiversifiedComputing/DPUOffload/config/whitelist diff --git a/docs/en/docs/DPUOffload/figures/arch.png b/docs/en/Server/DiversifiedComputing/DPUOffload/figures/arch.png similarity index 100% rename from docs/en/docs/DPUOffload/figures/arch.png rename to docs/en/Server/DiversifiedComputing/DPUOffload/figures/arch.png diff --git a/docs/en/docs/DPUOffload/figures/offload-arch.png b/docs/en/Server/DiversifiedComputing/DPUOffload/figures/offload-arch.png similarity index 100% rename from docs/en/docs/DPUOffload/figures/offload-arch.png rename to docs/en/Server/DiversifiedComputing/DPUOffload/figures/offload-arch.png diff --git a/docs/en/docs/DPUOffload/figures/qtfs-arch.png b/docs/en/Server/DiversifiedComputing/DPUOffload/figures/qtfs-arch.png similarity index 100% rename from docs/en/docs/DPUOffload/figures/qtfs-arch.png rename to docs/en/Server/DiversifiedComputing/DPUOffload/figures/qtfs-arch.png diff --git a/docs/en/docs/DPUOffload/imperceptible-container-management-plane-offload.md b/docs/en/Server/DiversifiedComputing/DPUOffload/imperceptible-container-management-plane-offload.md similarity index 100% rename from 
docs/en/docs/DPUOffload/imperceptible-container-management-plane-offload.md rename to docs/en/Server/DiversifiedComputing/DPUOffload/imperceptible-container-management-plane-offload.md diff --git a/docs/en/docs/DPUOffload/libvirt-direct-connection-aggregation-environment-establishment.md b/docs/en/Server/DiversifiedComputing/DPUOffload/libvirt-direct-connection-aggregation-environment-establishment.md similarity index 94% rename from docs/en/docs/DPUOffload/libvirt-direct-connection-aggregation-environment-establishment.md rename to docs/en/Server/DiversifiedComputing/DPUOffload/libvirt-direct-connection-aggregation-environment-establishment.md index 118467766ef52f4d4d6a2869cc0a3fc3ee6ad1e3..9cdd5e743afb5b69ec27526c177a0603df0251d3 100644 --- a/docs/en/docs/DPUOffload/libvirt-direct-connection-aggregation-environment-establishment.md +++ b/docs/en/Server/DiversifiedComputing/DPUOffload/libvirt-direct-connection-aggregation-environment-establishment.md @@ -6,8 +6,8 @@ Prepare two physical machines (VMs have not been tested) that can communicate wi One physical machine functions as the DPU, and the other functions as the host. In this document, DPU and HOST refer to the two physical machines. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->In the test mode, network ports are exposed without connection authentication, which is risky and should be used only for internal tests and verification. Do not use this mode in the production environment. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> In the test mode, network ports are exposed without connection authentication, which is risky and should be used only for internal tests and verification. Do not use this mode in the production environment. ## vsock mode @@ -152,34 +152,36 @@ mkdir /var/run/rexec You can start the rexec_server service on the server in either of the following ways. * Method 1: -Configure rexec as a systemd service. 
-Add the **[rexec.service](./config/rexec.service)** file to **/usr/lib/systemd/system**. + Configure rexec as a systemd service. -Then, use `systemctl` to manage the rexec service. + Add the **[rexec.service](./config/rexec.service)** file to **/usr/lib/systemd/system**. -Start the service for the first time: + Then, use `systemctl` to manage the rexec service. -```bash -systemctl daemon-reload + Start the service for the first time: -systemctl enable --now rexec -``` + ```bash + systemctl daemon-reload -Restart the service: + systemctl enable --now rexec + ``` -```bash -systemctl stop rexec + Restart the service: -systemctl start rexec -``` + ```bash + systemctl stop rexec + + systemctl start rexec + ``` * Method 2: -Manually start the service in the background. -```bash -nohup /usr/bin/rexec_server 2>&1 & -``` + Manually start the service in the background. + + ```bash + nohup /usr/bin/rexec_server 2>&1 & + ``` ## 3.4 libvirt Service Deployment diff --git a/docs/en/docs/DPUOffload/offload-deployment-guide.md b/docs/en/Server/DiversifiedComputing/DPUOffload/offload-deployment-guide.md similarity index 98% rename from docs/en/docs/DPUOffload/offload-deployment-guide.md rename to docs/en/Server/DiversifiedComputing/DPUOffload/offload-deployment-guide.md index 094150a113179bfbf8e19661df28d5c2808ffb99..5bc0c03ab107dc7d145e99803602be2a054d0d34 100644 --- a/docs/en/docs/DPUOffload/offload-deployment-guide.md +++ b/docs/en/Server/DiversifiedComputing/DPUOffload/offload-deployment-guide.md @@ -1,10 +1,9 @@ - # Imperceptible Container Management Plane Offload Deployment Guide -> ![](./public_sys-resources/icon-note.gif) **NOTE**: +> ![](./public_sys-resources/icon-note.gif)**NOTE**: > > In this user guide, modifications are performed to the container management plane components and the rexec tool of a specific version. You can modify other versions based on the actual execution environment. 
The patch provided in this document is for verification only and is not for commercial use. -> ![](./public_sys-resources/icon-note.gif) **NOTE**: +> ![](./public_sys-resources/icon-note.gif)**NOTE**: > > The communication between shared file systems is implemented through the network. You can perform a simulated offload using two physical machines or VMs connected through the network. > @@ -163,6 +162,6 @@ Because **/var/run/** is bound to **/another_rootfs/var/run/**, you can use Dock The container management plane is offloaded to the DPU. You can run `docker` commands to create and delete containers, or use `kubectl` on the current node to schedule and destroy pods. The actual container service process runs on the host. -> ![](./public_sys-resources/icon-note.gif) **NOTE**: +> ![](./public_sys-resources/icon-note.gif)**NOTE**: > > This guide describes only the container management plane offload. The offload of container network and data volumes requires additional offload capabilities, which are not included. You can perform cross-node startup of containers that are not configured with network and storage by referring to this guide. 
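The offload described above depends on **/var/run/** being bind-mounted into the offloaded rootfs (**/another_rootfs/var/run/**). A minimal sketch of a precondition check, assuming GNU `stat`; `same_dir` is a hypothetical helper name, and the two paths are passed via environment variables that default to the pair used in this guide:

```shell
# Hedged sketch: verify that two paths resolve to the same underlying
# directory (same device and inode), e.g. to confirm that /var/run is
# bind-mounted at /another_rootfs/var/run as this guide requires.
same_dir() {
    [ "$(stat -c '%d:%i' "$1" 2>/dev/null)" = "$(stat -c '%d:%i' "$2" 2>/dev/null)" ] &&
    [ -n "$(stat -c '%d:%i' "$1" 2>/dev/null)" ]
}

if same_dir "${RUN_DIR:-/var/run}" "${OFFLOAD_RUN_DIR:-/another_rootfs/var/run}"; then
    echo "bind mount present"
else
    echo "bind mount missing"
fi
```

On a correctly prepared host the check prints "bind mount present"; it is a sanity check only and does not create the mount.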
diff --git a/docs/en/docs/DPUOffload/overview.md b/docs/en/Server/DiversifiedComputing/DPUOffload/overview.md similarity index 100% rename from docs/en/docs/DPUOffload/overview.md rename to docs/en/Server/DiversifiedComputing/DPUOffload/overview.md diff --git a/docs/en/docs/StratoVirt/public_sys-resources/icon-note.gif b/docs/en/Server/DiversifiedComputing/DPUOffload/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/StratoVirt/public_sys-resources/icon-note.gif rename to docs/en/Server/DiversifiedComputing/DPUOffload/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/DPUOffload/qtfs-architecture-and-usage.md b/docs/en/Server/DiversifiedComputing/DPUOffload/qtfs-architecture-and-usage.md similarity index 100% rename from docs/en/docs/DPUOffload/qtfs-architecture-and-usage.md rename to docs/en/Server/DiversifiedComputing/DPUOffload/qtfs-architecture-and-usage.md diff --git a/docs/en/docs/DPUOffload/scripts/qemu-kvm b/docs/en/Server/DiversifiedComputing/DPUOffload/scripts/qemu-kvm similarity index 100% rename from docs/en/docs/DPUOffload/scripts/qemu-kvm rename to docs/en/Server/DiversifiedComputing/DPUOffload/scripts/qemu-kvm diff --git a/docs/en/docs/DPUOffload/scripts/virt_start.sh b/docs/en/Server/DiversifiedComputing/DPUOffload/scripts/virt_start.sh similarity index 100% rename from docs/en/docs/DPUOffload/scripts/virt_start.sh rename to docs/en/Server/DiversifiedComputing/DPUOffload/scripts/virt_start.sh diff --git a/docs/en/docs/DPUOffload/scripts/virt_umount.sh b/docs/en/Server/DiversifiedComputing/DPUOffload/scripts/virt_umount.sh similarity index 100% rename from docs/en/docs/DPUOffload/scripts/virt_umount.sh rename to docs/en/Server/DiversifiedComputing/DPUOffload/scripts/virt_umount.sh diff --git a/docs/en/Server/DiversifiedComputing/Menu/index.md b/docs/en/Server/DiversifiedComputing/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..ca9894b5aa4df2f5159ee18329ca2d3f50d2c104 --- 
/dev/null +++ b/docs/en/Server/DiversifiedComputing/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- + +- [Direct Connection Aggregation User Guide]({{< relref "./DPUOffload/Menu/index.md" >}}) +- [DPU-OS]({{< relref "./DPU-OS/Menu/index.md" >}}) diff --git a/docs/en/Server/HighAvailability/HA/Menu/index.md b/docs/en/Server/HighAvailability/HA/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..e73404e656b28ac6db83b4f52ebb9d94752595b1 --- /dev/null +++ b/docs/en/Server/HighAvailability/HA/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [HA Installation and Deployment]({{< relref "./ha-installation-and-deployment.md" >}}) +- [HA Usage Examples]({{< relref "./ha-usage-examples.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/desktop/figures/HA-add-resource.png b/docs/en/Server/HighAvailability/HA/figures/HA-add-resource.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-add-resource.png rename to docs/en/Server/HighAvailability/HA/figures/HA-add-resource.png diff --git a/docs/en/docs/desktop/figures/HA-apache-show.png b/docs/en/Server/HighAvailability/HA/figures/HA-apache-show.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-apache-show.png rename to docs/en/Server/HighAvailability/HA/figures/HA-apache-show.png diff --git a/docs/en/docs/desktop/figures/HA-apache-suc.png b/docs/en/Server/HighAvailability/HA/figures/HA-apache-suc.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-apache-suc.png rename to docs/en/Server/HighAvailability/HA/figures/HA-apache-suc.png diff --git a/docs/en/docs/desktop/figures/HA-api.png b/docs/en/Server/HighAvailability/HA/figures/HA-api.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-api.png rename to docs/en/Server/HighAvailability/HA/figures/HA-api.png diff --git a/docs/en/docs/desktop/figures/HA-clone-suc.png b/docs/en/Server/HighAvailability/HA/figures/HA-clone-suc.png similarity index 100% 
rename from docs/en/docs/desktop/figures/HA-clone-suc.png rename to docs/en/Server/HighAvailability/HA/figures/HA-clone-suc.png diff --git a/docs/en/docs/desktop/figures/HA-clone.png b/docs/en/Server/HighAvailability/HA/figures/HA-clone.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-clone.png rename to docs/en/Server/HighAvailability/HA/figures/HA-clone.png diff --git a/docs/en/docs/desktop/figures/HA-corosync.png b/docs/en/Server/HighAvailability/HA/figures/HA-corosync.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-corosync.png rename to docs/en/Server/HighAvailability/HA/figures/HA-corosync.png diff --git a/docs/en/docs/desktop/figures/HA-firstchoice-cmd.png b/docs/en/Server/HighAvailability/HA/figures/HA-firstchoice-cmd.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-firstchoice-cmd.png rename to docs/en/Server/HighAvailability/HA/figures/HA-firstchoice-cmd.png diff --git a/docs/en/docs/desktop/figures/HA-firstchoice.png b/docs/en/Server/HighAvailability/HA/figures/HA-firstchoice.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-firstchoice.png rename to docs/en/Server/HighAvailability/HA/figures/HA-firstchoice.png diff --git a/docs/en/docs/desktop/figures/HA-group-new-suc.png b/docs/en/Server/HighAvailability/HA/figures/HA-group-new-suc.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-group-new-suc.png rename to docs/en/Server/HighAvailability/HA/figures/HA-group-new-suc.png diff --git a/docs/en/docs/desktop/figures/HA-group-new-suc2.png b/docs/en/Server/HighAvailability/HA/figures/HA-group-new-suc2.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-group-new-suc2.png rename to docs/en/Server/HighAvailability/HA/figures/HA-group-new-suc2.png diff --git a/docs/en/docs/desktop/figures/HA-group-new.png b/docs/en/Server/HighAvailability/HA/figures/HA-group-new.png similarity index 100% rename from 
docs/en/docs/desktop/figures/HA-group-new.png rename to docs/en/Server/HighAvailability/HA/figures/HA-group-new.png diff --git a/docs/en/docs/desktop/figures/HA-group-suc.png b/docs/en/Server/HighAvailability/HA/figures/HA-group-suc.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-group-suc.png rename to docs/en/Server/HighAvailability/HA/figures/HA-group-suc.png diff --git a/docs/en/docs/desktop/figures/HA-group.png b/docs/en/Server/HighAvailability/HA/figures/HA-group.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-group.png rename to docs/en/Server/HighAvailability/HA/figures/HA-group.png diff --git a/docs/en/docs/desktop/figures/HA-home-page.png b/docs/en/Server/HighAvailability/HA/figures/HA-home-page.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-home-page.png rename to docs/en/Server/HighAvailability/HA/figures/HA-home-page.png diff --git a/docs/en/docs/desktop/figures/HA-login.png b/docs/en/Server/HighAvailability/HA/figures/HA-login.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-login.png rename to docs/en/Server/HighAvailability/HA/figures/HA-login.png diff --git a/docs/en/docs/desktop/figures/HA-mariadb-suc.png b/docs/en/Server/HighAvailability/HA/figures/HA-mariadb-suc.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-mariadb-suc.png rename to docs/en/Server/HighAvailability/HA/figures/HA-mariadb-suc.png diff --git a/docs/en/docs/desktop/figures/HA-mariadb.png b/docs/en/Server/HighAvailability/HA/figures/HA-mariadb.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-mariadb.png rename to docs/en/Server/HighAvailability/HA/figures/HA-mariadb.png diff --git a/docs/en/docs/desktop/figures/HA-nfs-suc.png b/docs/en/Server/HighAvailability/HA/figures/HA-nfs-suc.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-nfs-suc.png rename to docs/en/Server/HighAvailability/HA/figures/HA-nfs-suc.png diff --git 
a/docs/en/docs/desktop/figures/HA-nfs.png b/docs/en/Server/HighAvailability/HA/figures/HA-nfs.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-nfs.png rename to docs/en/Server/HighAvailability/HA/figures/HA-nfs.png diff --git a/docs/en/docs/desktop/figures/HA-pacemaker.png b/docs/en/Server/HighAvailability/HA/figures/HA-pacemaker.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-pacemaker.png rename to docs/en/Server/HighAvailability/HA/figures/HA-pacemaker.png diff --git a/docs/en/docs/desktop/figures/HA-pcs-status.png b/docs/en/Server/HighAvailability/HA/figures/HA-pcs-status.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-pcs-status.png rename to docs/en/Server/HighAvailability/HA/figures/HA-pcs-status.png diff --git a/docs/en/docs/desktop/figures/HA-pcs.png b/docs/en/Server/HighAvailability/HA/figures/HA-pcs.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-pcs.png rename to docs/en/Server/HighAvailability/HA/figures/HA-pcs.png diff --git a/docs/en/docs/desktop/figures/HA-refresh.png b/docs/en/Server/HighAvailability/HA/figures/HA-refresh.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-refresh.png rename to docs/en/Server/HighAvailability/HA/figures/HA-refresh.png diff --git a/docs/en/docs/desktop/figures/HA-vip-suc.png b/docs/en/Server/HighAvailability/HA/figures/HA-vip-suc.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-vip-suc.png rename to docs/en/Server/HighAvailability/HA/figures/HA-vip-suc.png diff --git a/docs/en/docs/desktop/figures/HA-vip.png b/docs/en/Server/HighAvailability/HA/figures/HA-vip.png similarity index 100% rename from docs/en/docs/desktop/figures/HA-vip.png rename to docs/en/Server/HighAvailability/HA/figures/HA-vip.png diff --git a/docs/en/docs/thirdparty_migration/installha.md b/docs/en/Server/HighAvailability/HA/ha-installation-and-deployment.md similarity index 41% rename from 
docs/en/docs/thirdparty_migration/installha.md rename to docs/en/Server/HighAvailability/HA/ha-installation-and-deployment.md index bfa6283411cdde3800bafc9228da1a5618655b3c..4b28d3c1a6bde9c1a526f86ae5c867415595b765 100644 --- a/docs/en/docs/thirdparty_migration/installha.md +++ b/docs/en/Server/HighAvailability/HA/ha-installation-and-deployment.md @@ -1,37 +1,35 @@ -# Installing and Deploying an HA Cluster +# HA Installation and Deployment -This section describes how to install and deploy an HA cluster. +This document describes how to install and deploy an HA cluster. ## Installation and Deployment -### Preparing the Environment - -At least two physical machines or virtual machines (VMs) installed with openEuler 21.03 are required. This section uses two physical machines or VMs as an example. For details about how to install openEuler 21.03, see the [_openEuler Installation Guide_](../Installation/Installation.md). +- Prepare the environment: At least two physical machines or VMs with openEuler installed are required. (This section uses two physical machines or VMs as an example.) For details about how to install openEuler, see the [_openEuler Installation Guide_](../../InstallationUpgrade/Installation/installation.md). ### Modifying the Host Name and the /etc/hosts File -**Note**: You need to perform the following operations on both hosts. The following uses one host as an example. The IP address used in this section is for reference only. +- **Note: You need to perform the following operations on both hosts. The following takes one host as an example. IP addresses in this document are for reference only.** -Before using the HA software, ensure that the host name has been changed and all host names have been written into the **/etc/hosts** file. -1. 
Run the following command to change the host name: +- Run the following command to change the host name: - ```shell - hostnamectl set-hostname ha1 - ``` +```shell +hostnamectl set-hostname ha1 +``` -2. Edit the `/etc/hosts` file and write the following fields: +- Edit the **/etc/hosts** file and write the following fields: - ```text - 172.30.30.65 ha1 - 172.30.30.66 ha2 - ``` +```text +172.30.30.65 ha1 +172.30.30.66 ha2 +``` -### Configuring the Yum Source +### Configuring the Yum Repository -After the system is successfully installed, the Yum source is configured by default. The file location is stored in the `/etc/yum.repos.d/openEuler.repo` file. The HA software package uses the following sources: +After the system is successfully installed, the Yum repository is configured by default in the **/etc/yum.repos.d/openEuler.repo** file. The HA software package uses the following sources: -```conf +```text [OS] name=OS baseurl=http://repo.openeuler.org/openEuler-23.09/OS/$basearch/ @@ -54,21 +52,21 @@ gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-23.09/OS/$basearch/RPM-GPG-KEY-openEuler ``` -### Installing the Components of the HA Software Package +### Installing the HA Software Package Components ```shell yum install -y corosync pacemaker pcs fence-agents fence-virt corosync-qdevice sbd drbd drbd-utils ``` -### Setting the **hacluster** User Password +### Setting the hacluster User Password ```shell passwd hacluster ``` -### Modifying the `/etc/corosync/corosync.conf` file +### Modifying the /etc/corosync/corosync.conf File -```conf +```text totem { version: 2 cluster_name: hacluster @@ -106,79 +104,79 @@ nodelist { } ``` -### Managing Services +### Managing the Services -#### Disabling the Firewall +#### Disabling the Firewall -1. Run the following command to disable the firewall: +1. Stop the firewall. - ```shell - systemctl stop firewalld - ``` + ```shell + systemctl stop firewalld + ``` -2. 
Change **SELinux** to **disabled** in the **`/etc/selinux/config`** file. +2. Set **SELINUX** to **disabled** in the **/etc/selinux/config** file. - ```text - SELINUX=disabled - ``` + ```text + SELINUX=disabled + ``` -#### Managing the pcs Service +#### Managing the pcs Service -1. Run the following command to start the pcs service: +1. Start the pcs service. - ```shell - systemctl start pcsd - ``` + ```shell + systemctl start pcsd + ``` -2. Run the following command to query the pcs service status: +2. Query the pcs service status. - ```shell - systemctl status pcsd - ``` + ```shell + systemctl status pcsd + ``` - The service is started successfully if the following information is displayed: + The service is started successfully if the following information is displayed: - ![](./figures/HA-pcs.png) + ![](./figures/HA-pcs.png) -#### Managing the Pacemaker Service +#### Managing the Pacemaker Service -1. Run the following command to start the Pacemaker service: +1. Start the Pacemaker service. - ```shell - systemctl start pacemaker - ``` + ```shell + systemctl start pacemaker + ``` -2. Run the following command to query the Pacemaker service status: +2. Query the Pacemaker service status. - ```shell - systemctl status pacemaker - ``` + ```shell + systemctl status pacemaker + ``` - The service is started successfully if the following information is displayed: + The service is started successfully if the following information is displayed: - ![](./figures/HA-pacemaker.png) + ![](./figures/HA-pacemaker.png) -#### Managing the Corosync Service +#### Managing the Corosync Service -1. Run the following command to start the Corosync service: +1. Start the Corosync service. - ```shell - systemctl start corosync - ``` + ```shell + systemctl start corosync + ``` -2. Run the following command to query the Corosync service status: +2. Query the Corosync service status. 
- ```shell - systemctl status corosync - ``` + ```shell + systemctl status corosync + ``` - The service is started successfully if the following information is displayed: + The service is started successfully if the following information is displayed: - ![](./figures/HA-corosync.png) + ![](./figures/HA-corosync.png) ### Performing Node Authentication -**Note**: Perform this operation on either node. +- **Note: Run this command on either node.** ```shell pcs host auth ha1 ha2 @@ -186,16 +184,16 @@ pcs host auth ha1 ha2 ### Accessing the Front-End Management Platform -After the preceding services are started, open the browser (Chrome or Firefox is recommended) and enter `https://localhost:2224` in the address box. +After the preceding services are started, open the browser (Chrome or Firefox is recommended) and enter `https://localhost:2224` in the address bar. -- The following figure shows the native management platform: +- This page is the native management platform. ![](./figures/HA-login.png) -For details about how to install the management platform newly developed by the community, see `https://gitee.com/openeuler/ha-api/blob/master/docs/build.md`. +For details about how to install the management platform newly developed by the community, see <https://gitee.com/openeuler/ha-api/blob/master/docs/build.md>. -- The following is the management platform newly developed by the community: +- The following is the management platform newly developed by the community. ![](./figures/HA-api.png) -For details about how to use the HA cluster and how to add an instance, see [HA Usage Example](../desktop/HA_use_cases.md). +- For details about how to quickly use an HA cluster and add an instance, see [HA Usage Examples](./ha-usage-examples.md). 
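The two-node **corosync.conf** edited in this guide can also be rendered from shell variables, which keeps the node list consistent with **/etc/hosts**. A minimal sketch under stated assumptions: node names and addresses follow the guide's example, only the `nodelist` block is generated, and `OUT` defaults to the current directory for illustration (on a real node, write to **/etc/corosync/corosync.conf**):

```shell
# Hedged sketch: generate the nodelist section of corosync.conf from
# variables. The totem/quorum sections shown in the guide are unchanged
# and would be written the same way.
OUT="${OUT:-corosync.conf}"   # /etc/corosync/corosync.conf on a real node
NODE1_NAME=ha1 NODE1_ADDR=172.30.30.65
NODE2_NAME=ha2 NODE2_ADDR=172.30.30.66
cat > "$OUT" <<EOF
nodelist {
    node {
        name: ${NODE1_NAME}
        nodeid: 1
        ring0_addr: ${NODE1_ADDR}
    }
    node {
        name: ${NODE2_NAME}
        nodeid: 2
        ring0_addr: ${NODE2_ADDR}
    }
}
EOF
echo "wrote $OUT"
```

Generating the file this way avoids hand-editing mismatches between the two hosts; copy the identical file to both nodes before starting Corosync.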
diff --git a/docs/en/docs/thirdparty_migration/usecase.md b/docs/en/Server/HighAvailability/HA/ha-usage-examples.md similarity index 97% rename from docs/en/docs/thirdparty_migration/usecase.md rename to docs/en/Server/HighAvailability/HA/ha-usage-examples.md index 2e7d6a568fc654ee797d771100af62e75591634a..122d4ae29c8fc731498e9ff2078613f86ea00c38 100644 --- a/docs/en/docs/thirdparty_migration/usecase.md +++ b/docs/en/Server/HighAvailability/HA/ha-usage-examples.md @@ -1,6 +1,6 @@ -# HA Use Cases +# HA Usage Examples -This section describes how to get started with the HA cluster and add an instance. If you are not familiar with HA cluster installation, see [Installing and Deploying an HA Cluster](./installha.md). +This section describes how to get started with the HA cluster and add an instance. If you are not familiar with HA cluster installation, see [HA Installation and Deployment](./ha-installation-and-deployment.md). ## Quick Start Guide @@ -112,7 +112,7 @@ The following uses Apache as an example to describe how to add resources through ![](./figures/HA-group.png) - >**Notes:** + > **Notes:** > Group resources are started in the sequence of child resources. Therefore, you need to select child resources in sequence. 2. If the following information is displayed, the resource is added successfully. @@ -136,8 +136,7 @@ The following uses Apache as an example to describe how to add resources through - Stopping a resource: Select a target resource from the resource node list. The target resource must be running. Stop the resource. - Clearing a resource: Select a target resource from the resource node list. Clear the resource. - Migrating a resource: Select a target resource from the resource node list. The resource must be a common resource or group resource in the running status. Migrate the resource to migrate it to a specified node. -- Migrating back a resource: Select a target resource from the resource node list. The resource must be a migrated resource. 
Migrate back the resource to clear the migration settings of the resource and migrate the resource back to the original node. -After you click **Migrate Back**, the status change of the resource item in the list is the same as that when the resource is started. +- Migrating back a resource: Select a target resource from the resource node list. The resource must be a migrated resource. Migrate back the resource to clear the migration settings of the resource and migrate the resource back to the original node. After you click **Migrate Back**, the status change of the resource item in the list is the same as that when the resource is started. - Deleting a resource: Select a target resource from the resource node list. Delete the resource. ### Setting Resource Relationships diff --git a/docs/en/Server/HighAvailability/Menu/index.md b/docs/en/Server/HighAvailability/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..44269fa08bfa6bd40134a635dcbb5c336b5e190e --- /dev/null +++ b/docs/en/Server/HighAvailability/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [HA User Guide]({{< relref "./HA/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/InstallationUpgrade/Installation/Menu/index.md b/docs/en/Server/InstallationUpgrade/Installation/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..b1bd95143341cdf975bcbbf9b3c66cd187755ac8 --- /dev/null +++ b/docs/en/Server/InstallationUpgrade/Installation/Menu/index.md @@ -0,0 +1,20 @@ +--- +headless: true +--- + +- [Installation Guide]({{< relref "./installation.md" >}}) + - [Installation on Servers]({{< relref "./installation-on-servers.md" >}}) + - [Installation Preparations]({{< relref "./installation-preparations.md" >}}) + - [Installation Modes]({{< relref "./installation-modes.md" >}}) + - [Installation Guide]({{< relref "./installation-guide.md" >}}) + - [Using Kickstart for Automatic Installation]({{< relref 
"./using-kickstart-for-automatic-installation.md" >}}) + - [Common Issues and Solutions]({{< relref "./server-installation-common-issues-and-solutions.md" >}}) + - [Installation on Raspberry Pi]({{< relref "./install-pi.md" >}}) + - [Installation Preparations]({{< relref "./installation-preparations-1.md" >}}) + - [Installation Modes]({{< relref "./installation-modes-1.md" >}}) + - [Installation Guide]({{< relref "./installation-guide-1.md" >}}) + - [Common Issues and Solutions]({{< relref "./raspi-common-issues-and-solutions.md" >}}) + - [More Resources]({{< relref "./more-resources.md" >}}) + - [RISC-V Installation Guide]({{< relref "./risc-v.md" >}}) + - [VM Installation]({{< relref "./risc-v-qemu.md" >}}) + - [More Resources]({{< relref "./risc-v-more.md" >}}) diff --git a/docs/en/docs/Installation/figures/Figure-18.png b/docs/en/Server/InstallationUpgrade/Installation/figures/Figure-18.png similarity index 100% rename from docs/en/docs/Installation/figures/Figure-18.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/Figure-18.png diff --git a/docs/en/docs/Installation/figures/Megaraid_IO_Request_uncompleted.png b/docs/en/Server/InstallationUpgrade/Installation/figures/Megaraid_IO_Request_uncompleted.png similarity index 100% rename from docs/en/docs/Installation/figures/Megaraid_IO_Request_uncompleted.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/Megaraid_IO_Request_uncompleted.png diff --git a/docs/en/Server/InstallationUpgrade/Installation/figures/Partition_expansion.png b/docs/en/Server/InstallationUpgrade/Installation/figures/Partition_expansion.png new file mode 100644 index 0000000000000000000000000000000000000000..37a6ef7a2371a9a5518f6d2ce0dc6d36fc71fe1b Binary files /dev/null and b/docs/en/Server/InstallationUpgrade/Installation/figures/Partition_expansion.png differ diff --git a/docs/en/docs/Installation/figures/bios.png b/docs/en/Server/InstallationUpgrade/Installation/figures/bios.png similarity index 100% 
rename from docs/en/docs/Installation/figures/bios.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/bios.png diff --git a/docs/en/docs/Installation/figures/cancle_disk.png b/docs/en/Server/InstallationUpgrade/Installation/figures/cancle_disk.png similarity index 100% rename from docs/en/docs/Installation/figures/cancle_disk.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/cancle_disk.png diff --git a/docs/en/docs/Installation/figures/completing-the-automatic-installation.png b/docs/en/Server/InstallationUpgrade/Installation/figures/completing-the-automatic-installation.png similarity index 100% rename from docs/en/docs/Installation/figures/completing-the-automatic-installation.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/completing-the-automatic-installation.png diff --git a/docs/en/docs/Installation/figures/custom_paratition.png b/docs/en/Server/InstallationUpgrade/Installation/figures/custom_paratition.png similarity index 100% rename from docs/en/docs/Installation/figures/custom_paratition.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/custom_paratition.png diff --git a/docs/en/docs/Installation/figures/dialog-box-showing-no-bootable-device.png b/docs/en/Server/InstallationUpgrade/Installation/figures/dialog-box-showing-no-bootable-device.png similarity index 100% rename from docs/en/docs/Installation/figures/dialog-box-showing-no-bootable-device.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/dialog-box-showing-no-bootable-device.png diff --git a/docs/en/docs/Installation/figures/drive-icon.png b/docs/en/Server/InstallationUpgrade/Installation/figures/drive-icon.png similarity index 100% rename from docs/en/docs/Installation/figures/drive-icon.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/drive-icon.png diff --git a/docs/en/docs/A-Tune/figures/en-us_image_0213178479.png 
b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0213178479.png similarity index 100% rename from docs/en/docs/A-Tune/figures/en-us_image_0213178479.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0213178479.png diff --git a/docs/en/docs/Installation/figures/en-us_image_0229291243.png b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291243.png similarity index 100% rename from docs/en/docs/Installation/figures/en-us_image_0229291243.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291243.png diff --git a/docs/en/docs/Installation/figures/en-us_image_0229291247.png b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291247.png similarity index 100% rename from docs/en/docs/Installation/figures/en-us_image_0229291247.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291247.png diff --git a/docs/en/docs/Installation/figures/en-us_image_0229291264.jpg b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291264.jpg similarity index 100% rename from docs/en/docs/Installation/figures/en-us_image_0229291264.jpg rename to docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291264.jpg diff --git a/docs/en/docs/Installation/figures/en-us_image_0229291270.png b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291270.png similarity index 100% rename from docs/en/docs/Installation/figures/en-us_image_0229291270.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291270.png diff --git a/docs/en/docs/Installation/figures/en-us_image_0229291272.png b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291272.png similarity index 100% rename from docs/en/docs/Installation/figures/en-us_image_0229291272.png rename to 
docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291272.png diff --git a/docs/en/docs/Installation/figures/en-us_image_0229291280.png b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291280.png similarity index 100% rename from docs/en/docs/Installation/figures/en-us_image_0229291280.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291280.png diff --git a/docs/en/docs/Installation/figures/en-us_image_0229291286.png b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291286.png similarity index 100% rename from docs/en/docs/Installation/figures/en-us_image_0229291286.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229291286.png diff --git a/docs/en/docs/Installation/figures/en-us_image_0229420473.png b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229420473.png similarity index 100% rename from docs/en/docs/Installation/figures/en-us_image_0229420473.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0229420473.png diff --git a/docs/en/docs/Installation/figures/en-us_image_0231657950.png b/docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0231657950.png similarity index 100% rename from docs/en/docs/Installation/figures/en-us_image_0231657950.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/en-us_image_0231657950.png diff --git a/docs/en/docs/Installation/figures/enforce-secure-boot.png b/docs/en/Server/InstallationUpgrade/Installation/figures/enforce-secure-boot.png similarity index 100% rename from docs/en/docs/Installation/figures/enforce-secure-boot.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/enforce-secure-boot.png diff --git a/docs/en/docs/Installation/figures/error-message.png b/docs/en/Server/InstallationUpgrade/Installation/figures/error-message.png similarity index 100% rename from 
docs/en/docs/Installation/figures/error-message.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/error-message.png diff --git a/docs/en/docs/Installation/figures/figure-10.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-10.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-10.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-10.png diff --git a/docs/en/docs/Installation/figures/figure-11.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-11.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-11.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-11.png diff --git a/docs/en/docs/Installation/figures/figure-12.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-12.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-12.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-12.png diff --git a/docs/en/docs/Installation/figures/figure-13.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-13.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-13.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-13.png diff --git a/docs/en/docs/Installation/figures/figure-14.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-14.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-14.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-14.png diff --git a/docs/en/docs/Installation/figures/figure-15.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-15.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-15.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-15.png diff --git a/docs/en/docs/Installation/figures/figure-16.png 
b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-16.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-16.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-16.png diff --git a/docs/en/docs/Installation/figures/figure-17.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-17.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-17.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-17.png diff --git a/docs/en/docs/Installation/figures/figure-19.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-19.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-19.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-19.png diff --git a/docs/en/docs/Installation/figures/figure-4.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-4.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-4.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-4.png diff --git a/docs/en/docs/Installation/figures/figure-5.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-5.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-5.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-5.png diff --git a/docs/en/docs/Installation/figures/figure-6.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-6.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-6.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-6.png diff --git a/docs/en/docs/Installation/figures/figure-7.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-7.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-7.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-7.png diff --git 
a/docs/en/docs/Installation/figures/figure-8.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-8.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-8.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-8.png diff --git a/docs/en/docs/Installation/figures/figure-9.png b/docs/en/Server/InstallationUpgrade/Installation/figures/figure-9.png similarity index 100% rename from docs/en/docs/Installation/figures/figure-9.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/figure-9.png diff --git a/docs/en/docs/Installation/figures/ftp-mode.png b/docs/en/Server/InstallationUpgrade/Installation/figures/ftp-mode.png similarity index 100% rename from docs/en/docs/Installation/figures/ftp-mode.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/ftp-mode.png diff --git a/docs/en/docs/Installation/figures/http-mode.png b/docs/en/Server/InstallationUpgrade/Installation/figures/http-mode.png similarity index 100% rename from docs/en/docs/Installation/figures/http-mode.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/http-mode.png diff --git a/docs/en/docs/Installation/figures/image-dialog-box.png b/docs/en/Server/InstallationUpgrade/Installation/figures/image-dialog-box.png similarity index 100% rename from docs/en/docs/Installation/figures/image-dialog-box.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/image-dialog-box.png diff --git a/docs/en/docs/Installation/figures/nfs-mode.png b/docs/en/Server/InstallationUpgrade/Installation/figures/nfs-mode.png similarity index 100% rename from docs/en/docs/Installation/figures/nfs-mode.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/nfs-mode.png diff --git a/docs/en/docs/Installation/figures/reset_devices.png b/docs/en/Server/InstallationUpgrade/Installation/figures/reset_devices.png similarity index 100% rename from docs/en/docs/Installation/figures/reset_devices.png 
rename to docs/en/Server/InstallationUpgrade/Installation/figures/reset_devices.png diff --git a/docs/en/docs/Installation/figures/restart-icon.png b/docs/en/Server/InstallationUpgrade/Installation/figures/restart-icon.png similarity index 100% rename from docs/en/docs/Installation/figures/restart-icon.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/restart-icon.png diff --git a/docs/en/docs/Installation/figures/security.png b/docs/en/Server/InstallationUpgrade/Installation/figures/security.png similarity index 100% rename from docs/en/docs/Installation/figures/security.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/security.png diff --git a/docs/en/docs/Installation/figures/startparam.png b/docs/en/Server/InstallationUpgrade/Installation/figures/startparam.png similarity index 100% rename from docs/en/docs/Installation/figures/startparam.png rename to docs/en/Server/InstallationUpgrade/Installation/figures/startparam.png diff --git a/docs/en/docs/Installation/install-pi.md b/docs/en/Server/InstallationUpgrade/Installation/install-pi.md similarity index 100% rename from docs/en/docs/Installation/install-pi.md rename to docs/en/Server/InstallationUpgrade/Installation/install-pi.md diff --git a/docs/en/Server/InstallationUpgrade/Installation/installation-guide-1.md b/docs/en/Server/InstallationUpgrade/Installation/installation-guide-1.md new file mode 100644 index 0000000000000000000000000000000000000000..07c21a9435a6ec4343a3a9a615b1c30ef0aeefeb --- /dev/null +++ b/docs/en/Server/InstallationUpgrade/Installation/installation-guide-1.md @@ -0,0 +1,186 @@ +# Installation Guide + +This section describes how to enable the Raspberry Pi function after [Writing Raspberry Pi Images into the SD card](installation-modes-1.md). 
+
+
+
+- [Installation Guide](#installation-guide)
+  - [Starting the System](#starting-the-system)
+  - [Logging in to the System](#logging-in-to-the-system)
+  - [Configuring the System](#configuring-the-system)
+    - [Expanding the Root Directory Partition](#expanding-the-root-directory-partition)
+    - [Connecting to the Wi-Fi Network](#connecting-to-the-wi-fi-network)
+
+
+## Starting the System
+
+After the image is written to the SD card, insert the SD card into the Raspberry Pi and power on the board.
+
+For details about the Raspberry Pi hardware, visit the [Raspberry Pi official website](https://www.raspberrypi.org/).
+
+## Logging in to the System
+
+You can log in to the Raspberry Pi in either of the following ways:
+
+1. Local login
+
+    Connect the Raspberry Pi to a monitor (the video output interface of the Raspberry Pi is Micro HDMI), keyboard, and mouse, and start the Raspberry Pi. The startup log is displayed on the monitor. After the Raspberry Pi starts, enter the user name **root** and password **openeuler** to log in.
+
+2. SSH remote login
+
+    By default, the Raspberry Pi obtains its IP address automatically over DHCP. If the Raspberry Pi is connected to a known router, you can log in to the router and check its client list; the newly added IP address is that of the Raspberry Pi.
+
+    For example, if the IP address of the Raspberry Pi is **192.168.31.109**, you can run the `ssh root@192.168.31.109` command and enter the password `openeuler` to log in to the Raspberry Pi remotely.
+
+## Configuring the System
+
+### Expanding the Root Directory Partition
+
+The default root directory partition is small, so you need to expand its capacity before use.
+
+To expand the root directory partition, perform the following procedure:
+
+1. Run the `fdisk -l` command as the root user to check the drive partition information. The command output is as follows:
+
+    ```shell
+    # fdisk -l
+    Disk /dev/mmcblk0: 14.86 GiB, 15931539456 bytes, 31116288 sectors
+    Units: sectors of 1 * 512 = 512 bytes
+    Sector size (logical/physical): 512 bytes / 512 bytes
+    I/O size (minimum/optimal): 512 bytes / 512 bytes
+    Disklabel type: dos
+    Disk identifier: 0xf2dc3842
+
+    Device         Boot   Start     End Sectors  Size Id Type
+    /dev/mmcblk0p1 *       8192  593919  585728  286M  c W95 FAT32 (LBA)
+    /dev/mmcblk0p2       593920 1593343  999424  488M 82 Linux swap / Solaris
+    /dev/mmcblk0p3      1593344 5044223 3450880  1.7G 83 Linux
+    ```
+
+    The device name of the SD card is **/dev/mmcblk0**, which contains three partitions:
+
+    - **/dev/mmcblk0p1**: boot partition
+    - **/dev/mmcblk0p2**: swap partition
+    - **/dev/mmcblk0p3**: root directory partition
+
+    Here, we need to expand the capacity of `/dev/mmcblk0p3`.
+
+2. Run the `fdisk /dev/mmcblk0` command as the root user to open the interactive command-line interface (CLI). To expand the partition capacity, perform the following procedure, as shown in Figure 1.
+
+    1. Enter `p` to check the partition information.
+
+        Record the start sector number of `/dev/mmcblk0p3`, that is, the value in the `Start` column of the `/dev/mmcblk0p3` information. In the example, the start sector number is `1593344`.
+
+    2. Enter `d` to delete a partition.
+
+    3. Enter `3` or press `Enter` to delete the partition whose number is `3`, that is, `/dev/mmcblk0p3`.
+
+    4. Enter `n` to create a partition.
+
+    5. Enter `p` or press `Enter` to create a partition of the `Primary` type.
+
+    6. Enter `3` or press `Enter` to create a partition whose number is `3`, that is, `/dev/mmcblk0p3`.
+
+    7. Enter the start sector number of the new partition, that is, the start sector number recorded in Step `1`. In the example, the start sector number is `1593344`.
+
+        > ![](./public_sys-resources/icon-notice.gif)**NOTE:**
+        > Do not press **Enter** to accept the default value; enter the recorded start sector number explicitly.
+
+    8. Press `Enter` to use the last sector number by default as the end sector number of the new partition.
+
+    9. Enter `N` when asked whether to remove the existing file system signature, so that the signature is kept.
+
+    10. Enter `w` to save the partition settings and exit the interactive CLI.
+
+    **Figure 1** Expanding the partition capacity
+    ![](./figures/Partition_expansion.png)
+
+3. Run the `fdisk -l` command as the root user to check the drive partition information and ensure that the partitions are correct. The command output is as follows:
+
+    ```shell
+    # fdisk -l
+    Disk /dev/mmcblk0: 14.86 GiB, 15931539456 bytes, 31116288 sectors
+    Units: sectors of 1 * 512 = 512 bytes
+    Sector size (logical/physical): 512 bytes / 512 bytes
+    I/O size (minimum/optimal): 512 bytes / 512 bytes
+    Disklabel type: dos
+    Disk identifier: 0xf2dc3842
+
+    Device         Boot   Start      End  Sectors  Size Id Type
+    /dev/mmcblk0p1 *       8192   593919   585728  286M  c W95 FAT32 (LBA)
+    /dev/mmcblk0p2       593920  1593343   999424  488M 82 Linux swap / Solaris
+    /dev/mmcblk0p3      1593344 31116287 29522944 14.1G 83 Linux
+    ```
+
+4. Run the `resize2fs /dev/mmcblk0p3` command as the root user to expand the file system to the new partition size.
+
+5. Run the `df -lh` command to check the drive space information and ensure that the root directory partition has been expanded.
+
+    > ![](./public_sys-resources/icon-notice.gif)**NOTE:**
+    > If the root directory partition is not expanded, run the `reboot` command to restart the Raspberry Pi and then run the `resize2fs /dev/mmcblk0p3` command as the root user.
+
+### Connecting to the Wi-Fi Network
+
+To connect to the Wi-Fi network, perform the following procedure:
+
+1. Check the IP address and network adapter information.
+
+    `ip a`
+
+    Obtain information about the wireless network adapter **wlan0**:
+
+    ```text
+    1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+        inet 127.0.0.1/8 scope host lo
+           valid_lft forever preferred_lft forever
+        inet6 ::1/128 scope host
+           valid_lft forever preferred_lft forever
+    2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
+        link/ether dc:a6:32:50:de:57 brd ff:ff:ff:ff:ff:ff
+        inet 192.168.31.109/24 brd 192.168.31.255 scope global dynamic noprefixroute eth0
+           valid_lft 41570sec preferred_lft 41570sec
+        inet6 fe80::cd39:a969:e647:3043/64 scope link noprefixroute
+           valid_lft forever preferred_lft forever
+    3: wlan0: mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
+        link/ether e2:e6:99:89:47:0c brd ff:ff:ff:ff:ff:ff
+    ```
+
+2. Scan for available Wi-Fi networks.
+
+    `nmcli dev wifi`
+
+3. Connect to the Wi-Fi network.
+
+    Run the `nmcli dev wifi connect SSID password PWD` command as the root user to connect to the Wi-Fi network.
+
+    In the command, `SSID` indicates the SSID of an available Wi-Fi network scanned in the preceding step, and `PWD` indicates the password of the Wi-Fi network. For example, if the `SSID` is `openEuler-wifi` and the password is `12345678`, the command for connecting to the Wi-Fi network is `nmcli dev wifi connect openEuler-wifi password 12345678`. If the connection is successful, the following information is displayed:
+
+    ```text
+    Device 'wlan0' successfully activated with '26becaab-4adc-4c8e-9bf0-1d63cf5fa3f1'.
+    ```
+
+4. Check the IP address and wireless network adapter information.
+ + `ip a` + + ```text + 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever + 2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000 + link/ether dc:a6:32:50:de:57 brd ff:ff:ff:ff:ff:ff + inet 192.168.31.109/24 brd 192.168.31.255 scope global dynamic noprefixroute eth0 + valid_lft 41386sec preferred_lft 41386sec + inet6 fe80::cd39:a969:e647:3043/64 scope link noprefixroute + valid_lft forever preferred_lft forever + 3: wlan0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 + link/ether dc:a6:32:50:de:58 brd ff:ff:ff:ff:ff:ff + inet 192.168.31.110/24 brd 192.168.31.255 scope global dynamic noprefixroute wlan0 + valid_lft 43094sec preferred_lft 43094sec + inet6 fe80::394:d086:27fa:deba/64 scope link noprefixroute + valid_lft forever preferred_lft forever + ``` diff --git a/docs/en/docs/Installation/installation-guideline.md b/docs/en/Server/InstallationUpgrade/Installation/installation-guide.md similarity index 75% rename from docs/en/docs/Installation/installation-guideline.md rename to docs/en/Server/InstallationUpgrade/Installation/installation-guide.md index 8daaf4b7d4efa72da7f340df04af0eeba61adf90..fd74ba15797ee8a249e72b0bb14c8c568c4c4f2f 100644 --- a/docs/en/docs/Installation/installation-guideline.md +++ b/docs/en/Server/InstallationUpgrade/Installation/installation-guide.md @@ -1,329 +1,331 @@ -# Installation Guideline - -This section describes how to install openEuler using a CD/DVD-ROM. The installation process is the same for other installation modes except the boot option. - -## Starting the Installation - -### Booting from the CD/DVD-ROM Drive - -Load the ISO image of openEuler from the CD/DVD-ROM drive of the server and restart the server. 
The procedure is as follows: - ->![](./public_sys-resources/icon-note.gif) **Note** ->Before the installation, ensure that the server boots from the CD/DVD-ROM drive preferentially. The following steps describe how to install openEuler using a virtual CD/DVD-ROM drive on the baseboard management controller (BMC). The procedure for installing openEuler from a physical drive is the same as that of a virtual drive. - -1. On the toolbar, click the icon shown in the following figure. - - **Figure 1** Drive icon - ![](./figures/drive-icon.png) - - An image dialog box is displayed, as shown in the following figure. - - **Figure 2** Image dialog box - ![](./figures/image-dialog-box.png) - -2. Select **Image File** and then click **...**. The **Open** dialog box is displayed. -3. Select the image file and click **Open**. In the image dialog box, click **Connect**. If **Connect** changes to **Disconnect**, the virtual CD/DVD-ROM drive is connected to the server. -4. On the toolbar, click the restart icon shown in the following figure to restart the device. - - **Figure 3** Restart icon - ![](./figures/restart-icon.png) - -### Installation Wizard - -A boot menu is displayed after the system is booted using the boot medium. In addition to options for starting the installation program, some other options are available on the boot menu. During system installation, the **Test this media & install openEuler 21.09** mode is used by default. Press the arrow keys on the keyboard to change the selection, and press **Enter** when the desired option is highlighted. - ->![](./public_sys-resources/icon-note.gif) **Note** -> ->- If you do not perform any operations within 1 minute, the system automatically selects the default option **Test this media & install openEuler 21.09** and enters the installation page. 
->- During physical machine installation, if you cannot use the arrow keys to select boot options and the system does not respond after you press **Enter**, click ![](./figures/en-us_image_0229420473.png) on the BMC page and configure **Key & Mouse Reset**. - -**Figure 4** Installation Wizard -![](./figures/figure-4.png) - -Installation wizard options are described as follows: - -- **Install openEuler 21.09**: Install openEuler on your server in GUI mode. - -- **Test this media & install openEuler 21.09**: Default option. Install openEuler on your server in GUI mode. The integrity of the installation medium is checked before the installation program is started. - -- **Troubleshooting**: Troubleshooting mode, which is used when the system cannot be installed properly. In troubleshooting mode, the following options are available: - - **Install openEuler 21.09 in basic graphics mode**: Basic graphics installation mode. In this mode, the video driver is not started before the system starts and runs. - - **Rescue the openEuler system**: Rescue mode, which is used to restore the system. In rescue mode, the installation process is printed to the Virtual Network Computing (VNC) or BMC interface, and the serial port is unavailable. -On the installation wizard screen, press **e** to go to the parameter editing screen of the selected option, and press **c** to go to the command line interface (CLI). - -### Installation in GUI Mode - -On the installation wizard page, select **Test this media & install openEuler 21.09** to enter the GUI installation mode. - -Perform graphical installation operations using a keyboard. - -- Press **Tab** or **Shift+Tab** to move between GUI controls (such as buttons, area boxes, and check boxes). -- Press the up or down arrow key to move a target in the list. -- Press the left or right arrow key to move between the horizontal toolbar and watch bar. 
-- Press the spacebar or **Enter** to select or delete highlighted options, expand or collapse a drop-down list. -- Press **Alt+a shortcut key** (the shortcut key varies for different pages) to select the control where the shortcut key is located. The shortcut key can be highlighted (underlined) by holding down **Alt**. - -## Configuring an Installation Program Language - -After the installation starts, the system will prompt the language that is used during the installation process. English is configured by default, as shown in the following figure. Configure another language as required. - -**Figure 5** Selecting a language -![](./figures/figure-5.png) - -After the language is set, click **Continue**. The installation page is displayed. - -If you want to exit the installation, click **Exit**. The message **Are you sure you want to exit the installation program?** is displayed. Click **Yes** in the dialog box to go back to the installation wizard page. - -## Entering the Installation Page - -After the installation program starts, the installation page is displayed, as shown in the following figure. On the page, you can configure the time, language, installation source, network, and storage device. - -Some configuration items are matched with safety symbols. A safety symbol will disappear after the item is configured. Start the installation only when all the safety symbols disappear from the page. - -If you want to exit the installation, click **Exit**. The message **Are you sure you want to exit the installation program?** is displayed. Click **Yes** in the dialog box to go back to the installation wizard page. - -**Figure 6** Installation summary -![](./figures/figure-6.png) - -## Setting the Keyboard Layout - -On the **INSTALLATION SUMMARY** page, click **KEYBOARD**. You can add or delete multiple keyboard layouts in the system. - -- To view the keyboard layout: Select a keyboard layout in the left box and click **keyboard** under the box. 
-- To test the keyboard layout: Select the keyboard layout in the left box and click keyboard icon in the upper right corner to switch to the desired layout, and then type in the right text box to ensure that the keyboard layout can work properly. - -**Figure 7** Setting the keyboard layout -![](./figures/figure-7.png) - -After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - -## Setting a System Language - -On the **INSTALLATION SUMMARY** page, click **LANGUAGE SUPPORT** to set the system language, as shown in the following figure. Set another language as required, such as Chinese. - ->![](./public_sys-resources/icon-note.gif) **Note** ->If you select **Chinese**, the system does not support the display of Chinese characters when you log in to the system using VNC, but supports the display of Chinese characters when you log in to the system using a serial port. When you log in to the system using SSH, whether the system supports the display of Chinese characters depends on the SSH client. If you select **English**, the display is not affected. - -**Figure 8** Setting a system language - -![](./figures/figure-8.png) - -After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - -## Setting Date and Time - -On the **INSTALLATION SUMMARY** page, click **TIME & DATE**. On the **TIME & DATE** page, set the system time zone, date, and time. - -When setting the time zone, select a region from the drop-down list of **Region** and a city from the drop-down list of **City** at the top of the page, as shown in the following figure. - -If your city is not displayed in the drop-down list, select the nearest city in the same time zone. - ->![](./public_sys-resources/icon-note.gif) **Note** -> ->- Before manually setting the time zone, disable the network time synchronization function in the upper right corner. 
->- If you want to use the network time, ensure that the network can connect to the remote NTP server. For details about how to set the network, see [Setting the Network and Host Name](#setting-the-network-and-host-name). - -**Figure 9** Setting date and time - -![](./figures/figure-9.png) - -After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - -## Setting the Installation Source - -On the **INSTALLATION SUMMARY** page, click **INSTALLATION SOURCE** to locate the installation source. - -- When you use a complete CD/DVD-ROM for installation, the installation program automatically detects and displays the installation source information. You can use the default settings, as shown in the following figure. - - **Figure 10** Installation source - ![](./figures/figure-10.png) - -- When the network source is used for installation, you need to set the URL of the network source. - - - HTTP or HTTPS mode - - The following figure shows the installation source in HTTP or HTTPS mode: - - ![](./figures/http-mode.png) - - If the HTTPS server uses a private certificate, press **e** on the installation wizard page to go to the parameter editing page of the selected option, and add the **inst.noverifyssl** parameter. - - Enter the actual installation source address, for example, ****, where **openEuler-21.09** indicates the version number, and **x86-64** indicates the CPU architecture. Use the actual version number and CPU architecture. - - - FTP mode - - The following figure shows the installation source in FTP mode. Enter the FTP address in the text box. - - ![](./figures/ftp-mode.png) - - You need to set up an FTP server, mount the **openEuler-21.09-x86_64-dvd.iso** image, and copy the mounted files to the shared directory on the FTP server. **x86_64** indicates the CPU architecture. Use the actual image. - - - NFS mode - - The following figure shows the installation source in NFS mode. 
Enter the NFS address in the text box. - - ![](./figures/nfs-mode.png) - - You need to set up an NFS server, mount the **openEuler-21.09-x86_64-dvd.iso** image, and copy the mounted file to the shared directory on the NFS server. **x86_64** indicates the CPU architecture. Use the actual image. - -During the installation, if you have any questions about configuring the installation source, see [An Exception Occurs During the Selection of the Installation Source](https://gitee.com/openeuler/docs/blob/5232a58d1e76f59c50d68183bdfd3f6dc1603390/docs/en/docs/Installation/faqs.md#an-exception-occurs-during-the-selection-of-the-installation-source). - -After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - -## Selecting Installation Software - -On the **INSTALLATION SUMMARY** page, click **SOFTWARE SELECTION** to specify the software package to be installed. - -Based on the actual requirements, select **Minimal Install** in the left box and select an add-on in the **Additional software for Selected Environment** area in the right box, as shown in the following figure. - -**Figure 11** Selecting installation software -![](./figures/figure-11.png) - ->![](./public_sys-resources/icon-note.gif) **Note** -> ->- In **Minimal Install** mode, not all packages in the installation source will be installed. If the required package is not installed, you can mount the installation source to the local host as a repo source, and use DNF to install the package. ->- If you select **Virtualization Host**, the virtualization components QEMU, libvirt, and edk2 are installed by default. You can select whether to install the OVS component in the add-on area. - -After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. 
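The note above mentions mounting the installation source to the local host as a repo source so that DNF can install packages missing from a **Minimal Install**. As a hedged sketch of that workflow (the mount point `/mnt/iso` and the repo ID `local-iso` are illustrative, not from the original), the repo definition could be generated like this:

```shell
# Sketch: build a DNF repo definition for an installation ISO that is assumed
# to be mounted at /mnt/iso, e.g. via:
#   mount -o loop,ro openEuler-21.09-x86_64-dvd.iso /mnt/iso
# The repo ID "local-iso" and the paths are illustrative.
iso_mount=/mnt/iso
repo_file=$(cat <<EOF
[local-iso]
name=openEuler local ISO
baseurl=file://${iso_mount}
enabled=1
gpgcheck=0
EOF
)
# Print the generated definition; writing it to /etc/yum.repos.d/ requires root.
echo "$repo_file"
```

Saving this content as `/etc/yum.repos.d/local-iso.repo` (as root) would then allow `dnf install <package>` to resolve packages from the mounted image without network access.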
- -## Setting the Installation Destination - -On the **INSTALLATION SUMMARY** page, click **INSTALLATION DESTINATION** to select the OS installation disk and partition. - -You can view available local storage devices in the following figure. - -**Figure 12** Setting the installation destination -![](./figures/figure-12.png) - -### Storage Configuration - -On the **INSTALLATION DESTINATION** page, configure storage for system partition. You can either manually configure partitions or select **Automatic** for automatic partitioning. - ->![](./public_sys-resources/icon-note.gif) **Note** -> ->- During partitioning, to ensure system security and performance, you are advised to divide the device into the following partitions: **/boot**, **/var**, **/var/log**, **/var/log/audit**, **/home**, and **/tmp**. See [Table 1](#table1). ->- If the system is configured with the **swap** partition, the **swap** partition is used when the physical memory of the system is insufficient. Although the **swap** partition can be used to expand the physical memory, if it is used due to insufficient memory, the system response slows and the system performance deteriorates. Therefore, you are not advised to configure it in the system with sufficient physical memory or in the performance sensitive system. ->- If you need to split a logical volume group, select **Custom** to manually partition the logical volume group. On the **MANUAL PARTITIONING** page, click **Modify** in the **Volume Group** area to reconfigure the logical volume group. - -**Table 1** Suggested disk partitions - -| Partition Type | Partition Type | Partition Size | Description | -| --- | --- | --- | --- | -| Primary partition | / | 20 GB | Root directory used to install the OS. | -| Primary partition | /boot | 1 GB | Boot partition. | -| Primary partition | /swap | 16 GB | Swap partition. | -| Logical partition | /home | 1 GB | Stores local user data. | -| Logical partition | /tmp | 10 GB | Stores temporary files. 
| -| Logical partition | /var | 5 GB | Stores the dynamic data of the daemon process and other system service processes. | -| Logical partition | /var/log | 10 GB | Stores system log data. | -| Logical partition | /var/log/audit | 2 GB | Stores system audit log data. | -| Logical partition | /usr| 5 GB | Stores shared and read-only applications. | -| Logical partition | /var/tmp | 5 GB | Stores temporary files that can be retained during system reboot. | -| Logical partition | /opt | Size of the remaining disk space. | Used to install application software. | - -**Automatic** - -Select **Automatic** if the software is installed in a new storage device or the data in the storage device is not required. After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - -**Custom** - -If you need to manually partition the disk, click **Custom** and click **Done** in the upper left corner. The following page is displayed. - -On the **MANUAL PARTITIONING** page, you can partition the disk in either of the following ways. After the partitioning is completed, the window shown in the following figure is displayed. - -- Automatic creation: Click **Click here to create them automatically**. The system automatically assigns four mount points according to the available storage space: **/boot**, **/**, **/home**, **/boot/efi**, and **swap**. - -- Manual creation: Click ![](./figures/en-us_image_0229291243.png) to add a mount point. It is recommended that the expected capacity of each mount point not exceed the available space. - - >![](./public_sys-resources/icon-note.gif) **Note** - >If the expected capacity of the mount point exceeds the available space, the system allocates the remaining available space to the mount point. 
- -**Figure 13** MANUAL PARTITIONING page -![](./figures/figure-13.png) - ->![](./public_sys-resources/icon-note.gif) **Note** ->If non-UEFI mode is selected, the **/boot/efi** partition is not required. Otherwise, it is required. - -After the setting is complete, click **Done** in the upper left corner to go back to the **SUMMARY OF CHANGES** page. - -Click **Accept Changes** to go back to the **INSTALLATION SUMMARY** page. - -## Setting the Network and Host Name - -On the **INSTALLATION SUMMARY** page, select **NETWORK & HOST NAME** to configure the system network functions. - -The installation program automatically detects a local access interface. The detected interface is listed in the left box, and the interface details are displayed in the right area, as shown in [Figure 14](#zh-cn_topic_0186390264_zh-cn_topic_0122145831_fig123700157297). You can enable or disable a network interface by clicking the switch in the upper right corner of the page. The switch is turned off by default. If the installation source is set to network, turn on the switch. You can also click **Configure** to configure the selected interface. Select **Connect automatically with priority** to enable the NIC automatic startup upon system startup, as shown in [Figure 15](#zh-cn_topic_0186390264_zh-cn_topic_0122145831_fig6). - -In the lower left box, enter the host name. The host name can be the fully quantified domain name (FQDN) in the format of *hostname.domain_name* or the brief host name in the format of *hostname*. - -**Figure 14** Setting the network and host name -![](./figures/figure-14.png) - -**Figure 15** Configuring the network -![](./figures/figure-15.png) - -After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - -## Setting the Root Password - -Select Root Password on the **INSTALLATION SUMMARY** page. The **Root Password** page is displayed, as shown in the following figure. 
Enter a password based on [Password Complexity](#password-complexity) requirements and confirm the password. - ->![](./public_sys-resources/icon-note.gif) **Note** -> ->- The **root** account is used to perform key system management tasks. You are not advised to use the **root** account for daily work or system access. ->- If you select **Lock root account** on the **Root Password** page, the **root** account will be disabled. - -**Figure 16** root password -![](./figures/figure-16.png) - -### Password Complexity - -The password of the **root** user or the password of the new user must meet the password complexity requirements. Otherwise, the password configuration or user creation will fail. The password complexity requirements are as follows: - -1. A password must contain at least eight characters. - -2. A password must contain at least three of the following types: uppercase letters, lowercase letters, digits, and special characters. - -3. A password must be different from the account name. - -4. A password cannot contain words in the dictionary. - - >![](./public_sys-resources/icon-note.gif) **Note** - >In the installed openEuler environment, you can run the`cracklib-unpacker /usr/share/cracklib/pw_dict > dictionary.txt` command to export the dictionary library file **dictionary.txt**, and then check whether the password is in the dictionary. - -After the settings are completed, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - -## Creating a User - -Click **User Creation**. [Figure 17](#zh-cn_topic_0186390266_zh-cn_topic_0122145909_fig1237715313319) shows the page for creating a user. Enter a username and set a password. By clicking **Advanced**, you can also configure a home directory and a user group, as shown in [Figure 18](#zh-cn_topic_0186390266_zh-cn_topic_0122145909_fig128716531312). 
- -**Figure 17** Creating a user -![](./figures/figure-17.png) - -**Figure 18** Advanced user configuration -![](./figures/figure-18.png) - -After configuration, click **Done** in the left-upper corner to switch back to the **INSTALLATION SUMMARY** page. - -## Starting Installation - -On the installation page, after all the mandatory items are configured, the alarm symbols will disappear. Then, you can click **Begin Installation** to install the system. - -## Installation Procedure - -After the installation starts, the overall installation progress and the progress of writing the software package to the system are displayed. See the following figure. - ->![](./figures/en-us_image_0213178479.png) ->If you click **Exit** or reset or power off the server during the installation, the installation is interrupted and the system is unavailable. In this case, you need to reinstall the system. - -**Figure 19** Installation process -![](./figures/figure-19.png) - -## Completing the Installation - -After openEuler is installed, Click **Reboot** to restart the system. - ->![](./public_sys-resources/icon-note.gif) **Note** -> ->- If a physical CD/DVD-ROM is used for installation and it is not automatically ejected during the restart, manually remove it. Then, the openEuler CLI login page is displayed. ->- If a virtual CD/DVD-ROM is used for installation, change the server boot option to **Hard Disk** and restart the server. Then, the openEuler CLI login page is displayed. +# Installation Guideline + +This section describes how to install openEuler using a CD/DVD-ROM. The installation process is the same for other installation modes except the boot option. + +## Starting the Installation + +### Booting from the CD/DVD-ROM Drive + +Load the ISO image of openEuler from the CD/DVD-ROM drive of the server and restart the server. 
The procedure is as follows: + +> ![](./public_sys-resources/icon-note.gif) **Note** +> Before the installation, ensure that the server boots from the CD/DVD-ROM drive preferentially. The following steps describe how to install openEuler using a virtual CD/DVD-ROM drive on the baseboard management controller (BMC). The procedure for installing openEuler from a physical drive is the same as that of a virtual drive. + +1. On the toolbar, click the icon shown in the following figure. + + **Figure 1** Drive icon + ![](./figures/drive-icon.png) + + An image dialog box is displayed, as shown in the following figure. + + **Figure 2** Image dialog box + ![](./figures/image-dialog-box.png) + +2. Select **Image File** and then click **...**. The **Open** dialog box is displayed. +3. Select the image file and click **Open**. In the image dialog box, click **Connect**. If **Connect** changes to **Disconnect**, the virtual CD/DVD-ROM drive is connected to the server. +4. On the toolbar, click the restart icon shown in the following figure to restart the device. + + **Figure 3** Restart icon + ![](./figures/restart-icon.png) + +### Installation Wizard + +A boot menu is displayed after the system is booted using the boot medium. In addition to options for starting the installation program, some other options are available on the boot menu. During system installation, the **Test this media & install openEuler 21.09** mode is used by default. Press the arrow keys on the keyboard to change the selection, and press **Enter** when the desired option is highlighted. + +> ![](./public_sys-resources/icon-note.gif) **Note** +> +> - If you do not perform any operations within 1 minute, the system automatically selects the default option **Test this media & install openEuler 21.09** and enters the installation page. 
+> - During physical machine installation, if you cannot use the arrow keys to select boot options and the system does not respond after you press **Enter**, click ![](./figures/en-us_image_0229420473.png) on the BMC page and configure **Key & Mouse Reset**. + +**Figure 4** Installation Wizard +![](./figures/figure-4.png) + +Installation wizard options are described as follows: + +- **Install openEuler 21.09**: Install openEuler on your server in GUI mode. + +- **Test this media & install openEuler 21.09**: Default option. Install openEuler on your server in GUI mode. The integrity of the installation medium is checked before the installation program is started. + +- **Troubleshooting**: Troubleshooting mode, which is used when the system cannot be installed properly. In troubleshooting mode, the following options are available: + - **Install openEuler 21.09 in basic graphics mode**: Basic graphics installation mode. In this mode, the video driver is not started before the system starts and runs. + - **Rescue the openEuler system**: Rescue mode, which is used to restore the system. In rescue mode, the installation process is printed to the Virtual Network Computing (VNC) or BMC interface, and the serial port is unavailable. + +On the installation wizard screen, press **e** to go to the parameter editing screen of the selected option, and press **c** to go to the command line interface (CLI). + +### Installation in GUI Mode + +On the installation wizard page, select **Test this media & install openEuler 21.09** to enter the GUI installation mode. + +Perform graphical installation operations using a keyboard. + +- Press **Tab** or **Shift+Tab** to move between GUI controls (such as buttons, area boxes, and check boxes). +- Press the up or down arrow key to move a target in the list. +- Press the left or right arrow key to move between the horizontal toolbar and watch bar. 
+- Press the spacebar or **Enter** to select or deselect a highlighted option, or to expand or collapse a drop-down list. +- Press **Alt**+*shortcut key* (the shortcut key varies for different pages) to jump to the control bound to that shortcut key. Hold down **Alt** to highlight (underline) the available shortcut keys. + +## Configuring an Installation Program Language + +After the installation starts, the system prompts you to select the language to use during the installation process. English is configured by default, as shown in the following figure. Configure another language as required. + +**Figure 5** Selecting a language +![](./figures/figure-5.png) + +After the language is set, click **Continue**. The installation page is displayed. + +If you want to exit the installation, click **Exit**. The message **Are you sure you want to exit the installation program?** is displayed. Click **Yes** in the dialog box to go back to the installation wizard page. + +## Entering the Installation Page + +After the installation program starts, the installation page is displayed, as shown in the following figure. On the page, you can configure the time, language, installation source, network, and storage device. + +Some configuration items are marked with alarm symbols. An alarm symbol disappears after the corresponding item is configured. Start the installation only when all the alarm symbols have disappeared from the page. + +If you want to exit the installation, click **Exit**. The message **Are you sure you want to exit the installation program?** is displayed. Click **Yes** in the dialog box to go back to the installation wizard page. + +**Figure 6** Installation summary +![](./figures/figure-6.png) + +## Setting the Keyboard Layout + +On the **INSTALLATION SUMMARY** page, click **KEYBOARD**. You can add or delete multiple keyboard layouts in the system. + +- To view the keyboard layout: Select a keyboard layout in the left box and click the **keyboard** button under the box.
+- To test the keyboard layout: Select the keyboard layout in the left box, click the keyboard icon in the upper right corner to switch to the desired layout, and then type in the right text box to ensure that the keyboard layout works properly. + +**Figure 7** Setting the keyboard layout +![](./figures/figure-7.png) + +After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. + +## Setting a System Language + +On the **INSTALLATION SUMMARY** page, click **LANGUAGE SUPPORT** to set the system language, as shown in the following figure. Set another language as required, such as Chinese. + +> ![](./public_sys-resources/icon-note.gif) **Note** +> If you select **Chinese**, the system does not support the display of Chinese characters when you log in to the system using VNC, but supports the display of Chinese characters when you log in to the system using a serial port. When you log in to the system using SSH, whether the system supports the display of Chinese characters depends on the SSH client. If you select **English**, the display is not affected. + +**Figure 8** Setting a system language + +![](./figures/figure-8.png) + +After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. + +## Setting Date and Time + +On the **INSTALLATION SUMMARY** page, click **TIME & DATE**. On the **TIME & DATE** page, set the system time zone, date, and time. + +When setting the time zone, select a region from the **Region** drop-down list and a city from the **City** drop-down list at the top of the page, as shown in the following figure. + +If your city is not displayed in the drop-down list, select the nearest city in the same time zone. + +> ![](./public_sys-resources/icon-note.gif) **Note** +> +> - Before manually setting the time zone, disable the network time synchronization function in the upper right corner.
+> - If you want to use the network time, ensure that the network can connect to the remote NTP server. For details about how to set the network, see [Setting the Network and Host Name](#setting-the-network-and-host-name). + +**Figure 9** Setting date and time + +![](./figures/figure-9.png) + +After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. + +## Setting the Installation Source + +On the **INSTALLATION SUMMARY** page, click **INSTALLATION SOURCE** to locate the installation source. + +- When you use a complete CD/DVD-ROM for installation, the installation program automatically detects and displays the installation source information. You can use the default settings, as shown in the following figure. + + **Figure 10** Installation source + ![](./figures/figure-10.png) + +- When the network source is used for installation, you need to set the URL of the network source. + + - HTTP or HTTPS mode + + The following figure shows the installation source in HTTP or HTTPS mode: + + ![](./figures/http-mode.png) + + If the HTTPS server uses a private certificate, press **e** on the installation wizard page to go to the parameter editing page of the selected option, and add the **inst.noverifyssl** parameter. + + Enter the actual installation source address, for example, ****, where **openEuler-21.09** indicates the version number, and **x86-64** indicates the CPU architecture. Use the actual version number and CPU architecture. + + - FTP mode + + The following figure shows the installation source in FTP mode. Enter the FTP address in the text box. + + ![](./figures/ftp-mode.png) + + You need to set up an FTP server, mount the **openEuler-21.09-x86_64-dvd.iso** image, and copy the mounted files to the shared directory on the FTP server. **x86_64** indicates the CPU architecture. Use the actual image. + + - NFS mode + + The following figure shows the installation source in NFS mode. 
Enter the NFS address in the text box. + + ![](./figures/nfs-mode.png) + + You need to set up an NFS server, mount the **openEuler-21.09-x86_64-dvd.iso** image, and copy the mounted file to the shared directory on the NFS server. **x86_64** indicates the CPU architecture. Use the actual image. + +During the installation, if you have any questions about configuring the installation source, see [An Exception Occurs During the Selection of the Installation Source](https://gitee.com/openeuler/docs/blob/5232a58d1e76f59c50d68183bdfd3f6dc1603390/docs/en/docs/Installation/faqs.md#an-exception-occurs-during-the-selection-of-the-installation-source). + +After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. + +## Selecting Installation Software + +On the **INSTALLATION SUMMARY** page, click **SOFTWARE SELECTION** to specify the software package to be installed. + +Based on the actual requirements, select **Minimal Install** in the left box and select an add-on in the **Additional software for Selected Environment** area in the right box, as shown in the following figure. + +**Figure 11** Selecting installation software +![](./figures/figure-11.png) + +> ![](./public_sys-resources/icon-note.gif) **Note** +> +> - In **Minimal Install** mode, not all packages in the installation source will be installed. If the required package is not installed, you can mount the installation source to the local host as a repo source, and use DNF to install the package. +> - If you select **Virtualization Host**, the virtualization components QEMU, libvirt, and edk2 are installed by default. You can select whether to install the OVS component in the add-on area. + +After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. 
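The Minimal Install note above suggests mounting the installation source on the local host as a repo source and installing missing packages with DNF. A minimal shell sketch of that workflow follows; the ISO path, the mount point, and the repo ID `local-dvd` are illustrative assumptions, not fixed names:

```shell
# Sketch: use the installation ISO as a local DNF repository after a Minimal Install.
# Paths and the "local-dvd" repo ID are examples -- adjust to your environment.

ISO=/root/openEuler-21.09-x86_64-dvd.iso   # where the ISO actually is
MNT=/mnt/cdrom                             # mount point for the ISO
REPOFILE=/etc/yum.repos.d/local-dvd.repo   # repo definition to create

write_repo_file() {
    # $1: target .repo path, $2: directory where the ISO is mounted
    cat > "$1" <<EOF
[local-dvd]
name=openEuler local DVD
baseurl=file://$2
enabled=1
gpgcheck=0
EOF
}

# The following steps require root:
# mkdir -p "$MNT"
# mount -o loop,ro "$ISO" "$MNT"
# write_repo_file "$REPOFILE" "$MNT"
# dnf --repo local-dvd install <package>
```

Once the commented mount and repo-file steps have been run as root, DNF can install packages directly from the mounted ISO without network access.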
+ +## Setting the Installation Destination + +On the **INSTALLATION SUMMARY** page, click **INSTALLATION DESTINATION** to select the OS installation disk and partition. + +You can view available local storage devices in the following figure. + +**Figure 12** Setting the installation destination +![](./figures/figure-12.png) + +### Storage Configuration + +On the **INSTALLATION DESTINATION** page, configure storage for system partitions. You can either manually configure partitions or select **Automatic** for automatic partitioning. + +> ![](./public_sys-resources/icon-note.gif) **Note** +> +> - During partitioning, to ensure system security and performance, you are advised to divide the device into the following partitions: **/boot**, **/var**, **/var/log**, **/var/log/audit**, **/home**, and **/tmp**. See [Table 1](#table1). +> - If the system is configured with the **swap** partition, the **swap** partition is used when the physical memory of the system is insufficient. Although the **swap** partition can extend the available memory, swapping caused by insufficient physical memory slows system response and degrades performance. Therefore, you are not advised to configure a **swap** partition on systems with sufficient physical memory or on performance-sensitive systems. +> - If you need to split a logical volume group, select **Custom** to manually partition the logical volume group. On the **MANUAL PARTITIONING** page, click **Modify** in the **Volume Group** area to reconfigure the logical volume group. + +**Table 1** Suggested disk partitions + +| Partition Type | Mount Point | Partition Size | Description | +| --- | --- | --- | --- | +| Primary partition | / | 20 GB | Root directory used to install the OS. | +| Primary partition | /boot | 1 GB | Boot partition. | +| Primary partition | swap | 16 GB | Swap partition. | +| Logical partition | /home | 1 GB | Stores local user data. | +| Logical partition | /tmp | 10 GB | Stores temporary files.
| +| Logical partition | /var | 5 GB | Stores the dynamic data of the daemon process and other system service processes. | +| Logical partition | /var/log | 10 GB | Stores system log data. | +| Logical partition | /var/log/audit | 2 GB | Stores system audit log data. | +| Logical partition | /usr | 5 GB | Stores shared and read-only applications. | +| Logical partition | /var/tmp | 5 GB | Stores temporary files that can be retained during system reboot. | +| Logical partition | /opt | Size of the remaining disk space. | Used to install application software. | + +**Automatic** + +Select **Automatic** if the OS is being installed on a new storage device or if the data in the storage device is not required. After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. + +**Custom** + +If you need to manually partition the disk, click **Custom** and click **Done** in the upper left corner. The following page is displayed. + +On the **MANUAL PARTITIONING** page, you can partition the disk in either of the following ways. After the partitioning is completed, the window shown in the following figure is displayed. + +- Automatic creation: Click **Click here to create them automatically**. The system automatically creates the following mount points according to the available storage space: **/boot**, **/**, **/home**, **/boot/efi**, and **swap**. + +- Manual creation: Click ![](./figures/en-us_image_0229291243.png) to add a mount point. It is recommended that the expected capacity of each mount point not exceed the available space. + + > ![](./public_sys-resources/icon-note.gif) **Note** + > If the expected capacity of the mount point exceeds the available space, the system allocates the remaining available space to the mount point.
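For reference, the suggested layout in Table 1 can also be captured for an unattended (kickstart) installation. The lines emitted below are an illustrative sketch only: sizes are converted to MiB, and the ext4 file system type and the `--grow` flag for **/opt** are assumptions, not values taken from this guide.

```shell
# Sketch: emit anaconda kickstart "part" lines matching the Table 1 layout.
# Sizes are in MiB; file system types are assumed to be ext4.
emit_parts() {
    cat <<'EOF'
part /boot --fstype=ext4 --size=1024
part / --fstype=ext4 --size=20480
part swap --size=16384
part /home --fstype=ext4 --size=1024
part /tmp --fstype=ext4 --size=10240
part /var --fstype=ext4 --size=5120
part /var/log --fstype=ext4 --size=10240
part /var/log/audit --fstype=ext4 --size=2048
part /usr --fstype=ext4 --size=5120
part /var/tmp --fstype=ext4 --size=5120
part /opt --fstype=ext4 --size=1024 --grow
EOF
}
```

A real kickstart profile would also need disk selection and bootloader settings, and should be validated (for example with pykickstart's `ksvalidator`) before use.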
+ +**Figure 13** MANUAL PARTITIONING page +![](./figures/figure-13.png) + +> ![](./public_sys-resources/icon-note.gif) **Note** +> If non-UEFI mode is selected, the **/boot/efi** partition is not required. Otherwise, it is required. + +After the setting is complete, click **Done** in the upper left corner to go back to the **SUMMARY OF CHANGES** page. + +Click **Accept Changes** to go back to the **INSTALLATION SUMMARY** page. + +## Setting the Network and Host Name + +On the **INSTALLATION SUMMARY** page, select **NETWORK & HOST NAME** to configure the system network functions. + +The installation program automatically detects a local access interface. The detected interface is listed in the left box, and the interface details are displayed in the right area, as shown in [Figure 14](#zh-cn_topic_0186390264_zh-cn_topic_0122145831_fig123700157297). You can enable or disable a network interface by clicking the switch in the upper right corner of the page. The switch is turned off by default. If the installation source is set to network, turn on the switch. You can also click **Configure** to configure the selected interface. Select **Connect automatically with priority** to enable the NIC to start automatically upon system startup, as shown in [Figure 15](#zh-cn_topic_0186390264_zh-cn_topic_0122145831_fig6). + +In the lower left box, enter the host name. The host name can be the fully qualified domain name (FQDN) in the format of *hostname.domain_name* or the brief host name in the format of *hostname*. + +**Figure 14** Setting the network and host name +![](./figures/figure-14.png) + +**Figure 15** Configuring the network +![](./figures/figure-15.png) + +After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. + +## Setting the Root Password + +Select **Root Password** on the **INSTALLATION SUMMARY** page. The **Root Password** page is displayed, as shown in the following figure.
Enter a password based on the [Password Complexity](#password-complexity) requirements and confirm the password. + +> ![](./public_sys-resources/icon-note.gif) **Note** +> +> - The **root** account is used to perform key system management tasks. You are not advised to use the **root** account for daily work or system access. +> - If you select **Lock root account** on the **Root Password** page, the **root** account will be disabled. + +**Figure 16** Root password +![](./figures/figure-16.png) + +### Password Complexity + +The password of the **root** user or the password of the new user must meet the password complexity requirements. Otherwise, the password configuration or user creation will fail. The password complexity requirements are as follows: + +1. A password must contain at least eight characters. + +2. A password must contain characters of at least three of the following types: uppercase letters, lowercase letters, digits, and special characters. + +3. A password must be different from the account name. + +4. A password cannot contain words in the dictionary. + + > ![](./public_sys-resources/icon-note.gif) **Note** + > In the installed openEuler environment, you can run the `cracklib-unpacker /usr/share/cracklib/pw_dict > dictionary.txt` command to export the dictionary library file **dictionary.txt**, and then check whether the password is in the dictionary. + +After the settings are complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. + +## Creating a User + +Click **User Creation**. [Figure 17](#zh-cn_topic_0186390266_zh-cn_topic_0122145909_fig1237715313319) shows the page for creating a user. Enter a username and set a password. By clicking **Advanced**, you can also configure a home directory and a user group, as shown in [Figure 18](#zh-cn_topic_0186390266_zh-cn_topic_0122145909_fig128716531312).
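The password complexity rules above apply both to the **root** password and to passwords of newly created users, and rules 1 to 3 are mechanical enough to check in advance. The helper below is an illustrative sketch of such a check (it is not the installer's own validation code); rule 4 additionally requires the **dictionary.txt** file exported with the `cracklib-unpacker` command from the note.

```shell
# Illustrative pre-check for password rules 1-3 (length, character classes,
# account name). Rule 4 (dictionary words) needs the exported dictionary.txt.
check_password() {
    pw=$1; account=$2; classes=0
    [ "${#pw}" -ge 8 ] || return 1                               # rule 1: >= 8 characters
    case $pw in *[A-Z]*) classes=$((classes + 1));; esac          # uppercase present?
    case $pw in *[a-z]*) classes=$((classes + 1));; esac          # lowercase present?
    case $pw in *[0-9]*) classes=$((classes + 1));; esac          # digit present?
    case $pw in *[!A-Za-z0-9]*) classes=$((classes + 1));; esac   # special character?
    [ "$classes" -ge 3 ] || return 1                              # rule 2: >= 3 classes
    [ "$pw" != "$account" ]                                       # rule 3: differs from account name
}
```

For rule 4, something like `grep -xqF "$pw" dictionary.txt` against the exported dictionary would flag dictionary words.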
+ +**Figure 17** Creating a user +![](./figures/figure-17.png) + +**Figure 18** Advanced user configuration +![](./figures/figure-18.png) + +After configuration, click **Done** in the upper left corner to switch back to the **INSTALLATION SUMMARY** page. + +## Starting Installation + +On the installation page, after all the mandatory items are configured, the alarm symbols will disappear. Then, you can click **Begin Installation** to install the system. + +## Installation Procedure + +After the installation starts, the overall installation progress and the progress of writing the software packages to the system are displayed. See the following figure. + +> ![](./figures/en-us_image_0213178479.png) +> If you click **Exit** or reset or power off the server during the installation, the installation is interrupted and the system is unavailable. In this case, you need to reinstall the system. + +**Figure 19** Installation process +![](./figures/figure-19.png) + +## Completing the Installation + +After openEuler is installed, click **Reboot** to restart the system. + +> ![](./public_sys-resources/icon-note.gif) **Note** +> +> - If a physical CD/DVD-ROM is used for installation and it is not automatically ejected during the restart, manually remove it. Then, the openEuler CLI login page is displayed. +> - If a virtual CD/DVD-ROM is used for installation, change the server boot option to **Hard Disk** and restart the server. Then, the openEuler CLI login page is displayed.
diff --git a/docs/en/docs/Installation/Installation-Modes1.md b/docs/en/Server/InstallationUpgrade/Installation/installation-modes-1.md similarity index 94% rename from docs/en/docs/Installation/Installation-Modes1.md rename to docs/en/Server/InstallationUpgrade/Installation/installation-modes-1.md index 4b5318c7de3dbe690af342b540044312a13a5648..0dc69ad9fb754456eb46cf31640149d3841c2244 100644 --- a/docs/en/docs/Installation/Installation-Modes1.md +++ b/docs/en/Server/InstallationUpgrade/Installation/installation-modes-1.md @@ -1,10 +1,10 @@ # Installation Modes -> ![](./public_sys-resources/icon-notice.gif) **NOTE** -> +> ![](./public_sys-resources/icon-notice.gif)**NOTE** +> > - The hardware supports only Raspberry Pi 3B/3B+/4B. > - The installation is performed by writing images to the SD card. This section describes how to write images on Windows, Linux, and Mac. -> - The image used in this section is the Raspberry Pi image of openEuler. For details about how to obtain the image, see [Installation Preparations](Installation-Preparations1.md). +> - The image used in this section is the Raspberry Pi image of openEuler. For details about how to obtain the image, see [Installation Preparations](installation-preparations-1.md). @@ -22,6 +22,7 @@ - [Writing Images to the SD Card](#writing-images-to-the-sd-card-2) + ## Writing Images on Windows This section uses Windows 10 as an example to describe how to write images to the SD card in the Windows environment. @@ -33,9 +34,9 @@ To format the SD card, perform the following procedures: 1. Download and install a SD card formatting tool. The following operations use SD Card Formatter as an example. 2. Start SD Card Formatter. In **Select card**, select the drive letter of the SD card to be formatted. - + If no image has been installed in the SD card, only one drive letter exists. In **Select card**, select the drive letter of the SD card to be formatted. 
- + If an image has been installed in the SD card, one or more drive letters exist. For example, the SD card corresponds to three drive letters: E, G, and H. In **Select card**, you can select the drive letter E of the boot partition. 3. In **Formatting options**, select a formatting mode. The default mode is **Quick format**. @@ -46,8 +47,8 @@ To format the SD card, perform the following procedures: ### Writing Images to the SD Card -> ![](./public_sys-resources/icon-notice.gif) **NOTE** -If the compressed image file **openEuler-21.09-raspi-aarch64.img.xz** is obtained, decompress the file to obtain the **openEuler-21.09-raspi-aarch64.img** image file. +> ![](./public_sys-resources/icon-notice.gif)**NOTE** +> If the compressed image file **openEuler-21.09-raspi-aarch64.img.xz** is obtained, decompress the file to obtain the **openEuler-21.09-raspi-aarch64.img** image file. To write the **openEuler-21.09-raspi-aarch64.img** image file to the SD card, perform the following procedures: diff --git a/docs/en/docs/Installation/installation-modes.md b/docs/en/Server/InstallationUpgrade/Installation/installation-modes.md similarity index 78% rename from docs/en/docs/Installation/installation-modes.md rename to docs/en/Server/InstallationUpgrade/Installation/installation-modes.md index ca5179265e12d8ce7aecddbc67f6d2e74cea010c..4c1db631c01f46475ca248cb1e18a95e296db54d 100644 --- a/docs/en/docs/Installation/installation-modes.md +++ b/docs/en/Server/InstallationUpgrade/Installation/installation-modes.md @@ -1,9 +1,9 @@ # Installation Modes ->![](./public_sys-resources/icon-notice.gif) **NOTICE** +> ![](./public_sys-resources/icon-notice.gif)**NOTICE** > ->- Only TaiShan 200 servers and FusionServer Pro rack server are supported. For details about the supported server models, see [Hardware Compatibility](./installation-preparations.md#hardware-compatibility). 
Only a virtualization platform created by the virtualization components \(openEuler as the host OS and QEMU and KVM provided in the release package\) of openEuler and the x86 virtualization platform of Huawei public cloud are supported. ->- Currently, only installation modes such as DVD-ROM, USB flash drive, network, QCOW2 image, and private image are supported. In addition, only the x86 virtualization platform of Huawei public cloud supports the private image installation mode. +> - Only TaiShan 200 servers and FusionServer Pro rack server are supported. For details about the supported server models, see [Hardware Compatibility](./installation-preparations.md#hardware-compatibility). Only a virtualization platform created by the virtualization components \(openEuler as the host OS and QEMU and KVM provided in the release package\) of openEuler and the x86 virtualization platform of Huawei public cloud are supported. +> - Currently, only installation modes such as DVD-ROM, USB flash drive, network, QCOW2 image, and private image are supported. In addition, only the x86 virtualization platform of Huawei public cloud supports the private image installation mode. @@ -36,8 +36,8 @@ If you have obtained a DVD-ROM, directly install the OS using the DVD-ROM. If yo Perform the following operations to start the installation: ->![](./public_sys-resources/icon-note.gif) **NOTE** ->Set the system to preferentially boot from the DVD-ROM drive. Take BIOS as an example. You need to move the **CD/DVD-ROM Drive** option under **Boot Type Order** to the top. +> ![](./public_sys-resources/icon-note.gif) **NOTE** +> Set the system to preferentially boot from the DVD-ROM drive. Take BIOS as an example. You need to move the **CD/DVD-ROM Drive** option under **Boot Type Order** to the top. 1. Disconnect all drives that are not required, such as USB drives. 2. Start your computer system. @@ -60,8 +60,8 @@ Pay attention to the capacity of the USB flash drive. 
The USB flash drive must h [ 170.171135] sd 5:0:0:0: [sdb] Attached SCSI removable disk ``` - >![](./public_sys-resources/icon-note.gif) **NOTE** - >Take the **sdb** USB flash drive as an example. + > ![](./public_sys-resources/icon-note.gif) **NOTE** + > Take the **sdb** USB flash drive as an example. 2. Switch to user **root**. When running the **su** command, you need to enter the password. @@ -105,13 +105,13 @@ Pay attention to the capacity of the USB flash drive. The USB flash drive must h dd if=/home/testuser/Downloads/openEuler-21.09-aarch64-dvd.iso of=/dev/sdb bs=512k ``` - >![](./public_sys-resources/icon-note.gif) **NOTE** - >As described in ISOLINUX, the ISO 9660 file system created by the **mkisofs** command will boot through BIOS firmware, but only from the CD-ROM, DVD-ROM, or BD. In this case, you need to run the **isohybrid -u your.iso** command to process the ISO file and then run the **dd** command to write the ISO file to the USB flash drive. (This problem affects only the x86 architecture.) + > ![](./public_sys-resources/icon-note.gif) **NOTE** + > As described in ISOLINUX, the ISO 9660 file system created by the **mkisofs** command will boot through BIOS firmware, but only from the CD-ROM, DVD-ROM, or BD. In this case, you need to run the **isohybrid -u your.iso** command to process the ISO file and then run the **dd** command to write the ISO file to the USB flash drive. (This problem affects only the x86 architecture.) 5. After the image is written, remove the USB flash drive. No progress is displayed during the image write process. When the number sign (#) appears again, run the following command to write the data to the drive. Then exit the **root** account and remove the USB flash drive. In this case, you can use the USB drive as the installation source of the system. - + ```bash sync ``` @@ -120,8 +120,8 @@ Pay attention to the capacity of the USB flash drive. 
The USB flash drive must h Perform the following operations to start the installation: ->![](./public_sys-resources/icon-note.gif) **NOTE** ->Set the system to preferentially boot from the USB flash drive. Take the BIOS as an example. You need to move the **USB** option under **Boot Type Order** to the top. +> ![](./public_sys-resources/icon-note.gif) **NOTE** +> Set the system to preferentially boot from the USB flash drive. Take the BIOS as an example. You need to move the **USB** option under **Boot Type Order** to the top. 1. Disconnect all drives that are not required. 2. Open your computer system. @@ -138,8 +138,8 @@ If the target hardware is installed with a PXE-enabled NIC, configure it to boot For installation through the network using PXE, the client uses a PXE-enabled NIC to send a broadcast request for DHCP information and IP address to the network. The DHCP server provides the client with an IP address and other network information, such as the IP address or host name of the DNS and FTP server \(which provides the files required for starting the installation program\), and the location of the files on the server. ->![](./public_sys-resources/icon-note.gif) **NOTE** ->The TFTP, DHCP, and HTTP server configurations are not described here. For details, see [Full-automatic Installation Guide](./using-kickstart-for-automatic-installation.md#full-automatic-installation-guide). +> ![](./public_sys-resources/icon-note.gif) **NOTE** +> The TFTP, DHCP, and HTTP server configurations are not described here. For details, see [Full-automatic Installation Guide](./using-kickstart-for-automatic-installation.md#full-automatic-installation-guide). ## Installation Through a QCOW2 Image @@ -184,7 +184,7 @@ Perform the following operations to start the installation: 5. Create a VM. 6. Start the VM. -For details, see the [*Virtualization User Guide*](./../Virtualization/virtualization.md). 
+For details, see the [*Virtualization User Guide*](../../../Virtualization/VirtualizationPlatform/Virtualization/virtualization.md). ## Installation Through a Private Image diff --git a/docs/en/docs/Installation/install-server.md b/docs/en/Server/InstallationUpgrade/Installation/installation-on-servers.md similarity index 100% rename from docs/en/docs/Installation/install-server.md rename to docs/en/Server/InstallationUpgrade/Installation/installation-on-servers.md diff --git a/docs/en/docs/Installation/Installation-Preparations1.md b/docs/en/Server/InstallationUpgrade/Installation/installation-preparations-1.md similarity index 99% rename from docs/en/docs/Installation/Installation-Preparations1.md rename to docs/en/Server/InstallationUpgrade/Installation/installation-preparations-1.md index e8bf87cbda5ae891dae8721501b8dab9003b00fb..1d8a4c1b158ce51d3fe74eecfa104dfb6fa8d52a 100644 --- a/docs/en/docs/Installation/Installation-Preparations1.md +++ b/docs/en/Server/InstallationUpgrade/Installation/installation-preparations-1.md @@ -45,11 +45,11 @@ To verify the file integrity, perform the following procedures: ```shell sha256sum openEuler-22.03-LTS-SP2-raspi-aarch64.img.xz ``` - + After the command is executed, the verification value is displayed. 3. Check whether the verification values obtained in step 1 and step 2 are consistent. - + If they are consistent, the downloaded file is not damaged. Otherwise, the downloaded file is incomplete and you need to obtain the file again. 
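The step-by-step comparison described above can also be scripted so the two SHA256 values are compared automatically rather than by eye. The snippet below is a sketch: the `printf` line creates a stand-in file so the commands are runnable as-is, and in real use `expected` should be set to the value published on the download page.

```shell
# Stand-in for the downloaded image so this sketch runs as-is;
# use the real file and the published SHA256 value in practice.
img=openEuler-22.03-LTS-SP2-raspi-aarch64.img.xz
printf 'demo-image-content' > "$img"

# In real use, paste the published value instead of computing it here:
expected=$(sha256sum "$img" | awk '{print $1}')

actual=$(sha256sum "$img" | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
    echo "integrity OK"
else
    echo "file incomplete, download it again"
fi
```
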
## Installation Requirements diff --git a/docs/en/docs/Installation/installation-preparations.md b/docs/en/Server/InstallationUpgrade/Installation/installation-preparations.md similarity index 86% rename from docs/en/docs/Installation/installation-preparations.md rename to docs/en/Server/InstallationUpgrade/Installation/installation-preparations.md index 0ad55d5d963d4f962a44cfbaf6137aa4c1f9e9ae..d28ee5185ff2b38bb493f694769f1c1e9d84c084 100644 --- a/docs/en/docs/Installation/installation-preparations.md +++ b/docs/en/Server/InstallationUpgrade/Installation/installation-preparations.md @@ -43,9 +43,8 @@ Compare the verification value recorded in the verification file with the .iso f Before verifying the integrity of the release package, prepare the following files: -ISO file: **openEuler-22.03-LTS-SP2-aarch64-dvd.iso** - -Verification file: Copy and save the **Integrity Check** SHA256 value to a local file. +- ISO file: **openEuler-22.03-LTS-SP2-aarch64-dvd.iso** +- Verification file: Copy and save the **Integrity Check** SHA256 value to a local file. ### Procedures @@ -69,19 +68,7 @@ To install the openEuler OS on a PM, the PM must meet the following requirements ### Hardware Compatibility -You need to take hardware compatibility into account during openEuler installation. [Table 1](#table14948632047) describes the types of supported servers. - ->![](./public_sys-resources/icon-note.gif) **NOTE:** -> ->- TaiShan 200 servers are backed by Huawei Kunpeng 920 processors. ->- Currently, only Huawei TaiShan and FusionServer Pro servers are supported. More servers from other vendors will be supported in the future. - -**Table 1** Supported servers - -| Server Type | Server Name | Server Model | -| :---- | :---- | :---- | -| Rack server | TaiShan 200 | 2280 balanced model | -| Rack server | FusionServer Pro | FusionServer Pro 2288H V5
NOTE:
The server must be configured with the Avago SAS3508 RAID controller card and the LOM-X722 NIC.| +You need to take hardware compatibility into account during openEuler installation. [Compatibility List](https://www.openeuler.org/en/compatibility/) describes the types of supported servers. ### Minimum Hardware Specifications diff --git a/docs/en/docs/Installation/Installation.md b/docs/en/Server/InstallationUpgrade/Installation/installation.md similarity index 100% rename from docs/en/docs/Installation/Installation.md rename to docs/en/Server/InstallationUpgrade/Installation/installation.md diff --git a/docs/en/Server/InstallationUpgrade/Installation/more-resources.md b/docs/en/Server/InstallationUpgrade/Installation/more-resources.md new file mode 100644 index 0000000000000000000000000000000000000000..fab0df5d93e9e7c5283d8d46368d3c4d6402c887 --- /dev/null +++ b/docs/en/Server/InstallationUpgrade/Installation/more-resources.md @@ -0,0 +1,4 @@ +# Reference + +- How to Create a Raspberry Pi Image File +- How to Use Raspberry Pi diff --git a/docs/en/docs/TailorCustom/public_sys-resources/icon-note.gif b/docs/en/Server/InstallationUpgrade/Installation/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/TailorCustom/public_sys-resources/icon-note.gif rename to docs/en/Server/InstallationUpgrade/Installation/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/Container/public_sys-resources/icon-notice.gif b/docs/en/Server/InstallationUpgrade/Installation/public_sys-resources/icon-notice.gif similarity index 100% rename from docs/en/docs/Container/public_sys-resources/icon-notice.gif rename to docs/en/Server/InstallationUpgrade/Installation/public_sys-resources/icon-notice.gif diff --git a/docs/en/docs/Installation/FAQ1.md b/docs/en/Server/InstallationUpgrade/Installation/raspi-common-issues-and-solutions.md similarity index 95% rename from docs/en/docs/Installation/FAQ1.md rename to 
docs/en/Server/InstallationUpgrade/Installation/raspi-common-issues-and-solutions.md index eaab85269015e7eddab588cda34550db5ecb09f3..9fb8fe803d822467d0eb00d76606187152fdd0ec 100644 --- a/docs/en/docs/Installation/FAQ1.md +++ b/docs/en/Server/InstallationUpgrade/Installation/raspi-common-issues-and-solutions.md @@ -1,6 +1,6 @@ -# FAQs +# Common Issues and Solutions -## Failed to Start the Raspberry Pi +## Issue 1: Failed to Start the Raspberry Pi ### Symptom @@ -17,7 +17,7 @@ The possible causes are as follows: Re-write the complete image to the SD card. -## Failed to Connect to Wi-Fi by Running the nmcli Command +## Issue 2: Failed to Connect to Wi-Fi by Running the nmcli Command ### Symptom @@ -46,7 +46,7 @@ Run the `nmtui` command to enter the nmtui utility. Perform the following steps 7. Check whether the added Wi-Fi connection is activated. The name of an activated Wi-Fi connection is marked with an asterisk (*). If the Wi-Fi connection is not activated, select the Wi-Fi connection, press the right arrow key on the keyboard to select **Activate**, and press **Enter** to activate the connection. After the activation is complete, select **Back** and press **Enter** to return to the home screen of the nmtui utility. 8. Select **Quit**, press the right arrow key on the keyboard to select **OK**, and press **Enter** to exit the nmtui utility. 
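For users who prefer staying on the command line, the nmtui walkthrough above can usually be reproduced with a few `nmcli` commands. This is a sketch only; the SSID `MyWiFi` and the password are placeholders, and the exact output depends on the NetworkManager version in use.

```shell
nmcli radio wifi on                                    # ensure the Wi-Fi radio is enabled
nmcli device wifi list                                 # scan for nearby access points
nmcli device wifi connect "MyWiFi" password "MyPasswd" # create and activate the connection
nmcli connection show --active                         # the activated connection is listed here
```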
-## Failed to Install the TensorFlow and Related Packages +## Issue 3: Failed to Install the TensorFlow and Related Packages ### Symptom diff --git a/docs/en/docs/Installation/riscv_more.md b/docs/en/Server/InstallationUpgrade/Installation/risc-v-more.md similarity index 100% rename from docs/en/docs/Installation/riscv_more.md rename to docs/en/Server/InstallationUpgrade/Installation/risc-v-more.md diff --git a/docs/en/docs/Installation/riscv_qemu.md b/docs/en/Server/InstallationUpgrade/Installation/risc-v-qemu.md similarity index 100% rename from docs/en/docs/Installation/riscv_qemu.md rename to docs/en/Server/InstallationUpgrade/Installation/risc-v-qemu.md diff --git a/docs/en/docs/Installation/riscv.md b/docs/en/Server/InstallationUpgrade/Installation/risc-v.md similarity index 100% rename from docs/en/docs/Installation/riscv.md rename to docs/en/Server/InstallationUpgrade/Installation/risc-v.md diff --git a/docs/en/docs/Installation/faqs.md b/docs/en/Server/InstallationUpgrade/Installation/server-installation-common-issues-and-solutions.md similarity index 75% rename from docs/en/docs/Installation/faqs.md rename to docs/en/Server/InstallationUpgrade/Installation/server-installation-common-issues-and-solutions.md index 6404d9c95f640c8b2fbefbd360c5506d98236f51..261d191d5bb23e30f081c8fd957335be5ad8bd6f 100644 --- a/docs/en/docs/Installation/faqs.md +++ b/docs/en/Server/InstallationUpgrade/Installation/server-installation-common-issues-and-solutions.md @@ -1,28 +1,28 @@ -# FAQs +# Common Issues and Solutions -## openEuler Fails to Start After It Is Installed to the Second Disk +## Issue 1: openEuler Fails to Start After It Is Installed to the Second Drive ### Symptom -The OS is installed on the second disk **sdb** during the installation, causing startup failure. +The OS is installed on the second drive **sdb** during the installation, causing startup failure. 
### Possible Causes -When openEuler is installed to the second disk, MBR and GRUB are installed to the second disk **sdb** by default. The following two situations may occur: +When openEuler is installed to the second drive, MBR and GRUB are installed to the second drive **sdb** by default. The following two situations may occur: -1. openEuler installed on the first disk is loaded and started if it is complete. -2. openEuler installed on the first disk fails to be started from hard disks if it is incomplete. +1. openEuler installed on the first drive is loaded and started if it is complete. +2. openEuler installed on the first drive fails to be started from hard drives if it is incomplete. -The preceding two situations occur because the first disk **sda** is booted by default to start openEuler in the BIOS window. If openEuler is not installed on the **sda** disk, system restart fails. +The preceding two situations occur because the first drive **sda** is booted by default to start openEuler in the BIOS window. If openEuler is not installed on the **sda** drive, system restart fails. ### Solutions This problem can be solved using either of the following two methods: -- During the openEuler installation, select the first disk or both disks, and install the boot loader on the first disk **sda**. +- During the openEuler installation, select the first drive or both drives, and install the boot loader on the first drive **sda**. - After installing openEuler, restart it by modifying the boot option in the BIOS window. -## openEuler Enters Emergency Mode After It Is Started +## Issue 2: openEuler Enters Emergency Mode After It Is Started ### Symptom @@ -32,19 +32,19 @@ openEuler enters emergency mode after it is powered on. ### Possible Causes -Damaged OS files result in disk mounting failure, or overpressured I/O results in disk mounting timeout \(threshold: 90s\). 
+Damaged OS files result in drive mounting failure, or overpressured I/O results in drive mounting timeout \(threshold: 90s\). -An unexpected system power-off and low I/O performance of disks may also cause the problem. +An unexpected system power-off and low I/O performance of drives may also cause the problem. ### Solutions 1. Log in to openEuler as the **root** user. 2. Check and restore files by using the file system check \(fsck\) tool, and restart openEuler. - >![fig](./public_sys-resources/icon-note.gif) **NOTE:** - >The fsck tool checks and maintains inconsistent file systems. If the system is powered off or a disk is faulty, run the **fsck** command to check file systems. Run the **fsck.ext3 -h** and **fsck.ext4 -h** commands to view the usage method of the fsck tool. + > ![fig](./public_sys-resources/icon-note.gif) **NOTE:** + > The fsck tool checks and maintains inconsistent file systems. If the system is powered off or a drive is faulty, run the **fsck** command to check file systems. Run the **fsck.ext3 -h** and **fsck.ext4 -h** commands to view the usage method of the fsck tool. -If you want to disable the timeout mechanism of disk mounting, add **x-systemd.device-timeout=0** to the **etc/fstab** file. For example: +If you want to disable the timeout mechanism of drive mounting, add **x-systemd.device-timeout=0** to the **etc/fstab** file. For example: ```sh # @@ -60,11 +60,11 @@ UUID=afcc811f-4b20-42fc-9d31-7307a8cfe0df /boot ext4 defaults,x-systemd.device-t /dev/mapper/openEuler-swap swap swap defaults 0 0 ``` -## openEuler Fails to Be Reinstalled When an Unactivated Logical Volume Group Exists +## Issue 3: openEuler Fails to Be Reinstalled When an Unactivated Logical Volume Group Exists ### Symptom -After a disk fails, openEuler fails to be reinstalled because a logical volume group that cannot be activated exists in openEuler. 
+After a drive fails, openEuler fails to be reinstalled because a logical volume group that cannot be activated exists in openEuler. ### Possible Causes @@ -106,7 +106,7 @@ Before reinstalling openEuler, restore the abnormal logical volume group to the vgremove -y testvg32947 ``` -## An Exception Occurs During the Selection of the Installation Source +## Issue 4: An Exception Occurs During the Selection of the Installation Source ### Symptom @@ -120,7 +120,7 @@ This is because the software package dependency in the installation source is ab Check whether the installation source is abnormal. Use the new installation source. -## Kdump Service Fails to Be Enabled +## Issue 5: Kdump Service Fails to Be Enabled ### Symptom @@ -205,22 +205,22 @@ The following table describes the parameters of the memory reserved for the kdum
-## Fails to Select Only One Disk for Reinstallation When openEuler Is Installed on a Logical Volume Consisting of Multiple Disks +## Issue 6: Fails to Select Only One Drive for Reinstallation When openEuler Is Installed on a Logical Volume Consisting of Multiple Drives ### Symptom -If openEuler is installed on a logical volume consisting of multiple disks, an error message will be displayed as shown in [Figure 1](#fig115949762617) when you attempt to select one of the disks for reinstallation. +If openEuler is installed on a logical volume consisting of multiple drives, an error message will be displayed as shown in [Figure 1](#fig115949762617) when you attempt to select one of the drives for reinstallation. **Figure 1** Error message ![fig](./figures/error-message.png "error-message") ### Possible Causes -The previous logical volume contains multiple disks. If you select one of the disks for reinstallation, the logical volume will be damaged. +The previous logical volume contains multiple drives. If you select one of the drives for reinstallation, the logical volume will be damaged. ### Solutions -The logical volume formed by multiple disks is equivalent to a volume group. Therefore, you only need to delete the corresponding volume group. +The logical volume formed by multiple drives is equivalent to a volume group. Therefore, you only need to delete the corresponding volume group. 1. Press **Ctrl**+**Alt**+**F2** to switch to the CLI and run the following command to find the volume group: @@ -242,10 +242,10 @@ The logical volume formed by multiple disks is equivalent to a volume group. The systemctl restart anaconda ``` - >![fig](./public_sys-resources/icon-note.gif) **NOTE:** - >You can also press **Ctrl**+**Alt**+**F6** to return to the GUI and click **Refresh** in the lower right corner to refresh the storage configuration. 
+ > ![fig](./public_sys-resources/icon-note.gif) **NOTE:** + > You can also press **Ctrl**+**Alt**+**F6** to return to the GUI and click **Refresh** in the lower right corner to refresh the storage configuration. -## openEuler Fails to Be Installed on an x86 PM in UEFI Mode due to Secure Boot Option Setting +## Issue 7: openEuler Fails to Be Installed on an x86 PM in UEFI Mode due to Secure Boot Option Setting ### Symptom @@ -274,10 +274,10 @@ Access the BIOS, set **Secure Boot** to **Disabled**, and reinstall the openE ![fig](./figures/enforce-secure-boot.png) - >![fig](./public_sys-resources/icon-note.gif) **NOTE:** - >After **Enforce Secure Boot** is set to **Disabled**, save the settings and exit. Then, reinstall the system. + > ![fig](./public_sys-resources/icon-note.gif) **NOTE:** + > After **Enforce Secure Boot** is set to **Disabled**, save the settings and exit. Then, reinstall the system. -## pmie_check Is Reported in the messages Log During openEuler Installation +## Issue 8: pmie_check Is Reported in the messages Log During openEuler Installation ### Symptom @@ -305,11 +305,11 @@ After the OS is installed and restarted, perform either of the following two ope ```sh -## Installation Fails when a User Selects Two Disks with OS Installed and Customizes Partitioning +## Issue 9: Installation Fails when a User Selects Two Drives with OS Installed and Customizes Partitioning ### Symptom -During the OS installation, the OS has been installed on two disks. In this case, if you select one disk for custom partitioning, and click **Cancel** to perform custom partitioning on the other disk, the installation fails. +During the OS installation, the OS has been installed on two drives. In this case, if you select one drive for custom partitioning, and click **Cancel** to perform custom partitioning on the other drive, the installation fails. ![fig](./figures/cancle_disk.png) @@ -317,17 +317,17 @@ During the OS installation, the OS has been installed on two disks. 
In this case ### Possible Causes -A user selects a disk for partitioning twice. After the user clicks **Cancel** and then selects the other disk, the disk information is incorrect. As a result, the installation fails. +A user selects a drive for partitioning twice. After the user clicks **Cancel** and then selects the other drive, the drive information is incorrect. As a result, the installation fails. ### Solutions -Select the target disk for custom partitioning. Do not frequently cancel the operation. If you have to cancel and select another disk, you are advised to reinstall the OS. +Select the target drive for custom partitioning. Do not frequently cancel the operation. If you have to cancel and select another drive, you are advised to reinstall the OS. ### Learn More About the Issue at -## vmcore Fails to Be Generated by Kdump on the PM with LSI MegaRAID Card Installed +## Issue 10: vmcore Fails to Be Generated by Kdump on the PM with LSI MegaRAID Card Installed ### Symptom @@ -337,7 +337,7 @@ After the Kdump service is deployed, kernel breaks down due to the manual execut ### Possible Causes -The **reset_devices** parameter is configured by default and is enabled during second kernel startup, making MegaRAID driver or disk faulty. An error is reported when the vmcore file is dumped ana accesses the MegaRAID card. As a result, vmcore fails to be generated. +The **reset_devices** parameter is configured by default and is enabled during second kernel startup, making the MegaRAID driver or drive faulty. An error is reported when the vmcore file is dumped and the MegaRAID card is accessed. As a result, vmcore fails to be generated. 
### Solutions diff --git a/docs/en/docs/Installation/using-kickstart-for-automatic-installation.md b/docs/en/Server/InstallationUpgrade/Installation/using-kickstart-for-automatic-installation.md similarity index 93% rename from docs/en/docs/Installation/using-kickstart-for-automatic-installation.md rename to docs/en/Server/InstallationUpgrade/Installation/using-kickstart-for-automatic-installation.md index e731b9de0638a5da691139f48e0c8f2300db7279..6da8dbf75a8fd91026766a53d41ddcbfe46f0101 100644 --- a/docs/en/docs/Installation/using-kickstart-for-automatic-installation.md +++ b/docs/en/Server/InstallationUpgrade/Installation/using-kickstart-for-automatic-installation.md @@ -1,4 +1,5 @@ # Using Kickstart for Automatic Installation + - [Using Kickstart for Automatic Installation](#using-kickstart-for-automatic-installation) @@ -88,12 +89,12 @@ To use kickstart to perform semi-automatic installation of openEuler, perform th **Environment Preparation** ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->Before the installation, ensure that the firewall of the HTTP server is disabled. Run the following command to disable the firewall: +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> Before the installation, ensure that the firewall of the HTTP server is disabled. Run the following command to disable the firewall: > ->``` ->iptables -F ->``` +> ```shell +> iptables -F +> ``` 1. Install httpd and start the service. @@ -157,19 +158,19 @@ To use kickstart to perform semi-automatic installation of openEuler, perform th ===================================== ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >The method of generating the password ciphertext is as follows: - > - >``` - ># python3 - >Python 3.7.0 (default, Apr 1 2019, 00:00:00) - >[GCC 7.3.0] on linux - >Type "help", "copyright", "credits" or "license" for more information. 
- >>>> import crypt - >>>> passwd = crypt.crypt("myPasswd") - >>>> print (passwd) - >$6$63c4tDmQGn5SDayV$mZoZC4pa9Jdt6/ALgaaDq6mIExiOO2EjzomB.Rf6V1BkEMJDcMddZeGdp17cMyc9l9ML9ldthytBEPVcnboR/0 - >``` + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The method of generating the password ciphertext is as follows: + + ```python + # python3 + Python 3.7.0 (default, Apr 1 2019, 00:00:00) + [GCC 7.3.0] on linux + Type "help", "copyright", "credits" or "license" for more information. + >>> import crypt + >>> passwd = crypt.crypt("myPasswd") + >>> print (passwd) + $6$63c4tDmQGn5SDayV$mZoZC4pa9Jdt6/ALgaaDq6mIExiOO2EjzomB.Rf6V1BkEMJDcMddZeGdp17cMyc9l9ML9ldthytBEPVcnboR/0 + ``` 3. Mount the ISO image file to the CD-ROM drive of the computer where openEuler is to be installed. @@ -209,12 +210,12 @@ To use kickstart to perform full-automatic installation of openEuler, perform th **Environment Preparation** ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->Before the installation, ensure that the firewall of the HTTP server is disabled. Run the following command to disable the firewall: +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> Before the installation, ensure that the firewall of the HTTP server is disabled. Run the following command to disable the firewall: > ->``` ->iptables -F ->``` +> ```shell +> iptables -F +> ``` 1. Install httpd and start the service. 
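The interactive `crypt` session shown earlier also has a one-command equivalent. Note that Python's `crypt` module was removed in Python 3.13, so on newer systems `openssl passwd` is a convenient replacement; the password `myPasswd` below is only an example.

```shell
# Generate a SHA-512 ($6$...) password hash suitable for a kickstart file.
hash=$(openssl passwd -6 myPasswd)
echo "$hash"
```

The resulting string can be pasted into the kickstart file wherever an encrypted password is expected.
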
diff --git a/docs/en/Server/InstallationUpgrade/Menu/index.md b/docs/en/Server/InstallationUpgrade/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..04de2c8321cc02db230319fc0610df15d525bad6 --- /dev/null +++ b/docs/en/Server/InstallationUpgrade/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [Installation Guide]({{< relref "./Installation/Menu/index.md" >}}) +- [Upgrade Guide]({{< relref "./Upgrade/Menu/index.md" >}}) diff --git a/docs/en/Server/InstallationUpgrade/Upgrade/Menu/index.md b/docs/en/Server/InstallationUpgrade/Upgrade/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..b5624bc13c7e57f43f4104ce9ad3a764531eea03 --- /dev/null +++ b/docs/en/Server/InstallationUpgrade/Upgrade/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [Upgrade Guide]({{< relref "./openeuler-22.03-lts-upgrade-and-downgrade-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/os_upgrade_and_downgrade/images/LTS_version.png b/docs/en/Server/InstallationUpgrade/Upgrade/images/LTS_version.png similarity index 100% rename from docs/en/docs/os_upgrade_and_downgrade/images/LTS_version.png rename to docs/en/Server/InstallationUpgrade/Upgrade/images/LTS_version.png diff --git a/docs/en/docs/os_upgrade_and_downgrade/images/SP1_repo.png b/docs/en/Server/InstallationUpgrade/Upgrade/images/SP1_repo.png similarity index 100% rename from docs/en/docs/os_upgrade_and_downgrade/images/SP1_repo.png rename to docs/en/Server/InstallationUpgrade/Upgrade/images/SP1_repo.png diff --git a/docs/en/docs/os_upgrade_and_downgrade/images/SP1_version.png b/docs/en/Server/InstallationUpgrade/Upgrade/images/SP1_version.png similarity index 100% rename from docs/en/docs/os_upgrade_and_downgrade/images/SP1_version.png rename to docs/en/Server/InstallationUpgrade/Upgrade/images/SP1_version.png diff --git a/docs/en/docs/os_upgrade_and_downgrade/openEuler_22.03_LTS_upgrade_and_downgrade.md 
b/docs/en/Server/InstallationUpgrade/Upgrade/openeuler-22.03-lts-upgrade-and-downgrade-guide.md similarity index 82% rename from docs/en/docs/os_upgrade_and_downgrade/openEuler_22.03_LTS_upgrade_and_downgrade.md rename to docs/en/Server/InstallationUpgrade/Upgrade/openeuler-22.03-lts-upgrade-and-downgrade-guide.md index cdbc840ab2208941da7fffcfffcc4aa20f57649d..50254d8c7d67c8a70ef873c6d4bf5127cc3a8bbe 100644 --- a/docs/en/docs/os_upgrade_and_downgrade/openEuler_22.03_LTS_upgrade_and_downgrade.md +++ b/docs/en/Server/InstallationUpgrade/Upgrade/openeuler-22.03-lts-upgrade-and-downgrade-guide.md @@ -1,4 +1,4 @@ -# openEuler 22.03 LTS Upgrade and Downgrade Guide +# Upgrade and Downgrade Guide This document describes how to upgrade openEuler 22.03 LTS to openEuler 22.03 LTS SP1. The operations for other versions are similar. @@ -10,7 +10,7 @@ View the versions of openEuler and the kernel in the current environment. ![LTS_version](./images/LTS_version.png) -## 2. Compatibility Upgrade +## 2. Upgrade Execution ### 2.1 Adding the openEuler 22.03 LTS SP1 Repositories (openEuler-22.03-LTS-SP1.repo) @@ -36,9 +36,9 @@ Note: 1. If an error is reported during the upgrade, run `dnf update --skip-broken -x conflict_pkg1 |tee update_log` to avoid the problem. If multiple packages conflict, use the `-x conflict_pkg1 -x conflict_pkg2 -x conflict_pkg3` options to skip the packages and analyze, validate, and update the conflicted packages after the upgrade. 2. Options: -`--allowerasing`: Allow erasing of installed packages to resolve dependencies. -`--skip-broken`: Resolve dependency problems by skipping packages. -`-x`: Used with `--skip-broken` to specify the packages to be skipped. + `--allowerasing`: Allow erasing of installed packages to resolve dependencies. + `--skip-broken`: Resolve dependency problems by skipping packages. + `-x`: Used with `--skip-broken` to specify the packages to be skipped. 
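Taken together, the upgrade steps in sections 2.1 and 2.2 boil down to a short command sequence. The sketch below assumes the SP1 repo file has already been placed in `/etc/yum.repos.d/`; add the conflict-handling options only if the plain update fails.

```shell
dnf clean all                 # drop cached metadata from the old repositories
dnf makecache                 # fetch metadata for the newly added SP1 repositories
dnf update | tee update_log   # on conflicts: add --allowerasing or --skip-broken -x <pkg>
```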
### 2.3 Rebooting the OS @@ -52,17 +52,17 @@ View the versions of openEuler and the kernel in the current environment. ![SP1_version](./images/SP1_version.png) -## 4. Compatibility Downgrade +## 4. Downgrade Execution ### 4.1 Performing the Downgrade -``` +```bash dnf downgrade | tee downgrade_log ``` ### 4.2 Rebooting the OS -``` +```bash reboot ``` diff --git a/docs/en/Server/Maintenance/A-Ops/Menu/index.md b/docs/en/Server/Maintenance/A-Ops/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..149a6ac658648a777d1d465b9e94b25dcf52eaba --- /dev/null +++ b/docs/en/Server/Maintenance/A-Ops/Menu/index.md @@ -0,0 +1,12 @@ +--- +headless: true +--- +- [A-Ops User Guide]({{< relref "./overview.md" >}}) + - [A-Ops Deployment]({{< relref "./deploying-aops.md" >}}) + - [A-Ops Intelligent Location Framework User Guide]({{< relref "./aops-intelligent-location-framework-user-guide.md" >}}) + - [aops-agent Deployment]({{< relref "./deploying-aops-agent.md" >}}) + - [Hot Patch DNF Plugin Command Usage]({{< relref "./dnf-plugin-command-usage.md" >}}) + - [Configuration Tracing Service]({{< relref "./gala-ragdoll-user-guide.md" >}}) + - [Architecture Awareness Service]({{< relref "./architecture-awareness-service-manual.md" >}}) + - [Community Hot Patch Creation and Release Process]({{< relref "./community-hot-patch-creation-and-release-process.md" >}}) + - [Automated O&M Service]({{< relref "./automated-om-service-manual.md" >}}) diff --git a/docs/en/Server/Maintenance/A-Ops/aops-intelligent-location-framework-user-guide.md b/docs/en/Server/Maintenance/A-Ops/aops-intelligent-location-framework-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..8bcc02fe01c053549e4a200b7cbb544cb02fae60 --- /dev/null +++ b/docs/en/Server/Maintenance/A-Ops/aops-intelligent-location-framework-user-guide.md @@ -0,0 +1,182 @@ +# A-Ops Intelligent Location Framework User Guide + +After deploying the A-Ops frontend and backend services by referring to the [A-Ops Deployment Guide](deploying-aops.md), you can use the A-Ops intelligent location framework. + +The following sections describe the features of the framework from the perspective of each page. + +## 
1. Workbench + + This page is the data dashboard. After logging in, the user lands on this page. + + ![4911661916984_.pic](./figures/工作台.jpg) + +Supported operations: + +- Viewing the number of currently managed hosts +- Viewing the number of all unconfirmed alarms + +- Viewing alarm statistics of each host group + +- User account operations + + - Changing the password + - Logging out +- Service domain and CVE information are not supported yet + +## 2. Asset Management + +Asset management covers host group management and host management. When a host is registered on the agent side, an existing host group must be specified. After registration, the host is displayed on the frontend. + +(1) Host group page: + +![4761661915951_.pic](./figures/主机组.jpg) + +The following operations are supported: + +- Adding host groups +- Deleting host groups +- Viewing all current host groups +- Viewing the host information of each host group + +When adding a host group, specify its name and description. Note: do not use a duplicate name. + +![添加主机组](./figures/添加主机组.jpg) + +(2) Host management page: + +![主机管理](./figures/主机管理.jpg) + +The following operations are supported: + +- Viewing the host list (hosts can be filtered by host group and management node, and sorted by host name) +- Deleting hosts +- Clicking a host to go to the host details page + +(3) Host details page: + +![主机详情](./figures/主机详情.jpg) + +The upper part of the details page shows basic information about the host, such as the OS and CPU. + +![插件管理](./figures/插件管理.jpg) + +In the lower part of the details page, the user can view the collection plugins currently running on the host (the agent currently supports only the gala-gopher plugin). + +The following operations are supported: + +- Viewing basic host information and plugin information +- Managing plugins (gala-gopher) + - Viewing plugin resources + - Enabling and managing plugins + - Enabling and disabling gala-gopher collection probes +- Identifying host scenarios + +After scenario identification is clicked, the system generates the scenario of the host and recommends the plugins and collection items that should be enabled to detect this scenario. The user can adjust the plugins/probes based on the recommendation. + +Note: after modifying plugin information, for example, disabling a plugin or toggling probes, click Save for the changes to take effect. + +![修改插件](./figures/修改插件.png) + +## 3. 
Intelligent Location + +The intelligent location strategy of the A-Ops project uses built-in network diagnosis applications as templates and generates personalized workflows for detection and diagnosis. + +An "application," as the template of a workflow, describes how the detection steps are chained and carries built-in recommendation logic for the detection models used in each step. When generating a workflow, the user can customize the workflow details based on the collection items, scenarios, and other information of each host. + +(1) Workflow list page: + +![工作流](./figures/工作流.jpg) + +Supported operations: + +- Viewing the current workflow list, which can be filtered by host group, application, and status, with pagination supported +- Viewing the current application list + +(2) Workflow details page: + +![工作流详情](./figures/工作流详情.jpg) + +Supported operations: + +- Viewing basic information such as the host group of the workflow, the number of hosts, and the status +- Viewing detailed algorithm model information of the single-metric detection, multi-metric detection, and cluster fault diagnosis steps +- Modifying the models applied in each detection step +- Executing, pausing, and deleting workflows + +When modifying the model of a detection step, the user can search the built-in model library by model name or tag, select a model, and click Apply to make the change. + +![修改模型](./figures/修改模型.png) + +(3) Application details page + +![app详情](./figures/应用.png) + +Supported operations: + +- Viewing the overall flow of the application +- Creating workflows based on the application + +To create a workflow, click the Create Workflow button in the upper right corner, enter the workflow name and description in the window that pops up on the right, and select the host group to be detected. After a host group is selected, all hosts of the group are listed below. The user can select some of the hosts, move them to the list on the right, and finally click Create. The new workflow then appears in the workflow list. + +![app详情](./figures/app详情.jpg) + +![创建工作流](./figures/创建工作流.jpg) + +(4) Alarms + +After a workflow is started, diagnosis is triggered periodically according to the execution period of the workflow. If a diagnosis result is abnormal, it is stored in the database as an alarm and also shown on the frontend alarm page. + +![告警](./figures/告警.jpg) + +Supported operations: + +- Viewing the total number of current alarms +- Viewing the number of alarms of each host group +- Viewing the alarm list +- Confirming alarms +- Viewing alarm details +- Downloading diagnosis reports + +After an alarm is confirmed, it is no longer displayed in the list. + +![告警确认](./figures/告警确认.jpg) + +After clicking the abnormality details, the user can view alarm details by host, including the display of abnormal data items and the judgment of root cause nodes and root cause abnormalities. + +![告警详情](./figures/告警详情.jpg) + +## 4. 
Configuration Tracing + +Configuration tracing in the A-Ops project detects and records changes to the content of configuration files on target hosts, which strongly supports locating faults caused by configuration errors. + +### Creating a configuration domain + +![](./figures/chuangjianyewuyu.png) + +### Adding managed nodes to a configuration domain + +![](./figures/tianjianode.png) + +### Adding configurations to a configuration domain + +![](./figures/xinzengpeizhi.png) + +### Querying expected configurations + +![](./figures/chakanyuqi.png) + +### Deleting configurations + +![](./figures/shanchupeizhi.png) + +### Querying actual configurations + +![](./figures/chaxunshijipeizhi.png) + +### Configuration verification + +![](./figures/zhuangtaichaxun.png) + +### Configuration synchronization + +Not available yet. diff --git a/docs/en/docs/A-Ops/architecture-awareness-service-manual.md b/docs/en/Server/Maintenance/A-Ops/architecture-awareness-service-manual.md similarity index 43% rename from docs/en/docs/A-Ops/architecture-awareness-service-manual.md rename to docs/en/Server/Maintenance/A-Ops/architecture-awareness-service-manual.md index 7bd4426c511d9ebba88342b8cfe9f323c742fe93..94c161a46ceae69dac88d0b712cf79b4cca74402 100644 --- a/docs/en/docs/A-Ops/architecture-awareness-service-manual.md +++ b/docs/en/Server/Maintenance/A-Ops/architecture-awareness-service-manual.md @@ -6,44 +6,42 @@ - Installing using the repo source mounted by Yum. - Configure the Yum sources **openEuler22.09** and **openEuler22.09:Epol** in the **/etc/yum.repos.d/openEuler.repo** file. - - ```ini - [everything] # openEuler 22.09 officially released repository - name=openEuler22.09 - baseurl=https://repo.openeuler.org/openEuler-22.09/everything/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=https://repo.openeuler.org/openEuler-22.09/everything/$basearch/RPM-GPG-KEY-openEuler - - [Epol] # openEuler 22.09:Epol officially released repository - name=Epol - baseurl=https://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=https://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler - ``` - - Run the following commands to download and install gala-spider and its dependencies. 
- - ```shell - # A-Ops architecture awareness service, usually installed on the master node - yum install gala-spider - yum install python3-gala-spider - - # A-Ops architecture awareness probe, usually installed on the master node - yum install gala-gopher - ``` + Configure the Yum sources **openEuler23.09** and **openEuler23.09:Epol** in the **/etc/yum.repos.d/openEuler.repo** file. + + ```ini + [everything] # openEuler 23.09 officially released repository + name=openEuler23.09 + baseurl=https://repo.openeuler.org/openEuler-23.09/everything/$basearch/ + enabled=1 + gpgcheck=1 + gpgkey=https://repo.openeuler.org/openEuler-23.09/everything/$basearch/RPM-GPG-KEY-openEuler + + [Epol] # openEuler 23.09:Epol officially released repository + name=Epol + baseurl=https://repo.openeuler.org/openEuler-23.09/EPOL/main/$basearch/ + enabled=1 + gpgcheck=1 + gpgkey=https://repo.openeuler.org/openEuler-23.09/OS/$basearch/RPM-GPG-KEY-openEuler + ``` + + Run the following commands to download and install gala-spider and its dependencies. + + ```shell + # A-Ops architecture awareness service, usually installed on the master node + yum install gala-spider + yum install python3-gala-spider + + # A-Ops architecture awareness probe, usually installed on the master node + yum install gala-gopher + ``` - Installing using the RPM packages. Download **gala-spider-vx.x.x-x.oe1.aarch64.rpm**, and then run the following commands to install the modules. (`x.x-x` indicates the version. Replace it with the actual version number.) - ```shell - rpm -ivh gala-spider-vx.x.x-x.oe1.aarch64.rpm - - rpm -ivh gala-gopher-vx.x.x-x.oe1.aarch64.rpm - ``` - + ```shell + rpm -ivh gala-spider-vx.x.x-x.oe1.aarch64.rpm + rpm -ivh gala-gopher-vx.x.x-x.oe1.aarch64.rpm + ``` ### Installing Using the A-Ops Deployment Service @@ -56,14 +54,15 @@ Modify the deployment task list and enable the steps for gala_spider: step_list: ... 
gala_gopher: - enable: false - continue: false + enable: false + continue: false gala_spider: - enable: false - continue: false + enable: false + continue: false ... ``` + diff --git a/docs/en/Server/Maintenance/A-Ops/automated-om-service-manual.md b/docs/en/Server/Maintenance/A-Ops/automated-om-service-manual.md new file mode 100644 index 0000000000000000000000000000000000000000..7277fef3f64a28dbf6f9f035ccba8d1a16e837c9 --- /dev/null +++ b/docs/en/Server/Maintenance/A-Ops/automated-om-service-manual.md @@ -0,0 +1,64 @@ +# Automated O&M Service Manual + +## Main Features + +This service mainly provides batch command execution and batch script execution, both of which support scheduled tasks. + +### Commands + +The command page contains the command management and command execution tabs. + +#### Command Management + +The command management page supports adding, deleting, querying, and modifying commands: +![](./image/新建命令.png) +![](./image/命令管理界面.png) + +#### Command Execution + +Command execution tasks can be created on the command execution page: +![](./image/新建命令任务.png) + +On the command execution page, you can view task information, execute tasks, view task results, and delete tasks: +![](./image/命令执行.png) +Viewing task results: +![](./image/任务结果查看.png) + +### Scripts + +The script page contains the script management, script execution, and operation management tabs. + +#### Operation Management + +To shield script execution from differences in OS architecture and system, the concept of an operation is abstracted. An operation corresponds to a group of scripts covering various architectures and systems. When a script task runs, after the host and operation are selected, the script matching the host system and architecture is chosen for execution. + +The operation management page supports adding, deleting, querying, and modifying operations: + +![](./image/操作管理.png) + +#### Script Management + +The script management page supports uploading, querying, deleting, and editing scripts: +![](./image/脚本管理.png) +Creating a script: +![](./image/创建脚本.png) + +#### Script Execution + +Script execution tasks can be created on the script execution page. When creating a script task, the script can only be selected through an operation: +![](./image/脚本执行.png) +Creating a script task: +![](./image/创建脚本任务.png) + +### Scheduled Tasks + +Scheduled tasks support execution at a specified time and periodic execution. +One-time execution: +![](./image/单次执行.png) +Periodic execution: +![](./image/周期执行.png) + +### File Push + +The file push function pushes scripts to a specified directory. Such tasks do not execute the scripts, and push tasks are mutually exclusive with scheduled tasks: +![](./image/文件推送.png) diff --git a/docs/en/Server/Maintenance/A-Ops/community-hot-patch-creation-and-release-process.md b/docs/en/Server/Maintenance/A-Ops/community-hot-patch-creation-and-release-process.md new file mode 100644 index 0000000000000000000000000000000000000000..00165edd722393672544bbc274c516ce8867fb2b --- /dev/null +++ b/docs/en/Server/Maintenance/A-Ops/community-hot-patch-creation-and-release-process.md @@ -0,0 +1,260 @@ +# Community Hot Patch Creation and Release Process + +## Creating Kernel-Mode/User-Mode Hot Patches + +### Scenario 1. 
Creating a new version hot patch by commenting on a PR in the src-openEuler/openEuler repositories + +> To create a kernel-mode hot patch, comment on a PR in the **openEuler/kernel** repository. +> +> To create a user-mode hot patch, comment on a PR in a src-openEuler repository. Currently, **src-openEuler/openssl, src-openEuler/glibc, and src-openEuler/systemd** are supported. + +#### 1. Commenting on a merged PR to create a hot patch + +- Comment on a merged PR in a src-openeuler repository (openssl, glibc, and systemd are supported) to create a new version hot patch. + +```shell +/makehotpatch [package version] [patch list] [cve/bug] [issue id] [os_branch] +``` + +Command description: separate multiple patches with ',' and note the order of the patches. + +![image-20230629114903593](./image/src-openEuler仓评论.png) + +- Comment on a merged PR in an openeuler repository (kernel is supported) to create a new version hot patch. + +```shell +/makehotpatch [package version] [cve/bug] [issue id] [os_branch] +``` + +![image-20230629142933917](./image/openEuler仓评论.png) + +After the comment is posted, the CI gate triggers the hotpatch_metadata repository to create a hot patch issue and synchronize the PR. + +#### 2. The hotpatch_metadata repository automatically creates a hot patch issue and synchronizes the PR + +The PR comment area indicates that the hot patch creation process has started. + +![image-20230629143426498](./image/启动热补丁工程流程.png) + +Then, a hot patch issue is automatically created in the hotpatch_metadata repository, and the PR is synchronized to the hotpatch_metadata repository. + +> The hot patch issue is used to track the hot patch creation process. +> +> The hotpatch_metadata repository is used to trigger hot patch creation. + +![image-20230629144503840](./image/热补丁issue链接和pr链接.png) + +Click the hot patch issue link to view its content. + +- The hot patch issue type is hotpatch. + +![image-20230607161545732](./image/image-20230607161545732.png) + +Click to view the PR automatically created in the hotpatch_metadata repository. + +![hotpatch-fix-pr](./image/hotpatch-fix-pr.png) + +#### 3. Triggering hot patch creation + +Open the PR automatically created in the hotpatch_metadata repository. The hot patch creation information can be viewed in the comment area. + +![img](./image/45515A7F-0EC2-45AA-9B58-AB92DE9B0979.png) + +View the hot patch creation result. + +![img](./image/E574E637-0BF3-4F3B-BAE6-04ECBD09D151.png) + +If hot patch creation fails, modify the PR based on the log information and comment /retest until the hot patch is created successfully. + +If the hot patch is created successfully, download it through the download link for self-verification. + +![image-20230608151244425](./image/hotpatch-pr-success.png) + +**If the hot patch is created successfully, it can be reviewed.** + +### Scenario 2. Modifying a hot patch by submitting a PR to the hotpatch_metadata repository + +> A PR submitted to the hotpatch_metadata repository can only modify hot patches that have not been officially released. +> + +#### 1. Submitting a PR + +The user needs to manually create a hot patch issue. + +(1) Read the readme and create the hot patch according to the hot patch issue template. + +![image-20230612113428096](./image/image-20230612113428096.png) + +> Users are not allowed to modify content related to officially released hot patches in the hot patch metadata file. +> + +PR content: + +- The patch files. +- Modifications to the hot patch metadata hotmetadata.xml file. + +#### 2. Triggering hot patch creation + +**If the hot patch is created successfully, it can be reviewed.** + +### Scenario 3. Creating a new version hot patch by submitting a PR to the hotpatch_metadata repository + +#### 1. 
Submitting a PR + +Submit a PR to the hotpatch_metadata repository. + +(1) Read the readme and create the hot patch according to the hot patch issue template. + +![image-20230612113428096](./image/image-20230612113428096.png) + +PR content: + +- The patch files. +- If no hot patch metadata hotmetadata.xml file exists, create it manually; otherwise, modify the existing hotmetadata.xml file. + +#### 2. Triggering hot patch creation + +**If the hot patch is created successfully, it can be reviewed.** + +## Reviewing Hot Patches + +### 1. Reviewing the hot patch PR + +Confirm that the hot patch can be released and merge the PR. + +### 2. Backfilling the hot patch issue after the PR is merged + +On the hot patch issue page, add the hot patch paths, including the src.rpm package, the Arm and x86 RPM packages, and the corresponding hotpatch.xml, which is used to display hot patch information. + +> If one architecture fails and the PR is forcibly merged, the package of a single architecture can also be released. + +![img](./image/EF5E0132-6E5C-4DD1-8CB5-73035278E233.png) + +- The hot patch issue label is hotpatch. + +- View the hot patch metadata content. + +Hot patch metadata template: + +> The hot patch metadata is used to manage and view the historical creation information of hot patches. + +```xml + + + Managing Hot Patch Metadata + + + + Archive address of the src.rpm package + Archive address of the x86 debuginfo binary package + Archive address of the Arm debuginfo binary package + patch file + + https://gitee.com/wanghuan158/hot-patch_metadata/issues/I7AE5F + + + + +``` + +```xml + + + Managing Hot Patch Metadata + + + + download_link + download_link + download_link + 0001-PEM-read-bio-ret-failure.patch + + https://gitee.com/wanghuan158/hot-patch_metadata/issues/I7AE5F + + + download_link + download_link + download_link + 0001-PEM-read-bio-ret-failure.patch + + https://gitee.com/wanghuan158/hot-patch_metadata/issues/I7AE5P + + + + +``` + +> Note: each download_link is an official archive link in the repo. +> +> Hot patches currently only evolve forward: version 2 continues to be built on the src of version 1. + +![image-20230607163358749](./image/image-20230607163358749.png) + +### 3. 
Closing the corresponding hot patch issue + +## Releasing Hot Patches + +### 1. Collecting hot patch release requirements + +Under the weekly update requirement collection issue in the release-management repository, manually comment the start-update command. The hot patches to be released and the cold patches fixing CVEs to be released are then collected. The backend searches the hotpatch_meta repository for closed hot patch issues by the hotpatch label. + +### 2. Generating hot patch information for the security advisory + +Based on the collected hot patch issue information, the community generates the hotpatch field patches while generating the security advisory, filtering out vulnerability patches that have already been released. + +- A HotPatchTree field is added to the security advisory file to record the hot patches for the vulnerabilities covered by the advisory. Each patch is distinguished by architecture and CVE fields (Type=ProductName records the branch, and Type=ProductArch records the specific RPM package of the patch). + +![](./image/patch-file.PNG) + +### 3. Uploading files to the openEuler official website through the Majun platform and generating the updateinfo.xml file + +The community uploads the generated security advisory to the openEuler official website and generates the updateinfo.xml file based on the collected hot patch information. + +![](./image/hotpatch-xml.PNG) + +Sample updateinfo.xml file: + +```xml + + + + openEuler-SA-2022-1 + An update for mariadb is now available for openEuler-22.03-LTS + Important + openEuler + + + + + + patch-redis-6.2.5-1-HP001.(CVE-2022-24048) + + + openEuler + + patch-redis-6.2.5-1-HP001-1-1.aarch64.rpm + + + patch-redis-6.2.5-1-HP001-1-1.x86_64.rpm + + + patch-redis-6.2.5-1-HP002-1-1.aarch64.rpm + + + patch-redis-6.2.5-1-HP002-1-1.x86_64.rpm + + + + + ... + +``` + +### 4. Viewing the updated hot patch information, organized by CVE ID, on the openEuler official website + +![image-20230612113626330](./image/image-20230612113626330.png) + +### 5. Obtaining hot patch files + +The community synchronizes the hot patch files to the openEuler repo source. The files can be obtained from the hotpatch directory of each branch. +> openEuler repo address: diff --git a/docs/en/Server/Maintenance/A-Ops/deploying-aops-agent.md b/docs/en/Server/Maintenance/A-Ops/deploying-aops-agent.md new file mode 100644 index 0000000000000000000000000000000000000000..740bac0f0ae2900c797d56707dde3b72004dfd3c --- /dev/null +++ b/docs/en/Server/Maintenance/A-Ops/deploying-aops-agent.md @@ -0,0 +1,656 @@ +# Deploying aops-agent + +## 1. Environment Requirements + +One host running on openEuler 20.03 or later + +## 2. Configuring the Deployment Environment + +### 2.1 Disabling the Firewall + +```shell +systemctl stop firewalld +systemctl disable firewalld +systemctl status firewalld +``` + +### 2.2 Deploying aops-agent + +1. Run `yum install aops-agent` to install aops-agent based on the Yum source. + +2. Modify the configuration file. 
Change the value of the **ip** in the agent section to the IP address of the local host. + + ```shell + vim /etc/aops/agent.conf + ``` + + The following uses 192.168.1.47 as an example. + + ```ini + [agent] + ;IP address and port number bound when the aops-agent is started. + ip=192.168.1.47 + port=12000 + + [gopher] + ;Default path of the gala-gopher configuration file. If you need to change the path, ensure that the file path is correct. + config_path=/opt/gala-gopher/gala-gopher.conf + + ;aops-agent log collection configuration + [log] + ;Level of the logs to be collected, which can be set to DEBUG, INFO, WARNING, ERROR, or CRITICAL + log_level=INFO + ;Location for storing collected logs + log_dir=/var/log/aops + ;Maximum size of a log file + max_bytes=31457280 + ;Number of backup logs + backup_count=40 + ``` + +3. Run `systemctl start aops-agent` to start the service. + +### 2.3 Registering with aops-manager + +To identify users and prevent APIs from being invoked randomly, aops-agent uses tokens to authenticate users, reducing the pressure on the deployed hosts. + +For security purposes, the active registration mode is used to obtain the token. Before the registration, prepare the information to be registered on aops-agent and run the `register` command to register the information with aops-manager. No database is configured for aops-agent. After the registration is successful, the token is automatically saved to the specified file and the registration result is displayed on the GUI. In addition, save the local host information to the aops-manager database for subsequent management. + +1. Prepare the **register.json** file. + + Prepare the information required for registration on aops-agent and save the information in JSON format. 
The data structure is as follows: + + ```JSON + { + // Name of the login user + "web_username":"admin", + // User password + "web_password": "changeme", + // Host name + "host_name": "host1", + // Name of the group to which the host belongs + "host_group_name": "group1", + // IP address of the host where aops-manager is running + "manager_ip":"192.168.1.23", + // Whether to register as a management host + "management":false, + // External port for running aops-manager + "manager_port":"11111", + // Port for running aops-agent + "agent_port":"12000" + } + ``` + + Note: Ensure that aops-manager is running on the target host, for example, 192.168.1.23, and the registered host group exists. + +2. Run `aops_agent register -f register.json`. +3. The registration result is displayed on the GUI. If the registration is successful, the token character string is saved to a specified file. If the registration fails, locate the fault based on the message and log content (**/var/log/aops/aops.log**). + + The following is an example of the registration result: + + - Registration succeeded. + + ```shell + [root@localhost ~]# aops_agent register -f register.json + Agent Register Success + ``` + + - Registration failed. The following uses the aops-manager start failure as an example. + + ```shell + [root@localhost ~]# aops_agent register -f register.json + Agent Register Fail + [root@localhost ~]# + ``` + + - Log content + + ```shell + 2022-09-05 16:11:52,576 ERROR command_manage/register/331: HTTPConnectionPool(host='192.168.1.23', port=11111): Max retries exceeded with url: /manage/host/add (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) + [root@localhost ~]# + ``` + +## 3. Plug-in Support + +### 3.1 gala-gopher + +#### 3.1.1 Introduction + +gala-gopher is a low-load probe framework based on eBPF. It can be used to monitor the CPU, memory, and network status of hosts and collect data. 
You can configure the collection status of existing probes based on service requirements. + +#### 3.1.2 Deployment + +1. Run `yum install gala-gopher` to install gala-gopher based on the Yum source. +2. Enable probes based on service requirements. You can view information about probes in **/opt/gala-gopher/gala-gopher.conf**. +3. Run `systemctl start gala-gopher` to start the gala-gopher service. + +#### 3.1.3 Others + +For more information about gala-gopher, see . + +## 4. API Support + +### 4.1 List of External APIs + +| No.| API | Type| Description | +| ---- | ------------------------------ | ---- | ----------------------| +| 1 | /v1/agent/plugin/start | POST | Starts a plug-in. | +| 2 | /v1/agent/plugin/stop | POST | Stops a plug-in. | +| 3 | /v1/agent/application/info | GET | Collects running applications in the target application collection.| +| 4 | /v1/agent/host/info | GET | Obtains host information. | +| 5 | /v1/agent/plugin/info | GET | Obtains the plug-in running information in aops-agent. | +| 6 | /v1/agent/file/collect | POST | Collects content of the configuration file. | +| 7 | /v1/agent/collect/items/change | POST | Changes the running status of plug-in collection items. | + +#### 4.1.1 /v1/agent/plugin/start + +- Description: Starts the plug-in that is installed but not running. Currently, only the gala-gopher plug-in is supported. 
+ +- HTTP request mode: POST + +- Data submission mode: query + +- Request parameter + + | Parameter | Mandatory| Type| Description | + | ----------- | ---- | ---- | ------ | + | plugin_name | True | str | Plug-in name| + +- Request parameter example + + | Parameter | Value | + | ----------- | ----------- | + | plugin_name | gala-gopher | + +- Response body parameters + + | Parameter| Type| Description | + | ------ | ---- | ---------------- | + | code | int | Return code | + | msg | str | Information corresponding to the status code| + +- Response example + + ```json + { + "code": 200, + "msg": "xxxx" + } + ``` + +#### 4.1.2 /v1/agent/plugin/stop + +- Description: Stops a running plug-in. Currently, only the gala-gopher plug-in is supported. + +- HTTP request mode: POST + +- Data submission mode: query + +- Request parameter + + | Parameter | Mandatory| Type| Description | + | ----------- | ---- | ---- | ------ | + | plugin_name | True | str | Plug-in name| + +- Request parameter example + + | Parameter | Value | + | ----------- | ----------- | + | plugin_name | gala-gopher | + +- Response body parameters + + | Parameter| Type| Description | + | ------ | ---- | ---------------- | + | code | int | Return code | + | msg | str | Information corresponding to the status code| + +- Response example + + ```json + { + "code": 200, + "msg": "xxxx" + } + ``` + +#### 4.1.3 /v1/agent/application/info + +- Description: Collects running applications in the target application collection. Currently, the target application collection contains MySQL, Kubernetes, Hadoop, Nginx, Docker, and gala-gopher. 
+ +- HTTP request mode: GET + +- Data submission mode: query + +- Request parameter + + | Parameter | Mandatory | Type | Description | + | --------- | --------- | ---- | ----------- | + | | | | | + +- Request parameter example + + | Parameter| Value| + | ------ | ------ | + | | | + +- Response body parameters + + | Parameter| Type| Description | + | ------ | ---- | ---------------- | + | code | int | Return code | + | msg | str | Information corresponding to the status code| + | resp | dict | Response body | + + - resp + + | Parameter | Type | Description | + | ------- | --------- | -------------------------- | + | running | List\[str] | List of the running applications| + +- Response example + + ```json + { + "code": 200, + "msg": "xxxx", + "resp": { + "running": [ + "mysql", + "docker" + ] + } + } + ``` + +#### 4.1.4 /v1/agent/host/info + +- Description: Obtains information about the host where aops-agent is installed, including the system version, BIOS version, kernel version, CPU information, and memory information. + +- HTTP request mode: POST + +- Data submission mode: application/json + +- Request parameter + + | Parameter | Mandatory| Type | Description | + | --------- | ---- | --------- | ------------------------------------------------ | + | info_type | True | List\[str] | List of the information to be collected. 
Currently, only the CPU, disk, memory, and OS are supported.| + +- Request parameter example + + ```json + ["os", "cpu","memory", "disk"] + ``` + +- Response body parameters + + | Parameter| Type| Description | + | ------ | ---- | ---------------- | + | code | int | Return code | + | msg | str | Information corresponding to the status code| + | resp | dict | Response body | + + - resp + + | Parameter| Type | Description | + | ------ | ---------- | -------- | + | cpu | dict | CPU information | + | memory | dict | Memory information| + | os | dict | OS information | + | disk | List\[dict] | Disk information| + + - cpu + + | Parameter | Type| Description | + | ------------ | ---- | --------------- | + | architecture | str | CPU architecture | + | core_count | int | Number of cores | + | l1d_cache | str | L1 data cache size| + | l1i_cache | str | L1 instruction cache size| + | l2_cache | str | L2 cache size | + | l3_cache | str | L3 cache size | + | model_name | str | Model name | + | vendor_id | str | Vendor ID | + + - memory + + | Parameter | Type | Description | + | --------- | ---------- | --------------------------- | + | size | str | Total memory | + | total | int | Number of DIMMs | + | info | List\[dict] | Information about all DIMMs | + + - info + + | Parameter | Type | Description | + | ------------ | ---- | ----------- | + | size | str | Memory size | + | type | str | Type | + | speed | str | Speed | + | manufacturer | str | Vendor | + + - os + + | Parameter | Type | Description | + | ------------ | ---- | -------------- | + | bios_version | str | BIOS version | + | os_version | str | OS version | + | kernel | str | Kernel version | + +- Response example + + ```json + { + "code": 200, + "msg": "operate success", + "resp": { + "cpu": { + "architecture": "aarch64", + "core_count": "128", + "l1d_cache": "8 MiB (128 instances)", + "l1i_cache": "8 MiB (128 instances)", + "l2_cache": "64 MiB (128 instances)", + "l3_cache": "128 MiB (4 instances)", + "model_name": 
"Kunpeng-920", + "vendor_id": "HiSilicon" + }, + "memory": { + "info": [ + { + "manufacturer": "Hynix", + "size": "16 GB", + "speed": "2933 MT/s", + "type": "DDR4" + }, + { + "manufacturer": "Hynix", + "size": "16 GB", + "speed": "2933 MT/s", + "type": "DDR4" + } + ], + "size": "32G", + "total": 2 + }, + "os": { + "bios_version": "1.82", + "kernel": "5.10.0-60.18.0.50", + "os_version": "openEuler 22.03 LTS" + }, + "disk": [ + { + "capacity": "xxGB", + "model": "xxxxxx" + } + ] + } + } + ``` + +#### 4.1.5 /v1/agent/plugin/info + +- Description: Obtains the plug-in running status of the host. Currently, only the gala-gopher plug-in is supported. + +- HTTP request mode: GET + +- Data submission mode: query + +- Request parameter + + | Parameter | Mandatory | Type | Description | + | --------- | --------- | ---- | ----------- | + | | | | | + +- Request parameter example + + | Parameter | Value | + | --------- | ----- | + | | | + +- Response body parameters + + | Parameter | Type | Description | + | --------- | ----------- | -------------------------------------------- | + | code | int | Return code | + | msg | str | Information corresponding to the status code | + | resp | List\[dict] | Response body | + + - resp + + | Parameter | Type | Description | + | ------------- | ---------- | -------------------------------------------- | + | plugin_name | str | Plug-in name | + | collect_items | list | Running status of plug-in collection items | + | is_installed | str | Information corresponding to the status code | + | resource | List\[dict] | Plug-in resource usage | + | status | str | Plug-in running status | + + - resource + + | Parameter | Type | Description | + | ------------- | ---- | -------------- | + | name | str | Resource name | + | current_value | str | Resource usage | + | limit_value | str | Resource limit | + +- Response example + + ```json + { + "code": 200, + "msg": "operate success", + "resp": [ + { + "collect_items": [ + { + "probe_name": "system_tcp", + 
"probe_status": "off", + "support_auto": false + }, + { + "probe_name": "haproxy", + "probe_status": "auto", + "support_auto": true + }, + { + "probe_name": "nginx", + "probe_status": "auto", + "support_auto": true + }, + ], + "is_installed": true, + "plugin_name": "gala-gopher", + "resource": [ + { + "current_value": "0.0%", + "limit_value": null, + "name": "cpu" + }, + { + "current_value": "13 MB", + "limit_value": null, + "name": "memory" + } + ], + "status": "active" + } + ] + } + ``` + +#### 4.1.6 /v1/agent/file/collect + +- Description: Collects information such as the content, permission, and owner of the target configuration file. Currently, only text files smaller than 1 MB, without execute permission, and supporting UTF8 encoding can be read. + +- HTTP request mode: POST + +- Data submission mode: application/json + +- Request parameter + + | Parameter | Mandatory| Type | Description | + | --------------- | ---- | --------- | ------------------------ | + | configfile_path | True | List\[str] | List of the full paths of the files to be collected| + +- Request parameter example + + ```json + [ "/home/test.conf", "/home/test.ini", "/home/test.json"] + ``` + +- Response body parameters + + | Parameter | Type | Description | + | ------------- | ----------- | --------------------------------------- | + | infos | List\[dict] | File collection information | + | success_files | List\[str] | List of files successfully collected | + | fail_files | List\[str] | List of files that fail to be collected | + + - infos + + | Parameter | Type | Description | + | --------- | ---- | --------------- | + | path | str | File path | + | content | str | File content | + | file_attr | dict | File attributes | + + - file_attr + + | Parameter | Type | Description | + | --------- | ---- | ------------------------------- | + | mode | str | Permission of the file type | + | owner | str | File owner | + | group | str | Group to which the file belongs | + +- Response example + + ```json 
+ { + "infos": [ + { + "content": "this is a test file", + "file_attr": { + "group": "root", + "mode": "0644", + "owner": "root" + }, + "path": "/home/test.txt" + } + ], + "success_files": [ + "/home/test.txt" + ], + "fail_files": [ + "/home/test.txt" + ] + } + ``` + +#### 4.1.7 /v1/agent/collect/items/change + +- Description: Changes the collection status of the plug-in collection items. Currently, only the status of the gala-gopher collection items can be changed. For the gala-gopher collection items, see **/opt/gala-gopher/gala-gopher.conf**. + +- HTTP request mode: POST + +- Data submission mode: application/json + +- Request parameter + + | Parameter | Mandatory | Type | Description | + | ----------- | --------- | ---- | ------------------------------------------------------------ | + | plugin_name | True | dict | Expected modification result of the plug-in collection items | + + - plugin_name + + | Parameter | Mandatory | Type | Description | + | ------------ | --------- | ------ | --------------------------------------------------- | + | collect_item | True | string | Expected modification result of the collection item | + +- Request parameter example + + ```json + { + "gala-gopher":{ + "redis":"auto", + "system_inode":"on", + "tcp":"on", + "haproxy":"auto" + } + } + ``` + +- Response body parameters + + | Parameter | Type | Description | + | --------- | ----------- | -------------------------------------------- | + | code | int | Return code | + | msg | str | Information corresponding to the status code | + | resp | List\[dict] | Response body | + + - resp + + | Parameter | Type | Description | + | ----------- | ---- | -------------------------------------------------------- | + | plugin_name | dict | Modification result of the corresponding collection item | + + - plugin_name + + | Parameter | Type | Description | + | ------- | --------- | ---------------- | + | success | List\[str] | Collection items that are successfully modified| + | failure | 
List\[str] | Collection items that fail to be modified| + +- Response example + + ```json + { + "code": 200, + "msg": "operate success", + "resp": { + "gala-gopher": { + "failure": [ + "redis" + ], + "success": [ + "system_inode", + "tcp", + "haproxy" + ] + } + } + } + ``` + + ### FAQs + +1. If an error is reported, view the **/var/log/aops/aops.log** file, rectify the fault based on the error message in the log file, and restart the service. + +2. You are advised to run aops-agent in Python 3.7 or later. Pay attention to the version of the Python dependency library when installing it. + +3. The value of **access_token** can be obtained from the **/etc/aops/agent.conf** file after the registration is complete. + +4. To limit the CPU and memory resources of a plug-in, add **MemoryHigh** and **CPUQuota** to the **Service** section in the service file corresponding to the plug-in. + + For example, set the memory limit of gala-gopher to 40 MB and the CPU limit to 20%. + + ```ini + [Unit] + Description=a-ops gala gopher service + After=network.target + + [Service] + Type=exec + ExecStart=/usr/bin/gala-gopher + Restart=on-failure + RestartSec=1 + RemainAfterExit=yes + ;Limit the maximum memory that can be used by processes in the unit. The limit can be exceeded. However, after the limit is exceeded, the process running speed is limited, and the system reclaims the excess memory as much as possible. + ;The option value can be an absolute memory size in bytes (K, M, G, or T suffix based on 1024) or a relative memory size in percentage. + MemoryHigh=40M + ;Set the CPU time limit for the processes of this unit. The value must be a percentage ending with %, indicating the maximum percentage of the total time that the unit can use a single CPU. 
+ CPUQuota=20% + + [Install] + WantedBy=multi-user.target + ``` diff --git a/docs/en/docs/A-Ops/deploying-aops.md b/docs/en/Server/Maintenance/A-Ops/deploying-aops.md similarity index 90% rename from docs/en/docs/A-Ops/deploying-aops.md rename to docs/en/Server/Maintenance/A-Ops/deploying-aops.md index f7f2cb8d867ab915bf51bcdf7b002a172936a01e..60154be4bb161ef1765457058cc59c4944280ed6 100644 --- a/docs/en/docs/A-Ops/deploying-aops.md +++ b/docs/en/Server/Maintenance/A-Ops/deploying-aops.md @@ -2,15 +2,15 @@ ## 1. Environment Requirements -- Two hosts running on openEuler 22.09 +- Two hosts running on openEuler 24.09 - These two hosts are used to deploy two modes of the check module: scheduler and executor. Other services, such as MySQL, Elasticsearch, and aops-manager, can be independently deployed on any host. To facilitate operations, deploy these services on host A. + These two hosts are used to deploy two modes of the check module: scheduler and executor. Other services, such as MySQL, Elasticsearch, and aops-manager, can be independently deployed on any host. To facilitate operations, deploy these services on host A. - More than 8 GB memory ## 2. Configuring the Deployment Environment -### Host A: +### Host A Deploy the following A-Ops services on host A: aops-tools, aops-manager, aops-check, aops-web, aops-agent, and gala-gopher. @@ -22,7 +22,7 @@ The deployment procedure is as follows: Disable the firewall on the local host. -``` +```shell systemctl stop firewalld systemctl disable firewalld systemctl status firewalld @@ -32,7 +32,7 @@ systemctl status firewalld Install aops-tools. -``` +```shell yum install aops-tools ``` @@ -42,30 +42,30 @@ yum install aops-tools Use the **aops-basedatabase** script installed during aops-tools installation to install MySQL. -``` +```shell cd /opt/aops/aops_tools ./aops-basedatabase mysql ``` Modify the MySQL configuration file. 
-``` +```shell vim /etc/my.cnf ``` Add **bind-address** and set it to the IP address of the local host. -![1662346986112](./figures/修改mysql配置文件.png) +![1662346986112](./figures/modify-mysql-config-file.png) Restart the MySQL service. -``` +```shell systemctl restart mysqld ``` Connect to the database and set the permission. -``` +```shell mysql show databases; use mysql; @@ -79,7 +79,7 @@ exit Use the **aops-basedatabase** script installed during aops-tools installation to install Elasticsearch. -``` +```shell cd /opt/aops/aops_tools ./aops-basedatabase elasticsearch ``` @@ -88,19 +88,19 @@ Modify the configuration file. Modify the Elasticsearch configuration file. -``` +```shell vim /etc/elasticsearch/elasticsearch.yml ``` -![1662370718890](./figures/elasticsearch配置2.png) +![1662370718890](./figures/elasticsearch-config-2.png) -![1662370575036](./figures/elasticsearch配置1.png) +![1662370575036](./figures/elasticsearch-config-1.png) ![1662370776219](./figures/elasticsearch3.png) Restart the Elasticsearch service. -``` +```shell systemctl restart elasticsearch ``` @@ -108,19 +108,19 @@ systemctl restart elasticsearch Install aops-manager. -``` +```shell yum install aops-manager ``` Modify the configuration file. -``` +```shell vim /etc/aops/manager.ini ``` Change the IP address of each service in the configuration file to the actual IP address. Because all services are deployed on host A, you need to set their IP addresses to the actual IP address of host A. -``` +```ini [manager] ip=192.168.1.1 // Change the service IP address to the actual IP address of host A. port=11111 @@ -153,7 +153,7 @@ port=11112 Start the aops-manager service. -``` +```shell systemctl start aops-manager ``` @@ -161,23 +161,23 @@ systemctl start aops-manager Install aops-web. -``` +```shell yum install aops-web ``` Modify the configuration file. Because all services are deployed on host A, set the IP address of each service accessed by aops-web to the actual IP address of host A. 
-``` +```shell vim /etc/nginx/aops-nginx.conf ``` The following figure shows the configuration of some services. -![1662378186528](./figures/配置web.png) +![1662378186528](./figures/web-config.png) Enable the aops-web service. -``` +```shell systemctl start aops-web ``` @@ -187,13 +187,13 @@ systemctl start aops-web Install ZooKeeper. -``` +```shell yum install zookeeper ``` Start the ZooKeeper service. -``` +```shell systemctl start zookeeper ``` @@ -201,23 +201,23 @@ systemctl start zookeeper Install Kafka. -``` +```shell yum install kafka ``` Modify the configuration file. -``` +```shell vim /opt/kafka/config/server.properties ``` Change the value of **listeners** to the IP address of the local host. -![1662381371927](./figures/kafka配置.png) +![1662381371927](./figures/kafka-config.png) Start the Kafka service. -``` +```shell cd /opt/kafka/bin nohup ./kafka-server-start.sh ../config/server.properties & tail -f ./nohup.out # Check all the outputs of nohup. If the IP address of host A and the Kafka startup success INFO are displayed, Kafka is started successfully. @@ -227,19 +227,19 @@ tail -f ./nohup.out # Check all the outputs of nohup. If the IP address of host Install aops-check. -``` +```shell yum install aops-check ``` Modify the configuration file. -``` +```shell vim /etc/aops/check.ini ``` Change the IP address of each service in the configuration file to the actual IP address. Because all services are deployed on host A, you need to set their IP addresses to the actual IP address of host A. -``` +```ini [check] ip=192.168.1.1 // Change the service IP address to the actual IP address of host A. port=11112 @@ -295,13 +295,13 @@ task_group_id=CHECK_TASK_GROUP_ID Start the aops-check service in configurable mode. -``` +```shell systemctl start aops-check ``` #### 2.8 Deploying the Client Services -aops-agent and gala-gopher must be deployed on the client. For details, see the [Deploying aops-agent](deploying-aops-agent.md). 
+aops-agent and gala-gopher must be deployed on the client. For details, see [aops-agent Deployment](deploying-aops-agent.md). Note: Before registering a host, you need to add a host group to ensure that the host group to which the host belongs exists. In this example, only host A is deployed and managed. @@ -309,23 +309,23 @@ Note: Before registering a host, you need to add a host group to ensure that the Install Prometheus. -``` +```shell yum install prometheus2 ``` Modify the configuration file. -``` +```shell vim /etc/prometheus/prometheus.yml ``` Add the gala-gopher addresses of all clients to the monitoring host of Prometheus. -![1662377261742](./figures/prometheus配置.png) +![1662377261742](./figures/prometheus-config.png) Start the Prometheus service: -``` +```shell systemctl start prometheus ``` @@ -347,7 +347,7 @@ vim /etc/ragdoll/gala-ragdoll.conf Change the IP address in **collect_address** of the **collect** section to the IP address of host A, and change the values of **collect_api** and **collect_port** to the actual API and port number. -``` +```ini [git] git_dir = "/home/confTraceTest" user_name = "user_name" @@ -366,7 +366,6 @@ sync_port = 11114 [ragdoll] port = 11114 - ``` Start the gala-ragdoll service. @@ -375,7 +374,7 @@ Start the gala-ragdoll service. systemctl start gala-ragdoll ``` -### Host B: +### Host B Only aops-check needs to be deployed on host B as the executor. @@ -383,19 +382,19 @@ Only aops-check needs to be deployed on host B as the executor. Install aops-check. -``` +```shell yum install aops-check ``` Modify the configuration file. -``` +```shell vim /etc/aops/check.ini ``` Change the IP address of each service in the configuration file to the actual IP address. Change the IP address of the aops-check service deployed on host B to the IP address of host B. Because other services are deployed on host A, change the IP addresses of those services to the IP address of host A. 
-``` +```ini [check] ip=192.168.1.2 // Change the IP address to the actual IP address of host B. port=11112 @@ -451,10 +450,8 @@ task_group_id=CHECK_TASK_GROUP_ID Start the aops-check service in executor mode. -``` +```shell systemctl start aops-check ``` - - The service deployment on the two hosts is complete. diff --git a/docs/en/docs/A-Ops/dnf-command-usage.md b/docs/en/Server/Maintenance/A-Ops/dnf-plugin-command-usage.md similarity index 98% rename from docs/en/docs/A-Ops/dnf-command-usage.md rename to docs/en/Server/Maintenance/A-Ops/dnf-plugin-command-usage.md index 760809b3b54285b57abfa0eb22f34c8e125472f9..04f32add022a7dbd30411defe37c369b52c4e174 100644 --- a/docs/en/docs/A-Ops/dnf-command-usage.md +++ b/docs/en/Server/Maintenance/A-Ops/dnf-plugin-command-usage.md @@ -1,6 +1,12 @@ -# DNF Command Usage +# DNF Plugin Command Usage -Af ter installing A-Ops Apollo, you can run DNF commands to use Apollo functions related to hot patches, such as hot patch scanning (`dnf hot-updateinfo`), setting and querying (`dnf hotpatch`), and applying (`dnf hotupgrade`). This document describes the usage of the commands. +Install the DNF plugin: + +```shell +dnf install dnf-hotpatch-plugin +``` + +After installing the plugin, you can run DNF commands to use plugin functions related to hot patches, such as hot patch scanning (`dnf hot-updateinfo`), setting and querying (`dnf hotpatch`), and applying (`dnf hotupgrade`). This document describes the usage of the commands. 
## Hot Patch Scanning diff --git a/docs/en/docs/A-Ops/figures/add_config.png b/docs/en/Server/Maintenance/A-Ops/figures/add_config.png similarity index 100% rename from docs/en/docs/A-Ops/figures/add_config.png rename to docs/en/Server/Maintenance/A-Ops/figures/add_config.png diff --git a/docs/en/docs/A-Ops/figures/add_node.png b/docs/en/Server/Maintenance/A-Ops/figures/add_node.png similarity index 100% rename from docs/en/docs/A-Ops/figures/add_node.png rename to docs/en/Server/Maintenance/A-Ops/figures/add_node.png diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/app\350\257\246\346\203\205.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/app\350\257\246\346\203\205.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..bd179be46c9e711d7148ee44dc56f4a7a02f56bf Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/app\350\257\246\346\203\205.jpg" differ diff --git a/docs/en/docs/A-Ops/figures/view_expected_config.png b/docs/en/Server/Maintenance/A-Ops/figures/chakanyuqi.png similarity index 100% rename from docs/en/docs/A-Ops/figures/view_expected_config.png rename to docs/en/Server/Maintenance/A-Ops/figures/chakanyuqi.png diff --git a/docs/en/docs/A-Ops/figures/query_actual_config.png b/docs/en/Server/Maintenance/A-Ops/figures/chaxunshijipeizhi.png similarity index 100% rename from docs/en/docs/A-Ops/figures/query_actual_config.png rename to docs/en/Server/Maintenance/A-Ops/figures/chaxunshijipeizhi.png diff --git a/docs/en/Server/Maintenance/A-Ops/figures/chuangjianyewuyu.png b/docs/en/Server/Maintenance/A-Ops/figures/chuangjianyewuyu.png new file mode 100644 index 0000000000000000000000000000000000000000..8849a2fc81dbd14328c6c66c53033164a0b67b52 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/chuangjianyewuyu.png differ diff --git a/docs/en/docs/A-Ops/figures/create_service_domain.png b/docs/en/Server/Maintenance/A-Ops/figures/create_service_domain.png similarity index 100% rename from 
docs/en/docs/A-Ops/figures/create_service_domain.png rename to docs/en/Server/Maintenance/A-Ops/figures/create_service_domain.png diff --git a/docs/en/docs/A-Ops/figures/delete_config.png b/docs/en/Server/Maintenance/A-Ops/figures/delete_config.png similarity index 100% rename from docs/en/docs/A-Ops/figures/delete_config.png rename to docs/en/Server/Maintenance/A-Ops/figures/delete_config.png diff --git a/docs/en/Server/Maintenance/A-Ops/figures/elasticsearch-config-1.png b/docs/en/Server/Maintenance/A-Ops/figures/elasticsearch-config-1.png new file mode 100644 index 0000000000000000000000000000000000000000..1b7e0eab093b2f0455b8f3972884e5f757fbec3d Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/elasticsearch-config-1.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/figures/elasticsearch-config-2.png b/docs/en/Server/Maintenance/A-Ops/figures/elasticsearch-config-2.png new file mode 100644 index 0000000000000000000000000000000000000000..620dbbda71259e3b6ee6a2efb646a9692adf2456 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/elasticsearch-config-2.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/figures/elasticsearch3.png b/docs/en/Server/Maintenance/A-Ops/figures/elasticsearch3.png new file mode 100644 index 0000000000000000000000000000000000000000..893aae242aa9117c64f323374d4728d230894973 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/elasticsearch3.png differ diff --git a/docs/en/docs/A-Ops/figures/hot_patch_statuses.png b/docs/en/Server/Maintenance/A-Ops/figures/hot_patch_statuses.png similarity index 100% rename from docs/en/docs/A-Ops/figures/hot_patch_statuses.png rename to docs/en/Server/Maintenance/A-Ops/figures/hot_patch_statuses.png diff --git a/docs/en/Server/Maintenance/A-Ops/figures/kafka-config.png b/docs/en/Server/Maintenance/A-Ops/figures/kafka-config.png new file mode 100644 index 0000000000000000000000000000000000000000..57eb17ccbd2fa63d97f700c29847fac7f08042ff 
Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/kafka-config.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/figures/modify-mysql-config-file.png b/docs/en/Server/Maintenance/A-Ops/figures/modify-mysql-config-file.png new file mode 100644 index 0000000000000000000000000000000000000000..d83425ee0622be329782620318818662b292e88b Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/modify-mysql-config-file.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/figures/prometheus-config.png b/docs/en/Server/Maintenance/A-Ops/figures/prometheus-config.png new file mode 100644 index 0000000000000000000000000000000000000000..7c8d0328967e8eb9bc4aa7465a273b9ef5a30b58 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/prometheus-config.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/figures/query_actual_config.png b/docs/en/Server/Maintenance/A-Ops/figures/query_actual_config.png new file mode 100644 index 0000000000000000000000000000000000000000..d5f6e450fc0e1e246492ca71a6fcd8db572eb469 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/query_actual_config.png differ diff --git a/docs/en/docs/A-Ops/figures/query_status.png b/docs/en/Server/Maintenance/A-Ops/figures/query_status.png similarity index 100% rename from docs/en/docs/A-Ops/figures/query_status.png rename to docs/en/Server/Maintenance/A-Ops/figures/query_status.png diff --git a/docs/en/Server/Maintenance/A-Ops/figures/shanchupeizhi.png b/docs/en/Server/Maintenance/A-Ops/figures/shanchupeizhi.png new file mode 100644 index 0000000000000000000000000000000000000000..cfea2eb44f7b8aa809404b8b49b4bd2e24172568 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/shanchupeizhi.png differ diff --git a/docs/en/docs/A-Ops/figures/sync_conf.png b/docs/en/Server/Maintenance/A-Ops/figures/sync_conf.png similarity index 100% rename from docs/en/docs/A-Ops/figures/sync_conf.png rename to 
docs/en/Server/Maintenance/A-Ops/figures/sync_conf.png diff --git a/docs/en/Server/Maintenance/A-Ops/figures/tianjianode.png b/docs/en/Server/Maintenance/A-Ops/figures/tianjianode.png new file mode 100644 index 0000000000000000000000000000000000000000..d68f5e12a62548f2ec59374bda9ab07f43b8b5cb Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/tianjianode.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/figures/view_expected_config.png b/docs/en/Server/Maintenance/A-Ops/figures/view_expected_config.png new file mode 100644 index 0000000000000000000000000000000000000000..bbead6a91468d5dee570cfdc66faf9a4ab155d7c Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/view_expected_config.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/figures/web-config.png b/docs/en/Server/Maintenance/A-Ops/figures/web-config.png new file mode 100644 index 0000000000000000000000000000000000000000..721335115922e03f255e67e6b775c1ac0cfbbc50 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/web-config.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/figures/xinzengpeizhi.png b/docs/en/Server/Maintenance/A-Ops/figures/xinzengpeizhi.png new file mode 100644 index 0000000000000000000000000000000000000000..18d71c2e099c19b5d28848eec6a8d11f29ccee27 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/xinzengpeizhi.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/figures/zhuangtaichaxun.png b/docs/en/Server/Maintenance/A-Ops/figures/zhuangtaichaxun.png new file mode 100644 index 0000000000000000000000000000000000000000..a3d0b3294bf6e0eeec50a2c2f8c5059bdc256376 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/figures/zhuangtaichaxun.png differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\344\270\273\346\234\272\347\256\241\347\220\206.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\344\270\273\346\234\272\347\256\241\347\220\206.jpg" new file mode 100644 index 
0000000000000000000000000000000000000000..9f6d8858468c0cc72c1bd395403f064cc63f82bd Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\344\270\273\346\234\272\347\256\241\347\220\206.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\344\270\273\346\234\272\347\273\204.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\344\270\273\346\234\272\347\273\204.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..fb5472de6b3d30abf6af73e286f70ac8e1d58c15 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\344\270\273\346\234\272\347\273\204.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\344\270\273\346\234\272\350\257\246\346\203\205.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\344\270\273\346\234\272\350\257\246\346\203\205.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..effd8c29aba14c2e8f301f9f60d8f25ce8c533f0 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\344\270\273\346\234\272\350\257\246\346\203\205.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\344\277\256\346\224\271\346\217\222\344\273\266.png" "b/docs/en/Server/Maintenance/A-Ops/figures/\344\277\256\346\224\271\346\217\222\344\273\266.png" new file mode 100644 index 0000000000000000000000000000000000000000..ba4a8d4d9aadb7f712bdcb4b193f05f956d38841 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\344\277\256\346\224\271\346\217\222\344\273\266.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\344\277\256\346\224\271\346\250\241\345\236\213.png" "b/docs/en/Server/Maintenance/A-Ops/figures/\344\277\256\346\224\271\346\250\241\345\236\213.png" new file mode 100644 index 0000000000000000000000000000000000000000..23ff4e5fddb87ac157b1002a70c47d9b4c76b873 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\344\277\256\346\224\271\346\250\241\345\236\213.png" differ diff --git 
"a/docs/en/Server/Maintenance/A-Ops/figures/\345\210\233\345\273\272\345\267\245\344\275\234\346\265\201.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\345\210\233\345\273\272\345\267\245\344\275\234\346\265\201.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..1a2b45e860914a1ac0cfb6908b02fb5cad4cbd60 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\345\210\233\345\273\272\345\267\245\344\275\234\346\265\201.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\345\221\212\350\255\246.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\345\221\212\350\255\246.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..89ac88e154275d4be8179d773e7093f2357f425f Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\345\221\212\350\255\246.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\345\221\212\350\255\246\347\241\256\350\256\244.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\345\221\212\350\255\246\347\241\256\350\256\244.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..57844f772853c541f7a1328b007a9b6ae4d5caf0 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\345\221\212\350\255\246\347\241\256\350\256\244.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\345\221\212\350\255\246\350\257\246\346\203\205.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\345\221\212\350\255\246\350\257\246\346\203\205.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..5b4830b47897a0d51be28238a879a70b1de9ca3b Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\345\221\212\350\255\246\350\257\246\346\203\205.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\345\267\245\344\275\234\345\217\260.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\345\267\245\344\275\234\345\217\260.jpg" new file mode 100644 index 
0000000000000000000000000000000000000000..998b81e3b88d888d0915dcff48dc8cc5df30d91c Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\345\267\245\344\275\234\345\217\260.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\345\267\245\344\275\234\346\265\201.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\345\267\245\344\275\234\346\265\201.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..17fb5b13034e1fc5276c68583fed1952415b0b5f Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\345\267\245\344\275\234\346\265\201.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\345\267\245\344\275\234\346\265\201\350\257\246\346\203\205.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\345\267\245\344\275\234\346\265\201\350\257\246\346\203\205.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..458e023847bb2ad1f198f5a2dd1691748038137e Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\345\267\245\344\275\234\346\265\201\350\257\246\346\203\205.jpg" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\345\272\224\347\224\250.png" "b/docs/en/Server/Maintenance/A-Ops/figures/\345\272\224\347\224\250.png" new file mode 100644 index 0000000000000000000000000000000000000000..aa34bb909ee7c86a95126c13fa532ce93410a931 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\345\272\224\347\224\250.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/figures/\346\217\222\344\273\266\347\256\241\347\220\206.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\346\217\222\344\273\266\347\256\241\347\220\206.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..2258d03976902052aaf39d36b6374fa680b9f8aa Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\346\217\222\344\273\266\347\256\241\347\220\206.jpg" differ diff --git 
"a/docs/en/Server/Maintenance/A-Ops/figures/\346\267\273\345\212\240\344\270\273\346\234\272\347\273\204.jpg" "b/docs/en/Server/Maintenance/A-Ops/figures/\346\267\273\345\212\240\344\270\273\346\234\272\347\273\204.jpg" new file mode 100644 index 0000000000000000000000000000000000000000..9fcd24d949e500323e7a466be7cbfaf48d257ad0 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/figures/\346\267\273\345\212\240\344\270\273\346\234\272\347\273\204.jpg" differ diff --git a/docs/en/docs/A-Ops/using-gala-ragdoll.md b/docs/en/Server/Maintenance/A-Ops/gala-ragdoll-user-guide.md similarity index 60% rename from docs/en/docs/A-Ops/using-gala-ragdoll.md rename to docs/en/Server/Maintenance/A-Ops/gala-ragdoll-user-guide.md index 3b0c460b9d765c0e9d2775dfc6e283e7c63fa9e7..a41c63b69b0c73a5a10825457b53f5e7a865088e 100644 --- a/docs/en/docs/A-Ops/using-gala-ragdoll.md +++ b/docs/en/Server/Maintenance/A-Ops/gala-ragdoll-user-guide.md @@ -1,52 +1,49 @@ -gala-ragdoll Usage Guide -============================ +# gala-ragdoll Usage Guide ## Installation -#### Manual Installation +### Manual Installation - Installing using the repo source mounted by Yum. - Configure the Yum sources **openEuler22.09** and **openEuler22.09:Epol** in the **/etc/yum.repos.d/openEuler.repo** file. + Configure the Yum sources **openEuler22.09** and **openEuler22.09:Epol** in the **/etc/yum.repos.d/openEuler.repo** file. 
- ```ini - [everything] # openEuler 22.09 officially released repository - name=openEuler22.09 - baseurl=https://repo.openeuler.org/openEuler-22.09/everything/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=https://repo.openeuler.org/openEuler-22.09/everything/$basearch/RPM-GPG-KEY-openEuler + ```ini + [everything] # openEuler 22.09 officially released repository + name=openEuler22.09 + baseurl=https://repo.openeuler.org/openEuler-22.09/everything/$basearch/ + enabled=1 + gpgcheck=1 + gpgkey=https://repo.openeuler.org/openEuler-22.09/everything/$basearch/RPM-GPG-KEY-openEuler - [Epol] # openEuler 22.09:Epol officially released repository - name=Epol - baseurl=https://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=https://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler + [Epol] # openEuler 22.09:Epol officially released repository + name=Epol + baseurl=https://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/ + enabled=1 + gpgcheck=1 + gpgkey=https://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler - ``` + ``` - Run the following commands to download and install gala-ragdoll and its dependencies. + Run the following commands to download and install gala-ragdoll and its dependencies. - ```shell - yum install gala-ragdoll # A-Ops configuration source tracing service - yum install python3-gala-ragdoll - - yum install gala-spider # A-Ops architecture awareness service - yum install python3-gala-spider - ``` + ```shell + yum install gala-ragdoll # A-Ops configuration source tracing service + yum install python3-gala-ragdoll -- Installing using the RPM packages. Download **gala-ragdoll-vx.x.x-x.oe1.aarch64.rpm**, and then run the following commands to install the modules. (`x.x-x` indicates the version. Replace it with the actual version number.) 
- - ```shell - rpm -ivh gala-ragdoll-vx.x.x-x.oe1.aarch64.rpm - ``` + yum install gala-spider # A-Ops architecture awareness service + yum install python3-gala-spider + ``` +- Installing using the RPM packages. Download **gala-ragdoll-vx.x.x-x.oe1.aarch64.rpm**, and then run the following commands to install the modules. (`x.x.x-x` indicates the version. Replace it with the actual version number.) + ```shell + rpm -ivh gala-ragdoll-vx.x.x-x.oe1.aarch64.rpm + ``` -#### Installing Using the A-Ops Deployment Service +### Installing Using the A-Ops Deployment Service -##### Editing the Task List +#### Editing the Task List Modify the deployment task list and enable the steps for gala_ragdoll: @@ -60,25 +57,24 @@ step_list: ... ``` -##### Editing the Host List + ### Configuration File Description -```/etc/yum.repos.d/openEuler.repo``` is the configuration file used to specify the Yum source address. The content of the configuration file is as follows: +`/etc/yum.repos.d/openEuler.repo` is the configuration file used to specify the Yum source address. The content of the configuration file is as follows: -``` +```ini [OS] name=OS baseurl=http://repo.openeuler.org/openEuler-20.09/OS/$basearch/ @@ -98,18 +94,18 @@ The following extended fields are added: | type | Configuration file type | ini, key-value, json, text, and more | | spacer | Spacer between a configuration item and its value | " ", "=", ":", and more | -Attachment: Learning the YANG language: https://datatracker.ietf.org/doc/html/rfc7950/. +Attachment: Learning the YANG language: <https://datatracker.ietf.org/doc/html/rfc7950/>. ### Creating Domains using Configuration Source Tracing -#### Viewing the configuration file. +#### Viewing the Configuration File gala-ragdoll contains the configuration file of the configuration source tracing. -``` +```shell [root@openeuler-development-1-1drnd ~]# cat /etc/ragdoll/gala-ragdoll.conf [git] // Defines the current Git information, including the directory and user information of the Git repository.
-git_dir = "/home/confTraceTestConf" +git_dir = "/home/confTraceTestConf" user_name = "user" user_email = "email" @@ -119,30 +115,22 @@ collect_api = "/manage/config/collect" [ragdoll] port = 11114 - ``` #### Creating the Configuration Domain - ![](./figures/create_service_domain.png) - - #### Adding Managed Nodes to the Configuration Domain ![](./figures/add_node.png) - - -#### Adding Configurations to the Configuration Domain - +#### Adding Configurations to the Configuration Domain ![](./figures/add_config.png) #### Querying the Expected Configuration - ![](./figures/view_expected_config.png) #### Deleting Configurations @@ -153,15 +141,10 @@ port = 11114 ![](./figures/query_actual_config.png) - - #### Verifying the Configuration - ![](./figures/query_status.png) - - #### Configuration Synchronization ![](./figures/sync_conf.png) diff --git a/docs/en/Server/Maintenance/A-Ops/image/45515A7F-0EC2-45AA-9B58-AB92DE9B0979.png b/docs/en/Server/Maintenance/A-Ops/image/45515A7F-0EC2-45AA-9B58-AB92DE9B0979.png new file mode 100644 index 0000000000000000000000000000000000000000..c810b26ad0c052960dfdf4bfd78e9224ce465318 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/image/45515A7F-0EC2-45AA-9B58-AB92DE9B0979.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/image/E574E637-0BF3-4F3B-BAE6-04ECBD09D151.png b/docs/en/Server/Maintenance/A-Ops/image/E574E637-0BF3-4F3B-BAE6-04ECBD09D151.png new file mode 100644 index 0000000000000000000000000000000000000000..6ef6ef9bd126e6c2007389065bbecc1cfdd97f5b Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/image/E574E637-0BF3-4F3B-BAE6-04ECBD09D151.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/image/EF5E0132-6E5C-4DD1-8CB5-73035278E233.png b/docs/en/Server/Maintenance/A-Ops/image/EF5E0132-6E5C-4DD1-8CB5-73035278E233.png new file mode 100644 index 0000000000000000000000000000000000000000..a2a29d2e1b62f7df409e87d03f2525ba8355f77e Binary files /dev/null and 
b/docs/en/Server/Maintenance/A-Ops/image/EF5E0132-6E5C-4DD1-8CB5-73035278E233.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/image/hotpatch-fix-pr.png b/docs/en/Server/Maintenance/A-Ops/image/hotpatch-fix-pr.png new file mode 100644 index 0000000000000000000000000000000000000000..209c73f7b4522819c52662a9038bdf19a88eacfd Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/image/hotpatch-fix-pr.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/image/hotpatch-pr-success.png b/docs/en/Server/Maintenance/A-Ops/image/hotpatch-pr-success.png new file mode 100644 index 0000000000000000000000000000000000000000..48ea807e03c0f8e6efbceacbbc583c6ac3b3c865 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/image/hotpatch-pr-success.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/image/hotpatch-xml.PNG b/docs/en/Server/Maintenance/A-Ops/image/hotpatch-xml.PNG new file mode 100644 index 0000000000000000000000000000000000000000..f1916620d3cc7b1c29059bcc5513fdc7ee94127b Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/image/hotpatch-xml.PNG differ diff --git a/docs/en/Server/Maintenance/A-Ops/image/image-20230607161545732.png b/docs/en/Server/Maintenance/A-Ops/image/image-20230607161545732.png new file mode 100644 index 0000000000000000000000000000000000000000..ba6992bea8d2a1d7ca4769ebfdd850b98d1a372f Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/image/image-20230607161545732.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/image/image-20230607163358749.png b/docs/en/Server/Maintenance/A-Ops/image/image-20230607163358749.png new file mode 100644 index 0000000000000000000000000000000000000000..191c36b65058ce8dea6bb2f1fe10a85b0177f2cf Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/image/image-20230607163358749.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/image/image-20230612113428096.png b/docs/en/Server/Maintenance/A-Ops/image/image-20230612113428096.png new file mode 100644 
index 0000000000000000000000000000000000000000..48b59b5e6cb4043703de96066c8d67e85eed4f16 Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/image/image-20230612113428096.png differ diff --git a/docs/en/Server/Maintenance/A-Ops/image/image-20230612113626330.png b/docs/en/Server/Maintenance/A-Ops/image/image-20230612113626330.png new file mode 100644 index 0000000000000000000000000000000000000000..9d3621022deb02b267c3eb29315a7fe33c1f095e Binary files /dev/null and b/docs/en/Server/Maintenance/A-Ops/image/image-20230612113626330.png differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/openEuler\344\273\223\350\257\204\350\256\272.png" "b/docs/en/Server/Maintenance/A-Ops/image/openEuler\344\273\223\350\257\204\350\256\272.png" new file mode 100644 index 0000000000000000000000000000000000000000..29223cbddc39f8fcc0b725a3ed83495709e05f78 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/openEuler\344\273\223\350\257\204\350\256\272.png" differ diff --git a/docs/en/docs/A-Ops/image/patch-file.PNG b/docs/en/Server/Maintenance/A-Ops/image/patch-file.PNG similarity index 100% rename from docs/en/docs/A-Ops/image/patch-file.PNG rename to docs/en/Server/Maintenance/A-Ops/image/patch-file.PNG diff --git "a/docs/en/Server/Maintenance/A-Ops/image/src-openEuler\344\273\223\350\257\204\350\256\272.png" "b/docs/en/Server/Maintenance/A-Ops/image/src-openEuler\344\273\223\350\257\204\350\256\272.png" new file mode 100644 index 0000000000000000000000000000000000000000..3f8fbd534e8f8a48fdd60a5c3f13b33531a4112a Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/src-openEuler\344\273\223\350\257\204\350\256\272.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\344\273\273\345\212\241\347\273\223\346\236\234\346\237\245\347\234\213.png" "b/docs/en/Server/Maintenance/A-Ops/image/\344\273\273\345\212\241\347\273\223\346\236\234\346\237\245\347\234\213.png" new file mode 100644 index 
0000000000000000000000000000000000000000..31fe24f44facaaa62fbeddd3eef0090a3be88908 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\344\273\273\345\212\241\347\273\223\346\236\234\346\237\245\347\234\213.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\345\210\233\345\273\272\350\204\232\346\234\254.png" "b/docs/en/Server/Maintenance/A-Ops/image/\345\210\233\345\273\272\350\204\232\346\234\254.png" new file mode 100644 index 0000000000000000000000000000000000000000..feb95836d056335d9d7ef673acc5fdf39e29bd8e Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\345\210\233\345\273\272\350\204\232\346\234\254.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\345\210\233\345\273\272\350\204\232\346\234\254\344\273\273\345\212\241.png" "b/docs/en/Server/Maintenance/A-Ops/image/\345\210\233\345\273\272\350\204\232\346\234\254\344\273\273\345\212\241.png" new file mode 100644 index 0000000000000000000000000000000000000000..e7b1c5fc77c4027f1cdb96941440220db8637e5f Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\345\210\233\345\273\272\350\204\232\346\234\254\344\273\273\345\212\241.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\345\215\225\346\254\241\346\211\247\350\241\214.png" "b/docs/en/Server/Maintenance/A-Ops/image/\345\215\225\346\254\241\346\211\247\350\241\214.png" new file mode 100644 index 0000000000000000000000000000000000000000..8020c60843c11e566778a1a03c1fa7516de9dd6b Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\345\215\225\346\254\241\346\211\247\350\241\214.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\345\220\257\345\212\250\347\203\255\350\241\245\344\270\201\345\267\245\347\250\213\346\265\201\347\250\213.png" "b/docs/en/Server/Maintenance/A-Ops/image/\345\220\257\345\212\250\347\203\255\350\241\245\344\270\201\345\267\245\347\250\213\346\265\201\347\250\213.png" new file mode 100644 index 
0000000000000000000000000000000000000000..1405eced0a14e3956191e111b7c1d588e5b3d27b Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\345\220\257\345\212\250\347\203\255\350\241\245\344\270\201\345\267\245\347\250\213\346\265\201\347\250\213.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\345\221\250\346\234\237\346\211\247\350\241\214.png" "b/docs/en/Server/Maintenance/A-Ops/image/\345\221\250\346\234\237\346\211\247\350\241\214.png" new file mode 100644 index 0000000000000000000000000000000000000000..b75743556384fe58690847b3794607ef9a890d6d Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\345\221\250\346\234\237\346\211\247\350\241\214.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\345\221\275\344\273\244\346\211\247\350\241\214.png" "b/docs/en/Server/Maintenance/A-Ops/image/\345\221\275\344\273\244\346\211\247\350\241\214.png" new file mode 100644 index 0000000000000000000000000000000000000000..b5c9fbbeb5a4bba5f81d753fa5aa620ad261804c Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\345\221\275\344\273\244\346\211\247\350\241\214.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\345\221\275\344\273\244\347\256\241\347\220\206\347\225\214\351\235\242.png" "b/docs/en/Server/Maintenance/A-Ops/image/\345\221\275\344\273\244\347\256\241\347\220\206\347\225\214\351\235\242.png" new file mode 100644 index 0000000000000000000000000000000000000000..c0357fc88d33c8b706203b70016d53629a3db70c Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\345\221\275\344\273\244\347\256\241\347\220\206\347\225\214\351\235\242.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\346\223\215\344\275\234\347\256\241\347\220\206.png" "b/docs/en/Server/Maintenance/A-Ops/image/\346\223\215\344\275\234\347\256\241\347\220\206.png" new file mode 100644 index 0000000000000000000000000000000000000000..3a1b8c3accdfb688da2e8e54eb17e86d18ee4d0b Binary files 
/dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\346\223\215\344\275\234\347\256\241\347\220\206.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\346\226\207\344\273\266\346\216\250\351\200\201.png" "b/docs/en/Server/Maintenance/A-Ops/image/\346\226\207\344\273\266\346\216\250\351\200\201.png" new file mode 100644 index 0000000000000000000000000000000000000000..c449eb18608e0146275f1b9f4ca41d05d48af021 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\346\226\207\344\273\266\346\216\250\351\200\201.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\346\226\260\345\273\272\345\221\275\344\273\244.png" "b/docs/en/Server/Maintenance/A-Ops/image/\346\226\260\345\273\272\345\221\275\344\273\244.png" new file mode 100644 index 0000000000000000000000000000000000000000..50d5bd4ce5499512acf2b8af86445fe6df6ce29f Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\346\226\260\345\273\272\345\221\275\344\273\244.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\346\226\260\345\273\272\345\221\275\344\273\244\344\273\273\345\212\241.png" "b/docs/en/Server/Maintenance/A-Ops/image/\346\226\260\345\273\272\345\221\275\344\273\244\344\273\273\345\212\241.png" new file mode 100644 index 0000000000000000000000000000000000000000..792ec4e81017575fd27466a275c0502563808296 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\346\226\260\345\273\272\345\221\275\344\273\244\344\273\273\345\212\241.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\347\203\255\350\241\245\344\270\201issue\351\223\276\346\216\245\345\222\214pr\351\223\276\346\216\245.png" "b/docs/en/Server/Maintenance/A-Ops/image/\347\203\255\350\241\245\344\270\201issue\351\223\276\346\216\245\345\222\214pr\351\223\276\346\216\245.png" new file mode 100644 index 0000000000000000000000000000000000000000..c9f6dc0a0f1a1758bb936b61ec939f8f5eeee633 Binary files /dev/null and 
"b/docs/en/Server/Maintenance/A-Ops/image/\347\203\255\350\241\245\344\270\201issue\351\223\276\346\216\245\345\222\214pr\351\223\276\346\216\245.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\350\204\232\346\234\254\346\211\247\350\241\214.png" "b/docs/en/Server/Maintenance/A-Ops/image/\350\204\232\346\234\254\346\211\247\350\241\214.png" new file mode 100644 index 0000000000000000000000000000000000000000..4ab626ad5949e17a5d486431ae4c0481ca42a442 Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\350\204\232\346\234\254\346\211\247\350\241\214.png" differ diff --git "a/docs/en/Server/Maintenance/A-Ops/image/\350\204\232\346\234\254\347\256\241\347\220\206.png" "b/docs/en/Server/Maintenance/A-Ops/image/\350\204\232\346\234\254\347\256\241\347\220\206.png" new file mode 100644 index 0000000000000000000000000000000000000000..62c60399dc58a79a9ab48a7eb584ce615c11b05c Binary files /dev/null and "b/docs/en/Server/Maintenance/A-Ops/image/\350\204\232\346\234\254\347\256\241\347\220\206.png" differ diff --git a/docs/en/docs/A-Ops/overview.md b/docs/en/Server/Maintenance/A-Ops/overview.md similarity index 100% rename from docs/en/docs/A-Ops/overview.md rename to docs/en/Server/Maintenance/A-Ops/overview.md diff --git a/docs/en/Server/Maintenance/CommonSkills/Menu/index.md b/docs/en/Server/Maintenance/CommonSkills/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..dfd6f982ada09b0e0b0c42e29188e562242b494e --- /dev/null +++ b/docs/en/Server/Maintenance/CommonSkills/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [Information Collection]({{< relref "./information-collection.md" >}}) +- [Common Configurations]({{< relref "./common-configurations.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/ops_guide/common-skills.md b/docs/en/Server/Maintenance/CommonSkills/common-configurations.md similarity index 99% rename from docs/en/docs/ops_guide/common-skills.md rename to 
docs/en/Server/Maintenance/CommonSkills/common-configurations.md index 4d7c1ce4b10853101e80754b855e6d492bf70918..84183f07a1dbc71188524ca3c4a51965c43f507d 100644 --- a/docs/en/docs/ops_guide/common-skills.md +++ b/docs/en/Server/Maintenance/CommonSkills/common-configurations.md @@ -1,6 +1,6 @@ -# Common Skills +# Common Configurations -- [Common Skills](#common-skills) +- [Common Configurations](#common-configurations) - [Configuring the Network](#configuring-the-network) - [Managing RPM Packages](#managing-rpm-packages) - [Configuring SSH](#configuring-ssh) @@ -89,8 +89,7 @@ RPM installs the required software to a set of management programs on the Linux - If no, do not install the software. During the installation, all software information is written into the RPM database for subsequent query, verification, and uninstallation. - -![en-us_other_0000001337581224](./images/en-us_other_0000001337581224.jpeg) +![en-us_other_0000001337581224](images/en-us_other_0000001337581224.jpeg) 1. Default installation path of the RPM packages diff --git a/docs/en/docs/ops_guide/images/en-us_other_0000001337581224.jpeg b/docs/en/Server/Maintenance/CommonSkills/images/en-us_other_0000001337581224.jpeg similarity index 100% rename from docs/en/docs/ops_guide/images/en-us_other_0000001337581224.jpeg rename to docs/en/Server/Maintenance/CommonSkills/images/en-us_other_0000001337581224.jpeg diff --git a/docs/en/docs/ops_guide/information-collection.md b/docs/en/Server/Maintenance/CommonSkills/information-collection.md similarity index 100% rename from docs/en/docs/ops_guide/information-collection.md rename to docs/en/Server/Maintenance/CommonSkills/information-collection.md diff --git a/docs/en/Server/Maintenance/CommonTools/Menu/index.md b/docs/en/Server/Maintenance/CommonTools/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..d4a9ce7255bf44ac9c2707b636228cb949ef2b20 --- /dev/null +++ b/docs/en/Server/Maintenance/CommonTools/Menu/index.md @@ -0,0 +1,4 @@ 
+--- +headless: true +--- +- [Commonly Used Tools for Location and Demarcation]({{< relref "./commonly-used-tools.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/ops_guide/commonly-used-tools.md b/docs/en/Server/Maintenance/CommonTools/commonly-used-tools.md similarity index 91% rename from docs/en/docs/ops_guide/commonly-used-tools.md rename to docs/en/Server/Maintenance/CommonTools/commonly-used-tools.md index 1639165cceb203fe3e71fa7cc4cc37e1e1df1a70..81a4da0a40667da4482dc85a6534cfcf35dc0c91 100644 --- a/docs/en/docs/ops_guide/commonly-used-tools.md +++ b/docs/en/Server/Maintenance/CommonTools/commonly-used-tools.md @@ -1,9 +1,9 @@ # Commonly Used Tools - [Commonly Used Tools](#commonly-used-tools) - - [ftrace](#ftrace) - - [strace](#strace) - - [kdump](#kdump) + - [ftrace](#ftrace) + - [strace](#strace) + - [kdump](#kdump) ## ftrace @@ -14,7 +14,7 @@ ftrace provides access interfaces for user space through the debugfs. After the debugfs is configured in the kernel, the **/sys/kernel/debug** directory is created. The debugfs is mounted to this directory. If the kernel supports ftrace-related configuration items, a **tracing** directory is created in the debugfs. The debugfs is mounted to this directory. The following figure shows the content of this directory. -![](./images/zh-cn_image_0000001322372918.png) +![](./images/en-us_image_0000001322372918.png) - **Introduction to the ftrace debugfs interface** @@ -38,7 +38,7 @@ trace: queries trace data. - **Available tracers** -![zh-cn_image_0000001373373585](./images/zh-cn_image_0000001373373585.png) +![en-us_image_0000001373373585](./images/en-us_image_0000001373373585.png) ```shell function: a function call tracing program that does not require parameters @@ -67,7 +67,7 @@ tail -f /sys/kernel/debug/tracing/trace Trace mmap, which corresponds to the system call **do_mmap**. Output the **addr** input parameter. 
-![zh-cn_image_0000001373379529](./images/zh-cn_image_0000001373379529.png) +![en-us_image_0000001373379529](./images/en-us_image_0000001373379529.png) ```shell # Trace through the kprobe. @@ -82,7 +82,7 @@ echo 1 > tracing_on # View trace data. ``` -![zh-cn_image_0000001322379488](./images/zh-cn_image_0000001322379488.png) +![en-us_image_0000001322379488](./images/en-us_image_0000001322379488.png) - **Tracing function calls** @@ -99,7 +99,7 @@ echo 1 > tracing_on # View trace data. ``` -![zh-cn_image_0000001322219840](./images/zh-cn_image_0000001322219840.png) +![en-us_image_0000001322219840](./images/en-us_image_0000001322219840.png) ## strace @@ -107,7 +107,7 @@ The `strace` command is a diagnosis and debugging tool. You can use the `strace` You can run the `strace -h` command to view the functions provided by strace. -![zh-cn_image_0000001322112990](./images/zh-cn_image_0000001322112990.png) +![en-us_image_0000001322112990](./images/en-us_image_0000001322112990.png) The most common usage is to trace a command, trace the forks, print the time, and output the result to the **output** file. @@ -135,7 +135,7 @@ strace -f -tt -o output xx vim /etc/default/grub ``` - ![zh-cn_image_0000001372821865](./images/zh-cn_image_0000001372821865.png) + ![en-us_image_0000001372821865](./images/en-us_image_0000001372821865.png) ```shell # Regenerate the grub configuration file. @@ -151,7 +151,7 @@ strace -f -tt -o output xx Step 1. Retain the default settings of the kernel. When a hard lock or oops occurs, a panic is triggered. - ![zh-cn_image_0000001372824637](./images/zh-cn_image_0000001372824637.png) + ![en-us_image_0000001372824637](./images/en-us_image_0000001372824637.png) Step 2. Modify the settings. The following commands cam make the settings take effect only once and become invalid after the system is restarted. @@ -190,7 +190,7 @@ Step 3. 
Run the following command to start crash debugging: crash {vmcore file} {debug kernel vmlinux} ``` -![zh-cn_image_0000001372748125](./images/zh-cn_image_0000001372748125.png) +![en-us_image_0000001372748125](./images/en-us_image_0000001372748125.png) The format of the **crash** debugging command is *command args*. *command* indicates the command to be executed, and *args* indicates the parameters required by some debugging commands. diff --git a/docs/en/docs/ops_guide/images/c50cb9df64f4659787c810167c89feb4_1884x257.png b/docs/en/Server/Maintenance/CommonTools/images/c50cb9df64f4659787c810167c89feb4_1884x257.png similarity index 100% rename from docs/en/docs/ops_guide/images/c50cb9df64f4659787c810167c89feb4_1884x257.png rename to docs/en/Server/Maintenance/CommonTools/images/c50cb9df64f4659787c810167c89feb4_1884x257.png diff --git a/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001321685172.png b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001321685172.png new file mode 100644 index 0000000000000000000000000000000000000000..a98265bdf251608c0ff394fefe545cd3192bdb28 Binary files /dev/null and b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001321685172.png differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001322112990.png b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001322112990.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001322112990.png rename to docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001322112990.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001322219840.png b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001322219840.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001322219840.png rename to docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001322219840.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001322372918.png 
b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001322372918.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001322372918.png rename to docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001322372918.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001322379488.png b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001322379488.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001322379488.png rename to docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001322379488.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001372748125.png b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001372748125.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001372748125.png rename to docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001372748125.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001372821865.png b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001372821865.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001372821865.png rename to docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001372821865.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001372824637.png b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001372824637.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001372824637.png rename to docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001372824637.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001373373585.png b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001373373585.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001373373585.png rename to docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001373373585.png diff --git 
a/docs/en/docs/ops_guide/images/zh-cn_image_0000001373379529.png b/docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001373379529.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001373379529.png rename to docs/en/Server/Maintenance/CommonTools/images/en-us_image_0000001373379529.png diff --git a/docs/en/Server/Maintenance/Gala/Menu/index.md b/docs/en/Server/Maintenance/Gala/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..48fa64412052fc7c38bea495ef693c796126949f --- /dev/null +++ b/docs/en/Server/Maintenance/Gala/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [gala-anteater User Guide]({{< relref "./using-gala-anteater.md" >}}) +- [gala-gopher User Guide]({{< relref "./using-gala-gopher.md" >}}) +- [gala-spider User Guide]({{< relref "./using-gala-spider.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Maintenance/Gala/figures/attach-process.png b/docs/en/Server/Maintenance/Gala/figures/attach-process.png new file mode 100644 index 0000000000000000000000000000000000000000..f76e8f4513cb45fbece12e6237039c41786b0467 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/attach-process.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/deadlock.png b/docs/en/Server/Maintenance/Gala/figures/deadlock.png new file mode 100644 index 0000000000000000000000000000000000000000..d4f863a1a87d7aad3128481c763ee715aefd0a9f Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/deadlock.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/deadlock2.png b/docs/en/Server/Maintenance/Gala/figures/deadlock2.png new file mode 100644 index 0000000000000000000000000000000000000000..3be42a5a34f90c2f3b351c7077635c580ea847a7 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/deadlock2.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/deadlock3.png b/docs/en/Server/Maintenance/Gala/figures/deadlock3.png 
new file mode 100644 index 0000000000000000000000000000000000000000..5ef1a08394daf6433e10f85a5b3c57df25c3e303 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/deadlock3.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/flame_muti_ins.png b/docs/en/Server/Maintenance/Gala/figures/flame_muti_ins.png new file mode 100644 index 0000000000000000000000000000000000000000..5943c7fda223a7fde4d2987ad56af4ffa776bd81 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/flame_muti_ins.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/gala-gopher-start-success.png b/docs/en/Server/Maintenance/Gala/figures/gala-gopher-start-success.png new file mode 100644 index 0000000000000000000000000000000000000000..ab16e9d3661db3fd4adc6c605b2d2d08e79fdc1c Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/gala-gopher-start-success.png differ diff --git a/docs/en/docs/A-Ops/figures/gala-spider-arch.png b/docs/en/Server/Maintenance/Gala/figures/gala-spider-arch.png similarity index 100% rename from docs/en/docs/A-Ops/figures/gala-spider-arch.png rename to docs/en/Server/Maintenance/Gala/figures/gala-spider-arch.png diff --git a/docs/en/Server/Maintenance/Gala/figures/gopher-arch.png b/docs/en/Server/Maintenance/Gala/figures/gopher-arch.png new file mode 100644 index 0000000000000000000000000000000000000000..f151965a21d11dd7a3e215cc4ef23d70d059f4b1 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/gopher-arch.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/lockcompete1.png b/docs/en/Server/Maintenance/Gala/figures/lockcompete1.png new file mode 100644 index 0000000000000000000000000000000000000000..5848b114e02d09f23303da8cff7aef56216f655f Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/lockcompete1.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/lockcompete2.png b/docs/en/Server/Maintenance/Gala/figures/lockcompete2.png new file mode 100644 
index 0000000000000000000000000000000000000000..ed02a882a145dafeafb76469f328085edecc6775 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/lockcompete2.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/lockcompete3.png b/docs/en/Server/Maintenance/Gala/figures/lockcompete3.png new file mode 100644 index 0000000000000000000000000000000000000000..3992edc5b7ea61d8a2aa08ce47f0876b7d2e8cf3 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/lockcompete3.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/lockcompete4.png b/docs/en/Server/Maintenance/Gala/figures/lockcompete4.png new file mode 100644 index 0000000000000000000000000000000000000000..049ac49bcc1fb71ea9fe6866bd27e84d0acf42b1 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/lockcompete4.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/lockcompete5.png b/docs/en/Server/Maintenance/Gala/figures/lockcompete5.png new file mode 100644 index 0000000000000000000000000000000000000000..8b5cf5aaef43f125abdf3adb8a7f798dd2c86b54 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/lockcompete5.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/lockcompete6.png b/docs/en/Server/Maintenance/Gala/figures/lockcompete6.png new file mode 100644 index 0000000000000000000000000000000000000000..c3b1f5f097b9e9bcabf75229eabc6ce8fe126a71 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/lockcompete6.png differ diff --git a/docs/en/docs/A-Ops/figures/spider_topology.png b/docs/en/Server/Maintenance/Gala/figures/spider_topology.png similarity index 100% rename from docs/en/docs/A-Ops/figures/spider_topology.png rename to docs/en/Server/Maintenance/Gala/figures/spider_topology.png diff --git a/docs/en/Server/Maintenance/Gala/figures/tprofiling-dashboard-detail.png b/docs/en/Server/Maintenance/Gala/figures/tprofiling-dashboard-detail.png new file mode 100644 index 
0000000000000000000000000000000000000000..2093808bc4e1654956f6143393757c1244f08f98 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/tprofiling-dashboard-detail.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/tprofiling-dashboard.png b/docs/en/Server/Maintenance/Gala/figures/tprofiling-dashboard.png new file mode 100644 index 0000000000000000000000000000000000000000..15f4917f5a0bfcf5dee1f8fe68e65635ffebd85e Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/tprofiling-dashboard.png differ diff --git a/docs/en/Server/Maintenance/Gala/figures/tprofiling-run-arch.png b/docs/en/Server/Maintenance/Gala/figures/tprofiling-run-arch.png new file mode 100644 index 0000000000000000000000000000000000000000..0ad835125a5e7b7f66938543de1e1c9d53706ce4 Binary files /dev/null and b/docs/en/Server/Maintenance/Gala/figures/tprofiling-run-arch.png differ diff --git a/docs/en/docs/A-Ops/using-gala-anteater.md b/docs/en/Server/Maintenance/Gala/using-gala-anteater.md similarity index 100% rename from docs/en/docs/A-Ops/using-gala-anteater.md rename to docs/en/Server/Maintenance/Gala/using-gala-anteater.md diff --git a/docs/en/Server/Maintenance/Gala/using-gala-gopher.md b/docs/en/Server/Maintenance/Gala/using-gala-gopher.md new file mode 100644 index 0000000000000000000000000000000000000000..5c07a1ef2a55dfbb7cfadbbd3c02b3ce3d08c976 --- /dev/null +++ b/docs/en/Server/Maintenance/Gala/using-gala-gopher.md @@ -0,0 +1,1119 @@ +# Using gala-gopher + +As a data collection module, gala-gopher provides OS-level monitoring capabilities, supports dynamic probe installation and uninstallation, and integrates third-party probes in a non-intrusive manner to quickly expand the monitoring scope. + +This chapter describes how to deploy and use the gala-gopher service. + +# Installation + +Mount the repositories. 
+
+```basic
+[oe-2209] # openEuler 23.09 officially released repository
+name=oe2209
+baseurl=http://119.3.219.20:82/openEuler:/23.09/standard_x86_64
+enabled=1
+gpgcheck=0
+priority=1
+
+[oe-2209:Epol] # openEuler 23.09: Epol officially released repository
+name=oe2209_epol
+baseurl=http://119.3.219.20:82/openEuler:/23.09:/Epol/standard_x86_64/
+enabled=1
+gpgcheck=0
+priority=1
+```
+
+Install gala-gopher.
+
+```bash
+# yum install gala-gopher
+```
+
+# Configuration
+
+## Configuration Description
+
+The configuration file of gala-gopher is **/opt/gala-gopher/gala-gopher.conf**. The configuration items in the file are described below (items that do not require manual configuration are omitted).
+
+The following configurations can be modified as required:
+
+- `global`: global configuration for gala-gopher.
+  - `log_file_name`: name of the gala-gopher log file.
+  - `log_level`: gala-gopher log level (currently not enabled).
+  - `pin_path`: path for storing the map shared by the eBPF probe (keep the default configuration).
+- `metric`: configuration for metric data output.
+  - `out_channel`: output channel for metrics (`web_server`, `logs`, or `kafka`). If empty, the output channel is disabled.
+  - `kafka_topic`: topic configuration for Kafka output.
+- `event`: configuration for abnormal event output.
+  - `out_channel`: output channel for events (`logs` or `kafka`). If empty, the output channel is disabled.
+  - `kafka_topic`: topic configuration for Kafka output.
+  - `timeout`: reporting interval for the same event.
+  - `desc_language`: language for event descriptions (`zh_CN` or `en_US`).
+- `meta`: configuration for metadata output.
+  - `out_channel`: output channel for metadata (`logs` or `kafka`). If empty, the output channel is disabled.
+  - `kafka_topic`: topic configuration for Kafka output.
+- `ingress`: probe data reporting configuration (currently unused).
+  - `interval`: unused.
+- `egress`: database reporting configuration (currently unused).
+  - `interval`: unused.
+  - `time_range`: unused.
+- `imdb`: cache configuration.
+  - `max_tables_num`: maximum number of cache tables. Each meta file in **/opt/gala-gopher/meta** corresponds to a table.
+  - `max_records_num`: maximum records per cache table. Each probe typically generates at least one record per observation period.
+  - `max_metrics_num`: maximum number of metrics per record.
+  - `record_timeout`: cache table aging time (seconds). Records not updated within this time are deleted.
+- `web_server`: `web_server` output channel configuration.
+  - `port`: listening port.
+- `rest_api_server`:
+  - `port`: listening port for the REST API.
+  - `ssl_auth`: enables HTTPS encryption and authentication for the REST API (`on` or `off`). Enable in production.
+  - `private_key`: absolute path to the server's private key file for HTTPS encryption (required if `ssl_auth` is `on`).
+  - `cert_file`: absolute path to the server's certificate file for HTTPS encryption (required if `ssl_auth` is `on`).
+  - `ca_file`: absolute path to the CA certificate for client authentication (required if `ssl_auth` is `on`).
+- `kafka`: Kafka output channel configuration.
+  - `kafka_broker`: IP address and port of the Kafka server.
+  - `batch_num_messages`: number of messages per batch.
+  - `compression_codec`: message compression type.
+  - `queue_buffering_max_messages`: maximum number of messages in the producer buffer.
+  - `queue_buffering_max_kbytes`: maximum size (KB) of the producer buffer.
+  - `queue_buffering_max_ms`: maximum time (ms) the producer waits for more messages before sending a batch.
+- `logs`: `logs` output channel configuration.
+  - `metric_dir`: path for metric data logs.
+  - `event_dir`: path for abnormal event logs.
+  - `meta_dir`: path for metadata logs.
+  - `debug_dir`: path for gala-gopher runtime logs.
+
+## Configuration File Example
+
+- Select the data output channels.
+
+  ```yaml
+  metric =
+  {
+      out_channel = "web_server";
+      kafka_topic = "gala_gopher";
+  };
+
+  event =
+  {
+      out_channel = "kafka";
+      kafka_topic = "gala_gopher_event";
+  };
+
+  meta =
+  {
+      out_channel = "kafka";
+      kafka_topic = "gala_gopher_metadata";
+  };
+  ```
+
+- Configure Kafka and the web server.
+
+  ```yaml
+  web_server =
+  {
+      port = 8888;
+  };
+
+  kafka =
+  {
+      kafka_broker = ":9092";
+  };
+  ```
+
+- Select the probes to be enabled. The following is an example.
+
+  ```yaml
+  probes =
+  (
+      {
+          name = "system_infos";
+          param = "-t 5 -w /opt/gala-gopher/task_whitelist.conf -l warn -U 80";
+          switch = "on";
+      },
+  );
+  extend_probes =
+  (
+      {
+          name = "tcp";
+          command = "/opt/gala-gopher/extend_probes/tcpprobe";
+          param = "-l warn -c 1 -P 7";
+          switch = "on";
+      }
+  );
+  ```
+
+# Start
+
+After the configuration is complete, start gala-gopher.
+
+```bash
+# systemctl start gala-gopher.service
+```
+
+Query the status of the gala-gopher service.
+
+```bash
+# systemctl status gala-gopher.service
+```
+
+If information similar to the following is displayed, the service has started successfully. Check whether the enabled probes are running; if a probe thread does not exist, check the configuration file and the gala-gopher run logs.
+
+![](./figures/gala-gopher-start-success.png)
+
+> Note: Root permissions are required for deploying and running gala-gopher.
+
+# How to Use
+
+## Deployment of External Dependent Software
+
+![](./figures/gopher-arch.png)
+
+As shown in the preceding figure, the green parts are external components that gala-gopher depends on: gala-gopher outputs metric data to Prometheus, and outputs metadata and abnormal events to Kafka. The gala-anteater and gala-spider components in the gray rectangles obtain data from Prometheus and Kafka.
+
+> Note: Obtain the installation packages of Kafka and Prometheus from their official websites.
+
+### REST Dynamic Configuration Interface
+
+The web server port is configurable (default is 9999).
The URL format is `http://[gala-gopher-node-ip-address]:[port]/[function (collection feature)]`. For example, the URL for the flamegraph is `http://localhost:9999/flamegraph` (the following documentation uses the flamegraph as an example). + +#### Configuring the Probe Monitoring Scope + +Probes are disabled by default and can be dynamically enabled and configured via the API. Taking the flamegraph as an example, the REST API can be used to enable `oncpu`, `offcpu`, and `mem` flamegraph capabilities. The monitoring scope can be configured based on four dimensions: process ID, process name, container ID, and pod. + +Below is an example of an API that simultaneously enables the oncpu and offcpu collection features for the flamegraph: + +```sh +curl -X PUT http://localhost:9999/flamegraph --data-urlencode json=' +{ + "cmd": { + "bin": "/opt/gala-gopher/extend_probes/stackprobe", + "check_cmd": "", + "probe": [ + "oncpu", + "offcpu" + ] + }, + "snoopers": { + "proc_id": [ + 101, + 102 + ], + "proc_name": [ + { + "comm": "app1", + "cmdline": "", + "debugging_dir": "" + }, + { + "comm": "app2", + "cmdline": "", + "debugging_dir": "" + } + ], + "pod_id": [ + "pod1", + "pod2" + ], + "container_id": [ + "container1", + "container2" + ] + } +}' +``` + +A full description of the collection features is provided below: + +| Collection Feature | Description | Sub-item Scope | Monitoring Targets | Startup File | Startup Condition | +| ------------------ | -------------------------------------------------- | ----------------------------------------------------------------------------------------- | ---------------------------------------- | ---------------------------------- | ------------------------- | +| flamegraph | Online performance flamegraph observation | oncpu, offcpu, mem | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/stackprobe | NA | +| l7 | Application layer 7 protocol observation | l7_bytes_metrics, l7_rpc_metrics, l7_rpc_trace | proc_id, proc_name, 
pod_id, container_id | $gala-gopher-dir/l7probe | NA | +| tcp | TCP exception and state observation | tcp_abnormal, tcp_rtt, tcp_windows, tcp_rate, tcp_srtt, tcp_sockbuf, tcp_stats, tcp_delay | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/tcpprobe | NA | +| socket | Socket (TCP/UDP) exception observation | tcp_socket, udp_socket | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/endpoint | NA | +| io | Block layer I/O observation | io_trace, io_err, io_count, page_cache | NA | $gala-gopher-dir/ioprobe | NA | +| proc | Process system calls, I/O, DNS, VFS observation | base_metrics, proc_syscall, proc_fs, proc_io, proc_dns, proc_pagecache | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/taskprobe | NA | +| jvm | JVM layer GC, threads, memory, cache observation | NA | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/jvmprobe | NA | +| ksli | Redis performance SLI (access latency) observation | NA | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/ksliprobe | NA | +| postgre_sli | PG DB performance SLI (access latency) observation | NA | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/pgsliprobe | NA | +| opengauss_sli | openGauss access throughput observation | NA | \[ip, port, dbname, user, password] | $gala-gopher-dir/pg_stat_probe.py | NA | +| dnsmasq | DNS session observation | NA | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/rabbitmq_probe.sh | NA | +| lvs | LVS session observation | NA | NA | $gala-gopher-dir/trace_lvs | lsmod\|grep ip_vs\| wc -l | +| nginx | Nginx L4/L7 layer session observation | NA | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/nginx_probe | NA | +| haproxy | Haproxy L4/7 layer session observation | NA | proc_id, proc_name, pod_id, container_id | $gala-gopher-dir/trace_haproxy | NA | +| kafka | Kafka producer/consumer topic observation | NA | dev, port | $gala-gopher-dir/kafkaprobe | NA | +| baseinfo | System basic information | cpu, mem, 
nic, disk, net, fs, proc, host | proc_id, proc_name, pod_id, container_id | system_infos | NA | +| virt | Virtualization management information | NA | NA | virtualized_infos | NA | +| tprofiling | Thread-level performance profiling observation | oncpu, syscall_file, syscall_net, syscall_lock, syscall_sched | proc_id, proc_name | | | + +### Configuring Probe Runtime Parameters + +Probes require additional parameter settings during runtime, such as configuring the sampling period and reporting period for flamegraphs. + +```sh +curl -X PUT http://localhost:9999/flamegraph --data-urlencode json=' +{ + "params": { + "report_period": 180, + "sample_period": 180, + "metrics_type": [ + "raw", + "telemetry" + ] + } +}' +``` + +Detailed runtime parameters are as follows: + +| Parameter | Description | Default & Range | Unit | Supported Monitoring Scope | Supported by gala-gopher | +| ------------------- | ---------------------------------------- | -------------------------------------------------------------- | ------- | ------------------------------------------- | ------------------------ | +| sample_period | Sampling period | 5000, \[100~10000] | ms | io, tcp | Y | +| report_period | Reporting period | 60, \[5~600] | s | ALL | Y | +| latency_thr | Latency reporting threshold | 0, \[10~100000] | ms | tcp, io, proc, ksli | Y | +| offline_thr | Process offline reporting threshold | 0, \[10~100000] | ms | proc | Y | +| drops_thr | Packet loss reporting threshold | 0, \[10~100000] | package | tcp, nic | Y | +| res_lower_thr | Resource percentage lower limit | 0%, \[0%~100%] | percent | ALL | Y | +| res_upper_thr | Resource percentage upper limit | 0%, \[0%~100%] | percent | ALL | Y | +| report_event | Report abnormal events | 0, \[0, 1] | NA | ALL | Y | +| metrics_type | Report telemetry metrics | raw, \[raw, telemetry] | NA | ALL | N | +| env | Working environment type | node, \[node, container, kubenet] | NA | ALL | N | +| report_source_port | Report source port | 0, \[0, 1] 
| NA | tcp | Y | +| l7_protocol | Layer 7 protocol scope | http, \[http, pgsql, mysql, redis, kafka, mongo, rocketmq, dns] | NA | l7 | Y | +| support_ssl | Support SSL encrypted protocol observation | 0, \[0, 1] | NA | l7 | Y | +| multi_instance | Output separate flamegraphs for each process | 0, \[0, 1] | NA | flamegraph | Y | +| native_stack | Display native language stack (for JAVA processes) | 0, \[0, 1] | NA | flamegraph | Y | +| cluster_ip_backend | Perform Cluster IP backend conversion | 0, \[0, 1] | NA | tcp, l7 | Y | +| pyroscope_server | Set flamegraph UI server address | localhost:4040 | NA | flamegraph | Y | +| svg_period | Flamegraph SVG file generation period | 180, \[30, 600] | s | flamegraph | Y | +| perf_sample_period | Period for collecting stack info in oncpu flamegraph | 10, \[10, 1000] | ms | flamegraph | Y | +| svg_dir | Directory for storing flamegraph SVG files | "/var/log/gala-gopher/stacktrace" | NA | flamegraph | Y | +| flame_dir | Directory for storing raw stack info in flamegraphs | "/var/log/gala-gopher/flamegraph" | NA | flamegraph | Y | +| dev_name | Observed network card/disk device name | "" | NA | io, kafka, ksli, postgre_sli, baseinfo, tcp | Y | +| continuous_sampling | Enable continuous sampling | 0, \[0, 1] | NA | ksli | Y | +| elf_path | Path to the executable file to observe | "" | NA | nginx, haproxy, dnsmasq | Y | +| kafka_port | Kafka port number to observe | 9092, \[1, 65535] | NA | kafka | Y | +| cadvisor_port | Port number for starting cadvisor | 8080, \[1, 65535] | NA | cadvisor | Y | + +### Starting and Stopping Probes + +```sh +curl -X PUT http://localhost:9999/flamegraph --data-urlencode json=' +{ + "state": "running" // optional: running, stopped +}' +``` + +### Constraints and Limitations + +1. The interface is stateless. The settings uploaded each time represent the final runtime configuration for the probe, including state, parameters, and monitoring scope. +2. 
Monitoring targets can be combined arbitrarily, and the monitoring scope is the union of all specified targets.
+3. The startup file must be valid and accessible.
+4. Collection features can be enabled partially or fully as needed, but disabling a feature requires disabling it entirely.
+5. The monitoring target for opengauss is a DB instance (IP/Port/dbname/user/password).
+6. The interface can receive a maximum of 2048 characters per request.
+
+#### Querying Probe Configurations and Status
+
+```sh
+curl -X GET http://localhost:9999/flamegraph
+{
+  "cmd": {
+    "bin": "/opt/gala-gopher/extend_probes/stackprobe",
+    "check_cmd": "",
+    "probe": [
+      "oncpu",
+      "offcpu"
+    ]
+  },
+  "snoopers": {
+    "proc_id": [
+      101,
+      102
+    ],
+    "proc_name": [
+      {
+        "comm": "app1",
+        "cmdline": "",
+        "debugging_dir": ""
+      },
+      {
+        "comm": "app2",
+        "cmdline": "",
+        "debugging_dir": ""
+      }
+    ],
+    "pod_id": [
+      "pod1",
+      "pod2"
+    ],
+    "container_id": [
+      "container1",
+      "container2"
+    ]
+  },
+  "params": {
+    "report_period": 180,
+    "sample_period": 180,
+    "metrics_type": [
+      "raw",
+      "telemetry"
+    ]
+  },
+  "state": "running"
+}
+```
+
+## Introduction to stackprobe
+
+stackprobe is a performance flamegraph tool designed for cloud-native environments.
+
+### Features
+
+- Supports observation of applications written in C/C++, Go, Rust, and Java.
+- The call stack supports container and process granularity: For processes within containers, the workload Pod name and container name are marked with `[Pod]` and `[Con]` prefixes at the bottom of the call stack. Process names are prefixed with `[]`, while threads and functions (methods) have no prefix.
+- Supports generating SVG format flamegraphs locally or uploading call stack data to middleware.
+- Supports generating/uploading flamegraphs for multiple instances based on process granularity.
+- For Java processes, flamegraphs can simultaneously display native methods and Java methods.
+- Supports multiple types of flamegraphs, including oncpu, offcpu, and mem.
+- Supports custom sampling periods. + +### Usage Instructions + +Basic startup command example: Start the performance flamegraph with default parameters. + +```sh +curl -X PUT http://localhost:9999/flamegraph -d json='{ "cmd": {"probe": ["oncpu"] }, "snoopers": {"proc_name": [{ "comm": "cadvisor"}] }, "state": "running"}' +``` + +Advanced startup command example: Start the performance flamegraph with custom parameters. For a complete list of configurable parameters, refer to [Configuring Probe Runtime Parameters](#configuring-probe-runtime-parameters). + +```sh +curl -X PUT http://localhost:9999/flamegraph -d json='{ "cmd": { "check_cmd": "", "probe": ["oncpu", "offcpu", "mem"] }, "snoopers": { "proc_name": [{ "comm": "cadvisor", "cmdline": "", "debugging_dir": "" }, { "comm": "java", "cmdline": "", "debugging_dir": "" }] }, "params": { "perf_sample_period": 100, "svg_period": 300, "svg_dir": "/var/log/gala-gopher/stacktrace", "flame_dir": "/var/log/gala-gopher/flamegraph", "pyroscope_server": "localhost:4040", "multi_instance": 1, "native_stack": 0 }, "state": "running"}' +``` + +Key configuration options explained: + +- **Enabling flamegraph types**: + + Set via the `probe` parameter. Values include `oncpu`, `offcpu`, and `mem`, representing CPU usage time, blocked time, and memory allocation statistics, respectively. + + Example: + + `"probe": ["oncpu", "offcpu", "mem"]` + +- **Setting the period for generating local SVG flamegraphs**: + + Configured via the `svg_period` parameter, in seconds. Default is 180, with an optional range of \[30, 600]. + + Example: + + `"svg_period": 300` + +- **Enabling/disabling stack information upload to Pyroscope**: + + Set via the `pyroscope_server` parameter. The value must include the address and port. If empty or incorrectly formatted, the probe will not attempt to upload stack information. The upload period is 30 seconds. 
+ + Example: + + `"pyroscope_server": "localhost:4040"` + +- **Setting the call stack sampling period**: + + Configured via the `perf_sample_period` parameter, in milliseconds. Default is 10, with an optional range of \[10, 1000]. This parameter only applies to oncpu flamegraphs. + + Example: + + `"perf_sample_period": 100` + +- **Enabling/disabling multi-instance flamegraph generation**: + + Set via the `multi_instance` parameter, with values 0 or 1. Default is 0. A value of 0 merges flamegraphs for all processes, while 1 generates separate flamegraphs for each process. + + Example: + + `"multi_instance": 1` + +- **Enabling/disabling native call stack collection**: + + Set via the `native_stack` parameter, with values 0 or 1. Default is 0. This parameter only applies to Java processes. A value of 0 disables collection of the JVM's native call stack, while 1 enables it. + + Example: + + `"native_stack": 1` + + Visualization: (Left: `"native_stack": 1`, Right: `"native_stack": 0`) + + ![image-20230804172905729](./figures/flame_muti_ins.png) + +### Implementation Plan + +#### 1. User-Space Program Logic + +The program periodically (every 30 seconds) converts kernel-reported stack information from addresses to symbols using the symbol table. It then uses the flamegraph plugin or pyroscope to generate a flame graph from the symbolized call stack. + +The approach to obtaining the symbol table differs based on the code segment type. + +- Kernel Symbol Table: Access **/proc/kallsyms**. + +- Native Language Symbol Table: Query the process virtual memory mapping file (**/proc/{pid}/maps**) to retrieve address mappings for each code segment in the process memory. The libelf library is then used to load the symbol table of the corresponding module for each segment. + +- Java Language Symbol Table: + + Since Java methods are not statically mapped to the process virtual address space, alternative methods are used to obtain the symbolized Java call stack. 
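The first two symbol sources above are plain files, so the address-to-module step can be sketched directly. Below is a minimal illustration (Linux-only; the helper is hypothetical and not part of stackprobe, and it omits the per-module ELF symbol-table loading via libelf that follows this step):

```python
def executable_segments(pid="self"):
    """Parse /proc/<pid>/maps and return the executable mappings.

    Each returned (start, end, path) names the module whose ELF symbol
    table would then be loaded (e.g. via libelf) to resolve addresses
    falling inside that range.
    """
    segments = []
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            fields = line.split()
            # fields: address-range, perms, offset, dev, inode, [path]
            if len(fields) >= 6 and "x" in fields[1]:
                start, end = (int(x, 16) for x in fields[0].split("-"))
                segments.append((start, end, fields[5]))
    return segments

if __name__ == "__main__":
    for start, end, path in executable_segments()[:3]:
        print(f"{start:#x}-{end:#x} {path}")
```

Resolving a sampled address then reduces to finding the segment that contains it and looking the offset up in that module's symbol table.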
+ +##### Method 1: Perf Observation + +A JVM agent dynamic library is loaded into the Java process to monitor JVM method compilation and loading events. This allows real-time recording of memory address-to-Java symbol mappings, generating the Java process symbol table. This method requires the Java process to be launched with the `-XX:+PreserveFramePointer` option. Its key advantage is that the flame graph can display the JVM call stack, and the resulting Java flame graph can be merged with those of other processes for unified visualization. + +##### Method 2: JFR Observation + +The JVM built-in profiler, Java Flight Recorder (JFR), is dynamically enabled to monitor various events and metrics of the Java application. This is accomplished by loading a Java agent into the Java process, which internally calls the JFR API. This method offers the advantage of more precise and comprehensive collection of Java method call stacks. + +Both Java performance analysis methods can be loaded in real time (without restarting the Java process) and feature low overhead. When stackprobe startup parameters are configured as `"multi_instance": 1` and `"native_stack": 0`, it uses Method 2 to generate the Java process flame graph; otherwise, it defaults to Method 1. + +#### 2. Kernel-Space Program Logic + +The kernel-space functionality is implemented using eBPF. Different flame graph types correspond to distinct eBPF programs. These programs periodically or through event triggers traverse the current user-space and kernel-space call stacks, reporting the results to user space. + +##### 2.1 On-CPU Flame Graph + +A sampling eBPF program is attached to perf software event `PERF_COUNT_SW_CPU_CLOCK` to periodically sample the call stack. + +##### 2.2 Off-CPU Flame Graph + +A sampling eBPF program is attached to process scheduling tracepoint `sched_switch`. 
This program records the time and process ID when a process is scheduled out and samples the call stack when the process is scheduled back in. + +##### 2.3 Memory Flame Graph + +A sampling eBPF program is attached to page fault tracepoint `page_fault_user`. The call stack is sampled whenever this event is triggered. + +#### 3. Java Language Support + +- stackprobe main process: + + 1. Receives an IPC message to identify the Java process to be observed. + 2. Utilizes the Java agent loading module to inject the JVM agent program into the target Java process: `jvm_agent.so` (for [Method 1](#method-1-perf-observation)) or `JstackProbeAgent.jar` (for [Method 2](#method-2-jfr-observation)). + 3. For Method 1, the main process loads the `java-symbols.bin` file of the corresponding Java process to facilitate address-to-symbol conversion. For Method 2, it loads the `stacks-{flame_type}.txt` file of the corresponding Java process, which can be directly used to generate flame graphs. + +- Java agent loading module: + + 1. Detects a new Java process and copies the JVM agent program to `/proc//root/tmp` in the process space (to ensure visibility to the JVM inside the container during attachment). + 2. Adjusts the ownership of the directory and JVM agent program to match the observed Java process. + 3. Launches the `jvm_attach` subprocess and passes the relevant parameters of the observed Java process. + +- JVM agent program: + + - jvm_agent.so: Registers JVMTI callback functions. + + When the JVM loads a Java method or dynamically compiles a native method, it triggers the callback function. The callback records the Java class name, method name, and corresponding memory address in `/proc//root/tmp/java-data-/java-symbols.bin` within the observed Java process space. + - JstackProbeAgent.jar: Invokes the JFR API. + + Activates JFR for 30 seconds and transforms the JFR statistics into a stack format suitable for flame graphs. 
The output is saved to `/proc//root/tmp/java-data-/stacks-.txt` in the observed Java process space. For more information, refer to [JstackProbe Introduction](https://gitee.com/openeuler/gala-gopher/blob/dev/src/probes/extends/java.probe/jstack.probe/readme.md).
+
+- jvm_attach: Dynamically loads the JVM agent program into the JVM of the observed process (based on `sun.tools.attach.LinuxVirtualMachine` from the JDK source code and the `jattach` tool).
+
+  1. Configures its own namespace (the JVM requires the attaching process and the observed process to share the same namespace for agent loading).
+  2. Verifies if the JVM attach listener is active (by checking for the existence of the UNIX socket file `/proc//root/tmp/.java_pid`).
+  3. If inactive, creates `/proc//cwd/.attach_pid` and sends a SIGQUIT signal to the JVM.
+  4. Establishes a connection to the UNIX socket.
+  5. Interprets the response; a value of 0 indicates successful attachment.
+
+  Attachment process diagram:
+
+  ![Attachment process](./figures/attach-process.png)
+
+### Precautions
+
+- To achieve the best observation results for Java applications, configure the stackprobe startup options to `"multi_instance": 1` and `"native_stack": 0` to enable JFR observation (JDK8u262+). Otherwise, stackprobe will use perf to generate Java flame graphs. When using perf, ensure that the JVM option `-XX:+PreserveFramePointer` is enabled (JDK8 or later).
+
+### Constraints
+
+- Supports observation of Java applications based on the HotSpot JVM.
+
+## Introduction to tprofiling
+
+tprofiling, a thread-level application performance diagnostic tool provided by gala-gopher, leverages eBPF technology. It monitors key system performance events at the thread level, associating them with detailed event content. This enables real-time recording of thread states and key activities, helping users quickly pinpoint application performance bottlenecks.
+
+### Features
+
+From the OS perspective, a running application comprises multiple processes, each containing multiple running threads. tprofiling monitors and records key activities (referred to as **events**) performed by these threads. The tool then presents these events on a timeline in the front-end interface, providing an intuitive view of what each thread is doing at any given moment, whether it is executing on the CPU or blocked by file or network I/O operations. When performance issues arise, analyzing the sequence of key performance events for a given thread enables rapid problem isolation and localization.
+
+Currently, with its implemented event monitoring capabilities, tprofiling can identify application performance issues such as:
+
+- File I/O latency and blocking
+- Network I/O latency and blocking
+- Lock contention
+- Deadlocks
+
+As more event types are added and refined, tprofiling will cover a broader range of application performance problems.
+
+### Event Observation Scope
+
+tprofiling currently supports two main categories of system performance events: syscall events and on-CPU events.
+
+**Syscall Events**
+
+Application performance often suffers from system resource bottlenecks like excessive CPU usage or I/O wait times. Applications typically access these resources through syscalls. Observing key syscall events helps identify time-consuming or blocking resource access operations.
+
+The syscall events currently observed by tprofiling are detailed in the [Supported Syscall Events](#supported-system-call-events) section. These events fall into categories such as file operations, network operations, lock operations, and scheduling operations. Examples of observed syscall events include:
+
+- File Operations
+  - `read`/`write`: Reading from or writing to disk files or network connections; these operations can be time-consuming or blocking.
+  - `sync`/`fsync`: Synchronizing file data to disk, which blocks the thread until completion.
+- Network Operations
+  - `send`/`recv`: Reading from or writing to network connections; these operations can be time-consuming or blocking.
+- Lock Operations
+  - `futex`: A syscall related to user-mode lock implementations. A `futex` call often indicates lock contention, potentially causing threads to block.
+- Scheduling Operations: These syscall events can change a thread's state, such as yielding the CPU, sleeping, or waiting for other threads.
+  - `nanosleep`: The thread enters a sleep state.
+  - `epoll_wait`: The thread waits for I/O events, blocking until an event arrives.
+
+**on-CPU Events**
+
+A thread's running state can be categorized as either on-CPU (executing on a CPU core) or off-CPU (not executing). Observing on-CPU events helps identify threads performing time-consuming CPU-bound operations.
+
+### Event Content
+
+Thread profiling events include the following information:
+
+- Event Source: This includes the thread ID, thread name, process ID, process name, container ID, container name, host ID, and host name associated with the event.
+
+  - `thread.pid`: The thread ID.
+  - `thread.comm`: The thread name.
+  - `thread.tgid`: The process ID.
+  - `proc.name`: The process name.
+  - `container.id`: The container ID.
+  - `container.name`: The container name.
+  - `host.id`: The host ID.
+  - `host.name`: The host name.
+
+- Event Attributes: These include common attributes and extended attributes.
+
+  - Common Attributes: These include the event name, event type, start time, end time, and duration.
+
+    - `event.name`: The event name.
+    - `event.type`: The event type, which can be `oncpu`, `file`, `net`, `lock`, or `sched`.
+    - `start_time`: The event start time, which is the start time of the first event in an aggregated event. See [Aggregated Events](#aggregated-events) for more information.
+    - `end_time`: The event end time, which is the end time of the last event in an aggregated event.
+    - `duration`: The event duration, calculated as (`end_time` - `start_time`).
+    - `count`: The number of aggregated events.
+
+  - Extended Attributes: These provide more detailed information specific to each syscall event. For example, `read` and `write` events for files or network connections include the file path, network connection details, and function call stack.
+
+    - `func.stack`: The function call stack.
+    - `file.path`: The file path for file-related events.
+    - `sock.conn`: The TCP connection information for network-related events.
+    - `futex.op`: The `futex` operation type, which can be `wait` or `wake`.
+
+  Refer to the [Supported Syscall Events](#supported-system-call-events) section for details on the extended attributes supported by each event type.
+
+### Event Output
+
+As an eBPF probe extension provided by gala-gopher, tprofiling sends generated system events to gala-gopher for processing. gala-gopher then outputs these events in the OpenTelemetry format and publishes them as JSON messages to a Kafka queue. Front-end applications can consume these tprofiling events by subscribing to the Kafka topic.
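A consumer of this topic mostly needs to unpack the `Resource`/`Attributes` layout of these JSON messages. Below is a minimal Python sketch of that unpacking (the trimmed-down sample message is hypothetical; a real consumer would read full messages from the Kafka topic):

```python
import json

# Hypothetical tprofiling message, shaped like the real event output
message = """{
  "Timestamp": 1661088145000,
  "Resource": {"thread.pid": 10, "thread.comm": "java"},
  "Attributes": {"values": [
    {"event.name": "read",  "event.type": "file",  "duration": 0.1, "count": 1},
    {"event.name": "oncpu", "event.type": "oncpu", "duration": 0.2, "count": 1}
  ]}
}"""

event = json.loads(message)
thread = event["Resource"]["thread.comm"]

# Aggregate time spent per event type for this thread, as a timeline UI might
totals = {}
for value in event["Attributes"]["values"]:
    totals[value["event.type"]] = totals.get(value["event.type"], 0.0) + value["duration"]

print(thread, totals)  # java {'file': 0.1, 'oncpu': 0.2}
```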
+
+Here's an example of a thread profiling event output:
+
+```json
+{
+  "Timestamp": 1661088145000,
+  "SeverityText": "INFO",
+  "SeverityNumber": 9,
+  "Body": "",
+  "Resource": {
+    "host.id": "",
+    "host.name": "",
+    "thread.pid": 10,
+    "thread.tgid": 10,
+    "thread.comm": "java",
+    "proc.name": "xxx.jar",
+    "container.id": "",
+    "container.name": ""
+  },
+  "Attributes": {
+    "values": [
+      {
+        // common info
+        "event.name": "read",
+        "event.type": "file",
+        "start_time": 1661088145000,
+        "end_time": 1661088146000,
+        "duration": 0.1,
+        "count": 1,
+        // extend info
+        "func.stack": "read;",
+        "file.path": "/test.txt"
+      },
+      {
+        "event.name": "oncpu",
+        "event.type": "oncpu",
+        "start_time": 1661088146000,
+        "end_time": 1661088147000,
+        "duration": 0.1,
+        "count": 1
+      }
+    ]
+  }
+}
+```
+
+Key fields:
+
+- `Timestamp`: The timestamp when the event was reported.
+- `Resource`: Information about the event source.
+- `Attributes`: Event attribute information, containing a `values` list. Each item in the list represents a tprofiling event from the same source and includes the event's attributes.
+
+### Quick Start
+
+#### Installation
+
+tprofiling is an eBPF probe extension for gala-gopher, so you must first install gala-gopher before enabling tprofiling.
+
+[gala-ops](https://gitee.com/openeuler/gala-docs) provides a demo UI for tprofiling based on Kafka, Logstash, Elasticsearch, and Grafana. You can use the gala-ops deployment tools for quick setup.
+
+#### Architecture
+
+![](./figures/tprofiling-run-arch.png)
+
+Software components:
+
+- Kafka: An open-source message queue that receives and stores tprofiling events collected by gala-gopher.
+- Logstash: A real-time, open-source log collection engine that consumes tprofiling events from Kafka, processes them (filtering, transformation, etc.), and sends them to Elasticsearch.
+- Elasticsearch: An open, distributed search and analytics engine that stores the processed tprofiling events for querying and visualization in Grafana. +- Grafana: An open-source visualization tool to query and visualize the collected tprofiling events. Users interact with tprofiling through the Grafana UI to analyze application performance. + +#### Deploying the tprofiling Probe + +First, install gala-gopher as described in the [gala-gopher documentation](https://gitee.com/openeuler/gala-gopher#快速开始). Because tprofiling events are sent to Kafka, configure the Kafka service address during deployment. + +After installing and running gala-gopher, start the tprofiling probe using gala-gopher's HTTP-based dynamic configuration API: + +```sh +curl -X PUT http://:9999/tprofiling -d json='{"cmd": {"probe": ["oncpu", "syscall_file", "syscall_net", "syscall_sched", "syscall_lock"]}, "snoopers": {"proc_name": [{"comm": "java"}]}, "state": "running"}' +``` + +Configuration parameters: + +- ``: The IP address of the node where gala-gopher is deployed. +- `probe`: Under `cmd`, the `probe` configuration specifies the system events that the tprofiling probe monitors. `oncpu`, `syscall_file`, `syscall_net`, `syscall_sched`, and `syscall_lock` correspond to on-CPU events and file, network, scheduling, and lock syscall events, respectively. You can enable only the desired tprofiling event types. +- `proc_name`: Under `snoopers`, the `proc_name` configuration filters the processes to monitor by process name. You can also filter by process ID using the `proc_id` configuration. See [REST Dynamic Configuration Interface](#rest-dynamic-configuration-interface) for details. + +To stop the tprofiling probe, run: + +```sh +curl -X PUT http://:9999/tprofiling -d json='{"state": "stopped"}' +``` + +#### Deploying the Front-End Software + +The tprofiling UI requires Kafka, Logstash, Elasticsearch, and Grafana. Install these components on a management node. 
You can use the gala-ops deployment tools for quick installation; see the [Online Deployment Documentation](https://gitee.com/openeuler/gala-docs#%E5%9C%A8%E7%BA%BF%E9%83%A8%E7%BD%B2). + +On the management node, obtain the deployment script from the [Online Deployment Documentation](https://gitee.com/openeuler/gala-docs#%E5%9C%A8%E7%BA%BF%E9%83%A8%E7%BD%B2) and run the following command to install Kafka, Logstash, and Elasticsearch with one command: + +```sh +sh deploy.sh middleware -K -E -A -p +``` + +Run the following command to install Grafana: + +```sh +sh deploy.sh grafana -P -E +``` + +#### Usage + +After completing the deployment, access A-Ops by browsing to `http://[deployment_node_management_IP_address]:3000` and logging into Grafana. The default username and password are both **admin**. + +After logging in, find the **ThreadProfiling** dashboard. + +![image-20230628155002410](./figures/tprofiling-dashboard.png) + +Click to enter the tprofiling UI and explore its features. + +![image-20230628155249009](./figures/tprofiling-dashboard-detail.png) + +### Use Cases + +#### Case 1: Deadlock Detection + +![image-20230628095802499](./figures/deadlock.png) + +The above diagram shows the thread profiling results of a deadlock demo process. The pie chart shows that `lock` events (in gray) consume a significant portion of the execution time. The lower section displays the thread profiling results for the entire process, with the vertical axis representing the sequence of profiling events for different threads. The `java` main thread remains blocked. The `LockThd1` and `LockThd2` service threads execute `oncpu` and `file` events, followed by simultaneous, long-duration `lock` events. Hovering over a `lock` event reveals that it triggers a `futex` syscall lasting 60 seconds. + +![image-20230628101056732](./figures/deadlock2.png) + +This suggests potential issues with `LockThd1` and `LockThd2`. We can examine their thread profiling results in the thread view. 
+ +![image-20230628102138540](./figures/deadlock3.png) + +This view displays the profiling results for each thread, with the vertical axis showing the sequence of events. `LockThd1` and `LockThd2` normally execute `oncpu` events, including `file` and `lock` events, periodically. However, around 10:17:00, they both execute a long `futex` event without any intervening `oncpu` events, indicating a blocked state. `futex` is a syscall related to user-space lock implementation, and its invocation often signals lock contention and potential blocking. + +Based on this analysis, a deadlock likely exists between `LockThd1` and `LockThd2`. + +#### Case 2: Lock Contention Detection + +![image-20230628111119499](./figures/lockcompete1.png) + +The above diagram shows the thread profiling results for a lock contention demo process. The process primarily executes `lock`, `net`, and `oncpu` events, involving three service threads. Between 11:05:45 and 11:06:45, the event execution times for all three threads increase significantly, indicating a potential performance problem. We can examine each thread's profiling results in the thread view, focusing on this period. + +![image-20230628112709827](./figures/lockcompete2.png) + +By examining the event sequence for each thread, we can understand their activities: + +- Thread `CompeteThd1`: Periodically triggers short `oncpu` events, performing a calculation task. However, around 11:05:45, it begins triggering long `oncpu` events, indicating a time-consuming calculation. + + ![image-20230628113336435](./figures/lockcompete3.png) + +- Thread `CompeteThd2`: Periodically triggers short `net` events. Clicking on an event reveals that the thread is sending network messages via the `write` syscall, along with the TCP connection details. Similarly, around 11:05:45, it starts executing long `futex` events and becomes blocked, increasing the interval between `write` events. 
+ + ![image-20230628113759887](./figures/lockcompete4.png) + + ![image-20230628114340386](./figures/lockcompete5.png) + +- Thread `tcp-server`: A TCP server that continuously reads client requests via the `read` syscall. Starting around 11:05:45, the `read` event execution time increases, indicating that it is waiting to receive network requests. + + ![image-20230628114659071](./figures/lockcompete6.png) + +Based on this analysis, whenever `CompeteThd1` performs a long `oncpu` operation, `CompeteThd2` calls `futex` and enters a blocked state. Once `CompeteThd1` completes the `oncpu` operation, `CompeteThd2` acquires the CPU and performs the network `write` operation. This strongly suggests lock contention between `CompeteThd1` and `CompeteThd2`. Because `CompeteThd2` is waiting for a lock and cannot send network requests, the `tcp-server` thread spends most of its time waiting for `read` requests. + +### Topics + +#### Supported System Call Events + +When selecting system call events for monitoring, consider these principles: + +1. Choose potentially time-consuming or blocking events, such as file, network, or lock operations, as they involve system resource access. +2. Choose events that affect a thread's running state. + +| Event/Syscall Name | Description | Default Type | Extended Content | +| ------------------ | ------------------------------------------------------------------------------ | ------------ | -------------------------------------- | +| `read` | Reads/writes to drive files or the network; may be time-consuming or blocking. | `file` | `file.path`, `sock.conn`, `func.stack` | +| `write` | Reads/writes to drive files or the network; may be time-consuming or blocking. | `file` | `file.path`, `sock.conn`, `func.stack` | +| `readv` | Reads/writes to drive files or the network; may be time-consuming or blocking. | `file` | `file.path`, `sock.conn`, `func.stack` | +| `writev` | Reads/writes to drive files or the network; may be time-consuming or blocking. 
| `file` | `file.path`, `sock.conn`, `func.stack` | +| `preadv` | Reads/writes to drive files or the network; may be time-consuming or blocking. | `file` | `file.path`, `sock.conn`, `func.stack` | +| `pwritev` | Reads/writes to drive files or the network; may be time-consuming or blocking. | `file` | `file.path`, `sock.conn`, `func.stack` | +| `sync` | Synchronously flushes files to the drive; blocks the thread until completion. | `file` | `func.stack` | +| `fsync` | Synchronously flushes files to the drive; blocks the thread until completion. | `file` | `file.path`, `sock.conn`, `func.stack` | +| `fdatasync` | Synchronously flushes files to the drive; blocks the thread until completion. | `file` | `file.path`, `sock.conn`, `func.stack` | +| `sched_yield` | Thread voluntarily relinquishes the CPU for rescheduling. | `sched` | `func.stack` | +| `nanosleep` | Thread enters a sleep state. | `sched` | `func.stack` | +| `clock_nanosleep` | Thread enters a sleep state. | `sched` | `func.stack` | +| `wait4` | Thread blocks. | `sched` | `func.stack` | +| `waitpid` | Thread blocks. | `sched` | `func.stack` | +| `select` | Thread blocks and waits for an event. | `sched` | `func.stack` | +| `pselect6` | Thread blocks and waits for an event. | `sched` | `func.stack` | +| `poll` | Thread blocks and waits for an event. | `sched` | `func.stack` | +| `ppoll` | Thread blocks and waits for an event. | `sched` | `func.stack` | +| `epoll_wait` | Thread blocks and waits for an event. | `sched` | `func.stack` | +| `sendto` | Reads/writes to the network; may be time-consuming or blocking. | `net` | `sock.conn`, `func.stack` | +| `recvfrom` | Reads/writes to the network; may be time-consuming or blocking. | `net` | `sock.conn`, `func.stack` | +| `sendmsg` | Reads/writes to the network; may be time-consuming or blocking. | `net` | `sock.conn`, `func.stack` | +| `recvmsg` | Reads/writes to the network; may be time-consuming or blocking. 
| `net` | `sock.conn`, `func.stack` | +| `sendmmsg` | Reads/writes to the network; may be time-consuming or blocking. | `net` | `sock.conn`, `func.stack` | +| `recvmmsg` | Reads/writes to the network; may be time-consuming or blocking. | `net` | `sock.conn`, `func.stack` | +| `futex` | Often indicates lock contention; the thread may block. | `lock` | `futex.op`, `func.stack` | + +#### Aggregated Events + +tprofiling currently supports two main categories of system performance events: system call events and `oncpu` events. In certain scenarios, `oncpu` events and some system call events (like `read` and `write`) can trigger frequently, generating a large volume of system events. This can negatively impact both the performance of the application being observed and the tprofiling probe itself. + +To improve performance, tprofiling aggregates multiple system events with the same name from the same thread within a one-second interval into a single reported event. Therefore, a tprofiling event is actually an aggregated event containing one or more identical system events. Some attribute meanings differ between aggregated events and real system events: + +- `start_time`: The start time of the first system event in the aggregation. +- `end_time`: Calculated as `start_time + duration`. +- `duration`: The sum of the actual execution times of all system events in the aggregation. +- `count`: The number of system events aggregated. When `count` is 1, the aggregated event is equivalent to a single system event. +- Extended event attributes: The extended attributes of the first system event in the aggregation. + +## Introduction to L7Probe + +Purpose: L7 traffic observation, covering common protocols like HTTP1.X, PG, MySQL, Redis, Kafka, HTTP2.0, MongoDB, and RocketMQ. Supports observation of encrypted streams. + +Scope: Node, container, and Kubernetes pod environments. 
+ +### Code Framework Design + +```text +L7Probe + | --- included // Public header files + | --- connect.h // L7 connect object definition + | --- pod.h // pod/container object definition + | --- conn_tracker.h // L7 protocol tracking object definition + | --- protocol // L7 protocol parsing + | --- http // HTTP1.X L7 message structure definition and parsing + | --- mysql // mysql L7 message structure definition and parsing + | --- pgsql // pgsql L7 message structure definition and parsing + | --- bpf // Kernel bpf code + | --- L7.h // BPF program parses L7 protocol types + | --- kern_sock.bpf.c // Kernel socket layer observation + | --- libssl.bpf.c // OpenSSL layer observation + | --- gossl.bpf.c // Go SSL layer observation + | --- cgroup.bpf.c // Pod lifecycle observation + | --- pod_mng.c // pod/container instance management (detects pod/container lifecycle) + | --- conn_mng.c // L7 Connect instance management (handles BPF observation events, such as Open/Close events, Stats statistics) + | --- conn_tracker.c // L7 traffic tracking (tracks data from BPF observation, such as data generated by send/write, read/recv system events) + | --- bpf_mng.c // BPF program lifecycle management (dynamically opens, loads, attaches, and unloads BPF programs, including uprobe BPF programs) + | --- session_conn.c // Manages JSSE sessions (records the mapping between JSSE sessions and socket connections, and reports JSSE connection information) + | --- L7Probe.c // Main probe program +``` + +### Probe Output + +| Metric Name | Table Name | Metric Type | Unit | Metric Description | +| --------------- | ---------- | ----------- | ---- | ------------------------------------------------------------------------------------------------------------------------------ | +| tgid | N/A | Key | N/A | Process ID of the L7 session. | +| client_ip | N/A | Key | N/A | Client IP address of the L7 session. | +| server_ip | N/A | Key | N/A | Server IP address of the L7 session.
Note: In Kubernetes, Cluster IP addresses can be translated to Backend IP addresses. | +| server_port | N/A | Key | N/A | Server port of the L7 session.
Note: In Kubernetes, Cluster Ports can be translated to Backend Ports. | +| l4_role | N/A | Key | N/A | Role of the L4 protocol (TCP Client/Server or UDP). | +| l7_role | N/A | Key | N/A | Role of the L7 protocol (Client or Server). | +| protocol | N/A | Key | N/A | Name of the L7 protocol (HTTP/HTTP2/MySQL...). | +| ssl | N/A | Label | N/A | Indicates whether the L7 session uses SSL encryption. | +| bytes_sent | l7_link | Gauge | N/A | Number of bytes sent by the L7 session. | +| bytes_recv | l7_link | Gauge | N/A | Number of bytes received by the L7 session. | +| segs_sent | l7_link | Gauge | N/A | Number of segments sent by the L7 session. | +| segs_recv | l7_link | Gauge | N/A | Number of segments received by the L7 session. | +| throughput_req | l7_rpc | Gauge | QPS | Request throughput of the L7 session. | +| throughput_resp | l7_rpc | Gauge | QPS | Response throughput of the L7 session. | +| req_count | l7_rpc | Gauge | N/A | Request count of the L7 session. | +| resp_count | l7_rpc | Gauge | N/A | Response count of the L7 session. | +| latency_avg | l7_rpc | Gauge | ns | Average latency of the L7 session. | +| latency | l7_rpc | Histogram | ns | Latency histogram of the L7 session. | +| latency_sum | l7_rpc | Gauge | ns | Total latency of the L7 session. | +| err_ratio | l7_rpc | Gauge | % | Error rate of the L7 session. | +| err_count | l7_rpc | Gauge | N/A | Error count of the L7 session. | + +### Dynamic Control + +#### Controlling the Scope of Pod Observation + +1. REST request sent to gala-gopher. +2. gala-gopher forwards the request to L7Probe. +3. L7Probe identifies relevant containers based on the Pod information. +4. L7Probe retrieves the CGroup ID (`cpuacct_cgrp_id`) of each container and writes it to the object module (using the `cgrp_add` API). +5. During socket system event processing, the CGroup (`cpuacct_cgrp_id`) of the process is obtained, referencing the Linux kernel code (`task_cgroup`). +6. 
Filtering occurs during observation via the object module (using the `is_cgrp_exist` API). + +#### Controlling Observation Capabilities + +1. REST request sent to gala-gopher. +2. gala-gopher forwards the request to L7Probe. +3. L7Probe dynamically enables or disables BPF-based observation features (including throughput, latency, tracing, and protocol type detection) based on the request parameters. + +### Observation Points + +#### Kernel Socket System Calls + +TCP-related system calls: + +```c +// int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen); +// int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen); +// int accept4(int sockfd, struct sockaddr *addr, socklen_t *addrlen, int flags); +// ssize_t write(int fd, const void *buf, size_t count); +// ssize_t send(int sockfd, const void *buf, size_t len, int flags); +// ssize_t read(int fd, void *buf, size_t count); +// ssize_t recv(int sockfd, void *buf, size_t len, int flags); +// ssize_t writev(int fd, const struct iovec *iov, int iovcnt); +// ssize_t readv(int fd, const struct iovec *iov, int iovcnt); +``` + +TCP and UDP-related system calls: + +```c +// ssize_t sendto(int sockfd, const void *buf, size_t len, int flags, const struct sockaddr *dest_addr, socklen_t addrlen); +// ssize_t recvfrom(int sockfd, void *buf, size_t len, int flags, struct sockaddr *src_addr, socklen_t *addrlen); +// ssize_t sendmsg(int sockfd, const struct msghdr *msg, int flags); +// ssize_t recvmsg(int sockfd, struct msghdr *msg, int flags); +// int close(int fd); +``` + +Important notes: + +1. `read`/`write` and `readv`/`writev` can be confused with regular file I/O. The kernel function `security_socket_sendmsg` is observed to determine if a file descriptor (FD) refers to a socket operation. +2. `sendto`/`recvfrom` and `sendmsg`/`recvmsg` are used by both TCP and UDP. Refer to the manuals below. +3. `sendmmsg`/`recvmmsg` and `sendfile` are not currently supported. 
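The address-based TCP/UDP disambiguation summarized in the manual excerpts that follow can be sketched as a small classifier. This is illustrative only, under stated assumptions: `classify_l4` is a hypothetical helper, not an L7Probe API.

```python
# Illustrative sketch, not L7Probe code: how the peer-address argument of the
# dual-use syscalls distinguishes TCP from UDP, per the manual excerpts below.
def classify_l4(syscall: str, peer_addr_is_null: bool) -> str:
    """Classify a dual-use socket syscall as "tcp" or "udp".

    For sendto/recvfrom the peer address is dest_addr/src_addr; for
    sendmsg/recvmsg it is msghdr->msg_name. A NULL peer address implies a
    connection-mode (TCP) socket; a non-NULL one implies UDP, where the
    destination/source address is supplied per message.
    """
    dual_use = {"sendto", "recvfrom", "sendmsg", "recvmsg"}
    if syscall not in dual_use:
        raise ValueError(f"{syscall} is not a dual-use syscall")
    return "tcp" if peer_addr_is_null else "udp"

print(classify_l4("sendto", peer_addr_is_null=True))    # tcp
print(classify_l4("recvmsg", peer_addr_is_null=False))  # udp
```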
+ +[sendto manual](https://man7.org/linux/man-pages/man2/send.2.html): If sendto() is used on a connection-mode (SOCK_STREAM, SOCK_SEQPACKET) socket, the arguments dest_addr and addrlen are ignored (and the error EISCONN may be returned when they are not NULL and 0), and the error ENOTCONN is returned when the socket was not actually connected. Otherwise, the address of the target is given by dest_addr with addrlen specifying its size. + +`sendto` determines that the protocol is TCP if the `dest_addr` parameter is NULL; otherwise, it is UDP. + +[recvfrom manual](https://linux.die.net/man/2/recvfrom): The recvfrom() and recvmsg() calls are used to receive messages from a socket, and may be used to receive data on a socket whether or not it is connection-oriented. + +`recvfrom` determines that the protocol is TCP if the `src_addr` parameter is NULL; otherwise, it is UDP. + +[sendmsg manual](https://man7.org/linux/man-pages/man3/sendmsg.3p.html): The sendmsg() function shall send a message through a connection-mode or connectionless-mode socket. If the socket is a connectionless-mode socket, the message shall be sent to the address specified by msghdr if no pre-specified peer address has been set. If a peer address has been pre-specified, either the message shall be sent to the address specified in msghdr (overriding the pre-specified peer address), or the function shall return -1 and set errno to \[EISCONN]. If the socket is connection-mode, the destination address in msghdr shall be ignored. + +`sendmsg` determines that the protocol is TCP if `msghdr->msg_name` is NULL; otherwise, it is UDP. + +[recvmsg manual](https://man7.org/linux/man-pages/man3/recvmsg.3p.html): The recvmsg() function shall receive a message from a connection-mode or connectionless-mode socket. It is normally used with connectionless-mode sockets because it permits the application to retrieve the source address of received data.
+ +`recvmsg` determines that the protocol is TCP if `msghdr->msg_name` is NULL; otherwise, it is UDP. + +#### libSSL API + +SSL_write + +SSL_read + +#### Go SSL API + +#### JSSE API + +sun/security/ssl/SSLSocketImpl$AppInputStream + +sun/security/ssl/SSLSocketImpl$AppOutputStream + +### JSSE Observation Scheme + +#### Loading the JSSEProbe + +The `l7_load_jsse_agent` function in `main` loads the JSSEProbe. + +It polls processes in the whitelist (`g_proc_obj_map_fd`). If a process is a Java process, it uses `jvm_attach` to load **JSSEProbeAgent.jar** into it. After loading, the Java process outputs observation information to **/tmp/java-data-/jsse-metrics.txt** at specific points (see [JSSE API](#jsse-api)). + +#### Processing JSSEProbe Messages + +The `l7_jsse_msg_handler` thread handles JSSEProbe messages. + +It polls processes in the whitelist (`g_proc_obj_map_fd`). If a process has a `jsse-metrics` output file, it reads the file line by line, then parses, converts, and reports JSSE read/write information. + +##### 1. Parsing JSSE Read/Write Information + +The `jsse-metrics.txt` output format is: + +```text +|jsse_msg|662220|Session(1688648699909|TLS_AES_256_GCM_SHA384)|1688648699989|Write|127.0.0.1|58302|This is test message| +``` + +It parses the process ID, session ID, time, read/write operation, IP address, port, and payload. + +The parsed information is stored in `session_data_args_s`. + +##### 2. Converting JSSE Read/Write Information + +It converts the information in `session_data_args_s` into `sock_conn` and `conn_data`. + +This conversion queries two hash maps: + +`session_head`: Records the mapping between the JSSE session ID and the socket connection ID. If the process ID and 4-tuple information match, the session and socket connection are linked. + +`file_conn_head`: Records the last session ID of the Java process, in case L7Probe doesn't start reading from the beginning of a request and can't find the session ID. + +##### 3. 
Reporting JSSE Read/Write Information + +It reports `sock_conn` and `conn_data` to the map. + +## Introduction to sliprobe + +`sliprobe` uses eBPF to collect and report container-level service-level indicator (SLI) metrics periodically. + +### Features + +- Collects the total latency and statistical histogram of CPU scheduling events per container. Monitored events include scheduling wait, active sleep, lock/IO blocking, scheduling delay, and long system calls. +- Collects the total latency and statistical histogram of memory allocation events per container. Monitored events include memory reclamation, swapping, and memory compaction. +- Collects the total latency and statistical histogram of BIO layer I/O operations per container. + +### Usage Instructions + +Example command to start `sliprobe`, specifying a reporting period of 15 seconds and observing SLI metrics for containers `abcd12345678` and `abcd87654321`: + +```shell +curl -X PUT http://localhost:9999/sli -d json='{"params":{"report_period":15}, "snoopers":{"container_id":["abcd12345678","abcd87654321"]}, "state":"running"}' +``` + +### Code Logic + +#### Overview + +1. The user-space application receives a list of containers to monitor and stores the inode of each container's `cpuacct` subsystem directory in an eBPF map, sharing it with the kernel. +2. The kernel traces relevant kernel events using eBPF kprobes/tracepoints, determines if the event belongs to a monitored container, and records the event type and timestamp. It aggregates and reports SLI metrics for processes in the same cgroup at regular intervals. +3. The user-space application receives and prints the SLI metrics reported by the kernel. + +#### How SLI Metrics Are Calculated + +##### CPU SLI + +1. **cpu_wait** + + At the `sched_stat_wait` tracepoint, get the `delay` value (second parameter). + +2. **cpu_sleep** + + At the `sched_stat_sleep` tracepoint, get the `delay` value (second parameter). + +3.
**cpu_iowait** + + At the `sched_stat_blocked` tracepoint, if the current process is `in_iowait`, get the `delay` value (second parameter). + +4. **cpu_block** + + At the `sched_stat_blocked` tracepoint, if the current process is not `in_iowait`, get the `delay` value (second parameter). + +5. **cpu_rundelay** + + At the `sched_switch` tracepoint, get the `run_delay` value of the next scheduled process (`next->sched_info.run_delay`) from the third parameter `next` and store it in `task_sched_map`. Calculate the difference in `run_delay` between two scheduling events of the same process. + +6. **cpu_longsys** + + At the `sched_switch` tracepoint, get the `task` structure of the next scheduled process from the third parameter `next`. Obtain the number of context switches (`nvcsw+nivcsw`) and user-space execution time (`utime`) from the `task` structure. If the number of context switches and user-space execution time remain the same between two scheduling events of the same process, the process is assumed to be executing a long system call. Accumulate the time the process spends in kernel mode. + +##### MEM SLI + +1. **mem_reclaim** + + Calculate the difference between the return and entry timestamps of the `mem_cgroup_handle_over_high` function. + + Calculate the difference between the timestamps of the `mm_vmscan_memcg_reclaim_end` and `mm_vmscan_memcg_reclaim_begin` tracepoints. + +2. **mem_swapin** + + Calculate the difference between the return and entry timestamps of the `do_swap_page` function. + +3. **mem_compact** + + Calculate the difference between the return and entry timestamps of the `try_to_compact_pages` function. + +##### IO SLI + +1. **bio_latency** + + Calculate the timestamp difference between entering the `bio_endio` function and triggering the `block_bio_queue` tracepoint. + + Calculate the timestamp difference between entering the `bio_endio` function and exiting the `generic_make_request_checks` function. 
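As a rough illustration of the aggregation step described above, the per-container latency totals and histograms could be computed as follows. This is a minimal sketch with hypothetical names — sliprobe performs this in eBPF kernel code, not Python.

```python
# Illustrative sketch, not sliprobe code: folding per-event latencies into the
# per-container total latency plus a power-of-two histogram reported each period.
from collections import defaultdict

def bucket(latency_ns: int) -> int:
    """Index of the power-of-two histogram slot for a latency value."""
    return max(0, latency_ns.bit_length() - 1)

def aggregate(events):
    """events: iterable of (container_id, start_ns, end_ns) tuples."""
    stats = defaultdict(lambda: {"total_ns": 0, "hist": defaultdict(int)})
    for cid, start_ns, end_ns in events:
        delta = end_ns - start_ns  # e.g. bio_endio entry - block_bio_queue
        stats[cid]["total_ns"] += delta
        stats[cid]["hist"][bucket(delta)] += 1
    return stats

s = aggregate([("abcd12345678", 100, 1124), ("abcd12345678", 100, 612)])
print(s["abcd12345678"]["total_ns"])  # 1536
```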
+ +## Output Data + +- **Metric** + + Prometheus Server has a built-in expression browser UI. You can use PromQL statements to query metric data. For details, see [Using the expression browser](https://prometheus.io/docs/prometheus/latest/getting_started/#using-the-expression-browser) in the official document. The following is an example. + + If the specified metric is `gala_gopher_tcp_link_rcv_rtt`, the metric data displayed on the UI is as follows: + + ```basic + gala_gopher_tcp_link_rcv_rtt{client_ip="x.x.x.165",client_port="1234",hostname="openEuler",instance="x.x.x.172:8888",job="prometheus",machine_id="1fd3774xx",protocol="2",role="0",server_ip="x.x.x.172",server_port="3742",tgid="1516"} 1 + ``` + +- **Metadata** + + You can directly consume data from the Kafka topic `gala_gopher_metadata`. The following is an example. + + ```bash + # Input request + ./bin/kafka-console-consumer.sh --bootstrap-server x.x.x.165:9092 --topic gala_gopher_metadata + # Output data + {"timestamp": 1655888408000, "meta_name": "thread", "entity_name": "thread", "version": "1.0.0", "keys": ["machine_id", "pid"], "labels": ["hostname", "tgid", "comm", "major", "minor"], "metrics": ["fork_count", "task_io_wait_time_us", "task_io_count", "task_io_time_us", "task_hang_count"]} + ``` + +- **Abnormal events** + + You can directly consume data from the Kafka topic `gala_gopher_event`. The following is an example.
+ + ```bash + # Input request + ./bin/kafka-console-consumer.sh --bootstrap-server x.x.x.165:9092 --topic gala_gopher_event + # Output data + {"timestamp": 1655888408000, "meta_name": "thread", "entity_name": "thread", "version": "1.0.0", "keys": ["machine_id", "pid"], "labels": ["hostname", "tgid", "comm", "major", "minor"], "metrics": ["fork_count", "task_io_wait_time_us", "task_io_count", "task_io_time_us", "task_hang_count"]} + ``` diff --git a/docs/en/docs/A-Ops/using-gala-spider.md b/docs/en/Server/Maintenance/Gala/using-gala-spider.md similarity index 92% rename from docs/en/docs/A-Ops/using-gala-spider.md rename to docs/en/Server/Maintenance/Gala/using-gala-spider.md index e2d30f92cb12a90c8f3e11fad09b07db0eaf1c27..acfffd8dd0f7d4e4946e1267434cbed7850f316b 100644 --- a/docs/en/docs/A-Ops/using-gala-spider.md +++ b/docs/en/Server/Maintenance/Gala/using-gala-spider.md @@ -124,9 +124,9 @@ The running of gala-spider depends on multiple external software for interaction The dotted box on the right indicates the two functional components of gala-spider. The green parts indicate the external components that gala-spider directly depends on, and the gray rectangles indicate the external components that gala-spider indirectly depends on. - **spider-storage**: core component of gala-spider, which provides the topology storage function. - 1. Obtains the metadata of the observation object from Kafka. - 2. Obtains information about all observation object instances from Prometheus. - 3. Saves the generated topology to the graph database ArangoDB. + 1. Obtains the metadata of the observation object from Kafka. + 2. Obtains information about all observation object instances from Prometheus. + 3. Saves the generated topology to the graph database ArangoDB. - **gala-inference**: core component of gala-spider, which provides the root cause locating function. 
It subscribes to abnormal KPI events from Kafka to trigger the root cause locating process of abnormal KPIs, constructs a fault propagation graph based on the topology obtained from the ArangoDB, and outputs the root cause locating result to Kafka. - **prometheus**: time series database. The observation metric data collected by the gala-gopher component is reported to Prometheus for further processing. - **kafka**: messaging middleware, which is used to store the observation object metadata reported by gala-gopher, exception events reported by the exception detection component gala-anteater, and root cause locating results reported by the cause-inference component. @@ -136,9 +136,9 @@ The dotted box on the right indicates the two functional components of gala-spid The two functional components in gala-spider are released as independent software packages. -​ **spider-storage**: corresponds to the gala-spider software package in this section. +**spider-storage**: corresponds to the gala-spider software package in this section. -​ **gala-inference**: corresponds to the gala-inference software package. +**gala-inference**: corresponds to the gala-inference software package. For details about how to deploy the gala-gopher software, see [Using gala-gopher](using-gala-gopher.md). This section only describes how to deploy ArangoDB. @@ -153,48 +153,48 @@ The RPM-based ArangoDB deployment process is as follows: 1. Configure the Yum sources. 
- ```basic - [oe-2209] # openEuler 22.09 officially released repository - name=oe2209 - baseurl=http://119.3.219.20:82/openEuler:/22.09/standard_x86_64 - enabled=1 - gpgcheck=0 - priority=1 - - [oe-2209:Epol] # openEuler 22.09: Epol officially released repository - name=oe2209_epol - baseurl=http://119.3.219.20:82/openEuler:/22.09:/Epol/standard_x86_64/ - enabled=1 - gpgcheck=0 - priority=1 - ``` + ```basic + [oe-2209] # openEuler 22.09 officially released repository + name=oe2209 + baseurl=http://119.3.219.20:82/openEuler:/22.09/standard_x86_64 + enabled=1 + gpgcheck=0 + priority=1 + + [oe-2209:Epol] # openEuler 22.09: Epol officially released repository + name=oe2209_epol + baseurl=http://119.3.219.20:82/openEuler:/22.09:/Epol/standard_x86_64/ + enabled=1 + gpgcheck=0 + priority=1 + ``` 2. Install arangodb3. - ```sh - # yum install arangodb3 - ``` + ```sh + # yum install arangodb3 + ``` 3. Modify the configurations. - The configuration file of the arangodb3 server is **/etc/arangodb3/arangod.conf**. You need to modify the following configurations: + The configuration file of the arangodb3 server is **/etc/arangodb3/arangod.conf**. You need to modify the following configurations: - - `endpoint`: IP address of the arangodb3 server. - - `authentication`: whether identity authentication is required for accessing the arangodb3 server. Currently, gala-spider does not support identity authentication. Therefore, set `authentication` to `false`. + - `endpoint`: IP address of the arangodb3 server. + - `authentication`: whether identity authentication is required for accessing the arangodb3 server. Currently, gala-spider does not support identity authentication. Therefore, set `authentication` to `false`. - The following is an example. + The following is an example. - ```yaml - [server] - endpoint = tcp://0.0.0.0:8529 - authentication = false - ``` + ```yaml + [server] + endpoint = tcp://0.0.0.0:8529 + authentication = false + ``` 4. Start arangodb3. 
- ```sh - # systemctl start arangodb3 - ``` + ```sh + # systemctl start arangodb3 + ``` #### Modifying gala-spider Configuration Items @@ -248,7 +248,7 @@ You can query the topology generated by gala-spider on the UI provided by Arango 3. On the **COLLECTIONS** page, you can view the collections of observation object instances and topology relationships stored in different time segments, as shown in the following figure. - ![spider_topology](./figures/spider_topology.png) + ![spider_topology](./figures/spider_topology.png) 4. You can query the stored topology using the AQL statements provided by ArangoDB. For details, see the [AQL Documentation](https://www.arangodb.com/docs/3.8/aql/). diff --git a/docs/en/Server/Maintenance/KernelLiveUpgrade/Menu/index.md b/docs/en/Server/Maintenance/KernelLiveUpgrade/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..b2433d6ae4770c66fab3c26740ad2440d9cf7b06 --- /dev/null +++ b/docs/en/Server/Maintenance/KernelLiveUpgrade/Menu/index.md @@ -0,0 +1,7 @@ +--- +headless: true +--- +- [Kernel Hot Upgrade Guide]({{< relref "./kernel-live-upgrade.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment.md" >}}) + - [Usage Guide]({{< relref "./usage-guide.md" >}}) + - [Common Problems and Solutions]({{< relref "./common-issues-and-solutions.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Maintenance/KernelLiveUpgrade/common-issues-and-solutions.md b/docs/en/Server/Maintenance/KernelLiveUpgrade/common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..b279c0defae1f100e6b346e00c4078748e0afdce --- /dev/null +++ b/docs/en/Server/Maintenance/KernelLiveUpgrade/common-issues-and-solutions.md @@ -0,0 +1,29 @@ +# Common Problems and Solutions + +## Issue 1: After the `nvwa update` Command Is Executed, the System Is Not Upgraded + +Cause: An error occurs when the running information is retained or the kernel is replaced. 
+ +Solution: View logs to find the error cause. + +## Issue 2: After the Acceleration Feature Is Enabled, the `nvwa` Command Fails to Be Executed + +Cause: NVWA provides many acceleration features, including quick kexec, pin memory, and cpu park. These features involve the cmdline configuration and memory allocation. When selecting the memory, run `cat /proc/iomem` to ensure that the selected memory does not conflict with that of other programs. If necessary, run the `dmesg` command to check whether error logs exist after the feature is enabled. + +## Issue 3: After the Hot Upgrade, the Related Process Is Not Recovered + +Cause: Either the nvwa service is not running, or the service is running but failed to recover the service or process. + +Solution: Run the `service nvwa status` command to view the nvwa logs. If the service fails to be started, check whether the service is enabled, and then use `systemctl` to view the logs of the corresponding service. Further logs are stored in the process or service folder named after the path specified by **criu_dir**. The dump.log file stores the logs generated when the running information is retained, and the restore.log file stores the logs generated during process recovery. + +## Issue 4: The Recovery Fails, and the Log Displays "Can't Fork for 948: File Exists." + +Cause: The kernel hot upgrade tool finds that the PID of the program is occupied during program recovery. + +Solution: The current kernel does not provide a mechanism for retaining PIDs. Related policies are being developed. This restriction will be resolved in later kernel versions. Currently, you can only manually restart related processes. + +## Issue 5: When the `nvwa` Command Is Used to Save and Recover a Simple Program (Hello World), the System Displays a Message Indicating That the Operation Fails or the Program Is Not Running + +Cause: There are many restrictions on the use of CRIU. + +Solution: View the NVWA logs.
If the error is related to the CRIU, check the dump.log or restore.log file in the corresponding directory. For details about the usage restrictions related to the CRIU, see [CRIU WiKi](https://criu.org/What_cannot_be_checkpointed). diff --git a/docs/en/docs/KernelLiveUpgrade/installation-and-deployment.md b/docs/en/Server/Maintenance/KernelLiveUpgrade/installation-and-deployment.md similarity index 98% rename from docs/en/docs/KernelLiveUpgrade/installation-and-deployment.md rename to docs/en/Server/Maintenance/KernelLiveUpgrade/installation-and-deployment.md index c8289dab41f23d0d9047f062e442b416f1c5bbab..dac2841ad83927aae7fecdd22092d6a68b2a37fd 100644 --- a/docs/en/docs/KernelLiveUpgrade/installation-and-deployment.md +++ b/docs/en/Server/Maintenance/KernelLiveUpgrade/installation-and-deployment.md @@ -28,7 +28,7 @@ This document describes how to install and deploy the kernel hot upgrade tool. ## Environment Preparation -- Install the openEuler system. For details, see the [_openEuler Installation Guide_](../Installation/Installation.md). +- Install the openEuler system. For details, see the [_openEuler Installation Guide_](../../InstallationUpgrade/Installation/installation.md) - The root permission is required for installing the kernel hot upgrade tool. diff --git a/docs/en/Server/Maintenance/KernelLiveUpgrade/kernel-live-upgrade.md b/docs/en/Server/Maintenance/KernelLiveUpgrade/kernel-live-upgrade.md new file mode 100644 index 0000000000000000000000000000000000000000..a12557cb441503e148a1e3462d25ac05e1767f32 --- /dev/null +++ b/docs/en/Server/Maintenance/KernelLiveUpgrade/kernel-live-upgrade.md @@ -0,0 +1,14 @@ +# Kernel Live Upgrade Guide + +This document describes how to install, deploy, and use the kernel live upgrade feature on openEuler. This kernel live upgrade feature on openEuler is implemented through quick kernel restart and hot program migration. A user-mode tool is provided to automate this process. 
+ +This document is intended for community developers, open-source enthusiasts, and partners who want to learn about and use the openEuler system and kernel live upgrade. The users are expected to know basics about the Linux operating system. + +## Application Scenario + +The kernel live upgrade is to save and restore the process running data with the second-level end-to-end latency. + +The following two conditions must be met: + +1. The kernel needs to be restarted due to vulnerability fixing or version update. +2. Services running on the kernel can be quickly recovered after the kernel is restarted. diff --git a/docs/en/docs/KernelLiveUpgrade/how-to-run.md b/docs/en/Server/Maintenance/KernelLiveUpgrade/usage-guide.md similarity index 99% rename from docs/en/docs/KernelLiveUpgrade/how-to-run.md rename to docs/en/Server/Maintenance/KernelLiveUpgrade/usage-guide.md index c37c3710214cf45e452fc574591613ca611784c9..c41f6465c9f032e2629670acb9ecd9baba183bef 100644 --- a/docs/en/docs/KernelLiveUpgrade/how-to-run.md +++ b/docs/en/Server/Maintenance/KernelLiveUpgrade/usage-guide.md @@ -1,8 +1,8 @@ -# How to Run +# Usage Guide -- [How to Run](#how-to-run) +- [Usage Guide](#usage-guide) - [Command](#command) - [Restrictions](#restrictions) - [NVWA Acceleration Feature Description and Usage](#nvwa-acceleration-feature-description-and-usage) diff --git a/docs/en/Server/Maintenance/Menu/index.md b/docs/en/Server/Maintenance/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..50a385dd74a17fdc48ef29f6c57a81348356ab84 --- /dev/null +++ b/docs/en/Server/Maintenance/Menu/index.md @@ -0,0 +1,11 @@ +--- +headless: true +--- +- [A-Ops User Guide]({{< relref "./A-Ops/Menu/index.md" >}}) +- [gala User Guide]({{< relref "./Gala/Menu/index.md" >}}) +- [sysmonitor User Guide]({{< relref "./sysmonitor/Menu/index.md" >}}) +- [Kernel Live Upgrade User Guide]({{< relref "./KernelLiveUpgrade/Menu/index.md" >}}) +- [SysCare User Guide]({{< relref 
"./SysCare/Menu/index.md" >}}) +- [Common Skills]({{< relref "./CommonSkills/Menu/index.md" >}}) +- [Commonly Used Tools for Location and Demarcation]({{< relref "./CommonTools/Menu/index.md" >}}) +- [Troubleshooting]({{< relref "./Troubleshooting/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Maintenance/SysCare/Menu/index.md b/docs/en/Server/Maintenance/SysCare/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..9e6f114e8c5a5a64b93edc0837f4810a9f3d8cab --- /dev/null +++ b/docs/en/Server/Maintenance/SysCare/Menu/index.md @@ -0,0 +1,9 @@ +--- +headless: true +--- +- [SysCare User Guide]({{< relref "./syscare-user-guide.md" >}}) + - [SysCare Introduction]({{< relref "./syscare-introduction.md" >}}) + - [SysCare Installation]({{< relref "./installing-syscare.md" >}}) + - [SysCare Usage]({{< relref "./using-syscare.md" >}}) + - [Constraints]({{< relref "./constraints.md" >}}) + - [Common Issues and Solutions]({{< relref "./common-issues-and-solutions.md" >}}) diff --git a/docs/en/docs/SysCare/faqs.md b/docs/en/Server/Maintenance/SysCare/common-issues-and-solutions.md similarity index 95% rename from docs/en/docs/SysCare/faqs.md rename to docs/en/Server/Maintenance/SysCare/common-issues-and-solutions.md index 979e131eded61c76adf93a15b336d7f0d921e242..0203440f4d006419efe239298ca85ce3a09edf81 100644 --- a/docs/en/docs/SysCare/faqs.md +++ b/docs/en/Server/Maintenance/SysCare/common-issues-and-solutions.md @@ -1,4 +1,4 @@ -# FAQs +# Common Issues and Solutions ## Issue 1: "alloc upatch module memory failed" diff --git a/docs/en/docs/SysCare/constraints.md b/docs/en/Server/Maintenance/SysCare/constraints.md similarity index 100% rename from docs/en/docs/SysCare/constraints.md rename to docs/en/Server/Maintenance/SysCare/constraints.md diff --git a/docs/en/docs/SysCare/figures/syscare_arch.png b/docs/en/Server/Maintenance/SysCare/figures/syscare_arch.png similarity index 100% rename from 
docs/en/docs/SysCare/figures/syscare_arch.png rename to docs/en/Server/Maintenance/SysCare/figures/syscare_arch.png diff --git a/docs/en/docs/SysCare/installing_SysCare.md b/docs/en/Server/Maintenance/SysCare/installing-syscare.md similarity index 100% rename from docs/en/docs/SysCare/installing_SysCare.md rename to docs/en/Server/Maintenance/SysCare/installing-syscare.md diff --git a/docs/en/docs/SysCare/SysCare_introduction.md b/docs/en/Server/Maintenance/SysCare/syscare-introduction.md similarity index 74% rename from docs/en/docs/SysCare/SysCare_introduction.md rename to docs/en/Server/Maintenance/SysCare/syscare-introduction.md index 2613ebe8a0da6a28dc616a6fd657000a785600f5..17c05d1368dda9dd18380122ac23d1d1cd3c15b0 100644 --- a/docs/en/docs/SysCare/SysCare_introduction.md +++ b/docs/en/Server/Maintenance/SysCare/syscare-introduction.md @@ -11,9 +11,9 @@ SysCare is an online live patching tool that automatically fixes bugs and vulner SysCare supports live patching for kernels and user-mode services: 1. One-click creation -SysCare is a unified environment for both kernel- and user-mode live patches that ignores differences between patches, ensuring they can be created with just one click. + SysCare is a unified environment for both kernel- and user-mode live patches that ignores differences between patches, ensuring they can be created with just one click. 2. Patch lifecycle operations -SysCare provides a unified patch management interface for users to install, activate, uninstall, and query patches. + SysCare provides a unified patch management interface for users to install, activate, uninstall, and query patches. 
## SysCare Technologies diff --git a/docs/en/docs/SysCare/SysCare_user_guide.md b/docs/en/Server/Maintenance/SysCare/syscare-user-guide.md similarity index 100% rename from docs/en/docs/SysCare/SysCare_user_guide.md rename to docs/en/Server/Maintenance/SysCare/syscare-user-guide.md diff --git a/docs/en/docs/SysCare/using_SysCare.md b/docs/en/Server/Maintenance/SysCare/using-syscare.md similarity index 100% rename from docs/en/docs/SysCare/using_SysCare.md rename to docs/en/Server/Maintenance/SysCare/using-syscare.md diff --git a/docs/en/Server/Maintenance/Troubleshooting/Menu/index.md b/docs/en/Server/Maintenance/Troubleshooting/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..f3682d1c327fded0299b8b1f389f27d1f0d02f22 --- /dev/null +++ b/docs/en/Server/Maintenance/Troubleshooting/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [Troubleshooting]({{< relref "./troubleshooting.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/ops_guide/troubleshooting.md b/docs/en/Server/Maintenance/Troubleshooting/troubleshooting.md similarity index 82% rename from docs/en/docs/ops_guide/troubleshooting.md rename to docs/en/Server/Maintenance/Troubleshooting/troubleshooting.md index ec921d6e2d21875b502ed42533465ef023aae9e7..37e477d1cd18c0979609cd4879bea3cc70eb91ca 100644 --- a/docs/en/docs/ops_guide/troubleshooting.md +++ b/docs/en/Server/Maintenance/Troubleshooting/troubleshooting.md @@ -4,7 +4,7 @@ - [Triggering kdump Restart](#triggering-kdump-restart) - [Performing Forcible Restart](#performing-forcible-restart) - [Restarting the Network](#restarting-the-network) - - [Repairing the File System](#repairing-the-file-system) + - [Repairing the File System](#repairing-the-file-system) - [Manually Dropping Cache](#manually-dropping-cache) - [Rescue Mode and Single-User Mode](#rescue-mode-and-single-user-mode) @@ -78,24 +78,24 @@ echo 3 > /proc/sys/vm/drop_caches Mount the openEuler 22.03 LTS SP2 ISO image and enter the rescue 
mode. - 1. Select **Troubleshooting**. - 2. Select **Rescue a openEuler system**. - 3. Proceed as prompted. + 1. Select **Troubleshooting**. + 2. Select **Rescue a openEuler system**. + 3. Proceed as prompted. - ```text - 1)Continue + ```text + 1)Continue - 2)Read-only mount + 2)Read-only mount - 3)Skip to shell + 3)Skip to shell - 4)Quit(Reboot) - ``` + 4)Quit(Reboot) + ``` - Single-user mode - On the login page, enter **e** to go to the grub page, add **init=/bin/sh** to the **linux** line, and press **Ctrl**+**X**. + On the login page, enter **e** to go to the grub page, add **init=/bin/sh** to the **linux** line, and press **Ctrl**+**X**. - 1. Run the `mount -o remount,rw /` command. - 2. Perform operations such as changing the password. - 3. Enter **exit** to exit. + 1. Run the `mount -o remount,rw /` command. + 2. Perform operations such as changing the password. + 3. Enter **exit** to exit. diff --git a/docs/en/Server/Maintenance/sysmonitor/Menu/index.md b/docs/en/Server/Maintenance/sysmonitor/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..faad1a4eb72c77b2746d283260a63915241f171b --- /dev/null +++ b/docs/en/Server/Maintenance/sysmonitor/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [sysmonitor User Guide]({{< relref "./sysmonitor-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Maintenance/sysmonitor/figures/sysmonitor_functions.png b/docs/en/Server/Maintenance/sysmonitor/figures/sysmonitor_functions.png new file mode 100644 index 0000000000000000000000000000000000000000..e9655456ebce192d196e5f55c5fc09c03fa440d8 Binary files /dev/null and b/docs/en/Server/Maintenance/sysmonitor/figures/sysmonitor_functions.png differ diff --git a/docs/en/Server/Maintenance/sysmonitor/sysmonitor-user-guide.md b/docs/en/Server/Maintenance/sysmonitor/sysmonitor-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..a636d2e68cb39718cf1a43fc75df1a130c4ba176 --- /dev/null +++ 
b/docs/en/Server/Maintenance/sysmonitor/sysmonitor-user-guide.md @@ -0,0 +1,797 @@ +# sysmonitor User Guide + +## Introduction + +The system monitor (sysmonitor) daemon monitors exceptions that occur during OS running and records the exceptions in the system log file **/var/log/sysmonitor.log**. sysmonitor runs as a service. You can run the `systemctl start|stop|restart|reload sysmonitor` command to start, stop, restart, and reload the service. You are advised to deploy sysmonitor to locate system exceptions. + +![](./figures/sysmonitor_functions.png) + +### Precautions + +- sysmonitor cannot run concurrently. +- Ensure that all configuration files are valid. Otherwise, the monitoring service may be abnormal. +- The root privilege is required for sysmonitor service operations, configuration file modification, and log query. The **root** user has the highest permission in the system. When performing operations as the **root** user, follow the operation guide to avoid system management and security risks caused by improper operations. + +### Configuration Overview + +Configuration file **/etc/sysconfig/sysmonitor** of sysmonitor defines the monitoring period of each monitoring item and specifies whether to enable monitoring. Spaces are not allowed between the configuration item, equal sign (=), and configuration value, for example, **PROCESS_MONITOR="on"**. + +Configuration description + +| Item | Description | Mandatory| Default Value | +| ------------------------- | ------------------------------------------------------------ | -------- | -------------------------------------- | +| PROCESS_MONITOR | Whether to enable key process monitoring. The value can be **on** or **off**. | No | on | +| PROCESS_MONITOR_PERIOD | Monitoring period on key processes, in seconds. | No | 3 | +| PROCESS_RECALL_PERIOD | Interval for attempting to restart a key process after the process fails to be recovered, in minutes. 
The value can be an integer ranging from 1 to 1440.| No | 1 | +| PROCESS_RESTART_TIMEOUT | Timeout interval for recovering a key process service from an exception, in seconds. The value can be an integer ranging from 30 to 300.| No | 90 | +| PROCESS_ALARM_SUPRESS_NUM | Number of alarm suppression times when the key process monitoring configuration uses the alarm command to report alarms. The value is a positive integer.| No | 5 | +| FILESYSTEM_MONITOR | Whether to enable ext3 and ext4 file system monitoring. The value can be **on** or **off**. | No | on | +| DISK_MONITOR | Whether to enable drive partition monitoring. The value can be **on** or **off**. | No | on | +| DISK_MONITOR_PERIOD | Drive monitoring period, in seconds. | No | 60 | +| INODE_MONITOR | Whether to enable drive inode monitoring. The value can be **on** or **off**. | No | on | +| INODE_MONITOR_PERIOD | Drive inode monitoring period, in seconds. | No | 60 | +| NETCARD_MONITOR | Whether to enable NIC monitoring. The value can be **on** or **off**. | No | on | +| FILE_MONITOR | Whether to enable file monitoring. The value can be **on** or **off**. | No | on | +| CPU_MONITOR | Whether to enable CPU monitoring. The value can be **on** or **off**. | No | on | +| MEM_MONITOR | Whether to enable memory monitoring. The value can be **on** or **off**. | No | on | +| PSCNT_MONITOR | Whether to enable process count monitoring. The value can be **on** or **off**. | No | on | +| FDCNT_MONITOR | Whether to enable file descriptor (FD) count monitoring. The value can be **on** or **off**. | No | on | +| CUSTOM_DAEMON_MONITOR | Whether to enable custom daemon item monitoring. The value can be **on** or **off**. | No | on | +| CUSTOM_PERIODIC_MONITOR | Whether to enable custom periodic item monitoring. The value can be **on** or **off**. | No | on | +| IO_DELAY_MONITOR | Whether to enable local drive I/O latency monitoring. The value can be **on** or **off**. 
| No | off | +| PROCESS_FD_NUM_MONITOR | Whether to enable process FD count monitoring. The value can be **on** or **off**. | No | on | +| PROCESS_MONITOR_DELAY | Whether to wait until all monitoring items are normal when sysmonitor is started. The value can be **on** (wait) or **off** (do not wait).| No | on | +| NET_RATE_LIMIT_BURST | NIC route information printing rate, that is, the number of logs printed per second. | No | 5
Valid range: 0 to 100 | +| FD_MONITOR_LOG_PATH | FD monitoring log file | No | /var/log/sysmonitor.log| +| ZOMBIE_MONITOR | Whether to monitor zombie processes | No | off | +| CHECK_THREAD_MONITOR | Whether to enable internal thread self-healing. The value can be **on** or **off**. | No | on | +| CHECK_THREAD_FAILURE_NUM | Number of internal thread self-healing checks in a period. | No | 3
Valid range: 2 to 10 | + +- After modifying the **/etc/sysconfig/sysmonitor** configuration file, restart the sysmonitor service for the configurations to take effect. +- If an item is not configured in the configuration file, it is enabled by default. +- After the internal thread self-healing function is enabled, if a sub-thread of a monitoring item is suspended and the number of failed checks in a period exceeds the configured value, the sysmonitor service is restarted for restoration: the configuration is reloaded, and the configured key process monitoring and custom monitoring are restarted. If this function affects user experience, you can disable it. + +### Command Reference + +- Start sysmonitor. + +```shell +systemctl start sysmonitor +``` + +- Stop sysmonitor. + +```shell +systemctl stop sysmonitor +``` + +- Restart sysmonitor. + +```shell +systemctl restart sysmonitor +``` + +- Reload sysmonitor for the modified configurations to take effect. + +```shell +systemctl reload sysmonitor +``` + +### Monitoring Logs + +By default, logs are split and dumped to a drive directory to prevent the **sysmonitor.log** file from growing too large and to retain a certain number of logs. + +The configuration file is **/etc/rsyslog.d/sysmonitor.conf**. Because this rsyslog configuration file is newly added, you need to restart the rsyslog service after sysmonitor is installed for the first time to make the sysmonitor log configuration take effect. 
+ +```text +$template sysmonitorformat,"%TIMESTAMP:::date-rfc3339%|%syslogseverity-text%|%msg%\n" + +$outchannel sysmonitor, /var/log/sysmonitor.log, 2097152, /usr/libexec/sysmonitor/sysmonitor_log_dump.sh +if ($programname == 'sysmonitor' and $syslogseverity <= 6) then { +:omfile:$sysmonitor;sysmonitorformat +stop +} + +if ($msg contains 'Time has been changed') then { +:omfile:$sysmonitor;sysmonitorformat +stop +} + +if ($programname == 'sysmonitor' and $syslogseverity > 6) then { +/dev/null +stop +} +``` + +## ext3/ext4 Filesystem Monitoring + +### Introduction + +A fault in the filesystem may trigger I/O operation errors, which further cause OS faults. Filesystem fault detection detects these faults in real time so that system administrators or users can rectify them in a timely manner. + +### Configuration File Description + +None + +### Exception Logs + +For a file system mounted with the **errors=remount-ro** mount option, if the ext3 or ext4 file system is faulty, the following exception information is recorded in the **sysmonitor.log** file: + +```text +info|sysmonitor[127]: loop0 filesystem error. Remount filesystem read-only. +``` + +In other exception scenarios, if the ext3 or ext4 file system is faulty, the following exception information is recorded in the **sysmonitor.log** file: + +```text +info|sysmonitor[127]: fs_monitor_ext3_4: loop0 filesystem error. flag is 1879113728. +``` + +## Key Process Monitoring + +### Introduction + +Key processes in the system are periodically monitored. When a key process exits abnormally, sysmonitor automatically attempts to recover it. If the recovery fails, alarms can be reported, so that the system administrator is promptly notified of the abnormal process exit and whether the process is restarted, and fault locating personnel can determine from the logs when the process exited abnormally. + +### Configuration File Description + +The configuration file directory is **/etc/sysmonitor/process**. 
Each process or module corresponds to a configuration file. + +```text +USER=root +NAME=irqbalance +RECOVER_COMMAND=systemctl restart irqbalance +MONITOR_COMMAND=systemctl status irqbalance +STOP_COMMAND=systemctl stop irqbalance +``` + +The configuration items are as follows: + +| Item | Description | Mandatory| Default Value | +| ---------------------- | ------------------------------------------------------------ | -------- | --------------------------------------------------- | +| NAME | Process or module name | Yes | None | +| RECOVER_COMMAND | Recovery command | No | None | +| MONITOR_COMMAND | Monitoring command
If the command output is 0, the process is normal. If the command output is greater than 0, the process is abnormal.| No | pgrep -f $(which xxx)
*xxx* is the process name configured in the **NAME** field.| +| STOP_COMMAND | Stopping command | No | None | +| USER | User name
User for executing the monitoring, recovery, and stopping commands or scripts | No | If this item is left blank, the **root** user is used by default. | +| CHECK_AS_PARAM | Parameter passing switch
If this item is on, the return value of **MONITOR_COMMAND** is transferred to the **RECOVER_COMMAND** command or script as an input parameter. If this item is set to off or other values, the function is disabled.| No | None | +| MONITOR_MODE | Monitoring mode
- **parallel** or **serial**
| No | serial | +| MONITOR_PERIOD | Monitoring period
- Parallel monitoring period
- This item does not take effect when the monitoring mode is **serial**.| No | 3 | +| USE_CMD_ALARM | Alarm mode
If this parameter is set to **on** or **ON**, alarms are reported using the alarm reporting command. | No | None | +| ALARM_COMMAND | Alarm reporting command | No | None | +| ALARM_RECOVER_COMMAND | Alarm recovery command | No | None | + +- After modifying the configuration file for monitoring key processes, run `systemctl reload sysmonitor`. The new configuration takes effect after one monitoring period. +- The recovery command and monitoring command must not block. Otherwise, the monitoring thread of the key process becomes abnormal. +- When the recovery command runs for more than 90 seconds, the stopping command is executed to stop the process. +- If the recovery command is empty or not configured, sysmonitor does not attempt to recover the key process when the monitoring command detects an exception. +- If a key process is abnormal and fails to be recovered for three consecutive times, subsequent recovery attempts are made at the interval specified by **PROCESS_RECALL_PERIOD** in the global configuration file. +- If the monitored process is not a daemon process, **MONITOR_COMMAND** is mandatory. +- If the configured key service does not exist in the current system, the monitoring does not take effect and the corresponding information is printed in the log. If a fatal error occurs in other configuration items, the default configuration is used and no error is reported. +- The permission on the configuration file is 600. You are advised to set the monitoring item to the **service** type of systemd (for example, **MONITOR_COMMAND=systemctl status irqbalance**). If a process is monitored, ensure that the **NAME** field is an absolute path. +- The restart, reload, and stop of sysmonitor do not affect the monitored processes or services. +- If **USE_CMD_ALARM** is set to **on**, you must ensure the validity of **ALARM_COMMAND** and **ALARM_RECOVER_COMMAND**. If **ALARM_COMMAND** or **ALARM_RECOVER_COMMAND** is empty or not configured, no alarm is reported. 
+- The security of user-defined commands, such as the monitoring, recovery, stopping, alarm reporting, and alarm recovery commands, is ensured by users. The commands are executed by the **root** user. You are advised to restrict the permissions on command scripts so that they can be used only by the **root** user, to prevent privilege escalation by common users. +- The length of the monitoring command cannot exceed 200 characters. Otherwise, the process monitoring fails to be added. +- When the recovery command is set to a systemd service restart command (for example, **RECOVER_COMMAND=systemctl restart irqbalance**), check whether the recovery command conflicts with the open source systemd service recovery mechanism. Otherwise, the behavior of key processes may be affected after exceptions occur. +- The processes started by the sysmonitor service are in the same cgroup as the sysmonitor service, and their resources cannot be restricted separately. Therefore, you are advised to use the open source systemd mechanism to recover the processes. 
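As a rough illustration of the check-and-recover behavior described above, the following Python sketch emulates a single monitoring cycle. The function name `check_and_recover` and its parameters are hypothetical; sysmonitor itself is a native service, not a Python program.

```python
import subprocess

def check_and_recover(check_cmd, recover_cmd, failures, max_failures=3):
    """Emulate one key-process monitoring cycle (illustrative only).

    Runs the monitoring command; a nonzero exit status marks the process
    abnormal and triggers the recovery command. After max_failures
    consecutive failed recoveries, sysmonitor would back off and retry
    only every PROCESS_RECALL_PERIOD minutes.
    """
    # The 90 s timeout mirrors the default PROCESS_RESTART_TIMEOUT.
    check = subprocess.run(check_cmd, shell=True, capture_output=True, timeout=90)
    if check.returncode == 0:
        return 0  # process is normal: reset the consecutive-failure count
    if recover_cmd and failures < max_failures:
        subprocess.run(recover_cmd, shell=True, capture_output=True, timeout=90)
        return failures + 1
    return failures  # no recovery command, or recovery attempts exhausted

print(check_and_recover("true", "echo recovering", 0))   # healthy process: 0
print(check_and_recover("false", "echo recovering", 0))  # abnormal process: 1
```

This also makes the non-blocking requirement above concrete: a monitoring or recovery command that never exits would stall the cycle at the `subprocess.run` call, which is why sysmonitor enforces timeouts and forbids blocking commands.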
+ +### Exception Logs + +- **RECOVER_COMMAND** configured + + If a process or module exception is detected, the following exception information is recorded in the **/var/log/sysmonitor.log** file: + + ```text + info|sysmonitor[127]: irqbalance is abnormal, check cmd return 1, use "systemctl restart irqbalance" to recover + ``` + + If the process or module recovers, the following information is recorded in the **/var/log/sysmonitor.log** file: + + ```text + info|sysmonitor[127]: irqbalance is recovered + ``` + +- **RECOVER_COMMAND** not configured + + If a process or module exception is detected, the following exception information is recorded in the **/var/log/sysmonitor.log** file: + + ```text + info|sysmonitor[127]: irqbalance is abnormal, check cmd return 1, recover cmd is null, will not recover + ``` + + If the process or module recovers, the following information is recorded in the **/var/log/sysmonitor.log** file: + + ```text + info|sysmonitor[127]: irqbalance is recovered + ``` + +## File Monitoring + +### Introduction + +If key system files are deleted accidentally, the system may run abnormally or even break down. Through file monitoring, you can learn about the deletion of key files or the addition of malicious files in the system in a timely manner, so that administrators and users can learn and rectify faults in a timely manner. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/file**. Each monitoring configuration item occupies a line. A monitoring configuration item contains the file (directory) and event to be monitored. The file (directory) to be monitored is an absolute path. The file (directory) to be monitored and the event to be monitored are separated by one or more spaces. + +The file monitoring configuration items can be added to the **/etc/sysmonitor/file.d** directory. The configuration method is the same as that of the **/etc/sysmonitor/file** directory. 
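To make the configuration line format concrete, here is a small Python sketch that splits one entry into the monitored absolute path and the event bitmap, applying the documented default of **0x200** (deletion only) when the event field is omitted. `parse_file_monitor_line` is a hypothetical helper, not sysmonitor's own parser.

```python
def parse_file_monitor_line(line):
    """Split one /etc/sysmonitor/file entry into (path, event_mask).

    The monitored file (directory) and the event are separated by one or
    more spaces; the path must be absolute. An omitted event falls back
    to 0x200, that is, deletion monitoring only.
    """
    fields = line.split()
    if not fields or not fields[0].startswith("/"):
        raise ValueError("monitored path must be absolute: %r" % line)
    path = fields[0]
    mask = int(fields[1], 16) if len(fields) > 1 else 0x200
    return path, mask

print(parse_file_monitor_line("/home 0x300"))           # ('/home', 768)
print(parse_file_monitor_line("/etc/ssh/sshd_config"))  # ('/etc/ssh/sshd_config', 512)
```

Here 768 is 0x300 (addition and deletion events) and 512 is 0x200 (deletion events only), matching the bitmap table below.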
+ +- Due to the log length limit, it is recommended that the absolute path of a file or directory be less than 223 characters. Otherwise, the printed logs may be incomplete. + +- Ensure that the path of the monitored file is correct. If the configured file does not exist or the path is incorrect, the file cannot be monitored. + +- Due to the path length limit of the system, the absolute path of the monitored file or directory must be less than 4096 characters. + +- Directories and regular files can be monitored. **/proc**, **/proc/\***, **/dev**, **/dev/\***, **/sys**, **/sys/\***, pipe files, or socket files cannot be monitored. + +- Only deletion events can be monitored in **/var/log** and **/var/log/\***. + +- If multiple identical paths exist in the configuration file, the first valid configuration takes effect. In the log file, you can see messages indicating that the identical paths are ignored. + +- Soft links cannot be monitored. When a hard link file deletion event is configured, the event is printed only after the file and all its hard links are deleted. + +- When a monitored event occurs after the file monitoring is successfully added, the monitoring log records the absolute path of the configured file. + +- Currently, directories cannot be monitored recursively. The configured directory is monitored but not its subdirectories. + +- The events to be monitored are configured using bitmaps as follows. + +```text + ------------------------------- + | 11~32 | 10 | 9 | 1~8 | + ------------------------------- +``` + +Each bit in the event bitmap represents an event. If bit _n_ is set to 1, the event corresponding to bit _n_ is monitored. The hexadecimal number corresponding to the monitoring bitmap is the event monitoring item written to the configuration file. 
+ +| Item| Description | Mandatory| +| ------ | ------------------ | -------- | +| 1~8 | Reserved | No | +| 9 | File or directory addition event| Yes | +| 10 | File or directory deletion event| Yes | +| 11~32 | Reserved | No | + +- After modifying the file monitoring configuration file, run `systemctl reload sysmonitor`. The new configuration takes effect within 60 seconds. +- Strictly follow the preceding rules to configure events to be monitored. If the configuration is incorrect, the events cannot be monitored. If the event field of a configuration item is empty, only the deletion event is monitored by default, that is, **0x200**. +- After a file or directory is deleted, the deletion event is reported only when all processes that have opened the file stop. +- If a monitored file is modified by `vi` or `sed`, "File XXX may have been changed" is recorded in the monitoring log. +- Currently, file addition and deletion events can be monitored, that is, the ninth and tenth bits take effect. Other bits are reserved and do not take effect. If a reserved bit is configured, the monitoring log displays a message indicating that the event monitoring is incorrectly configured. + +**Example** + +Monitor the subdirectory addition and deletion events in **/home**. The lower 12-bit bitmap is 001100000000. The configuration is as follows: + +```text +/home 0x300 +``` + +Monitor the file deletion events of **/etc/ssh/sshd_config**. The lower 12-bit bitmap is 001000000000. The configuration is as follows: + +```text +/etc/ssh/sshd_config 0x200 +``` + +### Exception Logs + +If a configured event occurs on the monitored file, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]: 1 events queued +info|sysmonitor[127]: 1th events handled +info|sysmonitor[127]: Subfile "111" under "/home" was added. 
+``` + +## Drive Partition Monitoring + +### Introduction + +The system periodically monitors the drive partitions mounted to the system. When the drive partition usage is greater than or equal to the configured alarm threshold, the system records a drive space alarm. When the drive partition usage falls below the configured alarm recovery threshold, a drive space recovery alarm is recorded. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/disk**. + +```text +DISK="/var/log" ALARM="90" RESUME="80" +DISK="/" ALARM="95" RESUME="85" +``` + +| Item| Description | Mandatory| Default Value| +| ------ | ---------------------- | -------- | ------ | +| DISK | Mount directory | Yes | None | +| ALARM | Integer indicating the drive space alarm threshold| No | 90 | +| RESUME | Integer indicating the drive space alarm recovery threshold| No | 80 | + +- After modifying the configuration file for drive space monitoring, run `systemctl reload sysmonitor`. The new configuration takes effect after a monitoring period. +- If a mount directory is configured repeatedly, the last configuration item takes effect. +- The value of **ALARM** must be greater than that of **RESUME**. +- Only the mount point or the drive partition of the mount point can be monitored. +- When the CPU usage and I/O usage are high, the `df` command execution may time out. As a result, the drive usage cannot be obtained. +- If a drive partition is mounted to multiple mount points, an alarm is reported for each mount point. + +### Exception Logs + +If a drive space alarm is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +warning|sysmonitor[127]: report disk alarm, /var/log used:90% alarm:90% +info|sysmonitor[127]: report disk recovered, /var/log used:4% resume:10% +``` + +## NIC Status Monitoring + +### Introduction + +During system running, the NIC status or IP address may change due to human factors or exceptions. 
You can monitor the NIC status and IP address changes to detect exceptions in a timely manner and locate exception causes. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/network**. + +```text +#dev event +eth1 UP +``` + +The following table describes the configuration items. +| Item| Description | Mandatory| Default Value | +| ------ | ------------------------------------------------------------ | -------- | ------------------------------------------------- | +| dev | NIC name | Yes | None | +| event | Event to be monitored. The value can be **UP**, **DOWN**, **NEWADDR**, or **DELADDR**.
- UP: The NIC is up.
- DOWN: The NIC is down.
- NEWADDR: An IP address is added.
- DELADDR: An IP address is deleted.| No | If this item is empty, **UP**, **DOWN**, **NEWADDR**, and **DELADDR** are monitored.| + +- After modifying the configuration file for NIC monitoring, run `systemctl reload sysmonitor` for the new configuration to take effect. +- The **UP** and **DOWN** status of virtual NICs cannot be monitored. +- Ensure that each line in the NIC monitoring configuration file contains less than 4096 characters. Otherwise, a configuration error message will be recorded in the monitoring log. +- By default, all events of all NICs are monitored. That is, if no NIC monitoring is configured, the **UP**, **DOWN**, **NEWADDR**, and **DELADDR** events of all NICs are monitored. +- If a NIC is configured but no event is configured, all events of the NIC are monitored by default. +- The events of route addition can be recorded five times per second. You can change the number of times by setting **NET_RATE_LIMIT_BURST** in **/etc/sysconfig/sysmonitor**. + +### Exception Logs + +If a NIC event is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]: lo: ip[::1] prefixlen[128] is added, comm: (ostnamed)[1046], parent comm: syst emd[1] +info|sysmonitor[127]: lo: device is up, comm: (ostnamed)[1046], parent comm: systemd[1] +``` + +If a route event is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[881]: Fib4 replace table=255 192.168.122.255/32, comm: daemon-init[1724], parent com m: systemd[1] +info|sysmonitor[881]: Fib4 replace table=254 192.168.122.0/24, comm: daemon-init[1724], parent comm: systemd[1] +info|sysmonitor[881]: Fib4 replace table=255 192.168.122.0/32, comm: daemon-init[1724], parent comm: systemd[1] +info|sysmonitor[881]: Fib6 replace fe80::5054:ff:fef6:b73e/128, comm: kworker/1:3[209], parent comm: kthreadd[2] +``` + +## CPU Monitoring + +### Introduction + +The system monitors the global CPU 
usage or the CPU usage in a specified domain. When the CPU usage exceeds the configured alarm threshold, the system runs the configured log collection command. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/cpu**. + +When the global CPU usage of the system is monitored, an example of the configuration file is as follows: + +```text +# cpu usage alarm percent +ALARM="90" + +# cpu usage alarm resume percent +RESUME="80" + +# monitor period (second) +MONITOR_PERIOD="60" + +# stat period (second) +STAT_PERIOD="300" + +# command executed when cpu usage exceeds alarm percent +REPORT_COMMAND="" +``` + +When the CPU usage of a specific domain is monitored, an example of the configuration file is as follows: + +```text +# monitor period (second) +MONITOR_PERIOD="60" + +# stat period (second) +STAT_PERIOD="300" + +DOMAIN="0,1" ALARM="90" RESUME="80" +DOMAIN="2,3" ALARM="50" RESUME="40" + +# command executed when cpu usage exceeds alarm percent +REPORT_COMMAND="" +``` + +| Item | Description | Mandatory| Default Value| +| -------------- | ------------------------------------------------------------ | -------- | ------ | +| ALARM | Number greater than 0, indicating the CPU usage alarm threshold | No | 90 | +| RESUME | Number greater than or equal to 0, indicating the CPU usage alarm recovery threshold | No | 80 | +| MONITOR_PERIOD | Monitoring period, in seconds. The value is greater than 0. | No | 60 | +| STAT_PERIOD | Statistical period, in seconds. The value is greater than 0. | No | 300 | +| DOMAIN | CPU IDs in the domain, represented by decimal numbers
- CPU IDs can be enumerated and separated by commas, for example, **1,2,3**. CPU IDs can be specified as a range in the format of _X_-_Y_, for example, **0-2**. The two representations can be used together, for example, **0,1,2-3** or **0-1,2-3**. Spaces or other characters are not allowed.
- Each monitoring domain has an independent configuration item. Each configuration item supports a maximum of 256 CPUs. A CPU ID must be unique in a domain and across domains.| No | None | +| REPORT_COMMAND | Command for collecting logs after the CPU usage exceeds the alarm threshold | No | None | + +- After modifying the configuration file for CPU monitoring, run `systemctl reload sysmonitor`. The new configuration takes effect after a monitoring period. +- The value of **ALARM** must be greater than that of **RESUME**. +- After the CPU domain monitoring is configured, the global average CPU usage of the system is not monitored, and the separately configured **ALARM** and **RESUME** values do not take effect. +- If the configuration of a monitoring domain is invalid, CPU monitoring is not performed at all. +- All CPUs configured in **DOMAIN** must be online. Otherwise, the domain cannot be monitored. +- The **REPORT_COMMAND** command cannot contain insecure characters such as **&**, **;**, and **>**, and the total length cannot exceed 159 characters. Otherwise, the command cannot be executed. +- Ensure the security and validity of **REPORT_COMMAND**. sysmonitor is responsible only for running the command as the **root** user. +- **REPORT_COMMAND** must not block. When the execution time of the command exceeds 60s, sysmonitor forcibly stops the execution. +- Even if the CPU usage of multiple domains exceeds the threshold in a monitoring period, **REPORT_COMMAND** is executed only once.
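
The **DOMAIN** rules above (comma-enumerated IDs, _X_-_Y_ ranges, and the requirement that every listed CPU be online) can be pre-checked before reloading sysmonitor. The following is a minimal sketch of such a check; the helper names (`parse_domain`, `domain_online`) are ours for illustration and are not part of sysmonitor:

```shell
# Expand a DOMAIN value such as "0,1,2-3" into individual CPU IDs
# (hypothetical helper, not part of sysmonitor).
parse_domain() {
    local part
    IFS=',' read -ra __parts <<< "$1"
    for part in "${__parts[@]}"; do
        case "$part" in
            *-*) seq "${part%%-*}" "${part##*-}" ;;  # range form X-Y
            *)   echo "$part" ;;                     # single CPU ID
        esac
    done
}

# Verify that every CPU in the domain is online. CPU 0 usually has no
# "online" file in sysfs and is treated as always online.
domain_online() {
    local cpu path
    for cpu in $(parse_domain "$1"); do
        path="/sys/devices/system/cpu/cpu${cpu}/online"
        if [ -f "$path" ] && [ "$(cat "$path")" != "1" ]; then
            echo "CPU ${cpu} is offline; domain \"$1\" cannot be monitored" >&2
            return 1
        fi
    done
}

domain_online "0-1" && echo "domain 0-1 ok"
```

Running `parse_domain "0,1,2-3"` prints the IDs 0, 1, 2, and 3, one per line, matching the enumerated and range forms accepted by **DOMAIN**.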
+ +### Exception Logs + +If a global CPU usage alarm is detected or cleared and the log collection command is configured, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]: CPU usage alarm: 91.3% +info|sysmonitor[127]: cpu monitor: execute REPORT_COMMAND[sysmoniotrcpu] sucessfully +info|sysmonitor[127]: CPU usage resume 70.1% +``` + +If a domain average CPU usage alarm is detected or cleared and the log collection command is configured, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]: CPU 1,2,3 usage alarm: 91.3% +info|sysmonitor[127]: cpu monitor: execute REPORT_COMMAND[sysmoniotrcpu] sucessfully +info|sysmonitor[127]: CPU 1,2,3 usage resume 70.1% +``` + +## Memory Monitoring + +### Introduction + +Monitors the system memory usage and records logs when the memory usage exceeds or falls below the threshold. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/memory**. + +```text +# memory usage alarm percent +ALARM="90" + +# memory usage alarm resume percent +RESUME="80" + +# monitor period(second) +PERIOD="60" +``` + +### Configuration Item Description + +| Item| Description | Mandatory| Default Value| +| ------ | ----------------------------- | -------- | ------ | +| ALARM | Number greater than 0, indicating the memory usage alarm threshold | No | 90 | +| RESUME | Number greater than or equal to 0, indicating the memory usage alarm recovery threshold| No | 80 | +| PERIOD | Monitoring period, in seconds. The value is greater than 0. | No | 60 | + +- After modifying the configuration file for memory monitoring, run `systemctl reload sysmonitor`. The new configuration takes effect after a monitoring period. +- The value of **ALARM** must be greater than that of **RESUME**. +- The average memory usage in three monitoring periods is used to determine whether an alarm is reported or cleared. 
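
The usage figure compared against **ALARM** and **RESUME** is derived from **/proc/meminfo**. The exact formula sysmonitor applies is not spelled out in this guide; a common approximation based on `MemAvailable`, shown here only as an assumption, is:

```shell
# Approximate memory usage percentage from /proc/meminfo.
# (MemTotal - MemAvailable) / MemTotal is our assumption, not necessarily
# the exact formula sysmonitor uses internally.
mem_usage_percent() {
    awk '/^MemTotal:/ {total=$2} /^MemAvailable:/ {avail=$2}
         END {printf "%.1f\n", (total - avail) * 100 / total}' "${1:-/proc/meminfo}"
}

mem_usage_percent   # current system usage, e.g. 21.7
```

Remember that an alarm is reported or cleared based on the average over three monitoring periods, not a single sample.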
+ +### Exception Logs + +If a memory alarm is detected, sysmonitor obtains the **/proc/meminfo** information and prints the information in the **/var/log/sysmonitor.log** file. The information is as follows: + +```text +info|sysmonitor[127]: memory usage alarm: 90% +info|sysmonitor[127]:---------------show /proc/meminfo: --------------- +info|sysmonitor[127]:MemTotal: 3496388 kB +info|sysmonitor[127]:MemFree: 2738100 kB +info|sysmonitor[127]:MemAvailable: 2901888 kB +info|sysmonitor[127]:Buffers: 165064 kB +info|sysmonitor[127]:Cached: 282360 kB +info|sysmonitor[127]:SwapCached: 4492 kB +...... +info|sysmonitor[127]:---------------show_memory_info end. --------------- +``` + +If the following information is printed, sysmonitor runs `echo m > /proc/sysrq-trigger` to export memory allocation information. You can view the information in **/var/log/messages**. + +```text +info|sysmonitor[127]: sysrq show memory ifno in message. +``` + +When the alarm is recovered, the following information is displayed: + +```text +info|sysmonitor[127]: memory usage resume: 4.6% +``` + +## Process and Thread Monitoring + +### Introduction + +Monitors the number of processes and threads. When the total number of processes or threads exceeds or falls below the threshold, a log is recorded or an alarm is reported. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/pscnt**. 
+ +```text +# number of processes(include threads) when alarm occur +ALARM="1600" + +# number of processes(include threads) when alarm resume +RESUME="1500" + +# monitor period(second) +PERIOD="60" + +# process count usage alarm percent +ALARM_RATIO="90" + +# process count usage resume percent +RESUME_RATIO="80" + +# print top process info with largest num of threads when threads alarm +# (range: 0-1024, default: 10, monitor for thread off:0) +SHOW_TOP_PROC_NUM="10" +``` + +| Item | Description | Mandatory| Default Value| +| ----------------- | ------------------------------------------------------------ | -------- | ------ | +| ALARM | Integer greater than 0, indicating the process count alarm threshold | No | 1600 | +| RESUME | Integer greater than or equal to 0, indicating the process count alarm recovery threshold | No | 1500 | +| PERIOD | Monitoring period, in seconds. The value is greater than 0. | No | 60 | +| ALARM_RATIO | Number greater than 0 and less than or equal to 100. Process count alarm threshold. | No | 90 | +| RESUME_RATIO | Number greater than 0 and less than or equal to 100. Process count alarm recovery threshold, which must be less than **ALARM_RATIO**.| No | 80 | +| SHOW_TOP_PROC_NUM | Number of processes with the most threads whose information is printed when a thread count alarm is generated. The value 0 disables thread monitoring.| No | 10 | + +- After modifying the configuration file for process count monitoring, run `systemctl reload sysmonitor`. The new configuration takes effect after a monitoring period. +- The value of **ALARM** must be greater than that of **RESUME**. +- The process count alarm threshold is the larger of **ALARM** and **ALARM_RATIO** percent of **/proc/sys/kernel/pid_max**. The alarm recovery threshold is the larger of **RESUME** and **RESUME_RATIO** percent of **/proc/sys/kernel/pid_max**. +- The thread count alarm threshold is the larger of **ALARM** and **ALARM_RATIO** percent of **/proc/sys/kernel/threads-max**. The alarm recovery threshold is the larger of **RESUME** and **RESUME_RATIO** percent of **/proc/sys/kernel/threads-max**. +- The value of **SHOW_TOP_PROC_NUM** ranges from 0 to 1024. 0 indicates that thread monitoring is disabled. A larger value, for example, 1024, increases the performance impact when thread count alarms are generated. You are advised to set this parameter to the default value 10 or a smaller value. If the impact is huge, you are advised to set this parameter to 0 to disable thread monitoring. +- The value of **PSCNT_MONITOR** in **/etc/sysconfig/sysmonitor** and the value of **SHOW_TOP_PROC_NUM** in **/etc/sysmonitor/pscnt** determine whether thread monitoring is enabled. + - If **PSCNT_MONITOR** is on and **SHOW_TOP_PROC_NUM** is set to a valid value, thread monitoring is enabled. + - If **PSCNT_MONITOR** is on and **SHOW_TOP_PROC_NUM** is 0, thread monitoring is disabled. + - If **PSCNT_MONITOR** is off, thread monitoring is disabled. +- When a process count alarm is generated, the system FD usage information and memory information (**/proc/meminfo**) are printed. +- When a thread count alarm is generated, the total number of threads, `top` process information, number of processes in the current environment, number of system FDs, and memory information (**/proc/meminfo**) are printed. +- If system resources are insufficient before a monitoring period ends, for example, the thread count exceeds the maximum number allowed, the monitoring cannot run properly due to resource limitation. As a result, the alarm cannot be generated.
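
The effective thresholds described above combine an absolute count with a percentage of a kernel limit. The following sketch reflects our reading of that documented rule (it is not sysmonitor code):

```shell
# Effective threshold: the larger of the absolute ALARM value and
# ALARM_RATIO percent of the kernel limit (pid_max for processes,
# threads-max for threads). Sketch of the documented rule only.
effective_threshold() {
    local alarm=$1 ratio=$2 limit=$3
    local from_ratio=$(( limit * ratio / 100 ))
    if [ "$alarm" -gt "$from_ratio" ]; then
        echo "$alarm"
    else
        echo "$from_ratio"
    fi
}

# Process count alarm threshold with ALARM=1600 and ALARM_RATIO=90:
effective_threshold 1600 90 "$(cat /proc/sys/kernel/pid_max)"
```

With the default `pid_max` on many 64-bit systems the percentage term dominates; on a system with a small `pid_max`, the absolute **ALARM** value wins instead.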
+ +### Exception Logs + +If a process count alarm is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]:---------------process count alarm start: --------------- +info|sysmonitor[127]: process count alarm:1657 +info|sysmonitor[127]: process count alarm, show sys fd count: 2592 +info|sysmonitor[127]: process count alarm, show mem info +info|sysmonitor[127]:---------------show /proc/meminfo: --------------- +info|sysmonitor[127]:MemTotal: 3496388 kB +info|sysmonitor[127]:MemFree: 2738100 kB +info|sysmonitor[127]:MemAvailable: 2901888 kB +info|sysmonitor[127]:Buffers: 165064 kB +info|sysmonitor[127]:Cached: 282360 kB +info|sysmonitor[127]:SwapCached: 4492 kB +...... +info|sysmonitor[127]:---------------show_memory_info end. --------------- +info|sysmonitor[127]:---------------process count alarm end: --------------- +``` + +If a process count recovery alarm is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]: process count resume: 1200 +``` + +If a thread count alarm is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]:---------------threads count alarm start: --------------- +info|sysmonitor[127]:threads count alarm: 273 +info|sysmonitor[127]:open threads most 10 processes is [top1:pid=1756900,openthreadsnum=13,cmd=/usr/bin/sysmonitor --daemon] +info|sysmonitor[127]:open threads most 10 processes is [top2:pid=3130,openthreadsnum=13,cmd=/usr/lib/gassproxy -D] +..... +info|sysmonitor[127]:---------------threads count alarm end. --------------- +``` + +## System FD Count Monitoring + +### Introduction + +Monitors the number of system FDs. When the total number of system FDs exceeds or is less than the threshold, a log is recorded. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/sys_fd_conf**. 
+ +```text +# system fd usage alarm percent +SYS_FD_ALARM="80" +# system fd usage alarm resume percent +SYS_FD_RESUME="70" +# monitor period (second) +SYS_FD_PERIOD="600" +``` + +Configuration items: + +| Item | Description | Mandatory| Default Value| +| ------------- | --------------------------------------------------------- | -------- | ------ | +| SYS_FD_ALARM | Integer greater than 0 and less than 100, indicating the alarm threshold of the percentage of the total number of FDs and the maximum number of FDs allowed.| No | 80 | +| SYS_FD_RESUME | Integer greater than 0 and less than 100, indicating the alarm recovery threshold of the percentage of the total number of FDs and the maximum number of FDs allowed.| No | 70 | +| SYS_FD_PERIOD | Integer between 100 and 86400, indicating the monitor period in seconds | No | 600 | + +- After modifying the configuration file for FD count monitoring, run `systemctl reload sysmonitor`. The new configuration takes effect after a monitoring period. +- The value of **SYS_FD_ALARM** must be greater than that of **SYS_FD_RESUME**. If the value is invalid, the default value is used and a log is recorded. + +### Exception Logs + +An FD count alarm is recorded in the monitoring logs when detected. The following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]: sys fd count alarm: 259296 +``` + +When a system FD usage alarm is generated, the top three processes that use the most FDs are printed. + +```text +info|sysmonitor[127]:open fd most three processes is:[top1:pid=23233,openfdnum=5000,cmd=/home/openfile] +info|sysmonitor[127]:open fd most three processes is:[top2:pid=23267,openfdnum=5000,cmd=/home/openfile] +info|sysmonitor[127]:open fd most three processes is:[top3:pid=30144,openfdnum=5000,cmd=/home/openfile] +``` + +## Drive Inode Monitoring + +### Introduction + +Periodically monitors the inodes of mounted drive partitions. 
When the drive partition inode usage is greater than or equal to the configured alarm threshold, the system records a drive inode alarm. When the drive inode usage falls below the configured alarm recovery threshold, a drive inode recovery alarm is recorded. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/inode**. + +```text +DISK="/" +DISK="/var/log" +``` + +| Item| Description | Mandatory| Default Value| +| ------ | ------------------------- | -------- | ------ | +| DISK | Mount directory | Yes | None | +| ALARM | Integer indicating the drive inode alarm threshold| No | 90 | +| RESUME | Integer indicating the drive inode alarm recovery threshold| No | 80 | + +- After modifying the configuration file for drive inode monitoring, run `systemctl reload sysmonitor`. The new configuration takes effect after a monitoring period. +- If a mount directory is configured repeatedly, the last configuration item takes effect. +- The value of **ALARM** must be greater than that of **RESUME**. +- Only the mount point or the drive partition of the mount point can be monitored. +- When the CPU usage and I/O usage are high, the `df` command execution may time out. As a result, the drive inode usage cannot be obtained. +- If a drive partition is mounted to multiple mount points, an alarm is reported for each mount point. + +### Exception Logs + +If a drive inode alarm is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[4570]:report disk inode alarm, /var/log used:90% alarm:90% +info|sysmonitor[4570]:report disk inode recovered, /var/log used:79% alarm:80% +``` + +## Local Drive I/O Latency Monitoring + +### Introduction + +Reads the local drive I/O latency data every 5 seconds and collects statistics on 60 groups of data every 5 minutes. 
If more than 30 groups of data are greater than the configured maximum I/O latency, the system records a log indicating excessive drive I/O latency. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/iodelay**. + +```text +DELAY_VALUE="500" +``` + +| Item | Description | Mandatory| Default Value| +| ----------- | -------------------- | -------- | ------ | +| DELAY_VALUE | Maximum drive I/O latency| Yes | 500 | + +### Exception Logs + +If a drive I/O latency alarm is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]:local disk sda IO delay is too large, I/O delay threshold is 70. +info|sysmonitor[127]:disk is sda, io delay data: 71 72 75 87 99 29 78 ...... +``` + +If a drive I/O latency recovery alarm is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]:local disk sda IO delay is normal, I/O delay threshold is 70. +info|sysmonitor[127]:disk is sda, io delay data: 11 22 35 8 9 29 38 ...... +``` + +## Zombie Process Monitoring + +### Introduction + +Monitors the number of zombie processes in the system. If the number is greater than the alarm threshold, an alarm log is recorded. When the number drops lower than the recovery threshold, a recovery alarm is reported. + +### Configuration File Description + +The configuration file is **/etc/sysmonitor/zombie**. + +```text +# Ceiling zombie process counts of alarm +ALARM="500" + +# Floor zombie process counts of resume +RESUME="400" + +# Periodic (second) +PERIOD="600" +``` + +| Item| Description | Mandatory| Default Value| +| ------ | ------------------------------- | -------- | ------ | +| ALARM | Number greater than 0, indicating the zombie process count alarm threshold | No | 500 | +| RESUME | Number greater than or equal to 0, indicating the zombie process count recovery threshold| No | 400 | +| PERIOD | Monitoring period, in seconds. 
The value is greater than 0. | No | 60 | + +### Exception Logs + +If a zombie process count alarm is detected, the following information is displayed in the **/var/log/sysmonitor.log** file: + +```text +info|sysmonitor[127]: zombie process count alarm: 600 +info|sysmonitor[127]: zombie process count resume: 100 +``` + +## Custom Monitoring + +### Introduction + +You can customize monitoring items. The monitoring framework reads the content of the configuration file, parses the monitoring attributes, and calls the monitoring actions to be performed. The monitoring module provides only the monitoring framework. It is not aware of what users are monitoring or how to monitor, and does not report alarms. + +### Configuration File Description + +The configuration files are stored in **/etc/sysmonitor.d/**. Each process or module corresponds to a configuration file. + +```text +MONITOR_SWITCH="on" +TYPE="periodic" +EXECSTART="/usr/sbin/iomonitor_daemon" +PERIOD="1800" +``` + +| Item | Description | Mandatory | Default Value| +| -------------- | ------------------------------------------------------------ | --------------------- | ------ | +| MONITOR_SWITCH | Monitoring switch | No | off | +| TYPE | Custom monitoring item type
**daemon**: background execution
**periodic**: periodic execution| Yes | None | +| EXECSTART | Monitoring command | Yes | None | +| ENVIROMENTFILE | Environment variable file | No | None | +| PERIOD | If the type is **periodic**, this parameter is mandatory and sets the monitoring period. The value is an integer greater than 0.| Yes when the type is **periodic**| None | + +- The absolute path of the configuration file or environment variable file cannot contain more than 127 characters. The environment variable file path cannot be a soft link path. +- The length of the **EXECSTART** command cannot exceed 159 characters. No space is allowed in the key field. +- The execution of the periodic monitoring command cannot time out. Otherwise, the custom monitoring framework will be affected. +- Currently, a maximum of 256 environment variables can be configured. +- The custom monitoring of the daemon type checks whether the `reload` command is delivered or whether the daemon process exits abnormally every 10 seconds. If the `reload` command is delivered, the new configuration is loaded 10 seconds later. If a daemon process exits abnormally, the daemon process is restarted 10 seconds later. +- If the content of the **ENVIROMENTFILE** file changes, for example, an environment variable is added or the environment variable value changes, you need to restart the sysmonitor service for the new environment variable to take effect. +- You are advised to set the permission on the configuration files in the **/etc/sysmonitor.d/** directory to 600. If **EXECSTART** is only an executable file, you are advised to set the permission on the executable file to 550. +- After a daemon process exits abnormally, sysmonitor reloads the configuration file of the daemon process. 
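
A minimal periodic item tying these fields together might look as follows. The file names, paths, and the 90% threshold are illustrative assumptions only and are not shipped with sysmonitor:

```shell
# Hypothetical /etc/sysmonitor.d/diskcheck (mode 600):
#   MONITOR_SWITCH="on"
#   TYPE="periodic"
#   EXECSTART="/usr/local/bin/diskcheck.sh"
#   PERIOD="300"
#
# /usr/local/bin/diskcheck.sh (mode 550). It must finish well within the
# period; a periodic command that blocks disrupts the monitoring framework.
fs_usage_percent() {
    # Print the usage percentage of the given mount point as a bare number.
    df --output=pcent "${1:-/}" | tail -n 1 | tr -dc '0-9'
}

usage=$(fs_usage_percent /)
if [ "${usage:-0}" -ge 90 ]; then
    logger -t diskcheck "root filesystem usage ${usage}% exceeds 90%"
fi
```

Because the framework only runs **EXECSTART** and does not interpret its result, any alarm reporting (here via `logger`) is the script's own responsibility.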
+ +### Exception Logs + +If a monitoring item of the daemon type exits abnormally, the **/var/log/sysmonitor.log** file records the following information: + +```text +info|sysmonitor[127]: custom daemon monitor: child process[11609] name unetwork_alarm exit code[127],[1] times. +``` diff --git a/docs/en/Server/MemoryandStorage/GMEM/Menu/index.md b/docs/en/Server/MemoryandStorage/GMEM/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..883043f65021060088a07093c38c02d531aa0f13 --- /dev/null +++ b/docs/en/Server/MemoryandStorage/GMEM/Menu/index.md @@ -0,0 +1,7 @@ +--- +headless: true +--- + +- [GMEM User Guide]({{< relref "./introduction-to-gmem.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment.md" >}}) + - [Usage Instructions]({{< relref "./usage-instructions.md" >}}) diff --git a/docs/en/docs/GMEM/images/GMEM_architecture.png b/docs/en/Server/MemoryandStorage/GMEM/images/GMEM_architecture.png similarity index 100% rename from docs/en/docs/GMEM/images/GMEM_architecture.png rename to docs/en/Server/MemoryandStorage/GMEM/images/GMEM_architecture.png diff --git a/docs/en/docs/GMEM/install_deploy.md b/docs/en/Server/MemoryandStorage/GMEM/installation-and-deployment.md similarity index 85% rename from docs/en/docs/GMEM/install_deploy.md rename to docs/en/Server/MemoryandStorage/GMEM/installation-and-deployment.md index 0a9991c8a70919e8a69fd3b8820d4bdb0b8f6984..fd5cf18f893f953544b6157e003087e865218ee1 100644 --- a/docs/en/docs/GMEM/install_deploy.md +++ b/docs/en/Server/MemoryandStorage/GMEM/installation-and-deployment.md @@ -24,9 +24,9 @@ This section describes how to install and deploy GMEM. | Source | Software Package | | ------------------------------------------------------------ | ------------------------------------------------------------ | - | openEuler 23.09 | kernel-6.4.0-xxx.aarch64.rpm
kernel-devel-6.4.0-xxx.aarch64.rpm
libgmem-xxx.aarch64.rpm
libgmem-devel-xxx.aarch64.rpm | - | Ascend community | CANN package:
Ascend-cann-toolkit-xxx-linux.aarch64.rpm
NPU firmware and driver:
Ascend-hdk-910-npu-driver-xxx.aarch64.rpm
Ascend-hdk-910-npu-firmware-xxx.noarch.rpm | - | Contact the maintainers of the GMEM community.
[@yang_yanchao](https://gitee.com/yang_yanchao) email:
[@LemmyHuang](https://gitee.com/LemmyHuang) email: | gmem-example-xxx.aarch64.rpm
mindspore-xxx-linux_aarch64.whl | + | openEuler 23.09 | kernel-6.4.0-xxx.aarch64.rpm
kernel-devel-6.4.0-xxx.aarch64.rpm
libgmem-xxx.aarch64.rpm
libgmem-devel-xxx.aarch64.rpm | + | Ascend community | CANN package:
Ascend-cann-toolkit-xxx-linux.aarch64.rpm
NPU firmware and driver:
Ascend-hdk-910-npu-driver-xxx.aarch64.rpm
Ascend-hdk-910-npu-firmware-xxx.noarch.rpm | + | Contact the maintainers of the GMEM community.
[@yang_yanchao](https://gitee.com/yang_yanchao) email:
[@LemmyHuang](https://gitee.com/LemmyHuang) email: | gmem-example-xxx.aarch64.rpm
mindspore-xxx-linux_aarch64.whl | * Install the kernel. @@ -36,7 +36,7 @@ This section describes how to install and deploy GMEM. [root@localhost ~]# cat /boot/config-`uname -r` | grep CONFIG_GMEM CONFIG_GMEM=y CONFIG_GMEM_DEV=m - + [root@localhost ~]# cat /boot/config-`uname -r` | grep CONFIG_REMOTE_PAGER CONFIG_REMOTE_PAGER=m CONFIG_REMOTE_PAGER_MASTER=m @@ -52,13 +52,13 @@ This section describes how to install and deploy GMEM. Configure **transparent_hugepage**. ```sh - echo always > /sys/kernel/mm/transparent_hugepage/enabled + echo always > /sys/kernel/mm/transparent_hugepage/enabled ``` * Install the user-mode dynamic library libgmem. ```sh - yum install libgmem libgmem-devel + yum install libgmem libgmem-devel ``` * Install the CANN framework. @@ -92,7 +92,7 @@ This section describes how to install and deploy GMEM. | 0 | 0000:81:00.0 | 0 1979 / 15039 0 / 32768 | +======================+===============+====================================================+ ``` - + * Install the gmem-example software package. gmem-example updates the host driver, NPU driver, and NPU kernel. After the installation is complete, restart the system for the driver to take effect. @@ -110,7 +110,7 @@ This section describes how to install and deploy GMEM. MindSpore version: x.x.x The result of multiplication calculation is correct, MindSpore has been installed on platform [Ascend] successfully! ``` - + ## Performing Training or Inference After installation is complete, you can execute MindSpore-based training or inference directly without any adaptation. 
diff --git a/docs/en/docs/GMEM/GMEM_introduction.md b/docs/en/Server/MemoryandStorage/GMEM/introduction-to-gmem.md similarity index 100% rename from docs/en/docs/GMEM/GMEM_introduction.md rename to docs/en/Server/MemoryandStorage/GMEM/introduction-to-gmem.md diff --git a/docs/en/docs/GMEM/usage.md b/docs/en/Server/MemoryandStorage/GMEM/usage-instructions.md similarity index 99% rename from docs/en/docs/GMEM/usage.md rename to docs/en/Server/MemoryandStorage/GMEM/usage-instructions.md index 5203c52e2146c04277e135b8a0de40eec7337d5d..14ae5dfaaf8381ae4092440127fa06d7519eba0e 100644 --- a/docs/en/docs/GMEM/usage.md +++ b/docs/en/Server/MemoryandStorage/GMEM/usage-instructions.md @@ -16,13 +16,13 @@ libgmem is the abstraction layer of the GMEM user API. It encapsulates the prece ``` * Memory release - + `munmap` is used to release memory of hosts and devices. - + ```c munmap(addr, size); ``` - + * Memory semantics `FreeEager`: For an address segment within the specified range \[**addr**, **addr** + **size**], `FreeEager` releases a complete page that aligns the page size inwards (the default page size is 2 MB). If no complete page exists in the range, a success message is returned. @@ -48,14 +48,14 @@ libgmem is the abstraction layer of the GMEM user API. It encapsulates the prece * Other APIs Obtain the NUMA ID of the current device. If the API is invoked successfully, the NUMA ID is returned. Otherwise, an error code is returned. - + ```c Prototype: `int gmemGetNumaId (void);` Usage: `numaid = gmemGetNumaId ();` ``` - + Obtain the GMEM statistics of the kernel. 
- + ```sh cat /proc/gmemstat ``` diff --git a/docs/en/Server/MemoryandStorage/HSAK/Menu/index.md b/docs/en/Server/MemoryandStorage/HSAK/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..b166033f616dc18e2f7f19fbe8f50e62b0abb57b --- /dev/null +++ b/docs/en/Server/MemoryandStorage/HSAK/Menu/index.md @@ -0,0 +1,8 @@ +--- +headless: true +--- + +- [HSAK Developer Guide]({{< relref "./hsak-developer-guide.md" >}}) + - [Development with HSAK]({{< relref "./development_with_hsak.md" >}}) + - [HSAK Tool Usage]({{< relref "./hsak_tool_usage.md" >}}) + - [HSAK C APIs]({{< relref "./hsak_c_apis.md" >}}) diff --git a/docs/en/docs/HSAK/develop_with_hsak.md b/docs/en/Server/MemoryandStorage/HSAK/development_with_hsak.md similarity index 44% rename from docs/en/docs/HSAK/develop_with_hsak.md rename to docs/en/Server/MemoryandStorage/HSAK/development_with_hsak.md index cbd2bb70c1b451fafd4baade6878ffcd30d7ee95..240524c2b54415ec5f0fbe3d72d25aa50ceb2d81 100644 --- a/docs/en/docs/HSAK/develop_with_hsak.md +++ b/docs/en/Server/MemoryandStorage/HSAK/development_with_hsak.md @@ -1,6 +1,8 @@ -# Instructions +# Development with HSAK -## **nvme.conf.in** Configuration File +## Instructions + +### **nvme.conf.in** Configuration File By default, the HSAK configuration file is located in **/etc/spdk/nvme.conf.in**. You can modify the configuration file based on service requirements. The content of the configuration file is as follows: @@ -26,14 +28,14 @@ By default, the HSAK configuration file is located in **/etc/spdk/nvme.conf.in** 1. **BatchSize**: number of I/Os that can be submitted in batches. The default value is **8**, and the maximum value is **32**. -## Header File Reference +### Header File Reference HSAK provides two external header files. Include the two files when using HSAK for development. 1. **bdev_rw.h**: defines the macros, enumerations, data structures, and APIs of the user-mode I/O operations on the data plane. 2. 
**ublock.h**: defines macros, enumerations, data structures, and APIs for functions such as device management and information obtaining on the management plane. -## Service Running +### Service Running After software development and compilation, you must run the **setup.sh** script to rebind the NVMe drive driver to the user mode before running the software. The script is located in **/opt/spdk** by default. Run the following commands to change the drive driver's binding mode from kernel to user and reserve 1024 x 2 MB huge pages: @@ -54,126 +56,126 @@ Run the following commands to restore the drive driver's mode from user to kerne 0000:40:00.0 (8086 2701): uio_pci_generic -> nvme ``` -## User-Mode I/O Read and Write Scenarios +### User-Mode I/O Read and Write Scenarios Call HSAK APIs in the following sequence to read and write service data through the user-mode I/O channel: 1. Initialize the HSAK UIO module. - Call **libstorage_init_module** to initialize the HSAK user-mode I/O channel. + Call **libstorage_init_module** to initialize the HSAK user-mode I/O channel. 2. Open a drive block device. - Call **libstorage_open** to open a specified block device. If multiple block devices need to be opened, call this API repeatedly. + Call **libstorage_open** to open a specified block device. If multiple block devices need to be opened, call this API repeatedly. 3. Allocate I/O memory. - Call **libstorage_alloc_io_buf** or **libstorage_mem_reserve** to allocate memory. **libstorage_alloc_io_buf** can allocate a maximum of 65 KB I/Os, and **libstorage_mem_reserve** can allocate unlimited memory unless there is no available space. + Call **libstorage_alloc_io_buf** or **libstorage_mem_reserve** to allocate memory. **libstorage_alloc_io_buf** can allocate a maximum of 65 KB I/Os, and **libstorage_mem_reserve** can allocate unlimited memory unless there is no available space. 4. Perform read and write operations on a drive. 
- You can call the following APIs to perform read and write operations based on service requirements: + You can call the following APIs to perform read and write operations based on service requirements: - - libstorage_async_read - - libstorage_async_readv - - libstorage_async_write - - libstorage_async_writev - - libstorage_sync_read - - libstorage_sync_write + - libstorage_async_read + - libstorage_async_readv + - libstorage_async_write + - libstorage_async_writev + - libstorage_sync_read + - libstorage_sync_write 5. Free I/O memory. - Call **libstorage_free_io_buf** or **libstorage_mem_free** to free memory, which must correspond to the API used to allocate memory. + Call **libstorage_free_io_buf** or **libstorage_mem_free** to free memory, which must correspond to the API used to allocate memory. 6. Close a drive block device. - Call **libstorage_close** to close a specified block device. If multiple block devices are opened, call this API repeatedly to close them. - - | API | Description | - | ----------------------- | ------------------------------------------------------------ | - | libstorage_init_module | Initializes the HSAK module. | - | libstorage_open | Opens a block device. | - | libstorage_alloc_io_buf | Allocates memory from buf_small_pool or buf_large_pool of SPDK. | - | libstorage_mem_reserve | Allocates memory space from the huge page memory reserved by DPDK. | - | libstorage_async_read | Delivers asynchronous I/O read requests (the read buffer is a contiguous buffer). | - | libstorage_async_readv | Delivers asynchronous I/O read requests (the read buffer is a discrete buffer). | - | libstorage_async_write | Delivers asynchronous I/O write requests (the write buffer is a contiguous buffer). | - | libstorage_async_wrtiev | Delivers asynchronous I/O write requests (the write buffer is a discrete buffer). | - | libstorage_sync_read | Delivers synchronous I/O read requests (the read buffer is a contiguous buffer). 
|
- | libstorage_sync_write | Delivers synchronous I/O write requests (the write buffer is a contiguous buffer). |
- | libstorage_free_io_buf | Frees the allocated memory to buf_small_pool or buf_large_pool of SPDK. |
- | libstorage_mem_free | Frees the memory space that libstorage_mem_reserve allocates. |
- | libstorage_close | Closes a block device. |
- | libstorage_exit_module | Exits the HSAK module. |
-
-## Drive Management Scenarios
+    Call **libstorage_close** to close a specified block device. If multiple block devices are opened, call this API repeatedly to close them.
+
+    | API | Description |
+    | ----------------------- | ------------------------------------------------------------ |
+    | libstorage_init_module | Initializes the HSAK module. |
+    | libstorage_open | Opens a block device. |
+    | libstorage_alloc_io_buf | Allocates memory from buf_small_pool or buf_large_pool of SPDK. |
+    | libstorage_mem_reserve | Allocates memory space from the huge page memory reserved by DPDK. |
+    | libstorage_async_read | Delivers asynchronous I/O read requests (the read buffer is a contiguous buffer). |
+    | libstorage_async_readv | Delivers asynchronous I/O read requests (the read buffer is a discrete buffer). |
+    | libstorage_async_write | Delivers asynchronous I/O write requests (the write buffer is a contiguous buffer). |
+    | libstorage_async_writev | Delivers asynchronous I/O write requests (the write buffer is a discrete buffer). |
+    | libstorage_sync_read | Delivers synchronous I/O read requests (the read buffer is a contiguous buffer).
| + +### Drive Management Scenarios HSAK contains a group of C APIs, which can be used to format drives and create and delete namespaces. 1. Call the C API to initialize the HSAK UIO component. If the HSAK UIO component has been initialized, skip this operation. - libstorage_init_module + libstorage_init_module 2. Call corresponding APIs to perform drive operations based on service requirements. The following APIs can be called separately: - - libstorage_create_namespace + - libstorage_create_namespace - - libstorage_delete_namespace + - libstorage_delete_namespace - - libstorage_delete_all_namespace + - libstorage_delete_all_namespace - - libstorage_nvme_create_ctrlr + - libstorage_nvme_create_ctrlr - - libstorage_nvme_delete_ctrlr + - libstorage_nvme_delete_ctrlr - - libstorage_nvme_reload_ctrlr + - libstorage_nvme_reload_ctrlr - - libstorage_low_level_format_nvm + - libstorage_low_level_format_nvm - - libstorage_deallocate_block + - libstorage_deallocate_block 3. If you exit the program, destroy the HSAK UIO. If other services are using the HSAK UIO, you do not need to exit the program and destroy the HSAK UIO. - libstorage_exit_module + libstorage_exit_module - | API | Description | - | ------------------------------- | ------------------------------------------------------------ | - | libstorage_create_namespace | Creates a namespace on a specified controller (the prerequisite is that the controller supports namespace management). | - | libstorage_delete_namespace | Deletes a namespace from a specified controller. | - | libstorage_delete_all_namespace | Deletes all namespaces from a specified controller. | - | libstorage_nvme_create_ctrlr | Creates an NVMe controller based on the PCI address. | - | libstorage_nvme_delete_ctrlr | Destroys an NVMe controller based on the controller name. | - | libstorage_nvme_reload_ctrlr | Automatically creates or destroys the NVMe controller based on the input configuration file. 
| - | libstorage_low_level_format_nvm | Low-level formats an NVMe drive. | - | libstorage_deallocate_block | Notifies NVMe drives of blocks that can be freed for garbage collection. | + | API | Description | + | ------------------------------- | ------------------------------------------------------------ | + | libstorage_create_namespace | Creates a namespace on a specified controller (the prerequisite is that the controller supports namespace management). | + | libstorage_delete_namespace | Deletes a namespace from a specified controller. | + | libstorage_delete_all_namespace | Deletes all namespaces from a specified controller. | + | libstorage_nvme_create_ctrlr | Creates an NVMe controller based on the PCI address. | + | libstorage_nvme_delete_ctrlr | Destroys an NVMe controller based on the controller name. | + | libstorage_nvme_reload_ctrlr | Automatically creates or destroys the NVMe controller based on the input configuration file. | + | libstorage_low_level_format_nvm | Low-level formats an NVMe drive. | + | libstorage_deallocate_block | Notifies NVMe drives of blocks that can be freed for garbage collection. | -## Data-Plane Drive Information Query +### Data-Plane Drive Information Query The I/O data plane of HSAK provides a group of C APIs for querying drive information. Upper-layer services can process service logic based on the queried information. 1. Call the C API to initialize the HSAK UIO component. If the HSAK UIO component has been initialized, skip this operation. - libstorage_init_module + libstorage_init_module 2. Call corresponding APIs to query information based on service requirements. 
The following APIs can be called separately: - - libstorage_get_nvme_ctrlr_info + - libstorage_get_nvme_ctrlr_info - - libstorage_get_mgr_info_by_esn + - libstorage_get_mgr_info_by_esn - - libstorage_get_mgr_smart_by_esn + - libstorage_get_mgr_smart_by_esn - - libstorage_get_bdev_ns_info + - libstorage_get_bdev_ns_info - - libstorage_get_ctrl_ns_info + - libstorage_get_ctrl_ns_info 3. If you exit the program, destroy the HSAK UIO. If other services are using the HSAK UIO, you do not need to exit the program and destroy the HSAK UIO. - libstorage_exit_module + libstorage_exit_module - | API | Description | - | ------------------------------- | ------------------------------------------------------------ | - | libstorage_get_nvme_ctrlr_info | Obtains information about all controllers. | - | libstorage_get_mgr_info_by_esn | Obtains the management information of the drive corresponding to an ESN. | - | libstorage_get_mgr_smart_by_esn | Obtains the S.M.A.R.T. information of the drive corresponding to an ESN. | - | libstorage_get_bdev_ns_info | Obtains namespace information based on the device name. | - | libstorage_get_ctrl_ns_info | Obtains information about all namespaces based on the controller name. | + | API | Description | + | ------------------------------- | ------------------------------------------------------------ | + | libstorage_get_nvme_ctrlr_info | Obtains information about all controllers. | + | libstorage_get_mgr_info_by_esn | Obtains the management information of the drive corresponding to an ESN. | + | libstorage_get_mgr_smart_by_esn | Obtains the S.M.A.R.T. information of the drive corresponding to an ESN. | + | libstorage_get_bdev_ns_info | Obtains namespace information based on the device name. | + | libstorage_get_ctrl_ns_info | Obtains information about all namespaces based on the controller name. 
| -## Management-Plane Drive Information Query +### Management-Plane Drive Information Query The management plane component Ublock of HSAK provides a group of C APIs for querying drive information on the management plane. @@ -189,23 +191,23 @@ The management plane component Ublock of HSAK provides a group of C APIs for que 6. If you exit the program, destroy the HSAK Ublock module (the destruction method on the server is the same as that on the client). - | API | Description | - | ---------------------------- | ------------------------------------------------------------ | - | init_ublock | Initializes the Ublock function module. This API must be called before the other Ublock APIs. A process can be initialized only once because the init_ublock API initializes DPDK. The initial memory allocated by DPDK is bound to the process PID. One PID can be bound to only one memory. In addition, DPDK does not provide an API for freeing the memory. The memory can be freed only by exiting the process. | - | ublock_init | It is the macro definition of the init_ublock API. It can be considered as initializing Ublock to an RPC service. | - | ublock_init_norpc | It is the macro definition of the init_ublock API. It can be considered as initializing Ublock to a non-RPC service. | - | ublock_get_bdevs | Obtains the device list. The obtained device list contains only PCI addresses and does not contain specific device information. To obtain specific device information, call the ublock_get_bdev API. | - | ublock_get_bdev | Obtains information about a specific device, including the device serial number, model, and firmware version. The information is stored in character arrays instead of character strings. | - | ublock_get_bdev_by_esn | Obtains the device information based on the specified ESN, including the serial number, model, and firmware version. | - | ublock_get_SMART_info | Obtains the S.M.A.R.T. information of a specified device. 
| - | ublock_get_SMART_info_by_esn | Obtains the S.M.A.R.T. information of the device corresponding to an ESN. | - | ublock_get_error_log_info | Obtains the error log information of a device. | - | ublock_get_log_page | Obtains information about a specified log page of a specified device. | - | ublock_free_bdevs | Frees the device list. | - | ublock_free_bdev | Frees device resources. | - | ublock_fini | Destroys the Ublock module. This API destroys the Ublock module and internally created resources. This API must be used together with the Ublock initialization API. | - -## Log Management + | API | Description | + | ---------------------------- | ------------------------------------------------------------ | + | init_ublock | Initializes the Ublock function module. This API must be called before the other Ublock APIs. A process can be initialized only once because the init_ublock API initializes DPDK. The initial memory allocated by DPDK is bound to the process PID. One PID can be bound to only one memory. In addition, DPDK does not provide an API for freeing the memory. The memory can be freed only by exiting the process. | + | ublock_init | It is the macro definition of the init_ublock API. It can be considered as initializing Ublock to an RPC service. | + | ublock_init_norpc | It is the macro definition of the init_ublock API. It can be considered as initializing Ublock to a non-RPC service. | + | ublock_get_bdevs | Obtains the device list. The obtained device list contains only PCI addresses and does not contain specific device information. To obtain specific device information, call the ublock_get_bdev API. | + | ublock_get_bdev | Obtains information about a specific device, including the device serial number, model, and firmware version. The information is stored in character arrays instead of character strings. | + | ublock_get_bdev_by_esn | Obtains the device information based on the specified ESN, including the serial number, model, and firmware version. 
|
+    | ublock_get_SMART_info | Obtains the S.M.A.R.T. information of a specified device. |
+    | ublock_get_SMART_info_by_esn | Obtains the S.M.A.R.T. information of the device corresponding to an ESN. |
+    | ublock_get_error_log_info | Obtains the error log information of a device. |
+    | ublock_get_log_page | Obtains information about a specified log page of a specified device. |
+    | ublock_free_bdevs | Frees the device list. |
+    | ublock_free_bdev | Frees device resources. |
+    | ublock_fini | Destroys the Ublock module. This API destroys the Ublock module and internally created resources. This API must be used together with the Ublock initialization API. |
+
+### Log Management

HSAK logs are exported to **/var/log/messages** through syslog by default and managed by the rsyslog service of the OS. If a custom log directory is required, use rsyslog to configure the log directory.

@@ -213,17 +215,17 @@ HSAK logs are exported to **/var/log/messages** through syslog by default and ma

2. Add the following rule for the **LibStorage** process to the rsyslog configuration:

-    ```shell
-    if ($programname == 'LibStorage') then {
-    action(type="omfile" fileCreateMode="0600" file="/var/log/HSAK/run.log")
-    stop
-    }
-    ```
+    ```shell
+    if ($programname == 'LibStorage') then {
+        action(type="omfile" fileCreateMode="0600" file="/var/log/HSAK/run.log")
+        stop
+    }
+    ```

3. Restart the rsyslog service, then start the HSAK process. The log information is redirected to the target directory.

-    ```shell
-    sysemctl restart rsyslog
-    ```
+    ```shell
+    systemctl restart rsyslog
+    ```

4. If redirected logs need to be dumped, manually configure log dump in the **/etc/logrotate.d/syslog** file.
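For step 4, a minimal logrotate stanza for the redirected log might look like the following sketch. The rotation size and count are illustrative assumptions, not values from this guide; the stanza would be added to **/etc/logrotate.d/syslog** (or an equivalent drop-in file) for the **/var/log/HSAK/run.log** path configured above:

```conf
# Illustrative logrotate rule for the redirected HSAK log;
# size and rotate values are assumptions, tune them per deployment.
/var/log/HSAK/run.log {
    size 5M        # rotate once the log exceeds 5 MB
    rotate 4       # keep four rotated copies
    compress       # gzip rotated copies
    missingok      # do not fail if the log is absent
    notifempty     # skip rotation when the log is empty
}
```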
diff --git a/docs/en/docs/HSAK/introduce_hsak.md b/docs/en/Server/MemoryandStorage/HSAK/hsak-developer-guide.md similarity index 89% rename from docs/en/docs/HSAK/introduce_hsak.md rename to docs/en/Server/MemoryandStorage/HSAK/hsak-developer-guide.md index 031efb9c3fd3b9658c100ef6864970b77c80b073..3aa9395bc75c6e693f79546e17b4afb67da59159 100644 --- a/docs/en/docs/HSAK/introduce_hsak.md +++ b/docs/en/Server/MemoryandStorage/HSAK/hsak-developer-guide.md @@ -13,23 +13,23 @@ The HSAK user-mode I/O engine is developed based on the open-source SPDK. 1. Download the HSAK source code. - $ git clone + ```shell + git clone + ``` 2. Install the compilation and running dependencies. - The compilation and running of HSAK depend on components such as Storage Performance Development Kit (SPDK), Data Plane Development Kit (DPDK), and libboundscheck. + The compilation and running of HSAK depend on components such as Storage Performance Development Kit (SPDK), Data Plane Development Kit (DPDK), and libboundscheck. 3. Start the compilation. - $ cd hsak - - $ mkdir build - - $ cd build - - $ cmake .. - - $ make + ```shell + cd hsak + mkdir build + cd build + cmake .. + make + ``` ## Precautions diff --git a/docs/en/Server/MemoryandStorage/HSAK/hsak_c_apis.md b/docs/en/Server/MemoryandStorage/HSAK/hsak_c_apis.md new file mode 100644 index 0000000000000000000000000000000000000000..8cd265d145487b43a2fa5cc6ab3b068236471df2 --- /dev/null +++ b/docs/en/Server/MemoryandStorage/HSAK/hsak_c_apis.md @@ -0,0 +1,2521 @@ +# C APIs + +## Macro Definition and Enumeration + +### bdev_rw.h + +#### enum libstorage_ns_lba_size + +1. Prototype + + ```c + enum libstorage_ns_lba_size + { + LIBSTORAGE_NVME_NS_LBA_SIZE_512 = 0x9, + LIBSTORAGE_NVME_NS_LBA_SIZE_4K = 0xc + }; + ``` + +2. Description + + Sector (data) size of a drive. + +#### enum libstorage_ns_md_size + +1. 
Prototype + + ```c + enum libstorage_ns_md_size + { + LIBSTORAGE_METADATA_SIZE_0 = 0, + LIBSTORAGE_METADATA_SIZE_8 = 8, + LIBSTORAGE_METADATA_SIZE_64 = 64 + }; + ``` + +2. Description + + Metadata size of a drive. + +3. Remarks + + - ES3000 V3 (single-port) supports formatting of five sector types (512+0, 512+8, 4K+64, 4K, and 4K+8). + + - ES3000 V3 (dual-port) supports formatting of four sector types (512+0, 512+8, 4K+64, and 4K). + + - ES3000 V5 supports formatting of five sector types (512+0, 512+8, 4K+64, 4K, and 4K+8). + + - Optane drives support formatting of seven sector types (512+0, 512+8, 512+16,4K, 4K+8, 4K+64, and 4K+128). + +#### enum libstorage_ns_pi_type + +1. Prototype + + ```c + enum libstorage_ns_pi_type + { + LIBSTORAGE_FMT_NVM_PROTECTION_DISABLE = 0x0, + LIBSTORAGE_FMT_NVM_PROTECTION_TYPE1 = 0x1, + LIBSTORAGE_FMT_NVM_PROTECTION_TYPE2 = 0x2, + LIBSTORAGE_FMT_NVM_PROTECTION_TYPE3 = 0x3, + }; + ``` + +2. Description + + Protection type supported by drives. + +3. Remarks + + ES3000 supports only protection types 0 and 3. Optane drives support only protection types 0 and 1. + +#### enum libstorage_crc_and_prchk + +1. Prototype + + ```c + enum libstorage_crc_and_prchk + { + LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK = 0x0, + LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK = 0x1, + LIBSTORAGE_LIB_CRC_AND_DISABLE_PRCHK = 0x2, + LIBSTORAGE_LIB_CRC_AND_ENABLE_PRCHK = 0x3, + #define NVME_NO_REF 0x4 + LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK_NO_REF = LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK | NVME_NO_REF, + LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK_NO_REF = LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK | NVME_NO_REF, + }; + ``` + +2. Description + + - **LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK**: Cyclic redundancy check (CRC) is performed for the application layer, but not for HSAK. CRC is disabled for drives. + + - **LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK**: CRC is performed for the application layer, but not for HSAK. CRC is enabled for drives. 
+
+    - **LIBSTORAGE_LIB_CRC_AND_DISABLE_PRCHK**: CRC is performed for HSAK, but not for the application layer. CRC is disabled for drives.
+
+    - **LIBSTORAGE_LIB_CRC_AND_ENABLE_PRCHK**: CRC is performed for HSAK, but not for the application layer. CRC is enabled for drives.
+
+    - **LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK_NO_REF**: CRC is performed for the application layer, but not for HSAK. CRC is disabled for drives. REF tag verification is disabled for drives whose PI TYPE is 1 (Intel Optane P4800).
+
+    - **LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK_NO_REF**: CRC is performed for the application layer, but not for HSAK. CRC is enabled for drives. REF tag verification is disabled for drives whose PI TYPE is 1 (Intel Optane P4800).
+
+    - If PI TYPE of an Intel Optane P4800 drive is 1, the CRC and REF tag of the metadata area are verified by default.
+
+    - Intel Optane P4800 drives support DIF in 512+8 format but do not support DIF in 4096+64 format.
+
+    - For ES3000 V3 and ES3000 V5, PI TYPE of the drives is 3. By default, only the CRC of the metadata area is verified.
+
+    - ES3000 V3 supports DIF in 512+8 format but does not support DIF in 4096+64 format. ES3000 V5 supports DIF in both 512+8 and 4096+64 formats.
+
+    The summary is as follows:
+
+    | E2E Verification Mode | Ctrl Flag | CRC Generator | Write: Application Verification | Write: CRC for HSAK | Write: CRC for Drives | Read: Application Verification | Read: CRC for HSAK | Read: CRC for Drives |
+    | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+    | Halfway protection | 0 | Controller | X | X | X | X | X | X |
+    | Halfway protection | 1 | Controller | X | X | X | X | X | √ |
+    | Halfway protection | 2 | Controller | X | X | X | X | X | X |
+    | Halfway protection | 3 | Controller | X | X | X | X | X | √ |
+    | Full protection | 0 | App | √ | X | X | √ | X | X |
+    | Full protection | 1 | App | √ | X | √ | √ | X | √ |
+    | Full protection | 2 | HSAK | X | √ | X | X | √ | X |
+    | Full protection | 3 | HSAK | X | √ | √ | X | √ | √ |
+ +#### enum libstorage_print_log_level + +1. Prototype + + ```c + enum libstorage_print_log_level + { + LIBSTORAGE_PRINT_LOG_ERROR, + LIBSTORAGE_PRINT_LOG_WARN, + LIBSTORAGE_PRINT_LOG_NOTICE, + LIBSTORAGE_PRINT_LOG_INFO, + LIBSTORAGE_PRINT_LOG_DEBUG, + }; + ``` + +2. Description + + Storage Performance Development Kit (SPDK) log print levels: ERROR, WARN, NOTICE, INFO, and DEBUG, corresponding to 0 to 4 in the configuration file. + +#### MAX_BDEV_NAME_LEN + +1. Prototype + + ```c + #define MAX_BDEV_NAME_LEN 24 + ``` + +2. Description + + Maximum length of a block device name. + +#### MAX_CTRL_NAME_LEN + +1. Prototype + + ```c + #define MAX_CTRL_NAME_LEN 16 + ``` + +2. Description + + Maximum length of a controller. + +#### LBA_FORMAT_NUM + +1. Prototype + + ```c + #define LBA_FORMAT_NUM 16 + ``` + +2. Description + + Number of LBA formats supported by a controller. + +#### LIBSTORAGE_MAX_DSM_RANGE_DESC_COUNT + +1. Prototype + + ```c + #define LIBSTORAGE_MAX_DSM_RANGE_DESC_COUNT 256 + ``` + +2. Description + + Maximum number of 16-byte sets in the dataset management command. + +### ublock.h + +#### UBLOCK_NVME_UEVENT_SUBSYSTEM_UIO + +1. Prototype + + ```c + #define UBLOCK_NVME_UEVENT_SUBSYSTEM_UIO 1 + ``` + +2. Description + + This macro is used to define that the subsystem corresponding to the uevent event is the userspace I/O subsystem (UIO) provided by the kernel. When the service receives the uevent event, this macro is used to determine whether the event is a UIO event that needs to be processed. + + The value of the int subsystem member in struct ublock_uevent is **UBLOCK_NVME_UEVENT_SUBSYSTEM_UIO**. Currently, only this value is available. + +#### UBLOCK_TRADDR_MAX_LEN + +1. Prototype + + ```c + #define UBLOCK_TRADDR_MAX_LEN 256 + ``` + +2. Description + + The *Domain:Bus:Device.Function* (**%04x:%02x:%02x.%x**) format indicates the maximum length of the PCI address character string. The actual length is far less than 256 bytes. 
+ +#### UBLOCK_PCI_ADDR_MAX_LEN + +1. Prototype + + ```c + #define UBLOCK_PCI_ADDR_MAX_LEN 256 + ``` + +2. Description + + Maximum length of the PCI address character string. The actual length is far less than 256 bytes. The possible formats of the PCI address are as follows: + + - Full address: **%x:%x:%x.%x** or **%x.%x.%x.%x** + + - When the **Function** value is **0**: **%x:%x:%x** + + - When the **Domain** value is **0**: **%x:%x.%x** or **%x.%x.%x** + + - When the **Domain** and **Function** values are **0**: **%x:%x** or **%x.%x** + +#### UBLOCK_SMART_INFO_LEN + +1. Prototype + + ```c + #define UBLOCK_SMART_INFO_LEN 512 + ``` + +2. Description + + Size of the structure for the S.M.A.R.T. information of an NVMe drive, which is 512 bytes. + +#### enum ublock_rpc_server_status + +1. Prototype + + ```c + enum ublock_rpc_server_status { + // start rpc server or not + UBLOCK_RPC_SERVER_DISABLE = 0, + UBLOCK_RPC_SERVER_ENABLE = 1, + }; + ``` + +2. Description + + Status of the RPC service in HSAK. The status can be enabled or disabled. + +#### enum ublock_nvme_uevent_action + +1. Prototype + + ```c + enum ublock_nvme_uevent_action { + UBLOCK_NVME_UEVENT_ADD = 0, + UBLOCK_NVME_UEVENT_REMOVE = 1, + UBLOCK_NVME_UEVENT_INVALID, + }; + ``` + +2. Description + + Indicates whether the uevent hot swap event is to insert or remove a drive. + +#### enum ublock_subsystem_type + +1. Prototype + + ```c + enum ublock_subsystem_type { + SUBSYSTEM_UIO = 0, + SUBSYSTEM_NVME = 1, + SUBSYSTEM_TOP + }; + ``` + +2. Description + + Type of the callback function, which is used to determine whether the callback function is registered for the UIO driver or kernel NVMe driver. + +## Data Structure + +### bdev_rw.h + +#### struct libstorage_namespace_info + +1. 
Prototype + + ```c + struct libstorage_namespace_info + { + char name[MAX_BDEV_NAME_LEN]; + uint64_t size; /** namespace size in bytes */ + uint64_t sectors; /** number of sectors */ + uint32_t sector_size; /** sector size in bytes */ + uint32_t md_size; /** metadata size in bytes */ + uint32_t max_io_xfer_size; /** maximum i/o size in bytes */ + uint16_t id; /** namespace id */ + uint8_t pi_type; /** end-to-end data protection information type */ + uint8_t is_active :1; /** namespace is active or not */ + uint8_t ext_lba :1; /** namespace support extending LBA size or not */ + uint8_t dsm :1; /** namespace supports Dataset Management or not */ + uint8_t pad :3; + uint64_t reserved; + }; + ``` + +2. Description + + This data structure contains the namespace information of a drive. + +3. Struct members + + | Member | Description | + | ---------------------------- | ------------------------------------------------------------ | + | char name\[MAX_BDEV_NAME_LEN] | Name of the namespace. | + | uint64_t size | Size of the drive space allocated to the namespace, in bytes. | + | uint64_t sectors | Number of sectors. | + | uint32_t sector_size | Size of each sector, in bytes. | + | uint32_t md_size | Metadata size, in bytes. | + | uint32_t max_io_xfer_size | Maximum size of data in a single I/O operation, in bytes. | + | uint16_t id | Namespace ID. | + | uint8_t pi_type | Data protection type. The value is obtained from enum libstorage_ns_pi_type. | + | uint8_t is_active :1 | Namespace active or not. | + | uint8_t ext_lba :1 | Whether the namespace supports logical block addressing (LBA) in extended mode. | + | uint8_t dsm :1 | Whether the namespace supports dataset management. | + | uint8_t pad :3 | Reserved parameter. | + | uint64_t reserved | Reserved parameter. | + +#### struct libstorage_nvme_ctrlr_info + +1. 
Prototype + + ```c + struct libstorage_nvme_ctrlr_info { + char name[MAX_CTRL_NAME_LEN]; + char address[24]; + struct { + uint32_t domain; + uint8_t bus; + uint8_t dev; + uint8_t func; + } pci_addr; + uint64_t totalcap; /* Total NVM Capacity in bytes */ + uint64_t unusecap; /* Unallocated NVM Capacity in bytes */ + int8_t sn[20]; /* Serial number */ + uint8_t fr[8]; /* Firmware revision */ + uint32_t max_num_ns; /* Number of namespaces */ + uint32_t version; + uint16_t num_io_queues; /* num of io queues */ + uint16_t io_queue_size; /* io queue size */ + uint16_t ctrlid; /* Controller id */ + uint16_t pad1; + struct { + struct { + uint32_t ms : 16; /* metadata size */ + uint32_t lbads : 8; /* lba data size */ + uint32_t reserved : 8; + } lbaf[LBA_FORMAT_NUM]; + uint8_t nlbaf; + uint8_t pad2[3]; + uint32_t cur_format : 4; + uint32_t cur_extended : 1; + uint32_t cur_pi : 3; + uint32_t cur_pil : 1; + uint32_t cur_can_share : 1; + uint32_t mc_extented : 1; + uint32_t mc_pointer : 1; + uint32_t pi_type1 : 1; + uint32_t pi_type2 : 1; + uint32_t pi_type3 : 1; + uint32_t md_start : 1; + uint32_t md_end : 1; + uint32_t ns_manage : 1; /* Supports the Namespace Management and Namespace Attachment commands */ + uint32_t directives : 1; /* Controller support Directives or not */ + uint32_t streams : 1; /* Controller support Streams Directives or not */ + uint32_t dsm : 1; /* Controller support Dataset Management or not */ + uint32_t reserved : 11; + } cap_info; + }; + ``` + +2. Description + + This data structure contains the controller information of a drive. + +3. Struct members + + | Member | Description | + | ------------------------------------------------------------ | ------------------------------------------------------------ | + | char name\[MAX_CTRL_NAME_LEN] | Controller name. | + | char address\[24] | PCI address, which is a character string. | + | struct
{
uint32_t domain;
uint8_t bus;
uint8_t dev;
uint8_t func;
} pci_addr | PCI address, in segments. | + | uint64_t totalcap | Total capacity of the controller, in bytes. Optane drives are based on the NVMe 1.0 protocol and do not support this parameter. | + | uint64_t unusecap | Free capacity of the controller, in bytes. Optane drives are based on the NVMe 1.0 protocol and do not support this parameter. | + | int8_t sn\[20]; | Serial number of a drive, which is an ASCII character string without **0**. | + | uint8_t fr\[8]; | Drive firmware version, which is an ASCII character string without **0**. | + | uint32_t max_num_ns | Maximum number of namespaces. | + | uint32_t version | NVMe protocol version supported by the controller. | + | uint16_t num_io_queues | Number of I/O queues supported by a drive. | + | uint16_t io_queue_size | Maximum length of an I/O queue. | + | uint16_t ctrlid | Controller ID. | + | uint16_t pad1 | Reserved parameter. | + + Members of the struct cap_info substructure: + + | Member | Description | + | ------------------------------------------------------------ | ------------------------------------------------------------ | + | struct
{
uint32_t ms : 16;
uint32_t lbads : 8;
uint32_t reserved : 8;
}lbaf\[LBA_FORMAT_NUM] | **ms**: metadata size. The minimum value is 8 bytes.
**lbads**: The LBA size is 2^lbads, and the value of **lbads** is greater than or equal to 9. | + | uint8_t nlbaf | Number of LBA formats supported by the controller. | + | uint8_t pad2\[3] | Reserved parameter. | + | uint32_t cur_format : 4 | Current LBA format of the controller. | + | uint32_t cur_extended : 1 | Whether the controller supports LBA in extended mode. | + | uint32_t cur_pi : 3 | Current protection type of the controller. | + | uint32_t cur_pil : 1 | The current protection information (PI) of the controller is located in the first or last eight bytes of the metadata. | + | uint32_t cur_can_share : 1 | Whether the namespace supports multi-path transmission. | + | uint32_t mc_extented : 1 | Whether metadata is transmitted as part of the data buffer. | + | uint32_t mc_pointer : 1 | Whether metadata is separated from the data buffer. | + | uint32_t pi_type1 : 1 | Whether the controller supports protection type 1. | + | uint32_t pi_type2 : 1 | Whether the controller supports protection type 2. | + | uint32_t pi_type3 : 1 | Whether the controller supports protection type 3. | + | uint32_t md_start : 1 | Whether the controller supports protection information in the first eight bytes of metadata. | + | uint32_t md_end : 1 | Whether the controller supports protection information in the last eight bytes of metadata. | + | uint32_t ns_manage : 1 | Whether the controller supports namespace management. | + | uint32_t directives : 1 | Whether the Directives command set is supported. | + | uint32_t streams : 1 | Whether Streams Directives is supported. | + | uint32_t dsm : 1 | Whether Dataset Management commands are supported. | + | uint32_t reserved : 11 | Reserved parameter. | + +#### struct libstorage_dsm_range_desc + +1. Prototype + + ```c + struct libstorage_dsm_range_desc + { + /* RESERVED */ + uint32_t reserved; + + /* NUMBER OF LOGICAL BLOCKS */ + uint32_t block_count; + + /* UNMAP LOGICAL BLOCK ADDRESS */uint64_t lba;}; + ``` + +2. 
Description + + Definition of a single 16-byte set in the data management command set. + +3. Struct members + + | Member | Description | + | -------------------- | ------------------------ | + | uint32_t reserved | Reserved parameter. | + | uint32_t block_count | Number of LBAs per unit. | + | uint64_t lba | Start LBA. | + +#### struct libstorage_ctrl_streams_param + +1. Prototype + + ```c + struct libstorage_ctrl_streams_param + { + /* MAX Streams Limit */ + uint16_t msl; + + /* NVM Subsystem Streams Available */ + uint16_t nssa; + + /* NVM Subsystem Streams Open */uint16_t nsso; + + uint16_t pad; + }; + ``` + +2. Description + + Streams attribute value supported by NVMe drives. + +3. Struct members + + | Member | Description | + | ------------- | ------------------------------------------------------------ | + | uint16_t msl | Maximum number of Streams resources supported by a drive. | + | uint16_t nssa | Number of Streams resources that can be used by each NVM subsystem. | + | uint16_t nsso | Number of Streams resources used by each NVM subsystem. | + | uint16_t pad | Reserved parameter. | + +#### struct libstorage_bdev_streams_param + +1. Prototype + + ```c + struct libstorage_bdev_streams_param + { + /* Stream Write Size */ + uint32_t sws; + + /* Stream Granularity Size */ + uint16_t sgs; + + /* Namespace Streams Allocated */ + uint16_t nsa; + + /* Namespace Streams Open */ + uint16_t nso; + + uint16_t reserved[3]; + }; + ``` + +2. Description + + Streams attribute value of the namespace. + +3. Struct members + + | Member | Description | + | -------------------- | ------------------------------------------------------------ | + | uint32_t sws | Write granularity with the optimal performance, in sectors. | + | uint16_t sgs | Write granularity allocated to Streams, in sws. | + | uint16_t nsa | Number of private Streams resources that can be used by a namespace. | + | uint16_t nso | Number of private Streams resources used by a namespace. 
|
+    | uint16_t reserved\[3] | Reserved parameter. |
+
+#### struct libstorage_mgr_info
+
+1. Prototype
+
+    ```c
+    struct libstorage_mgr_info
+    {
+        char pci[24];
+        char ctrlName[MAX_CTRL_NAME_LEN];
+        uint64_t sector_size;
+        uint64_t cap_size;
+        uint16_t device_id;
+        uint16_t subsystem_device_id;
+        uint16_t vendor_id;
+        uint16_t subsystem_vendor_id;
+        uint16_t controller_id;
+        int8_t serial_number[20];
+        int8_t model_number[40];
+        uint8_t firmware_revision[8];
+    };
+    ```
+
+2. Description
+
+    Drive management information (consistent with the drive information used by the management plane).
+
+3. Struct members
+
+    | Member | Description |
+    | -------------------------------- | ---------------------------------------------- |
+    | char pci\[24] | Character string of the drive PCI address. |
+    | char ctrlName\[MAX_CTRL_NAME_LEN] | Character string of the drive controller name. |
+    | uint64_t sector_size | Drive sector size. |
+    | uint64_t cap_size | Drive capacity, in bytes. |
+    | uint16_t device_id | Drive device ID. |
+    | uint16_t subsystem_device_id | Drive subsystem device ID. |
+    | uint16_t vendor_id | Drive vendor ID. |
+    | uint16_t subsystem_vendor_id | Drive subsystem vendor ID. |
+    | uint16_t controller_id | Drive controller ID. |
+    | int8_t serial_number\[20] | Drive serial number. |
+    | int8_t model_number\[40] | Device model. |
+    | uint8_t firmware_revision\[8] | Firmware version. |
+
+#### struct \_\_attribute\_\_((packed)) libstorage_smart_info
+
+1.
Prototype + + ```c + /* same with struct spdk_nvme_health_information_page in nvme_spec.h */ + struct __attribute__((packed)) libstorage_smart_info { + /* details of uint8_t critical_warning + * + * union spdk_nvme_critical_warning_state { + * uint8_t raw; + * struct { + * uint8_t available_spare : 1; + * uint8_t temperature : 1; + * uint8_t device_reliability : 1; + * uint8_t read_only : 1; + * uint8_t volatile_memory_backup : 1; + * uint8_t reserved : 3; + * } bits; + * }; + */ + uint8_t critical_warning; + uint16_t temperature; + uint8_t available_spare; + uint8_t available_spare_threshold; + uint8_t percentage_used; + uint8_t reserved[26]; + + /* + * Note that the following are 128-bit values, but are + * defined as an array of 2 64-bit values. + */ + /* Data Units Read is always in 512-byte units. */ + uint64_t data_units_read[2]; + /* Data Units Written is always in 512-byte units. */ + uint64_t data_units_written[2]; + /* For NVM command set, this includes Compare commands. */ + uint64_t host_read_commands[2]; + uint64_t host_write_commands[2]; + /* Controller Busy Time is reported in minutes. */ + uint64_t controller_busy_time[2]; + uint64_t power_cycles[2]; + uint64_t power_on_hours[2]; + uint64_t unsafe_shutdowns[2]; + uint64_t media_errors[2]; + uint64_t num_error_info_log_entries[2]; + + /* Controller temperature related. */ + uint32_t warning_temp_time; + uint32_t critical_temp_time; + uint16_t temp_sensor[8]; + uint8_t reserved2[296]; + }; + + ``` + +2. Description + + This data structure defines the S.M.A.R.T. information of a drive. + +3. Struct members + + | Member | **Description (For details, see the NVMe protocol.)** | + | -------------------------------------- | ------------------------------------------------------------ | + | uint8_t critical_warning | Critical alarm of the controller status. If a bit is set to 1, the bit is valid. You can set multiple bits to be valid. Critical alarms are returned to the host through asynchronous events.
Bit 0: When this bit is set to 1, the redundant space is less than the specified threshold.
Bit 1: When this bit is set to 1, the temperature is above an over-temperature threshold or below an under-temperature threshold.
Bit 2: When this bit is set to 1, component reliability is reduced due to major media errors or internal errors.
Bit 3: When this bit is set to 1, the medium has been set to the read-only mode.
Bit 4: When this bit is set to 1, the volatile component of the controller fails. This parameter is valid only when the volatile component exists in the controller.
Bits 5-7: reserved. | + | uint16_t temperature | Temperature of a component. The unit is Kelvin. | + | uint8_t available_spare | Percentage of the available redundant space (0 to 100%). | + | uint8_t available_spare_threshold | Threshold of the available redundant space. An asynchronous event is reported when the available redundant space is lower than the threshold. | + | uint8_t percentage_used | Percentage of the actual service life of a component to the service life of the component expected by the manufacturer. The value **100** indicates that the actual service life of the component has reached the expected service life, but the component can still be used. The value can be greater than 100, but any value greater than 254 will be set to 255. | + | uint8_t reserved\[26] | Reserved. | + | uint64_t data_units_read\[2] | Number of 512-byte units read by the host from the controller. The value **1** indicates that 1000 x 512 bytes are read, which exclude metadata. If the LBA size is not 512 bytes, the controller converts it into 512 bytes for calculation. The value is expressed in hexadecimal notation. | + | uint64_t data_units_written\[2] | Number of 512-byte units written by the host to the controller. The value **1** indicates that 1000 x 512 bytes are written, which exclude metadata. If the LBA size is not 512 bytes, the controller converts it into 512 bytes for calculation. The value is expressed in hexadecimal notation. | + | uint64_t host_read_commands\[2] | Number of read commands delivered to the controller. | + | uint64_t host_write_commands\[2] | Number of write commands delivered to the controller. | + | uint64_t controller_busy_time\[2] | Busy time for the controller to process I/O commands. The controller is busy from the time a command is delivered to the time the result is returned to the CQ. The time is expressed in minutes. | + | uint64_t power_cycles\[2] | Number of machine on/off cycles. 
| + | uint64_t power_on_hours\[2] | Power-on duration, in hours. | + | uint64_t unsafe_shutdowns\[2] | Number of abnormal power-off times. The value is incremented by 1 when CC.SHN is not received during power-off. | + | uint64_t media_errors\[2] | Number of times that the controller detects unrecoverable data integrity errors, including uncorrectable ECC errors, CRC errors, and LBA tag mismatch. | + | uint64_t num_error_info_log_entries\[2] | Number of entries in the error information log within the controller lifecycle. | + | uint32_t warning_temp_time | Accumulated time when the temperature exceeds the warning alarm threshold, in minutes. | + | uint32_t critical_temp_time | Accumulated time when the temperature exceeds the critical alarm threshold, in minutes. | + | uint16_t temp_sensor\[8] | Temperature of temperature sensors 1-8. The unit is Kelvin. | + | uint8_t reserved2\[296] | Reserved. | + +#### libstorage_dpdk_contig_mem + +1. Prototype + + ```c + struct libstorage_dpdk_contig_mem { + uint64_t virtAddr; + uint64_t memLen; + uint64_t allocLen; + }; + ``` + +2. Description + + Description about a contiguous virtual memory segment in the parameters of the callback function that notifies the service layer of initialization completion after the DPDK memory is initialized. + + Currently, 800 MB memory is reserved for HSAK. Other memory is returned to the service layer through **allocLen** in this struct for the service layer to allocate memory for self-management. + + The total memory to be reserved for HSAK is about 800 MB. The memory reserved for each memory segment is calculated based on the number of NUMA nodes in the environment. When there are too many NUMA nodes, the memory reserved on each memory segment is too small. As a result, HSAK initialization fails. Therefore, HSAK supports only the environment with a maximum of four NUMA nodes. + +3. 
Struct members + + | Member | Description | + | ----------------- | -------------------------------------------------------- | + | uint64_t virtAddr | Start address of the virtual memory. | + | uint64_t memLen | Length of the virtual memory, in bytes. | + | uint64_t allocLen | Available memory length in the memory segment, in bytes. | + +#### struct libstorage_dpdk_init_notify_arg + +1. Prototype + + ```c + struct libstorage_dpdk_init_notify_arg { + uint64_t baseAddr; + uint16_t memsegCount; + struct libstorage_dpdk_contig_mem *memseg; + }; + ``` + +2. Description + + Callback function parameter used to notify the service layer of initialization completion after DPDK memory initialization, indicating information about all virtual memory segments. + +3. Struct members + + | Member | Description | + | ----------------------------------------- | ------------------------------------------------------------ | + | uint64_t baseAddr | Start address of the virtual memory. | + | uint16_t memsegCount | Number of valid **memseg** array members, that is, the number of contiguous virtual memory segments. | + | struct libstorage_dpdk_contig_mem *memseg | Pointer to the memory segment array. Each array element is a contiguous virtual memory segment, and adjacent elements are not contiguous with each other. | + +#### struct libstorage_dpdk_init_notify + +1. Prototype + + ```c + struct libstorage_dpdk_init_notify { + const char *name; + void (*notifyFunc)(const struct libstorage_dpdk_init_notify_arg *arg); + TAILQ_ENTRY(libstorage_dpdk_init_notify) tailq; + }; + ``` + +2. Description + + Struct used to notify the service layer of the callback function registration after the DPDK memory is initialized. + +3. Struct members + + | Member | Description | + | ------------------------------------------------------------ | ------------------------------------------------------------ | + | const char *name | Name of the service-layer module of the registered callback function. 
| + | void (*notifyFunc)(const struct libstorage_dpdk_init_notify_arg *arg) | Callback function that notifies the service layer of initialization completion after the DPDK memory is initialized. | + | TAILQ_ENTRY(libstorage_dpdk_init_notify) tailq | Linked list that stores registered callback functions. | + +### ublock.h + +#### struct ublock_bdev_info + +1. Prototype + + ```c + struct ublock_bdev_info { + uint64_t sector_size; + uint64_t cap_size; // total capacity, in bytes + uint16_t device_id; + uint16_t subsystem_device_id; // subsystem device ID of the NVMe controller + uint16_t vendor_id; + uint16_t subsystem_vendor_id; + uint16_t controller_id; + int8_t serial_number[20]; + int8_t model_number[40]; + int8_t firmware_revision[8]; + }; + ``` + +2. Description + + This data structure contains the device information of a drive. + +3. Struct members + + | Member | Description | + | ---------------------------- | ----------------------------------------------- | + | uint64_t sector_size | Sector size of a drive, for example, 512 bytes. | + | uint64_t cap_size | Total drive capacity, in bytes. | + | uint16_t device_id | Device ID. | + | uint16_t subsystem_device_id | Device ID of a subsystem. | + | uint16_t vendor_id | Main ID of the device vendor. | + | uint16_t subsystem_vendor_id | Sub-ID of the device vendor. | + | uint16_t controller_id | ID of the device controller. | + | int8_t serial_number\[20] | Device serial number. | + | int8_t model_number\[40] | Device model. | + | int8_t firmware_revision\[8] | Firmware version. | + +#### struct ublock_bdev + +1. Prototype + + ```c + struct ublock_bdev { + char pci[UBLOCK_PCI_ADDR_MAX_LEN]; + struct ublock_bdev_info info; + struct spdk_nvme_ctrlr *ctrlr; + TAILQ_ENTRY(ublock_bdev) link; + }; + ``` + +2. Description + + The data structure contains the drive information of the specified PCI address, and the structure itself is a node of the queue. + +3. 
Struct members + + | Member | Description | + | --------------------------------- | ------------------------------------------------------------ | + | char pci\[UBLOCK_PCI_ADDR_MAX_LEN] | PCI address. | + | struct ublock_bdev_info info | Drive information. | + | struct spdk_nvme_ctrlr *ctrlr | Data structure of the device controller. The members in this structure are not open to external systems. External services can obtain the corresponding member data through the SPDK open source interface. | + | TAILQ_ENTRY(ublock_bdev) link | Structure of the pointers before and after a queue. | + +#### struct ublock_bdev_mgr + +1. Prototype + + ```c + struct ublock_bdev_mgr { + TAILQ_HEAD(, ublock_bdev) bdevs; + }; + ``` + +2. Description + + This data structure defines the header structure of a ublock_bdev queue. + +3. Struct members + + | Member | Description | + | ------------------------------- | ----------------------- | + | TAILQ_HEAD(, ublock_bdev) bdevs | Queue header structure. | + +#### struct \_\_attribute\_\_((packed)) ublock_SMART_info + +1. Prototype + + ```c + struct __attribute__((packed)) ublock_SMART_info { + uint8_t critical_warning; + uint16_t temperature; + uint8_t available_spare; + uint8_t available_spare_threshold; + uint8_t percentage_used; + uint8_t reserved[26]; + /* + * Note that the following are 128-bit values, but are + * defined as an array of 2 64-bit values. + */ + /* Data Units Read is always in 512-byte units. */ + uint64_t data_units_read[2]; + /* Data Units Written is always in 512-byte units. */ + uint64_t data_units_written[2]; + /* For NVM command set, this includes Compare commands. */ + uint64_t host_read_commands[2]; + uint64_t host_write_commands[2]; + /* Controller Busy Time is reported in minutes. */ + uint64_t controller_busy_time[2]; + uint64_t power_cycles[2]; + uint64_t power_on_hours[2]; + uint64_t unsafe_shutdowns[2]; + uint64_t media_errors[2]; + uint64_t num_error_info_log_entries[2]; + /* Controller temperature related. 
*/ + uint32_t warning_temp_time; + uint32_t critical_temp_time; + uint16_t temp_sensor[8]; + uint8_t reserved2[296]; + }; + ``` + +2. Description + + This data structure defines the S.M.A.R.T. information of a drive. + +3. Struct members + + | Member | Description (For details, see the NVMe protocol.) | + | -------------------------------------- | ------------------------------------------------------------ | + | uint8_t critical_warning | Critical alarm of the controller status. If a bit is set to 1, the bit is valid. You can set multiple bits to be valid. Critical alarms are returned to the host through asynchronous events.
Bit 0: When this bit is set to 1, the redundant space is less than the specified threshold.
Bit 1: When this bit is set to 1, the temperature is above an over-temperature threshold or below an under-temperature threshold.
Bit 2: When this bit is set to 1, component reliability is reduced due to major media errors or internal errors.
Bit 3: When this bit is set to 1, the medium has been set to the read-only mode.
Bit 4: When this bit is set to 1, the volatile component of the controller fails. This parameter is valid only when the volatile component exists in the controller.
Bits 5-7: reserved. | + | uint16_t temperature | Temperature of a component. The unit is Kelvin. | + | uint8_t available_spare | Percentage of the available redundant space (0 to 100%). | + | uint8_t available_spare_threshold | Threshold of the available redundant space. An asynchronous event is reported when the available redundant space is lower than the threshold. | + | uint8_t percentage_used | Percentage of the actual service life of a component to the service life of the component expected by the manufacturer. The value **100** indicates that the actual service life of the component has reached the expected service life, but the component can still be used. The value can be greater than 100, but any value greater than 254 will be set to 255. | + | uint8_t reserved\[26] | Reserved. | + | uint64_t data_units_read\[2] | Number of 512-byte units read by the host from the controller. The value **1** indicates that 1000 x 512 bytes are read, which exclude metadata. If the LBA size is not 512 bytes, the controller converts it into 512 bytes for calculation. The value is expressed in hexadecimal notation. | + | uint64_t data_units_written\[2] | Number of 512-byte units written by the host to the controller. The value **1** indicates that 1000 x 512 bytes are written, which exclude metadata. If the LBA size is not 512 bytes, the controller converts it into 512 bytes for calculation. The value is expressed in hexadecimal notation. | + | uint64_t host_read_commands\[2] | Number of read commands delivered to the controller. | + | uint64_t host_write_commands\[2] | Number of write commands delivered to the controller. | + | uint64_t controller_busy_time\[2] | Busy time for the controller to process I/O commands. The controller is busy from the time a command is delivered to the time the result is returned to the CQ. The value is expressed in minutes. | + | uint64_t power_cycles\[2] | Number of machine on/off cycles. 
| + | uint64_t power_on_hours\[2] | Power-on duration, in hours. | + | uint64_t unsafe_shutdowns\[2] | Number of abnormal power-off times. The value is incremented by 1 when CC.SHN is not received during power-off. | + | uint64_t media_errors\[2] | Number of unrecoverable data integrity errors detected by the controller, including uncorrectable ECC errors, CRC errors, and LBA tag mismatch. | + | uint64_t num_error_info_log_entries\[2] | Number of entries in the error information log within the controller lifecycle. | + | uint32_t warning_temp_time | Accumulated time when the temperature exceeds the warning alarm threshold, in minutes. | + | uint32_t critical_temp_time | Accumulated time when the temperature exceeds the critical alarm threshold, in minutes. | + | uint16_t temp_sensor\[8] | Temperature of temperature sensors 1-8. The unit is Kelvin. | + | uint8_t reserved2\[296] | Reserved. | + +#### struct ublock_nvme_error_info + +1. Prototype + + ```c + struct ublock_nvme_error_info { + uint64_t error_count; + uint16_t sqid; + uint16_t cid; + uint16_t status; + uint16_t error_location; + uint64_t lba; + uint32_t nsid; + uint8_t vendor_specific; + uint8_t reserved[35]; + }; + ``` + +2. Description + + This data structure contains the content of a single error message in the device controller. The number of errors supported by different controllers may vary. + +3. Struct members + + | Member | Description (For details, see the NVMe protocol.) | + | ----------------------- | ------------------------------------------------------------ | + | uint64_t error_count | Error sequence number, which increases in ascending order. | + | uint16_t sqid | Submission queue identifier for the command associated with an error message. If an error cannot be associated with a specific command, this parameter should be set to **FFFFh**. | + | uint16_t cid | Command identifier associated with an error message. 
If an error cannot be associated with a specific command, this parameter should be set to **FFFFh**. | + | uint16_t status | Status of a completed command. | + | uint16_t error_location | Command parameter associated with an error message. | + | uint64_t lba | First LBA when an error occurs. | + | uint32_t nsid | Namespace where an error occurs. | + | uint8_t vendor_specific | Log page identifier associated with the page if other vendor-specific error messages are available. The value **00h** indicates that no additional information is available. The valid value ranges from 80h to FFh. | + | uint8_t reserved\[35] | Reserved. | + +#### struct ublock_uevent + +1. Prototype + + ```c + struct ublock_uevent { + enum ublock_nvme_uevent_action action; + int subsystem; + char traddr[UBLOCK_TRADDR_MAX_LEN + 1]; + }; + ``` + +2. Description + + This data structure contains parameters related to the uevent event. + +3. Struct members + + | Member | Description | + | -------------------------------------- | ------------------------------------------------------------ | + | enum ublock_nvme_uevent_action action | Enumeration that indicates whether the uevent event is a drive insertion or removal. | + | int subsystem | Subsystem type of the uevent event. Currently, only **UBLOCK_NVME_UEVENT_SUBSYSTEM_UIO** is supported. If the application receives other values, no processing is required. | + | char traddr\[UBLOCK_TRADDR_MAX_LEN + 1] | PCI address character string in the *Domain:Bus:Device.Function* (**%04x:%02x:%02x.%x**) format. | + +#### struct ublock_hook + +1. Prototype + + ```c + struct ublock_hook + { + ublock_callback_func ublock_callback; + void *user_data; + }; + ``` + +2. Description + + This data structure is used to register callback functions. + +3. 
Struct members + + | Member | Description | + | ------------------------------------ | ------------------------------------------------------------ | + | ublock_callback_func ublock_callback | Function executed during callback. The type is bool func(void *info, void *user_data). | + | void *user_data | User parameter transferred to the callback function. | + +#### struct ublock_ctrl_iostat_info + +1. Prototype + + ```c + struct ublock_ctrl_iostat_info + { + uint64_t num_read_ops; + uint64_t num_write_ops; + uint64_t read_latency_ms; + uint64_t write_latency_ms; + uint64_t io_outstanding; + uint64_t num_poll_timeout; + uint64_t io_ticks_ms; + }; + ``` + +2. Description + + This data structure is used to obtain the I/O statistics of a controller. + +3. Struct members + + | Member | Description | + | ------------------------- | ------------------------------------------------------------ | + | uint64_t num_read_ops | Accumulated number of read I/Os of the controller. | + | uint64_t num_write_ops | Accumulated number of write I/Os of the controller. | + | uint64_t read_latency_ms | Accumulated read latency of the controller, in ms. | + | uint64_t write_latency_ms | Accumulated write latency of the controller, in ms. | + | uint64_t io_outstanding | Queue depth of the controller. | + | uint64_t num_poll_timeout | Accumulated number of polling timeouts of the controller. | + | uint64_t io_ticks_ms | Accumulated I/O processing latency of the controller, in ms. | + +## API + +### bdev_rw.h + +#### libstorage_get_nvme_ctrlr_info + +1. Prototype + + ```c + uint32_t libstorage_get_nvme_ctrlr_info(struct libstorage_nvme_ctrlr_info** ppCtrlrInfo); + ``` + +2. Description + + Obtains information about all controllers. + +3. 
Parameters + + | Parameter | Description | + | ----------------------------------------------- | ------------------------------------------------------------ | + | struct libstorage_nvme_ctrlr_info** ppCtrlrInfo | Output parameter, which returns all obtained controller information.
Note:
Free the memory using the free API in a timely manner. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | Failed to obtain controller information or no controller information is obtained. | + | > 0 | Number of obtained controllers. | + +#### libstorage_get_mgr_info_by_esn + +1. Prototype + + ```c + int32_t libstorage_get_mgr_info_by_esn(const char *esn, struct libstorage_mgr_info *mgr_info); + ``` + +2. Description + + Obtains the management information about the NVMe drive corresponding to the ESN. + +3. Parameters + + | Parameter | Description | + | ------------------------------------ | ------------------------------------------------------------ | + | const char *esn | ESN of the target device.
Note:
An ESN is a string of up to 20 characters (excluding the terminating character), but the exact length may vary by hardware vendor. For example, a string shorter than 20 characters may be padded with trailing spaces.
| + | struct libstorage_mgr_info *mgr_info | Output parameter, which returns all obtained NVMe drive management information. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | Succeeded in querying the NVMe drive management information corresponding to an ESN. | + | -1 | Failed to query the NVMe drive management information corresponding to an ESN. | + | -2 | No NVMe drive matching an ESN is obtained. | + +#### libstorage_get_mgr_smart_by_esn + +1. Prototype + + ```c + int32_t libstorage_get_mgr_smart_by_esn(const char *esn, uint32_t nsid, struct libstorage_smart_info *mgr_smart_info); + ``` + +2. Description + + Obtains the S.M.A.R.T. information of the NVMe drive corresponding to an ESN. + +3. Parameters + + | Parameter | Description | + | ------------------------------------ | ------------------------------------------------------------ | + | const char *esn | ESN of the target device.
Note:
An ESN is a string of up to 20 characters (excluding the terminating character), but the exact length may vary by hardware vendor. For example, a string shorter than 20 characters may be padded with trailing spaces.
| | uint32_t nsid | Specified namespace. | + | struct libstorage_smart_info *mgr_smart_info | Output parameter, which returns the obtained S.M.A.R.T. information of the NVMe drive. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | Succeeded in querying the S.M.A.R.T. information of the NVMe drive corresponding to an ESN. | + | -1 | Failed to query the S.M.A.R.T. information of the NVMe drive corresponding to an ESN. | + | -2 | No NVMe drive matching an ESN is obtained. | + +#### libstorage_get_bdev_ns_info + +1. Prototype + + ```c + uint32_t libstorage_get_bdev_ns_info(const char* bdevName, struct libstorage_namespace_info** ppNsInfo); + ``` + +2. Description + + Obtains namespace information based on the device name. + +3. Parameters + + | Parameter | Description | + | ------------------------------------------- | ------------------------------------------------------------ | + | const char* bdevName | Device name. | + | struct libstorage_namespace_info** ppNsInfo | Output parameter, which returns namespace information.
Note:
Free the memory using the free API in a timely manner. | + +4. Return value + + | Return Value | Description | + | ------------ | ---------------------------- | + | 0 | The operation failed. | + | 1 | The operation is successful. | + +#### libstorage_get_ctrl_ns_info + +1. Prototype + + ```c + uint32_t libstorage_get_ctrl_ns_info(const char* ctrlName, struct libstorage_namespace_info** ppNsInfo); + ``` + +2. Description + + Obtains information about all namespaces based on the controller name. + +3. Parameters + + | Parameter | Description | + | ------------------------------------------- | ------------------------------------------------------------ | + | const char* ctrlName | Controller name. | + | struct libstorage_namespace_info** ppNsInfo | Output parameter, which returns information about all namespaces.
Note:
Free the memory using the free API in a timely manner. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | Failed to obtain the namespace information or no namespace information is obtained. | + | > 0 | Number of namespaces obtained. | + +#### libstorage_create_namespace + +1. Prototype + + ```c + int32_t libstorage_create_namespace(const char* ctrlName, uint64_t ns_size, char** outputName); + ``` + +2. Description + + Creates a namespace on a specified controller (the prerequisite is that the controller supports namespace management). + + Optane drives are based on the NVMe 1.0 protocol and do not support namespace management. Therefore, this API is not supported. + + ES3000 V3 and V5 support only one namespace by default. By default, a namespace exists on the controller. To create a namespace, delete the original namespace. + +3. Parameters + + | Parameter | Description | + | -------------------- | ------------------------------------------------------------ | + | const char* ctrlName | Controller name. | + | uint64_t ns_size | Size of the namespace to be created (unit: sector_size). | + | char** outputName | Output parameter, which indicates the name of the created namespace.
Note:
Free the memory using the free API in a timely manner. | + +4. Return value + + | Return Value | Description | + | ------------ | ---------------------------------------------- | + | ≤ 0 | Failed to create the namespace. | + | > 0 | ID of the created namespace (starting from 1). | + +#### libstorage_delete_namespace + +1. Prototype + + ```c + int32_t libstorage_delete_namespace(const char* ctrlName, uint32_t ns_id); + ``` + +2. Description + + Deletes a namespace from a specified controller. Optane drives are based on the NVMe 1.0 protocol and do not support namespace management. Therefore, this API is not supported. + +3. Parameters + + | Parameter | Description | + | -------------------- | ---------------- | + | const char* ctrlName | Controller name. | + | uint32_t ns_id | Namespace ID | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | Deletion succeeded. | + | Other values | Deletion failed.
Note:
Before deleting a namespace, stop I/O operations. Otherwise, the namespace fails to be deleted. | + +#### libstorage_delete_all_namespace + +1. Prototype + + ```c + int32_t libstorage_delete_all_namespace(const char* ctrlName); + ``` + +2. Description + + Deletes all namespaces from a specified controller. Optane drives are based on the NVMe 1.0 protocol and do not support namespace management. Therefore, this API is not supported. + +3. Parameters + + | Parameter | Description | + | -------------------- | ---------------- | + | const char* ctrlName | Controller name. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | Deletion succeeded. | + | Other values | Deletion failed.
Note:
Before deleting a namespace, stop I/O operations. Otherwise, the namespace fails to be deleted. | + +#### libstorage_nvme_create_ctrlr + +1. Prototype + + ```c + int32_t libstorage_nvme_create_ctrlr(const char *pci_addr, const char *ctrlr_name); + ``` + +2. Description + + Creates an NVMe controller based on the PCI address. + +3. Parameters + + | Parameter | Description | + | ---------------- | ---------------- | + | char *pci_addr | PCI address. | + | char *ctrlr_name | Controller name. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------- | + | < 0 | Creation failed. | + | 0 | Creation succeeded. | + +#### libstorage_nvme_delete_ctrlr + +1. Prototype + + ```c + int32_t libstorage_nvme_delete_ctrlr(const char *ctrlr_name); + ``` + +2. Description + + Destroys an NVMe controller based on the controller name. + +3. Parameters + + | Parameter | Description | + | ---------------------- | ---------------- | + | const char *ctrlr_name | Controller name. | + + This API can be called only after all delivered I/Os are returned. + +4. Return value + + | Return Value | Description | + | ------------ | ---------------------- | + | < 0 | Destruction failed. | + | 0 | Destruction succeeded. | + +#### libstorage_nvme_reload_ctrlr + +1. Prototype + + ```c + int32_t libstorage_nvme_reload_ctrlr(const char *cfgfile); + ``` + +2. Description + + Adds or deletes an NVMe controller based on the configuration file. + +3. Parameters + + | Parameter | Description | + | ------------------- | ------------------------------- | + | const char *cfgfile | Path of the configuration file. | + + Before using this API to delete a drive, ensure that all delivered I/Os have been returned. + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | < 0 | Failed to add or delete drives based on the configuration file. 
(Drives may be successfully added or deleted for some controllers.) | + | 0 | Drives are successfully added or deleted based on the configuration file. | + + > Constraints + + - Currently, a maximum of 36 controllers can be configured in the configuration file. + + - The reload API creates as many controllers as possible. If a controller fails to be created, the creation of other controllers is not affected. + + - In concurrency scenarios, the final drive initialization status may be inconsistent with the input configuration file. + + - If a reload deletes a drive that is still delivering I/Os, those I/Os fail. + + - Modifying the controller name (for example, **nvme0**) that corresponds to a PCI address in the configuration file does not take effect when this API is called. + + - The reload function is valid only when drives are added or deleted. Other configuration items in the configuration file cannot be reloaded. + +#### libstorage_low_level_format_nvm + +1. Prototype + + ```c + int8_t libstorage_low_level_format_nvm(const char* ctrlName, uint8_t lbaf, + enum libstorage_ns_pi_type piType, + bool pil_start, bool ms_extented, uint8_t ses); + ``` + +2. Description + + Performs low-level formatting on NVMe drives. + +3. Parameters + + | Parameter | Description | + | --------------------------------- | ------------------------------------------------------------ | + | const char* ctrlName | Controller name. | + | uint8_t lbaf | LBA format to be used. | + | enum libstorage_ns_pi_type piType | Protection type to be used. | + | bool pil_start | The protection information is stored in the first eight bytes (1) or the last eight bytes (0) of the metadata. | + | bool ms_extented | Whether to format to the extended type. | + | uint8_t ses | Whether to perform secure erase during formatting. Currently, only the value **0** (no-secure erase) is supported. | + +4. 
Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------- | + | < 0 | Formatting failed. | + | ≥ 0 | LBA format generated after successful formatting. | + + > Constraints + + - This low-level formatting API will clear the data and metadata of the drive namespace. Exercise caution when using this API. + + - It takes several seconds to format an ES3000 drive and several minutes to format an Intel Optane drive. Before using this API, wait until the formatting is complete. If the formatting process is forcibly stopped, the formatting fails. + + - Before formatting, stop the I/O operations on the data plane. If the drive is processing I/O requests, the formatting may fail occasionally. If the formatting is successful, the drive may discard the I/O requests that are being processed. Therefore, before formatting the drive, ensure that the I/O operations on the data plane are stopped. + + - During the formatting, the controller is reset. As a result, the initialized drive resources are unavailable. Therefore, after the formatting is complete, restart the I/O process on the data plane. + + - ES3000 V3 supports protection types 0 and 3, PI start and PI end, and mc extended. ES3000 V3 supports DIF in 512+8 format but does not support DIF in 4096+64 format. + + - ES3000 V5 supports protection types 0 and 3, PI start and PI end, mc extended, and mc pointer. ES3000 V5 supports DIF in both 512+8 and 4096+64 formats. + + - Optane drives support protection types 0 and 1, PI end, and mc extended. Optane drives support DIF in 512+8 format but does not support DIF in 4096+64 format. + + | **Drive Type** | **LBA Format** | **Drive Type** | **LBA Format** | + | ------------------ | ------------------------------------------------------------ | -------------- | ------------------------------------------------------------ | + | Intel Optane P4800 | lbaf0:512+0
lbaf1:512+8
lbaf2:512+16
lbaf3:4096+0
lbaf4:4096+8
lbaf5:4096+64
lbaf6:4096+128 | ES3000 V3, V5 | lbaf0:512+0
lbaf1:512+8
lbaf2:4096+64
lbaf3:4096+0
lbaf4:4096+8 | + +#### LIBSTORAGE_CALLBACK_FUNC + +1. Prototype + + ```c + typedef void (*LIBSTORAGE_CALLBACK_FUNC)(int32_t cb_status, int32_t sct_code, void* cb_arg); + ``` + +2. Description + + Registered HSAK I/O completion callback function. + +3. Parameters + + | Parameter | Description | + | ----------------- | ------------------------------------------------------------ | + | int32_t cb_status | I/O status code. The value **0** indicates success, a negative value indicates system error code, and a positive value indicates drive error code (for different error codes,
see [Appendixes](#Appendixes)). | + | int32_t sct_code | I/O status code type:
0: [GENERIC](#generic)
1: [COMMAND_SPECIFIC](#command_specific)
2: [MEDIA_DATA_INTERGRITY_ERROR](#media_data_intergrity_error)
7: VENDOR_SPECIFIC | + | void* cb_arg | Input parameter of the callback function. | + +4. Return value + + None. + +#### libstorage_deallocate_block + +1. Prototype + + ```c + int32_t libstorage_deallocate_block(int32_t fd, struct libstorage_dsm_range_desc *range, uint16_t range_count, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); + ``` + +2. Description + + Notifies NVMe drives of the blocks that can be released. + +3. Parameters + + | Parameter | Description | + | --------------------------------------- | ------------------------------------------------------------ | + | int32_t fd | Open drive file descriptor. | + | struct libstorage_dsm_range_desc *range | Description of blocks that can be released on NVMe drives.
Note:
This parameter requires huge page memory allocated by **libstorage_mem_reserve** with 4 KB alignment, that is, **align** set to **4096**.
The supported TRIM range varies by drive. Exceeding the maximum TRIM range of the drive may cause data exceptions. | + | uint16_t range_count | Number of members in the **range** array. | + | LIBSTORAGE_CALLBACK_FUNC cb | Callback function. | + | void* cb_arg | Callback function parameter. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------- | + | < 0 | Failed to deliver the request. | + | 0 | Request submitted successfully. | + +#### libstorage_async_write + +1. Prototype + + ```c + int32_t libstorage_async_write(int32_t fd, void *buf, size_t nbytes, off64_t offset, void *md_buf, size_t md_len, enum libstorage_crc_and_prchk dif_flag, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); + ``` + +2. Description + + Delivers asynchronous I/O write requests (the write buffer is a contiguous buffer). + +3. Parameters + + | Parameter | Description | + | -------------------------------------- | ------------------------------------------------------------ | + | int32_t fd | File descriptor of the block device. | + | void *buf | Buffer for I/O write data (four-byte aligned and cannot cross the 4 KB page boundary).
Note:
LBAs in extended mode must contain the metadata memory size. | + | size_t nbytes | Size of a single write I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | off64_t offset | Write offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | void *md_buf | Metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **NULL** for LBAs in extended mode.) | + | size_t md_len | Buffer length of metadata. (Applicable only to LBAs in separated mode. Set this parameter to **0** for LBAs in extended mode.) | + | enum libstorage_crc_and_prchk dif_flag | Whether to calculate DIF and whether to enable drive verification. | + | LIBSTORAGE_CALLBACK_FUNC cb | Registered callback function. | + | void* cb_arg | Parameters of the callback function. | + +4. Return value + + | Return Value | Description | + | ------------ | ---------------------------------------------- | + | 0 | I/O write requests are submitted successfully. | + | Other values | Failed to submit I/O write requests. | + +#### libstorage_async_read + +1. Prototype + + ```c + int32_t libstorage_async_read(int32_t fd, void *buf, size_t nbytes, off64_t offset, void *md_buf, size_t md_len, enum libstorage_crc_and_prchk dif_flag, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); + ``` + +2. Description + + Delivers asynchronous I/O read requests (the read buffer is a contiguous buffer). + +3. Parameters + + | Parameter | Description | + | -------------------------------------- | ------------------------------------------------------------ | + | int32_t fd | File descriptor of the block device. | + | void *buf | Buffer for I/O read data (four-byte aligned and cannot cross the 4 KB page boundary).
Note:
LBAs in extended mode must contain the metadata memory size. | + | size_t nbytes | Size of a single read I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | off64_t offset | Read offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | void *md_buf | Metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **NULL** for LBAs in extended mode.) | + | size_t md_len | Buffer length of metadata. (Applicable only to LBAs in separated mode. Set this parameter to **0** for LBAs in extended mode.) | + | enum libstorage_crc_and_prchk dif_flag | Whether to calculate DIF and whether to enable drive verification. | + | LIBSTORAGE_CALLBACK_FUNC cb | Registered callback function. | + | void* cb_arg | Parameters of the callback function. | + +4. Return value + + | Return Value | Description | + | ------------ | --------------------------------------------- | + | 0 | I/O read requests are submitted successfully. | + | Other values | Failed to submit I/O read requests. | + +#### libstorage_async_writev + +1. Prototype + + ```c + int32_t libstorage_async_writev(int32_t fd, struct iovec *iov, int iovcnt, size_t nbytes, off64_t offset, void *md_buf, size_t md_len, enum libstorage_crc_and_prchk dif_flag, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); + ``` + +2. Description + + Delivers asynchronous I/O write requests (the write buffer is a discrete buffer). + +3. Parameters + + | Parameter | Description | + | -------------------------------------- | ------------------------------------------------------------ | + | int32_t fd | File descriptor of the block device. | + | struct iovec *iov | Buffer for I/O write data.
Note:
LBAs in extended mode must contain the metadata size.
The address must be 4-byte-aligned and the length cannot exceed 4 GB. | + | int iovcnt | Number of buffers for I/O write data. | + | size_t nbytes | Size of a single write I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | off64_t offset | Write offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | void *md_buf | Metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **NULL** for LBAs in extended mode.) | + | size_t md_len | Length of the metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **0** for LBAs in extended mode.) | + | enum libstorage_crc_and_prchk dif_flag | Whether to calculate DIF and whether to enable drive verification. | + | LIBSTORAGE_CALLBACK_FUNC cb | Registered callback function. | + | void* cb_arg | Parameters of the callback function. | + +4. Return value + + | Return Value | Description | + | ------------ | ---------------------------------------------- | + | 0 | I/O write requests are submitted successfully. | + | Other values | Failed to submit I/O write requests. | + +#### libstorage_async_readv + +1. Prototype + + ```c + int32_t libstorage_async_readv(int32_t fd, struct iovec *iov, int iovcnt, size_t nbytes, off64_t offset, void *md_buf, size_t md_len, enum libstorage_crc_and_prchk dif_flag, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); + ``` + +2. Description + + Delivers asynchronous I/O read requests (the read buffer is a discrete buffer). + +3. Parameters + + | Parameter | Description | + | -------------------------------------- | ------------------------------------------------------------ | + | int32_t fd | File descriptor of the block device. | + | struct iovec *iov | Buffer for I/O read data.
Note:
LBAs in extended mode must contain the metadata size.
The address must be 4-byte-aligned and the length cannot exceed 4 GB. | + | int iovcnt | Number of buffers for I/O read data. | + | size_t nbytes | Size of a single read I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | off64_t offset | Read offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | void *md_buf | Metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **NULL** for LBAs in extended mode.) | + | size_t md_len | Length of the metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **0** for LBAs in extended mode.) | + | enum libstorage_crc_and_prchk dif_flag | Whether to calculate DIF and whether to enable drive verification. | + | LIBSTORAGE_CALLBACK_FUNC cb | Registered callback function. | + | void* cb_arg | Parameters of the callback function. | + +4. Return value + + | Return Value | Description | + | ------------ | --------------------------------------------- | + | 0 | I/O read requests are submitted successfully. | + | Other values | Failed to submit I/O read requests. | + +#### libstorage_sync_write + +1. Prototype + + ```c + int32_t libstorage_sync_write(int fd, const void *buf, size_t nbytes, off_t offset); + ``` + +2. Description + + Delivers synchronous I/O write requests (the write buffer is a contiguous buffer). + +3. Parameters + + | Parameter | Description | + | -------------- | ------------------------------------------------------------ | + | int32_t fd | File descriptor of the block device. | + | void *buf | Buffer for I/O write data (four-byte aligned and cannot cross the 4 KB page boundary).
Note:
LBAs in extended mode must contain the metadata memory size. | + | size_t nbytes | Size of a single write I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | off64_t offset | Write offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + +4. Return value + + | Return Value | Description | + | ------------ | ---------------------------------------------- | + | 0 | I/O write requests are submitted successfully. | + | Other values | Failed to submit I/O write requests. | + +#### libstorage_sync_read + +1. Prototype + + ```c + int32_t libstorage_sync_read(int fd, const void *buf, size_t nbytes, off_t offset); + ``` + +2. Description + + Delivers synchronous I/O read requests (the read buffer is a contiguous buffer). + +3. Parameters + + | Parameter | Description | + | -------------- | ------------------------------------------------------------ | + | int32_t fd | File descriptor of the block device. | + | void *buf | Buffer for I/O read data (four-byte aligned and cannot cross the 4 KB page boundary).
Note:
LBAs in extended mode must contain the metadata memory size. | + | size_t nbytes | Size of a single read I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + | off64_t offset | Read offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | + +4. Return value + + | Return Value | Description | + | ------------ | --------------------------------------------- | + | 0 | I/O read requests are submitted successfully. | + | Other values | Failed to submit I/O read requests. | + +#### libstorage_open + +1. Prototype + + ```c + int32_t libstorage_open(const char* devfullname); + ``` + +2. Description + + Opens a block device. + +3. Parameters + + | Parameter | Description | + | ----------------------- | ---------------------------------------- | + | const char* devfullname | Block device name (format: **nvme0n1**). | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | -1 | Opening failed. For example, the device name is incorrect, or the number of opened FDs is greater than the number of available channels of the NVMe drive. | + | > 0 | File descriptor of the block device. | + + After the MultiQ function in **nvme.conf.in** is enabled, different FDs are returned if a thread opens the same device for multiple times. Otherwise, the same FD is returned. This attribute applies only to the NVMe device. + +#### libstorage_close + +1. Prototype + + ```c + int32_t libstorage_close(int32_t fd); + ``` + +2. Description + + Closes a block device. + +3. Parameters + + | Parameter | Description | + | ---------- | ------------------------------------------ | + | int32_t fd | File descriptor of an opened block device. | + +4. Return value + + | Return Value | Description | + | ------------ | ----------------------------------------------- | + | -1 | Invalid file descriptor. | + | -16 | The file descriptor is busy. Retry is required. | + | 0 | Close succeeded. | + +#### libstorage_mem_reserve + +1. Prototype + + ```c + void* libstorage_mem_reserve(size_t size, size_t align); + ``` + +2. 
Description + + Allocates memory space from the huge page memory reserved by the DPDK. + +3. Parameters + + | Parameter | Description | + | ------------ | ----------------------------------- | + | size_t size | Size of the memory to be allocated. | + | size_t align | Aligns allocated memory space. | + +4. Return value + + | Return Value | Description | + | ------------ | -------------------------------------- | + | NULL | Allocation failed. | + | Other values | Address of the allocated memory space. | + +#### libstorage_mem_free + +1. Prototype + + ```c + void libstorage_mem_free(void* ptr); + ``` + +2. Description + + Frees the memory space pointed to by **ptr**. + +3. Parameters + + | Parameter | Description | + | --------- | ---------------------------------------- | + | void* ptr | Address of the memory space to be freed. | + +4. Return value + + None. + +#### libstorage_alloc_io_buf + +1. Prototype + + ```c + void* libstorage_alloc_io_buf(size_t nbytes); + ``` + +2. Description + + Allocates memory from buf_small_pool or buf_large_pool of the SPDK. + +3. Parameters + + | Parameter | Description | + | ------------- | ----------------------------------- | + | size_t nbytes | Size of the buffer to be allocated. | + +4. Return value + + | Return Value | Description | + | ------------ | -------------------------------------- | + | Other values | Start address of the allocated buffer. | + +#### libstorage_free_io_buf + +1. Prototype + + ```c + int32_t libstorage_free_io_buf(void *buf, size_t nbytes); + ``` + +2. Description + + Frees the allocated memory to buf_small_pool or buf_large_pool of the SPDK. + +3. Parameters + + | Parameter | Description | + | ------------- | ---------------------------------------- | + | void *buf | Start address of the buffer to be freed. | + | size_t nbytes | Size of the buffer to be freed. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------ | + | -1 | Freeing failed. 
| + | 0 | Freeing succeeded. | + +#### libstorage_init_module + +1. Prototype + + ```c + int32_t libstorage_init_module(const char* cfgfile); + ``` + +2. Description + + Initializes the HSAK module. + +3. Parameters + + | Parameter | Description | + | ------------------- | ------------------------------------ | + | const char* cfgfile | Name of the HSAK configuration file. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------- | + | Other values | Initialization failed. | + | 0 | Initialization succeeded. | + +#### libstorage_exit_module + +1. Prototype + + ```c + int32_t libstorage_exit_module(void); + ``` + +2. Description + + Exits the HSAK module. + +3. Parameters + + None. + +4. Return value + + | Return Value | Description | + | ------------ | --------------------------------- | + | Other values | Failed to exit the cleanup. | + | 0 | Succeeded in exiting the cleanup. | + +#### LIBSTORAGE_REGISTER_DPDK_INIT_NOTIFY + +1. Prototype + + ```c + LIBSTORAGE_REGISTER_DPDK_INIT_NOTIFY(_name, _notify) + ``` + +2. Description + + Service layer registration function, which is used to register the callback function when the DPDK initialization is complete. + +3. Parameters + + | Parameter | Description | + | --------- | ------------------------------------------------------------ | + | _name | Name of a module at the service layer. | + | _notify | Prototype of the callback function registered at the service layer: **void (*notifyFunc)(const struct libstorage_dpdk_init_notify_arg *arg);** | + +4. Return value + + None + +### ublock.h + +#### init_ublock + +1. Prototype + + ```c + int init_ublock(const char *name, enum ublock_rpc_server_status flg); + ``` + +2. Description + + Initializes the Ublock module. This API must be called before other Ublock APIs. If the flag is set to **UBLOCK_RPC_SERVER_ENABLE**, that is, Ublock functions as the RPC server, the same process can be initialized only once. 
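
    The call sequence described above can be sketched as follows. Because the real declarations live in HSAK's **ublock.h**, this sketch stubs **init_ublock** and **ublock_fini** with the documented signatures purely so it is self-contained (the stub bodies and numeric enum values are illustrative assumptions, not HSAK behavior):

    ```c
    #include <stdio.h>

    /*
     * Stand-in declarations so this sketch is self-contained; in a real
     * service process they come from HSAK's ublock.h. The numeric enum
     * values here are assumptions for illustration only.
     */
    enum ublock_rpc_server_status {
        UBLOCK_RPC_SERVER_DISABLE = 0,
        UBLOCK_RPC_SERVER_ENABLE = 1
    };

    /* Stub with the documented signature; always reports success. */
    static int init_ublock(const char *name, enum ublock_rpc_server_status flg)
    {
        (void)name;
        (void)flg;
        return 0;
    }

    static void ublock_fini(void)
    {
    }

    int main(void)
    {
        /* Initialize Ublock as the RPC server; a process may do this only once. */
        if (init_ublock(NULL, UBLOCK_RPC_SERVER_ENABLE) != 0) {
            fprintf(stderr, "init_ublock failed\n");
            return 1;
        }

        /* ... call other Ublock APIs here ... */

        /* Pair the initialization with the teardown API. */
        ublock_fini();
        puts("ublock initialized and finalized");
        return 0;
    }
    ```

    In a real service process, drop the stubs, include **ublock.h**, and link against the Ublock library; the point being illustrated is the lifecycle of initializing once per process and pairing the initialization with **ublock_fini**.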
+ + When Ublock is started as the RPC server, the monitor thread of a server is started at the same time. When the monitor thread detects that the RPC server thread is abnormal (for example, thread suspended), the monitor thread calls the exit function to trigger the process to exit. + + In this case, the product script is used to start the process again. + +3. Parameters + + | Parameter | Description | + | ------------------------------------ | ------------------------------------------------------------ | + | const char *name | Module name. The default value is **ublock**. You are advised to set this parameter to **NULL**. | + | enum ublock_rpc_server_status
flg | Whether to enable RPC. The value can be **UBLOCK_RPC_SERVER_DISABLE** or **UBLOCK_RPC_SERVER_ENABLE**.
If RPC is disabled and the drive is occupied by service processes, the Ublock module cannot obtain the drive information. | + +4. Return value + + | Return Value | Description | + | ------------- | ------------------------------------------------------------ | + | 0 | Initialization succeeded. | + | -1 | Initialization failed. Possible cause: The Ublock module has been initialized. | + | Process exits | Ublock considers that the following exceptions cannot be rectified and directly calls the exit API to exit the process:
- The RPC service needs to be created, but the creation fails.
- Failed to create a hot swap monitoring thread. | + +#### ublock_init + +1. Prototype + + ```c + #define ublock_init(name) init_ublock(name, UBLOCK_RPC_SERVER_ENABLE) + ``` + +2. Description + + It is the macro definition of the init_ublock API. It can be regarded as initializing Ublock into the required RPC service. + +3. Parameters + + | Parameter | Description | + | --------- | ------------------------------------------------------------ | + | name | Module name. The default value is **ublock**. You are advised to set this parameter to **NULL**. | + +4. Return value + + | Return Value | Description | + | ------------- | ------------------------------------------------------------ | + | 0 | Initialization succeeded. | + | -1 | Initialization failed. Possible cause: The Ublock RPC server module has been initialized. | + | Process exits | Ublock considers that the following exceptions cannot be rectified and directly calls the exit API to exit the process:
- The RPC service needs to be created, but the creation fails.
- Failed to create a hot swap monitoring thread. | + +#### ublock_init_norpc + +1. Prototype + + ```c + #define ublock_init_norpc(name) init_ublock(name, UBLOCK_RPC_SERVER_DISABLE) + ``` + +2. Description + + It is the macro definition of the init_ublock API and can be considered as initializing Ublock into a non-RPC service. + +3. Parameters + + | Parameter | Description | + | --------- | ------------------------------------------------------------ | + | name | Module name. The default value is **ublock**. You are advised to set this parameter to **NULL**. | + +4. Return value + + | Return Value | Description | + | ------------- | ------------------------------------------------------------ | + | 0 | Initialization succeeded. | + | -1 | Initialization failed. Possible cause: The Ublock client module has been initialized. | + | Process exits | Ublock considers that the following exceptions cannot be rectified and directly calls the exit API to exit the process:
- The RPC service needs to be created, but the creation fails.
- Failed to create a hot swap monitoring thread. | + +#### ublock_fini + +1. Prototype + + ```c + void ublock_fini(void); + ``` + +2. Description + + Destroys the Ublock module and internally created resources. This API must be used together with the Ublock initialization API. + +3. Parameters + + None. + +4. Return value + + None. + +#### ublock_get_bdevs + +1. Prototype + + ```c + int ublock_get_bdevs(struct ublock_bdev_mgr* bdev_list); + ``` + +2. Description + + Obtains the device list (all NVMe devices in the environment, including kernel-mode and user-mode drivers). The obtained NVMe device list contains only PCI addresses and does not contain specific device information. To obtain specific device information, call ublock_get_bdev. + +3. Parameters + + | Parameter | Description | + | --------------------------------- | ------------------------------------------------------------ | + | struct ublock_bdev_mgr* bdev_list | Output parameter, which returns the device queue. The **bdev_list** pointer must be allocated externally. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------ | + | 0 | The device queue is obtained successfully. | + | -2 | No NVMe device exists in the environment. | + | Other values | Failed to obtain the device list. | + +#### ublock_free_bdevs + +1. Prototype + + ```c + void ublock_free_bdevs(struct ublock_bdev_mgr* bdev_list); + ``` + +2. Description + + Releases a device list. + +3. Parameters + + | Parameter | Description | + | --------------------------------- | ------------------------------------------------------------ | + | struct ublock_bdev_mgr* bdev_list | Head pointer of the device queue. After the device queue is cleared, the **bdev_list** pointer is not released. | + +4. Return value + + None. + +#### ublock_get_bdev + +1. Prototype + + ```c + int ublock_get_bdev(const char *pci, struct ublock_bdev *bdev); + ``` + +2. 
Description + + Obtains information about a specific device. In the device information, the serial number, model, and firmware version of the NVMe device are saved as character arrays instead of character strings. (The return format varies depending on the drive controller, and the arrays may not end with 0.) + + After this API is called, the corresponding device is occupied by Ublock. Therefore, call ublock_free_bdev to free resources immediately after the required service operation is complete. + +3. Parameters + + | Parameter | Description | + | ------------------------ | ------------------------------------------------------------ | + | const char *pci | PCI address of the device whose information needs to be obtained. | + | struct ublock_bdev *bdev | Output parameter, which returns the device information. The **bdev** pointer must be allocated externally. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | The device information is obtained successfully. | + | -1 | Failed to obtain device information due to incorrect parameters. | + | -11(EAGAIN) | Failed to obtain device information due to the RPC query failure. A retry is required (3s sleep is recommended). | + +#### ublock_get_bdev_by_esn + +1. Prototype + + ```c + int ublock_get_bdev_by_esn(const char *esn, struct ublock_bdev *bdev); + ``` + +2. Description + + Obtains information about the device corresponding to an ESN. In the device information, the serial number, model, and firmware version of the NVMe device are saved as character arrays instead of character strings. (The return format varies depending on the drive controller, and the arrays may not end with 0.) + + After this API is called, the corresponding device is occupied by Ublock. Therefore, call ublock_free_bdev to free resources immediately after the required service operation is complete. + +3. 
Parameters + + | Parameter | Description | + | ------------------------ | ------------------------------------------------------------ | + | const char *esn | ESN of the device whose information is to be obtained.
Note:
An ESN is a string of a maximum of 20 characters (excluding the end character of the string), but the length may vary according to hardware vendors. For example, if the length is less than 20 characters, spaces are padded at the end of the character string. | + | struct ublock_bdev *bdev | Output parameter, which returns the device information. The **bdev** pointer must be allocated externally. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | The device information is obtained successfully. | + | -1 | Failed to obtain device information due to incorrect parameters. | + | -11(EAGAIN) | Failed to obtain device information due to the RPC query failure. A retry is required (3s sleep is recommended). | + +#### ublock_free_bdev + +1. Prototype + + ```c + void ublock_free_bdev(struct ublock_bdev *bdev); + ``` + +2. Description + + Frees device resources. + +3. Parameters + + | Parameter | Description | + | ------------------------ | ------------------------------------------------------------ | + | struct ublock_bdev *bdev | Pointer to the device information. After the data in the pointer is cleared, the **bdev** pointer is not freed. | + +4. Return value + + None. + +#### TAILQ_FOREACH_SAFE + +1. Prototype + + ```c + #define TAILQ_FOREACH_SAFE(var, head, field, tvar) + for ((var) = TAILQ_FIRST((head)); + (var) && ((tvar) = TAILQ_NEXT((var), field), 1); + (var) = (tvar)) + ``` + +2. Description + + Provides a macro definition for each member of the secure access queue. + +3. Parameters + + | Parameter | Description | + | --------- | ------------------------------------------------------------ | + | var | Queue node member on which you are performing operations. | + | head | Queue head pointer. Generally, it refers to the object address defined by **TAILQ_HEAD(xx, xx) obj**. 
| + | field | Name of the struct used to store the pointers before and after the queue in the queue node. Generally, it is the name defined by **TAILQ_ENTRY (xx) name**. | + | tvar | Next queue node member. | + +4. Return value + + None. + +#### ublock_get_SMART_info + +1. Prototype + + ```c + int ublock_get_SMART_info(const char *pci, uint32_t nsid, struct ublock_SMART_info *smart_info); + ``` + +2. Description + + Obtains the S.M.A.R.T. information of a specified device. + +3. Parameters + + | Parameter | Description | + | ------------------------------------ | ------------------------------------------------------------ | + | const char *pci | Device PCI address. | + | uint32_t nsid | Specified namespace. | + | struct ublock_SMART_info *smart_info | Output parameter, which returns the S.M.A.R.T. information of the device. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | The S.M.A.R.T. information is obtained successfully. | + | -1 | Failed to obtain S.M.A.R.T. information due to incorrect parameters. | + | -11(EAGAIN) | Failed to obtain S.M.A.R.T. information due to the RPC query failure. A retry is required (3s sleep is recommended). | + +#### ublock_get_SMART_info_by_esn + +1. Prototype + + ```c + int ublock_get_SMART_info_by_esn(const char *esn, uint32_t nsid, struct ublock_SMART_info *smart_info); + ``` + +2. Description + + Obtains the S.M.A.R.T. information of the device corresponding to an ESN. + +3. Parameters + + | Parameter | Description | + | --------------------------------------- | ------------------------------------------------------------ | + | const char *esn | Device ESN.
Note:
An ESN is a string of a maximum of 20 characters (excluding the end character of the string), but the length may vary according to hardware vendors. For example, if the length is less than 20 characters, spaces are padded at the end of the character string. | + | uint32_t nsid | Specified namespace. | + | struct ublock_SMART_info
*smart_info | Output parameter, which returns the S.M.A.R.T. information of the device. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | The S.M.A.R.T. information is obtained successfully. | + | -1 | Failed to obtain SMART information due to incorrect parameters. | + | -11(EAGAIN) | Failed to obtain S.M.A.R.T. information due to the RPC query failure. A retry is required (3s sleep is recommended). | + +#### ublock_get_error_log_info + +1. Prototype + + ```c + int ublock_get_error_log_info(const char *pci, uint32_t err_entries, struct ublock_nvme_error_info *errlog_info); + ``` + +2. Description + + Obtains the error log information of a specified device. + +3. Parameters + + | Parameter | Description | + | ------------------------------------------ | ------------------------------------------------------------ | + | const char *pci | Device PCI address. | + | uint32_t err_entries | Number of error logs to be obtained. A maximum of 256 error logs can be obtained. | + | struct ublock_nvme_error_info *errlog_info | Output parameter, which returns the error log information of the device. For the **errlog_info** pointer, the caller needs to apply for space and ensure that the obtained space is greater than or equal to err_entries x size of (struct ublock_nvme_error_info). | + +4. Return value + + | Return Value | Description | + | ------------------------------------------------------------ | ------------------------------------------------------------ | + | Number of obtained error logs. The value is greater than or equal to 0. | Error logs are obtained successfully. | + | -1 | Failed to obtain error logs due to incorrect parameters. | + | -11(EAGAIN) | Failed to obtain error logs due to the RPC query failure. A retry is required (3s sleep is recommended). | + +#### ublock_get_log_page + +1. 
Prototype + + ```c + int ublock_get_log_page(const char *pci, uint8_t log_page, uint32_t nsid, void *payload, uint32_t payload_size); + ``` + +2. Description + + Obtains information about a specified device and log page. + +3. Parameters + + | Parameter | Description | + | --------------------- | ------------------------------------------------------------ | + | const char *pci | Device PCI address. | + | uint8_t log_page | ID of the log page to be obtained. For example, **0xC0** and **0xCA** indicate the customized S.M.A.R.T. information of ES3000 V5 drives. | + | uint32_t nsid | Namespace ID. Some log pages can be obtained by namespace while others cannot. If obtaining by namespace is not supported, the caller must pass **0xFFFFFFFF**. | + | void *payload | Output parameter, which stores log page information. The caller is responsible for allocating memory. | + | uint32_t payload_size | Size of the allocated payload, which cannot be greater than 4096 bytes. | + +4. Return value + + | Return Value | Description | + | ------------ | ---------------------------------------------------- | + | 0 | The log page is obtained successfully. | + | -1 | Failed to obtain the log page due to parameter errors. | + +#### ublock_info_get_pci_addr + +1. Prototype + + ```c + char *ublock_info_get_pci_addr(const void *info); + ``` + +2. Description + + Obtains the PCI address of the hot swap device. + + The memory occupied by info and the memory occupied by the returned PCI address do not need to be freed by the service process. + +3. Parameters + + | Parameter | Description | + | ---------------- | ------------------------------------------------------------ | + | const void *info | Hot swap event information transferred by the hot swap monitoring thread to the callback function. | + +4. Return value + + | Return Value | Description | + | ------------ | --------------------------------- | + | NULL | Failed to obtain the information. | + | Other values | Obtained PCI address. 
| + +#### ublock_info_get_action + +1. Prototype + + ```c + enum ublock_nvme_uevent_action ublock_info_get_action(const void *info); + ``` + +2. Description + + Obtains the type of the hot swap event. + + The memory occupied by info does not need to be freed by the service process. + +3. Parameters + + | Parameter | Description | + | ---------------- | ------------------------------------------------------------ | + | const void *info | Hot swap event information transferred by the hot swap monitoring thread to the callback function. | + +4. Return value + + | Return Value | Description | + | -------------------------- | ------------------------------------------------------------ | + | Type of the hot swap event | Type of the event that triggers the callback function. For details, see the definition in **5.1.2.6 enum ublock_nvme_uevent_action**. | + +#### ublock_get_ctrl_iostat + +1. Prototype + + ```c + int ublock_get_ctrl_iostat(const char* pci, struct ublock_ctrl_iostat_info *ctrl_iostat); + ``` + +2. Description + + Obtains the I/O statistics of a controller. + +3. Parameters + + | Parameter | Description | + | ------------------------------------------- | ------------------------------------------------------------ | + | const char* pci | PCI address of the controller whose I/O statistics are to be obtained. | + | struct ublock_ctrl_iostat_info *ctrl_iostat | Output parameter, which returns I/O statistics. The **ctrl_iostat** pointer must be allocated externally. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------------------------ | + | 0 | Succeeded in obtaining I/O statistics. | + | -1 | Failed to obtain I/O statistics due to invalid parameters or RPC errors. | + | -2 | Failed to obtain I/O statistics because the NVMe drive is not taken over by the I/O process. | + | -3 | Failed to obtain I/O statistics because the I/O statistics function is disabled. | + +#### ublock_nvme_admin_passthru + +1. 
Prototype + + ```c + int32_t ublock_nvme_admin_passthru(const char *pci, void *cmd, void *buf, size_t nbytes); + ``` + +2. Description + + Transparently transmits the **nvme admin** command to the NVMe device. Currently, only the **nvme admin** command for obtaining the identify parameter is supported. + +3. Parameters + + | Parameter | Description | + | --------------- | ------------------------------------------------------------ | + | const char *pci | PCI address of the destination controller of the **nvme admin** command. | + | void *cmd | Pointer to the **nvme admin** command struct. The struct size is 64 bytes. For details, see the NVMe specifications. Currently, only the command for obtaining the identify parameter is supported. | + | void *buf | Saves the output of the **nvme admin** command. The space is allocated by the user, and its size is specified by **nbytes**. | + | size_t nbytes | Size of the user buffer, in bytes. For the command that obtains the identify parameter, the buffer size must be 4096 bytes. | + +4. Return value + + | Return Value | Description | + | ------------ | ------------------------------------------ | + | 0 | The user command is executed successfully. | + | -1 | Failed to execute the user command. 
| + +# Appendixes + +## GENERIC + +Generic Error Code Reference + +| sc | value | +| ------------------------------------------ | ----- | +| NVME_SC_SUCCESS | 0x00 | +| NVME_SC_INVALID_OPCODE | 0x01 | +| NVME_SC_INVALID_FIELD | 0x02 | +| NVME_SC_COMMAND_ID_CONFLICT | 0x03 | +| NVME_SC_DATA_TRANSFER_ERROR | 0x04 | +| NVME_SC_ABORTED_POWER_LOSS | 0x05 | +| NVME_SC_INTERNAL_DEVICE_ERROR | 0x06 | +| NVME_SC_ABORTED_BY_REQUEST | 0x07 | +| NVME_SC_ABORTED_SQ_DELETION | 0x08 | +| NVME_SC_ABORTED_FAILED_FUSED | 0x09 | +| NVME_SC_ABORTED_MISSING_FUSED | 0x0a | +| NVME_SC_INVALID_NAMESPACE_OR_FORMAT | 0x0b | +| NVME_SC_COMMAND_SEQUENCE_ERROR | 0x0c | +| NVME_SC_INVALID_SGL_SEG_DESCRIPTOR | 0x0d | +| NVME_SC_INVALID_NUM_SGL_DESCIRPTORS | 0x0e | +| NVME_SC_DATA_SGL_LENGTH_INVALID | 0x0f | +| NVME_SC_METADATA_SGL_LENGTH_INVALID | 0x10 | +| NVME_SC_SGL_DESCRIPTOR_TYPE_INVALID | 0x11 | +| NVME_SC_INVALID_CONTROLLER_MEM_BUF | 0x12 | +| NVME_SC_INVALID_PRP_OFFSET | 0x13 | +| NVME_SC_ATOMIC_WRITE_UNIT_EXCEEDED | 0x14 | +| NVME_SC_OPERATION_DENIED | 0x15 | +| NVME_SC_INVALID_SGL_OFFSET | 0x16 | +| NVME_SC_INVALID_SGL_SUBTYPE | 0x17 | +| NVME_SC_HOSTID_INCONSISTENT_FORMAT | 0x18 | +| NVME_SC_KEEP_ALIVE_EXPIRED | 0x19 | +| NVME_SC_KEEP_ALIVE_INVALID | 0x1a | +| NVME_SC_ABORTED_PREEMPT | 0x1b | +| NVME_SC_SANITIZE_FAILED | 0x1c | +| NVME_SC_SANITIZE_IN_PROGRESS | 0x1d | +| NVME_SC_SGL_DATA_BLOCK_GRANULARITY_INVALID | 0x1e | +| NVME_SC_COMMAND_INVALID_IN_CMB | 0x1f | +| NVME_SC_LBA_OUT_OF_RANGE | 0x80 | +| NVME_SC_CAPACITY_EXCEEDED | 0x81 | +| NVME_SC_NAMESPACE_NOT_READY | 0x82 | +| NVME_SC_RESERVATION_CONFLICT | 0x83 | +| NVME_SC_FORMAT_IN_PROGRESS | 0x84 | + +## COMMAND_SPECIFIC + +Error Code Reference for Specific Commands + +| sc | value | +| ------------------------------------------ | ----- | +| NVME_SC_COMPLETION_QUEUE_INVALID | 0x00 | +| NVME_SC_INVALID_QUEUE_IDENTIFIER | 0x01 | +| NVME_SC_MAXIMUM_QUEUE_SIZE_EXCEEDED | 0x02 | +| NVME_SC_ABORT_COMMAND_LIMIT_EXCEEDED | 0x03 | +| 
NVME_SC_ASYNC_EVENT_REQUEST_LIMIT_EXCEEDED | 0x05 | +| NVME_SC_INVALID_FIRMWARE_SLOT | 0x06 | +| NVME_SC_INVALID_FIRMWARE_IMAGE | 0x07 | +| NVME_SC_INVALID_INTERRUPT_VECTOR | 0x08 | +| NVME_SC_INVALID_LOG_PAGE | 0x09 | +| NVME_SC_INVALID_FORMAT | 0x0a | +| NVME_SC_FIRMWARE_REQ_CONVENTIONAL_RESET | 0x0b | +| NVME_SC_INVALID_QUEUE_DELETION | 0x0c | +| NVME_SC_FEATURE_ID_NOT_SAVEABLE | 0x0d | +| NVME_SC_FEATURE_NOT_CHANGEABLE | 0x0e | +| NVME_SC_FEATURE_NOT_NAMESPACE_SPECIFIC | 0x0f | +| NVME_SC_FIRMWARE_REQ_NVM_RESET | 0x10 | +| NVME_SC_FIRMWARE_REQ_RESET | 0x11 | +| NVME_SC_FIRMWARE_REQ_MAX_TIME_VIOLATION | 0x12 | +| NVME_SC_FIRMWARE_ACTIVATION_PROHIBITED | 0x13 | +| NVME_SC_OVERLAPPING_RANGE | 0x14 | +| NVME_SC_NAMESPACE_INSUFFICIENT_CAPACITY | 0x15 | +| NVME_SC_NAMESPACE_ID_UNAVAILABLE | 0x16 | +| NVME_SC_NAMESPACE_ALREADY_ATTACHED | 0x18 | +| NVME_SC_NAMESPACE_IS_PRIVATE | 0x19 | +| NVME_SC_NAMESPACE_NOT_ATTACHED | 0x1a | +| NVME_SC_THINPROVISIONING_NOT_SUPPORTED | 0x1b | +| NVME_SC_CONTROLLER_LIST_INVALID | 0x1c | +| NVME_SC_DEVICE_SELF_TEST_IN_PROGRESS | 0x1d | +| NVME_SC_BOOT_PARTITION_WRITE_PROHIBITED | 0x1e | +| NVME_SC_INVALID_CTRLR_ID | 0x1f | +| NVME_SC_INVALID_SECONDARY_CTRLR_STATE | 0x20 | +| NVME_SC_INVALID_NUM_CTRLR_RESOURCES | 0x21 | +| NVME_SC_INVALID_RESOURCE_ID | 0x22 | +| NVME_SC_CONFLICTING_ATTRIBUTES | 0x80 | +| NVME_SC_INVALID_PROTECTION_INFO | 0x81 | +| NVME_SC_ATTEMPTED_WRITE_TO_RO_PAGE | 0x82 | + +## MEDIA_DATA_INTERGRITY_ERROR + +Error Code Reference for Medium Exceptions + +| sc | value | +| -------------------------------------- | ----- | +| NVME_SC_WRITE_FAULTS | 0x80 | +| NVME_SC_UNRECOVERED_READ_ERROR | 0x81 | +| NVME_SC_GUARD_CHECK_ERROR | 0x82 | +| NVME_SC_APPLICATION_TAG_CHECK_ERROR | 0x83 | +| NVME_SC_REFERENCE_TAG_CHECK_ERROR | 0x84 | +| NVME_SC_COMPARE_FAILURE | 0x85 | +| NVME_SC_ACCESS_DENIED | 0x86 | +| NVME_SC_DEALLOCATED_OR_UNWRITTEN_BLOCK | 0x87 | diff --git a/docs/en/docs/HSAK/hsak_tools_usage.md 
b/docs/en/Server/MemoryandStorage/HSAK/hsak_tool_usage.md similarity index 58% rename from docs/en/docs/HSAK/hsak_tools_usage.md rename to docs/en/Server/MemoryandStorage/HSAK/hsak_tool_usage.md index e7268ba81652164dee22c13c8c7efbada4329ebe..342c01a55218a6290565e29f403e1249bb3120f6 100644 --- a/docs/en/docs/HSAK/hsak_tools_usage.md +++ b/docs/en/Server/MemoryandStorage/HSAK/hsak_tool_usage.md @@ -74,9 +74,9 @@ libstorage-iostat [-t ] [-i ] [-d ] - The I/O statistics are as follows: - | Device | r/s | w/s | rKB/s | wKB/s | avgrq-sz | avgqu-sz | r_await | w_await | await | svctm | util% | poll-n | - | ----------- | ------------------------------ | ------------------------------- | ----------------------------------- | ------------------------------------ | -------------------------------------- | -------------------------- | --------------------- | ---------------------- | ------------------------------- | --------------------------------------- | ------------------ | -------------------------- | - | Device name | Number of read I/Os per second | Number of write I/Os per second | Number of read I/O bytes per second | Number of write I/O bytes per second | Average size of delivered I/Os (bytes) | I/O depth of a drive queue | I/O read latency (μs) | I/O write latency (μs) | Average read/write latency (μs) | Processing latency of a single I/O (μs) | Device utilization | Number of polling timeouts | + | Device | r/s | w/s | rKB/s | wKB/s | avgrq-sz | avgqu-sz | r_await | w_await | await | svctm | util% | poll-n | + | ----------- | ------------------------------ | ------------------------------- | ----------------------------------- | ------------------------------------ | -------------------------------------- | -------------------------- | --------------------- | ---------------------- | ------------------------------- | --------------------------------------- | ------------------ | -------------------------- | + | Device name | Number of read I/Os per second | Number 
of write I/Os per second | Number of read I/O bytes per second | Number of write I/O bytes per second | Average size of delivered I/Os (bytes) | I/O depth of a drive queue | I/O read latency (μs) | I/O write latency (μs) | Average read/write latency (μs) | Processing latency of a single I/O (μs) | Device utilization | Number of polling timeouts | ## Commands for Drive Read/Write Operations @@ -90,34 +90,34 @@ libstorage-rw [OPTIONS...] 1. **COMMAND** parameters - - **read**: reads a specified logical block from the device to the data buffer (standard output by default). + - **read**: reads a specified logical block from the device to the data buffer (standard output by default). - - **write**: writes data in a data buffer (standard input by default) to a specified logical block of the NVMe device. + - **write**: writes data in a data buffer (standard input by default) to a specified logical block of the NVMe device. - - **help**: displays the help information about the command line. + - **help**: displays the help information about the command line. 2. **device**: specifies the PCI address, for example, **0000:09:00.0**. 3. **OPTIONS** parameters - - **--start-block, -s**: indicates the 64-bit start address of the logical block to be read or written. The default value is **0**. + - **--start-block, -s**: indicates the 64-bit start address of the logical block to be read or written. The default value is **0**. - - **--block-count, -c**: indicates the number of the logical blocks to be read or written (counted from 0). + - **--block-count, -c**: indicates the number of the logical blocks to be read or written (counted from 0). - - **--data-size, -z**: indicates the number of bytes of the data to be read or written. + - **--data-size, -z**: indicates the number of bytes of the data to be read or written. - - **--namespace-id, -n**: indicates the namespace ID of the device. The default value is **1**. 
+ - **--namespace-id, -n**: indicates the namespace ID of the device. The default value is **1**. - - **--data, -d**: indicates the data file used for read and write operations (The read data is saved during read operations and the written data is provided during write operations.) + - **--data, -d**: indicates the data file used for read and write operations (the read data is saved during read operations, and the written data is provided during write operations). - - **--limited-retry, -l**: indicates that the device controller restarts for a limited number of times to complete device read and write operations. + - **--limited-retry, -l**: indicates that the device controller restarts for a limited number of times to complete device read and write operations. - - **--force-unit-access, -f**: ensures that read and write operations are completed from the nonvolatile media before the instruction is completed. + - **--force-unit-access, -f**: ensures that read and write operations are completed from the nonvolatile media before the instruction is completed. - - **--show-command, -v**: displays instruction information before sending a read/write command. + - **--show-command, -v**: displays instruction information before sending a read/write command. - - **--dry-run, -w**: displays only information about read and write instructions but does not perform actual read and write operations. + - **--dry-run, -w**: displays only information about read and write instructions but does not perform actual read and write operations. - - **--latency. -t**: collects statistics on the end-to-end read and write latency of the CLI. + - **--latency, -t**: collects statistics on the end-to-end read and write latency of the CLI. - - **--help, -h**: displays the help information about related commands. + - **--help, -h**: displays the help information about related commands. 
diff --git a/docs/en/Server/MemoryandStorage/Menu/index.md b/docs/en/Server/MemoryandStorage/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..03f1007248e93336d9aa1f2edaefd269fb3ec143 --- /dev/null +++ b/docs/en/Server/MemoryandStorage/Menu/index.md @@ -0,0 +1,8 @@ +--- +headless: true +--- + +- [Logical Volume Configuration and Management]({{< relref "./lvm/Menu/index.md" >}}) +- [etmem User Guide]({{< relref "./etmem/Menu/index.md" >}}) +- [GMEM User Guide]({{< relref "./GMEM/Menu/index.md" >}}) +- [HSAK Developer Guide]({{< relref "./HSAK/Menu/index.md" >}}) diff --git a/docs/en/Server/MemoryandStorage/etmem/Menu/index.md b/docs/en/Server/MemoryandStorage/etmem/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..18f9d71a4684fae3bc7073607b6d1843b68a07dd --- /dev/null +++ b/docs/en/Server/MemoryandStorage/etmem/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [etmem User Guide]({{< relref "./etmem-user-guide.md" >}}) diff --git a/docs/en/docs/Administration/memory-management.md b/docs/en/Server/MemoryandStorage/etmem/etmem-user-guide.md similarity index 97% rename from docs/en/docs/Administration/memory-management.md rename to docs/en/Server/MemoryandStorage/etmem/etmem-user-guide.md index 7b1e4947f988016c63acf2247594934267d83f57..6391f9f7f4d8b5d2e5be24b1413d52a7d9e63b07 100644 --- a/docs/en/docs/Administration/memory-management.md +++ b/docs/en/Server/MemoryandStorage/etmem/etmem-user-guide.md @@ -1,8 +1,8 @@ -# etmem for Tiered Memory Expansion +# etmem User Guide ## Introduction -The development of CPU computing power - particularly lower costs of ARM cores - makes memory cost and capacity become the core frustration that restricts business costs and performance. Therefore, the most pressing issue is how to save memory cost and how to expand memory capacity. 
+The development of CPU computing power, particularly the lower cost of ARM cores, has made memory cost and capacity the core constraints on service cost and performance. Therefore, the most pressing issues are how to reduce memory cost and how to expand memory capacity. etmem is a tiered memory expansion technology that uses DRAM+memory compression/high-performance storage media to form tiered memory storage. Memory data is tiered, and cold data is migrated from memory media to high-performance storage media to release memory space and reduce memory costs. @@ -160,7 +160,7 @@ Fields in the configuration files are described as follows: | interval | Time interval for each memory scan | Yes | Yes | 1~1200 | interval=5 // The interval is 5s. | | sleep | Time interval for each memory scan+operation | Yes | Yes | 1~1200 | sleep=10 //The interval is 10s | | sysmem_threshold| Memory swapping threshold. This is a slide engine configuration item. | No | Yes | 0~100 | sysmem_threshold=50 // When available memory is less than 50%, etmem swaps out memory.| -| swapcache_high_wmark| High watermark of swapcache. This is a slide engine configuration item. | No | Yes | 1~100 | swapcache_high_wmark=5 // swapcache can be up to 5% of the system memory. If this ratio is reached, etmem triggers swapcache recycling.
Note: swapcache_high_wmark must be greater than swapcache_low_wmark.| +| swapcache_high_wmark| High watermark of swapcache. This is a slide engine configuration item. | No | Yes | 1~100 | swapcache_high_wmark=5 // swapcache can be up to 5% of the system memory. If this ratio is reached, etmem triggers swapcache recycling.
Note: swapcache_high_wmark must be greater than swapcache_low_wmark.| | swapcache_low_wmark| Low watermark of swapcache. This is a slide engine configuration item. | No | Yes | \[1~swapcache_high_wmark\) | swapcache_low_wmark=3 //When swapcache recycling is triggered, the system recycles the swapcache memory occupancy to less than 3%.| | \[engine\] | Beginning identifier of the engine public configuration section | No | No | N/A | Beginning identifier of the engine parameters, indicating that the parameters below are within the range of the engine section until another \[xxx\] or the end of the file | | project | project to which the engine belongs | Yes | Yes | String of up to 64 characters | If a project named test exists, the item can be **project=test**. | @@ -185,7 +185,7 @@ Fields in the configuration files are described as follows: | anon_only | Scans anonymous pages only. This is a cslide engine configuration item. | No | Yes | yes/no | anon_only=no | | ign_host | Ignores page table scan information on the host. This is a cslide engine configuration item. | No | Yes | yes/no | ign_host=no | | task_private_key | Reserved for a task of a third-party policy to parse private parameters. This is a third-party engine configuration item. | No | No |Restrict according to the third-party policy's private parameters.|Configure the private task parameters according to the third-party policy.| -| swap_threshold | Process memory swapping threshold. This is a slide engine configuration item. | No | Yes | Absolute value of memory available to the process | swap_threshold=10g // Memory swapping will not be triggered when the process memory is less than 10 GB.
Currently, the unit can only be **g** or **G**. This item is used with **sysmem_threshold**. When system memory is lower than **sysmem_threshold**, memory of processes in the allowlist is checked. | +| swap_threshold | Process memory swapping threshold. This is a slide engine configuration item. | No | Yes | Absolute value of memory available to the process | swap_threshold=10g // Memory swapping will not be triggered when the process memory is less than 10 GB.
Currently, the unit can only be **g** or **G**. This item is used with **sysmem_threshold**. When system memory is lower than **sysmem_threshold**, memory of processes in the allowlist is checked. | | swap_flag| Enables process memory swapping. This is a slide engine configuration item. | No | Yes | yes/no | swap_flag=yes | ### Starting etmemd @@ -212,7 +212,7 @@ The `0` parameter of option `-l` and the `etmemd_socket` parameter of option `-s | Option | Description | Mandatory | Contains Parameters | Parameter Range | Example | | --------------- | ---------------------------------- | -------- | ---------- | --------------------- | ------------------------------------------------------------ | -| -l or \-\-log-level | etmemd log level | No | Yes | 0~3 | 0: debug level
1: info level
2: warning level
3: error level
Logs whose levels are higher than the specified value are printed to **/var/log/message**. | +| -l or \-\-log-level | etmemd log level | No | Yes | 0~3 | 0: debug level
1: info level
2: warning level
3: error level
Logs whose levels are higher than the specified value are printed to **/var/log/message**. | | -s or \-\-socket | Socket listened by etmemd to interact with the client | Yes | Yes | String of up to 107 characters | Socket listened by etmemd | | -m or \-\-mode-systemctl| Starts the etmemd service through systemctl | No| No| N/A| The `-m` option needs to be specified in the service file.| | -h or \-\-help | Prints help information | No | No | N/A | This option prints help information and exit. | @@ -251,6 +251,7 @@ When etmemd is running normally, run `etmem` with the `obj` option to perform ad ```bash etmem obj del --file /etc/etmem/slide_conf.yaml --socket etmemd_socket + ``` #### Command Parameters @@ -576,7 +577,7 @@ etmemd -l 0 -s etmemd_socket -m | Option | Description | Mandatory | Contains Parameters | Parameter Range | Example | | --------------- | ---------------------------------- | -------- | ---------- | --------------------- | ------------------------------------------------------------ | -| -l or \-\-log-level | etmemd log level | No | Yes | 0~3 | 0: debug level
1: info level
2: warning level
3: error level
Logs whose levels are higher than the specified value are printed to **/var/log/message**. | +| -l or \-\-log-level | etmemd log level | No | Yes | 0~3 | 0: debug level
1: info level
2: warning level
3: error level
Logs whose levels are higher than the specified value are printed to **/var/log/message**. | | -s or \-\-socket | Socket listened by etmemd to interact with the client | Yes | Yes | String of up to 107 characters | Socket listened by etmemd | | -m or \-\-mode-systemctl| Starts the etmemd service through systemctl | No| No| N/A| The `-m` option needs to be specified in the service file.| | -h or \-\-help | Prints help information | No | No | N/A | This option prints help information and exit. | diff --git a/docs/en/Server/MemoryandStorage/lvm/Menu/index.md b/docs/en/Server/MemoryandStorage/lvm/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..a4694b2eb88ac56d8a83cf5463db365d9799a0b8 --- /dev/null +++ b/docs/en/Server/MemoryandStorage/lvm/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [Managing Drives Through LVM]({{< relref "./managing-drives-through-lvm.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Administration/managing-hard-disks-through-lvm.md b/docs/en/Server/MemoryandStorage/lvm/managing-drives-through-lvm.md similarity index 96% rename from docs/en/docs/Administration/managing-hard-disks-through-lvm.md rename to docs/en/Server/MemoryandStorage/lvm/managing-drives-through-lvm.md index a51290929f1a51d04b4e819b863f0f099efe8020..005acdd62e13a92f6ea61182483d5b8f77cc2610 100644 --- a/docs/en/docs/Administration/managing-hard-disks-through-lvm.md +++ b/docs/en/Server/MemoryandStorage/lvm/managing-drives-through-lvm.md @@ -1,4 +1,5 @@ # Managing Drives Through LVM + - [Managing Drives Through LVM](#managing-drives-through-lvm) @@ -60,10 +61,10 @@ When drives are managed using LVM, file systems are distributed on multiple driv ## Installing the LVM ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The LVM has been installed on the openEuler OS by default. You can run the **rpm -qa | grep lvm2** command to check whether it is installed. 
If the command output contains "lvm2", the LVM has been installed. In this case, skip this section. If no information is output, the LVM is not installed. Install it by referring to this section. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> The LVM has been installed on the openEuler OS by default. You can run the **rpm -qa | grep lvm2** command to check whether it is installed. If the command output contains "lvm2", the LVM has been installed. In this case, skip this section. If no information is output, the LVM is not installed. Install it by referring to this section. -1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md). +1. Configure the local yum source. For details, see [Configuring the Repo Server](../../Administration/Administrator/configuring-the-repo-server.md). 2. Clear the cache. ```bash @@ -368,8 +369,8 @@ In the preceding information: - _lvname_: device file corresponding to the LV whose attributes are to be displayed. If this option is not set, attributes of all LVs are displayed. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Device files corresponding to LVs are stored in the VG directory. For example, if LV **lv1** is created in VG **vg1**, the device file corresponding to **lv1** is **/dev/vg1/lv1**. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > Device files corresponding to LVs are stored in the VG directory. For example, if LV **lv1** is created in VG **vg1**, the device file corresponding to **lv1** is **/dev/vg1/lv1**. 
Example: Run the following command to display the basic information about LV **lv1**: diff --git a/docs/en/docs/Virtualization/public_sys-resources/icon-note.gif b/docs/en/Server/MemoryandStorage/lvm/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/Virtualization/public_sys-resources/icon-note.gif rename to docs/en/Server/MemoryandStorage/lvm/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/Administration/overview.md b/docs/en/Server/MemoryandStorage/overview.md similarity index 42% rename from docs/en/docs/Administration/overview.md rename to docs/en/Server/MemoryandStorage/overview.md index b802495f1d8859189e2038d5ee423f9f0f602ca5..242fb83804d813152c27bef9e0958025f982ab55 100644 --- a/docs/en/docs/Administration/overview.md +++ b/docs/en/Server/MemoryandStorage/overview.md @@ -4,129 +4,129 @@ The memory is an important component of a computer, and is used to temporarily store operation data in the CPU and data exchanged with an external memory such as hardware. In particular, a non-uniform memory access architecture (NUMA) is a memory architecture designed for a multiprocessor computer. The memory access time depends on the location of the memory relative to the processor. In NUMA mode, a processor accesses the local memory faster than the non-local memory (the memory is located in another processor or shared between processors). -## Viewing Memory +## Memory Monitoring -1. **free**: displays the system memory status. +1. `free`: displays the system memory status. Example: - ```bash + ```shell # Display the system memory status in MB. 
free -m ``` The output is as follows: - ```text + ```shell [root@openEuler ~]# free -m total used free shared buff/cache available Mem: 2633 436 324 23 2072 2196 Swap: 4043 0 4043 ``` - The fields in the command output are described as follows: + The fields in the command output are as follows: - |Field|Description| - |--|--| - |total|Total memory size.| - |used|Used memory.| - |free|Free memory.| - |shared|Total memory shared by multiple processes.| - |buff/cache|Total number of buffers and caches.| - |available|Estimated available memory to start a new application without swapping.| + | Field | Description | + | ---------- | ----------------------------------------------------------------------- | + | total | Total memory size. | + | used | Used memory. | + | free | Free memory. | + | shared | Total memory shared by multiple processes. | + | buff/cache | Total number of buffers and caches. | + | available | Estimated available memory to start a new application without swapping. | -2. **vmstat**: dynamically monitors the system memory and views the system memory usage. +2. `vmstat`: dynamically monitors the system memory and views the system memory usage. Example: - ```bash + ```shell # Monitor the system memory and display active and inactive memory. vmstat -a ``` The output is as follows: - ```text + ```shell [root@openEuler ~]# vmstat -a procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- r b swpd free inact active si so bi bo in cs us sy id wa st 2 0 520 331980 1584728 470332 0 0 0 2 15 19 0 0 100 0 0 ``` - In the command output, the field related to the memory is described as follows: + In the command output, the field related to the memory is as follows: - |Field|Description| - |--|--| - |memory|Memory information.
**-swpd**: usage of the virtual memory, in KB.
**-free**: free memory capacity, in KB.
**-inact**: inactive memory capacity, in KB.
**-active**: active memory capacity, in KB.| + | Field | Description | + | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | memory | Memory information.
**-swpd**: usage of the virtual memory, in KB.
**-free**: free memory capacity, in KB.
**-inact**: inactive memory capacity, in KB.
**-active**: active memory capacity, in KB. | -3. **sar**: monitors the memory usage of the system. +3. `sar`: monitors the memory usage of the system. Example: - ```bash + ```shell # Monitor the memory usage in the sampling period in the system. Collect the statistics every two seconds for three times. sar -r 2 3 ``` The output is as follows: - ```text + ```shell [root@openEuler ~]# sar -r 2 3 04:02:09 PM kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kb dirty - 04:02:11 PM 332180 2249308 189420 7.02 142172 1764312 787948 11.52 470404 1584924 + 04:02:11 PM 332180 2249308 189420 7.02 142172 1764312 787948 11.52 470404 1584924 36 - 04:02:13 PM 332148 2249276 189452 7.03 142172 1764312 787948 11.52 470404 1584924 + 04:02:13 PM 332148 2249276 189452 7.03 142172 1764312 787948 11.52 470404 1584924 36 - 04:02:15 PM 332148 2249276 189452 7.03 142172 1764312 787948 11.52 470404 1584924 + 04:02:15 PM 332148 2249276 189452 7.03 142172 1764312 787948 11.52 470404 1584924 36 - Average: 332159 2249287 189441 7.03 142172 1764312 787948 11.52 470404 1584924 + Average: 332159 2249287 189441 7.03 142172 1764312 787948 11.52 470404 1584924 36 ``` The fields in the command output are described as follows: - |Field|Description| - |--|--| - |kbmemfree|Unused memory space.| - |kbmemused|Used memory space.| - |%memused|Percentage of the used space.| - |kbbuffers|Amount of data stored in the buffer.| - |kbcached|Data access volume in all domains of the system.| + | Field | Description | + | --------- | ------------------------------------------------ | + | kbmemfree | Unused memory space. | + | kbmemused | Used memory space. | + | %memused | Percentage of the used space. | + | kbbuffers | Amount of data stored in the buffer. | + | kbcached | Data access volume in all domains of the system. | -4. **numactl**: displays the NUMA node configuration and status. +4. `numactl`: displays the NUMA node configuration and status. 
Example: - ```bash + ```shell # Check the current NUMA configuration. numactl -H ``` The output is as follows: - ```text + ```shell [root@openEuler ~]# numactl -H available: 1 nodes (0) node 0 cpus: 0 1 2 3 node 0 size: 2633 MB node 0 free: 322 MB node distances: - node 0 - 0: 10 + node 0 + 0: 10 ``` The server contains one NUMA node, which has four cores and 2633 MB of memory. The command also displays the distance between NUMA nodes. The greater the distance, the higher the latency of cross-node memory accesses, which should be avoided as much as possible. - **numastat**: displays NUMA node status. + `numastat`: displays NUMA node status. - ```bash + ```shell # Check the NUMA node status. numastat ``` - ```text + ```shell [root@openEuler ~]# numastat node0 numa_hit 5386186 @@ -134,16 +134,16 @@ The memory is an important component of a computer, and is used to temporarily s numa_foreign 0 interleave_hit 17483 local_node 5386186 - other_node 0 + other_node 0 ``` - The the fields in the command output and their meanings are as follows: + The fields in the `numastat` command output are described as follows: - |Field|Description| - |--|--| - |numa_hit|Number of times that the CPU core accesses the local memory on a node.| - |numa_miss|Number of times that the core of a node accesses the memory of other nodes.| - |numa_foreign|Number of pages that were allocated to the local node but moved to other nodes.
Each numa_foreign corresponds to a numa_miss event.| - |interleave_hit|Number of pages of the interleave policy that are allocated to this node.| - |local_node|Size of memory that was allocated to this node by processes on this node.| - |other_node|Size of memory that was allocated to other nodes by processes on this node.| + | Field | Description | + | -------------- | ----------------------------------------------------------------------------------------------------------------------------------- | + | numa_hit | Number of times that the CPU core accesses the local memory on a node. | + | numa_miss | Number of times that the core of a node accesses the memory of other nodes. | + | numa_foreign | Number of pages that were allocated to the local node but moved to other nodes. Each numa_foreign corresponds to a numa_miss event. | + | interleave_hit | Number of pages of the interleave policy that are allocated to this node. | + | local_node | Size of memory that was allocated to this node by processes on this node. | + | other_node | Size of memory that was allocated to other nodes by processes on this node. 
| diff --git a/docs/en/Server/Menu/index.md b/docs/en/Server/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..c39ecb7838889693cd84a846d59f4310327b4f65 --- /dev/null +++ b/docs/en/Server/Menu/index.md @@ -0,0 +1,15 @@ +--- +headless: true +--- +- [Introduction]({{< relref "./Releasenotes/Menu/index.md" >}}) +- [Quick Start]({{< relref "./Quickstart/Menu/index.md" >}}) +- [Installation and Upgrade]({{< relref "./InstallationUpgrade/Menu/index.md" >}}) +- [OS Administration]({{< relref "./Administration/Menu/index.md" >}}) +- [Maintenance]({{< relref "./Maintenance/Menu/index.md" >}}) +- [Security]({{< relref "./Security/Menu/index.md" >}}) +- [Memory and Storage]({{< relref "./MemoryandStorage/Menu/index.md" >}}) +- [Network]({{< relref "./Network/Menu/index.md" >}}) +- [Performance]({{< relref "./Performance/Menu/index.md" >}}) +- [Development]({{< relref "./Development/Menu/index.md" >}}) +- [High Availability]({{< relref "./HighAvailability/Menu/index.md" >}}) +- [Diversified Computing]({{< relref "./DiversifiedComputing/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Network/Gazelle/Menu/index.md b/docs/en/Server/Network/Gazelle/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..79a1d19864692271efe9ceacd7a0a6569871f540 --- /dev/null +++ b/docs/en/Server/Network/Gazelle/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [Gazelle User Guide]({{< relref "./gazelle-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Gazelle/Gazelle.md b/docs/en/Server/Network/Gazelle/gazelle-user-guide.md similarity index 55% rename from docs/en/docs/Gazelle/Gazelle.md rename to docs/en/Server/Network/Gazelle/gazelle-user-guide.md index ed174db457a30ba51bbc2d541fb8e3bf38dfeae7..747fe969031a3406a966caf4065992a986e3126d 100644 --- a/docs/en/docs/Gazelle/Gazelle.md +++ b/docs/en/Server/Network/Gazelle/gazelle-user-guide.md @@ -5,11 +5,11 @@ Gazelle is a 
high-performance user-mode protocol stack. It directly reads and writes NIC packets in user mode based on DPDK, transmits the packets through shared hugepage memory, and uses the LwIP protocol stack. Gazelle greatly improves the network I/O throughput of applications and accelerates the network for databases such as MySQL and Redis. - High Performance -Zero-copy and lock-free packets that can be flexibly scaled out and scheduled adaptively. + Zero-copy and lock-free packets that can be flexibly scaled out and scheduled adaptively. - Universality -Compatible with POSIX without modification, and applicable to different types of applications. + Compatible with POSIX without modification, and applicable to different types of applications. -In the single-process scenario where the NIC supports multiple queues, use **liblstack.so** only to shorten the packet path. In other scenarios, use the ltran process to distribute packets to each thread. +In the single-process scenario where the NIC supports multiple queues, use **liblstack.so** only to shorten the packet path. ## Installation @@ -33,21 +33,7 @@ To configure the operating environment and use Gazelle to accelerate application ### 1. Installing the .ko File as the root User -Install the .ko files based on the site requirements to enable the virtual network ports and bind NICs to the user-mode driver. -To enable the virtual network port function, use **rte_kni.ko**. - -```sh -modprobe rte_kni carrier="on" -``` - -Configure NetworkManager not to manage the KNI NIC. - -```sh -$ cat /etc/NetworkManager/conf.d/99-unmanaged-devices.conf -[keyfile] -unmanaged-devices=interface-name:kni -$ systemctl reload NetworkManager -``` +Install the .ko files based on the site requirements to bind NICs to the user-mode driver. Bind the NIC from the kernel driver to the user-mode driver. Choose one of the following .ko files based on the site requirements.
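For reference, a typical bind sequence with the vfio-pci user-mode driver might look as follows. This is a sketch only: the interface name `enp3s0` and the PCI address `0000:03:00.0` are placeholders, and the driver choice depends on your platform.

```sh
# Load the chosen user-mode driver (vfio-pci is one common choice).
modprobe vfio-pci

# List NICs and their PCI addresses to find the target device.
dpdk-devbind.py --status

# Take the NIC down in the kernel, then bind it to the user-mode driver.
ip link set enp3s0 down
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0
```

To return the NIC to the kernel driver later, exit Gazelle first and bind it back with `dpdk-devbind.py --bind=<kernel-driver> 0000:03:00.0`.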
@@ -96,21 +82,15 @@ Run the **cat** command to query the actual number of reserved pages. If the con ### 4. Mounting Memory Huge Pages -Create two directories for the lstack and ltran processes to access the memory huge pages. Run the following commands: +Create a directory for the lstack process to access the memory huge pages. Run the following commands: ```sh -mkdir -p /mnt/hugepages-ltran mkdir -p /mnt/hugepages-lstack -chmod -R 700 /mnt/hugepages-ltran chmod -R 700 /mnt/hugepages-lstack -mount -t hugetlbfs nodev /mnt/hugepages-ltran -o pagesize=2M mount -t hugetlbfs nodev /mnt/hugepages-lstack -o pagesize=2M ``` ->NOTE: -The huge pages mounted to **/mnt/hugepages-ltran** and **/mnt/hugepages-lstack** must be in the same page size. - ### 5. Enabling Gazelle for an Application Enable Gazelle for an application using either of the following methods as required. @@ -126,18 +106,18 @@ gcc test.c -o test ${LSTACK_LIBS} ``` - Use the **LD_PRELOAD** environment variable to load the Gazelle library. -Use the **GAZELLE_BIND_PROCNAME** environment variable to specify the process name, and **LD_PRELOAD** to specify the Gazelle library path. + Use the **GAZELLE_BIND_PROCNAME** environment variable to specify the process name, and **LD_PRELOAD** to specify the Gazelle library path. -```sh -GAZELLE_BIND_PROCNAME=test LD_PRELOAD=/usr/lib64/liblstack.so ./test -``` + ```sh + GAZELLE_BIND_PROCNAME=test LD_PRELOAD=/usr/lib64/liblstack.so ./test + ``` - Use the **GAZELLE_THREAD_NAME** environment variable to specify the thread bound to Gazelle. -If only one thread of a multi-thread process meets the conditions for using Gazelle, use **GAZELLE_THREAD_NAME** to specify the thread for using Gazelle. Other threads use kernel-mode protocol stack. + If only one thread of a multi-thread process meets the conditions for using Gazelle, use **GAZELLE_THREAD_NAME** to specify the thread for using Gazelle. Other threads use kernel-mode protocol stack. 
-```sh -GAZELLE_BIND_PROCNAME=test GAZELLE_THREAD_NAME=test_thread LD_PRELOAD=/usr/lib64/liblstack.so ./test -``` + ```sh + GAZELLE_BIND_PROCNAME=test GAZELLE_THREAD_NAME=test_thread LD_PRELOAD=/usr/lib64/liblstack.so ./test + ``` ### 6. Configuring Gazelle @@ -145,24 +125,29 @@ GAZELLE_BIND_PROCNAME=test GAZELLE_THREAD_NAME=test_thread LD_PRELOAD=/usr/lib64 |Options|Value|Remarks| |:---|:---|:---| -|dpdk_args|--socket-mem (mandatory)
--huge-dir (mandatory)
--proc-type (mandatory)
--legacy-mem
--map-perfect
-d|DPDK initialization parameter. For details, see the DPDK description.
**--map-perfect** is an extended feature. It is used to prevent the DPDK from occupying excessive address space and ensure that extra address space is available for lstack.
The **-d** option is used to load the specified .so library file.| +|dpdk_args|--socket-mem (mandatory)
--huge-dir (mandatory)
--proc-type (mandatory)
--legacy-mem
--map-perfect
-d|DPDK initialization parameter. For details, see the DPDK description.
**--map-perfect** is an extended feature. It is used to prevent the DPDK from occupying excessive address space and ensure that extra address space is available for lstack.
The **-d** option is used to load the specified .so library file.| |listen_shadow| 0/1 | Whether to use the shadow file descriptor for listening. This function is enabled when there is a single listen thread and multiple protocol stack threads.| -|use_ltran| 0/1 | Whether to use ltran.| +|use_ltran| 0/1 | Whether to use ltran. This parameter is no longer supported.| |num_cpus|"0,2,4 ..."|IDs of the CPUs bound to the lstack threads. The number of IDs is the number of lstack threads (less than or equal to the number of NIC queues). You can select CPUs by NUMA nodes.| -|num_wakeup|"1,3,5 ..."|IDs of the CPUs bound to the wakeup threads. The number of IDs is the number of wakeup threads, which is the same as the number of lstack threads. Select CPUs of the same NUMA nodes of the **num_cpus** parameter respectively. If this parameter is not set, the wakeup thread is not used.| |low_power_mode|0/1|Whether to enable the low-power mode. This parameter is not supported currently.| -|kni_switch|0/1|Whether to enable the rte_kni module. The default value is **0**. This module can be enabled only when ltran is not used.| +|kni_switch|0/1|Whether to enable the rte_kni module. The default value is **0**. This parameter is no longer supported.| |unix_prefix|"string"|Prefix string of the Unix socket file used for communication between Gazelle processes. By default, this parameter is left blank. The value must be the same as the value of **unix_prefix** in **ltran.conf** of the ltran process that participates in communication, or the value of the **-u** option for `gazellectl`. The value cannot contain special characters and can contain a maximum of 128 characters.| |host_addr|"192.168.xx.xx"|IP address of the protocol stack, which is also the IP address of the application.| |mask_addr|"255.255.xx.xx"|Subnet mask.| |gateway_addr|"192.168.xx.1"|Gateway address.| -|devices|"aa:bb:cc:dd:ee:ff"|MAC address for NIC communication. 
The value must be the same as that of **bond_macs** in the **ltran.conf** file.| +|devices|"aa:bb:cc:dd:ee:ff"|MAC address for NIC communication. The NIC is used as the primary bond NIC in bond 1 mode. | |app_bind_numa|0/1|Whether to bind the epoll and poll threads of an application to the NUMA node where the protocol stack is located. The default value is 1, indicating that the threads are bound.| |send_connect_number|4|Number of connections for sending packets in each protocol stack loop. The value is a positive integer.| |read_connect_number|4|Number of connections for receiving packets in each protocol stack loop. The value is a positive integer.| |rpc_number|4|Number of RPC messages processed in each protocol stack loop. The value is a positive integer.| |nic_read_num|128|Number of data packets read from the NIC in each protocol stack cycle. The value is a positive integer.| -|mbuf_pool_size|1024000|Size of the mbuf address pool applied for during initialization. Set this parameter based on the NIC configuration. The value must be a positive integer less than 5120000 and not too small, otherwise the startup fails.| +|bond_mode|-1|Bond mode. Currently, two network ports can be bonded. The default value is -1, indicating that the bond mode is disabled. bond1/4/6 is supported.| +|bond_slave_mac|"aa:bb:cc:dd:ee:ff;AA:BB:CC:DD:EE:FF"|MAC addresses of the bond network ports. Separate the MAC addresses with semicolons (;).| +|bond_miimon|10|Listening interval in bond mode. The default value is 10. The value ranges from 0 to 1500.| +|udp_enable|0/1|Whether to enable the UDP function. The default value is 1.| +|nic_vlan_mode|-1|Whether to enable the VLAN mode. The default value is -1, indicating that the VLAN mode is disabled. The value ranges from -1 to 4095. IDs 0 and 4095 are commonly reserved in the industry and have no actual effect.| +|tcp_conn_count|1500|Maximum number of TCP connections. 
The value of this parameter multiplied by **mbuf_count_per_conn** is the size of the mbuf pool applied for during initialization. If the value is too small, the startup fails. The value of (**tcp_conn_count** x **mbuf_count_per_conn** x 2048) cannot be greater than the huge page size.| +|mbuf_count_per_conn|170|Number of mbuf required by each TCP connection. The value of this parameter multiplied by **tcp_conn_count** is the size of the mbuf address pool applied for during initialization. If the value is too small, the startup fails. The value of (**tcp_conn_count** x **mbuf_count_per_conn** x 2048) cannot be greater than the huge page size.| lstack.conf example: @@ -187,104 +172,53 @@ read_connect_number=4 rpc_number=4 nic_read_num=128 mbuf_pool_size=1024000 -``` - -- The **ltran.conf** file is used to specify ltran startup parameters. The default path is **/etc/gazelle/ltran.conf**. To enable ltran, set **use_ltran=1** in the **lstack.conf** file. The configuration parameters are as follows: - -|Options|Value|Remarks| -|:---|:---|:---| -|forward_kit|"dpdk"|Specified transceiver module of an NIC.
This field is reserved and is not used currently.| -|forward_kit_args|-l
--socket-mem (mandatory)
--huge-dir (mandatory)
--proc-TYPE (mandatory)
--legacy-mem (mandatory)
--map-perfect (mandatory)
-d|DPDK initialization parameter. For details, see the DPDK description.
**--map-perfect** is an extended feature. It is used to prevent the DPDK from occupying excessive address space and ensure that extra address space is available for lstack.
The **-d** option is used to load the specified .so library file.| -|kni_switch|0/1|Whether to enable the rte_kni module. The default value is **0**.| -|unix_prefix|"string"|Prefix string of the Unix socket file used for communication between Gazelle processes. By default, this parameter is left blank. The value must be the same as the value of **unix_prefix** in **lstack.conf** of the lstack process that participates in communication, or the value of the **-u** option for `gazellectl`.| -|dispatch_max_clients|n|Maximum number of clients supported by ltran.
The total number of lstack protocol stack threads cannot exceed 32.| -|dispatch_subnet|192.168.xx.xx|Subnet mask, which is the subnet segment of the IP addresses that can be identified by ltran. The value is an example. Set the subnet based on the site requirements.| -|dispatch_subnet_length|n|Length of the Subnet that can be identified by ltran. For example, if the value of length is 4, the value ranges from 192.168.1.1 to 192.168.1.16.| -|bond_mode|n|Bond mode. Currently, only Active Backup(Mode1) is supported. The value is 1.| -|bond_miimon|n|Bond link monitoring time. The unit is millisecond. The value ranges from 1 to 2^64 - 1 - (1000 x 1000).| -|bond_ports|"0x01"|DPDK NIC to be used. The value **0x01** indicates the first NIC.| -|bond_macs|"aa:bb:cc:dd:ee:ff"|MAC address of the bound NIC, which must be the same as the MAC address of the KNI.| -|bond_mtu|n|Maximum transmission unit. The default and maximum value is 1500. The minimum value is 68.| - -ltran.conf example: - -```sh -forward_kit_args="-l 0,1 --socket-mem 1024,0,0,0 --huge-dir /mnt/hugepages-ltran --proc-type primary --legacy-mem --map-perfect --syslog daemon" -forward_kit="dpdk" - -kni_switch=0 - -dispatch_max_clients=30 -dispatch_subnet="192.168.1.0" -dispatch_subnet_length=8 - bond_mode=1 -bond_mtu=1500 -bond_miimon=100 -bond_macs="aa:bb:cc:dd:ee:ff" -bond_ports="0x1" - -tcp_conn_scan_interval=10 +bond_slave_mac="aa:bb:cc:dd:ee:ff;AA:BB:CC:DD:EE:FF" +udp_enable=1 +nic_vlan_mode=-1 ``` -### 7. Starting an Application - -- Start the ltran process. -If there is only one process and the NIC supports multiple queues, the NIC multi-queue is used to distribute packets to each thread. You do not need to start the ltran process. Set the value of **use_ltran** in the **lstack.conf** file to **0**. -If you do not use `-config-file` to specify a configuration file when starting ltran, the default configuration file path **/etc/gazelle/ltran.conf** is used. +- The ltran mode is deprecated. 
If multiple processes are required, try the virtual network mode using SR-IOV network hardware. -```sh -ltran --config-file ./ltran.conf -``` +### 7. Starting an Application - Start the application. -If the environment variable **LSTACK_CONF_PATH** is not used to specify the configuration file before the application is started, the default configuration file path **/etc/gazelle/lstack.conf** is used. + If the environment variable **LSTACK_CONF_PATH** is not used to specify the configuration file before the application is started, the default configuration file path **/etc/gazelle/lstack.conf** is used. -```sh -export LSTACK_CONF_PATH=./lstack.conf -LD_PRELOAD=/usr/lib64/liblstack.so GAZELLE_BIND_PROCNAME=redis-server redis-server redis.conf -``` + ```sh + export LSTACK_CONF_PATH=./lstack.conf + LD_PRELOAD=/usr/lib64/liblstack.so GAZELLE_BIND_PROCNAME=redis-server redis-server redis.conf + ``` ### 8. APIs Gazelle wraps the POSIX interfaces of the application. The code of the application does not need to be modified. -### 9. Commissioning Commands - -- If the ltran mode is not used, the **gazellectl ltran xxx** and **gazellectl lstack show {ip | pid} -r** commands are not supported. +### 9. 
Debugging Commands ```sh Usage: gazellectl [-h | help] - or: gazellectl ltran {quit | show | set} [LTRAN_OPTIONS] [time] [-u UNIX_PREFIX] or: gazellectl lstack {show | set} {ip | pid} [LSTACK_OPTIONS] [time] [-u UNIX_PREFIX] - quit ltran process exit - - where LTRAN_OPTIONS := - show ltran all statistics - -r, rate show ltran statistics per second - -i, instance show ltran instance register info - -b, burst show ltran NIC packet len per second - -l, latency show ltran latency - set: - loglevel {error | info | debug} set ltran loglevel - where LSTACK_OPTIONS := show lstack all statistics -r, rate show lstack statistics per second -s, snmp show lstack snmp -c, connect show lstack connect -l, latency show lstack latency + -x, xstats show lstack xstats + -k, nic-features show state of protocol offload and other features + -a, aggregation [time] show lstack send/recv aggregation set: loglevel {error | info | debug} set lstack loglevel lowpower {0 | 1} set lowpower enable [time] measure latency time default 1S ``` The `-u` option specifies the prefix of the Unix socket for communication between Gazelle processes. The value of this parameter must be the same as that of **unix_prefix** in the **lstack.conf** file. **Packet Capturing Tool** -The NIC used by Gazelle is managed by DPDK. Therefore, tcpdump cannot capture Gazelle packets. As a substitute, Gazelle uses gazelle-pdump provided in the dpdk-tools software package as the packet capturing tool. gazelle-pdump uses the multi-process mode of DPDK to share memory with the lstack or ltran process. In ltran mode, gazelle-pdump can capture only ltran packets that directly communicate with the NIC. By filtering tcpdump data packets, gazelle-pdump can filter packets of a specific lstack process.
+The NIC used by Gazelle is managed by DPDK. Therefore, tcpdump cannot capture Gazelle packets. As a substitute, Gazelle uses gazelle-pdump provided in the dpdk-tools software package as the packet capturing tool. gazelle-pdump uses the multi-process mode of DPDK to share memory with the lstack process. [Usage](https://gitee.com/openeuler/gazelle/blob/master/doc/pdump/pdump.md) **Thread Binding** @@ -309,11 +243,9 @@ Restrictions of Gazelle are as follows: - Blocking **accept()** or **connect()** is not supported. - A maximum of 1500 TCP connections are supported. -- Currently, only TCP, ICMP, ARP, and IPv4 are supported. +- Currently, only TCP, ICMP, ARP, IPv4, and UDP are supported. - When a peer end pings Gazelle, the specified packet length must be less than or equal to 14,000 bytes. - Transparent huge pages are not supported. -- ltran does not support the hybrid bonding of multiple types of NICs. -- The active/standby mode (bond1 mode) of ltran supports active/standby switchover only when a fault occurs at the link layer (for example, the network cable is disconnected), but does not support active/standby switchover when a fault occurs at the physical layer (for example, the NIC is powered off or removed). - VM NICs do not support multiple queues. ### Operation Restrictions @@ -321,24 +253,25 @@ Restrictions of Gazelle are as follows: - By default, the command lines and configuration files provided by Gazelle require **root** permissions. Privilege escalation and changing of file owner are required for non-root users. - To bind the NIC from the user-mode driver back to the kernel driver, you must exit Gazelle first. - Memory huge pages cannot be remounted to subdirectories created in the mount point. -- The minimum huge page memory required by ltran is 1 GB. - The minimum hugepage memory of each application instance protocol stack thread is 800 MB. - Gazelle supports only 64-bit OSs.
- The `-march=native` option is used when building the x86 version of Gazelle to optimize Gazelle based on the CPU instruction set of the build environment (Intel® Xeon® Gold 5118 CPU @ 2.30GHz). Therefore, the CPU of the operating environment must support the SSE4.2, AVX, AVX2, and AVX-512 instruction set extensions. - The maximum number of IP fragments is 10 (the maximum ping packet length is 14,790 bytes). TCP does not use IP fragments. - You are advised to set the **rp_filter** parameter of the NIC to 1 using the `sysctl` command. Otherwise, the Gazelle protocol stack may not be used as expected. Instead, the kernel protocol stack is used. -- If ltran is not used, the KNI cannot be configured to be used only for local communication. In addition, you need to configure the NetworkManager not to manage the KNI network adapter before starting Gazelle. -- The IP address and MAC address of the virtual KNI must be the same as those in the **lstack.conf** file. +- The hybrid bonding of multiple types of NICs is not supported. +- The active/standby mode (bond1 mode) supports active/standby switchover only when a fault occurs at the link layer (for example, the network cable is disconnected), but does not support active/standby switchover when a fault occurs at the physical layer (for example, the NIC is powered off or removed). +- If the length of UDP packets to be sent exceeds 45952 (32 x 1436) bytes, increase the value of **send_ring_size** to at least 64. ## Precautions You need to evaluate the use of Gazelle based on application scenarios. - + +The ltran mode and kni module are no longer supported due to changes in the dependencies and upstream community. + **Shared Memory** - Current situation: - The memory huge pages are mounted to the **/mnt/hugepages-lstack** directory. During process initialization, files are created in the **/mnt/hugepages-lstack** directory. Each file corresponds to a huge page, and the mmap function is performed on the files.
After receiving the registration information of lstask, ltran configures the files in the **mmap** directory of the information page based on the huge page memory configurations, implementing shared huge page memory. - The procedure also applies to the files in the **/mnt/hugepages-ltran** directory. + The memory huge pages are mounted to the **/mnt/hugepages-lstack** directory. During process initialization, files are created in the **/mnt/hugepages-lstack** directory. Each file corresponds to a huge page, and the mmap function is performed on the files. - Current mitigation measures The huge page file permission is **600**. Only the owner can access the files. The default owner is the **root** user. Other users can be configured. Huge page files are locked by DPDK and cannot be directly written or mapped. @@ -349,4 +282,4 @@ You need to evaluate the use of Gazelle based on application scenarios. Gazelle does not limit the traffic. Users can send packets at the maximum NIC line rate to the network, which may congest the network. **Process Spoofing** -If two lstack processes A and B are legitimately registered with ltran, A can impersonate B to send spoofing messages to ltran and modify the ltran forwarding control information. As a result, the communication of B becomes abnormal, and information leakage occurs when packets for B are sent to A. Ensure that all lstack processes are trusted. +Ensure that all lstack processes are trusted. 
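The mbuf sizing rule from the configuration table (**tcp_conn_count** x **mbuf_count_per_conn** x 2048 bytes must not exceed the mounted huge page memory) can be sanity-checked with a quick shell calculation. The values 1500 and 170 below are the defaults from this guide's configuration table:

```sh
# Defaults from the lstack.conf configuration table.
tcp_conn_count=1500
mbuf_count_per_conn=170

# Bytes of mbuf pool requested during initialization.
bytes=$(( tcp_conn_count * mbuf_count_per_conn * 2048 ))

# Number of 2 MB huge pages needed (rounded up).
page_size=$(( 2 * 1024 * 1024 ))
pages=$(( (bytes + page_size - 1) / page_size ))

echo "$bytes bytes -> $pages x 2 MB huge pages (~$(( bytes / 1024 / 1024 )) MB)"
# prints: 522240000 bytes -> 250 x 2 MB huge pages (~498 MB)
```

So the default configuration alone consumes roughly 500 MB of the mounted huge pages; reserve more pages when increasing **tcp_conn_count** or **mbuf_count_per_conn**.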
diff --git a/docs/en/Server/Network/Menu/index.md b/docs/en/Server/Network/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..00d2204b032559bdb4e74f102546cca8dcb0b6cf --- /dev/null +++ b/docs/en/Server/Network/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- + +- [Network Configuration]({{< relref "./NetworConfig/Menu/index.md" >}}) +- [Gazelle User Guide]({{< relref "./Gazelle/Menu/index.md" >}}) diff --git a/docs/en/Server/Network/NetworConfig/Menu/index.md b/docs/en/Server/Network/NetworConfig/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..21d4dcfa2d9abd3499b0611adf8753cc27b108ed --- /dev/null +++ b/docs/en/Server/Network/NetworConfig/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [Network Configuration]({{< relref "./network-configuration.md" >}}) diff --git a/docs/en/docs/Administration/configuring-the-network.md b/docs/en/Server/Network/NetworConfig/network-configuration.md similarity index 93% rename from docs/en/docs/Administration/configuring-the-network.md rename to docs/en/Server/Network/NetworConfig/network-configuration.md index 5fe150f1b54293fede7db9a575787bd71e379326..ad4b16648d208f36bfafb7dcb2a66ff240b01127 100644 --- a/docs/en/docs/Administration/configuring-the-network.md +++ b/docs/en/Server/Network/NetworConfig/network-configuration.md @@ -1,31 +1,11 @@ # Configuring the Network - - -- [Configuring the Network](#configuring-the-network) - - [Configuring an IP Address](#configuring-an-ip-address) - - [Using the nmcli Command](#using-the-nmcli-command) - - [Using the ip Command](#using-the-ip-command) - - [Configuring the Network Through the ifcfg File](#configuring-the-network-through-the-ifcfg-file) - - [Configuring a Host Name](#configuring-a-host-name) - - [Introduction](#introduction) - - [Configuring a Host Name by Running the hostnamectl Command](#configuring-a-host-name-by-running-the-hostnamectl-command) - - [Configuring a Host Name by Running the 
nmcli Command](#configuring-a-host-name-by-running-the-nmcli-command) - - [Configuring Network Bonding](#configuring-network-bonding) - - [Running the nmcli Command](#running-the-nmcli-command) - - [Configuring Network Bonding by Using a Command Line](#configuring-network-bonding-by-using-a-command-line) - - [IPv6 Differences \(vs IPv4\)](#ipv6-differences-vs-ipv4) - - [Restrictions](#restrictions) - - [Configuration Description](#configuration-description) - - [FAQs](#faqs) - - ## Configuring an IP Address ### Using the nmcli Command ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The network configuration configured by running the **nmcli** command takes effect immediately and will not be lost after the system restarts. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> The network configuration configured by running the **nmcli** command takes effect immediately and will not be lost after the system restarts. #### Introduction to nmcli @@ -108,8 +88,8 @@ enp3s0 c88d7b69-f529-35ca-81ab-aa729ac542fd ethernet enp3s0 virbr0 ba552da6-f014-49e3-91fa-ec9c388864fa bridge virbr0 ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->In the command output, **NAME** indicates the connection ID \(name\). +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> In the command output, **NAME** indicates the connection ID \(name\). After a network connection is added, the corresponding configuration file is generated and associated with the corresponding device. To check for available devices, run the following command: @@ -175,8 +155,8 @@ To add a static IPv4 network connection, run the following command: nmcli connection add type ethernet con-name connection-name ifname interface-name ip4 address gw4 address ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->To add an IPv6 address and related gateway information, use the **ip6** and **gw6** options. 
+> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> To add an IPv6 address and related gateway information, use the **ip6** and **gw6** options. For example, to create a static connection configuration file named **net-static**, run the following command as the **root** user: @@ -274,30 +254,30 @@ nmcli --ask device wifi connect "$SSID" **Method 2: Connect to the Wi-Fi network using the configuration file.** -1,Run the following command to check for available Wi-Fi access points: +1. Run the following command to check for available Wi-Fi access points: -```shell -nmcli dev wifi list -``` + ```shell + nmcli dev wifi list + ``` -2,Run the following command to generate a static IP address configuration that allows Wi-Fi connections automatically allocated by the DNS: +2. Run the following command to generate a static IP address configuration that allows Wi-Fi connections automatically allocated by the DNS: -```shell -nmcli con add con-name Wifi ifname wlan0 type wifi ssid MyWifi ip4 192.168.100.101/24 gw4 192.168.100.1 -``` + ```shell + nmcli con add con-name Wifi ifname wlan0 type wifi ssid MyWifi ip4 192.168.100.101/24 gw4 192.168.100.1 + ``` -3,Run the following command to set a WPA2 password, for example, **answer**: +3. Run the following command to set a WPA2 password, for example, **answer**: -```shell -nmcli con modify Wifi wifi-sec.key-mgmt wpa-psk -nmcli con modify Wifi wifi-sec.psk answer -``` + ```shell + nmcli con modify Wifi wifi-sec.key-mgmt wpa-psk + nmcli con modify Wifi wifi-sec.psk answer + ``` -4,Run the following command to change the Wi-Fi status: +4. 
Run the following command to change the Wi-Fi status: -```shell -nmcli radio wifi [ on | off ] -``` + ```shell + nmcli radio wifi [ on | off ] + ``` ##### Modifying Attributes @@ -349,8 +329,8 @@ $ nmcli connection show id 'Wifi ' | grep mtu ### Using the ip Command ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The network configuration configured using the **ip** command takes effect immediately, but the configuration will be lost after the system restarts. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> The network configuration configured using the **ip** command takes effect immediately, but the configuration will be lost after the system restarts. #### Configuring IP Addresses @@ -441,8 +421,8 @@ In the preceding command, **192.168.2.1** is the IP address of the target netw ### Configuring the Network Through the ifcfg File ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The network configured in the **ifcfg** file does not take effect immediately. After modifying the file (for example, **ifcfg-enp3s0**), you need to run the **nmcli con reload;nmcli con up enp3s0** command as the **root** user to reload the configuration file and activate the connection for the modification to take effect. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> The network configured in the **ifcfg** file does not take effect immediately. After modifying the file (for example, **ifcfg-enp3s0**), you need to run the **nmcli con reload;nmcli con up enp3s0** command as the **root** user to reload the configuration file and activate the connection for the modification to take effect. #### Configuring a Static Network @@ -515,8 +495,8 @@ There are three types of host names: **static**, **transient**, and **pretty* - **transient**: Dynamic host name, which is maintained by the kernel. The initial value is a static host name. The default value is **localhost**. The value can be changed when the DHCP or mDNS server is running. 
- **pretty**: Flexible host name, which can be set in any form \(including special characters/blanks\). Static and transient host names are subject to the general domain name restrictions. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->Static and transient host names can contain only letters \(a–z and A–Z\), digits \(0–9\), hyphens \(-\), and periods \(.\). The host names cannot start or end with a period \(.\) or contain two consecutive periods \(.\). The host name can contain a maximum of 64 characters. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> Static and transient host names can contain only letters \(a to z and A to Z\), digits \(0 to 9\), hyphens \(-\), and periods \(.\). The host names cannot start or end with a period \(.\) or contain two consecutive periods \(.\). The host name can contain a maximum of 64 characters. ### Configuring a Host Name by Running the hostnamectl Command @@ -528,8 +508,8 @@ Run the following command to view the current host name: hostnamectl status ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If no option is specified in the command, the **status** option is used by default. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> If no option is specified in the command, the **status** option is used by default. #### Setting All Host Names @@ -715,8 +695,8 @@ $ ifup enp4s0 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8) ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If an interface is in **up** state, run the **ifdown** _enp3s0_ command to change the state to **down**. In the command, _enp3s0_ indicates the actual NIC name. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> If an interface is in **up** state, run the **ifdown** _enp3s0_ command to change the state to **down**. In the command, _enp3s0_ indicates the actual NIC name. 
After that, enable all the slave interfaces to enable the bonding \(do not set them to **Down**\). @@ -866,11 +846,11 @@ Both IPv6 and IPv4 addresses can be obtained through DHCP as the **root** user. } ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** + > ![](./public_sys-resources/icon-note.gif) **NOTE:** > - >- \: a 32-digit integer, indicating the enterprise ID. The enterprise is registered through the IANA. - >- \: a 16-digit integer, indicating the length of the vendor class string. - >- \: character string of the vendor class to be set, for example, HWHW. + > - \: a 32-digit integer, indicating the enterprise ID. The enterprise is registered through the IANA. + > - \: a 16-digit integer, indicating the length of the vendor class string. + > - \: character string of the vendor class to be set, for example, HWHW. On the client: @@ -897,8 +877,8 @@ Both IPv6 and IPv4 addresses can be obtained through DHCP as the **root** user. } ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >In substring \(option dhcp6.vendor-class, 6, 10\), the start position of the substring is 6, because the substring contains four bytes of and two bytes of . The end position of the substring is 6+. In this example, the vendor class string is HWHW, and the length of the string is 4. Therefore, the end position of the substring is 6 + 4 = 10. You can specify and as required. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > In substring \(option dhcp6.vendor-class, 6, 10\), the start position of the substring is 6, because the substring contains four bytes of and two bytes of . The end position of the substring is 6+. In this example, the vendor class string is HWHW, and the length of the string is 4. Therefore, the end position of the substring is 6 + 4 = 10. You can specify and as required. On the server: @@ -927,8 +907,8 @@ struct sockaddr_in6 { }; ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->sin6\_scope\_id: a 32-bit integer. 
For the link-local address, it identifies the index of the specified interface. For the link-range sin6\_addr, it identifies the index of the specified interface. For the site-range sin6\_addr, it is used as the site identifier \(the site-local address has been discarded\). +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> sin6\_scope\_id: a 32-bit integer. For the link-local address, it identifies the index of the specified interface. For the link-range sin6\_addr, it identifies the index of the specified interface. For the site-range sin6\_addr, it is used as the site identifier \(the site-local address has been discarded\). When the link-local address is used for socket communication, the interface index corresponding to the address needs to be specified when the destination address is constructed. Generally, you can use the if\_nametoindex function to convert an interface name into an interface index number. Details are as follows: @@ -981,8 +961,8 @@ PERSISTENT_DHCLIENT=yes|no|1|0 - DHCPV6C: **no** indicates that an IPv6 address is statically configured, and **yes** indicates that the DHCPv6 dhclient is enabled to dynamically obtain the IPv6 address. - PERSISTENT\_DHCLIENT: **no|0** indicates that the IPv4 dhclient process is configured as nonpersistent. If the dhclient sends a request packet to the DHCP server but does not receive any response, the dhclient exits after a period of time and the exit value is 2. **yes|1** indicates that the IPv4 dhclient process is configured to be persistent. The dhclient process repeatedly sends request packets to the DHCP server. **If PERSISTENT\_DHCLIENT is not configured, dhclient of IPv4 is set to yes|1 by default.** - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >The PERSISTENT\_DHCLIENT configuration takes effect only for IPv4 and does not take effect for IPv6-related dhclient -6 processes. By default, the persistence configuration is not performed for IPv6. 
+ > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > The PERSISTENT\_DHCLIENT configuration takes effect only for IPv4 and does not take effect for IPv6-related dhclient -6 processes. By default, the persistence configuration is not performed for IPv6. #### Differences Between IPv4 and IPv6 Configuration Using the iproute Command @@ -1345,7 +1325,7 @@ $ActionQueueType Direct $MainMsgQueueType Direct ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- In direct mode, the queue size is reduced by 1. Therefore, one log is reserved in the queue for the next log output. ->- The direct mode degrades the rsyslog performance of the server. +> - In direct mode, the queue size is reduced by 1. Therefore, one log is reserved in the queue for the next log output. +> - The direct mode degrades the rsyslog performance of the server. diff --git a/docs/en/docs/rubik/figures/icon-note.gif b/docs/en/Server/Network/NetworConfig/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/rubik/figures/icon-note.gif rename to docs/en/Server/Network/NetworConfig/public_sys-resources/icon-note.gif diff --git a/docs/en/Server/Performance/CPUOptimization/KAE/Menu/index.md b/docs/en/Server/Performance/CPUOptimization/KAE/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..e72e5aac34a530e8a37960583f2f8fdd62046679 --- /dev/null +++ b/docs/en/Server/Performance/CPUOptimization/KAE/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [Using the Kunpeng Accelerator Engine (KAE)]({{< relref "./using-the-kae.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Administration/figures/en-us_image_0231143189.png b/docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143189.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231143189.png rename to 
docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143189.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231143191.png b/docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143191.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231143191.png rename to docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143191.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231143193.png b/docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143193.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231143193.png rename to docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143193.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231143195.png b/docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143195.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231143195.png rename to docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143195.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231143196.png b/docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143196.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231143196.png rename to docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143196.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231143197.png b/docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143197.png similarity index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231143197.png rename to docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143197.png diff --git a/docs/en/docs/Administration/figures/en-us_image_0231143198.png b/docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143198.png similarity 
index 100% rename from docs/en/docs/Administration/figures/en-us_image_0231143198.png rename to docs/en/Server/Performance/CPUOptimization/KAE/figures/en-us_image_0231143198.png diff --git a/docs/en/docs/secGear/public_sys-resources/icon-note.gif b/docs/en/Server/Performance/CPUOptimization/KAE/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/secGear/public_sys-resources/icon-note.gif rename to docs/en/Server/Performance/CPUOptimization/KAE/public_sys-resources/icon-note.gif diff --git a/docs/en/docs/Administration/using-the-kae.md b/docs/en/Server/Performance/CPUOptimization/KAE/using-the-kae.md similarity index 74% rename from docs/en/docs/Administration/using-the-kae.md rename to docs/en/Server/Performance/CPUOptimization/KAE/using-the-kae.md index 2a2e1aee1a140554a1b2708ed5cddc99fb400152..27677a9544428ab0762609fba718d158e712d60f 100644 --- a/docs/en/docs/Administration/using-the-kae.md +++ b/docs/en/Server/Performance/CPUOptimization/KAE/using-the-kae.md @@ -1,23 +1,5 @@ # Using the Kunpeng Accelerator Engine (KAE) - - -- [Using the Kunpeng Accelerator Engine (KAE)](#using-the-kunpeng-accelerator-engine-kae) - - [Overview](#overview) - - [Application Scenarios](#application-scenarios) - - [Installing, Running, and Uninstalling the KAE](#installing-running-and-uninstalling-the-kae) - - [Installing the Accelerator Software Packages](#installing-the-accelerator-software-packages) - - [Upgrading the Accelerator Software Packages](#upgrading-the-accelerator-software-packages) - - [Uninstalling the Accelerator Software Packages](#uninstalling-the-accelerator-software-packages) - - [Querying Logs](#querying-logs) - - [Acceleration Engine Application](#acceleration-engine-application) - - [Example Code for the KAE](#example-code-for-the-kae) - - [Usage of the KAE in the OpenSSL Configuration File openssl.cnf](#usage-of-the-kae-in-the-openssl-configuration-file-opensslcnf) - - [Troubleshooting](#troubleshooting) - - [Failed to Initialize 
the Accelerator Engine](#failed-to-initialize-the-accelerator-engine) - - [Failed to Identify Accelerator Devices After the Acceleration Engine Is Installed](#failed-to-identify-accelerator-devices-after-the-acceleration-engine-is-installed) - - [Failed to Upgrade the Accelerator Drivers](#failed-to-upgrade-the-accelerator-drivers) - - + ## Overview Kunpeng Accelerator Engine \(KAE\) is a software acceleration library of openEuler, which provides hardware acceleration engine function on the Kunpeng 920 processor. It supports symmetric encryption, asymmetric encryption, and digital signature. It is ideal for accelerating SSL/TLS applications, reducing processor consumption and improving processor efficiency. In addition, users can quickly migrate existing services through the standard OpenSSL interface. @@ -76,10 +58,10 @@ The KAE applies to the following scenarios, as shown in [Table 1](#table1191582 - The accelerator engine is enabled on TaiShan 200 servers. ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- You need to import the accelerator license. For details, see section "License Management" in the [TaiShan Rack Server iBMC \(V500 or Later\) User Guide](https://support.huawei.com/enterprise/en/doc/EDOC1100121685/426cffd9?idPath=7919749|9856522|21782478|8060757). ->- If the accelerator is used in the physical machine scenario, the SMMU must be disabled. For details, see the [TaiShan 200 Server BIOS Parameter Reference](https://support.huawei.com/enterprise/en/doc/EDOC1100088647). +> - You need to import the accelerator license. For details, see section "License Management" in the [TaiShan Rack Server iBMC \(V500 or Later\) User Guide](https://support.huawei.com/enterprise/en/doc/EDOC1100121685/426cffd9?idPath=7919749%257C9856522%257C21782478%257C8060757). +> - If the accelerator is used in the physical machine scenario, the SMMU must be disabled. 
For details, see the [TaiShan 200 Server BIOS Parameter Reference](https://support.huawei.com/enterprise/en/doc/EDOC1100088647). - CPU: Kunpeng 920 - OS: openEuler-21.09-aarch64-dvd.iso @@ -89,28 +71,28 @@ The KAE applies to the following scenarios, as shown in [Table 1](#table1191582 **Table 2** RPM software packages of the KAE - + +
Software Package
+ - - - - - - @@ -137,8 +119,8 @@ The KAE applies to the following scenarios, as shown in [Table 1](#table1191582 3. Use SSH to copy all accelerator engine software packages to the created directory. 4. In the directory, run the **rpm -ivh** command to install the accelerator engine software packages. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Install the **libwd** package first because the **libkae** package installation depends on the **libwd** package. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > Install the **libwd** package first because the **libkae** package installation depends on the **libwd** package. ```shell rpm -ivh uacce*.rpm hisi*.rpm libwd-*.rpm libkae*.rpm @@ -220,10 +202,10 @@ The KAE applies to the following scenarios, as shown in [Table 1](#table1191582 6. Restart the system or run commands to manually load the accelerator engine drivers to the kernel in sequence, and check whether the drivers are successfully loaded. ```shell - modprobe uacce - lsmod | grep uacce + modprobe uacce + lsmod | grep uacce modprobe hisi_qm - lsmod | grep hisi_qm + lsmod | grep hisi_qm modprobe hisi_sec2 # Loads the hisi_sec2 driver to the kernel based on the configuration file in /etc/modprobe.d/hisi_sec2.conf. modprobe hisi_hpre # Loads the hisi_hpre driver to the kernel based on the configuration file in /etc/modprobe.d/hisi_hpre.conf. ``` @@ -280,13 +262,13 @@ You can run the following commands to test some accelerator functions. rsa 2048 bits 0.000355s 0.000022s 2819.0 45478.4 ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->After the KAE is used, the signature performance is improved from 724.1 sign/s to 2819 sign/s. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> After the KAE is used, the signature performance is improved from 724.1 sign/s to 2819 sign/s. - Use the OpenSSL software algorithm to test the asynchronous RSA performance. 
```shell - $ ./openssl speed -elapsed -async_jobs 36 rsa2048 + $ ./openssl speed -elapsed -async_jobs 36 rsa2048 .... sign verify sign/s verify/s rsa 2048 bits 0.001318s 0.000032s 735.7 28555 @@ -295,14 +277,14 @@ You can run the following commands to test some accelerator functions. - Use the KAE to test the asynchronous RSA performance. ```shell - $ ./openssl speed -engine kae -elapsed -async_jobs 36 rsa2048 - .... + $ ./openssl speed -engine kae -elapsed -async_jobs 36 rsa2048 + .... sign verify sign/s verify/s rsa 2048 bits 0.000018s 0.000009s 54384.1 105317.0 ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->After the KAE is used, the asynchronous RSA signature performance is improved from 735.7 sign/s to 54384.1 sign/s. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> After the KAE is used, the asynchronous RSA signature performance is improved from 735.7 sign/s to 54384.1 sign/s. - Use the OpenSSL software algorithm to test the performance of the SM4 CBC mode. @@ -319,7 +301,7 @@ You can run the following commands to test some accelerator functions. ```shell $ ./openssl speed -elapsed -engine kae -evp sm4-cbc - engine "kae" set. + engine "kae" set. You have chosen to measure elapsed time instead of user CPU time. ... Doing sm4-cbc for 3s on 1048576 size blocks: 11409 sm4-cbc's in 3.00s @@ -328,8 +310,8 @@ You can run the following commands to test some accelerator functions. sm4-cbc 383317.33k 389427.20k 395313.15k 392954.73k 394264.58k 394264.58k ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->After the KAE is used, the SM4 CBC mode performance is improved from 82312.53 kbit/s to 383317.33 kbit/s when the input data block size is 8 MB. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> After the KAE is used, the SM4 CBC mode performance is improved from 82312.53 kbit/s to 383317.33 kbit/s when the input data block size is 8 MB. - Use the OpenSSL software algorithm to test the SM3 mode performance. 
@@ -354,8 +336,8 @@ You can run the following commands to test some accelerator functions. sm3 648243.20k 666965.33k 677030.57k 678778.20k 676681.05k 668292.44k ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->After the KAE is used, the SM3 algorithm performance is improved from 52428.80 kbit/s to 668292.44 kbit/s when the input data block size is 8 MB. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> After the KAE is used, the SM3 algorithm performance is improved from 52428.80 kbit/s to 668292.44 kbit/s when the input data block size is 8 MB. - Use the OpenSSL software algorithm to test the asynchronous performance of the AES algorithm in CBC mode. @@ -382,10 +364,10 @@ You can run the following commands to test some accelerator functions. aes-128-cbc 3747037.87k 3996774.40k 1189085.18k 1196774.74k 1196979.11k 1199570.94k ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- The AES algorithm supports only asynchronous mode when the data length is 256 KB or less. ->- After the KAE is used, the AES algorithm performance is improved from 1123328.00 kbit/s to 3996774.40 kbit/s when the input data block size is 100 KB. +> - The AES algorithm supports only asynchronous mode when the data length is 256 KB or less. +> - After the KAE is used, the AES algorithm performance is improved from 1123328.00 kbit/s to 3996774.40 kbit/s when the input data block size is 100 KB. ### Upgrading the Accelerator Software Packages @@ -416,21 +398,21 @@ You can run the **rpm -Uvh** command to upgrade the accelerator software. ```shell # Uninstall the existing drivers. 
- $ lsmod | grep uacce - uacce 262144 3 hisi_hpre,hisi_sec2,hisi_qm - $ - $ rmmod hisi_hpre - $ rmmod hisi_sec2 - $ rmmod hisi_qm - $ rmmod uacce - $ lsmod | grep uacce - $ + $ lsmod | grep uacce + uacce 262144 3 hisi_hpre,hisi_sec2,hisi_qm + $ + $ rmmod hisi_hpre + $ rmmod hisi_sec2 + $ rmmod hisi_qm + $ rmmod uacce + $ lsmod | grep uacce + $ # Load the new drivers. $ modprobe uacce - $ modprobe hisi_qm + $ modprobe hisi_qm $ modprobe hisi_sec2 # Loads the hisi_sec2 driver to the kernel based on the configuration file in /etc/modprobe.d/hisi_sec2.conf. $ modprobe hisi_hpre # Loads the hisi_hpre driver to the kernel based on the configuration file in /etc/modprobe.d/hisi_hpre.conf. - $ lsmod | grep uacce + $ lsmod | grep uacce uacce 36864 3 hisi_sec2,hisi_qm,hisi_hpre ``` @@ -446,20 +428,20 @@ You do not need the accelerator engine software or you want to install a new one 2. Restart the system or run commands to manually uninstall the accelerator drivers loaded to the kernel, and check whether the drivers are successfully uninstalled. ```shell - # lsmod | grep uacce - uacce 36864 3 hisi_sec2,hisi_qm,hisi_hpre - # rmmod hisi_hpre - # rmmod hisi_sec2 - # rmmod hisi_qm - # rmmod uacce - # lsmod | grep uacce + # lsmod | grep uacce + uacce 36864 3 hisi_sec2,hisi_qm,hisi_hpre + # rmmod hisi_hpre + # rmmod hisi_sec2 + # rmmod hisi_qm + # rmmod uacce + # lsmod | grep uacce # ``` 3. Run the **rpm -e** command to uninstall the accelerator engine software packages. The following is an example: - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Due to the dependency relationships, the **libkae** package must be uninstalled before the **libwd** package. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > Due to the dependency relationships, the **libkae** package must be uninstalled before the **libwd** package. 
![](./figures/en-us_image_0231143196.png) @@ -488,11 +470,11 @@ You do not need the accelerator engine software or you want to install a new one - @@ -500,8 +482,8 @@ You do not need the accelerator engine software or you want to install a new one - @@ -510,59 +492,59 @@ You do not need the accelerator engine software or you want to install a new one ## Acceleration Engine Application ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->If you have not purchased the engine license, you are advised not to use the KAE to invoke the corresponding algorithms. Otherwise, the performance of the OpenSSL encryption algorithm may be affected. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> If you have not purchased the engine license, you are advised not to use the KAE to invoke the corresponding algorithms. Otherwise, the performance of the OpenSSL encryption algorithm may be affected. ### Example Code for the KAE ```c -#include +#include -#include +#include -/* OpenSSL headers */ +/* OpenSSL headers */ -#include +#include -#include +#include -#include +#include -#include +#include -int main(int argc, char **argv) +int main(int argc, char **argv) -{ +{ - /* Initializing OpenSSL */ + /* Initializing OpenSSL */ - SSL_load_error_strings(); + SSL_load_error_strings(); - ERR_load_BIO_strings(); + ERR_load_BIO_strings(); - OpenSSL_add_all_algorithms(); + OpenSSL_add_all_algorithms(); - /*You can use ENGINE_by_id Function to get the handle of the Huawei Accelerator Engine*/ + /*You can use ENGINE_by_id Function to get the handle of the Huawei Accelerator Engine*/ - ENGINE *e = ENGINE_by_id("kae"); + ENGINE *e = ENGINE_by_id("kae"); /* Enable the accelerator asynchronization function. This parameter is optional. The value 0 indicates disabled, and the value 1 indicates enabled. The asynchronous function is enabled by default. 
*/ - ENGINE_ctrl_cmd_string(e, "KAE_CMD_ENABLE_ASYNC", "1", 0) + ENGINE_ctrl_cmd_string(e, "KAE_CMD_ENABLE_ASYNC", "1", 0) - ENGINE_init(e); + ENGINE_init(e); RSA*rsa=RSA_new_method(e);#Specify the engine for RSA encryption and decryption. - /*The user code*/ + /*The user code*/ - ...... + ...... -; +; - ENGINE_free(e); + ENGINE_free(e); -; +; } ``` @@ -572,16 +554,16 @@ int main(int argc, char **argv) Create the **openssl.cnf** file and add the following configuration information to the file: ```text -openssl_conf=openssl_def -[openssl_def] -engines=engine_section -[engine_section] -kae=kae_section -[kae_section] -engine_id=kae -dynamic_path=/usr/local/lib/engines-1.1/kae.so +openssl_conf=openssl_def +[openssl_def] +engines=engine_section +[engine_section] +kae=kae_section +[kae_section] +engine_id=kae +dynamic_path=/usr/local/lib/engines-1.1/kae.so KAE_CMD_ENABLE_ASYNC=1 #The value 0 indicates that the asynchronous function is disabled. The value 1 indicates that the asynchronous function is enabled. The asynchronous function is enabled by default. 
-default_algorithms=ALL +default_algorithms=ALL init=1 ``` @@ -594,45 +576,45 @@ export OPENSSL_CONF=/home/app/openssl.cnf #Path for storing the openssl.cnf file The following is an example of the OpenSSL configuration file: ```c -#include +#include -#include +#include -/* OpenSSL headers */ +/* OpenSSL headers */ -#include +#include -#include +#include -#include +#include -#include +#include -int main(int argc, char **argv) +int main(int argc, char **argv) -{ +{ - /* Initializing OpenSSL */ + /* Initializing OpenSSL */ - SSL_load_error_strings(); + SSL_load_error_strings(); - ERR_load_BIO_strings(); + ERR_load_BIO_strings(); -#Load openssl configure +#Load openssl configure -OPENSSL_init_crypto(OPENSSL_INIT_LOAD_CONFIG, NULL); OpenSSL_add_all_algorithms(); +OPENSSL_init_crypto(OPENSSL_INIT_LOAD_CONFIG, NULL); OpenSSL_add_all_algorithms(); - /*You can use ENGINE_by_id Function to get the handle of the Huawei Accelerator Engine*/ + /*You can use ENGINE_by_id Function to get the handle of the Huawei Accelerator Engine*/ - ENGINE *e = ENGINE_by_id("kae"); + ENGINE *e = ENGINE_by_id("kae"); - /*The user code*/ + /*The user code*/ - ...... + ...... -; +; - ENGINE_free(e); + ENGINE_free(e); ; } @@ -658,13 +640,13 @@ The accelerator engine is not completely loaded. 2. Check whether the accelerator engine library exists in **/usr/lib64** \(directory for RPM installation\) or **/usr/local/lib** \(directory for source code installation\) and the OpenSSL installation directory, and check whether the correct soft link is established. ```shell - $ ll /usr/local/lib/engines-1.1/ |grep kae + $ ll /usr/local/lib/engines-1.1/ |grep kae # Check whether the KAE has been correctly installed and whether a soft link has been established. If yes, the displayed information is as follows: lrwxrwxrwx. 1 root root 22 Nov 12 02:33 kae.so -> kae.so.1.0.1 lrwxrwxrwx. 1 root root 22 Nov 12 02:33 kae.so.0 -> kae.so.1.0.1 -rwxr-xr-x. 
1 root root 112632 May 25 2019 kae.so.1.0.1 $ - $ ll /usr/lib64/ | grep libwd + $ ll /usr/lib64/ | grep libwd # Check whether libwd has been correctly installed and whether a soft link has been established. If yes, the displayed information is as follows: lrwxrwxrwx. 1 root root 14 Nov 12 02:33 libwd.so -> libwd.so.1.0.1 lrwxrwxrwx. 1 root root 14 Nov 12 02:33 libwd.so.0 -> libwd.so.1.0.1 @@ -675,7 +657,7 @@ The accelerator engine is not completely loaded. 3. Check whether the path of the OpenSSL engine library can be exported by running the **export** command. ```shell - $ echo $OPENSSL_ENGINES + $ echo $OPENSSL_ENGINES $ export OPENSSL_ENGINES=/usr/local/lib/engines-1.1 $ echo $OPENSSL_ENGINES /usr/local/lib/engines-1.1 @@ -724,7 +706,7 @@ After the acceleration engine is installed, the accelerator devices cannot be id ``` 4. If no physical device is found in [3](#li1560012551369), perform the following operations: - - Check whether the accelerator license has been imported. If no, import the accelerator license. For details, see "License Management" in the [TaiShan Rack Server iBMC \(V500 or Later\) User Guide](https://support.huawei.com/enterprise/en/doc/EDOC1100121685/426cffd9?idPath=7919749|9856522|21782478|8060757). After the accelerator license is imported, power off and restart the iBMC to enable the license. + - Check whether the accelerator license has been imported. If no, import the accelerator license. For details, see "License Management" in the [TaiShan Rack Server iBMC \(V500 or Later\) User Guide](https://support.huawei.com/enterprise/en/doc/EDOC1100121685/426cffd9?idPath=7919749%257C9856522%257C21782478%257C8060757). After the accelerator license is imported, power off and restart the iBMC to enable the license. - Check whether the iBMC and BIOS versions support the accelerator feature. 
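The troubleshooting steps above can be condensed into a quick sanity check. The sketch below uses the engine ID (`kae`) and the RPM install path (`/usr/local/lib/engines-1.1`) given earlier in this guide; if your installation used the source-build path, adjust `OPENSSL_ENGINES` accordingly. On a host without the accelerator hardware the final test is expected to fail, which the snippet tolerates:

```shell
# Export the engine search path so OpenSSL can locate kae.so
# (RPM install default; source builds may differ).
export OPENSSL_ENGINES=/usr/local/lib/engines-1.1

# Enumerate the engines OpenSSL can see. On a correctly installed
# system the list includes an entry for the KAE engine.
openssl engine

# Test whether the KAE engine actually initializes. This requires the
# accelerator hardware, drivers, and license; on other hosts it fails.
openssl engine -t kae || echo "kae engine not available on this host"
```

If `openssl engine` lists the KAE entry but the `-t kae` test reports it as unavailable, revisit the driver-loading and license checks described above rather than reinstalling the engine library.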
### Failed to Upgrade the Accelerator Drivers diff --git a/docs/en/Server/Performance/CPUOptimization/Menu/index.md b/docs/en/Server/Performance/CPUOptimization/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..8af68840e8022623dee8d71e7a442d614bc752b9 --- /dev/null +++ b/docs/en/Server/Performance/CPUOptimization/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [sysBoost User Guide]({{< relref "./sysBoost/Menu/index.md" >}}) +- [Using the Kunpeng Accelerator Engine (KAE)]({{< relref "./KAE/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Performance/CPUOptimization/sysBoost/Menu/index.md b/docs/en/Server/Performance/CPUOptimization/sysBoost/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..ac86596b8e42f1c0f80d1c0db01e75782634b007 --- /dev/null +++ b/docs/en/Server/Performance/CPUOptimization/sysBoost/Menu/index.md @@ -0,0 +1,7 @@ +--- +headless: true +--- +- [sysBoost User Guide]({{< relref "./sysboost.md" >}}) + - [Getting to Know sysBoost]({{< relref "./getting-to-know-sysboost.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment.md" >}}) + - [Usage Instructions]({{< relref "./usage-instructions.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/sysBoost/figures/architecture.png b/docs/en/Server/Performance/CPUOptimization/sysBoost/figures/architecture.png similarity index 100% rename from docs/en/docs/sysBoost/figures/architecture.png rename to docs/en/Server/Performance/CPUOptimization/sysBoost/figures/architecture.png diff --git a/docs/en/docs/sysBoost/figures/icon-note.gif b/docs/en/Server/Performance/CPUOptimization/sysBoost/figures/icon-note.gif similarity index 100% rename from docs/en/docs/sysBoost/figures/icon-note.gif rename to docs/en/Server/Performance/CPUOptimization/sysBoost/figures/icon-note.gif diff --git a/docs/en/docs/sysBoost/getting-to-know-sysBoost.md 
b/docs/en/Server/Performance/CPUOptimization/sysBoost/getting-to-know-sysboost.md similarity index 99% rename from docs/en/docs/sysBoost/getting-to-know-sysBoost.md rename to docs/en/Server/Performance/CPUOptimization/sysBoost/getting-to-know-sysboost.md index c3742e9ad5b39449ad4746cd5aa14cf6cf6228ad..6666c961d15ce0a5dfac833b6e6a61324db22799 100644 --- a/docs/en/docs/sysBoost/getting-to-know-sysBoost.md +++ b/docs/en/Server/Performance/CPUOptimization/sysBoost/getting-to-know-sysboost.md @@ -25,7 +25,7 @@ sysBoost reorders the code of executable files and dynamic libraries online to a - exec native huge page mechanism: The user-mode huge page mechanism requires specific application configuration and recompilation. The exec native huge page mechanism directly uses huge page memory when the kernel loads the ELF file,without the need for modifying applications. ### Architecture - + **Figure 1** sysBoost architecture ![](./figures/architecture.png) diff --git a/docs/en/docs/sysBoost/installation-and-deployment.md b/docs/en/Server/Performance/CPUOptimization/sysBoost/installation-and-deployment.md similarity index 96% rename from docs/en/docs/sysBoost/installation-and-deployment.md rename to docs/en/Server/Performance/CPUOptimization/sysBoost/installation-and-deployment.md index e164d33dec47448218310f1e648202ccaf8a77e0..43979e5020e9f7ce2e36f4cecdcf87225c1fb6b5 100644 --- a/docs/en/docs/sysBoost/installation-and-deployment.md +++ b/docs/en/Server/Performance/CPUOptimization/sysBoost/installation-and-deployment.md @@ -64,5 +64,5 @@ To install the sysBoost, perform the following steps (**xxx** in the commands in yum install ncurses-relocation-xxx -y ``` - >![](./figures/icon-note.gif) **说明:** + > ![](./figures/icon-note.gif) **Note:** > If the ELF files and their dependency libraries contain the relocation segment, skip this step. 
diff --git a/docs/en/docs/sysBoost/sysBoost.md b/docs/en/Server/Performance/CPUOptimization/sysBoost/sysboost.md similarity index 100% rename from docs/en/docs/sysBoost/sysBoost.md rename to docs/en/Server/Performance/CPUOptimization/sysBoost/sysboost.md diff --git a/docs/en/docs/sysBoost/usage-instructions.md b/docs/en/Server/Performance/CPUOptimization/sysBoost/usage-instructions.md similarity index 100% rename from docs/en/docs/sysBoost/usage-instructions.md rename to docs/en/Server/Performance/CPUOptimization/sysBoost/usage-instructions.md diff --git a/docs/en/Server/Performance/Menu/index.md b/docs/en/Server/Performance/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..51372e56fa81074928c7d310dd68ed80693ceb28 --- /dev/null +++ b/docs/en/Server/Performance/Menu/index.md @@ -0,0 +1,7 @@ +--- +headless: true +--- +- [Overview]({{< relref "./Overall/Menu/index.md" >}}) +- [Tuning Framework]({{< relref "./TuningFramework/Menu/index.md" >}}) +- [CPU Optimization]({{< relref "./CPUOptimization/Menu/index.md" >}}) +- [System Optimization]({{< relref "./SystemOptimization/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Performance/Overall/Menu/index.md b/docs/en/Server/Performance/Overall/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..a4fbd627da98c30528a21e5de988341c7988d527 --- /dev/null +++ b/docs/en/Server/Performance/Overall/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [System Resources and Performance]({{< relref "./systemResource/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Performance/Overall/systemResource/Menu/index.md b/docs/en/Server/Performance/Overall/systemResource/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..5f9f66c77c6039faacf9ee663847fd528573b283 --- /dev/null +++ b/docs/en/Server/Performance/Overall/systemResource/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- 
[System Resources and Performance]({{< relref "./system-resources-and-performance.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001335457246.png b/docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001335457246.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001335457246.png rename to docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001335457246.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001336448570.png b/docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001336448570.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001336448570.png rename to docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001336448570.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337039920.png b/docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001337039920.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001337039920.png rename to docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001337039920.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001384808269.png b/docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001384808269.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001384808269.png rename to docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001384808269.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001385585749.png b/docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001385585749.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001385585749.png rename to docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001385585749.png diff --git 
a/docs/en/docs/ops_guide/images/zh-cn_image_0000001385611905.png b/docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001385611905.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001385611905.png rename to docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001385611905.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001385905845.png b/docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001385905845.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001385905845.png rename to docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001385905845.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001386149037.png b/docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001386149037.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001386149037.png rename to docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001386149037.png diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001389098425.png b/docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001389098425.png similarity index 100% rename from docs/en/docs/ops_guide/images/zh-cn_image_0000001389098425.png rename to docs/en/Server/Performance/Overall/systemResource/images/zh-cn_image_0000001389098425.png diff --git a/docs/en/docs/ops_guide/system-resources-and-performance.md b/docs/en/Server/Performance/Overall/systemResource/system-resources-and-performance.md similarity index 87% rename from docs/en/docs/ops_guide/system-resources-and-performance.md rename to docs/en/Server/Performance/Overall/systemResource/system-resources-and-performance.md index 19f4efeeeab27b06923434c5495b268a2848a9ea..03394cfcb95f022791d047edf7e69d1005988797 100644 --- a/docs/en/docs/ops_guide/system-resources-and-performance.md +++ 
b/docs/en/Server/Performance/Overall/systemResource/system-resources-and-performance.md @@ -7,18 +7,19 @@ A central processing unit (CPU) is one of main devices of a computer, and a function of the CPU is to interpret computer instructions and process data in computer software. 1. Physical core: an actual CPU core that can be seen. It has independent circuit components and L1 and L2 caches and can independently execute instructions. A CPU can have multiple physical cores. -2. Logical core: a core that exists at the logical layer in the same physical core. Generally, a physical core corresponds to a thread. However, if hyper-threading is enabled and the number of hyper-threads is *n*, a physical core can be divided into *n* logical cores. +2. Logical core: a core that exists at the logical layer in the same physical core. Generally, a physical core corresponds to a thread. However, if hyper-threading is enabled and the number of hyper-threads is *n*, a physical core can be divided into *n* logical cores. + You can run the **lscpu** command to check the number of CPUs on the server, the number of physical cores in each CPU, and the number of logical cores in each CPU. ### Common CPU Performance Analysis Tools -1. **uptime**: prints the average system load. You can view the last three numbers to determine the change trend of the average load. -If the average load is greater than the number of CPUs, the CPUs are insufficient to serve threads and some threads are waiting. If the average load is less than the number of CPUs, there are remaining CPUs. +1. **uptime**: prints the average system load. You can view the last three numbers to determine the change trend of the average load. + If the average load is greater than the number of CPUs, the CPUs are insufficient to serve threads and some threads are waiting. If the average load is less than the number of CPUs, there are remaining CPUs. ![zh-cn_image_0000001384808269](./images/zh-cn_image_0000001384808269.png) -2. 
**vmstat**: dynamically monitors the usage of system resources and checks which phase occupies the most system resources. - You can run the **vmstat -h** command to view command parameters. +2. **vmstat**: dynamically monitors the usage of system resources and checks which phase occupies the most system resources. + You can run the **vmstat -h** command to view command parameters. Example: ```shell @@ -28,6 +29,7 @@ If the average load is greater than the number of CPUs, the CPUs are insufficien ![](./images/zh-cn_image_0000001385585749.png) The fields in the command output are described as follows: + |Field|Description| |--|--| |procs|Process information.| @@ -35,10 +37,10 @@ If the average load is greater than the number of CPUs, the CPUs are insufficien |swap|Swap partition information.| |io|Drive read/write information.| |system|System information.| - |cpu|CPU information.
**-us**: percentage of the CPU computing time consumed by non-kernel processes.
**-sy**: percentage of the CPU computing time consumed by kernel processes.
**-id**: idle.
**-wa**: percentage of CPU resources consumed by waiting for I/Os.
**-st**: percentage of CPUs stolen by VMs.| + |cpu|CPU information.
**-us**: percentage of the CPU computing time consumed by non-kernel processes.
**-sy**: percentage of the CPU computing time consumed by kernel processes.
**-id**: idle.
**-wa**: percentage of CPU resources consumed by waiting for I/Os.
**-st**: percentage of CPUs stolen by VMs.| 3. **sar**: analyzes system performance, observes current activities and configurations, and archives and reports historical statistics. -Example: + Example: ```shell # Check the overall CPU load of the system. Collect the statistics every 3 seconds for five times. @@ -91,7 +93,7 @@ The memory is an important component of a computer, and is used to temporarily s ### Common Memory Analysis Tools and Methods 1. **free**: displays the system memory status. -Example: + Example: ```shell # Display the system memory status in MB. @@ -138,7 +140,7 @@ Example: |Field|Description| |--|--| - |memory|Memory information.
**-swpd**: usage of the virtual memory, in KB.
**-free**: free memory capacity, in KB.
**-inact**: inactive memory capacity, in KB.
**-active**: active memory capacity, in KB.| + |memory|Memory information.
**-swpd**: usage of the virtual memory, in KB.
**-free**: free memory capacity, in KB.
**-inact**: inactive memory capacity, in KB.
**-active**: active memory capacity, in KB.| 3. **sar**: monitors the memory usage of the system. @@ -154,13 +156,13 @@ Example: ```text 04:02:09 PM kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kb dirty - 04:02:11 PM 332180 2249308 189420 7.02 142172 1764312 787948 11.52 470404 1584924 + 04:02:11 PM 332180 2249308 189420 7.02 142172 1764312 787948 11.52 470404 1584924 36 - 04:02:13 PM 332148 2249276 189452 7.03 142172 1764312 787948 11.52 470404 1584924 + 04:02:13 PM 332148 2249276 189452 7.03 142172 1764312 787948 11.52 470404 1584924 36 - 04:02:15 PM 332148 2249276 189452 7.03 142172 1764312 787948 11.52 470404 1584924 + 04:02:15 PM 332148 2249276 189452 7.03 142172 1764312 787948 11.52 470404 1584924 36 - Average: 332159 2249287 189441 7.03 142172 1764312 787948 11.52 470404 1584924 + Average: 332159 2249287 189441 7.03 142172 1764312 787948 11.52 470404 1584924 36 ``` @@ -191,11 +193,11 @@ Example: node 0 size: 2633 MB node 0 free: 322 MB node distances: - node 0 - 0: 10 + node 0 + 0: 10 ``` - Ther server contains one NUMA node, which consists of four CPU cores, each has about 6 GB memory. + Ther server contains one NUMA node, which consists of four CPU cores, each has about 6 GB memory. The output also shows distances between nodes. The greater the distance, the larger the latency of corss-NUMA node memory accesses. Applications should not access memory across NUMA nodes frequently. **numastat**: displays the NUMA node status. @@ -212,7 +214,7 @@ Example: numa_foreign 0 interleave_hit 17483 local_node 5386186 - other_node 0 + other_node 0 ``` The fields in the **numstat** command output are described as follows: @@ -306,6 +308,6 @@ I/O indicates input/output. Input refers to the operation of receiving signals o |Field|Description| |--|--| - |reads|**-total**: total number of reads that have been successfully completed.
**-merged**: number of merged reads (resulting in one I/O).
**-sectors**: sectors from which data is successfully read.
**-ms**: number of milliseconds spent on reading data.| - |writes|**-total**: total number of writes that have been successfully completed.
**-merged**: merged writes (resulting in one I/O).
**-sectors**: sectors to which data is successfully written.
**-ms**: number of milliseconds spent on writing data.| + |reads|**-total**: total number of reads that have been successfully completed.
**-merged**: number of merged reads (resulting in one I/O).
**-sectors**: sectors from which data is successfully read.
**-ms**: number of milliseconds spent on reading data.| + |writes|**-total**: total number of writes that have been successfully completed.
**-merged**: merged writes (resulting in one I/O).
**-sectors**: sectors to which data is successfully written.
**-ms**: number of milliseconds spent on writing data.| |IO|**-cur**: number of I/O operations in progress. **-sec**: total number of seconds spent on I/O.| diff --git a/docs/en/Server/Performance/SystemOptimization/A-Tune/Menu/index.md b/docs/en/Server/Performance/SystemOptimization/A-Tune/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..d3ab972f3b143a251e7360d8d8e93840ae56c1cf --- /dev/null +++ b/docs/en/Server/Performance/SystemOptimization/A-Tune/Menu/index.md @@ -0,0 +1,10 @@ +--- +headless: true +--- +- [A-Tune User Guide]({{< relref "./a-tune.md" >}}) + - [Getting to Know A-Tune]({{< relref "./getting-to-know-a-tune.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment.md" >}}) + - [Usage Instructions]({{< relref "./usage-instructions.md" >}}) + - [Native-Turbo]({{< relref "./native-turbo.md" >}}) + - [Common Issues and Solutions]({{< relref "./common-issues-and-solutions.md" >}}) + - [Appendix]({{< relref "./appendix.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/A-Tune/A-Tune.md b/docs/en/Server/Performance/SystemOptimization/A-Tune/a-tune.md similarity index 80% rename from docs/en/docs/A-Tune/A-Tune.md rename to docs/en/Server/Performance/SystemOptimization/A-Tune/a-tune.md index cb94a36db10e5d10f1ed758055c3a7ad99011d38..b481797f5c97d2c5e477fe1a0c7a4b92f646d7b3 100644 --- a/docs/en/docs/A-Tune/A-Tune.md +++ b/docs/en/Server/Performance/SystemOptimization/A-Tune/a-tune.md @@ -2,4 +2,4 @@ This document describes how to install and use A-Tune, which is a performance self-optimization software for openEuler. -This document is intended for developers, open-source enthusiasts, and partners who use the openEuler system and want to know and use A-Tune. You need to have basic knowledge of the Linux OS. \ No newline at end of file +This document is intended for developers, open-source enthusiasts, and partners who use the openEuler system and want to know and use A-Tune. 
You need to have basic knowledge of the Linux OS. diff --git a/docs/en/docs/A-Tune/appendixes.md b/docs/en/Server/Performance/SystemOptimization/A-Tune/appendix.md similarity index 95% rename from docs/en/docs/A-Tune/appendixes.md rename to docs/en/Server/Performance/SystemOptimization/A-Tune/appendix.md index 2d776555c04a00f5a7c56e5d8b503925019af32a..81568dbd22322b5003f19aa0d953edfc519004f3 100644 --- a/docs/en/docs/A-Tune/appendixes.md +++ b/docs/en/Server/Performance/SystemOptimization/A-Tune/appendix.md @@ -1,9 +1,8 @@ -# Appendixes +# Appendix -- [Appendixes](#appendixes) +- [Appendix](#appendix) - [Acronyms and Abbreviations](#acronyms-and-abbreviations) - ## Acronyms and Abbreviations **Table 1** Terminology @@ -21,5 +20,3 @@

| Software Package | Description |
| --- | --- |
| kae_driver-version_num-1.OS_type.aarch64.rpm | Accelerator driver, including the uacce.ko, hisi_qm.ko, hisi_sec2.ko, and hisi_hpre.ko kernel modules.<br>Algorithms supported: SM3, SM4, AES, RSA, and DH. |
| libwd-version_num-1.OS_type.aarch64.rpm | Coverage: libwd.so dynamic link library.<br>It provides interfaces for the KAE. |
| libkae-version_num-1.OS_type.aarch64.rpm | Dependency: libwd RPM package.<br>Coverage: libkae.so dynamic link library.<br>Algorithms supported: SM3, SM4, AES, RSA, and DH. |
| kae.log | By default, the log level of the OpenSSL engine log is error. To set the log level, perform the following procedure:<br>1. Run export KAE_CONF_ENV=/var/log/.<br>2. Create the kae.cnf file in /var/log/.<br>3. In the kae.cnf file, configure the content as follows:<br>[LogSection]<br>debug_level=error #Value: none, error, info, warning or debug<br>NOTE: In normal cases, you are advised not to enable the info or debug log level. Otherwise, the accelerator performance will deteriorate. |
| messages/syslog | Kernel logs are stored in the /var/log/messages directory.<br>NOTE: Alternatively, you can run the dmesg > /var/log/dmesg.log command to collect driver and kernel logs. |
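The kae.cnf procedure described above can be scripted; a minimal sketch, assuming the /var/log/ location and the error log level stated in the table (run as root):

```shell
# 1. Point the KAE OpenSSL engine at the directory that will hold kae.cnf.
export KAE_CONF_ENV=/var/log/

# 2. Create kae.cnf with the [LogSection] settings from the table.
cat > /var/log/kae.cnf <<'EOF'
[LogSection]
debug_level=error #Value: none, error, info, warning or debug
EOF

# 3. Confirm the configuration file content.
cat /var/log/kae.cnf
```

Keep the level at error for production runs; switch to info or debug only for short troubleshooting sessions, since verbose logging degrades accelerator performance.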

diff --git a/docs/en/docs/A-Tune/faqs.md b/docs/en/Server/Performance/SystemOptimization/A-Tune/common-issues-and-solutions.md similarity index 79% rename from docs/en/docs/A-Tune/faqs.md rename to docs/en/Server/Performance/SystemOptimization/A-Tune/common-issues-and-solutions.md index 632b9d4d5a40dc02ab8b4069216851fea241e65a..38a9d93493919182e8bcc05f45740b3f50aae64f 100644 --- a/docs/en/docs/A-Tune/faqs.md +++ b/docs/en/Server/Performance/SystemOptimization/A-Tune/common-issues-and-solutions.md @@ -1,12 +1,12 @@ -# FAQs +# Common Issues and Solutions -## Q1: An error occurs when the **train** command is used to train a model, and the message "training data failed" is displayed +## Issue 1: An error occurs when the **train** command is used to train a model, and the message "training data failed" is displayed Cause: Only one type of data is collected by using the **collection** command. Solution: Collect data of at least two data types for training. -## Q2: atune-adm cannot connect to the atuned service +## Issue 2: atune-adm cannot connect to the atuned service Possible cause: @@ -41,7 +41,7 @@ Solution: no_proxy=$no_proxy, Listening_IP_address ``` -## Q3: The atuned service cannot be started, and the message "Job for atuned.service failed because a timeout was exceeded." is displayed +## Issue 3: The atuned service cannot be started, and the message "Job for atuned.service failed because a timeout was exceeded." is displayed Cause: The hosts file does not contain the localhost information.
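For Issue 3, the hosts file needs its localhost entries restored before atuned is restarted. A hedged sketch — the 127.0.0.1/::1 values below are the conventional defaults, not quoted from this guide:

```shell
# Ensure /etc/hosts resolves localhost (IPv4 and IPv6), appending the
# conventional default entries only if they are missing.
grep -q '^127\.0\.0\.1[[:space:]].*localhost' /etc/hosts || \
    echo '127.0.0.1   localhost' >> /etc/hosts
grep -q '^::1[[:space:]].*localhost' /etc/hosts || \
    echo '::1         localhost' >> /etc/hosts

# Restart the service once resolution works (ignore if atuned is absent).
systemctl restart atuned || true
```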
diff --git a/docs/en/docs/A-Tune/figures/en-us_image_0214540398.png b/docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0214540398.png similarity index 100% rename from docs/en/docs/A-Tune/figures/en-us_image_0214540398.png rename to docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0214540398.png diff --git a/docs/en/docs/A-Tune/figures/en-us_image_0227497000.png b/docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0227497000.png similarity index 100% rename from docs/en/docs/A-Tune/figures/en-us_image_0227497000.png rename to docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0227497000.png diff --git a/docs/en/docs/A-Tune/figures/en-us_image_0227497343.png b/docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0227497343.png similarity index 100% rename from docs/en/docs/A-Tune/figures/en-us_image_0227497343.png rename to docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0227497343.png diff --git a/docs/en/docs/A-Tune/figures/en-us_image_0231122163.png b/docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0231122163.png similarity index 100% rename from docs/en/docs/A-Tune/figures/en-us_image_0231122163.png rename to docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0231122163.png diff --git a/docs/en/docs/A-Tune/figures/en-us_image_0245342444.png b/docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0245342444.png similarity index 100% rename from docs/en/docs/A-Tune/figures/en-us_image_0245342444.png rename to docs/en/Server/Performance/SystemOptimization/A-Tune/figures/en-us_image_0245342444.png diff --git a/docs/en/docs/A-Tune/figures/picture1.png b/docs/en/Server/Performance/SystemOptimization/A-Tune/figures/picture1.png similarity index 100% rename from docs/en/docs/A-Tune/figures/picture1.png rename to 
docs/en/Server/Performance/SystemOptimization/A-Tune/figures/picture1.png diff --git a/docs/en/docs/A-Tune/figures/picture4.png b/docs/en/Server/Performance/SystemOptimization/A-Tune/figures/picture4.png similarity index 100% rename from docs/en/docs/A-Tune/figures/picture4.png rename to docs/en/Server/Performance/SystemOptimization/A-Tune/figures/picture4.png diff --git a/docs/en/Server/Performance/SystemOptimization/A-Tune/getting-to-know-a-tune.md b/docs/en/Server/Performance/SystemOptimization/A-Tune/getting-to-know-a-tune.md new file mode 100644 index 0000000000000000000000000000000000000000..78c84dd0df7194bca6669cbfcb8ee47ad437d49a --- /dev/null +++ b/docs/en/Server/Performance/SystemOptimization/A-Tune/getting-to-know-a-tune.md @@ -0,0 +1,68 @@ +# Getting to Know A-Tune + +- [Getting to Know A-Tune](#getting-to-know-a-tune) + - [Introduction](#introduction) + - [Architecture](#architecture) + - [Supported Features and Service Models](#supported-features-and-service-models) + +## Introduction + +An operating system \(OS\) is basic software that connects applications and hardware. It is critical for users to adjust OS and application configurations and make full use of software and hardware capabilities to achieve optimal service performance. However, numerous workload types and varied applications run on the OS, and the requirements on resources are different. Currently, the application environment composed of hardware and software involves more than 7000 configuration objects. As the service complexity and optimization objects increase, the time cost for optimization increases exponentially. As a result, optimization efficiency decreases sharply. Optimization becomes complex and brings great challenges to users. + +Second, as infrastructure software, the OS provides a large number of software and hardware management capabilities. The capability required varies in different scenarios. 
Therefore, capabilities need to be enabled or disabled depending on scenarios, and a combination of capabilities will maximize the optimal performance of applications. + +In addition, the actual business embraces hundreds and thousands of scenarios, and each scenario involves a wide variety of hardware configurations for computing, network, and storage. The lab cannot list all applications, business scenarios, and hardware combinations. + +To address the preceding challenges, openEuler launches A-Tune. + +A-Tune is an AI-based engine that optimizes system performance. It uses AI technologies to precisely profile business scenarios, discover and infer business characteristics, so as to make intelligent decisions, match with the optimal system parameter configuration combination, and give recommendations, ensuring the optimal business running status. + +![](figures/en-us_image_0227497000.png) + +## Architecture + +The following figure shows the A-Tune core technical architecture, which consists of intelligent decision-making, system profile, and interaction system. + +- Intelligent decision-making layer: consists of the awareness and decision-making subsystems, which implements intelligent awareness of applications and system optimization decision-making, respectively. +- System profile layer: consists of the feature engineering and two-layer classification model. The feature engineering is used to automatically select service features, and the two-layer classification model is used to learn and classify service models. +- Interaction system layer: monitors and configures various system resources and executes optimization policies. + +![](figures/en-us_image_0227497343.png) + +## Supported Features and Service Models + +### Supported Features + +[Table 1](#table1919220557576) describes the main features supported by A-Tune, feature maturity, and usage suggestions. 
+ +**Table 1** Feature maturity + + + +| Feature | Maturity | Usage Suggestion | +| --------------------------------------------------------- | -------- | ---------------- | +| Auto optimization of 15 applications in 11 workload types | Tested | Pilot | +| User-defined profile and service models | Tested | Pilot | +| Automatic parameter optimization | Tested | Pilot | + +### Supported Service Models + +Based on the workload characteristics of applications, A-Tune classifies services into 11 types. For details about the bottleneck of each type and the applications supported by A-Tune, see [Table 2](#table2819164611311). + +**Table 2** Supported workload types and applications + + + +| Service category | Type | Bottleneck | Supported Application | +| ------------------ | -------------------- | ------------------------------------------------------------ | ----------------------------------- | +| default | Default type | Low resource usage in terms of cpu, memory, network, and I/O | N/A | +| webserver | Web application | Bottlenecks of cpu and network | Nginx, Apache Traffic Server | +| database | Database | Bottlenecks of cpu, memory, and I/O | Mongodb, Mysql, Postgresql, Mariadb | +| big_data | Big data | Bottlenecks of cpu and memory | Hadoop-hdfs, Hadoop-spark | +| middleware | Middleware framework | Bottlenecks of cpu and network | Dubbo | +| in-memory_database | Memory database | Bottlenecks of memory and I/O | Redis | +| basic-test-suite | Basic test suite | Bottlenecks of cpu and memory | SPECCPU2006, SPECjbb2015 | +| hpc | Human genome | Bottlenecks of cpu, memory, and I/O | Gatk4 | +| storage | Storage | Bottlenecks of network, and I/O | Ceph | +| virtualization | Virtualization | Bottlenecks of cpu, memory, and I/O | Consumer-cloud, Mariadb | +| docker | Docker | Bottlenecks of cpu, memory, and I/O | Mariadb | diff --git a/docs/en/docs/A-Tune/installation-and-deployment.md b/docs/en/Server/Performance/SystemOptimization/A-Tune/installation-and-deployment.md 
similarity index 98% rename from docs/en/docs/A-Tune/installation-and-deployment.md rename to docs/en/Server/Performance/SystemOptimization/A-Tune/installation-and-deployment.md index ae6bca81115139c4be67069003add94f6a030dfa..307e9a77b4436947849401ad1f09daefe6b5402d 100644 --- a/docs/en/docs/A-Tune/installation-and-deployment.md +++ b/docs/en/Server/Performance/SystemOptimization/A-Tune/installation-and-deployment.md @@ -29,7 +29,7 @@ This chapter describes how to install and deploy A-Tune. ## Environment Preparation -For details about installing an openEuler OS, see the [_openEuler Installation Guide_](../Installation/Installation.md). +For details about installing an openEuler OS, see the [_openEuler Installation Guide_](../../../InstallationUpgrade/Installation/installation.md). ## A-Tune Installation @@ -90,8 +90,8 @@ To install the A-Tune, perform the following steps: 4. Install an A-Tune server. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >In this step, both the server and client software packages are installed. For the single-node deployment, skip **Step 5**. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > In this step, both the server and client software packages are installed. For the single-node deployment, skip **Step 5**. ```shell yum install atune -y @@ -113,7 +113,7 @@ To install the A-Tune, perform the following steps: atune-xxx atune-engine-xxx ``` - + If the preceding information is displayed, the installation is successful. ## A-Tune Deployment @@ -372,7 +372,7 @@ To use AI functions, you need to start the A-Tune engine service. 
If the following command output is displayed, the service is started successfully: ![](./figures/en-us_image_0245342444.png) - + ## Distributed Deployment ### Purpose of Distributed Deployment diff --git a/docs/en/docs/A-Tune/native-turbo.md b/docs/en/Server/Performance/SystemOptimization/A-Tune/native-turbo.md similarity index 98% rename from docs/en/docs/A-Tune/native-turbo.md rename to docs/en/Server/Performance/SystemOptimization/A-Tune/native-turbo.md index 4b1050a5044e73309fd471a96c84814818bbed9b..0abd1b3e503143f89e99faedd06cd0ac17a42110 100644 --- a/docs/en/docs/A-Tune/native-turbo.md +++ b/docs/en/Server/Performance/SystemOptimization/A-Tune/native-turbo.md @@ -43,7 +43,7 @@ To facilitate the use of huge pages, the Native-Turbo feature enables the system -zcommon-page-size=0x200000 -zmax-page-size=0x200000 ``` -2. Sufficient huge pages must be reserved before use. Otherwise, the program wil fail to be executed. +2. Sufficient huge pages must be reserved before use. Otherwise, the program will fail to be executed. If the cgroup is used, pay attention to the `hugetlb` limit. If the limit is less than the number of required huge pages, the system may break down during running. 
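The huge page reservation that Native-Turbo constraint 2 above requires can be done through the standard vm.nr_hugepages kernel parameter; a minimal sketch, where the count of 512 pages is an illustrative assumption rather than a value from this guide:

```shell
# Reserve 512 x 2 MB huge pages (1 GB total). Size the count to cover the
# segments of the ELF files you will load; requires root privileges.
sysctl -w vm.nr_hugepages=512 || echo "reservation failed (need root?)"

# Verify the reservation before launching the program.
grep -E 'HugePages_(Total|Free)' /proc/meminfo
```

If the program runs inside a cgroup, also check that the hugetlb limit is at least the number of pages reserved here, per the note above.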
diff --git a/docs/en/docs/sysMaster/public_sys-resources/icon-note.gif b/docs/en/Server/Performance/SystemOptimization/A-Tune/public_sys-resources/icon-note.gif similarity index 100% rename from docs/en/docs/sysMaster/public_sys-resources/icon-note.gif rename to docs/en/Server/Performance/SystemOptimization/A-Tune/public_sys-resources/icon-note.gif diff --git a/docs/en/Server/Performance/SystemOptimization/A-Tune/usage-instructions.md b/docs/en/Server/Performance/SystemOptimization/A-Tune/usage-instructions.md new file mode 100644 index 0000000000000000000000000000000000000000..4a4cb70a614dd7f155473b2db1ccb65a76b34181 --- /dev/null +++ b/docs/en/Server/Performance/SystemOptimization/A-Tune/usage-instructions.md @@ -0,0 +1,757 @@ +# Usage Instructions + +You can use functions provided by A-Tune through the CLI client atune-adm. This chapter describes the functions and usage of the A-Tune client. + +- [Usage Instructions](#usage-instructions) + - [Overview](#overview) + - [Querying Workload Types](#querying-workload-types) + - [list](#list) + - [Workload Type Analysis and Auto Optimization](#workload-type-analysis-and-auto-optimization) + - [analysis](#analysis) + - [User-defined Model](#user-defined-model) + - [define](#define) + - [collection](#collection) + - [train](#train) + - [undefine](#undefine) + - [Querying Profiles](#querying-profiles) + - [info](#info) + - [Updating a Profile](#updating-a-profile) + - [update](#update) + - [Activating a Profile](#activating-a-profile) + - [profile](#profile) + - [Rolling Back Profiles](#rolling-back-profiles) + - [rollback](#rollback) + - [Updating Database](#updating-database) + - [upgrade](#upgrade) + - [Querying System Information](#querying-system-information) + - [check](#check) + - [Automatic Parameter Optimization](#automatic-parameter-optimization) + - [Tuning](#tuning) + +## Overview + +- You can run the **atune-adm help/--help/-h** command to query commands supported by atune-adm. 
+- The **define**, **update**, **undefine**, **collection**, **train**, and **upgrade** commands do not support remote execution.
+- In the command format, brackets \(\[\]\) indicate that the parameter is optional, and angle brackets \(<\>\) indicate that the parameter is mandatory. The actual parameters prevail.
+
+## Querying Workload Types
+
+### list
+
+#### Function
+
+Query the supported profiles and the **Active** status of each profile.
+
+#### Format
+
+**atune-adm list**
+
+#### Example
+
+```shell
+# atune-adm list
+
+Support profiles:
++------------------------------------------------+-----------+
+| ProfileName                                    | Active    |
++================================================+===========+
+| arm-native-android-container-robox             | false     |
++------------------------------------------------+-----------+
+| basic-test-suite-euleros-baseline-fio          | false     |
++------------------------------------------------+-----------+
+| basic-test-suite-euleros-baseline-lmbench      | false     |
++------------------------------------------------+-----------+
+| basic-test-suite-euleros-baseline-netperf      | false     |
++------------------------------------------------+-----------+
+| basic-test-suite-euleros-baseline-stream       | false     |
++------------------------------------------------+-----------+
+| basic-test-suite-euleros-baseline-unixbench    | false     |
++------------------------------------------------+-----------+
+| basic-test-suite-speccpu-speccpu2006           | false     |
++------------------------------------------------+-----------+
+| basic-test-suite-specjbb-specjbb2015           | false     |
++------------------------------------------------+-----------+
+| big-data-hadoop-hdfs-dfsio-hdd                 | false     |
++------------------------------------------------+-----------+
+| big-data-hadoop-hdfs-dfsio-ssd                 | false     |
++------------------------------------------------+-----------+
+| big-data-hadoop-spark-bayesian                 | false     |
++------------------------------------------------+-----------+
+| big-data-hadoop-spark-kmeans                   | false     |
++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql1 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql10 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql2 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql3 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql4 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql5 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql6 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql7 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql8 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-sql9 | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-tersort | false | ++------------------------------------------------+-----------+ +| big-data-hadoop-spark-wordcount | false | ++------------------------------------------------+-----------+ +| cloud-compute-kvm-host | false | ++------------------------------------------------+-----------+ +| database-mariadb-2p-tpcc-c3 | false | ++------------------------------------------------+-----------+ +| database-mariadb-4p-tpcc-c3 | false | ++------------------------------------------------+-----------+ +| database-mongodb-2p-sysbench | false | ++------------------------------------------------+-----------+ +| database-mysql-2p-sysbench-hdd | false | ++------------------------------------------------+-----------+ +| database-mysql-2p-sysbench-ssd | false | ++------------------------------------------------+-----------+ +| database-postgresql-2p-sysbench-hdd | false 
| ++------------------------------------------------+-----------+ +| database-postgresql-2p-sysbench-ssd | false | ++------------------------------------------------+-----------+ +| default-default | false | ++------------------------------------------------+-----------+ +| docker-mariadb-2p-tpcc-c3 | false | ++------------------------------------------------+-----------+ +| docker-mariadb-4p-tpcc-c3 | false | ++------------------------------------------------+-----------+ +| hpc-gatk4-human-genome | false | ++------------------------------------------------+-----------+ +| in-memory-database-redis-redis-benchmark | false | ++------------------------------------------------+-----------+ +| middleware-dubbo-dubbo-benchmark | false | ++------------------------------------------------+-----------+ +| storage-ceph-vdbench-hdd | false | ++------------------------------------------------+-----------+ +| storage-ceph-vdbench-ssd | false | ++------------------------------------------------+-----------+ +| virtualization-consumer-cloud-olc | false | ++------------------------------------------------+-----------+ +| virtualization-mariadb-2p-tpcc-c3 | false | ++------------------------------------------------+-----------+ +| virtualization-mariadb-4p-tpcc-c3 | false | ++------------------------------------------------+-----------+ +| web-apache-traffic-server-spirent-pingpo | false | ++------------------------------------------------+-----------+ +| web-nginx-http-long-connection | true | ++------------------------------------------------+-----------+ +| web-nginx-https-short-connection | false | ++------------------------------------------------+-----------+ +``` + +> ![](public_sys-resources/icon-note.gif) **NOTE:** +> If the value of Active is **true**, the profile is activated. In the example, the profile of web-nginx-http-long-connection is activated. 
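When this output needs to be consumed by scripts, the activated profile can be extracted with standard text tools. The snippet below is a small illustration, not part of atune-adm itself; the here-doc simulates a fragment of the `atune-adm list` table, and in practice you would pipe the real command output into the same `awk` filter.

```shell
# Sketch: extract the names of activated profiles from "atune-adm list" output.
# The here-doc simulates a fragment of the table; in real use, run:
#   atune-adm list | awk -F'|' '$3 ~ /true/ {gsub(/ /, "", $2); print $2}'
cat <<'EOF' > /tmp/atune_list.txt
| web-apache-traffic-server-spirent-pingpo       | false     |
| web-nginx-http-long-connection                 | true      |
| web-nginx-https-short-connection               | false     |
EOF

# Field 2 is the profile name and field 3 the Active flag; strip the padding
# spaces from the name before printing it.
awk -F'|' '$3 ~ /true/ {gsub(/ /, "", $2); print $2}' /tmp/atune_list.txt
```

Here the filter prints `web-nginx-http-long-connection`, the only profile whose **Active** column is **true**.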
+
+## Workload Type Analysis and Auto Optimization
+
+### analysis
+
+#### Function
+
+Collect real-time statistics from the system to identify and automatically optimize workload types.
+
+#### Format
+
+**atune-adm analysis** \[OPTIONS\]
+
+#### Parameter Description
+
+- OPTIONS
+
+| Parameter | Description |
+| ------------------------ | ---------------------------------------------------------------------------------------------- |
+| --model, -m | New model generated after user self-training |
+| --characterization, -c | Use the default model for application identification and do not perform automatic optimization |
+| --times value, -t value | Time duration for data collection |
+| --script value, -s value | File to be executed |
+
+#### Example
+
+- Use the default model for application identification.
+
+    ```shell
+    # atune-adm analysis --characterization
+    ```
+
+- Use the default model to identify applications and perform automatic tuning.
+
+    ```shell
+    # atune-adm analysis
+    ```
+
+- Use the user-defined training model for recognition.
+
+    ```shell
+    # atune-adm analysis --model /usr/libexec/atuned/analysis/models/new-model.m
+    ```
+
+## User-defined Model
+
+A-Tune allows users to define and learn new models. To define a new model, perform the following steps:
+
+1. Run the **define** command to define a new profile.
+2. Run the **collection** command to collect the system data corresponding to the application.
+3. Run the **train** command to train the model.
+
+### define
+
+#### Function
+
+Add a user-defined application scenario and the corresponding profile tuning items.
+
+#### Format
+
+**atune-adm define** \<service_type\> \<application_name\> \<scenario_name\> \<profile_path\>
+
+#### Example
+
+Add a profile whose service_type is **test_service**, application_name is **test_app**, scenario_name is **test_scenario**, and tuning item configuration file is **example.conf**.
+
+```shell
+# atune-adm define test_service test_app test_scenario ./example.conf
+```
+
+The **example.conf** file can be written as follows (the following optimization items are optional and are for reference only). You can also run the **atune-adm info** command to view how the existing profile is written.
+
+```ini
+ [main]
+ # list its parent profile
+ [kernel_config]
+ # to change the kernel config
+ [bios]
+ # to change the bios config
+ [bootloader.grub2]
+ # to change the grub2 config
+ [sysfs]
+ # to change the /sys/* config
+ [systemctl]
+ # to change the system service status
+ [sysctl]
+ # to change the /proc/sys/* config
+ [script]
+ # the script extension of cpi
+ [ulimit]
+ # to change the resources limit of user
+ [schedule_policy]
+ # to change the schedule policy
+ [check]
+ # check the environment
+ [tip]
+ # the recommended optimization, which should be performed manually
+```
+
+### collection
+
+#### Function
+
+Collect the global resource usage and OS status information during service running, and save the collected information to a CSV output file as the input dataset for model training.
+
+> ![](public_sys-resources/icon-note.gif) **NOTE:**
+>
+> - This command depends on the sampling tools such as perf, mpstat, vmstat, iostat, and sar.
+> - Currently, only the Kunpeng 920 CPU is supported. You can run the **dmidecode -t processor** command to check the CPU model.
+
+#### Format
+
+**atune-adm collection**
+
+#### Parameter Description
+
+- OPTIONS
+
+| Parameter | Description |
+| ----------- | ----------- |
+| --filename, -f | Name of the generated CSV file used for training: *name*-*timestamp*.csv |
+| --output_path, -o | Path for storing the generated CSV file. The absolute path is required. |
+| --disk, -b | Disk used during service running, for example, /dev/sda. |
+| --network, -n | Network port used during service running, for example, eth0. |
+| --app_type, -t | Mark the application type of the service as a label for training. |
+| --duration, -d | Data collection time during service running, in seconds. The default collection time is 1200 seconds. |
+| --interval, -i | Interval for collecting data, in seconds. The default interval is 5 seconds. |
+
+#### Example
+
+```shell
+# atune-adm collection --filename name --interval 5 --duration 1200 --output_path /home/data --disk sda --network eth0 --app_type test_service-test_app-test_scenario
+```
+
+> Note:
+>
+> In the example, data is collected every 5 seconds for a duration of 1200 seconds. The collected data is stored as the *name* file in the **/home/data** directory. The application type of the service is defined by the `atune-adm define` command, which is **test_service-test_app-test_scenario** in this example.
+> The data collection interval and duration can be specified using the preceding command options.
+
+### train
+
+#### Function
+
+Use the collected data to train the model. Collect data of at least two application types during training. Otherwise, an error is reported.
+
+#### Format
+
+**atune-adm train**
+
+#### Parameter Description
+
+- OPTIONS
+
+  | Parameter | Description |
+  | ----------------- | ------------------------------------------------------ |
+  | --data_path, -d | Path for storing CSV files required for model training |
+  | --output_file, -o | Model generated through training |
+
+#### Example
+
+Use the CSV file in the **data** directory as the training input. The generated model **new-model.m** is stored in the **model** directory.
+
+```shell
+# atune-adm train --data_path /home/data --output_file /usr/libexec/atuned/analysis/models/new-model.m
+```
+
+### undefine
+
+#### Function
+
+Delete a user-defined profile.
+
+#### Format
+
+**atune-adm undefine** \<PROFILE\>
+
+#### Example
+
+Delete the user-defined profile.
+
+```shell
+# atune-adm undefine test_service-test_app-test_scenario
+```
+
+## Querying Profiles
+
+### info
+
+#### Function
+
+View the profile content.
+
+#### Format
+
+**atune-adm info** \<PROFILE\>
+
+#### Example
+
+View the profile content of web-nginx-http-long-connection.
+
+```shell
+# atune-adm info web-nginx-http-long-connection
+
+*** web-nginx-http-long-connection:
+
+#
+# nginx http long connection A-Tune configuration
+#
+[main]
+include = default-default
+
+[kernel_config]
+#TODO CONFIG
+
+[bios]
+#TODO CONFIG
+
+[bootloader.grub2]
+iommu.passthrough = 1
+
+[sysfs]
+#TODO CONFIG
+
+[systemctl]
+sysmonitor = stop
+irqbalance = stop
+
+[sysctl]
+fs.file-max = 6553600
+fs.suid_dumpable = 1
+fs.aio-max-nr = 1048576
+kernel.shmmax = 68719476736
+kernel.shmall = 4294967296
+kernel.shmmni = 4096
+kernel.sem = 250 32000 100 128
+net.ipv4.tcp_tw_reuse = 1
+net.ipv4.tcp_syncookies = 1
+net.ipv4.ip_local_port_range = 1024 65500
+net.ipv4.tcp_max_tw_buckets = 5000
+net.core.somaxconn = 65535
+net.core.netdev_max_backlog = 262144
+net.ipv4.tcp_max_orphans = 262144
+net.ipv4.tcp_max_syn_backlog = 262144
+net.ipv4.tcp_timestamps = 0
+net.ipv4.tcp_synack_retries = 1
+net.ipv4.tcp_syn_retries = 1
+net.ipv4.tcp_fin_timeout = 1
+net.ipv4.tcp_keepalive_time = 60
+net.ipv4.tcp_mem = 362619 483495 725238
+net.ipv4.tcp_rmem = 4096 87380 6291456
+net.ipv4.tcp_wmem = 4096 16384 4194304
+net.core.wmem_default = 8388608
+net.core.rmem_default = 8388608
+net.core.rmem_max = 16777216
+net.core.wmem_max = 16777216
+
+[script]
+prefetch = off
+ethtool = -X {network} hfunc toeplitz
+
+[ulimit]
+{user}.hard.nofile = 102400
+{user}.soft.nofile = 102400
+
+[schedule_policy]
+#TODO CONFIG
+
+[check]
+#TODO CONFIG
+
+[tip]
+SELinux provides extra control and security features to linux kernel. Disabling SELinux will improve the performance but may cause security risks. = kernel
+disable the nginx log = application
+```
+
+## Updating a Profile
+
+You can update the existing profile as required.
+
+### update
+
+#### Function
+
+Update the original tuning items in the existing profile to the content in the **new.conf** file.
+
+#### Format
+
+**atune-adm update** \<PROFILE\> \<PROFILE_PATH\>
+
+#### Example
+
+Change the tuning items of the profile named **test_service-test_app-test_scenario** to those in **new.conf**.
+
+```shell
+# atune-adm update test_service-test_app-test_scenario ./new.conf
+```
+
+## Activating a Profile
+
+### profile
+
+#### Function
+
+Manually activate a profile to put it in the active state.
+
+#### Format
+
+**atune-adm profile** \<PROFILE\>
+
+#### Parameter Description
+
+For details about the profile name, see the query result of the list command.
+
+#### Example
+
+Activate the web-nginx-http-long-connection profile.
+
+```shell
+# atune-adm profile web-nginx-http-long-connection
+```
+
+## Rolling Back Profiles
+
+### rollback
+
+#### Function
+
+Roll back the current configuration to the initial configuration of the system.
+
+#### Format
+
+**atune-adm rollback**
+
+#### Example
+
+```shell
+# atune-adm rollback
+```
+
+## Updating Database
+
+### upgrade
+
+#### Function
+
+Update the system database.
+
+#### Format
+
+**atune-adm upgrade** \<DB_FILE\>
+
+#### Parameter Description
+
+- DB\_FILE
+
+  New database file path.
+
+#### Example
+
+Update the database to **new\_sqlite.db**.
+
+```shell
+# atune-adm upgrade ./new_sqlite.db
+```
+
+## Querying System Information
+
+### check
+
+#### Function
+
+Check the CPU, BIOS, OS, and NIC information.
+
+#### Format
+
+**atune-adm check**
+
+#### Example
+
+```shell
+# atune-adm check
+ cpu information:
+ cpu:0 version: Kunpeng 920-6426 speed: 2600000000 HZ cores: 64
+ cpu:1 version: Kunpeng 920-6426 speed: 2600000000 HZ cores: 64
+ system information:
+ DMIBIOSVersion: 0.59
+ OSRelease: 4.19.36-vhulk1906.3.0.h356.eulerosv2r8.aarch64
+ network information:
+ name: eth0 product: HNS GE/10GE/25GE RDMA Network Controller
+ name: eth1 product: HNS GE/10GE/25GE Network Controller
+ name: eth2 product: HNS GE/10GE/25GE RDMA Network Controller
+ name: eth3 product: HNS GE/10GE/25GE Network Controller
+ name: eth4 product: HNS GE/10GE/25GE RDMA Network Controller
+ name: eth5 product: HNS GE/10GE/25GE Network Controller
+ name: eth6 product: HNS GE/10GE/25GE RDMA Network Controller
+ name: eth7 product: HNS GE/10GE/25GE Network Controller
+ name: docker0 product:
+```
+
+## Automatic Parameter Optimization
+
+A-Tune can automatically search for the optimal configuration, eliminating the need for repeated manual parameter tuning and performance evaluation and greatly improving the efficiency of finding the optimal configuration.
+
+### Tuning
+
+#### Function
+
+Use the specified project file to search the dynamic parameter space and find the optimal solution under the current environment configuration.
+
+#### Format
+
+**atune-adm tuning** \[OPTIONS\] \<PROJECT_YAML\>
+
+> ![](public_sys-resources/icon-note.gif) **NOTE:**
+> Before running the command, ensure that the following conditions are met:
+>
+> 1. The YAML configuration file on the server has been edited and stored in the **/etc/atuned/tuning/** directory of the atuned service.
+> 2. The YAML configuration file of the client has been edited and stored on the atuned client.
+
+#### Parameter Description
+
+- OPTIONS
+
+| Parameter | Description |
+| ------------- | ----------------------------------------------------------- |
+| --restore, -r | Restores the initial configuration before tuning. |
+| --project, -p | Specifies the project name in the YAML file to be restored. |
+| --restart, -c | Performs tuning based on historical tuning results. |
+| --detail, -d | Prints detailed information about the tuning process. |
+
+> ![](public_sys-resources/icon-note.gif) **NOTE:**
+> If the **--restore** parameter is used, the **-p** parameter must be followed by a specific project name and the YAML file of the project must be specified.
+
+- **PROJECT\_YAML**: YAML configuration file of the client.
+
+#### Configuration Description
+
+**Table 1** YAML file on the server
+
+| Name | Description | Type | Value Range |
+| ----------- | ----------- | ----------- | ----------- |
+| project | Project name. | Character string | - |
+| startworkload | Script for starting the service to be optimized. | Character string | - |
+| stopworkload | Script for stopping the service to be optimized. | Character string | - |
+| maxiterations | Maximum number of optimization iterations, which is used to limit the number of iterations on the client. Generally, the more optimization iterations, the better the optimization effect, but the longer the time required. Set this parameter based on the site requirements. | Integer | >10 |
+| object | Parameters to be optimized and related information. For details about the object configuration items, see Table 2. | - | - |
+
+**Table 2** Description of object configuration items
+
+| Name | Description | Type | Value Range |
+| ----------- | ----------- | ----------- | ----------- |
+| name | Parameter to be optimized. | Character string | - |
+| desc | Description of parameters to be optimized. | Character string | - |
+| get | Script for querying parameter values. | - | - |
+| set | Script for setting parameter values. | - | - |
+| needrestart | Specifies whether to restart the service for the parameter to take effect. | Enumeration | **true** or **false** |
+| type | Parameter type. Currently, the **discrete** and **continuous** types are supported. | Enumeration | **discrete** or **continuous** |
+| dtype | This parameter is available only when **type** is set to **discrete**. Currently, **int**, **float**, and **string** are supported. | Enumeration | int, float, string |
+| scope | Parameter setting range. This parameter is valid only when **type** is set to **discrete** and **dtype** is set to **int** or **float**, or **type** is set to **continuous**. | Integer/Float | The value is user-defined and must be within the valid range of this parameter. |
+| step | Parameter value step, which is used when **dtype** is set to **int** or **float**. | Integer/Float | This value is user-defined. |
+| items | Enumerated values outside **scope** that the parameter can take. This is used when **dtype** is set to **int** or **float**. | Integer/Float | The value is user-defined and must be within the valid range of this parameter. |
+| options | Enumerated value range of the parameter value, which is used when **dtype** is set to **string**. | Character string | The value is user-defined and must be within the valid range of this parameter. |
+
+**Table 3** Description of configuration items of a YAML file on the client
+
+| Name | Description | Type | Value Range |
+| ----------- | ----------- | ----------- | ----------- |
+| project | Project name, which must be the same as that in the configuration file on the server. | Character string | - |
+| engine | Tuning algorithm. | Character string | "random", "forest", "gbrt", "bayes", "extraTrees" |
+| iterations | Number of optimization iterations. | Integer | ≥ 10 |
+| random_starts | Number of random iterations. | Integer | < iterations |
+| feature_filter_engine | Parameter search algorithm, which is used to select important parameters. This parameter is optional. | Character string | "lhs" |
+| feature_filter_cycle | Parameter search cycles, which is used to select important parameters. This parameter is used together with feature_filter_engine. | Integer | - |
+| feature_filter_iters | Number of iterations for each cycle of parameter search, which is used to select important parameters. This parameter is used together with feature_filter_engine. | Integer | - |
+| split_count | Number of evenly selected parameters in the value range of tuning parameters, which is used to select important parameters. This parameter is used together with feature_filter_engine. | Integer | - |
+| benchmark | Performance test script. | - | - |
+| evaluations | Performance test evaluation index. For details about the evaluations configuration items, see Table 4. | - | - |
+
+**Table 4** Description of evaluations configuration items
+
+| Name | Description | Type | Value Range |
+| ----------- | ----------- | ----------- | ----------- |
+| name | Evaluation index name. | Character string | - |
+| get | Script for obtaining performance evaluation results. | - | - |
+| type | Specifies a **positive** or **negative** type of the evaluation result. The value **positive** indicates that the performance value is minimized, and the value **negative** indicates that the performance value is maximized. | Enumeration | **positive** or **negative** |
+| weight | Weight of the index. The value ranges from 0 to 100. | Integer | 0-100 |
+| threshold | Minimum performance requirement of the index. | Integer | User-defined |
+
+#### Example
+
+The following is an example of the YAML file configuration on a server:
+
+```yaml
+project: "compress"
+maxiterations: 500
+startworkload: ""
+stopworkload: ""
+object :
+  -
+    name : "compressLevel"
+    info :
+        desc : "The compresslevel parameter is an integer from 1 to 9 controlling the level of compression"
+        get : "cat /root/A-Tune/examples/tuning/compress/compress.py | grep 'compressLevel=' | awk -F '=' '{print $2}'"
+        set : "sed -i 's/compressLevel=\\s*[0-9]*/compressLevel=$value/g' /root/A-Tune/examples/tuning/compress/compress.py"
+        needrestart : "false"
+        type : "continuous"
+        scope :
+          - 1
+          - 9
+        dtype : "int"
+  -
+    name : "compressMethod"
+    info :
+        desc : "The compressMethod parameter is a string controlling the compression method"
+        get : "cat /root/A-Tune/examples/tuning/compress/compress.py | grep 'compressMethod=' | awk -F '=' '{print $2}' | sed 's/\"//g'"
+        set : "sed -i 's/compressMethod=\\s*[0-9,a-z,\"]*/compressMethod=\"$value\"/g' /root/A-Tune/examples/tuning/compress/compress.py"
+        needrestart : "false"
+        type : "discrete"
+        options :
+          - "bz2"
+          - "zlib"
+          - "gzip"
+        dtype : "string"
+```
+
+The following is an
example of the YAML file configuration on a client: + +```yaml +project: "compress" +engine : "gbrt" +iterations : 20 +random_starts : 10 + +benchmark : "python3 /root/A-Tune/examples/tuning/compress/compress.py" +evaluations : + - + name: "time" + info: + get: "echo '$out' | grep 'time' | awk '{print $3}'" + type: "positive" + weight: 20 + - + name: "compress_ratio" + info: + get: "echo '$out' | grep 'compress_ratio' | awk '{print $3}'" + type: "negative" + weight: 80 +``` + +#### Example + +- Download test data. + + ```shell + wget http://cs.fit.edu/~mmahoney/compression/enwik8.zip + ``` + +- Prepare the tuning environment. + + Example of **prepare.sh**: + + ```shell + #!/usr/bin/bash + if [ "$#" -ne 1 ]; then + echo "USAGE: $0 the path of enwik8.zip" + exit 1 + fi + + path=$( + cd "$(dirname "$0")" + pwd + ) + + echo "unzip enwik8.zip" + unzip "$path"/enwik8.zip + + echo "set FILE_PATH to the path of enwik8 in compress.py" + sed -i "s#compress/enwik8#$path/enwik8#g" "$path"/compress.py + + echo "update the client and server yaml files" + sed -i "s#python3 .*compress.py#python3 $path/compress.py#g" "$path"/compress_client.yaml + sed -i "s# compress/compress.py# $path/compress.py#g" "$path"/compress_server.yaml + + echo "copy the server yaml file to /etc/atuned/tuning/" + cp "$path"/compress_server.yaml /etc/atuned/tuning/ + ``` + + Run the script. + + ```shell + sh prepare.sh enwik8.zip + ``` + +- Run the `tuning` command to tune the parameters. + + ```shell + atune-adm tuning --project compress --detail compress_client.yaml + ``` + +- Restore the configuration before running `tuning`. **compress** indicates the project name in the YAML file. 
+ + ```shell + atune-adm tuning --restore --project compress + ``` diff --git a/docs/en/Server/Performance/SystemOptimization/Menu/index.md b/docs/en/Server/Performance/SystemOptimization/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..156064f5b999d5f817798fd3554689a37fa3a47a --- /dev/null +++ b/docs/en/Server/Performance/SystemOptimization/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [A-Tune User Guide]({{< relref "./A-Tune/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Performance/TuningFramework/Menu/index.md b/docs/en/Server/Performance/TuningFramework/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..ec388b93edcc10e5fd2de7c0c8f5fb0a92a8f413 --- /dev/null +++ b/docs/en/Server/Performance/TuningFramework/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [oeAware User Guide]({{< relref "./oeAware/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Performance/TuningFramework/oeAware/Menu/index.md b/docs/en/Server/Performance/TuningFramework/oeAware/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..fd68c7664e8b3d238a041376dada0c3380edcc2f --- /dev/null +++ b/docs/en/Server/Performance/TuningFramework/oeAware/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [oeAware User Guide]({{< relref "./oeaware-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Performance/TuningFramework/oeAware/figures/dep-failed.png b/docs/en/Server/Performance/TuningFramework/oeAware/figures/dep-failed.png new file mode 100644 index 0000000000000000000000000000000000000000..afb4750135657876b455978bf9d8f5eff36be91e Binary files /dev/null and b/docs/en/Server/Performance/TuningFramework/oeAware/figures/dep-failed.png differ diff --git a/docs/en/Server/Performance/TuningFramework/oeAware/figures/dep.png b/docs/en/Server/Performance/TuningFramework/oeAware/figures/dep.png new file mode 
100644
index 0000000000000000000000000000000000000000..91388d6a860f032c86c0559b232f2d5ef55a40f8
Binary files /dev/null and b/docs/en/Server/Performance/TuningFramework/oeAware/figures/dep.png differ
diff --git a/docs/en/Server/Performance/TuningFramework/oeAware/figures/dependency.png b/docs/en/Server/Performance/TuningFramework/oeAware/figures/dependency.png
new file mode 100644
index 0000000000000000000000000000000000000000..0cd087fb0c9095e63aa76e0d2464a92225af2399
Binary files /dev/null and b/docs/en/Server/Performance/TuningFramework/oeAware/figures/dependency.png differ
diff --git a/docs/en/Server/Performance/TuningFramework/oeAware/oeaware-user-guide.md b/docs/en/Server/Performance/TuningFramework/oeAware/oeaware-user-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb5e3d7e09e00fe449cae3b61b37cb2370152114
--- /dev/null
+++ b/docs/en/Server/Performance/TuningFramework/oeAware/oeaware-user-guide.md
@@ -0,0 +1,520 @@
+# oeAware User Guide
+
+## Introduction
+
+oeAware is a framework for implementing low-load collection, sensing, and tuning on openEuler. It aims to intelligently enable optimization features after dynamically detecting system behaviors. Traditional optimization features run independently and are statically enabled or disabled. oeAware divides optimization into three layers: collection, sensing, and tuning. Each layer is associated through subscription and is developed as plugins.
+
+## Plugin Description
+
+**Plugin definition**: Each plugin corresponds to an .so file. Plugins are classified into collection plugins, sensing plugins, and tuning plugins.
+
+**Instance definition**: The scheduling unit in the service is the instance. A plugin contains multiple instances. For example, a collection plugin includes multiple collection items, and each collection item is an instance.
+
+**Dependencies Between Instances**
+
+Before running an instance, ensure that its dependencies on other instances are met.
+
+![img](./figures/dependency.png)
+
+- A collection instance does not depend on any other instance.
+
+- A sensing instance depends on a collection instance and other sensing instances.
+
+- A tuning instance depends on a collection instance, sensing instance, and other tuning instances.
+
+## Installation
+
+Configure the openEuler Yum repository and run the `yum` command to install oeAware. On openEuler 22.03 LTS SP4, oeAware is installed by default.
+
+```shell
+yum install oeAware-manager
+```
+
+### Service Startup
+
+Run the `systemd` command to start the service.
+
+```shell
+systemctl start oeaware
+```
+
+Modify the configuration file as required; skip this step if the default configuration is sufficient.
+
+Configuration file path: **/etc/oeAware/config.yaml**
+
+```yaml
+log_path: /var/log/oeAware # Log storage path
+log_level: 1 # Log level. 1: DEBUG; 2: INFO; 3: WARN; 4: ERROR.
+enable_list: # Plugins are enabled by default.
+  - name: libtest.so # Configure the plugin and enable all instances of the plugin.
+  - name: libtest1.so # Configure the plugin and enable the specified plugin instances.
+    instances:
+      - instance1
+      - instance2
+      ...
+  ...
+plugin_list: # Downloaded packages are supported.
+  - name: test # The name must be unique. If the name is repeated, the first occurrence is used.
+    description: hello world
+    url: https://gitee.com/openeuler/oeAware-manager/raw/master/README.md # url must not be empty.
+  ...
+```
+
+After modifying the configuration file, run the following commands to restart the service:
+
+```shell
+systemctl daemon-reload
+systemctl restart oeaware
+```
+
+## Usage
+
+Start the oeaware service. Then, manage plugins and instances using the `oeawarectl` command, which supports loading, unloading, and querying plugins, along with enabling, disabling, and querying instances.
+
+### Plugin Loading
+
+By default, the service loads the plugins in the plugin storage paths.
+
+Collection plugin path: /usr/lib64/oeAware-plugin/collector
+
+Sensing plugin path: /usr/lib64/oeAware-plugin/scenario
+
+Tuning plugin path: /usr/lib64/oeAware-plugin/tune
+
+You can also manually load the plugins.
+
+```shell
+oeawarectl -l | --load <plugin> -t | --type <plugin_type> # plugin type can be collector, scenario, or tune
+```
+
+Example
+
+```shell
+[root@localhost ~]# oeawarectl -l libthread_collect.so -t collector
+Plugin loaded successfully.
+```
+
+If the operation fails, an error description is returned.
+
+### Plugin Unloading
+
+```shell
+oeawarectl -r | --remove <plugin>
+```
+
+Example
+
+```shell
+[root@localhost ~]# oeawarectl -r libthread_collect.so
+Plugin remove successfully.
+```
+
+If the operation fails, an error description is returned.
+
+### Plugin Query
+
+#### Querying Plugin Status
+
+```shell
+oeawarectl -q # Query all loaded plugins.
+oeawarectl --query <plugin> # Query a specified plugin.
+```
+
+Example
+
+```shell
+[root@localhost ~]# oeawarectl -q
+Show plugins and instances status.
+------------------------------------------------------------
+libthread_collector.so
+    thread_collector(available, close) # Plugin instance and status
+libpmu.so
+    pmu_cycles_sampling(available, close)
+    pmu_cycles_counting(available, close)
+    pmu_uncore_counting(available, close)
+    pmu_spe_sampling(available, close)
+libthread_tune.so
+    thread_tune(available, close)
+libthread_scenario.so
+    thread_scenario(available, close)
+------------------------------------------------------------
+format:
+[plugin]
+    [instance]([dependency status], [running status])
+dependency status: available means satisfying dependency, otherwise unavailable.
+running status: running means that instance is running, otherwise close.
+```
+
+If the operation fails, an error description is returned.
+
+#### Querying Plugin Dependencies
+
+```shell
+oeawarectl -Q # Query the dependency graph of loaded instances.
+oeawarectl --query-dep=<instance> # Query the dependency graph of a specified instance.
+``` + +A **dep.png** file will be generated in the current directory to display the dependencies. + +Example + +Relationship diagram when dependencies are met +![img](./figures/dep.png) + +Relationship diagram when dependencies are not met + +![img](./figures/dep-failed.png) + +If the operation fails, an error description is returned. + +### Enabling Plugins + +#### Enabling a Plugin Instance + +```shell +oeawarectl -e | --enable [instance] +``` + +If the operation fails, an error description is returned. + +#### Disabling a Plugin Instance + +```shell +oeawarectl -d | --disable [instance] +``` + +If the operation fails, an error description is returned. + +### Downloading and Installing Plugins + +Use the `--list` command to query the downloadable plugin packages and the installed plugins. + +```shell +oeawarectl --list +``` + +The query result is as follows: + +```shell +Supported Packages: # Downloadable packages +[name1] # plugin_list configured in config +[name2] +... +Installed Plugins: # Installed plugins +[name1] +[name2] +... +``` + +Use the `--install` command to download and install an RPM package. + +```shell +oeawarectl -i | --install [package] # Name of a package queried using --list (package in Supported Packages) +``` + +If the operation fails, an error description is returned. + +### Help + +Use the `--help` command for help information. + +```shell +usage: oeawarectl [options]... + options + -l|--load [plugin] load plugin and need plugin type. + -t|--type [plugin_type] assign plugin type. there are three types: + collector: collection plugin. + scenario: awareness plugin. + tune: tune plugin. + -r|--remove [plugin] remove plugin from system. + -e|--enable [instance] enable the plugin instance. + -d|--disable [instance] disable the plugin instance. + -q query all plugins information. + --query [plugin] query the plugin information. + -Q query all instances dependencies. + --query-dep [instance] query the instance dependency. + --list the list of supported plugins. 
+ -i|--install [plugin] install plugin from the list. + --help show this help message. +``` + +## Plugin Development + +### Common Data Structures of Plugins + +```c +struct DataBuf { + int len; + void *data; +}; +``` + +**struct DataBuf** is the data buffer. + +- **data**: specific data. **data** is an array. The data type can be defined as required. +- **len**: size of **data**. + +```c +struct DataRingBuf { + const char *instance_name; + int index; + uint64_t count; + struct DataBuf *buf; + int buf_len; +}; +``` + +**struct DataRingBuf** facilitates data transfer between plugins, primarily through a circular buffer. + +- **instance_name**: name of the instance that produced the data. For example, when data reaches a sensing plugin, this field identifies which collection plugin each collection item comes from. + +- **index**: current data write position. For example, after each data collection, the index increments. + +- **count**: execution count of the instance, continuously accumulating. + +- **buf**: data buffer. Some collection items require multiple samplings before a sensing plugin processes them, so the **buf** array stores these samples. + +- **buf_len**: size of the data buffer. Once the buffer is initialized, **buf_len** remains constant. + +```C +struct Param { + const struct DataRingBuf **ring_bufs; + int len; +}; +``` + +- **ring_bufs**: data required by the instance, sourced from other instances. +- **len**: length of the **ring_bufs** array. + +### Instance Interfaces + +```C +struct Interface { + const char* (*get_version)(); + /* The instance name is a unique identifier in the system. */ + const char* (*get_name)(); + const char* (*get_description)(); + /* Specifies the instance dependencies, which are used as the input information + * for instance execution. + */ + const char* (*get_dep)(); + /* Instance scheduling priority. In a uniform time period, an instance with a + * lower priority is scheduled first. 
+ */ + int (*get_priority)(); + int (*get_type)(); + /* Instance execution period. */ + int (*get_period)(); + bool (*enable)(); + void (*disable)(); + const struct DataRingBuf* (*get_ring_buf)(); + void (*run)(const struct Param*); +}; +``` + +```c +int get_instance(Interface **interface); +``` + +Every plugin includes a **get_instance** function to provide instances to the framework. + +Obtaining the version number + +1. Interface definition + + ```c + const char* (*get_version)(); + ``` + +2. Interface description + +3. Parameter description + +4. Return value description + + The specific version number is returned. This interface is reserved. + +Obtaining the instance name + +1. Interface definition + + ```c + const char* (*get_name)(); + ``` + +2. Interface description + + Obtains the name of an instance. When you run the `-q` command on the client, the instance name is displayed. In addition, you can run the `--enable` command to enable the instance. + +3. Parameter description + +4. Return value description + + The name of the instance is returned. Ensure that the instance name is unique. + +Obtaining description information + +1. Interface definition + + ```c + const char* (*get_description)(); + ``` + +2. Interface description + +3. Parameter description + +4. Return value description + + The detailed description is returned. This interface is reserved. + +Obtaining the type + +1. Interface definition + + ```c + int (*get_type)(); + ``` + +2. Interface description + +3. Parameter description + +4. Return value description + + The specific type information is returned. This interface is reserved. + +Obtaining the sampling period + +1. Interface definition + + ```c + int (*get_period)(); + ``` + +2. Interface description + + Obtains the sampling period. Different collection items can use different collection periods. + +3. Parameter description + +4. Return value description + + The specific sampling period is returned. The unit is ms. + +Obtaining dependencies + +1. 
Interface definition + + ```c + const char* (*get_dep)(); + ``` + +2. Interface description + +3. Parameter description + +4. Return value description + + Information about the dependent instances is returned. This interface is reserved. + +Enabling an instance + +1. Interface definition + + ```c + bool (*enable)(); + ``` + +2. Interface description + + Enables an instance. + +3. Parameter description + +4. Return value description + +Disabling an instance + +1. Interface definition + + ```c + void (*disable)(); + ``` + +2. Interface description + + Disables an instance. + +3. Parameter description + +4. Return value description + +Obtaining the data buffer + +1. Interface definition + + ```c + const struct DataRingBuf* (*get_ring_buf)(); + ``` + +2. Interface description + + Obtains the buffer management pointer of the collected data (the memory is allocated by the plugin). The pointer is used by sensing plugins. + +3. Parameter description + +4. Return value description + + The **struct DataRingBuf** management pointer is returned. + +Executing an instance + +1. Interface definition + + ```c + void (*run)(const struct Param*); + ``` + +2. Interface description + + Runs at regular intervals according to the execution period. + +3. Parameter description + + Contains the data necessary for the instance to execute. + +4. Return value description + +## Supported Plugins + +- **libpmu.so**: collects PMU-related data. +- **libthread_collector.so**: gathers thread information within the system. +- **libthread_scenario.so**: monitors details of a specific thread. +- **libthread_tune.so**: enhances UnixBench performance. +- **libsmc_tune.so**: enables SMC acceleration to transparently improve TCP performance. +- **libtune_numa.so**: optimizes cross-NUMA node memory access to boost system performance. + +## Constraints + +### Function Constraints + +By default, oeAware integrates the libkperf module for collecting Arm microarchitecture information. 
This module can be called by only one process at a time. If this module is called by other processes or the perf command is used, conflicts may occur. + +### Operation Constraints + +Currently, only the **root** user can operate oeAware. + +## Notes + +The user group and permission of the oeAware configuration file and plugins are strictly verified. Do not modify the permissions and user group of oeAware-related files. + +Permissions: + +- Plugin files: 440 + +- Client executable file: 750 + +- Server executable file: 750 + +- Service configuration file: 640 diff --git a/docs/en/Server/Quickstart/Menu/index.md b/docs/en/Server/Quickstart/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..f742a295f26e593c67e2e4dccf25be09634586e3 --- /dev/null +++ b/docs/en/Server/Quickstart/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [Quick Start]({{< relref "./Quickstart/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Quickstart/Quickstart/Menu/index.md b/docs/en/Server/Quickstart/Quickstart/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..1980afcd53bfe7f42183514cac159346f89b27d8 --- /dev/null +++ b/docs/en/Server/Quickstart/Quickstart/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [Quick Start]({{< relref "./quick-start.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Installation/figures/Installation_wizard.png b/docs/en/Server/Quickstart/Quickstart/figures/Installation_wizard.png similarity index 100% rename from docs/en/docs/Installation/figures/Installation_wizard.png rename to docs/en/Server/Quickstart/Quickstart/figures/Installation_wizard.png diff --git a/docs/en/docs/Installation/figures/advanced-user-configuration.png b/docs/en/Server/Quickstart/Quickstart/figures/advanced-user-configuration.png similarity index 100% rename from docs/en/docs/Installation/figures/advanced-user-configuration.png rename to 
docs/en/Server/Quickstart/Quickstart/figures/advanced-user-configuration.png diff --git a/docs/en/docs/Installation/figures/creating-a-user.png b/docs/en/Server/Quickstart/Quickstart/figures/creating-a-user.png similarity index 100% rename from docs/en/docs/Installation/figures/creating-a-user.png rename to docs/en/Server/Quickstart/Quickstart/figures/creating-a-user.png diff --git a/docs/en/docs/Quickstart/figures/drive-icon.png b/docs/en/Server/Quickstart/Quickstart/figures/drive-icon.png similarity index 100% rename from docs/en/docs/Quickstart/figures/drive-icon.png rename to docs/en/Server/Quickstart/Quickstart/figures/drive-icon.png diff --git a/docs/en/docs/Quickstart/figures/en-us_image_0229420473.png b/docs/en/Server/Quickstart/Quickstart/figures/en-us_image_0229420473.png similarity index 100% rename from docs/en/docs/Quickstart/figures/en-us_image_0229420473.png rename to docs/en/Server/Quickstart/Quickstart/figures/en-us_image_0229420473.png diff --git a/docs/en/docs/Quickstart/figures/image-dialog-box.png b/docs/en/Server/Quickstart/Quickstart/figures/image-dialog-box.png similarity index 100% rename from docs/en/docs/Quickstart/figures/image-dialog-box.png rename to docs/en/Server/Quickstart/Quickstart/figures/image-dialog-box.png diff --git a/docs/en/docs/Installation/figures/installation-process.png b/docs/en/Server/Quickstart/Quickstart/figures/installation-process.png similarity index 100% rename from docs/en/docs/Installation/figures/installation-process.png rename to docs/en/Server/Quickstart/Quickstart/figures/installation-process.png diff --git a/docs/en/docs/Installation/figures/installation-summary.png b/docs/en/Server/Quickstart/Quickstart/figures/installation-summary.png similarity index 100% rename from docs/en/docs/Installation/figures/installation-summary.png rename to docs/en/Server/Quickstart/Quickstart/figures/installation-summary.png diff --git a/docs/en/docs/Installation/figures/password-of-the-root-account.png 
b/docs/en/Server/Quickstart/Quickstart/figures/password-of-the-root-account.png similarity index 100% rename from docs/en/docs/Installation/figures/password-of-the-root-account.png rename to docs/en/Server/Quickstart/Quickstart/figures/password-of-the-root-account.png diff --git a/docs/en/docs/Quickstart/figures/restart-icon.png b/docs/en/Server/Quickstart/Quickstart/figures/restart-icon.png similarity index 100% rename from docs/en/docs/Quickstart/figures/restart-icon.png rename to docs/en/Server/Quickstart/Quickstart/figures/restart-icon.png diff --git a/docs/en/docs/Installation/figures/selecting-a-language.png b/docs/en/Server/Quickstart/Quickstart/figures/selecting-a-language.png similarity index 100% rename from docs/en/docs/Installation/figures/selecting-a-language.png rename to docs/en/Server/Quickstart/Quickstart/figures/selecting-a-language.png diff --git a/docs/en/docs/Installation/figures/selecting-installation-software.png b/docs/en/Server/Quickstart/Quickstart/figures/selecting-installation-software.png similarity index 100% rename from docs/en/docs/Installation/figures/selecting-installation-software.png rename to docs/en/Server/Quickstart/Quickstart/figures/selecting-installation-software.png diff --git a/docs/en/docs/Quickstart/figures/setting-the-boot-device.png b/docs/en/Server/Quickstart/Quickstart/figures/setting-the-boot-device.png similarity index 100% rename from docs/en/docs/Quickstart/figures/setting-the-boot-device.png rename to docs/en/Server/Quickstart/Quickstart/figures/setting-the-boot-device.png diff --git a/docs/en/docs/Installation/figures/setting-the-installation-destination.png b/docs/en/Server/Quickstart/Quickstart/figures/setting-the-installation-destination.png similarity index 100% rename from docs/en/docs/Installation/figures/setting-the-installation-destination.png rename to docs/en/Server/Quickstart/Quickstart/figures/setting-the-installation-destination.png diff --git 
a/docs/en/Server/Quickstart/Quickstart/public_sys-resources/icon-note.gif b/docs/en/Server/Quickstart/Quickstart/public_sys-resources/icon-note.gif new file mode 100644 index 0000000000000000000000000000000000000000..6314297e45c1de184204098efd4814d6dc8b1cda Binary files /dev/null and b/docs/en/Server/Quickstart/Quickstart/public_sys-resources/icon-note.gif differ diff --git a/docs/en/docs/Embedded/public_sys-resources/icon-notice.gif b/docs/en/Server/Quickstart/Quickstart/public_sys-resources/icon-notice.gif similarity index 100% rename from docs/en/docs/Embedded/public_sys-resources/icon-notice.gif rename to docs/en/Server/Quickstart/Quickstart/public_sys-resources/icon-notice.gif diff --git a/docs/en/docs/Quickstart/quick-start.md b/docs/en/Server/Quickstart/Quickstart/quick-start.md similarity index 61% rename from docs/en/docs/Quickstart/quick-start.md rename to docs/en/Server/Quickstart/Quickstart/quick-start.md index d09fe522dd51b3160bc5fa30b010414103b8f2cc..3b84475f0c3dcb0466719fbeb3b9828b651e653e 100644 --- a/docs/en/docs/Quickstart/quick-start.md +++ b/docs/en/Server/Quickstart/Quickstart/quick-start.md @@ -1,83 +1,83 @@ # Quick Start -This document uses openEuler 22.03 LTS SP2 installed on the TaiShan 200 server as an example to describe how to quickly install and use openEuler OS. For details about the installation requirements and methods, see the [Installation Guide](./../Installation/Installation.md). +This document uses openEuler 22.03 LTS SP2 installed on the TaiShan 200 server as an example to describe how to quickly install and use openEuler OS. For details about the installation requirements and methods, see the [Installation Guide](../../InstallationUpgrade/Installation/installation.md). ## Making Preparations - Hardware Compatibility - - [Table 1](#table14948632047) describes the types of supported servers. - - **Table 1** Supported servers - - - - - - - - - - - - - - - - - -

Server Type

-

Server Name

-

Server Model

-

Rack server

-

TaiShan 200

-

2280 balanced model

-

Rack server

-

FusionServer Pro

-

FusionServer Pro 2288H V5

-
NOTE:

The server must be configured with the Avago SAS3508 RAID controller card and the LOM-X722 NIC.

-
-
+ + [Table 1](#table14948632047) describes the types of supported servers. + + **Table 1** Supported servers + + + + + + + + + + + + + + + + + +

Server Type

+

Server Name

+

Server Model

+

Rack server

+

TaiShan 200

+

2280 balanced model

+

Rack server

+

FusionServer Pro

+

FusionServer Pro 2288H V5

+
NOTE:

The server must be configured with the Avago SAS3508 RAID controller card and the LOM-X722 NIC.

+
+
- Minimum Hardware Specifications - - [Table 2](#tff48b99c9bf24b84bb602c53229e2541) lists the minimum hardware specifications supported by openEuler. - - **Table 2** Minimum hardware specifications - - - - - - + + [Table 2](#tff48b99c9bf24b84bb602c53229e2541) lists the minimum hardware specifications supported by openEuler. + + **Table 2** Minimum hardware specifications + + + +
+ + - - - - + + + + - - + + - - + + - - + + - - -

Component

Minimum Hardware Specifications

Description

Architecture

  • AArch64
  • x86_64
  • 64-bit Arm architecture
  • 64-bit Intel x86 architecture

CPU

  • Huawei Kunpeng 920 series
  • Intel ® Xeon® processor

-

Memory

≥ 4 GB (8 GB or higher recommended for better user experience)

-

Drive

≥ 120 GB (for better user experience)

  • IDE, SATA, and SAS drives are supported.
  • A driver software is required to use the NVMe drive with the DIF feature. Contact the drive manufacturer if the feature is not available.

+
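The minimum specifications above can be checked quickly on an existing Linux host. The following commands are a generic sketch, not part of the original guide; they only report values for manual comparison against Table 2:

```shell
# Report CPU architecture, total memory, and drive sizes so they can be
# compared against the minimum hardware specifications in Table 2.
uname -m                                                          # aarch64 or x86_64
awk '/MemTotal/ {printf "Memory: %.1f GiB\n", $2/1024/1024}' /proc/meminfo
lsblk -d -n -o NAME,SIZE,TYPE 2>/dev/null | awk '$3 == "disk" {print "Drive: " $1 " " $2}'
```

An output such as `aarch64`, at least 4 GiB of memory, and a drive of 120 GB or more satisfies the table.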
## Obtaining the Installation Source @@ -89,17 +89,17 @@ Perform the following operations to obtain the openEuler release package: 4. Click **Download** on the right of **openEuler 23.09**. 5. Download the required openEuler release package and the corresponding verification file based on the architecture and scenario. - - If the AArch64 architecture is used: + - If the AArch64 architecture is used: - 1. Click **AArch64**. - 2. For local installation, download the **Offline Standard ISO** or **Offline Everything ISO** release package **openEuler-22.03-LTS-SP2-aarch64-dvd.iso** to the local host. - 3. For network installation, download the **Network Install ISO** release package **openEuler-22.03-LTS-SP2-netinst-aarch64-dvd.iso** to the local host. + 1. Click **AArch64**. + 2. For local installation, download the **Offline Standard ISO** or **Offline Everything ISO** release package **openEuler-22.03-LTS-SP2-aarch64-dvd.iso** to the local host. + 3. For network installation, download the **Network Install ISO** release package **openEuler-22.03-LTS-SP2-netinst-aarch64-dvd.iso** to the local host. - - If the x86\_64 architecture is used: + - If the x86\_64 architecture is used: - 1. Click **x86_64**. - 2. For local installation, download the **Offline Standard ISO** or **Offline Everything ISO** release package **openEuler-22.03-LTS-SP2-x86_64-dvd.iso** to the local host. - 3. For network installation, download the **Network Install ISO** release package **openEuler-22.03-LTS-SP2-netinst-x86_64-dvd.iso** to the local host. + 1. Click **x86_64**. + 2. For local installation, download the **Offline Standard ISO** or **Offline Everything ISO** release package **openEuler-22.03-LTS-SP2-x86_64-dvd.iso** to the local host. + 3. For network installation, download the **Network Install ISO** release package **openEuler-22.03-LTS-SP2-netinst-x86_64-dvd.iso** to the local host. 
> ![](./public_sys-resources/icon-note.gif) **NOTE:** > @@ -110,7 +110,7 @@ Perform the following operations to obtain the openEuler release package: ## Checking the Release Package Integrity > ![](./public_sys-resources/icon-note.gif) **NOTE:** -> +> > This section describes how to verify the integrity of the release package in the AArch64 architecture. The procedure for verifying the integrity of the release package in the x86\_64 architecture is the same. ### Overview @@ -135,35 +135,35 @@ Verification file: Copy and save the SHA256 value corresponding to the ISO file. 2. Check whether the values calculated in step 1 and step 2 are the same. - If the values are consistent, the ISO file is not damaged. Otherwise, the file is damaged and you need to obtain it again. - If the values are consistent, the ISO file is not damaged. Otherwise, the file is damaged and you need to obtain it again. + If the values are consistent, the ISO file is not damaged. Otherwise, the file is damaged and you need to obtain it again. ## Starting Installation 1. Log in to the iBMC WebUI. - For details, see [TaiShan 200 Server User Guide (Model 2280)](https://support.huawei.com/enterprise/en/doc/EDOC1100093459). + For details, see [TaiShan 200 Server User Guide (Model 2280)](https://support.huawei.com/enterprise/en/doc/EDOC1100093459). 2. Choose **Configuration** from the main menu, and select **Boot Device** from the navigation tree. The **Boot Device** page is displayed. - Set **Effective** and **Boot Medium** to **One-time** and **DVD-ROM**, respectively, and click **Save**, as shown in [Figure 1](#fig1011938131018). + Set **Effective** and **Boot Medium** to **One-time** and **DVD-ROM**, respectively, and click **Save**, as shown in [Figure 1](#fig1011938131018). 
- **Figure 1** Setting the boot device -![fig](./figures/setting-the-boot-device.png "setting-the-boot-device") + **Figure 1** Setting the boot device + ![fig](./figures/setting-the-boot-device.png "setting-the-boot-device") 3. Choose **Remote Console** from the main menu. The **Remote Console** page is displayed. - Select an integrated remote console as required to access the remote virtual console, for example, **Java Integrated Remote Console (Shared)**. + Select an integrated remote console as required to access the remote virtual console, for example, **Java Integrated Remote Console (Shared)**. 4. On the toolbar, click the icon shown in the following figure. - **Figure 2** Drive icon -![fig](./figures/drive-icon.png "drive-icon") + **Figure 2** Drive icon + ![fig](./figures/drive-icon.png "drive-icon") - An image dialog box is displayed, as shown in the following figure. + An image dialog box is displayed, as shown in the following figure. - **Figure 3** Image dialog box -![fig](./figures/image-dialog-box.png "image-dialog-box") + **Figure 3** Image dialog box + ![fig](./figures/image-dialog-box.png "image-dialog-box") 5. Select **Image File** and then click **Browse**. The **Open** dialog box is displayed. @@ -171,15 +171,15 @@ Verification file: Copy and save the SHA256 value corresponding to the ISO file. 7. On the toolbar, click the restart icon shown in the following figure to restart the device. - **Figure 4** Restart icon -![fig](./figures/restart-icon.png "restart-icon") + **Figure 4** Restart icon + ![fig](./figures/restart-icon.png "restart-icon") 8. A boot menu is displayed after the system restarts, as shown in [Figure 5](#fig1648754873314). - > ![fig](./public_sys-resources/icon-note.gif) **NOTE:** - > - > - If you do not perform any operations within 1 minute, the system automatically selects the default option **Test this media \& install openEuler 22.03-LTS-SP2** and enters the installation page. 
- > - During physical machine installation, if you cannot use the arrow keys to select boot options and the system does not respond after you press **Enter**, click ![fig](./figures/en-us_image_0229420473.png) on the BMC page and configure **Key \& Mouse Reset**. + > ![fig](./public_sys-resources/icon-note.gif) **NOTE:** + > + > - If you do not perform any operations within 1 minute, the system automatically selects the default option **Test this media \& install openEuler 22.03-LTS-SP2** and enters the installation page. + > - During physical machine installation, if you cannot use the arrow keys to select boot options and the system does not respond after you press **Enter**, click ![fig](./figures/en-us_image_0229420473.png) on the BMC page and configure **Key \& Mouse Reset**. **Figure 5** Installation wizard ![fig](./figures/Installation_wizard.png "Installation_wizard") @@ -197,14 +197,14 @@ After entering the GUI installation page, perform the following operations to in 2. On the **INSTALLATION SUMMARY** page, set configuration items based on the site requirements. - - A configuration item with an alarm symbol must be configured. When the alarm symbol disappears, you can perform the next operation. - - A configuration item without an alarm symbol is configured by default. - - You can click **Begin Installation** to install the system only when all alarms are cleared. + - A configuration item with an alarm symbol must be configured. When the alarm symbol disappears, you can perform the next operation. + - A configuration item without an alarm symbol is configured by default. + - You can click **Begin Installation** to install the system only when all alarms are cleared. **Figure 7** Installation summary ![fig](./figures/installation-summary.png "installation-summary") - 1. Select **Software Selection** to set configuration items. + 1. Select **Software Selection** to set configuration items. 
Based on the site requirements, select **Minimal Install** on the left box and select an add-on in the **Add-Ons for Selected Environment** area on the right, as shown in [Figure 8](#fig1133717611109). @@ -218,11 +218,11 @@ After entering the GUI installation page, perform the following operations to in After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - 2. Select **Installation Destination** to set configuration items. + 2. Select **Installation Destination** to set configuration items. On the **INSTALLATION DESTINATION** page, select a local storage device. - > ![fig](./public_sys-resources/icon-notice.gif) **NOTICE:** + > ![fig](./public_sys-resources/icon-notice.gif)**NOTICE:** > The NVMe data protection feature is not supported because the NVMe drivers built in the BIOSs of many servers are of earlier versions. (Data protection: Format disk sectors to 512+N or 4096+N bytes.) Therefore, when selecting a proper storage medium, do not select an NVMe SSD with data protection enabled as the system disk. Otherwise, the OS may fail to boot. > Users can consult the server vendor about whether the BIOS supports NVMe disks with data protection enabled as system disks. If you cannot confirm whether the BIOS supports NVMe disks with data protection enabled as system disks, you are not advised to use an NVMe disk to install the OS, or you can disable the data protection function of an NVMe disk to install the OS. @@ -239,7 +239,7 @@ After entering the GUI installation page, perform the following operations to in After the setting is complete, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - 3. Select **Root Password** and set the root password. + 3. Select **Root Password** and set the root password. 
On the **ROOT PASSWORD** page, enter a password that meets the [Password Complexity](#password-complexity) requirements and confirm the password, as shown in [Figure 10](#zh-cn_topic_0186390266_zh-cn_topic_0122145909_fig1323165793018). @@ -254,47 +254,44 @@ After entering the GUI installation page, perform the following operations to in The password of the **root** user or a new user must meet the password complexity requirements. Otherwise, the password setting or user creation will fail. The password must meet the following requirements: 1. Contains at least eight characters. - 2. Contains at least three of the following: uppercase letters, lowercase letters, digits, and special characters. - 3. Different from the user name. - 4. Not allowed to contain words in the dictionary. - > ![](./public_sys-resources/icon-note.gif) **NOTE:** - > - > In the openEuler environment, you can run the `cracklib-unpacker /usr/share/cracklib/pw_dict > dictionary.txt` command to export the dictionary library file **dictionary.txt**. You can check whether the password is in this dictionary. - - **Figure 10** root password - ![fig](./figures/password-of-the-root-account.png "Root password") + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > + > In the openEuler environment, you can run the `cracklib-unpacker /usr/share/cracklib/pw_dict > dictionary.txt` command to export the dictionary library file **dictionary.txt**. You can check whether the password is in this dictionary. + + **Figure 10** root password + ![fig](./figures/password-of-the-root-account.png "Root password") After the settings are completed, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - 4. Select **Create a User** and set the parameters. + 4. Select **Create a User** and set the parameters. [Figure 11](#zh-cn_topic_0186390266_zh-cn_topic_0122145909_fig1237715313319) shows the page for creating a user. Enter the user name and set the password. 
The password complexity requirements are the same as those of the root password. In addition, you can set the home directory and user group by clicking **Advanced**, as shown in [Figure 12](#zh-cn_topic_0186390266_zh-cn_topic_0122145909_fig1237715313319). **Figure 11** Creating a user ![fig](./figures/creating-a-user.png "creating-a-user") - **Figure 12** Advanced user configuration + **Figure 12** Advanced user configuration ![fig](./figures/advanced-user-configuration.png "Advanced user configuration") After the settings are completed, click **Done** in the upper left corner to go back to the **INSTALLATION SUMMARY** page. - 5. Set other configuration items. You can use the default values for other configuration items. + 5. Set other configuration items. You can use the default values for other configuration items. 3. Click **Start the Installation** to install the system, as shown in [Figure 13](#zh-cn_topic_0186390266_zh-cn_topic_0122145909_fig1237715313319). - **Figure 13** Starting the installation + **Figure 13** Starting the installation ![fig](./figures/installation-process.png "installation-process") 4. After the installation is completed, restart the system. - openEuler has been installed. Click **Reboot** to restart the system. + openEuler has been installed. Click **Reboot** to restart the system. ## Viewing System Information -After the system is installed and restarted, the system CLI login page is displayed. Enter the username and password set during the installation to log in to openEuler and view the following system information. For details about system management and configuration, see the [openEuler 22.03-LTS-SP2 Administrator Guide](../Administration/administration.md). +After the system is installed and restarted, the system CLI login page is displayed. Enter the username and password set during the installation to log in to openEuler and view the following system information. 
For details about system management and configuration, see the [openEuler 22.03-LTS-SP2 Administrator Guide](../../Administration/Administrator/administration.md). - Run the following command to view the system information: @@ -348,4 +345,4 @@ After the system is installed and restarted, the system CLI login page is displa ```sh # ip addr - ``` \ No newline at end of file + ``` diff --git a/docs/en/Server/Releasenotes/Menu/index.md b/docs/en/Server/Releasenotes/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..432af51d06e4f53565f8a41378276c80a2e45e0d --- /dev/null +++ b/docs/en/Server/Releasenotes/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [Release Notes]({{< relref "./Releasenotes/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Releasenotes/Releasenotes/Menu/index.md b/docs/en/Server/Releasenotes/Releasenotes/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..dc421dfffb88c09a61ea5f86f7e8afc17939888e --- /dev/null +++ b/docs/en/Server/Releasenotes/Releasenotes/Menu/index.md @@ -0,0 +1,15 @@ +--- +headless: true +--- +- [Release Notes]({{< relref "./release-notes.md" >}}) + - [Introduction]({{< relref "./introduction.md" >}}) + - [User Notice]({{< relref "./user-notice.md" >}}) + - [Account List]({{< relref "./account-list.md" >}}) + - [OS Installation]({{< relref "./os-installation.md" >}}) + - [Key Features]({{< relref "./key-features.md" >}}) + - [Known Issues]({{< relref "./known-issues.md" >}}) + - [Resolved Issues]({{< relref "./resolved-issues.md" >}}) + - [Common Vulnerabilities and Exposures (CVEs)]({{< relref "./cve.md" >}}) + - [Source Code]({{< relref "./source-code.md" >}}) + - [Contribution]({{< relref "./contribution.md" >}}) + - [Acknowledgment]({{< relref "./acknowledgment.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Releasenotes/account-list.md b/docs/en/Server/Releasenotes/Releasenotes/account-list.md similarity index 63% 
rename from docs/en/docs/Releasenotes/account-list.md rename to docs/en/Server/Releasenotes/Releasenotes/account-list.md index f31dceb5e3927466715cc79a2f425125d6858e0e..b7a6e6e4228d9df129680ce33ab8f20ea1938a01 100644 --- a/docs/en/docs/Releasenotes/account-list.md +++ b/docs/en/Server/Releasenotes/Releasenotes/account-list.md @@ -3,4 +3,4 @@ | User Name| Default Password | Function | User Status| Login Mode | Remarks | | ------ | ------------- | ------------------ | -------- | ------------------ | ------------------------------------------------------------ | | root | openEuler12#$ | Default user of the VM image| Enabled | Remote login | This account is used to log in to the VM installed using the openEuler VM image. | -| root | openEuler#12 | GRUB2 login | Enabled | Local login and remote login| GRand UnifiedBootloader (GRUB) is used to boot different systems, such as Windows and Linux.
GRUB2 is an upgraded version of GRUB. When the system is started, you can modify startup parameters on the GRUB2 GUI. To ensure that the system startup parameters are modified with authorization, you need to encrypt the GRUB2 GUI. The GRUB2 GUI can be modified only when you enter the correct GRUB2 password.| +| root | openEuler#12 | GRUB2 login | Enabled | Local login and remote login| GRand Unified Bootloader (GRUB) is used to boot different systems, such as Windows and Linux.
GRUB2 is an upgraded version of GRUB. When the system is started, you can modify startup parameters on the GRUB2 GUI. To ensure that the system startup parameters are modified with authorization, you need to encrypt the GRUB2 GUI. The GRUB2 GUI can be modified only when you enter the correct GRUB2 password.| diff --git a/docs/en/docs/Releasenotes/acknowledgment.md b/docs/en/Server/Releasenotes/Releasenotes/acknowledgment.md similarity index 100% rename from docs/en/docs/Releasenotes/acknowledgment.md rename to docs/en/Server/Releasenotes/Releasenotes/acknowledgment.md diff --git a/docs/en/docs/Releasenotes/contribution.md b/docs/en/Server/Releasenotes/Releasenotes/contribution.md similarity index 100% rename from docs/en/docs/Releasenotes/contribution.md rename to docs/en/Server/Releasenotes/Releasenotes/contribution.md diff --git a/docs/en/docs/Releasenotes/common-vulnerabilities-and-exposures-(cve).md b/docs/en/Server/Releasenotes/Releasenotes/cve.md similarity index 100% rename from docs/en/docs/Releasenotes/common-vulnerabilities-and-exposures-(cve).md rename to docs/en/Server/Releasenotes/Releasenotes/cve.md diff --git a/docs/en/docs/Releasenotes/introduction.md b/docs/en/Server/Releasenotes/Releasenotes/introduction.md similarity index 65% rename from docs/en/docs/Releasenotes/introduction.md rename to docs/en/Server/Releasenotes/Releasenotes/introduction.md index 7dca4bb25e938f9a2034a7469be9f1125c5a1bf2..565577db870e6aaa1fe7df096667f8714c5cd4dd 100644 --- a/docs/en/docs/Releasenotes/introduction.md +++ b/docs/en/Server/Releasenotes/Releasenotes/introduction.md @@ -1,4 +1,3 @@ # Introduction -openEuler is an open-source operating system. The current openEuler kernel is based on Linux and supports Kunpeng and other processors. It fully unleashes the potential of computing chips. 
As an efficient, stable, and secure open-source OS built by global open-source contributors, openEuler applies to database, big data, cloud computing, and artificial intelligence \(AI\) scenarios. In addition, openEuler community is an open-source community for global OSs. Through community cooperation, openEuler builds an innovative platform, builds a unified and open OS that supports multiple processor architectures, and promotes the prosperity of the software and hardware application ecosystem. - +openEuler is an open source operating system. The current openEuler kernel is based on Linux and supports Kunpeng and other processors. It fully unleashes the potential of computing chips. As an efficient, stable, and secure open source OS built by global open source contributors, openEuler applies to database, big data, cloud computing, and artificial intelligence \(AI\) scenarios. In addition, openEuler community is an open source community for global OSs. Through community cooperation, openEuler builds an innovative platform, builds a unified and open OS that supports multiple processor architectures, and promotes the prosperity of the software and hardware application ecosystem. diff --git a/docs/en/docs/Releasenotes/key-features.md b/docs/en/Server/Releasenotes/Releasenotes/key-features.md similarity index 81% rename from docs/en/docs/Releasenotes/key-features.md rename to docs/en/Server/Releasenotes/Releasenotes/key-features.md index ed48af6124fc8c5781b450346052aee2dc9f05a6..6f2cb7f7ced8d903a75a557de8a2b7c1c71adf4b 100644 --- a/docs/en/docs/Releasenotes/key-features.md +++ b/docs/en/Server/Releasenotes/Releasenotes/key-features.md @@ -10,7 +10,7 @@ Generalized Memory Management (GMEM) is an optimal solution for memory managemen - **User APIs**: Users can directly use the memory map (mmap) of the OS to allocate the unified virtual memory. GMEM adds the flag (MMAP_PEER_SHARED) for allocating the unified virtual memory to the mmap system call. 
The libgmem user-mode library provides the hmadvise API of memory prefetch semantics to help users optimize the accelerator memory access efficiency. -## Native Support for Open Source Large Language Models (LLaMA and ChatGLM) +## Native Support for Open Source Large Language Models (LLaMa and ChatGLM) The two model inference frameworks, llama.cpp and chatglm-cpp, are implemented based on C/C++. They allow users to deploy and use open source large language models on CPUs by means of model quantization. llama.cpp supports the deployment of multiple open source LLMs, such as LLaMa, LLaMa2, and Vicuna. It supports the deployment of multiple open source Chinese LLMs, such as ChatGLM-6B, ChatGLM2-6B, and Baichuan-13B. @@ -18,35 +18,28 @@ The two model inference frameworks, llama.cpp and chatglm-cpp, are implemented b - They accelerate memory for efficient CPU inference through int4/int8 quantization, optimized KV cache, and parallel computing. -## Features in the openEuler Kernel 6.4 +## Features in the openEuler Kernel 6.6 -openEuler 23.09 runs on Linux kernel 6.4. It inherits the competitive advantages of community versions and innovative features released in the openEuler community. +openEuler 24.09 runs on Linux kernel 6.6. It inherits the competitive advantages of community versions and innovative features released in the openEuler community. -- **Tidal affinity scheduling:** The system dynamically adjusts CPU affinity based on the service load. When the service load is light, the system uses preferred CPUs to enhance resource locality. When the service load is heavy, the system has new CPU cores added to improve the QoS. +- **Folio-based memory management**: Folio-based Linux memory management is used instead of page. A folio consists of one or more pages and is declared in struct folio. Folio-based memory management is performed on one or more complete pages, rather than on PAGE_SIZE bytes. 
This alleviates compound page conversion and tail page misoperations, while decreasing the number of least recently used (LRU) linked lists and optimizing memory reclamation. It allocates more continuous memory on a per-operation basis to reduce the number of page faults and mitigate memory fragmentation. Folio-based management accelerates large I/Os and improves throughput, and large folios consisting of anonymous pages or file pages are available. For AArch64 systems, a contiguous bit (16 contiguous page table entries are cached in a single entry within a translation lookaside buffer, or TLB) is provided to reduce system TLB misses and improve system performance. In openEuler 24.09, multi-size transparent hugepage (mTHP) allocation by anonymous shmem and mTHP lazyfreeing are available. The memory subsystem supports large folios, with a new sysfs control interface for allocating mTHPs by page cache and a system-level switch for feature toggling. -- **CPU QoS priority-based load balancing**: CPU QoS isolation is enhanced in online and offline hybrid deployments, and QoS load balancing across CPUs is supported to further reduce QoS interference from offline services. +- **Multipath TCP (MPTCP)**: MPTCP is introduced to let applications use multiple network paths for parallel data transmission, compared with single-path transmission over TCP. This design improves network hardware resource utilization and intelligently allocates traffic to different transmission paths, thereby relieving network congestion and improving throughput. -- **Simultaneous multithreading (SMT) expeller free of priority inversion**: This feature resolves the priority inversion problem in the SMT expeller feature and reduces the impact of offline tasks on the quality of service (QoS) of online tasks. + MPTCP features the following performance highlights: -- **Multiple priorities in a hybrid deployment**: Each cgroup can have a **cpu.qos_level** that ranges from -2 to 2. 
You can set **qos_level_weight** to assign a different priority to each cgroup and allocate CPU resources to each cgroup based on the CPU usage. This feature is also capable of wakeup preemption.
+ - Selects the optimal path after evaluating indicators such as latency and bandwidth.
+ - Ensures hitless network switchover and uninterrupted data transmission when switching between networks.
+ - Uses multiple channels where data packets are distributed to implement parallel transmission, increasing network bandwidth.
-- Programmable scheduling: The programmable scheduling framework based on eBPF allows the kernel scheduler to dynamically expand scheduling policies to meet performance requirements of different loads.
+ In the lab environment, the Rsync file transfer tool that adopts MPTCP v1 shows a marked transmission efficiency improvement. Specifically, a 1.3 GB file can be transferred in just 14.35s (down from 114.83s), and the average transfer speed is increased from 11.08 MB/s to 88.25 MB/s. In simulations of path failure caused by unexpected faults during transmission, MPTCP seamlessly switches data to other available channels, ensuring transmission continuity and data integrity.
+ In openEuler 24.09, MPTCP-related features in Linux mainline kernel 6.9 have been fully transplanted and optimized.
-- **NUMA-aware spinlock**: The lock transfer algorithm is optimized for the multi-NUMA system based on the MCS spinlock. The lock is preferentially transferred within the local NUMA node, greatly reducing cross-NUMA cache synchronization and ping-pong. As a result, the overall lock throughput is increased and service performance is improved.
+- **Large folio for ext4 file systems**: The IOzone performance can be improved by 80%, and the writeback process of the iomap framework supports batch block mapping. Blocks can be requested in batches in default ext4, optimizing ext4 performance in various benchmarks.
For ext4 buffer I/O and page cache writeback operations, the buffer_head framework is replaced with the iomap framework that adds large folio support for ext4. In version 24.09, the performance of small buffered I/Os (≤ 4 KB) is optimized when the block size is smaller than the folio size, typically seeing a 20% performance increase. -- **TCP compression**: The data of specified ports can be compressed at the TCP layer before transmission. After the data is received, it is decompressed and transferred to the user mode. TCP compression accelerates data transmission between nodes. +- **CacheFiles failover**: In on-demand mode of CacheFiles, if the daemon breaks down or is killed, subsequent read and mount requests return an input/output error. The mount points can be used only after the daemon is restarted and the mount operations are performed again. For public cloud services, such I/O errors will be passed to cloud service users, which may impact job execution and endanger the overall system stability. The CacheFiles failover feature renders it unnecessary to remount the mount points upon daemon crashes. It requires only the daemon to restart, ensuring that these events are invisible to users. -- **Kernel live patches**: Kernel live patches are used to fix bugs in kernel function implementation without a system restart. They dynamically replace functions when the system is running. Live patches on openEuler work by modifying instructions. They feature a high patching efficiency because they directly jump to new functions without search and transfer, while live patches on the Linux mainline version work based on ftrace. - -- **Sharepool shared memory**: This technology shares data among multiple processes and allows multiple processes to access the same memory area for data sharing. 
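The MPTCP lab figures quoted above are internally consistent; a quick arithmetic check (all values taken from the text) confirms that both transfers imply the same roughly 1.3 GB file and an end-to-end speedup of about 8x:

```python
# Figures quoted in the text for the Rsync transfer test.
t_tcp, v_tcp = 114.83, 11.08      # seconds, MB/s over the single-path transfer
t_mptcp, v_mptcp = 14.35, 88.25   # seconds, MB/s with MPTCP v1

# Both transfers should imply roughly the same file size (~1.3 GB).
print(round(t_tcp * v_tcp))       # 1272 (MB)
print(round(t_mptcp * v_mptcp))   # 1266 (MB)

# End-to-end speedup from multipath transmission.
print(round(t_tcp / t_mptcp, 1))  # 8.0
```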
- -- **Memcg asynchronous reclamation**: This optimized mechanism asynchronously reclaims memory in the memcg memory subsystem when the system load is low to avoid memory reclamation delay when the system load becomes heavy. - -- **filescgroup**: The filescgroup subsystem manages the number of files (that is, the number of handles) opened by a process group. This subsystem provides easy-to-call APIs for resource management. Compared with the rlimit method, the filescgroup subsystem can better control the number of file handles for resource application and release, dynamic adjustment of resource usage, and group control. - -- **Cgroup writeback for cgroup v1**: Cgroup writeback provides a flexible method to manage the writeback behavior of the file system cache. The main functions of cgroup writeback include cache writeback control, I/O priority control, and writeback policy adjustment. - -- **Core suspension detection**: If the performance monitor unit (PMU) stops counting, the hard lockup cannot detect system suspension. The core suspension detection feature enables each CPU to check whether adjacent CPUs are suspended. This ensures that the system can perform self-healing even when some CPUs are suspended due to interruption disabling. +**PGO for Clang**: Profile-guided optimization (PGO) is a feedback-directed compiler optimization technology that collects program runtime information to guide the compiler through optimization decision-making. Based on industry experience, PGO can be used to optimize large-scale data center applications (such as MySQL, Nginx, and Redis) and Linux kernels. Test results show that LLVM PGO provides over 20% performance increase on Nginx, in which a 10%+ performance increase is brought by kernel optimizations. ## Embedded @@ -71,7 +64,7 @@ SysCare is a system-level hotfix software that provides security patches and hot - eBPF is used to monitor the compiler process. 
In this way, hot patch change information can be obtained in pure user mode without creating character devices, and users can compile hot patches in multiple containers concurrently. - Users can install different RPM packages (syscare-build-kmod or syscare-build-ebpf) to use ko or eBPF. The syscare-build process automatically adapts to the corresponding underlying implementation. - + ## GCC for openEuler GCC for openEuler is developed based on the open source GCC 12.3 and supports features such as automatic feedback-directed optimization (FDO), software and hardware collaboration, memory optimization, SVE, and vectorized math libraries. @@ -154,7 +147,7 @@ utshell is a new shell that inherits the usage habits of Bash. It can interact w ## migration-tools -Developed by UnionTech Software Technology Co., Ltd., migration-tools is oriented to users who want to quickly, smoothly, stably, and securely migrate services to the openEuler OS. migration-tools consists of the following modules: +migration-tools is oriented to users who want to quickly, smoothly, stably, and securely migrate services to the openEuler OS. migration-tools consists of the following modules: - **Server module**: It is developed on the Python Flask Web framework. As the core of migration-tools, it receives task requests, processes execution instructions, and distributes the instructions to each Agent. @@ -230,7 +223,7 @@ SsysBoost is a tool for optimizing the system microarchitecture for applications ## CTinspector -CTinspector is a language VM running framework developed by China Telecom e-Cloud Technology Co., Ltd. based on the eBPF instruction set. The CTinspector running framework enables application instances to be quickly expanded to diagnose network performance bottlenecks, storage I/O hotspots, and load balancing, improving the stability and timeliness of diagnosis during system running. 
+CTinspector is a language VM running framework developed by China Telecom Cloud Technology Co., Ltd. based on the eBPF instruction set. The CTinspector running framework enables application instances to be quickly expanded to diagnose network performance bottlenecks, storage I/O hotspots, and load balancing, improving the stability and timeliness of diagnosis during system running.
- CTinspector uses a packet VM of the eBPF instruction set. The minimum size of the packet VM is 256 bytes, covering all VM components, including registers, stack segments, code segments, data segments, and page tables.
@@ -240,7 +233,7 @@ CTinspector is a language VM running framework developed by China Telecom e-Clou
## CVE-ease
-CVE-ease is an innovative Common Vulnerabilities and Exposures (CVE) platform developed by China Telecom e-Cloud Technology Co., Ltd. It collects various CVE information released by multiple security platforms and notifies users of the information through multiple channels, such as email, WeChat, and DingTalk. The CVE-ease platform aims to help users quickly learn about and cope with vulnerabilities in the system. In addition to improving system security and stability, users can view CVE details on the CVE-ease platform, including vulnerability description, impact scope, and fixing suggestions, and select a fixing solution as required.
+CVE-ease is an innovative Common Vulnerabilities and Exposures (CVE) platform developed by China Telecom Cloud Technology Co., Ltd. It collects various CVE information released by multiple security platforms and notifies users of the information through multiple channels, such as email, WeChat, and DingTalk. The CVE-ease platform aims to help users quickly learn about and cope with vulnerabilities in the system.
In addition to improving system security and stability, users can view CVE details on the CVE-ease platform, including vulnerability description, impact scope, and fixing suggestions, and select a fixing solution as required. CVE-ease has the following capabilities: diff --git a/docs/en/Server/Releasenotes/Releasenotes/known-issues.md b/docs/en/Server/Releasenotes/Releasenotes/known-issues.md new file mode 100644 index 0000000000000000000000000000000000000000..499a97f065ca8a114efa2c315c76744bcf1e619c --- /dev/null +++ b/docs/en/Server/Releasenotes/Releasenotes/known-issues.md @@ -0,0 +1,10 @@ +# Known Issues + +| No. | Issue ID | Description | Severity | Impact Analysis | Mitigation Measures | Historical Discovery Scenarios | +| ---- | ------- | ----------- | -------- | --------------- | ------------------- | ----------------------------- | +| 1 | [I5LZXD](https://gitee.com/src-openEuler/openldap/issues/I5LZXD) | Build issue with openldap in openEuler:22.09 | Minor | Test case failures occur during the build process. This is a test case design issue with limited impact. The problem can be temporarily resolved by using sleep to wait for operations to complete, though it may still fail under high load. | Skip the affected test cases and monitor the upstream community for a fix. | | +| 2 | [I5NLZI](https://gitee.com/src-openEuler/dde/issues/I5NLZI) | Abnormal icon display in the launcher (openEuler 22.09 rc2) | Minor | Only affects icon display in the DDE desktop launcher, with no functional impact. The usability issue is manageable. | Switch themes to avoid the issue. | | +| 3 | [I5P5HM](https://gitee.com/src-openEuler/afterburn/issues/I5P5HM) | Uninstallation error: `Failed to stop afterburn-sshkeys@.service` (22.09_RC3_EPOL, arm/x86) | Minor | | | | +| 4 | [I5PQ3O](https://gitee.com/src-openEuler/openmpi/issues/I5PQ3O) | Execution error with `ompi-clean -v -d` (openEuler-22.09-RC3) | Major | This package is specific to NestOS, with limited usage. 
It is enabled by default for the **core** user in NestOS, with minimal impact on the server version. | No mitigation measures have been provided by the SIG. | | +| 5 | [I5Q2FE](https://gitee.com/src-openEuler/udisks2/issues/I5Q2FE) | Build issue with udisks2 in openEuler:22.09 | Minor | Test case failures occur during the build process. The issue has not been reproduced in long-term local builds, and the environment was not retained. | Monitor the community build success rate. | | +| 6 | [I5SJ0R](https://gitee.com/src-openEuler/podman/issues/I5SJ0R) | Execution error with `podman create --blkio-weight-device /dev/loop0:123:15 fedora ls` (22.09RC5, arm/x86) | Minor | The blkio-weight feature is supported in the 4.xx kernel but not in the 5.10 version. | Track the upgrade of the podman component. | | diff --git a/docs/en/docs/Releasenotes/installing-the-os.md b/docs/en/Server/Releasenotes/Releasenotes/os-installation.md similarity index 32% rename from docs/en/docs/Releasenotes/installing-the-os.md rename to docs/en/Server/Releasenotes/Releasenotes/os-installation.md index 48f672163b9c596187bff0e9e6be64de70796129..187037f875c01fd9af2900a18e2ce4443563b15b 100644 --- a/docs/en/docs/Releasenotes/installing-the-os.md +++ b/docs/en/Server/Releasenotes/Releasenotes/os-installation.md @@ -8,92 +8,28 @@ The openEuler release files include [ISO release package](http://repo.openeuler. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

+| Name | Description |
+| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
+| openEuler-22.09-aarch64-dvd.iso | Base installation ISO file of the AArch64 architecture, including the core components for running the minimum system. |
+| openEuler-22.09-everything-aarch64-dvd.iso | Full installation ISO file of the AArch64 architecture, including all components for running the entire system. |
+| openEuler-22.09-everything-debug-aarch64-dvd.iso | ISO file for openEuler debugging in the AArch64 architecture, including the symbol table information required for debugging. |
+| openEuler-22.09-x86\_64-dvd.iso | Base installation ISO file of the x86\_64 architecture, including the core components for running the minimum system. |
+| openEuler-22.09-everything-x86\_64-dvd.iso | Full installation ISO file of the x86\_64 architecture, including all components for running the entire system. |
+| openEuler-22.09-everything-debuginfo-x86\_64-dvd.iso | ISO file for openEuler debugging in the x86\_64 architecture, including the symbol table information required for debugging. |
+| openEuler-22.09-source-dvd.iso | ISO file of the openEuler source code. |
+| openEuler-21.09-edge-aarch64-dvd.iso | Edge ISO file in the AArch64 architecture, including the core components for running the minimum system. |
+| openEuler-21.09-edge-x86\_64-dvd.iso | Edge ISO file in the x86\_64 architecture, including the core components for running the minimum system. |
+| openEuler-21.09-Desktop-aarch64-dvd.iso | Desktop ISO file in the AArch64 architecture, including the minimum software set for running the development desktop. |
+| openEuler-21.09-Desktop-x86\_64-dvd.iso | Desktop ISO file in the x86\_64 architecture, including the minimum software set for running the development desktop. |

**Table 2** VM images

+| Name | Description |
+| -------------------------------- | -------------------------------------------------- |
+| openEuler-22.09-aarch64.qcow2.xz | VM image of openEuler in the AArch64 architecture. |
+| openEuler-22.09-x86\_64.qcow2.xz | VM image of openEuler in the x86\_64 architecture. |

>![](./public_sys-resources/icon-note.gif) **NOTE**
>The default password of the **root** user of the VM image is **openEuler12#$**. Change the password upon the first login.

@@ -102,24 +38,10 @@ The openEuler release files include [ISO release package](http://repo.openeuler.

+| Name | Description |
+| ------------------------------- | --------------------------------------------------------- |
+| openEuler-docker.aarch64.tar.xz | Container image of openEuler in the AArch64 architecture. |
+| openEuler-docker.x86\_64.tar.xz | Container image of openEuler in the x86\_64 architecture. |

**Table 4** Embedded images

@@ -137,69 +59,19 @@ The openEuler release files include [ISO release package](http://repo.openeuler.

+| Name | Description |
+| --------------------- | ------------------------------------------ |
+| ISO | Stores ISO images. |
+| OS | Stores basic software package sources. |
+| debuginfo | Stores debugging package sources. |
+| docker\_img | Stores container images. |
+| virtual\_machine\_img | Stores VM images. |
+| embedded\_img | Stores embedded images. |
+| everything | Stores full software package sources. |
+| extras | Stores extended software package sources. |
+| source | Stores source code software packages. |
+| update | Stores update software package sources. |
+| EPOL | Stores extended openEuler package sources. |

## Minimum Hardware Specifications

@@ -207,32 +79,11 @@ The openEuler release files include [ISO release package](http://repo.openeuler.

**Table 6** Minimum hardware requirements

+| Component | Minimum Hardware Specification |
+| ---------- | ---------------------------------------------------- |
+| CPU | Kunpeng 920 (AArch64) / x86\_64 (later than Skylake) |
+| Memory | ≥ 8 GB |
+| Hard drive | ≥ 120 GB |

## Hardware Compatibility

diff --git a/docs/en/Server/Releasenotes/Releasenotes/public_sys-resources/icon-note.gif b/docs/en/Server/Releasenotes/Releasenotes/public_sys-resources/icon-note.gif new file mode 100644 index 0000000000000000000000000000000000000000..6314297e45c1de184204098efd4814d6dc8b1cda
Binary files /dev/null and b/docs/en/Server/Releasenotes/Releasenotes/public_sys-resources/icon-note.gif differ
diff --git a/docs/en/docs/Releasenotes/release_notes.md b/docs/en/Server/Releasenotes/Releasenotes/release-notes.md similarity index 100% rename from docs/en/docs/Releasenotes/release_notes.md rename to docs/en/Server/Releasenotes/Releasenotes/release-notes.md
diff --git a/docs/en/docs/Releasenotes/resolved-issues.md b/docs/en/Server/Releasenotes/Releasenotes/resolved-issues.md similarity index 100% rename from docs/en/docs/Releasenotes/resolved-issues.md rename to docs/en/Server/Releasenotes/Releasenotes/resolved-issues.md
diff --git a/docs/en/docs/Releasenotes/source-code.md b/docs/en/Server/Releasenotes/Releasenotes/source-code.md similarity index 84% rename from docs/en/docs/Releasenotes/source-code.md rename to docs/en/Server/Releasenotes/Releasenotes/source-code.md index c207d103605c45658196acc76f1d1bde1e95ef04..f4ddca12ae04ba59655fe5cceb1e1e0af1c21e26 100644
--- a/docs/en/docs/Releasenotes/source-code.md
+++ b/docs/en/Server/Releasenotes/Releasenotes/source-code.md
@@ -5,4 +5,4 @@ openEuler contains two code repositories:

- Code repository: [https://gitee.com/openeuler](https://gitee.com/openeuler)
- Software package repository: [https://gitee.com/src-openeuler](https://gitee.com/src-openeuler)

-The openEuler release packages also provide the source ISO files. For details, see [OS Installation](./installing-the-os.md).
+The openEuler release packages also provide the source ISO files. For details, see [OS Installation](./os-installation.md).
diff --git a/docs/en/docs/Releasenotes/user-notice.md b/docs/en/Server/Releasenotes/Releasenotes/user-notice.md similarity index 100% rename from docs/en/docs/Releasenotes/user-notice.md rename to docs/en/Server/Releasenotes/Releasenotes/user-notice.md diff --git a/docs/en/Server/Security/CVE-ease/Menu/index.md b/docs/en/Server/Security/CVE-ease/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..ad353023861c9cb84852593e823282a0499682e3 --- /dev/null +++ b/docs/en/Server/Security/CVE-ease/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [CVE-ease Design Overview]({{< relref "./cve-ease-design-overview.md" >}}) + - [CVE-ease Introduction and Installation]({{< relref "./cve-ease-introduction-and-installation.md" >}})$$ diff --git a/docs/en/Server/Security/CVE-ease/cve-ease-design-overview.md b/docs/en/Server/Security/CVE-ease/cve-ease-design-overview.md new file mode 100644 index 0000000000000000000000000000000000000000..52bc28b7403ec053dbb9f2b6b776913fdd70af13 --- /dev/null +++ b/docs/en/Server/Security/CVE-ease/cve-ease-design-overview.md @@ -0,0 +1,33 @@ +# CVE-ease Design Overview + +## 1. Overview + +Common Vulnerabilities and Exposures (CVEs) plays a vital role in ensuring system security and stability, making effective handling and management of CVE data essential. To address this, the CVE-ease vulnerability management system was developed, enabling real-time acquisition, management, and reporting of vulnerability information. + +## 2. Key Features + +CVE-ease provides the following core functionalities: + +- Acquisition and analysis of CVE information +- Organization and storage of CVE data +- Access to historical and real-time CVE details +- Real-time tracking of CVE status +- Immediate broadcasting of CVE updates + +## 3. Module Functions + +![](./figures/CVE-ease_desigin_table.png) + +### 3.1 CVE Acquisition + +During operation, CVE-ease periodically scrapes CVE information from disclosure websites. 
Before scraping, the system scans the CVE database to create an index of existing CVE IDs and verify connectivity with CVE platforms. The process begins by retrieving the IDs of the latest disclosed CVEs, followed by fetching detailed descriptions based on these IDs. If the data is successfully retrieved, the process concludes. Otherwise, the system retries until successful. + +### 3.2 CVE Information Organization and Storage + +Scraped CVE data is structured and stored in a database according to predefined formats, with feature values calculated. If a scraped CVE ID is not present in the database, it is added directly. If the ID exists, the system compares feature values and updates the entry if discrepancies are found. + +### 3.3 Viewing Historical and Real-Time CVE Information + +Users can query specific CVE details through the interactive interface. By default, the system displays the 10 most recent CVEs. Users can customize queries to retrieve historical CVE data based on criteria such as CVE score or year. + +![](./figures/CVE-ease_function.png) diff --git a/docs/en/Server/Security/CVE-ease/cve-ease-introduction-and-installation.md b/docs/en/Server/Security/CVE-ease/cve-ease-introduction-and-installation.md new file mode 100644 index 0000000000000000000000000000000000000000..672af6be5adb416f086e88b6de17e1041f454cd4 --- /dev/null +++ b/docs/en/Server/Security/CVE-ease/cve-ease-introduction-and-installation.md @@ -0,0 +1,566 @@ +# CVE-ease + +## Project Overview + +CVE-ease is a dedicated platform for CVE information, aggregating data from community releases and delivering timely notifications via email, WeChat, DingTalk, and other channels. Users can access detailed CVE information on the platform, such as vulnerability descriptions, affected scopes, and remediation recommendations. This enables them to choose appropriate fixes tailored to their systems. 
+ +The platform is designed to help users swiftly identify and address vulnerabilities, thereby improving system security and stability. + +CVE-ease is an **independent innovation initiative by CTYun**. Open sourced in the openEuler community, it strictly adheres to the **Mulan PSL2** license. We welcome community contributions to the project, working together to create a secure, stable, and reliable ecosystem for domestic operating systems. + +Open source details: + +- This repository **strictly** complies with the [Mulan Permissive Software License, Version 2](http://license.coscl.org.cn/MulanPSL2). +- **This repository meets the open source guidelines of China Telecom Cloud Technology Co., Ltd, having undergone thorough review and preparation to present a high-quality open source project with complete documentation and resources**. +- The repository is managed by designated personnel from the company, ensuring **long-term maintenance for LTS versions** and ongoing development support. + +## Software Architecture + +CVE-ease is a platform dedicated to CVE information, structured around four core modules: CVE crawler, CVE analyzer, CVE notifier, and CVE frontend. Below is an overview of functionality and design of each module. + +- **CVE crawler** + +This module collects CVE information from multiple data sources provided by the openEuler community and stores it in relational databases like MySQL. These data sources are primarily managed by the cve-manager project, which supports fetching CVE details from NVD, CNNVD, CNVD, RedHat, Ubuntu, and Debian. CVE-ease employs Python-based crawler scripts, each tailored to a specific data source. These scripts can be executed on a schedule or manually, formatting the scraped raw CVE data and storing it for further analysis. + +- **CVE analyzer** + +This module processes CVE information by parsing, categorizing, and scoring it. 
Written in Python, the analyzer script periodically retrieves raw CVE data from the relational database and performs tasks such as extracting basic attributes (such as ID, title, description), categorizing the impact scope (such as OS, software packages), scoring severity (such as CVSS scores), and matching remediation suggestions (such as patch links). The processed structured data is then stored in SQL format for future queries and presentation. + +- **CVE notifier** + +This module sends CVE notifications to users via email, WeChat, DingTalk, and other channels based on their subscription preferences. The Python-based notifier script periodically retrieves structured CVE data from the MySQL database, filters it according to user-configured impact scopes, generates appropriate notification content for different channels, and invokes APIs to deliver notifications. The script also logs sending results and feedback, updating subscription statuses in the database. + +- **CVE frontend** + +This module offers a user-friendly CLI terminal command, enabling users to view, search, and subscribe to CVE information. + +The architecture of CVE-ease is designed to create an efficient, flexible, and scalable platform, providing users with timely and accurate security vulnerability intelligence. + +## Development Roadmap + +1. Adapt repodata to support multiple OSVs. +2. Add MOTD login broadcast functionality. +3. Enhance the DNF plugin to include patching capabilities. +4. Implement automatic patching for specific packages. +5. Introduce specific package awareness features. +6. ... + +**We highly value your feedback on the development direction of CVE-ease. If you have any suggestions or ideas, feel free to share them with us. Your input is greatly appreciated!** + +## Installation Guide + +CVE-ease is in fast-paced development, offering installation methods such as direct installation, container installation, and RPM package installation. 
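The CVE notifier's impact-scope filtering, described in the Software Architecture section above, can be sketched as follows. The dictionary shape and function name are assumptions for illustration; the comma-separated package list mirrors the `focus_on` configuration option:

```python
def filter_cves(cves, focus_on):
    """Keep only CVEs whose affected packages intersect the user's focus list.

    cves:     iterable of dicts carrying an "affected" list of package names.
    focus_on: comma-separated package names, as in the focus_on config option.
    """
    wanted = {pkg.strip() for pkg in focus_on.split(",") if pkg.strip()}
    return [cve for cve in cves if wanted & set(cve.get("affected", []))]
```

For example, `filter_cves(batch, "kernel,systemd,openssh,openssl")` keeps only advisories touching those packages before any notification channel is invoked.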
+ +### Direct Installation + +```shell +git clone https://gitee.com/openeuler/cve-ease cve-ease.git +cd cve-ease.git/cve-ease +make install +``` + +### Container Installation + +```shell +git clone https://gitee.com/openeuler/cve-ease cve-ease.git +cd cve-ease.git/cve-ease +make run-in-docker +``` + +### RPM Package Installation + +```shell +git clone https://gitee.com/openeuler/cve-ease cve-ease.git +cd cve-ease.git/cve-ease +make gensrpm +cd .. +rpm -ivh *.src.rpm +cd ~/rpmbuild +rpmbuild -ba SPECS/cve-ease.spec +cd RPMS/noarch +rpm -ivh *.rpm +``` + +## Usage Guide + +### Help Information + +- Running the `cve-ease` command without options displays the help menu. +- The `cve-ease` command includes multiple subcommands, organized into `basic`, `info`, and `notifier` categories. +- Use the `help` subcommand to view detailed information for each command category. + +```shell +# cve-ease + +Available commands: + +basic commands: + config Print cve-ease config + daemon Run as daemon without interactive + motd Motd info manager + service Service manager + +info commands: + cve OpenEuler CVE info + cvrf OpenEuler CVRF info + db Database manager + help List available commands + logger Logger config + repodata Repodata info + rpm Rpm info + sa OpenEuler security notice info + +notifier commands: + dingding Notifier of dingding + feishu Notifier of feishu + mail163 Notifier of mail163 + mailqq Notifier of mailqq + wecom Notifier of wecom + +Try "cve-ease --help" for help about global gconfig +Try "cve-ease help" to get all available commands +Try "cve-ease --help" for help about the gconfig of a particular command +Try "cve-ease help " to get commands under a particular category +Available commands are: basic, info, notifier + +# cve-ease help info +Available commands: + +info commands: + cve OpenEuler CVE info + cvrf OpenEuler CVRF info + db Database manager + help List available commands + logger Logger config + repodata Repodata info + rpm Rpm info + sa OpenEuler 
security notice info + +Try "cve-ease --help" for help about global gconfig +Try "cve-ease help" to get all available commands +Try "cve-ease --help" for help about the gconfig of a particular command +Try "cve-ease help " to get commands under a particular category +Available commands are: basic, info, notifier +``` + +### Configuration File + +The configuration file is located at `/etc/cve-ease/cve-ease.cfg`. + +```ini +[main] +pid_file_path = /var/log/cve-ease/cve-ease.pid +lock_file_path = /var/log/cve-ease/cve-ease.lock + +# log configuration + +# debug/ error(default) / warn +log_level = debug +log_file_path = /var/log/cve-ease/cve-ease.log +log_maxbytes = 10240 +log_backup_num = 30 + +# sql configuration +db_type = sqlite +db_file_path = /usr/share/cve-ease/cve-ease.db +db_user = +db_password = +db_host = +db_port = +product = openEuler-23.09 +expiration_days = 14 + +# notifier +notifier_record_num = 9 + +# filter +focus_on = kernel,systemd,openssh,openssl + +[wecom] +enabled = 1 +# https://developer.work.weixin.qq.com/document/path/91770?version=4.0.19.6020&platform=win +# https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=fe9eae1f-xxxx-4ae3-xxxx-ecf9f77abba6 + +update_key = 2142ef2a-d99d-417d-8c31-b550b0fcb4e3 +status_key = 2142ef2a-d99d-417d-8c31-b550b0fcb4e3 + + +[dingding] +enabled = 1 +# just for test +update_key = 81907155a6cc88004e1ed6bcdd86c68d5b21565ed59d549ca031abc93d90d9cb +status_key = 81907155a6cc88004e1ed6bcdd86c68d5b21565ed59d549ca031abc93d90d9cb + + +[feishu] +enabled = 1 +# just for test +update_key = 5575739b-f59d-48db-b737-63672b2c32ab +status_key = 5575739b-f59d-48db-b737-63672b2c32ab + + +[mail163] +enabled = 0 +mail_sender = xxxxxxx@163.com +mail_recver = xxxxxxx@163.com +mail_smtp_token = xxxxxx + + +[mailqq] +enabled = 0 +mail_sender = xxxxxxx@qq.com +mail_recver = xxxxxxx@qq.com +mail_smtp_token = xxxxxxxx +``` + +### CVE-ease + +The CVE-ease service consists of two files, **cve-ease.service** and **cve-ease.timer**, utilizing the 
systemd timer functionality for scheduled execution. + +```ini +# /usr/lib/systemd/system/cve-ease.timer +# CTyunOS cve-ease: MulanPSL2 +# +# This file is part of cve-ease. +# + +[Unit] +Description=CTyunOS cve-ease Project +Documentation=https://gitee.com/openeuler/cve-ease + +[Timer] +OnBootSec=1m +OnUnitActiveSec=10m +RandomizedDelaySec=10 + +[Install] +WantedBy=timers.target +``` + +```shell +# systemctl enable --now cve-ease.timer +Created symlink /etc/systemd/system/timers.target.wants/cve-ease.timer → /usr/lib/systemd/system/cve-ease.timer. +# systemctl status cve-ease.timer +● cve-ease.timer - CTyunOS cve-ease Project + Loaded: loaded (/usr/lib/systemd/system/cve-ease.timer; enabled; vendor preset: disabled) + Active: active (waiting) since Sat 2023-03-18 17:55:53 CST; 5s ago + Trigger: Sat 2023-03-18 18:05:55 CST; 9min left + Docs: https://gitee.com/openeuler/cve-ease + +Mar 18 17:55:53 56d941221b41 systemd[1]: Started CTyunOS cve-ease Project. +# systemctl status cve-ease.service +● cve-ease.service - CTyunOS cve-ease project + Loaded: loaded (/usr/lib/systemd/system/cve-ease.service; disabled; vendor preset: disabled) + Active: inactive (dead) since Sat 2023-03-18 17:55:56 CST; 5s ago + Docs: https://gitee.com/openeuler/cve-ease + Process: 196 ExecStart=/usr/bin/cve-ease daemon (code=exited, status=0/SUCCESS) + Main PID: 196 (code=exited, status=0/SUCCESS) + +Mar 18 17:55:53 56d941221b41 systemd[1]: Starting CTyunOS cve-ease project... +Mar 18 17:55:56 56d941221b41 systemd[1]: cve-ease.service: Succeeded. +Mar 18 17:55:56 56d941221b41 systemd[1]: Started CTyunOS cve-ease project. +``` + +### basic Commands + +#### config + +```shell +Usage: cve-ease config +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -r, --rawdata print raw config file content +``` + +```shell +cve-ease config # Display the configuration file path and active settings. 
+cve-ease config -r # Display the configuration file path and raw data. +``` + +#### daemon + +- The `daemon` command acts as the entry point for the systemd service and is typically not run manually. +- The service is executed periodically by the systemd timer associated with cve-ease. + +```ini +# /usr/lib/systemd/system/cve-ease.service +# CTyunOS cve-ease: MulanPSL2 +# +# This file is part of cve-ease. +# + +[Unit] +Description=CTyunOS cve-ease project +Documentation=https://gitee.com/openeuler/cve-ease + +[Service] +Type=oneshot +ExecStart=/usr/bin/cve-ease daemon + +[Install] +WantedBy=multi-user.target +``` + +#### motd + +- TODO. + +#### service + +`service` command options for controlling the cve-ease service: + +```shell +Usage: cve-ease service +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -k, --kill kill cve-ease service + -r, --restart restart cve-ease service + -s, --status get cve-ease service status + -v, --verbose show verbose output +``` + +```shell +cve-ease service -k # Pause the cve-ease service +cve-ease service -r # Restart the cve-ease service +cve-ease service -s # Query the cve-ease service status +``` + +### info Commands + +#### cve + +Retrieve CVE data from the openEuler community in the [openEuler Security Center](https://www.openeuler.org/en/security/cve/). + +```shell +Usage: cve-ease cve +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -r, --rawdata get cve cache and print raw data without write db + -m, --makecache get cve cache + -l, --list list all cve info + -t, --total get cve info statistics + -v, --verbose show verbose output +``` + +```shell +cve-ease cve -m # Collect CVE data and store it in the database. +cve-ease cve -l # Fetch CVE data from the database and format it for display. +cve-ease cve -t # Retrieve and show CVE statistics from the database. 
+cve-ease cve -r # Gather CVE data and display it in raw form (without saving to the database). +``` + +#### sa + +Retrieve security advisory (SA) data from the openEuler community in the [openEuler Security Center](https://www.openeuler.org/en/security/cve/). + +```shell +Usage: cve-ease sa +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -r, --rawdata get sa cache and print raw data without write db + -m, --makecache get sa cache + -l, --list list all sa info + -t, --total get sa info statistics + -v, --verbose show verbose output +``` + +```shell +cve-ease sa -m # Collect SA data and store it in the database. +cve-ease sa -l # Fetch SA data from the database and format it for display. +cve-ease sa -t # Retrieve and show SA statistics from the database. +cve-ease sa -r # Gather SA data and display it in raw form (without saving to the database). +``` + +#### cvrf + +Common Vulnerability Reporting Framework (CVRF)-related commands: + +```shell +cve-ease cvrf -m # Collect CVRF data and store it in the database. +cve-ease cvrf -l # Fetch CVRF data from the database and format it for display. +cve-ease cvrf -t # Retrieve and show CVRF statistics from the database. +``` + +#### rpm + +```shell +Usage: cve-ease rpm +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -l, --list list all rpm info + -v, --verbose show verbose output +``` + +```shell +cve-ease rpm -l # Use the RPM interface to retrieve and display details of installed RPM packages in the system. 
+``` + +#### repodata + +```shell +Usage: cve-ease repodata +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -m, --makecache cache repodata to database + -p PRODUCT, --product=PRODUCT + specific product (work with --check) + --osv=OSV specific osv rpm release + -t, --total get total rpm statistics + -l, --list list all rpm + -c, --check check repo cve + -v, --verbose show verbose output +``` + +```bash +cve-ease repodata -p ctyunos2 -m # Set ctyunos2 as the OSV version, cache its repository metadata, and store it in the database. +cve-ease repodata --osv ctyunos2 -p openEuler-23.09 -c # Compare the ctyunos2 repository with the openEuler 23.09 repository. +cve-ease repodata -l # Display the package details available in the database. +cve-ease repodata -t # Fetch and show statistics for the repository data in the database. +``` + +#### logger + +```shell +Usage: cve-ease logger +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -l, --list list all logger info + -t, --total get logger statistics + -v, --verbose show verbose output +``` + +#### db + +```shell +Usage: cve-ease db +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -p, --purge purge db and recreate it (Danger Operation) + -s, --stats get database statistics + -v, --verbose show verbose output +``` + +### notifier Commands + +#### wecom + +```shell +Usage: cve-ease wecom +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -t, --test run test + -v, --verbose show verbose output + -c CONTENT, --content=CONTENT + show verbose output +``` + +```shell +cve-ease wecom -t # Send a test message to a WeCom group. +cve-ease wecom -t -c 'helloworld' # Send a custom test message to a WeCom group. 
+``` + +#### dingding + +```shell +Usage: cve-ease dingding +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -t, --test run test + -v, --verbose show verbose output + -c CONTENT, --content=CONTENT + show verbose output +``` + +```shell +cve-ease dingding -t # Send a test message to a DingTalk group. +cve-ease dingding -t -c 'helloworld' # Send a custom test message to a DingTalk group. +``` + +#### feishu + +```shell +Usage: cve-ease feishu +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -t, --test run test + -v, --verbose show verbose output + -c CONTENT, --content=CONTENT + show verbose output +``` + +```shell +cve-ease feishu -t # Send a test message to a Lark group. +cve-ease feishu -t -c 'helloworld' # Send a custom test message to a Lark group. +``` + +#### mail163 + +```shell +Usage: cve-ease mail163 +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -t, --test run test + -v, --verbose show verbose output + -c CONTENT, --content=CONTENT + show verbose output +``` + +```shell +cve-ease mail163 -t # Send a test message to a 163 mail address. +cve-ease mail163 -t -c 'helloworld' # Send a custom test message to a 163 mail address. +``` + +#### mailqq + +```shell +Usage: cve-ease mailqq +(Specify the --help global option for a list of other help options) + +Options: + -h, --help show this help message and exit + -t, --test run test + -v, --verbose show verbose output + -c CONTENT, --content=CONTENT + show verbose output +``` + +```shell +cve-ease mailqq -t # Send a test message to a QQ mail address. +cve-ease mailqq -t -c 'helloworld' # Send a custom test message to a QQ mail address. +``` + +## How to Contribute + +1. Fork the repository. +2. 
Since the project is in fast-paced development with only the master branch active, make your changes on the master branch and submit them. +3. Create a pull request (PR) with a clear description of its functionality and purpose, along with relevant test cases. +4. Notify the repository maintainer to review your PR. + +## Core Developers and Contact Details + +- You Yifeng - [Gitee Private Message](https://gitee.com/youyifeng) +- Wu Kaishun - [Gitee Private Message](https://gitee.com/wuzimo) diff --git a/docs/en/Server/Security/CVE-ease/figures/CVE-ease_desigin_table.png b/docs/en/Server/Security/CVE-ease/figures/CVE-ease_desigin_table.png new file mode 100644 index 0000000000000000000000000000000000000000..c02a3569bca30fd225c048360e66a2cf052bc84e Binary files /dev/null and b/docs/en/Server/Security/CVE-ease/figures/CVE-ease_desigin_table.png differ diff --git a/docs/en/Server/Security/CVE-ease/figures/CVE-ease_function.png b/docs/en/Server/Security/CVE-ease/figures/CVE-ease_function.png new file mode 100644 index 0000000000000000000000000000000000000000..3f4119eddcfd5eef088123895c5b39a040f0526e Binary files /dev/null and b/docs/en/Server/Security/CVE-ease/figures/CVE-ease_function.png differ diff --git a/docs/en/Server/Security/CertSignature/Menu/index.md b/docs/en/Server/Security/CertSignature/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..2f675c152c646048aa03327b95ff3cb15fe48c08 --- /dev/null +++ b/docs/en/Server/Security/CertSignature/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [Overview of Certificates and Signatures]({{< relref "./overview_of_certificates_and_signatures.md" >}}) + - [Introduction to Signature Certificates]({{< relref "./introduction_to_signature_certificates.md" >}}) + - [Secure Boot]({{< relref "./secure_boot.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Security/CertSignature/figures/cert-tree.png b/docs/en/Server/Security/CertSignature/figures/cert-tree.png new file mode 
100644 index 0000000000000000000000000000000000000000..930a664600b31140c3939b1abd005cc2572cdbf9 Binary files /dev/null and b/docs/en/Server/Security/CertSignature/figures/cert-tree.png differ diff --git a/docs/en/Server/Security/CertSignature/figures/mokutil-db.png b/docs/en/Server/Security/CertSignature/figures/mokutil-db.png new file mode 100644 index 0000000000000000000000000000000000000000..82dbe6e04cafe3e9ac039ba19acd5996d4cf2259 Binary files /dev/null and b/docs/en/Server/Security/CertSignature/figures/mokutil-db.png differ diff --git a/docs/en/Server/Security/CertSignature/figures/mokutil-sb-off.png b/docs/en/Server/Security/CertSignature/figures/mokutil-sb-off.png new file mode 100644 index 0000000000000000000000000000000000000000..f3018c9fd0236e9c2cf560f0da3827ed2a877f6d Binary files /dev/null and b/docs/en/Server/Security/CertSignature/figures/mokutil-sb-off.png differ diff --git a/docs/en/Server/Security/CertSignature/figures/mokutil-sb-on.png b/docs/en/Server/Security/CertSignature/figures/mokutil-sb-on.png new file mode 100644 index 0000000000000000000000000000000000000000..449b6774dc61a601cf884845fbd0be5d314108e1 Binary files /dev/null and b/docs/en/Server/Security/CertSignature/figures/mokutil-sb-on.png differ diff --git a/docs/en/Server/Security/CertSignature/figures/mokutil-sb-unsupport.png b/docs/en/Server/Security/CertSignature/figures/mokutil-sb-unsupport.png new file mode 100644 index 0000000000000000000000000000000000000000..525c72f78b897ffaba0d356406ab9d9e64024d91 Binary files /dev/null and b/docs/en/Server/Security/CertSignature/figures/mokutil-sb-unsupport.png differ diff --git a/docs/en/Server/Security/CertSignature/introduction_to_signature_certificates.md b/docs/en/Server/Security/CertSignature/introduction_to_signature_certificates.md new file mode 100644 index 0000000000000000000000000000000000000000..3720dea42fdad92c6b1cae08087d1e6713307d60 --- /dev/null +++ 
b/docs/en/Server/Security/CertSignature/introduction_to_signature_certificates.md @@ -0,0 +1,46 @@ +# Introduction to Signature Certificates + +openEuler supports two signature mechanisms: openPGP and CMS, which are used for different file types. + +| File Type | Signature Type | Signature Format| +| --------------- | ------------ | -------- | +| EFI files | authenticode | CMS | +| Kernel module files | modsig | CMS | +| IMA digest lists| modsig | CMS | +| RPM software packages | RPM | openPGP | + +## openPGP Certificate Signing + +openEuler uses openPGP certificates to sign RPM software packages. The signature certificates are released with the OS image. You can obtain certificates used by openEuler in either of the following ways: + +Method 1: Download the certificate from the repository. For example, download the certificate of openEuler 24.03 LTS from the following address: + +```text +https://repo.openeuler.org/openEuler-24.03-LTS/OS/aarch64/RPM-GPG-KEY-openEuler +``` + +Method 2: Log in to the system and obtain the file from the specified path. + +```shell +cat /etc/pki/rpm-gpg/RPM-GPG-KEY-openEuler +``` + +## CMS Certificate Signing + +The openEuler signature platform uses a three-level certificate chain to manage signature private keys and certificates. + +![](./figures/cert-tree.png) + +Certificates of different levels have different validity periods. The current plan is as follows: + +| Type| Validity Period| +| -------- | ------ | +| Root certificate | 30 years | +| Level-2 certificate| 10 years | +| Level-3 certificate| 3 years | + +The openEuler root certificate can be downloaded from the community certificate center. 
+ +```text +https://www.openeuler.org/en/security/certificate-center/ +``` diff --git a/docs/en/Server/Security/CertSignature/overview_of_certificates_and_signatures.md b/docs/en/Server/Security/CertSignature/overview_of_certificates_and_signatures.md new file mode 100644 index 0000000000000000000000000000000000000000..5b34fb2790887ffa71ef519565d902264e69afd3 --- /dev/null +++ b/docs/en/Server/Security/CertSignature/overview_of_certificates_and_signatures.md @@ -0,0 +1,29 @@ +# Overview of Certificates and Signatures + +## Overview + +Digital signature is an important technology for protecting the integrity of OSs. By adding signatures to key system components and verifying the signatures in subsequent processes such as component loading and running, you can effectively check component integrity and prevent security problems caused by component tampering. Multiple system integrity protection mechanisms are supported in the industry to protect the integrity of different types of components in each phase of system running. Typical technical mechanisms include: + +- Secure boot +- Kernel module signing +- Integrity measurement architecture (IMA) +- RPM signature verification + +The preceding integrity protection security mechanisms depend on signatures (usually integrated in the component release phase). However, open source communities generally lack signature private keys and certificate management mechanisms. Therefore, Linux distributions released by open source communities generally do not provide default signatures or use only private keys temporarily generated in the build phase for signatures. Usually, these integrity protection security mechanisms can be enabled only after users or downstream OSVs perform secondary signing, which increases the cost of security functions and reduces usability. + +## Solution + +The openEuler community infrastructure supports the signature service. 
The signature platform manages signature private keys and certificates in a unified manner and works with the EulerMaker build platform to automatically sign key files during the software package build process of the community edition. Currently, the following file types are supported: + +- EFI files +- Kernel module files +- IMA digest lists +- RPM software packages + +## Constraints + +The signature service of the openEuler community has the following constraints: + +- Currently, only official releases of the openEuler community can be signed. Private builds cannot be signed. +- Currently, only EFI files related to OS secure boot can be signed, including shim, GRUB, and kernel files. +- Currently, only the kernel module files provided by the kernel software package can be signed. diff --git a/docs/en/Server/Security/CertSignature/secure_boot.md b/docs/en/Server/Security/CertSignature/secure_boot.md new file mode 100644 index 0000000000000000000000000000000000000000..fb8f217924ac306bfaac7dea46457a48c2edc009 --- /dev/null +++ b/docs/en/Server/Security/CertSignature/secure_boot.md @@ -0,0 +1,58 @@ +# Secure Boot + +## Overview + +Secure Boot relies on public and private key pairs to sign and verify components in the booting process. During booting, the previous component authenticates the digital signature of the next component. If the authentication is successful, the next component runs. If the authentication fails, booting stops. Secure Boot ensures the integrity of each component during system booting and prevents unauthenticated components from being loaded and run, thereby averting security threats to the system and user data. + +The components authenticated and loaded in sequence during Secure Boot are the BIOS, shim, GRUB, and vmlinuz (the kernel image). + +Related EFI startup components are signed by the openEuler signature platform using signcode. The public key certificate is integrated into the signature database by the BIOS.
During boot, the BIOS verifies the shim. The shim and GRUB components obtain the public key certificate from the signature database of the BIOS and verify the next-level components. + +## Background and Solutions + +In earlier openEuler versions, secure boot components were not signed. Therefore, the secure boot function could not be directly used to ensure the integrity of system components. + +In openEuler 22.03 LTS SP3 and later versions, openEuler uses the community signature platform to sign OS components, including the GRUB and vmlinuz components, and integrates the community signature root certificate in the shim component. + +The shim component is likewise signed by the signature platform of the openEuler community to facilitate end-to-end secure boot. After external CAs officially operate the secure boot component signature service, the signatures of these CAs will be integrated into the shim module of openEuler. + +## Usage + +### Obtaining the openEuler Certificate + +To obtain the openEuler root certificate, visit the openEuler community website and download it from the **Certificate Center** page. + +The root certificate names on the web page are **openEuler Shim Default CA** and **default-x509ca.cert**. + +### Operation in the BIOS + +Import the openEuler root certificate to the certificate database of the BIOS and enable secure boot in the BIOS. + +For details about how to import the BIOS certificate and enable secure boot, see the documents provided by the BIOS vendor. + +### Operation in the OS + +Check the certificate information in the database: `mokutil -db` + +![](./figures/mokutil-db.png) +Note: There is a large amount of certificate information. Only some important information is displayed in the screenshot. + +Check the secure boot status: `mokutil --sb` + +- **SecureBoot disabled**: Secure boot is disabled. + +![](./figures/mokutil-sb-off.png)
+ +![](./figures/mokutil-sb-on.png) + +- **not supported**: The system does not support secure boot. + +![](./figures/mokutil-sb-unsupport.png) + +## Constraints + +- **Software**: The OS must be booted in UEFI mode. +- **Architecture**: Arm or x86 +- **Hardware**: The BIOS must support the verification function related to secure boot. diff --git a/docs/en/Server/Security/Menu/index.md b/docs/en/Server/Security/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..0af6f439cd7dd04ed6071587c492bf09d778332d --- /dev/null +++ b/docs/en/Server/Security/Menu/index.md @@ -0,0 +1,10 @@ +--- +headless: true +--- +- [Security Hardening Guide]({{< relref "./SecHarden/Menu/index.md" >}}) +- [Trusted Computing]({{< relref "./TrustedComputing/Menu/index.md" >}}) +- [secGear Developer Guide]({{< relref "./secGear/Menu/index.md" >}}) +- [CVE-ease Design Overview]({{< relref "./CVE-ease/Menu/index.md" >}}) +- [Certificates and Signatures]({{< relref "./CertSignature/Menu/index.md" >}}) +- [Introduction to SBOM]({{< relref "./Sbom/Menu/index.md" >}}) +- [ShangMi]({{< relref "./ShangMi/Menu/index.md" >}}) diff --git a/docs/en/Server/Security/Sbom/Menu/index.md b/docs/en/Server/Security/Sbom/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..501bc4f4a11fd86c9868fcb6e5bd39945576ce80 --- /dev/null +++ b/docs/en/Server/Security/Sbom/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [Introduction to SBOM]({{< relref "./sbom.md" >}}) \ No newline at end of file diff --git a/docs/en/Server/Security/Sbom/figures/image.png b/docs/en/Server/Security/Sbom/figures/image.png new file mode 100644 index 0000000000000000000000000000000000000000..b4bfa78fee5662ed919d3f2fe76fa407f20f9ec9 Binary files /dev/null and b/docs/en/Server/Security/Sbom/figures/image.png differ diff --git a/docs/en/Server/Security/Sbom/sbom.md b/docs/en/Server/Security/Sbom/sbom.md new file mode 100644 index 
0000000000000000000000000000000000000000..44ab29c501bf7a245378596418cbb2d42eeee9bc --- /dev/null +++ b/docs/en/Server/Security/Sbom/sbom.md @@ -0,0 +1,50 @@ +# 1. Introduction to SBOM + +A Software Bill of Materials (SBOM) serves as a formal, machine-readable inventory that uniquely identifies software components and their contents. Beyond basic identification, it tracks copyright and licensing details. Organizations use SBOM to enhance supply chain transparency, and it is rapidly becoming a mandatory deliverable in software distribution. + +# 2. SBOM Core Requirements + +The National Telecommunications and Information Administration (NTIA) has established baseline requirements for SBOM implementation. These essential data elements enable component tracking throughout the software supply chain and serve as the foundation for extended features such as license tracking and vulnerability monitoring. + +| Core Field | Definition | +| ------------------------------- | ------------------------------------------------------------ | +| Supplier | Entity responsible for component creation and identification | +| Component | Official designation of the software unit | +| Version | Tracking identifier for component iterations | +| Other identifiers | Supplementary reference keys | +| Dependencies | Mapping of component relationships and inclusions | +| SBOM author | Entity generating the SBOM documentation | +| Timestamp | SBOM generation date and time | +| **Recommended Optional Fields** | | +| Component hash | Digital fingerprint for security verification | +| Lifecycle phase | Development stage at SBOM creation | + +# 3. 
openEuler SBOM Implementation + +openEuler's SBOM framework incorporates extensive metadata tracking through SPDX, including: + +| Base Field | SPDX Path | +| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Supplier | document->packages->supplier | +| Name | document->packages->name | +| Version | document->packages->versionInfo (epoch:version-release in openEuler) | +| Other identifiers | document->packages->externalRefs->purl | +| Dependencies | document->packages->externalRefs->purl | +| SBOM author | document->creationInfo->creators | +| Timestamp | document->creationInfo->created | +| Component hash | document->packages->checksums | +| Lifecycle phase | Not supported | +| Other relationships | Internal subcomponents: document->packages->externalRefs(category:PROVIDE_MANAGER)->purl
Runtime dependencies: document->relationships(relationshipType:DEPENDS_ON) | +| License info | document->packages->licenseDeclared document->packages->licenseConcluded | +| Copyright info | document->packages->copyrightText | +| Upstream community | document->packages->externalRefs(category:SOURCE_MANAGER)->url | +| Patch information | Patch files: document->files(fileTypes:SOURCE)
Patch relationships: document->relationships(relationshipType:PATCH_APPLIED) | +| Component source | document->packages->downloadLocation | +| Component details | document->packages->description document->packages->summary | +| Website/Blog | document->packages->homepage | + +# 4. SBOM Structure + +The system uses RPM packages as the fundamental unit for SBOM generation and analysis. + +![](./figures/image.png) diff --git a/docs/en/Server/Security/SecHarden/Menu/index.md b/docs/en/Server/Security/SecHarden/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..34e62af849643b2d44655a9c78b87f19be8121f0 --- /dev/null +++ b/docs/en/Server/Security/SecHarden/Menu/index.md @@ -0,0 +1,15 @@ +--- +headless: true +--- +- [Security Hardening Guide]({{< relref "./secharden.md" >}}) + - [OS Hardening Overview]({{< relref "./os-hardening-overview.md" >}}) + - [Security Configuration Description]({{< relref "./security-configuration-benchmark.md" >}}) + - [Security Hardening Guide]({{< relref "./security-hardening-guide.md" >}}) + - [Account Passwords]({{< relref "./account-passwords.md" >}}) + - [Authentication and Authorization]({{< relref "./authentication-and-authorization.md" >}}) + - [System Services]({{< relref "./system-services.md" >}}) + - [File Permissions]({{< relref "./file-permissions.md" >}}) + - [Kernel Parameters]({{< relref "./kernel-parameters.md" >}}) + - [SELinux Configuration]({{< relref "./selinux-configuration.md" >}}) + - [Security Hardening Tools]({{< relref "./security-hardening-tools.md" >}}) + - [Appendix]({{< relref "./appendix.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/SecHarden/account-passwords.md b/docs/en/Server/Security/SecHarden/account-passwords.md similarity index 100% rename from docs/en/docs/SecHarden/account-passwords.md rename to docs/en/Server/Security/SecHarden/account-passwords.md diff --git a/docs/en/docs/SecHarden/appendix.md b/docs/en/Server/Security/SecHarden/appendix.md similarity 
index 100% rename from docs/en/docs/SecHarden/appendix.md rename to docs/en/Server/Security/SecHarden/appendix.md diff --git a/docs/en/docs/SecHarden/authentication-and-authorization.md b/docs/en/Server/Security/SecHarden/authentication-and-authorization.md similarity index 73% rename from docs/en/docs/SecHarden/authentication-and-authorization.md rename to docs/en/Server/Security/SecHarden/authentication-and-authorization.md index f5b0884954ccc2ed1ec98207ac52d1aa2294305c..660fb2ebdc52ff4cbdd764fe509de1c064fff820 100644 --- a/docs/en/docs/SecHarden/authentication-and-authorization.md +++ b/docs/en/Server/Security/SecHarden/authentication-and-authorization.md @@ -58,8 +58,8 @@ export TMOUT=300 The **umask** value is used to set default permission on files and directories. A smaller **umask** value indicates that group users or other users have incorrect permission, which brings system security risks. Therefore, the default **umask** value must be set to **0077** for all users, that is, the default permission on user directories is **700** and the permission on user files is **600**. The **umask** value indicates the complement of a permission. For details about how to convert the **umask** value to a permission, see [umask Values](./appendix.md#umask-values). ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->By default, the **umask** value of the openEuler user is set to **0022**. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> By default, the **umask** value of the openEuler user is set to **0022**. ### Implementation @@ -69,8 +69,8 @@ The **umask** value is used to set default permission on files and directories echo "umask 0077" >> $FILE ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >_$FILE_ indicates the file name, for example, echo "umask 0077" \>\> /etc/bashrc. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > _$FILE_ indicates the file name, for example, echo "umask 0077" \>\> /etc/bashrc. 2. 
Set the ownership and group of the **/etc/bashrc** file and all files in the **/etc/profile.d/** directory to **root**. @@ -78,8 +78,8 @@ The **umask** value is used to set default permission on files and directories chown root.root $FILE ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >_$FILE_ indicates the file name, for example, **chown root.root /etc/bashrc**. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > _$FILE_ indicates the file name, for example, **chown root.root /etc/bashrc**. ## Setting the GRUB2 Encryption Password @@ -89,15 +89,15 @@ GRand Unified Bootloader \(GRUB\) is an operating system boot manager used to bo When starting the system, you can modify the startup parameters of the system on the GRUB2 screen. To ensure that the system startup parameters are not modified randomly, you need to encrypt the GRUB2 screen. The startup parameters can be modified only when the correct GRUB2 password is entered. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The default password of GRUB2 is **openEuler\#12**. You are advised to change the default password upon the first login and periodically update the password. If the password is leaked, startup item configurations may be modified, causing the system startup failure. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> The default password of GRUB2 is **openEuler\#12**. You are advised to change the default password upon the first login and periodically update the password. If the password is leaked, startup item configurations may be modified, causing the system startup failure. ### Implementation 1. Run the **grub2-mkpasswd-pbkdf2** command to generate an encrypted password. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >SHA-512 is used as the GRUB2 encryption algorithm. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > SHA-512 is used as the GRUB2 encryption algorithm. 
```shell $ grub2-mkpasswd-pbkdf2 @@ -107,9 +107,9 @@ When starting the system, you can modify the startup parameters of the system on grub.pbkdf2.sha512.10000.5A45748D892672FDA02DD3B6F7AE390AC6E6D532A600D4AC477D25C7D087644697D8A0894DFED9D86DC2A27F4E01D925C46417A225FC099C12DBD3D7D49A7425.2BD2F5BF4907DCC389CC5D165DB85CC3E2C94C8F9A30B01DACAA9CD552B731BA1DD3B7CC2C765704D55B8CD962D2AEF19A753CBE9B8464E2B1EB39A3BB4EAB08 ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Enter the same password in the **Enter password** and **Reenter password** lines. - >After **openEuler\#12** is encrypted by **grub2-mkpasswd-pbkdf2**, the output is **grub.pbkdf2.sha512.10000.5A45748D892672FDA02DD3B6F7AE390AC6E6D532A600D4AC477D25C7D087644697D8A0894DFED9D86DC2A27F4E01D925C46417A225FC099C12DBD3D7D49A7425.2BD2F5BF4907DCC389CC5D165DB85CC3E2C94C8F9A30B01DACAA9CD552B731BA1DD3B7CC2C765704D55B8CD962D2AEF19A753CBE9B8464E2B1EB39A3BB4EAB08**. The ciphertext is different each time. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > Enter the same password in the **Enter password** and **Reenter password** lines. + > After **openEuler\#12** is encrypted by **grub2-mkpasswd-pbkdf2**, the output is **grub.pbkdf2.sha512.10000.5A45748D892672FDA02DD3B6F7AE390AC6E6D532A600D4AC477D25C7D087644697D8A0894DFED9D86DC2A27F4E01D925C46417A225FC099C12DBD3D7D49A7425.2BD2F5BF4907DCC389CC5D165DB85CC3E2C94C8F9A30B01DACAA9CD552B731BA1DD3B7CC2C765704D55B8CD962D2AEF19A753CBE9B8464E2B1EB39A3BB4EAB08**. The ciphertext is different each time. 2. Open **/boot/efi/EFI/openEuler/grub.cfg** in a vi editor. In different modes, the paths of the **grub.cfg** file are different. See the note below. Append the following fields to the beginning of **/boot/efi/EFI/openEuler/grub.cfg**. 
@@ -118,11 +118,11 @@ When starting the system, you can modify the startup parameters of the system on password_pbkdf2 root grub.pbkdf2.sha512.10000.5A45748D892672FDA02DD3B6F7AE390AC6E6D532A600D4AC477D25C7D087644697D8A0894DFED9D86DC2A27F4E01D925C46417A225FC099C12DBD3D7D49A7425.2BD2F5BF4907DCC389CC5D165DB85CC3E2C94C8F9A30B01DACAA9CD552B731BA1DD3B7CC2C765704D55B8CD962D2AEF19A753CBE9B8464E2B1EB39A3BB4EAB08 ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** + > ![](./public_sys-resources/icon-note.gif) **NOTE:** > - >- In different modes, the paths of the **grub.cfg** file are different: In the UEFI mode of the x86 architecture, the path is **/boot/efi/EFI/openEuler/grub.cfg**. In the Legacy BIOS mode of the x86 architecture, the path is **/boot/grub2/grub.cfg**. In the aarch64 architecture, the path is **/boot/efi/EFI/openEuler/grub.cfg**. - >- The **superusers** field is used to set the account name of the super GRUB2 administrator. - >- The first parameter following the **password\_pbkdf2** field is the GRUB2 account name, and the second parameter is the encrypted password of the account. + > - In different modes, the paths of the **grub.cfg** file are different: In the UEFI mode of the x86 architecture, the path is **/boot/efi/EFI/openEuler/grub.cfg**. In the Legacy BIOS mode of the x86 architecture, the path is **/boot/grub2/grub.cfg**. In the aarch64 architecture, the path is **/boot/efi/EFI/openEuler/grub.cfg**. + > - The **superusers** field is used to set the account name of the super GRUB2 administrator. + > - The first parameter following the **password\_pbkdf2** field is the GRUB2 account name, and the second parameter is the encrypted password of the account. 
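The string produced by **grub2-mkpasswd-pbkdf2** is a standard PBKDF2-HMAC-SHA512 digest in the form `grub.pbkdf2.sha512.<iterations>.<SALT>.<HASH>`. As a rough sketch of what the tool computes (assuming 10000 iterations and a 64-byte salt and derived key, which matches the sample output shown above; always use the real tool for production configurations), the following Python snippet rebuilds such a string:

```python
import hashlib
import os


def grub_pbkdf2_hash(password, salt=None, iterations=10000):
    """Build a GRUB2-style password string: grub.pbkdf2.sha512.<iter>.<SALT>.<HASH>.
    Illustrative sketch only -- use grub2-mkpasswd-pbkdf2 for real grub.cfg entries."""
    if salt is None:
        salt = os.urandom(64)  # the sample output above uses a 64-byte random salt
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations, dklen=64)
    return "grub.pbkdf2.sha512.%d.%s.%s" % (
        iterations, salt.hex().upper(), digest.hex().upper())


# A fixed all-zero salt makes the result reproducible for demonstration;
# grub2-mkpasswd-pbkdf2 generates a fresh random salt each run.
print(grub_pbkdf2_hash("openEuler#12", salt=bytes(64)))
```

Because the salt is random in real use, the ciphertext differs each time the tool is run, which is why the note above says the output of **grub2-mkpasswd-pbkdf2** varies between invocations.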
## Setting the Secure Single-user Mode diff --git a/docs/en/docs/SecHarden/file-permissions.md b/docs/en/Server/Security/SecHarden/file-permissions.md similarity index 89% rename from docs/en/docs/SecHarden/file-permissions.md rename to docs/en/Server/Security/SecHarden/file-permissions.md index eaee0cb7a3f59b3cb970a7bf89d015934f036e16..8d8e032cbd643eb37a7c02ebbf4fa28a0674842b 100644 --- a/docs/en/docs/SecHarden/file-permissions.md +++ b/docs/en/Server/Security/SecHarden/file-permissions.md @@ -1,4 +1,3 @@ - # File Permissions - [File Permissions](#file-permissions) @@ -92,8 +91,8 @@ For example, openEuler supports UEFI and legacy BIOS installation modes. The GRU find dirname -type l -follow 2>/dev/null ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >_dir__name_ indicates the directory to be searched. Normally, key system directories, such as **/bin**, **/boot**, **/usr**, **/lib64**, **/lib**, and **/var**, need to be searched. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > _dir__name_ indicates the directory to be searched. Normally, key system directories, such as **/bin**, **/boot**, **/usr**, **/lib64**, **/lib**, and **/var**, need to be searched. 2. If these symbolic links are useless, run the following command to delete them: @@ -101,8 +100,8 @@ For example, openEuler supports UEFI and legacy BIOS installation modes. The GRU rm -f filename ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >_filename_ indicates the file name obtained in [Step 1](#en-us_topic_0152100319_l4dc74664c4fb400aaf91fb314c4f9da6). + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > _filename_ indicates the file name obtained in [Step 1](#en-us_topic_0152100319_l4dc74664c4fb400aaf91fb314c4f9da6). ## Setting the umask Value for a Daemon @@ -110,8 +109,8 @@ For example, openEuler supports UEFI and legacy BIOS installation modes. The GRU The **umask** value is used to set default permission on files and directories. 
If the **umask** value is not specified, the file has the globally writable permission. This brings risks. A daemon provides a service for the system to receive user requests or network customer requests. To improve the security of files and directories created by the daemon, you are advised to set **umask** to **0027**. The **umask** value indicates the complement of a permission. For details about how to convert the **umask** value to a permission, see [umask Values](./appendix.md#umask-values). ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->By default, the **umask** value of the daemon is set to **0022** in openEuler. +> ![](./public_sys-resources/icon-note.gif) **NOTE:** +> By default, the **umask** value of the daemon is set to **0022** in openEuler. ### Implementation @@ -156,12 +155,12 @@ Any user can modify globally writable files, which affects system integrity. chmod o-w filename ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >You can run the following command to check whether the sticky bit is set for the file or directory. If the command output contains the **T** flag, the file or directory is with a sticky bit. In the command, _filename_ indicates the name of the file or directory to be queried. - > - >```shell - >ls -l filename - >``` + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > You can run the following command to check whether the sticky bit is set for the file or directory. If the command output contains the **T** flag, the file or directory is with a sticky bit. In the command, _filename_ indicates the name of the file or directory to be queried. 
+ + ```shell + ls -l filename + ``` ## Restricting Permissions on the at Command diff --git a/docs/en/docs/SecHarden/kernel-parameters.md b/docs/en/Server/Security/SecHarden/kernel-parameters.md similarity index 96% rename from docs/en/docs/SecHarden/kernel-parameters.md rename to docs/en/Server/Security/SecHarden/kernel-parameters.md index b193426b3b65a5e4bcad99a471cc2c82c2d46cc8..9f57c7650c3b3d7efd4b0347745bfd29ac29dc39 100644 --- a/docs/en/docs/SecHarden/kernel-parameters.md +++ b/docs/en/Server/Security/SecHarden/kernel-parameters.md @@ -3,7 +3,6 @@ - [Kernel Parameters](#kernel-parameters) - [Hardening the Security of Kernel Parameters](#hardening-the-security-of-kernel-parameters) - ## Hardening the Security of Kernel Parameters ### Description @@ -12,15 +11,16 @@ Kernel parameters specify the status of network configurations and application p ### Implementation -1. Write the hardening items in [Table 1](#en-us_topic_0152100187_t69b5423c26644b26abe94d88d38878eb) to the **/etc/sysctl.conf** file. +1. Write the hardening items in [Table 1](#en-us_topic_0152100187_t69b5423c26644b26abe94d88d38878eb) to the **/etc/sysctl.conf** file. + + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > Record security hardening items as follows: - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Record security hardening items as follows: - >``` - >net.ipv4.icmp_echo_ignore_broadcasts = 1 - >net.ipv4.conf.all.rp_filter = 1 - >net.ipv4.conf.default.rp_filter = 1 - >``` + ```ini + net.ipv4.icmp_echo_ignore_broadcasts = 1 + net.ipv4.conf.all.rp_filter = 1 + net.ipv4.conf.default.rp_filter = 1 + ``` **Table 1** Policies for hardening the security of kernel parameters @@ -197,33 +197,32 @@ Kernel parameters specify the status of network configurations and application p
-2. Run the following command to load the kernel parameters set in the **sysctl.conf** file: +2. Run the following command to load the kernel parameters set in the **sysctl.conf** file: ``` sysctl -p /etc/sysctl.conf ``` - ### Other Security Suggestions -- **net.ipv4.icmp\_echo\_ignore\_all**: ignores ICMP requests. +- **net.ipv4.icmp\_echo\_ignore\_all**: ignores ICMP requests. For security purposes, you are advised to enable this item. The default value is **0**. Set the value to **1** to enable this item. After this item is enabled, all incoming ICMP Echo request packets will be ignored, which will cause failure to ping the target host. Determine whether to enable this item based on your actual networking condition. -- **net.ipv4.conf.all.log\_martians/net.ipv4.conf.default.log\_martians**: logs spoofed, source routed, and redirect packets. +- **net.ipv4.conf.all.log\_martians/net.ipv4.conf.default.log\_martians**: logs spoofed, source routed, and redirect packets. For security purposes, you are advised to enable this item. The default value is **0**. Set the value to **1** to enable this item. After this item is enabled, data from forbidden IP addresses will be logged. Too many new logs will overwrite old logs because the total number of logs allowed is fixed. Determine whether to enable this item based on your actual usage scenario. -- **net.ipv4.tcp\_timestamps**: disables tcp\_timestamps. +- **net.ipv4.tcp\_timestamps**: disables tcp\_timestamps. For security purposes, you are advised to disable tcp\_timestamps. The default value is **1**. Set the value to **0** to disable tcp\_timestamps. After this item is disabled, TCP retransmission timeout will be affected. Determine whether to disable this item based on the actual usage scenario. -- **net.ipv4.tcp\_max\_syn\_backlog**: determines the number of queues that is in SYN\_RECV state. +- **net.ipv4.tcp\_max\_syn\_backlog**: determines the number of queues that is in SYN\_RECV state. 
This parameter determines the number of queues that is in SYN\_RECV state. When this number is exceeded, new TCP connection requests will not be accepted. This to some extent prevents system resource exhaustion. Configure this parameter based on your actual usage scenario. diff --git a/docs/en/docs/SecHarden/os-hardening-overview.md b/docs/en/Server/Security/SecHarden/os-hardening-overview.md similarity index 100% rename from docs/en/docs/SecHarden/os-hardening-overview.md rename to docs/en/Server/Security/SecHarden/os-hardening-overview.md diff --git a/docs/en/Server/Security/SecHarden/public_sys-resources/icon-note.gif b/docs/en/Server/Security/SecHarden/public_sys-resources/icon-note.gif new file mode 100644 index 0000000000000000000000000000000000000000..6314297e45c1de184204098efd4814d6dc8b1cda Binary files /dev/null and b/docs/en/Server/Security/SecHarden/public_sys-resources/icon-note.gif differ diff --git a/docs/en/docs/SecHarden/secHarden.md b/docs/en/Server/Security/SecHarden/secharden.md similarity index 100% rename from docs/en/docs/SecHarden/secHarden.md rename to docs/en/Server/Security/SecHarden/secharden.md diff --git a/docs/en/docs/SecHarden/security-configuration-benchmark.md b/docs/en/Server/Security/SecHarden/security-configuration-benchmark.md similarity index 100% rename from docs/en/docs/SecHarden/security-configuration-benchmark.md rename to docs/en/Server/Security/SecHarden/security-configuration-benchmark.md diff --git a/docs/en/docs/SecHarden/security-hardening-guide.md b/docs/en/Server/Security/SecHarden/security-hardening-guide.md similarity index 100% rename from docs/en/docs/SecHarden/security-hardening-guide.md rename to docs/en/Server/Security/SecHarden/security-hardening-guide.md diff --git a/docs/en/docs/SecHarden/security-hardening-tools.md b/docs/en/Server/Security/SecHarden/security-hardening-tools.md similarity index 90% rename from docs/en/docs/SecHarden/security-hardening-tools.md rename to 
docs/en/Server/Security/SecHarden/security-hardening-tools.md index 015249e6dd50ce0647704a698a9a53b4ca3405a8..57c21e6b3635b96d28b7d477d045b023b1ca883b 100644 --- a/docs/en/docs/SecHarden/security-hardening-tools.md +++ b/docs/en/Server/Security/SecHarden/security-hardening-tools.md @@ -17,11 +17,11 @@ You need to modify the **usr-security.conf** file so that the security hardeni Each line in the **usr-security.conf** file indicates a configuration item. The configuration format varies according to the configuration content. The following describes the format of each configuration item. ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- All configuration items start with an execution ID. The execution ID is a positive integer and can be customized. ->- Contents of a configuration item are separated by an at sign \(@\). ->- If the actual configuration content contains an at sign \(@\), use two at signs \(@@\) to distinguish the content from the separator. For example, if the actual content is **xxx@yyy**, set this item to **xxx@@yyy**. Currently, an at sign \(@\) cannot be placed at the beginning or end of the configuration content. +> - All configuration items start with an execution ID. The execution ID is a positive integer and can be customized. +> - Contents of a configuration item are separated by an at sign \(@\). +> - If the actual configuration content contains an at sign \(@\), use two at signs \(@@\) to distinguish the content from the separator. For example, if the actual content is **xxx@yyy**, set this item to **xxx@@yyy**. Currently, an at sign \(@\) cannot be placed at the beginning or end of the configuration content. 
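To make the separator rules above concrete (fields split on `@`, a literal `@` escaped as `@@`), the following sketch splits one **usr-security.conf** line into its fields; the helper is illustrative only and is not part of the hardening tool:

```python
def split_config_fields(line):
    """Split a usr-security.conf line on single '@' separators,
    treating '@@' as an escaped literal '@' inside a field."""
    fields, buf, i = [], [], 0
    while i < len(line):
        if line[i] == "@":
            if i + 1 < len(line) and line[i + 1] == "@":
                buf.append("@")  # '@@' -> literal '@'
                i += 2
                continue
            fields.append("".join(buf))  # a single '@' ends the field
            buf = []
            i += 1
            continue
        buf.append(line[i])
        i += 1
    fields.append("".join(buf))
    return fields


# The execution ID comes first; 'xxx@@yyy' stands for the literal value 'xxx@yyy'.
print(split_config_fields("301@xxx@@yyy@zzz"))  # ['301', 'xxx@yyy', 'zzz']
```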
- **d**: comment diff --git a/docs/en/docs/SecHarden/selinux-configuration.md b/docs/en/Server/Security/SecHarden/selinux-configuration.md similarity index 98% rename from docs/en/docs/SecHarden/selinux-configuration.md rename to docs/en/Server/Security/SecHarden/selinux-configuration.md index 34f71d6969f90f4c4fb8a23cc9983c19dceaeadc..edc2ef688ec0dfe60452611d45c11273f2d3b9ac 100644 --- a/docs/en/docs/SecHarden/selinux-configuration.md +++ b/docs/en/Server/Security/SecHarden/selinux-configuration.md @@ -158,8 +158,8 @@ By default, openEuler uses SELinux to improve system security. SELinux has three ```shell $ cat demo.fc - /usr/bin/example -- system_u:object_r:example_exec_t:s0 - /resource -- system_u:object_r:resource_file_t:s0 + /usr/bin/example -- system_u:object_r:example_exec_t:s0 + /resource -- system_u:object_r:resource_file_t:s0 ``` 2. Compose the TE file (example). diff --git a/docs/en/docs/SecHarden/system-services.md b/docs/en/Server/Security/SecHarden/system-services.md similarity index 96% rename from docs/en/docs/SecHarden/system-services.md rename to docs/en/Server/Security/SecHarden/system-services.md index 42d306874f89c881439ec7b762c39856c9a0270e..5cb9acf3693579420a981d2f06f4951feb4c7840 100644 --- a/docs/en/docs/SecHarden/system-services.md +++ b/docs/en/Server/Security/SecHarden/system-services.md @@ -288,8 +288,8 @@ To harden a client, perform the following steps:
- >![](./public_sys-resources/icon-note.gif) **NOTE:** - >By default, the messages displayed before and after SSH login are saved in the **/etc/issue.net** file. The default information in the **/etc/issue.net** file is **Authorized users only.** **All activities may be monitored and reported.** + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > By default, the messages displayed before and after SSH login are saved in the **/etc/issue.net** file. The default information in the **/etc/issue.net** file is **Authorized users only.** **All activities may be monitored and reported.** - Client hardening policies @@ -329,8 +329,8 @@ To harden a client, perform the following steps:
- >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Third-party clients and servers that use the Diffie-Hellman algorithm are required to allow at least 2048-bit connection. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > Third-party clients and servers that use the Diffie-Hellman algorithm are required to allow at least 2048-bit connection. ### Other Security Suggestions @@ -362,8 +362,8 @@ To harden a client, perform the following steps: SFTP is a secure FTP designed to provide secure file transfer over SSH. Users can only use dedicated accounts to access SFTP for file upload and download, instead of SSH login. In addition, directories that can be accessed over SFTP are limited to prevent directory traversal attacks. The configuration process is as follows: - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >In the following configurations, **sftpgroup** is an example user group name, and **sftpuser** is an example username. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > In the following configurations, **sftpgroup** is an example user group name, and **sftpuser** is an example username. 1. Create an SFTP user group. @@ -427,15 +427,15 @@ To harden a client, perform the following steps: ForceCommand internal-sftp ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >- **%u** is a wildcard character. Enter **%u** to represent the username of the current SFTP user. - >- The following content must be added to the end of the **/etc/ssh/sshd\_config** file: - > - > ```text - > Match Group sftpgroup - > ChrootDirectory /sftp/%u - > ForceCommand internal-sftp - > ``` + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > - **%u** is a wildcard character. Enter **%u** to represent the username of the current SFTP user. + > - The following content must be added to the end of the **/etc/ssh/sshd\_config** file: + + ```text + Match Group sftpgroup + ChrootDirectory /sftp/%u + ForceCommand internal-sftp + ``` 9. 
Restart the SSH service. @@ -451,5 +451,5 @@ To harden a client, perform the following steps: ssh -t testuser@192.168.1.100 su ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >**192.168.1.100** is an example IP address, and **testuser** is an example username. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > **192.168.1.100** is an example IP address, and **testuser** is an example username. diff --git a/docs/en/Server/Security/ShangMi/Menu/index.md b/docs/en/Server/Security/ShangMi/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..557dca328dcd684128adc37b596580b84a22d302 --- /dev/null +++ b/docs/en/Server/Security/ShangMi/Menu/index.md @@ -0,0 +1,15 @@ +--- +headless: true +--- + +- [ShangMi Overview]({{< relref "./overview.md" >}}) + - [Drive Encryption]({{< relref "./drive-encryption.md" >}}) + - [Kernel Module Signing]({{< relref "./kernel-module-signing.md" >}}) + - [Algorithm Library]({{< relref "./algorithm-library.md" >}}) + - [File Integrity Protection]({{< relref "./file-integrity-protection.md" >}}) + - [User Identity Authentication]({{< relref "./user-identity-authentication.md" >}}) + - [Certificates]({{< relref "./certificates.md" >}}) + - [Secure Boot]({{< relref "./secure-boot.md" >}}) + - [SSH Stack]({{< relref "./ssh-stack.md" >}}) + - [TLCP Stack]({{< relref "./tlcp-stack.md" >}}) + - [RPM Signature Verification]({{< relref "./rpm-signature-verification.md" >}}) diff --git a/docs/en/docs/ShangMi/algorithm-library.md b/docs/en/Server/Security/ShangMi/algorithm-library.md similarity index 76% rename from docs/en/docs/ShangMi/algorithm-library.md rename to docs/en/Server/Security/ShangMi/algorithm-library.md index 63eaf7bd69356da05dd740b0b27ee0fe6f7202f6..ca377c053521b70466745a8961caedfe461ff54f 100644 --- a/docs/en/docs/ShangMi/algorithm-library.md +++ b/docs/en/Server/Security/ShangMi/algorithm-library.md @@ -8,7 +8,7 @@ OpenSSL is a common cryptographic algorithm library software that 
supports SM2, OpenSSL 1.1.1m-6 or later -``` +```shell $ rpm -qa openssl openssl-1.1.1m-6.oe2209.x86_64 ``` @@ -19,71 +19,71 @@ openssl-1.1.1m-6.oe2209.x86_64 1. SM2 public key algorithm -Generate an SM2 private key. + Generate an SM2 private key. -``` -$ openssl ecparam -genkey -name SM2 -out priv.key -``` + ```shell + openssl ecparam -genkey -name SM2 -out priv.key + ``` -Generate a public key based on the private key. + Generate a public key based on the private key. -``` -$ openssl ec -in priv.key -pubout -out pub.key -read EC key -writing EC key -``` + ```shell + $ openssl ec -in priv.key -pubout -out pub.key + read EC key + writing EC key + ``` -Use the SM2 algorithm to sign the file and set the message digest algorithm to SM3. + Use the SM2 algorithm to sign the file and set the message digest algorithm to SM3. -``` -$ openssl dgst -sm3 -sign priv.key -out data.sig data -``` + ```shell + openssl dgst -sm3 -sign priv.key -out data.sig data + ``` -Use the public key to verify the signature. + Use the public key to verify the signature. -``` -$ openssl dgst -sm3 -verify pub.key -signature data.sig data -Verified OK -``` + ```shell + $ openssl dgst -sm3 -verify pub.key -signature data.sig data + Verified OK + ``` 2. SM3 message digest algorithm -Use the SM3 algorithm for data digest. + Use the SM3 algorithm for data digest. -``` -$ openssl dgst -sm3 data -SM3(data)= a794922bb9f0a034257f6c7090a3e8429801a42d422c21f1473e83b7f7eac385 -``` + ```shell + $ openssl dgst -sm3 data + SM3(data)= a794922bb9f0a034257f6c7090a3e8429801a42d422c21f1473e83b7f7eac385 + ``` 3. SM4 symmetric cipher algorithm -Use the SM4 algorithm to encrypt data. **-K** and **-iv** specify the key value and IV value used for encryption, respectively. Generally, the key value and IV value need to be randomly generated. + Use the SM4 algorithm to encrypt data. **-K** and **-iv** specify the key value and IV value used for encryption, respectively. 
Generally, the key value and IV value need to be randomly generated. -``` -$ openssl enc -sm4 -in data -K 123456789ABCDEF0123456789ABCDEF0 -iv 123456789ABCDEF0123456789ABCDEF0 -out data.enc -``` + ```shell + openssl enc -sm4 -in data -K 123456789ABCDEF0123456789ABCDEF0 -iv 123456789ABCDEF0123456789ABCDEF0 -out data.enc + ``` -Use the SM4 algorithm to decrypt data. + Use the SM4 algorithm to decrypt data. -``` -$ openssl enc -d -sm4 -in data.enc -K 123456789ABCDEF0123456789ABCDEF0 -iv 123456789ABCDEF0123456789ABCDEF0 -out data.raw -``` + ```shell + openssl enc -d -sm4 -in data.enc -K 123456789ABCDEF0123456789ABCDEF0 -iv 123456789ABCDEF0123456789ABCDEF0 -out data.raw + ``` -Compare the encrypted and decrypted data. The results are consistent. + Compare the encrypted and decrypted data. The results are consistent. -``` -$ diff data data.raw -``` + ```shell + diff data data.raw + ``` #### Scenario 2: Using APIs to Call Cryptographic Algorithms You can directly install openssl-help and query the **man** manual. -``` -$ yum install openssl-help -$ man sm2 -$ man EVP_sm3 -$ man EVP_sm4_cbc +```shell +yum install openssl-help +man sm2 +man EVP_sm3 +man EVP_sm4_cbc ``` ## Kernel Cryptographic Interface @@ -96,7 +96,7 @@ The cryptographic algorithms of the Linux kernel is managed by the crypto framew Kernel 5.10.0-106 or later -``` +```shell # rpm -qa kernel kernel-5.10.0-106.1.0.55.oe2209.x86_64 ``` @@ -107,7 +107,7 @@ kernel-5.10.0-106.1.0.55.oe2209.x86_64 Use **/proc/crypto** to query the registered SM series cryptographic algorithms. By default, the SM2 and SM3 algorithms are loaded. -``` +```shell $ cat /proc/crypto | grep sm3 -A8 name : sm3 driver : sm3-generic @@ -133,7 +133,7 @@ type : akcipher By default, the SM4 algorithm is not loaded. You need to insert the corresponding module first. 
-``` +```shell $ modprobe sm4-generic $ cat /proc/crypto | grep sm4 -A8 name : sm4 @@ -153,7 +153,7 @@ max keysize : 16 The method of calling SM series cryptographic algorithms is the same as that of calling other algorithms of the same type. For details, see the Linux kernel document. -``` +```text https://www.kernel.org/doc/html/v5.10/crypto/userspace-if.html ``` @@ -170,7 +170,7 @@ The crypto framework allows registration of algorithm implementations related to When multiple instances of the same algorithm are registered, the default algorithm is selected based on the registered priority of each algorithm instance. A larger **priority** value indicates a higher priority. The priority of a pure software algorithm (with the suffix **-generic**) is fixed to **100**. By default, the performance optimization through instruction sets is disabled for the SM series cryptographic algorithms and is provided for users in the form of a kernel module. For example, to enable the AVX instruction set optimization of the SM3 algorithm, do as follows: -``` +```shell $ modprobe sm3-avx $ cat /proc/crypto | grep sm3 -A8 name : sm3 diff --git a/docs/en/docs/ShangMi/certificates.md b/docs/en/Server/Security/ShangMi/certificates.md similarity index 100% rename from docs/en/docs/ShangMi/certificates.md rename to docs/en/Server/Security/ShangMi/certificates.md diff --git a/docs/en/Server/Security/ShangMi/drive-encryption.md b/docs/en/Server/Security/ShangMi/drive-encryption.md new file mode 100644 index 0000000000000000000000000000000000000000..4e260107885b0559064c3c6f6bf823110e3b26a8 --- /dev/null +++ b/docs/en/Server/Security/ShangMi/drive-encryption.md @@ -0,0 +1,90 @@ +# Drive Encryption + +## Overview + +Drive encryption protects the storage confidentiality of important data. Data is encrypted based on a specified encryption algorithm and then written to drives. This feature mainly involves the user-mode tool cryptsetup and the kernel-mode module dm-crypt. 
Currently, the drive encryption feature provided by the openEuler OS supports ShangMi (SM) series cryptographic algorithms. Parameters are as follows: + +- Encryption modes: luks2 and plain +- Key length: 256 bits +- Message digest algorithm: SM3 +- Encryption algorithm: sm4-xts-plain64 + +## Prerequisites + +1. Kernel 5.10.0-106 or later + + ```shell + $ rpm -qa kernel + kernel-5.10.0-106.1.0.55.oe2209.x86_64 + ``` + +2. cryptsetup 2.4.1-1 or later + + ```shell + $ rpm -qa cryptsetup + cryptsetup-2.4.1-1.oe2209.x86_64 + ``` + +## Usage + +A drive is formatted in a specified encryption mode and mapped to **/dev/mapper** as a dm device. Subsequent drive read and write operations are performed through the dm device. Data encryption and decryption are performed in kernel mode and are not perceived by users. The procedure is as follows: + +1. Format the drive and map the drive as a dm device. + + a. luks2 mode + + Set the encryption mode to luks2, encryption algorithm to sm4-xts-plain64, key length to 256 bits, and message digest algorithm to SM3. + + ```shell + # cryptsetup luksFormat /dev/sdd -c sm4-xts-plain64 --key-size 256 --hash sm3 + # cryptsetup luksOpen /dev/sdd crypt1 + ``` + + b. plain mode + + Set the encryption mode to plain, encryption algorithm to sm4-xts-plain64, key length to 256 bits, and message digest algorithm to SM3. + + ```shell + # cryptsetup plainOpen /dev/sdd crypt1 -c sm4-xts-plain64 --key-size 256 --hash sm3 + ``` + +2. After the mapping is successful, run the **lsblk** command to view the device information. + + ```shell + # lsblk + NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS + ...... + sdd 8:48 0 50G 0 disk + └─crypt1 253:3 0 50G 0 crypt + ...... + ``` + +3. Perform I/O read and write operations on the encrypted device. + + Deliver I/Os to raw drives. + + ```shell + # dd if=/dev/random of=/dev/mapper/crypt1 bs=4k count=10240 + ``` + + Deliver I/Os through the file system.
+ + ```shell + # mkfs.ext4 /dev/mapper/crypt1 + # mount /dev/mapper/crypt1 /mnt/crypt/ + # dd if=/dev/random of=/mnt/crypt/tmp bs=4k count=10240 + ``` + +4. Disable device mapping. + + If a file system is mounted, unmount it first. + + ```shell + # umount /mnt/crypt + ``` + + Close the device. + + ```shell + # cryptsetup close crypt1 + ``` diff --git a/docs/en/docs/ShangMi/file-integrity-protection.md b/docs/en/Server/Security/ShangMi/file-integrity-protection.md similarity index 94% rename from docs/en/docs/ShangMi/file-integrity-protection.md rename to docs/en/Server/Security/ShangMi/file-integrity-protection.md index 0e04b149ceef71bba007c9e9ac85ced4223f41a6..409b28e2e2c57638c9b5e48390a0ccd0482bf18c 100644 --- a/docs/en/docs/ShangMi/file-integrity-protection.md +++ b/docs/en/Server/Security/ShangMi/file-integrity-protection.md @@ -7,23 +7,25 @@ IAM is a mandatory access control subsystem provided by the Linux kernel. It mea ### Prerequisites 1. The openEuler kernel compilation environment has been prepared. For details, see . -2. In openEuler kernel 5.10, the ShangMi (SM) series cryptographic algorithms are supported for kernel module signing. You are advised to select the latest kernel 5.10 source code for compilation. -3. The kernel SM2 root certificate has been generated. +2. You are advised to select the latest kernel source code for compilation. +3. The kernel SM2 root certificate has been generated (for appraisal mode only). - ```shell + ```sh # Generate a certificate configuration file. (Other fields in the configuration file can be defined as required.) - $ echo 'subjectKeyIdentifier=hash' > ca.cfg + echo 'subjectKeyIdentifier=hash' > ca.cfg + echo 'authorityKeyIdentifier=keyid,issuer' >> ca.cfg + echo 'keyUsage=digitalSignature,nonRepudiation' >> ca.cfg # Generate a private key for SM2 signing. - $ openssl ecparam -genkey -name SM2 -out ca.key + openssl ecparam -genkey -name SM2 -out ca.key # Generate a signing request.
- $ openssl req -new -sm3 -key ca.key -out ca.csr + openssl req -new -sm3 -key ca.key -out ca.csr # Generate an SM2 certificate. - $ openssl x509 -req -days 3650 -extfile ca.cfg -signkey ca.key -in ca.csr -out ca.crt + openssl x509 -req -days 3650 -extfile ca.cfg -signkey ca.key -in ca.csr -out ca.crt ``` 4. The level-2 IMA certificate has been generated. - ```shell + ```sh # Create a certificate configuration file. echo 'subjectKeyIdentifier=hash' > ima.cfg echo 'authorityKeyIdentifier=keyid,issuer' >> ima.cfg @@ -39,7 +41,7 @@ IAM is a mandatory access control subsystem provided by the Linux kernel. It mea 5. The root certificate has been placed in the kernel source code directory, and **CONFIG_SYSTEM_TRUSTED_KEYS** has been modified to compile the certificate to the kernel trusted key. - ```shell + ```sh $ cp /path/to/ca.crt . $ make openeuler_defconfig $ cat .config | grep CONFIG_SYSTEM_TRUSTED_KEYS @@ -48,13 +50,13 @@ IAM is a mandatory access control subsystem provided by the Linux kernel. It mea 6. The kernel has been compiled and installed. -```shell +```sh make -j64 make modules_install make install ``` -### How to Use +### Usage #### Scenario 1: Native IMA @@ -62,13 +64,13 @@ make install Configure the IMA policy and message digest algorithm, disable the IMA-appraisal mode, and restart the system. -```shell +```sh ima_policy=tcb ima_hash=sm3 ima_appraise=off ``` Check the measurement log. It is found that the IMA measures all protected files and the message digest algorithm is SM3.
-```shell +```sh cat /sys/kernel/security/ima/ascii_runtime_measurements 10 601989730f01fb4688bba92d0ec94340cd90757f ima-sig sm3:0000000000000000000000000000000000000000000000000000000000000000 boot_aggregate 10 dc0a98316b03ab15edd2b8daae75a0d64bca7c56 ima-sig sm3:3c62ee3c13ee32d7a287e04c843c03ebb428a5bb3dd83561efffe9b08444be22 /usr/lib/systemd/systemd @@ -80,7 +82,7 @@ cat /sys/kernel/security/ima/ascii_runtime_measurements Configure the IMA policy and message digest algorithm, enable the IMA-appraisal fix mode, and restart the system. -```shell +```sh ima_policy=appraise_tcb ima_hash=sm3 ima_appraise=fix ``` @@ -88,13 +90,13 @@ ima_policy=appraise_tcb ima_hash=sm3 ima_appraise=fix Perform an open operation on all files to be appraised to automatically mark the .ima extension. -```shell +```sh find / -fstype ext4 -type f -uid 0 -exec dd if='{}' of=/dev/null count=0 status=none \; ``` After the marking is complete, you can see that all files with the .ima extension of the SM3 message digest algorithm are marked. -```shell +```sh getfattr -m - -d -e hex /bin/bash getfattr: Removing leading '/' from absolute path names # file: bin/bash @@ -107,7 +109,7 @@ SM3(/bin/bash)= a794922bb9f0a034257f6c7090a3e8429801a42d422c21f1473e83b7f7eac385 Enable the enforce mode and restart the system. The system can run properly. -```shell +```sh ima_policy=appraise_tcb ima_hash=sm3 ima_appraise=enforce ``` @@ -118,14 +120,14 @@ ima_policy=appraise_tcb ima_hash=sm3 ima_appraise=enforce 1. The SM root certificate has been preset in the kernel. 2. The ima-evm-utils software package whose version is later than or equal to the specified version has been installed. -```shell +```sh $ rpm -qa ima-evm-utils ima-evm-utils-1.3.2-4.oe2209.x86_64 ``` Generate a level-2 IMA certificate. -```shell +```sh # Create a certificate configuration file. 
echo 'subjectKeyIdentifier=hash' > ima.cfg echo 'authorityKeyIdentifier=keyid,issuer' >> ima.cfg @@ -141,7 +143,7 @@ openssl x509 -outform DER -in ima.crt -out x509_ima.der Place the IMA certificate in the **/etc/keys** directory and run the **dracut** command to create initrd again. -```shell +```sh mkdir -p /etc/keys cp x509_ima.der /etc/keys echo 'install_items+=" /etc/keys/x509_ima.der "' >> /etc/dracut.conf @@ -150,19 +152,19 @@ dracut -f Sign the files to be protected. For example, sign all executable files of the **root** user in the **/usr/bin** directory. -```shell +```sh find /usr/bin -fstype ext4 -type f -executable -uid 0 -exec evmctl -a sm3 ima_sign --key /path/to/ima.key '{}' \; ``` Enable the enforce mode and restart the system. The system can run properly. -```shell +```sh ima_policy=appraise_tcb ima_hash=sm3 ima_appraise=enforce ``` Check the protection effect of the signing mode. -```shell +```sh # getfattr -m - -d /bin/echo getfattr: Removing leading '/' from absolute path names # file: bin/echo @@ -183,7 +185,7 @@ For each file under the IMA protection, you need to use either the hash mode or Configure **-a sm3** when calling **gen_digest_lists** to generate SM3 digest lists. -```shell +```sh gen_digest_lists -a sm3 -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/bash -d -i i: gen_digest_lists -a sm3 -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/bash -d -i i: -T ``` @@ -192,18 +194,16 @@ gen_digest_lists -a sm3 -t metadata -f compact -i l:policy -o add -p -1 -m immut The overall procedure is the same as that for enabling the IMA Digest Lists feature. The only difference is that the **ima_hash** startup parameter is set to **sm3**. 
The following gives an example: -```shell +```sh # Log mode ima_template=ima-sig ima_policy="exec_tcb|appraise_exec_tcb|appraise_exec_immutable" initramtmpfs ima_hash=sm3 ima_appraise=log evm=allow_metadata_writes evm=x509 ima_digest_list_pcr=11 ima_appraise_digest_list=digest # enforce mode ima_template=ima-sig ima_policy="exec_tcb|appraise_exec_tcb|appraise_exec_immutable" initramtmpfs ima_hash=sm3 ima_appraise=enforce-evm evm=allow_metadata_writes evm=x509 ima_digest_list_pcr=11 ima_appraise_digest_list=digest ``` -For details about other steps, see **Administrator Guide** > **Trusted Computing** > **[Initial Deployment in the Digest Lists Scenario](../Administration/trusted-computing.md#initial-deployment-in-the-digest-lists-scenario)**. - After the configuration is complete, restart the system and query the measurement log. The default algorithm in the measurement log is changed to SM3. -```shell +```sh $ cat /sys/kernel/security/ima/ascii_runtime_measurements ...... 11 9e32183b5b1da72c6ff4298a44026e3f9af510c9 ima-sig sm3:5a2d81cd135f41e73e0224b9a81c3d8608ccde8caeafd5113de959ceb7c84948 /usr/bin/upload_digest_lists @@ -219,7 +219,7 @@ $ cat /sys/kernel/security/ima/ascii_runtime_measurements 1. The SM root certificate has been preset in the kernel. 2. The digest-list-tools and ima-evm-utils software packages of the specified versions or later have been installed. -```shell +```sh $ rpm -qa ima-evm-utils ima-evm-utils-1.3.2-4.oe2209.x86_64 $ rpm -qa digest-list-tools @@ -230,7 +230,7 @@ digest-list-tools-0.3.95-10.oe2209.x86_64 1. Generate level-2 IMA and EVM certificates (sub-certificates of the SM root certificate preset in the kernel). - ```shell + ```sh # Create a certificate configuration file. echo 'subjectKeyIdentifier=hash' > ima.cfg echo 'authorityKeyIdentifier=keyid,issuer' >> ima.cfg @@ -247,7 +247,7 @@ digest-list-tools-0.3.95-10.oe2209.x86_64 2. 
Place the IMA and EVM certificates in the **/etc/keys** directory and run the **dracut** command to create initrd again. - ```shell + ```sh mkdir -p /etc/keys cp x509_ima.der /etc/keys cp x509_evm.der /etc/keys @@ -257,7 +257,7 @@ digest-list-tools-0.3.95-10.oe2209.x86_64 3. Enable the IMA Digest Lists function. After the restart, check whether the certificates are imported to the IMA and EVM key rings. - ```shell + ```sh $ cat /proc/keys ...... 024dee5e I------ 1 perm 1f0f0000 0 0 keyring .evm: 1 @@ -268,7 +268,7 @@ digest-list-tools-0.3.95-10.oe2209.x86_64 4. Sign the IMA digest lists using the private keys corresponding to the IMA and EVM certificates. The signed IMA digest lists can be imported to the kernel. - ```shell + ```sh # Use **evmctl** to sign the digest lists. evmctl ima_sign --key /path/to/ima.key -a sm3 0-metadata_list-compact-tree-1.8.0-2.oe2209.x86_64 # Check the extension after signing. @@ -287,7 +287,7 @@ digest-list-tools-0.3.95-10.oe2209.x86_64 1. By default, the hash algorithm used in the digest lists provided by openEuler is SHA256. When the IMA digest lists measurement algorithm is set to SM3, you must remove the digest lists from the **/etc/ima/digest_lists** directory, generate new digest lists, and sign the digest lists. Otherwise, an error occurs during file integrity check. The procedure is as follows: - ```shell + ```sh # Reset the SELinux tag of a disk (perform this operation when IMA extension verification and SELinux are enabled). fixfiles -F restore # Generate digest lists for all files (you can also specify the files). @@ -314,12 +314,12 @@ AIDE is a lightweight intrusion detection tool. It checks file integrity to dete AIDE 0.17.4-1 or later -```shell +```sh $ rpm -qa aide aide-0.17.4-1.oe2209.x86_64 ``` -## How to Use +## Usage Add the SM3 algorithm to the **/etc/aide.conf** configuration file. @@ -331,11 +331,11 @@ DATAONLY = p+n+u+g+s+acl+selinux+xattrs+sha256+sm3 ...... ``` -1. 
Initialize the database and save the database as the benchmark. +Initialize the database and save the database as the benchmark. Initialize the database. -```shell +```sh aide -c /etc/aide.conf -i ``` @@ -380,13 +380,13 @@ End timestamp: 2022-08-12 09:01:25 +0800 (run time: 2m 43s) Save the database as the benchmark. -```shell +```sh mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz ``` ### Scenario 1: Detecting Changes of Protected Files -```shell +```sh $ aide -c /etc/aide.conf --check --------------------------------------------------- Detailed information about changes: @@ -409,7 +409,7 @@ File: /boot/config-5.10.0-106.3.0.57.oe2209.aarch64 Update the database. After the update, the database file is **/var/lib/aide/aide.db.new.gz**. -```shell +```sh $ aide -c /etc/aide.conf --update --------------------------------------------------- Detailed information about changes: @@ -440,7 +440,7 @@ database_new=file:@@{DBDIR}/aide.db.new.gz Compare the databases. -```shell +```sh $ aide -c /etc/aide.conf --compare --------------------------------------------------- Detailed information about changes: diff --git a/docs/en/docs/ShangMi/kernel-module-signing.md b/docs/en/Server/Security/ShangMi/kernel-module-signing.md similarity index 86% rename from docs/en/docs/ShangMi/kernel-module-signing.md rename to docs/en/Server/Security/ShangMi/kernel-module-signing.md index 0119a8c54e5479fb5acf8f787b5730379255157f..d9ced9bdcff475b349284b1f790f52fbccfe288c 100644 --- a/docs/en/docs/ShangMi/kernel-module-signing.md +++ b/docs/en/Server/Security/ShangMi/kernel-module-signing.md @@ -6,11 +6,11 @@ The kernel module signing facility is an important mechanism for protecting Linu ## Prerequisites -1. The openEuler kernel compilation environment has been prepared. For details, see https://gitee.com/openeuler/kernel/wikis/kernel. +1. The openEuler kernel compilation environment has been prepared. For details, see . 2. 
In openEuler kernel 5.10, the ShangMi (SM) series cryptographic algorithms are supported for kernel module signing. You are advised to select the latest kernel 5.10 source code for compilation. 3. The SM2 private key and certificate used for kernel module signing have been generated. The reference commands using OpenSSL are as follows: -``` +```shell # Generate a certificate configuration file. (Other fields in the configuration file can be defined as required.) $ echo 'subjectKeyIdentifier=hash' > mod.cfg # Generate a private key for SM2 signing. @@ -27,41 +27,41 @@ $ openssl x509 -req -days 3650 -extfile mod.cfg -signkey mod.key -in mod.csr -ou Write the certificate and private key to the **mod.pem** file. -``` -$ cat /path/to/mod.key > mod.pem -$ cat /path/to/mod.crt >> mod.pem +```shell +cat /path/to/mod.key > mod.pem +cat /path/to/mod.crt >> mod.pem ``` Use the SM3 algorithm to sign the kernel module in the kernel compilation options. -``` -$ make openeuler_defconfig -$ make menuconfig +```shell +make openeuler_defconfig +make menuconfig ``` Choose **Enable loadable module support** > **Sign modules with SM3** on the GUI. -``` +```shell Which hash algorithm should modules be signed with? (Sign modules with SM3) ``` Configure **Cryptographic API** > **Certificates for signature checking** to read the private key and certificate used for kernel signing from **mod.pem**. -``` +```text (mod.pem) File name or PKCS#11 URI of module signing key ``` Build the kernel. -``` -$ make -j64 -$ make modules_install -$ make install +```shell +make -j64 +make modules_install +make install ``` Run the **modinfo** command to check the signature information of the kernel module. 
-``` +```shell $ modinfo /usr/lib/modules/5.10.0/kernel/crypto/sm4.ko filename: /usr/lib/modules/5.10.0/kernel/crypto/sm4.ko license: GPL v2 @@ -77,17 +77,17 @@ signer: Internet Widgits Pty Ltd sig_key: 33:0B:96:3E:1F:C1:CA:28:98:72:F5:AE:FF:3F:A4:F3:50:5D:E1:87 sig_hashalgo: sm3 signature: 30:45:02:21:00:81:96:8D:40:CE:7F:7D:AE:3A:4B:CC:DC:9A:F2:B4: - 16:87:3E:C3:DC:77:ED:BC:6E:F5:D8:F3:DD:77:2B:D4:05:02:20:3B: - 39:5A:89:9D:DC:27:83:E8:D8:B4:75:86:FF:33:2B:34:33:D0:90:76: - 32:4D:36:88:84:34:31:5C:83:63:6B + 16:87:3E:C3:DC:77:ED:BC:6E:F5:D8:F3:DD:77:2B:D4:05:02:20:3B: + 39:5A:89:9D:DC:27:83:E8:D8:B4:75:86:FF:33:2B:34:33:D0:90:76: + 32:4D:36:88:84:34:31:5C:83:63:6B ``` ### Scenario 2: Manual Signing Call **sign_file** in the kernel source code directory to sign the specified kernel module. -``` -$ ./scripts/sign-file sm3 /path/to/mod.key /path/to/mod.crt +```shell +./scripts/sign-file sm3 /path/to/mod.key /path/to/mod.crt /path/to/module.ko ``` Other steps are the same as those in scenario 1. @@ -96,12 +96,12 @@ Other steps are the same as those in scenario 1. Add **module.sig_enforce** to the kernel startup parameters to enable forcible signature verification for the kernel module. -``` +```text linux /vmlinuz-5.10.0-106.1.0.55.oe2209.x86_64 root=/dev/mapper/openeuler-root ro resume=/dev/mapper/openeuler-swap rd.lvm.lv=openeuler/root rd.lvm.lv=openeuler/swap crashkernel=512M module.sig_enforce ``` After the system is restarted, only the kernel modules that pass the certificate verification can be loaded.
-``` +```shell # insmod /usr/lib/modules/5.10.0/kernel/crypto/sm4.ko ``` diff --git a/docs/en/docs/ShangMi/overview.md b/docs/en/Server/Security/ShangMi/overview.md similarity index 36% rename from docs/en/docs/ShangMi/overview.md rename to docs/en/Server/Security/ShangMi/overview.md index 5a398bd72e24cb75ef0a7b15d9ebd410e6cb4b54..d450b81a20cfc57fdef580cfa4cb0f1a2dce3021 100644 --- a/docs/en/docs/ShangMi/overview.md +++ b/docs/en/Server/Security/ShangMi/overview.md @@ -1,6 +1,16 @@ # Overview -The ShangMi (SM) features for the openEuler OS aims to enable SM series cryptographic algorithms for key security features of the OS and provide cryptographic services such as the SM series cryptographic algorithm library, certificates, and secure transmission protocols for upper-layer applications. +ShangMi (SM) algorithms are commercial-grade cryptographic technologies. Cryptographic algorithms form the backbone of security technologies in information systems. Globally, widely adopted algorithms include RSA, AES, and SHA256. In parallel, China has developed a suite of cryptographic algorithms that cater to mainstream application scenarios. Among these, SM2, SM3, and SM4 are particularly prominent in OSs. + +| Algorithm | Publicly Available | Type | Application Scenarios | +| --------- | ------------------ | --------------------- | --------------------------------------------------------------------- | +| SM2 | Yes | Asymmetric encryption | Digital signatures, key exchange, encryption/decryption, PKI systems | +| SM3 | Yes | Hash algorithm | Integrity protection, one-way encryption, and other general scenarios | +| SM4 | Yes | Symmetric encryption | Encrypted storage, secure transmission | + +Additionally, other publicly available algorithms like SM9 and ZUC, as well as non-public algorithms such as SM1 and SM7, are part of the ecosystem. Notably, all publicly available Chinese algorithms have been integrated into ISO/IEC standards, gaining international recognition. 
China has also established a series of cryptographic technical specifications and application standards, including commercial cryptographic certificate standards and the TLCP protocol stack. These collectively form China's commercial cryptographic standard system, which guides the development of the cryptographic security industry. + +The SM features for the openEuler OS aim to enable SM series cryptographic algorithms for key security features of the OS and provide cryptographic services such as the SM series cryptographic algorithm library, certificates, and secure transmission protocols for upper-layer applications. Currently, the following SM features are supported: @@ -15,4 +25,5 @@ Currently, the following SM features are supported: 9. The SM2 certificate is supported in kernel module signing and module signature verification. 10. SM4-CBC and SM4-GCM algorithms are supported in Kernel Transport Layer Security (KTLS). 11. SM3 and SM4 algorithms are supported in Kunpeng Accelerator Engine (KAE). -12. The SM3 algorithm and SM2 certificated signature are supported for UEFI secure boot. \ No newline at end of file +12. UEFI secure boot supports the SM3 digest algorithm and SM2 digital signatures. +13. RPM supports the SM2 encryption/decryption algorithm and SM3 digest algorithm for signing and verification. diff --git a/docs/en/Server/Security/ShangMi/rpm-signature-verification.md b/docs/en/Server/Security/ShangMi/rpm-signature-verification.md new file mode 100644 index 0000000000000000000000000000000000000000..c0df357090c3133907ed134f3a8fdd370cf012be --- /dev/null +++ b/docs/en/Server/Security/ShangMi/rpm-signature-verification.md @@ -0,0 +1,99 @@ +# RPM Signature Verification + +## Overview + +openEuler employs RPM for package management, adhering to the OpenPGP signature specification. openEuler 24.03 LTS SP1 enhances the open source RPM by adding support for SM2/3 algorithm-based signature generation and verification.
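The SM2/SM3 signing path hashes the RPM header data with SM3 (the verification output later reads "Header V4 ECDSA/SM3 Signature"). Before relying on it, it can be worth confirming that the local OpenSSL build actually provides SM3. A minimal sanity check, assuming OpenSSL 1.1.1 or later with SM support compiled in, compares the digest of the string "abc" against the test vector published in the SM3 standard (GB/T 32905-2016):

```shell
# Sanity check: does the local OpenSSL provide SM3?
# The expected value is the published SM3 test vector for the message "abc".
digest=$(printf %s "abc" | openssl dgst -sm3 | awk '{print $NF}')
echo "$digest"
# 66c7f0f462eeedd9d1f2d46bdc10e4e24167c4875cf2f7a2297da02b8f4ba8e0
```

If `openssl dgst -sm3` reports an unknown digest, the installed OpenSSL was built without SM support and the SM-based signing and verification steps cannot work.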
+ +The following packages have been enhanced for SM algorithm capabilities: + +- GnuPG: The `gpg` CLI tool now supports generating SM signatures. +- RPM: RPM can now invoke `gpg` commands and OpenSSL APIs for SM signature generation and verification. +- OpenSSL: SM signature verification is supported (already supported in the open source version). + +## Prerequisites + +1. The following or later versions of gnupg2, libgcrypt, and rpm packages must be installed: + + ```sh + $ rpm -qa libgcrypt + libgcrypt-1.10.2-3.oe2403sp1.x86_64 + + $ rpm -qa gnupg2 + gnupg2-2.4.3-5.oe2403sp1.x86_64 + + $ rpm -qa rpm + rpm-4.18.2-20.oe2403sp1.x86_64 + ``` + +2. ECDSA signing and verification are limited to SM2. + +## Usage + +1. Generate a key. + + Method 1: + + ```sh + gpg --full-generate-key --expert + ``` + + Method 2: + + ```sh + gpg --quick-generate-key sm2p256v1 + ``` + + You will be prompted to enter a password. This password is required for subsequent key operations or signing. Pressing Enter without entering a password means no password is set. + +2. Export the certificate. + + ```sh + gpg -o --export + ``` + +3. Enable the macro for the SM3 hash algorithm and the SM2 algorithm. + + ```sh + $ vim /usr/lib/rpm/macros + %_enable_sm2p256v1_sm3_algo 1 + ``` + +4. Import the certificate into the RPM database. + + ```sh + rpm --import + ``` + +5. Write the macros required for signing. + + ```sh + $ vim ~/.rpmmacros + %_signature gpg + %_gpg_path /root/.gnupg + %_gpg_name + %_gpgbin /usr/bin/gpg2 + + %__gpg_sign_cmd %{shescape:%{__gpg}} \ + gpg --no-verbose --no-armor --no-secmem-warning --passphrase-file /root/passwd \ + %{?_gpg_digest_algo:--digest-algo=%{_gpg_digest_algo}} \ + %{?_gpg_sign_cmd_extra_args} \ + %{?_gpg_name:-u %{shescape:%{_gpg_name}}} \ + -sbo %{shescape:%{?__signature_filename}} \ + %{?__plaintext_filename:-- %{shescape:%{__plaintext_filename}}} + ``` + + `%__gpg_sign_cmd` includes the default configuration with the addition of `--passphrase-file /root/passwd`.
The **passwd** file contains the password. This addition is required only if a password is set in step 1. + +6. Generate an RPM package signature. + + ```sh + rpmsign --addsign + ``` + +7. Verify the RPM package signature. + + ```sh + rpm -Kv + ``` + + If the output shows "Header V4 ECDSA/SM3 Signature" and "OK," the signature verification is successful. diff --git a/docs/en/docs/ShangMi/secure-boot.md b/docs/en/Server/Security/ShangMi/secure-boot.md similarity index 69% rename from docs/en/docs/ShangMi/secure-boot.md rename to docs/en/Server/Security/ShangMi/secure-boot.md index 092a05510ad2ba1fff0a99e005f92e95183ce359..b233f1581c4e75d83512f47220b9ed13008e9c54 100644 --- a/docs/en/docs/ShangMi/secure-boot.md +++ b/docs/en/Server/Security/ShangMi/secure-boot.md @@ -19,7 +19,7 @@ openEuler adds support for ShangMi (SM) algorithms to the pesign EFI signature t 1. The following software packages (or their later versions) must be installed: -``` +```shell openssl-1.1.1m-15.oe2203.aarch64 nss-3.72.0-4.oe2203.aarch64 pesign-115-2.oe2203.aarch64 @@ -29,19 +29,19 @@ crypto-policies-20200619-3.git781bbd4.oe2203.noarch 2. Download the source code of the openEuler shim component. Ensure that the version in the spec file is later than 15.6-7. -``` +```shell git clone https://gitee.com/src-openeuler/shim.git -b openEuler-22.03-LTS-SP1 --depth 1 ``` 3. Install software packages required for building the shim component: -``` +```shell yum install elfutils-libelf-devel gcc gnu-efi gnu-efi-devel openssl-devel make git rpm-build ``` 4. Check whether the SM3 algorithm is enabled for nss.
If not, modify the file content as follows: -``` +```shell cat /usr/share/crypto-policies/DEFAULT/nss.txt | grep SM3 config="disallow=ALL allow=HMAC-SHA256:HMAC-SHA1:HMAC-SHA384:HMAC-SHA512:CURVE25519:SECP256R1:SECP384R1:SECP521R1:aes256-gcm:chacha20-poly1305:aes256-cbc:aes128-gcm:aes128-cbc:SHA256:SHA384:SHA512:SHA224:SHA1:ECDHE-RSA:ECDHE-ECDSA:RSA:DHE-RSA:ECDSA:RSA-PSS:RSA-PKCS:tls-version-min=tls1.0:dtls-version-min=dtls1.0:DH-MIN=1023:DSA-MIN=2048:RSA-MIN=2048:SM3" ``` @@ -50,7 +50,7 @@ config="disallow=ALL allow=HMAC-SHA256:HMAC-SHA1:HMAC-SHA384:HMAC-SHA512:CURVE25 1. Generate the key and certificate for signing the shim component. The shim signature is verified by the BIOS. As most BIOSs do not support SM algorithms, the RSA algorithm is used. For BIOSs that support SM algorithms you can generate the SM2 key and certificate by referring to the next step. -``` +```shell openssl genrsa -out rsa.key 4096 openssl req -new -key rsa.key -out rsa.csr -subj '/C=AA/ST=BB/O=CC/OU=DD/CN=secure boot BIOS' openssl x509 -req -days 365 -in rsa.csr -signkey rsa.key -out rsa.crt @@ -59,7 +59,7 @@ openssl x509 -in rsa.crt -out rsa.der -outform der 2. Generate the SM2 key and certificate for signing the GRUB and kernel components. -``` +```shell openssl ecparam -genkey -name SM2 -out sm2.key openssl req -new -sm3 -key sm2.key -out sm2.csr -subj '/C=AA/ST=BB/O=CC/OU=DD/CN=secure boot shim' openssl x509 -req -days 3650 -signkey sm2.key -in sm2.csr -out sm2.crt @@ -68,7 +68,7 @@ openssl x509 -in sm2.crt -out sm2.der -outform der 3. Create an NSS database and import the keys and certificates generated in the preceding two steps to the NSS database. -``` +```shell # The NSS database is organized in the form of directories. The storage location can be customized. mkdir certdb certutil -N -d certdb @@ -86,67 +86,67 @@ pk12util -d certdb -i sm2.p12 1. 
Go to the shim source code directory, modify the configuration variables in shim.spec to enable the support for SM algorithms, and specify the built-in SM2 certificate. -``` -%global enable_sm 1 -%global vendor_cert /path/to/sm2.der -``` + ```text + %global enable_sm 1 + %global vendor_cert /path/to/sm2.der + ``` 2. Build the shim software package. -``` -rpmbuild -ba shim.spec --define "_sourcedir $PWD" -``` + ```shell + rpmbuild -ba shim.spec --define "_sourcedir $PWD" + ``` 3. Install the built shim software package. -``` -rpm -Uvh ~/rpmbuild/RPMS/aarch64/shim-xxx.rpm -``` + ```shell + rpm -Uvh ~/rpmbuild/RPMS/aarch64/shim-xxx.rpm + ``` ## SM Signature for UEFI Files 1. Sign the shim component with the RSA key and certificate and replace the original one. -``` -# ARM64 -pesign -n certdb -c rsa -s -i /boot/efi/EFI/openEuler/shimaa64.efi -o shimaa64.efi.signed -cp shimaa64.efi.signed /boot/efi/EFI/openEuler/shimaa64.efi -# x86 -pesign -n certdb -c rsa -s -i /boot/efi/EFI/openEuler/shimx64.efi -o shimx64.efi.signed -cp shimx64.efi.signed /boot/efi/EFI/openEuler/shimx64.efi -``` + ```shell + # ARM64 + pesign -n certdb -c rsa -s -i /boot/efi/EFI/openEuler/shimaa64.efi -o shimaa64.efi.signed + cp shimaa64.efi.signed /boot/efi/EFI/openEuler/shimaa64.efi + # x86 + pesign -n certdb -c rsa -s -i /boot/efi/EFI/openEuler/shimx64.efi -o shimx64.efi.signed + cp shimx64.efi.signed /boot/efi/EFI/openEuler/shimx64.efi + ``` 2. Sign the GRUB component with the SM2 key and certificate and replace the original one. 
-``` -# ARM64 -pesign -n certdb -c sm2 -s -i /boot/efi/EFI/openEuler/grubaa64.efi -o grubaa64.efi.signed -d sm3 -cp grubaa64.efi.signed /boot/efi/EFI/openEuler/grubaa64.efi -# x86 -pesign -n certdb -c sm2 -s -i /boot/efi/EFI/openEuler/grubx64.efi -o grubx64.efi.signed -d sm3 -cp grubx64.efi.signed /boot/efi/EFI/openEuler/grubx64.efi -``` + ```shell + # ARM64 + pesign -n certdb -c sm2 -s -i /boot/efi/EFI/openEuler/grubaa64.efi -o grubaa64.efi.signed -d sm3 + cp grubaa64.efi.signed /boot/efi/EFI/openEuler/grubaa64.efi + # x86 + pesign -n certdb -c sm2 -s -i /boot/efi/EFI/openEuler/grubx64.efi -o grubx64.efi.signed -d sm3 + cp grubx64.efi.signed /boot/efi/EFI/openEuler/grubx64.efi + ``` 3. Sign the kernel component with the SM2 key and certificate and replace the original one. (Note that the file name contains the actual version number.) -``` -# For the ARM64 architecture,you need to decompress and sign the component, and compress it again. -cp /boot/vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64 vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.gz -gzip -d vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.gz -pesign -n certdb -c sm2 -s -i vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64 -o vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.signed -d sm3 -gzip vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.signed -cp vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.signed.gz /boot/vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64 -# x86 -pesign -n certdb -c sm2 -s -i /boot/vmlinuz-5.10.0-126.0.0.66.oe2203.x86_64 -o vmlinuz-5.10.0-126.0.0.66.oe2203.x86_64.signed -d sm3 -cp vmlinuz-5.10.0-126.0.0.66.oe2203.x86_64.signed /boot/vmlinuz-5.10.0-126.0.0.66.oe2203.x86_64 -``` + ```shell + # For the ARM64 architecture, you need to decompress and sign the component, and compress it again.
+ cp /boot/vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64 vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.gz + gzip -d vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.gz + pesign -n certdb -c sm2 -s -i vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64 -o vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.signed -d sm3 + gzip vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.signed + cp vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64.signed.gz /boot/vmlinuz-5.10.0-126.0.0.66.oe2203.aarch64 + # x86 + pesign -n certdb -c sm2 -s -i /boot/vmlinuz-5.10.0-126.0.0.66.oe2203.x86_64 -o vmlinuz-5.10.0-126.0.0.66.oe2203.x86_64.signed -d sm3 + cp vmlinuz-5.10.0-126.0.0.66.oe2203.x86_64.signed /boot/vmlinuz-5.10.0-126.0.0.66.oe2203.x86_64 + ``` 4. Check the signature information. The following uses shim and GRUB as examples: -``` -pesign -S -i /boot/efi/EFI/openEuler/grubaa64.efi -pesign -S -i /boot/efi/EFI/openEuler/shimaa64.efi -``` + ```shell + pesign -S -i /boot/efi/EFI/openEuler/grubaa64.efi + pesign -S -i /boot/efi/EFI/openEuler/shimaa64.efi + ``` ## Secure Boot @@ -154,32 +154,32 @@ Enter the BIOS, import the certificate for signing the shim component, and enabl 1. Place the RSA certificate for signing the shim component in the **/boot/efi/EFI/openEuler** directory. -``` -cp rsa.der /boot/efi/EFI/openEuler -``` + ```shell + cp rsa.der /boot/efi/EFI/openEuler + ``` 2. Restart the system. 3. Enter BIOS to enable Secure Boot: -``` -Setup > Security > Secure Boot > Enable -``` + ```text + Setup > Security > Secure Boot > Enable + ``` 4. Set the Secure Boot mode to custom: -``` -Setup > Security > Secure Boot Certificate Configuration > Secure Boot Mode > Custom -``` + ```text + Setup > Security > Secure Boot Certificate Configuration > Secure Boot Mode > Custom + ``` 5. Import the Secure Boot certificate: -``` -Setup > Security > Secure Boot Certificate Configuration > Options Related to Secure Boot Custom Mode > DB Options > Import Signature > Add Signature by File > Select rsa.der > Save and exit. 
-``` + ```text + Setup > Security > Secure Boot Certificate Configuration > Options Related to Secure Boot Custom Mode > DB Options > Import Signature > Add Signature by File > Select rsa.der > Save and exit. + ``` 6. Save the configuration and restart the system. The system is started successfully. Secure Boot is enabled. -``` -mokutil --sb-state -SecureBoot enabled -``` + ```shell + mokutil --sb-state + SecureBoot enabled + ``` diff --git a/docs/en/docs/ShangMi/ssh-stack.md b/docs/en/Server/Security/ShangMi/ssh-stack.md similarity index 51% rename from docs/en/docs/ShangMi/ssh-stack.md rename to docs/en/Server/Security/ShangMi/ssh-stack.md index d2ad40da79467da371d2a9c6646ccc25394c6022..145302146b99876a56c0b79f066c737b3be65069 100644 --- a/docs/en/docs/ShangMi/ssh-stack.md +++ b/docs/en/Server/Security/ShangMi/ssh-stack.md @@ -8,7 +8,7 @@ The OpenSSH component is a Secure Shell Protocol (SSH) component implemented bas OpenSSH 8.8p1-5 or later -``` +```shell $ rpm -qa | grep openssh openssh-8.8p1-5.oe2209.x86_64 openssh-server-8.8p1-5.oe2209.x86_64 @@ -21,32 +21,32 @@ openssh-clients-8.8p1-5.oe2209.x86_64 1. On the client, call **ssh-keygen** to generate a user key, which is saved as **\~/.ssh/id_sm2** and **\~/.ssh/id_sm2.pub** by default. Then, send **\~/.ssh/id_sm2.pub** from the client to the server. (You can also run the **ssh-copy-id** command to send the file.) -``` -$ ssh-keygen -t sm2 -m PEM -``` + ```shell + $ ssh-keygen -t sm2 -m PEM + ``` 2. On the server, call **ssh-keygen** to generate a host key and add the public key sent by the client to the authorized key file list. (If you run the **ssh-copy-id** command, the public key is automatically written.) -``` -$ ssh-keygen -t sm2 -m PEM -f /etc/ssh/ssh_host_sm2_key -$ cat /path/to/id_sm2.pub >> ~/.ssh/authorized_keys -``` + ```shell + $ ssh-keygen -t sm2 -m PEM -f /etc/ssh/ssh_host_sm2_key + $ cat /path/to/id_sm2.pub >> ~/.ssh/authorized_keys + ``` 3. 
On the server, modify the **/etc/ssh/sshd_config** file to support login using SM series cryptographic algorithms. The following table lists the SM configuration items. -| Description | Configuration Item | SM Value | -|---------------------|------------------------|---------------| -| Authentication key for the host key and public key (configurable only on the server)| HostKeyAlgorithms | /etc/ssh/ssh_host_sm2_key | -| Host key and public key authentication algorithm | HostKeyAlgorithms | sm2 | -| Key exchange algorithm | KexAlgorithms | sm2-sm3 | -| Symmetric cryptographic algorithm | Ciphers | sm4-ctr | -| Integrity check algorithm | MACs | hmac-sm3 | -| User public key authentication algorithm | PubkeyAcceptedKeyTypes | sm2 | -| Authentication key for the user public key (configurable only on the client) | IdentityFile | ~/.ssh/id_sm2 | -| Hash algorithm used for printing key fingerprints | FingerprintHash | sm3 | +| Description | Configuration Item | SM Value | +| ------------------------------------------------------------------------------------ | ---------------------- | ------------------------- | +| Authentication key for the host key and public key (configurable only on the server) | HostKey | /etc/ssh/ssh_host_sm2_key | +| Host key and public key authentication algorithm | HostKeyAlgorithms | sm2 | +| Key exchange algorithm | KexAlgorithms | sm2-sm3 | +| Symmetric cryptographic algorithm | Ciphers | sm4-ctr | +| Integrity check algorithm | MACs | hmac-sm3 | +| User public key authentication algorithm | PubkeyAcceptedKeyTypes | sm2 | +| Authentication key for the user public key (configurable only on the client) | IdentityFile | ~/.ssh/id_sm2 | +| Hash algorithm used for printing key fingerprints | FingerprintHash | sm3 | 4. On the client, configure the SM series cryptographic algorithms to complete the login. You can enable the SM Cipher Suites on the client by running commands or modifying the configuration file.
The following shows how to log in using the CLI: -``` -ssh -o PreferredAuthentications=publickey -o HostKeyAlgorithms=sm2 -o PubkeyAcceptedKeyTypes=sm2 -o Ciphers=sm4-ctr -o MACs=hmac-sm3 -o KexAlgorithms=sm2-sm3 -i ~/.ssh/id_sm2 [remote-ip] -``` + ```shell + ssh -o PreferredAuthentications=publickey -o HostKeyAlgorithms=sm2 -o PubkeyAcceptedKeyTypes=sm2 -o Ciphers=sm4-ctr -o MACs=hmac-sm3 -o KexAlgorithms=sm2-sm3 -i ~/.ssh/id_sm2 [remote-ip] + ``` diff --git a/docs/en/docs/ShangMi/tlcp-stack.md b/docs/en/Server/Security/ShangMi/tlcp-stack.md similarity index 100% rename from docs/en/docs/ShangMi/tlcp-stack.md rename to docs/en/Server/Security/ShangMi/tlcp-stack.md diff --git a/docs/en/docs/ShangMi/user-identity-authentication.md b/docs/en/Server/Security/ShangMi/user-identity-authentication.md similarity index 66% rename from docs/en/docs/ShangMi/user-identity-authentication.md rename to docs/en/Server/Security/ShangMi/user-identity-authentication.md index 4ce8db4c75e3124861e31d44fb4963a08b61ff09..241d2e6a1d14059a937f63bf0d5e1cb9cc413279 100644 --- a/docs/en/docs/ShangMi/user-identity-authentication.md +++ b/docs/en/Server/Security/ShangMi/user-identity-authentication.md @@ -12,52 +12,51 @@ PAM is a pluggable authentication module of the system that provides an authenti 1. PAM 1.5.2-2 or later -``` -$ rpm -qa pam -pam-1.5.2-2.oe2209.x86_64 -``` + ```shell + $ rpm -qa pam + pam-1.5.2-2.oe2209.x86_64 + ``` 2. libxcrypt 4.4.26-2 or later -``` -$ rpm -qa libxcrypt -pam-4.4.26-2.oe2209.x86_64 -``` + ```shell + $ rpm -qa libxcrypt + libxcrypt-4.4.26-2.oe2209.x86_64 + ``` ### How to Use 1. Open the **/etc/pam.d/password-auth** and **/etc/pam.d/system-auth** files, locate the line starting with **password sufficient pam_unix.so**, and change the algorithm field in the line to **sm3**. -``` -$ cat /etc/pam.d/password-auth -......
-password sufficient pam_unix.so sm3 shadow nullok try_first_pass use_authtok -...... -``` + ```shell + $ cat /etc/pam.d/password-auth + ...... + password sufficient pam_unix.so sm3 shadow nullok try_first_pass use_authtok + ...... + + $ cat /etc/pam.d/system-auth + ...... + password sufficient pam_unix.so sm3 shadow nullok try_first_pass use_authtok + ...... + ``` 2. After the configuration is modified, the password changed by running the **passwd** command or the password created by a new user is encrypted using the SM3 algorithm. The encryption result starts with **sm3** and is stored in **/etc/shadow**. -``` -$ passwd testuser -Changing password for user testuser. -New password: -Retype new password: -passwd: all authentication tokens updated successfully. -$ cat /etc/shadow | grep testuser -testuser:$sm3$wnY86eyUlB5946gU$99LlMr0ddeZNDqnB2KRxn9f30SFCCvMv1WN1cFdsKJ2:19219:0:90:7:35:: -``` + ```shell + $ passwd testuser + Changing password for user testuser. + New password: + Retype new password: + passwd: all authentication tokens updated successfully. + $ cat /etc/shadow | grep testuser + testuser:$sm3$wnY86eyUlB5946gU$99LlMr0ddeZNDqnB2KRxn9f30SFCCvMv1WN1cFdsKJ2:19219:0:90:7:35:: + ``` ### Notes 1. By default, the SHA512 algorithm is used. After the SM3 algorithm is used, the existing user passwords are not affected. The cryptographic algorithm takes effect only after the passwords are changed. 2. If PAM and libxcrypt need to be downgraded to non-SM versions and existing user passwords are encrypted using the SM3 algorithm, modify the configuration first. - Set the algorithm to a non-SM algorithm, change the user passwords, and downgrade the software to a non-SM version. Otherwise, these users cannot log in to the system. - + Set the algorithm to a non-SM algorithm, change the user passwords, and downgrade the software to a non-SM version. Otherwise, these users cannot log in to the system. 
## Using Shadow to Encrypt User Passwords @@ -69,7 +68,7 @@ Shadow is a common user management component in Linux. It provides commands such Shadow 4.9-4 or later -``` +```shell $ rpm -qa shadow shadow-4.9-4.oe2209.x86_64 ``` @@ -78,19 +77,19 @@ shadow-4.9-4.oe2209.x86_64 1. By default, **chpasswd** uses the PAM configuration. Use **-c** to specify the SM3 algorithm. The encryption result starts with **sm3** and is stored in **/etc/shadow**. -``` -$ echo testuser:testPassword |chpasswd -c SM3 -$ cat /etc/shadow | grep testuser -testuser:$sm3$moojQQeBfdGOrL14$NqjckLHlk3ICs1cx.0rKZwRHafjVlqksdSJqfx9eYh6:19220:0:99999:7::: + ```shell + $ echo testuser:testPassword |chpasswd -c SM3 + $ cat /etc/shadow | grep testuser + testuser:$sm3$moojQQeBfdGOrL14$NqjckLHlk3ICs1cx.0rKZwRHafjVlqksdSJqfx9eYh6:19220:0:99999:7::: + ``` -``` 2. By default, **chgpasswd** uses the PAM configuration. Use **-c** to specify the SM3 algorithm. The encryption result starts with **sm3** and is stored in **/etc/shadow**. -``` -$ echo testGroup:testPassword |chpasswd -c SM3 -$ cat /etc/gshadow | grep testGroup -testGroup:$sm3$S3h3X6U6KsXg2Gkc$LFCAnKbi6JItarQz4Y/Aq9/hEbEMQXq9nQ4rY1j9BY9:: -``` + ```shell + $ echo testGroup:testPassword |chgpasswd -c SM3 + $ cat /etc/gshadow | grep testGroup + testGroup:$sm3$S3h3X6U6KsXg2Gkc$LFCAnKbi6JItarQz4Y/Aq9/hEbEMQXq9nQ4rY1j9BY9:: + ``` ### Notes @@ -106,27 +105,27 @@ The libuser library implements a standardized interface for operating and managi libuser 0.63-3 or later -``` +```shell $ rpm -qa libuser libuser-0.63-3.oe2209.x86_64 ``` ### How to Use -1. Configure **crypt_style=sm3** in the **[defaults]** section in **/etc/libuser.conf**. +1. Configure **crypt_style=sm3** in the **\[defaults]** section in **/etc/libuser.conf**. -``` -$ cat /etc/libuser.conf -...... -[defaults] -crypt_style = sm3 -...... -``` + ```shell + $ cat /etc/libuser.conf + ...... + [defaults] + crypt_style = sm3 + ...... + ``` 2.
When you run the **lusermod**, **lpasswd**, or **luseradd** command to set a user password, the default password encryption algorithm is SM3. The encryption result starts with **sm3** and is stored in **/etc/shadow**. -``` -# luseradd testuser -P Test@123 -# cat /etc/shadow | grep testuser -testuser:$sm3$1IJtoN6zlBDCiPKC$5oxscBTgiquPAEmZWGNDVqTPrboHJw3fFSohjF6sONB:18862:0:90:7:35:: -``` + ```shell + # luseradd testuser -P Test@123 + # cat /etc/shadow | grep testuser + testuser:$sm3$1IJtoN6zlBDCiPKC$5oxscBTgiquPAEmZWGNDVqTPrboHJw3fFSohjF6sONB:18862:0:90:7:35:: + ``` diff --git a/docs/en/docs/Administration/trusted-computing.md b/docs/en/Server/Security/TrustedComputing/DIM.md similarity index 34% rename from docs/en/docs/Administration/trusted-computing.md rename to docs/en/Server/Security/TrustedComputing/DIM.md index 5ba4816f745e1d327673467a9882645ae6b1d100..cc00d38f3e384871420556a9121d145dc488b48d 100644 --- a/docs/en/docs/Administration/trusted-computing.md +++ b/docs/en/Server/Security/TrustedComputing/DIM.md @@ -1,668 +1,14 @@ -# Trusted Computing - -## Trusted Computing Basics - -### What Is Trusted Computing - -The definition of being trusted varies with international organizations. - -1. Trusted Computing Group (TCG): - - An entity that is trusted always achieves the desired goal in an expected way. - -2. International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) (1999): - - The components, operations, or processes involved in computing are predictable under any conditions and are resistant to viruses and a certain degree of physical interference. - -3. IEEE Computer Society Technical Committee on Dependable Computing: - - Being trusted means that the services provided by the computer system can be proved to be reliable, and mainly refers to the reliability and availability of the system. - -In short, being trusted means that the system operates according to a pre-determined design and policy. 
- -A trusted computing system consists of a root of trust, a trusted hardware platform, operating system (OS), and application. The basic idea of the system is to create a trusted computing base (TCB) first, and then establish a trust chain that covers the hardware platform, OS, and application. In the trust chain, authentication is performed from the root to the next level, extending trust level by level and building a secure and trusted computing environment. - -![](./figures/trusted_chain.png) - -Unlike the traditional security mechanism that eliminates viruses without solving the root of the problem, trusted computing adopts the whitelist mechanism to allow only authorized kernels, kernel modules, and applications to run on the system. The system will reject the execution of a program that is unknown or has been changed. - -## Kernel Integrity Measurement Architecture (IMA) - -### Overview - -#### IMA - -The integrity measurement architecture (IMA) is a subsystem in the kernel. The IMA can measure files accessed through **execve()**, **mmap()**, and **open()** systems based on user-defined policies. The measurement result can be used for **local or remote attestation**, or can be compared with an existing reference value to **control the access to files**. - -According to the Wiki definition, the function of the kernel integrity subsystem includes three parts: - -- Measure: Detects accidental or malicious modifications to files, either remotely or locally. -- Appraise: Measures a file and compares it with a reference value stored in the extended attribute to control the integrity of the local file. -- Audit: Writes the measurement result into system logs for auditing. - -Figuratively, IMA measurement is an observer that only records modification without interfering in it, and IMA appraisal is more like a strict security guard that rejects any unauthorized access to programs. 
- -#### EVM - -The extended verification module (EVM) is used to calculate a hash value based on the security extended attributes of a file in the system, including **security.ima** and **security.selinux**. Then this value is signed by the key stored in the TPM or other trusted environments. The signature value is stored in **security.evm** and cannot be tampered with. If the value is tampered with, the signature verification fails when the file is accessed again. - -In summary, the EVM is used to provide offline protection for security extended attributes by calculating the digest of the attributes and signing and storing them in **security.evm**. - -#### IMA Digest Lists - -IMA Digest Lists are an enhancement of the original kernel integrity protection mechanism provided by openEuler. It replaces the original IMA mechanism to protect file integrity. - -Digest lists are binary data files in a special format. Each digest list corresponds to an RPM package and records the hash values of protected files (executable files and dynamic library files) in the RPM package. - -After the startup parameters are correctly configured, the kernel maintains a hash table (invisible to the user space) and provides interfaces (**digest\_list\_data** and **digest\_list\_data\_del**) that update the hash table using **securityfs**. The digest lists are signed by the private key when they are built. When uploaded to the kernel through the interface, the digest lists need to be verified by the public key in the kernel. - -![](./figures/ima_digest_list_update.png) - -When IMA appraisal is enabled, each time an executable file or dynamic library file is accessed, the hook in the kernel is invoked to calculate the hash values of the file content and extended attributes and search in the kernel hash table. If the calculated hash values match the one in the table, the file is allowed to be executed. Otherwise, the access is denied. 
- -![1599719649188](./figures/ima_verification.png) - -The IMA Digest Lists extension provided by the openEuler kernel provides higher security, performance, and usability than the native IMA mechanism of the kernel community, facilitating the implementation of the integrity protection mechanism in the production environment. - -- **A complete trust chain for high security** - - The native IMA mechanism requires that the file extended attribute be generated and marked in advance on the live network. When the file is accessed, the file extended attribute is used as a reference value, resulting in an incomplete trust chain. - - The IMA Digest Lists extension saves the reference digest value of the file in the kernel space. During the construction, the reference digest value of the file is carried in the released RPM package in the form of a digest list. When the RPM package is installed, the digest list is imported and the signature is verified, ensuring that the reference value comes from the software publisher and implementing a complete trust chain. - -- **Superior performance** - - The trusted platform module (TPM) chip is a low-speed chip, making the PCR extension operation a performance bottleneck in the IMA measurement scenario. To shatter this bottleneck, the Digest Lists extension reduces unnecessary PCR extension operations while ensuring security, providing 65% higher performance than the native IMA mechanism. - - In the IMA appraisal scenario, the Digest Lists extension performs signature verification in the startup phase to prevent signature verification from being performed each time the file is accessed. This helps deliver a 20% higher file access performance in the operation phase than that in the native IMA appraisal scenario. 
- -- **Fast deployment and smooth upgrade** - - When the native IMA mechanism is deployed for the first time or the software package is updated, you need to switch to the fix mode, manually mark the extended attributes of the file, and then restart the system to enter the enforcing mode. In this way, the installed program can be accessed normally. - - The Digest Lists extension can be used immediately after the installation is completed. In addition, the RPM package can be directly installed or upgraded in the enforcing mode without restarting the system or manually marking the extended attributes of the file. This minimizes user perception during the operation, allowing for quick deployment and smooth upgrade on the live network. - -Note: The IMA Digest Lists extension advances the signature verification of the native IMA to the startup phase. This causes the assumption that the memory in the kernel space cannot be tampered with. As a result, the IMA depends on other security mechanisms (secure startup of kernel module and dynamic memory measurement) to protect the integrity of the kernel memory. - -However, either the native IMA mechanism of the community or the IMA Digest Lists extension is only a link in the trust chain of trusted computing, and cannot ensure the system security alone. Security construction is always a systematic project that builds in-depth defense. - -### Constraints - -1. The current IMA appraisal mode can only protect immutable files in the system, including executable files and dynamic library files. -2. The IMA provides integrity measurement at the application layer. The security of the IMA depends on the reliability of the previous links. -3. Currently, the IMA does not support the import of the third-party application digest lists. -4. The startup log may contain `Unable to open file: /etc/keys/x509_ima.der`. This error is reported from the open source community and does not affect the use of the IMA digest lists feature. -5. 
In the ARM version, audit errors may occur when the log mode is enabled for the IMA. This occurs because the modprobe loads the kernel module before the digest lists are imported, but does not affect the normal functions. - -### Application Scenario - -#### IMA Measurement - -The purpose of IMA measurement is to detect unexpected or malicious modifications to system files. The measurement result can be used for local or remote attestation. - -If a TPM chip exists in the system, the measurement result is extended to a specified PCR register of the TPM chip. Due to the unidirectional PCR extension and the hardware security of the TPM chip, a user cannot modify the extended measurement result, thereby ensuring authenticity of the measurement result. - -The file scope and triggering conditions of IMA measurement can be configured by the user using the IMA policy. - -By default, IMA is disabled. However, the system searches for the **ima-policy** policy file in the `/etc/ima/` path. If the file is found, the system measures the files in the system based on the policy during startup. If you do not want to manually compile the policy file, you can configure the `ima_policy=tcb` in the startup parameters using the default policy. For details about more policy parameters, see the section *IMA Startup Parameters* in *Appendix*. - -You can check the currently loaded IMA policy in the `/sys/kernel/security/ima/policy` file. 
The IMA measurement log is located in the `/sys/kernel/security/ima/ascii_runtime_measurements` file, as shown in the following figure: - -```shell -$ head /sys/kernel/security/ima/ascii_runtime_measurements -10 ddee6004dc3bd4ee300406cd93181c5a2187b59b ima-ng sha1:9797edf8d0eed36b1cf92547816051c8af4e45ee boot_aggregate -10 180ecafba6fadbece09b057bcd0d55d39f1a8a52 ima-ng sha1:db82919bf7d1849ae9aba01e28e9be012823cf3a /init -10 ac792e08a7cf8de7656003125c7276968d84ea65 ima-ng sha1:f778e2082b08d21bbc59898f4775a75e8f2af4db /bin/bash -10 0a0d9258c151356204aea2498bbca4be34d6bb05 ima-ng sha1:b0ab2e7ebd22c4d17d975de0d881f52dc14359a7 /lib64/ld-2.27.so -10 0d6b1d90350778d58f1302d00e59493e11bc0011 ima-ng sha1:ce8204c948b9fe3ae67b94625ad620420c1dc838 /etc/ld.so.cache -10 d69ac2c1d60d28b2da07c7f0cbd49e31e9cca277 ima-ng sha1:8526466068709356630490ff5196c95a186092b8 /lib64/libreadline.so.7.0 -10 ef3212c12d1fbb94de9534b0bbd9f0c8ea50a77b ima-ng sha1:f80ba92b8a6e390a80a7a3deef8eae921fc8ca4e /lib64/libc-2.27.so -10 f805861177a99c61eabebe21003b3c831ccf288b ima-ng sha1:261a3cd5863de3f2421662ba5b455df09d941168 /lib64/libncurses.so.6.1 -10 52f680881893b28e6f0ce2b132d723a885333500 ima-ng sha1:b953a3fa385e64dfe9927de94c33318d3de56260 /lib64/libnss_files-2.27.so -10 4da8ce3c51a7814d4e38be55a2a990a5ceec8b27 ima-ng sha1:99a9c095c7928ecca8c3a4bc44b06246fc5f49de /etc/passwd -``` - -From left to right, the content of each record indicates: - -1. PCR: PCR register for extending measurement results (The default value is 10. This register is valid only when the TPM chip is installed in the system.) -2. Template hash value: hash value that is finally used for extension, combining the file content hash and the length and value of the file path -3. Template: template of the extended measurement value, for example, **ima-ng** -4. File content hash value: hash value of the measured file content -5. 
File path: path of the measured file - -#### IMA Appraisal - -The purpose of IMA appraisal is to control access to local files by comparing the reference value with the standard reference value. - -IMA uses the security extension attributes **security.ima** and **security.evm** to store the reference values of file integrity measurement. - -- **security.ima**: stores the hash value of the file content -- **security.evm**: stores the hash value signature of a file extended attribute - -When a protected file is accessed, the hook in the kernel is triggered to verify the integrity of the extended attributes and content of the file. - -1. Use the public key in the kernel keyring to verify the signature value in the extended attribute of the **security.evm** file, and compare this signature value with the hash value of the extended attribute of the current file. If they match, the extended attribute of the file is complete (including **security.ima**). -2. When the extended attribute of the file is complete, the system compares the extended attribute of the file **security.ima** with the digest value of the current file content. If they match, the system allows for the access to the file. - -Likewise, the file scope and trigger conditions for IMA appraisal can be configured by users using IMA policies. 
- -#### IMA Digest Lists - -Currently, the IMA Digest Lists extension supports the following three combinations of startup parameters: - -- IMA measurement mode: - - ```shell - ima_policy=exec_tcb ima_digest_list_pcr=11 - ``` - -- IMA appraisal log mode + IMA measurement mode: - - ```shell - ima_template=ima-sig ima_policy="exec_tcb|appraise_exec_tcb|appraise_exec_immutable" initramtmpfs ima_hash=sha256 ima_appraise=log evm=allow_metadata_writes evm=x509 ima_digest_list_pcr=11 ima_appraise_digest_list=digest - ``` - -- IMA appraisal enforcing mode + IMA measurement mode: - - ```shell - ima_template=ima-sig ima_policy="exec_tcb|appraise_exec_tcb|appraise_exec_immutable" initramtmpfs ima_hash=sha256 ima_appraise=enforce-evm evm=allow_metadata_writes evm=x509 ima_digest_list_pcr=11 ima_appraise_digest_list=digest - ``` - -### Procedure - -#### Initial Deployment in the Native IMA Scenario - -When the system is started for the first time, you need to configure the following startup parameters: - -```shell -ima_appraise=fix ima_policy=appraise_tcb -``` - -In the `fix` mode, the system can be started when no reference value is available. `appraise_tcb` corresponds to an IMA policy. For details, see *IMA Startup Parameters* in the *Appendix*. - -Next, you need to access all the files that need to be verified to add IMA extended attributes to them: - -```shell -time find / -fstype ext4 -type f -uid 0 -exec dd if='{}' of=/dev/null count=0 status=none \; -``` - -This process takes some time. After the command is executed, you can see the marked reference value in the extended attributes of the protected file. - -```shell -$ getfattr -m - -d /sbin/init -# file: sbin/init -security.ima=0sAXr7Qmun5mkGDS286oZxCpdGEuKT -security.selinux="system_u:object_r:init_exec_t" -``` - -Configure the following startup parameters and restart the system: - -```shell -ima_appraise=enforce ima_policy=appraise_tcb -``` - -#### Initial Deployment in the Digest Lists Scenario - -1. 
Set kernel parameters to enter the log mode. - - Add the following parameters to edit the `/boot/efi/EFI/openEuler/grub.cfg` file: - - ```shell - ima_template=ima-sig ima_policy="exec_tcb|appraise_exec_tcb|appraise_exec_immutable" initramtmpfs ima_hash=sha256 ima_appraise=log evm=allow_metadata_writes evm=x509 ima_digest_list_pcr=11 ima_appraise_digest_list=digest - ``` - - Run the `reboot` command to restart the system and enter the log mode. In this mode, integrity check has been enabled, but the system can be started even if the check fails. - -2. Install the dependency package. - - Run the **yum** command to install **digest-list-tools** and **ima-evm-utils**. Ensure that the versions are not earlier than the following: - - ```shell - $ yum install digest-list-tools ima-evm-utils - $ rpm -qa | grep digest-list-tools - digest-list-tools-0.3.93-1.oe1.x86_64 - $ rpm -qa | grep ima-evm-utils - ima-evm-utils-1.2.1-9.oe1.x86_64 - ``` - -3. If the **plymouth** package is installed, you need to add `-a` to the end of the **cp** command in line 147 in the `/usr/libexec/plymouth/plymouth-populate-initrd` script file: - - ```shell - ... - ddebug "Installing $_src" - cp -a --sparse=always -pfL "$PLYMOUTH_SYSROOT$_src" "${initdir}/$target" - } - ``` - -4. Run `dracut` to generate **initrd** again: - - ```shell - dracut -f -e xattr - ``` - - Edit the `/boot/efi/EFI/openEuler/grub.cfg` file by changing **ima\_appraise=log** to **ima\_appraise=enforce-evm**. - - ```shell - ima_template=ima-sig ima_policy="exec_tcb|appraise_exec_tcb|appraise_exec_immutable" initramtmpfs ima_hash=sha256 ima_appraise=enforce-evm evm=allow_metadata_writes evm=x509 ima_digest_list_pcr=11 ima_appraise_digest_list=digest - ``` - - Run the **reboot** command to complete the initial deployment. 
- -#### Building Digest Lists on OBS - -Open Build Service (OBS) is a compilation system that was first used for building software packages in openSUSE and supports distributed compilation of multiple architectures. - -Before building a digest list, ensure that your project contains the following RPM packages from openEuler: - -- digest-list-tools -- pesign-obs-integration -- selinux-policy -- rpm -- openEuler-rpm-config - -Add **Project Config** in the deliverable project: - -```shell -Preinstall: pesign-obs-integration digest-list-tools selinux-policy-targeted -Macros: -%__brp_digest_list /usr/lib/rpm/openEuler/brp-digest-list %{buildroot} -:Macros -``` - -- The following content is added to **Preinstall**: **digest-list-tools** for generating the digest list; **pesign-obs-integration** for generating the digest list signature; **selinux-policy-targeted**, ensuring that the SELinux label in the environment is correct when the digest list is generated. -- Define the macro **%\_\_brp\_digest\_list** in Macros. The RPM runs this macro to generate a digest list for the compiled binary file in the build phase. This macro can be used as a switch to control whether the digest list is generated in the project. - -After the configuration is completed, OBS automatically performs full build. In normal cases, the following two files are added to the software package: - -- **/etc/ima/digest\_lists/0-metadata\_list-compact-\[package name]-\[version number]** -- **/etc/ima/digest\_lists.tlv/0-metadata\_list-compact\_tlv-\[package name]-\[version number]** - -#### Building Digest Lists on Koji - -Koji is a compilation system of the Fedora community. The openEuler community will support Koji in the future. - -### FAQs - -1. Why does the system fail to be started, or commands fail to be executed, or services are abnormal after the system is started in enforcing mode? - - In enforcing mode, IMA controls file access. 
If the content or extended attributes of a file to be accessed are incomplete, the access will be denied. If key commands that affect system startup cannot be executed, the system cannot be started. - - Check whether the following problems exist: - - - **Check whether the digest list is added to initrd.** - - Check whether the **dracut** command is executed to add the digest list to the kernel during the initial deployment. If the digest list is not added to **initrd**, the digest list cannot be imported during startup. As a result, the startup fails. - - - **Check whether the official RPM package is used.** - - If a non-official openEuler RPM package is used, the RPM package may not carry the digest list, or the private key for signing the digest list does not match the public key for signature verification in the kernel. As a result, the digest list is not imported to the kernel. - - If the cause is not clear, enter the log mode and find the cause from the error log: - - ```shell - dmesg | grep appraise - ``` - -2. Why access control is not performed on system files in enforcing mode? - - When the system does not perform access control on the file as expected, check whether the IMA policy in the startup parameters is correctly configured: - - ```shell - $ cat /proc/cmdline - ...ima_policy=exec_tcb|appraise_exec_tcb|appraise_exec_immutable... - ``` - - Run the following command to check whether the IMA policy in the current kernel has taken effect: - - ```shell - cat /sys/kernel/security/ima/policy - ``` - - If the policy file is empty, it indicates that the policy fails to be set. In this case, the system does not perform access control. - -3. After the initial deployment is completed, do I need to manually run the **dracut** command to generate **initrd** after installing, upgrading, or uninstalling the software package? - - No. 
-The **digest\_list.so** plug-in provided by the RPM package automatically updates the digest list at the RPM package granularity, so users do not need to be aware of the digest list.
-
-### Appendixes
-
-#### Description of the IMA securityfs Interface
-
-The native IMA provides the following **securityfs** interfaces:
-
-> Note: The following interface paths are in the `/sys/kernel/security/` directory.
-
-| Path | Permission | Description |
-| ------------------------------ | ---------- | ------------------------------------------------------------ |
-| ima/policy | 600 | IMA policy interface |
-| ima/ascii_runtime_measurement | 440 | IMA measurement results in ASCII format |
-| ima/binary_runtime_measurement | 440 | IMA measurement results in binary format |
-| ima/runtime_measurement_count | 440 | Measurement result statistics |
-| ima/violations | 440 | Number of IMA measurement result conflicts |
-| evm | 660 | EVM mode, that is, the mode for verifying the integrity of extended attributes of files |
-
-The values of `/sys/kernel/security/evm` are as follows:
-
-- 0: EVM uninitialized.
-- 1: Uses HMAC (symmetric encryption) to verify the integrity of extended attributes.
-- 2: Uses the public key signature (asymmetric encryption) to verify the integrity of extended attributes.
-- 6: Disables the integrity check of extended attributes (this mode is used by openEuler).
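The mode values listed above can be decoded with a small helper. This is a minimal illustrative sketch, not part of any openEuler tooling; the function name `evm_mode_desc` is our own.

```shell
#!/bin/sh
# Illustrative only: translate the value read from /sys/kernel/security/evm
# into the human-readable description given in the table above.
evm_mode_desc() {
    case "$1" in
        0) echo "EVM uninitialized" ;;
        1) echo "HMAC (symmetric) verification of extended attributes" ;;
        2) echo "Public key signature (asymmetric) verification of extended attributes" ;;
        6) echo "Extended attribute integrity check disabled" ;;
        *) echo "Unknown EVM mode: $1" ;;
    esac
}

# Typical use (requires root and securityfs mounted):
# evm_mode_desc "$(cat /sys/kernel/security/evm)"
```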
-
-The additional **securityfs** interfaces provided by the IMA Digest Lists extension are as follows:
-
-| Path | Permission | Description |
-| ------------------------ | ---------- | ---------------------------------------------------------- |
-| ima/digests_count | 440 | Total number of digests (IMA+EVM) in the system hash table |
-| ima/digest_list_data | 200 | Interface for adding digest lists |
-| ima/digest_list_data_del | 200 | Interface for deleting digest lists |
-
-#### IMA Policy Syntax
-
-Each IMA policy statement must start with an **action** represented by the keyword action and be followed by a **filtering condition**:
-
-- **action**: indicates the action of a policy. Only one **action** can be selected for a policy.
-
-  > Note: You can omit the word action and directly write **dont\_measure** instead of **action=dont\_measure**.
-
-- **func**: indicates the type of the file to be measured or appraised. It is often used together with **mask**. Only one **func** can be selected for a policy.
-
-  - **FILE\_CHECK** can be used only with **MAY\_EXEC**, **MAY\_WRITE**, and **MAY\_READ**.
-  - **MODULE\_CHECK**, **MMAP\_CHECK**, and **BPRM\_CHECK** can be used only with **MAY\_EXEC**.
-  - A combination without the preceding matching relationships does not take effect.
-
-- **mask**: indicates the operation upon which files will be measured or appraised. Only one **mask** can be selected for a policy.
-
-- **fsmagic**: indicates the hexadecimal magic number of the file system type, which is defined in the `/usr/include/linux/magic.h` file.
-
-  > Note: By default, all file systems are measured unless you use **dont\_measure/dont\_appraise** to mark a file system not to be measured.
-
-- **fsuuid**: indicates the UUID of a system device. The value is a hexadecimal string of 16 characters.
-
-- **objtype**: indicates the file type. Only one file type can be selected for a policy.
-
-  > Note: **objtype** has a finer granularity than **func**.
-  > For example, **obj\_type=nova\_log\_t** indicates the nova log file.
-
-- **uid**: indicates the user (represented by the user ID) who performs operations on the file. Only one **uid** can be selected for a policy.
-
-- **fowner**: indicates the owner (represented by the user ID) of the file. Only one **fowner** can be selected for a policy.
-
-The values and descriptions of the keywords are as follows:
-
-| Keyword | Value | Description |
-| ------------- | ------------------ | ------------------------------------------------------------ |
-| action | measure | Enables IMA measurement |
-| | dont_measure | Disables IMA measurement |
-| | appraise | Enables IMA appraisal |
-| | dont_appraise | Disables IMA appraisal |
-| | audit | Enables audit |
-| func | FILE_CHECK | File to be opened |
-| | MODULE_CHECK | Kernel module file to be loaded |
-| | MMAP_CHECK | Dynamic library file to be mapped to the memory space of the process |
-| | BPRM_CHECK | File to be executed (excluding script files opened by programs such as `/bin/bash`) |
-| | POLICY_CHECK | File to be loaded as a supplement to the IMA policy |
-| | FIRMWARE_CHECK | Firmware to be loaded into memory |
-| | DIGEST_LIST_CHECK | Digest list file to be loaded into the kernel |
-| | KEXEC_KERNEL_CHECK | kexec kernel to be switched to |
-| mask | MAY_EXEC | Executes a file |
-| | MAY_WRITE | Writes data to a file. This operation is not recommended because it is restricted by open-source mechanisms such as echo and vim (which essentially modify a file by creating a temporary file and then renaming it); the IMA measurement of **MAY\_WRITE** is not triggered each time the file is modified. |
-| | MAY_READ | Reads a file |
-| | MAY_APPEND | Extends file attributes |
-| fsmagic | fsmagic=xxx | Hexadecimal magic number of the file system type |
-| fsuuid | fsuuid=xxx | UUID of a system device. The value is a hexadecimal string of 16 characters. |
-| fowner | fowner=xxx | User ID of the file owner |
-| uid | uid=xxx | ID of the user who operates the file |
-| obj_type | obj_type=xxx_t | File type (based on the SELinux label) |
-| pcr | pcr= | Selects the PCR used to extend the measurement values in the TPM. The default value is 10. |
-| appraise_type | imasig | Signature-based IMA appraisal |
-| | meta_immutable | Appraises the extended attributes of the file based on signatures (supporting the digest list) |
-
-> Note: **PATH\_CHECK** is equivalent to **FILE\_CHECK**, and **FILE\_MMAP** is equivalent to **MMAP\_CHECK**. They are not mentioned in this table.
-
-#### IMA Native Startup Parameters
-
-The following table lists the kernel startup parameters of the native IMA.
-
-| Parameter | Value | Description |
-| ---------------- | ------------ | ------------------------------------------------------------ |
-| ima_appraise | off | Disables the IMA appraisal mode. The integrity check is not performed when a file is accessed, and no new reference value is generated for the file. |
-| | enforce | Enables the IMA appraisal enforcing mode to perform the integrity check when a file is accessed. That is, the file digest value is calculated and compared with the reference value. If the comparison fails, access to the file is denied. In this mode, IMA generates a new reference value for a new file. |
-| | fix | Enables the IMA repair mode. In this mode, the reference value of a protected file can be updated. |
-| | log | Enables the IMA appraisal log mode to perform the integrity check when a file is accessed. However, commands can be executed even if the check fails; only logs are recorded. |
-| ima_policy | tcb | Measures all file execution, dynamic library mapping, kernel module import, and device driver loading. File reads by the root user are also measured. |
-| | appraise_tcb | Appraises all files whose owner is the root user. |
-| | secure_boot | Appraises kernel module import, hardware driver loading, kexec kernel switchover, and IMA policies, provided that these files have IMA signatures. |
-| ima_tcb | None | Equivalent to **ima\_policy=tcb**. |
-| ima_appraise_tcb | None | Equivalent to **ima\_policy=appraise\_tcb**. |
-| ima_hash | sha1/md5/... | IMA digest algorithm. The default value is sha1. |
-| ima_template | ima | IMA measurement extension template |
-| | ima-ng | IMA measurement extension template |
-| | ima-sig | IMA measurement extension template |
-| integrity_audit | 0 | Basic integrity audit information (default) |
-| | 1 | Additional integrity audit information |
-
-> Note: The **ima\_policy** parameter can specify multiple values at the same time, for example, **ima\_policy=tcb\|appraise\_tcb**. After the system starts, the IMA policy of the system is the union of the policies of the specified values.
-
-The IMA policy for the `ima_policy=tcb` startup parameter is as follows:
-
-```console
-# PROC_SUPER_MAGIC = 0x9fa0
-dont_measure fsmagic=0x9fa0
-# SYSFS_MAGIC = 0x62656572
-dont_measure fsmagic=0x62656572
-# DEBUGFS_MAGIC = 0x64626720
-dont_measure fsmagic=0x64626720
-# TMPFS_MAGIC = 0x01021994
-dont_measure fsmagic=0x1021994
-# DEVPTS_SUPER_MAGIC=0x1cd1
-dont_measure fsmagic=0x1cd1
-# BINFMTFS_MAGIC=0x42494e4d
-dont_measure fsmagic=0x42494e4d
-# SECURITYFS_MAGIC=0x73636673
-dont_measure fsmagic=0x73636673
-# SELINUX_MAGIC=0xf97cff8c
-dont_measure fsmagic=0xf97cff8c
-# SMACK_MAGIC=0x43415d53
-dont_measure fsmagic=0x43415d53
-# CGROUP_SUPER_MAGIC=0x27e0eb
-dont_measure fsmagic=0x27e0eb
-# CGROUP2_SUPER_MAGIC=0x63677270
-dont_measure fsmagic=0x63677270
-# NSFS_MAGIC=0x6e736673
-dont_measure fsmagic=0x6e736673
-measure func=MMAP_CHECK mask=MAY_EXEC
-measure func=BPRM_CHECK mask=MAY_EXEC
-measure func=FILE_CHECK mask=MAY_READ uid=0
-measure func=MODULE_CHECK
-measure func=FIRMWARE_CHECK
-```
-
-The IMA policy for the `ima_policy=appraise_tcb` startup parameter is as follows:
-
-```console
-# PROC_SUPER_MAGIC = 0x9fa0
-dont_appraise fsmagic=0x9fa0
-# SYSFS_MAGIC = 0x62656572
-dont_appraise fsmagic=0x62656572
-# DEBUGFS_MAGIC = 0x64626720
-dont_appraise fsmagic=0x64626720
-# TMPFS_MAGIC = 0x01021994
-dont_appraise fsmagic=0x1021994
-# RAMFS_MAGIC
-dont_appraise fsmagic=0x858458f6
-# DEVPTS_SUPER_MAGIC=0x1cd1
-dont_appraise fsmagic=0x1cd1
-# BINFMTFS_MAGIC=0x42494e4d
-dont_appraise fsmagic=0x42494e4d
-# SECURITYFS_MAGIC=0x73636673
-dont_appraise fsmagic=0x73636673
-# SELINUX_MAGIC=0xf97cff8c
-dont_appraise fsmagic=0xf97cff8c
-# SMACK_MAGIC=0x43415d53
-dont_appraise fsmagic=0x43415d53
-# NSFS_MAGIC=0x6e736673
-dont_appraise fsmagic=0x6e736673
-# CGROUP_SUPER_MAGIC=0x27e0eb
-dont_appraise fsmagic=0x27e0eb
-# CGROUP2_SUPER_MAGIC=0x63677270
-dont_appraise fsmagic=0x63677270
-appraise fowner=0
-```
-
-The IMA policy for the `ima_policy=secure_boot` startup parameter is as follows:
-
-```console
-appraise func=MODULE_CHECK appraise_type=imasig
-appraise func=FIRMWARE_CHECK appraise_type=imasig
-appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig
-appraise func=POLICY_CHECK appraise_type=imasig
-```
-
-#### IMA Digest List Startup Parameters
-
-The kernel startup parameters added by the IMA digest list feature are as follows:
-
-| Parameter | Value | Description |
-| ------------------------ | ----------------------- | ------------------------------------------------------------ |
-| integrity | 0 | Disables the IMA feature (default) |
-| | 1 | Enables the IMA feature |
-| ima_appraise | off | Disables the IMA appraisal mode |
-| | enforce-evm | Enables the IMA appraisal enforcing mode to perform the integrity check when a file is accessed and control access. |
-| ima_appraise_digest_list | digest | When EVM is disabled, the digest list is used for IMA appraisal. The digest list protects both the content and extended attributes of the file. |
-| | digest-nometadata | If the EVM digest value does not exist, the integrity check is performed based only on the IMA digest value (the extended attributes of the file are not protected). |
-| evm | fix | Allows any modification to the extended attributes (even if the modification causes the integrity verification of the extended attributes to fail). |
-| | ignore | Allows modification of the extended attributes only when they do not exist or are invalid. |
-| ima_policy | exec_tcb | IMA measurement policy. For details, see the following policy description. |
-| | appraise_exec_tcb | IMA appraisal policy. For details, see the following policy description. |
-| | appraise_exec_immutable | IMA appraisal policy. For details, see the following policy description. |
-| ima_digest_list_pcr | 11 | Uses PCR 11 instead of PCR 10, and uses only the digest list for measurement. |
-| | +11 | The PCR 10 measurement is retained. When the TPM chip is available, the measurement result is written to the TPM chip. |
-| initramtmpfs | None | Adds support for **tmpfs**. |
-
-The IMA policy for the `ima_policy=exec_tcb` startup parameter is as follows:
-
-```console
-dont_measure fsmagic=0x9fa0
-dont_measure fsmagic=0x62656572
-dont_measure fsmagic=0x64626720
-dont_measure fsmagic=0x1cd1
-dont_measure fsmagic=0x42494e4d
-dont_measure fsmagic=0x73636673
-dont_measure fsmagic=0xf97cff8c
-dont_measure fsmagic=0x43415d53
-dont_measure fsmagic=0x27e0eb
-dont_measure fsmagic=0x63677270
-dont_measure fsmagic=0x6e736673
-measure func=MMAP_CHECK mask=MAY_EXEC
-measure func=BPRM_CHECK mask=MAY_EXEC
-measure func=MODULE_CHECK
-measure func=FIRMWARE_CHECK
-measure func=POLICY_CHECK
-measure func=DIGEST_LIST_CHECK
-measure parser
-```
-
-The IMA policy for the `ima_policy=appraise_exec_tcb` startup parameter is as follows:
-
-```console
-appraise func=MODULE_CHECK appraise_type=imasig
-appraise func=FIRMWARE_CHECK appraise_type=imasig
-appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig
-appraise func=POLICY_CHECK appraise_type=imasig
-appraise func=DIGEST_LIST_CHECK appraise_type=imasig
-dont_appraise fsmagic=0x9fa0
-dont_appraise fsmagic=0x62656572
-dont_appraise fsmagic=0x64626720
-dont_appraise fsmagic=0x858458f6
-dont_appraise fsmagic=0x1cd1
-dont_appraise fsmagic=0x42494e4d
-dont_appraise fsmagic=0x73636673
-dont_appraise fsmagic=0xf97cff8c
-dont_appraise fsmagic=0x43415d53
-dont_appraise fsmagic=0x6e736673
-dont_appraise fsmagic=0x27e0eb
-dont_appraise fsmagic=0x63677270
-```
-
-The IMA policy for the `ima_policy=appraise_exec_immutable` startup parameter is as follows:
-
-```console
-appraise func=BPRM_CHECK appraise_type=imasig appraise_type=meta_immutable
-appraise func=MMAP_CHECK
-appraise parser appraise_type=imasig
-```
-
-#### IMA Kernel Compilation Options
-
-The native IMA provides the following compilation options:
-
-| Compilation Option | Description |
-| -------------------------------- | ------------------------------------------------------- |
-| CONFIG_INTEGRITY | IMA/EVM compilation switch |
-| CONFIG_INTEGRITY_SIGNATURE | Enables IMA signature verification |
-| CONFIG_INTEGRITY_ASYMMETRIC_KEYS | Enables IMA asymmetric signature verification |
-| CONFIG_INTEGRITY_TRUSTED_KEYRING | Enables the IMA/EVM keyring |
-| CONFIG_INTEGRITY_AUDIT | Compiles the IMA audit module |
-| CONFIG_IMA | IMA compilation switch |
-| CONFIG_IMA_WRITE_POLICY | Allows updating the IMA policy at runtime |
-| CONFIG_IMA_MEASURE_PCR_IDX | Allows specifying the PCR number for IMA measurement |
-| CONFIG_IMA_LSM_RULES | Allows configuring LSM rules |
-| CONFIG_IMA_APPRAISE | IMA appraisal compilation switch |
-| CONFIG_IMA_APPRAISE_BOOTPARAM | Enables IMA appraisal startup parameters |
-| CONFIG_EVM | EVM compilation switch |
-
-The additional compilation options provided by the IMA Digest Lists extension are as follows:
-
-| Compilation Option | Description |
-| ------------------ | ----------------------------------- |
-| CONFIG_DIGEST_LIST | Enables the IMA Digest List feature |
-
-#### IMA Performance Reference Data
-
-The following figure compares the performance when IMA is disabled, native IMA is enabled, and the IMA digest list is enabled.
-
-![img](./figures/ima_performance.png)
-
-#### Impact of IMA on the kdump Service
-
-When the IMA enforce mode is enabled and kexec system call verification is configured in the policy, kdump may fail to start.
-
-```console
-appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig
-```
-
-Cause of the kdump startup failure: After IMA is enabled, file integrity needs to be verified. Therefore, the **kexec_file_load** system call is restricted when kdump loads kernel image files. You can modify **KDUMP_FILE_LOAD** in the **/etc/sysconfig/kdump** configuration file to enable the **kexec_file_load** system call.
-
-```console
-KDUMP_FILE_LOAD="on"
-```
-
-At the same time, the **kexec_file_load** system call itself also verifies the signature of the file.
-Therefore, the loaded kernel image file must contain the correct secure boot signature, and the current kernel must contain the corresponding verification certificate.
-
-#### IMA Root Certificate Configuration
-
-Currently, openEuler uses the RPM key to sign the IMA digest list. To ensure that the IMA function is available out of the box, openEuler imports the RPM root certificate (PGP certificate) to the kernel by default during kernel compilation. Currently, there are two PGP certificates: the OBS certificate used in earlier versions and the openEuler certificate introduced with the switchover in openEuler 22.03 LTS SP1.
-
-```shell
-$ cat /proc/keys | grep PGP
-1909b4ad I------ 1 perm 1f030000 0 0 asymmetri private OBS b25e7f66: PGP.rsa b25e7f66 []
-2f10cd36 I------ 1 perm 1f030000 0 0 asymmetri openeuler fb37bc6f: PGP.rsa fb37bc6f []
-```
-
-The current kernel does not support the import of PGP sub-public keys, and the new openEuler certificate uses a sub-key signature. Therefore, the openEuler kernel preprocesses the certificate before compilation, extracts the sub-public key, and imports it to the kernel. For details, see the [process_pgp_certs.sh](https://gitee.com/src-openeuler/kernel/blob/openEuler-22.03-LTS-SP1/process_pgp_certs.sh) script file in the code repository of the kernel software package.
-
-If you do not use the IMA digest list function or use other keys for signing and verification, you can remove the related code and configure the kernel root certificate yourself.
-
-## Dynamic Integrity Measurement (DIM)
+# Dynamic Integrity Measurement (DIM)
 
 This section describes the DIM feature and its usage.
 
-### Context
+## Context
 
 With the development of the IT industry, information systems are facing an increasing number of security risks. Information systems run a large amount of software, some of which is inevitably vulnerable.
 Once exploited by attackers, these vulnerabilities could severely damage system services, resulting in data leakage and service unavailability.
 
 Most software attacks are accompanied by integrity damage, such as malicious process execution, configuration file tampering, and backdoor implantation. Therefore, protection technologies are proposed in the industry. Key data is measured and verified during system startup to ensure that the system can run properly. However, popular integrity protection technologies (such as secure boot and file integrity measurement) cannot protect memory data during process running. If an attacker somehow modifies the instructions of a process, the process may be hijacked or implanted with a backdoor, which is highly destructive and covert. To defend against such attacks, the DIM technology is proposed to measure and protect key data in the memory of a running process.
 
-### Terminology
+## Terminology
 
 Static baseline: baseline measurement data generated by parsing the binary file of the measurement target
 
@@ -672,11 +18,11 @@ Measurement policy: configuration information for measuring the target
 
 Measurement log: list of measurement information, including the measurement targets and measurement results
 
-### Description
+## Description
 
 The DIM feature measures key data (such as code sections and data sections) in the memory during program running and compares the measurement result with the baseline value to determine whether the memory data has been tampered with. In this way, DIM can detect attacks and take countermeasures.
 
-#### Function Scope
+### Function Scope
 
 - Currently, DIM supports only the AArch64 and x86 architectures.
 - Currently, DIM supports measurement of the following key memory data:
@@ -686,7 +32,7 @@ The DIM feature measures key data (such as code sections and data sections) in t
 - DIM can work with the following hardware:
   - The measurement result can be extended to the Platform Configuration Register (PCR) of Trusted Platform Module (TPM) 2.0 to connect to the remote attestation service.
 
-#### Technical Limitations
+### Technical Limitations
 
 - For user-mode processes, only mapped code sections of files can be measured. Anonymous code sections cannot be measured.
 - Kernel hot patches cannot be measured.
@@ -704,7 +50,7 @@ The DIM feature measures key data (such as code sections and data sections) in t
 > - Hashing is performed during DIM running, which consumes CPU resources. The impact depends on the size of the data to be measured.
 > - Resources will be locked and semaphores need to be obtained during DIM running, which may cause other concurrent processes to wait.
 
-#### Specification Constraints
+### Specification Constraints
 
 | Item | Value |
 | ------------------------------------------------------------ | ---- |
@@ -712,7 +58,7 @@ The DIM feature measures key data (such as code sections and data sections) in t
 | Maximum number of tampering measurement logs recorded for a measurement target during multiple measurement periods after a dynamic baseline is established | 10 |
 | Maximum number of measurement policies that can be stored in **/etc/dim/policy** | 10,000 |
 
-#### Architecture Description
+### Architecture Description
 
 DIM includes the dim_tools and dim software packages, which contain the following components:
@@ -726,16 +72,16 @@ The following figure shows the overall architecture:
 
 ![](./figures/dim_architecture.jpg)
 
-#### Key Procedures
+### Key Procedures
 
 Both the dim_core and dim_monitor modules provide the memory data measurement function, including the following core processes:
 
 - Dynamic baseline process: The dim_core module reads and parses the policy and
 static baseline file, measures the code section of the target process, stores the measurement result as a dynamic baseline in the memory, compares the dynamic baseline data with the static baseline data, and records the comparison result in measurement logs. The dim_monitor module measures the code sections and key data of the dim_core module, uses the data as the dynamic baseline, and records measurement logs.
 - Dynamic measurement process: The dim_core and dim_monitor modules measure the target and compare the measurement result with the dynamic baseline. If the measurement result is inconsistent with the dynamic baseline, the dim_core and dim_monitor modules record the result in measurement logs.
 
-#### Interfaces
+### Interfaces
 
-##### Interface Files
+#### Interface Files
 
 | Path | Description |
 | ------------------------------- | ------------------------------------------------------------ |
@@ -746,7 +92,7 @@ Both the dim_core and dim_monitor modules provide the memory data measurement fu
 | /etc/keys/x509_dim.der | Certificate file, which is used to verify the signatures of the policy file and static baseline file when the signature verification function is enabled |
 | /sys/kernel/security/dim | DIM file system directory, which is generated after the DIM kernel module is loaded and provides kernel interfaces for operating the DIM function |
 
-##### File Formats
+#### File Formats
 
 1. Measurement policy file format
@@ -821,25 +167,25 @@ Both the dim_core and dim_monitor modules provide the memory data measurement fu
 
    **Example:**
 
-   1. The code section of the bash process is measured. The measurement result is consistent with the static baseline.
+   1. The code section of the bash process is measured. The measurement result is consistent with the static baseline.
```text 12 0f384a6d24e121daf06532f808df624d5ffc061e20166976e89a7bb24158eb87 sha256:db032449f9e20ba37e0ec4a506d664f24f496bce95f2ed972419397951a3792e /usr/bin.bash [static baseline] ``` - 2. The code section of the bash process is measured. The measurement result is inconsistent with the static baseline. + 2. The code section of the bash process is measured. The measurement result is inconsistent with the static baseline. ```text 12 0f384a6d24e121daf06532f808df624d5ffc061e20166976e89a7bb24158eb87 sha256:db032449f9e20ba37e0ec4a506d664f24f496bce95f2ed972419397951a3792e /usr/bin.bash [tampered] ``` - 3. The code section of the ext4 kernel module is measured. No static baseline is found. + 3. The code section of the ext4 kernel module is measured. No static baseline is found. ```text 12 0f384a6d24e121daf06532f808df624d5ffc061e20166976e89a7bb24158eb87 sha256:db032449f9e20ba37e0ec4a506d664f24f496bce95f2ed972419397951a3792e ext4 [no static baseline] ``` - 4. dim_monitor measures dim_core and records the measurement result of the baseline. + 4. dim_monitor measures dim_core and records the measurement result of the baseline. ```text 12 660d594ba050c3ec9a7cdc8cf226c5213c1e6eec50ba3ff51ff76e4273b3335a sha256:bdab94a05cc9f3ad36d29ebbd14aba8f6fd87c22ae580670d18154b684de366c dim_core.text [dynamic baseline] @@ -850,7 +196,7 @@ Both the dim_core and dim_monitor modules provide the memory data measurement fu The files are in the common format. For details, see [Enabling Signature Verification](#enabling-signature-verification). -##### Kernel Module Parameters +#### Kernel Module Parameters 1. dim_core parameters @@ -885,7 +231,7 @@ The files are in the common format. For details, see [Enabling Signature Verific modprobe dim_monitor measure_log_capacity=10000 measure_hash=sm3 ``` -##### Kernel Interfaces +#### Kernel Interfaces 1. dim_core interface @@ -923,13 +269,13 @@ The files are in the common format. 
For details, see [Enabling Signature Verific - error: An error occurs during dynamic baseline establishment or dynamic measurement. You need to rectify the error and trigger dynamic baseline establishment or dynamic measurement again. - protected: The dynamic baseline has been established and is protected. -##### User-Mode Tool Interface +#### User-Mode Tool Interface See for the details of the `dim_gen_baseline` CLI interface. -### Usage +## Usage -#### Installing and Uninstalling DIM +### Installing and Uninstalling DIM **Prerequisites**: @@ -973,7 +319,7 @@ Unload the KO files before uninstalling the RPM package. > dim_monitor must be loaded after dim_core and removed before dim_core. > You can also install DIM from source. For details, see . -#### Measuring Code Sections of User-Mode Processes +### Measuring Code Sections of User-Mode Processes **Prerequisites**: @@ -1035,7 +381,7 @@ Trigger the measurement again and query the measurement logs. You can see a meas 0 08a2f6f2922ad3d1cf376ae05cf0cc507c2f5a1c605adf445506bc84826531d6 sha256:855ec9a890ff22034f7e13b78c2089e28e8d217491665b39203b50ab47b111c8 /opt/dim/demo/dim_test_demo [tampered] ``` -#### Measuring Code Sections of Kernel Modules +### Measuring Code Sections of Kernel Modules **Prerequisites**: @@ -1107,7 +453,7 @@ Trigger the measurement again and query the measurement logs. You can see a meas 0 6205915fe63a7042788c919d4f0ff04cc5170647d7053a1fe67f6c0943cd1f40 sha256:4cb77370787323140cb572a789703be1a4168359716a01bf745aa05de68a14e3 dim_test_module [tampered] ``` -#### Measuring Code Sections of the Kernel +### Measuring Code Sections of the Kernel **Prerequisites**: @@ -1153,7 +499,7 @@ The preceding measurement log indicates that the kernel is successfully measured After the measurement is complete, you can perform **Step 4** to query the measurement logs. If the measurement result is consistent with the dynamic baseline, the measurement logs are not updated. 
 Otherwise, an exception measurement log is added.
 
-#### Measuring the dim_core Module
+### Measuring the dim_core Module
 
 **Prerequisites**:
@@ -1208,7 +554,7 @@ Trigger the measurement of dim_monitor again and query the measurement logs. You
 0 6a60d78230954aba2e6ea6a6b20a7b803d7adb405acbb49b297c003366cfec0d sha256:449ba11b0bfc6146d4479edea2b691aa37c0c025a733e167fd97e77bbb4b9dab dim_core.data [tampered]
 ```
 
-#### Extending to the TPM PCR
+### Extending to the TPM PCR
 
 **Prerequisites**:
@@ -1260,7 +606,7 @@ Trigger the measurement of dim_monitor again and query the measurement logs. You
 13: 0xBFB9FF69493DEF9C50E52E38B332BDA8DE9C53E90FB96D14CD299E756205F8EA
 ```
 
-#### Enabling Signature Verification
+### Enabling Signature Verification
 
 **Prerequisites**:
@@ -1308,7 +654,7 @@ The baseline establishment will fail if it is triggered after the policy file is
 >
 > If the signature verification of a static baseline file fails, dim_core skips the parsing of the file without causing a baseline establishment failure.
 
-#### Configuring Measurement Algorithms
+### Configuring Measurement Algorithms
 
 **Prerequisites**:
@@ -1345,7 +691,7 @@ The baseline establishment will fail if it is triggered after the policy file is
 0 2c862bb477b342e9ac7d4dd03b6e6705c19e0835efc15da38aafba110b41b3d1 sm3:a4d31d5f4d5f08458717b520941c2aefa0b72dc8640a33ee30c26a9dab74eae9 dim_core.data [dynamic baseline]
 ```
 
-#### Configuring Automatic Measurement
+### Configuring Automatic Measurement
 
 **Prerequisites**:
@@ -1373,7 +719,7 @@ echo 1 > /sys/kernel/security/dim/interval
 
 In this case, the measurement is not triggered immediately. One minute later, dynamic baseline establishment or dynamic measurement is triggered. Subsequently, dynamic measurement is triggered every minute.
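The periodic measurement setup above can be wrapped in a small helper for scripting. This is a sketch only: the real interface is the `/sys/kernel/security/dim/interval` file described in this section, while the function name and the optional directory parameter (useful for dry runs) are our own additions.

```shell
#!/bin/sh
# Sketch: configure the DIM automatic measurement interval.
# Writing a positive number of minutes enables periodic measurement;
# this requires root and a loaded dim_core module on a real system.
set_dim_interval() {
    # $1: interval in minutes; $2 (optional): DIM securityfs directory
    dir="${2:-/sys/kernel/security/dim}"
    if [ ! -e "$dir/interval" ]; then
        echo "DIM interface not found (is dim_core loaded?)" >&2
        return 1
    fi
    echo "$1" > "$dir/interval"
}

# Example: trigger dynamic baseline/measurement every minute.
# set_dim_interval 1
```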
-#### Configuring the Measurement Scheduling Time
+### Configuring the Measurement Scheduling Time
 
 **Prerequisites**:
@@ -1386,459 +732,3 @@ modprobe dim_core measure_schedule=10
 ```
 
 When dynamic baseline establishment or dynamic measurement is triggered, dim_core schedules the CPU to release 10 ms each time a process is measured.
-
-## Remote Attestation (Kunpeng Security Library)
-
-### Introduction
-
-This project develops basic security software components running on Kunpeng processors. In the early stage, the project focuses on trusted computing fields such as remote attestation to empower security developers in the community.
-
-### Software Architecture
-
-On a platform without TEE enabled, this project provides the platform remote attestation feature, and its software architecture is shown in the following figure:
-
-![img](./figures/RA-arch-1.png)
-
-On a platform with TEE enabled, this project provides the TEE remote attestation feature, and its software architecture is shown in the following figure:
-
-![img](./figures/RA-arch-2.png)
-
-### Installation and Configuration
-
-1. Run the following command to install the RPM packages using Yum:
-
-    ```shell
-    yum install kunpengsecl-ras kunpengsecl-rac kunpengsecl-rahub kunpengsecl-qcaserver kunpengsecl-attester kunpengsecl-tas kunpengsecl-devel
-    ```
-
-2. Prepare the database environment. Go to the `/usr/share/attestation/ras` directory and run the `prepare-database-env.sh` script to automatically configure the database environment.
-
-3. The configuration files required for program running are stored in three paths: current path `./config.yaml`, home path `${HOME}/.config/attestation/ras(rac)(rahub)(qcaserver)(attester)(tas)/config.yaml`, and system path `/etc/attestation/ras(rac)(rahub)(qcaserver)(attester)(tas)/config.yaml`.
-
-4. (Optional) To create a home directory configuration file, run the `prepare-ras(rac)(hub)(qca)(attester)(tas)conf-env.sh` script in `/usr/share/attestation/ras(rac)(rahub)(qcaserver)(attester)(tas)` after installing the RPM package.
-
-### Options
-
-#### RAS Boot Options
-
-Run the `ras` command to start the RAS program. Note that you need to provide the ECDSA public key in the current directory and name it `ecdsakey.pub`. Options are as follows:
-
-```console
-  -H, --https           HTTP/HTTPS mode switch. The default value is https(true), false=http.
-  -h, --hport           RESTful API port listened on by RAS in HTTPS mode.
-  -p, --port string     Client API port listened on by RAS.
-  -r, --rest string     RESTful API port listened on by RAS in HTTP mode.
-  -T, --token           Generates a verification code for test and exits.
-  -v, --verbose         Prints more detailed RAS runtime log information.
-  -V, --version         Prints the RAS version and exits.
-```
-
-#### RAC Boot Options
-
-Run the `sudo raagent` command to start the RAC program. Note that the sudo permission is required to enable the physical TPM module. Options are as follows:
-
-```console
-  -s, --server string   Specifies the RAS service port to be connected.
-  -t, --test            Starts in test mode.
-  -v, --verbose         Prints more detailed RAC runtime log information.
-  -V, --version         Prints the RAC version and exits.
-  -i, --imalog          Specifies the path of the IMA file.
-  -b, --bioslog         Specifies the path of the BIOS file.
-  -T, --tatest          Starts in TA test mode.
-```
-
-**Note:**
->1. To use the TEE remote attestation feature, you must not start RAC in TA test mode. You must also place the UUID, whether to use TCB, the mem_hash, and the img_hash of each TA to be attested, in that order, in the **talist** file under the RAC execution path, and preinstall the **libqca.so** and **libteec.so** libraries provided by the TEE team.
The format of the **talist** file is as follows:
->
->```text
->e08f7eca-e875-440e-9ab0-5f381136c600 false ccd5160c6461e19214c0d8787281a1e3c4048850352abe45ce86e12dd3df9fde 46d5019b0a7ffbb87ad71ea629ebd6f568140c95d7b452011acfa2f9daf61c7a
->```
->
->2. If you do not use the TEE remote attestation feature, copy the **libqca.so** and **libteec.so** libraries from the `${DESTDIR}/usr/share/attestation/qcaserver` path to the `/usr/lib` or `/usr/lib64` path, and start RAC in TA test mode.
-
-#### QCA Boot Options
-
-Run the `${DESTDIR}/usr/bin/qcaserver` command to start the QCA program. Note that to start QTA normally, the full path of qcaserver must be used, and the CA path parameter in QTA needs to be kept the same as this path. Options are as follows:
-
-```console
-  -C, --scenario int   Sets the application scenario of the program. The default value is sce_no_as(0), 1=sce_as_no_daa, 2=sce_as_with_daa.
-  -S, --server string  Specifies the open server address/port.
-```
-
-#### ATTESTER Boot Options
-
-Run the `attester` command to start the ATTESTER program. Options are as follows:
-
-```console
-  -B, --basevalue string   Sets the base value file read path.
-  -M, --mspolicy int       Sets the measurement strategy, which defaults to -1 and needs to be specified manually. 1=compare only img-hash values, 2=compare only hash values, and 3=compare both img-hash and hash values at the same time.
-  -S, --server string      Specifies the address of the server to connect to.
-  -U, --uuid int           Specifies the trusted apps to verify.
-  -V, --version            Prints the program version and exits.
-  -T, --test               Reads fixed nonce values to match currently hard-coded trusted reports.
-```
-
-#### TAS Boot Options
-
-Run the `tas` command to start the TAS program. Options are as follows:
-
-```console
-  -T, --token   Generates a verification code for test and exits.
-```
-
-**Note:**
->1. To enable the TAS, you must configure the private key for TAS.
Run the following command to modify the configuration file in the home directory:
->
->```shell
->$ cd ${HOME}/.config/attestation/tas
->$ vim config.yaml
-> # The values of the following DAA_GRP_KEY_SK_X and DAA_GRP_KEY_SK_Y are for testing purposes only.
-> # Be sure to update their contents to ensure safety before normal use.
->tasconfig:
->  port: 127.0.0.1:40008
->  rest: 127.0.0.1:40009
->  akskeycertfile: ./ascert.crt
->  aksprivkeyfile: ./aspriv.key
->  huaweiitcafile: ./Huawei IT Product CA.pem
->  DAA_GRP_KEY_SK_X: 65a9bf91ac8832379ff04dd2c6def16d48a56be244f6e19274e97881a776543c65a9bf91ac8832379ff04dd2c6def16d48a56be244f6e19274e97881a776543c
->  DAA_GRP_KEY_SK_Y: 126f74258bb0ceca2ae7522c51825f980549ec1ef24f81d189d17e38f1773b56126f74258bb0ceca2ae7522c51825f980549ec1ef24f81d189d17e38f1773b56
->```
->
->Then enter `tas` to start the TAS program.
->
->2. In an environment with TAS, to improve the efficiency of the QCA certificate configuration process, QCA does not access TAS at every boot to generate a certificate; instead, certificates are stored locally. That is, QCA reads the certificate path configured in `config.yaml`, and the `func hasAKCert(s int) bool` function checks whether a TAS-issued certificate has been saved locally. If the certificate is read successfully, there is no need to access TAS. If the certificate cannot be read, QCA accesses TAS and saves the certificate returned by TAS locally.
- -### API Definition - -#### RAS APIs - -To facilitate the administrator to manage the target server, RAS and the user TA in the TEE deployed on the target server, the following APIs are designed for calling: - -| API | Method | -| --------------------------------- | --------------------------- | -| / | GET | -| /{id} | GET, POST, DELETE | -| /{from}/{to} | GET | -| /{id}/reports | GET | -| /{id}/reports/{reportid} | GET, DELETE | -| /{id}/basevalues | GET | -| /{id}/newbasevalue | POST | -| /{id}/basevalues/{basevalueid} | GET, POST, DELETE | -| /{id}/ta/{tauuid}/status | GET | -| /{id}/ta/{tauuid}/tabasevalues | GET | -| /{id}/ta/{tauuid}/tabasevalues/{tabasevalueid} | GET, POST, DELETE | -| /{id}/ta/{tauuid}/newtabasevalue | POST | -| /{id}/ta/{tauuid}/tareports | GET | -| /{id}/ta/{tauuid}/tareports/{tareportid} | GET, POST, DELETE | -| /{id}/basevalues/{basevalueid} | GET, DELETE | -| /version | GET | -| /config | GET, POST | -| /{id}/container/status | GET | -| /{id}/device/status | GET | - -The usage of the preceding APIs is described as follows: - -To query information about all servers, use `/`. - -```shell -curl -X GET -H "Content-Type: application/json" http://localhost:40002/ -``` - -*** -To query detailed information about a target server, use the GET method of `/{id}`. **{id}** is the unique ID allocated by RAS to the target server. - -```shell -curl -X GET -H "Content-Type: application/json" http://localhost:40002/1 -``` - -*** -To modify information about the target server, use the POST method of `/{id}`. `$AUTHTOKEN` is the identity verification code automatically generated by running the `ras -T` command. 
-
-```go
-type clientInfo struct {
-    Registered   *bool `json:"registered"`   // Registration status of the target server
-    IsAutoUpdate *bool `json:"isautoupdate"` // Target server base value update policy
-}
-```
-
-```shell
-curl -X POST -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1 -d '{"registered":false, "isautoupdate":false}'
-```
-
-***
-To delete a target server, use the DELETE method of `/{id}`.
-
-**Note:**
->This method does not delete all information about the target server. Instead, it sets the registration status of the target server to `false`.
-
-```shell
-curl -X DELETE -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1
-```
-
-***
-To query information about all servers in a specified range, use the GET method of `/{from}/{to}`.
-
-```shell
-curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/9
-```
-
-***
-To query all trust reports of the target server, use the GET method of `/{id}/reports`.
-
-```shell
-curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/reports
-```
-
-***
-To query details about a specified trust report of the target server, use the GET method of `/{id}/reports/{reportid}`. **{reportid}** indicates the unique ID assigned by RAS to the trust report of the target server.
-
-```shell
-curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/reports/1
-```
-
-***
-To delete a specified trust report of the target server, use the DELETE method of `/{id}/reports/{reportid}`.
-
-**Note:**
->This method will delete all information about the specified trust report, and the report cannot be queried through the API.
-
-```shell
-curl -X DELETE -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1/reports/1
-```
-
-***
-To query all base values of the target server, use the GET method of `/{id}/basevalues`.
- -```shell -curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/basevalues -``` - -*** -To add a base value to the target server, use the POST method of `/{id}/newbasevalue`. - -```go -type baseValueJson struct { - BaseType string `json:"basetype"` // Base value type - Uuid string `json:"uuid"` // ID of a container or device - Name string `json:"name"` // Base value name - Enabled bool `json:"enabled"` // Whether the base value is available - Pcr string `json:"pcr"` // PCR value - Bios string `json:"bios"` // BIOS value - Ima string `json:"ima"` // IMA value - IsNewGroup bool `json:"isnewgroup"` // Whether this is a group of new reference values -} -``` - -```shell -curl -X POST -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1/newbasevalue -d '{"name":"test", "basetype":"host", "enabled":true, "pcr":"testpcr", "bios":"testbios", "ima":"testima", "isnewgroup":true}' -``` - -*** -To query details about a specified base value of a target server, use the get method of `/{id}/basevalues/{basevalueid}`. **{basevalueid}** indicates the unique ID allocated by RAS to the specified base value of the target server. - -```shell -curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/basevalues/1 -``` - -*** -To change the availability status of a specified base value of the target server, use the POST method of `/{id}/basevalues/{basevalueid}`. - -```shell -curl -X POST -H "Content-type: application/json" -H "Authorization: $AUTHTOKEN" http://localhost:40002/1/basevalues/1 -d '{"enabled":true}' -``` - -*** -To delete a specified base value of the target server, use the DELETE method of `/{id}/basevalues/{basevalueid}`. - -**Note:** ->This method will delete all the information about the specified base value, and the base value cannot be queried through the API. 
- -```shell -curl -X DELETE -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1/basevalues/1 -``` - -To query the trusted status of a specific user TA on the target server, use the GET method of the `"/{id}/ta/{tauuid}/status"` interface. Where {id} is the unique identification number assigned by RAS to the target server, and {tauuid} is the identification number of the specific user TA. - -```shell -curl -X GET -H "Content-type: application/json" -H "Authorization: $AUTHTOKEN" http://localhost:40002/1/ta/test/status -``` - -*** -To query all the baseline value information of a specific user TA on the target server, use the GET method of the `"/{id}/ta/{tauuid}/tabasevalues"` interface. - -```shell -curl -X GET -H "Content-type: application/json" http://localhost:40002/1/ta/test/tabasevalues -``` - -*** -To query the details of a specified base value for a specific user TA on the target server, use the GET method of the `"/{id}/ta/{tauuid}/tabasevalues/{tabasevalueid}"` interface. where {tabasevalueid} is the unique identification number assigned by RAS to the specified base value of a specific user TA on the target server. - -```shell -curl -X GET -H "Content-type: application/json" http://localhost:40002/1/ta/test/tabasevalues/1 -``` - -*** -To modify the available status of a specified base value for a specific user TA on the target server, use the `POST` method of the `"/{id}/ta/{tauuid}/tabasevalues/{tabasevalueid}"` interface. - -```shell -curl -X POST -H "Content-type: application/json" -H "Authorization: $AUTHTOKEN" http://localhost:40002/1/ta/test/tabasevalues/1 --data '{"enabled":true}' -``` - -*** -To delete the specified base value of a specific user TA on the target server, use the `DELETE` method of the `"/{id}/ta/{tauuid}/tabasevalues/{tabasevalueid}"` interface. - -**Note:** ->This method will delete all information about the specified base value, and the base value cannot be queried through the API. 
- -```shell -curl -X DELETE -H "Content-type: application/json" -H "Authorization: $AUTHTOKEN" -k http://localhost:40002/1/ta/test/tabasevalues/1 -``` - -*** -To add a baseline value to a specific user TA on the target server, use the `POST` method of the `"/{id}/ta/{tauuid}/newtabasevalue"` interface. - -```go -type tabaseValueJson struct { - Uuid string `json:"uuid"` // the identification number of the user TA - Name string `json:"name"` // base value name - Enabled bool `json:"enabled"` // whether a baseline value is available - Valueinfo string `json:"valueinfo"` // mirror hash value and memory hash value -} -``` - -```shell -curl -X POST -H "Content-Type: application/json" -H "Authorization: $AUTHTOKEN" -k http://localhost:40002/1/ta/test/newtabasevalue -d '{"uuid":"test", "name":"testname", "enabled":true, "valueinfo":"test info"}' -``` - -*** -To query the target server for all trusted reports for a specific user TA, use the `GET` method of the `"/{id}/ta/{tauuid}/tareports"` interface. - -```shell -curl -X GET -H "Content-type: application/json" http://localhost:40002/1/ta/test/tareports -``` - -*** -To query the details of a specified trusted report for a specific user TA on the target server, use the `GET` method of the `"/{id}/ta/{tauuid}/tareports/{tareportid}"` interface. Where {tareportid} is the unique identification number assigned by RAS to the specified trusted report of a specific user TA on the target server. - -```shell -curl -X GET -H "Content-type: application/json" http://localhost:40002/1/ta/test/tareports/2 -``` - -*** -To delete the specified trusted report of a specific user TA on the target server, use the `DELETE` method of the `"/{id}/ta/{tauuid}/tareports/{tareportid}"` interface. - -**Note:** ->This method will delete all information of the specified trusted report, and the report cannot be queried through the API. 
- -```shell -curl -X DELETE -H "Content-type: application/json" http://localhost:40002/1/ta/test/tareports/2 -``` - -*** -To obtain the version information of the program, use the GET method of `/version`. - -```shell -curl -X GET -H "Content-Type: application/json" http://localhost:40002/version -``` - -*** -To query the configuration information about the target server, RAS, or database, use the GET method of `/config`. - -```shell -curl -X GET -H "Content-Type: application/json" http://localhost:40002/config -``` - -*** -To modify the configuration information about the target server, RAS, or database, use the POST method of /config. - -```go -type cfgRecord struct { - // Target server configuration - HBDuration string `json:"hbduration" form:"hbduration"` - TrustDuration string `json:"trustduration" form:"trustduration"` - DigestAlgorithm string `json:"digestalgorithm" form:"digestalgorithm"` - // RAS configuration - MgrStrategy string `json:"mgrstrategy" form:"mgrstrategy"` - ExtractRules string `json:"extractrules" form:"extractrules"` - IsAllupdate *bool `json:"isallupdate" form:"isallupdate"` - LogTestMode *bool `json:"logtestmode" form:"logtestmode"` -} -``` - -```shell -curl -X POST -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/config -d '{"hbduration":"5s","trustduration":"20s","DigestAlgorithm":"sha256"}' -``` - -#### TAS APIs - -To facilitate the administrator's management of TAS for remote control, the following API is designed for calling: - -| API | Method | -| --------------------| ------------------| -| /config | GET, POST | - -To query the configuration information, use the GET method of the `/config` interface. - -```shell -curl -X GET -H "Content-Type: application/json" http://localhost:40009/config -``` - -*** -To modify the configuration information, use the POST method of the `/config` interface. 
- -```shell -curl -X POST -H "Content-Type: application/json" -H "Authorization: $AUTHTOKEN" http://localhost:40009/config -d '{"basevalue":"testvalue"}' -``` - -**Note:** ->Currently, only the base value in the configuration information of TAS is supported for querying and modifying. - -### FAQs - -1. Why cannot RAS be started after it is installed? - - > In the current RAS design logic, after the program is started, it needs to search for the `ecdsakey.pub` file in the current directory and read the file as the identity verification code for accessing the program. If the file does not exist in the current directory, an error is reported during RAS boot. - >> Solution 1: Run the `ras -T` command to generate a test token. The `ecdsakey.pub` file is generated. - >> Solution 2: After deploying the oauth2 authentication service, save the verification public key of the JWT token generator as `ecdsakey.pub`. - -2. Why cannot RAS be accessed through REST APIs after it is started? - - > RAS is started in HTTPS mode by default. Therefore, you need to provide a valid certificate for RAS to access it. However, RAS started in HTTP mode does not require a certificate. - -## Trusted Platform Control Module - -### Background - -Trusted computing has undergone continuous development and improvement in the past 40 years and has become an important branch of information security. Trusted computing technologies have developed rapidly in recent years and have solved the challenges in Trusted Computing 2.0—integration of trusted systems and existing systems, trusted management, and simplification of trusted application development. These technical breakthroughs form Trusted Computing 3.0, that is, trusted computing based on an active immune system. 
Compared with the passive plug-in architecture of the previous generation, Trusted Computing 3.0 proposes a new trusted system framework based on self-controlled cryptography algorithms, control chips, trusted software, trusted connections, policy management, and secure and trusted protection applications, implementing trust across networks.
-
-The trusted platform control module (TPCM) is a base and core module that can be integrated into a trusted computing platform to establish and ensure a trust source. As one of the innovations in Trusted Computing 3.0 and the core of active immunity, TPCM implements active control over the entire platform.
-
-The TPCM-based Trusted Computing 3.0 architecture consists of the protection module and the computing module. On the one hand, based on the Trusted Cryptography Module (TCM), the TPCM main control firmware measures the reliability of the protection and computing modules, as well as their firmware. On the other hand, the Trusted Software Base (TSB) measures the reliability of system software and application software. In addition, the TPCM management platform verifies the reliability measurement and synchronizes and manages the trust policies.
-
-### Feature Description
-
-The overall system design consists of the protection module, computing module, and trusted management center software, as shown in the following figure.
-
-![](./figures/TPCM.png)
-
-- Trusted management center: This centralized management platform, provided by a third-party vendor, formulates, delivers, maintains, and stores protection policies and reference values for trusted computing nodes.
-- Protection module: This module operates independently of the computing module and provides trusted computing protection functions that feature active measurement and active control to implement security protection during computing. The protection module consists of the TPCM main control firmware, TSB, and TCM.
As a key module for implementing trust protection in a trusted computing node, the TPCM can be implemented in multiple forms, such as cards, chips, and IP cores. It contains a CPU and memory, firmware, and software such as an OS and trusted function components. The TPCM operates alongside the computing module and works according to the built-in protection policy to monitor the trust of protected resources, such as hardware, firmware, and software of the computing module. The TPCM is the Root of Trust in a trusted computing node. - -- Computing module: This module includes hardware, an OS, and application layer software. The running of the OS can be divided into the boot phase and the running phase. In the boot phase, GRUB2 and shim of openEuler support the reliability measurement capability, which protects boot files such as shim, GRUB2, kernel, and initramfs. In the running phase, openEuler supports the deployment of the trusted verification agent (provided by third-party vendor HTTC). The agent sends data to the TPCM for trusted measurement and protection in the running phase. - -The TPCM interacts with other components as follows: - -1. The TPCM hardware, firmware, and software provide an operating environment for the TSB. The trusted function components of the TPCM provide support for the TSB to implement measurement, control, support, and decision-making based on the policy library interpretation requirements. -2. The TPCM accesses the TCM for trusted cryptography functions to complete computing tasks such as trusted verification, measurement, and confidential storage, and provides services for TCM access. -3. The TPCM connects to the trusted management center through the management interface to implement protection policy management and trusted report processing. -4. The TPCM uses the built-in controller and I/O port to interact with the controller of the computing module through the bus to actively monitor the computing module. -5. 
The built-in protection agent in the OS of the computing module obtains the code and data related to the preset protection object and provides them to the TPCM. The TPCM forwards the monitoring information to the TSB, and the TSB analyzes and processes the information according to the policy library. - -### Constraints - -Supported server: TaiShan 200 server (model 2280) -Supported BMC card: BC83SMMC - -### Application Scenarios - -The TPCM enables a complete trust chain to ensure that the OS boots into a trusted computing environment. diff --git a/docs/en/Server/Security/TrustedComputing/IMA.md b/docs/en/Server/Security/TrustedComputing/IMA.md new file mode 100644 index 0000000000000000000000000000000000000000..117ad73abe370587847a371aee3a8290eb09cca1 --- /dev/null +++ b/docs/en/Server/Security/TrustedComputing/IMA.md @@ -0,0 +1,1164 @@ +# Kernel Integrity Measurement Architecture (IMA) + +## Overview + +### Introduction to IMA + +IMA is a kernel subsystem that measures files accessed through system calls like `execve()`, `mmap()`, and `open()` based on custom policies. These measurements can be used for **local or remote attestation** or compared against reference values to **control file access**. + +IMA operates in two main modes: + +- Measurement: This mode observes the integrity of files. When protected files are accessed, measurement records are added to the measurement log in kernel memory. If the system has a Trusted Platform Module (TPM), the measurement digest can also be extended into the TPM platform configuration registers (PCRs) to ensure the integrity of the measurement data. This mode does not restrict file access but provides recorded file information to upper-layer applications for remote attestation. +- Appraisal: This mode verifies file integrity, preventing access to unknown or tampered files. It uses cryptographic methods like hashes, signatures, and hash-based message authentication codes (HMACs) to validate file contents. 
If verification fails, the file is inaccessible to any process. This feature enhances system resilience by isolating compromised files and preventing further damage during an attack. + +In summary, the measurement mode acts as a passive observer, while the appraisal mode enforces strict access control, blocking any file that fails integrity checks. + +### Introduction to EVM + +EVM, or Extended Verification Module, builds on IMA capabilities. While IMA protects file contents, EVM extends this protection to file attributes such as `uid`, `security.ima`, and `security.selinux`. + +### Introduction to IMA Digest Lists + +IMA digest lists enhance the native integrity protection mechanism of the kernel in openEuler, addressing key limitations of the native IMA/EVM mechanism: + +**File access performance degradation due to extension to TPM** + +In IMA measurement mode, each measurement requires accessing the TPM. Since the TPM operates at low speeds, typically using the Serial Peripheral Interface (SPI) protocol with clock frequencies in the tens of MHz, system call performance suffers. + +![](./figures/ima_tpm.png) + +**File access performance degradation due to asymmetric operations** + +In IMA appraisal mode, immutable files are protected using a signature mechanism. Each file verification requires signature validation, and the complexity of asymmetric operations further degrades system call performance. + +![](./figures/ima_sig_verify.png) + +**Decreased efficiency and security from complex deployment** + +In IMA appraisal mode, deployment requires the system to first enter fix mode to mark IMA/EVM extended attributes, then switch to appraisal mode for startup. Additionally, upgrading protected files necessitates rebooting into fix mode to update files and extended attributes. This process reduces deployment efficiency and exposes keys in the runtime environment, compromising security. 
+ +![](./figures/ima_priv_key.png) + +IMA digest lists address these issues by managing baseline digest values for multiple files through a single hash list file. This file consolidates the baseline digest values of files (such as all executables in a software package) for centralized management. The baseline digest values can include file content digests (for IMA mode) and file extended attribute digests (for EVM mode). This consolidated file is the IMA digest list file. + +![](./figures/ima_digest_list_pkg.png) + +When the IMA digest list feature is enabled, the kernel maintains a hash allowlist pool to store digest values from imported IMA digest list files. It also provides interfaces via securityfs for importing, deleting, and querying these files. + +In measurement mode, imported digest list files must undergo measurement and extension to TPM before being added to the allowlist pool. If the digest value of a target file matches the allowlist pool, no additional measurement logging or extension to TPM is required. In appraisal mode, imported digest list files must pass signature verification before being added to the allowlist pool. The digest value of the accessed target file is then matched against the allowlist pool to determine the appraisal result. + +![](./figures/ima_digest_list_flow.png) + +Compared to the native Linux IMA/EVM mechanism, the IMA digest list extension enhances security, performance, and usability for better practical implementation: + +- Security: IMA digest lists are distributed with software packages. During installation, digest lists are imported simultaneously, ensuring baseline values originate from the software distributor (like the openEuler community). This eliminates the need to generate baseline values in the runtime environment, establishing a complete trust chain. 
+- Performance: The IMA digest list mechanism operates on a per-digest-list basis, reducing TPM access and asymmetric operation frequency to 1/n (where n is the average number of file hashes managed by a single digest list). This improves system call and boot performance.
+- Usability: The IMA digest list mechanism supports out-of-the-box functionality, allowing the system to enter appraisal mode immediately after installation. It also enables software package installation and upgrades in appraisal mode without requiring fix mode for file marking, facilitating rapid deployment and seamless updates.
+
+It is worth noting that, unlike the native IMA/EVM, the IMA digest list stores baseline values for measurement/appraisal in kernel memory. This assumes that kernel memory cannot be tampered with by unauthorized entities. Therefore, the IMA digest list relies on additional security mechanisms (including kernel module secure boot and runtime memory measurement) to safeguard kernel memory integrity.
+
+However, both the native IMA mechanism and the IMA digest list extension are only components of the system security chain. Neither can independently ensure system security. Security is inherently a systematic, defense-in-depth engineering effort.
+
+## Interface Description
+
+### Kernel Boot Parameters
+
+The openEuler IMA/EVM mechanism provides the following kernel boot parameters.
| Parameter | Value | Function |
| :----------------------- | :---------------------- | :--------------------------------------------------------------------------------------------------------------- |
| ima_appraise | enforce-evm | Enable IMA appraisal enforce mode (EVM enabled). |
| | log-evm | Enable IMA appraisal log mode (EVM enabled). |
| | enforce | Enable IMA appraisal enforce mode. |
| | log | Enable IMA appraisal log mode. |
| | off | Disable IMA appraisal. |
| ima_appraise_digest_list | digest | Enable IMA+EVM appraisal based on digest lists (comparing file content and extended attributes). |
| | digest-nometadata | Enable IMA appraisal based on digest lists (comparing file content only). |
| evm | x509 | Enable portable signature-based EVM directly (regardless of EVM certificate loading). |
| | complete | Prevent modification of EVM mode via securityfs after boot. |
| | allow_metadata_writes | Allow file metadata modifications without EVM interception. |
| ima_hash | sha256/sha1/... | Specify the IMA measurement hash algorithm. |
| ima_template | ima | Specify the IMA measurement template (d or n). |
| | ima-ng | Specify the IMA measurement template (d-ng or n-ng), default template. |
| | ima-sig | Specify the IMA measurement template (d-ng, n-ng, or sig). |
| ima_policy | exec_tcb | Measure all files accessed via execution or mapping, including loaded kernel modules, firmware, and kernel files. |
| | tcb | Extend the exec_tcb policy to measure files accessed with uid=0 or euid=0. |
| | secure_boot | Appraise all loaded kernel modules, firmware, and kernel files, using IMA signature mode. |
| | appraise_exec_tcb | Extend the secure_boot policy to appraise all files accessed via execution or mapping. |
| | appraise_tcb | Appraise all files owned by uid=0. |
| | appraise_exec_immutable | Used with the appraise_exec_tcb policy, making executable file extended attributes immutable. |
| ima_digest_list_pcr | 10 | Extend IMA measurement results based on digest lists into PCR 10, disable native IMA measurement. |
| | 11 | Extend IMA measurement results based on digest lists into PCR 11, disable native IMA measurement. |
| | +11 | Extend IMA measurement results based on digest lists into PCR 11, extend native IMA measurement results into PCR 10. |
| ima_digest_db_size | nn[M] | Set the kernel digest list size limit (0 to 64 MB), defaulting to 16 MB if not configured. ("Not configured" means omitting the parameter entirely, not leaving the value blank as in ima_digest_db_size=.) |
| ima_capacity | -1 to 2147483647 | Set the kernel measurement log entry limit, defaulting to 100,000. -1 means no limit. |
| initramtmpfs | None | Support tmpfs in initrd to carry file extended attributes. |
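The `ima_digest_list_pcr` parameter above controls which TPM PCR measurement digests are extended into. Conceptually, a PCR extend replaces the register value with the hash of its previous value concatenated with the new digest, so the final PCR value depends on every measurement and on their order, and can be reproduced by replaying the measurement log. The following sketch illustrates this chaining with coreutils only; for simplicity it chains hex strings, whereas a real TPM hashes the raw bytes, and the event names are made up:

```shell
# Illustrative PCR-extend chaining: new PCR = H(previous PCR || new digest).
pcr=$(printf '%064d' 0)   # PCR content starts out as all zeros
for event in "digest-list-A" "digest-list-B"; do
    digest=$(printf '%s' "$event" | sha256sum | awk '{print $1}')
    pcr=$(printf '%s%s' "$pcr" "$digest" | sha256sum | awk '{print $1}')
done
echo "$pcr"   # replaying the same log in the same order reproduces this value
```

Because each extend folds the previous value into the next hash, reordering or omitting a single measurement yields a different final PCR value, which is what remote attestation checks against the quoted PCR.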
+ +Based on user scenarios, the following parameter combinations are recommended: + +**(1) Native IMA measurement** + +```ini +# Native IMA measurement + custom policy +# No configuration required. This is enabled by default. +# Native IMA measurement + TCB default policy +ima_policy="tcb" +``` + +**(2) IMA measurement based on digest list** + +```ini +# Digest list IMA measurement + custom policy +ima_digest_list_pcr=11 ima_template=ima-ng initramtmpfs +# Digest list IMA measurement + default policy +ima_digest_list_pcr=11 ima_template=ima-ng ima_policy="exec_tcb" initramtmpfs +``` + +**(3) IMA appraisal based on digest list, protecting file content only** + +```ini +# IMA appraisal + log mode +ima_appraise=log ima_appraise_digest_list=digest-nometadata ima_policy="appraise_exec_tcb" initramtmpfs +# IMA appraisal + enforce mode +ima_appraise=enforce ima_appraise_digest_list=digest-nometadata ima_policy="appraise_exec_tcb" initramtmpfs +``` + +**(4) IMA appraisal based on digest list, protecting file content and extended attributes** + +```ini +# IMA appraisal + log mode +ima_appraise=log-evm ima_appraise_digest_list=digest ima_policy="appraise_exec_tcb|appraise_exec_immutable" initramtmpfs evm=x509 evm=complete +# IMA appraisal + enforce mode +ima_appraise=enforce-evm ima_appraise_digest_list=digest ima_policy="appraise_exec_tcb|appraise_exec_immutable" initramtmpfs evm=x509 evm=complete +``` + +> ![](./public_sys-resources/icon-note.gif) **Note:** +> +> All four parameter sets above can be used individually, but only digest list-based measurement and appraisal modes can be combined, such as (2) with (3) or (2) with (4). + +### securityfs Interface Description + +The securityfs interfaces provided by openEuler IMA are located in the **/sys/kernel/security** directory. Below are the interface names and their descriptions. 
+
+| Path | Permissions | Description |
+| :----------------------------- | :---------- | :---------------------------------------------------------------------- |
+| ima/policy | 600 | Display or import IMA policies. |
+| ima/ascii_runtime_measurement | 440 | Display IMA measurement logs in ASCII format. |
+| ima/binary_runtime_measurement | 440 | Display IMA measurement logs in binary format. |
+| ima/runtime_measurement_count | 440 | Display the count of IMA measurement log entries. |
+| ima/violations | 440 | Display the number of abnormal IMA measurement logs. |
+| ima/digests_count | 440 | Display the total number of digests in the system hash table (IMA+EVM). |
+| ima/digest_list_data | 200 | Add digest lists. |
+| ima/digest_list_data_del | 200 | Delete digest lists. |
+| evm | 660 | Query or set EVM mode. |
+
+The **/sys/kernel/security/evm** interface supports the following values:
+
+- `0`: EVM is not initialized.
+- `1`: Use HMAC (symmetric encryption) to verify extended attribute integrity.
+- `2`: Use public key signature verification (asymmetric encryption) to verify extended attribute integrity.
+- `6`: Disable extended attribute integrity verification.
+
+### Digest List Management Tool Description
+
+The digest-list-tools package includes tools for generating and managing IMA digest list files. The primary CLI tools are as follows.
+
+#### gen_digest_lists
+
+The `gen_digest_lists` tool allows users to generate digest lists. The command options are defined below.
OptionValueFunction
-d<path>Specify the directory to store the generated digest list files. The directory must be valid.
-fcompactSpecify the format of the generated digest list files. Currently, only the compact format is supported.
-i<option arg>:<option value>Define the target file range for generating digest lists. Specific parameters are listed below.
I:<path>Specify the absolute path of files for which digest lists will be generated. If a directory is specified, recursive generation is performed.
E:<path>Specify paths or directories to exclude.
F:<path>Specify paths or directories for which digest lists are generated for all files under them (when combined with e:, files under these paths are included even if they are not executable).
e:Generate digest lists only for executable files.
l:policyMatch file security contexts from the SELinux policy instead of reading them directly from file extended attributes.
i:Include the file digest value in the calculated extended attribute information when generating metadata-type digest lists (required).
M:Allow explicit specification of file extended attribute information (requires use with the rpmbuild command).
u:Use the list file name specified by the L: parameter as the name of the generated digest list file (requires use with the rpmbuild command).
L:<path>Specify the path to the list file, which contains the information data required to generate digest lists (requires use with the rpmbuild command).
-oaddSpecify the operation for generating digest lists. Currently, only the add operation is supported, which adds the digest list to the file.
-p-1Specify the position in the file where the digest list will be written. Currently, only -1 is supported.
-tfileGenerate digest lists only for file content.
metadataGenerate digest lists for both file content and extended attributes.
-TN/AIf this option is not used, digest list files are generated. Otherwise, TLV digest list files are generated.
-A<path>Specify the relative root directory to truncate the file path prefix for path matching and SELinux label matching.
-mimmutableSpecify the modifiers attribute for the generated digest list files. Currently, only immutable is supported. In enforce/enforce-evm mode, digest lists can only be opened in read-only mode.
-hN/APrint help information.
+ +**Usage examples** + +- Scenario 1: Generate a digest list/TLV digest list for a single file. + + ```shell + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ls -d ./ -i i: + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ls -d ./ -i i: -T + ``` + +- Scenario 2: Generate a digest list/TLV digest list for a single file and specify a relative root directory. + + ```shell + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ls -A /usr/ -d ./ -i i: + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ls -A /usr/ -d ./ -i i: -T + ``` + +- Scenario 3: Recursively generate a digest list/TLV digest list for files in a directory. + + ```shell + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ -d ./ -i i: + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ -d ./ -i i: -T + ``` + +- Scenario 4: Recursively generate a digest list/TLV digest list for executable files in a directory. + + ```shell + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ -d ./ -i i: -i e: + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ -d ./ -i i: -i e: -T + ``` + +- Scenario 5: Recursively generate a digest list/TLV digest list for files in a directory, excluding specific subdirectories. + + ```shell + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/ -d ./ -i i: -i E:/usr/bin/ + gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/ -d ./ -i i: -i E:/usr/bin/ -T + ``` + +- Scenario 6: In an `rpmbuild` callback script, generate a digest list by reading the list file passed by `rpmbuild`. 
+
+  ```shell
+  gen_digest_lists -i M: -t metadata -f compact -d $DIGEST_LIST_DIR -i l:policy \
+    -i i: -o add -p -1 -m immutable -i L:$BIN_PKG_FILES -i u: \
+    -A $RPM_BUILD_ROOT -i e: \
+    -i E:/usr/src \
+    -i E:/boot/efi \
+    -i F:/lib \
+    -i F:/usr/lib \
+    -i F:/lib64 \
+    -i F:/usr/lib64 \
+    -i F:/lib/modules \
+    -i F:/usr/lib/modules \
+    -i F:/lib/firmware \
+    -i F:/usr/lib/firmware
+
+  gen_digest_lists -i M: -t metadata -f compact -d $DIGEST_LIST_DIR.tlv \
+    -i l:policy -i i: -o add -p -1 -m immutable -i L:$BIN_PKG_FILES -i u: \
+    -T -A $RPM_BUILD_ROOT -i e: \
+    -i E:/usr/src \
+    -i E:/boot/efi \
+    -i F:/lib \
+    -i F:/usr/lib \
+    -i F:/lib64 \
+    -i F:/usr/lib64 \
+    -i F:/lib/modules \
+    -i F:/usr/lib/modules \
+    -i F:/lib/firmware \
+    -i F:/usr/lib/firmware
+  ```
+
+#### manage_digest_lists
+
+The `manage_digest_lists` tool is designed to parse and convert binary-format TLV digest list files into a human-readable text format. Below are the command options.
+
+| Option | Value        | Function                                                                                                     |
+| ------ | ------------ | ------------------------------------------------------------------------------------------------------------ |
+| -d     | \<path\>     | Specify the directory containing the TLV digest list files.                                                  |
+| -f     | \<file\>     | Specify the name of the TLV digest list file.                                                                |
+| -p     | dump         | Define the operation type. Currently, only `dump` is supported, which parses and prints the TLV digest list. |
+| -v     | N/A          | Print verbose details.                                                                                       |
+| -h     | N/A          | Display help information.                                                                                    |
+
+**Usage example**
+
+View TLV digest list information.
+
+```shell
+manage_digest_lists -p dump -d /etc/ima/digest_lists.tlv/
+```
+
+## File Format Description
+
+### IMA Policy File Syntax Description
+
+The IMA policy file is a text-based file that can include multiple rule statements separated by newline characters (`\n`). Each rule statement must begin with an **action** keyword, followed by one or more **filter conditions**:
+
+```text
+<action> <filter condition 1> [filter condition 2] [filter condition 3]...
+``` + +The action keyword defines the specific action for the policy. Only one action is allowed per policy. Refer to the table below for specific actions (note that the `action=` prefix can be omitted in practice, for example, use `dont_measure` instead of `action=dont_measure`). + +Supported filter conditions include: + +- `func`: indicates the type of file to be measured or appraised. It is typically used with `mask`. Only one `func` is allowed per policy. + - `FILE_CHECK` can only be paired with `MAY_EXEC`, `MAY_WRITE`, or `MAY_READ`. + - `MODULE_CHECK`, `MMAP_CHECK`, and `BPRM_CHECK` can only be paired with `MAY_EXEC`. + - Other combinations will not take effect. + +- `mask`: specifies the operation on the file that triggers measurement or appraisal. Only one `mask` is allowed per policy. + +- `fsmagic`: represents the hexadecimal magic number of the file system type, as defined in **/usr/include/linux/magic.h** (by default, all file systems are measured unless excluded using `dont_measure` or `dont_appraise`). + +- `fsuuid`: represents the 16-character hexadecimal string of the system device UUID. + +- `objtype`: specifies the file security type. Only one file type is allowed per policy. Compared to `func`, `objtype` offers finer granularity. For example, `obj_type=nova_log_t` refers to files with the SELinux type `nova_log_t`. + +- `uid`: specifies the user (by user ID) performing the operation on the file. Only one `uid` is allowed per policy. + +- `fowner`: specifies the file owner (by user ID). Only one `fowner` is allowed per policy. + +The keywords are detailed as follows. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
KeywordValueDescription
actionmeasureEnable IMA measurement.
actiondont_measureDisable IMA measurement.
actionappraiseEnable IMA appraisal.
actiondont_appraiseDisable IMA appraisal.
actionauditEnable auditing.
funcFILE_CHECKFiles to be opened
funcMODULE_CHECKKernel module files to be loaded
funcMMAP_CHECKShared library files to be mapped into process memory
funcBPRM_CHECKFiles to be executed (excluding script files run via interpreters such as /bin/bash)
funcPOLICY_CHECKIMA policy files to be imported
funcFIRMWARE_CHECKFirmware to be loaded into memory
funcDIGEST_LIST_CHECKDigest list files to be loaded into the kernel
funcKEXEC_KERNEL_CHECKKernel to be switched to using kexec
maskMAY_EXECExecute a file.
maskMAY_WRITEWrite to a file.
maskMAY_READRead a file.
maskMAY_APPENDAppend to a file.
fsmagicfsmagic=xxxHexadecimal magic number representing the file system type
fsuuidfsuuid=xxx16-character hexadecimal string representing the system device UUID
fownerfowner=xxxUser ID of the file owner
uiduid=xxxUser ID of the user performing the operation on the file
obj_typeobj_type=xxx_tFile type based on SELinux labels
pcrpcr=<num>PCR in TPM for extending measurements (defaulting to 10)
appraise_typeimasigIMA appraisal based on signatures
appraise_typemeta_immutableAppraisal based on file extended attributes with signatures (supporting digest lists)
+
+## Usage Instructions
+
+> ![](./public_sys-resources/icon-note.gif) **Note:**
+> Native IMA/EVM is an open source Linux feature. This section offers a concise overview of its basic usage. For further details, consult the upstream Linux IMA wiki.
+
+### Native IMA Usage Instructions
+
+#### IMA Measurement Mode
+
+To enable IMA measurement, configure the measurement policy.
+
+**Step 1:** Specify the measurement policy by configuring boot parameters or manually. For example, configure the IMA policy via boot parameters as follows:
+
+```ini
+ima_policy="tcb"
+```
+
+Alternatively, manually configure the IMA policy like this:
+
+```shell
+echo "measure func=BPRM_CHECK" > /sys/kernel/security/ima/policy
+```
+
+**Step 2:** Reboot the system. You can then check the measurement log in real time to monitor the current measurement status:
+
+```shell
+cat /sys/kernel/security/ima/ascii_runtime_measurements
+```
+
+#### IMA Appraisal Mode
+
+Enter fix mode, complete IMA labeling for files, and then enable log or enforce mode.
+
+**Step 1:** Configure boot parameters and reboot to enter fix mode:
+
+```ini
+ima_appraise=fix ima_policy=appraise_tcb
+```
+
+**Step 2:** Generate IMA extended attributes for all files requiring appraisal:
+
+For immutable files (such as binary program files), use signature mode to write the signature of the file digest value into the IMA extended attribute. For example (where **/path/to/ima.key** is the signing private key matching the IMA certificate):
+
+```shell
+find /usr/bin -fstype ext4 -type f -executable -uid 0 -exec evmctl -a sha256 ima_sign --key /path/to/ima.key '{}' \;
+```
+
+For mutable files (such as data files), use hash mode to write the file digest value into the IMA extended attribute.
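
In hash mode, the value recorded is the digest of the file content (the actual `security.ima` value also carries a short type/algorithm header added by the kernel). As a tool-independent illustration of that digest, it can be previewed with `sha256sum`; the temporary file below is only an example:

```shell
# Preview the SHA-256 content digest that hash mode records for a file.
# (Illustration only; the real security.ima value prefixes the raw digest
# with a short type/algorithm header.)
f=$(mktemp)
: > "$f"    # empty file, so the digest is the well-known empty-input SHA-256
sha256sum "$f" | awk '{print $1}'
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
rm -f "$f"
```
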
IMA supports an automatic labeling mechanism, where accessing the file in fix mode will generate the IMA extended attribute: + +```shell +find / -fstype ext4 -type f -uid 0 -exec dd if='{}' of=/dev/null count=0 status=none \; +``` + +To verify if the file has been successfully labeled with the IMA extended attribute (`security.ima`), use: + +```shell +getfattr -m - -d /sbin/init +``` + +**Step 3:** Configure boot parameters, switch the IMA appraisal mode to log or enforce, and reboot the system: + +```ini +ima_appraise=enforce ima_policy=appraise_tcb +``` + +### IMA Digest List Usage Instructions + +#### Prerequisites + +Before using the IMA digest list feature, install the ima-evm-utils and digest-list-tools packages: + +```shell +yum install ima-evm-utils digest-list-tools +``` + +#### Mechanism Overview + +##### Digest List Files + +After installing RPM packages released by openEuler, digest list files are automatically generated in the **/etc/ima** directory. The following types of files exist: + +**/etc/ima/digest_lists/0-metadata_list-compact-** + +This is the IMA digest list file, generated using the `gen_digest_lists` command (see [gen_digest_lists](#gen_digest_lists) for details). This binary file contains header information and a series of SHA256 hash values, representing the digest values of legitimate file contents and file extended attributes. Once measured or appraised, this file is imported into the kernel, and IMA digest list measurement or appraisal is performed based on the allowlist digest values in this file. + +**/etc/ima/digest_lists/0-metadata_list-rpm-** + +This is the RPM digest list file, **essentially the header information of the RPM package**. After the RPM package is installed, if the IMA digest list file does not contain a signature, the RPM header information is written into this file, and the signature of the header information is written into the `security.ima` extended attribute. 
This allows the authenticity of the RPM header information to be verified through the signature. Since the RPM header information includes the digest value of the digest list file, indirect verification of the digest list is achieved. + +**/etc/ima/digest_lists/0-parser_list-compact-libexec** + +This is the IMA parser digest list file, storing the digest value of the **/usr/libexec/rpm_parser** file. This file is used to establish a trust chain from the RPM digest list to the IMA digest list. The kernel IMA digest list mechanism performs special verification on processes generated by this file. If the process is confirmed to be the `rpm_parser` program, all digest lists imported by it are trusted without requiring signature verification. + +**/etc/ima/digest_lists.sig/0-metadata_list-compact-.sig** + +This is the signature file for the IMA digest list. If this file is included in the RPM package, its content is written into the `security.ima` extended attribute of the corresponding RPM digest list file during the RPM installation phase. This enables signature verification during the IMA digest list import phase. + +**/etc/ima/digest_lists.tlv/0-metadata_list-compact_tlv-** + +This is the TLV digest list file, typically generated alongside the IMA digest list file for target files. It stores the integrity information of the target files (such as file content digest values and file extended attributes). The purpose of this file is to assist users in querying or restoring the integrity information of target files. + +##### Digest List File Signing Methods + +In IMA digest list appraisal mode, the IMA digest list file must undergo signature verification before it can be imported into the kernel and used for subsequent file whitelist matching. The IMA digest list file supports the following signing methods. 
+
+**(1) IMA extended attribute signature**
+
+This is the native IMA signing mechanism, where the signature information is stored in the `security.ima` extended attribute in a specific format. It can be generated and added using the `evmctl` command (**/path/to/file** is the file to be signed):
+
+```shell
+evmctl ima_sign --key /path/to/ima.key -a sha256 /path/to/file
+```
+
+Alternatively, the `-f` parameter can be added to store the signature and header information in a separate file:
+
+```shell
+evmctl ima_sign -f --key /path/to/ima.key -a sha256 /path/to/file
+```
+
+When IMA digest list appraisal mode is enabled, you can directly write the path of a digest list file to the kernel interface to import or delete it. This process automatically triggers appraisal, performing signature verification on the digest list file content based on the `security.ima` extended attribute:
+
+```shell
+# Import the IMA digest list file.
+echo /path/to/digest_list > /sys/kernel/security/ima/digest_list_data
+# Delete the IMA digest list file.
+echo /path/to/digest_list > /sys/kernel/security/ima/digest_list_data_del
+```
+
+**(2) IMA digest list appended signature (default in openEuler 24.03 LTS)**
+
+Starting with openEuler 24.03 LTS, IMA-specific signing keys are supported, and Cryptographic Message Syntax (CMS) signing is used. Since the signature information includes a certificate chain, it may exceed the length limit for the `security.ima` extended attribute. Therefore, a signature appending method similar to kernel module insertion is adopted:
+
+
+
+The signing mechanism is as follows:
+
+1. Append the CMS signature information to the end of the IMA digest list file.
+
+2. Fill in the structure and append it to the end of the signature information.
The structure is defined as follows: + + ```c + struct module_signature { + u8 algo; /* Public-key crypto algorithm [0] */ + u8 hash; /* Digest algorithm [0] */ + u8 id_type; /* Key identifier type [PKEY_ID_PKCS7] */ + u8 signer_len; /* Length of signer's name [0] */ + u8 key_id_len; /* Length of key identifier [0] */ + u8 __pad[3]; + __be32 sig_len; /* Length of signature data */ + }; + ``` + +3. Add the `"~Module signature appended~\n"` magic string. + +A reference script for this step is as follows: + +```shell +#!/bin/bash +DIGEST_FILE=$1 # Path to the IMA digest list file +SIG_FILE=$2 # Path to save the IMA digest list signature information +OUT=$3 # Output path for the digest list file after adding the signature information + +cat $DIGEST_FILE $SIG_FILE > $OUT +echo -n -e "\x00\x00\x02\x00\x00\x00\x00\x00" >> $OUT +echo -n -e $(printf "%08x" "$(ls -l $SIG_FILE | awk '{print $5}')") | xxd -r -ps >> $OUT +echo -n "~Module signature appended~" >> $OUT +echo -n -e "\x0a" >> $OUT +``` + +**(3) Reusing RPM signatures (default in openEuler 22.03 LTS)** + +openEuler 22.03 LTS supports reusing the RPM signing mechanism to sign IMA digest list files. This aims to address the lack of dedicated IMA signing keys in the version. Users do not need to be aware of this signing process. When an RPM package contains an IMA digest list file but no IMA digest list signature file, this signing mechanism is automatically used. The core principle is to verify the IMA digest list through the RPM package header information. + +For RPM packages released by openEuler, each package file can consist of two parts: + +- **RPM header information:** Stores RPM package attribute fields, including the package name and file digest list. The integrity of this information is ensured by the RPM header signature. +- **RPM files:** The actual files installed into the system, including IMA digest list files generated during the build phase. 
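
The appended-signature trailer from method (2) can be sanity-checked offline. The following sketch builds a synthetic signed file the same way as the reference script and then parses the trailer; the function name, demo payload, and 7-byte stand-in signature are all illustrative:

```shell
# Sketch: parse the appended-signature trailer described in method (2).
# File layout: [digest list][CMS signature][12-byte module_signature][magic].
check_appended_sig() {
    file=$1
    # The last 28 bytes must be "~Module signature appended~\n".
    if [ "$(tail -c 28 "$file" | head -c 27)" != "~Module signature appended~" ]; then
        echo "no appended signature"
        return 1
    fi
    # sig_len: big-endian u32 in the last 4 bytes of module_signature,
    # which sits immediately before the 28-byte magic string.
    sig_len=$((16#$(tail -c 32 "$file" | head -c 4 | od -An -tx1 | tr -d ' \n')))
    echo "appended signature: $sig_len bytes"
}

demo=$(mktemp)
printf 'digest-list-payload' > "$demo"                # stand-in digest list data
printf 'FAKESIG' >> "$demo"                           # stand-in CMS signature (7 bytes)
printf '\x00\x00\x02\x00\x00\x00\x00\x00' >> "$demo"  # module_signature: algo/hash/id_type/lengths/pad
printf '\x00\x00\x00\x07' >> "$demo"                  # sig_len = 7, big endian
printf '~Module signature appended~\n' >> "$demo"     # magic string
check_appended_sig "$demo"                            # prints "appended signature: 7 bytes"
rm -f "$demo"
```
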
+
+
+
+During RPM package installation, if the RPM process detects that the digest list file in the package does not contain a signature, it creates an RPM digest list file in the **/etc/ima** directory, writes the RPM header information into the file content, and writes the RPM header signature into the `security.ima` extended attribute of the file. This allows indirect verification and import of the IMA digest list through the RPM digest list.
+
+##### IMA Digest List Import
+
+When IMA measurement mode is enabled, importing IMA digest list files does not require signature verification. The file path can be directly written to the kernel interface to import or delete the digest list:
+
+```shell
+# Import the IMA digest list file.
+echo /path/to/digest_list > /sys/kernel/security/ima/digest_list_data
+# Delete the IMA digest list file.
+echo /path/to/digest_list > /sys/kernel/security/ima/digest_list_data_del
+```
+
+When IMA appraisal mode is enabled, importing a digest list requires signature verification. Depending on the signing method, there are two import approaches.
+
+**Direct import**
+
+For IMA digest list files that already contain signature information (IMA extended attribute signature or IMA digest list appended signature), the file path can be directly written to the kernel interface to import or delete the digest list. This process automatically triggers appraisal, performing signature verification on the digest list file content based on the `security.ima` extended attribute:
+
+```shell
+# Import the IMA digest list file.
+echo /path/to/digest_list > /sys/kernel/security/ima/digest_list_data
+# Delete the IMA digest list file.
+echo /path/to/digest_list > /sys/kernel/security/ima/digest_list_data_del
+```
+
+**Using `upload_digest_lists` for import**
+
+For IMA digest list files that reuse RPM signatures, the `upload_digest_lists` command must be used for import. The specific commands are as follows (note that the specified path should point to the corresponding RPM digest list):
+
+```shell
+# Import the IMA digest list file.
+upload_digest_lists add /path/to/rpm_digest_list
+# Delete the IMA digest list file.
+upload_digest_lists del /path/to/rpm_digest_list
+```
+
+This process is relatively complex and requires the following prerequisites:
+
+1. The system has already imported the digest lists (including IMA digest lists and IMA PARSER digest lists) from the `digest_list_tools` package released by openEuler.
+
+2. An IMA appraisal policy for application execution (`BPRM_CHECK` policy) has been configured.
+
+#### Operation Guide
+
+##### Automatic Digest List Generation During RPM Build
+
+The openEuler RPM toolchain supports the `%__brp_digest_list` macro, which is configured as follows:
+
+```text
+%__brp_digest_list /usr/lib/rpm/brp-digest-list %{buildroot}
+```
+
+When this macro is configured, the **/usr/lib/rpm/brp-digest-list** script is invoked when a user runs the `rpmbuild` command to build a package. This script handles the generation and signing of digest lists. By default, openEuler generates digest lists for critical files such as executables, dynamic libraries, and kernel modules. You can also modify the script to customize the scope of digest list generation and specify signing keys. The following example uses a user-defined signing key (**/path/to/ima.key**) to sign the digest list.
+
+```shell
+...... (line 66)
+DIGEST_LIST_TLV_PATH="$DIGEST_LIST_DIR.tlv/0-metadata_list-compact_tlv-$(basename $BIN_PKG_FILES)"
+[ -f $DIGEST_LIST_TLV_PATH ] || exit 0
+
+chmod 644 $DIGEST_LIST_TLV_PATH
+echo $DIGEST_LIST_TLV_PATH
+
+evmctl ima_sign -f --key /path/to/ima.key -a sha256 $DIGEST_LIST_PATH &> /dev/null
+chmod 400 $DIGEST_LIST_PATH.sig
+mkdir -p $DIGEST_LIST_DIR.sig
+mv $DIGEST_LIST_PATH.sig $DIGEST_LIST_DIR.sig
+echo $DIGEST_LIST_DIR.sig/0-metadata_list-compact-$(basename $BIN_PKG_FILES).sig
+```
+
+##### IMA Digest List Measurement
+
+Enable IMA digest list measurement using the following steps:
+
+**Step 1:** Configure the boot parameters to enable IMA measurement.
The process is similar to **native IMA measurement**, but a specific TPM PCR must be configured for measurement. An example boot parameter is as follows: + +```ini +ima_policy=exec_tcb ima_digest_list_pcr=11 +``` + +**Step 2:** Import the IMA digest list. For example, using the digest list of the Bash package: + +```shell +echo /etc/ima/digest_lists/0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64 > /sys/kernel/security/ima/digest_list_data +``` + +The IMA digest list measurement logs can be queried as follows: + +```shell +cat /sys/kernel/security/ima/ascii_runtime_measurements +``` + +After importing the IMA digest list, if the digest value of the measured file is included in the IMA digest list, no additional measurement logs will be recorded. + +##### IMA Digest List Appraisal + +###### Boot With the Default Policy + +You can configure the `ima_policy` parameter in the boot parameters to specify the IMA default policy. After IMA initialization during kernel startup, the default policy is immediately applied for appraisal. You can enable IMA digest list appraisal using the following steps: + +**Step 1:** Execute the `dracut` command to write the digest list file into initrd: + +```shell +dracut -f -e xattr +``` + +**Step 2:** Configure the boot parameters and IMA policy. 
Typical configurations are as follows:
+
+```ini
+# IMA appraisal in log/enforce mode based on digest lists, protecting only file content, with the default policy set to appraise_exec_tcb
+ima_appraise=log ima_appraise_digest_list=digest-nometadata ima_policy="appraise_exec_tcb" initramtmpfs module.sig_enforce
+ima_appraise=enforce ima_appraise_digest_list=digest-nometadata ima_policy="appraise_exec_tcb" initramtmpfs module.sig_enforce
+# IMA appraisal in log/enforce mode based on digest lists, protecting file content and extended attributes, with the default policy set to appraise_exec_tcb+appraise_exec_immutable
+ima_appraise=log-evm ima_appraise_digest_list=digest ima_policy="appraise_exec_tcb|appraise_exec_immutable" initramtmpfs evm=x509 evm=complete module.sig_enforce
+ima_appraise=enforce-evm ima_appraise_digest_list=digest ima_policy="appraise_exec_tcb|appraise_exec_immutable" initramtmpfs evm=x509 evm=complete module.sig_enforce
+```
+
+Reboot the system to enable IMA digest list appraisal. During the boot process, the IMA policy will take effect, and the IMA digest list files will be automatically imported.
+
+###### Boot Without a Default Policy
+
+You can omit the `ima_policy` parameter in the boot parameters, indicating that no default policy is applied during system startup. The IMA appraisal mechanism will wait for you to import a policy before becoming active.
+
+**Step 1:** Configure the boot parameters.
Typical configurations are as follows: + +```ini +# IMA appraisal in log/enforce mode based on digest lists, protecting only file content, with no default policy +ima_appraise=log ima_appraise_digest_list=digest-nometadata initramtmpfs +ima_appraise=enforce ima_appraise_digest_list=digest-nometadata initramtmpfs +# IMA appraisal in log/enforce mode based on digest lists, protecting file content and extended attributes, with no default policy +ima_appraise=log-evm ima_appraise_digest_list=digest initramtmpfs evm=x509 evm=complete +ima_appraise=enforce-evm ima_appraise_digest_list=digest initramtmpfs evm=x509 evm=complete +``` + +Reboot the system. At this point, since no policy is configured, IMA appraisal will not be active. + +**Step 2:** Import the IMA policy by writing the full path of the policy file to the kernel interface: + +```shell +echo /path/to/policy > /sys/kernel/security/ima/policy +``` + +> ![](./public_sys-resources/icon-note.gif) **Note:** +> +> The policy must include certain fixed rules. Refer to the following policy templates. +> +> For openEuler 22.03 LTS (reusing RPM signatures): + +```ini +# Do not appraise access behavior for the securityfs file system. +dont_appraise fsmagic=0x73636673 +# Other user-defined dont_appraise rules +...... +# Appraise imported IMA digest list files. +appraise func=DIGEST_LIST_CHECK appraise_type=imasig +# Appraise all files opened by the /usr/libexec/rpm_parser process. +appraise parser appraise_type=imasig +# Appraise executed applications (triggering appraisal for /usr/libexec/rpm_parser execution, additional conditions such as SELinux labels can be added). +appraise func=BPRM_CHECK appraise_type=imasig +# Other user-defined appraise rules +...... +``` + +> For openEuler 24.03 LTS (IMA extended attribute signatures or appended signatures): + +```ini +# User-defined dont_appraise rules +...... +# Appraise imported IMA digest list files. 
+appraise func=DIGEST_LIST_CHECK appraise_type=imasig|modsig
+# Other user-defined appraise rules.
+......
+```
+
+**Step 3:** Import the IMA digest list files. Different import methods are required for digest lists with different signing methods.
+
+> For openEuler 22.03 LTS (IMA digest lists reusing RPM signatures):
+
+```shell
+# Import digest lists from the digest_list_tools package
+echo /etc/ima/digest_lists/0-metadata_list-compact-digest-list-tools-0.3.95-13.x86_64 > /sys/kernel/security/ima/digest_list_data
+echo /etc/ima/digest_lists/0-parser_list-compact-libexec > /sys/kernel/security/ima/digest_list_data
+# Import other RPM digest lists
+upload_digest_lists add /etc/ima/digest_lists
+# Check the number of imported digest list entries
+cat /sys/kernel/security/ima/digests_count
+```
+
+> For openEuler 24.03 LTS (IMA digest lists with appended signatures):
+
+```shell
+# Write each digest list path to the kernel interface in a separate shell so every
+# file is imported with its own write (an inline "> ..." after -exec would instead
+# redirect the output of find itself).
+find /etc/ima/digest_lists -name "0-metadata_list-compact-*" -exec sh -c 'echo "$1" > /sys/kernel/security/ima/digest_list_data' _ {} \;
+```
+
+##### Software Upgrade
+
+After the IMA digest list feature is enabled, files covered by IMA protection require synchronized updates to their digest lists during upgrades. For RPM packages released by openEuler, the addition, update, and deletion of digest lists within the RPM packages are automatically handled during package installation, upgrade, or removal, without requiring manual user intervention. For user-maintained non-RPM packages, manual import of digest lists is necessary.
+
+##### User Certificate Import
+
+You can import custom certificates to measure or appraise software not released by openEuler.
The openEuler IMA appraisal mode supports signature verification using certificates from the following two key rings:
+
+- **builtin_trusted_keys**: root certificates pre-configured during kernel compilation
+- **ima**: imported via **/etc/keys/x509_ima.der** in initrd, which must be a subordinate certificate of any certificate in the `builtin_trusted_keys` key ring
+
+**Steps to import a root certificate into the builtin_trusted_keys key ring:**
+
+**Step 1:** Generate a root certificate using the `openssl` command:
+
+```shell
+echo 'subjectKeyIdentifier=hash' > root.cfg
+openssl genrsa -out root.key 4096
+openssl req -new -sha256 -key root.key -out root.csr -subj "/C=AA/ST=BB/O=CC/OU=DD/CN=openeuler test ca"
+openssl x509 -req -days 3650 -extfile root.cfg -signkey root.key -in root.csr -out root.crt
+openssl x509 -in root.crt -out root.der -outform DER
+```
+
+**Step 2:** Obtain the openEuler kernel source code, using the latest OLK-5.10 branch as an example:
+
+```shell
+git clone https://gitee.com/openeuler/kernel.git -b OLK-5.10
+```
+
+**Step 3:** Navigate to the source code directory and copy the PEM root certificate into it (`CONFIG_SYSTEM_TRUSTED_KEYS` expects a PEM-format certificate):
+
+```shell
+cd kernel
+cp /path/to/root.crt .
+```
+
+Modify the `CONFIG_SYSTEM_TRUSTED_KEYS` option in the config file:
+
+```shell
+CONFIG_SYSTEM_TRUSTED_KEYS="./root.crt"
+```
+
+**Step 4:** Compile and install the kernel (steps omitted; ensure digest lists are generated for kernel modules).
+ +**Step 5:** Verify certificate import after rebooting: + +```shell +keyctl show %:.builtin_trusted_keys +``` + +**Steps to import a subordinate certificate into the ima key ring (requires the root certificate to be imported into the builtin_trusted_keys key ring first):** + +**Step 1:** Generate a subordinate certificate based on the root certificate using the `openssl` command: + +```shell +echo 'subjectKeyIdentifier=hash' > ima.cfg +echo 'authorityKeyIdentifier=keyid,issuer' >> ima.cfg +echo 'keyUsage=digitalSignature' >> ima.cfg +openssl genrsa -out ima.key 4096 +openssl req -new -sha256 -key ima.key -out ima.csr -subj "/C=AA/ST=BB/O=CC/OU=DD/CN=openeuler test ima" +openssl x509 -req -sha256 -CAcreateserial -CA root.crt -CAkey root.key -extfile ima.cfg -in ima.csr -out ima.crt +openssl x509 -outform DER -in ima.crt -out x509_ima.der +``` + +**Step 2:** Copy the IMA certificate to the **/etc/keys** directory: + +```shell +mkdir -p /etc/keys/ +cp x509_ima.der /etc/keys/ +``` + +**Step 3:** Package initrd, embedding the IMA certificate and digest lists into the initrd image: + +```shell +echo 'install_items+=" /etc/keys/x509_ima.der "' >> /etc/dracut.conf +dracut -f -e xattr +``` + +**Step 4:** Verify certificate import after rebooting: + +```shell +keyctl show %:.ima +``` + +#### Typical Use Cases + +Depending on the operating mode, IMA digest lists can be applied to trusted measurement scenarios and user-space secure boot scenarios. + +##### Trusted Measurement Scenario + +The trusted measurement scenario primarily relies on the IMA digest list measurement mode, where the kernel and hardware root of trust (RoT) like the TPM jointly measure critical files. This is combined with a remote attestation toolchain to prove the trusted state of current system files: + +![](./figures/ima_trusted_measurement.png) + +**Runtime phase** + +- During software package deployment, digest lists are imported synchronously. 
IMA measures the digest lists and records the measurement logs (synchronously extended to the TPM). + +- When an application is executed, IMA measurement is triggered. If the file digest matches the allowlist, it is ignored. Otherwise, the measurement log is recorded (synchronously extended to the TPM). + +**Attestation phase (industry standard process)** + +- The remote attestation server sends an attestation request, and the client returns the IMA measurement logs along with the signed TPM PCR values. + +- The remote attestation server sequentially verifies the PCR (signature verification), measurement logs (PCR replay), and file measurement information (comparison with local baseline values), reporting the results to the security center. + +- The security management center takes corresponding actions, such as event notification or node isolation. + +##### User-Space Secure Boot Scenario + +The user-space secure boot scenario primarily relies on the IMA digest list appraisal mode, similar to secure boot. It aims to perform integrity checks on executed applications or accessed critical files. If the check fails, access is denied: + +![](./figures/ima_secure_boot.png) + +**Runtime phase** + +- During application deployment, digest lists are imported. After the kernel verifies the signature, the digest values are loaded into the kernel hash table as an allowlist. + +- When an application is executed, IMA verification is triggered. The file hash value is calculated, and if it matches the baseline value, access is allowed. Otherwise, the event is logged or access is denied. + +## Appendix + +### Kernel Compilation Options + +The compilation options provided by native IMA/EVM and their descriptions are as follows. + +| Compilation Option | Functionality | +| :------------------------------- | :------------------------------------------- | +| CONFIG_INTEGRITY | Enable IMA/EVM compilation. | +| CONFIG_INTEGRITY_SIGNATURE | Enable IMA signature verification. 
| +| CONFIG_INTEGRITY_ASYMMETRIC_KEYS | Enable IMA asymmetric signature verification. | +| CONFIG_INTEGRITY_TRUSTED_KEYRING | Enable IMA/EVM keyring. | +| CONFIG_INTEGRITY_AUDIT | Compile IMA audit module. | +| CONFIG_IMA | Enable IMA. | +| CONFIG_IMA_WRITE_POLICY | Allow updating IMA policies during runtime. | +| CONFIG_IMA_MEASURE_PCR_IDX | Allow specifying IMA measurement PCR index. | +| CONFIG_IMA_LSM_RULES | Allow configuring LSM rules. | +| CONFIG_IMA_APPRAISE | Enable IMA appraisal. | +| CONFIG_IMA_APPRAISE_BOOTPARAM | Enable IMA appraisal boot parameters. | +| CONFIG_EVM | Enable EVM. | + +The compilation options provided by the openEuler IMA digest list feature and their descriptions are as follows (enabled by default in openEuler kernel compilation). + +| Compilation Option | Functionality | +| :----------------- | :----------------------------- | +| CONFIG_DIGEST_LIST | Enable the IMA digest list feature. | + +### IMA Digest List Root Certificate + +In openEuler 22.03, the RPM key pair is used to sign IMA digest lists. To ensure the IMA feature is usable out-of-the-box, the openEuler kernel compilation process imports the RPM root certificate (PGP certificate) into the kernel by default. This includes the OBS certificate used in older versions and the openEuler certificate introduced in openEuler 22.03 LTS SP1: + +```shell +# cat /proc/keys | grep PGP +1909b4ad I------ 1 perm 1f030000 0 0 asymmetri private OBS b25e7f66: PGP.rsa b25e7f66 [] +2f10cd36 I------ 1 perm 1f030000 0 0 asymmetri openeuler fb37bc6f: PGP.rsa fb37bc6f [] +``` + +Since the current kernel does not support importing PGP subkeys, and the openEuler certificate uses subkey signing, the openEuler kernel preprocesses the certificate before compilation by extracting the subkey and importing it into the kernel. 
The specific process can be found in the [**process_pgp_certs.sh**](https://gitee.com/src-openeuler/kernel/blob/openEuler-22.03-LTS-SP1/process_pgp_certs.sh) script in the kernel package repository. + +Starting from openEuler 24.03, dedicated IMA certificates are supported. For details, refer to the relevant section in [Introduction to Signature Certificates](../CertSignature/introduction_to_signature_certificates.md). + +If you do not intend to use the IMA digest list feature or prefer other keys for signing/verification, you can remove the related code and implement your own kernel root certificate configuration. diff --git a/docs/en/Server/Security/TrustedComputing/Menu/index.md b/docs/en/Server/Security/TrustedComputing/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..89457976e71863de5f031612e7c6de1b7bad3028 --- /dev/null +++ b/docs/en/Server/Security/TrustedComputing/Menu/index.md @@ -0,0 +1,9 @@ +--- +headless: true +--- +- [Trusted Computing]({{< relref "./trusted-computing.md" >}}) + - [Kernel Integrity Measurement Architecture (IMA)]({{< relref "./IMA.md" >}}) + - [Dynamic Integrity Measurement (DIM)]({{< relref "./DIM.md" >}}) + - [Remote Attestation (Kunpeng Security Library)]({{< relref "./remote-attestation-kunpeng-security-library.md" >}}) + - [Trusted Platform Control Module]({{< relref "./TPCM.md" >}}) + - [Common Issues and Solutions]({{< relref "./trusted-computing-common-issues-and-solutions.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/Administration/TPCM.md b/docs/en/Server/Security/TrustedComputing/TPCM.md similarity index 95% rename from docs/en/docs/Administration/TPCM.md rename to docs/en/Server/Security/TrustedComputing/TPCM.md index a5612cb6dbf36d43dc42a6f5c5d8cb9a58f9ab00..e92a4bbb8183fb04dc992708257fcbce2e10a9a2 100644 --- a/docs/en/docs/Administration/TPCM.md +++ b/docs/en/Server/Security/TrustedComputing/TPCM.md @@ -16,12 +16,12 @@ The overall system design consists of the protection 
module, computing module, a - Trusted management center: This centralized management platform, provided by a third-party vendor, formulates, delivers, maintains, and stores protection policies and reference values for trusted computing nodes. - Protection module: This module operates independently of the computing module and provides trusted computing protection functions that feature active measurement and active control to implement security protection during computing. The protection module consists of the TPCM main control firmware, TCB, and TCM. As a key module for implementing trust protection in a trusted computing node, the TPCM can be implemented in multiple forms, such as cards, chips, and IP cores. It contains a CPU and memory, firmware, and software such as an OS and trusted function components. The TPCM operates alongside the computing module and works according to the built-in protection policy to monitor the trust of protected resources, such as hardware, firmware, and software of the computing module. The TPCM is the Root of Trust in a trusted computing node. -- Computing module: This module includes hardware, an OS, and application layer software. The running of the OS can be divided into the boot phase and the running phase. In the boot phase, GRUB2 and shim of openEuler support the reliability measurement capability, which protects boot files such as shim, GRUB2, kernel, and initramfs. In the running phase, openEuler supports the deployment of the trusted verification agent (provided by third-party vendor HTTC). The agent sends -data to the TPCM for trusted measurement and protection in the running phase. + +- Computing module: This module includes hardware, an OS, and application layer software. The running of the OS can be divided into the boot phase and the running phase. In the boot phase, GRUB2 and shim of openEuler support the reliability measurement capability, which protects boot files such as shim, GRUB2, kernel, and initramfs. 
In the running phase, openEuler supports the deployment of the trusted verification agent (provided by third-party vendor HTTC). The agent sends data to the TPCM for trusted measurement and protection in the running phase. The TPCM interacts with other components as follows: -1. The TPCM hardware, firmware, and software provide an operating environment for the TSB. The trusted function components of the TPCM provide support for the TSB to implement measurement, control, support, and decision making based on the policy library interpretation requirements. +1. The TPCM hardware, firmware, and software provide an operating environment for the TSB. The trusted function components of the TPCM provide support for the TSB to implement measurement, control, support, and decision-making based on the policy library interpretation requirements. 2. The TPCM accesses the TCM for trusted cryptography functions to complete computing tasks such as trusted verification, measurement, and confidential storage, and provides services for TCM access. 3. The TPCM connects to the trusted management center through the management interface to implement protection policy management and trusted report processing. 4. The TPCM uses the built-in controller and I/O port to interact with the controller of the computing module through the bus to actively monitor the computing module. 
@@ -29,8 +29,7 @@ The TPCM interacts with other components as follows: ## Constraints -Supported server: TaiShan 200 server (model 2280) VF - +Supported server: TaiShan 200 server (model 2280) Supported BMC card: BC83SMMC ## Application Scenarios diff --git a/docs/en/docs/Administration/figures/RA-arch-1.png b/docs/en/Server/Security/TrustedComputing/figures/RA-arch-1.png similarity index 100% rename from docs/en/docs/Administration/figures/RA-arch-1.png rename to docs/en/Server/Security/TrustedComputing/figures/RA-arch-1.png diff --git a/docs/en/docs/Administration/figures/RA-arch-2.png b/docs/en/Server/Security/TrustedComputing/figures/RA-arch-2.png similarity index 100% rename from docs/en/docs/Administration/figures/RA-arch-2.png rename to docs/en/Server/Security/TrustedComputing/figures/RA-arch-2.png diff --git a/docs/en/docs/Administration/figures/TPCM.png b/docs/en/Server/Security/TrustedComputing/figures/TPCM.png similarity index 100% rename from docs/en/docs/Administration/figures/TPCM.png rename to docs/en/Server/Security/TrustedComputing/figures/TPCM.png diff --git a/docs/en/docs/Administration/figures/dim_architecture.jpg b/docs/en/Server/Security/TrustedComputing/figures/dim_architecture.jpg similarity index 100% rename from docs/en/docs/Administration/figures/dim_architecture.jpg rename to docs/en/Server/Security/TrustedComputing/figures/dim_architecture.jpg diff --git a/docs/en/docs/Administration/figures/ima-modsig.png b/docs/en/Server/Security/TrustedComputing/figures/ima-modsig.png similarity index 100% rename from docs/en/docs/Administration/figures/ima-modsig.png rename to docs/en/Server/Security/TrustedComputing/figures/ima-modsig.png diff --git a/docs/en/Server/Security/TrustedComputing/figures/ima_digest_list_flow.png b/docs/en/Server/Security/TrustedComputing/figures/ima_digest_list_flow.png new file mode 100644 index 0000000000000000000000000000000000000000..11711ca21c6b327c3d347ad4c389d037a6c2c6ae Binary files /dev/null and 
b/docs/en/Server/Security/TrustedComputing/figures/ima_digest_list_flow.png differ diff --git a/docs/en/Server/Security/TrustedComputing/figures/ima_digest_list_pkg.png b/docs/en/Server/Security/TrustedComputing/figures/ima_digest_list_pkg.png new file mode 100644 index 0000000000000000000000000000000000000000..8a2128add583d3c25ee5f281bb882c94f23b97c7 Binary files /dev/null and b/docs/en/Server/Security/TrustedComputing/figures/ima_digest_list_pkg.png differ diff --git a/docs/en/Server/Security/TrustedComputing/figures/ima_priv_key.png b/docs/en/Server/Security/TrustedComputing/figures/ima_priv_key.png new file mode 100644 index 0000000000000000000000000000000000000000..c939b8e2e8bcd30869f938161ea1edbccd9c89c4 Binary files /dev/null and b/docs/en/Server/Security/TrustedComputing/figures/ima_priv_key.png differ diff --git a/docs/en/Server/Security/TrustedComputing/figures/ima_rpm.png b/docs/en/Server/Security/TrustedComputing/figures/ima_rpm.png new file mode 100644 index 0000000000000000000000000000000000000000..6c4b620ded02ee96357eb587890555af5a319e51 Binary files /dev/null and b/docs/en/Server/Security/TrustedComputing/figures/ima_rpm.png differ diff --git a/docs/en/Server/Security/TrustedComputing/figures/ima_secure_boot.png b/docs/en/Server/Security/TrustedComputing/figures/ima_secure_boot.png new file mode 100644 index 0000000000000000000000000000000000000000..85b959ff1da0f4bcf919f6fea712a0c053f7ad01 Binary files /dev/null and b/docs/en/Server/Security/TrustedComputing/figures/ima_secure_boot.png differ diff --git a/docs/en/Server/Security/TrustedComputing/figures/ima_sig_verify.png b/docs/en/Server/Security/TrustedComputing/figures/ima_sig_verify.png new file mode 100644 index 0000000000000000000000000000000000000000..e0d7ff55ab93dca65763881ba4ff136b85521123 Binary files /dev/null and b/docs/en/Server/Security/TrustedComputing/figures/ima_sig_verify.png differ diff --git a/docs/en/Server/Security/TrustedComputing/figures/ima_tpm.png 
b/docs/en/Server/Security/TrustedComputing/figures/ima_tpm.png new file mode 100644 index 0000000000000000000000000000000000000000..931440ebdb8a8c993a2f9ef331b214b40d8f9535 Binary files /dev/null and b/docs/en/Server/Security/TrustedComputing/figures/ima_tpm.png differ diff --git a/docs/en/Server/Security/TrustedComputing/figures/ima_trusted_measurement.png b/docs/en/Server/Security/TrustedComputing/figures/ima_trusted_measurement.png new file mode 100644 index 0000000000000000000000000000000000000000..e64224fdf4b99429aeabb87947de2c9f23f1df14 Binary files /dev/null and b/docs/en/Server/Security/TrustedComputing/figures/ima_trusted_measurement.png differ diff --git a/docs/en/docs/Administration/figures/trusted_chain.png b/docs/en/Server/Security/TrustedComputing/figures/trusted_chain.png similarity index 100% rename from docs/en/docs/Administration/figures/trusted_chain.png rename to docs/en/Server/Security/TrustedComputing/figures/trusted_chain.png diff --git a/docs/en/Server/Security/TrustedComputing/public_sys-resources/icon-note.gif b/docs/en/Server/Security/TrustedComputing/public_sys-resources/icon-note.gif new file mode 100644 index 0000000000000000000000000000000000000000..6314297e45c1de184204098efd4814d6dc8b1cda Binary files /dev/null and b/docs/en/Server/Security/TrustedComputing/public_sys-resources/icon-note.gif differ diff --git a/docs/en/Server/Security/TrustedComputing/remote-attestation-kunpeng-security-library.md b/docs/en/Server/Security/TrustedComputing/remote-attestation-kunpeng-security-library.md new file mode 100644 index 0000000000000000000000000000000000000000..9d6cb0b519514e2261d6ce2569c6ee41bb02d6cf --- /dev/null +++ b/docs/en/Server/Security/TrustedComputing/remote-attestation-kunpeng-security-library.md @@ -0,0 +1,417 @@ +# Remote Attestation (Kunpeng Security Library) + +## Introduction + +This project develops basic security software components running on Kunpeng processors. 
In the early stage, the project focuses on trusted computing fields such as remote attestation to empower security developers in the community. + +## Software Architecture + +On platforms without TEE enabled, this project provides the platform remote attestation feature, and its software architecture is shown in the following figure: + +![img](./figures/RA-arch-1.png) + +On platforms with TEE enabled, this project provides the TEE remote attestation feature, and its software architecture is shown in the following figure: + +![img](./figures/RA-arch-2.png) + +## Installation and Configuration + +1. Run the following command to install the RPM packages using Yum: + + ```shell + yum install kunpengsecl-ras kunpengsecl-rac kunpengsecl-rahub kunpengsecl-qcaserver kunpengsecl-attester kunpengsecl-tas kunpengsecl-devel + ``` + +2. Prepare the database environment. Go to the `/usr/share/attestation/ras` directory and run the `prepare-database-env.sh` script to automatically configure the database environment. + +3. The configuration files required for program running are stored in three paths: current path `./config.yaml`, home path `${HOME}/.config/attestation/ras(rac)(rahub)(qcaserver)(attester)(tas)/config.yaml`, and system path `/etc/attestation/ras(rac)(rahub)(qcaserver)(attester)(tas)/config.yaml`. + +4. (Optional) To create a home directory configuration file, run the `prepare-ras(rac)(hub)(qca)(attester)(tas)conf-env.sh` script in `/usr/share/attestation/ras(rac)(rahub)(qcaserver)(attester)(tas)` after installing the RPM package. + +## Options + +### RAS Boot Options + +Run the `ras` command to start the RAS program. Note that you need to provide the ECDSA public key in the current directory and name it `ecdsakey.pub`. Options are as follows: + +```console + -H --https HTTP/HTTPS mode switch. The default value is https(true), false=http. + -h --hport RESTful API port listened by RAS in HTTPS mode. 
+ -p, --port string Client API port listened by RAS. + -r, --rest string RESTful API port listened by RAS in HTTP mode. + -T, --token Generates a verification code for test and exits. + -v, --verbose Prints more detailed RAS runtime log information. + -V, --version Prints the RAS version and exits. +``` + +### RAC Boot Options + +Run the `sudo raagent` command to start the RAC program. Note that the sudo permission is required to enable the physical TPM module. Options are as follows: + +```console + -s, --server string Specifies the RAS service port to be connected. + -t, --test Starts in test mode. + -v, --verbose Prints more detailed RAC runtime log information. + -V, --version Prints the RAC version and exits. + -i, --imalog Specifies the path of the IMA file. + -b, --bioslog Specifies the path of the BIOS file. + -T, --tatest Starts in TA test mode. +``` + +**Note:** +>1. To use the TEE remote attestation feature, do not start RAC in TA test mode. Place the UUID, the TCB usage flag, and the mem_hash and img_hash of the TA to be attested, in that order, in the **talist** file under the RAC execution path. In addition, pre-install the **libqca.so** and **libteec.so** libraries provided by the TEE team. The format of the **talist** file is as follows: +> +>```text +>e08f7eca-e875-440e-9ab0-5f381136c600 false ccd5160c6461e19214c0d8787281a1e3c4048850352abe45ce86e12dd3df9fde 46d5019b0a7ffbb87ad71ea629ebd6f568140c95d7b452011acfa2f9daf61c7a +>``` +> +>2. If you do not use the TEE remote attestation feature, copy the **libqca.so** and **libteec.so** libraries from the `${DESTDIR}/usr/share/attestation/qcaserver` path to `/usr/lib` or `/usr/lib64`, and start RAC in TA test mode. + +### QCA Boot Options + +Run the `${DESTDIR}/usr/bin/qcaserver` command to start the QCA program. Note that to start QTA normally, the full path of `qcaserver` must be used, and the CA path parameter in QTA must match this path. 
Options are as follows: + +```console + -C, --scenario int Sets the application scenario of the program, The default value is sce_no_as(0), 1=sce_as_no_daa, 2=sce_as_with_daa. + -S, --server string Specifies the open server address/port. +``` + +### ATTESTER Boot Options + +Run the `attester` command to start the ATTESTER program. Options are as follows: + +```console + -B, --basevalue string Sets the base value file read path + -M, --mspolicy int Sets the measurement strategy, which defaults to -1 and needs to be specified manually. 1=compare only img-hash values, 2=compare only hash values, and 3=compare both img-hash and hash values at the same time. + -S, --server string Specifies the address of the server to connect to. + -U, --uuid int Specifies the trusted apps to verify. + -V, --version Prints the program version and exit. + -T, --test Reads fixed nonce values to match currently hard-coded trusted reports. +``` + +### TAS Boot Options + +Run the `tas` command to start the TAS program. Options are as follows: + +```console + -T, --token Generates a verification code for test and exits. +``` + +**Note:** +>1.To enable the TAS, you must configure the private key for TAS. Run the following command to modify the configuration file in the home directory: +> +>```shell +>$ cd ${HOME}/.config/attestation/tas +>$ vim config.yaml +> # The values of the following DAA_GRP_KEY_SK_X and DAA_GRP_KEY_SK_Y are for testing purposes only. +> # Be sure to update their contents to ensure safety before normal use. 
+>tasconfig: +> port: 127.0.0.1:40008 +> rest: 127.0.0.1:40009 +> akskeycertfile: ./ascert.crt +> aksprivkeyfile: ./aspriv.key +> huaweiitcafile: ./Huawei IT Product CA.pem +> DAA_GRP_KEY_SK_X: 65a9bf91ac8832379ff04dd2c6def16d48a56be244f6e19274e97881a776543c65a9bf91ac8832379ff04dd2c6def16d48a56be244f6e19274e97881a776543c +> DAA_GRP_KEY_SK_Y: 126f74258bb0ceca2ae7522c51825f980549ec1ef24f81d189d17e38f1773b56126f74258bb0ceca2ae7522c51825f980549ec1ef24f81d189d17e38f1773b56 +>``` +> +>Then enter `tas` to start TAS program. +> +>2.In an environment with TAS, in order to improve the efficiency of QCA's certificate configuration process, not every boot needs to access the TAS to generate the certificate, but through the localized storage of the certificate. That is, read the certification path configured in `config.yaml` on QCA side, check if a TAS-issued certificate has been saved locally through the `func hasAKCert(s int) bool` function. If the certificate is successfully read, there is no need to access TAS. If the certificate cannot be read, you need to access TAS and save the certificate returned by TAS locally. 
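The local certificate-caching check described above (the `hasAKCert` logic) can be mimicked from the shell. This is only a sketch under two assumptions not stated in the source: the certificate is stored in PEM format, and `./ascert.crt` stands in for whatever path is configured in `config.yaml` on the QCA side:

```shell
# Hypothetical stand-in for hasAKCert(): if a parseable certificate already
# exists at the configured path, the QCA can skip contacting the TAS.
CERT=./ascert.crt
if openssl x509 -in "$CERT" -noout 2>/dev/null; then
    echo "cached AK certificate found; TAS access not required"
else
    echo "no valid cached certificate; requesting a new one from TAS"
fi
```

In the second branch, the certificate returned by the TAS would then be saved at the same path so subsequent boots take the first branch.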
+ +## API Definition + +### RAS APIs + +To facilitate the administrator to manage the target server, RAS and the user TA in the TEE deployed on the target server, the following APIs are designed for calling: + +| API | Method | +| --------------------------------- | --------------------------- | +| / | GET | +| /{id} | GET, POST, DELETE | +| /{from}/{to} | GET | +| /{id}/reports | GET | +| /{id}/reports/{reportid} | GET, DELETE | +| /{id}/basevalues | GET | +| /{id}/newbasevalue | POST | +| /{id}/basevalues/{basevalueid} | GET, POST, DELETE | +| /{id}/ta/{tauuid}/status | GET | +| /{id}/ta/{tauuid}/tabasevalues | GET | +| /{id}/ta/{tauuid}/tabasevalues/{tabasevalueid} | GET, POST, DELETE | +| /{id}/ta/{tauuid}/newtabasevalue | POST | +| /{id}/ta/{tauuid}/tareports | GET | +| /{id}/ta/{tauuid}/tareports/{tareportid} | GET, POST, DELETE | +| /{id}/basevalues/{basevalueid} | GET, DELETE | +| /version | GET | +| /config | GET, POST | +| /{id}/container/status | GET | +| /{id}/device/status | GET | + +The usage of the preceding APIs is described as follows: + +To query information about all servers, use `/`. + +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40002/ +``` + +*** +To query detailed information about a target server, use the GET method of `/{id}`. **{id}** is the unique ID allocated by RAS to the target server. + +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40002/1 +``` + +*** +To modify information about the target server, use the POST method of `/{id}`. `$AUTHTOKEN` is the identity verification code automatically generated by running the `ras -T` command. 
+ +```go +type clientInfo struct { + Registered *bool `json:"registered"` // Registration status of the target server + IsAutoUpdate *bool `json:"isautoupdate"` // Target server base value update policy +} +``` + +```shell +curl -X POST -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1 -d '{"registered":false, "isautoupdate":false}' +``` + +*** +To delete a target server, use the DELETE method of `/{id}`. + +**Note:** +>This method does not delete all information about the target server. Instead, it sets the registration status of the target server to `false`. + +```shell +curl -X DELETE -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1 +``` + +*** +To query information about all servers in a specified range, use the GET method of `/{from}/{to}`. + +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/9 +``` + +*** +To query all trust reports of the target server, use the GET method of `/{id}/reports`. + +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/reports +``` + +*** +To query details about a specified trust report of the target server, use the GET method of `/{id}/reports/{reportid}`. **{reportid}** indicates the unique ID assigned by RAS to the trust report of the target server. + +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/reports/1 +``` + +*** +To delete a specified trust report of the target server, use the DELETE method of `/{id}/reports/{reportid}`. + +**Note:** +>This method will delete all information about the specified trusted report, and the report cannot be queried through the API. + +```shell +curl -X DELETE -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1/reports/1 +``` + +*** +To query all base values of the target server, use the GET method of `/{id}/basevalues`. 
+ +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/basevalues +``` + +*** +To add a base value to the target server, use the POST method of `/{id}/newbasevalue`. + +```go +type baseValueJson struct { + BaseType string `json:"basetype"` // Base value type + Uuid string `json:"uuid"` // ID of a container or device + Name string `json:"name"` // Base value name + Enabled bool `json:"enabled"` // Whether the base value is available + Pcr string `json:"pcr"` // PCR value + Bios string `json:"bios"` // BIOS value + Ima string `json:"ima"` // IMA value + IsNewGroup bool `json:"isnewgroup"` // Whether this is a group of new reference values +} +``` + +```shell +curl -X POST -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1/newbasevalue -d '{"name":"test", "basetype":"host", "enabled":true, "pcr":"testpcr", "bios":"testbios", "ima":"testima", "isnewgroup":true}' +``` + +*** +To query details about a specified base value of a target server, use the GET method of `/{id}/basevalues/{basevalueid}`. **{basevalueid}** indicates the unique ID allocated by RAS to the specified base value of the target server. + +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40002/1/basevalues/1 +``` + +*** +To change the availability status of a specified base value of the target server, use the POST method of `/{id}/basevalues/{basevalueid}`. + +```shell +curl -X POST -H "Content-type: application/json" -H "Authorization: $AUTHTOKEN" http://localhost:40002/1/basevalues/1 -d '{"enabled":true}' +``` + +*** +To delete a specified base value of the target server, use the DELETE method of `/{id}/basevalues/{basevalueid}`. + +**Note:** +>This method will delete all the information about the specified base value, and the base value cannot be queried through the API. 
+ +```shell +curl -X DELETE -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/1/basevalues/1 +``` + +To query the trusted status of a specific user TA on the target server, use the GET method of the `"/{id}/ta/{tauuid}/status"` interface. Where {id} is the unique identification number assigned by RAS to the target server, and {tauuid} is the identification number of the specific user TA. + +```shell +curl -X GET -H "Content-type: application/json" -H "Authorization: $AUTHTOKEN" http://localhost:40002/1/ta/test/status +``` + +*** +To query all the baseline value information of a specific user TA on the target server, use the GET method of the `"/{id}/ta/{tauuid}/tabasevalues"` interface. + +```shell +curl -X GET -H "Content-type: application/json" http://localhost:40002/1/ta/test/tabasevalues +``` + +*** +To query the details of a specified base value for a specific user TA on the target server, use the GET method of the `"/{id}/ta/{tauuid}/tabasevalues/{tabasevalueid}"` interface. where {tabasevalueid} is the unique identification number assigned by RAS to the specified base value of a specific user TA on the target server. + +```shell +curl -X GET -H "Content-type: application/json" http://localhost:40002/1/ta/test/tabasevalues/1 +``` + +*** +To modify the available status of a specified base value for a specific user TA on the target server, use the `POST` method of the `"/{id}/ta/{tauuid}/tabasevalues/{tabasevalueid}"` interface. + +```shell +curl -X POST -H "Content-type: application/json" -H "Authorization: $AUTHTOKEN" http://localhost:40002/1/ta/test/tabasevalues/1 --data '{"enabled":true}' +``` + +*** +To delete the specified base value of a specific user TA on the target server, use the `DELETE` method of the `"/{id}/ta/{tauuid}/tabasevalues/{tabasevalueid}"` interface. + +**Note:** +>This method will delete all information about the specified base value, and the base value cannot be queried through the API. 
+ +```shell +curl -X DELETE -H "Content-type: application/json" -H "Authorization: $AUTHTOKEN" -k http://localhost:40002/1/ta/test/tabasevalues/1 +``` + +*** +To add a baseline value to a specific user TA on the target server, use the `POST` method of the `"/{id}/ta/{tauuid}/newtabasevalue"` interface. + +```go +type tabaseValueJson struct { + Uuid string `json:"uuid"` // the identification number of the user TA + Name string `json:"name"` // base value name + Enabled bool `json:"enabled"` // whether a baseline value is available + Valueinfo string `json:"valueinfo"` // image hash value and memory hash value +} +``` + +```shell +curl -X POST -H "Content-Type: application/json" -H "Authorization: $AUTHTOKEN" -k http://localhost:40002/1/ta/test/newtabasevalue -d '{"uuid":"test", "name":"testname", "enabled":true, "valueinfo":"test info"}' +``` + +*** +To query the target server for all trusted reports for a specific user TA, use the `GET` method of the `"/{id}/ta/{tauuid}/tareports"` interface. + +```shell +curl -X GET -H "Content-type: application/json" http://localhost:40002/1/ta/test/tareports +``` + +*** +To query the details of a specified trusted report for a specific user TA on the target server, use the `GET` method of the `"/{id}/ta/{tauuid}/tareports/{tareportid}"` interface, where **{tareportid}** is the unique identification number assigned by RAS to the specified trusted report of a specific user TA on the target server. + +```shell +curl -X GET -H "Content-type: application/json" http://localhost:40002/1/ta/test/tareports/2 +``` + +*** +To delete the specified trusted report of a specific user TA on the target server, use the `DELETE` method of the `"/{id}/ta/{tauuid}/tareports/{tareportid}"` interface. + +**Note:** +>This method will delete all information of the specified trusted report, and the report cannot be queried through the API. 
+ +```shell +curl -X DELETE -H "Content-type: application/json" http://localhost:40002/1/ta/test/tareports/2 +``` + +*** +To obtain the version information of the program, use the GET method of `/version`. + +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40002/version +``` + +*** +To query the configuration information about the target server, RAS, or database, use the GET method of `/config`. + +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40002/config +``` + +*** +To modify the configuration information about the target server, RAS, or database, use the POST method of /config. + +```go +type cfgRecord struct { + // Target server configuration + HBDuration string `json:"hbduration" form:"hbduration"` + TrustDuration string `json:"trustduration" form:"trustduration"` + DigestAlgorithm string `json:"digestalgorithm" form:"digestalgorithm"` + // RAS configuration + MgrStrategy string `json:"mgrstrategy" form:"mgrstrategy"` + ExtractRules string `json:"extractrules" form:"extractrules"` + IsAllupdate *bool `json:"isallupdate" form:"isallupdate"` + LogTestMode *bool `json:"logtestmode" form:"logtestmode"` +} +``` + +```shell +curl -X POST -H "Authorization: $AUTHTOKEN" -H "Content-Type: application/json" http://localhost:40002/config -d '{"hbduration":"5s","trustduration":"20s","DigestAlgorithm":"sha256"}' +``` + +### TAS APIs + +To facilitate the administrator's management of TAS for remote control, the following API is designed for calling: + +| API | Method | +| --------------------| ------------------| +| /config | GET, POST | + +To query the configuration information, use the GET method of the `/config` interface. + +```shell +curl -X GET -H "Content-Type: application/json" http://localhost:40009/config +``` + +*** +To modify the configuration information, use the POST method of the `/config` interface. 
+
+```shell
+curl -X POST -H "Content-Type: application/json" -H "Authorization: $AUTHTOKEN" http://localhost:40009/config -d '{"basevalue":"testvalue"}'
+```
+
+**Note:**
+>Currently, only the base value in the TAS configuration can be queried or modified.
+
+## FAQs
+
+1. Why can't RAS be started after it is installed?
+
+   > By design, after RAS is started, it searches the current directory for the `ecdsakey.pub` file and reads it as the identity verification code for accessing the program. If the file does not exist in the current directory, an error is reported when RAS starts.
+   >> Solution 1: Run the `ras -T` command to generate a test token. The `ecdsakey.pub` file is generated.
+   >> Solution 2: After deploying the oauth2 authentication service, save the verification public key of the JWT token generator as `ecdsakey.pub`.
+
+2. Why can't RAS be accessed through REST APIs after it is started?
+
+   > RAS is started in HTTPS mode by default, so you must configure a valid certificate for RAS before you can access it. RAS started in HTTP mode does not require a certificate.
diff --git a/docs/en/Server/Security/TrustedComputing/trusted-computing-common-issues-and-solutions.md b/docs/en/Server/Security/TrustedComputing/trusted-computing-common-issues-and-solutions.md
new file mode 100644
index 0000000000000000000000000000000000000000..8f62252b26e236142ce5f4c5d77b4ab00180040c
--- /dev/null
+++ b/docs/en/Server/Security/TrustedComputing/trusted-computing-common-issues-and-solutions.md
@@ -0,0 +1,219 @@
+# Common Issues and Solutions
+
+## Issue 1: System Fails to Boot After IMA Appraisal Enforce Mode Is Enabled with the Default Policy
+
+The default IMA policy may include checks for critical file access processes such as application execution and kernel module loading. If access to these critical files fails, the system may fail to boot. Common causes include:
+
+1.
The IMA verification certificate is not imported into the kernel, causing the digest list to fail verification.
+2. The digest list file is not correctly signed, leading to verification failure.
+3. The digest list file is not imported into the initrd, preventing the digest list from being loaded during the boot process.
+4. The digest list file does not match the application, causing the application to fail to match the imported digest list.
+
+Enter the system in log mode to locate and fix the issue. Reboot the system, enter the GRUB menu, and modify the boot parameters to start in log mode:
+
+```ini
+ima_appraise=log
+```
+
+After the system boots, follow the steps below to troubleshoot.
+
+**Step 1:** Check the IMA certificates in the key ring.
+
+```shell
+keyctl show %:.builtin_trusted_keys
+```
+
+For openEuler LTS versions, at least the following kernel certificates should exist (for other versions, use the release date as a reference):
+
+| Version | Certificate |
+| ------- | ----------- |
+| openEuler 22.03 LTS | private OBS b25e7f66 |
+| openEuler 22.03 LTS SP1/2/3 | private OBS b25e7f66<br>openeuler \<openeuler@compass-ci.com\> b675600b |
+| openEuler 22.03 LTS SP4 | private OBS b25e7f66<br>openeuler \<openeuler@compass-ci.com\> b675600b<br>openeuler \<openeuler@compass-ci.com\> fb37bc6f |
+| openEuler 24.03 | openEuler kernel ICA 1: 90bb67eb4b57eb62bf6f867e4f56bd4e19e7d041 |
+ +If you have imported other kernel root certificates, use the `keyctl` command to confirm whether the certificates were successfully imported. By default, openEuler does not use the IMA key ring. If you are using it, check whether the user certificates exist in the IMA key ring with the following command: + +```shell +keyctl show %:.ima +``` + +If the issue is that the certificate was not correctly imported, refer to [User Certificate Import](./IMA.md#user-certificate-import) for troubleshooting. + +**Step 2:** Check if the digest list contains signature information. + +Query the digest list files in the current system with the following command: + +```shell +ls /etc/ima/digest_lists | grep '_list-compact-' +``` + +For each digest list file, ensure that **one of the following three** signature conditions is met: + +1. The digest list file has a corresponding **RPM digest list file**, and the `security.ima` extended attribute of the **RPM digest list file** contains a signature value. For example, for the bash package digest list, the digest list file path is: + + ```text + /etc/ima/digest_lists/0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64 + ``` + + The RPM digest list path is: + + ```text + /etc/ima/digest_lists/0-metadata_list-rpm-bash-5.1.8-6.oe2203sp1.x86_64 + ``` + + Check the RPM digest list signature by ensuring the `security.ima` extended attribute is not empty: + + ```shell + getfattr -n security.ima /etc/ima/digest_lists/0-metadata_list-rpm-bash-5.1.8-6.oe2203sp1.x86_64 + ``` + +2. The `security.ima` extended attribute of the digest list file is not empty: + + ```shell + getfattr -n security.ima /etc/ima/digest_lists/0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64 + ``` + +3. The digest list file contains signature information at the end. 
Verify if the file content ends with the `~Module signature appended~` magic string (supported in openEuler 24.03 LTS and later versions): + + ```shell + tail -c 28 /etc/ima/digest_lists/0-metadata_list-compact-kernel-6.6.0-28.0.0.34.oe2403.x86_64 + ``` + +If the issue is that the digest list does not contain signature information, refer to [Digest List File Signing Methods](./IMA.md#digest-list-file-signing-methods) for troubleshooting. + +**Step 3:** Verify the correctness of the digest list signature. + +After ensuring that the digest list contains signature information, also ensure that the digest list is signed with the correct private key, meaning the signing private key matches the certificate in the kernel. In addition to manually checking the private key, users can check the dmesg logs or audit logs (default path: **/var/log/audit/audit.log**) for signature verification failures. A typical log output is as follows: + +```ini +type=INTEGRITY_DATA msg=audit(1722578008.756:154): pid=3358 uid=0 auid=0 ses=1 subj=unconfined_u:unconfined_r:haikang_t:s0-s0:c0.c1023 op=appraise_data cause=invalid-signature comm="bash" name="/root/0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64" dev="dm-0" ino=785161 res=0 errno=0UID="root" AUID="root" +``` + +If the issue is incorrect signature information, refer to [Digest List File Signing Methods](./IMA.md#digest-list-file-signing-methods) for troubleshooting. + +**Step 4:** Check if the digest list file is imported into the initrd. + +Query whether the digest list file exists in the current initrd with the following command: + +```shell +lsinitrd | grep 'etc/ima/digest_lists' +``` + +If no digest list file is found, users need to recreate the initrd and verify that the digest list is successfully imported: + +```shell +dracut -f -e xattr +``` + +**Step 5:** Verify that the IMA digest list matches the application. + +Refer to [Issue 2](#issue-2-file-execution-fails-after-ima-appraisal-enforce-mode-is-enabled). 
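The three signature conditions in Step 2 can be checked in order with a short script. The sketch below reuses the illustrative bash digest list paths from above; substitute your own files, and note that `getfattr` is provided by the attr package.

```shell
# Check the three signature conditions for one digest list file.
# The paths are the illustrative bash examples from Step 2.
f=/etc/ima/digest_lists/0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64
rpm_list=/etc/ima/digest_lists/0-metadata_list-rpm-bash-5.1.8-6.oe2203sp1.x86_64

if getfattr -n security.ima "$rpm_list" >/dev/null 2>&1; then
    echo "condition 1 met: RPM digest list carries a security.ima signature"
elif getfattr -n security.ima "$f" >/dev/null 2>&1; then
    echo "condition 2 met: digest list carries a security.ima signature"
elif tail -c 28 "$f" 2>/dev/null | grep -q '~Module signature appended~'; then
    echo "condition 3 met: appended module signature found"
else
    echo "no signature found: sign the digest list before enforcing"
fi
```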
+ +## Issue 2: File Execution Fails After IMA Appraisal Enforce Mode Is Enabled + +After IMA appraisal enforce mode is enabled, if the content or extended attributes of a file configured with IMA policies are incorrect (for example, they do not match the imported digest list), file access may be denied. Common causes include: + +1. The digest list was not successfully imported (refer to [Issue 1](#issue-1-system-fails-to-boot-after-ima-appraisal-enforce-mode-is-enabled-with-the-default-policy) for details). +2. The file content or attributes have been tampered with. + +For scenarios where file execution fails, first ensure that the digest list file has been successfully imported into the kernel. Check the number of digest lists to determine the import status: + +```shell +cat /sys/kernel/security/ima/digests_count +``` + +Next, use the audit logs (default path: **/var/log/audit/audit.log**) to identify which file failed verification and the reason. A typical log output is as follows: + +```ini +type=INTEGRITY_DATA msg=audit(1722811960.997:2967): pid=7613 uid=0 auid=0 ses=1 subj=unconfined_u:unconfined_r:haikang_t:s0-s0:c0.c1023 op=appraise_data cause=IMA-signature-required comm="bash" name="/root/test" dev="dm-0" ino=814424 res=0 errno=0UID="root" AUID="root" +``` + +After identifying the file that failed verification, compare it with the TLV digest list to determine the cause of tampering. For scenarios where extended attribute verification is not enabled, only compare the SHA256 hash value of the file with the `IMA digest` entry in the TLV digest list. For scenarios where extended attribute verification is enabled, also compare the current file attributes with the extended attributes displayed in the TLV digest list. + +Once the cause of the issue is determined, resolve it by restoring the file content and attributes or regenerating the digest list for the file, signing it, and importing it into the kernel. 
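For the content comparison described above, a minimal sketch using standard tools follows; the path and baseline value are illustrative stand-ins for the failing file named in the audit log and its `IMA digest` entry in the TLV digest list.

```shell
# Hypothetical inputs: the file named in the audit log and the expected
# SHA256 digest recorded in the TLV digest list.
file=/root/test
baseline=9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08

actual=$(sha256sum "$file" 2>/dev/null | awk '{print $1}')
if [ "$actual" = "$baseline" ]; then
    echo "content matches the digest list; check extended attributes next"
else
    echo "content mismatch: restore the file or regenerate its digest list"
fi
```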
+
+## Issue 3: Errors Occur During Package Installation Across openEuler 22.03 LTS SP Versions After IMA Appraisal Mode Is Enabled
+
+After IMA appraisal mode is enabled, installing packages from different SP versions of openEuler 22.03 LTS triggers the import of IMA digest lists. This process includes a signature verification step, where the kernel uses its certificates to verify the digest list signatures. Due to changes in signing certificates during the evolution of openEuler, backward compatibility issues may arise in certain cross-SP-version installation scenarios (there are no forward compatibility issues, meaning newer kernels can verify older IMA digest list files without problems).
+
+You are advised to ensure that the following signing certificates are present in the current kernel:
+
+```shell
+# keyctl show %:.builtin_trusted_keys
+Keyring
+ 566488577 ---lswrv 0 0 keyring: .builtin_trusted_keys
+ 383580336 ---lswrv 0 0 \_ asymmetric: openeuler b675600b
+ 453794670 ---lswrv 0 0 \_ asymmetric: private OBS b25e7f66
+ 938520011 ---lswrv 0 0 \_ asymmetric: openeuler fb37bc6f
+```
+
+If any certificates are missing, you are advised to upgrade the kernel to the latest version:
+
+```shell
+yum update kernel
+```
+
+openEuler 24.03 LTS and later versions include dedicated IMA certificates and support certificate chain verification, ensuring the certificate lifecycle covers the entire LTS version.
+
+## Issue 4: IMA Digest List Import Fails Despite Correct Signatures After IMA Digest List Appraisal Mode Is Enabled
+
+The IMA digest list import process includes a verification mechanism. If a digest list fails signature verification during import, the digest list import functionality is disabled, preventing even correctly signed digest lists from being imported afterward.
Check the dmesg logs for the following message to confirm if this is the cause: + +```shell +# dmesg +ima: 0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64 not appraised, disabling digest lists lookup for appraisal +``` + +If such a log is present, a digest list file with an incorrect signature was imported while IMA digest list appraisal mode was enabled, causing the functionality to be disabled. In this case, reboot the system and fix the incorrect digest list signature information. + +## Issue 5: Importing User-Defined IMA Certificates Fails in openEuler 24.03 LTS and Later Versions + +Linux kernel 6.6 introduced additional field validation restrictions for importing certificates. Certificates imported into the IMA key ring must meet the following constraints (following the X.509 standard format): + +- It must be a digital signature certificate, meaning the `keyUsage=digitalSignature` field must be set. +- It must not be a CA certificate, meaning the `basicConstraints=CA:TRUE` field must not be set. +- It must not be an intermediate certificate, meaning the `keyUsage=keyCertSign` field must not be set. + +## Issue 6: kdump Service Fails to Start After IMA Appraisal Mode Is Enabled + +After IMA appraisal enforce mode is enabled, if the IMA policy includes the following `KEXEC_KERNEL_CHECK` rule, the kdump service may fail to start: + +```shell +appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig +``` + +The reason is that in this scenario, all files loaded via `kexec` must undergo integrity verification. As a result, the kernel restricts the loading of kernel image files by kdump to the `kexec_file_load` system call. This can be enabled by modifying the **/etc/sysconfig/kdump** configuration file: + +```shell +KDUMP_FILE_LOAD="on" +``` + +Additionally, the `kexec_file_load` system call itself performs signature verification on the files. 
Therefore, the kernel image file being loaded must contain a valid secure boot signature, and the current kernel must include the corresponding verification certificate. diff --git a/docs/en/Server/Security/TrustedComputing/trusted-computing.md b/docs/en/Server/Security/TrustedComputing/trusted-computing.md new file mode 100644 index 0000000000000000000000000000000000000000..79eaffe99da1aef7d416af9bd0863fc54d851972 --- /dev/null +++ b/docs/en/Server/Security/TrustedComputing/trusted-computing.md @@ -0,0 +1,27 @@ +# Trusted Computing + +## Trusted Computing Basics + +### What Is Trusted Computing + +The definition of being trusted varies with international organizations. + +1. Trusted Computing Group (TCG): + + An entity that is trusted always achieves the desired goal in an expected way. + +2. International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) (1999): + + The components, operations, or processes involved in computing are predictable under any conditions and are resistant to viruses and a certain degree of physical interference. + +3. IEEE Computer Society Technical Committee on Dependable Computing: + + Being trusted means that the services provided by the computer system can be proved to be reliable, and mainly refers to the reliability and availability of the system. + +In short, being trusted means that the system operates according to a pre-determined design and policy. + +A trusted computing system consists of a root of trust, a trusted hardware platform, operating system (OS), and application. The basic idea of the system is to create a trusted computing base (TCB) first, and then establish a trust chain that covers the hardware platform, OS, and application. In the trust chain, authentication is performed from the root to the next level, extending trust level by level and building a secure and trusted computing environment. 
+ +![](./figures/trusted_chain.png) + +Unlike traditional security approaches that reactively tackle threats—such as identifying and removing viruses individually—trusted computing employs an allowlist strategy. This ensures that only verified kernels, kernel modules, and applications can operate on the system. Any program that is modified or unrecognized is automatically blocked from execution. diff --git a/docs/en/Server/Security/secGear/Menu/index.md b/docs/en/Server/Security/secGear/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..df11b1edf3019ecdbdd138e81b4d72f498592349 --- /dev/null +++ b/docs/en/Server/Security/secGear/Menu/index.md @@ -0,0 +1,10 @@ +--- +headless: true +--- +- [secGear Developer Guide]({{< relref "./secgear.md" >}}) + - [Introduction to secGear]({{< relref "./introduction-to-secgear.md" >}}) + - [secGear Installation]({{< relref "./secgear-installation.md" >}}) + - [API Reference]({{< relref "./api-reference.md" >}}) + - [Developer Guide]({{< relref "./developer-guide.md" >}}) + - [secGear Tools]({{< relref "./using-secgear-tools.md" >}}) + - [Application Scenarios]({{< relref "./application-scenarios.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/secGear/api-reference.md b/docs/en/Server/Security/secGear/api-reference.md similarity index 98% rename from docs/en/docs/secGear/api-reference.md rename to docs/en/Server/Security/secGear/api-reference.md index 31dec0b0fe9727ba27cea73bb4581b8a4d6eb516..d043f820d47aa3d2c46a8b0bbee8d8d2e00136b1 100644 --- a/docs/en/docs/secGear/api-reference.md +++ b/docs/en/Server/Security/secGear/api-reference.md @@ -159,9 +159,9 @@ This API is called by the TEE to encrypt the internal data of the enclave so tha ```c cc_enclave_result_t cc_enclave_seal_data(uint8_t *seal_data, uint32_t seal_data_len, -​ cc_enclave_sealed_data_t *sealed_data, uint32_t sealed_data_len, + cc_enclave_sealed_data_t *sealed_data, uint32_t sealed_data_len, -​ uint8_t *additional_text, 
uint32_t additional_text_len) + uint8_t *additional_text, uint32_t additional_text_len) ``` **Parameters:** diff --git a/docs/en/docs/secGear/application-scenarios.md b/docs/en/Server/Security/secGear/application-scenarios.md similarity index 100% rename from docs/en/docs/secGear/application-scenarios.md rename to docs/en/Server/Security/secGear/application-scenarios.md diff --git a/docs/en/docs/secGear/developer-guide.md b/docs/en/Server/Security/secGear/developer-guide.md similarity index 100% rename from docs/en/docs/secGear/developer-guide.md rename to docs/en/Server/Security/secGear/developer-guide.md diff --git a/docs/en/docs/secGear/figures/BJCA_Crypto_Module.PNG b/docs/en/Server/Security/secGear/figures/BJCA_Crypto_Module.png similarity index 100% rename from docs/en/docs/secGear/figures/BJCA_Crypto_Module.PNG rename to docs/en/Server/Security/secGear/figures/BJCA_Crypto_Module.png diff --git a/docs/en/docs/secGear/figures/Mindspore.png b/docs/en/Server/Security/secGear/figures/Mindspore.png similarity index 100% rename from docs/en/docs/secGear/figures/Mindspore.png rename to docs/en/Server/Security/secGear/figures/Mindspore.png diff --git a/docs/en/docs/secGear/figures/Mindspore_original.PNG b/docs/en/Server/Security/secGear/figures/Mindspore_original.png similarity index 100% rename from docs/en/docs/secGear/figures/Mindspore_original.PNG rename to docs/en/Server/Security/secGear/figures/Mindspore_original.png diff --git a/docs/en/docs/secGear/figures/develop_step.png b/docs/en/Server/Security/secGear/figures/develop_step.png similarity index 100% rename from docs/en/docs/secGear/figures/develop_step.png rename to docs/en/Server/Security/secGear/figures/develop_step.png diff --git a/docs/en/docs/secGear/figures/openLooKeng.PNG b/docs/en/Server/Security/secGear/figures/openLooKeng.png similarity index 100% rename from docs/en/docs/secGear/figures/openLooKeng.PNG rename to docs/en/Server/Security/secGear/figures/openLooKeng.png diff --git 
a/docs/en/docs/secGear/figures/secGear_arch.png b/docs/en/Server/Security/secGear/figures/secGear_arch.png similarity index 100% rename from docs/en/docs/secGear/figures/secGear_arch.png rename to docs/en/Server/Security/secGear/figures/secGear_arch.png diff --git a/docs/en/docs/secGear/figures/secret_gaussdb.png b/docs/en/Server/Security/secGear/figures/secret_gaussdb.png similarity index 100% rename from docs/en/docs/secGear/figures/secret_gaussdb.png rename to docs/en/Server/Security/secGear/figures/secret_gaussdb.png diff --git a/docs/en/docs/secGear/introduction-to-secGear.md b/docs/en/Server/Security/secGear/introduction-to-secgear.md similarity index 76% rename from docs/en/docs/secGear/introduction-to-secGear.md rename to docs/en/Server/Security/secGear/introduction-to-secgear.md index 04faa00a5345a2b85a4e0b92b938a231b8f3b5c9..310fd3423cbb0bd8f901776be7ba73c87eae9aa1 100644 --- a/docs/en/docs/secGear/introduction-to-secGear.md +++ b/docs/en/Server/Security/secGear/introduction-to-secgear.md @@ -34,29 +34,29 @@ Switchless is a technology that uses shared memory to reduce the number of conte ```c typedef struct { - uint32_t num_uworkers; - uint32_t num_tworkers; - uint32_t switchless_calls_pool_size; - uint32_t retries_before_fallback; - uint32_t retries_before_sleep; - uint32_t parameter_num; - uint32_t workers_policy; - uint32_t rollback_to_common; - cpu_set_t num_cores; + uint32_t num_uworkers; + uint32_t num_tworkers; + uint32_t switchless_calls_pool_size; + uint32_t retries_before_fallback; + uint32_t retries_before_sleep; + uint32_t parameter_num; + uint32_t workers_policy; + uint32_t rollback_to_common; + cpu_set_t num_cores; } cc_sl_config_t; ``` | Configuration Item | Description | | -------------------------- | ------------------------------------------------------------ | - | num_uworkers | Number of proxy worker threads in the REE, which are used to make switchless out calls (OCALLs). 
Currently, this field takes effect only on the SGX platform and can be configured on the Arm platform. However, because the Arm platform does not support OCALLs, the configuration does not take effect on the Arm platform.
Specifications:
Arm: maximum value: **512**; minimum value: **1**; default value: **8** (used when this field is set to **0**).
SGX: maximum value: **4294967295**; minimum value: **1**.| - | num_tworkers | Number of proxy worker threads in the TEE, which are used to make switchless enclave calls (ECALLs).
Specifications:
Arm: maximum value: **512**; minimum value: **1**; default value: **8** (used when this field is set to **0**).
SGX: maximum value: **4294967295**; minimum value: **1**.| - | switchless_calls_pool_size | Size of the switchless call pool. The pool can contain **switchless_calls_pool_size** x 64 switchless calls. For example, if **switchless_calls_pool_size=1**, 64 switchless calls are contained in the pool.
Specifications:
Arm: maximum value: **8**; minimum value: **1**; default value: **1** (used when this field is set to **0**).
SGX: maximum value: **8**; minimum value: **1**; default value: **1** (used when **switchless_calls_pool_size** is set to **0**).| - | retries_before_fallback | After the **pause** assembly instruction is executed for **retries_before_fallback** times, if the switchless call is not made by the proxy worker thread on the other side, the system rolls back to the switch call mode. This field takes effect only on the SGX platform.
Specifications:
SGX: maximum value: **4294967295**; minimum value: **1**; default value: **20000** (used when this field is set to **0**).| - | retries_before_sleep | After the **pause** assembly instruction is executed for **retries_before_sleep** times, if the proxy worker thread does not receive any task, the proxy worker thread enters the sleep state. This field takes effect only on the SGX platform.
Specifications:
SGX: maximum value: **4294967295**; minimum value: **1**; default value: **20000** (used when this field is set to **0**).| - | parameter_num | Maximum number of parameters supported by a switchless function. This field takes effect only on the Arm platform.
Specifications:
Arm: maximum value: **16**; minimum value: **0**.| - | workers_policy | Running mode of the switchless proxy thread. This field takes effect only on the Arm platform.
Specifications:
Arm:
**WORKERS_POLICY_BUSY**: The proxy thread always occupies CPU resources regardless of whether there are tasks to be processed. This mode applies to scenarios that require high performance and extensive system software and hardware resources.
**WORKERS_POLICY_WAKEUP**: The proxy thread wakes up only when there is a task. After the task is processed, the proxy thread enters the sleep state and waits to be woken up by a new task.| - | rollback_to_common | Whether to roll back to a common call when an asynchronous switchless call fails. This field takes effect only on the Arm platform.
Specifications:
Arm:
**0**: No. If the operation fails, only the error code is returned.
Other values: Yes. If the operation fails, an asynchronous switchless call is rolled back to a common call and the return value of the common call is returned.| - | num_cores | Cores for binding processes in the REE. The maximum value is number of CPU cores. | + | num_uworkers | Number of proxy worker threads in the REE, which are used to make switchless out calls (OCALLs). Currently, this field takes effect only on the SGX platform and can be configured on the Arm platform. However, because the Arm platform does not support OCALLs, the configuration does not take effect on the Arm platform.
Specifications:
Arm: maximum value: **512**; minimum value: **1**; default value: **8** (used when this field is set to **0**).
SGX: maximum value: **4294967295**; minimum value: **1**.| + | num_tworkers | Number of proxy worker threads in the TEE, which are used to make switchless enclave calls (ECALLs).
Specifications:
Arm: maximum value: **512**; minimum value: **1**; default value: **8** (used when this field is set to **0**).
SGX: maximum value: **4294967295**; minimum value: **1**.| + | switchless_calls_pool_size | Size of the switchless call pool. The pool can contain **switchless_calls_pool_size** x 64 switchless calls. For example, if **switchless_calls_pool_size=1**, 64 switchless calls are contained in the pool.
Specifications:
Arm: maximum value: **8**; minimum value: **1**; default value: **1** (used when this field is set to **0**).
SGX: maximum value: **8**; minimum value: **1**; default value: **1** (used when **switchless_calls_pool_size** is set to **0**).| + | retries_before_fallback | After the **pause** assembly instruction is executed for **retries_before_fallback** times, if the switchless call is not made by the proxy worker thread on the other side, the system rolls back to the switch call mode. This field takes effect only on the SGX platform.
Specifications:
SGX: maximum value: **4294967295**; minimum value: **1**; default value: **20000** (used when this field is set to **0**).| + | retries_before_sleep | After the **pause** assembly instruction is executed for **retries_before_sleep** times, if the proxy worker thread does not receive any task, the proxy worker thread enters the sleep state. This field takes effect only on the SGX platform.
Specifications:
SGX: maximum value: **4294967295**; minimum value: **1**; default value: **20000** (used when this field is set to **0**).| + | parameter_num | Maximum number of parameters supported by a switchless function. This field takes effect only on the Arm platform.
Specifications:
Arm: maximum value: **16**; minimum value: **0**.| + | workers_policy | Running mode of the switchless proxy thread. This field takes effect only on the Arm platform.
Specifications:
Arm:
**WORKERS_POLICY_BUSY**: The proxy thread always occupies CPU resources regardless of whether there are tasks to be processed. This mode applies to scenarios that require high performance and extensive system software and hardware resources.
**WORKERS_POLICY_WAKEUP**: The proxy thread wakes up only when there is a task. After the task is processed, the proxy thread enters the sleep state and waits to be woken up by a new task.| + | rollback_to_common | Whether to roll back to a common call when an asynchronous switchless call fails. This field takes effect only on the Arm platform.
Specifications:
Arm:
**0**: No. If the operation fails, only the error code is returned.
Other values: Yes. If the operation fails, an asynchronous switchless call is rolled back to a common call and the return value of the common call is returned.| + | num_cores | Number of cores for TEE core binding
Specifications: The maximum value is the number of cores in the environment. |

1. Add the **transition_using_threads** flag when defining the API in the enclave description language (EDL) file.

@@ -84,7 +84,7 @@ A secure channel is a technology that combines confidential computing remote att

#### How to Use

-The secure channel is provided as a library and consists of the client, host, and enclave, which are called by the client, server client application (CA), and server trusted application (TA) of the service program respectively.
+The secure channel is provided as a library and consists of the client, host, and enclave, which are called by the client, server client application (CA), and server trusted application (TA) of the service program respectively.

| Module | Header File | Library File | Dependency |
|------------|--------------------------|-----------------------|---------|

@@ -113,54 +113,54 @@ The secure channel is provided as a library and consists of the client, host, an

A secure channel encapsulates only the key negotiation process and encryption and decryption APIs, but does not establish any network connection. The negotiation process reuses the network connection of the service. The network connection between the client and server is established and maintained by the service. The message sending hook function and network connection pointer are transferred during the initialization of the secure channel on the client and the server. For details, see [secure channel examples](https://gitee.com/openeuler/secGear/tree/master/examples/secure_channel).

-## Remote Attestation
+### Remote Attestation

-### Customer Pain Points
+#### Challenges

-With the development of confidential computing technology, several mainstream technologies have emerged, including Arm TrustZone/CCA, Intel SGX/TDX, QingTian Enclave, and Hygon CSV. Product solutions may involve multiple confidential computing hardware and even collaboration between different TEEs.
Remote attestation is a crucial part of the trust chain for any confidential computing technology. However, each technology has its own format for remote attestation reports and verification processes. Users need to integrate different verification processes for different TEE attestation reports, which increases integration burdens and hinders the expansion of new TEE types. +As confidential computing technologies advance, several major platforms have emerged, including Arm Trustzone/CCA, Intel SGX/TDX, QingTian Enclave, and Hygon CSV. Solutions often involve multiple confidential computing hardware platforms, sometimes requiring collaboration between different TEEs. Remote attestation is a crucial part of the trust chain in any confidential computing technology. However, each technology has its own attestation report format and verification process. This forces users to integrate separate verification workflows for each TEE, increasing complexity and hindering the adoption of new TEE types. -### Solution +#### Solution The unified remote attestation framework of secGear addresses the key components related to remote attestation in confidential computing, abstracting away the differences between different TEEs. It provides two components: attestation agent and attestation service. The agent is integrated by users to obtain attestation reports and connect to the attestation service. The service can be deployed independently and supports the verification of iTrustee and virtCCA remote attestation reports. -### Features +#### Feature Description The unified remote attestation framework focuses on confidential computing functionalities, while service deployment and operation capabilities are provided by third-party deployment services. 
The key features of the unified remote attestation framework are as follows: - Report verification plugin framework: Supports runtime compatibility with attestation report verification for different TEE platforms, such as iTrustee, virtCCA, and CCA. It also supports the extension of new TEE report verification plugins. -- Certificate baseline management: Supports the management of TCB/TA baseline values and public key certificates for different TEE types. Centralized deployment on the server ensures transparency for users. +- Certificate baseline management: Supports the management of baseline values of Trusted Computing Bases (TCB) and Trusted Applications (TA) as well as public key certificates for different TEE types. Centralized deployment on the server ensures transparency for users. - Policy management: Provides default policies for ease of use and customizable policies for flexibility. - Identity token: Issues identity tokens for different TEEs, endorsed by a third party for mutual authentication between different TEE types. - Attestation agent: Supports connection to attestation service/peer-to-peer attestation, compatible with TEE report retrieval and identity token verification. It is easy to integrate, allowing users to focus on their service logic. -Two modes are supported depending on the usage scenario: Peer-to-peer verification and attestation service verification. +Two modes are supported depending on the usage scenario: peer-to-peer verification and attestation service verification. Attestation service verification process: 1. The user (regular node or TEE) initiates a challenge to the TEE platform. 2. The TEE platform obtains the TEE attestation report through the attestation agent and returns it to the user. 3. The user-side attestation agent forwards the report to the remote attestation service. -4. The remote attestation service verifies the report and returns a unified format identity token endorsed by a third party. +4. 
The remote attestation service verifies the report and returns an identity token in a unified format endorsed by a third party. 5. The attestation agent verifies the identity token and parses the attestation report verification result. -6. Once the verification passes, a secure connection is established. +6. Upon successful verification, a secure connection is established. Peer-to-peer verification process (without the attestation service): -1. The user initiates a challenge to the TEE platform, and the TEE platform returns the attestation report to the user. +1. The user initiates a challenge to the TEE platform, which then returns the attestation report to the user. 2. The user uses a local peer-to-peer TEE verification plugin to verify the report. > ![](./public_sys-resources/icon-note.gif) **Note:** > -> The attestation agents used for peer-to-peer verification and attestation service verification are different. During compilation, the compilation options determine whether to compile attestation agents for attestation service mode or peer-to-peer mode. +> The attestation agent varies depending on whether peer-to-peer verification or remote attestation service verification is used. Users can select the desired mode during compilation by specifying the appropriate option, enabling the attestation agent to support either the attestation service or peer-to-peer mode. -### Application Scenarios +#### Application Scenarios In scenarios like finance and AI, where confidential computing is used to protect the security of privacy data during runtime, remote attestation is a technical means to verify the legitimacy of the confidential computing environment and applications. secGear provides components that are easy to integrate and deploy, helping users quickly enable confidential computing remote attestation capabilities. 
## Acronyms and Abbreviations

| Acronym/Abbreviation| Full Name |
-| ------ | ----------------------------- |
+| -------------------- | ----------------------------- |
| REE | rich execution environment |
| TEE | trusted execution environment |
| EDL | enclave description language |
diff --git a/docs/en/Server/Security/secGear/public_sys-resources/icon-note.gif b/docs/en/Server/Security/secGear/public_sys-resources/icon-note.gif
new file mode 100644
index 0000000000000000000000000000000000000000..6314297e45c1de184204098efd4814d6dc8b1cda
Binary files /dev/null and b/docs/en/Server/Security/secGear/public_sys-resources/icon-note.gif differ
diff --git a/docs/en/docs/secGear/secGear-installation.md b/docs/en/Server/Security/secGear/secgear-installation.md
similarity index 44%
rename from docs/en/docs/secGear/secGear-installation.md
rename to docs/en/Server/Security/secGear/secgear-installation.md
index 34d17b1fb4881f7635a1054adb5421298dd45948..b548cc2491ca2dfb08bd1615a6f0be729361dd73 100644
--- a/docs/en/docs/secGear/secGear-installation.md
+++ b/docs/en/Server/Security/secGear/secgear-installation.md
@@ -20,49 +20,41 @@
> - For common servers, the TrustZone feature cannot be enabled only by upgrading the BMC, BIOS, and TEE OS firmware.
> - By default, the TrustZone feature is disabled on the server. For details about how to enable the TrustZone feature on the server, see BIOS settings.

-#### OS
-
-openEuler 20.03 LTS SP2 or later
-
-openEuler 22.09
-
-openEuler 22.03 LTS or later
-
### Environment Preparation

For details, see [Environment Requirements](https://www.hikunpeng.com/document/detail/en/kunpengcctrustzone/fg-tz/kunpengtrustzone_20_0018.html) and [Procedure](https://www.hikunpeng.com/document/detail/en/kunpengcctrustzone/fg-tz/kunpengtrustzone_20_0019.html) on the Kunpeng official website.

### Installation

-1. Configure the openEuler Yum source. You can configure an online Yum source or configure a local Yum source by mounting an ISO file. The following uses openEuler 22.03 LTS as an example. For other versions, use the Yum source of the corresponding version.
+1. Configure the openEuler Yum repository. You can configure an online Yum repository (see the example below) or configure a local Yum repository by mounting an ISO file.

-   ```shell
-   /etc/yum.repo/openEuler.repo
-   [osrepo]
-   name=osrepo
-   baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/everything/aarch64/
-   enabled=1
-   gpgcheck=1
-   gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/everything/aarch64/RPM-GPG-KEY-openEuler
-   ```
+   ```shell
+   vi /etc/yum.repos.d/openEuler.repo
+   [osrepo]
+   name=osrepo
+   baseurl=http://repo.openeuler.org/openEuler-{version}/everything/aarch64/
+   enabled=1
+   gpgcheck=1
+   gpgkey=http://repo.openeuler.org/openEuler-{version}/everything/aarch64/RPM-GPG-KEY-openEuler
+   ```

2. Install secGear.

-   ```shell
-   #Install the compiler.
-   yum install cmake ocaml-dune
-
-   #Install secGear.
-   yum install secGear-devel
-
-   #Check whether the installations are successful. If the command output is as follows, the installations are successful.
-   rpm -qa | grep -E 'secGear|itrustee|ocaml-dune'
-   itrustee_sdk-xxx
-   itrustee_sdk-devel-xxx
-   secGear-xxx
-   secGear-devel-xxx
-   ocaml-dune-xxx
-   ```
+   ```shell
+   #Install the compiler.
+   yum install cmake ocaml-dune
+
+   #Install secGear.
+   yum install secGear-devel
+
+   #Check whether the installations are successful. If the command output is as follows, the installations are successful.
+   rpm -qa | grep -E 'secGear|itrustee|ocaml-dune'
+   itrustee_sdk-xxx
+   itrustee_sdk-devel-xxx
+   secGear-xxx
+   secGear-devel-xxx
+   ocaml-dune-xxx
+   ```

## x86 Environment

@@ -72,55 +64,47 @@ For details, see [Environment Requirements](https://www.hikunpeng.com/document/d

Processor that supports the Intel SGX feature

-#### OS
-
-openEuler 20.03 LTS SP2 or later
-
-openEuler 22.09
-
-openEuler 22.03 LTS or later
-
### Environment Preparation

Purchase a device that supports the Intel SGX feature and enable the SGX feature by referring to the BIOS setting manual of the device.

### Installation

-1. Configure the openEuler Yum source. You can configure an online Yum source or configure a local Yum source by mounting an ISO file. The following uses openEuler 22.03 LTS as an example. For other versions, use the Yum source of the corresponding version.
+1. Configure the openEuler Yum repository. You can configure an online Yum repository (see the example below) or configure a local Yum repository by mounting an ISO file.

-   ```shell
-   vi openEuler.repo
-   [osrepo]
-   name=osrepo
-   baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/everything/x86_64/
-   enabled=1
-   gpgcheck=1
-   gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/everything/x86_64/RPM-GPG-KEY-openEuler
-   ```
+   ```shell
+   vi /etc/yum.repos.d/openEuler.repo
+   [osrepo]
+   name=osrepo
+   baseurl=http://repo.openeuler.org/openEuler-{version}/everything/x86_64/
+   enabled=1
+   gpgcheck=1
+   gpgkey=http://repo.openeuler.org/openEuler-{version}/everything/x86_64/RPM-GPG-KEY-openEuler
+   ```

2. Install secGear.

-   ```shell
-   #Install the compiler.
-   yum install cmake ocaml-dune
-
-   #Install secGear.
-   yum install secGear-devel
-
-   #Check whether the installations are successful. If the command output is as follows, the installations are successful.
- rpm -qa | grep -E 'secGear|ocaml-dune|sgx' - secGear-xxx - secGear-devel-xxx - ocaml-dune-xxx - libsgx-epid-xxx - libsgx-enclave-common-xxx - libsgx-quote-ex-xxx - libsgx-aesm-launch-plugin-xxx - libsgx-uae-service-xxx - libsgx-ae-le-xxx - libsgx-urts-xxx - sgxsdk-xxx - sgx-aesm-service-xxx - linux-sgx-driver-xxx - libsgx-launch-xxx - ``` + ```shell + # Install the compiler. + yum install cmake ocaml-dune + + # Install secGear. + yum install secGear-devel + + # Check whether the installations are successful. If the command output is as follows, the installations are successful. + rpm -qa | grep -E 'secGear|ocaml-dune|sgx' + secGear-xxx + secGear-devel-xxx + ocaml-dune-xxx + libsgx-epid-xxx + libsgx-enclave-common-xxx + libsgx-quote-ex-xxx + libsgx-aesm-launch-plugin-xxx + libsgx-uae-service-xxx + libsgx-ae-le-xxx + libsgx-urts-xxx + sgxsdk-xxx + sgx-aesm-service-xxx + linux-sgx-driver-xxx + libsgx-launch-xxx + ``` diff --git a/docs/en/docs/secGear/secGear.md b/docs/en/Server/Security/secGear/secgear.md similarity index 100% rename from docs/en/docs/secGear/secGear.md rename to docs/en/Server/Security/secGear/secgear.md diff --git a/docs/en/docs/secGear/using-the-secGear-tool.md b/docs/en/Server/Security/secGear/using-secgear-tools.md similarity index 92% rename from docs/en/docs/secGear/using-the-secGear-tool.md rename to docs/en/Server/Security/secGear/using-secgear-tools.md index 6ebef34b1b4a1fc52b401df9696034ba00192593..ce3dab2f876904813b811bcb7b8da1b7ab6cac94 100644 --- a/docs/en/docs/secGear/using-the-secGear-tool.md +++ b/docs/en/Server/Security/secGear/using-secgear-tools.md @@ -2,7 +2,7 @@ secGear provides a tool set to facilitate application development. This document describes the tools and how to use them. -## Codegener: Code Generation Tool +## Code Generation Tool: codegener ### Overview @@ -18,7 +18,7 @@ The EDL file syntax is similar to the C language syntax. 
The following describes

| Member | Description |
| ----------------------- | ------------------------------------------------------------ |
-| include "my_type.h” | Uses the type defined in the external inclusion file. |
+| include "my_type.h" | Uses the type defined in the external inclusion file. |
| trusted | Declares that secure functions are available on the trusted application (TA) side. |
| untrusted | Declares that insecure functions are available on the REE side. |
| return_type | Defines the return value type. |
@@ -28,7 +28,7 @@ The EDL file syntax is similar to the C language syntax. The following describes

### Usage Instructions

-#### **Command Format**
+#### Command Format

The format of the codegen command is as follows:

@@ -40,11 +40,11 @@ ARM architecture

**codegen_arm64** < --trustzone | --sgx > \[--trusted-dir \<path\> | **--untrusted-dir** \<path\> \| --trusted | --untrusted ] edlfile

-#### **Parameter Description**
+#### Parameter Description

The parameters are described as follows:

-| **Parameter** | Mandatory/Optional | Description |
+| Parameter | Mandatory/Optional | Description |
| ---------------------- | -------- | ------------------------------------------------------------ |
| --trustzone \| --sgx | Mandatory | Generates the API function corresponding to the confidential computing architecture only in the current command directory. If no parameter is specified, the SGX API function is generated by default. |
| --search-path \<path\> | Optional | Specifies the search path of the file that the EDL file to be converted depends on. |
@@ -93,13 +93,13 @@ secGear sign_tool is a command line tool, including the compilation tool chain a

### Operation Instructions

-#### **Format**
+#### Format

The sign_tool contains the sign command (for signing the enclave) and the digest command (for generating the digest value).
Command format:

-**sign_tool.sh -d** \[sign | digest] **-x** \<enclave type\> **-i** \<input file\> **-p** \<public key file\> **-s** \<signature file\> \[OPTIONS] **–o** \<output file\>
+**sign_tool.sh -d** \[sign | digest] **-x** \<enclave type\> **-i** \<input file\> **-p** \<public key file\> **-s** \<signature file\> \[OPTIONS] **-o** \<output file\>

-#### **Parameter Description**
+#### Parameter Description

| sign Command Parameter | Description | Mandatory/Optional |
| -------------- | -------------------------------------------------------------| -------------------------------------------- |
@@ -118,15 +118,15 @@ The sign_tool contains the sign command (for signing the enclave) and the digest
| -x \<enclave type\> | enclave type (sgx or trustzone) | Mandatory |
| -h | Prints the help information. | Optional |

-#### **Single-Step Signature**
+#### Single-Step Signature

Set the enclave type to SGX, sign test.enclave, and generate the signature file signed.enclave. The following is an example:

```shell
-sign_tool.sh –d sign –x sgx –i test.enclave -k private_test.pem –o signed.enclave
+sign_tool.sh -d sign -x sgx -i test.enclave -k private_test.pem -o signed.enclave
```

-#### **Two-Step Signature**
+#### Two-Step Signature

The following uses SGX as an example to describe the two-step signature procedure:

@@ -135,7 +135,7 @@ The following uses SGX as an example to describe the two-step signature procedur

1. Use the sign_tool to generate the digest value digest.data and the temporary intermediate file signdata. The file is used when the signature file is generated and is automatically deleted after being signed. Example:

   ```shell
-   sign_tool.sh –d digest –x sgx –i input –o digest.data
+   sign_tool.sh -d digest -x sgx -i input -o digest.data
   ```

2. Send digest.data to the signature authority or platform and obtain the corresponding signature.

@@ -143,7 +143,7 @@ The following uses SGX as an example to describe the two-step signature procedur

3. Use the obtained signature to generate the signed dynamic library signed.enclave.
   ```shell
-   sign_tool.sh –d sign –x sgx–i input –p pub.pem –s signature –o signed.enclave
+   sign_tool.sh -d sign -x sgx -i input -p pub.pem -s signature -o signed.enclave
   ```

Note: To release an official version of applications supported by Intel SGX, you need to apply for an Intel whitelist. For details about the process, see the Intel document at .
diff --git a/docs/en/Tools/AI/AI_Container_Image_Userguide/Menu/index.md b/docs/en/Tools/AI/AI_Container_Image_Userguide/Menu/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..ca64531a1880bb2255a41ec5a780b9f3ad2c3942
--- /dev/null
+++ b/docs/en/Tools/AI/AI_Container_Image_Userguide/Menu/index.md
@@ -0,0 +1,4 @@
+---
+headless: true
+---
+- [AI Container Image User Guide]({{< relref "./ai-container-image-user-guide.md" >}})
\ No newline at end of file
diff --git a/docs/en/Tools/AI/AI_Container_Image_Userguide/ai-container-image-user-guide.md b/docs/en/Tools/AI/AI_Container_Image_Userguide/ai-container-image-user-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..2fc2bf95735167d439f398b4310fe4c0e1d8ddaa
--- /dev/null
+++ b/docs/en/Tools/AI/AI_Container_Image_Userguide/ai-container-image-user-guide.md
@@ -0,0 +1,110 @@
+# openEuler AI Container Image User Guide
+
+## Overview
+
+The openEuler AI container images encapsulate SDKs for different hardware compute platforms and software such as AI frameworks and foundation model applications. Start a container from one of the images, and you can use or develop AI applications in your environment. This greatly reduces the time required for application deployment and environment configuration.
+
+## Obtaining Images
+
+openEuler has released container images for the Ascend and NVIDIA platforms. Click the links below to download:
+
+- [openeuler/cann](https://hub.docker.com/r/openeuler/cann)
+  Stores SDK images for installing CANN software on the openEuler base image in the Ascend environment.
+
+- [openeuler/cuda](https://hub.docker.com/r/openeuler/cuda)
+  Stores SDK images for installing CUDA software on the openEuler base image in the NVIDIA environment.
+
+- [openeuler/pytorch](https://hub.docker.com/r/openeuler/pytorch)
+  Stores the AI framework image for installing PyTorch based on the SDK image.
+
+- [openeuler/tensorflow](https://hub.docker.com/r/openeuler/tensorflow)
+  Stores the AI framework image for installing TensorFlow based on the SDK image.
+
+- [openeuler/llm](https://hub.docker.com/r/openeuler/llm)
+  Stores model application images for installing foundation model applications and toolchains based on the AI framework image.
+
+For details about AI container image classification and image tag specifications, see [oEEP-0014](https://gitee.com/openeuler/TC/blob/master/oEEP/oEEP-0014%20openEuler%20AI%E5%AE%B9%E5%99%A8%E9%95%9C%E5%83%8F%E8%BD%AF%E4%BB%B6%E6%A0%88%E8%A7%84%E8%8C%83.md).
+
+AI container images are large. You are advised to run the following command to pull the image to the local environment before starting the container:
+
+```sh
+docker pull image:tag
+```
+
+In the command, `image` indicates the repository name, for example, `openeuler/cann`, and `tag` indicates the tag of the target image. After the image is pulled, you can start the container. Note that Docker must be installed before you run the `docker pull` command.
+
+## Starting a Container
+
+1. Install Docker. For details, see [Install Docker Engine](https://docs.docker.com/engine/install/). Alternatively, run one of the following commands:
+
+   ```sh
+   yum install -y docker
+   ```
+
+   or
+
+   ```sh
+   apt-get install -y docker
+   ```
+
+2. Install nvidia-container in the NVIDIA environment.
+
+   (1) Configure the Yum or APT repository.
+ - For Yum: + + ```sh + curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \ + sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo + ``` + + - For APT: + + ```sh + curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg + ``` + + ```sh + curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \ + sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \ + sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list + ``` + + (2) Install nvidia-container-toolkit and nvidia-container-runtime. + + ```sh + # For Yum + yum install -y nvidia-container-toolkit nvidia-container-runtime + ``` + + ```sh + # For APT + apt-get install -y nvidia-container-toolkit nvidia-container-runtime + ``` + + (3) Configure Docker. + + ```sh + nvidia-ctk runtime configure --runtime=docker + systemctl restart docker + ``` + + Skip this step in the non-NVIDIA environment. + +3. Ensure that the correct driver and firmware are installed. You can obtain the correct versions from [NVIDIA](https://www.nvidia.com/) or [Ascend](https://www.hiascend.com/) official site. If the driver and firmware are installed, run the `npu-smi` command on the Ascend platform or run the `nvidia-smi` command on the NVIDIA platform. If the hardware information is correctly displayed, the installed version is correct. + +4. After the preceding operations are complete, run the `docker run` command to start the container. 
+ +```sh +# In the Ascend environment +docker run --rm --network host \ + --device /dev/davinci0:/dev/davinci0 \ + --device /dev/davinci_manager --device /dev/devmm_svm --device /dev/hisi_hdc \ + -v /usr/local/dcmi:/usr/local/dcmi -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \ + -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \ + -ti image:tag +``` + +```sh +# In the NVIDIA environment +docker run --gpus all -d -ti image:tag +``` diff --git a/docs/en/Tools/AI/AI_Large_Model_Service_Images_Userguide/Menu/index.md b/docs/en/Tools/AI/AI_Large_Model_Service_Images_Userguide/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..d99f09109e444f0f03a7ae4ebbffc17f76318799 --- /dev/null +++ b/docs/en/Tools/AI/AI_Large_Model_Service_Images_Userguide/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [LLM Service Image User Guide]({{< relref "./llm-service-image-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/AI/AI_Large_Model_Service_Images_Userguide/llm-service-image-user-guide.md b/docs/en/Tools/AI/AI_Large_Model_Service_Images_Userguide/llm-service-image-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..192955425e4a07dbd7e6c82969488ee08e8b5ac9 --- /dev/null +++ b/docs/en/Tools/AI/AI_Large_Model_Service_Images_Userguide/llm-service-image-user-guide.md @@ -0,0 +1,102 @@ +# Container Images for Large Language Models + +openEuler provides container images to support large language models (LLMs) such as Baichuan, ChatGLM, and iFLYTEK Spark. + +The provided container images come with pre-installed dependencies for both CPU and GPU environments, ensuring a seamless out-of-the-box experience. 
+
+## Pulling the Image (CPU Version)
+
+```bash
+docker pull openeuler/llm-server:1.0.0-oe2203sp3
+```
+
+## Pulling the Image (GPU Version)
+
+```bash
+docker pull icewangds/llm-server:1.0.0
+```
+
+## Downloading the Model
+
+Download the model and convert it to GGUF format.
+
+```bash
+# Install Hugging Face Hub.
+pip install huggingface-hub
+
+# Download the model you want to deploy.
+export HF_ENDPOINT=https://hf-mirror.com
+huggingface-cli download --resume-download baichuan-inc/Baichuan2-13B-Chat --local-dir /root/models/Baichuan2-13B-Chat --local-dir-use-symlinks False
+
+# Convert the model to GGUF format.
+cd /root/models/
+git clone https://github.com/ggerganov/llama.cpp.git
+python llama.cpp/convert-hf-to-gguf.py ./Baichuan2-13B-Chat
+# Path to the generated GGUF model: /root/models/Baichuan2-13B-Chat/ggml-model-f16.gguf
+```
+
+## Launch
+
+Docker v25.0.0 or later is required.
+
+To use a GPU image, you must install nvidia-container-toolkit. Detailed installation instructions are available in the official NVIDIA documentation: [Installing the NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
+
+**docker-compose.yaml** file content:
+
+```yaml
+version: '3'
+services:
+  model:
+    image: <image>:<tag> # Image name and tag
+    restart: on-failure:5
+    ports:
+      - 8001:8000 # Listening port number. Change "8001" to modify the port.
+ volumes: + - /root/models:/models # LLM mount directory + environment: + - MODEL=/models/Baichuan2-13B-Chat/ggml-model-f16.gguf # Model file path inside the container + - MODEL_NAME=baichuan13b # Custom model name + - KEY=sk-12345678 # Custom API Key + - CONTEXT=8192 # Context size + - THREADS=8 # Number of CPU threads, required only for CPU deployment + deploy: # GPU resources, required only for GPU deployment + resources: + reservations: + devices: + - driver: nvidia + count: all + capabilities: [gpu] +``` + +```bash +docker-compose -f docker-compose.yaml up +``` + +`docker run` command: + +```bash +# For CPU deployment +docker run -d --restart on-failure:5 -p 8001:8000 -v /root/models:/models -e MODEL=/models/Baichuan2-13B-Chat/ggml-model-f16.gguf -e MODEL_NAME=baichuan13b -e KEY=sk-12345678 openeuler/llm-server:1.0.0-oe2203sp3 + +# For GPU deployment +docker run -d --gpus all --restart on-failure:5 -p 8001:8000 -v /root/models:/models -e MODEL=/models/Baichuan2-13B-Chat/ggml-model-f16.gguf -e MODEL_NAME=baichuan13b -e KEY=sk-12345678 icewangds/llm-server:1.0.0 +``` + +## Testing + +Call the LLM interface to test the deployment. A successful return indicates successful deployment of the LLM service. 
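The same test request can also be built from Python. The following is a minimal sketch that assumes the service from this guide is listening on port 8001 with the API key `sk-12345678`; `build_chat_request` is a hypothetical helper written for illustration, not part of the image.

```python
import json

def build_chat_request(base_url, api_key, model, user_msg):
    """Builds the URL, headers, and JSON body for the OpenAI-compatible
    /v1/chat/completions endpoint exposed by the llm-server container."""
    url = f"{base_url}/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # matches the KEY variable in docker-compose.yaml
    }
    body = json.dumps({
        "model": model,  # matches the MODEL_NAME variable in docker-compose.yaml
        "messages": [
            {"role": "system", "content": "You are an openEuler community assistant, please answer the following question."},
            {"role": "user", "content": user_msg},
        ],
        "stream": False,
        "max_tokens": 1024,
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "http://127.0.0.1:8001", "sk-12345678", "baichuan13b", "Who are you?"
)
```

Sending the returned pieces with any HTTP client, for example `requests.post(url, headers=headers, data=body)`, should produce the same response as the `curl` command below.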
+```bash
+curl -X POST http://127.0.0.1:8001/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer sk-12345678" \
+  -d '{
+    "model": "baichuan13b",
+    "messages": [
+      {"role": "system", "content": "You are an openEuler community assistant, please answer the following question."},
+      {"role": "user", "content": "Who are you?"}
+    ],
+    "stream": false,
+    "max_tokens": 1024
+  }'
+```
diff --git a/docs/en/Tools/AI/Menu/index.md b/docs/en/Tools/AI/Menu/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e7fd86a0e8182a114d521e5efabd14da287975e
--- /dev/null
+++ b/docs/en/Tools/AI/Menu/index.md
@@ -0,0 +1,6 @@
+---
+headless: true
+---
+- [openEuler Copilot System]({{< relref "./openEuler_Copilot_System/Menu/index.md" >}})
+- [LLM Service Image User Guide]({{< relref "./AI_Large_Model_Service_Images_Userguide/Menu/index.md" >}})
+- [AI Container Image User Guide]({{< relref "./AI_Container_Image_Userguide/Menu/index.md" >}})
\ No newline at end of file
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/Menu/index.md b/docs/en/Tools/AI/openEuler_Copilot_System/Menu/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b62dd592884d052226873780a8b561da3034462
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/Menu/index.md
@@ -0,0 +1,23 @@
+---
+headless: true
+---
+- [openEuler Copilot System]({{< relref "./README.md" >}})
+  - [User Guide](#)
+    - [Web Client](#)
+      - [Introduction]({{< relref "./user-guide/web-client/introduction.md" >}})
+      - [Registration and Login]({{< relref "./user-guide/web-client/registration-and-login.md" >}})
+      - [Intelligent Q&A Guide]({{< relref "./user-guide/web-client/intelligent-q-and-a-guide.md" >}})
+      - [Intelligent Plugin Overview]({{< relref "./user-guide/web-client/intelligent-plugin-overview.md" >}})
+    - [CLI Client](#)
+      - [Obtaining the API Key]({{< relref "./user-guide/cli-client/obtaining-the-api-key.md" >}})
+      - [CLI Assistant Guide]({{<
relref "./user-guide/cli-client/cli-assistant-guide.md" >}}) + - [Intelligent Tuning]({{< relref "./user-guide/cli-client/intelligent-tuning.md" >}}) + - [Intelligent Diagnosis]({{< relref "./user-guide/cli-client/intelligent-diagnosis.md" >}}) + - [Deployment Guide](#) + - [Network Environment Deployment Guide]({{< relref "./deployment-guide/network-environment-deployment-guide.md" >}}) + - [Offline Environment Deployment Guide]({{< relref "./deployment-guide/offline-environment-deployment-guide.md" >}}) + - [Local Asset Library Setup Guide]({{< relref "./deployment-guide/local-asset-library-setup-guide.md" >}}) + - [Plugin Deployment Guide](#) + - [Intelligent Tuning]({{< relref "./deployment-guide/plugin-deployment-guide/intelligent-tuning/plugin-intelligent-tuning-deployment-guide.md" >}}) + - [Intelligent Diagnosis]({{< relref "./deployment-guide/plugin-deployment-guide/intelligent-diagnosis/plugin-intelligent-diagnosis-deployment-guide.md" >}}) + - [AI Container Stack]({{< relref "./deployment-guide/plugin-deployment-guide/ai-container-stack/plugin-ai-container-stack-deployment-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/README.md b/docs/en/Tools/AI/openEuler_Copilot_System/README.md new file mode 100644 index 0000000000000000000000000000000000000000..e68b45aad4f5e0d931e69a17110e9e41c6e98265 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/README.md @@ -0,0 +1,42 @@ +# openEuler Copilot System + +## Feature Description + +The openEuler Copilot System offers an intelligent Q&A platform accessible through two interfaces: web and intelligent shell. + +- Web: provides a user-friendly experience for consulting basic OS knowledge, openEuler updates, operational solutions, project introductions, and usage instructions. +- Intelligent shell: facilitates natural language interaction with openEuler, supporting heuristic O&M tasks. 
+ +## Application Scenarios + +- Common users: Explore detailed information about openEuler, including migration strategies. +- Developers: Learn about the development process, key features, and project contributions in openEuler. +- System administrators: Discover solutions for common and complex issues, along with system management commands and knowledge. + +## User Manual Content + +### Deployment Guide + +- [Web Client Deployment Guide](./deployment-guide/) + - [Network Environment Deployment Guide](./deployment-guide/network-environment-deployment-guide.md) + - [Offline Environment Deployment Guide](./deployment-guide/offline-environment-deployment-guide.md) + +- [Plugin Deployment Guide](./deployment-guide/plugin-deployment-guide/) + - [Intelligent Tuning](./deployment-guide/plugin-deployment-guide/intelligent-tuning/plugin-intelligent-tuning-deployment-guide.md) + - [Intelligent Diagnosis](./deployment-guide/plugin-deployment-guide/intelligent-diagnosis/plugin-intelligent-diagnosis-deployment-guide.md) + - [AI Container Stack](./deployment-guide/plugin-deployment-guide/ai-container-stack/plugin-ai-container-stack-deployment-guide.md) + +- [Local Asset Library Setup Guide](./deployment-guide/local-asset-library-setup-guide.md) + +### User Guide + +- [Web Client (Gitee AI) User Guide](./user-guide/web-client/introduction.md) + - [Registration and Login](./user-guide/web-client/registration-and-login.md) + - [Intelligent Q&A Guide](./user-guide/web-client/intelligent-q-and-a-guide.md) + - [Intelligent Plugin Overview](./user-guide/web-client/intelligent-plugin-overview.md) + +- [Intelligent Shell User Guide](./user-guide/cli-client/cli-assistant-guide.md) + - [Preparation: Obtaining the API Key](./user-guide/cli-client/obtaining-the-api-key.md) + - [Intelligent Plugins](./user-guide/cli-client/cli-assistant-guide.md#智能插件) + - [Intelligent Tuning](./user-guide/cli-client/intelligent-tuning.md) + - [Intelligent 
Diagnosis](./user-guide/cli-client/intelligent-diagnosis.md)
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/local-asset-library-setup-guide.md b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/local-asset-library-setup-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..4fa97bffd8f24214a60fed0c2d61045fcc1212f1
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/local-asset-library-setup-guide.md
@@ -0,0 +1,406 @@
+# Local Asset Library Setup Guide
+
+- RAG is a retrieval augmentation module. This guide provides command line methods for RAG database management, asset management, asset library management, and corpus asset management.
+  Database management provides functions such as clearing and initializing the database.
+  Asset management provides asset creation, query, and deletion.
+  Asset library management provides asset library creation, query, and deletion.
+  Corpus asset management provides corpus upload, query, and deletion.
+- This guide is written for administrators. An administrator can own multiple assets, an asset can contain multiple asset libraries (different asset libraries can use different vectorization models), and each asset library corresponds to one corpus asset.
+- The local corpus upload instructions guide users in building project-specific corpora. Currently, docx, pdf, markdown, txt, and xlsx files can be uploaded. The docx format is recommended.
+
+## Preparations
+
+- RAG configuration for mounting the corpus upload directory:
+
+Save the local corpus to a directory on the server, for example, the /home/docs directory, and set the directory permissions to 755.
+
+```bash
+# Set the permissions of the local document directory to 755.
+chmod -R 755 /home/docs
+```
+
+Map the source directory where the files are stored to the target directory in the RAG container. The source directory is configured in , as shown below:
+
+![Configuring the mapped source directory](./pictures/local-asset-library-setup/配置映射源目录.png)
+
+The intermediate layer (which links the source directory and the target directory) is configured in , as shown below:
+
+![Configuring the mapped intermediate layer](./pictures/local-asset-library-setup/配置映射中间层.png)
+
+The target directory is configured in , as shown below:
+
+![Configuring the mapped target directory](./pictures/local-asset-library-setup/配置映射目标目录.png)
+
+- Update the Copilot service:
+
+  ```bash
+  root@openeuler:/home/EulerCopilot/euler-copilot-helm/chart# helm upgrade -n euler-copilot service .
+  # Note: "service" is the service name. Change it as required.
+  ```
+
+- Enter the RAG container:
+
+  ```bash
+  root@openeuler:~# kubectl -n euler-copilot get pods
+  NAME                                        READY   STATUS    RESTARTS   AGE
+  framework-deploy-service-bb5b58678-jxzqr    2/2     Running   0          16d
+  mysql-deploy-service-c7857c7c9-wz9gn        1/1     Running   0          17d
+  pgsql-deploy-service-86b4dc4899-ppltc       1/1     Running   0          17d
+  rag-deploy-service-5b7887644c-sm58z         2/2     Running   0          110m
+  redis-deploy-service-f8866b56-kj9jz         1/1     Running   0          17d
+  vectorize-deploy-service-57f5f94ccf-sbhzp   2/2     Running   0          17d
+  web-deploy-service-74fbf7999f-r46rg         1/1     Running   0          2d
+  # Enter the rag pod.
+  root@openeuler:~# kubectl -n euler-copilot exec -it rag-deploy-service-5b7887644c-sm58z -- bash
+  ```
+
+- Set PYTHONPATH:
+
+  ```bash
+  # Set PYTHONPATH.
+  export PYTHONPATH=$(pwd)
+  ```
+
+## Uploading Corpora
+
+### Viewing the Script Help Information
+
+```bash
+python3 scripts/rag_kb_manager.pyc --help
+usage: rag_kb_manager.pyc [-h] --method
+                          {init_database_info,init_rag_info,init_database,clear_database,create_kb,del_kb,query_kb,create_kb_asset,del_kb_asset,query_kb_asset,up_corpus,del_corpus,query_corpus,stop_corpus_uploading_job}
+                          [--database_url DATABASE_URL] [--vector_agent_name VECTOR_AGENT_NAME] [--parser_agent_name PARSER_AGENT_NAME]
+                          [--rag_url RAG_URL] [--kb_name KB_NAME] [--kb_asset_name KB_ASSET_NAME] [--corpus_dir CORPUS_DIR]
+                          [--corpus_chunk CORPUS_CHUNK] [--corpus_name CORPUS_NAME] [--up_chunk UP_CHUNK]
+                          [--embedding_model {TEXT2VEC_BASE_CHINESE_PARAPHRASE,BGE_LARGE_ZH,BGE_MIXED_MODEL}] [--vector_dim VECTOR_DIM]
+                          [--num_cores NUM_CORES]
+
+optional arguments:
+  -h, --help            show this help message and exit
+  --method {init_database_info,init_rag_info,init_database,clear_database,create_kb,del_kb,query_kb,create_kb_asset,del_kb_asset,query_kb_asset,up_corpus,del_corpus,query_corpus,stop_corpus_uploading_job}
+                        脚本使用模式,有init_database_info(初始化数据库配置)、init_database(初始化数据库)、clear_database(清除数据库)、create_kb(创建资产)、
+                        del_kb(删除资产)、query_kb(查询资产)、create_kb_asset(创建资产库)、del_kb_asset(删除资产库)、query_kb_asset(查询
+                        资产库)、up_corpus(上传语料,当前支持txt、html、pdf、docx和md格式)、del_corpus(删除语料)、query_corpus(查询语料)和
+                        stop_corpus_uploading_job(上传语料失败后,停止当前上传任务)
+  --database_url DATABASE_URL
+                        语料资产所在数据库的url
+  --vector_agent_name VECTOR_AGENT_NAME
+                        向量化插件名称
+  --parser_agent_name PARSER_AGENT_NAME
+                        分词插件名称
+  --rag_url RAG_URL     rag服务的url
+  --kb_name KB_NAME     资产名称
+  --kb_asset_name KB_ASSET_NAME
+                        资产库名称
+  --corpus_dir CORPUS_DIR
+                        待上传语料所在路径
+  --corpus_chunk CORPUS_CHUNK
+                        语料切割尺寸
+  --corpus_name CORPUS_NAME
+                        待查询或者待删除语料名
+  --up_chunk UP_CHUNK   语料单次上传个数
+  --embedding_model {TEXT2VEC_BASE_CHINESE_PARAPHRASE,BGE_LARGE_ZH,BGE_MIXED_MODEL}
+                        初始化资产时决定使用的嵌入模型
+  --vector_dim VECTOR_DIM
+                        向量化维度
+  --num_cores NUM_CORES
+                        语料处理使用核数
+```
+
+The tool prints its help text in Chinese; the --method values correspond to the operations described in the steps below.
+
+### Procedure
+
+Commands marked **initialization** below must be executed in the relative order in which they appear in this guide before any asset management is performed. Commands marked **repeatable** can be executed again at any later time. Commands marked **caution** must be executed carefully.
+
+### Step 1: Configuring Database and RAG Information
+
+- #### Configuring database information (initialization)
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method init_database_info --database_url postgresql+psycopg2://postgres:123456@{dabase_url}:{databse_port}/postgres
+```
+
+**Note:**
+
+**{dabase_url}** is the URL for accessing the postgres service inside the k8s cluster. Modify it according to your deployment. It generally takes the format **{postgres_servive_name}-{{ .Release.Name }}.\.svc.cluster.local**, where **{postgres_servive_name}** can be found in :
+
+![Name of the postgres service in the k8s cluster](./pictures/local-asset-library-setup/k8s集群中postgres服务的名称.png)
+
+**{{ .Release.Name }}** and **\** are the **my-release-name** and **my-namespace** specified when the application was installed with helm during deployment. A helm installation command looks like this:
+
+```bash
+helm install my-release-name --namespace my-namespace path/to/chart
+```
+
+**database_port** can be found in  (generally 5432):
+
+![postgres service port](./pictures/local-asset-library-setup/postgres服务端口.png)
+
+After the database information configuration command is executed, a database_info.json file appears under scripts/config with the following content:
+
+```bash
+{"database_url": "postgresql+psycopg2://postgres:123456@{dabase_url}:{databse_port}/postgres"}
+```
+
+The following screenshot shows a successful execution:
+
+![Database information configured successfully](./pictures/local-asset-library-setup/数据库配置信息成功.png)
+
+- #### Configuring RAG information (initialization)
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method init_rag_info --rag_url http://{rag_url}:{rag_port}
+```
+
+**{rag_url}** is 0.0.0.0, and **{rag_port}** can be obtained from  (generally 8005):
+
+![rag_port](./pictures/local-asset-library-setup/rag_port.png)
+
+After the command is executed, a rag_info.json file appears under scripts/config with the following content:
+
+```bash
+{"rag_url": "http://{rag_url}:{rag_port}"}
+```
+
+The following screenshot shows a successful execution:
+
+![RAG information configured successfully](./pictures/local-asset-library-setup/rag配置信息成功.png)
+
+### Step 2: Initializing the Database
+
+- #### Initializing database tables
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method init_database
+# Note:
+# For special relational databases, the plugin parameters '--vector_agent_name VECTOR_AGENT_NAME' and '--parser_agent_name PARSER_AGENT_NAME' can be specified. VECTOR_AGENT_NAME defaults to vector, and PARSER_AGENT_NAME defaults to zhparser.
+```
+
+After the command is executed, you can enter the database container to check whether the tables are created. First, obtain the names of all pods in the namespace:
+
+```bash
+# Get all pods in the namespace.
+kubectl get pods -n euler-copilot
+```
+
+The result is as follows:
+
+![Obtaining the database pod name](./pictures/local-asset-library-setup/获取数据库pod名称.png)
+
+Enter the database container with the following command:
+
+```bash
+kubectl exec -it pgsql-deploy-b4cc79794-qn8zd -n euler-copilot -- bash
+```
+
+Inside the container, enter the database with the following command:
+
+```bash
+root@pgsql-deploy-b4cc79794-qn8zd:/tmp# psql -U postgres
+```
+
+Then use \dt to check the database initialization. The following output indicates that the database is initialized successfully:
+
+![Database initialization](./pictures/local-asset-library-setup/数据库初始化.png)
+
+- #### Clearing the database (caution)
+
+  To clear all database data produced by RAG, use the following command (**this command clears the entire database; execute it with extreme caution!**).
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method clear_database
+# Clear the database with caution.
+```
+
+### Step 3: Creating Assets
+
+ If kb_name is not specified in the following commands, the default asset name default_test is used (note: Copilot does not allow two assets with the same name):
+
+- #### Creating an asset (repeatable)
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method create_kb --kb_name default_test
+```
+
+The following message indicates that the asset is created successfully:
+
+![Asset created successfully](./pictures/local-asset-library-setup/创建资产成功.png)
+
+Creating an asset with a duplicate name produces the following message:
+
+![Creating a duplicate asset fails](./pictures/local-asset-library-setup/重复创建资产失败.png)
+
+- #### Deleting an asset (repeatable)
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method del_kb --kb_name default_test
+```
+
+The following message indicates that the asset is deleted successfully (all asset libraries and corpus assets under the asset are deleted):
+
+![Asset deleted successfully](./pictures/local-asset-library-setup/删除资产成功.png)
+
+Deleting an asset that does not exist produces the following message:
+
+![Deleting a nonexistent asset fails](./pictures/local-asset-library-setup/删除不存在的资产失败.png)
+
+- #### Querying assets (repeatable)
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method query_kb
+```
+
+A successful query produces the following output:
+
+![Querying assets](./pictures/local-asset-library-setup/查询资产.png)
+
+Querying assets when no asset exists produces the following output:
+
+![Querying assets when none exist](./pictures/local-asset-library-setup/无资产时查询资产.png)
+
+### Step 4: Creating an Asset Library
+
+If the asset name (kb_name) and asset library name (kb_asset_name) are not specified in the following commands, the default asset name default_test and asset library name default_test_asset are used (note: Copilot does not allow two asset libraries with the same name under one asset):
+
+- #### Creating an asset library (repeatable)
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method create_kb_asset --kb_name default_test --kb_asset_name default_test_asset
+# Create an asset library that belongs to default_test.
+```
+
+Successful creation of an asset library produces the following output:
+
+![Asset library created successfully](./pictures/local-asset-library-setup/资产库创建成功.png)
+
+Creating an asset library under an asset that does not exist produces the following output:
+
+![Creating an asset library under a nonexistent asset fails](./pictures/local-asset-library-setup/指定不存在的资产创建资产库失败.png)
+
+Creating an asset library with a duplicate name under the same asset produces the following output:
+
+![Asset library creation fails because a library with the same name exists under the asset](./pictures/local-asset-library-setup/创建资产库失败由于统一资产下存在同名资产库.png)
+
+- #### Deleting an asset library (repeatable)
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method del_kb_asset --kb_name default_test --kb_asset_name default_test_asset
+```
+
+Successful deletion of an asset library produces the following output:
+
+![Asset library deleted successfully](./pictures/local-asset-library-setup/资产库删除成功png.png)
+
+Deleting an asset library that does not exist produces the following output:
+
+![The asset has no such asset library](./pictures/local-asset-library-setup/删除资产库失败,资产下不存在对应资产库.png)
+
+Deleting an asset library under an asset that does not exist produces the following output:
+
+![The asset does not exist](./pictures/local-asset-library-setup/资产库删除失败,不存在资产.png)
+
+- #### Querying asset libraries (repeatable)
+
+```bash
+python3 scripts/rag_kb_manager.pyc --method query_kb_asset --kb_name default_test
+# Note: assets are the top level; asset libraries belong to assets, and their names cannot be duplicated.
+```
+
+A successful query produces the following output:
+
+![Asset libraries under the asset queried successfully](./pictures/local-asset-library-setup/资产下查询资产库成功.png)
+
+Querying asset libraries when the asset contains none produces the following output:
+
+![No asset library found under the asset](./pictures/local-asset-library-setup/资产下未查询到资产库.png)
+
+Querying asset libraries under an asset that does not exist produces the following output:
+
+![The asset does not exist](./pictures/local-asset-library-setup/资产库查询失败,不存在资产.png)
+
+### Step 5: Uploading Corpora
+
+If the asset name (kb_name) and asset library name (kb_asset_name) are not specified in the following commands, the default asset name default_test and asset library name default_test_asset are used. The corpus deletion command requires the full corpus name (corpora are uniformly docx
格式保存在数据库中,可以通过查询语料命令查看已上传的文档名称);对于查询语料命令可以不指定语料名称(corpus_name),此时默认查询所有语料,可以指定部分或者完整的语料名,此时通过模糊搜索匹配数据库内相关的语料名称。 + +- 上传语料 + +```bash +python3 scripts/rag_kb_manager.pyc --method up_corpus --corpus_dir ./scripts/docs/ --kb_name default_test --kb_asset_name default_test_asset +# 注意: +# 1. RAG容器用于存储用户语料的目录路径是'./scripts/docs/'。在执行相关命令前,请确保该目录下已有本地上传的语料。 +# 2. 若语料已上传但查询未果,请检查宿主机上的待向量化语料目录(位于/home/euler-copilot/docs)的权限设置。 +# 为确保无权限问题影响,您可以通过运行chmod 755 /home/euler-copilot/docs命令来赋予该目录最大访问权限。 +``` + +对于语料上传成功会出现以下内容: + +![语料上传成功](./pictures/local-asset-library-setup/语料上传成功.png) + +对于语料具体的分割和上传情况可以在 logs/app.log 下查看,内容如下: + +![查看文档产生片段总数和上传成功总数](./pictures/local-asset-library-setup/查看文档产生片段总数和上传成功总数.png) + +- 删除语料 + +```bash +python3 scripts/rag_kb_manager.pyc --method del_corpus --corpus_name abc.docx --kb_name default_test --kb_asset_name default_test_asset +# 上传的文件统一转换为docx +``` + +对于语料删除成功会出现以下内容: + +![删除语料](./pictures/local-asset-library-setup/删除语料.png) + +对于删除不存在的语料会出现以下内容: + +![语料删除失败](./pictures/local-asset-library-setup/语料删除失败,未查询到相关语料.png) + +- 查询语料 + +```bash +# 查询指定名称的语料: +python3 scripts/rag_kb_manager.pyc --method query_corpus --corpus_name 语料名.docx +# 查询所有语料: +python3 scripts/rag_kb_manager.pyc --method query_corpus +``` + +对于查询所有语料会出现以下内容: + +![查询全部语料](./pictures/local-asset-library-setup/查询全部语料.png) + +- 停止上传任务 + +```bash +python3 scripts/rag_kb_manager.pyc --method stop_corpus_uploading_job +``` + +对于某些极端条件下(例如内存受限),上传语料失败,需要执行上面shell命令用于清除语料上传失败的缓存。 + +## 网页端查看语料上传进度 + +您可以灵活设置端口转发规则,通过执行如下命令将容器端口映射到主机上的指定端口,并在任何设备上通过访问 `http://<主机IP>:<映射端口>`(例如 )来查看语料上传的详细情况。 + +```bash +kubectl port-forward rag-deploy-service-5b7887644c-sm58z 3000:8005 -n euler-copilot --address=0.0.0.0 +# 注意: 3000是主机上的端口,8005是rag的容器端口,可修改映射到主机上的端口 +``` + +## 验证上传后效果 + +上传语料成功之后你可以通过以下命令直接与 RAG 交互,来观察语料是否上传成功。 + +```bash +curl -k -X POST "http://{rag_url}:{rag_port}/kb/get_answer" -H "Content-Type: application/json" -d '{ \ + "question": "question", \ + "kb_sn": "kb_name", \ + 
"fetch_source": true, \ + "top_k": 3 \ +}' +``` + +- `question`:问题 + +- `kb_sn`:资产库名称 + +- `fetch_source`:是否返回关联片段以及片段来源,`false` 代表不返回,`true` 代表返回 + +- `top_k`:关联语料片段个数,需要大于等于3 diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/network-environment-deployment-guide.md b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/network-environment-deployment-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..b707e6f42f213c2da3b117f55259ea78d6e05a90 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/network-environment-deployment-guide.md @@ -0,0 +1,617 @@ +# 网络环境部署指南 + +## 介绍 + +openEuler Copilot System 是一款智能问答工具,使用 openEuler Copilot System 可以解决操作系统知识获取的便捷性,并且为OS领域模型赋能开发者及运维人员。作为获取操作系统知识,使能操作系统生产力工具 (如 A-Ops / A-Tune / x2openEuler / EulerMaker / EulerDevOps / StratoVirt / iSulad 等),颠覆传统命令交付方式,由传统命令交付方式向自然语义进化,并结合智能体任务规划能力,降低开发、使用操作系统特性的门槛。 + +### 组件介绍 + +| 组件 | 端口 | 说明 | +| ----------------------------- | --------------- | -------------------- | +| euler-copilot-framework | 8002 (内部端口) | 智能体框架服务 | +| euler-copilot-web | 8080 | 智能体前端界面 | +| euler-copilot-rag | 8005 (内部端口) | 检索增强服务 | +| euler-copilot-vectorize-agent | 8001 (内部端口) | 文本向量化服务 | +| mysql | 3306 (内部端口) | MySQL数据库 | +| redis | 6379 (内部端口) | Redis数据库 | +| postgres | 5432 (内部端口) | 向量数据库 | +| secret_inject | 无 | 配置文件安全复制工具 | + +## 环境要求 + +### 软件要求 + +| 类型 | 版本要求 | 说明 | +|------------| -------------------------------------|--------------------------------------| +| 操作系统 | openEuler 22.03 LTS 及以上版本 | 无 | +| K3s | >= v1.30.2,带有 Traefik Ingress 工具 | K3s 提供轻量级的 Kubernetes 集群,易于部署和管理 | +| Helm | >= v3.15.3 | Helm 是一个 Kubernetes 的包管理工具,其目的是快速安装、升级、卸载 openEuler Copilot System 服务 | +| python | >=3.9.9 | python3.9.9 以上版本为模型的下载和安装提供运行环境 | + +### 硬件要求 + +| 类型 | 硬件要求 | +|----------------| -----------------------------| +| 服务器 | 1台 | +| CPU | 鲲鹏或x86_64,>= 32 cores | +| RAM | >= 64GB | +| 存储 | >= 500 GB | +| GPU | Tesla V100 16GB,4张 | +| NPU | 
910ProB、910B | + +注意: + +1. 若无 GPU 或 NPU 资源,建议通过调用 OpenAI 接口的方式来实现功能。(接口样例: 参考链接:[API-KEY的获取与配置](https://help.aliyun.com/zh/dashscope/developer-reference/acquisition-and-configuration-of-api-key?spm=a2c4g.11186623.0.0.30e7694eaaxxGa)) +2. 调用第三方 OpenAI 接口的方式不需要安装高版本的 python (>=3.9.9) +3. 英伟达 GPU 对 Docker 的支持必需要新版本 Docker (>= v25.4.0) +4. 如果k8s集群环境,则不需要单独安装k3s,要求version >= 1.28 + +### 部署视图 + +![部署图](./pictures/部署视图.png) + +## 获取 openEuler Copilot System + +- 从 openEuler Copilot System 的官方Git仓库 [euler-copilot-framework](https://gitee.com/openeuler/euler-copilot-framework) 下载最新的部署仓库 +- 如果您正在使用 Kubernetes,则不需要安装 k3s 工具。 + +```bash +# 下载目录以 home 为例 +cd /home +``` + +```bash +git clone https://gitee.com/openeuler/euler-copilot-framework.git +``` + +## 环境准备 + +设备需联网并符合 openEuler Copilot System 的最低软硬件要求。确认服务器、硬件、驱动等准备就绪后,即可开始环境准备工作。为了顺利进行后续操作,请按照指引,先进入我 +们的脚本部署目录,并且按照提供的操作步骤和脚本路径依次执行,以确保初始化成功。 + +```bash +# 进入部署脚本目录 +cd /home/euler-copilot-framework/euler-copilot-helm/scripts && tree +``` + +```bash +. 
+├── check_env.sh +├── download_file.sh +├── get_log.sh +├── install_tools.sh +└── prepare_docker.sh +``` + +| 序号 | 步骤内容 | 相关指令 | 说明 | +|-------------- |----------|---------------------------------------------|------------------------------------------ | +|1| 环境检查 | `bash check_env.sh` | 主要对服务器的主机名、DNS、防火墙设置、磁盘剩余空间大小、网络、检查SELinux的设置 | +|2| 文件下载 | `bash download_file.sh` | 模型bge-reranker-large、bge-mixed-mode下载 | +|3| 安装部署工具 | `bash install_tools.sh v1.30.2+k3s1 v3.15.3 cn` | 安装helm、k3s工具。注意:cn的使用是使用镜像站,可以去掉不用 | +|4| 大模型准备 | 提供第三方 OpenAI 接口或基于硬件本都部署大模型 | 本地部署大模型可参考附录部分 | + +## 安装 + +您的环境现已就绪,接下来即可启动 openEuler Copilot System 的安装流程。 + +- 下载目录以home为例,进入 openEuler Copilot System 仓库的 Helm 配置文件目录 + + ```bash + cd /home/euler-copilot-framework && ll + ``` + + ```bash + total 28 + drwxr-xr-x 3 root root 4096 Aug 28 17:45 docs/ + drwxr-xr-x 5 root root 4096 Aug 28 17:45 euler-copilot-helm/ + ``` + +- 查看euler-copilot-helm的目录 + + ```bash + tree euler-copilot-helm + ``` + + ```bash + euler-copilot-helm/chart + ├── databases + │   ├── Chart.yaml + │   ├── configs + │   ├── templates + │   └── values.yaml + ├── authhub + │   ├── Chart.yaml + │   ├── configs + │   ├── templates + │   └── values.yaml + └── euler_copilot + ├── Chart.yaml + ├── configs + ├── templates + │   ├── NOTES.txt + │   ├── rag + │   ├── vectorize + │   └── web + └── values.yaml + ``` + +### 1. 安装数据库 + +- 编辑 values.yaml + + ```bash + cd euler-copilot-helm/chart/databases + ``` + + 仅需修改镜像tag为对应架构,其余可不进行修改 + + ```bash + vim values.yaml + ``` + +- 创建命名空间 + + ```bash + kubectl create namespace euler-copilot + ``` + + 设置环境变量 + + ```bash + export KUBECONFIG=/etc/rancher/k3s/k3s.yaml + ``` + +- 安装数据库 + + ```bash + helm install -n euler-copilot databases . 
+  ```
+
+- Check the pod status
+
+  ```bash
+  kubectl -n euler-copilot get pods
+  ```
+
+  ```bash
+  pgsql-deploy-databases-86b4dc4899-ppltc   1/1   Running   0   17d
+  redis-deploy-databases-f8866b56-kj9jz     1/1   Running   0   17d
+  mysql-deploy-databases-57f5f94ccf-sbhzp   2/2   Running   0   17d
+  ```
+
+- If mysql was deployed on this server before, delete the leftover pvc first and then deploy databases.
+
+  ```bash
+  # Get the pvc
+  kubectl -n euler-copilot get pvc
+  ```
+
+  ```bash
+  # Delete the pvc
+  kubectl -n euler-copilot delete pvc mysql-pvc
+  ```
+
+### 2. Install the AuthHub Authentication Platform
+
+- Edit values.yaml
+
+  ```bash
+  cd euler-copilot-helm/chart/authhub
+  ```
+
+  Modify the \[required] items marked in the comments of the YAML file
+
+  ```bash
+  vim values.yaml
+  ```
+
+  - Note:
+    1. AuthHub requires a domain name; apply for one in advance or configure it in 'C:\Windows\System32\drivers\etc\hosts'.
+       authhub and euler-copilot must be two subdomains of the same root domain, for example authhub.test.com and eulercopilot.test.com
+    2. Change the tag to the one matching your architecture.
+
+- Install AuthHub
+
+  ```bash
+  helm install -n euler-copilot authhub .
+  ```
+
+  The default AuthHub account is `administrator`, with password `changeme`
+
+- Check the pod status
+
+  ```bash
+  kubectl -n euler-copilot get pods
+  ```
+
+  ```bash
+  NAME                                              READY   STATUS    RESTARTS   AGE
+  authhub-backend-deploy-authhub-64896f5cdc-m497f   2/2     Running   0          16d
+  authhub-web-deploy-authhub-7c48695966-h8d2p       1/1     Running   0          17d
+  pgsql-deploy-databases-86b4dc4899-ppltc           1/1     Running   0          17d
+  redis-deploy-databases-f8866b56-kj9jz             1/1     Running   0          17d
+  mysql-deploy-databases-57f5f94ccf-sbhzp           2/2     Running   0          17d
+  ```
+
+- Log in to AuthHub
+
+  Taking `authhub.test.com` as the AuthHub domain, open `https://authhub.test.com` in a browser; the login page looks like this:
+
+  ![AuthHub login page](./pictures/authhub登录界面.png)
+
+- Create the eulercopilot application
+
+  ![Create application page](./pictures/创建应用界面.png)
+  Click Create Application and enter the application name, home page, and callback URL (the post-login callback), for example:
+  - Application name: eulercopilot
+  - Home page:
+  - Callback URL:
+  - After the application is created, a Client ID and Client Secret are generated; add them to the configuration file `euler-copilot-helm/chart/euler_copilot/values.yaml`, using eulercopilot as the example application
+
+  ![Application created page](./pictures/创建应用成功界面.png)
+
+### 3. Install openEuler Copilot System
+
+- Edit values.yaml
+
+  ```bash
+  cd euler-copilot-helm/chart/euler_copilot
+  ```
+
+  Modify the \[required] items marked in the comments of the YAML file
+
+  ```bash
+  vim values.yaml
+  ```
+
+  - Note:
+    1. Check the system architecture and update the tag in values.yaml accordingly;
+    2. Set the globals domain in values.yaml to the EulerCopilot domain name, and configure the LLM information
+    3. Manually create the three mount directories `docs_dir`, `plugin_dir`, and `models`
+    4. Update the web_url and oidc settings in the framework section of values.yaml
+    5. If you deploy plugins, a model for Function Call must be configured; this requires a GPU environment to deploy sglang (see the appendix)
+
+- Install openEuler Copilot System
+
+  ```bash
+  helm install -n euler-copilot service .
+  ```
+
+- Check the pod status
+
+  ```bash
+  kubectl -n euler-copilot get pods
+  ```
+
+  Pulling the images may take about a minute; please be patient. After a successful deployment, all pods should be in the Running state.
+
+  ```bash
+  NAME                                              READY   STATUS    RESTARTS   AGE
+  authhub-backend-deploy-authhub-64896f5cdc-m497f   2/2     Running   0          16d
+  authhub-web-deploy-authhub-7c48695966-h8d2p       1/1     Running   0          17d
+  pgsql-deploy-databases-86b4dc4899-ppltc           1/1     Running   0          17d
+  redis-deploy-databases-f8866b56-kj9jz             1/1     Running   0          17d
+  mysql-deploy-databases-57f5f94ccf-sbhzp           2/2     Running   0          17d
+  framework-deploy-service-bb5b58678-jxzqr          2/2     Running   0          16d
+  rag-deploy-service-5b7887644c-sm58z               2/2     Running   0          110m
+  vectorize-deploy-service-57f5f94ccf-sbhzp         2/2     Running   0          17d
+  web-deploy-service-74fbf7999f-r46rg               1/1     Running   0          2d
+  ```
+
+  Note: if any pod enters a failed state, troubleshoot with the following steps
+
+  1. Check the Kubernetes cluster events for more context about the pod failure
+
+     ```bash
+     kubectl -n euler-copilot get events
+     ```
+
+  2. Check whether the images were pulled successfully
+
+     ```bash
+     k3s crictl images
+     ```
+
+  3. Check the RAG pod logs for error messages or abnormal behavior.
+
+     ```bash
+     kubectl logs rag-deploy-service-5b7887644c-sm58z -n euler-copilot
+     ```
+
+  4. Check the resource status of the Kubernetes cluster and confirm that server resources and quotas are sufficient; insufficient resources are a common cause of image pull failures.
+
+     ```bash
+     df -h
+     ```
+
+  5. If an image was not pulled and its size is 0, check whether the k3s version is below the required v1.30.2
+
+     ```bash
+     k3s -v
+     ```
+
+  6. Confirm that the OIDC settings of framework in values.yaml are configured correctly so that authentication and authorization work.
+
+     ```bash
+     vim /home/euler-copilot-framework/euler-copilot-helm/chart/euler_copilot/values.yaml
+     ```
+
+## Verifying the Installation
+
+Congratulations, the deployment of openEuler Copilot System is complete! You can now start exploring intelligent Q&A.
+Open  in a browser (the default port is 8080; adjust it if you changed it) to access the openEuler Copilot System web page and try the Q&A experience.
+
+![Web UI](./pictures/WEB界面.png)
+
+## Installing Plugins
+
+For details, see the [Plugin Deployment Guide](./plugin-deployment-guide/)
+
+## Building Domain-Specific Intelligent Q&A
+
+### 1. Build intelligent Q&A for the openEuler knowledge domain
+
+1. Change the pg image repository in values.yaml to `pg-data`
+2. Set the field `knowledgebaseID: openEuler_2bb3029f` in the rag section of values.yaml
+3. Comment out the volumes-related fields in `euler-copilot-helm/chart/databases/templates/pgsql/pgsql-deployment.yaml`
+4. Go to `euler-copilot-helm/chart/databases` and update the service: `helm upgrade -n euler-copilot databases .`
+5. Go to `euler-copilot-helm/chart/euler_copilot` and update the service: `helm upgrade -n euler-copilot service .`
+6. Use the web page for Q&A in the openEuler knowledge domain
+
+### 2. Build intelligent Q&A for a project-specific knowledge domain
+
+For details, see the [Local Asset Library Setup Guide](local-asset-library-setup-guide.md)
+
+## Appendix
+
+### LLM Preparation
+
+#### GPU environment
+
+Deploy as follows
+
+1. Download the model files:
+
+   ```bash
+   huggingface-cli download --resume-download Qwen/Qwen1.5-14B-Chat --local-dir Qwen1.5-14B-Chat
+   ```
+
+2. Create a terminal named control
+
+   ```bash
+   screen -S control
+   ```
+
+   ```bash
+   python3 -m fastchat.serve.controller
+   ```
+
+   - Press Ctrl+A, then D to put it in the background
+
+3. Create a new terminal named api
+
+   ```bash
+   screen -S api
+   ```
+
+   ```bash
+   python3 -m fastchat.serve.openai_api_server --host 0.0.0.0 --port 30000 --api-keys sk-123456
+   ```
+
+   - Press Ctrl+A, then D to put it in the background
+   - If the current Python version is 3.12 or 3.9, you can create a Python 3.10 conda environment
+
+   ```bash
+   mkdir -p /root/py310
+   ```
+
+   ```bash
+   conda create --prefix=/root/py310 python==3.10.14
+   ```
+
+   ```bash
+   conda activate /root/py310
+   ```
+
+4. Create a new terminal named worker
+
+   ```bash
+   screen -S worker
+   ```
+
+   ```bash
+   screen -r worker
+   ```
+
+   Install fastchat and vllm
+
+   ```bash
+   pip install fschat vllm
+   ```
+
+   Install the dependencies:
+
+   ```bash
+   pip install fschat[model_worker]
+   ```
+
+   ```bash
+   python3 -m fastchat.serve.vllm_worker --model-path /root/models/Qwen1.5-14B-Chat/ --model-name qwen1.5 --num-gpus 8 --gpu-memory-utilization=0.7 --dtype=half
+   ```
+
+   - Press Ctrl+A, then D to put it in the background
+
+5. Edit the configuration as follows, then update the service.
+
+   ```bash
+   vim euler-copilot-helm/chart/euler_copilot/values.yaml
+   ```
+
+   Modify this section
+
+   ```yaml
+   llm:
+     # Open-source LLM with an OpenAI-compatible API
+     openai:
+       url: "http://$(IP):30000"
+       key: "sk-123456"
+       model: qwen1.5
+       max_tokens: 8192
+   ```
+
+#### NPU environment
+
+For NPU deployment, see the [MindIE Installation Guide](https://www.hiascend.com/document/detail/zh/mindie/10RC2/whatismindie/mindie_what_0001.html)
+
+## FAQ
+
+### 1. huggingface reports an error?
+
+```text
+File "/usr/lib/python3.9/site-packages/urllib3/connection.py", line 186, in _new_conn
+raise NewConnectionError(
+urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 101] Network is unreachable
+```
+
+- Solution
+
+```bash
+pip3 install -U huggingface_hub
+```
+
+```bash
+export HF_ENDPOINT=https://hf-mirror.com
+```
+
+### 2. How to call the Q&A answer API inside the RAG container?
+
+- First enter the RAG pod
+
+```bash
+curl -k -X POST "http://localhost:8005/kb/get_answer" -H "Content-Type: application/json" -d '{
+  "question": "",
+  "kb_sn": "default_test",
+  "fetch_source": true }'
+```
+
+### 3. `helm upgrade` reports an error
+
+```text
+Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused
+```
+
+or
+
+```text
+Error: UPGRADE FAILED: Kubernetes cluster unreachable: the server could not find the requested resource
+```
+
+- Solution
+
+```bash
+export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+```
+
+### 4. Unable to view pod logs?
+
+```text
+[root@localhost euler-copilot]# kubectl logs rag-deploy-service-65c75c48d8-44vcp -n euler-copilot
+Defaulted container "rag" out of: rag, rag-copy-secret (init)
+Error from server: Get "https://172.21.31.11:10250/containerLogs/euler-copilot/rag-deploy-service-65c75c48d8-44vcp/rag": Forbidden
+```
+
+- Solution
+  If a proxy is configured, remove the local IP from the proxy settings
+
+```bash
+cat /etc/systemd/system/k3s.service.env
+```
+
+```text
+http_proxy="http://172.21.60.51:3128"
+https_proxy="http://172.21.60.51:3128"
+no_proxy=172.21.31.10 # Exclude the local IP from the proxy
+```
+
+### 5. Streaming replies fail when the LLM is deployed in a GPU environment?
+
+On the server, curl to the LLM fails, but changing `"stream": true` to `"stream": false` makes the curl succeed?
+
+```bash
+curl http://localhost:30000/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer sk-123456" -d '{
+"model": "qwen1.5",
+"messages": [
+{
+"role": "system",
+"content": "You are a sentiment analysis expert; your task is xxxx"
+},
+{
+"role": "user",
+"content": "Hello"
+}
+],
+"stream": true,
+"n": 1,
+"max_tokens": 32768
+}'
+```
+
+- Solution:
+
+```bash
+pip install pydantic==1.10.13
+```
+
+### 6. How to deploy sglang?
+
+```bash
+# 1. Activate the Python 3.10 Conda environment. Assuming your environment is named `myenv`:
+conda activate myenv
+
+# 2. In the activated environment, install sglang[all] and flashinfer
+pip install sglang[all]==0.3.0
+pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
+
+# 3. Start the server
+python -m sglang.launch_server --served-model-name Qwen2.5-32B --model-path Qwen2.5-32B-Instruct-AWQ --host 0.0.0.0 --port 8001 --api-key sk-12345 --mem-fraction-static 0.5 --tp 8
+```
+
+- Verify the installation
+
+```bash
+pip show sglang
+pip show flashinfer
+```
+
+- Note:
+
+1. API key: make sure the API key in the `--api-key` parameter is correct
+2. Model path: make sure the path in the `--model-path` parameter is correct and the model files exist there.
+3. CUDA version: make sure CUDA 12.1 and PyTorch 2.4 are installed on your system, because the `flashinfer` package depends on these specific versions.
+4. Tensor parallelism: adjust `--tp` to your GPU resources and expected load; with 8 GPUs, `--tp 8` makes full use of them.
+
+### 7. How to curl the embedding service
+
+```bash
+curl -k -X POST http://$IP:8001/embedding \
+  -H "Content-Type: application/json" \
+  -d '{"texts": ["sample text 1", "sample text 2"]}'
+# $IP is the internal address of the vectorize embedding service
+```
+
+### 8. How to generate certificates
+
+```bash
+# Download address: https://github.com/FiloSottile/mkcert/releases
+# 1. Download mkcert
+# x86_64
+wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64
+# arm64
+wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-arm64
+# 2. Run the commands below to generate the key
+mkcert -install
+# mkcert accepts a domain name or IP directly and generates the certificate and key
+mkcert example.com
+# 3. Copy the certificate and key into `/home/euler-copilot-framework_openeuler/euler-copilot-helm/chart_ssl/traefik-secret.yaml`, then run the command below to apply it.
+kubectl apply -f traefik-secret.yaml
+```
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/offline-environment-deployment-guide.md b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/offline-environment-deployment-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..89552eda2044fc8925f23ba5f4aeb6a238b7ad7e
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/offline-environment-deployment-guide.md
@@ -0,0 +1,733 @@
+# Offline Environment Deployment Guide
+
+## Introduction
+
+openEuler Copilot System is an intelligent Q&A tool that makes operating system knowledge convenient to obtain and brings OS domain models to developers and O&M personnel. As an entry point to operating system knowledge, it enables OS productivity tools (such as A-Ops / A-Tune / x2openEuler / EulerMaker / EulerDevOps / StratoVirt / iSulad), evolves interaction from traditional command delivery to natural language, and, combined with agent task-planning capabilities, lowers the barrier to developing and using operating system features.
+
+### Components
+
+| Component | Port | Description |
+| ----------------------------- | --------------- | -------------------- |
+| euler-copilot-framework | 8002 (internal) | Agent framework service |
+| euler-copilot-web | 8080 | Agent web UI |
+| euler-copilot-rag | 8005 (internal) | Retrieval-augmented service |
+| euler-copilot-vectorize-agent | 8001 (internal) | Text vectorization service |
+| mysql | 3306 (internal) | MySQL database |
+| redis | 6379 (internal) | Redis database |
+| postgres | 5432 (internal) | Vector database |
+| secret_inject | None | Tool for securely copying configuration files |
+
+## Requirements
+
+### Software
+
+| Type | Version | Description |
+|------------| -------------------------------------|--------------------------------------|
+| OS | openEuler 22.03 LTS or later | None |
+| K3s | >= v1.30.2, with the Traefik Ingress tool | K3s provides a lightweight Kubernetes cluster that is easy to deploy and manage |
+| Helm | >= v3.15.3 | Helm is a Kubernetes package manager for quickly installing, upgrading, and uninstalling the openEuler Copilot System services |
+| python | >= 3.9.9 | Python 3.9.9 or later provides the runtime for downloading and installing models |
+
+### Hardware
+
+| Type | Requirement |
+|----------------| -----------------------------|
+| Server | 1 |
+| CPU | Kunpeng or x86_64, >= 32 cores |
+| RAM | >= 64 GB |
+| Storage | >= 500 GB |
+| GPU | Tesla V100 16GB, 4 cards |
+| NPU | 910ProB, 910B |
+
+Note:
+
+1. If no GPU or NPU resources are available, it is recommended to implement the functionality by calling an OpenAI-compatible API instead. (API example: )
+2. Calling a third-party OpenAI-compatible API does not require a recent Python (>= 3.9.9)
+3. NVIDIA GPU support in Docker requires a recent Docker version (>= v25.4.0)
+
+### Deployment View
+
+![Deployment view](./pictures/部署视图.png)
+
+## Getting openEuler Copilot System
+
+- Download the latest deployment repository from the official Git repository of openEuler Copilot System, [euler-copilot-framework](https://gitee.com/openeuler/euler-copilot-framework)
+- If you are already using Kubernetes, the k3s tool does not need to be installed.
+
+  ```bash
+  # /home is used as the download directory in this example
+  cd /home
+  ```
+
+  ```bash
+  git clone https://gitee.com/openeuler/euler-copilot-framework.git
+  ```
+
+## Preparing the Environment
+
+If your server, hardware, and drivers are all ready, you can start the environment initialization process. The following deployment steps are performed in an environment without public network access.
+
+### 1. Environment Check
+
+The environment check mainly covers the server hostname, DNS, firewall settings, remaining disk space, network, and the SELinux settings.
+
+- Set the hostname
+  Run the following commands in a shell:
+
+  ```bash
+  cat /etc/hostname
+  echo "<hostname>" > /etc/hostname
+  ```
+
+- System DNS: configure a valid DNS for the current host
+- Firewall settings
+
+  ```bash
+  # Check the firewall status
+  systemctl status firewalld
+  # List the firewall rules
+  firewall-cmd --list-all
+  # Disable the firewall
+  systemctl stop firewalld
+  systemctl disable firewalld
+  ```
+
+- SELinux settings
+
+  ```bash
+  # SELinux must be disabled, either temporarily or permanently
+  # Disable SELinux permanently
+  sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
+  # Disable it temporarily
+  setenforce 0
+  ```
+
+### 2. File Download
+
+- Download the model files bge-reranker-large and bge-mixed-model: [model file download link](https://repo.oepkgs.net/openEuler/rpm/openEuler-22.03-LTS/contrib/EulerCopilot/)
+
+  ```bash
+  mkdir -p /home/EulerCopilot/models
+  cd /home/EulerCopilot/models
+  # Place the bge files to download in the models directory
+  wget https://repo.oepkgs.net/openEuler/rpm/openEuler-22.03-LTS/contrib/EulerCopilot/bge-mixed-model.tar.gz
+  wget https://repo.oepkgs.net/openEuler/rpm/openEuler-22.03-LTS/contrib/EulerCopilot/bge-reranker-large.tar.gz
+  ```
+
+- Download the tokenizer text2vec-base-chinese-paraphrase: [tokenizer download link](https://repo.oepkgs.net/openEuler/rpm/openEuler-22.03-LTS/contrib/EulerCopilot/)
+
+  ```bash
+  mkdir -p /home/EulerCopilot/text2vec
+  cd /home/EulerCopilot/text2vec
+  wget https://repo.oepkgs.net/openEuler/rpm/openEuler-22.03-LTS/contrib/EulerCopilot/text2vec-base-chinese-paraphrase.tar.gz
+  ```
+
+- Download the image packages
+  - The component images of the EulerCopilot services for the x86 and arm architectures are provided separately
+
+### 3. Install Deployment Tools
+
+#### 3.1 Install Docker
+
+If you deploy the LLM on GPU/NPU, check whether the Docker version satisfies >= v25.4.0; if not, upgrade Docker
+
+#### 3.2 Install K3s and Import the Images
+
+- Install the SELinux configuration files
+
+  ```bash
+  yum install -y container-selinux selinux-policy-base
+  # The offline "packages" directory contains k3s-selinux-0.1.1-rc1.el7.noarch.rpm
+  rpm -i https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm
+  ```
+
+- Install k3s on x86
+
+  ```bash
+  # On a machine with network access, fetch the k3s packages; v1.30.3+k3s1 is used as the example
+  wget https://github.com/k3s-io/k3s/releases/download/v1.30.3%2Bk3s1/k3s
+  wget https://github.com/k3s-io/k3s/releases/download/v1.30.3%2Bk3s1/k3s-airgap-images-amd64.tar.zst
+  cp k3s /usr/local/bin/
+  cd /var/lib/rancher/k3s/agent
+  mkdir images
+  cp k3s-airgap-images-amd64.tar.zst /var/lib/rancher/k3s/agent/images
+  # The offline "packages" directory contains k3s-install.sh
+  curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh -o k3s-install.sh
+  chmod +x k3s-install.sh
+  INSTALL_K3S_SKIP_DOWNLOAD=true ./k3s-install.sh
+  export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+  ```
+
+- Install k3s on arm
+
+  ```bash
+  # On a machine with network access, fetch the k3s packages; v1.30.3+k3s1 is used as the example
+  wget https://github.com/k3s-io/k3s/releases/download/v1.30.3%2Bk3s1/k3s-arm64
+  wget https://github.com/k3s-io/k3s/releases/download/v1.30.3%2Bk3s1/k3s-airgap-images-arm64.tar.zst
+  cp k3s-arm64 /usr/local/bin/k3s
+  cd /var/lib/rancher/k3s/agent
+  mkdir images
+  cp k3s-airgap-images-arm64.tar.zst /var/lib/rancher/k3s/agent/images
+  # The offline "packages" directory contains k3s-install.sh
+  curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh -o k3s-install.sh
+  chmod +x k3s-install.sh
+  INSTALL_K3S_SKIP_DOWNLOAD=true ./k3s-install.sh
+  export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+  ```
+
+- Import the images
+
+  ```bash
+  # Import the downloaded image files
+  k3s ctr image import <image-file>
+  ```
+
+#### 3.3 Install Helm
+
+- x86_64
+
+  ```bash
+  wget https://get.helm.sh/helm-v3.15.0-linux-amd64.tar.gz
+  tar -xzf helm-v3.15.0-linux-amd64.tar.gz
+  mv linux-amd64/helm /usr/sbin
+  rm -rf linux-amd64
+  ```
+
+- arm64
+
+  ```bash
+  wget https://get.helm.sh/helm-v3.15.0-linux-arm64.tar.gz
+  tar -xzf helm-v3.15.0-linux-arm64.tar.gz
+  mv linux-arm64/helm /usr/sbin
+  rm -rf linux-arm64
+  ```
+
+#### 3.4 LLM Preparation
+
+Provide a third-party OpenAI-compatible API, or deploy an LLM locally on your hardware; see the appendix for local LLM deployment.
+
+## Installation
+
+Your environment is now ready; you can proceed to install openEuler Copilot System.
+
+- With /home as the download directory, enter the Helm configuration directory of the openEuler Copilot System repository
+
+  ```bash
+  cd /home/euler-copilot-framework && ll
+  ```
+
+  ```bash
+  total 28
+  drwxr-xr-x  3 root root 4096 Aug 28 17:45 docs/
+  drwxr-xr-x  5 root root 4096 Aug 28 17:45 euler-copilot-helm/
+  ```
+
+- View the euler-copilot-helm directory
+
+  ```bash
+  tree euler-copilot-helm
+  ```
+
+  ```bash
+  euler-copilot-helm/chart
+  ├── databases
+  │   ├── Chart.yaml
+  │   ├── configs
+  │   ├── templates
+  │   └── values.yaml
+  ├── authhub
+  │   ├── Chart.yaml
+  │   ├── configs
+  │   ├── templates
+  │   └── values.yaml
+  └── euler_copilot
+      ├── Chart.yaml
+      ├── configs
+      ├── templates
+      │   ├── NOTES.txt
+      │   ├── rag
+      │   ├── vectorize
+      │   └── web
+      └── values.yaml
+  ```
+
+### 1. Install the Databases
+
+- Edit values.yaml
+
+  ```bash
+  cd euler-copilot-helm/chart/databases
+  ```
+
+  Only the image tag needs to be changed to match your architecture; everything else can stay as is
+
+  ```bash
+  vim values.yaml
+  ```
+
+- Create the namespace
+
+  ```bash
+  kubectl create namespace euler-copilot
+  ```
+
+  Set the environment variable
+
+  ```bash
+  export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+  ```
+
+- Install the databases
+
+  ```bash
+  helm install -n euler-copilot databases .
+  ```
+
+- Check the pod status
+
+  ```bash
+  kubectl -n euler-copilot get pods
+  ```
+
+  ```bash
+  pgsql-deploy-databases-86b4dc4899-ppltc   1/1   Running   0   17d
+  redis-deploy-databases-f8866b56-kj9jz     1/1   Running   0   17d
+  mysql-deploy-databases-57f5f94ccf-sbhzp   2/2   Running   0   17d
+  ```
+
+- If mysql was deployed on this server before, delete the leftover pvc first and then deploy databases.
+
+  ```bash
+  # Get the pvc
+  kubectl -n euler-copilot get pvc
+  ```
+
+  ```bash
+  # Delete the pvc
+  kubectl -n euler-copilot delete pvc mysql-pvc
+  ```
+
+### 2. Install the AuthHub Authentication Platform
+
+- Edit values.yaml
+
+  ```bash
+  cd euler-copilot-helm/chart/authhub
+  ```
+
+  Modify the \[required] items marked in the comments of the YAML file
+
+  ```bash
+  vim values.yaml
+  ```
+
+  - Note:
+    1. AuthHub requires a domain name; apply for one in advance or configure it in 'C:\Windows\System32\drivers\etc\hosts'.
+       authhub and euler-copilot must be two subdomains of the same root domain, for example authhub.test.com and
+       eulercopilot.test.com
+    2. Change the tag to the one matching your architecture.
+
+- Install AuthHub
+
+  ```bash
+  helm install -n euler-copilot authhub .
+  ```
+
+  The default AuthHub account is `administrator`, with password `changeme`
+
+- Check the pod status
+
+  ```bash
+  kubectl -n euler-copilot get pods
+  ```
+
+  ```bash
+  NAME                                              READY   STATUS    RESTARTS   AGE
+  authhub-backend-deploy-authhub-64896f5cdc-m497f   2/2     Running   0          16d
+  authhub-web-deploy-authhub-7c48695966-h8d2p       1/1     Running   0          17d
+  pgsql-deploy-databases-86b4dc4899-ppltc           1/1     Running   0          17d
+  redis-deploy-databases-f8866b56-kj9jz             1/1     Running   0          17d
+  mysql-deploy-databases-57f5f94ccf-sbhzp           2/2     Running   0          17d
+  ```
+
+- Log in to AuthHub
+
+  Taking `authhub.test.com` as the AuthHub domain, open `https://authhub.test.com` in a browser; the login page looks like this:
+
+  ![AuthHub login page](./pictures/authhub登录界面.png)
+
+- Create the eulercopilot application
+
+  ![Create application page](./pictures/创建应用界面.png)
+
+  Click Create Application and enter the application name, home page, and callback URL (the post-login callback), for example:
+  - Application name: eulercopilot
+  - Home page:
+  - Callback URL:
+  - After the application is created, a Client ID and Client Secret are generated; add them to the configuration file `euler-copilot-helm/chart/euler_copilot/values.yaml`, using eulercopilot as the example application
+
+  ![Application created page](./pictures/创建应用成功界面.png)
+
+### 3. Install openEuler Copilot System
+
+- Edit values.yaml
+
+  ```bash
+  cd euler-copilot-helm/chart/euler_copilot
+  ```
+
+  Modify the \[required] items marked in the comments of the YAML file
+
+  ```bash
+  vim values.yaml
+  ```
+
+  - Note:
+    1. Check the system architecture and update the tag in values.yaml accordingly;
+    2. Set the globals domain in values.yaml to the EulerCopilot domain name, and configure the LLM information
+    3. Manually create the three mount directories `docs_dir`, `plugin_dir`, and `models`
+    4. Update the web_url and oidc settings in the framework section of values.yaml
+    5. If you deploy plugins, a model for Function Call must be configured; this requires a GPU environment to deploy sglang (see the appendix)
+
+- Install openEuler Copilot System
+
+  ```bash
+  helm install -n euler-copilot service .
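+  # Optional (hypothetical example; helm template is a standard helm subcommand):
+  # preview the rendered manifests before installing
+  # helm template -n euler-copilot service . | less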
+  ```
+
+- Check the pod status
+
+  ```bash
+  kubectl -n euler-copilot get pods
+  ```
+
+  Pulling the images may take about a minute; please be patient. After a successful deployment, all pods should be in the Running state.
+
+  ```bash
+  NAME                                              READY   STATUS    RESTARTS   AGE
+  authhub-backend-deploy-authhub-64896f5cdc-m497f   2/2     Running   0          16d
+  authhub-web-deploy-authhub-7c48695966-h8d2p       1/1     Running   0          17d
+  pgsql-deploy-databases-86b4dc4899-ppltc           1/1     Running   0          17d
+  redis-deploy-databases-f8866b56-kj9jz             1/1     Running   0          17d
+  mysql-deploy-databases-57f5f94ccf-sbhzp           2/2     Running   0          17d
+  framework-deploy-service-bb5b58678-jxzqr          2/2     Running   0          16d
+  rag-deploy-service-5b7887644c-sm58z               2/2     Running   0          110m
+  vectorize-deploy-service-57f5f94ccf-sbhzp         2/2     Running   0          17d
+  web-deploy-service-74fbf7999f-r46rg               1/1     Running   0          2d
+  ```
+
+  Note: if any pod enters a failed state, troubleshoot with the following steps
+
+  1. Check the Kubernetes cluster events for more context about the pod failure
+
+     ```bash
+     kubectl -n euler-copilot get events
+     ```
+
+  2. Check whether the images were pulled successfully
+
+     ```bash
+     k3s crictl images
+     ```
+
+  3. Check the RAG pod logs for error messages or abnormal behavior.
+
+     ```bash
+     kubectl logs rag-deploy-service-5b7887644c-sm58z -n euler-copilot
+     ```
+
+  4. Check the resource status of the Kubernetes cluster and confirm that server resources and quotas are sufficient; insufficient resources are a common cause of image pull failures.
+
+     ```bash
+     df -h
+     ```
+
+  5. If an image was not pulled and its size is 0, check whether the k3s version is below the required v1.30.2
+
+     ```bash
+     k3s -v
+     ```
+
+  6. Confirm that the OIDC settings of framework in values.yaml are configured correctly so that authentication and authorization work.
+
+     ```bash
+     vim /home/euler-copilot-framework/euler-copilot-helm/chart/euler_copilot/values.yaml
+     ```
+
+## Verifying the Installation
+
+Congratulations, the deployment of openEuler Copilot System is complete! You can now start exploring intelligent Q&A.
+Open  in a browser (the default port is 8080; adjust it if you changed it) to access the openEuler Copilot System web page and try the Q&A experience.
+
+![Web UI](./pictures/WEB界面.png)
+
+## Installing Plugins
+
+For details, see the [Plugin Deployment Guide](./plugin-deployment-guide/)
+
+## Building Domain-Specific Intelligent Q&A
+
+### 1. Build intelligent Q&A for the openEuler knowledge domain
+
+1. Change the pg image repository in values.yaml to `pg-data`
+2. Set the field `knowledgebaseID: openEuler_2bb3029f` in the rag section of values.yaml
+3. Comment out the volumes-related fields in `euler-copilot-helm/chart/databases/templates/pgsql/pgsql-deployment.yaml`
+4. Go to `euler-copilot-helm/chart/databases` and update the service: `helm upgrade -n euler-copilot databases .`
+5. Go to `euler-copilot-helm/chart/euler_copilot` and update the service: `helm upgrade -n euler-copilot service .`
+6. Use the web page for Q&A in the openEuler knowledge domain
+
+### 2. Build intelligent Q&A for a project-specific knowledge domain
+
+For details, see the [Local Asset Library Setup Guide](local-asset-library-setup-guide.md)
+
+## Appendix
+
+### LLM Preparation
+
+#### GPU environment
+
+Deploy as follows
+
+1. Download the model files:
+
+   ```bash
+   huggingface-cli download --resume-download Qwen/Qwen1.5-14B-Chat --local-dir Qwen1.5-14B-Chat
+   ```
+
+2. Create a terminal named control
+
+   ```bash
+   screen -S control
+   ```
+
+   ```bash
+   python3 -m fastchat.serve.controller
+   ```
+
+   - Press Ctrl+A, then D to put it in the background
+
+3. Create a new terminal named api
+
+   ```bash
+   screen -S api
+   ```
+
+   ```bash
+   python3 -m fastchat.serve.openai_api_server --host 0.0.0.0 --port 30000 --api-keys sk-123456
+   ```
+
+   - Press Ctrl+A, then D to put it in the background
+   - If the current Python version is 3.12 or 3.9, you can create a Python 3.10 conda environment
+
+   ```bash
+   mkdir -p /root/py310
+   ```
+
+   ```bash
+   conda create --prefix=/root/py310 python==3.10.14
+   ```
+
+   ```bash
+   conda activate /root/py310
+   ```
+
+4. Create a new terminal named worker
+
+   ```bash
+   screen -S worker
+   ```
+
+   ```bash
+   screen -r worker
+   ```
+
+   Install fastchat and vllm
+
+   ```bash
+   pip install fschat vllm
+   ```
+
+   Install the dependencies:
+
+   ```bash
+   pip install fschat[model_worker]
+   ```
+
+   ```bash
+   python3 -m fastchat.serve.vllm_worker --model-path /root/models/Qwen1.5-14B-Chat/ --model-name qwen1.5 --num-gpus 8 --gpu-memory-utilization=0.7 --dtype=half
+   ```
+
+   - Press Ctrl+A, then D to put it in the background
+
+5. Edit the configuration as follows, then update the service.
+
+   ```bash
+   vim euler-copilot-helm/chart/euler_copilot/values.yaml
+   ```
+
+   Modify this section
+
+   ```yaml
+   llm:
+     # Open-source LLM with an OpenAI-compatible API
+     openai:
+       url: "http://$(IP):30000"
+       key: "sk-123456"
+       model: qwen1.5
+       max_tokens: 8192
+   ```
+
+#### NPU environment
+
+For NPU deployment, see the [MindIE Installation Guide](https://www.hiascend.com/document/detail/zh/mindie/10RC2/whatismindie/mindie_what_0001.html)
+
+## FAQ
+
+### 1. huggingface reports an error?
+
+```text
+File "/usr/lib/python3.9/site-packages/urllib3/connection.py", line 186, in _new_conn
+raise NewConnectionError(
+urllib3.exceptions.NewConnectionError: Failed to establish a new connection: [Errno 101] Network is unreachable
+```
+
+- Solution
+
+```bash
+pip3 install -U huggingface_hub
+```
+
+```bash
+export HF_ENDPOINT=https://hf-mirror.com
+```
+
+### 2. How to call the answer-retrieval API inside the RAG container?
+
+- First enter the corresponding RAG pod, then run:
+
+```bash
+curl -k -X POST "http://localhost:8005/kb/get_answer" -H "Content-Type: application/json" -d '{
+    "question": "",
+    "kb_sn": "default_test",
+    "fetch_source": true }'
+```
+
+### 3. `helm upgrade` reports an error
+
+```text
+Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused
+```
+
+Or:
+
+```text
+Error: UPGRADE FAILED: Kubernetes cluster unreachable: the server could not find the requested resource
+```
+
+- Solution
+
+```bash
+export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+```
+
+### 4. Cannot view pod logs?
+
+```text
+[root@localhost euler-copilot]# kubectl logs rag-deploy-service-65c75c48d8-44vcp -n euler-copilot
+Defaulted container "rag" out of: rag, rag-copy-secret (init)
+Error from server: Get "https://172.21.31.11:10250/containerLogs/euler-copilot/rag-deploy-service-65c75c48d8-44vcp/rag": Forbidden
+```
+
+- Solution
+  If a proxy is configured, exclude the local host's IP from it:
+
+```bash
+cat /etc/systemd/system/k3s.service.env
+```
+
+```text
+http_proxy="http://172.21.60.51:3128"
+https_proxy="http://172.21.60.51:3128"
+no_proxy=172.21.31.10 # exclude the local IP from the proxy
+```
+
+### 5. Streaming replies fail when the LLM is deployed in a GPU environment?
+
+Curling the LLM from the server fails, but changing `"stream": true` to `"stream": false` makes the same curl succeed?
+
+```bash
+curl http://localhost:30000/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer sk-123456" -d '{
+"model": "qwen1.5",
+"messages": [
+{
+"role": "system",
+"content": "You are a sentiment analysis expert; your task is xxxx"
+},
+{
+"role": "user",
+"content": "Hello"
+}
+],
+"stream": true,
+"n": 1,
+"max_tokens": 32768
+}'
+```
+
+- Solution:
+
+```bash
+pip install pydantic==1.10.13
+```
+
+### 6. How to deploy sglang?
+
+```bash
+# 1. Activate the Python 3.10 conda environment. Assuming it is named `myenv`:
+conda activate myenv
+
+# 2. In the activated environment, install sglang[all] and flashinfer
+pip install "sglang[all]==0.3.0"
+pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
+
+# 3. Start the server
+python -m sglang.launch_server --served-model-name Qwen2.5-32B --model-path Qwen2.5-32B-Instruct-AWQ --host 0.0.0.0 --port 8001 --api-key sk-12345 --mem-fraction-static 0.5 --tp 8
+```
+
+- Verify the installation:
+
+```bash
+pip show sglang
+pip show flashinfer
+```
+
+- Notes:
+
+1. API key: make sure the API key passed via `--api-key` is correct.
+2. Model path: make sure the path in `--model-path` is correct and the model files exist there.
+3. CUDA version: make sure CUDA 12.1 and PyTorch 2.4 are installed on your system, because the `flashinfer` package depends on these specific versions.
+4. Tensor parallelism: adjust `--tp` to your GPU resources and expected load; with 8 GPUs, `--tp 8` makes full use of them.
+
+### 7. How to curl the embedding service?
+
+```bash
+curl -k -X POST http://$IP:8001/embedding \
+    -H "Content-Type: application/json" \
+    -d '{"texts": ["sample text 1", "sample text 2"]}'
+# $IP is the internal address of the vectorize embedding service
+```
+
+### 8. How to generate certificates?
+
+```bash
+# Download: https://github.com/FiloSottile/mkcert/releases
+# 1. Download mkcert
+# x86_64
+wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64
+# arm64
+wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-arm64
+
+# 2. Run the following command to generate the key
+mkcert -install
+# mkcert can take a domain name or IP directly and generates the certificate and key
+mkcert example.com
+
+# 3. 
Copy the certificate and key into /home/euler-copilot-framework_openeuler/euler-copilot-helm/chart_ssl/traefik-secret.yaml, then run the following command to apply them.
+kubectl apply -f traefik-secret.yaml
+```
diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/WEB\347\225\214\351\235\242.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/WEB\347\225\214\351\235\242.png" new file mode 100644 index 0000000000000000000000000000000000000000..bb9be4e33ce470865fe5a07decbc056b9ee4e9bb Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/WEB\347\225\214\351\235\242.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/authhub\347\231\273\345\275\225\347\225\214\351\235\242.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/authhub\347\231\273\345\275\225\347\225\214\351\235\242.png" new file mode 100644 index 0000000000000000000000000000000000000000..341828b1b6f728888d1dd52eec755033680155da Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/authhub\347\231\273\345\275\225\347\225\214\351\235\242.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/k8s\351\233\206\347\276\244\344\270\255postgres\346\234\215\345\212\241\347\232\204\345\220\215\347\247\260.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/k8s\351\233\206\347\276\244\344\270\255postgres\346\234\215\345\212\241\347\232\204\345\220\215\347\247\260.png" new file mode 100644 index 0000000000000000000000000000000000000000..473a0006c9710c92375e226a760c3a79989312f9 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/k8s\351\233\206\347\276\244\344\270\255postgres\346\234\215\345\212\241\347\232\204\345\220\215\347\247\260.png" differ diff --git 
"a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/postgres\346\234\215\345\212\241\347\253\257\345\217\243.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/postgres\346\234\215\345\212\241\347\253\257\345\217\243.png" new file mode 100644 index 0000000000000000000000000000000000000000..cfee6d88da56bc939886caece540f7de8cf77bbc Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/postgres\346\234\215\345\212\241\347\253\257\345\217\243.png" differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/rag_port.png b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/rag_port.png new file mode 100644 index 0000000000000000000000000000000000000000..b1d93f9c9d7587aa88a27d7e0bf185586583d438 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/rag_port.png differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/rag\351\205\215\347\275\256\344\277\241\346\201\257\346\210\220\345\212\237.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/rag\351\205\215\347\275\256\344\277\241\346\201\257\346\210\220\345\212\237.png" new file mode 100644 index 0000000000000000000000000000000000000000..fec3cdaa2b260e50f5523477da3e58a9e14e2130 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/rag\351\205\215\347\275\256\344\277\241\346\201\257\346\210\220\345\212\237.png" differ diff --git 
"a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\233\345\273\272\350\265\204\344\272\247\345\272\223\345\244\261\350\264\245\347\224\261\344\272\216\347\273\237\344\270\200\350\265\204\344\272\247\344\270\213\345\255\230\345\234\250\345\220\214\345\220\215\350\265\204\344\272\247\345\272\223.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\233\345\273\272\350\265\204\344\272\247\345\272\223\345\244\261\350\264\245\347\224\261\344\272\216\347\273\237\344\270\200\350\265\204\344\272\247\344\270\213\345\255\230\345\234\250\345\220\214\345\220\215\350\265\204\344\272\247\345\272\223.png" new file mode 100644 index 0000000000000000000000000000000000000000..624459821de4542b635eeffa115eeba780929a4e Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\233\345\273\272\350\265\204\344\272\247\345\272\223\345\244\261\350\264\245\347\224\261\344\272\216\347\273\237\344\270\200\350\265\204\344\272\247\344\270\213\345\255\230\345\234\250\345\220\214\345\220\215\350\265\204\344\272\247\345\272\223.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\233\345\273\272\350\265\204\344\272\247\346\210\220\345\212\237.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\233\345\273\272\350\265\204\344\272\247\346\210\220\345\212\237.png" new file mode 100644 index 0000000000000000000000000000000000000000..3104717bfa8f6615ad6726577a24938bc29884b2 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\233\345\273\272\350\265\204\344\272\247\346\210\220\345\212\237.png" differ diff --git 
"a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\344\270\215\345\255\230\345\234\250\347\232\204\350\265\204\344\272\247\345\244\261\350\264\245.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\344\270\215\345\255\230\345\234\250\347\232\204\350\265\204\344\272\247\345\244\261\350\264\245.png" new file mode 100644 index 0000000000000000000000000000000000000000..454b9fdfa4b7f209dc370f78677a2f4e71ea49be Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\344\270\215\345\255\230\345\234\250\347\232\204\350\265\204\344\272\247\345\244\261\350\264\245.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\350\257\255\346\226\231.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\350\257\255\346\226\231.png" new file mode 100644 index 0000000000000000000000000000000000000000..d52d25d4778f6db2d2ec076d65018c40cd1da4d3 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\350\257\255\346\226\231.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\350\265\204\344\272\247\345\272\223\345\244\261\350\264\245\357\274\214\350\265\204\344\272\247\344\270\213\344\270\215\345\255\230\345\234\250\345\257\271\345\272\224\350\265\204\344\272\247\345\272\223.png" 
"b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\350\265\204\344\272\247\345\272\223\345\244\261\350\264\245\357\274\214\350\265\204\344\272\247\344\270\213\344\270\215\345\255\230\345\234\250\345\257\271\345\272\224\350\265\204\344\272\247\345\272\223.png" new file mode 100644 index 0000000000000000000000000000000000000000..82ed79c0154bd8e406621440c4e4a7caaab7e06e Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\350\265\204\344\272\247\345\272\223\345\244\261\350\264\245\357\274\214\350\265\204\344\272\247\344\270\213\344\270\215\345\255\230\345\234\250\345\257\271\345\272\224\350\265\204\344\272\247\345\272\223.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\350\265\204\344\272\247\346\210\220\345\212\237.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\350\265\204\344\272\247\346\210\220\345\212\237.png" new file mode 100644 index 0000000000000000000000000000000000000000..7dd2dea945f39ada1d7dd053d150a995b160f203 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\345\210\240\351\231\244\350\265\204\344\272\247\346\210\220\345\212\237.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\214\207\345\256\232\344\270\215\345\255\230\345\234\250\347\232\204\350\265\204\344\272\247\345\210\233\345\273\272\350\265\204\344\272\247\345\272\223\345\244\261\350\264\245.png" 
"b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\214\207\345\256\232\344\270\215\345\255\230\345\234\250\347\232\204\350\265\204\344\272\247\345\210\233\345\273\272\350\265\204\344\272\247\345\272\223\345\244\261\350\264\245.png" new file mode 100644 index 0000000000000000000000000000000000000000..be89bdfde2518bba3941eee5d475f52ad9124343 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\214\207\345\256\232\344\270\215\345\255\230\345\234\250\347\232\204\350\265\204\344\272\247\345\210\233\345\273\272\350\265\204\344\272\247\345\272\223\345\244\261\350\264\245.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\225\260\346\215\256\345\272\223\345\210\235\345\247\213\345\214\226.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\225\260\346\215\256\345\272\223\345\210\235\345\247\213\345\214\226.png" new file mode 100644 index 0000000000000000000000000000000000000000..27530840aaa5382a226e1ed8baea883895d9d75e Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\225\260\346\215\256\345\272\223\345\210\235\345\247\213\345\214\226.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\225\260\346\215\256\345\272\223\351\205\215\347\275\256\344\277\241\346\201\257\346\210\220\345\212\237.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\225\260\346\215\256\345\272\223\351\205\215\347\275\256\344\277\241\346\201\257\346\210\220\345\212\237.png" new file mode 100644 index 0000000000000000000000000000000000000000..aa04e6f7f0648adfca1240c750ca5b79b88da5f9 Binary files /dev/null and 
"b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\225\260\346\215\256\345\272\223\351\205\215\347\275\256\344\277\241\346\201\257\346\210\220\345\212\237.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\227\240\350\265\204\344\272\247\346\227\266\346\237\245\350\257\242\350\265\204\344\272\247.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\227\240\350\265\204\344\272\247\346\227\266\346\237\245\350\257\242\350\265\204\344\272\247.png" new file mode 100644 index 0000000000000000000000000000000000000000..74905172c0c0a0acc4c4d0e35efd2493dc421c4e Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\227\240\350\265\204\344\272\247\346\227\266\346\237\245\350\257\242\350\265\204\344\272\247.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\237\245\347\234\213\346\226\207\346\241\243\344\272\247\347\224\237\347\211\207\346\256\265\346\200\273\346\225\260\345\222\214\344\270\212\344\274\240\346\210\220\345\212\237\346\200\273\346\225\260.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\237\245\347\234\213\346\226\207\346\241\243\344\272\247\347\224\237\347\211\207\346\256\265\346\200\273\346\225\260\345\222\214\344\270\212\344\274\240\346\210\220\345\212\237\346\200\273\346\225\260.png" new file mode 100644 index 0000000000000000000000000000000000000000..432fbfcd02f6d2220e7d2a8512aee893d67be24d Binary files /dev/null and 
"b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\237\245\347\234\213\346\226\207\346\241\243\344\272\247\347\224\237\347\211\207\346\256\265\346\200\273\346\225\260\345\222\214\344\270\212\344\274\240\346\210\220\345\212\237\346\200\273\346\225\260.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\237\245\350\257\242\345\205\250\351\203\250\350\257\255\346\226\231.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\237\245\350\257\242\345\205\250\351\203\250\350\257\255\346\226\231.png" new file mode 100644 index 0000000000000000000000000000000000000000..a4f4ea8a3999a9ab659ccd9ea39b80b21ff46e84 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\237\245\350\257\242\345\205\250\351\203\250\350\257\255\346\226\231.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\237\245\350\257\242\350\265\204\344\272\247.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\237\245\350\257\242\350\265\204\344\272\247.png" new file mode 100644 index 0000000000000000000000000000000000000000..675b40297363664007f96948fb21b1cb90d6beea Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\346\237\245\350\257\242\350\265\204\344\272\247.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\216\267\345\217\226\346\225\260\346\215\256\345\272\223pod\345\220\215\347\247\260.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\216\267\345\217\226\346\225\260\346\215\256\345\272\223pod\345\220\215\347\247\260.png" new 
file mode 100644 index 0000000000000000000000000000000000000000..8fc0c988e8b3830c550c6be6e42b88ac13448d1a Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\216\267\345\217\226\346\225\260\346\215\256\345\272\223pod\345\220\215\347\247\260.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\257\255\346\226\231\344\270\212\344\274\240\346\210\220\345\212\237.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\257\255\346\226\231\344\270\212\344\274\240\346\210\220\345\212\237.png" new file mode 100644 index 0000000000000000000000000000000000000000..5c897e9883e868bf5160d92cb106ea4e4e9bc356 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\257\255\346\226\231\344\270\212\344\274\240\346\210\220\345\212\237.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\257\255\346\226\231\345\210\240\351\231\244\345\244\261\350\264\245\357\274\214\346\234\252\346\237\245\350\257\242\345\210\260\347\233\270\345\205\263\350\257\255\346\226\231.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\257\255\346\226\231\345\210\240\351\231\244\345\244\261\350\264\245\357\274\214\346\234\252\346\237\245\350\257\242\345\210\260\347\233\270\345\205\263\350\257\255\346\226\231.png" new file mode 100644 index 0000000000000000000000000000000000000000..407e49b929b7ff4cf14703046a4ba0bfe1bb441e Binary files /dev/null and 
"b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\257\255\346\226\231\345\210\240\351\231\244\345\244\261\350\264\245\357\274\214\346\234\252\346\237\245\350\257\242\345\210\260\347\233\270\345\205\263\350\257\255\346\226\231.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\344\270\213\346\234\252\346\237\245\350\257\242\345\210\260\350\265\204\344\272\247\345\272\223.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\344\270\213\346\234\252\346\237\245\350\257\242\345\210\260\350\265\204\344\272\247\345\272\223.png" new file mode 100644 index 0000000000000000000000000000000000000000..45ab521ec5f5afbd81ad54f023aae3b7a867dbf2 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\344\270\213\346\234\252\346\237\245\350\257\242\345\210\260\350\265\204\344\272\247\345\272\223.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\344\270\213\346\237\245\350\257\242\350\265\204\344\272\247\345\272\223\346\210\220\345\212\237.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\344\270\213\346\237\245\350\257\242\350\265\204\344\272\247\345\272\223\346\210\220\345\212\237.png" new file mode 100644 index 0000000000000000000000000000000000000000..90ed5624ae93ff9784a750514c53293df4e961f0 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\344\270\213\346\237\245\350\257\242\350\265\204\344\272\247\345\272\223\346\210\220\345\212\237.png" differ diff --git 
"a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\345\210\233\345\273\272\346\210\220\345\212\237.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\345\210\233\345\273\272\346\210\220\345\212\237.png" new file mode 100644 index 0000000000000000000000000000000000000000..7b2cc38a931c9c236517c14c86fa93e3eb2b6dcd Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\345\210\233\345\273\272\346\210\220\345\212\237.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\345\210\240\351\231\244\345\244\261\350\264\245\357\274\214\344\270\215\345\255\230\345\234\250\350\265\204\344\272\247.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\345\210\240\351\231\244\345\244\261\350\264\245\357\274\214\344\270\215\345\255\230\345\234\250\350\265\204\344\272\247.png" new file mode 100644 index 0000000000000000000000000000000000000000..1365a8d69467dec250d3451ac63e2615a2194c18 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\345\210\240\351\231\244\345\244\261\350\264\245\357\274\214\344\270\215\345\255\230\345\234\250\350\265\204\344\272\247.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\345\210\240\351\231\244\346\210\220\345\212\237png.png" 
"b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\345\210\240\351\231\244\346\210\220\345\212\237png.png" new file mode 100644 index 0000000000000000000000000000000000000000..1bd944264baa9369e6f8fbfd04cabcd12730c0e9 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\345\210\240\351\231\244\346\210\220\345\212\237png.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\346\237\245\350\257\242\345\244\261\350\264\245\357\274\214\344\270\215\345\255\230\345\234\250\350\265\204\344\272\247.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\346\237\245\350\257\242\345\244\261\350\264\245\357\274\214\344\270\215\345\255\230\345\234\250\350\265\204\344\272\247.png" new file mode 100644 index 0000000000000000000000000000000000000000..58bcd320e145dd29d9e5d49cb6d86964ebb83b51 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\350\265\204\344\272\247\345\272\223\346\237\245\350\257\242\345\244\261\350\264\245\357\274\214\344\270\215\345\255\230\345\234\250\350\265\204\344\272\247.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\205\215\347\275\256\346\230\240\345\260\204\344\270\255\351\227\264\345\261\202.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\205\215\347\275\256\346\230\240\345\260\204\344\270\255\351\227\264\345\261\202.png" new file mode 100644 index 0000000000000000000000000000000000000000..809b785b999b6663d9e9bd41fed953925093d6bd Binary files /dev/null and 
"b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\205\215\347\275\256\346\230\240\345\260\204\344\270\255\351\227\264\345\261\202.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\205\215\347\275\256\346\230\240\345\260\204\346\272\220\347\233\256\345\275\225.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\205\215\347\275\256\346\230\240\345\260\204\346\272\220\347\233\256\345\275\225.png" new file mode 100644 index 0000000000000000000000000000000000000000..62ba5f6615f18deb3d5a71fd68ee8c929638d814 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\205\215\347\275\256\346\230\240\345\260\204\346\272\220\347\233\256\345\275\225.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\205\215\347\275\256\346\230\240\345\260\204\347\233\256\346\240\207\347\233\256\345\275\225.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\205\215\347\275\256\346\230\240\345\260\204\347\233\256\346\240\207\347\233\256\345\275\225.png" new file mode 100644 index 0000000000000000000000000000000000000000..d32c672fafcb0ef665bda0bcfdce19d2df44db01 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\205\215\347\275\256\346\230\240\345\260\204\347\233\256\346\240\207\347\233\256\345\275\225.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\207\215\345\244\215\345\210\233\345\273\272\350\265\204\344\272\247\345\244\261\350\264\245.png" 
"b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\207\215\345\244\215\345\210\233\345\273\272\350\265\204\344\272\247\345\244\261\350\264\245.png" new file mode 100644 index 0000000000000000000000000000000000000000..a5ecd6b65abc97320e7467f00d82ff1fd9bf0e44 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/local-asset-library-setup/\351\207\215\345\244\215\345\210\233\345\273\272\350\265\204\344\272\247\345\244\261\350\264\245.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/\345\210\233\345\273\272\345\272\224\347\224\250\346\210\220\345\212\237\347\225\214\351\235\242.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/\345\210\233\345\273\272\345\272\224\347\224\250\346\210\220\345\212\237\347\225\214\351\235\242.png" new file mode 100644 index 0000000000000000000000000000000000000000..a871907f348317e43633cf05f5241cb978476fb4 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/\345\210\233\345\273\272\345\272\224\347\224\250\346\210\220\345\212\237\347\225\214\351\235\242.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/\345\210\233\345\273\272\345\272\224\347\224\250\347\225\214\351\235\242.png" "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/\345\210\233\345\273\272\345\272\224\347\224\250\347\225\214\351\235\242.png" new file mode 100644 index 0000000000000000000000000000000000000000..d82c736a94b106a30fd8d1f7b781f9e335bb441f Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/\345\210\233\345\273\272\345\272\224\347\224\250\347\225\214\351\235\242.png" differ diff --git "a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/\351\203\250\347\275\262\350\247\206\345\233\276.png" 
"b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/\351\203\250\347\275\262\350\247\206\345\233\276.png" new file mode 100644 index 0000000000000000000000000000000000000000..181bf1d2ddbe15cfd296c27df27d865bdbce8d69 Binary files /dev/null and "b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/pictures/\351\203\250\347\275\262\350\247\206\345\233\276.png" differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/flows/get_all_docker_images_flow.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/flows/get_all_docker_images_flow.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d1c4332203be24d3395d45eee2b1620b18d6f06c --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/flows/get_all_docker_images_flow.yaml @@ -0,0 +1,15 @@ +name: get_all_supported_AI_docker_images +description: "获取所有支持的docker容器镜像,输入为空,输出为支持的AI容器镜像列表,包括名字、tag、registry、repository" +steps: + - name: start + call_type: api + params: + endpoint: GET /docker/images + next: list2markdown + - name: list2markdown + call_type: llm + params: + user_prompt: | + 当前已有的docker容器及tag为:{data}。请将这份内容输出为markdown表格,表头为registry、repository、image_name、tag,请注意如果一个容器镜像有多个tag版本,请分多行展示。 +next_flow: + - docker_pull_specified_AI_docker_images \ No newline at end of file diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/flows/pull_images_flow.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/flows/pull_images_flow.yaml new file mode 100644 index 0000000000000000000000000000000000000000..277677924f152672e5f0b02305733347900d4e4b --- /dev/null +++ 
b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/flows/pull_images_flow.yaml @@ -0,0 +1,15 @@ +name: docker_pull_specified_AI_docker_images +description: "从dockerhub拉取指定的docker容器镜像,输入为容器镜像的名字和tag" +steps: + - name: start + call_type: api + params: + endpoint: POST /docker/pull + next: extract_key + - name: extract_key + call_type: extract + params: + keys: + - data.shell +next_flow: + - docker_run_specified_AI_docker_images \ No newline at end of file diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/flows/run_images_flow.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/flows/run_images_flow.yaml new file mode 100644 index 0000000000000000000000000000000000000000..54fe3ca39d9fe16b3c1bbcc506b7cf6f0e673ea9 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/flows/run_images_flow.yaml @@ -0,0 +1,13 @@ +name: docker_run_specified_AI_docker_images +description: "运行指定的容器镜像,输入为容器镜像的名字和tag" +steps: + - name: start + call_type: api + params: + endpoint: POST /docker/run + next: extract_key + - name: extract_key + call_type: extract + params: + keys: + - data.shell diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/openapi.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/openapi.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b46bf07f044302169c6c02f4f61be22f2fb5657f --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/openapi.yaml @@ -0,0 +1,190 @@ 
+openapi: 3.0.2 +info: + title: compatibility-ai-infra + version: 0.1.0 +servers: + - url: http://ai-infra-service.compatibility-ai-infra.svc.cluster.local:8101 +paths: + /docker/images: + get: + description: 获取所有支持的AI容器信息,返回容器名字和tag + responses: + '200': + description: Successful Response + content: + application/json: + schema: + $ref: '#/components/schemas/ResponseData' + /docker/pull: + post: + description: 输入容器镜像名字和容器镜像tag,返回拉取该容器的shell命令 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PullDockerImages' + required: true + responses: + '200': + description: Successful Response + content: + application/json: + schema: + $ref: '#/components/schemas/ResponseData' + '422': + description: Validation Error + content: + application/json: + schema: + $ref: '#/components/schemas/HTTPValidationError' + /docker/run: + post: + description: 输入容器名字和tag,返回运行该容器的shell命令 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/RunDockerImages' + required: true + responses: + '200': + description: Successful Response + content: + application/json: + schema: + $ref: '#/components/schemas/ResponseData' + '422': + description: Validation Error + content: + application/json: + schema: + $ref: '#/components/schemas/HTTPValidationError' +components: + schemas: + HTTPValidationError: + description: HTTP校验错误 + type: object + properties: + detail: + title: Detail + type: array + items: + $ref: '#/components/schemas/ValidationError' + PullDockerImages: + description: 生成容器拉取命令的接口的入参 + required: + - image_name + - image_tag + type: object + properties: + image_name: + description: 容器镜像的名字,不要包含转义符 + type: string + enum: + - cann + - oneapi-runtime + - oneapi-basekit + - llm-server + - mlflow + - llm + - tensorflow + - pytorch + - cuda + image_tag: + description: 容器镜像的tag,不要包含转义符 + type: string + enum: + - "8.0.RC1-oe2203sp4" + - "cann7.0.RC1.alpha002-oe2203sp2" + - "2024.2.0-oe2403lts" + - "1.0.0-oe2203sp3" + - 
"2.11.1-oe2203sp3" + - "2.13.1-oe2203sp3" + - "chatglm2_6b-pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2" + - "llama2-7b-q8_0-oe2203sp2" + - "chatglm2-6b-q8_0-oe2203sp2" + - "fastchat-pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2" + - "tensorflow2.15.0-oe2203sp2" + - "tensorflow2.15.0-cuda12.2.0-devel-cudnn8.9.5.30-oe2203sp2" + - "pytorch2.1.0-oe2203sp2" + - "pytorch2.1.0-cuda12.2.0-devel-cudnn8.9.5.30-oe2203sp2" + - "pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2" + - "cuda12.2.0-devel-cudnn8.9.5.30-oe2203sp2" + ResponseData: + description: 接口返回值的固定格式 + required: + - code + - message + - data + type: object + properties: + code: + description: 状态码 + type: integer + message: + description: 状态信息 + type: string + data: + description: 返回数据 + type: any + RunDockerImages: + description: 生成容器运行命令的接口的入参 + required: + - image_name + - image_tag + type: object + properties: + image_name: + description: 容器镜像的名字,不要包含转义符 + type: string + enum: + - cann + - oneapi-runtime + - oneapi-basekit + - llm-server + - mlflow + - llm + - tensorflow + - pytorch + - cuda + image_tag: + description: 容器镜像的tag,不要包含转义符 + type: string + enum: + - "8.0.RC1-oe2203sp4" + - "cann7.0.RC1.alpha002-oe2203sp2" + - "2024.2.0-oe2403lts" + - "1.0.0-oe2203sp3" + - "2.11.1-oe2203sp3" + - "2.13.1-oe2203sp3" + - "chatglm2_6b-pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2" + - "llama2-7b-q8_0-oe2203sp2" + - "chatglm2-6b-q8_0-oe2203sp2" + - "fastchat-pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2" + - "tensorflow2.15.0-oe2203sp2" + - "tensorflow2.15.0-cuda12.2.0-devel-cudnn8.9.5.30-oe2203sp2" + - "pytorch2.1.0-oe2203sp2" + - "pytorch2.1.0-cuda12.2.0-devel-cudnn8.9.5.30-oe2203sp2" + - "pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2" + - "cuda12.2.0-devel-cudnn8.9.5.30-oe2203sp2" + ValidationError: + description: 接口的入参校验错误时返回的内容格式 + required: + - loc + - msg + - type + type: object + properties: + loc: + title: Location + type: array + items: + anyOf: + - type: string + - type: integer + msg: + title: Message + 
type: string
+        type:
+          title: Error Type
+          type: string
\ No newline at end of file
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/plugin.json b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/plugin.json
new file mode 100644
index 0000000000000000000000000000000000000000..6136093d2313bd85ae2f2244adef96d48dad90bd
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/Compatibility-AI-Infra/plugin.json
@@ -0,0 +1,6 @@
+{
+  "id": "ai_docker_images",
+  "name": "AI容器镜像",
+  "description": "该插件接受用户的输入,检查当前支持哪些AI容器,拉取容器,运行容器",
+  "predefined_question": "查看当前支持哪些AI容器,拉取指定的容器,运行指定的容器"
+}
\ No newline at end of file
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/plugin-ai-container-stack-deployment-guide.md b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/plugin-ai-container-stack-deployment-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..2bf5e300f521338a7495b45635873692b27ee2ed
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/ai-container-stack/plugin-ai-container-stack-deployment-guide.md
@@ -0,0 +1,35 @@
+# AI Container Stack Deployment Guide
+
+## Preparations
+
++ Install the [openEuler Copilot System command-line (intelligent shell) client](../../../user-guide/cli-client/cli-assistant-guide.md) in advance.
+
++ In the `euler-copilot-tune` section of the /xxxx/xxxx/values.yaml file, set the `enable` field to `True`:
+
+```yaml
+enable: True
+```
+
++ Update the environment:
+
+```bash
+helm upgrade euler-copilot .
+```
+
++ Check the `servers.url` field in openapi.yaml under the Compatibility-AI-Infra directory and make sure the start-up address of the AI container service is set correctly.
+
++ Get the path of the `$plugin_dir` plugin directory; this variable is defined in the `framework` module of euler-copilot-helm/chart/euler_copilot/values.yaml.
+
++ If the plugin directory does not exist, create it.
+
++ Copy the Compatibility-AI-Infra folder from this directory into `$plugin_dir`:
+
+```bash
+cp -r ./Compatibility-AI-Infra $PLUGIN_DIR
+```
+
++ Rebuild the framework pod to reload the plugin configuration:
+
+```bash
+kubectl delete pod framework-xxxx -n <namespace>
+```
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/demarcation.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/demarcation.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..6831bdea203e1ffd360f765e5f85ebdce704a437
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/demarcation.yaml
@@ -0,0 +1,18 @@
+name: demarcation
+description: 该工具的作用为针对已知异常事件进行定界分析。需从上下文中获取start_time(开始时间),end_time(结束时间),container_id(容器ID)
+steps:
+  - name: start
+    call_type: api
+    params:
+      endpoint: POST /demarcation
+    next: report_gen
+  - name: report_gen
+    call_type: llm
+    params:
+      system_prompt: 你是一个系统智能助手,擅长分析系统的故障现象,最终生成分析报告。
+      user_prompt: |
+        您是一个专业的运维人员,擅长分析系统的故障现象,最终生成分析报告。当前异常检测结果为{data}。
+        将root_causes_metric_top3内容输出为表格形式,并为每个根因指标进行标号。
+        整个分析报告应该符合markdown规范
+next_flow:
+  - detection
\ No newline at end of file
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/detection.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/detection.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..836c71423d63248cd84fe20593d6f848c9b35363
--- /dev/null
+++
b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/detection.yaml @@ -0,0 +1,10 @@ +name: detection +description: 该工具的作用为针对已知容器ID和指标,执行profiling分析任务,得到任务ID。需从上下文中获取container_id(容器ID)和三个metric(指标)的其中一个。 +steps: + - name: start + call_type: api + params: + endpoint: POST /detection + next: end + - name: end + call_type: none diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/inspection.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/inspection.yaml new file mode 100644 index 0000000000000000000000000000000000000000..afaefe31106c5ec2016fb3f030fb363950b62516 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/inspection.yaml @@ -0,0 +1,16 @@ +name: inspection +description: 该工具的作用为在指定机器上对容器进行异常事件检测。需从上下文中获取start_time(开始时间),end_time(结束时间),machine_id(机器IP) +steps: + - name: start + call_type: api + params: + endpoint: POST /inspection + next: list2markdown + - name: list2markdown + call_type: llm + params: + user_prompt: | + 您是一个专业的运维人员,擅长分析系统的故障现象,最终生成分析报告。当前的异常检测结果为{data}。请将anomaly_events_times_list的信息,输出为表格形式。整个分析报告请符合markdown规范。 + +next_flow: + - demarcation \ No newline at end of file diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/show_profiling.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/show_profiling.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b82172eb272e6c0679dd32582e18e4ecda7dc2bf --- /dev/null +++ 
b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/flows/show_profiling.yaml @@ -0,0 +1,36 @@ +name: show_profiling +description: 根据已知的智能诊断任务ID(task_id),获取报告的原始数据。随后根据原始数据,生成详细的报告。 +steps: + - name: start + call_type: api + params: + endpoint: POST /show_profiling + next: report_gen + - name: report_gen + call_type: llm + params: + system_prompt: | + 你是一个数据分析和性能分析的专家,请按以下的模板分析出应用的性能瓶颈: + + 1.分析topStackSelf字段中自身耗时排名前3的函数调用栈,分析结果中应该包含函数的耗时信息、函数调用栈的解释说明。 + 2.分析topStackTotal字段中总耗时排名前3的函数调用栈,分析结果中应该包含函数的耗时信息、函数调用栈的解释说明。 + 3.总结前两步的分析结果,并给出影响应用性能的瓶颈所在,同时给出建议。 + user_prompt: | + 现有定界分析结果:{data} + 上面提供了一个JSON对象,它包含了应用程序的Profiling分析报告。该JSON对象包括如下几个字段: + + - traceEvents:它是一个事件列表,列表中的每一项表示一个事件,每个事件以字典格式存储,事件的主要内容解释如下: + - cat 字段:表示事件的分类,它的值包括 syscall、python_gc、sample、pthread_sync,oncpu。其中,syscall 表示这是一个系统调用事件;python_gc 表示这是一个Python垃圾回收事件;sample表示这是一个cpu调用栈采样事件;oncpu表示这是一个OnCPU事件,它说明了pid字段所代表的进程正在占用cpu。 + - name字段:表示事件的名称; + - pid字段:表示事件的进程ID; + - tid字段:表示事件所在的线程ID; + - ts字段:表示事件发生的开始时间,它是一个时间戳格式,单位是微秒; + - dur字段:表示事件的耗时,单位是微秒; + - sf字段:表示事件的函数调用栈,内容是以分号(;)分隔的函数名列表,分号左边是调用方的函数名,分号右边是被调用的函数名。 + - args字段:表示每个事件特有的信息,内容主要包括:count字段,表示事件发生的计数;thread.name字段,表示事件所在的线程的名称;cpu字段,表示采样的cpu编号。 + - topStackSelf:表示应用程序在执行CPU操作期间,自身耗时排名前10的函数调用栈列表。自身耗时是指函数调用栈自身的耗时。列表中的每一项内容说明如下: + - stack:用字符串表示的一个函数调用栈,内容是以分号(;)分隔的函数名列表,分号左边是调用方的函数名,分号右边是被调用的函数名。 + - self_time:stack表示的函数调用栈的自身耗时,单位是毫秒。 + - topStackTotal:表示应用程序在执行CPU操作期间,总耗时排名前10的函数调用栈列表,总耗时是指函数调用栈累积的耗时,它包含了自身耗时。列表中的每一项内容说明如下: + - stack:用字符串表示的一个函数调用栈,内容是以分号(;)分隔的函数名列表,分号左边是调用方的函数名,分号右边是被调用的函数名。 + - total_time:stack表示的函数调用栈的总耗时,单位是毫秒。 \ No newline at end of file diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/openapi.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/openapi.yaml new file mode 100644 
index 0000000000000000000000000000000000000000..9ebf2715d5ff61cd86150cfa9b208c2c48a2afa3 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/openapi.yaml @@ -0,0 +1,255 @@ +openapi: 3.0.2 +info: + title: 智能诊断 + version: 1.0.0 +servers: + - url: http://192.168.10.31:20030 +paths: + /inspection: + post: + description: 对指定机器进行异常检测,返回异常事件 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/InspectionRequestData' + required: true + responses: + '200': + description: Successful Response + content: + application/json: + schema: {} + '422': + description: Validation Error + content: + application/json: + schema: + $ref: '#/components/schemas/HTTPValidationError' + /demarcation: + post: + description: 对指定容器进行异常定界 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/DemarcationRequestData' + required: true + responses: + '200': + description: Successful Response + content: + application/json: + schema: {} + '422': + description: Validation Error + content: + application/json: + schema: + $ref: '#/components/schemas/HTTPValidationError' + /detection: + post: + description: 根据定界结果指标进行定位 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/DetectionRequestData' + required: true + responses: + '200': + description: Successful Response + content: + application/json: + schema: {} + '422': + description: Validation Error + content: + application/json: + schema: + $ref: '#/components/schemas/HTTPValidationError' + /show_profiling: + post: + description: 根据任务ID,获取Profiling结果 + requestBody: + content: + application/json: + schema: + type: object + description: 请求数据 + required: + - task_id + properties: + task_id: + type: string + description: 任务ID,为UUID类型 + responses: + '200': + description: Successful Response + content: + application/json: + schema: + $ref: 
"#/components/schemas/ShowProfilingResponse" + '422': + description: Validation Error + content: + application/json: + schema: + $ref: '#/components/schemas/HTTPValidationError' +components: + schemas: + HTTPValidationError: + type: object + description: HTTP 校验错误 + properties: + detail: + type: array + items: + $ref: '#/components/schemas/ValidationError' + title: Detail + InspectionRequestData: + type: object + description: 巡检接口入参 + required: + - machine_id + - start_time + - end_time + properties: + machine_id: + description: 机器IP。如果给定的信息没有指定任何机器IP,则默认为“default_0.0.0.0”。 + type: string + title: Machine_ID + default: default_0.0.0.0 + start_time: + description: 根据给定的信息提取出开始时间,如果给定的信息不包含开始时间,开始时间可以设置为当前时间往前推2分钟,最终解析出的时间以'%Y-%m-%d %H:%M:%S'格式输出 + type: string + title: Start_Time + default: '' + end_time: + description: 根据给定的信息提取出结束时间,如果给定的信息不包含结束时间,结束时间可以设置为当前时间,最终解析出的时间以'%Y-%m-%d %H:%M:%S'格式输出 + type: string + title: End_Time + default: '' + DemarcationRequestData: + type: object + description: 定界接口入参 + required: + - start_time + - end_time + - container_id + properties: + start_time: + description: 根据给定的信息提取出开始时间,如果给定的信息不包含开始时间,开始时间可以设置为当前时间往前推2分钟,最终解析出的时间以'%Y-%m-%d %H:%M:%S'格式输出 + type: string + title: Start_Time + default: '' + end_time: + description: 根据给定的信息提取出结束时间,如果给定的信息不包含结束时间,结束时间可以设置为当前时间,最终解析出的时间以'%Y-%m-%d %H:%M:%S'格式输出 + type: string + title: End_Time + default: '' + container_id: + description: 结合问题中指定的具体异常事件,根据给定信息提取容器ID + type: string + title: Container_ID + default: '' + DetectionRequestData: + type: object + description: 定位接口入参 + required: + - container_id + - metric + properties: + container_id: + description: 结合问题中指定的具体指标或者指标号,根据给定信息提取容器ID + type: string + title: Container_ID + default: '' + metric: + description: 结合问题中的具体指标或者指标号,根据给定信息提取具体指标值作为metric + type: string + title: Metric + default: '' + ShowProfilingResponse: + type: object + description: show profiling 的返回结果 + properties: + traceEvents: + type: array + items: + type: object + 
properties: + cat: + type: string + description: Event category (syscall, python_gc, sample, pthread_sync, oncpu) + name: + type: string + description: Event name + pid: + type: integer + format: int32 + description: Process ID + tid: + type: integer + format: int32 + description: Thread ID + ts: + type: integer + format: int64 + description: Timestamp of the event start in microseconds + dur: + type: integer + format: int32 + description: Duration of the event in microseconds + sf: + type: string + description: Call stack represented as a list of function names separated by semicolons + args: + type: object + additionalProperties: true + description: Additional event-specific information such as count, thread.name, and cpu + topStackSelf: + type: array + items: + type: object + properties: + stack: + type: string + description: Call stack represented as a list of function names separated by semicolons + self_time: + type: number + format: int + description: Exclusive time spent in the call stack in milliseconds + topStackTotal: + type: array + items: + type: object + properties: + stack: + type: string + description: Call stack represented as a list of function names separated by semicolons + total_time: + type: number + format: int + description: Total inclusive time spent in the call stack in milliseconds + ValidationError: + type: object + required: + - loc + - msg + - type + title: ValidationError + properties: + loc: + type: array + items: + anyOf: + - type: string + - type: integer + title: Location + msg: + type: string + title: Message + type: + type: string + title: Error Type \ No newline at end of file diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/plugin.json b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/plugin.json new file mode 100644 index 
0000000000000000000000000000000000000000..b0ef2fd7aa0c13ad626a01d0fc7a4bf010ab3178
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/euler-copilot-rca/plugin.json
@@ -0,0 +1,5 @@
+{
+  "id": "rca",
+  "name": "智能诊断",
+  "description": "该插件具备以下功能:巡检,定界,定位"
+}
\ No newline at end of file
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/plugin-intelligent-diagnosis-deployment-guide.md b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/plugin-intelligent-diagnosis-deployment-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..b51923516c93c0e6246d4bf03f5411bf904ff22a
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-diagnosis/plugin-intelligent-diagnosis-deployment-guide.md
@@ -0,0 +1,189 @@
+# Intelligent Diagnosis Deployment Guide
+
+## Preparations
+
++ Install the [openEuler Copilot System command-line (intelligent shell) client](../../../user-guide/cli-client/cli-assistant-guide.md) in advance.
+
++ The machine to be diagnosed must not have crictl or isula installed; docker must be the only container management tool on it.
+
++ Install gala-gopher and gala-anteater on the machine to be diagnosed.
+
+## Deploying gala-gopher
+
+### 1. Prepare the BTF File
+
+**If the Linux kernel supports BTF, no BTF file needs to be prepared.** Check whether the kernel supports BTF with the following command:
+
+```bash
+cat /boot/config-$(uname -r) | grep CONFIG_DEBUG_INFO_BTF
+```
+
+If the output is `CONFIG_DEBUG_INFO_BTF=y`, the kernel supports BTF; otherwise it does not.
+If the kernel does not support BTF, build a BTF file manually as follows:
+
+1. Obtain the vmlinux file for the current kernel version
+
+   The vmlinux file is shipped in the `kernel-debuginfo` package, at `/usr/lib/debug/lib/modules/$(uname -r)/vmlinux`.
+
+   For example, for `kernel-debuginfo-5.10.0-136.65.0.145.oe2203sp1.aarch64`, the corresponding vmlinux path is `/usr/lib/debug/lib/modules/5.10.0-136.65.0.145.oe2203sp1.aarch64/vmlinux`.
+
+2. Build the BTF file
+
+   Build the BTF file from the vmlinux file obtained above. This step can be performed on any machine. First, install the required dependency packages:
+
+   ```bash
+   # Note: the dwarves package provides the pahole command, and the llvm package provides the llvm-objcopy command
+   yum install -y llvm dwarves
+   ```
+
+   Then run the following commands to generate the BTF file:
+
+   ```bash
+   kernel_version=4.19.90-2112.8.0.0131.oe1.aarch64 # Note: replace this with the target kernel version, obtained with uname -r
+   pahole -J vmlinux
+   llvm-objcopy --only-section=.BTF --set-section-flags .BTF=alloc,readonly --strip-all vmlinux ${kernel_version}.btf
+   strip -x ${kernel_version}.btf
+   ```
+
+   The generated BTF file is named `${kernel_version}.btf`, where `${kernel_version}` is the kernel version of the target machine, obtained with the `uname -r` command.
+
+### 2. Download the gala-gopher Container Image
+
+#### Online Download
+
+The gala-gopher container image has been archived in the repository and can be pulled as follows:
+
+```bash
+# Pull the aarch64 image
+docker pull hub.oepkgs.net/a-ops/gala-gopher-profiling-aarch64:latest
+# Pull the x86_64 image
+docker pull hub.oepkgs.net/a-ops/gala-gopher-profiling-x86_64:latest
+```
+
+#### Offline Download
+
+If the container image cannot be downloaded online, contact me (He Xiujun, 00465007) to obtain the tarball.
+
+Copy the tarball to the target machine, then decompress it and load the container image:
+
+```bash
+tar -zxvf gala-gopher-profiling-aarch64.tar.gz
+docker load < gala-gopher-profiling-aarch64.tar
+```
+
+### 3. Start the gala-gopher Container
+
+Container start command:
+
+```shell
+docker run -d --name gala-gopher-profiling --privileged --pid=host --network=host -v /:/host -v /etc/localtime:/etc/localtime:ro -v /sys:/sys -v /usr/lib/debug:/usr/lib/debug -v /var/lib/docker:/var/lib/docker -v /tmp/$(uname -r).btf:/opt/gala-gopher/btf/$(uname -r).btf -e GOPHER_HOST_PATH=/host gala-gopher-profiling-aarch64:latest
+```
+
+Start-up parameter notes:
+
++ `-v /tmp/$(uname -r).btf:/opt/gala-gopher/btf/$(uname -r).btf`: if the kernel supports BTF, simply remove this option. If it does not, copy the BTF file prepared earlier to the target machine and replace `/tmp/$(uname -r).btf` with its actual path.
++ `gala-gopher-profiling-aarch64-0426`: the tag of the gala-gopher container image; replace it with the tag you actually downloaded.
+
+Starting the probes:
+
++ `container_id` is the ID of the container to be observed.
++ Start the sli and container probes respectively:
+
+```bash
+curl -X PUT http://localhost:9999/sli -d json='{"cmd":{"check_cmd":""},"snoopers":{"container_id":[""]},"params":{"report_period":5},"state":"running"}'
+```
+
+```bash
+curl -X PUT http://localhost:9999/container -d json='{"cmd":{"check_cmd":""},"snoopers":{"container_id":[""]},"params":{"report_period":5},"state":"running"}'
+```
+
+Stopping the probes:
+
+```bash
+curl -X PUT http://localhost:9999/sli -d json='{"state": "stopped"}'
+```
+
+```bash
+curl -X PUT http://localhost:9999/container -d json='{"state": "stopped"}'
+```
+
+## Deploying gala-anteater
+
+Source deployment:
+
+```bash
+# Note: use the 930eulercopilot branch
+git clone https://gitee.com/GS-Stephen_Curry/gala-anteater.git
+```
+
+For installation and deployment, refer to the project documentation (note that an incompatible Python version can cause `setup.sh install` to fail).
+
+Image deployment:
+
+```bash
+docker pull hub.oepkgs.net/a-ops/gala-anteater:2.0.2
+```
+
+In `/etc/gala-anteater/config/gala-anteater.yaml`, the Kafka and Prometheus `server` and `port` must be set according to the actual deployment; `model_topic`, `meta_topic`, and `group_id` are user-defined.
+
+```yaml
+Kafka:
+  server: "xxxx"
+  port: "xxxx"
+  model_topic: "xxxx" # user-defined; must match the rca configuration
+  meta_topic: "xxxx" # user-defined; must match the rca configuration
+  group_id: "xxxx" # user-defined; must match the rca configuration
+  # auth_type: plaintext/sasl_plaintext, please set "" for no auth
+  auth_type: ""
+  username: ""
+  password: ""
+
+Prometheus:
+  server: "xxxx"
+  port: "xxxx"
+  steps: "5"
+```
+
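The sli and container probe start commands shown earlier differ only in the endpoint; the JSON payload is identical apart from the container ID. A small helper can generate that payload instead of editing the JSON by hand (a sketch; `gen_probe_payload` is a hypothetical convenience function, not part of gala-gopher):

```shell
# Build the JSON payload that starts a gala-gopher probe for one container ID.
gen_probe_payload() {
  local container_id="$1"
  printf '{"cmd":{"check_cmd":""},"snoopers":{"container_id":["%s"]},"params":{"report_period":5},"state":"running"}' "$container_id"
}

# Usage (assumes gala-gopher is listening on localhost:9999 as configured above):
# curl -X PUT http://localhost:9999/sli -d json="$(gen_probe_payload <container_id>)"
```

The same payload works for both the `/sli` and `/container` endpoints, so only the container ID needs to change between calls.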
+Model training in gala-anteater depends on the data collected by gala-gopher, so make sure the gala-gopher probes have been running normally for at least 24 hours before running gala-anteater.
+
+## Deploying gala-ops
+
+A brief introduction to each middleware:
+
+kafka: a message middleware that provides distributed data streaming; it can be configured on the current management node.
+
+prometheus: performance monitoring; configure the IP list of the production nodes to be monitored.
+
+Install kafka and prometheus directly with yum install; you can refer to the installation script.
+
+Only the kafka and prometheus installation steps in it need to be followed.
+
+## Deploying euler-copilot-rca
+
+Pull the image:
+
+```bash
+docker pull hub.oepkgs.net/a-ops/euler-copilot-rca:0.9.1
+```
+
++ Modify the `config/config.json` file: configure the `container_id` and `ip` of the gala-gopher image, and the `ip` and `port` of Kafka and Prometheus (consistent with the gala-anteater configuration above).
+
+```json
+"gopher_container_id": "xxxx", // container ID of gala-gopher
+"remote_host": "xxxx"          // IP of the machine where gala-gopher is deployed
+},
+"kafka": {
+  "server": "xxxx",
+  "port": "xxxx",
+  "storage_topic": "usad_intermediate_results",
+  "anteater_result_topic": "xxxx",
+  "rca_result_topic": "xxxx",
+  "meta_topic": "xxxx"
+},
+"prometheus": {
+  "server": "xxxx",
+  "port": "xxxx",
+  "steps": 5
+},
+```
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/flows/data_collection.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/flows/data_collection.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..d2718f0dd059f3a8a34d02cbc67436c6fc274a28
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/flows/data_collection.yaml
@@ -0,0 +1,15 @@
+name: data_collection
+description: 采集某一指定ip主机的系统性能指标
+steps:
+  - name: start
+    call_type: api
+    params:
+      endpoint: POST /performance_metric
+    next: show_data
+  - name: show_data
+    call_type: llm
+    params:
+      user_prompt: |
+        当前采集到系统性能指标为:{data}, 输出内容请符合markdown规范。
+next_flow:
+  - performance_analysis
\ No newline at end of file
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/flows/performance_analysis.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/flows/performance_analysis.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..07e2a2ada9c54568be3f3bf13c5b2223e615037a
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/flows/performance_analysis.yaml
@@ -0,0 +1,15 @@
+name: performance_analysis
+description: 分析性能指标并生成性能分析报告
+steps:
+  - name: start
+    call_type: api
+    params:
+      endpoint: POST /performance_report
+    next: extract_key
+  - name: extract_key
+    call_type: extract
+    params:
+      keys:
+        - data.output
+next_flow:
+  - performance_tuning
\ No newline at end of file
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/flows/performance_tuning.yaml b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/flows/performance_tuning.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..e938a0bf1bd83f971c4eaaff2d447a150fcf5560
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/flows/performance_tuning.yaml
@@ -0,0 +1,13 @@
+name: performance_tuning
+description: 基于性能分析报告,生成操作系统和Mysql应用的性能优化建议,结果以shell脚本的形式返回
+steps:
+  - name: start
+    call_type: api
+    params:
+      endpoint: POST /optimization_suggestion
+    next: extract_key
+  - name: extract_key
+    call_type: extract
+    params:
+      keys:
+        - data.script
\ No newline at end of file
diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/openapi.yaml
b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/openapi.yaml new file mode 100644 index 0000000000000000000000000000000000000000..18ede5a988fdc06c9de09ff0f2b7077554bedbff --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/openapi.yaml @@ -0,0 +1,147 @@ +openapi: 3.0.2 +info: + title: 智能诊断 + version: 1.0.0 +servers: + - url: http://euler-copilot-tune.euler-copilot.svc.cluster.local:8100 +paths: + /performance_metric: + post: + description: 对指定机器进行性能指标采集,返回指标值 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PerformanceMetricRequestData' + required: true + responses: + '200': + description: Successful Response + content: + application/json: + schema: {} + '422': + description: Validation Error + content: + application/json: + schema: + $ref: '#/components/schemas/HTTPValidationError' + /performance_report: + post: + description: 基于采集到的指标,对指定机器进行性能诊断,生成性能分析报告 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PerformanceReportRequestData' + required: true + responses: + '200': + description: Successful Response + content: + application/json: + schema: {} + '422': + description: Validation Error + content: + application/json: + schema: + $ref: '#/components/schemas/HTTPValidationError' + /optimization_suggestion: + post: + description: 根据性能分析报告,以及指定的机器应用信息,生成调优建议 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/OptimizationSuggestionRequestData' + required: true + responses: + '200': + description: Successful Response + content: + application/json: + schema: {} + '422': + description: Validation Error + content: + application/json: + schema: + $ref: '#/components/schemas/HTTPValidationError' +components: + schemas: + HTTPValidationError: + type: object + description: HTTP 校验错误 + properties: + detail: 
+ type: array + items: + $ref: '#/components/schemas/ValidationError' + OptimizationSuggestionRequestData: + type: object + description: 生成优化建议的接口的入参 + required: + - app + - ip + properties: + app: + type: string + description: 应用名称 + default: mysql + enum: + - mysql + - none + ip: + type: string + description: 点分十进制的ipv4地址, 例如192.168.10.43 + example: "192.168.10.43" + PerformanceMetricRequestData: + type: object + description: 性能指标采集的接口的入参 + required: + - app + - ip + properties: + ip: + type: string + description: 点分十进制的ipv4地址, 例如192.168.10.43 + example: "192.168.10.43" + app: + type: string + description: App + default: none + enum: + - mysql + - none + PerformanceReportRequestData: + type: object + description: 生成性能报告接口的入参 + required: + - ip + properties: + ip: + type: string + description: 点分十进制的ipv4地址, 例如192.168.10.43 + example: "192.168.10.43" + ValidationError: + type: object + required: + - loc + - msg + - type + title: ValidationError + properties: + loc: + type: array + items: + anyOf: + - type: string + - type: integer + title: Location + msg: + type: string + title: Message + type: + type: string + title: Error Type \ No newline at end of file diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/plugin.json b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/plugin.json new file mode 100644 index 0000000000000000000000000000000000000000..c4b95f57e6169a93dcaf7c08e2d328f5be6bf893 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/euler-copilot-tune/plugin.json @@ -0,0 +1,6 @@ +{ + "id": "tune", + "name": "智能性能优化", + "description": "该插件具备以下功能:采集系统性能指标,分析系统性能,推荐系统性能优化建议", + "automatic_flow": false +} \ No newline at end of file diff --git 
a/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/plugin-intelligent-tuning-deployment-guide.md b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/plugin-intelligent-tuning-deployment-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..e74a53193cdafd61c68121355fe3246d406cd2da
--- /dev/null
+++ b/docs/en/Tools/AI/openEuler_Copilot_System/deployment-guide/plugin-deployment-guide/intelligent-tuning/plugin-intelligent-tuning-deployment-guide.md
@@ -0,0 +1,131 @@
+# Intelligent Tuning Deployment Guide
+
+## Preparations
+
++ Install the [openEuler Copilot System command-line (intelligent shell) client](../../../user-guide/cli-client/cli-assistant-guide.md) in advance.
+
++ The machine to be tuned must run openEuler 22.03 LTS-SP3.
+
++ Install the dependencies on the machine to be tuned:
+
+```bash
+yum install -y sysstat perf
+```
+
++ SSH port 22 must be open on the machine to be tuned.
+
+## Editing the Configuration File
+
+In the tune section of the values.yaml file, set the `enable` field to `True`, then configure the LLM settings, the embedding model file path, and the machine to be tuned together with the MySQL user name and password on that machine.
+
+```bash
+vim /home/euler-copilot-framework/euler-copilot-helm/chart/agents/values.yaml
+```
+
+```yaml
+tune:
+  # [Required] Whether to enable the intelligent tuning agent
+  enabled: true
+  # Image settings
+  image:
+    # Image registry. Leave empty to use the global setting.
+    registry: ""
+    # [Required] Image name
+    name: euler-copilot-tune
+    # [Required] Image tag
+    tag: "0.9.1"
+    # Pull policy. Leave empty to use the global setting.
+    imagePullPolicy: ""
+  # [Required] Container root directory is read-only
+  readOnly: false
+  # Resource limit settings
+  resources: {}
+  # Service settings
+  service:
+    # [Required] Service type, ClusterIP or NodePort
+    type: ClusterIP
+    nodePort:
+  # LLM settings
+  llm:
+    # [Required] Model endpoint (must include the v1 suffix)
+    url:
+    # [Required] Model name
+    name: ""
+    # [Required] Model API key
+    key: ""
+    # [Required] Maximum number of tokens
+    max_tokens: 8096
+  # [Required] Embedding model file path
+  embedding: ""
+  # Machine to be tuned
+  machine:
+    # [Required] IP address
+    ip: ""
+    # [Required] Root password
+    # Note: SSH login with a password must be enabled for the root user
+    password: ""
+  # Application to be tuned
+  mysql:
+    # [Required] Database user name
+    user: "root"
+    # [Required] Database password
+    password: ""
+```
+
+## Installing the Intelligent Tuning Plugin
+
+```bash
+helm install -n euler-copilot agents .
+```
+
+If it has been installed before, update the plugin service with the following command:
+
+```bash
+helm upgrade -n euler-copilot agents .
+``` + +If the framework has not been restarted, restart the framework pod so that the configuration takes effect + +```bash +kubectl delete pod framework-deploy-service-bb5b58678-jxzqr -n eulercopilot +``` + +## Testing + ++ Check the pod status of tune + + ```bash + NAME READY STATUS RESTARTS AGE + authhub-backend-deploy-authhub-64896f5cdc-m497f 2/2 Running 0 16d + authhub-web-deploy-authhub-7c48695966-h8d2p 1/1 Running 0 17d + pgsql-deploy-databases-86b4dc4899-ppltc 1/1 Running 0 17d + redis-deploy-databases-f8866b56-kj9jz 1/1 Running 0 17d + mysql-deploy-databases-57f5f94ccf-sbhzp 2/2 Running 0 17d + framework-deploy-service-bb5b58678-jxzqr 2/2 Running 0 16d + rag-deploy-service-5b7887644c-sm58z 2/2 Running 0 110m + vectorize-deploy-service-57f5f94ccf-sbhzp 2/2 Running 0 17d + web-deploy-service-74fbf7999f-r46rg 1/1 Running 0 2d + tune-deploy-agents-5d46bfdbd4-xph7b 1/1 Running 0 2d + ``` + ++ Troubleshooting pod startup failures + + Check the `servers.url` field in openapi.yaml in the euler-copilot-tune directory to make sure the address of the tuning service is set correctly. + + Check whether the path of the `$plugin_dir` plugin directory is configured correctly. The variable is located in the `framework` section of `euler-copilot-helm/chart/euler_copilot/values.yaml`. If the plugin directory does not exist, create it, and put the euler-copilot-tune folder into `$plugin_dir`. + + Check whether the sglang URL and key are filled in correctly. These variables are located in `/home/euler-copilot-framework/euler-copilot-helm/chart/euler_copilot/values.yaml` + + ```yaml + # Model used for function call + scheduler: + # Inference framework type + backend: sglang + # Model URL + url: "" + # Model API Key + key: "" + # Database settings + ``` + ++ Using intelligent tuning in the command-line client + + For details, see [openEuler Copilot System command line (intelligent plugin: intelligent tuning)](../../../user-guide/cli-client/intelligent-tuning.md) diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/cli-assistant-guide.md b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/cli-assistant-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..d965d51e293c8304710c0469f1da4b605db7b32e --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/cli-assistant-guide.md @@ -0,0 +1,169 @@ +# Command-Line Assistant User Guide + +## Introduction + +The openEuler Copilot System command-line assistant is a command-line (shell) AI assistant that helps you quickly generate 
and execute shell commands to improve your productivity. In addition, the standard edition based on the Gitee AI online service has built-in openEuler knowledge that can help you learn and use the openEuler operating system. + +## Requirements + +- OS: openEuler 22.03 LTS SP3, or openEuler 24.03 LTS and later +- Terminal software: + - Linux desktop environments: the built-in terminals of desktop environments such as GNOME, KDE, and DDE are supported + - Remote SSH connections: terminals compatible with xterm-256 and the UTF-8 character set are supported + +## Installation + +The openEuler Copilot System command-line assistant can be installed from the OEPKGS repository. + +### Configuring the OEPKGS Repository + +```bash +sudo dnf config-manager --add-repo https://repo.oepkgs.net/openeuler/rpm/`sed 's/release //;s/[()]//g;s/ /-/g' /etc/openEuler-release`/extras/`uname -m` +``` + +```bash +sudo dnf clean all +``` + +```bash +sudo dnf makecache +``` + +### Installing the Command-Line Assistant + +```bash +sudo dnf install eulercopilot-cli +``` + +If the `Error: GPG check FAILED` error occurs, use `--nogpgcheck` to skip the check. + +```bash +sudo dnf install --nogpgcheck eulercopilot-cli +``` + +## Initialization + +```bash +copilot --init +``` + +Then enter your API key as prompted to complete the configuration. + +![shell-init](./pictures/shell-init.png) + +Before first use, exit the terminal or reconnect the SSH session so that the configuration takes effect. + +- **View the assistant help page** + + ```bash + copilot --help + ``` + + ![shell-help](./pictures/shell-help.png) + +## Usage + +Type a question in the terminal and press `Ctrl + O` to ask it. + +### Shortcuts + +- After typing a natural-language question, press `Ctrl + O` to ask the AI directly. +- Pressing `Ctrl + O` directly auto-fills the command prefix `copilot`; type the arguments and press `Enter` to run it. + +### Intelligent Q&A + +After initialization, the command-line assistant is in intelligent Q&A mode by default. +The current mode is displayed at the **upper left** of the command prompt. +If the current mode is not intelligent Q&A, run `copilot -c` (`copilot --chat`) to switch to it. + +![chat-ask](./pictures/shell-chat-ask.png) + +After the AI finishes answering, recommended questions are generated based on the Q&A history; you can copy and paste one into the command line as a follow-up. Type the follow-up question and press `Enter` to ask it. + +![chat-next](./pictures/shell-chat-continue.png) + +![chat-next-result](./pictures/shell-chat-continue-result.png) + +Intelligent Q&A mode supports consecutive follow-up questions; each follow-up carries the context of at most three historical Q&A rounds. + +Type `exit` to leave intelligent Q&A mode and return to the Linux command line. + +![chat-exit](./pictures/shell-chat-exit.png) + +- If a program error occurs during a Q&A, press `Ctrl + C` to exit the current question immediately and then try asking again. + +### Shell Commands + +The AI returns shell commands based on your question. The openEuler Copilot System command-line assistant can explain, edit, or execute these commands and display the execution results. + +![shell-cmd](./pictures/shell-cmd.png) + +The assistant automatically extracts the commands from the AI answer and displays the available actions. Use the up and down arrow keys to select an action and press `Enter` to confirm. + +![shell-cmd-interact](./pictures/shell-cmd-interact.png) + +#### Explain + +If the AI 
returned only one command, choosing Explain directly asks the AI to explain the command and displays the answer. +If the AI returned multiple commands, a command list is displayed, and you can select **one** command at a time to be explained. + +![shell-cmd-explain-select](./pictures/shell-cmd-explain-select.png) + +After the explanation, you can continue with other actions. + +![shell-cmd-explain-result](./pictures/shell-cmd-explain-result.png) + +#### Edit + +![shell-cmd-edit](./pictures/shell-cmd-edit.png) + +Select a command to edit, and press `Enter` to confirm when you are done. + +![shell-cmd-edit-result](./pictures/shell-cmd-edit-result.png) + +After editing, you can continue to edit other commands or choose other actions. + +#### Execute + +If the AI returned only one command, choosing Execute runs the command directly and displays the result. +If the AI returned multiple commands, a command list is displayed, and you can select **multiple** commands to execute. + +Move the cursor with the up and down arrow keys, press `Space` to select commands, and press `Enter` to execute the selected ones. +Selected commands are highlighted in **blue**, as shown in the figure. + +![shell-cmd-exec-multi-select](./pictures/shell-cmd-exec-multi-select.png) + +If you press `Enter` without selecting any command, execution is skipped and the next Q&A round starts. + +After you press `Enter`, the selected commands are executed one by one from top to bottom. + +![shell-cmd-exec-result](./pictures/shell-cmd-exec-result.png) + +If an error occurs during execution, the assistant displays the error message, **stops executing the commands**, and starts the next Q&A round. +In the next round you can prompt the AI to correct the command or ask it to regenerate the command. + +### Intelligent Plugins + +Run `copilot -p` (`copilot --plugin`) in the Linux command line to switch to intelligent plugin mode. + +![shell-plugin](./pictures/shell-plugin.png) + +Type a question and press `Ctrl + O` to ask it, then select a plugin from the list and press `Enter` to have the plugin answer the question. + +![shell-plugin-select](./pictures/shell-plugin-select.png) + +![shell-plugin-result](./pictures/shell-plugin-result.png) + +## Uninstallation + +```bash +sudo dnf remove eulercopilot-cli +``` + +Then delete the configuration file with the following command. + +```bash +rm ~/.config/eulercopilot/config.json +``` + +After uninstallation, restart the terminal or reconnect the SSH session to restore the configuration. diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/intelligent-diagnosis.md b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/intelligent-diagnosis.md new file mode 100644 index 0000000000000000000000000000000000000000..eb999cb5483620450b2e2aea77a818382aeca2a4 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/intelligent-diagnosis.md @@ -0,0 +1,50 @@ +# Intelligent Plugin: Intelligent Diagnosis + +After the intelligent diagnosis tool is deployed, the local host can be diagnosed through the openEuler Copilot System agent framework. +When you ask questions in intelligent diagnosis mode, the agent framework service can invoke the local diagnosis tools to diagnose abnormal conditions, analyze them, and generate reports. + +## Procedure + +**Step 1** Switch to intelligent plugin mode + +```bash +copilot -p +``` + 
+![Switch to intelligent plugin mode](./pictures/shell-plugin-diagnose-switch-mode.png) + +**Step 2** Detect abnormal events + +```bash +Help me detect abnormal events +``` + +Press `Ctrl + O` to ask the question, then select "Intelligent Diagnosis" from the plugin list. + +![Detect abnormal events](./pictures/shell-plugin-diagnose-detect.png) + +**Step 3** View the details of an abnormal event + +```bash +Show the abnormal event details of container XXX +``` + +![View abnormal event details](./pictures/shell-plugin-diagnose-detail.png) + +**Step 4** Analyze an abnormal event + +```bash +Run profiling analysis on the XXX metric of container XXX +``` + +![Abnormal event analysis](./pictures/shell-plugin-diagnose-profiling.png) + +**Step 5** View the abnormal event analysis report + +Wait 5 to 10 minutes, then view the analysis report. + +```bash +Show the corresponding profiling report +``` + +![Profiling report](./pictures/shell-plugin-diagnose-report.png) diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/intelligent-tuning.md b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/intelligent-tuning.md new file mode 100644 index 0000000000000000000000000000000000000000..b5c40581668ae4f6074043e62a93b2c4b240e5b3 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/intelligent-tuning.md @@ -0,0 +1,53 @@ +# Intelligent Plugin: Intelligent Tuning + +After the intelligent tuning tool is deployed, the local host can be tuned through the openEuler Copilot System agent framework. +When you ask questions in intelligent tuning mode, the agent framework service can invoke the local tuning tools to collect performance metrics and generate performance analysis reports and optimization suggestions. + +## Procedure + +**Step 1** Switch to intelligent tuning mode + +```bash +copilot -t +``` + +![Switch to intelligent tuning mode](./pictures/shell-plugin-tuning-switch-mode.png) + +**Step 2** Collect performance metrics + +```bash +Help me collect performance metrics +``` + +![Collect performance metrics](./pictures/shell-plugin-tuning-metrics-collect.png) + +**Step 3** Generate a performance analysis report + +```bash +Help me generate a performance analysis report +``` + +![Performance analysis report](./pictures/shell-plugin-tuning-report.png) + +**Step 4** Generate performance optimization suggestions + +```bash +Generate a performance optimization script +``` + +![Generate the optimization script](./pictures/shell-plugin-tuning-script-gen.png) + +**Step 5** Choose "Execute command" to run the optimization script + +![Run the optimization script](./pictures/shell-plugin-tuning-script-exec.png) + +- The script content is shown in the figure: + ![Optimization script content](./pictures/shell-plugin-tuning-script-view.png) + +## Remote Tuning + +To tune another machine remotely, prefix the example questions above with the IP address of that machine. + +For example: `Collect performance metrics on the machine 192.168.1.100.` + +Before remote tuning, make sure that the intelligent tuning tool has been deployed on the target machine and that the openEuler Copilot System agent framework can access it. diff --git 
a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/obtaining-the-api-key.md b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/obtaining-the-api-key.md new file mode 100644 index 0000000000000000000000000000000000000000..01381a772743299de24d58a7a94bf0a180f77d29 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/obtaining-the-api-key.md @@ -0,0 +1,28 @@ +# Obtaining an API Key + +## Overview + +The openEuler Copilot System command-line assistant uses an API key to authenticate you and obtain API access. +Therefore, you need to obtain an API key before you start. + +## Precautions + +- Keep your API key safe and do not disclose it to others. +- The API key is used only by the command-line assistant and the DevStation desktop client, and for no other purpose. +- Each user can have only one API key; creating a new API key invalidates the old one. +- The API key is displayed only once, at creation time, so be sure to save it immediately. If the key is lost, you need to create a new one. +- If you encounter a "too many requests" error, your API key may be in use by someone else; go to the official website to refresh or revoke it promptly. + +## Obtaining a Key + +1. Log in to the [openEuler Copilot System (Gitee AI) official website](https://eulercopilot.gitee.com). +2. Click your avatar in the upper right corner and select "API KEY". +3. Click "New". +4. **Save the API key immediately. It is displayed only once, at creation time; do not disclose it to anyone.** + +## Managing the API Key + +1. Log in to the [openEuler Copilot System (Gitee AI) official website](https://eulercopilot.gitee.com). +2. Click your avatar in the upper right corner and select "API KEY". +3. 
Click "Refresh" to refresh the API key; click "Revoke" to revoke the API key. + - After the API key is refreshed, the old key becomes invalid; save the newly generated API key immediately. diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-ask.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-ask.png new file mode 100644 index 0000000000000000000000000000000000000000..00d5cf5ecf894dd62366ec086bf96eae532f0b5d Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-ask.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-continue-result.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-continue-result.png new file mode 100644 index 0000000000000000000000000000000000000000..f30f9fe7a015e775742bc184b8ac75790dc482fa Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-continue-result.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-continue.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-continue.png new file mode 100644 index 0000000000000000000000000000000000000000..7e4801504fd53fab989574416e6220c4fa3f1d38 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-continue.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-exit.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-exit.png new file mode 100644 index 0000000000000000000000000000000000000000..0bb81190a3039f6c5a311b365376ec230c1ad4b5 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-chat-exit.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-edit-result.png 
b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-edit-result.png new file mode 100644 index 0000000000000000000000000000000000000000..c5e6f8245e7d66cdbe5370f18d15a791a33a517a Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-edit-result.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-edit.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-edit.png new file mode 100644 index 0000000000000000000000000000000000000000..bb6209373a6d2a1881728bee352e7c3b46cc91d7 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-edit.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-exec-multi-select.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-exec-multi-select.png new file mode 100644 index 0000000000000000000000000000000000000000..2dda108a39af54fc15a4ff8c0dca107de38b9cf0 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-exec-multi-select.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-exec-result.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-exec-result.png new file mode 100644 index 0000000000000000000000000000000000000000..f4fff6a62b8b4220b52fdf55b133f2ba37850569 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-exec-result.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-explain-result.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-explain-result.png new file mode 100644 index 
0000000000000000000000000000000000000000..707dd36aa7c7eadae4f29254cf5fc18ce877f597 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-explain-result.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-explain-select.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-explain-select.png new file mode 100644 index 0000000000000000000000000000000000000000..bf58b69e241ea11a6945f21e3fc69d22a401be2e Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-explain-select.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-interact.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-interact.png new file mode 100644 index 0000000000000000000000000000000000000000..00bb3a288fbd2fb962b08f34fbe90c733afe0343 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd-interact.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd.png new file mode 100644 index 0000000000000000000000000000000000000000..619172c8ed60a7b536364944a306fbf76fcbfb1f Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-cmd.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-help.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-help.png new file mode 100644 index 0000000000000000000000000000000000000000..97d0dedd3f7b1c749bc5fded471744923d766b8b Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-help.png differ diff --git 
a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-init.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-init.png new file mode 100644 index 0000000000000000000000000000000000000000..bbb2257eb1ff2bfec36110409fc6c55a26386c9e Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-init.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-detail.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-detail.png new file mode 100644 index 0000000000000000000000000000000000000000..7bd624e025eaae4b77c603d88bf1b9ad5e235fe7 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-detail.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-detect.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-detect.png new file mode 100644 index 0000000000000000000000000000000000000000..2b38259ff0c1c7045dbff9abf64f36a109a3377b Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-detect.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-profiling.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-profiling.png new file mode 100644 index 0000000000000000000000000000000000000000..0e63c01f35dbc291f805b56de749eac09e0a079d Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-profiling.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-report.png 
b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-report.png new file mode 100644 index 0000000000000000000000000000000000000000..c16f0184a2ad3d2468466b33d0e861d2a31bc4e2 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-report.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-switch-mode.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-switch-mode.png new file mode 100644 index 0000000000000000000000000000000000000000..165c6c453353b70c3e1e2cb07d7f43d5ee3525e3 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-diagnose-switch-mode.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-result.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-result.png new file mode 100644 index 0000000000000000000000000000000000000000..3e3f45a974a0700d209f7d30af89eb2050a392d6 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-result.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-select.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-select.png new file mode 100644 index 0000000000000000000000000000000000000000..13959203c77eaa9f41051897cf9e847ff3642a8a Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-select.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-metrics-collect.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-metrics-collect.png 
new file mode 100644 index 0000000000000000000000000000000000000000..4d5678b7f77b05d48552fcb9656f4a4372dbbe61 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-metrics-collect.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-report.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-report.png new file mode 100644 index 0000000000000000000000000000000000000000..01daaa9a84c13158a95afddffeb8a7e3303f1e76 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-report.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-script-exec.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-script-exec.png new file mode 100644 index 0000000000000000000000000000000000000000..0b694c3fba6918ef39cca977b2072b2913d12b95 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-script-exec.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-script-gen.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-script-gen.png new file mode 100644 index 0000000000000000000000000000000000000000..6e95551767e213f59669d03fd4cceba05801a983 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-script-gen.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-script-view.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-script-view.png new file mode 100644 index 
0000000000000000000000000000000000000000..c82c77bf6f4e4e19f400395aaadc9f99dc8d373c Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-script-view.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-switch-mode.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-switch-mode.png new file mode 100644 index 0000000000000000000000000000000000000000..0f06c803ea3621a0f4fb83bbbe731e2bb4bba788 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin-tuning-switch-mode.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin.png new file mode 100644 index 0000000000000000000000000000000000000000..4c1afd306a6aee029f5bda38aa7b1fce57227e31 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/cli-client/pictures/shell-plugin.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/intelligent-plugin-overview.md b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/intelligent-plugin-overview.md new file mode 100644 index 0000000000000000000000000000000000000000..3a37dc9384dcc2080ceb7a687e94e9700e4513eb --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/intelligent-plugin-overview.md @@ -0,0 +1,19 @@ +# Intelligent Plugins + +## Usage + +1. As shown in the figure, plugins can be selected at the upper left of the input box; click it to display the plugin list. + + ![Intelligent plugins](./pictures/plugin-list.png) + +2. Select a plugin, then ask a question. + + ![Intelligent plugins](./pictures/plugin-selected.png) + +3. 
Wait for the service to respond and view the returned result. + + In intelligent plugin mode, the recommended questions pin the recommended workflow at the top; the blue text is the name of the corresponding plugin, and you can click it to ask a quick follow-up. + + ![Intelligent plugins](./pictures/plugin-suggestion.png) + + ![Intelligent plugins](./pictures/plugin-result.png) diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/intelligent-q-and-a-guide.md b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/intelligent-q-and-a-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..a4d0c0e270b9931d6aa1a72d0397655ac4d9c1ca --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/intelligent-q-and-a-guide.md @@ -0,0 +1,134 @@ +# Intelligent Q&A User Guide + +## Starting a Conversation + +Type the content you want to ask in the input box at the bottom of the chat area. Press `Shift + Enter` for a line break, and press `Enter` or click "Send" to send the question. + +> **Note** +> The chat area is the main part of the page, as shown in Figure 1. + +- Figure 1 Chat area + ![Chat area](./pictures/chat-area.png) + +### Multi-Round Conversations + +openEuler Copilot System intelligent Q&A supports multi-round conversations. Simply continue asking follow-up questions in the same conversation, as shown in Figure 2. + +- Figure 2 Multi-round conversation + ![Multi-round conversation](./pictures/context-support.png) + +### Regenerating an Answer + +If the AI-generated content is wrong or incomplete, you can ask the AI to answer the question again. Click "Regenerate" at the lower left of an answer to have openEuler Copilot System answer the question again. After regeneration, the paging icons ![previous](./pictures/icon-arrow-prev.png) and ![next](./pictures/icon-arrow-next.png) appear at the lower right of the answer; click ![previous](./pictures/icon-arrow-prev.png) or ![next](./pictures/icon-arrow-next.png) to view the different answers, as shown in Figure 3. + +- Figure 3 Regenerate + ![Regenerate](./pictures/regenerate.png) + +### Recommended Questions + +Some recommended questions are displayed below the AI answer; click one to ask it, as shown in Figure 4. + +- Figure 4 Recommended questions + ![Recommended questions](./pictures/recommend-questions.png) + +## Managing Conversations + +> **Note** +> The conversation management area is on the left side of the page. + +### Creating a Conversation + +Click "New conversation" to create a conversation, as shown in Figure 5. + +- Figure 5 New conversation + ![New conversation](./pictures/new-chat.png) + +### Searching Conversation History + +Enter a keyword in the history search box on the left side of the page, then click ![icon-search](./pictures/icon-search.png) to search the conversation history, as shown in Figure 6. + +- Figure 6 Searching conversation history + ![Searching conversation history](./pictures/search-history.png) + +### Managing a Single History Record + +The history list is below the history search bar. On the right side of each history record, click ![edit](./pictures/icon-edit.png) to edit the name of the record, as shown in Figure 7. + +- Figure 7 Renaming a history record + ![Renaming a history record](./pictures/rename-session.png) + +After rewriting the name of the history record, click ![confirm](./pictures/icon-confirm.png) on the right to finish renaming, or click ![cancel](./pictures/icon-cancel.png) on the right to discard the renaming, as shown in Figure 
8. + +- Figure 8 Confirming or canceling the renaming + ![Confirming or canceling the renaming](./pictures/rename-session-confirmation.png) + +In addition, click the delete icon on the right side of a history record, as shown in Figure 9, to trigger a confirmation for deleting that record. In the confirmation dialog box shown in Figure 10, click "Confirm" to delete the record, or click "Cancel" to cancel the deletion. + +- Figure 9 Deleting a single history record + ![Deleting a single history record](./pictures/delete-session.png) + +- Figure 10 Confirming the deletion of a single history record + ![Confirming the deletion of a single history record](./pictures/delete-session-confirmation.png) + +### Deleting History Records in Batches + +First, click "Batch delete", as shown in Figure 11. + +- Figure 11 Batch delete + ![Batch delete](./pictures/bulk-delete.png) + +Then select the history records to delete, as shown in Figure 12. Click "Select all" to select all history records, or click a single record or the checkbox on its left to select it individually. + +- Figure 12 Selecting history records for batch deletion + ![Selecting history records for batch deletion](./pictures/bulk-delete-multi-select.png) + +Finally, confirm the batch deletion, as shown in Figure 13. Click "Confirm" to delete the records, or click "Cancel" to cancel the deletion. + +- Figure 13 Confirming the batch deletion + ![Confirming the batch deletion](./pictures/bulk-delete-confirmation.png) + +## Feedback and Reporting + +At the lower right of an answer in the chat area, you can give feedback on the answer, as shown in Figure 14. Click ![like](./pictures/icon-thumb-up.png) to upvote the answer, or click ![dislike](./pictures/icon-thumb-down.png) to report why you are dissatisfied with it. + +- Figure 14 Like and dislike feedback + ![Like and dislike feedback](./pictures/feedback.png) + +For dissatisfaction feedback, as shown in Figure 15, after you click ![dislike](./pictures/icon-thumb-down.png), the chatbot displays a dialog box where you can select the relevant reasons for your dissatisfaction. + +- Figure 15 Dissatisfaction feedback + ![Dissatisfaction feedback](./pictures/feedback-illegal.png) + +If you select "Contains incorrect information", you need to fill in a reference answer link and a description, as shown in Figure 16. + +- Figure 16 Dissatisfaction feedback: incorrect information + ![Dissatisfaction feedback: incorrect information](./pictures/feedback-misinfo.png) + +### Reporting + +If the content returned by the AI contains inappropriate information, you can click the button at the lower right corner to report it, as shown in Figure 17. After clicking Report, select the report type and submit it; if no option fits, select "Other" and enter the reason, as shown in Figure 18. + +- Figure 17 Report button + ![Report 1](./pictures/report.png) + +- Figure 18 Selecting a report type + ![Report 2](./pictures/report-options.png) + +## Viewing the Service Agreement and Privacy Policy + +Click "Service Agreement" to view the service agreement, or click "Privacy Policy" to view the privacy policy, as shown in Figures 19 and 20. + +- Figure 19 Service agreement and privacy policy entry + ![Service agreement and privacy policy entry](./pictures/privacy-policy-entry.png) + +- Figure 20 Service agreement and privacy policy + ![Service agreement and privacy policy](./pictures/privacy-policy.png) + +## Appendix + +### Exporting User Information + +#### Details + +The openEuler Copilot System backend provides a user information export function. If you need it, contact us proactively by email, and the operations team will send the exported user information back to you by email. diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/introduction.md b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/introduction.md new file mode 100644 index 
0000000000000000000000000000000000000000..afd47b84eab80faadce20970b8881eb515996805 --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/introduction.md @@ -0,0 +1,67 @@ +# Preface + +## Overview + +This document describes how to use openEuler Copilot System. It details the functions of the web interface of the openEuler Copilot System online service and provides answers to common questions; for details, see the corresponding manuals. + +## Intended Audience + +This document is mainly intended for users of openEuler Copilot System. Users must have the following experience and skills: + +- Familiarity with the openEuler operating system. +- Experience with AI chat tools. + +## Change History + +| Version | Date | Description | +|--------|------------|----------------| +| 03 | 2024-09-19 | Updated for the new interface. | +| 02 | 2024-05-13 | Improved the intelligent chat instructions. | +| 01 | 2024-01-28 | First official release. | + +## Introduction + +### Disclaimer + +- Usernames and passwords involved during use that are not for the tool's own authentication are not used for any other purpose and are not saved in the system environment. +- Before chatting or performing operations, confirm that you are the owner of the application or have obtained sufficient authorization from the owner. +- Conversation results may contain internal information and related data of the application you analyze; manage them properly. +- Unless otherwise stipulated by laws and regulations or by a contract between the parties, the openEuler community makes no express or implied statements or warranties about the analysis results, and makes no guarantee or commitment regarding their merchantability, satisfaction, non-infringement, or fitness for a particular purpose. +- Any action you take based on the analysis records shall comply with laws and regulations, and you bear the risks yourself. +- Without the owner's authorization, no individual or organization may use the application or the related analysis records for any activity. The openEuler community is not responsible for any consequences arising therefrom and bears no legal liability; where necessary, the violator's legal liability will be pursued. + +### About openEuler Copilot System + +openEuler Copilot System is an AI assistant based on the openEuler operating system. It helps users solve various technical problems and provides technical support and consulting services. It uses state-of-the-art natural language processing technology and machine learning algorithms to understand user questions and provide corresponding solutions. + +### Scenarios + +1. General OS knowledge: openEuler Copilot System can answer questions about general Linux knowledge, upstream information, and toolchain introduction and guidance. +2. openEuler expertise: openEuler Copilot System can answer questions about openEuler community information, technical principles, and usage guidance. +3. openEuler extended knowledge: openEuler Copilot System can answer questions about the hardware features around openEuler and about ISVs and OSVs. +4. openEuler application cases: openEuler Copilot System can provide openEuler technical cases and industry application cases. +5. 
Shell command generation: openEuler Copilot System can help users generate single or complex shell commands. + +In short, openEuler Copilot System can be applied in various scenarios to help users improve productivity and learn about Linux, openEuler, and related topics. + +### Access and Use + +openEuler Copilot System is used through its web page. For account registration and login, see [Registration and Login](./registration-and-login.md). For usage instructions, see the [Intelligent Q&A User Guide](./intelligent-q-and-a-guide.md). + +### Interface Description + +#### Interface Sections + +The openEuler Copilot System interface consists of the sections shown in Figure 1; the function of each section is described in Table 1. + +- Figure 1 openEuler Copilot System interface + ![Copilot interface](./pictures/main-page-sections.png) + +- Table 1 Sections of the openEuler Copilot System home page + +| Section | Name | Description | +|-----|------------|----------------------------------------------------------------| +| 1 | Settings area | Provides the account login and logout entries and light/dark mode switching | +| 2 | Conversation management area | Used to create conversations, manage conversation history, and delete history records in batches | +| 3 | Chat area | Where the user chats with openEuler Copilot System | +| 4 | Service agreement and privacy policy area | Provides the entry for viewing the service agreement and privacy policy | diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/bulk-delete-confirmation.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/bulk-delete-confirmation.png new file mode 100644 index 0000000000000000000000000000000000000000..33230200fbe9f1e0fa72c27f51b8786192aa14f2 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/bulk-delete-confirmation.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/bulk-delete-multi-select.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/bulk-delete-multi-select.png new file mode 100644 index 0000000000000000000000000000000000000000..96d8201681c4a7772c815a2b9183a0efca9179c2 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/bulk-delete-multi-select.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/bulk-delete.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/bulk-delete.png new file mode 100644 index 
0000000000000000000000000000000000000000..929230cd06cc792b633ab183155225926d2c300d Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/bulk-delete.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/chat-area.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/chat-area.png new file mode 100644 index 0000000000000000000000000000000000000000..752f18ad4bd85aaa1132c50cc4c7b7dc159aec91 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/chat-area.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/context-support.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/context-support.png new file mode 100644 index 0000000000000000000000000000000000000000..0bd5f091d0eff34d9b5f36eec6df63b191656daa Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/context-support.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/delete-session-confirmation.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/delete-session-confirmation.png new file mode 100644 index 0000000000000000000000000000000000000000..efd07828e97de46c9660c162ef553362765d5577 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/delete-session-confirmation.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/delete-session.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/delete-session.png new file mode 100644 index 0000000000000000000000000000000000000000..596af33f7be41d456a57e6a297820530f8485f34 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/delete-session.png differ diff --git 
a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/feedback-illegal.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/feedback-illegal.png new file mode 100644 index 0000000000000000000000000000000000000000..b6e84ba45977d911db960da97bdff714624ba18c Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/feedback-illegal.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/feedback-misinfo.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/feedback-misinfo.png new file mode 100644 index 0000000000000000000000000000000000000000..cc5505226add1e6fbde7b93ff09877038e8cfdce Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/feedback-misinfo.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/feedback.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/feedback.png new file mode 100644 index 0000000000000000000000000000000000000000..9fe1c27acb57d4d24a26c8dde61ee4272f954e46 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/feedback.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/gitee-login-click2signup.jpg b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/gitee-login-click2signup.jpg new file mode 100644 index 0000000000000000000000000000000000000000..dde8fbe201a44c116e58c3d435737f1a6a3f6f34 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/gitee-login-click2signup.jpg differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/gitee-login.jpg b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/gitee-login.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..ac922094fd513e3f8642f885351f541200e6450b Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/gitee-login.jpg differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/gitee-signup.jpg b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/gitee-signup.jpg new file mode 100644 index 0000000000000000000000000000000000000000..57e473466cba423be0d6f76814b5a0656804a884 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/gitee-signup.jpg differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-arrow-next.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-arrow-next.png new file mode 100644 index 0000000000000000000000000000000000000000..1a36c84e0965f9dbf1f90e9a3daadcd1a2560951 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-arrow-next.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-arrow-prev.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-arrow-prev.png new file mode 100644 index 0000000000000000000000000000000000000000..eb667e93cc6d51aa191a0ac7607e72d4d6923cbc Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-arrow-prev.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-cancel.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-cancel.png new file mode 100644 index 0000000000000000000000000000000000000000..34d4454b6f92ee12db6841dafe0e94a12c3b9584 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-cancel.png differ diff --git 
a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-confirm.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-confirm.png new file mode 100644 index 0000000000000000000000000000000000000000..1d650f8192e04fae8f7b7c08cd527227c91b833a Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-confirm.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-edit.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-edit.png new file mode 100644 index 0000000000000000000000000000000000000000..f7b28aa605b5e899855a261d641d27a2674703eb Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-edit.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-search.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-search.png new file mode 100644 index 0000000000000000000000000000000000000000..7902923196c3394ae8eafaf5a2b6fdf7f19b1f40 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-search.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-thumb-down.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-thumb-down.png new file mode 100644 index 0000000000000000000000000000000000000000..cda14d196d92898da920ed64ad37fa9dd124c775 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-thumb-down.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-thumb-up.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-thumb-up.png new file mode 100644 index 
0000000000000000000000000000000000000000..c75ce44bff456e24bc19040c18e4e644bbb77bd1 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-thumb-up.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-user.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-user.png new file mode 100644 index 0000000000000000000000000000000000000000..e6b06878b76d9e6d268d74070539b388129fa8c4 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/icon-user.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/login-popup.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/login-popup.png new file mode 100644 index 0000000000000000000000000000000000000000..4ac4116f72aa56c81affdb31b806325966331aa9 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/login-popup.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/logout.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/logout.png new file mode 100644 index 0000000000000000000000000000000000000000..e2288c35d89d598f3bb8d939bdf6a9d125bcae83 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/logout.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/main-page-sections.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/main-page-sections.png new file mode 100644 index 0000000000000000000000000000000000000000..155b68928177de0785f4705d2df14c0233b24743 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/main-page-sections.png differ diff --git 
a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/new-chat.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/new-chat.png new file mode 100644 index 0000000000000000000000000000000000000000..176bb3e1e932caa758a56540345218c57ee2ff20 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/new-chat.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-list.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-list.png new file mode 100644 index 0000000000000000000000000000000000000000..2745f7d82a21cd9eba139898f5ea0c5ab979037f Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-list.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-result.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-result.png new file mode 100644 index 0000000000000000000000000000000000000000..7056aebeecba8760e0ca2773348cce0a0b8167f1 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-result.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-selected.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-selected.png new file mode 100644 index 0000000000000000000000000000000000000000..9182ffa57db9da349cb36186a7b3cb035b51b8aa Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-selected.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-suggestion.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-suggestion.png new file mode 100644 index 
0000000000000000000000000000000000000000..bb416881550349000f61b0c1bd3dd540878bd6ad Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/plugin-suggestion.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/privacy-policy-entry.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/privacy-policy-entry.png new file mode 100644 index 0000000000000000000000000000000000000000..d7efce3e6e8d477ef47a1bc8a9bba0d087cf8058 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/privacy-policy-entry.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/privacy-policy.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/privacy-policy.png new file mode 100644 index 0000000000000000000000000000000000000000..dc22c50de7f9d2dc3e0bf523175e7915c91c630f Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/privacy-policy.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/recommend-questions.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/recommend-questions.png new file mode 100644 index 0000000000000000000000000000000000000000..076ec7092af7fe7987e5dc7c864a6b9f8b2b1160 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/recommend-questions.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/regenerate.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/regenerate.png new file mode 100644 index 0000000000000000000000000000000000000000..655c9d5002df4a17aaf84e8780fff4a0118c6c01 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/regenerate.png differ diff --git 
a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/rename-session-confirmation.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/rename-session-confirmation.png new file mode 100644 index 0000000000000000000000000000000000000000..d64708bd57d53deafdc5ddbb70d9deaeaca0d132 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/rename-session-confirmation.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/rename-session.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/rename-session.png new file mode 100644 index 0000000000000000000000000000000000000000..73e7e19c5ac8e8035df0e4b553a9b78ff5c9a051 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/rename-session.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/report-options.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/report-options.png new file mode 100644 index 0000000000000000000000000000000000000000..8a54fd2598d51fc40b57052f404dd830cf621f4d Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/report-options.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/report.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/report.png new file mode 100644 index 0000000000000000000000000000000000000000..471bcbe8614fc8bab4dcc1805fa1bf4574990fc8 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/report.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/search-history.png b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/search-history.png new file mode 100644 index 
0000000000000000000000000000000000000000..2239d14a7aa8bc13a7b8d3ec71ba9ed71b95e850 Binary files /dev/null and b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/pictures/search-history.png differ diff --git a/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/registration-and-login.md b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/registration-and-login.md new file mode 100644 index 0000000000000000000000000000000000000000..8bc1d52fcbd1c3595171d6b2006caff6abd62fbf --- /dev/null +++ b/docs/en/Tools/AI/openEuler_Copilot_System/user-guide/web-client/registration-and-login.md @@ -0,0 +1,68 @@ +# Logging In to openEuler Copilot System + +This section uses the Chrome 121 browser on Windows 10 as an example to describe how to log in to the openEuler Copilot System interface. + +## Browser Requirements + +Browser requirements are listed in Table 1. + +- Table 1 Browser requirements + +| Browser | Minimum Version | Recommended Version | | ----- | ----- | ----- | | Google Chrome | 72 | 121 or later | | Mozilla Firefox | 89 | 122 or later | | Apple Safari | 11.0 | 16.3 or later | + +## Requesting Access + +To access the openEuler Copilot System online environment, request access permission by following the [(GITEE AI) openEuler Copilot System Online Environment Trial Application Tutorial](https://gitee.com/openeuler/euler-copilot-framework/issues/IARUWT?from=project-issue). + +## Procedure + +> **Note** +> The openEuler Copilot System online service (Gitee AI) account is the same as the official Gitee account. + +**Step 1** Open a browser on your local PC, enter [https://ai.gitee.com/apps/zhengw99/openEulerCopilotSystem](https://ai.gitee.com/apps/zhengw99/openEulerCopilotSystem) in the address bar, and press `Enter`. If you are not logged in, a login prompt dialog box appears when you open openEuler Copilot System, as shown in Figure 1. + +- Figure 1 Not logged in + ![Not logged in](./pictures/login-popup.png) + +**Step 2** Log in to openEuler Copilot System (with a registered account). + +The login page opens, as shown in Figure 2. + +- Figure 2 Logging in to openEuler Copilot System + ![Logging in to openEuler Copilot System](./pictures/gitee-login.jpg) + +## Registering an openEuler Copilot System Account + +> **Prerequisite** +> You do not have a Gitee account. + +**Step 1** On the login page, click "Sign up", as shown in Figure 3. + +- Figure 3 Sign up + ![Sign up](./pictures/gitee-login-click2signup.jpg) + +**Step 2** On the account registration page, fill in the required information as prompted, as shown in Figure 4. + +- Figure 4 Account registration + ![Account registration](./pictures/gitee-signup.jpg) + +**Step 3** After filling in the account information as required, click "Sign up now" to complete the registration. You can then return to the login page. + +## Logging Out + +> **Prerequisite** +> You are logged in to openEuler Copilot System. + +**Step 1** Click ![Log out](./pictures/icon-user.png) to display the "Log out" drop-down list, as shown in Figure 5. + +> **Note** +> The account management area is in the upper right corner of the page, as shown in Figure 5. + +- Figure 5 Account management area + ![Account management area](./pictures/logout.png) + +**Step 2** Click "Log out" to log out, as shown in Figure 5. diff --git a/docs/en/Tools/Cloud/CPDS/Menu/index.md b/docs/en/Tools/Cloud/CPDS/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..e17a749949a8c44b18535f51f3bc09e251f5a1f6 --- /dev/null +++ b/docs/en/Tools/Cloud/CPDS/Menu/index.md @@ -0,0 +1,7 @@ +--- +headless: true +--- +- [CPDS User Guide]({{< relref "./cpds-user-guide.md" >}}) + - [CPDS Introduction]({{< relref "./cpds-introduction.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment.md" >}}) + - [Usage Instructions]({{< relref "./usage-instructions.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/Cloud/CPDS/cpds-introduction.md b/docs/en/Tools/Cloud/CPDS/cpds-introduction.md new file mode 100644 index 0000000000000000000000000000000000000000..91b647eaca4008e39d77a6697a26e52f88a347f8 --- /dev/null +++ b/docs/en/Tools/Cloud/CPDS/cpds-introduction.md @@ -0,0 +1,57 @@ +# CPDS Overview + +## Introduction + +CPDS (Container Problem Detect System), developed by Beijing Linx Software Corp., is a fault detection system for container clusters. It monitors and identifies container top faults and sub-health conditions. + +## Key Features + +**1. Cluster information collection** + +The system uses node agents on host machines, leveraging systemd, initv, and eBPF technologies to monitor key container services. It collects data on node networks, kernels, drive LVM, and other critical metrics. It also tracks application status, resource usage, system function execution, and I/O operations within containers for anomalies. + +**2. Cluster exception detection** + +The system gathers raw data from cluster nodes and applies predefined rules to detect anomalies, extracting essential information. 
It uploads both detection results and raw data online while ensuring data persistence. + +**3. Node and service container fault/sub-health diagnosis** + +Using exception detection data, the system diagnoses faults or sub-health conditions in nodes and service containers. Analysis results are stored persistently, and a UI layer enables real-time and historical diagnosis data access. + +## System Architecture + +CPDS comprises four components, as illustrated below. The system follows a microservices architecture, with components interacting via APIs. + +![Architecture](images/architecture.png) + +- [cpds-agent](https://gitee.com/openeuler/cpds-agent): Collects raw data about containers and systems from cluster nodes. + +- [cpds-detector](https://gitee.com/openeuler/cpds-detector): Analyzes node data based on exception rules to detect abnormalities. + +- [cpds-analyzer](https://gitee.com/openeuler/cpds-analyzer): Diagnoses node health using configured rules to assess current status. + +- [cpds-dashboard](https://gitee.com/openeuler/cpds-dashboard): Provides a web interface for node health visualization and diagnostic rule configuration. + +## Supported Fault Detection + +CPDS detects the following fault conditions. + +| No. 
| Fault Detection Item | +| --- | -------------------------------------------------------------------- | +| 1 | Container service functionality | +| 2 | Container node agent functionality | +| 3 | Container group functionality | +| 4 | Node health detection functionality | +| 5 | Log collection functionality | +| 6 | Drive usage exceeding 85% | +| 7 | Network issues | +| 8 | Kernel crashes | +| 9 | Residual LVM drive issues | +| 10 | CPU usage exceeding 85% | +| 11 | Node monitoring functionality | +| 12 | Container memory allocation failures | +| 13 | Container memory allocation timeouts | +| 14 | Container network response timeouts | +| 15 | Slow container drive read/write operations | +| 16 | Zombie child processes in container applications | +| 17 | Child process and thread creation failures in container applications | diff --git a/docs/en/Tools/Cloud/CPDS/cpds-user-guide.md b/docs/en/Tools/Cloud/CPDS/cpds-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..3a234a92804c23ab94fad06623776c351ff30300 --- /dev/null +++ b/docs/en/Tools/Cloud/CPDS/cpds-user-guide.md @@ -0,0 +1,3 @@ +# Overview + +This document outlines the installation, deployment, and usage of CPDS. 
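Several items in the fault detection table above are simple threshold rules (for example, drive or CPU usage exceeding 85%). The following minimal shell sketch illustrates that style of check; `check_threshold` is a hypothetical helper for illustration only, not part of the actual CPDS rule engine:

```shell
#!/bin/sh
# Illustrative threshold check in the style of table items 6 and 10
# (usage exceeding 85%). CPDS evaluates such rules through cpds-detector;
# this standalone function only sketches the idea.
check_threshold() {
    name="$1"
    value="$2"
    limit=85
    if [ "$value" -gt "$limit" ]; then
        echo "FAULT: ${name}=${value}% exceeds ${limit}%"
    else
        echo "OK: ${name}=${value}%"
    fi
}

check_threshold drive_usage 91
check_threshold cpu_usage 42
```

Running the sketch prints one line per metric and flags only the value above the 85% limit.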
diff --git a/docs/en/Tools/Cloud/CPDS/images/architecture.png b/docs/en/Tools/Cloud/CPDS/images/architecture.png new file mode 100644 index 0000000000000000000000000000000000000000..0b5e2f24446e79e1e51d6a2288d7cca72d28e0d2 Binary files /dev/null and b/docs/en/Tools/Cloud/CPDS/images/architecture.png differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\345\216\237\345\247\213\346\225\260\346\215\256\345\233\276\350\241\250.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\345\216\237\345\247\213\346\225\260\346\215\256\345\233\276\350\241\250.png" new file mode 100644 index 0000000000000000000000000000000000000000..3f3929f2cb29a8a211852af1d468322d52b6e1af Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\345\216\237\345\247\213\346\225\260\346\215\256\345\233\276\350\241\250.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\345\216\237\345\247\213\346\225\260\346\215\256\346\243\200\347\264\242.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\345\216\237\345\247\213\346\225\260\346\215\256\346\243\200\347\264\242.png" new file mode 100644 index 0000000000000000000000000000000000000000..da9ab5d92c314be3e97560c449b8e55ff6cc44aa Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\345\216\237\345\247\213\346\225\260\346\215\256\346\243\200\347\264\242.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\345\270\203\345\261\200.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\345\270\203\345\261\200.png" new file mode 100644 index 0000000000000000000000000000000000000000..be9a66c364e92b3376766c59f3cebeebe123daec Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\345\270\203\345\261\200.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\227\266\351\227\264\350\214\203\345\233\264\351\200\211\346\213\251.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\227\266\351\227\264\350\214\203\345\233\264\351\200\211\346\213\251.png" new 
file mode 100644 index 0000000000000000000000000000000000000000..f4abd49e0bf2f62b6bea4968bf13a131f859918a Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\227\266\351\227\264\350\214\203\345\233\264\351\200\211\346\213\251.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\237\245\347\234\213\345\216\237\345\247\213\346\225\260\346\215\256.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\237\245\347\234\213\345\216\237\345\247\213\346\225\260\346\215\256.png" new file mode 100644 index 0000000000000000000000000000000000000000..a6d27210ed11f2c2ae66068d0ebd74121fa62437 Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\237\245\347\234\213\345\216\237\345\247\213\346\225\260\346\215\256.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\237\245\347\234\213\350\247\204\345\210\231.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\237\245\347\234\213\350\247\204\345\210\231.png" new file mode 100644 index 0000000000000000000000000000000000000000..f896cdb44a15c527fec3a8df8e6149673f194aeb Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\237\245\347\234\213\350\247\204\345\210\231.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\267\273\345\212\240\350\247\204\345\210\231.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\267\273\345\212\240\350\247\204\345\210\231.png" new file mode 100644 index 0000000000000000000000000000000000000000..599665e676bc623604bee376e883524838dd663a Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\346\267\273\345\212\240\350\247\204\345\210\231.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\345\201\245\345\272\267.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\345\201\245\345\272\267.png" new file mode 100644 index 
0000000000000000000000000000000000000000..75adbc5d24a46edfe914b2c7e1297882388058fd Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\345\201\245\345\272\267.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\345\256\271\345\231\250\345\201\245\345\272\267\347\233\221\346\216\247.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\345\256\271\345\231\250\345\201\245\345\272\267\347\233\221\346\216\247.png" new file mode 100644 index 0000000000000000000000000000000000000000..9ba876561699f46b49f6bc0cc8815d0b5806087b Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\345\256\271\345\231\250\345\201\245\345\272\267\347\233\221\346\216\247.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\346\246\202\350\247\210-\346\214\211\351\222\256.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\346\246\202\350\247\210-\346\214\211\351\222\256.png" new file mode 100644 index 0000000000000000000000000000000000000000..e0ceb89269e6f8a161a2613ae9c542199161e1c8 Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\346\246\202\350\247\210-\346\214\211\351\222\256.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\346\246\202\350\247\210.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\346\246\202\350\247\210.png" new file mode 100644 index 0000000000000000000000000000000000000000..6f5fd9a5728bb416638feba8174cfb499e5e8f7a Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\346\246\202\350\247\210.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\347\211\251\347\220\206\350\265\204\346\272\220\347\233\221\346\216\247.png" 
"b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\347\211\251\347\220\206\350\265\204\346\272\220\347\233\221\346\216\247.png" new file mode 100644 index 0000000000000000000000000000000000000000..c0253de34176db4b23e1491a86bb7892966a7c96 Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\212\202\347\202\271\347\211\251\347\220\206\350\265\204\346\272\220\347\233\221\346\216\247.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\257\212\346\226\255\347\273\223\346\236\234.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\257\212\346\226\255\347\273\223\346\236\234.png" new file mode 100644 index 0000000000000000000000000000000000000000..ffec25e8ff163efc47c38b23fc180ce8188d38dc Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\350\257\212\346\226\255\347\273\223\346\236\234.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244-\345\256\271\345\231\250\345\201\245\345\272\267\347\233\221\346\216\247.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244-\345\256\271\345\231\250\345\201\245\345\272\267\347\233\221\346\216\247.png" new file mode 100644 index 0000000000000000000000000000000000000000..5924d73d2b659d92387a19521fd0692d07601046 Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244-\345\256\271\345\231\250\345\201\245\345\272\267\347\233\221\346\216\247.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244\346\246\202\350\247\210.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244\346\246\202\350\247\210.png" new file mode 100644 index 0000000000000000000000000000000000000000..351614ce5254748c4d233a2189f3f4a71d2f546a Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244\346\246\202\350\247\210.png" differ diff --git 
"a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244\347\211\251\347\220\206\350\265\204\346\272\220\347\233\221\346\216\247.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244\347\211\251\347\220\206\350\265\204\346\272\220\347\233\221\346\216\247.png" new file mode 100644 index 0000000000000000000000000000000000000000..5256783e5f907232f3cbd3655b8ee81c03c0ffdd Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244\347\211\251\347\220\206\350\265\204\346\272\220\347\233\221\346\216\247.png" differ diff --git "a/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244\347\212\266\346\200\201-\346\246\202\350\247\210.png" "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244\347\212\266\346\200\201-\346\246\202\350\247\210.png" new file mode 100644 index 0000000000000000000000000000000000000000..751887e223d0a323feee7baef3202ed23e6ce5de Binary files /dev/null and "b/docs/en/Tools/Cloud/CPDS/images/cpds-page/\351\233\206\347\276\244\347\212\266\346\200\201-\346\246\202\350\247\210.png" differ diff --git a/docs/en/Tools/Cloud/CPDS/installation-and-deployment.md b/docs/en/Tools/Cloud/CPDS/installation-and-deployment.md new file mode 100644 index 0000000000000000000000000000000000000000..009a7b5cddf181e0ac93274a204baf8ecaba8658 --- /dev/null +++ b/docs/en/Tools/Cloud/CPDS/installation-and-deployment.md @@ -0,0 +1,258 @@ +# Installation and Deployment + +This chapter provides a step-by-step guide to installing and deploying CPDS. + +## Installing CPDS + +This section covers the steps to install CPDS components. + +1. Install cpds-agent. + + > cpds-agent gathers raw data from nodes and can be installed independently on multiple nodes. + + ```shell + yum install cpds-agent + ``` + +2. Install cpds-detector. + + ```shell + yum install cpds-detector + ``` + +3. Install cpds-analyzer. + + ```shell + yum install cpds-analyzer + ``` + +4. Install cpds-dashboard. 
+ + ```shell + yum install cpds-dashboard + ``` + +5. Install Cpds. + + ```shell + yum install Cpds + ``` + +# Deployment of CPDS + +This section explains the configuration and deployment of CPDS. + +## Configuration Overview + +### cpds-agent Configuration + +cpds-agent collects node network information by sending ICMP packets to a specified IP address. The `net_diagnostic_dest` field must specify a reachable IP address, not the local node IP address. You are advised to set the master node IP address on worker nodes and any worker node IP address on the master. + +```bash +vim /etc/cpds/agent/config.json +``` + +```json +{ + "expose_port":"20001", # Port to listen on + "log_cfg_file": "/etc/cpds/agent/log.conf", + "net_diagnostic_dest": "192.30.25.18" # Destination IP address for ICMP packets +} +``` + +### Prometheus Configuration + +CPDS uses Prometheus to collect raw data generated by cpds-agent. cpds-agent opens port 20001 by default. Edit the Prometheus configuration file to connect to cpds-agent for data collection. 
+ +```bash +vim /etc/prometheus/prometheus.yml +``` + +```yaml +global: + scrape_interval: 2s + evaluation_interval: 3s +scrape_configs: + - job_name: "cpds" + static_configs: + - targets: ["cpds-agent1:port","cpds-agent2:port","..."] # IP addresses and ports of deployed cpds-agent instances +``` + +### cpds-detector Configuration + +```bash +vim /etc/cpds/detector/config.yml +``` + +```yaml +generic: + bindAddress: "127.0.0.1" # Address to listen on + port: 19091 # Port to listen on + +database: + host: "127.0.0.1" # Database IP address + port: 3306 # Database port + username: root # Database username + password: root # Database password + maxOpenConnections: 123 # Maximum number of connections + +prometheus: + host: "127.0.0.1" # Prometheus IP address + port: 9090 # Prometheus port + +log: + fileName: "/var/log/cpds/cpds-detector/cpds-detector.log" + level: "warn" + maxAge: 15 + maxBackups: 100 + maxSize: 100 + localTime: true + compress: true +``` + +### cpds-analyzer Configuration + +```bash +vim /etc/cpds/analyzer/config.yml +``` + +```yaml +generic: + bindAddress: "127.0.0.1" # Address to listen on + port: 19091 # Port to listen on + +database: + host: "127.0.0.1" # Database IP address + port: 3306 # Database port + username: root # Database username + password: root # Database password + maxOpenConnections: 123 # Maximum number of connections + +detector: + host: "127.0.0.1" # Detector IP address + port: 19092 # Detector port + +log: + fileName: "/var/log/cpds/cpds-analyzer/cpds-analyzer.log" + level: "warn" + maxAge: 15 + maxBackups: 100 + maxSize: 100 + localTime: true +``` + +### cpds-dashboard Configuration + +```bash +vim /etc/nginx/conf.d/cpds-ui.conf +``` + +```conf +server { + listen 10119; + + location / { + root /etc/cpds/cpds-ui/; + index index.html index.htm; + } + + location /api/ { + proxy_pass http://127.0.0.1:19091; # Backend analyzer IP address and port + } + + location /websocket/ { + proxy_pass http://127.0.0.1:19091; # Backend analyzer IP 
address and port + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-Proto http; + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "Upgrade"; + } +} +``` + +# Starting CPDS + +This section outlines the steps to start CPDS. + +## Disabling the Firewall + +```shell +systemctl stop firewalld +systemctl disable firewalld +``` + +Set the SELINUX status to `disabled` in the **/etc/selinux/config** file. + +```conf +SELINUX=disabled +``` + +Restart the system to apply the changes. + +## Initializing the Database + +1. Start the database service. + + ```shell + systemctl start mariadb.service + systemctl enable mariadb.service + ``` + +2. Initialize the database with root privileges. + + ```shell + /usr/bin/mysql_secure_installation + ``` + + > During the process, you will be prompted to enter the **root** user password for the database. If no password is set, press **Enter** and follow the prompts to configure the settings. + +3. Configure database connection permissions. + + ```shell + mysql -u root -p + ``` + + Enter the password set in the previous step when prompted. + + ```shell + GRANT ALL PRIVILEGES ON *.* TO 'username'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION; + ``` + + > Replace `username` with the database username and `password` with the corresponding password. + + For example: + + ```shell + mysql -u root -p + Enter password: + Welcome to the MariaDB monitor. Commands end with ; or \g. + Your MariaDB connection id is 5 + Server version: 10.5.16-MariaDB MariaDB Server + + Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. + + Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 
+ + MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION; + Query OK, 0 rows affected (0.014 sec) + ``` + +## Starting the Service + +```shell +systemctl start Cpds.service +systemctl enable Cpds.service +``` + +Start cpds-agent on all nodes. + +```shell +systemctl start cpds-agent +systemctl enable cpds-agent +``` + +## Accessing the Frontend Management Platform + +Once the services are running, open a browser and navigate to **** to access the frontend management platform. diff --git a/docs/en/Tools/Cloud/CPDS/usage-instructions.md b/docs/en/Tools/Cloud/CPDS/usage-instructions.md new file mode 100644 index 0000000000000000000000000000000000000000..9275b25015c0cf5a28bb462a0a790b483e240e34 --- /dev/null +++ b/docs/en/Tools/Cloud/CPDS/usage-instructions.md @@ -0,0 +1,261 @@ +# Usage Instructions + + + +- [Introduction](#introduction) +- [Pages and Features](#pages-and-features) + - [Page Layout](#page-layout) + - [Overview](#overview) + - [Monitoring and Alarms](#monitoring-and-alarms) + - [Cluster Status](#cluster-status) + - [Cluster Status: Overview](#cluster-status-overview) + - [Cluster Status: Physical Resource Monitoring](#cluster-status-physical-resource-monitoring) + - [Cluster Status: Container Health Monitoring](#cluster-status-container-health-monitoring) + - [Node Health](#node-health) + - [Node Health: Overview](#node-health-overview) + - [Node Health: Physical Resource Monitoring](#node-health-physical-resource-monitoring) + - [Node Health: Container Health Monitoring](#node-health-container-health-monitoring) + - [Health Diagnosis](#health-diagnosis) + - [Diagnosis Results](#diagnosis-results) + - [Raw Data Retrieval](#raw-data-retrieval) + - [Raw Data Charts](#raw-data-charts) + - [Rule Management](#rule-management) + - [Viewing Rules](#viewing-rules) + - [Adding Rules](#adding-rules) + + + +## Introduction + +CPDS (Container Problem Detect System) is a container cluster fault detection system designed and developed by Beijing Linx Software Corp. It monitors and identifies top container faults and sub-health conditions in a cluster. + +CPDS consists of four submodules: + +1. Information collection component cpds-agent: collects the raw data required by cpds-detector (the anomaly detection component). +2. Anomaly detection component cpds-detector: analyzes the raw data of each cluster node against the anomaly rules delivered by cpds-analyzer (the fault/sub-health diagnosis component) to detect whether node anomalies exist. +3. Fault/sub-health diagnosis component cpds-analyzer: processes the anomaly data collected by cpds-detector according to the diagnosis rules delivered by cpds-dashboard (the user interaction component) to determine whether cluster nodes are in a container fault or sub-health state. +4. 
User interaction component cpds-dashboard: obtains diagnosis result data from cpds-analyzer (the fault/sub-health diagnosis component) and visualizes it for real-time and offline viewing, helping container cluster O&M personnel analyze results and formulate and deliver policies. + +## Pages and Features + +### Page Layout + +The CPDS page layout consists of a navigation bar, a navigation menu, and an operation area. + +> The page layout is shown in the following figure. + +![Page layout](./images/cpds-page/布局.png) + +| No. | Name | Description | +| ---- | ---- | ---- | +| 1 | Navigation menu | Contains all CPDS functions. After a menu item is selected, the operation area on the right displays the corresponding page. | +| 2 | Navigation bar | Indicates the position of the current page in the navigation tree. | +| 3 | Operation area | Displays information about the current operation and provides operation functions. | + +### Overview + +The overview page displays status information of the entire cluster, including the container health status, cluster node status, cluster resource usage, node monitoring status, and diagnosis results, as shown in the following figure: + +![Cluster overview](./images/cpds-page/集群概览.png) + +| Name | Description | +| ---- | ---- | +| Container health status | Displays the percentage of running containers among all containers in the cluster, along with the numbers of all, running, and stopped containers. | +| Cluster node status | Displays the percentage of online nodes among all nodes, along with the numbers of all, online, and offline nodes. | +| Cluster resource usage | Displays the used amount, total amount, and usage percentage of the cluster CPU, memory, and disk resources. | +| Node monitoring status | Displays the IP address, status, and proportion of running containers of each cluster node. Clicking **View more** below jumps to "Monitoring and Alarms > Node Health", where more detailed node information is available. | +| Diagnosis results | Displays the name of each triggered rule, its current status, the time the rule was first triggered, and the latest trigger time. Clicking **View more** below jumps to "Health Diagnosis > Diagnosis Results", where more detailed diagnosis results are available. | + +### Monitoring and Alarms + +Monitoring and alarms monitor the physical resources and container status of the cluster and its nodes. + +#### Cluster Status + +Displays the online status of cluster hosts and provides physical resource monitoring and container health monitoring. + +##### Cluster Status: Overview + +View cluster and node information. Cluster information includes the container health status, cluster node status, and cluster resource usage. To view cluster information: + +1. 
Click "Monitoring and Alarms" > "Cluster Status" in the navigation menu on the left and select the "Overview" tab to open the overview page, as shown in the following figure: + ![Cluster status overview](./images/cpds-page/集群状态-概览.png) + + | Name | Description | + | ---- | ---- | + | Container health status | Displays the percentage of running containers among all containers in the cluster, along with the numbers of all, running, and stopped containers. | + | Cluster node status | Displays the percentage of online nodes among all nodes, along with the numbers of all, online, and offline nodes. | + | Resource usage | Displays the used and total amounts of the cluster CPU, memory, and disk resources. | + | Node monitoring status | See [Node Health](#node-health). | + +##### Cluster Status: Physical Resource Monitoring + +Click "Monitoring and Alarms" > "Cluster Status" in the navigation menu on the left and select the "Physical Resource Monitoring" tab. The page content is shown in the following figure. +![Cluster physical resource monitoring](./images/cpds-page/集群物理资源监控.png) + +> Click the time range button to select the time range of the queried data, as shown in the following figure. +> ![Time range selection](./images/cpds-page/时间范围选择.png) + +The physical resource monitoring items are described below. + +| Name | Description | +| ---- | ---- | +| Total cluster CPU usage | Percentage of CPU used in the cluster | +| Total cluster memory usage | Percentage of memory used in the cluster | +| Total cluster disk usage | Percentage of disk space used in the cluster | +| Cluster iowait | Time that cluster CPUs spend idle while waiting for I/O operations to complete | +| Network IOPS | Total number of packets received and sent per second by cluster NICs | +| Network speed | Amount of data received and sent per second by cluster NICs | +| Network packet loss rate | Percentage of packets dropped by cluster NICs per unit time | +| Network error rate | Percentage of erroneous packets on cluster NICs per unit time | +| Network retransmission rate | Percentage of packets retransmitted in the cluster per unit time | +| Total cluster disk throughput | Amount of data read and written per second by cluster disks | +| Disk IOPS | Number of read/write operations completed per second by cluster disks | + +##### Cluster Status: Container Health Monitoring + +Click "Monitoring and Alarms" > "Cluster Status" in the navigation menu on the left and select the "Container Health Monitoring" tab. The page displays cluster container health monitoring information, as shown in the following figure: +![Cluster container health monitoring](./images/cpds-page/集群-容器健康监控.png) + +The container health monitoring items are described below. + +| Name | Description | +| ---- | ---- | +| Container CPU usage | Container CPU usage as a percentage of the total cluster CPU | +| Container disk usage | Container disk usage as a percentage of the total cluster disk space | +| Container traffic | Amount of data received and sent per second by container NICs | +| Container memory usage | Container memory usage as a percentage of the total cluster memory | + +#### Node Health + +Displays the online status and architecture information of each node host and provides physical resource monitoring and container health monitoring. +> The node health main page is shown in the following figure: + +![Node health](./images/cpds-page/节点健康.png) + +##### Node Health: Overview + +Click "Monitoring and Alarms" > "Node Health" in the navigation menu on the left, then click the IP address of a node in the table to open the node overview page. +> The node overview page is shown in the following figure: + +![Node overview](./images/cpds-page/节点概览.png) + +> Click the three components shown in the following figure to refresh or switch the data displayed in the curve charts. + +![Node overview buttons](./images/cpds-page/节点概览-按钮.png) + +##### Node Health: Physical Resource Monitoring + +Click "Monitoring and Alarms" > "Node Health" in the navigation menu on the left, click the IP address of a node in the table to open the node overview page, and select the "Physical Resource Monitoring" tab. The page content is shown in the following figure: +![Node physical resource monitoring](./images/cpds-page/节点物理资源监控.png) + +The physical resource monitoring items are described below. + +| Name | Description | +| ---- | ---- | +| Node CPU usage | Percentage of CPU used on the node | +| Node memory usage | Percentage of memory used on the node | +| Node disk usage | Percentage of disk space used on the node | +| Node iowait | Time that the node CPU spends idle while waiting for I/O operations to complete | +| Node network IOPS | Total number of packets received and sent per second by the node NIC | +| Node network speed | Amount of data received and sent per second by the node NIC | +| Node network packet loss rate | Percentage of packets dropped by the node NIC per unit time | 
+| Node network error rate | Percentage of erroneous packets on the node NIC per unit time | +| Node network retransmission rate | Percentage of packets retransmitted on the node per unit time | +| Node disk throughput | Amount of data read and written per second by the node disk | +| Node disk IOPS | Number of read/write operations completed per second by the node disk | + +##### Node Health: Container Health Monitoring + +Click "Monitoring and Alarms" > "Node Health" in the navigation menu on the left, click the IP address of a node in the table to open the node overview page, and select the "Container Health Monitoring" tab. The page is shown in the following figure: +![Node container health monitoring](./images/cpds-page/节点容器健康监控.png) + +> This page can be sorted by container status or container name. + +The node container health monitoring data is described in the following table: + +| Name | Description | +| ---- | ---- | +| Container name | Full ID of the container | +| Container status | Running status of the container: Running, Created, Stopped, or Paused | +| CPU usage | Container CPU usage | +| Memory usage | Container memory usage | +| Outbound traffic | Amount of data sent by the container NIC | +| Inbound traffic | Amount of data received by the container NIC | + +### Health Diagnosis + +Uses fault/sub-health detection rules to compute and analyze the raw data of each node, produces diagnosis results, and provides a diagnosis result list. The raw data used in a diagnosis can be viewed, filtered by time to show its values in different periods, and displayed in charts that reveal how it changes. + +#### Diagnosis Results + +The rules in the rule list are evaluated, and the rules whose conditions are met are added to the diagnosis result list. For rule details, see [Rule Management](#rule-management). +To view the diagnosis result list: + +1. Click "Health Diagnosis" > "Diagnosis Results" in the navigation menu on the left to open the diagnosis result page, as shown in the following figure: + ![Diagnosis results](./images/cpds-page/诊断结果.png) + +2. Enter a rule name in the upper left corner to filter the diagnosis results. + +3. Click "View raw data" to view how the raw data changed over the last 10 minutes, as shown in the following figure: + ![View raw data](./images/cpds-page/查看原始数据.png) + +4. Click "Delete" to delete the corresponding diagnosis result. + +#### Raw Data Retrieval + +The raw data used in a diagnosis can be viewed, filtered by time to show its values in different periods, and displayed in charts that reveal how it changes. The page layout is shown in the following figure: + ![Raw data retrieval](./images/cpds-page/原始数据检索.png) + +The functions are described in the following table: + +| Name | Description | +| ---- | ---- | +| Raw data query | Queries raw data using expressions. A time picker can be set to filter by time and view how the raw data changed over a period. | +| Query records | After raw data is successfully queried with an expression, the query is recorded in the table. When more than 10 records with different expressions exist, the earliest record is deleted. A record with the same expression is overwritten. | + +##### Raw Data Charts + +The raw data charts are shown in the following figure: + ![Raw data charts](./images/cpds-page/原始数据图表.png) + +The chart information is described in the following table: + +| Name | Description | +| ---- | ---- | +| Monitoring metric | Displays the queried expression | +| Raw data curve chart | Shows how the query results of the expression change over a period | +| Raw data table | Shows the current time and the fields and values of the query results | + +### Rule Management + +#### Viewing Rules + +Supports viewing, creating, editing, and deleting fault/sub-health detection rules. The rule list includes the rule name, expression, alarm level, and sub-health and fault comparison rule values. To view rules: + +1. Click "Rule Management" > "View Rules" in the navigation menu on the left to open the rule list page, as shown in the following figure: + ![View rules](./images/cpds-page/查看规则.png) + +2. Enter a rule name in the upper left corner and click Search to filter the rules. +3. Click "Delete" to delete the corresponding rule. + +#### Adding Rules + +Click "Rule Management" > "View Rules" in the navigation menu on the left to open the rule list page, then click "Add Rule" or "Edit", as shown in the following figure: + ![Add rule](./images/cpds-page/添加规则.png) + +Adding or editing a rule is subject to the following restrictions: + +1. The rule name can contain only digits, letters, and underscores. +2. 
The expression must follow PromQL syntax. For details, see the [Prometheus official documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/). +3. The sub-health threshold must be a number. +4. The fault threshold must be a number. + +> A threshold can be entered only after the corresponding comparison condition is selected. +> After a comparison condition is selected, the corresponding threshold must be filled in. +> At least one of the sub-health and fault comparison conditions must be selected; both can also be selected. + +## Notes + +1. For the default rules named node_etcd_service, node_kube_apiserver, node_kube_controller_manager, node_kube_proxy, and node_kube_scheduler, replace the IP addresses in the rule expressions with the actual IP addresses. +2. The current CPDS version supports fault detection only for the docker container runtime. diff --git a/docs/en/Tools/Cloud/CTinspector/Menu/index.md b/docs/en/Tools/Cloud/CTinspector/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..cf80293a7e0c0c8e0c78c9cea10e040e1346497f --- /dev/null +++ b/docs/en/Tools/Cloud/CTinspector/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [CTinspector Introduction]({{< relref "./ctinspector-introduction.md" >}}) + - [Installation and Deployment]({{< relref "./installation-and-deployment.md" >}}) + - [Usage Instructions]({{< relref "./usage-instructions.md" >}}) diff --git a/docs/en/docs/CTinspector/CTinspector-introduction.md b/docs/en/Tools/Cloud/CTinspector/ctinspector-introduction.md similarity index 100% rename from docs/en/docs/CTinspector/CTinspector-introduction.md rename to docs/en/Tools/Cloud/CTinspector/ctinspector-introduction.md diff --git a/docs/en/docs/CTinspector/figures/CT-package-vm.png b/docs/en/Tools/Cloud/CTinspector/figures/CT-package-vm.png similarity index 100% rename from docs/en/docs/CTinspector/figures/CT-package-vm.png rename to docs/en/Tools/Cloud/CTinspector/figures/CT-package-vm.png diff --git a/docs/en/docs/CTinspector/figures/CTinspector-arch.png b/docs/en/Tools/Cloud/CTinspector/figures/CTinspector-arch.png similarity index 100% rename from docs/en/docs/CTinspector/figures/CTinspector-arch.png rename to docs/en/Tools/Cloud/CTinspector/figures/CTinspector-arch.png diff --git a/docs/en/docs/CTinspector/figures/migrate_node_1.png b/docs/en/Tools/Cloud/CTinspector/figures/migrate_node_1.png similarity index 100% rename from 
docs/en/docs/CTinspector/figures/migrate_node_1.png rename to docs/en/Tools/Cloud/CTinspector/figures/migrate_node_1.png diff --git a/docs/en/docs/CTinspector/figures/migrate_node_2.png b/docs/en/Tools/Cloud/CTinspector/figures/migrate_node_2.png similarity index 100% rename from docs/en/docs/CTinspector/figures/migrate_node_2.png rename to docs/en/Tools/Cloud/CTinspector/figures/migrate_node_2.png diff --git a/docs/en/docs/CTinspector/installation-and-deployment.md b/docs/en/Tools/Cloud/CTinspector/installation-and-deployment.md similarity index 88% rename from docs/en/docs/CTinspector/installation-and-deployment.md rename to docs/en/Tools/Cloud/CTinspector/installation-and-deployment.md index 6c03eb21f344939f47667c68389ef3f4e02429fd..b74c40e744744c780aec8012ff8c1b8452625bd3 100644 --- a/docs/en/docs/CTinspector/installation-and-deployment.md +++ b/docs/en/Tools/Cloud/CTinspector/installation-and-deployment.md @@ -10,7 +10,7 @@ ## Environment Preparation -* Install openEuler by referring to [Installation Guide](../Installation/installation.md). +* Install openEuler by referring to [Installation Guide](../../../Server/InstallationUpgrade/Installation/installation.md). * CTinspector installation requires **root** permissions. @@ -27,6 +27,7 @@ yum install ctinspector ```shell rpm -q ctinspector ``` + * Check whether the core dynamic library **libebpf_vm_executor.so** or main program **vm_test** is installed. 
```shell diff --git a/docs/en/docs/CTinspector/usage.md b/docs/en/Tools/Cloud/CTinspector/usage-instructions.md similarity index 92% rename from docs/en/docs/CTinspector/usage.md rename to docs/en/Tools/Cloud/CTinspector/usage-instructions.md index b4487f99ce8e7f99be9269c2bb8e5ff5c0a69c93..528f108b760517eee5ac60c85a88ab085a6a8558 100644 --- a/docs/en/docs/CTinspector/usage.md +++ b/docs/en/Tools/Cloud/CTinspector/usage-instructions.md @@ -12,6 +12,7 @@ rdma link add rxe_0 type rxe netdev ens33 ``` ## Application Development + Use relevant APIs to develop a scenario-specific application. Build the application as a binary ELF file based on the eBPF instruction set. Take **vm_migrate** of the provided **ebpf_example** for example. **vm_migrate** calls the CTinspector framework and can migrate package VMs between nodes in a resumable manner. ```text @@ -36,8 +37,9 @@ clang -O2 -fno-inline -emit-llvm -I/usr/include/ctinspector/ -c migrate.c -o - | ``` ## Application Running -Running **vm_migrate** on node 1. + +Running **vm_migrate** on node 1. ![](./figures/migrate_node_1.png) -Running the CTinspector main program on node 2. +Running the CTinspector main program on node 2. 
![](./figures/migrate_node_2.png) diff --git a/docs/en/Tools/Cloud/Menu/index.md b/docs/en/Tools/Cloud/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..95108a734ac0461fe1727e3dd4676656af4a62ab --- /dev/null +++ b/docs/en/Tools/Cloud/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [CTinspector User Guide]({{< relref "./CTinspector/Menu/index.md" >}}) +- [CPDS User Guide]({{< relref "./CPDS/Menu/index.md" >}}) +- [PilotGo User Guide]({{< relref "./PilotGo/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/Cloud/PilotGo/Menu/index.md b/docs/en/Tools/Cloud/PilotGo/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..b403912dc9c1f6170783bca5f40c376b7853db15 --- /dev/null +++ b/docs/en/Tools/Cloud/PilotGo/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [PilotGo User Guide]({{< relref "./pilotgo-introduction.md" >}}) + - [Usage Instructions]({{< relref "./usage-instructions.md" >}}) \ No newline at end of file diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2661.png" "b/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2661.png" new file mode 100644 index 0000000000000000000000000000000000000000..5c7eaa4cde3364c70ca6bff24c768edad986a59c Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2661.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2662.png" "b/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2662.png" new file mode 100644 index 0000000000000000000000000000000000000000..45437297fb46749b9f840f45e38cc3e5c4d0d595 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2662.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2663.png" "b/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2663.png" new file mode 100644 index 
0000000000000000000000000000000000000000..d120fdc034f2c588c222837e8316a33cda339e22 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2663.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2664.png" "b/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2664.png" new file mode 100644 index 0000000000000000000000000000000000000000..1e2ed031ac525d8a69c98c9f143b3edece72be77 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/G\346\217\222\344\273\2664.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2661.png" "b/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2661.png" new file mode 100644 index 0000000000000000000000000000000000000000..f4a923729e62fb321931342ec56238b568dbf16e Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2661.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2662.png" "b/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2662.png" new file mode 100644 index 0000000000000000000000000000000000000000..d54a04a42afa0f0ae7d37fb2eef88943e4b402f5 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2662.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2663.png" "b/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2663.png" new file mode 100644 index 0000000000000000000000000000000000000000..a85aad4547a6dc8b6d55d50524c69c92668e54a6 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2663.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2664.png" "b/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2664.png" new file mode 100644 index 0000000000000000000000000000000000000000..c56bcc5248a53f9d5daeadaddb998d69ef154c4e Binary files /dev/null and 
"b/docs/en/Tools/Cloud/PilotGo/figures/P\346\217\222\344\273\2664.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\345\257\206\347\240\2011.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\345\257\206\347\240\2011.png" new file mode 100644 index 0000000000000000000000000000000000000000..a51096f17e336fc0917bce7be08ff69ec2604562 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\345\257\206\347\240\2011.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\345\257\206\347\240\2012.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\345\257\206\347\240\2012.png" new file mode 100644 index 0000000000000000000000000000000000000000..f26d9ddf85da2d5955ce8f9d338fd1bb036b1132 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\345\257\206\347\240\2012.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\345\257\206\347\240\2013.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\345\257\206\347\240\2013.png" new file mode 100644 index 0000000000000000000000000000000000000000..b3ffd4507aab3a85b3ab8e775bc1ab4c1efcfda3 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\345\257\206\347\240\2013.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\350\212\202\347\202\2711.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\350\212\202\347\202\2711.png" new file mode 100644 index 0000000000000000000000000000000000000000..4a127fafef22d62f326e38075173f53f244acfa7 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\350\212\202\347\202\2711.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\350\212\202\347\202\2712.png" 
"b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\350\212\202\347\202\2712.png" new file mode 100644 index 0000000000000000000000000000000000000000..8a097306b1dbf7ce5c6cb14e9c84ff7f59079dfb Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\350\212\202\347\202\2712.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\350\212\202\347\202\2713.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\350\212\202\347\202\2713.png" new file mode 100644 index 0000000000000000000000000000000000000000..1e517062c17505a2ec0905863934e5e0a5e47c36 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\344\277\256\346\224\271\350\212\202\347\202\2713.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2411.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2411.png" new file mode 100644 index 0000000000000000000000000000000000000000..ee14b990e8ab6cf0c71bef1a40cb74cd2919e2fc Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2411.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2412.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2412.png" new file mode 100644 index 0000000000000000000000000000000000000000..1f5a1658552227a88cf07f592e048c4bc1005286 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2412.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2413.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2413.png" new file mode 100644 index 0000000000000000000000000000000000000000..4066752952e177ca2bb14b61a86d44ff1efc11f6 Binary files /dev/null and 
"b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2413.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2414.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2414.png" new file mode 100644 index 0000000000000000000000000000000000000000..ade3fb143ac6a0186985b63c5505afef9666e57e Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\211\271\346\254\2414.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\226\207\344\273\2661.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\226\207\344\273\2661.png" new file mode 100644 index 0000000000000000000000000000000000000000..74889505efa10bf45d699d9c8ec19c81cd63ef4f Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\226\207\344\273\2661.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\226\207\344\273\2662.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\226\207\344\273\2662.png" new file mode 100644 index 0000000000000000000000000000000000000000..0a0f563aa9efd21a789058b76dc88e5e0208a996 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\226\207\344\273\2662.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\226\207\344\273\2663.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\226\207\344\273\2663.png" new file mode 100644 index 0000000000000000000000000000000000000000..e7dfcf189d030a4bffa1ce92885e27e3fab7ecde Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\233\345\273\272\346\226\207\344\273\2663.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\211\271\346\254\2411.png" 
"b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\211\271\346\254\2411.png" new file mode 100644 index 0000000000000000000000000000000000000000..e360587420e42233933a9bb27ad31a62557374f0 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\211\271\346\254\2411.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\211\271\346\254\2412.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\211\271\346\254\2412.png" new file mode 100644 index 0000000000000000000000000000000000000000..0efb93e8dd16f855b444d6a5891be38fdebe92c7 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\211\271\346\254\2412.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\211\271\346\254\2413.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\211\271\346\254\2413.png" new file mode 100644 index 0000000000000000000000000000000000000000..2263d7c359bc58451f9382693b98c15cae4fb273 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\211\271\346\254\2413.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\234\272\345\231\2501.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\234\272\345\231\2501.png" new file mode 100644 index 0000000000000000000000000000000000000000..74c10a8dee0fb08e4ac39d73c3389b9a2262c143 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\234\272\345\231\2501.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\234\272\345\231\2502.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\234\272\345\231\2502.png" new file mode 100644 index 0000000000000000000000000000000000000000..d4e467dd0b6fbd9d13a928deebfa8cca1a515c61 Binary files /dev/null and 
"b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\234\272\345\231\2502.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\234\272\345\231\2503.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\234\272\345\231\2503.png" new file mode 100644 index 0000000000000000000000000000000000000000..1bb38a09498d5a0d8c96aef1ce7b39f8bbb43207 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\346\234\272\345\231\2503.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\347\224\250\346\210\2671.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\347\224\250\346\210\2671.png" new file mode 100644 index 0000000000000000000000000000000000000000..c0599cd9d3679c2c16debcbf46b85b1328130104 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\347\224\250\346\210\2671.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\347\224\250\346\210\2672.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\347\224\250\346\210\2672.png" new file mode 100644 index 0000000000000000000000000000000000000000..96a3636ed380608616fccb672017ef363108d529 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\347\224\250\346\210\2672.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\212\202\347\202\2712.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\212\202\347\202\2712.png" new file mode 100644 index 0000000000000000000000000000000000000000..e739b14f7b60794065a9ec8a9b2478b2f0b37dd0 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\212\202\347\202\2712.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\212\202\347\202\2713.png" 
"b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\212\202\347\202\2713.png" new file mode 100644 index 0000000000000000000000000000000000000000..d8c8967d525a68515a7ce651f7d30169654bd784 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\212\202\347\202\2713.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\247\222\350\211\2621.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\247\222\350\211\2621.png" new file mode 100644 index 0000000000000000000000000000000000000000..cf3d51f7ab12f241f8a93223631406d0c1b99ab4 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\247\222\350\211\2621.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\247\222\350\211\2622.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\247\222\350\211\2622.png" new file mode 100644 index 0000000000000000000000000000000000000000..b41055b466720578ca9282ff31589b6e147e8ada Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\247\222\350\211\2622.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\247\222\350\211\2623.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\247\222\350\211\2623.png" new file mode 100644 index 0000000000000000000000000000000000000000..661ed75def31a49cbf6043493c1805d65c83a83b Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\210\240\351\231\244\350\247\222\350\211\2623.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\212\237\350\203\275\346\250\241\345\235\227.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\212\237\350\203\275\346\250\241\345\235\227.png" new file mode 100644 index 0000000000000000000000000000000000000000..86782bfc46f42a051b56f457cd46fad60cad3332 Binary files /dev/null and 
"b/docs/en/Tools/Cloud/PilotGo/figures/\345\212\237\350\203\275\346\250\241\345\235\227.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\217\230\346\233\264\351\203\250\351\227\2501.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\217\230\346\233\264\351\203\250\351\227\2501.png" new file mode 100644 index 0000000000000000000000000000000000000000..23c2d754679c0a374d89c26596669e9bbbebf2f6 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\217\230\346\233\264\351\203\250\351\227\2501.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\345\217\230\346\233\264\351\203\250\351\227\2502.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\345\217\230\346\233\264\351\203\250\351\227\2502.png" new file mode 100644 index 0000000000000000000000000000000000000000..0efb1384611e7f5b4cb1370e626a238908567dbb Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\345\217\230\346\233\264\351\203\250\351\227\2502.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\344\270\213\345\217\2211.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\344\270\213\345\217\2211.png" new file mode 100644 index 0000000000000000000000000000000000000000..387df3d4cd301fe677e663c6a919abf093efba87 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\344\270\213\345\217\2211.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\344\270\213\345\217\2212.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\344\270\213\345\217\2212.png" new file mode 100644 index 0000000000000000000000000000000000000000..ca5e64cbf7d0aeabcececacea125585484e873ca Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\344\270\213\345\217\2212.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\345\215\270\350\275\2751.png" 
"b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\345\215\270\350\275\2751.png" new file mode 100644 index 0000000000000000000000000000000000000000..4bc4ca6f620619fe10a81205a939535f83e772c2 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\345\215\270\350\275\2751.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\345\215\270\350\275\2752.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\345\215\270\350\275\2752.png" new file mode 100644 index 0000000000000000000000000000000000000000..68467232ca5bd65a03eccc4fc3fb8a5e95529ddf Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\345\215\270\350\275\2752.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\346\223\215\344\275\2341.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\346\223\215\344\275\2341.png" new file mode 100644 index 0000000000000000000000000000000000000000..5cee721e3c0ce14f666a85cd3acb27b57684f077 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\211\271\351\207\217\346\223\215\344\275\2341.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\344\270\213\345\217\2211.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\344\270\213\345\217\2211.png" new file mode 100644 index 0000000000000000000000000000000000000000..d5d54a3679b9a183dbc8eddacf881a8c30c0967b Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\344\270\213\345\217\2211.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\344\270\213\345\217\2212.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\344\270\213\345\217\2212.png" new file mode 100644 index 0000000000000000000000000000000000000000..d639180465474d529758cb83e98b8bd44c409e47 Binary files /dev/null and 
"b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\344\270\213\345\217\2212.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\344\270\213\345\217\2213.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\344\270\213\345\217\2213.png" new file mode 100644 index 0000000000000000000000000000000000000000..87082b54be5d405f859bd17b558b061e08565f0c Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\344\270\213\345\217\2213.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\210\240\351\231\2443.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\210\240\351\231\2443.png" new file mode 100644 index 0000000000000000000000000000000000000000..a8f2dd996fb826ace2a657ae330aaf36d7c1b884 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\210\240\351\231\2443.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\216\206\345\217\262\347\211\210\346\234\254.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\216\206\345\217\262\347\211\210\346\234\254.png" new file mode 100644 index 0000000000000000000000000000000000000000..74f5e745836607702d69f97939b8629446dc0d71 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\216\206\345\217\262\347\211\210\346\234\254.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2321.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2321.png" new file mode 100644 index 0000000000000000000000000000000000000000..8a7e6dfd18608275d496de46cc157bdcfcc1ffa4 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2321.png" differ diff --git 
"a/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2322.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2322.png" new file mode 100644 index 0000000000000000000000000000000000000000..0ceef0dcacc27149d2feb2eff3a7902af1c13186 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2322.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2323.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2323.png" new file mode 100644 index 0000000000000000000000000000000000000000..69b4cda58e7962c11e40bdac7555afb9428941b2 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2323.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2324.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2324.png" new file mode 100644 index 0000000000000000000000000000000000000000..79281449c580ef3059dc30329416e6fb564fb5ae Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\226\207\344\273\266\345\233\236\346\273\2324.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\345\206\205\346\240\270\344\277\256\346\224\2711.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\345\206\205\346\240\270\344\277\256\346\224\2711.png" new file mode 100644 index 0000000000000000000000000000000000000000..ae23a49e9ef1d9c2be390a4715f83457c05dce69 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\345\206\205\346\240\270\344\277\256\346\224\2711.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\345\206\205\346\240\270\344\277\256\346\224\2712.png" 
"b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\345\206\205\346\240\270\344\277\256\346\224\2712.png" new file mode 100644 index 0000000000000000000000000000000000000000..344f95e052c876e312043099b36267e4e9544e5c Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\345\206\205\346\240\270\344\277\256\346\224\2712.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\345\206\205\346\240\270\344\277\256\346\224\2713.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\345\206\205\346\240\270\344\277\256\346\224\2713.png" new file mode 100644 index 0000000000000000000000000000000000000000..1f108d6f224f30a5973b4ddbe7e8d551d8e1f9c5 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\345\206\205\346\240\270\344\277\256\346\224\2713.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\346\234\215\345\212\241\345\201\234\346\255\242.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\346\234\215\345\212\241\345\201\234\346\255\242.png" new file mode 100644 index 0000000000000000000000000000000000000000..c482e8389f10bca2f1ad43545af169d6dd26b1a5 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\346\234\215\345\212\241\345\201\234\346\255\242.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\346\234\215\345\212\241\345\220\257\345\212\250.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\346\234\215\345\212\241\345\220\257\345\212\250.png" new file mode 100644 index 0000000000000000000000000000000000000000..3d8674a65895b1138ca2826b2496f17c81e5818b Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\346\234\215\345\212\241\345\220\257\345\212\250.png" differ diff --git 
"a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\346\234\215\345\212\241\351\207\215\345\220\257.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\346\234\215\345\212\241\351\207\215\345\220\257.png" new file mode 100644 index 0000000000000000000000000000000000000000..a77c72630b6ab284232f7584d4f688e243439960 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\346\234\215\345\212\241\351\207\215\345\220\257.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\347\273\210\347\253\257.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\347\273\210\347\253\257.png" new file mode 100644 index 0000000000000000000000000000000000000000..2a7e5cbb1366030517ceeacc8a1459a764ac98eb Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\347\273\210\347\253\257.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\347\273\210\347\253\2571.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\347\273\210\347\253\2571.png" new file mode 100644 index 0000000000000000000000000000000000000000..d3130734e2fb884c74209411dbb647d88e575a8f Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\347\273\210\347\253\2571.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\350\275\257\344\273\266\345\214\205\345\215\270\350\275\275.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\350\275\257\344\273\266\345\214\205\345\215\270\350\275\275.png" new file mode 100644 index 0000000000000000000000000000000000000000..cc74a97dcf92ca3eb57b8cf7b2319e73cf10c099 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\350\275\257\344\273\266\345\214\205\345\215\270\350\275\275.png" differ diff --git 
"a/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\350\275\257\344\273\266\345\214\205\345\256\211\350\243\2052.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\350\275\257\344\273\266\345\214\205\345\256\211\350\243\2052.png" new file mode 100644 index 0000000000000000000000000000000000000000..b24a22cbafc042b7d4cb234708a161a4b6910048 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\234\272\345\231\250\350\275\257\344\273\266\345\214\205\345\256\211\350\243\2052.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\347\224\250\346\210\2671.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\347\224\250\346\210\2671.png" new file mode 100644 index 0000000000000000000000000000000000000000..e5f5631e6ca19f8498fa2b030613b0a75d7168f1 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\347\224\250\346\210\2671.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\347\224\250\346\210\2672.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\347\224\250\346\210\2672.png" new file mode 100644 index 0000000000000000000000000000000000000000..017c47fdc9974c3a9ee5758c05512eb0b01a929c Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\347\224\250\346\210\2672.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\350\247\222\350\211\2621.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\350\247\222\350\211\2621.png" new file mode 100644 index 0000000000000000000000000000000000000000..a51db5c136e8d6baf61187d8882d4b02758cb056 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\350\247\222\350\211\2621.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\350\247\222\350\211\2622.png" 
"b/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\350\247\222\350\211\2622.png" new file mode 100644 index 0000000000000000000000000000000000000000..a352b27353c2513f55cad32d968b1095de96eb23 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\346\267\273\345\212\240\350\247\222\350\211\2622.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\205\2451.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\205\2451.png" new file mode 100644 index 0000000000000000000000000000000000000000..7b7c230d9942bd9fceaeb2fbb23b3e16255b2505 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\205\2451.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\205\2452.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\205\2452.png" new file mode 100644 index 0000000000000000000000000000000000000000..dad2779f6ddb6577a636fe8fb6050aeec69ee2ad Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\205\2452.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\205\2453.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\205\2453.png" new file mode 100644 index 0000000000000000000000000000000000000000..88d855f0e0f48d3da3523d59df9e2358fb49a92c Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\205\2453.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\207\2721.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\207\2721.png" new file mode 100644 index 0000000000000000000000000000000000000000..6198f25e96b6f782e042a1e1c36b0bef897ca064 Binary files /dev/null and 
"b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\207\2721.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\207\2722.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\207\2722.png" new file mode 100644 index 0000000000000000000000000000000000000000..c55645090a3475c117b2e5805b42bad57a90dfd0 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\224\250\346\210\267\345\257\274\345\207\2722.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\231\273\345\275\225.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\231\273\345\275\225.png" new file mode 100644 index 0000000000000000000000000000000000000000..6eb0106de32bd3d9da30d194035f129e3083791a Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\231\273\345\275\225.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\211\271\346\254\2411.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\211\271\346\254\2411.png" new file mode 100644 index 0000000000000000000000000000000000000000..068b66d65a0f63fabd9f4cd78b46aafbbd1eb8b7 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\211\271\346\254\2411.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\211\271\346\254\2413.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\211\271\346\254\2413.png" new file mode 100644 index 0000000000000000000000000000000000000000..a469a8798beecb882e5823132f442ee1eaf5cb21 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\211\271\346\254\2413.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\226\207\344\273\2661.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\226\207\344\273\2661.png" new file mode 100644 index 
0000000000000000000000000000000000000000..50b5f27cc9cecee17b7758683f61bf21544e8c3b Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\226\207\344\273\2661.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\226\207\344\273\2662.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\226\207\344\273\2662.png" new file mode 100644 index 0000000000000000000000000000000000000000..1362aac595643c19f924cf92098bf43abf75c78e Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\226\207\344\273\2662.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\226\207\344\273\2663.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\226\207\344\273\2663.png" new file mode 100644 index 0000000000000000000000000000000000000000..ffa2ed188539c7aa0f95cd6beb21d07c0ed6fc84 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\346\226\207\344\273\2663.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\347\224\250\346\210\2671.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\347\224\250\346\210\2671.png" new file mode 100644 index 0000000000000000000000000000000000000000..36cdb73c8cffc40e7e9d6831691183cdfb481649 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\347\224\250\346\210\2671.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\347\224\250\346\210\2672.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\347\224\250\346\210\2672.png" new file mode 100644 index 0000000000000000000000000000000000000000..7391fda93795f334f7674c98c811bf93919e99a0 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\347\224\250\346\210\2672.png" differ diff --git 
"a/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\350\247\222\350\211\2621.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\350\247\222\350\211\2621.png" new file mode 100644 index 0000000000000000000000000000000000000000..d752d16e201a493d71feee178f6a9ca4541df5ed Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\350\247\222\350\211\2621.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\350\247\222\350\211\2622.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\350\247\222\350\211\2622.png" new file mode 100644 index 0000000000000000000000000000000000000000..25c650b0393a73ba5b40f3409a760e420881dcfe Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\347\274\226\350\276\221\350\247\222\350\211\2622.png" differ diff --git "a/docs/en/Tools/Cloud/PilotGo/figures/\351\207\215\347\275\256\345\257\206\347\240\2011.png" "b/docs/en/Tools/Cloud/PilotGo/figures/\351\207\215\347\275\256\345\257\206\347\240\2011.png" new file mode 100644 index 0000000000000000000000000000000000000000..0f33a7a9476814caf942edb428b55a8aa31e3d91 Binary files /dev/null and "b/docs/en/Tools/Cloud/PilotGo/figures/\351\207\215\347\275\256\345\257\206\347\240\2011.png" differ diff --git a/docs/en/Tools/Cloud/PilotGo/pilotgo-introduction.md b/docs/en/Tools/Cloud/PilotGo/pilotgo-introduction.md new file mode 100644 index 0000000000000000000000000000000000000000..d1c8fd2b0b77879ac8762fadb08a2a9487151931 --- /dev/null +++ b/docs/en/Tools/Cloud/PilotGo/pilotgo-introduction.md @@ -0,0 +1,37 @@ +# PilotGo Introduction + +PilotGo is an O&M management platform natively incubated in the openEuler community. It adopts a plugin-based architecture: function modules are lightweight, can be combined freely, and evolve through independent iterations while the core functions remain stable. Plugins are also used to extend the platform capabilities and break down the barriers between different O&M components, enabling global status awareness and automated workflows. + +## Features + +The core function modules of PilotGo include: + +* User management: supports grouping by organizational structure and importing accounts from existing platforms for easy migration. + +* Permission management: supports flexible and reliable RBAC-based permission control. + +* Host management: visualizes host status on the front end and directly performs software package management, service management, and kernel parameter tuning in a simple, easy-to-operate way. + +* Batch management: supports stable and efficient concurrent execution of O&M operations. + +* Log auditing: tracks and records change operations made by users and plugins, facilitating problem tracing and security audits. + +* Alert management: provides real-time awareness of platform exceptions. + +* 
Plugin capability: supports extending platform functions; plugin coordination multiplies automation capabilities and reduces manual intervention. + +![Local path](./figures/功能模块.png) + +The current OS release also integrates the following plugins: + +* Prometheus: hosts the Prometheus monitoring component, automatically delivers and configures node-exporter monitoring data collection, and interconnects with the platform alert function. + +![Local path](./figures/P插件3.png) + +* Grafana: integrates the Grafana visualization platform and provides attractive, easy-to-use metric monitoring dashboards. + +![Local path](./figures/G插件4.png) + +## Application Scenarios + +PilotGo can be used in typical server cluster management scenarios and supports basic management and monitoring of large-scale server clusters. By integrating the corresponding service plugins, it can also manage service clusters on a unified platform, such as MySQL database clusters, Redis data cache clusters, and Nginx gateway clusters. diff --git a/docs/en/Tools/Cloud/PilotGo/usage-instructions.md b/docs/en/Tools/Cloud/PilotGo/usage-instructions.md new file mode 100644 index 0000000000000000000000000000000000000000..dc6369e3ceb6dd098fb670bc89541eba6a5feb0c --- /dev/null +++ b/docs/en/Tools/Cloud/PilotGo/usage-instructions.md @@ -0,0 +1,327 @@ +# PilotGo Platform User Manual + +PilotGo is an O&M management platform native to the openEuler community. It adopts plugin-based development to enhance platform extensibility and break down the barriers between different O&M components. The core functions of PilotGo include cluster management, batch management, host management, user management, permission management, host monitoring, and O&M auditing. + +## 1 PilotGo Installation and Configuration + +PilotGo can be deployed on a single machine or in cluster mode. Disable the firewall before installation. + +### 1.1 PilotGo-server Installation and Configuration + +Install MySQL. +Install Redis and set the Redis password by running the following commands: + +`dnf install redis6` + +`vim /etc/redis/redis.conf` + +`# Uncomment requirepass foobared and change foobared to your own password.` + +`bind 0.0.0.0` + +Start the MySQL and Redis services, and then run: + +`dnf install PilotGo-server` + +Modify the MySQL and Redis configurations in /opt/PilotGo/server/config_server.yaml and start the service: + +`systemctl start PilotGo-server` + +Access the page: + +### 1.2 PilotGo-agent Installation and Configuration + +Run the following command to install the agent: + +`dnf install PilotGo-agent` + +Modify the IP information in /opt/PilotGo/agent/config_agent.yaml and start the service: + +`systemctl start PilotGo-agent` + +### 1.3 PilotGo Plugin Installation and Configuration + +For details, see 3 PilotGo Platform Plugin Usage. + +## 2 PilotGo Platform Usage + +### 2.1 First Login + +#### 2.1.1 User Login Page + +The user login page is shown in the figure. Enter the correct username and password to log in to the system. The default username is , and the default password is admin. Changing the password after the first login is recommended.![Local path](./figures/登录.png) + +### 2.2 User Module + +#### 2.2.1 Creating a User + +There are two ways to create users: creating a single user manually, or importing multiple users in a batch. + +##### 2.2.1.1 Creating a Single User + +1. Log in as a user with the user creation permission and click User Management in the left navigation bar. +2. Click the Add button in the upper right corner of the page. +3. Enter the username, password, and email address, select the department and role type, and click OK.![Local path](./figures/添加用户1.png) +4. The page displays the message "User added successfully" along with the new user information, indicating that the user has been created.![Local path](./figures/添加用户2.png) + +##### 2.2.1.2 Importing Multiple Users in a Batch + +1. Log in as a user with the user creation permission and click User Management in the left navigation bar. +2. Click the batch import button on the page, select a file, and click Open.![Local path](./figures/用户导入1.png) +3. The user information is displayed, indicating that the import is complete.![Local path](./figures/用户导入2.png)![Local path](./figures/用户导入3.png) + +#### 2.2.2 Modifying User Information and Passwords + +##### 2.2.2.1 Modifying User Information + +1. Log in as a user with this permission and click User Management in the left navigation bar. +2. Locate the user information and click the Edit button in the operation column. +3. Enter the new user information and click OK.![Local path](./figures/编辑用户1.png) +4. The page displays the message "User information modified successfully" along with the updated user information.![Local path](./figures/编辑用户2.png) + +##### 2.2.2.2 Changing Passwords + +There are two ways to change a password: a user who knows the password logs in and changes it, or, if the user has forgotten the password, an administrator logs in and resets it. The reset default password is the part of the email address before the @ sign. + +###### 2.2.2.2.1 Changing the Password Manually + +1. After logging in, click the avatar icon in the upper right corner and then Change Password.![Local path](./figures/修改密码1.png) +2. Enter the new password twice and click OK.![Local path](./figures/修改密码2.png) +3. The page displays the message "Modified successfully".![Local path](./figures/修改密码3.png) + +###### 2.2.2.2.2 Resetting a Password + +1. Log in as an administrator and click User Management in the left navigation bar. +2. Locate the user information and click the Reset Password button in the operation column. +3. The user can log in to the system with the default password.![Local path](./figures/重置密码1.png) + +#### 2.2.3 Deleting Users + +1. Log in as an administrator and click User Management in the left navigation bar. +2. Select the users to delete by clicking the checkboxes on the page. +3. Click the Delete button in the upper right corner of the page and click OK.![Local path](./figures/删除用户1.png) +4. The page displays the message "User deleted successfully", and the deleted user information is no longer shown on the user management page.![Local path](./figures/删除用户2.png) + +#### 2.2.4 Exporting Users + +1. Log in as a user with this permission and click User Management in the left navigation bar. +2. Click the Export button on the page.![Local path](./figures/用户导出1.png) +3. The browser shows the download progress. After the download succeeds, open the xlsx file to view the information.![Local path](./figures/用户导出2.png) + +### 2.3 Role Module + +#### 2.3.1 Adding a Role + +1. Log in as a user with this permission and click Role Management in the left navigation bar. +2. Click the Add button on the page. +3. Enter the role name and description, and click OK.![Local path](./figures/添加角色1.png) +4. The page displays the message "Role added successfully" along with the new role information.![Local path](./figures/添加角色2.png) + +#### 2.3.2 Modifying a Role + +##### 2.3.2.1 Modifying Role Information + +1. Log in as a user with this permission and click Role Management in the left navigation bar. +2. Click the Edit button of the target role. +3. Enter the new role name and description, and click OK.![Local path](./figures/添加角色1.png) +4. The page displays the message "Role information modified successfully" along with the updated role information.![Local path](./figures/编辑角色2.png) + +##### 2.3.2.2 Modifying Role Permissions + +1. Log in as a user with this permission and click Role Management in the left navigation bar. +2. Click the Change button of the target role. +3. Select the required permissions (click the Reset button to clear the selection) and click OK.![Local path](./figures/编辑角色1.png) + +4. The page displays the message "Role permissions changed successfully".![Local path](./figures/编辑角色2.png) + +#### 2.3.3 Deleting a Role + +1. Log in as a user with this permission and click Role Management in the left navigation bar. +2. Click the Delete button of the target role and click OK.![Local path](./figures/删除角色1.png)![Local path](./figures/删除角色2.png) +3. The page displays the message "Role deleted successfully", and the deleted role information is no longer shown.![Local path](./figures/删除角色3.png) + +### 2.4 Department Tree Module + +#### 2.4.1 Modifying a Department Node + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. 
On the target department node, click the edit icon, enter the node name, and click OK.![Local path](./figures/修改节点1.png)![Local path](./figures/修改节点2.png) +3. The page displays the message "Modified successfully" along with the updated department node information.![Local path](./figures/修改节点3.png) + +#### 2.4.2 Deleting a Department Node + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. On the target department node, click the delete icon and click OK.![Local path](./figures/修改节点1.png)![Local path](./figures/删除节点2.png) +3. The page displays the message "Deleted successfully", and the deleted node information is no longer shown.![Local path](./figures/删除节点3.png) + +### 2.5 Configuration Repository Module + +#### 2.5.1 Adding a repo Configuration File + +1. Log in as a user with this permission and click Repository Configuration Files in the left navigation bar. +2. Click the Add button on the page.![Local path](./figures/创建文件1.png) +3. Enter the file name, file type, file path, description, and content. The file name must end with .repo, the file path must be correct, and the file content must follow the repo file format. Then click OK.![Local path](./figures/创建文件2.png) +4. The page displays the message "File saved successfully" along with the new repo configuration file information.![Local path](./figures/创建文件3.png) + +#### 2.5.2 Modifying a repo Configuration File + +1. Log in as a user with this permission and click Repository Configuration Files in the left navigation bar. +2. Locate the repo file to modify and click the corresponding Edit button.![Local path](./figures/编辑文件1.png) +3. Enter the new file name, file type, file path, description, and content, and click OK.![Local path](./figures/编辑文件2.png) +4. The page displays the message "Configuration file modified successfully" along with the updated repo configuration file information.![Local path](./figures/编辑文件3.png) + +#### 2.5.3 Deleting a repo Configuration File + +1. Log in as a user with this permission and click Repository Configuration Files in the left navigation bar. +2. Select the file to delete, click the Delete button on the page, and click OK.![Local path](./figures/删除角色1.png)![Local path](./figures/删除角色2.png) +3. The page displays the message "The stored file has been deleted from the database", and the deleted repo configuration file information is no longer shown.![Local path](./figures/文件删除3.png) + +#### 2.5.4 Delivering a repo Configuration File + +1. Log in as a user with this permission and click Repository Configuration Files in the left navigation bar. +2. Locate the file to deliver, click the Deliver button on the page, select the target batch, and click OK.![Local path](./figures/文件下发1.png)![Local path](./figures/文件下发2.png) +3. The page displays the message "Configuration file delivered successfully".![Local path](./figures/文件下发3.png) + +#### 2.5.5 Rolling Back a repo Configuration File to a Historical Version + +1. Log in as a user with this permission and click Repository Configuration Files in the left navigation bar. +2. Locate the file to roll back and click the History button on the page.![Local path](./figures/文件历史版本.png) +3. Select the version to roll back to, click the Roll Back button, and click OK.![Local path](./figures/文件回滚1.png)![Local path](./figures/文件回滚2.png) +4. The page displays the message "Rolled back to the historical version", and a "-latest" record is added on the history page.![Local path](./figures/文件回滚3.png)![Local path](./figures/文件回滚4.png) + +### 2.6 Batch Module + +#### 2.6.1 Creating a Batch + +1. Log in as a user with this permission and click System and then Create Batch in the left navigation bar. +2. Click the name of the department where the machines are located and select zero or more machine IP addresses from the candidates (click the checkbox before each IP address). To select all machines of one or more departments, click the checkboxes in the department list and then the department names among the candidates. After the selection is complete, click the right arrow.![Local path](./figures/创建批次1.png) +3. Enter the batch name and description, and click Create.![Local path](./figures/创建批次2.png) +4. The page displays the message "Batch saved successfully", and the batch page shows the new batch information.![Local path](./figures/创建批次3.png)![Local path](./figures/创建批次4.png) + +#### 2.6.2 Modifying a Batch + +1. 
Log in as a user with this permission and click Batch in the left navigation bar. +2. Click the Edit button of the target batch.![Local path](./figures/编辑批次1.png) +3. Enter the new batch name and remarks, and click OK.![Local path](./figures/编辑文件2.png) +4. The page displays the message "Batch modified successfully" along with the updated batch information.![Local path](./figures/编辑批次3.png) + +#### 2.6.3 Deleting a Batch + +1. Log in as a user with this permission and click Batch in the left navigation bar. +2. Select the batch to delete, click the Delete button, and click OK.![Local path](./figures/删除批次1.png)![Local path](./figures/删除批次2.png) +3. The page displays the message "Batch deleted successfully", and the deleted batch information is no longer shown.![Local path](./figures/删除批次3.png) + +#### 2.6.4 Installing Software Packages in a Batch + +1. Log in as a user with this permission, click Batch in the left navigation bar, and click the batch name.![Local path](./figures/批量操作1.png) +2. Click the RPM delivery button in the upper right corner, enter the software package name in the search box, and click the Deliver button.![Local path](./figures/批量下发1.png) +3. The page displays the message "Software package installed successfully", and the delivered RPM package can be found on the agent side.![Local path](./figures/批量下发2.png) + +#### 2.6.5 Uninstalling Software Packages in a Batch + +1. Log in as a user with this permission, click Batch in the left navigation bar, and click the batch name.![Local path](./figures/批量操作1.png) +2. Click the RPM uninstall button in the upper right corner, enter the software package name in the search box, and click the Uninstall button.![Local path](./figures/批量卸载1.png) +3. The page displays the message "Software package uninstalled successfully", and the package is no longer present on the agent side.![Local path](./figures/批量卸载2.png) + +### 2.7 Machine Module + +#### 2.7.1 Deleting a Machine + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. Select the machine to delete, click the Delete button, and click OK.![Local path](./figures/删除机器1.png)![Local path](./figures/删除机器2.png) +3. The page displays the message "Machine deleted successfully", and the deleted machine information is no longer shown.![Local path](./figures/删除机器3.png) + +#### 2.7.2 Changing the Department of a Machine + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. Select the machine whose department is to be changed and click the Change Department button. +3. Verify the IP address of the machine, select the new department, and click OK.![Local path](./figures/变更部门1.png) +4. The page displays the message "Machine department modified successfully" along with the updated information.![Local path](./figures/变更部门2.png) + +#### 2.7.3 Modifying Machine Kernel Parameters + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. Click the IP address of the target machine and click the kernel parameter tab.![Local path](./figures/机器内核修改1.png) +3. Enter the kernel parameter to search for, click Modify, enter the parameter value, and click OK.![Local path](./figures/机器内核修改2.png) +4. The page shows the modification progress, which reaches 100% on success.![Local path](./figures/机器内核修改3.png) + +#### 2.7.4 Starting a Machine Service + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. Click the IP address of the target machine and click the service information tab. +3. Enter the name of the service to start in the search box and click the Start button. +4. The page shows the package name, the action performed, and a progress bar for the result.![Local path](./figures/机器服务启动.png) + +#### 2.7.5 Restarting a Machine Service + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. Click the IP address of the target machine and click the service information tab. +3. Enter the name of the service to restart in the search box and click the Restart button. +4. The page shows the package name, the action performed, and a progress bar for the result.![Local path](./figures/机器服务重启.png) + +#### 2.7.6 Stopping a Machine Service + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. Click the IP address of the target machine and click the service information tab. +3. Enter the name of the service to stop in the search box and click the Stop button. +4. 
The page shows the package name, the action performed, and a progress bar for the result.![Local path](./figures/机器服务停止.png) + +#### 2.7.7 Installing a Software Package + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. Click the IP address of the target machine and click the software package information tab. +3. Enter the software package name in the search box and click the Install button. +4. The page shows the repo name and repo address, along with the package name, the action performed, and the result.![Local path](./figures/机器软件包安装2.png) + +#### 2.7.8 Uninstalling a Software Package + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. Click the IP address of the target machine and click the software package information tab. +3. Enter the software package name in the search box and click the Uninstall button. +4. The page shows the repo name and repo address, along with the package name, the action performed, and the result.![Local path](./figures/机器软件包卸载.png) + +#### 2.7.9 Connecting to a Machine Terminal + +1. Log in as a user with this permission and click System and then Machine List in the left navigation bar. +2. Click the IP address of the target machine and click the terminal tab. +3. Enter the IP address and machine password, and click the Connect button.![Local path](./figures/机器终端1.png) +4. The page displays a terminal window.![Local path](./figures/机器终端.png) + +## 3 PilotGo Platform Plugin Usage + +### 3.1 Grafana Plugin Usage + +1. Run dnf install PilotGo-plugin-grafana grafana on any server. +2. Change the IP address in the /opt/PilotGo/plugin/grafana/config.yaml file to the real IP address of the local machine, and modify the following settings in the /etc/grafana/grafana.ini file: + + ```shell + root_url = http://<real IP address>:9999/plugin/grafana + + serve_from_sub_path = true + + allow_embedding = true + ``` + +3. Restart the two services by running the following commands: + + ```shell + systemctl restart grafana-server + + systemctl start PilotGo-plugin-grafana + ``` + +4. Log in to the PilotGo platform, click Plugin Management in the left navigation bar, click the Add Plugin button, enter the plugin name and service address, and click OK.![Local path](./figures/G插件1.png) +5. A plugin management record is added on the page, and a plugin button appears in the navigation bar.![Local path](./figures/G插件2.png)![Local path](./figures/G插件3.png) + +### 3.2 Prometheus Plugin Usage + +1. Run dnf install PilotGo-plugin-prometheus on any server. +2. Change the IP address in the /opt/PilotGo/plugin/prometheus/server/config.yml file to the real IP address of the local machine and the MySQL service address. +3. Restart the service by running the following command: + + ```shell + systemctl start PilotGo-plugin-prometheus + ``` + +4. Log in to the PilotGo platform, click Plugin Management in the left navigation bar, click the Add Plugin button, enter the plugin name and service address, and click OK.![Local path](./figures/P插件1.png) +5. A plugin management record is added on the page, and a plugin button appears in the navigation bar.![Local path](./figures/P插件2.png)![Local path](./figures/P插件3.png) +6. 
Select the machine IP address and monitoring period on the page to display the machine data dashboard.![Local path](./figures/P插件4.png) diff --git a/docs/en/Tools/CommunityTools/Compilation/Menu/index.md b/docs/en/Tools/CommunityTools/Compilation/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..7804d1838670041267aa4a87f0e6ba269dcf6f6e --- /dev/null +++ b/docs/en/Tools/CommunityTools/Compilation/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [GCC User Guide]({{< relref "../../../Server/Development/GCC/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/ImageCustom/Menu/index.md b/docs/en/Tools/CommunityTools/ImageCustom/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..e15be5864a98b381599a456d019c12a69eab9857 --- /dev/null +++ b/docs/en/Tools/CommunityTools/ImageCustom/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [isocut User Guide]({{< relref "./isocut/Menu/index.md" >}}) +- [imageTailor User Guide]({{< relref "./imageTailor/Menu/index.md" >}}) diff --git a/docs/en/Tools/CommunityTools/ImageCustom/imageTailor/Menu/index.md b/docs/en/Tools/CommunityTools/ImageCustom/imageTailor/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..8d6dc6f660f43fc73a4e9861a648f9ed93fb4172 --- /dev/null +++ b/docs/en/Tools/CommunityTools/ImageCustom/imageTailor/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [imageTailor User Guide]({{< relref "./imagetailor-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/TailorCustom/figures/flowchart.png b/docs/en/Tools/CommunityTools/ImageCustom/imageTailor/figures/flowchart.png similarity index 100% rename from docs/en/docs/TailorCustom/figures/flowchart.png rename to docs/en/Tools/CommunityTools/ImageCustom/imageTailor/figures/flowchart.png diff --git a/docs/en/docs/TailorCustom/imageTailor-user-guide.md b/docs/en/Tools/CommunityTools/ImageCustom/imageTailor/imagetailor-user-guide.md similarity index 45% rename from 
docs/en/docs/TailorCustom/imageTailor-user-guide.md rename to docs/en/Tools/CommunityTools/ImageCustom/imageTailor/imagetailor-user-guide.md index 2ad4ae70147104cf945e4eeeedfa07e587a552e0..0afb9b602f4611a982d0c56aa418bef9d84b7de6 100644 --- a/docs/en/docs/TailorCustom/imageTailor-user-guide.md +++ b/docs/en/Tools/CommunityTools/ImageCustom/imageTailor/imagetailor-user-guide.md @@ -1,31 +1,30 @@ -# ImageTailor User Guide - - - [Introduction](#introduction) - - [Installation](#installation) - - [Software and Hardware Requirements](#software-and-hardware-requirements) - - [Obtaining the Installation Package](#obtaining-the-installation-package) - - [Installing imageTailor](#installing-imagetailor) - - [Directory Description](#directory-description) - - [Image Customization](#image-customization) - - [Overall Process](#overall-process) - - [Customizing Service Packages](#customizing-service-packages) - - [Setting a Local Repo Source](#setting-a-local-repo-source) - - [Adding Files](#adding-files) - - [Adding RPM Packages](#adding-rpm-packages) - - [Adding Hook Scripts](#adding-hook-scripts) - - [Configuring System Parameters](#configuring-system-parameters) - - [Configuring Host Parameters](#configuring-host-parameters) - - [Configuring Initial Passwords](#configuring-initial-passwords) - - [Configuring Partitions](#configuring-partitions) - - [Configuring the Network](#configuring-the-network) - - [Configuring Kernel Parameters](#configuring-kernel-parameters) - - [Creating an Image](#creating-an-image) - - [Command Description](#command-description) - - [Image Creation Guide](#image-creation-guide) - - [Tailoring Time Zones](#tailoring-time-zones) - - [Customization Example](#customization-example) - - +# imageTailor User Guide + +- [imageTailor User Guide](#imagetailor-user-guide) + - [Introduction](#introduction) + - [Installation](#installation) + - [Software and Hardware Requirements](#software-and-hardware-requirements) + - [Obtaining the Installation 
Package](#obtaining-the-installation-package) + - [Installing imageTailor](#installing-imagetailor) + - [Directory Description](#directory-description) + - [Image Customization](#image-customization) + - [Overall Process](#overall-process) + - [Customizing Service Packages](#customizing-service-packages) + - [Setting a Local repository](#setting-a-local-repository) + - [Adding Files](#adding-files) + - [Adding RPM Packages](#adding-rpm-packages) + - [Adding Hook Scripts](#adding-hook-scripts) + - [Configuring System Parameters](#configuring-system-parameters) + - [Configuring Host Parameters](#configuring-host-parameters) + - [Configuring Initial Passwords](#configuring-initial-passwords) + - [Configuring Partitions](#configuring-partitions) + - [Configuring the Network](#configuring-the-network) + - [Configuring Kernel Parameters](#configuring-kernel-parameters) + - [Creating an Image](#creating-an-image) + - [Command Description](#command-description) + - [Image Creation Guide](#image-creation-guide) + - [Tailoring Time Zones](#tailoring-time-zones) + - [Customization Example](#customization-example) ## Introduction @@ -40,8 +39,6 @@ To address these problems, openEuler provides the imageTailor tool for tailoring - System configuration modification: Configures the host name, startup services, time zone, network, partitions, drivers to be loaded, and kernel version. - Software package addition: Adds custom RPM packages or files to the system. - - ## Installation This section uses openEuler 22.03 LTS in the AArch64 architecture as an example to describe the installation method. @@ -62,13 +59,11 @@ The software and hardware requirements of imageTailor are as follows: - The SElinux service is disabled. - ```shell - $ sudo setenforce 0 - $ getenforce - Permissive - ``` - - + ```shell + $ sudo setenforce 0 + $ getenforce + Permissive + ``` ### Obtaining the Installation Package @@ -76,27 +71,27 @@ Download the openEuler release package to install and use imageTailor. 
1. Obtain the ISO image file and the corresponding verification file. - The image must be an everything image. Assume that the image is to be stored in the **root** directory. Run the following commands: + The image must be an everything image. Assume that the image is to be stored in the **root** directory. Run the following commands: - ```shell - $ cd /root/temp - $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso - $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum - ``` + ```shell + cd /root/temp + wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso + wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum + ``` -3. Obtain the verification value in the sha256sum verification file. +2. Obtain the verification value in the sha256sum verification file. - ```shell - $ cat openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum - ``` + ```shell + cat openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum + ``` -4. Calculate the verification value of the ISO image file. +3. Calculate the verification value of the ISO image file. - ```shell - $ sha256sum openEuler-22.03-LTS-everything-aarch64-dvd.iso - ``` + ```shell + sha256sum openEuler-22.03-LTS-everything-aarch64-dvd.iso + ``` -5. Compare the verification value in the sha256sum file with that of the ISO image. If they are the same, the file integrity is verified. Otherwise, the file integrity is damaged. You need to obtain the file again. +4. Compare the verification value in the sha256sum file with that of the ISO image. If they are the same, the file integrity is verified. Otherwise, the file integrity is damaged. You need to obtain the file again. 
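The comparison in steps 2 to 4 can also be done in a single command: `sha256sum -c` reads the **.sha256sum** file and verifies the ISO it names, returning a nonzero exit status on a mismatch. A minimal sketch follows; the `verify_iso` helper is illustrative and not part of imageTailor:

```shell
# verify_iso: check an ISO against its ".sha256sum" companion file.
# sha256sum -c exits 0 when the hash matches and nonzero otherwise.
verify_iso() {
    sha256sum -c "$1.sha256sum"
}

# Example (paths as in the steps above):
# cd /root/temp && verify_iso openEuler-22.03-LTS-everything-aarch64-dvd.iso
```

Because the **.sha256sum** file records a relative file name, run the check from the directory that contains the ISO.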
### Installing imageTailor @@ -104,63 +99,63 @@ The following uses openEuler 22.03 LTS in AArch64 architecture as an example to 1. Ensure that openEuler 22.03 LTS (or a running environment that meets the requirements of imageTailor) has been installed on the host. - ```shell - $ cat /etc/openEuler-release - openEuler release 22.03 LTS - ``` + ```shell + $ cat /etc/openEuler-release + openEuler release 22.03 LTS + ``` 2. Create a **/etc/yum.repos.d/local.repo** file to configure the Yum source. The following is an example of the configuration file. **baseurl** indicates the directory for mounting the ISO image. - ```shell - [local] - name=local - baseurl=file:///root/imageTailor_mount - gpgcheck=0 - enabled=1 - ``` + ```shell + [local] + name=local + baseurl=file:///root/imageTailor_mount + gpgcheck=0 + enabled=1 + ``` 3. Run the following commands as the **root** user to mount the image to the **/root/imageTailor_mount** directory as the Yum source (ensure that the value of **baseurl** is the same as that configured in the repo file and the disk space of the directory is greater than 20 GB): - ```shell - $ mkdir /root/imageTailor_mount - $ sudo mount -o loop /root/temp/openEuler-22.03-LTS-everything-aarch64-dvd.iso /root/imageTailor_mount/ - ``` + ```shell + mkdir /root/imageTailor_mount + sudo mount -o loop /root/temp/openEuler-22.03-LTS-everything-aarch64-dvd.iso /root/imageTailor_mount/ + ``` 4. Make the Yum source take effect. - ```shell - $ yum clean all - $ yum makecache - ``` + ```shell + yum clean all + yum makecache + ``` 5. Install the imageTailor tool as the **root** user. - ```shell - $ sudo yum install -y imageTailor - ``` + ```shell + sudo yum install -y imageTailor + ``` 6. 
Run the following command as the **root** user to verify that the tool has been installed successfully: - ```shell - $ cd /opt/imageTailor/ - $ sudo ./mkdliso -h - ------------------------------------------------------------------------------------------------------------- - Usage: mkdliso -p product_name -c configpath [--minios yes|no|force] [-h] [--sec] - Options: - -p,--product Specify the product to make, check custom/cfg_yourProduct. - -c,--cfg-path Specify the configuration file path, the form should be consistent with custom/cfg_xxx - --minios Make minios: yes|no|force - --sec Perform security hardening - -h,--help Display help information - - Example: - command: - ./mkdliso -p openEuler -c custom/cfg_openEuler --sec - - help: - ./mkdliso -h - ------------------------------------------------------------------------------------------------------------- - ``` + ```shell + $ cd /opt/imageTailor/ + $ sudo ./mkdliso -h + ------------------------------------------------------------------------------------------------------------- + Usage: mkdliso -p product_name -c configpath [--minios yes|no|force] [-h] [--sec] + Options: + -p,--product Specify the product to make, check custom/cfg_yourProduct. + -c,--cfg-path Specify the configuration file path, the form should be consistent with custom/cfg_xxx + --minios Make minios: yes|no|force + --sec Perform security hardening + -h,--help Display help information + + Example: + command: + ./mkdliso -p openEuler -c custom/cfg_openEuler --sec + + help: + ./mkdliso -h + ------------------------------------------------------------------------------------------------------------- + ``` ### Directory Description @@ -201,41 +196,43 @@ The steps are described as follows: - Customize service packages: Add RPM packages (including service RPM packages, commands, drivers, and library files) and files (including custom files, commands, drivers, and library files). 
- - Adding service RPM packages: Add RPM packages to the ISO image as required. For details, see [Installation](#installation). - - Adding custom files: If you want to perform custom operations such as hardware check, system configuration check, and driver installation when the target ISO system is installed or started, you can compile custom files and package them to the ISO image. - - Adding drivers, commands, and library files: If the RPM package source of openEuler does not contain the required drivers, commands, or library files, you can use imageTailor to package the corresponding drivers, commands, or library files into the ISO image. + - Adding service RPM packages: Add RPM packages to the ISO image as required. For details, see [Installation](#installation). + - Adding custom files: If you want to perform custom operations such as hardware check, system configuration check, and driver installation when the target ISO system is installed or started, you can compile custom files and package them to the ISO image. + - Adding drivers, commands, and library files: If the RPM package source of openEuler does not contain the required drivers, commands, or library files, you can use imageTailor to package the corresponding drivers, commands, or library files into the ISO image. - Configure system parameters: - - Configuring host parameters: To ensure that the ISO image is successfully installed and started, you need to configure host parameters. - - Configuring partitions: You can configure service partitions based on the service plan and adjust system partitions. - - Configuring the network: You can set system network parameters as required, such as the NIC name, IP address, and subnet mask. - - Configuring the initial password: To ensure that the ISO image is successfully installed and started, you need to configure the initial passwords of the **root** user and GRUB. 
- - Configuring kernel parameters: You can configure the command line parameters of the kernel as required. + - Configuring host parameters: To ensure that the ISO image is successfully installed and started, you need to configure host parameters. + - Configuring partitions: You can configure service partitions based on the service plan and adjust system partitions. + - Configuring the network: You can set system network parameters as required, such as the NIC name, IP address, and subnet mask. + - Configuring the initial password: To ensure that the ISO image is successfully installed and started, you need to configure the initial passwords of the **root** user and GRUB. + - Configuring kernel parameters: You can configure the command line parameters of the kernel as required. - Configure security hardening policies. - ImageTailor provides default security hardening policies. You can modify **security_s.conf** (in the ISO image customization phase) to perform secondary security hardening on the system based on service requirements. For details, see the [Security Hardening Guide](https://docs.openeuler.org/en/docs/22.03_LTS/docs/SecHarden/secHarden.html). + imageTailor provides default security hardening policies. You can modify **security_s.conf** (in the ISO image customization phase) to perform secondary security hardening on the system based on service requirements. For details, see the [Security Hardening Guide](https://docs.openeuler.org/en/docs/22.03_LTS/docs/SecHarden/secHarden.html). - Create an ISO image: - Use the imageTailor tool to create an ISO image. + Use the imageTailor tool to create an ISO image. ### Customizing Service Packages You can pack service RPM packages, custom files, drivers, commands, and library files into the target ISO image as required. -#### Setting a Local Repo Source +#### Setting a Local repository -To customize an ISO image, you must set a repo source in the **/opt/imageTailor/repos/euler_base/** directory. 
This section describes how to set a local repo source. +To customize an ISO image, you must set a repository in the **/opt/imageTailor/repos/euler_base/** directory. This section describes how to set a local repository. 1. Download the ISO file released by openEuler. (The RPM package of the everything image released by the openEuler must be used.) + ```shell - $ cd /opt - $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso + cd /opt + wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso ``` 2. Create a mount directory **/opt/openEuler_repo** and mount the ISO file to the directory. + ```shell $ sudo mkdir -p /opt/openEuler_repo $ sudo mount openEuler-22.03-LTS-everything-aarch64-dvd.iso /opt/openEuler_repo @@ -243,6 +240,7 @@ To customize an ISO image, you must set a repo source in the **/opt/imageTailor/ ``` 3. Copy the RPM packages in the ISO file to the **/opt/imageTailor/repos/euler_base/** directory. + ```shell $ sudo rm -rf /opt/imageTailor/repos/euler_base && sudo mkdir -p /opt/imageTailor/repos/euler_base $ sudo cp -ar /opt/openEuler_repo/Packages/* /opt/imageTailor/repos/euler_base @@ -263,24 +261,24 @@ You can add files to an ISO image as required. The file types include custom fil - The file stored in the **/opt/imageTailor/custom/cfg_openEuler/usr_file** directory will be generated in the root directory of the ISO. Therefore, the directory structure of the file must be a complete path starting from the root directory so that imageTailor can place the file in the correct directory. - For example, if you want **file1** to be in the **/opt** directory of the ISO, create an **opt** directory in the **usr_file** directory and copy **file1** to the **opt** directory. For example: - - ```shell - $ pwd - /opt/imageTailor/custom/cfg_openEuler/usr_file - - $ tree - . 
- ├── etc - │   ├── default - │   │   └── grub - │   └── profile.d - │   └── csh.precmd - └── opt - └── file1 + For example, if you want **file1** to be in the **/opt** directory of the ISO, create an **opt** directory in the **usr_file** directory and copy **file1** to the **opt** directory. For example: + + ```shell + $ pwd + /opt/imageTailor/custom/cfg_openEuler/usr_file - 4 directories, 3 files - ``` + $ tree + . + ├── etc + │   ├── default + │   │   └── grub + │   └── profile.d + │   └── csh.precmd + └── opt + └── file1 + + 4 directories, 3 files + ``` - The paths in **/opt/imageTailor/custom/cfg_openEuler/usr_file** must be real paths. For example, the paths do not contain soft links. You can run the `realpath` or `readlink -f` command to query the real path. @@ -292,78 +290,82 @@ You can add files to an ISO image as required. The file types include custom fil To add RPM packages (drivers, commands, or library files) to an ISO image, perform the following steps: ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif)**NOTE:** > ->- The **rpm.conf** and **cmd.conf** files are stored in the **/opt/imageTailor/custom/cfg_openEuler/** directory. ->- The RPM package tailoring granularity below indicates **sys_cut='no'**. For details about the cutout granularity, see [Configuring Host Parameters](#configuring-host-parameters). ->- If no local repo source is configured, configure a local repo source by referring to [Setting a Local Repo Source](#setting-a-local-repo-source). +> - The **rpm.conf** and **cmd.conf** files are stored in the **/opt/imageTailor/custom/cfg_openEuler/** directory. +> - The RPM package tailoring granularity below indicates **sys_cut='no'**. For details about the cutout granularity, see [Configuring Host Parameters](#configuring-host-parameters). +> - If no local repository is configured, configure a local repository by referring to [Setting a Local repository](#setting-a-local-repository). > 1. 
Check whether the **/opt/imageTailor/repos/euler_base/** directory contains the RPM package to be added. - - If yes, go to step 2. - - If no, go to step 3. + - If yes, go to step 2. + - If no, go to step 3. + 2. Configure the RPM package information in the **\<rpm_list\>** section in the **rpm.conf** file. - - For the RPM package tailoring granularity, no further action is required. - - For other tailoring granularities, go to step 4. + + - For the RPM package tailoring granularity, no further action is required. + - For other tailoring granularities, go to step 4. + 3. Obtain the RPM package and store it in the **/opt/imageTailor/custom/cfg_openEuler/usr_rpm** directory. If the RPM package depends on other RPM packages, store the dependency packages to this directory because the added RPM package and its dependent RPM packages must be packed into the ISO image at the same time. - - For the RPM package tailoring granularity, go to step 4. - - For other tailoring granularities, no further action is required. -4. Configure the drivers, commands, and library files to be retained in the RPM package in the **rpm.conf** and **cmd.conf** files. If there are common files to be tailored, configure them in the **\<strip_list\>** section in the **cmd.conf** file. + - For the RPM package tailoring granularity, go to step 4. + - For other tailoring granularities, no further action is required. + +4. Configure the drivers, commands, and library files to be retained in the RPM package in the **rpm.conf** and **cmd.conf** files. If there are common files to be tailored, configure them in the **\<strip_list\>** section in the **cmd.conf** file. ##### Configuration File Description | Operation | Configuration File| Section | | :----------- | :----------- | :----------------------------------------------------------- | -| Adding drivers | rpm.conf | \<drivers\>
\<driver_name\>driver_name\</driver_name\>
\</drivers\>

Note: The **driver_name** is the relative path of **/lib/modules/{kernel_version_number}/kernel/**.| -| Adding commands | cmd.conf | \<cmds\>
\<cmd_name\>cmd_name\</cmd_name\>
\</cmds\> | -| Adding library files | cmd.conf | \<libs\>
\<lib_name\>lib_name\</lib_name\>
\</libs\> | -| Deleting other files| cmd.conf | \<strip_list\>
\<strip_file_name\>file_name\</strip_file_name\>
\</strip_list\>

Note: The file name must be an absolute path.| +| Adding drivers | rpm.conf | \<drivers\>
\<driver_name\>driver_name\</driver_name\>
\</drivers\>

Note: The **driver_name** is the relative path of **/lib/modules/{kernel_version_number}/kernel/**.| +| Adding commands | cmd.conf | \<cmds\>
\<cmd_name\>cmd_name\</cmd_name\>
\</cmds\> | +| Adding library files | cmd.conf | \<libs\>
\<lib_name\>lib_name\</lib_name\>
\</libs\> | +| Deleting other files| cmd.conf | \<strip_list\>
\<strip_file_name\>file_name\</strip_file_name\>
\</strip_list\>

Note: The file name must be an absolute path.| **Example** - Adding drivers - ```shell - - - - - ...... - - ``` + ```shell + + + + + ...... + + ``` - Adding commands - ```shell - - - - - ...... - - ``` + ```shell + + + + + ...... + + ``` - Adding library files - ```shell - - - - - - ``` + ```shell + + + + + + ``` - Deleting other files - ```shell - - - - - - ``` + ```shell + + + + + + ``` #### Adding Hook Scripts @@ -373,11 +375,9 @@ A hook script is invoked by the OS during startup and installation to execute th The script name must start with **S+number** (the number must be at least two digits). The number indicates the execution sequence of the hook script. Example: **S01xxx.sh** ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif)**NOTE:** > ->The scripts in the **hook** directory are executed using the `source` command. Therefore, exercise caution when using the `exit` command in the scripts because the entire installation script exits after the `exit` command is executed. - - +> The scripts in the **hook** directory are executed using the `source` command. Therefore, exercise caution when using the `exit` command in the scripts because the entire installation script exits after the `exit` command is executed. ##### Description of hook Subdirectories @@ -388,7 +388,7 @@ The script name must start with **S+number** (the number must be at least two di | env_check_hook | S01check_hw.sh | Before the OS installation initialization | The script is used to check hardware specifications and types before initialization.| | set_install_ip_hook | S01set_install_ip.sh | When network configuration is being performed during OS installation initialization. 
| You can customize the network configuration by using a custom script.| | before_partition_hook | S01checkpart.sh | Before partitioning | You can check correctness of the partition configuration file by using a custom script.| -| before_setup_os_hook | N/A | Before the repo file is decompressed | You can customize partition mounting.
If the decompression path of the installation package is not the root partition specified in the partition configuration, customize partition mounting and assign the decompression path to the input global variable.| +| before_setup_os_hook | N/A | Before the repo file is decompressed | You can customize partition mounting.
If the decompression path of the installation package is not the root partition specified in the partition configuration, customize partition mounting and assign the decompression path to the input global variable.| | before_mkinitrd_hook | S01install_drv.sh | Before the `mkinitrd` command is run | The hook script executed before running the `mkinitrd` command when **initrd** is saved to the disk. You can add and update driver files in **initrd**.| | after_setup_os_hook | N/A | After OS installation | After the installation is complete, you can perform custom operations on the system files, such as modifying **grub.cfg**.| | install_succ_hook | N/A | When the OS is successfully installed | The scripts in this subdirectory are used to parse the installation information and send information of whether the installation succeeds.**install_succ_hook** cannot be set to **install_break**.| @@ -400,7 +400,7 @@ Before creating an ISO image, you need to configure system parameters, including #### Configuring Host Parameters - The **\ \** section in the **/opt/imageTailor/custom/cfg_openEuler/sys.conf** file is used to configure common system parameters, such as the host name and kernel boot parameters. +The **\ \** section in the **/opt/imageTailor/custom/cfg_openEuler/sys.conf** file is used to configure common system parameters, such as the host name and kernel boot parameters. The default configuration provided by openEuler is as follows. You can modify the configuration as required. @@ -422,85 +422,80 @@ The parameters are described as follows: - sys_service_enable - This parameter is optional. Services enabled by the OS by default. Separate multiple services with spaces. If you do not need to add a system service, use the default value **ipcc**. Pay attention to the following during the configuration: + This parameter is optional. Services enabled by the OS by default. Separate multiple services with spaces. 
If you do not need to add a system service, use the default value **ipcc**. Pay attention to the following during the configuration: - - Default system services cannot be deleted. - - You can configure service-related services, but the repo source must contain the service RPM package. - - By default, only the services configured in this parameter are enabled. If a service depends on other services, you need to configure the depended services in this parameter. + - Default system services cannot be deleted. + - You can configure service-related services, but the repository must contain the service RPM package. + - By default, only the services configured in this parameter are enabled. If a service depends on other services, you need to configure the depended services in this parameter. - sys_service_disable - This parameter is optional. Services that are not allowed to automatically start upon system startup. Separate multiple services with spaces. If no system service needs to be disabled, leave this parameter blank. + This parameter is optional. Services that are not allowed to automatically start upon system startup. Separate multiple services with spaces. If no system service needs to be disabled, leave this parameter blank. - sys_utc - (Mandatory) Indicates whether to use coordinated universal time (UTC) time. The value can be **yes** or **no**. The default value is **yes**. + (Mandatory) Indicates whether to use coordinated universal time (UTC) time. The value can be **yes** or **no**. The default value is **yes**. - sys_timezone - This parameter is optional. Sets the time zone. The value can be a time zone supported by openEuler, which can be queried in the **/usr/share/zoneinfo/zone.tab** file. + This parameter is optional. Sets the time zone. The value can be a time zone supported by openEuler, which can be queried in the **/usr/share/zoneinfo/zone.tab** file. - sys_cut - (Mandatory) Indicates whether to tailor the RPM packages. 
The value can be **yes**, **no**, or **debug**.**yes** indicates that the RPM packages are tailored. **no** indicates that the RPM packages are not tailored (only the RPM packages in the **rpm.conf** file is installed). **debug** indicates that the RPM packages are tailored but the `rpm` command is retained for customization after installation. The default value is **no**. + (Mandatory) Indicates whether to tailor the RPM packages. The value can be **yes**, **no**, or **debug**. **yes** indicates that the RPM packages are tailored. **no** indicates that the RPM packages are not tailored (only the RPM packages in the **rpm.conf** file are installed). **debug** indicates that the RPM packages are tailored but the `rpm` command is retained for customization after installation. The default value is **no**. - >![](./public_sys-resources/icon-note.gif) NOTE: - > - > - imageTailor installs the RPM package added by the user, deletes the files configured in the **\<strip_list\>** section of the **cmd.conf** file, and then deletes the commands, libraries, and drivers that are not configured in **cmd.conf** or **rpm.conf**. - > - When **sys_cut='yes'** is configured, imageTailor does not support the installation of the `rpm` command. Even if the `rpm` command is configured in the **rpm.conf** file, the configuration does not take effect. + > ![](./public_sys-resources/icon-note.gif) NOTE: + > + > - imageTailor installs the RPM package added by the user, deletes the files configured in the **\<strip_list\>** section of the **cmd.conf** file, and then deletes the commands, libraries, and drivers that are not configured in **cmd.conf** or **rpm.conf**. + > - When **sys_cut='yes'** is configured, imageTailor does not support the installation of the `rpm` command. Even if the `rpm` command is configured in the **rpm.conf** file, the configuration does not take effect. 
- sys_usrrpm_cut - (Mandatory) Indicates whether to tailor the RPM packages added by users to the **/opt/imageTailor/custom/cfg_openEuler/usr_rpm** directory. The value can be **yes** or **no**. The default value is **no**. + (Mandatory) Indicates whether to tailor the RPM packages added by users to the **/opt/imageTailor/custom/cfg_openEuler/usr_rpm** directory. The value can be **yes** or **no**. The default value is **no**. - - **sys_usrrpm_cut='yes'**: imageTailor installs the RPM packages added by the user, deletes the file configured in the **\** section in the **cmd.conf** file, and then deletes the commands, libraries, and drivers that are not configured in **cmd.conf** or **rpm.conf**. + - **sys_usrrpm_cut='yes'**: imageTailor installs the RPM packages added by the user, deletes the file configured in the **\** section in the **cmd.conf** file, and then deletes the commands, libraries, and drivers that are not configured in **cmd.conf** or **rpm.conf**. - - **sys_usrrpm_cut='no'**: imageTailor installs the RPM packages added by the user but does not delete the files in the RPM packages. + - **sys_usrrpm_cut='no'**: imageTailor installs the RPM packages added by the user but does not delete the files in the RPM packages. - sys_hostname - (Mandatory) Host name. After the OS is deployed in batches, you are advised to change the host name of each node to ensure that the host name of each node is unique. + (Mandatory) Host name. After the OS is deployed in batches, you are advised to change the host name of each node to ensure that the host name of each node is unique. - The host name must be a combination of letters, digits, and hyphens (-) and must start with a letter or digit. Letters are case sensitive. The value contains a maximum of 63 characters. The default value is **Euler**. + The host name must be a combination of letters, digits, and hyphens (-) and must start with a letter or digit. Letters are case sensitive. 
The value contains a maximum of 63 characters. The default value is **Euler**. - sys_usermodules_autoload - (Optional) Driver loaded during system startup. When configuring this parameter, you do not need to enter the file extension **.ko**. If there are multiple drivers, separate them by space. By default, this parameter is left blank, indicating that no additional driver is loaded. + (Optional) Driver loaded during system startup. When configuring this parameter, you do not need to enter the file extension **.ko**. If there are multiple drivers, separate them by space. By default, this parameter is left blank, indicating that no additional driver is loaded. - sys_gconv - (Optional) This parameter is used to tailor **/usr/lib/gconv** and **/usr/lib64/gconv**. The options are as follows: + (Optional) This parameter is used to tailor **/usr/lib/gconv** and **/usr/lib64/gconv**. The options are as follows: - - **null**/**NULL**: indicates that this parameter is not configured. If **sys_cut='yes'** is configured, **/usr/lib/gconv** and **/usr/lib64/gconv** will be deleted. - - **all**/**ALL**: keeps **/usr/lib/gconv** and **/usr/lib64/gconv**. - - **xxx,xxx**: keeps the corresponding files in the **/usr/lib/gconv** and **/usr/lib64/gconv** directories. If multiple files need to be kept, use commas (,) to separate them. + - **null**/**NULL**: indicates that this parameter is not configured. If **sys_cut='yes'** is configured, **/usr/lib/gconv** and **/usr/lib64/gconv** will be deleted. + - **all**/**ALL**: keeps **/usr/lib/gconv** and **/usr/lib64/gconv**. + - **xxx,xxx**: keeps the corresponding files in the **/usr/lib/gconv** and **/usr/lib64/gconv** directories. If multiple files need to be kept, use commas (,) to separate them. - sys_man_cut - (Optional) Indicates whether to tailor the man pages. The value can be **yes** or **no**. The default value is **yes**. + (Optional) Indicates whether to tailor the man pages. The value can be **yes** or **no**. 
The default value is **yes**. - +> ![](./public_sys-resources/icon-note.gif) NOTE: ->![](./public_sys-resources/icon-note.gif) NOTE: -> -> If both **sys_cut** and **sys_usrrpm_cut** are configured, **sys_cut** is used. The following rules apply: -> -> - sys_cut='no' -> -> No matter whether **sys_usrrpm_cut** is set to **yes** or **no**, the system RPM package tailoring granularity is used. That is, imageTailor installs the RPM packages in the repo source and the RPM packages in the **usr_rpm** directory, however, the files in the RPM package are not deleted. Even if some files in the RPM packages are not required, imageTailor will delete them. -> -> - sys_cut='yes' -> -> - sys_usrrpm_cut='no' -> -> System RPM package tailoring granularity: imageTailor deletes files in the RPM packages in the repo sources as configured. -> -> - sys_usrrpm_cut='yes' -> -> System and user RPM package tailoring granularity: imageTailor deletes files in the RPM packages in the repo sources and the **usr_rpm** directory as configured. -> +If both **sys_cut** and **sys_usrrpm_cut** are configured, **sys_cut** is used. The following rules apply: + +- sys_cut='no' + +No matter whether **sys_usrrpm_cut** is set to **yes** or **no**, the system RPM package tailoring granularity is used. That is, imageTailor installs the RPM packages in the repository and the RPM packages in the **usr_rpm** directory; however, the files in the RPM packages are not deleted. Even if some files in the RPM packages are not required, imageTailor will not delete them. + +- sys_cut='yes' + +1. sys_usrrpm_cut='no' + + System RPM package tailoring granularity: imageTailor deletes files in the RPM packages in the repositories as configured. + +2. sys_usrrpm_cut='yes' + + System and user RPM package tailoring granularity: imageTailor deletes files in the RPM packages in the repositories and the **usr_rpm** directory as configured. 
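As an illustration of the rules above, the following **sys.conf** fragment shows the combination that tailors repository packages while keeping every file from the **usr_rpm** packages. This is a sketch only: the section name **\<sys_params\>** and the values are examples taken from the parameter descriptions above, not the shipped defaults.

```shell
<sys_params>
    sys_service_enable='ipcc'
    sys_utc='yes'
    sys_hostname='Euler'
    sys_cut='yes'         # tailor the RPM packages installed from the repository
    sys_usrrpm_cut='no'   # keep all files from the packages under usr_rpm
</sys_params>
```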
#### Configuring Initial Passwords @@ -516,13 +511,13 @@ The **root** and GRUB passwords must be configured during OS installation. Other The initial password of the **root** user is stored in the **/opt/imageTailor/custom/cfg_openEuler/rpm.conf** file. You can modify this file to set the initial password of the **root** user. ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- If the `--minios yes/force` parameter is required when you run the `mkdliso` command to create an ISO image, you need to enter the corresponding information in the **/opt/imageTailor/kiwi/minios/cfg_minios/rpm.conf** file. +> - If the `--minios yes/force` parameter is required when you run the `mkdliso` command to create an ISO image, you need to enter the corresponding information in the **/opt/imageTailor/kiwi/minios/cfg_minios/rpm.conf** file. The default configuration of the initial password of the **root** user in the **/opt/imageTailor/custom/cfg_openEuler/rpm.conf** file is as follows. Add a password of your choice. -``` +```xml @@ -541,33 +536,34 @@ Before creating an ISO image, you need to change the initial password of the **r 1. Add a user for generating a password, for example, **testUser**. - ```shell - $ sudo useradd testUser - ``` + ```shell + sudo useradd testUser + ``` 2. Set the password of **testUser**. Run the following command and set the password as prompted: - ```shell - $ sudo passwd testUser - Changing password for user testUser. - New password: - Retype new password: - passwd: all authentication tokens updated successfully. - ``` + ```shell + $ sudo passwd testUser + Changing password for user testUser. + New password: + Retype new password: + passwd: all authentication tokens updated successfully. + ``` 3. View the **/etc/shadow** file. The content following **testUser** (string between two colons) is the ciphertext of the password. 
- ``` shell script - $ sudo cat /etc/shadow | grep testUser - testUser:$6$YkX5uFDGVO1VWbab$jvbwkZ2Kt0MzZXmPWy.7bJsgmkN0U2gEqhm9KqT1jwQBlwBGsF3Z59heEXyh8QKm3Qhc5C3jqg2N1ktv25xdP0:19052:0:90:7:35:: - ``` + ``` shell script + $ sudo cat /etc/shadow | grep testUser + testUser:$6$YkX5uFDGVO1VWbab$jvbwkZ2Kt0MzZXmPWy.7bJsgmkN0U2gEqhm9KqT1jwQBlwBGsF3Z59heEXyh8QKm3Qhc5C3jqg2N1ktv25xdP0:19052:0:90:7:35:: + ``` 4. Copy and paste the ciphertext to the **pwd** field in the **/opt/imageTailor/custom/cfg_openEuler/rpm.conf** file. - ``` shell script - - - - ``` + + ``` shell script + + + + ``` 5. If the `--minios yes/force` parameter is required when you run the `mkdliso` command to create an ISO image, configure the **pwd** field of the corresponding user in **/opt/imageTailor/kiwi/minios/cfg_minios/rpm.conf**. @@ -590,51 +586,50 @@ The initial GRUB password is stored in the **/opt/imageTailor/custom/cfg_openEul 1. Run the following command and set the GRUB password as prompted: - ```shell - $ sudo grub2-set-password -o ./ - Enter password: - Confirm password: - grep: .//grub.cfg: No such file or directory - WARNING: The current configuration lacks password support! - Update your configuration with grub2-mkconfig to support this feature. - ``` + ```shell + $ sudo grub2-set-password -o ./ + Enter password: + Confirm password: + grep: .//grub.cfg: No such file or directory + WARNING: The current configuration lacks password support! + Update your configuration with grub2-mkconfig to support this feature. + ``` 2. After the command is executed, the **user.cfg** file is generated in the current directory. The content starting with **grub.pbkdf2.sha512** is the encrypted GRUB password. 
- ```shell - $ sudo cat user.cfg - GRUB2_PASSWORD=grub.pbkdf2.sha512.10000.CE285BE1DED0012F8B2FB3DEA38782A5B1040FEC1E49D5F602285FD6A972D60177C365F1 - B5D4CB9D648AD4C70CF9AA2CF9F4D7F793D4CE008D9A2A696A3AF96A.0AF86AB3954777F40D324816E45DD8F66CA1DE836DC7FBED053DB02 - 4456EE657350A27FF1E74429546AD9B87BE8D3A13C2E686DD7C71D4D4E85294B6B06E0615 - ``` + ```shell + $ sudo cat user.cfg + GRUB2_PASSWORD=grub.pbkdf2.sha512.10000.CE285BE1DED0012F8B2FB3DEA38782A5B1040FEC1E49D5F602285FD6A972D60177C365F1 + B5D4CB9D648AD4C70CF9AA2CF9F4D7F793D4CE008D9A2A696A3AF96A.0AF86AB3954777F40D324816E45DD8F66CA1DE836DC7FBED053DB02 + 4456EE657350A27FF1E74429546AD9B87BE8D3A13C2E686DD7C71D4D4E85294B6B06E0615 + ``` 3. Copy the preceding ciphertext and add the following configuration to the **/opt/imageTailor/custom/cfg_openEuler/usr_file/etc/default/grub** file: - ```shell - GRUB_PASSWORD="grub.pbkdf2.sha512.10000.CE285BE1DED0012F8B2FB3DEA38782A5B1040FEC1E49D5F602285FD6A972D60177C365F1 - B5D4CB9D648AD4C70CF9AA2CF9F4D7F793D4CE008D9A2A696A3AF96A.0AF86AB3954777F40D324816E45DD8F66CA1DE836DC7FBED053DB02 - 4456EE657350A27FF1E74429546AD9B87BE8D3A13C2E686DD7C71D4D4E85294B6B06E0615" - ``` - + ```shell + GRUB_PASSWORD="grub.pbkdf2.sha512.10000.CE285BE1DED0012F8B2FB3DEA38782A5B1040FEC1E49D5F602285FD6A972D60177C365F1 + B5D4CB9D648AD4C70CF9AA2CF9F4D7F793D4CE008D9A2A696A3AF96A.0AF86AB3954777F40D324816E45DD8F66CA1DE836DC7FBED053DB02 + 4456EE657350A27FF1E74429546AD9B87BE8D3A13C2E686DD7C71D4D4E85294B6B06E0615" + ``` #### Configuring Partitions If you want to adjust system partitions or service partitions, modify the **\** section in the **/opt/imageTailor/custom/cfg_openEuler/sys.conf** file. ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif) **NOTE:** > ->- System partition: partition for storing the OS. ->- Service partition: partition for service data. ->- The type of a partition is determined by the content it stores, not the size, mount path, or file system. 
->- Partition configuration is optional. You can manually configure partitions after OS installation. +> - System partition: partition for storing the OS. +> - Service partition: partition for service data. +> - The type of a partition is determined by the content it stores, not the size, mount path, or file system. +> - Partition configuration is optional. You can manually configure partitions after OS installation. - The format of **\** is as follows: +The format of **\** is as follows: -disk_ID mount _path partition _size partition_type file_system [Secondary formatting flag] +disk_ID mount_path partition _size partition_type file_system \[Secondary formatting flag] The default configuration is as follows: -``` shell script +```shell hd0 /boot 512M primary ext4 yes hd0 /boot/efi 200M primary vfat yes @@ -648,51 +643,51 @@ hd0 /home max logical ext4 The parameters are described as follows: - disk_ID: - ID of a disk. Set this parameter in the format of **hd***x*, where *x* indicates the *x*th disk. + ID of a disk. Set this parameter in the format of **hd***x*, where *x* indicates the *x*th disk. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - > - >Partition configuration takes effect only when the disk can be recognized. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > + > Partition configuration takes effect only when the disk can be recognized. - mount_path: - Mount path to a specified partition. You can configure service partitions and adjust the default system partition. If you do not mount partitions, set this parameter to **-**. + Mount path to a specified partition. You can configure service partitions and adjust the default system partition. If you do not mount partitions, set this parameter to **-**. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - > - >- You must configure the mount path to **/**. You can adjust mount paths to other partitions according to your needs. 
- >- When the UEFI boot mode is used, the partition configuration in the x86_64 architecture must contain the mount path **/boot**, and the partition configuration in the AArch64 architecture must contain the mount path **/boot/efi**. + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > + > - You must configure the mount path to **/**. You can adjust mount paths to other partitions according to your needs. + > - When the UEFI boot mode is used, the partition configuration in the x86_64 architecture must contain the mount path **/boot**, and the partition configuration in the AArch64 architecture must contain the mount path **/boot/efi**. - partition_size: - The value types are as follows: + The value types are as follows: - - G/g: The unit of a partition size is GB, for example, 2G. - - M/m: The unit of a partition size is MB, for example, 300M. - - T/t: The unit of a partition size is TB, for example, 1T. - - MAX/max: The rest space of a hard disk is used to create a partition. This value can only be assigned to the last partition. + - G/g: The unit of a partition size is GB, for example, 2G. + - M/m: The unit of a partition size is MB, for example, 300M. + - T/t: The unit of a partition size is TB, for example, 1T. + - MAX/max: The rest space of a hard disk is used to create a partition. This value can only be assigned to the last partition. - >![](./public_sys-resources/icon-note.gif) **NOTE:** -> - >- A partition size value cannot contain decimal numbers. If there are decimal numbers, change the unit of the value to make the value an integer. For example, 1.5 GB should be changed to 1536 MB. - >- When the partition size is set to **MAX**/**max**, the size of the remaining partition cannot exceed the limit of the supported file system type (the default file system type is **ext4**, and the maximum size is **16T**). + > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > + > - A partition size value cannot contain decimal numbers. 
If there are decimal numbers, change the unit of the value to make the value an integer. For example, 1.5 GB should be changed to 1536 MB. + > - When the partition size is set to **MAX**/**max**, the size of the remaining partition cannot exceed the limit of the supported file system type (the default file system type is **ext4**, and the maximum size is **16T**). - partition_type: - The values of partition types are as follows: + The values of partition types are as follows: - - primary: primary partitions - - extended: extended partition (configure only *disk_ID* for this partition) - - logical: logical partitions + - primary: primary partitions + - extended: extended partition (configure only *disk_ID* for this partition) + - logical: logical partitions - file_system: - Currently, **ext4** and **vfat** file systems are supported. + Currently, **ext4** and **vfat** file systems are supported. -- [Secondary formatting flag]: - Indicates whether to format the disk during secondary installation. This parameter is optional. +- Secondary formatting flag: + Indicates whether to format the disk during secondary installation. This parameter is optional. - - The value can be **yes** or **no**. The default value is **no**. + - The value can be **yes** or **no**. The default value is **no**. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - > - >Secondary formatting indicates that openEuler has been installed on the disk before this installation. If the partition table configuration (partition size, mount point, and file type) used in the previous installation is the same as that used in the current installation, this flag can be used to configure whether to format the previous partitions, except the **/boot** and **/** partitions. If the target host is installed for the first time, this flag does not take effect, and all partitions with specified file systems are formatted. 
+ > ![](./public_sys-resources/icon-note.gif) **NOTE:** + > + > Secondary formatting indicates that openEuler has been installed on the disk before this installation. If the partition table configuration (partition size, mount point, and file type) used in the previous installation is the same as that used in the current installation, this flag can be used to configure whether to format the previous partitions, except the **/boot** and **/** partitions. If the target host is installed for the first time, this flag does not take effect, and all partitions with specified file systems are formatted. #### Configuring the Network @@ -712,14 +707,13 @@ STARTMODE="auto" The following table describes the parameters. -- | Parameter | Mandatory or Not| Value | Description | - | :-------- | -------- | :------------------------------------------------ | :----------------------------------------------------------- | - | BOOTPROTO | Yes | none / static / dhcp | **none**: No protocol is used for boot, and no IP address is assigned.
**static**: An IP address is statically assigned.
**dhcp**: An IP address is dynamically obtained using the dynamic host configuration protocol (DHCP).| - | DEVICE | Yes | Example: **eth1** | NIC name. | - | IPADDR | Yes | Example: **192.168.11.100** | IP address.
This parameter must be configured only when the value of **BOOTPROTO** is **static**.| - | NETMASK | Yes | - | Subnet mask.
This parameter must be configured only when the value of **BOOTPROTO** is **static**.| - | STARTMODE | Yes | manual / auto / hotplug / ifplugd / nfsroot / off | NIC start mode.
**manual**: A user runs the `ifup` command on a terminal to start an NIC.
**auto**/**hotplug**/**ifplug**/**nfsroot**: An NIC is started when the OS identifies it.
**off**: An NIC cannot be started in any situations.
For details about the parameters, run the `man ifcfg` command on the host that is used to create the ISO image.| - +| Parameter | Mandatory or Not| Value | Description | +| :-------- | -------- | :------------------------------------------------ | :----------------------------------------------------------- | +| BOOTPROTO | Yes | none / static / dhcp | **none**: No protocol is used for boot, and no IP address is assigned.
**static**: An IP address is statically assigned.
**dhcp**: An IP address is dynamically obtained using the Dynamic Host Configuration Protocol (DHCP).|
+| DEVICE | Yes | Example: **eth1** | NIC name. |
+| IPADDR | Yes | Example: **192.168.11.100** | IP address.<br/>
This parameter must be configured only when the value of **BOOTPROTO** is **static**.| +| NETMASK | Yes | - | Subnet mask.
This parameter must be configured only when the value of **BOOTPROTO** is **static**.| +| STARTMODE | Yes | manual / auto / hotplug / ifplugd / nfsroot / off | NIC start mode.
**manual**: A user runs the `ifup` command on a terminal to start an NIC.
**auto**/**hotplug**/**ifplugd**/**nfsroot**: An NIC is started when the OS identifies it.<br/>
**off**: An NIC cannot be started in any situation.<br/>
For details about the parameters, run the `man ifcfg` command on the host that is used to create the ISO image.| #### Configuring Kernel Parameters @@ -733,31 +727,31 @@ The meanings of the configurations are as follows (for details about other commo - net.ifnames=0 biosdevname=0 - Name the NIC in traditional mode. + Name the NIC in traditional mode. - crashkernel=512M - The memory space reserved for kdump is 512 MB. + The memory space reserved for kdump is 512 MB. - oops=panic panic=3 - The kernel panics when an oops error occurs, and the system restarts 3 seconds later. + The kernel panics when an oops error occurs, and the system restarts 3 seconds later. - softlockup_panic=1 - The kernel panics when a soft-lockup is detected. + The kernel panics when a soft-lockup is detected. - reserve_kbox_mem=16M - The memory space reserved for Kbox is 16 MB. + The memory space reserved for Kbox is 16 MB. - console=tty0 - Specifies **tty0** as the output device of the first virtual console. + Specifies **tty0** as the output device of the first virtual console. - crash_kexec_post_notifiers - After the system crashes, the function registered with the panic notification chain is called first, and then kdump is executed. + After the system crashes, the function registered with the panic notification chain is called first, and then kdump is executed. ### Creating an Image @@ -767,7 +761,7 @@ After customizing the operating system, you can use the `mkdliso` script to crea ##### Syntax -**mkdliso -p openEuler -c custom/cfg_openEuler [--minios yes|no|force] [--sec] [-h]** +**mkdliso -p openEuler -c custom/cfg_openEuler \[--minios yes|no|force] \[--sec] \[-h]** ##### Parameter Description @@ -775,52 +769,50 @@ After customizing the operating system, you can use the `mkdliso` script to crea | -------- | -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | | -p | Yes | Specifies the product name. 
| **openEuler** |
| -c | Yes | Specifies the relative path of the configuration file. | **custom/cfg_openEuler** |
-| --minios | No | Specifies whether to create the **initrd** file that is used to boot the system during system installation. | The default value is **yes**.<br/>
**yes**: The **initrd** file will be created when the command is executed for the first time. When a subsequent `mkdliso` is executed, the system checks whether the **initrd** file exists in the **usr_install/boot** directory using sha256 verification. If the **initrd** file exists, it is not created again. Otherwise, it is created.
**no**: The **initrd** file is not created. The **initrd** file used for system boot and running is the same.
**force**: The **initrd** file will be created forcibly, regardless of whether it exists in the **usr_install/boot** directory or not.| -| --sec | No | Specifies whether to perform security hardening on the generated ISO file.
If this parameter is not specified, the user should undertake the resultant security risks| N/A | +| --minios | No | Specifies whether to create the **initrd** file that is used to boot the system during system installation. | The default value is **yes**.
**yes**: The **initrd** file will be created when the command is executed for the first time. When a subsequent `mkdliso` is executed, the system checks whether the **initrd** file exists in the **usr_install/boot** directory using sha256 verification. If the **initrd** file exists, it is not created again. Otherwise, it is created.
**no**: The **initrd** file is not created. The **initrd** file used for system boot and running is the same.
**force**: The **initrd** file will be created forcibly, regardless of whether it exists in the **usr_install/boot** directory or not.| +| --sec | No | Specifies whether to perform security hardening on the generated ISO file.
If this parameter is not specified, the user should undertake the resultant security risks| N/A | | -h | No | Obtains help information. | N/A | #### Image Creation Guide To create an ISO image using`mkdliso`, perform the following steps: ->![](./public_sys-resources/icon-note.gif) NOTE: +> ![](./public_sys-resources/icon-note.gif) NOTE: > > - The absolute path to `mkdliso` must not contain spaces. Otherwise, the ISO image creation will fail. > - In the environment for creating the ISO image, the value of **umask** must be set to **0022**. 1. Run the `mkdliso` command as the **root** user to generate the ISO image file. The following command is used for reference: - ```shell - # sudo /opt/imageTailor/mkdliso -p openEuler -c custom/cfg_openEuler --sec - ``` - - After the command is executed, the created files are stored in the **/opt/imageTailor/result/{date}** directory, including **openEuler-aarch64.iso** and **openEuler-aarch64.iso.sha256**. - -2. Verify the integrity of the ISO image file. Assume that the date and time is **2022-03-21-14-48**. + ```shell + # sudo /opt/imageTailor/mkdliso -p openEuler -c custom/cfg_openEuler --sec + ``` - ```shell - $ cd /opt/imageTailor/result/2022-03-21-14-48/ - $ sha256sum -c openEuler-aarch64.iso.sha256 - ``` + After the command is executed, the created files are stored in the **/opt/imageTailor/result/{date}** directory, including **openEuler-aarch64.iso** and **openEuler-aarch64.iso.sha256**. - If the following information is displayed, the ISO image creation is complete. +2. Verify the integrity of the ISO image file. Assume that the date and time is **2022-03-21-14-48**. - ``` - openEuler-aarch64.iso: OK - ``` + ```shell + cd /opt/imageTailor/result/2022-03-21-14-48/ + sha256sum -c openEuler-aarch64.iso.sha256 + ``` - If the following information is displayed, the image is incomplete. The ISO image file is damaged and needs to be created again. 
+ If the following information is displayed, the ISO image creation is complete. - ```shell - openEuler-aarch64.iso: FAILED - sha256sum: WARNING: 1 computed checksum did NOT match - ``` + ```text + openEuler-aarch64.iso: OK + ``` -3. View the logs. + If the following information is displayed, the image is incomplete. The ISO image file is damaged and needs to be created again. - After an image is created, you can view logs as required (for example, when an error occurs during image creation). When an image is created for the first time, the corresponding log file and security hardening log file are compressed into a TAR package (the log file is named in the format of **sys_custom_log_{Date}.tar.gz**) and stored in the **result/log directory**. Only the latest 50 compressed log packages are stored in this directory. If the number of compressed log packages exceeds 50, the earliest files will be overwritten. + ```shell + openEuler-aarch64.iso: FAILED + sha256sum: WARNING: 1 computed checksum did NOT match + ``` +3. View the logs. + After an image is created, you can view logs as required (for example, when an error occurs during image creation). When an image is created for the first time, the corresponding log file and security hardening log file are compressed into a TAR package (the log file is named in the format of **sys_custom_log_{Date}.tar.gz**) and stored in the **result/log directory**. Only the latest 50 compressed log packages are stored in this directory. If the number of compressed log packages exceeds 50, the earliest files will be overwritten. ### Tailoring Time Zones @@ -838,7 +830,7 @@ Each subfolder represents an area. The current areas include continents, oceans, All time zones are in the format of *area/location*. For example, if China Standard Time is used in southern China, the time zone is Asia/Shanghai (location may not be the capital). 
The corresponding time zone file is: -``` +```texta /usr/share/zoneinfo/Asia/Shanghai ``` @@ -850,79 +842,79 @@ This section describes how to use imageTailor to create an ISO image. 1. Check whether the environment used to create the ISO meets the requirements. - ``` shell - $ cat /etc/openEuler-release - openEuler release 22.03 LTS - ``` + ``` shell + $ cat /etc/openEuler-release + openEuler release 22.03 LTS + ``` 2. Ensure that the root directory has at least 40 GB free space. - ```shell - $ df -h - Filesystem Size Used Avail Use% Mounted on - ...... - /dev/vdb 196G 28K 186G 1% / - ``` + ```shell + $ df -h + Filesystem Size Used Avail Use% Mounted on + ...... + /dev/vdb 196G 28K 186G 1% / + ``` 3. Install the imageTailor tailoring tool. For details, see [Installation](#installation). - ```shell - $ sudo yum install -y imageTailor - $ ll /opt/imageTailor/ - total 88K - drwxr-xr-x. 3 root root 4.0K Mar 3 08:00 custom - drwxr-xr-x. 10 root root 4.0K Mar 3 08:00 kiwi - -r-x------. 1 root root 69K Mar 3 08:00 mkdliso - drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 repos - drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 security-tool - ``` - -4. Configure a local repo source. - - ```shell - $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso - $ sudo mkdir -p /opt/openEuler_repo - $ sudo mount openEuler-22.03-LTS-everything-aarch64-dvd.iso /opt/openEuler_repo - mount: /opt/openEuler_repo: WARNING: source write-protected, mounted read-only. 
- $ sudo rm -rf /opt/imageTailor/repos/euler_base && sudo mkdir -p /opt/imageTailor/repos/euler_base - $ sudo cp -ar /opt/openEuler_repo/Packages/* /opt/imageTailor/repos/euler_base - $ sudo chmod -R 644 /opt/imageTailor/repos/euler_base - $ sudo ls /opt/imageTailor/repos/euler_base|wc -l - 2577 - $ sudo umount /opt/openEuler_repo && sudo rm -rf /opt/openEuler_repo - $ cd /opt/imageTailor - ``` - + ```shell + $ sudo yum install -y imageTailor + $ ll /opt/imageTailor/ + total 88K + drwxr-xr-x. 3 root root 4.0K Mar 3 08:00 custom + drwxr-xr-x. 10 root root 4.0K Mar 3 08:00 kiwi + -r-x------. 1 root root 69K Mar 3 08:00 mkdliso + drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 repos + drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 security-tool + ``` + +4. Configure a local repository. + + ```shell + $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso + $ sudo mkdir -p /opt/openEuler_repo + $ sudo mount openEuler-22.03-LTS-everything-aarch64-dvd.iso /opt/openEuler_repo + mount: /opt/openEuler_repo: WARNING: source write-protected, mounted read-only. + $ sudo rm -rf /opt/imageTailor/repos/euler_base && sudo mkdir -p /opt/imageTailor/repos/euler_base + $ sudo cp -ar /opt/openEuler_repo/Packages/* /opt/imageTailor/repos/euler_base + $ sudo chmod -R 644 /opt/imageTailor/repos/euler_base + $ sudo ls /opt/imageTailor/repos/euler_base|wc -l + 2577 + $ sudo umount /opt/openEuler_repo && sudo rm -rf /opt/openEuler_repo + $ cd /opt/imageTailor + ``` + 5. Change the **root** and GRUB passwords. - Replace **\${pwd}** with the encrypted password by referring to [Configuring Initial Passwords](#configuring-initial-passwords). + Replace **\${pwd}** with the encrypted password by referring to [Configuring Initial Passwords](#configuring-initial-passwords). 
- ```shell - $ cd /opt/imageTailor/ - $ sudo vi custom/cfg_openEuler/usr_file/etc/default/grub - GRUB_PASSWORD="${pwd1}" - $ - $ sudo vi kiwi/minios/cfg_minios/rpm.conf - + ```shell + $ cd /opt/imageTailor/ + $ sudo vi custom/cfg_openEuler/usr_file/etc/default/grub + GRUB_PASSWORD="${pwd1}" + $ + $ sudo vi kiwi/minios/cfg_minios/rpm.conf + - - $ - $ sudo vi custom/cfg_openEuler/rpm.conf - + + $ + $ sudo vi custom/cfg_openEuler/rpm.conf + - - ``` + + ``` 6. Run the tailoring command. - ```shell - $ sudo rm -rf /opt/imageTailor/result - $ sudo ./mkdliso -p openEuler -c custom/cfg_openEuler --minios force - ...... - Complete release iso file at: result/2022-03-09-15-31/openEuler-aarch64.iso - move all mkdliso log file to result/log/sys_custom_log_20220309153231.tar.gz - $ ll result/2022-03-09-15-31/ - total 889M - -rw-r--r--. 1 root root 889M Mar 9 15:32 openEuler-aarch64.iso - -rw-r--r--. 1 root root 87 Mar 9 15:32 openEuler-aarch64.iso.sha256 - ``` + ```shell + $ sudo rm -rf /opt/imageTailor/result + $ sudo ./mkdliso -p openEuler -c custom/cfg_openEuler --minios force + ...... + Complete release iso file at: result/2022-03-09-15-31/openEuler-aarch64.iso + move all mkdliso log file to result/log/sys_custom_log_20220309153231.tar.gz + $ ll result/2022-03-09-15-31/ + total 889M + -rw-r--r--. 1 root root 889M Mar 9 15:32 openEuler-aarch64.iso + -rw-r--r--. 
1 root root 87 Mar 9 15:32 openEuler-aarch64.iso.sha256 + ``` diff --git a/docs/en/Tools/CommunityTools/ImageCustom/imageTailor/public_sys-resources/icon-note.gif b/docs/en/Tools/CommunityTools/ImageCustom/imageTailor/public_sys-resources/icon-note.gif new file mode 100644 index 0000000000000000000000000000000000000000..6314297e45c1de184204098efd4814d6dc8b1cda Binary files /dev/null and b/docs/en/Tools/CommunityTools/ImageCustom/imageTailor/public_sys-resources/icon-note.gif differ diff --git a/docs/en/Tools/CommunityTools/ImageCustom/isocut/Menu/index.md b/docs/en/Tools/CommunityTools/ImageCustom/isocut/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..13bb368a7df97ed1a9bc18ec659469b66f52b197 --- /dev/null +++ b/docs/en/Tools/CommunityTools/ImageCustom/isocut/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [isocut User Guide]({{< relref "./isocut-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/ImageCustom/isocut/common-issues-and-solutions.md b/docs/en/Tools/CommunityTools/ImageCustom/isocut/common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..4f69f2f5f16a4f326b2e9abfb4b99f07ac85562a --- /dev/null +++ b/docs/en/Tools/CommunityTools/ImageCustom/isocut/common-issues-and-solutions.md @@ -0,0 +1,45 @@ +# Common Issues and Solutions + +## Issue 1: The System Fails to Be Installed Using an Image Tailored Based on the Default RPM Package List + +### Context + +When isocut is used to tailor an image, the **/etc/isocut/rpmlist** configuration file is used to specify the software packages to be installed. + +Images of different OS versions contain different software packages. As a result, some packages may be missing during image tailoring. +Therefore, the **/etc/isocut/rpmlist** file contains only the kernel software package by default, ensuring that the image can be successfully tailored. 
+ +### Symptom + +The ISO image is successfully tailored using the default configuration, but fails to be installed. + +An error message is displayed during the installation, indicating that packages are missing: + +![](./figures/lack_pack.png) + +### Possible Cause + +The ISO image tailored based on the default RPM package list lacks necessary RPM packages during installation. +The missing RPM packages are displayed in the error message, and may vary depending on the version. + +### Solution + +Add the missing packages. + +1. Find the missing RPM packages based on the error message. +2. Add the missing RPM packages to the **/etc/isocut/rpmlist** configuration file. +3. Tailor and install the ISO image again. + +For example, if the missing packages are those in the example error message, modify the **rpmlist** configuration file as follows: + +```shell +$ cat /etc/isocut/rpmlist +kernel.aarch64 +lvm2.aarch64 +chrony.aarch64 +authselect.aarch64 +shim.aarch64 +efibootmgr.aarch64 +grub2-efi-aa64.aarch64 +dosfstools.aarch64 +``` diff --git a/docs/en/docs/TailorCustom/figures/lack_pack.png b/docs/en/Tools/CommunityTools/ImageCustom/isocut/figures/lack_pack.png similarity index 100% rename from docs/en/docs/TailorCustom/figures/lack_pack.png rename to docs/en/Tools/CommunityTools/ImageCustom/isocut/figures/lack_pack.png diff --git a/docs/en/docs/TailorCustom/isocut-user-guide.md b/docs/en/Tools/CommunityTools/ImageCustom/isocut/isocut-user-guide.md similarity index 34% rename from docs/en/docs/TailorCustom/isocut-user-guide.md rename to docs/en/Tools/CommunityTools/ImageCustom/isocut/isocut-user-guide.md index dbafbe87057fcc05f8e6eb95b7f28466f38fcf0f..f38a9943f8d725b8d9e8c2949b2effee53e12df7 100644 --- a/docs/en/docs/TailorCustom/isocut-user-guide.md +++ b/docs/en/Tools/CommunityTools/ImageCustom/isocut/isocut-user-guide.md @@ -4,12 +4,12 @@ - [Software and Hardware Requirements](#software-and-hardware-requirements) - [Installation](#Installation) - [Tailoring and 
Customizing an Image](#tailoring-and-customizing-an-image) - - [Command Description](#command-description) - - [Software Package Source](#software-package-source) - - [Operation Guide](#operation-guide) - + - [Command Description](#command-description) + - [Software Package Source](#software-package-source) + - [Operation Guide](#operation-guide) ## Introduction + The size of an openEuler image is large, and the process of downloading or transferring an image is time-consuming. In addition, when an openEuler image is used to install the OS, all RPM packages contained in the image are installed. You cannot choose to install only the required software packages. In some scenarios, you do not need to install the full software package provided by the image, or you need to install additional software packages. Therefore, openEuler provides an image tailoring and customization tool. You can use this tool to customize an ISO image that contains only the required RPM packages based on an openEuler image. The software packages can be the ones contained in an official ISO image or specified in addition to meet custom requirements. @@ -30,72 +30,70 @@ The following uses openEuler 20.03 LTS SP3 on the AArch64 architecture as an exa 1. Ensure that openEuler 20.03 LTS SP3 has been installed on the computer. - ``` shell script + ```shell $ cat /etc/openEuler-release openEuler release 20.03 (LTS-SP3) - ``` + ``` 2. Download the ISO image (must be an **everything** image) of the corresponding architecture and save it to any directory (it is recommended that the available space of the directory be greater than 20 GB). In this example, the ISO image is saved to the **/home/isocut_iso** directory. 
- The download address of the AArch64 image is as follows: + The download address of the AArch64 image is as follows: - https://repo.openeuler.org/openEuler-20.03-LTS-SP3/ISO/aarch64/openEuler-20.03-LTS-SP3-everything-aarch64-dvd.iso + - > **Note:** - > The download address of the x86_64 image is as follows: - > - > https://repo.openeuler.org/openEuler-20.03-LTS-SP3/ISO/x86_64/openEuler-20.03-LTS-SP3-everything-x86_64-dvd.iso + > **Note:** + > The download address of the x86_64 image is as follows: + > + > 3. Create a **/etc/yum.repos.d/local.repo** file to configure the Yum source. The following is an example of the configuration file. **baseurl** is the directory for mounting the ISO image. - - ``` shell script - [local] - name=local - baseurl=file:///home/isocut_mount - gpgcheck=0 - enabled=1 - ``` - + + ```shell + [local] + name=local + baseurl=file:///home/isocut_mount + gpgcheck=0 + enabled=1 + ``` + 4. Run the following command as the **root** user to mount the image to the **/home/isocut_mount** directory (ensure that the mount directory is the same as **baseurl** configured in the **repo** file) as the Yum source: - ```shell - sudo mount -o loop /home/isocut_iso/openEuler-20.03-LTS-SP3-everything-aarch64-dvd.iso /home/isocut_mount - ``` + ```shell + sudo mount -o loop /home/isocut_iso/openEuler-20.03-LTS-SP3-everything-aarch64-dvd.iso /home/isocut_mount + ``` 5. Make the Yum source take effect. - ```shell - yum clean all - yum makecache - ``` + ```shell + yum clean all + yum makecache + ``` 6. Install the image tailoring and customization tool as the **root** user. - ```shell - sudo yum install -y isocut - ``` + ```shell + sudo yum install -y isocut + ``` 7. Run the following command as the **root** user to verify that the tool has been installed successfully: - ```shell + ```shell $ sudo isocut -h Checking input ... 
usage: isocut [-h] [-t temporary_path] [-r rpm_path] [-k file_path] source_iso dest_iso - + Cut openEuler iso to small one - + positional arguments: source_iso source iso image dest_iso destination iso image - + optional arguments: -h, --help show this help message and exit -t temporary_path temporary path -r rpm_path extern rpm packages path -k file_path kickstart file - ``` - - + ``` ## Tailoring and Customizing an Image @@ -120,8 +118,6 @@ Run the `isocut` command to use the image tailoring and customization tool. The | *source_iso* | Yes| Path and name of the ISO source image to be tailored. If no path is specified, the current path is used by default.| | *dest_iso* | Yes| Specifies the path and name of the new ISO image created by the tool. If no path is specified, the current path is used by default.| - - ### Software Package Source The RPM packages of the new image can be: @@ -130,16 +126,14 @@ The RPM packages of the new image can be: - Specified in addition. In this case, use the `-r` parameter to specify the path in which the RPM packages are stored when running the `isocut` command and add the RPM package names to the **/etc/isocut/rpmlist** configuration file. (See the name format above.) - - - >![](./public_sys-resources/icon-note.gif) **NOTE:** - > - >- When customizing an image, if an RPM package specified in the configuration file cannot be found, the RPM package will not be added to the image. - >- If the dependency of the RPM package is incorrect, an error may be reported when running the tailoring and customization tool. + > ![](./public_sys-resources/icon-note.gif)**NOTE:** + > + > - When customizing an image, if an RPM package specified in the configuration file cannot be found, the RPM package will not be added to the image. + > - If the dependency of the RPM package is incorrect, an error may be reported when running the tailoring and customization tool. 
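To make the two package sources concrete, here is a minimal sketch. The package name `tree.aarch64`, the working-copy path, and the RPM directory `/home/extra_rpms` are illustrative assumptions, not values from this guide:

```shell
# Build a working copy of the package list; the real file is /etc/isocut/rpmlist.
# "tree.aarch64" stands in for any additionally required package.
rpmlist=./rpmlist
printf '%s\n' 'kernel.aarch64' > "$rpmlist"
printf '%s\n' 'tree.aarch64' >> "$rpmlist"
cat "$rpmlist"
# The tailoring run (as root) would then point -r at the directory holding the
# extra RPM packages, for example:
#   isocut -r /home/extra_rpms source.iso dest.iso
```

If a listed package cannot be found in either the source image or the `-r` directory, it is simply not added to the new image, as the note above states.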
### kickstart Functions -You can use kickstart to install images automatically by using the `-k` parameter to specify a kickstart file when running the **isocut** command. +You can use kickstart to install images automatically by using the `-k` parameter to specify a kickstart file when running the `isocut` command. The isocut tool provides a kickstart template (**/etc/isocut/anaconda-ks.cfg**). You can modify the template as required. @@ -166,411 +160,32 @@ Obtain the initial password of the **root** user as follows (**root** permission 1. Add a user for generating the password, for example, **testUser**. - ``` shell script - $ sudo useradd testUser - ``` + ```shell + sudo useradd testUser + ``` 2. Set the password for the **testUser** user. Run the following command to set the password as prompted: - ``` shell script - $ sudo passwd testUser - Changing password for user testUser. - New password: - Retype new password: - passwd: all authentication tokens updated successfully. - ``` + ```shell + $ sudo passwd testUser + Changing password for user testUser. + New password: + Retype new password: + passwd: all authentication tokens updated successfully. + ``` 3. View the **/etc/shadow** file to obtain the encrypted password. The encrypted password is the string between the two colons (:) following the **testUser** user name. (******* is used as an example.) - ``` shell script - $ sudo cat /etc/shadow | grep testUser - testUser:***:19052:0:90:7:35:: - ``` - -4. Run the following command to replace the **pwd** field in the **/etc/isocut/anaconda-ks.cfg** file with the encrypted password (replace __***__ with the actual password): - ``` shell script - rootpw --iscrypted *** - ``` - -###### Configuring the Initial GRUB2 Password - -Add the following configuration to the **/etc/isocut/anaconda-ks.cfg** file to set the initial GRUB2 password: Replace **${pwd}** with the encrypted password. 
- -```shell -%addon com_huawei_grub_safe --iscrypted --password='${pwd}' -%end -``` - -> ![](./public_sys-resources/icon-note.gif) NOTE: -> -> - The **root** permissions are required for configuring the initial GRUB password. -> - The default user corresponding to the GRUB password is **root**. -> -> - The `grub2-set-password` command must exist in the system. If the command does not exist, install it in advance. - -1. Run the following command and set the GRUB2 password as prompted: - - ```shell - $ sudo grub2-set-password -o ./ - Enter password: - Confirm password: - grep: .//grub.cfg: No such file or directory - WARNING: The current configuration lacks password support! - Update your configuration with grub2-mkconfig to support this feature. - ``` - -2. After the command is executed, the **user.cfg** file is generated in the current directory. The content starting with **grub.pbkdf2.sha512** is the encrypted GRUB2 password. - - ```shell - $ sudo cat user.cfg - GRUB2_PASSWORD=grub.pbkdf2.sha512.*** - ``` - -3. Add the following information to the **/etc/isocut/anaconda-ks.cfg** file. Replace ******* with the encrypted GRUB2 password. - - ```shell - %addon com_huawei_grub_safe --iscrypted --password='grub.pbkdf2.sha512.***' - %end - ``` - -##### Configuring the %packages Field - -If you want to specify additional RPM packages and use kickstart for automatic installation, specify the RPM packages in the **%packages** field in both the **/etc/isocut/rpmlist** file and the kickstart file. - -This section describes how to specify RPM packages in the **/etc/isocut/anaconda-ks.cfg** file. - -The default configurations of **%packages** in the **/etc/isocut/anaconda-ks.cfg** file are as follows: - -```shell -%packages --multilib --ignoremissing -acl.aarch64 -aide.aarch64 -...... -NetworkManager.aarch64 -%end -``` - -Add specified RPM packages to the **%packages** configurations in the following format: - -*software_package_name.architecture*. 
For example, **kernel.aarch64**. - -```shell -%packages --multilib --ignoremissing -acl.aarch64 -aide.aarch64 -...... -NetworkManager.aarch64 -kernel.aarch64 -%end -``` - -### Operation Guide - - - ->![](./public_sys-resources/icon-note.gif) **NOTE:** -> ->- Do not modify or delete the default configuration items in the **/etc/isocut/rpmlist** file. ->- All `isocut` operations require **root** permissions. ->- The source image to be tailored can be a basic image or **everything** image. In this example, the basic image **openEuler-20.03-LTS-SP3-aarch64-dvd.iso** is used. ->- In this example, assume that the new image is named **new.iso** and stored in the **/home/result** directory, the temporary directory for running the tool is **/home/temp**, and the additional RPM packages are stored in the **/home/rpms** directory. - - - -1. Open the configuration file **/etc/isocut/rpmlist** and specify the RPM packages to be installed (from the official ISO image). - - ``` shell script - sudo vi /etc/isocut/rpmlist - ``` - -2. Ensure that the space of the temporary directory for running the image tailoring and customization tool is greater than 8 GB. The default temporary directory is** /tmp**. You can also use the `-t` parameter to specify another directory as the temporary directory. The path of the directory must be an absolute path. In this example, the **/home/temp** directory is used. The following command output indicates that the available drive space of the **/home** directory is 38 GB, which meets the requirements. - - ```shell - $ df -h - Filesystem Size Used Avail Use% Mounted on - devtmpfs 1.2G 0 1.2G 0% /dev - tmpfs 1.5G 0 1.5G 0% /dev/shm - tmpfs 1.5G 23M 1.5G 2% /run - tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup - /dev/mapper/openeuler_openeuler-root 69G 2.8G 63G 5% / - /dev/sda2 976M 114M 796M 13% /boot - /dev/mapper/openeuler_openeuler-home 61G 21G 38G 35% /home - ``` - -3. Tailor and customize the image. 
- - **Scenario 1**: All RPM packages of the new image are from the official ISO image. - - ``` shell script - $ sudo isocut -t /home/temp /home/isocut_iso/openEuler-20.03-LTS-SP3-aarch64-dvd.iso /home/result/new.iso - Checking input ... - Checking user ... - Checking necessary tools ... - Initing workspace ... - Copying basic part of iso image ... - Downloading rpms ... - Finish create yum conf - finished - Regenerating repodata ... - Checking rpm deps ... - Getting the description of iso image ... - Remaking iso ... - Adding checksum for iso ... - Adding sha256sum for iso ... - ISO cutout succeeded, enjoy your new image "/home/result/new.iso" - isocut.lock unlocked ... - ``` - If the preceding information is displayed, the custom image **new.iso** is successfully created. - - **Scenario 2**: The RPM packages of the new image are from the official ISO image and additional packages in **/home/rpms**. - - ```shell - sudo isocut -t /home/temp -r /home/rpms /home/isocut_iso/openEuler-20.03-LTS-SP3-aarch64-dvd.iso /home/result/new.iso - ``` - **Scenario 3**: The kickstart file is used for automatic installation. You need to modify the **/etc/isocut/anaconda-ks.cfg** file. ```shell - sudo isocut -t /home/temp -k /etc/isocut/anaconda-ks.cfg /home/isocut_iso/openEuler-20.03-LTS-SP3-aarch64-dvd.iso /home/result/new.iso + $ sudo cat /etc/shadow | grep testUser + testUser:***:19052:0:90:7:35:: ``` +4. Run the following command to replace the **pwd** field in the **/etc/isocut/anaconda-ks.cfg** file with the encrypted password (replace __***__ with the actual password): -## FAQs - -### The System Fails to Be Installed Using an Image Tailored Based on the Default RPM Package List - -#### Context - -When isocut is used to tailor an image, the **/etc/isocut/rpmlist** configuration file is used to specify the software packages to be installed. - -Images of different OS versions contain different software packages. As a result, some packages may be missing during image tailoring. 
-Therefore, the **/etc/isocut/rpmlist** file contains only the kernel software package by default, -ensuring that the image can be successfully tailored. - -#### Symptom - -The ISO image is successfully tailored using the default configuration, but fails to be installed. - -An error message is displayed during the installation, indicating that packages are missing: - -![](./figures/lack_pack.png) - -#### Possible Cause - -The ISO image tailored based on the default RPM package list lacks necessary RPM packages during installation. -The missing RPM packages are displayed in the error message, and may vary depending on the version. - -#### Solution - -1. Add the missing packages. - - 1. Find the missing RPM packages based on the error message. - 2. Add the missing RPM packages to the **/etc/isocut/rpmlist** configuration file. - 3. Tailor and install the ISO image again. - - For example, if the missing packages are those in the example error message, modify the **rpmlist** configuration file as follows: ```shell - $ cat /etc/isocut/rpmlist - kernel.aarch64 - lvm2.aarch64 - chrony.aarch64 - authselect.aarch64 - shim.aarch64 - efibootmgr.aarch64 - grub2-efi-aa64.aarch64 - dosfstools.aarch64 + rootpw --iscrypted *** ``` -# isocut Usage Guide - -- [Introduction](#introduction) -- [Software and Hardware Requirements](#software-and-hardware-requirements) -- [Installation](#Installation) -- [Tailoring and Customizing an Image](#tailoring-and-customizing-an-image) - - [Command Description](#command-description) - - [Software Package Source](#software-package-source) - - [Operation Guide](#operation-guide) - - -## Introduction -The size of an openEuler image is large, and the process of downloading or transferring an image is time-consuming. In addition, when an openEuler image is used to install the OS, all RPM packages contained in the image are installed. You cannot choose to install only the required software packages. 
- -In some scenarios, you do not need to install the full software package provided by the image, or you need to install additional software packages. Therefore, openEuler provides an image tailoring and customization tool. You can use this tool to customize an ISO image that contains only the required RPM packages based on an openEuler image. The software packages can be the ones contained in an official ISO image or specified in addition to meet custom requirements. - -This document describes how to install and use the openEuler image tailoring and customization tool. - -## Software and Hardware Requirements - -The hardware and software requirements of the computer to make an ISO file using the openEuler tailoring and customization tool are as follows: - -- The CPU architecture is AArch64 or x86_64. -- The operating system is openEuler 22.03 LTS. -- You are advised to reserve at least 30 GB drive space for running the tailoring and customization tool and storing the ISO image. - -## Installation - -The following uses openEuler 22.03 LTS on the AArch64 architecture as an example to describe how to install the ISO image tailoring and customization tool. - -1. Ensure that openEuler 22.03 LTS has been installed on the computer. - - ``` shell script - $ cat /etc/openEuler-release - openEuler release 22.03 LTS - ``` - -2. Download the ISO image (must be an **everything** image) of the corresponding architecture and save it to any directory (it is recommended that the available space of the directory be greater than 20 GB). In this example, the ISO image is saved to the **/home/isocut_iso** directory. - - The download address of the AArch64 image is as follows: - - https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso - - > **Note:** - > The download address of the x86_64 image is as follows: - > - > https://repo.openeuler.org/openEuler-22.03-LTS/ISO/x86_64/openEuler-22.03-LTS-everything-x86_64-dvd.iso - -3. 
Create a **/etc/yum.repos.d/local.repo** file to configure the Yum source. The following is an example of the configuration file. **baseurl** is the directory for mounting the ISO image. - - ``` shell script - [local] - name=local - baseurl=file:///home/isocut_mount - gpgcheck=0 - enabled=1 - ``` - -4. Run the following command as the **root** user to mount the image to the **/home/isocut_mount** directory (ensure that the mount directory is the same as **baseurl** configured in the **repo** file) as the Yum source: - - ```shell - sudo mount -o loop /home/isocut_iso/openEuler-22.03-LTS-everything-aarch64-dvd.iso /home/isocut_mount - ``` - -5. Make the Yum source take effect. - - ```shell - yum clean all - yum makecache - ``` - -6. Install the image tailoring and customization tool as the **root** user. - - ```shell - sudo yum install -y isocut - ``` - -7. Run the following command as the **root** user to check whether the tool has been installed successfully: - - ```shell - $ sudo isocut -h - Checking input ... - usage: isocut [-h] [-t temporary_path] [-r rpm_path] [-k file_path] source_iso dest_iso - - Cut EulerOS iso to small one - - positional arguments: - source_iso source iso image - dest_iso destination iso image - - optional arguments: - -h, --help show this help message and exit - -t temporary_path temporary path - -r rpm_path extern rpm packages path - -k file_path kickstart file - ``` - - - -## Tailoring and Customizing an Image - -This section describes how to use the image tailoring and customization tool to create an image by tailoring or adding RPM packages to an openEuler image. - -### Command Description - -#### Format - -Run the `isocut` command to use the image tailoring and customization tool. 
The command format is as follows: - -**isocut** [ --help | -h ] [ -t <*temp_path*> ] [ -r <*rpm_path*> ] [ -k <*file_path*> ] < *source_iso* > < *dest_iso* > - -#### Parameter Description - -| Parameter| Mandatory| Description| -| ------------ | -------- | -------------------------------------------------------- | -| --help \| -h | No| Queries the help information about the command.| -| -t <*temp_path*> | No| Specifies the temporary directory *temp_path* for running the tool, which is an absolute path. The default value is **/tmp**.| -| -r <*rpm_path*> | No| Specifies the path of the RPM packages to be added to the ISO image.| -| -k <*file_path*> | No | Specifies the kickstart template path if kickstart is used for automatic installation. | -| *source_iso* | Yes| Path and name of the ISO source image to be tailored. If no path is specified, the current path is used by default.| -| *dest_iso* | Yes| Specifies the path and name of the new ISO image created by the tool. If no path is specified, the current path is used by default.| - - - -### Software Package Source - -The RPM packages of the new image can be: - -- Packages contained in an official ISO image. In this case, the RPM packages to be installed are specified in the configuration file **/etc/isocut/rpmlist**. The configuration format is *software_package_name.architecture*. For example, **kernel.aarch64**. - -- Specified in addition. In this case, use the `-r` parameter to specify the path in which the RPM packages are stored when running the `isocut` command and add the RPM package names to the **/etc/isocut/rpmlist** configuration file. (See the name format above.) - - - - >![](./public_sys-resources/icon-note.gif) **NOTE:** - > - >- When customizing an image, if an RPM package specified in the configuration file cannot be found, the RPM package will not be added to the image. - >- If the dependency of the RPM package is incorrect, an error may be reported when running the tailoring and customization tool. 
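Since a package named in **/etc/isocut/rpmlist** that cannot be found is simply left out of the image, a quick pre-check against the mounted ISO can catch gaps early. This is a sketch under stated assumptions — the `find_missing` helper is hypothetical, and the **Packages** directory layout may differ between images:

```shell
# Sketch: for every "name.arch" entry in an rpmlist file, look for a
# matching RPM file in a package directory (for example, the mounted ISO)
# and report entries that have no match.
find_missing() {
    rpmlist=$1
    pkgdir=$2
    while IFS= read -r entry; do
        name=${entry%.*}    # strip the trailing ".architecture"
        arch=${entry##*.}   # keep only the architecture suffix
        ls "$pkgdir/$name"-*."$arch".rpm >/dev/null 2>&1 \
            || echo "missing: $entry"
    done < "$rpmlist"
}

# Example usage with the mount directory configured for the Yum source:
# find_missing /etc/isocut/rpmlist /home/isocut_mount/Packages
```

Any `missing:` line points at an entry to fix in the configuration file (or a package to supply via `-r`) before tailoring.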
- -### kickstart Functions - -You can use kickstart to install images automatically by using the `-k` parameter to specify a kickstart file when running the **isocut** command. - -The isocut tool provides a kickstart template (**/etc/isocut/anaconda-ks.cfg**). You can modify the template as required. - -#### Modifying the kickstart Template - -If you need to use the kickstart template provided by the isocut tool, perform the following modifications: - -- Configure the root user password and the GRUB2 password in the **/etc/isocut/anaconda-ks.cfg** file. Otherwise, the automatic image installation will pause during the password setting process, waiting for you to manually enter the passwords. -- If you want to specify additional RPM packages and use kickstart for automatic installation, specify the RPM packages in the **%packages** field in both the **/etc/isocut/rpmlist** file and the kickstart file. - -See the next section for details about how to modify the kickstart file. - -##### Configuring Initial Passwords - -###### Setting the Initial Password of the **root** User - -Set the initial password of the **root** user as follows in the **/etc/isocut/anaconda-ks.cfg** file. Replace **${pwd}** with the encrypted password. - -```shell -rootpw --iscrypted ${pwd} -``` - -Obtain the initial password of the **root** user as follows (**root** permissions are required): - -1. Add a user for generating the password, for example, **testUser**. - - ``` shell script - $ sudo useradd testUser - ``` - -2. Set the password for the **testUser** user. Run the following command to set the password as prompted: - - ``` shell script - $ sudo passwd testUser - Changing password for user testUser. - New password: - Retype new password: - passwd: all authentication tokens updated successfully. - ``` - -3. View the **/etc/shadow** file to obtain the encrypted password. The encrypted password is the string between the two colons (:) following the **testUser** user name. 
(******* is used as an example.) - - ``` shell script - $ sudo cat /etc/shadow | grep testUser - testUser:***:19052:0:90:7:35:: - ``` - -4. Run the following command to replace the **pwd** field in the **/etc/isocut/anaconda-ks.cfg** file with the encrypted password (replace __***__ with the actual password): - ``` shell script - rootpw --iscrypted *** - ``` ###### Configuring the Initial GRUB2 Password @@ -581,7 +196,7 @@ Add the following configuration to the **/etc/isocut/anaconda-ks.cfg** file to s %end ``` -> ![](./public_sys-resources/icon-note.gif) NOTE: +> ![](./public_sys-resources/icon-note.gif)NOTE: > > - The **root** permissions are required for configuring the initial GRUB password. > - The default user corresponding to the GRUB password is **root**. @@ -590,28 +205,28 @@ Add the following configuration to the **/etc/isocut/anaconda-ks.cfg** file to s 1. Run the following command and set the GRUB2 password as prompted: - ```shell - $ sudo grub2-set-password -o ./ - Enter password: - Confirm password: - grep: .//grub.cfg: No such file or directory - WARNING: The current configuration lacks password support! - Update your configuration with grub2-mkconfig to support this feature. - ``` + ```shell + $ sudo grub2-set-password -o ./ + Enter password: + Confirm password: + grep: .//grub.cfg: No such file or directory + WARNING: The current configuration lacks password support! + Update your configuration with grub2-mkconfig to support this feature. + ``` 2. After the command is executed, the **user.cfg** file is generated in the current directory. The content starting with **grub.pbkdf2.sha512** is the encrypted GRUB2 password. - ```shell - $ sudo cat user.cfg - GRUB2_PASSWORD=grub.pbkdf2.sha512.*** - ``` + ```shell + $ sudo cat user.cfg + GRUB2_PASSWORD=grub.pbkdf2.sha512.*** + ``` 3. Add the following information to the **/etc/isocut/anaconda-ks.cfg** file. Replace ******* with the encrypted GRUB2 password. 
- ```shell - %addon com_huawei_grub_safe --iscrypted --password='grub.pbkdf2.sha512.***' - %end - ``` + ```shell + %addon com_huawei_grub_safe --iscrypted --password='grub.pbkdf2.sha512.***' + %end + ``` ##### Configuring the %packages Field @@ -646,26 +261,22 @@ kernel.aarch64 ### Operation Guide - - ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif)**NOTE:** > ->- Do not modify or delete the default configuration items in the **/etc/isocut/rpmlist** file. ->- All `isocut` operations require **root** permissions. ->- The source image to be tailored can be a basic image or **everything** image. In this example, the basic image **openEuler-22.03-LTS-aarch64-dvd.iso** is used. ->- In this example, assume that the new image is named **new.iso** and stored in the **/home/result** directory, the temporary directory for running the tool is **/home/temp**, and the additional RPM packages are stored in the **/home/rpms** directory. - - +> - Do not modify or delete the default configuration items in the **/etc/isocut/rpmlist** file. +> - All `isocut` operations require **root** permissions. +> - The source image to be tailored can be a basic image or **everything** image. In this example, the basic image **openEuler-20.03-LTS-SP3-aarch64-dvd.iso** is used. +> - In this example, assume that the new image is named **new.iso** and stored in the **/home/result** directory, the temporary directory for running the tool is **/home/temp**, and the additional RPM packages are stored in the **/home/rpms** directory. 1. Open the configuration file **/etc/isocut/rpmlist** and specify the RPM packages to be installed (from the official ISO image). - ``` shell script - sudo vi /etc/isocut/rpmlist - ``` + ```shell + sudo vi /etc/isocut/rpmlist + ``` -2. Ensure that the space of the temporary directory for running the image tailoring and customization tool is greater than 8 GB. The default temporary directory is** /tmp**. 
You can also use the `-t` parameter to specify another directory as the temporary directory. The path of the directory must be an absolute path. In this example, the **/home/temp** directory is used. The following command output indicates that the available drive space of the **/home** directory is 38 GB, which meets the requirements. +2. Ensure that the space of the temporary directory for running the image tailoring and customization tool is greater than 8 GB. The default temporary directory is **/tmp**. You can also use the `-t` parameter to specify another directory as the temporary directory. The path of the directory must be an absolute path. In this example, the **/home/temp** directory is used. The following command output indicates that the available drive space of the **/home** directory is 38 GB, which meets the requirements. - ```shell + ```shell $ df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 1.2G 0 1.2G 0% /dev @@ -675,14 +286,14 @@ kernel.aarch64 /dev/mapper/openeuler_openeuler-root 69G 2.8G 63G 5% / /dev/sda2 976M 114M 796M 13% /boot /dev/mapper/openeuler_openeuler-home 61G 21G 38G 35% /home - ``` - + ``` + 3. Tailor and customize the image. **Scenario 1**: All RPM packages of the new image are from the official ISO image. - ``` shell script - $ sudo isocut -t /home/temp /home/isocut_iso/openEuler-22.03-LTS-aarch64-dvd.iso /home/result/new.iso + ```shell + $ sudo isocut -t /home/temp /home/isocut_iso/openEuler-20.03-LTS-SP3-aarch64-dvd.iso /home/result/new.iso Checking input ... Checking user ... Checking necessary tools ... @@ -700,61 +311,17 @@ kernel.aarch64 ISO cutout succeeded, enjoy your new image "/home/result/new.iso" isocut.lock unlocked ... ``` + If the preceding information is displayed, the custom image **new.iso** is successfully created. **Scenario 2**: The RPM packages of the new image are from the official ISO image and additional packages in **/home/rpms**. 
- - ```shell - sudo isocut -t /home/temp -r /home/rpms /home/isocut_iso/openEuler-22.03-LTS-aarch64-dvd.iso /home/result/new.iso - ``` - **Scenario 3**: The kickstart file is used for automatic installation. You need to modify the **/etc/isocut/anaconda-ks.cfg** file. + ```shell - sudo isocut -t /home/temp -k /etc/isocut/anaconda-ks.cfg /home/isocut_iso/openEuler-22.03-LTS-aarch64-dvd.iso /home/result/new.iso + sudo isocut -t /home/temp -r /home/rpms /home/isocut_iso/openEuler-20.03-LTS-SP3-aarch64-dvd.iso /home/result/new.iso ``` + **Scenario 3**: The kickstart file is used for automatic installation. You need to modify the **/etc/isocut/anaconda-ks.cfg** file. -## FAQs - -### The System Fails to Be Installed Using an Image Tailored Based on the Default RPM Package List - -#### Context - -When isocut is used to tailor an image, the **/etc/isocut/rpmlist** configuration file is used to specify the software packages to be installed. - -Images of different OS versions contain different software packages. As a result, some packages may be missing during image tailoring. -Therefore, the **/etc/isocut/rpmlist** file contains only the kernel software package by default, -ensuring that the image can be successfully tailored. - -#### Symptom - -The ISO image is successfully tailored using the default configuration, but fails to be installed. - -An error message is displayed during the installation, indicating that packages are missing: - -![](./figures/lack_pack.png) - -#### Possible Cause - -The ISO image tailored based on the default RPM package list lacks necessary RPM packages during installation. -The missing RPM packages are displayed in the error message, and may vary depending on the version. - -#### Solution - -1. Add the missing packages. - - 1. Find the missing RPM packages based on the error message. - 2. Add the missing RPM packages to the **/etc/isocut/rpmlist** configuration file. - 3. Tailor and install the ISO image again. 
- - For example, if the missing packages are those in the example error message, modify the **rpmlist** configuration file as follows: ```shell - $ cat /etc/isocut/rpmlist - kernel.aarch64 - lvm2.aarch64 - chrony.aarch64 - authselect.aarch64 - shim.aarch64 - efibootmgr.aarch64 - grub2-efi-aa64.aarch64 - dosfstools.aarch64 + sudo isocut -t /home/temp -k /etc/isocut/anaconda-ks.cfg /home/isocut_iso/openEuler-20.03-LTS-SP3-aarch64-dvd.iso /home/result/new.iso ``` diff --git a/docs/en/Tools/CommunityTools/ImageCustom/isocut/public_sys-resources/icon-note.gif b/docs/en/Tools/CommunityTools/ImageCustom/isocut/public_sys-resources/icon-note.gif new file mode 100644 index 0000000000000000000000000000000000000000..6314297e45c1de184204098efd4814d6dc8b1cda Binary files /dev/null and b/docs/en/Tools/CommunityTools/ImageCustom/isocut/public_sys-resources/icon-note.gif differ diff --git a/docs/en/Tools/CommunityTools/Menu/index.md b/docs/en/Tools/CommunityTools/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..e5ca879c143707897f26f2d34db6771482aaa274 --- /dev/null +++ b/docs/en/Tools/CommunityTools/Menu/index.md @@ -0,0 +1,9 @@ +--- +headless: true +--- +- [Image Building]({{< relref "./ImageCustom/Menu/index.md" >}}) +- [Compilation]({{< relref "./Compilation/Menu/index.md" >}}) +- [Performance Optimization]({{< relref "./Performance/Menu/index.md" >}}) +- [Migration]({{< relref "./Migration/Menu/index.md" >}}) +- [Virtualization]({{< relref "./Virtualization/Menu/index.md" >}}) +- [epkg]({{< relref "./epkg/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/Migration/Menu/index.md b/docs/en/Tools/CommunityTools/Migration/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..4ce86bfac6d63a54ac1bc474bf6c79f6f0219521 --- /dev/null +++ b/docs/en/Tools/CommunityTools/Migration/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [migration-tools User Guide]({{< relref 
"./migration-tools/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/Menu/index.md b/docs/en/Tools/CommunityTools/Migration/migration-tools/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..ed12be6ebd199fdeea585e46df26f5613138d872 --- /dev/null +++ b/docs/en/Tools/CommunityTools/Migration/migration-tools/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [migration-tools User Guide]({{< relref "./migration-tools-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/environment-check.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/environment-check.png new file mode 100644 index 0000000000000000000000000000000000000000..b03c4da5ba24e345a3614cd2c7d7e3b52983ad1a Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/environment-check.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/home-page.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/home-page.png new file mode 100644 index 0000000000000000000000000000000000000000..2fb66ae7dc8336d6e38437ba79175fe1c2207a5d Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/home-page.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/kernel.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/kernel.png new file mode 100644 index 0000000000000000000000000000000000000000..ecd5bbb3cf306e46da3448de46c4f9fc2e03eed2 Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/kernel.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/license.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/license.png new file mode 100644 index 
0000000000000000000000000000000000000000..41eb3b6aa755619b94965f8060a27c1940e6936e Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/license.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-check.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-check.png new file mode 100644 index 0000000000000000000000000000000000000000..776e9cafdf7e569cd33e1abd47217aa47c86f134 Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-check.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-complete.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-complete.png new file mode 100644 index 0000000000000000000000000000000000000000..c832bb723ea5400aa2fe1f932f1f5dcb5a3d5065 Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-complete.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-confirmation.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-confirmation.png new file mode 100644 index 0000000000000000000000000000000000000000..69567eabd886befe43fb8a512e7b6cde87fd0937 Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-confirmation.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-in-progress.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-in-progress.png new file mode 100644 index 0000000000000000000000000000000000000000..e5bae64aaa303a6e7950ec0fdeb46a66bfaa70a3 Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-in-progress.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-start.png 
b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-start.png new file mode 100644 index 0000000000000000000000000000000000000000..d8a2b58cd5ce8e559bd82c14400e16b4635d0ec3 Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-start.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-tools-conf.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-tools-conf.png new file mode 100644 index 0000000000000000000000000000000000000000..80520c44a86172c9f18e3d50930e0fcc25f411bb Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/migration-tools-conf.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/openeuler-migration-complete.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/openeuler-migration-complete.png new file mode 100644 index 0000000000000000000000000000000000000000..20c5a4afaf2b06fae865137b9d4efd53326a9611 Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/openeuler-migration-complete.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/prompt.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/prompt.png new file mode 100644 index 0000000000000000000000000000000000000000..224a79aece026a4fa2b003753f40fc0f1ebd4d0d Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/prompt.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/repo.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/repo.png new file mode 100644 index 0000000000000000000000000000000000000000..78437bbc839bff8906b535989bbac0c38a6263b7 Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/repo.png differ diff --git 
a/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/user-check.png b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/user-check.png new file mode 100644 index 0000000000000000000000000000000000000000..8ec0b5518f37532eedb5897bc368d9b58a1ccafc Binary files /dev/null and b/docs/en/Tools/CommunityTools/Migration/migration-tools/figures/user-check.png differ diff --git a/docs/en/Tools/CommunityTools/Migration/migration-tools/migration-tools-user-guide.md b/docs/en/Tools/CommunityTools/Migration/migration-tools/migration-tools-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..819845fe31eb071accbf7babc2c468de22f7644d --- /dev/null +++ b/docs/en/Tools/CommunityTools/Migration/migration-tools/migration-tools-user-guide.md @@ -0,0 +1,245 @@ +# migration-tools User Guide + +## Introduction + +This document outlines the usage of server migration software (migration-tools) for seamless migration from CentOS 7 and CentOS 8 systems to UnionTech OS Server (UOS). +The software features a web-based interface that simplifies the migration process through an intuitive graphical environment. + +### Deployment Method + +Install the server component on an openEuler 23.09 server and deploy the agent component on CentOS 7/CentOS 8 servers targeted for migration. + +#### Supported Systems for Migration + +1. Migration from AMD64 and AArch64 CentOS systems to UOS is supported. You need to prepare a complete repository for the target system before migration. + +2. openEuler migration: Only migration from CentOS 7.4 CUI to openEuler 20.03 LTS SP1 is supported. + +3. Systems with i686 architecture RPM packages should not be migrated as this will lead to migration failure. 
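Point 3 above can be checked before starting. The sketch below shows the filtering step on a simulated package list (an assumption for illustration); on the real source system the list would come from `rpm -qa` instead.

```shell
# On the machine to be migrated, the real query would be:
#   rpm -qa --queryformat '%{NAME}.%{ARCH}\n' | grep '\.i686$'
# Simulated package list so the filtering step can be shown standalone.
pkgs='glibc.x86_64
zlib.i686
bash.x86_64'
# Any line printed here names an i686 package that must be removed or
# replaced before running migration-tools.
echo "$pkgs" | grep '\.i686$'   # -> zlib.i686
```

If the filter prints nothing (`grep` exits with status 1), no i686 packages are installed and the constraint is satisfied.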
+ +| Source System | Target System | Software Repository | +| ----------------- | ----------------------- | ------------------------------- | +| CentOS 7.4 CUI | openEuler 20.03 LTS SP1 | openEuler public repository | +| CentOS 7.0 to 7.7 | UOS 1002a | UOS 1002a (complete repository) | +| CentOS 8.0 to 8.2 | UOS 1050a | UOS 1050a (complete repository) | + +### Usage Instructions + +#### Installation and Configuration + +##### Installing migration-tools-server + +- Disable the firewall. + + ``` shell + systemctl stop firewalld + ``` + +- Install migration-tools-server. + + ``` shell + yum install migration-tools-server -y + ``` + +- Edit the configuration file. + + ``` shell + vim /etc/migration-tools/migration-tools.conf + ``` + + ![Configuration File](./figures/migration-tools-conf.png) + +- Restart the migration-tools-server service. + + ``` shell + systemctl restart migration-tools-server + ``` + +- Distribute the agent package. Choose the appropriate agent package based on the migration system version. + + For CentOS 7 series: + + Replace `xx.xx.xx.xx` with the migration machine's IP address. + + ``` shell + scp -r /usr/lib/migration-tools-server/agent-rpm/el7 root@xx.xx.xx.xx:/root + ``` + + For CentOS 8 series: + + ``` shell + scp -r /usr/lib/migration-tools-server/agent-rpm/el8 root@xx.xx.xx.xx:/root + ``` + +#### Migrating to openEuler + +> **Note:** openEuler migration currently supports only standalone script-based migration. + +- Distribute the migration script from the server to the agent. + + ``` shell + cd /usr/lib/migration-tools-server/ut-Migration-tools-0.1/centos7/ + scp openeuler/centos72openeuler.py root@10.12.23.106:/root + ``` + +- Install the required dependencies for migration. + + ``` shell + yum install python3 dnf rsync yum-utils -y + ``` + +- Begin the migration process. 
+ + ``` shell + python3 /root/centos72openeuler.py + ``` + +- The system will automatically reboot after migration, and the process will be complete upon restart. + + ![openEuler Migration Complete](./figures/openeuler-migration-complete.png) + +#### Migrating to UOS + +##### Installing migration-tools-agent + +On the CentOS machine to be migrated, follow these steps: + +> **Note:** Migrating to UOS supports CentOS 7.0 to 7.7 (target: UOS 1002a) and CentOS 8.0 to 8.2 (target: UOS 1050a); see the repository table above. + +- Disable the firewall. + + ``` shell + systemctl stop firewalld + ``` + +- Install epel-release (some dependencies are included in the epel repository). + + ``` shell + yum install epel-release -y + ``` + +- Install the migration-tools-agent package (for CentOS 7 series, install the package corresponding to the architecture). + + For CentOS 7: + + ``` shell + cd /root/el7/x86_64 + yum install ./* -y + ``` + + For CentOS 8: + + ``` shell + cd /root/el8/ + yum install ./* -y + ``` + +- Edit the configuration file. + + ``` shell + vim /etc/migration-tools/migration-tools.conf + ``` + + ![Configuration File](./figures/migration-tools-conf.png) + +- Restart the migration-tools-agent service. + + ``` shell + systemctl restart migration-tools-agent + ``` + +##### UOS Migration Steps + +- Access the web interface. + + Once both the server and agent services are running, open a browser (Chrome is recommended) and navigate to `https://server_IP_address:9999`. + + ![Home Page](./figures/home-page.png) + +- Click "I have read and agree to this agreement," then proceed by clicking "Next." + ![License Agreement](./figures/license.png) + +- Review the migration prompt page and click "Next." + ![Prompt](./figures/prompt.png) + +- The environment check page will verify the system version and available disk space. Click "Next" once the check is complete. + +> **Note:** If the check stalls, ensure the agent firewall is disabled and both server and agent services are active.
Refresh the browser to restart the check. + +![Environment Check](./figures/environment-check.png) + +- The user check page will validate the username and password. Using the root user is recommended. Click "Next" to initiate the check, and the system will automatically proceed to the repository configuration page upon completion. + + ![User Check](./figures/user-check.png) + +Repository Configuration Page: + +- Enter the appropriate repository path based on the system to be migrated. + + CentOS 7: 1002a, CentOS 8: 1050a + +- Ensure the repository is complete; otherwise, the migration will fail. + +- Only one repository path needs to be entered in the input field. + +![Repo](./figures/repo.png) + +- After entering the repository, click "Next." Once the repository connectivity check is complete, proceed to the kernel version selection page. Select the 4.19 kernel and click "Next." + + ![Kernel](./figures/kernel.png) + +- The migration environment check page compares software package differences before and after migration and generates a report. After the check, you can export the report. + + > **Note:** The check typically takes about one hour. Please wait patiently. + + ![Migration Check](./figures/migration-check.png) + +- After the check, click "Next." A confirmation window for system migration will appear. Ensure the system is backed up, then click "Confirm" to start the migration. + + ![Migration Confirmation](./figures/migration-confirmation.png) + +- After clicking "Confirm," the system migration page will appear. + + ![Migration Start](./figures/migration-start.png) + +- Click "View Details" to monitor the migration progress. + + ![Migration in Progress](./figures/migration-in-progress.png) + +- Once migration is complete, the page will redirect to the completion page. From here, you can export the migration analysis report and logs. + +- The exported files can be found in the **/var/tmp/uos-migration/** directory on the server. Unzip the files to view them. 
+ + ![Migration Complete](./figures/migration-complete.png) + +- After migration, manually restart the agent machine and verify the migration status. + +###### Verification Steps + +Run the following command to verify if the OS has been successfully migrated to the target version. + +``` shell +uosinfo +``` + +If the output matches the expected information below, the migration is successful. + +1002a: + +``` shell +################################################# +Release: UnionTech OS Server release 20 (kongli) +Kernel : 4.19.0-91.77.97.uelc20.x86_64 +Build : UnionTech OS Server 20 1002c 20211228 x86_64 +################################################# +``` + +1050a: + +``` shell +################################################# +Release: UnionTech OS Server release 20 (kongzi) +Kernel : 4.19.0-91.82.88.uelc20.x86_64 +Build : UnionTech OS Server 20 1050a 20220214 x86_64 +################################################# +``` diff --git a/docs/en/Tools/CommunityTools/Performance/Menu/index.md b/docs/en/Tools/CommunityTools/Performance/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..fde66bd0d6f5506786e7e1846134c5b1e6e2ba32 --- /dev/null +++ b/docs/en/Tools/CommunityTools/Performance/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [A-Tune User Guide]({{< relref "../../../Server/Performance/SystemOptimization/A-Tune/Menu/index.md" >}}) +- [oeAware User Guide]({{< relref "../../../Server/Performance/TuningFramework/oeAware/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/Menu/index.md b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..43a3a573eeb550ce024b9f5509de938f9896e00b --- /dev/null +++ b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [EulerLauncher User Guide]({{< relref "./overall.md" 
>}}) + - [Installing and Running EulerLauncher on Windows]({{< relref "./win-user-manual.md" >}}) + - [Installing and Running EulerLauncher on macOS]({{< relref "./mac-user-manual.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/EulerLauncher/images/mac-content.jpg b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-content.jpg similarity index 100% rename from docs/en/docs/EulerLauncher/images/mac-content.jpg rename to docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-content.jpg diff --git a/docs/en/docs/EulerLauncher/images/mac-install.jpg b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-install.jpg similarity index 100% rename from docs/en/docs/EulerLauncher/images/mac-install.jpg rename to docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-install.jpg diff --git a/docs/en/docs/EulerLauncher/images/mac-start.jpg b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-start.jpg similarity index 100% rename from docs/en/docs/EulerLauncher/images/mac-start.jpg rename to docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-start.jpg diff --git a/docs/en/docs/EulerLauncher/images/mac-terminal.jpg b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-terminal.jpg similarity index 100% rename from docs/en/docs/EulerLauncher/images/mac-terminal.jpg rename to docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-terminal.jpg diff --git a/docs/en/docs/EulerLauncher/images/mac-visudo.jpg b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-visudo.jpg similarity index 100% rename from docs/en/docs/EulerLauncher/images/mac-visudo.jpg rename to docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/images/mac-visudo.jpg diff --git a/docs/en/docs/EulerLauncher/mac-user-manual.md b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/mac-user-manual.md similarity index 98% rename from 
docs/en/docs/EulerLauncher/mac-user-manual.md rename to docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/mac-user-manual.md index 45d8d71d6660794279d0e1e44dea9726ff4fffc3..134b28a578a1ef66f4e876d0cfe38abec2e8ec3a 100644 --- a/docs/en/docs/EulerLauncher/mac-user-manual.md +++ b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/mac-user-manual.md @@ -37,12 +37,12 @@ brew install wget EulerLauncher depends on QEMU to run on macOS. To improve user network experience, [vmnet framework][1] of macOS is used to provide VM network capabilities. Currently, administrator permissions are required for using vmnet. When using the QEMU backend to create VMs with vmnet network devices, you need to enable the administrator permission. EulerLauncher automatically uses the `sudo` command to implement this process during startup. Therefore, you need to configure the `sudo` password-free permission for the current user. If you do not want to perform this configuration, please stop using EulerLauncher. 1. On the macOS desktop, press **Shift**+**Command**+**U** to open the **Utilities** folder in **Go** and find **Terminal.app**. - + 2. Enter `sudo visudo` in the terminal to modify the sudo configuration file. Note that you may be required to enter the password in this step. Enter the password as prompted. 3. Find and replace `%admin ALL=(ALL) ALL` with `%admin ALL=(ALL) NOPASSWD: ALL`. - + 4. Press **ESC** and enter **:wq** to save the settings. @@ -85,7 +85,7 @@ The directory generated after the decompression contains the following files: EulerLauncher configurations are as follows: - ```shell + ```ini [default] log_dir = # Log file location (xxx.log) work_dir = # EulerLauncher working directory, which is used to store VM images and VM files. 
diff --git a/docs/en/docs/EulerLauncher/overall.md b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/overall.md similarity index 100% rename from docs/en/docs/EulerLauncher/overall.md rename to docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/overall.md diff --git a/docs/en/docs/EulerLauncher/win-user-manual.md b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/win-user-manual.md similarity index 92% rename from docs/en/docs/EulerLauncher/win-user-manual.md rename to docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/win-user-manual.md index be313bc7f06526d68e88e4586be4266cdc361a3d..aa74f01b92d287da7c163e96f49ff6d862e47b32 100644 --- a/docs/en/docs/EulerLauncher/win-user-manual.md +++ b/docs/en/Tools/CommunityTools/Virtualization/EulerLauncher/win-user-manual.md @@ -11,7 +11,7 @@ The directory generated after the decompression contains the following files: - **eulerlauncher.exe**: EulerLauncher CLI client. You can use this client to interact with the eulerlauncherd daemon process and perform operations on VMs and images. - **eulerlauncher.conf**: EulerLauncher configuration file, which must be stored in the same directory as **eulerlauncherd.exe**. Configure the file as follows: -```text +```ini [default] # Configure the directory for storing log files. log_dir = D:\eulerlauncher-workdir\logs @@ -54,9 +54,9 @@ After **eulerlauncherd.exe** is executed, the eulerlauncherd icon is displayed i 2. Download a remote image. ```PowerShell - eulerlauncher.exe download-image 22.03-LTS + eulerlauncher.exe download-image 23.09 - Downloading: 22.03-LTS, this might take a while, please check image status with "images" command. + Downloading: 23.09, this might take a while, please check image status with "images" command. ``` The image download request is an asynchronous request. The download is completed in the background. The time required depends on your network status. 
The overall image download process includes download, decompression, and format conversion. During the download, you can run the `image` command to view the download progress and image status at any time. @@ -89,7 +89,7 @@ After **eulerlauncherd.exe** is executed, the eulerlauncherd icon is displayed i Loading: 2309-load, this might take a while, please check image status with "images" command. ``` - Load the **openEuler-22.03-LTS-x86_64.qcow2.xz** file in the **D:\\** directory to the EulerLauncher system and name it **2203-load**. Similar to the download command, the load command is also an asynchronous command. You need to run the image list command to query the image status until the image status is **Ready**. Compared with directly downloading an image, loading an image is much faster. + Load the **openEuler-23.09-x86_64.qcow2.xz** file in the **D:\\** directory to the EulerLauncher system and name it **2309-load**. Similar to the download command, the load command is also an asynchronous command. You need to run the image list command to query the image status until the image status is **Ready**. Compared with directly downloading an image, loading an image is much faster. 
```PowerShell eulerlauncher.exe images diff --git a/docs/en/Tools/CommunityTools/Virtualization/Menu/index.md b/docs/en/Tools/CommunityTools/Virtualization/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..3ded130a0443af20afbe4c55bf3cfeca92e1b138 --- /dev/null +++ b/docs/en/Tools/CommunityTools/Virtualization/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [EulerLauncher User Guide]({{< relref "./EulerLauncher/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/epkg/Menu/index.md b/docs/en/Tools/CommunityTools/epkg/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..26959cbdee487ff14c3fc6302bb499223630e27c --- /dev/null +++ b/docs/en/Tools/CommunityTools/epkg/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [epkg User Guide]({{< relref "./epkgUse/Menu/index.md" >}}) +- [autopkg User Guide]({{< relref "./autopkg/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/epkg/autopkg/Menu/index.md b/docs/en/Tools/CommunityTools/epkg/autopkg/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..1db4c6c9813a1b0446bfdc5835b813a33ca56b7d --- /dev/null +++ b/docs/en/Tools/CommunityTools/epkg/autopkg/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [autopkg User Guide]({{< relref "./autopkg.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/epkg/autopkg/autopkg.md b/docs/en/Tools/CommunityTools/epkg/autopkg/autopkg.md new file mode 100644 index 0000000000000000000000000000000000000000..407a57c5db1a0d7ea79c839d78985a47947dc2d2 --- /dev/null +++ b/docs/en/Tools/CommunityTools/epkg/autopkg/autopkg.md @@ -0,0 +1,165 @@ +# Overview + +This software streamlines package integration for the openEuler community by automating the bulk import of source code repositories from public platforms like GitHub. 
It detects package dependencies and generates binary files automatically, eliminating the need for manual package writing and maintenance. With built-in support for CMake, autotools, Meson, Maven, and Python build systems, the software substantially increases the success rate of end-to-end package integration. + +## Installation and Uninstallation + +### 1. Installation + +Download the source code from the repository. + +```bash +git clone https://gitee.com/qiu-tangke/autopkg.git -b ${branch} +``` + +Navigate to the repository directory and install the software using `pip`. This is compatible with openEuler 22.03 LTS and newer versions. For other versions, ensure that a Python 3.8 or higher environment is available. + +```bash +pip install dist/autopkg-***-py3-none-any.whl +``` + +### 2. Uninstallation + +```bash +pip uninstall autopkg +``` + +## Quick Start + +### 1. Environment Preparation + +The software must run on the host machine and requires Docker container support. Prepare a Docker image of the openEuler OS using the following methods. + +#### Method 1: Direct Image Acquisition from Source Repository + +```bash +arch=$(uname -m) +if [ "$arch" == "aarch64" ]; then + wget https://cache-openeuler.obs.cn-north-4.myhuaweicloud.com/52f2b17e15ceeefecf5646d7711df7e94691ea1adb11884b926532ae52ab3c22/autopkg-latest_aarch64.tar.xz + docker load < autopkg-latest_aarch64.tar.xz +elif [ "$arch" == "x86_64" ]; then + wget https://cache-openeuler.obs.cn-north-4.myhuaweicloud.com/710a5f18188efc70bfa0119d0b35dcbb62cab911c9eb77b86dc6aebdbbfc69de/autopkg-latest_x86-64.tar.xz + docker load < autopkg-latest_x86-64.tar.xz +else + echo "Error: The system architecture is neither aarch64 nor x86_64, it is $arch."
+fi +``` + +#### Method 2: Manual Image Construction via Commands (in Case Method 1 Fails) + +```bash +arch=$(uname -m) +wget "https://repo.huaweicloud.com/openeuler/openEuler-23.03/docker_img/${arch}/openEuler-docker.${arch}.tar.xz" +docker load < "openEuler-docker.${arch}.tar.xz" +docker run -dti --privileged --name=autopkg_working --network=host openEuler-23.03:latest +docker exec -ti ${container_id} bash # The following commands are executed inside the container. +yum install -y git make gcc cmake python3-pip ruby ruby-devel rubygems-devel npm maven automake perl wget curl meson +cat >> /root/phase.sh << EOF +#!/usr/bin/env bash + +prep +build +install +EOF +exit # Exit the container. +docker commit ${container_id} # Save the container modifications; this prints the new image ID. +docker tag ${new_image_id} autopkg:latest # Name and tag the image. +``` + +### 2. Command Line + +```bash +autopkg --help +-g,--git-url: Provide the git repository URL, for example, 'https://***.git'. +-t,--tar-url: Provide the tar package URL. +-d,--dir: Specify the local repository path. +-n,--name: Specify the package name, used for interface request information. +-v,--version: Specify the version, used with "-n". +-l,--language: Specify the language, used with "-n". +-o,--output: Set the output file path. +-b,--build: Enable debug log mode. +-c,--config: Provide directly usable configuration information. +``` + +### 3. Common Commands + +#### A. Specifying Local Repository Path + +```bash +autopkg -d ${package_dir} -o ${output_path} +``` + +![](./images/dir_test.PNG) + +#### B. Specifying Source Package URL + +```bash +autopkg -t ${tar_url} -o ${output_path} +``` + +![](./images/tar_url_test.PNG) + +#### C.
Specifying Package Name without Compilation + +```bash +autopkg -n ${name} -v ${version} -l ${language} -o ${output_path} +``` + +![](./images/name_test.PNG) + +## Output File Description + +Upon successful package compilation, the system generates **package.yaml**, **phase.sh**, and **{package_name}.epkg**. If compilation is skipped, only **package.yaml** and **phase.sh** are produced. The output path is determined by the `--output` parameter, defaulting to **/tmp/autopkg/output**. + +### 1. package.yaml (Example: Jekyll, Ruby Compilation) + +Basic information of the package + +```yaml +meta: + summary: No detailed summary available + description: | + # [Jekyll](https://jekyllrb.com/) +name: jekyll +version: 4.3.3 +homepage: https://localhost:8080/jekyll-0.0.1.tar.gz +license: MIT +source: + '0': https://localhost:8080/jekyll-0.0.1.tar.gz # For local repositories, the URL is simulated by the local service +release: 0 +buildRequires: +- ruby +- ruby-devel +- rubygems-devel +``` + +### 2. phase.sh (Example: Jekyll, Ruby Compilation) + +Build script for the package + +```bash +#!/usr/bin/env bash + +prep() { + cd /root/workspace +} + +build() { + if [ -f *.gemspec ]; then + gem build *.gemspec + fi + mkdir -p usr/ + gem install -V --local --build-root usr --force --document=ri,rdoc *.gem +} + +install() { + rm -rf /opt/buildroot + mkdir /opt/buildroot + cp -r usr/ /opt/buildroot +} +``` + +### 3. ***.epkg + +Installation package of the software. 
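The output layout described above can be sanity-checked with a short script. This is a sketch only: the directory created here merely simulates the default **/tmp/autopkg/output**, and the file names are illustrative.

```shell
# Simulate an autopkg output directory; on a real run this is the
# --output path (default /tmp/autopkg/output).
out=$(mktemp -d)
touch "$out/package.yaml" "$out/phase.sh" "$out/jekyll.epkg"

# package.yaml and phase.sh are always produced; the .epkg appears
# only when compilation succeeded.
for f in package.yaml phase.sh; do
    [ -e "$out/$f" ] || echo "missing: $f"
done
if ls "$out"/*.epkg >/dev/null 2>&1; then
    echo "build produced an epkg package"
fi
```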
+![](./images/local_epkg.PNG) diff --git a/docs/en/Tools/CommunityTools/epkg/autopkg/images/dir_test.PNG b/docs/en/Tools/CommunityTools/epkg/autopkg/images/dir_test.PNG new file mode 100644 index 0000000000000000000000000000000000000000..3d223e1c3f7aca150b724746b43a931f77e6c6d9 Binary files /dev/null and b/docs/en/Tools/CommunityTools/epkg/autopkg/images/dir_test.PNG differ diff --git a/docs/en/Tools/CommunityTools/epkg/autopkg/images/local_epkg.PNG b/docs/en/Tools/CommunityTools/epkg/autopkg/images/local_epkg.PNG new file mode 100644 index 0000000000000000000000000000000000000000..7f5ecdf2948a661ab2d513f008ae72758706fe28 Binary files /dev/null and b/docs/en/Tools/CommunityTools/epkg/autopkg/images/local_epkg.PNG differ diff --git a/docs/en/Tools/CommunityTools/epkg/autopkg/images/name_test.PNG b/docs/en/Tools/CommunityTools/epkg/autopkg/images/name_test.PNG new file mode 100644 index 0000000000000000000000000000000000000000..95e5cbfc916919e0996172cd4c55433c55eed4c2 Binary files /dev/null and b/docs/en/Tools/CommunityTools/epkg/autopkg/images/name_test.PNG differ diff --git a/docs/en/Tools/CommunityTools/epkg/autopkg/images/tar_url_test.PNG b/docs/en/Tools/CommunityTools/epkg/autopkg/images/tar_url_test.PNG new file mode 100644 index 0000000000000000000000000000000000000000..80a49875c4eacbe74da42a859d7d9df251533fb8 Binary files /dev/null and b/docs/en/Tools/CommunityTools/epkg/autopkg/images/tar_url_test.PNG differ diff --git a/docs/en/Tools/CommunityTools/epkg/epkgUse/Menu/index.md b/docs/en/Tools/CommunityTools/epkg/epkgUse/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..c4a3bd54624bd244cfcef56c72e3a396e700dd6e --- /dev/null +++ b/docs/en/Tools/CommunityTools/epkg/epkgUse/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [epkg User Guide]({{< relref "./epkg-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/CommunityTools/epkg/epkgUse/epkg-user-guide.md 
b/docs/en/Tools/CommunityTools/epkg/epkgUse/epkg-user-guide.md new file mode 100644 index 0000000000000000000000000000000000000000..d88d3e2949955d7fb4be45cf9389bb966ea70394 --- /dev/null +++ b/docs/en/Tools/CommunityTools/epkg/epkgUse/epkg-user-guide.md @@ -0,0 +1,320 @@ +# epkg User Guide + +## Introduction + +This document explains how to initialize the working environment for the epkg package manager and how to use its basic features. All operation results in this document are demonstrated using a non-root user as an example. +Note: Currently, epkg packages are only compatible with the AArch64 architecture, and support for other architectures will be expanded in the future. + +## Quick Start + +The following examples demonstrate how to install different versions of software packages. + +```bash +# Download and run the epkg installer. +# During installation, you can choose between user/global installation modes to install epkg for the current user or all users. +# Only the root user can use the global installation mode. +wget https://repo.oepkgs.net/openeuler/epkg/rootfs/epkg-installer.sh +sh epkg-installer.sh + +# Uninstall epkg. +wget https://repo.oepkgs.net/openeuler/epkg/rootfs/epkg-uninstaller.sh +sh epkg-uninstaller.sh + +# Initialize epkg. +epkg init +bash # Re-execute .bashrc to update the PATH + +# Create environment 1. +epkg env create t1 +epkg install tree +tree --version +which tree + +# View repositories.
+[root@vm-4p64g ~]# epkg repo list +------------------------------------------------------------------------------------------------------------------------------------------------------ +channel | repo | url +------------------------------------------------------------------------------------------------------------------------------------------------------ +openEuler-22.03-LTS-SP3 | OS | https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-22.03-LTS-SP3/OS/aarch64/ +openEuler-24.09 | everything | https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64/ +openEuler-24.09 | OS | https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/OS/aarch64/ +------------------------------------------------------------------------------------------------------------------------------------------------------ + +# Create environment 2, specify a repository. +epkg env create t2 --repo openEuler-22.03-LTS-SP3 +epkg install tree +tree --version +which tree + +# Switch back to environment 1. +epkg env activate t1 +``` + +## epkg Usage + +```bash +Usage: + epkg install PACKAGE + epkg install [--env ENV] PACKAGE (under development) + epkg remove [--env ENV] PACKAGE (under development) + epkg upgrade [PACKAGE] (under development) + + epkg search PACKAGE (under development) + epkg list (under development) + + epkg env list + epkg env create|remove ENV + epkg env activate ENV + epkg env deactivate ENV + epkg env register|unregister ENV + epkg env history ENV (under development) + epkg env rollback ENV (under development) +``` + +Package installation: + +```bash +epkg env create $env # Create an environment. +epkg install $package # Install a package in the environment. +epkg env create $env2 --repo $repo # Create environment 2, specify a repository. +epkg install $package # Install a package in environment 2. 
+``` + +Package building: + +```bash +epkg build ${yaml_path}/$pkg_name.yaml +``` + +### Installing Software + +Function description: + +Install software in the current environment (confirm the current environment before operation). + +Command: + +```shell +epkg install ${package_name} +``` + +Example output: + +```shell +[root@2d785c36ee2e /]# epkg env activate t1 +Add common to path +Add t1 to path +Environment 't1' activated. +Environment 't1' activated. +[root@2d785c36ee2e /]# epkg install tree +EPKG_ENV_NAME: t1 +Caching repodata for: "OS" +Cache for "OS" already exists. Skipping... +Caching repodata for: "OS" +Cache for "OS" already exists. Skipping... +Caching repodata for: "everything" +Cache for "everything" already exists. Skipping... +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/FF/FFCRTKRFGFQ6S2YVLOSUF6PHSMRP7A2N__ncurses-libs__6.4__8.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/D5/D5BOEFTRBNV3E4EXBVXDSRNTIGLGWVB7__glibc-all-langpacks__2.38__34.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/VX/VX6SUOPGEVDWF6E5M2XBV53VS7IXSFM5__openEuler-repos__1.0__3.3.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download 
https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/LO/LO6RYZTBB2Q7ZLG6SWSICKGTEHUTBWUA__libselinux__3.5__3.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/EP/EPIEEK2P5IUPO4PIOJ2BXM3QPEFTZUCT__basesystem__12__3.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/2G/2GYDDYVWYYIDGOLGTVUACSBHYVRCRJH3__setup__2.14.5__2.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/HC/HCOKXTWQQUPCFPNI7DMDC6FGSDOWNACC__glibc__2.38__34.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/OJ/OJQAHJTY3Y7MZAXETYMTYRYSFRVVLPDC__glibc-common__2.38__34.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download 
https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/FJ/FJXG3K2TSUYXNU4SES2K3YSTA3AHHUMB__tree__2.1.1__1.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/KD/KDYRBN74LHKSZISTLMYOMTTFVLV4GPYX__readline__8.2__2.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/MN/MNJPSSBS4OZJL5EB6YKVFLMV4TGVBUBA__tzdata__2024a__2.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/S4/S4FBO2SOMG3GKP5OMDWP4XN5V4FY7OY5__bash__5.2.21__1.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/EJ/EJGRNRY5I6XIDBWL7H5BNYJKJLKANVF6__libsepol__3.5__3.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download 
https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/TZ/TZRQZRU2PNXQXHRE32VCADWGLQG6UL36__bc__1.07.1__12.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/WY/WYMBYMCARHXD62ZNUMN3GQ34DIWMIQ4P__filesystem__3.16__6.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/KQ/KQ2UE3U5VFVAQORZS4ZTYCUM4QNHBYZ7__openEuler-release__24.09__55.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/HD/HDTOK5OTTFFKSTZBBH6AIAGV4BTLC7VT__openEuler-gpg-keys__1.0__3.3.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/EB/EBLBURHOKKIUEEFHZHMS2WYF5OOKB4L3__pcre2__10.42__8.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download 
https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/YW/YW5WTOMKY2E5DLYYMTIDIWY3XIGHNILT__info__7.0.3__3.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start download https://repo.oepkgs.net/openeuler/epkg/channel/openEuler-24.09/everything/aarch64//store/E4/E4KCO6VAAQV5AJGNPW4HIXDHFXMR4EJV__ncurses-base__6.4__8.oe2409.epkg +############################################################################################################################################################################################################### 100.0% +start install FFCRTKRFGFQ6S2YVLOSUF6PHSMRP7A2N__ncurses-libs__6.4__8.oe2409 +start install D5BOEFTRBNV3E4EXBVXDSRNTIGLGWVB7__glibc-all-langpacks__2.38__34.oe2409 +start install VX6SUOPGEVDWF6E5M2XBV53VS7IXSFM5__openEuler-repos__1.0__3.3.oe2409 +start install LO6RYZTBB2Q7ZLG6SWSICKGTEHUTBWUA__libselinux__3.5__3.oe2409 +start install EPIEEK2P5IUPO4PIOJ2BXM3QPEFTZUCT__basesystem__12__3.oe2409 +start install 2GYDDYVWYYIDGOLGTVUACSBHYVRCRJH3__setup__2.14.5__2.oe2409 +start install HCOKXTWQQUPCFPNI7DMDC6FGSDOWNACC__glibc__2.38__34.oe2409 +start install OJQAHJTY3Y7MZAXETYMTYRYSFRVVLPDC__glibc-common__2.38__34.oe2409 +start install FJXG3K2TSUYXNU4SES2K3YSTA3AHHUMB__tree__2.1.1__1.oe2409 +start install KDYRBN74LHKSZISTLMYOMTTFVLV4GPYX__readline__8.2__2.oe2409 +start install MNJPSSBS4OZJL5EB6YKVFLMV4TGVBUBA__tzdata__2024a__2.oe2409 +start install S4FBO2SOMG3GKP5OMDWP4XN5V4FY7OY5__bash__5.2.21__1.oe2409 +start install EJGRNRY5I6XIDBWL7H5BNYJKJLKANVF6__libsepol__3.5__3.oe2409 +start install TZRQZRU2PNXQXHRE32VCADWGLQG6UL36__bc__1.07.1__12.oe2409 +start install WYMBYMCARHXD62ZNUMN3GQ34DIWMIQ4P__filesystem__3.16__6.oe2409 +start install KQ2UE3U5VFVAQORZS4ZTYCUM4QNHBYZ7__openEuler-release__24.09__55.oe2409 +start install 
HDTOK5OTTFFKSTZBBH6AIAGV4BTLC7VT__openEuler-gpg-keys__1.0__3.3.oe2409 +start install EBLBURHOKKIUEEFHZHMS2WYF5OOKB4L3__pcre2__10.42__8.oe2409 +start install YW5WTOMKY2E5DLYYMTIDIWY3XIGHNILT__info__7.0.3__3.oe2409 +start install E4KCO6VAAQV5AJGNPW4HIXDHFXMR4EJV__ncurses-base__6.4__8.oe2409 +``` + +### Listing Environments + +Function description: + +List all environments in epkg (under the `$EPKG_ENVS_ROOT` directory) and indicate the current environment. + +Command: + +```shell +epkg env list +``` + +Example output: + +```shell +[small_leek@19e784a5bc38 bin]# epkg env list +Available environments(sort by time): +w1 +main +common +You are in [main] now +``` + +### Creating an Environment + +Function description: + +Create a new environment. After successful creation, the new environment is activated by default, but is not globally registered. + +Command: + +```shell +epkg env create ${env_name} +``` + +Example output: + +```shell +[small_leek@b0e608264355 bin]# epkg env create work1 +YUM --installroot directory structure created successfully in: /root/.epkg/envs/work1/profile-1 +Environment 'work1' added to PATH. +Environment 'work1' activated. +Environment 'work1' created. +``` + +### Activating an Environment + +Function description: + +Activate the specified environment, refresh `EPKG_ENV_NAME` and `RPMDB_DIR` (used to point to `--dbpath` when software is installed into the specified environment), refresh `PATH` to include the specified environment and the common environment, and set the specified environment as the first priority. + +Command: + +```shell +epkg env activate ${env_name} +``` + +Example output: + +```shell +[small_leek@9d991d463f89 bin]# epkg env activate main +Environment 'main' activated +``` + +### Deactivating an Environment + +Function description: + +Deactivate the specified environment, refresh `EPKG_ENV_NAME` and `RPMDB_DIR`, refresh `PATH`, and default to the main environment. 
+ +Command: + +```shell +epkg env deactivate ${env_name} +``` + +Example output: + +```shell +[small_leek@398ec57ce780 bin]# epkg env deactivate w1 +Environment 'w1' deactivated. +``` + +### Registering an Environment + +Function description: + +Register the specified environment, persistently refresh `PATH` to include all registered environments in epkg, and set the specified environment as the first priority. + +Command: + +```shell +epkg env register ${env_name} +``` + +Example output: + +```shell +[small_leek@5042ae77dd75 bin]# epkg env register lkp +EPKG_ACTIVE_ENV: +Environment 'lkp' has been registered to PATH. +``` + +### Unregistering an Environment + +Function description: + +Unregister the specified environment, persistently refresh `PATH` to include all registered environments in epkg except the specified one. + +Command: + +```shell +epkg env unregister ${env_name} +``` + +Example output: + +```shell +[small_leek@69393675945d /]# epkg env unregister w4 +EPKG_ACTIVE_ENV: +Environment 'w4' has been unregistered from PATH. +``` + +### Building an epkg Package + +Function description: + +Build an epkg package using the YAML file provided by autopkg. 
+ +Command: + +```shell +epkg build ${yaml_path}/$pkg_name.yaml +``` + +Example output: + +```shell +[small_leek@69393675945d /]# epkg build /root/epkg/build/test/tree/package.yaml +pkg_hash: fbfqtsnza9ez1zk0cy23vyh07xfzsydh, dir: /root/.cache/epkg/build-workspace/result +Compress success: /root/.cache/epkg/build-workspace/epkg/fbfqtsnza9ez1zk0cy23vyh07xfzsydh__tree__2.1.1__0.oe2409.epkg +``` diff --git a/docs/en/Tools/DevOps/CodeManage/Menu/index.md b/docs/en/Tools/DevOps/CodeManage/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..f43529fa64dee2be32774df9cd3ec0fcf9b9f394 --- /dev/null +++ b/docs/en/Tools/DevOps/CodeManage/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [patch-tracking]({{< relref "./patch-tracking/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/DevOps/CodeManage/patch-tracking/Menu/index.md b/docs/en/Tools/DevOps/CodeManage/patch-tracking/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..22f2dc65c5236d34c32e3cc86129c6fa889cf1b7 --- /dev/null +++ b/docs/en/Tools/DevOps/CodeManage/patch-tracking/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [patch-tracking]({{< relref "./patch-tracking.md" >}}) +- [Common Issues and Solutions]({{< relref "./common-issues-and-solutions.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/DevOps/CodeManage/patch-tracking/common-issues-and-solutions.md b/docs/en/Tools/DevOps/CodeManage/patch-tracking/common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..5c20243357b920b4befa1343450c21687c7c39d0 --- /dev/null +++ b/docs/en/Tools/DevOps/CodeManage/patch-tracking/common-issues-and-solutions.md @@ -0,0 +1,15 @@ +# Common Issues and Solutions + +## Issue 1: Connection Refused upon Access to api.github.com + +### Symptom + +During the operation of patch-tracking, the following error message may occur: + +```text +Sep 21 22:00:10 localhost.localdomain 
patch-tracking[36358]: 2020-09-21 22:00:10,812 - patch_tracking.util.github_api - WARNING - HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /user (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) +``` + +### Cause Analysis + +The preceding problem is caused by the unstable network access between patch-tracking and GitHub API. Ensure that patch-tracking is operating in a stable network environment (for example, [Huawei Cloud Elastic Cloud Server](https://www.huaweicloud.com/intl/en-us/product/ecs.html)). diff --git a/docs/en/Tools/DevOps/CodeManage/patch-tracking/images/Maintainer.jpg b/docs/en/Tools/DevOps/CodeManage/patch-tracking/images/Maintainer.jpg new file mode 100644 index 0000000000000000000000000000000000000000..da0d5f1b5d928eca3a0d63795f59c55331136065 Binary files /dev/null and b/docs/en/Tools/DevOps/CodeManage/patch-tracking/images/Maintainer.jpg differ diff --git a/docs/en/Tools/DevOps/CodeManage/patch-tracking/images/PatchTracking.jpg b/docs/en/Tools/DevOps/CodeManage/patch-tracking/images/PatchTracking.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e12afd6227c18c333f289b9aa71abf608d8058a0 Binary files /dev/null and b/docs/en/Tools/DevOps/CodeManage/patch-tracking/images/PatchTracking.jpg differ diff --git a/docs/en/docs/userguide/patch-tracking.md b/docs/en/Tools/DevOps/CodeManage/patch-tracking/patch-tracking.md similarity index 54% rename from docs/en/docs/userguide/patch-tracking.md rename to docs/en/Tools/DevOps/CodeManage/patch-tracking/patch-tracking.md index 217111a0c88377fcdf5cef14e892af54d9b3a574..2ebaf2ed16040fe4c944f42a4109c87a537c2db0 100644 --- a/docs/en/docs/userguide/patch-tracking.md +++ b/docs/en/Tools/DevOps/CodeManage/patch-tracking/patch-tracking.md @@ -20,21 +20,21 @@ patch-tracking-cli is a command line tool located in the client. It invokes the a. 
Patch tracking service procedure - The procedure for handling the submitted patch is as follows: +The procedure for handling the submitted patch is as follows: - 1. Add the tracking item using the command line tool. - 2. Automatically obtain patch files from the upstream repository (for example, GitHub) that is configured for the tracking item. - 3. Create a temporary branch and submit the obtained patch file to the temporary branch. - 4. Automatically submit an issue to the corresponding repository and generate the PR associated with the issue. +1. Add the tracking item using the command line tool. +2. Automatically obtain patch files from the upstream repository (for example, GitHub) that is configured for the tracking item. +3. Create a temporary branch and submit the obtained patch file to the temporary branch. +4. Automatically submit an issue to the corresponding repository and generate the PR associated with the issue. ![PatchTracking](./images/PatchTracking.jpg) b. Procedure for the Maintainer to handle the submitted patch - The procedure for handling the submitted patch is as follows: +The procedure for handling the submitted patch is as follows: - 1. The Maintainer analyzes the PR. - 2. Execute the continuous integration (CI). After the CI is successfully executed, determine whether to merge the PR. +1. The Maintainer analyzes the PR. +2. Execute the continuous integration (CI). After the CI is successfully executed, determine whether to merge the PR. ![Maintainer](./images/Maintainer.jpg) @@ -77,23 +77,23 @@ Method 1: Install patch-tracking from the repo source. 2. Run the following command to install `patch-tracking`: - ```shell - dnf install patch-tracking - ``` + ```shell + dnf install patch-tracking + ``` Method 2: Install patch-tracking using the RPM package. 1. Install the required dependencies. 
- ```shell - dnf install python3-uWSGI python3-flask python3-Flask-SQLAlchemy python3-Flask-APScheduler python3-Flask-HTTPAuth python3-requests python3-pandas - ``` + ```shell + dnf install python3-uWSGI python3-flask python3-Flask-SQLAlchemy python3-Flask-APScheduler python3-Flask-HTTPAuth python3-requests python3-pandas + ``` 2. `patch-tracking-1.0.0-1.oe1.noarch.rpm` is used as an example. Run the following command to install patch-tracking. - ```shell - rpm -ivh patch-tracking-1.0.0-1.oe1.noarch.rpm - ``` + ```shell + rpm -ivh patch-tracking-1.0.0-1.oe1.noarch.rpm + ``` ### Generating a Certificate @@ -112,65 +112,65 @@ Configure the corresponding parameters in the configuration file. The path of th 1. Configure the service listening address. - ```text - LISTEN = "127.0.0.1:5001" - ``` + ```text + LISTEN = "127.0.0.1:5001" + ``` 2. GitHub Token is used to access the repository information hosted in the upstream open source software repository of GitHub. For details about how to create a GitHub token, see [Creating a personal access token](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token). - ```text - GITHUB_ACCESS_TOKEN = "" - ``` + ```text + GITHUB_ACCESS_TOKEN = "" + ``` 3. For a repository that is hosted on Gitee and needs to be tracked, configure a Gitee Token with the repository permission to submit patch files, issues, and PRs. - ```text - GITEE_ACCESS_TOKEN = "" - ``` + ```text + GITEE_ACCESS_TOKEN = "" + ``` 4. Scan the database as scheduled to detect whether new or modified tracking items exist and obtain upstream patches for the detected tracking items. Set the interval of scanning and the unit is second. - ```text - SCAN_DB_INTERVAL = 3600 - ``` + ```text + SCAN_DB_INTERVAL = 3600 + ``` 5. When the command line tool is running, you need to enter the user name and password hash value for the authentication for the POST interface. 
- ```text - USER = "admin" - PASSWORD = "" - ``` + ```text + USER = "admin" + PASSWORD = "" + ``` - > The default value of `USER` is `admin`. + > The default value of `USER` is `admin`. - Run the following command to obtain the password hash value. **Test@123** is the configured password. + Run the following command to obtain the password hash value. **Test@123** is the configured password. - ```shell - generate_password Test@123 - ``` + ```shell + generate_password Test@123 + ``` - > The password hash value must meet the following complexity requirements: - > - > - The length is more than or equal to 6 bytes. - > - The password must contain uppercase letters, lowercase letters, digits, and special characters (**~!@#%\^\*-\_=+**). + > The password hash value must meet the following complexity requirements: + > + > - The length is more than or equal to 6 bytes. + > - The password must contain uppercase letters, lowercase letters, digits, and special characters (**~!@#%\^\*-\_=+**). - Add the password hash value to the quotation marks of `PASSWORD = ""`. + Add the password hash value to the quotation marks of `PASSWORD = ""`. ### Starting the Patch Tracking Service You can use either of the following methods to start the service: - Using systemd. - - ```shell - systemctl start patch-tracking - ``` + + ```shell + systemctl start patch-tracking + ``` - Running the executable program. - - ```shell - /usr/bin/patch-tracking - ``` + + ```shell + /usr/bin/patch-tracking + ``` ## Tool Usage @@ -180,75 +180,75 @@ You can associate the software repository and branch to be tracked with the corr - Using CLI - Parameter description: + Parameter description: - > `--user`: User name to be authenticated for the POST interface. It is the same as the USER parameter in the **settings.conf** file. - > `--password`: Password to be authenticated for the POST interface. It is the password string corresponding to the PASSWORD hash value in the **settings.conf** file. 
- > `--server`: URL for starting the patch tracking service, for example, 127.0.0.1:5001. - > `--version\_control`: Control tool of the upstream repository version. Only GitHub is supported. - > `--repo`: Name of the repository to be tracked, in the format of organization/repository. - > - > `--branch`: Branch name of the repository to be tracked. - > `--scm\_repo`: Name of the upstream repository to be tracked, in the GitHub format of organization/repository. - > `--scm\_branch`: Branch of the upstream repository to be tracked. - > `--scm_commit`: Commit from which the tracking starts. By default, the tracking starts from the latest commit. - > `--enabled`: Indicates whether to automatically track the repository. + > `--user`: User name to be authenticated for the POST interface. It is the same as the USER parameter in the **settings.conf** file. + > `--password`: Password to be authenticated for the POST interface. It is the password string corresponding to the PASSWORD hash value in the **settings.conf** file. + > `--server`: URL for starting the patch tracking service, for example, 127.0.0.1:5001. + > `--version\_control`: Control tool of the upstream repository version. Only GitHub is supported. + > `--repo`: Name of the repository to be tracked, in the format of organization/repository. + > + > `--branch`: Branch name of the repository to be tracked. + > `--scm\_repo`: Name of the upstream repository to be tracked, in the GitHub format of organization/repository. + > `--scm\_branch`: Branch of the upstream repository to be tracked. + > `--scm_commit`: Commit from which the tracking starts. By default, the tracking starts from the latest commit. + > `--enabled`: Indicates whether to automatically track the repository. 
- For example: + For example: - ```shell - patch-tracking-cli add --server 127.0.0.1:5001 --user admin --password Test@123 --version_control github --repo testPatchTrack/testPatch1 --branch master --scm_repo BJMX/testPatch01 --scm_branch test --enabled true - ``` + ```shell + patch-tracking-cli add --server 127.0.0.1:5001 --user admin --password Test@123 --version_control github --repo testPatchTrack/testPatch1 --branch master --scm_repo BJMX/testPatch01 --scm_branch test --enabled true + ``` - Using a Specified File - Parameter description: + Parameter description: - > `--server`: URL for starting the patch tracking service, for example, 127.0.0.1:5001. - > `--user`: User name to be authenticated for the POST interface. It is the same as the USER parameter in the **settings.conf** file. - > `--password`: Password to be authenticated for the POST interface. It is the password string corresponding to the PASSWORD hash value in the **settings.conf** file. - > `--file`: YAML file path. + > `--server`: URL for starting the patch tracking service, for example, 127.0.0.1:5001. + > `--user`: User name to be authenticated for the POST interface. It is the same as the USER parameter in the **settings.conf** file. + > `--password`: Password to be authenticated for the POST interface. It is the password string corresponding to the PASSWORD hash value in the **settings.conf** file. + > `--file`: YAML file path. - Add the information about the repository, branch, version management tool, and whether to enable monitoring to the YAML file (for example, **tracking.yaml**). The file path is used as the command of the `--file` to invoke the input parameters. + Add the information about the repository, branch, version management tool, and whether to enable monitoring to the YAML file (for example, **tracking.yaml**). The file path is used as the command of the `--file` to invoke the input parameters. 
- For example: + For example: - ```shell - patch-tracking-cli add --server 127.0.0.1:5001 --user admin --password Test@123 --file tracking.yaml - ``` + ```shell + patch-tracking-cli add --server 127.0.0.1:5001 --user admin --password Test@123 --file tracking.yaml + ``` - The format of the YAML file is as follows. The content on the left of the colon (:) cannot be modified, and the content on the right of the colon (:) needs to be set based on the site requirements. + The format of the YAML file is as follows. The content on the left of the colon (:) cannot be modified, and the content on the right of the colon (:) needs to be set based on the site requirements. - ```shell - version_control: github - scm_repo: xxx/xxx - scm_branch: master - repo: xxx/xxx - branch: master - enabled: true - ``` + ```shell + version_control: github + scm_repo: xxx/xxx + scm_branch: master + repo: xxx/xxx + branch: master + enabled: true + ``` - > version\_control: Control tool of the upstream repository version. Only GitHub is supported. - > scm\_repo: Name of the upstream repository to be tracked, in the GitHub format of organization/repository. - > scm\_branch: Branch of the upstream repository to be tracked. - > repo: Name of the repository to be tracked, in the format of organization/repository. - > branch: Branch name of the repository to be tracked. - > enabled: Indicates whether to automatically track the repository. + > version\_control: Control tool of the upstream repository version. Only GitHub is supported. + > scm\_repo: Name of the upstream repository to be tracked, in the GitHub format of organization/repository. + > scm\_branch: Branch of the upstream repository to be tracked. + > repo: Name of the repository to be tracked, in the format of organization/repository. + > branch: Branch name of the repository to be tracked. + > enabled: Indicates whether to automatically track the repository. 
- Using a Specified Directory - Place multiple `xxx.yaml` files in a specified directory, such as the `test_yaml`, and run the following command to record the tracking items of all YAML files in the specified directory. + Place multiple `xxx.yaml` files in a specified directory, such as the `test_yaml`, and run the following command to record the tracking items of all YAML files in the specified directory. - Parameter description: + Parameter description: - > `--user`: User name to be authenticated for the POST interface. It is the same as the USER parameter in the **settings.conf** file. - > `--password`: Password to be authenticated for the POST interface. It is the password string corresponding to the PASSWORD hash value in the **settings.conf** file. - > `--server`: URL for starting the patch tracking service, for example, 127.0.0.1:5001. - > `--dir`: Path where the YAML file is stored. + > `--user`: User name to be authenticated for the POST interface. It is the same as the USER parameter in the **settings.conf** file. + > `--password`: Password to be authenticated for the POST interface. It is the password string corresponding to the PASSWORD hash value in the **settings.conf** file. + > `--server`: URL for starting the patch tracking service, for example, 127.0.0.1:5001. + > `--dir`: Path where the YAML file is stored. - ```shell - patch-tracking-cli add --server 127.0.0.1:5001 --user admin --password Test@123 --dir /home/Work/test_yaml/ - ``` + ```shell + patch-tracking-cli add --server 127.0.0.1:5001 --user admin --password Test@123 --dir /home/Work/test_yaml/ + ``` ### Querying a Tracking Item @@ -298,19 +298,3 @@ patch-tracking-cli delete --server 127.0.0.1:5001 --user admin --password Test@1 ### Checking Issues and PRs on Gitee Log in to Gitee and check the software project to be tracked. On the Issues and Pull Requests tab pages of the project, you can see the item named in `[patch tracking] TIME`, for example, the `[patch tracking] 20200713101548`. 
This item is the issue and PR of the patch file that is just generated. - -## FAQ - -### Connection Refused upon Access to api.github.com - -#### Symptom - -During the operation of patch-tracking, the following error message may occur: - -```text -Sep 21 22:00:10 localhost.localdomain patch-tracking[36358]: 2020-09-21 22:00:10,812 - patch_tracking.util.github_api - WARNING - HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /user (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) -``` - -#### Cause Analysis - -The preceding problem is caused by the unstable network access between patch-tracking and GitHub API. Ensure that patch-tracking is operating in a stable network environment (for example, [Huawei Cloud Elastic Cloud Server](https://www.huaweicloud.com/intl/en-us/product/ecs.html)). diff --git a/docs/en/Tools/DevOps/Menu/index.md b/docs/en/Tools/DevOps/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..7ef67bfd91ca8eaa32656df46a17bb45bf14292f --- /dev/null +++ b/docs/en/Tools/DevOps/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [Source Code Management]({{< relref "./CodeManage/Menu/index.md" >}}) +- [Package Management]({{< relref "./PackageManage/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/DevOps/packageManage/Menu/index.md b/docs/en/Tools/DevOps/packageManage/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..8ba6528d69769f976b4c757449caf2e56a5b469c --- /dev/null +++ b/docs/en/Tools/DevOps/packageManage/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [pkgship]({{< relref "./pkgship/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/DevOps/packageManage/pkgship/Menu/index.md b/docs/en/Tools/DevOps/packageManage/pkgship/Menu/index.md new file mode 100644 index 
0000000000000000000000000000000000000000..7a71ac229b6dc12d731afebd78fe26a7ef71b2b4 --- /dev/null +++ b/docs/en/Tools/DevOps/packageManage/pkgship/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [pkgship]({{< relref "./pkgship.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/DevOps/packageManage/pkgship/images/packagemanagement.png b/docs/en/Tools/DevOps/packageManage/pkgship/images/packagemanagement.png new file mode 100644 index 0000000000000000000000000000000000000000..6d314e2c6ad6bafd321d9f76cd6aa5f17a8cb394 Binary files /dev/null and b/docs/en/Tools/DevOps/packageManage/pkgship/images/packagemanagement.png differ diff --git a/docs/en/Tools/DevOps/packageManage/pkgship/images/panel.png b/docs/en/Tools/DevOps/packageManage/pkgship/images/panel.png new file mode 100644 index 0000000000000000000000000000000000000000..150eb8c8229f9e8cb47706f3b82f07516a505076 Binary files /dev/null and b/docs/en/Tools/DevOps/packageManage/pkgship/images/panel.png differ diff --git a/docs/en/Tools/DevOps/packageManage/pkgship/pkgship.md b/docs/en/Tools/DevOps/packageManage/pkgship/pkgship.md new file mode 100644 index 0000000000000000000000000000000000000000..f6a4a8798b276049b1313585c0cccf92be81e99a --- /dev/null +++ b/docs/en/Tools/DevOps/packageManage/pkgship/pkgship.md @@ -0,0 +1,435 @@ +# pkgship + + + +- [pkgship](#pkgship) + - [Introduction](#introduction) + - [Architecture](#architecture) + - [Using the Software Online](#using-the-software-online) + - [Downloading the Software](#downloading-the-software) + - [Operating Environment](#operating-environment) + - [Installing the Tool](#installing-the-tool) + - [Configuring Parameters](#configuring-parameters) + - [Starting and Stopping the Service](#starting-and-stopping-the-service) + - [Using the Tool](#using-the-tool) + - [Viewing and Dumping Logs](#viewing-and-dumping-logs) + - [pkgship-panel](#pkgship-panel) + + + +## Introduction + +The pkgship is a query tool used to manage the dependency of OS 
software packages and provide a complete dependency graph. The pkgship provides functions such as software package dependency query and lifecycle management. + +1. Software package basic information query: Allows community personnel to quickly obtain information about the name, version, and description of the software package. +2. Software package dependency query: Allows community personnel to understand the impact on software when software packages are introduced, updated, or deleted. + +## Architecture + +The system uses the Flask-RESTful development mode. The following figure shows the architecture: + +![avatar](./images/packagemanagement.png) + +## Using the Software Online + +pkgship provides a [public online service](https://pkgmanage.openeuler.org/Packagemanagement). You can directly use pkgship online if you do not need to customize your query. + +To use a custom data source, install, configure, and use pkgship by referring to the following sections. + +## Downloading the Software + +- The repo source is officially released at: +- You can obtain the source code at: +- You can obtain the RPM package at: + +## Operating Environment + +- Hardware configuration: + +| Item| Recommended Specification| +|----------|----------| +| CPU| 8 cores| +| Memory| 32 GB (minimum: 4 GB)| +| Disk space| 20 GB| +| Network bandwidth| 300 Mbit/s| +| I/O| 375 MB/s| + +- Software configuration: + +| Name| Specifications| +|----------|----------| +| Elasticsearch| 7.10.1. Single-node and cluster deployment is available.| +| Redis| 5.0.4 or later is recommended. You are advised to set the Redis memory limit to 3/4 of the system memory.| +| Python| 3.8 or later.| + +## Installing the Tool + +>Note: The software can run in Docker. In openEuler 21.09, due to environment restrictions, use the **--privileged** parameter when creating a Docker container. Otherwise, the software fails to start. This document will be updated after the adaptation. + +**1. 
Installing the pkgship** + +You can use either of the following methods to install the pkgship: + +- Method 1: Mount the repo source using DNF. + + Use DNF to mount the repo source where the pkgship is located (for details, see the [Application Development Guide](../../../../Server/Development/ApplicationDev/application-development.md)). Then run the following command to download and install the pkgship and its dependencies: + + ```bash + dnf install pkgship + ``` + +- Method 2: Install the RPM package. Download the RPM package of the pkgship and run the following command to install the pkgship (x.x-x indicates the version number and needs to be replaced with the actual one): + + ```bash + rpm -ivh pkgship-x.x-x.oe1.noarch.rpm + ``` + + Or + + ```bash + dnf install pkgship-x.x-x.oe1.noarch.rpm + ``` + +**2. Installing Elasticsearch and Redis** + +If Elasticsearch or Redis is not installed in the environment, you can execute the automatic installation script after the pkgship is installed. + +The default script path is as follows: + +```bash +/etc/pkgship/auto_install_pkgship_requires.sh +``` + +Run the following command: + +```bash +/bin/bash auto_install_pkgship_requires.sh elasticsearch +``` + +Or + +```bash +/bin/bash auto_install_pkgship_requires.sh redis +``` + +**3. Adding a User After the Installation** + +After the pkgship software is installed, the system automatically creates a user named **pkgshipuser** and a user group named **pkgshipuser**. They will be used when the service is started and running. + +## Configuring Parameters + +1. Configure the parameters in the configuration file. The default configuration file of the system is stored in **/etc/pkgship/package.ini**. Modify the configuration file as required. + + ```bash + vim /etc/pkgship/package.ini + ``` + + ```ini + [SYSTEM-System Configuration] + ; Path for storing the .yaml file imported during database initialization. The .yaml file records the location of the imported .sqlite file. 
+ init_conf_path=/etc/pkgship/conf.yaml + + ; Service query port + query_port=8090 + + ; Service query IP address + query_ip_addr=127.0.0.1 + + ; Address of the remote service. The command line can directly call the remote service to complete the data request. + remote_host=https://api.openeuler.org/pkgmanage + + ; Directory for storing temporary files during initialization and download. The directory will not be occupied for a long time. It is recommended that the available space be at least 1 GB. + temporary_directory=/opt/pkgship/tmp/ + + [LOG-Logs] + ; Service log storage path + log_path=/var/log/pkgship/ + + ; Log level. The options are as follows: + ; INFO DEBUG WARNING ERROR CRITICAL + log_level=INFO + + ; Maximum size of a service log file. If the size of a service log file exceeds the value of this parameter, the file is automatically compressed and dumped. The default value is 30 MB. + max_bytes=31457280 + + ; Maximum number of backup log files. The default value is 30. + backup_count=30 + + [UWSGI-Web Server Configuration] + ; Operation log path + daemonize=/var/log/pkgship-operation/uwsgi.log + ; Size of data transmitted between the front end and back end + buffer-size=65536 + ; Network connection timeout interval + http-timeout=600 + ; Service response time + harakiri=600 + + [REDIS-Cache Configuration] + ; The address of the Redis cache server can be the released domain or IP address that can be accessed. + ; The default link address is 127.0.0.1. + redis_host=127.0.0.1 + + ; Port number of the Redis cache server. The default value is 6379. + redis_port=6379 + + ; Maximum number of connections allowed by the Redis server at a time. + redis_max_connections=10 + + [DATABASE-Database] + ; Database access address. The default value is the IP address of the local host. + database_host=127.0.0.1 + + ; Database access port. The default value is 9200. + database_port=9200 + ``` + +2. Create a YAML configuration file to initialize the database. 
The **conf.yaml** file is stored in the **/etc/pkgship/** directory by default. The pkgship reads the name of the database to be created and the SQLite files to be imported based on this configuration. You can also configure the repo address of the SQLite files. An example of the **conf.yaml** file is as follows:

   ```yaml
   dbname: oe20.03 # Database name
   src_db_file: /etc/pkgship/repo/openEuler-20.09/src # Local path of the source package
   bin_db_file: /etc/pkgship/repo/openEuler-20.09/bin # Local path of the binary package
   priority: 1 # Database priority

   dbname: oe20.09
   src_db_file: https://repo.openeuler.org/openEuler-20.09/source # Repo source of the source package
   bin_db_file: https://repo.openeuler.org/openEuler-20.09/everything/aarch64 # Repo source of the binary package
   priority: 2
   ```

   > To change the storage path, change the value of **init\_conf\_path** in the **package.ini** file.
   >
   > The SQLite file path cannot be configured directly.
   >
   > The value of **dbname** can contain only lowercase letters, digits, periods (.), hyphens (-), underscores (_), and plus signs (+), and must start and end with a lowercase letter or digit.

## Starting and Stopping the Service

The pkgship can be started and stopped in two modes: systemctl mode and pkgshipd mode. The systemctl mode provides an automatic restart mechanism when the service exits abnormally. You can run any of the following commands:

```bash
systemctl start pkgship.service # Start the service.

systemctl stop pkgship.service # Stop the service.

systemctl restart pkgship.service # Restart the service.
```

```bash
pkgshipd start # Start the service.

pkgshipd stop # Stop the service.
```

> Only one mode is supported in each start/stop period. The two modes cannot be used at the same time.
>
> The pkgshipd startup mode can be used only by the **pkgshipuser** user.
>
> If the **systemctl** command is not supported in the Docker environment, run the **pkgshipd** command to start or stop the service.

## Using the Tool

1. Initialize the database.

   > Application scenario: After the service is started, to query the package information and dependency in the corresponding database (for example, oe20.03 and oe20.09), you need to import the SQLite files (including the source code library and binary library) generated by **createrepo** to the service. Then insert the generated JSON body of the package information into the corresponding database of Elasticsearch. The database name is the value of **dbname-source/binary** generated based on the value of **dbname** in the **conf.yaml** file.

   ```bash
   pkgship init [-filepath path]
   ```

   > Parameter description:
   > **-filepath**: (Optional) Specifies the path of the initialization configuration file **config.yaml**. You can use either a relative path or an absolute path. If no parameter is specified, the default configuration is used for initialization.

2. Query a single package.

   You can query details about a source package or binary package (**packagename**) in the specified **database** table.

   > Application scenario: You can query the detailed information about the source package or binary package in a specified database.

   ```bash
   pkgship pkginfo $packageName $database [-s]
   ```

   > Parameter description:
   > **packagename**: (Mandatory) Specifies the name of the software package to be queried.
   > **database**: (Mandatory) Specifies the database name.
   >
   > **-s**: (Optional) Queries the source package (`src`). If this parameter is not specified, the binary package (`bin`) information is queried by default.

3. Query all packages.

   Query information about all packages in the database.

   > Application scenario: You can query information about all software packages in a specified database.
   ```bash
   pkgship list $database [-s]
   ```

   > Parameter description:
   > **database**: (Mandatory) Specifies the database name.
   > **-s**: (Optional) Queries the source package (`src`). If this parameter is not specified, the binary package (`bin`) information is queried by default.

4. Query the installation dependency.

   Query the installation dependencies of the binary package (**binaryName**).

   > Application scenario: To install the binary package A, you need to install B, the installation dependency of A, then C, the installation dependency of B, and so on. A can be installed only after all of these installation dependencies are installed in the system. Therefore, before installing the binary package A, you may need to query all installation dependencies of A. You can run the following command to query multiple databases based on the default priority of the platform, or to customize the database query priority.

   ```bash
   pkgship installdep [$binaryName $binaryName1 $binaryName2...] [-dbs] [db1 db2...] [-level] $level
   ```

   > Parameter description:
   > **binaryName**: (Mandatory) Specifies the name of the dependent binary package to be queried. Multiple packages can be transferred.
   >
   > **-dbs**: (Optional) Specifies the priority of the databases to be queried. If this parameter is not specified, the databases are queried based on the default priority.
   >
   > **-level**: (Optional) Specifies the dependency level to be queried. If this parameter is not specified, the default value **0** is used, indicating that all levels are queried.

5. Query the compilation dependency.

   Query all compilation dependencies of the source code package (**sourceName**).

   > Application scenario: To compile the source code package A, you need to install B, the compilation dependency package of A. To install B, you need to obtain all installation dependency packages of B.
Therefore, before compiling the source code package A, you need to query the compilation dependencies of A and all installation dependencies of these compilation dependencies. You can run the following command to query multiple databases based on the default priority of the platform, or to customize the database query priority.

   ```bash
   pkgship builddep [$sourceName $sourceName1 $sourceName2..] [-dbs] [db1 db2 ..] [-level] $level
   ```

   > Parameter description:
   > **sourceName**: (Mandatory) Specifies the name of the source package on which the compilation depends. Multiple packages can be queried.
   >
   > **-dbs**: (Optional) Specifies the priority of the databases to be queried. If this parameter is not specified, the databases are queried based on the default priority.
   >
   > **-level**: (Optional) Specifies the dependency level to be queried. If this parameter is not specified, the default value **0** is used, indicating that all levels are queried.

6. Query the self-compilation and self-installation dependencies.

   Query the installation and compilation dependencies of a specified binary package (**binaryName**) or source package (**sourceName**). In the command, **\[pkgName]** indicates the name of the binary package or source package to be queried.

   - When querying a binary package, you can query all installation dependencies of the binary package, the compilation dependencies of the source package corresponding to the binary package, and all installation dependencies of these compilation dependencies.
   - When querying a source package, you can query its compilation dependencies, all installation dependencies of these compilation dependencies, and all installation dependencies of the binary packages generated by the source package.

   In addition, you can run this command together with the corresponding parameters to query the self-compilation dependency of a software package and the dependency of a subpackage.
   > Application scenario: If you want to introduce a new software package based on the existing version library, you need to introduce all compilation and installation dependencies of the software package. You can run this command to query these two dependency types at the same time to learn which packages the new software package introduces. Both binary packages and source packages can be queried.

   ```bash
   pkgship selfdepend [$pkgName1 $pkgName2 $pkgName3 ..] [-dbs] [db1 db2..] [-b] [-s] [-w]
   ```

   > Parameter description:
   >
   > **pkgName**: (Mandatory) Specifies the name of the software package on which the installation depends. Multiple software packages can be transferred.
   >
   > **-dbs**: (Optional) Specifies the priority of the databases to be queried. If this parameter is not specified, the databases are queried based on the default priority.
   >
   > **-b**: (Optional) Specifies that the packages to be queried are binary packages. If this parameter is not specified, source packages are queried by default.
   >
   > **-s**: (Optional) If **-s** is specified, all installation dependencies, all compilation dependencies (that is, the compilation dependencies of the source packages on which the compilation depends), and all installation dependencies of these compilation dependencies are queried. If **-s** is not specified, all installation dependencies and layer-1 compilation dependencies of the software package, as well as all installation dependencies of the layer-1 compilation dependencies, are queried.
   >
   > **-w**: (Optional) If **-w** is specified, when a binary package is introduced, the query result displays the source package corresponding to the binary package and all binary packages generated by that source package. If **-w** is not specified, only the corresponding source package is displayed in the query result when a binary package is introduced.

7. Query dependency.
   Query the packages that depend on the software package (**pkgName**) in a database (**dbName**).

   > Application scenario: You can run this command to query the software packages that will be affected by the upgrade or deletion of the source package A. This command displays the source packages (for example, B) that depend, for compilation, on the binary packages generated by source package A (or on the input binary package itself). It also displays the binary packages (for example, C1) that depend on A for installation. It then queries the source packages (for example, D) that depend on the binary packages generated by B or C1 for compilation, and the binary packages (for example, E1) that depend on them for installation. This process continues until all packages that depend on the generated binary packages have been traversed.

   ```bash
   pkgship bedepend dbName [$pkgName1 $pkgName2 $pkgName3] [-w] [-b] [-install/build]
   ```

   > Parameter description:
   >
   > **dbName**: (Mandatory) Specifies the name of the repository whose dependencies need to be queried. Only one repository can be queried each time.
   >
   > **pkgName**: (Mandatory) Specifies the name of the software package to be queried. Multiple software packages can be queried.
   >
   > **-w**: (Optional) If **-w** is not specified, the query result does not contain the subpackages of the corresponding source package by default. If **-w** is specified, not only the dependency of binary package C1 is queried, but also the dependencies of the other binary packages (such as C2 and C3) generated by source package C corresponding to C1.
   >
   > **-b**: (Optional) Specifies that the package to be queried is a binary package. By default, the source package is queried.
   >
   > **-install/build**: (Optional) `-install` indicates that installation dependencies are queried. `-build` indicates that build dependencies are queried. By default, all dependencies are queried.
`-install` and `-build` are mutually exclusive.

8. Query the database information.

   > Application scenario: Check which databases are initialized in Elasticsearch. This function returns the list of initialized databases based on their priority.

   `pkgship dbs`

9. Obtain the version number.

   > Application scenario: Obtain the version number of the pkgship software.

   `pkgship -v`

## Viewing and Dumping Logs

**Viewing Logs**

When the pkgship service is running, two types of logs are generated: service logs and operation logs.

1. Service logs:

   Path: **/var/log/pkgship/log\_info.log**. You can customize the path through the **log\_path** field in the **package.ini** file.

   Function: This log records the internal running of the code to facilitate fault locating.

   Permission: The permissions on the path and the log file are 755 and 644, respectively. Common users can view the log file.

2. Operation logs:

   Path: **/var/log/pkgship-operation/uwsgi.log**. You can customize the path through the **daemonize** field in the **package.ini** file.

   Function: This log records user operation information, including the IP address, access time, URL, and result, to facilitate subsequent queries and record attacker information.

   Permission: The permissions on the path and the log file are 700 and 644, respectively. Only the **root** and **pkgshipuser** users can view the log file.

**Dumping Logs**

1. Service log dumping:

   - Dumping mechanism

     The built-in rotation mechanism of the Python logging module is used to back up logs based on log size.

     > The following items in the **package.ini** file configure the capacity and the number of backups of each log.
     ```ini
     ; Maximum size of each log file, in bytes. The default is 30 MB.
     max_bytes=31457280

     ; Number of old log files to keep. The default is 30.
     backup_count=30
     ```

   - Dumping process

     After a log is written, if the size of the log file exceeds the configured log capacity, the log file is automatically compressed and dumped. The compressed file name is **log\_info.log.***x***.gz**, where *x* is a number. A smaller number indicates a later backup.

     When the number of backup log files reaches the threshold, the earliest backup log file is deleted and the latest compressed log file is backed up.

2. Operation log dumping:

   - Dumping mechanism

     A script is used to dump data by time. Data is dumped once a day and is retained for 30 days. Customized configuration is not supported.

     > The script is stored in **/etc/pkgship/uwsgi\_logrotate.sh**.

   - Dumping process

     When the pkgship is started, the script for dumping data runs in the background. From the startup, dumping and compression are performed every other day. A total of 30 compressed files are retained. The compressed file name is **uwsgi.log-20201010*x*.zip**, where *x* indicates the hour when the file is compressed.

     After the pkgship is stopped, the script for dumping data is stopped and data is not dumped. When the pkgship is started again, the script for dumping data is executed again.

## pkgship-panel

### Introduction

pkgship-panel integrates software package build information and maintenance information so that version maintenance personnel can quickly identify abnormal software packages and notify the package owners to solve the problems, ensuring build project stability and improving the OS build success rate.

### Architecture

![](images/panel.png)

### Using the Tool

The data source of pkgship-panel cannot be configured. You are advised to use the [pkgship-panel official website](https://pkgmanage.openeuler.org/Infomanagement).
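## Appendix: pkgship Command Example

As a recap of the commands described in [Using the Tool](#using-the-tool), a typical query session might look as follows. The database name `oe20.09` and the package names `gcc` and `openssl` are illustrative examples only; substitute the database names defined in your own **conf.yaml**.

```shell
# Initialize the databases from the default configuration (/etc/pkgship/conf.yaml).
pkgship init

# List the initialized databases in priority order.
pkgship dbs

# Query details of the binary package "gcc" in database oe20.09.
pkgship pkginfo gcc oe20.09

# Query all installation dependencies of the binary package "gcc",
# restricting the query to database oe20.09.
pkgship installdep gcc -dbs oe20.09

# Query which packages in oe20.09 depend on the source package "openssl".
pkgship bedepend oe20.09 openssl
```

Note that the service must be started and `pkgship init` must have completed before any query command returns results.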
diff --git a/docs/en/Tools/Maintenance/HotPatching/Menu/index.md b/docs/en/Tools/Maintenance/HotPatching/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..f9d1d903a2e2d9425bbee4428c844bfda6398753 --- /dev/null +++ b/docs/en/Tools/Maintenance/HotPatching/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [SysCare User Guide]({{< relref "../../../Server/Maintenance/SysCare/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/Maintenance/Menu/index.md b/docs/en/Tools/Maintenance/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..feae1001d80c5637c6228f3909ff5a37dba1c47c --- /dev/null +++ b/docs/en/Tools/Maintenance/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [Hot Patch Creation]({{< relref "./HotPatching/Menu/index.md" >}}) +- [System Monitoring]({{< relref "./SystemMonitoring/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/Maintenance/SystemMonitoring/Menu/index.md b/docs/en/Tools/Maintenance/SystemMonitoring/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..95d5f4d8358f8a691b9233d03951cbca2630dd28 --- /dev/null +++ b/docs/en/Tools/Maintenance/SystemMonitoring/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [sysmonitor User Guide]({{< relref "../../../Server/Maintenance/sysmonitor/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/Menu/index.md b/docs/en/Tools/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..1acb0ed63fd649c6413cd7b309f085762da62a77 --- /dev/null +++ b/docs/en/Tools/Menu/index.md @@ -0,0 +1,10 @@ +--- +headless: true +--- +- [Community Tools]({{< relref "./CommunityTools/Menu/index.md" >}}) +- [DevOps]({{< relref "./DevOps/Menu/index.md" >}}) +- [AI]({{< relref "./AI/Menu/index.md" >}}) +- [Desktop]({{< relref "./Desktop/Menu/index.md" >}}) +- [Cloud]({{< relref "./Cloud/Menu/index.md" >}}) +- [O&M]({{< relref 
"./Maintenance/Menu/index.md" >}}) +- [Security]({{< relref "./Security/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/Security/Menu/index.md b/docs/en/Tools/Security/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..e48c6fd437b6e20962cdec813138beb721da9a26 --- /dev/null +++ b/docs/en/Tools/Security/Menu/index.md @@ -0,0 +1,4 @@ +--- +headless: true +--- +- [secGear Developer Guide]({{< relref "../../Server/Security/secGear/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/oncn-bwm/.keep b/docs/en/Tools/desktop/.keep similarity index 100% rename from docs/en/docs/oncn-bwm/.keep rename to docs/en/Tools/desktop/.keep diff --git a/docs/en/Tools/desktop/DDE/Menu/index.md b/docs/en/Tools/desktop/DDE/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..aee0861129a69efed6515e91f39ce76a894e6c4a --- /dev/null +++ b/docs/en/Tools/desktop/DDE/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [DDE Installation]({{< relref "./dde-installation.md" >}}) +- [DDE User Guide]({{< relref "./dde-user-guide.md" >}}) +- [Common Issues and Solutions]({{< relref "./dde-common-issues-and-solutions.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/desktop/DDE/dde-common-issues-and-solutions.md b/docs/en/Tools/desktop/DDE/dde-common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..480c34548b0d9f1fed584a3e3c3555ee01674c32 --- /dev/null +++ b/docs/en/Tools/desktop/DDE/dde-common-issues-and-solutions.md @@ -0,0 +1,19 @@ +# Common Issues and Solutions + +## Issue 1: After DDE Is Installed, Why Are the Computer and Recycle Bin Icons Not Displayed on the Desktop When I Log in as the **root** User + +### Issue + +After the DDE is installed, the computer and recycle bin icon is not displayed on the desktop when a user logs in as the **root** user. 
+ +![img](./figures/dde-1.png) + +### Cause + +The **root** user is created before the DDE is installed. During the installation, the DDE does not add desktop icons for existing users. This issue does not occur if the user is created after the DDE is installed. + +### Solution + +Right-click the icon in the launcher and choose **Send to Desktop**. The icon functions the same as the one added by DDE. + +![img](./figures/dde-2.png) diff --git a/docs/en/docs/desktop/installing-DDE.md b/docs/en/Tools/desktop/DDE/dde-installation.md similarity index 52% rename from docs/en/docs/desktop/installing-DDE.md rename to docs/en/Tools/desktop/DDE/dde-installation.md index c28f8034803077685ae87276ad448abab4025cf6..80b1e26de882060a6597193ebfc5c6c31d4390b5 100644 --- a/docs/en/docs/desktop/installing-DDE.md +++ b/docs/en/Tools/desktop/DDE/dde-installation.md @@ -1,31 +1,39 @@ # DDE Installation -#### Introduction + +## Introduction DDE is a powerful desktop environment developed by UnionTech. It contains dozens of self-developed desktop applications. -#### Procedure +## Procedure -1. [Download](https://openeuler.org/zh/download/) the openEuler ISO file and install the OS. +1. [Download](https://openeuler.org/en/download/) the openEuler ISO file and install the OS. 2. Update the software source. -```bash -sudo dnf update -``` + + ```bash + sudo dnf update + ``` + 3. Install DDE. -```bash -sudo dnf install dde -``` + + ```bash + sudo dnf install dde + ``` + 4. Set the system to start with the graphical interface. -```bash -sudo systemctl set-default graphical.target -``` + + ```bash + sudo systemctl set-default graphical.target + ``` + 5. Reboot the system. -```bash -sudo reboot -``` + + ```bash + sudo reboot + ``` + 6. After the reboot is complete, use the user created during the installation process or the **openeuler** user to log in to the desktop. - > DDE does not allow login as the root user. - > DDE has a built-in openeuler user whose password is openeuler. 
+ > DDE does not allow login as the **root** user. + > DDE has a built-in **openeuler** user whose password is **openeuler**. Now you can use DDE. - diff --git a/docs/en/docs/desktop/DDE-user-guide.md b/docs/en/Tools/desktop/DDE/dde-user-guide.md old mode 100755 new mode 100644 similarity index 96% rename from docs/en/docs/desktop/DDE-user-guide.md rename to docs/en/Tools/desktop/DDE/dde-user-guide.md index 6e22fb2e8f840de4b402788792ca91b665d7c112..13025c4eb83122c19aaac11d9f3fa87a53fbff71 --- a/docs/en/docs/desktop/DDE-user-guide.md +++ b/docs/en/Tools/desktop/DDE/dde-user-guide.md @@ -94,7 +94,7 @@ All the texts, pictures and documents cut and copied by the current user after l 4. Click![close](./figures/icon57-o.svg)to delete the current content and click **Clear All** to clear the clipboard. - ![1|clipboard](./figures/40.png) + ![1|clipboard](./figures/40.png) ## Dock @@ -379,9 +379,9 @@ Set screen resolution, brightness, direction and display scaling properly to hav 1. On the homepage of Control Center, click ![display_normal](./figures/icon72-o.svg). 2. Click **Brightness**. - - Drag the slider to set screen brightness. - - Switch on **Night Shift**, the screen hue will be auto-adjusted according to your location. - - Switch on **Auto Brightness**, the monitor will change the brightness automatically according to ambient light (shown only if PC has a light sensor). + - Drag the slider to set screen brightness. + - Switch on **Night Shift**, the screen hue will be auto-adjusted according to your location. + - Switch on **Auto Brightness**, the monitor will change the brightness automatically according to ambient light (shown only if PC has a light sensor). ##### Change Refresh Rate @@ -403,9 +403,9 @@ Expand your desktop by multiple screens! Use VGA/HDMI/DP cable to connect your c 1. On the homepage of Control Center, click ![display_normal](./figures/icon72-o.svg). 2. Click **Multiple Displays**. 3. 
Select a display mode: - - **Duplicate**: display the same image on other screens. - - **Extend**: expand the desktop across the screens. - - **Customize**: customize the display settings for multiple screens. + - **Duplicate**: display the same image on other screens. + - **Extend**: expand the desktop across the screens. + - **Customize**: customize the display settings for multiple screens. In multiple displays, press **Super** + **P** to show its OSD. @@ -531,7 +531,7 @@ If you are at a place without network, mobile network adapter is a useful tool t #### DSL/PPPoE Connections -DSL is a dial-up connection using a standard phone line and analog modem to access the Internet. Configure the modem, plug the telephone line into the network interface of the computer, create a broadband dial-up connection, and enter the user name and password provided by the operator to dial up the Internet. +DSL is a dial-up connection using a standard phone line and analog modem to access the Internet. Configure the modem, plug the telephone line into the network interface of the computer, create a broadband dial-up connection, and enter the user name and password provided by the operator to dial up the Internet. ##### Create a PPPoE Connection @@ -591,18 +591,18 @@ Set your speaker and microphone properly to make you hear more comfortable and m 2. Click **Output** to: - - Select output device type from the dropdown list after **Output Device**. + - Select output device type from the dropdown list after **Output Device**. - - Drag the slider to adjust output volume and left/right balance. - - Switch on **Volume Boost**, the volume could be adjustable from 0~150% (the former range is 0~100%). + - Drag the slider to adjust output volume and left/right balance. + - Switch on **Volume Boost**, the volume could be adjustable from 0~150% (the former range is 0~100%). #### Input 1. On the homepage of Control Center, click ![sound_normal](./figures/icon116-o.svg). 2. 
Click **Input** to: - - Select input device type from the dropdown list after **Input Device**. - - Adjust input volume by dragging the slider. - - You can enable **Automatic Noise Suppression** by clicking the button after "Automatic Noise Suppression". + - Select input device type from the dropdown list after **Input Device**. + - Adjust input volume by dragging the slider. + - You can enable **Automatic Noise Suppression** by clicking the button after "Automatic Noise Suppression". > ![tips](./figures/icon125-o.svg)Tips: *Usually, you need to turn up the input volume to make sure that you can hear the sound of the sound source, but the volume should not be too high, because it will cause distortion of the sound. Here is how to set input volume: Speak to your microphone at a normal volume and view "Input Level". If the indicator changes obviously according to the volume, then the input volume is at a proper level.* @@ -650,8 +650,8 @@ Note that the auto-sync function will be disabled after changing date and time m 1. On the homepage of Control Center, click ![time](./figures/icon124-o.svg). 2. Click **Time Settings**. - - Switch on/off **Auto Sync**. - - Enter the correct date and time. + - Switch on/off **Auto Sync**. + - Enter the correct date and time. 3. Click **Confirm**. #### Set Time Format @@ -812,7 +812,7 @@ The shortcut list includes all shortcuts in the system. View, modify and customi 6. After being successfully added, click **Edit**. 7. Click ![delete](./figures/icon71-o.svg) to delete the custom shortcut. -> ![tips](./figures/icon125-o.svg)Tips: *To change the shortcut, click it and press a new shortcut to change it directly. To edit the name and command of the custom shortcut, click**Edit ** > ![edit](./figures/icon75-o.svg) near the shortcut name to enter the shortcut settings.* +> ![tips](./figures/icon125-o.svg)Tips: *To change the shortcut, click it and press a new shortcut to change it directly. 
To edit the name and command of the custom shortcut, click **Edit** > ![edit](./figures/icon75-o.svg) near the shortcut name to enter the shortcut settings.* ### System Info @@ -846,4 +846,4 @@ You can use the keyboard to switch between various interface areas, select objec | ![Up](./figures/icon127-o.svg) ![Down](./figures/icon73-o.svg) ![Left](./figures/icon88-o.svg) ![Right](./figures/icon111-o.svg) | Used to select different objects in the same area. Press ![Right](./figures/icon111-o.svg) to enter the lower menu and ![Left](./figures/icon88-o.svg) to return to the upper menu. Press![Up](./figures/icon127-o.svg)and ![Down](./figures/icon73-o.svg) to switch between up and down. | | **Enter** | Execute the selected operation. | | **Space** | Preview the selected object in File Manager; start and pause the playback in Music and Movie; expand the drop-down options in the drop-down list (The enter key is also available.). | -| **Ctrl** + **M** | Open the right-click menu. | +| **Ctrl**+**M** | Open the right-click menu. 
| diff --git a/docs/en/docs/desktop/figures/38.png b/docs/en/Tools/desktop/DDE/figures/38.png similarity index 100% rename from docs/en/docs/desktop/figures/38.png rename to docs/en/Tools/desktop/DDE/figures/38.png diff --git a/docs/en/docs/desktop/figures/39.png b/docs/en/Tools/desktop/DDE/figures/39.png similarity index 100% rename from docs/en/docs/desktop/figures/39.png rename to docs/en/Tools/desktop/DDE/figures/39.png diff --git a/docs/en/docs/desktop/figures/40.png b/docs/en/Tools/desktop/DDE/figures/40.png similarity index 100% rename from docs/en/docs/desktop/figures/40.png rename to docs/en/Tools/desktop/DDE/figures/40.png diff --git a/docs/en/docs/desktop/figures/41.png b/docs/en/Tools/desktop/DDE/figures/41.png similarity index 100% rename from docs/en/docs/desktop/figures/41.png rename to docs/en/Tools/desktop/DDE/figures/41.png diff --git a/docs/en/docs/desktop/figures/42.png b/docs/en/Tools/desktop/DDE/figures/42.png similarity index 100% rename from docs/en/docs/desktop/figures/42.png rename to docs/en/Tools/desktop/DDE/figures/42.png diff --git a/docs/en/docs/desktop/figures/43.jpg b/docs/en/Tools/desktop/DDE/figures/43.jpg similarity index 100% rename from docs/en/docs/desktop/figures/43.jpg rename to docs/en/Tools/desktop/DDE/figures/43.jpg diff --git a/docs/en/docs/desktop/figures/44.png b/docs/en/Tools/desktop/DDE/figures/44.png similarity index 100% rename from docs/en/docs/desktop/figures/44.png rename to docs/en/Tools/desktop/DDE/figures/44.png diff --git a/docs/en/docs/desktop/figures/45.png b/docs/en/Tools/desktop/DDE/figures/45.png similarity index 100% rename from docs/en/docs/desktop/figures/45.png rename to docs/en/Tools/desktop/DDE/figures/45.png diff --git a/docs/en/docs/desktop/figures/46.png b/docs/en/Tools/desktop/DDE/figures/46.png similarity index 100% rename from docs/en/docs/desktop/figures/46.png rename to docs/en/Tools/desktop/DDE/figures/46.png diff --git a/docs/en/docs/desktop/figures/47.jpg 
b/docs/en/Tools/desktop/DDE/figures/47.jpg similarity index 100% rename from docs/en/docs/desktop/figures/47.jpg rename to docs/en/Tools/desktop/DDE/figures/47.jpg diff --git a/docs/en/docs/desktop/figures/48.png b/docs/en/Tools/desktop/DDE/figures/48.png similarity index 100% rename from docs/en/docs/desktop/figures/48.png rename to docs/en/Tools/desktop/DDE/figures/48.png diff --git a/docs/en/docs/desktop/figures/50.png b/docs/en/Tools/desktop/DDE/figures/50.png similarity index 100% rename from docs/en/docs/desktop/figures/50.png rename to docs/en/Tools/desktop/DDE/figures/50.png diff --git a/docs/en/docs/desktop/figures/51.png b/docs/en/Tools/desktop/DDE/figures/51.png similarity index 100% rename from docs/en/docs/desktop/figures/51.png rename to docs/en/Tools/desktop/DDE/figures/51.png diff --git a/docs/en/docs/desktop/figures/52.png b/docs/en/Tools/desktop/DDE/figures/52.png similarity index 100% rename from docs/en/docs/desktop/figures/52.png rename to docs/en/Tools/desktop/DDE/figures/52.png diff --git a/docs/en/docs/desktop/figures/53.png b/docs/en/Tools/desktop/DDE/figures/53.png similarity index 100% rename from docs/en/docs/desktop/figures/53.png rename to docs/en/Tools/desktop/DDE/figures/53.png diff --git a/docs/en/docs/desktop/figures/54.png b/docs/en/Tools/desktop/DDE/figures/54.png similarity index 100% rename from docs/en/docs/desktop/figures/54.png rename to docs/en/Tools/desktop/DDE/figures/54.png diff --git a/docs/en/docs/desktop/figures/56.png b/docs/en/Tools/desktop/DDE/figures/56.png similarity index 100% rename from docs/en/docs/desktop/figures/56.png rename to docs/en/Tools/desktop/DDE/figures/56.png diff --git a/docs/en/docs/desktop/figures/57.png b/docs/en/Tools/desktop/DDE/figures/57.png similarity index 100% rename from docs/en/docs/desktop/figures/57.png rename to docs/en/Tools/desktop/DDE/figures/57.png diff --git a/docs/en/docs/desktop/figures/58.png b/docs/en/Tools/desktop/DDE/figures/58.png similarity index 100% rename from 
docs/en/docs/desktop/figures/58.png rename to docs/en/Tools/desktop/DDE/figures/58.png diff --git a/docs/en/docs/desktop/figures/59.png b/docs/en/Tools/desktop/DDE/figures/59.png similarity index 100% rename from docs/en/docs/desktop/figures/59.png rename to docs/en/Tools/desktop/DDE/figures/59.png diff --git a/docs/en/docs/desktop/figures/60.jpg b/docs/en/Tools/desktop/DDE/figures/60.jpg similarity index 100% rename from docs/en/docs/desktop/figures/60.jpg rename to docs/en/Tools/desktop/DDE/figures/60.jpg diff --git a/docs/en/docs/desktop/figures/61.png b/docs/en/Tools/desktop/DDE/figures/61.png similarity index 100% rename from docs/en/docs/desktop/figures/61.png rename to docs/en/Tools/desktop/DDE/figures/61.png diff --git a/docs/en/docs/desktop/figures/62.png b/docs/en/Tools/desktop/DDE/figures/62.png similarity index 100% rename from docs/en/docs/desktop/figures/62.png rename to docs/en/Tools/desktop/DDE/figures/62.png diff --git a/docs/en/docs/desktop/figures/63.jpg b/docs/en/Tools/desktop/DDE/figures/63.jpg similarity index 100% rename from docs/en/docs/desktop/figures/63.jpg rename to docs/en/Tools/desktop/DDE/figures/63.jpg diff --git a/docs/en/docs/desktop/figures/63.png b/docs/en/Tools/desktop/DDE/figures/63.png similarity index 100% rename from docs/en/docs/desktop/figures/63.png rename to docs/en/Tools/desktop/DDE/figures/63.png diff --git a/docs/en/docs/desktop/figures/dde-1.png b/docs/en/Tools/desktop/DDE/figures/dde-1.png similarity index 100% rename from docs/en/docs/desktop/figures/dde-1.png rename to docs/en/Tools/desktop/DDE/figures/dde-1.png diff --git a/docs/en/docs/desktop/figures/dde-2.png b/docs/en/Tools/desktop/DDE/figures/dde-2.png similarity index 100% rename from docs/en/docs/desktop/figures/dde-2.png rename to docs/en/Tools/desktop/DDE/figures/dde-2.png diff --git a/docs/en/docs/desktop/figures/icon101-o.svg b/docs/en/Tools/desktop/DDE/figures/icon101-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon101-o.svg 
rename to docs/en/Tools/desktop/DDE/figures/icon101-o.svg diff --git a/docs/en/docs/desktop/figures/icon103-o.svg b/docs/en/Tools/desktop/DDE/figures/icon103-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon103-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon103-o.svg diff --git a/docs/en/docs/desktop/figures/icon105-o.svg b/docs/en/Tools/desktop/DDE/figures/icon105-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon105-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon105-o.svg diff --git a/docs/en/docs/desktop/figures/icon107-o.svg b/docs/en/Tools/desktop/DDE/figures/icon107-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon107-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon107-o.svg diff --git a/docs/en/docs/desktop/figures/icon110-o.svg b/docs/en/Tools/desktop/DDE/figures/icon110-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon110-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon110-o.svg diff --git a/docs/en/docs/desktop/figures/icon111-o.svg b/docs/en/Tools/desktop/DDE/figures/icon111-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon111-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon111-o.svg diff --git a/docs/en/docs/desktop/figures/icon112-o.svg b/docs/en/Tools/desktop/DDE/figures/icon112-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon112-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon112-o.svg diff --git a/docs/en/docs/desktop/figures/icon116-o.svg b/docs/en/Tools/desktop/DDE/figures/icon116-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon116-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon116-o.svg diff --git a/docs/en/docs/desktop/figures/icon120-o.svg b/docs/en/Tools/desktop/DDE/figures/icon120-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon120-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon120-o.svg diff --git 
a/docs/en/docs/desktop/figures/icon122-o.svg b/docs/en/Tools/desktop/DDE/figures/icon122-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon122-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon122-o.svg diff --git a/docs/en/docs/desktop/figures/icon124-o.svg b/docs/en/Tools/desktop/DDE/figures/icon124-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon124-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon124-o.svg diff --git a/docs/en/docs/desktop/figures/icon125-o.svg b/docs/en/Tools/desktop/DDE/figures/icon125-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon125-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon125-o.svg diff --git a/docs/en/docs/desktop/figures/icon126-o.svg b/docs/en/Tools/desktop/DDE/figures/icon126-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon126-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon126-o.svg diff --git a/docs/en/docs/desktop/figures/icon127-o.svg b/docs/en/Tools/desktop/DDE/figures/icon127-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon127-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon127-o.svg diff --git a/docs/en/docs/desktop/figures/icon128-o.svg b/docs/en/Tools/desktop/DDE/figures/icon128-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon128-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon128-o.svg diff --git a/docs/en/docs/desktop/figures/icon132-o.svg b/docs/en/Tools/desktop/DDE/figures/icon132-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon132-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon132-o.svg diff --git a/docs/en/docs/desktop/figures/icon134-o.svg b/docs/en/Tools/desktop/DDE/figures/icon134-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon134-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon134-o.svg diff --git a/docs/en/docs/desktop/figures/icon136-o.svg 
b/docs/en/Tools/desktop/DDE/figures/icon136-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon136-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon136-o.svg diff --git a/docs/en/docs/desktop/figures/icon49-o.svg b/docs/en/Tools/desktop/DDE/figures/icon49-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon49-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon49-o.svg diff --git a/docs/en/docs/desktop/figures/icon50-o.svg b/docs/en/Tools/desktop/DDE/figures/icon50-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon50-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon50-o.svg diff --git a/docs/en/docs/desktop/figures/icon52-o.svg b/docs/en/Tools/desktop/DDE/figures/icon52-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon52-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon52-o.svg diff --git a/docs/en/docs/desktop/figures/icon53-o.svg b/docs/en/Tools/desktop/DDE/figures/icon53-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon53-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon53-o.svg diff --git a/docs/en/docs/desktop/figures/icon54-o.svg b/docs/en/Tools/desktop/DDE/figures/icon54-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon54-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon54-o.svg diff --git a/docs/en/docs/desktop/figures/icon56-o.svg b/docs/en/Tools/desktop/DDE/figures/icon56-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon56-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon56-o.svg diff --git a/docs/en/docs/desktop/figures/icon57-o.svg b/docs/en/Tools/desktop/DDE/figures/icon57-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon57-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon57-o.svg diff --git a/docs/en/docs/desktop/figures/icon58-o.svg b/docs/en/Tools/desktop/DDE/figures/icon58-o.svg similarity index 100% rename from 
docs/en/docs/desktop/figures/icon58-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon58-o.svg diff --git a/docs/en/docs/desktop/figures/icon62-o.svg b/docs/en/Tools/desktop/DDE/figures/icon62-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon62-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon62-o.svg diff --git a/docs/en/docs/desktop/figures/icon63-o.svg b/docs/en/Tools/desktop/DDE/figures/icon63-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon63-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon63-o.svg diff --git a/docs/en/docs/desktop/figures/icon66-o.svg b/docs/en/Tools/desktop/DDE/figures/icon66-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon66-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon66-o.svg diff --git a/docs/en/docs/desktop/figures/icon68-o.svg b/docs/en/Tools/desktop/DDE/figures/icon68-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon68-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon68-o.svg diff --git a/docs/en/docs/desktop/figures/icon69-o.svg b/docs/en/Tools/desktop/DDE/figures/icon69-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon69-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon69-o.svg diff --git a/docs/en/docs/desktop/figures/icon70-o.svg b/docs/en/Tools/desktop/DDE/figures/icon70-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon70-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon70-o.svg diff --git a/docs/en/docs/desktop/figures/icon71-o.svg b/docs/en/Tools/desktop/DDE/figures/icon71-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon71-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon71-o.svg diff --git a/docs/en/docs/desktop/figures/icon72-o.svg b/docs/en/Tools/desktop/DDE/figures/icon72-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon72-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon72-o.svg diff 
--git a/docs/en/docs/desktop/figures/icon73-o.svg b/docs/en/Tools/desktop/DDE/figures/icon73-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon73-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon73-o.svg diff --git a/docs/en/docs/desktop/figures/icon75-o.svg b/docs/en/Tools/desktop/DDE/figures/icon75-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon75-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon75-o.svg diff --git a/docs/en/docs/desktop/figures/icon83-o.svg b/docs/en/Tools/desktop/DDE/figures/icon83-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon83-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon83-o.svg diff --git a/docs/en/docs/desktop/figures/icon84-o.svg b/docs/en/Tools/desktop/DDE/figures/icon84-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon84-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon84-o.svg diff --git a/docs/en/docs/desktop/figures/icon86-o.svg b/docs/en/Tools/desktop/DDE/figures/icon86-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon86-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon86-o.svg diff --git a/docs/en/docs/desktop/figures/icon88-o.svg b/docs/en/Tools/desktop/DDE/figures/icon88-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon88-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon88-o.svg diff --git a/docs/en/docs/desktop/figures/icon90-o.svg b/docs/en/Tools/desktop/DDE/figures/icon90-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon90-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon90-o.svg diff --git a/docs/en/docs/desktop/figures/icon92-o.svg b/docs/en/Tools/desktop/DDE/figures/icon92-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon92-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon92-o.svg diff --git a/docs/en/docs/desktop/figures/icon94-o.svg b/docs/en/Tools/desktop/DDE/figures/icon94-o.svg 
similarity index 100% rename from docs/en/docs/desktop/figures/icon94-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon94-o.svg diff --git a/docs/en/docs/desktop/figures/icon97-o.svg b/docs/en/Tools/desktop/DDE/figures/icon97-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon97-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon97-o.svg diff --git a/docs/en/docs/desktop/figures/icon99-o.svg b/docs/en/Tools/desktop/DDE/figures/icon99-o.svg similarity index 100% rename from docs/en/docs/desktop/figures/icon99-o.svg rename to docs/en/Tools/desktop/DDE/figures/icon99-o.svg diff --git a/docs/en/Tools/desktop/Gnome/Menu/index.md b/docs/en/Tools/desktop/Gnome/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..8c70a953f2dbc72c09bbf30ce73f1c247eb79e13 --- /dev/null +++ b/docs/en/Tools/desktop/Gnome/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [GNOME Installation]({{< relref "./gnome-installation.md" >}}) +- [GNOME User Guide]({{< relref "./gnome-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/desktop/figures/gnome-1.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-1.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-1.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-1.png diff --git a/docs/en/docs/desktop/figures/gnome-10.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-10.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-10.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-10.png diff --git a/docs/en/docs/desktop/figures/gnome-11.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-11.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-11.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-11.png diff --git a/docs/en/docs/desktop/figures/gnome-12.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-12.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-12.PNG rename to 
docs/en/Tools/desktop/Gnome/figures/gnome-12.png diff --git a/docs/en/docs/desktop/figures/gnome-13.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-13.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-13.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-13.png diff --git a/docs/en/docs/desktop/figures/gnome-14.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-14.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-14.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-14.png diff --git a/docs/en/docs/desktop/figures/gnome-15.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-15.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-15.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-15.png diff --git a/docs/en/docs/desktop/figures/gnome-16.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-16.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-16.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-16.png diff --git a/docs/en/docs/desktop/figures/gnome-17.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-17.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-17.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-17.png diff --git a/docs/en/docs/desktop/figures/gnome-18.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-18.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-18.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-18.png diff --git a/docs/en/docs/desktop/figures/gnome-19.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-19.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-19.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-19.png diff --git a/docs/en/docs/desktop/figures/gnome-2.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-2.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-2.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-2.png diff --git 
a/docs/en/docs/desktop/figures/gnome-20.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-20.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-20.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-20.png diff --git a/docs/en/docs/desktop/figures/gnome-21.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-21.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-21.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-21.png diff --git a/docs/en/docs/desktop/figures/gnome-22.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-22.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-22.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-22.png diff --git a/docs/en/docs/desktop/figures/gnome-23.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-23.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-23.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-23.png diff --git a/docs/en/docs/desktop/figures/gnome-24.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-24.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-24.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-24.png diff --git a/docs/en/docs/desktop/figures/gnome-25.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-25.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-25.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-25.png diff --git a/docs/en/docs/desktop/figures/gnome-26.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-26.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-26.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-26.png diff --git a/docs/en/docs/desktop/figures/gnome-27.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-27.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-27.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-27.png diff --git a/docs/en/docs/desktop/figures/gnome-28.PNG 
b/docs/en/Tools/desktop/Gnome/figures/gnome-28.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-28.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-28.png diff --git a/docs/en/docs/desktop/figures/gnome-29.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-29.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-29.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-29.png diff --git a/docs/en/docs/desktop/figures/gnome-3.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-3.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-3.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-3.png diff --git a/docs/en/docs/desktop/figures/gnome-30.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-30.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-30.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-30.png diff --git a/docs/en/docs/desktop/figures/gnome-31.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-31.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-31.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-31.png diff --git a/docs/en/docs/desktop/figures/gnome-32.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-32.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-32.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-32.png diff --git a/docs/en/docs/desktop/figures/gnome-33.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-33.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-33.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-33.png diff --git a/docs/en/docs/desktop/figures/gnome-34.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-34.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-34.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-34.png diff --git a/docs/en/docs/desktop/figures/gnome-35.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-35.png similarity index 100% 
rename from docs/en/docs/desktop/figures/gnome-35.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-35.png diff --git a/docs/en/docs/desktop/figures/gnome-36.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-36.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-36.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-36.png diff --git a/docs/en/docs/desktop/figures/gnome-37.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-37.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-37.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-37.png diff --git a/docs/en/docs/desktop/figures/gnome-38.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-38.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-38.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-38.png diff --git a/docs/en/docs/desktop/figures/gnome-39.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-39.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-39.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-39.png diff --git a/docs/en/docs/desktop/figures/gnome-4.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-4.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-4.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-4.png diff --git a/docs/en/docs/desktop/figures/gnome-40.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-40.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-40.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-40.png diff --git a/docs/en/docs/desktop/figures/gnome-41.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-41.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-41.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-41.png diff --git a/docs/en/docs/desktop/figures/gnome-42.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-42.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-42.PNG rename to 
docs/en/Tools/desktop/Gnome/figures/gnome-42.png diff --git a/docs/en/docs/desktop/figures/gnome-43.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-43.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-43.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-43.png diff --git a/docs/en/docs/desktop/figures/gnome-44.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-44.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-44.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-44.png diff --git a/docs/en/docs/desktop/figures/gnome-45.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-45.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-45.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-45.png diff --git a/docs/en/docs/desktop/figures/gnome-46.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-46.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-46.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-46.png diff --git a/docs/en/docs/desktop/figures/gnome-47.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-47.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-47.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-47.png diff --git a/docs/en/docs/desktop/figures/gnome-48.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-48.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-48.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-48.png diff --git a/docs/en/docs/desktop/figures/gnome-49.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-49.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-49.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-49.png diff --git a/docs/en/docs/desktop/figures/gnome-5.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-5.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-5.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-5.png diff --git 
a/docs/en/docs/desktop/figures/gnome-50.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-50.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-50.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-50.png diff --git a/docs/en/docs/desktop/figures/gnome-51.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-51.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-51.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-51.png diff --git a/docs/en/docs/desktop/figures/gnome-52.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-52.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-52.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-52.png diff --git a/docs/en/docs/desktop/figures/gnome-53.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-53.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-53.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-53.png diff --git a/docs/en/docs/desktop/figures/gnome-54.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-54.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-54.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-54.png diff --git a/docs/en/docs/desktop/figures/gnome-55.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-55.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-55.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-55.png diff --git a/docs/en/docs/desktop/figures/gnome-56.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-56.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-56.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-56.png diff --git a/docs/en/docs/desktop/figures/gnome-57.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-57.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-57.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-57.png diff --git a/docs/en/docs/desktop/figures/gnome-58.PNG 
b/docs/en/Tools/desktop/Gnome/figures/gnome-58.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-58.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-58.png diff --git a/docs/en/docs/desktop/figures/gnome-59.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-59.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-59.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-59.png diff --git a/docs/en/docs/desktop/figures/gnome-6.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-6.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-6.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-6.png diff --git a/docs/en/docs/desktop/figures/gnome-7.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-7.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-7.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-7.png diff --git a/docs/en/docs/desktop/figures/gnome-8.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-8.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-8.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-8.png diff --git a/docs/en/docs/desktop/figures/gnome-9.PNG b/docs/en/Tools/desktop/Gnome/figures/gnome-9.png similarity index 100% rename from docs/en/docs/desktop/figures/gnome-9.PNG rename to docs/en/Tools/desktop/Gnome/figures/gnome-9.png diff --git a/docs/en/docs/desktop/installing-GNOME.md b/docs/en/Tools/desktop/Gnome/gnome-installation.md similarity index 100% rename from docs/en/docs/desktop/installing-GNOME.md rename to docs/en/Tools/desktop/Gnome/gnome-installation.md diff --git a/docs/en/docs/desktop/Gnome_userguide.md b/docs/en/Tools/desktop/Gnome/gnome-user-guide.md similarity index 100% rename from docs/en/docs/desktop/Gnome_userguide.md rename to docs/en/Tools/desktop/Gnome/gnome-user-guide.md diff --git a/docs/en/Tools/desktop/Kiran/Menu/index.md b/docs/en/Tools/desktop/Kiran/Menu/index.md new file mode 100644 index 
0000000000000000000000000000000000000000..ba82ed59b01caaf12bed97c8e8880106fa0b2f03 --- /dev/null +++ b/docs/en/Tools/desktop/Kiran/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [Kiran Installation]({{< relref "./kiran-installation.md" >}}) +- [Kiran User Guide]({{< relref "./kiran-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/desktop/figures/kiran-1.png b/docs/en/Tools/desktop/Kiran/figures/kiran-1.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-1.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-1.png diff --git a/docs/en/docs/desktop/figures/kiran-10.png b/docs/en/Tools/desktop/Kiran/figures/kiran-10.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-10.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-10.png diff --git a/docs/en/docs/desktop/figures/kiran-11.png b/docs/en/Tools/desktop/Kiran/figures/kiran-11.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-11.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-11.png diff --git a/docs/en/docs/desktop/figures/kiran-12.png b/docs/en/Tools/desktop/Kiran/figures/kiran-12.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-12.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-12.png diff --git a/docs/en/docs/desktop/figures/kiran-13.png b/docs/en/Tools/desktop/Kiran/figures/kiran-13.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-13.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-13.png diff --git a/docs/en/docs/desktop/figures/kiran-14.png b/docs/en/Tools/desktop/Kiran/figures/kiran-14.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-14.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-14.png diff --git a/docs/en/docs/desktop/figures/kiran-15.png b/docs/en/Tools/desktop/Kiran/figures/kiran-15.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-15.png rename to 
docs/en/Tools/desktop/Kiran/figures/kiran-15.png diff --git a/docs/en/docs/desktop/figures/kiran-16.png b/docs/en/Tools/desktop/Kiran/figures/kiran-16.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-16.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-16.png diff --git a/docs/en/docs/desktop/figures/kiran-17.png b/docs/en/Tools/desktop/Kiran/figures/kiran-17.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-17.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-17.png diff --git a/docs/en/docs/desktop/figures/kiran-18.png b/docs/en/Tools/desktop/Kiran/figures/kiran-18.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-18.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-18.png diff --git a/docs/en/docs/desktop/figures/kiran-19.png b/docs/en/Tools/desktop/Kiran/figures/kiran-19.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-19.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-19.png diff --git a/docs/en/docs/desktop/figures/kiran-2.png b/docs/en/Tools/desktop/Kiran/figures/kiran-2.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-2.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-2.png diff --git a/docs/en/docs/desktop/figures/kiran-20.png b/docs/en/Tools/desktop/Kiran/figures/kiran-20.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-20.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-20.png diff --git a/docs/en/docs/desktop/figures/kiran-21.png b/docs/en/Tools/desktop/Kiran/figures/kiran-21.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-21.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-21.png diff --git a/docs/en/docs/desktop/figures/kiran-22.png b/docs/en/Tools/desktop/Kiran/figures/kiran-22.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-22.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-22.png diff --git 
a/docs/en/docs/desktop/figures/kiran-23.png b/docs/en/Tools/desktop/Kiran/figures/kiran-23.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-23.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-23.png diff --git a/docs/en/docs/desktop/figures/kiran-24.png b/docs/en/Tools/desktop/Kiran/figures/kiran-24.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-24.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-24.png diff --git a/docs/en/docs/desktop/figures/kiran-25.png b/docs/en/Tools/desktop/Kiran/figures/kiran-25.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-25.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-25.png diff --git a/docs/en/docs/desktop/figures/kiran-26.png b/docs/en/Tools/desktop/Kiran/figures/kiran-26.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-26.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-26.png diff --git a/docs/en/docs/desktop/figures/kiran-27.png b/docs/en/Tools/desktop/Kiran/figures/kiran-27.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-27.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-27.png diff --git a/docs/en/docs/desktop/figures/kiran-28.png b/docs/en/Tools/desktop/Kiran/figures/kiran-28.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-28.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-28.png diff --git a/docs/en/docs/desktop/figures/kiran-29.png b/docs/en/Tools/desktop/Kiran/figures/kiran-29.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-29.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-29.png diff --git a/docs/en/docs/desktop/figures/kiran-3.png b/docs/en/Tools/desktop/Kiran/figures/kiran-3.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-3.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-3.png diff --git a/docs/en/docs/desktop/figures/kiran-30.png 
b/docs/en/Tools/desktop/Kiran/figures/kiran-30.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-30.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-30.png diff --git a/docs/en/docs/desktop/figures/kiran-31.png b/docs/en/Tools/desktop/Kiran/figures/kiran-31.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-31.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-31.png diff --git a/docs/en/docs/desktop/figures/kiran-32.png b/docs/en/Tools/desktop/Kiran/figures/kiran-32.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-32.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-32.png diff --git a/docs/en/docs/desktop/figures/kiran-33.png b/docs/en/Tools/desktop/Kiran/figures/kiran-33.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-33.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-33.png diff --git a/docs/en/docs/desktop/figures/kiran-34.png b/docs/en/Tools/desktop/Kiran/figures/kiran-34.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-34.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-34.png diff --git a/docs/en/docs/desktop/figures/kiran-35.png b/docs/en/Tools/desktop/Kiran/figures/kiran-35.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-35.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-35.png diff --git a/docs/en/docs/desktop/figures/kiran-36.png b/docs/en/Tools/desktop/Kiran/figures/kiran-36.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-36.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-36.png diff --git a/docs/en/docs/desktop/figures/kiran-37.png b/docs/en/Tools/desktop/Kiran/figures/kiran-37.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-37.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-37.png diff --git a/docs/en/docs/desktop/figures/kiran-38.png b/docs/en/Tools/desktop/Kiran/figures/kiran-38.png similarity index 
100% rename from docs/en/docs/desktop/figures/kiran-38.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-38.png diff --git a/docs/en/docs/desktop/figures/kiran-39.png b/docs/en/Tools/desktop/Kiran/figures/kiran-39.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-39.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-39.png diff --git a/docs/en/docs/desktop/figures/kiran-4.png b/docs/en/Tools/desktop/Kiran/figures/kiran-4.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-4.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-4.png diff --git a/docs/en/docs/desktop/figures/kiran-40.png b/docs/en/Tools/desktop/Kiran/figures/kiran-40.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-40.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-40.png diff --git a/docs/en/docs/desktop/figures/kiran-41.png b/docs/en/Tools/desktop/Kiran/figures/kiran-41.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-41.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-41.png diff --git a/docs/en/docs/desktop/figures/kiran-42.png b/docs/en/Tools/desktop/Kiran/figures/kiran-42.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-42.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-42.png diff --git a/docs/en/docs/desktop/figures/kiran-43.png b/docs/en/Tools/desktop/Kiran/figures/kiran-43.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-43.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-43.png diff --git a/docs/en/docs/desktop/figures/kiran-44.png b/docs/en/Tools/desktop/Kiran/figures/kiran-44.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-44.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-44.png diff --git a/docs/en/docs/desktop/figures/kiran-45.png b/docs/en/Tools/desktop/Kiran/figures/kiran-45.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-45.png rename to 
docs/en/Tools/desktop/Kiran/figures/kiran-45.png diff --git a/docs/en/docs/desktop/figures/kiran-46.png b/docs/en/Tools/desktop/Kiran/figures/kiran-46.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-46.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-46.png diff --git a/docs/en/docs/desktop/figures/kiran-47.png b/docs/en/Tools/desktop/Kiran/figures/kiran-47.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-47.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-47.png diff --git a/docs/en/docs/desktop/figures/kiran-48.png b/docs/en/Tools/desktop/Kiran/figures/kiran-48.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-48.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-48.png diff --git a/docs/en/docs/desktop/figures/kiran-49.png b/docs/en/Tools/desktop/Kiran/figures/kiran-49.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-49.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-49.png diff --git a/docs/en/docs/desktop/figures/kiran-5.png b/docs/en/Tools/desktop/Kiran/figures/kiran-5.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-5.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-5.png diff --git a/docs/en/docs/desktop/figures/kiran-50.png b/docs/en/Tools/desktop/Kiran/figures/kiran-50.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-50.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-50.png diff --git a/docs/en/docs/desktop/figures/kiran-6.png b/docs/en/Tools/desktop/Kiran/figures/kiran-6.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-6.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-6.png diff --git a/docs/en/docs/desktop/figures/kiran-7.png b/docs/en/Tools/desktop/Kiran/figures/kiran-7.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-7.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-7.png diff --git 
a/docs/en/docs/desktop/figures/kiran-8.png b/docs/en/Tools/desktop/Kiran/figures/kiran-8.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-8.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-8.png diff --git a/docs/en/docs/desktop/figures/kiran-9.png b/docs/en/Tools/desktop/Kiran/figures/kiran-9.png similarity index 100% rename from docs/en/docs/desktop/figures/kiran-9.png rename to docs/en/Tools/desktop/Kiran/figures/kiran-9.png diff --git a/docs/en/docs/desktop/install-kiran.md b/docs/en/Tools/desktop/Kiran/kiran-installation.md similarity index 95% rename from docs/en/docs/desktop/install-kiran.md rename to docs/en/Tools/desktop/Kiran/kiran-installation.md index 9cf2217d39a99d7c390aac3f2dc0ff2fe579df6d..4aab96decb8c766339c2913d7c671e27d81c61c9 100644 --- a/docs/en/docs/desktop/install-kiran.md +++ b/docs/en/Tools/desktop/Kiran/kiran-installation.md @@ -19,7 +19,7 @@ sudo dnf update 1. Install kiran-desktop. ```shell -sudo dnf install kiran-desktop +sudo dnf -y install kiran-desktop ``` 1. Set the system to start with the graphical interface, and then restart the system using the `reboot` command. 
diff --git a/docs/en/docs/desktop/Kiran_userguide.md b/docs/en/Tools/desktop/Kiran/kiran-user-guide.md similarity index 100% rename from docs/en/docs/desktop/Kiran_userguide.md rename to docs/en/Tools/desktop/Kiran/kiran-user-guide.md diff --git a/docs/en/Tools/desktop/Menu/index.md b/docs/en/Tools/desktop/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..bc99b4ddb5ca257627633bd54e8d9b4b13f802ac --- /dev/null +++ b/docs/en/Tools/desktop/Menu/index.md @@ -0,0 +1,7 @@ +--- +headless: true +--- +- [GNOME User Guide]({{< relref "./Gnome/Menu/index.md" >}}) +- [UKUI User Guide]({{< relref "./UKUI/Menu/index.md" >}}) +- [DDE User Guide]({{< relref "./DDE/Menu/index.md" >}}) +- [Kiran User Guide]({{< relref "./Kiran/Menu/index.md" >}}) \ No newline at end of file diff --git a/docs/en/Tools/desktop/UKUI/Menu/index.md b/docs/en/Tools/desktop/UKUI/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..2ca24af08fdb6a81c2db5eb3638bfe1e498c87f9 --- /dev/null +++ b/docs/en/Tools/desktop/UKUI/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- +- [UKUI Installation]({{< relref "./ukui-installation.md" >}}) +- [UKUI User Guide]({{< relref "./ukui-user-guide.md" >}}) \ No newline at end of file diff --git a/docs/en/docs/desktop/figures/1.png b/docs/en/Tools/desktop/UKUI/figures/1.png similarity index 100% rename from docs/en/docs/desktop/figures/1.png rename to docs/en/Tools/desktop/UKUI/figures/1.png diff --git a/docs/en/docs/desktop/figures/10.png b/docs/en/Tools/desktop/UKUI/figures/10.png similarity index 100% rename from docs/en/docs/desktop/figures/10.png rename to docs/en/Tools/desktop/UKUI/figures/10.png diff --git a/docs/en/docs/desktop/figures/11.png b/docs/en/Tools/desktop/UKUI/figures/11.png similarity index 100% rename from docs/en/docs/desktop/figures/11.png rename to docs/en/Tools/desktop/UKUI/figures/11.png diff --git a/docs/en/docs/desktop/figures/12.png b/docs/en/Tools/desktop/UKUI/figures/12.png 
similarity index 100% rename from docs/en/docs/desktop/figures/12.png rename to docs/en/Tools/desktop/UKUI/figures/12.png diff --git a/docs/en/docs/desktop/figures/13.png b/docs/en/Tools/desktop/UKUI/figures/13.png similarity index 100% rename from docs/en/docs/desktop/figures/13.png rename to docs/en/Tools/desktop/UKUI/figures/13.png diff --git a/docs/en/docs/desktop/figures/14.png b/docs/en/Tools/desktop/UKUI/figures/14.png similarity index 100% rename from docs/en/docs/desktop/figures/14.png rename to docs/en/Tools/desktop/UKUI/figures/14.png diff --git a/docs/en/docs/desktop/figures/15.png b/docs/en/Tools/desktop/UKUI/figures/15.png similarity index 100% rename from docs/en/docs/desktop/figures/15.png rename to docs/en/Tools/desktop/UKUI/figures/15.png diff --git a/docs/en/docs/desktop/figures/16.png b/docs/en/Tools/desktop/UKUI/figures/16.png similarity index 100% rename from docs/en/docs/desktop/figures/16.png rename to docs/en/Tools/desktop/UKUI/figures/16.png diff --git a/docs/en/docs/desktop/figures/17.png b/docs/en/Tools/desktop/UKUI/figures/17.png similarity index 100% rename from docs/en/docs/desktop/figures/17.png rename to docs/en/Tools/desktop/UKUI/figures/17.png diff --git a/docs/en/docs/desktop/figures/18.png b/docs/en/Tools/desktop/UKUI/figures/18.png similarity index 100% rename from docs/en/docs/desktop/figures/18.png rename to docs/en/Tools/desktop/UKUI/figures/18.png diff --git a/docs/en/docs/desktop/figures/19.png b/docs/en/Tools/desktop/UKUI/figures/19.png similarity index 100% rename from docs/en/docs/desktop/figures/19.png rename to docs/en/Tools/desktop/UKUI/figures/19.png diff --git a/docs/en/docs/desktop/figures/2.png b/docs/en/Tools/desktop/UKUI/figures/2.png similarity index 100% rename from docs/en/docs/desktop/figures/2.png rename to docs/en/Tools/desktop/UKUI/figures/2.png diff --git a/docs/en/docs/desktop/figures/20.png b/docs/en/Tools/desktop/UKUI/figures/20.png similarity index 100% rename from 
docs/en/docs/desktop/figures/20.png rename to docs/en/Tools/desktop/UKUI/figures/20.png diff --git a/docs/en/docs/desktop/figures/21.png b/docs/en/Tools/desktop/UKUI/figures/21.png similarity index 100% rename from docs/en/docs/desktop/figures/21.png rename to docs/en/Tools/desktop/UKUI/figures/21.png diff --git a/docs/en/docs/desktop/figures/22.png b/docs/en/Tools/desktop/UKUI/figures/22.png similarity index 100% rename from docs/en/docs/desktop/figures/22.png rename to docs/en/Tools/desktop/UKUI/figures/22.png diff --git a/docs/en/docs/desktop/figures/23.png b/docs/en/Tools/desktop/UKUI/figures/23.png similarity index 100% rename from docs/en/docs/desktop/figures/23.png rename to docs/en/Tools/desktop/UKUI/figures/23.png diff --git a/docs/en/docs/desktop/figures/24.png b/docs/en/Tools/desktop/UKUI/figures/24.png similarity index 100% rename from docs/en/docs/desktop/figures/24.png rename to docs/en/Tools/desktop/UKUI/figures/24.png diff --git a/docs/en/docs/desktop/figures/25.png b/docs/en/Tools/desktop/UKUI/figures/25.png similarity index 100% rename from docs/en/docs/desktop/figures/25.png rename to docs/en/Tools/desktop/UKUI/figures/25.png diff --git a/docs/en/docs/desktop/figures/26.png b/docs/en/Tools/desktop/UKUI/figures/26.png similarity index 100% rename from docs/en/docs/desktop/figures/26.png rename to docs/en/Tools/desktop/UKUI/figures/26.png diff --git a/docs/en/docs/desktop/figures/27.png b/docs/en/Tools/desktop/UKUI/figures/27.png similarity index 100% rename from docs/en/docs/desktop/figures/27.png rename to docs/en/Tools/desktop/UKUI/figures/27.png diff --git a/docs/en/docs/desktop/figures/28.png b/docs/en/Tools/desktop/UKUI/figures/28.png similarity index 100% rename from docs/en/docs/desktop/figures/28.png rename to docs/en/Tools/desktop/UKUI/figures/28.png diff --git a/docs/en/docs/desktop/figures/29.png b/docs/en/Tools/desktop/UKUI/figures/29.png similarity index 100% rename from docs/en/docs/desktop/figures/29.png rename to 
docs/en/Tools/desktop/UKUI/figures/29.png diff --git a/docs/en/docs/desktop/figures/3.png b/docs/en/Tools/desktop/UKUI/figures/3.png similarity index 100% rename from docs/en/docs/desktop/figures/3.png rename to docs/en/Tools/desktop/UKUI/figures/3.png diff --git a/docs/en/docs/desktop/figures/30.png b/docs/en/Tools/desktop/UKUI/figures/30.png similarity index 100% rename from docs/en/docs/desktop/figures/30.png rename to docs/en/Tools/desktop/UKUI/figures/30.png diff --git a/docs/en/docs/desktop/figures/31.png b/docs/en/Tools/desktop/UKUI/figures/31.png similarity index 100% rename from docs/en/docs/desktop/figures/31.png rename to docs/en/Tools/desktop/UKUI/figures/31.png diff --git a/docs/en/docs/desktop/figures/32.png b/docs/en/Tools/desktop/UKUI/figures/32.png similarity index 100% rename from docs/en/docs/desktop/figures/32.png rename to docs/en/Tools/desktop/UKUI/figures/32.png diff --git a/docs/en/docs/desktop/figures/33.png b/docs/en/Tools/desktop/UKUI/figures/33.png similarity index 100% rename from docs/en/docs/desktop/figures/33.png rename to docs/en/Tools/desktop/UKUI/figures/33.png diff --git a/docs/en/docs/desktop/figures/34.png b/docs/en/Tools/desktop/UKUI/figures/34.png similarity index 100% rename from docs/en/docs/desktop/figures/34.png rename to docs/en/Tools/desktop/UKUI/figures/34.png diff --git a/docs/en/docs/desktop/figures/35.png b/docs/en/Tools/desktop/UKUI/figures/35.png similarity index 100% rename from docs/en/docs/desktop/figures/35.png rename to docs/en/Tools/desktop/UKUI/figures/35.png diff --git a/docs/en/docs/desktop/figures/36.png b/docs/en/Tools/desktop/UKUI/figures/36.png similarity index 100% rename from docs/en/docs/desktop/figures/36.png rename to docs/en/Tools/desktop/UKUI/figures/36.png diff --git a/docs/en/docs/desktop/figures/37.png b/docs/en/Tools/desktop/UKUI/figures/37.png similarity index 100% rename from docs/en/docs/desktop/figures/37.png rename to docs/en/Tools/desktop/UKUI/figures/37.png diff --git 
a/docs/en/docs/desktop/figures/4.png b/docs/en/Tools/desktop/UKUI/figures/4.png similarity index 100% rename from docs/en/docs/desktop/figures/4.png rename to docs/en/Tools/desktop/UKUI/figures/4.png diff --git a/docs/en/docs/desktop/figures/5.png b/docs/en/Tools/desktop/UKUI/figures/5.png similarity index 100% rename from docs/en/docs/desktop/figures/5.png rename to docs/en/Tools/desktop/UKUI/figures/5.png diff --git a/docs/en/docs/desktop/figures/6.png b/docs/en/Tools/desktop/UKUI/figures/6.png similarity index 100% rename from docs/en/docs/desktop/figures/6.png rename to docs/en/Tools/desktop/UKUI/figures/6.png diff --git a/docs/en/docs/desktop/figures/7.png b/docs/en/Tools/desktop/UKUI/figures/7.png similarity index 100% rename from docs/en/docs/desktop/figures/7.png rename to docs/en/Tools/desktop/UKUI/figures/7.png diff --git a/docs/en/docs/desktop/figures/8.png b/docs/en/Tools/desktop/UKUI/figures/8.png similarity index 100% rename from docs/en/docs/desktop/figures/8.png rename to docs/en/Tools/desktop/UKUI/figures/8.png diff --git a/docs/en/docs/desktop/figures/9.png b/docs/en/Tools/desktop/UKUI/figures/9.png similarity index 100% rename from docs/en/docs/desktop/figures/9.png rename to docs/en/Tools/desktop/UKUI/figures/9.png diff --git a/docs/en/docs/desktop/figures/icon1.png b/docs/en/Tools/desktop/UKUI/figures/icon1.png similarity index 100% rename from docs/en/docs/desktop/figures/icon1.png rename to docs/en/Tools/desktop/UKUI/figures/icon1.png diff --git a/docs/en/docs/desktop/figures/icon10-o.png b/docs/en/Tools/desktop/UKUI/figures/icon10-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon10-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon10-o.png diff --git a/docs/en/docs/desktop/figures/icon11-o.png b/docs/en/Tools/desktop/UKUI/figures/icon11-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon11-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon11-o.png diff --git 
a/docs/en/docs/desktop/figures/icon12-o.png b/docs/en/Tools/desktop/UKUI/figures/icon12-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon12-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon12-o.png diff --git a/docs/en/docs/desktop/figures/icon13-o.png b/docs/en/Tools/desktop/UKUI/figures/icon13-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon13-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon13-o.png diff --git a/docs/en/docs/desktop/figures/icon14-o.png b/docs/en/Tools/desktop/UKUI/figures/icon14-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon14-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon14-o.png diff --git a/docs/en/docs/desktop/figures/icon15-o.png b/docs/en/Tools/desktop/UKUI/figures/icon15-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon15-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon15-o.png diff --git a/docs/en/docs/desktop/figures/icon16.png b/docs/en/Tools/desktop/UKUI/figures/icon16.png similarity index 100% rename from docs/en/docs/desktop/figures/icon16.png rename to docs/en/Tools/desktop/UKUI/figures/icon16.png diff --git a/docs/en/docs/desktop/figures/icon17.png b/docs/en/Tools/desktop/UKUI/figures/icon17.png similarity index 100% rename from docs/en/docs/desktop/figures/icon17.png rename to docs/en/Tools/desktop/UKUI/figures/icon17.png diff --git a/docs/en/docs/desktop/figures/icon18.png b/docs/en/Tools/desktop/UKUI/figures/icon18.png similarity index 100% rename from docs/en/docs/desktop/figures/icon18.png rename to docs/en/Tools/desktop/UKUI/figures/icon18.png diff --git a/docs/en/docs/desktop/figures/icon19-o.png b/docs/en/Tools/desktop/UKUI/figures/icon19-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon19-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon19-o.png diff --git a/docs/en/docs/desktop/figures/icon2.png b/docs/en/Tools/desktop/UKUI/figures/icon2.png similarity index 100% 
rename from docs/en/docs/desktop/figures/icon2.png rename to docs/en/Tools/desktop/UKUI/figures/icon2.png diff --git a/docs/en/docs/desktop/figures/icon26-o.png b/docs/en/Tools/desktop/UKUI/figures/icon26-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon26-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon26-o.png diff --git a/docs/en/docs/desktop/figures/icon27-o.png b/docs/en/Tools/desktop/UKUI/figures/icon27-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon27-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon27-o.png diff --git a/docs/en/docs/desktop/figures/icon28-o.png b/docs/en/Tools/desktop/UKUI/figures/icon28-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon28-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon28-o.png diff --git a/docs/en/docs/desktop/figures/icon3.png b/docs/en/Tools/desktop/UKUI/figures/icon3.png similarity index 100% rename from docs/en/docs/desktop/figures/icon3.png rename to docs/en/Tools/desktop/UKUI/figures/icon3.png diff --git a/docs/en/docs/desktop/figures/icon30-o.png b/docs/en/Tools/desktop/UKUI/figures/icon30-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon30-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon30-o.png diff --git a/docs/en/docs/desktop/figures/icon31-o.png b/docs/en/Tools/desktop/UKUI/figures/icon31-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon31-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon31-o.png diff --git a/docs/en/docs/desktop/figures/icon32.png b/docs/en/Tools/desktop/UKUI/figures/icon32.png similarity index 100% rename from docs/en/docs/desktop/figures/icon32.png rename to docs/en/Tools/desktop/UKUI/figures/icon32.png diff --git a/docs/en/docs/desktop/figures/icon33.png b/docs/en/Tools/desktop/UKUI/figures/icon33.png similarity index 100% rename from docs/en/docs/desktop/figures/icon33.png rename to docs/en/Tools/desktop/UKUI/figures/icon33.png diff 
--git a/docs/en/docs/desktop/figures/icon34.png b/docs/en/Tools/desktop/UKUI/figures/icon34.png similarity index 100% rename from docs/en/docs/desktop/figures/icon34.png rename to docs/en/Tools/desktop/UKUI/figures/icon34.png diff --git a/docs/en/docs/desktop/figures/icon35.png b/docs/en/Tools/desktop/UKUI/figures/icon35.png similarity index 100% rename from docs/en/docs/desktop/figures/icon35.png rename to docs/en/Tools/desktop/UKUI/figures/icon35.png diff --git a/docs/en/docs/desktop/figures/icon36.png b/docs/en/Tools/desktop/UKUI/figures/icon36.png similarity index 100% rename from docs/en/docs/desktop/figures/icon36.png rename to docs/en/Tools/desktop/UKUI/figures/icon36.png diff --git a/docs/en/docs/desktop/figures/icon37.png b/docs/en/Tools/desktop/UKUI/figures/icon37.png similarity index 100% rename from docs/en/docs/desktop/figures/icon37.png rename to docs/en/Tools/desktop/UKUI/figures/icon37.png diff --git a/docs/en/docs/desktop/figures/icon38.png b/docs/en/Tools/desktop/UKUI/figures/icon38.png similarity index 100% rename from docs/en/docs/desktop/figures/icon38.png rename to docs/en/Tools/desktop/UKUI/figures/icon38.png diff --git a/docs/en/docs/desktop/figures/icon39.png b/docs/en/Tools/desktop/UKUI/figures/icon39.png similarity index 100% rename from docs/en/docs/desktop/figures/icon39.png rename to docs/en/Tools/desktop/UKUI/figures/icon39.png diff --git a/docs/en/docs/desktop/figures/icon4.png b/docs/en/Tools/desktop/UKUI/figures/icon4.png similarity index 100% rename from docs/en/docs/desktop/figures/icon4.png rename to docs/en/Tools/desktop/UKUI/figures/icon4.png diff --git a/docs/en/docs/desktop/figures/icon40.png b/docs/en/Tools/desktop/UKUI/figures/icon40.png similarity index 100% rename from docs/en/docs/desktop/figures/icon40.png rename to docs/en/Tools/desktop/UKUI/figures/icon40.png diff --git a/docs/en/docs/desktop/figures/icon41.png b/docs/en/Tools/desktop/UKUI/figures/icon41.png similarity index 100% rename from 
docs/en/docs/desktop/figures/icon41.png rename to docs/en/Tools/desktop/UKUI/figures/icon41.png diff --git a/docs/en/docs/desktop/figures/icon42-o.png b/docs/en/Tools/desktop/UKUI/figures/icon42-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon42-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon42-o.png diff --git a/docs/en/docs/desktop/figures/icon43-o.png b/docs/en/Tools/desktop/UKUI/figures/icon43-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon43-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon43-o.png diff --git a/docs/en/docs/desktop/figures/icon44-o.png b/docs/en/Tools/desktop/UKUI/figures/icon44-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon44-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon44-o.png diff --git a/docs/en/docs/desktop/figures/icon45-o.png b/docs/en/Tools/desktop/UKUI/figures/icon45-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon45-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon45-o.png diff --git a/docs/en/docs/desktop/figures/icon46-o.png b/docs/en/Tools/desktop/UKUI/figures/icon46-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon46-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon46-o.png diff --git a/docs/en/docs/desktop/figures/icon47-o.png b/docs/en/Tools/desktop/UKUI/figures/icon47-o.png similarity index 100% rename from docs/en/docs/desktop/figures/icon47-o.png rename to docs/en/Tools/desktop/UKUI/figures/icon47-o.png diff --git a/docs/en/docs/desktop/figures/icon5.png b/docs/en/Tools/desktop/UKUI/figures/icon5.png similarity index 100% rename from docs/en/docs/desktop/figures/icon5.png rename to docs/en/Tools/desktop/UKUI/figures/icon5.png diff --git a/docs/en/docs/desktop/figures/icon6.png b/docs/en/Tools/desktop/UKUI/figures/icon6.png similarity index 100% rename from docs/en/docs/desktop/figures/icon6.png rename to docs/en/Tools/desktop/UKUI/figures/icon6.png diff --git 
a/docs/en/docs/desktop/figures/icon7.png b/docs/en/Tools/desktop/UKUI/figures/icon7.png similarity index 100% rename from docs/en/docs/desktop/figures/icon7.png rename to docs/en/Tools/desktop/UKUI/figures/icon7.png diff --git a/docs/en/docs/desktop/figures/icon8.png b/docs/en/Tools/desktop/UKUI/figures/icon8.png similarity index 100% rename from docs/en/docs/desktop/figures/icon8.png rename to docs/en/Tools/desktop/UKUI/figures/icon8.png diff --git a/docs/en/docs/desktop/figures/icon9.png b/docs/en/Tools/desktop/UKUI/figures/icon9.png similarity index 100% rename from docs/en/docs/desktop/figures/icon9.png rename to docs/en/Tools/desktop/UKUI/figures/icon9.png diff --git a/docs/en/docs/desktop/installing-UKUI.md b/docs/en/Tools/desktop/UKUI/ukui-installation.md similarity index 84% rename from docs/en/docs/desktop/installing-UKUI.md rename to docs/en/Tools/desktop/UKUI/ukui-installation.md index 210245b78f233854687e0b03ee33812eb639a590..df992cb81e772eeecd010ad4bb5e9ef3040e9a4f 100644 --- a/docs/en/docs/desktop/installing-UKUI.md +++ b/docs/en/Tools/desktop/UKUI/ukui-installation.md @@ -1,4 +1,5 @@ # UKUI Installation + UKUI is a Linux desktop built by the KylinSoft software team over the years, primarily based on GTK and QT. Compared to other UI interfaces, UKUI is easy to use. The components of UKUI are small and loosely coupled, and can run independently without relying on other suites, providing users with a friendly and efficient experience. UKUI supports both x86_64 and aarch64 architectures. @@ -6,16 +7,22 @@ UKUI supports both x86_64 and aarch64 architectures. You are advised to create an administrator user before installing UKUI. 1. Download openEuler ISO and update the software source. -``` -sudo dnf update -``` + + ```shell + sudo dnf update + ``` + 2. Install UKUI. -``` -sudo dnf install ukui -``` + + ```shell + sudo dnf install ukui + ``` + 3.
If you want to set the system to start with the graphical interface after confirming the installation, run the following command and reboot the system (`reboot`). -``` -systemctl set-default graphical.target -``` + + ```shell + systemctl set-default graphical.target + ``` + UKUI is constantly updated. Please check the latest installation method: [https://gitee.com/openeuler/ukui](https://gitee.com/openeuler/ukui) diff --git a/docs/en/docs/desktop/UKUI-user-guide.md b/docs/en/Tools/desktop/UKUI/ukui-user-guide.md old mode 100755 new mode 100644 similarity index 90% rename from docs/en/docs/desktop/UKUI-user-guide.md rename to docs/en/Tools/desktop/UKUI/ukui-user-guide.md index f5f2b830eb0d8e8bdd7f002e8d667dda939ce282..1a48cb8b8481b5051b7441394dd21ded1c809218 --- a/docs/en/docs/desktop/UKUI-user-guide.md +++ b/docs/en/Tools/desktop/UKUI/ukui-user-guide.md @@ -1,30 +1,29 @@ # UKUI Desktop Environment ## Overview + Desktop Environment is the basis for the user's operation on the graphical interface, and provides multiple functions including taskbar, start menu, etc. The main interface is shown in figure below. ![Fig. 1 Desktop main interface-big](./figures/1.png) -
- ## Desktop -### Desktop’s Icons +### Desktop Icons + The system places three icons, Computer, Recycle Bin, and Personal, by default; double-click an icon to open it. The functions are shown in the table below. -| Icon | Description | +| Icon | Description | | :------------ | :------------ | | ![](./figures/icon1.png) | Computer: Show the drives and hardware connected to the machine| | ![](./figures/icon2.png) | Recycle Bin: Show documents that have been deleted| | ![](./figures/icon3.png) | Personal: Show the personal home directory| - -
- + In addition, right-clicking "Computer" and selecting "Properties" shows the current system version, kernel version, activation status, and other related information. ![Fig. 2 "Computer" - "Properties"-big](./figures/2.png) ### Right-click Menu + Right-click on a blank area of the desktop and a menu appears as shown in figure below, providing users with some shortcut features. ![Fig. 3 Right-click Menu](./figures/3.png) @@ -37,11 +36,10 @@ Some of the options are described in table below. | View type | Four view types are available: small icon, medium icon, large icon, super large icon | | Sort by | Four ways to arrange documents according to name, type of document, size and date of modification | -
- ## Taskbar ### Basic Function + Taskbar is located at the bottom and includes the Start Menu, Multi View Switch, File Browser, Firefox Web Browser, WPS, and Tray Menu. ![Fig. 4 Taskbar](./figures/4.png) @@ -58,11 +56,13 @@ Taskbar is located at the bottom and includes the Start Menu, Multi View Switch, |Show Desktop| The button is on the far right. Minimize all windows on the desktop and return to the desktop; Clicking again will restore the windows | #### Multi View Switch + Click the icon "![](./figures/icon10-o.png)" on the taskbar to enter the interface shown in figure below, and select the operation area that users need to work on at the moment in multiple work areas. ![Fig. 5 Multi View Switch-big](./figures/5.png) #### Preview Window + Users move the mouse over the app icon in the taskbar, and then a small preview window will be shown if this app has already been opened. Hover over a specific window as shown below: the hovered window takes on a slight frosted-glass effect (left), while the other windows remain in the default state (right). @@ -74,6 +74,7 @@ Users can close the application by right-clicking on the app icon in the taskbar ![Fig. 7 Taskbar - Right-click Preview](./figures/7.png) #### Sidebar + The sidebar is located at the right of the entire desktop. Click the icon "![](./figures/icon11-o.png)" in the taskbar tray menu to open the storage menu, and click the icon "![](./figures/icon12-o.png)" in Sidebar to pop up the sidebar as shown in figure below. The sidebar consists of two parts: Notification Center, Clipboard and Widget. @@ -81,6 +82,7 @@ The sidebar consists of two parts: Notification Center, Clipboard and Widget. ![Fig. 8 Sidebar without message status-big](./figures/8.png) ##### Notification Center + Notification center will display a list of recent important and newest information. 
Select "Clear" in the upper right corner to clear the list of information; Select "Setting" in the upper right corner to go to the notification settings in the control center, and users can set which applications can show information and the quantity of information. @@ -96,6 +98,7 @@ Icon "![](./figures/icon13-o.png)" at the top right corner of the sidebar can st ![Fig. 11 Message Organizer](./figures/11.png) ##### Clipboard + Clipboard can save the contents that were recently selected to copy or cut, and users can operate them by using the icons in Table. ![Fig. 12 Clipboard](./figures/12.png) @@ -107,16 +110,16 @@ Clicking "![](./figures/icon15-o.png)", users can edit the contents of the c | Icon | Description | Icon | Description | | :------------ | :------------ | :------------ | :------------ | | ![](./figures/icon16.png) | Copy the content | ![](./figures/icon18.png) | Edit the content | -| ![](./figures/icon17.png) | Delete the content | | | - -
- +| ![](./figures/icon17.png) | Delete the content | | | + The second label of the clipboard is the small plug-in that contains alarm clock, notebook, user feedback. ![Fig. 14 Plug-in](./figures/14.png) #### Tray Menu + ##### Storage Menu + Click "![](./figures/icon19-o.png)" at the tray menu to open the storage menu. It contains Kylin Weather, Input Method, Bluetooth, USB, etc. @@ -124,11 +127,13 @@ It contains Kylin Weather, Input Method, Bluetooth, USB, etc. ![Fig. 15 Storage Menu](./figures/15.png) ##### Input Method + The taskbar input method defaults to Sogou input method. Use the shortcut key "Ctrl+Space" to switch it out, and the "Shift" key to switch between Chinese and English modes. ![Fig. 16 Input Method](./figures/16.png) ##### USB + When the USB is inserted into the computer, it will be automatically read the data inside. Click "![](./figures/icon26-o.png)" to open the window as shown in figure below. @@ -138,6 +143,7 @@ When users need to umount the USB, please click the icon "![](./figures/icon27-o ![Fig. 17 The status of USB](./figures/17.png) ##### Power Supply + Click the icon "![](./figures/icon28-o.png)": When no power supply is detected. @@ -159,17 +165,16 @@ If the power manager pops up a"low battery" window, users can click to turn on t ![Fig. 21 Power Saving Mode](./figures/21.png) ##### Network + Users can choose wired or wireless network connections by clicking the icon "![](./figures/icon31-o.png)" of network manager. 
| Icon | Description | Icon | Description | | :------------ | :------------ | :------------ | :------------ | -|![](./figures/icon32.png)| Connected |![](./figures/icon37.png)| Unconnected | -|![](./figures/icon33.png)| Connection limited |![](./figures/icon38.png)| Locked | -|![](./figures/icon34.png)| Connecting |![](./figures/icon39.png)| Wifi connected | -|![](./figures/icon35.png)| Wifi unconnected |![](./figures/icon40.png)| Wificonnection limited | -|![](./figures/icon36.png)| Wifi locked |![](./figures/icon41.png)| Wifi connecting | - -
+|![](./figures/icon32.png)| Connected |![](./figures/icon37.png)| Unconnected | +|![](./figures/icon33.png)| Connection limited |![](./figures/icon38.png)| Locked | +|![](./figures/icon34.png)| Connecting |![](./figures/icon39.png)| Wifi connected | +|![](./figures/icon35.png)| Wifi unconnected |![](./figures/icon40.png)| Wifi connection limited | +|![](./figures/icon36.png)| Wifi locked |![](./figures/icon41.png)| Wifi connecting | ![Fig. 22 Network Connection](./figures/22.png) @@ -193,6 +198,7 @@ Users can choose wired or wireless network connections by clicking the icon "![] ![Fig. 26 Network Setting](./figures/26.png) ##### Volume + Click the icon "![](./figures/icon43-o.png)" to open the volume window, which provides three modes. - Mini Mode @@ -211,6 +217,7 @@ Click the icon "![](./figures/icon43-o.png)" to open the volume window, and ther ![Fig. 29 According to Application List](./figures/29.png) ##### Calendar + Click the date&time on the taskbar to open the calendar window. Users can view the day's information by filtering the year, month, day. The date will be displayed in large letters, with the time, the week, the festival, and the lunar calendar. Taboos can be seen by checking. @@ -218,19 +225,21 @@ Users can view the day's information by filtering the year, month, day. The date ![Fig. 30 Calendar-big](./figures/30.png) ##### Night Mode + Click the icon "![](./figures/icon44-o.png)" on the Taskbar and then the system changes to the night mode. #### Advanced Setting + Right-click the Taskbar to open the menu. ![Fig. 31 Right-Clicking Menu](./figures/31.png) Users can set the layout of the taskbar according to "Taskbar Settings". -
+## Window + +### Window Manager -## Window -### Window Manager The functions provided as shown in Table. | Function | Description | @@ -242,21 +251,20 @@ The functions provided as shown in Table. | Drag and Drop | Long press the left mouse button at the title bar to move the window to any position | | Resize | Move the mouse to the corner of the window and long press the left button to resize the window | -
- ### Window Switch -There are three ways to switch windows: -* Click the window title on the Taskbar. +There are three ways to switch windows: -* Click the different window at the desktop. +- Click the window title on the Taskbar. -* Use shortcut keys < Alt > + < Tab >. +- Click the different window at the desktop. -
+- Use shortcut keys < Alt > + < Tab >. ## Start Menu + ### Basic Function + Click the button to open the "Start Menu". It provides sliding bar. @@ -264,32 +272,39 @@ It provides sliding bar. ![Fig. 32 Start Menu](./figures/32.png) #### Category Menu at right side + When the mouse is over the right side of the start menu, it will appear a pre-expanded cue bar. Clicking to expand, and then three categories are showing at the right side by default: "Common Software", "Alphabetical Category", and "Functional category". -* All Software: List all software, recently used software will be displayed on the top of this page. +- All Software: List all software, recently used software will be displayed on the top of this page. -* Alphabetical Category: List all software by first letter. +- Alphabetical Category: List all software by first letter. -* Functional category: List all software by their functions. +- Functional category: List all software by their functions. Users can click the button at top right corner to view fullscreen menu mode. ![Fig. 33 Fullscreen Menu-big](./figures/33.png) #### Function Button at right side + It provides User Avatar, Computer, Control Center and Shutdown four options. ##### User Avatar + Click "![](./figures/icon45-o.png)" to view user's information. ##### Computer + Click "![](./figures/icon46-o.png)" to open personal home folder ##### Control Center + Click "![](./figures/icon47-o.png)" to go to the control center. ##### Shutdown + ###### Lock Screen + When users do not need to use the computer temporarily, the lock screen can be selected (without affecting the current running state of the system) to prevent misoperations. And input the password to re-enter the system. The system will automatically lock the screen after a period of idle time by default. @@ -297,11 +312,13 @@ When users do not need to use the computer temporarily, the lock screen can be s ![Fig. 
34 Lock Screen-big](./figures/34.png) ###### Switch Users & Log Out + When users want to select another user to log in using the computer, users can select "Log out" or "Switch user". At this point, the system will close all running applications; Therefore, please save the current jobs before performing this action. ###### Shutdown & Reboot + There are two ways: 1)"Start Menu" > "Power" > "Shutdown" @@ -319,6 +336,7 @@ The system will shutdown or reboot immediately without popping up the dialog box Right-clicking Start Menu, it provides lock screen, switch user, log out, reboot, and shutdown five shortcut options. ### Applications + Users can search apps in the search box by key words. As shown in figure below, the result will show up automatically with the input. ![Fig. 36 Search Apps](./figures/36.png) @@ -336,11 +354,9 @@ The options are described in table below. | Add to Desktop Shortcut |Generate shortcut icon for the application on the desktop| | Uninstall |Remove the application| -
- ## FAQ -### Can’t login to the system after locking the screen? +### Failed to Log In to the System After Locking the Screen - Switch to character terminal by < Ctrl + Alt + F2 >. @@ -350,17 +366,15 @@ The options are described in table below. - Switch to graphical interface by < Ctrl + Alt + F1 >, and input the password. -
+## Appendix -## Appendix ### Shortcut Key -|Shortcut Key|Function| -| :------ | :----- -| F5 | Refresh the desktop | -| F1 | Open the user-guide| -| Alt + Tab | Switch the window | -| win | Open the Start Menu | -| Ctrl + Alt + L | Lock Screen | -| Ctrl + Alt + Delete | Log out | - +| Shortcut Key | Function | +| :------------------ | :------------------ | +| F5 | Refresh the desktop | +| F1 | Open the user-guide | +| Alt + Tab | Switch the window | +| win | Open the Start Menu | +| Ctrl + Alt + L | Lock Screen | +| Ctrl + Alt + Delete | Log out | diff --git a/docs/en/Tools/desktop/XFCE/Menu/index.md b/docs/en/Tools/desktop/XFCE/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..6aa1a6615439ddb3d2299bb82c85f7b79c179afb --- /dev/null +++ b/docs/en/Tools/desktop/XFCE/Menu/index.md @@ -0,0 +1,6 @@ +--- +headless: true +--- +- [Xfce Installation]({{< relref "./xfce-installation.md" >}}) +- [Xfce User Guide]({{< relref "./xfce-user-guide.md" >}}) +- [Xfce Common Issues and Solutions]({{< relref "./xfce-common-issues-and-solutions.md" >}}) diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-1.png b/docs/en/Tools/desktop/XFCE/figures/xfce-1.png new file mode 100644 index 0000000000000000000000000000000000000000..0e478b9f10ddf3210d5f5fada2e45329e2d1d028 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-1.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-2.png b/docs/en/Tools/desktop/XFCE/figures/xfce-2.png new file mode 100644 index 0000000000000000000000000000000000000000..33a946d988d499a1e98cb43968b72119bd48d7a5 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-2.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-3.png b/docs/en/Tools/desktop/XFCE/figures/xfce-3.png new file mode 100644 index 0000000000000000000000000000000000000000..020356f0c981fac2aafe33c8e997efbf01af9253 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-3.png differ diff --git 
a/docs/en/Tools/desktop/XFCE/figures/xfce-4.png b/docs/en/Tools/desktop/XFCE/figures/xfce-4.png new file mode 100644 index 0000000000000000000000000000000000000000..21369e366322955023b427e7a2ae63fd29b387e5 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-4.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-5.png b/docs/en/Tools/desktop/XFCE/figures/xfce-5.png new file mode 100644 index 0000000000000000000000000000000000000000..1f7807877f775fe6aa32652a29ef833e48e1a6ee Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-5.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-6.png b/docs/en/Tools/desktop/XFCE/figures/xfce-6.png new file mode 100644 index 0000000000000000000000000000000000000000..e5376fcfd1737234a885d4d95649cd996005cf0c Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-6.png differ diff --git a/docs/en/docs/desktop/figures/xfce-7.png b/docs/en/Tools/desktop/XFCE/figures/xfce-7.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-7.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-7.png diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-71.png b/docs/en/Tools/desktop/XFCE/figures/xfce-71.png new file mode 100644 index 0000000000000000000000000000000000000000..11d1618c907d4bb18de1eb68e42e9b98d92d91c3 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-71.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-8.png b/docs/en/Tools/desktop/XFCE/figures/xfce-8.png new file mode 100644 index 0000000000000000000000000000000000000000..f6f97d9a173105cb6a72e4b8c48deab25ecac898 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-8.png differ diff --git a/docs/en/docs/desktop/figures/xfce-81.png b/docs/en/Tools/desktop/XFCE/figures/xfce-81.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-81.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-81.png diff --git 
a/docs/en/Tools/desktop/XFCE/figures/xfce-811.png b/docs/en/Tools/desktop/XFCE/figures/xfce-811.png new file mode 100644 index 0000000000000000000000000000000000000000..58233638eca203d917081d6a9ac5003474cbf60b Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-811.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-812.png b/docs/en/Tools/desktop/XFCE/figures/xfce-812.png new file mode 100644 index 0000000000000000000000000000000000000000..0fc975f75da95dce8a3e5a098d024578335c9426 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-812.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-813.png b/docs/en/Tools/desktop/XFCE/figures/xfce-813.png new file mode 100644 index 0000000000000000000000000000000000000000..4d399468c74355cbaa765380720cb9561e95f834 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-813.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-814.png b/docs/en/Tools/desktop/XFCE/figures/xfce-814.png new file mode 100644 index 0000000000000000000000000000000000000000..c09fd6524a20ba04e0fca30307d35fa05e79c1f4 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-814.png differ diff --git a/docs/en/docs/desktop/figures/xfce-82.png b/docs/en/Tools/desktop/XFCE/figures/xfce-82.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-82.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-82.png diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-821.png b/docs/en/Tools/desktop/XFCE/figures/xfce-821.png new file mode 100644 index 0000000000000000000000000000000000000000..c5c1f3567dccda3d0d49ae445612d5b9ba27e09a Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-821.png differ diff --git a/docs/en/docs/desktop/figures/xfce-83.png b/docs/en/Tools/desktop/XFCE/figures/xfce-83.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-83.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-83.png diff --git 
a/docs/en/Tools/desktop/XFCE/figures/xfce-831.png b/docs/en/Tools/desktop/XFCE/figures/xfce-831.png new file mode 100644 index 0000000000000000000000000000000000000000..6456dd02f0281a5ec8d752ba5b95be581bcbfa09 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-831.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-832.png b/docs/en/Tools/desktop/XFCE/figures/xfce-832.png new file mode 100644 index 0000000000000000000000000000000000000000..2932aaacf71fa53f1d0c10340df3aebcc016e991 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-832.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-84.png b/docs/en/Tools/desktop/XFCE/figures/xfce-84.png new file mode 100644 index 0000000000000000000000000000000000000000..e0435c2edf9f68d193cff036215f32c259d378f0 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-84.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-841.png b/docs/en/Tools/desktop/XFCE/figures/xfce-841.png new file mode 100644 index 0000000000000000000000000000000000000000..c2c06346d4a296bfbe7836139cd943baa1ce6ea5 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-841.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-842.png b/docs/en/Tools/desktop/XFCE/figures/xfce-842.png new file mode 100644 index 0000000000000000000000000000000000000000..101bf6923e3780617d33dde04b92232ca7f87b42 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-842.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-85.png b/docs/en/Tools/desktop/XFCE/figures/xfce-85.png new file mode 100644 index 0000000000000000000000000000000000000000..21b39638fe4c83e0da5cdc69ecad9b7a22718a55 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-85.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-851.png b/docs/en/Tools/desktop/XFCE/figures/xfce-851.png new file mode 100644 index 
0000000000000000000000000000000000000000..893064ca10399a683afbcb3752266d93b0a79a51 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-851.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-86.png b/docs/en/Tools/desktop/XFCE/figures/xfce-86.png new file mode 100644 index 0000000000000000000000000000000000000000..35e8a99e31e4a49eb64b24cfbab825111e40f709 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-86.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-861.png b/docs/en/Tools/desktop/XFCE/figures/xfce-861.png new file mode 100644 index 0000000000000000000000000000000000000000..affc46c874991a3b289e15072e06ba6566c099b1 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-861.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-87.png b/docs/en/Tools/desktop/XFCE/figures/xfce-87.png new file mode 100644 index 0000000000000000000000000000000000000000..47524c21d57c887c3398ea53a675f89e9f92113f Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-87.png differ diff --git a/docs/en/docs/desktop/figures/xfce-9.png b/docs/en/Tools/desktop/XFCE/figures/xfce-9.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-9.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-9.png diff --git a/docs/en/docs/desktop/figures/xfce-91.png b/docs/en/Tools/desktop/XFCE/figures/xfce-91.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-91.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-91.png diff --git a/docs/en/docs/desktop/figures/xfce-911.png b/docs/en/Tools/desktop/XFCE/figures/xfce-911.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-911.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-911.png diff --git a/docs/en/docs/desktop/figures/xfce-92.png b/docs/en/Tools/desktop/XFCE/figures/xfce-92.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-92.png rename to 
docs/en/Tools/desktop/XFCE/figures/xfce-92.png diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-921.png b/docs/en/Tools/desktop/XFCE/figures/xfce-921.png new file mode 100644 index 0000000000000000000000000000000000000000..5eb6f40df9ca73e11b9b9fa5079496ac0c36857b Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-921.png differ diff --git a/docs/en/docs/desktop/figures/xfce-93.png b/docs/en/Tools/desktop/XFCE/figures/xfce-93.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-93.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-93.png diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-931.png b/docs/en/Tools/desktop/XFCE/figures/xfce-931.png new file mode 100644 index 0000000000000000000000000000000000000000..a156e5cf14ae154b93e845ff1bd5bc6ba12c9beb Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-931.png differ diff --git a/docs/en/docs/desktop/figures/xfce-94.png b/docs/en/Tools/desktop/XFCE/figures/xfce-94.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-94.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-94.png diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-941.png b/docs/en/Tools/desktop/XFCE/figures/xfce-941.png new file mode 100644 index 0000000000000000000000000000000000000000..f7904da12dc807836acfb9d6f24b8d9b976a2fdc Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-941.png differ diff --git a/docs/en/docs/desktop/figures/xfce-95.png b/docs/en/Tools/desktop/XFCE/figures/xfce-95.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-95.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-95.png diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-951.png b/docs/en/Tools/desktop/XFCE/figures/xfce-951.png new file mode 100644 index 0000000000000000000000000000000000000000..6521a28275d2b63c12b47604c7afc926f7938697 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-951.png differ diff --git 
a/docs/en/docs/desktop/figures/xfce-96.png b/docs/en/Tools/desktop/XFCE/figures/xfce-96.png similarity index 100% rename from docs/en/docs/desktop/figures/xfce-96.png rename to docs/en/Tools/desktop/XFCE/figures/xfce-96.png diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-961.png b/docs/en/Tools/desktop/XFCE/figures/xfce-961.png new file mode 100644 index 0000000000000000000000000000000000000000..874fa200f4e63b690261d7827f3c73cf70861b32 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-961.png differ diff --git a/docs/en/Tools/desktop/XFCE/figures/xfce-962.png b/docs/en/Tools/desktop/XFCE/figures/xfce-962.png new file mode 100644 index 0000000000000000000000000000000000000000..bb84e35e43e992bc68b053a0da760bd5aa8b0270 Binary files /dev/null and b/docs/en/Tools/desktop/XFCE/figures/xfce-962.png differ diff --git a/docs/en/Tools/desktop/XFCE/xfce-common-issues-and-solutions.md b/docs/en/Tools/desktop/XFCE/xfce-common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..21b68776fb1a55872cd96fbbce896f578960d648 --- /dev/null +++ b/docs/en/Tools/desktop/XFCE/xfce-common-issues-and-solutions.md @@ -0,0 +1,7 @@ +# Common Issues and Solutions + +## Issue 1: Why Is the Background Color of the LightDM Login Page Black + +The login page is black because `background` is not set in the default configuration file **/etc/lightdm/lightdm-gtk-greeter.conf** of lightdm-gtk. + +Set `background=/usr/share/backgrounds/xfce/xfce-blue.jpg` in the `greeter` section at the end of the configuration file, and then run the `systemctl restart lightdm` command. 
diff --git a/docs/en/docs/desktop/installing-Xfce.md b/docs/en/Tools/desktop/XFCE/xfce-installation.md similarity index 43% rename from docs/en/docs/desktop/installing-Xfce.md rename to docs/en/Tools/desktop/XFCE/xfce-installation.md index e22e2332734d4454fa375994b26145c8a3f553f3..80941b2a8d5e9702edcdc6d7ee4c5178695c97a0 100644 --- a/docs/en/docs/desktop/installing-Xfce.md +++ b/docs/en/Tools/desktop/XFCE/xfce-installation.md @@ -6,64 +6,65 @@ Xfce supports the x86\_64 and AArch64 architectures. You are advised to create an administrator during the installation. -1. [Download ](https://openeuler.org/en/download/)the openEuler ISO image and install the system. Run the following command to update the software source. You are advised to configure the Everything source and the EPOL source. This document describes how to install Xfce in the minimum installation scenario. - - ``` - sudo dnf update - ``` +1. [Download](https://openeuler.org/en/download/)the openEuler ISO image and install the system. Run the following command to update the software source. You are advised to configure the Everything source and the EPOL source. This document describes how to install Xfce in the minimum installation scenario. + + ```shell + sudo dnf update + ``` 2. Run the following command to install the font library: - - ``` - sudo dnf install dejavu-fonts liberation-fonts gnu-*-fonts google-*-fonts - ``` + + ```shell + sudo dnf install dejavu-fonts liberation-fonts gnu-*-fonts google-*-fonts + ``` 3. Run the following command to install Xorg: - - ``` - sudo dnf install xorg-* - ``` + + ```shell + sudo dnf install xorg-* + ``` 4. Run the following command to install Xfce: - - ``` - sudo dnf install xfwm4 xfdesktop xfce4-* xfce4-*-plugin *fonts - ``` + + ```shell + sudo dnf install xfwm4 xfdesktop xfce4-* xfce4-*-plugin *fonts + ``` 5. 
Run the following command to install the login manager: - - ``` - sudo dnf install lightdm lightdm-gtk - ``` + + ```shell + sudo dnf install lightdm lightdm-gtk + ``` 6. Run the following command to start Xfce using the login manager: - - ```` - sudo systemctl start lightdm - ```` - - After the login manager is started, choose **Xfce Session** in the upper right corner and enter the user name and password to log in. + + ````shell + sudo systemctl start lightdm + ```` + + After the login manager is started, choose **Xfce Session** in the upper right corner and enter the user name and password to log in. 7. Run the following command to set the GUI to start upon system boot: - - ``` - sudo systemctl enable lightdm - sudo systemctl set-default graphical.target - ``` - - If GDM is installed by default, you are advised to disable GDM. - - ``` - systemctl disable gdm - ``` + + ```shell + sudo systemctl enable lightdm + sudo systemctl set-default graphical.target + ``` + + If GDM is installed by default, you are advised to disable GDM. + + ```shell + systemctl disable gdm + ``` 8. Restart the server. - - ``` - sudo reboot - ``` - - 9. FAQs -**Why Is the Background Color of the LightDM Login Page Black?** -The login page is black because `background` is not set in the default configuration file **/etc/lightdm/lightdm-gtk-greeter.conf** of lightdm-gtk. -Set `background=/usr/share/backgrounds/xfce/xfce-blue.jpg` in the `greeter` section at the end of the configuration file, and then run the `systemctl restart lightdm` command. + + ```shell + sudo reboot + ``` + +9. FAQs + - **Why Is the Background Color of the LightDM Login Page Black?** + + The login page is black because `background` is not set in the default configuration file **/etc/lightdm/lightdm-gtk-greeter.conf** of lightdm-gtk. + Set `background=/usr/share/backgrounds/xfce/xfce-blue.jpg` in the `greeter` section at the end of the configuration file, and then run the `systemctl restart lightdm` command. 
diff --git a/docs/en/docs/desktop/Xfce_userguide.md b/docs/en/Tools/desktop/XFCE/xfce-user-guide.md similarity index 88% rename from docs/en/docs/desktop/Xfce_userguide.md rename to docs/en/Tools/desktop/XFCE/xfce-user-guide.md index 1ae20fb84d8904d454d41892106decf8d22d2ced..c74a522f43d0ba2583bcf128fb4726051a02ab39 100644 --- a/docs/en/docs/desktop/Xfce_userguide.md +++ b/docs/en/Tools/desktop/XFCE/xfce-user-guide.md @@ -1,4 +1,4 @@ -# Xfce Desktop Environment +# Xfce User Guide ## 1. Overview @@ -6,9 +6,7 @@ Xfce is a lightweight desktop environment running on Unix-like operating systems The following figure shows the WebUI. -![Figure 1 Main screen of the desktop - big](./figures/xfce-1.png) - -
+![Figure 1 Main screen of the desktop](./figures/xfce-1.png) ## 2. Desktop @@ -16,7 +14,7 @@ The following figure shows the WebUI. By default, icons such as the file system, main folder, and mount directory are placed. You can double-click the icons to open the page. -![Figure 2 Default desktop icons - big](./figures/xfce-2.png) +![Figure 2 Default desktop icons](./figures/xfce-2.png) ### 2.2 Shortcut Menu @@ -39,8 +37,6 @@ The following table describes some options. | Properties| Set desktop properties, such as the general, logo, and permission.| | Applications| Applications| -
- ## 3. Taskbar ### 3.1 Basic Functions @@ -58,11 +54,11 @@ The taskbar is located at the top, including application, window display area, m #### 3.1.1 Applications -![Figure 5 All applications – big](./figures/xfce-5.png) +![Figure 5 All applications](./figures/xfce-5.png) #### 3.1.2 Window Display Area -![Figure 6 Window display area - big](./figures/xfce-6.png) +![Figure 6 Window display area](./figures/xfce-6.png) #### 3.1.3 Multi-View Switching @@ -70,11 +66,11 @@ Click ![](./figures/xfce-7.png) in the taskbar to enter the corresponding work a For example, you can use the mouse to switch among multiple workspaces to select the operation area that you want to work in. -![Figure 7 Switching among multiple views - big](./figures/xfce-71.png) +![Figure 7 Switching among multiple views](./figures/xfce-71.png) #### 3.1.4 Tray -![Figure 8 Tray menu - big](./figures/xfce-8.png) +![Figure 8 Tray menu](./figures/xfce-8.png) ##### 3.1.4.1 Network @@ -116,7 +112,7 @@ You can click **Power Manager Settings** to configure the display and nodes. Click ![](./figures/xfce-84.png) on the taskbar. -![Figure 16 Notification center - big](./figures/xfce-841.png) +![Figure 16 Notification center](./figures/xfce-841.png) You can disable the notification function by selecting **Do not disturb**. @@ -124,7 +120,7 @@ The notification center displays the latest important information list. You can You can click **Notification settings** to go to the notification setting page of the control panel and set the applications to be displayed and the number of messages to be displayed. -![Figure 17 Notification center - big](./figures/xfce-842.png) +![Figure 17 Notification center](./figures/xfce-842.png) ##### 3.1.4.5 Calendar @@ -136,7 +132,7 @@ You can choose a year, a month and a day to view the information of a specific d Right-click the time and date on the taskbar and click **Properties** to set the time. 
-![Figure 19 Date setting - big](./figures/xfce-851.png) +![Figure 19 Date setting](./figures/xfce-851.png) ##### 3.1.4.6 Advanced Settings @@ -184,8 +180,6 @@ To log out of the GUI, click **Log Out**. Then, the system closes all running applications. Therefore, before performing this operation, save the current work. -
- ## 4. Shortcut Operation Bar ### 4.1 Basic Functions @@ -207,7 +201,7 @@ The shortcut operation bar is located at the bottom, including the icons for dis Click ![](./figures/xfce-91.png) on the shortcut operation bar to display the desktop. -![Figure 24 Showing the desktop - big](./figures/xfce-911.png) +![Figure 24 Showing the desktop](./figures/xfce-911.png) #### 4.1.2 Terminal @@ -219,26 +213,26 @@ Click ![](./figures/xfce-92.png) on the shortcut operation bar to open a termina You can click the ![](./figures/xfce-93.png) icon on the shortcut operation bar to open a file manager. -![Figure 26 File manager - big](./figures/xfce-931.png) +![Figure 26 File manager](./figures/xfce-931.png) #### 4.1.4 Web Browser You can click the ![](./figures/xfce-94.png) icon on the shortcut operation bar to open a web browser. -![Figure 27 Web browser - big](./figures/xfce-941.png) +![Figure 27 Web browser](./figures/xfce-941.png) #### 4.1.5 Application Finder You can click the ![](./figures/xfce-95.png) icon on the shortcut operation bar to open an application program search interface. -![Figure 28 Searching for an application - big](./figures/xfce-951.png) +![Figure 28 Searching for an application](./figures/xfce-951.png) #### 4.1.6 User Home Directory Click ![](./figures/xfce-96.png) on the shortcut operation bar and click **Open File**. The user home directory page is displayed. -![Figure 29 User home directory - big](./figures/xfce-961.png) +![Figure 29 User home directory](./figures/xfce-961.png) Click the ![](./figures/xfce-96.png) icon on the shortcut operation bar, and then click **Open in Terminal** to open a terminal. The current directory is the home directory of the user. 
-![Figure 30 User home directory - big](./figures/xfce-962.png) \ No newline at end of file +![Figure 30 User home directory](./figures/xfce-962.png) diff --git a/docs/en/Virtualization/Menu/index.md b/docs/en/Virtualization/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..4be1b9a0a19d5bce9cb6e833d975afd6ee718daa --- /dev/null +++ b/docs/en/Virtualization/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [Virtualization]({{< relref "./VirtualizationPlatform/Menu/index.md" >}}) diff --git a/docs/en/Virtualization/VirtualizationPlatform/Menu/index.md b/docs/en/Virtualization/VirtualizationPlatform/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..b9429cd692401ee4fab31e8dcfbc9af6f3b7b764 --- /dev/null +++ b/docs/en/Virtualization/VirtualizationPlatform/Menu/index.md @@ -0,0 +1,7 @@ +--- +headless: true +--- + +- [Virtualization User Guide]({{< relref "./Virtualization/Menu/index.md" >}}) +- [StratoVirt User Guide]({{< relref "./StratoVirt/Menu/index.md" >}}) +- [OpenStack User Guide]({{< relref "./OpenStack/Menu/index.md" >}}) diff --git a/docs/en/Virtualization/VirtualizationPlatform/OpenStack/Menu/index.md b/docs/en/Virtualization/VirtualizationPlatform/OpenStack/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..f5362b27b63d7ff5544027fbd8d87704b5b28055 --- /dev/null +++ b/docs/en/Virtualization/VirtualizationPlatform/OpenStack/Menu/index.md @@ -0,0 +1,5 @@ +--- +headless: true +--- + +- [OpenStack User Guide]({{< relref "./openstack.md" >}}) diff --git a/docs/en/docs/thirdparty_migration/openstack.md b/docs/en/Virtualization/VirtualizationPlatform/OpenStack/openstack.md similarity index 65% rename from docs/en/docs/thirdparty_migration/openstack.md rename to docs/en/Virtualization/VirtualizationPlatform/OpenStack/openstack.md index e56c50b9cfa0aeefb078f4fc9069193fce18b044..a67e43655c8520d7b5330fc74f99cb2a1fa65356 100644 --- 
a/docs/en/docs/thirdparty_migration/openstack.md +++ b/docs/en/Virtualization/VirtualizationPlatform/OpenStack/openstack.md @@ -1,3 +1,3 @@ # openEuler OpenStack -openEuler OpenStack documents have been moved to [OpenStack SIG Doc](https://openeuler.gitee.io/openstack/). +openEuler OpenStack documents have been moved to [OpenStack SIG Doc](https://openstack-sig.readthedocs.io/). diff --git a/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/Menu/index.md b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..7bb3ea5520c7fc9c2778aad4600185c89763f9d7 --- /dev/null +++ b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/Menu/index.md @@ -0,0 +1,13 @@ +--- +headless: true +--- + +- [StratoVirt User Guide]({{< relref "./stratovirt-user-guide.md" >}}) + - [StratoVirt Introduction]({{< relref "./stratovirt-introduction.md" >}}) + - [StratoVirt Installation]({{< relref "./stratovirt-installation.md" >}}) + - [Environment Preparation]({{< relref "./environment-preparation.md" >}}) + - [VM Configuration]({{< relref "./vm-configuration.md" >}}) + - [VM Management]({{< relref "./vm-management.md" >}}) + - [Interconnection with the iSula Secure Container]({{< relref "./interconnection-with-isula.md" >}}) + - [Interconnection with libvirt]({{< relref "./interconnection-with-libvirt.md" >}}) + - [StratoVirt VFIO User Guide]({{< relref "./stratovirt-vfio-user-guide.md" >}}) diff --git a/docs/en/docs/StratoVirt/Prepare_env.md b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/environment-preparation.md similarity index 62% rename from docs/en/docs/StratoVirt/Prepare_env.md rename to docs/en/Virtualization/VirtualizationPlatform/StratoVirt/environment-preparation.md index 5b827dd29f00b1cef4fdb94d957bc1a975fca21f..857bf943014968ccabc33aa31ff15000a56bb249 100644 --- a/docs/en/docs/StratoVirt/Prepare_env.md +++ 
b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/environment-preparation.md @@ -1,5 +1,4 @@ -# Preparing the Environment - +# Environment Preparation ## Usage @@ -15,31 +14,28 @@ The following are required in the environment for running StratoVirt: - nmap tool - Kernel and rootfs images - - ## Preparing Devices and Tools - To run StratoVirt, the MMIO device must be implemented. Therefore, before running StratoVirt, ensure that the **/dev/vhost-vsock** device exists. - Check whether the device exists. - - ``` - $ ls /dev/vhost-vsock - /dev/vhost-vsock - ``` + Check whether the device exists. - If the device does not exist, run the following command to generate it: + ```shell + $ ls /dev/vhost-vsock + /dev/vhost-vsock + ``` - ``` - $ modprobe vhost_vsock - ``` + If the device does not exist, run the following command to generate it: + ```shell + modprobe vhost_vsock + ``` - To use QMP commands, install the nmap tool first. After configuring the Yum source, run the following command to install the tool: - ``` - # yum install nmap - ``` + ```shell + # yum install nmap + ``` ## Preparing Images @@ -49,35 +45,34 @@ StratoVirt of the current version supports only the PE kernel image of the x86_6 1. Run the following commands to obtain the kernel source code of openEuler: - ``` - $ git clone https://gitee.com/openeuler/kernel.git - $ cd kernel - ``` + ```shell + git clone https://gitee.com/openeuler/kernel.git + cd kernel + ``` 2. Run the following command to check and switch to the kernel version openEuler-22.03-LTS: - ``` - $ git checkout openEuler-22.03-LTS - ``` + ```shell + git checkout openEuler-22.03-LTS + ``` 3. Configure and compile the Linux kernel. You are advised to use the [recommended configuration file](https://gitee.com/openeuler/stratovirt/tree/master/docs/kernel_config).
Copy the file to the kernel directory, rename it to **.config**, and run the `make olddefconfig` command to update to the latest default configuration (otherwise, you may need to manually select options for subsequent compilation). Alternatively, you can run the following command to configure the kernel as prompted. The system may display a message indicating that specific dependencies are missing. Run the `yum install` command to install the dependencies as prompted. - ``` - $ make menuconfig - ``` + ```shell + make menuconfig + ``` 4. Run the following command to create and convert the kernel image to the PE format. The converted image is **vmlinux.bin**. - ``` - $ make -j vmlinux && objcopy -O binary vmlinux vmlinux.bin - ``` + ```shell + make -j vmlinux && objcopy -O binary vmlinux vmlinux.bin + ``` 5. If you want to use the kernel in bzImage format on the x86 platform, run the following command: - ``` - $ make -j bzImage - ``` - + ```shell + make -j bzImage + ``` ## Creating the Rootfs Image @@ -85,73 +80,70 @@ The rootfs image is a file system image. When StratoVirt is started, the ext4 im 1. Prepare a file with a proper size (for example, create a file with the size of 10 GB in **/home**). - ``` - $ cd /home - $ dd if=/dev/zero of=./rootfs.ext4 bs=1G count=10 - ``` + ```shell + cd /home + dd if=/dev/zero of=./rootfs.ext4 bs=1G count=10 + ``` 2. Create an empty ext4 file system on this file. - ``` - $ mkfs.ext4 ./rootfs.ext4 - ``` + ```shell + mkfs.ext4 ./rootfs.ext4 + ``` 3. Mount the file image. Create the **/mnt/rootfs** directory and mount **rootfs.ext4** to the directory as user **root**. - ``` - $ mkdir /mnt/rootfs - # Return to the directory where the file system is created, for example, **/home**. - $ cd /home - $ sudo mount ./rootfs.ext4 /mnt/rootfs && cd /mnt/rootfs - ``` + ```shell + $ mkdir /mnt/rootfs + # Return to the directory where the file system is created, for example, **/home**.
+ $ cd /home + $ sudo mount ./rootfs.ext4 /mnt/rootfs && cd /mnt/rootfs + ``` 4. Obtain the latest alpine-mini rootfs of the corresponding processor architecture. - - If the AArch64 processor architecture is used, you can get the latest rootfs from the [alpine](http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/). For example, alpine-minirootfs-3.16.0-aarch64.tar.gz, the reference commands are as follows: - - ``` - $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/aarch64/alpine-minirootfs-3.16.0-aarch64.tar.gz - $ tar -zxvf alpine-minirootfs-3.16.0-aarch64.tar.gz - $ rm alpine-minirootfs-3.16.0-aarch64.tar.gz - ``` + - If the AArch64 processor architecture is used, you can get the latest rootfs from the [alpine](http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/). For example, alpine-minirootfs-3.16.0-aarch64.tar.gz, the reference commands are as follows: + ```shell + wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/aarch64/alpine-minirootfs-3.16.0-aarch64.tar.gz + tar -zxvf alpine-minirootfs-3.16.0-aarch64.tar.gz + rm alpine-minirootfs-3.16.0-aarch64.tar.gz + ``` - - If the x86_64 processor architecture is used, you can get the latest rootfs from the [alpine](http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/). For example, alpine-minirootfs-3.16.0-x86_64.tar.gz, the reference commands are as follows: + - If the x86_64 processor architecture is used, you can get the latest rootfs from the [alpine](http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/). 
For example, alpine-minirootfs-3.16.0-x86_64.tar.gz, the reference commands are as follows: - ``` - $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/alpine-minirootfs-3.16.0-x86_64.tar.gz - $ tar -zxvf alpine-minirootfs-3.16.0-x86_64.tar.gz - $ rm alpine-minirootfs-3.16.0-x86_64.tar.gz + ```shell + wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/alpine-minirootfs-3.16.0-x86_64.tar.gz + tar -zxvf alpine-minirootfs-3.16.0-x86_64.tar.gz + rm alpine-minirootfs-3.16.0-x86_64.tar.gz ``` - 5. Run the following commands to create a simple **/sbin/init** for the ext4 file image: - ``` - $ rm sbin/init; touch sbin/init && cat > sbin/init < sbin/init < ![](./public_sys-resources/icon-note.gif)**NOTE** > -> Before using libvirt to manage StratoVirt VMs, pay attention to the features supported by StratoVirt, including mutually exclusive relationships between features, and feature prerequisites and specifications. For details, see [Configuring VMs](VM_configuration.md) in CLI mode. +> Before using libvirt to manage StratoVirt VMs, pay attention to the features supported by StratoVirt, including mutually exclusive relationships between features, and feature prerequisites and specifications. For details, see [Configuring VMs](./vm-configuration.md) in CLI mode. ### VM Description @@ -120,9 +120,9 @@ This section describes how to use the XML file to configure VM devices, includin Attribute **bus**: ID of the bus to which the device is to be mounted. - Attribute **slot**: ID of the slot to which the device is to be mounted. The value range is [0, 31]. + Attribute **slot**: ID of the slot to which the device is to be mounted. The value range is \[0, 31]. - Attribute **function**: ID of the function to which the device is to be mounted. The value range is [0, 7]. + Attribute **function**: ID of the function to which the device is to be mounted. The value range is \[0, 7].
#### Configuration Example @@ -134,14 +134,14 @@ Set the disk path to **/home/openEuler-21.09-stratovirt.img**, iothread quantity 1 - - - + + + - 10000 + 10000 -
- +
+ ... @@ -177,7 +177,7 @@ Set the disk path to **/home/openEuler-21.09-stratovirt.img**, iothread quantity #### Configuration Example -Before configuring the network, [configure the Linux bridge](https://gitee.com/yanhuiling/docs/blob/stable2-20.03_LTS_SP2/docs/en/docs/Virtualization/environment-preparation.md#preparing-the-vm-network) first. Set the MAC address to **de:ad:be:ef:00:01** and network bridge to **br0**. Use the virtio-net device, and mount it to the PCI bus whose bus ID is 2, slot ID is 0, and function ID is 0. The following is the example: +Before configuring the network, [configure the Linux bridge](../Virtualization/environment-preparation.md#preparing-the-vm-network) first. Set the MAC address to **de:ad:be:ef:00:01** and network bridge to **br0**. Use the virtio-net device, and mount it to the PCI bus whose bus ID is 2, slot ID is 0, and function ID is 0. The following is the example: ```xml @@ -380,7 +380,7 @@ Set the CPU architecture of the VM to ARM and the mainboard to **virt**. The sta ### Huge Page Memory -##### Elements +#### Elements - **memoryBacking**: configures the memory information. @@ -672,7 +672,7 @@ If you have created a VM configuration file named **StratoVirt** in st.xml forma After the VM is created, you can run the `virsh console` or `virsh -c stratovirt:///system console` command to log in to it. 
If the VM name is **StratoVirt**, run the following command: -``` +```shell virsh console StratoVirt /// virsh -c stratovirt:///system console StratoVirt diff --git a/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/public_sys-resources/icon-note.gif b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/public_sys-resources/icon-note.gif new file mode 100644 index 0000000000000000000000000000000000000000..6314297e45c1de184204098efd4814d6dc8b1cda Binary files /dev/null and b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/public_sys-resources/icon-note.gif differ diff --git a/docs/en/docs/StratoVirt/Install_StratoVirt.md b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-installation.md similarity index 93% rename from docs/en/docs/StratoVirt/Install_StratoVirt.md rename to docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-installation.md index 4751e1131703fe67f0b865e685a5e202193e85d8..a03c22afa255d98efa61becdc2f1d29a794ca544 100644 --- a/docs/en/docs/StratoVirt/Install_StratoVirt.md +++ b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-installation.md @@ -8,30 +8,25 @@ - 2-core CPU - 4 GiB memory -- 16 GiB available disk space +- 16 GiB available drive space ### Software Requirements Operating system: openEuler 21.03 - - ## Component Installation To use StratoVirt virtualization, it is necessary to install StratoVirt. Before the installation, ensure that the openEuler Yum source has been configured. 1. Run the following command as user **root** to install the StratoVirt component: - ``` + ```shell # yum install stratovirt ``` - 2. Check whether the installation is successful. 
- ``` + ```shell $ stratovirt -version StratoVirt 2.1.0 ``` - - diff --git a/docs/en/docs/StratoVirt/StratoVirt_introduction.md b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-introduction.md similarity index 88% rename from docs/en/docs/StratoVirt/StratoVirt_introduction.md rename to docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-introduction.md index f7a5afcb498593eb133d09ef34b0813b606c0cdb..2275e1a687067b53939687cd2efcb073fdf425c1 100644 --- a/docs/en/docs/StratoVirt/StratoVirt_introduction.md +++ b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-introduction.md @@ -1,13 +1,10 @@ # Introduction to StratoVirt - ## Overview StratoVirt is an enterprise-class Virtual Machine Monitor (VMM) oriented to cloud data centers in the computing industry. It uses a unified architecture to support VM, container, and serverless scenarios. StratoVirt has competitive advantages in key technologies such as lightweight low noise, software and hardware synergy, and Rust language-level security. StratoVirt reserves component-based assembling capabilities and APIs in the architecture design. Advanced features can be flexibly assembled as required until they evolve to support standard virtualization. In this way, StratoVirt can strike a balance between feature requirements, application scenarios, and flexibility. - - ## Architecture Description The StratoVirt core architecture consists of three layers from top to bottom: @@ -15,8 +12,8 @@ The StratoVirt core architecture consists of three layers from top to bottom: - External API: compatible with the QEMU Monitor Protocol (QMP), has complete OCI compatibility capabilities, and supports interconnection with libvirt. - BootLoader: discards the traditional BIOS+GRUB boot mode to achieve fast boot in lightweight scenarios, and provides UEFI boot support for standard VMs. 
- Emulated mainboard: - - MicroVM: Fully utilizes software and hardware collaboration capabilities, simplifies device models, and provides low-latency resource scaling capabilities. - - Standard VM: implements UEFI boot with constructed ACPI tables. Virtio-pci and VFIO devices can be attached to greatly improve the VM I/O performance. + - MicroVM: Fully utilizes software and hardware collaboration capabilities, simplifies device models, and provides low-latency resource scaling capabilities. + - Standard VM: implements UEFI boot with constructed ACPI tables. Virtio-pci and VFIO devices can be attached to greatly improve the VM I/O performance. Figure 1 shows the overall architecture. @@ -37,7 +34,7 @@ Figure 1 shows the overall architecture. ## Implementation -#### Running Architecture +### Running Architecture - A StratoVirt VM is an independent process in Linux. The process has three types of threads: main thread, vCPU thread, and I/O thread: - The main thread is a cycle for asynchronously collecting and processing events from external modules, such as a vCPU thread. diff --git a/docs/en/docs/StratoVirt/StratoVirt_guidence.md b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-user-guide.md similarity index 99% rename from docs/en/docs/StratoVirt/StratoVirt_guidence.md rename to docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-user-guide.md index 461f0bf0490f0a18176972f10c4ea8f7edee1491..2b25153302e58a910bee07b858f810d87d72d34d 100644 --- a/docs/en/docs/StratoVirt/StratoVirt_guidence.md +++ b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-user-guide.md @@ -1,4 +1,3 @@ # StratoVirt Virtualization User Guide This document describes Stratovirt virtualization, providing instructions on how to install Stratovirt based on openEuler and how to use Stratovirt virtualization. The purpose is to help users learn about Stratovirt and guide users and administrators to install and use StratoVirt. 
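The `stratovirt -version` check shown above prints a banner like `StratoVirt 2.1.0`; in setup scripts it is often useful to pull out just the version number from that banner. A minimal sketch (`parse_version` is an illustrative helper, not part of StratoVirt; the literal banner string stands in for real command output so the snippet stays self-contained):

```shell
#!/bin/sh
# Extract the numeric version from a "StratoVirt X.Y.Z" banner,
# as printed by the installation check above.
parse_version() {
    # The version is the second whitespace-separated field of the banner.
    echo "$1" | awk '{print $2}'
}

parse_version "StratoVirt 2.1.0"   # prints 2.1.0
```

In a real script the banner would come from the command itself, for example `ver=$(parse_version "$(stratovirt -version)")`, after which `$ver` can be compared against a required minimum.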
- diff --git a/docs/en/docs/StratoVirt/StratoVirt_VFIO_instructions.md b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-vfio-user-guide.md similarity index 89% rename from docs/en/docs/StratoVirt/StratoVirt_VFIO_instructions.md rename to docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-vfio-user-guide.md index a47145edf7b9f103d4c95e6f88148673ad8841b9..8939febdcad78d8d4aa43b9737180748a91173a6 100644 --- a/docs/en/docs/StratoVirt/StratoVirt_VFIO_instructions.md +++ b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/stratovirt-vfio-user-guide.md @@ -1,9 +1,10 @@ -# StratoVirt VFIO Instructions -### Device Passthrough Management +# StratoVirt VFIO User Guide + +## Device Passthrough Management With device passthrough, a virtualization platform can enable VMs to directly use hardware devices, improving VM performance. This chapter describes the device passthrough feature supported by StratoVirt. -### Prerequisites +## Prerequisites To use device passthrough, a host must meet the following requirements: @@ -36,13 +37,15 @@ To use device passthrough, a host must meet the following requirements: iommu: Default domain type: Translated ``` - Enable IOMMU: + Enable IOMMU: 1. Add boot parameters for the Linux kernel: `intel_iommu=on iommu=pt`. + ```shell vim /boot/grub2/grub.cfg linux /vmlinuz-5.15.0+ root=/dev/mapper/openeuler-root ro resume=/dev/mapper/openeuler-swap rd.lvm.lv=openeuler/root rd.lvm.lv=openeuler/swap crashkernel=512M intel_iommu=on iommu=pt ``` + 2. Reboot the host OS. 2. Load the vfio-pci kernel module. @@ -81,18 +84,18 @@ To use device passthrough, a host must meet the following requirements: Finally bind the PCI device to the vfio-pci driver.
```shell - lspci -ns 0000:03:00.0 |awk -F':| ' '{print 5" "6}' > /sys/bus/pci/drivers/vfio-pci/new_id + lspci -ns 0000:03:00.0 |awk -F':| ' '{print $5" "$6}' > /sys/bus/pci/drivers/vfio-pci/new_id ``` After the NIC is bound to the vfio-pci driver, the NIC information cannot be queried on the host. Only the PCI device information can be queried. -### VFIO Device Passthrough +## VFIO Device Passthrough -#### Introduction +### Introduction The VFIO is a user-mode device driver solution provided by the kernel. The VFIO driver can securely present capabilities such as device I/O, interrupt, and DMA to user space. After StratoVirt uses the VFIO device passthrough solution, the I/O performance of VMs is greatly improved. -#### Using VFIO Passthrough +### Using VFIO Passthrough StratoVirt interconnects with libvirt to enable you to manage and configure VMs by modifying corresponding XML files. The following describes how to enable VFIO passthrough by modifying the XML file of a VM. @@ -129,9 +132,9 @@ In the preceding example, the device type is PCI, and **managed='yes'** indicate ```shell # virsh create stratovirt_$arch.xml # virsh list --all -Id Name State +Id Name State -------------------- -1 StratoVirt running +1 StratoVirt running # virsh console 1 ``` @@ -142,40 +145,38 @@ Id Name State ```shell # ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 - link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 - inet 127.0.0.1/8 scope host lo - valid_lft forever preferred_lft forever + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever 2: enp1s0: mtu 1500 qdisc noop state DOWN group default qlen 1000 - link/ether 72:b8:51:9d:d1:27 brd ff:ff:ff:ff:ff:ff + link/ether 72:b8:51:9d:d1:27 brd ff:ff:ff:ff:ff:ff ``` - (2) Dynamically configure the IP address of the NIC. ```shell # dhclient ``` - (3) Check whether the IP address is configured successfully. 
```shell # ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 - link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 - inet 127.0.0.1/8 scope host lo - valid_lft forever preferred_lft forever + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever 2: enp1s0: mtu 1500 qdisc mq state UP group default qlen 1000 - link/ether 72:b8:51:9d:d1:27 brd ff:ff:ff:ff:ff:ff - inet 192.168.1.3/16 brd 192.168.255.255 scope global dynamic enp1s0 - valid_lft 86453sec preferred_lft 86453sec + link/ether 72:b8:51:9d:d1:27 brd ff:ff:ff:ff:ff:ff + inet 192.168.1.3/16 brd 192.168.255.255 scope global dynamic enp1s0 + valid_lft 86453sec preferred_lft 86453sec ``` The preceding command output indicates that the IP address 192.168.1.3 is successfully assigned and the VM can directly use the configured NIC. Note: If the passthrough NIC is not connected to a physical network, network information cannot be obtained. -#### Unbinding the VFIO Driver +### Unbinding the VFIO Driver To unbind a passthrough NIC from a VM, log in to the host and run the following command to bind the NIC to the host again.**hinic** indicates the NIC driver type. @@ -186,13 +187,13 @@ To unbind a passthrough NIC from a VM, log in to the host and run the following Note: Before binding a VFIO driver, you can run the **ethtool -i enp0** command on the host to obtain the NIC driver type.**enp0** indicates the name of the corresponding NIC. -### SR-IOV Passthrough +## SR-IOV Passthrough -#### Introduction +### Introduction When VFIO passthrough is enabled, VMs can directly access hardware, but each device can be exclusively used by only one VM. The SR-IOV passthrough technology can virtualize a physical function (PF) into multiple virtual functions (VFs) and directly pass the VFs to different VMs. This technology increases the number of available devices. -#### Procedure +### Procedure **Step 1** Create multiple VFs. 
@@ -219,4 +220,3 @@ If the following information is displayed, four VFs 03:00.1, 03:00.2, 03:00.3, a ``` **Step 3** All the created VFs can be passed to VMs. The method for using an SR-IOV device is the same as that for using a common PCI device. - diff --git a/docs/en/docs/StratoVirt/VM_configuration.md b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/vm-configuration.md similarity index 87% rename from docs/en/docs/StratoVirt/VM_configuration.md rename to docs/en/Virtualization/VirtualizationPlatform/StratoVirt/vm-configuration.md index 72170ecfb81206f635507287db1746cbcdf6e040..6e5562a0ed7e69466fcb99400920eac1b14d13d0 100644 --- a/docs/en/docs/StratoVirt/VM_configuration.md +++ b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/vm-configuration.md @@ -1,4 +1,4 @@ -# Configuring VMs +# VM Configuration ## Overview @@ -19,21 +19,21 @@ StratoVirt supports lightweight and standard VMs. ### Lightweight VMs -- Number of VM CPUs: [1, 254] -- VM memory size: [128 MiB, 512 GiB]. The default memory size is 256 MiB. -- Number of VM disks (including hot plugged-in disks): [0, 6] -- Number of VM NICs (including hot plugged-in NICs): [0, 2] +- Number of VM CPUs: \[1, 254] +- VM memory size: \[128 MiB, 512 GiB]. The default memory size is 256 MiB. +- Number of VM disks (including hot plugged-in disks): \[0, 6] +- Number of VM NICs (including hot plugged-in NICs): \[0, 2] - The VM console device supports only single way connection. - If the host CPU architecture is x86_64, a maximum of 11 MMIO devices can be configured. However, you are advised to configure a maximum of two other devices except disks and NICs. On the AArch64 platform, a maximum of 160 MMIO devices can be configured. You are advised to configure a maximum of 12 other devices except disks and NICs. ### Standard VMs -- Number of VM CPUs: [1, 254] -- VM memory size: [128 MiB, 512 GiB]. The default memory size is 256 MiB. +- Number of VM CPUs: \[1, 254] +- VM memory size: \[128 MiB, 512 GiB]. 
The default memory size is 256 MiB. - The VM console device supports only single way connection. - Only one console device is supported. - A maximum of 32 PCI devices are supported. -- PCI bus to which the PCI device is mounted: slot ID [0, 32); function ID [0, 8). +- PCI bus to which the PCI device is mounted: slot ID \[0, 32); function ID \[0, 8). ## Minimal Configuration @@ -50,19 +50,19 @@ The minimum configuration for running StratoVirt is as follows: The format of the command configured by running cmdline is as follows: -**$ /path/to/stratovirt** *- [Parameter 1] [Option]-[Parameter 2] [Option]...* +**$ /path/to/stratovirt** *- \[Parameter 1] \[Option]-\[Parameter 2] \[Option]...* ### **Usage Instructions** 1. To ensure that the socket required by QMP can be created, run the following command to clear the environment: - ``` + ```shell $rm [parameter] *[user-defined socket file path]* ``` 2. Run the cmdline command. - ``` + ```shell $/path/to/stratovirt - *[Parameter 1] [Parameter option] - [Parameter 2] [Parameter option]*... ``` @@ -76,8 +76,8 @@ The following table lists the basic configuration information. | -kernel | /path/to/vmlinux.bin| Configures the kernel image.| | -append | console=ttyS0 root=/dev/vda reboot=k panic=1 rw | Configures the kernel command line parameter. For lightweight VMs, **console** is fixed at **ttyS0** (irrelevant to the platform architecture). For the standard x86_64 virtualization platform, **console** is default to **ttyS0**. For the AArch64 platform, **console** is default to **ttyAMA0**. If the virtio-console device is configured but the serial port device is not configured, set **console** to **hvc0** (irrelevant to the architecture).| | -initrd | /path/to/initrd.img | Configures the initrd file.| -| -smp | [cpus=]n[,maxcpus=,sockets=,dies=,clusters=,cores=,threads=]| **cpus** specifies the number of CPUs with the value range of [1,254]. **maxcpus** specifies the maximum number of CPUs with the value range of [1,254]. 
**sockets**, **dies**, **clusters**, **cores**, and **threads** specifies the number of sockets, dies, clusters, cores, and threads respectively. The values of **sockets**, **cores**, and **threads**, if not specified, depend on the value of **maxcpus**. The values satisfy the following relationship: **maxcpus**=**sockets** x **dies** x **clusters** x **cores** x **threads**.| -| -m | Memory size (MiB/GiB). The default unit is MiB.| Configures the memory size. The value range is [128 MiB, 512 GiB]. The default memory size is 256 MiB.| +| -smp | \[cpus=]n\[,maxcpus=,sockets=,dies=,clusters=,cores=,threads=]| **cpus** specifies the number of CPUs with the value range of \[1,254]. **maxcpus** specifies the maximum number of CPUs with the value range of \[1,254]. **sockets**, **dies**, **clusters**, **cores**, and **threads** specifies the number of sockets, dies, clusters, cores, and threads respectively. The values of **sockets**, **cores**, and **threads**, if not specified, depend on the value of **maxcpus**. The values satisfy the following relationship: **maxcpus**=**sockets** x **dies** x **clusters** x **cores** x **threads**.| +| -m | Memory size (MiB/GiB). The default unit is MiB.| Configures the memory size. The value range is \[128 MiB, 512 GiB]. The default memory size is 256 MiB.| | -qmp | unix:/path/to/socket,server,nowait | Configures QMP. Before running QMP, ensure that the socket file does not exist.| | -D | /path/to/logfile | Configures the log file.| | -pidfile | /path/to/pidfile | Configures the pid file. This parameter must be used together with **-daemonize**. 
Ensure that the pid file does not exist before running the script.| @@ -117,14 +117,14 @@ Disk configuration consists of two steps: driver configuration and block device The lightweight VM configuration format is as follows: -``` +```conf -drive id=drive_id,file=path_on_host[,readonly=off][,direct=off][,throttling.iops-total=200][,if=none] -device virtio-blk-device,drive=drive_id[,iothread=iothread1][,serial=serial_num] ``` The standard VM configuration format is as follows: -``` +```conf -drive id=drive_id,file=path_on_host[,readonly=off][,direct=off][,throttling.iops-total=200][,if=none] -device virtio-blk-pci,drive=drive_id,bus=pcie.0,addr=0x3.0x0[,iothread=iothread1,][serial=serial_num][,multifunction=on][,bootindex=1] ``` @@ -140,7 +140,7 @@ QoS is short for quality of service. In cloud scenarios, multiple VMs are starte ##### Precautions - Currently, QoS supports the configuration of disk IOPS. -- The value range of IOPS is [0, 1000000]. The value **0** indicates that the IOPS is not limited. The actual IOPS does not exceed the preset value or the upper limit of the actual backend disk performance. +- The value range of IOPS is \[0, 1000000]. The value **0** indicates that the IOPS is not limited. The actual IOPS does not exceed the preset value or the upper limit of the actual backend disk performance. - Only the average IOPS can be limited. Instantaneous burst traffic cannot be limited. 
##### Configuration Methods

@@ -149,7 +149,7 @@

Usage:

**CLI**

-```
+```conf
 -drive xxx,throttling.iops-total=200
 ```

@@ -178,45 +178,45 @@ VM NIC configuration includes the following configuration items:

 >
 > Before using the network, run the following commands to configure the host bridge and tap device:
 >
-> ```
-> $ brctl addbr qbr0
-> $ ip tuntap add tap0 mode tap
-> $ brctl addif qbr0 tap0
-> $ ifconfig qbr0 up; ifconfig tap0 up
-> $ ifconfig qbr0 192.168.0.1
+> ```shell
+> brctl addbr qbr0
+> ip tuntap add tap0 mode tap
+> brctl addif qbr0 tap0
+> ifconfig qbr0 up; ifconfig tap0 up
+> ifconfig qbr0 192.168.0.1
 > ```

-1. Configure virtio-net. ([] indicates an optional parameter.)
+1. Configure virtio-net. (\[] indicates an optional parameter.)

-Lightweight VMs:
+   Lightweight VMs:

-```
--netdev tap,id=netdevid,ifname=host_dev_name[,vhostfd=2]
--device virtio-net-device,netdev=netdevid,id=netid[,iothread=iothread1,mac=12:34:56:78:9A:BC]
-```
+   ```conf
+   -netdev tap,id=netdevid,ifname=host_dev_name[,vhostfd=2]
+   -device virtio-net-device,netdev=netdevid,id=netid[,iothread=iothread1,mac=12:34:56:78:9A:BC]
+   ```

-Standard VMs:
+   Standard VMs:

-```
--netdev tap,id=netdevid,ifname=host_dev_name[,vhostfd=2]
--device virtio-net-pci,netdev=netdevid,id=netid,bus=pcie.0,addr=0x2.0x0[,multifunction=on,iothread=iothread1,mac=12:34:56:78:9A:BC]
-```
+   ```conf
+   -netdev tap,id=netdevid,ifname=host_dev_name[,vhostfd=2]
+   -device virtio-net-pci,netdev=netdevid,id=netid,bus=pcie.0,addr=0x2.0x0[,multifunction=on,iothread=iothread1,mac=12:34:56:78:9A:BC]
+   ```

2. Configure vhost-net. 
-Lightweight VMs: + Lightweight VMs: -``` --netdev tap,id=netdevid,ifname=host_dev_name,vhost=on[,vhostfd=2] --device virtio-net-device,netdev=netdevid,id=netid[,iothread=iothread1,mac=12:34:56:78:9A:BC] -``` + ```conf + -netdev tap,id=netdevid,ifname=host_dev_name,vhost=on[,vhostfd=2] + -device virtio-net-device,netdev=netdevid,id=netid[,iothread=iothread1,mac=12:34:56:78:9A:BC] + ``` -Standard VMs: + Standard VMs: -``` --netdev tap,id=netdevid,ifname=host_dev_name,vhost=on[,vhostfd=2] --device virtio-net-pci,netdev=netdevid,id=netid,bus=pcie.0,addr=0x2.0x0[,multifunction=on,iothread=iothread1,mac=12:34:56:78:9A:BC] -``` + ```conf + -netdev tap,id=netdevid,ifname=host_dev_name,vhost=on[,vhostfd=2] + -device virtio-net-pci,netdev=netdevid,id=netid,bus=pcie.0,addr=0x2.0x0[,multifunction=on,iothread=iothread1,mac=12:34:56:78:9A:BC] + ``` ### chardev Configuration @@ -232,7 +232,7 @@ When chardev is used, a console file is created and used. Therefore, ensure that #### Configuration Methods -``` +```conf -chardev backend,id=chardev_id[,path=path,server,nowait] ``` @@ -245,13 +245,13 @@ A serial port is a VM device used to transmit data between hosts and VMs. To use #### Configuration Methods -``` +```conf -serial chardev:chardev_id ``` Or: -``` +```conf -chardev backend[,path=path,server,nowait] ``` @@ -270,7 +270,7 @@ The console configuration consists of three steps: specify virtio-serial, create Lightweight VMs: -``` +```conf -device virtio-serial-device[,id=virtio-serial0] -chardev socket,path=socket_path,id=virtioconsole1,server,nowait -device virtconsole,chardev=virtioconsole1,id=console_id @@ -278,7 +278,7 @@ Lightweight VMs: Standard VMs: -``` +```conf -device virtio-serial-pci,bus=pcie.0,addr=0x1.0x0[,multifunction=on,id=virtio-serial0] -chardev socket,path=socket_path,id=virtioconsole1,server,nowait -device virtconsole,chardev=virtioconsole1,id=console_id @@ -295,13 +295,13 @@ The vsock is also a device for communication between hosts and VMs. 
It is simila Lightweight VMs: -``` +```conf -device vhost-vsock-device,id=vsock_id,guest-cid=3 ``` Standard VMs: -``` +```conf -device vhost-vsock-pci,id=vsock_id,guest-cid=3,bus=pcie.0,addr=0x1.0x0[,multifunction=on] ``` @@ -331,7 +331,7 @@ StratoVirt supports the configuration of huge pages for VMs. Compared with the t Mount the huge page file system to a specified directory. `/path/to/hugepages` is the user-defined empty directory. -``` +```shell mount -t hugetlbfs hugetlbfs /path/to/hugepages ``` @@ -339,13 +339,13 @@ mount -t hugetlbfs hugetlbfs /path/to/hugepages - Set the number of static huge pages. `num` indicates the specified number. - ``` + ```shell sysctl vm.nr_hugepages=num ``` -* Query huge page statistics. +- Query huge page statistics. - ``` + ```shell cat /proc/meminfo | grep Hugepages ``` @@ -359,7 +359,7 @@ mount -t hugetlbfs hugetlbfs /path/to/hugepages - CLI - ``` + ```shell -mem-path /page/to/hugepages ``` @@ -403,13 +403,13 @@ Lightweight VMs: Disks -``` +```conf -device virtio-blk-device xxx,iothread=iothread1 ``` NICs -``` +```conf -device virtio-net-device xxx,iothread=iothread2 ``` @@ -417,19 +417,17 @@ Standard VMs: Disks -``` +```conf -device virtio-blk-pci xxx,iothread=iothread1 ``` NICs -``` +```conf -device virtio-net-pci xxx,iothread=iothread2 ``` -``` Parameters: -``` 1. **iothread**: Set this parameter to the iothread ID, indicating the thread that processes the I/O of the device. 2. *xxx*: other configurations of the disk or NIC. 
@@ -459,13 +457,13 @@ During running of a VM, the balloon driver in it occupies or releases memory to Lightweight VMs: -``` +```conf -device virtio-balloon-device[,deflate-on-oom=true|false][,free-page-reporting=true|false] ``` Standard VMs: -``` +```conf -device virtio-balloon-pci,bus=pcie.0,addr=0x4.0x0[,deflate-on-oom=true|false][,free-page-reporting=true|false][,multifunction=on|off] ``` @@ -488,14 +486,14 @@ Virtio RNG is a paravirtualized random number generator that generates hardware Virtio RNG can be configured as the Virtio MMIO device or Virtio PCI device. To configure the Virtio RNG device as a Virtio MMIO device, run the following command: -``` +```conf -object rng-random,id=objrng0,filename=/path/to/random_file -device virtio-rng-device,rng=objrng0,max-bytes=1234,period=1000 ``` To configure the Virtio RNG device as a Virtio PCI device, run the following command: -``` +```conf -object rng-random,id=objrng0,filename=/path/to/random_file -device virtio-rng-pci,rng=objrng0,max-bytes=1234,period=1000,bus=pcie.0,addr=0x1.0x0,id=rng-id[,multifunction=on] ``` @@ -506,12 +504,12 @@ Parameters: - **period**: period for limiting the read rate of random number characters, in milliseconds. - **max-bytes**: maximum number of bytes of a random number generated by a character device within a period. - **bus**: name of the bus to which the Virtio RNG device is mounted. -- **addr**: address of the Virtio RNG device. The parameter format is **addr=***[slot].[function]*, where *slot* and *function* indicate the slot number and function number of the device respectively. The slot number and function number are hexadecimal numbers. The function number of the Virtio RNG device is **0x0**. +- **addr**: address of the Virtio RNG device. The parameter format is **addr=***slot.function*, where *slot* and *function* indicate the slot number and function number of the device respectively. The slot number and function number are hexadecimal numbers. 
The function number of the Virtio RNG device is **0x0**. #### Precautions - If **period** and **max-bytes** are not configured, the read rate of random number characters is not limited. -- Otherwise, the value range of **max-bytes/period\*1000** is [64, 1000000000]. It is recommended that the value be not too small to prevent the rate of obtaining random number characters from being too slow. +- Otherwise, the value range of **max-bytes/period\*1000** is \[64, 1000000000]. It is recommended that the value be not too small to prevent the rate of obtaining random number characters from being too slow. - Only the average number of random number characters can be limited, and the burst traffic cannot be limited. - If the guest needs to use the Virtio RNG device, the guest kernel requires the following configurations: **CONFIG_HW_RANDOM=y**, **CONFIG_HW_RANDOM_VIA=y**, and **CONFIG_HW_RANDOM_VIRTIO=y**. - When configuring the Virtio RNG device, check whether the entropy pool is sufficient to avoid VM freezing. For example, if the character device path is **/dev/random**, you can check **/proc/sys/kernel/random/entropy_avail** to view the current entropy pool size. When the entropy pool is full, the entropy pool size is **4096**. Generally, the value is greater than 1000. @@ -573,7 +571,7 @@ StratoVirt supports USB keyboards and mice. You can remotely connect to the VM t Add the following option to the StratoVirt startup command to configure the USB controller: -``` +```conf -device nec-usb-xhci,id=xhci,bus=pcie.0,addr=0xa.0x0 ``` @@ -587,7 +585,7 @@ The configured `bus` and `addr` values cannot conflict with other PCI devices. 
O Add the following option to the StratoVirt startup command to configure the USB keyboard: -``` +```conf -device usb-bkd,id=kbd ``` @@ -597,7 +595,7 @@ Parameters: Add the following option to the StratoVirt startup command to configure the USB mouse: -``` +```conf -device usb-tablet,id=tablet ``` @@ -626,7 +624,7 @@ virtio-gpu devices can be configured for standard VMs for graphics display. Standard VMs: -``` +```conf -device virtio-gpu-pci,id=XX,bus=pcie.0,addr=0x2.0x0[,max_outputs=XX][,edid=true|false][,xres=XX][,yres=XX][,max_hostmem=XX] ``` @@ -645,13 +643,13 @@ This section provides an example of the minimum configuration for creating a lig 1. Log in to the host and delete the socket file to ensure that the QMP can be created. - ``` + ```shell rm -f /tmp/stratovirt.socket ``` 2. Run StratoVirt. - ``` + ```shell $ /path/to/stratovirt \ -kernel /path/to/vmlinux.bin \ -append console=ttyS0 root=/dev/vda rw reboot=k panic=1 \ @@ -669,13 +667,13 @@ This section provides an example of the minimum configuration for creating a sta 1. Delete the socket file to ensure that QMP can be created. - ``` + ```shell rm -f /tmp/stratovirt.socket ``` 2. Run StratoVirt. - ``` + ```shell $ /path/to/stratovirt \ -kernel /path/to/vmlinux.bin \ -append console=ttyAMA0 root=/dev/vda rw reboot=k panic=1 \ diff --git a/docs/en/docs/StratoVirt/VM_management.md b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/vm-management.md similarity index 91% rename from docs/en/docs/StratoVirt/VM_management.md rename to docs/en/Virtualization/VirtualizationPlatform/StratoVirt/vm-management.md index 55bcdb7893202a949605b9fc38399b7cc59a6a94..453a42050f3691d347a94ee98c53d6f4dd0049e1 100644 --- a/docs/en/docs/StratoVirt/VM_management.md +++ b/docs/en/Virtualization/VirtualizationPlatform/StratoVirt/vm-management.md @@ -1,5 +1,4 @@ -# Managing VMs - +# VM Management ## Overview @@ -21,13 +20,11 @@ Run the **query-status** command to query the running status of a VM. 
- Example: -``` +```shell <- { "execute": "query-status" } -> { "return": { "running": true,"singlestep": false,"status": "running" } ``` - - ### Querying Topology Information Run the **query-cpus** command to query the topologies of all CPUs. @@ -38,7 +35,7 @@ Run the **query-cpus** command to query the topologies of all CPUs. - Example: -``` +```shell <- { "execute": "query-cpus" } -> {"return":[{"CPU":0,"arch":"x86","current":true,"halted":false,"props":{"core-id":0,"socket-id":0,"thread-id":0},"qom_path":"/machine/unattached/device[0]","thread_id":8439},{"CPU":1,"arch":"x86","current":true,"halted":false,"props":{"core-id":0,"socket-id":1,"thread-id":0},"qom_path":"/machine/unattached/device[1]","thread_id":8440}]} ``` @@ -48,11 +45,12 @@ Run the **query-cpus** command to query the topologies of all CPUs. Run the **query-hotpluggable-cpus** command to query the online/offline statuses of all vCPUs. - Usage: + **{ "execute": "query-hotpluggable-cpus" }** - Example: -``` +```shell <- { "execute": "query-hotpluggable-cpus" } -> {"return":[{"props":{"core-id":0,"socket-id":0,"thread-id":0},"qom-path":"/machine/unattached/device[0]","type":"host-x86-cpu","vcpus-count":1},{"props":{"core-id":0,"socket-id":1,"thread-id":0},"qom-path":"/machine/unattached/device[1]","type":"host-x86-cpu","vcpus-count":1}]} ``` @@ -71,11 +69,10 @@ Use the command line parameters to specify the VM configuration, and create and - When using the command line parameters to specify the VM configuration, run the following command to create and start the VM: -``` +```shell $/path/to/stratovirt - *[Parameter 1] [Parameter option] - [Parameter 2] [Parameter option]*... ``` - > ![](./public_sys-resources/icon-note.gif) > > After the lightweight VM is started, there are two NICs: eth0 and eth1. The two NICs are reserved for hot plugging: eth0 first and then eth1. Currently, only two virtio-net NICs can be hot plugged. @@ -86,19 +83,19 @@ StratoVirt uses QMP to manage VMs. 
To stop, resume, or exit a VM, connect it the Open a new CLI (CLI B) on the host and run the following command to connect to the api-channel as the **root** user: -``` +```shell # ncat -U /path/to/socket ``` After the connection is set up, you will receive a greeting message from StratoVirt, as shown in the following: -``` +```shell {"QMP":{"version":{"qemu":{"micro":1,"minor":0,"major":4},"package":""},"capabilities":[]}} ``` You can now manage the VM by entering the QMP commands in CLI B. -> ![](./public_sys-resources/icon-note.gif) +> ![](./public_sys-resources/icon-note.gif) > > QMP provides **stop**, **cont**, **quit**, and **query-status** commands to manage and query VM statuses. > @@ -114,7 +111,7 @@ QMP provides the **stop** command to stop a VM, that is, to stop all vCPUs of th The **stop** command and the command output are as follows: -``` +```shell <- {"execute":"stop"} -> {"event":"STOP","data":{},"timestamp":{"seconds":1583908726,"microseconds":162739}} -> {"return":{}} @@ -130,7 +127,7 @@ QMP provides the **cont** command to resume a stopped VM, that is, to resume all The **cont** command and the command output are as follows: -``` +```shell <- {"execute":"cont"} -> {"event":"RESUME","data":{},"timestamp":{"seconds":1583908853,"microseconds":411394}} -> {"return":{}} @@ -144,7 +141,7 @@ QMP provides the **quit** command to exit a VM, that is, to exit the StratoVirt **Example:** -``` +```shell <- {"execute":"quit"} -> {"return":{}} -> {"event":"SHUTDOWN","data":{"guest":false,"reason":"host-qmp-quit"},"timestamp":{"ds":1590563776,"microseconds":519808}} @@ -158,11 +155,11 @@ StratoVirt allows you to adjust the number of disks when a VM is running. That i **Note** -* For a standard VM, the **CONFIG_HOTPLUG_PCI_PCIE=y** configuration must be enabled for the VM kernel. +- For a standard VM, the **CONFIG_HOTPLUG_PCI_PCIE=y** configuration must be enabled for the VM kernel. -* For a standard VM, devices can be hot added to the root port. 
The root port device must be configured before the VM is started. +- For a standard VM, devices can be hot added to the root port. The root port device must be configured before the VM is started. -* You are not advised to hot swap a device when the VM is being started, stopped, or under high internal pressure. Otherwise, the VM may become abnormal because the drivers on the VM cannot respond in a timely manner. +- You are not advised to hot swap a device when the VM is being started, stopped, or under high internal pressure. Otherwise, the VM may become abnormal because the drivers on the VM cannot respond in a timely manner. #### Hot Adding Disks @@ -170,21 +167,20 @@ StratoVirt allows you to adjust the number of disks when a VM is running. That i Lightweight VM: -``` +```shell {"execute": "blockdev-add", "arguments": {"node-name": "drive-0", "file": {"driver": "file", "filename": "/path/to/block"}, "cache": {"direct": true}, "read-only": false}} {"execute": "device_add", "arguments": {"id": "drive-0", "driver": "virtio-blk-mmio", "addr": "0x1"}} ``` Standard VM: -``` +```shell {"execute": "blockdev-add", "arguments": {"node-name": "drive-0", "file": {"driver": "file", "filename": "/path/to/block"}, "cache": {"direct": true}, "read-only": false}} {"execute":"device_add", "arguments":{"id":"drive-0", "driver":"virtio-blk-pci", "drive": "drive-0", "addr":"0x0", "bus": "pcie.1"}} ``` **Parameters:** - - For a lightweight VM, the value of **node-name** in **blockdev-add** must be the same as that of **id** in **device_add**. For example, the values of **node-name** and **id** are both **drive-0** as shown above. - For a standard VM, the value of **drive** must be the same as that of **node-name** in **blockdev-add**. 
@@ -201,7 +197,7 @@ Standard VM: Lightweight VM: -``` +```shell <- {"execute": "blockdev-add", "arguments": {"node-name": "drive-0", "file": {"driver": "file", "filename": "/path/to/block"}, "cache": {"direct": true}, "read-only": false}} -> {"return": {}} <- {"execute": "device_add", "arguments": {"id": "drive-0", "driver": "virtio-blk-mmio", "addr": "0x1"}} @@ -210,25 +206,26 @@ Lightweight VM: Standard VM: -``` +```shell <- {"execute": "blockdev-add", "arguments": {"node-name": "drive-0", "file": {"driver": "file", "filename": "/path/to/block"}, "cache": {"direct": true}, "read-only": false}} -> {"return": {}} <- {"execute":"device_add", "arguments":{"id":"drive-0", "driver":"virtio-blk-pci", "drive": "drive-0", "addr":"0x0", "bus": "pcie.1"}} -> {"return": {}} ``` + #### Hot Removing Disks **Usage:** Lightweight VM: -``` +```shell {"execute": "device_del", "arguments": {"id":"drive-0"}} ``` Standard VM: -``` +```shell {"execute": "device_del", "arguments": {"id":"drive-0"}} {"execute": "blockdev-del", "arguments": {"node-name": "drive-0"}} ``` @@ -236,13 +233,14 @@ Standard VM: **Parameters:** **id** indicates the ID of the disk to be hot removed. + - **node-name** indicates the backend name of the disk. **Example:** Lightweight VM: -``` +```shell <- {"execute": "device_del", "arguments": {"id": "drive-0"}} -> {"event":"DEVICE_DELETED","data":{"device":"drive-0","path":"drive-0"},"timestamp":{"seconds":1598513162,"microseconds":367129}} -> {"return": {}} @@ -250,7 +248,7 @@ Lightweight VM: Standard VM: -``` +```shell <- {"execute": "device_del", "arguments": {"id":"drive-0"}} -> {"return": {}} -> {"event":"DEVICE_DELETED","data":{"device":"drive-0","path":"drive-0"},"timestamp":{"seconds":1598513162,"microseconds":367129}} @@ -259,17 +257,18 @@ Standard VM: ``` A **DEVICE_DELETED** event indicates that the device is removed from StratoVirt. + ### Hot-Pluggable NICs StratoVirt allows you to adjust the number of NICs when a VM is running. 
That is, you can add or delete VM NICs without interrupting services. **Note** -* For a standard VM, the **CONFIG_HOTPLUG_PCI_PCIE=y** configuration must be enabled for the VM kernel. +- For a standard VM, the **CONFIG_HOTPLUG_PCI_PCIE=y** configuration must be enabled for the VM kernel. -* For a standard VM, devices can be hot added to the root port. The root port device must be configured before the VM is started. +- For a standard VM, devices can be hot added to the root port. The root port device must be configured before the VM is started. -* You are not advised to hot swap a device when the VM is being started, stopped, or under high internal pressure. Otherwise, the VM may become abnormal because the drivers on the VM cannot respond in a timely manner. +- You are not advised to hot swap a device when the VM is being started, stopped, or under high internal pressure. Otherwise, the VM may become abnormal because the drivers on the VM cannot respond in a timely manner. #### Hot Adding NICs @@ -277,36 +276,36 @@ StratoVirt allows you to adjust the number of NICs when a VM is running. That is 1. Create and enable a Linux bridge. For example, if the bridge name is **qbr0**, run the following command: -```shell -# brctl addbr qbr0 -# ifconfig qbr0 up -``` + ```shell + # brctl addbr qbr0 + # ifconfig qbr0 up + ``` 2. Create and enable a tap device. For example, if the tap device name is **tap0**, run the following command: -```shell -# ip tuntap add tap0 mode tap -# ifconfig tap0 up -``` + ```shell + # ip tuntap add tap0 mode tap + # ifconfig tap0 up + ``` 3. Add the tap device to the bridge. 
-```shell -# brctl addif qbr0 tap0 -``` + ```shell + # brctl addif qbr0 tap0 + ``` **Usage:** Lightweight VM: -``` +```shell {"execute":"netdev_add", "arguments":{"id":"net-0", "ifname":"tap0"}} {"execute":"device_add", "arguments":{"id":"net-0", "driver":"virtio-net-mmio", "addr":"0x0"}} ``` Standard VM: -``` +```shell {"execute":"netdev_add", "arguments":{"id":"net-0", "ifname":"tap0"}} {"execute":"device_add", "arguments":{"id":"net-0", "driver":"virtio-net-pci", "addr":"0x0", "netdev": "net-0", "bus": "pcie.1"}} ``` @@ -317,7 +316,7 @@ Standard VM: - For a standard VM, the value of **netdev** must be the value of **id** in **netdev_add**. -- For a lightweight VM, the value of **addr**, starting from **0x0**, is mapped to an NIC on the VM. **0x0** is mapped to **eth0 **, **0x1** is mapped to **eth1**. For a standard VM, the value of **addr** must be **0x0**. +- For a lightweight VM, the value of **addr**, starting from **0x0**, is mapped to an NIC on the VM. **0x0** is mapped to **eth0**, **0x1** is mapped to **eth1**. For a standard VM, the value of **addr** must be **0x0**. - For a standard VM, **bus** indicates the name of the bus to mount the device. Currently, the device can be hot added only to the root port device. The value of **bus** must be the ID of the root port device. 
@@ -327,7 +326,7 @@ Standard VM: Lightweight VM: -``` +```shell <- {"execute":"netdev_add", "arguments":{"id":"net-0", "ifname":"tap0"}} -> {"return": {}} <- {"execute":"device_add", "arguments":{"id":"net-0", "driver":"virtio-net-mmio", "addr":"0x0"}} @@ -338,7 +337,7 @@ Lightweight VM: Standard VM: -``` +```shell <- {"execute":"netdev_add", "arguments":{"id":"net-0", "ifname":"tap0"}} -> {"return": {}} <- {"execute":"device_add", "arguments":{"id":"net-0", "driver":"virtio-net-pci", "addr":"0x0", "netdev": "net-0", "bus": "pcie.1"}} @@ -351,18 +350,17 @@ Standard VM: Lightweight VM: -``` +```shell {"execute": "device_del", "arguments": {"id": "net-0"}} ``` Standard VM: -``` +```shell {"execute": "device_del", "arguments": {"id":"net-0"}} {"execute": "netdev_del", "arguments": {"id": "net-0"}} ``` - **Parameters:** **id**: NIC ID, for example, **net-0**. @@ -373,7 +371,7 @@ Standard VM: Lightweight VM: -``` +```shell <- {"execute": "device_del", "arguments": {"id": "net-0"}} -> {"event":"DEVICE_DELETED","data":{"device":"net-0","path":"net-0"},"timestamp":{"seconds":1598513339,"microseconds":97310}} -> {"return": {}} @@ -381,7 +379,7 @@ Lightweight VM: Standard VM: -``` +```shell <- {"execute": "device_del", "arguments": {"id":"net-0"}} -> {"return": {}} -> {"event":"DEVICE_DELETED","data":{"device":"net-0","path":"net-0"},"timestamp":{"seconds":1598513339,"microseconds":97310}} @@ -397,17 +395,17 @@ You can add or delete the passthrough devices of a StratoVirt standard VM when i **Note** -* The **CONFIG_HOTPLUG_PCI_PCIE=y** configuration must be enabled for the VM kernel. +- The **CONFIG_HOTPLUG_PCI_PCIE=y** configuration must be enabled for the VM kernel. -* Devices can be hot added to the root port. The root port device must be configured before the VM is started. +- Devices can be hot added to the root port. The root port device must be configured before the VM is started. 
-* You are not advised to hot swap a device when the VM is being started, stopped, or under high internal pressure. Otherwise, the VM may become abnormal because the drivers on the VM cannot respond in a timely manner. +- You are not advised to hot swap a device when the VM is being started, stopped, or under high internal pressure. Otherwise, the VM may become abnormal because the drivers on the VM cannot respond in a timely manner. #### Hot Adding Pass-through Devices **Usage:** -``` +```shell {"execute":"device_add", "arguments":{"id":"vfio-0", "driver":"vfio-pci", "bus": "pcie.1", "addr":"0x0", "host": "0000:1a:00.3"}} ``` @@ -423,7 +421,7 @@ You can add or delete the passthrough devices of a StratoVirt standard VM when i **Example:** -``` +```shell <- {"execute":"device_add", "arguments":{"id":"vfio-0", "driver":"vfio-pci", "bus": "pcie.1", "addr":"0x0", "host": "0000:1a:00.3"}} -> {"return": {}} ``` @@ -432,7 +430,7 @@ You can add or delete the passthrough devices of a StratoVirt standard VM when i **Usage:** -``` +```shell {"execute": "device_del", "arguments": {"id": "vfio-0"}} ``` @@ -442,7 +440,7 @@ You can add or delete the passthrough devices of a StratoVirt standard VM when i **Example:** -``` +```shell <- {"execute": "device_del", "arguments": {"id": "vfio-0"}} -> {"return": {}} -> {"event":"DEVICE_DELETED","data":{"device":"vfio-0","path":"vfio-0"},"timestamp":{"seconds":1614310541,"microseconds":554250}} @@ -456,8 +454,8 @@ The balloon device is used to reclaim idle memory from a VM. It called by runnin **Usage:** -``` -{"execute": "balloon", "arguments": {"value": 2147483648‬}} +```shell +{"execute": "balloon", "arguments": {"value": 2147483648}} ``` **Parameters:** @@ -468,14 +466,14 @@ The balloon device is used to reclaim idle memory from a VM. It called by runnin The memory size configured during VM startup is 4 GiB. 
If the idle memory of the VM queried by running the free command is greater than 2 GiB, you can run the QMP command to set the guest memory size to 2147483648 bytes. -``` -<- {"execute": "balloon", "arguments": {"value": 2147483648‬}} +```shell +<- {"execute": "balloon", "arguments": {"value": 2147483648}} -> {"return": {}} ``` Query the actual memory of the VM: -``` +```shell <- {"execute": "query-balloon"} -> {"return":{"actual":2147483648}} ``` @@ -511,7 +509,6 @@ For StratoVirt VMs, perform the following steps to create a storage snapshot: <- {"execute":"stop"} -> {"event":"STOP","data":{},"timestamp":{"seconds":1583908726,"microseconds":162739}} -> {"return":{}} - ``` 3. Confirm that the VM is stopped. @@ -519,7 +516,6 @@ For StratoVirt VMs, perform the following steps to create a storage snapshot: ```shell <- {"execute":"query-status"} -> {"return":{"running":true,"singlestep":false,"status":"paused"}} - ``` 4. Run the following QMP command to create a VM snapshot in a specified absolute path, for example, **/path/to/template**: @@ -527,14 +523,12 @@ For StratoVirt VMs, perform the following steps to create a storage snapshot: ```shell <- {"execute":"migrate", "arguments":{"uri":"file:/path/to/template"}} -> {"return":{}} - ``` 5. Check whether the snapshot is successfully created. ```shell <- {"execute":"query-migrate"} - ``` If "{"return":{"status":"completed"}}" is displayed, the snapshot is successfully created. 
@@ -556,7 +550,6 @@ You can run the `query-migrate` QMP command on the host to query the status of t ```shell <- {"execute":"query-migrate"} -> {"return":{"status":"completed"}} - ``` ### Restoring a VM @@ -564,9 +557,9 @@ You can run the `query-migrate` QMP command on the host to query the status of t #### Precautions - The following models support the snapshot and boot from snapshot features: - - microvm - - Q35 (x86_64) - - virt (AArch64) + - microvm + - Q35 (x86_64) + - virt (AArch64) - When a snapshot is used for restoration, the configured devices must be the same as those used when the snapshot is created. - If a microVM is used and the disk/NIC hot plugging-in feature is enabled before the snapshot is taken, you need to configure the hot plugged-in disks or NICs in the startup command line during restoration. @@ -596,7 +589,6 @@ $ stratovirt \ -device virtio-blk-device,drive=rootfs \ -qmp unix:/path/to/socket,server,nowait \ -serial stdio - ``` Then, the command for restoring the VM from the snapshot (assume that the snapshot storage path is **/path/to/template**) is as follows: @@ -612,7 +604,6 @@ $ stratovirt \ -qmp unix:/path/to/another_socket,server,nowait \ -serial stdio \ -incoming file:/path/to/template - ``` ## VM Live Migration @@ -622,11 +613,11 @@ $ stratovirt \ StratoVirt provides the VM live migration capability, that is, migrating a VM from one server to another without interrupting VM services. VM live migration can be used in the following scenarios: + - When a server is overloaded, the VM live migration technology can be used to migrate VMs to another physical server for load balancing. - When a server needs maintenance, VMs on the server can be migrated to another physical server without interrupting services. - When a server is faulty and hardware needs to be replaced or the networking needs to be adjusted, VMs on the server can be migrated to another physical machine to prevent VM service interruption. 
- ### Live Migration Operations This section describes how to live migrate a VM. @@ -635,32 +626,32 @@ This section describes how to live migrate a VM. 1. Log in to the host where the source VM is located as the **root** user and run the following command to start the source VM. Modify the parameters as required: -```shell -./stratovirt \ - -machine q35 \ - -kernel ./vmlinux.bin \ - -append "console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda" \ - -drive file=path/to/rootfs,id=rootfs,readonly=off,direct=off \ - -device virtio-blk-pci,drive=rootfs,id=rootfs,bus=pcie.0,addr=0 \ - -qmp unix:path/to/socket1,server,nowait \ - -serial stdio \ -``` + ```shell + ./stratovirt \ + -machine q35 \ + -kernel ./vmlinux.bin \ + -append "console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda" \ + -drive file=path/to/rootfs,id=rootfs,readonly=off,direct=off \ + -device virtio-blk-pci,drive=rootfs,id=rootfs,bus=pcie.0,addr=0 \ + -qmp unix:path/to/socket1,server,nowait \ + -serial stdio \ + ``` 2. Log in to the host where the target VM is located as the **root** user and run the following command to start the target VM. 
The parameters must be consistent with those of the source VM: -```shell -./stratovirt \ - -machine q35 \ - -kernel ./vmlinux.bin \ - -append "console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda" \ - -drive file=path/to/rootfs,id=rootfs,readonly=off,direct=off \ - -device virtio-blk-pci,drive=rootfs,id=rootfs,bus=pcie.0,addr=0 \ - -qmp unix:path/to/socket2,server,nowait \ - -serial stdio \ - -incoming tcp:192.168.0.1:4446 \ -``` - -> ![](public_sys-resources/icon-note.gif) **NOTE:** + ```shell + ./stratovirt \ + -machine q35 \ + -kernel ./vmlinux.bin \ + -append "console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda" \ + -drive file=path/to/rootfs,id=rootfs,readonly=off,direct=off \ + -device virtio-blk-pci,drive=rootfs,id=rootfs,bus=pcie.0,addr=0 \ + -qmp unix:path/to/socket2,server,nowait \ + -serial stdio \ + -incoming tcp:192.168.0.1:4446 \ + ``` + +> ![](public_sys-resources/icon-note.gif)**NOTE:** > > - The parameters for starting the target VM must be consistent with those for starting the source VM: > - To change the data transmission mode for live migration from TCP to the UNIX socket protocol, change the `-incoming tcp:192.168.0.1:4446` parameter for starting the target VM to `-incoming unix:/tmp/stratovirt-migrate.socket`. However, the UNIX socket protocol supports only live migration between different VMs on the same physical host. @@ -675,7 +666,8 @@ $ ncat -U path/to/socket1 <- {"execute":"migrate", "arguments":{"uri":"tcp:192.168.0.1:4446"}} -> {"return":{}} ``` -> ![](public_sys-resources/icon-note.gif) **NOTE:** + +> ![](public_sys-resources/icon-note.gif)**NOTE:** > > If the UNIX socket protocol is used for live migration transmission, change `"uri":"tcp:192.168.0.1:4446"` in the command to `"uri":"unix:/tmp/stratovirt-migrate.socket"`. 
@@ -720,10 +712,12 @@ $ ncat -U path/to/socket ### Constraints StratoVirt supports live migration of the following standard VM boards: + - Q35 (x86_64) - virt (AArch64) The following devices and features do not support live migration: + - vhost-net device - vhost-user-net device - virtio balloon device @@ -732,7 +726,8 @@ The following devices and features do not support live migration: - Shared memory (back-end memory feature) The following command parameters for starting the source and target VMs must be the same: + - virtio-net: MAC address - device: BDF number - smp -- m \ No newline at end of file +- m diff --git a/docs/en/Virtualization/VirtualizationPlatform/Virtualization/Menu/index.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/Menu/index.md new file mode 100644 index 0000000000000000000000000000000000000000..dbe61e7c3d28d14e3361e4f2bdc7169b567cb1c9 --- /dev/null +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/Menu/index.md @@ -0,0 +1,20 @@ +--- +headless: true +--- + +- [Virtualization User Guide]({{< relref "./virtualization.md" >}}) + - [Introduction to Virtualization]({{< relref "./introduction-to-virtualization.md" >}}) + - [Environment Preparation]({{< relref "./environment-preparation.md" >}}) + - [VM Configuration]({{< relref "./vm-configuration.md" >}}) + - [Managing VMs]({{< relref "./managing-vms.md" >}}) + - [VM Live Migration]({{< relref "./vm-live-migration.md" >}}) + - [System Resource Management]({{< relref "./system-resource-management.md" >}}) + - [Managing Devices]({{< relref "./managing-devices.md" >}}) + - [VM Maintainability Management]({{< relref "./vm-maintainability-management.md" >}}) + - [Best Practices]({{< relref "./best-practices.md" >}}) + - [Tool Guide]({{< relref "./tool-guide.md" >}}) + - [vmtop]({{< relref "./vmtop.md" >}}) + - [LibcarePlus]({{< relref "./libcareplus.md" >}}) + - [Skylark VM Hybrid Deployment]({{< relref "./skylark.md" >}}) + - [Common Issues and Solutions]({{< relref 
"./common-issues-and-solutions.md" >}}) + - [Appendix]({{< relref "./appendix.md" >}}) diff --git a/docs/en/docs/Virtualization/appendix.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/appendix.md similarity index 99% rename from docs/en/docs/Virtualization/appendix.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/appendix.md index 4277aa8ac9e0afb6a7e8bfa89764e7a8762708f6..6c9a781909da8c95fca096b383cfbecd84b98480 100644 --- a/docs/en/docs/Virtualization/appendix.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/appendix.md @@ -1,8 +1,7 @@ # Appendix -- [Appendix](#appendix.md) - - [Terminology & Acronyms and Abbreviations](#terminology-acronyms-and-abbreviations) - +- [Appendix](#appendix) + - [Terminology \& Acronyms and Abbreviations](#terminology--acronyms-and-abbreviations) ## Terminology & Acronyms and Abbreviations @@ -142,4 +141,3 @@ For the terminology & acronyms and abbreviation used in this document, see [Tab
- diff --git a/docs/en/docs/Virtualization/best-practices.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/best-practices.md similarity index 75% rename from docs/en/docs/Virtualization/best-practices.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/best-practices.md index de57ecfd54fbb2c16c15a9440fa882356d1451bb..dc31b7010777e0119ac19211fb61446dfc81a78b 100644 --- a/docs/en/docs/Virtualization/best-practices.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/best-practices.md @@ -1,6 +1,5 @@ # Best Practices - ## Performance Best Practices ### Halt-Polling @@ -9,8 +8,8 @@ If compute resources are sufficient, the halt-polling feature can be used to enable VMs to obtain performance similar to that of physical machines. If the halt-polling feature is not enabled, the host allocates CPU resources to other processes when the vCPU exits due to idle timeout. When the halt-polling feature is enabled on the host, the vCPU of the VM performs polling when it is idle. The polling duration depends on the actual configuration. If the vCPU is woken up during the polling, the vCPU can continue to run without being scheduled from the host. This reduces the scheduling overhead and improves the VM system performance. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The halt-polling mechanism ensures that the vCPU thread of the VM responds in a timely manner. However, when the VM has no load, the host also performs polling. As a result, the host detects that the CPU usage of the vCPU is high, but the actual CPU usage of the VM is not high. +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> The halt-polling mechanism ensures that the vCPU thread of the VM responds in a timely manner. However, when the VM has no load, the host also performs polling. As a result, the host detects that the CPU usage of the vCPU is high, but the actual CPU usage of the VM is not high. 
#### Instructions @@ -18,7 +17,7 @@ The halt-polling feature is enabled by default. You can dynamically change the h For example, to set the polling duration to 400,000 ns, run the following command: -``` +```shell # echo 400000 > /sys/module/kvm/parameters/halt_poll_ns ``` @@ -28,8 +27,8 @@ For example, to set the polling duration to 400,000 ns, run the following comman By default, QEMU main threads handle backend VM read and write operations on the KVM. This causes the following issues: -- VM I/O requests are processed by a QEMU main thread. Therefore, the single-thread CPU usage becomes the bottleneck of VM I/O performance. -- The QEMU global lock \(qemu\_global\_mutex\) is used when VM I/O requests are processed by the QEMU main thread. If the I/O processing takes a long time, the QEMU main thread will occupy the global lock for a long time. As a result, the VM vCPU cannot be scheduled properly, affecting the overall VM performance and user experience. +- VM I/O requests are processed by a QEMU main thread. Therefore, the single-thread CPU usage becomes the bottleneck of VM I/O performance. +- The QEMU global lock \(qemu\_global\_mutex\) is used when VM I/O requests are processed by the QEMU main thread. If the I/O processing takes a long time, the QEMU main thread will occupy the global lock for a long time. As a result, the VM vCPU cannot be scheduled properly, affecting the overall VM performance and user experience. You can configure the I/O thread attribute for the virtio-blk disk or virtio-scsi controller. At the QEMU backend, an I/O thread is used to process read and write requests of a virtual disk. The mapping relationship between the I/O thread and the virtio-blk disk or virtio-scsi controller can be a one-to-one relationship to minimize the impact on the QEMU main thread, enhance the overall I/O performance of the VM, and improve user experience. 
@@ -37,9 +36,9 @@ You can configure the I/O thread attribute for the virtio-blk disk or virtio-scs To use I/O threads to process VM disk read and write requests, you need to modify VM configurations as follows: -- Configure the total number of high-performance virtual disks on the VM. For example, set **** to **4** to control the total number of I/O threads. +- Configure the total number of high-performance virtual disks on the VM. For example, set **** to **4** to control the total number of I/O threads. - ``` + ```xml VMName 4194304 @@ -48,9 +47,9 @@ To use I/O threads to process VM disk read and write requests, you need to modif 4 ``` -- Configure the I/O thread attribute for the virtio-blk disk. **** indicates I/O thread IDs. The IDs start from 1 and each ID must be unique. The maximum ID is the value of ****. For example, to allocate I/O thread 2 to the virtio-blk disk, set parameters as follows: +- Configure the I/O thread attribute for the virtio-blk disk. **** indicates I/O thread IDs. The IDs start from 1 and each ID must be unique. The maximum ID is the value of ****. For example, to allocate I/O thread 2 to the virtio-blk disk, set parameters as follows: - ``` + ```xml @@ -59,9 +58,9 @@ To use I/O threads to process VM disk read and write requests, you need to modif ``` -- Configure the I/O thread attribute for the virtio-scsi controller. For example, to allocate I/O thread 2 to the virtio-scsi controller, set parameters as follows: +- Configure the I/O thread attribute for the virtio-scsi controller. For example, to allocate I/O thread 2 to the virtio-scsi controller, set parameters as follows: - ``` + ```xml @@ -69,18 +68,17 @@ To use I/O threads to process VM disk read and write requests, you need to modif ``` -- Bind I/O threads to a physical CPU. +- Bind I/O threads to a physical CPU. Binding I/O threads to specified physical CPUs does not affect the resource usage of vCPU threads. 
**** indicates I/O thread IDs, and **** indicates IDs of the bound physical CPUs. - ``` + ```xml ``` - ### Raw Device Mapping #### Overview @@ -93,11 +91,11 @@ RDM can be classified into virtual RDM and physical RDM based on backend impleme VM configuration files need to be modified for RDM. The following is a configuration example. -- Virtual RDM +- Virtual RDM The following is an example of mounting the SCSI disk **/dev/sdc** on the host to the VM as a virtual raw device: - ``` + ```xml ... @@ -113,12 +111,11 @@ VM configuration files need to be modified for RDM. The following is a configura ``` - -- Physical RDM +- Physical RDM The following is an example of mounting the SCSI disk **/dev/sdc** on the host to the VM as a physical raw device: - ``` + ```xml ... @@ -134,7 +131,6 @@ VM configuration files need to be modified for RDM. The following is a configura ``` - ### kworker Isolation and Binding #### Overview @@ -145,7 +141,7 @@ kworker is a per-CPU thread implemented by the Linux kernel. It is used to execu You can modify the **/sys/devices/virtual/workqueue/cpumask** file to bind tasks in the workqueue to the CPU specified by **cpumasks**. Masks in **cpumask** are in hexadecimal format. For example, if you need to bind kworker to CPU0 to CPU7, run the following command to change the mask to **ff**: -``` +```shell # echo ff > /sys/devices/virtual/workqueue/cpumask ``` @@ -155,25 +151,23 @@ You can modify the **/sys/devices/virtual/workqueue/cpumask** file to bind tas Compared with traditional 4 KB memory paging, openEuler also supports 2 MB/1 GB memory paging. HugePage memory can effectively reduce TLB misses and significantly improve the performance of memory-intensive services. openEuler uses two technologies to implement HugePage memory. -- Static HugePages +- Static HugePages The static HugePage requires that a static HugePage pool be reserved before the host OS is loaded. 
When creating a VM, you can modify the XML configuration file to specify that the VM memory is allocated from the static HugePage pool. The static HugePage ensures that all memory of a VM exists on the host as the HugePage to ensure physical continuity. However, the deployment difficulty is increased. After the page size of the static HugePage pool is changed, the host needs to be restarted for the change to take effect. The size of a static HugePage can be 2 MB or 1 GB. - -- THP +- THP If the transparent HugePage \(THP\) mode is enabled, the VM automatically selects available 2 MB consecutive pages and automatically splits and combines HugePages when allocating memory. When no 2 MB consecutive pages are available, the VM selects available 64 KB \(AArch64 architecture\) or 4 KB \(x86\_64 architecture\) pages for allocation. By using THP, users do not need to be aware of it and 2 MB HugePages can be used to improve memory access performance. - If VMs use static HugePages, you can disable THP to reduce the overhead of the host OS and ensure stable VM performance. #### Instructions -- Configure static HugePages. +- Configure static HugePages. Before creating a VM, modify the XML file to configure a static HugePage for the VM. - ``` + ```xml @@ -183,7 +177,7 @@ If VMs use static HugePages, you can disable THP to reduce the overhead of the h The preceding XML segment indicates that a 1 GB static HugePage is configured for the VM. - ``` + ```xml @@ -193,21 +187,20 @@ If VMs use static HugePages, you can disable THP to reduce the overhead of the h The preceding XML segment indicates that a 2 MB static HugePage is configured for the VM. -- Configure the THP. +- Configure the THP. Dynamically enable the THP through sysfs. - ``` + ```shell # echo always > /sys/kernel/mm/transparent_hugepage/enabled ``` Dynamically disable the THP. 
- ``` + ```shell # echo never > /sys/kernel/mm/transparent_hugepage/enabled ``` - ### PV-qspinlock #### Overview @@ -218,12 +211,12 @@ PV-qspinlock optimizes the spin lock in the virtual scenario of CPU overcommitme Modify the **/boot/efi/EFI/openEuler/grub.cfg** configuration file of the VM, add **arm_pvspin** to the startup parameter in the command line, and restart the VM for the modification to take effect. After PV-qspinlock takes effect, run the **dmesg** command on the VM. The following information is displayed: -``` +```shell [ 0.000000] arm-pv: PV qspinlocks enabled ``` ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->PV-qspinlock is supported only when the operating systems of the host machine and VM are both openEuler 20.09 or later and the VM kernel compilation option **CONFIG_PARAVIRT_SPINLOCKS** is set to **y** (default value for openEuler). +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> PV-qspinlock is supported only when the operating systems of the host machine and VM are both openEuler 20.09 or later and the VM kernel compilation option **CONFIG_PARAVIRT_SPINLOCKS** is set to **y** (default value for openEuler). ### Guest-Idle-Haltpoll @@ -231,17 +224,17 @@ Modify the **/boot/efi/EFI/openEuler/grub.cfg** configuration file of the VM, ad To ensure fairness and reduce power consumption, when the vCPU of the VM is idle, the VM executes the WFx/HLT instruction to exit to the host machine and triggers context switchover. The host machine determines whether to schedule other processes or vCPUs on the physical CPU or enter the energy saving mode. However, overheads of switching between a virtual machine and a host machine, additional context switching, and IPI wakeup are relatively high, and this problem is particularly prominent in services where sleep and wakeup are frequently performed. The Guest-Idle-Haltpoll technology indicates that when the vCPU of a VM is idle, the WFx/HLT is not executed immediately and VM-exit occurs. 
Instead, polling is performed on the VM for a period of time. During this period, the tasks of other vCPUs that share the LLC on the vCPU are woken up without sending IPI interrupts. This reduces the overhead of sending and receiving IPI interrupts and the overhead of VM-exit, thereby reducing the task wakeup latency. ->![](public_sys-resources/icon-note.gif) **NOTE:** - The execution of the **idle-haltpoll** command by the vCPU on the VM increases the CPU overhead of the vCPU on the host machine. Therefore, it is recommended that the vCPU exclusively occupy physical cores on the host machine when this feature is enabled. +> ![](public_sys-resources/icon-note.gif)**NOTE:** +The execution of the **idle-haltpoll** command by the vCPU on the VM increases the CPU overhead of the vCPU on the host machine. Therefore, it is recommended that the vCPU exclusively occupy physical cores on the host machine when this feature is enabled. #### Procedure The Guest-Idle-Haltpoll feature is disabled by default. The following describes how to enable this feature. -1. Enable the Guest-Idle-Haltpoll feature. +1. Enable the Guest-Idle-Haltpoll feature. - If the processor architecture of the host machine is x86, you can configure hint-dedicated in the XML file of the VM on the host machine to enable this feature. In this way, the status that the vCPU exclusively occupies the physical core can be transferred to the VM through the VM XML configuration. The host machine ensures the status of the physical core exclusively occupied by the vCPU. - ``` + ```xml ... @@ -255,45 +248,45 @@ The Guest-Idle-Haltpoll feature is disabled by default. The following describes ``` Alternatively, set **cpuidle\_haltpoll.force** to **Y** in the kernel startup parameters of the VM to forcibly enable the function. This method does not require the host machine to configure the vCPU to exclusively occupy the physical core. 
- ``` + + ```ini cpuidle_haltpoll.force=Y ``` - If the processor architecture of the host machine is AArch64, this feature can be enabled only by configuring **cpuidle\_haltpoll.force=Y haltpoll.enable=Y** in the VM kernel startup parameters. - ``` + ```ini cpuidle_haltpoll.force=Y haltpoll.enable=Y ``` -2. Check whether the Guest-Idle-Haltpoll feature takes effect. Run the following command on the VM. If **haltpoll** is returned, the feature has taken effect. +2. Check whether the Guest-Idle-Haltpoll feature takes effect. Run the following command on the VM. If **haltpoll** is returned, the feature has taken effect. - ``` + ```shell # cat /sys/devices/system/cpu/cpuidle/current_driver ``` -3. (Optional) Set the Guest-Idle-Haltpoll parameter. +3. (Optional) Set the Guest-Idle-Haltpoll parameter. The following configuration files are provided in the **/sys/module/haltpoll/parameters/** directory of the VM. You can adjust the configuration parameters based on service characteristics. - - **guest\_halt\_poll\_ns**: a global parameter that specifies the maximum polling duration after the vCPU is idle. The default value is **200000** (unit: ns). - - **guest\_halt\_poll\_shrink**: a divisor that is used to shrink the current vCPU **guest\_halt\_poll\_ns** when the wakeup event occurs after the **global guest\_halt\_poll\_ns** time. The default value is **2**. - - **guest\_halt\_poll\_grow**: a multiplier that is used to extend the current vCPU **guest\_halt\_poll\_ns** when the wakeup event occurs after the current vCPU **guest\_halt\_poll\_ns** and before the global **guest\_halt\_poll\_ns**. The default value is **2**. - - **guest\_halt\_poll\_grow\_start**: When the system is idle, the **guest\_halt\_poll\_ns** of each vCPU reaches 0. This parameter is used to set the initial value of the current vCPU **guest\_halt\_poll\_ns** to facilitate scaling in and scaling out of the vCPU polling duration. The default value is **50000** (unit: ns). 
- - **guest\_halt\_poll\_allow\_shrink**: a switch that is used to enable vCPU **guest\_halt\_poll\_ns** scale-in. The default value is **Y**. (**Y** indicates enabling the scale-in; **N** indicates disabling the scale-in.) + - **guest\_halt\_poll\_ns**: a global parameter that specifies the maximum polling duration after the vCPU is idle. The default value is **200000** (unit: ns). + - **guest\_halt\_poll\_shrink**: a divisor that is used to shrink the current vCPU **guest\_halt\_poll\_ns** when the wakeup event occurs after the **global guest\_halt\_poll\_ns** time. The default value is **2**. + - **guest\_halt\_poll\_grow**: a multiplier that is used to extend the current vCPU **guest\_halt\_poll\_ns** when the wakeup event occurs after the current vCPU **guest\_halt\_poll\_ns** and before the global **guest\_halt\_poll\_ns**. The default value is **2**. + - **guest\_halt\_poll\_grow\_start**: When the system is idle, the **guest\_halt\_poll\_ns** of each vCPU reaches 0. This parameter is used to set the initial value of the current vCPU **guest\_halt\_poll\_ns** to facilitate scaling in and scaling out of the vCPU polling duration. The default value is **50000** (unit: ns). + - **guest\_halt\_poll\_allow\_shrink**: a switch that is used to enable vCPU **guest\_halt\_poll\_ns** scale-in. The default value is **Y**. (**Y** indicates enabling the scale-in; **N** indicates disabling the scale-in.) You can run the following command as the **root** user to change the parameter values. In the preceding command, _value_ indicates the parameter value to be set, and _configFile_ indicates the corresponding configuration file. 
- ``` + ```shell # echo value > /sys/module/haltpoll/parameters/configFile ``` For example, to set the global guest\_halt\_poll\_ns to 200000 ns, run the following command: - ``` + ```shell # echo 200000 > /sys/module/haltpoll/parameters/guest_halt_poll_ns ``` - ## Security Best Practices ### Libvirt Authentication @@ -306,10 +299,10 @@ When a user uses libvirt remote invocation but no authentication is performed, a By default, the libvirt remote invocation function is disabled on openEuler. This following describes how to enable the libvirt remote invocation and libvirt authentication functions. -1. Log in to the host. -2. Modify the libvirt service configuration file **/etc/libvirt/libvirtd.conf** to enable the libvirt remote invocation and libvirt authentication functions. For example, to enable the TCP remote invocation that is based on the Simple Authentication and Security Layer \(SASL\) framework, configure parameters by referring to the following: +1. Log in to the host. +2. Modify the libvirt service configuration file **/etc/libvirt/libvirtd.conf** to enable the libvirt remote invocation and libvirt authentication functions. For example, to enable the TCP remote invocation that is based on the Simple Authentication and Security Layer \(SASL\) framework, configure parameters by referring to the following: - ``` + ```ini #Transport layer security protocol. The value 0 indicates that the protocol is disabled, and the value 1 indicates that the protocol is enabled. You can set the value as needed. listen_tls = 0 #Enable the TCP remote invocation. To enable the libvirt remote invocation and libvirt authentication functions, set the value to 1. @@ -318,38 +311,38 @@ By default, the libvirt remote invocation function is disabled on openEuler. Thi auth_tcp = "sasl" ``` -3. Modify the **/etc/sasl2/libvirt.conf** configuration file to set the SASL mechanism and SASLDB. +3. 
Modify the **/etc/sasl2/libvirt.conf** configuration file to set the SASL mechanism and SASLDB. - ``` + ```ini #Authentication mechanism of the SASL framework. mech_list: digest-md5 #Database for storing usernames and passwords sasldb_path: /etc/libvirt/passwd.db ``` -4. Add the user for SASL authentication and set the password. Take the user **userName** as an example. The command is as follows: +4. Add the user for SASL authentication and set the password. Take the user **userName** as an example. The command is as follows: - ``` + ```shell # saslpasswd2 -a libvirt userName Password: Again (for verification): ``` -5. Modify the **/etc/sysconfig/libvirtd** configuration file to enable the libvirt listening option. +5. Modify the **/etc/sysconfig/libvirtd** configuration file to enable the libvirt listening option. - ``` + ```ini LIBVIRTD_ARGS="--listen" ``` -6. Restart the libvirtd service to make the modification to take effect. +6. Restart the libvirtd service to make the modification to take effect. - ``` + ```shell # systemctl restart libvirtd ``` -7. Check whether the authentication function for libvirt remote invocation takes effect. Enter the username and password as prompted. If the libvirt service is successfully connected, the function is successfully enabled. +7. Check whether the authentication function for libvirt remote invocation takes effect. Enter the username and password as prompted. If the libvirt service is successfully connected, the function is successfully enabled. - ``` + ```shell # virsh -c qemu+tcp://192.168.0.1/system Please enter your authentication name: openeuler Please enter your password: @@ -361,24 +354,22 @@ By default, the libvirt remote invocation function is disabled on openEuler. Thi virsh # ``` - #### Managing SASL The following describes how to manage SASL users. -- Query an existing user in the database. +Query an existing user in the database. 
- ``` - # sasldblistusers2 -f /etc/libvirt/passwd.db - user@localhost.localdomain: userPassword - ``` +```shell +# sasldblistusers2 -f /etc/libvirt/passwd.db +user@localhost.localdomain: userPassword +``` -- Delete a user from the database. - - ``` - # saslpasswd2 -a libvirt -d user - ``` +Delete a user from the database. +```shell +# saslpasswd2 -a libvirt -d user +``` ### qemu-ga @@ -388,23 +379,23 @@ QEMU guest agent \(qemu-ga\) is a daemon running within VMs. It allows users on In some scenarios with high security requirements, qemu-ga provides the blacklist function to prevent internal information leakage of VMs. You can use a blacklist to selectively shield some functions provided by qemu-ga. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The qemu-ga installation package is **qemu-guest-agent-**_xx_**.rpm**. It is not installed on openEuler by default. _xx_ indicates the actual version number. +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> The qemu-ga installation package is **qemu-guest-agent-**_xx_**.rpm**. It is not installed on openEuler by default. _xx_ indicates the actual version number. #### Procedure To add a qemu-ga blacklist, perform the following steps: -1. Log in to the VM and ensure that the qemu-guest-agent service exists and is running. +1. Log in to the VM and ensure that the qemu-guest-agent service exists and is running. - ``` + ```shell # systemctl status qemu-guest-agent |grep Active Active: active (running) since Wed 2018-03-28 08:17:33 CST; 9h ago ``` -2. Query which **qemu-ga** commands can be added to the blacklist: +2. Query which **qemu-ga** commands can be added to the blacklist: - ``` + ```shell # qemu-ga --blacklist ? guest-sync-delimited guest-sync @@ -415,33 +406,30 @@ To add a qemu-ga blacklist, perform the following steps: ... ``` +3. Set the blacklist. Add the commands to be shielded to **--blacklist** in the **/usr/lib/systemd/system/qemu-guest-agent.service** file. 
Use spaces to separate different commands. For example, to add the **guest-file-open** and **guest-file-close** commands to the blacklist, configure the file by referring to the following: -3. Set the blacklist. Add the commands to be shielded to **--blacklist** in the **/usr/lib/systemd/system/qemu-guest-agent.service** file. Use spaces to separate different commands. For example, to add the **guest-file-open** and **guest-file-close** commands to the blacklist, configure the file by referring to the following: - - ``` + ```ini [Service] ExecStart=-/usr/bin/qemu-ga \ --blacklist=guest-file-open guest-file-close ``` +4. Restart the qemu-guest-agent service. -4. Restart the qemu-guest-agent service. - - ``` + ```shell # systemctl daemon-reload # systemctl restart qemu-guest-agent ``` -5. Check whether the qemu-ga blacklist function takes effect on the VM, that is, whether the **--blacklist** parameter configured for the qemu-ga process is correct. +5. Check whether the qemu-ga blacklist function takes effect on the VM, that is, whether the **--blacklist** parameter configured for the qemu-ga process is correct. - ``` + ```shell # ps -ef|grep qemu-ga|grep -E "blacklist=|b=" root 727 1 0 08:17 ? 00:00:00 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=guest-file-open guest-file-close guest-file-read guest-file-write guest-file-seek guest-file-flush -F/etc/qemu-ga/fsfreeze-hook ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >For more information about qemu-ga, visit [https://wiki.qemu.org/Features/GuestAgent](https://wiki.qemu.org/Features/GuestAgent). - + > ![](./public_sys-resources/icon-note.gif)**NOTE:** + > For more information about qemu-ga, visit [https://wiki.qemu.org/Features/GuestAgent](https://wiki.qemu.org/Features/GuestAgent). ### sVirt Protection @@ -451,58 +439,58 @@ In a virtualization environment that uses the discretionary access control \(DAC #### Enabling sVirt Protection -I. 
Enable SELinux on the host. - 1. Log in to the host. - 2. Enable the SELinux function on the host. - a. Modify the system startup parameter file **grub.cfg** to set **selinux** to **1**. +1. Enable SELinux on the host. - ``` + 1. Log in to the host. + 2. Enable the SELinux function on the host. + + - Modify the system startup parameter file **grub.cfg** to set **selinux** to **1**. + + ```ini selinux=1 ``` - b. Modify **/etc/selinux/config** to set the **SELINUX** to **enforcing**. + - Modify **/etc/selinux/config** to set the **SELINUX** to **enforcing**. - ``` + ```ini SELINUX=enforcing ``` - 3. Restart the host. + 3. Restart the host. - ``` + ```shell # reboot ``` +2. Create a VM where the sVirt function is enabled. + 1. Add the following information to the VM configuration file: -II. Create a VM where the sVirt function is enabled. - 1. Add the following information to the VM configuration file: - - ``` + ```xml ``` Or check whether the following configuration exists in the file: - ``` + ```xml ``` - 2. Create a VM. + 2. Create a VM. - ``` + ```shell # virsh define openEulerVM.xml ``` -III. Check whether sVirt is enabled. +3. Check whether sVirt is enabled. Run the following command to check whether sVirt protection has been enabled for the QEMU process of the running VM. If **svirt\_t:s0:c** exists, sVirt protection has been enabled. - ``` + ```shell # ps -eZ|grep qemu |grep "svirt_t:s0:c" system_u:system_r:svirt_t:s0:c200,c947 11359 ? 00:03:59 qemu-kvm system_u:system_r:svirt_t:s0:c427,c670 13790 ? 19:02:07 qemu-kvm ``` - ### VM Trusted Boot #### Overview @@ -515,8 +503,6 @@ The CRTM is the root of the measure boot and the first component of the system s During startup, the previous component measures (calculates the hash value) the next component, and then extends the measurement value to the trusted storage area, for example, the PCR of the TPM. 
The CRTM measurement BootLoader extends the measurement value to the PCR, and the BootLoader measurement OS extends the measurement value to the PCR. - - #### Configuring the vTPM Device to Enable Measurement Startup **Installing the swtpm and libtpms Software** @@ -524,47 +510,47 @@ During startup, the previous component measures (calculates the hash value) the swtpm provides a TPM emulator (TPM 1.2 and TPM 2.0) that can be integrated into a virtualization environment. So far, it has been integrated into QEMU and serves as a prototype system in RunC. swtpm uses libtpms to provide TPM1.2 and TPM2.0 simulation functions. Currently, openEuler 21.03 provides the libtpms and swtpm sources. You can run the yum command to install them. -``` +```shell # yum install libtpms swtpm swtpm-devel swtpm-tools -``` +``` **Configuring the vTPM Device for the VM** -1. Add the following configuration to the VM configuration file: +1. Add the following configuration to the VM configuration file: - ``` - - ... + ```xml + + ... ... - - - + + + ... - - ... - + + ... + ``` - >![](public_sys-resources/icon-note.gif) **NOTE:** - >Currently, trusted boot of VMs on the AArch64 architecture of openEuler 20.09 does not support the ACPI feature. Therefore, do not configure the ACPI feature for VMs. Otherwise, vTPM devices cannot be identified after VMs are started. If the AArch64 architecture is used in versions earlier than openEuler 22.09, set **tpm model** to **<tpm model='tpm-tis-device'>**. + > ![](public_sys-resources/icon-note.gif)**NOTE:** + > Currently, trusted boot of VMs on the AArch64 architecture of openEuler 20.09 does not support the ACPI feature. Therefore, do not configure the ACPI feature for VMs. Otherwise, vTPM devices cannot be identified after VMs are started. If the AArch64 architecture is used in versions earlier than openEuler 22.09, set **tpm model** to **<tpm model='tpm-tis-device'>**. -2. Create a VM. +2. Create a VM.
- ``` + ```shell # virsh define MeasuredBoot.xml ``` -3. Start the VM. - + +3. Start the VM. + Before starting the VM, run the **chmod** command to grant the following permissions to the **/var/lib/swtpm-localca/** directory. Otherwise, libvirt cannot start swtpm. - ``` + ```shell # chmod -R 777 /var/lib/swtpm-localca/ # - # virsh start MeasuredbootVM - `` + # virsh start MeasuredbootVM + ``` **Confirming that the Measure Boot Is Successfully Enabled** @@ -574,8 +560,7 @@ The vBIOS determines whether to enable the measure boot function. Currently, the Log in to the VM as the **root** user and check whether the TPM driver, tpm2-tss protocol stack, and tpm2-tools are installed on the VM. By default, the tpm driver (tpm_tis.ko), tpm2-tss protocol stack, and tpm2-tools are installed in openEuler 21.03. If another OS is used, run the following command to check whether the driver and related tools are installed: - -``` +```shell # lsmod |grep tpm # tpm_tis 16384 0 # @@ -583,9 +568,10 @@ By default, the tpm driver (tpm_tis.ko), tpm2-tss protocol stack, and tpm2-tools # # yum install tpm2-tss tpm2-tools ``` + You can run the **tpm2_pcrread** (**tpm2_pcrlist** in tpm2_tools of earlier versions) command to list all PCR values. -``` +```shell # tpm2_pcrread sha1 : 0 : fffdcae7cef57d93c5f64d1f9b7f1879275cff55 diff --git a/docs/en/Virtualization/VirtualizationPlatform/Virtualization/common-issues-and-solutions.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/common-issues-and-solutions.md new file mode 100644 index 0000000000000000000000000000000000000000..b50334284b57d5d91d7f2627e92affe01470241a --- /dev/null +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/common-issues-and-solutions.md @@ -0,0 +1,13 @@ +# Common Issues and Solutions + +## Issue 1: QEMU Hot Patch Created with LibcarePlus Fails to Load + +The problem arises when the QEMU version does not match the hot patch version. 
To resolve this, download the source code for the corresponding QEMU version and ensure the environments for creating the hot patch and building the QEMU package are identical. The buildID can be used to verify consistency. If users lack the QEMU build environment, they can **build and install the package themselves**, then use the buildID from `/usr/libexec/qemu-kvm` in the self-built package. + +## Issue 2: Hot Patch Created with LibcarePlus Is Loaded but Not Effective + +This occurs because certain types of functions are not supported, including infinite loops, non-exiting functions, recursive functions, initialization functions, inline functions, and functions shorter than 5 bytes. To address this, verify whether the patched function falls under any of these constraints. + +## Issue 3: The First Result Displayed by the kvmtop Tool Is Calculated from Two Samples with a 0.05-Second Interval, Resulting in Significant Fluctuations + +This issue stems from a defect in the open-source top framework, and there is currently no solution available.
diff --git a/docs/en/docs/Virtualization/environment-preparation.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/environment-preparation.md similarity index 100% rename from docs/en/docs/Virtualization/environment-preparation.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/environment-preparation.md diff --git a/docs/en/docs/Virtualization/figures/CertEnrollP1.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP1.png similarity index 100% rename from docs/en/docs/Virtualization/figures/CertEnrollP1.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP1.png diff --git a/docs/en/docs/Virtualization/figures/CertEnrollP2.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP2.png similarity index 100% rename from docs/en/docs/Virtualization/figures/CertEnrollP2.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP2.png diff --git a/docs/en/docs/Virtualization/figures/CertEnrollP3.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP3.png similarity index 100% rename from docs/en/docs/Virtualization/figures/CertEnrollP3.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP3.png diff --git a/docs/en/docs/Virtualization/figures/CertEnrollP4.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP4.png similarity index 100% rename from docs/en/docs/Virtualization/figures/CertEnrollP4.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP4.png diff --git a/docs/en/docs/Virtualization/figures/CertEnrollP5.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP5.png similarity index 100% rename from docs/en/docs/Virtualization/figures/CertEnrollP5.png rename to 
docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP5.png diff --git a/docs/en/docs/Virtualization/figures/CertEnrollP6.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP6.png similarity index 100% rename from docs/en/docs/Virtualization/figures/CertEnrollP6.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP6.png diff --git a/docs/en/docs/Virtualization/figures/CertEnrollP7.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP7.png similarity index 100% rename from docs/en/docs/Virtualization/figures/CertEnrollP7.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP7.png diff --git a/docs/en/docs/Virtualization/figures/CertEnrollP8.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP8.png similarity index 100% rename from docs/en/docs/Virtualization/figures/CertEnrollP8.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/CertEnrollP8.png diff --git a/docs/en/docs/Virtualization/figures/OSBootFlow.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/OSBootFlow.png similarity index 100% rename from docs/en/docs/Virtualization/figures/OSBootFlow.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/OSBootFlow.png diff --git a/docs/en/docs/Virtualization/figures/SecureBootFlow.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/SecureBootFlow.png similarity index 100% rename from docs/en/docs/Virtualization/figures/SecureBootFlow.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/SecureBootFlow.png diff --git a/docs/en/docs/Virtualization/figures/kvm-architecture.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/kvm-architecture.png similarity index 100% rename from 
docs/en/docs/Virtualization/figures/kvm-architecture.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/kvm-architecture.png diff --git a/docs/en/docs/Virtualization/figures/status-transition-diagram.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/status-transition-diagram.png similarity index 100% rename from docs/en/docs/Virtualization/figures/status-transition-diagram.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/status-transition-diagram.png diff --git a/docs/en/docs/Virtualization/figures/virtual-network-structure.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/virtual-network-structure.png similarity index 100% rename from docs/en/docs/Virtualization/figures/virtual-network-structure.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/virtual-network-structure.png diff --git a/docs/en/docs/Virtualization/figures/virtualized-architecture.png b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/virtualized-architecture.png similarity index 100% rename from docs/en/docs/Virtualization/figures/virtualized-architecture.png rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/figures/virtualized-architecture.png diff --git a/docs/en/docs/Virtualization/introduction-to-virtualization.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/introduction-to-virtualization.md similarity index 84% rename from docs/en/docs/Virtualization/introduction-to-virtualization.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/introduction-to-virtualization.md index 04370eebbcc81ea3485835c0affe811478a77523..13ff9f9fee78402049056908648f71ae79c2d8ff 100644 --- a/docs/en/docs/Virtualization/introduction-to-virtualization.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/introduction-to-virtualization.md @@ -13,15 +13,14 @@ Virtualization enables 
multiple virtual machines \(VMs\) to run on a physical server. Currently, mainstream virtualization technologies are classified into two types based on the implementation structure of the Virtual Machine Monitor \(VMM\): -- Hypervisor model +- Hypervisor model In this model, VMM is considered as a complete operating system \(OS\) and has the virtualization function. VMM directly manages all physical resources, including processors, memory, and I/O devices. -- Host model +- Host model In this model, physical resources are managed by a host OS, which is a traditional OS, such as Linux and Windows. The host OS does not provide the virtualization capability. The VMM that provides the virtualization capability runs on the host OS as a driver or software of the system. The VMM invokes the host OS service to obtain resources and simulate the processor, memory, and I/O devices. The virtualization implementation of this model includes KVM and Virtual Box. - Kernel-based Virtual Machine \(KVM\) is a kernel module of Linux. It makes Linux a hypervisor. [Figure 2](#fig310953013541) shows the KVM architecture. KVM does not simulate any hardware device. It is used to enable virtualization capabilities provided by the hardware, such as Intel VT-x, AMD-V, ARM virtualization extensions. The user-mode QEMU simulates the mainboard, memory, and I/O devices. The user-mode QEMU works with the KVM module to simulate VM hardware. The guest OS runs on the hardware simulated by the QEMU and KVM. **Figure 2** KVM architecture
-- Libvirt: provides a tool set for managing VMs, including unified, stable, and open application programming interfaces \(APIs\), daemon process \(libvirtd\), and default command line management tool \(virsh\). -- Open vSwitch: provides a virtual network tool set for VMs, supports programming extension and standard management interfaces and protocols \(such as NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, and 802.1ag\). +- KVM: provides the core virtualization infrastructure to make the Linux system a hypervisor. Multiple VMs can run on the same host at the same time. +- QEMU: simulates a processor and provides a set of device models to work with KVM to implement hardware-based virtualization simulation acceleration. +- Libvirt: provides a tool set for managing VMs, including unified, stable, and open application programming interfaces \(APIs\), daemon process \(libvirtd\), and default command line management tool \(virsh\). +- Open vSwitch: provides a virtual network tool set for VMs, supports programming extension and standard management interfaces and protocols \(such as NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, and 802.1ag\). ## Virtualization Characteristics Virtualization has the following characteristics: -- Partition +- Partition Virtualization can logically divide software on a physical server to run multiple VMs \(virtual servers\) with different specifications. - -- Isolation +- Isolation Virtualization can simulate virtual hardware and provide hardware conditions for VMs to run complete OSs. The OSs of each VM are independent and isolated from each other. For example, if the OS of a VM breaks down due to a fault or malicious damage, the OSs and applications of other VMs are not affected. - -- Encapsulation +- Encapsulation Encapsulation is performed on a per VM basis. The excellent encapsulation capability makes VMs more flexible than physical machines. 
Functions such as live migration, snapshot, and cloning of VMs can be realized, implementing quick deployment and automatic O&M of data centers. - -- Hardware-irrelevant +- Hardware-irrelevant After being abstracted by the virtualization layer, VMs are not directly bound to underlying hardware and can run on other servers without being modified. - ## Virtualization Advantages Virtualization brings the following benefits to infrastructure of the data center: -- Flexibility and scalability +- Flexibility and scalability Users can dynamically allocate and reclaim resources to meet dynamic service requirements. In addition, users can plan different VM specifications based on product requirements and adjust the scale without changing the physical resource configuration. - -- Higher availability and better O&M methods +- Higher availability and better O&M methods Virtualization provides O&M methods such as live migration, snapshot, live upgrade, and automatic DR. Physical resources can be deleted, upgraded, or changed without affecting users, improving service continuity and implementing automatic O&M. - -- Security hardening +- Security hardening Virtualization provides OS-level isolation and hardware-based processor operation privilege-level control. Compared with simple sharing mechanisms, virtualization provides higher security and implements controllable and secure access to data and services. - -- High resource utilization +- High resource utilization Virtualization supports dynamic sharing of physical resources and resource pools, improving resource utilization. - ## openEuler Virtualization openEuler provides KVM virtualization components that support the AArch64 and x86\_64 processor architectures.
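The KVM availability described above can be checked on a host. A minimal sketch, assuming a Linux host where the loaded KVM modules expose the `/dev/kvm` device (the messages are illustrative, not from the original document):

```shell
# Check for the KVM character device exposed by the kvm kernel modules.
if [ -c /dev/kvm ]; then
    echo "/dev/kvm present: KVM virtualization is available"
else
    echo "/dev/kvm missing: load the kvm modules or enable virtualization in firmware"
fi

# Report the processor architecture (openEuler KVM supports aarch64 and x86_64).
uname -m
```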
- diff --git a/docs/en/docs/Virtualization/LibcarePlus.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/libcareplus.md similarity index 87% rename from docs/en/docs/Virtualization/LibcarePlus.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/libcareplus.md index 1a6fe540b61c631256a9612b1d451887a398dc7a..396e1e5b1555b171643c2a74bdc5266179c36cda 100644 --- a/docs/en/docs/Virtualization/LibcarePlus.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/libcareplus.md @@ -3,21 +3,21 @@ - [LibcarePlus](#libcareplus) - - [Overview](#overview) - - [Hardware and Software Requirements](#hardware-and-software-requirements) - - [Precautions and Constraints](#precautions-and-constraints) - - [Installing LibcarePlus](#installing-libcareplus) - - [Software Installation Dependencies](#software-installation-dependencies) - - [Installing LibcarePlus](#installing-libcareplus-1) - - [Creating LibcarePlus Hot Patches](#creating-libcareplus-hot-patches) - - [Introduction](#introduction) - - [Manual Creation](#manual-creation) - - [Creation Through a Script](#creation-through-a-script) - - [Applying the LibcarePlus Hot Patch](#applying-the-libcareplus-hot-patch) - - [Preparation](#preparation) - - [Loading the Hot Patch](#loading-the-hot-patch) - - [Querying a Hot Patch](#querying-a-hot-patch) - - [Uninstalling the Hot Patch](#uninstalling-the-hot-patch) + - [Overview](#overview) + - [Hardware and Software Requirements](#hardware-and-software-requirements) + - [Precautions and Constraints](#precautions-and-constraints) + - [Installing LibcarePlus](#installing-libcareplus) + - [Software Installation Dependencies](#software-installation-dependencies) + - [Installing LibcarePlus](#installing-libcareplus-1) + - [Creating LibcarePlus Hot Patches](#creating-libcareplus-hot-patches) + - [Introduction](#introduction) + - [Manual Creation](#manual-creation) + - [Creation Through a Script](#creation-through-a-script) + - [Applying the 
LibcarePlus Hot Patch](#applying-the-libcareplus-hot-patch) + - [Preparation](#preparation) + - [Loading the Hot Patch](#loading-the-hot-patch) + - [Querying a Hot Patch](#querying-a-hot-patch) + - [Uninstalling the Hot Patch](#uninstalling-the-hot-patch) @@ -52,14 +52,14 @@ When using LibcarePlus, comply with the following hot patch specifications and c - Thread local storage (TLS) variables of the initial executable (IE) model can be modified. - Symbols defined in a patch cannot be used in subsequent patches. - Hot patches are not supported in the following scenarios: - - Infinite loop function, non-exit function, inline function, initialization function, and non-maskable interrupt (NMI) function - - Replacing global variables - - Functions less than 5 bytes - - Modifying the header file - - Adding or deleting the input and output parameters of the target function - - Changing (adding, deleting, or modifying) data structure members - - Modifying the C files that contain GCC macros such as __LINE__ and __FILE__ - - Modifying the Intel vector assembly instruction + - Infinite loop function, non-exit function, inline function, initialization function, and non-maskable interrupt (NMI) function + - Replacing global variables + - Functions less than 5 bytes + - Modifying the header file + - Adding or deleting the input and output parameters of the target function + - Changing (adding, deleting, or modifying) data structure members + - Modifying the C files that contain GCC macros such as __LINE__ and __FILE__ + - Modifying the Intel vector assembly instruction ## Installing LibcarePlus @@ -235,10 +235,9 @@ This section describes how to use LibcarePlus built-in **libcare-patch-make** sc Expand foo.patch

- ``` diff - --- foo.c 2020-12-09 15:39:51.159632075 +0800 - +++ bar.c 2020-12-09 15:40:03.818632220 +0800 + --- foo.c 2020-12-09 15:39:51.159632075 +0800 + +++ bar.c 2020-12-09 15:40:03.818632220 +0800 @@ -1,10 +1,10 @@ -// foo.c +// bar.c @@ -257,30 +256,28 @@ This section describes how to use LibcarePlus built-in **libcare-patch-make** sc

- 2. Write the **makefile** for building **foo.c** as follows:
Expand makefile

- ``` makefile - all: foo - - foo: foo.c - $(CC) -o $@ $< - - clean: - rm -f foo + ``` makefile + all: foo + + foo: foo.c + $(CC) -o $@ $< + + clean: + rm -f foo - install: foo - mkdir $$DESTDIR || : - cp foo $$DESTDIR - ``` + install: foo + mkdir $$DESTDIR || : + cp foo $$DESTDIR + ```

-
- + 3. After the **makefile** is done, directly call `libcare-patch-make`. If `libcare-patch-make` asks you which file to install the patch, enter the original file name, as shown in the following: @@ -297,8 +294,8 @@ This section describes how to use LibcarePlus built-in **libcare-patch-make** sc Perhaps you used the wrong -p or --strip option? The text leading up to this was: -------------------------- - |--- foo.c 2020-12-10 09:43:04.445375845 +0800 - |+++ bar.c 2020-12-10 09:48:36.778379648 +0800 + |--- foo.c 2020-12-10 09:43:04.445375845 +0800 + |+++ bar.c 2020-12-10 09:48:36.778379648 +0800 -------------------------- File to patch: foo.c patching file foo.c @@ -315,8 +312,6 @@ This section describes how to use LibcarePlus built-in **libcare-patch-make** sc After the command is executed, the output indicates that the hot patch file is in the **patchroot** directory of the current directory, and the executable file is in the **lpmake** directory. By default, the Build ID is used to name a hot patch file generated by a script. - - ## Applying the LibcarePlus Hot Patch The following uses the original file **foo.c** and patch file **bar.c** as an example to describe how to use the LibcarePlus hot patch. @@ -359,7 +354,6 @@ The procedure for applying the LibcarePlus hot patch is as follows: Hello world being patched!
``` - ### Querying a Hot Patch The procedure for querying a LibcarePlus hot patch is as follows: @@ -381,9 +375,6 @@ The procedure for querying a LibcarePlus hot patch is as follows: Patch id: 0001 ``` - - - ### Uninstalling the Hot Patch The procedure for uninstalling the LibcarePlus hot patch is as follows: diff --git a/docs/en/docs/Virtualization/managing-devices.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/managing-devices.md similarity index 49% rename from docs/en/docs/Virtualization/managing-devices.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/managing-devices.md index b8ee22dc217551d458ee5d0c341394d1fc0c509e..ac6459cfc73d20580d78f04b6ecd5a858c3e190d 100644 --- a/docs/en/docs/Virtualization/managing-devices.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/managing-devices.md @@ -1,30 +1,29 @@ # Managing Devices - [Managing Devices](#managing-devices) - - [Configuring a PCIe Controller for a VM](#configuring-a-pcie-controller-for-a-vm) - - [Overview](#overview) - - [Configuring the PCIe Root, PCIe Root Port, and PCIe-PCI-Bridge](#configuring-the-pcie-root-pcie-root-port-and-pcie-pci-bridge) - - [Managing Virtual Disks](#managing-virtual-disks) - - [Managing vNICs](#managing-vnics) - - [Configuring a Virtual Serial Port](#configuring-a-virtual-serial-port) - - [Managing Device Passthrough](#managing-device-passthrough) - - [PCI Passthrough](#pci-passthrough) - - [SR-IOV Passthrough](#sr-iov-passthrough) - - [Managing VM USB](#managing-vm-usb) - - [Configuring USB Controllers](#configuring-usb-controllers) - - [Configuring a USB Passthrough Device](#configuring-a-usb-passthrough-device) - - [Storing Snapshots](#storing-snapshots) - - [Overview](#overview-7) - - [Procedure](#procedure-4) - - [Configuring Disk I/O Suspension](#configuring-disk-io-suspension) - - [Introduction](#introduction) - - [Overview](#overview-8) - - [Application Scenarios](#application-scenarios) - - [Precautions and 
Restrictions](#precautions-and-restrictions) - - [Disk I/O Suspension Configuration](#disk-io-suspension-configuration) - - [Using the QEMU CLI](#using-the-qemu-cli) - - [Using an XML Configuration File](#using-an-xml-configuration-file) - + - [Configuring a PCIe Controller for a VM](#configuring-a-pcie-controller-for-a-vm) + - [Overview](#overview) + - [Configuring the PCIe Root, PCIe Root Port, and PCIe-PCI-Bridge](#configuring-the-pcie-root-pcie-root-port-and-pcie-pci-bridge) + - [Managing Virtual Disks](#managing-virtual-disks) + - [Managing vNICs](#managing-vnics) + - [Configuring a Virtual Serial Port](#configuring-a-virtual-serial-port) + - [Managing Device Passthrough](#managing-device-passthrough) + - [PCI Passthrough](#pci-passthrough) + - [SR-IOV Passthrough](#sr-iov-passthrough) + - [Managing VM USB](#managing-vm-usb) + - [Configuring USB Controllers](#configuring-usb-controllers) + - [Configuring a USB Passthrough Device](#configuring-a-usb-passthrough-device) + - [Storing Snapshots](#storing-snapshots) + - [Overview](#overview-7) + - [Procedure](#procedure-4) + - [Configuring Disk I/O Suspension](#configuring-disk-io-suspension) + - [Introduction](#introduction) + - [Overview](#overview-8) + - [Application Scenarios](#application-scenarios) + - [Precautions and Restrictions](#precautions-and-restrictions) + - [Disk I/O Suspension Configuration](#disk-io-suspension-configuration) + - [Using the QEMU CLI](#using-the-qemu-cli) + - [Using an XML Configuration File](#using-an-xml-configuration-file) ## Configuring a PCIe Controller for a VM @@ -36,11 +35,11 @@ The NIC, disk controller, and PCIe pass-through devices in a VM must be mounted The VM PCIe controller is configured using the XML file. The **model** corresponding to PCIe root, PCIe root port, and PCIe-PCI-bridge in the XML file are **pcie-root**, **pcie-root-port**, and **pcie-to-pci-bridge**, respectively. 
-- Simplified configuration method +- Simplified configuration method Add the following contents to the XML file of the VM. Other attributes of the controller are automatically filled by libvirt. - ``` + ```xml @@ -51,11 +50,11 @@ The VM PCIe controller is configured using the XML file. The **model** corresp The **pcie-root** and **pcie-to-pci-bridge** occupy one **index** respectively. Therefore, the final **index** is the number of required **root ports + 1**. -- Complete configuration method +- Complete configuration method Add the following contents to the XML file of the VM: - ``` + ```xml @@ -76,38 +75,37 @@ The VM PCIe controller is configured using the XML file. The **model** corresp In the preceding contents: - - The **chassis** and **port** attributes of the root port must be in ascending order. Because a PCIe-PCI-bridge is inserted in the middle, the **chassis** number skips **2**, but the **port** numbers are still consecutive. - - The **address function** of the root port ranges from **0\*0** to **0\*7**. - - A maximum of eight functions can be mounted to each slot. When the slot is full, the slot number increases. + - The **chassis** and **port** attributes of the root port must be in ascending order. Because a PCIe-PCI-bridge is inserted in the middle, the **chassis** number skips **2**, but the **port** numbers are still consecutive. + - The **address function** of the root port ranges from **0\*0** to **0\*7**. + - A maximum of eight functions can be mounted to each slot. When the slot is full, the slot number increases. The complete configuration method is complex. Therefore, the simplified one is recommended. - ## Managing Virtual Disks ### Overview Virtual disk types include virtio-blk, virtio-scsi, and vhost-scsi. virtio-blk simulates a block device, and virtio-scsi and vhost-scsi simulate SCSI devices. -- virtio-blk: It can be used for common system disk and data disk. 
In this configuration, the virtual disk is presented as **vd\[a-z\]** or **vd\[a-z\]\[a-z\]** in the VM. -- virtio-scsi: It is recommended for common system disk and data disk. In this configuration, the virtual disk is presented as **sd\[a-z\]** or **sd\[a-z\]\[a-z\]** in the VM. -- vhost-scsi: It is recommended for the virtual disk that has high performance requirements. In this configuration, the virtual disk is presented as **sd\[a-z\]** or **sd\[a-z\]\[a-z\]** on the VM. +- virtio-blk: It can be used for common system disk and data disk. In this configuration, the virtual disk is presented as **vd\[a-z\]** or **vd\[a-z\]\[a-z\]** in the VM. +- virtio-scsi: It is recommended for common system disk and data disk. In this configuration, the virtual disk is presented as **sd\[a-z\]** or **sd\[a-z\]\[a-z\]** in the VM. +- vhost-scsi: It is recommended for the virtual disk that has high performance requirements. In this configuration, the virtual disk is presented as **sd\[a-z\]** or **sd\[a-z\]\[a-z\]** on the VM. ### Procedure For details about how to configure a virtual disk, see **VM Configuration** > **Network Devices**. This section uses the virtio-scsi disk as an example to describe how to attach and detach a virtual disk. -- Attach a virtio-scsi disk. +- Attach a virtio-scsi disk. Run the **virsh attach-device** command to attach the virtio-scsi virtual disk. - ``` + ```shell # virsh attach-device ``` The preceding command can be used to attach a disk to a VM online. The disk information is specified in the **attach-device.xml** file. The following is an example of the **attach-device.xml** file: - ``` + ```shell ### attach-device.xml ### @@ -120,17 +118,16 @@ For details about how to configure a virtual disk, see **VM Configuration** > ** The disk attached by running the preceding commands becomes invalid after the VM is shut down and restarted. 
If you need to permanently attach a virtual disk to a VM, run the **virsh attach-device** command with the **--config** parameter. -- Detach a virtio-scsi disk. +- Detach a virtio-scsi disk. If a disk attached online is no longer used, run the **virsh detach** command to dynamically detach it. - ``` + ```shell # virsh detach-device ``` **detach-device.xml** specifies the XML information of the disk to be detached, which must be the same as the XML information during dynamic attachment. - ## Managing vNICs ### Overview @@ -139,19 +136,19 @@ The vNIC types include virtio-net, vhost-net, and vhost-user. After creating a V ### Procedure -For details about how to configure a virtual NIC, see [3.2.4.2 Network Devices](#network-device). This section uses the vhost-net NIC as an example to describe how to attach and detach a vNIC. +For details about how to configure a virtual NIC, see [3.2.4.2 Network Devices](./vm-configuration.md#network-devices). This section uses the vhost-net NIC as an example to describe how to attach and detach a vNIC. -- Attach the vhost-net NIC. +- Attach the vhost-net NIC. Run the **virsh attach-device** command to attach the vhost-net vNIC. - ``` + ```shell # virsh attach-device ``` The preceding command can be used to attach a vhost-net NIC to a running VM. The NIC information is specified in the **attach-device.xml** file. The following is an example of the **attach-device.xml** file: - ``` + ```shell ### attach-device.xml ### @@ -164,17 +161,16 @@ For details about how to configure a virtual NIC, see [3.2.4.2 Network Devices] The vhost-net NIC attached using the preceding commands becomes invalid after the VM is shut down and restarted. If you need to permanently attach a vNIC to a VM, run the **virsh attach-device** command with the **--config** parameter. -- Detach the vhost-net NIC. +- Detach the vhost-net NIC. If a NIC attached online is no longer used, run the **virsh detach** command to dynamically detach it. 
- ``` + ```shell # virsh detach-device ``` **detach-device.xml** specifies the XML information of the vNIC to be detached, which must be the same as the XML information during dynamic attachment. - ## Configuring a Virtual Serial Port ### Overview @@ -185,9 +181,9 @@ In a virtualization environment, VMs and host machines need to communicate with The Linux VM serial port console is a pseudo terminal device connected to the host machine through the serial port of the VM. It implements interactive operations on the VM through the host machine. In this scenario, the serial port needs to be configured in the pty type. This section describes how to configure a pty serial port. -- Add the following virtual serial port configuration items under the **devices** node in the XML configuration file of the VM: +- Add the following virtual serial port configuration items under the **devices** node in the XML configuration file of the VM: - ``` + ```xml @@ -195,19 +191,18 @@ The Linux VM serial port console is a pseudo terminal device connected to the ho ``` -- Run the **virsh console** command to connect to the pty serial port of the running VM. +- Run the **virsh console** command to connect to the pty serial port of the running VM. - ``` + ```shell # virsh console ``` -- To ensure that no serial port message is missed, use the **--console** option to connect to the serial port when starting the VM. +- To ensure that no serial port message is missed, use the **--console** option to connect to the serial port when starting the VM. - ``` + ```shell # virsh start --console ``` - ## Managing Device Passthrough The device passthrough technology enables VMs to directly access physical devices. The I/O performance of VMs can be improved in this way. @@ -218,7 +213,7 @@ Currently, the VFIO passthrough is used. It can be classified into PCI passthrou PCI passthrough directly assigns a physical PCI device on the host to a VM. The VM can directly access the device. 
PCI passthrough uses the VFIO device passthrough mode. The PCI passthrough configuration file in XML format for a VM is as follows: -``` +```xml @@ -231,100 +226,22 @@ PCI passthrough directly assigns a physical PCI device on the host to a VM. The **Table 1** Device configuration items for PCI passthrough
- ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->VFIO passthrough is implemented by IOMMU group. Devices are divided to IOMMU groups based on access control services (ACS) on hardware. Devices in the same IOMMU group can be assigned to only one VM. If multiple functions on a PCI device belong to the same IOMMU group, they can be directly assigned to only one VM as well. +| Parameter | Description | Value | +|---|---|---| +| hostdev.source.address.domain | Domain ID of the PCI device on the host OS. | ≥ 0 | +| hostdev.source.address.bus | Bus ID of the PCI device on the host OS. | ≥ 1 | +| hostdev.source.address.slot | Device ID of the PCI device on the host OS. | ≥ 0 | +| hostdev.source.address.function | Function ID of the PCI device on the host OS. | ≥ 0 | +| hostdev.driver.name | Backend driver of PCI passthrough. This parameter is optional. | **vfio** (default value) | +| hostdev.rom | Whether the VM can access the ROM of the passthrough device. | This parameter can be set to **on** or **off**. The default value is **on**.
- **on**: indicates that the VM can access the ROM of the passthrough device. For example, if a VM with a passthrough NIC needs to boot from the preboot execution environment (PXE), or a VM with a passthrough Host Bus Adapter (HBA) card needs to boot from the ROM, you can set this parameter to **on**.
- **off**: indicates that the VM cannot access the ROM of the passthrough device. | +| hostdev.address.type | Device type displayed on the guest, which must be the same as the actual device type. | **pci** (default configuration) | +| hostdev.address.domain | Domain number of the device displayed on the guest. | 0x0000 | +| hostdev.address.bus | Bus number of the device displayed on the guest. | **0x00** (default configuration). This parameter can only be set to the bus number configured in section "Configuring a PCIe Controller for a VM." | +| hostdev.address.slot | Slot number of the device displayed on the guest. | The slot number range is \[0x03,0x1e]
Note: - The first slot number 0x00 is occupied by the system, the second slot number 0x01 is occupied by the IDE controller and USB controller, and the third slot number 0x02 is occupied by the video. - The last slot number 0x1f is occupied by the pvchannel. | +| hostdev.address.function | Function number of the device displayed on the guest. | **0x0** (default configuration): The function number range is \[0x0,0x7] | + +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> VFIO passthrough is implemented by IOMMU group. Devices are divided to IOMMU groups based on access control services (ACS) on hardware. Devices in the same IOMMU group can be assigned to only one VM. If multiple functions on a PCI device belong to the same IOMMU group, they can be directly assigned to only one VM as well. ### SR-IOV Passthrough @@ -332,38 +249,39 @@ PCI passthrough directly assigns a physical PCI device on the host to a VM. The Single Root I/O Virtualization (SR-IOV) is a hardware-based virtualization solution. With the SR-IOV technology, a physical function (PF) can provide multiple virtual functions (VFs), and each VF can be directly assigned to a VM. This greatly improves hardware resource utilization and I/O performance of VMs. A typical application scenario is SR-IOV passthrough for NICs. With the SR-IOV technology, a physical NIC (PF) can function as multiple VF NICs, and then the VFs can be directly assigned to VMs. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->- SR-IOV requires the support of physical hardware. Before using SR-IOV, ensure that the hardware device to be directly assigned supports SR-IOV and the device driver on the host OS works in SR-IOV mode. ->- The following describes how to query the NIC model: ->In the following command output, values in the first column indicate the PCI numbers of NICs, and **19e5:1822** indicates the vendor ID and device ID of the NIC. 
->``` -># lspci | grep Ether ->05:00.0 Ethernet controller: Device 19e5:1822 (rev 45) ->07:00.0 Ethernet controller: Device 19e5:1822 (rev 45) ->09:00.0 Ethernet controller: Device 19e5:1822 (rev 45) ->0b:00.0 Ethernet controller: Device 19e5:1822 (rev 45) ->81:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) ->81:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) ->``` - +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> +> - SR-IOV requires the support of physical hardware. Before using SR-IOV, ensure that the hardware device to be directly assigned supports SR-IOV and the device driver on the host OS works in SR-IOV mode. +> - The following describes how to query the NIC model: +> In the following command output, values in the first column indicate the PCI numbers of NICs, and **19e5:1822** indicates the vendor ID and device ID of the NIC. +> +> ```shell +> # lspci | grep Ether +> 05:00.0 Ethernet controller: Device 19e5:1822 (rev 45) +> 07:00.0 Ethernet controller: Device 19e5:1822 (rev 45) +> 09:00.0 Ethernet controller: Device 19e5:1822 (rev 45) +> 0b:00.0 Ethernet controller: Device 19e5:1822 (rev 45) +> 81:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) +> 81:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) +> ``` #### Procedure To configure SR-IOV passthrough for a NIC, perform the following steps: -1. Enable the SR-IOV mode for the NIC. - 1. Ensure that VF driver support provided by the NIC supplier exists on the guest OS. Otherwise, VFs in the guest OS cannot work properly. - 2. Enable the SMMU/IOMMU support in the BIOS of the host OS. The enabling method varies depending on the servers of different vendors. For details, see the help documents of the servers. - 3. Configure the host driver to enable the SR-IOV VF mode. 
The following uses the Hi1822 NIC as an example to describe how to enable 16 VFs. +1. Enable the SR-IOV mode for the NIC. + 1. Ensure that VF driver support provided by the NIC supplier exists on the guest OS. Otherwise, VFs in the guest OS cannot work properly. + 2. Enable the SMMU/IOMMU support in the BIOS of the host OS. The enabling method varies depending on the servers of different vendors. For details, see the help documents of the servers. + 3. Configure the host driver to enable the SR-IOV VF mode. The following uses the Hi1822 NIC as an example to describe how to enable 16 VFs. - ``` + ```shell echo 16 > /sys/class/net/ethX/device/sriov_numvfs ``` -2. Obtain the PCI BDF information of PFs and VFs. - 1. Run the following command to obtain the NIC resource list on the current board: +2. Obtain the PCI BDF information of PFs and VFs. + 1. Run the following command to obtain the NIC resource list on the current board: - ``` + ```shell # lspci | grep Eth 03:00.0 Ethernet controller: Huawei Technologies Co., Ltd. Hi1822 Family (4*25GE) (rev 45) 04:00.0 Ethernet controller: Huawei Technologies Co., Ltd. Hi1822 Family (4*25GE) (rev 45) @@ -375,9 +293,9 @@ To configure SR-IOV passthrough for a NIC, perform the following steps: 7d:00.3 Ethernet controller: Huawei Technologies Co., Ltd. Device a221 (rev 20) ``` - 2. Run the following command to view the PCI BDF information of VFs: + 2. Run the following command to view the PCI BDF information of VFs: - ``` + ```shell # lspci | grep "Virtual Function" 03:00.1 Ethernet controller: Huawei Technologies Co., Ltd. Hi1822 Family Virtual Function (rev 45) 03:00.2 Ethernet controller: Huawei Technologies Co., Ltd. Hi1822 Family Virtual Function (rev 45) @@ -391,50 +309,50 @@ To configure SR-IOV passthrough for a NIC, perform the following steps: 03:01.2 Ethernet controller: Huawei Technologies Co., Ltd. Hi1822 Family Virtual Function (rev 45) ``` - 3. 
Select an available VF and write its configuration to the VM configuration file based on its BDF information. For example, the bus ID of the device **03:00.1** is **03**, its slot ID is **00**, and its function ID is **1**. + 3. Select an available VF and write its configuration to the VM configuration file based on its BDF information. For example, the bus ID of the device **03:00.1** is **03**, its slot ID is **00**, and its function ID is **1**. -3. Identify and manage the mapping between PFs and VFs. - 1. Identify VFs corresponding to a PF. The following uses PF 03.00.0 as an example: +3. Identify and manage the mapping between PFs and VFs. + 1. Identify VFs corresponding to a PF. The following uses PF 03.00.0 as an example: - ``` + ```shell # ls -l /sys/bus/pci/devices/0000\:03\:00.0/ ``` The following symbolic link information is displayed. You can obtain the VF IDs (virtfnX) and PCI BDF IDs based on the information. - 2. Identify the PF corresponding to a VF. The following uses VF 03:00.1 as an example: + 2. Identify the PF corresponding to a VF. The following uses VF 03:00.1 as an example: - ``` + ```shell # ls -l /sys/bus/pci/devices/0000\:03\:00.1/ ``` The following symbolic link information is displayed. You can obtain PCI BDF IDs of the PF based on the information. - ``` + ```shell lrwxrwxrwx 1 root root 0 Mar 28 22:44 physfn -> ../0000:03:00.0 ``` - 3. Obtain names of NICs corresponding to the PFs or VFs. For example: + 3. Obtain names of NICs corresponding to the PFs or VFs. For example: - ``` + ```shell # ls /sys/bus/pci/devices/0000:03:00.0/net eth0 ``` - 4. Set the MAC address, VLAN, and QoS information of VFs to ensure that the VFs are in the **Up** state before passthrough. The following uses VF 03:00.1 as an example. The PF is eth0 and the VF ID is **0**. + 4. Set the MAC address, VLAN, and QoS information of VFs to ensure that the VFs are in the **Up** state before passthrough. The following uses VF 03:00.1 as an example. 
The PF is eth0 and the VF ID is **0**. - ``` + ```shell # ip link set eth0 vf 0 mac 90:E2:BA:21:XX:XX #Sets the MAC address. # ifconfig eth0 up # ip link set eth0 vf 0 rate 100 #Sets the VF outbound rate, in Mbit/s. # ip link show eth0 #Views the MAC address, VLAN ID, and QoS information to check whether the configuration is successful. ``` -4. Mount the SR-IOV NIC to the VM. +4. Mount the SR-IOV NIC to the VM. When creating a VM, add the SR-IOV passthrough configuration item to the VM configuration file. - ``` + ```xml @@ -448,58 +366,25 @@ To configure SR-IOV passthrough for a NIC, perform the following steps: **Table 1** SR-IOV configuration options - - - - - - - - - - - - - - - - - - - - - - - -

-Parameter
-Description
-Value
-hostdev.managed
-Two modes for libvirt to process PCI devices.
-no: default value. The passthrough device is managed by the user.
-yes: The passthrough device is managed by libvirt. Set this parameter to yes in the SR-IOV passthrough scenario.
-hostdev.source.address.bus
-Bus ID of the PCI device on the host OS.
-≥ 1
-hostdev.source.address.slot
-Device ID of the PCI device on the host OS.
-≥ 0
-hostdev.source.address.function
-Function ID of the PCI device on the host OS.
-≥ 0
- - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >Disabling the SR-IOV function: - >To disable the SR-IOV function after the VM is stopped and no VF is in use, run the following command: - >The following uses the Hi1822 NIC corresponding network interface name: eth0) as an example: - >``` - >echo 0 > /sys/class/net/eth0/device/sriov_numvfs - >``` + | Parameter | Description | Value | + | -------------------------------- | ----------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | hostdev.managed | Two modes for libvirt to process PCI devices. | **no**: default value. The passthrough device is managed by the user.
**yes**: The passthrough device is managed by libvirt. Set this parameter to **yes** in the SR-IOV passthrough scenario. |
+ | hostdev.source.address.bus | Bus ID of the PCI device on the host OS. | ≥ 1 |
+ | hostdev.source.address.slot | Device ID of the PCI device on the host OS. | ≥ 0 |
+ | hostdev.source.address.function | Function ID of the PCI device on the host OS. | ≥ 0 |
+
+ > ![](./public_sys-resources/icon-note.gif)**NOTE:**
+ > Disabling the SR-IOV function:
+ > To disable the SR-IOV function after the VM is stopped and no VF is in use, run the following command:
+ > The following uses the Hi1822 NIC (corresponding network interface name: eth0) as an example:
+
+ ```shell
+ echo 0 > /sys/class/net/eth0/device/sriov_numvfs
+ ```

#### Configuring SR-IOV Passthrough for the HPRE Accelerator

-The accelerator engine is a hardware acceleration solution provided by TaiShan 200 servers based on the Kunpeng 920 processors. The HPRE accelerator is used to accelerate SSL/TLS applications. It significantly reduces processor consumption and improves processor efficiency.
+The accelerator engine is a hardware acceleration solution provided by TaiShan 200 servers. The HPRE accelerator is used to accelerate SSL/TLS applications. It significantly reduces processor consumption and improves processor efficiency.

On the Kunpeng server, you need to pass through the VFs of the HPRE accelerator on the host to the VM for internal services of the VM.

**Table 1** HPRE accelerator description

@@ -513,14 +398,134 @@ On the Kunpeng server, you need to pass through the VFs of the HPRE accelerator
| VF DeviceID | 0xA259 |
| Maximum number of VFs | An HPRE PF supports a maximum of 63 VFs. |
+> ![](./public_sys-resources/icon-note.gif)**NOTE:**
+> When a VM is using a VF device, the driver on the host cannot be uninstalled, and the accelerator does not support hot swap.
+> VF operation (If **VFNUMS** is **0**, the VF is disabled, and **hpre_num** is used to identify a specific accelerator device): +> +> ```shell +> echo $VFNUMS > /sys/class/uacce/hisi_hpre-$hpre_num/device/sriov_numvfs +> ``` + +### vDPA Passthrough + +#### Overview + +vDPA passthrough connects a device on a host to the vDPA framework, uses the vhost-vdpa driver to present a character device, and configures the character device for VMs to use. vDPA passthrough drives can serve as system or data drives for VMs and support hot expansion of data drives. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->When a VM is using a VF device, the driver on the host cannot be uninstalled, and the accelerator does not support hot swap. ->VF operation (If **VFNUMS** is **0**, the VF is disabled, and **hpre_num** is used to identify a specific accelerator device): ->``` ->echo $VFNUMS > /sys/class/uacce/hisi_hpre-$hpre_num/device/sriov_numvfs ->``` +vDPA passthrough provides the similar I/O performance as VFIO passthrough, provides flexibility of VirtIO devices, and supports live migration of vDPA passthrough devices. +With the SR-IOV solution, vDPA passthrough can virtualize a physical NIC (PF) into multiple NICs (VFs), and then connect the VFs to the vDPA framework for VMs to use. + +#### Procedure + +To configure vDPA passthrough, perform the following steps as user **root**: + +1. Create and configure VFs. For details, see steps 1 to 3 in SR-IOV passthrough. 
The following uses **virtio-net** devices as an example (**08:00.6** and **08:00.7** are PFs, and the others are created VFs): + + ```shell + # lspci | grep -i Eth | grep Virtio + 08:00.6 Ethernet controller: Virtio: Virtio network device + 08:00.7 Ethernet controller: Virtio: Virtio network device + 08:01.1 Ethernet controller: Virtio: Virtio network device + 08:01.2 Ethernet controller: Virtio: Virtio network device + 08:01.3 Ethernet controller: Virtio: Virtio network device + 08:01.4 Ethernet controller: Virtio: Virtio network device + 08:01.5 Ethernet controller: Virtio: Virtio network device + 08:01.6 Ethernet controller: Virtio: Virtio network device + 08:01.7 Ethernet controller: Virtio: Virtio network device + 08:02.0 Ethernet controller: Virtio: Virtio network device + 08:02.1 Ethernet controller: Virtio: Virtio network device + 08:02.2 Ethernet controller: Virtio: Virtio network device + ``` + +2. Unbind the VF drivers and bind the vDPA driver of the hardware vendor. + + ```shell + echo 0000:08:01.1 > /sys/bus/pci/devices/0000\:08\:01.1/driver/unbind + echo 0000:08:01.2 > /sys/bus/pci/devices/0000\:08\:01.2/driver/unbind + echo 0000:08:01.3 > /sys/bus/pci/devices/0000\:08\:01.3/driver/unbind + echo 0000:08:01.4 > /sys/bus/pci/devices/0000\:08\:01.4/driver/unbind + echo 0000:08:01.5 > /sys/bus/pci/devices/0000\:08\:01.5/driver/unbind + echo -n "1af4 1000" > /sys/bus/pci/drivers/vender_vdpa/new_id + ``` + +3. After vDPA devices are bound, you can run the `vdpa` command to query the list of devices managed by vDPA. + + ```shell + # vdpa mgmtdev show + pci/0000:08:01.1: + supported_classes net + pci/0000:08:01.2: + supported_classes net + pci/0000:08:01.3: + supported_classes net + pci/0000:08:01.4: + supported_classes net + pci/0000:08:01.5: + supported_classes net + ``` + +4. After the vDPA devices are created, create the vhost-vDPA devices. 
+ + ```shell + vdpa dev add name vdpa0 mgmtdev pci/0000:08:01.1 + vdpa dev add name vdpa1 mgmtdev pci/0000:08:01.2 + vdpa dev add name vdpa2 mgmtdev pci/0000:08:01.3 + vdpa dev add name vdpa3 mgmtdev pci/0000:08:01.4 + vdpa dev add name vdpa4 mgmtdev pci/0000:08:01.5 + ``` + +5. After the vhost-vDPA devices are created, you can run the `vdpa` command to query the vDPA device list or run the `libvirt` command to query the vhost-vDPA device information. + + ```shell + # vdpa dev show + vdpa0: type network mgmtdev pci/0000:08:01.1 vendor_id 6900 max_vqs 3 max_vq_size 256 + vdpa1: type network mgmtdev pci/0000:08:01.2 vendor_id 6900 max_vqs 3 max_vq_size 256 + vdpa2: type network mgmtdev pci/0000:08:01.3 vendor_id 6900 max_vqs 3 max_vq_size 256 + vdpa3: type network mgmtdev pci/0000:08:01.4 vendor_id 6900 max_vqs 3 max_vq_size 256 + vdpa4: type network mgmtdev pci/0000:08:01.5 vendor_id 6900 max_vqs 3 max_vq_size 256 + + # virsh nodedev-list vdpa + vdpa_vdpa0 + vdpa_vdpa1 + vdpa_vdpa2 + vdpa_vdpa3 + vdpa_vdpa4 + + # virsh nodedev-dumpxml vdpa_vdpa0 + + vdpa_vdpa0 + /sys/devices/pci0000:00/0000:00:0c.0/0000:08:01.1/vdpa0 + pci_0000_08_01_1 + + vhost_vdpa + + + /dev/vhost-vdpa-0 + + + ``` + +6. Mount a vDPA device to the VM. + + When creating a VM, add the item for the vDPA passthrough device to the VM configuration file: + + ```xml + + + + + + ``` + + **Table 4** vDPA configuration description + + | Parameter | Description | Value | + | ------------------ | ---------------------------------------------------- | ----------------- | + | hostdev.source.dev | Path of the vhost-vDPA character device on the host. | /dev/vhost-vdpa-x | + + > ![](./public_sys-resources/icon-note.gif)**NOTE:** + > The procedures of creating and configuring VFs and binding the vDPA drivers vary with the design of hardware vendors. Follow the procedure of the corresponding vendor. 
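The SR-IOV and vDPA procedures above both identify VFs by following the `virtfnN` symbolic links that the kernel creates under a PF's directory in `/sys/bus/pci/devices`. That lookup can be wrapped in a small helper; `vf_list` is an illustrative name, not part of the original procedure, and the sysfs root is a parameter only so the logic can be exercised against a test tree:

```shell
# vf_list ROOT PF_BDF
# Print the BDFs of all VFs behind a PF by resolving the "virtfnN"
# symbolic links created under the PF device directory.
# ROOT is normally /sys/bus/pci/devices.
vf_list() {
    root=$1
    pf=$2
    for link in "$root/$pf"/virtfn*; do
        [ -e "$link" ] || continue
        basename "$(readlink -f "$link")"
    done
}

# Example on real hardware (matches the `ls -l` output in step 3):
# vf_list /sys/bus/pci/devices 0000:03:00.0
```

On a host configured as in step 2, `vf_list /sys/bus/pci/devices 0000:03:00.0` would print lines such as `0000:03:00.1`, one per VF.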
## Managing VM USB @@ -532,19 +537,19 @@ To facilitate the use of USB devices such as USB key devices and USB mass storag A USB controller is a virtual controller that provides specific USB functions for USB devices on VMs. To use USB devices on a VM, you must configure USB controllers for the VM. Currently, openEuler supports the following types of USB controllers: -- Universal host controller interface (UHCI): also called the USB 1.1 host controller specification. -- Enhanced host controller interface (EHCI): also called the USB 2.0 host controller specification. -- Extensible host controller interface (xHCI): also called the USB 3.0 host controller specification. +- Universal host controller interface (UHCI): also called the USB 1.1 host controller specification. +- Enhanced host controller interface (EHCI): also called the USB 2.0 host controller specification. +- Extensible host controller interface (xHCI): also called the USB 3.0 host controller specification. #### Precautions -- The host server must have USB controller hardware and modules that support USB 1.1, USB 2.0, and USB 3.0 specifications. -- You need to configure USB controllers for the VM by following the order of USB 1.1, USB 2.0, and USB 3.0. -- An xHCI controller has eight ports and can be mounted with a maximum of four USB 3.0 devices and four USB 2.0 devices. An EHCI controller has six ports and can be mounted with a maximum of six USB 2.0 devices. A UHCI controller has two ports and can be mounted with a maximum of two USB 1.1 devices. -- On each VM, only one USB controller of the same type can be configured. -- USB controllers cannot be hot swapped. -- If the USB 3.0 driver is not installed on a VM, the xHCI controller may not be identified. For details about how to download and install the USB 3.0 driver, refer to the official description provided by the corresponding OS distributor. 
-- To ensure the compatibility of the OS, set the bus ID of the USB controller to **0** when configuring a USB tablet for the VM. The tablet is mounted to the USB 1.1 controller by default. +- The host server must have USB controller hardware and modules that support USB 1.1, USB 2.0, and USB 3.0 specifications. +- You need to configure USB controllers for the VM by following the order of USB 1.1, USB 2.0, and USB 3.0. +- An xHCI controller has eight ports and can be mounted with a maximum of four USB 3.0 devices and four USB 2.0 devices. An EHCI controller has six ports and can be mounted with a maximum of six USB 2.0 devices. A UHCI controller has two ports and can be mounted with a maximum of two USB 1.1 devices. +- On each VM, only one USB controller of the same type can be configured. +- USB controllers cannot be hot swapped. +- If the USB 3.0 driver is not installed on a VM, the xHCI controller may not be identified. For details about how to download and install the USB 3.0 driver, refer to the official description provided by the corresponding OS distributor. +- To ensure the compatibility of the OS, set the bus ID of the USB controller to **0** when configuring a USB tablet for the VM. The tablet is mounted to the USB 1.1 controller by default. #### Configuration Methods @@ -552,21 +557,21 @@ The following describes the configuration items of USB controllers for a VM. You The configuration item of the USB 1.1 controller (UHCI) in the XML configuration file is as follows: -``` +```xml ``` The configuration item of the USB 2.0 controller (EHCI) in the XML configuration file is as follows: -``` +```xml ``` The configuration item of the USB 3.0 controller (xHCI) in the XML configuration file is as follows: -``` +```xml ``` @@ -579,10 +584,10 @@ After USB controllers are configured for a VM, a physical USB device on the host #### Precautions -- A USB device can be assigned to only one VM. -- A VM with a USB passthrough device does not support live migration. 
-- VM creation fails if no USB passthrough devices exist in the VM configuration file. -- Forcibly hot removing a USB storage device that is performing read or write operation may damage files in the USB storage device. +- A USB device can be assigned to only one VM. +- A VM with a USB passthrough device does not support live migration. +- VM creation fails if no USB passthrough devices exist in the VM configuration file. +- Forcibly hot removing a USB storage device that is performing read or write operation may damage files in the USB storage device. #### Configuration Description @@ -590,7 +595,7 @@ The following describes the configuration items of a USB device for a VM. Description of the USB device in the XML configuration file: -``` +```xml
@@ -599,23 +604,23 @@ Description of the USB device in the XML configuration file: ``` -- **
**: *m_ indicates the USB bus address on the host, and _n* indicates the device ID. -- **
**: indicates that the USB device is to be mounted to the USB controller specified on the VM. *x_ indicates the controller ID, which corresponds to the index number of the USB controller configured on the VM. _y* indicates the port address. When configuring a USB passthrough device, you need to set this parameter to ensure that the controller to which the device is mounted is as expected. +- **
**: _m_ indicates the USB bus address on the host, and _n_ indicates the device ID. +- **
**: indicates that the USB device is to be mounted to the USB controller specified on the VM. _x_ indicates the controller ID, which corresponds to the index number of the USB controller configured on the VM. _y_ indicates the port address. When configuring a USB passthrough device, you need to set this parameter to ensure that the controller to which the device is mounted is as expected.

#### Configuration Methods

To configure USB passthrough, perform the following steps:

-1. Configure USB controllers for the VM. For details, see [Configuring USB Controllers](#configuring-usb-controllers).
-2. Query information about the USB device on the host.
+1. Configure USB controllers for the VM. For details, see [Configuring USB Controllers](#configuring-usb-controllers).
+2. Query information about the USB device on the host.

Run the **lsusb** command (the **usbutils** software package needs to be installed) to query the USB device information on the host, including the bus address, device address, device vendor ID, device ID, and product description. For example:

- ```
+ ```shell
# lsusb
```

- ```
+ ```text
Bus 008 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 007 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
@@ -631,71 +636,67 @@ To configure USB passthrough, perform the following steps:
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
```

-3. Prepare the XML description file of the USB device. Before hot removing the device, ensure that the USB device is not in use. Otherwise, data may be lost.
-4. Run the hot swapping commands.
+3. Prepare the XML description file of the USB device. Before hot removing the device, ensure that the USB device is not in use. Otherwise, data may be lost.
+4. Run the hot swapping commands.

Take a VM whose name is **openEulerVM** as an example. The corresponding configuration file is **usb.xml**.
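As a concrete illustration of the description above, a minimal **usb.xml** might look like the following sketch. The host bus/device numbers and the guest controller/port are example values only (a hypothetical device at host bus 1, device 3, mounted on controller 0, port 1); verify the exact element set against the libvirt version in use:

```xml
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <!-- Host-side location: bus m / device n from the lsusb output (example values) -->
    <address bus='1' device='3'/>
  </source>
  <!-- Guest-side location: bus x is the index of a USB controller configured
       on the VM; port y selects the port on that controller -->
  <address type='usb' bus='0' port='1'/>
</hostdev>
```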
- - Hot adding of the USB device takes effect only for the current running VM. After the VM is restarted, hot add the USB device again. + - Hot adding of the USB device takes effect only for the current running VM. After the VM is restarted, hot add the USB device again. - ``` + ```shell # virsh attach-device openEulerVM usb.xml --live ``` - - Complete persistency configurations for hot adding of the USB device. After the VM is restarted, the USB device is automatically assigned to the VM. + - Complete persistency configurations for hot adding of the USB device. After the VM is restarted, the USB device is automatically assigned to the VM. - ``` + ```shell # virsh attach-device openEulerVM usb.xml --config ``` - - Hot removing of the USB device takes effect only for the current running VM. After the VM is restarted, the USB device with persistency configurations is automatically assigned to the VM. + - Hot removing of the USB device takes effect only for the current running VM. After the VM is restarted, the USB device with persistency configurations is automatically assigned to the VM. - ``` + ```shell # virsh detach-device openEulerVM usb.xml --live ``` - - Complete persistency configurations for hot removing of the USB device. + - Complete persistency configurations for hot removing of the USB device. - ``` + ```shell # virsh detach-device openEulerVM usb.xml --config ``` - - ## Storing Snapshots ### Overview The VM system may be damaged due to virus damage, system file deletion by mistake, or incorrect formatting. As a result, the system cannot be started. To quickly restore a damaged system, openEuler provides the storage snapshot function. openEuler can create a snapshot that records the VM status at specific time points without informing users (usually within a few seconds). The snapshot can be used to restore the VM to the status when the snapshots were taken. 
For example, a damaged system can be quickly restored with the help of snapshots, which improves system reliability. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->Currently, storage snapshots can be QCOW2 and RAW images only. Block devices are not supported. +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> Currently, storage snapshots can be QCOW2 and RAW images only. Block devices are not supported. ### Procedure To create VM storage snapshots, perform the following steps: -1. Log in to the host and run the **virsh domblklist** command to query the disk used by the VM. +1. Log in to the host and run the **virsh domblklist** command to query the disk used by the VM. - ``` + ```shell # virsh domblklist openEulerVM Target Source --------------------------------------------- vda /mnt/openEuler-image.qcow2 ``` +2. Run the following command to create the VM disk snapshot **openEuler-snapshot1.qcow2**: -1. Run the following command to create the VM disk snapshot **openEuler-snapshot1.qcow2**: - - ``` + ```shell # virsh snapshot-create-as --domain openEulerVM --disk-only --diskspec vda,snapshot=external,file=/mnt/openEuler-snapshot1.qcow2 --atomic Domain snapshot 1582605802 created ``` +3. Run the following command to query disk snapshots: -1. Run the following command to query disk snapshots: - - ``` + ```shell # virsh snapshot-list openEulerVM Name Creation Time State --------------------------------------------------------- @@ -736,11 +737,11 @@ A cloud disk that may cause storage plane link disconnection is used as the back - After a storage fault occurs, the following problems cannot be solved although disk I/O suspension occurs: - 1. Failed to execute advanced storage features. + 1. Failed to execute advanced storage features. 
- Advanced features include: virtual disk hot swap, virtual disk creation, VM startup, VM shutdown, VM forcible shutdown, VM hibernation, VM wakeup, VM storage live migration, VM storage live migration cancellation, VM storage snapshot creation, VM storage snapshot combination, VM disk capacity query, online disk capacity expansion, virtual CD-ROM drive insertion, and CD-ROM drive ejection from the VM. + Advanced features include: virtual disk hot swap, virtual disk creation, VM startup, VM shutdown, VM forcible shutdown, VM hibernation, VM wakeup, VM storage live migration, VM storage live migration cancellation, VM storage snapshot creation, VM storage snapshot combination, VM disk capacity query, online disk capacity expansion, virtual CD-ROM drive insertion, and CD-ROM drive ejection from the VM. - 2. Failed to execute the VM lifecycle. + 2. Failed to execute the VM lifecycle. - When a live migration is initiated for a VM configured with disk I/O suspension, the disk I/O suspension configuration must be the same as that of the source host in the XML configuration of the destination disk. diff --git a/docs/en/docs/Virtualization/managing-vms.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/managing-vms.md similarity index 95% rename from docs/en/docs/Virtualization/managing-vms.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/managing-vms.md index 33405a73e94729f3ed25d584fbb32cf3187352b6..3ffcf6ae8617a823cc2868ebf94f27336efd6dac 100644 --- a/docs/en/docs/Virtualization/managing-vms.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/managing-vms.md @@ -72,8 +72,8 @@ In libvirt, a created VM instance is called a **domain**, which describes the c
->![](./public_sys-resources/icon-note.gif) **NOTE:** ->Run the **virsh** command to query the VM ID and UUID. For details, see [Querying VM Information](#querying-vm-information). +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> Run the **virsh** command to query the VM ID and UUID. For details, see [Querying VM Information](#querying-vm-information). ### Management Commands @@ -496,14 +496,14 @@ Before logging in to a VM using a client, such as RealVNC or TightVNC, ensure th - You have obtained the VNC listening port of the VM. This port is automatically allocated when the client is started. Generally, the port number is **5900 + x** \(_x_ is a positive integer and increases in ascending order based on the VM startup sequence. **5900** is invisible to users.\) - If a password has been set for the VNC, you also need to obtain the VNC password of the VM. - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >To set a password for the VM VNC, edit the XML configuration file of the VM. That is, add the **passwd** attribute to the **graphics** element and set the attribute value to the password to be configured. For example, to set the VNC password of the VM to **n8VfjbFK**, configure the XML file as follows: - > - >```shell - > - > - > - >``` + > ![](./public_sys-resources/icon-note.gif)**NOTE:** + > To set a password for the VM VNC, edit the XML configuration file of the VM. That is, add the **passwd** attribute to the **graphics** element and set the attribute value to the password to be configured. For example, to set the VNC password of the VM to **n8VfjbFK**, configure the XML file as follows: + + ```xml + + + + ``` #### Procedure @@ -516,12 +516,12 @@ Before logging in to a VM using a client, such as RealVNC or TightVNC, ensure th :3 ``` - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >To log in to the VNC, you need to configure firewall rules to allow the connection of the VNC port. 
The reference command is as follows, where _X_ is **5900 + Port number**, for example, **5903**. - > - >```shell - >firewall-cmd --zone=public --add-port=X/tcp - >``` + > ![](./public_sys-resources/icon-note.gif)**NOTE:** + > To log in to the VNC, you need to configure firewall rules to allow the connection of the VNC port. The reference command is as follows, where _X_ is **5900 + Port number**, for example, **5903**. + + ```shell + firewall-cmd --zone=public --add-port=X/tcp + ``` 2. Start the VncViewer software and enter the IP address and port number of the host. The format is **host IP address:port number**, for example, **10.133.205.53:3**. 3. Click **OK** and enter the VNC password \(optional\) to log in to the VM VNC. @@ -532,10 +532,10 @@ Before logging in to a VM using a client, such as RealVNC or TightVNC, ensure th By default, the VNC server and client transmit data in plaintext. Therefore, the communication content may be intercepted by a third party. To improve security, openEuler allows the VNC server to configure the Transport Layer Security \(TLS\) mode for encryption and authentication. TLS implements encrypted communication between the VNC server and client to prevent communication content from being intercepted by third parties. ->![](./public_sys-resources/icon-note.gif) **NOTE:** +> ![](./public_sys-resources/icon-note.gif)**NOTE:** > ->- To use the TLS encryption authentication mode, the VNC client must support the TLS mode \(for example, TigerVNC\). Otherwise, the VNC client cannot be connected. ->- The TLS encryption authentication mode is configured at the host level. After this feature is enabled, the TLS encryption authentication mode is enabled for the VNC clients of all VMs running on the host. +> - To use the TLS encryption authentication mode, the VNC client must support the TLS mode \(for example, TigerVNC\). Otherwise, the VNC client cannot be connected. +> - The TLS encryption authentication mode is configured at the host level. 
After this feature is enabled, the TLS encryption authentication mode is enabled for the VNC clients of all VMs running on the host.

 #### Procedure

@@ -552,8 +552,8 @@ To enable the TLS encryption authentication mode for the VNC, perform the follow

 2. Create a certificate and a private key file for the VNC. The following uses GNU TLS as an example.

-    >![](./public_sys-resources/icon-note.gif) **NOTE:**
-    >To use GNU TLS, install the gnu-utils software package in advance.
+    > ![](./public_sys-resources/icon-note.gif)**NOTE:**
+    > To use GNU TLS, install the gnutls-utils software package in advance.

    1. Create a certificate file issued by the Certificate Authority \(CA\).

@@ -644,9 +644,9 @@ To enable the TLS encryption authentication mode for the VNC, perform the follow

 5. Copy the generated client certificates **ca-cert.pem**, **client-cert.pem**, and **client-key.pem** to the VNC client. After the TLS certificate of the VNC client is configured, you can use VNC TLS to log in to the VM.

-    >![](./public_sys-resources/icon-note.gif) **NOTE:**
-    >- For details about how to configure the VNC client certificate, see the usage description of each client.
-    >- For details about how to log in to the VM, see [Logging In Using VNC Passwords](#logging-in-using-vnc-passwords).
+    > ![](./public_sys-resources/icon-note.gif)**NOTE:**
+    > - For details about how to configure the VNC client certificate, see the usage description of each client.
+    > - For details about how to log in to the VM, see [Logging In Using VNC Passwords](#logging-in-using-vnc-passwords).
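Step 2 of the TLS procedure drives GnuTLS `certtool` with small template files. A hedged sketch of the CA portion only (`ca`, `cn`, and `cert_signing_key` are real certtool template directives, but the file names and the CN string here are illustrative assumptions, not values mandated by this guide):

```shell
# Sketch: create a self-signed CA for VNC TLS with GnuTLS certtool.
# File names and the CN value are illustrative; adapt them to your deployment.
cat > ca.info <<'EOF'
cn = "VNC CA"
ca
cert_signing_key
EOF

if command -v certtool >/dev/null 2>&1; then
    # Generate the CA private key, then self-sign a CA certificate from it.
    certtool --generate-privkey > ca-key.pem
    certtool --generate-self-signed --load-privkey ca-key.pem \
             --template ca.info --outfile ca-cert.pem
    ls -l ca-cert.pem ca-key.pem
else
    echo "certtool not found: install the GnuTLS utility package first"
fi
```

The server and client certificates are produced with the same template-plus-certtool pattern, signed by this CA instead of self-signed.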
## VM Secure Boot diff --git a/docs/en/docs/Administration/public_sys-resources/icon-caution.gif b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/public_sys-resources/icon-caution.gif similarity index 100% rename from docs/en/docs/Administration/public_sys-resources/icon-caution.gif rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/public_sys-resources/icon-caution.gif diff --git a/docs/en/Virtualization/VirtualizationPlatform/Virtualization/public_sys-resources/icon-note.gif b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/public_sys-resources/icon-note.gif new file mode 100644 index 0000000000000000000000000000000000000000..6314297e45c1de184204098efd4814d6dc8b1cda Binary files /dev/null and b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/public_sys-resources/icon-note.gif differ diff --git a/docs/en/docs/Virtualization/Skylark.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/skylark.md similarity index 90% rename from docs/en/docs/Virtualization/Skylark.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/skylark.md index 6528aeb5c97d4aa7970e868557bfef731c6ecd8d..20abdd7168fc354deaca22e1f7eeeb07e7933dc6 100644 --- a/docs/en/docs/Virtualization/Skylark.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/skylark.md @@ -1,12 +1,12 @@ # Skylark - [Skylark](#skylark) - - [Skylark Introduction](#skylark-introduction) - - [Architecture and Features](#architecture-and-features) - - [Skylark Installation](#skylark-installation) - - [Skylark Configuration](#skylark-configuration) - - [Skylark Usage](#skylark-usage) - - [Best Practices](#best-practices) + - [Skylark Introduction](#skylark-introduction) + - [Architecture and Features](#architecture-and-features) + - [Skylark Installation](#skylark-installation) + - [Skylark Configuration](#skylark-configuration) + - [Skylark Usage](#skylark-usage) + - [Best Practices](#best-practices) ## Skylark 
Introduction @@ -27,6 +27,7 @@ For details about how to better use the priority feature of Skylark in actual ap ### Overall Architecture The core class of Skylark is `QoSManager`. Class members include data collection class instances, QoS analysis class instances, QoS control class instances, and task scheduling class instances. + - `DataCollector`: data collection class. It has the `HostInfo` and `GuestInfo` members, which collect host information and VM information, respectively. - `PowerAnalyzer`: power consumption analysis class, which analyzes power consumption interference and low-priority VMs to be restricted. - `CpuController`: CPU bandwidth control class, which limits the CPU bandwidth of low-priority VMs. @@ -34,6 +35,7 @@ The core class of Skylark is `QoSManager`. Class members include data collection - `BackgroundScheduler`: task scheduling class, which periodically drives the preceding modules to continuously manage QoS. After checking the host environment, Skylark creates a daemon process. The daemon has a main scheduling thread and one or more job threads. + - The main scheduling thread is unique. It connects to libvirt, creates and initializes the `QosManager` class instance, and then starts to drive the Job threads. - Each Job thread periodically executes a QoS management task. @@ -69,6 +71,7 @@ During initialization, Skylark sets the **cpu.qos_level** field of the slice lev ### Hardware Requirements Processor architecture: AArch64 or x86_64 + - For Intel processors, the RDT function must be supported. - For the AArch64 architecture, only Kunpeng 920 processor is supported, and the BIOS must be upgraded to 1.79 or later to support the MPAM function. @@ -114,20 +117,20 @@ After the Skylark component is installed, you can modify the configuration file - **TDP_THRESHOLD** is a floating point number used to control the maximum power consumption of a VM. 
When the power consumption of the host exceeds **TDP * TDP_THRESHOLD**, a TDP hotspot occurs, and a power consumption control operation is triggered. The value ranges from 0.8 to 1, with the default value being 0.98. - **FREQ_THRESHOLD** is a floating point number used to control the minimum CPU frequency when a TDP hotspot occurs on the host. The value ranges from 0.8 to 1, with the default value being 0.98. - 1. When the frequency of some CPUs is lower than **max_freq * FREQ_THRESHOLD**, Skylark limits the CPU bandwidth of low-priority VMs running on these CPUs. - 2. If such a CPU does not exist, Skylark limits the CPU bandwidth of some low-priority VMs based on the CPU usage of low-priority VMs. + 1. When the frequency of some CPUs is lower than **max_freq * FREQ_THRESHOLD**, Skylark limits the CPU bandwidth of low-priority VMs running on these CPUs. + 2. If such a CPU does not exist, Skylark limits the CPU bandwidth of some low-priority VMs based on the CPU usage of low-priority VMs. - **QUOTA_THRESHOLD** is a floating point number used to control the CPU bandwidth that a restricted low-priority VM can obtain (CPU bandwidth before restriction x **QUOTA_THRESHOLD**). The value ranges from 0.8 to 1, with the default value being 0.9. - **ABNORMAL_THRESHOLD** is an integer used to control the number of low-priority VM restriction periods. The value ranges from 1 to 5, with the default value being 3. - 1. In each power consumption control period, if a low-priority VM is restricted, its number of remaining restriction periods is updated to **ABNORMAL_THRESHOLD**. - 2. Otherwise, its number of remaining restriction periods decreases by 1. When the number of remaining restriction periods of the VM is 0, the CPU bandwidth of the VM is restored to the value before the restriction. + 1. In each power consumption control period, if a low-priority VM is restricted, its number of remaining restriction periods is updated to **ABNORMAL_THRESHOLD**. + 2. 
Otherwise, its number of remaining restriction periods decreases by 1. When the number of remaining restriction periods of the VM is 0, the CPU bandwidth of the VM is restored to the value before the restriction.

### LLC/MB Interference Control

-Skylark's interference control on LLC/MB depends on the RDT/MPAM function provided by hardware. For Intel x86_64 processors, **rdt=cmt,mbmtotal,mbmlocal,l3cat,mba** needs to be added to kernel command line parameters. For Kunpeng920 processors, **mpam=acpi** needs to be added to kernel command line parameters.
+Skylark's interference control on LLC/MB depends on the RDT/MPAM function provided by hardware. For Intel x86_64 processors, **rdt=cmt,mbmtotal,mbmlocal,l3cat,mba** needs to be added to kernel command line parameters. For Kunpeng 920 processors, **mpam=acpi** needs to be added to kernel command line parameters.

-- **MIN_LLC_WAYS_LOW_VMS** is an integer used to control the number of LLC ways that can be accessed by low-priority VMs. The value ranges from 1 to 3, with the default value being 2. During initialization, Skylark limits the number of accessible LLC ways for low-priority VMs to this value.
+- **MIN_LLC_WAYS_LOW_VMS** is an integer used to control the number of LLC ways that can be accessed by low-priority VMs. The value ranges from 1 to 3, with the default value being 2. During initialization, Skylark limits the number of accessible LLC ways for low-priority VMs to this value.

- **MIN_MBW_LOW_VMS** is a floating point number used to control the memory bandwidth ratio available to low-priority VMs. The value ranges from 0.1 to 0.2, with the default value being 0.1. Skylark limits the memory bandwidth of low-priority VMs based on this value during initialization.

@@ -192,4 +195,4 @@ Skylark detects VM creation events, manages VMs of different priorities, and per

 To ensure optimal performance of high-priority VMs, you are advised to bind each vCPU of high-priority VMs to a physical CPU.
To enable low-priority VMs to make full use of idle physical resources, you are advised to bind vCPUs of low-priority VMs to CPUs that are bound to high-priority VMs.

-To ensure that low-priority VMs are scheduled when high-priority VMs occupy CPU resources for a long time, you are advised to reserve a small number of for low-priority VMs.
\ No newline at end of file
+To ensure that low-priority VMs are scheduled when high-priority VMs occupy CPU resources for a long time, you are advised to reserve a small number of physical CPUs for low-priority VMs.
diff --git a/docs/en/docs/Virtualization/system-resource-management.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/system-resource-management.md
similarity index 72%
rename from docs/en/docs/Virtualization/system-resource-management.md
rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/system-resource-management.md
index a581202aa71ea32a1e8ba6b943ccd4125d9f8b77..81adc3a16bf6aaeeec2352b41380077092cb683f 100644
--- a/docs/en/docs/Virtualization/system-resource-management.md
+++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/system-resource-management.md
@@ -1,11 +1,11 @@

 # system Resource Management

-The **libvirt** command manages VM system resources, such as vCPU and virtual memory resources.
+The `libvirt` command manages VM system resources, such as vCPU and virtual memory resources.

 Before you start:

 - Ensure that the libvirtd daemon is running on the host.
-- Run the **virsh list --all** command to check that the VM has been defined.
+- Run the `virsh list --all` command to check that the VM has been defined.

 ## Managing vCPU

@@ -37,7 +37,7 @@ Change the value of **cpu\_shares** allocated to the VM to balance the schedul

    iothread_quota : -1
    ```

-- Online modification: Run the **virsh schedinfo** command with the **--live** parameter to modify the CPU share of a running VM.
+- Online modification: Run the `virsh schedinfo` command with the `--live` parameter to modify the CPU share of a running VM.

    ```shell
    virsh schedinfo <domain> --live cpu_shares=<number>
@@ -61,7 +61,7 @@

    The modification of the **cpu\_shares** value takes effect immediately. The running time of the _openEulerVM_ is twice the original running time. However, the modification will become invalid after the VM is shut down and restarted.

-- Permanent modification: Run the **virsh schedinfo** command with the **--config** parameter to change the CPU share of the VM in the libvirt internal configuration.
+- Permanent modification: Run the `virsh schedinfo` command with the `--config` parameter to change the CPU share of the VM in the libvirt internal configuration.

    ```shell
    virsh schedinfo <domain> --config cpu_shares=<number>
@@ -93,7 +93,7 @@ You can bind the QEMU main process to a specific physical CPU range, ensuring th

 #### Procedure

-Run the **virsh emulatorpin** command to bind the QEMU main process to a physical CPU.
+Run the `virsh emulatorpin` command to bind the QEMU main process to a physical CPU.

 - Check the range of the physical CPU bound to the QEMU process:

@@ -106,7 +106,7 @@ Run the **virsh emulatorpin** command to bind the QEMU main process to a physi

    This indicates that the QEMU main process corresponding to VM **openEulerVM** can be scheduled on all physical CPUs of the host.

-- Online binding: Run the **virsh emulatorpin** command with the **--live** parameter to modify the binding relationship between the QEMU process and the running VM.
+- Online binding: Run the `virsh emulatorpin` command with the `--live` parameter to modify the binding relationship between the QEMU process and the running VM.
    ```shell
    $ virsh emulatorpin openEulerVM --live 2-3
@@ -119,7 +119,7 @@

    The preceding commands bind the QEMU process corresponding to VM **openEulerVM** to physical CPUs **2** and **3**. That is, the QEMU process is scheduled only on the two physical CPUs. The binding relationship takes effect immediately but becomes invalid after the VM is shut down and restarted.

-- Permanent binding: Run the **virsh emulatorpin** command with the **--config** parameter to modify the binding relationship between the VM and the QEMU process in the libvirt internal configuration.
+- Permanent binding: Run the `virsh emulatorpin` command with the `--config` parameter to modify the binding relationship between the VM and the QEMU process in the libvirt internal configuration.

    ```shell
    $ virsh emulatorpin openEulerVM --config 0-3,^1
@@ -140,7 +140,7 @@ The vCPU of a VM is bound to a physical CPU. That is, the vCPU is scheduled only

 #### Procedure

-Run the **virsh vcpupin** command to adjust the binding relationship between vCPUs and physical CPUs.
+Run the `virsh vcpupin` command to adjust the binding relationship between vCPUs and physical CPUs.

 - View the vCPU binding information of the VM.

@@ -156,7 +156,7 @@ Run the **virsh vcpupin** command to adjust the binding relationship between v

    This indicates that all vCPUs of VM **openEulerVM** can be scheduled on all physical CPUs of the host.

-- Online adjustment: Run the **vcpu vcpupin** command with the **--live** parameter to modify the vCPU binding relationship of a running VM.
+- Online adjustment: Run the `virsh vcpupin` command with the `--live` parameter to modify the vCPU binding relationship of a running VM.

    ```shell
    $ virsh vcpupin openEulerVM --live 0 2-3
@@ -172,7 +172,7 @@ Run the **virsh vcpupin** command to adjust the binding relationship between v

    The preceding commands bind vCPU **0** of VM **openEulerVM** to pCPU **2** and pCPU **3**.
That is, vCPU **0** is scheduled only on the two physical CPUs. The binding relationship takes effect immediately but becomes invalid after the VM is shut down and restarted. -- Permanent adjustment: Run the **virsh vcpupin** command with the **--config** parameter to modify the vCPU binding relationship of the VM in the libvirt internal configuration. +- Permanent adjustment: Run the `virsh vcpupin` command with the `--config` parameter to modify the vCPU binding relationship of the VM in the libvirt internal configuration. ```shell $ virsh vcpupin openEulerVM --config 0 0-3,^1 @@ -188,55 +188,59 @@ Run the **virsh vcpupin** command to adjust the binding relationship between v The preceding commands bind vCPU **0** of VM **openEulerVM** to physical CPUs **0**, **2**, and **3**. That is, vCPU **0** is scheduled only on the three physical CPUs. The modification of the binding relationship does not take effect immediately. Instead, the modification takes effect after the next startup of the VM and takes effect permanently. -### CPU Hot Add +### CPU Hotplug #### Overview -This feature allows users to hot add CPUs to a running VM without affecting its normal running. When the internal service pressure of a VM keeps increasing, all CPUs will be overloaded. To improve the computing capability of the VM, you can use the CPU hot add function to increase the number of CPUs on the VM without stopping it. +CPU hotplug allows you to increase or decrease the number of CPUs for a running VM without affecting services on it. When the internal service pressure rises to a level where existing CPUs become saturated, CPU hotplug can dynamically boost the computing power of a VM, guaranteeing stable service throughput. CPU hotplug also enables the removal of unused computing resources during low service load, minimizing computing costs. + +Note: CPU hotplug is added for the AArch64 architecture in openEuler 24.03 LTS. 
However, the new implementation of the mainline community is not compatible with that of earlier openEuler versions. Therefore, the guest OS must match the host OS. That is, the guest and host machines must both run openEuler 24.03 LTS or later versions, or versions earlier than openEuler 24.03 LTS. #### Constraints - For processors using the AArch64 architecture, the specified VM chipset type \(machine\) needs to be virt-4.1 or a later version when a VM is created. For processors using the x86\_64 architecture, the specified VM chipset type \(machine\) needs to be pc-i440fx-1.5 or a later version when a VM is created. -- When configuring Guest NUMA, you need to configure the vCPUs that belong to the same socket in the same vNode. Otherwise, the VM may be soft locked up after the CPU is hot added, which may cause the VM panic. -- VMs do not support CPU hot add during migration, hibernation, wake-up, or snapshot. +- The initial CPU of an AArch64 VM cannot be hot removed. +- When configuring Guest NUMA, you need to configure the vCPUs that belong to the same socket in the same vNode. Otherwise, the VM may be soft locked up after the CPU is hot added or removed, which may cause the VM panic. +- VMs do not support CPU hotplug during migration, hibernation, wake-up, or snapshot. - Whether the hot added CPU can automatically go online depends on the VM OS logic rather than the virtualization layer. - CPU hot add is restricted by the maximum number of CPUs supported by the Hypervisor and GuestOS. - When a VM is being started, stopped, or restarted, the hot added CPU may become invalid. However, the hot added CPU takes effect after the VM is restarted. -- During VM CPU hot add, if the number of added CPUs is not an integer multiple of the number of cores in the VM CPU topology configuration item, the CPU topology displayed in the VM may be disordered. You are advised to add CPUs whose number is an integer multiple of the number of cores each time. 
-- If the hot added CPU needs to take effect online and is still valid after the VM is restarted, the --config and --live options need to be transferred to the virsh setvcpus API to persist the hot added CPU.
+- CPU hotplug may time out when a VM is starting, shutting down, or restarting. Retry when the VM is in the normal running state.
+- During VM CPU hotplug, if the number of added or removed CPUs is not an integer multiple of the number of cores in the VM CPU topology configuration item, the CPU topology displayed in the VM may be disordered. You are advised to add or remove CPUs whose number is an integer multiple of the number of cores each time.
+- If the hot added or removed CPU needs to take effect online and is still valid after the VM is restarted, the `--config` and `--live` options need to be passed to the `virsh setvcpus` interface to persist the hot added or removed CPU.

 #### Procedure

 **VM XML Configuration**

-1. To use the CPU hot add function, configure the number of CPUs, the maximum number of CPUs supported by the VM, and the VM chipset type when creating the VM. (For the AArch64 architecture, the virt-4.1 or a later version is required. For the x86\_64 architecture, the pc-i440fx-1.5 or later version is required. The AArch64 VM is used as an example. The configuration template is as follows:
+1. To use the CPU hot add function, configure the number of CPUs, the maximum number of CPUs supported by the VM, and the VM chipset type when creating the VM. (For the AArch64 architecture, virt-4.2 or a later version is required. For the x86\_64 architecture, pc-i440fx-1.5 or a later version is required. The AArch64 VM is used as an example.) The configuration template is as follows:

    ```xml
    ...
    <vcpu placement='static' current='m'>n</vcpu>
-    <type arch='aarch64' machine='virt-4.1'>hvm</type>
+    <type arch='aarch64' machine='virt-4.2'>hvm</type>
    ...
    ```

-    >![](./public_sys-resources/icon-note.gif) **Note**
+    > ![](./public_sys-resources/icon-note.gif)**Note**
    >
-
- >- m indicates the current number of CPUs on the VM, that is, the default number of CPUs after the VM is started. n indicates the maximum number of CPUs that can be hot added to a VM. The value cannot exceed the maximum CPU specifications supported by the Hypervisor or GuestOS. n is greater than or equal to m. + > - The value of placement must be static. + > - m indicates the current number of CPUs on the VM, that is, the default number of CPUs after the VM is started. n indicates the maximum number of CPUs that can be hot added to a VM. The value cannot exceed the maximum CPU specifications supported by the Hypervisor or GuestOS. n is greater than or equal to m. For example, if the current number of CPUs of a VM is 4 and the maximum number of hot added CPUs is 64, the XML configuration is as follows: ```xml - …… + ... 64 - hvm + hvm - …… + ... ``` **Hot Adding and Bringing CPUs Online** @@ -248,15 +252,15 @@ This feature allows users to hot add CPUs to a running VM without affecting its ACTION=="add", SUBSYSTEM=="cpu", ATTR{online}="1" ``` - >![](./public_sys-resources/icon-note.gif) **Note** - >If you do not use the udev rules, you can use the root permission to manually bring the hot added CPU online by running the following commands: - > - >```shell - >for i in `grep -l 0 /sys/devices/system/cpu/cpu*/online` - >do - > echo 1 > $i - >done - >``` + > ![](./public_sys-resources/icon-note.gif)**Note** + > If you do not use the udev rules, you can use the root permission to manually bring the hot added CPU online by running the following commands: + + ```shell + for i in `grep -l 0 /sys/devices/system/cpu/cpu*/online` + do + echo 1 > $i + done + ``` 2. Use the virsh tool to hot add CPUs to the VM. 
For example, to set the number of CPUs after hot adding to 6 on the VM named openEulerVM and make the hot add take effect online, run the following command:

@@ -264,17 +268,37 @@ This feature allows users to hot add CPUs to a running VM without affecting its

    virsh setvcpus openEulerVM 6 --live
    ```

-    >![](./public_sys-resources/icon-note.gif) **Note**
-    >The format for running the virsh setvcpus command to hot add a VM CPU is as follows:
-    >
-    >```shell
-    >virsh setvcpus <domain> <count> [--config] [--live]
-    >```
-    >
-    >- domain: Parameter, which is mandatory. Specifies the name of a VM.
-    >- count: Parameter, which is mandatory. Specifies the number of target CPUs, that is, the number of CPUs after hot adding.
-    >- --config: Option, which is optional. This parameter is still valid when the VM is restarted.
-    >- --live: Option, which is optional. The configuration takes effect online.
+    > ![](./public_sys-resources/icon-note.gif)**Note**
+    > The format for running the `virsh setvcpus` command to hot add VM CPUs is as follows:
+
+    ```shell
+    virsh setvcpus <domain> <count> [--config] [--live]
+    ```
+
+    > - `domain`: Parameter, which is mandatory. Specifies the name of a VM.
+    > - `count`: Parameter, which is mandatory. Specifies the number of target CPUs, that is, the number of CPUs after hot adding.
+    > - `--config`: Option, which is optional. This parameter is still valid when the VM is restarted.
+    > - `--live`: Option, which is optional. The configuration takes effect online.
+
+**Hot Removing CPUs**
+
+Use the virsh tool to hot remove CPUs from the VM. For example, to set the number of CPUs after hot removal to 4 on the VM named openEulerVM, run the following command:
+
+```shell
+virsh setvcpus openEulerVM 4 --live
+```
+
+> ![](./public_sys-resources/icon-note.gif)**Note**
+> The format for running the `virsh setvcpus` command to hot remove VM CPUs is as follows:
+>
+> ```shell
+> virsh setvcpus <domain> <count> [--config] [--live]
+> ```
+>
+> - `domain`: Parameter, which is mandatory.
Specifies the name of a VM.
+> - `count`: Parameter, which is mandatory. Specifies the number of target CPUs, that is, the number of CPUs after hot removal.
+> - `--config`: Option, which is optional. This parameter is still valid when the VM is restarted.
+> - `--live`: Option, which is optional. The configuration takes effect online.

 ## Managing Virtual Memory

@@ -325,10 +349,10 @@ To improve VM performance, you can specify NUMA nodes for a VM using the VM XML

    If the vCPU of the VM is bound to the physical CPU of **node 0**, the performance deterioration caused by the vCPU accessing the remote memory can be avoided.

-    >![](./public_sys-resources/icon-note.gif) **NOTE:**
+    > ![](./public_sys-resources/icon-note.gif)**NOTE:**
    >
-    >- The sum of memory allocated to the VM cannot exceed the remaining available memory of the NUMA node. Otherwise, the VM may fail to start.
-    >- You are advised to bind the VM memory and vCPU to the same NUMA node to avoid the performance deterioration caused by vCPU access to the remote memory. For example, bind the vCPU to NUMA node 0 as well.
+    > - The sum of memory allocated to the VM cannot exceed the remaining available memory of the NUMA node. Otherwise, the VM may fail to start.
+    > - You are advised to bind the VM memory and vCPU to the same NUMA node to avoid the performance deterioration caused by vCPU access to the remote memory. For example, bind the vCPU to NUMA node 0 as well.

 ### Configuring Guest NUMA

@@ -360,10 +384,10 @@ After Guest NUMA is configured in the VM XML configuration file, you can view th

 ```

->![](./public_sys-resources/icon-note.gif) **NOTE:**
+> ![](./public_sys-resources/icon-note.gif)**NOTE:**
>
->- **<numa\>** provides the NUMA topology function for VMs. **cell id** indicates the vNode ID, **cpus** indicates the vCPU ID, and **memory** indicates the memory size on the vNode.
->- If you want to use Guest NUMA to provide better performance, configure <**numatune\>** and **<cputune\>** so that the vCPU and memory are distributed on the same physical NUMA node.
+> - **<numa\>** provides the NUMA topology function for VMs. **cell id** indicates the vNode ID, **cpus** indicates the vCPU ID, and **memory** indicates the memory size on the vNode.
+> - If you want to use Guest NUMA to provide better performance, configure <**numatune\>** and **<cputune\>** so that the vCPU and memory are distributed on the same physical NUMA node.
> - **cellid** in **<memnode\>** corresponds to **cell id** in **<numa\>**. **mode** can be set to **strict** \(apply for memory from a specified node strictly. If the memory is insufficient, the application fails.\), **preferred** \(apply for memory from a node first. If the memory is insufficient, apply for memory from another node\), or **interleave** \(apply for memory from a specified node in cross mode\); **nodeset** indicates the specified physical NUMA node.
> - In **<cputune\>**, you need to bind the vCPU in the same **cell id** to the physical NUMA node that is the same as the **memnode**.

@@ -375,7 +399,7 @@ In virtualization scenarios, the memory, CPU, and external devices of VMs are si

 #### Constraints

-- For processors using the AArch64 architecture, the specified VM chipset type \(machine\) needs to be virt-4.1 or a later version when a VM is created.For processors using the x86 architecture, the specified VM chipset type \(machine\) needs to be a later version than pc-i440fx-1.5 when a VM is created.
+- For processors using the AArch64 architecture, the specified VM chipset type \(machine\) needs to be virt-4.2 or a later version when a VM is created. For processors using the x86 architecture, the specified VM chipset type \(machine\) needs to be a later version than pc-i440fx-1.5 when a VM is created.
- Guest NUMA on which the memory hot add feature depends needs to be configured on the VM. Otherwise, the memory hot add process cannot be completed.
- When hot adding memory, you need to specify the ID of Guest NUMA node to which the new memory belongs. Otherwise, the memory hot add fails. - The VM kernel should support memory hot add. Otherwise, the VM cannot identify the newly added memory or the memory cannot be brought online. @@ -411,11 +435,11 @@ In virtualization scenarios, the memory, CPU, and external devices of VMs are si .... ``` ->![](./public_sys-resources/icon-note.gif) **Note** ->In the preceding information, ->the value of slots in the maxMemory field indicates the reserved memory slots. The maximum value is 256. ->maxMemory indicates the maximum physical memory supported by the VM. ->For details about how to configure Guest NUMA, see "Configuring Guest NUMA." +> ![](./public_sys-resources/icon-note.gif)**Note** +> In the preceding information, +> the value of slots in the maxMemory field indicates the reserved memory slots. The maximum value is 256. +> maxMemory indicates the maximum physical memory supported by the VM. +> For details about how to configure Guest NUMA, see "Configuring Guest NUMA." 
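Memory is hot added by attaching a DIMM device description with virsh, as the next steps show. A minimal sketch of preparing and attaching such a description (the 1 GiB size, target Guest NUMA node 0, the memory.xml file name, and the openEulerVM domain name are illustrative assumptions; size and node must fit the maxMemory, slots, and Guest NUMA configuration above):

```shell
# Describe the DIMM to hot add; <node> selects the Guest NUMA node that
# receives the new memory, which is mandatory per the constraints above.
cat > memory.xml <<'EOF'
<memory model='dimm'>
  <target>
    <size unit='GiB'>1</size>
    <node>0</node>
  </target>
</memory>
EOF

# Attach it to a running VM (run as root on a virtualization host).
if command -v virsh >/dev/null 2>&1; then
    virsh attach-device openEulerVM memory.xml --live
else
    echo "virsh not available on this machine; memory.xml prepared only"
fi
```

Adding `--config` alongside `--live` would also persist the DIMM across VM restarts, mirroring the CPU hotplug options described earlier.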
**Hot Adding and Bringing Memory Online** @@ -445,12 +469,12 @@ In virtualization scenarios, the memory, CPU, and external devices of VMs are si virsh attach-device openEulerVM memory.xml --live ``` - >![](./public_sys-resources/icon-note.gif) **Note** - >If you do not use the udev rules, you can use the root permission to manually bring the hot added memory online by running the following command: -> - >```text - >for i in `grep -l offline /sys/devices/system/memory/memory*/state` - >do - > echo online > $i - >done - >``` + > ![](./public_sys-resources/icon-note.gif)**Note** + > If you do not use the udev rules, you can use the root permission to manually bring the hot added memory online by running the following command: + + ```shell + for i in `grep -l offline /sys/devices/system/memory/memory*/state` + do + echo online > $i + done + ``` diff --git a/docs/en/docs/Virtualization/tool-guide.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/tool-guide.md similarity index 70% rename from docs/en/docs/Virtualization/tool-guide.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/tool-guide.md index 565ebecf35494d5cc45e3a4f8f2a4973841fa0dd..aecca3d63a94c825cb0dbf1cccc2d45d8ff87bba 100644 --- a/docs/en/docs/Virtualization/tool-guide.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/tool-guide.md @@ -1 +1,3 @@ -To help users better use virtualization, openEuler provides a set of tools, including vmtop and LibcarePlus. This section describes how to install and use these tools. \ No newline at end of file +# Tool Guide + +To help users better use virtualization, openEuler provides a set of tools, including vmtop and LibcarePlus. This section describes how to install and use these tools. 
diff --git a/docs/en/docs/Virtualization/virtualization-installation.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/virtualization-installation.md similarity index 57% rename from docs/en/docs/Virtualization/virtualization-installation.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/virtualization-installation.md index 81e0efb790d84d6a9e1723c54bd2f017e682c7a0..d33ee0fc732bdf744cd02c25d946c47e7d3ef1e8 100644 --- a/docs/en/docs/Virtualization/virtualization-installation.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/virtualization-installation.md @@ -3,23 +3,22 @@ This chapter describes how to install virtualization components in openEuler. - [Installing Virtualization Components](#installing-virtualization-components) - - [Minimum Hardware Requirements](#minimum-hardware-requirements) - - [Installing Core Virtualization Components](#installing-core-virtualization-components) - - [Installation Methods](#installation-methods) - - [Prerequisites](#prerequisites) - - [Procedure](#procedure) - - [Verifying the Installation](#verifying-the-installation) - + - [Minimum Hardware Requirements](#minimum-hardware-requirements) + - [Installing Core Virtualization Components](#installing-core-virtualization-components) + - [Installation Methods](#installation-methods) + - [Prerequisites](#prerequisites) + - [Procedure](#procedure) + - [Verifying the Installation](#verifying-the-installation) ## Minimum Hardware Requirements The minimum hardware requirements for installing virtualization components on openEuler are as follows: -- AArch64 processor architecture: ARMv8 or later, supporting virtualization expansion -- x86\_64 processor architecture, supporting VT-x -- 2-core CPU -- 4 GB memory -- 16 GB available disk space +- AArch64 processor architecture: ARMv8 or later, supporting virtualization expansion +- x86\_64 processor architecture, supporting VT-x +- 2-core CPU +- 4 GB memory +- 16 GB available disk space 
## Installing Core Virtualization Components @@ -27,41 +26,40 @@ The minimum hardware requirements for installing virtualization components on op #### Prerequisites -- The yum source has been configured. For details, see the _openEuler 21.03 Administrator Guide_. -- Only the administrator has permission to perform the installation. +- The yum source has been configured. For details, see the _openEuler 21.03 Administrator Guide_. +- Only the administrator has permission to perform the installation. #### Procedure -1. Install the QEMU component. +1. Install the QEMU component. ```shell # yum install -y qemu ``` - >![](./public_sys-resources/icon-caution.gif) Notice: - >By default, the QEMU component runs as user qemu and user group qemu. If you are not familiar with Linux user group and user permission management, you may encounter insufficient permission when creating and starting VMs. You can use either of the following methods to solve this problem: - >Method 1: Modify the QEMU configuration file. Run the `sudo vim /etc/libvirt/qemu.conf` command to open the QEMU configuration file, find `user = "root"` and `group = "root"`, uncomment them (delete `#`), save the file, and exit. - >Method 2: Change the owner of the VM files. Ensure that user qemu has the permission to access the folder where VM files are stored. Run the `sudo chown qemu:qemu xxx.qcow2` command to change the owner of the VM files that need to be read and written. + > ![](./public_sys-resources/icon-caution.gif) Notice: + > By default, the QEMU component runs as user qemu and user group qemu. If you are not familiar with Linux user group and user permission management, you may encounter insufficient permission when creating and starting VMs. You can use either of the following methods to solve this problem: + > Method 1: Modify the QEMU configuration file. 
Run the `sudo vim /etc/libvirt/qemu.conf` command to open the QEMU configuration file, find `user = "root"` and `group = "root"`, uncomment them (delete `#`), save the file, and exit. + > Method 2: Change the owner of the VM files. Ensure that user qemu has the permission to access the folder where VM files are stored. Run the `sudo chown qemu:qemu xxx.qcow2` command to change the owner of the VM files that need to be read and written. -2. Install the libvirt component. +2. Install the libvirt component. ```shell # yum install -y libvirt ``` -3. Start the libvirtd service. +3. Start the libvirtd service. ```shell # systemctl start libvirtd ``` - ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The KVM module is integrated in the openEuler kernel and does not need to be installed separately. +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> The KVM module is integrated in the openEuler kernel and does not need to be installed separately. ### Verifying the Installation -1. Check whether the kernel supports KVM virtualization, that is, check whether the **/dev/kvm** and **/sys/module/kvm** files exist. The command and output are as follows: +1. Check whether the kernel supports KVM virtualization, that is, check whether the **/dev/kvm** and **/sys/module/kvm** files exist. The command and output are as follows: ```shell # ls /dev/kvm @@ -75,7 +73,7 @@ The minimum hardware requirements for installing virtualization components on op If the preceding files exist, the kernel supports KVM virtualization. If the preceding files do not exist, KVM virtualization is not enabled during kernel compilation. In this case, you need to use the Linux kernel that supports KVM virtualization. -2. Check whether QEMU is successfully installed. If the installation is successful, the QEMU software package information is displayed. The command and output are as follows: +2. Check whether QEMU is successfully installed. 
If the installation is successful, the QEMU software package information is displayed. The command and output are as follows: ```shell # rpm -qi qemu @@ -109,7 +107,7 @@ The minimum hardware requirements for installing virtualization components on op As QEMU requires no host kernel patches to run, it is safe and easy to use. ``` -3. Check whether libvirt is successfully installed. If the installation is successful, the libvirt software package information is displayed. The command and output are as follows: +3. Check whether libvirt is successfully installed. If the installation is successful, the libvirt software package information is displayed. The command and output are as follows: ```shell # rpm -qi libvirt @@ -134,7 +132,7 @@ The minimum hardware requirements for installing virtualization components on op the libvirtd server exporting the virtualization support. ``` -4. Check whether the libvirt service is started successfully. If the service is in the **Active** state, the service is started successfully. You can use the virsh command line tool provided by the libvirt. The command and output are as follows: +4. Check whether the libvirt service is started successfully. If the service is in the **Active** state, the service is started successfully. You can use the virsh command line tool provided by the libvirt. 
The command and output are as follows: ```shell # systemctl status libvirtd @@ -150,5 +148,3 @@ The minimum hardware requirements for installing virtualization components on op ─40754 /usr/sbin/libvirtd ``` - - diff --git a/docs/en/docs/Virtualization/virtualization.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/virtualization.md similarity index 100% rename from docs/en/docs/Virtualization/virtualization.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/virtualization.md diff --git a/docs/en/docs/Virtualization/vm-configuration.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/vm-configuration.md similarity index 98% rename from docs/en/docs/Virtualization/vm-configuration.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/vm-configuration.md index de74da00de45abed40d38fd4d1e362f7dabb34b9..87a5fa1008030adf04f068efe1ca66891a2cf8ea 100644 --- a/docs/en/docs/Virtualization/vm-configuration.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/vm-configuration.md @@ -355,8 +355,8 @@ The bus is a channel for information communication between components of a compu The PCIe bus is a typical tree structure and has good scalability. The buses are associated with each other by using a controller. The following uses the PCIe bus as an example to describe how to configure a bus topology for a VM. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The bus configuration is complex. If the device topology does not need to be precisely controlled, the default bus configuration automatically generated by libvirt can be used. +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> The bus configuration is complex. If the device topology does not need to be precisely controlled, the default bus configuration automatically generated by libvirt can be used. 
#### Elements @@ -552,8 +552,8 @@ In addition to storage devices and network devices, some external devices need t For example, in the following example, the VM emulator path, pty serial port, VirtIO media device, USB tablet, USB keyboard, and VNC graphics device are configured. ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->When **type** of **graphics** is set to **VNC**, you are advised to set the **passwd** attribute, that is, the password for logging in to the VM using VNC. +> ![](./public_sys-resources/icon-note.gif)**NOTE:** +> When **type** of **graphics** is set to **VNC**, you are advised to set the **passwd** attribute, that is, the password for logging in to the VM using VNC. ```xml diff --git a/docs/en/docs/Virtualization/vm-live-migration.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/vm-live-migration.md similarity index 97% rename from docs/en/docs/Virtualization/vm-live-migration.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/vm-live-migration.md index a4473ce30584b52c36ced9c82ee39ea23fe48f7b..ae842d92a91d9d446efb3dc8ac88b658bb479f64 100644 --- a/docs/en/docs/Virtualization/vm-live-migration.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/vm-live-migration.md @@ -1,10 +1,10 @@ # VM Live Migration - [VM Live Migration](#vm-live-migration) - - [Introduction](#introduction-1) - - [Application Scenarios](#application-scenarios) - - [Precautions and Restrictions](#precautions-and-restrictions) - - [Live Migration Operations](#live-migration-operations) + - [Introduction](#introduction-1) + - [Application Scenarios](#application-scenarios) + - [Precautions and Restrictions](#precautions-and-restrictions) + - [Live Migration Operations](#live-migration-operations) ## Introduction @@ -61,13 +61,13 @@ Procedure: For example, if the VM name is **openEulerVM** and the calculation time is 1s, run the following command: -``` +```shell virsh qemu-monitor-command openEulerVM 
'{"execute":"calc-dirty-rate", "arguments": {"calc-time": 1}} ``` After 1s, run the following command to query the dirty page change rate: -``` +```shell virsh qemu-monitor-command openEulerVM '{"execute":"query-dirty-rate"}' ``` @@ -77,7 +77,7 @@ Before live migration, run the **virsh migrate-setmaxdowntime** command to spe For example, to set the maximum downtime of the VM named **openEulerVM** to **500 ms**, run the following command: -``` +```shell # virsh migrate-setmaxdowntime openEulerVM 500 ``` @@ -85,13 +85,13 @@ In addition, you can run the **virsh migrate-setspeed** command to limit the b For example, to set the live migration bandwidth of the VM named **openEulerVM** to **500 Mbit/s**, run the following command: -``` +```shell # virsh migrate-setspeed openEulerVM --bandwidth 500 ``` You can run the **migrate-getspeed** command to query the maximum bandwidth during VM live migration. -``` +```shell # virsh migrate-getspeed openEulerVM 500 ``` @@ -106,13 +106,13 @@ You can use migrate-set-parameters to set parameters related to live migration. For example, set the live migration algorithm of the VM named _openEulerVM_ to zstd and retain the default values for other parameters. -``` +```shell # virsh qemu-monitor-command openeulerVM '{ "execute": "migrate-set-parameters", "arguments": {"compress-method": "zstd"}}' ``` You can run the query-migrate-parameters command to query parameters related to live migration. -``` +```shell # virsh qemu-monitor-command openeulerVM '{ "execute": "query-migrate-parameters"}' --pretty { @@ -148,7 +148,7 @@ You can run the query-migrate-parameters command to query parameters related to 1. Check whether the storage device is shared. - ``` + ```shell # virsh domblklist Target Source -------------------------------------------- @@ -162,7 +162,7 @@ You can run the query-migrate-parameters command to query parameters related to For example, run the **virsh migrate** command to migrate VM **openEulerVM** to the destination host. 
- ``` + ```shell # virsh migrate --live --unsafe openEulerVM qemu+ssh:///system ``` @@ -186,7 +186,7 @@ You can run the query-migrate-parameters command to query parameters related to For example, the **virsh domblklist** command output shows that the VM to be migrated has a disk sda in qcow2 format. The XML configuration of sda is as follows: - ``` + ```xml @@ -197,13 +197,13 @@ You can run the query-migrate-parameters command to query parameters related to Before live migration, create a virtual disk file in the same disk directory on the destination host. Ensure that the disk format and size are the same. - ``` + ```shell # qemu-img create -f qcow2 /mnt/sdb/openeuler/openEulerVM.qcow2 20G ``` 2. Run the **virsh migrate** command on the source to perform live migration. During the migration, the storage is also migrated to the destination. - ``` + ```shell # virsh migrate --live --unsafe --copy-storage-all --migrate-disks sda \ openEulerVM qemu+ssh:///system ``` @@ -232,13 +232,13 @@ The multiFd can be used to perform multi-channel TLS migration. 
However, the CPU overhead increases because the migration data is encrypted. Encrypted transmission command for single-channel live migration: -``` +```shell virsh migrate --live --unsafe --tls --domain openEulerVM --desturi qemu+tcp:///system --migrateuri tcp:// ``` Encrypted transmission command for multi-channel live migration: -``` +```shell virsh migrate --live --unsafe --parallel --tls --domain openEulerVM --desturi qemu+tcp:///system --migrateuri tcp:// ``` diff --git a/docs/en/docs/Virtualization/vm-maintainability-management.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/vm-maintainability-management.md similarity index 95% rename from docs/en/docs/Virtualization/vm-maintainability-management.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/vm-maintainability-management.md index 0fd8b277f1d62cf656a514fd179329eb254e80a0..d6f53cac24eecfe4fbbe3fa9a1bb0e1ceaa264f1 100644 --- a/docs/en/docs/Virtualization/vm-maintainability-management.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/vm-maintainability-management.md @@ -1,5 +1,4 @@ -# VM Maintainability Management - +# VM Maintainability Management ## VM NMI Watchdog @@ -7,7 +6,6 @@ The NMI Watchdog is a mechanism used to detect hardlockup in Linux. Even if normal interrupts are disabled, non-maskable interrupt (NMI) can interrupt the code execution and further detect hardlockup. The current Arm architecture does not support native NMI, so it enables Pseudo-NMI based on the interrupt priority and configures Performance Monitoring Interrupt (PMI) as NMI to implement NMI Watchdog (PMU Watchdog). - ### Precautions - The VM OS needs to support Pseudo-NMI and corresponding kernel parameters need to be configured. @@ -22,11 +20,7 @@ To configure the NMI Watchdog for a VM in ARM architecture, perform the following 2. Check whether the PMU Watchdog is successfully loaded on the VM. 
If the loading is successful, information similar to the following is displayed in the dmesg log of the kernel: - - ``` + + ```text [2.1173222] NMI watchdog: CPU0 freq probed as 2399999942 HZ. ``` - - - - \ No newline at end of file diff --git a/docs/en/docs/Virtualization/vmtop.md b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/vmtop.md similarity index 44% rename from docs/en/docs/Virtualization/vmtop.md rename to docs/en/Virtualization/VirtualizationPlatform/Virtualization/vmtop.md index 346cc01c219e3ca7d349c36ef39650cbb20a86df..74bf1f2b00e9a7e8c4398650ebbb5a1adebf39a8 100644 --- a/docs/en/docs/Virtualization/vmtop.md +++ b/docs/en/Virtualization/VirtualizationPlatform/Virtualization/vmtop.md @@ -80,23 +80,23 @@ vmtop [option] #### Option Description -- -d: sets the refresh interval, in seconds. -- -H: displays the VM thread information. -- -n: sets the number of refresh times and exits after the refresh is complete. -- -b: displays Batch mode, which can be used to redirect to a file. -- -h: displays help information. -- -v: displays versions. -- -p: monitors the VM with a specified ID. +- `-d`: sets the refresh interval, in seconds. +- `-H`: displays the VM thread information. +- `-n`: sets the number of refresh times and exits after the refresh is complete. +- `-b`: displays Batch mode, which can be used to redirect to a file. +- `-h`: displays help information. +- `-v`: displays versions. +- `-p`: monitors the VM with a specified ID. #### Keyboard Shortcut Shortcut key used when the vmtop is running. -- H: displays or stops the VM thread information. The information is displayed by default. -- up/down: moves the VM list upwards or downwards. -- left/right: moves the cursor leftwards or rightwards to display the columns that are hidden due to the screen width. -- f: enters the editing mode of a monitoring item and selects the monitoring item to be enabled. -- q: exits the vmtop process. +- **H**: displays or stops the VM thread information. 
The information is displayed by default. +- **up/down**: moves the VM list upwards or downwards. +- **left/right**: moves the cursor leftwards or rightwards to display the columns that are hidden due to the screen width. +- **f**: enters the editing mode of a monitoring item and selects the monitoring item to be enabled. +- **q**: exits the vmtop process. ### Example @@ -119,78 +119,78 @@ Domains: 1 running As shown in the output, there is only one VM named "example" on the host. The ID is 2. The CPU usage is 13.0%. The total number of traps within one second is 1452. The physical CPU occupied by the VM process is CPU 106. The ratio of the VM internal occupation time to the CPU running time is 99.7%. 1. Display VM thread information. -Press H to display the thread information. - -```sh -vmtop - 2020-09-14 10:11:27 - 1.0 -Domains: 1 running - - DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P %ST %GUE %HYP - 2 example 4054916 13.0 0 0 1191 17 4 120 76 147 0 1435 S 119 0.0 123.7 4.0 - |_ qemu-kvm 4054916 0.0 0 0 0 0 0 0 0 0 0 0 S 119 0.0 0.0 0.0 - |_ qemu-kvm 4054928 0.0 0 0 0 0 0 0 0 0 0 0 S 119 0.0 0.0 0.0 - |_ signalfd_com 4054929 0.0 0 0 0 0 0 0 0 0 0 0 S 120 0.0 0.0 0.0 - |_ IO mon_iothr 4054932 0.0 0 0 0 0 0 0 0 0 0 0 S 117 0.0 0.0 0.0 - |_ CPU 0/KVM 4054933 3.0 0 0 280 6 4 28 19 41 0 350 S 105 0.0 27.9 0.0 - |_ CPU 1/KVM 4054934 3.0 0 0 260 0 0 16 12 36 0 308 S 31 0.0 20.0 0.0 - |_ CPU 2/KVM 4054935 3.0 0 0 341 0 0 44 20 26 0 387 R 108 0.0 27.9 4.0 - |_ CPU 3/KVM 4054936 5.0 0 0 310 11 0 32 25 44 0 390 S 103 0.0 47.9 0.0 - |_ memory_lock 4054940 0.0 0 0 0 0 0 0 0 0 0 0 S 126 0.0 0.0 0.0 - |_ vnc_worker 4054944 0.0 0 0 0 0 0 0 0 0 0 0 S 118 0.0 0.0 0.0 - |_ worker 4143738 0.0 0 0 0 0 0 0 0 0 0 0 S 120 0.0 0.0 0.0 -``` - -The example VM has 11 threads, including the vCPU thread, vnc_worker, and IO mon_iotreads. Each thread also displays detailed CPU usage and trap information. 
+ Press **H** to display the thread information. + + ```sh + vmtop - 2020-09-14 10:11:27 - 1.0 + Domains: 1 running + + DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P %ST %GUE %HYP + 2 example 4054916 13.0 0 0 1191 17 4 120 76 147 0 1435 S 119 0.0 123.7 4.0 + |_ qemu-kvm 4054916 0.0 0 0 0 0 0 0 0 0 0 0 S 119 0.0 0.0 0.0 + |_ qemu-kvm 4054928 0.0 0 0 0 0 0 0 0 0 0 0 S 119 0.0 0.0 0.0 + |_ signalfd_com 4054929 0.0 0 0 0 0 0 0 0 0 0 0 S 120 0.0 0.0 0.0 + |_ IO mon_iothr 4054932 0.0 0 0 0 0 0 0 0 0 0 0 S 117 0.0 0.0 0.0 + |_ CPU 0/KVM 4054933 3.0 0 0 280 6 4 28 19 41 0 350 S 105 0.0 27.9 0.0 + |_ CPU 1/KVM 4054934 3.0 0 0 260 0 0 16 12 36 0 308 S 31 0.0 20.0 0.0 + |_ CPU 2/KVM 4054935 3.0 0 0 341 0 0 44 20 26 0 387 R 108 0.0 27.9 4.0 + |_ CPU 3/KVM 4054936 5.0 0 0 310 11 0 32 25 44 0 390 S 103 0.0 47.9 0.0 + |_ memory_lock 4054940 0.0 0 0 0 0 0 0 0 0 0 0 S 126 0.0 0.0 0.0 + |_ vnc_worker 4054944 0.0 0 0 0 0 0 0 0 0 0 0 S 118 0.0 0.0 0.0 + |_ worker 4143738 0.0 0 0 0 0 0 0 0 0 0 0 S 120 0.0 0.0 0.0 + ``` + + The example VM has 11 threads, including the vCPU thread, vnc_worker, and IO mon_iotreads. Each thread also displays detailed CPU usage and trap information. 2. Select the monitoring item. -Enter f to edit the monitoring item. - -```sh -field filter - select which field to be showed -Use up/down to navigate, use space to set whether chosen filed to be showed -'q' to quit to normal display - - * DID - * VM/task-name - * PID - * %CPU - * EXThvc - * EXTwfe - * EXTwfi - * EXTmmioU - * EXTmmioK - * EXTfp - * EXTirq - * EXTsys64 - * EXTmabt - * EXTsum - * S - * P - * %ST - * %GUE - * %HYP -``` - -All monitoring items are displayed by default. You can press the up or down key to select a monitoring item, press the space key to set whether to display or hide the monitoring item, and press the q key to exit. 
-After %ST, %GUE, and %HYP are hidden, the following information is displayed: - -```sh -vmtop - 2020-09-14 10:23:25 - 1.0 -Domains: 1 running - - DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P - 2 example 4054916 12.0 0 0 1213 14 1 144 68 168 0 1464 S 125 - |_ qemu-kvm 4054916 0.0 0 0 0 0 0 0 0 0 0 0 S 125 - |_ qemu-kvm 4054928 0.0 0 0 0 0 0 0 0 0 0 0 S 119 - |_ signalfd_com 4054929 0.0 0 0 0 0 0 0 0 0 0 0 S 120 - |_ IO mon_iothr 4054932 0.0 0 0 0 0 0 0 0 0 0 0 S 117 - |_ CPU 0/KVM 4054933 2.0 0 0 303 6 0 29 10 35 0 354 S 98 - |_ CPU 1/KVM 4054934 4.0 0 0 279 0 0 39 17 49 0 345 S 1 - |_ CPU 2/KVM 4054935 3.0 0 0 283 0 0 33 20 40 0 343 S 122 - |_ CPU 3/KVM 4054936 3.0 0 0 348 8 1 43 21 44 0 422 S 110 - |_ memory_lock 4054940 0.0 0 0 0 0 0 0 0 0 0 0 S 126 - |_ vnc_worker 4054944 0.0 0 0 0 0 0 0 0 0 0 0 S 118 - |_ worker 1794 0.0 0 0 0 0 0 0 0 0 0 0 S 126 -``` - -%ST, %GUE, and %HYP will not be displayed on the screen. + Enter **f** to edit the monitoring item. + + ```sh + field filter - select which field to be showed + Use up/down to navigate, use space to set whether chosen filed to be showed + 'q' to quit to normal display + + * DID + * VM/task-name + * PID + * %CPU + * EXThvc + * EXTwfe + * EXTwfi + * EXTmmioU + * EXTmmioK + * EXTfp + * EXTirq + * EXTsys64 + * EXTmabt + * EXTsum + * S + * P + * %ST + * %GUE + * %HYP + ``` + + All monitoring items are displayed by default. You can press the up or down key to select a monitoring item, press the space key to set whether to display or hide the monitoring item, and press the q key to exit. 
+ After %ST, %GUE, and %HYP are hidden, the following information is displayed: + + ```sh + vmtop - 2020-09-14 10:23:25 - 1.0 + Domains: 1 running + + DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P + 2 example 4054916 12.0 0 0 1213 14 1 144 68 168 0 1464 S 125 + |_ qemu-kvm 4054916 0.0 0 0 0 0 0 0 0 0 0 0 S 125 + |_ qemu-kvm 4054928 0.0 0 0 0 0 0 0 0 0 0 0 S 119 + |_ signalfd_com 4054929 0.0 0 0 0 0 0 0 0 0 0 0 S 120 + |_ IO mon_iothr 4054932 0.0 0 0 0 0 0 0 0 0 0 0 S 117 + |_ CPU 0/KVM 4054933 2.0 0 0 303 6 0 29 10 35 0 354 S 98 + |_ CPU 1/KVM 4054934 4.0 0 0 279 0 0 39 17 49 0 345 S 1 + |_ CPU 2/KVM 4054935 3.0 0 0 283 0 0 33 20 40 0 343 S 122 + |_ CPU 3/KVM 4054936 3.0 0 0 348 8 1 43 21 44 0 422 S 110 + |_ memory_lock 4054940 0.0 0 0 0 0 0 0 0 0 0 0 S 126 + |_ vnc_worker 4054944 0.0 0 0 0 0 0 0 0 0 0 0 S 118 + |_ worker 1794 0.0 0 0 0 0 0 0 0 0 0 0 S 126 + ``` + + %ST, %GUE, and %HYP will not be displayed on the screen. diff --git a/docs/en/docs/A-Ops/deploying-aops-agent.md b/docs/en/docs/A-Ops/deploying-aops-agent.md deleted file mode 100644 index 6e8445dbe0a64eb479655266c96c19759458ec61..0000000000000000000000000000000000000000 --- a/docs/en/docs/A-Ops/deploying-aops-agent.md +++ /dev/null @@ -1,670 +0,0 @@ - -# Deploying aops-agent -### 1. Environment Requirements - -One host running on openEuler 20.03 or later - -### 2. Configuration Environment Deployment - -#### 2.1 Disabling the Firewall - -```shell -systemctl stop firewalld -systemctl disable firewalld -systemctl status firewalld -``` - -#### 2.2 Deploying aops-agent - -1. Run `yum install aops-agent` to install aops-agent based on the Yum source. - -2. Modify the configuration file. Change the value of the **ip** in the agent section to the IP address of the local host. - -``` -vim /etc/aops/agent.conf -``` - - The following uses 192.168.1.47 as an example. - - ```ini - [agent] - ;IP address and port number bound when the aops-agent is started. 
- ip=192.168.1.47 - port=12000 - - [gopher] - ;Default path of the gala-gopher configuration file. If you need to change the path, ensure that the file path is correct. - config_path=/opt/gala-gopher/gala-gopher.conf - - ;aops-agent log collection configuration - [log] - ;Level of the logs to be collected, which can be set to DEBUG, INFO, WARNING, ERROR, or CRITICAL - log_level=INFO - ;Location for storing collected logs - log_dir=/var/log/aops - ;Maximum size of a log file - max_bytes=31457280 - ;Number of backup logs - backup_count=40 - ``` - -3. Run `systemctl start aops-agent` to start the service. - -#### 2.3 Registering with aops-manager - -To identify users and prevent APIs from being invoked randomly, aops-agent uses tokens to authenticate users, reducing the pressure on the deployed hosts. - -For security purposes, the active registration mode is used to obtain the token. Before the registration, prepare the information to be registered on aops-agent and run the `register` command to register the information with aops-manager. No database is configured for aops-agent. After the registration is successful, the token is automatically saved to the specified file and the registration result is displayed on the GUI. In addition, save the local host information to the aops-manager database for subsequent management. - -1. Prepare the **register.json** file. - - Prepare the information required for registration on aops-agent and save the information in JSON format. 
The data structure is as follows: - -```JSON -{ - // Name of the login user - "web_username":"admin", - // User password - "web_password": "changeme", - // Host name - "host_name": "host1", - // Name of the group to which the host belongs - "host_group_name": "group1", - // IP address of the host where aops-manager is running - "manager_ip":"192.168.1.23", - // Whether to register as a management host - "management":false, - // External port for running aops-manager - "manager_port":"11111", - // Port for running aops-agent - "agent_port":"12000" -} -``` - -Note: Ensure that aops-manager is running on the target host, for example, 192.168.1.23, and the registered host group exists. - -2. Run `aops_agent register -f register.json`. -3. The registration result is displayed on the GUI. If the registration is successful, the token character string is saved to a specified file. If the registration fails, locate the fault based on the message and log content (**/var/log/aops/aops.log**). - -The following is an example of the registration result: - -- Registration succeeded. - -```shell -[root@localhost ~]# aops_agent register -f register.json -Agent Register Success -``` - -- Registration failed. The following uses the aops-manager start failure as an example. - -```shell -[root@localhost ~]# aops_agent register -f register.json -Agent Register Fail -[root@localhost ~]# -``` - -- Log content - -```shell -2022-09-05 16:11:52,576 ERROR command_manage/register/331: HTTPConnectionPool(host='192.168.1.23', port=11111): Max retries exceeded with url: /manage/host/add (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) -[root@localhost ~]# -``` - -### 3. Plug-in Support - -#### 3.1 gala-gopher - -##### 3.1.1 Introduction - -gala-gopher is a low-load probe framework based on eBPF. It can be used to monitor the CPU, memory, and network status of hosts and collect data. 
You can configure the collection status of existing probes based on service requirements. - -##### 3.1.2 Deployment - -1. Run `yum install gala-gopher` to install gala-gopher based on the Yum source. -2. Enable probes based on service requirements. You can view information about probes in **/opt/gala-gopher/gala-gopher.conf**. -3. Run `systemctl start gala-gopher` to start the gala-gopher service. - -##### 3.1.3 Others - -For more information about gala-gopher, see https://gitee.com/openeuler/gala-gopher/blob/master/README.md. - -### 4. API Support - -#### 4.1 List of External APIs - -| No.| API | Type| Description | -| ---- | ------------------------------ | ---- | ----------------------| -| 1 | /v1/agent/plugin/start | POST | Starts a plug-in. | -| 2 | /v1/agent/plugin/stop | POST | Stops a plug-in. | -| 3 | /v1/agent/application/info | GET | Collects running applications in the target application collection.| -| 4 | /v1/agent/host/info | GET | Obtains host information. | -| 5 | /v1/agent/plugin/info | GET | Obtains the plug-in running information in aops-agent. | -| 6 | /v1/agent/file/collect | POST | Collects content of the configuration file. | -| 7 | /v1/agent/collect/items/change | POST | Changes the running status of plug-in collection items. | - -##### 4.1.1 /v1/agent/plugin/start - -+ Description: Starts the plug-in that is installed but not running. Currently, only the gala-gopher plug-in is supported. 
- -+ HTTP request mode: POST - -+ Data submission mode: query - -+ Request parameter - - | Parameter | Mandatory| Type| Description | - | ----------- | ---- | ---- | ------ | - | plugin_name | True | str | Plug-in name| - -+ Request parameter example - - | Parameter | Value | - | ----------- | ----------- | - | plugin_name | gala-gopher | - -+ Response body parameters - - | Parameter| Type| Description | - | ------ | ---- | ---------------- | - | code | int | Return code | - | msg | str | Information corresponding to the status code| - -+ Response example - - ```json - { - "code": 200, - "msg": "xxxx" - } - ``` - - -##### 4.1.2 /v1/agent/plugin/stop - -+ Description: Stops a running plug-in. Currently, only the gala-gopher plug-in is supported. - -+ HTTP request mode: POST - -+ Data submission mode: query - -+ Request parameter - - | Parameter | Mandatory| Type| Description | - | ----------- | ---- | ---- | ------ | - | plugin_name | True | str | Plug-in name| - -+ Request parameter example - - | Parameter | Value | - | ----------- | ----------- | - | plugin_name | gala-gopher | - -+ Response body parameters - - | Parameter| Type| Description | - | ------ | ---- | ---------------- | - | code | int | Return code | - | msg | str | Information corresponding to the status code| - -+ Response example - - ```json - { - "code": 200, - "msg": "xxxx" - } - ``` - - -##### 4.1.3 /v1/agent/application/info - -+ Description: Collects running applications in the target application collection. Currently, the target application collection contains MySQL, Kubernetes, Hadoop, Nginx, Docker, and gala-gopher. 
- -+ HTTP request mode: GET - -+ Data submission mode: query - -+ Request parameter - - | Parameter| Mandatory| Type| Description| - | ------ | ---- | ---- | ---- | - | | | | | - -+ Request parameter example - - | Parameter| Value| - | ------ | ------ | - | | | - -+ Response body parameters - - | Parameter| Type| Description | - | ------ | ---- | ---------------- | - | code | int | Return code | - | msg | str | Information corresponding to the status code| - | resp | dict | Response body | - - + resp - - | Parameter | Type | Description | - | ------- | --------- | -------------------------- | - | running | List[str] | List of the running applications| - -+ Response example - - ```json - { - "code": 200, - "msg": "xxxx", - "resp": { - "running": [ - "mysql", - "docker" - ] - } - } - ``` - - -##### 4.1.4 /v1/agent/host/info - -+ Description: Obtains information about the host where aops-agent is installed, including the system version, BIOS version, kernel version, CPU information, and memory information. - -+ HTTP request mode: POST - -+ Data submission mode: application/json - -+ Request parameter - - | Parameter | Mandatory| Type | Description | - | --------- | ---- | --------- | ------------------------------------------------ | - | info_type | True | List[str] | List of the information to be collected. 
Currently, only the CPU, disk, memory, and OS are supported.| - -+ Request parameter example - - ```json - ["os", "cpu","memory", "disk"] - ``` - -+ Response body parameters - - | Parameter| Type| Description | - | ------ | ---- | ---------------- | - | code | int | Return code | - | msg | str | Information corresponding to the status code| - | resp | dict | Response body | - - + resp - - | Parameter| Type | Description | - | ------ | ---------- | -------- | - | cpu | dict | CPU information | - | memory | dict | Memory information| - | os | dict | OS information | - | disk | List[dict] | Disk information| - - + cpu - - | Parameter | Type| Description | - | ------------ | ---- | --------------- | - | architecture | str | CPU architecture | - | core_count | int | Number of cores | - | l1d_cache | str | L1 data cache size| - | l1i_cache | str | L1 instruction cache size| - | l2_cache | str | L2 cache size | - | l3_cache | str | L3 cache size | - | model_name | str | Model name | - | vendor_id | str | Vendor ID | - - + memory - - | Parameter| Type | Description | - | ------ | ---------- | -------------- | - | size | str | Total memory | - | total | int | Number of DIMMs | - | info | List[dict] | Information about all DIMMs| - - + info - - | Parameter | Type| Description | - | ------------ | ---- | -------- | - | size | str | Memory size| - | type | str | Type | - | speed | str | Speed | - | manufacturer | str | Vendor | - - + os - - | Parameter | Type| Description | - | ------------ | ---- | -------- | - | bios_version | str | BIOS version| - | os_version | str | OS version| - | kernel | str | Kernel version| - -+ Response example - - ```json - { - "code": 200, - "msg": "operate success", - "resp": { - "cpu": { - "architecture": "aarch64", - "core_count": "128", - "l1d_cache": "8 MiB (128 instances)", - "l1i_cache": "8 MiB (128 instances)", - "l2_cache": "64 MiB (128 instances)", - "l3_cache": "128 MiB (4 instances)", - "model_name": "Kunpeng-920", - "vendor_id": 
"HiSilicon" - }, - "memory": { - "info": [ - { - "manufacturer": "Hynix", - "size": "16 GB", - "speed": "2933 MT/s", - "type": "DDR4" - }, - { - "manufacturer": "Hynix", - "size": "16 GB", - "speed": "2933 MT/s", - "type": "DDR4" - } - ], - "size": "32G", - "total": 2 - }, - "os": { - "bios_version": "1.82", - "kernel": "5.10.0-60.18.0.50", - "os_version": "openEuler 22.03 LTS" - }, - "disk": [ - { - "capacity": "xxGB", - "model": "xxxxxx" - } - ] - } - } - ``` - - -##### 4.1.5 /v1/agent/plugin/info - -+ Description: Obtains the plug-in running status of the host. Currently, only the gala-gopher plug-in is supported. - -+ HTTP request mode: GET - -+ Data submission mode: query - -+ Request parameter - - | Parameter| Mandatory| Type| Description| - | ------ | ---- | ---- | ---- | - | | | | | - -+ Request parameter example - - | Parameter| Value| - | ------ | ------ | - | | | - -+ Response body parameters - - | Parameter| Type | Description | - | ------ | ---------- | ---------------- | - | code | int | Return code | - | msg | str | Information corresponding to the status code| - | resp | List[dict] | Response body | - - + resp - - | Parameter | Type | Description | - | ------------- | ---------- | ------------------ | - | plugin_name | str | Plug-in name | - | collect_items | list | Running status of plug-in collection items| - | is_installed | str | Information corresponding to the status code | - | resource | List[dict] | Plug-in resource usage | - | status | str | Plug-in running status | - - + resource - - | Parameter | Type| Description | - | ------------- | ---- | ---------- | - | name | str | Resource name | - | current_value | str | Resource usage| - | limit_value | str | Resource limit| - -+ Response example - - ``` - { - "code": 200, - "msg": "operate success", - "resp": [ - { - "collect_items": [ - { - "probe_name": "system_tcp", - "probe_status": "off", - "support_auto": false - }, - { - "probe_name": "haproxy", - "probe_status": "auto", - 
"support_auto": true - }, - { - "probe_name": "nginx", - "probe_status": "auto", - "support_auto": true - }, - ], - "is_installed": true, - "plugin_name": "gala-gopher", - "resource": [ - { - "current_value": "0.0%", - "limit_value": null, - "name": "cpu" - }, - { - "current_value": "13 MB", - "limit_value": null, - "name": "memory" - } - ], - "status": "active" - } - ] - } - ``` - - -##### 4.1.6 /v1/agent/file/collect - -+ Description: Collects information such as the content, permission, and owner of the target configuration file. Currently, only text files smaller than 1 MB, without execute permission, and supporting UTF8 encoding can be read. - -+ HTTP request mode: POST - -+ Data submission mode: application/json - -+ Request parameter - - | Parameter | Mandatory| Type | Description | - | --------------- | ---- | --------- | ------------------------ | - | configfile_path | True | List[str] | List of the full paths of the files to be collected| - -+ Request parameter example - - ```json - [ "/home/test.conf", "/home/test.ini", "/home/test.json"] - ``` - -+ Response body parameters - - | Parameter | Type | Description | - | ------------- | ---------- | ---------------- | - | infos | List[dict] | File collection information | - | success_files | List[str] | List of files successfully collected| - | fail_files | List[str] | List of files that fail to be collected| - - + infos - - | Parameter | Type| Description | - | --------- | ---- | -------- | - | path | str | File path| - | content | str | File content| - | file_attr | dict | File attributes| - - + file_attr - - | Parameter| Type| Description | - | ------ | ---- | ------------ | - | mode | str | Permission of the file type| - | owner | str | File owner| - | group | str | Group to which the file belongs| - -+ Response example - - ```json - { - "infos": [ - { - "content": "this is a test file", - "file_attr": { - "group": "root", - "mode": "0644", - "owner": "root" - }, - "path": "/home/test.txt" - } - ], - 
"success_files": [ - "/home/test.txt" - ], - "fail_files": [ - "/home/test.txt" - ] - } - ``` - - -##### 4.1.7 /v1/agent/collect/items/change - -+ Description: Changes the collection status of the plug-in collection items. Currently, only the status of the gala-gopher collection items can be changed. For the gala-gopher collection items, see **/opt/gala-gopher/gala-gopher.conf**. - -+ HTTP request mode: POST - -+ Data submission mode: application/json - -+ Request parameter - - | Parameter | Mandatory| Type| Description | - | ----------- | ---- | ---- | -------------------------- | - | plugin_name | True | dict | Expected modification result of the plug-in collection items| - - + plugin_name - - | Parameter | Mandatory| Type | Description | - | ------------ | ---- | ------ | ------------------ | - | collect_item | True | string | Expected modification result of the collection item| - -+ Request parameter example - - ```json - { - "gala-gopher":{ - "redis":"auto", - "system_inode":"on", - "tcp":"on", - "haproxy":"auto" - } - } - ``` - -+ Response body parameters - - | Parameter| Type | Description | - | ------ | ---------- | ---------------- | - | code | int | Return code | - | msg | str | Information corresponding to the status code| - | resp | List[dict] | Response body | - - + resp - - | Parameter | Type| Description | - | ----------- | ---- | ------------------ | - | plugin_name | dict | Modification result of the corresponding collection item| - - + plugin_name - - | Parameter | Type | Description | - | ------- | --------- | ---------------- | - | success | List[str] | Collection items that are successfully modified| - | failure | List[str] | Collection items that fail to be modified| - -+ Response example - - ```json - { - "code": 200, - "msg": "operate success", - "resp": { - "gala-gopher": { - "failure": [ - "redis" - ], - "success": [ - "system_inode", - "tcp", - "haproxy" - ] - } - } - } - ``` - - - - ### FAQs - -1. 
If an error is reported, view the **/var/log/aops/aops.log** file, rectify the fault based on the error message in the log file, and restart the service. - -2. You are advised to run aops-agent in Python 3.7 or later. Pay attention to the version of the Python dependency library when installing it. - -3. The value of **access_token** can be obtained from the **/etc/aops/agent.conf** file after the registration is complete. - -4. To limit the CPU and memory resources of a plug-in, add **MemoryHigh** and **CPUQuota** to the **Service** section in the service file corresponding to the plug-in. - - For example, set the memory limit of gala-gopher to 40 MB and the CPU limit to 20%. - - ```ini - [Unit] - Description=a-ops gala gopher service - After=network.target - - [Service] - Type=exec - ExecStart=/usr/bin/gala-gopher - Restart=on-failure - RestartSec=1 - RemainAfterExit=yes - ;Limit the maximum memory that can be used by processes in the unit. The limit can be exceeded. However, after the limit is exceeded, the process running speed is limited, and the system reclaims the excess memory as much as possible. - ;The option value can be an absolute memory size in bytes (K, M, G, or T suffix based on 1024) or a relative memory size in percentage. - MemoryHigh=40M - ;Set the CPU time limit for the processes of this unit. The value must be a percentage ending with %, indicating the maximum percentage of the total time that the unit can use a single CPU. 
- CPUQuota=20% - - [Install] - WantedBy=multi-user.target - ``` - - - - - - diff --git a/docs/en/docs/A-Ops/figures/0BFA7C40-D404-4772-9C47-76EAD7D24E69.png b/docs/en/docs/A-Ops/figures/0BFA7C40-D404-4772-9C47-76EAD7D24E69.png deleted file mode 100644 index 910f58dbf8fb13d52826b7c74728f4c28599660f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/0BFA7C40-D404-4772-9C47-76EAD7D24E69.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/1631073636579.png b/docs/en/docs/A-Ops/figures/1631073636579.png deleted file mode 100644 index 5aacc487264ac63fbe5322b4f89fca3ebf9c7cd9..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/1631073636579.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/1631073840656.png b/docs/en/docs/A-Ops/figures/1631073840656.png deleted file mode 100644 index 122e391eafe7c0d8d081030a240df90aea260150..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/1631073840656.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/1631101736624.png b/docs/en/docs/A-Ops/figures/1631101736624.png deleted file mode 100644 index 74e2f2ded2ea254c66b221e8ac27a0d8bed9362a..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/1631101736624.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/1631101865366.png b/docs/en/docs/A-Ops/figures/1631101865366.png deleted file mode 100644 index abfbc280a368b93af1e1165385af3a9cac89391d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/1631101865366.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/1631101982829.png b/docs/en/docs/A-Ops/figures/1631101982829.png deleted file mode 100644 index 0b1c9c7c3676b804dbdf19afbe4f3ec9dbe0627f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/1631101982829.png and /dev/null differ diff --git 
a/docs/en/docs/A-Ops/figures/1631102019026.png b/docs/en/docs/A-Ops/figures/1631102019026.png deleted file mode 100644 index 54e8e7d1cffbb28711074e511b08c73f66c1fb75..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/1631102019026.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/20210908212726.png b/docs/en/docs/A-Ops/figures/20210908212726.png deleted file mode 100644 index f7d399aecd46605c09fe2d1f50a1a8670cd80432..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/20210908212726.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/D466AC8C-2FAF-4797-9A48-F6C346A1EC77.png b/docs/en/docs/A-Ops/figures/D466AC8C-2FAF-4797-9A48-F6C346A1EC77.png deleted file mode 100644 index 4b937ab846017ead71ca8b5a75b8af1f0f28e1ef..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/D466AC8C-2FAF-4797-9A48-F6C346A1EC77.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/a-ops_architecture.png b/docs/en/docs/A-Ops/figures/a-ops_architecture.png deleted file mode 100644 index 7a831b183e8cba5da16b9be9d965abe9811ada5b..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/a-ops_architecture.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/add_fault_tree.png b/docs/en/docs/A-Ops/figures/add_fault_tree.png deleted file mode 100644 index 664efd5150fcb96f009ce0eddc3d9ac91b9e622f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/add_fault_tree.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/add_host_group.png b/docs/en/docs/A-Ops/figures/add_host_group.png deleted file mode 100644 index ed4ab3616d418ecf33a006fee3985b8b6d2d965d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/add_host_group.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/check.PNG b/docs/en/docs/A-Ops/figures/check.PNG deleted file mode 100644 
index 2dce821dd43eec6f0d13cd6b2dc1e30653f35489..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/check.PNG and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/dashboard.PNG b/docs/en/docs/A-Ops/figures/dashboard.PNG deleted file mode 100644 index 2a4a827191367309aad28a8a6c1835df602bdf72..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/dashboard.PNG and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/decryption.png b/docs/en/docs/A-Ops/figures/decryption.png deleted file mode 100644 index da07cfdf9296e201a82cceb210e651261fe7ecee..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/decryption.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/delete_host_group.png b/docs/en/docs/A-Ops/figures/delete_host_group.png deleted file mode 100644 index e4d85f6e3f1a269a483943f5115f54daa3de51de..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/delete_host_group.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/delete_hosts.png b/docs/en/docs/A-Ops/figures/delete_hosts.png deleted file mode 100644 index b3da935739369dad1318fe135146755ede13c694..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/delete_hosts.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/deploy.PNG b/docs/en/docs/A-Ops/figures/deploy.PNG deleted file mode 100644 index e30dcb0eb05eb4f41202c736863f3e0ff216398d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/deploy.PNG and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/diag.PNG b/docs/en/docs/A-Ops/figures/diag.PNG deleted file mode 100644 index a67e8515b8313a50b06cb985611ef9c166851811..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/diag.PNG and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/diag_error1.png 
b/docs/en/docs/A-Ops/figures/diag_error1.png deleted file mode 100644 index 9e5b1139febe9f00156b37f3268269ac30a78737..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/diag_error1.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/diag_main_page.png b/docs/en/docs/A-Ops/figures/diag_main_page.png deleted file mode 100644 index b536af938250004bac3053b234bf20bcbf075c9b..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/diag_main_page.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/diagnosis.png b/docs/en/docs/A-Ops/figures/diagnosis.png deleted file mode 100644 index 2c85102fe28deaac0a35fde85fd4497994d2c031..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/diagnosis.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/domain.PNG b/docs/en/docs/A-Ops/figures/domain.PNG deleted file mode 100644 index bad499f96df5934565d36edf2308cec5e4147719..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/domain.PNG and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/domain_config.PNG b/docs/en/docs/A-Ops/figures/domain_config.PNG deleted file mode 100644 index 8995424b35cda75f08881037446b7816a0ca09dc..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/domain_config.PNG and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/execute_diag.png b/docs/en/docs/A-Ops/figures/execute_diag.png deleted file mode 100644 index afb5f7e9fbfb1d1ce46d096a61729766b4940cd3..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/execute_diag.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/group.PNG b/docs/en/docs/A-Ops/figures/group.PNG deleted file mode 100644 index 584fd1f7195694a3419482cace2a71fa1cd9a3ec..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/group.PNG and /dev/null differ diff --git 
a/docs/en/docs/A-Ops/figures/host.PNG b/docs/en/docs/A-Ops/figures/host.PNG deleted file mode 100644 index 3c00681a567cf8f1e1baddfb6fdb7b6cf7df43de..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/host.PNG and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/hosts.png b/docs/en/docs/A-Ops/figures/hosts.png deleted file mode 100644 index f4c7b9103baab7748c83392f6120c8f00880860f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/hosts.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/hosts_in_group.png b/docs/en/docs/A-Ops/figures/hosts_in_group.png deleted file mode 100644 index 9f188d207162fa1418a61a10f83ef9c51a512e65..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/hosts_in_group.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/spider.PNG b/docs/en/docs/A-Ops/figures/spider.PNG deleted file mode 100644 index 53bad6dd38e36db9cadfdbeda21cbc3ef59eddf7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/spider.PNG and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/spider_detail.jpg b/docs/en/docs/A-Ops/figures/spider_detail.jpg deleted file mode 100644 index b69636fe2161380be56f37caf7fd904d2e63e302..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/spider_detail.jpg and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/view_fault_tree.png b/docs/en/docs/A-Ops/figures/view_fault_tree.png deleted file mode 100644 index a566417b18e8bcf19153730904893fc8d827d885..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/view_fault_tree.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/view_report.png b/docs/en/docs/A-Ops/figures/view_report.png deleted file mode 100644 index 2029141179302ecef45d34cb0c9dc916b9142e7b..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/view_report.png 
and /dev/null differ diff --git a/docs/en/docs/A-Ops/figures/view_report_list.png b/docs/en/docs/A-Ops/figures/view_report_list.png deleted file mode 100644 index 58307ec6ef4c73b6b0f039b1052e5870629ac2e8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/figures/view_report_list.png and /dev/null differ diff --git a/docs/en/docs/A-Ops/image/image-20230607161545732.png.jpg b/docs/en/docs/A-Ops/image/image-20230607161545732.png.jpg deleted file mode 100644 index ad4d525d163c2e001f7980cde8712519a13125d6..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Ops/image/image-20230607161545732.png.jpg and /dev/null differ diff --git a/docs/en/docs/A-Ops/using-gala-gopher.md b/docs/en/docs/A-Ops/using-gala-gopher.md deleted file mode 100644 index 6277655b7051d11c1254fbf98b63c5285e6d2846..0000000000000000000000000000000000000000 --- a/docs/en/docs/A-Ops/using-gala-gopher.md +++ /dev/null @@ -1,228 +0,0 @@ -# Using gala-gopher - -As a data collection module, gala-gopher provides OS-level monitoring capabilities, supports dynamic probe installation and uninstallation, and integrates third-party probes in a non-intrusive manner to quickly expand the monitoring scope. - -This chapter describes how to deploy and use the gala-gopher service. - -#### Installation - -Mount the repo sources. - -```basic -[oe-2209] # openEuler 22.09 officially released repository -name=oe2209 -baseurl=http://119.3.219.20:82/openEuler:/22.09/standard_x86_64 -enabled=1 -gpgcheck=0 -priority=1 - -[oe-2209:Epol] # openEuler 22.09: Epol officially released repository -name=oe2209_epol -baseurl=http://119.3.219.20:82/openEuler:/22.09:/Epol/standard_x86_64/ -enabled=1 -gpgcheck=0 -priority=1 -``` - -Install gala-gopher. - -```bash -# yum install gala-gopher -``` - - - -#### Configuration - -##### Configuration Description - -The configuration file of gala-gopher is **/opt/gala-gopher/gala-gopher.conf**. 
The configuration items in the file are described as follows (the parts that do not need to be manually configured are not described): - -The following configurations can be modified as required: - -- `global`: gala-gopher global configuration information. - - `log_directory`: gala-gopher log file name. - - `pin_path`: path for storing the map shared by the eBPF probe. You are advised to retain the default value. -- `metric`: metric output mode. - - `out_channel`: metric output channel. The value can be `web_server` or `kafka`. If this parameter is left empty, the output channel is disabled. - - `kafka_topic`: topic configuration information if the output channel is Kafka. -- `event`: output mode of abnormal events. - - `out_channel`: event output channel. The value can be `logs` or `kafka`. If this parameter is left empty, the output channel is disabled. - - `kafka_topic`: topic configuration information if the output channel is Kafka. -- `meta`: metadata output mode. - - `out_channel`: metadata output channel. The value can be `logs` or `kafka`. If this parameter is left empty, the output channel is disabled. - - `kafka_topic`: topic configuration information if the output channel is Kafka. -- `imdb`: cache specification configuration. - - `max_tables_num`: maximum number of cache tables. In the **/opt/gala-gopher/meta** directory, each meta corresponds to a table. - - `max_records_num`: maximum number of records in each cache table. Generally, each probe generates at least one observation record in an observation period. - - `max_metrics_num`: maximum number of metrics contained in each observation record. - - `record_timeout`: aging time of the cache table. If a record in the cache table is not updated within the aging time, the record is deleted. The unit is second. -- `web_server`: configuration of the web_server output channel. - - `port`: listening port. -- `kafka`: configuration of the Kafka output channel. 
- - `kafka_broker`: IP address and port number of the Kafka server. -- `logs`: configuration of the logs output channel. - - `metric_dir`: path for storing metric data logs. - - `event_dir`: path for storing abnormal event data logs. - - `meta_dir`: metadata log path. - - `debug_dir`: path of gala-gopher run logs. -- `probes`: native probe configuration. - - `name`: probe name, which must be the same as the native probe name. For example, the name of the **example.probe** probe is **example**. - - `param`: probe startup parameters. For details about the supported parameters, see [Startup Parameters](#startup-parameters). - - `switch`: whether to start a probe. The value can be `on` or `off`. -- `extend_probes`: third-party probe configuration. - - `name`: probe name. - - `command`: command for starting a probe. - - `param`: probe startup parameters. For details about the supported parameters, see [Startup Parameters](#startup-parameters). - - `start_check`: If `switch` is set to `auto`, the system determines whether to start the probe based on the execution result of `start_check`. - - `switch`: whether to start a probe. The value can be `on`, `off`, or `auto`. The value `auto` determines whether to start the probe based on the result of `start_check`. - -##### Startup Parameters - -| Parameter| Description | -| ------ | ------------------------------------------------------------ | -| -l | Whether to enable the function of reporting abnormal events. | -| -t | Sampling period, in seconds. By default, the probe reports data every 5 seconds. | -| -T | Delay threshold, in ms. The default value is **0**. | -| -J | Jitter threshold, in ms. The default value is **0**. | -| -O | Offline time threshold, in ms. The default value is **0**. | -| -D | Packet loss threshold. The default value is **0**. | -| -F | If this parameter is set to `task`, data is filtered by **task_whitelist.conf**. 
If this parameter is set to the PID of a process, only the process is monitored.| -| -P | Range of probe programs loaded to each probe. Currently, the tcpprobe and taskprobe probes are involved.| -| -U | Resource usage threshold (upper limit). The default value is **0** (%). | -| -L | Resource usage threshold (lower limit). The default value is **0** (%). | -| -c | Whether the probe (TCP) identifies `client_port`. The default value is **0** (no). | -| -N | Name of the observation process of the specified probe (ksliprobe). The default value is **NULL**. | -| -p | Binary file path of the process to be observed, for example, `nginx_probe`. You can run `-p /user/local/sbin/nginx` to specify the Nginx file path. The default value is **NULL**.| -| -w | Filtering scope of monitored applications, for example, `-w /opt/gala-gopher/task_whitelist.conf`. You can write the names of the applications to be monitored to the **task_whitelist.conf** file. The default value is **NULL**, indicating that the applications are not filtered.| -| -n | NIC to mount tc eBPF. The default value is **NULL**, indicating that all NICs are mounted. Example: `-n eth0`| - -##### Configuration File Example - -- Select the data output channels. - - ```yaml - metric = - { - out_channel = "web_server"; - kafka_topic = "gala_gopher"; - }; - - event = - { - out_channel = "kafka"; - kafka_topic = "gala_gopher_event"; - }; - - meta = - { - out_channel = "kafka"; - kafka_topic = "gala_gopher_metadata"; - }; - ``` - -- Configure Kafka and Web Server. - - ```yaml - web_server = - { - port = 8888; - }; - - kafka = - { - kafka_broker = ":9092"; - }; - ``` - -- Select the probe to be enabled. The following is an example. 
- - ```yaml - probes = - ( - { - name = "system_infos"; - param = "-t 5 -w /opt/gala-gopher/task_whitelist.conf -l warn -U 80"; - switch = "on"; - }, - ); - extend_probes = - ( - { - name = "tcp"; - command = "/opt/gala-gopher/extend_probes/tcpprobe"; - param = "-l warn -c 1 -P 7"; - switch = "on"; - } - ); - ``` - - - -#### Start - -After the configuration is complete, start gala-gopher. - -```bash -# systemctl start gala-gopher.service -``` - -Query the status of the gala-gopher service. - -```bash -# systemctl status gala-gopher.service -``` - -If the status shown in the following figure is displayed, the service has started successfully. Also check whether the enabled probes are started: if a probe thread does not exist, check the configuration file and the gala-gopher run log file. - -![gala-gopher started successfully](./figures/gala-gopher成功启动状态.png) - -> Note: Root permission is required for deploying and running gala-gopher. - - - -#### How to Use - -##### Deployment of External Dependent Software - -![gala-gopher software architecture](./figures/gopher软件架构图.png) - -As shown in the preceding figure, the green parts are the external components that gala-gopher depends on. gala-gopher outputs metric data to Prometheus, and outputs metadata and abnormal events to Kafka. gala-anteater and gala-spider, shown in gray rectangles, obtain data from Prometheus and Kafka. - -> Note: Obtain the installation packages of Kafka and Prometheus from their official websites. - - - -##### Output Data - -- **Metric** - - Prometheus Server has a built-in expression browser UI. You can use PromQL statements to query metric data. For details, see [Using the expression browser](https://prometheus.io/docs/prometheus/latest/getting_started/#using-the-expression-browser) in the official document. The following is an example. 
- - If the specified metric is `gala_gopher_tcp_link_rcv_rtt`, the metric data displayed on the UI is as follows: - - ```basic - gala_gopher_tcp_link_rcv_rtt{client_ip="x.x.x.165",client_port="1234",hostname="openEuler",instance="x.x.x.172:8888",job="prometheus",machine_id="1fd3774xx",protocol="2",role="0",server_ip="x.x.x.172",server_port="3742",tgid="1516"} 1 - ``` - -- **Metadata** - - You can directly consume data from the Kafka topic `gala_gopher_metadata`. The following is an example. - - ```bash - # Input request - ./bin/kafka-console-consumer.sh --bootstrap-server x.x.x.165:9092 --topic gala_gopher_metadata - # Output data - {"timestamp": 1655888408000, "meta_name": "thread", "entity_name": "thread", "version": "1.0.0", "keys": ["machine_id", "pid"], "labels": ["hostname", "tgid", "comm", "major", "minor"], "metrics": ["fork_count", "task_io_wait_time_us", "task_io_count", "task_io_time_us", "task_hang_count"]} - ``` - -- **Abnormal events** - - You can directly consume data from the Kafka topic `gala_gopher_event`. The following is an example. 
- - ```bash - # Input request - ./bin/kafka-console-consumer.sh --bootstrap-server x.x.x.165:9092 --topic gala_gopher_event - # Output data - {"timestamp": 1655888408000, "meta_name": "thread", "entity_name": "thread", "version": "1.0.0", "keys": ["machine_id", "pid"], "labels": ["hostname", "tgid", "comm", "major", "minor"], "metrics": ["fork_count", "task_io_wait_time_us", "task_io_count", "task_io_time_us", "task_hang_count"]} - ``` diff --git a/docs/en/docs/A-Tune/figures/en-us_image_0213178480.png b/docs/en/docs/A-Tune/figures/en-us_image_0213178480.png deleted file mode 100644 index ad5ed3f7beeb01e6a48707c4806606b41d687e22..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Tune/figures/en-us_image_0213178480.png and /dev/null differ diff --git a/docs/en/docs/A-Tune/getting-to-know-a-tune.md b/docs/en/docs/A-Tune/getting-to-know-a-tune.md deleted file mode 100644 index 2092e0152e2c31ea4bf1aa95277302bcc981b6a9..0000000000000000000000000000000000000000 --- a/docs/en/docs/A-Tune/getting-to-know-a-tune.md +++ /dev/null @@ -1,195 +0,0 @@ -# Getting to Know A-Tune - -- [Getting to Know A-Tune](#getting-to-know-a-tune) - - [Introduction](#introduction) - - [Architecture](#architecture) - - [Supported Features and Service Models](#supported-features-and-service-models) - - - -## Introduction - -An operating system \(OS\) is basic software that connects applications and hardware. It is critical for users to adjust OS and application configurations and make full use of software and hardware capabilities to achieve optimal service performance. However, numerous workload types and varied applications run on the OS, and the requirements on resources are different. Currently, the application environment composed of hardware and software involves more than 7000 configuration objects. As the service complexity and optimization objects increase, the time cost for optimization increases exponentially. As a result, optimization efficiency decreases sharply. 
Optimization becomes complex and brings great challenges to users. - -Second, as infrastructure software, the OS provides a large number of software and hardware management capabilities. The capability required varies in different scenarios. Therefore, capabilities need to be enabled or disabled depending on scenarios, and a combination of capabilities will maximize the optimal performance of applications. - -In addition, the actual business embraces hundreds and thousands of scenarios, and each scenario involves a wide variety of hardware configurations for computing, network, and storage. The lab cannot list all applications, business scenarios, and hardware combinations. - -To address the preceding challenges, openEuler launches A-Tune. - -A-Tune is an AI-based engine that optimizes system performance. It uses AI technologies to precisely profile business scenarios, discover and infer business characteristics, so as to make intelligent decisions, match with the optimal system parameter configuration combination, and give recommendations, ensuring the optimal business running status. - -![](figures/en-us_image_0227497000.png) - -## Architecture - -The following figure shows the A-Tune core technical architecture, which consists of intelligent decision-making, system profile, and interaction system. - -- Intelligent decision-making layer: consists of the awareness and decision-making subsystems, which implements intelligent awareness of applications and system optimization decision-making, respectively. -- System profile layer: consists of the feature engineering and two-layer classification model. The feature engineering is used to automatically select service features, and the two-layer classification model is used to learn and classify service models. -- Interaction system layer: monitors and configures various system resources and executes optimization policies. 
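-In practice, the sensing and decision-making described above are driven through the `atune-adm` command-line client. The sketch below is illustrative only (the `list` and `analysis` subcommands follow the A-Tune documentation, but exact options depend on the installed version), and it is guarded so that it is a no-op on hosts without A-Tune installed:
-
-```shell
-# Illustrative A-Tune workflow via the atune-adm CLI (assumed to be provided by
-# the atune package; subcommand availability depends on the installed version).
-if command -v atune-adm >/dev/null 2>&1; then
-    atune-adm list        # show the available profiles and which one is active
-    atune-adm analysis    # sense the current workload and apply the best-match profile
-    atune_state="available"
-else
-    echo "atune-adm not installed; skipping"
-    atune_state="missing"
-fi
-```
-
-On a host where the atuned service is running, `atune-adm analysis` performs the workload sensing and matches the optimal profile automatically; a named profile can also be activated directly with `atune-adm profile <name>`.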
- -![](figures/en-us_image_0227497343.png) - -## Supported Features and Service Models - -### Supported Features - -[Table 1](#table1919220557576) describes the main features supported by A-Tune, feature maturity, and usage suggestions. - -**Table 1** Feature maturity - - - - - - - - - - - - - - - - - - - -

| Feature | Maturity | Usage Suggestion |
| ------- | -------- | ---------------- |
| Auto optimization of 15 applications in 11 workload types | Tested | Pilot |
| User-defined profile and service models | Tested | Pilot |
| Automatic parameter optimization | Tested | Pilot |
- - -### Supported Service Models - -Based on the workload characteristics of applications, A-Tune classifies services into 11 types. For details about the bottleneck of each type and the applications supported by A-Tune, see [Table 2](#table2819164611311). - -**Table 2** Supported workload types and applications - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

| Service category | Type | Bottleneck | Supported Application |
| ---------------- | ---- | ---------- | --------------------- |
| default | Default type | Low resource usage in terms of cpu, memory, network, and I/O | N/A |
| webserver | Web application | Bottlenecks of cpu and network | Nginx, Apache Traffic Server |
| database | Database | Bottlenecks of cpu, memory, and I/O | Mongodb, Mysql, Postgresql, Mariadb |
| big_data | Big data | Bottlenecks of cpu and memory | Hadoop-hdfs, Hadoop-spark |
| middleware | Middleware framework | Bottlenecks of cpu and network | Dubbo |
| in-memory_database | Memory database | Bottlenecks of memory and I/O | Redis |
| basic-test-suite | Basic test suite | Bottlenecks of cpu and memory | SPECCPU2006, SPECjbb2015 |
| hpc | Human genome | Bottlenecks of cpu, memory, and I/O | Gatk4 |
| storage | Storage | Bottlenecks of network and I/O | Ceph |
| virtualization | Virtualization | Bottlenecks of cpu, memory, and I/O | Consumer-cloud, Mariadb |
| docker | Docker | Bottlenecks of cpu, memory, and I/O | Mariadb |
- - - diff --git a/docs/en/docs/A-Tune/public_sys-resources/icon-danger.gif b/docs/en/docs/A-Tune/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Tune/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/A-Tune/public_sys-resources/icon-tip.gif b/docs/en/docs/A-Tune/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Tune/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/A-Tune/public_sys-resources/icon-warning.gif b/docs/en/docs/A-Tune/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/A-Tune/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/A-Tune/usage-instructions.md b/docs/en/docs/A-Tune/usage-instructions.md deleted file mode 100644 index 49524f10328b470080145f008d8d3e520f5f0e5d..0000000000000000000000000000000000000000 --- a/docs/en/docs/A-Tune/usage-instructions.md +++ /dev/null @@ -1,1146 +0,0 @@ -# Usage Instructions - -You can use functions provided by A-Tune through the CLI client atune-adm. This chapter describes the functions and usage of the A-Tune client. 
- -- [Usage Instructions](#usage-instructions) - - [Overview](#overview) - - [Querying Workload Types](#querying-workload-types) - - [list](#list) - - [Workload Type Analysis and Auto Optimization](#workload-type-analysis-and-auto-optimization) - - [analysis](#analysis) - - [User-defined Model](#user-defined-model) - - [define](#define) - - [collection](#collection) - - [train](#train) - - [undefine](#undefine) - - [Querying Profiles](#querying-profiles) - - [info](#info) - - [Updating a Profile](#updating-a-profile) - - [update](#update) - - [Activating a Profile](#activating-a-profile) - - [profile](#profile) - - [Rolling Back Profiles](#rolling-back-profiles) - - [rollback](#rollback) - - [Updating Database](#updating-database) - - [upgrade](#upgrade) - - [Querying System Information](#querying-system-information) - - [check](#check) - - [Automatic Parameter Optimization](#automatic-parameter-optimization) - - [Tuning](#tuning) - - - -## Overview - -- You can run the **atune-adm help/--help/-h** command to query commands supported by atune-adm. -- The **define**, **update**, **undefine**, **collection**, **train**, and **upgrade **commands do not support remote execution. -- In the command format, brackets \(\[\]\) indicate that the parameter is optional, and angle brackets \(<\>\) indicate that the parameter is mandatory. The actual parameters prevail. - - -## Querying Workload Types - - - -### list - -#### Function - -Query the supported profiles, and the values of Active. 
- -#### Format - -**atune-adm list** - -#### Example - -``` -# atune-adm list - -Support profiles: -+------------------------------------------------+-----------+ -| ProfileName | Active | -+================================================+===========+ -| arm-native-android-container-robox | false | -+------------------------------------------------+-----------+ -| basic-test-suite-euleros-baseline-fio | false | -+------------------------------------------------+-----------+ -| basic-test-suite-euleros-baseline-lmbench | false | -+------------------------------------------------+-----------+ -| basic-test-suite-euleros-baseline-netperf | false | -+------------------------------------------------+-----------+ -| basic-test-suite-euleros-baseline-stream | false | -+------------------------------------------------+-----------+ -| basic-test-suite-euleros-baseline-unixbench | false | -+------------------------------------------------+-----------+ -| basic-test-suite-speccpu-speccpu2006 | false | -+------------------------------------------------+-----------+ -| basic-test-suite-specjbb-specjbb2015 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-hdfs-dfsio-hdd | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-hdfs-dfsio-ssd | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-bayesian | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-kmeans | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql1 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql10 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql2 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql3 | false | 
-+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql4 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql5 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql6 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql7 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql8 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-sql9 | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-tersort | false | -+------------------------------------------------+-----------+ -| big-data-hadoop-spark-wordcount | false | -+------------------------------------------------+-----------+ -| cloud-compute-kvm-host | false | -+------------------------------------------------+-----------+ -| database-mariadb-2p-tpcc-c3 | false | -+------------------------------------------------+-----------+ -| database-mariadb-4p-tpcc-c3 | false | -+------------------------------------------------+-----------+ -| database-mongodb-2p-sysbench | false | -+------------------------------------------------+-----------+ -| database-mysql-2p-sysbench-hdd | false | -+------------------------------------------------+-----------+ -| database-mysql-2p-sysbench-ssd | false | -+------------------------------------------------+-----------+ -| database-postgresql-2p-sysbench-hdd | false | -+------------------------------------------------+-----------+ -| database-postgresql-2p-sysbench-ssd | false | -+------------------------------------------------+-----------+ -| default-default | false | -+------------------------------------------------+-----------+ -| docker-mariadb-2p-tpcc-c3 | false | -+------------------------------------------------+-----------+ -| docker-mariadb-4p-tpcc-c3 | false | 
-+------------------------------------------------+-----------+ -| hpc-gatk4-human-genome | false | -+------------------------------------------------+-----------+ -| in-memory-database-redis-redis-benchmark | false | -+------------------------------------------------+-----------+ -| middleware-dubbo-dubbo-benchmark | false | -+------------------------------------------------+-----------+ -| storage-ceph-vdbench-hdd | false | -+------------------------------------------------+-----------+ -| storage-ceph-vdbench-ssd | false | -+------------------------------------------------+-----------+ -| virtualization-consumer-cloud-olc | false | -+------------------------------------------------+-----------+ -| virtualization-mariadb-2p-tpcc-c3 | false | -+------------------------------------------------+-----------+ -| virtualization-mariadb-4p-tpcc-c3 | false | -+------------------------------------------------+-----------+ -| web-apache-traffic-server-spirent-pingpo | false | -+------------------------------------------------+-----------+ -| web-nginx-http-long-connection | true | -+------------------------------------------------+-----------+ -| web-nginx-https-short-connection | false | -+------------------------------------------------+-----------+ - -``` - ->![](public_sys-resources/icon-note.gif) **NOTE:** ->If the value of Active is **true**, the profile is activated. In the example, the profile of web-nginx-http-long-connection is activated. - -## Workload Type Analysis and Auto Optimization - - -### analysis - -#### Function - -Collect real-time statistics from the system to identify and automatically optimize workload types. - -#### Format - -**atune-adm analysis** \[OPTIONS\] - -#### Parameter Description - -- OPTIONS - - - - - - - - - - - - - - - - - - - - -

| Parameter | Description |
| --------- | ----------- |
| --model, -m | New model generated after user self-training |
| --characterization, -c | Use the default model for application identification and do not perform automatic optimization |
| --times value, -t value | Time duration for data collection |
| --script value, -s value | File to be executed |
- - -#### Example - -- Use the default model for application identification. - - ``` - # atune-adm analysis --characterization - ``` - -- Use the default model to identify applications and perform automatic tuning. - - ``` - # atune-adm analysis - ``` - -- Use the user-defined training model for recognition. - - ``` - # atune-adm analysis --model /usr/libexec/atuned/analysis/models/new-model.m - ``` - - -## User-defined Model - -A-Tune allows users to define and learn new models. To define a new model, perform the following steps: - -1. Run the **define** command to define a new profile. -2. Run the **collection** command to collect the system data corresponding to the application. -3. Run the **train** command to train the model. - - -### define - -#### Function - -Add a user-defined application scenarios and the corresponding profile tuning items. - -#### Format - -**atune-adm define** - -#### Example - -Add a profile whose service_type is **test_service**, application_name is **test_app**, scenario_name is **test_scenario**, and tuning item configuration file is **example.conf**. - -``` -# atune-adm define test_service test_app test_scenario ./example.conf -``` - -The **example.conf** file can be written as follows (the following optimization items are optional and are for reference only). You can also run the **atune-adm info** command to view how the existing profile is written. 
- -``` - [main] - # list its parent profile - [kernel_config] - # to change the kernel config - [bios] - # to change the bios config - [bootloader.grub2] - # to change the grub2 config - [sysfs] - # to change the /sys/* config - [systemctl] - # to change the system service status - [sysctl] - # to change the /proc/sys/* config - [script] - # the script extension of cpi - [ulimit] - # to change the resources limit of user - [schedule_policy] - # to change the schedule policy - [check] - # check the environment - [tip] - # the recommended optimization, which should be performed manunaly -``` - -### collection - -#### Function - -Collect the global resource usage and OS status information during service running, and save the collected information to a CSV output file as the input dataset for model training. - ->![](public_sys-resources/icon-note.gif) **NOTE:** ->- This command depends on the sampling tools such as perf, mpstat, vmstat, iostat, and sar. ->- Currently, only the Kunpeng 920 CPU is supported. You can run the **dmidecode -t processor** command to check the CPU model. - -#### Format - -**atune-adm collection** - -#### Parameter Description - -- OPTIONS - - - - - - - - - - - - - - - - - - - - - - - - - - - -

| Parameter | Description |
| --------- | ----------- |
| --filename, -f | Name of the generated CSV file used for training: name-timestamp.csv |
| --output_path, -o | Path for storing the generated CSV file. The absolute path is required. |
| --disk, -b | Disk used during service running, for example, /dev/sda. |
| --network, -n | Network port used during service running, for example, eth0. |
| --app_type, -t | Mark the application type of the service as a label for training. |
| --duration, -d | Data collection time during service running, in seconds. The default collection time is 1200 seconds. |
| --interval, -i | Interval for collecting data, in seconds. The default interval is 5 seconds. |
- - -#### Example - -``` -# atune-adm collection --filename name --interval 5 --duration 1200 --output_path /home/data --disk sda --network eth0 --app_type test_service-test_app-test_scenario -``` - -> Note: -> -> In the example, data is collected every 5 seconds for a duration of 1200 seconds. The collected data is stored as the *name* file in the **/home/data** directory. The application type of the service is defined by the `atune-adm define` command, which is **test_service-test_app-test_scenario** in this example. -> The data collection interval and duration can be specified using the preceding command options. - - -### train - -#### Function - -Use the collected data to train the model. Collect data of at least two application types during training. Otherwise, an error is reported. - -#### Format - -**atune-adm train** - -#### Parameter Description - -- OPTIONS - - | Parameter | Description | - | ----------------- | ------------------------------------------------------ | - | --data_path, -d | Path for storing CSV files required for model training | - | --output_file, -o | Model generated through training | - - -#### Example - -Use the CSV file in the **data** directory as the training input. The generated model **new-model.m** is stored in the **model** directory. - -``` -# atune-adm train --data_path /home/data --output_file /usr/libexec/atuned/analysis/models/new-model.m -``` - -### undefine - -#### Function - -Delete a user-defined profile. - -#### Format - -**atune-adm undefine** - -#### Example - -Delete the user-defined profile. - -``` -# atune-adm undefine test_service-test_app-test_scenario -``` - -## Querying Profiles - - -### info - -#### Function - -View the profile content. - -#### Format - -**atune-adm info** - -#### Example - -View the profile content of web-nginx-http-long-connection. 
- -``` -# atune-adm info web-nginx-http-long-connection - -*** web-nginx-http-long-connection: - -# -# nginx http long connection A-Tune configuration -# -[main] -include = default-default - -[kernel_config] -#TODO CONFIG - -[bios] -#TODO CONFIG - -[bootloader.grub2] -iommu.passthrough = 1 - -[sysfs] -#TODO CONFIG - -[systemctl] -sysmonitor = stop -irqbalance = stop - -[sysctl] -fs.file-max = 6553600 -fs.suid_dumpable = 1 -fs.aio-max-nr = 1048576 -kernel.shmmax = 68719476736 -kernel.shmall = 4294967296 -kernel.shmmni = 4096 -kernel.sem = 250 32000 100 128 -net.ipv4.tcp_tw_reuse = 1 -net.ipv4.tcp_syncookies = 1 -net.ipv4.ip_local_port_range = 1024 65500 -net.ipv4.tcp_max_tw_buckets = 5000 -net.core.somaxconn = 65535 -net.core.netdev_max_backlog = 262144 -net.ipv4.tcp_max_orphans = 262144 -net.ipv4.tcp_max_syn_backlog = 262144 -net.ipv4.tcp_timestamps = 0 -net.ipv4.tcp_synack_retries = 1 -net.ipv4.tcp_syn_retries = 1 -net.ipv4.tcp_fin_timeout = 1 -net.ipv4.tcp_keepalive_time = 60 -net.ipv4.tcp_mem = 362619 483495 725238 -net.ipv4.tcp_rmem = 4096 87380 6291456 -net.ipv4.tcp_wmem = 4096 16384 4194304 -net.core.wmem_default = 8388608 -net.core.rmem_default = 8388608 -net.core.rmem_max = 16777216 -net.core.wmem_max = 16777216 - -[script] -prefetch = off -ethtool = -X {network} hfunc toeplitz - -[ulimit] -{user}.hard.nofile = 102400 -{user}.soft.nofile = 102400 - -[schedule_policy] -#TODO CONFIG - -[check] -#TODO CONFIG - -[tip] -SELinux provides extra control and security features to linux kernel. Disabling SELinux will improve the performance but may cause security risks. = kernel -disable the nginx log = application -``` - -## Updating a Profile - -You can update the existing profile as required. - - -### update - -#### Function - -Update the original tuning items in the existing profile to the content in the **new.conf** file. 
- -#### Format - -**atune-adm update** - -#### Example - -Change the tuning item of the profile named **test_service-test_app-test_scenario** to **new.conf**. - -``` -# atune-adm update test_service-test_app-test_scenario ./new.conf -``` - -## Activating a Profile - -### profile - -#### Function - -Manually activate the profile to make it in the active state. - -#### Format - -**atune-adm profile** - -#### Parameter Description - -For details about the profile name, see the query result of the list command. - -#### Example - -Activate the profile corresponding to the web-nginx-http-long-connection. - -``` -# atune-adm profile web-nginx-http-long-connection -``` - -## Rolling Back Profiles - -### rollback - -#### Functions - -Roll back the current configuration to the initial configuration of the system. - -#### Format - -**atune-adm rollback** - -#### Example - -``` -# atune-adm rollback -``` - -## Updating Database - -### upgrade - -#### Function - -Update the system database. - -#### Format - -**atune-adm upgrade** - -#### Parameter Description - -- DB\_FILE - - New database file path. - - -#### Example - -The database is updated to **new\_sqlite.db**. - -``` -# atune-adm upgrade ./new_sqlite.db -``` - -## Querying System Information - - -### check - -#### Function - -Check the CPU, BIOS, OS, and NIC information. 
- -#### Format - -**atune-adm check** - -#### Example - -``` -# atune-adm check - cpu information: - cpu:0 version: Kunpeng 920-6426 speed: 2600000000 HZ cores: 64 - cpu:1 version: Kunpeng 920-6426 speed: 2600000000 HZ cores: 64 - system information: - DMIBIOSVersion: 0.59 - OSRelease: 4.19.36-vhulk1906.3.0.h356.eulerosv2r8.aarch64 - network information: - name: eth0 product: HNS GE/10GE/25GE RDMA Network Controller - name: eth1 product: HNS GE/10GE/25GE Network Controller - name: eth2 product: HNS GE/10GE/25GE RDMA Network Controller - name: eth3 product: HNS GE/10GE/25GE Network Controller - name: eth4 product: HNS GE/10GE/25GE RDMA Network Controller - name: eth5 product: HNS GE/10GE/25GE Network Controller - name: eth6 product: HNS GE/10GE/25GE RDMA Network Controller - name: eth7 product: HNS GE/10GE/25GE Network Controller - name: docker0 product: -``` - -## Automatic Parameter Optimization - -A-Tune provides the automatic search capability with the optimal configuration, saving the trouble of manually configuring parameters and performance evaluation. This greatly improves the search efficiency of optimal configurations. - - -### Tuning - -#### Function - -Use the specified project file to search the dynamic space for parameters and find the optimal solution under the current environment configuration. - -#### Format - -**atune-adm tuning** \[OPTIONS\] - ->![](public_sys-resources/icon-note.gif) **NOTE:** ->Before running the command, ensure that the following conditions are met: ->1. The YAML configuration file on the server has been edited and stored in the **/etc/atuned/tuning/** directory of the atuned service. ->2. The YAML configuration file of the client has been edited and stored on the atuned client. - -#### Parameter Description - -- OPTIONS - - - - - - - - - - - - - - - - - - -

| Parameter | Description |
| --------- | ----------- |
| --restore, -r | Restores the initial configuration before tuning. |
| --project, -p | Specifies the project name in the YAML file to be restored. |
| --restart, -c | Performs tuning based on historical tuning results. |
| --detail, -d | Prints detailed information about the tuning process. |
- - - >![](public_sys-resources/icon-note.gif) **NOTE:** - >If this parameter is used, the -p parameter must be followed by a specific project name and the YAML file of the project must be specified. - - -- **PROJECT\_YAML**: YAML configuration file of the client. - -#### Configuration Description - -**Table 1** YAML file on the server - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

| Name | Description | Type | Value Range |
| ---- | ----------- | ---- | ----------- |
| project | Project name. | Character string | - |
| startworkload | Script for starting the service to be optimized. | Character string | - |
| stopworkload | Script for stopping the service to be optimized. | Character string | - |
| maxiterations | Maximum number of optimization iterations, which is used to limit the number of iterations on the client. Generally, the more optimization iterations, the better the optimization effect, but the longer the time required. Set this parameter based on the site requirements. | Integer | >10 |
| object | Parameters to be optimized and related information. For details about the object configuration items, see Table 2. | - | - |
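Read together, the fields in Table 1 give the server-side YAML file this overall shape (a sketch with illustrative values, not a working project):

```
project: "my_project"       # project name
startworkload: ""           # script that starts the service to be optimized
stopworkload: ""            # script that stops the service to be optimized
maxiterations: 100          # more iterations improve results but take longer
object:
  - name: "my_parameter"    # one entry per tuning parameter; see Table 2
```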
- -**Table 2** Description of object configuration items - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

| Name | Description | Type | Value Range |
| ---- | ----------- | ---- | ----------- |
| name | Parameter to be optimized. | Character string | - |
| desc | Description of parameters to be optimized. | Character string | - |
| get | Script for querying parameter values. | - | - |
| set | Script for setting parameter values. | - | - |
| needrestart | Specifies whether to restart the service for the parameter to take effect. | Enumeration | true or false |
| type | Parameter type. Currently, the discrete and continuous types are supported. | Enumeration | discrete or continuous |
| dtype | This parameter is available only when type is set to discrete. Currently, int, float and string are supported. | Enumeration | int, float, string |
| scope | Parameter setting range. This parameter is valid only when type is set to discrete and dtype is set to int or float, or type is set to continuous. | Integer/Float | The value is user-defined and must be within the valid range of this parameter. |
| step | Parameter value step, which is used when dtype is set to int or float. | Integer/Float | This value is user-defined. |
| items | Enumerated value of which the parameter value is not within the scope. This is used when dtype is set to int or float. | Integer/Float | The value is user-defined and must be within the valid range of this parameter. |
| options | Enumerated value range of the parameter value, which is used when dtype is set to string. | Character string | The value is user-defined and must be within the valid range of this parameter. |
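For instance, a single entry under `object` combining the items in Table 2 might be sketched as follows (the parameter and its get/set scripts are illustrative, not taken from a shipped profile):

```
object:
  - name: "vm.swappiness"
    info:
      desc: "kernel swappiness value"
      get: "sysctl -n vm.swappiness"
      set: "sysctl -w vm.swappiness=$value"
      needrestart: "false"
      type: "continuous"
      scope:
        - 0
        - 100
      dtype: "int"
```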
- -**Table 3** Description of configuration items of a YAML file on the client - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

| Name | Description | Type | Value Range |
| ---- | ----------- | ---- | ----------- |
| project | Project name, which must be the same as that in the configuration file on the server. | Character string | - |
| engine | Tuning algorithm. | Character string | "random", "forest", "gbrt", "bayes", "extraTrees" |
| iterations | Number of optimization iterations. | Integer | ≥ 10 |
| random_starts | Number of random iterations. | Integer | < iterations |
| feature_filter_engine | Parameter search algorithm, which is used to select important parameters. This parameter is optional. | Character string | "lhs" |
| feature_filter_cycle | Parameter search cycles, which is used to select important parameters. This parameter is used together with feature_filter_engine. | Integer | - |
| feature_filter_iters | Number of iterations for each cycle of parameter search, which is used to select important parameters. This parameter is used together with feature_filter_engine. | Integer | - |
| split_count | Number of evenly selected parameters in the value range of tuning parameters, which is used to select important parameters. This parameter is used together with feature_filter_engine. | Integer | - |
| benchmark | Performance test script. | - | - |
| evaluations | Performance test evaluation index. For details about the evaluations configuration items, see Table 4. | - | - |
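A client-side file built from these items might be sketched as follows (values are illustrative, and the benchmark path is a placeholder):

```
project: "my_project"       # must match the project name on the server
engine: "bayes"             # one of random, forest, gbrt, bayes, extraTrees
iterations: 20
random_starts: 10
benchmark: "bash /path/to/benchmark.sh"
evaluations:
  - name: "throughput"      # entries described in Table 4
```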
- - -**Table 4** Description of evaluations configuration item - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

| Name | Description | Type | Value Range |
| ---- | ----------- | ---- | ----------- |
| name | Evaluation index name. | Character string | - |
| get | Script for obtaining performance evaluation results. | - | - |
| type | Specifies a positive or negative type of the evaluation result. The value positive indicates that the performance value is minimized, and the value negative indicates that the performance value is maximized. | Enumeration | positive or negative |
| weight | Weight of the index. The value ranges from 0 to 100. | Integer | 0-100 |
| threshold | Minimum performance requirement of the index. | Integer | User-defined |
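To illustrate the weight and type semantics above, the following sketch shows one way the weighted evaluation indexes could be folded into a single tuning cost. This is an assumption for illustration, not A-Tune's actual internal formula.

```python
# Illustrative sketch only: combine weighted evaluation indexes into one cost.
def combined_score(evaluations):
    """evaluations: list of dicts with keys 'value', 'weight' (0-100), 'type'."""
    total = 0.0
    for e in evaluations:
        # "positive" means the index should be minimized, so it adds to the
        # cost; "negative" means it should be maximized, so it subtracts.
        sign = 1.0 if e["type"] == "positive" else -1.0
        total += sign * (e["weight"] / 100.0) * e["value"]
    return total

# Mirrors the compress example: time (weight 20, minimize) and
# compress_ratio (weight 80, maximize).
score = combined_score([
    {"value": 12.5, "weight": 20, "type": "positive"},
    {"value": 3.2, "weight": 80, "type": "negative"},
])
print(round(score, 3))
```

A lower score is better under this convention: improving the maximized index (negative type) or reducing the minimized one (positive type) both decrease the cost.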
- -#### Example - -The following is an example of the YAML file configuration on a server: - -``` -project: "compress" -maxiterations: 500 -startworkload: "" -stopworkload: "" -object : - - - name : "compressLevel" - info : - desc : "The compresslevel parameter is an integer from 1 to 9 controlling the level of compression" - get : "cat /root/A-Tune/examples/tuning/compress/compress.py | grep 'compressLevel=' | awk -F '=' '{print $2}'" - set : "sed -i 's/compressLevel=\\s*[0-9]*/compressLevel=$value/g' /root/A-Tune/examples/tuning/compress/compress.py" - needrestart : "false" - type : "continuous" - scope : - - 1 - - 9 - dtype : "int" - - - name : "compressMethod" - info : - desc : "The compressMethod parameter is a string controlling the compression method" - get : "cat /root/A-Tune/examples/tuning/compress/compress.py | grep 'compressMethod=' | awk -F '=' '{print $2}' | sed 's/\"//g'" - set : "sed -i 's/compressMethod=\\s*[0-9,a-z,\"]*/compressMethod=\"$value\"/g' /root/A-Tune/examples/tuning/compress/compress.py" - needrestart : "false" - type : "discrete" - options : - - "bz2" - - "zlib" - - "gzip" - dtype : "string" -``` - -   - -The following is an example of the YAML file configuration on a client: - -``` -project: "compress" -engine : "gbrt" -iterations : 20 -random_starts : 10 - -benchmark : "python3 /root/A-Tune/examples/tuning/compress/compress.py" -evaluations : - - - name: "time" - info: - get: "echo '$out' | grep 'time' | awk '{print $3}'" - type: "positive" - weight: 20 - - - name: "compress_ratio" - info: - get: "echo '$out' | grep 'compress_ratio' | awk '{print $3}'" - type: "negative" - weight: 80 -``` - -   - -#### Example - -- Download test data. - ``` - wget http://cs.fit.edu/~mmahoney/compression/enwik8.zip - ``` -- Prepare the tuning environment. 
- - Example of **prepare.sh**: - ``` - #!/usr/bin/bash - if [ "$#" -ne 1 ]; then - echo "USAGE: $0 the path of enwik8.zip" - exit 1 - fi - - path=$( - cd "$(dirname "$0")" - pwd - ) - - echo "unzip enwik8.zip" - unzip "$path"/enwik8.zip - - echo "set FILE_PATH to the path of enwik8 in compress.py" - sed -i "s#compress/enwik8#$path/enwik8#g" "$path"/compress.py - - echo "update the client and server yaml files" - sed -i "s#python3 .*compress.py#python3 $path/compress.py#g" "$path"/compress_client.yaml - sed -i "s# compress/compress.py# $path/compress.py#g" "$path"/compress_server.yaml - - echo "copy the server yaml file to /etc/atuned/tuning/" - cp "$path"/compress_server.yaml /etc/atuned/tuning/ - ``` - Run the script. - ``` - sh prepare.sh enwik8.zip - ``` -- Run the `tuning` command to tune the parameters. - - ``` - atune-adm tuning --project compress --detail compress_client.yaml - ``` - -- Restore the configuration before running `tuning`. **compress** indicates the project name in the YAML file. - - ``` - atune-adm tuning --restore --project compress - ``` \ No newline at end of file diff --git a/docs/en/docs/Administration/IMA.md b/docs/en/docs/Administration/IMA.md deleted file mode 100644 index c0610aa41da74dadc2645e6e0597036fd5d47b1e..0000000000000000000000000000000000000000 --- a/docs/en/docs/Administration/IMA.md +++ /dev/null @@ -1,1366 +0,0 @@ - -# Integrity Measurement Architecture - -## Overview - -### Introduction to the Integrity Measurement Architecture (IMA) - -IMA is a kernel subsystem that measures files access through system calls such as `execve()`, `mmap()`, and `open()` based on custom policies. These measurements can be used for **local and remote attestation** or compared against known references to **control file access**. - -IMA primarily operates in two modes: - -- Measurement: This mode provides visibility into the integrity of files. 
When a protected file is accessed, a measurement record is added to the measurement log (located in kernel memory). If the system includes a Trusted Platform Module (TPM) chip, the measurement digest can be extended into the Platform Configure Register (PCR) of TPM to prevent tampering. This mode does not control file access but allows upper-layer applications to use the recorded file information for remote attestation. -- Appraisal: This mode verifies file integrity, preventing access to unknown or tampered files. It employs cryptographic techniques like hashing, signing, and HMAC to validate file content. If validation fails, access to the file is denied for all processes. This feature offers a fundamental layer of system resilience by sacrificing access to potentially compromised files, thus limiting the impact of attacks. - -In essence, the measurement mode acts as a passive observer, while the appraisal mode acts as a strict security guard, refusing access to any file with inconsistencies between its identity and measured attributes. - -### Introduction to the Extended Verification Module (EVM) - -EVM extends the capabilities of IMA. Building upon the file content integrity protection of IMA, EVM safeguards extended file attributes, including the UID, `security.ima`, and `security.selinux`. - -### Introduction to IMA Digest Lists - -IMA digest lists, an openEuler enhancement to the native integrity protection mechanism of the kernel, address several limitations of IMA/EVM: - -**TPM extension performance impact:** - -In the IMA measurement mode, accessing the TPM chip for each measurement event, a relatively slow process using low-frequency (dozens of MHz) SPI communication, degrades system call performance. 
![](./figures/ima_tpm.png)

**Asymmetric cryptography performance impact:**

In the IMA appraisal mode, immutable files are validated by verifying a signature on every file access, a computationally intensive operation that also degrades system call performance.

![](./figures/ima_sig_verify.png)

**Deployment complexity and security concerns:**

The IMA appraisal mode requires deployment in the fix mode to initially tag files with IMA/EVM extended attributes before switching to the verification mode. Updating protected files necessitates rebooting into the fix mode, introducing operational inefficiency and potential security risks by requiring access to keys in the running environment.

![](./figures/ima_priv_key.png)

IMA digest lists address these issues by managing baseline digests for a set of files (such as executable files within a software package) within a single hash list file. These baseline digests can encompass both file content (for IMA) and extended attributes (for EVM).

![](./figures/ima_digest_list_pkg.png)

When IMA digest lists are enabled, the kernel maintains an allowlist hash pool containing imported IMA digest list entries, accessible via securityfs for import, deletion, and query.

In the measurement mode, imported lists undergo measurement and TPM extension before being added to the allowlist. Subsequent accesses to files with matching digests bypass further measurement logging and TPM interaction. In the appraisal mode, imported lists undergo signature verification before being added to the allowlist. Subsequent file access involves comparing file digests against this allowlist for appraisal decisions.
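The allowlist matching just described can be pictured with ordinary hash tools. The following is a conceptual sketch only — the file names and the plain-text list are illustrative, and the real allowlist lives in kernel memory, not in a file:

```shell
# Conceptual sketch (not the kernel implementation): a digest list is a pool of
# known-good hashes, and a file "matches" if its digest is found in the pool.
printf 'known content\n'    > /tmp/known.bin
printf 'tampered content\n' > /tmp/unknown.bin

# Baseline digest list, as it would ship with a trusted software package.
sha256sum /tmp/known.bin | awk '{print $1}' > /tmp/digest_list.txt

for f in /tmp/known.bin /tmp/unknown.bin; do
    d=$(sha256sum "$f" | awk '{print $1}')
    if grep -qx "$d" /tmp/digest_list.txt; then
        echo "$f: digest in allowlist"
    else
        echo "$f: digest not in allowlist"
    fi
done
```

Because the lookup is a digest comparison, one imported list entry covers every subsequent access to a file with that content, which is where the per-file TPM and signature costs are saved.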
- -![](./figures/ima_digest_list_flow.png) - -Compared to native IMA/EVM, IMA digest lists offer improvements in: - -- Security: Bundling IMA digest lists with software packages ensures that baseline values originate from trusted sources (such as the openEuler community) upon installation, eliminating the need for generating these values in the running environment, thus establishing a stronger chain of trust. -- Performance: Performing measurements and validations at the digest list level reduces TPM access and asymmetric cryptography operations by a factor of _n_ (average number of file hashes per digest list), improving both system call and boot performance. -- Usability: IMA digest lists enable an out-of-the-box appraisal mode experience, allowing direct entry into this mode after installation and supporting software installation/upgrades without requiring the fix mode, simplifying deployment and enabling seamless updates. - -Note that by maintaining baseline values in kernel memory, IMA digest lists rely on the assumption of an uncompromised kernel, necessitating other security mechanisms (such as secure kernel module loading, runtime memory measurement) to protect kernel integrity. - -Ultimately, both native IMA and IMA digest lists are components within a larger security framework, emphasizing the importance of a layered security approach for overall system protection. - -## Interface Description - -### Kernel Boot Parameters - -The following table describes the kernel boot parameters provided by the openEuler IMA/EVM mechanism: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter | Value | Function |
| --- | --- | --- |
| ima_appraise | enforce-evm | Enforced verification mode of IMA appraisal (EVM enabled) |
| | log-evm | Log mode of IMA appraisal (EVM enabled) |
| | enforce | Enforced verification mode of IMA appraisal |
| | log | Log mode of IMA appraisal |
| | off | IMA appraisal disabled |
| ima_appraise_digest_list | digest | IMA+EVM appraisal based on the digest list (comparing file content and extended attributes) |
| | digest-nometadata | IMA appraisal based on the digest list (comparing file content only) |
| evm | x509 | Directly enables EVM based on portable signatures (whether or not the EVM certificate is loaded) |
| | complete | Does not allow modification of the EVM mode through the securityfs interface after startup |
| | allow_metadata_writes | Allows modification of file metadata without EVM interception |
| ima_hash | sha256/sha1/... | Declares the IMA measurement hash algorithm |
| ima_template | ima | Declares the IMA measurement template (d\|n) |
| | ima-ng | Declares the IMA measurement template (d-ng\|n-ng), which is used by default |
| | ima-sig | Declares the IMA measurement template (d-ng\|n-ng\|sig) |
| ima_policy | exec_tcb | Measures all executed and memory-mapped files, as well as loaded kernel modules, firmware, and kernel files |
| | tcb | On the basis of the exec_tcb policy, additionally measures files accessed with uid=0 or euid=0 |
| | secure_boot | Appraises all loaded kernel modules, firmware, and kernel files, requiring the IMA signature mode |
| | appraise_exec_tcb | On the basis of the secure_boot policy, additionally appraises all executed and memory-mapped files |
| | appraise_tcb | Appraises all files owned by uid 0 |
| | appraise_exec_immutable | Used in conjunction with the appraise_exec_tcb policy; the extended attributes of executable files are immutable |
| ima_digest_list_pcr | 10 | Extends digest list-based IMA measurement results into PCR 10, disabling native IMA measurement |
| | 11 | Extends digest list-based IMA measurement results into PCR 11, disabling native IMA measurement |
| | +11 | Extends digest list-based IMA measurement results into PCR 11 and native IMA measurement results into PCR 10 |
| ima_digest_db_size | nn[M] | Sets the upper limit of the kernel digest list size (0 MB to 64 MB). Defaults to 16 MB when absent. (Absent means the parameter does not appear at all; the value cannot be left empty, as in "ima_digest_db_size=".) |
| ima_capacity | -1 to 2147483647 | Sets the maximum number of kernel measurement log entries. Defaults to 100,000 when absent; -1 means no upper limit. |
| initramtmpfs | None | Supports tmpfs in initrd to carry file extended attributes |
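Before rebooting with new parameters, the table's constraints can be checked against a candidate command line. The sketch below is illustrative only (the command line string is hypothetical); it extracts `ima_digest_db_size` and validates it against the documented 0-64 MB range:

```shell
# Hypothetical kernel command line under test.
CMDLINE='ima_digest_list_pcr=11 ima_template=ima-ng ima_policy=exec_tcb initramtmpfs ima_digest_db_size=32M'

# Extract the numeric value of ima_digest_db_size (e.g. "32M" -> 32).
size=$(echo "$CMDLINE" | grep -o 'ima_digest_db_size=[0-9]*M' | cut -d= -f2 | tr -d M)

# The table above allows 0-64 MB; anything else should be rejected before reboot.
if [ -n "$size" ] && [ "$size" -ge 0 ] && [ "$size" -le 64 ]; then
    echo "ima_digest_db_size=${size}M is within the documented 0-64 MB range"
else
    echo "invalid ima_digest_db_size"
fi
```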
- -Based on the user requirements, the following parameter combinations are recommended: - -**(1) Native IMA measurement** - -```conf -# Native IMA measurement + custom policy -No configuration required, enabled by default -# Native IMA measurement + TCB default policy -ima_policy="tcb" -``` - -**(2) Digest list-based IMA measurement** - -```conf -# Digest list IMA measurement + custom policy -ima_digest_list_pcr=11 ima_template=ima-ng initramtmpfs -# Digest list IMA measurement + default policy -ima_digest_list_pcr=11 ima_template=ima-ng ima_policy="exec_tcb" initramtmpfs -``` - -**(3) Digest list-based IMA appraisal, protecting file content only** - -```conf -# IMA appraisal + log mode -ima_appraise=log ima_appraise_digest_list=digest-nometadata ima_policy="appraise_exec_tcb" initramtmpfs -# IMA appraisal + enforced verification mode -ima_appraise=enforce ima_appraise_digest_list=digest-nometadata ima_policy="appraise_exec_tcb" initramtmpfs -``` - -**(4) Digest list-based IMA appraisal, protecting file content and extended attributes** - -```conf -# IMA appraisal + log mode -ima_appraise=log-evm ima_appraise_digest_list=digest ima_policy="appraise_exec_tcb|appraise_exec_immutable" initramtmpfs evm=x509 evm=complete -# IMA appraisal + enforced verification mode -ima_appraise=enforce-evm ima_appraise_digest_list=digest ima_policy="appraise_exec_tcb|appraise_exec_immutable" initramtmpfs evm=x509 evm=complete -``` - -> ![](./public_sys-resources/icon-note.gif) **Note:** -> -> The above four parameters can be configured and used independently, but only the digest list-based measurement and appraisal modes can be used in combination, that is, (2) and (3) are paired or (2) and (4) are paired. - -## Securityfs Interfaces - -The securityfs interfaces provided by openEuler IMA are located in the **/sys/kernel/security** directory. 
The interface names and descriptions are as follows:

| Path | Permission | Description |
| :----------------------------- | :--------- | :----------------------------------------------------------------------- |
| ima/policy | 600 | Queries or imports the IMA policy. |
| ima/ascii_runtime_measurements | 440 | Queries the IMA measurement log, output as strings. |
| ima/binary_runtime_measurements | 440 | Queries the IMA measurement log, output in binary format. |
| ima/runtime_measurement_count | 440 | Queries the number of IMA measurement log entries. |
| ima/violations | 440 | Queries the number of abnormal IMA measurement log entries. |
| ima/digests_count | 440 | Displays the total number of digests in the system hash table (IMA+EVM). |
| ima/digest_list_data | 200 | Adds digest lists. |
| ima/digest_list_data_del | 200 | Deletes digest lists. |
| evm | 660 | Queries or sets the EVM mode. |

The **/sys/kernel/security/evm** interface takes the following values:

- **0**: Does not initialize EVM.
- **1**: Uses HMAC (symmetric encryption) to verify the integrity of extended attributes.
- **2**: Uses public key signature verification (asymmetric encryption) to verify the integrity of extended attributes.
- **6**: Disables integrity verification for extended attributes.

### Digest List Management Tools

The digest-list-tools package provides tools for generating and managing IMA digest list files. It mainly includes the following command-line tools.

#### gen_digest_lists

This tool generates IMA digest lists. Its command options are defined as follows:
| Option | Value | Function |
| --- | --- | --- |
| -d | \<path> | Specifies the directory to store the generated digest list files. The value must be a valid directory. |
| -f | compact | Specifies the format of the generated digest list files. Currently, only the compact format is supported. |
| -i | \<option arg>:\<option value> | Specifies the target file range for generating the digest list. The specific sub-options are as follows. |
| | I:\<path> | Specifies the absolute path of the file for which to generate a digest list. If a directory is specified, generation is recursive. |
| | E:\<path> | Specifies the path or directory to exclude. |
| | F:\<path> | Specifies the path or directory for which to generate digest lists for all files (when the e: parameter is also specified, the filtering effect of the e: option is ignored). |
| | e: | Generates a digest list only for executable files. |
| | l:policy | Matches file security contexts from the system SELinux policy instead of reading them directly from file extended attributes. |
| | i: | When generating a metadata digest list, includes the digest value of the file in the calculated extended attribute information (required). |
| | M: | Allows explicitly specifying the file extended attribute information (must be used with the rpmbuild command). |
| | u: | Uses the list file name specified by the L: parameter as the file name of the generated digest list (must be used with the rpmbuild command). |
| | L:\<path> | Specifies the path of the list file, which contains the information needed to generate the digest list (must be used with the rpmbuild command). |
| -o | add | Specifies the operation to perform on the generated digest list. Currently, only the add operation is supported, which appends the digest list to the file. |
| -p | -1 | Specifies the position in the file where the digest list is written. Currently, only -1 is supported. |
| -t | file | Generates a digest list only for the file content. |
| | metadata | Generates separate digest lists for the file content and extended attributes. |
| -T | N/A | Without this parameter, a digest list file is generated; with it, a TLV digest list file is generated. |
| -A | \<path> | Specifies the relative root directory. The specified prefix is stripped from file paths for path matching and SELinux label matching. |
| -m | immutable | Specifies the modifiers attribute of the generated digest list file. Currently, only immutable is supported: in enforce/enforce-evm mode, the digest list can only be opened read-only. |
| -h | N/A | Prints help information. |
##### Usage Examples

- Scenario 1: Generate a digest list/TLV digest list for a single file.

  ```shell
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ls -d ./ -i i:
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ls -d ./ -i i: -T
  ```

- Scenario 2: Generate a digest list/TLV digest list for a single file and specify a relative root directory.

  ```shell
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ls -A /usr/ -d ./ -i i:
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ls -A /usr/ -d ./ -i i: -T
  ```

- Scenario 3: Recursively generate digest lists/TLV digest lists for files in a directory.

  ```shell
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ -d ./ -i i:
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ -d ./ -i i: -T
  ```

- Scenario 4: Recursively generate digest lists/TLV digest lists for executable files in a directory.

  ```shell
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ -d ./ -i i: -i e:
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/bin/ -d ./ -i i: -i e: -T
  ```

- Scenario 5: Recursively generate digest lists/TLV digest lists for files in a directory, excluding certain subdirectories.

  ```shell
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/ -d ./ -i i: -i E:/usr/bin/
  gen_digest_lists -t metadata -f compact -i l:policy -o add -p -1 -m immutable -i I:/usr/ -d ./ -i i: -i E:/usr/bin/ -T
  ```

- Scenario 6: In the `rpmbuild` callback script, generate a digest list by reading the list file passed in by `rpmbuild`.
  ```shell
  gen_digest_lists -i M: -t metadata -f compact -d $DIGEST_LIST_DIR -i l:policy \
      -i i: -o add -p -1 -m immutable -i L:$BIN_PKG_FILES -i u: \
      -A $RPM_BUILD_ROOT -i e: \
      -i E:/usr/src \
      -i E:/boot/efi \
      -i F:/lib \
      -i F:/usr/lib \
      -i F:/lib64 \
      -i F:/usr/lib64 \
      -i F:/lib/modules \
      -i F:/usr/lib/modules \
      -i F:/lib/firmware \
      -i F:/usr/lib/firmware

  gen_digest_lists -i M: -t metadata -f compact -d $DIGEST_LIST_DIR.tlv \
      -i l:policy -i i: -o add -p -1 -m immutable -i L:$BIN_PKG_FILES -i u: \
      -T -A $RPM_BUILD_ROOT -i e: \
      -i E:/usr/src \
      -i E:/boot/efi \
      -i F:/lib \
      -i F:/usr/lib \
      -i F:/lib64 \
      -i F:/usr/lib64 \
      -i F:/lib/modules \
      -i F:/usr/lib/modules \
      -i F:/lib/firmware \
      -i F:/usr/lib/firmware
  ```

#### manage_digest_lists

This tool is primarily used to parse binary TLV digest list files into a human-readable text format. The command options are defined as follows:

| Parameter Name | Value | Function |
| -------------- | ------------ | --------------------------------------------------------------------------------------------------------------- |
| -d | \<path> | Specifies the directory where the TLV digest list files are stored. |
| -f | \<file> | Specifies the TLV digest list file name. |
| -p | dump | Specifies the operation type. Currently, only `dump` is supported, which parses and prints the TLV digest list. |
| -v | N/A | Prints detailed information. |
| -h | N/A | Prints help information. |

##### Usage Example

View the TLV digest list information.

```shell
manage_digest_lists -p dump -d /etc/ima/digest_lists.tlv/
```

## File Format Specification

### IMA Policy File Syntax

An IMA policy file is a text file that can contain multiple rule statements separated by newline characters (`\n`). Each rule statement must begin with an action keyword, followed by **filtering conditions**:

```text
<action> [filtering condition 1] [filtering condition 2] [filtering condition 3]...
```

The action keyword indicates the specific action of the policy rule. Each rule can only have one action. The specific actions are shown in the table below (**you can omit the `action=` prefix**, for example, directly write `dont_measure` instead of `action=dont_measure`).

The following types of filtering conditions are supported:

- `func`: the type of file being measured or appraised. It is often used with `mask`. Each rule can only have one `func`.
  - `FILE_CHECK` can only be used with `MAY_EXEC`, `MAY_WRITE`, or `MAY_READ`.
  - `MODULE_CHECK`, `MMAP_CHECK`, and `BPRM_CHECK` can only be used with `MAY_EXEC`.
  - Combinations other than those above have no effect.

- `mask`: the operation being performed on the file when it is measured or appraised. Each rule can only have one `mask`.

- `fsmagic`: the hexadecimal magic number of the file system type, defined in the **/usr/include/linux/magic.h** file. (By default, all file systems are measured unless the `dont_measure` or `dont_appraise` flag is used.)

- `fsuuid`: a 16-character hexadecimal string of the system device UUID.

- `obj_type`: the security type of the file. Each rule can only have one file type. `obj_type` is more granular than `func`. For example, `obj_type=nova_log_t` matches files with the nova_log_t SELinux type.

- `uid`: the user (user ID) performing the operation on the file. Each rule can only have one `uid`.

- `fowner`: the owner (user ID) of the file. Each rule can only have one `fowner`.

The specific values and descriptions of the keywords are as follows:
| Keyword | Value | Description |
| --- | --- | --- |
| action | measure | Enables IMA measurement. |
| | dont_measure | Disables IMA measurement. |
| | appraise | Enables IMA appraisal. |
| | dont_appraise | Disables IMA appraisal. |
| | audit | Enables auditing. |
| func | FILE_CHECK | File to be opened |
| | MODULE_CHECK | Kernel module file to be loaded |
| | MMAP_CHECK | Shared library file to be mapped into the process memory space |
| | BPRM_CHECK | Executable file to be executed (excluding script files opened by interpreters such as /bin/bash) |
| | POLICY_CHECK | IMA policy file to be imported |
| | FIRMWARE_CHECK | Firmware to be loaded into memory |
| | DIGEST_LIST_CHECK | Digest list file to be loaded into the kernel |
| | KEXEC_KERNEL_CHECK | Kernel to be switched to by kexec |
| mask | MAY_EXEC | Execute file |
| | MAY_WRITE | Write file |
| | MAY_READ | Read file |
| | MAY_APPEND | Append to file |
| fsmagic | fsmagic=xxx | Hexadecimal magic number of the file system type |
| fsuuid | fsuuid=xxx | 16-character hexadecimal string of the system device UUID |
| fowner | fowner=xxx | User ID of the file owner |
| uid | uid=xxx | User ID of the user operating on the file |
| obj_type | obj_type=xxx | Type of the file (based on the SELinux label) |
| pcr | pcr=\<num> | TPM PCR used for extending measurement values (default is 10) |
| appraise_type | imasig | IMA appraisal based on signature |
| | meta_immutable | Appraisal of file extended attributes based on signature (digest lists supported) |
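Putting the keywords together: a policy is just newline-separated rules built from the table above. The sketch below composes a small hypothetical policy file and counts its rules; it deliberately does not write to `/sys/kernel/security/ima/policy`, which requires a live IMA-enabled kernel:

```shell
# Compose a small hypothetical IMA policy from the keywords in the table above.
# 0x01021994 is TMPFS_MAGIC from /usr/include/linux/magic.h.
cat > /tmp/ima-policy.sample <<'EOF'
dont_measure fsmagic=0x01021994
measure func=BPRM_CHECK mask=MAY_EXEC
measure func=FILE_CHECK mask=MAY_READ uid=0
EOF

# On a system with IMA enabled, the policy would be imported with:
#   cat /tmp/ima-policy.sample > /sys/kernel/security/ima/policy
echo "measure rules: $(grep -c '^measure' /tmp/ima-policy.sample)"
```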
## Usage Instructions

> ![](./public_sys-resources/icon-note.gif) **Note:**
>
> The native IMA/EVM is a Linux open source feature. This section only provides a brief introduction to its basic usage. For details, see the open source wiki:
>

### Native IMA

#### Measurement Mode

Configure a measurement policy to enable IMA measurement.

**Step 1:** You can specify the measurement policy through boot parameters or manual configuration.

Boot parameter example:

```conf
ima_policy="tcb"
```

Manual configuration example:

```shell
echo "measure func=BPRM_CHECK" > /sys/kernel/security/ima/policy
```

**Step 2:** Reboot the system and check the measurement logs.

```shell
cat /sys/kernel/security/ima/ascii_runtime_measurements
```

#### Appraisal Mode

To use the appraisal mode, first enter the fix mode to apply IMA signatures to files, then switch to the log or enforce mode.

**Step 1:** Configure the boot parameters and reboot the system to enter the fix mode.

```conf
ima_appraise=fix ima_policy=appraise_tcb
```

**Step 2:** Generate IMA extended attributes for all files that need appraisal.

The signature mode can be used for immutable files (such as binary files). This involves writing the signature of the file hash into the IMA extended attribute. In the following example, **/path/to/ima.key** is the path to the IMA signing private key.

```shell
find /usr/bin -fstype ext4 -type f -executable -uid 0 -exec evmctl -a sha256 ima_sign --key /path/to/ima.key '{}' \;
```

The hash mode can be used for mutable files (such as data files). This involves writing the file hash into the IMA extended attribute. IMA supports automatic tagging in the fix mode: accessing a file automatically generates the IMA extended attribute.
- -```shell -find / -fstype ext4 -type f -uid 0 -exec dd if='{}' of=/dev/null count=0 status=none \; -``` - -You can check if a file has the IMA extended attribute (`security.ima`) by running the following command: - -```shell -getfattr -m - -d /sbin/init -``` - -**Step 3:** Switch to log or enforce mode by configuring the boot parameters and rebooting the system. - -```conf -ima_appraise=enforce ima_policy=appraise_tcb -``` - -## IMA Digest Lists - -### Prerequisites - -Before using the IMA digest lists, you need to install the ima-evm-utils and digest-list-tools packages: - -```shell -yum install ima-evm-utils digest-list-tools -``` - -### Mechanism Introduction - -#### Digest List Files - -After installing an RPM package released by openEuler, a digest list file is generated in the **/etc/ima** directory by default. There are several types of files depending on the file name: - -- **/etc/ima/digest_lists/0-metadata_list-compact-** - -This is the IMA digest list file, generated by the `gen_digest_lists` command (see the [gen_digest_lists](#gen_digest_lists) section for details). This file is in binary format and contains header information and a series of SHA256 hash values, representing the legitimate file content digest values and file extended attribute digest values, respectively. The file will be imported to the kernel after it is measured or appraised. The allowlist digest values in this file are used as the basis for IMA digest list measurement or appraisal. - -- **/etc/ima/digest_lists/0-metadata_list-rpm-** - -This is the RPM digest list file, which **is actually the header information of the RPM package.** After the RPM package is installed, if the IMA digest list file does not contain a signature, the RPM header information will be written to this file, and the signature of the header information will be written to the `security.ima` extended attribute. In this way, the authenticity of the RPM header information can be verified through the signature. 
Since the RPM header information also contains the digest value of the digest list file, indirect verification of the digest list can be achieved. - -- **/etc/ima/digest_lists/0-parser_list-compact-libexec** - -This is the IMA PARSER digest list file, which stores the digest value of the **/usr/libexec/rpm_parser** file. This file is used to implement the RPM digest list -> IMA digest list chain of trust. The kernel IMA digest list mechanism will perform special verification on the process generated by executing this file. If it is determined to be the `rpm_parser` program, it will trust all the digest lists it imports without verifying the signature. - -- **/etc/ima/digest_lists.sig/0-metadata_list-compact-.sig** - -This is the signature file of the IMA digest list. If the RPM package contains this file, the content of this file will be written to the `security.ima` extended attribute of the corresponding RPM digest list file during the RPM package installation phase, so as to perform signature verification during the IMA digest list import kernel phase. - -- **/etc/ima/digest_lists.tlv/0-metadata_list-compact_tlv-** - -This is the TLV digest list file, which is usually generated along with the IMA digest list file for the target file. It stores the integrity information of the target file (such as the file content digest value and file extended attributes). The function of this file is to assist users in querying and recovering the integrity information of the target file. - -#### Digest List File Signature Modes - -In IMA appraisal mode, the IMA digest list file needs to be signed and verified before it can be imported into the kernel and used for subsequent file allowlist matching. The IMA digest list file supports the following signature modes: - -**(1) Extended attribute signature** - -This is the native IMA signature mechanism. The signature information is stored in a certain format in the `security.ima` extended attribute. 
It can be generated and added using the `evmctl` command, where `<digest list file>` is the digest list to sign:

```shell
evmctl ima_sign --key /path/to/ima.key -a sha256 <digest list file>
```

You can also add the `-f` parameter to store the signature information and header information in a separate file:

```shell
evmctl ima_sign -f --key /path/to/ima.key -a sha256 <digest list file>
```

When the IMA appraisal mode is enabled, the digest list file path can be directly written to a kernel interface to import or delete the digest list. This process automatically triggers the appraisal, and signature verification of the digest list file content is performed based on the `security.ima` extended attribute:

```shell
# Import the IMA digest list file.
echo <digest list file path> > /sys/kernel/security/ima/digest_list_data
# Delete the IMA digest list file.
echo <digest list file path> > /sys/kernel/security/ima/digest_list_data_del
```

**(2) IMA digest list appended signature (default in openEuler 24.03 LTS)**

Starting from openEuler 24.03 LTS, a dedicated IMA signature key is supported, and CMS signing is adopted. Because the signature information contains a certificate chain, it may be too long to fit into the `security.ima` extended attribute of a file. Therefore, appended signatures similar to those of kernel modules are adopted:

![](./figures/ima-modsig.png)

The signature mechanism is as follows:

1. The CMS signature information is appended to the end of the IMA digest list file.

2. A structure is populated and added to the end of the signature information. The structure is defined as follows:

    ```c
    struct module_signature {
        u8 algo; /* Public-key crypto algorithm [0] */
        u8 hash; /* Digest algorithm [0] */
        u8 id_type; /* Key identifier type [PKEY_ID_PKCS7] */
        u8 signer_len; /* Length of signer's name [0] */
        u8 key_id_len; /* Length of key identifier [0] */
        u8 __pad[3];
        __be32 sig_len; /* Length of signature data */
    };
    ```

3. Magic string `~Module signature appended~` is added.
   The reference script for this step is as follows:

   ```shell
   #!/bin/bash
   DIGEST_FILE=$1 # IMA digest list file path
   SIG_FILE=$2    # IMA digest list signature information save path
   OUT=$3         # Output path of the digest list file after the signature information is added

   cat $DIGEST_FILE $SIG_FILE > $OUT
   echo -n -e "\x00\x00\x02\x00\x00\x00\x00\x00" >> $OUT
   echo -n -e $(printf "%08x" "$(ls -l $SIG_FILE | awk '{print $5}')") | xxd -r -ps >> $OUT
   echo -n "~Module signature appended~" >> $OUT
   echo -n -e "\x0a" >> $OUT
   ```

**(3) Reused RPM signature (default in openEuler 22.03 LTS)**

openEuler 22.03 LTS supports reusing RPM signatures to sign the IMA digest list file. This mechanism addresses the lack of a dedicated IMA signature key in that release. The signing process is transparent to users: when an RPM package contains an IMA digest list file but no signature file for it, this signature mechanism is used automatically. Its core principle is to verify the IMA digest list through the header information of the RPM package.

For RPM packages released by openEuler, each package file can contain two parts:

- **RPM header information:** RPM package attribute fields, such as the package name and the file digest value list. Its integrity is guaranteed by the RPM header signature.

- **RPM files:** Files actually installed to the system, including the IMA digest list file generated during the build phase.

![](./figures/ima_rpm.png)

During RPM package installation, if the RPM process detects that the digest list file in the package does not contain a signature, it creates an RPM digest list file in the **/etc/ima** directory, writes the RPM header information to the file content, and writes the RPM header signature to the `security.ima` extended attribute of the file. Subsequently, the RPM digest list can be used to indirectly verify and import the IMA digest list.
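The appended-signature layout described in (2) above can be exercised with dummy data. The following sketch (all file names and the fake 17-byte signature are illustrative, standing in for real CMS data) builds a file in this format and then parses the `sig_len` field back out of the `module_signature` footer:

```shell
#!/bin/bash
# Build a dummy "signed" digest list in the appended-signature layout.
printf 'digest-list-body'  > /tmp/digest.demo
printf 'FAKESIGNATUREDATA' > /tmp/sig.demo     # 17 bytes of fake signature data

cat /tmp/digest.demo /tmp/sig.demo > /tmp/digest.demo.signed
# module_signature header: algo=0, hash=0, id_type=2 (PKCS7), lengths 0, 3 pad bytes.
printf '\x00\x00\x02\x00\x00\x00\x00\x00' >> /tmp/digest.demo.signed
# sig_len as a big-endian 32-bit integer.
len=$(stat -c %s /tmp/sig.demo)
printf "$(printf '\\x%02x\\x%02x\\x%02x\\x%02x' \
    $((len>>24&255)) $((len>>16&255)) $((len>>8&255)) $((len&255)))" >> /tmp/digest.demo.signed
printf '~Module signature appended~\n' >> /tmp/digest.demo.signed

# Parse it back: sig_len is the big-endian u32 just before the 28-byte magic string.
read -r b1 b2 b3 b4 <<< "$(tail -c 32 /tmp/digest.demo.signed | head -c 4 | od -An -tu1 | tr -s ' ')"
sig_len=$(( (b1<<24) | (b2<<16) | (b3<<8) | b4 ))
echo "recovered sig_len: $sig_len"
```

A consumer would strip `sig_len` bytes plus the fixed-size footer from the end of the file to recover the original digest list body and its CMS signature.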
#### IMA Digest List Import

In the IMA measurement mode, importing the IMA digest list file does not require signature verification. You can directly write the path to the kernel interface to import or delete the digest list:

```shell
# Import the IMA digest list file.
echo <digest list file path> > /sys/kernel/security/ima/digest_list_data
# Delete the IMA digest list file.
echo <digest list file path> > /sys/kernel/security/ima/digest_list_data_del
```

In the IMA appraisal mode, importing the digest list requires signature verification. There are two import methods, depending on the signature mode.

**Direct import**

For IMA digest list files that already contain signature information (extended attribute signature or IMA digest list appended signature), you can directly write the path to the kernel interface to import or delete the digest list. This process automatically triggers the appraisal, and signature verification of the digest list file content is completed based on the `security.ima` extended attribute:

```shell
# Import the IMA digest list file.
echo <digest list file path> > /sys/kernel/security/ima/digest_list_data
# Delete the IMA digest list file.
echo <digest list file path> > /sys/kernel/security/ima/digest_list_data_del
```

**Import using `upload_digest_lists`**

For IMA digest list files that reuse RPM signatures, you need to run the `upload_digest_lists` command to import them. The specific commands are as follows (note that the specified path is that of the corresponding RPM digest list):

```shell
# Import the IMA digest list file.
upload_digest_lists add <RPM digest list file path>
# Delete the IMA digest list file.
upload_digest_lists del <RPM digest list file path>
```

This process is relatively complicated and must meet the following prerequisites:

1. The digest lists in the digest_list_tools package released by openEuler have been imported into the system (including the IMA digest list and the IMA PARSER digest list).

2. The IMA appraisal policy for application execution (`BPRM_CHECK`) has been configured.
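For scripting, the import interface can be wrapped with a guard so the same code can be dry-run on machines without the digest list feature. This is a hedged sketch, not part of digest-list-tools; the securityfs path is only written to when it exists and is writable:

```shell
IMA_DIR=/sys/kernel/security/ima

# Import a digest list if the kernel exposes the interface; otherwise report
# what would have been done. The list path passed in is the caller's choice.
import_digest_list() {
    local list="$1"
    if [ -w "$IMA_DIR/digest_list_data" ]; then
        echo "$list" > "$IMA_DIR/digest_list_data"
    else
        echo "dry run: would import $list"
    fi
}

import_digest_list /etc/ima/digest_lists/0-metadata_list-compact-bash
```

The same pattern applies to `digest_list_data_del` for deletion.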
### Operation Guide

#### Automatic Generation of Digest Lists During RPM Build

The openEuler RPM toolchain supports the `%__brp_digest_list` macro. The configuration format is as follows:

```text
%__brp_digest_list /usr/lib/rpm/brp-digest-list %{buildroot}
```

After this macro is configured, when you call the `rpmbuild` command to build a software package, the **/usr/lib/rpm/brp-digest-list** script is called during the RPM packaging phase to generate and sign the digest list. By default, openEuler generates digest lists for key files such as executable files, dynamic libraries, and kernel modules. You can also modify the script to configure the scope of digest list generation and specify the signing key. The following example uses the user-defined signing key **/path/to/ima.key** to sign the digest list.

```shell
...... (line 66)
DIGEST_LIST_TLV_PATH="$DIGEST_LIST_DIR.tlv/0-metadata_list-compact_tlv-$(basename $BIN_PKG_FILES)"
[ -f $DIGEST_LIST_TLV_PATH ] || exit 0

chmod 644 $DIGEST_LIST_TLV_PATH
echo $DIGEST_LIST_TLV_PATH

evmctl ima_sign -f --key /path/to/ima.key -a sha256 $DIGEST_LIST_PATH &> /dev/null
chmod 400 $DIGEST_LIST_PATH.sig
mkdir -p $DIGEST_LIST_DIR.sig
mv $DIGEST_LIST_PATH.sig $DIGEST_LIST_DIR.sig
echo $DIGEST_LIST_DIR.sig/0-metadata_list-compact-$(basename $BIN_PKG_FILES).sig
```

#### IMA Digest List Measurement

You can enable IMA digest list measurement following the steps below:

**Step 1:** Configure the boot parameter measurement policy to enable the IMA measurement function. The specific steps are the same as for **Native IMA measurement**; the difference is that the TPM PCR used for measurement needs to be configured separately. A boot parameter example is as follows:

```conf
ima_policy=exec_tcb ima_digest_list_pcr=11
```

**Step 2:** Import the IMA digest list.
Take the digest list of the bash software package as an example: - -```shell -echo /etc/ima/digest_lists/0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64 > /sys/kernel/security/ima/digest_list_data -``` - -You can query the measurement log of the IMA digest list. - -```shell -cat /sys/kernel/security/ima/ascii_runtime_measurements -``` - -After the IMA digest list is imported, if the subsequently measured file digest value is included in the IMA digest list, no additional measurement log will be recorded. - -#### IMA Digest List Appraisal - -##### Startup with the Default Policy - -You can configure the **ima_policy** parameter in the boot parameters to specify the IMA default policy. Then, in the kernel boot phase, the default policy will be enabled immediately after IMA initialization to perform appraisal. You can enable the IMA digest list appraisal function following the steps below: - -**Step 1:** Run the `dracut` command to write the digest list file to initrd: - -```shell -dracut -f -e xattr -``` - -**Step 2:** Configure the boot parameters and IMA policy. 
Typical configurations are as follows: - -```conf -# IMA appraisal log/enforce mode based on the digest list, only protecting the file content, configuring the default policy as appraise_exec_tcb -ima_appraise=log ima_appraise_digest_list=digest-nometadata ima_policy="appraise_exec_tcb" initramtmpfs module.sig_enforce -ima_appraise=enforce ima_appraise_digest_list=digest-nometadata ima_policy="appraise_exec_tcb" initramtmpfs module.sig_enforce -# IMA appraisal log/enforce mode based on the digest list, protecting the file content and extended attributes, configuring the default policy as appraise_exec_tcb+appraise_exec_immutable -ima_appraise=log-evm ima_appraise_digest_list=digest ima_policy="appraise_exec_tcb|appraise_exec_immutable" initramtmpfs evm=x509 evm=complete module.sig_enforce -ima_appraise=enforce-evm ima_appraise_digest_list=digest ima_policy="appraise_exec_tcb|appraise_exec_immutable" initramtmpfs evm=x509 evm=complete module.sig_enforce -``` - -Reboot the system to enable the IMA digest list appraisal function. The IMA policy will take effect and the IMA digest list file will be imported automatically during the startup process. - -##### Startup without the Default Policy - -You can choose not to configure the **ima_policy** parameter in the boot parameters, which means that there is no default policy during the system startup phase. The IMA appraisal mechanism will take effect and be enabled after you import the policy. - -**Step 1:** Configure the boot parameters. 
Typical configurations are as follows: - -```conf -# IMA appraisal log/enforce mode based on the digest list, only protecting the file content, no default policy -ima_appraise=log ima_appraise_digest_list=digest-nometadata initramtmpfs -ima_appraise=enforce ima_appraise_digest_list=digest-nometadata initramtmpfs -# IMA appraisal log/enforce mode based on the digest list, protecting the file content and extended attributes, no default policy -ima_appraise=log-evm ima_appraise_digest_list=digest initramtmpfs evm=x509 evm=complete -ima_appraise=enforce-evm ima_appraise_digest_list=digest initramtmpfs evm=x509 evm=complete -``` - -Reboot the system. At this time, since there is no policy in the system, IMA appraisal will not take effect. - -**Step 2:** Import the IMA policy. Write the full path of the policy file to the kernel interface. - -```shell -echo /path/to/policy > /sys/kernel/security/ima/policy -``` - -> ![](./public_sys-resources/icon-note.gif) **Note:** -> -> The policy needs to include some fixed rules. Refer to the policy templates below. -> -> The policy template for openEuler 22.03 LTS is as follows (reusing the RPM signature): -> - -```conf -# Do not appraise the access behavior of the securityfs file system. -dont_appraise fsmagic=0x73636673 -# Other user-defined dont_appraise rules -...... -# Appraise the imported IMA digest list file. -appraise func=DIGEST_LIST_CHECK appraise_type=imasig -# Appraise all files opened by the /usr/libexec/rpm_parser process. -appraise parser appraise_type=imasig -# Appraise the executed application (trigger the appraisal of /usr/libexec/rpm_parser, and you can also add other restrictions, such as SELinux labels). -appraise func=BPRM_CHECK appraise_type=imasig -# Other user-defined appraise rules -...... -``` -> -> The policy template for openEuler 24.03 LTS is as follows (Extended attribute signature or appended signature scenario): -> - -```conf -# User-defined dont_appraise rules -...... 
-# Appraise the imported IMA digest list file. -appraise func=DIGEST_LIST_CHECK appraise_type=imasig|modsig -# Other user-defined appraise rules -...... -``` - -**Step 3:** Import the IMA digest list file. For digest lists with different signature methods, different import methods need to be used. - -The method of importing the digest list for openEuler 22.03 LTS is as follows (IMA digest list that reuses the RPM signature): - -```shell -# Import the digest list of the digest_list_tools package. -echo /etc/ima/digest_lists/0-metadata_list-compact-digest-list-tools-0.3.95-13.x86_64 > /sys/kernel/security/ima/digest_list_data -echo /etc/ima/digest_lists/0-parser_list-compact-libexec > /sys/kernel/security/ima/digest_list_data -# Import other RPM digest lists. -upload_digest_lists add /etc/ima/digest_lists -# Check the number of imported digest list entries. -cat /sys/kernel/security/ima/digests_count -``` - -The method of importing the digest list for openEuler 24.03 LTS is as follows (IMA digest list with appended signature): - -```shell -find /etc/ima/digest_lists -name "0-metadata_list-compact-*" -exec echo {} > /sys/kernel/security/ima/digest_list_data \; -``` - -#### Software Upgrade - -After the IMA digest list function is enabled, for files within the IMA protection scope, the digest list needs to be updated synchronously during software upgrades. For RPM packages released by openEuler, the digest lists in the RPM packages will be automatically added, updated, and deleted during package installation, upgrade, and uninstallation, without manual intervention. For user-maintained software packages in non-RPM format, the digest list needs to be imported manually. - -#### User Certificate Import - -You can import custom certificates to perform measurement or appraisal on software not released by openEuler. 
The openEuler IMA appraisal mode supports obtaining certificates from the following two keyrings for signature verification: - -- `builtin_trusted_keys` keyring: root certificates preset during kernel compilation - -- `ima` keyring: imported using **/etc/keys/x509_ima.der** in initrd, which needs to be a sub-certificate of any certificate in the `builtin_trusted_keys` keyring. - -**The steps to import the root certificate into the `builtin_trusted_keys` keyring are as follows:** - -**Step 1:** Generate a root certificate. Take the `openssl` command as an example. - -```shell -echo 'subjectKeyIdentifier=hash' > root.cfg -openssl genrsa -out root.key 4096 -openssl req -new -sha256 -key root.key -out root.csr -subj "/C=AA/ST=BB/O=CC/OU=DD/CN=openeuler test ca" -openssl x509 -req -days 3650 -extfile root.cfg -signkey root.key -in root.csr -out root.crt -openssl x509 -in root.crt -out root.der -outform DER -``` - -**Step 2:** Get the openEuler kernel source code. Take the latest **OLK-5.10** branch as an example. - -```shell -git clone https://gitee.com/openeuler/kernel.git -b OLK-5.10 -``` - -**Step 3:** Go to the source code directory and copy the PEM-format root certificate to the directory. - -```shell -cd kernel -cp /path/to/root.crt . -``` - -Modify the `CONFIG_SYSTEM_TRUSTED_KEYS` option in the config file (this option expects a PEM-formatted certificate file): - -```conf -CONFIG_SYSTEM_TRUSTED_KEYS="./root.crt" -``` - -**Step 4:** Compile and install the kernel (the steps are omitted; note that you need to generate a digest list for the kernel module). - -**Step 5:** Check whether the certificate is successfully imported after a reboot. - -```shell -keyctl show %:.builtin_trusted_keys -``` - -**The steps to import the sub-certificate into the `ima` keyring are as follows. Note that the root certificate needs to be imported into the `builtin_trusted_keys` keyring in advance:** - -**Step 1:** Generate a sub-certificate based on the root certificate. Take the `openssl` command as an example. 
- -```shell -echo 'subjectKeyIdentifier=hash' > ima.cfg -echo 'authorityKeyIdentifier=keyid,issuer' >> ima.cfg -echo 'keyUsage=digitalSignature' >> ima.cfg -openssl genrsa -out ima.key 4096 -openssl req -new -sha256 -key ima.key -out ima.csr -subj "/C=AA/ST=BB/O=CC/OU=DD/CN=openeuler test ima" -openssl x509 -req -sha256 -CAcreateserial -CA root.crt -CAkey root.key -extfile ima.cfg -in ima.csr -out ima.crt -openssl x509 -outform DER -in ima.crt -out x509_ima.der -``` - -**Step 2:** Copy the IMA certificate to the **/etc/keys** directory: - -```shell -mkdir -p /etc/keys/ -cp x509_ima.der /etc/keys/ -``` - -**Step 3:** Package initrd and put the IMA certificate and digest list into the initrd image. - -```shell -echo 'install_items+=" /etc/keys/x509_ima.der "' >> /etc/dracut.conf -dracut -f -e xattr -``` - -**Step 4:** Check whether the certificate is successfully imported after a reboot. - -```shell -keyctl show %:.ima -``` - -#### Typical Use Cases - -According to different operating modes, the IMA digest list can be applied to trusted measurement scenarios and user-mode secure boot scenarios. - -##### Trusted Measurement - -The trusted measurement scenario is mainly based on the IMA digest list measurement mode. The measurement of key files is completed jointly by the kernel and the hardware trusted root (such as TPM), and then the remote attestation tool chain is combined to complete the attestation of the file trusted state of the system. - -![](./figures/ima_trusted_measurement.png) - -**Runtime Phase** - -- The digest list is imported synchronously when the software package is deployed, and IMA measures the digest list and records the measurement log (synchronously extending TPM). - -- When the application is executed, IMA measurement is triggered. If the file digest value matches the allowlist, it will be ignored. Otherwise, the measurement log will be recorded (synchronously extending TPM). 
- -**Attestation Phase (Industry Common Process)** - -- The remote attestation server sends an attestation request, and the client returns the IMA measurement log and the signed TPM PCR value. - -- The remote attestation server verifies the correctness of the PCR (signature verification), measurement log (PCR replay), and file measurement information (comparing with the local baseline value) in sequence, and reports the result to the security center. - -- The security management center takes corresponding actions, such as event notification and node isolation. - -##### User-Mode Secure Boot - -The user-mode secure boot scenario is mainly based on the IMA digest list appraisal mode. Similar to secure boot, it aims to perform integrity verification on the executed application or accessed key files. If the verification fails, the access will be denied. - -![](./figures/ima_secure_boot.png) - -**Runtime Phase** - -- The digest list is imported when the application is deployed. After the kernel verifies the signature, the digest value is loaded into the kernel hash table as an allowlist. - -- When the application is executed, IMA verification is triggered, and the file hash value is calculated. If it is consistent with the baseline value, access is allowed; otherwise, the log is recorded or access is denied. - -## Appendix - -### Kernel Compile Option Description - -The compile options provided by native IMA/EVM are as follows: - -| Option | Function | -| :------------------------------- | :----------------------------------------------- | -| CONFIG_INTEGRITY | Overall compile switch for IMA/EVM | -| CONFIG_INTEGRITY_SIGNATURE | Enables IMA signature verification. | -| CONFIG_INTEGRITY_ASYMMETRIC_KEYS | Enables IMA asymmetric signature verification. | -| CONFIG_INTEGRITY_TRUSTED_KEYRING | Enables the IMA/EVM keyring. | -| CONFIG_INTEGRITY_AUDIT | Compiles the IMA audit module. 
| -| CONFIG_IMA | Overall compile switch for IMA | -| CONFIG_IMA_WRITE_POLICY | Allows updating the IMA policy at runtime. | -| CONFIG_IMA_MEASURE_PCR_IDX | Allows specifying the IMA measurement PCR index. | -| CONFIG_IMA_LSM_RULES | Allows configuring LSM rules. | -| CONFIG_IMA_APPRAISE | Overall compile switch for IMA appraisal | -| CONFIG_IMA_APPRAISE_BOOTPARAM | Enables the boot parameter for IMA appraisal. | -| CONFIG_EVM | Overall compile switch for EVM | - -The compile options provided by the openEuler IMA digest list feature (enabled by default in the openEuler kernel) are as follows: - -| Compile Option | Function | -| :----------------- | :------------------------------------- | -| CONFIG_DIGEST_LIST | Switch for the IMA digest list feature | - -### IMA Digest List Root Certificates - -openEuler 22.03 LTS uses RPM key pairs to sign IMA digest lists. To ensure that the IMA function is available out of the box, the openEuler kernel imports the RPM root certificate (PGP certificate) into the kernel by default during compilation. Currently, the kernel includes the OBS certificate used in older versions and the openEuler certificate switched to in openEuler 22.03 LTS SP1: - -```shell -$ cat /proc/keys | grep PGP -1909b4ad I------ 1 perm 1f030000 0 0 asymmetri private OBS b25e7f66: PGP.rsa b25e7f66 [] -2f10cd36 I------ 1 perm 1f030000 0 0 asymmetri openeuler fb37bc6f: PGP.rsa fb37bc6f [] -``` - -Because the current kernel does not support importing PGP sub-public keys, and the switched openEuler certificate uses sub-keys for signing, the openEuler kernel preprocesses the certificate before compilation, extracts the sub-public key, and imports it into the kernel. For details about this processing, see the **[process_pgp_certs.sh](https://gitee.com/src-openeuler/kernel/blob/openEuler-22.03-LTS-SP1/process_pgp_certs.sh)** script file in the kernel software package code repository. - -openEuler 24.03 LTS and later versions support dedicated IMA certificates. 
- -If you do not use the IMA digest list function or use other keys for signing or verification, you can remove the related code and configure kernel root certificates yourself. - -### FAQ - -#### FAQ1: The System Fails to Start after the IMA Appraisal Enforce Mode Is Enabled and the Default Policy Is Configured - -The default IMA policy may include verification of key file access processes such as application execution and kernel module loading. If key file access fails, the system may fail to start. Common causes are as follows: - -1. The IMA verification certificate is not imported into the kernel, resulting in failure to verify the digest list. -2. The digest list file is not signed correctly, resulting in digest list verification failure. -3. The digest list file is not imported into the initrd, resulting in failure to import the digest list during startup. -4. The digest list file does not match the application, resulting in application matching failure with the imported digest list. - -You need to enter the system in the log mode to locate and fix the problem. Reboot the system, go to the GRUB menu, modify the boot parameters, and start the system in the log mode. - -```conf -ima_appraise=log -``` - -After the system starts, you can troubleshoot the problem by referring to the following process: - -**Step 1**: Check the IMA certificate in the keyring. - -```shell -keyctl show %:.builtin_trusted_keys -``` - -For openEuler LTS versions, at least the following kernel certificates should exist (for other unlisted versions, determine whether the certificates should exist based on the release time): -
| Version | Certificates |
| :--- | :--- |
| openEuler 22.03 LTS | private OBS b25e7f66 |
| openEuler 22.03 LTS SP1/2/3 | private OBS b25e7f66<br>openeuler \<openeuler@compass-ci.com\> b675600b |
| openEuler 22.03 LTS SP4 | private OBS b25e7f66<br>openeuler \<openeuler@compass-ci.com\> b675600b<br>openeuler \<openeuler@compass-ci.com\> fb37bc6f |
| openEuler 24.03 LTS | openEuler kernel ICA 1: 90bb67eb4b57eb62bf6f867e4f56bd4e19e7d041 |
- -If you have imported other kernel root certificates, you also need to use the `keyctl` command to query and confirm whether the certificate has been successfully imported. openEuler does not use the IMA keyring by default. If you are using it, use the following command to query whether the user certificate exists in the IMA keyring: - -```shell -keyctl show %:.ima -``` - -If the result shows that the certificate is not imported correctly, follow the process in the [User Certificate Import](#user-certificate-import) section. - -**Step 2:** Check whether the digest list carries signature information. - -You can use the following command to query the digest list file in the system: - -```shell -ls /etc/ima/digest_lists | grep '_list-compact-' -``` - -For each digest list file, check that **one of the following three** types of signature information exists: - -1. Check whether the digest list file has a corresponding **RPM digest list file**, and whether the IMA extended attribute of the **RPM digest list file** contains the signature value. Taking the digest list of the bash software package as an example, the digest list file path is: - -    ```text -    /etc/ima/digest_lists/0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64 -    ``` - -    The RPM digest list path is: - -    ```text -    /etc/ima/digest_lists/0-metadata_list-rpm-bash-5.1.8-6.oe2203sp1.x86_64 -    ``` - -    Confirm that the RPM digest list signature, that is, the `security.ima` extended attribute of the file, is not empty: - -    ```shell -    getfattr -n security.ima /etc/ima/digest_lists/0-metadata_list-rpm-bash-5.1.8-6.oe2203sp1.x86_64 -    ``` - -2. Confirm that the `security.ima` extended attribute of the digest list file is not empty: - -    ```shell -    getfattr -n security.ima /etc/ima/digest_lists/0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64 -    ``` - -3. Confirm that the end of the digest list file contains signature information. 
You can judge by checking whether the end of the file content contains the magic string `~Module signature appended~` (for signing methods supported in openEuler 24.03 LTS and later versions): - - ```shell - tail -c 28 /etc/ima/digest_lists/0-metadata_list-compact-kernel-6.6.0-28.0.0.34.oe2403.x86_64 - ``` - - If the result shows that the digest list does not contain signature information, follow the process in the [Mechanism Introduction](#mechanism-introduction) section. - -**Step 3:** Check whether the signature information of the digest list is correct. - -After ensuring that the digest list carries signature information, you also need to ensure that the digest list is signed with the correct private key, that is, the signature private key that matches the certificate in the kernel. In addition to checking the private key, you can also check the dmesg log or audit log (the default path is **/var/log/audit/audit.log**). A typical log output is as follows: - -```log -type=INTEGRITY_DATA msg=audit(1722578008.756:154): pid=3358 uid=0 auid=0 ses=1 subj=unconfined_u:unconfined_r:haikang_t:s0-s0:c0.c1023 op=appraise_data cause=invalid-signature comm="bash" name="/root/0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64" dev="dm-0" ino=785161 res=0 errno=0UID="root" AUID="root" -``` - -If the result shows that the signature information is incorrect, follow the process in the [Mechanism Introduction](#mechanism-introduction) section. - -**Step 4:** Check whether the digest list file is imported in the initrd. - -Use the following command to query whether the digest list file exists in the current initrd: - -```shell -lsinitrd | grep 'etc/ima/digest_lists' -``` - -If the digest list file is not found, remake the initrd and check whether the digest list is successfully imported. - -```shell -dracut -f -e xattr -``` - -**Step 5:** Check whether the IMA digest list matches the application. - -Refer to FAQ2. 
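The magic-string check from step 2 above can be wrapped as a reusable helper that flags compact digest lists lacking an appended signature. This is a minimal sketch: the helper name is ours, the directory path is the openEuler default, and files it flags may still be validly signed via the `security.ima` extended attribute or an RPM digest list, which this check does not cover.

```shell
# Sketch: flag digest lists that carry no appended module-style signature.
# (Such files may still be signed via the security.ima extended attribute.)
has_appended_sig() {
    # The magic string plus its trailing newline occupies the last 28 bytes
    # of a file carrying an appended signature.
    tail -c 28 "$1" 2>/dev/null | grep -q '~Module signature appended~'
}

for f in /etc/ima/digest_lists/0-metadata_list-compact-*; do
    [ -e "$f" ] || continue   # skip when the glob matches nothing
    has_appended_sig "$f" || echo "no appended signature: $f"
done
```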
- -#### FAQ2: After the IMA Appraisal Enforce Mode Is Enabled, the Execution of Some Files Fails - -After the IMA appraisal enforce mode is enabled, for a file configured with the IMA policy, if the content or extended attributes of the file are incorrect (inconsistent with the imported digest list), the file access may be rejected. Common reasons include: - -1. The digest list is not imported successfully (refer to FAQ1). -2. The file content or attributes have been tampered with. - -If file execution fails, first determine whether the digest list file has been successfully imported into the kernel. You can check the number of digest lists to determine the import status. - -```shell -cat /sys/kernel/security/ima/digests_count -``` - -Then, determine which file verification failed and the reason through the audit log (the default path is **/var/log/audit/audit.log**). A typical log output is as follows: - -```log -type=INTEGRITY_DATA msg=audit(1722811960.997:2967): pid=7613 uid=0 auid=0 ses=1 subj=unconfined_u:unconfined_r:haikang_t:s0-s0:c0.c1023 op=appraise_data cause=IMA-signature-required comm="bash" name="/root/test" dev="dm-0" ino=814424 res=0 errno=0UID="root" AUID="root" -``` - -After determining the file that failed the verification, compare the TLV digest list to determine the reason why the file was tampered with. If extended attribute verification is not enabled, only compare the SHA256 hash value of the file and the **IMA digest** item in the TLV digest list. If extended attribute verification is enabled, you also need to compare the current attributes of the file and the extended attributes displayed in the TLV digest list. - -After determining the cause, you can solve the issue by restoring the content and attributes of the file, or regenerating the digest list for the current file, signing it, and importing it into the kernel. 
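The content comparison described above can be sketched as a helper that checks whether a file's current SHA-256 digest appears in a list of known-good digests. The helper name and the plain-text digest list format are assumptions for illustration: the real TLV digest list is a binary format whose IMA digest entries must first be dumped to text before this comparison applies.

```shell
# Sketch: report whether a file's SHA-256 digest appears in a plain-text
# list of known-good hex digests (one per line). The TLV digest list itself
# is binary; dump its IMA digest entries to text before using this check.
check_digest() {
    # $1: file to verify, $2: text file of known-good hex digests
    local hash
    hash=$(sha256sum "$1" | cut -d' ' -f1)
    if grep -qi "^$hash$" "$2"; then
        echo "match: $1"
    else
        echo "MISMATCH: $1 (current digest $hash)"
    fi
}
```

A `MISMATCH` result points at either tampered file content or a stale digest list that needs to be regenerated, signed, and re-imported.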
- -#### FAQ3: An Error Message Appears during Installation of a Software Package of a Different SP Version of openEuler 22.03 LTS after the IMA Appraisal Mode Is Enabled - -After the IMA appraisal mode is enabled, when a software package of a different SP version of openEuler 22.03 LTS is installed, the import of the IMA digest list will be automatically triggered. This includes the signature verification process for the digest list, which uses the certificate in the kernel to verify the signature of the digest list. Since the signing certificate changes during the evolution of openEuler, there are backward compatibility issues in some cross-version installation scenarios (no forward compatibility issues, that is, the new kernel can verify the IMA digest list files of older versions normally). - -You are advised to confirm that the current kernel contains the following signature certificates: - -```shell -# keyctl show %:.builtin_trusted_keys -Keyring - 566488577 ---lswrv 0 0 keyring: .builtin_trusted_keys - 383580336 ---lswrv 0 0 \_ asymmetric: openeuler b675600b - 453794670 ---lswrv 0 0 \_ asymmetric: private OBS b25e7f66 - 938520011 ---lswrv 0 0 \_ asymmetric: openeuler fb37bc6f -``` - -If a certificate is missing, upgrade the kernel to the latest version. - -```shell -yum update kernel -``` - -openEuler 24.03 LTS and later versions have dedicated IMA certificates and support certificate chain verification. The certificate life cycle can cover the entire LTS version. - -#### FAQ4: After the IMA Appraisal Mode Is Enabled, the IMA Digest List File Is Signed Correctly, but the Import Fails - -The IMA digest list import involves a check mechanism. If the signature verification of the digest list fails during an import process, the digest list import function will be disabled, resulting in even correctly signed digest list files not being imported later. 
Check whether the following information exists in the dmesg log: - -```shell -# dmesg -ima: 0-metadata_list-compact-bash-5.1.8-6.oe2203sp1.x86_64 not appraised, disabling digest lists lookup for appraisal -``` - -The above information indicates that the IMA appraisal mode is enabled and a digest list file with an incorrect signature has been imported, resulting in the function being disabled. In this case, reboot the system and repair the incorrect digest list signature information. - -#### FAQ5: Importing User-Defined IMA Certificates Fails in openEuler 24.03 LTS and Later Versions - -Linux kernel 6.6 adds field verification restrictions to the imported certificate. For certificates imported into the IMA keyring, the following constraints need to be met (following the X.509 standard format): - -- It is a digital signature certificate, that is, the `keyUsage=digitalSignature` field is set. -- It is not a CA certificate, that is, the `basicConstraints=CA:TRUE` field cannot be set. -- It is not an intermediate certificate, that is, the `keyUsage=keyCertSign` field cannot be set. - -#### FAQ6: The kdump Service Fails to Start after the IMA Appraisal Mode Is Enabled - -After the IMA appraisal enforce mode is enabled, if the following `KEXEC_KERNEL_CHECK` rule is configured in the IMA policy, the kdump service may fail to start: - -```shell -appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig -``` - -The reason is that in this case, all files loaded by `kexec` need to go through integrity verification, so the kernel requires the `kexec_file_load` system call to be used when kdump loads the kernel image file. The `kexec_file_load` system call can be enabled by configuring the `KDUMP_FILE_LOAD` in the **/etc/sysconfig/kdump** configuration file. 
- -```conf -KDUMP_FILE_LOAD="on" -``` - -At the same time, the `kexec_file_load` system call itself will also perform signature verification on the file, so the kernel image file to be loaded must contain a correct secure boot signature, and the current kernel must contain a corresponding verification certificate. diff --git a/docs/en/docs/Administration/configuring-the-repo-server.md b/docs/en/docs/Administration/configuring-the-repo-server.md deleted file mode 100644 index 18855a77ae4131b94e1e06a734dbf37b3626d100..0000000000000000000000000000000000000000 --- a/docs/en/docs/Administration/configuring-the-repo-server.md +++ /dev/null @@ -1,407 +0,0 @@ -# Configuring the Repo Server - ->![](./public_sys-resources/icon-note.gif) **NOTE:** -> openEuler provides multiple repo sources for online usage. For details about the repo sources, see [OS Installation](../Releasenotes/installing-the-os.md). If you cannot obtain the openEuler repo source online, you can use the ISO release package provided by openEuler to create a local openEuler repo source. This section uses the **openEuler-21.09-aarch64-dvd.iso** file as an example. Modify the ISO file as required. 
- - - -- [Configuring the Repo Server](#configuring-the-repo-server) - - [Overview](#overview) - - [Creating or Updating a Local Repo Source](#creating-or-updating-a-local-repo-source) - - [Obtaining the ISO File](#obtaining-the-iso-file) - - [Mounting an ISO File to Create a Repo Source](#mounting-an-iso-file-to-create-a-repo-source) - - [Creating a Local Repo Source](#creating-a-local-repo-source) - - [Updating the Repo Source](#updating-the-repo-source) - - [Deploying the Remote Repo Source](#deploying-the-remote-repo-source) - - [Installing and Configuring Nginx](#installing-and-configuring-nginx) - - [Starting Nginx](#starting-nginx) - - [Deploying the Repo Source](#deploying-the-repo-source) - - [Using the Repo Source](#using-the-repo-source) - - [Configuring Repo as the Yum Source](#configuring-repo-as-the-yum-source) - - [Repo Priority](#repo-priority) - - [Related Commands of dnf](#related-commands-of-dnf) - - -## Overview - -Create the **openEuler-21.09-aarch64-dvd.iso** file provided by openEuler as the repo source. The following uses Nginx as an example to describe how to deploy the repo source and provide the HTTP service. - -## Creating or Updating a Local Repo Source - -Mount the openEuler ISO file **openEuler-21.09-aarch64-dvd.iso** to create and update a repo source. - -### Obtaining the ISO File - -Obtain the openEuler ISO file from the following website: - -[https://repo.openeuler.org/openEuler-21.09/ISO/](https://repo.openeuler.org/openEuler-21.09/ISO/) - -### Mounting an ISO File to Create a Repo Source - -Run the **mount** command as the **root** user to mount the ISO file. - -The following is an example: - -```shell -mount /home/openEuler/openEuler-21.09-aarch64-dvd.iso /mnt/ -``` - -The mounted mnt directory is as follows: - -```text -. 
-│── boot.catalog -│── docs -│── EFI -│── images -│── Packages -│── repodata -│── TRANS.TBL -└── RPM-GPG-KEY-openEuler -``` - -In the preceding directory, **Packages** indicates the directory where the RPM package is stored, **repodata** indicates the directory where the repo source metadata is stored, and **RPM-GPG-KEY-openEuler** indicates the public key for signing openEuler. - -### Creating a Local Repo Source - -You can copy related files in the ISO file to a local directory to create a local repo source. The following is an example: - -```shell -mount /home/openEuler/openEuler-21.09-aarch64-dvd.iso /mnt/ -mkdir -p /home/openEuler/srv/repo/ -cp -r /mnt/Packages /home/openEuler/srv/repo/ -cp -r /mnt/repodata /home/openEuler/srv/repo/ -cp -r /mnt/RPM-GPG-KEY-openEuler /home/openEuler/srv/repo/ -``` - -The local repo directory is as follows: - -```text -. -│── Packages -│── repodata -└── RPM-GPG-KEY-openEuler -``` - -**Packages** indicates the directory where the RPM package is stored, **repodata** indicates the directory where the repo source metadata is stored, and **RPM-GPG-KEY-openEuler** indicates the public key for signing openEuler. - -### Updating the Repo Source - -You can update the repo source in either of the following ways: - -- Use the latest ISO file to update the existing repo source. The method is the same as that for creating a repo source. That is, mount the ISO file or copy the ISO file to the local directory. - -- Add an RPM package to the **Packages** directory of the repo source and run the **createrepo** command to update the repo source. - -    ```shell -    createrepo --update --workers=10 ~/srv/repo -    ``` - -In this command, **--update** indicates the update, and **--workers** indicates the number of threads, which can be customized. 
- -> ![](./public_sys-resources/icon-note.gif) **NOTE:** -> If the command output contains "createrepo: command not found", run the **dnf install createrepo** command as the **root** user to install the **createrepo** software. - -## Deploying the Remote Repo Source - -Install openEuler OS and deploy the repo source using Nginx on openEuler OS. - -### Installing and Configuring Nginx - -1. Download the Nginx tool and install it as the **root** user. - -2. After Nginx is installed, configure /etc/nginx/nginx.conf as the **root** user. - -    > ![](./public_sys-resources/icon-note.gif) **NOTE:** -    > The configuration content in this document is for reference only. You can configure the content based on the site requirements (for example, security hardening requirements). - -    ```text -    user nginx; -    worker_processes auto; # You are advised to set this parameter to the number of CPU cores minus 1. -    error_log /var/log/nginx/error.log warn; # Log storage location -    pid /var/run/nginx.pid; - -    events { -        worker_connections 1024; -    } - -    http { -        include /etc/nginx/mime.types; -        default_type application/octet-stream; - -        log_format main '$remote_addr - $remote_user [$time_local] "$request" ' -        '$status $body_bytes_sent "$http_referer" ' -        '"$http_user_agent" "$http_x_forwarded_for"'; - -        access_log /var/log/nginx/access.log main; -        sendfile on; -        keepalive_timeout 65; - -        server { -            listen 80; -            server_name localhost; # Server name (URL) -            client_max_body_size 4G; -            root /usr/share/nginx/repo; # Default service directory - -            location / { -                autoindex on; # Enable the access to lower-layer files in the directory. -                autoindex_exact_size on; -                autoindex_localtime on; -            } - -        } - -    } -    ``` - -### Starting Nginx - -1. Run the following `systemctl` commands as the **root** user to start the Nginx service. - -    ```shell -    systemctl enable nginx -    systemctl start nginx -    ``` - -2. 
You can run the following command to check whether Nginx is started successfully: - - ```shell - systemctl status nginx - ``` - - - [Figure 1](#en-us_topic_0151920971_fd25e3f1d664b4087ae26631719990a71) indicates that the Nginx service is started successfully. - - **Figure 1** The Nginx service is successfully started. -![](./figures/the-nginx-service-is-successfully-started.png "the-nginx-service-is-successfully-started") - - - If the Nginx service fails to be started, view the error information. - - ```shell - systemctl status nginx.service --full - ``` - - **Figure 2** The Nginx service startup fails - ![](./figures/nginx-startup-failure.png "nginx-startup-failure") - - As shown in [Figure 2](#en-us_topic_0151920971_f1f9f3d086e454b9cba29a7cae96a4c54), the Nginx service fails to be created because the /var/spool/nginx/tmp/client\_body directory fails to be created. You need to manually create the directory as the **root** user. Solve similar problems as follows: - - ```shell - mkdir -p /var/spool/nginx/tmp/client_body - mkdir -p /var/spool/nginx/tmp/proxy - mkdir -p /var/spool/nginx/tmp/fastcgi - mkdir -p /usr/share/nginx/uwsgi_temp - mkdir -p /usr/share/nginx/scgi_temp - ``` - -### Deploying the Repo Source - -1. Run the following command as the **root** user to create the /usr/share/nginx/repo directory specified in the Nginx configuration file /etc/nginx/nginx.conf: - - ```shell - mkdir -p /usr/share/nginx/repo - ``` - -2. Run the following command as the **root** user to modify the /usr/share/nginx/repo directory permission: - - ```shell - chmod -R 755 /usr/share/nginx/repo - ``` - -3. Configure firewall rules as the **root** user to enable the port (port 80) configured for Nginx. - - ```shell - firewall-cmd --add-port=80/tcp --permanent - firewall-cmd --reload - ``` - - Check whether port 80 is enabled as the **root** user. If the output is **yes**, port 80 is enabled. 
    ```shell
    firewall-cmd --query-port=80/tcp
    ```

    You can also enable port 80 using iptables as the **root** user:

    ```shell
    iptables -I INPUT -p tcp --dport 80 -j ACCEPT
    ```

4. After the Nginx service is configured, you can use the IP address to access the web page, as shown in [Figure 3](#en-us_topic_0151921017_fig1880404110396).

    **Figure 3** Nginx deployment succeeded
![](./figures/nginx-deployment-succeeded.png "nginx-deployment-succeeded")

5. Use either of the following methods to add the repo source to the **/usr/share/nginx/repo** directory:

    - Copy the related files from the image to the **/usr/share/nginx/repo** directory as the **root** user:

        ```shell
        mount /home/openEuler/openEuler-21.09-aarch64-dvd.iso /mnt/
        cp -r /mnt/Packages /usr/share/nginx/repo/
        cp -r /mnt/repodata /usr/share/nginx/repo/
        cp -r /mnt/RPM-GPG-KEY-openEuler /usr/share/nginx/repo/
        chmod -R 755 /usr/share/nginx/repo
        ```

        The **openEuler-21.09-aarch64-dvd.iso** file is stored in the **/home/openEuler** directory.

    - Create a soft link for the repo source in the **/usr/share/nginx/repo** directory as the **root** user:

        ```shell
        ln -s /mnt /usr/share/nginx/repo/os
        ```

        **/mnt** is the created repo source, and **/usr/share/nginx/repo/os** points to **/mnt**.

## Using the repo Source

The repo source can be configured as a yum source. YUM is a shell front-end software package manager. Based on the RPM package manager, it can automatically download RPM packages from the specified server, install them, and resolve their dependencies, installing all dependent packages in one operation.

### Configuring Repo as the Yum Source

You can configure the built repo as the yum source by creating a ***.repo** configuration file (the extension **.repo** is mandatory) in the **/etc/yum.repos.d/** directory as the **root** user. You can configure the yum source on the local host or an HTTP server.
- -- Configuring the local yum source. - - Create the **openEuler.repo** file in the **/etc/yum.repos.d** directory and use the local repository as the yum source. The content of the **openEuler.repo** file is as follows: - - ```text - [base] - name=base - baseurl=file:///home/openEuler/srv/repo - enabled=1 - gpgcheck=1 - gpgkey=file:///home/openEuler/srv/repo/RPM-GPG-KEY-openEuler - ``` - - > ![](./public_sys-resources/icon-note.gif) **NOTE:** - > - > - **repoid** indicates the ID of the software repository. Repoids in all .repo configuration files must be unique. In the example, **repoid** is set to **base**. - > - **name** indicates the string that the software repository describes. - > - **baseurl** indicates the address of the software repository. - > - **enabled** indicates whether to enable the software source repository. The value can be **1** or **0**. The default value is **1**, indicating that the software source repository is enabled. - > - **gpgcheck** indicates whether to enable the GNU privacy guard (GPG) to check the validity and security of sources of RPM packages. **1** indicates GPG check is enabled. **0** indicates the GPG check is disabled. - > - **gpgkey** indicates the public key used to verify the signature. - -- Configuring the yum source for the HTTP server - - Create the **openEuler.repo** file in the **/etc/yum.repos.d** directory. - - - If the repo source of the HTTP server deployed by the user is used as the yum source, the content of **openEuler.repo** is as follows: - - ```text - [base] - name=base - baseurl=http://192.168.139.209/ - enabled=1 - gpgcheck=1 - gpgkey=http://192.168.139.209/RPM-GPG-KEY-openEuler - ``` - - > ![](./public_sys-resources/icon-note.gif) **NOTE:** - > 192.168.139.209 is an example. Replace it with the actual IP address. 
- - - If the openEuler repo source provided by openEuler is used as the yum source, the content of **openEuler.repo** is as follows (the AArch64-based OS repo source is used as an example): - - ```text - [base] - name=base - baseurl=http://repo.openeuler.org/openEuler-21.09/OS/aarch64/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/aarch64/RPM-GPG-KEY-openEuler - ``` - -### repo Priority - -If there are multiple repo sources, you can set the repo priority in the .repo file. If the priority is not set, the default priority is **99** . If the same RPM package exists in the sources with the same priority, the latest version is installed. **1** indicates the highest priority and **99** indicates the lowest priority. The following shows how to set the priority of **openEuler.repo** to **2**. - -```text -[base] -name=base -baseurl=http://192.168.139.209/ -enabled=1 -priority=2 -gpgcheck=1 -gpgkey=http://192.168.139.209/RPM-GPG-KEY-openEuler -``` - -### Related Commands of dnf - -The **dnf** command can automatically parse the dependency between packages during installation and upgrade. The common usage method is as follows: - -```shell -dnf -``` - -Common commands are as follows: - -- Installation - - Run the following command as the **root** user. - - ```shell - dnf install - ``` - -- Upgrade - - Run the following command as the **root** user. - - ```shell - dnf update - ``` - -- Rollback - - Run the following command as the **root** user. - - ```shell - dnf downgrade - ``` - -- Update check - - ```shell - dnf check-update - ``` - -- Uninstallation - - Run the following command as the **root** user. - - ```shell - dnf remove - ``` - -- Query - - ```shell - dnf search - ``` - -- Local installation - - Run the following command as the **root** user. 
- - ```shell - dnf localinstall - ``` - -- Historical records check - - ```shell - dnf history - ``` - -- Cache records clearing - - ```shell - dnf clean all - ``` - -- Cache update - - ```shell - dnf makecache - ``` diff --git a/docs/en/docs/Administration/figures/en-us_image_0229622729.png b/docs/en/docs/Administration/figures/en-us_image_0229622729.png deleted file mode 100644 index 47f2d1cac133379469ed88b2bcb7213d75cf881e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/en-us_image_0229622729.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/en-us_image_0229622789.png b/docs/en/docs/Administration/figures/en-us_image_0229622789.png deleted file mode 100644 index 102d523ea5c2a1fedf4975556bf8b26f7599daaf..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/en-us_image_0229622789.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/ima_digest_list_flow.png b/docs/en/docs/Administration/figures/ima_digest_list_flow.png deleted file mode 100644 index 73a93fd310b074471be5e307b74f1c8f539aac42..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_digest_list_flow.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/ima_digest_list_pkg.png b/docs/en/docs/Administration/figures/ima_digest_list_pkg.png deleted file mode 100644 index 68fc2bb921b60f47c99c38030f44d8f136dfa396..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_digest_list_pkg.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/ima_digest_list_update.png b/docs/en/docs/Administration/figures/ima_digest_list_update.png deleted file mode 100644 index 771067e31cee84591fbb914d7be4e8c576d7f5d2..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_digest_list_update.png and /dev/null differ diff --git 
a/docs/en/docs/Administration/figures/ima_performance.png b/docs/en/docs/Administration/figures/ima_performance.png deleted file mode 100644 index f5d641e8682ad2b9c0fbfad191add1819f5b2eef..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_performance.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/ima_priv_key.png b/docs/en/docs/Administration/figures/ima_priv_key.png deleted file mode 100644 index 8ced564c45e8861a338f7c0fae5fff837501e96b..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_priv_key.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/ima_rpm.png b/docs/en/docs/Administration/figures/ima_rpm.png deleted file mode 100644 index 6f9abe39c887bfb1c997e6adcd0c7555a77fb6d5..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_rpm.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/ima_secure_boot.png b/docs/en/docs/Administration/figures/ima_secure_boot.png deleted file mode 100644 index 01cf1782f9748b7d77c3929f2766bc744c31e59a..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_secure_boot.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/ima_sig_verify.png b/docs/en/docs/Administration/figures/ima_sig_verify.png deleted file mode 100644 index 69623c75374a56615dce1199f0dc3a42892838c1..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_sig_verify.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/ima_tpm.png b/docs/en/docs/Administration/figures/ima_tpm.png deleted file mode 100644 index 0986f2f955449cbdb1a4e0bd485977ef373beb6a..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_tpm.png and /dev/null differ diff --git 
a/docs/en/docs/Administration/figures/ima_trusted_measurement.png b/docs/en/docs/Administration/figures/ima_trusted_measurement.png deleted file mode 100644 index 9212fc722d13a114b86208d68f2c539107a5e2a4..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_trusted_measurement.png and /dev/null differ diff --git a/docs/en/docs/Administration/figures/ima_verification.png b/docs/en/docs/Administration/figures/ima_verification.png deleted file mode 100644 index fc879949db5387c61ccf6176f948b9a00f4fb053..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/figures/ima_verification.png and /dev/null differ diff --git a/docs/en/docs/Administration/public_sys-resources/icon-danger.gif b/docs/en/docs/Administration/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/Administration/public_sys-resources/icon-tip.gif b/docs/en/docs/Administration/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/Administration/public_sys-resources/icon-warning.gif b/docs/en/docs/Administration/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Administration/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/ApplicationDev/public_sys-resources/icon-caution.gif b/docs/en/docs/ApplicationDev/public_sys-resources/icon-caution.gif deleted file mode 100644 index 
6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ApplicationDev/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/docs/en/docs/ApplicationDev/public_sys-resources/icon-danger.gif b/docs/en/docs/ApplicationDev/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ApplicationDev/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/ApplicationDev/public_sys-resources/icon-tip.gif b/docs/en/docs/ApplicationDev/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ApplicationDev/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/ApplicationDev/public_sys-resources/icon-warning.gif b/docs/en/docs/ApplicationDev/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ApplicationDev/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/Container/application-scenarios.md b/docs/en/docs/Container/application-scenarios.md deleted file mode 100644 index fe74c96c762fd08199445dbda6c552d38dcce197..0000000000000000000000000000000000000000 --- a/docs/en/docs/Container/application-scenarios.md +++ /dev/null @@ -1,5 +0,0 @@ -# Application Scenarios - -This section describes how to use the iSulad. 
- - diff --git a/docs/en/docs/Container/cri.md b/docs/en/docs/Container/cri.md deleted file mode 100644 index 72f71923c393bd0ca0c7c04833401f378374642e..0000000000000000000000000000000000000000 --- a/docs/en/docs/Container/cri.md +++ /dev/null @@ -1,2902 +0,0 @@ -# CRI - -- [CRI](#cri) - - [Description](#description) - - [APIs](#apis) - - [API Parameters](#api-parameters) - - [Runtime Service](#runtime-service) - - [RunPodSandbox](#runpodsandbox) - - [StopPodSandbox](#stoppodsandbox) - - [RemovePodSandbox](#removepodsandbox) - - [PodSandboxStatus](#podsandboxstatus) - - [ListPodSandbox](#listpodsandbox) - - [CreateContainer](#createcontainer) - - [Supplement](#supplement) - - [StartContainer](#startcontainer) - - [StopContainer](#stopcontainer) - - [RemoveContainer](#removecontainer) - - [ListContainers](#listcontainers) - - [ContainerStatus](#containerstatus) - - [UpdateContainerResources](#updatecontainerresources) - - [ExecSync](#execsync) - - [Exec](#exec) - - [Attach](#attach) - - [ContainerStats](#containerstats) - - [ListContainerStats](#listcontainerstats) - - [UpdateRuntimeConfig](#updateruntimeconfig) - - [Status](#status) - - [Image Service](#image-service) - - [ListImages](#listimages) - - [ImageStatus](#imagestatus) - - [PullImage](#pullimage) - - [RemoveImage](#removeimage) - - [ImageFsInfo](#imagefsinfo) - - [Constraints](#constraints) - -## Description - -The Container Runtime Interface \(CRI\) provided by Kubernetes defines container and image service APIs. iSulad uses the CRI to interconnect with Kubernetes. - -Since the container runtime is isolated from the image lifecycle, two services need to be defined. This API is defined by using [Protocol Buffer](https://developers.google.com/protocol-buffers/) based on [gRPC](https://grpc.io/). - -The current CRI version is v1alpha1. 
For the official API description, visit the following link:

[https://github.com/kubernetes/kubernetes/blob/release-1.14/pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto](https://github.com/kubernetes/kubernetes/blob/release-1.14/pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto)

iSulad uses the API description file of version 1.14 used by PaaS, which is slightly different from the official API description file. The API description in this document prevails.

> ![](./public_sys-resources/icon-note.gif) **NOTE:**
> The listening IP address of the CRI WebSocket streaming service is **127.0.0.1** and the port number is **10350**. The port number can be configured using the **--websocket-server-listening-port** option or in the **daemon.json** configuration file.

## APIs

The following tables list the parameters that may be used in each API. Some parameters do not take effect now, which is noted in the corresponding parameter description.

### API Parameters

- **DNSConfig**

    The API is used to configure DNS servers and search domains of a sandbox.

    | Parameter | Description |
    | --------- | ----------- |
    | repeated string servers | DNS server list of a cluster. |
    | repeated string searches | DNS search domain list of a cluster. |
    | repeated string options | DNS option list. For details, see https://linux.die.net/man/5/resolv.conf. |

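As a rough illustration, the three DNSConfig fields correspond to the entries a runtime typically writes into the sandbox's resolv.conf. A minimal sketch; the addresses and domains below are hypothetical examples, not values from this document:

```shell
# Sketch: mapping DNSConfig fields (servers / searches / options)
# onto resolv.conf lines. All values are hypothetical examples.
cat > /tmp/resolv.conf.example <<'EOF'
nameserver 10.32.0.10
search svc.cluster.local cluster.local
options ndots:5
EOF
cat /tmp/resolv.conf.example
```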
- **Protocol**

    The API is used to specify enum values of protocols.

    | Parameter | Description |
    | --------- | ----------- |
    | TCP = 0 | Transmission Control Protocol (TCP). |
    | UDP = 1 | User Datagram Protocol (UDP). |

- **PortMapping**

    The API is used to configure the port mapping for a sandbox.

    | Parameter | Description |
    | --------- | ----------- |
    | Protocol protocol | Protocol used for port mapping. |
    | int32 container_port | Port number in the container. |
    | int32 host_port | Port number on the host. |
    | string host_ip | Host IP address. |

- **MountPropagation**

    The API is used to specify enums of mount propagation attributes.

    | Parameter | Description |
    | --------- | ----------- |
    | PROPAGATION_PRIVATE = 0 | No mount propagation attributes, that is, private in Linux. |
    | PROPAGATION_HOST_TO_CONTAINER = 1 | Mount attribute that can be propagated from the host to the container, that is, rslave in Linux. |
    | PROPAGATION_BIDIRECTIONAL = 2 | Mount attribute that can be propagated between a host and a container, that is, rshared in Linux. |

- **Mount**

    The API is used to mount a volume on the host to a container. (Only files and folders are supported.)

    | Parameter | Description |
    | --------- | ----------- |
    | string container_path | Path in the container. |
    | string host_path | Path on the host. |
    | bool readonly | Whether the configuration is read-only in the container. Default value: false |
    | bool selinux_relabel | Whether to set the SELinux label. This parameter does not take effect now. |
    | MountPropagation propagation | Mount propagation attribute. The value can be 0, 1, or 2, corresponding to the private, rslave, and rshared propagation attributes respectively. Default value: 0 |

- **NamespaceOption**

    | Parameter | Description |
    | --------- | ----------- |
    | bool host_network | Whether to use host network namespaces. |
    | bool host_pid | Whether to use host PID namespaces. |
    | bool host_ipc | Whether to use host IPC namespaces. |

- **Capability**

    This API is used to specify the capabilities to be added and deleted.

    | Parameter | Description |
    | --------- | ----------- |
    | repeated string add_capabilities | Capabilities to be added. |
    | repeated string drop_capabilities | Capabilities to be deleted. |

- **Int64Value**

    The API is used to encapsulate data of the signed 64-bit integer type.

    | Parameter | Description |
    | --------- | ----------- |
    | int64 value | Actual value of the signed 64-bit integer type. |

- **UInt64Value**

    The API is used to encapsulate data of the unsigned 64-bit integer type.

    | Parameter | Description |
    | --------- | ----------- |
    | uint64 value | Actual value of the unsigned 64-bit integer type. |

- **LinuxSandboxSecurityContext**

    The API is used to configure the Linux security options of a sandbox.

    Note that these security options are not applied to containers in the sandbox, and may not be applied to the sandbox without any running process.

    | Parameter | Description |
    | --------- | ----------- |
    | NamespaceOption namespace_options | Sandbox namespace options. |
    | SELinuxOption selinux_options | SELinux options. This parameter does not take effect now. |
    | Int64Value run_as_user | Process UID in the sandbox. |
    | bool readonly_rootfs | Whether the root file system of the sandbox is read-only. |
    | repeated int64 supplemental_groups | Information of the user group of the init process in the sandbox (except the primary GID). |
    | bool privileged | Whether the sandbox is a privileged container. |
    | string seccomp_profile_path | Path of the seccomp configuration file. Valid values are as follows:<br>"unconfined": seccomp is not configured. This is the default value.<br>"localhost/" + full path of the configuration file: configuration file path installed in the system.<br>Full path of the configuration file: full path of the configuration file. |

- **LinuxPodSandboxConfig**

    The API is used to configure information related to the Linux host and containers.

    | Parameter | Description |
    | --------- | ----------- |
    | string cgroup_parent | Parent path of the cgroup of the sandbox. The runtime can use the cgroupfs or systemd syntax based on site requirements. This parameter does not take effect now. |
    | LinuxSandboxSecurityContext security_context | Security attribute of the sandbox. |
    | map<string, string> sysctls | Linux sysctls configuration of the sandbox. |

- **PodSandboxMetadata**

    Sandbox metadata contains all information that constructs a sandbox name. It is recommended that the metadata be displayed on the user interface during container running to improve user experience. For example, a unique sandbox name can be generated based on the metadata during running.

    | Parameter | Description |
    | --------- | ----------- |
    | string name | Sandbox name. |
    | string uid | Sandbox UID. |
    | string namespace | Sandbox namespace. |
    | uint32 attempt | Number of attempts to create a sandbox. Default value: 0 |

- **PodSandboxConfig**

    This API is used to specify all mandatory and optional configurations for creating a sandbox.

    | Parameter | Description |
    | --------- | ----------- |
    | PodSandboxMetadata metadata | Sandbox metadata, which uniquely identifies a sandbox. The runtime must use the information to ensure that operations are correctly performed, and to improve user experience, for example, construct a readable sandbox name. |
    | string hostname | Host name of the sandbox. |
    | string log_directory | Folder for storing container log files in the sandbox. |
    | DNSConfig dns_config | Sandbox DNS configuration. |
    | repeated PortMapping port_mappings | Sandbox port mapping. |
    | map<string, string> labels | Key-value pair that can be used to identify a sandbox or a series of sandboxes. |
    | map<string, string> annotations | Key-value pair that stores any information, whose values cannot be changed and can be queried by using the PodSandboxStatus API. |
    | LinuxPodSandboxConfig linux | Options related to the Linux host. |

- **PodSandboxNetworkStatus**

    The API is used to describe the network status of a sandbox.

    | Parameter | Description |
    | --------- | ----------- |
    | string ip | IP address of the sandbox. |
    | string name | Network interface name in the sandbox. |
    | string network | Name of the additional network. |

- **Namespace**

    The API is used to set namespace options.

    | Parameter | Description |
    | --------- | ----------- |
    | NamespaceOption options | Linux namespace options. |

- **LinuxPodSandboxStatus**

    The API is used to describe the status of a Linux sandbox.

    | Parameter | Description |
    | --------- | ----------- |
    | Namespace namespaces | Sandbox namespace. |

- **PodSandboxState**

    The API is used to specify enum data of the sandbox status values.

    | Parameter | Description |
    | --------- | ----------- |
    | SANDBOX_READY = 0 | The sandbox is ready. |
    | SANDBOX_NOTREADY = 1 | The sandbox is not ready. |

- **PodSandboxStatus**

    The API is used to describe the PodSandbox status.

    | Parameter | Description |
    | --------- | ----------- |
    | string id | Sandbox ID. |
    | PodSandboxMetadata metadata | Sandbox metadata. |
    | PodSandboxState state | Sandbox status value. |
    | int64 created_at | Sandbox creation timestamp (unit: ns). |
    | repeated PodSandboxNetworkStatus networks | Multi-plane network status of the sandbox. |
    | LinuxPodSandboxStatus linux | Sandbox status complying with the Linux specifications. |
    | map<string, string> labels | Key-value pair that can be used to identify a sandbox or a series of sandboxes. |
    | map<string, string> annotations | Key-value pair that stores any information, whose values cannot be changed by the runtime. |

- **PodSandboxStateValue**

    The API is used to encapsulate [PodSandboxState](#en-us_topic_0182207110_li1818214574195).

    | Parameter | Description |
    | --------- | ----------- |
    | PodSandboxState state | Sandbox status value. |

- **PodSandboxFilter**

    The API is used to add filter criteria for the sandbox list. The intersection of multiple filter criteria is displayed.

    | Parameter | Description |
    | --------- | ----------- |
    | string id | Sandbox ID. |
    | PodSandboxStateValue state | Sandbox status. |
    | map<string, string> label_selector | Sandbox label, which does not support regular expressions and must be fully matched. |

- **PodSandbox**

    This API is used to provide a minimum description of a sandbox.

    | Parameter | Description |
    | --------- | ----------- |
    | string id | Sandbox ID. |
    | PodSandboxMetadata metadata | Sandbox metadata. |
    | PodSandboxState state | Sandbox status value. |
    | int64 created_at | Sandbox creation timestamp (unit: ns). |
    | map<string, string> labels | Key-value pair that can be used to identify a sandbox or a series of sandboxes. |
    | map<string, string> annotations | Key-value pair that stores any information, whose values cannot be changed by the runtime. |

- **KeyValue**

    The API is used to encapsulate key-value pairs.

    | Parameter | Description |
    | --------- | ----------- |
    | string key | Key |
    | string value | Value |

- **SELinuxOption**

    The API is used to specify the SELinux label of a container.

    | Parameter | Description |
    | --------- | ----------- |
    | string user | User |
    | string role | Role |
    | string type | Type |
    | string level | Level |

- **ContainerMetadata**

    Container metadata contains all information that constructs a container name. It is recommended that the metadata be displayed on the user interface during container running to improve user experience. For example, a unique container name can be generated based on the metadata during running.

    | Parameter | Description |
    | --------- | ----------- |
    | string name | Container name. |
    | uint32 attempt | Number of attempts to create a container. Default value: 0 |

- **ContainerState**

    The API is used to specify enums of container status values.

    | Parameter | Description |
    | --------- | ----------- |
    | CONTAINER_CREATED = 0 | The container is created. |
    | CONTAINER_RUNNING = 1 | The container is running. |
    | CONTAINER_EXITED = 2 | The container exits. |
    | CONTAINER_UNKNOWN = 3 | Unknown container status. |

- **ContainerStateValue**

    The API is used to encapsulate the data structure of [ContainerState](#en-us_topic_0182207110_li65182518309).

    | Parameter | Description |
    | --------- | ----------- |
    | ContainerState state | Container status value. |

- **ContainerFilter**

    The API is used to add filter criteria for the container list. The intersection of multiple filter criteria is displayed.

    | Parameter | Description |
    | --------- | ----------- |
    | string id | Container ID. |
    | PodSandboxStateValue state | Container status. |
    | string pod_sandbox_id | Sandbox ID. |
    | map<string, string> label_selector | Container label, which does not support regular expressions and must be fully matched. |

- **LinuxContainerSecurityContext**

    The API is used to specify container security configurations.

    | Parameter | Description |
    | --------- | ----------- |
    | Capability capabilities | Added or removed capabilities. |
    | bool privileged | Whether the container is in privileged mode. Default value: false |
    | NamespaceOption namespace_options | Container namespace options. |
    | SELinuxOption selinux_options | SELinux context, which is optional. This parameter does not take effect now. |
    | Int64Value run_as_user | UID for running container processes. Only run_as_user or run_as_username can be specified at a time. run_as_username takes effect preferentially. |
    | string run_as_username | Username for running container processes. If specified, the user must exist in /etc/passwd in the container image and be parsed by the runtime. Otherwise, an error must occur during running. |
    | bool readonly_rootfs | Whether the root file system in a container is read-only. The default value is configured in config.json. |
    | repeated int64 supplemental_groups | List of user groups of the init process running in the container (except the primary GID). |
    | string apparmor_profile | AppArmor configuration file of the container. This parameter does not take effect now. |
    | string seccomp_profile_path | Path of the seccomp configuration file of the container. |
    | bool no_new_privs | Whether to set the no_new_privs flag in the container. |

- **LinuxContainerResources**

    The API is used to specify configurations of Linux container resources.

    | Parameter | Description |
    | --------- | ----------- |
    | int64 cpu_period | CPU CFS period. Default value: 0 |
    | int64 cpu_quota | CPU CFS quota. Default value: 0 |
    | int64 cpu_shares | CPU share (relative weight). Default value: 0 |
    | int64 memory_limit_in_bytes | Memory limit (unit: byte). Default value: 0 |
    | int64 oom_score_adj | OOMScoreAdj that is used to adjust the OOM killer. Default value: 0 |
    | string cpuset_cpus | CPU core used by the container. Default value: null |
    | string cpuset_mems | Memory nodes used by the container. Default value: null |

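For orientation, cpu_quota and cpu_period together bound CPU usage under the Linux CFS scheduler: the ratio quota/period gives the number of CPU cores the container may consume. A quick check with example values (150000us quota per 100000us period, not values from this document):

```shell
# Sketch: cpu_quota / cpu_period determines the effective CPU core limit.
# 150000us quota per 100000us period => 1.5 cores (example values only).
awk -v quota=150000 -v period=100000 'BEGIN { printf "%.1f cores\n", quota / period }'
```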
- **Image**

    The API is used to describe the basic information about an image.

    | Parameter | Description |
    | --------- | ----------- |
    | string id | Image ID. |
    | repeated string repo_tags | Image tag name repo_tags. |
    | repeated string repo_digests | Image digest information. |
    | uint64 size | Image size. |
    | Int64Value uid | Default image UID. |
    | string username | Default image username. |

- **ImageSpec**

  The API is used to represent the internal data structure of an image. Currently, ImageSpec encapsulates only the container image name.

  | Parameter | Description |
  | --- | --- |
  | string image | Container image name. |

- **StorageIdentifier**

  The API is used to specify the unique identifier for defining the storage.

  | Parameter | Description |
  | --- | --- |
  | string uuid | Device UUID. |

- **FilesystemUsage**

  | Parameter | Description |
  | --- | --- |
  | int64 timestamp | Timestamp when the file system information is collected. |
  | StorageIdentifier storage_id | UUID of the file system that stores images. |
  | UInt64Value used_bytes | Size of the metadata that stores images. |
  | UInt64Value inodes_used | Number of inodes of the metadata that stores images. |

- **AuthConfig**

  | Parameter | Description |
  | --- | --- |
  | string username | Username used for downloading images. |
  | string password | Password used for downloading images. |
  | string auth | Base64-encoded authentication information used for downloading images. |
  | string server_address | IP address of the server from which images are downloaded. This parameter does not take effect now. |
  | string identity_token | Token used for registry authentication. This parameter does not take effect now. |
  | string registry_token | Token used for interaction with the registry. This parameter does not take effect now. |

- **Container**

  The API is used to describe container information, such as the ID and status.

  | Parameter | Description |
  | --- | --- |
  | string id | Container ID. |
  | string pod_sandbox_id | ID of the sandbox to which the container belongs. |
  | ContainerMetadata metadata | Container metadata. |
  | ImageSpec image | Image specifications. |
  | string image_ref | Image used by the container. For most runtimes, this is an image ID. |
  | ContainerState state | Container status. |
  | int64 created_at | Container creation timestamp (unit: ns). |
  | map<string, string> labels | Key-value pairs that can be used to identify a container or a series of containers. |
  | map<string, string> annotations | Key-value pairs that store arbitrary information, whose values cannot be changed by the runtime. |

- **ContainerStatus**

  The API is used to describe the container status information.

  | Parameter | Description |
  | --- | --- |
  | string id | Container ID. |
  | ContainerMetadata metadata | Container metadata. |
  | ContainerState state | Container status. |
  | int64 created_at | Container creation timestamp (unit: ns). |
  | int64 started_at | Container start timestamp (unit: ns). |
  | int64 finished_at | Container exit timestamp (unit: ns). |
  | int32 exit_code | Container exit code. |
  | ImageSpec image | Image specifications. |
  | string image_ref | Image used by the container. For most runtimes, this is an image ID. |
  | string reason | Brief description of the reason why the container is in the current status. |
  | string message | Human-readable message indicating the reason why the container is in the current status. |
  | map<string, string> labels | Key-value pairs that can be used to identify a container or a series of containers. |
  | map<string, string> annotations | Key-value pairs that store arbitrary information, whose values cannot be changed by the runtime. |
  | repeated Mount mounts | Information about the container mount points. |
  | string log_path | Path of the container log file, relative to the log_directory folder configured in PodSandboxConfig. |

- **ContainerStatsFilter**

  The API is used to add filter criteria for the container stats list. The intersection of multiple filter criteria is displayed.

  | Parameter | Description |
  | --- | --- |
  | string id | Container ID. |
  | string pod_sandbox_id | Sandbox ID. |
  | map<string, string> label_selector | Container labels, which do not support regular expressions and must be fully matched. |

- **ContainerStats**

  The API is used to describe the resource usage statistics of a container.

  | Parameter | Description |
  | --- | --- |
  | ContainerAttributes attributes | Container information. |
  | CpuUsage cpu | CPU usage information. |
  | MemoryUsage memory | Memory usage information. |
  | FilesystemUsage writable_layer | Information about the writable layer usage. |

- **ContainerAttributes**

  The API is used to list basic container information.

  | Parameter | Description |
  | --- | --- |
  | string id | Container ID. |
  | ContainerMetadata metadata | Container metadata. |
  | map<string,string> labels | Key-value pairs that can be used to identify a container or a series of containers. |
  | map<string,string> annotations | Key-value pairs that store arbitrary information, whose values cannot be changed by the runtime. |

- **CpuUsage**

  The API is used to list the CPU usage information of a container.

  | Parameter | Description |
  | --- | --- |
  | int64 timestamp | Timestamp. |
  | UInt64Value usage_core_nano_seconds | CPU usage (unit: ns). |

- **MemoryUsage**

  The API is used to list the memory usage information of a container.

  | Parameter | Description |
  | --- | --- |
  | int64 timestamp | Timestamp. |
  | UInt64Value working_set_bytes | Memory usage. |

- **FilesystemUsage**

  The API is used to list the read/write layer information of a container.

  | Parameter | Description |
  | --- | --- |
  | int64 timestamp | Timestamp. |
  | StorageIdentifier storage_id | Writable layer directory. |
  | UInt64Value used_bytes | Number of bytes occupied by images at the writable layer. |
  | UInt64Value inodes_used | Number of inodes occupied by images at the writable layer. |

- **Device**

  The API is used to specify a host device to be mounted into a container.

  | Parameter | Description |
  | --- | --- |
  | string container_path | Mount path in the container. |
  | string host_path | Mount path on the host. |
  | string permissions | Cgroup permissions of the device. (r allows the container to read from the specified device, w allows the container to write to the specified device, and m allows the container to create new device files.) |

- **LinuxContainerConfig**

  The API is used to specify Linux configurations.

  | Parameter | Description |
  | --- | --- |
  | LinuxContainerResources resources | Container resource specifications. |
  | LinuxContainerSecurityContext security_context | Linux container security configuration. |

- **ContainerConfig**

  The API is used to specify all mandatory and optional fields for creating a container.

  | Parameter | Description |
  | --- | --- |
  | ContainerMetadata metadata | Container metadata. The information uniquely identifies a container and should be used by the runtime to ensure correct operation. It can also be used by the runtime to improve the user experience (UX), for example, to construct a readable name. This parameter is mandatory. |
  | ImageSpec image | Image used by the container. This parameter is mandatory. |
  | repeated string command | Command to be executed. Default value: /bin/sh |
  | repeated string args | Parameters of the command to be executed. |
  | string working_dir | Current working directory of the command. |
  | repeated KeyValue envs | Environment variables configured in the container. |
  | repeated Mount mounts | Information about the mount points to be mounted in the container. |
  | repeated Device devices | Information about the devices to be mapped into the container. |
  | map<string, string> labels | Key-value pairs that can be used to index and select a resource. |
  | map<string, string> annotations | Unstructured key-value mappings that can be used to store and retrieve arbitrary metadata. |
  | string log_path | Path, relative to PodSandboxConfig.LogDirectory, for storing container logs (STDOUT and STDERR) on the host. |
  | bool stdin | Whether to open stdin of the container. |
  | bool stdin_once | Whether to immediately disconnect the other data streams connected to stdin when one stream connected to stdin is disconnected. This parameter does not take effect now. |
  | bool tty | Whether to use a pseudo terminal to connect to stdio of the container. |
  | LinuxContainerConfig linux | Container configuration information specific to Linux. |

- **NetworkConfig**

  The API is used to specify runtime network configurations.

  | Parameter | Description |
  | --- | --- |
  | string pod_cidr | CIDR used by pod IP addresses. |

- **RuntimeConfig**

  The API is used to specify runtime configurations.

  | Parameter | Description |
  | --- | --- |
  | NetworkConfig network_config | Runtime network configurations. |

- **RuntimeCondition**

  The API is used to describe runtime status information.

  | Parameter | Description |
  | --- | --- |
  | string type | Runtime status type. |
  | bool status | Runtime status. |
  | string reason | Brief description of the reason for the runtime status change. |
  | string message | Human-readable message indicating the reason for the runtime status change. |

- **RuntimeStatus**

  The API is used to describe the runtime status.

  | Parameter | Description |
  | --- | --- |
  | repeated RuntimeCondition conditions | List of current runtime statuses. |

### Runtime Service

The runtime service provides APIs for operating pods and containers, and APIs for querying the configuration and status information of the runtime service.

#### RunPodSandbox

##### Prototype

```shell
rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {}
```

##### Description

This API is used to create and start a PodSandbox. If the PodSandbox runs successfully, the sandbox is in the ready state.

##### Precautions

1. The default image for starting a sandbox is **rnd-dockerhub.huawei.com/library/pause-${machine}:3.0**, where *machine* indicates the architecture: on x86_64, the value of *machine* is **amd64**; on ARM64, the value of *machine* is **aarch64**. Currently, only the **amd64** and **aarch64** images can be downloaded from the rnd-dockerhub registry. If the image does not exist on the host, ensure that the host can download it from the rnd-dockerhub registry. To use another image, see **pod-sandbox-image** in the *iSulad Deployment Configuration*.
2. The container name is built from fields in [PodSandboxMetadata](#apis.md#en-us_topic_0182207110_li2359918134912) joined with underscores (_), so the metadata cannot contain underscores. Otherwise, the [ListPodSandbox](#listpodsandbox.md#EN-US_TOPIC_0184808098) API cannot query the sandbox even when it is running successfully.

##### Parameters

| Parameter | Description |
| --- | --- |
| PodSandboxConfig config | Sandbox configuration. |
| string runtime_handler | Runtime for the created sandbox. Currently, lcr and kata-runtime are supported. |

##### Return Values

| Return Value | Description |
| --- | --- |
| string pod_sandbox_id | ID of the created sandbox, returned if the operation is successful. |

#### StopPodSandbox

##### Prototype

```shell
rpc StopPodSandbox(StopPodSandboxRequest) returns (StopPodSandboxResponse) {}
```

##### Description

This API is used to stop a PodSandbox and its sandbox containers, and reclaim the network resources (such as IP addresses) allocated to the sandbox. If any running container belongs to the sandbox, the container must be forcibly stopped.

##### Parameters

| Parameter | Description |
| --- | --- |
| string pod_sandbox_id | Sandbox ID. |

##### Return Values

None

#### RemovePodSandbox

##### Prototype

```shell
rpc RemovePodSandbox(RemovePodSandboxRequest) returns (RemovePodSandboxResponse) {}
```

##### Description

This API is used to delete a sandbox. If any running container belongs to the sandbox, the container must be forcibly stopped and deleted. If the sandbox has already been deleted, no error is returned.

##### Precautions

1. When a sandbox is deleted, its network resources are not deleted. Before deleting a pod, you must call StopPodSandbox to clear the network resources. Ensure that StopPodSandbox is called at least once before deleting the sandbox.
2. If a sandbox is deleted but some containers in the sandbox fail to be deleted, you need to delete those containers manually.

##### Parameters

| Parameter | Description |
| --- | --- |
| string pod_sandbox_id | Sandbox ID. |

##### Return Values

None

#### PodSandboxStatus

##### Prototype

```shell
rpc PodSandboxStatus(PodSandboxStatusRequest) returns (PodSandboxStatusResponse) {}
```

##### Description

This API is used to query the sandbox status. If the sandbox does not exist, an error is returned.

##### Parameters

| Parameter | Description |
| --- | --- |
| string pod_sandbox_id | Sandbox ID. |
| bool verbose | Whether to display additional information about the sandbox. This parameter does not take effect now. |

##### Return Values

| Return Value | Description |
| --- | --- |
| PodSandboxStatus status | Status of the sandbox. |
| map<string, string> info | Additional information about the sandbox. The key can be any string, and the value is a JSON string that can contain any debugging content. When verbose is set to true, info cannot be empty. This parameter does not take effect now. |

#### ListPodSandbox

##### Prototype

```shell
rpc ListPodSandbox(ListPodSandboxRequest) returns (ListPodSandboxResponse) {}
```

##### Description

This API is used to return the sandbox information list. Filtering based on criteria is supported.

##### Parameters

| Parameter | Description |
| --- | --- |
| PodSandboxFilter filter | Filter criteria. |

##### Return Values

| Return Value | Description |
| --- | --- |
| repeated PodSandbox items | Sandbox information list. |

#### CreateContainer

##### Prototype

```shell
grpc::Status CreateContainer(grpc::ServerContext *context, const runtime::CreateContainerRequest *request, runtime::CreateContainerResponse *reply) {}
```

##### Description

This API is used to create a container in a PodSandbox.

##### Precautions

- **sandbox_config** in **CreateContainerRequest** is the same as the configuration passed to **RunPodSandboxRequest** when creating the PodSandbox. It is passed again for reference only; PodSandboxConfig must remain unchanged throughout the lifecycle of a pod.
- The container name is built from fields in [ContainerMetadata](#apis.md#en-us_topic_0182207110_li17135914132319) joined with underscores (_), so the metadata cannot contain underscores. Otherwise, the [ListContainers](#listcontainers.md#EN-US_TOPIC_0184808103) API cannot query the container even when it is running successfully.
- **CreateContainerRequest** does not contain the **runtime_handler** field. The runtime type of the container is the same as that of the corresponding sandbox.

##### Parameters

| Parameter | Description |
| --- | --- |
| string pod_sandbox_id | ID of the PodSandbox in which the container is to be created. |
| ContainerConfig config | Container configuration information. |
| PodSandboxConfig sandbox_config | PodSandbox configuration information. |

#### Supplement

The annotations field carries unstructured key-value mappings that can be used to store and retrieve arbitrary metadata. It can be used to pass parameters for fields for which the CRI does not provide specific parameters.

- Custom fields:

  | Custom key:value | Description |
  | --- | --- |
  | cgroup.pids.max:int64_t | Limits the number of processes or threads in a container. (Set the value to -1 for an unlimited number.) |

##### Return Values

| Return Value | Description |
| --- | --- |
| string container_id | ID of the created container. |

#### StartContainer

##### Prototype

```shell
rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {}
```

##### Description

This API is used to start a container.

##### Parameters

| Parameter | Description |
| --- | --- |
| string container_id | Container ID. |

##### Return Values

None

#### StopContainer

##### Prototype

```shell
rpc StopContainer(StopContainerRequest) returns (StopContainerResponse) {}
```

##### Description

This API is used to stop a running container. A graceful stop timeout can be set. If the container has already been stopped, no error is returned.

##### Parameters

| Parameter | Description |
| --- | --- |
| string container_id | Container ID. |
| int64 timeout | Waiting time before the container is forcibly stopped. The default value is 0, indicating an immediate forcible stop. |

##### Return Values

None

#### RemoveContainer

##### Prototype

```shell
rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse) {}
```

##### Description

This API is used to delete a container. If the container is running, it must be forcibly stopped first. If the container has already been deleted, no error is returned.

##### Parameters

| Parameter | Description |
| --- | --- |
| string container_id | Container ID. |

##### Return Values

None

#### ListContainers

##### Prototype

```shell
rpc ListContainers(ListContainersRequest) returns (ListContainersResponse) {}
```

##### Description

This API is used to return the container information list. Filtering based on criteria is supported.

##### Parameters

| Parameter | Description |
| --- | --- |
| ContainerFilter filter | Filter criteria. |

##### Return Values

| Return Value | Description |
| --- | --- |
| repeated Container containers | Container information list. |

#### ContainerStatus

##### Prototype

```shell
rpc ContainerStatus(ContainerStatusRequest) returns (ContainerStatusResponse) {}
```

##### Description

This API is used to return the container status information. If the container does not exist, an error is returned.

##### Parameters

| Parameter | Description |
| --- | --- |
| string container_id | Container ID. |
| bool verbose | Whether to display additional information about the container. This parameter does not take effect now. |

##### Return Values

| Return Value | Description |
| --- | --- |
| ContainerStatus status | Container status information. |
| map<string, string> info | Additional information about the container. The key can be any string, and the value is a JSON string that can contain any debugging content. When verbose is set to true, info cannot be empty. This parameter does not take effect now. |

#### UpdateContainerResources

##### Prototype

```shell
rpc UpdateContainerResources(UpdateContainerResourcesRequest) returns (UpdateContainerResourcesResponse) {}
```

##### Description

This API is used to update container resource configurations.

##### Precautions

- This API cannot be used to update the pod resource configurations.
- The value of **oom_score_adj** of any container cannot be updated.

##### Parameters

| Parameter | Description |
| --- | --- |
| string container_id | Container ID. |
| LinuxContainerResources linux | Linux resource configuration information. |

##### Return Values

None

#### ExecSync

##### Prototype

```shell
rpc ExecSync(ExecSyncRequest) returns (ExecSyncResponse) {}
```

##### Description

This API is used to run a command in a container in synchronous mode through gRPC.

##### Precautions

Interaction between the terminal and the container must be disabled when a single command is executed.

##### Parameters

| Parameter | Description |
| --- | --- |
| string container_id | Container ID. |
| repeated string cmd | Command to be executed. |
| int64 timeout | Timeout period before the command is forcibly stopped (unit: second). The default value is 0, indicating that there is no timeout limit. This parameter does not take effect now. |

##### Return Values

| Return Value | Description |
| --- | --- |
| bytes stdout | Captured standard output of the command. |
| bytes stderr | Captured standard error output of the command. |
| int32 exit_code | Exit code of the completed command. The default value is 0, indicating that the command was executed successfully. |

#### Exec

##### Prototype

```shell
rpc Exec(ExecRequest) returns (ExecResponse) {}
```

##### Description

This API is used to run commands in a container through gRPC: it obtains a URL from the CRI server, and the URL is then used to establish a long connection to the WebSocket server, implementing interaction with the container.

##### Precautions

Interaction between the terminal and the container can be enabled when a single command is executed. At least one of **stdin**, **stdout**, and **stderr** must be true. If **tty** is true, **stderr** must be false, because multiplexing is not supported in that case and the output of **stdout** and **stderr** is combined into a single stream.

##### Parameters

| Parameter | Description |
| --- | --- |
| string container_id | Container ID. |
| repeated string cmd | Command to be executed. |
| bool tty | Whether to run the command in a TTY. |
| bool stdin | Whether to generate the standard input stream. |
| bool stdout | Whether to generate the standard output stream. |
| bool stderr | Whether to generate the standard error output stream. |

##### Return Values

| Return Value | Description |
| --- | --- |
| string url | Fully qualified URL of the exec streaming server. |

#### Attach

##### Prototype

```shell
rpc Attach(AttachRequest) returns (AttachResponse) {}
```

##### Description

This API is used to take over the init process of a container through gRPC: it obtains a URL from the CRI server, and the URL is then used to establish a long connection to the WebSocket server, implementing interaction with the container. Only containers whose runtime is of the LCR type are supported.

##### Parameters

| Parameter | Description |
| --- | --- |
| string container_id | Container ID. |
| bool tty | Whether to run the command in a TTY. |
| bool stdin | Whether to generate the standard input stream. |
| bool stdout | Whether to generate the standard output stream. |
| bool stderr | Whether to generate the standard error output stream. |

##### Return Values

| Return Value | Description |
| --- | --- |
| string url | Fully qualified URL of the attach streaming server. |

#### ContainerStats

##### Prototype

```shell
rpc ContainerStats(ContainerStatsRequest) returns (ContainerStatsResponse) {}
```

##### Description

This API is used to return information about the resources occupied by a container. Only containers whose runtime is of the LCR type are supported.

##### Parameters

| Parameter | Description |
| --- | --- |
| string container_id | Container ID. |

##### Return Values

| Return Value | Description |
| --- | --- |
| ContainerStats stats | Container statistics. Note: disk and inode statistics can be queried only for containers started from OCI images. |

#### ListContainerStats

##### Prototype

```shell
rpc ListContainerStats(ListContainerStatsRequest) returns (ListContainerStatsResponse) {}
```

##### Description

This API is used to return information about the resources occupied by multiple containers. Filtering based on criteria is supported.

##### Parameters

| Parameter | Description |
| --- | --- |
| ContainerStatsFilter filter | Filter criteria. |

##### Return Values

| Return Value | Description |
| --- | --- |
| repeated ContainerStats stats | Container statistics list. Note: disk and inode statistics can be queried only for containers started from OCI images. |

#### UpdateRuntimeConfig

##### Prototype

```shell
rpc UpdateRuntimeConfig(UpdateRuntimeConfigRequest) returns (UpdateRuntimeConfigResponse);
```

##### Description

This API is the standard CRI interface for updating the pod CIDR of the network plug-in. Currently, the CNI network plug-in does not need to update the pod CIDR, so this API only records access logs.

##### Precautions

Calling this API does not modify system management information; it only records a log.

##### Parameters

| Parameter | Description |
| --- | --- |
| RuntimeConfig runtime_config | Information to be configured for the runtime. |

##### Return Values

None

#### Status

##### Prototype

```shell
rpc Status(StatusRequest) returns (StatusResponse) {};
```

##### Description

This API is used to obtain the network status of the runtime and pod. Obtaining the network status triggers an update of the network configuration. Only containers whose runtime is of the LCR type are supported.

##### Precautions

If the network configuration fails to be updated, the original configuration is not affected; it is overwritten only when the update succeeds.

##### Parameters

| Parameter | Description |
| --- | --- |
| bool verbose | Whether to display additional runtime information. This parameter does not take effect now. |

##### Return Values

| Return Value | Description |
| --- | --- |
| RuntimeStatus status | Runtime status. |
| map<string, string> info | Additional information about the runtime. The key can be any string, and the value is a JSON string that can contain any debugging information. When verbose is set to true, info cannot be empty. |

### Image Service

The image service provides gRPC APIs for pulling, viewing, and removing images from the registry.

#### ListImages

##### Prototype

```shell
rpc ListImages(ListImagesRequest) returns (ListImagesResponse) {}
```

##### Description

This API is used to list existing image information.

##### Precautions

This is a unified API. You can run the **cri images** command to query embedded images. However, embedded images are not standard OCI images, so query results have the following restrictions:

- An embedded image does not have an image ID, so the value of **image ID** is the config digest of the image.
- An embedded image has only a config digest that does not comply with the OCI image specifications, so the value of **digest** cannot be displayed.

##### Parameters

| Parameter | Description |
| --- | --- |
| ImageSpec filter | Name of the image to be filtered. |

##### Return Values

| Return Value | Description |
| --- | --- |
| repeated Image images | Image information list. |

#### ImageStatus

##### Prototype

```shell
rpc ImageStatus(ImageStatusRequest) returns (ImageStatusResponse) {}
```

##### Description

This API is used to query the information about a specified image.

##### Precautions

1. If the image to be queried does not exist, **ImageStatusResponse** is returned with **Image** set to **nil**.
2. This is a unified API. Because embedded images do not comply with the OCI image specifications and lack required fields, they cannot be queried by using this API.

##### Parameters

| Parameter | Description |
| --- | --- |
| ImageSpec image | Image name. |
| bool verbose | Whether to query additional information. This parameter does not take effect now; no additional information is returned. |

##### Return Values

| Return Value | Description |
| --- | --- |
| Image image | Image information. |
| map<string, string> info | Additional image information. This parameter does not take effect now; no additional information is returned. |

#### PullImage

##### Prototype

```shell
rpc PullImage(PullImageRequest) returns (PullImageResponse) {}
```

##### Description

This API is used to download images.

##### Precautions

Currently, you can download public images, or use the username, password, and auth information to download private images. The **server_address**, **identity_token**, and **registry_token** fields in **authconfig** cannot be configured.

##### Parameters

| Parameter | Description |
| --- | --- |
| ImageSpec image | Name of the image to be downloaded. |
| AuthConfig auth | Authentication information for downloading a private image. |
| PodSandboxConfig sandbox_config | Whether to download the image in the pod context. This parameter does not take effect now. |

##### Return Values

| Return Value | Description |
| --- | --- |
| string image_ref | Information about the downloaded image. |

#### RemoveImage

##### Prototype

```shell
rpc RemoveImage(RemoveImageRequest) returns (RemoveImageResponse) {}
```

##### Description

This API is used to delete a specified image.

##### Precautions

This is a unified API. Because embedded images do not comply with the OCI image specifications and lack required fields, they cannot be deleted by using this API with the image ID.

##### Parameters

| Parameter | Description |
| --- | --- |
| ImageSpec image | Name or ID of the image to be deleted. |

##### Return Values

None

#### ImageFsInfo

##### Prototype

```shell
rpc ImageFsInfo(ImageFsInfoRequest) returns (ImageFsInfoResponse) {}
```

##### Description

This API is used to query information about the file system that stores images.

##### Precautions

Query results are the file system information recorded in the image metadata.

##### Parameters

None

##### Return Values

| Return Value | Description |
| --- | --- |
| repeated FilesystemUsage image_filesystems | Information about the file system that stores images. |

- -### Constraints - -1. If **log\_directory** is configured in the **PodSandboxConfig** parameter when a sandbox is created, **log\_path** must be specified in **ContainerConfig** when all containers that belong to the sandbox are created. Otherwise, the containers may not be started or deleted by using the CRI. - - The actual value of **LOGPATH** of containers is **log\_directory/log\_path**. If **log\_path** is not set, the final value of **LOGPATH** is changed to **log\_directory**. - - - If the path does not exist, iSulad will create a soft link pointing to the actual path of container logs when starting a container. Then **log\_directory** becomes a soft link. There are two cases: - 1. In the first case, if **log\_path** is not configured for other containers in the sandbox, **log\_directory** will be deleted and point to **log\_path** of the newly started container. As a result, logs of the first started container point to logs of the later started container. - 2. In the second case, if **log\_path** is configured for other containers in the sandbox, the value of **LOGPATH** of the container is **log\_directory/log\_path**. Because **log\_directory** is a soft link, the creation fails when **log\_directory/log\_path** is used as the soft link to point to the actual path of container logs. - - - If the path exists, iSulad will attempt to delete the path \(non-recursive\) when starting a container. If the path is a folder path containing content, the deletion fails. As a result, the soft link fails to be created, the container fails to be started, and the same error occurs when the container is going to be deleted. - -2. If **log\_directory** is configured in the **PodSandboxConfig** parameter when a sandbox is created, and **log\_path** is specified in **ContainerConfig** when a container is created, the final value of **LOGPATH** is **log\_directory/log\_path**. 
    iSulad does not recursively create **LOGPATH**. Therefore, you must ensure that **dirname\(LOGPATH\)** exists, that is, the upper-level path of the final log file path exists.

3. If **log\_directory** is configured in the **PodSandboxConfig** parameter when a sandbox is created, and the same **log\_path** is specified in **ContainerConfig** when multiple containers are created, or if containers in different sandboxes point to the same **LOGPATH**, the latest container log path will overwrite the previous path after the containers are started successfully.

4. If the image content in the remote registry changes and the original image is stored on the local host, the name and tag of the original image are changed to **none** when you call the CRI Pull image API to download the image again.

    An example is as follows:

    Locally stored images:

    ```console
    IMAGE                                               TAG                 IMAGE ID            SIZE
    rnd-dockerhub.huawei.com/pproxyisulad/test          latest              99e59f495ffaa       753kB
    ```

    After the **rnd-dockerhub.huawei.com/pproxyisulad/test:latest** image in the remote registry is updated and downloaded again:

    ```console
    IMAGE                                               TAG                 IMAGE ID            SIZE
    <none>                                              <none>              99e59f495ffaa       753kB
    rnd-dockerhub.huawei.com/pproxyisulad/test          latest              d8233ab899d41       1.42MB
    ```

    Run the **isula images** command. The value of **REF** is displayed as **-**.
    ```console
    REF                                                     IMAGE ID            CREATED             SIZE
    rnd-dockerhub.huawei.com/pproxyisulad/test:latest       d8233ab899d41       2019-02-14 19:19:37 1.42MB
    -                                                       99e59f495ffaa       2016-05-04 02:26:41 753kB
    ```
diff --git a/docs/en/docs/Container/figures/en-us_image_0221924926.png b/docs/en/docs/Container/figures/en-us_image_0221924926.png
deleted file mode 100644
index 62ef0decdf6f1e591059904001d712a54f727e68..0000000000000000000000000000000000000000
Binary files a/docs/en/docs/Container/figures/en-us_image_0221924926.png and /dev/null differ
diff --git a/docs/en/docs/Container/figures/en-us_image_0221924927.png b/docs/en/docs/Container/figures/en-us_image_0221924927.png
deleted file mode 100644
index ad5ed3f7beeb01e6a48707c4806606b41d687e22..0000000000000000000000000000000000000000
Binary files a/docs/en/docs/Container/figures/en-us_image_0221924927.png and /dev/null differ
diff --git a/docs/en/docs/Container/figures/isula-build_arch.png b/docs/en/docs/Container/figures/isula-build_arch.png
deleted file mode 100644
index f92f15085820ce824bc2ca60ff7d6d25e95f1402..0000000000000000000000000000000000000000
Binary files a/docs/en/docs/Container/figures/isula-build_arch.png and /dev/null differ
diff --git a/docs/en/docs/Container/installation-and-deployment.md b/docs/en/docs/Container/installation-and-deployment.md
deleted file mode 100644
index fa661af71113812c87b8beff6863fad163f743e6..0000000000000000000000000000000000000000
--- a/docs/en/docs/Container/installation-and-deployment.md
+++ /dev/null
@@ -1,977 +0,0 @@

# Installation and Configuration

- [Installation and Configuration](./installation-configuration)
    - [Installation Methods](#installation-methods)
    - [Deployment Configuration](#deployment-configuration)
        - [Configuration Mode](#configuration-mode)
        - [Storage Description](#storage-description)
        - [Constraints](#constraints)
        - [Daemon Multi-Port Binding](#daemon-multi-port-binding)
        - [Configuring TLS Authentication and Enabling Remote
Access](#configuring-tls-authentication-and-enabling-remote-access)
        - [devicemapper Storage Driver Configuration](#devicemapper-storage-driver-configuration)

## Installation Methods

iSulad can be installed by running the **yum** or **rpm** command. The **yum** command is recommended because dependencies can be installed automatically.

This section describes two installation methods.

- \(Recommended\) Run the following command to install iSulad:

    ```
    $ sudo yum install -y iSulad
    ```

- If the **rpm** command is used to install iSulad, you need to download and manually install the RPM packages of iSulad and all its dependencies. To install the RPM package of a single iSulad \(the same applies to dependency packages\), run the following command:

    ```
    $ sudo rpm -ihv iSulad-xx.xx.xx-YYYYmmdd.HHMMSS.gitxxxxxxxx.aarch64.rpm
    ```

## Deployment Configuration

### Configuration Mode

The iSulad server daemon **isulad** can be configured with a configuration file or by running the **isulad --xxx** command. The priority in descending order is as follows: CLI \> configuration file \> default configuration in code.

>![](./public_sys-resources/icon-note.gif) **NOTE:**
>If systemd is used to manage the iSulad process, modify the **OPTIONS** field in the **/etc/sysconfig/iSulad** file, which functions the same as using the CLI.

- **CLI**

    During service startup, configure iSulad using the CLI. To view the configuration options, run the following command:

    ```
    $ isulad --help
    lightweight container runtime daemon

    Usage:  isulad [global options]

    GLOBAL OPTIONS:

      --authorization-plugin            Use authorization plugin
      --cgroup-parent                   Set parent cgroup for all containers
      --cni-bin-dir                     The full path of the directory in which to search for CNI plugin binaries. Default: /opt/cni/bin
      --cni-conf-dir                    The full path of the directory in which to search for CNI config files.
                                        Default: /etc/cni/net.d
      --default-ulimit                  Default ulimits for containers (default [])
      -e, --engine                      Select backend engine
      -g, --graph                       Root directory of the iSulad runtime
      -G, --group                      Group for the unix socket(default is isulad)
      --help                            Show help
      --hook-spec                       Default hook spec file applied to all containers
      -H, --host                        The socket name used to create gRPC server
      --image-layer-check               Check layer intergrity when needed
      --image-opt-timeout               Max timeout(default 5m) for image operation
      --insecure-registry               Disable TLS verification for the given registry
      --insecure-skip-verify-enforce    Force to skip the insecure verify(default false)
      --log-driver                      Set daemon log driver, such as: file
      -l, --log-level                   Set log level, the levels can be: FATAL ALERT CRIT ERROR WARN NOTICE INFO DEBUG TRACE
      --log-opt                         Set daemon log driver options, such as: log-path=/tmp/logs/ to set directory where to store daemon logs
      --native.umask                    Default file mode creation mask (umask) for containers
      --network-plugin                  Set network plugin, default is null, support null and cni
      -p, --pidfile                     Save pid into this file
      --pod-sandbox-image               The image whose network/ipc namespaces containers in each pod will use.
                                        (default "rnd-dockerhub.huawei.com/library/pause-${machine}:3.0")
      --registry-mirrors                Registry to be prepended when pulling unqualified images, can be specified multiple times
      --start-timeout                   timeout duration for waiting on a container to start before it is killed
      -S, --state                       Root directory for execution state files
      --storage-driver                  Storage driver to use(default overlay2)
      -s, --storage-opt                 Storage driver options
      --tls                             Use TLS; implied by --tlsverify
      --tlscacert                       Trust certs signed only by this CA (default "/root/.iSulad/ca.pem")
      --tlscert                         Path to TLS certificate file (default "/root/.iSulad/cert.pem")
      --tlskey                          Path to TLS key file (default "/root/.iSulad/key.pem")
      --tlsverify                       Use TLS and verify the remote
      --use-decrypted-key               Use decrypted private key by default(default true)
      -V, --version                     Print the version
      --websocket-server-listening-port CRI websocket streaming service listening port (default 10350)
    ```

    Example: Start iSulad and change the log level to DEBUG.

    ```
    $ isulad -l DEBUG
    ```

- **Configuration file**

    The iSulad configuration file is **/etc/isulad/daemon.json**. The parameters in the file are described as follows:

| Parameter | Example | Description | Remarks |
|---|---|---|---|
| -e, --engine | "engine": "lcr" | iSulad runtime, which is lcr by default. | None |
| -G, --group | "group": "isulad" | Socket group. | None |
| --hook-spec | "hook-spec": "/etc/default/isulad/hooks/default.json" | Default hook configuration file for all containers. | None |
| -H, --host | "hosts": "unix:///var/run/isulad.sock" | Communication mode. | In addition to the local socket, the tcp://ip:port mode is supported. The port number ranges from 0 to 65535, excluding occupied ports. |
| --log-driver | "log-driver": "file" | Log driver configuration. | None |
| -l, --log-level | "log-level": "ERROR" | Log output level. | None |
| --log-opt | "log-opts": { "log-file-mode": "0600", "log-path": "/var/lib/isulad", "max-file": "1", "max-size": "30KB" } | Log-related configuration. | You can specify max-file, max-size, and log-path. max-file indicates the number of log files. max-size indicates the threshold for triggering log rotation; if max-file is 1, max-size is invalid. log-path specifies the directory for storing log files. log-file-mode sets the read/write permissions of log files; the value must be in octal format, for example, 0666. |
| --start-timeout | "start-timeout": "2m" | Time allowed for starting a container. | None |
| --runtime | "default-runtime": "lcr" | Container runtime, which is lcr by default. | If neither the CLI nor the configuration file specifies the runtime, lcr is used by default. The priority of the three specifying methods is: CLI > configuration file > default value lcr. Currently, lcr and kata-runtime are supported. |
| None | "runtimes": { "kata-runtime": { "path": "/usr/bin/kata-runtime", "runtime-args": [ "--kata-config", "/usr/share/defaults/kata-containers/configuration.toml" ] } } | When starting a container, set this parameter to specify multiple runtimes. Runtimes in this set are valid for container startup. | Runtime whitelist of a container. The customized runtimes in this set are valid; kata-runtime is used as the example. |
| -p, --pidfile | "pidfile": "/var/run/isulad.pid" | File for storing PIDs. | This parameter is required only when more than two container engines need to be started. |
| -g, --graph | "graph": "/var/lib/isulad" | Root directory for iSulad runtimes. | None |
| -S, --state | "state": "/var/run/isulad" | Root directory of the execution files. | None |
| --storage-driver | "storage-driver": "overlay2" | Image storage driver, which is overlay2 by default. | Only overlay2 is supported. |
| -s, --storage-opt | "storage-opts": [ "overlay2.override_kernel_check=true" ] | Image storage driver configuration options. | The options are as follows: overlay2.override_kernel_check=true ignores the kernel version check; overlay2.size=${size} sets the rootfs quota to ${size}; overlay2.basesize=${size} is equivalent to overlay2.size. |
| --image-opt-timeout | "image-opt-timeout": "5m" | Image operation timeout interval, which is 5m by default. | The value -1 indicates that the timeout interval is not limited. |
| --registry-mirrors | "registry-mirrors": [ "docker.io" ] | Registry address. | None |
| --insecure-registry | "insecure-registries": [ ] | Registry without TLS verification. | None |
| --native.umask | "native.umask": "secure" | Container umask policy. The default value is secure. The value normal indicates an insecure configuration. | Sets the container umask value. The value can be null (0027 by default), normal, or secure. normal: the umask value of a started container is 0022. secure: the umask value of a started container is 0027 (default). |
| --pod-sandbox-image | "pod-sandbox-image": "rnd-dockerhub.huawei.com/library/pause-aarch64:3.0" | Image used by the pod by default. The default value is rnd-dockerhub.huawei.com/library/pause-${machine}:3.0. | None |
| --network-plugin | "network-plugin": "" | Specifies a network plug-in. The value is an empty string by default, indicating that no network configuration is available and the created sandbox has only the loopback NIC. | The values cni and empty string are supported. Other invalid values cause iSulad startup failure. |
| --cni-bin-dir | "cni-bin-dir": "" | Specifies the storage location of the binary files on which the CNI plug-in depends. | The default value is /opt/cni/bin. |
| --cni-conf-dir | "cni-conf-dir": "" | Specifies the storage location of the CNI network configuration files. | The default value is /etc/cni/net.d. |
| --image-layer-check=false | "image-layer-check": false | Image layer integrity check. Set it to true to enable the function; it is disabled (false) by default. | When iSulad is started, the image layer integrity is checked. If an image layer is damaged, the related images are unavailable. iSulad cannot verify empty files, directories, and link files, so if such files are lost due to a power failure, the integrity check may not detect the loss. When the iSulad version changes, check whether the parameter is supported; if not, delete it from the configuration file. |
| --insecure-skip-verify-enforce=false | "insecure-skip-verify-enforce": false | Specifies whether to forcibly skip the verification of the certificate host name/domain name. The value is of the Boolean type, and the default value is false. If this parameter is set to true, the verification is skipped. | The default value is false (not skipped). Note: restricted by the YAJL JSON parsing library, if a non-Boolean value that meets the JSON format requirements is configured in /etc/isulad/daemon.json, the default value false is used by iSulad. |
| --use-decrypted-key=true | "use-decrypted-key": true | Specifies whether to use an unencrypted private key. The value is of the Boolean type. If set to true, an unencrypted private key is used; if set to false, an encrypted private key is used, that is, two-way authentication is required. | The default value is true (an unencrypted private key is used). Note: restricted by the YAJL JSON parsing library, if a non-Boolean value that meets the JSON format requirements is configured in /etc/isulad/daemon.json, the default value true is used by iSulad. |
| --tls | "tls": false | Specifies whether to use TLS. The value is of the Boolean type. | Used only in -H tcp://IP:PORT mode. The default value is false. |
| --tlsverify | "tlsverify": false | Specifies whether to use TLS and verify remote access. The value is of the Boolean type. | Used only in -H tcp://IP:PORT mode. |
| --tlscacert, --tlscert, --tlskey | "tls-config": { "CAFile": "/root/.iSulad/ca.pem", "CertFile": "/root/.iSulad/server-cert.pem", "KeyFile": "/root/.iSulad/server-key.pem" } | TLS certificate-related configuration. | Used only in -H tcp://IP:PORT mode. |
| --authorization-plugin | "authorization-plugin": "authz-broker" | User permission authentication plugin. | Only authz-broker is supported. |
| --cgroup-parent | "cgroup-parent": "lxc/mycgroup" | Default cgroup parent path of a container, which is of the string type. | Specifies the cgroup parent path of a container. If --cgroup-parent is specified on the client, the client parameter prevails. Note: if container A is started before container B and the cgroup parent path of container B is specified as the cgroup path of container A, delete container B first and then container A. Otherwise, residual cgroup resources remain. |
| --default-ulimits | "default-ulimits": { "nofile": { "Name": "nofile", "Hard": 6400, "Soft": 3200 } } | Specifies the ulimit restriction type, soft value, and hard value. | Specifies the restricted resource type, for example, nofile. The two field names must be the same, that is, nofile; otherwise, an error is reported. The value of Hard must be greater than or equal to that of Soft. If the Hard or Soft field is not set, the default value 0 is used. |
| --websocket-server-listening-port | "websocket-server-listening-port": 10350 | Specifies the listening port of the CRI WebSocket streaming service. The default port number is 10350. | If the client specifies --websocket-server-listening-port, the specified value is used. The port number must range from 1024 to 49151. |
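Some of the validity rules stated above are easy to get wrong when hand-editing **daemon.json**, notably the **default-ulimits** constraints (the map key must match the inner Name field, unset Hard/Soft fields default to 0, and Hard must be greater than or equal to Soft) and the client-side websocket port range (1024 to 49151). The following is an illustrative stand-alone check of those two rules, not part of iSulad itself:

```python
def check_ulimit(name: str, entry: dict) -> None:
    """Apply the documented default-ulimits rules to one entry."""
    # The map key and the inner "Name" field must match, e.g. both "nofile".
    if entry.get("Name") != name:
        raise ValueError(f"ulimit key {name!r} must match Name {entry.get('Name')!r}")
    # Unset Hard/Soft fields default to 0; Hard must be >= Soft.
    hard, soft = entry.get("Hard", 0), entry.get("Soft", 0)
    if hard < soft:
        raise ValueError(f"Hard ({hard}) must be >= Soft ({soft})")

def check_websocket_port(port: int) -> None:
    # Documented range for a client-specified port: 1024-49151.
    if not 1024 <= port <= 49151:
        raise ValueError(f"websocket port {port} outside 1024-49151")

# Sample fragment matching the table's example values.
config = {
    "default-ulimits": {"nofile": {"Name": "nofile", "Hard": 6400, "Soft": 3200}},
    "websocket-server-listening-port": 10350,
}
for key, entry in config["default-ulimits"].items():
    check_ulimit(key, entry)
check_websocket_port(config["websocket-server-listening-port"])
print("config ok")
```

A real deployment would let iSulad itself reject a bad file at startup; the sketch only mirrors the documented rules for pre-flight linting.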
    Example:

    ```
    $ cat /etc/isulad/daemon.json
    {
        "group": "isulad",
        "default-runtime": "lcr",
        "graph": "/var/lib/isulad",
        "state": "/var/run/isulad",
        "engine": "lcr",
        "log-level": "ERROR",
        "pidfile": "/var/run/isulad.pid",
        "log-opts": {
            "log-file-mode": "0600",
            "log-path": "/var/lib/isulad",
            "max-file": "1",
            "max-size": "30KB"
        },
        "log-driver": "stdout",
        "hook-spec": "/etc/default/isulad/hooks/default.json",
        "start-timeout": "2m",
        "storage-driver": "overlay2",
        "storage-opts": [
            "overlay2.override_kernel_check=true"
        ],
        "registry-mirrors": [
            "docker.io"
        ],
        "insecure-registries": [
            "rnd-dockerhub.huawei.com"
        ],
        "pod-sandbox-image": "",
        "image-opt-timeout": "5m",
        "native.umask": "secure",
        "network-plugin": "",
        "cni-bin-dir": "",
        "cni-conf-dir": "",
        "image-layer-check": false,
        "use-decrypted-key": true,
        "insecure-skip-verify-enforce": false
    }
    ```

    >![](./public_sys-resources/icon-notice.gif) **NOTICE:**
    >The default configuration file **/etc/isulad/daemon.json** is for reference only. Configure it based on site requirements.

### Storage Description

| File | Directory | Description |
|---|---|---|
| \* | /etc/default/isulad/ | Stores the OCI configuration file and hook template file of iSulad. The file configuration permission is set to 0640, and the sysmonitor check permission is set to 0550. |
| \* | /etc/isulad/ | Default configuration files of iSulad and seccomp. |
| isulad.sock | /var/run/ | Pipe communication file, which is used for the communication between the client and iSulad. |
| isulad.pid | /var/run/ | File for storing the iSulad PIDs. It is also a file lock that prevents multiple iSulad instances from being started. |
| \* | /run/lxc/ | Lock file, which is created during iSulad running. |
| \* | /var/run/isulad/ | Real-time communication cache file, which is created during iSulad running. |
| \* | /var/run/isula/ | Real-time communication cache file, which is created during iSulad running. |
| \* | /var/lib/lcr/ | Temporary directory of the LCR component. |
| \* | /var/lib/isulad/ | Root directory where iSulad runs, which stores the created container configuration, default log path, database file, and mount points. /var/lib/isulad/mnt/: mount point of the container rootfs. /var/lib/isulad/engines/lcr/: directory for storing LCR container configurations; each container has a directory named after it. |
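The documented locations can be cross-checked on a host with a short script. This is an illustrative helper only; the path list is copied from the storage description above, and the script merely reports which entries exist:

```python
import os

# Paths documented for iSulad (path -> purpose). Socket/PID entries combine
# the File and Directory columns; directory entries use "*" rows.
ISULAD_PATHS = {
    "/etc/default/isulad/": "OCI configuration and hook templates",
    "/etc/isulad/": "default iSulad and seccomp configuration",
    "/var/run/isulad.sock": "client/daemon pipe communication file",
    "/var/run/isulad.pid": "PID file and single-instance lock",
    "/var/lib/isulad/": "root directory: container configs, logs, database, mount points",
}

def report(paths=ISULAD_PATHS):
    """Return {path: bool} telling whether each documented path exists."""
    return {p: os.path.exists(p) for p in paths}

if __name__ == "__main__":
    for path, present in report().items():
        print(f"{'ok     ' if present else 'MISSING'} {path}  ({ISULAD_PATHS[path]})")
```

On a host without iSulad installed, every entry is expected to be reported as missing; the script does not create or modify anything.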
### Constraints

- In high concurrency scenarios \(200 containers started concurrently\), the memory management mechanism of glibc may cause memory holes and a large virtual memory footprint \(for example, 10 GB\). This is a restriction of the glibc memory management mechanism under high concurrency, not a memory leak, so memory consumption does not increase indefinitely. You can set the **MALLOC\_ARENA\_MAX** environment variable to reduce the virtual memory size and increase the rate at which physical memory is reclaimed. However, this environment variable degrades iSulad concurrency performance, so set it based on site requirements.

    ```
    To balance performance and memory usage, set MALLOC_ARENA_MAX to 4. (The iSulad performance on the ARM64 server is affected by less than 10%.)

    Configuration method:
    1. To manually start iSulad, run the export MALLOC_ARENA_MAX=4 command and then start iSulad.
    2. If systemd manages iSulad, modify the /etc/sysconfig/iSulad file by adding MALLOC_ARENA_MAX=4.
    ```

- Precautions for specifying the daemon running directories

    Take **--root** as an example. When **/new/path/** is used as the new daemon root directory, if a file already exists in **/new/path/** and its name conflicts with a directory or file required by iSulad \(for example, **engines** or **mnt**\), iSulad may update the attributes of the original directory or file, including the owner and permissions.

    Therefore, note the impact of re-specifying various running directories and files on their attributes. You are advised to specify a new directory or file for iSulad to avoid file attribute changes and security issues caused by conflicts.

- Log file management:

    >![](./public_sys-resources/icon-notice.gif) **NOTICE:**
    >Log function interconnection: logs are managed by systemd, as iSulad is, and then transmitted to rsyslogd. By default, rsyslog restricts the log writing speed.
    >You can add the configuration item **$imjournalRatelimitInterval 0** to the **/etc/rsyslog.conf** file and restart the rsyslogd service.

- Restrictions on command line parameter parsing

    When the iSulad command line interface is used, the parameter parsing mode differs slightly from that of Docker. For flags with parameters, regardless of whether a long or short flag is used, only the first space after the flag, or the character string after the equal sign \(=\) directly connected to the flag, is treated as the flag's parameter. The details are as follows:

    1. When a short flag is used, each character in the character string connected to the hyphen \(-\) is considered a short flag. If there is an equal sign \(=\), the character string following it is considered the parameter of the short flag immediately before it.

        **isula run -du=root busybox** is equivalent to **isula run -du root busybox**, **isula run -d -u=root busybox**, or **isula run -d -u root busybox**. When **isula run -du:root** is used, an error is reported because **-:** is not a valid short flag. **isula run -du root busybox** is also equivalent to **isula run -ud root busybox**; however, this style is not recommended because it may cause semantic problems.

    2. When a long flag is used, the character string connected to **--** is regarded as a long flag. If the character string contains an equal sign \(=\), the character string before the equal sign is the long flag, and the character string after it is the parameter.

        ```
        isula run --user=root busybox
        ```

        or

        ```
        isula run --user root busybox
        ```

- After an iSulad container is started, you cannot run the **isula run -i/-t/-ti** and **isula attach/exec** commands as a non-root user.
- When iSulad connects to an OCI container, only kata-runtime can be used to start the OCI container.
### Daemon Multi-Port Binding

#### Description

The daemon can bind multiple UNIX sockets or TCP ports and listen on them. The client can interact with the daemon through any of these ports.

#### Port

Users can configure one or more ports in the **hosts** field in the **/etc/isulad/daemon.json** file, or choose not to specify hosts.

```
{
    "hosts": [
        "unix:///var/run/isulad.sock",
        "tcp://localhost:5678",
        "tcp://127.0.0.1:6789"
    ]
}
```

Users can also use the **-H** or **--host** option in the **/etc/sysconfig/iSulad** file to configure ports, or choose not to specify hosts.

```
OPTIONS='-H unix:///var/run/isulad.sock --host tcp://127.0.0.1:6789'
```

If hosts are specified in neither the **daemon.json** file nor **/etc/sysconfig/iSulad**, the daemon listens on **unix:///var/run/isulad.sock** by default after startup.

#### Restrictions

- Users cannot specify hosts in the **/etc/isulad/daemon.json** and **/etc/sysconfig/iSulad** files at the same time. Otherwise, an error will occur and iSulad cannot be started.

    ```
    unable to configure the isulad with file /etc/isulad/daemon.json: the following directives are specified both as a flag and in the configuration file: hosts: (from flag: [unix:///var/run/isulad.sock tcp://127.0.0.1:6789], from file: [unix:///var/run/isulad.sock tcp://localhost:5678 tcp://127.0.0.1:6789])
    ```

- If the specified host is a UNIX socket, it must start with **unix://** followed by a valid absolute path.
- If the specified host is a TCP port, it must start with **tcp://** followed by a valid IP address and port number. The IP address can be that of the local host.
- A maximum of 10 valid ports can be specified. If more than 10 ports are specified, an error will occur and iSulad cannot be started.

### Configuring TLS Authentication and Enabling Remote Access

#### Description

iSulad is designed in C/S mode.
By default, the iSulad daemon process listens only on the local UNIX socket **/var/run/isulad.sock**, so you can run commands to operate containers only through the local client, iSula. To enable remote access to containers through iSula, the iSulad daemon process must listen on a remote access port over TCP/IP. However, if listening is enabled simply by configuring tcp://ip:port, any host can communicate with iSulad by calling **isula -H tcp://**_remote server IP address_**:port**, which may cause security problems. Therefore, it is recommended that the more secure Transport Layer Security \(TLS\) mechanism be used for remote access.

#### Generating TLS Certificate

- Example of generating a plaintext private key and certificate

    ```
    #!/bin/bash
    set -e
    echo -n "Enter pass phrase:"
    read password
    echo -n "Enter public network ip:"
    read publicip
    echo -n "Enter host:"
    read HOST

    echo " => Using hostname: $HOST, You MUST connect to iSulad using this host!"
    mkdir -p $HOME/.iSulad
    cd $HOME/.iSulad
    rm -rf $HOME/.iSulad/*

    echo " => Generating CA key"
    openssl genrsa -passout pass:$password -aes256 -out ca-key.pem 4096
    echo " => Generating CA certificate"
    openssl req -passin pass:$password -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/C=CN/ST=zhejiang/L=hangzhou/O=Huawei/OU=iSulad/CN=iSulad@huawei.com"
    echo " => Generating server key"
    openssl genrsa -passout pass:$password -out server-key.pem 4096
    echo " => Generating server CSR"
    openssl req -passin pass:$password -subj /CN=$HOST -sha256 -new -key server-key.pem -out server.csr
    echo subjectAltName = DNS:$HOST,IP:$publicip,IP:127.0.0.1 >> extfile.cnf
    echo extendedKeyUsage = serverAuth >> extfile.cnf
    echo " => Signing server CSR with CA"
    openssl x509 -req -passin pass:$password -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
    echo " => Generating client key"
    openssl genrsa -passout pass:$password -out key.pem 4096
    echo " => Generating client CSR"
    openssl req -passin pass:$password -subj '/CN=client' -new -key key.pem -out client.csr
    echo " => Creating extended key usage"
    echo extendedKeyUsage = clientAuth > extfile-client.cnf
    echo " => Signing client CSR with CA"
    openssl x509 -req -passin pass:$password -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf
    rm -v client.csr server.csr extfile.cnf extfile-client.cnf
    chmod -v 0400 ca-key.pem key.pem server-key.pem
    chmod -v 0444 ca.pem server-cert.pem cert.pem
    ```

- Example of generating an encrypted private key and certificate request file

    ```
    #!/bin/bash

    echo -n "Enter public network ip:"
    read publicip
    echo -n "Enter pass phrase:"
    read password

    # remove certificates from previous execution.
    rm -f *.pem *.srl *.csr *.cnf

    DAYS=365

    # generate CA private and public keys
    echo 01 > ca.srl
    openssl genrsa -aes256 -out ca-key.pem -passout pass:$password 2048
    openssl req -subj '/C=CN/ST=zhejiang/L=hangzhou/O=Huawei/OU=iSulad/CN=iSulad@huawei.com' -new -x509 -days $DAYS -passin pass:$password -key ca-key.pem -out ca.pem

    # create a server key and certificate signing request (CSR)
    openssl genrsa -aes256 -out server-key.pem -passout pass:$password 2048
    openssl req -new -key server-key.pem -out server.csr -passin pass:$password -subj '/CN=iSulad'

    echo subjectAltName = DNS:iSulad,IP:${publicip},IP:127.0.0.1 > extfile.cnf
    echo extendedKeyUsage = serverAuth >> extfile.cnf
    # sign the server key with our CA
    openssl x509 -req -days $DAYS -passin pass:$password -in server.csr -CA ca.pem -CAkey ca-key.pem -out server-cert.pem -extfile extfile.cnf

    # create a client key and certificate signing request (CSR)
    openssl genrsa -aes256 -out key.pem -passout pass:$password 2048
    openssl req -subj '/CN=client' -new -key key.pem -out client.csr -passin pass:$password

    # create an extensions config file and sign
    echo extendedKeyUsage = clientAuth > extfile.cnf
    openssl x509 -req -days 365 -passin pass:$password -in client.csr -CA ca.pem -CAkey ca-key.pem -out cert.pem -extfile extfile.cnf

    # remove the passphrase from the client and server key
    openssl rsa -in server-key.pem -out server-key.pem -passin pass:$password
    openssl rsa -in key.pem -out key.pem -passin pass:$password

    # remove generated files that are no longer required
    rm -f ca-key.pem ca.srl client.csr extfile.cnf server.csr
    ```

#### APIs

```
{
    "tls": true,
    "tls-verify": true,
    "tls-config": {
        "CAFile": "/root/.iSulad/ca.pem",
        "CertFile": "/root/.iSulad/server-cert.pem",
        "KeyFile": "/root/.iSulad/server-key.pem"
    }
}
```

#### Restrictions

The server supports the following modes:

- Mode 1 \(client verified\): tlsverify, tlscacert, tlscert, tlskey
- Mode 2 \(client not verified\): tls, tlscert, tlskey

The client supports the following modes:

- Mode 1 \(verify the client identity based on the client certificate, and verify the server based on the specified CA\): tlsverify, tlscacert, tlscert, tlskey
- Mode 2 \(server verified\): tlsverify, tlscacert

Mode 1 is used for the server and mode 1 for the client if the two-way authentication mode is used for communication.

Mode 2 is used for the server and mode 2 for the client if the unidirectional authentication mode is used for communication.

>![](./public_sys-resources/icon-notice.gif) **NOTICE:**
>- If RPM is used for installation, the server configuration can be modified in the **/etc/isulad/daemon.json** and **/etc/sysconfig/iSulad** files.
>- Two-way authentication is recommended as it is more secure than non-authentication or unidirectional authentication.
>- gRPC open-source component logs are not taken over by iSulad. To view gRPC logs, set the environment variables **gRPC\_VERBOSITY** and **gRPC\_TRACE** as required.

#### Example

On the server:

```
isulad -H=tcp://0.0.0.0:2376 --tlsverify --tlscacert ~/.iSulad/ca.pem --tlscert ~/.iSulad/server-cert.pem --tlskey ~/.iSulad/server-key.pem
```

On the client:

```
isula version -H=tcp://$HOSTIP:2376 --tlsverify --tlscacert ~/.iSulad/ca.pem --tlscert ~/.iSulad/cert.pem --tlskey ~/.iSulad/key.pem
```

### devicemapper Storage Driver Configuration

To use the devicemapper storage driver, you need to configure a thinpool device, which requires an independent block device with sufficient free space. Take the independent block device **/dev/xvdf** as an example. The configuration method is as follows:

1. Configure a thinpool.

    1. Stop the iSulad service.

        ```
        # systemctl stop isulad
        ```

    2. Create a logical volume manager \(LVM\) physical volume based on the block device.

        ```
        # pvcreate /dev/xvdf
        ```

    3. Create a volume group based on the created physical volume.
- - ``` - # vgcreate isula /dev/xvdf - Volume group "isula" successfully created - ``` - -4. Create two logical volumes named **thinpool** and **thinpoolmeta**. - - ``` - # lvcreate --wipesignatures y -n thinpool isula -l 95%VG - Logical volume "thinpool" created. - ``` - - ``` - # lvcreate --wipesignatures y -n thinpoolmeta isula -l 1%VG - Logical volume "thinpoolmeta" created. - ``` - -5. Convert the two logical volumes into a thinpool and the metadata used by the thinpool. - - ``` - # lvconvert -y --zero n -c 512K --thinpool isula/thinpool --poolmetadata isula/thinpoolmeta - - WARNING: Converting logical volume isula/thinpool and isula/thinpoolmeta to - thin pool's data and metadata volumes with metadata wiping. - THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) - Converted isula/thinpool to thin pool. - ``` - - -2. Modifying the iSulad configuration files - -1. If iSulad has been used in the environment, back up the running data first. - - ``` - # mkdir /var/lib/isulad.bk - # mv /var/lib/isulad/* /var/lib/isulad.bk - ``` - -2. Modify configuration files. - - Two configuration methods are provided. Select one based on site requirements. - - - Edit the **/etc/isulad/daemon.json** file, set **storage-driver** to **devicemapper**, and set parameters related to the **storage-opts** field. For details about related parameters, see [Parameter Description](#en-us_topic_0222861454_section1712923715282). The following lists the configuration reference: - - ``` - { - "storage-driver": "devicemapper", - "storage-opts": [ - "dm.thinpooldev=/dev/mapper/isula-thinpool", - "dm.fs=ext4", - "dm.min_free_space=10%" - ] - } - ``` - - - You can also edit **/etc/sysconfig/iSulad** to explicitly specify related iSulad startup parameters. For details about related parameters, see [Parameter Description](#en-us_topic_0222861454_section1712923715282). 
The following lists the configuration reference: - - ``` - OPTIONS="--storage-driver=devicemapper --storage-opt dm.thinpooldev=/dev/mapper/isula-thinpool --storage-opt dm.fs=ext4 --storage-opt dm.min_free_space=10%" - ``` - -3. Start iSulad for the settings to take effect. - - ``` - # systemctl start isulad - ``` - - -#### Parameter Description - -For details about parameters supported by storage-opts, see [Table 1](#en-us_topic_0222861454_table3191161993812). - -**Table 1** Parameter description -

-| Parameter | Mandatory or Not | Description |
-| --------- | ---------------- | ----------- |
-| dm.fs | Yes | Specifies the type of the file system used by a container. This parameter must be set to ext4, that is, dm.fs=ext4. |
-| dm.basesize | No | Specifies the maximum storage space of a single container. The unit can be k, m, g, t, or p. An uppercase letter can also be used, for example, dm.basesize=50G. This parameter is valid only during the first initialization. |
-| dm.mkfsarg | No | Specifies the additional mkfs parameters when a basic device is created. For example: dm.mkfsarg=-O ^has_journal |
-| dm.mountopt | No | Specifies additional mount parameters when a container is mounted. For example: dm.mountopt=nodiscard |
-| dm.thinpooldev | No | Specifies the thinpool device used for container or image storage. |
-| dm.min_free_space | No | Specifies the minimum percentage of reserved space. For example, dm.min_free_space=10% indicates that storage-related operations such as container creation will fail when the remaining storage space falls below 10%. |
- -#### Precautions - -- When configuring devicemapper, if the system does not have sufficient space for automatic capacity expansion of thinpool, disable the automatic capacity expansion function. - - To disable automatic capacity expansion, set both **thin\_pool\_autoextend\_threshold** and **thin\_pool\_autoextend\_percent** in the **/etc/lvm/profile/isula-thinpool.profile** file to **100**. - - ``` - activation { - thin_pool_autoextend_threshold=100 - thin_pool_autoextend_percent=100 - } - ``` - -- When devicemapper is used, use Ext4 as the container file system. You need to add **--storage-opt dm.fs=ext4** to the iSulad configuration parameters. -- If graphdriver is devicemapper and the metadata files are damaged and cannot be restored, you need to manually restore the metadata files. Do not directly operate or tamper with metadata of the devicemapper storage driver in Docker daemon. -- When the devicemapper LVM is used, if the devicemapper thinpool is damaged due to abnormal power-off, you cannot ensure the data integrity or whether the damaged thinpool can be restored. Therefore, you need to rebuild the thinpool. - -**Precautions for Switching the devicemapper Storage Pool When the User Namespace Feature Is Enabled on iSula** - -- Generally, the path of the deviceset-metadata file is **/var/lib/isulad/devicemapper/metadata/deviceset-metadata** during container startup. -- If user namespaces are used, the path of the deviceset-metadata file is **/var/lib/isulad/**_userNSUID.GID_**/devicemapper/metadata/deviceset-metadata**. -- When you use the devicemapper storage driver and the container is switched between the user namespace scenario and common scenario, the **BaseDeviceUUID** content in the corresponding deviceset-metadata file needs to be cleared. In the thinpool capacity expansion or rebuild scenario, you also need to clear the **BaseDeviceUUID** content in the deviceset-metadata file. Otherwise, the iSulad service fails to be restarted. 
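When switching between the user namespace and common scenarios, or when expanding or rebuilding the thinpool, the **BaseDeviceUUID** content described above has to be cleared before iSulad is restarted. The following is a minimal sketch of that edit, assuming the metadata file uses the Docker-style JSON layout this section refers to; the helper name and the sed expression are ours, so treat it as illustration only, and stop the iSulad service and keep the backup copy before touching metadata:

```shell
#!/bin/sh
# Hedged sketch: blank the BaseDeviceUUID value in a deviceset-metadata file.
# Assumes a Docker-style JSON metadata layout; helper name and sed pattern are ours.
clear_base_device_uuid() {
    meta="$1"
    cp "$meta" "$meta.bak"    # keep a backup before editing devicemapper metadata
    sed -i 's/"BaseDeviceUUID"[[:space:]]*:[[:space:]]*"[^"]*"/"BaseDeviceUUID":""/' "$meta"
}

# Common scenario (stop iSulad first); for user namespaces, the path is
# /var/lib/isulad/<userNSUID.GID>/devicemapper/metadata/deviceset-metadata:
# clear_base_device_uuid /var/lib/isulad/devicemapper/metadata/deviceset-metadata
```

After the field is cleared, restarting the iSulad service should succeed; if it does not, restore the `.bak` copy and rebuild the thinpool as described above.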
- diff --git a/docs/en/docs/Container/public_sys-resources/icon-caution.gif b/docs/en/docs/Container/public_sys-resources/icon-caution.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Container/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/docs/en/docs/Container/public_sys-resources/icon-danger.gif b/docs/en/docs/Container/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Container/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/Container/public_sys-resources/icon-tip.gif b/docs/en/docs/Container/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Container/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/Container/public_sys-resources/icon-warning.gif b/docs/en/docs/Container/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Container/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/Container/uninstallation.md b/docs/en/docs/Container/uninstallation.md deleted file mode 100644 index 192e53f5508cc82384d3d474111029f0d67cc802..0000000000000000000000000000000000000000 --- a/docs/en/docs/Container/uninstallation.md +++ /dev/null @@ -1,24 +0,0 @@ -# Uninstallation - -To uninstall iSulad, perform the following operations: - -1. Uninstall iSulad and its dependent software packages. 
- - If the **yum** command is used to install iSulad, run the following command to uninstall iSulad: - - ``` - $ sudo yum remove iSulad - ``` - - - If the **rpm** command is used to install iSulad, uninstall iSulad and its dependent software packages. Run the following command to uninstall an RPM package. - - ``` - sudo rpm -e iSulad-xx.xx.xx-YYYYmmdd.HHMMSS.gitxxxxxxxx.aarch64 - ``` - -2. Images, containers, volumes, and related configuration files are not automatically deleted. To delete them, run the following command: - - ``` - $ sudo rm -rf /var/lib/iSulad - ``` - - diff --git a/docs/en/docs/Container/upgrade-methods.md b/docs/en/docs/Container/upgrade-methods.md deleted file mode 100644 index 5294263ed82402538f59fb9cfe43f950e9b367e8..0000000000000000000000000000000000000000 --- a/docs/en/docs/Container/upgrade-methods.md +++ /dev/null @@ -1,21 +0,0 @@ -# Upgrade Methods - -- For an upgrade between patch versions of a major version, for example, upgrading 2.x.x to 2.x.x, run the following command: - - ``` - $ sudo yum update -y iSulad - ``` - -- For an upgrade between major versions, for example, upgrading 1.x.x to 2.x.x, save the current configuration file **/etc/isulad/daemon.json**, uninstall the existing iSulad software package, install the iSulad software package to be upgraded, and restore the configuration file. - ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->- You can run the **sudo rpm -qa |grep iSulad** or **isula version** command to check the iSulad version. 
->- If you want to manually perform upgrade between patch versions of a major version, run the following command to download the RPM packages of iSulad and all its dependent libraries: -> ``` -> $ sudo rpm -Uhv iSulad-xx.xx.xx-YYYYmmdd.HHMMSS.gitxxxxxxxx.aarch64.rpm -> ``` -> If the upgrade fails, run the following command to forcibly perform the upgrade: -> ``` -> $ sudo rpm -Uhv --force iSulad-xx.xx.xx-YYYYmmdd.HHMMSS.gitxxxxxxxx.aarch64.rpm -> ``` - diff --git a/docs/en/docs/DPU-OS/dpu-os-background-and-requirements.md b/docs/en/docs/DPU-OS/dpu-os-background-and-requirements.md deleted file mode 100644 index 7995cc738b125ced2a435f44d69586006f5dacbe..0000000000000000000000000000000000000000 --- a/docs/en/docs/DPU-OS/dpu-os-background-and-requirements.md +++ /dev/null @@ -1,67 +0,0 @@ -# Background and Requirements - -## Overview - -In data center and cloud scenarios, Moore's Law fails. The CPU computing power growth rate of general processing units slows down, while the network I/O speed and performance keep increasing. The processing capability of current general-purpose processors cannot keep up with the I/O processing requirements of networks and drives. In traditional data centers, more and more general-purpose CPU computing power is consumed by I/O and management plane. Such resource loss is called "datacenter tax". According to AWS statistics, datacenter taxes may account for more than 30% of the computing power of data centers, and even more in specific scenarios. - -The data processing unit (DPU) is developed to free the computing resources from the datacenter taxes. The management plane, network, storage, and security capabilities are offloaded to the DPU for processing acceleration, reducing costs and improving efficiency. Currently, mainstream cloud vendors, such as AWS, Alibaba Cloud, and Huawei Cloud, offload the management plane and related data planes to self-developed processors, achieving 100% sales of data center computing resources. 
- -Currently, DPU is undergoing rapid development. Cloud and big data scenarios have strong demands for DPUs. Many DPU startups have launched different DPU products. To meet such requirements, cloud and big data vendors need to consider how to integrate and use different DPU products. DPU vendors also need to adapt device drivers to user OSs. openEuler builds DPU-OS to solve the problem of DPU adaptation for DPU vendors and users. In addition, the OS on the DPU is used to accelerate some services. Therefore, the performance of DPU-OS is optimized and accelerated. DPU-related acceleration capabilities can be built based on openEuler and embedded in DPU-OS to build a DPU software ecosystem. - -## DPU-OS Requirement Analysis and Design - -### DPU Status Quo and OS Requirements - -DPUs typically have the following characteristics and problems: - -* Limited general processing capabilities and resources - - Currently, the DPU is still in the early stage of development, and the hardware is still evolving. In addition, the current hardware specifications are low due to the limitation of the DPU power supply. Mainstream DPUs often have 8 to 24 general-purpose processor cores with weak single-core capability and limited amount of memory, usually 16 to 32 GB. And the local storage space of a DPU ranges from dozens of GB to hundreds of GB. These restrictions need to be taken into consideration during the design of an OS that runs on DPUs. - -* Various DPU OS Installation Modes - - The variety of DPU vendors and products result in different installation and deployment modes of OSs, including PXE network installation, USB flash drive installation, and custom installation (using an image delivered by the host). - -* DPU Performance Requirements - - DPUs are required to have strong performance in their application scenarios. 
Compared with common server OSs, a DPU OS may have to support some specific features, such as vDPU for device passthrough and hot migration, vendor-specific driver support, seamless offload of DPU processes, custom optimized user mode data plane acceleration tools (for example, DPDK, SPDK, and OVS), and DPU management and monitoring tools. - -Based on the preceding DPU status, the requirements for DPU-OS are as follows: - -* Lightweight installation package - - The openEuler system image is tailored to reduce the space occupied by unnecessary software packages. System services are optimized to reduce the resource overhead. - -* Tailoring Configurations and Tool Support - - Tailoring configuration and tool support are provided. Users and DPU vendors can customize the tailoring based on their requirements. openEuler provides ISO reference implementation. - -* Customized kernel and system for ultimate performance - - The customized kernel and related drivers provide competitive DPU kernel features. The customized acceleration components enable DPU hardware acceleration. The optimized system configurations deliver better performance. DPU management and control tools facilitate unified management. - -### DPU-OS Design - -**Figure 1** DPU-OS overall design - -![dpuos-arch](./figures/dpuos-arch.png) - -As shown in Figure 1, DPU-OS contains five layers: - -* Kernel layer: The kernel configurations are customized to be lightweight, with unnecessary kernel features and modules removed. Specific kernel features are enabled to provide high-performance DPU kernel capabilities. - -* Driver layer: The native openEuler drivers are tailored to keep the minimum collection. Some DPU vendor drivers are integrated to natively support related DPU hardware. - -* System configuration layer: sysctl and proc are configured for optimal performance of DPU-related services. - -* Peripheral package layer: openEuler peripheral packages are tailored to keep the minimum collection. 
A set of DPU-related customization tools are provided. - -* System service layer: The native service startup items are optimized to reduce unnecessary system services and minimize the system running overhead. - -The five-layer design makes DPU-OS lightweight with ultimate performance. This solution is a long-term design and strongly depends on the software and hardware ecosystem of DPUs. In the first phase, this solution is tailored based on openEuler imageTailor. - -For details about how to tailor DPU-OS, see [DPU-OS Tailoring Guide](./dpu-os-tailoring-guide.md). For details about how to verify and deploy DPU-OS, see [DPU-OS Deployment Verification Guide](./verification-and-deployment.md). - -> ![](./public_sys-resources/icon-note.gif) **NOTE:** -> -> Currently, DPU-OS is tailored with imageTailor based on the existing openEuler kernel and peripheral packages to provide a lightweight OS installation image. In the future, related kernel and peripheral package features can be developed and integrated based on actual requirements. diff --git a/docs/en/docs/DPU-OS/dpu-os-tailoring-guide.md b/docs/en/docs/DPU-OS/dpu-os-tailoring-guide.md deleted file mode 100644 index c79768919ed12fbc5ec6cf4144b864e8e1fd8244..0000000000000000000000000000000000000000 --- a/docs/en/docs/DPU-OS/dpu-os-tailoring-guide.md +++ /dev/null @@ -1,65 +0,0 @@ -# DPU-OS Tailoring Guide - -This section describes how to use imageTailor to obtain the DPU-OS installation image based on the dpuos configuration file of the [dpu-utilities repository](https://gitee.com/openeuler/dpu-utilities/tree/master/dpuos). The procedure is as follows. - -## Preparing imageTailor and Required RPM Packages - -Install the imageTailor tool and prepare the RPM packages required for tailoring. For details, see the [imageTailor User Guide](../TailorCustom/imageTailor-user-guide.md). - -You can use the installation image provided by openEuler as the RPM package source for image tailoring. 
**openEuler-22.03-LTS-everything-debug-aarch64-dvd.iso** has a complete collection of RPM packages, but the image is large in size. Alternatively, you can use the RPM packages in **openEuler-22.03-LTS-aarch64-dvd.iso** and **install-scripts.noarch**. - -You can obtain **install-scripts.noarch** from the everything package or download it using `yum`. - -```bash -yum install -y --downloadonly --downloaddir=./ install-scripts -``` - -## Copying DPU-OS Configuration Files - -The imageTailor tool is installed in the **/opt/imageTailor** directory by default. Run the following commands to copy the DPU-OS configuration files to the corresponding directories, selecting the directories for your target architecture. Currently, the DPU-OS tailoring configuration library supports the x86_64 and AArch64 architectures. - -```bash -cp -rf custom/cfg_dpuos /opt/imageTailor/custom -cp -rf kiwi/minios/cfg_dpuos /opt/imageTailor/kiwi/minios/cfg_dpuos -``` - -## Modifying Other Configuration Files - -* Add the **dpuos** configuration to the **kiwi/eulerkiwi/product.conf** file. - -```text -dpuos PANGEA EMBEDDED DISK GRUB2 install_mode=install install_media=CD install_repo=CD selinux=0 -``` - -* Add the **dpuos** configuration to the **kiwi/eulerkiwi/minios.conf** file. - -```text -dpuos kiwi/minios/cfg_dpuos yes -``` - -* Add the **dpuos** configuration to the **repos/RepositoryRule.conf** file. - -```text -dpuos 1 rpm-dir euler_base -``` - -## Setting a Password - -Go to the subdirectories of **/opt/imageTailor** and change the passwords in the following files: - -* `custom/cfg_dpuos/usr_file/etc/default/grub` - -* `custom/cfg_dpuos/rpm.conf` - -* `kiwi/minios/cfg_dpuos/rpm.conf` - -For details about how to generate and modify passwords, see [Configuring Initial Passwords](../TailorCustom/imageTailor-user-guide.md#configuring-initial-passwords) in the _openEuler imageTailor User Guide_. - -## Running the Tailoring Command - -Run the following command to tailor the ISO file. 
The tailored ISO file is stored in the **/opt/imageTailor/result** directory. - -```bash -cd /opt/imageTailor -./mkdliso -p dpuos -c custom/cfg_dpuos --sec --minios force -``` diff --git a/docs/en/docs/DPU-OS/figures/dpuos-arch.png b/docs/en/docs/DPU-OS/figures/dpuos-arch.png deleted file mode 100644 index d6a73ecbf5954f2a4cf337ab16110f2c474f0319..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/DPU-OS/figures/dpuos-arch.png and /dev/null differ diff --git a/docs/en/docs/DPU-OS/overview.md b/docs/en/docs/DPU-OS/overview.md deleted file mode 100644 index d2b955531051f2e52b194f54ad63d1e753486765..0000000000000000000000000000000000000000 --- a/docs/en/docs/DPU-OS/overview.md +++ /dev/null @@ -1,11 +0,0 @@ -# Overview - -This document introduces the background requirements and overall design of DPU-OS, and describes how to create, deploy, and verify a DPU-OS image on openEuler. DPU-OS is built based on the openEuler ecosystem. It delivers ultimate performance and provides an implementation reference of DPU OSs for DPU scenarios and users. - -This article is intended for community developers, DPU vendors, and users who use the openEuler OS and want to learn and use DPUs. Users must: - -- Know basic Linux operations. - -- Be familiar with the basic knowledge and operations related to build and deployment on Linux. - -- Understand openEuler ImageTailor. diff --git a/docs/en/docs/DPU-OS/verification-and-deployment.md b/docs/en/docs/DPU-OS/verification-and-deployment.md deleted file mode 100644 index 2c1581d72014721571a2f965c1b0d953228fd931..0000000000000000000000000000000000000000 --- a/docs/en/docs/DPU-OS/verification-and-deployment.md +++ /dev/null @@ -1,38 +0,0 @@ -# Verification and Deployment - -After the DPU-OS image is created, you can install and deploy DPU-OS for verification. Currently, the DPU hardware is not mature enough. Therefore, you can use VirtualBox to start a VM for deployment and verification. 
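Before moving the image to a deployment host, it can help to record a checksum of the ISO produced by mkdliso and verify it again right before installation, so a damaged copy is caught early. A sketch under the assumption that the result path and file name match your build (both helper names and the ISO name are illustrative):

```shell
#!/bin/sh
# Hedged sketch: record the ISO checksum after the build, verify it before deployment.
# The ISO path below is illustrative; pass your actual file to the helpers.
record_checksum() { sha256sum "$1" > "$1.sha256"; }   # run once after mkdliso finishes
verify_checksum() { sha256sum -c "$1.sha256"; }        # run on the deployment host

# record_checksum /opt/imageTailor/result/dpuos.iso
# verify_checksum /opt/imageTailor/result/dpuos.iso
```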
- -## Deploying DPU-OS on VirtualBox - -This section describes how to install and deploy DPU-OS on VirtualBox. - -### Verification Preparation - -Before deploying DPU-OS, make the following preparations: - -- Obtain the DPU-OS ISO. -- Prepare a host machine with VirtualBox installed. - -### Initial Installation and Startup - -#### Creating a VM - -Create a VM on VirtualBox as follows: - -- Select the VM configuration. Two CPUs and at least 4 GB memory are recommended. - -- Create a VM disk. At least 60 GB is recommended. - -- In the system extended attributes, enable EFI. - -- In the storage settings, select the local DPU-OS ISO file as the CD-ROM file. - -- Other network and display settings can be customized. - -#### Starting the VM - -Start the created VM and select **Install from ISO** to install DPU-OS. After DPU-OS is automatically installed, the VM restarts. - -Select **Boot From Local Disk** to boot into DPU-OS. The password is the one specified when the DPU-OS image is created. - -After the preceding steps are performed, the local deployment of the DPU-OS is verified. diff --git a/docs/en/docs/Embedded/application-development-using-sdk.md b/docs/en/docs/Embedded/application-development-using-sdk.md deleted file mode 100644 index d05ff8b8634f5a515703a003d9abc9fcd57c0feb..0000000000000000000000000000000000000000 --- a/docs/en/docs/Embedded/application-development-using-sdk.md +++ /dev/null @@ -1,181 +0,0 @@ -# Application Development Using openEuler Embedded SDK - -In addition to the basic functions of openEuler Embedded, you can also develop applications, that is, running your own programs on openEuler Embedded. This chapter describes how to develop applications using openEuler Embedded SDK. 
- - -- [Application Development Using openEuler Embedded SDK](#application-development-using-openeuler-embedded-sdk) - - [Installing the SDK](#installing-the-sdk) - - [Using the SDK to Build a Hello World Example](#using-the-sdk-to-build-a-hello-world-example) - - [Using the SDK to Build a Kernel Module Example](#using-the-sdk-to-build-a-kernel-module-example) - - -### Installing the SDK - -1. **Install dependent software packages.** - - To use the SDK to develop the kernel module, you need to install some necessary software packages. Run the following command: - - ``` - Install on openEuler: - yum install make gcc gcc-c++ flex bison gmp-devel libmpc-devel openssl-devel - - Install on Ubuntu: - apt-get install make gcc g++ flex bison libgmp3-dev libmpc-dev libssl-dev - ``` - -2. **Run the self-extracting installation script of the SDK.** - - Run the following command: - - ``` - sh openeuler-glibc-x86_64-openeuler-image-aarch64-qemu-aarch64-toolchain-22.09.sh - ``` - - Enter the installation path of the toolchain as prompted. The default path is **/opt/openeuler/\<version\>/**. You can also set the path to a relative or absolute path. - - The following is an example: - - ``` - sh ./openeuler-glibc-x86_64-openeuler-image-armv7a-qemu-arm-toolchain-22.09.sh - openEuler embedded(openEuler Embedded Reference Distro) SDK installer version 22.09 - ================================================================ - Enter target directory for SDK (default: /opt/openeuler/22.09): sdk - You are about to install the SDK to "/usr1/openeuler/sdk". Proceed [Y/n]? y - Extracting SDK...............................................done - Setting it up...SDK has been successfully set up and is ready to be used. - Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g. - $ . /usr1/openeuler/sdk/environment-setup-armv7a-openeuler-linux-gnueabi - ``` - -3. **Set the environment variable of the SDK.** - - Run the `source` command. 
The `source` command is displayed in the output of the previous step. Run the command. - - ``` - . /usr1/openeuler/myfiles/sdk/environment-setup-armv7a-openeuler-linux-gnueabi - ``` - -4. **Check whether the installation is successful.** - - Run the following command to check whether the installation and environment configuration are successful: - - ``` - arm-openeuler-linux-gnueabi-gcc -v - ``` - -### Using the SDK to Build a Hello World Example - -1. **Prepare the code.** - - The following describes how to build a hello world program that runs in the image of the openEuler Embedded root file system. - - Create a **hello.c** file. The source code is as follows: - - ``` c - #include <stdio.h> - - int main(void) - { - printf("hello world\n"); - } - ``` - - Compose a **CMakeLists.txt** file as follows and place it in the same directory as the **hello.c** file. - - ``` - project(hello C) - - add_executable(hello hello.c) - ``` - -2. **Compile and generate a binary file.** - - Go to the directory where the **hello.c** file is stored, create a build directory, and run the following commands to compile the file using the toolchain: - - ``` - mkdir build && cd build - cmake .. - make - ``` - - Copy the compiled **hello** program to a sub-directory of **/tmp/** (for example, **/tmp/myfiles/**) on openEuler Embedded. For details about how to copy the file, see [Shared File System Enabled Scenario](./installation-and-running.md#shared-file-system-enabled-scenario). - -3. **Run the user-mode program.** - - Run the **hello** program on openEuler Embedded. - - ``` - cd /tmp/myfiles/ - ./hello - ``` - - If the running is successful, the message **hello world** is displayed. - -### Using the SDK to Build a Kernel Module Example - -1. **Prepare the code.** - - The following describes how to build a kernel module that runs in the kernel of openEuler Embedded. - - Create a **hello.c** file. 
The source code is as follows: - - ```c - #include <linux/init.h> - #include <linux/module.h> - - static int hello_init(void) - { - printk("Hello, openEuler Embedded!\r\n"); - return 0; - } - - static void hello_exit(void) - { - printk("Byebye!"); - } - - module_init(hello_init); - module_exit(hello_exit); - - MODULE_LICENSE("GPL"); - ``` - - Compose a Makefile as follows and place it in the same directory as the **hello.c** file. - - ``` - KERNELDIR := ${KERNEL_SRC_DIR} - CURRENT_PATH := $(shell pwd) - - target := hello - obj-m := $(target).o - - build := kernel_modules - - kernel_modules: - $(MAKE) -C $(KERNELDIR) M=$(CURRENT_PATH) modules - clean: - $(MAKE) -C $(KERNELDIR) M=$(CURRENT_PATH) clean - ``` - - ![](./public_sys-resources/icon-note.gif) **NOTE** - - - `KERNEL_SRC_DIR` indicates the directory of the kernel source tree. This variable is automatically configured after the SDK is installed. - - - The `$(MAKE) -C $(KERNELDIR) M=$(CURRENT_PATH) modules` and `$(MAKE) -C $(KERNELDIR) M=$(CURRENT_PATH) clean` recipe lines must be indented with a tab character instead of spaces. - -2. **Compile and generate a kernel module.** - - Go to the directory where the **hello.c** file is stored and run the following command to compile the file using the toolchain: - - make - - Copy the compiled **hello.ko** file to a directory on openEuler Embedded. For details about how to copy the file, see the [Shared File System Enabled Scenario](./installation-and-running.md#shared-file-system-enabled-scenario). - -3. **Insert the kernel module.** - - Insert the kernel module to openEuler Embedded: - - insmod hello.ko - - If the running is successful, the message **Hello, openEuler Embedded!** is output to the kernel logs. 
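Besides reading the kernel logs with `dmesg`, you can confirm that the module from the example above actually registered with the kernel by looking it up in /proc/modules. A small sketch (the `mod_loaded` helper and its optional second argument are ours, for illustration; the fallback file argument also makes the helper easy to exercise off-target):

```shell
#!/bin/sh
# Hedged helper (names ours): check whether a module is registered with the
# kernel by grepping for its name at the start of a line in /proc/modules.
# An alternative modules file can be passed as the second argument.
mod_loaded() {
    grep -q "^$1 " "${2:-/proc/modules}"
}

# Typical use on the target after "insmod hello.ko":
# mod_loaded hello && echo "hello module is loaded"
# dmesg | tail   # the printk output from hello_init should appear here
```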
\ No newline at end of file diff --git a/docs/en/docs/Embedded/container-build-guide.md b/docs/en/docs/Embedded/container-build-guide.md deleted file mode 100644 index 1414c94c4e9af1cddb684dfd0aac37d27b516754..0000000000000000000000000000000000000000 --- a/docs/en/docs/Embedded/container-build-guide.md +++ /dev/null @@ -1,197 +0,0 @@ -Container Build Guide -============================== - -The openEuler Embedded build process is based on the openEuler OS, and requires many system tools and build tools to be installed. To help developers quickly set up a build environment, the OS and tools on which the build process depends are encapsulated into a container. In this way, developers can avoid the time-consuming environment preparation process and focus on development. - - - - - [Environment Preparation](#environment-preparation) - - [Installing Docker](#installing-docker) - - [Obtaining the Container Image](#obtaining-the-container-image) - - [Preparing the Container Build Environment](#preparing-the-container-build-environment) - - [Version Build](#version-build) - - [Downloading Source Code](#downloading-source-code) - - [Compiling the Build](#compiling-the-build) - - [Build Result](#build-result) - - -## Environment Preparation - -Use Docker to create a container environment. The software and hardware requirements of Docker are as follows: - -- OS: openEuler 20.03/22.03, Ubuntu 20.04/22.04, Debian 11, and SUSE 12.05 are recommended. -- Kernel: Linux 3.8 or later is recommended. -- Driver: The kernel must include a proper storage driver, for example, Device Mapper, AUFS, vfs, btrfs, or ZFS. -- Architecture: 64-bit architecture (currently only x86-64 and AMD64). - -### Installing Docker - -------------- - -1. Check whether Docker has been installed in the current environment. - - Run the following command. If the Docker version is displayed, Docker has been installed in the current environment. You can use it directly. 
- - ``` {.sourceCode .console} - docker version - ``` - -2. If Docker is not installed, install it by referring to the [official document](http://docs.docker.com/engine/install/). - - Install Docker on openEuler by referring to the installation guide for CentOS. - - For example, run the following command to install Docker on openEuler: - - ``` - sudo yum install docker - ``` - -### Obtaining the Container Image - ---------------- - -Run the `docker pull` command to pull the image from Huawei Cloud to the host machine: - -``` -docker pull swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container -``` - -By default, the latest image is downloaded. You can also specify the image version to be downloaded based on the version to be compiled. The command is as follows: - -``` -docker pull [Container Image Name]:[Tag] -# example: docker pull swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container:latest -``` - -Container images - -| Container Image Name | Tag | For Image Branch | Kernel Version | Libc Version | -| ------------------------------------------------------------ | --------- | ------------------- | -------------- | ------------ | -| swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container | latest | master | 21.03 | 2.31 | -| swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container | 22.09 | openEuler-22.09 | 21.03 | 2.31 | -| swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container | 22.03-lts | openEuler-22.03-LTS | 22.03 LTS | 2.34 | -| swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container | 21.09 | openEuler-21.09 | 21.03 | 2.31 | - -> ![](./public_sys-resources/icon-note.gif) **NOTE** -> -> - To build openEuler images of different branches or versions, you need to use different containers. The **For Image Branch** column shows the mapping. -> - To be compatible with the host tool and native SDK of Yocto Poky, the built-in libc 2.31 container is used. 
Therefore, the C library version is earlier than 22.03. - -### Preparing the Container Build Environment - -------------------- - -#### 1. Start a container. - -Run the `docker run` command to start a container. To ensure that the container can run in the background and access the Internet after being started, you are advised to run the following command to start the container: - -``` -docker run -idt --network host swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container bash -``` - -Parameter description: - -- **-i**: keeps the standard input open. -- **-d**: starts a container in daemon mode in the background. -- **-t**: allocates a pseudo-tty and binds it to the standard input of the container. -- **\--network**: connects a container to the network of the host machine. -- **swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container**: specifies the name of the image (image_name:image_version). -- **bash**: method for accessing a container. - -#### 2. Check the ID of the started container. - -``` -docker ps -``` - -#### 3. Enter the container. - -``` -docker exec -it <container_id> bash -``` - -After the build environment is ready, you can build in the container. - -## Version Build - -### Downloading Source Code - -1. Obtain the source code download script. - - ``` - git clone https://gitee.com/openeuler/yocto-meta-openeuler.git -b <For Image Branch> -v /usr1/openeuler/src/yocto-meta-openeuler - #example: git clone https://gitee.com/openeuler/yocto-meta-openeuler.git -b master -v /usr1/openeuler/src/yocto-meta-openeuler - ``` - - > ![](./public_sys-resources/icon-note.gif) **NOTE** - > - > - For details about **<For Image Branch>**, see the content in the third column of the container image list. - > - > - The full code required for build is obtained from the yocto-meta-openeuler repository. 
Therefore, if you want to build the code of the corresponding version (such as openEuler 22.09 or openEuler 22.03 LTS), download the yocto-meta-openeuler repository of the corresponding branch. - > - Different containers are required for building openEuler images of different branches or versions. - -2. Download the source code using the script. - - ``` - cd /usr1/openeuler/src/yocto-meta-openeuler/scripts - sh download_code.sh /usr1/openeuler/src - ``` - - > ![](./public_sys-resources/icon-note.gif) **NOTE** - > - > 22.09 and later versions support **/usr1/openeuler/src/yocto-meta-openeuler/script/oe_helper.sh**. You can run the **source oe_helper.sh** command to download the code by referring to the **usage** description. - -### Compiling the Build - -- Compilation architecture: aarch64-std, aarch64-pro, arm-std, or raspberrypi4-64 -- Build directory: **/usr1/build** -- Source code directory: **/usr1/openeuler/src** -- Path of the compiler: **/usr1/openeuler/gcc/openeuler\_gcc\_arm64le** - ->![](./public_sys-resources/icon-note.gif) **NOTE** ->- Use different compilers for different compilation architectures. aarch64-std, aarch64-pro, and raspberrypi4-64 use the openeuler\_gcc\_arm64le compiler, and arm-std uses the openeuler\_gcc\_arm32le compiler. ->- The following uses the aarch64-std architecture as an example. - -1. Change the owner group of the **/usr1** directory to **openeuler**. Otherwise, permission issues may occur when switching to the **openeuler** user. - - ``` - chown -R openeuler:users /usr1 - ``` - -2. Switch to the **openeuler** user. - - ``` - su openeuler - ``` - -3. Go to the path where the build script is stored and run the script. - - ``` - # Go to the directory where the compilation initialization scripts are stored. - cd /usr1/openeuler/src/yocto-meta-openeuler/scripts - ``` - ``` - # For versions earlier than 22.03, skip this command. (You must run this command in 22.09 and later versions.) 
- # Initialize the container build dependency tool (poky nativesdk). - . /opt/buildtools/nativesdk/environment-setup-x86_64-pokysdk-linux - ``` - ``` - # Initialize the compilation environment using the compilation initialization script. - source compile.sh aarch64-std /usr1/build /usr1/openeuler/gcc/openeuler_gcc_arm64le - bitbake openeuler-image - ``` - - > ![](./public_sys-resources/icon-note.gif) **NOTE** - > - > 22.09 and later versions support **/usr1/openeuler/src/yocto-meta-openeuler/script/oe_helper.sh**. You can run the **source oe_helper.sh** command to download the code by referring to the **usage** description. - -### Build Result - -By default, the files are generated in the **output** directory of the build directory. For example, the built files of the aarch64-std example are generated in the **/usr1/build/output** directory, as shown in the following table: - -| Filename | Description | -| --------------------------------------------------------- | ----------------------------------- | -| Image-\* | openEuler Embedded image | -| openeuler-glibc-x86\_64-openeuler-image-\*-toolchain-\*.sh | openEuler Embedded SDK toolchain | -| openeuler-image-qemu-aarch64-\*.rootfs.cpio.gz | openEuler Embedded file system | -| zImage | openEuler Embedded compressed image | diff --git a/docs/en/docs/Embedded/embedded.md b/docs/en/docs/Embedded/embedded.md deleted file mode 100644 index 9077b34292e25b5504254863c07c64e0dcb0232f..0000000000000000000000000000000000000000 --- a/docs/en/docs/Embedded/embedded.md +++ /dev/null @@ -1,5 +0,0 @@ -# Embedded - -This document includes the following content: - -- openEuler Embedded User Guide: describes how to use, build, and develop relevant programs. 
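The container build flow described above (pull the image, start the container, fetch sources, then compile) can be sketched end to end as a dry-run script. This is a hedged illustration, not a tested recipe: `run` only echoes each command, and the branch/tag pairing, paths, and architecture are the assumptions from this guide.

```shell
#!/bin/sh
# Dry-run sketch of the container build flow above. Each step is echoed
# rather than executed; replace the body of run() with "$@" to execute.
run() { echo "+ $*"; }

IMAGE=swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container:latest
BRANCH=master   # must match the tag per the "For Image Branch" column

run docker pull "$IMAGE"
run docker run -idt --network host "$IMAGE" bash
# The remaining steps run inside the container:
run git clone https://gitee.com/openeuler/yocto-meta-openeuler.git -b "$BRANCH" \
    /usr1/openeuler/src/yocto-meta-openeuler
run sh /usr1/openeuler/src/yocto-meta-openeuler/scripts/download_code.sh /usr1/openeuler/src
run source compile.sh aarch64-std /usr1/build /usr1/openeuler/gcc/openeuler_gcc_arm64le
run bitbake openeuler-image
```

Because `run` is a no-op echo, the script is safe to execute anywhere to review the command sequence before running it for real inside the container.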
diff --git a/docs/en/docs/Embedded/installation-and-running.md b/docs/en/docs/Embedded/installation-and-running.md deleted file mode 100644 index 1ec121c66fcff0a053b0fa4f99f6054235afdfd1..0000000000000000000000000000000000000000 --- a/docs/en/docs/Embedded/installation-and-running.md +++ /dev/null @@ -1,186 +0,0 @@ -# Installation and Running - -This chapter describes how to obtain a pre-built image and how to run an image. - - - -- [Installation and Running](#installation-and-running) - - [Obtaining the Image](#obtaining-the-image) - - [Image Content](#image-content) - - [Running the Image](#running-the-image) - - [Simplified Running Scenario](#simplified-running-scenario) - - [Shared File System Enabled Scenario](#shared-file-system-enabled-scenario) - - [Network Enabled Scenario](#network-enabled-scenario) - - -## Obtaining the Image -The released pre-built images support only the ARM and AArch64 architectures, and are compatible only with the ARM virt-4.0 platform of QEMU. You can obtain the images through the following links: - -- [qemu_arm](https://repo.openeuler.org/openEuler-22.09/embedded_img/arm32/arm-std/) for ARM Cortex A15 processor of 32-bit ARM architecture. -- [qemu_aarch64](https://repo.openeuler.org/openEuler-22.09/embedded_img/arm64/aarch64-std/) for ARM Cortex A57 processor of 64-bit AArch64 architecture. - -You can deploy an openEuler Embedded image on a physical bare-metal server, cloud server, container, or VM as long as the environment supports QEMU 5.0 or later. - -## Image Content - -The downloaded image consists of the following parts: - -- Kernel image **zImage**, which is built based on Linux 5.10 of the openEuler community. 
You can obtain the kernel configurations through the following links: - - [ARM(Cortex a15)](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-22.09/config/arm/defconfig-kernel) - - [ARM(Cortex a57)](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-22.09/config/arm64/defconfig-kernel) for the AArch64 architecture. The image provides the image self-decompression function in addition. For details, see the corresponding [patch](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-22.09/patches/arm64/0001-arm64-add-zImage-support-for-arm64.patch). - -- Root file system image: - - **openeuler-image-qemu-xxx.cpio.gz**, which is the image of the standard root file system. It has received necessary security hardening and includes various software packages, such as audit, cracklib, OpenSSH, Linux PAM, shadow and software packages supported by iSula. - -- Software Development Kit (SDK) - - - **openeuler-glibc-x86_64-xxxxx.sh**: The self-extracting installation package of openEuler Embedded SDK. The SDK contains tools, libraries, and header files for developing user-mode applications and kernel modules. - -## Running the Image - -You can run the image to experience the functions of openEuler Embedded, and develop basic embedded Linux applications. - -![](./public_sys-resources/icon-note.gif) **Note:** - -- You are advised to use QEMU 5.0 or later to run the image. Some additional functions (the network and shared file system) depend on the virtio-net and virtio-fs features of QEMU. If these features are not enabled in QEMU, errors may occur during image running. In this case, you may need to recompile QEMU from the source code. - -- When running the image, you are advised to place the kernel image and root file system image in the same directory. - -Download and install QEMU by referring to the [QEMU official website](https://www.qemu.org/download/#linux), or download and build from [source](https://www.qemu.org/download/#source). 
Use the following command to verify the installation:
-
-```
-qemu-system-aarch64 --version
-```
-
-### Simplified Running Scenario
-
-In this scenario, the network and shared file system are not enabled in QEMU. You can use this scenario to experience the functions.
-
-1. **Start QEMU.**
-
-   For the ARM architecture (ARM Cortex A15), run the following command:
-
-   ```
-   qemu-system-arm -M virt-4.0 -m 1G -cpu cortex-a15 -nographic -kernel zImage -initrd openeuler-image-qemu-xxx.cpio.gz
-   ```
-
-   For the AArch64 architecture (ARM Cortex A57), run the following command:
-
-   ```
-   qemu-system-aarch64 -M virt-4.0 -m 1G -cpu cortex-a57 -nographic -kernel zImage -initrd openeuler-image-qemu-xxx.cpio.gz
-   ```
-
-   ![](./public_sys-resources/icon-note.gif) **Note:**
-
-   The standard root file system image is securely hardened and requires you to set a password for the **root** user during the first startup. The password must comply with the following requirements:
-
-   1. Must contain at least eight characters.
-
-   2. Must contain digits, letters, and special characters.
-
-      @#$%^&*+|\\=~`!?,.:;-_'"(){}[]/><
-
-   For example, **openEuler@2021**.
-
-2. **Check whether QEMU is started successfully.**
-
-   The shell of openEuler Embedded will be displayed after QEMU is successfully started and logged in.
-
-### Shared File System Enabled Scenario
-
-The shared file system allows the host machine of QEMU to share files with openEuler Embedded. In this way, programs that are cross-compiled on the host machine can run on openEuler Embedded after being copied to the shared directory.
-
-Assume that the **/tmp** directory of the host machine is used as the shared directory, and a **hello_openeuler.txt** file is created in the directory in advance. To enable the shared file system function, perform the following steps:
-
-1.
**Start QEMU.**
-
-   For the ARM architecture (ARM Cortex A15), run the following command:
-
-   ```
-   qemu-system-arm -M virt-4.0 -m 1G -cpu cortex-a15 -nographic -kernel zImage -initrd openeuler-image-qemu-xxx.cpio.gz -device virtio-9p-device,fsdev=fs1,mount_tag=host -fsdev local,security_model=passthrough,id=fs1,path=/tmp
-   ```
-
-   For the AArch64 architecture (ARM Cortex A57), run the following command:
-
-   ```
-   qemu-system-aarch64 -M virt-4.0 -m 1G -cpu cortex-a57 -nographic -kernel zImage -initrd openeuler-image-qemu-xxx.cpio.gz -device virtio-9p-device,fsdev=fs1,mount_tag=host -fsdev local,security_model=passthrough,id=fs1,path=/tmp
-   ```
-
-2. **Mount the file system.**
-
-   After you start and log in to openEuler Embedded, run the following commands to mount the shared file system:
-
-   ```
-   cd /tmp
-   mkdir host
-   mount -t 9p -o trans=virtio,version=9p2000.L host /tmp/host
-   ```
-
-   That is, mount the 9p file system to the **/tmp/host** directory of openEuler Embedded to implement sharing mapping.
-
-3. **Check whether the file system is shared successfully.**
-
-   In openEuler Embedded, run the following commands:
-
-   ```
-   cd /tmp/host
-   ls
-   ```
-
-   If **hello_openeuler.txt** is discovered, the file system is shared successfully.
-
-### Network Enabled Scenario
-
-The virtio-net of QEMU and the virtual NIC of the host machine allow for network communication between the host machine and openEuler Embedded. In addition to sharing files using virtio-fs, you can transfer files between the host machine and openEuler Embedded over the network, for example, using the **scp** command.
-
-1.
**Start QEMU.**
-
-   For the ARM architecture (ARM Cortex A15), run the following command:
-
-   ```
-   qemu-system-arm -M virt-4.0 -m 1G -cpu cortex-a15 -nographic -kernel zImage -initrd openeuler-image-qemu-xxx.cpio.gz -device virtio-net-device,netdev=tap0 -netdev tap,id=tap0,script=/etc/qemu-ifup
-   ```
-
-   For the AArch64 architecture (ARM Cortex A57), run the following command:
-
-   ```
-   qemu-system-aarch64 -M virt-4.0 -m 1G -cpu cortex-a57 -nographic -kernel zImage -initrd openeuler-image-qemu-xxx.cpio.gz -device virtio-net-device,netdev=tap0 -netdev tap,id=tap0,script=/etc/qemu-ifup
-   ```
-
-2. **Create a vNIC on the host machine.**
-
-   You can create a **qemu-ifup** script in the **/etc** directory and run the script to create a **tap0** vNIC on the host machine. The script details are as follows:
-
-   ```
-   #!/bin/bash
-   ifconfig $1 192.168.10.1 up
-   ```
-
-   **root** permissions are required for running the script:
-
-   ```
-   chmod a+x qemu-ifup
-   ```
-
-   Use the **qemu-ifup** script to create a **tap0** vNIC on the host machine. The IP address of the vNIC is **192.168.10.1**.
-
-3. **Configure the NIC of openEuler Embedded.**
-
-   Log in to openEuler Embedded and run the following command:
-
-   ```
-   ifconfig eth0 192.168.10.2
-   ```
-
-4. **Check whether the network connection is normal.**
-
-   In openEuler Embedded, run the following command:
-
-   ```
-   ping 192.168.10.1
-   ```
-
-   If the IP address can be pinged, the network connection between the host machine and openEuler Embedded is normal.
-
-   >![](./public_sys-resources/icon-note.gif) **Note:**
-   >
-   >If you need openEuler Embedded to access the Internet through the host machine, create a bridge on the host machine. For details, see the related documents.
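As a concrete sketch of the vNIC step, the following materializes the **qemu-ifup** script and marks it executable. Writing to the current directory instead of **/etc** is an assumption for illustration only; copy the file to **/etc/qemu-ifup** (as root) for actual use with the QEMU command line above.

```shell
# Sketch: write the qemu-ifup script described above to a local file and
# make it executable. QEMU invokes it with the tap interface name ($1).
cat > ./qemu-ifup <<'EOF'
#!/bin/bash
ifconfig $1 192.168.10.1 up
EOF
chmod a+x ./qemu-ifup
ls -l ./qemu-ifup
```

When QEMU starts with `script=/etc/qemu-ifup`, it runs this script with the newly created tap device name as the first argument, which brings the interface up with the host-side address 192.168.10.1.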
diff --git a/docs/en/docs/Embedded/openEuler-Embedded-22.09-release-notes.md b/docs/en/docs/Embedded/openEuler-Embedded-22.09-release-notes.md deleted file mode 100644 index b5148b4ed7321cd034ba25f4dd9bc3278b8f9fa7..0000000000000000000000000000000000000000 --- a/docs/en/docs/Embedded/openEuler-Embedded-22.09-release-notes.md +++ /dev/null @@ -1,29 +0,0 @@ -# openEuler Embedded 22.09 Release Notes - -openEuler Embedded 22.09 is the second innovation release of openEuler Embedded. This section describes the main features of this version. - -## Kernel - -- The kernel is upgraded to 5.10.0-106.18.0. - -- The kernel supports Preempt-RT patches. - -- The kernel supports Raspberry Pi 4B patches. - -## Software Packages - -- More than 140 software packages are supported. For details, see [Supported Software Packages](https://openeuler.gitee.io/yocto-meta-openeuler/features/software_package_description.html). - -## Feature Highlights - -- The multi-OS hybrid deployment capability is enhanced. The Raspberry Pi 4B hybrid deployment instance is added. The service-oriented hybrid deployment function is added. Zephyr can be accessed through the Linux shell CLI. For details, see [Multi-OS Hybrid Deployment Framework](https://openeuler.gitee.io/yocto-meta-openeuler/features/mcs.html). - -- The distributed soft bus capability is enhanced. The distributed soft bus-based openEuler and OpenHarmony device authentication and interconnection are supported. Southbound Wi-Fi transmission media are supported. For details, see [Distributed Soft Bus](https://openeuler.gitee.io/yocto-meta-openeuler/features/distributed_soft_bus.html). - -- Security hardening. For details, see [Security Hardening Description](https://openeuler.gitee.io/yocto-meta-openeuler/security_hardening/index.html). - -- Preempt-RT-based soft real-time. For details, see [Soft Real-Time System Introduction](https://openeuler.gitee.io/yocto-meta-openeuler/features/preempt_rt.html). 
- -## Build System - -- The NativeSDK is added for containerized build. For details, see [Container Build Guide](./container-build-guide.md). diff --git a/docs/en/docs/Embedded/public_sys-resources/hosttools.png b/docs/en/docs/Embedded/public_sys-resources/hosttools.png deleted file mode 100644 index 1b154b40fc76ead162cdd7c7d32303f581d9cfa8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Embedded/public_sys-resources/hosttools.png and /dev/null differ diff --git a/docs/en/docs/Embedded/public_sys-resources/icon-caution.gif b/docs/en/docs/Embedded/public_sys-resources/icon-caution.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Embedded/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/docs/en/docs/Embedded/public_sys-resources/icon-danger.gif b/docs/en/docs/Embedded/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Embedded/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/Embedded/public_sys-resources/icon-tip.gif b/docs/en/docs/Embedded/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Embedded/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/Embedded/public_sys-resources/icon-warning.gif b/docs/en/docs/Embedded/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Embedded/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/EulerLauncher/images/tray-icon.png b/docs/en/docs/EulerLauncher/images/tray-icon.png deleted file mode 100644 index 
be4dcb7853cd90e02c9cb37f4f6ee2c75da13469..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/EulerLauncher/images/tray-icon.png and /dev/null differ diff --git a/docs/en/docs/EulerLauncher/images/win-install.jpg b/docs/en/docs/EulerLauncher/images/win-install.jpg deleted file mode 100644 index 715655ba9b3ed1cb037d385668aea931ed5c7c29..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/EulerLauncher/images/win-install.jpg and /dev/null differ diff --git a/docs/en/docs/EulerLauncher/images/win-terminal-1.jpg b/docs/en/docs/EulerLauncher/images/win-terminal-1.jpg deleted file mode 100644 index 21dbb721c45068b146c84bbb46cd2845e987f7ce..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/EulerLauncher/images/win-terminal-1.jpg and /dev/null differ diff --git a/docs/en/docs/EulerLauncher/images/win-terminal-2.jpg b/docs/en/docs/EulerLauncher/images/win-terminal-2.jpg deleted file mode 100644 index 74893803d48e5c7de52325be3f96fdd8c0400b27..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/EulerLauncher/images/win-terminal-2.jpg and /dev/null differ diff --git a/docs/en/docs/HSAK/hsak_interface.md b/docs/en/docs/HSAK/hsak_interface.md deleted file mode 100644 index c1e45123bf7ca1bf14ec9bbdac6420f5182321ba..0000000000000000000000000000000000000000 --- a/docs/en/docs/HSAK/hsak_interface.md +++ /dev/null @@ -1,2551 +0,0 @@ -## C APIs - -### Macro Definition and Enumeration - -#### bdev_rw.h - -##### enum libstorage_ns_lba_size - -1. Prototype - -``` -enum libstorage_ns_lba_size -{ -LIBSTORAGE_NVME_NS_LBA_SIZE_512 = 0x9, -LIBSTORAGE_NVME_NS_LBA_SIZE_4K = 0xc -}; -``` - -2. Description - -Sector (data) size of a drive. - -##### enum libstorage_ns_md_size - -1. Prototype - -``` -enum libstorage_ns_md_size -{ -LIBSTORAGE_METADATA_SIZE_0 = 0, -LIBSTORAGE_METADATA_SIZE_8 = 8, -LIBSTORAGE_METADATA_SIZE_64 = 64 -}; -``` - -2. Description - -Metadata size of a drive. - -3. 
Remarks - -- ES3000 V3 (single-port) supports formatting of five sector types (512+0, 512+8, 4K+64, 4K, and 4K+8). - -- ES3000 V3 (dual-port) supports formatting of four sector types (512+0, 512+8, 4K+64, and 4K). - -- ES3000 V5 supports formatting of five sector types (512+0, 512+8, 4K+64, 4K, and 4K+8). - -- Optane drives support formatting of seven sector types (512+0, 512+8, 512+16,4K, 4K+8, 4K+64, and 4K+128). - - -##### enum libstorage_ns_pi_type - -1. Prototype - -``` -enum libstorage_ns_pi_type -{ -LIBSTORAGE_FMT_NVM_PROTECTION_DISABLE = 0x0, -LIBSTORAGE_FMT_NVM_PROTECTION_TYPE1 = 0x1, -LIBSTORAGE_FMT_NVM_PROTECTION_TYPE2 = 0x2, -LIBSTORAGE_FMT_NVM_PROTECTION_TYPE3 = 0x3, -}; -``` - -2. Description - -Protection type supported by drives. - -3. Remarks - -ES3000 supports only protection types 0 and 3. Optane drives support only protection types 0 and 1. - -##### enum libstorage_crc_and_prchk - -1. Prototype - -``` -enum libstorage_crc_and_prchk -{ -LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK = 0x0, -LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK = 0x1, -LIBSTORAGE_LIB_CRC_AND_DISABLE_PRCHK = 0x2, -LIBSTORAGE_LIB_CRC_AND_ENABLE_PRCHK = 0x3, -#define NVME_NO_REF 0x4 -LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK_NO_REF = LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK | NVME_NO_REF, -LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK_NO_REF = LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK | NVME_NO_REF, -}; -``` - -2. Description - -- **LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK**: Cyclic redundancy check (CRC) is performed for the application layer, but not for HSAK. CRC is disabled for drives. - -- **LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK**: CRC is performed for the application layer, but not for HSAK. CRC is enabled for drives. - -- **LIBSTORAGE_LIB_CRC_AND_DISABLE_PRCHK**: CRC is performed for HSAK, but not for the application layer. CRC is disabled for drives. - -- **LIBSTORAGE_LIB_CRC_AND_ENABLE_PRCHK**: CRC is performed for HSAK, but not for the application layer. CRC is enabled for drives. 
- -- **LIBSTORAGE_APP_CRC_AND_DISABLE_PRCHK_NO_REF**: CRC is performed for the application layer, but not for HSAK. CRC is disabled for drives. REF tag verification is disabled for drives whose PI TYPE is 1 (Intel Optane P4800). - -- **LIBSTORAGE_APP_CRC_AND_ENABLE_PRCHK_NO_REF**: CRC is performed for the application layer, but not for HSAK. CRC is enabled for drives. REF tag verification is disabled for drives whose PI TYPE is 1 (Intel Optane P4800). - -- If PI TYPE of an Intel Optane P4800 drive is 1, the CRC and REF tag of the metadata area are verified by default. - -- Intel Optane P4800 drives support DIF in 512+8 format but does not support DIF in 4096+64 format. - -- For ES3000 V3 and ES3000 V5, PI TYPE of the drives is 3. By default, only the CRC of the metadata area is performed. - -- ES3000 V3 supports DIF in 512+8 format but does not support DIF in 4096+64 format. ES3000 V5 supports DIF in both 512+8 and 4096+64 formats. - - -The summary is as follows: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-| E2E Verification Mode | Ctrl Flag | CRC Generator | Application Verification (Write) | CRC for HSAK (Write) | CRC for Drives (Write) | Application Verification (Read) | CRC for HSAK (Read) | CRC for Drives (Read) |
-| --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| Halfway protection | 0 | Controller | × | × | × | × | × | × |
-| Halfway protection | 1 | Controller | × | × | × | × | × | √ |
-| Halfway protection | 2 | Controller | × | × | × | × | × | × |
-| Halfway protection | 3 | Controller | × | × | × | × | × | √ |
-| Full protection | 0 | App | √ | × | × | √ | × | × |
-| Full protection | 1 | App | √ | × | √ | √ | × | √ |
-| Full protection | 2 | HSAK | × | √ | × | × | √ | × |
-| Full protection | 3 | HSAK | × | √ | √ | × | √ | √ |
- - - - - -##### enum libstorage_print_log_level - -1. Prototype - -``` -enum libstorage_print_log_level -{ -LIBSTORAGE_PRINT_LOG_ERROR, -LIBSTORAGE_PRINT_LOG_WARN, -LIBSTORAGE_PRINT_LOG_NOTICE, -LIBSTORAGE_PRINT_LOG_INFO, -LIBSTORAGE_PRINT_LOG_DEBUG, -}; -``` - -2. Description - -Storage Performance Development Kit (SPDK) log print levels: ERROR, WARN, NOTICE, INFO, and DEBUG, corresponding to 0 to 4 in the configuration file. - -##### MAX_BDEV_NAME_LEN - -1. Prototype - -``` -#define MAX_BDEV_NAME_LEN 24 -``` - -2. Description - -Maximum length of a block device name. - -##### MAX_CTRL_NAME_LEN - -1. Prototype - -``` -#define MAX_CTRL_NAME_LEN 16 -``` - -2. Description - -Maximum length of a controller. - -##### LBA_FORMAT_NUM - -1. Prototype - -``` -#define LBA_FORMAT_NUM 16 -``` - -2. Description - -Number of LBA formats supported by a controller. - -##### LIBSTORAGE_MAX_DSM_RANGE_DESC_COUNT - -1. Prototype - -``` -#define LIBSTORAGE_MAX_DSM_RANGE_DESC_COUNT 256 -``` - -2. Description - -Maximum number of 16-byte sets in the dataset management command. - -#### ublock.h - -##### UBLOCK_NVME_UEVENT_SUBSYSTEM_UIO - -1. Prototype - -``` -#define UBLOCK_NVME_UEVENT_SUBSYSTEM_UIO 1 -``` - -2. Description - -This macro is used to define that the subsystem corresponding to the uevent event is the userspace I/O subsystem (UIO) provided by the kernel. When the service receives the uevent event, this macro is used to determine whether the event is a UIO event that needs to be processed. - -The value of the int subsystem member in struct ublock_uevent is **UBLOCK_NVME_UEVENT_SUBSYSTEM_UIO**. Currently, only this value is available. - -##### UBLOCK_TRADDR_MAX_LEN - -1. Prototype - -``` -#define UBLOCK_TRADDR_MAX_LEN 256 -``` - -2. Description - -The *Domain:Bus:Device.Function* (**%04x:%02x:%02x.%x**) format indicates the maximum length of the PCI address character string. The actual length is far less than 256 bytes. - -##### UBLOCK_PCI_ADDR_MAX_LEN - -1. 
Prototype - -``` -#define UBLOCK_PCI_ADDR_MAX_LEN 256 -``` - -2. Description - -Maximum length of the PCI address character string. The actual length is far less than 256 bytes. The possible formats of the PCI address are as follows: - -- Full address: **%x:%x:%x.%x** or **%x.%x.%x.%x** - -- When the **Function** value is **0**: **%x:%x:%x** - -- When the **Domain** value is **0**: **%x:%x.%x** or **%x.%x.%x** - -- When the **Domain** and **Function** values are **0**: **%x:%x** or **%x.%x** - -##### UBLOCK_SMART_INFO_LEN - -1. Prototype - -``` -#define UBLOCK_SMART_INFO_LEN 512 -``` - -2. Description - -Size of the structure for the S.M.A.R.T. information of an NVMe drive, which is 512 bytes. - -##### enum ublock_rpc_server_status - -1. Prototype - -``` -enum ublock_rpc_server_status { -// start rpc server or not -UBLOCK_RPC_SERVER_DISABLE = 0, -UBLOCK_RPC_SERVER_ENABLE = 1, -}; -``` - -2. Description - -Status of the RPC service in HSAK. The status can be enabled or disabled. - -##### enum ublock_nvme_uevent_action - -1. Prototype - -``` -enum ublock_nvme_uevent_action { -UBLOCK_NVME_UEVENT_ADD = 0, -UBLOCK_NVME_UEVENT_REMOVE = 1, -UBLOCK_NVME_UEVENT_INVALID, -}; -``` - -2. Description - -Indicates whether the uevent hot swap event is to insert or remove a drive. - -##### enum ublock_subsystem_type - -1. Prototype - -``` -enum ublock_subsystem_type { -SUBSYSTEM_UIO = 0, -SUBSYSTEM_NVME = 1, -SUBSYSTEM_TOP -}; -``` - -2. Description - -Type of the callback function, which is used to determine whether the callback function is registered for the UIO driver or kernel NVMe driver. - -### Data Structure - -#### bdev_rw.h - -##### struct libstorage_namespace_info - -1. 
Prototype - -``` -struct libstorage_namespace_info -{ -char name[MAX_BDEV_NAME_LEN]; -uint64_t size; /** namespace size in bytes */ -uint64_t sectors; /** number of sectors */ -uint32_t sector_size; /** sector size in bytes */ -uint32_t md_size; /** metadata size in bytes */ -uint32_t max_io_xfer_size; /** maximum i/o size in bytes */ -uint16_t id; /** namespace id */ -uint8_t pi_type; /** end-to-end data protection information type */ -uint8_t is_active :1; /** namespace is active or not */ -uint8_t ext_lba :1; /** namespace support extending LBA size or not */ -uint8_t dsm :1; /** namespace supports Dataset Management or not */ -uint8_t pad :3; -uint64_t reserved; -}; -``` - -2. Description - -This data structure contains the namespace information of a drive. - -3. Struct members - -| Member | Description | -| ---------------------------- | ------------------------------------------------------------ | -| char name[MAX_BDEV_NAME_LEN] | Name of the namespace. | -| uint64_t size | Size of the drive space allocated to the namespace, in bytes. | -| uint64_t sectors | Number of sectors. | -| uint32_t sector_size | Size of each sector, in bytes. | -| uint32_t md_size | Metadata size, in bytes. | -| uint32_t max_io_xfer_size | Maximum size of data in a single I/O operation, in bytes. | -| uint16_t id | Namespace ID. | -| uint8_t pi_type | Data protection type. The value is obtained from enum libstorage_ns_pi_type. | -| uint8_t is_active :1 | Namespace active or not. | -| uint8_t ext_lba :1 | Whether the namespace supports logical block addressing (LBA) in extended mode. | -| uint8_t dsm :1 | Whether the namespace supports dataset management. | -| uint8_t pad :3 | Reserved parameter. | -| uint64_t reserved | Reserved parameter. | - - - - -##### struct libstorage_nvme_ctrlr_info - -1. 
Prototype - -``` -struct libstorage_nvme_ctrlr_info -{ -char name[MAX_CTRL_NAME_LEN]; -char address[24]; -struct -{ -uint32_t domain; -uint8_t bus; -uint8_t dev; -uint8_t func; -} pci_addr; -uint64_t totalcap; /* Total NVM Capacity in bytes */ -uint64_t unusecap; /* Unallocated NVM Capacity in bytes */ -int8_t sn[20]; /* Serial number */ -uint8_t fr[8]; /* Firmware revision */ -uint32_t max_num_ns; /* Number of namespaces */ -uint32_t version; -uint16_t num_io_queues; /* num of io queues */ -uint16_t io_queue_size; /* io queue size */ -uint16_t ctrlid; /* Controller id */ -uint16_t pad1; -struct -{ -struct -{ -/** metadata size */ -uint32_t ms : 16; -/** lba data size */ -uint32_t lbads : 8; -uint32_t reserved : 8; -} lbaf[LBA_FORMAT_NUM]; -uint8_t nlbaf; -uint8_t pad2[3]; -uint32_t cur_format : 4; -uint32_t cur_extended : 1; -uint32_t cur_pi : 3; -uint32_t cur_pil : 1; -uint32_t cur_can_share : 1; -uint32_t mc_extented : 1; -uint32_t mc_pointer : 1; -uint32_t pi_type1 : 1; -uint32_t pi_type2 : 1; -uint32_t pi_type3 : 1; -uint32_t md_start : 1; -uint32_t md_end : 1; -uint32_t ns_manage : 1; /* Supports the Namespace Management and Namespace Attachment commands */ -uint32_t directives : 1; /* Controller support Directives or not */ -uint32_t streams : 1; /* Controller support Streams Directives or not */ -uint32_t dsm : 1; /* Controller support Dataset Management or not */ -uint32_t reserved : 11; -} cap_info; -}; -``` - -1. Description - -This data structure contains the controller information of a drive. - -2. Struct members - - -| Member | Description | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| char name[MAX_CTRL_NAME_LEN] | Controller name. | -| char address[24] | PCI address, which is a character string. | -| struct
{
uint32_t domain;
uint8_t bus;
uint8_t dev;
uint8_t func;
} pci_addr | PCI address, in segments. | -| uint64_t totalcap | Total capacity of the controller, in bytes. Optane drives are based on the NVMe 1.0 protocol and do not support this parameter. | -| uint64_t unusecap | Free capacity of the controller, in bytes. Optane drives are based on the NVMe 1.0 protocol and do not support this parameter. | -| int8_t sn[20]; | Serial number of a drive, which is an ASCII character string without **0**. | -| uint8_t fr[8]; | Drive firmware version, which is an ASCII character string without **0**. | -| uint32_t max_num_ns | Maximum number of namespaces. | -| uint32_t version | NVMe protocol version supported by the controller. | -| uint16_t num_io_queues | Number of I/O queues supported by a drive. | -| uint16_t io_queue_size | Maximum length of an I/O queue. | -| uint16_t ctrlid | Controller ID. | -| uint16_t pad1 | Reserved parameter. | - -Members of the struct cap_info substructure: - -| Member | Description | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| struct
{
uint32_t ms : 16;
uint32_t lbads : 8;
uint32_t reserved : 8;
}lbaf[LBA_FORMAT_NUM] | **ms**: metadata size. The minimum value is 8 bytes.
**lbads**: The LBA size is 2^lbads, and the value of **lbads** is greater than or equal to 9. | -| uint8_t nlbaf | Number of LBA formats supported by the controller. | -| uint8_t pad2[3] | Reserved parameter. | -| uint32_t cur_format : 4 | Current LBA format of the controller. | -| uint32_t cur_extended : 1 | Whether the controller supports LBA in extended mode. | -| uint32_t cur_pi : 3 | Current protection type of the controller. | -| uint32_t cur_pil : 1 | The current protection information (PI) of the controller is located in the first or last eight bytes of the metadata. | -| uint32_t cur_can_share : 1 | Whether the namespace supports multi-path transmission. | -| uint32_t mc_extented : 1 | Whether metadata is transmitted as part of the data buffer. | -| uint32_t mc_pointer : 1 | Whether metadata is separated from the data buffer. | -| uint32_t pi_type1 : 1 | Whether the controller supports protection type 1. | -| uint32_t pi_type2 : 1 | Whether the controller supports protection type 2. | -| uint32_t pi_type3 : 1 | Whether the controller supports protection type 3. | -| uint32_t md_start : 1 | Whether the controller supports protection information in the first eight bytes of metadata. | -| uint32_t md_end : 1 | Whether the controller supports protection information in the last eight bytes of metadata. | -| uint32_t ns_manage : 1 | Whether the controller supports namespace management. | -| uint32_t directives : 1 | Whether the Directives command set is supported. | -| uint32_t streams : 1 | Whether Streams Directives is supported. | -| uint32_t dsm : 1 | Whether Dataset Management commands are supported. | -| uint32_t reserved : 11 | Reserved parameter. | - -##### struct libstorage_dsm_range_desc - -1. Prototype - -``` -struct libstorage_dsm_range_desc -{ -/* RESERVED */ -uint32_t reserved; - -/* NUMBER OF LOGICAL BLOCKS */ -uint32_t block_count; - -/* UNMAP LOGICAL BLOCK ADDRESS */uint64_t lba;}; -``` - -2. 
Description - -Definition of a single 16-byte range descriptor in the Dataset Management command set. - -3. Struct members - -| Member | Description | -| -------------------- | ------------------------ | -| uint32_t reserved | Reserved parameter. | -| uint32_t block_count | Number of LBAs per range. | -| uint64_t lba | Start LBA. | - -##### struct libstorage_ctrl_streams_param - -1. Prototype - -``` -struct libstorage_ctrl_streams_param -{ -/* MAX Streams Limit */ -uint16_t msl; - -/* NVM Subsystem Streams Available */ -uint16_t nssa; - -/* NVM Subsystem Streams Open */ -uint16_t nsso; - -uint16_t pad; -}; -``` - -2. Description - -Streams attribute values supported by the NVMe drive. - -3. Struct members - -| Member | Description | -| ------------- | ------------------------------------------------------------ | -| uint16_t msl | Maximum number of Streams resources supported by a drive. | -| uint16_t nssa | Number of Streams resources that can be used by each NVM subsystem. | -| uint16_t nsso | Number of Streams resources used by each NVM subsystem. | -| uint16_t pad | Reserved parameter. | - -##### struct libstorage_bdev_streams_param - -1. Prototype - -``` -struct libstorage_bdev_streams_param -{ -/* Stream Write Size */ -uint32_t sws; - -/* Stream Granularity Size */ -uint16_t sgs; - -/* Namespace Streams Allocated */ -uint16_t nsa; - -/* Namespace Streams Open */ -uint16_t nso; - -uint16_t reserved[3]; -}; -``` - -2. Description - -Streams attribute values of the namespace. - -3. Struct members - -| Member | Description | -| -------------------- | ------------------------------------------------------------ | -| uint32_t sws | Write granularity with the optimal performance, in sectors. | -| uint16_t sgs | Write granularity allocated to Streams, in units of sws. | -| uint16_t nsa | Number of private Streams resources that can be used by a namespace. | -| uint16_t nso | Number of private Streams resources used by a namespace. | -| uint16_t reserved[3] | Reserved parameter.
| - -##### struct libstorage_mgr_info - -1. Prototype - -``` -struct libstorage_mgr_info -{ -char pci[24]; -char ctrlName[MAX_CTRL_NAME_LEN]; -uint64_t sector_size; -uint64_t cap_size; -uint16_t device_id; -uint16_t subsystem_device_id; -uint16_t vendor_id; -uint16_t subsystem_vendor_id; -uint16_t controller_id; -int8_t serial_number[20]; -int8_t model_number[40]; -uint8_t firmware_revision[8]; -}; -``` - -2. Description - -Drive management information (consistent with the drive information used by the management plane). - -3. Struct members - -| Member | Description | -| -------------------------------- | ---------------------------------------------- | -| char pci[24] | Character string of the drive PCI address. | -| char ctrlName[MAX_CTRL_NAME_LEN] | Character string of the drive controller name. | -| uint64_t sector_size | Drive sector size. | -| uint64_t cap_size | Drive capacity, in bytes. | -| uint16_t device_id | Drive device ID. | -| uint16_t subsystem_device_id | Drive subsystem device ID. | -| uint16_t vendor_id | Drive vendor ID. | -| uint16_t subsystem_vendor_id | Drive subsystem vendor ID. | -| uint16_t controller_id | Drive controller ID. | -| int8_t serial_number[20] | Drive serial number. | -| int8_t model_number[40] | Device model. | -| uint8_t firmware_revision[8] | Firmware version. | - -##### struct \_\_attribute\_\_((packed)) libstorage_smart_info - -1.
Prototype - -``` -/* same with struct spdk_nvme_health_information_page in nvme_spec.h */ -struct __attribute__((packed)) libstorage_smart_info { -/* details of uint8_t critical_warning - -union spdk_nvme_critical_warning_state { - -uint8_t raw; - -struct { - -uint8_t available_spare : 1; - -uint8_t temperature : 1; - -uint8_t device_reliability : 1; - -uint8_t read_only : 1; - -uint8_t volatile_memory_backup : 1; - -uint8_t reserved : 3; - -} bits; - -}; -*/ -uint8_t critical_warning; -uint16_t temperature; -uint8_t available_spare; -uint8_t available_spare_threshold; -uint8_t percentage_used; -uint8_t reserved[26]; - -/* - -Note that the following are 128-bit values, but are - -defined as an array of 2 64-bit values. -*/ -/* Data Units Read is always in 512-byte units. */ -uint64_t data_units_read[2]; -/* Data Units Written is always in 512-byte units. */ -uint64_t data_units_written[2]; -/* For NVM command set, this includes Compare commands. */ -uint64_t host_read_commands[2]; -uint64_t host_write_commands[2]; -/* Controller Busy Time is reported in minutes. */ -uint64_t controller_busy_time[2]; -uint64_t power_cycles[2]; -uint64_t power_on_hours[2]; -uint64_t unsafe_shutdowns[2]; -uint64_t media_errors[2]; -uint64_t num_error_info_log_entries[2]; -/* Controller temperature related. */ -uint32_t warning_temp_time; -uint32_t critical_temp_time; -uint16_t temp_sensor[8]; -uint8_t reserved2[296]; -}; -``` - -2. Description - -This data structure defines the S.M.A.R.T. information of a drive. - -3. Struct members - -| Member | **Description (For details, see the NVMe protocol.)** | -| -------------------------------------- | ------------------------------------------------------------ | -| uint8_t critical_warning | Critical alarm of the controller status. Each bit that is set to 1 indicates an active alarm, and multiple bits can be set at the same time. Critical alarms are returned to the host through asynchronous events.

Bit 0: When this bit is set to 1, the redundant space is less than the specified threshold.
Bit 1: When this bit is set to 1, the temperature is above an over-temperature threshold or below an under-temperature threshold.

Bit 2: When this bit is set to 1, component reliability is reduced due to major media errors or internal errors.
Bit 3: When this bit is set to 1, the medium has been set to the read-only mode.
Bit 4: When this bit is set to 1, the volatile memory backup of the controller has failed. This bit is valid only when the controller has a volatile memory backup.

Bits 5-7: reserved. | -| uint16_t temperature | Temperature of a component. The unit is Kelvin. | -| uint8_t available_spare | Percentage of the available redundant space (0 to 100%). | -| uint8_t available_spare_threshold | Threshold of the available redundant space. An asynchronous event is reported when the available redundant space is lower than the threshold. | -| uint8_t percentage_used | Percentage of the actual service life of a component relative to the service life expected by the manufacturer. The value **100** indicates that the actual service life has reached the expected service life, but the component can still be used. The value can be greater than 100, but any value greater than 254 will be set to 255. | -| uint8_t reserved[26] | Reserved. | -| uint64_t data_units_read[2] | Number of 512-byte units read by the host from the controller, in thousands (the value **1** indicates that 1000 x 512 bytes are read), excluding metadata. If the LBA size is not 512 bytes, the controller converts it into 512-byte units for calculation. The value is expressed in hexadecimal notation. | -| uint64_t data_units_written[2] | Number of 512-byte units written by the host to the controller, in thousands (the value **1** indicates that 1000 x 512 bytes are written), excluding metadata. If the LBA size is not 512 bytes, the controller converts it into 512-byte units for calculation. The value is expressed in hexadecimal notation. | -| uint64_t host_read_commands[2] | Number of read commands delivered to the controller. | -| uint64_t host_write_commands[2] | Number of write commands delivered to the controller. | -| uint64_t controller_busy_time[2] | Busy time for the controller to process I/O commands, measured from the time a command is delivered to the time its result is returned to the CQ. The time is expressed in minutes. | -| uint64_t power_cycles[2] | Number of machine on/off cycles. | -| uint64_t power_on_hours[2] | Power-on duration, in hours. 
| -| uint64_t unsafe_shutdowns[2] | Number of abnormal power-off times. The value is incremented by 1 when CC.SHN is not received during power-off. | -| uint64_t media_errors[2] | Number of times that the controller detects unrecoverable data integrity errors, including uncorrectable ECC errors, CRC errors, and LBA tag mismatch. | -| uint64_t num_error_info_log_entries[2] | Number of entries in the error information log within the controller lifecycle. | -| uint32_t warning_temp_time | Accumulated time when the temperature exceeds the warning alarm threshold, in minutes. | -| uint32_t critical_temp_time | Accumulated time when the temperature exceeds the critical alarm threshold, in minutes. | -| uint16_t temp_sensor[8] | Temperature of temperature sensors 1–8. The unit is Kelvin. | -| uint8_t reserved2[296] | Reserved. | - -##### struct libstorage_dpdk_contig_mem - -1. Prototype - -``` -struct libstorage_dpdk_contig_mem { -uint64_t virtAddr; -uint64_t memLen; -uint64_t allocLen; -}; -``` - -2. Description - -Describes a contiguous virtual memory segment in the parameters of the callback function that notifies the service layer of initialization completion after the DPDK memory is initialized. - -Currently, about 800 MB of memory is reserved for HSAK. The remaining memory is returned to the service layer through **allocLen** in this struct for the service layer to allocate and manage by itself. - -The total memory to be reserved for HSAK is about 800 MB. The memory reserved on each memory segment is calculated based on the number of NUMA nodes in the environment. When there are too many NUMA nodes, the memory reserved on each memory segment is too small and HSAK initialization fails. Therefore, HSAK supports only environments with a maximum of four NUMA nodes. - -3. Struct members - -| Member | Description | -| ----------------- | -------------------------------------------------------- | -| uint64_t virtAddr | Start address of the virtual memory. 
| -| uint64_t memLen | Length of the virtual memory, in bytes. | -| uint64_t allocLen | Available memory length in the memory segment, in bytes. | - -##### struct libstorage_dpdk_init_notify_arg - -1. Prototype - -``` -struct libstorage_dpdk_init_notify_arg { -uint64_t baseAddr; -uint16_t memsegCount; -struct libstorage_dpdk_contig_mem *memseg; -}; -``` - -2. Description - -Callback function parameter used to notify the service layer of initialization completion after DPDK memory initialization, indicating information about all virtual memory segments. - -3. Struct members - -| Member | Description | -| ----------------------------------------- | ------------------------------------------------------------ | -| uint64_t baseAddr | Start address of the virtual memory. | -| uint16_t memsegCount | Number of valid **memseg** array members, that is, the number of contiguous virtual memory segments. | -| struct libstorage_dpdk_contig_mem *memseg | Pointer to the memory segment array. Each array element is a contiguous virtual memory segment, and adjacent elements are not contiguous with each other. | - -##### struct libstorage_dpdk_init_notify - -1. Prototype - -``` -struct libstorage_dpdk_init_notify { -const char *name; -void (*notifyFunc)(const struct libstorage_dpdk_init_notify_arg *arg); -TAILQ_ENTRY(libstorage_dpdk_init_notify) tailq; -}; -``` - -2. Description - -Struct used by the service layer to register a callback function that is invoked after the DPDK memory is initialized. - -3. Struct members - -| Member | Description | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| const char *name | Name of the service-layer module that registered the callback function. | -| void (*notifyFunc)(const struct libstorage_dpdk_init_notify_arg *arg) | Callback function that notifies the service layer of initialization completion after the DPDK memory is initialized. 
| -| TAILQ_ENTRY(libstorage_dpdk_init_notify) tailq | Linked list that stores registered callback functions. | - -#### ublock.h - -##### struct ublock_bdev_info - -1. Prototype - -``` -struct ublock_bdev_info { -uint64_t sector_size; -uint64_t cap_size; // cap_size -uint16_t device_id; -uint16_t subsystem_device_id; // subsystem device id of nvme control -uint16_t vendor_id; -uint16_t subsystem_vendor_id; -uint16_t controller_id; -int8_t serial_number[20]; -int8_t model_number[40]; -int8_t firmware_revision[8]; -}; -``` - -2. Description - -This data structure contains the device information of a drive. - -3. Struct members - -| Member | Description | -| ---------------------------- | ----------------------------------------------- | -| uint64_t sector_size | Sector size of a drive, for example, 512 bytes. | -| uint64_t cap_size | Total drive capacity, in bytes. | -| uint16_t device_id | Device ID. | -| uint16_t subsystem_device_id | Device ID of a subsystem. | -| uint16_t vendor_id | Main ID of the device vendor. | -| uint16_t subsystem_vendor_id | Sub-ID of the device vendor. | -| uint16_t controller_id | ID of the device controller. | -| int8_t serial_number[20] | Device serial number. | -| int8_t model_number[40] | Device model. | -| int8_t firmware_revision[8] | Firmware version. | - -##### struct ublock_bdev - -1. Prototype - -``` -struct ublock_bdev { -char pci[UBLOCK_PCI_ADDR_MAX_LEN]; -struct ublock_bdev_info info; -struct spdk_nvme_ctrlr *ctrlr; -TAILQ_ENTRY(ublock_bdev) link; -}; -``` - -2. Description - -The data structure contains the drive information of the specified PCI address, and the structure itself is a node of the queue. - -3. Struct members - -| Member | Description | -| --------------------------------- | ------------------------------------------------------------ | -| char pci[UBLOCK_PCI_ADDR_MAX_LEN] | PCI address. | -| struct ublock_bdev_info info | Drive information. 
| -| struct spdk_nvme_ctrlr *ctrlr | Data structure of the device controller. The members in this structure are not open to external systems. External services can obtain the corresponding member data through the SPDK open source interface. | -| TAILQ_ENTRY(ublock_bdev) link | Structure of the pointers before and after a queue. | - -##### struct ublock_bdev_mgr - -1. Prototype - -``` -struct ublock_bdev_mgr { -TAILQ_HEAD(, ublock_bdev) bdevs; -}; -``` - -2. Description - -This data structure defines the header structure of a ublock_bdev queue. - -3. Struct members - -| Member | Description | -| -------------------------------- | ----------------------- | -| TAILQ_HEAD(, ublock_bdev) bdevs | Queue header structure. | - -##### struct \_\_attribute\_\_((packed)) ublock_SMART_info - -1. Prototype - -``` -struct __attribute__((packed)) ublock_SMART_info { -uint8_t critical_warning; -uint16_t temperature; -uint8_t available_spare; -uint8_t available_spare_threshold; -uint8_t percentage_used; -uint8_t reserved[26]; -/* - -Note that the following are 128-bit values, but are - -defined as an array of 2 64-bit values. -*/ -/* Data Units Read is always in 512-byte units. */ -uint64_t data_units_read[2]; -/* Data Units Written is always in 512-byte units. */ -uint64_t data_units_written[2]; -/* For NVM command set, this includes Compare commands. */ -uint64_t host_read_commands[2]; -uint64_t host_write_commands[2]; -/* Controller Busy Time is reported in minutes. */ -uint64_t controller_busy_time[2]; -uint64_t power_cycles[2]; -uint64_t power_on_hours[2]; -uint64_t unsafe_shutdowns[2]; -uint64_t media_errors[2]; -uint64_t num_error_info_log_entries[2]; -/* Controller temperature related. */ -uint32_t warning_temp_time; -uint32_t critical_temp_time; -uint16_t temp_sensor[8]; -uint8_t reserved2[296]; -}; -``` - -2. Description - -This data structure defines the S.M.A.R.T. information of a drive. - -3. Struct members - -| Member | **Description (For details, see the NVMe protocol.)** 
| -| -------------------------------------- | ------------------------------------------------------------ | -| uint8_t critical_warning | Critical alarm of the controller status. If a bit is set to 1, the bit is valid. You can set multiple bits to be valid. Critical alarms are returned to the host through asynchronous events.
Bit 0: When this bit is set to 1, the redundant space is less than the specified threshold.
Bit 1: When this bit is set to 1, the temperature is above an over-temperature threshold or below an under-temperature threshold.

Bit 2: When this bit is set to 1, component reliability is reduced due to major media errors or internal errors.
Bit 3: When this bit is set to 1, the medium has been set to the read-only mode.
Bit 4: When this bit is set to 1, the volatile memory backup of the controller has failed. This bit is valid only when the controller has a volatile memory backup.

Bits 5-7: reserved. | -| uint16_t temperature | Temperature of a component. The unit is Kelvin. | -| uint8_t available_spare | Percentage of the available redundant space (0 to 100%). | -| uint8_t available_spare_threshold | Threshold of the available redundant space. An asynchronous event is reported when the available redundant space is lower than the threshold. | -| uint8_t percentage_used | Percentage of the actual service life of a component relative to the service life expected by the manufacturer. The value **100** indicates that the actual service life has reached the expected service life, but the component can still be used. The value can be greater than 100, but any value greater than 254 will be set to 255. | -| uint8_t reserved[26] | Reserved. | -| uint64_t data_units_read[2] | Number of 512-byte units read by the host from the controller, in thousands (the value **1** indicates that 1000 x 512 bytes are read), excluding metadata. If the LBA size is not 512 bytes, the controller converts it into 512-byte units for calculation. The value is expressed in hexadecimal notation. | -| uint64_t data_units_written[2] | Number of 512-byte units written by the host to the controller, in thousands (the value **1** indicates that 1000 x 512 bytes are written), excluding metadata. If the LBA size is not 512 bytes, the controller converts it into 512-byte units for calculation. The value is expressed in hexadecimal notation. | -| uint64_t host_read_commands[2] | Number of read commands delivered to the controller. | -| uint64_t host_write_commands[2] | Number of write commands delivered to the controller. | -| uint64_t controller_busy_time[2] | Busy time for the controller to process I/O commands, measured from the time a command is delivered to the time its result is returned to the CQ. The time is expressed in minutes. | -| uint64_t power_cycles[2] | Number of machine on/off cycles. | -| uint64_t power_on_hours[2] | Power-on duration, in hours. 
| -| uint64_t unsafe_shutdowns[2] | Number of abnormal power-off times. The value is incremented by 1 when CC.SHN is not received during power-off. | -| uint64_t media_errors[2] | Number of unrecoverable data integrity errors detected by the controller, including uncorrectable ECC errors, CRC errors, and LBA tag mismatch. | -| uint64_t num_error_info_log_entries[2] | Number of entries in the error information log within the controller lifecycle. | -| uint32_t warning_temp_time | Accumulated time when the temperature exceeds the warning alarm threshold, in minutes. | -| uint32_t critical_temp_time | Accumulated time when the temperature exceeds the critical alarm threshold, in minutes. | -| uint16_t temp_sensor[8] | Temperature of temperature sensors 1–8. The unit is Kelvin. | -| uint8_t reserved2[296] | Reserved. | - -##### struct ublock_nvme_error_info - -1. Prototype - -``` -struct ublock_nvme_error_info { -uint64_t error_count; -uint16_t sqid; -uint16_t cid; -uint16_t status; -uint16_t error_location; -uint64_t lba; -uint32_t nsid; -uint8_t vendor_specific; -uint8_t reserved[35]; -}; -``` - -2. Description - -This data structure contains the content of a single error message in the device controller. The number of errors supported by different controllers may vary. - -3. Struct members - -| Member | Description (For details, see the NVMe protocol.) | -| ----------------------- | ------------------------------------------------------------ | -| uint64_t error_count | Error sequence number, which increases in ascending order. | -| uint16_t sqid | Submission queue identifier for the command associated with an error message. If an error cannot be associated with a specific command, this parameter should be set to **FFFFh**. | -| uint16_t cid | Command identifier associated with an error message. If an error cannot be associated with a specific command, this parameter should be set to **FFFFh**. | -| uint16_t status | Status of a completed command. 
| -| uint16_t error_location | Command parameter associated with an error message. | -| uint64_t lba | First LBA when an error occurs. | -| uint32_t nsid | Namespace where an error occurs. | -| uint8_t vendor_specific | Log page identifier associated with the page if other vendor-specific error messages are available. The value **00h** indicates that no additional information is available. The valid value ranges from 80h to FFh. | -| uint8_t reserved[35] | Reserved. | - -##### struct ublock_uevent - -1. Prototype - -``` -struct ublock_uevent { -enum ublock_nvme_uevent_action action; -int subsystem; -char traddr[UBLOCK_TRADDR_MAX_LEN + 1]; -}; -``` - -2. Description - -This data structure contains parameters related to the uevent event. - -3. Struct members - -| Member | Description | -| -------------------------------------- | ------------------------------------------------------------ | -| enum ublock_nvme_uevent_action action | Enumerated value indicating whether the uevent is a drive insertion or removal. | -| int subsystem | Subsystem type of the uevent event. Currently, only **UBLOCK_NVME_UEVENT_SUBSYSTEM_UIO** is supported. If the application receives other values, no processing is required. | -| char traddr[UBLOCK_TRADDR_MAX_LEN + 1] | PCI address character string in the *Domain:Bus:Device.Function* (**%04x:%02x:%02x.%x**) format. | - -##### struct ublock_hook - -1. Prototype - -``` -struct ublock_hook -{ -ublock_callback_func ublock_callback; -void *user_data; -}; -``` - -2. Description - -This data structure is used to register callback functions. - -3. Struct members - -| Member | Description | -| ------------------------------------ | ------------------------------------------------------------ | -| ublock_callback_func ublock_callback | Function executed during callback. The type is bool func(void *info, void *user_data). | -| void *user_data | User parameter transferred to the callback function. | - -##### struct ublock_ctrl_iostat_info - -1.
Prototype - -``` -struct ublock_ctrl_iostat_info -{ -uint64_t num_read_ops; -uint64_t num_write_ops; -uint64_t read_latency_ms; -uint64_t write_latency_ms; -uint64_t io_outstanding; -uint64_t num_poll_timeout; -uint64_t io_ticks_ms; -}; -``` - -2. Description - -This data structure is used to obtain the I/O statistics of a controller. - -3. Struct members - -| Member | Description | -| ------------------------- | ------------------------------------------------------------ | -| uint64_t num_read_ops | Accumulated number of read I/Os of the controller. | -| uint64_t num_write_ops | Accumulated number of write I/Os of the controller. | -| uint64_t read_latency_ms | Accumulated read latency of the controller, in ms. | -| uint64_t write_latency_ms | Accumulated write latency of the controller, in ms. | -| uint64_t io_outstanding | Queue depth of the controller. | -| uint64_t num_poll_timeout | Accumulated number of polling timeouts of the controller. | -| uint64_t io_ticks_ms | Accumulated I/O processing latency of the controller, in ms. | - -### API - -#### bdev_rw.h - -##### libstorage_get_nvme_ctrlr_info - -1. Prototype - -``` -uint32_t libstorage_get_nvme_ctrlr_info(struct libstorage_nvme_ctrlr_info** ppCtrlrInfo); -``` - -2. Description - -Obtains information about all controllers. - -3. Parameters - -| Parameter | Description | -| ----------------------------------------------- | ------------------------------------------------------------ | -| struct libstorage_nvme_ctrlr_info** ppCtrlrInfo | Output parameter, which returns all obtained controller information.

Note:
Free the memory using the free API in a timely manner. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | Failed to obtain controller information or no controller information is obtained. | -| > 0 | Number of obtained controllers. | - -##### libstorage_get_mgr_info_by_esn - -1. Prototype - -``` -int32_t libstorage_get_mgr_info_by_esn(const char *esn, struct libstorage_mgr_info *mgr_info); -``` - -2. Description - -Obtains the management information about the NVMe drive corresponding to the ESN. - -3. Parameters - -| Parameter | Description | -| ------------------------------------ | ------------------------------------------------------------ | -| const char *esn | ESN of the target device.
Note:
An ESN is a string of a maximum of 20 characters (excluding the terminating character of the string), but the actual length may vary between hardware vendors. For example, an ESN shorter than 20 characters may be padded with spaces at the end of the string.

| -| struct libstorage_mgr_info *mgr_info | Output parameter, which returns all obtained NVMe drive management information. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | Succeeded in querying the NVMe drive management information corresponding to an ESN. | -| -1 | Failed to query the NVMe drive management information corresponding to an ESN. | -| -2 | No NVMe drive matching an ESN is obtained. | - -##### libstorage_get_mgr_smart_by_esn - -1. Prototype - -``` -int32_t libstorage_get_mgr_smart_by_esn(const char *esn, uint32_t nsid, struct libstorage_smart_info *mgr_smart_info); -``` - -2. Description - -Obtains the S.M.A.R.T. information of the NVMe drive corresponding to an ESN. - -3. Parameters - -| Parameter | Description | -| ------------------------------------ | ------------------------------------------------------------ | -| const char *esn | ESN of the target device.
Note:
An ESN is a string of a maximum of 20 characters (excluding the terminating character of the string), but the actual length may vary between hardware vendors. For example, an ESN shorter than 20 characters may be padded with spaces at the end of the string.

| uint32_t nsid | Specified namespace. | -| struct libstorage_smart_info *mgr_smart_info | Output parameter, which returns the obtained S.M.A.R.T. information of the NVMe drive. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | Succeeded in querying the S.M.A.R.T. information of the NVMe drive corresponding to an ESN. | -| -1 | Failed to query the S.M.A.R.T. information of the NVMe drive corresponding to an ESN. | -| -2 | No NVMe drive matching an ESN is obtained. | - -##### libstorage_get_bdev_ns_info - -1. Prototype - -``` -uint32_t libstorage_get_bdev_ns_info(const char* bdevName, struct libstorage_namespace_info** ppNsInfo); -``` - -2. Description - -Obtains namespace information based on the device name. - -3. Parameters - -| Parameter | Description | -| ------------------------------------------- | ------------------------------------------------------------ | -| const char* bdevName | Device name. | -| struct libstorage_namespace_info** ppNsInfo | Output parameter, which returns namespace information.

Note:
Free the memory using the free API in a timely manner. | - -4. Return value - -| Return Value | Description | -| ------------ | ---------------------------- | -| 0 | The operation failed. | -| 1 | The operation is successful. | - -##### libstorage_get_ctrl_ns_info - -1. Prototype - -``` -uint32_t libstorage_get_ctrl_ns_info(const char* ctrlName, struct libstorage_namespace_info** ppNsInfo); -``` - -2. Description - -Obtains information about all namespaces based on the controller name. - -3. Parameters - -| Parameter | Description | -| ------------------------------------------- | ------------------------------------------------------------ | -| const char* ctrlName | Controller name. | -| struct libstorage_namespace_info** ppNsInfo | Output parameter, which returns information about all namespaces.
Note:
Free the memory using the free API in a timely manner. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | Failed to obtain the namespace information or no namespace information is obtained. | -| > 0 | Number of namespaces obtained. | - -##### libstorage_create_namespace - -1. Prototype - -``` -int32_t libstorage_create_namespace(const char* ctrlName, uint64_t ns_size, char** outputName); -``` - -2. Description - -Creates a namespace on a specified controller (the prerequisite is that the controller supports namespace management). - -Optane drives are based on the NVMe 1.0 protocol and do not support namespace management. Therefore, this API is not supported. - -ES3000 V3 and V5 support only one namespace by default. By default, a namespace exists on the controller. To create a namespace, delete the original namespace. - -3. Parameters - -| Parameter | Description | -| -------------------- | ------------------------------------------------------------ | -| const char* ctrlName | Controller name. | -| uint64_t ns_size | Size of the namespace to be created (unit: sector_size). | -| char** outputName | Output parameter, which indicates the name of the created namespace.
Note:
Free the memory using the free API in a timely manner. | - -4. Return value - -| Return Value | Description | -| ------------ | ---------------------------------------------- | -| ≤ 0 | Failed to create the namespace. | -| > 0 | ID of the created namespace (starting from 1). | - -##### libstorage_delete_namespace - -1. Prototype - -``` -int32_t libstorage_delete_namespace(const char* ctrlName, uint32_t ns_id); -``` - -2. Description - -Deletes a namespace from a specified controller. Optane drives are based on the NVMe 1.0 protocol and do not support namespace management. Therefore, this API is not supported. - -3. Parameters - -| Parameter | Description | -| -------------------- | ---------------- | -| const char* ctrlName | Controller name. | -| uint32_t ns_id | Namespace ID. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | Deletion succeeded. | -| Other values | Deletion failed.

Note:
Before deleting a namespace, stop I/O operations. Otherwise, the namespace fails to be deleted. | - -##### libstorage_delete_all_namespace - -1. Prototype - -``` -int32_t libstorage_delete_all_namespace(const char* ctrlName); -``` - -2. Description - -Deletes all namespaces from a specified controller. Optane drives are based on the NVMe 1.0 protocol and do not support namespace management. Therefore, this API is not supported. - -3. Parameters - -| Parameter | Description | -| -------------------- | ---------------- | -| const char* ctrlName | Controller name. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | Deletion succeeded. | -| Other values | Deletion failed.
Note:
Before deleting a namespace, stop I/O operations. Otherwise, the namespace fails to be deleted. | - -##### libstorage_nvme_create_ctrlr - -1. Prototype - -``` -int32_t libstorage_nvme_create_ctrlr(const char *pci_addr, const char *ctrlr_name); -``` - -2. Description - -Creates an NVMe controller based on the PCI address. - -3. Parameters - -| Parameter | Description | -| ---------------- | ---------------- | -| char *pci_addr | PCI address. | -| char *ctrlr_name | Controller name. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------- | -| < 0 | Creation failed. | -| 0 | Creation succeeded. | - -##### libstorage_nvme_delete_ctrlr - -1. Prototype - -``` -int32_t libstorage_nvme_delete_ctrlr(const char *ctrlr_name); -``` - -2. Description - -Destroys an NVMe controller based on the controller name. - -3. Parameters - -| Parameter | Description | -| ---------------------- | ---------------- | -| const char *ctrlr_name | Controller name. | - -This API can be called only after all delivered I/Os are returned. - -4. Return value - -| Return Value | Description | -| ------------ | ---------------------- | -| < 0 | Destruction failed. | -| 0 | Destruction succeeded. | - -##### libstorage_nvme_reload_ctrlr - -1. Prototype - -``` -int32_t libstorage_nvme_reload_ctrlr(const char *cfgfile); -``` - -2. Description - -Adds or deletes NVMe controllers based on the configuration file. - -3. Parameters - -| Parameter | Description | -| ------------------- | ------------------------------- | -| const char *cfgfile | Path of the configuration file. | - -Before using this API to delete a drive, ensure that all delivered I/Os have been returned. - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| < 0 | Failed to add or delete drives based on the configuration file. (Drives may be successfully added or deleted for some controllers.) 
| -| 0 | Drives are successfully added or deleted based on the configuration file. | - -> Constraints - -- Currently, a maximum of 36 controllers can be configured in the configuration file. - -- The reload API creates as many controllers as possible. If a controller fails to be created, the creation of other controllers is not affected. - -- In concurrency scenarios, the final drive initialization status may be inconsistent with the input configuration file. - -- If a reload deletes a drive that is delivering I/Os, those I/Os fail. - -- If the controller name (for example, **nvme0**) corresponding to a PCI address in the configuration file is modified, the modification does not take effect when this API is called. - -- The reload function is valid only when drives are added or deleted. Other configuration items in the configuration file cannot be reloaded. - -##### libstorage_low_level_format_nvm - -1. Prototype - -``` -int8_t libstorage_low_level_format_nvm(const char* ctrlName, uint8_t lbaf, -enum libstorage_ns_pi_type piType, -bool pil_start, bool ms_extented, uint8_t ses); -``` - -2. Description - -Low-level formats NVMe drives. - -3. Parameters - -| Parameter | Description | -| --------------------------------- | ------------------------------------------------------------ | -| const char* ctrlName | Controller name. | -| uint8_t lbaf | LBA format to be used. | -| enum libstorage_ns_pi_type piType | Protection type to be used. | -| bool pil_start | The protection information is stored in the first eight bytes (1) or the last eight bytes (0) of the metadata. | -| bool ms_extented | Whether to format to the extended type. | -| uint8_t ses | Whether to perform secure erase during formatting. Currently, only the value **0** (no secure erase) is supported. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------- | -| < 0 | Formatting failed. 
| -| ≥ 0 | LBA format generated after successful formatting. | - -> Constraints - -- This low-level formatting API will clear the data and metadata of the drive namespace. Exercise caution when using this API. - -- It takes several seconds to format an ES3000 drive and several minutes to format an Intel Optane drive. Before using this API, wait until the formatting is complete. If the formatting process is forcibly stopped, the formatting fails. - -- Before formatting, stop the I/O operations on the data plane. If the drive is processing I/O requests, the formatting may fail occasionally. If the formatting is successful, the drive may discard the I/O requests that are being processed. Therefore, before formatting the drive, ensure that the I/O operations on the data plane are stopped. - -- During the formatting, the controller is reset. As a result, the initialized drive resources are unavailable. Therefore, after the formatting is complete, restart the I/O process on the data plane. - -- ES3000 V3 supports protection types 0 and 3, PI start and PI end, and mc extended. ES3000 V3 supports DIF in 512+8 format but does not support DIF in 4096+64 format. - -- ES3000 V5 supports protection types 0 and 3, PI start and PI end, mc extended, and mc pointer. ES3000 V5 supports DIF in both 512+8 and 4096+64 formats. - -- Optane drives support protection types 0 and 1, PI end, and mc extended. Optane drives support DIF in 512+8 format but do not support DIF in 4096+64 format. - -| **Drive Type** | **LBA Format** | **Drive Type** | **LBA Format** | -| ------------------ | ------------------------------------------------------------ | -------------- | ------------------------------------------------------------ | -| Intel Optane P4800 | lbaf0:512+0
lbaf1:512+8
lbaf2:512+16
lbaf3:4096+0
lbaf4:4096+8
lbaf5:4096+64
lbaf6:4096+128 | ES3000 V3, V5 | lbaf0:512+0
lbaf1:512+8
lbaf2:4096+64
lbaf3:4096+0
lbaf4:4096+8 | - -##### LIBSTORAGE_CALLBACK_FUNC - -1. Prototype - -``` -typedef void (*LIBSTORAGE_CALLBACK_FUNC)(int32_t cb_status, int32_t sct_code, void* cb_arg); -``` - -2. Description - -Registered HSAK I/O completion callback function. - -3. Parameters - -| Parameter | Description | -| ----------------- | ------------------------------------------------------------ | -| int32_t cb_status | I/O status code. The value **0** indicates success, a negative value indicates system error code, and a positive value indicates drive error code (for different error codes,
see [Appendixes](#Appendixes)). | -| int32_t sct_code | I/O status code type:
0: [GENERIC](#generic)
1: [COMMAND_SPECIFIC](#command_specific)
2: [MEDIA_DATA_INTERGRITY_ERROR](#media_data_intergrity_error)
7: VENDOR_SPECIFIC | -| void* cb_arg | Input parameter of the callback function. | - -4. Return value - -None. - -##### libstorage_deallocate_block - -1. Prototype - -``` -int32_t libstorage_deallocate_block(int32_t fd, struct libstorage_dsm_range_desc *range, uint16_t range_count, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); -``` - -2. Description - -Notifies NVMe drives of the blocks that can be released. - -3. Parameters - -| Parameter | Description | -| --------------------------------------- | ------------------------------------------------------------ | -| int32_t fd | Open drive file descriptor. | -| struct libstorage_dsm_range_desc *range | Description of blocks that can be released on NVMe drives.
Note:
The memory for this parameter must be allocated by **libstorage_mem_reserve**, with 4 KB alignment (that is, align set to 4096).
The maximum TRIM range varies by drive. Exceeding a drive's maximum TRIM range may cause data exceptions. | -| uint16_t range_count | Number of members in the array range. | -| LIBSTORAGE_CALLBACK_FUNC cb | Callback function. | -| void* cb_arg | Callback function parameter. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------- | -| < 0 | Failed to deliver the request. | -| 0 | Request submitted successfully. | - -##### libstorage_async_write - -1. Prototype - -``` -int32_t libstorage_async_write(int32_t fd, void *buf, size_t nbytes, off64_t offset, void *md_buf, size_t md_len, enum libstorage_crc_and_prchk dif_flag, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); -``` - -2. Description - -Delivers asynchronous I/O write requests (the write buffer is a contiguous buffer). - -3. Parameters - -| Parameter | Description | -| -------------------------------------- | ------------------------------------------------------------ | -| int32_t fd | File descriptor of the block device. | -| void *buf | Buffer for I/O write data (four-byte aligned and cannot cross the 4 KB page boundary).
Note:
LBAs in extended mode must contain the metadata memory size. | -| size_t nbytes | Size of a single write I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| off64_t offset | Write offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| void *md_buf | Metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **NULL** for LBAs in extended mode.) | -| size_t md_len | Buffer length of metadata. (Applicable only to LBAs in separated mode. Set this parameter to **0** for LBAs in extended mode.) | -| enum libstorage_crc_and_prchk dif_flag | Whether to calculate DIF and whether to enable drive verification. | -| LIBSTORAGE_CALLBACK_FUNC cb | Registered callback function. | -| void* cb_arg | Parameters of the callback function. | - -4. Return value - -| Return Value | Description | -| ------------ | ---------------------------------------------- | -| 0 | I/O write requests are submitted successfully. | -| Other values | Failed to submit I/O write requests. | - -##### libstorage_async_read - -1. Prototype - -``` -int32_t libstorage_async_read(int32_t fd, void *buf, size_t nbytes, off64_t offset, void *md_buf, size_t md_len, enum libstorage_crc_and_prchk dif_flag, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); -``` - -2. Description - -Delivers asynchronous I/O read requests (the read buffer is a contiguous buffer). - -3. Parameters - -| Parameter | Description | -| -------------------------------------- | ------------------------------------------------------------ | -| int32_t fd | File descriptor of the block device. | -| void *buf | Buffer for I/O read data (four-byte aligned and cannot cross the 4 KB page boundary).
Note:
LBAs in extended mode must contain the metadata memory size. | -| size_t nbytes | Size of a single read I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| off64_t offset | Read offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| void *md_buf | Metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **NULL** for LBAs in extended mode.) | -| size_t md_len | Buffer length of metadata. (Applicable only to LBAs in separated mode. Set this parameter to **0** for LBAs in extended mode.) | -| enum libstorage_crc_and_prchk dif_flag | Whether to calculate DIF and whether to enable drive verification. | -| LIBSTORAGE_CALLBACK_FUNC cb | Registered callback function. | -| void* cb_arg | Parameters of the callback function. | - -4. Return value - -| Return Value | Description | -| ------------ | --------------------------------------------- | -| 0 | I/O read requests are submitted successfully. | -| Other values | Failed to submit I/O read requests. | - -##### libstorage_async_writev - -1. Prototype - -``` -int32_t libstorage_async_writev(int32_t fd, struct iovec *iov, int iovcnt, size_t nbytes, off64_t offset, void *md_buf, size_t md_len, enum libstorage_crc_and_prchk dif_flag, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); -``` - -2. Description - -Delivers asynchronous I/O write requests (the write buffer is a discrete buffer). - -3. Parameters - -| Parameter | Description | -| -------------------------------------- | ------------------------------------------------------------ | -| int32_t fd | File descriptor of the block device. | -| struct iovec *iov | Buffer for I/O write data.
Note:
LBAs in extended mode must contain the metadata size.
The address must be 4-byte-aligned and the length cannot exceed 4 GB. | -| int iovcnt | Number of buffers for I/O write data. | -| size_t nbytes | Size of a single write I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| off64_t offset | Write offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| void *md_buf | Metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **NULL** for LBAs in extended mode.) | -| size_t md_len | Length of the metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **0** for LBAs in extended mode.) | -| enum libstorage_crc_and_prchk dif_flag | Whether to calculate DIF and whether to enable drive verification. | -| LIBSTORAGE_CALLBACK_FUNC cb | Registered callback function. | -| void* cb_arg | Parameters of the callback function. | - -4. Return value - -| Return Value | Description | -| ------------ | ---------------------------------------------- | -| 0 | I/O write requests are submitted successfully. | -| Other values | Failed to submit I/O write requests. | - -##### libstorage_async_readv - -1. Prototype - -``` -int32_t libstorage_async_readv(int32_t fd, struct iovec *iov, int iovcnt, size_t nbytes, off64_t offset, void *md_buf, size_t md_len, enum libstorage_crc_and_prchk dif_flag, LIBSTORAGE_CALLBACK_FUNC cb, void* cb_arg); -``` - -2. Description - -Delivers asynchronous I/O read requests (the read buffer is a discrete buffer). - -3. Parameters - -| Parameter | Description | -| -------------------------------------- | ------------------------------------------------------------ | -| int32_t fd | File descriptor of the block device. | -| struct iovec *iov | Buffer for I/O read data.
Note:
LBAs in extended mode must contain the metadata size.
The address must be 4-byte-aligned and the length cannot exceed 4 GB. | -| int iovcnt | Number of buffers for I/O read data. | -| size_t nbytes | Size of a single read I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| off64_t offset | Read offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| void *md_buf | Metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **NULL** for LBAs in extended mode.) | -| size_t md_len | Length of the metadata buffer. (Applicable only to LBAs in separated mode. Set this parameter to **0** for LBAs in extended mode.) | -| enum libstorage_crc_and_prchk dif_flag | Whether to calculate DIF and whether to enable drive verification. | -| LIBSTORAGE_CALLBACK_FUNC cb | Registered callback function. | -| void* cb_arg | Parameters of the callback function. | - -4. Return value - -| Return Value | Description | -| ------------ | --------------------------------------------- | -| 0 | I/O read requests are submitted successfully. | -| Other values | Failed to submit I/O read requests. | - -##### libstorage_sync_write - -1. Prototype - -``` -int32_t libstorage_sync_write(int fd, const void *buf, size_t nbytes, off_t offset); -``` - -2. Description - -Delivers synchronous I/O write requests (the write buffer is a contiguous buffer). - -3. Parameters - -| Parameter | Description | -| -------------- | ------------------------------------------------------------ | -| int32_t fd | File descriptor of the block device. | -| void *buf | Buffer for I/O write data (four-byte aligned and cannot cross the 4 KB page boundary).
Note:
LBAs in extended mode must contain the metadata memory size. | -| size_t nbytes | Size of a single write I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| off64_t offset | Write offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | - -4. Return value - -| Return Value | Description | -| ------------ | ---------------------------------------------- | -| 0 | I/O write requests are submitted successfully. | -| Other values | Failed to submit I/O write requests. | - -##### libstorage_sync_read - -1. Prototype - -``` -int32_t libstorage_sync_read(int fd, const void *buf, size_t nbytes, off_t offset); -``` - -2. Description - -Delivers synchronous I/O read requests (the read buffer is a contiguous buffer). - -3. Parameters - -| Parameter | Description | -| -------------- | ------------------------------------------------------------ | -| int32_t fd | File descriptor of the block device. | -| void *buf | Buffer for I/O read data (four-byte aligned and cannot cross the 4 KB page boundary).
Note:
LBAs in extended mode must contain the metadata memory size. | -| size_t nbytes | Size of a single read I/O, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | -| off64_t offset | Read offset of the LBA, in bytes (an integer multiple of **sector_size**).
Note:
Only the data size is included. LBAs in extended mode do not include the metadata size. | - -4. Return value - -| Return Value | Description | -| ------------ | --------------------------------------------- | -| 0 | I/O read requests are submitted successfully. | -| Other values | Failed to submit I/O read requests. | - -##### libstorage_open - -1. Prototype - -``` -int32_t libstorage_open(const char* devfullname); -``` - -2. Description - -Opens a block device. - -3. Parameters - -| Parameter | Description | -| ----------------------- | ---------------------------------------- | -| const char* devfullname | Block device name (format: **nvme0n1**). | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| -1 | Opening failed. For example, the device name is incorrect, or the number of opened FDs is greater than the number of available channels of the NVMe drive. | -| > 0 | File descriptor of the block device. | - -After the MultiQ function in **nvme.conf.in** is enabled, different FDs are returned if a thread opens the same device for multiple times. Otherwise, the same FD is returned. This attribute applies only to the NVMe device. - -##### libstorage_close - -1. Prototype - -``` -int32_t libstorage_close(int32_t fd); -``` - -2. Description - -Closes a block device. - -3. Parameters - -| Parameter | Description | -| ---------- | ------------------------------------------ | -| int32_t fd | File descriptor of an opened block device. | - -4. Return value - -| Return Value | Description | -| ------------ | ----------------------------------------------- | -| -1 | Invalid file descriptor. | -| -16 | The file descriptor is busy. Retry is required. | -| 0 | Close succeeded. | - -##### libstorage_mem_reserve - -1. Prototype - -``` -void* libstorage_mem_reserve(size_t size, size_t align); -``` - -2. Description - -Allocates memory space from the huge page memory reserved by the DPDK. - -3. 
Parameters - -| Parameter | Description | -| ------------ | ----------------------------------- | -| size_t size | Size of the memory to be allocated. | -| size_t align | Aligns allocated memory space. | - -4. Return value - -| Return Value | Description | -| ------------ | -------------------------------------- | -| NULL | Allocation failed. | -| Other values | Address of the allocated memory space. | - -##### libstorage_mem_free - -1. Prototype - -``` -void libstorage_mem_free(void* ptr); -``` - -2. Description - -Frees the memory space pointed to by **ptr**. - -3. Parameters - -| Parameter | Description | -| --------- | ---------------------------------------- | -| void* ptr | Address of the memory space to be freed. | - -4. Return value - -None. - -##### libstorage_alloc_io_buf - -1. Prototype - -``` -void* libstorage_alloc_io_buf(size_t nbytes); -``` - -2. Description - -Allocates memory from buf_small_pool or buf_large_pool of the SPDK. - -3. Parameters - -| Parameter | Description | -| ------------- | ----------------------------------- | -| size_t nbytes | Size of the buffer to be allocated. | - -4. Return value - -| Return Value | Description | -| ------------ | -------------------------------------- | -| Other values | Start address of the allocated buffer. | - -##### libstorage_free_io_buf - -1. Prototype - -``` -int32_t libstorage_free_io_buf(void *buf, size_t nbytes); -``` - -2. Description - -Frees the allocated memory to buf_small_pool or buf_large_pool of the SPDK. - -3. Parameters - -| Parameter | Description | -| ------------- | ---------------------------------------- | -| void *buf | Start address of the buffer to be freed. | -| size_t nbytes | Size of the buffer to be freed. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------ | -| -1 | Freeing failed. | -| 0 | Freeing succeeded. | - -##### libstorage_init_module - -1. Prototype - -``` -int32_t libstorage_init_module(const char* cfgfile); -``` - -2. 
Description - -Initializes the HSAK module. - -3. Parameters - -| Parameter | Description | -| ------------------- | ------------------------------------ | -| const char* cfgfile | Name of the HSAK configuration file. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------- | -| Other values | Initialization failed. | -| 0 | Initialization succeeded. | - -##### libstorage_exit_module - -1. Prototype - -``` -int32_t libstorage_exit_module(void); -``` - -2. Description - -Exits the HSAK module. - -3. Parameters - -None. - -4. Return value - -| Return Value | Description | -| ------------ | --------------------------------- | -| Other values | Failed to exit the cleanup. | -| 0 | Succeeded in exiting the cleanup. | - -##### LIBSTORAGE_REGISTER_DPDK_INIT_NOTIFY - -1. Prototype - -``` -LIBSTORAGE_REGISTER_DPDK_INIT_NOTIFY(_name, _notify) -``` - -2. Description - -Service layer registration function, which is used to register the callback function when the DPDK initialization is complete. - -3. Parameters - -| Parameter | Description | -| --------- | ------------------------------------------------------------ | -| _name | Name of a module at the service layer. | -| _notify | Prototype of the callback function registered at the service layer: **void (*notifyFunc)(const struct libstorage_dpdk_init_notify_arg *arg);** | - -4. Return value - -None - -#### ublock.h - -##### init_ublock - -1. Prototype - -``` -int init_ublock(const char *name, enum ublock_rpc_server_status flg); -``` - -2. Description - -Initializes the Ublock module. This API must be called before other Ublock APIs. If the flag is set to **UBLOCK_RPC_SERVER_ENABLE**, that is, Ublock functions as the RPC server, the same process can be initialized only once. - -When Ublock is started as the RPC server, the monitor thread of a server is started at the same time. 
When the monitor thread detects that the RPC server thread is abnormal (for example, thread suspended), the monitor thread calls the exit function to trigger the process to exit. - -In this case, the product script is used to start the process again. - -3. Parameters - -| Parameter | Description | -| ------------------------------------ | ------------------------------------------------------------ | -| const char *name | Module name. The default value is **ublock**. You are advised to set this parameter to **NULL**. | -| enum ublock_rpc_server_status
flg | Whether to enable RPC. The value can be **UBLOCK_RPC_SERVER_DISABLE** or **UBLOCK_RPC_SERVER_ENABLE**.
If RPC is disabled and the drive is occupied by service processes, the Ublock module cannot obtain the drive information. | - -4. Return value - -| Return Value | Description | -| ------------- | ------------------------------------------------------------ | -| 0 | Initialization succeeded. | -| -1 | Initialization failed. Possible cause: The Ublock module has been initialized. | -| Process exits | Ublock considers that the following exceptions cannot be rectified and directly calls the exit API to exit the process:
- The RPC service needs to be created, but its creation fails.
- Failed to create a hot swap monitoring thread. | - -##### ublock_init - -1. Prototype - -``` -#define ublock_init(name) init_ublock(name, UBLOCK_RPC_SERVER_ENABLE) -``` - -2. Description - -It is the macro definition of the init_ublock API. It can be regarded as initializing Ublock into the required RPC service. - -3. Parameters - -| Parameter | Description | -| --------- | ------------------------------------------------------------ | -| name | Module name. The default value is **ublock**. You are advised to set this parameter to **NULL**. | - -4. Return value - -| Return Value | Description | -| ------------- | ------------------------------------------------------------ | -| 0 | Initialization succeeded. | -| -1 | Initialization failed. Possible cause: The Ublock RPC server module has been initialized. | -| Process exits | Ublock considers that the following exceptions cannot be rectified and directly calls the exit API to exit the process:
- The RPC service needs to be created, but its creation fails.
- Failed to create a hot swap monitoring thread. | - -##### ublock_init_norpc - -1. Prototype - -``` -#define ublock_init_norpc(name) init_ublock(name, UBLOCK_RPC_SERVER_DISABLE) -``` - -2. Description - -It is the macro definition of the init_ublock API and can be considered as initializing Ublock into a non-RPC service. - -3. Parameters - -| Parameter | Description | -| --------- | ------------------------------------------------------------ | -| name | Module name. The default value is **ublock**. You are advised to set this parameter to **NULL**. | - -4. Return value - -| Return Value | Description | -| ------------- | ------------------------------------------------------------ | -| 0 | Initialization succeeded. | -| -1 | Initialization failed. Possible cause: The Ublock client module has been initialized. | -| Process exits | Ublock considers that the following exceptions cannot be rectified and directly calls the exit API to exit the process:
- The RPC service needs to be created, but its creation fails.
- Failed to create a hot swap monitoring thread. | - -##### ublock_fini - -1. Prototype - -``` -void ublock_fini(void); -``` - -2. Description - -Destroys the Ublock module and internally created resources. This API must be used together with the Ublock initialization API. - -3. Parameters - -None. - -4. Return value - -None. - -##### ublock_get_bdevs - -1. Prototype - -``` -int ublock_get_bdevs(struct ublock_bdev_mgr* bdev_list); -``` - -2. Description - -Obtains the device list (all NVMe devices in the environment, including kernel-mode and user-mode drivers). The obtained NVMe device list contains only PCI addresses and does not contain specific device information. To obtain specific device information, call ublock_get_bdev. - -3. Parameters - -| Parameter | Description | -| --------------------------------- | ------------------------------------------------------------ | -| struct ublock_bdev_mgr* bdev_list | Output parameter, which returns the device queue. The **bdev_list** pointer must be allocated externally. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------ | -| 0 | The device queue is obtained successfully. | -| -2 | No NVMe device exists in the environment. | -| Other values | Failed to obtain the device list. | - -##### ublock_free_bdevs - -1. Prototype - -``` -void ublock_free_bdevs(struct ublock_bdev_mgr* bdev_list); -``` - -2. Description - -Releases a device list. - -3. Parameters - -| Parameter | Description | -| --------------------------------- | ------------------------------------------------------------ | -| struct ublock_bdev_mgr* bdev_list | Head pointer of the device queue. After the device queue is cleared, the **bdev_list** pointer is not released. | - -4. Return value - -None. - -##### ublock_get_bdev - -1. Prototype - -``` -int ublock_get_bdev(const char *pci, struct ublock_bdev *bdev); -``` - -2. Description - -Obtains information about a specific device. 
In the device information, the serial number, model, and firmware version of the NVMe device are saved as character arrays instead of character strings. (The return format varies depending on the drive controller, and the arrays may not end with 0.) - -After this API is called, the corresponding device is occupied by Ublock. Therefore, call ublock_free_bdev to free resources immediately after the required service operation is complete. - -3. Parameters - -| Parameter | Description | -| ------------------------ | ------------------------------------------------------------ | -| const char *pci | PCI address of the device whose information needs to be obtained. | -| struct ublock_bdev *bdev | Output parameter, which returns the device information. The **bdev** pointer must be allocated externally. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | The device information is obtained successfully. | -| -1 | Failed to obtain device information due to incorrect parameters. | -| -11(EAGAIN) | Failed to obtain device information due to the RPC query failure. A retry is required (3s sleep is recommended). | - -##### ublock_get_bdev_by_esn - -1. Prototype - -``` -int ublock_get_bdev_by_esn(const char *esn, struct ublock_bdev *bdev); -``` - -2. Description - -Obtains information about the device corresponding to an ESN. In the device information, the serial number, model, and firmware version of the NVMe device are saved as character arrays instead of character strings. (The return format varies depending on the drive controller, and the arrays may not end with 0.) - -After this API is called, the corresponding device is occupied by Ublock. Therefore, call ublock_free_bdev to free resources immediately after the required service operation is complete. - -3. 
Parameters - -| Parameter | Description | -| ------------------------ | ------------------------------------------------------------ | -| const char *esn | ESN of the device whose information is to be obtained.
Note:
An ESN is a string of a maximum of 20 characters (excluding the end character of the string), but the length may vary according to hardware vendors. For example, if the length is less than 20 characters, spaces are padded at the end of the character string. | -| struct ublock_bdev *bdev | Output parameter, which returns the device information. The **bdev** pointer must be allocated externally. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | The device information is obtained successfully. | -| -1 | Failed to obtain device information due to incorrect parameters. | -| -11(EAGAIN) | Failed to obtain device information due to the RPC query failure. A retry is required (3s sleep is recommended). | - -##### ublock_free_bdev - -1. Prototype - -``` -void ublock_free_bdev(struct ublock_bdev *bdev); -``` - -2. Description - -Frees device resources. - -3. Parameters - -| Parameter | Description | -| ------------------------ | ------------------------------------------------------------ | -| struct ublock_bdev *bdev | Pointer to the device information. After the data in the pointer is cleared, the **bdev** pointer is not freed. | - -4. Return value - -None. - -##### TAILQ_FOREACH_SAFE - -1. Prototype - -``` -#define TAILQ_FOREACH_SAFE(var, head, field, tvar) -for ((var) = TAILQ_FIRST((head)); -(var) && ((tvar) = TAILQ_NEXT((var), field), 1); -(var) = (tvar)) -``` - -2. Description - -Provides a macro definition for each member of the secure access queue. - -3. Parameters - -| Parameter | Description | -| --------- | ------------------------------------------------------------ | -| var | Queue node member on which you are performing operations. | -| head | Queue head pointer. Generally, it refers to the object address defined by **TAILQ_HEAD(xx, xx) obj**. | -| field | Name of the struct used to store the pointers before and after the queue in the queue node. 
Generally, it is the name defined by **TAILQ_ENTRY (xx) name**. | -| tvar | Next queue node member. | - -4. Return value - -None. - -##### ublock_get_SMART_info - -1. Prototype - -``` -int ublock_get_SMART_info(const char *pci, uint32_t nsid, struct ublock_SMART_info *smart_info); -``` - -2. Description - -Obtains the S.M.A.R.T. information of a specified device. - -3. Parameters - -| Parameter | Description | -| ------------------------------------ | ------------------------------------------------------------ | -| const char *pci | Device PCI address. | -| uint32_t nsid | Specified namespace. | -| struct ublock_SMART_info *smart_info | Output parameter, which returns the S.M.A.R.T. information of the device. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | The S.M.A.R.T. information is obtained successfully. | -| -1 | Failed to obtain S.M.A.R.T. information due to incorrect parameters. | -| -11(EAGAIN) | Failed to obtain S.M.A.R.T. information due to the RPC query failure. A retry is required (3s sleep is recommended). | - -##### ublock_get_SMART_info_by_esn - -1. Prototype - -``` -int ublock_get_SMART_info_by_esn(const char *esn, uint32_t nsid, struct ublock_SMART_info *smart_info); -``` - -2. Description - -Obtains the S.M.A.R.T. information of the device corresponding to an ESN. - -3. Parameters - -| Parameter | Description | -| --------------------------------------- | ------------------------------------------------------------ | -| const char *esn | Device ESN.
Note:
An ESN is a string of up to 20 characters (excluding the terminating character of the string), but the exact length may vary by hardware vendor. For example, if the string is shorter than 20 characters, it is padded with trailing spaces. | -| uint32_t nsid | Specified namespace. | -| struct ublock_SMART_info
*smart_info | Output parameter, which returns the S.M.A.R.T. information of the device. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | The S.M.A.R.T. information is obtained successfully. | -| -1 | Failed to obtain SMART information due to incorrect parameters. | -| -11(EAGAIN) | Failed to obtain S.M.A.R.T. information due to the RPC query failure. A retry is required (3s sleep is recommended). | - -##### ublock_get_error_log_info - -1. Prototype - -``` -int ublock_get_error_log_info(const char *pci, uint32_t err_entries, struct ublock_nvme_error_info *errlog_info); -``` - -2. Description - -Obtains the error log information of a specified device. - -3. Parameters - -| Parameter | Description | -| ------------------------------------------ | ------------------------------------------------------------ | -| const char *pci | Device PCI address. | -| uint32_t err_entries | Number of error logs to be obtained. A maximum of 256 error logs can be obtained. | -| struct ublock_nvme_error_info *errlog_info | Output parameter, which returns the error log information of the device. For the **errlog_info** pointer, the caller needs to apply for space and ensure that the obtained space is greater than or equal to err_entries x size of (struct ublock_nvme_error_info). | - -4. Return value - -| Return Value | Description | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| Number of obtained error logs. The value is greater than or equal to 0. | Error logs are obtained successfully. | -| -1 | Failed to obtain error logs due to incorrect parameters. | -| -11(EAGAIN) | Failed to obtain error logs due to the RPC query failure. A retry is required (3s sleep is recommended). | - -##### ublock_get_log_page - -1. 
Prototype - -``` -int ublock_get_log_page(const char *pci, uint8_t log_page, uint32_t nsid, void *payload, uint32_t payload_size); -``` - -2. Description - -Obtains a specified log page of a specified device. - -3. Parameters - -| Parameter | Description | -| --------------------- | ------------------------------------------------------------ | -| const char *pci | Device PCI address. | -| uint8_t log_page | ID of the log page to be obtained. For example, **0xC0** and **0xCA** indicate the customized S.M.A.R.T. information of ES3000 V5 drives. | -| uint32_t nsid | Namespace ID. Some log pages can be obtained per namespace while others cannot. If obtaining by namespace is not supported, the caller must pass **0xFFFFFFFF**. | -| void *payload | Output parameter, which stores the log page information. The caller is responsible for allocating memory. | -| uint32_t payload_size | Size of the allocated payload buffer, which cannot be greater than 4096 bytes. | - -4. Return value - -| Return Value | Description | -| ------------ | ---------------------------------------------------- | -| 0 | The log page is obtained successfully. | -| -1 | Failed to obtain the log page due to parameter errors. | - -##### ublock_info_get_pci_addr - -1. Prototype - -``` -char *ublock_info_get_pci_addr(const void *info); -``` - -2. Description - -Obtains the PCI address of the hot swap device. - -The memory occupied by **info** and the memory occupied by the returned PCI address do not need to be freed by the service process. - -3. Parameters - -| Parameter | Description | -| ---------------- | ------------------------------------------------------------ | -| const void *info | Hot swap event information transferred by the hot swap monitoring thread to the callback function. | - -4. Return value - -| Return Value | Description | -| ------------ | --------------------------------- | -| NULL | Failed to obtain the information. | -| Other values | Obtained PCI address. 
| - -##### ublock_info_get_action - -1. Prototype - -``` -enum ublock_nvme_uevent_action ublock_info_get_action(const void *info); -``` - -2. Description - -Obtains the type of the hot swap event. - -The memory occupied by info does not need to be freed by service process. - -3. Parameters - -| Parameter | Description | -| ---------------- | ------------------------------------------------------------ | -| const void *info | Hot swap event information transferred by the hot swap monitoring thread to the callback function. | - -4. Return value - -| Return Value | Description | -| -------------------------- | ------------------------------------------------------------ | -| Type of the hot swap event | Type of the event that triggers the callback function. For details, see the definition in **5.1.2.6 enum ublock_nvme_uevent_action**. | - -##### ublock_get_ctrl_iostat - -1. Prototype - -``` -int ublock_get_ctrl_iostat(const char* pci, struct ublock_ctrl_iostat_info *ctrl_iostat); -``` - -2. Description - -Obtains the I/O statistics of a controller. - -3. Parameters - -| Parameter | Description | -| ------------------------------------------- | ------------------------------------------------------------ | -| const char* pci | PCI address of the controller whose I/O statistics are to be obtained. | -| struct ublock_ctrl_iostat_info *ctrl_iostat | Output parameter, which returns I/O statistics. The **ctrl_iostat** pointer must be allocated externally. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------------------------ | -| 0 | Succeeded in obtaining I/O statistics. | -| -1 | Failed to obtain I/O statistics due to invalid parameters or RPC errors. | -| -2 | Failed to obtain I/O statistics because the NVMe drive is not taken over by the I/O process. | -| -3 | Failed to obtain I/O statistics because the I/O statistics function is disabled. | - -##### ublock_nvme_admin_passthru - -1. 
Prototype - -``` -int32_t ublock_nvme_admin_passthru(const char *pci, void *cmd, void *buf, size_t nbytes); -``` - -2. Description - -Transparently transmits the **nvme admin** command to the NVMe device. Currently, only the **nvme admin** command for obtaining the identify parameter is supported. - -3. Parameters - -| Parameter | Description | -| --------------- | ------------------------------------------------------------ | -| const char *pci | PCI address of the destination controller of the **nvme admin** command. | -| void *cmd | Pointer to the **nvme admin** command struct. The struct size is 64 bytes. For details, see the NVMe specifications. Currently, only the command for obtaining the identify parameter is supported. | -| void *buf | Saves the output of the **nvme admin** command. The space is allocated by the user and its size is specified by **nbytes**. | -| size_t nbytes | Size of the user buffer, in bytes. For the command to obtain the identify parameter, **nbytes** must be 4096. | - -4. Return value - -| Return Value | Description | -| ------------ | ------------------------------------------ | -| 0 | The user command is executed successfully. | -| -1 | Failed to execute the user command. 
| - -## Appendixes - -### GENERIC - -Generic Error Code Reference - -| sc | value | -| ------------------------------------------ | ----- | -| NVME_SC_SUCCESS | 0x00 | -| NVME_SC_INVALID_OPCODE | 0x01 | -| NVME_SC_INVALID_FIELD | 0x02 | -| NVME_SC_COMMAND_ID_CONFLICT | 0x03 | -| NVME_SC_DATA_TRANSFER_ERROR | 0x04 | -| NVME_SC_ABORTED_POWER_LOSS | 0x05 | -| NVME_SC_INTERNAL_DEVICE_ERROR | 0x06 | -| NVME_SC_ABORTED_BY_REQUEST | 0x07 | -| NVME_SC_ABORTED_SQ_DELETION | 0x08 | -| NVME_SC_ABORTED_FAILED_FUSED | 0x09 | -| NVME_SC_ABORTED_MISSING_FUSED | 0x0a | -| NVME_SC_INVALID_NAMESPACE_OR_FORMAT | 0x0b | -| NVME_SC_COMMAND_SEQUENCE_ERROR | 0x0c | -| NVME_SC_INVALID_SGL_SEG_DESCRIPTOR | 0x0d | -| NVME_SC_INVALID_NUM_SGL_DESCIRPTORS | 0x0e | -| NVME_SC_DATA_SGL_LENGTH_INVALID | 0x0f | -| NVME_SC_METADATA_SGL_LENGTH_INVALID | 0x10 | -| NVME_SC_SGL_DESCRIPTOR_TYPE_INVALID | 0x11 | -| NVME_SC_INVALID_CONTROLLER_MEM_BUF | 0x12 | -| NVME_SC_INVALID_PRP_OFFSET | 0x13 | -| NVME_SC_ATOMIC_WRITE_UNIT_EXCEEDED | 0x14 | -| NVME_SC_OPERATION_DENIED | 0x15 | -| NVME_SC_INVALID_SGL_OFFSET | 0x16 | -| NVME_SC_INVALID_SGL_SUBTYPE | 0x17 | -| NVME_SC_HOSTID_INCONSISTENT_FORMAT | 0x18 | -| NVME_SC_KEEP_ALIVE_EXPIRED | 0x19 | -| NVME_SC_KEEP_ALIVE_INVALID | 0x1a | -| NVME_SC_ABORTED_PREEMPT | 0x1b | -| NVME_SC_SANITIZE_FAILED | 0x1c | -| NVME_SC_SANITIZE_IN_PROGRESS | 0x1d | -| NVME_SC_SGL_DATA_BLOCK_GRANULARITY_INVALID | 0x1e | -| NVME_SC_COMMAND_INVALID_IN_CMB | 0x1f | -| NVME_SC_LBA_OUT_OF_RANGE | 0x80 | -| NVME_SC_CAPACITY_EXCEEDED | 0x81 | -| NVME_SC_NAMESPACE_NOT_READY | 0x82 | -| NVME_SC_RESERVATION_CONFLICT | 0x83 | -| NVME_SC_FORMAT_IN_PROGRESS | 0x84 | - -### COMMAND_SPECIFIC - -Error Code Reference for Specific Commands - -| sc | value | -| ------------------------------------------ | ----- | -| NVME_SC_COMPLETION_QUEUE_INVALID | 0x00 | -| NVME_SC_INVALID_QUEUE_IDENTIFIER | 0x01 | -| NVME_SC_MAXIMUM_QUEUE_SIZE_EXCEEDED | 0x02 | -| NVME_SC_ABORT_COMMAND_LIMIT_EXCEEDED | 0x03 | -| 
NVME_SC_ASYNC_EVENT_REQUEST_LIMIT_EXCEEDED | 0x05 | -| NVME_SC_INVALID_FIRMWARE_SLOT | 0x06 | -| NVME_SC_INVALID_FIRMWARE_IMAGE | 0x07 | -| NVME_SC_INVALID_INTERRUPT_VECTOR | 0x08 | -| NVME_SC_INVALID_LOG_PAGE | 0x09 | -| NVME_SC_INVALID_FORMAT | 0x0a | -| NVME_SC_FIRMWARE_REQ_CONVENTIONAL_RESET | 0x0b | -| NVME_SC_INVALID_QUEUE_DELETION | 0x0c | -| NVME_SC_FEATURE_ID_NOT_SAVEABLE | 0x0d | -| NVME_SC_FEATURE_NOT_CHANGEABLE | 0x0e | -| NVME_SC_FEATURE_NOT_NAMESPACE_SPECIFIC | 0x0f | -| NVME_SC_FIRMWARE_REQ_NVM_RESET | 0x10 | -| NVME_SC_FIRMWARE_REQ_RESET | 0x11 | -| NVME_SC_FIRMWARE_REQ_MAX_TIME_VIOLATION | 0x12 | -| NVME_SC_FIRMWARE_ACTIVATION_PROHIBITED | 0x13 | -| NVME_SC_OVERLAPPING_RANGE | 0x14 | -| NVME_SC_NAMESPACE_INSUFFICIENT_CAPACITY | 0x15 | -| NVME_SC_NAMESPACE_ID_UNAVAILABLE | 0x16 | -| NVME_SC_NAMESPACE_ALREADY_ATTACHED | 0x18 | -| NVME_SC_NAMESPACE_IS_PRIVATE | 0x19 | -| NVME_SC_NAMESPACE_NOT_ATTACHED | 0x1a | -| NVME_SC_THINPROVISIONING_NOT_SUPPORTED | 0x1b | -| NVME_SC_CONTROLLER_LIST_INVALID | 0x1c | -| NVME_SC_DEVICE_SELF_TEST_IN_PROGRESS | 0x1d | -| NVME_SC_BOOT_PARTITION_WRITE_PROHIBITED | 0x1e | -| NVME_SC_INVALID_CTRLR_ID | 0x1f | -| NVME_SC_INVALID_SECONDARY_CTRLR_STATE | 0x20 | -| NVME_SC_INVALID_NUM_CTRLR_RESOURCES | 0x21 | -| NVME_SC_INVALID_RESOURCE_ID | 0x22 | -| NVME_SC_CONFLICTING_ATTRIBUTES | 0x80 | -| NVME_SC_INVALID_PROTECTION_INFO | 0x81 | -| NVME_SC_ATTEMPTED_WRITE_TO_RO_PAGE | 0x82 | - -### MEDIA_DATA_INTERGRITY_ERROR - -Error Code Reference for Medium Exceptions - -| sc | value | -| -------------------------------------- | ----- | -| NVME_SC_WRITE_FAULTS | 0x80 | -| NVME_SC_UNRECOVERED_READ_ERROR | 0x81 | -| NVME_SC_GUARD_CHECK_ERROR | 0x82 | -| NVME_SC_APPLICATION_TAG_CHECK_ERROR | 0x83 | -| NVME_SC_REFERENCE_TAG_CHECK_ERROR | 0x84 | -| NVME_SC_COMPARE_FAILURE | 0x85 | -| NVME_SC_ACCESS_DENIED | 0x86 | -| NVME_SC_DEALLOCATED_OR_UNWRITTEN_BLOCK | 0x87 | \ No newline at end of file diff --git 
a/docs/en/docs/Installation/Installation-Guide1.md b/docs/en/docs/Installation/Installation-Guide1.md deleted file mode 100644 index db8031a38bcbe874ab2004e36cc2375f89e7d90e..0000000000000000000000000000000000000000 --- a/docs/en/docs/Installation/Installation-Guide1.md +++ /dev/null @@ -1,188 +0,0 @@ -# Installation Guide - -This section describes how to start and configure the Raspberry Pi after [Writing Raspberry Pi Images into the SD card](./Installation-Modes1.md). - - -- [Installation Guide](#installation-guide) - - [Starting the System](#starting-the-system) - - [Logging in to the System](#logging-in-to-the-system) - - [Configuring the System](#configuring-the-system) - - [Expanding the Root Directory Partition](#expanding-the-root-directory-partition) - - [Connecting to the Wi-Fi Network](#connecting-to-the-wi-fi-network) - - -## Starting the System - -After an image is written into the SD card, insert the SD card into the Raspberry Pi and power on the Raspberry Pi. - -For details about the Raspberry Pi hardware, visit the [Raspberry Pi official website](https://www.raspberrypi.org/). - -## Logging in to the System - -You can log in to the Raspberry Pi in either of the following ways: - -1. Local login - - Connect the Raspberry Pi to the monitor (the Raspberry Pi video output interface is Micro HDMI), keyboard, and mouse, and start the Raspberry Pi. The Raspberry Pi startup log is displayed on the monitor. After the Raspberry Pi is started, enter the user name **root** and password **openeuler** to log in. - -2. SSH remote login - - By default, the Raspberry Pi uses the DHCP mode to automatically obtain the IP address. If the Raspberry Pi is connected to a known router, you can log in to the router to check the IP address. The newly added IP address is the Raspberry Pi IP address. - - **Figure 1** Obtain the IP address - ![](./figures/Obtain the IP address) - - According to the preceding figure, the IP address of the Raspberry Pi is **192.168.31.109**. 
You can run the `ssh root@192.168.31.109` command and enter the password `openeuler` to remotely log in to the Raspberry Pi. - -## Configuring the System - -### Expanding the Root Directory Partition - -The space of the default root directory partition is small. Therefore, you need to expand the partition capacity before using it. - -To expand the root directory partition capacity, perform the following procedure: - -1. Run the `fdisk -l` command as the root user to check the drive partition information. The command output is as follows: - - ```shell - # fdisk -l - Disk /dev/mmcblk0: 14.86 GiB, 15931539456 bytes, 31116288 sectors - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - Disklabel type: dos - Disk identifier: 0xf2dc3842 - - Device Boot Start End Sectors Size Id Type - /dev/mmcblk0p1 * 8192 593919 585728 286M c W95 FAT32 (LBA) - /dev/mmcblk0p2 593920 1593343 999424 488M 82 Linux swap / Solaris - /dev/mmcblk0p3 1593344 5044223 3450880 1.7G 83 Linux - ``` - - The drive letter of the SD card is **/dev/mmcblk0**, which contains three partitions: - - - **/dev/mmcblk0p1**: boot partition - - **/dev/mmcblk0p2**: swap partition - - **/dev/mmcblk0p3**: root directory partition - - Here, we need to expand the capacity of `/dev/mmcblk0p3`. - -2. Run the `fdisk /dev/mmcblk0` command as the root user and the interactive command line interface (CLI) is displayed. To expand the partition capacity, perform the following procedure as shown in [Figure 2](#zh-cn_topic_0151920806_f6ff7658b349942ea87f4521c0256c315). - - 1. Enter `p` to check the partition information. - - Record the start sector number of `/dev/mmcblk0p3`. That is, the value in the `Start` column of the `/dev/mmcblk0p3` information. In the example, the start sector number is `1593344`. - - 2. Enter `d` to delete the partition. - - 3. Enter `3` or press `Enter` to delete the partition whose number is `3`. 
That is, the `/dev/mmcblk0p3`. - - 4. Enter `n` to create a partition. - - 5. Enter `p` or press `Enter` to create a partition of the `Primary` type. - - 6. Enter `3` or press `Enter` to create a partition whose number is `3`. That is, the `/dev/mmcblk0p3`. - - 7. Enter the start sector number of the new partition. That is, the start sector number recorded in Step `1`. In the example, the start sector number is `1593344`. - - > ![](./public_sys-resources/icon-notice.gif) **NOTE:** -Do not press **Enter** or use the default parameters. - - 8. Press `Enter` to use the last sector number by default as the end sector number of the new partition. - - 9. Enter `N` without changing the sector ID. - - 10. Enter `w` to save the partition settings and exit the interactive CLI. - - **Figure 2** Expand the partition capacity -![](./figures/Expand the partition capacity) - -3. Run the `fdisk -l` command as the root user to check the drive partition information and ensure that the drive partition is correct. The command output is as follows: - - ```shell - # fdisk -l - Disk /dev/mmcblk0: 14.86 GiB, 15931539456 bytes, 31116288 sectors - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - Disklabel type: dos - Disk identifier: 0xf2dc3842 - - Device Boot Start End Sectors Size Id Type - /dev/mmcblk0p1 * 8192 593919 585728 286M c W95 FAT32 (LBA) - /dev/mmcblk0p2 593920 1593343 999424 488M 82 Linux swap / Solaris - /dev/mmcblk0p3 1593344 31116287 29522944 14.1G 83 Linux - ``` - -4. Run the `resize2fs /dev/mmcblk0p3` command as the root user to increase the size of the unloaded file system. - -5. Run the `df -lh` command to check the drive space information and ensure that the root directory partition has been expanded. 
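As a quick cross-check of the `fdisk -l` output in the expansion steps above, the new size of `/dev/mmcblk0p3` can be recomputed from its start and end sectors. This is an illustrative sketch using the example sector numbers from this guide (assuming 512-byte sectors, as reported in the `fdisk` header; your actual sector numbers will differ):

```shell
# Recompute the size of /dev/mmcblk0p3 from the fdisk -l output above.
# Sector count = End - Start + 1; each sector is 512 bytes.
start=1593344
end=31116287
sectors=$(( end - start + 1 ))
size_mib=$(( sectors * 512 / 1024 / 1024 ))
echo "sectors=${sectors} size_mib=${size_mib}"   # sectors=29522944, about 14.1 GiB
```

A mismatch between the computed sector count and the `Sectors` column usually means the start sector entered during repartitioning differs from the recorded one.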
- - > ![](./public_sys-resources/icon-notice.gif) **NOTE:** -If the root directory partition is not expanded, run the `reboot` command to restart the Raspberry Pi and then run the `resize2fs /dev/mmcblk0p3` command as the root user. - -### Connecting to the Wi-Fi Network - -To connect to the Wi-Fi network, perform the following procedure: - -1. Check the IP address and network adapter information. - - `ip a` - - Obtain information about the wireless network adapter **wlan0**: - - ```text - 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 - link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 - inet 127.0.0.1/8 scope host lo - valid_lft forever preferred_lft forever - inet6 ::1/128 scope host - valid_lft forever preferred_lft forever - 2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000 - link/ether dc:a6:32:50:de:57 brd ff:ff:ff:ff:ff:ff - inet 192.168.31.109/24 brd 192.168.31.255 scope global dynamic noprefixroute eth0 - valid_lft 41570sec preferred_lft 41570sec - inet6 fe80::cd39:a969:e647:3043/64 scope link noprefixroute - valid_lft forever preferred_lft forever - 3: wlan0: mtu 1500 qdisc fq_codel state DOWN group default qlen 1000 - link/ether e2:e6:99:89:47:0c brd ff:ff:ff:ff:ff:ff - ``` - -2. Scan information about available Wi-Fi networks. - - `nmcli dev wifi` - -3. Connect to the Wi-Fi network. - - Run the `nmcli dev wifi connect SSID password PWD` command as the root user to connect to the Wi-Fi network. - - In the command, `SSID` indicates the SSID of the available Wi-Fi network scanned in the preceding step, and `PWD` indicates the password of the Wi-Fi network. For example, if the `SSID` is `openEuler-wifi`and the password is `12345678`, the command for connecting to the Wi-Fi network is `nmcli dev wifi connect openEuler-wifi password 12345678`. The connection is successful. - - ```text - Device 'wlan0' successfully activated with '26becaab-4adc-4c8e-9bf0-1d63cf5fa3f1'. - ``` - -4. 
Check the IP address and wireless network adapter information. - - `ip a` - - ```text - 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 - link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 - inet 127.0.0.1/8 scope host lo - valid_lft forever preferred_lft forever - inet6 ::1/128 scope host - valid_lft forever preferred_lft forever - 2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000 - link/ether dc:a6:32:50:de:57 brd ff:ff:ff:ff:ff:ff - inet 192.168.31.109/24 brd 192.168.31.255 scope global dynamic noprefixroute eth0 - valid_lft 41386sec preferred_lft 41386sec - inet6 fe80::cd39:a969:e647:3043/64 scope link noprefixroute - valid_lft forever preferred_lft forever - 3: wlan0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 - link/ether dc:a6:32:50:de:58 brd ff:ff:ff:ff:ff:ff - inet 192.168.31.110/24 brd 192.168.31.255 scope global dynamic noprefixroute wlan0 - valid_lft 43094sec preferred_lft 43094sec - inet6 fe80::394:d086:27fa:deba/64 scope link noprefixroute - valid_lft forever preferred_lft forever - ``` diff --git a/docs/en/docs/Installation/More-Resources.md b/docs/en/docs/Installation/More-Resources.md deleted file mode 100644 index 0bd1cf551733501720f02ae721327361288e7fa7..0000000000000000000000000000000000000000 --- a/docs/en/docs/Installation/More-Resources.md +++ /dev/null @@ -1,4 +0,0 @@ -# Reference - -- How to Create a Raspberry Pi Image File -- How to Use Raspberry Pi diff --git a/docs/en/docs/Installation/figures/Installation_source.png b/docs/en/docs/Installation/figures/Installation_source.png deleted file mode 100644 index 558374e3260e5218b6528ddd8d021606bf790787..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/Installation_source.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/Target_installation_position - 01.png b/docs/en/docs/Installation/figures/Target_installation_position - 01.png deleted file mode 100644 index 
339d3d96f469f54f5b9c0f3b40fb0cd78935180c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/Target_installation_position - 01.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/confignetwork.png b/docs/en/docs/Installation/figures/confignetwork.png deleted file mode 100644 index 79903b72948a06d3fceff97c11f49d12f7571b94..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/confignetwork.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/disk-encryption-password.png b/docs/en/docs/Installation/figures/disk-encryption-password.png deleted file mode 100644 index ba84e060133644910ff199376e11d2929cfe8d47..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/disk-encryption-password.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/en-us_image_0213178479.png b/docs/en/docs/Installation/figures/en-us_image_0213178479.png deleted file mode 100644 index 62ef0decdf6f1e591059904001d712a54f727e68..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/en-us_image_0213178479.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/en-us_image_0229291229.png b/docs/en/docs/Installation/figures/en-us_image_0229291229.png deleted file mode 100644 index b315531ca7f99d2a045b7933351af96cadc1ad77..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/en-us_image_0229291229.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/en-us_image_0229291236.png b/docs/en/docs/Installation/figures/en-us_image_0229291236.png deleted file mode 100644 index bf466a3d751df4a4c6fd99aecf620ec9adf540a3..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/en-us_image_0229291236.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/host_env8.png 
b/docs/en/docs/Installation/figures/host_env8.png deleted file mode 100644 index d08dcc89f40e1671a55a42fbcb02f26e987a461e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/host_env8.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/installsourceen.png b/docs/en/docs/Installation/figures/installsourceen.png deleted file mode 100644 index 43e59b694ec1afcf8591e8272390da927da9a3fe..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/installsourceen.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/manual-partitioning-page.png b/docs/en/docs/Installation/figures/manual-partitioning-page.png deleted file mode 100644 index 7f3debff53c167acc15dd95c5face0c30e9e8ec3..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/manual-partitioning-page.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/setting-a-system-language.png b/docs/en/docs/Installation/figures/setting-a-system-language.png deleted file mode 100644 index e8e6faa69580e707657cba3f2f589918321a4b4d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/setting-a-system-language.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/setting-date-and-time.png b/docs/en/docs/Installation/figures/setting-date-and-time.png deleted file mode 100644 index 6e366072db2ca698ae2bc317a361e9d38877a2d0..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/setting-date-and-time.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/setting-the-keyboard-layout.png b/docs/en/docs/Installation/figures/setting-the-keyboard-layout.png deleted file mode 100644 index 62b0074220b8e2c8ebca37dceecc92e0c2fcdffc..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/setting-the-keyboard-layout.png and /dev/null differ diff 
--git a/docs/en/docs/Installation/figures/setting-the-network-and-host-name.png b/docs/en/docs/Installation/figures/setting-the-network-and-host-name.png deleted file mode 100644 index b17ebdaafeaa2228ddbe0d8135fee3eabdc1cb76..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/setting-the-network-and-host-name.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/sourceftp.png b/docs/en/docs/Installation/figures/sourceftp.png deleted file mode 100644 index 2e18d3f5c6d999c8a637ebed36ccb740a96d8449..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/sourceftp.png and /dev/null differ diff --git a/docs/en/docs/Installation/figures/sourcenfs.png b/docs/en/docs/Installation/figures/sourcenfs.png deleted file mode 100644 index 3a4564871319deb546776b2542575ed43f2f2a35..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/figures/sourcenfs.png and /dev/null differ diff --git a/docs/en/docs/Installation/public_sys-resources/icon-caution.gif b/docs/en/docs/Installation/public_sys-resources/icon-caution.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/docs/en/docs/Installation/public_sys-resources/icon-danger.gif b/docs/en/docs/Installation/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/Installation/public_sys-resources/icon-notice.gif b/docs/en/docs/Installation/public_sys-resources/icon-notice.gif deleted file mode 100644 index 86024f61b691400bea99e5b1f506d9d9aef36e27..0000000000000000000000000000000000000000 Binary files 
a/docs/en/docs/Installation/public_sys-resources/icon-notice.gif and /dev/null differ diff --git a/docs/en/docs/Installation/public_sys-resources/icon-tip.gif b/docs/en/docs/Installation/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/Installation/public_sys-resources/icon-warning.gif b/docs/en/docs/Installation/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Installation/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/Kernel/how-to-use.md b/docs/en/docs/Kernel/how-to-use.md deleted file mode 100644 index 6079a2bb92663f33297627a8e0f3009acf1d1a25..0000000000000000000000000000000000000000 --- a/docs/en/docs/Kernel/how-to-use.md +++ /dev/null @@ -1,255 +0,0 @@ -## How to Use - -### Tiered-Reliability Memory Management for the OS - -**Overview** - -Memory is divided into two ranges based on high and low reliability. Therefore, memory allocation and release must be managed separately based on the reliability. The OS must be able to control the memory allocation path. User-mode processes use low reliable memory, and kernel-mode processes use highly reliable memory. When the highly reliable memory is insufficient, the allocation needs to fall back to the low reliable memory range or the allocation fails. - -In addition, according to the reliability requirements and types of processes, on-demand allocation of highly reliable and low reliable memory is required. For example, specify highly reliable memory for key processes to reduce the probability of memory errors encountered by key processes. 
Currently, the kernel uses highly reliable memory, and user-mode processes use low reliable memory. As a result, some key or core services, such as the service forwarding process, are unstable. If an exception occurs, I/Os are interrupted, affecting service stability. Therefore, these key services must use highly reliable memory to improve the stability of key processes. - -When a memory error occurs in the system, the OS overwrites the unallocated low reliable memory to clear undetected memory errors. - -**Restrictions** - -- **High-reliability memory for key processes** - - 1. Abuse of the `/proc/<pid>/reliable` API may cause excessive use of highly reliable memory. - 2. The `reliable` attribute of a user-mode process can be modified through the proc API, or inherited directly from its parent process, only after the process is started. `systemd (pid=1)` uses highly reliable memory; its `reliable` attribute has no effect and is not inherited. The `reliable` attribute of kernel-mode threads is invalid. - 3. The program and data segments of processes use highly reliable memory. If the highly reliable memory is insufficient, low reliable memory is used for startup instead. - 4. Common processes also use highly reliable memory in some scenarios, such as HugeTLB, page cache, vDSO, and TMPFS. - -- **Overwrite of unallocated memory** - - The overwrite of unallocated memory can be executed only once and does not support concurrent operations. Executing this feature has the following impacts: - - 1. This feature takes a long time. While one CPU of each node is occupied by the overwrite thread, other tasks cannot be scheduled on that CPU. - 2. During the overwrite process, the zone lock needs to be held. Other service processes must wait until the overwrite is complete. As a result, memory may not be allocated in time. - 3. In the case of concurrent execution, requests are queued and blocked, resulting in a longer delay. 
- - If the machine performance is poor, the kernel RCU stall or soft lockup alarm may be triggered, and the process memory allocation may be blocked. Therefore, this feature can be used only on physical machines if necessary. There is a high probability that the preceding problem occurs on VMs. - - The following table lists the reference data of physical machines. (The actual time required depends on the hardware performance and system load.) - - -Table 1 Test data when the TaiShan 2280 V2 server is unloaded - -| Test Item | Node 0 | Node 1 | Node 2 | Node 3 | -| ------------- | ------ | ------ | ------ | ------ | -| Free Mem (MB) | 109290 | 81218 | 107365 | 112053 | - -The total time is 3.2s. - -**Usage** - -This sub-feature provides multiple APIs. You only need to perform steps 1 to 6 to enable and verify the sub-feature. - -1. Configure `kernelcore=reliable` to enable tiered-reliability memory management. `CONFIG_MEMORY_RELIABLE` is mandatory. Otherwise, tiered-reliability memory management is disabled for the entire system. - -2. You can use the startup parameter `reliable_debug=[F][,S][,P]` to disable the fallback function (`F`), disable the TMPFS to use highly reliable memory (`S`), or disable the read/write cache to use highly reliable memory (`P`). By default, all the preceding functions are enabled. - -3. Based on the address range reported by the BIOS, the system searches for and marks the highly reliable memory. For the NUMA system, not every node needs to reserve reliable memory. However, the lower 4 GB physical space on node 0 must be highly reliable memory. During the system startup, the system allocates memory. If the highly reliable memory cannot be allocated, the low reliable memory is allocated (based on the fallback logic of the mirroring function) or the system cannot be started. If low reliable memory is used, the entire system is unstable. 
Therefore, the highly reliable memory on node 0 must be retained, and the lower 4 GB physical space must be highly reliable memory.

4. After startup, you can check whether memory tiering is enabled from the startup log. If it is enabled, the following information is displayed:

   ```
   mem reliable: init succeed, mirrored memory
   ```

5. The physical address range corresponding to the highly reliable memory can be queried in the startup log. Observe the attributes in the memory map reported by the EFI: memory ranges marked `MR` are highly reliable. In the following excerpt, range `mem06` is highly reliable memory and `mem07` is low reliable memory; their physical address ranges are also listed. (The highly and low reliable memory address ranges cannot be queried directly in any other way.)

   ```
   [    0.000000] efi: mem06: [Conventional Memory| |MR| | | | | | |WB| | | ] range=[0x0000000100000000-0x000000013fffffff] (1024MB)
   [    0.000000] efi: mem07: [Conventional Memory| | | | | | | | |WB| | | ] range=[0x0000000140000000-0x000000083eb6cfff] (28651MB)
   ```

6. During kernel-mode development, whether a `struct page` is reliable can be determined from the zone it belongs to. `ZONE_MOVABLE` is the low reliable memory zone; if the zone ID is smaller than `ZONE_MOVABLE`, the page is in a highly reliable memory zone. The following is an example:

   ```
   bool page_reliable(struct page *page)
   {
           if (!mem_reliable_status() || !page)
                   return false;
           return page_zonenum(page) < ZONE_MOVABLE;
   }
   ```

In addition, the provided APIs are classified by function as follows:

1. **Checking whether the reliability function is enabled at the code layer**: In a kernel module, use the following API to check whether tiered-reliability memory management is enabled. `true` means enabled; `false` means disabled.
   ```
   #include
   bool mem_reliable_status(void);
   ```

2. **Memory hot swap**: If the kernel supports memory hot swap (logical memory hot-add), both the highly and low reliable memory ranges support it. The operation unit is the memory block, the same as in the native process.

   ```
   # Bring the memory online to the highly reliable memory range.
   echo online_kernel > /sys/devices/system/memory/auto_online_blocks
   # Bring the memory online to the low reliable memory range.
   echo online_movable > /sys/devices/system/memory/auto_online_blocks
   ```

3. **Dynamically disabling a tiered management function**: A long value controls the tiered-reliability memory management functions bit by bit:

   - `bit0`: enables tiered-reliability memory management.
   - `bit1`: disables fallback to the low reliable memory range.
   - `bit2`: disables TMPFS use of highly reliable memory.
   - `bit3`: disables page cache use of highly reliable memory.

   Other bits are reserved for extension. To change the value, write the following proc API (permission 600). The value range is 0-15. The sub-functions are processed only when `bit0` is `1`; otherwise, all functions are disabled.

   ```
   # bit0 is 1: the general function stays enabled while bits 1-3 disable the sub-functions.
   echo 15 > /proc/sys/vm/reliable_debug
   # All functions are disabled because bit0 is 0.
   echo 14 > /proc/sys/vm/reliable_debug
   ```

   This API can only disable functions. It cannot re-enable a function that has been disabled at startup or during running.

   Note: This API is an escape hatch, intended only for disabling tiered-reliability memory management in abnormal scenarios or during commissioning. Do not use it as a regular function.

4. **Viewing highly reliable memory statistics**: Call the native `/proc/meminfo` API.
   - `ReliableTotal`: total size of reliable memory managed by the kernel.
   - `ReliableUsed`: total size of reliable memory used by the system, including the reserved memory used in the system.
   - `ReliableBuddyMem`: remaining reliable memory in the buddy system.
   - `ReliableTaskUsed`: highly reliable memory used by systemd and key user processes, including anonymous pages and file pages.
   - `ReliableShmem`: highly reliable memory usage of the shared memory, including the total highly reliable memory used by the shared memory, TMPFS, and rootfs.
   - `ReliableFileCache`: highly reliable memory usage of the read/write cache.

5. **Overwrite of unallocated memory**: This function requires its configuration item to be enabled.

   Enable `CONFIG_CLEAR_FREELIST_PAGE` and add the startup parameter `clear_freelist`. Then call the proc API. The value can only be `1` (permission 0200).

   ```
   echo 1 > /proc/sys/vm/clear_freelist_pages
   ```

   Note: This feature depends on the startup parameter `clear_freelist`. The kernel matches only the prefix of the startup parameter, so the feature also takes effect for parameters with a misspelled suffix, such as `clear_freelisttt`.

   To prevent misoperations, the kernel module parameter `cfp_timeout_ms` specifies the maximum execution duration of the overwrite function. If the overwrite times out, the operation exits even if it is not complete. The default value is `2000` ms (permission 0644).

   ```
   echo 500 > /sys/module/clear_freelist_page/parameters/cfp_timeout_ms # Set the timeout to 500 ms.
   ```

6. **Checking and modifying the reliability attribute of the current process**: Call the `/proc/<pid>/reliable` API to check whether a process is a highly reliable process. If the attribute is written while the process is running, child processes inherit it; if a child process does not require the attribute, modify the child's attribute manually.
   systemd and kernel threads do not support reading or writing this attribute. The value can be `0` or `1`; the default is `0`, indicating a low reliable process (permission 0644).

   ```
   # Change the process whose PID is 1024 to a highly reliable process. Afterwards, the process
   # requests memory from the highly reliable memory range; if that allocation fails, the
   # allocation may fall back to the low reliable memory range.
   echo 1 > /proc/1024/reliable
   ```

7. **Setting the upper limit of highly reliable memory requested by user-mode processes**: Call `/proc/sys/vm/task_reliable_limit` to modify the upper limit of highly reliable memory available to user-mode processes. The value range is [`ReliableTaskUsed`, `ReliableTotal`], in bytes (permission 0644). Notes:

   - The default value is `ulong_max`, indicating no limit.
   - If the value is `0`, reliable processes cannot use highly reliable memory. With fallback enabled, allocations fall back to the low reliable memory range; otherwise, OOM occurs.
   - If the value is not `0` and the limit is triggered, the fallback function, if enabled, redirects allocations to the low reliable memory range; if fallback is disabled, OOM is returned.

### Highly Reliable Memory for the Read and Write Cache

**Overview**

A page cache is also called a file cache. When Linux reads or writes files, the page cache caches the logical content of the files to accelerate access to images and data on disks. If low reliable memory is allocated to page caches, a UCE may be triggered during access, causing system exceptions. Therefore, the read/write cache (page cache) needs to be placed in the highly reliable memory zone.
In addition, to prevent the highly reliable memory from being exhausted by excessive page cache allocations (unlimited by default), the total number of page caches and the total amount of reliable memory they use need to be limited.

**Restrictions**

1. When the page cache exceeds the limit, it is reclaimed periodically. If the page cache is generated faster than it is reclaimed, the number of page caches may exceed the specified limit.
2. The usage of `/proc/sys/vm/reliable_pagecache_max_bytes` has certain restrictions. In some scenarios, the page cache forcibly uses reliable memory; for example, when file system metadata (such as inodes and dentries) is read, the reliable memory used by the page cache exceeds the API limit. In this case, you can run `echo 2 > /proc/sys/vm/drop_caches` to release inodes and dentries.
3. When the highly reliable memory used by the page cache exceeds the `reliable_pagecache_max_bytes` limit, low reliable memory is allocated by default. If low reliable memory cannot be allocated either, the native process is followed.
4. FileCache statistics are first collected in the percpu cache. Only when a percpu value exceeds a threshold is it added to the system-wide counter shown in `/proc/meminfo`. `ReliableFileCache` in `/proc/meminfo` has no such threshold, so its value may be greater than that of `FileCache`.
5. Write cache scenarios are restricted by `dirty_limit` (controlled by `/proc/sys/vm/dirty_ratio`, the percentage of dirty pages on a single memory node). If the threshold is exceeded, the current zone is skipped. For tiered-reliability memory, because highly and low reliable memory are in different zones, the write cache may trigger fallback on the local node and use its low reliable memory. You can run `echo 100 > /proc/sys/vm/dirty_ratio` to cancel the restriction.
6. The highly reliable memory feature for the read/write cache limits page cache usage. System performance is affected in the following scenarios:
   - If the upper limit of the page cache is too small, I/O increases and system performance is affected.
   - If the page cache is reclaimed too frequently, system freezing may occur.
   - If a large amount of page cache is reclaimed each time the limit is exceeded, system freezing may occur.

**Usage**

Highly reliable memory is enabled by default for the read/write cache; to disable it, configure `reliable_debug=P`. In addition, the page cache cannot be used without limit. The function of limiting the page cache size depends on the `CONFIG_SHRINK_PAGECACHE` configuration item.

`FileCache` in `/proc/meminfo` shows the usage of the page cache, and `ReliableFileCache` shows the usage of reliable memory in the page cache.

The function of limiting the page cache size depends on several proc APIs, defined in `/proc/sys/vm/`, that control page cache usage. For details, see the following table.

| API Name (Native/New) | Permission | Description | Default Value |
| ------------------------------------ | ---- | ------------------------------------------------------------ | ------------------------------------------ |
| `cache_reclaim_enable` (native) | 644 | Whether to enable the page cache restriction function.<br>Value range: `0` or `1`. If an invalid value is input, an error is returned.<br>Example: `echo 1 > cache_reclaim_enable` | 1 |
| `cache_limit_mbytes` (new) | 644 | Upper limit of the cache, in MB.<br>Value range: the minimum value is 0, indicating that the restriction function is disabled; the maximum value is the memory size in MB, for example, the value of `MemTotal` in `meminfo` converted to MB (as displayed by `free -m`).<br>Example: `echo 1024 > cache_limit_mbytes`<br>Others: It is recommended that the cache upper limit be greater than or equal to half of the total memory. Otherwise, I/O performance may be affected if the cache is too small. | 0 |
| `cache_reclaim_s` (native) | 644 | Interval for triggering cache reclamation, in seconds. The system creates work queues based on the number of online CPUs: with *n* CPUs, *n* work queues are created, and each performs reclamation every `cache_reclaim_s` seconds. This parameter is compatible with CPU online and offline: taking a CPU offline decreases the number of work queues; bringing one online increases it.<br>Value range: the minimum value is `0` (periodic reclamation disabled) and the maximum value is `43200`. If an invalid value is input, an error is returned.<br>Example: `echo 120 > cache_reclaim_s`<br>Others: You are advised to set the reclamation interval to several minutes (for example, 2 minutes). Otherwise, frequent reclamation may cause system freezing. | 0 |
| `cache_reclaim_weight` (native) | 644 | Weight of each reclamation. Each CPU of the kernel expects to reclaim `32 x cache_reclaim_weight` pages each time. This weight applies both to reclamation triggered by the page upper limit and to periodic page cache reclamation.<br>Value range: 1 to 100. If an invalid value is input, an error is returned.<br>Example: `echo 10 > cache_reclaim_weight`<br>Others: You are advised to set this parameter to `10` or smaller. Otherwise, the system may freeze whenever too much memory is reclaimed at once. | 1 |
| `reliable_pagecache_max_bytes` (new) | 644 | Total amount of highly reliable memory available to the page cache.<br>Value range: 0 to the maximum highly reliable memory, in bytes. You can query the maximum highly reliable memory in `/proc/meminfo`. If an invalid value is input, an error is returned.<br>Example: `echo 4096000 > reliable_pagecache_max_bytes` | Maximum value of the unsigned long type, indicating that the usage is not limited. |

### Highly Reliable Memory for TMPFS

**Overview**

If TMPFS is used as rootfs, it stores core files and data used by the OS. However, TMPFS uses low reliable memory by default, which makes those core files and data unreliable. Therefore, TMPFS must use highly reliable memory.

**Usage**

By default, highly reliable memory is enabled for TMPFS. To disable it, configure `reliable_debug=S`. You can also disable it dynamically through `/proc/sys/vm/reliable_debug`, but you cannot dynamically re-enable it.

When TMPFS uses highly reliable memory, you can check `ReliableShmem` in `/proc/meminfo` to view the highly reliable memory already used by TMPFS.

By default, the upper limit for TMPFS to use highly reliable memory is half of the physical memory (except when TMPFS is used as rootfs). Conventional SYS V shared memory is restricted by `/proc/sys/kernel/shmmax` and `/proc/sys/kernel/shmall`, which can be configured dynamically. It is also restricted by the highly reliable memory available to TMPFS. For details, see the following table.

| **Parameter** | **Description** |
|---------------------------------|--------------------------------|
| `/proc/sys/kernel/shmmax` (native) | Size of a single SYS V shared memory range. |
| `/proc/sys/kernel/shmall` (native) | Total size of the SYS V shared memory that can be used. |

The `/proc/sys/vm/shmem_reliable_bytes_limit` API is added so that you can set the available highly reliable size (in bytes) of the system-level TMPFS. The default value is `LONG_MAX`, indicating no limit. The value ranges from 0 to the total reliable memory size of the system. The permission is 644. When fallback is disabled and memory usage reaches the upper limit, an error indicating that no memory is available is returned.
When fallback is enabled, the system attempts to allocate memory from the low reliable memory zone. Example:

```
echo 10000000 > /proc/sys/vm/shmem_reliable_bytes_limit
```

### UCE Does Not Reset After the Switch from the User Mode to Kernel Mode

**Overview**

In the tiered-reliability memory management solution, the kernel and key processes use highly reliable memory, while most user-mode processes use low reliable memory. At run time, a large amount of data is exchanged between user mode and kernel mode. When data is transferred to kernel mode, data in the low reliable memory zone is copied to the highly reliable memory zone, and the copy is performed in kernel mode. If a UCE occurs while the user-mode data is read, that is, a kernel-mode memory-consumption UCE occurs, the system triggers a panic. This sub-feature avoids a system reset when a UCE occurs during the switch from user mode to kernel mode in the following scenarios: copy-on-write (COW), copy_to_user, copy_from_user, get_user, put_user, and core dump. Other scenarios are not supported.

**Restrictions**

1. Requires ARMv8.2 or later with the RAS feature.
2. This feature changes the synchronous exception handling policy. It therefore takes effect only when the kernel receives a synchronous exception reported by the firmware.
3. Kernel processing depends on the error type reported by the BIOS. The kernel cannot process fatal hardware errors, only recoverable ones.
4. Only the COW, copy_to_user (including reading the page cache), copy_from_user, get_user, put_user, and core dump scenarios are supported.
5. In the core dump scenario, UCE tolerance must be implemented in the write API of the file system. This feature supports only three common file systems: ext4, TMPFS, and PipeFS.
   The corresponding error tolerance APIs are as follows:
   - PipeFS: `copy_page_from_iter`
   - ext4/TMPFS: `iov_iter_copy_from_user_atomic`

**Usage**

Ensure that `CONFIG_ARCH_HAS_COPY_MC` is enabled in the kernel. If `/proc/sys/kernel/machine_check_safe` is set to `1`, this feature is enabled for all scenarios; if it is set to `0`, the feature is disabled for all scenarios. Other values are invalid.

The fault tolerance mechanism in each scenario is as follows:

| **No.** | **Scenario** | **Symptom** | **Mitigation Measure** |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | `copy_from/to_user`: basic user-mode switch, involving syscall, sysctl, and procfs operations | If a UCE occurs during the copy, the kernel resets. | If a UCE occurs, kill the current process. The kernel does not reset. |
| 2 | `get/put_user`: simple variable copy, mainly in netlink scenarios | If a UCE occurs during the copy, the kernel resets. | If a UCE occurs, kill the current process. The kernel does not reset. |
| 3 | COW: forking a child process triggers copy-on-write | If a UCE occurs during COW, the kernel resets. | If a UCE occurs, kill the related processes. The kernel does not reset. |
| 4 | Read cache: user mode uses low reliable memory. When a user-mode program reads or writes files, the OS uses idle memory to cache disk files, improving performance. When the user-mode program reads a file, the kernel accesses the cache first. | A UCE occurs, causing the kernel to reset. | If a UCE occurs, kill the current process. The kernel does not reset. |
| 5 | UCE triggered by memory access during a core dump | A UCE occurs, causing the kernel to reset. | If a UCE occurs, kill the current process. The kernel does not reset. |
| 6 | Write cache: a UCE is triggered when the write cache is flushed back to the disk. | Cache flushing is actually disk DMA data migration. If a UCE is triggered during this process, the page write fails after a timeout. As a result, data inconsistency occurs and the file system becomes unavailable; if the data is key data, the kernel resets. | No solution is available. The kernel resets. |
| 7 | Kernel startup parameters and module parameters use highly reliable memory. | / | Not supported. The risk is reduced. |
| 8 | relayfs: a file system that quickly forwards data from kernel mode to user mode. | / | Not supported. The risk is reduced. |
| 9 | `seq_file`: transfers kernel data to user mode as a file. | / | Not supported. The risk is reduced. |

Most user-mode data uses low reliable memory, so this project covers only the scenarios where user-mode data is read in kernel mode. In Linux, data can be exchanged between user space and kernel space through kernel startup parameters, module parameters, sysfs, sysctl, syscall (system call), netlink, procfs, seq_file, debugfs, and relayfs. There are two other cases: COW when a process is created, and the read/write file cache (page cache).

In the sysfs, syscall, netlink, and procfs modes, data is transferred from user mode to kernel mode through copy_from_user or get_user.

The user mode can be switched to the kernel mode in the following scenarios:

copy_from_user, get_user, COW, read cache, and write cache flushing.
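All of the scenarios above are gated by the single sysctl described in Usage. The following is a minimal, hedged enablement sketch (the `/boot/config-*` path is distribution-dependent and assumed here):

```shell
# Sketch: enable UCE tolerance only if the running kernel was built with the
# required option. The /boot/config path is an assumption and may differ.
if grep -q '^CONFIG_ARCH_HAS_COPY_MC=y' "/boot/config-$(uname -r)"; then
    # Enable the feature for all supported scenarios.
    echo 1 > /proc/sys/kernel/machine_check_safe
else
    echo "CONFIG_ARCH_HAS_COPY_MC not set; UCE tolerance unavailable" >&2
fi
```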
The kernel mode can be switched to the user mode in the following scenarios:

relayfs, seq_file, copy_to_user, and put_user.

## Overview

This feature allows you to allocate memory with the corresponding reliability as required and mitigates, to some extent, the impact of possible UCEs or CEs. In this way, overall service reliability does not deteriorate when partial memory mirroring (a RAS feature called address range mirroring) is used.

## Restrictions

This section describes the general constraints of this feature. Each subfeature has specific constraints, which are described in the corresponding section.

**Compatibility**

1. Currently, this feature applies only to ARM64.
2. The hardware must support partial memory mirroring (address range mirroring), that is, memory whose attribute is `EFI_MEMORY_MORE_RELIABLE` is reported through the standard UEFI API. Common memory does not need to be marked. The mirrored memory is the highly reliable memory, and the common memory is the low reliable memory.
3. Tiering of highly and low reliable memory is implemented with the kernel's memory management zones, so pages cannot dynamically flow between tiers (that is, pages cannot move between zones).
4. Continuous physical memory with different reliability is divided into different memblocks.
As a result, the allocation of large continuous physical memory blocks may be restricted after memory tiering is enabled.
5. To enable this feature, the value of `kernelcore` must be `reliable`, which is incompatible with the other values of this parameter.

**Design Specifications**

1. During kernel-mode development, pay attention to the following points when allocating memory:

   - If the memory allocation API supports a specified `gfp_flag`, only allocations whose `gfp_flag` contains both `__GFP_HIGHMEM` and `__GFP_MOVABLE` are forcibly allocated from the common memory range or redirected to the reliable memory range. Other `gfp_flags` are not intervened.

   - High-reliability memory is allocated from slab, slub, and slob. (If a single allocation is greater than `KMALLOC_MAX_CACHE_SIZE` and `gfp_flag` specifies the common memory range, low reliable memory may be allocated.)

2. During user-mode development, pay attention to the following points when allocating memory:

   - After a common process is changed to a key process, highly reliable memory is used only in the actual physical memory allocation phase (page fault). The attribute of previously allocated memory does not change, and vice versa. Therefore, memory allocated while a common process was starting, before it was changed to a key process, may not be highly reliable memory. Whether the configuration takes effect can be verified by checking whether the physical address corresponding to a virtual address belongs to the highly reliable memory range.
   - Caching mechanisms in libc libraries and similar allocators (ptmalloc, tcmalloc, and DPDK), such as chunks in glibc, use cache logic to improve performance. However, this memory caching makes the user-side allocation logic inconsistent with the kernel's. When a common process becomes a key process, this feature cannot take effect for such cached memory (it takes effect only when the kernel actually allocates memory).

3.
When an upper-layer service applies for memory and the highly reliable memory is insufficient (the native min watermark of the zone is triggered) or the corresponding limit is reached, the page cache is preferentially released to try to reclaim highly reliable memory. If the memory still cannot be allocated, the kernel chooses OOM or fallback to the low reliable memory range, depending on the fallback switch, to complete the allocation. (Fallback means that when the memory of a memory management zone or node is insufficient, memory is allocated from another zone or node.)

4. A dynamic memory migration mechanism such as `NUMA_BALANCING` may migrate allocated highly or low reliable memory to another node. Because the migration loses the memory allocation context and the target node may not have the corresponding reliable memory, the reliability of the memory after migration may not be as expected.

5. The following configuration files are introduced for the usage of user-mode highly reliable memory:

   - **/proc/sys/vm/task_reliable_limit**: upper limit of the highly reliable memory used by key processes (including systemd). It covers anonymous pages and file pages; the SHMEM used by the process is also counted (included in anonymous pages).

   - **/proc/sys/vm/reliable_pagecache_max_bytes**: soft upper limit of the highly reliable memory used by the global page cache. It limits the amount of highly reliable page cache used by common processes. By default, the system does not limit the highly reliable memory used by page caches. This restriction does not apply to scenarios such as highly reliable processes and file system metadata. Regardless of whether fallback is enabled, when a common process hits this limit, low reliable memory is allocated by default; if low reliable memory cannot be allocated, the native process is followed.
   - **/proc/sys/vm/shmem_reliable_bytes_limit**: soft upper limit of the highly reliable memory used by the global SHMEM. It limits the amount of highly reliable memory used by the SHMEM of common processes. By default, the system does not limit it. Highly reliable processes are not subject to this restriction. When fallback is disabled and a common process hits this limit, memory allocation fails, but OOM does not occur (consistent with the native process).

   If the above limits are reached, memory allocation fallback or OOM may occur.

   Memory allocation caused by page faults that key processes generate in TMPFS or the page cache may trigger several of these limits at once. For the interaction between the limits, see the following table.

   | task_reliable_limit Reached | reliable_pagecache_max_bytes or shmem_reliable_bytes_limit Reached | Memory Allocation Processing Policy |
   | --------------------------- | ------------------------------------------------------------ | ------------------------------------------------ |
   | Yes | Yes | The page cache is reclaimed first to meet the allocation request. Otherwise, fallback or OOM occurs. |
   | Yes | No | The page cache is reclaimed first to meet the allocation request. Otherwise, fallback or OOM occurs. |
   | No | No | Highly reliable memory is allocated first. Otherwise, fallback or OOM occurs. |
   | No | Yes | Highly reliable memory is allocated first. Otherwise, fallback or OOM occurs. |

   Key processes comply with `task_reliable_limit`. If `task_reliable_limit` is greater than the TMPFS or page cache limit, the page cache and TMPFS content generated by key processes still use highly reliable memory. As a result, the highly reliable memory used by the page cache and TMPFS can exceed the corresponding limit.
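As the table shows, the processing policy for a key-process page fault depends only on whether `task_reliable_limit` has been reached; the page cache and SHMEM limits do not change it. A tiny hypothetical shell function, for reasoning about the table only (not a real interface):

```shell
# Hypothetical model of the decision table above: given 1 (task_reliable_limit
# reached) or 0 (not reached), print the resulting allocation policy.
reliable_alloc_policy() {
    if [ "$1" -eq 1 ]; then
        echo "reclaim page cache first, then fallback or OOM"
    else
        echo "allocate highly reliable memory first, then fallback or OOM"
    fi
}

reliable_alloc_policy 1   # prints: reclaim page cache first, then fallback or OOM
```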
   When `task_reliable_limit` is triggered and the highly reliable file cache is smaller than 4 MB, the file cache is not reclaimed synchronously; page cache allocations then fall back to the low reliable memory range. If the highly reliable file cache is larger than 4 MB, it is reclaimed preferentially to satisfy the allocation. However, when its size is close to 4 MB, direct cache reclamation is triggered more frequently. Because the lock overhead of direct cache reclamation is high, CPU usage rises, and file read/write performance approaches raw disk performance.

6. Even if the system has sufficient highly reliable memory, an allocation may fall back to the low reliable memory range.

   - If the memory cannot be migrated to another node for allocation, the allocation falls back to the low reliable memory range of the current node. Common scenarios:
     - The memory allocation contains `__GFP_THISNODE` (for example, transparent huge page allocation), so memory can be allocated only from the current node. If the node's highly reliable memory does not meet the allocation requirement, the system attempts to allocate from the node's low reliable memory range.
     - A process is bound to a node that contains common memory by commands such as `taskset` and `numactl`.
     - A process is scheduled to a common memory node by the native memory scheduling mechanism of the system.
   - A highly reliable memory allocation that hits the highly reliable memory usage threshold also falls back to the low reliable memory range.

7. If tiered-reliability memory fallback is disabled, highly reliable memory cannot be extended with low reliable memory.
As a result, user-mode applications may be incompatible with this feature when estimating memory usage, for example, when determining available memory from `MemFree`.

8. If tiered-reliability memory fallback is enabled, the native fallback behavior is affected. The main difference lies in the selection of the memory management zone and NUMA node.

   - Fallback process of **common user processes**: low reliable memory of the local node -> low reliable memory of remote nodes.
   - Fallback process of **key user processes**: highly reliable memory of the local node -> highly reliable memory of remote nodes. If no memory has been allocated and the `reliable` fallback function is enabled, the system retries as follows: low reliable memory of the local node -> low reliable memory of remote nodes.

**Scenarios**

1. The default page size (`PAGE_SIZE`) is 4 KB.
2. The lower 4 GB memory of NUMA node 0 must be highly reliable, and the highly and low reliable memory sizes must meet the kernel requirements; otherwise, the system may fail to start. There is no requirement on the highly reliable memory size of other nodes. However, if a node has no highly reliable memory or its highly reliable memory is insufficient, its per-node management structure may be located in the highly reliable memory of another node (because the per-node management structure is a kernel data structure and must reside in a highly reliable memory zone). As a result, kernel warnings such as `vmemmap_verify` alarms are generated and performance is affected.
3. Some statistics of this feature (such as the total amount of highly reliable memory for TMPFS) are collected with the percpu technique, which incurs extra overhead. To reduce the performance impact, the sum is computed with a certain tolerance; an error of less than 10% is normal.
4.
Huge page limit: - - In the startup phase, static huge pages are low reliable memory. By default, static huge pages allocated during running are also low reliable memory. If memory allocation occurs in the context of a key process, the allocated huge pages are highly reliable memory. - - In the transparent huge page (THP) scenario, if any of the 512 4 KB pages to be combined into a 2 MB huge page is a highly reliable page, the newly allocated 2 MB huge page uses highly reliable memory. That is, THP uses more highly reliable memory. - - The allocation of the reserved 2 MB huge page complies with the native fallback process. If the current node lacks low reliable memory, the allocation falls back to the highly reliable range. - - In the startup phase, 2 MB huge pages are reserved. If no memory node is specified, the load is balanced across the memory nodes for huge page reservation. If a memory node lacks low reliable memory, highly reliable memory is used according to the native process. -5. Currently, only the normal system startup scenario is supported. In some abnormal scenarios, kernel startup may be incompatible with the memory tiering function, for example, the kdump startup phase. (Currently, kdump can be automatically disabled. In other scenarios, it needs to be disabled by upper-layer services.) -6. During swap-in and swap-out, memory offline, KSM, CMA, and gigantic page operations, the reliability level of newly allocated pages is not taken into account. As a result, the page attributes may not be as expected (for example, the highly reliable memory usage statistics are inaccurate and the reliability level of the allocated memory is not as expected). - -**Impact on Performance** - -- Due to the introduction of tiered-reliability memory management, judgment logic is added to physical page allocation, which affects the performance. The impact depends on the system status, memory type, and high and low reliable memory margin of each node.
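The scenarios above note that the lower 4 GB of NUMA node 0 must be highly reliable, and that the configuration can be verified by checking whether a virtual address maps to a physical address in that range. The sketch below decodes a sample `/proc/<pid>/pagemap` entry (bit 63: page present; bits 0-54: PFN) and tests the resulting physical address against the 4 GB boundary. The entry value is a hard-coded sample; reading a real 8-byte entry for a live process requires root privileges, and the 4 GB boundary comes from this guide's scenario, not a general rule.

```shell
#!/bin/sh
# Decode a sample /proc/<pid>/pagemap entry. In a real check, read the
# 8-byte entry at offset (vaddr / PAGE_SIZE) * 8 of /proc/<pid>/pagemap.
entry=$(( (1 << 63) | 0x12345 ))        # sample: present bit set, PFN 0x12345
present=$(( (entry >> 63) & 1 ))        # bit 63: page present
pfn=$(( entry & ((1 << 55) - 1) ))      # bits 0-54: page frame number
paddr=$(( pfn * 4096 ))                 # PAGE_SIZE is 4 KB per this guide
if [ "$present" -eq 1 ] && [ "$paddr" -lt $((4 * 1024 * 1024 * 1024)) ]; then
  echo "physical address $paddr is within the lower 4 GB"
fi
```

If the physical address falls below the 4 GB boundary on node 0, the page was served from the highly reliable range.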
-- This feature introduces highly reliable memory usage statistics, which affects system performance. -- When `task_reliable_limit` is triggered, the cache in the highly reliable zone is reclaimed synchronously, which increases the CPU usage. In the scenario where `task_reliable_limit` is triggered by page cache allocation (file read/write operations, such as dd), if the available highly reliable memory (ReliableFileCache is considered as available memory) is close to 4 MB, cache reclamation is triggered more frequently. The overhead of direct cache reclamation is high, causing high CPU usage. In this case, the file read/write performance is close to the raw disk performance. \ No newline at end of file diff --git a/docs/en/docs/KernelLiveUpgrade/KernelLiveUpgrade.md b/docs/en/docs/KernelLiveUpgrade/KernelLiveUpgrade.md deleted file mode 100644 index 401fae920ffdd7d4b27361040483b6dfcfaa9559..0000000000000000000000000000000000000000 --- a/docs/en/docs/KernelLiveUpgrade/KernelLiveUpgrade.md +++ /dev/null @@ -1 +0,0 @@ -# Kernel Hot Upgrade Guide This document describes how to install, deploy, and use the kernel hot upgrade feature on openEuler. The feature is implemented through quick kernel restart and hot program migration, and a user-mode tool is provided to automate this process. This document is intended for community developers, open-source enthusiasts, and partners who want to learn about and use the openEuler system and kernel hot upgrade. Users are expected to know the basics of the Linux operating system. ## Application Scenario Kernel hot upgrade saves and restores process running data with second-level end-to-end latency. It applies when the following two conditions are met: 1. The kernel needs to be restarted due to vulnerability fixing or a version update. 2. Services running on the kernel can be quickly recovered after the kernel is restarted.
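The second-level end-to-end latency described above can be checked empirically by timing the window between starting the upgrade and the services becoming reachable again. The sketch below is illustrative and self-contained: a `sleep` stands in for the actual upgrade, since a real run requires an openEuler host with the user-mode tool installed.

```shell
#!/bin/sh
# Measure end-to-end downtime around a kernel hot upgrade (illustrative).
# In a real run, replace the sleep with the upgrade itself followed by a
# poll that waits until the service under test responds again.
start=$(date +%s)
sleep 1                 # placeholder for: upgrade + service recovery
end=$(date +%s)
downtime=$((end - start))
echo "end-to-end downtime: ${downtime}s"
```

A result within a few seconds matches the latency goal stated in this guide.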
\ No newline at end of file diff --git a/docs/en/docs/KernelLiveUpgrade/common-problems-and-solutions.md b/docs/en/docs/KernelLiveUpgrade/common-problems-and-solutions.md deleted file mode 100644 index a529bf7d454c91ecd9fcd7090feb075b000cc5fc..0000000000000000000000000000000000000000 --- a/docs/en/docs/KernelLiveUpgrade/common-problems-and-solutions.md +++ /dev/null @@ -1,29 +0,0 @@ -# Common Problems and Solutions - -1. After the `nvwa update` command is executed, the system is not upgraded. - - Cause: An error occurs when the running information is retained or the kernel is replaced. - - Solution: View the logs to find the error cause. - -2. After the acceleration feature is enabled, the `nvwa` command fails to be executed. - - Cause: NVWA provides many acceleration features, including quick kexec, pin memory, and cpu park. These features involve the cmdline configuration and memory allocation. When selecting the memory, run `cat /proc/iomem` to ensure that the selected memory does not conflict with that of other programs. If necessary, run the `dmesg` command to check whether error logs exist after the feature is enabled. - -3. After the hot upgrade, the related process is not recovered. - - Cause: Check whether the nvwa service is running. If the nvwa service is running, the service or process may fail to be recovered. - - Solution: Run the `service nvwa status` command to view the NVWA logs. If the service fails to be started, check whether the service is enabled, and then use `journalctl` to view the logs of the corresponding service. Further logs are stored in the process or service folder under the path specified by **criu_dir**. The dump.log file stores the logs generated when the running information is retained, and the restore.log file stores the logs generated during process recovery. - -4. The recovery fails, and the log displays "Can't fork for 948: File exists."
- - Cause: The kernel hot upgrade tool finds that the PID of the program is occupied during program recovery. - - Solution: The current kernel does not provide a mechanism for retaining PIDs. Related policies are being developed. This restriction will be resolved in later kernel versions. Currently, you can only manually restart related processes. - -5. When the `nvwa` command is used to save and recover a simple program (hello world), the system displays a message indicating that the operation fails or the program is not running. - - Cause: There are many restrictions on the use of CRIU. - - Solution: View the NVWA logs. If the error is related to the CRIU, check the dump.log or restore.log file in the corresponding directory. For details about the usage restrictions related to the CRIU, see [CRIU WiKi](https://criu.org/What_cannot_be_checkpointed). diff --git a/docs/en/docs/Kmesh/appendixes.md b/docs/en/docs/Kmesh/appendixes.md deleted file mode 100644 index ffa7b4d28662efb22fcded563f759fd035ac9140..0000000000000000000000000000000000000000 --- a/docs/en/docs/Kmesh/appendixes.md +++ /dev/null @@ -1,3 +0,0 @@ -# Appendixes - -For more details, visit the [Kmesh](https://gitee.com/openeuler/Kmesh#kmesh) repository. diff --git a/docs/en/docs/Kmesh/faqs.md b/docs/en/docs/Kmesh/faqs.md deleted file mode 100644 index cd26472eef5910828ff0f59bd863b00ad6401cf4..0000000000000000000000000000000000000000 --- a/docs/en/docs/Kmesh/faqs.md +++ /dev/null @@ -1,23 +0,0 @@ -# FAQs - -## 1. If the Kmesh Service Is Started in the Cluster Mode and Control Plane IP Address Is Not Configured, Kmesh Reports an Error and Exits - -![](./figures/not_set_cluster_ip.png) - -Possible cause: In cluster mode, the Kmesh service communicates with the control plane program to obtain configuration information. Therefore, you need to configure the correct IP address of the control plane program. 
- -Solution: Configure the correct IP address of the control plane program by referring to the cluster mode in [Installation and Deployment](./installation-and-deployment.md). - -## 2. Kmesh Reports "get kube config error!" When Started - -![](./figures/get_kubeconfig_error.png) - -Possible cause: In cluster mode, the Kmesh service automatically obtains the IP address of the control plane program based on Kubernetes configurations. If the **kubeconfig** path is not configured in the system, Kmesh will fail to obtain the configurations and report "get kube config error!" (If the IP address of the control plane program has been correctly configured in the Kmesh configuration file, ignore this problem.) - -Solution: Configure **kubeconfig** as follows: - -```shell -mkdir -p $HOME/.kube -sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config -sudo chown $(id -u):$(id -g) $HOME/.kube/config -``` diff --git a/docs/en/docs/Kmesh/getting-to-know-kmesh.md b/docs/en/docs/Kmesh/getting-to-know-kmesh.md deleted file mode 100644 index 2264bd9b7770b8e0cf044cf3c663d09889dd69ab..0000000000000000000000000000000000000000 --- a/docs/en/docs/Kmesh/getting-to-know-kmesh.md +++ /dev/null @@ -1,36 +0,0 @@ -# Getting to Know Kmesh - -## Introduction - -As more and more applications become cloud-native, the scale of cloud applications and application SLA requirements place high demands on cloud infrastructure. - -Kubernetes-based cloud infrastructure helps applications achieve agile deployment and management, but it lacks application traffic orchestration capabilities. The emergence of service mesh has effectively compensated for these shortcomings of Kubernetes, allowing Kubernetes to fully realize agile cloud application development and O&M.
However, as the application of service mesh gradually deepens, the current sidecar-based mesh architecture has obvious performance defects in the data plane, and the following problems have become a consensus in the industry: - -* High latency - Take the Istio service mesh as an example. The single-hop access delay of a service is increased by 2.65 ms, which cannot meet the requirements of latency-sensitive applications. - -* High overhead - In Istio, each sidecar consumes 50 MB of memory and occupies 2 CPU cores. This causes high overhead in large-scale clusters and decreases the deployment density of service containers. - -Based on the programmable kernel, Kmesh moves mesh traffic management down to the OS level and shortens the data path from 3 hops to 1. This greatly improves the latency performance of the mesh data plane and helps services innovate quickly. - -## Architecture - -![](./figures/kmesh-arch.png) - -Main components of Kmesh include: - -* kmesh-controller: - The management program of Kmesh, which manages the Kmesh lifecycle, XDS interconnection, and O&M observation. - -* kmesh-api: - The external API layer of Kmesh, including APIs for converted XDS orchestration and O&M observation. - -* kmesh-runtime: - The runtime for orchestration of traffic in Layer 3 to Layer 7, which is implemented in the kernel. - -* kmesh-orchestration: - Orchestration of traffic in Layer 3 to Layer 7 based on eBPF, implementing functions such as routing, gray release, and load balancing. - -* kmesh-probe: - O&M observation probe, which provides end-to-end observation.
diff --git a/docs/en/docs/LLM/figures/chatglm.png b/docs/en/docs/LLM/figures/chatglm.png deleted file mode 100644 index 4a28ef59e78da07534ab05a0718fc23ff2cbe189..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/LLM/figures/chatglm.png and /dev/null differ diff --git a/docs/en/docs/NestOS/installation-and-deployment.md b/docs/en/docs/NestOS/installation-and-deployment.md deleted file mode 100644 index 2b1758a5fe0a2e5225c9126bc660742959b2e9a7..0000000000000000000000000000000000000000 --- a/docs/en/docs/NestOS/installation-and-deployment.md +++ /dev/null @@ -1,128 +0,0 @@ -# Installation and Deployment - -## Deploying NestOS on VMware - -This guide describes how to configure the latest NestOS on VMware. - -NestOS supports the x86_64 and AArch64 architectures. - -### Before You Start - -Before deploying NestOS, make the following preparations: - -- Downloading the NestOS ISO -- Preparing the **config.bu** File -- Configuring the Butane Tool (on Linux or Windows 10) -- A host machine with VMware installed - -### Initial Installation and Startup - -#### Starting NestOS - -When NestOS is started for the first time, Ignition is not installed. You can use the nestos-installer component to install Ignition as prompted. - -### Producing an Ignition File - -#### Obtaining Butane - -You can use Butane to convert a .bu file into an Ignition file. Ignition configurations are designed to be human readable but difficult to write, to discourage users from attempting to write configs by hand. Butane supports multiple environments. You can use Butane on a Linux or Windows host machine or in a container environment.
- -``` -docker pull quay.io/coreos/butane:release -``` - -#### Generating a Login Password - -Run the following command on the host machine and enter the password: - -``` -# openssl passwd -1 -salt yoursalt -Password: -$1$yoursalt$1QskegeyhtMG2tdh0ldQN0 -``` - -#### Generating an SSH Key Pair - -Run the following command on the host machine to obtain the public key and private key for SSH login: - -``` -# ssh-keygen -N '' -f ./id_rsa -Generating public/private rsa key pair. -Your identification has been saved in ./id_rsa -Your public key has been saved in ./id_rsa.pub -The key fingerprint is: -SHA256:4fFpDDyGHOYEd2fPaprKvvqst3T1xBQuk3mbdon+0Xs root@host-12-0-0-141 -``` - -``` -The key's randomart image is: -+---[RSA 3072]----+ -| ..= . o . | -| * = o * . | -| + B = * | -| o B O + . | -| S O B o | -| * = . . | -| . +o . . | -| +.o . .E | -| o*Oo ... | -+----[SHA256]-----+ -``` - -You can view the **id_rsa.pub** public key in the current directory. - -``` -# cat id_rsa.pub -ssh-rsa -AAAAB3NzaC1yc2... -``` - -#### Compiling a .bu File - -Perform a simple initial configuration. For more details, see the description of Ignition. -A simple **config.bu** file is as follows: - -``` -variant: fcos -version: 1.1.0 -passwd: - users: - - name: nest - password_hash: "$1$yoursalt$1QskegeyhtMG2tdh0ldQN0" - ssh_authorized_keys: - - "ssh-rsa - AAAAB3NzaC1yc2EAAA..." -``` - -#### Generating an Ignition File - -Use the Butane tool to convert the **config.bu** file to a **config.ign** file in the container environment. - -``` -# docker run --interactive --rm quay.io/coreos/butane:release \ ---pretty --strict < your_config.bu > transpiled_config.ign -``` - -### Installing NestOS - -Use SCP to copy the **config.ign** file generated by the host machine to NestOS that is initially started, which is not installed to the disk and runs in the memory. 
- -``` -sudo -i -scp root@your_ipAddress:/root/config.ign /root -``` - -Run the following command and complete the installation as prompted: - -``` -nestos-installer install /dev/sda --ignition-file config.ign -``` - -After the installation is complete, restart NestOS: - -``` -systemctl reboot -``` - -The installation is now complete. diff --git a/docs/en/docs/NestOS/usage.md b/docs/en/docs/NestOS/usage.md deleted file mode 100644 index 84da85fa2216f2c293a7bfdbb353787e0d75db59..0000000000000000000000000000000000000000 --- a/docs/en/docs/NestOS/usage.md +++ /dev/null @@ -1,904 +0,0 @@ -# Container-based Kubernetes Deployment Using NestOS - -## Solution Overview - -Kubernetes (K8s) is a portable container orchestration and management tool developed for container services. This guide provides a solution for quickly deploying Kubernetes containers using NestOS. In this solution, multiple NestOS nodes are created on the virtualization platform as the verification environment for the Kubernetes cluster deployment. The environment required by Kubernetes is configured in a YAML-formatted Ignition configuration file in advance. The resources required by Kubernetes are deployed and nodes are created when NestOS is installed. In a bare metal environment, you can also deploy Kubernetes clusters by referring to this document and the NestOS bare metal installation document. - -- Software versions - - - NestOS image: 22.09 - - - Kubernetes: v1.23.10 - - - isulad: 2.0.16 - -- Installation requirements - - Each machine has 2 GB or more RAM and 2 or more CPU cores. - - All machines in the cluster can communicate with each other. - - Each node has a unique host name. - - The external network can be accessed for pulling images. - - The swap partition is disabled. - - SELinux is disabled.
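The swap requirement in the list above can be checked from `/proc/swaps`, which contains a header line followed by one line per active swap device. The sketch below runs against sample contents so that it is self-contained; on a real node, read the file itself instead of the sample variable.

```shell
#!/bin/sh
# Count active swap devices from /proc/swaps-style content.
# Sample data stands in for the real file (read /proc/swaps on a node).
swaps='Filename				Type		Size	Used	Priority
/dev/vda2                               partition	2097148	0	-2'
active=$(printf '%s\n' "$swaps" | tail -n +2 | grep -c .)
if [ "$active" -eq 0 ]; then
  echo "swap is disabled: requirement met"
else
  echo "swap is enabled ($active device(s)); disable it with: swapoff -a"
fi
```

With the sample data (one active partition), the check reports that swap must be disabled before deployment.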
-- Deployment content - - NestOS image that integrates isulad, kubeadm, kubelet, kubectl, and other binary files - - Kubernetes master node - - Container network plugins - - Kubernetes nodes to be added to the Kubernetes cluster - -## Kubernetes Node Configuration - -NestOS uses Ignition to implement batch node configuration. This section describes how to generate an Ignition file and provides an Ignition configuration example for deploying Kubernetes containers. The system configurations of a NestOS node are as follows: - -| Item | Description | -| ------------ | -------------------------------------- | -| passwd | Configures the node login user and access authentication information | -| hostname | Configures the host name of a node | -| Time zone | Configures the default time zone of a node | -| Kernel parameters | Some kernel parameters need to be enabled for Kubernetes deployment. | -| SELinux | SELinux needs to be disabled for Kubernetes deployment. | -| Time synchronization| The chronyd service is used to synchronize the cluster time in the Kubernetes environment.| - -### Generating a Login Password - -To access a NestOS instance using a password, run the following command to generate **${PASSWORD_HASH}** for Ignition configuration: - -``` -openssl passwd -1 -salt yoursalt -``` - -### Generating an SSH Key Pair - -To access a NestOS instance using an SSH public key, run the following command to generate an SSH key pair: - -``` -ssh-keygen -N '' -f /root/.ssh/id_rsa -``` - -View the public key file **id_rsa.pub** and obtain the SSH public key information for Ignition configuration: - -``` -cat /root/.ssh/id_rsa.pub -``` - -### Compiling the Butane Configuration File - -Configure the following fields in the configuration file example below based on the actual deployment. See the sections above for how to generate values of some fields. 
- -- **${PASSWORD_HASH}**: password for logging in to the node -- **${SSH-RSA}**: public key of the node -- **${MASTER_NAME}**: host name of the master node -- **${MASTER_IP}**: IP address of the master node -- **${MASTER_SEGMENT}**: Subnet where the master node is located -- **${NODE_NAME}**: host name of the node -- **${NODE_IP}**: IP address of the node -- **${GATEWAY}**: gateway of the node -- **${service-cidr}**: IP address range allocated to the services -- **${pod-network-cidr}**: IP address range allocated to the pods -- **${image-repository}**: image registry address, for example, https://registry.cn-hangzhou.aliyuncs.com -- **${token}**: token information for joining the cluster, which is obtained from the master node - -Example Butane configuration file for the master node: - -```yaml -variant: fcos -version: 1.1.0 -## Password-related configurations -passwd: - users: - - name: root - ## Password - password_hash: "${PASSWORD_HASH}" - "groups": [ - "adm", - "sudo", - "systemd-journal", - "wheel" - ] - ## SSH public key - ssh_authorized_keys: - - "${SSH-RSA}" -storage: - directories: - - path: /etc/systemd/system/kubelet.service.d - overwrite: true - files: - - path: /etc/hostname - mode: 0644 - contents: - inline: ${MASTER_NAME} - - path: /etc/hosts - mode: 0644 - overwrite: true - contents: - inline: | - 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 - ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 - ${MASTER_IP} ${MASTER_NAME} - ${NODE_IP} ${NODE_NAME} - - path: /etc/NetworkManager/system-connections/ens2.nmconnection - mode: 0600 - overwrite: true - contents: - inline: | - [connection] - id=ens2 - type=ethernet - interface-name=ens2 - [ipv4] - address1=${MASTER_IP}/24,${GATEWAY} - dns=8.8.8.8 - dns-search= - method=manual - - path: /etc/sysctl.d/kubernetes.conf - mode: 0644 - overwrite: true - contents: - inline: | - net.bridge.bridge-nf-call-iptables=1 - net.bridge.bridge-nf-call-ip6tables=1 - 
net.ipv4.ip_forward=1 - - path: /etc/isulad/daemon.json - mode: 0644 - overwrite: true - contents: - inline: | - { - "exec-opts": ["native.cgroupdriver=systemd"], - "group": "isula", - "default-runtime": "lcr", - "graph": "/var/lib/isulad", - "state": "/var/run/isulad", - "engine": "lcr", - "log-level": "ERROR", - "pidfile": "/var/run/isulad.pid", - "log-opts": { - "log-file-mode": "0600", - "log-path": "/var/lib/isulad", - "max-file": "1", - "max-size": "30KB" - }, - "log-driver": "stdout", - "container-log": { - "driver": "json-file" - }, - "hook-spec": "/etc/default/isulad/hooks/default.json", - "start-timeout": "2m", - "storage-driver": "overlay2", - "storage-opts": [ - "overlay2.override_kernel_check=true" - ], - "registry-mirrors": [ - "docker.io" - ], - "insecure-registries": [ - "${image-repository}" - ], - "pod-sandbox-image": "k8s.gcr.io/pause:3.6", - "native.umask": "secure", - "network-plugin": "cni", - "cni-bin-dir": "/opt/cni/bin", - "cni-conf-dir": "/etc/cni/net.d", - "image-layer-check": false, - "use-decrypted-key": true, - "insecure-skip-verify-enforce": false, - "cri-runtimes": { - "kata": "io.containerd.kata.v2" - } - } - - path: /root/pull_images.sh - mode: 0644 - overwrite: true - contents: - inline: | - #!/bin/sh - KUBE_VERSION=v1.23.10 - KUBE_PAUSE_VERSION=3.6 - ETCD_VERSION=3.5.1-0 - DNS_VERSION=v1.8.6 - CALICO_VERSION=v3.19.4 - username=${image-repository} - images=( - kube-proxy:${KUBE_VERSION} - kube-scheduler:${KUBE_VERSION} - kube-controller-manager:${KUBE_VERSION} - kube-apiserver:${KUBE_VERSION} - pause:${KUBE_PAUSE_VERSION} - etcd:${ETCD_VERSION} - ) - for image in ${images[@]} - do - isula pull ${username}/${image} - isula tag ${username}/${image} k8s.gcr.io/${image} - isula rmi ${username}/${image} - done - isula pull ${username}/coredns:${DNS_VERSION} - isula tag ${username}/coredns:${DNS_VERSION} k8s.gcr.io/coredns/coredns:${DNS_VERSION} - isula rmi ${username}/coredns:${DNS_VERSION} - isula pull calico/node:${CALICO_VERSION} - 
isula pull calico/cni:${CALICO_VERSION} - isula pull calico/kube-controllers:${CALICO_VERSION} - isula pull calico/pod2daemon-flexvol:${CALICO_VERSION} - touch /var/log/pull-images.stamp - - path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf - mode: 0644 - contents: - inline: | - # Note: This dropin only works with kubeadm and kubelet v1.11+ - [Service] - Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf" - Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml" - # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically - EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env - # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use - # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file. 
- EnvironmentFile=-/etc/sysconfig/kubelet - ExecStart= - ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS - - path: /root/init-config.yaml - mode: 0644 - contents: - inline: | - apiVersion: kubeadm.k8s.io/v1beta2 - kind: InitConfiguration - nodeRegistration: - criSocket: /var/run/isulad.sock - name: k8s-master01 - kubeletExtraArgs: - volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/" - --- - apiVersion: kubeadm.k8s.io/v1beta2 - kind: ClusterConfiguration - controllerManager: - extraArgs: - flex-volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/" - kubernetesVersion: v1.23.10 - imageRepository: k8s.gcr.io - controlPlaneEndpoint: "${MASTER_IP}:6443" - networking: - serviceSubnet: "${service-cidr}" - podSubnet: "${pod-network-cidr}" - dnsDomain: "cluster.local" - dns: - type: CoreDNS - imageRepository: k8s.gcr.io/coredns - imageTag: v1.8.6 - links: - - path: /etc/localtime - target: ../usr/share/zoneinfo/Asia/Shanghai - -systemd: - units: - - name: kubelet.service - enabled: true - contents: | - [Unit] - Description=kubelet: The Kubernetes Node Agent - Documentation=https://kubernetes.io/docs/ - Wants=network-online.target - After=network-online.target - - [Service] - ExecStart=/usr/bin/kubelet - Restart=always - StartLimitInterval=0 - RestartSec=10 - - [Install] - WantedBy=multi-user.target - - - name: set-kernel-para.service - enabled: true - contents: | - [Unit] - Description=set kernel para for Kubernetes - ConditionPathExists=!/var/log/set-kernel-para.stamp - - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=modprobe br_netfilter - ExecStart=sysctl -p /etc/sysctl.d/kubernetes.conf - ExecStart=/bin/touch /var/log/set-kernel-para.stamp - - [Install] - WantedBy=multi-user.target - - - name: pull-images.service - enabled: true - contents: | - [Unit] - Description=pull images for kubernetes - ConditionPathExists=!/var/log/pull-images.stamp - - 
[Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=systemctl start isulad - ExecStart=systemctl enable isulad - ExecStart=sh /root/pull_images.sh - - [Install] - WantedBy=multi-user.target - - - name: disable-selinux.service - enabled: true - contents: | - [Unit] - Description=disable selinux for kubernetes - ConditionPathExists=!/var/log/disable-selinux.stamp - - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=bash -c "sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config" - ExecStart=setenforce 0 - ExecStart=/bin/touch /var/log/disable-selinux.stamp - - [Install] - WantedBy=multi-user.target - - - name: set-time-sync.service - enabled: true - contents: | - [Unit] - Description=set time sync for kubernetes - ConditionPathExists=!/var/log/set-time-sync.stamp - - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=bash -c "sed -i '3aserver ntp1.aliyun.com iburst' /etc/chrony.conf" - ExecStart=bash -c "sed -i '24aallow ${MASTER_SEGMENT}' /etc/chrony.conf" - ExecStart=bash -c "sed -i '26alocal stratum 10' /etc/chrony.conf" - ExecStart=systemctl restart chronyd.service - ExecStart=/bin/touch /var/log/set-time-sync.stamp - - [Install] - WantedBy=multi-user.target - - - name: init-cluster.service - enabled: true - contents: | - [Unit] - Description=init kubernetes cluster - Requires=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service - After=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service - ConditionPathExists=/var/log/set-kernel-para.stamp - ConditionPathExists=/var/log/set-time-sync.stamp - ConditionPathExists=/var/log/disable-selinux.stamp - ConditionPathExists=/var/log/pull-images.stamp - ConditionPathExists=!/var/log/init-k8s-cluster.stamp - - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=kubeadm init --config=/root/init-config.yaml --upload-certs - ExecStart=/bin/touch /var/log/init-k8s-cluster.stamp - - [Install] - 
WantedBy=multi-user.target - - - - name: install-cni-plugin.service - enabled: true - contents: | - [Unit] - Description=install cni network plugin for kubernetes - Requires=init-cluster.service - After=init-cluster.service - - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=bash -c "curl https://docs.projectcalico.org/v3.19/manifests/calico.yaml -o /root/calico.yaml" - ExecStart=/bin/sleep 6 - ExecStart=bash -c "sed -i 's#usr/libexec/#opt/libexec/#g' /root/calico.yaml" - ExecStart=kubectl apply -f /root/calico.yaml --kubeconfig=/etc/kubernetes/admin.conf - - [Install] - WantedBy=multi-user.target - -``` - -Example Butane configuration file for a node: - -```yaml -variant: fcos -version: 1.1.0 -passwd: - users: - - name: root - password_hash: "${PASSWORD_HASH}" - "groups": [ - "adm", - "sudo", - "systemd-journal", - "wheel" - ] - ssh_authorized_keys: - - "${SSH-RSA}" -storage: - directories: - - path: /etc/systemd/system/kubelet.service.d - overwrite: true - files: - - path: /etc/hostname - mode: 0644 - contents: - inline: ${NODE_NAME} - - path: /etc/hosts - mode: 0644 - overwrite: true - contents: - inline: | - 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 - ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 - ${MASTER_IP} ${MASTER_NAME} - ${NODE_IP} ${NODE_NAME} - - path: /etc/NetworkManager/system-connections/ens2.nmconnection - mode: 0600 - overwrite: true - contents: - inline: | - [connection] - id=ens2 - type=ethernet - interface-name=ens2 - [ipv4] - address1=${NODE_IP}/24,${GATEWAY} - dns=8.8.8.8; - dns-search= - method=manual - - path: /etc/sysctl.d/kubernetes.conf - mode: 0644 - overwrite: true - contents: - inline: | - net.bridge.bridge-nf-call-iptables=1 - net.bridge.bridge-nf-call-ip6tables=1 - net.ipv4.ip_forward=1 - - path: /etc/isulad/daemon.json - mode: 0644 - overwrite: true - contents: - inline: | - { - "exec-opts": ["native.cgroupdriver=systemd"], - "group": "isula", - 
"default-runtime": "lcr", - "graph": "/var/lib/isulad", - "state": "/var/run/isulad", - "engine": "lcr", - "log-level": "ERROR", - "pidfile": "/var/run/isulad.pid", - "log-opts": { - "log-file-mode": "0600", - "log-path": "/var/lib/isulad", - "max-file": "1", - "max-size": "30KB" - }, - "log-driver": "stdout", - "container-log": { - "driver": "json-file" - }, - "hook-spec": "/etc/default/isulad/hooks/default.json", - "start-timeout": "2m", - "storage-driver": "overlay2", - "storage-opts": [ - "overlay2.override_kernel_check=true" - ], - "registry-mirrors": [ - "docker.io" - ], - "insecure-registries": [ - "${image-repository}" - ], - "pod-sandbox-image": "k8s.gcr.io/pause:3.6", - "native.umask": "secure", - "network-plugin": "cni", - "cni-bin-dir": "/opt/cni/bin", - "cni-conf-dir": "/etc/cni/net.d", - "image-layer-check": false, - "use-decrypted-key": true, - "insecure-skip-verify-enforce": false, - "cri-runtimes": { - "kata": "io.containerd.kata.v2" - } - } - - path: /root/pull_images.sh - mode: 0644 - overwrite: true - contents: - inline: | - #!/bin/sh - KUBE_VERSION=v1.23.10 - KUBE_PAUSE_VERSION=3.6 - ETCD_VERSION=3.5.1-0 - DNS_VERSION=v1.8.6 - CALICO_VERSION=v3.19.4 - username=${image-repository} - images=( - kube-proxy:${KUBE_VERSION} - kube-scheduler:${KUBE_VERSION} - kube-controller-manager:${KUBE_VERSION} - kube-apiserver:${KUBE_VERSION} - pause:${KUBE_PAUSE_VERSION} - etcd:${ETCD_VERSION} - ) - for image in ${images[@]} - do - isula pull ${username}/${image} - isula tag ${username}/${image} k8s.gcr.io/${image} - isula rmi ${username}/${image} - done - isula pull ${username}/coredns:${DNS_VERSION} - isula tag ${username}/coredns:${DNS_VERSION} k8s.gcr.io/coredns/coredns:${DNS_VERSION} - isula rmi ${username}/coredns:${DNS_VERSION} - touch /var/log/pull-images.stamp - - path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf - mode: 0644 - contents: - inline: | - # Note: This dropin only works with kubeadm and kubelet v1.11+ - [Service] - 
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf" - Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml" - # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically - EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env - # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use - # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file. - EnvironmentFile=-/etc/sysconfig/kubelet - ExecStart= - ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS - - path: /root/join-config.yaml - mode: 0644 - contents: - inline: | - apiVersion: kubeadm.k8s.io/v1beta3 - caCertPath: /etc/kubernetes/pki/ca.crt - discovery: - bootstrapToken: - apiServerEndpoint: ${MASTER_IP}:6443 - token: ${token} - unsafeSkipCAVerification: true - timeout: 5m0s - tlsBootstrapToken: ${token} - kind: JoinConfiguration - nodeRegistration: - criSocket: /var/run/isulad.sock - imagePullPolicy: IfNotPresent - name: ${NODE_NAME} - taints: null - links: - - path: /etc/localtime - target: ../usr/share/zoneinfo/Asia/Shanghai - -systemd: - units: - - name: kubelet.service - enabled: true - contents: | - [Unit] - Description=kubelet: The Kubernetes Node Agent - Documentation=https://kubernetes.io/docs/ - Wants=network-online.target - After=network-online.target - - [Service] - ExecStart=/usr/bin/kubelet - Restart=always - StartLimitInterval=0 - RestartSec=10 - - [Install] - WantedBy=multi-user.target - - - name: set-kernel-para.service - enabled: true - contents: | - [Unit] - Description=set kernel para for kubernetes - ConditionPathExists=!/var/log/set-kernel-para.stamp - - [Service] - Type=oneshot - RemainAfterExit=yes - 
ExecStart=modprobe br_netfilter - ExecStart=sysctl -p /etc/sysctl.d/kubernetes.conf - ExecStart=/bin/touch /var/log/set-kernel-para.stamp - - [Install] - WantedBy=multi-user.target - - - name: pull-images.service - enabled: true - contents: | - [Unit] - Description=pull images for kubernetes - ConditionPathExists=!/var/log/pull-images.stamp - - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=systemctl start isulad - ExecStart=systemctl enable isulad - ExecStart=sh /root/pull_images.sh - - [Install] - WantedBy=multi-user.target - - - name: disable-selinux.service - enabled: true - contents: | - [Unit] - Description=disable selinux for kubernetes - ConditionPathExists=!/var/log/disable-selinux.stamp - - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=bash -c "sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config" - ExecStart=setenforce 0 - ExecStart=/bin/touch /var/log/disable-selinux.stamp - - [Install] - WantedBy=multi-user.target - - - name: set-time-sync.service - enabled: true - contents: | - [Unit] - Description=set time sync for kubernetes - ConditionPathExists=!/var/log/set-time-sync.stamp - - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=bash -c "sed -i '3aserver ${MASTER_IP}' /etc/chrony.conf" - ExecStart=systemctl restart chronyd.service - ExecStart=/bin/touch /var/log/set-time-sync.stamp - - [Install] - WantedBy=multi-user.target - - - name: join-cluster.service - enabled: true - contents: | - [Unit] - Description=node join kubernetes cluster - Requires=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service - After=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service - ConditionPathExists=/var/log/set-kernel-para.stamp - ConditionPathExists=/var/log/set-time-sync.stamp - ConditionPathExists=/var/log/disable-selinux.stamp - ConditionPathExists=/var/log/pull-images.stamp - - [Service] - Type=oneshot - RemainAfterExit=yes - 
ExecStart=kubeadm join --config=/root/join-config.yaml - - [Install] - WantedBy=multi-user.target - -``` - -### Generating an Ignition File - -Because Ignition configurations are JSON and hard to write by hand, the Butane tool converts a human-readable, YAML-formatted Butane file into the JSON-formatted Ignition file that is used to boot the NestOS image. Run the following command to convert a Butane configuration file to an Ignition configuration file: - -``` -podman run --interactive --rm quay.io/coreos/butane:release --pretty --strict < your_config.bu > transpiled_config.ign -``` - - - -## Kubernetes Cluster Setup - -Run the following command to create the master node of the Kubernetes cluster based on the Ignition file generated in the previous section. You can adjust the `vcpus`, `ram`, and `disk` parameters. For details, see the virt-install manual. - -``` -virt-install --name=${NAME} --vcpus=4 --ram=8192 --import --network=bridge=virbr0 --graphics=none --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${IGNITION_FILE_PATH}" --disk=size=40,backing_store=${NESTOS_RELEASE_QCOW2_PATH} --network=bridge=virbr1 --disk=size=40 -``` - -After NestOS is successfully installed on the master node, a series of environment configuration services are started in the background: set-kernel-para.service configures kernel parameters, pull-images.service pulls images required by the cluster, disable-selinux.service disables SELinux, set-time-sync.service sets time synchronization, init-cluster.service initializes the cluster, and then install-cni-plugin.service installs CNI network plugins. Wait a few minutes for the cluster to pull images. - -Run the `kubectl get pods -A` command to check whether all pods are in the running state. - - -Run the following command on the master node to view the token: - -``` -kubeadm token list -``` - -Add the queried token information to the Ignition file of the node and use the Ignition file to create the node. 
After the node is created, run the `kubectl get nodes` command on the master node to check whether the node is added to the cluster. - -If yes, Kubernetes is successfully deployed. - -# Using rpm-ostree - -## Installing Software Packages Using rpm-ostree - -Install wget. - -``` -rpm-ostree install wget -``` - -Restart the system. During the startup, use the up and down arrow keys on the keyboard to select the system deployment from before or after the RPM package installation. **ostree:0** indicates the deployment after the installation. - -``` -systemctl reboot -``` - -Check whether wget is successfully installed. - -``` -rpm -qa | grep wget -``` - -## Manually Upgrading NestOS Using rpm-ostree - -Run the following command in NestOS to view the current rpm-ostree status and version: - -``` -rpm-ostree status -``` - -Run the check command to check whether a new version is available. - -``` -rpm-ostree upgrade --check -``` - -Preview the differences between the versions. - -``` -rpm-ostree upgrade --preview -``` - -In this example, the latest version introduces the nano package. -Run the following command to download the latest OSTree and RPM data without performing the deployment: - -``` -rpm-ostree upgrade --download-only -``` - -Restart NestOS. After the restart, both the old and new versions of the system are available. Boot into the latest version. - -``` -rpm-ostree upgrade --reboot -``` - -## Comparing NestOS Versions - -Check the status. Ensure that two versions of OSTree exist: **LTS.20210927.dev.0** and **LTS.20210928.dev.0**. - -``` -rpm-ostree status -``` - -Compare the OSTree versions based on commit IDs. - -``` -rpm-ostree db diff 55eed9bfc5ec fe2408e34148 -``` - -## Rolling Back the System - -When a system upgrade is complete, the previous NestOS deployment is still stored on the disk. If the upgrade causes system problems, you can roll back to the previous deployment. - -### Temporary Rollback - -To temporarily roll back to the previous OS deployment, hold down **Shift** during system startup. 
When the boot loader menu is displayed, select the corresponding branch from the menu. - -### Permanent Rollback - -To permanently roll back to the previous OS deployment, log in to the target node and run the `rpm-ostree rollback` command. This operation sets the previous OS deployment as the default deployment to boot into. -Run the following command to roll back to the system before the upgrade: - -``` -rpm-ostree rollback -``` - - -## Switching Versions - -After NestOS has been rolled back to an older version, you can run the following command to switch NestOS to a newer OSTree version: - -``` -rpm-ostree deploy -r 22.03.20220325.dev.0 -``` - -After the restart, check whether NestOS uses the latest OSTree version. - - -# Using Zincati for Automatic Update - -Zincati automatically updates NestOS. Zincati uses the Cincinnati backend to check whether a new version is available. If a new version is available, Zincati downloads it using rpm-ostree. - -Currently, the Zincati automatic update service is disabled by default. You can modify the configuration file so that Zincati starts automatically at boot. - -``` -vi /etc/zincati/config.d/95-disable-on-dev.toml -``` - -Set **updates.enabled** to true. -Create a configuration file to specify the address of the Cincinnati backend. - -``` -vi /etc/zincati/config.d/update-cincinnati.toml -``` - -Add the following content: - -``` -[cincinnati] -base_url="http://nestos.org.cn:8080" -``` - -Restart the Zincati service. - -``` -systemctl restart zincati.service -``` - -When a new version is available, Zincati automatically detects it. Check the rpm-ostree status. If the status is **busy**, the system is being upgraded. - -After a period of time, NestOS automatically restarts. Log in to NestOS again and check the rpm-ostree status. If the status changes to **idle** and the current version is **20220325**, rpm-ostree has been upgraded. 
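The busy-to-idle check described above can be scripted instead of polled by hand. A minimal sketch, assuming `rpm-ostree status` reports a `State: busy`/`State: idle` line as current releases do; `wait_idle` is an illustrative helper name, not a NestOS tool, and the status command is parameterized so the loop can be tried without a live system:

```shell
# Poll the rpm-ostree state until the automatic update finishes.
# Gives up after roughly 10 minutes (60 polls x 10 s).
wait_idle() {
  # Status command; defaults to rpm-ostree, can be stubbed for testing.
  status_cmd=${1:-"rpm-ostree status"}
  i=0
  while [ "$i" -lt 60 ]; do
    if $status_cmd 2>/dev/null | grep -q "^State: idle"; then
      echo "update finished"
      return 0
    fi
    i=$((i + 1))
    sleep 10
  done
  echo "still busy after 10 minutes" >&2
  return 1
}
```

Run `wait_idle` after Zincati reports a pending update, then verify the new version with `rpm-ostree status`.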
- -View the zincati service logs to check the upgrade process and system restart logs. In addition, the information "auto-updates logic enabled" in the logs indicates that the update is automatic. - -# Customizing NestOS - -You can use the nestos-installer tool to customize the original NestOS ISO file and package the Ignition file to generate a customized NestOS ISO file. The customized NestOS ISO file can be used to automatically install NestOS after the system is started for easy installation. - -Before customizing NestOS, make the following preparations: - -- Downloading the NestOS ISO. -- Preparing a **config.ign** File. - -## Generating a Customized NestOS ISO File - -### Setting Parameter Variables - -``` -$ export COREOS_ISO_ORIGIN_FILE=nestos-22.03.20220324.x86_64.iso -$ export COREOS_ISO_CUSTOMIZED_FILE=my-nestos.iso -$ export IGN_FILE=config.ign -``` - -### Checking the ISO File - -Ensure that the original NestOS ISO file does not contain the Ignition configuration. - -``` -$ nestos-installer iso ignition show $COREOS_ISO_ORIGIN_FILE - -Error: No embedded Ignition config. -``` - -### Generating a Customized NestOS ISO File - -Package the Ignition file into the original NestOS ISO file to generate a customized NestOS ISO file. - -``` -$ nestos-installer iso ignition embed $COREOS_ISO_ORIGIN_FILE --ignition-file $IGN_FILE $COREOS_ISO_ORIGIN_FILE --output $COREOS_ISO_CUSTOMIZED_FILE -``` - -### Checking the ISO File - -Ensure that the customized NestOS ISO file contains the Ignition configuration. - -``` -$ nestos-installer iso ignition show $COREOS_ISO_CUSTOMIZED_FILE -``` - -The previous command displays the Ignition configuration. - -## Installing the Customized NestOS ISO File - -The customized NestOS ISO file can be used to directly boot the installation. NestOS is automatically installed based on the Ignition configuration. After the installation is complete, you can use **nest/password** to log in to NestOS on the VM console. 
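A malformed Ignition file typically only surfaces when the installed system first boots, so it can help to confirm the file is at least well-formed JSON before running the `iso ignition embed` step above. A minimal sketch using `python3 -m json.tool`; the `check_ign` helper is an illustrative name, not part of nestos-installer, and `$IGN_FILE` is the variable set in "Setting Parameter Variables":

```shell
# Sanity-check that an Ignition config is syntactically valid JSON
# before embedding it into the ISO.
check_ign() {
  f=${1:-"$IGN_FILE"}
  if python3 -m json.tool "$f" > /dev/null 2>&1; then
    echo "valid"
  else
    echo "invalid: $f" >&2
    return 1
  fi
}
```

This only checks JSON syntax; semantic validation of the Ignition schema still happens at boot time.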
diff --git a/docs/en/docs/Quickstart/figures/Installation_wizard.png b/docs/en/docs/Quickstart/figures/Installation_wizard.png deleted file mode 100644 index fc3a96c0cd4b5a2ece94a0b3fc484720440adace..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/figures/Installation_wizard.png and /dev/null differ diff --git a/docs/en/docs/Quickstart/figures/advanced-user-configuration.png b/docs/en/docs/Quickstart/figures/advanced-user-configuration.png deleted file mode 100644 index 59a188aece92ad19cc9b42f69e235d9a9d4f702a..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/figures/advanced-user-configuration.png and /dev/null differ diff --git a/docs/en/docs/Quickstart/figures/creating-a-user.png b/docs/en/docs/Quickstart/figures/creating-a-user.png deleted file mode 100644 index 0e2befb0832d1167f5ffdcafdf7d9952d9ccdfbe..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/figures/creating-a-user.png and /dev/null differ diff --git a/docs/en/docs/Quickstart/figures/installation-process.png b/docs/en/docs/Quickstart/figures/installation-process.png deleted file mode 100644 index 2d219c7605ee75e73dffba1e2dd7c277968d4801..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/figures/installation-process.png and /dev/null differ diff --git a/docs/en/docs/Quickstart/figures/installation-summary.png b/docs/en/docs/Quickstart/figures/installation-summary.png deleted file mode 100644 index d5ca555a2b2291e139b67098a7c23d29b23b8b24..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/figures/installation-summary.png and /dev/null differ diff --git a/docs/en/docs/Quickstart/figures/password-of-the-root-account.png b/docs/en/docs/Quickstart/figures/password-of-the-root-account.png deleted file mode 100644 index fe65e73a81e25e5fa90a13af707165911e7fa459..0000000000000000000000000000000000000000 Binary files 
a/docs/en/docs/Quickstart/figures/password-of-the-root-account.png and /dev/null differ diff --git a/docs/en/docs/Quickstart/figures/selecting-a-language.png b/docs/en/docs/Quickstart/figures/selecting-a-language.png deleted file mode 100644 index eaeb26ca208778822bf591782a689569339c3552..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/figures/selecting-a-language.png and /dev/null differ diff --git a/docs/en/docs/Quickstart/figures/selecting-installation-software.png b/docs/en/docs/Quickstart/figures/selecting-installation-software.png deleted file mode 100644 index c246e997d787d0d6a0439dcaf8780a09a9b72ca7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/figures/selecting-installation-software.png and /dev/null differ diff --git a/docs/en/docs/Quickstart/figures/setting-the-installation-destination.png b/docs/en/docs/Quickstart/figures/setting-the-installation-destination.png deleted file mode 100644 index 224f165b222598aa140187bdfa9b1e75af36c0c5..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/figures/setting-the-installation-destination.png and /dev/null differ diff --git a/docs/en/docs/Quickstart/public_sys-resources/icon-caution.gif b/docs/en/docs/Quickstart/public_sys-resources/icon-caution.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/docs/en/docs/Quickstart/public_sys-resources/icon-danger.gif b/docs/en/docs/Quickstart/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/Quickstart/public_sys-resources/icon-notice.gif 
b/docs/en/docs/Quickstart/public_sys-resources/icon-notice.gif deleted file mode 100644 index 86024f61b691400bea99e5b1f506d9d9aef36e27..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/public_sys-resources/icon-notice.gif and /dev/null differ diff --git a/docs/en/docs/Quickstart/public_sys-resources/icon-tip.gif b/docs/en/docs/Quickstart/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/Quickstart/public_sys-resources/icon-warning.gif b/docs/en/docs/Quickstart/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Quickstart/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/Releasenotes/known-issues.md b/docs/en/docs/Releasenotes/known-issues.md deleted file mode 100644 index 790e6f011eafc512e5fc416d14f0637335d4de95..0000000000000000000000000000000000000000 --- a/docs/en/docs/Releasenotes/known-issues.md +++ /dev/null @@ -1,10 +0,0 @@ -# Known Issues - -| No. | Issue Ticket Number | Issue Description | Severity Level | Impact Analysis | Workaround Measure | History Scenario | -| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------------- | -| 1 | [I5LZXD](https://gitee.com/src-openEuler/openldap/issues/I5LZXD) | openldap build problem in openEuler:22.09 | Minor | The case fails to be executed during the build. This problem is caused by the case design. The impact is controllable. 
Switch to the sleep mode and wait until the operation is complete. The operation occasionally fails in high-load scenarios. | Skip the related case. Track the upstream community to solve the problem. | | -| 2 | [I5NLZI](https://gitee.com/src-openEuler/dde/issues/I5NLZI) | \[openEuler 22.09 rc2] Some application icons in the initiator are improperly displayed. | Minor | Only the initiator icon of the DDE desktop is improperly displayed. This issue does not affect functions. The overall impact on usability is controllable. | Switch the theme to avoid this problem. | | -| 3 | [I5P5HM](https://gitee.com/src-openEuler/afterburn/issues/I5P5HM) | \[22.09_RC3_EPOL]\[arm/x86] **Failed to stop afterburn-sshkeys@.service** is displayed when uninstalling afterburn. | Minor | | | | -| 4 | [I5PQ3O](https://gitee.com/src-openEuler/openmpi/issues/I5PQ3O) | \[openEuler-22.09-RC3] An error is reported when **ompi-clean -v -d** is executed. | Major | This package is used by NestOS and has a limited application scope. By default, this package is enabled by the **core** user in NestOS. This package has little impact on the server version. | No workaround is provided by the SIG. | | -| 5 | [I5Q2FE](https://gitee.com/src-openEuler/udisks2/issues/I5Q2FE) | udisks2 build problem in openEuler:22.09 | Minor | The case fails to be executed during the build. The environment is not retained, and the problem does not recur during long-term local build. | Keep tracking the build success rate in the community. | | -| 6 | [I5SJ0R](https://gitee.com/src-openEuler/podman/issues/I5SJ0R) | \[22.09RC5 arm/x86] An error is reported when executing **podman create --blkio-weight-device /dev/loop0:123:15 fedora ls**. | Minor | blkio-weight is a feature of 4.x kernels and is not supported in kernel 5.10. | Upgrade the podman component. 
| | diff --git a/docs/en/docs/Releasenotes/public_sys-resources/icon-caution.gif b/docs/en/docs/Releasenotes/public_sys-resources/icon-caution.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Releasenotes/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/docs/en/docs/Releasenotes/public_sys-resources/icon-danger.gif b/docs/en/docs/Releasenotes/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Releasenotes/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/Releasenotes/public_sys-resources/icon-notice.gif b/docs/en/docs/Releasenotes/public_sys-resources/icon-notice.gif deleted file mode 100644 index 86024f61b691400bea99e5b1f506d9d9aef36e27..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Releasenotes/public_sys-resources/icon-notice.gif and /dev/null differ diff --git a/docs/en/docs/Releasenotes/public_sys-resources/icon-tip.gif b/docs/en/docs/Releasenotes/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Releasenotes/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/Releasenotes/public_sys-resources/icon-warning.gif b/docs/en/docs/Releasenotes/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Releasenotes/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/Releasenotes/terms-of-use.md b/docs/en/docs/Releasenotes/terms-of-use.md deleted file mode 100644 index 8fab84f65144bb0c9208baf04afcc3251bccf46d..0000000000000000000000000000000000000000 --- 
a/docs/en/docs/Releasenotes/terms-of-use.md +++ /dev/null @@ -1,13 +0,0 @@ -# Terms of Use - -**Copyright © 2023 openEuler Community** - -Your replication, use, modification, and distribution of this document are governed by the Creative Commons License Attribution-ShareAlike 4.0 International Public License \(CC BY-SA 4.0\). You can visit [https://creativecommons.org/licenses/by-sa/4.0/](https://creativecommons.org/licenses/by-sa/4.0/) to view a human-readable summary of \(and not a substitute for\) CC BY-SA 4.0. For the complete CC BY-SA 4.0, visit [https://creativecommons.org/licenses/by-sa/4.0/legalcode](https://creativecommons.org/licenses/by-sa/4.0/legalcode). - -**Trademarks and Permissions** - -All trademarks and registered trademarks mentioned in the documents are the property of their respective holders. The use of the openEuler trademark must comply with the [Use Specifications of the openEuler Trademark](https://www.openeuler.org/en/other/brand/). - -**Disclaimer** - -This document is used only as a guide. Unless otherwise specified by applicable laws or agreed by both parties in written form, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, including but not limited to non-infringement, timeliness, and specific purposes. 
diff --git a/docs/en/docs/SecHarden/figures/en-us_image_0221925211.png b/docs/en/docs/SecHarden/figures/en-us_image_0221925211.png deleted file mode 100644 index 62ef0decdf6f1e591059904001d712a54f727e68..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SecHarden/figures/en-us_image_0221925211.png and /dev/null differ diff --git a/docs/en/docs/SecHarden/figures/en-us_image_0221925212.png b/docs/en/docs/SecHarden/figures/en-us_image_0221925212.png deleted file mode 100644 index ad5ed3f7beeb01e6a48707c4806606b41d687e22..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SecHarden/figures/en-us_image_0221925212.png and /dev/null differ diff --git a/docs/en/docs/SecHarden/public_sys-resources/icon-caution.gif b/docs/en/docs/SecHarden/public_sys-resources/icon-caution.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SecHarden/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/docs/en/docs/SecHarden/public_sys-resources/icon-danger.gif b/docs/en/docs/SecHarden/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SecHarden/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/SecHarden/public_sys-resources/icon-notice.gif b/docs/en/docs/SecHarden/public_sys-resources/icon-notice.gif deleted file mode 100644 index 86024f61b691400bea99e5b1f506d9d9aef36e27..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SecHarden/public_sys-resources/icon-notice.gif and /dev/null differ diff --git a/docs/en/docs/SecHarden/public_sys-resources/icon-tip.gif b/docs/en/docs/SecHarden/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files 
a/docs/en/docs/SecHarden/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/SecHarden/public_sys-resources/icon-warning.gif b/docs/en/docs/SecHarden/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SecHarden/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/ShangMi/disk-encryption.md b/docs/en/docs/ShangMi/disk-encryption.md deleted file mode 100644 index dc0fbc67d7be95049bbea115399e29d6b5563b08..0000000000000000000000000000000000000000 --- a/docs/en/docs/ShangMi/disk-encryption.md +++ /dev/null @@ -1,89 +0,0 @@ -# Disk Encryption - -## Overview - -Disk encryption protects the storage confidentiality of important data. Data is encrypted based on a specified encryption algorithm and then written to disks. This feature mainly involves the user-mode tool cryptsetup and the kernel-mode module dm-crypt. Currently, the disk encryption feature provided by the openEuler OS supports ShangMi (SM) series cryptographic algorithms. Parameters are as follows: - -- Encryption modes: luks2 and plain; -- Key length: 256 bits; -- Message digest algorithm: SM3; -- Encryption algorithm: sm4-xts-plain64. - -## Prerequisites - -1. Kernel 5.10.0-106 or later - -``` -$ rpm -qa kernel -kernel-5.10.0-106.1.0.55.oe2209.x86_64 -``` - -2. cryptsetup 2.4.1-1 or later - -``` -$ rpm -qa cryptsetup -cryptsetup-2.4.1-1.oe2209.x86_64 -``` - -## How to Use - -A disk is formatted into a disk in a specified encryption mode and mapped to **/dev/mapper** as a dm device. Subsequent disk read and write operations are performed through the dm device. Data encryption and decryption are performed in kernel mode and are not perceived by users. The procedure is as follows: - -1. Format the disk and map the disk as a dm device. - -a. 
luks2 mode - -Set the encryption mode to luks2, encryption algorithm to sm4-xts-plain64, key length to 256 bits, and message digest algorithm to SM3. Format the disk, and then open it to create the **crypt1** mapping used in the steps below. - -``` -# cryptsetup luksFormat /dev/sdd -c sm4-xts-plain64 --key-size 256 --hash sm3 -# cryptsetup luksOpen /dev/sdd crypt1 -``` - -b. plain mode - -Set the encryption mode to plain, encryption algorithm to sm4-xts-plain64, key length to 256 bits, and message digest algorithm to SM3. - -``` -# cryptsetup plainOpen /dev/sdd crypt1 -c sm4-xts-plain64 --key-size 256 --hash sm3 -``` - -2. After the mapping is successful, run the **lsblk** command to view the device information. - -``` -# lsblk -NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS -...... -sdd 8:48 0 50G 0 disk -└─crypt1 253:3 0 50G 0 crypt -...... -``` - -3. Perform I/O read and write operations on the encrypted device. - -Deliver I/Os directly to the mapped device. - -``` -# dd if=/dev/random of=/dev/mapper/crypt1 bs=4k count=10240 -``` - -Deliver I/Os through the file system. - -``` -# mkfs.ext4 /dev/mapper/crypt1 -# mount /dev/mapper/crypt1 /mnt/crypt/ -# dd if=/dev/random of=/mnt/crypt/tmp bs=4k count=10240 -``` - -4. Disable device mapping. - -If a file system is mounted, unmount it first. - -``` -# umount /mnt/crypt -``` - -Close the device. 
- -``` -# cryptsetup luksClose crypt1 -``` diff --git a/docs/en/docs/StratoVirt/public_sys-resources/note.png b/docs/en/docs/StratoVirt/public_sys-resources/note.png deleted file mode 100644 index ad5ed3f7beeb01e6a48707c4806606b41d687e22..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/StratoVirt/public_sys-resources/note.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/add-kernel-startup-item.png b/docs/en/docs/SystemOptimization/figures/add-kernel-startup-item.png deleted file mode 100644 index b674ab4a93e3fb2abd3f30749d96e724fd77019c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/figures/add-kernel-startup-item.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/check-whether-the-lro-parameter-is-enabled.png b/docs/en/docs/SystemOptimization/figures/check-whether-the-lro-parameter-is-enabled.png deleted file mode 100644 index 351e0d41ec47d790d4f3556d840e9c951e480680..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/figures/check-whether-the-lro-parameter-is-enabled.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/core-binding-success-verification.png b/docs/en/docs/SystemOptimization/figures/core-binding-success-verification.png deleted file mode 100644 index 342e691a50fc63ea8a71fdf752a6df46daafe14c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/figures/core-binding-success-verification.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/core-binding.png b/docs/en/docs/SystemOptimization/figures/core-binding.png deleted file mode 100644 index 627d0ba137a169c37afa1cc6dd81a2fffd9a0085..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/figures/core-binding.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/create-raid0.png 
b/docs/en/docs/SystemOptimization/figures/create-raid0.png deleted file mode 100644 index 31fc68e727aa3e1f3e9e29851e13ee2e05568735..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/figures/create-raid0.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/kernel-boot-option-parameters.png b/docs/en/docs/SystemOptimization/figures/kernel-boot-option-parameters.png deleted file mode 100644 index 30bafb334c64617d4963b6781e8976a08de5b553..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/figures/kernel-boot-option-parameters.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/pci-device-number.png b/docs/en/docs/SystemOptimization/figures/pci-device-number.png deleted file mode 100644 index 02dab7ffc45389886a5a7aec7222b1a53b62d509..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/figures/pci-device-number.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/pci-nic-numa-node.png b/docs/en/docs/SystemOptimization/figures/pci-nic-numa-node.png deleted file mode 100644 index 401028d9f88ea936c4e08bc572aeee573ce84b92..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/figures/pci-nic-numa-node.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/ring_buffer.png b/docs/en/docs/SystemOptimization/figures/ring_buffer.png deleted file mode 100644 index 4b4a608150554bf677f503213d0a0227310b0a17..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/figures/ring_buffer.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/figures/swapoff-before-and-after-modification.png b/docs/en/docs/SystemOptimization/figures/swapoff-before-and-after-modification.png deleted file mode 100644 index 080c9f9bd79a0090d0ed962358e9da2457afdc77..0000000000000000000000000000000000000000 Binary files 
a/docs/en/docs/SystemOptimization/figures/swapoff-before-and-after-modification.png and /dev/null differ diff --git a/docs/en/docs/SystemOptimization/mysql-performance-tuning.md b/docs/en/docs/SystemOptimization/mysql-performance-tuning.md deleted file mode 100644 index b396d3adcfb94ad8177a290e771c7c612b399470..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/SystemOptimization/mysql-performance-tuning.md and /dev/null differ diff --git a/docs/en/docs/TailorCustom/overview.md b/docs/en/docs/TailorCustom/overview.md deleted file mode 100644 index 053bc0b8481c1b95bbdba0abbe17e94ff674f4fa..0000000000000000000000000000000000000000 --- a/docs/en/docs/TailorCustom/overview.md +++ /dev/null @@ -1,3 +0,0 @@ -# Tailoring and Customization Tool Usage Guide - -This document describes the tailoring and customization tool of openEuler, including the introduction, installation, and usage. \ No newline at end of file diff --git a/docs/en/docs/Virtualization/configuring-disk-io-suspension.md b/docs/en/docs/Virtualization/configuring-disk-io-suspension.md deleted file mode 100644 index d43141ef3f49277fa242a3b8f80e6f4cae5f11f3..0000000000000000000000000000000000000000 --- a/docs/en/docs/Virtualization/configuring-disk-io-suspension.md +++ /dev/null @@ -1,105 +0,0 @@ -# Configuring Disk I/O Suspension - - - -- [Configuring Disk I/O Suspension](#configuring-disk-io-suspension) - - [Introduction](#introduction) - - [Overview](#overview) - - [Applicable Scenario](#applicable-scenario) - - [Precautions and Restrictions](#precautions-and-restrictions) - - [Disk I/O Suspension Configuration](#disk-io-suspension-configuration) - - [Qemu Command Line Configuration](#qemu-command-line-configuration) - - [XML Configuration](#xml-configuration) - - - -## Introduction - -### Overview - -When a storage fault occurs (for example, the storage link is disconnected), the I/O error of the physical disk is sent to the VM front end through the virtualization layer. 
After the VM receives the I/O error, the user file system in the VM may change to the read-only state. In this case, the VM needs to be restarted or the user needs to manually recover the file system, which brings extra workload. - -To address this, the virtualization platform provides the disk I/O suspension capability. When a storage fault occurs, the VM I/O being delivered to the host is suspended. During the suspension period, no I/O error is returned to the VM, so the VM file system does not become read-only but hangs. At the same time, the VM backend retries I/Os at the specified suspension interval. If the storage fault is rectified within the suspension time, the suspended I/O is written to the disk, the internal file system of the VM recovers automatically, and the VM does not need to be restarted. If the storage fault is not rectified within the suspension time, an error is reported to the VM and the user is notified. - -### Applicable Scenario - -A virtual disk is backed by a cloud drive whose storage plane link may be disconnected. - -### Precautions and Restrictions - -- Only virtio-blk and virtio-scsi virtual disks support disk I/O suspension. - -- The backend of a virtual disk with I/O suspension enabled is usually a cloud drive whose storage plane link may be disconnected. - -- Disk I/O suspension can be enabled for read and write I/O errors. The retry interval and timeout interval for read and write I/O errors of the same disk are the same. - -- The disk I/O suspension retry interval does not include the actual I/O overhead on the host. That is, the actual interval between two I/O retry operations is greater than the configured I/O error retry interval. - -- Disk I/O suspension cannot identify the I/O error type (such as storage link disconnection, bad disk, or reservation conflict). As long as the hardware returns an I/O error, disk I/O suspension is performed. 
- -- When the disk I/O is suspended, I/O issued inside the VM does not return. System commands that access the disk, such as fdisk, hang, as do any services that depend on their output. - -- When the disk I/O is suspended, I/O cannot be written to the disk. As a result, the VM may fail to shut down gracefully. In this case, forcibly shut down the VM. - -- When the disk I/O is suspended, disk data cannot be read. As a result, the VM cannot be restarted. Forcibly shut down the VM, wait until the storage fault is rectified, and then restart the VM. - -- After a storage fault occurs, disk I/O suspension cannot prevent the following problems: - - 1. Advanced storage features fail to execute. - - Advanced features include virtual disk hot swapping, virtual disk creation, VM startup, VM shutdown, forcible VM shutdown, VM hibernation and wakeup, VM storage hot migration, VM storage hot migration cancellation, VM storage snapshot creation, VM storage snapshot combination, VM disk capacity query, VM online scale-out, and virtual CD-ROM drive insertion and ejection. - - 2. VM life cycle operations fail to execute. - -- When a VM configured with disk I/O suspension initiates hot migration, the XML configuration of the destination disk must contain the same disk I/O suspension configuration as that of the source disk. - -## Disk I/O Suspension Configuration - -### Qemu Command Line Configuration - -The disk I/O suspension function is enabled by specifying `werror=retry` and `rerror=retry` on the virtual disk device and using `retry_interval` and `retry_timeout` to configure the retry policy. `retry_interval` indicates the I/O error retry interval, in milliseconds. The value ranges from 0 to MAX_LONG; if this parameter is not set, the default value 1000 ms is used. `retry_timeout` indicates the I/O retry timeout interval. The value ranges from 0 to MAX_LONG.
The value 0 indicates that no timeout occurs. The unit is millisecond. If this parameter is not set, the default value is 0. - -The I/O suspension configuration of the virtio-blk disk is as follows: - -```shell --drive file=/path/to/your/storage,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native \ --device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,\ -drive=drive-virtio-disk0,id=virtio-disk0,write-cache=on,\ -werror=retry,rerror=retry,retry_interval=2000,retry_timeout=10000 -``` - -The I/O suspension configuration of the virtio-scsi disk is as follows: - -```shell --drive file=/path/to/your/storage,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,aio=native \ --device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,\ -device_id=drive-scsi0-0-0-0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,write-cache=on,\ -werror=retry,rerror=retry,retry_interval=2000,retry_timeout=10000 -``` - -### XML Configuration - -The disk I/O suspension function is enabled by specifying `error_policy='retry'` and `rerror_policy='retry'`in the disk XML configuration file. Configure the values of `retry_interval` and `retry_timeout`. `retry_interval` indicates the I/O error retry interval. The value ranges from 0 to MAX_LONG, in milliseconds. If this parameter is not set, the default value 1000 ms is used. `retry_timeout` indicates the I/O retry timeout interval. The value ranges from 0 to MAX_LONG. The value 0 indicates that no timeout occurs. The unit is millisecond. If this parameter is not set, the default value is 0. - -The disk I/O suspension XML configuration of the virtio-blk disk is as follows: - -```xml - - - - - - -``` - -The disk I/O suspension XML configuration of the virtio-scsi disk is as follows: - -```xml - - - - - -
- -``` diff --git a/docs/en/docs/Virtualization/figures/en-us_image_0218587435.png b/docs/en/docs/Virtualization/figures/en-us_image_0218587435.png deleted file mode 100644 index a6107f2308d194c92ebe75b58e9125819e7fe9eb..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Virtualization/figures/en-us_image_0218587435.png and /dev/null differ diff --git a/docs/en/docs/Virtualization/figures/en-us_image_0218587436.png b/docs/en/docs/Virtualization/figures/en-us_image_0218587436.png deleted file mode 100644 index 28a8d25b19c5a5ed043a8f4701b8f920de365ea2..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Virtualization/figures/en-us_image_0218587436.png and /dev/null differ diff --git a/docs/en/docs/Virtualization/public_sys-resources/icon-caution.gif b/docs/en/docs/Virtualization/public_sys-resources/icon-caution.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Virtualization/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/docs/en/docs/Virtualization/public_sys-resources/icon-danger.gif b/docs/en/docs/Virtualization/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Virtualization/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/Virtualization/public_sys-resources/icon-notice.gif b/docs/en/docs/Virtualization/public_sys-resources/icon-notice.gif deleted file mode 100644 index 86024f61b691400bea99e5b1f506d9d9aef36e27..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Virtualization/public_sys-resources/icon-notice.gif and /dev/null differ diff --git a/docs/en/docs/Virtualization/public_sys-resources/icon-tip.gif b/docs/en/docs/Virtualization/public_sys-resources/icon-tip.gif deleted file mode 100644 index 
93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Virtualization/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/Virtualization/public_sys-resources/icon-warning.gif b/docs/en/docs/Virtualization/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/Virtualization/public_sys-resources/icon-warning.gif and /dev/null differ diff --git a/docs/en/docs/Virtualization/user-and-administrator-guide.md b/docs/en/docs/Virtualization/user-and-administrator-guide.md deleted file mode 100644 index 47646f59b84e0a9b7e9952054286eba74f6ebdd1..0000000000000000000000000000000000000000 --- a/docs/en/docs/Virtualization/user-and-administrator-guide.md +++ /dev/null @@ -1,437 +0,0 @@ -# User and Administrator Guide - -This chapter describes how to create VMs on the virtualization platform, manage VM life cycles, and query information. - - - -- [Best Practices](#best-practices) - - [Performance Best Practices](#performance-best-practices) - - [Halt-Polling](#halt-polling) - - [I/O Thread Configuration](#i-o-thread-configuration) - - [Raw Device Mapping](#raw-device-mapping) - - [kworker Isolation and Binding](#kworker-isolation-and-binding) - - [HugePage Memory](#hugepage-memory) - - [Security Best Practices](#security-best-practices) - - [Libvirt Authentication](#libvirt-authentication) - - [qemu-ga](#qemu-ga) - - [sVirt Protection](#svirt-protection) - - -## Best Practices - -### Performance Best Practices -#### Halt-Polling - -##### Overview - -If compute resources are sufficient, the halt-polling feature can be used to enable VMs to obtain performance similar to that of physical machines. If the halt-polling feature is not enabled, the host allocates CPU resources to other processes when the vCPU exits due to idle timeout. 
When the halt-polling feature is enabled on the host, the vCPU of the VM performs polling when it is idle. The polling duration depends on the actual configuration. If the vCPU is woken up during the polling, the vCPU can continue to run without being scheduled from the host. This reduces the scheduling overhead and improves the VM system performance. - ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The halt-polling mechanism ensures that the vCPU thread of the VM responds in a timely manner. However, when the VM has no load, the host also performs polling. As a result, the host detects that the CPU usage of the vCPU is high, but the actual CPU usage of the VM is not high. - -##### Instructions - -The halt-polling feature is enabled by default. You can dynamically change the halt-polling time of vCPU by modifying the **halt\_poll\_ns** file. The default value is **500000**, in ns. - -For example, to set the polling duration to 400,000 ns, run the following command: - -``` -# echo 400000 > /sys/module/kvm/parameters/halt_poll_ns -``` - -#### I/O Thread Configuration - -##### Overview - -By default, QEMU main threads handle backend VM read and write operations on the KVM. This causes the following issues: - -- VM I/O requests are processed by a QEMU main thread. Therefore, the single-thread CPU usage becomes the bottleneck of VM I/O performance. -- The QEMU global lock \(qemu\_global\_mutex\) is used when VM I/O requests are processed by the QEMU main thread. If the I/O processing takes a long time, the QEMU main thread will occupy the global lock for a long time. As a result, the VM vCPU cannot be scheduled properly, affecting the overall VM performance and user experience. - -You can configure the I/O thread attribute for the virtio-blk disk or virtio-scsi controller. At the QEMU backend, an I/O thread is used to process read and write requests of a virtual disk. 
The mapping relationship between the I/O thread and the virtio-blk disk or virtio-scsi controller can be a one-to-one relationship to minimize the impact on the QEMU main thread, enhance the overall I/O performance of the VM, and improve user experience. - -##### Configuration Description - -To use I/O threads to process VM disk read and write requests, you need to modify VM configurations as follows: - -- Configure the total number of high-performance virtual disks on the VM. For example, set **** to **4** to control the total number of I/O threads. - - ``` - - VMName - 4194304 - 4194304 - 4 - 4 - ``` - -- Configure the I/O thread attribute for the virtio-blk disk. **** indicates I/O thread IDs. The IDs start from 1 and each ID must be unique. The maximum ID is the value of ****. For example, to allocate I/O thread 2 to the virtio-blk disk, set parameters as follows: - - ``` - - - - -
- - ``` - -- Configure the I/O thread attribute for the virtio-scsi controller. For example, to allocate I/O thread 2 to the virtio-scsi controller, set parameters as follows: - - ``` - - - -
- - ``` - -- Bind I/O threads to a physical CPU. - - Binding I/O threads to specified physical CPUs does not affect the resource usage of vCPU threads. **** indicates I/O thread IDs, and **** indicates IDs of the bound physical CPUs. - - ``` - - - - - ``` - - -#### Raw Device Mapping - -##### Overview - -When configuring VM storage devices, you can use configuration files to configure virtual disks for VMs, or connect block devices \(such as physical LUNs and LVs\) to VMs for use to improve storage performance. The latter configuration method is called raw device mapping \(RDM\). Through RDM, a virtual disk is presented as a small computer system interface \(SCSI\) device to the VM and supports most SCSI commands. - -RDM can be classified into virtual RDM and physical RDM based on backend implementation features. Compared with virtual RDM, physical RDM provides better performance and more SCSI commands. However, for physical RDM, the entire SCSI disk needs to be mounted to a VM for use. If partitions or logical volumes are used for configuration, the VM cannot identify the disk. - -##### Configuration Example - -VM configuration files need to be modified for RDM. The following is a configuration example. - -- Virtual RDM - - The following is an example of mounting the SCSI disk **/dev/sdc** on the host to the VM as a virtual raw device: - - ``` - - - ... - - - - - -
- - ... - - - ``` - - -- Physical RDM - - The following is an example of mounting the SCSI disk **/dev/sdc** on the host to the VM as a physical raw device: - - ``` - - - ... - - - - - -
- - ... - - - ``` - - -#### kworker Isolation and Binding - -##### Overview - -kworker is a per-CPU thread implemented by the Linux kernel. It is used to execute workqueue requests in the system. kworker threads will compete for physical core resources with vCPU threads, resulting in virtualization service performance jitter. To ensure that the VM can run stably and reduce the interference of kworker threads on the VM, you can bind kworker threads on the host to a specific CPU. - -##### Instructions - -You can modify the **/sys/devices/virtual/workqueue/cpumask** file to bind tasks in the workqueue to the CPU specified by **cpumasks**. Masks in **cpumask** are in hexadecimal format. For example, if you need to bind kworker to CPU0 to CPU7, run the following command to change the mask to **ff**: - -``` -# echo ff > /sys/devices/virtual/workqueue/cpumask -``` - -#### HugePage Memory - -##### Overview - -Compared with traditional 4 KB memory paging, openEuler also supports 2 MB/1 GB memory paging. HugePage memory can effectively reduce TLB misses and significantly improve the performance of memory-intensive services. openEuler uses two technologies to implement HugePage memory. - -- Static HugePages - - The static HugePage requires that a static HugePage pool be reserved before the host OS is loaded. When creating a VM, you can modify the XML configuration file to specify that the VM memory is allocated from the static HugePage pool. The static HugePage ensures that all memory of a VM exists on the host as the HugePage to ensure physical continuity. However, the deployment difficulty is increased. After the page size of the static HugePage pool is changed, the host needs to be restarted for the change to take effect. The size of a static HugePage can be 2 MB or 1 GB. - - -- THP - - If the transparent HugePage \(THP\) mode is enabled, the VM automatically selects available 2 MB consecutive pages and automatically splits and combines HugePages when allocating memory. 
When no 2 MB consecutive pages are available, the VM selects available 64 KB \(AArch64 architecture\) or 4 KB \(x86\_64 architecture\) pages for allocation. By using THP, users do not need to be aware of it and 2 MB HugePages can be used to improve memory access performance. - - -If VMs use static HugePages, you can disable THP to reduce the overhead of the host OS and ensure stable VM performance. - -##### Instructions - -- Configure static HugePages. - - Before creating a VM, modify the XML file to configure a static HugePage for the VM. - - ``` - - - - - - ``` - - The preceding XML segment indicates that a 1 GB static HugePage is configured for the VM. - - ``` - - - - - - ``` - - The preceding XML segment indicates that a 2 MB static HugePage is configured for the VM. - -- Configure transparent HugePage. - - Dynamically enable the THP through sysfs. - - ``` - # echo always > /sys/kernel/mm/transparent_hugepage/enabled - ``` - - Dynamically disable the THP. - - ``` - # echo never > /sys/kernel/mm/transparent_hugepage/enabled - ``` - - -### security Best Practices - -#### Libvirt Authentication - -##### Overview - -When a user uses libvirt remote invocation but no authentication is performed, any third-party program that connects to the host's network can operate VMs through the libvirt remote invocation mechanism. This poses security risks. To improve system security, openEuler provides the libvirt authentication function. That is, users can remotely invoke a VM through libvirt only after identity authentication. Only specified users can access the VM, thereby protecting VMs on the network. - -##### Enabling Libvirt Authentication - -By default, the libvirt remote invocation function is disabled on openEuler. This following describes how to enable the libvirt remote invocation and libvirt authentication functions. - -1. Log in to the host. -2. 
Modify the libvirt service configuration file **/etc/libvirt/libvirtd.conf** to enable the libvirt remote invocation and libvirt authentication functions. For example, to enable the TCP remote invocation that is based on the Simple Authentication and Security Layer \(SASL\) framework, configure parameters by referring to the following: - - ``` - #Transport layer security protocol. The value 0 indicates that the protocol is disabled, and the value 1 indicates that the protocol is enabled. You can set the value as needed. - listen_tls = 0 - #Enable the TCP remote invocation. To enable the libvirt remote invocation and libvirt authentication functions, set the value to 1. - listen_tcp = 1 - #User-defined protocol configuration for TCP remote invocation. The following uses sasl as an example. - auth_tcp = "sasl" - ``` - -3. Modify the **/etc/sasl2/libvirt.conf** configuration file to set the SASL mechanism and SASLDB. - - ``` - #Authentication mechanism of the SASL framework. - mech_list: digest-md5 - #Database for storing usernames and passwords - sasldb_path: /etc/libvirt/passwd.db - ``` - -4. Add the user for SASL authentication and set the password. Take the user **userName** as an example. The command is as follows: - - ``` - # saslpasswd2 -a libvirt userName - Password: - Again (for verification): - ``` - -5. Modify the **/etc/sysconfig/libvirtd** configuration file to enable the libvirt listening option. - - ``` - LIBVIRTD_ARGS="--listen" - ``` - -6. Restart the libvirtd service to make the modification to take effect. - - ``` - # systemctl restart libvirtd - ``` - -7. Check whether the authentication function for libvirt remote invocation takes effect. Enter the username and password as prompted. If the libvirt service is successfully connected, the function is successfully enabled. 
- - ``` - # virsh -c qemu+tcp://192.168.0.1/system - Please enter your authentication name: openeuler - Please enter your password: - Welcome to virsh, the virtualization interactive terminal. - - Type: 'help' for help with commands - 'quit' to quit - - virsh # - ``` - - -##### Managing SASL - -The following describes how to manage SASL users. - -- Query an existing user in the database. - - ``` - # sasldblistusers2 -f /etc/libvirt/passwd.db - user@localhost.localdomain: userPassword - ``` - -- Delete a user from the database. - - ``` - # saslpasswd2 -a libvirt -d user - ``` - - -#### qemu-ga - -##### Overview - -QEMU guest agent \(qemu-ga\) is a daemon running within VMs. It allows users on a host OS to perform various management operations on the guest OS through outband channels provided by QEMU. The operations include file operations \(open, read, write, close, seek, and flush\), internal shutdown, VM suspend \(suspend-disk, suspend-ram, and suspend-hybrid\), and obtaining of VM internal information \(including the memory, CPU, NIC, and OS information\). - -In some scenarios with high security requirements, qemu-ga provides the blacklist function to prevent internal information leakage of VMs. You can use a blacklist to selectively shield some functions provided by qemu-ga. - ->![](./public_sys-resources/icon-note.gif) **NOTE:** ->The qemu-ga installation package is **qemu-guest-agent-**_xx_**.rpm**. It is not installed on openEuler by default. _xx_ indicates the actual version number. - -##### Procedure - -To add a qemu-ga blacklist, perform the following steps: - -1. Log in to the VM and ensure that the qemu-guest-agent service exists and is running. - - ``` - # systemctl status qemu-guest-agent |grep Active - Active: active (running) since Wed 2018-03-28 08:17:33 CST; 9h ago - ``` - -2. Query which **qemu-ga** commands can be added to the blacklist: - - ``` - # qemu-ga --blacklist ? 
- guest-sync-delimited - guest-sync - guest-ping - guest-get-time - guest-set-time - guest-info - ... - ``` - - -1. Set the blacklist. Add the commands to be shielded to **--blacklist** in the **/usr/lib/systemd/system/qemu-guest-agent.service** file. Use spaces to separate different commands. For example, to add the **guest-file-open** and **guest-file-close** commands to the blacklist, configure the file by referring to the following: - - ``` - [Service] - ExecStart=-/usr/bin/qemu-ga \ - --blacklist=guest-file-open guest-file-close - ``` - - -1. Restart the qemu-guest-agent service. - - ``` - # systemctl daemon-reload - # systemctl restart qemu-guest-agent - ``` - -2. Check whether the qemu-ga blacklist function takes effect on the VM, that is, whether the **--blacklist** parameter configured for the qemu-ga process is correct. - - ``` - # ps -ef|grep qemu-ga|grep -E "blacklist=|b=" - root 727 1 0 08:17 ? 00:00:00 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=guest-file-open guest-file-close guest-file-read guest-file-write guest-file-seek guest-file-flush -F/etc/qemu-ga/fsfreeze-hook - ``` - - >![](./public_sys-resources/icon-note.gif) **NOTE:** - >For more information about qemu-ga, visit [https://wiki.qemu.org/Features/GuestAgent](https://wiki.qemu.org/Features/GuestAgent). - - -#### sVirt Protection - -##### Overview - -In a virtualization environment that uses the discretionary access control \(DAC\) policy only, malicious VMs running on hosts may attack the hypervisor or other VMs. To improve security in virtualization scenarios, openEuler uses sVirt for protection. sVirt is a security protection technology based on SELinux. It is applicable to KVM virtualization scenarios. A VM is a common process on the host OS. In the hypervisor, the sVirt mechanism labels QEMU processes corresponding to VMs with SELinux labels. 
In addition to types which are used to label virtualization processes and files, different categories are used to label different VMs. Each VM can access only file devices of the same category. This prevents VMs from accessing files and devices on unauthorized hosts or other VMs, thereby preventing VM escape and improving host and VM security. - -##### Enabling sVirt Protection - -1. Enable SELinux on the host. - 1. Log in to the host. - 2. Enable the SELinux function on the host. - 1. Modify the system startup parameter file **grub.cfg** to set **selinux** to **1**. - - ``` - selinux=1 - ``` - - 2. Modify **/etc/selinux/config** to set the **SELINUX** to **enforcing**. - - ``` - SELINUX=enforcing - ``` - - 3. Restart the host. - - ``` - # reboot - ``` - - - -1. Create a VM where the sVirt function is enabled. - 1. Add the following information to the VM configuration file: - - ``` - - ``` - - Or check whether the following configuration exists in the file: - - ``` - - ``` - - 2. Create a VM. - - ``` - # virsh define openEulerVM.xml - ``` - -2. Check whether sVirt is enabled. - - Run the following command to check whether sVirt protection has been enabled for the QEMU process of the running VM. If **svirt\_t:s0:c** exists, sVirt protection has been enabled. - - ``` - # ps -eZ|grep qemu |grep "svirt_t:s0:c" - system_u:system_r:svirt_t:s0:c200,c947 11359 ? 00:03:59 qemu-kvm - system_u:system_r:svirt_t:s0:c427,c670 13790 ? 19:02:07 qemu-kvm - ``` - - diff --git a/docs/en/docs/astream/astream-for-mysql-guide.md b/docs/en/docs/astream/astream-for-mysql-guide.md deleted file mode 100644 index 03a444e1c55585d4210283a3cd93ac9d5b0ad8d9..0000000000000000000000000000000000000000 --- a/docs/en/docs/astream/astream-for-mysql-guide.md +++ /dev/null @@ -1,572 +0,0 @@ -# Test Procedure for astream-Enabled MySQL - -## 1. Environment Requirements - -### 1.1 Hardware - -A server machine and a client machine are required. 
- -| | Server | Client | -| :-------------- | :-------------------------------: | :-------------------------: | -| CPU | 2 x Kunpeng 920-6426 | 2 x Kunpeng 920-6426 | -| Number of Cores | 2 x 64 | 2 x 64 | -| CPU Frequency | 2600MHz | 2600MHz | -| Memory | 16 x Samsung 32 GB 2666 MHz | 16 x Samsung 32 GB 2666 MHz | -| Network | SP580 10GE | SP580 10GE | -| System Drive | 1.2T HDD TOSHIBA | 1.12 HDD TOSHIBA | -| Data Drive | 2 x 1.6T ES3000 V5 NVMe PCIe SSDs | NA | - -### 1.2 Software - -| Software | Version | -| :----------: | :-----: | -| MySQL | 8.0.20 | -| BenchmarkSQL | 5.0 | - -### 1.3 Networking - - - -## 2. Deployment on the Server - -### 2.1 Installing MySQL Dependency Packages - -```shell -yum install -y cmake doxygen bison ncurses-devel openssl-devel libtool tar rpcgen libtirpc-devel bison bc unzip git gcc-c++ libaio libaio-devel numactl -``` - -### 2.2 Compiling and Installing MySQL - -- Download the source package from the [official website](https://downloads.mysql.com/archives/community/). - -- Download the optimization patches for [fine-grained locking](https://github.com/kunpengcompute/mysql-server/releases/download/tp_v1.0.0/0001-SHARDED-LOCK-SYS.patch), [NUMA scheduling](https://github.com/kunpengcompute/mysql-server/releases/download/21.0.RC1.B031/0001-SCHED-AFFINITY.patch), and [lock-free tuning](https://github.com/kunpengcompute/mysql-server/releases/download/tp_v1.0.0/0002-LOCK-FREE-TRX-SYS.patch). - -- Compile MySQL. Ensure that the libaio-devel package has been installed in advance. - - ```shell - tar zxvf mysql-boost-8.0.20.tar.gz - cd mysql-8.0.20/ - patch -p1 < ../0001-SHARDED-LOCK-SYS.patch - patch -p1 < ../0001-SCHED-AFFINITY.patch - patch -p1 < ../0002-LOCK-FREE-TRX-SYS.patch - cd cmake - make clean - cmake .. 
-DCMAKE_INSTALL_PREFIX=/usr/local/mysql-8.0.20 -DWITH_BOOST=../boost -DDOWNLOAD_BOOST=1 - make -j 64 - make install - ``` - -### 2.3 Configuring MySQL Parameters - -To produce enough drive load, **two MySQL instances run simultaneously** during the test. The configuration file of instance 1 is **/etc/my-1.cnf**, and the configuration file of instance 2 is **/etc/my-2.cnf**. - -- **/etc/my-1.cnf** - -``` -[mysqld_safe] -log-error=/data/mysql-1/log/mysql.log -pid-file=/data/mysql-1/run/mysqld.pid - -[client] -socket=/data/mysql-1/run/mysql.sock -default-character-set=utf8 - -[mysqld] -server-id=3306 -#log-error=/data/mysql-1/log/mysql.log -#basedir=/usr/local/mysql -socket=/data/mysql-1/run/mysql.sock -tmpdir=/data/mysql-1/tmp -datadir=/data/mysql-1/data -default_authentication_plugin=mysql_native_password -port=3306 -user=root -#innodb_page_size=4k - -max_connections=2000 -back_log=4000 -performance_schema=OFF -max_prepared_stmt_count=128000 -#transaction_isolation=READ-COMMITTED -#skip-grant-tables - -#file -innodb_file_per_table -innodb_log_file_size=2048M -innodb_log_files_in_group=32 -innodb_open_files=10000 -table_open_cache_instances=64 - -#buffers -innodb_buffer_pool_size=150G # Adjust the value based on the system memory size. 
-innodb_buffer_pool_instances=16 -innodb_log_buffer_size=2048M -#innodb_undo_log_truncate=OFF - -#tune -default_time_zone=+8:00 -#innodb_numa_interleave=1 -thread_cache_size=2000 -sync_binlog=1 -innodb_flush_log_at_trx_commit=1 -innodb_use_native_aio=1 -innodb_spin_wait_delay=180 -innodb_sync_spin_loops=25 -innodb_flush_method=O_DIRECT -innodb_io_capacity=30000 -innodb_io_capacity_max=40000 -innodb_lru_scan_depth=9000 -innodb_page_cleaners=16 -#innodb_spin_wait_pause_multiplier=25 - -#perf special -innodb_flush_neighbors=0 -innodb_write_io_threads=24 -innodb_read_io_threads=16 -innodb_purge_threads=32 - -sql_mode=STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,NO_AUTO_VALUE_ON_ZERO,STRICT_ALL_TABLES - -#skip_log_bin -log-bin=mysql-bin # Enable mysql-bin. -binlog_expire_logs_seconds=1800 # Set a value so that the generated data volume meets the requirement for long-time running. -ssl=0 -table_open_cache=30000 -max_connect_errors=2000 -innodb_adaptive_hash_index=0 - -mysqlx=0 -``` - -- **/etc/my-2.cnf** - -``` -[mysqld_safe] -log-error=/data/mysql-2/log/mysql.log -pid-file=/data/mysql-2/run/mysqld.pid - -[client] -socket=/data/mysql-2/run/mysql.sock -default-character-set=utf8 - -[mysqld] -server-id=3307 -#log-error=/data/mysql-2/log/mysql.log -#basedir=/usr/local/mysql -socket=/data/mysql-2/run/mysql.sock -tmpdir=/data/mysql-2/tmp -datadir=/data/mysql-2/data -default_authentication_plugin=mysql_native_password -port=3307 -user=root -#innodb_page_size=4k - -max_connections=2000 -back_log=4000 -performance_schema=OFF -max_prepared_stmt_count=128000 -#transaction_isolation=READ-COMMITTED -#skip-grant-tables - -#file -innodb_file_per_table -innodb_log_file_size=2048M -innodb_log_files_in_group=32 -innodb_open_files=10000 -table_open_cache_instances=64 - -#buffers -innodb_buffer_pool_size=150G # Adjust the value based on the system memory size. 
-innodb_buffer_pool_instances=16 -innodb_log_buffer_size=2048M -#innodb_undo_log_truncate=OFF - -#tune -default_time_zone=+8:00 -#innodb_numa_interleave=1 -thread_cache_size=2000 -sync_binlog=1 -innodb_flush_log_at_trx_commit=1 -innodb_use_native_aio=1 -innodb_spin_wait_delay=180 -innodb_sync_spin_loops=25 -innodb_flush_method=O_DIRECT -innodb_io_capacity=30000 -innodb_io_capacity_max=40000 -innodb_lru_scan_depth=9000 -innodb_page_cleaners=16 -#innodb_spin_wait_pause_multiplier=25 - -#perf special -innodb_flush_neighbors=0 -innodb_write_io_threads=24 -innodb_read_io_threads=16 -innodb_purge_threads=32 - -sql_mode=STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,NO_AUTO_VALUE_ON_ZERO,STRICT_ALL_TABLES - -log-bin=mysql-bin -#skip_log_bin # Enable mysql-bin. -binlog_expire_logs_seconds=1800 # Set a value so that the generated data volume meets the requirement for long-time running. -ssl=0 -table_open_cache=30000 -max_connect_errors=2000 -innodb_adaptive_hash_index=0 - -mysqlx=0 -``` - -### 2.4 Deploying MySQL - -```shell -#!/bin/bash -systemctl stop firewalld -systemctl disable irqbalance -echo 3 > /proc/sys/vm/drop_caches -mysql=mysql-8.0.20 -prepare_mysql_data() -{ - umount /dev/nvme0n1 - rm -rf /data - mkfs.xfs /dev/nvme0n1 -f - groupadd mysql - useradd -g mysql mysql - mkdir /data - mount /dev/nvme0n1 /data - mkdir -p /data/{mysql-1,mysql-2} - mkdir -p /data/mysql-1/{data,run,share,tmp,log} - mkdir -p /data/mysql-2/{data,run,share,tmp,log} - chown -R mysql:mysql /data - chown -R mysql:mysql /data/mysql-1 - chown -R mysql:mysql /data/mysql-2 - touch /data/mysql-1/log/mysql.log - touch /data/mysql-2/log/mysql.log - chown -R mysql:mysql /data/mysql-1/log/mysql.log - chown -R mysql:mysql /data/mysql-2/log/mysql.log -} -init_mysql() -{ - /usr/local/$mysql/bin/mysqld --defaults-file=/etc/my.cnf --user=root --initialize - /usr/local/$mysql/support-files/mysql.server start - sed -i 's/#skip-grant-tables/skip-grant-tables/g' /etc/my.cnf - 
/usr/local/$mysql/support-files/mysql.server restart - /usr/local/$mysql/bin/mysql -u root -p123456 < - -After the restart, enable the STEAL mode. - -```shell -echo STEAL > /sys/kernel/debug/sched_features -``` - -### 4.2 Stopping Items That Affect the Test - -```shell -# Stop irqbalance. -systemctl stop irqbalance.service -systemctl disable irqbalance.service - -# Stop the firewall. -systemctl stop iptables -systemctl stop firewalld -``` - -### 4.3 Configuring NIC Interrupt-Core Binding - -```shell -# Bind the interrupts on the server. (Replace the NIC name and CPU cores to be bound based on the environment.) -ethtool -L enp4s0 combined 6 -irq1=`cat /proc/interrupts| grep -E enp4s0 | head -n5 | awk -F ':' '{print $1}'` -cpulist=(61 62 63 64 65 66) ## Set the cores for handling NIC interrupts based on the environment. -c=0 -for irq in $irq1 -do -echo ${cpulist[c]} "->" $irq -echo ${cpulist[c]} > /proc/irq/$irq/smp_affinity_list -let "c++" -done -``` - -### 4.4 Installing the nvme-cli Tool - -nvme-cli is a command-line tool used to monitor, configure, and manage NVMe devices. nvme-cli can be used to enable the NVMe SSD multi-stream feature and obtain controller logs through `log` commands. - -```shell -yum install nvme-cli -``` - -### 4.5 Enabling the NVMe Multi-Stream Feature - -- Run the following command to check the multi-stream feature status of the NVMe SSD: - - ```shell - nvme dir-receive /dev/nvme0n1 -n 0x1 -D 0 -O 1 -H - ``` - - - - The command output indicates that the NVMe SSD supports Stream Directive, that is, the multi-stream feature, which is currently disabled. - -- Enable the multi-stream function. - - ```shell - modprobe -r nvme - modprobe nvme-core streams=1 - modprobe nvme - ``` - -- Check the multi-stream feature status of the NVMe SSD again. - - - - The command output indicates that the multi-stream feature has been enabled for the NVMe SSD. 
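- -The `modprobe` sequence above enables multi-stream only for the current boot. To keep the setting across reboots, the same parameter can be recorded as a module option; this is a sketch, and the file name under **/etc/modprobe.d/** is an example, not mandated by this guide: - -```shell -# /etc/modprobe.d/nvme-streams.conf (example file name) -# Load nvme-core with the multi-stream feature enabled at boot. -options nvme-core streams=1 -``` -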
- -### 4.6 Preparing Data for the MySQL Instances - -To unify the baseline test and multi-stream test processes, format the drive before each test and copy two copies of data from the **/bak** directory to the corresponding data directories of the two MySQL instances. - -```shell -prepare_mysql_data() -{ - umount /dev/nvme0n1 - rm -rf /data - mkfs.xfs /dev/nvme0n1 -f - mkdir /data - mount /dev/nvme0n1 /data - mkdir -p /data/{mysql-1,mysql-2} - mkdir -p /data/mysql-1/{data,run,share,tmp,log} - mkdir -p /data/mysql-2/{data,run,share,tmp,log} - chown -R mysql:mysql /data - chown -R mysql:mysql /data/mysql-1 - chown -R mysql:mysql /data/mysql-2 - touch /data/mysql-1/log/mysql.log - touch /data/mysql-2/log/mysql.log - chown -R mysql:mysql /data/mysql-1/log/mysql.log - chown -R mysql:mysql /data/mysql-2/log/mysql.log -} - -# Format the drive, create the data directories of the two MySQL instances, then start astream. -prepare_mysql_data -astream -i /data/mysql-1/data /data/mysql-2/data -r rule1.txt rule2.txt # Skip this step when testing the baseline version. -cp -r /bak/* /data/mysql-1/data -cp -r /bak/* /data/mysql-2/data -``` - -Run the `df -h` command to check whether the space usage of the /dev/nvme0n1 drive is about 60% to 70%. - -### 4.7 Starting and Binding the MySQL Services - -```shell -# Start two MySQL instances. -numactl -C 0-60 -i 0-3 /usr/local/bin/mysqld --defaults-file=/etc/my-1.cnf & -numactl -C 67-127 -i 0-3 /usr/local/bin/mysqld --defaults-file=/etc/my-2.cnf & -``` - -### 4.8 Setting a Scheduled Task - -After data is successfully copied or generated, to measure the write amplification factor (WAF) of the drive before the MySQL test, use the `crontab` timer to execute the **calculate_wa.sh** script (see the following) for calculating the drive WAF every hour during the 12-hour test. 
- -```shell -#!/bin/bash - -source /etc/profile -source ~/.bash_profile - -BASE_PATH=$(cd $(dirname $0);pwd) -diskName=$1 - -echo 0x`/usr/bin/nvme get-log /dev/${diskName}n1 -i 0xc0 -n 0xffffffff -l 800|grep "01c0:"|awk '{print $13$12$11$10$9$8$7$6}'` >> ${BASE_PATH}/host_tmp -echo 0x`/usr/bin/nvme get-log /dev/${diskName}n1 -i 0xc0 -n 0xffffffff -l 800|grep "01d0:"|awk '{print $9$8$7$6$5$4$3$2}'` >> ${BASE_PATH}/gc_tmp - -# Host I/O write count, unit: 4 KiB # -hostWriteHexSectorTemp=`tail -1 ${BASE_PATH}/host_tmp` -# GC write count, unit: 4 KiB # -gcWriteHexSectorTemp=`tail -1 ${BASE_PATH}/gc_tmp` -hostWriteDecSectorTemp=`printf "%llu" ${hostWriteHexSectorTemp}` -gcWriteDecSectorTemp=`printf "%llu" ${gcWriteHexSectorTemp}` -preHostValue=`tail -2 ${BASE_PATH}/host_tmp|head -1` -preGcValue=`tail -2 ${BASE_PATH}/gc_tmp|head -1` -preHostValue=`printf "%llu" ${preHostValue}` -preGcValue=`printf "%llu" ${preGcValue}` - -# Host I/O writes during the interval -hostWrittenSector=$(echo ${hostWriteDecSectorTemp}-${preHostValue} | bc -l) -# GC writes during the interval -gcWrittenSector=$(echo ${gcWriteDecSectorTemp}-${preGcValue} | bc -l) -nandSector=$(echo ${hostWrittenSector}+${gcWrittenSector} | bc -l) - -# Convert 4 KiB units to MB (256 x 4 KiB = 1 MB) -hostWrittenMB=$((${hostWrittenSector}/256)) -nandWrittenMB=$((${nandSector}/256)) - -# Compute the WAF -WA=$(echo "scale=5;${nandSector}/${hostWrittenSector}" | bc) -echo $nandWrittenMB $hostWrittenMB $WA >> ${BASE_PATH}/result_WA.txt -``` - -You can run the `crontab -e` command to add a scheduled task for executing the script command every hour. The command is as follows: - -```shell -0 */1 * * * bash /root/calculate_wa.sh nvme0 -``` - -If the device name of the tested NVMe drive is **/dev/nvme0n1**, pass **nvme0** to the script as the parameter of the scheduled task. 
- -Enter the root directory of the tool on the client and start the test: - -```shell -cd benchmarksql5.0-for-mysql -./runBenchmark.sh props.conf -./runBenchmark.sh props-2.conf -``` - -### 4.10 Stopping the astream Process - -You do not need to perform this step after the baseline test. After the multi-stream test is complete, run the following command to stop the astream process: - -```shell -astream stop -``` - -## 5 Test Results - -The result generated by the scheduled script is in the **result_WA.txt** file in the directory where the script is located. After each test is complete, select the latest 12 non-zero data records in the file. - -Each line written to the **result_WA.txt** file contains three values: - -- Amount of data actually written to the drive within one hour. -- Write volume submitted by the host within one hour. -- Current drive WAF. You can also recalculate the drive WAF for each hour using the formula in the appendix. - -According to the current test results, when astream is used and MySQL runs stably for a long time, the WAF of the NVMe SSD decreases by 12% compared with the baseline. - -## 6 Appendix - -**Write amplification (WA)** is an undesirable phenomenon associated with flash memory and SSDs where the actual amount of information physically written to the drive is a multiple of the logical amount intended to be written. The formula for calculating the write amplification factor (WAF) is as follows: - -$$ -WAF=\frac{\text{Actual data volume written to the drive}}{\text{Data volume submitted by the host}} -$$ - -Generally, as stored data accumulates and drive fragmentation worsens, the WAF increases. If the WAF increase can be delayed, the service life of the drive can be prolonged. 
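The formula can be exercised with the same arithmetic **calculate_wa.sh** uses. A minimal sketch with made-up counter values; real values come from the drive's 0xc0 log page in 4 KiB units (awk is used here instead of `bc` purely for illustration):

```shell
# Stand-alone check of the WAF arithmetic. The two counters below are
# made-up values in 4 KiB units, as read from the vendor log page.
hostWrittenSector=2560000      # host writes in the interval
gcWrittenSector=640000         # garbage-collection writes in the interval
nandSector=$((hostWrittenSector + gcWrittenSector))

# Convert 4 KiB units to MB (256 x 4 KiB = 1 MB).
hostWrittenMB=$((hostWrittenSector / 256))
nandWrittenMB=$((nandSector / 256))

# WAF = physical (host + GC) writes / host writes
WAF=$(awk -v n="$nandSector" -v h="$hostWrittenSector" 'BEGIN { printf "%.5f", n / h }')
echo "$nandWrittenMB $hostWrittenMB $WAF"
```

With these numbers the script would log `12500 10000 1.25000`, that is, a WAF of 1.25: a quarter of the physical writes are garbage-collection overhead.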
diff --git a/docs/en/docs/astream/figures/STEAL.png b/docs/en/docs/astream/figures/STEAL.png deleted file mode 100644 index 3c6aeab2a69f76353eb02e8b10299488c995d4c1..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/astream/figures/STEAL.png and /dev/null differ diff --git a/docs/en/docs/astream/figures/deployment.png b/docs/en/docs/astream/figures/deployment.png deleted file mode 100644 index 253b312a6e9c5baa4f3f2862cdaf69e06156544c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/astream/figures/deployment.png and /dev/null differ diff --git a/docs/en/docs/astream/figures/multi-stream_disabled.png b/docs/en/docs/astream/figures/multi-stream_disabled.png deleted file mode 100644 index 8f08500a44f1dc658870e5aa3702defee004d3b4..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/astream/figures/multi-stream_disabled.png and /dev/null differ diff --git a/docs/en/docs/astream/figures/multi-stream_enabled.png b/docs/en/docs/astream/figures/multi-stream_enabled.png deleted file mode 100644 index 1590dff910f5a35542d2ac06c07addceaf7888bb..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/astream/figures/multi-stream_enabled.png and /dev/null differ diff --git a/docs/en/docs/astream/installation_and_usage.md b/docs/en/docs/astream/installation_and_usage.md deleted file mode 100644 index 57988cea8df24f09b0b3ab1f811c66679b100129..0000000000000000000000000000000000000000 --- a/docs/en/docs/astream/installation_and_usage.md +++ /dev/null @@ -1,106 +0,0 @@ -# astream - -## Introduction - -astream is a tool for prolonging the service life of drives. It monitors directories based on the inotify mechanism of Linux and works with the stream allocation rules for application scenarios defined by the user to set stream information for matched files when they are created. Then, the stream information is transparently transmitted to the NVMe SSD with the multi-stream feature enabled through the kernel. 
Finally, files can be better classified and stored according to the stream identifier, which reduces drive garbage collection work and the write amplification factor of the drive, prolonging its service life. astream focuses on database applications whose workloads have data with the same or similar lifecycles, such as MySQL. - -## Installation - -After configuring the Yum source of openEuler 22.09, install astream using the `yum` command. - -``` -yum install astream -``` - -## Stream Allocation Rules - -Before getting into the usage of astream, you need to understand the stream allocation rule file required for starting astream. - -### Stream Allocation Rule File Example - -#### Introduction - -The stream allocation rule file allows you to define stream information rules for workloads based on their data lifecycle. - -Each line of the stream allocation rule file defines a rule, for example, **^/data/mysql/data/undo 4**. It means that any file in **/data/mysql/data** whose name starts with **undo** is allocated with stream 4. - -#### Example - -A complete stream allocation rule file for MySQL is as follows: - -``` -^/data/mysql/data/ib_logfile 2 -^/data/mysql/data/ibdata1$ 3 -^/data/mysql/data/undo 4 -^/data/mysql/data/mysql-bin 5 -``` - -The file defines four rules for stream information: - -- A file whose absolute path is prefixed with **/data/mysql/data/ib_logfile** is allocated with stream 2. -- A file whose absolute path exactly matches **/data/mysql/data/ibdata1** (the rule is anchored with `$`) is allocated with stream 3. -- A file whose absolute path is prefixed with **/data/mysql/data/undo** is allocated with stream 4. -- A file whose absolute path is prefixed with **/data/mysql/data/mysql-bin** is allocated with stream 5. 
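Because each rule is an ordinary regular expression, the rule set above can be dry-checked with `grep -E` before astream is started. The `match_stream` helper below only illustrates the matching semantics; it is not astream's actual implementation:

```shell
# Illustrative matcher: print the stream a path would receive under the
# example rule file above. This mimics the matching semantics only;
# it is not astream's implementation.
match_stream() {
    _path=$1
    while read -r _pattern _stream; do
        if printf '%s\n' "$_path" | grep -qE "$_pattern"; then
            echo "$_stream"
            return 0
        fi
    done <<'EOF'
^/data/mysql/data/ib_logfile 2
^/data/mysql/data/ibdata1$ 3
^/data/mysql/data/undo 4
^/data/mysql/data/mysql-bin 5
EOF
    echo "none"
}

match_stream /data/mysql/data/undo_001    # stream 4 (prefix match)
match_stream /data/mysql/data/ibdata1     # stream 3 (exact match)
match_stream /data/mysql/data/ibdata12    # none: the $ anchor rejects it
```

The last call shows why the `$` anchor matters: without it, **ibdata12** would also be pulled into stream 3.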
- -- Monitoring a single directory: - - ```shell - astream -i /data/mysql/data -r /home/stream_rule1.txt - ``` - -- Monitoring multiple directories: - - astream can monitor multiple directories. Each directory requires a stream allocation rule file. - - For example, to monitor two directories: - - ```shell - astream -i /data/mysql-1/data /data/mysql-2/data -r /home/stream_rule1.txt /home/stream_rule2.txt - ``` - -The preceding command is used to monitor the following directories: - -- **/data/mysql-1/data**, whose stream allocation rule file is **/home/stream_rule1.txt**. -- **/data/mysql-2/data**, whose stream allocation rule file is **/home/stream_rule2.txt**. - -## Command Options - -```shell -astream [options] -``` - -| Option | Description | Example | -| ------ | ------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | -| -h | Displays help information | `astream -h` | -| -l | Sets the log level for astream monitoring. Log levels include the debug(1), info(2), warn(3), and error(4) levels. | `astream -i /data/mysql/data -r /home/rule.txt -l 2` | -| -i | Specifies the directories to be monitored. Separate multiple directories with spaces. | This option is used with `-r`. See the example below. | -| -r | Specifies the stream allocation rule files corresponding to the monitored directories. Each parameter of `-r` corresponds to a parameter of `-i`. | `astream -i /data/mysql/data -r /home/rule.txt` | -| stop | Stops the astream daemon gracefully. | `astream stop` | - -## Restrictions - -Restrictions for using astream are as follows. - -### Function Restrictions - -- Only the NVMe SSDs with the multi-stream feature enabled are supported. -- A maximum of 5 streams can be allocated. 
The number of streams is limited by the **BLK_MAX_WRITE_HINTS** constant in the kernel and the maximum number of streams supported by the NVMe SSD. - -### Operation Restrictions - -- Run the astream daemon with **root** privileges during the test. -- The drive to be tested should be under sufficient I/O pressure and space usage. In such conditions, the write amplification factor is high and the multi-stream feature benefits more from astream. - -## Precautions - -- When the astream daemon is running, do not delete a monitored directory and create it again. The recreated directory is not monitored until the astream daemon is restarted. -- You can use regular expressions to match multiple files in the rule file. -- The NVMe SSD used in the test implements the multi-stream feature of NVMe 1.3. diff --git a/docs/en/docs/astream/overview.md b/docs/en/docs/astream/overview.md deleted file mode 100644 index 9b9b256470b74bec8f8d8059a533bfe4169b898c..0000000000000000000000000000000000000000 --- a/docs/en/docs/astream/overview.md +++ /dev/null @@ -1,3 +0,0 @@ -# astream User Guide - -astream is a tool for prolonging the service life of drives. It monitors directories based on the inotify mechanism of Linux and works with the stream allocation rules for application scenarios defined by the user to set stream information for matched files when they are created. Then, the stream information is transparently transmitted to the NVMe SSD with the multi-stream feature enabled through the kernel. Finally, files can be better classified and stored according to the stream identifier, which reduces drive garbage collection work and the write amplification factor of the drive, prolonging its service life. astream focuses on database applications whose workloads have data with the same or similar lifecycles, such as MySQL. 
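Given the five-stream cap described in the restrictions above, a rule file can be scanned for out-of-range stream IDs before astream is started. A hedged sketch; this pre-check is an illustration and not part of astream itself:

```shell
# Pre-check (not part of astream): succeed only if every stream ID in a
# rule file is within 1..5, matching the documented stream cap.
check_rule_file() {
    awk 'NF >= 2 && ($2 < 1 || $2 > 5) { bad = 1 } END { exit bad }' "$1"
}

# Demonstrate on a throwaway rule file.
rules=$(mktemp)
printf '%s\n' '^/data/mysql/data/ib_logfile 2' '^/data/mysql/data/undo 4' > "$rules"
if check_rule_file "$rules"; then
    echo "rule file OK"
else
    echo "rule file uses a stream ID outside 1-5"
fi
rm -f "$rules"
```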
diff --git a/docs/en/docs/desktop/HA_use_cases.md b/docs/en/docs/desktop/HA_use_cases.md deleted file mode 100644 index 9358d970242314620efc6c60e2802d8842310c58..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/HA_use_cases.md +++ /dev/null @@ -1,712 +0,0 @@ -# HA Usage Example - -This section describes how to get started with the HA cluster and add an instance. If you are not familiar with HA installation, see [Installing and Deploying HA](./installing-and-deploying-HA.md). - -## Quick Start Guide - -- The following operations use the management platform newly developed by the community as an example. - -### Login Page - -The user name is `hacluster`, and the password is the one set on the host by the user. - -![](./figures/HA-api.png) - -### Home Page - -After logging in to the system, the main page is displayed. The main page consists of the side navigation bar, the top operation area, the resource node list area, and the node operation floating area. - -The following describes the features and usage of the four areas in detail. - -![](./figures/HA-home-page.png) - -#### Navigation Bar - -The side navigation bar consists of two parts: the name and logo of the HA cluster software, and the system navigation. The system navigation consists of three parts: System, Cluster Configurations, and Tools. System is the default option and the corresponding item to the home page. It displays the information and operation entries of all resources in the system. Preference Settings and Heartbeat Configurations are under Cluster Configurations. Log Download and Quick Cluster Operation are under Tools. These two items are displayed in a pop-up box after you click them. - -#### Top Operation Area - -The current login user is displayed statically. When you hover the mouse cursor on the user icon, the operation menu items are displayed, including Refresh Settings and Log Out. 
After you click Refresh Settings, the Refresh Settings dialog box is displayed, where you can set the automatic refresh mode for the system. The options are Do not refresh automatically, Refresh every 5 seconds, and Refresh every 10 seconds. By default, Do not refresh automatically is selected. Click Log Out to log out and return to the login page. After that, you must log in again to continue accessing the system. - -![](./figures/HA-refresh.png) - -#### Resource Node List Area - -The resource node list displays the resource information such as Resource Name, Status, Resource Type, Service, and Running Node of all resources in the system, and the node information such as all nodes in the system and the running status of the nodes. In addition, you can Add, Edit, Start, Stop, Clear, Migrate, Migrate Back, and Delete the resources, and set Relationships for the resources. - -#### Node Operation Floating Area - -By default, the node operation floating area is collapsed. When you click a node in the heading of the resource node list, the node operation area is displayed on the right, as shown in the preceding figure. This area consists of the collapse button, the node name, the stop button, and the standby button, and provides the stop and standby operations. Click the arrow in the upper left corner of the area to collapse the area. - -### Preference Settings - -The following operations can also be performed on the command line. For example (for more command details, run the `pcs --help` command): - -```shell -pcs property set stonith-enabled=false -pcs property set no-quorum-policy=ignore -``` - -Run `pcs property` to view all settings. - -![](./figures/HA-firstchoice-cmd.png) - -- Click Preference Settings in the navigation bar. The Preference Settings dialog box is displayed. Change the values of No Quorum Policy and Stonith Enabled from the default values to the values shown in the figure below. Then, click OK. 
- -![](./figures/HA-firstchoice.png) - -#### Adding Resources - -##### Adding Common Resources - -Click Add Common Resource. The Create Resource dialog box is displayed. All mandatory configuration items of the resource are on the Basic page. After you select a Resource Type on the Basic page, other mandatory and optional configuration items of the resource are displayed. When you type in the resource configuration information, a gray text area is displayed on the right of the dialog box to describe the current configuration item. After all mandatory parameters are set, click OK to create a common resource or click Cancel to cancel the add operation. The configuration items on the Instance Attribute, Meta Attribute, and Operation Attribute pages are optional; the resource creation process is not affected if they are not configured. You can modify them as required, and the default values are used otherwise. - -The following uses Apache as an example to describe how to add an Apache resource. - -```shell -pcs resource create httpd ocf:heartbeat:apache -``` - -Check the resource running status: - -```shell -pcs status -``` - -![](./figures/HA-pcs-status.png) - -- Add the Apache resource: - -![](./figures/HA-add-resource.png) - -- If the following information is displayed, the resource is successfully added: - -![](./figures/HA-apache-suc.png) - -- The resource is successfully created and started, and runs on a node, for example, ha1. The Apache page is displayed. - -![](./figures/HA-apache-show.png) - -##### Adding Group Resources - -Adding group resources requires at least one common resource in the cluster. Click Add Group Resource. The Create Resource dialog box is displayed. All the parameters on the Basic tab page are mandatory. After setting the parameters, click OK to add the resource or click Cancel to cancel the add operation. - -- **Note: Group resources are started in the sequence of child resources. 
Therefore, you need to select child resources in sequence.** - -![](./figures/HA-group.png) - -If the following information is displayed, the resource is successfully added: - -![](./figures/HA-group-suc.png) - -##### Adding Clone Resources - -Click Add Clone Resource. The Create Resource dialog box is displayed. On the Basic page, enter the object to be cloned. The resource name is automatically generated. After entering the object name, click OK to add the resource, or click Cancel to cancel the add operation. - -![](./figures/HA-clone.png) - -If the following information is displayed, the resource is successfully added: - -![](./figures/HA-clone-suc.png) - -#### Editing Resources - -- Starting a resource: Select a target resource from the resource node list. The target resource must not be running. Start the resource. -- Stopping a resource: Select a target resource from the resource node list. The target resource must be running. Stop the resource. -- Clearing a resource: Select a target resource from the resource node list. Clear the resource. -- Migrating a resource: Select a target resource from the resource node list. The resource must be a common resource or a group resource in the running status. Migrate the resource to migrate it to a specified node. -- Migrating back a resource: Select a target resource from the resource node list. The resource must be a migrated resource. Migrate back the resource to clear the migration settings of the resource and migrate the resource back to the original node. - After you click Migrate Back, the status change of the resource item in the list is the same as that when the resource is started. -- Deleting a resource: Select a target resource from the resource node list. Delete the resource. - -#### Setting Resource Relationships - -Resource relationships are used to set restrictions for the target resources. There are three types of resource restrictions: resource location, resource collaboration, and resource order. 
- -- Resource location: sets the running level of the nodes in the cluster for the resource to determine the node where the resource runs during startup or switchover. The running levels are Master Node and Slave 1 in descending order. -- Resource collaboration: indicates whether the target resource and other resources in the cluster run on the same node. Same Node indicates that this resource must run on the same node as the target resource. Mutually Exclusive indicates that this resource cannot run on the same node as the target resource. -- Resource order: sets the order in which the target resource and other resources in the cluster are started. Front Resource indicates that this resource must be started before the target resource. Follow-up Resource indicates that this resource can be started only after the target resource is started. - -## HA MySQL Configuration Example - -- Configure three common resources separately, then add them as a group resource. - -### Configuring the Virtual IP Address - -On the home page, choose Add > Add Common Resource and set the parameters as follows: - -![](./figures/HA-vip.png) - -- The resource is successfully created and started and runs on a node, for example, ha1. The resource can be pinged and connected, and allows various operations after login. When the resource is switched to ha2, it can still be accessed normally. -- If the following information is displayed, the resource is successfully added: - -![](./figures/HA-vip-suc.png) - -### Configuring NFS Storage - -- Configure another host as the NFS server. - -Install the software packages: - -```shell -yum install -y nfs-utils rpcbind -``` - -Run the following command to disable the firewall: - -```shell -systemctl stop firewalld && systemctl disable firewalld -``` - -Modify the **/etc/selinux/config** file to set SELINUX to disabled. 
- -```shell -SELINUX=disabled -``` - -Start the services: - -```shell -systemctl start rpcbind && systemctl enable rpcbind -systemctl start nfs-server && systemctl enable nfs-server -``` - -Create a shared directory on the server: - -```shell -mkdir -p /test -``` - -Modify the NFS configuration file: - -```shell -vim /etc/exports -/test *(rw,no_root_squash) -``` - -Reload the service: - -```shell -systemctl reload nfs-server -``` - -Install the software packages on the client. MySQL must be installed first so that the NFS share can be mounted to the MySQL data path. - -```shell -yum install -y nfs-utils mariadb-server -``` - -On the home page, choose Add > Add Common Resource and configure the NFS resource as follows: - -![](./figures/HA-nfs.png) - -- The resource is successfully created and started and runs on a node, for example, ha1. The NFS is mounted to the **/var/lib/mysql** directory. The resource is switched to ha2. The NFS is unmounted from ha1 and automatically mounted to ha2. -- If the following information is displayed, the resource is successfully added: - -![](./figures/HA-nfs-suc.png) - -### Configuring MySQL - -On the home page, choose Add > Add Common Resource and configure the MySQL resource as follows: - -![](./figures/HA-mariadb.png) - -- If the following information is displayed, the resource is successfully added: - -![](./figures/HA-mariadb-suc.png) - -### Adding the Preceding Resources as a Group Resource - -- Add the three resources in the resource startup sequence. - -On the home page, choose Add > Add Group Resource and configure the group resource as follows: - -![](./figures/HA-group-new.png) - -- The group resource is successfully created and started. If the command output is the same as that of the preceding common resources, the group resource is successfully added. - -![](./figures/HA-group-new-suc.png) - -- Use ha1 as the standby node and migrate the group resource to the ha2 node. The system is running properly. 
- -![](./figures/HA-group-new-suc2.png) - -## Quorum Device Configuration - -Note: The cluster must be running properly, and the cluster attributes must be set as follows. - -```sh -[root@ha1 ~]# pcs property set no-quorum-policy=stop -[root@ha1 ~]# pcs property set stonith-enabled=false -``` - -Select a new machine as the quorum device. - -### Installing Quorum Software - -- Install corosync-qdevice on a cluster node, for example, ha1. - -```sh -[root@ha1:~]# dnf install corosync-qdevice -y -``` - -- Install pcs and corosync-qnetd on the quorum device host. - -```sh -[root@qdevice:~]# dnf install pcs corosync-qnetd -y -``` - -- Start the pcsd service on the quorum device host and enable the pcsd service to start upon system startup. - -```sh -[root@qdevice:~]# systemctl start pcsd && systemctl enable pcsd -``` - -### Modifying the Host Name and the /etc/hosts File - -**Note: Perform the following operations on all three hosts. The following uses one host as an example.** - -Before using the quorum function, change the host name, write all host names to the **/etc/hosts** file, and set the password for the **hacluster** user. - -- Change the host name. - -```shell -hostnamectl set-hostname qdevice -``` - -- Write the IP addresses and host names to the **/etc/hosts** file. - -```text -10.1.167.105 ha1 -10.1.167.105 ha2 -10.1.167.106 qdevice -``` - -- Set the password for the **hacluster** user. - -```sh -[root@qdevice:~]# passwd hacluster -``` - -### Configuring the Quorum Device and Adding It to the Cluster - -The following describes how to configure the quorum device and add it to the cluster. - -- The qdevice node is used as the quorum device. -- The model of the quorum device is net. -- The cluster nodes are ha1 and ha2. - -#### Disabling the Firewall - -```sh -systemctl stop firewalld && systemctl disable firewalld -``` - -- Temporarily disable SELinux. 
- -```sh -setenforce 0 -``` - -#### Configuring the Quorum Device - -On the node that will be used to host the quorum device, run the following command to configure the quorum device. This command sets the model of the quorum device to net and configures the device to start during boot. - -```sh -[root@qdevice ~]# pcs qdevice setup model net --enable --start -Quorum device 'net' initialized -quorum device enabled -Starting quorum device... -quorum device started -``` - -After configuring the quorum device, view its status. The current status indicates that the corosync-qnetd daemon is running and no client is connected to it. Use the **--full** option to display the detailed output. - -```sh -[root@qdevice ~]# pcs qdevice status net --full -QNetd address: *:5403 -TLS: Supported (client certificate required) -Connected clients: 0 -Connected clusters: 0 -Maximum send/receive size: 32768/32768 bytes -``` - -#### Authenticating Identities - -From a node in the cluster, authenticate the **hacluster** user on the node hosting the quorum device. This allows pcs on the cluster nodes to connect to the qdevice host, but does not allow the qdevice host to connect to the cluster. - -```sh -[root@ha1 ~]# pcs host auth qdevice -Username: hacluster -Password: -qdevice: Authorized -``` - -#### Adding the Quorum Device to the Cluster - -Before adding the quorum device, run the **pcs quorum config** command to view the current configuration of the quorum device for later comparison. - -```sh -[root@ha1 ~]# pcs quorum config -Options: -``` - -Run the **pcs quorum status** command to check the current status of the quorum device. The command output indicates that the cluster does not use the quorum device and the Qdevice member status of each node is NR (unregistered). 
- -```sh -[root@ha1 ~]# pcs quorum status -Quorum information ------------------- -Date: Mon Sep 4 17:03:29 2023 -Quorum provider: corosync_votequorum -Nodes: 2 -Node ID: 1 -Ring ID: 1.e -Quorate: Yes - -Votequorum information ----------------------- -Expected votes: 2 -Highest expected: 2 -Total votes: 2 -Quorum: 1 -Flags: 2Node Quorate WaitForAll - -Membership information ----------------------- - Nodeid Votes Qdevice Name - 1 1 NR ha1 (local) - 2 1 NR ha2 -``` - -Add the created quorum device to the cluster. Note that multiple quorum devices cannot be used in a cluster at the same time. However, a quorum device can be used by multiple clusters at the same time. This example configures the quorum device to use the ffsplit algorithm. - -```sh -[root@ha1 ~]# pcs quorum device add model net host=qdevice algorithm=ffsplit -Setting up qdevice certificates on nodes... -ha1: Succeeded -ha2: Succeeded -Enabling corosync-qdevice... -ha2: corosync-qdevice enabled -ha1: corosync-qdevice enabled -Sending updated corosync.conf to nodes... -ha1: Succeeded -ha2: Succeeded -ha1: Corosync configuration reloaded -Starting corosync-qdevice... -ha2: corosync-qdevice started -ha1: corosync-qdevice started -``` - -View the corosync-qdevice service status. - -```sh -[root@ha1 ~]# systemctl status corosync-qdevice -● corosync-qdevice.service - Corosync Qdevice daemon - Loaded: loaded (/usr/lib/systemd/system/corosync-qdevice.service; enabled; preset: disabled> - Active: active (running) since Mon 2023-09-04 17:03:49 CST; 20s ago - Docs: man:corosync-qdevice - Main PID: 12756 (corosync-qdevic) - Tasks: 2 (limit: 11872) - Memory: 1.6M - CGroup: /system.slice/corosync-qdevice.service - ├─12756 /usr/sbin/corosync-qdevice -f - └─12757 /usr/sbin/corosync-qdevice -f - -Sep 04 17:03:49 ha1 systemd[1]: Starting Corosync Qdevice daemon... -Sep 04 17:03:49 ha1 systemd[1]: Started Corosync Qdevice daemon. 
-``` - -#### Checking the Configuration Status of the Quorum Device - -Check the configuration changes in the cluster. Run the **pcs quorum config** command to view information about the configured quorum device. - -```shell -[root@ha1 ~]# pcs quorum config -Options: -Device: - Model: net - algorithm: ffsplit - host: qdevice -``` - -The **pcs quorum status** command displays the quorum running status, indicating that the quorum device is in use. The meanings of the member status values of each cluster node are as follows: - -- **A**/**NA**: whether the quorum device is alive, that is, whether there is a corosync heartbeat between the quorum device and the cluster. This should always indicate that the quorum device is active. -- **V**/**NV**: **V** is set when the quorum device votes for a node. In this example, both nodes are set to **V** because they can communicate with each other. If the cluster is split into two single-node clusters, one node is set to **V** and the other is set to **NV**. -- **MW**/**NMW**: whether the internal quorum device flag is set (**MW**) or not set (**NMW**). By default, the flag is not set and the value is **NMW**. - -```sh -[root@ha1 ~]# pcs quorum status -Quorum information ------------------- -Date: Mon Sep 4 17:04:33 2023 -Quorum provider: corosync_votequorum -Nodes: 2 -Node ID: 1 -Ring ID: 1.e -Quorate: Yes - -Votequorum information ----------------------- -Expected votes: 3 -Highest expected: 3 -Total votes: 3 -Quorum: 2 -Flags: Quorate Qdevice - -Membership information ----------------------- - Nodeid Votes Qdevice Name - 1 1 A,V,NMW ha1 (local) - 2 1 A,V,NMW ha2 - 0 1 Qdevice -``` - -Run the **pcs quorum device status** command to view the running status of the quorum device. 
- -```shell -[root@ha1 ~]# pcs quorum device status -Qdevice information -------------------- -Model: Net -Node ID: 1 -Configured node list: - 0 Node ID = 1 - 1 Node ID = 2 -Membership node list: 1, 2 - -Qdevice-net information ----------------------- -Cluster name: hacluster -QNetd host: qdevice:5403 -Algorithm: Fifty-Fifty split -Tie-breaker: Node with lowest node ID -State: Connected -``` - -On the quorum device, run the following command to display the status of the corosync-qnetd daemon: - -```sh -[root@qdevice ~]# pcs qdevice status net --full -QNetd address: *:5403 -TLS: Supported (client certificate required) -Connected clients: 2 -Connected clusters: 1 -Maximum send/receive size: 32768/32768 bytes -Cluster "hacluster": - Algorithm: Fifty-Fifty split (KAP Tie-breaker) - Tie-breaker: Node with lowest node ID - Node ID 1: - Client address: ::ffff:10.211.55.36:43186 - HB interval: 8000ms - Configured node list: 1, 2 - Ring ID: 1.e - Membership node list: 1, 2 - Heuristics: Undefined (membership: Undefined, regular: Undefined) - TLS active: Yes (client certificate verified) - Vote: No change (ACK) - Node ID 2: - Client address: ::ffff:10.211.55.37:55682 - HB interval: 8000ms - Configured node list: 1, 2 - Ring ID: 1.e - Membership node list: 1, 2 - Heuristics: Undefined (membership: Undefined, regular: Undefined) - TLS active: Yes (client certificate verified) - Vote: ACK (ACK) -``` - -### Managing Quorum Device Services - -You can manage the quorum device by starting and stopping the corosync-qnetd service. 
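For scripted health checks, the `State` field of the `pcs quorum device status` output can be extracted with a short filter. The following is a minimal sketch under stated assumptions: the `qdevice_state` helper name is invented for this example, and the sample text is copied from the output shown in this guide so the filter can be exercised without a live cluster.

```shell
# Hypothetical helper: print the value of the "State:" line from
# `pcs quorum device status` output (e.g. "Connected" or "Connect failed").
qdevice_state() {
  awk -F': ' '/^State:/ {print $2}'
}

# Sample output copied from this guide; on a live node you would instead run:
#   pcs quorum device status | qdevice_state
sample='Qdevice-net information
----------------------
Cluster name: hacluster
QNetd host: qdevice:5403
Algorithm: Fifty-Fifty split
Tie-breaker: Node with lowest node ID
State: Connected'

printf '%s\n' "$sample" | qdevice_state   # prints: Connected
```

A monitoring script can compare the printed value against `Connected` and raise an alert when the qdevice link drops.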
- -```sh -[root@ha1 ~]# pcs quorum device status -Qdevice information ------------------- -Model: Net -Node ID: 1 -Configured node list: - 0 Node ID = 1 - 1 Node ID = 2 -Membership node list: 1, 2 - -Qdevice-net information ---------------------- -Cluster name: hacluster -QNetd host: qdevice:5403 -Algorithm: Fifty-Fifty split -Tie-breaker: Node with lowest node ID -State: Connected -``` - -```sh -[root@qdevice ~]# systemctl stop corosync-qnetd -[root@qdevice ~]# -[root@qdevice ~]# systemctl status corosync-qnetd -○ corosync-qnetd.service - Corosync Qdevice Network daemon - Loaded: loaded (/usr/lib/systemd/system/corosync-qnetd.service; enabled; preset: disabled> - Active: inactive (dead) since Mon 2023-09-04 17:07:57 CST; 1s ago - Duration: 5min 17.639s - Docs: man:corosync-qnetd - Process: 9297 ExecStart=/usr/bin/corosync-qnetd -f $COROSYNC_QNETD_OPTIONS (code=exited> - Main PID: 9297 (code=exited, status=0/SUCCESS) - -Sep 04 17:02:39 qdevice systemd[1]: Starting Corosync Qdevice Network daemon... -Sep 04 17:02:39 qdevice systemd[1]: Started Corosync Qdevice Network daemon. -Sep 04 17:07:57 qdevice systemd[1]: Stopping Corosync Qdevice Network daemon... -Sep 04 17:07:57 qdevice systemd[1]: corosync-qnetd.service: Deactivated successfully. -Sep 04 17:07:57 qdevice systemd[1]: Stopped Corosync Qdevice Network daemon.
-``` - -```sh -[root@ha1 ~]# pcs quorum device status -Qdevice information ------------------- -Model: Net -Node ID: 1 -Configured node list: - 0 Node ID = 1 - 1 Node ID = 2 -Membership node list: 1, 2 - -Qdevice-net information ---------------------- -Cluster name: hacluster -QNetd host: qdevice:5403 -Algorithm: Fifty-Fifty split -Tie-breaker: Node with lowest node ID -State: Connect failed -``` - -```sh -[root@qdevice ~]# systemctl start corosync-qnetd -[root@qdevice ~]# -[root@qdevice ~]# systemctl status corosync-qnetd -● corosync-qnetd.service - Corosync Qdevice Network daemon - Loaded: loaded (/usr/lib/systemd/system/corosync-qnetd.service; enabled; preset: disabled> - Active: active (running) since Mon 2023-09-04 17:08:09 CST; 3s ago - Docs: man:corosync-qnetd - Main PID: 9323 (corosync-qnetd) - Tasks: 1 (limit: 11872) - Memory: 6.2M - CGroup: /system.slice/corosync-qnetd.service - └─9323 /usr/bin/corosync-qnetd -f - -Sep 04 17:08:09 qdevice systemd[1]: Starting Corosync Qdevice Network daemon... -Sep 04 17:08:09 qdevice systemd[1]: Started Corosync Qdevice Network daemon. -``` - -```sh -[root@ha1 ~]# pcs quorum device status -Qdevice information ------------------- -Model: Net -Node ID: 1 -Configured node list: - 0 Node ID = 1 - 1 Node ID = 2 -Membership node list: 1, 2 - -Qdevice-net information ---------------------- -Cluster name: hacluster -QNetd host: qdevice:5403 -Algorithm: Fifty-Fifty split -Tie-breaker: Node with lowest node ID -State: Connected -``` - -### Managing the Quorum Device in the Cluster - -You can use the **pcs** commands to change quorum device settings, disable the quorum device, and delete the quorum device from the cluster. - -#### Changing Quorum Device Settings - -**Note: To change the host option of the net quorum device model, run the pcs quorum device remove and pcs quorum device add commands to correctly configure the settings, unless the old and new hosts are the same.** - -- Change the quorum device algorithm to lms.
- -```sh -[root@ha1 ~]# pcs quorum device update model algorithm=lms -Sending updated corosync.conf to nodes... -ha1: Succeeded -ha2: Succeeded -ha1: Corosync configuration reloaded -Reloading qdevice configuration on nodes... -ha1: corosync-qdevice stopped -ha2: corosync-qdevice stopped -ha1: corosync-qdevice started -ha2: corosync-qdevice started -``` - -#### Deleting the Quorum Device - -- Delete the quorum device configured on the cluster node. - -```sh -[root@ha1 ~]# pcs quorum device remove -Disabling corosync-qdevice... -ha1: corosync-qdevice disabled -ha2: corosync-qdevice disabled -Stopping corosync-qdevice... -ha1: corosync-qdevice stopped -ha2: corosync-qdevice stopped -Removing qdevice certificates from nodes... -ha1: Succeeded -ha2: Succeeded -Sending updated corosync.conf to nodes... -ha1: Succeeded -ha2: Succeeded -ha1: Corosync configuration reloaded -``` - -After the quorum device is deleted, check the quorum device status. The following error message is displayed: - -```shell -[root@ha1 ~]# pcs quorum device status -Error: Unable to get quorum status: corosync-qdevice-tool: Can't connect to QDevice socket (is QDevice running?): No such file or directory -``` - -#### Destroying the Quorum Device - -- Disable and stop the quorum device on the quorum device host and delete all its configuration files. - -```shell -[root@qdevice ~]# pcs qdevice destroy net -Stopping quorum device... -quorum device stopped -quorum device disabled -Quorum device 'net' configuration files removed -``` - -## Encrypting corosync Service Configurations - -After the cluster starts normally, modify the **/etc/corosync/corosync.conf** configuration file on both nodes. - -```conf -totem { - version: 2 - cluster_name: hacluster - crypto_cipher: aes256 - crypto_hash: sha256 -} -``` - -Run `corosync-keygen` to generate a key pair for the corosync cluster and run `scp` to transfer the key to another node. 
- -```sh -[root@ha1 ~]# corosync-keygen -Corosync Cluster Engine Authentication key generator. -Gathering 2048 bits for key from /dev/urandom. -Writing corosync key to /etc/corosync/authkey. -[root@ha1 ~]# -[root@ha1 ~]# scp -r /etc/corosync/authkey root@10.211.55.37:/etc/corosync/ -``` - -Restart the corosync service on both nodes. - -```sh -systemctl restart corosync -``` diff --git a/docs/en/docs/desktop/HAuserguide.md b/docs/en/docs/desktop/HAuserguide.md deleted file mode 100644 index bcf248dc345848eb246a392ead53ce4abb91a381..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/HAuserguide.md +++ /dev/null @@ -1,358 +0,0 @@ -# Installing, Deploying, and Using HA - - -- [Installing, Deploying, and Using HA](#installing-deploying-and-using-ha) - - [Installation and Configuration](#installation-and-configuration) - - [Modifying the Host Name and the /etc/hosts File](#modifying-the-host-name-and-the-etchosts-file) - - [Configuring the Yum Source](#configuring-the-yum-source) - - [Installing HA Software Package Components](#installing-ha-software-package-components) - - [Setting the hacluster User Password](#setting-the-hacluster-user-password) - - [Modifying the `/etc/corosync/corosync.conf` File](#modifying-the-etccorosynccorosyncconf-file) - - [Managing Services](#managing-services) - - [Disabling the Firewall](#disabling-the-firewall) - - [Managing the pcs Service](#managing-the-pcs-service) - - [Managing the pacemaker Service](#managing-the-pacemaker-service) - - [Managing the corosync Service](#managing-the-corosync-service) - - [Performing Node Authentication](#performing-node-authentication) - - [Accessing the Front-End Management Platform](#accessing-the-front-end-management-platform) - - [Quick User Guide](#quick-user-guide) - - [Login Page](#login-page) - - [Home Page](#home-page) - - [Managing Nodes](#managing-nodes) - - [Node](#node) - - [Preference Setting](#preference-setting) - - [Adding Resources](#adding-resources) - - [Adding 
Common Resources](#adding-common-resources) - - [Adding Group Resources](#adding-group-resources) - - [Adding Clone Resources](#adding-clone-resources) - - [Editing Resources](#editing-resources) - - [Setting Resource Relationships](#setting-resource-relationships) - - [ACLS](#acls) - - - - -## Installation and Configuration - -- Environment preparation: At least two physical machines or VMs with openEuler 20.03 LTS SP2 installed are required. (This section uses two physical machines or VMs as an example.) For details, see the *openEuler 20.03 LTS SP2 Installation Guide*. - -### Modifying the Host Name and the /etc/hosts File - -- **Note: You need to perform the following operations on both hosts. The following takes the operation on one host as an example.** - -Before using the HA software, ensure that the host name has been changed and all host names have been written into the `/etc/hosts` file. - -- Run the following command to change the host name: - -``` -# hostnamectl set-hostname ha1 -``` - -- Edit the `/etc/hosts` file and write the following fields: - -``` -172.30.30.65 ha1 -172.30.30.66 ha2 -``` - -### Configuring the Yum Source - -After the system is successfully installed, the Yum source is configured by default. The file location information is stored in the `/etc/yum.repos.d/openEuler.repo` file. 
The HA software package uses the following sources: - -``` -[OS] -name=OS -baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/ -enabled=1 -gpgcheck=1 -gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler - -[everything] -name=everything -baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/everything/$basearch/ -enabled=1 -gpgcheck=1 -gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/everything/$basearch/RPM-GPG-KEY-openEuler - -[EPOL] -name=EPOL -baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/EPOL/$basearch/ -enabled=1 -gpgcheck=1 -gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler -``` - -### Installing HA Software Package Components - -``` -# yum install corosync pacemaker pcs fence-agents fence-virt corosync-qdevice sbd drbd drbd-utils -y -``` - -### Setting the hacluster User Password - -``` -# passwd hacluster -``` - -### Modifying the `/etc/corosync/corosync.conf` File - -``` -totem { - version: 2 - cluster_name: hacluster - crypto_cipher: none - crypto_hash: none -} -logging { - fileline: off - to_stderr: yes - to_logfile: yes - logfile: /var/log/cluster/corosync.log - to_syslog: yes - debug: on - logger_subsys { - subsys: QUORUM - debug: on - } -} -quorum { - provider: corosync_votequorum - expected_votes: 2 - two_node: 1 - } -nodelist { - node { - name: ha1 - nodeid: 1 - ring0_addr: 172.30.30.65 - } - node { - name: ha2 - nodeid: 2 - ring0_addr: 172.30.30.66 - } - } -``` - -### Managing Services - -#### Disabling the Firewall - -``` -# systemctl stop firewalld -``` - -Change the status of SELINUX in the `/etc/selinux/config` file to **disabled**. 
- -``` -# SELINUX=disabled -``` - -#### Managing the pcs Service - -- Run the following command to start the **pcs** service: - -``` -# systemctl start pcsd -``` - -- Run the following command to query service status: - -``` -# systemctl status pcsd -``` - -The service is started successfully if the following information is displayed: - -![](./figures/HA-pcs.png) - -#### Managing the pacemaker Service - -- Run the following command to start the **pacemaker** service: - -``` -# systemctl start pacemaker -``` - -- Run the following command to query service status: - -``` -# systemctl status pacemaker -``` - -The service is started successfully if the following information is displayed: - -![](./figures/HA-pacemaker.png) - -#### Managing the corosync Service - -- Run the following command to start the **corosync** service: - -``` -# systemctl start corosync -``` - -- Run the following command to query service status: - -``` -# systemctl status corosync -``` - -The service is started successfully if the following information is displayed: - -![](./figures/HA-corosync.png) - -### Performing Node Authentication - -- **Note: Perform this operation on only one node.** - -``` -# pcs host auth ha1 ha2 -``` - -### Accessing the Front-End Management Platform - -After the preceding services are started, open the browser (Chrome or Firefox is recommended) and enter `https://IP:2224` in the address box. - -## Quick User Guide - -### Login Page - -The username is **hacluster** and the password is the one set on the host. - -![](./figures/HA-login.png) - -### Home Page - -The home page is the **MANAGE CLUSTERS** page, which includes four functions: remove, add existing, destroy, and create new clusters. - -![](./figures/HA-home-page.png) - -### Managing Nodes - -#### Node - -You can add and remove nodes. The following describes how to add an existing node. 
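The node management operations in the web UI also have command-line equivalents. The following is an illustrative sketch only: the node name `ha3` is a placeholder, and these commands assume an already configured and running cluster, so they are not meant to be run as-is.

```shell
# Add an existing, authenticated host to the cluster and start its services.
pcs cluster node add ha3 --start --enable

# Remove a node from the cluster.
pcs cluster node remove ha3

# Put a node into standby or maintenance mode, then bring it back.
pcs node standby ha2
pcs node unstandby ha2
pcs node maintenance ha2
pcs node unmaintenance ha2
```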
- -![](./figures/HA-existing-nodes.png) - -Node management includes the following functions: start, stop, restart, standby, maintenance, and configure Fencing. You can view the enabled services and running resources of the node and manage the node. - -![](./figures/HA-node-setting1.png) ![](./figures/HA-node-setting2.png) - -### Preference Setting - -You can perform the following operations using command lines. The following is a simple example. Run the **pcs --help** command to query more commands available. - -``` -# pcs property set stonith-enabled=false -# pcs property set no-quorum-policy=ignore -``` - -Run the **pcs property** command to view all settings. - -![](./figures/HA-firstchoice-cmd.png) - -- Change the default status of **No Quorum Policy** to **ignore**, and the default status of **Stonith Enabled** to **false**, as shown in the following figure: - -![](./figures/HA-firstchoice.png) - -#### Adding Resources - -##### Adding Common Resources - -The multi-option drop-down list box in the system supports keyword matching. You can enter the keyword of the item to be configured and quickly select it. - -Apache and IPaddr are used as examples. - -Run the following commands to add the Apache and IPaddr resources: - -``` -# pcs resource create httpd ocf:heartbeat:apache -# pcs resource create IPaddr ocf:heartbeat:IPaddr2 ip=172.30.30.67 -``` - -Run the following command to check the cluster resource status: - -``` -# pcs status -``` - -![](./figures/HA-pcs-status.png) - -![](./figures/HA-add-resource.png) - -- Add Apache resources. - -![](./figures/HA-apache.png) - -- The resources are successfully added if the following information is displayed: - -![](./figures/HA-apache-suc.png) - -- The resources are created and started successfully, and run on a node, for example, **ha1**. The Apache page is displayed. - -![](./figures/HA-apache-show.png) - -- Add IPaddr resources. 
- -![](./figures/HA-ipaddr.png) - -- The resources are successfully added if the following information is displayed: - -![](./figures/HA-ipaddr-suc.png) - -- The resources are created and started successfully, and run on a node, for example, **ha1**. The HA web login page is displayed, and you can log in to the page and perform operations. When the resources are switched to **ha2**, the web page can still be accessed. - -![](./figures/HA-ipaddr-show.png) - -##### Adding Group Resources - -When you add group resources, at least one common resource is needed in the cluster. Select one or more resources and click **Create Group**. - -- **Note: Group resources are started in the sequence of subresources. Therefore, you need to select subresources in sequence.** - -![](./figures/HA-group.png) - -The resources are successfully added if the following information is displayed: - -![](./figures/HA-group-suc.png) - -##### Adding Clone Resources - -![](./figures/HA-clone.png) - -The resources are successfully added if the following information is displayed: - -![](./figures/HA-clone-suc.png) - -#### Editing Resources - -- **Enable**: Select a target resource that is not running from the resource node list. Enable the resource. -- **Disable**: Select a target resource that is running from the resource node list. Disable the resource. -- **Clearup**: Select a target resource from the resource node list and clear the resource. -- **Porting**: Select a target resource from the resource node list. The resource must be a common resource or group resource that is running. You can port the resource to a specified node. -- **Rollback**: Select a target resource from the resource node list. Before rolling back a resource, ensure that the resource has been ported. You can clear the porting settings of the resource and roll the resource back to the original node. After you click the button, the status of the resource item in the list is the same as that when the resource is enabled. 
-- **Remove**: Select a target resource from the resource node list and remove the resource. - -You can perform the preceding resource operations on the page shown in the following figure: - -![](./figures/HA-resoure-set.png) - -#### Setting Resource Relationships - -The resource relationship is used to set restrictions for target resources. Resource restrictions are classified as follows: **resource location**, **resource colocation**, and **resource ordering**. - -- **Resource location**: Set the runlevel of nodes in the cluster to determine the node where the resource runs during startup or switchover. The runlevels are Master and Slave in descending order. -- **Resource colocation**: Indicate whether the target resource and other resources in the cluster are running on the same node. For resources on the same node, the resource must run on the same node as the target resource. For resources on mutually exclusive nodes, the resource and the target resource must run on different nodes. -- **Resource ordering**: Set the ordering in which the target resource and other resources in the cluster are started. The preamble resource must run before the target resource runs. The postamble resource can run only after the target resource runs. - -After adding common resources or group resources, you can perform the preceding resource operations on the page shown in the following figure: - -![](./figures/HA-resource-relationship.png) - -#### ACLS - -An ACL (access control list) controls user access. You can click **Add** to add a user and manage that user's access.
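The same ACL management can be done from the command line with the `pcs acl` family of commands. A hedged sketch under stated assumptions: the role name `read-only` and user name `rouser` are examples, the user must already exist on the system as a member of the haclient group, and a running cluster is required, so the commands are illustrative only.

```shell
# Enable ACL enforcement in the cluster (it is disabled by default).
pcs acl enable

# Create a read-only role that may view the whole cluster configuration (CIB).
pcs acl role create read-only description="Read-only access" read xpath /cib

# Bind the role to an existing system user in the haclient group.
pcs acl user create rouser read-only

# Review the resulting ACL configuration.
pcs acl
```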
- -![](./figures/HA-ACLS.png) \ No newline at end of file diff --git a/docs/en/docs/desktop/Install_Cinnamon.md b/docs/en/docs/desktop/Install_Cinnamon.md deleted file mode 100644 index 2b15a6760c7781cd47c91cdd9e4bbee21153d256..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/Install_Cinnamon.md +++ /dev/null @@ -1,72 +0,0 @@ -# Installing Cinnamon on openEuler - -Cinnamon is a widely used desktop environment for Unix-like operating systems. It provides complete functions, simple operations, and user-friendly interfaces, and integrates both everyday use and development capabilities. - -For users, Cinnamon is a suite that integrates the desktop environment and applications. For developers, Cinnamon is an application development framework consisting of a large number of function libraries. Applications written for Cinnamon can run properly even if users do not run the Cinnamon desktop environment. - -Cinnamon contains basic software such as the file manager, application store, and text editor, and advanced applications and tools such as system sampling analysis, system logs, software engineering IDE, web browser, simple virtual machine monitor, and developer document browser. - -You are advised to create an administrator during the installation. - -1. Configure the source and update the system. - [Download](https://openeuler.org/en/) the openEuler ISO file, install the system, and update the software source. (You need to configure the Everything source and EPOL source. The following command installs Cinnamon in a minimum installation system.) - - ```shell - sudo dnf update - ``` - -2. Install the font library. - - ```shell - sudo dnf install dejavu-fonts liberation-fonts gnu-*-fonts google-*-fonts - ``` - -3. Install Xorg. - - ```shell - sudo dnf install xorg-* - ``` - - Unnecessary packages may be installed during the installation. 
You can run the following commands to install necessary Xorg packages: - - ```shell - sudo dnf install xorg-x11-apps xorg-x11-drivers xorg-x11-drv-ati \ - xorg-x11-drv-dummy xorg-x11-drv-evdev xorg-x11-drv-fbdev xorg-x11-drv-intel \ - xorg-x11-drv-libinput xorg-x11-drv-nouveau xorg-x11-drv-qxl \ - xorg-x11-drv-synaptics-legacy xorg-x11-drv-v4l xorg-x11-drv-vesa \ - xorg-x11-drv-vmware xorg-x11-drv-wacom xorg-x11-fonts xorg-x11-fonts-others \ - xorg-x11-font-utils xorg-x11-server xorg-x11-server-utils xorg-x11-server-Xephyr \ - xorg-x11-server-Xspice xorg-x11-util-macros xorg-x11-utils xorg-x11-xauth \ - xorg-x11-xbitmaps xorg-x11-xinit xorg-x11-xkb-utils - ``` - -4. Install Cinnamon and components. - - ```shell - sudo dnf install cinnamon cinnamon-control-center cinnamon-desktop \ - cinnamon-menus cinnamon-screensaver cinnamon-session \ - cinnamon-settings-daemon cinnamon-themes cjs \ - nemo nemo-extensions muffin cinnamon-translations inxi \ - perl-XML-Dumper xapps mint-x-icons mint-y-icons mintlocale \ - python3-plum-py caribou mozjs78 python3-pam \ - python3-tinycss2 python3-xapp tint2 gnome-terminal \ - lightdm lightdm-gtk - ``` - -5. Enable LightDM to automatically start upon system startup. - - ```shell - sudo systemctl enable lightdm - ``` - -6. Set the system to log in to the GUI by default. - - ```shell - sudo systemctl set-default graphical.target - ``` - -7. Reboot. - - ```shell - sudo reboot - ``` diff --git a/docs/en/docs/desktop/dde.md b/docs/en/docs/desktop/dde.md deleted file mode 100644 index 31ee420c9fa1df2fe8dab0507f7754d3849276d2..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/dde.md +++ /dev/null @@ -1,23 +0,0 @@ -# DDE User Guide - -This section describes how to install and use the Deepin Desktop Environment (DDE). - -## FAQs - -### 1. After the DDE is installed, why are the computer and recycle bin icons not displayed on the desktop when I log in as the **root** user? 
- -* Issue - - After the DDE is installed, the computer and recycle bin icons are not displayed on the desktop when a user logs in as the **root** user. - -![img](./figures/dde-1.png) - -* Cause - - The **root** user is created before the DDE is installed. During the installation, the DDE does not add desktop icons for existing users. This issue does not occur if the user is created after the DDE is installed. - -* Solution - - Right-click the icon in the launcher and choose **Send to Desktop**. The icon functions the same as the one added by DDE. - - ![img](./figures/dde-2.png) diff --git a/docs/en/docs/desktop/desktop.md b/docs/en/docs/desktop/desktop.md deleted file mode 100644 index c46639179dd096a706477753175c219a7ac74cd5..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/desktop.md +++ /dev/null @@ -1,3 +0,0 @@ -# Desktop Environment User Guide - -This document describes how to install and use four common desktop environments (UKUI, DDE, Xfce, and GNOME), which provide a user-friendly, secure, and reliable GUI for better user experience. 
diff --git a/docs/en/docs/desktop/figures/1202_1.jpg b/docs/en/docs/desktop/figures/1202_1.jpg deleted file mode 100644 index def242a5b9a70602a9aab7dd8048244e7d9f6793..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/1202_1.jpg and /dev/null differ diff --git a/docs/en/docs/desktop/figures/49.png b/docs/en/docs/desktop/figures/49.png deleted file mode 100644 index 3b77668e5a4d1bdb3043c473dff9b36fa7144714..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/49.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/64.png b/docs/en/docs/desktop/figures/64.png deleted file mode 100644 index cbbd2ede047e735c3766e08b04595f08cd72f5b2..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/64.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-01.png b/docs/en/docs/desktop/figures/Cinnamon-01.png deleted file mode 100644 index 8f1dd8c6b2ef654721a92ce7b984091b7a60b455..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-01.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-02.png b/docs/en/docs/desktop/figures/Cinnamon-02.png deleted file mode 100644 index f4ab1c606047753d63b42fd317d436e05bb3e081..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-02.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-03.png b/docs/en/docs/desktop/figures/Cinnamon-03.png deleted file mode 100644 index b594c087d327834325773b13a6914b7b9f252bdc..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-03.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-04.png b/docs/en/docs/desktop/figures/Cinnamon-04.png deleted file mode 100644 index 36990d16627102e3e6de16b0efdf84ae501b7a4f..0000000000000000000000000000000000000000 Binary files 
a/docs/en/docs/desktop/figures/Cinnamon-04.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-05.png b/docs/en/docs/desktop/figures/Cinnamon-05.png deleted file mode 100644 index 4b3819b482ebd3fdc9598de1a59a03afbf64583f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-05.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-06.png b/docs/en/docs/desktop/figures/Cinnamon-06.png deleted file mode 100644 index c0210abe865c0c6d22cd93ed1f457be631cde7aa..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-06.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-07.png b/docs/en/docs/desktop/figures/Cinnamon-07.png deleted file mode 100644 index e88a10d5df1644180a443b4929adf3e2d756f4d2..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-07.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-08.png b/docs/en/docs/desktop/figures/Cinnamon-08.png deleted file mode 100644 index 40c8120544a37d05bc0040e9b84efb377e6d7048..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-08.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-09.png b/docs/en/docs/desktop/figures/Cinnamon-09.png deleted file mode 100644 index 1b4130f9acfec91124abc67a3f1c407349fd5d54..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-09.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-10.png b/docs/en/docs/desktop/figures/Cinnamon-10.png deleted file mode 100644 index c323c13fa9614f6ac6e86a99c3d16b9c53ca1afa..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-10.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-11.png b/docs/en/docs/desktop/figures/Cinnamon-11.png deleted file mode 100644 
index 5de0e15d0df74fb9170951a1a3d9109f53010f8e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-11.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-12.png b/docs/en/docs/desktop/figures/Cinnamon-12.png deleted file mode 100644 index 0e22f5197045a91d36fa72d0aaf7fab639b85ec7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-12.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-13.png b/docs/en/docs/desktop/figures/Cinnamon-13.png deleted file mode 100644 index 09065ff4b6b2de69c4cf88d5b021b74e3baf3582..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-13.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-14.png b/docs/en/docs/desktop/figures/Cinnamon-14.png deleted file mode 100644 index 4cdf44f509ce1eb7a5aefb8eb720a449c98297c1..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-14.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-15.png b/docs/en/docs/desktop/figures/Cinnamon-15.png deleted file mode 100644 index a15da3f6a00d340c6e06b68e481e7c5eb21294d6..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-15.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-16.png b/docs/en/docs/desktop/figures/Cinnamon-16.png deleted file mode 100644 index be8833fef87a9e9cb9d0be64b21adcb73c55c57e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-16.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-17.png b/docs/en/docs/desktop/figures/Cinnamon-17.png deleted file mode 100644 index c8cc9e130cee4abb6faa6c81d3002e4b4e41741c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-17.png and /dev/null differ diff --git 
a/docs/en/docs/desktop/figures/Cinnamon-18.png b/docs/en/docs/desktop/figures/Cinnamon-18.png deleted file mode 100644 index 81ceb2219ff8ed31341f1f884f83c159f2feb412..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-18.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-19.png b/docs/en/docs/desktop/figures/Cinnamon-19.png deleted file mode 100644 index 35fa5b80633ba285d88244c5315fc181ada6e64a..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-19.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-20.png b/docs/en/docs/desktop/figures/Cinnamon-20.png deleted file mode 100644 index bdfbe2d724929b11817fa8f2e4e172e18e297b80..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-20.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-21.png b/docs/en/docs/desktop/figures/Cinnamon-21.png deleted file mode 100644 index 41dcd5f2740f6c5ab306ae0f31185a1f1c591102..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-21.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-22.png b/docs/en/docs/desktop/figures/Cinnamon-22.png deleted file mode 100644 index a36a20761061a5a1dca3aed461db66256241515c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-22.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-23.png b/docs/en/docs/desktop/figures/Cinnamon-23.png deleted file mode 100644 index 87d8c4e303a990c3735b5f8ab18960c0f7b74e17..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-23.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-24.png b/docs/en/docs/desktop/figures/Cinnamon-24.png deleted file mode 100644 index 
c163b1b241db5c913064f4e7d73229d78c30863a..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-24.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-25.png b/docs/en/docs/desktop/figures/Cinnamon-25.png deleted file mode 100644 index 135f8aae40f03bf371d802ec484a5b2a3313a0f8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-25.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-26.png b/docs/en/docs/desktop/figures/Cinnamon-26.png deleted file mode 100644 index fd659ed83107754abe99dc6a6caca798272eb5e5..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-26.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-27.png b/docs/en/docs/desktop/figures/Cinnamon-27.png deleted file mode 100644 index 8a66c6e683fa7ac2ad506871cc2489c8884516ba..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-27.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-28.png b/docs/en/docs/desktop/figures/Cinnamon-28.png deleted file mode 100644 index 08b73e344e740f3c5c8bb473f422aad6cb13bc2a..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-28.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-29.png b/docs/en/docs/desktop/figures/Cinnamon-29.png deleted file mode 100644 index 474ac9b05c82e7b7a144a63403a7a11691603e9f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-29.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-30-0.png b/docs/en/docs/desktop/figures/Cinnamon-30-0.png deleted file mode 100644 index 417679dbf2647fea9da0c5a75b5a7b55b8d77c60..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-30-0.png and /dev/null differ diff --git 
a/docs/en/docs/desktop/figures/Cinnamon-30-1.png b/docs/en/docs/desktop/figures/Cinnamon-30-1.png deleted file mode 100644 index 04c7e4f94c2d50cd0f7bef07cda2d0f8c2d3350e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-30-1.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-31.png b/docs/en/docs/desktop/figures/Cinnamon-31.png deleted file mode 100644 index 448c86f7fa7c71e6248d4dd4a1be9930dd1e250b..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-31.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-32.png b/docs/en/docs/desktop/figures/Cinnamon-32.png deleted file mode 100644 index 8339778befa1a56738520c4b2af3210c1a5919fd..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-32.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-33.png b/docs/en/docs/desktop/figures/Cinnamon-33.png deleted file mode 100644 index bfea68e17fae43eee5cbdd0ffe2fd27203d1e6c0..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-33.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-34.png b/docs/en/docs/desktop/figures/Cinnamon-34.png deleted file mode 100644 index 02a60e270c1e6c6c0274c1694ac537bd1b3e9747..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-34.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-35-0.png b/docs/en/docs/desktop/figures/Cinnamon-35-0.png deleted file mode 100644 index 25e609e15f52c12e278120c75bf35fbb8bc1ec51..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-35-0.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-35-1.png b/docs/en/docs/desktop/figures/Cinnamon-35-1.png deleted file mode 100644 index 
40b206b854ed641c731a7dc592394a8caee82b98..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-35-1.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-36.png b/docs/en/docs/desktop/figures/Cinnamon-36.png deleted file mode 100644 index 62df4366d53f1d4a660200729c4bdbb87d6eb512..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-36.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-37.png b/docs/en/docs/desktop/figures/Cinnamon-37.png deleted file mode 100644 index c2c81d26d7dd32d032001f0d4471ff8e031dd4e5..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-37.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-38.png b/docs/en/docs/desktop/figures/Cinnamon-38.png deleted file mode 100644 index 59a886be863bc6dc903d0e53e8e8c29a87e98da4..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-38.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-39.png b/docs/en/docs/desktop/figures/Cinnamon-39.png deleted file mode 100644 index 01dff5dd4243eb676b091eeba0fb2b395443ea68..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-39.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-40.png b/docs/en/docs/desktop/figures/Cinnamon-40.png deleted file mode 100644 index 0e7dd84857faf8c14505109c825a89a684acf65d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-40.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-41-0.png b/docs/en/docs/desktop/figures/Cinnamon-41-0.png deleted file mode 100644 index 1b1d6c45c270e28e039fe644fb5ce3e62f206e46..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-41-0.png and /dev/null differ diff --git 
a/docs/en/docs/desktop/figures/Cinnamon-41-1.png b/docs/en/docs/desktop/figures/Cinnamon-41-1.png deleted file mode 100644 index e55c802668da92e068cf05401254389ebd94deee..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-41-1.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-42.png b/docs/en/docs/desktop/figures/Cinnamon-42.png deleted file mode 100644 index 52b2c1842d16ff356113d2c2870766a7343366c7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-42.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-43.png b/docs/en/docs/desktop/figures/Cinnamon-43.png deleted file mode 100644 index 244678f92c7bc1656f15d51c4ae55176ec4efe20..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-43.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-44.png b/docs/en/docs/desktop/figures/Cinnamon-44.png deleted file mode 100644 index 7f3aede19de472562486c8660b194c044e034131..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-44.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-45.png b/docs/en/docs/desktop/figures/Cinnamon-45.png deleted file mode 100644 index 18096cf5c16ab1e74f4c4c6f39f27717f807fcab..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-45.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-46.png b/docs/en/docs/desktop/figures/Cinnamon-46.png deleted file mode 100644 index 7f77937fab733bda0b3a4cb3667b318e50bd53c4..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-46.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-47.png b/docs/en/docs/desktop/figures/Cinnamon-47.png deleted file mode 100644 index 
09999c9562fda0498366a8cecd84f55b846f9928..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-47.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-48.png b/docs/en/docs/desktop/figures/Cinnamon-48.png deleted file mode 100644 index 35c56940e1bea933331fc2d1f3c27d1686193753..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-48.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-49.png b/docs/en/docs/desktop/figures/Cinnamon-49.png deleted file mode 100644 index 68ecda0b9cd69a23868177898d6e843d7b2575a0..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-49.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-50.png b/docs/en/docs/desktop/figures/Cinnamon-50.png deleted file mode 100644 index 758ed6664e78928bd0a81ade29fb5aa132412901..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-50.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-51.png b/docs/en/docs/desktop/figures/Cinnamon-51.png deleted file mode 100644 index 7f79428bd391f4efd578053da81bbbe25d30a9d2..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-51.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-52.png b/docs/en/docs/desktop/figures/Cinnamon-52.png deleted file mode 100644 index 27dd6632ee00eba7926259786d2a8ee2ce4376ba..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-52.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/Cinnamon-53.png b/docs/en/docs/desktop/figures/Cinnamon-53.png deleted file mode 100644 index cb6bf8ca0f4c276d40fb5c1d86cb949b40fdfbd0..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-53.png and /dev/null differ diff --git 
a/docs/en/docs/desktop/figures/Cinnamon-54.png b/docs/en/docs/desktop/figures/Cinnamon-54.png deleted file mode 100644 index 5ed38e73d67998436be559b295dcaa0303983cf9..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/Cinnamon-54.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/HA-qdevice.png b/docs/en/docs/desktop/figures/HA-qdevice.png deleted file mode 100644 index 2964f36c952fc7e62fb7b041fcf6d2de8ead712c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/HA-qdevice.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/icon133-o.svg b/docs/en/docs/desktop/figures/icon133-o.svg deleted file mode 100644 index 886d90a83e33497d134bdb3dcc864a5c2df53f20..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/figures/icon133-o.svg +++ /dev/null @@ -1,13 +0,0 @@ - - - - + - Created with Sketch. - - - - - - - - \ No newline at end of file diff --git a/docs/en/docs/desktop/figures/icon135-o.svg b/docs/en/docs/desktop/figures/icon135-o.svg deleted file mode 100644 index cea628a8f5eb92d10661b690242b6de41ca64816..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/figures/icon135-o.svg +++ /dev/null @@ -1,15 +0,0 @@ - - - - ~ - Created with Sketch. 
- - - - - - - - - - \ No newline at end of file diff --git a/docs/en/docs/desktop/figures/icon20.png b/docs/en/docs/desktop/figures/icon20.png deleted file mode 100644 index 4de3c7c695893539967245ea5e269b26e2b735be..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/icon20.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/icon21.png b/docs/en/docs/desktop/figures/icon21.png deleted file mode 100644 index e7b4320b6ce1fd4adb52525ba2c60983ffb2eed3..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/icon21.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/icon22.png b/docs/en/docs/desktop/figures/icon22.png deleted file mode 100644 index 43bfa96965ad13e0a34ead3cb1102a76b9346a23..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/icon22.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/icon23.png b/docs/en/docs/desktop/figures/icon23.png deleted file mode 100644 index aee221ddaa81d06fa7bd5b89a624da90cd1e53da..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/icon23.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/icon24.png b/docs/en/docs/desktop/figures/icon24.png deleted file mode 100644 index a9e5d700431ca1666fe9eda2cefce5dd2f83bdcd..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/icon24.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/icon25.png b/docs/en/docs/desktop/figures/icon25.png deleted file mode 100644 index 3de0f9476bbee9e89c3b759afbed968f17b5bbcc..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/icon25.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/icon29-o.png b/docs/en/docs/desktop/figures/icon29-o.png deleted file mode 100644 index e40d45fc0a9d2af93280ea14e01512838bb3c3dc..0000000000000000000000000000000000000000 Binary files 
a/docs/en/docs/desktop/figures/icon29-o.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/icon42.png b/docs/en/docs/desktop/figures/icon42.png deleted file mode 100644 index 25959977f986f433ddf3d66935f8d2c2bc6ed86b..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/icon42.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/kiran-43.0.png b/docs/en/docs/desktop/figures/kiran-43.0.png deleted file mode 100644 index caacc027322d4b7480e6508d4a1b4a13eefcf788..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/kiran-43.0.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/kubesphere.png b/docs/en/docs/desktop/figures/kubesphere.png deleted file mode 100644 index 939dcb70202b19c7853cbfd8f27f6e8e4678ce26..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/kubesphere.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-1.png b/docs/en/docs/desktop/figures/xfce-1.png deleted file mode 100644 index c04222d7757b84aa8afecf98815eee25211a86d7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-1.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-2.png b/docs/en/docs/desktop/figures/xfce-2.png deleted file mode 100644 index fa7e1a1ae3c1535a1528f03636d2b62d727412af..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-2.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-3.png b/docs/en/docs/desktop/figures/xfce-3.png deleted file mode 100644 index 6eeb68ad39f45ff476f1d18b8cd34492ec1f542b..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-3.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-4.png b/docs/en/docs/desktop/figures/xfce-4.png deleted file mode 100644 index 
f66de500fad7c847c2fea2e3774413d1c38e642e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-4.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-5.png b/docs/en/docs/desktop/figures/xfce-5.png deleted file mode 100644 index 0258b0e5cf6c7c13d88b0431f4b0221e86451ce8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-5.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-6.png b/docs/en/docs/desktop/figures/xfce-6.png deleted file mode 100644 index f2027b37021b260a97ff56a32026a53d00db0763..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-6.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-71.png b/docs/en/docs/desktop/figures/xfce-71.png deleted file mode 100644 index 6e2ff40536d18253dcfd4a69396e8e96817f704a..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-71.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-8.png b/docs/en/docs/desktop/figures/xfce-8.png deleted file mode 100644 index 4ae9885b617e49cba84140e84dd6b354ff55f92c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-8.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-811.png b/docs/en/docs/desktop/figures/xfce-811.png deleted file mode 100644 index 21447e37a5dd94fc88cb3ec0a11cd0dc0d50cf36..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-811.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-812.png b/docs/en/docs/desktop/figures/xfce-812.png deleted file mode 100644 index d505f1ac8111062a172b9fb5f5717d72f653f1b8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-812.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-813.png b/docs/en/docs/desktop/figures/xfce-813.png deleted file 
mode 100644 index 218d3b80c83cade14acc0c0baa4532710d1959dd..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-813.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-814.png b/docs/en/docs/desktop/figures/xfce-814.png deleted file mode 100644 index 6ccbe910bd32cb4d619ba47d2fcb354424e80451..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-814.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-821.png b/docs/en/docs/desktop/figures/xfce-821.png deleted file mode 100644 index 690f3f0b528dfdaf6586549cdeb105df2214fc44..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-821.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-831.png b/docs/en/docs/desktop/figures/xfce-831.png deleted file mode 100644 index 61da16b7871a085a6c373a1262c0f785fb415e60..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-831.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-832.png b/docs/en/docs/desktop/figures/xfce-832.png deleted file mode 100644 index 87b59b42d86ebd205750e162d5f2751b4d87181e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-832.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-84.png b/docs/en/docs/desktop/figures/xfce-84.png deleted file mode 100644 index 1afe9d9bd51af83c99793666bad47d231bba5c7b..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-84.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-841.png b/docs/en/docs/desktop/figures/xfce-841.png deleted file mode 100644 index 35875b40b8c95ce32652003daa5caf065747725f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-841.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-842.png 
b/docs/en/docs/desktop/figures/xfce-842.png deleted file mode 100644 index b4031b575ffc3e9aa5a8edc7826fe28af97d0f23..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-842.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-85.png b/docs/en/docs/desktop/figures/xfce-85.png deleted file mode 100644 index bce9a0165290167d5fceee22d74f2abf4aed28fd..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-85.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-851.png b/docs/en/docs/desktop/figures/xfce-851.png deleted file mode 100644 index 15c9e2d6d04e9b712bdf88d0ee1e7246a8d7b83e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-851.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-86.png b/docs/en/docs/desktop/figures/xfce-86.png deleted file mode 100644 index d78bc4ae0dbf13c3ad40b29468bd44056817e522..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-86.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-861.png b/docs/en/docs/desktop/figures/xfce-861.png deleted file mode 100644 index 9a58733007cfac1c42ff244b52ee14c75051d852..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-861.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-87.png b/docs/en/docs/desktop/figures/xfce-87.png deleted file mode 100644 index ee5844bcfa836ec8ecf0a5fea125dcab530ad6db..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-87.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-921.png b/docs/en/docs/desktop/figures/xfce-921.png deleted file mode 100644 index 0681efd633cff00fe8572579b8971933cfc41dc1..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-921.png and /dev/null differ diff --git 
a/docs/en/docs/desktop/figures/xfce-931.png b/docs/en/docs/desktop/figures/xfce-931.png deleted file mode 100644 index 591a6d21d8fe69aed84d35316af506771a26ac01..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-931.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-941.png b/docs/en/docs/desktop/figures/xfce-941.png deleted file mode 100644 index aaee48a09a1e7233d25f68c6a74c7c39edc73b1f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-941.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-951.png b/docs/en/docs/desktop/figures/xfce-951.png deleted file mode 100644 index 1d8ff807ac84bdae0dc935c3964d10701b5d47dc..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-951.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-961.png b/docs/en/docs/desktop/figures/xfce-961.png deleted file mode 100644 index 9d2944ae05699b8424695c865242c1c4f5d60fac..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-961.png and /dev/null differ diff --git a/docs/en/docs/desktop/figures/xfce-962.png b/docs/en/docs/desktop/figures/xfce-962.png deleted file mode 100644 index 72c65f9675d8259f327077ce7f7212bd2b17a588..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/desktop/figures/xfce-962.png and /dev/null differ diff --git a/docs/en/docs/desktop/gnome.md b/docs/en/docs/desktop/gnome.md deleted file mode 100644 index 179f07b13a78e140d686a3a7a7678d857c0ac323..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/gnome.md +++ /dev/null @@ -1,3 +0,0 @@ -# GNOME User Guide - -This section describes how to install and use GNOME. 
diff --git a/docs/en/docs/desktop/installing-and-deploying-HA.md b/docs/en/docs/desktop/installing-and-deploying-HA.md deleted file mode 100644 index a297aeffdeca7475a19c1a660b0c261a919375d0..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/installing-and-deploying-HA.md +++ /dev/null @@ -1,213 +0,0 @@ -# Installing and Deploying HA - -This chapter describes how to install and deploy an HA cluster. - - -- [Installing and Deploying HA](#installing-and-deploying-ha) - - [Installation and Deployment](#installation-and-deployment) - - [Modifying the Host Name and the /etc/hosts File](#modifying-the-host-name-and-the-etchosts-file) - - [Configuring the Yum Repository](#configuring-the-yum-repository) - - [Installing the HA Software Package Components](#installing-the-ha-software-package-components) - - [Setting the hacluster User Password](#setting-the-hacluster-user-password) - - [Modifying the /etc/corosync/corosync.conf File](#modifying-the-etccorosynccorosyncconf-file) - - [Managing the Services](#managing-the-services) - - [Disabling the firewall](#disabling-the-firewall) - - [Managing the pcs service](#managing-the-pcs-service) - - [Managing the Pacemaker service](#managing-the-pacemaker-service) - - [Managing the Corosync service](#managing-the-corosync-service) - - [Performing Node Authentication](#performing-node-authentication) - - [Accessing the Front-End Management Platform](#accessing-the-front-end-management-platform) - -## Installation and Deployment - -- Prepare the environment: At least two physical machines or VMs with openEuler 20.03 LTS SP2 installed are required. (This section uses two physical machines or VMs as an example.) For details about how to install openEuler 20.03 LTS SP2, see the [_openEuler Installation Guide_](../Installation/Installation.md). - -### Modifying the Host Name and the /etc/hosts File - -- **Note: You need to perform the following operations on both hosts. 
The following takes one host as an example.** - -Before using the HA software, ensure that all host names have been changed and written into the /etc/hosts file. - -- Run the following command to change the host name: - -```shell -hostnamectl set-hostname ha1 -``` - -- Edit the `/etc/hosts` file and write the following fields: - -```text -172.30.30.65 ha1 -172.30.30.66 ha2 -``` - -### Configuring the Yum Repository - -After the system is successfully installed, the Yum source is configured by default. The file location is stored in the `/etc/yum.repos.d/openEuler.repo` file. The HA software package uses the following sources: - -```text -[OS] -name=OS -baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/ -enabled=1 -gpgcheck=1 -gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler - -[everything] -name=everything -baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/everything/$basearch/ -enabled=1 -gpgcheck=1 -gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/everything/$basearch/RPM-GPG-KEY-openEuler - -[EPOL] -name=EPOL -baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/EPOL/$basearch/ -enabled=1 -gpgcheck=1 -gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler -``` - -### Installing the HA Software Package Components - -```shell -yum install -y corosync pacemaker pcs fence-agents fence-virt corosync-qdevice sbd drbd drbd-utils -``` - -### Setting the hacluster User Password - -```shell -passwd hacluster -``` - -### Modifying the /etc/corosync/corosync.conf File - -```text -totem { - version: 2 - cluster_name: hacluster - crypto_cipher: none - crypto_hash: none -} -logging { - fileline: off - to_stderr: yes - to_logfile: yes - logfile: /var/log/cluster/corosync.log - to_syslog: yes - debug: on - logger_subsys { - subsys: QUORUM - debug: on - } -} -quorum { - provider: corosync_votequorum - expected_votes: 2 - two_node: 1 - } -nodelist { - node { - 
name: ha1 - nodeid: 1 - ring0_addr: 172.30.30.65 - } - node { - name: ha2 - nodeid: 2 - ring0_addr: 172.30.30.66 - } - } -``` - -### Managing the Services - -#### Disabling the firewall - -```shell -systemctl stop firewalld -``` - -Change the status of SELINUX in the `/etc/selinux/config` file to disabled. - -```text -# SELINUX=disabled -``` - -#### Managing the pcs service - -- Run the following command to start the pcs service: - -```shell -systemctl start pcsd -``` - -- Run the following command to query the pcs service status: - -```shell -systemctl status pcsd -``` - -The service is started successfully if the following information is displayed: - -![](./figures/HA-pcs.png) - -#### Managing the Pacemaker service - -- Run the following command to start the Pacemaker service: - -```shell -systemctl start pacemaker -``` - -- Run the following command to query the Pacemaker service status: - -```shell -systemctl status pacemaker -``` - -The service is started successfully if the following information is displayed: - -![](./figures/HA-pacemaker.png) - -#### Managing the Corosync service - -- Run the following command to start the Corosync service: - -```shell -systemctl start corosync -``` - -- Run the following command to query the Corosync service status: - -```shell -systemctl status corosync -``` - -The service is started successfully if the following information is displayed: - -![](./figures/HA-corosync.png) - -### Performing Node Authentication - -- **Note: Run this command on only one node.** - -```shell -pcs host auth ha1 ha2 -``` - -### Accessing the Front-End Management Platform - -After the preceding services are started, open the browser (Chrome or Firefox is recommended) and enter **https://localhost:2224** in the navigation bar. - -- This page is the native management platform. - -![](./figures/HA-login.png) - -For details about how to install the management platform newly developed by the community, see . 
- -- The following is the management platform newly developed by the community. - -![](./figures/HA-api.png) - -- The next chapter describes how to quickly use an HA cluster and add an instance. For details, see the [HA Usage Example](./HA Usage Example.md\). diff --git a/docs/en/docs/desktop/kiran.md b/docs/en/docs/desktop/kiran.md deleted file mode 100644 index b9da8f9fda119724e97d0f45f62a3b32ea80bbb8..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/kiran.md +++ /dev/null @@ -1,3 +0,0 @@ -# Kiran User Guide - -This chapter describes how to install and use the Kiran desktop environment. diff --git a/docs/en/docs/desktop/kubesphere.md b/docs/en/docs/desktop/kubesphere.md deleted file mode 100644 index 6cc3b4ae243b58636bcf5d3cd45075d51b35e323..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/kubesphere.md +++ /dev/null @@ -1,60 +0,0 @@ -# KubeSphere Deployment Guide - -This document describes how to install and deploy Kubernetes and KubeSphere clusters on openEuler 21.09. - -## What Is KubeSphere - -[KubeSphere](https://kubesphere.io/) is an open source **distributed OS** built on [Kubernetes](https://kubernetes.io/) for cloud-native applications. It supports multi-cloud and multi-cluster management and provides full-stack automated IT O&M capabilities, simplifying DevOps-based workflows for enterprises. Its architecture enables plug-and-play integration between third-party applications and cloud-native ecosystem components. For more information, see the [KubeSphere official website](https://kubesphere.com.cn/). - -## Prerequisites - -Prepare a physical machine or VM with openEuler 21.09 installed. For details about the installation method, see the [*openEuler Installation Guide*](../Installation/Installation.md). - -## Software Installation - -1. Install KubeKey. 
- - ```bash - yum install kubekey - ``` - - > ![](../Virtualization/public_sys-resources/icon-note.gif)**Note** - > Before the installation, manually deploy Docker on each node in the cluster in advance or use KubeKey to automatically deploy Docker. The Docker version automatically deployed by KubeKey is 20.10.8. - -2. Deploy the KubeSphere cluster. - - ```bash - kk create cluster --with-kubesphere v3.1.1 - ``` - - > ![](../Virtualization/public_sys-resources/icon-note.gif)**Note** - > After this command is executed, Kubernetes v1.19.8 is installed by default. To specify the Kubernetes version, add `--with-kubernetes < version_number >` to the end of the command line. The supported Kubernetes versions include `v1.17.9`, `v1.18.8`, `v.1.19.8`, `v1.19.9`, and `v1.20.6`. - -3. Check whether the KubeSphere cluster is successfully installed. - - ```bash - kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f - ``` - - If the following information is displayed, the KubeSphere cluster is successfully installed: - - ![](./figures/kubesphere.png) - - >![](../Virtualization/public_sys-resources/icon-note.gif)**Note** - >This document describes how to install KubeSphere in the x86 environment. In the ARM64 environment, you need to install Kubernetes before deploying KubeSphere. - -## Accessing the KubeSphere Web Console - -**Depending on your network environment, you may need to configure port forwarding rules and firewall policies. Ensure that port 30880 is allowed in the firewall rules.** - -After the KubeSphere cluster is successfully deployed, enter `:30880` in the address box of a browser to access the KubeSphere web console. 
- -![kubesphere-console](./figures/1202_1.jpg) - -## See Also - -[What is KubeSphere](https://v3-1.docs.kubesphere.io/docs/introduction/what-is-kubesphere/) - -[Install a Multi-node Kubernetes and KubeSphere Cluster](https://v3-1.docs.kubesphere.io/docs/installing-on-linux/introduction/multioverview/) - -[Enable Pluggable Components](https://v3-1.docs.kubesphere.io/docs/quick-start/enable-pluggable-components/) diff --git a/docs/en/docs/desktop/ukui.md b/docs/en/docs/desktop/ukui.md deleted file mode 100644 index 71659c3ba300377ef96cdfbc346b81d727e8935a..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/ukui.md +++ /dev/null @@ -1,3 +0,0 @@ -# UKUI User Guide - -This section describes how to install and use UKUI. diff --git a/docs/en/docs/desktop/xfce.md b/docs/en/docs/desktop/xfce.md deleted file mode 100644 index 4463d1056292a129e109db9398e7c15c9f59e8e7..0000000000000000000000000000000000000000 --- a/docs/en/docs/desktop/xfce.md +++ /dev/null @@ -1,3 +0,0 @@ -# Xfce User Guide - -This section describes how to install and use Xfce. 
diff --git a/docs/en/docs/ops_guide/images/en-us_image_0000001321685172.png b/docs/en/docs/ops_guide/images/en-us_image_0000001321685172.png deleted file mode 100644 index acbe1f90720a7cc56dd20d03f00918264680a7db..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/en-us_image_0000001321685172.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001335816300.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001335816300.png deleted file mode 100644 index 619f0c33503cd27d92f227216c722d554b9132f2..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001335816300.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001336729664.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001336729664.png deleted file mode 100644 index 4d73507cceab2e0b123d6864d9f86c86eb1eee2f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001336729664.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337000118.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337000118.png deleted file mode 100644 index 37131647778506f24be4ff401392a9cc209a36eb..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337000118.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337051916.jpg b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337051916.jpg deleted file mode 100644 index a2083b7783041884394f796222352d8772ada6cc..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337051916.jpg and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337053248.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337053248.png deleted file mode 100644 index 
8859f37749a4f8a4394e24ddfb54fc473e8c10c2..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337053248.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337172594.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337172594.png deleted file mode 100644 index 4e806f83c57880543a777807778f14eeb0105aba..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337172594.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337212144.jpg b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337212144.jpg deleted file mode 100644 index c6f0874250475f598efa7375516109b540918fb8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337212144.jpg and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337260780.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337260780.png deleted file mode 100644 index 09d521d933f5fa0caacc592ea92acee959786051..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337260780.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337268560.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337268560.png deleted file mode 100644 index 663f67428487d88e23aa9c3291c31399fec2f2c3..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337268560.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337268820.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337268820.png deleted file mode 100644 index cd1732ee870a6dde0acc54642f34793933ce3356..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337268820.png and /dev/null differ diff --git 
a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337419960.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337419960.png deleted file mode 100644 index c3b493bf1e57f130e122b59e99ff45cd44539dad..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337419960.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337420372.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337420372.png deleted file mode 100644 index 2300bcd7426748236fd48b85688bd3d1fa3315df..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337420372.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337422904.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337422904.png deleted file mode 100644 index 01e250c6f7cbb64abe0b136cd80fda7ae68b629d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337422904.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337424024.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337424024.png deleted file mode 100644 index 6532d98885f756c6704bc4bacc0f9133d78405a7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337424024.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337424304.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337424304.png deleted file mode 100644 index 9ecb384ed58458c24d8e3ae729c4de197b982b86..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337424304.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337427216.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337427216.png deleted file mode 100644 index 
8633dbdd658f98501dfc91a704395260f2d4df3c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337427216.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337427392.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337427392.png deleted file mode 100644 index 74f5cb24520c94de8628b2e64e6916c563f9f5a2..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337427392.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337533690.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337533690.png deleted file mode 100644 index 1f02d9b155754a113347a54a7d35ba9b060175a8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337533690.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337536842.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337536842.png deleted file mode 100644 index 5a9ee2c989638c9a6aad3fcfb35bb9b9f2d4683c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337536842.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337579708.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337579708.png deleted file mode 100644 index 5cd8ed939434e6447dd55679eeaa3756d861751f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337579708.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337580216.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337580216.png deleted file mode 100644 index 5516b8d261b769287c74cf860a6708fcde6bbb8a..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337580216.png and /dev/null differ diff --git 
a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337584296.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337584296.png deleted file mode 100644 index fa76ecb59018fb154ffe1d9f6da1484d652f3ac1..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337584296.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337696078.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337696078.png deleted file mode 100644 index 3864852e345eaf01794042feaa85b012b8af71de..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337696078.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337740252.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337740252.png deleted file mode 100644 index fd83fb600a54ab8bc39ee2ae54210be8b6c48973..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337740252.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337740540.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337740540.png deleted file mode 100644 index b8e25128a47dccaed733fc192f52f2ca7828e516..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337740540.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337747132.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337747132.png deleted file mode 100644 index 41ea7d47f5fe5fca46816d93cb08b5da00abc0ad..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337747132.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337748300.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337748300.png deleted file mode 100644 index 
32488dc1740408834954cf8d57a2843d98f09c2e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337748300.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337748528.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001337748528.png deleted file mode 100644 index f2d62c85c844c2756f4d27a48711560dfb9615ea..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001337748528.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001372249333.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001372249333.png deleted file mode 100644 index 48cd37225954e212cb3e159acc137866d8edc362..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001372249333.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001386699925.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001386699925.png deleted file mode 100644 index cf5b13b35e65ed0143a01a5bcad1e11eaddaded7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001386699925.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387293085.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387293085.png deleted file mode 100644 index 7f56b020949c53d018eba016952c2409f0d7dca9..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387293085.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387413509.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387413509.png deleted file mode 100644 index 2245427058fc31f3e5d7f40062c0551936a67199..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387413509.png and /dev/null differ diff --git 
a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387413793.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387413793.png deleted file mode 100644 index aa649bf7215662819766d897513fb711d9d1e7f8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387413793.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387415629.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387415629.png deleted file mode 100644 index 01189358354090591de6580f8ef88ef78ddba3a1..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387415629.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387691985.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387691985.png deleted file mode 100644 index 31c3096fa837c1b397ab2fe27acdd87e2cec36de..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387691985.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387692269.jpg b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387692269.jpg deleted file mode 100644 index b79e3ddf78520277046b933c4662c6b72f45ab85..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387692269.jpg and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387692893.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387692893.png deleted file mode 100644 index 49ea515d834b58d4ded14c55a6a2b07034d76137..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387692893.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387755969.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387755969.png deleted file mode 100644 index 
b2daa95d6b757e7bd443d8fd961922f248dd6853..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387755969.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387780357.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387780357.png deleted file mode 100644 index 1aab3b8be2cd0c906253d70036a9fee3050a1055..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387780357.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387784693.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387784693.png deleted file mode 100644 index 62a40117a892ba6c163be81bce1d198c2920f0e9..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387784693.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387787605.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387787605.png deleted file mode 100644 index 8c1893e16fb929f77bb6b9a70cb25d3479dd684c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387787605.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387855149.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387855149.png deleted file mode 100644 index 731e957c367cb05e4229f53cf97dcee2cde69dff..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387855149.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387857005.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387857005.png deleted file mode 100644 index 872f5c9eb05169831df4ba49d017629e8a943c64..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387857005.png and /dev/null differ diff --git 
a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387902849.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387902849.png deleted file mode 100644 index ffe2043c199308ed2033e3eb02a0662a65141ece..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387902849.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387907229.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387907229.png deleted file mode 100644 index 084fbea1aee4d09b1e623c66b4f07641c7a0208d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387907229.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387908045.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387908045.png deleted file mode 100644 index 1fca645598e7a67da6e75b98c44f3c9a740be374..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387908045.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387908453.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387908453.png deleted file mode 100644 index b97804a0a575fd18235e7a0c7e4f2d0183e3b460..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387908453.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387961737.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001387961737.png deleted file mode 100644 index ae4ddce8cf2629b811e9711c61186b3efa4dfe3c..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001387961737.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388020197.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388020197.png deleted file mode 100644 index 
1816e1e068ee0294677ebb357ffd158a14bb86cf..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388020197.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388024321.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388024321.png deleted file mode 100644 index da3ba54203ded0093b7c2b5308de0e2afd85a146..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388024321.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388024397.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388024397.png deleted file mode 100644 index 4e4531dd19dc703399c9d4dd0e95236fa9a064c8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388024397.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388028161.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388028161.png deleted file mode 100644 index b3beb92520c34ba771d096a8a146fb2c5b5edbb7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388028161.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388028537.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388028537.png deleted file mode 100644 index ffb244306787c397ef4a9f4d9c3eb504172d3777..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388028537.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388184025.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388184025.png deleted file mode 100644 index cbce6fe1e32c547426319923c0fdb13e95554b99..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388184025.png and /dev/null differ diff --git 
a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388187249.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388187249.png deleted file mode 100644 index 0ac83f21e269d909e550b68cb0bdc6347c05dcac..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388187249.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388187325.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388187325.png deleted file mode 100644 index 02dbdf218da2cb1c844dfc13a463875df5124d48..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388187325.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388188365.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388188365.png deleted file mode 100644 index dbe3bfb48446bab88e3e622b9f8066383f269590..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388188365.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388241577.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388241577.png deleted file mode 100644 index 8dacb6e343ea4c750904fa090bb99213e012379d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388241577.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388972645.png b/docs/en/docs/ops_guide/images/zh-cn_image_0000001388972645.png deleted file mode 100644 index e32606925f4bb4380b262d9f946d4cd106202b87..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_image_0000001388972645.png and /dev/null differ diff --git a/docs/en/docs/ops_guide/images/zh-cn_other_0000001337581224.jpeg b/docs/en/docs/ops_guide/images/zh-cn_other_0000001337581224.jpeg deleted file mode 100644 index 
2c019b828bdf9c699f203f09ba3542968ff21262..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/ops_guide/images/zh-cn_other_0000001337581224.jpeg and /dev/null differ diff --git a/docs/en/docs/ops_guide/om-overview.md b/docs/en/docs/ops_guide/om-overview.md deleted file mode 100644 index ff727a6f00f36bd9f3e13be08f0894c44492eb33..0000000000000000000000000000000000000000 --- a/docs/en/docs/ops_guide/om-overview.md +++ /dev/null @@ -1,9 +0,0 @@ -# O&M Overview - -IT O&M means that the IT department of an enterprise uses technical means to manage the IT system. It is a comprehensive, sophisticated, and specific service. Routine IT O&M services include software management and hardware management. In software management, maintaining device stability and efficiency through the OS is the core of IT O&M. - -Specifically, by monitoring dynamic changes of performance metrics such as the CPU, memory, and I/O in a device, related problems can be effectively prevented or located. For example, the CPU is overloaded due to various service reasons, which slows down the service response. In this case, you need to monitor the CPU usage. When the memory usage remains high for a long time, you need to use the memory analysis tool to monitor related hardware or processes. When the efficiency of read/write operations is low, I/O data needs to be monitored to evaluate I/O performance. - -In addition, when a fault such as system breakdown, deadlock, or crash occurs, you need to perform troubleshooting on the OS to quickly locate and rectify the fault. For example, you can trigger kdump to collect system kernel information and then analyze the information. When you need to change the system password, enter the single-user mode and change the password of the **root** user. The file system can be damaged due to frequent forcible power-on and power-off. If the OS fails to automatically repair the file system, you need to manually repair it. 
For example, modify the **drop\_caches** content to manually release the memory. In addition, you need to collect information, such as log files and device files, when a fault occurs, so that you can comprehensively analyze the root cause of the fault. - -Therefore, being familiar with the usage of OS performance analysis tools and fault rectification operations is the key to implementing comprehensive IT O&M management. diff --git a/docs/en/docs/ops_guide/overview.md b/docs/en/docs/ops_guide/overview.md deleted file mode 100644 index 6f19668adba0fe51e991b83960ae8921c7624a87..0000000000000000000000000000000000000000 --- a/docs/en/docs/ops_guide/overview.md +++ /dev/null @@ -1,3 +0,0 @@ -# O&M Guide - -This document describes how to operate and maintain the openEuler operating system (OS), including performance monitoring tools, information collection methods, emergency handling solutions, and common tools. diff --git a/docs/en/docs/sysBoost/Appendixes.md b/docs/en/docs/sysBoost/Appendixes.md deleted file mode 100644 index 9ffa3b4defb16e8acfd24f15b3d5323f9ca6698a..0000000000000000000000000000000000000000 --- a/docs/en/docs/sysBoost/Appendixes.md +++ /dev/null @@ -1,26 +0,0 @@ -# Appendixes - - -- [Appendixes](#appendixes) - - [Acronyms and Abbreviations](#acronyms-and-abbreviations) - - -## Acronyms and Abbreviations - -**Table 1** Terminology

| Term | Description |
| ---- | ----------- |
diff --git a/docs/en/docs/sysBoost/faqs.md b/docs/en/docs/sysBoost/faqs.md deleted file mode 100644 index 95241a8b3bdb785effe3e4f9330f7cc6537e330e..0000000000000000000000000000000000000000 --- a/docs/en/docs/sysBoost/faqs.md +++ /dev/null @@ -1 +0,0 @@ -# FAQs diff --git a/docs/en/docs/sysMaster/public_sys-resources/icon-caution.gif b/docs/en/docs/sysMaster/public_sys-resources/icon-caution.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/sysMaster/public_sys-resources/icon-caution.gif and /dev/null differ diff --git a/docs/en/docs/sysMaster/public_sys-resources/icon-danger.gif b/docs/en/docs/sysMaster/public_sys-resources/icon-danger.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/sysMaster/public_sys-resources/icon-danger.gif and /dev/null differ diff --git a/docs/en/docs/sysMaster/public_sys-resources/icon-notice.gif b/docs/en/docs/sysMaster/public_sys-resources/icon-notice.gif deleted file mode 100644 index 86024f61b691400bea99e5b1f506d9d9aef36e27..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/sysMaster/public_sys-resources/icon-notice.gif and /dev/null differ diff --git a/docs/en/docs/sysMaster/public_sys-resources/icon-tip.gif b/docs/en/docs/sysMaster/public_sys-resources/icon-tip.gif deleted file mode 100644 index 93aa72053b510e456b149f36a0972703ea9999b7..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/sysMaster/public_sys-resources/icon-tip.gif and /dev/null differ diff --git a/docs/en/docs/sysMaster/public_sys-resources/icon-warning.gif b/docs/en/docs/sysMaster/public_sys-resources/icon-warning.gif deleted file mode 100644 index 6e90d7cfc2193e39e10bb58c38d01a23f045d571..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/sysMaster/public_sys-resources/icon-warning.gif and /dev/null differ diff --git 
a/docs/en/docs/sysMaster/sysmaster_install_deploy.md b/docs/en/docs/sysMaster/sysmaster_install_deploy.md deleted file mode 100644 index 19e74e59f2ef0343b002fc1350a0fcec4d77a5fb..0000000000000000000000000000000000000000 --- a/docs/en/docs/sysMaster/sysmaster_install_deploy.md +++ /dev/null @@ -1,98 +0,0 @@ -# Installation and Deployment - -sysmaster can be used in containers and VMs. This document uses the AArch64 architecture as an example to describe how to install and deploy sysmaster in both scenarios. - -## Software - -* OS: openEuler 23.09 - -## Hardware - -* x86_64 or AArch64 architecture - -## Installation and Deployment in Containers - -1. Install Docker. - - ```bash - yum install -y docker - systemctl restart docker - ``` - -2. Load the base container image. - - Download the container image. - - ```bash - wget https://repo.openeuler.org/openEuler-23.09/docker_img/aarch64/openEuler-docker.aarch64.tar.xz - xz -d openEuler-docker.aarch64.tar.xz - ``` - - Load the container image. - - ```bash - docker load --input openEuler-docker.aarch64.tar - ``` - -3. Build the container. - - Create a Dockerfile. - - ```bash - cat << EOF > Dockerfile - FROM openeuler-23.09 - RUN yum install -y sysmaster - CMD ["/usr/lib/sysmaster/init"] - EOF - ``` - - Build the container. - - ```bash - docker build -t openeuler-23.09:latest . - ``` - -4. Start and enter the container. - - Start the container. - - ```bash - docker run -itd --privileged openeuler-23.09:latest - ``` - - Obtain the container ID. - - ```bash - docker ps - ``` - - Use the container ID to enter the container. - - ```bash - docker exec -it <container ID> /bin/bash - ``` - -## Installation and Deployment in VMs - -1. Create an initramfs image. - To avoid the impact of systemd in the initrd phase, you need to create an initramfs image with systemd removed and use this image to enter the initrd procedure. 
Run the following command: - - ```bash - dracut -f --omit "systemd systemd-initrd systemd-networkd dracut-systemd" /boot/initrd_withoutsd.img - ``` - -2. Add a boot item. - Add a boot item to **grub.cfg**, whose path is **/boot/efi/EFI/openEuler/grub.cfg** in the AArch64 architecture and **/boot/grub2/grub.cfg** in the x86_64 architecture. Back up the original configurations and modify the configurations as follows: - - * **menuentry**: Change **openEuler (6.4.0-5.0.0.13.oe23.09.aarch64) 23.09** to **openEuler 23.09 withoutsd**. - * **linux**: Change **root=/dev/mapper/openeuler-root ro** to **root=/dev/mapper/openeuler-root rw**. - * **linux**: If Plymouth is installed, add **plymouth.enable=0** to disable it. - * **linux**: Add **init=/usr/lib/sysmaster/init**. - * **initrd**: Set to **/initrd_withoutsd.img**. -3. Install sysmaster. - - ```bash - yum install sysmaster - ``` - -4. If the **openEuler 23.09 withoutsd** boot item is displayed after the restart, the configuration is successful. Select **openEuler 23.09 withoutsd** to log in to the VM. 
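The grub.cfg edits listed in step 2 can also be applied mechanically with `sed`. The sketch below runs against an illustrative menuentry fragment (the kernel version string and device paths are examples, not taken from a real installation); on a real system, always back up grub.cfg before editing it:

```shell
# Apply the step-2 grub.cfg edits with sed against a sample menuentry.
# The kernel version and paths are illustrative examples only.
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
menuentry 'openEuler (6.4.0-5.0.0.13.oe23.09.aarch64) 23.09' {
        linux /vmlinuz-6.4.0 root=/dev/mapper/openeuler-root ro quiet
        initrd /initramfs-6.4.0.img
}
EOF

sed -i \
  -e "s/menuentry 'openEuler ([^)]*) 23.09'/menuentry 'openEuler 23.09 withoutsd'/" \
  -e 's|root=/dev/mapper/openeuler-root ro|root=/dev/mapper/openeuler-root rw|' \
  -e 's|^\([[:space:]]*linux .*\)$|\1 plymouth.enable=0 init=/usr/lib/sysmaster/init|' \
  -e 's|^[[:space:]]*initrd .*|        initrd /initrd_withoutsd.img|' \
  "$cfg"

result=$(cat "$cfg")
rm -f "$cfg"
printf '%s\n' "$result"
```

Editing a copy of the file and diffing it against the original before installing it keeps a one-command rollback available if the new boot item fails.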
diff --git a/docs/en/docs/thirdparty_migration/OpenStack-train.md b/docs/en/docs/thirdparty_migration/OpenStack-train.md deleted file mode 100644 index 7ad42a2867c6e95a01ee866f7d271e663b43335c..0000000000000000000000000000000000000000 --- a/docs/en/docs/thirdparty_migration/OpenStack-train.md +++ /dev/null @@ -1,2961 +0,0 @@ -# OpenStack-Wallaby Deployment Guide - - - -- [OpenStack-Wallaby Deployment Guide](#openstack-wallaby-deployment-guide) - - [OpenStack](#openstack) - - [Conventions](#conventions) - - [Preparing the Environment](#preparing-the-environment) - - [Environment Configuration](#environment-configuration) - - [Installing the SQL Database](#installing-the-sql-database) - - [Installing RabbitMQ](#installing-rabbitmq) - - [Installing Memcached](#installing-memcached) - - [OpenStack Installation](#openstack-installation) - - [Installing Keystone](#installing-keystone) - - [Installing Glance](#installing-glance) - - [Installing Placement](#installing-placement) - - [Installing Nova](#installing-nova) - - [Installing Neutron](#installing-neutron) - - [Installing Cinder](#installing-cinder) - - [Installing Horizon](#installing-horizon) - - [Installing Tempest](#installing-tempest) - - [Installing Ironic](#installing-ironic) - - [Installing Kolla](#installing-kolla) - - [Installing Trove](#installing-trove) - - [Installing Swift](#installing-swift) - - [Installing Cyborg](#installing-cyborg) - - [Installing Aodh](#installing-aodh) - - [Installing Gnocchi](#installing-gnocchi) - - [Installing Ceilometer](#installing-ceilometer) - - [Installing Heat](#installing-heat) - - [OpenStack Quick Installation](#openstack-quick-installation) - - -## OpenStack - -OpenStack is an open source cloud computing infrastructure software project developed by the community. It provides an operating platform or tool set for deploying the cloud, offering scalable and flexible cloud computing for organizations. 
- -As an open source cloud computing management platform, OpenStack consists of several major components, such as Nova, Cinder, Neutron, Glance, Keystone, and Horizon. OpenStack supports almost all cloud environments. The project aims to provide a cloud computing management platform that is easy-to-use, scalable, unified, and standardized. OpenStack provides an infrastructure as a service (IaaS) solution that combines complementary services, each of which provides an API for integration. - -The official source of openEuler 22.03-LTS now supports OpenStack Train. You can configure the Yum source and then deploy OpenStack by following the instructions of this document. - -## Conventions - -OpenStack supports multiple deployment modes. This document includes two deployment modes: **All in One** and **Distributed**. The conventions are as follows: - -**All in One** mode: - -```text -Ignores all possible suffixes. -``` - -**Distributed** mode: - -```text -A suffix of (CTL) indicates that the configuration or command applies only to the control node. -A suffix of (CPT) indicates that the configuration or command applies only to the compute node. -A suffix of (STG) indicates that the configuration or command applies only to the storage node. -In other cases, the configuration or command applies to both the control node and compute node. -``` - -***Note*** - -The services involved in the preceding conventions are as follows: - -- Cinder -- Nova -- Neutron - -## Preparing the Environment - -### Environment Configuration - -1. Enable the OpenStack Train Yum source. - - ```shell - yum update - yum install openstack-release-train - yum clean all && yum makecache - ``` - - **Note**: Enable the EPOL repository for the Yum source if it is not enabled already. 
- - ```shell - vi /etc/yum.repos.d/openEuler.repo - - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler - ``` - -2. Change the host name and mapping. - - Set the host name of each node: - - ```shell - hostnamectl set-hostname controller (CTL) - hostnamectl set-hostname compute (CPT) - ``` - - Assuming the IP address of the controller node is **10.0.0.11** and the IP address of the compute node (if any) is **10.0.0.12**, add the following information to the **/etc/hosts** file: - - ```shell - 10.0.0.11 controller - 10.0.0.12 compute - ``` - -### Installing the SQL Database - -1. Run the following command to install the software package: - - ```shell - yum install mariadb mariadb-server python3-PyMySQL - ``` - -2. Run the following command to create and edit the **/etc/my.cnf.d/openstack.cnf** file: - - ```shell - vim /etc/my.cnf.d/openstack.cnf - - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - - ***Note*** - - **`bind-address` is set to the management IP address of the controller node.** - -3. Run the following commands to start the database service and configure it to automatically start upon system boot: - - ```shell - systemctl enable mariadb.service - systemctl start mariadb.service - ``` - -4. (Optional) Configure the default database password: - - ```shell - mysql_secure_installation - ``` - - ***Note*** - - **Perform operations as prompted.** - -### Installing RabbitMQ - -1. Run the following command to install the software package: - - ```shell - yum install rabbitmq-server - ``` - -2. 
Start the RabbitMQ service and configure it to automatically start upon system boot: - - ```shell - systemctl enable rabbitmq-server.service - systemctl start rabbitmq-server.service - ``` - -3. Add the OpenStack user: - - ```shell - rabbitmqctl add_user openstack RABBIT_PASS - ``` - - ***Note*** - - **Replace *RABBIT_PASS* to set the password for the openstack user.** - -4. Run the following command to set the permission of the **openstack** user to allow the user to perform configuration, write, and read operations: - - ```shell - rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### Installing Memcached - -1. Run the following command to install the dependency package: - - ```shell - yum install memcached python3-memcached - ``` - -2. Open the **/etc/sysconfig/memcached** file in insert mode. - - ```shell - vim /etc/sysconfig/memcached - - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - -3. Run the following command to start the Memcached service and configure it to automatically start upon system boot: - - ```shell - systemctl enable memcached.service - systemctl start memcached.service - ``` - - ***Note*** - - **After the service is started, you can run `memcached-tool controller stats` to ensure that the service is started properly and available. You can replace `controller` with the management IP address of the controller node.** - -## OpenStack Installation - -### Installing Keystone - -1. Create the **keystone** database and grant permissions: - - ``` sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***Note*** - - **Replace *KEYSTONE_DBPASS* to set the password for the keystone database.** - -2. 
Install the software package: - - ```shell - yum install openstack-keystone httpd mod_wsgi - ``` - -3. Configure Keystone: - - ```shell - vim /etc/keystone/keystone.conf - - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - [token] - provider = fernet - ``` - - ***Description*** - - In the **[database]** section, configure the database entry. - - In the **[token]** section, configure the token provider. - - ***Note:*** - - **Replace *KEYSTONE_DBPASS* with the password of the keystone database.** - -4. Synchronize the database: - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. Initialize the Fernet keystore: - - ```shell - keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. Start the service: - - ```shell - keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - - ***Note*** - - **Replace *ADMIN_PASS* to set the password for the admin user.** - -7. Configure the Apache HTTP server: - - ```shell - vim /etc/httpd/conf/httpd.conf - - ServerName controller - ``` - - ```shell - ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - - ***Description*** - - Configure **ServerName** to use the control node. - - ***Note*** - **If the ServerName item does not exist, create it.** - -8. Start the Apache HTTP service: - - ```shell - systemctl enable httpd.service - systemctl start httpd.service - ``` - -9. 
Create environment variables: - - ```shell - cat << EOF >> ~/.admin-openrc - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - EOF - ``` - - ***Note*** - - **Replace *ADMIN_PASS* with the password of the admin user.** - -10. Create domains, projects, users, and roles in sequence. python3-openstackclient must be installed first: - - ```shell - yum install python3-openstackclient - ``` - - Import the environment variables: - - ```shell - source ~/.admin-openrc - ``` - - Create the project **service**. The domain **default** has been created during keystone-manage bootstrap. - - ```shell - openstack domain create --description "An Example Domain" example - ``` - - ```shell - openstack project create --domain default --description "Service Project" service - ``` - - Create the (non-admin) project **myproject**, user **myuser**, and role **myrole**, and add the role **myrole** to **myproject** and **myuser**. - - ```shell - openstack project create --domain default --description "Demo Project" myproject - openstack user create --domain default --password-prompt myuser - openstack role create myrole - openstack role add --project myproject --user myuser myrole - ``` - -11. Perform the verification. - - Cancel the temporary environment variables **OS_AUTH_URL** and **OS_PASSWORD**. 
- - ```shell - source ~/.admin-openrc - unset OS_AUTH_URL OS_PASSWORD - ``` - - Request a token for the **admin** user: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - Request a token for user **myuser**: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - -### Installing Glance - -1. Create the database, service credentials, and the API endpoints. - - Create the database: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***Note:*** - - **Replace *GLANCE_DBPASS* to set the password for the glance database.** - - Create the service credential: - - ```shell - source ~/.admin-openrc - - openstack user create --domain default --password-prompt glance - openstack role add --project service --user glance admin - openstack service create --name glance --description "OpenStack Image" image - ``` - - Create the API endpoints for the image service: - - ```shell - openstack endpoint create --region RegionOne image public http://controller:9292 - openstack endpoint create --region RegionOne image internal http://controller:9292 - openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. Install the software package: - - ```shell - yum install openstack-glance - ``` - -3. 
Configure Glance: - - ```shell - vim /etc/glance/glance-api.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ***Description:*** - - In the **[database]** section, configure the database entry. - - In the **[keystone_authtoken]** and **[paste_deploy]** sections, configure the identity authentication service entry. - - In the **[glance_store]** section, configure the local file system storage and the location of image files. - - ***Note*** - - **Replace *GLANCE_DBPASS* with the password of the glance database.** - - **Replace *GLANCE_PASS* with the password of user glance.** - -4. Synchronize the database: - - ```shell - su -s /bin/sh -c "glance-manage db_sync" glance - ``` - -5. Start the service: - - ```shell - systemctl enable openstack-glance-api.service - systemctl start openstack-glance-api.service - ``` - -6. Perform the verification. - - Download the image: - - ```shell - source ~/.admin-openrc - - wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - ***Note*** - - **If the Kunpeng architecture is used in your environment, download the image of the AArch64 version. 
The cirros-0.5.2-aarch64-disk.img image file has been tested.** - - Upload the image to the image service: - - ```shell - openstack image create --disk-format qcow2 --container-format bare \ - --file cirros-0.4.0-x86_64-disk.img --public cirros - ``` - - Confirm the image upload and verify the attributes: - - ```shell - openstack image list - ``` - -### Installing Placement - -1. Create a database, service credentials, and API endpoints. - - Create a database. - - Access the database as the **root** user. Create the **placement** database, and grant permissions. - - ```shell - mysql -u root -p - MariaDB [(none)]> CREATE DATABASE placement; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> exit - ``` - - **Note**: - - **Replace *PLACEMENT_DBPASS* to set the password for the placement database.** - - ```shell - source ~/.admin-openrc - ``` - - Run the following commands to create the Placement service credentials, create the **placement** user, and add the **admin** role to the **placement** user: - - Create the Placement API Service. - - ```shell - openstack user create --domain default --password-prompt placement - openstack role add --project service --user placement admin - openstack service create --name placement --description "Placement API" placement - ``` - - Create API endpoints of the **placement** service. - - ```shell - openstack endpoint create --region RegionOne placement public http://controller:8778 - openstack endpoint create --region RegionOne placement internal http://controller:8778 - openstack endpoint create --region RegionOne placement admin http://controller:8778 - ``` - -2. Perform the installation and configuration. 
- - Install the software package: - - ```shell - yum install openstack-placement-api - ``` - - Configure Placement: - - Edit the **/etc/placement/placement.conf** file: - - In the **[placement_database]** section, configure the database entry. - - In **[api]** and **[keystone_authtoken]** sections, configure the identity authentication service entry. - - ```shell - # vim /etc/placement/placement.conf - [placement_database] - # ... - connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement - [api] - # ... - auth_strategy = keystone - [keystone_authtoken] - # ... - auth_url = http://controller:5000/v3 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = placement - password = PLACEMENT_PASS - ``` - - Replace **PLACEMENT_DBPASS** with the password of the **placement** database, and replace **PLACEMENT_PASS** with the password of the **placement** user. - - Synchronize the database: - - ```shell - su -s /bin/sh -c "placement-manage db sync" placement - ``` - - Start the httpd service. - - ```shell - systemctl restart httpd - ``` - -3. Perform the verification. - - Run the following command to check the status: - - ```shell - . admin-openrc - placement-status upgrade check - ``` - - Run the following command to install osc-placement and list the available resource types and features: - - ```shell - yum install python3-osc-placement - openstack --os-placement-api-version 1.2 resource class list --sort-column name - openstack --os-placement-api-version 1.6 trait list --sort-column name - ``` - -### Installing Nova - -1. Create a database, service credentials, and API endpoints. - - Create a database. 
- - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> exit - ``` - - **Note**: - - **Replace *NOVA_DBPASS* to set the password for the nova database.** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - Run the following command to create the Nova service certificate: - - ```shell - openstack user create --domain default --password-prompt nova (CTL) - openstack role add --project service --user nova admin (CTL) - openstack service create --name nova --description "OpenStack Compute" compute (CTL) - ``` - - Create a Nova API endpoint. - - ```shell - openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) - ``` - -2. 
Install the software packages: - - ```shell - yum install openstack-nova-api openstack-nova-conductor \ (CTL) - openstack-nova-novncproxy openstack-nova-scheduler - - yum install openstack-nova-compute (CPT) - ``` - - **Note**: - - **If the ARM64 architecture is used, you also need to run the following command:** - - ```shell - yum install edk2-aarch64 (CPT) - ``` - -3. Configure Nova: - - ```shell - vim /etc/nova/nova.conf - - [DEFAULT] - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.1 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver=libvirt.LibvirtDriver (CPT) - instances_path = /var/lib/nova/instances/ (CPT) - lock_path = /var/lib/nova/tmp (CPT) - - [api_database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) - - [database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) - - [api] - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - - [vnc] - enabled = true - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) - - [glance] - api_servers = http://controller:9292 - - [oslo_concurrency] - lock_path = /var/lib/nova/tmp (CTL) - - [placement] - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - 
username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - Description - - In the **[default]** section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, configure **my_ip**, and enable the network service **neutron**. - - In the **[api_database]** and **[database]** sections, configure the database entry. - - In the **[api]** and **[keystone_authtoken]** sections, configure the identity service entry. - - In the **[vnc]** section, enable and configure the entry for the remote console. - - In the **[glance]** section, configure the API address for the image service. - - In the **[oslo_concurrency]** section, configure the lock path. - - In the **[placement]** section, configure the entry of the Placement service. - - **Note**: - - **Replace *RABBIT_PASS* with the password of the openstack user in RabbitMQ.** - - **Set *my_ip* to the management IP address of the controller node.** - - **Replace *NOVA_DBPASS* with the password of the nova database.** - - **Replace *NOVA_PASS* with the password of the nova user.** - - **Replace *PLACEMENT_PASS* with the password of the placement user.** - - **Replace *NEUTRON_PASS* with the password of the neutron user.** - - **Replace *METADATA_SECRET* with a proper metadata agent secret.** - - Others - - Check whether VM hardware acceleration (x86 architecture) is supported: - - ```shell - egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) - ``` - - If the returned value is **0**, hardware acceleration is not supported. You need to configure libvirt to use QEMU instead of KVM. - - ```shell - vim /etc/nova/nova.conf (CPT) - - [libvirt] - virt_type = qemu - ``` - - If the returned value is **1** or a larger value, hardware acceleration is supported. You can set the value of **virt_type** to **kvm**. 
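The acceleration check above can be scripted. The following is a hedged sketch: `pick_virt_type` is a helper name introduced here only for illustration, and the synthetic cpuinfo samples stand in for the real **/proc/cpuinfo** so the logic can be exercised without root access.

```shell
# pick_virt_type: choose the virt_type value for the [libvirt] section of
# /etc/nova/nova.conf from the number of vmx/svm flags in a cpuinfo file.
# Illustrative helper; on a compute node, pass /proc/cpuinfo.
pick_virt_type() {
    count=$(grep -E -c '(vmx|svm)' "$1" || true)
    if [ "${count:-0}" -eq 0 ]; then
        echo qemu     # no hardware acceleration: libvirt must use QEMU
    else
        echo kvm      # hardware acceleration supported
    fi
}

# Exercise the helper on two synthetic cpuinfo samples.
accel=$(mktemp); printf 'flags : fpu vmx sse2\n' > "$accel"
noaccel=$(mktemp); printf 'flags : fpu sse2\n' > "$noaccel"
with=$(pick_virt_type "$accel")
without=$(pick_virt_type "$noaccel")
echo "with acceleration: virt_type = $with"
echo "without acceleration: virt_type = $without"
rm -f "$accel" "$noaccel"
```

On an x86 compute node, `pick_virt_type /proc/cpuinfo` yields the value to place in the **[libvirt]** section; this sketch does not cover the ARM64-specific configuration described below.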
- - **Note**: - - **If the ARM64 architecture is used, you also need to run the following command on the compute node:** - - ```shell - - mkdir -p /usr/share/AAVMF - chown nova:nova /usr/share/AAVMF - - ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \ - /usr/share/AAVMF/AAVMF_CODE.fd - ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \ - /usr/share/AAVMF/AAVMF_VARS.fd - - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \ - /usr/share/AAVMF/AAVMF_VARS.fd", \ - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \ - /usr/share/edk2/aarch64/vars-template-pflash.raw"] - ``` - In addition, when the ARM deployment environment uses nested virtualization, configure the **[libvirt]** section as follows: - - ```shell - [libvirt] - virt_type = qemu - cpu_mode = custom - cpu_model = cortex-a72 - ``` - -4. Synchronize the database. - - Run the following command to synchronize the **nova-api** database: - - ```shell - su -s /bin/sh -c "nova-manage api_db sync" nova (CTL) - ``` - - Run the following command to register the **cell0** database: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL) - ``` - - Create the **cell1** cell: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL) - ``` - - Synchronize the **nova** database: - - ```shell - su -s /bin/sh -c "nova-manage db sync" nova (CTL) - ``` - - Verify whether **cell0** and **cell1** are correctly registered: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL) - ``` - - Add the compute node to the OpenStack cluster: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CPT) - ``` - -5. 
Start the services: - - ```shell - systemctl enable \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - - systemctl start \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - - ```shell - systemctl enable libvirtd.service openstack-nova-compute.service (CPT) - systemctl start libvirtd.service openstack-nova-compute.service (CPT) - ``` - -6. Perform the verification. - - ```shell - source ~/.admin-openrc (CTL) - ``` - - List the service components to verify that each process is successfully started and registered: - - ```shell - openstack compute service list (CTL) - ``` - - List the API endpoints in the identity service to verify the connection to the identity service: - - ```shell - openstack catalog list (CTL) - ``` - - List the images in the image service to verify the connections: - - ```shell - openstack image list (CTL) - ``` - - Check whether the cells are running properly and whether other prerequisites are met. - - ```shell - nova-status upgrade check (CTL) - ``` - -### Installing Neutron - -1. Create the database, service credentials, and API endpoints. 
- - Create the database: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***Note*** - - **Replace *NEUTRON_DBPASS* to set the password for the neutron database.** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - Create the **neutron** service credential: - - ```shell - openstack user create --domain default --password-prompt neutron (CTL) - openstack role add --project service --user neutron admin (CTL) - openstack service create --name neutron --description "OpenStack Networking" network (CTL) - ``` - - Create the API endpoints of the Neutron service: - - ```shell - openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) - ``` - -2. Install the software packages: - - ```shell - yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \ (CTL) - openstack-neutron-ml2 - ``` - - ```shell - yum install openstack-neutron-linuxbridge ebtables ipset (CPT) - ``` - -3. Configure Neutron. 
- - Set the main configuration items: - - ```shell - vim /etc/neutron/neutron.conf - - [database] - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) - - [DEFAULT] - core_plugin = ml2 (CTL) - service_plugins = router (CTL) - allow_overlapping_ips = true (CTL) - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true (CTL) - notify_nova_on_port_data_changes = true (CTL) - api_workers = 3 (CTL) - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - - [nova] - auth_url = http://controller:5000 (CTL) - auth_type = password (CTL) - project_domain_name = Default (CTL) - user_domain_name = Default (CTL) - region_name = RegionOne (CTL) - project_name = service (CTL) - username = nova (CTL) - password = NOVA_PASS (CTL) - - [oslo_concurrency] - lock_path = /var/lib/neutron/tmp - ``` - - ***Description*** - - Configure the database entry in the **[database]** section. - - Enable the ML2 and router plugins, allow IP address overlapping, and configure the RabbitMQ message queue entry in the **[default]** section. - - Configure the identity authentication service entry in the **[default]** and **[keystone_authtoken]** sections. - - Enable the network to notify the change of the compute network topology in the **[default]** and **[nova]** sections. - - Configure the lock path in the **[oslo_concurrency]** section. 
- - ***Note*** - - **Replace *NEUTRON_DBPASS* with the password of the neutron database.** - - **Replace *RABBIT_PASS* with the password of the openstack user in RabbitMQ.** - - **Replace *NEUTRON_PASS* with the password of the neutron user.** - - **Replace *NOVA_PASS* with the password of the nova user.** - - Configure the ML2 plugin: - - ```shell - vim /etc/neutron/plugins/ml2/ml2_conf.ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vxlan] - vni_ranges = 1:1000 - - [securitygroup] - enable_ipset = true - ``` - - Create the symbolic link for /etc/neutron/plugin.ini. - - ```shell - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - **Note** - - **Enable flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver in the [ml2] section.** - - **Configure the flat network as the provider virtual network in the [ml2_type_flat] section.** - - **Configure the range of the VXLAN network identifier in the [ml2_type_vxlan] section.** - - **Enable ipset in the [securitygroup] section.** - - **Remarks** - - **The actual L2 configurations can be modified as required. In this example, the provider network + linuxbridge is used.** - - Configure the Linux bridge agent: - - ```shell - vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - [securitygroup] - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - - ***Description*** - - Map the provider virtual network to the physical network interface in the **[linux_bridge]** section. 
- - Enable the VXLAN overlay network, configure the IP address of the physical network interface that processes the overlay network, and enable layer-2 population in the **[vxlan]** section. - - Enable the security group and configure the linux bridge iptables firewall driver in the **[securitygroup]** section. - - ***Note*** - - **Replace *PROVIDER_INTERFACE_NAME* with the physical network interface.** - - **Replace *OVERLAY_INTERFACE_IP_ADDRESS* with the management IP address of the controller node.** - - Configure the Layer-3 agent: - - ```shell - vim /etc/neutron/l3_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - ``` - - ***Description*** - - Set the interface driver to linuxbridge in the **[default]** section. - - Configure the DHCP agent: - - ```shell - vim /etc/neutron/dhcp_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - - ***Description*** - - In the **[default]** section, configure the linuxbridge interface driver and Dnsmasq DHCP driver, and enable the isolated metadata. - - Configure the metadata agent: - - ```shell - vim /etc/neutron/metadata_agent.ini (CTL) - - [DEFAULT] - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - ***Description*** - - In the **[default]** section, configure the metadata host and the shared secret. - - ***Note*** - - **Replace *METADATA_SECRET* with a proper metadata agent secret.** - -4. 
Configure Nova: - - ```shell - vim /etc/nova/nova.conf - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***Description*** - - In the **[neutron]** section, configure the access parameters, enable the metadata agent, and configure the secret. - - ***Note*** - - **Replace *NEUTRON_PASS* with the password of the neutron user.** - - **Replace *METADATA_SECRET* with a proper metadata agent secret.** - -5. Synchronize the database: - - ```shell - su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - -6. Run the following command to restart the compute API service: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -7. Start the network service: - - ```shell - systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL) - neutron-dhcp-agent.service neutron-metadata-agent.service \ - neutron-l3-agent.service - - systemctl restart neutron-server.service neutron-linuxbridge-agent.service \ (CTL) - neutron-dhcp-agent.service neutron-metadata-agent.service \ - neutron-l3-agent.service - - systemctl enable neutron-linuxbridge-agent.service (CPT) - systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) - ``` - -8. Perform the verification. - - Run the following command to verify whether the Neutron agent is started successfully: - - ```shell - openstack network agent list - ``` - -### Installing Cinder - -1. Create the database, service credentials, and API endpoints. 
- - Create the database: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***Note*** - - **Replace *CINDER_DBPASS* to set the password for the cinder database.** - - ```shell - source ~/.admin-openrc - ``` - - Create the Cinder service credentials: - - ```shell - openstack user create --domain default --password-prompt cinder - openstack role add --project service --user cinder admin - openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - - Create the API endpoints for the block storage service: - - ```shell - openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. Install the software packages: - - ```shell - yum install openstack-cinder-api openstack-cinder-scheduler (CTL) - ``` - - ```shell - yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG) - openstack-cinder-volume openstack-cinder-backup - ``` - -3. Prepare the storage devices. 
The following is an example:

    ```shell
    pvcreate /dev/vdb
    vgcreate cinder-volumes /dev/vdb

    vim /etc/lvm/lvm.conf

    devices {
    ...
    filter = [ "a/vdb/", "r/.*/"]
    }
    ```

    ***Description***

    In the **devices** section, add a filter that accepts the **/dev/vdb** device and rejects all other devices.

4. Prepare NFS:

    ```shell
    mkdir -p /root/cinder/backup

    cat << EOF >> /etc/exports
    /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    EOF
    ```

5. Configure Cinder:

    ```shell
    vim /etc/cinder/cinder.conf

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone
    my_ip = 10.0.0.11
    enabled_backends = lvm (STG)
    backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG)
    backup_share=HOST:PATH (STG)

    [database]
    connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = cinder
    password = CINDER_PASS

    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG)
    volume_group = cinder-volumes (STG)
    iscsi_protocol = iscsi (STG)
    iscsi_helper = tgtadm (STG)
    ```

    ***Description***

    - In the **[database]** section, configure the database entry.
    - In the **[DEFAULT]** section, configure the RabbitMQ message queue entry and **my_ip**.
    - In the **[DEFAULT]** and **[keystone_authtoken]** sections, configure the identity authentication service entry.
    - In the **[oslo_concurrency]** section, configure the lock path.
    - In the **[lvm]** section, configure the LVM backend with the LVM driver, the **cinder-volumes** volume group, and the iSCSI protocol and helper.
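The **cinder.conf** settings above can be spot-checked with a small INI lookup helper. This is a sketch: `ini_get` is an illustrative shell function, not part of the OpenStack tooling, and the checked values are the example values from this section.

```shell
# ini_get SECTION KEY FILE - print the value of KEY in [SECTION] of an
# INI-style file such as /etc/cinder/cinder.conf (illustrative helper;
# values that themselves contain "=" are not handled).
ini_get() {
    awk -F '=' -v section="$1" -v key="$2" '
        /^\[/ { in_sec = ($0 == "[" section "]") }
        in_sec && ($1 ~ ("^" key "[ \t]*$")) {
            val = $2
            sub(/^[ \t]+/, "", val); sub(/[ \t]+$/, "", val)
            print val
            exit
        }' "$3"
}

# Example: verify the LVM backend settings on a storage node.
conf=/etc/cinder/cinder.conf
if [ -f "$conf" ]; then
    [ "$(ini_get lvm volume_group "$conf")" = "cinder-volumes" ] \
        && echo "volume_group OK" || echo "volume_group mismatch"
    [ "$(ini_get DEFAULT enabled_backends "$conf")" = "lvm" ] \
        && echo "enabled_backends OK" || echo "enabled_backends mismatch"
fi
```

The same helper works for the other INI-style configuration files in this guide (for example, **nova.conf** or **neutron.conf**).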
- - ***Note*** - - **Replace *CINDER_DBPASS* with the password of the cinder database.** - - **Replace *RABBIT_PASS* with the password of the openstack user in RabbitMQ.** - - **Set *my_ip* to the management IP address of the controller node.** - - **Replace *CINDER_PASS* with the password of the cinder user.** - - **Replace *HOST:PATH* with the host IP address and the shared path of the NFS.** - -6. Synchronize the database: - - ```shell - su -s /bin/sh -c "cinder-manage db sync" cinder (CTL) - ``` - -7. Configure Nova: - - ```shell - vim /etc/nova/nova.conf (CTL) - - [cinder] - os_region_name = RegionOne - ``` - -8. Restart the compute API service: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -9. Start the Cinder service: - - ```shell - systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - ``` - - ```shell - systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - ``` - - ***Note*** - - If the Cinder volumes are mounted using tgtadm, modify the **/etc/tgt/tgtd.conf** file as follows to ensure that tgtd can discover the iscsi target of cinder-volume. - - ```shell - include /var/lib/cinder/volumes/* - ``` - -10. Perform the verification: - - ```shell - source ~/.admin-openrc - openstack volume service list - ``` - -### Installing Horizon - -1. Install the software package: - - ```shell - yum install openstack-dashboard - ``` - -2. Modify the file. 
- - Modify the variables: - - ```text - vim /etc/openstack-dashboard/local_settings - - OPENSTACK_HOST = "controller" - ALLOWED_HOSTS = ['*', ] - - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': 'controller:11211', - } - } - - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" - - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 3, - } - ``` - -3. Restart the httpd service: - - ```shell - systemctl restart httpd.service memcached.service - ``` - -4. Perform the verification. - Open the browser, enter in the address bar, and log in to Horizon. - - ***Note*** - - **Replace *HOSTIP* with the management plane IP address of the controller node.** - -### Installing Tempest - -Tempest is the integrated test service of OpenStack. If you need to run a fully automatic test of the functions of the installed OpenStack environment, you are advised to use Tempest. Otherwise, you can choose not to install it. - -1. Install Tempest: - - ```shell - yum install openstack-tempest - ``` - -2. Initialize the directory: - - ```shell - tempest init mytest - ``` - -3. Modify the configuration file: - - ```shell - cd mytest - vi etc/tempest.conf - ``` - - Configure the current OpenStack environment information in **tempest.conf**. For details, see the [official example](https://docs.openstack.org/tempest/latest/sampleconf.html). - -4. Perform the test: - - ```shell - tempest run - ``` - -5. (Optional) Install the tempest extensions. - The OpenStack services have provided some tempest test packages. You can install these packages to enrich the tempest test content. In Train, extension tests for Cinder, Glance, Keystone, Ironic and Trove are provided. 
You can run the following command to install and use them: - ``` - yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin - ``` - -### Installing Ironic - -Ironic is the bare metal service of OpenStack. If you need to deploy bare metal machines, Ironic is recommended. Otherwise, you can choose not to install it. - -1. Set the database. - - The bare metal service stores information in the database. Create a **ironic** database that can be accessed by the **ironic** user and replace **IRONIC_DBPASSWORD** with a proper password. - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - ``` - -2. Install the software packages. - - ```shell - yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient - ``` - - Start the services. - - ```shell - systemctl enable openstack-ironic-api openstack-ironic-conductor - systemctl start openstack-ironic-api openstack-ironic-conductor - ``` - -3. Create service user authentication. - - 1. Create the bare metal service user: - - ```shell - openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - openstack role add --project service --user ironic admin - openstack service create --name ironic \ - --description "Ironic baremetal provisioning service" baremetal - ``` - - 1. Create the bare metal service access entries: - - ```shell - openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - ``` - -4. 
Configure the ironic-api service. - - Configuration file path: **/etc/ironic/ironic.conf** - - 1. Use **connection** to configure the location of the database as follows. Replace **IRONIC_DBPASSWORD** with the password of user **ironic** and replace **DB_IP** with the IP address of the database server. - - ```shell - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 1. Configure the ironic-api service to use the RabbitMQ message broker. Replace **RPC_\*** with the detailed address and the credential of RabbitMQ. - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - You can also use json-rpc instead of RabbitMQ. - - 1. Configure the ironic-api service to use the credential of the identity authentication service. Replace **PUBLIC_IDENTITY_IP** with the public IP address of the identity authentication server and **PRIVATE_IDENTITY_IP** with the private IP address of the identity authentication server, replace **IRONIC_PASSWORD** with the password of the **ironic** user in the identity authentication service. - - ```shell - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. (string value) - - auth_strategy=keystone - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. 
(string value) - password=IRONIC_PASSWORD - # Service tenant name. (string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - - ``` - - 1. Create the bare metal service database table: - - ```shell - ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 1. Restart the ironic-api service: - - ```shell - sudo systemctl restart openstack-ironic-api - ``` - -5. Configure the ironic-conductor service. - - 1. Replace **HOST_IP** with the IP address of the conductor host. - - ```shell - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". - # (string value) - - my_ip=HOST_IP - ``` - - 1. Specifies the location of the database. ironic-conductor must use the same configuration as ironic-api. Replace **IRONIC_DBPASSWORD** with the password of user **ironic** and replace **DB_IP** with the IP address of the database server. - - ```shell - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 1. Configure the ironic-api service to use the RabbitMQ message broker. ironic-conductor must use the same configuration as ironic-api. Replace **RPC_\*** with the detailed address and the credential of RabbitMQ. - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - You can also use json-rpc instead of RabbitMQ. - - 1. Configure the credentials to access other OpenStack services. - - To communicate with other OpenStack services, the bare metal service needs to use the service users to get authenticated by the OpenStack Identity service when requesting other services. 
The credentials of these users must be configured in each configuration file associated with the corresponding service:

        - **[neutron]**: accessing the OpenStack network service.
        - **[glance]**: accessing the OpenStack image service.
        - **[swift]**: accessing the OpenStack object storage service.
        - **[cinder]**: accessing the OpenStack block storage service.
        - **[inspector]**: accessing the OpenStack bare metal introspection service.
        - **[service_catalog]**: a special item that stores the credential used by the bare metal service. The credential is used to discover the API URL endpoint registered in the OpenStack identity authentication service catalog by the bare metal service.

        For simplicity, you can use one service user for all services. For backward compatibility, the user name must be the same as that configured in **[keystone_authtoken]** of the ironic-api service. However, this is not mandatory. You can also create and configure a different service user for each service.

        In the following example, the authentication information for the user to access the OpenStack network service is configured as follows:

        - The network service is deployed in the identity authentication service domain named RegionOne. Only the public endpoint interface is registered in the service catalog.
        - A specific CA SSL certificate is used for HTTPS connections when sending requests.
        - The same service user as that configured for ironic-api.
        - The dynamic password authentication plugin discovers a proper identity authentication service API version based on other options.

        ```shell
        [neutron]

        # Authentication type to load (string value)
        auth_type = password
        # Authentication URL (string value)
        auth_url=https://IDENTITY_IP:5000/
        # Username (string value)
        username=ironic
        # User's password (string value)
        password=IRONIC_PASSWORD
        # Project name to scope to (string value)
        project_name=service
        # Domain ID containing project (string value)
        project_domain_id=default
        # User's domain id (string value)
        user_domain_id=default
        # PEM encoded Certificate Authority to use when verifying
        # HTTPs connections. (string value)
        cafile=/opt/stack/data/ca-bundle.pem
        # The default region_name for endpoint URL discovery. (string
        # value)
        region_name = RegionOne
        # List of interfaces, in order of preference, for endpoint
        # URL. (list value)
        valid_interfaces=public
        ```

        By default, to communicate with other services, the bare metal service attempts to discover a proper endpoint of the service through the service catalog of the identity authentication service. If you want to use a different endpoint for a specific service, specify the **endpoint_override** option in the bare metal service configuration file:

        ```shell
        [neutron]
        ...
        endpoint_override =
        ```

    1. Configure the allowed drivers and hardware types.

        Set **enabled_hardware_types** to specify the hardware types that can be used by ironic-conductor:

        ```shell
        [DEFAULT]
        enabled_hardware_types = ipmi
        ```

        Configure hardware interfaces:

        ```shell
        enabled_boot_interfaces = pxe
        enabled_deploy_interfaces = direct,iscsi
        enabled_inspect_interfaces = inspector
        enabled_management_interfaces = ipmitool
        enabled_power_interfaces = ipmitool
        ```

        Configure the default values of the interfaces:

        ```shell
        [DEFAULT]
        default_deploy_interface = direct
        default_network_interface = neutron
        ```

        If any driver that uses Direct Deploy is enabled, you must install and configure the Swift backend of the image service.
The Ceph object gateway (RADOS gateway) can also be used as the backend of the image service.

    1. Restart the ironic-conductor service:

        ```shell
        sudo systemctl restart openstack-ironic-conductor
        ```

6. Configure the httpd service.

    1. Create the root directory of the httpd service used by Ironic, and set the owner and owner group. The directory path must be the same as the path specified by the **http_root** configuration item in the **[deploy]** section in **/etc/ironic/ironic.conf**.

        ```
        mkdir -p /var/lib/ironic/httproot
        chown ironic.ironic /var/lib/ironic/httproot
        ```

    2. Install and configure the httpd service.

        1. Install the httpd service. If the httpd service is already installed, skip this step.

            ```
            yum install httpd -y
            ```

        2. Create the **/etc/httpd/conf.d/openstack-ironic-httpd.conf** file. The file content is as follows:

            ```
            Listen 8080

            <VirtualHost *:8080>
                ServerName ironic.openeuler.com

                ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
                CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

                DocumentRoot "/var/lib/ironic/httproot"
                <Directory "/var/lib/ironic/httproot">
                    Options Indexes FollowSymLinks
                    Require all granted
                </Directory>
                LogLevel warn
                AddDefaultCharset UTF-8
                EnableSendfile on
            </VirtualHost>
            ```

            The listening port must be the same as the port specified by **http_url** in the **[deploy]** section of **/etc/ironic/ironic.conf**.

        3. Restart the httpd service:

            ```
            systemctl restart httpd
            ```

7. Create the deploy ramdisk image.

    The ramdisk image of Train can be created using the ironic-python-agent service or the disk-image-builder tool, or using the latest ironic-python-agent-builder provided by the community. You can also use other tools.
    To use the Train native tool, you need to install the corresponding software package.
- - ```shell - yum install openstack-ironic-python-agent - or - yum install diskimage-builder - ``` - - For details, see the [official document](https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html). - - The following describes how to use the ironic-python-agent-builder to build the deploy image used by ironic. - - 1. Install ironic-python-agent-builder. - - 1. Install the tool: - - ```shell - pip install ironic-python-agent-builder - ``` - - 2. Modify the python interpreter in the following files: - - ```shell - /usr/bin/yum /usr/libexec/urlgrabber-ext-down - ``` - - 3. Install the other necessary tools: - - ```shell - yum install git - ``` - - **DIB** depends on the `semanage` command. Therefore, check whether the `semanage --help` command is available before creating an image. If the system displays a message indicating that the command is unavailable, install the command: - - ```shell - # Check which package needs to be installed. - [root@localhost ~]# yum provides /usr/sbin/semanage - Loaded plug-in: fastestmirror - Loading mirror speeds from cached hostfile - * base: mirror.vcu.edu - * extras: mirror.vcu.edu - * updates: mirror.math.princeton.edu - policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities - Source: base - Matching source: - File name: /usr/sbin/semanage - # Install. - [root@localhost ~]# yum install policycoreutils-python - ``` - - 2. Create the image. 
- - For Arm architecture, add the following information: - ```shell - export ARCH=aarch64 - ``` - - Basic usage: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - Example: - - ```shell - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - 3. Allow SSH login. - - Initialize the environment variables and create the image: - - ```shell - export DIB_DEV_USER_USERNAME=ipa \ - export DIB_DEV_USER_PWDLESS_SUDO=yes \ - export DIB_DEV_USER_PASSWORD='123' - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - 4. Specify the code repository. - - Initialize the corresponding environment variables and create the image: - - ```shell - # Specify the address and version of the repository. - DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git - DIB_REPOREF_ironic_python_agent=origin/develop - - # Clone code from Gerrit. - DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent - DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 - ``` - - Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html). 
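The environment variables and builder invocation above can be collected into one small wrapper script. This is a sketch: `build_cmd` is an illustrative helper, and the output path, branch, elements, and SSH credentials are simply the example values from this section.

```shell
#!/bin/sh
# Sketch of a wrapper around the ironic-python-agent-builder steps above.
set -eu

# On AArch64 hosts, diskimage-builder needs ARCH set explicitly.
if [ "$(uname -m)" = "aarch64" ]; then
    export ARCH=aarch64
fi

# Allow SSH login to the deploy ramdisk (devuser element, example values).
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'

build_cmd() {
    # Compose the builder command; printing it first makes a dry run easy.
    echo "ironic-python-agent-builder centos" \
         "-o /mnt/ironic-agent-ssh -b origin/stable/rocky" \
         "-e selinux-permissive -e devuser"
}

build_cmd                   # review the command first
# eval "$(build_cmd)"       # uncomment to actually build the image
```

Keeping the invocation behind a dry-run echo makes it easy to double-check the elements and environment variables before starting a long image build.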
    The specified repository address and version are verified successfully.

    5. Note

        - The template of the PXE configuration file of the native OpenStack does not support the ARM64 architecture. You need to modify the native OpenStack code.

          In Train, Ironic provided by the community does not support booting from ARM 64-bit UEFI PXE. As a result, the format of the generated **grub.cfg** file (generally in **/tftpboot/**) is incorrect, causing a PXE boot failure. You need to modify the code logic for generating the **grub.cfg** file.

        - A TLS error is reported when Ironic sends a request to IPA to query the command execution status.

          By default, both IPA and Ironic of Train enable TLS authentication when sending requests to each other. Disable TLS authentication as described on the official website:

          1. Add **ipa-insecure=1** to the following configuration in the Ironic configuration file (**/etc/ironic/ironic.conf**):

              ```
              [agent]
              verify_ca = False

              [pxe]
              pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
              ```

          2. Add the IPA configuration file **/etc/ironic_python_agent/ironic_python_agent.conf** to the ramdisk image and configure TLS as follows (the **/etc/ironic_python_agent** directory must be created in advance):

              ```
              [DEFAULT]
              enable_auto_tls = False
              ```

              Set the permission:

              ```
              chown -R ipa.ipa /etc/ironic_python_agent/
              ```

          3. Modify the startup file of the IPA service and add the configuration file option.
              vim /usr/lib/systemd/system/ironic-python-agent.service

              ```
              [Unit]
              Description=Ironic Python Agent
              After=network-online.target

              [Service]
              ExecStartPre=/sbin/modprobe vfat
              ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
              Restart=always
              RestartSec=30s

              [Install]
              WantedBy=multi-user.target
              ```

Other services, such as ironic-inspector, are also provided for OpenStack Train. Install them based on site requirements.

### Installing Kolla

Kolla provides production-ready, container-based deployment for the OpenStack services.

The installation of Kolla is simple. You only need to install the corresponding RPM packages:

```
yum install openstack-kolla openstack-kolla-ansible
```

After the installation is complete, you can run commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd` to create an image or deploy a container environment.

### Installing Trove

Trove is the database service of OpenStack. If you need to use the database service provided by OpenStack, Trove is recommended. Otherwise, you can choose not to install it.

1. Set the database.

    The database service stores information in the database. Create a **trove** database that can be accessed by the **trove** user and replace **TROVE_DBPASSWORD** with a proper password.

    ```sql
    mysql -u root -p

    MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
    ```

2. Create service user authentication.

    1. Create the **Trove** service user.
- - ```shell - openstack user create --domain default --password-prompt trove - openstack role add --project service --user trove admin - openstack service create --name trove --description "Database" database - ``` - **Description:** Replace *TROVE_PASSWORD* with the password of the **trove** user. - - 1. Create the **Database** service access entry - - ```shell - openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s - ``` - -3. Install and configure the **Trove** components. - - 1. Install the **Trove** package: - ```shell script - yum install openstack-trove python3-troveclient - ``` - - 2. Configure **trove.conf**: - ```shell script - vim /etc/trove/trove.conf - - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller:5000/ - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - swift_url = http://controller:8080/v1/AUTH_ - rpc_backend = rabbit - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672 - auth_strategy = keystone - add_addresses = True - api_paste_config = /etc/trove/api-paste.ini - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASSWORD - nova_proxy_admin_tenant_name = service - taskmanager_manager = trove.taskmanager.manager.Manager - use_nova_server_config_drive = True - # Set these if using Neutron Networking - network_driver = trove.network.neutron.NeutronDriver - network_label_regex = .* - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = trove 
- password = TROVE_PASSWORD - ``` - **Description:** - - In the **[Default]** section, **nova_compute_url** and **cinder_url** are endpoints created by Nova and Cinder in Keystone. - - **nova_proxy_XXX** is a user who can access the Nova service. In the preceding example, the **admin** user is used. - - **transport_url** is the **RabbitMQ** connection information, and **RABBIT_PASS** is the RabbitMQ password. - - In the **[database]** section, **connection** is the information of the database created for Trove in MySQL. - - Replace **TROVE_PASSWORD** in the Trove user information with the password of the **trove** user. - - 3. Configure **trove-guestagent.conf**: - ```shell script - vim /etc/trove/trove-guestagent.conf - - rabbit_host = controller - rabbit_password = RABBIT_PASS - trove_auth_url = http://controller:5000/ - ``` - **Description:** **guestagent** is an independent component in Trove and needs to be pre-built into the virtual machine image created by Trove using Nova. - After the database instance is created, the guestagent process is started to report heartbeat messages to the Trove through the message queue (RabbitMQ). - Therefore, you need to configure the user name and password of the RabbitMQ. - **Since Victoria, Trove uses a unified image to run different types of databases. The database service runs in the Docker container of the Guest VM.** - - Replace **RABBIT_PASS** with the RabbitMQ password. - - 4. Generate the **Trove** database table. - ```shell script - su -s /bin/sh -c "trove-manage db_sync" trove - ``` - -4. Complete the installation and configuration. - 1. Configure the **Trove** service to automatically start: - ```shell script - systemctl enable openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - 2. 
Start the services: - ```shell script - systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` -### Installing Swift - -Swift provides a scalable and highly available distributed object storage service, which is suitable for storing unstructured data in large scale. - -1. Create the service credentials and API endpoints. - - Create the service credential: - - ``` shell - # Create the swift user. - openstack user create --domain default --password-prompt swift - # Add the admin role for the swift user. - openstack role add --project service --user swift admin - # Create the swift service entity. - openstack service create --name swift --description "OpenStack Object Storage" object-store - ``` - - Create the Swift API endpoints. - - ```shell - openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 - ``` - - -2. Install the software packages: - - ```shell - yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL) - ``` - -3. Configure the proxy-server. - - The Swift RPM package contains a **proxy-server.conf** file which is basically ready to use. You only need to change the values of **ip** and swift **password** in the file. - - ***Note*** - - **Replace password with the password you set for the swift user in the identity service.** - -4. Install and configure the storage node. 
(STG) - - Install the supported program packages: - ```shell - yum install xfsprogs rsync - ``` - - Format the /dev/vdb and /dev/vdc devices into XFS: - - ```shell - mkfs.xfs /dev/vdb - mkfs.xfs /dev/vdc - ``` - - Create the mount point directory structure: - - ```shell - mkdir -p /srv/node/vdb - mkdir -p /srv/node/vdc - ``` - - Find the UUID of the new partition: - - ```shell - blkid - ``` - - Add the following to the **/etc/fstab** file: - - ```shell - UUID="" /srv/node/vdb xfs noatime 0 2 - UUID="" /srv/node/vdc xfs noatime 0 2 - ``` - - Mount the devices: - - ```shell - mount /srv/node/vdb - mount /srv/node/vdc - ``` - ***Note*** - - **If the disaster recovery function is not required, you only need to create one device and skip the following rsync configuration.** - - (Optional) Create or edit the **/etc/rsyncd.conf** file to include the following content: - - ```shell - [DEFAULT] - uid = swift - gid = swift - log file = /var/log/rsyncd.log - pid file = /var/run/rsyncd.pid - address = MANAGEMENT_INTERFACE_IP_ADDRESS - - [account] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/account.lock - - [container] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/container.lock - - [object] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/object.lock - ``` - **Replace *MANAGEMENT_INTERFACE_IP_ADDRESS* with the management network IP address of the storage node.** - - Start the rsyncd service and configure it to start upon system startup. - - ```shell - systemctl enable rsyncd.service - systemctl start rsyncd.service - ``` - -5. Install and configure the components on storage nodes. 
(STG) - - Install the software packages: - - ```shell - yum install openstack-swift-account openstack-swift-container openstack-swift-object - ``` - - Edit **account-server.conf**, **container-server.conf**, and **object-server.conf** in the **/etc/swift** directory and replace **bind_ip** with the management network IP address of the storage node. - - Ensure the proper ownership of the mount point directory structure. - - ```shell - chown -R swift:swift /srv/node - ``` - - Create the recon directory and ensure that it has the correct ownership. - - ```shell - mkdir -p /var/cache/swift - chown -R root:swift /var/cache/swift - chmod -R 775 /var/cache/swift - ``` - -6. Create the account ring. (CTL) - - Switch to the **/etc/swift** directory: - - ```shell - cd /etc/swift - ``` - - Create the basic **account.builder** file: - - ```shell - swift-ring-builder account.builder create 10 1 1 - ``` - - Add each storage node to the ring: - - ```shell - swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT - ``` - - **Replace *STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS* with the management network IP address of the storage node. Replace *DEVICE_NAME* with the name of the storage device on the same storage node.** - - ***Note*** - **Repeat this command for each storage device on each storage node.** - - Verify the ring contents: - - ```shell - swift-ring-builder account.builder - ``` - - Rebalance the ring: - - ```shell - swift-ring-builder account.builder rebalance - ``` - -7. Create the container ring.
(CTL) - - Switch to the **/etc/swift** directory: - - Create the basic **container.builder** file: - - ```shell - swift-ring-builder container.builder create 10 1 1 - ``` - - Add each storage node to the ring: - - ```shell - swift-ring-builder container.builder \ - add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \ - --device DEVICE_NAME --weight 100 - ``` - - **Replace *STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS* with the management network IP address of the storage node. Replace *DEVICE_NAME* with the name of the storage device on the same storage node.** - - ***Note*** - **Repeat this command for every storage device on every storage node.** - - Verify the ring contents: - - ```shell - swift-ring-builder container.builder - ``` - - Rebalance the ring: - - ```shell - swift-ring-builder container.builder rebalance - ``` - -8. Create the object ring. (CTL) - - Switch to the **/etc/swift** directory: - - Create the basic **object.builder** file: - - ```shell - swift-ring-builder object.builder create 10 1 1 - ``` - - Add each storage node to the ring: - - ```shell - swift-ring-builder object.builder \ - add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \ - --device DEVICE_NAME --weight 100 - ``` - - **Replace *STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS* with the management network IP address of the storage node. Replace *DEVICE_NAME* with the name of the storage device on the same storage node.** - - ***Note*** - **Repeat this command for every storage device on every storage node.** - - Verify the ring contents: - - ```shell - swift-ring-builder object.builder - ``` - - Rebalance the ring: - - ```shell - swift-ring-builder object.builder rebalance - ``` - - Distribute ring configuration files: - - Copy **account.ring.gz**, **container.ring.gz**, and **object.ring.gz** to the **/etc/swift** directory on each storage node and any additional nodes running the proxy service. - -9.
Complete the installation. - - Edit the **/etc/swift/swift.conf** file: - - ``` shell - [swift-hash] - swift_hash_path_suffix = test-hash - swift_hash_path_prefix = test-hash - - [storage-policy:0] - name = Policy-0 - default = yes - ``` - - **Replace test-hash with a unique value.** - - Copy the **swift.conf** file to the **/etc/swift** directory on each storage node and any additional nodes running the proxy service. - - Ensure correct ownership of the configuration directory on all nodes: - - ```shell - chown -R root:swift /etc/swift - ``` - - On the controller node and any additional nodes running the proxy service, start the object storage proxy service and its dependencies, and configure them to start upon system startup. - - ```shell - systemctl enable openstack-swift-proxy.service memcached.service - systemctl start openstack-swift-proxy.service memcached.service - ``` - - On the storage node, start the object storage services and configure them to start upon system startup. - - ```shell - systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - - systemctl start openstack-swift-object.service openstack-swift-object-auditor.service 
openstack-swift-object-replicator.service openstack-swift-object-updater.service - ``` -### Installing Cyborg - -Cyborg provides acceleration device support for OpenStack, for example, GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODPs, DPDKs, and SPDKs. - -1. Initialize the databases. - -``` -CREATE DATABASE cyborg; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; -``` - -2. Create Keystone resource objects. - -``` -$ openstack user create --domain default --password-prompt cyborg -$ openstack role add --project service --user cyborg admin -$ openstack service create --name cyborg --description "Acceleration Service" accelerator - -$ openstack endpoint create --region RegionOne \ - accelerator public http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator internal http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator admin http://:6666/v1 -``` - -3. Install Cyborg - -``` -yum install openstack-cyborg -``` - -4. Configure Cyborg - -Modify **/etc/cyborg/cyborg.conf**. 
 - -``` -[DEFAULT] -transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ -use_syslog = False -state_path = /var/lib/cyborg -debug = True - -[database] -connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg - -[service_catalog] -project_domain_id = default -user_domain_id = default -project_name = service -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[placement] -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = placement -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[keystone_authtoken] -memcached_servers = localhost:11211 -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password -``` - -Set the user names, passwords, and IP addresses as required. - -5. Synchronize the database table. - -``` -cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade -``` - -6. Start the Cyborg services. - -``` -systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -``` - -### Installing Aodh - -1. Create the database. - -``` -CREATE DATABASE aodh; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; -``` - -2. Create Keystone resource objects.
 - -``` -openstack user create --domain default --password-prompt aodh - -openstack role add --project service --user aodh admin - -openstack service create --name aodh --description "Telemetry" alarming - -openstack endpoint create --region RegionOne alarming public http://controller:8042 - -openstack endpoint create --region RegionOne alarming internal http://controller:8042 - -openstack endpoint create --region RegionOne alarming admin http://controller:8042 -``` - -3. Install Aodh. - -``` -yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient -``` - -4. Modify the **/etc/aodh/aodh.conf** configuration file. - -``` -[database] -connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh - -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -auth_strategy = keystone - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS -interface = internalURL -region_name = RegionOne -``` - -5. Initialize the database. - -``` -aodh-dbsync -``` - -6. Start the Aodh services. - -``` -systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service - -systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service -``` - -### Installing Gnocchi - -1. Create the database.
 - -``` -CREATE DATABASE gnocchi; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; -``` - -2. Create Keystone resource objects. - -``` -openstack user create --domain default --password-prompt gnocchi - -openstack role add --project service --user gnocchi admin - -openstack service create --name gnocchi --description "Metric Service" metric - -openstack endpoint create --region RegionOne metric public http://controller:8041 - -openstack endpoint create --region RegionOne metric internal http://controller:8041 - -openstack endpoint create --region RegionOne metric admin http://controller:8041 -``` - -3. Install Gnocchi. - -``` -yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient -``` - -4. Modify the **/etc/gnocchi/gnocchi.conf** configuration file. - -``` -[api] -auth_mode = keystone -port = 8041 -uwsgi_mode = http-socket - -[keystone_authtoken] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_name = Default -user_domain_name = Default -project_name = service -username = gnocchi -password = GNOCCHI_PASS -interface = internalURL -region_name = RegionOne - -[indexer] -url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi - -[storage] -# coordination_url is not required but specifying one will improve -# performance with better workload division across workers. -coordination_url = redis://controller:6379 -file_basepath = /var/lib/gnocchi -driver = file -``` - -5. Initialize the database. - -``` -gnocchi-upgrade -``` - -6. Start the Gnocchi services. - -``` -systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service - -systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service -``` - -### Installing Ceilometer - -1. Create Keystone resource objects.
 - -``` -openstack user create --domain default --password-prompt ceilometer - -openstack role add --project service --user ceilometer admin - -openstack service create --name ceilometer --description "Telemetry" metering -``` - -2. Install Ceilometer. - -``` -yum install openstack-ceilometer-notification openstack-ceilometer-central -``` - -3. Modify the **/etc/ceilometer/pipeline.yaml** configuration file. - -``` -publishers: - # set address of Gnocchi - # + filter out Gnocchi-related activity meters (Swift driver) - # + set default archive policy - - gnocchi://?filter_project=service&archive_policy=low -``` - -4. Modify the **/etc/ceilometer/ceilometer.conf** configuration file. - -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = ceilometer -password = CEILOMETER_PASS -interface = internalURL -region_name = RegionOne -``` - -5. Initialize the database. - -``` -ceilometer-upgrade -``` - -6. Start the Ceilometer services. - -``` -systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service - -systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service -``` - -### Installing Heat - -1. Create the **heat** database and grant proper privileges to it. Replace **HEAT_DBPASS** with a proper password. - -``` -CREATE DATABASE heat; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; -``` - -2. Create a service credential. Create the **heat** user and add the **admin** role to it. - -``` -openstack user create --domain default --password-prompt heat -openstack role add --project service --user heat admin -``` - -3. Create the **heat** and **heat-cfn** services and their API endpoints.
- -``` -openstack service create --name heat --description "Orchestration" orchestration -openstack service create --name heat-cfn --description "Orchestration" cloudformation -openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 -``` - -4. Create additional OpenStack management information, including the **heat** domain and its administrator **heat_domain_admin**, the **heat_stack_owner** role, and the **heat_stack_user** role. - -``` -openstack user create --domain heat --password-prompt heat_domain_admin -openstack role add --domain heat --user-domain heat --user heat_domain_admin admin -openstack role create heat_stack_owner -openstack role create heat_stack_user -``` - -5. Install the software packages. - -``` -yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine -``` - -6. Modify the configuration file **/etc/heat/heat.conf**. 
- -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -heat_metadata_server_url = http://controller:8000 -heat_waitcondition_server_url = http://controller:8000/v1/waitcondition -stack_domain_admin = heat_domain_admin -stack_domain_admin_password = HEAT_DOMAIN_PASS -stack_user_domain_name = heat - -[database] -connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = default -user_domain_name = default -project_name = service -username = heat -password = HEAT_PASS - -[trustee] -auth_type = password -auth_url = http://controller:5000 -username = heat -password = HEAT_PASS -user_domain_name = default - -[clients_keystone] -auth_uri = http://controller:5000 -``` - -7. Initialize the **heat** database table. - -``` -su -s /bin/sh -c "heat-manage db_sync" heat -``` - -8. Start the services. - -``` -systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -``` - -## OpenStack Quick Installation - -The OpenStack SIG provides the Ansible script for one-click deployment of OpenStack in All in One or Distributed modes. Users can use the script to quickly deploy an OpenStack environment based on openEuler RPM packages. The following uses the All in One mode installation as an example. - -1. Install the OpenStack SIG Tool. - - ```shell - pip install openstack-sig-tool - ``` - -2. Configure the OpenStack Yum source. - - ```shell - yum install openstack-release-train - ``` - - **Note**: Enable the EPOL repository for the Yum source if it is not enabled already. 
 - - ```shell - cat << EOF >> /etc/yum.repos.d/openEuler.repo - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler - EOF - ``` - -3. Update the Ansible configurations. - - Open the **/usr/local/etc/inventory/all_in_one.yaml** file and modify the configuration based on your environment and requirements, as follows: - - ```shell - all: - hosts: - controller: - ansible_host: - ansible_ssh_private_key_file: - ansible_ssh_user: root - vars: - mysql_root_password: root - mysql_project_password: root - rabbitmq_password: root - project_identity_password: root - enabled_service: - - keystone - - neutron - - cinder - - placement - - nova - - glance - - horizon - - aodh - - ceilometer - - cyborg - - gnocchi - - kolla - - heat - - swift - - trove - - tempest - neutron_provider_interface_name: br-ex - default_ext_subnet_range: 10.100.100.0/24 - default_ext_subnet_gateway: 10.100.100.1 - neutron_dataplane_interface_name: eth1 - cinder_block_device: vdb - swift_storage_devices: - - vdc - swift_hash_path_suffix: ash - swift_hash_path_prefix: has - children: - compute: - hosts: controller - storage: - hosts: controller - network: - hosts: controller - vars: - test-key: test-value - dashboard: - hosts: controller - vars: - allowed_host: '*' - kolla: - hosts: controller - vars: - # We add openEuler OS support for kolla in OpenStack Queens/Rocky release - # Set this var to true if you want to use it in Q/R - openeuler_plugin: false - ``` - - Key Configurations - - | Item | Description| - |---|---| - | ansible_host | IP address of the all-in-one node.| - | ansible_ssh_private_key_file | Key used by the Ansible script for logging in to the all-in-one node.| - | ansible_ssh_user | User used by the Ansible script for logging in to the all-in-one node.| - | enabled_service | List of services to be installed.
You can delete services as required.| - | neutron_provider_interface_name | Neutron L3 bridge name. | - | default_ext_subnet_range | Neutron private network IP address range. | - | default_ext_subnet_gateway | Neutron private network gateway. | - | neutron_dataplane_interface_name | NIC used by Neutron. You are advised to use a new NIC to avoid conflicts with existing NICs causing disconnection of the all-in-one node. | - | cinder_block_device | Name of the block device used by Cinder.| - | swift_storage_devices | Name of the block device used by Swift. | - -4. Run the installation command. - - ```shell - oos env setup all_in_one - ``` - - After the command is executed, the OpenStack environment of the All in One mode is successfully deployed. - - The environment variable file **.admin-openrc** is stored in the home directory of the current user. - -5. Initialize the Tempest environment. - - If you want to perform the Tempest test in the environment, run the `oos env init all_in_one` command to create the OpenStack resources required by Tempest. - - After the command is executed successfully, a **mytest** directory is generated in the home directory of the user. You can run the `tempest run` command in the directory. 
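The All in One flow above depends on a few host-side prerequisites: an SSH private key that Ansible can read (`ansible_ssh_private_key_file`) and the spare block devices named in `cinder_block_device` and `swift_storage_devices`. The following pre-flight sketch is not part of the OpenStack SIG Tool; the key path and device names are assumptions mirroring the sample inventory and should be adjusted to your host:

```shell
#!/bin/sh
# Hypothetical pre-flight check before running `oos env setup all_in_one`.
# The device names (vdb, vdc) mirror cinder_block_device and
# swift_storage_devices in the sample inventory; adjust as needed.

check_ssh_key() {
    # Succeeds only if the private key file exists and is readable.
    [ -r "$1" ]
}

check_block_device() {
    # Succeeds only if /dev/<name> exists and is a block device.
    [ -b "/dev/$1" ]
}

preflight() {
    key_file=$1
    shift
    rc=0
    check_ssh_key "$key_file" || { echo "missing SSH key: $key_file"; rc=1; }
    for dev in "$@"; do
        check_block_device "$dev" || { echo "missing block device: /dev/$dev"; rc=1; }
    done
    return $rc
}

# Example invocation (uncomment on the deployment host):
# preflight ~/.ssh/id_rsa vdb vdc || exit 1
```

Running a check like this before `oos env setup all_in_one` fails fast on a missing disk or key instead of partway through the Ansible run.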
diff --git a/docs/en/docs/thirdparty_migration/OpenStack-wallaby.md b/docs/en/docs/thirdparty_migration/OpenStack-wallaby.md deleted file mode 100644 index 486d1856d5d70faa55066435483d203939059cf4..0000000000000000000000000000000000000000 --- a/docs/en/docs/thirdparty_migration/OpenStack-wallaby.md +++ /dev/null @@ -1,3208 +0,0 @@ -# OpenStack-Wallaby Deployment Guide - - - -- [OpenStack-Wallaby Deployment Guide](#openstack-wallaby-deployment-guide) - - [OpenStack](#openstack) - - [Conventions](#conventions) - - [Preparing the Environment](#preparing-the-environment) - - [Environment Configuration](#environment-configuration) - - [Installing the SQL Database](#installing-the-sql-database) - - [Installing RabbitMQ](#installing-rabbitmq) - - [Installing Memcached](#installing-memcached) - - [Installing OpenStack](#installing-openstack) - - [Installing Keystone](#installing-keystone) - - [Installing Glance](#installing-glance) - - [Installing Placement](#installing-placement) - - [Installing Nova](#installing-nova) - - [Installing Neutron](#installing-neutron) - - [Installing Cinder](#installing-cinder) - - [Installing Horizon](#installing-horizon) - - [Installing Tempest](#installing-tempest) - - [Installing Ironic](#installing-ironic) - - [Installing Kolla](#installing-kolla) - - [Installing Trove](#installing-trove) - - [Installing Swift](#installing-swift) - - -## OpenStack - -OpenStack is an open source cloud computing infrastructure software project developed by the community. It provides an operating platform or tool set for deploying the cloud, offering scalable and flexible cloud computing for organizations. - -As an open source cloud computing management platform, OpenStack consists of several major components, such as Nova, Cinder, Neutron, Glance, Keystone, and Horizon. OpenStack supports almost all cloud environments. The project aims to provide a cloud computing management platform that is easy-to-use, scalable, unified, and standardized. 
OpenStack provides an infrastructure as a service (IaaS) solution that combines complementary services, each of which provides an API for integration. - -The official source of openEuler 22.03 LTS now supports OpenStack Wallaby. You can configure the Yum source and then deploy OpenStack by following the instructions of this document. - -## Conventions - -OpenStack supports multiple deployment modes. This document includes two deployment modes: `All in One` and `Distributed`. The conventions are as follows: - -`All in One` mode: - -```text -All suffixes are ignored. -``` - -`Distributed` mode: - -```text -A suffix of `(CTL)` indicates that the configuration or command applies only to the `control node`. -A suffix of `(CPT)` indicates that the configuration or command applies only to the `compute node`. -A suffix of `(STG)` indicates that the configuration or command applies only to the `storage node`. -In other cases, the configuration or command applies to both the `control node` and `compute node`. -``` - -***Note*** - -The services involved in the preceding conventions are as follows: - -- Cinder -- Nova -- Neutron - -## Preparing the Environment - -### Environment Configuration - -1. Configure the openEuler 22.03 LTS official Yum source. Enable the EPOL software repository to support OpenStack. - - ```shell - yum update - yum install openstack-release-wallaby - yum clean all && yum makecache - ``` - - **Note**: Enable the EPOL repository for the Yum source if it is not enabled already. - - ```shell - cat << EOF >> /etc/yum.repos.d/openEuler.repo - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler - EOF - ``` - -2. Change the host name and mapping.
- - Set the host name of each node: - - ```shell - hostnamectl set-hostname controller (CTL) - hostnamectl set-hostname compute (CPT) - ``` - - Assuming the IP address of the controller node is **10.0.0.11** and the IP address of the compute node (if any) is **10.0.0.12**, add the following information to the **/etc/hosts** file: - - ```shell - 10.0.0.11 controller - 10.0.0.12 compute - ``` - -### Installing the SQL Database - -1. Run the following command to install the software package: - - ```shell - yum install mariadb mariadb-server python3-PyMySQL - ``` - -2. Run the following command to create and edit the `/etc/my.cnf.d/openstack.cnf` file: - - ```shell - vim /etc/my.cnf.d/openstack.cnf - - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - - ***Note*** - - **`bind-address` is set to the management IP address of the controller node.** - -3. Run the following commands to start the database service and configure it to automatically start upon system boot: - - ```shell - systemctl enable mariadb.service - systemctl start mariadb.service - ``` - -4. (Optional) Configure the default database password: - - ```shell - mysql_secure_installation - ``` - - ***Note*** - - **Perform operations as prompted.** - -### Installing RabbitMQ - -1. Run the following command to install the software package: - - ```shell - yum install rabbitmq-server - ``` - -2. Start the RabbitMQ service and configure it to automatically start upon system boot: - - ```shell - systemctl enable rabbitmq-server.service - systemctl start rabbitmq-server.service - ``` - -3. Add the OpenStack user: - - ```shell - rabbitmqctl add_user openstack RABBIT_PASS - ``` - - ***Note*** - - **Replace `RABBIT_PASS` to set the password for the openstack user.** - -4. 
Run the following command to set the permission of the openstack user to allow the user to perform configuration, write, and read operations: - - ```shell - rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### Installing Memcached - -1. Run the following command to install the dependency package: - - ```shell - yum install memcached python3-memcached - ``` - -2. Edit the `/etc/sysconfig/memcached` file: - - ```shell - vim /etc/sysconfig/memcached - - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - -3. Run the following command to start the Memcached service and configure it to automatically start upon system boot: - - ```shell - systemctl enable memcached.service - systemctl start memcached.service - ``` - - ***Note*** - - **After the service is started, you can run `memcached-tool controller stats` to ensure that the service is started properly and available. You can replace `controller` with the management IP address of the controller node.** - -## Installing OpenStack - -### Installing Keystone - -1. Create the **keystone** database and grant permissions: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***Note*** - - **Replace `KEYSTONE_DBPASS` to set the password for the keystone database.** - -2. Install the software package: - - ```shell - yum install openstack-keystone httpd mod_wsgi - ``` - -3. Configure Keystone: - - ```shell - vim /etc/keystone/keystone.conf - - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - [token] - provider = fernet - ``` - - ***Description*** - - In the **[database]** section, configure the database entry. - - In the **[token]** section, configure the token provider.
 - - ***Note:*** - - **Replace `KEYSTONE_DBPASS` with the password of the keystone database.** - -4. Synchronize the database: - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. Initialize the Fernet keystore: - - ```shell - keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. Start the service: - - ```shell - keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - - ***Note*** - - **Replace `ADMIN_PASS` to set the password for the admin user.** - -7. Configure the Apache HTTP server: - - ```shell - vim /etc/httpd/conf/httpd.conf - - ServerName controller - ``` - - ```shell - ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - - ***Description*** - - Set `ServerName` to the controller node. - - ***Note*** - **If the `ServerName` item does not exist, create it.** - -8. Start the Apache HTTP service: - - ```shell - systemctl enable httpd.service - systemctl start httpd.service - ``` - -9. Create environment variables: - - ```shell - cat << EOF >> ~/.admin-openrc - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - EOF - ``` - - ***Note*** - - **Replace `ADMIN_PASS` with the password of the admin user.** - -10.
Create domains, projects, users, and roles in sequence. The python3-openstackclient package must be installed first: - - ```shell - yum install python3-openstackclient - ``` - - Import the environment variables: - - ```shell - source ~/.admin-openrc - ``` - - Create the project `service`. The domain `default` has been created during keystone-manage bootstrap. - - ```shell - openstack domain create --description "An Example Domain" example - ``` - - ```shell - openstack project create --domain default --description "Service Project" service - ``` - - Create the (non-admin) project `myproject`, user `myuser`, and role `myrole`, and add the role `myrole` to `myproject` and `myuser`. - - ```shell - openstack project create --domain default --description "Demo Project" myproject - openstack user create --domain default --password-prompt myuser - openstack role create myrole - openstack role add --project myproject --user myuser myrole - ``` - -11. Perform the verification. - - Unset the temporary environment variables `OS_AUTH_URL` and `OS_PASSWORD`. - - ```shell - source ~/.admin-openrc - unset OS_AUTH_URL OS_PASSWORD - ``` - - Request a token for the **admin** user: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - Request a token for user **myuser**: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - -### Installing Glance - -1. Create the database, service credentials, and the API endpoints.
- - Create the database: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***Note:*** - - **Replace `GLANCE_DBPASS` to set the password for the glance database.** - - Create the service credential: - - ```shell - source ~/.admin-openrc - - openstack user create --domain default --password-prompt glance - openstack role add --project service --user glance admin - openstack service create --name glance --description "OpenStack Image" image - ``` - - Create the API endpoints for the image service: - - ```shell - openstack endpoint create --region RegionOne image public http://controller:9292 - openstack endpoint create --region RegionOne image internal http://controller:9292 - openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. Install the software package: - - ```shell - yum install openstack-glance - ``` - -3. Configure Glance: - - ```shell - vim /etc/glance/glance-api.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ***Description:*** - - In the **[database]** section, configure the database entry. - - In the **[keystone_authtoken]** and **[paste_deploy]** sections, configure the identity authentication service entry. 
- - In the **[glance_store]** section, configure the local file system storage and the location of image files. - - ***Note*** - - **Replace `GLANCE_DBPASS` with the password of the glance database.** - - **Replace `GLANCE_PASS` with the password of user glance.** - -4. Synchronize the database: - - ```shell - su -s /bin/sh -c "glance-manage db_sync" glance - ``` - -5. Start the service: - - ```shell - systemctl enable openstack-glance-api.service - systemctl start openstack-glance-api.service - ``` - -6. Perform the verification. - - Download the image: - - ```shell - source ~/.admin-openrc - - wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - ***Note*** - - **If the Kunpeng architecture is used in your environment, download the AArch64 version of the image. The image cirros-0.5.2-aarch64-disk.img has been tested.** - - Upload the image to the image service: - - ```shell - openstack image create --disk-format qcow2 --container-format bare \ - --file cirros-0.4.0-x86_64-disk.img --public cirros - ``` - - Confirm the image upload and verify the attributes: - - ```shell - openstack image list - ``` - -### Installing Placement - -1. Create a database, service credentials, and API endpoints. - - Create a database. - - Access the database as the **root** user. Create the **placement** database, and grant permissions. 
- - ```shell - mysql -u root -p - MariaDB [(none)]> CREATE DATABASE placement; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> exit - ``` - - **Note**: - - **Replace `PLACEMENT_DBPASS` to set the password for the placement database.** - - ```shell - source admin-openrc - ``` - - Run the following commands to create the Placement service credentials, create the **placement** user, and add the **admin** role to the **placement** user: - - Create the Placement API Service. - - ```shell - openstack user create --domain default --password-prompt placement - openstack role add --project service --user placement admin - openstack service create --name placement --description "Placement API" placement - ``` - - Create API endpoints of the Placement service. - - ```shell - openstack endpoint create --region RegionOne placement public http://controller:8778 - openstack endpoint create --region RegionOne placement internal http://controller:8778 - openstack endpoint create --region RegionOne placement admin http://controller:8778 - ``` - -2. Perform the installation and configuration. - - Install the software package: - - ```shell - yum install openstack-placement-api - ``` - - Configure Placement: - - Edit the **/etc/placement/placement.conf** file: - - In the **[placement_database]** section, configure the database entry. - - In **[api]** and **[keystone_authtoken]** sections, configure the identity authentication service entry. - - ```shell - # vim /etc/placement/placement.conf - [placement_database] - # ... - connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement - [api] - # ... - auth_strategy = keystone - [keystone_authtoken] - # ... 
- auth_url = http://controller:5000/v3 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = placement - password = PLACEMENT_PASS - ``` - - Replace **PLACEMENT_DBPASS** with the password of the **placement** database, and replace **PLACEMENT_PASS** with the password of the **placement** user. - - Synchronize the database: - - ```shell - su -s /bin/sh -c "placement-manage db sync" placement - ``` - - Start the httpd service. - - ```shell - systemctl restart httpd - ``` - -3. Perform the verification. - - Run the following command to check the status: - - ```shell - . admin-openrc - placement-status upgrade check - ``` - - Run the following command to install osc-placement and list the available resource types and features: - - ```shell - yum install python3-osc-placement - openstack --os-placement-api-version 1.2 resource class list --sort-column name - openstack --os-placement-api-version 1.6 trait list --sort-column name - ``` - -### Installing Nova - -1. Create a database, service credentials, and API endpoints. - - Create a database. 
- - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> exit - ``` - - **Note**: - - **Replace `NOVA_DBPASS` to set the password for the nova database.** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - Run the following commands to create the Nova service credentials: - - ```shell - openstack user create --domain default --password-prompt nova (CTL) - openstack role add --project service --user nova admin (CTL) - openstack service create --name nova --description "OpenStack Compute" compute (CTL) - ``` - - Create the Nova API endpoints. - - ```shell - openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) - ``` - -2. 
Install the software packages: - - ```shell - yum install openstack-nova-api openstack-nova-conductor \ (CTL) - openstack-nova-novncproxy openstack-nova-scheduler - - yum install openstack-nova-compute (CPT) - ``` - - **Note**: - - **If the ARM64 architecture is used, you also need to run the following command:** - - ```shell - yum install edk2-aarch64 (CPT) - ``` - -3. Configure Nova: - - ```shell - vim /etc/nova/nova.conf - - [DEFAULT] - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.1 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver=libvirt.LibvirtDriver (CPT) - instances_path = /var/lib/nova/instances/ (CPT) - lock_path = /var/lib/nova/tmp (CPT) - - [api_database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) - - [database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) - - [api] - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - - [vnc] - enabled = true - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) - - [libvirt] - virt_type = qemu (CPT) - cpu_mode = custom (CPT) - cpu_model = cortex-a72 (CPT) - - [glance] - api_servers = http://controller:9292 - - [oslo_concurrency] - lock_path = /var/lib/nova/tmp (CTL) - - [placement] - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = 
default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - Description - - In the **[default]** section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, configure **my_ip**, and enable the network service **neutron**. - - In the **[api_database]** and **[database]** sections, configure the database entry. - - In the **[api]** and **[keystone_authtoken]** sections, configure the identity service entry. - - In the **[vnc]** section, enable and configure the entry for the remote console. - - In the **[glance]** section, configure the API address for the image service. - - In the **[oslo_concurrency]** section, configure the lock path. - - In the **[placement]** section, configure the entry of the Placement service. - - **Note**: - - **Replace `RABBIT_PASS` with the password of the openstack user in RabbitMQ.** - - **Set `my_ip` to the management IP address of the controller node.** - - **Replace `NOVA_DBPASS` with the password of the nova database.** - - **Replace `NOVA_PASS` with the password of the nova user.** - - **Replace `PLACEMENT_PASS` with the password of the placement user.** - - **Replace `NEUTRON_PASS` with the password of the neutron user.** - - **Replace `METADATA_SECRET` with a proper metadata agent secret.** - - Others - - Check whether VM hardware acceleration (x86 architecture) is supported: - - ```shell - egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) - ``` - - If the returned value is **0**, hardware acceleration is not supported. You need to configure libvirt to use QEMU instead of KVM. - - ```shell - vim /etc/nova/nova.conf (CPT) - - [libvirt] - virt_type = qemu - ``` - - If the returned value is **1** or a larger value, hardware acceleration is supported, and no extra configuration is required. 
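The two outcomes above can be folded into one small sketch. It is illustrative only: it prints the suggested `[libvirt]` setting instead of editing **/etc/nova/nova.conf** directly.

```shell
# Sketch: suggest a virt_type for the [libvirt] section based on the
# CPU virtualization flags (x86 hosts). Prints the value only; it does
# not modify /etc/nova/nova.conf.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
if [ "${count:-0}" -eq 0 ]; then
    echo "virt_type = qemu"    # no hardware acceleration; fall back to QEMU
else
    echo "virt_type = kvm"     # hardware acceleration available
fi
```

On hosts whose CPUs expose neither flag (including ARM64 machines, where these x86 flags never appear), the sketch prints `virt_type = qemu`, matching the fallback described above.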
- - **Note**: - - **If the ARM64 architecture is used, you also need to run the following command:** - - ```shell - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \ - /usr/share/AAVMF/AAVMF_VARS.fd", \ - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \ - /usr/share/edk2/aarch64/vars-template-pflash.raw"] - - vim /etc/qemu/firmware/edk2-aarch64.json - - { - "description": "UEFI firmware for ARM64 virtual machines", - "interface-types": [ - "uefi" - ], - "mapping": { - "device": "flash", - "executable": { - "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw", - "format": "raw" - }, - "nvram-template": { - "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw", - "format": "raw" - } - }, - "targets": [ - { - "architecture": "aarch64", - "machines": [ - "virt-*" - ] - } - ], - "features": [ - - ], - "tags": [ - - ] - } - - (CPT) - ``` - -4. Synchronize the database. - - Run the following command to synchronize the **nova-api** database: - - ```shell - su -s /bin/sh -c "nova-manage api_db sync" nova (CTL) - ``` - - Run the following command to register the **cell0** database: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL) - ``` - - Create the **cell1** cell: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL) - ``` - - Synchronize the **nova** database: - - ```shell - su -s /bin/sh -c "nova-manage db sync" nova (CTL) - ``` - - Verify whether **cell0** and **cell1** are correctly registered: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL) - ``` - - Add compute node to the OpenStack cluster: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CPT) - ``` - -5. 
Start the services: - - ```shell - systemctl enable \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - - systemctl start \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - - ```shell - systemctl enable libvirtd.service openstack-nova-compute.service (CPT) - systemctl start libvirtd.service openstack-nova-compute.service (CPT) - ``` - -6. Perform the verification. - - ```shell - source ~/.admin-openrc (CTL) - ``` - - List the service components to verify that each process is successfully started and registered: - - ```shell - openstack compute service list (CTL) - ``` - - List the API endpoints in the identity service to verify the connection to the identity service: - - ```shell - openstack catalog list (CTL) - ``` - - List the images in the image service to verify the connections: - - ```shell - openstack image list (CTL) - ``` - - Check whether the cells are running properly and whether other prerequisites are met. - - ```shell - nova-status upgrade check (CTL) - ``` - -### Installing Neutron - -1. Create the database, service credentials, and API endpoints. 
- - Create the database: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***Note*** - - **Replace `NEUTRON_DBPASS` to set the password for the neutron database.** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - Create the **neutron** service credential: - - ```shell - openstack user create --domain default --password-prompt neutron (CTL) - openstack role add --project service --user neutron admin (CTL) - openstack service create --name neutron --description "OpenStack Networking" network (CTL) - ``` - - Create the API endpoints of the Neutron service: - - ```shell - openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) - ``` - -2. Install the software packages: - - ```shell - yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \ (CTL) - openstack-neutron-ml2 - ``` - - ```shell - yum install openstack-neutron-linuxbridge ebtables ipset (CPT) - ``` - -3. Configure Neutron. 
- - Set the main configuration items: - - ```shell - vim /etc/neutron/neutron.conf - - [database] - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) - - [DEFAULT] - core_plugin = ml2 (CTL) - service_plugins = router (CTL) - allow_overlapping_ips = true (CTL) - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true (CTL) - notify_nova_on_port_data_changes = true (CTL) - api_workers = 3 (CTL) - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - - [nova] - auth_url = http://controller:5000 (CTL) - auth_type = password (CTL) - project_domain_name = Default (CTL) - user_domain_name = Default (CTL) - region_name = RegionOne (CTL) - project_name = service (CTL) - username = nova (CTL) - password = NOVA_PASS (CTL) - - [oslo_concurrency] - lock_path = /var/lib/neutron/tmp - ``` - - ***Description*** - - Configure the database entry in the **[database]** section. - - Enable the ML2 and router plugins, allow IP address overlapping, and configure the RabbitMQ message queue entry in the **[default]** section. - - Configure the identity authentication service entry in the **[default]** and **[keystone_authtoken]** sections. - - Enable the network service to notify Compute of network topology changes in the **[default]** and **[nova]** sections. - - Configure the lock path in the **[oslo_concurrency]** section. 
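As a quick sanity check after editing, `grep` can confirm that each section configured above is actually present. The sketch below runs against a throwaway temp file so it can be tried anywhere; on a real controller node, point `conf` at **/etc/neutron/neutron.conf** instead.

```shell
# Sketch: verify that the expected sections exist in a neutron.conf-style
# file. The temp file here stands in for /etc/neutron/neutron.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
core_plugin = ml2
[keystone_authtoken]
username = neutron
[nova]
username = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF
for section in database DEFAULT keystone_authtoken nova oslo_concurrency; do
    grep -q "^\[$section\]" "$conf" && echo "[$section] present" || echo "[$section] MISSING"
done
rm -f "$conf"
```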
- - ***Note*** - - **Replace `NEUTRON_DBPASS` with the password of the neutron database.** - - **Replace `RABBIT_PASS` with the password of the openstack user in RabbitMQ.** - - **Replace `NEUTRON_PASS` with the password of the neutron user.** - - **Replace `NOVA_PASS` with the password of the nova user.** - - Configure the ML2 plugin: - - ```shell - vim /etc/neutron/plugins/ml2/ml2_conf.ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vxlan] - vni_ranges = 1:1000 - - [securitygroup] - enable_ipset = true - ``` - - Create the symbolic link for /etc/neutron/plugin.ini. - - ```shell - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - **Note** - - **Enable flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver in the [ml2] section.** - - **Configure the flat network as the provider virtual network in the [ml2_type_flat] section.** - - **Configure the range of the VXLAN network identifier in the [ml2_type_vxlan] section.** - - **Enable ipset in the [securitygroup] section.** - - **Remarks** - - **The actual L2 configuration can be modified as required. In this example, the provider network + linuxbridge is used.** - - Configure the Linux bridge agent: - - ```shell - vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - [securitygroup] - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - - ***Description*** - - Map the provider virtual network to the physical network interface in the **[linux_bridge]** section. 
- - Enable the VXLAN overlay network, configure the IP address of the physical network interface that processes the overlay network, and enable layer-2 population in the **[vxlan]** section. - - Enable the security group and configure the linux bridge iptables firewall driver in the **[securitygroup]** section. - - ***Note*** - - **Replace `PROVIDER_INTERFACE_NAME` with the physical network interface.** - - **Replace `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.** - - Configure the Layer-3 agent: - - ```shell - vim /etc/neutron/l3_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - ``` - - ***Description*** - - Set the interface driver to linuxbridge in the **[default]** section. - - Configure the DHCP agent: - - ```shell - vim /etc/neutron/dhcp_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - - ***Description*** - - In the **[default]** section, configure the linuxbridge interface driver and Dnsmasq DHCP driver, and enable the isolated metadata. - - Configure the metadata agent: - - ```shell - vim /etc/neutron/metadata_agent.ini (CTL) - - [DEFAULT] - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - ***Description*** - - In the **[default]** section, configure the metadata host and the shared secret. - - ***Note*** - - **Replace `METADATA_SECRET` with a proper metadata agent secret.** - -4. 
Configure Nova: - - ```shell - vim /etc/nova/nova.conf - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***Description*** - - In the **[neutron]** section, configure the access parameters, enable the metadata agent, and configure the secret. - - ***Note*** - - **Replace `NEUTRON_PASS` with the password of the neutron user.** - - **Replace `METADATA_SECRET` with a proper metadata agent secret.** - -5. Synchronize the database: - - ```shell - su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - -6. Run the following command to restart the compute API service: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -7. Start the network service: - - ```shell - systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL) - neutron-dhcp-agent.service neutron-metadata-agent.service \ - neutron-l3-agent.service - systemctl restart openstack-nova-api.service neutron-server.service \ (CTL) - neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ - neutron-metadata-agent.service neutron-l3-agent.service - - systemctl enable neutron-linuxbridge-agent.service (CPT) - systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) - ``` - -8. Perform the verification. - - Run the following command to verify whether the Neutron agents are started successfully: - - ```shell - openstack network agent list - ``` - -### Installing Cinder - -1. Create the database, service credentials, and API endpoints. 
- - Create the database: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***Note*** - - **Replace `CINDER_DBPASS` to set the password for the cinder database.** - - ```shell - source ~/.admin-openrc - ``` - - Create the Cinder service credentials: - - ```shell - openstack user create --domain default --password-prompt cinder - openstack role add --project service --user cinder admin - openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - - Create the API endpoints for the block storage service: - - ```shell - openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. Install the software packages: - - ```shell - yum install openstack-cinder-api openstack-cinder-scheduler (CTL) - ``` - - ```shell - yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG) - openstack-cinder-volume openstack-cinder-backup - ``` - -3. Prepare the storage devices. 
The following is an example: - - ```shell - pvcreate /dev/vdb - vgcreate cinder-volumes /dev/vdb - - vim /etc/lvm/lvm.conf - - devices { - ... - filter = [ "a/vdb/", "r/.*/"] - } - ``` - - ***Description*** - - In the **devices** section, add a filter to allow the **/dev/vdb** device and reject other devices. - -4. Prepare the NFS share: - - ```shell - mkdir -p /root/cinder/backup - - cat << EOF >> /etc/exports - /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) - EOF - - ``` - -5. Configure Cinder: - - ```shell - vim /etc/cinder/cinder.conf - - [DEFAULT] - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - my_ip = 10.0.0.11 - enabled_backends = lvm (STG) - backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) - backup_share=HOST:PATH (STG) - - [database] - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = cinder - password = CINDER_PASS - - [oslo_concurrency] - lock_path = /var/lib/cinder/tmp - - [lvm] - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) - volume_group = cinder-volumes (STG) - iscsi_protocol = iscsi (STG) - iscsi_helper = tgtadm (STG) - ``` - - ***Description*** - - In the **[database]** section, configure the database entry. - - In the **[DEFAULT]** section, configure the RabbitMQ message queue entry and **my_ip**. - - In the **[DEFAULT]** and **[keystone_authtoken]** sections, configure the identity authentication service entry. - - In the **[oslo_concurrency]** section, configure the lock path. 
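The `%\(project_id\)s` fragment in the volume endpoints created in step 1 is a template, not a literal path: Keystone substitutes the caller's project ID into the URL when the endpoint is used (the backslashes only protect the parentheses from the shell). A minimal sketch of that substitution, using an invented project ID:

```shell
# Sketch: how the %(project_id)s endpoint template expands.
# The project ID below is made up for illustration.
template='http://controller:8776/v3/%(project_id)s'
project_id='f1d6f5c42a314f93'
echo "$template" | sed "s/%(project_id)s/$project_id/"
# prints http://controller:8776/v3/f1d6f5c42a314f93
```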
- - ***Note*** - - **Replace `CINDER_DBPASS` with the password of the cinder database.** - - **Replace `RABBIT_PASS` with the password of the openstack user in RabbitMQ.** - - **Set `my_ip` to the management IP address of the controller node.** - - **Replace `CINDER_PASS` with the password of the cinder user.** - - **Replace `HOST:PATH` with the host IP address and the shared path of the NFS.** - -6. Synchronize the database: - - ```shell - su -s /bin/sh -c "cinder-manage db sync" cinder (CTL) - ``` - -7. Configure Nova: - - ```shell - vim /etc/nova/nova.conf (CTL) - - [cinder] - os_region_name = RegionOne - ``` - -8. Restart the compute API service: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -9. Start the Cinder service: - - ```shell - systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - ``` - - ```shell - systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - ``` - - ***Note*** - - If the Cinder volumes are mounted using tgtadm, modify the /etc/tgt/tgtd.conf file as follows to ensure that tgtd can discover the iscsi target of cinder-volume. - - ```shell - include /var/lib/cinder/volumes/* - ``` - -10. Perform the verification: - - ```shell - source ~/.admin-openrc - openstack volume service list - ``` - -### Installing Horizon - -1. Install the software package: - - ```shell - yum install openstack-dashboard - ``` - -2. Modify the file. 
- - Modify the variables: - - ```text - vim /etc/openstack-dashboard/local_settings - - OPENSTACK_HOST = "controller" - ALLOWED_HOSTS = ['*', ] - - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': 'controller:11211', - } - } - - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" - - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 3, - } - ``` - -3. Restart the httpd and memcached services: - - ```shell - systemctl restart httpd.service memcached.service - ``` - -4. Perform the verification. - Open the browser, enter **http://HOSTIP/dashboard** in the address bar, and log in to Horizon. - - ***Note*** - - **Replace `HOSTIP` with the management plane IP address of the controller node.** - -### Installing Tempest - -Tempest is the integrated test service of OpenStack. If you need to run a fully automatic test of the functions of the installed OpenStack environment, you are advised to use Tempest. Otherwise, you can choose not to install it. - -1. Install Tempest: - - ```shell - yum install openstack-tempest - ``` - -2. Initialize the directory: - - ```shell - tempest init mytest - ``` - -3. Modify the configuration file: - - ```shell - cd mytest - vim etc/tempest.conf - ``` - - Configure the current OpenStack environment information in **tempest.conf**. For details, see the [official example](https://docs.openstack.org/tempest/latest/sampleconf.html). - -4. Perform the test: - - ```shell - tempest run - ``` - -5. (Optional) Install the tempest extensions. - The OpenStack services have provided some tempest test packages. You can install these packages to enrich the tempest test content. In Wallaby, extension tests for Cinder, Glance, Keystone, Ironic, and Trove are provided. 
You can run the following command to install and use them: - - ```shell - yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin - ``` - -### Installing Ironic - -Ironic is the bare metal service of OpenStack. If you need to deploy bare metal machines, Ironic is recommended. Otherwise, you can choose not to install it. - -1. Set the database. - - The bare metal service stores information in the database. Create an **ironic** database that can be accessed by the **ironic** user and replace **IRONIC_DBPASSWORD** with a proper password. - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - ``` - -2. Create the service user credentials. - - 1. Create the bare metal service users: - - ```shell - openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - openstack role add --project service --user ironic admin - openstack service create --name ironic \ - --description "Ironic baremetal provisioning service" baremetal - - openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection - openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector - openstack role add --project service --user ironic_inspector admin - ``` - - 2. 
Create the bare metal service access entries: - - ```shell - openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 - openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 - openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 - ``` - -3. Configure the ironic-api service. - - Configuration file path: **/etc/ironic/ironic.conf** - - 1. Use **connection** to configure the location of the database as follows. Replace **IRONIC_DBPASSWORD** with the password of user **ironic** and replace **DB_IP** with the IP address of the database server. - - ```shell - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 2. Configure the ironic-api service to use the RabbitMQ message broker. Replace **RPC_\*** with the detailed address and the credential of RabbitMQ. - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - You can also use json-rpc instead of RabbitMQ. - - 3. Configure the ironic-api service to use the credential of the identity authentication service. Replace **PUBLIC_IDENTITY_IP** with the public IP address of the identity authentication server, **PRIVATE_IDENTITY_IP** with the private IP address of the identity authentication server, and **IRONIC_PASSWORD** with the password of the **ironic** user in the identity authentication service. 
- - ```shell - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. (string value) - - auth_strategy=keystone - host = controller - memcache_servers = controller:11211 - enabled_network_interfaces = flat,noop,neutron - default_network_interface = noop - transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/ - enabled_hardware_types = ipmi - enabled_boot_interfaces = pxe - enabled_deploy_interfaces = direct - default_deploy_interface = direct - enabled_inspect_interfaces = inspector - enabled_management_interfaces = ipmitool - enabled_power_interfaces = ipmitool - enabled_rescue_interfaces = no-rescue,agent - isolinux_bin = /usr/share/syslinux/isolinux.bin - logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. (string value) - password=IRONIC_PASSWORD - # Service tenant name. 
(string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - - [agent] - deploy_logs_collect = always - deploy_logs_local_path = /var/log/ironic/deploy - deploy_logs_storage_backend = local - image_download_source = http - stream_raw_images = false - force_raw_images = false - verify_ca = False - - [oslo_concurrency] - - [oslo_messaging_notifications] - transport_url = rabbit://openstack:123456@172.20.19.25:5672/ - topics = notifications - driver = messagingv2 - - [oslo_messaging_rabbit] - amqp_durable_queues = True - rabbit_ha_queues = True - - [pxe] - ipxe_enabled = false - pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 - image_cache_size = 204800 - tftp_root=/var/lib/tftpboot/cephfs/ - tftp_master_path=/var/lib/tftpboot/cephfs/master_images - - [dhcp] - dhcp_provider = none - ``` - - 4. Create the bare metal service database table: - - ```shell - ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 5. Restart the ironic-api service: - - ```shell - sudo systemctl restart openstack-ironic-api - ``` - -4. Configure the ironic-conductor service. - - 1. Replace **HOST_IP** with the IP address of the conductor host. - - ```shell - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". - # (string value) - - my_ip=HOST_IP - ``` - - 2. Specifies the location of the database. ironic-conductor must use the same configuration as ironic-api. Replace **IRONIC_DBPASSWORD** with the password of user **ironic** and replace **DB_IP** with the IP address of the database server. - - ```shell - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 3. 
Configure the ironic-api service to use the RabbitMQ message broker. ironic-conductor must use the same configuration as ironic-api. Replace **RPC_\*** with the detailed address and the credential of RabbitMQ. - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - You can also use json-rpc instead of RabbitMQ. - - 4. Configure the credentials to access other OpenStack services. - - To communicate with other OpenStack services, the bare metal service needs to use the service users to get authenticated by the OpenStack Identity service when requesting other services. The credentials of these users must be configured in each configuration file associated to the corresponding service. - - ```shell - [neutron] - Accessing the OpenStack network services. - [glance] - Accessing the OpenStack image service. - [swift] - Accessing the OpenStack object storage service. - [cinder] - Accessing the OpenStack block storage service. - [inspector] Accessing the OpenStack bare metal introspection service. - [service_catalog] - A special item to store the credential used by the bare metal service. The credential is used to discover the API URL endpoint registered in the OpenStack identity authentication service catalog by the bare metal service. - ``` - - For simplicity, you can use one service user for all services. For backward compatibility, the user name must be the same as that configured in [keystone_authtoken] of the ironic-api service. However, this is not mandatory. You can also create and configure a different service user for each service. - - In the following example, the authentication information for the user to access the OpenStack network service is configured as follows: - - ```shell - The network service is deployed in the identity authentication service domain named RegionOne. 
Only the public endpoint interface is registered in the service catalog. - - A specific CA SSL certificate is used for HTTPS connection when sending a request. - - The same service user as that configured for ironic-api. - - The dynamic password authentication plugin discovers a proper identity authentication service API version based on other options. - ``` - - ```shell - [neutron] - - # Authentication type to load (string value) - auth_type = password - # Authentication URL (string value) - auth_url=https://IDENTITY_IP:5000/ - # Username (string value) - username=ironic - # User's password (string value) - password=IRONIC_PASSWORD - # Project name to scope to (string value) - project_name=service - # Domain ID containing project (string value) - project_domain_id=default - # User's domain id (string value) - user_domain_id=default - # PEM encoded Certificate Authority to use when verifying - # HTTPs connections. (string value) - cafile=/opt/stack/data/ca-bundle.pem - # The default region_name for endpoint URL discovery. (string - # value) - region_name = RegionOne - # List of interfaces, in order of preference, for endpoint - # URL. (list value) - valid_interfaces=public - ``` - - By default, to communicate with other services, the bare metal service attempts to discover a proper endpoint of the service through the service catalog of the identity authentication service. If you want to use a different endpoint for a specific service, specify the endpoint_override option in the bare metal service configuration file. - - ```shell - [neutron] ... endpoint_override = - ``` - - 5. Configure the allowed drivers and hardware types. 
- - Set enabled_hardware_types to specify the hardware types that can be used by ironic-conductor: - - ```shell - [DEFAULT] enabled_hardware_types = ipmi - ``` - - Configure hardware interfaces: - - ```shell - enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool - ``` - - Configure the default value of the interface: - - ```shell - [DEFAULT] default_deploy_interface = direct default_network_interface = neutron - ``` - - If any driver that uses Direct Deploy is enabled, you must install and configure the Swift backend of the image service. The Ceph object gateway (RADOS gateway) can also be used as the backend of the image service. - - 6. Restart the ironic-conductor service: - - ```shell - sudo systemctl restart openstack-ironic-conductor - ``` - -5. Configure the ironic-inspector service. - - Configuration file path: **/etc/ironic-inspector/inspector.conf**. - - 1. Create the database: - - ```shell - # mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \ - IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; - ``` - - 2. Use **connection** to configure the location of the database as follows. 
Replace **IRONIC_INSPECTOR_DBPASSWORD** with the password of user **ironic_inspector** and replace **DB_IP** with the IP address of the database server:

    ```shell
    [database]
    backend = sqlalchemy
    connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
    min_pool_size = 100
    max_pool_size = 500
    pool_timeout = 30
    max_retries = 5
    max_overflow = 200
    db_retry_interval = 2
    db_inc_retry_interval = True
    db_max_retry_interval = 2
    db_max_retries = 5
    ```

    3. Configure the communication address of the message queue:

    ```shell
    [DEFAULT]
    transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
    ```

    4. Configure the Keystone authentication:

    ```shell
    [DEFAULT]

    auth_strategy = keystone
    timeout = 900
    rootwrap_config = /etc/ironic-inspector/rootwrap.conf
    logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
    log_dir = /var/log/ironic-inspector
    state_path = /var/lib/ironic-inspector
    use_stderr = False

    [ironic]
    api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
    auth_type = password
    auth_url = http://PUBLIC_IDENTITY_IP:5000
    auth_strategy = keystone
    ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
    os_region = RegionOne
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    username = IRONIC_SERVICE_USER_NAME
    password = IRONIC_SERVICE_USER_PASSWORD

    [keystone_authtoken]
    auth_type = password
    auth_url = http://control:5000
    www_authenticate_uri = http://control:5000
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = ironic_inspector
    password = IRONICPASSWD
    region_name = RegionOne
    memcache_servers = control:11211
    token_cache_time = 300

    [processing]
    add_ports = active
    processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
    ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
    always_store_ramdisk_logs = true
    store_data = none
    power_off = false

    [pxe_filter]
    driver = iptables

    [capabilities]
    boot_mode=True
    ```

    5. Configure the ironic-inspector dnsmasq service:

    ```shell
    # Configuration file path: /etc/ironic-inspector/dnsmasq.conf
    port=0
    interface=enp3s0  # Replace with the actual listening network interface.
    dhcp-range=172.20.19.100,172.20.19.110  # Replace with the actual DHCP IP address range.
    bind-interfaces
    enable-tftp

    dhcp-match=set:efi,option:client-arch,7
    dhcp-match=set:efi,option:client-arch,9
    dhcp-match=set:aarch64,option:client-arch,11
    dhcp-boot=tag:aarch64,grubaa64.efi
    dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
    dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0

    tftp-root=/tftpboot  # Replace with the actual tftpboot directory.
    log-facility=/var/log/dnsmasq.log
    ```

    6. Disable DHCP for the subnet of the ironic provision network:

    ```
    openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
    ```

    7. Initialize the database of the ironic-inspector service.

    Run the following command on the controller node:

    ```
    ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
    ```

    8. Start the services:

    ```shell
    systemctl enable --now openstack-ironic-inspector.service
    systemctl enable --now openstack-ironic-inspector-dnsmasq.service
    ```

6. Configure the httpd service.

    1. Create the root directory of the httpd server used by Ironic, and set the owner and owner group. The directory path must be the same as the path specified by the **http_root** configuration item in the **[deploy]** section in **/etc/ironic/ironic.conf**.

    ```
    mkdir -p /var/lib/ironic/httproot
    chown ironic.ironic /var/lib/ironic/httproot
    ```

    2. Install and configure the httpd service.

        1. Install the httpd service. 
If the httpd service is already installed, skip this step. - - ``` - yum install httpd -y - ``` - - - - 2. Create the **/etc/httpd/conf.d/openstack-ironic-httpd.conf** file. The file content is as follows: - - ``` - Listen 8080 - - - ServerName ironic.openeuler.com - - ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log" - CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b" - - DocumentRoot "/var/lib/ironic/httproot" - - Options Indexes FollowSymLinks - Require all granted - - LogLevel warn - AddDefaultCharset UTF-8 - EnableSendfile on - - - ``` - - The listening port must be the same as the port specified by **http_url** in the **[deploy]** section of **/etc/ironic/ironic.conf**. - - 3. Restart the httpd service: - - ``` - systemctl restart httpd - ``` - - - -7. Create the deploy ramdisk image. - - The ramdisk image of Wallaby can be created using the ironic-python-agent service or disk-image-builder tool. You can also use the latest ironic-python-agent-builder provided by the community. You can also use other tools. - To use the Wallaby native tool, you need to install the corresponding software package. - - ```shell - yum install openstack-ironic-python-agent - or - yum install diskimage-builder - ``` - - For details, see the [official document](https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html). - - The following describes how to use the ironic-python-agent-builder to build the deploy image used by ironic. - - 1. Install ironic-python-agent-builder. - - - 1. Install the tool: - - ```shell - pip install ironic-python-agent-builder - ``` - - 2. Modify the python interpreter in the following files: - - ```shell - /usr/bin/yum /usr/libexec/urlgrabber-ext-down - ``` - - 3. Install the other necessary tools: - - ```shell - yum install git - ``` - - `DIB` depends on the `semanage` command. Therefore, check whether the `semanage --help` command is available before creating an image. 
If the system displays a message indicating that the command is unavailable, install the command: - - ```shell - # Check which package needs to be installed. - [root@localhost ~]# yum provides /usr/sbin/semanage - Loaded plug-in: fastestmirror - Loading mirror speeds from cached hostfile - * base: mirror.vcu.edu - * extras: mirror.vcu.edu - * updates: mirror.math.princeton.edu - policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities - Source: base - Matching source: - File name: /usr/sbin/semanage - # Install. - [root@localhost ~]# yum install policycoreutils-python - ``` - - 2. Create the image. - - For `arm` architecture, add the following information: - ```shell - export ARCH=aarch64 - ``` - - Basic usage: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - Example: - - ```shell - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - 3. Allow SSH login. - - Initialize the environment variables and create the image: - - ```shell - export DIB_DEV_USER_USERNAME=ipa \ - export DIB_DEV_USER_PWDLESS_SUDO=yes \ - export DIB_DEV_USER_PASSWORD='123' - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - 4. Specify the code repository. 
    Initialize the corresponding environment variables and create the image:

    ```shell
    # Specify the address and version of the repository.
    DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    DIB_REPOREF_ironic_python_agent=origin/develop

    # Clone code from Gerrit.
    DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    ```

    Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

    The specified repository address and version have been verified.

5. Note

The template of the PXE configuration file of native OpenStack does not support the ARM64 architecture. You need to modify the native OpenStack code.

In Wallaby, Ironic provided by the community does not support booting from ARM 64-bit UEFI PXE. As a result, the format of the generated grub.cfg file (generally in /tftpboot/) is incorrect, causing a PXE boot failure.

The incorrectly generated configuration file uses the x86 UEFI PXE startup commands. In the ARM architecture, the commands for loading the vmlinux and ramdisk images are **linux** and **initrd**, respectively.

You need to modify the code logic for generating the grub.cfg file.

The following TLS error is reported when Ironic sends a request to IPA to query the command execution status:

By default, both IPA and Ironic of Wallaby have TLS authentication enabled for requests to each other. Disable TLS authentication according to the description on the official website.

1. 
Add **ipa-insecure=1** to the following configuration in the Ironic configuration file (**/etc/ironic/ironic.conf**):

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. Add the IPA configuration file **/etc/ironic_python_agent/ironic_python_agent.conf** to the ramdisk image and configure TLS as follows.

**/etc/ironic_python_agent/ironic_python_agent.conf** (The **/etc/ironic_python_agent** directory must be created in advance.)

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```

3. Modify the startup file of the IPA service and add the configuration file option.

    vim /usr/lib/systemd/system/ironic-python-agent.service

    ```
    [Unit]
    Description=Ironic Python Agent
    After=network-online.target

    [Service]
    ExecStartPre=/sbin/modprobe vfat
    ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    Restart=always
    RestartSec=30s

    [Install]
    WantedBy=multi-user.target
    ```

### Installing Kolla

Kolla provides the OpenStack service with the container-based deployment function that is ready for the production environment. The Kolla and Kolla-ansible services were introduced in openEuler 22.03 LTS.

The installation of Kolla is simple. You only need to install the corresponding RPM packages:

```
yum install openstack-kolla openstack-kolla-ansible
```

After the installation is complete, you can run commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd`.

### Installing Trove

Trove is the database service of OpenStack. If you need to use the database service provided by OpenStack, Trove is recommended. Otherwise, you can choose not to install it.

1. Set up the database.

    The database service stores information in the database. 
Create a **trove** database that can be accessed by the **trove** user and replace **TROVE_DBPASSWORD** with a proper password.

    ```sql
    mysql -u root -p

    MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
      IDENTIFIED BY 'TROVE_DBPASSWORD';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
      IDENTIFIED BY 'TROVE_DBPASSWORD';
    ```

2. Create service user authentication.

    1. Create the **Trove** service user:

        ```shell
        openstack user create --password TROVE_PASSWORD \
            --email trove@example.com trove
        openstack role add --project service --user trove admin
        openstack service create --name trove \
            --description "Database service" database
        ```

        **Description:** Replace `TROVE_PASSWORD` with the password of the `trove` user.

    2. Create the **Database** service access entries:

        ```shell
        openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
        openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
        openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
        ```

3. Install and configure the **Trove** components.

    1. Install the **Trove** package:

        ```shell script
        yum install openstack-trove python-troveclient
        ```

    2. 
Configure `trove.conf`: - ```shell script - vim /etc/trove/trove.conf - - [DEFAULT] - bind_host=TROVE_NODE_IP - log_dir = /var/log/trove - network_driver = trove.network.neutron.NeutronDriver - management_security_groups = - nova_keypair = trove-mgmt - default_datastore = mysql - taskmanager_manager = trove.taskmanager.manager.Manager - trove_api_workers = 5 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - reboot_time_out = 300 - usage_timeout = 900 - agent_call_high_timeout = 1200 - use_syslog = False - debug = True - - # Set these if using Neutron Networking - network_driver=trove.network.neutron.NeutronDriver - network_label_regex=.* - - - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - - [keystone_authtoken] - project_domain_name = Default - project_name = service - user_domain_name = Default - password = trove - username = trove - auth_url = http://controller:5000/v3/ - auth_type = password - - [service_credentials] - auth_url = http://controller:5000/v3/ - region_name = RegionOne - project_name = service - password = trove - project_domain_name = Default - user_domain_name = Default - username = trove - - [mariadb] - tcp_ports = 3306,4444,4567,4568 - - [mysql] - tcp_ports = 3306 - - [postgresql] - tcp_ports = 5432 - ``` - **Description:** - - In the `[Default]` section, set `bind_host` to the IP address of the node where Trove is deployed. - - `nova_compute_url` and `cinder_url` are endpoints created by Nova and Cinder in Keystone. - - `nova_proxy_XXX` is a user who can access the Nova service. In the preceding example, the `admin` user is used. - - `transport_url` is the `RabbitMQ` connection information, and `RABBIT_PASS` is the RabbitMQ password. - - In the `[database]` section, `connection` is the information of the database created for Trove in MySQL. 
    - Replace `TROVE_PASS` in the Trove user information with the password of the **trove** user.

    3. Configure `trove-guestagent.conf`:

        ```shell script
        vim /etc/trove/trove-guestagent.conf

        [DEFAULT]
        log_file = trove-guestagent.log
        log_dir = /var/log/trove/
        ignore_users = os_admin
        control_exchange = trove
        transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
        rpc_backend = rabbit
        command_process_timeout = 60
        use_syslog = False
        debug = True

        [service_credentials]
        auth_url = http://controller:5000/v3/
        region_name = RegionOne
        project_name = service
        password = TROVE_PASS
        project_domain_name = Default
        user_domain_name = Default
        username = trove

        [mysql]
        docker_image = your-registry/your-repo/mysql
        backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
        ```

        **Description:** `guestagent` is an independent component in Trove and needs to be pre-built into the virtual machine image created by Trove using Nova.
        After the database instance is created, the guestagent process is started to report heartbeat messages to Trove through the message queue (RabbitMQ).
        Therefore, you need to configure the user name and password of RabbitMQ.
        **Since Victoria, Trove uses a unified image to run different types of databases. The database service runs in the Docker container of the guest VM.**

        - `transport_url` is the `RabbitMQ` connection information, and `RABBIT_PASS` is the RabbitMQ password.
        - Replace `TROVE_PASS` in the Trove user information with the password of the **trove** user.

    4. Generate the `Trove` database tables:

        ```shell script
        su -s /bin/sh -c "trove-manage db_sync" trove
        ```

4. Complete the installation and configuration.

    1. Configure the **Trove** service to automatically start:

        ```shell script
        systemctl enable openstack-trove-api.service \
            openstack-trove-taskmanager.service \
            openstack-trove-conductor.service
        ```

    2. 
Start the service: - ```shell script - systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` -### Installing Swift - -Swift provides a scalable and highly available distributed object storage service, which is suitable for storing unstructured data in large scale. - -1. Create the service credentials and API endpoints. - - Create the service credential: - - ``` shell - #Create the swift user. - openstack user create --domain default --password-prompt swift - #Add the admin role for the swift user. - openstack role add --project service --user swift admin - #Create the swift service entity. - openstack service create --name swift --description "OpenStack Object Storage" object-store - ``` - - Create the Swift API endpoints. - - ```shell - openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 - ``` - - -2. Install the software packages: - - ```shell - yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL) - ``` - -3. Configure the proxy-server. - - The Swift RPM package contains a **proxy-server.conf** file which is basically ready to use. You only need to change the values of **ip** and swift **password** in the file. - - ***Note*** - - **Replace password with the password you set for the swift user in the identity service.** - -4. Install and configure the storage node. 
(STG) - - Install the supported program packages: - ```shell - yum install xfsprogs rsync - ``` - - Format the /dev/vdb and /dev/vdc devices into XFS: - - ```shell - mkfs.xfs /dev/vdb - mkfs.xfs /dev/vdc - ``` - - Create the mount point directory structure: - - ```shell - mkdir -p /srv/node/vdb - mkdir -p /srv/node/vdc - ``` - - Find the UUID of the new partition: - - ```shell - blkid - ``` - - Add the following to the **/etc/fstab** file: - - ```shell - UUID="" /srv/node/vdb xfs noatime 0 2 - UUID="" /srv/node/vdc xfs noatime 0 2 - ``` - - Mount the devices: - - ```shell - mount /srv/node/vdb - mount /srv/node/vdc - ``` - ***Note*** - - **If the disaster recovery function is not required, you only need to create one device and skip the following rsync configuration.** - - (Optional) Create or edit the **/etc/rsyncd.conf** file to include the following content: - - ```shell - [DEFAULT] - uid = swift - gid = swift - log file = /var/log/rsyncd.log - pid file = /var/run/rsyncd.pid - address = MANAGEMENT_INTERFACE_IP_ADDRESS - - [account] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/account.lock - - [container] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/container.lock - - [object] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/object.lock - ``` - **Replace `MANAGEMENT_INTERFACE_IP_ADDRESS` with the management network IP address of the storage node.** - - Start the rsyncd service and configure it to start upon system startup. - - ```shell - systemctl enable rsyncd.service - systemctl start rsyncd.service - ``` - -5. Install and configure the components on storage nodes. 
(STG)

    Install the software packages:

    ```shell
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    ```

    Edit **account-server.conf**, **container-server.conf**, and **object-server.conf** in the **/etc/swift** directory and replace **bind_ip** with the management network IP address of the storage node.

    Ensure the proper ownership of the mount point directory structure:

    ```shell
    chown -R swift:swift /srv/node
    ```

    Create the recon directory and ensure that it has the correct ownership:

    ```shell
    mkdir -p /var/cache/swift
    chown -R root:swift /var/cache/swift
    chmod -R 775 /var/cache/swift
    ```

6. Create the account ring. (CTL)

    Switch to the `/etc/swift` directory:

    ```shell
    cd /etc/swift
    ```

    Create the basic `account.builder` file:

    ```shell
    swift-ring-builder account.builder create 10 1 1
    ```

    Add each storage node to the ring:

    ```shell
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
    ```

    **Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the management network IP address of the storage node. Replace `DEVICE_NAME` with the name of the storage device on the same storage node.**

    ***Note***
    **Repeat this command for each storage device on each storage node.**

    Verify the ring contents:

    ```shell
    swift-ring-builder account.builder
    ```

    Rebalance the ring:

    ```shell
    swift-ring-builder account.builder rebalance
    ```

7. Create the container ring. 
(CTL)

    Switch to the `/etc/swift` directory:

    Create the basic `container.builder` file:

    ```shell
    swift-ring-builder container.builder create 10 1 1
    ```

    Add each storage node to the ring:

    ```shell
    swift-ring-builder container.builder \
        add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
        --device DEVICE_NAME --weight 100
    ```

    **Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the management network IP address of the storage node. Replace `DEVICE_NAME` with the name of the storage device on the same storage node.**

    ***Note***
    **Repeat this command for every storage device on every storage node.**

    Verify the ring contents:

    ```shell
    swift-ring-builder container.builder
    ```

    Rebalance the ring:

    ```shell
    swift-ring-builder container.builder rebalance
    ```

8. Create the object ring. (CTL)

    Switch to the `/etc/swift` directory:

    Create the basic `object.builder` file:

    ```shell
    swift-ring-builder object.builder create 10 1 1
    ```

    Add each storage node to the ring:

    ```shell
    swift-ring-builder object.builder \
        add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
        --device DEVICE_NAME --weight 100
    ```

    **Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the management network IP address of the storage node. Replace `DEVICE_NAME` with the name of the storage device on the same storage node.**

    ***Note***
    **Repeat this command for every storage device on every storage node.**

    Verify the ring contents:

    ```shell
    swift-ring-builder object.builder
    ```

    Rebalance the ring:

    ```shell
    swift-ring-builder object.builder rebalance
    ```

    Distribute ring configuration files:

    Copy `account.ring.gz`, `container.ring.gz`, and `object.ring.gz` to the `/etc/swift` directory on each storage node and any additional nodes running the proxy service.

9. 
Complete the installation. - - Edit the `/etc/swift/swift.conf` file: - - ``` shell - [swift-hash] - swift_hash_path_suffix = test-hash - swift_hash_path_prefix = test-hash - - [storage-policy:0] - name = Policy-0 - default = yes - ``` - - **Replace test-hash with a unique value.** - - Copy the `swift.conf` file to the `/etc/swift` directory on each storage node and any additional nodes running the proxy service. - - Ensure correct ownership of the configuration directory on all nodes: - - ```shell - chown -R root:swift /etc/swift - ``` - - On the controller node and any additional nodes running the proxy service, start the object storage proxy service and its dependencies, and configure them to start upon system startup. - - ```shell - systemctl enable openstack-swift-proxy.service memcached.service - systemctl start openstack-swift-proxy.service memcached.service - ``` - - On the storage node, start the object storage services and configure them to start upon system startup. - - ```shell - systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - - systemctl start openstack-swift-object.service openstack-swift-object-auditor.service 
openstack-swift-object-replicator.service openstack-swift-object-updater.service - ``` - -### Installing Cyborg - -Cyborg provides acceleration device support for OpenStack, for example, GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODPs, DPDKs, and SPDKs. - -1. Initialize the databases. - -``` -CREATE DATABASE cyborg; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; -``` - -2. Create Keystone resource objects. - -``` -$ openstack user create --domain default --password-prompt cyborg -$ openstack role add --project service --user cyborg admin -$ openstack service create --name cyborg --description "Acceleration Service" accelerator - -$ openstack endpoint create --region RegionOne \ - accelerator public http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator internal http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator admin http://:6666/v1 -``` - -3. Install Cyborg. - -``` -yum install openstack-cyborg -``` - -4. Configure Cyborg. - -Modify **/etc/cyborg/cyborg.conf**. 
- -``` -[DEFAULT] -transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ -use_syslog = False -state_path = /var/lib/cyborg -debug = True - -[database] -connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg - -[service_catalog] -project_domain_id = default -user_domain_id = default -project_name = service -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[placement] -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = placement -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[keystone_authtoken] -memcached_servers = localhost:11211 -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password -``` - -Set the user names, passwords, and IP addresses as required. - -5. Synchronize the database table. - -``` -cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade -``` - -6. Start the Cyborg services. - -``` -systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -``` - -### Installing Aodh - -1. Create the database. - -``` -CREATE DATABASE aodh; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; -``` - -2. Create Keystone resource objects. 
- -``` -openstack user create --domain default --password-prompt aodh - -openstack role add --project service --user aodh admin - -openstack service create --name aodh --description "Telemetry" alarming - -openstack endpoint create --region RegionOne alarming public http://controller:8042 - -openstack endpoint create --region RegionOne alarming internal http://controller:8042 - -openstack endpoint create --region RegionOne alarming admin http://controller:8042 -``` - -3. Install Aodh. - -``` -yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient -``` - -4. Modify the configuration file. - -``` -[database] -connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh - -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -auth_strategy = keystone - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS -interface = internalURL -region_name = RegionOne -``` - -5. Initialize the database. - -``` -aodh-dbsync -``` - -6. Start the Aodh services. - -``` -systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service - -systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service -``` - -### Installing Gnocchi - -1. Create the database. 
- -``` -CREATE DATABASE gnocchi; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; -``` - -2. Create Keystone resource objects. - -``` -openstack user create --domain default --password-prompt gnocchi - -openstack role add --project service --user gnocchi admin - -openstack service create --name gnocchi --description "Metric Service" metric - -openstack endpoint create --region RegionOne metric public http://controller:8041 - -openstack endpoint create --region RegionOne metric internal http://controller:8041 - -openstack endpoint create --region RegionOne metric admin http://controller:8041 -``` - -3. Install Gnocchi. - -``` -yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient -``` - -4. Modify the **/etc/gnocchi/gnocchi.conf** configuration file. - -``` -[api] -auth_mode = keystone -port = 8041 -uwsgi_mode = http-socket - -[keystone_authtoken] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_name = Default -user_domain_name = Default -project_name = service -username = gnocchi -password = GNOCCHI_PASS -interface = internalURL -region_name = RegionOne - -[indexer] -url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi - -[storage] -# coordination_url is not required but specifying one will improve -# performance with better workload division across workers. -coordination_url = redis://controller:6379 -file_basepath = /var/lib/gnocchi -driver = file -``` - -5. Initialize the database. - -``` -gnocchi-upgrade -``` - -6. Start the Gnocchi services. - -``` -systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service - -systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service -``` - -### Installing Ceilometer - -1. Create Keystone resource objects. 
- -``` -openstack user create --domain default --password-prompt ceilometer - -openstack role add --project service --user ceilometer admin - -openstack service create --name ceilometer --description "Telemetry" metering -``` - -2. Install Ceilometer. - -``` -yum install openstack-ceilometer-notification openstack-ceilometer-central -``` - -3. Modify the **/etc/ceilometer/pipeline.yaml** configuration file. - -``` -publishers: - # set address of Gnocchi - # + filter out Gnocchi-related activity meters (Swift driver) - # + set default archive policy - - gnocchi://?filter_project=service&archive_policy=low -``` - -4. Modify the **/etc/ceilometer/ceilometer.conf** configuration file. - -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = ceilometer -password = CEILOMETER_PASS -interface = internalURL -region_name = RegionOne -``` - -5. Initialize the database. - -``` -ceilometer-upgrade -``` - -6. Start the Ceilometer services. - -``` -systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service - -systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service -``` - -### Installing Heat - -1. Create the **heat** database and grant proper privileges to it. Replace **HEAT_DBPASS** with a proper password. - -``` -CREATE DATABASE heat; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; -``` - -2. Create a service credential. Create the **heat** user and add the **admin** role to it. - -``` -openstack user create --domain default --password-prompt heat -openstack role add --project service --user heat admin -``` - -3. Create the **heat** and **heat-cfn** services and their API endpoints. 
- -``` -openstack service create --name heat --description "Orchestration" orchestration -openstack service create --name heat-cfn --description "Orchestration" cloudformation -openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 -``` - -4. Create additional OpenStack management information, including the **heat** domain and its administrator **heat_domain_admin**, the **heat_stack_owner** role, and the **heat_stack_user** role. - -``` -openstack user create --domain heat --password-prompt heat_domain_admin -openstack role add --domain heat --user-domain heat --user heat_domain_admin admin -openstack role create heat_stack_owner -openstack role create heat_stack_user -``` - -5. Install the software packages. - -``` -yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine -``` - -6. Modify the configuration file **/etc/heat/heat.conf**. 
- -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -heat_metadata_server_url = http://controller:8000 -heat_waitcondition_server_url = http://controller:8000/v1/waitcondition -stack_domain_admin = heat_domain_admin -stack_domain_admin_password = HEAT_DOMAIN_PASS -stack_user_domain_name = heat - -[database] -connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = default -user_domain_name = default -project_name = service -username = heat -password = HEAT_PASS - -[trustee] -auth_type = password -auth_url = http://controller:5000 -username = heat -password = HEAT_PASS -user_domain_name = default - -[clients_keystone] -auth_uri = http://controller:5000 -``` - -7. Initialize the **heat** database table. - -``` -su -s /bin/sh -c "heat-manage db_sync" heat -``` - -8. Start the services. - -``` -systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -``` - -## OpenStack Quick Installation - -The OpenStack SIG provides the Ansible script for one-click deployment of OpenStack in All in One or Distributed modes. Users can use the script to quickly deploy an OpenStack environment based on openEuler RPM packages. The following uses the All in One mode installation as an example. - -1. Install the OpenStack SIG Tool. - - ```shell - pip install openstack-sig-tool - ``` - -2. Configure the OpenStack Yum source. - - ```shell - yum install openstack-release-wallaby - ``` - - **Note**: Enable the EPOL repository for the Yum source if it is not enabled already. 
- - ```shell - cat >> /etc/yum.repos.d/openEuler.repo << EOF - - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler - EOF - ``` - -3. Update the Ansible configurations. - - Open the **/usr/local/etc/inventory/all_in_one.yaml** file and modify the configuration based on the environment and requirements. Modify the file as follows: - - ```yaml - all: - hosts: - controller: - ansible_host: - ansible_ssh_private_key_file: - ansible_ssh_user: root - vars: - mysql_root_password: root - mysql_project_password: root - rabbitmq_password: root - project_identity_password: root - enabled_service: - - keystone - - neutron - - cinder - - placement - - nova - - glance - - horizon - - aodh - - ceilometer - - cyborg - - gnocchi - - kolla - - heat - - swift - - trove - - tempest - neutron_provider_interface_name: br-ex - default_ext_subnet_range: 10.100.100.0/24 - default_ext_subnet_gateway: 10.100.100.1 - neutron_dataplane_interface_name: eth1 - cinder_block_device: vdb - swift_storage_devices: - - vdc - swift_hash_path_suffix: ash - swift_hash_path_prefix: has - children: - compute: - hosts: controller - storage: - hosts: controller - network: - hosts: controller - vars: - test-key: test-value - dashboard: - hosts: controller - vars: - allowed_host: '*' - kolla: - hosts: controller - vars: - # We add openEuler OS support for kolla in OpenStack Queens/Rocky release - # Set this var to true if you want to use it in Q/R - openeuler_plugin: false - ``` - - Key Configurations - - | Item | Description| - |---|---| - | ansible_host | IP address of the all-in-one node.| - | ansible_ssh_private_key_file | Key used by the Ansible script for logging in to the all-in-one node.| - | ansible_ssh_user | User used by the Ansible script for logging in to the all-in-one node.| - | enabled_service | List of services to be installed. 
You can delete services as required.| - | neutron_provider_interface_name | Neutron L3 bridge name. | - | default_ext_subnet_range | Neutron external network IP address range. | - | default_ext_subnet_gateway | Neutron external network gateway. | - | neutron_dataplane_interface_name | NIC used by Neutron. You are advised to use a new NIC to avoid conflicts with existing NICs, which could disconnect the all-in-one node. | - | cinder_block_device | Name of the block device used by Cinder.| - | swift_storage_devices | Names of the block devices used by Swift. | - -4. Run the installation command. - - ```shell - oos env setup all_in_one - ``` - - After the command is executed, the OpenStack environment of the All in One mode is successfully deployed. - - The environment variable file **.admin-openrc** is stored in the home directory of the current user. - -5. Initialize the Tempest environment. - - If you want to perform the Tempest test in the environment, run the `oos env init all_in_one` command to create the OpenStack resources required by Tempest. - - After the command is executed successfully, a **mytest** directory is generated in the home directory of the user. You can run the `tempest run` command in the directory. 
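The inventory's network values are easy to mistype. As an optional sanity check (not part of the SIG tooling; a sketch using only Python's standard `ipaddress` module), the snippet below verifies that `default_ext_subnet_gateway` actually falls inside `default_ext_subnet_range`, using the sample values from **all_in_one.yaml**:

```python
import ipaddress

def gateway_in_subnet(subnet: str, gateway: str) -> bool:
    """Return True if the gateway address lies inside the subnet."""
    return ipaddress.ip_address(gateway) in ipaddress.ip_network(subnet, strict=True)

# Sample values from all_in_one.yaml: gateway 10.100.100.1 belongs to 10.100.100.0/24.
print(gateway_in_subnet("10.100.100.0/24", "10.100.100.1"))   # True
print(gateway_in_subnet("10.100.100.0/24", "10.100.101.1"))   # False
```

Running such a check before `oos env setup all_in_one` catches a mistyped subnet or gateway early, instead of failing partway through the Ansible run.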
\ No newline at end of file diff --git a/docs/en/docs/thirdparty_migration/figures/HA-add-resource.png b/docs/en/docs/thirdparty_migration/figures/HA-add-resource.png deleted file mode 100644 index ac24895a1247828d248132f6c789ad8ef51a57e4..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-add-resource.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-apache-show.png b/docs/en/docs/thirdparty_migration/figures/HA-apache-show.png deleted file mode 100644 index c216500910f75f2de1108f6b618c5c08f4df8bae..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-apache-show.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-apache-suc.png b/docs/en/docs/thirdparty_migration/figures/HA-apache-suc.png deleted file mode 100644 index 23a7aaa702e3e68190ff7e01a5a673aee2c92409..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-apache-suc.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-api.png b/docs/en/docs/thirdparty_migration/figures/HA-api.png deleted file mode 100644 index f825fe005705d30809d12df97958cff0e5a80135..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-api.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-clone-suc.png b/docs/en/docs/thirdparty_migration/figures/HA-clone-suc.png deleted file mode 100644 index 4b6099ccc88d4f6f907a0c4563e729ab2a4dece1..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-clone-suc.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-clone.png b/docs/en/docs/thirdparty_migration/figures/HA-clone.png deleted file mode 100644 index 1b09ab73849494f4ffd759fa612ae3c241bd9c1d..0000000000000000000000000000000000000000 Binary files 
a/docs/en/docs/thirdparty_migration/figures/HA-clone.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-corosync.png b/docs/en/docs/thirdparty_migration/figures/HA-corosync.png deleted file mode 100644 index c4d93242e65c503b6e1b6a457e2517f647984a66..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-corosync.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-firstchoice-cmd.png b/docs/en/docs/thirdparty_migration/figures/HA-firstchoice-cmd.png deleted file mode 100644 index a265bab07f1d8e46d9d965975be180a8de6c9eb2..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-firstchoice-cmd.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-firstchoice.png b/docs/en/docs/thirdparty_migration/figures/HA-firstchoice.png deleted file mode 100644 index bd982ddcea55c629c0257fca86051a9ffa77e7b4..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-firstchoice.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-group-new-suc.png b/docs/en/docs/thirdparty_migration/figures/HA-group-new-suc.png deleted file mode 100644 index 437fd01ee83a9a1f65c12838fe56eea8435f6759..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-group-new-suc.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-group-new-suc2.png b/docs/en/docs/thirdparty_migration/figures/HA-group-new-suc2.png deleted file mode 100644 index 4fb933bd761f9808de95a324a50226ff041ebd4f..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-group-new-suc2.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-group-new.png b/docs/en/docs/thirdparty_migration/figures/HA-group-new.png deleted file mode 100644 index 
9c914d0cc2e14f3220fc4346175961f129efb37b..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-group-new.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-group-suc.png b/docs/en/docs/thirdparty_migration/figures/HA-group-suc.png deleted file mode 100644 index 2338580343833ebab08627be3a2efbcdb48aef9e..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-group-suc.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-group.png b/docs/en/docs/thirdparty_migration/figures/HA-group.png deleted file mode 100644 index 6897817665dee90c0f8c47c6a3cb4bb09db52d78..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-group.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-home-page.png b/docs/en/docs/thirdparty_migration/figures/HA-home-page.png deleted file mode 100644 index c9a7a82dc412250d4c0984b3876c6f93c6aca789..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-home-page.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-login.png b/docs/en/docs/thirdparty_migration/figures/HA-login.png deleted file mode 100644 index 65d0ae11ec810da7574ec72bebf6e1b020c94a0d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-login.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-mariadb-suc.png b/docs/en/docs/thirdparty_migration/figures/HA-mariadb-suc.png deleted file mode 100644 index 6f6756c945121715edc623bd9a848bc48ffeb4ca..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-mariadb-suc.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-mariadb.png b/docs/en/docs/thirdparty_migration/figures/HA-mariadb.png 
deleted file mode 100644 index d29587c8609b9d6aefeb07170901361b5ef8402d..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-mariadb.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-nfs-suc.png b/docs/en/docs/thirdparty_migration/figures/HA-nfs-suc.png deleted file mode 100644 index c0ea6af79e91649f1ad7d97ab6c2a0069a4f4fb8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-nfs-suc.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-nfs.png b/docs/en/docs/thirdparty_migration/figures/HA-nfs.png deleted file mode 100644 index f6917938eec2e0431a9891c067475dd0b21c1bd9..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-nfs.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-pacemaker.png b/docs/en/docs/thirdparty_migration/figures/HA-pacemaker.png deleted file mode 100644 index 7681f963f67d2b803fef6fb2c3247384136201f8..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-pacemaker.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-pcs-status.png b/docs/en/docs/thirdparty_migration/figures/HA-pcs-status.png deleted file mode 100644 index fb150fba9f6258658702b35caacf98076d1fd109..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-pcs-status.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-pcs.png b/docs/en/docs/thirdparty_migration/figures/HA-pcs.png deleted file mode 100644 index 283670d7c3d0961ee1cb41345c2b2a013d7143b0..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-pcs.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-refresh.png 
b/docs/en/docs/thirdparty_migration/figures/HA-refresh.png deleted file mode 100644 index c2678c0c2945acbabfbeae0d5de8924a216bbf31..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-refresh.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-vip-suc.png b/docs/en/docs/thirdparty_migration/figures/HA-vip-suc.png deleted file mode 100644 index 313ce56e14f931c78dad4349ed57ab3fd7907f50..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-vip-suc.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/figures/HA-vip.png b/docs/en/docs/thirdparty_migration/figures/HA-vip.png deleted file mode 100644 index d8b417df2e64527d3b29d0289756dfbb01bf66ec..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/thirdparty_migration/figures/HA-vip.png and /dev/null differ diff --git a/docs/en/docs/thirdparty_migration/ha.md b/docs/en/docs/thirdparty_migration/ha.md deleted file mode 100644 index fec66a0ad93916672cecdf647322c18d1e1a4a35..0000000000000000000000000000000000000000 --- a/docs/en/docs/thirdparty_migration/ha.md +++ /dev/null @@ -1,3 +0,0 @@ -# HA User Guide - -This document describes how to install and use HA. diff --git a/docs/en/docs/thirdparty_migration/installing-and-deploying-HA.md b/docs/en/docs/thirdparty_migration/installing-and-deploying-HA.md deleted file mode 100644 index a297aeffdeca7475a19c1a660b0c261a919375d0..0000000000000000000000000000000000000000 --- a/docs/en/docs/thirdparty_migration/installing-and-deploying-HA.md +++ /dev/null @@ -1,213 +0,0 @@ -# Installing and Deploying HA - -This chapter describes how to install and deploy an HA cluster. 
- - -- [Installing and Deploying HA](#installing-and-deploying-ha) - - [Installation and Deployment](#installation-and-deployment) - - [Modifying the Host Name and the /etc/hosts File](#modifying-the-host-name-and-the-etchosts-file) - - [Configuring the Yum Repository](#configuring-the-yum-repository) - - [Installing the HA Software Package Components](#installing-the-ha-software-package-components) - - [Setting the hacluster User Password](#setting-the-hacluster-user-password) - - [Modifying the /etc/corosync/corosync.conf File](#modifying-the-etccorosynccorosyncconf-file) - - [Managing the Services](#managing-the-services) - - [Disabling the firewall](#disabling-the-firewall) - - [Managing the pcs service](#managing-the-pcs-service) - - [Managing the Pacemaker service](#managing-the-pacemaker-service) - - [Managing the Corosync service](#managing-the-corosync-service) - - [Performing Node Authentication](#performing-node-authentication) - - [Accessing the Front-End Management Platform](#accessing-the-front-end-management-platform) - -## Installation and Deployment - -- Prepare the environment: At least two physical machines or VMs with openEuler 20.03 LTS SP2 installed are required. (This section uses two physical machines or VMs as an example.) For details about how to install openEuler 20.03 LTS SP2, see the [_openEuler Installation Guide_](../Installation/Installation.md). - -### Modifying the Host Name and the /etc/hosts File - -- **Note: You need to perform the following operations on both hosts. The following takes one host as an example.** - -Before using the HA software, ensure that all host names have been changed and written into the /etc/hosts file. 
- -- Run the following command to change the host name: - -```shell -hostnamectl set-hostname ha1 -``` - -- Edit the `/etc/hosts` file and write the following fields: - -```text -172.30.30.65 ha1 -172.30.30.66 ha2 -``` - -### Configuring the Yum Repository - -After the system is successfully installed, the Yum source is configured by default. The file location is stored in the `/etc/yum.repos.d/openEuler.repo` file. The HA software package uses the following sources: - -```text -[OS] -name=OS -baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/ -enabled=1 -gpgcheck=1 -gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler - -[everything] -name=everything -baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/everything/$basearch/ -enabled=1 -gpgcheck=1 -gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/everything/$basearch/RPM-GPG-KEY-openEuler - -[EPOL] -name=EPOL -baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/EPOL/$basearch/ -enabled=1 -gpgcheck=1 -gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler -``` - -### Installing the HA Software Package Components - -```shell -yum install -y corosync pacemaker pcs fence-agents fence-virt corosync-qdevice sbd drbd drbd-utils -``` - -### Setting the hacluster User Password - -```shell -passwd hacluster -``` - -### Modifying the /etc/corosync/corosync.conf File - -```text -totem { - version: 2 - cluster_name: hacluster - crypto_cipher: none - crypto_hash: none -} -logging { - fileline: off - to_stderr: yes - to_logfile: yes - logfile: /var/log/cluster/corosync.log - to_syslog: yes - debug: on - logger_subsys { - subsys: QUORUM - debug: on - } -} -quorum { - provider: corosync_votequorum - expected_votes: 2 - two_node: 1 - } -nodelist { - node { - name: ha1 - nodeid: 1 - ring0_addr: 172.30.30.65 - } - node { - name: ha2 - nodeid: 2 - ring0_addr: 172.30.30.66 - } - } -``` - -### Managing the Services - -#### 
Disabling the firewall - -```shell -systemctl stop firewalld -``` - -Set SELINUX to disabled in the `/etc/selinux/config` file. - -```text -SELINUX=disabled -``` - -#### Managing the pcs service - -- Run the following command to start the pcs service: - -```shell -systemctl start pcsd -``` - -- Run the following command to query the pcs service status: - -```shell -systemctl status pcsd -``` - -The service is started successfully if the following information is displayed: - -![](./figures/HA-pcs.png) - -#### Managing the Pacemaker service - -- Run the following command to start the Pacemaker service: - -```shell -systemctl start pacemaker -``` - -- Run the following command to query the Pacemaker service status: - -```shell -systemctl status pacemaker -``` - -The service is started successfully if the following information is displayed: - -![](./figures/HA-pacemaker.png) - -#### Managing the Corosync service - -- Run the following command to start the Corosync service: - -```shell -systemctl start corosync -``` - -- Run the following command to query the Corosync service status: - -```shell -systemctl status corosync -``` - -The service is started successfully if the following information is displayed: - -![](./figures/HA-corosync.png) - -### Performing Node Authentication - -- **Note: Run this command on only one node.** - -```shell -pcs host auth ha1 ha2 -``` - -### Accessing the Front-End Management Platform - -After the preceding services are started, open the browser (Chrome or Firefox is recommended) and enter **https://localhost:2224** in the address bar. - -- This page is the native management platform. - -![](./figures/HA-login.png) - -For details about how to install the management platform newly developed by the community, see . - -- The following is the management platform newly developed by the community. - -![](./figures/HA-api.png) - -- The next chapter describes how to quickly use an HA cluster and add an instance. 
For details, see the [HA Usage Example](./HA%20Usage%20Example.md). diff --git a/docs/en/docs/thirdparty_migration/thidrparty.md b/docs/en/docs/thirdparty_migration/thidrparty.md deleted file mode 100644 index 66f59126694b37d126c81238ab201744905d6b21..0000000000000000000000000000000000000000 --- a/docs/en/docs/thirdparty_migration/thidrparty.md +++ /dev/null @@ -1,3 +0,0 @@ -# Third-Party Software Porting Guide - -This document is intended for community developers, open source enthusiasts, and partners who use the openEuler OS and intend to learn more about third-party software. Basic knowledge about the Linux OS is required for reading this document. \ No newline at end of file diff --git a/docs/en/docs/userguide/images/Maintainer.jpg b/docs/en/docs/userguide/images/Maintainer.jpg deleted file mode 100644 index 45912da4e7915715df0f598b9429f63bc8695667..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/userguide/images/Maintainer.jpg and /dev/null differ diff --git a/docs/en/docs/userguide/images/PatchTracking.jpg b/docs/en/docs/userguide/images/PatchTracking.jpg deleted file mode 100644 index 3bac7d2f1b4a228da8d273cdaef55f2d33792fab..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/userguide/images/PatchTracking.jpg and /dev/null differ diff --git a/docs/en/docs/userguide/images/packagemanagement.png b/docs/en/docs/userguide/images/packagemanagement.png deleted file mode 100644 index 20808309c820d9d732dd4f25d6b882e5d802afdb..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/userguide/images/packagemanagement.png and /dev/null differ diff --git a/docs/en/docs/userguide/images/panel.png b/docs/en/docs/userguide/images/panel.png deleted file mode 100644 index 1e532446d7dd6c5208475bcfeae3dc717c6fe051..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/userguide/images/panel.png and /dev/null differ diff --git a/docs/en/docs/userguide/images/pkgship_outline.png 
b/docs/en/docs/userguide/images/pkgship_outline.png deleted file mode 100644 index 6fe1247c22c6b12a83aa01a5812c444f1667b952..0000000000000000000000000000000000000000 Binary files a/docs/en/docs/userguide/images/pkgship_outline.png and /dev/null differ diff --git a/docs/en/docs/userguide/overview.md deleted file mode 100644 index e3b656290f017e8688b1f831d00dd9ebeb86c576..0000000000000000000000000000000000000000 --- a/docs/en/docs/userguide/overview.md +++ /dev/null @@ -1,3 +0,0 @@ -# Toolset User Guide - -This document describes the toolkit used for the openEuler release, including the overview, installation, and usage of tools. diff --git a/docs/en/docs/userguide/pkgship.md deleted file mode 100644 index 584d5de0a2b2a46d68fbc3a3a2dea9ab29899161..0000000000000000000000000000000000000000 --- a/docs/en/docs/userguide/pkgship.md +++ /dev/null @@ -1,433 +0,0 @@ -# pkgship - - -- [pkgship](#pkgship) - - [Introduction](#introduction) - - [Architecture](#architecture) - - [Using the Software Online](#using-the-software-online) - - [Downloading the Software](#downloading-the-software) - - [Operating Environment](#operating-environment) - - [Installing the Tool](#installing-the-tool) - - [Configuring Parameters](#configuring-parameters) - - [Starting and Stopping the Service](#starting-and-stopping-the-service) - - [Using the Tool](#using-the-tool) - - [Viewing and Dumping Logs](#viewing-and-dumping-logs) - - [pkgship-panel](#pkgship-panel) - - - -## Introduction - -pkgship is a query tool that manages OS software package dependencies and provides a complete dependency graph. It provides functions such as software package dependency query and lifecycle management. - -1. Software package basic information query: Allows community personnel to quickly obtain information about the name, version, and description of a software package. -2. 
Software package dependency query: Allows community personnel to understand the impact on software when software packages are introduced, updated, or deleted. - -## Architecture - -The system uses the Flask-RESTful development mode. The following figure shows the architecture: - -![avatar](./images/packagemanagement.png) - -## Using the Software Online - -pkgship provides a [public online service](https://pkgmanage.openeuler.org/Packagemanagement). You can directly use pkgship online if you do not need to customize your query. - -To use a custom data source, install, configure, and use pkgship by referring to the following sections. - -## Downloading the Software - -- The repo source is officially released at: -- You can obtain the source code at: -- You can obtain the RPM package at: - -## Operating Environment - -- Hardware configuration: - -| Item| Recommended Specification| -|----------|----------| -| CPU| 8 cores| -| Memory| 32 GB (minimum: 4 GB)| -| Drive space| 20 GB| -| Network bandwidth| 300 Mbit/s| -| I/O| 375 MB/s| - -- Software configuration: - -| Name| Specifications| -|----------|----------| -| Elasticsearch| 7.10.1. Both single-node and cluster deployments are supported.| -| Redis| 5.0.4 or later is recommended. You are advised to set its memory size to 3/4 of the system memory.| -| Python| 3.8 or later.| - -## Installing the Tool - ->Note: The software can run in Docker. In openEuler 21.09, due to environment restrictions, use the **--privileged** parameter when creating a Docker container. Otherwise, the software fails to start. This document will be updated after the adaptation. - -**1\. Installing the pkgship** - -You can use either of the following methods to install the pkgship: - -- Method 1: Mount the repo source using DNF. -Use DNF to mount the repo source where the pkgship is located (for details, see the [Application Development Guide](../ApplicationDev/application-development.md)). 
Then run the following command to download and install the pkgship and its dependencies: - - ```bash - dnf install pkgship - ``` - -- Method 2: Install the RPM package. Download the RPM package of the pkgship and run the following command to install the pkgship (x.x-x indicates the version number and needs to be replaced with the actual one): - - ```bash - rpm -ivh pkgship-x.x-x.oe1.noarch.rpm - ``` - - Or - - ```bash - dnf install pkgship-x.x-x.oe1.noarch.rpm - ``` - -**2\. Installing Elasticsearch and Redis** - -If Elasticsearch or Redis is not installed in the environment, you can execute the automatic installation script after the pkgship is installed. - -The default script path is as follows: - -```bash -/etc/pkgship/auto_install_pkgship_requires.sh -``` - -Run the following command: - -```bash -/bin/bash auto_install_pkgship_requires.sh elasticsearch -``` - -Or - -```bash -/bin/bash auto_install_pkgship_requires.sh redis -``` - -**3\. Adding a User After the Installation** - -After the pkgship software is installed, the system automatically creates a user named **pkgshipuser** and a user group named **pkgshipuser**. They will be used when the service is started and running. - -## Configuring Parameters - -1\. Configure the parameters in the configuration file. The default configuration file of the system is stored in **/etc/pkgship/package.ini**. Modify the configuration file as required. - -```bash -vim /etc/pkgship/package.ini -``` - -```ini -[SYSTEM-System Configuration] -; Path for storing the .yaml file imported during database initialization. The .yaml file records the location of the imported .sqlite file. -init_conf_path=/etc/pkgship/conf.yaml - -; Service query port -query_port=8090 - -; Service query IP address -query_ip_addr=127.0.0.1 - -; Address of the remote service. The command line can directly call the remote service to complete the data request. 
-remote_host=https://api.openeuler.org/pkgmanage - -; Directory for storing temporary files during initialization and download. The directory will not be occupied for a long time. It is recommended that the available space be at least 1 GB. -temporary_directory=/opt/pkgship/tmp/ - -[LOG-Logs] -; Service log storage path -log_path=/var/log/pkgship/ - -; Log level. The options are as follows: -; INFO DEBUG WARNING ERROR CRITICAL -log_level=INFO - -; Maximum size of a service log file. If the size of a service log file exceeds the value of this parameter, the file is automatically compressed and dumped. The default value is 30 MB. -max_bytes=31457280 - -; Maximum number of backup log files. The default value is 30. -backup_count=30 - -[UWSGI-Web Server Configuration] -; Operation log path -daemonize=/var/log/pkgship-operation/uwsgi.log -; Size of data transmitted between the front end and back end -buffer-size=65536 -; Network connection timeout interval -http-timeout=600 -; Service response time -harakiri=600 - -[REDIS-Cache Configuration] -; The address of the Redis cache server can be the released domain or IP address that can be accessed. -; The default link address is 127.0.0.1. -redis_host=127.0.0.1 - -; Port number of the Redis cache server. The default value is 6379. -redis_port=6379 - -; Maximum number of connections allowed by the Redis server at a time. -redis_max_connections=10 - -[DATABASE-Database] -; Database access address. The default value is the IP address of the local host. -database_host=127.0.0.1 - -; Database access port. The default value is 9200. -database_port=9200 -``` - -2\. Create a YAML configuration file to initialize the database. The **conf.yaml** file is stored in the **/etc/pkgship/** directory by default. The pkgship reads the name of the database to be created and the SQLite file to be imported based on this configuration. You can also configure the repo address of the SQLite file. 
An example of the **conf.yaml** file is as follows: - -```yaml -- dbname: oe20.03 # Database name -  src_db_file: /etc/pkgship/repo/openEuler-20.09/src # Local path of the source package -  bin_db_file: /etc/pkgship/repo/openEuler-20.09/bin # Local path of the binary package -  priority: 1 # Database priority - -- dbname: oe20.09 -  src_db_file: https://repo.openeuler.org/openEuler-20.09/source # Repo source of the source package -  bin_db_file: https://repo.openeuler.org/openEuler-20.09/everything/aarch64 # Repo source of the binary package -  priority: 2 -``` - -> To change the storage path, change the value of **init\_conf\_path** in the **package.ini** file. -> -> The SQLite file path cannot be configured directly. -> -> The value of **dbname** can contain only lowercase letters, digits, periods (.), hyphens (-), underscores (_), and plus signs (+), and must start and end with lowercase letters or digits. - -## Starting and Stopping the Service - -The pkgship can be started and stopped in two modes: systemctl mode and pkgshipd mode. In systemctl mode, the service is automatically restarted if it exits abnormally. You can run any of the following commands: - -```bash -systemctl start pkgship.service # Start the service. - -systemctl stop pkgship.service # Stop the service. - -systemctl restart pkgship.service # Restart the service. -``` - -```bash -pkgshipd start # Start the service. - -pkgshipd stop # Stop the service. -``` - -> Only one mode is supported in each start/stop period. The two modes cannot be used at the same time. -> -> The pkgshipd startup mode can be used only by the **pkgshipuser** user. -> -> If the **systemctl** command is not supported in the Docker environment, run the **pkgshipd** command to start or stop the service. - -## Using the Tool - -1. Initialize the database. 
- - > Application scenario: After the service is started, to query the package information and dependency in the corresponding database (for example, oe20.03 and oe20.09), you need to import the SQLite files (including the source code library and binary library) generated by **createrepo** into the service. Then insert the generated JSON body of the package information into the corresponding database of Elasticsearch. The database name is **dbname**-source or **dbname**-binary, generated based on the value of **dbname** in the **conf.yaml** file. - - ```bash - pkgship init [-filepath path] - ``` - - > Parameter description: - > **-filepath**: (Optional) Specifies the path of the initialization configuration file **conf.yaml**. You can use either a relative path or an absolute path. If no parameter is specified, the default configuration is used for initialization. - -2. Query a single package. - - You can query details about a source package or binary package (**packagename**) in the specified **database** table. - - > Application scenario: You can query the detailed information about the source package or binary package in a specified database. - - ```bash - pkgship pkginfo $packageName $database [-s] - ``` - - > Parameter description: - > **packagename**: (Mandatory) Specifies the name of the software package to be queried. - > **database**: (Mandatory) Specifies the database name. - > - > **-s**: (Optional) Specifies that the source package (`src`) is to be queried. If this parameter is not specified, the binary package (`bin`) information is queried by default. - -3. Query all packages. - - Query information about all packages in the database. - - > Application scenario: You can query information about all software packages in a specified database. - - ```bash - pkgship list $database [-s] - ``` - - > Parameter description: - > **database**: (Mandatory) Specifies the database name. 
- > **-s**: (Optional) Specifies that the source package `src` is to be queried by `-s`. If this parameter is not specified, the binary package information of `bin` is queried by default. - -4. Query the installation dependency. - - Query the installation dependency of the binary package (**binaryName**). - - > Application scenario: When you need to install the binary package A, you need to install B, the installation dependency of A, and C, the installation dependency of B, etc. A can be installed only after all the installation dependencies are installed in the system. Therefore, before installing the binary package A, you may need to query all installation dependencies of A. You can run the following command to query multiple databases based on the default priority of the platform, and to customize the database query priority. - - ```bash - pkgship installdep [$binaryName $binaryName1 $binaryName2...] [-dbs] [db1 db2...] [-level] $level - ``` - - > Parameter description: - > **binaryName**: (Mandatory) Specifies the name of the dependent binary package to be queried. Multiple packages can be transferred. - > - > **-dbs:** (Optional) Specifies the priority of the database to be queried. If this parameter is not specified, the database is queried based on the default priority. - > - > **-level**: (Optional) Specifies the dependency level to be queried. If this parameter is not specified, the default value **0** is used, indicating that all levels are queried. - -5. Query the compilation dependency. - - Query all compilation dependencies of the source code package (**sourceName**). - - > Application scenario: To compile the source code package A, you need to install B, the compilation dependency package of A. To install B, you need to obtain all installation dependency packages of B. Therefore, before compiling the source code package A, you need to query the compilation dependencies of A and all installation dependencies of these compilation dependencies. 
You can run the following command to query multiple databases based on the default priority of the platform, or to customize the database query priority. - - ```bash - pkgship builddep [$sourceName $sourceName1 $sourceName2..] [-dbs] [db1 db2 ..] [-level] $level - ``` - - > Parameter description: - > **sourceName**: (Mandatory) Specifies the name of the source package on which the compilation depends. Multiple packages can be queried. - > - > **-dbs:** (Optional) Specifies the priority of the database to be queried. If this parameter is not specified, the database is queried based on the default priority. - > - > **-level**: (Optional) Specifies the dependency level to be queried. If this parameter is not specified, the default value **0** is used, indicating that all levels are queried. - -6. Query the self-compilation and self-installation dependencies. - - Query the installation and compilation dependencies of a specified binary package (**binaryName**) or source package (**sourceName**). In the command, **\[pkgName]** indicates the name of the binary package or source package to be queried. When querying a binary package, you can query all installation dependencies of the binary package, and the compilation dependencies of the source package corresponding to the binary package, as well as all installation dependencies of these compilation dependencies. When querying a source package, you can query its compilation dependencies, and all installation dependencies of those compilation dependencies, as well as all installation dependencies of the binary packages generated by the source package. In addition, you can run this command together with the corresponding parameters to query the self-compilation dependency of a software package and the dependency of a subpackage. 
- - > Application scenario: If you want to introduce a new software package based on the existing version library, you need to introduce all compilation and installation dependencies of the software package. You can run this command to query these two dependency types at the same time to know the packages introduced by the new software package, and to query binary packages and source packages. - - ```bash - pkgship selfdepend [$pkgName1 $pkgName2 $pkgName3 ..] [-dbs] [db1 db2..] [-b] [-s] [-w] - ``` - - > Parameter description: - > - > **pkgName**: (Mandatory) Specifies the name of the software package on which the installation depends. Multiple software packages can be transferred. - > - > **-dbs:** (Optional) Specifies the priority of the database to be queried. If this parameter is not specified, the database is queried based on the default priority. - > - > **-b**: (Optional) Specifies that the package to be queried is a binary package. If this parameter is not specified, the source package is queried by default. - > - > **-s**: (Optional) If **-s** is specified, all installation dependencies, compilation dependencies (that is, compilation dependencies of the source package on which compilation depends), and installation dependencies of all compilation dependencies of the software package are queried. If **-s** is not added, all installation dependencies and layer-1 compilation dependencies of the software package, as well as all installation dependencies of layer-1 compilation dependencies, are queried. - > - > **-w**: (Optional) If **-w** is specified, when a binary package is introduced, the query result displays the source package corresponding to the binary package and all binary packages generated by the source package. If **-w** is not specified, only the corresponding source package is displayed in the query result when a binary package is imported. - -7. Query dependency. 
-Query the packages that depend on the software package (**pkgName**) in a database (**dbName**). - - > Application scenario: You can run this command to query the software packages that will be affected by the upgrade or deletion of the software source package A. This command displays the source packages (for example, B) that depend for compilation on the binary packages generated by source package A (or on the input binary package itself). It also displays the binary packages (for example, C1) that depend on A for installation. Then, it queries the source packages (for example, D) that depend for compilation on the binary packages generated by B or on C1, and the binary packages (for example, E1) that depend on them for installation. This process continues until all the packages that depend on the generated binary packages have been traversed. - - ```bash - pkgship bedepend dbName [$pkgName1 $pkgName2 $pkgName3] [-w] [-b] [-install/build] - ``` - - > Parameter description: - > - > **dbName**: (Mandatory) Specifies the name of the repository whose dependency needs to be queried. Only one repository can be queried each time. - > - > **pkgName**: (Mandatory) Specifies the name of the software package to be queried. Multiple software packages can be queried. - > - > **-w**: (Optional) If **-w** is not specified, the query result does not contain the subpackages of the corresponding source package by default. If **\[-w]** is specified after the command, not only the dependency of binary package C1 is queried, but also the dependency of other binary packages (such as C2 and C3) generated by source package C corresponding to C1 is queried. - > - > **-b**: (Optional) Specifies that the package to be queried is a binary package. By default, the source package is queried. - > - > **-install/build**: (Optional) `-install` indicates that installation dependencies are queried. `-build` indicates that build dependencies are queried. By default, all dependencies are queried. 
`-install` and `-build` are exclusive to each other. - -8. Query the database information. - - > Application scenario: Check which databases are initialized in Elasticsearch. This function returns the list of initialized databases based on the priority. - - `pkgship dbs` - -9. Obtain the version number. - - > Application scenario: Obtain the version number of the pkgship software. - - `pkgship -v` - -## Viewing and Dumping Logs - -**Viewing Logs** - -When the pkgship service is running, two types of logs are generated: service logs and operation logs. - -1\. Service logs: - -Path: **/var/log/pkgship/log\_info.log**. You can customize the path through the **log\_path** field in the **package.ini** file. - -Function: This log records the internal running of the code to facilitate fault locating. - -Permission: The permissions on the path and the log file are 755 and 644, respectively. Common users can view the log file. - -2\. Operation logs: - -Path: **/var/log/pkgship-operation/uwsgi.log**. You can customize the path through the **daemonize** field in the **package.ini** file. - -Function: This log records user operation information, including the IP address, access time, URL, and result, to facilitate subsequent queries and record attacker information. - -Permission: The permissions on the path and the log file are 700 and 644, respectively. Only the **root** and **pkgshipuser** users can view the log file. - -**Dumping Logs** - -1\. Service log dumping: - -- Dumping mechanism - - Use the dumping mechanism of the logging built-in function of Python to back up logs based on the log size. - -> The items are used to configure the capacity and number of backups of each log in the **package.ini** file. 
-> -> ```ini -> ; Maximum size of each log file, in bytes. The default is 30 MB. -> max_bytes=31457280 -> -> ; Number of old log files to keep. The default is 30. -> backup_count=30 -> ``` - -- Dumping process - - After a log is written, if the size of the log file exceeds the configured log capacity, the log file is automatically compressed and dumped. The compressed file name is **log\_info.log.***x***.gz**, where *x* is a number. A smaller number indicates a later backup. - - When the number of backup log files reaches the threshold, the earliest backup log file is deleted and the latest compressed log file is backed up. - -2\. Operation log dumping: - -- Dumping mechanism - - A script is used to dump data by time. Data is dumped once a day and is retained for 30 days. Customized configuration is not supported. - - > The script is stored in **/etc/pkgship/uwsgi\_logrotate.sh**. - -- Dumping process - - When the pkgship is started, the script for dumping data runs in the background. From the startup, dumping and compression are performed every other day. A total of 30 compressed files are retained. The compressed file name is **uwsgi.log-20201010*x*.zip**, where *x* indicates the hour when the file is compressed. - - After the pkgship is stopped, the script for dumping data is stopped and data is not dumped. When the pkgship is started again, the script for dumping data is executed again. - -## pkgship-panel - -### Introduction - -pkgship-panel integrates software package build information and maintenance information so that version maintenance personnel can quickly identify abnormal software packages and notify the package owners to solve the problems. This ensures build project stability and improves the OS build success rate. - -### Architecture - -![](images/panel.png) - -### Using the Tool - -The data source of pkgship-panel cannot be configured. You are advised to use the [pkgship-panel official website](https://pkgmanage.openeuler.org/Infomanagement). 
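As described earlier in this guide, pkgship queries databases in the order given by the `priority` field in **conf.yaml**. The following shell sketch is an illustration only — it is not pkgship code, it assumes a lower `priority` value means the database is queried first, and the file path `/tmp/conf-example.yaml` is a hypothetical example:

```shell
# Create a conf.yaml-style example file (same fields as in this guide).
cat > /tmp/conf-example.yaml <<'EOF'
- dbname: oe20.03
  priority: 1
- dbname: oe20.09
  priority: 2
EOF

# List database names ordered by their priority value.
# (Assumption for this sketch: a lower value is queried first.)
awk '/dbname:/ {name=$NF} /priority:/ {print $NF, name}' /tmp/conf-example.yaml |
  sort -n | cut -d' ' -f2
```

Running this prints `oe20.03` before `oe20.09`, mirroring the order in which a priority-based query would consult the two databases.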
diff --git a/docs/en/menu/index.md b/docs/en/menu/index.md index 760c380d0c7d5eb71679ab360926cb859e4e68dd..a3617c0d537a6e30b5c64e9a612a0b7aeeaccddc 100644 --- a/docs/en/menu/index.md +++ b/docs/en/menu/index.md @@ -1,285 +1,8 @@ --- headless: true --- -- [Release Notes]({{< relref "./docs/Releasenotes/terms-of-use.md" >}}) - - [Release Notes]({{< relref "./docs/Releasenotes/release_notes.md" >}}) - - [Introduction]({{< relref "./docs/Releasenotes/introduction.md" >}}) - - [User Notice]({{< relref "./docs/Releasenotes/user-notice.md" >}}) - - [Account List]({{< relref "./docs/Releasenotes/account-list.md" >}}) - - [OS Installation]({{< relref "./docs/Releasenotes/installing-the-os.md" >}}) - - [Key Features]({{< relref "./docs/Releasenotes/key-features.md" >}}) - - [Known Issues]({{< relref "./docs/Releasenotes/known-issues.md" >}}) - - [Resolved Issues]({{< relref "./docs/Releasenotes/resolved-issues.md" >}}) - - [Common Vulnerabilities and Exposures (CVE)]({{< relref "./docs/Releasenotes/common-vulnerabilities-and-exposures-(cve).md" >}}) - - [Source Code]({{< relref "./docs/Releasenotes/source-code.md" >}}) - - [Contribution]({{< relref "./docs/Releasenotes/contribution.md" >}}) - - [Acknowledgment]({{< relref "./docs/Releasenotes/acknowledgment.md" >}}) - - [Quick Start]({{< relref "./docs/Quickstart/quick-start.md" >}}) -- [Installation and Upgrade](#) - - [Installation Guide]({{< relref "./docs/Installation/Installation.md" >}}) - - [Installation on Servers]({{< relref "./docs/Installation/install-server.md" >}}) - - [Installation Preparations]({{< relref "./docs/Installation/installation-preparations.md" >}}) - - [Installation Mode]({{< relref "./docs/Installation/installation-modes.md" >}}) - - [Installation Guideline]({{< relref "./docs/Installation/installation-guideline.md" >}}) - - [Using Kickstart for Automatic Installation]({{< relref "./docs/Installation/using-kickstart-for-automatic-installation.md" >}}) - - [FAQs]({{< relref 
"./docs/Installation/faqs.md" >}}) - - [Installation on Raspberry Pi]({{< relref "./docs/Installation/install-pi.md" >}}) - - [Installation Preparations]({{< relref "./docs/Installation/Installation-Preparations1.md" >}}) - - [Installation Mode]({{< relref "./docs/Installation/Installation-Modes1.md" >}}) - - [Installation Guideline]({{< relref "./docs/Installation/Installation-Guide1" >}}) - - [FAQs]({{< relref "./docs/Installation/FAQ1.md" >}}) - - [More Resources]({{< relref "./docs/Installation/More-Resources.md" >}}) - - [RISC-V Installation Guide]({{< relref "./docs/Installation/riscv.md" >}}) - - [Virtual Machine Installation]({{< relref "./docs/Installation/riscv_qemu.md" >}}) - - [More Resources]({{< relref "./docs/Installation/riscv_more.md" >}}) - - [Upgrade and Downgrade Guide]({{< relref "./docs/os_upgrade_and_downgrade/openEuler_22.03_LTS_upgrade_and_downgrade.md" >}}) -- [OS Management](#) - - [Administrator Guide]({{< relref "./docs/Administration/administration.md" >}}) - - [Viewing System Information]({{< relref "./docs/Administration/viewing-system-information.md" >}}) - - [Basic Configuration]({{< relref "./docs/Administration/basic-configuration.md" >}}) - - [User and User Group Management]({{< relref "./docs/Administration/user-and-user-group-management.md" >}}) - - [Software Package Management with DNF]({{< relref "./docs/Administration/using-dnf-to-manage-software-packages.md" >}}) - - [Service Management]({{< relref "./docs/Administration/service-management.md" >}}) - - [Process Management]({{< relref "./docs/Administration/process-management.md" >}}) - - [Memory Management]({{< relref "./docs/Administration/overview.md" >}}) - - [etmem for Tiered Memory Expansion]({{< relref "./docs/Administration/memory-management.md" >}}) - - [GMEM User Guide]({{< relref "./docs/GMEM/GMEM_introduction.md" >}}) - - [Installation and Deployment]({{< relref "./docs/GMEM/install_deploy.md" >}}) - - [Usage Instructions]({{< relref "./docs/GMEM/usage.md" >}}) 
-    - [Network Configuration]({{< relref "./docs/Administration/configuring-the-network.md" >}})
-    - [Managing Drives Through LVM]({{< relref "./docs/Administration/managing-hard-disks-through-lvm.md" >}})
-    - [KAE Usage]({{< relref "./docs/Administration/using-the-kae.md" >}})
-    - [Service Configuration]({{< relref "./docs/Administration/configuring-services.md" >}})
-      - [Configuring the Repo Server]({{< relref "./docs/Administration/configuring-the-repo-server.md" >}})
-      - [Configuring the FTP Server]({{< relref "./docs/Administration/configuring-the-ftp-server.md" >}})
-      - [Configuring the Web Server]({{< relref "./docs/Administration/configuring-the-web-server.md" >}})
-      - [Setting Up the Database Server]({{< relref "./docs/Administration/setting-up-the-database-server.md" >}})
-    - [Trusted Computing]({{< relref "./docs/Administration/trusted-computing.md" >}})
-    - [FAQs]({{< relref "./docs/Administration/faqs.md" >}})
-  - [O&M Guide]({{< relref "./docs/ops_guide/overview.md" >}})
-    - [O&M Overview]({{< relref "./docs/ops_guide/om-overview.md" >}})
-    - [System Resources and Performance]({{< relref "./docs/ops_guide/system-resources-and-performance.md" >}})
-    - [Information Collection]({{< relref "./docs/ops_guide/information-collection.md" >}})
-    - [Troubleshooting]({{< relref "./docs/ops_guide/troubleshooting.md" >}})
-    - [Commonly Used Tools]({{< relref "./docs/ops_guide/commonly-used-tools.md" >}})
-    - [Common Skills]({{< relref "./docs/ops_guide/common-skills.md" >}})
-  - [sysMaster User Guide]({{< relref "./docs/sysMaster/overview.md" >}})
-    - [Service Management]({{< relref "./docs/sysMaster/service_management.md" >}})
-      - [Installation and Deployment]({{< relref "./docs/sysMaster/sysmaster_install_deploy.md" >}})
-      - [Usage Instructions]({{< relref "./docs/sysMaster/sysmaster_usage.md" >}})
-    - [Device Management]({{< relref "./docs/sysMaster/device_management.md" >}})
-      - [Installation and Deployment]({{< relref "./docs/sysMaster/devmaster_install_deploy.md" >}})
-      - [Usage Instructions]({{< relref "./docs/sysMaster/devmaster_usage.md" >}})
-  - [Compatibility Commands]({{< relref "./docs/memsafety/overview.md" >}})
-    - [utshell User Guide]({{< relref "./docs/memsafety/utshell/utshell_guide.md" >}})
-    - [utsudo User Guide]({{< relref "./docs/memsafety/utsudo/utsudo_user_guide.md" >}})
-- [Network](#)
-  - [Gazelle User Guide]({{< relref "./docs/Gazelle/Gazelle.md" >}})
-- [Maintenance](#)
-  - [Kernel Live Upgrade Guide]({{< relref "./docs/KernelLiveUpgrade/KernelLiveUpgrade.md" >}})
-    - [Installation and Deployment]({{< relref "./docs/KernelLiveUpgrade/installation-and-deployment.md" >}})
-    - [How to Run]({{< relref "./docs/KernelLiveUpgrade/how-to-run.md" >}})
-    - [Common Problems and Solutions]({{< relref "./docs/KernelLiveUpgrade/common-problems-and-solutions.md" >}})
-  - [HA User Guide]({{< relref "./docs/thirdparty_migration/ha.md" >}})
-    - [Deploying an HA Cluster]({{< relref "./docs/thirdparty_migration/installing-and-deploying-HA.md" >}})
-    - [HA Usage Example]({{< relref "./docs/thirdparty_migration/usecase.md" >}})
-- [Security](#)
-  - [Security Hardening Guide]({{< relref "./docs/SecHarden/secHarden.md" >}})
-    - [OS Hardening Overview]({{< relref "./docs/SecHarden/os-hardening-overview.md" >}})
-    - [Security Configuration Description]({{< relref "./docs/SecHarden/security-configuration-benchmark.md" >}})
-    - [Security Hardening Guide]({{< relref "./docs/SecHarden/security-hardening-guide.md" >}})
-      - [Account Passwords]({{< relref "./docs/SecHarden/account-passwords.md" >}})
-      - [Authentication and Authorization]({{< relref "./docs/SecHarden/authentication-and-authorization.md" >}})
-      - [System Services]({{< relref "./docs/SecHarden/system-services.md" >}})
-      - [File Permissions]({{< relref "./docs/SecHarden/file-permissions.md" >}})
-      - [Kernel Parameters]({{< relref "./docs/SecHarden/kernel-parameters.md" >}})
-      - [SELinux Configuration]({{< relref "./docs/SecHarden/selinux-configuration.md" >}})
-    - [Security Hardening Tools]({{< relref "./docs/SecHarden/security-hardening-tools.md" >}})
-    - [Appendix]({{< relref "./docs/SecHarden/appendix.md" >}})
-  - [secGear Developer Guide]({{< relref "./docs/secGear/secGear.md" >}})
-    - [Introduction to secGear]({{< relref "./docs/secGear/introduction-to-secGear.md" >}})
-    - [Installing secGear]({{< relref "./docs/secGear/secGear-installation.md" >}})
-    - [API Reference]({{< relref "./docs/secGear/api-reference.md" >}})
-    - [secGear Application Development]({{< relref "./docs/secGear/developer-guide.md" >}})
-- [Performance](#)
-  - [A-Tune User Guide]({{< relref "./docs/A-Tune/A-Tune.md" >}})
-    - [Getting to Know A-Tune]({{< relref "./docs/A-Tune/getting-to-know-a-tune.md" >}})
-    - [Installation and Deployment]({{< relref "./docs/A-Tune/installation-and-deployment.md" >}})
-    - [Usage Instructions]({{< relref "./docs/A-Tune/usage-instructions.md" >}})
-    - [Native-Turbo]({{< relref "./docs/A-Tune/native-turbo.md" >}})
-    - [FAQs]({{< relref "./docs/A-Tune/faqs.md" >}})
-    - [Appendixes]({{< relref "./docs/A-Tune/appendixes.md" >}})
-  - [sysBoost User Guide]({{< relref "./docs/sysBoost/sysBoost.md" >}})
-    - [Getting to Know sysBoost]({{< relref "./docs/sysBoost/getting-to-know-sysBoost.md" >}})
-    - [Installation and Deployment]({{< relref "./docs/sysBoost/installation-and-deployment.md" >}})
-    - [Usage Instructions]({{< relref "./docs/sysBoost/usage-instructions.md" >}})
-- [Desktop](#)
-  - [UKUI]({{< relref "./docs/desktop/ukui.md" >}})
-    - [UKUI Installation]({{< relref "./docs/desktop/installing-UKUI.md" >}})
-    - [UKUI User Guide]({{< relref "./docs/desktop/UKUI-user-guide.md" >}})
-  - [DDE]({{< relref "./docs/desktop/dde.md" >}})
-    - [DDE Installation]({{< relref "./docs/desktop/installing-DDE.md" >}})
-    - [DDE User Guide]({{< relref "./docs/desktop/DDE-user-guide.md" >}})
-  - [Xfce]({{< relref "./docs/desktop/xfce.md" >}})
-    - [Xfce Installation]({{< relref "./docs/desktop/installing-Xfce.md" >}})
-    - [Xfce User Guide]({{< relref "./docs/desktop/Xfce_userguide.md" >}})
-  - [GNOME]({{< relref "./docs/desktop/gnome.md" >}})
-    - [GNOME Installation]({{< relref "./docs/desktop/installing-GNOME.md" >}})
-    - [GNOME User Guide]({{< relref "./docs/desktop/GNOME_userguide.md" >}})
-  - [Kiran]({{< relref "./docs/desktop/kiran.md" >}})
-    - [Kiran Installation]({{< relref "./docs/desktop/install-kiran.md" >}})
-    - [Kiran User Guide]({{< relref "./docs/desktop/Kiran_userguide.md" >}})
-- [Embedded](#)
-  - [openEuler Embedded User Guide](https://openeuler.gitee.io/yocto-meta-openeuler/master/index.html)
-- [Virtualization](#)
-  - [Virtualization User Guide]({{< relref "./docs/Virtualization/virtualization.md" >}})
-    - [Introduction to Virtualization]({{< relref "./docs/Virtualization/introduction-to-virtualization.md" >}})
-    - [Installing Virtualization Components]({{< relref "./docs/Virtualization/virtualization-installation.md" >}})
-    - [Environment Preparation]({{< relref "./docs/Virtualization/environment-preparation.md" >}})
-    - [VM Configuration]({{< relref "./docs/Virtualization/vm-configuration.md" >}})
-    - [Managing VMs]({{< relref "./docs/Virtualization/managing-vms.md" >}})
-    - [VM Live Migration]({{< relref "./docs/Virtualization/vm-live-migration.md" >}})
-    - [System Resource Management]({{< relref "./docs/Virtualization/system-resource-management.md" >}})
-    - [Managing Devices]({{< relref "./docs/Virtualization/managing-devices.md" >}})
-    - [VM Maintainability Management]({{< relref "./docs/Virtualization/vm-maintainability-management.md" >}})
-    - [Best Practices]({{< relref "./docs/Virtualization/best-practices.md" >}})
-    - [Tool Guide]({{< relref "./docs/Virtualization/tool-guide.md" >}})
-      - [vmtop]({{< relref "./docs/Virtualization/vmtop.md" >}})
-      - [LibcarePlus]({{< relref "./docs/Virtualization/LibcarePlus.md" >}})
-    - [Skylark VM Hybrid Deployment]({{< relref "./docs/Virtualization/Skylark.md" >}})
-    - [Appendix]({{< relref "./docs/Virtualization/appendix.md" >}})
-  - [StratoVirt User Guide]({{< relref "./docs/StratoVirt/StratoVirt_guidence.md" >}})
-    - [Introduction to StratoVirt]({{< relref "./docs/StratoVirt/StratoVirt_introduction.md" >}})
-    - [Installing StratoVirt]({{< relref "./docs/StratoVirt/Install_StratoVirt.md" >}})
-    - [Preparing the Environment]({{< relref "./docs/StratoVirt/Prepare_env.md" >}})
-    - [Configuring a VM]({{< relref "./docs/StratoVirt/VM_configuration.md" >}})
-    - [Managing VMs]({{< relref "./docs/StratoVirt/VM_management.md" >}})
-    - [Connecting to the iSula Secure Container]({{< relref "./docs/StratoVirt/interconnect_isula.md" >}})
-    - [Interconnecting with libvirt]({{< relref "./docs/StratoVirt/Interconnect_libvirt.md" >}})
-    - [StratoVirt VFIO Instructions]({{< relref "./docs/StratoVirt/StratoVirt_VFIO_instructions.md" >}})
-    - [libvirt Direct Connection Aggregation Environment Establishment]({{< relref "./docs/DPUOffload/libvirt-direct-connection-aggregation-environment-establishment.md" >}})
-    - [qtfs Shared File System]({{< relref "./docs/DPUOffload/qtfs-architecture-and-usage.md" >}})
-  - [Imperceptible DPU Offload User Guide]({{< relref "./docs/DPUOffload/overview.md" >}})
-    - [Imperceptible Container Management Plane Offload]({{< relref "./docs/DPUOffload/imperceptible-container-management-plane-offload.md" >}})
-      - [Imperceptible Container Management Plane Offload Deployment Guide]({{< relref "./docs/DPUOffload/offload-deployment-guide.md" >}})
-  - [OpenStack]({{< relref "./docs/thirdparty_migration/openstack.md" >}})
-- [Cloud](#)
-  - [Container User Guide]({{< relref "./docs/Container/container.md" >}})
-    - [iSulad Container Engine]({{< relref "./docs/Container/isulad-container-engine.md" >}})
-      - [Installation, Upgrade, and Uninstallation]({{< relref "./docs/Container/installation-upgrade-Uninstallation.md" >}})
-        - [Installation and Configuration]({{< relref "./docs/Container/installation-configuration.md" >}})
-        - [Upgrade]({{< relref "./docs/Container/upgrade-methods.md" >}})
-        - [Uninstallation]({{< relref "./docs/Container/uninstallation.md" >}})
-      - [Application Scenarios]({{< relref "./docs/Container/application-scenarios.md" >}})
-        - [Container Management]({{< relref "./docs/Container/container-management.md" >}})
-        - [Interconnection with the CNI Network]({{< relref "./docs/Container/interconnection-with-the-cni-network.md" >}})
-        - [Container Resource Management]({{< relref "./docs/Container/container-resource-management.md" >}})
-        - [Privileged Container]({{< relref "./docs/Container/privileged-container.md" >}})
-        - [CRI]({{< relref "./docs/Container/cri.md" >}})
-        - [Image Management]({{< relref "./docs/Container/image-management.md" >}})
-        - [Checking the Container Health Status]({{< relref "./docs/Container/checking-the-container-health-status.md" >}})
-        - [Querying Information]({{< relref "./docs/Container/querying-information.md" >}})
-        - [Security Features]({{< relref "./docs/Container/security-features.md" >}})
-        - [Supporting OCI hooks]({{< relref "./docs/Container/supporting-oci-hooks.md" >}})
-        - [Local Volume Management]({{< relref "./docs/Container/local-volume-management.md" >}})
-        - [Interconnecting iSulad shim v2 with StratoVirt]({{< relref "./docs/Container/interconnecting-isula-shim-v2-with-stratovirt.md" >}})
-      - [Appendix]({{< relref "./docs/Container/appendix.md" >}})
-    - [System Container]({{< relref "./docs/Container/system-container.md" >}})
-      - [Installation Guideline]({{< relref "./docs/Container/installation-guideline.md" >}})
-      - [Usage Guide]({{< relref "./docs/Container/usage-guide.md" >}})
-        - [Specifying Rootfs to Create a Container]({{< relref "./docs/Container/specifying-rootfs-to-create-a-container.md" >}})
-        - [Using systemd to Start a Container]({{< relref "./docs/Container/using-systemd-to-start-a-container.md" >}})
-        - [Reboot or Shutdown in a Container]({{< relref "./docs/Container/reboot-or-shutdown-in-a-container.md" >}})
-        - [Configurable Cgroup Path]({{< relref "./docs/Container/configurable-cgroup-path.md" >}})
-        - [Writable Namespace Kernel Parameters]({{< relref "./docs/Container/writable-namespace-kernel-parameters.md" >}})
-        - [Shared Memory Channels]({{< relref "./docs/Container/shared-memory-channels.md" >}})
-        - [Dynamically Loading the Kernel Module]({{< relref "./docs/Container/dynamically-loading-the-kernel-module.md" >}})
-        - [Environment Variable Persisting]({{< relref "./docs/Container/environment-variable-persisting.md" >}})
-        - [Maximum Number of Handles]({{< relref "./docs/Container/maximum-number-of-handles.md" >}})
-        - [Security and Isolation]({{< relref "./docs/Container/security-and-isolation.md" >}})
-        - [Dynamically Managing Container Resources \\(syscontainer-tools\\)]({{< relref "./docs/Container/dynamically-managing-container-resources-(syscontainer-tools).md" >}})
-      - [Appendix]({{< relref "./docs/Container/appendix-1.md" >}})
-    - [Secure Container]({{< relref "./docs/Container/secure-container.md" >}})
-      - [Installation and Deployment]({{< relref "./docs/Container/installation-and-deployment-2.md" >}})
-      - [Application Scenarios]({{< relref "./docs/Container/application-scenarios-2.md" >}})
-        - [Managing the Lifecycle of a Secure Container]({{< relref "./docs/Container/managing-the-lifecycle-of-a-secure-container.md" >}})
-        - [Configuring Resources for a Secure Container]({{< relref "./docs/Container/configuring-resources-for-a-secure-container.md" >}})
-        - [Monitoring Secure Containers]({{< relref "./docs/Container/monitoring-secure-containers.md" >}})
-      - [Appendix]({{< relref "./docs/Container/appendix-2.md" >}})
-    - [Docker Container]({{< relref "./docs/Container/docker-container.md" >}})
-      - [Installation and Configuration]({{< relref "./docs/Container/installation-and-configuration-3.md" >}})
-      - [Container Management]({{< relref "./docs/Container/container-management-1.md" >}})
-      - [Image Management]({{< relref "./docs/Container/image-management-1.md" >}})
-      - [Command Reference]({{< relref "./docs/Container/command-reference.md" >}})
-        - [Container Engine]({{< relref "./docs/Container/container-engine.md" >}})
-        - [Container Management]({{< relref "./docs/Container/container-management-2.md" >}})
-        - [Image Management]({{< relref "./docs/Container/image-management-2.md" >}})
-        - [Statistics]({{< relref "./docs/Container/statistics.md" >}})
-    - [Image Building]({{< relref "./docs/Container/isula-build.md" >}})
-    - [Kuasar Multi-Sandbox Container Runtime]({{< relref "./docs/Container/kuasar.md" >}})
-      - [Installation and Configuration]({{< relref "./docs/Container/kuasar-install-config.md" >}})
-      - [Usage Instructions]({{< relref "./docs/Container/kuasar-usage.md" >}})
-      - [Appendix]({{< relref "./docs/Container/kuasar-install-config.md" >}})
-  - [KubeOS User Guide]({{< relref "./docs/KubeOS/kubeos-user-guide.md" >}})
-    - [About KubeOS]({{< relref "./docs/KubeOS/about-kubeos.md" >}})
-    - [Installation and Deployment]({{< relref "./docs/KubeOS/installation-and-deployment.md" >}})
-    - [Usage Instructions]({{< relref "./docs/KubeOS/usage-instructions.md" >}})
-    - [KubeOS Image Creation]({{< relref "./docs/KubeOS/kubeos-image-creation.md" >}})
-  - [Kubernetes Cluster Deployment Guide]({{< relref "./docs/Kubernetes/Kubernetes.md" >}})
-    - [Preparing VMs]( {{< relref "./docs/Kubernetes/preparing-VMs.md">}})
-    - [Manual Cluster Deployment]({{< relref "./docs/Kubernetes/deploying-a-Kubernetes-cluster-manually.md" >}})
-      - [Installing the Kubernetes Software Package]( {{< relref "./docs/Kubernetes/installing-the-Kubernetes-software-package.md" >}})
-      - [Preparing Certificates]({{< relref "./docs/Kubernetes/preparing-certificates.md" >}})
-      - [Installing etcd]({{< relref "./docs/Kubernetes/installing-etcd.md" >}})
-      - [Deploying Components on the Control Plane]({{< relref "./docs/Kubernetes/deploying-control-plane-components.md" >}})
-      - [Deploying a Node Component]({{< relref "./docs/Kubernetes/deploying-a-node-component.md" >}})
-    - [Automatic Cluster Deployment]({{< relref "./docs/Kubernetes/eggo-automatic-deployment.md" >}})
-      - [Tool Introduction]({{< relref "./docs/Kubernetes/eggo-tool-introduction.md" >}})
-      - [Deploying a Cluster]({{< relref "./docs/Kubernetes/eggo-deploying-a-cluster.md" >}})
-      - [Dismantling a Cluster]({{< relref "./docs/Kubernetes/eggo-dismantling-a-cluster.md" >}})
-    - [Running the Test Pod]({{< relref "./docs/Kubernetes/running-the-test-pod.md" >}})
-  - [Rubik User Guide]({{< relref "./docs/rubik/overview.md" >}})
-    - [Installation and Deployment]({{< relref "./docs/rubik/installation-and-deployment.md" >}})
-    - [HTTP APIs]({{< relref "./docs/rubik/http-apis.md" >}})
-    - [Example of Isolation for Hybrid Deployed Services]({{< relref "./docs/rubik/example-of-isolation-for-hybrid-deployed-services.md" >}})
-  - [NestOS User Guide]({{< relref "./docs/NestOS/overview.md" >}})
-    - [Installation and Deployment]({{< relref "./docs/NestOS/installation-and-deployment.md" >}})
-    - [Setting Up Kubernetes and iSulad]({{< relref "./docs/NestOS/usage.md" >}})
-    - [Feature Description]({{< relref "./docs/NestOS/feature-description.md" >}})
-  - [Kmesh User Guide]({{< relref "./docs/Kmesh/Kmesh.md" >}})
-    - [Introduction to Kmesh]({{< relref "./docs/Kmesh/introduction-to-kmesh.md" >}})
-    - [Installation and Deployment]({{< relref "./docs/Kmesh/installation-and-deployment.md" >}})
-    - [Usage]({{< relref "./docs/Kmesh/usage.md" >}})
-    - [FAQs]({{< relref "./docs/Kmesh/faqs.md" >}})
-    - [Appendix]({{< relref "./docs/Kmesh/appendix.md" >}})
-- [Edge](#)
-  - [KubeEdge User Guide]({{< relref "./docs/KubeEdge/overview.md" >}})
-    - [KubeEdge Usage Guide]({{< relref "./docs/KubeEdge/kubeedge-usage-guide.md" >}})
-    - [KubeEdge Deployment Guide]({{< relref "./docs/KubeEdge/kubeedge-deployment-guide.md" >}})
-  - [K3s Deployment Guide]({{< relref "./docs/K3s/K3s-deployment-guide.md" >}})
-  - [ROS User Guide]({{< relref "./docs/ROS/ROS.md" >}})
-    - [Introduction to ROS]({{< relref "./docs/ROS/introduction-to-ROS.md" >}})
-    - [Installation and Deployment]({{< relref "./docs/ROS/installation-and-deployment.md" >}})
-    - [Usage]({{< relref "./docs/ROS/usage.md" >}})
-    - [FAQs]({{< relref "./docs/ROS/faqs.md" >}})
-    - [Appendix]({{< relref "./docs/ROS/appendix.md" >}})
-- [openEuler DevKit](#)
-  - [isocut Usage Guide]({{< relref "./docs/TailorCustom/isocut-user-guide.md" >}})
-  - [ImageTailor User Guide]({{< relref "./docs/TailorCustom/imageTailor-user-guide.md" >}})
-  - [PIN User Guide]({{< relref "./docs/Pin/pin-user-guide.md" >}})
-  - [Eulerlauncher User Guide]({{< relref "./docs/Eulerlauncher/overall.md" >}})
-    - [Installing and Running EulerLauncher on Windows]({{< relref "./docs/Eulerlauncher/win-user-manual.md" >}})
-    - [Installing and Running Eulerlauncher on macOS]({{< relref "./docs/Eulerlauncher/mac-user-manual.md" >}})
-- [openEuler DevOps](#)
-  - [Patch Tracking]({{< relref "./docs/userguide/patch-tracking.md" >}})
-  - [pkgship]({{< relref "./docs/userguide/pkgship.md" >}})
-- [Application Development](#)
-  - [Application Development Guide]({{< relref "./docs/ApplicationDev/application-development.md" >}})
-    - [Preparing the Development Environment]({{< relref "./docs/ApplicationDev/preparations-for-development-environment.md" >}})
-    - [Using GCC for Compilation]({{< relref "./docs/ApplicationDev/using-gcc-for-compilation.md" >}})
-    - [Using Make for Compilation]({{< relref "./docs/ApplicationDev/using-make-for-compilation.md" >}})
-    - [Using JDK for Compilation]({{< relref "./docs/ApplicationDev/using-jdk-for-compilation.md" >}})
-    - [Building an RPM Package]({{< relref "./docs/ApplicationDev/building-an-rpm-package.md" >}})
-    - [FAQ]({{< relref "./docs/ApplicationDev/FAQ.md" >}})
-  - [GCC User Guide]({{< relref "./docs/GCC/overview.md" >}})
-    - [Kernel FDO User Guide]({{< relref "./docs/GCC/kernel_FDO_user_guide.md" >}})
+- [Server]({{< relref "./Server/Menu/index.md" >}})
+- [Cloud]({{< relref "./Cloud/Menu/index.md" >}})
+- [Virtualization]({{< relref "./Virtualization/Menu/index.md" >}})
+- [Embedded]({{< relref "./Embedded/Menu/index.md" >}})
+- [Tools]({{< relref "./Tools/Menu/index.md" >}})