diff --git a/docs/en/server/releasenotes/releasenotes/key-features.md b/docs/en/server/releasenotes/releasenotes/key-features.md index 9fe6cd529a47749993a69dcb8bc3ea14e8dc65ea..98ebb361d59550394b171bc467b66ef0528e79a8 100644 --- a/docs/en/server/releasenotes/releasenotes/key-features.md +++ b/docs/en/server/releasenotes/releasenotes/key-features.md @@ -1,298 +1,289 @@ -# Key Features +# Key Features + +## AI + +AI is redefining OSs by powering intelligent development, deployment, and O&M. openEuler supports general-purpose architectures like Arm, x86, and RISC-V, and next-gen AI processors like NVIDIA and Ascend. Further, openEuler is equipped with extensive AI capabilities that have made it a preferred choice for diversified computing power. + +- **OS for AI**: openEuler offers an efficient development and runtime environment that containerizes software stacks of AI platforms with out-of-the-box availability. It also provides various AI frameworks to facilitate AI development. + - openEuler supports TensorFlow, PyTorch, and MindSpore frameworks and software development kits (SDKs) of major computing architectures, such as Compute Architecture for Neural Networks (CANN) and Compute Unified Architecture (CUDA), to make it easy to develop and run AI applications. + - Environment setup is further simplified by containerizing software stacks. openEuler provides three types of container images: + 1. SDK images: Use openEuler as the base image and install the SDK of a computing architecture, for example, Ascend CANN and NVIDIA CUDA. + 2. AI framework images: Use an SDK image as the base and install AI framework software, such as PyTorch and TensorFlow. You can use an AI framework image to quickly build a distributed AI framework, such as Ray. + 3. Model application images: Provide a complete set of toolchains and model applications. + For details, see the [openEuler AI Container Image User Guide](https://gitee.com/openeuler/docs/blob/stable2-22.03_LTS_SP3/docs/en/docs/AI/ai_container_image_user_guide.md). + - The sysHax LLM heterogeneous acceleration runtime enhances model inference performance in single-server, multi-device setups by optimizing Kunpeng + *x*PU (GPU/NPU) resource synergy. + 1. Dynamic resource allocation: Intelligently balances Kunpeng CPU and xPU workloads to maximize compute efficiency. + 2. CPU inference acceleration: Improves throughput via NUMA-aware scheduling, parallelized matrix operations, and SVE-optimized inference operators. +- **AI for OS**: AI makes openEuler more intelligent. The openEuler Copilot System, an intelligent Q&A platform, is developed using foundation models and openEuler data. It assists in code generation, problem analysis, and system O&M. + - AI application development framework: Foundation model applications are key to enterprise AI adoption. Combining retrieval-augmented generation (RAG) with foundation models effectively addresses gaps in domain-specific data. To support this, the openEuler community has developed an AI application development framework that provides intelligent tools for individual and enterprise developers, enabling rapid AI application development through streamlined workflows. It lowers technical barriers, improves efficiency and data quality, and meets diverse needs for data processing, model training, and content management. Its core features include: + 1. 
Multi-path enhanced RAG for improved Q&A accuracy: Overcomes limitations of traditional native RAG (low accuracy, weak guidance) using techniques like corpus governance, prompt rewriting, and multi-path retrieval comparison. + 2. Document processing and optimization: Supports incremental corpus updates, deduplication, sensitive data masking, and standardization (such as summarization, code annotation, and case organization). + 3. Embedding model fine-tuning: Enables rapid tuning and evaluation of embedding models (such as BGE models) for domain-specific performance gains. + - Intelligent Q&A: The openEuler Copilot System is accessible via web or shell. + 1. Workflow scheduling: + - Atomic agent operations: Multiple agent operations can be combined into a multi-step workflow that is internally ordered and associated, and is executed as an inseparable atomic operation. + - Real-time data processing: Data generated in each step of the workflow can be processed immediately and then transferred to the next step. + - Intelligent interaction: When the openEuler Copilot System receives a vague or complex user instruction, it proactively asks the user to clarify and provide more details. + 2. Task recommendation: + - Intelligent response: The openEuler Copilot System can analyze the semantic information entered. + - Intelligent guidance: The openEuler Copilot System comprehensively analyzes the execution status, function requirements, and associated tasks of the current workflow to provide next-step operation suggestions. + 3. RAG: The RAG technology used by openEuler Copilot System supports more document formats and content scenarios, and enhances Q&A quality while adding little system load. + 4. Corpus governance: Corpus governance is a core RAG capability. It imports corpuses into the knowledge base in a supported format using fragment relationship extraction, fragment derivative construction, and optical character recognition (OCR). This increases the retrieval hit rate. + For details, see the [openEuler Copilot System Intelligent Q&A Service User Guide](https://gitee.com/openeuler/docs/blob/stable2-22.03_LTS_SP4/docs/en/docs/AI/EulerCopilot_user_guide.md). + - Intelligent tuning: The openEuler Copilot System supports the intelligent shell entry. + Through this entry, you can interact with the openEuler Copilot System using a natural language and perform heuristic tuning operations such as performance data collection, system performance analysis, and system performance tuning. + - Intelligent diagnosis: + 1. Inspection: The Inspection Agent checks for abnormalities of designated IP addresses and provides an abnormality list that contains associated container IDs and abnormal metrics (such as CPU and memory). + 2. Demarcation: The Demarcation Agent analyzes and demarcates a specified abnormality contained in the inspection result and outputs the top 3 metrics of the root cause. + 3. Location: The Detection Agent performs profiling location analysis on the root cause, and provides useful hotspot information such as the stack, system time, and performance metrics related to the root cause. + - Intelligent vulnerability patching: The openEuler intelligent vulnerability patching tool provides automated vulnerability management and repair capabilities for the openEuler kernel repository. This feature analyzes the impact of vulnerabilities on openEuler versions using the `/analyze` command and enables minute-level automated patch pull request (PR) creation via the `/create_pr` command. 
+ - Intelligent container images: The openEuler Copilot System can invoke environment resources through a natural language, assist in pulling container images for local physical resources, and establish a development environment suitable for debugging on existing compute devices. This system supports three types of containers, and container images have been released on Docker Hub. You can manually pull and run these container images. + 1. SDK layer: encapsulates only the component libraries that enable AI hardware resources, such as CUDA and CANN. + 2. SDKs + training/inference frameworks: accommodates TensorFlow, PyTorch, and other frameworks (for example, tensorflow2.15.0-cuda12.2.0 and pytorch2.1.0.a1-cann7.0.RC1) in addition to the SDK layer. + 3. SDKs + training/inference frameworks + LLMs: encapsulates several models (for example, llama2-7b and chatglm2-13b) based on the second type of containers. + +## openEuler Embedded + +openEuler 25.03 Embedded is designed for embedded applications, offering significant progress in southbound and northbound ecosystems and infrastructure. openEuler Embedded provides a closed loop framework often found in operational technology (OT) applications such as manufacturing and robotics, whereby innovations help optimize its embedded system software stack and ecosystem. For southbound compatibility, the "openEuler Embedded Ecosystem Expansion Initiative" has strengthened hardware support including Kunpeng 920 and TaishanPi, while achieving successful adaptation of STMicroelectronics' STM32MP257 high-performance microprocessor for industry applications through collaboration with MYIR. On the northbound front, capabilities have been enriched with industrial middleware and graphical middleware solutions, enabling practical implementations in manufacturing automation and robotics. Looking ahead, openEuler Embedded will collaborate with community partners, users, and developers to expand support for new processor architectures like LoongArch, enhance southbound hardware compatibility, and advance key capabilities including industrial middleware, embedded AI, edge computing, and simulation systems to establish a comprehensive embedded software platform solution. + +- **Southbound ecosystem**: openEuler Embedded Linux supports mainstream processor architectures like AArch64, x86_64, AArch32, and RISC-V, and will extend support to LoongArch in the future. openEuler 24.03 and later versions have a rich southbound ecosystem and support chips from Raspberry Pi, HiSilicon, Rockchip, Renesas, TI, Phytium, StarFive, and Allwinner. +- **Embedded virtualization base**: openEuler Embedded uses an elastic virtualization base that enables multiple OSs to run on a system-on-a-chip (SoC). The base incorporates a series of technologies including bare metal, embedded virtualization, lightweight containers, LibOS, trusted execution environment (TEE), and heterogeneous deployment. + 1. The bare metal hybrid deployment solution runs on OpenAMP to manage peripherals by partition at a high performance level; however, it delivers poor isolation and flexibility. This solution supports the hybrid deployment of UniProton/Zephyr/RT-Thread and openEuler Embedded Linux. + 2. Partitioning-based virtualization is an industrial-grade hardware partition virtualization solution that runs on Jailhouse. It offers superior performance and isolation but inferior flexibility. 
This solution supports the hybrid deployment of UniProton/Zephyr/FreeRTOS and openEuler Embedded Linux or of OpenHarmony and openEuler Embedded Linux.
+  3. Real-time virtualization is available as two community hypervisors: ZVM (a real-time virtual machine monitor) and Rust-Shyper (a Type-I embedded virtual machine monitor).
+- **MICA deployment framework**: The MICA deployment framework is a unified environment that masks the differences between the technologies that comprise the embedded elastic virtualization base. It leverages the multi-core capability of hardware to combine the general-purpose Linux OS with a dedicated real-time operating system (RTOS), making full use of each OS. The MICA deployment framework covers lifecycle management, cross-OS communication, a service-oriented framework, and multi-OS infrastructure.
+  - Lifecycle management provides operations to load, start, suspend, and stop the client OS.
+  - Cross-OS communication uses a set of communication mechanisms between different OSs based on shared memory.
+  - Service-oriented framework enables different OSs to provide their own services. For example, Linux provides common file system and network services, and the RTOS provides real-time control and computing.
+  - Multi-OS infrastructure integrates OSs through a series of mechanisms, covering resource expression and allocation and unified build.
+  The MICA deployment framework provides the following functions:
+  - Lifecycle management and cross-OS communication for openEuler Embedded Linux and the RTOS (Zephyr or UniProton) in bare metal mode
+  - Lifecycle management and cross-OS communication for openEuler Embedded Linux and the RTOS (FreeRTOS or Zephyr) in partitioning-based virtualization mode
+- **Northbound ecosystem**: Over 600 common embedded software packages can be built using openEuler. The soft real-time kernel helps respond to soft real-time interrupts within microseconds. The distributed soft bus system (DSoftBus) of openEuler Embedded integrates the DSoftBus and point-to-point authentication module of OpenHarmony, implementing interconnection between openEuler-based embedded devices and OpenHarmony-based devices as well as between openEuler-based embedded devices themselves. With iSula containers, openEuler and other OS containers can be deployed on embedded devices to simplify application porting and deployment. Embedded container images can be compressed to 5 MB, and can be easily deployed into the OS of another container.
+- **UniProton**: UniProton is an RTOS that features ultra-low latency and flexible MICA deployments. It is suited for industrial control because it supports both microcontroller units and multi-core CPUs. UniProton provides the following capabilities:
+  - Compatible with processor architectures like Cortex-M, AArch64, x86_64, and riscv64, and supports M4, RK3568, RK3588, x86_64, Hi3093, Raspberry Pi 4B, Kunpeng 920, Ascend 310, and Allwinner D1s.
+  - Connects with openEuler Embedded Linux on Raspberry Pi 4B, Hi3093, RK3588, and x86_64 devices in bare metal mode.
+  - Can be debugged using GDB on openEuler Embedded Linux.
+
+## epkg
+
+epkg is a new software package manager that supports the installation and use of non-service software packages. It solves version compatibility issues so that users can install and run software of different versions on the same OS by using simple commands to create, enable, and switch between environments.
+
+- **Multi-version compatibility**: Enables installation of multiple software versions, resolving version conflicts.
+- **Flexible installation modes**: Supports both privileged (system-wide) and unprivileged (user-specific) installations, enabling minimal-footprint deployments and self-contained installations. +- **Environment management**: Facilitates environment lifecycle operations (create, delete, activate, register, and view), supporting multiple environments with distinct software repositories. Enables multi-environment version control, with runtime registration for multiple environments and exclusive environment activation for development debugging. +- **Environment rollback**: Maintains operational history tracking and provides state restoration capabilities, allowing recovery from misoperations or faulty package installations. +- **Package management**: Implements core package operations (install, remove, and query) with RPM/DNF-level functionality parity, meeting daily usage requirements for typical users and scenarios. + +## GCC Compilation and Linking Acceleration + +To improve the compilation efficiency of openEuler software packages and enhance CI pipeline and developer productivity, optimization techniques for C/C++ components are implemented through compiler and linker enhancements. The combination of GCC 12.3 with profile-guided optimization (PGO) and link time optimization (LTO), alongside the modern mold linker, reduced the total compilation time for the top 90+ software packages by approximately 9.5%. The following key capabilities are supported: + +1. GCC 12.3 is configured to generate binaries with PGO and LTO, accelerating the compilation process. +2. Applications specified in the [allowlist](https://gitee.com/src-openeuler/openEuler-rpm-config/blob/openEuler-25.03/0002-Enable-mold-links-through-whitelist.patch#L49) automatically switch to the mold linker to optimize linking efficiency. + +## Kernel Innovations + +openEuler 25.03 runs on Linux kernel 6.6 and inherits the competitive advantages of community versions and innovative features released in the openEuler community. + +- **Kernel replication**: This feature optimizes Linux kernel performance bottlenecks in non-uniform memory access (NUMA) architectures. Research shows critical data center applications like Apache, MySQL, and Redis experience significant performance impacts from kernel operations: kernel execution accounts for 61% of application CPU cycles, 57% of total instructions executed, 61% of I-cache misses, and 46% of I-TLB misses. Traditional Linux kernels restrict code segments, read-only data segments, and kernel page tables (**swapper_pg_dir**) to primary NUMA nodes without migration capability. This forces frequent cross-NUMA operations during system calls when multi-threaded applications are deployed across multiple NUMA nodes, increasing memory access latency and degrading system performance. The kernel replication feature extends the **pgd** global page directory table in **mm_struct** by automatically creating NUMA-local replicas of kernel code, data segments, and page tables during kernel initialization. This mechanism maps identical kernel virtual addresses to physical addresses within their respective NUMA nodes, enhancing memory locality and reducing cross-NUMA overhead. The implementation supports vmalloc, dynamic module loading, dynamic instruction injection mechanisms (Kprobe, KGDB, and BPF), security features (KPTI, KASLR, and KASAN), and 64 KB huge pages. A new boot-time cmdline configuration option (disabled by default) enables dynamic control for compatibility management. 
This feature benefits high-concurrency, multi-threaded server workloads. +- **HAOC 3.0 security feature**: Hardware-assisted OS compartmentalization (HAOC) leverages x86 and Arm processor capabilities to implement a dual-architecture kernel design. It creates isolated execution environments (IEE) within the kernel to prevent attackers from performing lateral movement and privilege escalation. The current version establishes IEE as a protected domain where sensitive resources can be incrementally isolated. These resources become accessible exclusively through controlled IEE interfaces, preventing unauthorized access by standard kernel code. + +## NestOS + +NestOS is a community cloud OS that uses nestos-assembler for quick integration and build. It runs rpm-ostree and Ignition tools over a dual rootfs and atomic update design, and enables easy cluster setup in large-scale containerized environments. Compatible with Kubernetes and OpenStack, NestOS also reduces container overheads. + +- **Out-of-the-box availability**: integrates popular container engines such as iSulad, Docker, and Podman to provide lightweight and tailored OSs for the cloud. +- **Easy configuration**: uses the Ignition utility to install and configure a large number of cluster nodes with a single configuration. +- **Secure management**: runs rpm-ostree to manage software packages and works with the openEuler software package source to ensure secure and stable atomic updates. +- **Hitless node updating**: uses Zincati to provide automatic node updates and reboot without interrupting services. +- **Dual rootfs**: executes dual rootfs for active/standby switchovers, to ensure integrity and security during system running. + +## oeAware Enhancements + +oeAware is a framework that provides low-load collection, sensing, and tuning upon detecting defined system behaviors on openEuler. The framework divides the tuning process into three layers: collection, sensing, and tuning. Each layer is associated through subscription and developed as plugins, overcoming the limitations of traditional tuning techniques that run independently and are statically enabled or disabled. +Every oeAware plugin is a dynamic library that utilizes oeAware interfaces. The plugins comprise multiple instances that each contains several topics and deliver collection or sensing results to other plugins or external applications for tuning and analysis purposes. openEuler 25.03 introduces the transparent_hugepage_tune and preload_tune plugins. + +- The SDK enables subscription to plugin topics, with a callback function handling data from oeAware. This allows external applications to create tailored functionalities, such as cross-cluster information collection or local node analysis. +- The Performance monitoring unit (PMU) information collection plugin gathers performance records from the system PMU. +- The Docker information collection plugin retrieves specific parameter details about the Docker environment. +- The system information collection plugin captures kernel parameters, thread details, and resource information (CPU, memory, I/O, network) from the current environment. +- The thread sensing plugin monitors key information about threads. +- The evaluation plugin examines system NUMA and network information during service operations, suggesting optimal tuning methods. 
+- The system tuning plugins comprise stealtask for enhanced CPU tuning, smc_tune which leverages shared memory communication in the kernel space to boost network throughput and reduce latency, xcall_tune which bypasses non-essential code paths to minimize system call processing overhead, transparent_hugepage_tune which enables transparent huge pages to boost the TLB hit ratio, and preload_tune which seamlessly loads dynamic libraries. +- The Docker tuning plugin addresses CPU performance issues during sudden load spikes by utilizing the CPU burst feature. +- smc_tune: SMC acceleration must be enabled before the server-client connection is established. This feature is most effective in scenarios with numerous persistent connections. +- Docker tuning is not compatible with Kubernetes containers. +- xcall_tune: The **FAST_SYSCALL** kernel configuration option must be activated. -## GMEM +## A-Ops with CVE Fixes and Configuration Source Tracing -Generalized Memory Management (GMEM) is an optimal solution for memory management in OS for AI. It provides a centralized management mechanism for heterogeneous memory interconnection. GMEM innovates the memory management architecture in the Linux kernel. Its logical mapping system masks the differences between the ways how the CPU and accelerator access memory addresses. The Remote Pager memory message interaction framework provides the device access abstraction layer. In the unified address space, GMEM automatically migrates data to the OS or accelerator when data needs to be accessed or paged. GMEM APIs are consistent with native Linux memory management APIs, and feature high usability, performance, and portability. +A-Ops empowers intelligent O&M through interactive dialogs and wizard-based operations. The intelligent interactive dialogs, featuring CVE prompts and fixes, configuration source tracing, configuration exception tracing, and configuration baseline synchronization, enable the O&M assistant to streamline routine O&M operations. +A-Ops integrates the intelligent O&M assistant based on the openEuler Intelligent Interaction Platform for intelligent CVE fixing and configuration source tracing. -- **Logical mapping system**: GMEM high-level APIs in the kernel allow the accelerator driver to directly obtain memory management functions and create logical page tables. The logical page tables decouple the high-layer logic of memory management from the hardware layer of the CPU, so as to abstract the high-layer memory management logic that can be reused by various accelerators. +- CVE fixing: A-Ops displays cluster CVE status, prompts high-score and high-severity CVEs, and offers corresponding fixes. You can apply these fixes and check results using the assistant or WebUI. +- Configuration source tracing: You can use the assistant to find the machines with abnormal baseline configurations. The assistant shows these machines and incorrect configuration items. It then intelligently gives you summaries and suggests fixes. You can correct the configurations using the assistant or WebUI. -- **Remote Pager**: This framework has the message channel, process management, memory swap, and memory prefetch modules for the interaction between the host and accelerator. The remote_pager abstraction layer simplifies device adaptation by enabling third-party accelerators to easily access the GMEM system. +## k8s-install -- **User APIs**: Users can directly use the memory map (mmap) of the OS to allocate the unified virtual memory. 
GMEM adds the flag (MMAP_PEER_SHARED) for allocating the unified virtual memory to the mmap system call. The libgmem user-mode library provides the hmadvise API of memory prefetch semantics to help users optimize the accelerator memory access efficiency.
+k8s-install is an online utility designed to provision cloud-native infrastructure on a wide range of Linux distributions and architectures. It also serves as a tool for creating offline installation packages. It supports installation, deployment, and secure updates of cloud-native infrastructure suites across multiple versions and architectures with just a few clicks, greatly reducing deployment and adaptation time while ensuring a standardized and traceable workflow. Currently, the following issues are present:
-
-## Native Support for Open Source Large Language Models (LLaMa and ChatGLM)
+
+- openEuler suffers from outdated cloud-native toolchain versions and lacks maintenance for multiple version baselines (such as Kubernetes 1.20, 1.25, and 1.29) within the same release. Consequently, released branches cannot be updated to major versions, requiring users to independently adapt and maintain later versions to meet business requirements.
+- Service parties commonly use tools like Ansible to deploy cloud infrastructure, often relying on non-standard packages, static binaries, and tarballs instead of distribution-managed packages. This practice inherently lacks support for secure CVE fixes, thereby posing security risks.
+- Version synchronization between offline and online installations is challenging. Furthermore, upgrading or modifying offline packages is difficult.
+- The lack of standardized installation and deployment processes results in inconsistent component versions, leading to incompatibilities and configuration differences that make issue resolution time-consuming and root cause analysis difficult.
+
+To address these issues, k8s-install comprises the following components:
+
+- The installer detects, installs, and updates the runC, containerd, Docker, and Kubernetes components and their dependent system libraries.
+- The configuration library stores configuration file templates for Docker and Kubernetes software.
+- The package library stores RPM packages for various versions and architectures of runC, containerd, Docker, Kubernetes, and their dependent system libraries.
+- The image library stores images required for Kubernetes software startup, such as various versions of kube-apiserver, kube-scheduler, etcd, and coredns. It also includes images for basic network plugins like Flannel.
+- The publisher encapsulates the latest code scripts, RPM packages, images, and configurations to create online and offline installation packages. Written in Bash, the main k8s-install program does not need to be compiled or linked. Its online installation package is encapsulated into an RPM package, built using spec files.
-
-The two model inference frameworks, llama.cpp and chatglm-cpp, are implemented based on C/C++. They allow users to deploy and use open source large language models on CPUs by means of model quantization. llama.cpp supports the deployment of multiple open source LLMs, such as LLaMa, LLaMa2, and Vicuna. It supports the deployment of multiple open source Chinese LLMs, such as ChatGLM-6B, ChatGLM2-6B, and Baichuan-13B.
+
+## k8s-install Installers
+
-- Implemented in GGML-based C/C++.
+k8s-install is a tool used to install and securely update cloud-native infrastructure.
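+
+For orientation, the following is a minimal sketch of how the two installers described in this section might be invoked. The package, script, and bundle names are assumptions for illustration only (the online installer is delivered as an RPM package and the publisher produces an offline **.tgz** bundle, as described in this and the following sections); refer to the k8s-install documentation for the actual command syntax.
+
+```bash
+# Online node: install the (assumed) k8s-install RPM from the configured Yum repository,
+# then run the Bash-based installer to detect, install, and configure the components.
+dnf install -y k8s-install            # hypothetical package name
+k8s-install                           # hypothetical installer entry point
+
+# Offline node: unpack the .tgz bundle produced by publish.sh and run the offline installer.
+tar -xzf k8s-install-offline.tgz      # hypothetical bundle name
+cd k8s-install-offline && ./k8s-install-offline   # hypothetical offline installer script
+```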
+
+Version adaptation: openEuler suffers from outdated cloud-native toolchain versions from the upstream, and released branches cannot be updated to major versions, requiring users to independently adapt and maintain later versions to meet business requirements. k8s-install supports multiple baseline versions to meet service requirements, preventing deployment failures or function exceptions caused by version incompatibilities.
+
+Improved deployment efficiency and standardization: The lack of standardized installation and deployment processes across departments or projects leads to inconsistent component versions, resulting in frequent adaptation issues and time-consuming resolutions. k8s-install enables standard deployment, ensuring component version compatibility, reducing fault locating time, and improving overall deployment efficiency.
+
+Enhanced security and maintainability: Service parties often deploy static binaries and tarballs, which lack support for secure CVE fixes. k8s-install can fix CVEs in a timely manner, ensuring system security and stability. In addition, the code for all components has been committed to both the company's internal repository and the openEuler repository, which facilitates version tracing and fault locating and enhances system maintainability.
+
+Promoting open source and collaboration: By establishing and actively maintaining a repository within the openEuler community, k8s-install promotes technology sharing, fosters the growth of the community ecosystem, attracts more developers, enhances project influence, and promotes the continuous progress of cloud-native technologies.
+
+The installers provide the following core functions:
-
-- They accelerate memory for efficient CPU inference through int4/int8 quantization, optimized KV cache, and parallel computing.
+
+- Multi-version support: It supports multiple baseline Kubernetes versions, including 1.20, 1.25, and 1.29, to meet the version requirements of various business scenarios and enable on-demand deployment.
+- Multi-architecture support: With compatibility for various architectures including x86_64, AArch64, and LoongArch64, it is suitable for diverse hardware environments, thereby expanding its application scope.
+- Multi-component management: It integrates installation and configuration of Go, runC, containerd, Docker, Kubernetes, and related components, streamlining the deployment of complex components and improving efficiency.
+- Online and offline deployment: An online installer k8s-install and an offline installer k8s-install-offline are available. Combined with the **publish.sh** publisher, these installers ensure flexible and stable deployment across various network conditions.
-
-## Features in the openEuler Kernel 6.6
+
+## k8s-install Publisher
-
-openEuler 24.09 runs on Linux kernel 6.6. It inherits the competitive advantages of community versions and innovative features released in the openEuler community.
+
+**publish.sh** is the publisher in the k8s-install tool chain. It has the following advantages:
-
-- **Folio-based memory management**: Folio-based Linux memory management is used instead of page. A folio consists of one or more pages and is declared in struct folio. Folio-based memory management is performed on one or more complete pages, rather than on PAGE_SIZE bytes. This alleviates compound page conversion and tail page misoperations, while decreasing the number of least recently used (LRU) linked lists and optimizing memory reclamation.
It allocates more continuous memory on a per-operation basis to reduce the number of page faults and mitigate memory fragmentation. Folio-based management accelerates large I/Os and improves throughput, and large folios consisting of anonymous pages or file pages are available. For AArch64 systems, a contiguous bit (16 contiguous page table entries are cached in a single entry within a translation lookaside buffer, or TLB) is provided to reduce system TLB misses and improve system performance. In openEuler 24.09, multi-size transparent hugepage (mTHP) allocation by anonymous shmem and mTHP lazyfreeing are available. The memory subsystem supports large folios, with a new sysfs control interface for allocating mTHPs by page cache and a system-level switch for feature toggling. +- Ensuring offline deployment: In network-restricted or offline environments, such as certain data centers or specialized production setups, direct access to online repositories is not possible. **publish.sh** can generate complete offline installation packages, ensuring successful cloud-native infrastructure deployment in these scenarios and broadening the application scope of the tools. +- Efficient version iteration and release management: With the continuous updates of the k8s-install tool and its components, **publish.sh** enables automated build, test, and release processes. This enhances the efficiency of version iteration, ensures timely and accurate delivery of new versions to users, and facilitates the ongoing evolution of the system. +- Improving the stability and reliability of resource acquisition: Online repositories can face issues with package or image availability due to network fluctuations or delayed updates. **publish.sh** fetches resources from official or trusted online repositories and ensures their stability and reliability through integration and testing, preventing deployment failures caused by resource issues. +- Facilitating multi-team collaboration and resource synchronization: In large projects, different teams may manage various components or modules. **publish.sh** can integrate and publish the updates from each team, ensuring resource consistency across teams. It facilitates collaboration and improves overall project progress and quality. -- **Multipath TCP (MPTCP)**: MPTCP is introduced to let applications use multiple network paths for parallel data transmission, compared with single-path transmission over TCP. This design improves network hardware resource utilization and intelligently allocates traffic to different transmission paths, thereby relieving network congestion and improving throughput. +**Functions** - MPTCP features the following performance highlights: +- Offline package generation and release: It pulls the latest software packages and images from online Yum and image repositories, combines them with the latest configuration files and installer, and packages them into an offline **.tgz** installation package to meet the deployment needs of offline environments. +- Online code update and release: It uploads the updated code to the Git repository, selects the configuration library and installer for source code packaging, uploads it to the OBS server for official compilation after local build testing, and publishes it to the Yum repository to achieve online resource update and synchronization. - - Selects the optimal path after evaluating indicators such as latency and bandwidth. 
- - Ensures hitless network switchover and uninterrupted data transmission when switching between networks. - - Uses multiple channels where data packets are distributed to implement parallel transmission, increasing network bandwidth. +## Trace IO - In the lab environment, the Rsync file transfer tool that adopts MPTCP v1 shows good transmission efficiency improvement. Specifically, a 1.3 GB file can be transferred in just 14.35s (down from 114.83s), and the average transfer speed is increased from 11.08 MB/s to 88.25 MB/s. In simulations of path failure caused by unexpected faults during transmission, MPTCP seamlessly switches data to other available channels, ensuring transmission continuity and data integrity. - In openEuler 24.09, MPTCP-related features in Linux mainline kernel 6.9 have been fully transplanted and optimized. +Trace IO (TrIO) is designed to optimize the on-demand loading of container images using EROFS over fscache. It achieves this by accurately tracking I/Os during container startup and efficiently orchestrating I/Os into container images to improve the cold startup process of containers. Compared with existing container image loading solutions, TrIO can significantly reduce the cold startup latency of container jobs and improve bandwidth utilization. TrIO comprises both kernel-space and user-space modules. The kernel-space module includes adaptations within the EROFS file system. The user-space module provides tools for capturing I/O traces during container runtime and offers an adaptation modification guidance based on Nydus snapshotter. This allows container users to leverage TrIO without modifying containerd and runC, ensuring compatibility with existing container management tools. +The core advantage of TrIO lies in its ability to aggregate I/O operations during on-demand container loading. By orchestrating the runtime I/O traces of container jobs, TrIO accurately fetches the necessary I/O data during container execution. This greatly improves the efficiency of pulling image data during container startup, thereby achieving low latency. +TrIO's functionality comprises two main aspects: capturing container runtime I/Os and utilizing the runtime I/Os during container startup. Container runtime I/Os are captured by using eBPF to trace I/O operations in the file system. This allows for obtaining the I/O read requests during container job startup, and orchestrating the corresponding data to build a minimal runtime image. During container startup, a custom snapshotter plugin module pulls the minimal runtime image using large I/O operations and imports it into the kernel. Subsequently, all I/O operations during container job execution will preferentially be read from this minimal runtime image. +Compared with the existing on-demand container loading solutions, TrIO has the following advantages: -**Large folio for ext4 file systems**: The IOzone performance can be improved by 80%, and the writeback process of the iomap framework supports batch block mapping. Blocks can be requested in batches in default ext4, optimizing ext4 performance in various benchmarks. For ext4 buffer I/O and page cache writeback operations, the buffer_head framework is replaced with the iomap framework that adds large folio support for ext4. In version 24.09, the performance of small buffered I/Os (≤ 4 KB) is optimized when the block size is smaller than the folio size, typically seeing a 20% performance increase. 
+- No I/O amplification: TrIO accurately captures runtime I/Os and uses them for job startup. It ensures that I/Os are not amplified during container job startup.
+- I/O aggregation: During container job startup, TrIO uses large I/O operations to pull all the necessary data for the startup process to the container node at once. This improves the efficiency of loading image data while reducing startup latency.
-
-- **CacheFiles failover**: In on-demand mode of CacheFiles, if the daemon breaks down or is killed, subsequent read and mount requests return an input/output error. The mount points can be used only after the daemon is restarted and the mount operations are performed again. For public cloud services, such I/O errors will be passed to cloud service users, which may impact job execution and endanger the overall system stability. The CacheFiles failover feature renders it unnecessary to remount the mount points upon daemon crashes. It requires only the daemon to restart, ensuring that these events are invisible to users.
+
+## Kuasar Integration with virtCCA
-
-**PGO for Clang**: Profile-guided optimization (PGO) is a feedback-directed compiler optimization technology that collects program runtime information to guide the compiler through optimization decision-making. Based on industry experience, PGO can be used to optimize large-scale data center applications (such as MySQL, Nginx, and Redis) and Linux kernels. Test results show that LLVM PGO provides over 20% performance increase on Nginx, in which a 10%+ performance increase is brought by kernel optimizations.
+
+The Kuasar confidential container leverages the virtCCA capability of Kunpeng 920 processors. It connects northbound to the iSulad container engine and southbound to Kunpeng virtCCA hardware, enabling seamless integration of Kunpeng confidential computing with the cloud-native technology stack.
+Kuasar fully utilizes the advantages of the Sandboxer architecture to deliver a high-performance, low-overhead confidential container runtime. Kuasar-sandboxer integrates the virtCCA capability of openEuler QEMU to manage the lifecycle of confidential sandboxes, allowing users to create confidential sandboxes on confidential hardware and ensuring containers run within a trusted execution environment (TEE).
+Kuasar-task offers a Task API for iSulad to manage lifecycles of containers within secure sandboxes. Container images are securely pulled into encrypted sandbox memory through Kuasar-task's image retrieval capability.
-
-## Embedded
+
+**Technical Constraints**
-
-openEuler 23.09 Embedded is equipped with an embedded virtualization base that is available in the Jailhouse virtualization solution or the OpenAMP lightweight hybrid deployment solution. You can select the most appropriate solution to suite your services. This version also supports the Robot Operating System (ROS) Humble version, which integrates core software packages such as ros-core, ros-base, and simultaneous localization and mapping (SLAM) to meet the ROS 2 runtime requirements.
+
+1. Remote attestation support of Kuasar is planned for integration via secGear in the SP versions of openEuler 24.03 LTS.
+2. Image encryption/decryption capabilities will be added after secGear integration.
-
-- **Southbound ecosystem**: openEuler Embedded Linux supports AArch64 and x86-64 chip architectures and related hardware such as RK3568, Hi3093, RK3399, RK3588, Raspberry Pi 4B, and x86-64 industrial computers.
It preliminarily supports AArch32 and RISC-V chip architectures based on QEMU simulation. +**Feature Description** -- **Embedded elastic virtualization base**: The converged elastic base of openEuler Embedded is a collection of technologies used to enable multiple OSs or runtimes to run on a system-on-a-chip (SoC). These technologies include bare metal, embedded virtualization, lightweight containers, LibOS, trusted execution environment (TEE), and heterogeneous deployment. +Kuasar has expanded its capabilities to include confidential container support while maintaining existing secure container functionality. You can enable this feature through iSulad runtime configuration. -- **Mixed criticality deployment framework**: The mixed-criticality (MICA) deployment framework is built on the converged elastic base. The unified framework masks the differences between the technologies used in the underlying converged elastic base, enabling Linux to be deployed together with other OSs. +- Native integration with the iSulad container engine preserves Kubernetes ecosystem compatibility. +- Hardware-level protection via Kunpeng virtCCA technology ensures confidential workloads are deployed in trusted environments. -- **Northbound ecosystem**: More than 350 common embedded software packages can be built using openEuler. The ROS 2 Humble version is supported, which contains core software packages such as ros-core, ros-base, and SLAM. The ROS SDK is provided to simplify embedded ROS development. The soft real-time capability based on Linux kernel 5.10 allows for response to soft real-time interrupts within microseconds. DSoftBus and HiChain for point-to-point authentication of OpenHarmony have been integrated to implement interconnection between openEuler-based embedded devices and between openEuler-based embedded devices and OpenHarmony-based devices. +## vKernel for Advanced Container Isolation -- **UniProton**: This hard RTOS features ultra-low latency and flexible MICA deployments. It is suited for industrial control because it supports both microcontroller units and multi-core CPUs. +The virtual kernel (vKernel) architecture represents a breakthrough in container isolation, addressing the inherent limitations of shared-kernel architectures while preserving container performance efficiency. +vKernel creates independent system call tables and file permission tables to enhance foundational security. It implements isolated kernel parameters, enabling containers to customize both macro-level resource policies and micro-level resource configurations. By partitioning kernel data ownership, leveraging hardware features to protect kernel privilege data, and building isolated kernel page tables for user data protection, vKernel further reinforces security. Future iterations will explore kernel data related to performance interference to strengthen container performance isolation capabilities. -## SysCare +## secGear with Secure Key Hosting for Confidential Container Images -SysCare is a system-level hotfix software that provides security patches and hot fixing for OSs. It can fix system errors without restarting hosts. By combining kernel-mode and user-mode hot patching, SysCare takes over system repair, allowing users to focus on core services. In the future, OS hot upgrade will be provided to further free O&M users and improve O&M efficiency. 
+The remote attestation service of secGear provides secure key hosting capabilities for confidential container images, establishing a management system that encompasses secure key storage, dynamic fine-grained authorization, and cross-environment collaborative distribution. By integrating zero-trust policies and automated auditing capabilities, secGear ensures data confidentiality and operational traceability while optimizing the balance between key governance and operational costs. This delivers a unified "encrypt by default, decrypt on demand" security framework for cloud-native environments. +secGear combines remote attestation technologies to build a layered key hosting architecture. -**Patches built in containers**: +**Attestation service** -- eBPF is used to monitor the compiler process. In this way, hot patch change information can be obtained in pure user mode without creating character devices, and users can compile hot patches in multiple containers concurrently. +A centralized key hosting server leverages the remote attestation mechanism of TEEs to securely store and manage image encryption keys throughout their lifecycle. It offers authorized users granular policy configuration interfaces for tailored access control. -- Users can install different RPM packages (syscare-build-kmod or syscare-build-ebpf) to use ko or eBPF. The syscare-build process automatically adapts to the corresponding underlying implementation. +**Attestation agent** -## GCC for openEuler +Lightweight attestation agent components deployed within confidential compute nodes expose local RESTful APIs. The confidential container runtime invokes these APIs to validate the integrity of the confidential execution environment and establish secure dynamic sessions with the server, enabling encrypted key transmission. -GCC for openEuler is developed based on the open source GCC 12.3 and supports features such as automatic feedback-directed optimization (FDO), software and hardware collaboration, memory optimization, SVE, and vectorized math libraries. +## RA-TLS -- The default language is upgraded from C14/C++14 to C17/C++17, and more hardware architecture features such as the Armv9-A architecture and x86 AVX512-FP16 are supported. +RA-TLS integrates remote attestation of confidential computing into TLS negotiation procedures, ensuring secure transmission of sensitive data into TEEs while simplifying secure channel establishment for confidential computing workloads, thereby lowering adoption barriers. -- GCC for openEuler supports structure optimization and instruction selection optimization, fully utilizing the hardware features of the Arm architecture to achieve higher operating efficiency. In the benchmark tests such as SPEC CPU 2017, GCC for openEuler delivers much better performance than GCC 10.3 of the upstream community. +**One-way authentication** -- GCC for openEuler also supports automatic FDO to greatly improve the performance of the MySQL database at the application layer. +In deployments where TLS servers operate within confidential environments and clients reside in regular environments, RA-TLS validates the legitimacy of the server confidential environment and applications through remote attestation before TLS key negotiation. 
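+
+Many RA-TLS designs carry the attestation evidence inside the X.509 certificate that the server presents during the handshake (for example, in a custom extension); whether this implementation follows that pattern is an assumption made only for the illustration below. From a regular client, the certificate offered by a server in the confidential environment can be dumped and its extensions inspected with standard OpenSSL tooling:
+
+```bash
+# Fetch the certificate presented by the TLS server (placeholder address) and print it in
+# text form; attestation-related data embedded by an RA-TLS-style design would appear among
+# the X.509 extensions. This inspects the certificate only; it does not perform attestation.
+openssl s_client -connect server.example.com:443 -showcerts </dev/null 2>/dev/null \
+  | openssl x509 -noout -text
+```
+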
-## PAC/BTI support for AArch64
+**Two-way authentication**
-
-PAC (Pointer Authentication) and BTI (Branch Target Identification) are security feature instructions introduced by Arm in Armv8.3-A and Armv8.5-A, respectively, aimed at enhancing resistance against Return-Oriented Programming (ROP) and Jump-Oriented Programming (JOP) attacks.
+
+For scenarios where both TLS servers and clients operate within confidential environments, RA-TLS enforces mutual verification of peer environments and applications via remote attestation before TLS key negotiation.
-
-PAC confirms that the target address has not been modified prior to use by signing and verifying jump pointers, thereby preventing ROP. BTI restricts branch/jump (BR/BLR) by adding entry identifiers, preventing the modification of jump pointers to execute arbitrary code. PAC/BTI is compatible with AArch64 platforms, and software that supports PAC/BTI can still run on hardware that does not include these two security feature extensions.
+
+**Technical Constraints**
-
-When utilizing PAC/BTI features, it is essential to ensure that the running AArch64 platform supports PAC and BTI at the hardware level. At the software level, it is necessary to confirm that the corresponding kernel features are selected, and that the parameters arm64.nopauth or arm64.nobti are not passed during kernel boot, with the "-mbranch-protection=standard" flag added during compilation. To disable PAC or BTI, the parameters arm64.nopauth or arm64.nobti can be added to the kernel boot command line.
+
+Confidential computing environments (such as virtCCA-enabled configurations) must maintain network accessibility.
-
-## A-Ops
+
+## openAMDC for High-Performance In-Memory Data Caching and KV Storage
-
-The amount of data generated by IT infrastructure and applications sees a 2- to 3-fold increase every year. The application of big data and machine learning technologies is maturing, driving the generation of efficient and intelligent O&M systems to help enterprises reduce costs and improve efficiency. A-Ops is an intelligent O&M framework that supports basic capabilities such as CVE management, exception detection (database scenario), and quick troubleshooting to reduce O&M costs.
+
+openAMDC stands for open advanced in-memory data cache. It stores and caches data in memory to accelerate access, enhance application concurrency, and minimize latency, and it can serve as both a message broker and an in-memory database.
-
-- **Intelligent patch management**: Supports the patch service, kernel hot fix, intelligent patch inspection, and Hybrid management of cold and hot patches.
+
+**Feature Description**
-
-- **Exception detection**: Detects network I/O delays, packet loss, interruption, and high disk I/O loads in MySQL and openGauss service scenarios.
+
+- Core capabilities: openAMDC, compatible with the Redis Serialization Protocol (RESP), delivers comprehensive caching for strings, lists, hashes, and sets while supporting active-standby, cluster, and sentinel deployment options.
+- Architectural features: openAMDC employs a multi-threaded architecture to significantly enhance in-memory caching performance, while integrating a hot-cold data tiering mechanism to enable hybrid memory-drive storage.
+  1. Multi-thread architecture: During initialization, openAMDC spawns multiple worker threads, each running an event loop for network monitoring. By enabling SO_REUSEPORT on its listening sockets, openAMDC lets the kernel load-balance incoming connections across the threads sharing the same port.
This approach eliminates resource contention from shared listening sockets through dedicated per-thread socket queues, substantially improving concurrency throughput. + 2. Data exchange architecture: Built upon the multi-threaded foundation, openAMDC implements data exchange capabilities supporting hybrid memory-drive storage, effectively optimizing total cost of ownership while maintaining performance efficiency. -- **Configuration source tracing**: Provides cluster configuration collection and baseline capabilities to implement manageable and controllable configurations. The configuration of the entire cluster is checked and compared with the baseline in real time to quickly identify unauthorized configuration changes and locate faults. +## OpenStack Antelope -## A-Ops gala +OpenStack is an open source project that provides a cloud computing management platform. It aims to deliver scalable and flexible cloud computing services to support private and public cloud environments. -The gala project will fully support fault diagnosis in Kubernetes scenarios, including application drill-down analysis, observable microservice and DB performance, cloud-native network monitoring, cloud-native performance profiling, process performance diagnosis, and minute-level diagnosis of five types of OS issues (network, drive, process, memory, and scheduling). +**Feature Description** -- **Easy deployment of the Kubernetes environment**: gala-gopher can be deployed as a DaemonSet, and a gala-gopher instance is deployed on each worker node. gala-spider and gala-anteater are deployed as containers on the Kubernetes management node. +OpenStack offers a series of services and tools to help build and manage public, private, and hybrid clouds. The service types include: -- **Application drill-down analysis**: Diagnoses subhealth problems in cloud native scenarios and demarcates problems between applications and the cloud platform in minutes. +- **Compute service**: creates, manages, and monitors VMs. It empowers users to quickly create, deploy, and destroy VMs and container instances, enabling flexible management and optimal utilization of computing resources. +- **Storage service**: provides object storage, block storage, file storage, and other storage. Block storage services, such as Cinder, allow users to dynamically allocate and manage persistent block storage devices, such as VM drives. Object storage services, such as Swift, provide a scalable and distributed object storage solution, facilitating storage of large amounts of unstructured data. +- Network service: empowers users to create, manage, and monitor virtual networks, and provides capabilities for topology planning, subnet management, and security group configuration. These features enable building of complex network structures while ensuring security and reliability. +- **Identity authentication service**: provides comprehensive identity management and access control capabilities, including user, role, and permissions management. It ensures secure access and management of cloud resources while safeguarding data confidentiality and integrity. +- **Image service**: enables image creation, management, and sharing through image uploading, downloading, and deletion. Users can perform management operations on images with ease and quickly deploy VM instances. +- **Orchestration service**: automates application deployment and management, and facilitates service collaboration and integration. 
Orchestration services like Heat help streamline application deployment and management by automatically performing related tasks based on user-defined templates.
-
-- **Cloud-native application performance profiling**: Provides a non-intrusive and zero-modification cross-stack profiling analysis tool and can connect to the common UI front end of Pyroscope.
+
+## openEuler DevStation
-
-- **Cloud-native network monitoring**: Provides TCP, socket, and DNS monitoring for Kubernetes scenarios for more refined network monitoring.
+
+openEuler DevStation is a Linux desktop OS built for developers, streamlining workflows while ensuring ecosystem compatibility. The latest release delivers major upgrades across three dimensions: supercharged toolchain, smarter GUI, and extended hardware support. These improvements create a more powerful, secure, and versatile development platform.
+
+**Feature Description**
+
+- Developer-centric community toolchain
+
+  1. Comprehensive development suite: Pre-configured with VSCodium (an open source, telemetry-free IDE) and development environments for major languages including Python, Java, Go, Rust, and C/C++.
+  2. Enhanced tool ecosystem: Features innovative tools like oeDeploy for seamless deployment, epkg for extended package management, DevKit utilities, and an AI-powered coding assistant, delivering complete workflow support from environment configuration to production-ready code.
+  3. oeDevPlugin Extension: A specialized VSCodium plugin for openEuler developers, providing visual issue/PR dashboards, quick repository cloning and PR creation, automated code quality checks (such as license headers and formatting), and real-time community task tracking.
+  4. Intelligent assistant: Generates code from natural language prompts, creates API documentation with a few clicks, and explains Linux commands, with a privacy-focused offline operation mode.
+
+- Enhanced GUI and productivity suite
+
+  1. Smart navigation and workspace: Features an adaptive navigation bar that intelligently organizes shortcuts for development tools, system utilities, and common applications—all with customizable workspace layouts.
+  2. Built-in productivity applications: Comes with the Thunderbird email client pre-installed for seamless office workflows.

-- **Process performance diagnosis**: Provides process-level performance problem diagnosis for middleware (such as MySQL and Redis) in cloud native scenarios, monitors process performance KPIs and process-related system-layer metrics (such as I/O, memory, and TCP), and detects process performance KPI exceptions and system-layer metrics that affect the KPIs.
+- Hardware compatibility upgrades

-## sysMaster
+    1. Notebook-ready support: Comprehensive compatibility with modern laptop components, including precision touchpads, Wi-Fi 6/Bluetooth stacks, and multi-architectural drivers, delivering 20% faster performance for AI and rendering workloads.
+    2. Raspberry Pi DevStation image: Provides an Arm-optimized development environment out of the box, featuring a lightweight desktop environment with pre-installed IoT development tools (VSCodium and oeDevPlugin) and accelerated performance for Python scientific computing libraries like NumPy and pandas.

-sysMaster is a collection of ultra-lightweight and highly reliable service management programs. It provides an innovative implementation of PID 1 to replace the conventional init process. Written in Rust, sysMaster is equipped with fault monitoring, second-level self-recovery, and quick startup capabilities, which help improve OS reliability and service availability. In version 0.5.0, sysMaster can manage system services in container and VM scenarios.
+## oeDeploy for Simplified Software Deployment

-- Added the devMaster component to manage device hot swap.
+oeDeploy is a lightweight yet powerful deployment tool that accelerates environment setup across single-node and distributed systems.

-- Added the live update and hot reboot functions to sysMaster.
+**Feature Description**

-- Allows PID 1 to run on a VM.
-
-## utsudo
-
-utsudo uses Rust to reconstruct sudo to deliver a more efficient, secure, and flexible privilege escalation tool. The involved modules include the common utility, overall framework, and function plugins.
-
-- **Access control**: Restricts the commands that can be executed by users as required, and specifies the required authentication method.
-
-- **Audit log**: Records and traces the commands and tasks executed by each user using utsudo.
-
-- **Temporary privilege escalation**: allows common users to enter their passwords to temporarily escalate to the super user to execute specific commands or tasks.
-
-- **Flexible configuration**: Allows users to set arguments such as command aliases, environment variables, and execution parameters to meet complex system management requirements.
-
-## utshell
-
-utshell is a new shell that inherits the usage habits of Bash. It can interact with users through command lines, specifically, responding to user operations to execute commands and provide feedback. In addition, it can execute automated scripts to facilitate O&M.
-
-- **Command execution**: Runs commands deployed on the user's machine and sends return values to the user.
-
-- **Batch processing**: Automates task execution using scripts.
-
-- **Job control**: Concurrently executes multiple user commands as background jobs, and manages and controls the tasks that are executed concurrently.
-
-- **Historical records**: Records the commands entered by users.
-
-- **Command aliases**: Allows users to create aliases for commands to customize their operations.
- -## migration-tools - -migration-tools is oriented to users who want to quickly, smoothly, stably, and securely migrate services to the openEuler OS. migration-tools consists of the following modules: - -- **Server module**: It is developed on the Python Flask Web framework. As the core of migration-tools, it receives task requests, processes execution instructions, and distributes the instructions to each Agent. - -- **Agent module**: It is installed in the OS to be migrated to receive task requests from the Server module and perform migration. - -- **Configuration module**: It reads configuration files for the Server and Agent modules. - -- **Log module**: It records logs during migration. - -- **Migration evaluation module**: It provides evaluation reports such as basic environment check, software package comparison analysis, and ABI compatibility check before migration, providing a basis for users' migration work. - -- **Migration function module**: It provides migration with a few clicks, displays the migration progress, and checks the migration result. - -## DDE - -DDE focuses on delivering polished user interaction and visual design. DDE is powered by independently developed core technologies for desktop environments and provides login, screen locking, desktop and file manager, launcher, dock, window manager, control center, and more functions. As one of the preferred desktop environments, DDE features a user-friendly interface, elegant interaction, high reliability, and privacy protection. You can use DDE to work more creatively and efficiently or enjoy media entertainment while keeping in touch with friends. - -## Kmesh - -Based on the programmable kernel, Kmesh offloads service governance to the OS, thus shortening the inter-service communication latency to only 1/5 of the industry average. - -- Kmesh can connect to a mesh control plane (such as Istio) that complies with the Dynamic Resource Discovery (xDS) protocol. - -- **Traffic orchestration**: Polling and other load balancing policies, L4 and L7 routing support, and backend service policies available in percentage mode are supported. - -- **Sockmap for mesh acceleration**: Take the typical service mesh scenario as an example. When a sockmap is used, the eBPF program takes over the communication between service containers and Envoy containers. As a result, the communication path is shortened to achieve mesh acceleration. The eBPF program can also accelerate the communication between pods on the same node. - -## RISC-V QEMU - -openEuler 23.09 is released with support for the RISC-V architecture. openEuler 23.09 aims to provide basic support for upper-layer applications and is highly customizable, flexible, and secure. It provides a stable and reliable operating environment for the computing platform of the RISC-V architecture, facilitating the installation and verification of upper-layer applications and promoting the enrichment and quality improvement of the software ecosystem in the RISC-V architecture. - -- The OS kernel is updated to version 6.4.0, which is consistent with mainstream architectures. - -- It features a stable base, including core functions such as processor management, memory management, task scheduling, and device drivers, as well as common utilities. - -## DIM - -Dynamic Integrity Measurement (DIM) measures key data (such as code segments) in memory during program running and compares the measurement result with the reference value to determine whether the data in memory has been tampered with. 
In this way, attacks can be detected and countermeasures can be taken. - -- Measures user-mode processes, kernel modules, and code segment in the kernel memory. - -- Extends measurements to the PCR register of the TPM 2.0 chip for remote attestation. - -- Configures measurements and verifies measurement signatures. - -- Generates and imports measurement baseline data using tools, and verifies baseline data signatures. - -- Supports the SM3 algorithm. - -## Kuasar - -Kuasar is a container runtime that supports unified management of multiple types of sandboxes. It supports multiple mainstream sandbox isolation technologies. Based on the Kuasar container runtime combined with the iSulad container engine and StratoVirt virtualization engine, openEuler builds lightweight full-stack self-developed secure containers for cloud native scenarios, delivering key competitiveness of ultra-low overhead and ultra-fast startup. - -Kuasar 0.1.0 supports the StratoVirt lightweight VM sandbox and StratoVirt secure container instances created through Kubernetes+iSulad. - -- Compatible with the Kubernetes ecosystem when the iSulad container engine interconnects with the Kuasar container. - -- Secure container sandboxes based on the StratoVirt lightweight VM sandbox. - -- StratoVirt secure containers for precise resource restriction and management. - -## sysBoost - -SsysBoost is a tool for optimizing the system microarchitecture for applications. The optimization involves assembly instructions, code layout, data layout, memory huge pages, and system calls. - -- **Binary file merging**: Only full static merging is supported. Applications and their dependent dynamic libraries are merged into one binary file, and segment-level reordering is performed. Multiple discrete code segments or data segments are merged into one to improve application performance. - -- **sysBoost daemon**: sysBoost registers with systemd to enable out-of-the-box optimization. systemd will start the sysBoost daemon after the system is started. Then, the sysBoost daemon reads the configuration file to obtain the binary files to be optimized and the corresponding optimization methods. - -- RTO binary file loading kernel module: This binary loading module is added to automatically load the optimized binary file when the kernel loads binary files. - -- **Huge page pre-loading of binary code or data segments**: sysBoost provides the huge page pre-loading function. After binary optimization is complete, sysBoost immediately loads the content to the kernel as a huge page. When an application is started, sysBoost maps the pre-loaded content to the user-mode page table in batches to reduce page faults and memory access delay of the application, thereby improving the application startup speed and running efficiency. - -## CTinspector - -CTinspector is a language VM running framework developed by China Telecom Cloud Technology Co., Ltd based on the eBPF instruction set. The CTinspector running framework enables application instances to be quickly expanded to diagnose network performance bottlenecks, storage I/O hotspots, and load balancing, improving the stability and timeliness of diagnosis during system running. - -- CTinspector uses a packet VM of the eBPF instruction set. The minimum size of the packet VM is 256 bytes, covering all VM components, including registers, stack segments, code segments, data segments, and page tables. - -- The packet VM supports independent migration. 
That is, the code in the packet VM can invoke migrate kernel function to migrate the packet VM to a specified node. - -- The packet VM also supports resumable execution. That is, after being migrated to another node, the packet VM can continue to execute the next instruction from the position where it has been interrupted on the previous node. - -## CVE-ease - -CVE-ease is an innovative Common Vulnerabilities and Exposures (CVE) platform developed by China Telecom Cloud Technology Co., Ltd It collects various CVE information released by multiple security platforms and notifies users of the information through multiple channels, such as email, WeChat, and DingTalk. The CVE-ease platform aims to help users quickly learn about and cope with vulnerabilities in the system. In addition to improving system security and stability, users can view CVE details on the CVE-ease platform, including vulnerability description, impact scope, and fixing suggestions, and select a fixing solution as required. - -CVE-ease has the following capabilities: - -- Dynamically tracks CVEs on multiple platforms in real time and integrates the information into the CVE database. - -- Extracts key information from the collected CVE information and updates changed CVE information in real time. - -- Automatically maintains and manages the CVE database. - -- Queries historical CVE information based on various conditions in interactive mode. - -- Reports historical CVE information in real time through WeCom, DingTalk, and email. - -## PilotGo - -The PilotGo O&M management platform is a plugin-based O&M management tool developed by the openEuler community. It adopts a lightweight modular design of functional modules that can be iterated and evolved independently, while ensuring stability of core functions. Plugins are used to enhance platform functions and remove barriers between different O&M components, implementing global status awareness and automation. - -PilotGo has the following core functional modules: - -- **User management**: Manages users by group based on the organizational structure, and imports existing platform accounts, facilitating migration. - -- **Permission management**: Supports RBAC-based permission management, which is flexible and reliable. - -- **Host management**: visualized front-end status, software package management, service management, and kernel parameter optimization. - -- **Batch management**: Concurrently performs O&M operation, which is stable and efficient. - -- **Log audit**: Traces and records user and plugin change operations, facilitating issue backtracking and security audit. - -- **Alarm management**: Detects platform exceptions in real time. - -- **Real-time exception detection**: Extends platform functions and associates plugins to realize automation and reduce manual intervention. - -## CPDS - -The wide application of cloud native technologies makes modern application deployment environments more and more complex. The container architecture provides flexibility and convenience, but also brings more monitoring and maintenance challenges. Container Problem Detect System (CPDS) is developed to ensure reliability and stability of containerized applications. - -- **Cluster information collection**: Node agents are implemented on host machines to monitor key container services using systemd, initv, eBPF, and other technologies. 
Cross-NS agents are configured on nodes and containers in non-intrusive mode to keep track of the application status, resource consumption, key system function execution status, and I/O execution status of containers. The collected information covers network, kernel, and drive LVM of the nodes. - -- **Cluster exception detection**: Raw data from each node is collected to detect exceptions based on exception rules and extract key information. Then, the detection results and raw data are uploaded online and saved permanently. - -- **Fault/Sub-Health diagnosis on nodes and service containers**: Nodes and service containers are diagnosed based on exception detection data. Diagnosis results are saved permanently and can be displayed on the UI for users to view real-time and historical diagnosis data. - -## EulerMaker Build System - -EulerMaker is a package build system. It converts source code into binary packages and allows developers to assemble and tailor scenario-specific OSs based on their requirements. It provides incremental/full build, package layer tailoring, and image tailoring capabilities. - -- Incremental/Full build: Analyzes the impact based on software changes and dependencies, obtains the list of packages to be built, and delivers parallel build tasks based on the dependency sequence. - -- **Build dependency query**: Provides a software package build dependency table in a project, and filters and collects statistics on software package dependencies and depended software packages. - -- Layer tailoring: In a build project, developers can select and configure layer models to tailor patches, build dependencies, installation dependencies, and compilation options for software packages. - -- **Image tailoring**: Developers can configure the repository source to generate ISO, QCOW2, and container OS images, and tailor the list of software packages for the images. +- Universal deployment: Seamlessly handles both standalone and clustered deployments through automation, eliminating manual processes and slashing setup times. +- Pre-built software solutions: Comes with optimized deployment solutions for industry-standard software, with continuous expansion through a growing plugin ecosystem. +- Customizable architecture: Features an open plugin framework that empowers developers to build tailored deployment solutions aligned with their unique technical requirements. +- Developer-centric design: Combines robust CLI capabilities with upcoming GUI tools and a plugin marketplace, letting developers concentrate on innovation rather than infrastructure.
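+
+To make the plugin framework concept above more concrete, here is a purely illustrative Python sketch of how a plugin-based deployment tool can be organized. This is not oeDeploy's actual interface; every name in it is hypothetical.
+
+```python
+from dataclasses import dataclass
+from typing import Callable, Dict
+
+# Hypothetical plugin model: each plugin bundles install/uninstall actions
+# for one software stack and receives user-supplied configuration.
+@dataclass
+class DeployPlugin:
+    name: str
+    install: Callable[[dict], None]
+    uninstall: Callable[[dict], None]
+
+REGISTRY: Dict[str, DeployPlugin] = {}
+
+def register(plugin: DeployPlugin) -> None:
+    """Add a plugin to the in-memory registry."""
+    REGISTRY[plugin.name] = plugin
+
+def run(action: str, name: str, config: dict) -> None:
+    """Dispatch 'install' or 'uninstall' to a registered plugin."""
+    getattr(REGISTRY[name], action)(config)
+
+# A toy plugin that only prints what a real deployment step would do.
+register(DeployPlugin(
+    name="demo-service",
+    install=lambda cfg: print(f"installing demo-service on {cfg['host']}"),
+    uninstall=lambda cfg: print("removing demo-service"),
+))
+
+run("install", "demo-service", {"host": "192.0.2.10"})
+```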