From 89a3c511cf3346c39a831ae36429f2b256779dc6 Mon Sep 17 00:00:00 2001 From: lucia <11452490+luciafu@user.noreply.gitee.com> Date: Fri, 27 Oct 2023 15:39:30 +0000 Subject: [PATCH] update news/README.md. Signed-off-by: lucia <11452490+luciafu@user.noreply.gitee.com> --- news/README.md | 1061 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 1061 insertions(+) diff --git a/news/README.md b/news/README.md index 630d92c..d8df5f4 100644 --- a/news/README.md +++ b/news/README.md @@ -5,6 +5,1067 @@ * [2022 年](2022.md) * [2023 年 - 上半年](2023-1st-half.md) +## 20231027:第 65 期 + +### 内核动态 + +#### RISC-V 架构支持 + +**[v2: perf vendor events riscv: add StarFive Dubhe-90 JSON file](http://lore.kernel.org/linux-riscv/20231027073925.1523843-1-jisheng.teoh@starfivetech.com/)** + +> StarFive Dubhe-90 supports raw event id 0x00 - 0x22. +> The raw events are enabled through PMU node of the DT binding. +> +> Example of PMU DT node: +> pmu { +> compatible = "riscv,pmu"; +> riscv,raw-event-to-mhpmcounters = +> /* Event ID 1-31 */ +> <0x00 0x00 0xFFFFFFFF 0xFFFFFFE0 0x00007FF8>, +> /* Event ID 32-33 */ +> <0x00 0x20 0xFFFFFFFF 0xFFFFFFFE 0x00007FF8>, +> /* Event ID 34 */ +> <0x00 0x22 0xFFFFFFFF 0xFFFFFF22 0x00007FF8>; +> }; +> + +**[v6: scripts/gdb: add lx_current support for riscv](http://lore.kernel.org/linux-riscv/20231026233837.612405-1-debug@rivosinc.com/)** + +> csr_sscratch CSR holds current task_struct address when hart is in +> user space. Trap handler on entry spills csr_sscratch into "tp" (x2) +> register and zeroes out csr_sscratch CSR. Trap handler on exit reloads +> "tp" with expected user mode value and place current task_struct address +> again in csr_sscratch CSR. 
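The csr_sscratch/tp handshake described above is what lets a debugger find `current` from either privilege mode. A minimal Python model of that invariant (class and method names here are illustrative, not kernel or gdb-script names):

```python
# Illustrative model of the RISC-V sscratch/tp convention described above.
# Invariant: in user mode, sscratch holds the current task_struct address and
# tp holds the user's value; in kernel mode, tp holds current and sscratch is 0.

class Hart:
    def __init__(self, task_struct_addr, user_tp):
        self.sscratch = task_struct_addr  # kernel parked current here on exit
        self.tp = user_tp                 # user-mode thread pointer
        self.in_kernel = False

    def trap_entry(self):
        # Trap handler spills sscratch into tp and zeroes sscratch.
        self.tp, self.sscratch = self.sscratch, 0
        self.in_kernel = True

    def trap_exit(self, user_tp):
        # Reload tp with the user value; park current back in sscratch.
        self.sscratch = self.tp
        self.tp = user_tp
        self.in_kernel = False

    def lx_current(self):
        # What a debugger helper would read to locate task_struct.
        return self.tp if self.in_kernel else self.sscratch

hart = Hart(task_struct_addr=0xFFFF_FFE0_1234_0000, user_tp=0x40_0000)
assert hart.lx_current() == 0xFFFF_FFE0_1234_0000   # user mode: via sscratch
hart.trap_entry()
assert hart.tp == 0xFFFF_FFE0_1234_0000 and hart.sscratch == 0
hart.trap_exit(user_tp=0x40_0000)
assert hart.sscratch == 0xFFFF_FFE0_1234_0000 and hart.tp == 0x40_0000
```

Either way the debugger looks, exactly one of `tp`/`sscratch` carries `current`, which is the property the gdb helper relies on.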
+> + +**[v1: riscv: add support for SBI Supervisor Software Events](http://lore.kernel.org/linux-riscv/20231026143122.279437-1-cleger@rivosinc.com/)** + +> The SBI Supervisor Software Events (SSE) extensions provides a mechanism +> to inject software events from an SBI implementation to supervisor +> software such that it preempts all other supervisor level traps and +> interrupts [1]. +> + +**[v1: genirq/matrix: Dynamic bitmap allocation](http://lore.kernel.org/linux-riscv/20231026101957.320572-1-bjorn@kernel.org/)** + +> Some (future) users of the irq matrix allocator, do not know the size +> of the matrix bitmaps at compile time. +> +> To avoid wasting memory on unnecessary large bitmaps, size the bitmap +> at matrix allocation time. +> + +**[v2: RISC-V: ACPI: Add external interrupt controller support](http://lore.kernel.org/linux-riscv/20231025202344.581132-1-sunilvl@ventanamicro.com/)** + +> This series adds support for the below ECR approved by ASWG. +> 1) MADT - https://drive.google.com/file/d/1oMGPyOD58JaPgMl1pKasT-VKsIKia7zR/view?usp=sharing +> +> The series primarily enables irqchip drivers for RISC-V ACPI based +> platforms. +> + +**[v1: soc: sifive: ccache: Add StarFive JH7100 support](http://lore.kernel.org/linux-riscv/CAJM55Z_pdoGxRXbmBgJ5GbVWyeM1N6+LHihbNdT26Oo_qA5VYA@mail.gmail.com/)** + +> This series adds support for the StarFive JH7100 SoC to the SiFive cache +> controller driver. The JH7100 was a "development version" of the JH7110 +> used on the BeagleV Starlight and VisionFive V1 boards. +> + +**[v1: RISC-V: provide some accelerated cryptography implementations using vector extensions](http://lore.kernel.org/linux-riscv/20231025183644.8735-1-jerry.shih@sifive.com/)** + +> This patch set is based on Heiko Stuebner's work at: +> +> The implementations reuse the perl-asm scripts from OpenSSL[2] with some +> changes adapting for the kernel crypto framework. +> The perl-asm scripts generate the opcodes into `.S` files instead of asm +> mnemonics. 
The reason for using opcodes is because the assembler hasn't +> supported the vector-crypto extensions yet. +> + +**[v1: Linux RISC-V AIA Preparatory Series](http://lore.kernel.org/linux-riscv/20231025142820.390238-1-apatel@ventanamicro.com/)** + +> The first three patches of the v11 Linux RISC-V AIA series can be +> merged independently hence sending these patches as an independent +> perparatory series. +> (Refer, https://www.spinics.net/lists/devicetree/msg643764.html) +> + +**[v1: riscv: CONFIG_EFI should not depend on CONFIG_RISCV_ISA_C](http://lore.kernel.org/linux-riscv/20231024192648.25527-1-bjorn@kernel.org/)** + +> UEFI/PE mandates that the kernel Image starts with "MZ" ASCII +> (0x5A4D). Convenient enough, "MZ" is a valid compressed RISC-V +> instruction. This means that a non-UEFI loader can simply jump to +> "code0" in the Image header [1] and start executing. +> + +**[GIT PULL: KVM/riscv changes for 6.7](http://lore.kernel.org/linux-riscv/CAAhSdy2dg61z7=vsrOqwxHoV1GBvaAzcdUY4o6pLmTNM0WV5ig@mail.gmail.com/)** + +> We have the following KVM RISC-V changes for 6.7: +> 1) Smstateen and Zicond support for Guest/VM +> 2) Virtualized senvcfg CSR for Guest/VM +> 3) Added Smstateen registers to the get-reg-list selftests +> 4) Added Zicond to the get-reg-list selftests +> 5) Virtualized SBI debug console (DBCN) for Guest/VM +> 6) Added SBI debug console (DBCN) to the get-reg-list selftests +> + +**[v3: RISC-V: Add MMC support for TH1520 boards](http://lore.kernel.org/linux-riscv/20231023-th1520-mmc-v3-0-abc5e7491166@baylibre.com/)** + +> This series adds support for the MMC controller in the T-Head TH1520 +> SoC, and it enables the eMMC and microSD slot on both the BeagleV +> Ahead and the Sipeed LicheePi 4A. +> +> I tested on top of v6.6-rc6 with riscv defconfig. I was able to boot +> both the Ahead [1] and LPi4a [2] from eMMC. 
The following prerequisites +> are required: +> + +**[v11: Linux RISC-V AIA Support](http://lore.kernel.org/linux-riscv/20231023172800.315343-1-apatel@ventanamicro.com/)** + +> The RISC-V AIA specification is ratified as-per the RISC-V international +> process. The latest ratified AIA specifcation can be found at: +> https://github.com/riscv/riscv-aia/releases/download/1.0/riscv-interrupts-1.0.pdf +> + +**[v1: riscv: Introduce Pseudo NMI](http://lore.kernel.org/linux-riscv/20231023082911.23242-1-luxu.kernel@bytedance.com/)** + +> Sorry to resend this patch series as I forgot to Cc the open list before. +> Below is formal content. +> +> The existing RISC-V kernel lacks an NMI mechanism as there is still no +> ratified resumable NMI extension in RISC-V community, which can not +> satisfy some scenarios like high precision perf sampling. There is an +> incoming hardware extension called Smrnmi which supports resumable NMI +> by providing new control registers to save status when NMI happens. +> However, it is still a draft and requires privilege level switches for +> kernel to utilize it as NMIs are automatically trapped into machine mode. +> + +**[v3: RESEND: Support Andes PMU extension](http://lore.kernel.org/linux-riscv/20231023004100.2663486-1-peterlin@andestech.com/)** + +> This patch series introduces the Andes PMU extension, which serves +> the same purpose as Sscofpmf. In this version we use FDT-based +> probing and the CONFIG_ANDES_CUSTOM_PMU to enable perf sampling +> and filtering support. +> + +**[v1: riscv: dts: thead: convert isa detection to new properties](http://lore.kernel.org/linux-riscv/20231022154135.3746-1-jszhang@kernel.org/)** + +> Convert the th1520 devicetrees to use the new properties +> "riscv,isa-base" & "riscv,isa-extensions". +> For compatibility with other projects, "riscv,isa" remains. 
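For illustration, here is a rough sketch of how a legacy `"riscv,isa"` string maps onto the two new properties; real ISA strings carry version numbers and more grammar than this toy parser handles:

```python
# Toy split of a legacy "riscv,isa" string into the new "riscv,isa-base"
# and "riscv,isa-extensions" devicetree properties. Version suffixes and
# other ISA-string grammar are ignored in this sketch.

def split_isa(isa):
    isa = isa.lower()
    assert isa.startswith(("rv32", "rv64"))
    base = isa[:5]                     # e.g. "rv64i"
    rest, *multi = isa[5:].split("_")  # single-letter run, then multi-letter exts
    exts = ["i"] + list(rest) + multi
    return base, exts

base, exts = split_isa("rv64imafdc_zicntr_zicsr")
assert base == "rv64i"
assert exts == ["i", "m", "a", "f", "d", "c", "zicntr", "zicsr"]
```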
+> + +**[v3: Support Andes PMU extension](http://lore.kernel.org/linux-riscv/20231022151858.2479969-1-peterlin@andestech.com/)** + +> This patch series introduces the Andes PMU extension, which serves +> the same purpose as Sscofpmf. In this version we use FDT-based +> probing and the CONFIG_ANDES_CUSTOM_PMU to enable perf sampling +> and filtering support. +> +> Its non-standard local interrupt is assigned to bit 18 in the +> custom S-mode local interrupt enable/pending registers (slie/slip), +> while the interrupt cause is (256 + 18). +> + +**[v1: ACPI: Rename acpi_scan_device_not_present() to be about enumeration](http://lore.kernel.org/linux-riscv/E1qtuWW-00AQ7P-0W@rmk-PC.armlinux.org.uk/)** + +> acpi_scan_device_not_present() is called when a device in the +> hierarchy is not available for enumeration. Historically enumeration +> was only based on whether the device was present. +> + +**[v1: ACPI: Use the acpi_device_is_present() helper in more places](http://lore.kernel.org/linux-riscv/E1qtq2W-00AJ8T-Mm@rmk-PC.armlinux.org.uk/)** + +> acpi_device_is_present() checks the present or functional bits +> from the cached copy of _STA. +> +> A few places open-code this check. Use the helper instead to +> improve readability. +> + +**[v2: RISC-V: hwprobe: Introduce which-cpus](http://lore.kernel.org/linux-riscv/20231020130515.424577-8-ajones@ventanamicro.com/)** + +> This series introduces a flag for the hwprobe syscall which effectively +> reverses its behavior from getting the values of keys for a set of cpus +> to getting the cpus for a set of key-value pairs. The series is based on +> the patch pointed out with the tag below. +> + +#### 进程调度 + +**[v3: drm-misc-next: drm/sched: implement dynamic job-flow control](http://lore.kernel.org/lkml/20231026161431.5934-1-dakr@redhat.com/)** + +> Currently, job flow control is implemented simply by limiting the number +> of jobs in flight. 
Therefore, a scheduler is initialized with a credit +> limit that corresponds to the number of jobs which can be sent to the +> hardware. +> + +**[[POC]v2: sched: Extended Scheduler Time Slice](http://lore.kernel.org/lkml/20231025235413.597287e1@gandalf.local.home/)** + +> This has very good performance improvements on user space implemented spin +> locks, and I'm sure this can be used for spin locks in VMs too. That will +> come shortly. +> + +**[[POC]v1: sched: Extended Scheduler Time Slice](http://lore.kernel.org/lkml/20231025054219.1acaa3dd@gandalf.local.home/)** + +> [ +> This is basically a resend of this email, but as a separate patch and not +> part of a very long thread. +> https://lore.kernel.org/lkml/20231024214958.22fff0bc@gandalf.local.home/ +> ] +> +> This has very good performance improvements on user space implemented spin +> locks, and I'm sure this can be used for spin locks in VMs too. That will +> come shortly. +> + +**[v1: freezer,sched: Report TASK_FROZEN tasks as TASK_UNINTERRUPTIBLE](http://lore.kernel.org/lkml/20231023135736.17891-1-liliangliang@vivo.com/)** + +> TASK_FROZEN is not in TASK_REPORT, thus a frozen task will appear as +> state == 0, IOW TASK_RUNNING. +> +> Fix this by make TASK_FROZEN appear as TASK_UNINTERRUPTIBLE, thus we +> dont need to imply a new state to userspace tools. +> + +**[v2: sched/rt: Account execution time for cgroup and thread group if rt entity is task](http://lore.kernel.org/lkml/20231023080954.1628449-1-yajun.deng@linux.dev/)** + +> The rt entity can be a task group. Like the fair scheduler class, we don't +> need to account execution time for cgroup and thread group if the rt +> entity isn't a task. +> + +**[v2: sched/fair migration reduction features](http://lore.kernel.org/lkml/20231019160523.1582101-1-mathieu.desnoyers@efficios.com/)** + +> This series introduces two new scheduler features: UTIL_FITS_CAPACITY +> and SELECT_BIAS_PREV. 
When used together, they achieve a 41% speedup of +> a hackbench workload which leaves some idle CPU time on a 192-core AMD +> EPYC. +> + +**[v1: sched/fair: Enable group_asym_packing in find_idlest_group](http://lore.kernel.org/lkml/20231018155036.2314342-1-srikar@linux.vnet.ibm.com/)** + +> Current scheduler code doesn't handle SD_ASYM_PACKING in the +> find_idlest_cpu path. On few architectures, like Powerpc, cache is at a +> core. Moving threads across cores may end up in cache misses. +> + +**[v1: sched/rt: Redefine RR_TIMESLICE to 100 msecs](http://lore.kernel.org/lkml/20231018081709.2289264-1-yajun.deng@linux.dev/)** + +> The RR_TIMESLICE is currently defined as the jiffies corresponding to +> 100 msecs. And then sysctl_sched_rr_timeslice will convert RR_TIMESLICE +> to 100 msecs. These are opposite calculations. +> + +**[v1: sched/fair: Introduce WAKEUP_BIAS_PREV to reduce migrations](http://lore.kernel.org/lkml/20231017221204.1535774-1-mathieu.desnoyers@efficios.com/)** + +> Introduce the WAKEUP_BIAS_PREV scheduler feature to reduce the task +> migration rate. +> +> For scenarios where the system is under-utilized (CPUs are partly idle), +> eliminate frequent task migrations from CPUs with spare capacity left to +> completely idle CPUs by introducing a bias towards the previous CPU if +> it is idle or has spare capacity left in select_idle_sibling(). Use 25% +> of the previously used CPU capacity as spare capacity cutoff. +> + +**[v4: perf bench sched pipe: Add -G/--cgroups option](http://lore.kernel.org/lkml/20231017202342.1353124-1-namhyung@kernel.org/)** + +> The -G/--cgroups option is to put sender and receiver in different +> cgroups in order to measure cgroup context switch overheads. 
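The benchmark itself is the classic pipe ping-pong; a minimal user-space Python model of the measured loop (the cgroup placement that `-G` adds, and the timing, are omitted in this sketch):

```python
# Minimal model of what "perf bench sched pipe" exercises: two tasks
# bouncing a one-byte token over a pipe pair, forcing a context switch
# per hop. perf times this loop; here we just count the hops.
import os

def bench_pipe(loops):
    a_r, a_w = os.pipe()   # parent -> child
    b_r, b_w = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:           # child: echo every token back
        for _ in range(loops):
            token = os.read(a_r, 1)
            os.write(b_w, token)
        os._exit(0)
    hops = 0
    for _ in range(loops):
        os.write(a_w, b"x")   # wake the child ...
        os.read(b_r, 1)       # ... and sleep until it answers
        hops += 2             # roughly one switch each way
    os.waitpid(pid, 0)
    return hops

assert bench_pipe(100) == 200
```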
+> + +**[v1: drm/sched: Add description of parameters in job_done](http://lore.kernel.org/lkml/20231017151521.12388-1-luben.tuikov@amd.com/)** + +> Fix a kernel test robot complaint that there's no description of the "result" +> parameter to drm_sched_job_done() function. +> + +#### 内存管理 + +**[v1: mm, memcg: avoid recycling when there is no more recyclable memory](http://lore.kernel.org/linux-mm/20231027093004.681270-1-suruifeng1@huawei.com/)** + +> When the number of alloc anonymous pages exceeds the memory.high, +> exc_page_fault successfully alloc code pages, +> and is released by mem_cgroup_handle_over_high before return to user mode. +> As a result, the program is trapped in a loop to exc page fault and reclaim +> pages. +> + +**[[POC]v2: sched: Extended Scheduler Time Slice](http://lore.kernel.org/linux-mm/20231025235413.597287e1@gandalf.local.home/)** + +> This has very good performance improvements on user space implemented spin +> locks, and I'm sure this can be used for spin locks in VMs too. That will +> come shortly. +> + +**[v7: NUMA: optimize detection of memory with no node id assigned by firmware](http://lore.kernel.org/linux-mm/20231026020329.327329-1-zhiguangni01@gmail.com/)** + +> Sanity check that makes sure the nodes cover all memory loops over +> numa_meminfo to count the pages that have node id assigned by the firmware, +> then loops again over memblock.memory to find the total amount of memory +> and in the end checks that the difference between the total memory and +> memory that covered by nodes is less than some threshold. Worse, the loop +> over numa_meminfo calls __absent_pages_in_range() that also partially +> traverses memblock.memory. 
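The check being optimized can be sketched in a few lines; ranges below are `(start, end)` page tuples and overlaps are assumed away, which the real kernel code cannot do:

```python
# Sketch of the NUMA sanity check described above: compare the memory
# covered by nodes (numa_meminfo) against everything memblock knows about,
# and report how much memory the firmware left without a node id.

def total_pages(ranges):
    return sum(end - start for start, end in ranges)

def pages_without_node(numa_meminfo, memblock_memory):
    # Assumes non-overlapping ranges; the kernel must be more careful.
    return total_pages(memblock_memory) - total_pages(numa_meminfo)

numa_meminfo = [(0, 0x1000), (0x2000, 0x8000)]     # ranges with a node id
memblock_memory = [(0, 0x1000), (0x2000, 0x8100)]  # all firmware-reported memory
missing = pages_without_node(numa_meminfo, memblock_memory)
assert missing == 0x100   # 0x100 pages have no node assigned
```

The series avoids walking `memblock.memory` repeatedly to compute exactly this difference.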
+> + +**[v7: mm: use memmap_on_memory semantics for dax/kmem](http://lore.kernel.org/linux-mm/20231025-vv-kmem_memmap-v7-0-4a76d7652df5@intel.com/)** + +> The dax/kmem driver can potentially hot-add large amounts of memory +> originating from CXL memory expanders, or NVDIMMs, or other 'device +> memories'. There is a chance there isn't enough regular system memory +> available to fit the memmap for this new memory. It's therefore +> desirable, if all other conditions are met, for the kmem managed memory +> to place its memmap on the newly added memory itself. +> + +**[v3: debugobjects: stop accessing objects after releasing spinlock](http://lore.kernel.org/linux-mm/20231025-debugobjects_fix-v3-1-2bc3bf7084c2@intel.com/)** + +> After spinlock release object can be modified/freed by concurrent thread. +> Using it in such case is error prone, even for printing object state. +> To avoid such situation local copy of the object is created if necessary. +> + +**[v3: Swap-out small-sized THP without splitting](http://lore.kernel.org/linux-mm/20231025144546.577640-1-ryan.roberts@arm.com/)** + +> This is v3 of a series to add support for swapping out small-sized THP without +> needing to first split the large folio via __split_huge_page(). It closely +> follows the approach already used by PMD-sized THP. +> + +**[v2: zswap: add writeback_time_threshold interface to shrink zswap pool](http://lore.kernel.org/linux-mm/20231025095248.458789-1-hezhongkun.hzk@bytedance.com/)** + +> zswap does not have a suitable method to select objects that have not +> been accessed for a long time, and just shrink the pool when the limit +> is hit. There is a high probability of wasting memory in zswap if the +> limit is too high. +> + +**[v1: memcontrol: implement swap bypassing](http://lore.kernel.org/linux-mm/20231024233501.2639043-1-nphamcs@gmail.com/)** + +> During our experiment with zswap, we sometimes observe swap IOs due to +> occasional zswap store failures and writebacks. 
These swapping IOs +> prevent many users who cannot tolerate swapping from adopting zswap to +> save memory and improve performance where possible. +> + +**[v4: workload-specific and memory pressure-driven zswap writeback](http://lore.kernel.org/linux-mm/20231024203302.1920362-1-nphamcs@gmail.com/)** + +> This patch series solves these issues by separating the global zswap +> LRU into per-memcg and per-NUMA LRUs, and performs workload-specific +> (i.e memcg- and NUMA-aware) zswap writeback under memory pressure. The +> new shrinker does not have any parameter that must be tuned by the +> user, and can be opted in or out on a per-memcg basis. +> + +**[v1: mm: mlock: avoid folio_within_range() on KSM pages](http://lore.kernel.org/linux-mm/23852f6a-5bfa-1ffd-30db-30c5560ad426@google.com/)** + +> Since mm-hotfixes-stable commit dc68badcede4 ("mm: mlock: update +> mlock_pte_range to handle large folio") I've just occasionally seen +> VM_WARN_ON_FOLIO(folio_test_ksm) warnings from folio_within_range(), +> in a splurge after testing with KSM hyperactive. +> + +**[v1: zswap: export more zswap store failure stats](http://lore.kernel.org/linux-mm/20231024000702.1387130-1-nphamcs@gmail.com/)** + +> This patch adds a global and a per-cgroup zswap store failure counter, +> as well as a dedicated debugfs counter for compression algorithm failure +> (which can happen for e.g when random data are passed to zswap). +> + +**[v3: stackdepot: allow evicting stack traces](http://lore.kernel.org/linux-mm/cover.1698077459.git.andreyknvl@google.com/)** + +> Currently, the stack depot grows indefinitely until it reaches its +> capacity. Once that happens, the stack depot stops saving new stack +> traces. 
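A toy model of the refcounted, evictable depot this series introduces (interface names are illustrative, not the kernel's):

```python
# Sketch of a deduplicating stack-trace pool with eviction, the behavior
# the stackdepot series above adds: identical traces share one slot with a
# refcount, and the slot becomes reusable once its last user evicts it.

class StackDepot:
    def __init__(self):
        self.slots = {}    # trace (tuple of frames) -> refcount

    def save(self, trace):
        trace = tuple(trace)
        self.slots[trace] = self.slots.get(trace, 0) + 1
        return trace       # acts as the handle

    def evict(self, handle):
        self.slots[handle] -= 1
        if self.slots[handle] == 0:
            del self.slots[handle]   # capacity reclaimed

depot = StackDepot()
h1 = depot.save(["alloc", "kmalloc"])
h2 = depot.save(["alloc", "kmalloc"])   # deduplicated: same handle
assert h1 == h2 and depot.slots[h1] == 2
depot.evict(h1)
depot.evict(h2)
assert h1 not in depot.slots            # no longer pinned in the depot
```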
+> + +**[v2: mm: page_alloc: check the order of compound page even when the order is zero](http://lore.kernel.org/linux-mm/20231023083217.1866451-1-hyesoo.yu@samsung.com/)** + +> For compound pages, the head sets the PG_head flag and +> the tail sets the compound_head to indicate the head page. +> If a user allocates a compound page and frees it with a different +> order, the compound page information will not be properly +> initialized. To detect this problem, compound_order(page) and +> the order argument are compared, but this is not checked +> when the order argument is zero. That error should be checked +> regardless of the order. +> + +**[v1: -next: mm/kmemleak: move the initialisation of object to __link_object](http://lore.kernel.org/linux-mm/20231023025125.90972-1-liushixin2@huawei.com/)** + +> Leave __alloc_object() just do the actual allocation and __link_object() +> do the full initialisation. +> + +**[v1: selftests: add a sanity check for zswap](http://lore.kernel.org/linux-mm/20231020222009.2358953-1-nphamcs@gmail.com/)** + +> We recently encountered a bug that makes all zswap store attempt fail. +> Specifically, after: +> +> "141fdeececb3 mm/zswap: delay the initialization of zswap" +> +> if we build a kernel with zswap disabled by default, then enabled after +> the swapfile is set up, the zswap tree will not be initialized. As a +> result, all zswap store calls will be short-circuited. We have to +> perform another swapon to get zswap working properly again. +> + +**[v3: Some khugepaged folio conversions](http://lore.kernel.org/linux-mm/20231020183331.10770-1-vishal.moola@gmail.com/)** + +> This patchset converts a number of functions to use folios. This cleans +> up some khugepaged code and removes a large number of hidden +> compound_head() calls. 
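A small illustration of what a "hidden compound_head() call" is, and why folio-based code avoids repeating it:

```python
# Model of compound pages: tail pages point at their head, and many
# page-flag helpers silently hop to the head on every call. Folio code
# resolves the head once and then works on it directly.

class Page:
    def __init__(self, head=None):
        self.compound_head = head or self   # tail pages point at the head
        self.flags = set()

def compound_head(page):
    return page.compound_head

def page_test_locked(page):
    # The head lookup is "hidden" inside the helper.
    return "locked" in compound_head(page).flags

head = Page()
tail = Page(head=head)
head.flags.add("locked")
assert page_test_locked(tail)     # works, but pays a lookup per call

folio = compound_head(tail)       # folio-style: resolve the head once...
assert "locked" in folio.flags    # ...then use it directly
```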
+> + +**[v1: check MGLRU promoted without hold page lock](http://lore.kernel.org/linux-mm/20231020084358.463846-1-link@vivo.com/)** + +> This patchset add a new reclaim_stat named nr_promote to observe +> number folios which MGLRU promoted before shrink touch, and then +> show in mm_vmscan_lru_shrink_inactive. Also, fix nr_scanned in MGLRU +> trace into nr_taken. (patch1) +> + +**[v6: riscv: Add remaining module relocations and tests](http://lore.kernel.org/linux-mm/20231019-module_relocations-v6-0-94726e644321@rivosinc.com/)** + +> A handful of module relocations were missing, this patch includes the +> remaining ones. I also wrote some test cases to ensure that module +> loading works properly. Some relocations cannot be supported in the +> kernel, these include the ones that rely on thread local storage and +> dynamic linking. +> + +#### 文件系统 + +**[v4: security: Move IMA and EVM to the LSM infrastructure](http://lore.kernel.org/linux-fsdevel/20231027083558.484911-1-roberto.sassu@huaweicloud.com/)** + +> IMA and EVM are not effectively LSMs, especially due to the fact that in +> the past they could not provide a security blob while there is another LSM +> active. +> + +**[v1: exportfs: handle CONFIG_EXPORTFS=m also](http://lore.kernel.org/linux-fsdevel/20231026192830.21288-1-rdunlap@infradead.org/)** + +> When CONFIG_EXPORTFS=m, there are multiple build errors due to +> the header not handling the =m setting correctly. +> Change the header file to check for CONFIG_EXPORTFS enabled at all +> instead of just set =y. +> + +**[v1: io_uring: kiocb_done() should *not* trust ->ki_pos if ->{read,write}_iter() failed](http://lore.kernel.org/linux-fsdevel/20231026021840.GJ800259@ZenIV/)** + +> [in viro/vfs.git#fixes at the moment] +> ->ki_pos value is unreliable in such cases. For an obvious example, +> consider O_DSYNC write - we feed the data to page cache and start IO, +> then we make sure it's completed. 
Update of ->ki_pos is dealt with +> by the first part; failure in the second ends up with negative value +> returned _and_ ->ki_pos left advanced as if sync had been successful. +> In the same situation write(2) does not advance the file position +> at all. +> + +**[v1: rust: types: Add read_once and write_once](http://lore.kernel.org/linux-fsdevel/20231025195339.1431894-1-boqun.feng@gmail.com/)** + +> In theory, `read_volatile` and `write_volatile` in Rust can have UB in +> case of the data races [1]. However, kernel uses volatiles to implement +> READ_ONCE() and WRITE_ONCE(), and expects races on these marked accesses +> don't cause UB. And they are proven to have a lot of usages in kernel. +> + +**[v2: nfs: derive f_fsid from s_dev and server's fsid](http://lore.kernel.org/linux-fsdevel/20231025061117.3068417-1-amir73il@gmail.com/)** + +> Use s_dev number and the server's fsid to report f_fsid in statfs(2). +> +> The server's fsid could be zero for NFSv4 root export and is not unique +> across different servers, so we use the s_dev number to avoid local +> f_fsid collisions. +> + +**[v1: fs,block: yield devices](http://lore.kernel.org/linux-fsdevel/20231024-vfs-super-rework-v1-0-37a8aa697148@kernel.org/)** + +> This is a mechanism that allows the holder of a block device to yield +> device access before actually closing the block device. +> +> If a someone yields a device then any concurrent opener claiming the +> device exclusively with the same blk_holder_ops as the current owner can +> wait for the device to be given up. Filesystems by default use +> fs_holder_ps and so can wait on each other. +> + +**[v2: Implement freeze and thaw as holder operations](http://lore.kernel.org/linux-fsdevel/20231024-vfs-super-freeze-v2-0-599c19f4faac@kernel.org/)** + +> This is v2 and based on vfs.super. I'm sending this out right now +> because frankly, shortly before the merge window is the time when I have +> time to do something. Otherwise, I would've waited a bit. 
+> + +**[v1: freevxfs: derive f_fsid from bdev->bd_dev](http://lore.kernel.org/linux-fsdevel/20231024121457.3014063-1-amir73il@gmail.com/)** + +> The majority of blockdev filesystems, which do not have a UUID in their +> on-disk format, derive f_fsid of statfs(2) from bdev->bd_dev. +> +> Use the same practice for freevxfs. +> +> This will allow reporting fanotify events with fanotify_event_info_fid. +> + +**[v1: nfs: derive f_fsid from server's fsid](http://lore.kernel.org/linux-fsdevel/20231024110109.3007794-1-amir73il@gmail.com/)** + +> Fold the server's 128bit fsid to report f_fsid in statfs(2). +> This is similar to how uuid is folded for f_fsid of ext2/ext4/zonefs. +> +> This allows nfs client to be monitored by fanotify filesystem watch +> for local client access if nfs supports re-export. +> + +**[v1: gfs2: fs: derive f_fsid from s_uuid](http://lore.kernel.org/linux-fsdevel/20231024075535.2994553-1-amir73il@gmail.com/)** + +> gfs2 already has optional persistent uuid. +> +> Use that uuid to report f_fsid in statfs(2), same as ext2/ext4/zonefs. +> +> This allows gfs2 to be monitored by fanotify filesystem watch. +> for example, with inotify-tools 4.23.8.0, the following command can be +> used to watch changes over entire filesystem: +> +> fsnotifywatch --filesystem /mnt/gfs2 +> + +**[v2: Support more filesystems with FAN_REPORT_FID](http://lore.kernel.org/linux-fsdevel/20231023180801.2953446-1-amir73il@gmail.com/)** + +> Christian, +> +> The grand plan is to be able to use fanotify with FAN_REPORT_FID as a +> drop-in replacement for inotify, but with current upstream, inotify is +> supported on all the filesystems and FAN_REPORT_FID only on a few. +> + +**[v1: fs: report f_fsid from s_dev for "simple" filesystems](http://lore.kernel.org/linux-fsdevel/20231023143049.2944970-1-amir73il@gmail.com/)** + +> There are many "simple" filesystems (*) that report null f_fsid in +> statfs(2). 
Those "simple" filesystems report sb->s_dev as the st_dev +> field of the stat syscalls for all inodes of the filesystem (**). +> +> In order to enable fanotify reporting of events with fsid on those +> "simple" filesystems, report the sb->s_dev number in f_fsid field of +> statfs(2). +> + +**[v4: fuse: share lookup state between submount and its parent](http://lore.kernel.org/linux-fsdevel/20231020213459.GA3062@templeofstupid.com/)** + +> Fuse submounts do not perform a lookup for the nodeid that they inherit +> from their parent. Instead, the code decrements the nlookup on the +> submount's fuse_inode when it is instantiated, and no forget is +> performed when a submount root is evicted. +> + +**[v1: blk: optimization for classic polling](http://lore.kernel.org/linux-fsdevel/3578876466-3733-1-git-send-email-nj.shetty@samsung.com/)** + +> This removes the dependency on interrupts to wake up task. Set task +> state as TASK_RUNNING, if need_resched() returns true, +> while polling for IO completion. +> Earlier, polling task used to sleep, relying on interrupt to wake it up. +> This made some IO take very long when interrupt-coalescing is enabled in +> NVMe. +> + +#### 网络设备 + +**[v18: nvme-tcp receive offloads](http://lore.kernel.org/netdev/20231027122755.205334-1-aaptel@nvidia.com/)** + +> The next iteration of our nvme-tcp receive offload series. +> The main change is the move of the capabilities from the netdev to the driver. +> +> Previous submission (v17): +> https://lore.kernel.org/all/20231024125445.2632-1-aaptel@nvidia.com/ +> +> The changes are also available through git: +> Repo: https://github.com/aaptel/linux.git branch nvme-rx-offload-v18 +> Web: https://github.com/aaptel/linux/tree/nvme-rx-offload-v18 +> + +**[v1: ss: pretty-printing BPF socket-local storage](http://lore.kernel.org/netdev/20231027121155.1244308-1-qde@naccy.de/)** + +> BPF allows programs to store socket-specific data using +> BPF_MAP_TYPE_SK_STORAGE maps. 
The data is attached to the socket itself, +> and Martin added INET_DIAG_REQ_SK_BPF_STORAGES, so it can be fetched +> using the INET_DIAG mechanism. +> + +**[v1: rxrpc_find_service_conn_rcu: use read_seqbegin() rather than read_seqbegin_or_lock()](http://lore.kernel.org/netdev/20231027095842.GA30868@redhat.com/)** + +> read_seqbegin_or_lock() makes no sense unless you make "seq" odd +> after the lockless access failed. See thread_group_cputime() as +> an example, note that it does nextseq = 1 for the 2nd round. +> + +**[v1: net-next: iavf: use iavf_schedule_aq_request() helper](http://lore.kernel.org/netdev/20231027095102.499914-1-poros@redhat.com/)** + +> Use the iavf_schedule_aq_request() helper when we need to +> schedule a watchdog task immediately. No functional change. +> + +**[v4: net-next: tools: ynl: introduce option to process unknown attributes or types](http://lore.kernel.org/netdev/20231027092525.956172-1-jiri@resnulli.us/)** + +> In case the kernel sends message back containing attribute not defined +> in family spec, following exception is raised to the user: +> +> $ sudo ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/devlink.yaml --do trap-get --json '{"bus-name": "netdevsim", "dev-name": "netdevsim1", "trap-name": "source_mac_is_multicast"}' +> Traceback (most recent call last): +> File "/home/jiri/work/linux/tools/net/ynl/lib/ynl.py", line 521, in _decode +> attr_spec = attr_space.attrs_by_val[attr.type] +> + +**[v2: iproute2-next: Increase BPF verifier verbosity when in verbose mode](http://lore.kernel.org/netdev/20231027085706.25718-1-shung-hsi.yu@suse.com/)** + +> When debugging BPF verifier issue, it is useful get as much information +> out of the verifier as possible to help diagnostic, but right now that +> is not possible because load_bpf_object() does not set the +> kernel_log_level in struct bpf_object_open_opts, which is addressed in +> patch 1. 
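The plain `read_seqbegin()`/`read_seqretry()` pattern that the rxrpc item above switches to can be sketched as a retry loop. This is a single-threaded Python model; the real primitive also handles memory ordering and writer exclusion:

```python
# Sketch of a seqcount reader: retry the lockless read until the writer's
# sequence number is even (no write in progress) and unchanged across the
# read, with no fallback to taking the lock.
import threading

class SeqCount:
    def __init__(self):
        self.seq = 0
        self.lock = threading.Lock()   # serializes writers

    def write_begin(self):
        self.lock.acquire()
        self.seq += 1      # odd: write in progress

    def write_end(self):
        self.seq += 1      # even again
        self.lock.release()

def read_stable(sc, read_fn):
    while True:
        start = sc.seq
        if start & 1:          # writer active, spin and retry
            continue
        value = read_fn()
        if sc.seq == start:    # read_seqretry(): nothing changed meanwhile
            return value

data = {"x": 1}
sc = SeqCount()
assert read_stable(sc, lambda: data["x"]) == 1
sc.write_begin(); data["x"] = 2; sc.write_end()
assert read_stable(sc, lambda: data["x"]) == 2
```

`read_seqbegin_or_lock()` only helps if the second pass actually makes `seq` odd to take the lock; otherwise it degenerates to exactly this loop, which is the patch's point.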
+> + +**[v1: net: dsa: lan9303: consequently nested-lock physical MDIO](http://lore.kernel.org/netdev/20231027065741.534971-1-alexander.sverdlin@siemens.com/)** + +> Consequent annotation in lan9303_mdio_{read|write} as nested lock +> (similar to lan9303_mdio_phy_{read|write}, it's the same physical MDIO bus) +> prevents the following splat: +> +> WARNING: possible circular locking dependency detected +> kworker/u4:3/609 is trying to acquire lock: +> ffff000011531c68 (lan9303_mdio:131:(&lan9303_mdio_regmap_config)->lock){+.+.}-{3:3}, at: regmap_lock_mutex +> + +**[v3: net-next: net: pcs: xpcs: Add 2500BASE-X case in get state for XPCS drivers](http://lore.kernel.org/netdev/20231027044306.291250-1-Raju.Lakkaraju@microchip.com/)** + +> Add DW_2500BASEX case in xpcs_get_state( ) to update speed, duplex and pause +> + +**[v1: net-next: ptp: ptp_read should not release queue](http://lore.kernel.org/netdev/tencent_541B3D2565BACCBBD133319E441B774B6C08@qq.com/)** + +> Firstly, queue is not the memory allocated in ptp_read; +> Secondly, other processes may block at ptp_read and wait for conditions to be +> met to perform read operations. +> +> Reported-by: syzbot+df3f3ef31f60781fa911@syzkaller.appspotmail.com +> + +**[v2: net-next: WAKE_FILTER for Broadcom PHY (v2)](http://lore.kernel.org/netdev/20231026224509.112353-1-florian.fainelli@broadcom.com/)** + +> This is a re-submission of the series that was submitted before: +> +> https://lore.kernel.org/all/20230516231713.2882879-1-florian.fainelli@broadcom.com/ +> + +**[v2: next: ethtool: Add ethtool_puts()](http://lore.kernel.org/netdev/20231026-ethtool_puts_impl-v2-0-0d67cbdd0538@google.com/)** + +> This series aims to implement ethtool_puts() and send out a wave 1 of +> conversions from ethtool_sprintf(). There's also a checkpatch patch +> included to check for the cases listed below. 
+> + +**[v1: net-next: net: fill in 18 MODULE_DESCRIPTION()s](http://lore.kernel.org/netdev/20231026190101.1413939-1-kuba@kernel.org/)** + +> W=1 builds now warn if module is built without a MODULE_DESCRIPTION(). +> +> Fill in the first 18 that jumped out at me, and those missing +> in modules I maintain. +> + +**[v1: net-next: virtio_net: use u64_stats_t infra to avoid data-races](http://lore.kernel.org/netdev/20231026171840.4082735-1-edumazet@google.com/)** + +> syzbot reported a data-race in virtnet_poll / virtnet_stats [1] +> +> u64_stats_t infra has very nice accessors that must be used +> to avoid potential load-store tearing. +> + +**[v1: bpf-next: net, xdp: allow metadata > 32](http://lore.kernel.org/netdev/20231026165701.65878-1-larysa.zaremba@intel.com/)** + +> 32 bytes may be not enough for some custom metadata. Relax the restriction, +> allow metadata larger than 32 bytes and make __skb_metadata_differs() work +> with bigger lengths. +> + +**[v2: bpf-next: netkit: use netlink policy for mode and policy attributes validation](http://lore.kernel.org/netdev/20231026151659.1676037-1-razor@blackwall.org/)** + +> Use netlink's NLA_POLICY_VALIDATE_FN() type for mode and primary/peer +> policy with custom validation functions to return better errors. This +> simplifies the logic a bit and relies on netlink's policy validation. +> We have to use NLA_BINARY and validate the length inside the callbacks. +> + +**[v1: net-next: ipvlan: properly track tx_errors](http://lore.kernel.org/netdev/20231026131446.3933175-1-edumazet@google.com/)** + +> Both ipvlan_process_v4_outbound() and ipvlan_process_v6_outbound() +> increment dev->stats.tx_errors in case of errors. +> +> Unfortunately there are two issues : +> +> 1) ipvlan_get_stats64() does not propagate dev->stats.tx_errors to user. +> +> 2) Increments are not atomic. KCSAN would complain eventually. 
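The per-CPU counter pattern that addresses both issues (and that `u64_stats_t`, mentioned in the virtio_net item above, wraps) can be sketched as:

```python
# Per-CPU statistics sketch: each CPU bumps only its own slot, so increments
# never race across CPUs, and the stats read aggregates all slots so nothing
# is dropped. On 32-bit hosts the kernel additionally guards 64-bit tearing
# with u64_stats seqcounts, which this model omits.

NR_CPUS = 4
tx_errors = [0] * NR_CPUS     # one slot per CPU

def inc_tx_errors(cpu):
    tx_errors[cpu] += 1       # local to this CPU, no cross-CPU contention

def get_stats64():
    return sum(tx_errors)     # what an ndo_get_stats64() should report

inc_tx_errors(0)
inc_tx_errors(0)
inc_tx_errors(3)
assert get_stats64() == 3     # propagated to the reader, unlike bug (1) above
```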
+> + +**[GIT PULL: Networking for v6.6-rc8](http://lore.kernel.org/netdev/20231026095510.23688-1-pabeni@redhat.com/)** + +> The following changes since commit ce55c22ec8b223a90ff3e084d842f73cfba35588: +> +> Merge tag 'net-6.6-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (2023-10-19 12:08:18 -0700) +> + +**[v1: net-next: netdevsim: Block until all devices are released](http://lore.kernel.org/netdev/20231026083343.890689-1-idosch@nvidia.com/)** + +> Like other buses, devices on the netdevsim bus have a release callback +> that is invoked when the reference count of the device drops to zero. +> However, unlike other buses such as PCI, the release callback is not +> necessarily built into the kernel, as netdevsim can be built as a +> module. +> + +**[v2: net-next: nfp: using napi_build_skb() to replace build_skb()](http://lore.kernel.org/netdev/20231026080058.22810-1-louis.peens@corigine.com/)** + +> The napi_build_skb() can reuse the skb in skb cache per CPU or +> can allocate skbs in bulk, which helps improve the performance. +> + +**[v8: net-next: net: dsa: microchip: provide Wake on LAN support (part 2)](http://lore.kernel.org/netdev/20231026051051.2316937-1-o.rempel@pengutronix.de/)** + +> This patch series introduces extensive Wake on LAN (WoL) support for the +> Microchip KSZ9477 family of switches, coupled with some code refactoring +> and error handling enhancements. The principal aim is to enable and +> manage Wake on Magic Packet and other PHY event triggers for waking up +> the system, whilst ensuring that the switch isn't reset during a +> shutdown if WoL is active. +> + +**[v7: net-next: Rust abstractions for network PHY drivers](http://lore.kernel.org/netdev/20231026001050.1720612-1-fujita.tomonori@gmail.com/)** + +> This patchset adds Rust abstractions for phylib. It doesn't fully +> cover the C APIs yet but I think that it's already useful. I implement +> two PHY drivers (Asix AX88772A PHYs and Realtek Generic FE-GE). 
Seems
+> they work well with real hardware.
+>
+
+**[v2: net: llc: verify mac len before reading mac header](http://lore.kernel.org/netdev/20231025234251.3796495-1-willemdebruijn.kernel@gmail.com/)**
+
+> LLC reads the mac header with eth_hdr without verifying that the skb
+> has an Ethernet header.
+>
+
+**[v1: ethtool: Add ethtool_puts()](http://lore.kernel.org/netdev/20231025-ethtool_puts_impl-v1-0-6a53a93d3b72@google.com/)**
+
+> This series aims to implement ethtool_puts() and send out a wave 1 of
+> conversions from ethtool_sprintf(). There's also a checkpatch patch
+> included to check for the cases listed below.
+>
+
+**[v1: hv_netvsc: Mark VF as slave before exposing it to user-mode](http://lore.kernel.org/netdev/1698274250-653-1-git-send-email-longli@linuxonhyperv.com/)**
+
+> When a VF is being exposed from the kernel, it should be marked as "slave"
+> before being exposed to user-mode. The VF is not usable without netvsc running
+> as master. User-mode should never see a VF without the "slave" flag.
+>
+
+**[v1: bpf-next: bpf, net: Use bpf mem allocator for sk local storage](http://lore.kernel.org/netdev/20231025224151.385719-1-thinker.li@gmail.com/)**
+
+> Switching to the BPF memory allocator improves the performance of sk
+> local storage in terms of creation and destruction.
+>
+
+**[v1: net-next: tools: ynl-gen: respect attr-cnt-name at the attr set level](http://lore.kernel.org/netdev/20231025182739.184706-1-kuba@kernel.org/)**
+
+> Davide reports that we look for the attr-cnt-name in the wrong
+> object. We try to read it from the family, but the schema only
+> allows for it to exist at attr-set level.
+>
+
+**[v1: net-next: netlink: specs: support conditional operations](http://lore.kernel.org/netdev/20231025162253.133159-1-kuba@kernel.org/)**
+
+> Page pool code is compiled conditionally, but the operations
+> are part of the shared netlink family. 
We can handle this +> by reporting empty list of pools or -EOPNOTSUPP / -ENOSYS +> but the cleanest way seems to be removing the ops completely +> at compilation time. That way user can see that the page +> pool ops are not present using genetlink introspection. +> Same way they'd check if the kernel is "new enough" to +> support the ops. +> + +**[v1: net-next: netlink: make range pointers in policies const](http://lore.kernel.org/netdev/20231025162204.132528-1-kuba@kernel.org/)** + +> struct nla_policy is usually constant itself, but unless +> we make the ranges inside constant we won't be able to +> make range structs const. The ranges are not modified +> by the core. +> + +#### 安全增强 + +**[v2: airo: replace deprecated strncpy with strscpy_pad](http://lore.kernel.org/linux-hardening/20231026-strncpy-drivers-net-wireless-cisco-airo-c-v2-1-413427249e47@google.com/)** + +> strncpy() is deprecated for use on NUL-terminated destination strings +> [1] and as such we should prefer more robust and less ambiguous string +> interfaces. +> +> `extra` is clearly supposed to be NUL-terminated which is evident by the +> manual NUL-byte assignment as well as its immediate usage with strlen(). +> + +**[v1: wifi: wil6210: Replace strlcat() usage with seq_buf](http://lore.kernel.org/linux-hardening/20231026171349.work.928-kees@kernel.org/)** + +> The use of strlcat() is fragile at best, and we'd like to remove it from +> the available string APIs in the kernel. Instead, use the safer seq_buf +> APIs. 
+> + +**[v1: seq_buf: Introduce DECLARE_SEQ_BUF and seq_buf_cstr()](http://lore.kernel.org/linux-hardening/20231026170722.work.638-kees@kernel.org/)** + +> Solve two ergonomic issues with struct seq_buf: +> +> 1) Too much boilerplate is required to initialize: +> +> struct seq_buf s; +> char buf[32]; +> +> seq_buf_init(s, buf, sizeof(buf)); +> + +**[v2: scsi: elx: libefc: replace deprecated strncpy with strscpy_pad/memcpy](http://lore.kernel.org/linux-hardening/20231026-strncpy-drivers-scsi-elx-libefc-efc_node-h-v2-1-5c083d0c13f4@google.com/)** + +> strncpy() is deprecated for use on NUL-terminated destination strings +> [1] and as such we should prefer more robust and less ambiguous string +> interfaces. +> +> To keep node->current_state_name and node->prev_state_name NUL-padded +> and NUL-terminated let's use strscpy_pad() as this implicitly provides +> both. +> +> For the swap between the two, a simple memcpy will suffice. +> + +**[v2: wifi: ath10k: replace deprecated strncpy with memcpy](http://lore.kernel.org/linux-hardening/20231024-strncpy-drivers-net-wireless-ath-ath10k-mac-c-v2-1-4c1f4cd4b4df@google.com/)** + +> strncpy() is deprecated [1] and we should prefer less ambiguous +> interfaces. +> +> In this case, arvif->u.ap.ssid has its length maintained by +> arvif->u.ap.ssid_len which indicates it may not need to be +> NUL-terminated. Make this explicit with __nonstring and use a plain old +> memcpy. +> + +**[v1: scsi: elx: libefc: replace deprecated strncpy with strscpy](http://lore.kernel.org/linux-hardening/20231023-strncpy-drivers-scsi-elx-libefc-efc_node-h-v1-1-8b66878b6796@google.com/)** + +> strncpy() is deprecated for use on NUL-terminated destination strings +> [1] and as such we should prefer more robust and less ambiguous string +> interfaces. +> +> A suitable replacement is `strscpy` [2] due to the fact that it +> guarantees NUL-termination on the destination buffer without +> unnecessarily NUL-padding. 
+>
+
+**[v2: Add initial support for Xiaomi Mi 11 Ultra](http://lore.kernel.org/linux-hardening/20231021-sakuramist-mi11u-v2-0-fa82c91ecaf0@gmail.com/)**
+
+> This patch series adds support for the Xiaomi Mi 11 Ultra.
+>
+
+**[v1: rpmsg: virtio: replace deprecated strncpy with strscpy/_pad](http://lore.kernel.org/linux-hardening/20231021-strncpy-drivers-rpmsg-virtio_rpmsg_bus-c-v1-1-8abb919cbe24@google.com/)**
+
+> strncpy() is deprecated for use on NUL-terminated destination strings
+> [1] and as such we should prefer more robust and less ambiguous string
+> interfaces.
+>
+> This patch replaces 3 callsites of strncpy().
+>
+
+**[v1: PNP: replace deprecated strncpy with memcpy](http://lore.kernel.org/linux-hardening/20231019-strncpy-drivers-pnp-pnpbios-rsparser-c-v1-1-e47d93b52e3e@google.com/)**
+
+> strncpy() is deprecated for use on NUL-terminated destination strings
+> [1] and as such we should prefer more robust and less ambiguous
+> interfaces.
+>
+> After having precisely calculated the lengths and ensured we don't
+> overflow the buffer, this really decays to just a memcpy. Let's not use
+> a C string API, as it makes the intention of the code confusing.
+>
+
+**[v2: net: wwan: replace deprecated strncpy with strscpy](http://lore.kernel.org/linux-hardening/20231019-strncpy-drivers-net-wwan-rpmsg_wwan_ctrl-c-v2-1-ecf9b5a39430@google.com/)**
+
+> strncpy() is deprecated for use on NUL-terminated destination strings
+> [1] and as such we should prefer more robust and less ambiguous string
+> interfaces.
+>
+
+#### 异步 IO
+
+**[v2: io_uring/fdinfo: park SQ thread while retrieving cpu/pid](http://lore.kernel.org/io-uring/04cfb22e-a706-424f-97ba-36421bf0154a@kernel.dk/)**
+
+> We could race with SQ thread exit, and if we do, we'll hit a NULL pointer
+> dereference when the thread is cleared. Grab the SQPOLL data lock before
+> attempting to get the task cpu and pid for fdinfo; this ensures we have a
+> stable view of it.
+>
+
+**[v7: io_uring: Initial support for {s,g}etsockopt commands](http://lore.kernel.org/io-uring/20231016134750.1381153-1-leitao@debian.org/)**
+
+> This patchset adds support for getsockopt (SOCKET_URING_OP_GETSOCKOPT)
+> and setsockopt (SOCKET_URING_OP_SETSOCKOPT) in io_uring commands.
+> SOCKET_URING_OP_SETSOCKOPT implements the generic case, covering all levels
+> and optnames. SOCKET_URING_OP_GETSOCKOPT is limited, for now, to the
+> SOL_SOCKET level, which seems to be the most common level parameter for
+> get/setsockopt(2).
+>
+
+#### Rust For Linux
+
+**[v2: rust: crates in other kernel directories](http://lore.kernel.org/rust-for-linux/20231027003504.146703-1-yakoyoku@gmail.com/)**
+
+> This RFC makes it possible to have bindings for kernel subsystems
+> that are compiled as modules.
+>
+> Previously, if you wanted to have Rust bindings for a subsystem, like
+> AMBA for example, you had to put it under `rust/kernel/` so it became
+> part of the `kernel` crate, but this came with many downsides. Namely,
+> if you compiled said subsystem as a module, you had a dependency on it
+> from `kernel`, which is linked directly into `vmlinux`.
+>
+
+**[v4: Rust enablement for AArch64](http://lore.kernel.org/rust-for-linux/20231020155056.3495121-1-Jamie.Cunliffe@arm.com/)**
+
+> Enable Rust support for the AArch64 architecture.
+>
+> Since v3 this has been refactored to split up the x86 Makefile
+> changes. Updated the x86-64 conditionals as suggested by Boqun and
+> addressed the formatting issues Miguel raised.
+>
+
+**[v1: Rust abstractions for VFS](http://lore.kernel.org/rust-for-linux/20231018122518.128049-1-wedsonaf@gmail.com/)**
+
+> This series introduces Rust abstractions that allow page-cache-backed read-only
+> file systems to be written in Rust.
+>
+> There are two file systems that are built on top of these abstractions: tarfs
+> and puzzlefs. 
The former has zero unsafe blocks and is included as a patch in +> this series; the latter is described elsewhere [1]. We limit the functionality +> to the bare minimum needed to implement them. +> + +#### BPF + +**[v1: bpf-next: bpf: Support cpu v4 instructions for LoongArch](http://lore.kernel.org/bpf/20231026184337.563801-1-hengqi.chen@gmail.com/)** + +> This patchset adds support for cpu v4 instructions for LoongArch. +> For details, see the proposal ([0]) and its implementation in BPF core ([1]). +> +> [0]: https://lore.kernel.org/bpf/4bfe98be-5333-1c7e-2f6d-42486c8ec039@meta.com/ +> [1]: https://lore.kernel.org/all/20230728011143.3710005-1-yonghong.song@linux.dev/ +> + +**[v1: bpf-next: samples/bpf: Allow building as PIE](http://lore.kernel.org/bpf/cover.1698213811.git.vmalik@redhat.com/)** + +> when trying to build samples/bpf as PIE in Fedora, we came across +> several issues, mainly related to the way compiler/linker flags are +> handled in samples/bpf/Makefile. The first 2 commits in this patchset +> address these issues (see commit messages for details). +> + +**[v6: bpf-next: bpf: File verification with LSM and fsverity](http://lore.kernel.org/bpf/20231024235551.2769174-1-song@kernel.org/)** + +> This set enables file verification with BPF LSM and fsverity. +> +> In this solution, fsverity is used to provide reliable and efficient hash +> of files; and BPF LSM is used to implement signature verification (against +> asymmetric keys), and to enforce access control. +> + +**[v3: bpf-next: exact states comparison for iterator convergence checks](http://lore.kernel.org/bpf/20231024000917.12153-1-eddyz87@gmail.com/)** + +> Iterator convergence logic in is_state_visited() uses state_equals() +> for states with branches counter > 0 to check if iterator based loop +> converges. This is not fully correct because state_equals() relies on +> presence of read and precision marks on registers. 
These marks are not +> guaranteed to be finalized while state has branches. +> Commit message for patch #3 describes a program that exhibits such +> behavior. +> + +**[v1: bpf-next: Descend into struct, array types when searching for fields](http://lore.kernel.org/bpf/20231023220030.2556229-1-davemarchevsky@fb.com/)** + +> One would expect both bpf_kptr_xchg's to be possible, but currently +> only the first one works. From BPF program writer's perspective this +> is unexpected - the array map is an array with struct val elements, so +> is global_array, so why the difference in behavior? The confusion is +> not hypothetical - we stumbled onto this confusing situation while +> developing scheduling BPF programs for the sched_ext project [0]. +> + +**[v4: dwarves: pahole, btf_encoder: support --btf_features](http://lore.kernel.org/bpf/20231023095726.1179529-1-alan.maguire@oracle.com/)** + +> Currently, the kernel uses pahole version checking as the way to +> determine which BTF encoding features to request from pahole. This +> means that such features have to be tied to a specific version and +> as new features are added, additional clauses in scripts/pahole-flags.sh +> have to be added; for example +> + +**[v4: bpf-next: BPF register bounds logic and testing improvements](http://lore.kernel.org/bpf/20231022205743.72352-1-andrii@kernel.org/)** + +> This patch set adds a big set of manual and auto-generated test cases +> validating BPF verifier's register bounds tracking and deduction logic. See +> details in the last patch. +> + +**[v6: bpf-next: Registrating struct_ops types from modules](http://lore.kernel.org/bpf/20231022050335.2579051-1-thinker.li@gmail.com/)** + +> Given the current constraints of the current implementation, +> struct_ops cannot be registered dynamically. This presents a +> significant limitation for modules like coming fuse-bpf, which seeks +> to implement a new struct_ops type. 
To address this issue, a new API +> is introduced that allows the registration of new struct_ops types +> from modules. +> + +### 周边技术动态 + +#### Qemu + +**[v1: Support RISC-V IOPMP](http://lore.kernel.org/qemu-devel/20231025051430.493079-1-ethan84@andestech.com/)** + +> This series implements IOPMP specification v1.0.0-draft4 rapid-k model: +> https://github.com/riscv-non-isa/iopmp-spec/blob/main/riscv_iopmp_specification.pdf +> + +**[v3: riscv: zicntr/zihpm flags and disable support](http://lore.kernel.org/qemu-devel/20231023153927.435083-1-dbarboza@ventanamicro.com/)** + +> In this v3 the patches that added the extensions flags were squashed +> with the patches that handled the disablement of the extensions in TCG, +> as suggested by Alistair in v2. +> + +**[v3: riscv: RVA22U64 profile support](http://lore.kernel.org/qemu-devel/20231020223951.357513-1-dbarboza@ventanamicro.com/)** + +> Based-on: 20231017221226.136764-1-dbarboza@ventanamicro.com +> + +#### U-Boot + +**[Pull request: u-boot-sunxi/master for v2024.01](http://lore.kernel.org/u-boot/20231023104830.792659-1-andre.przywara@arm.com/)** + +> please pull the sunxi/master branch, containing the first part of the +> activities (fixes and reviews), but I didn't want to delay this series +> any longer, since it's been around one year in the making already: +> +> This is mostly about support for the Allwinner R528/T113s SoC, which is +> reportedly the same die as the Allwinner D1, but with the two +> Arm Cortex-A7 cores activated instead of the RISC-V one. +> + ## 20231015:第 64 期 ### 内核动态 -- Gitee