Compare commits

...

82 Commits

Steve Sakoman
72983ac391 build-appliance-image: Update to scarthgap head revision
(From OE-Core rev: 6988157ad983978ffd6b12bcefedd4deaffdbbd1)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
2026-01-02 07:00:05 -08:00
Steve Sakoman
828c9d09b4 poky.conf: bump version for 5.0.15
(From meta-yocto rev: 9bb6e6e8b016a0c9dfe290369a6ed91ef4020535)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
2026-01-02 06:56:54 -08:00
Vijay Anusuri
795103a538 go: Fix CVE-2025-61729
Upstream-Status: Backport from 3a842bd5c6

(From OE-Core rev: 2d6d68e46a430a1dbba7bd8b7d37ff56f4f5a0e6)

Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2026-01-02 06:56:54 -08:00
Vijay Anusuri
d3c87dc830 go: Fix CVE-2025-61727
Upstream-Status: Backport from 04db77a423

(From OE-Core rev: 647e151485bd10a8bbbdbae4825791723c9a5d8e)

Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2026-01-02 06:56:54 -08:00
Vijay Anusuri
a5cecb013b go: Update CVE-2025-58187
Upstream-Status: Backport from ca6a5545ba

(From OE-Core rev: 2d6b089de3ef5e062d852eb93e3ff16997e796ef)

Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2026-01-02 06:56:54 -08:00
Changqing Li
a4841fb5a2 libsoup: fix CVE-2025-12105
Refer:
https://gitlab.gnome.org/GNOME/libsoup/-/merge_requests/481

(From OE-Core rev: 1ac9ad3faf022684ae709f4494a430aee5fb9906)

Signed-off-by: Changqing Li <changqing.li@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2026-01-02 06:56:54 -08:00
Jiaying Song
17a65b334d grub: fix CVE-2025-54770 CVE-2025-61661 CVE-2025-61662 CVE-2025-61663 CVE-2025-61664
References:
https://nvd.nist.gov/vuln/detail/CVE-2025-54770
https://nvd.nist.gov/vuln/detail/CVE-2025-61661
https://nvd.nist.gov/vuln/detail/CVE-2025-61662
https://nvd.nist.gov/vuln/detail/CVE-2025-61663
https://nvd.nist.gov/vuln/detail/CVE-2025-61664

(From OE-Core rev: c28fa3e6421257f50d4ae283cca28fadb621f831)

Signed-off-by: Jiaying Song <jiaying.song.cn@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2026-01-02 06:56:54 -08:00
Martin Jansa
52ba7ab020 cross.bbclass: Propagate dependencies to outhash
Similar to what native and staging are doing since:
https://git.openembedded.org/openembedded-core/commit/meta/classes/native.bbclass?id=d6c7b9f4f0e61fa6546d3644e27abe3e96f597e2
https://git.openembedded.org/openembedded-core/commit/meta/classes/staging.bbclass?id=1cf62882bbac543960e4815d117ffce0e53bda07

Cross task outputs can call native dependencies, and even when the cross
recipe's output doesn't change, it might produce different results when
a called native dependency changes. For example, clang-cross-${TARGET_ARCH}
contains a symlink to the clang binary from clang-native, but when the
clang-native outhash changes, clang-cross-${TARGET_ARCH} is still considered
equivalent and target recipes aren't rebuilt with the new clang binary; see
the workaround in https://github.com/kraj/meta-clang/pull/1140 to make target
recipes depend directly not only on clang-cross-${TARGET_ARCH} but on
clang-native as well.

I have added a small testcase in meta-selftest which demonstrates this issue.
It is not included in this change, but I will send it if useful.

openembedded-core $ ls -1 meta-selftest/recipes-devtools/hashequiv-test/
print-datetime-link-cross.bb
print-datetime-link-native.bb
print-datetime-native.bb
print-datetime-usecross.bb
print-datetime-usenative.bb

print-datetime-native provides a script which prints the defined PRINT_DATETIME variable.

print-datetime-link-native and print-datetime-link-cross both provide a symlink to
the script from print-datetime-native.

print-datetime-usenative and print-datetime-usecross are target recipes using the
native and cross versions of the print-datetime-link-* recipes.

  # after a clean build, everything is rebuilt:
  $ bitbake -k print-datetime-usenative print-datetime-usecross
  WARNING: print-datetime-native-1.0-r0 do_install: print-datetime-native current DATETIME in script is 2025-11-13_20_05
  WARNING: print-datetime-link-native-1.0-r0 do_install: print-datetime-link-native current DATETIME in symlink is 2025-11-13_20_05
  WARNING: print-datetime-link-cross-x86_64-1.0-r0 do_install: print-datetime-link-cross-x86_64 current DATETIME in symlink is 2025-11-13_20_05
  WARNING: print-datetime-usenative-1.0-r0 do_install: print-datetime-usenative current DATETIME from print-datetime-link is 2025-11-13_20_05
  WARNING: print-datetime-usecross-1.0-r0 do_install: print-datetime-usecross current DATETIME from print-datetime-link is 2025-11-13_20_05

  # keep sstate-cache and hashserv.db:
  # print-datetime-usenative is correctly rebuilt, because print-datetime-link-native has a different hash (the print-datetime-native hash changed)
  # print-datetime-usecross wasn't rebuilt, because print-datetime-link-cross-x86_64 doesn't include the changed hash of print-datetime-native
  $ bitbake -k print-datetime-usenative print-datetime-usecross
  WARNING: print-datetime-native-1.0-r0 do_install: print-datetime-native current DATETIME in script is 2025-11-13_20_07
  WARNING: print-datetime-link-native-1.0-r0 do_install: print-datetime-link-native current DATETIME in symlink is 2025-11-13_20_07
  WARNING: print-datetime-link-cross-x86_64-1.0-r0 do_install: print-datetime-link-cross-x86_64 current DATETIME in symlink is 2025-11-13_20_07
  WARNING: print-datetime-usenative-1.0-r0 do_install: print-datetime-usenative current DATETIME from print-datetime-link is 2025-11-13_20_07

This is because the print-datetime-link-cross-x86_64 depsig doesn't include the print-datetime-native signature:

$ cat tmp/work/x86_64-linux/print-datetime-link-cross-x86_64/1.0/temp/depsig.do_populate_sysroot
OEOuthashBasic
18
SSTATE_PKGSPEC=sstate:print-datetime-link-cross-x86_64:x86_64-oe-linux:1.0:r0:x86_64:14:
task=populate_sysroot
drwx                                                                                       .
drwx                                                                                       ./recipe-sysroot-native
drwx                                                                                       ./recipe-sysroot-native/sysroot-providers
-rw-                   32 19fbeb373f781c2504453c1ca04dab018a7bc8388c87f4bbc59589df31523d07 ./recipe-sysroot-native/sysroot-providers/print-datetime-link-cross-x86_64
drwx                                                                                       ./recipe-sysroot-native/usr
drwx                                                                                       ./recipe-sysroot-native/usr/bin
drwx                                                                                       ./recipe-sysroot-native/usr/bin/x86_64-oe-linux
lrwx                                                                                       ./recipe-sysroot-native/usr/bin/x86_64-oe-linux/print-datetime-link -> ../print-datetime

print-datetime-link-native, in contrast, doesn't have this issue, because the print-datetime-native signature is there:

$ cat tmp/work/x86_64-linux/print-datetime-link-native/1.0/temp/depsig.do_populate_sysroot
OEOuthashBasic
18
print-datetime-native: 60f2734a63d708489570ca719413b4662f8368abc9f4760a279a0a5481e4a17b
quilt-native: 65d78a7a5b5cbbf0969798efe558ca28e7ef058f4232fcff266912d16f67a8b8
SSTATE_PKGSPEC=sstate:print-datetime-link-native:x86_64-linux:1.0:r0:x86_64:14:
task=populate_sysroot
drwx                                                                                       .
drwx                                                                                       ./recipe-sysroot-native
drwx                                                                                       ./recipe-sysroot-native/sysroot-providers
-rw-                   26 3d5458be834b2d0e4c65466b9b877d6028ae2210a56399284a23144818666f10 ./recipe-sysroot-native/sysroot-providers/print-datetime-link-native
drwx                                                                                       ./recipe-sysroot-native/usr
drwx                                                                                       ./recipe-sysroot-native/usr/bin
lrwx                                                                                       ./recipe-sysroot-native/usr/bin/print-datetime-link -> print-datetime

With the cross.bbclass fix, the link-cross recipe includes the checksum from the native recipe as well:

$ cat tmp/work/x86_64-linux/print-datetime-link-cross-x86_64/1.0/temp/depsig.do_populate_sysroot
OEOuthashBasic
18
print-datetime-native: 9ceb6c27342eae6b8da86c84685af38fb8927ccc19979aae75b8b1e444b11c5c
quilt-native: 65d78a7a5b5cbbf0969798efe558ca28e7ef058f4232fcff266912d16f67a8b8
SSTATE_PKGSPEC=sstate:print-datetime-link-cross-x86_64:x86_64-oe-linux:1.0:r0:x86_64:14:
task=populate_sysroot
drwx                                                                                       .
drwx                                                                                       ./recipe-sysroot-native
drwx                                                                                       ./recipe-sysroot-native/sysroot-providers
-rw-                   32 19fbeb373f781c2504453c1ca04dab018a7bc8388c87f4bbc59589df31523d07 ./recipe-sysroot-native/sysroot-providers/print-datetime-link-cross-x86_64
drwx                                                                                       ./recipe-sysroot-native/usr
drwx                                                                                       ./recipe-sysroot-native/usr/bin
drwx                                                                                       ./recipe-sysroot-native/usr/bin/x86_64-oe-linux
lrwx                                                                                       ./recipe-sysroot-native/usr/bin/x86_64-oe-linux/print-datetime-link -> ../print-datetime

And print-datetime-usecross is correctly rebuilt whenever print-datetime-native output is different.
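The equivalence problem above can be sketched with a toy model in Python (a simplified illustration of salting an output hash with dependency signatures; names and logic are illustrative, not the actual OEOuthashBasic implementation):

```python
import hashlib

def outhash(output_files, dep_sigs=None):
    # Simplified model of a task output hash: hash the output files,
    # optionally salted with the signatures of dependencies (which is
    # what propagating dependencies to the outhash adds).
    h = hashlib.sha256()
    for name, content in sorted(output_files.items()):
        h.update(name.encode())
        h.update(content.encode())
    for dep, sig in sorted((dep_sigs or {}).items()):
        h.update(f"{dep}: {sig}\n".encode())
    return h.hexdigest()

# A cross recipe whose only output is a symlink: the output bytes never
# change even when the native tool behind the link does.
symlink_only = {"usr/bin/print-datetime-link": "-> ../print-datetime"}

# Without dependency signatures, two builds hash identically, so hash
# equivalence wrongly reports "no change" after the native tool changed.
assert outhash(symlink_only) == outhash(symlink_only)

# With dependency signatures included, a changed native dependency
# changes the outhash, so dependent target recipes are rebuilt.
old = outhash(symlink_only, {"print-datetime-native": "hash-before"})
new = outhash(symlink_only, {"print-datetime-native": "hash-after"})
assert old != new
```

This mirrors the depsig listings above: the fixed link-cross depsig gains a "print-datetime-native: <hash>" line, which is exactly the salt that breaks the false equivalence.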

(From OE-Core rev: dccb7a185fe58a97f33e219b4db283ff4a2071d7)

Signed-off-by: Martin Jansa <martin.jansa@gmail.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Moritz Haase
d792f1a83e curl: Use host CA bundle by default for native(sdk) builds
Fixes YOCTO #16077

Commit 0f98fecd (a backport of 4909a46e) broke HTTPS downloads in opkg in the
SDK; they now fail with:

> SSL certificate problem: self-signed certificate in certificate chain

The root cause is a difference in the handling of related env vars between
curl-cli and libcurl. The CLI will honour CURL_CA_BUNDLE and SSL_CERT_DIR|FILE
(see [0]). Those are set in the SDK via env setup scripts like [1], so curl
continued to work. The library however does not handle those env vars. Thus,
unless the program utilizing libcurl has implemented a similar mechanism itself
and configures libcurl accordingly via the API (like for example Git in [2] and
[3]), there will be no default CA bundle configured to verify certificates
against.

Opkg only supports setting the CA bundle path via the config options
'ssl_ca_file' and 'ssl_ca_path'. Upstreaming and then backporting a patch to
add env var support is not a feasible short-term fix for the issue at hand.
Instead, it's better to ship libcurl in the SDK with a sensible built-in
default, which also helps any other libcurl users.

This patch is based on a proposal by Peter.Marko@siemens.com in the related
mailing list discussion at [4].

(cherry picked from commit 3f819f57aa1960af36ac0448106d1dce7f38c050)

[0]: 400fffa90f/src/tool_operate.c (L2056-L2084)
[1]: https://git.openembedded.org/openembedded-core/tree/meta/recipes-support/curl/curl/environment.d-curl.sh?id=3a15ca2a784539098e95a3a06dec7c39f23db985
[2]: 6ab38b7e9c/http.c (L1389)
[3]: 6ab38b7e9c/http.c (L1108-L1109)
[4]: https://lists.openembedded.org/g/openembedded-core/topic/115993530#msg226751
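The CLI-vs-library difference can be sketched in Python (a hypothetical helper; a real application would pass the result to libcurl via the CURLOPT_CAINFO / CURLOPT_CAPATH options, as Git does in [2] and [3]):

```python
import os

def default_ca_bundle(env=None):
    # Sketch of the env var handling the curl CLI performs (see
    # tool_operate.c in [0]) but libcurl itself does not: honour
    # CURL_CA_BUNDLE, then SSL_CERT_FILE, plus SSL_CERT_DIR.
    # Applications linking libcurl must do this themselves.
    env = os.environ if env is None else env
    bundle = env.get("CURL_CA_BUNDLE") or env.get("SSL_CERT_FILE")
    cadir = env.get("SSL_CERT_DIR")
    return bundle, cadir

# With nothing set, libcurl falls back to its built-in default, which
# is what this commit points at the host CA bundle for native builds.
assert default_ca_bundle({}) == (None, None)
assert default_ca_bundle({"CURL_CA_BUNDLE": "/etc/ssl/certs/ca.pem"}) == (
    "/etc/ssl/certs/ca.pem", None)
```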

(From OE-Core rev: 0e553b685c0a987a7be1eee16b7b5e3e48a036e2)

Signed-off-by: Moritz Haase <Moritz.Haase@bmw.de>
CC: matthias.schiffer@ew.tq-group.com
CC: Peter.Marko@siemens.com
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Enrico Jörns
1df6f0ae91 cml1.bbclass: use consistent make flags for menuconfig
The class called 'make menuconfig' without any of the make variables and
options set in EXTRA_OEMAKE, resulting in quite a different build
environment than actually intended.

For the kernel.bbclass this was fixed in commit 8c616bc0 ("kernel: Use
consistent make flags for menuconfig") by appending ${EXTRA_OEMAKE} to
KCONFIG_CONFIG_COMMAND.

Instead of fixing this individually for additional recipes, we simply
include ${EXTRA_OEMAKE} in KCONFIG_CONFIG_COMMAND by default.

For most class users, this change is directly visible in the generated
.config file:

* For barebox and u-boot, CONFIG_GCC_VERSION previously (and erroneously)
  reflected the host GCC version; it now correctly reflects the target
  toolchain's GCC.

* For u-boot, the "Compiler: " line at the beginning of the .config also
  now prints the target toolchain instead of the host one.

* The kernel had this already set.

* busybox did not produce any difference.

Note that these projects might base some compile-time decisions on e.g. the
actual compiler version used. Having the wrong one in the menuconfig-generated
.config affects at least option visibility and consistency.
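Assuming the fix follows the kernel.bbclass pattern it cites, the new default would look roughly like this (a sketch of the idea, not necessarily the exact committed line):

```bitbake
# cml1.bbclass: run menuconfig with the same make variables and
# options as the main build
KCONFIG_CONFIG_COMMAND ??= "menuconfig ${EXTRA_OEMAKE}"
```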

Reported-by: Ulrich Ölmann <u.oelmann@pengutronix.de>
(From OE-Core rev: a7dd1c221e42fd8df1d6f1c76c6a5ab7a3e19542)

Signed-off-by: Enrico Jörns <ejo@pengutronix.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1b6ddd452837e67b500a84455a234f5edc8250a9)
Signed-off-by: Enrico Jörns <ejo@pengutronix.de>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Kamel Bouhara (Schneider Electric)
12a7475659 oeqa/selftest: oe-selftest: Add SPDX tests for kernel config and PACKAGECONFIG
Add test_kernel_config_spdx and test_packageconfig_spdx to verify
SPDX document generation includes kernel configuration and package
feature metadata when enabled.

(From OE-Core rev: a172a0e8d543796ee78bb66650726168352f1cdf)

Signed-off-by: Kamel Bouhara (Schneider Electric) <kamel.bouhara@bootlin.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2f0ab110d7521510c60e0493ef3cb021130758cd)
Signed-off-by: Kamel Bouhara <kamel.bouhara@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Kamel Bouhara (Schneider Electric)
707dce4f01 spdx30_tasks: Add support for exporting PACKAGECONFIG to SPDX
Introduce the SPDX_INCLUDE_PACKAGECONFIG variable, which when enabled causes
PACKAGECONFIG features to be recorded in the SPDX document as build parameters.

Each feature is recorded as a DictionaryEntry with key PACKAGECONFIG:<feature>
and value enabled or disabled, depending on whether the feature is active in
the current build.

This makes the build-time configuration more transparent in SPDX output and
improves reproducibility tracking. In particular, it allows consumers of the
SBOM to identify enabled/disabled features that may affect security posture
or feature set.
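The mapping described above can be sketched in Python (a hypothetical model of the key/value scheme the DictionaryEntry objects carry, not the actual spdx30_tasks code):

```python
def packageconfig_entries(active, known_features):
    # Build SPDX-style build parameters: one entry per known feature,
    # key "PACKAGECONFIG:<feature>", value "enabled" or "disabled"
    # depending on whether the feature is active in the current build.
    enabled = set(active.split())
    return {f"PACKAGECONFIG:{f}": "enabled" if f in enabled else "disabled"
            for f in sorted(known_features)}

entries = packageconfig_entries("ipv6 ssl", ["ipv6", "ssl", "zlib"])
# entries now marks ipv6/ssl as enabled and zlib as disabled
```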

Reviewed-by: Joshua Watt <JPEWhacker@gmail.com>
(From OE-Core rev: 5cfd0690f819379d9f97c86d2078c3e529efe385)

Signed-off-by: Kamel Bouhara (Schneider Electric) <kamel.bouhara@bootlin.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7ec61ac40345a5c0ef1ce20513a4596989c91ef4)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Kamel Bouhara (Schneider Electric)
6d222750d5 kernel.bbclass: Add task to export kernel configuration to SPDX
Introduce a new bitbake task do_create_kernel_config_spdx that extracts
the kernel configuration from ${B}/.config and exports it into the
recipe's SPDX document as a separate build_Build object.

The kernel config parameters are stored as SPDX DictionaryEntry objects
and linked to the main kernel build using an ancestorOf relationship.

This enables the kernel build's configuration to be explicitly captured
in the SPDX document for compliance, auditing, and reproducibility.

The task is gated by SPDX_INCLUDE_KERNEL_CONFIG (default = "0").
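Extracting the configuration from a .config file can be sketched in Python (illustrative only; the actual task additionally builds the SPDX build_Build object and the ancestorOf relationship):

```python
def parse_dot_config(text):
    # Collect kernel config key/value pairs from .config text, the raw
    # material for the SPDX DictionaryEntry objects. Options that are
    # commented out as "not set" are recorded with value "n".
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("# CONFIG_") and line.endswith(" is not set"):
            entries[line[2:-len(" is not set")]] = "n"
        elif line.startswith("CONFIG_") and "=" in line:
            key, value = line.split("=", 1)
            entries[key] = value
    return entries

cfg = parse_dot_config(
    "CONFIG_NET=y\n"
    "# CONFIG_DEBUG_INFO is not set\n"
    'CONFIG_CMDLINE="console=ttyS0"\n'
)
# cfg maps CONFIG_NET -> "y", CONFIG_DEBUG_INFO -> "n", etc.
```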

Reviewed-by: Joshua Watt <JPEWhacker@gmail.com>
(From OE-Core rev: 1fff29a0428778929ffa530482ebf7db95f1e0ae)

Signed-off-by: Kamel Bouhara (Schneider Electric) <kamel.bouhara@bootlin.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 228a968e7c47d811c06143279bdb0f9c5f374bef)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Aleksandar Nikolic
f327b4da74 scripts/install-buildtools: Update to 5.0.14
Update to the 5.0.14 release of the 5.0 series for buildtools

(From OE-Core rev: 4c85440cd95d9cd007ef4346ecc9580806526c96)

Signed-off-by: Aleksandar Nikolic <aleksandar.nikolic@zeiss.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Mingli Yu
4faff2acb8 ruby: Upgrade 3.3.5 -> 3.3.10
Per the ruby maintenance policy [1], the 3.3.x branch is still in normal
maintenance, so upgrade to the latest version 3.3.10 to fix many security
issues and bugs.

Remove the fixes for CVE-2025-27219, CVE-2025-27220 and CVE-2025-27221, as
they have been included in the new version.

[1] https://www.ruby-lang.org/en/downloads/branches/

(From OE-Core rev: bad372ad8ec33334c6a74c077bf975851c1e59d2)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Mingli Yu
fee180d783 libxslt: Fix CVE-2025-11731
Backport the patch [1] to fix CVE-2025-11731.

[1] fe508f201e

(From OE-Core rev: e70c70e0359418197699f18c9e2cbfd7ebac705d)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Yash Shinde
d27f4a8879 binutils: fix CVE-2025-11840
CVE-2025-11840

PR 33455
[BUG] A SEGV in vfinfo at ldmisc.c:527
A reloc howto set up with EMPTY_HOWTO has a NULL name.  More than one
place emitting diagnostics assumes a reloc howto won't have a NULL
name.

https://sourceware.org/bugzilla/show_bug.cgi?id=33455

Upstream-Status: Backport [https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=f6b0f53a36820da91eadfa9f466c22f92e4256e0]

(From OE-Core rev: d477a67f623da424c3165bde25d76152636b1f50)

Signed-off-by: Yash Shinde <Yash.Shinde@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Yash Shinde
a247883e38 binutils: fix CVE-2025-11839
CVE-2025-11839

PR 33448
[BUG] Aborted in tg_tag_type at prdbg.c:2452
Remove call to abort in the DGB debug format printing code, thus allowing
the display of a fuzzed input file to complete without triggering an abort.

https://sourceware.org/bugzilla/show_bug.cgi?id=33448

Upstream-Status: Backport [https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=12ef7d5b7b02d0023db645d86eb9d0797bc747fe]

(From OE-Core rev: d60c144e082d6e6db4f9971bb886751199cd433f)

Signed-off-by: Yash Shinde <Yash.Shinde@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Deepesh Varatharajan
c65b128458 binutils: Fix CVE-2025-11494
Since x86 .eh_frame section may reference _GLOBAL_OFFSET_TABLE_, keep
_GLOBAL_OFFSET_TABLE_ if there is dynamic section and the output
.eh_frame section is non-empty.

Backport a patch from upstream to fix CVE-2025-11494
Upstream-Status: Backport [https://sourceware.org/git/?p=binutils-gdb.git;a=patch;h=b6ac5a8a5b82f0ae6a4642c8d7149b325f4cc60a]

(From OE-Core rev: e087881bece2884f8d1a3c6d0dd7d69b40eb6732)

Signed-off-by: Deepesh Varatharajan <Deepesh.Varatharajan@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Daniel Turull
de3a6b0d24 cmake-native: fix CVE-2025-9301
Add the fix for the native recipe, since the previous commit for cmake missed it:
5d8a6fb52c cmake: fix CVE-2025-9301

CC: Saravanan <saravanan.kadambathursubramaniyam@windriver.com>
CC: Steve Sakoman <steve@sakoman.com>
(From OE-Core rev: 24f831be7d99d5ea3fe304b9aa2d82e7e2d4a5fa)

Signed-off-by: Daniel Turull <daniel.turull@ericsson.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Jiaying Song
b9843e68be python3-urllib3: fix CVE-2025-66418 CVE-2025-66471
References:
https://nvd.nist.gov/vuln/detail/CVE-2025-66418
https://nvd.nist.gov/vuln/detail/CVE-2025-66471

(From OE-Core rev: d9f52c5f86bcc4716e384fe5c01c03d386d60446)

Signed-off-by: Jiaying Song <jiaying.song.cn@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Kai Kang
67ac024a29 qemu: fix CVE-2025-12464
Backport patch to fix CVE-2025-12464 for qemu.

Reference: https://gitlab.com/qemu-project/qemu/-/commit/a01344d9d7

(From OE-Core rev: c3108b279bd5c49a3c0ea35880fe7fd4f5b75b96)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Adarsh Jagadish Kamini
997f8de24c rsync: fix CVE-2025-10158
Fix an out-of-bounds read triggered by a malicious rsync client
acting as a receiver. The issue can be exploited with read access
to an rsync module.

CVE: CVE-2025-10158

(From OE-Core rev: 110933506d7a1177d1a074866d08fe0b0da612d7)

Signed-off-by: Adarsh Jagadish Kamini <adarsh.jagadish.kamini@est.tech>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Deepak Rathore
85e5f0fa1e cups 2.4.11: Fix CVE-2025-61915
Upstream Repository: https://github.com/OpenPrinting/cups.git

Bug Details: https://nvd.nist.gov/vuln/detail/CVE-2025-61915
Type: Security Fix
CVE: CVE-2025-61915
Score: 6.7
Patch: https://github.com/OpenPrinting/cups/commit/db8d560262c2

(From OE-Core rev: ca252aac4e50b7ed8864bf7482a86fe7129e737e)

Signed-off-by: Deepak Rathore <deeratho@cisco.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Deepak Rathore
15a18fae40 cups 2.4.11: Fix CVE-2025-58436
Upstream Repository: https://github.com/OpenPrinting/cups.git

Bug Details: https://nvd.nist.gov/vuln/detail/CVE-2025-58436
Type: Security Fix
CVE: CVE-2025-58436
Score: 5.5
Patch: https://github.com/OpenPrinting/cups/commit/5d414f1f91bd

(From OE-Core rev: 6a721aad5f531ac74996386cbaaa0173c2c5001a)

Signed-off-by: Deepak Rathore <deeratho@cisco.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-31 07:49:31 -08:00
Peter Marko
553530a8ac Revert "lib/oe/go: document map_arch, and raise an error on unknown architecture"
This reverts commit e6de433ccb2784581d6c775cce97f414ef9334b1.

This introduced a breaking change which is not suitable for backport to
stable LTS branches.

(From OE-Core rev: 2b3d2b671a149cbeea2bdc9ba42192da2015c3b7)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-17 08:48:38 -08:00
Vijay Anusuri
719a5fe1e3 libssh2: fix regression in KEX method validation (GH-1553)
Resolves: https://github.com/libssh2/libssh2/issues/1553

Regression caused by
00e2a07e82

Backport fix
4beed72458

(From OE-Core rev: c348296ff0181921e8aa5a16d8d90db75f7b3e7c)

Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-17 08:48:38 -08:00
Vijay Anusuri
76d0c749da libssh2: upgrade 1.11.0 -> 1.11.1
Changelog: https://github.com/libssh2/libssh2/releases/tag/libssh2-1.11.1

Dropped CVE-2023-48795.patch which is already included in version 1.11.1

Resolves: https://github.com/libssh2/libssh2/issues/1326

License-Update: Copyright symbols were changed from (C) to lowercase (c)

ptest results:

root@qemux86-64:~# ptest-runner libssh2
START: ptest-runner
2025-12-08T12:37
BEGIN: /usr/lib/libssh2/ptest
PASS: mansyntax.sh
PASS: test_simple
PASS: test_sshd.test
DURATION: 6
END: /usr/lib/libssh2/ptest
2025-12-08T12:37
STOP: ptest-runner
TOTAL: 1 FAIL: 0

(From OE-Core rev: 71316433eb018e831d72a873365aa53ed04f14f4)

Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-17 08:48:38 -08:00
Peter Marko
f0d2110a32 libmicrohttpd: disable experimental code by default
Introduce a new packageconfig to explicitly avoid compilation of the
experimental code. Note that the code was not compiled by default before
this patch either; this change makes that explicit and makes it possible
to check for the flag in the cve-check code.

This is a less intrusive change than a patch removing the code, which was
rejected in patch review.

This will solve CVE-2025-59777 and CVE-2025-62689, as the vulnerable code
is not compiled by default.
Set the appropriate CVE status for these CVEs based on the new packageconfig.

(From OE-Core rev: 9e3c0ae261afb7b9ff9528dbc147fb6c89d5a624)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-17 08:48:37 -08:00
Hitendra Prajapati
cc239ca412 libxml2: Security fix for CVE-2025-7425
CVE-2025-7425
libxslt: heap-use-after-free in xmlFreeID caused by `atype` corruption

Origin: https://launchpad.net/ubuntu/+source/libxml2/2.9.14+dfsg-1.3ubuntu3.6
Ref : https://security-tracker.debian.org/tracker/CVE-2025-7425

Upstream-Status: Backport from https://gitlab.gnome.org/GNOME/libxslt/-/issues/140

(From OE-Core rev: 315882f25ac3c5e5d210557fd863b3a0fff28850)

Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-17 08:48:37 -08:00
Peter Marko
0549c04c9f libpng: patch CVE-2025-66293
Pick patches per nvd report [1] and github advisory [2].

[1] https://nvd.nist.gov/vuln/detail/CVE-2025-66293
[2] https://github.com/pnggroup/libpng/security/advisories/GHSA-9mpm-9pxh-mg4f

(From OE-Core rev: f5f0af82d8775180d76e6448a14f74cc70edf963)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-17 08:48:37 -08:00
Daniel Turull
8bddd959ff classes/create-spdx-2.2: Define SPDX_VERSION to 2.2
SPDX_VERSION is used in DEPLOY_DIR_SPDX, but if it is not defined, it
will default to SPDX-1.1.

Define SPDX_VERSION to have the correct deploy path, to align
with master branch behaviour.

The change in path was introduced in 8996d0899d

CC: Kamel Bouhara (Schneider Electric) <kamel.bouhara@bootlin.com>
CC: JPEWhacker@gmail.com
(From OE-Core rev: 04cc49593a0ba2c51e4f4d477d4587079735b624)

Signed-off-by: Daniel Turull <daniel.turull@ericsson.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-17 08:48:37 -08:00
Moritz Haase
9497778a4d curl: Ensure 'CURL_CA_BUNDLE' from host env is indeed respected
Due to what looks like a copy'n'paste mistake, the environment setup script
might override 'CURL_CA_BUNDLE' from the host env instead of leaving it
untouched. Fix that.

(cherry picked from commit 545e43a7a45be02fda8fc3af69faa20e889f58c4)

CC: changqing.li@windriver.com
CC: raj.khem@gmail.com
CC: Peter.Marko@siemens.com

(From OE-Core rev: ef198b0c6063ede32cb93fe44eb89937c076a073)

Signed-off-by: Moritz Haase <Moritz.Haase@bmw.de>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-05 07:13:42 -08:00
Peter Marko
295e960b85 libpng: patch CVE-2025-65018
Pick commit per NVD report.
Add two patches to apply it cleanly.

(From OE-Core rev: 4e03bed20bceb455cb46dcf9564ad5a8525b207d)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-05 07:13:42 -08:00
Peter Marko
ea30165e8b libpng: patch CVE-2025-64720
Pick commit per NVD report.

(From OE-Core rev: e8fbb7521e0113c467e07ba473a46612709c5311)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-05 07:13:42 -08:00
Peter Marko
eed16ae613 libpng: patch CVE-2025-64506
Pick commit per NVD report.

(From OE-Core rev: f3bdbd782eed2b597927df489a7d38a22fbba5ed)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-05 07:13:42 -08:00
Peter Marko
b0b3210686 libpng: patch CVE-2025-64505
Pick commit per NVD report.
Add two patches to apply it cleanly.

(From OE-Core rev: 285a495b8b0e8fa93a0a0884f466f1adca76a28a)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-05 07:13:42 -08:00
Praveen Kumar
792947d444 python3: fix CVE-2025-6075
If the value passed to os.path.expandvars() is user-controlled a
performance degradation is possible when expanding environment variables.

Reference:
https://nvd.nist.gov/vuln/detail/CVE-2025-6075

Upstream-patch:
9ab89c026a

(From OE-Core rev: 5313fa5236cd3943f90804de2af81358971894bc)

Signed-off-by: Praveen Kumar <praveen.kumar@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-05 07:13:42 -08:00
Peter Marko
adc9e377c8 gnutls: patch CVE-2025-9820
This CVE is announced under [1].
Pick commit which mentions this CVE per [2].

[1] https://www.gnutls.org/security-new.html#GNUTLS-SA-2025-11-18
[2] https://security-tracker.debian.org/tracker/CVE-2025-9820

(From OE-Core rev: 37dcb0f617f02f95293455d58927e0da4e768cc4)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-05 07:13:42 -08:00
Peter Marko
e6bfeed8f3 libarchive: patch CVE-2025-60753
Pick patch from [3] marked in [2] mentioned in [1].

[1] https://nvd.nist.gov/vuln/detail/CVE-2025-60753
[2] https://github.com/libarchive/libarchive/issues/2725
[3] https://github.com/libarchive/libarchive/pull/2787

(From OE-Core rev: 1fbd9eddbdf0da062df0510cabff6f6ee33d5752)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-01 07:34:55 -08:00
Peter Marko
842fd60ebb libarchive: patch 3.8.3 security issue 2
Pick patch [2] as listed in [1].

[1] https://github.com/libarchive/libarchive/releases/tag/v3.8.3
[2] https://github.com/libarchive/libarchive/pull/2768

(From OE-Core rev: efe032eef7034009f1202985b2036fc79e06bddf)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-01 07:34:55 -08:00
Peter Marko
03c1257cfd libarchive: patch 3.8.3 security issue 1
Pick patch [2] as listed in [1].
To apply it cleanly, add two additional patches from branch patch/3.8.

[1] https://github.com/libarchive/libarchive/releases/tag/v3.8.3
[2] https://github.com/libarchive/libarchive/pull/2753

(From OE-Core rev: 11f782c1ae9962a2faa98bff3566e49fbf6db017)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-01 07:34:55 -08:00
Benjamin Robin (Schneider Electric)
35a6ffc2db vex: fix rootfs manifest
The rootfs VEX file is created by gathering files from CVE_CHECK_DIR
(the deploy directory); however, recipes generate the files only in
CVE_CHECK_DIR (the log directory).
This makes the rootfs VEX file always empty, without any message.

The code is copied from the cve_check class, which writes to both
locations, so keep them aligned and make vex write both files as well.

Also add a warning for the case where a CVE file is still missing.
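The fix amounts to writing each per-recipe file to both locations so the rootfs gathering step finds it; a minimal Python sketch (directory and file names are illustrative, not the actual CVE_CHECK_* variable values):

```python
import os
import tempfile

def write_cve_file(content, log_dir, deploy_dir, name):
    # Write the per-recipe CVE/VEX file to both the log directory and
    # the deploy directory, so code that gathers from either location
    # (as the rootfs VEX manifest does) sees the same data.
    for directory in (log_dir, deploy_dir):
        os.makedirs(directory, exist_ok=True)
        with open(os.path.join(directory, name), "w") as f:
            f.write(content)

base = tempfile.mkdtemp()
log_d = os.path.join(base, "log")
deploy_d = os.path.join(base, "deploy")
write_cve_file("{}", log_d, deploy_d, "recipe_cve.json")
```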

(From OE-Core rev: 7493eeed6d53bc704f558a0ccf8a0b5195381873)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ee6541d0940c65685aaafd7d41a59a9406392e7d)
Signed-off-by: Benjamin Robin (Schneider Electric) <benjamin.robin@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-01 07:34:55 -08:00
Benjamin Robin (Schneider Electric)
86f11fe94f spdx: extend CVE_STATUS variables
If spdx is generated without inheriting the cve/vex classes (which is
the poky default), only explicitly set CVE_STATUS fields are handled.
Calculated ones (e.g. from CVE_STATUS_GROUPS) are ignored.

Fix this by expanding the CVE_STATUS in spdx classes.

(From OE-Core rev: 23a4e02542252657fa45fd4a605aec0af9178e0b)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ead9c6a8770463c21210a57cc5320f44f7754dd3)
Signed-off-by: Benjamin Robin (Schneider Electric) <benjamin.robin@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-01 07:34:55 -08:00
Benjamin Robin (Schneider Electric)
d1f8b0c6dd cve-check: extract extending CVE_STATUS to library function
The same code for extending CVE_STATUS with CVE_CHECK_IGNORE and
CVE_STATUS_GROUPS is used in multiple places.
Create a library function so the code lives in a single place and is
ready for reuse by additional classes.
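As a sketch of the consolidation described above, the helper below folds the legacy CVE_CHECK_IGNORE list and CVE_STATUS_GROUPS entries into per-CVE statuses. The function name, the plain-dict stand-in for the BitBake datastore, and the flag-key layout are illustrative assumptions, not the actual oe.cve_check API.

```python
# Illustrative sketch only: a plain dict stands in for the BitBake datastore.

def extend_cve_status(d):
    """Fold CVE_CHECK_IGNORE and CVE_STATUS_GROUPS into CVE_STATUS entries."""
    # Legacy variable: a space-separated list of CVEs to ignore.
    for cve in (d.get("CVE_CHECK_IGNORE") or "").split():
        d.setdefault("CVE_STATUS[%s]" % cve, "ignored")
    # Groups assign one status to a whole list of CVEs at once.
    for group in (d.get("CVE_STATUS_GROUPS") or "").split():
        status = d.get("%s[status]" % group, "")
        for cve in (d.get(group) or "").split():
            d.setdefault("CVE_STATUS[%s]" % cve, status)
    return d
```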

Conflicts:
  meta/classes/cve-check.bbclass
  meta/lib/oe/cve_check.py

(From OE-Core rev: ddd295c7d4c313fbbb24f7a5e633d4adfea4054a)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 45e18f4270d084d81c21b1e5a4a601ce975d8a77)
Signed-off-by: Benjamin Robin (Schneider Electric) <benjamin.robin@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-01 07:34:55 -08:00
Benjamin Robin (Schneider Electric)
cf3b1a7e6d vex.bbclass: add a new class
The "vex" class generates the minimum information that is necessary
for VEX generation by an external CVE checking tool. It is a drop-in
replacement for "cve-check". It uses the same variables from recipes
to make migration and backporting easier.

The goal of this class is to allow generation of the CVE list of
an image or distribution on demand, including the latest information
from vulnerability databases. Vulnerability data changes every day,
so a status generated at build time becomes out of date very soon.

Research done for this work shows that the current VEX formats (CSAF
and OpenVEX) do not provide enough information to generate such
rolling information. Instead, we extract the needed data from recipe
annotations (package names, CPEs, versions, CVE patches applied...)
and store it for later use in a format that is an extension of the
CVE-check JSON output format.

This output can then be used (separately or with the SPDX of the same
build) by an external tool to generate the vulnerability annotations
and VEX statements in standard formats.

When backporting this feature, do_generate_vex() had to be modified
to use the "old" get_patched_cves() API.

(From OE-Core rev: 123a60bc19987e99d511b1f515e118022949be7e)

Signed-off-by: Marta Rybczynska <marta.rybczynska@syslinbit.com>
Signed-off-by: Samantha Jalabert <samantha.jalabert@syslinbit.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6352ad93a72e67d6dfa82e870222518a97c426fa)
Signed-off-by: Benjamin Robin (Schneider Electric) <benjamin.robin@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-01 07:34:55 -08:00
Benjamin Robin (Schneider Electric)
976648aa60 spdx30: provide all CVE_STATUS, not only Patched status
In scarthgap, the `oe.cve_check.get_patched_cves()` method only returns
CVEs with a "Patched" status. We want to retrieve all annotations,
including those with an "Ignored" status. Therefore, to avoid modifying
the current API, we integrate the logic for retrieving all CVE_STATUS
values directly into `spdx30_task`.

(From OE-Core rev: 9a204670b1c0daedf1ed8ff944f8e5443b39c8f7)

Signed-off-by: Benjamin Robin (Schneider Electric) <benjamin.robin@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-01 07:34:55 -08:00
Kai Kang
91ba7b5d66 Revert "spdx: Update for bitbake changes"
This reverts part of commit 4859cdf97fd9a260036e148e25f0b78eb393df1e.

Modification of meta/classes/create-spdx-2.2.bbclass was not backported,
so it does not need to be considered.

The commit updates spdx according to the bitbake change

* 2515fbd10 fetch: Drop multiple branch/revision support for single git urls

which was not backported to scarthgap.

So revert the other parts of commit 4859cdf97fd9a260036e148e25f0b78eb393df1e.

(From OE-Core rev: f3bfb98d1cf928678d9931308c116e9e6ec64ba5)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-12-01 07:34:54 -08:00
Lee Chee Yang
d71d81814a migration-guides: add release notes for 4.0.31
(From yocto-docs rev: b0f5cc276639916df197435780b3e94accd4af41)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit 992d0725e8b4fdcdc2e9a101ce51ebef94a00112)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:36 -08:00
Ross Burton
e54c87a8b5 documentation: link to the Releases page on yoctoproject.org instead of wiki
We have a machine-generated Releases page[1] which is preferable to the
wiki.

[1] https://www.yoctoproject.org/development/releases/

(From yocto-docs rev: 5af5e64e42732c0919cad499e79ff35ca4255a86)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit 46a9172fd17aa518028e35b8c874e74889079094)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:36 -08:00
Quentin Schulz
dfa0c8dc8b overview-manual: migrate to SVG + fix typo
The original PNG had a typo (YP-Comptible instead of YP-Compatible).

Instead of patching a PNG, let's migrate to an SVG with the typo already
fixed.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: fd023b25026b562ff2de972a44bd2c773470208f)

Signed-off-by: Quentin Schulz <quentin.schulz@cherry.de>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit 9f3c2a9113b329f7efdd22d3b3fbe272a44bc654)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:36 -08:00
Quentin Schulz
00d09f8fd4 dev-manual: debugging: use bitbake-getvar in Viewing Variable Values section
We should recommend using the bitbake-getvar command wherever possible,
as its output is much less confusing and overwhelming than that of
bitbake -e.

Unfortunately, bitbake-getvar currently doesn't list Python tasks or
functions, unlike bitbake -e, so keep the latter for some corner cases.

[AG: Moroever -> Moreover typo fix]

(From yocto-docs rev: 3f1ca1c3ef60dfabe5b2a2c6e53d14edad64fb06)

Signed-off-by: Quentin Schulz <quentin.schulz@cherry.de>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit 41e4e05369c4e028c679749b7b62434327927a09)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:36 -08:00
Quentin Schulz
1b8b7802d1 ref-manual: variables: migrate the OVERRIDES note to bitbake-getvar
Wherever possible, we should use bitbake-getvar as it's the recommended
tool so let's do that.

(From yocto-docs rev: b9453c7ce44a6bcae7cdc05f2b2cd47b525726e9)

Signed-off-by: Quentin Schulz <quentin.schulz@cherry.de>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit 2293a3f2767895e9fb5c3e8f3ec11bb4951a7127)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:36 -08:00
Quentin Schulz
bfa2803f8e kernel-dev: common: migrate bitbake -e to bitbake-getvar
bitbake-getvar has been the recommended tool for a few releases now, so
let's use it instead of bitbake -e.

While at it, use a cross-reference for "OpenEmbedded Build System".

(From yocto-docs rev: 29836a95c01cdb99c38802f55a92f32377b8c524)

Signed-off-by: Quentin Schulz <quentin.schulz@cherry.de>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit 54585646d8220f8de1ba2c7246cb3f2fcbc59583)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:36 -08:00
Lee Chee Yang
a6b0e3d404 migration-guides: add release notes for 5.0.13
(From yocto-docs rev: fefa33295b2b96d5bf91dfdec3c6e6913dbf1df2)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit 5a6f63e955807d6aab4a9dbcb4560078c2cec77f)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:36 -08:00
Walter Werner SCHNEIDER
6906c4236f kernel-dev: add disable config example
Make it clearer that a configuration fragment can also be used to
disable a configuration option.
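A minimal fragment illustrating both directions might look like this (the option names are chosen only as examples); note that disabling uses the Kconfig "is not set" comment form:

```
# example.cfg -- enable one option, disable another
CONFIG_IKCONFIG=y
# CONFIG_DEBUG_INFO is not set
```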

(From yocto-docs rev: a586a0ecacb4e40f4f3aeeb01dbefbdfcee8ae35)

Signed-off-by: Walter Werner SCHNEIDER <contact@schnwalter.eu>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit d38ef467081ee73bf23f240ace54b849a3a87612)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:36 -08:00
Robert P. J. Day
938b1ad77a dev-manual/new-recipe.rst: typo, "whith" -> "which"
Fix typo "whith", should be "which".

(From yocto-docs rev: bec165a3505f298b668bcf2a0f03fb8dcfccc510)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit f98b25f7f7522cf223beb001cabef870d6dd8c10)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:36 -08:00
Robert P. J. Day
e1453702a5 dev-manual/new-recipe.rst: replace 'bitbake -e' with 'bitbake-getvar'
Replace the legacy call to 'bitbake -e' to get the value of a recipe's
variable with the newer call to 'bitbake-getvar'.

(From yocto-docs rev: 042c4cb8c6291be857a672144b573a5eb10f1ead)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Reviewed-by: Quentin Schulz <quentin.schulz@cherry.de>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit ed7c0766ef5f13b90943a69e64f8e8713d05e864)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Robert P. J. Day
d6fd50a616 dev-manual/new-recipe.rst: update "recipetool -h" output
Update the output of "recipetool -h" to include the missing "edit"
subcommand.

(From yocto-docs rev: 09039d05e485a842690f9f54930400e02eef1c2c)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit 092d688349b0b6bb10ae6fbbab7d82801964daf5)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Robert P. J. Day
9e6d5e0849 dev-manual/layers.rst: document "bitbake-layers show-machines"
The "show-machines" subcommand is not mentioned in the docs; add it.

(From yocto-docs rev: 98190334b2ad75421e8bf2cc84bd920311398670)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit b4320cdc4df08c59a24d5247b3895dd602554fa0)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Enrico Jörns
dbf5ddbdb5 dev-manual/sbom.rst: reflect that create-spdx is enabled by default
Since nanbield (b34032ec "defaultsetup: Inherit create-spdx by
default"), the create-spdx class is pulled in by default, not only by
poky.

Adapt the text to reflect this and also change INHERIT to INHERIT_DISTRO
since this is the more concrete variable to modify for disabling
create-spdx.

[AG: fix conflicts]

(From yocto-docs rev: 4c47eb98e096121d71663342dde86b8c9256c9b5)

Signed-off-by: Enrico Jörns <ejo@pengutronix.de>
Reviewed-by: Quentin Schulz <quentin.schulz@cherry.de>
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
(cherry picked from commit 2b6228943443faf76c9869a0daeccfe7f93688ca)
Signed-off-by: Antonin Godard <antonin.godard@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Peter Marko
fb988ddb65 oeqa/sdk/buildepoxy: skip test in eSDK
Currently meson inside eSDKs only works with fully populated eSDKs,
but our testing uses minimal eSDKs, so skip the test if the eSDK is a
minimal build.  A bug has been filed to resolve this.

This is a minimal change extracted from an OE-Core commit which has this
only as a minor comment: 575e0bf52db0467d88af4b5fe467b682f10ca62a

(From OE-Core rev: 7cfacaee1b3319e561036512a849e762d0f68a5e)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Peter Marko
2ab61fcf7d oeqa: drop unnecessary dependency from go runtime tests
The tests do not use the scp command, so openssh-scp is not needed.

(From OE-Core rev: 4e10e7848cb10307f133f181b41563c995df032a)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Peter Marko
f5c5d1dd6c oeqa: fix package detection in go sdk tests
The tests are skipped if the architecture contains a dash: TARGET_ARCH
contains an underscore while the package name contains a dash, so the
translation needs to be done here.

Note that the poky distro default arch contains a dash:
MACHINE="qemux86-64"
TARGET_ARCH="x86_64"
ERROR: Nothing PROVIDES 'go-cross-canadian-x86_64'. Close matches:
  gcc-cross-canadian-x86-64
  gdb-cross-canadian-x86-64
  go-cross-canadian-x86-64
TRANSLATED_TARGET_ARCH="x86-64"

Quoting meta/classes-recipe/cross-canadian.bbclass:
TRANSLATED_TARGET_ARCH is added into PN
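The translation in question is a simple underscore-to-dash substitution; the helper name below is illustrative (cross-canadian.bbclass derives TRANSLATED_TARGET_ARCH similarly and inserts it into PN):

```python
def translated_target_arch(target_arch: str) -> str:
    """Map TARGET_ARCH (underscores) to the form used in package names (dashes)."""
    return target_arch.replace("_", "-")

# With MACHINE="qemux86-64", TARGET_ARCH is "x86_64", so the resolvable
# cross-canadian package name is "go-cross-canadian-" + "x86-64".
```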

(From OE-Core rev: 82a46b70bfba7c4ce4fd20e2658b182b03e55037)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Osama Abdelkader
72fd157b91 go: remove duplicate arch map in sdk test
ARCH_MAP duplicates an existing map in meta/lib/oe/go.py;
use oe.go's map_arch instead.

(From OE-Core rev: c2ba36f41777d347fd5ffcd9b6862638e5f35a1b)

(From OE-Core rev: 21f3a6c661307eab5530b51704c3a338013c9c5c)

Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Osama Abdelkader
6ba417e775 go: extend runtime test
Extend the go runtime test with a simple test file and a simple
go module test to validate go compilation and execution on the
target.

(From OE-Core rev: e3b2b9170f76f4bbdc41ea6ba7bccffc17d01968)

(From OE-Core rev: bda3e3711f84394423c15f48fb4e75258fec199a)

Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Osama Abdelkader
48b9d014e9 go: add sdk test
- Add meta/lib/oeqa/sdk/cases/go.py with GoCompileTest and GoHostCompileTest classes
- Test validates Go cross-compilation toolchain functionality
- Includes native compilation, cross-compilation, and Go module support
- Uses dynamic architecture detection for portability

(From OE-Core rev: 17015f692a6bf3697a89db51bbc4673a5efa1497)

(From OE-Core rev: 506f4e8c99b164673ba7d1c19e10d240f4df0376)

Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Alexander Kanavin
7b9540b6b5 goarch.bbclass: do not leak TUNE_FEATURES into crosssdk task signatures
The default assignments look like this:
TARGET_GO386 = "${@go_map_386(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES'), d)}"

TUNE_FEATURES is a target-specific variable, and so should be used
only for target builds. The change is similar to what is already done
for native packages.

(From OE-Core rev: cfff8e968257c44880caa3605e158764ed5c6a2a)

(From OE-Core rev: e8d475b9b6d7b1ac3b0cfe367faabc07deb663b0)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Ross Burton
6707dcecb2 lib/oe/go: document map_arch, and raise an error on unknown architecture
Add a comment explaining what this function does and where the values
come from.

If the architecture isn't known, raise a KeyError so it fails quickly,
instead of returning an empty string which could fail mysteriously
later.
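The fail-fast behaviour can be sketched as a plain dict lookup; the table below lists only a few entries and is an assumption, not the full mapping from meta/lib/oe/go.py:

```python
# Sketch: map an OE TARGET_ARCH value to the Go GOARCH name.
_GOARCH_BY_ARCH = {
    "x86_64": "amd64",
    "i686": "386",
    "aarch64": "arm64",
}

def map_arch(arch: str) -> str:
    # A dict lookup raises KeyError for unknown architectures, so the
    # failure is immediate instead of an empty string propagating onward.
    return _GOARCH_BY_ARCH[arch]
```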

(From OE-Core rev: 025414c16319b068df1cd757ad9a3c987a6b871d)

(From OE-Core rev: e6de433ccb2784581d6c775cce97f414ef9334b1)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Ross Burton
cac0ff2d90 oe/sdk: fix empty SDK manifests
The SDK manifests are generated by listing the sstate that was used,
but the code hardcodes that the sstate data filenames end in .tgz.

This has not been the case since sstate switched to Zstd[1] in 2021,
which meant that all of the tests which checked for packages existing
were being skipped as the manifests were empty.  For example, see a
representative core-image-sato eSDK test run[2]:

RESULTS - cmake.CMakeTest.test_assimp: SKIPPED (0.00s)
RESULTS - gtk3.GTK3Test.test_galculator: SKIPPED (0.00s)
RESULTS - kmod.KernelModuleTest.test_cryptodev: SKIPPED (0.00s)
RESULTS - maturin.MaturinDevelopTest.test_maturin_develop: SKIPPED (0.00s)
RESULTS - maturin.MaturinTest.test_maturin_list_python: SKIPPED (0.00s)
RESULTS - meson.MesonTest.test_epoxy: SKIPPED (0.00s)
RESULTS - perl.PerlTest.test_perl: SKIPPED (0.00s)
RESULTS - python.Python3Test.test_python3: SKIPPED (0.00s)

All of those tests should have been run.

Solve this by generalising the filename check so that it doesn't care
which specific compression algorithm is used.

[1] oe-core 0710e98f40e ("sstate: Switch to ZStandard compressor support")
[2] https://autobuilder.yoctoproject.org/valkyrie/#/builders/16/builds/1517/steps/15/logs/stdio
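The shape of the fix is to stop matching on one literal extension; a sketch with invented file names (the real code inspects sstate object names, not these exact strings):

```python
def is_sstate_archive(path: str) -> bool:
    """Accept sstate archives regardless of the compression suffix."""
    name = path.rsplit("/", 1)[-1]
    # Old check: name.endswith(".tgz") -- silently matched nothing after
    # the 2021 switch to .tar.zst archives, leaving manifests empty.
    return name.startswith("sstate:") and (".tar." in name or name.endswith(".tgz"))
```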

(From OE-Core rev: 062a525bd36c672f372dabe8d9f0fbe355c7e58b)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Ross Burton
d4a084d920 testsdk: allow user to specify which tests to run
Following the usage of TEST_SUITES in testimage, add TESTSDK_SUITES to
specify the list of tests to execute. By default the variable is empty,
which means to run all discovered tests.

This makes it easier to work on a single test without having to run all
of the tests.
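The selection rule mirrors TEST_SUITES: an empty value selects everything. A minimal sketch (the function name and list handling are illustrative, not the testsdk implementation):

```python
def select_suites(discovered, testsdk_suites=""):
    """Return the tests to run; an empty TESTSDK_SUITES means all of them."""
    wanted = set(testsdk_suites.split())
    if not wanted:
        return list(discovered)
    return [name for name in discovered if name in wanted]
```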

(From OE-Core rev: 28d437c52c77889b2ede0fc2f2d6777c5b0a553d)

(From OE-Core rev: a93e21419476658f24220193fb0183efeb7a184f)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Gyorgy Sarvari
4af1396e46 glslang: fix compiling with gcc15
Backport a patch that fixes a compilation failure with gcc15:

| .../git/SPIRV/SpvBuilder.h:238:30: error: ‘uint32_t’ has not been declared
|   238 |     Id makeDebugLexicalBlock(uint32_t line);
|       |                              ^~~~~~~~
| .../git/SPIRV/SpvBuilder.h:64:1: note: ‘uint32_t’ is defined in header ‘<cstdint>’; this is probably fixable by adding ‘#include <cstdint>’

(From OE-Core rev: cd0039c22d7aa3d6983ac6fe917b648930355849)

Signed-off-by: Gyorgy Sarvari <skandigraun@gmail.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Ovidiu Panait
ee521bb17c rust-target-config: fix nativesdk-libstd-rs build with baremetal
If TCLIBC='baremetal' is set in local.conf, the nativesdk-libstd-rs
build fails with:

| error[E0412]: cannot find type `c_char` in the crate root
|   --> /usr/src/debug/libstd-rs/1.75.0/rustc-1.75.0-src/vendor/libc/src/unix/mod.rs:56:29
|    |
| 6  | pub type c_schar = i8;
|    | ---------------------- similarly named type alias `c_schar` defined here
| ...
| 56 |         pub gr_name: *mut ::c_char,
|    |                             ^^^^^^

This happens because rust_gen_target() sets os="none" when TCLIBC is
'baremetal' - even for nativesdk targets. However, nativesdk packages are
built against glibc, so the correct 'os' value should be "linux".

Fix this by setting the os field based on the {TARGET,HOST,BUILD}_OS
variables, as is already done in rust_base_triple(), instead of relying
on TCLIBC.
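The logic of the fix can be sketched as follows; the helper and its inputs are simplified stand-ins for rust_gen_target(), and only the os-field decision from the commit message is modeled:

```python
def rust_target_os(oe_os: str, tclibc: str) -> str:
    """Pick the Rust target 'os' field from the relevant *_OS variable.

    nativesdk targets are built against glibc, so their *_OS value is a
    "linux*" string and they must get os="linux" even when the global
    TCLIBC is "baremetal".
    """
    if oe_os.startswith("linux"):
        return "linux"
    # Only genuinely OS-less targets fall through to "none".
    return "none" if tclibc == "baremetal" else oe_os
```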

(From OE-Core rev: 4c3f321304f2aa8b75cb58699b59fea80a23690c)

Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(master rev: 3eaf2cd5647585a1e6df03fc20e2753da27bb692) -- backport
Signed-off-by: Stefan Ghinea <stefan.ghinea@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Gyorgy Sarvari
681244152c musl: patch CVE-2025-26519
Details: https://nvd.nist.gov/vuln/detail/CVE-2025-26519

Pick the patches that are attached to the musl advisory:
https://www.openwall.com/lists/musl/2025/02/13/1

(From OE-Core rev: bbdd7d54b070f62f13967df8a13f5f14f2c36120)

Signed-off-by: Gyorgy Sarvari <skandigraun@gmail.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Yogita Urade
027ce2d723 xwayland: fix CVE-2025-62231
A flaw was identified in the X.Org X server's X Keyboard
(Xkb) extension where improper bounds checking in the XkbSetCompatMap()
function can cause an unsigned short overflow. If an attacker sends
specially crafted input data, the value calculation may overflow,
leading to memory corruption or a crash.

Reference:
https://nvd.nist.gov/vuln/detail/CVE-2025-62231

Upstream patch:
3baad99f9c

(From OE-Core rev: 97326be553f3fec8fbda63a8b38d18f656425b2c)

Signed-off-by: Yogita Urade <yogita.urade@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Yogita Urade
7f12b64980 xwayland: fix CVE-2025-62230
A flaw was discovered in the X.Org X server's X Keyboard
(Xkb) extension when handling client resource cleanup. The software
frees certain data structures without properly detaching related
resources, leading to a use-after-free condition. This can cause
memory corruption or a crash when affected clients disconnect.

Reference:
3baad99f9c

Upstream patches:
865089ca70
87fe255393

(From OE-Core rev: 5d98bca7ca76964a6bf7efb7cf8331b9f518ad00)

Signed-off-by: Yogita Urade <yogita.urade@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Yogita Urade
33231bec7b xwayland: fix CVE-2025-62229
A flaw was found in the X.Org X server and Xwayland when processing
X11 Present extension notifications. Improper error handling during
notification creation can leave dangling pointers that lead to a
use-after-free condition. This can cause memory corruption or a crash,
potentially allowing an attacker to execute arbitrary code or cause a
denial of service.

Reference:
https://nvd.nist.gov/vuln/detail/CVE-2025-62229

Upstream patch:
5a4286b13f

(From OE-Core rev: 3d606cc94e5ce42b836878578fa271a72bc76015)

Signed-off-by: Yogita Urade <yogita.urade@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Ross Burton
db7f586822 xserver-xorg: fix CVE-2025-62229 CVE-2025-62230 CVE-2025-62231
From https://lists.x.org/archives/xorg-announce/2025-October/003635.html:

1) CVE-2025-62229: Use-after-free in XPresentNotify structures creation

    Using the X11 Present extension, when processing and adding the
    notifications after presenting a pixmap, if an error occurs, a dangling
    pointer may be left in the error code path of the function causing a
    use-after-free when eventually destroying the notification structures
    later.

    Introduced in: Xorg 1.15
    Fixed in: xorg-server-21.1.19 and xwayland-24.1.9
    Fix: https://gitlab.freedesktop.org/xorg/xserver/-/commit/5a4286b1
    Found by: Jan-Niklas Sohn working with Trend Micro Zero Day Initiative.

2) CVE-2025-62230: Use-after-free in Xkb client resource removal

    When removing the Xkb resources for a client, the function
    XkbRemoveResourceClient() will free the XkbInterest data associated
    with the device, but not the resource associated with it.

    As a result, when the client terminates, the resource delete function
    triggers a use-after-free.

    Introduced in: X11R6
    Fixed in: xorg-server-21.1.19 and xwayland-24.1.9
    Fix: https://gitlab.freedesktop.org/xorg/xserver/-/commit/99790a2c
         https://gitlab.freedesktop.org/xorg/xserver/-/commit/10c94238
    Found by: Jan-Niklas Sohn working with Trend Micro Zero Day Initiative.

3) CVE-2025-62231: Value overflow in Xkb extension XkbSetCompatMap()

    The XkbCompatMap structure stores some of its values using an unsigned
    short, but fails to check whether the sum of the input data might
    overflow the maximum unsigned short value.

    Introduced in: X11R6
    Fixed in: xorg-server-21.1.19 and xwayland-24.1.9
    Fix: https://gitlab.freedesktop.org/xorg/xserver/-/commit/475d9f49
    Found by: Jan-Niklas Sohn working with Trend Micro Zero Day Initiative.
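Item 3 is a textbook unsigned-short sum overflow; a small Python sketch simulating the C semantics (wrap at USHRT_MAX) shows the missing range check, and is not the actual xserver code:

```python
USHRT_MAX = 0xFFFF  # maximum value of a C unsigned short

def add_counts_checked(base: int, extra: int):
    """Return base + extra, or None if the sum would wrap an unsigned short.

    Without this check, a C expression storing the sum back into an
    unsigned short field silently yields (base + extra) % 65536, which is
    the wrap a crafted XkbSetCompatMap request exploits.
    """
    if base > USHRT_MAX - extra:
        return None  # would overflow: reject the request
    return base + extra
```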

(From OE-Core rev: 50b9c34ba932761fab9035a54e58466d72b097bf)

(From OE-Core rev: f5a10c4950ccb5570c72eb0a09618b7b3523bc39)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Yogita Urade <yogita.urade@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Ross Burton
a78985ed94 xserver-xorg: remove redundant patch
The underlying issue with -fno-common was resolved upstream in xserver
21.1.0 onwards[1].

[1] xserver 0148a15da ("compiler.h: don't define inb/outb and friends on mips")

(From OE-Core rev: 74b77ee90efd50a703af76769fac66a0f7c394ca)

(From OE-Core rev: f1b064e684cebc3e0c6ca36eb585e26b8da5583b)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Yogita Urade <yogita.urade@windriver.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Hugo SIMELIERE
49e4da8b0a sqlite3: patch CVE-2025-7709
Pick the commit used in the Ubuntu patch https://git.launchpad.net/ubuntu/+source/sqlite3/commit/?id=9a309a50fa99e3b69623894bfd7d1f84d9fab33c
Upstream-Status: Backport [192d0ff8cc]

(From OE-Core rev: baaf28f6f2eac600f7caf53660a0b75f0329e86a)

Signed-off-by: Bruno VERNAY <bruno.vernay@se.com>
Signed-off-by: Hugo SIMELIERE <hsimeliere.opensource@witekio.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Hongxu Jia
e77289e9a4 spdx30: Provide software_packageUrl field in SPDX 3.0 SBOM
Define SPDX_PACKAGE_URL to provide the software_packageUrl field [1][2]
in the SPDX 3.0 SBOM, with support for overriding it per package via
SPDX_PACKAGE_URL:<pkgname>.

Currently, the format of purls is not defined in Yocto, so the value is
left empty for now until there is a comprehensive plan for what Yocto
purls should look like. Users can still customize their own purl by
setting SPDX_PACKAGE_URL.

[1] https://spdx.github.io/spdx-spec/v3.0.1/model/Software/Properties/packageUrl/
[2] https://spdx.github.io/spdx-spec/v3.0.1/annexes/pkg-url-specification/
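As a hypothetical example of such a user override in a conf file (the purl value merely follows the generic purl spec syntax and is not a Yocto convention):

```
SPDX_PACKAGE_URL:zlib = "pkg:generic/zlib@1.3.1"
```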

(From OE-Core rev: c8e6953a0b6f59ffca994c440069db39e60b12d2)

(From OE-Core rev: 60724efdb3a243bc796b390ad0c478584a0fb7fa)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
Peter Marko
c06e4e6e60 spdx30: fix cve status for patch files in VEX
This commit fixes commit 08595b39b46ef2bf3a928d4528292ee31a990c98,
which adapts VEX creation between the function create_spdx, where all
changes were backported, and the function get_patched_cves, where the
changes were not backported.

CVE patches were previously ignored, as they cannot be decoded from
CVE_STATUS variables, and each caused a warning like:
WARNING: ncurses-native-6.4-r0 do_create_spdx: Skipping CVE-2023-50495 — missing or unknown CVE status

The master branch uses fix-file-included for CVE patches; however, since
cve-check-map.conf was not part of the spdx-3.0 backport, the closest
status available (backported-patch) was implemented.

(From OE-Core rev: 8d14b2bb02861612130f02c445392f34090ba5d9)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2025-11-26 07:50:35 -08:00
122 changed files with 8424 additions and 906 deletions

View File

@@ -697,8 +697,8 @@ backported to a stable branch unless the bug in question does not affect the
 master branch or the fix on the master branch is unsuitable for backporting.
 The list of stable branches along with the status and maintainer for each
-branch can be obtained from the
-:yocto_wiki:`Releases wiki page </Releases>`.
+branch can be obtained from the :yocto_home:`Releases </development/releases/>`
+page.
 
 .. note::

View File

@@ -111,17 +111,17 @@ occurred in your project. Perhaps an attempt to :ref:`modify a variable
<bitbake-user-manual/bitbake-user-manual-metadata:modifying existing
variables>` did not work out as expected.
BitBake's ``-e`` option is used to display variable values after
parsing. The following command displays the variable values after the
configuration files (i.e. ``local.conf``, ``bblayers.conf``,
BitBake's ``bitbake-getvar`` command is used to display variable values after
parsing. The following command displays the variable value for :term:`OVERRIDES`
after the configuration files (i.e. ``local.conf``, ``bblayers.conf``,
``bitbake.conf`` and so forth) have been parsed::
$ bitbake -e
$ bitbake-getvar OVERRIDES
The following command displays variable values after a specific recipe has
been parsed. The variables include those from the configuration as well::
The following command displays the value of :term:`PV` after a specific recipe
has been parsed::
$ bitbake -e recipename
$ bitbake-getvar -r recipename PV
.. note::
@@ -135,19 +135,25 @@ been parsed. The variables include those from the configuration as well::
the recipe datastore, which means that variables set within one task
will not be visible to other tasks.
In the output of ``bitbake -e``, each variable is preceded by a
description of how the variable got its value, including temporary
values that were later overridden. This description also includes
variable flags (varflags) set on the variable. The output can be very
In the output of ``bitbake-getvar``, the line containing the value of the
variable is preceded by a description of how the variable got its value,
including temporary values that were later overridden. This description also
includes variable flags (varflags) set on the variable. The output can be very
helpful during debugging.
Variables that are exported to the environment are preceded by
``export`` in the output of ``bitbake -e``. See the following example::
``export`` in the output of ``bitbake-getvar``. See the following example::
export CC="i586-poky-linux-gcc -m32 -march=i586 --sysroot=/home/ulf/poky/build/tmp/sysroots/qemux86"
In addition to variable values, the output of the ``bitbake -e`` and
``bitbake -e`` recipe commands includes the following information:
Shell functions and tasks can also be inspected with the same mechanism::
$ bitbake-getvar -r recipename do_install
For Python functions and tasks, ``bitbake -e recipename`` can be used instead.
Moreover, the output of the ``bitbake -e`` and ``bitbake -e recipename``
commands includes the following information:
- The output starts with a tree listing all configuration files and
classes included globally, recursively listing the files they include

View File

@@ -123,10 +123,9 @@ Follow these general steps to create your layer without using tools:
Lists all layers on which this layer depends (if any).
- :term:`LAYERSERIES_COMPAT`:
Lists the :yocto_wiki:`Yocto Project </Releases>`
releases for which the current version is compatible. This
variable is a good way to indicate if your particular layer is
current.
Lists the :yocto_home:`Yocto Project releases </development/releases/>`
for which the current version is compatible. This variable is a good
way to indicate if your particular layer is current.
.. note::
@@ -832,6 +831,8 @@ The following list describes the available commands:
can replicate the directory structure and revisions of the layers in a current build.
For more information, see ":ref:`dev-manual/layers:saving and restoring the layers setup`".
- ``show-machines``: Lists the machines available in the currently configured layers.
Creating a General Layer Using the ``bitbake-layers`` Script
============================================================

View File

@@ -83,19 +83,20 @@ command::
OpenEmbedded recipe tool
options:
-d, --debug Enable debug output
-q, --quiet Print only errors
--color COLOR Colorize output (where COLOR is auto, always, never)
-h, --help show this help message and exit
-d, --debug Enable debug output
-q, --quiet Print only errors
--color COLOR Colorize output (where COLOR is auto, always, never)
-h, --help show this help message and exit
subcommands:
create Create a new recipe
newappend Create a bbappend for the specified target in the specified
layer
setvar Set a variable within a recipe
appendfile Create/update a bbappend to replace a target file
appendsrcfiles Create/update a bbappend to add or replace source files
appendsrcfile Create/update a bbappend to add or replace a source file
newappend Create a bbappend for the specified target in the specified layer
create Create a new recipe
setvar Set a variable within a recipe
appendfile Create/update a bbappend to replace a target file
appendsrcfiles Create/update a bbappend to add or replace source files
appendsrcfile Create/update a bbappend to add or replace a source file
edit Edit the recipe and appends for the specified target. This obeys $VISUAL if set,
otherwise $EDITOR, otherwise vi.
Use recipetool <subcommand> --help to get help on a specific command
Running ``recipetool create -o OUTFILE`` creates the base recipe and
@@ -218,9 +219,9 @@ compilation and packaging files, and so forth.
The path to the per-recipe temporary work directory depends on the
context in which it is being built. The quickest way to find this path
is to have BitBake return it by running the following::
is to use the ``bitbake-getvar`` utility::
$ bitbake -e basename | grep ^WORKDIR=
$ bitbake-getvar -r basename WORKDIR
As an example, assume a Source Directory
top-level folder named ``poky``, a default :term:`Build Directory` at
@@ -438,7 +439,7 @@ Licensing
=========
Your recipe needs to define variables related to the license
under whith the software is distributed. See the
under which the software is distributed. See the
:ref:`contributor-guide/recipe-style-guide:recipe license fields`
section in the Contributor Guide for details.

View File

@@ -24,11 +24,12 @@ users can read in standardized format.
:term:`SBOM` information is also critical to performing vulnerability exposure
assessments, as all the components used in the Software Supply Chain are listed.
The OpenEmbedded build system doesn't generate such information by default.
To make this happen, you must inherit the
:ref:`ref-classes-create-spdx` class from a configuration file::
The OpenEmbedded build system generates such information by default (by
inheriting the :ref:`ref-classes-create-spdx` class in :term:`INHERIT_DISTRO`).
INHERIT += "create-spdx"
If needed, it can be disabled from a :term:`configuration file`::
INHERIT_DISTRO:remove = "create-spdx"
Upon building an image, you will then get the compressed archive
``IMAGE-MACHINE.spdx.tar.zst``, which contains the index and the files for the single
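If SPDX generation is kept enabled, its output can also be tuned from a
configuration file. As a sketch, assuming the :term:`SPDX_PRETTY` variable is
available in your release, the generated JSON documents can be made
human-readable (at the cost of larger files)::

   SPDX_PRETTY = "1"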

View File

@@ -44,10 +44,10 @@ See the
documentation for details regarding the policies and maintenance of stable
branches.
The :yocto_wiki:`Releases page </Releases>` contains a list
of all releases of the Yocto Project. Versions in gray are no longer actively
maintained with security patches, but well-tested patches may still be accepted
for them for significant issues.
The :yocto_home:`Releases </development/releases/>` page contains a list of all
releases of the Yocto Project, grouped into current and previous releases.
Previous releases are no longer actively maintained with security patches, but
well-tested patches may still be accepted for them for significant issues.
Security-related discussions at the Yocto Project
-------------------------------------------------

View File

@@ -651,7 +651,7 @@ described in the ":ref:`dev-manual/start:accessing source archives`" section.
.. note::
For a "map" of Yocto Project releases to version numbers, see the
:yocto_wiki:`Releases </Releases>` wiki page.
:yocto_home:`Releases </development/releases/>` page.
You can use the "RELEASE ARCHIVE" link to reveal a menu of all Yocto
Project releases.

View File

@@ -1191,10 +1191,12 @@ appear in the ``.config`` file, which is in the :term:`Build Directory`.
It is simple to create a configuration fragment. One method is to use
shell commands. For example, issuing the following from the shell
creates a configuration fragment file named ``my_smp.cfg`` that enables
multi-processor support within the kernel::
creates a configuration fragment file named ``my_changes.cfg`` that enables
multi-processor support within the kernel and disables the FPGA
Configuration Framework::
$ echo "CONFIG_SMP=y" >> my_smp.cfg
$ echo "CONFIG_SMP=y" >> my_changes.cfg
$ echo "# CONFIG_FPGA is not set" >> my_changes.cfg
.. note::
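To have the kernel build actually pick up such a fragment, it is typically
added to the kernel recipe through an append file. A minimal sketch, assuming
a custom layer containing a ``linux-yocto_%.bbappend`` next to a
``linux-yocto`` directory that holds ``my_changes.cfg``::

   FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
   SRC_URI += "file://my_changes.cfg"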
@@ -1431,15 +1433,13 @@ Expanding Variables
===================
Sometimes it is helpful to determine what a variable expands to during a
build. You can examine the values of variables by examining the
output of the ``bitbake -e`` command. The output is long and is more
easily managed in a text file, which allows for easy searches::
build. You can examine the value of a variable by running the ``bitbake-getvar``
command::
$ bitbake -e virtual/kernel > some_text_file
$ bitbake-getvar -r virtual/kernel VARIABLE
Within the text file, you can see
exactly how each variable is expanded and used by the OpenEmbedded build
system.
The output of the command explains exactly how the variable is expanded and used
by the :term:`OpenEmbedded Build System`.
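Because ``bitbake-getvar --value`` prints only the expanded value, the result
can also be captured from a shell script. A sketch, assuming the ``--value``
option is available in your version of BitBake::

   $ KVER=$(bitbake-getvar --value -r virtual/kernel LINUX_VERSION)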
Working with a "Dirty" Kernel Version String
============================================

View File

@@ -37,3 +37,4 @@ Release 4.0 (kirkstone)
release-notes-4.0.28
release-notes-4.0.29
release-notes-4.0.30
release-notes-4.0.31

View File

@@ -19,3 +19,4 @@ Release 5.0 (scarthgap)
release-notes-5.0.10
release-notes-5.0.11
release-notes-5.0.12
release-notes-5.0.13

View File

@@ -0,0 +1,210 @@
Release notes for Yocto-4.0.31 (Kirkstone)
------------------------------------------
Security Fixes in Yocto-4.0.31
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- binutils: Fix :cve_nist:`2025-8225`, :cve_nist:`2025-11081`, :cve_nist:`2025-11082` and
:cve_nist:`2025-11083`
- busybox: Fix :cve_nist:`2025-46394`
- cmake: Fix :cve_nist:`2025-9301`
- curl: Fix :cve_nist:`2025-9086`
- ffmpeg: Ignore :cve_nist:`2023-6603`
- ffmpeg: mark :cve_nist:`2023-6601` as Fixed
- ghostscript: Fix :cve_nist:`2025-59798`, :cve_nist:`2025-59799` and :cve_nist:`2025-59800`
- git: Fix :cve_nist:`2025-48386`
- glib-networking: Fix :cve_nist:`2025-60018` and :cve_nist:`2025-60019`
- go: Fix :cve_nist:`2025-47906` and :cve_nist:`2025-47907`
- grub2: Fix :cve_nist:`2024-56738`
- grub: Ignore :cve_nist:`2024-2312`
- gstreamer1.0-plugins-bad: Fix :cve_nist:`2025-3887`
- gstreamer1.0: Ignore :cve_nist:`2025-2759`, :cve_nist:`2025-3887`, :cve_nist:`2025-47183`,
:cve_nist:`2025-47219`, :cve_nist:`2025-47806`, :cve_nist:`2025-47807` and :cve_nist:`2025-47808`
- python3-jinja2: Fix :cve_nist:`2024-56201`, :cve_nist:`2024-56326` and :cve_nist:`2025-27516`
- libxml2: Fix :cve_nist:`2025-9714`
- libxslt: Fix :cve_nist:`2025-7424`
- lz4: Fix :cve_nist:`2025-62813`
- openssl: Fix :cve_nist:`2025-9230` and :cve_nist:`2025-9232`
- pulseaudio: Ignore :cve_nist:`2024-11586`
- python3: Fix :cve_nist:`2024-6345`, :cve_nist:`2025-47273` and :cve_nist:`2025-59375`
- qemu: Fix :cve_nist:`2024-8354`
- tiff: Fix :cve_nist:`2025-8961`, :cve_nist:`2025-9165` and :cve_nist:`2025-9900`
- vim: Fix :cve_nist:`2025-9389`
Fixes in Yocto-4.0.31
~~~~~~~~~~~~~~~~~~~~~
- build-appliance-image: Update to kirkstone head revision
- poky.conf: bump version for 4.0.31
- ref-manual/classes.rst: document the relative_symlinks class
- ref-manual/classes.rst: gettext: extend the documentation of the class
- ref-manual/variables.rst: document the CCACHE_DISABLE, UNINATIVE_CHECKSUM, UNINATIVE_URL, USE_NLS,
REQUIRED_COMBINED_FEATURES, REQUIRED_IMAGE_FEATURES, :term:`REQUIRED_MACHINE_FEATURES` variables
- ref-manual/variables.rst: fix :term:`LAYERDEPENDS` description
- dev-manual, test-manual: Update autobuilder output links
- ref-manual/classes.rst: extend the uninative class documentation
- python3: upgrade to 3.10.19
- linux-yocto/5.15: update to v5.15.194
- glibc: PTHREAD_COND_INITIALIZER compatibility with pre-2.41 versions (bug 32786)
- glibc: nptl Use all of g1_start and g_signals
- glibc: nptl rename __condvar_quiesce_and_switch_g1
- glibc: nptl Fix indentation
- glibc: nptl Use a single loop in pthread_cond_wait instead of a nested loop
- glibc: Remove g_refs from condition variables
- glibc: nptl Remove unnecessary quadruple check in pthread_cond_wait
- glibc: nptl Remove unnecessary catch-all-wake in condvar group switch
- glibc: nptl Update comments and indentation for new condvar implementation
- glibc: pthreads NPTL lost wakeup fix 2
- glibc: Remove partial BZ#25847 backport patches
- vulnerabilities: update nvdcve file name
- migration-guides: add release notes for 4.0.30
- oeqa/sdk/cases/buildcpio.py: use gnu mirror instead of main server
- selftest/cases/meta_ide.py: use gnu mirror instead of main server
- conf/bitbake.conf: use gnu mirror instead of main server
- p11-kit: backport fix for handle :term:`USE_NLS` from master
- systemd: backport fix for handle :term:`USE_NLS` from master
- glibc: stable 2.35 branch updates
- openssl: upgrade to 3.0.18
- scripts/install-buildtools: Update to 4.0.30
- ref-manual/variables.rst: fix the description of :term:`STAGING_DIR`
- ref-manual/structure: document the auto.conf file
- dev-manual/building.rst: add note about externalsrc variables absolute paths
- ref-manual/variables.rst: fix the description of :term:`KBUILD_DEFCONFIG`
- kernel-dev/common.rst: fix the in-tree defconfig description
- test-manual/yocto-project-compatible.rst: fix a typo
- contributor-guide: submit-changes: make "Crediting contributors" part of "Commit your changes"
- contributor-guide: submit-changes: number instruction list in commit your changes
- contributor-guide: submit-changes: reword commit message instructions
- contributor-guide: submit-changes: make the Cc tag follow kernel guidelines
- contributor-guide: submit-changes: align :term:`CC` tag description
- contributor-guide: submit-changes: clarify example with Yocto bug ID
- contributor-guide: submit-changes: fix improper bold string
- libhandy: update git branch name
- python3-jinja2: upgrade to 3.1.6
- vim: upgrade to 9.1.1683
Known Issues in Yocto-4.0.31
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- N/A
Contributors to Yocto-4.0.31
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Adam Blank
- Aleksandar Nikolic
- Antonin Godard
- Archana Polampalli
- AshishKumar Mishra
- Bruce Ashfield
- Deepesh Varatharajan
- Divya Chellam
- Gyorgy Sarvari
- Hitendra Prajapati
- João Marcos Costa
- Lee Chee Yang
- Paul Barker
- Peter Marko
- Praveen Kumar
- Quentin Schulz
- Rajeshkumar Ramasamy
- Saravanan
- Soumya Sambu
- Steve Sakoman
- Sunil Dora
- Talel BELHAJ SALEM
- Theo GAIGE
- Vijay Anusuri
- Yash Shinde
- Yogita Urade
Repositories / Downloads for Yocto-4.0.31
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
yocto-docs
- Repository Location: :yocto_git:`/yocto-docs`
- Branch: :yocto_git:`kirkstone </yocto-docs/log/?h=kirkstone>`
- Tag: :yocto_git:`yocto-4.0.31 </yocto-docs/log/?h=yocto-4.0.31>`
- Git Revision: :yocto_git:`073f3bca4c374b03398317e7f445d2440a287741 </yocto-docs/commit/?id=073f3bca4c374b03398317e7f445d2440a287741>`
- Release Artefact: yocto-docs-073f3bca4c374b03398317e7f445d2440a287741
- sha: 3bfde9b6ad310dd42817509b67f61cd69552f74b2bc5011bd20788fe96d6823b
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-4.0.31/yocto-docs-073f3bca4c374b03398317e7f445d2440a287741.tar.bz2
https://mirrors.edge.kernel.org/yocto/yocto/yocto-4.0.31/yocto-docs-073f3bca4c374b03398317e7f445d2440a287741.tar.bz2
poky
- Repository Location: :yocto_git:`/poky`
- Branch: :yocto_git:`kirkstone </poky/log/?h=kirkstone>`
- Tag: :yocto_git:`yocto-4.0.31 </poky/log/?h=yocto-4.0.31>`
- Git Revision: :yocto_git:`04b39e5b7eb19498215d85c88a5fffb460fea1eb </poky/commit/?id=04b39e5b7eb19498215d85c88a5fffb460fea1eb>`
- Release Artefact: poky-04b39e5b7eb19498215d85c88a5fffb460fea1eb
- sha: 0ca18ab1ed25c0d77412ba30dbb03d74811756c7c2fe2401940f848a5e734930
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-4.0.31/poky-04b39e5b7eb19498215d85c88a5fffb460fea1eb.tar.bz2
https://mirrors.edge.kernel.org/yocto/yocto/yocto-4.0.31/poky-04b39e5b7eb19498215d85c88a5fffb460fea1eb.tar.bz2
openembedded-core
- Repository Location: :oe_git:`/openembedded-core`
- Branch: :oe_git:`kirkstone </openembedded-core/log/?h=kirkstone>`
- Tag: :oe_git:`yocto-4.0.31 </openembedded-core/log/?h=yocto-4.0.31>`
- Git Revision: :oe_git:`99204008786f659ab03538cd2ae2fd23ed4164c5 </openembedded-core/commit/?id=99204008786f659ab03538cd2ae2fd23ed4164c5>`
- Release Artefact: oecore-99204008786f659ab03538cd2ae2fd23ed4164c5
- sha: aa97bf826ad217b3a5278b4ad60bef4d194f0f1ff617677cf2323d3cc4897687
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-4.0.31/oecore-99204008786f659ab03538cd2ae2fd23ed4164c5.tar.bz2
https://mirrors.edge.kernel.org/yocto/yocto/yocto-4.0.31/oecore-99204008786f659ab03538cd2ae2fd23ed4164c5.tar.bz2
meta-yocto
- Repository Location: :yocto_git:`/meta-yocto`
- Branch: :yocto_git:`kirkstone </meta-yocto/log/?h=kirkstone>`
- Tag: :yocto_git:`yocto-4.0.31 </meta-yocto/log/?h=yocto-4.0.31>`
- Git Revision: :yocto_git:`3b2df00345b46479237fe0218675a818249f891c </meta-yocto/commit/?id=3b2df00345b46479237fe0218675a818249f891c>`
- Release Artefact: meta-yocto-3b2df00345b46479237fe0218675a818249f891c
- sha: 630e99e0f515bab8a316b2e32aff1352b4404f15aa087e8821b84093596a08ce
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-4.0.31/meta-yocto-3b2df00345b46479237fe0218675a818249f891c.tar.bz2
https://mirrors.edge.kernel.org/yocto/yocto/yocto-4.0.31/meta-yocto-3b2df00345b46479237fe0218675a818249f891c.tar.bz2
meta-mingw
- Repository Location: :yocto_git:`/meta-mingw`
- Branch: :yocto_git:`kirkstone </meta-mingw/log/?h=kirkstone>`
- Tag: :yocto_git:`yocto-4.0.31 </meta-mingw/log/?h=yocto-4.0.31>`
- Git Revision: :yocto_git:`87c22abb1f11be430caf4372e6b833dc7d77564e </meta-mingw/commit/?id=87c22abb1f11be430caf4372e6b833dc7d77564e>`
- Release Artefact: meta-mingw-87c22abb1f11be430caf4372e6b833dc7d77564e
- sha: f0bc4873e2e0319fb9d6d6ab9b98eb3f89664d4339a167d2db6a787dd12bc1a8
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-4.0.31/meta-mingw-87c22abb1f11be430caf4372e6b833dc7d77564e.tar.bz2
https://mirrors.edge.kernel.org/yocto/yocto/yocto-4.0.31/meta-mingw-87c22abb1f11be430caf4372e6b833dc7d77564e.tar.bz2
meta-gplv2
- Repository Location: :yocto_git:`/meta-gplv2`
- Branch: :yocto_git:`kirkstone </meta-gplv2/log/?h=kirkstone>`
- Tag: :yocto_git:`yocto-4.0.31 </meta-gplv2/log/?h=yocto-4.0.31>`
- Git Revision: :yocto_git:`d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a </meta-gplv2/commit/?id=d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a>`
- Release Artefact: meta-gplv2-d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a
- sha: c386f59f8a672747dc3d0be1d4234b6039273d0e57933eb87caa20f56b9cca6d
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-4.0.31/meta-gplv2-d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a.tar.bz2
https://mirrors.edge.kernel.org/yocto/yocto/yocto-4.0.31/meta-gplv2-d2f8b5cdb285b72a4ed93450f6703ca27aa42e8a.tar.bz2
bitbake
- Repository Location: :oe_git:`/bitbake`
- Branch: :oe_git:`2.0 </bitbake/log/?h=2.0>`
- Tag: :oe_git:`yocto-4.0.31 </bitbake/log/?h=yocto-4.0.31>`
- Git Revision: :oe_git:`8e2d1f8de055549b2101614d85454fcd1d0f94b2 </bitbake/commit/?id=8e2d1f8de055549b2101614d85454fcd1d0f94b2>`
- Release Artefact: bitbake-8e2d1f8de055549b2101614d85454fcd1d0f94b2
- sha: fad4e7699bae62082118e89785324b031b0af0743064caee87c91ba28549afb0
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-4.0.31/bitbake-8e2d1f8de055549b2101614d85454fcd1d0f94b2.tar.bz2
https://mirrors.edge.kernel.org/yocto/yocto/yocto-4.0.31/bitbake-8e2d1f8de055549b2101614d85454fcd1d0f94b2.tar.bz2

View File

@@ -0,0 +1,241 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Release notes for Yocto-5.0.13 (Scarthgap)
------------------------------------------
Security Fixes in Yocto-5.0.13
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- busybox: Fix :cve_nist:`2025-46394`
- cups: Fix :cve_nist:`2025-58060` and :cve_nist:`2025-58364`
- curl: Fix :cve_nist:`2025-9086`
- dpkg: Fix :cve_nist:`2025-6297`
- expat: follow-up Fix :cve_nist:`2024-8176`
- ffmpeg: Fix :cve_nist:`2025-1594`
- ffmpeg: Ignore :cve_nist:`2023-49502`, :cve_nist:`2023-50007`, :cve_nist:`2023-50008`,
:cve_nist:`2023-50009`, :cve_nist:`2023-50010`, :cve_nist:`2024-31578`, :cve_nist:`2024-31582`
and :cve_nist:`2024-31585`
- ghostscript: Fix :cve_nist:`2025-59798`, :cve_nist:`2025-59799` and :cve_nist:`2025-59800`
- glib-2.0: Fix :cve_nist:`2025-6052` and :cve_nist:`2025-7039`
- go-binary-native: Ignore :cve_nist:`2025-0913`
- go: Fix :cve_nist:`2025-4674`, :cve_nist:`2025-47906` and :cve_nist:`2025-47907`
- grub2: Fix :cve_nist:`2024-56738`
- grub2: Ignore :cve_nist:`2024-2312`
- gstreamer1.0-plugins-bad: Fix :cve_nist:`2025-3887`
- gstreamer1.0-plugins-base: Fix :cve_nist:`2025-47807`
- gstreamer1.0-plugins-base: Ignore :cve_nist:`2025-47806` and :cve_nist:`2025-47808`
- gstreamer1.0-plugins-good: Ignore :cve_nist:`2025-47183` and :cve_nist:`2025-47219`
- gstreamer1.0: Ignore :cve_nist:`2025-2759`
- libpam: Fix :cve_nist:`2024-10963`
- libxslt: Fix :cve_nist:`2025-7424`
- openssl: Fix :cve_nist:`2025-9230`, :cve_nist:`2025-9231` and :cve_nist:`2025-9232`
- pulseaudio: Ignore :cve_nist:`2024-11586`
- qemu: Ignore :cve_nist:`2024-7730`
- tiff: Fix :cve_nist:`2025-9900`
- tiff: Ignore :cve_nist:`2024-13978`, :cve_nist:`2025-8176`, :cve_nist:`2025-8177`,
:cve_nist:`2025-8534` and :cve_nist:`2025-8851`
- vim: Fix :cve_nist:`2025-9389`
- wpa-supplicant: Fix :cve_nist:`2022-37660`
Fixes in Yocto-5.0.13
~~~~~~~~~~~~~~~~~~~~~
- binutils: fix build with gcc-15
- bitbake: Use a "fork" multiprocessing context
- bitbake: bitbake: Bump version to 2.8.1
- build-appliance-image: Update to scarthgap head revision
- buildtools-tarball: fix unbound variable issues under 'set -u'
- cmake: fix build with gcc-15 on host
- conf/bitbake.conf: use gnu mirror instead of main server
- contributor-guide: submit-changes: align :term:`CC` tag description
- contributor-guide: submit-changes: clarify example with Yocto bug ID
- contributor-guide: submit-changes: fix improper bold string
- contributor-guide: submit-changes: make "Crediting contributors" part of "Commit your changes"
- contributor-guide: submit-changes: make the Cc tag follow kernel guidelines
- contributor-guide: submit-changes: number instruction list in commit your changes
- contributor-guide: submit-changes: reword commit message instructions
- cpio: Pin to use C17 std
- cups: upgrade to 2.4.11
- curl: update :term:`CVE_STATUS` for :cve_nist:`2025-5025`
- dbus-glib: fix build with gcc-15
- default-distrovars.inc: Fix CONNECTIVITY_CHECK_URIS redirect issue
- dev-manual/building.rst: add note about externalsrc variables absolute paths
- dev-manual/security-subjects.rst: update mailing lists
- elfutils: fix build with gcc-15
- examples: genl: fix wrong attribute size
- expect: Fix build with GCC 15
- expect: Revert "expect-native: fix do_compile failure with gcc-14"
- expect: cleanup do_install
- expect: don't run aclocal in do_configure
- expect: fix native build with GCC 15
- expect: update code for Tcl channel implementation
- ffmpeg: upgrade to 6.1.3
- gdbm: Use C11 standard
- git: fix build with gcc-15 on host
- gmp: Fix build with GCC15/C23
- gmp: Fix build with older gcc versions
- kernel-dev/common.rst: fix the in-tree defconfig description
- lib/oe/utils: use multiprocessing from bb
- libarchive: patch regression of patch for :cve_nist:`2025-5918`
- libgpg-error: fix build with gcc-15
- libtirpc: Fix build with gcc-15/C23
- license.py: avoid deprecated ast.Str
- llvm: fix build with gcc-15
- llvm: update to 18.1.8
- m4: Stick to C17 standard
- migration-guides: add release notes for 4.0.29 and 5.0.12
- ncurses: Pin to C17 standard
- oeqa/sdk/cases/buildcpio.py: use gnu mirror instead of main server
- openssl: upgrade to 3.2.6
- p11-kit: backport fix for handle :term:`USE_NLS` from master
- pkgconfig: fix build with gcc-15
- poky.conf: bump version for 5.0.13
- pulseaudio: Add audio group explicitly
- ref-manual/structure: document the auto.conf file
- ref-manual/variables.rst: expand :term:`IMAGE_OVERHEAD_FACTOR` glossary entry
- ref-manual/variables.rst: fix the description of :term:`KBUILD_DEFCONFIG` :term:`STAGING_DIR`
- rpm: keep leading "/" from sed operation
- ruby-ptest: some ptest fixes
- runqemu: fix special characters bug
- rust-llvm: fix build with gcc-15
- sanity.conf: Update minimum bitbake version to 2.8.1
- scripts/install-buildtools: Update to 5.0.12
- sdk: The main in the C example should return an int
- selftest/cases/meta_ide.py: use gnu mirror instead of main server
- shared-mime-info: Handle :term:`USE_NLS`
- sudo: remove devtool FIXME comment
- systemd: backport fix for handle :term:`USE_NLS` from master
- systemtap: Fix task_work_cancel build
- test-manual/yocto-project-compatible.rst: fix a typo
- test-manual: update runtime-testing Exporting Tests section
- unifdef: Don't use C23 constexpr keyword
- unzip: Fix build with GCC-15
- util-linux: use ${B} instead of ${WORKDIR}/build, to fix building under devtool
- vim: upgrade to 9.1.1683
- yocto-uninative: Update to 4.9 for glibc 2.42 GCC 15.1
Known Issues in Yocto-5.0.13
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- N/A
Contributors to Yocto-5.0.13
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Thanks to the following people who contributed to this release:
- Adam Blank
- Adrian Freihofer
- Aleksandar Nikolic
- Antonin Godard
- Archana Polampalli
- AshishKumar Mishra
- Barne Carstensen
- Chris Laplante
- Deepak Rathore
- Divya Chellam
- Gyorgy Sarvari
- Haixiao Yan
- Hitendra Prajapati
- Hongxu Jia
- Jan Vermaete
- Jiaying Song
- Jinfeng Wang
- Joao Marcos Costa
- Joshua Watt
- Khem Raj
- Kyungjik Min
- Lee Chee Yang
- Libo Chen
- Martin Jansa
- Michael Halstead
- Nitin Wankhade
- Peter Marko
- Philip Lorenz
- Praveen Kumar
- Quentin Schulz
- Ross Burton
- Stanislav Vovk
- Steve Sakoman
- Talel BELHAJ SALEM
- Vijay Anusuri
- Vrushti Dabhi
- Yogita Urade
Repositories / Downloads for Yocto-5.0.13
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
yocto-docs
- Repository Location: :yocto_git:`/yocto-docs`
- Branch: :yocto_git:`scarthgap </yocto-docs/log/?h=scarthgap>`
- Tag: :yocto_git:`yocto-5.0.13 </yocto-docs/log/?h=yocto-5.0.13>`
- Git Revision: :yocto_git:`6f086fd3d9dbbb0c80f6c3e89b8df4fed422e79a </yocto-docs/commit/?id=6f086fd3d9dbbb0c80f6c3e89b8df4fed422e79a>`
- Release Artefact: yocto-docs-6f086fd3d9dbbb0c80f6c3e89b8df4fed422e79a
- sha: 454601d8b6034268212f74ca689ed360b08f7a4c7de5df726aa3706586ca4351
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-5.0.13/yocto-docs-6f086fd3d9dbbb0c80f6c3e89b8df4fed422e79a.tar.bz2
https://mirrors.kernel.org/yocto/yocto/yocto-5.0.13/yocto-docs-6f086fd3d9dbbb0c80f6c3e89b8df4fed422e79a.tar.bz2
poky
- Repository Location: :yocto_git:`/poky`
- Branch: :yocto_git:`scarthgap </poky/log/?h=scarthgap>`
- Tag: :yocto_git:`yocto-5.0.13 </poky/log/?h=yocto-5.0.13>`
- Git Revision: :yocto_git:`f16cffd030d21d12dd57bb95cfc310bda41f8a1f </poky/commit/?id=f16cffd030d21d12dd57bb95cfc310bda41f8a1f>`
- Release Artefact: poky-f16cffd030d21d12dd57bb95cfc310bda41f8a1f
- sha: 1367e43907f5ffa725f3afb019cd7ca07de21f13e5e73a1f5d1808989ae6ed2a
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-5.0.13/poky-f16cffd030d21d12dd57bb95cfc310bda41f8a1f.tar.bz2
https://mirrors.kernel.org/yocto/yocto/yocto-5.0.13/poky-f16cffd030d21d12dd57bb95cfc310bda41f8a1f.tar.bz2
openembedded-core
- Repository Location: :oe_git:`/openembedded-core`
- Branch: :oe_git:`scarthgap </openembedded-core/log/?h=scarthgap>`
- Tag: :oe_git:`yocto-5.0.13 </openembedded-core/log/?h=yocto-5.0.13>`
- Git Revision: :oe_git:`7af6b75221d5703ba5bf43c7cd9f1e7a2e0ed20b </openembedded-core/commit/?id=7af6b75221d5703ba5bf43c7cd9f1e7a2e0ed20b>`
- Release Artefact: oecore-7af6b75221d5703ba5bf43c7cd9f1e7a2e0ed20b
- sha: 4dcf636ec4a7b38b47a24e9cb3345b385bc126bb19620bf6af773bf292fef6b2
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-5.0.13/oecore-7af6b75221d5703ba5bf43c7cd9f1e7a2e0ed20b.tar.bz2
https://mirrors.kernel.org/yocto/yocto/yocto-5.0.13/oecore-7af6b75221d5703ba5bf43c7cd9f1e7a2e0ed20b.tar.bz2
meta-yocto
- Repository Location: :yocto_git:`/meta-yocto`
- Branch: :yocto_git:`scarthgap </meta-yocto/log/?h=scarthgap>`
- Tag: :yocto_git:`yocto-5.0.13 </meta-yocto/log/?h=yocto-5.0.13>`
- Git Revision: :yocto_git:`3ff7ca786732390cd56ae92ff4a43aba46a1bf2e </meta-yocto/commit/?id=3ff7ca786732390cd56ae92ff4a43aba46a1bf2e>`
- Release Artefact: meta-yocto-3ff7ca786732390cd56ae92ff4a43aba46a1bf2e
- sha: 8efbaeab49dc3e1c4b67ff8d5801df1b05204c2255d18cff9a6857769ae33b23
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-5.0.13/meta-yocto-3ff7ca786732390cd56ae92ff4a43aba46a1bf2e.tar.bz2
https://mirrors.kernel.org/yocto/yocto/yocto-5.0.13/meta-yocto-3ff7ca786732390cd56ae92ff4a43aba46a1bf2e.tar.bz2
meta-mingw
- Repository Location: :yocto_git:`/meta-mingw`
- Branch: :yocto_git:`scarthgap </meta-mingw/log/?h=scarthgap>`
- Tag: :yocto_git:`yocto-5.0.13 </meta-mingw/log/?h=yocto-5.0.13>`
- Git Revision: :yocto_git:`bd9fef71ec005be3c3a6d7f8b99d8116daf70c4f </meta-mingw/commit/?id=bd9fef71ec005be3c3a6d7f8b99d8116daf70c4f>`
- Release Artefact: meta-mingw-bd9fef71ec005be3c3a6d7f8b99d8116daf70c4f
- sha: ab073def6487f237ac125d239b3739bf02415270959546b6b287778664f0ae65
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-5.0.13/meta-mingw-bd9fef71ec005be3c3a6d7f8b99d8116daf70c4f.tar.bz2
https://mirrors.kernel.org/yocto/yocto/yocto-5.0.13/meta-mingw-bd9fef71ec005be3c3a6d7f8b99d8116daf70c4f.tar.bz2
bitbake
- Repository Location: :oe_git:`/bitbake`
- Branch: :oe_git:`2.8 </bitbake/log/?h=2.8>`
- Tag: :oe_git:`yocto-5.0.13 </bitbake/log/?h=yocto-5.0.13>`
- Git Revision: :oe_git:`1c9ec1ffde75809de34c10d3ec2b40d84d258cb4 </bitbake/commit/?id=1c9ec1ffde75809de34c10d3ec2b40d84d258cb4>`
- Release Artefact: bitbake-1c9ec1ffde75809de34c10d3ec2b40d84d258cb4
- sha: 98bf54fa3abe237b73a93b1e33842a429209371fca6e409c258a441987879d16
- Download Locations:
https://downloads.yoctoproject.org/releases/yocto/yocto-5.0.13/bitbake-1c9ec1ffde75809de34c10d3ec2b40d84d258cb4.tar.bz2
https://mirrors.kernel.org/yocto/yocto/yocto-5.0.13/bitbake-1c9ec1ffde75809de34c10d3ec2b40d84d258cb4.tar.bz2

Binary file not shown.


View File

@@ -0,0 +1,172 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="164.765mm"
height="72.988113mm"
viewBox="0 0 164.765 72.988114"
version="1.1"
id="svg1"
xml:space="preserve"
inkscape:version="1.4.2 (ebf0e940d0, 2025-05-08)"
sodipodi:docname="key-dev-elements.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><sodipodi:namedview
id="namedview1"
pagecolor="#ffffff"
bordercolor="#000000"
borderopacity="0.25"
inkscape:showpageshadow="false"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1"
inkscape:document-units="mm"
inkscape:zoom="1"
inkscape:cx="341.5"
inkscape:cy="-31.5"
inkscape:window-width="2560"
inkscape:window-height="1440"
inkscape:window-x="0"
inkscape:window-y="0"
inkscape:window-maximized="0"
inkscape:current-layer="layer2"
showborder="false"
borderlayer="false"
inkscape:antialias-rendering="true"
showguides="true" /><defs
id="defs1" /><g
inkscape:groupmode="layer"
id="layer2"
inkscape:label="Layer "
style="display:inline"
transform="translate(-20.664242,-129.6793)"><rect
style="display:inline;fill:#f1e9cc;fill-opacity:1;stroke:#6d8eb4;stroke-width:0.653;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:7.4;stroke-opacity:1;paint-order:fill markers stroke"
id="rect1"
width="164.112"
height="54.273098"
x="20.990742"
y="130.0058"
ry="0"
inkscape:label="yp-rect" /><rect
style="display:inline;fill:#f3d770;fill-opacity:1;stroke:#6d8eb4;stroke-width:0.653;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:7.4;stroke-opacity:1;paint-order:fill markers stroke"
id="rect2"
width="101.45864"
height="41.151588"
x="28.1292"
y="137.10953"
inkscape:label="poky-rect" /><rect
style="display:inline;fill:#c0ebf5;fill-opacity:1;stroke:#6d8eb4;stroke-width:0.653;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:7.4;stroke-opacity:1;paint-order:fill markers stroke"
id="rect3"
width="50.652737"
height="53.04562"
x="35.516178"
y="149.29529"
inkscape:label="oe-rect" /><text
xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:4.23333px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Bold';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:start;writing-mode:lr-tb;direction:ltr;text-anchor:start;white-space:pre;inline-size:46.7487;display:inline;fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:0;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:7.4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
x="136.38763"
y="137.69727"
id="text3"
inkscape:label="poky-title"
transform="matrix(0.90889596,0,0,0.81399719,-26.072941,39.399474)"><tspan
x="136.38763"
y="137.69727"
id="tspan2">Poky</tspan></text><text
xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:4.23333px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Bold';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:start;writing-mode:lr-tb;direction:ltr;text-anchor:start;white-space:pre;inline-size:46.7487;display:inline;fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:0;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:7.4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
x="136.38763"
y="137.69727"
id="text3-8"
inkscape:label="oe-title"
transform="matrix(0.90889596,0,0,0.81399719,-78.327995,83.175189)"><tspan
x="136.38763"
y="137.69727"
id="tspan4">OpenEmbedded</tspan></text><text
xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:4.23333px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Bold';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:start;writing-mode:lr-tb;direction:ltr;text-anchor:start;white-space:pre;inline-size:46.7487;display:inline;fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:0;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:7.4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
x="136.38763"
y="137.69727"
id="text3-0"
inkscape:label="yp-title"
transform="matrix(0.8469291,0,0,0.81399719,21.497595,28.033837)"><tspan
x="136.38763"
y="137.69727"
id="tspan5">YOCTO PROJECT (YP)</tspan></text><text
xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.98347px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:start;writing-mode:lr-tb;direction:ltr;text-anchor:start;display:inline;fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:0;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:7.4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
x="137.19444"
y="150.50006"
id="text4"
transform="scale(1.0050579,0.9949676)"
inkscape:label="yp-text"><tspan
sodipodi:role="line"
id="tspan3"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.98347px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="137.19444"
y="150.50006">Umbrella Open Source Project</tspan><tspan
sodipodi:role="line"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.98347px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="137.19444"
y="154.2294"
id="tspan6">that Builds and Maintains</tspan><tspan
sodipodi:role="line"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.98347px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="137.19444"
y="157.95874"
id="tspan7">Validated Open Source Tools and</tspan><tspan
sodipodi:role="line"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.98347px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="137.19444"
y="161.68808"
id="tspan8">Components Associated with</tspan><tspan
sodipodi:role="line"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.98347px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="137.19444"
y="165.4174"
id="tspan9">Embedded Linux</tspan></text><text
xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.97078px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:start;writing-mode:lr-tb;direction:ltr;text-anchor:start;display:inline;fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:0;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:7.4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
x="90.582634"
y="159.10139"
id="text10"
transform="scale(1.0018079,0.9981954)"
inkscape:label="poky-text"><tspan
sodipodi:role="line"
id="tspan10"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.97078px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="90.582634"
y="159.10139">Yocto Project Open</tspan><tspan
sodipodi:role="line"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.97078px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="90.582634"
y="162.81487"
id="tspan11">Source Reference</tspan><tspan
sodipodi:role="line"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:2.97078px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="90.582634"
y="166.52835"
id="tspan12">Embedded Distribution</tspan></text><text
xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:3.01677px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:start;writing-mode:lr-tb;direction:ltr;text-anchor:start;display:inline;fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:0;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:7.4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
x="40.36692"
y="160.98824"
id="text13"
transform="scale(0.99784993,1.0021547)"
inkscape:label="oe-text"><tspan
sodipodi:role="line"
id="tspan13"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:3.01677px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="40.36692"
y="160.98824">Open Source Build Engine</tspan><tspan
sodipodi:role="line"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:3.01677px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="40.36692"
y="164.7592"
id="tspan14">and YP-Compatible Metadata</tspan><tspan
sodipodi:role="line"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:3.01677px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal;stroke-width:0"
x="40.36692"
y="168.53017"
id="tspan15">for Embedded Linux</tspan></text></g></svg>

After (12 KiB)

@@ -23,7 +23,7 @@ comes to delivering embedded software stacks. The project allows
software customizations and build interchange for multiple hardware
platforms as well as software stacks that can be maintained and scaled.
.. image:: figures/key-dev-elements.png
.. image:: svg/key-dev-elements.*
:width: 100%
For further introductory information on the Yocto Project, you might be


@@ -62,7 +62,8 @@ codename are likely to be compatible and thus work together.
Releases are given a nominal release version as well but the codename is
used in repositories for this reason. You can find information on Yocto
Project releases and codenames at :yocto_wiki:`/Releases`.
Project releases and codenames in the :yocto_home:`Releases page
</development/releases/>`.
Our :doc:`/migration-guides/index` detail how to migrate from one release of
the Yocto Project to the next.


@@ -6172,8 +6172,8 @@ system and gives an overview of their function and contents.
.. note::
An easy way to see what overrides apply is to search for :term:`OVERRIDES`
in the output of the ``bitbake -e`` command. See the
An easy way to see what overrides apply is to run the command
``bitbake-getvar -r myrecipe OVERRIDES``. See the
":ref:`dev-manual/debugging:viewing variable values`" section in the Yocto
Project Development Tasks Manual for more information.
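
A hedged usage sketch of the command this doc change recommends (`busybox` is only a placeholder recipe name, and the command must be run from an initialized build environment):

```
$ bitbake-getvar -r busybox OVERRIDES
```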


@@ -1,6 +1,6 @@
DISTRO = "poky"
DISTRO_NAME = "Poky (Yocto Project Reference Distro)"
DISTRO_VERSION = "5.0.14"
DISTRO_VERSION = "5.0.15"
DISTRO_CODENAME = "scarthgap"
SDK_VENDOR = "-pokysdk"
SDK_VERSION = "${@d.getVar('DISTRO_VERSION').replace('snapshot-${METADATA_REVISION}', 'snapshot')}"


@@ -31,7 +31,7 @@ CROSS_CURSES_LIB = "-lncurses -ltinfo"
CROSS_CURSES_INC = '-DCURSES_LOC="<curses.h>"'
TERMINFO = "${STAGING_DATADIR_NATIVE}/terminfo"
KCONFIG_CONFIG_COMMAND ??= "menuconfig"
KCONFIG_CONFIG_COMMAND ??= "menuconfig ${EXTRA_OEMAKE}"
KCONFIG_CONFIG_ENABLE_MENUCONFIG ??= "true"
KCONFIG_CONFIG_ROOTDIR ??= "${B}"
python do_menuconfig() {


@@ -101,3 +101,39 @@ addtask addto_recipe_sysroot after do_populate_sysroot
do_addto_recipe_sysroot[deptask] = "do_populate_sysroot"
PATH:prepend = "${COREBASE}/scripts/cross-intercept:"
#
# Cross task outputs can call native dependencies and even when cross
# recipe output doesn't change it might produce different results when
# the called native dependency is changed, e.g. clang-cross-${TARGET_ARCH}
# contains symlink to clang binary from clang-native, but when clang-native
# outhash is changed, clang-cross-${TARGET_ARCH} will still be considered
# equivalent and target recipes aren't rebuilt with new clang binary, see
# work around in https://github.com/kraj/meta-clang/pull/1140 to make target
# recipes to depend directly not only on clang-cross-${TARGET_ARCH} but
# clang-native as well.
#
# This can cause poor interactions with hash equivalence, since this recipe's
# output-changing dependency is "hidden" and downstream tasks only see that this
# recipe has the same outhash and therefore is equivalent. This can result in
# different output in different cases.
#
# To resolve this, unhide the output-changing dependency by adding its unihash
# to this task's outhash calculation. Unfortunately, we don't know specifically
# which dependencies are output-changing, so we have to add all of them.
#
python cross_add_do_populate_sysroot_deps () {
current_task = "do_" + d.getVar("BB_CURRENTTASK")
if current_task != "do_populate_sysroot":
return
taskdepdata = d.getVar("BB_TASKDEPDATA", False)
pn = d.getVar("PN")
deps = {
dep[0]:dep[6] for dep in taskdepdata.values() if
dep[1] == current_task and dep[0] != pn
}
d.setVar("HASHEQUIV_EXTRA_SIGDATA", "\n".join("%s: %s" % (k, deps[k]) for k in sorted(deps.keys())))
}
SSTATECREATEFUNCS += "cross_add_do_populate_sysroot_deps"
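
The dependency-collection step above can be exercised outside BitBake with toy data. The tuple layout of `BB_TASKDEPDATA` entries here (`(pn, taskname, ..., unihash)` with the unihash at index 6) is an assumption for illustration only:

```python
# Toy BB_TASKDEPDATA: maps task ids to dependency tuples. Only the
# fields the class uses are meaningful here; the rest are padding.
taskdepdata = {
    "t1": ("clang-native", "do_populate_sysroot", 0, 0, 0, 0, "hash-aaa"),
    "t2": ("clang-cross-x86_64", "do_populate_sysroot", 0, 0, 0, 0, "hash-bbb"),
    "t3": ("quilt-native", "do_patch", 0, 0, 0, 0, "hash-ccc"),
}
pn = "clang-cross-x86_64"

# Same comprehension as the hunk: keep populate_sysroot deps of other
# recipes, mapping recipe name to its unihash.
deps = {
    dep[0]: dep[6] for dep in taskdepdata.values()
    if dep[1] == "do_populate_sysroot" and dep[0] != pn
}
extra = "\n".join("%s: %s" % (k, deps[k]) for k in sorted(deps.keys()))
print(extra)  # -> clang-native: hash-aaa
```

The sorted, newline-joined string is what gets fed into the outhash via `HASHEQUIV_EXTRA_SIGDATA`, so a change in any dependency's unihash changes this recipe's outhash too.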


@@ -24,6 +24,9 @@ TARGET_GOMIPS = "${@go_map_mips(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES
TARGET_GOARM:class-native = "7"
TARGET_GO386:class-native = "sse2"
TARGET_GOMIPS:class-native = "hardfloat"
TARGET_GOARM:class-crosssdk = "7"
TARGET_GO386:class-crosssdk = "sse2"
TARGET_GOMIPS:class-crosssdk = "hardfloat"
TARGET_GOTUPLE = "${TARGET_GOOS}_${TARGET_GOARCH}"
GO_BUILD_BINDIR = "${@['bin/${HOST_GOTUPLE}','bin'][d.getVar('BUILD_GOTUPLE') == d.getVar('HOST_GOTUPLE')]}"


@@ -697,9 +697,6 @@ addtask savedefconfig after do_configure
inherit cml1 pkgconfig
# Need LD, HOSTLDFLAGS and more for config operations
KCONFIG_CONFIG_COMMAND:append = " ${EXTRA_OEMAKE}"
EXPORT_FUNCTIONS do_compile do_transform_kernel do_transform_bundled_initramfs do_install do_configure
# kernel-base becomes kernel-${KERNEL_VERSION}
@@ -873,5 +870,69 @@ addtask deploy after do_populate_sysroot do_packagedata
EXPORT_FUNCTIONS do_deploy
python __anonymous() {
inherits = (d.getVar("INHERIT") or "")
if "create-spdx" in inherits:
bb.build.addtask('do_create_kernel_config_spdx', 'do_populate_lic do_deploy', 'do_create_spdx', d)
}
python do_create_kernel_config_spdx() {
if d.getVar("SPDX_INCLUDE_KERNEL_CONFIG", True) == "1":
import oe.spdx30
import oe.spdx30_tasks
from pathlib import Path
from datetime import datetime, timezone
pkg_arch = d.getVar("SSTATE_PKGARCH")
deploydir = Path(d.getVar("SPDXDEPLOY"))
pn = d.getVar("PN")
config_path = d.expand("${B}/.config")
kernel_params = []
if not os.path.exists(config_path):
bb.warn(f"SPDX: Kernel config file not found at: {config_path}")
return
try:
with open(config_path, 'r') as f:
for line in f:
line = line.strip()
if not line or line.startswith("#"):
continue
if "=" in line:
key, value = line.split("=", 1)
kernel_params.append(oe.spdx30.DictionaryEntry(
key=key,
value=value.strip('"')
))
bb.note(f"Parsed {len(kernel_params)} kernel config entries from {config_path}")
except Exception as e:
bb.error(f"Failed to parse kernel config file: {e}")
build, build_objset = oe.sbom30.find_root_obj_in_jsonld(
d, "recipes", f"recipe-{pn}", oe.spdx30.build_Build
)
kernel_build = build_objset.add_root(
oe.spdx30.build_Build(
_id=build_objset.new_spdxid("kernel-config"),
creationInfo=build_objset.doc.creationInfo,
build_buildType="https://openembedded.org/kernel-configuration",
build_parameter=kernel_params
)
)
oe.spdx30_tasks.set_timestamp_now(d, kernel_build, "build_buildStartTime")
build_objset.new_relationship(
[build],
oe.spdx30.RelationshipType.ancestorOf,
[kernel_build]
)
oe.sbom30.write_jsonld_doc(d, build_objset, deploydir / pkg_arch / "recipes" / f"recipe-{pn}.spdx.json")
}
do_create_kernel_config_spdx[depends] = "virtual/kernel:do_configure"
# Add using Device Tree support
inherit kernel-devicetree


@@ -329,6 +329,7 @@ def rust_gen_target(d, thing, wd, arch):
sys = d.getVar('{}_SYS'.format(thing))
prefix = d.getVar('{}_PREFIX'.format(thing))
rustsys = d.getVar('RUST_{}_SYS'.format(thing))
os = d.getVar('{}_OS'.format(thing))
abi = None
cpu = "generic"
@@ -368,7 +369,7 @@ def rust_gen_target(d, thing, wd, arch):
tspec['target-c-int-width'] = d.getVarFlag('TARGET_C_INT_WIDTH', arch_abi)
tspec['target-endian'] = d.getVarFlag('TARGET_ENDIAN', arch_abi)
tspec['arch'] = arch_to_rust_target_arch(rust_arch)
if "baremetal" in d.getVar('TCLIBC'):
if "elf" in os:
tspec['os'] = "none"
else:
tspec['os'] = "linux"


@@ -14,6 +14,9 @@
#
# where "<image-name>" is an image like core-image-sato.
# List of test modules to run, or run all that can be found if unset
TESTSDK_SUITES ?= ""
TESTSDK_CLASS_NAME ?= "oeqa.sdk.testsdk.TestSDK"
TESTSDKEXT_CLASS_NAME ?= "oeqa.sdkext.testsdk.TestSDKExt"


@@ -4,6 +4,8 @@
# SPDX-License-Identifier: GPL-2.0-only
#
SPDX_VERSION = "2.2"
DEPLOY_DIR_SPDX ??= "${DEPLOY_DIR}/spdx/${SPDX_VERSION}"
# The product name that the CVE database uses. Defaults to BPN, but may need to


@@ -50,6 +50,17 @@ SPDX_INCLUDE_TIMESTAMPS[doc] = "Include time stamps in SPDX output. This is \
useful if you want to know when artifacts were produced and when builds \
occurred, but will result in non-reproducible SPDX output"
SPDX_INCLUDE_KERNEL_CONFIG ??= "0"
SPDX_INCLUDE_KERNEL_CONFIG[doc] = "If set to '1', the .config file for the kernel will be parsed \
and each CONFIG_* value will be included in the Build.build_parameter list as DictionaryEntry \
items. Set to '0' to disable exporting kernel configuration to improve performance or reduce \
SPDX document size."
SPDX_INCLUDE_PACKAGECONFIG ??= "0"
SPDX_INCLUDE_PACKAGECONFIG[doc] = "If set to '1', each PACKAGECONFIG feature is recorded in the \
build_Build object's build_parameter list as a DictionaryEntry with key \
'PACKAGECONFIG:<feature>' and value 'enabled' or 'disabled'"
SPDX_IMPORTS ??= ""
SPDX_IMPORTS[doc] = "SPDX_IMPORTS is the base variable that describes how to \
reference external SPDX ids. Each import is defined as a key in this \
@@ -117,6 +128,11 @@ SPDX_PACKAGE_VERSION ??= "${PV}"
SPDX_PACKAGE_VERSION[doc] = "The version of a package, software_packageVersion \
in software_Package"
SPDX_PACKAGE_URL ??= ""
SPDX_PACKAGE_URL[doc] = "Provides a place for the SPDX data creator to record \
the package URL string (in accordance with the Package URL specification) for \
a software Package."
IMAGE_CLASSES:append = " create-spdx-image-3.0"
SDK_CLASSES += "create-spdx-sdk-3.0"
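
A sketch of how the two new switches from this hunk might be enabled in a build's `local.conf` (placement and values are illustrative, not a recommendation; both default to "0"):

```
SPDX_INCLUDE_KERNEL_CONFIG = "1"
SPDX_INCLUDE_PACKAGECONFIG = "1"
```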


@@ -107,21 +107,8 @@ CVE_CHECK_LAYER_INCLUDELIST ??= ""
CVE_VERSION_SUFFIX ??= ""
python () {
# Fallback all CVEs from CVE_CHECK_IGNORE to CVE_STATUS
cve_check_ignore = d.getVar("CVE_CHECK_IGNORE")
if cve_check_ignore:
bb.warn("CVE_CHECK_IGNORE is deprecated in favor of CVE_STATUS")
for cve in (d.getVar("CVE_CHECK_IGNORE") or "").split():
d.setVarFlag("CVE_STATUS", cve, "ignored")
# Process CVE_STATUS_GROUPS to set multiple statuses and optional detail or description at once
for cve_status_group in (d.getVar("CVE_STATUS_GROUPS") or "").split():
cve_group = d.getVar(cve_status_group)
if cve_group is not None:
for cve in cve_group.split():
d.setVarFlag("CVE_STATUS", cve, d.getVarFlag(cve_status_group, "status"))
else:
bb.warn("CVE_STATUS_GROUPS contains undefined variable %s" % cve_status_group)
from oe.cve_check import extend_cve_status
extend_cve_status(d)
}
def generate_json_report(d, out_path, link_path):


@@ -37,6 +37,11 @@ SPDX_CUSTOM_ANNOTATION_VARS ??= ""
SPDX_MULTILIB_SSTATE_ARCHS ??= "${SSTATE_ARCHS}"
python () {
from oe.cve_check import extend_cve_status
extend_cve_status(d)
}
def create_spdx_source_deps(d):
import oe.spdx_common

meta/classes/vex.bbclass (new file, 319 lines)

@@ -0,0 +1,319 @@
#
# Copyright OpenEmbedded Contributors
#
# SPDX-License-Identifier: MIT
#
# This class is used to generate metadata needed by external
# tools to check for vulnerabilities, for example CVEs.
#
# In order to use this class just inherit the class in the
# local.conf file and it will add the generate_vex task for
# every recipe. If an image is built, it will generate a report
# in DEPLOY_DIR_IMAGE for all the packages used; it will also
# generate a file for all recipes used in the build.
#
# Variables use CVE_CHECK prefix to keep compatibility with
# the cve-check class
#
# Example:
# bitbake -c generate_vex openssl
# bitbake core-image-sato
# bitbake -k -c generate_vex universe
#
# The product name that the CVE database uses defaults to BPN, but may need to
# be overridden per recipe (for example tiff.bb sets CVE_PRODUCT=libtiff).
CVE_PRODUCT ??= "${BPN}"
CVE_VERSION ??= "${PV}"
CVE_CHECK_SUMMARY_DIR ?= "${LOG_DIR}/cve"
CVE_CHECK_SUMMARY_FILE_NAME_JSON = "cve-summary.json"
CVE_CHECK_SUMMARY_INDEX_PATH = "${CVE_CHECK_SUMMARY_DIR}/cve-summary-index.txt"
CVE_CHECK_DIR ??= "${DEPLOY_DIR}/cve"
CVE_CHECK_RECIPE_FILE_JSON ?= "${CVE_CHECK_DIR}/${PN}_cve.json"
CVE_CHECK_MANIFEST_JSON ?= "${IMGDEPLOYDIR}/${IMAGE_NAME}.json"
# Skip CVE Check for packages (PN)
CVE_CHECK_SKIP_RECIPE ?= ""
# Replace NVD DB check status for a given CVE. Each of CVE has to be mentioned
# separately with optional detail and description for this status.
#
# CVE_STATUS[CVE-1234-0001] = "not-applicable-platform: Issue only applies on Windows"
# CVE_STATUS[CVE-1234-0002] = "fixed-version: Fixed externally"
#
# Setting the same status and reason for multiple CVEs is possible
# via CVE_STATUS_GROUPS variable.
#
# CVE_STATUS_GROUPS = "CVE_STATUS_WIN CVE_STATUS_PATCHED"
#
# CVE_STATUS_WIN = "CVE-1234-0001 CVE-1234-0003"
# CVE_STATUS_WIN[status] = "not-applicable-platform: Issue only applies on Windows"
# CVE_STATUS_PATCHED = "CVE-1234-0002 CVE-1234-0004"
# CVE_STATUS_PATCHED[status] = "fixed-version: Fixed externally"
#
# All possible CVE statuses can be found in cve-check-map.conf
# CVE_CHECK_STATUSMAP[not-applicable-platform] = "Ignored"
# CVE_CHECK_STATUSMAP[fixed-version] = "Patched"
#
# CVE_CHECK_IGNORE is deprecated and CVE_STATUS has to be used instead.
# Keep CVE_CHECK_IGNORE until other layers migrate to new variables
CVE_CHECK_IGNORE ?= ""
# Layers to be excluded
CVE_CHECK_LAYER_EXCLUDELIST ??= ""
# Layers to be included
CVE_CHECK_LAYER_INCLUDELIST ??= ""
# set to "alphabetical" for versions using a single alphabetical character as the release increment
CVE_VERSION_SUFFIX ??= ""
python () {
if bb.data.inherits_class("cve-check", d):
raise bb.parse.SkipRecipe("Skipping recipe: found incompatible combination of cve-check and vex enabled at the same time.")
from oe.cve_check import extend_cve_status
extend_cve_status(d)
}
def generate_json_report(d, out_path, link_path):
if os.path.exists(d.getVar("CVE_CHECK_SUMMARY_INDEX_PATH")):
import json
from oe.cve_check import cve_check_merge_jsons, update_symlinks
bb.note("Generating JSON CVE summary")
index_file = d.getVar("CVE_CHECK_SUMMARY_INDEX_PATH")
summary = {"version":"1", "package": []}
with open(index_file) as f:
filename = f.readline()
while filename:
with open(filename.rstrip()) as j:
data = json.load(j)
cve_check_merge_jsons(summary, data)
filename = f.readline()
summary["package"].sort(key=lambda d: d['name'])
with open(out_path, "w") as f:
json.dump(summary, f, indent=2)
update_symlinks(out_path, link_path)
python vex_save_summary_handler () {
import shutil
import datetime
from oe.cve_check import update_symlinks
cvelogpath = d.getVar("CVE_CHECK_SUMMARY_DIR")
bb.utils.mkdirhier(cvelogpath)
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
json_summary_link_name = os.path.join(cvelogpath, d.getVar("CVE_CHECK_SUMMARY_FILE_NAME_JSON"))
json_summary_name = os.path.join(cvelogpath, "cve-summary-%s.json" % (timestamp))
generate_json_report(d, json_summary_name, json_summary_link_name)
bb.plain("Complete CVE JSON report summary created at: %s" % json_summary_link_name)
}
addhandler vex_save_summary_handler
vex_save_summary_handler[eventmask] = "bb.event.BuildCompleted"
python do_generate_vex () {
"""
Generate metadata needed for vulnerability checking for
the current recipe
"""
from oe.cve_check import get_patched_cves, decode_cve_status
cves_status = []
products = d.getVar("CVE_PRODUCT").split()
for product in products:
if ":" in product:
_, product = product.split(":", 1)
cves_status.append([product, False])
patched_cves = get_patched_cves(d)
cve_data = {}
for cve_id in (d.getVarFlags("CVE_STATUS") or {}):
mapping, detail, description = decode_cve_status(d, cve_id)
if not mapping or not detail:
bb.warn(f"Skipping {cve_id} — missing or unknown CVE status")
continue
cve_data[cve_id] = {
"abbrev-status": mapping,
"status": detail,
"justification": description
}
patched_cves.discard(cve_id)
# decode_cve_status decodes CVE_STATUS, so patch files need to be hardcoded
for cve_id in patched_cves:
# fix-file-included is not available in scarthgap
cve_data[cve_id] = {
"abbrev-status": "Patched",
"status": "backported-patch",
}
cve_write_data_json(d, cve_data, cves_status)
}
addtask generate_vex before do_build
python vex_cleanup () {
"""
Delete the file used to gather all the CVE information.
"""
bb.utils.remove(e.data.getVar("CVE_CHECK_SUMMARY_INDEX_PATH"))
}
addhandler vex_cleanup
vex_cleanup[eventmask] = "bb.event.BuildCompleted"
python vex_write_rootfs_manifest () {
"""
Create VEX/CVE manifest when building an image
"""
import json
from oe.rootfs import image_list_installed_packages
from oe.cve_check import cve_check_merge_jsons, update_symlinks
deploy_file_json = d.getVar("CVE_CHECK_RECIPE_FILE_JSON")
if os.path.exists(deploy_file_json):
bb.utils.remove(deploy_file_json)
# Create a list of relevant recipes
recipies = set()
for pkg in list(image_list_installed_packages(d)):
pkg_info = os.path.join(d.getVar('PKGDATA_DIR'),
'runtime-reverse', pkg)
pkg_data = oe.packagedata.read_pkgdatafile(pkg_info)
recipies.add(pkg_data["PN"])
bb.note("Writing rootfs VEX manifest")
deploy_dir = d.getVar("IMGDEPLOYDIR")
link_name = d.getVar("IMAGE_LINK_NAME")
json_data = {"version":"1", "package": []}
text_data = ""
save_pn = d.getVar("PN")
for pkg in recipies:
# To be able to use the CVE_CHECK_RECIPE_FILE_JSON variable we have to evaluate
# it with the different PN names set each time.
d.setVar("PN", pkg)
pkgfilepath = d.getVar("CVE_CHECK_RECIPE_FILE_JSON")
if os.path.exists(pkgfilepath):
with open(pkgfilepath) as j:
data = json.load(j)
cve_check_merge_jsons(json_data, data)
else:
bb.warn("Missing cve file for %s" % pkg)
d.setVar("PN", save_pn)
link_path = os.path.join(deploy_dir, "%s.json" % link_name)
manifest_name = d.getVar("CVE_CHECK_MANIFEST_JSON")
with open(manifest_name, "w") as f:
json.dump(json_data, f, indent=2)
update_symlinks(manifest_name, link_path)
bb.plain("Image VEX JSON report stored in: %s" % manifest_name)
}
ROOTFS_POSTPROCESS_COMMAND:prepend = "vex_write_rootfs_manifest; "
do_rootfs[recrdeptask] += "do_generate_vex "
do_populate_sdk[recrdeptask] += "do_generate_vex "
def cve_write_data_json(d, cve_data, cve_status):
"""
Prepare CVE data for the JSON format, then write it.
Done for each recipe.
"""
from oe.cve_check import get_cpe_ids
import json
output = {"version":"1", "package": []}
nvd_link = "https://nvd.nist.gov/vuln/detail/"
fdir_name = d.getVar("FILE_DIRNAME")
layer = fdir_name.split("/")[-3]
include_layers = d.getVar("CVE_CHECK_LAYER_INCLUDELIST").split()
exclude_layers = d.getVar("CVE_CHECK_LAYER_EXCLUDELIST").split()
if exclude_layers and layer in exclude_layers:
return
if include_layers and layer not in include_layers:
return
product_data = []
for s in cve_status:
p = {"product": s[0], "cvesInRecord": "Yes"}
if s[1] == False:
p["cvesInRecord"] = "No"
product_data.append(p)
product_data = list({p['product']:p for p in product_data}.values())
package_version = "%s%s" % (d.getVar("EXTENDPE"), d.getVar("PV"))
cpes = get_cpe_ids(d.getVar("CVE_PRODUCT"), d.getVar("CVE_VERSION"))
package_data = {
"name" : d.getVar("PN"),
"layer" : layer,
"version" : package_version,
"products": product_data,
"cpes": cpes
}
cve_list = []
for cve in sorted(cve_data):
issue_link = "%s%s" % (nvd_link, cve)
cve_item = {
"id" : cve,
"status" : cve_data[cve]["abbrev-status"],
"link": issue_link,
}
if 'NVD-summary' in cve_data[cve]:
cve_item["summary"] = cve_data[cve]["NVD-summary"]
cve_item["scorev2"] = cve_data[cve]["NVD-scorev2"]
cve_item["scorev3"] = cve_data[cve]["NVD-scorev3"]
cve_item["vector"] = cve_data[cve]["NVD-vector"]
cve_item["vectorString"] = cve_data[cve]["NVD-vectorString"]
if 'status' in cve_data[cve]:
cve_item["detail"] = cve_data[cve]["status"]
if 'justification' in cve_data[cve]:
cve_item["description"] = cve_data[cve]["justification"]
if 'resource' in cve_data[cve]:
cve_item["patch-file"] = cve_data[cve]["resource"]
cve_list.append(cve_item)
package_data["issue"] = cve_list
output["package"].append(package_data)
deploy_file = d.getVar("CVE_CHECK_RECIPE_FILE_JSON")
write_string = json.dumps(output, indent=2)
cvelogpath = d.getVar("CVE_CHECK_SUMMARY_DIR")
index_path = d.getVar("CVE_CHECK_SUMMARY_INDEX_PATH")
bb.utils.mkdirhier(cvelogpath)
bb.utils.mkdirhier(os.path.dirname(deploy_file))
fragment_file = os.path.basename(deploy_file)
fragment_path = os.path.join(cvelogpath, fragment_file)
with open(fragment_path, "w") as f:
f.write(write_string)
with open(deploy_file, "w") as f:
f.write(write_string)
with open(index_path, "a+") as f:
f.write("%s\n" % fragment_path)


@@ -243,3 +243,25 @@ def decode_cve_status(d, cve):
status_mapping = "Unpatched"
return (status_mapping, detail, description)
def extend_cve_status(d):
# do this only once in case multiple classes use this
if d.getVar("CVE_STATUS_EXTENDED"):
return
d.setVar("CVE_STATUS_EXTENDED", "1")
# Fallback all CVEs from CVE_CHECK_IGNORE to CVE_STATUS
cve_check_ignore = d.getVar("CVE_CHECK_IGNORE")
if cve_check_ignore:
bb.warn("CVE_CHECK_IGNORE is deprecated in favor of CVE_STATUS")
for cve in (d.getVar("CVE_CHECK_IGNORE") or "").split():
d.setVarFlag("CVE_STATUS", cve, "ignored")
# Process CVE_STATUS_GROUPS to set multiple statuses and optional detail or description at once
for cve_status_group in (d.getVar("CVE_STATUS_GROUPS") or "").split():
cve_group = d.getVar(cve_status_group)
if cve_group is not None:
for cve in cve_group.split():
d.setVarFlag("CVE_STATUS", cve, d.getVarFlag(cve_status_group, "status"))
else:
bb.warn("CVE_STATUS_GROUPS contains undefined variable %s" % cve_status_group)
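
The group expansion above can be exercised outside BitBake with a toy datastore. The `MockData` class below is purely illustrative; BitBake's real datastore has a much richer API:

```python
class MockData:
    # Minimal stand-in for BitBake's datastore, for illustration only
    def __init__(self):
        self.vars, self.flags = {}, {}
    def getVar(self, name):
        return self.vars.get(name)
    def setVar(self, name, value):
        self.vars[name] = value
    def getVarFlag(self, name, flag):
        return self.flags.get((name, flag))
    def setVarFlag(self, name, flag, value):
        self.flags[(name, flag)] = value

def extend_cve_status(d):
    # Same flow as the hunk above: run once, then copy each group's
    # [status] flag onto every CVE listed in the group variable.
    if d.getVar("CVE_STATUS_EXTENDED"):
        return
    d.setVar("CVE_STATUS_EXTENDED", "1")
    for group in (d.getVar("CVE_STATUS_GROUPS") or "").split():
        cves = d.getVar(group)
        if cves is not None:
            for cve in cves.split():
                d.setVarFlag("CVE_STATUS", cve, d.getVarFlag(group, "status"))

d = MockData()
d.setVar("CVE_STATUS_GROUPS", "CVE_STATUS_WIN")
d.setVar("CVE_STATUS_WIN", "CVE-1234-0001 CVE-1234-0003")
d.setVarFlag("CVE_STATUS_WIN", "status",
             "not-applicable-platform: Windows only")
extend_cve_status(d)
print(d.getVarFlag("CVE_STATUS", "CVE-1234-0001"))
# -> not-applicable-platform: Windows only
```

The `CVE_STATUS_EXTENDED` guard is what lets both cve-check and the SPDX classes call this helper without applying the groups twice.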


@@ -148,7 +148,8 @@ def get_extra_sdkinfo(sstate_dir):
extra_info['filesizes'] = {}
for root, _, files in os.walk(sstate_dir):
for fn in files:
if fn.endswith('.tgz'):
# Note that this makes an assumption about the sstate filenames
if '.tar.' in fn and not fn.endswith('.siginfo'):
fsize = int(math.ceil(float(os.path.getsize(os.path.join(root, fn))) / 1024))
task = fn.rsplit(':',1)[1].split('_',1)[1].split(',')[0]
origtotal = extra_info['tasksizes'].get(task, 0)
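
The hunk above swaps a hard-coded `.tgz` suffix check for a looser archive test, as the code's own note says, on an assumption about sstate filenames. A minimal sketch of the two string operations, using a made-up archive name (the exact naming scheme is an assumption here):

```python
def is_sstate_archive(fn):
    # New check from the hunk: match any .tar.* archive, but skip the
    # accompanying .siginfo files.
    return '.tar.' in fn and not fn.endswith('.siginfo')

def parse_task(fn):
    # Mirrors fn.rsplit(':',1)[1].split('_',1)[1].split(',')[0]:
    # take the last colon-separated field, drop the leading hash,
    # keep everything up to the first comma.
    return fn.rsplit(':', 1)[1].split('_', 1)[1].split(',')[0]

# Hypothetical sstate archive name, for illustration only
fn = "sstate:zlib:core2-64:1.3:r0:core2-64:11:0123abc_populate_sysroot.tar.zst"
assert is_sstate_archive(fn)
assert not is_sstate_archive(fn + ".siginfo")
print(parse_task(fn))  # -> populate_sysroot.tar.zst
```

Note that the old `.tgz` names would no longer match `is_sstate_archive`, which is exactly the behavior change the diff makes.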


@@ -356,77 +356,78 @@ def add_download_files(d, objset):
     for download_idx, src_uri in enumerate(urls):
         fd = fetch.ud[src_uri]
 
-        file_name = os.path.basename(fetch.localpath(src_uri))
-        if oe.patch.patch_path(src_uri, fetch, "", expand=False):
-            primary_purpose = oe.spdx30.software_SoftwarePurpose.patch
-        else:
-            primary_purpose = oe.spdx30.software_SoftwarePurpose.source
+        for name in fd.names:
+            file_name = os.path.basename(fetch.localpath(src_uri))
+            if oe.patch.patch_path(src_uri, fetch, "", expand=False):
+                primary_purpose = oe.spdx30.software_SoftwarePurpose.patch
+            else:
+                primary_purpose = oe.spdx30.software_SoftwarePurpose.source
 
-        if fd.type == "file":
-            if os.path.isdir(fd.localpath):
-                walk_idx = 1
-                for root, dirs, files in os.walk(fd.localpath, onerror=walk_error):
-                    dirs.sort()
-                    files.sort()
-                    for f in files:
-                        f_path = os.path.join(root, f)
-                        if os.path.islink(f_path):
-                            # TODO: SPDX doesn't support symlinks yet
-                            continue
-
-                        file = objset.new_file(
-                            objset.new_spdxid(
-                                "source", str(download_idx + 1), str(walk_idx)
-                            ),
-                            os.path.join(
-                                file_name, os.path.relpath(f_path, fd.localpath)
-                            ),
-                            f_path,
-                            purposes=[primary_purpose],
-                        )
-
-                        inputs.add(file)
-                        walk_idx += 1
-
-            else:
-                file = objset.new_file(
-                    objset.new_spdxid("source", str(download_idx + 1)),
-                    file_name,
-                    fd.localpath,
-                    purposes=[primary_purpose],
-                )
-                inputs.add(file)
-
-        else:
-            dl = objset.add(
-                oe.spdx30.software_Package(
-                    _id=objset.new_spdxid("source", str(download_idx + 1)),
-                    creationInfo=objset.doc.creationInfo,
-                    name=file_name,
-                    software_primaryPurpose=primary_purpose,
-                    software_downloadLocation=oe.spdx_common.fetch_data_to_uri(
-                        fd, fd.names[0]
-                    ),
-                )
-            )
-
-            if fd.method.supports_checksum(fd):
-                # TODO Need something better than hard coding this
-                for checksum_id in ["sha256", "sha1"]:
-                    expected_checksum = getattr(
-                        fd, "%s_expected" % checksum_id, None
-                    )
-                    if expected_checksum is None:
-                        continue
-
-                    dl.verifiedUsing.append(
-                        oe.spdx30.Hash(
-                            algorithm=getattr(oe.spdx30.HashAlgorithm, checksum_id),
-                            hashValue=expected_checksum,
-                        )
-                    )
-
-            inputs.add(dl)
+            if fd.type == "file":
+                if os.path.isdir(fd.localpath):
+                    walk_idx = 1
+                    for root, dirs, files in os.walk(fd.localpath, onerror=walk_error):
+                        dirs.sort()
+                        files.sort()
+                        for f in files:
+                            f_path = os.path.join(root, f)
+                            if os.path.islink(f_path):
+                                # TODO: SPDX doesn't support symlinks yet
+                                continue
+
+                            file = objset.new_file(
+                                objset.new_spdxid(
+                                    "source", str(download_idx + 1), str(walk_idx)
+                                ),
+                                os.path.join(
+                                    file_name, os.path.relpath(f_path, fd.localpath)
+                                ),
+                                f_path,
+                                purposes=[primary_purpose],
+                            )
+
+                            inputs.add(file)
+                            walk_idx += 1
+
+                else:
+                    file = objset.new_file(
+                        objset.new_spdxid("source", str(download_idx + 1)),
+                        file_name,
+                        fd.localpath,
+                        purposes=[primary_purpose],
+                    )
+                    inputs.add(file)
+
+            else:
+                dl = objset.add(
+                    oe.spdx30.software_Package(
+                        _id=objset.new_spdxid("source", str(download_idx + 1)),
+                        creationInfo=objset.doc.creationInfo,
+                        name=file_name,
+                        software_primaryPurpose=primary_purpose,
+                        software_downloadLocation=oe.spdx_common.fetch_data_to_uri(
+                            fd, name
+                        ),
+                    )
+                )
+
+                if fd.method.supports_checksum(fd):
+                    # TODO Need something better than hard coding this
+                    for checksum_id in ["sha256", "sha1"]:
+                        expected_checksum = getattr(
+                            fd, "%s_expected" % checksum_id, None
+                        )
+                        if expected_checksum is None:
+                            continue
+
+                        dl.verifiedUsing.append(
+                            oe.spdx30.Hash(
+                                algorithm=getattr(oe.spdx30.HashAlgorithm, checksum_id),
+                                hashValue=expected_checksum,
+                            )
+                        )
+
+                inputs.add(dl)
 
     return inputs
@@ -452,6 +453,22 @@ def set_purposes(d, element, *var_names, force_purposes=[]):
     ]
+
+
+def _get_cves_info(d):
+    patched_cves = oe.cve_check.get_patched_cves(d)
+
+    for cve_id in (d.getVarFlags("CVE_STATUS") or {}):
+        mapping, detail, description = oe.cve_check.decode_cve_status(d, cve_id)
+        if not mapping or not detail:
+            bb.warn(f"Skipping {cve_id} — missing or unknown CVE status")
+            continue
+
+        yield cve_id, mapping, detail, description
+        patched_cves.discard(cve_id)
+
+    # decode_cve_status is decoding CVE_STATUS, so patch files need to be hardcoded
+    for cve_id in patched_cves:
+        # fix-file-included is not available in scarthgap
+        yield cve_id, "Patched", "backported-patch", None
+
+
 def create_spdx(d):
     def set_var_field(var, obj, name, package=None):
         val = None
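The `_get_cves_info()` helper introduced above merges two sources: CVE identifiers with an explicit `CVE_STATUS` entry, and the remaining patched CVEs that only appear via applied patches. A plain-data sketch of that flow, with a dict and a set standing in for the BitBake datastore (both stand-ins are assumptions, not the real OE API):

```python
# Plain-data sketch of the _get_cves_info() generator: entries with a
# decoded status are yielded first, then any patched CVE without an
# explicit status falls through as a backported patch.
def get_cves_info(cve_status, patched_cves):
    patched_cves = set(patched_cves)
    for cve_id, (mapping, detail, description) in sorted(cve_status.items()):
        if not mapping or not detail:
            continue
        yield cve_id, mapping, detail, description
        patched_cves.discard(cve_id)
    for cve_id in sorted(patched_cves):
        yield cve_id, "Patched", "backported-patch", None

status = {"CVE-2025-0001": ("Ignored", "not-applicable-config", "n/a")}
rows = list(get_cves_info(status, {"CVE-2025-0001", "CVE-2025-0002"}))
```

CVE-2025-0001 keeps its explicit status, while CVE-2025-0002, patched but unlisted, is reported as a backport.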
@@ -501,14 +518,7 @@ def create_spdx(d):
     # Add CVEs
     cve_by_status = {}
     if include_vex != "none":
-        patched_cves = oe.cve_check.get_patched_cves(d)
-        for cve_id in patched_cves:
-            mapping, detail, description = oe.cve_check.decode_cve_status(d, cve_id)
-            if not mapping or not detail:
-                bb.warn(f"Skipping {cve_id} — missing or unknown CVE status")
-                continue
-
+        for cve_id, mapping, detail, description in _get_cves_info(d):
             # If this CVE is fixed upstream, skip it unless all CVEs are
             # specified.
             if (
@@ -626,6 +636,14 @@ def create_spdx(d):
         set_var_field("SUMMARY", spdx_package, "summary", package=package)
         set_var_field("DESCRIPTION", spdx_package, "description", package=package)
 
+        if d.getVar("SPDX_PACKAGE_URL:%s" % package) or d.getVar("SPDX_PACKAGE_URL"):
+            set_var_field(
+                "SPDX_PACKAGE_URL",
+                spdx_package,
+                "software_packageUrl",
+                package=package
+            )
+
         pkg_objset.new_scoped_relationship(
             [oe.sbom30.get_element_link_id(build)],
             oe.spdx30.RelationshipType.hasOutput,
@@ -791,6 +809,26 @@ def create_spdx(d):
         sorted(list(build_inputs)) + sorted(list(debug_source_ids)),
     )
 
+    if d.getVar("SPDX_INCLUDE_PACKAGECONFIG", True) != "0":
+        packageconfig = (d.getVar("PACKAGECONFIG") or "").split()
+        all_features = (d.getVarFlags("PACKAGECONFIG") or {}).keys()
+        if all_features:
+            enabled = set(packageconfig)
+            all_features_set = set(all_features)
+            disabled = all_features_set - enabled
+            for feature in sorted(all_features):
+                status = "enabled" if feature in enabled else "disabled"
+                build.build_parameter.append(
+                    oe.spdx30.DictionaryEntry(
+                        key=f"PACKAGECONFIG:{feature}",
+                        value=status
+                    )
+                )
+            bb.note(f"Added PACKAGECONFIG entries: {len(enabled)} enabled, {len(disabled)} disabled")
+
     oe.sbom30.write_recipe_jsonld_doc(d, build_objset, "recipes", deploydir)
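The PACKAGECONFIG hunk above classifies every declared feature as enabled or disabled and records each as a build parameter. A stand-alone sketch of that classification, with plain strings and lists standing in for the BitBake datastore (the stand-ins are illustrative, not the real API):

```python
# Every declared feature becomes a key; "enabled" if it appears in the
# space-separated PACKAGECONFIG value, "disabled" otherwise.
def packageconfig_entries(packageconfig, all_features):
    enabled = set(packageconfig.split())
    return {
        f"PACKAGECONFIG:{feature}": "enabled" if feature in enabled else "disabled"
        for feature in sorted(all_features)
    }

entries = packageconfig_entries("acl xattr", ["acl", "selinux", "xattr"])
```

Features declared via `PACKAGECONFIG` flags but not selected (here `selinux`) are still recorded, which is what makes the SPDX output useful for auditing disabled options.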

View File

@@ -239,6 +239,6 @@ def fetch_data_to_uri(fd, name):
         uri = uri + "://" + fd.host + fd.path
 
     if fd.method.supports_srcrev():
-        uri = uri + "@" + fd.revision
+        uri = uri + "@" + fd.revisions[name]
 
     return uri
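The one-line change above switches from a single `fd.revision` to a lookup keyed by `name`, since one fetch entry can track several names, each pinned to its own revision. A stand-in sketch (a fake class, not the real `bb.fetch2` FetchData) of why the per-name lookup matters:

```python
# Fake FetchData: one SRC_URI entry tracking two names, each with its
# own pinned revision (hostnames and hashes are illustrative).
class FakeFetchData:
    host = "git.example.com"
    path = "/repo.git"
    revisions = {"main": "abc123", "docs": "def456"}

def fetch_data_to_uri(fd, name):
    uri = "git://" + fd.host + fd.path
    return uri + "@" + fd.revisions[name]

uri = fetch_data_to_uri(FakeFetchData(), "docs")
```

With the old `fd.revision`, both names would have reported the same revision in the SPDX download location.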

View File

@@ -0,0 +1,7 @@
package main

import "fmt"

func main() {
	fmt.Println("Hello from Go!")
}

View File

@@ -4,10 +4,76 @@
# SPDX-License-Identifier: MIT
#
import os
from oeqa.runtime.case import OERuntimeTestCase
from oeqa.core.decorator.depends import OETestDepends
from oeqa.runtime.decorator.package import OEHasPackage
class GoCompileTest(OERuntimeTestCase):
@classmethod
def setUp(cls):
dst = '/tmp/'
src = os.path.join(cls.tc.files_dir, 'test.go')
cls.tc.target.copyTo(src, dst)
@classmethod
def tearDown(cls):
files = '/tmp/test.go /tmp/test'
cls.tc.target.run('rm %s' % files)
dirs = '/tmp/hello-go'
cls.tc.target.run('rm -r %s' % dirs)
@OETestDepends(['ssh.SSHTest.test_ssh'])
@OEHasPackage('go')
@OEHasPackage('go-runtime')
@OEHasPackage('go-runtime-dev')
def test_go_compile(self):
# Check if go is available
status, output = self.target.run('which go')
if status != 0:
self.skipTest('go command not found, output: %s' % output)
# Compile the simple Go program
status, output = self.target.run('go build -o /tmp/test /tmp/test.go')
msg = 'go compile failed, output: %s' % output
self.assertEqual(status, 0, msg=msg)
# Run the compiled program
status, output = self.target.run('/tmp/test')
msg = 'running compiled file failed, output: %s' % output
self.assertEqual(status, 0, msg=msg)
@OETestDepends(['ssh.SSHTest.test_ssh'])
@OEHasPackage('go')
@OEHasPackage('go-runtime')
@OEHasPackage('go-runtime-dev')
def test_go_module(self):
# Check if go is available
status, output = self.target.run('which go')
if status != 0:
self.skipTest('go command not found, output: %s' % output)
# Create a simple Go module
status, output = self.target.run('mkdir -p /tmp/hello-go')
msg = 'mkdir failed, output: %s' % output
self.assertEqual(status, 0, msg=msg)
# Copy the existing test.go file to the module
status, output = self.target.run('cp /tmp/test.go /tmp/hello-go/main.go')
msg = 'copying test.go failed, output: %s' % output
self.assertEqual(status, 0, msg=msg)
# Build the module
status, output = self.target.run('cd /tmp/hello-go && go build -o hello main.go')
msg = 'go build failed, output: %s' % output
self.assertEqual(status, 0, msg=msg)
# Run the module
status, output = self.target.run('cd /tmp/hello-go && ./hello')
msg = 'running go module failed, output: %s' % output
self.assertEqual(status, 0, msg=msg)
class GoHelloworldTest(OERuntimeTestCase):
@OETestDepends(['ssh.SSHTest.test_ssh'])
@OEHasPackage(['go-helloworld'])

View File

@@ -10,6 +10,7 @@ import tempfile
 import unittest
 
 from oeqa.sdk.case import OESDKTestCase
+from oeqa.sdkext.context import OESDKExtTestContext
 from oeqa.utils.subprocesstweak import errors_have_output
 
 errors_have_output()
@@ -22,6 +23,9 @@ class EpoxyTest(OESDKTestCase):
         if libc in [ 'newlib' ]:
             raise unittest.SkipTest("MesonTest class: SDK doesn't contain a supported C library")
 
+        if isinstance(self.tc, OESDKExtTestContext):
+            self.skipTest(f"{self.id()} does not support eSDK (https://bugzilla.yoctoproject.org/show_bug.cgi?id=15854)")
+
         if not (self.tc.hasHostPackage("nativesdk-meson") or
                 self.tc.hasHostPackage("meson-native")):
             raise unittest.SkipTest("EpoxyTest class: SDK doesn't contain Meson")

View File

@@ -0,0 +1,107 @@
#
# Copyright OpenEmbedded Contributors
#
# SPDX-License-Identifier: MIT
#
import os
import shutil
import unittest
from oeqa.core.utils.path import remove_safe
from oeqa.sdk.case import OESDKTestCase
from oeqa.utils.subprocesstweak import errors_have_output
from oe.go import map_arch
errors_have_output()
class GoCompileTest(OESDKTestCase):
td_vars = ['MACHINE', 'TARGET_ARCH']
@classmethod
def setUpClass(self):
# Copy test.go file to SDK directory (same as GCC test uses files_dir)
shutil.copyfile(os.path.join(self.tc.files_dir, 'test.go'),
os.path.join(self.tc.sdk_dir, 'test.go'))
def setUp(self):
translated_target_arch = self.td.get("TRANSLATED_TARGET_ARCH")
# Check for go-cross-canadian package (uses target architecture)
if not self.tc.hasHostPackage("go-cross-canadian-%s" % translated_target_arch):
raise unittest.SkipTest("GoCompileTest class: SDK doesn't contain a Go cross-canadian toolchain")
# Additional runtime check for go command availability
try:
self._run('which go')
except Exception as e:
raise unittest.SkipTest("GoCompileTest class: go command not available: %s" % str(e))
def test_go_build(self):
"""Test Go build command (native compilation)"""
self._run('cd %s; go build -o test test.go' % self.tc.sdk_dir)
def test_go_module(self):
"""Test Go module creation and building"""
# Create a simple Go module
self._run('cd %s; go mod init hello-go' % self.tc.sdk_dir)
self._run('cd %s; go build -o hello-go' % self.tc.sdk_dir)
@classmethod
def tearDownClass(self):
files = [os.path.join(self.tc.sdk_dir, f) \
for f in ['test.go', 'test', 'hello-go', 'go.mod', 'go.sum']]
for f in files:
remove_safe(f)
class GoHostCompileTest(OESDKTestCase):
td_vars = ['MACHINE', 'SDK_SYS', 'TARGET_ARCH']
@classmethod
def setUpClass(self):
# Copy test.go file to SDK directory (same as GCC test uses files_dir)
shutil.copyfile(os.path.join(self.tc.files_dir, 'test.go'),
os.path.join(self.tc.sdk_dir, 'test.go'))
def setUp(self):
translated_target_arch = self.td.get("TRANSLATED_TARGET_ARCH")
# Check for go-cross-canadian package (uses target architecture)
if not self.tc.hasHostPackage("go-cross-canadian-%s" % translated_target_arch):
raise unittest.SkipTest("GoHostCompileTest class: SDK doesn't contain a Go cross-canadian toolchain")
# Additional runtime check for go command availability
try:
self._run('which go')
except Exception as e:
raise unittest.SkipTest("GoHostCompileTest class: go command not available: %s" % str(e))
def _get_go_arch(self):
"""Get Go architecture from SDK_SYS"""
sdksys = self.td.get("SDK_SYS")
arch = sdksys.split('-')[0]
# Use mapping for other architectures
return map_arch(arch)
def test_go_cross_compile(self):
"""Test Go cross-compilation for target"""
goarch = self._get_go_arch()
self._run('cd %s; GOOS=linux GOARCH=%s go build -o test-%s test.go' % (self.tc.sdk_dir, goarch, goarch))
def test_go_module_cross_compile(self):
"""Test Go module cross-compilation"""
goarch = self._get_go_arch()
self._run('cd %s; go mod init hello-go' % self.tc.sdk_dir)
self._run('cd %s; GOOS=linux GOARCH=%s go build -o hello-go-%s' % (self.tc.sdk_dir, goarch, goarch))
@classmethod
def tearDownClass(self):
# Clean up files with dynamic architecture names
files = [os.path.join(self.tc.sdk_dir, f) \
for f in ['test.go', 'go.mod', 'go.sum']]
# Add common architecture-specific files that might be created
common_archs = ['arm64', 'arm', 'amd64', '386', 'mips', 'mipsle', 'ppc64', 'ppc64le', 'riscv64']
for arch in common_archs:
files.extend([os.path.join(self.tc.sdk_dir, f) \
for f in ['test-%s' % arch, 'hello-go-%s' % arch]])
for f in files:
remove_safe(f)
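`_get_go_arch()` above relies on `oe.go.map_arch` to translate the machine architecture taken from `SDK_SYS` into a GOARCH value. A rough stand-in for that mapping (the authoritative table lives in `meta/lib/oe/go.py`; the entries below cover common cases only and are assumptions, not the full list):

```python
import re

# Rough stand-in for oe.go.map_arch: common host/target architectures
# mapped to their GOARCH names.
def map_arch(a):
    if re.match("i.86", a):
        return "386"
    return {
        "x86_64": "amd64",
        "aarch64": "arm64",
        "arm": "arm",
        "riscv64": "riscv64",
        "ppc64le": "ppc64le",
        "mips64": "mips64",
    }.get(a, a)

# SDK_SYS looks like "x86_64-pokysdk-linux"; the first component is the arch.
goarch = map_arch("x86_64-pokysdk-linux".split("-")[0])
```

The resulting value is what the tests pass as `GOARCH=` when cross-compiling for the target.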

View File

@@ -114,7 +114,8 @@ class TestSDK(TestSDKBase):
             host_pkg_manifest=host_pkg_manifest, **context_args)
 
         try:
-            tc.loadTests(self.context_executor_class.default_cases)
+            modules = (d.getVar("TESTSDK_SUITES") or "").split()
+            tc.loadTests(self.context_executor_class.default_cases, modules)
         except Exception as e:
             import traceback
             bb.fatal("Loading tests failed:\n%s" % traceback.format_exc())
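The `TESTSDK_SUITES` handling added in both testsdk hunks reduces to one idiom: an unset variable becomes an empty module list, which `loadTests` treats as "load every suite". A minimal sketch, assuming the BitBake convention that `getVar` returns None for unset variables; `select_suites` is an illustrative helper, the real filtering happens inside `loadTests`:

```python
# None (unset variable) -> [] -> no filtering; a space-separated value
# -> an explicit module list.
def modules_from_var(value):
    return (value or "").split()

# Illustrative stand-in for loadTests' module filtering: an empty list
# selects everything.
def select_suites(all_suites, modules):
    return [s for s in all_suites if not modules or s in modules]

suites = select_suites(["go", "gcc", "python"], modules_from_var("go gcc"))
```

Setting `TESTSDK_SUITES = "go gcc"` in local.conf would therefore restrict the SDK test run to those two suites.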

View File

@@ -82,7 +82,8 @@ class TestSDKExt(TestSDKBase):
             host_pkg_manifest=host_pkg_manifest)
 
         try:
-            tc.loadTests(OESDKExtTestContextExecutor.default_cases)
+            modules = (d.getVar("TESTSDK_SUITES") or "").split()
+            tc.loadTests(OESDKExtTestContextExecutor.default_cases, modules)
         except Exception as e:
             import traceback
             bb.fatal("Loading tests failed:\n%s" % traceback.format_exc())

View File

@@ -286,3 +286,60 @@ class SPDX30Check(SPDX3CheckBase, OESelftestTestCase):
break
else:
self.assertTrue(False, "Unable to find imported Host SpdxID")
def test_kernel_config_spdx(self):
kernel_recipe = get_bb_var("PREFERRED_PROVIDER_virtual/kernel")
spdx_file = f"recipe-{kernel_recipe}.spdx.json"
spdx_path = f"{{DEPLOY_DIR_SPDX}}/{{SSTATE_PKGARCH}}/recipes/{spdx_file}"
# Make sure kernel is configured first
bitbake(f"-c configure {kernel_recipe}")
objset = self.check_recipe_spdx(
kernel_recipe,
spdx_path,
task="do_create_kernel_config_spdx",
extraconf="""\
INHERIT += "create-spdx"
SPDX_INCLUDE_KERNEL_CONFIG = "1"
""",
)
# Check that at least one CONFIG_* entry exists
found_kernel_config = False
for build_obj in objset.foreach_type(oe.spdx30.build_Build):
if getattr(build_obj, "build_buildType", "") == "https://openembedded.org/kernel-configuration":
found_kernel_config = True
self.assertTrue(
len(getattr(build_obj, "build_parameter", [])) > 0,
"Kernel configuration build_Build has no CONFIG_* entries"
)
break
self.assertTrue(found_kernel_config, "Kernel configuration build_Build not found in SPDX output")
def test_packageconfig_spdx(self):
objset = self.check_recipe_spdx(
"tar",
"{DEPLOY_DIR_SPDX}/{SSTATE_PKGARCH}/recipes/recipe-tar.spdx.json",
extraconf="""\
SPDX_INCLUDE_PACKAGECONFIG = "1"
""",
)
found_entries = []
for build_obj in objset.foreach_type(oe.spdx30.build_Build):
for param in getattr(build_obj, "build_parameter", []):
if param.key.startswith("PACKAGECONFIG:"):
found_entries.append((param.key, param.value))
self.assertTrue(
found_entries,
"No PACKAGECONFIG entries found in SPDX output for 'tar'"
)
for key, value in found_entries:
self.assertIn(
value, ["enabled", "disabled"],
f"Unexpected PACKAGECONFIG value '{value}' for {key}"
)

View File

@@ -0,0 +1,41 @@
From 80e0e9b2558c40fb108ae7a869362566eb4c1ead Mon Sep 17 00:00:00 2001
From: Thomas Frauendorfer | Miray Software <tf@miray.de>
Date: Fri, 9 May 2025 14:20:47 +0200
Subject: [PATCH] net/net: Unregister net_set_vlan command on unload
The commit 954c48b9c (net/net: Add net_set_vlan command) added command
net_set_vlan to the net module. Unfortunately the commit only added the
grub_register_command() call on module load but missed the
grub_unregister_command() on unload. Let's fix this.
Fixes: CVE-2025-54770
Fixes: 954c48b9c (net/net: Add net_set_vlan command)
CVE: CVE-2025-54770
Upstream-Status: Backport
[https://gitweb.git.savannah.gnu.org/gitweb/?p=grub.git;a=commit;h=10e58a14db20e17d1b6a39abe38df01fef98e29d]
Reported-by: Thomas Frauendorfer | Miray Software <tf@miray.de>
Signed-off-by: Thomas Frauendorfer | Miray Software <tf@miray.de>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Jiaying Song <jiaying.song.cn@windriver.com>
---
grub-core/net/net.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/grub-core/net/net.c b/grub-core/net/net.c
index 2b45c27d1..05f11be08 100644
--- a/grub-core/net/net.c
+++ b/grub-core/net/net.c
@@ -2080,6 +2080,7 @@ GRUB_MOD_FINI(net)
grub_unregister_command (cmd_deladdr);
grub_unregister_command (cmd_addroute);
grub_unregister_command (cmd_delroute);
+ grub_unregister_command (cmd_setvlan);
grub_unregister_command (cmd_lsroutes);
grub_unregister_command (cmd_lscards);
grub_unregister_command (cmd_lsaddr);
--
2.34.1

View File

@@ -0,0 +1,40 @@
From c24e11d87f8ee8cefd615e0c30eb71ff6149ee50 Mon Sep 17 00:00:00 2001
From: Jamie <volticks@gmail.com>
Date: Mon, 14 Jul 2025 09:52:59 +0100
Subject: [PATCH 2/4] commands/usbtest: Use correct string length field
An incorrect length field is used for buffer allocation. This leads to
grub_utf16_to_utf8() receiving an incorrect/different length and possibly
causing OOB write. This makes sure to use the correct length.
Fixes: CVE-2025-61661
CVE: CVE-2025-61661
Upstream-Status: Backport
[https://gitweb.git.savannah.gnu.org/gitweb/?p=grub.git;a=commit;h=549a9cc372fd0b96a4ccdfad0e12140476cc62a3]
Reported-by: Jamie <volticks@gmail.com>
Signed-off-by: Jamie <volticks@gmail.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Jiaying Song <jiaying.song.cn@windriver.com>
---
grub-core/commands/usbtest.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/grub-core/commands/usbtest.c b/grub-core/commands/usbtest.c
index 2c6d93fe6..8ef187a9a 100644
--- a/grub-core/commands/usbtest.c
+++ b/grub-core/commands/usbtest.c
@@ -99,7 +99,7 @@ grub_usb_get_string (grub_usb_device_t dev, grub_uint8_t index, int langid,
return GRUB_USB_ERR_NONE;
}
- *string = grub_malloc (descstr.length * 2 + 1);
+ *string = grub_malloc (descstrp->length * 2 + 1);
if (! *string)
{
grub_free (descstrp);
--
2.34.1

View File

@@ -0,0 +1,72 @@
From 498dc73aa661bb1cae4b06572b5cef154dcb1fb7 Mon Sep 17 00:00:00 2001
From: Alec Brown <alec.r.brown@oracle.com>
Date: Thu, 21 Aug 2025 21:14:06 +0000
Subject: [PATCH 3/4] gettext/gettext: Unregister gettext command on module
unload
When the gettext module is loaded, the gettext command is registered but
isn't unregistered when the module is unloaded. We need to add a call to
grub_unregister_command() when unloading the module.
Fixes: CVE-2025-61662
CVE: CVE-2025-61662
Upstream-Status: Backport
[https://gitweb.git.savannah.gnu.org/gitweb/?p=grub.git;a=commit;h=8ed78fd9f0852ab218cc1f991c38e5a229e43807]
Reported-by: Alec Brown <alec.r.brown@oracle.com>
Signed-off-by: Alec Brown <alec.r.brown@oracle.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Jiaying Song <jiaying.song.cn@windriver.com>
---
grub-core/gettext/gettext.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/grub-core/gettext/gettext.c b/grub-core/gettext/gettext.c
index 9ffc73428..edebed998 100644
--- a/grub-core/gettext/gettext.c
+++ b/grub-core/gettext/gettext.c
@@ -502,6 +502,8 @@ grub_cmd_translate (grub_command_t cmd __attribute__ ((unused)),
return 0;
}
+static grub_command_t cmd;
+
GRUB_MOD_INIT (gettext)
{
const char *lang;
@@ -521,13 +523,14 @@ GRUB_MOD_INIT (gettext)
grub_register_variable_hook ("locale_dir", NULL, read_main);
grub_register_variable_hook ("secondary_locale_dir", NULL, read_secondary);
- grub_register_command_p1 ("gettext", grub_cmd_translate,
- N_("STRING"),
- /* TRANSLATORS: It refers to passing the string through gettext.
- So it's "translate" in the same meaning as in what you're
- doing now.
- */
- N_("Translates the string with the current settings."));
+ cmd = grub_register_command_p1 ("gettext", grub_cmd_translate,
+ N_("STRING"),
+ /*
+ * TRANSLATORS: It refers to passing the string through gettext.
+ * So it's "translate" in the same meaning as in what you're
+ * doing now.
+ */
+ N_("Translates the string with the current settings."));
/* Reload .mo file information if lang changes. */
grub_register_variable_hook ("lang", NULL, grub_gettext_env_write_lang);
@@ -544,6 +547,8 @@ GRUB_MOD_FINI (gettext)
grub_register_variable_hook ("secondary_locale_dir", NULL, NULL);
grub_register_variable_hook ("lang", NULL, NULL);
+ grub_unregister_command (cmd);
+
grub_gettext_delete_list (&main_context);
grub_gettext_delete_list (&secondary_context);
--
2.34.1

View File

@@ -0,0 +1,64 @@
From 8368c026562a72a005bea320cfde9fd7d62d3850 Mon Sep 17 00:00:00 2001
From: Alec Brown <alec.r.brown@oracle.com>
Date: Thu, 21 Aug 2025 21:14:07 +0000
Subject: [PATCH 4/4] normal/main: Unregister commands on module unload
When the normal module is loaded, the normal and normal_exit commands
are registered but aren't unregistered when the module is unloaded. We
need to add calls to grub_unregister_command() when unloading the module
for these commands.
Fixes: CVE-2025-61663
Fixes: CVE-2025-61664
CVE: CVE-2025-61663 CVE-2025-61664
Upstream-Status: Backport
[https://gitweb.git.savannah.gnu.org/gitweb/?p=grub.git;a=commit;h=05d3698b8b03eccc49e53491bbd75dba15f40917]
Reported-by: Alec Brown <alec.r.brown@oracle.com>
Signed-off-by: Alec Brown <alec.r.brown@oracle.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Jiaying Song <jiaying.song.cn@windriver.com>
---
grub-core/normal/main.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/grub-core/normal/main.c b/grub-core/normal/main.c
index dad25e7d2..a810858c3 100644
--- a/grub-core/normal/main.c
+++ b/grub-core/normal/main.c
@@ -500,7 +500,7 @@ grub_mini_cmd_clear (struct grub_command *cmd __attribute__ ((unused)),
return 0;
}
-static grub_command_t cmd_clear;
+static grub_command_t cmd_clear, cmd_normal, cmd_normal_exit;
static void (*grub_xputs_saved) (const char *str);
static const char *features[] = {
@@ -542,10 +542,10 @@ GRUB_MOD_INIT(normal)
grub_env_export ("pager");
/* Register a command "normal" for the rescue mode. */
- grub_register_command ("normal", grub_cmd_normal,
- 0, N_("Enter normal mode."));
- grub_register_command ("normal_exit", grub_cmd_normal_exit,
- 0, N_("Exit from normal mode."));
+ cmd_normal = grub_register_command ("normal", grub_cmd_normal,
+ 0, N_("Enter normal mode."));
+ cmd_normal_exit = grub_register_command ("normal_exit", grub_cmd_normal_exit,
+ 0, N_("Exit from normal mode."));
/* Reload terminal colors when these variables are written to. */
grub_register_variable_hook ("color_normal", NULL, grub_env_write_color_normal);
@@ -587,4 +587,6 @@ GRUB_MOD_FINI(normal)
grub_register_variable_hook ("color_highlight", NULL, NULL);
grub_fs_autoload_hook = 0;
grub_unregister_command (cmd_clear);
+ grub_unregister_command (cmd_normal);
+ grub_unregister_command (cmd_normal_exit);
}
--
2.34.1
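Three of the GRUB fixes above (CVE-2025-54770, CVE-2025-61662 and CVE-2025-61663/61664) share one pattern: `GRUB_MOD_INIT` registers a command but `GRUB_MOD_FINI` never unregisters it, leaving a command-table entry pointing into unloaded module memory. A language-neutral Python sketch of the invariant being restored (an illustrative registry, not GRUB code):

```python
# Illustrative registry: registration returns a handle that module-unload
# must pass back, otherwise the table keeps a stale entry.
class CommandRegistry:
    def __init__(self):
        self._commands = {}

    def register(self, name, handler):
        self._commands[name] = handler
        return name  # the handle the module must keep for unload

    def unregister(self, handle):
        self._commands.pop(handle, None)

    def registered(self):
        return sorted(self._commands)

reg = CommandRegistry()

def mod_init(reg):
    # GRUB_MOD_INIT(normal): keep the handles in module-static storage.
    return [reg.register("normal", lambda: None),
            reg.register("normal_exit", lambda: None)]

def mod_fini(reg, handles):
    # GRUB_MOD_FINI(normal): unregister everything init registered.
    for h in handles:
        reg.unregister(h)

handles = mod_init(reg)
mod_fini(reg, handles)
```

In C the consequence of skipping the fini step is worse than a leak: invoking the stale command dereferences a function pointer into freed module memory, which is why these omissions are CVEs.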

View File

@@ -38,6 +38,10 @@ SRC_URI = "${GNU_MIRROR}/grub/grub-${PV}.tar.gz \
            file://CVE-2025-0677_CVE-2025-0684_CVE-2025-0685_CVE-2025-0686_CVE-2025-0689.patch \
            file://CVE-2025-0678_CVE-2025-1125.patch \
            file://CVE-2024-56738.patch \
+           file://CVE-2025-54770.patch \
+           file://CVE-2025-61661.patch \
+           file://CVE-2025-61662.patch \
+           file://CVE-2025-61663_61664.patch \
            "
SRC_URI[sha256sum] = "b30919fa5be280417c17ac561bb1650f60cfb80cc6237fa1e2b6f56154cb9c91"

View File

@@ -26,7 +26,7 @@ inherit core-image setuptools3 features_check
REQUIRED_DISTRO_FEATURES += "xattr"
-SRCREV ?= "dd2d3cfc4e26fdb9a3105ddf0af040f5a29b6306"
+SRCREV ?= "828c9d09b4d056f77399fcb4bca8582f9c26ac84"
SRC_URI = "git://git.yoctoproject.org/poky;branch=scarthgap \
file://Yocto_Build_Appliance.vmx \
file://Yocto_Build_Appliance.vmxf \

View File

@@ -0,0 +1,802 @@
From 87786d6200ae1f5ac98d21f04d451e17ff25a216 Mon Sep 17 00:00:00 2001
From: David Kilzer <ddkilzer@apple.com>
Reviewed-By: Aron Xu <aron@debian.org>
Date: Mon, 23 Jun 2025 14:41:56 -0700
Subject: [PATCH] libxslt: heap-use-after-free in xmlFreeID caused by `atype`
corruption
* include/libxml/tree.h:
(XML_ATTR_CLEAR_ATYPE): Add.
(XML_ATTR_GET_ATYPE): Add.
(XML_ATTR_SET_ATYPE): Add.
(XML_NODE_ADD_EXTRA): Add.
(XML_NODE_CLEAR_EXTRA): Add.
(XML_NODE_GET_EXTRA): Add.
(XML_NODE_SET_EXTRA): Add.
(XML_DOC_ADD_PROPERTIES): Add.
(XML_DOC_CLEAR_PROPERTIES): Add.
(XML_DOC_GET_PROPERTIES): Add.
(XML_DOC_SET_PROPERTIES): Add.
- Add macros for accessing fields with upper bits that may be set by
libxslt.
* HTMLparser.c:
(htmlNewDocNoDtD):
* SAX2.c:
(xmlSAX2StartDocument):
(xmlSAX2EndDocument):
* parser.c:
(xmlParseEntityDecl):
(xmlParseExternalSubset):
(xmlParseReference):
(xmlCtxtParseDtd):
* runxmlconf.c:
(xmlconfTestInvalid):
(xmlconfTestValid):
* tree.c:
(xmlNewDoc):
(xmlFreeProp):
(xmlNodeSetDoc):
(xmlSetNsProp):
(xmlDOMWrapAdoptBranch):
* valid.c:
(xmlFreeID):
(xmlAddIDInternal):
(xmlValidateAttributeValueInternal):
(xmlValidateOneAttribute):
(xmlValidateRef):
* xmlreader.c:
(xmlTextReaderStartElement):
(xmlTextReaderStartElementNs):
(xmlTextReaderValidateEntity):
(xmlTextReaderRead):
(xmlTextReaderNext):
(xmlTextReaderIsEmptyElement):
(xmlTextReaderPreserve):
* xmlschemas.c:
(xmlSchemaPValAttrNodeID):
* xmlschemastypes.c:
(xmlSchemaValAtomicType):
- Adopt macros by renaming the struct fields, recompiling and fixing
compiler failures, then changing the struct field names back.
Origin: https://launchpad.net/ubuntu/+source/libxml2/2.9.14+dfsg-1.3ubuntu3.6
Ref : https://security-tracker.debian.org/tracker/CVE-2025-7425
CVE: CVE-2025-7425
Upstream-Status: Backport [https://gitlab.gnome.org/GNOME/libxslt/-/issues/140]
Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
---
HTMLparser.c | 1 +
SAX2.c | 6 ++--
include/libxml/tree.h | 14 ++++++++-
parser.c | 8 ++---
runxmlconf.c | 4 +--
tree.c | 20 ++++++-------
valid.c | 68 +++++++++++++++++++++----------------------
xmlreader.c | 30 +++++++++----------
xmlschemas.c | 4 +--
xmlschemastypes.c | 12 ++++----
10 files changed, 90 insertions(+), 77 deletions(-)
diff --git a/HTMLparser.c b/HTMLparser.c
index ea6a4f2..9f439d6 100644
--- a/HTMLparser.c
+++ b/HTMLparser.c
@@ -2459,6 +2459,7 @@ htmlNewDocNoDtD(const xmlChar *URI, const xmlChar *ExternalID) {
cur->refs = NULL;
cur->_private = NULL;
cur->charset = XML_CHAR_ENCODING_UTF8;
+ XML_DOC_SET_PROPERTIES(cur, XML_DOC_HTML | XML_DOC_USERBUILT);
cur->properties = XML_DOC_HTML | XML_DOC_USERBUILT;
if ((ExternalID != NULL) ||
(URI != NULL))
diff --git a/SAX2.c b/SAX2.c
index bb72e16..08786a3 100644
--- a/SAX2.c
+++ b/SAX2.c
@@ -899,7 +899,7 @@ xmlSAX2StartDocument(void *ctx)
xmlSAX2ErrMemory(ctxt, "xmlSAX2StartDocument");
return;
}
- ctxt->myDoc->properties = XML_DOC_HTML;
+ XML_DOC_SET_PROPERTIES(ctxt->myDoc, XML_DOC_HTML);
ctxt->myDoc->parseFlags = ctxt->options;
#else
xmlGenericError(xmlGenericErrorContext,
@@ -912,9 +912,9 @@ xmlSAX2StartDocument(void *ctx)
} else {
doc = ctxt->myDoc = xmlNewDoc(ctxt->version);
if (doc != NULL) {
- doc->properties = 0;
+ XML_DOC_CLEAR_PROPERTIES(doc);
if (ctxt->options & XML_PARSE_OLD10)
- doc->properties |= XML_DOC_OLD10;
+ XML_DOC_ADD_PROPERTIES(doc, XML_DOC_OLD10);
doc->parseFlags = ctxt->options;
doc->standalone = ctxt->standalone;
} else {
diff --git a/include/libxml/tree.h b/include/libxml/tree.h
index a90a174..a013232 100644
--- a/include/libxml/tree.h
+++ b/include/libxml/tree.h
@@ -370,7 +370,6 @@ struct _xmlElement {
#endif
};
-
/**
* XML_LOCAL_NAMESPACE:
*
@@ -451,6 +450,10 @@ struct _xmlAttr {
void *psvi; /* for type/PSVI information */
};
+#define XML_ATTR_CLEAR_ATYPE(attr) (((attr)->atype) = 0)
+#define XML_ATTR_GET_ATYPE(attr) (((attr)->atype) & ~(15U << 27))
+#define XML_ATTR_SET_ATYPE(attr, type) ((attr)->atype = ((((attr)->atype) & (15U << 27)) | ((type) & ~(15U << 27))))
+
/**
* xmlID:
*
@@ -512,6 +515,11 @@ struct _xmlNode {
unsigned short extra; /* extra data for XPath/XSLT */
};
+#define XML_NODE_ADD_EXTRA(node, type) ((node)->extra |= ((type) & ~(15U << 12)))
+#define XML_NODE_CLEAR_EXTRA(node) (((node)->extra) = 0)
+#define XML_NODE_GET_EXTRA(node) (((node)->extra) & ~(15U << 12))
+#define XML_NODE_SET_EXTRA(node, type) ((node)->extra = ((((node)->extra) & (15U << 12)) | ((type) & ~(15U << 12))))
+
/**
* XML_GET_CONTENT:
*
@@ -589,6 +597,10 @@ struct _xmlDoc {
set at the end of parsing */
};
+#define XML_DOC_ADD_PROPERTIES(doc, type) ((doc)->properties |= ((type) & ~(15U << 27)))
+#define XML_DOC_CLEAR_PROPERTIES(doc) (((doc)->properties) = 0)
+#define XML_DOC_GET_PROPERTIES(doc) (((doc)->properties) & ~(15U << 27))
+#define XML_DOC_SET_PROPERTIES(doc, type) ((doc)->properties = ((((doc)->properties) & (15U << 27)) | ((type) & ~(15U << 27))))
typedef struct _xmlDOMWrapCtxt xmlDOMWrapCtxt;
typedef xmlDOMWrapCtxt *xmlDOMWrapCtxtPtr;
diff --git a/parser.c b/parser.c
index 6ab4bfe..19ae310 100644
--- a/parser.c
+++ b/parser.c
@@ -5663,7 +5663,7 @@ xmlParseEntityDecl(xmlParserCtxtPtr ctxt) {
xmlErrMemory(ctxt, "New Doc failed");
goto done;
}
- ctxt->myDoc->properties = XML_DOC_INTERNAL;
+ XML_DOC_SET_PROPERTIES(ctxt->myDoc, XML_DOC_INTERNAL);
}
if (ctxt->myDoc->intSubset == NULL)
ctxt->myDoc->intSubset = xmlNewDtd(ctxt->myDoc,
@@ -5734,7 +5734,7 @@ xmlParseEntityDecl(xmlParserCtxtPtr ctxt) {
xmlErrMemory(ctxt, "New Doc failed");
goto done;
}
- ctxt->myDoc->properties = XML_DOC_INTERNAL;
+ XML_DOC_SET_PROPERTIES(ctxt->myDoc, XML_DOC_INTERNAL);
}
if (ctxt->myDoc->intSubset == NULL)
@@ -7179,7 +7179,7 @@ xmlParseExternalSubset(xmlParserCtxtPtr ctxt, const xmlChar *ExternalID,
xmlErrMemory(ctxt, "New Doc failed");
return;
}
- ctxt->myDoc->properties = XML_DOC_INTERNAL;
+ XML_DOC_SET_PROPERTIES(ctxt->myDoc, XML_DOC_INTERNAL);
}
if ((ctxt->myDoc != NULL) && (ctxt->myDoc->intSubset == NULL))
xmlCreateIntSubset(ctxt->myDoc, NULL, ExternalID, SystemID);
@@ -7580,7 +7580,7 @@ xmlParseReference(xmlParserCtxtPtr ctxt) {
(nw != NULL) &&
(nw->type == XML_ELEMENT_NODE) &&
(nw->children == NULL))
- nw->extra = 1;
+ XML_NODE_SET_EXTRA(nw, 1);
break;
}
diff --git a/runxmlconf.c b/runxmlconf.c
index b5c3fd8..75fcfd6 100644
--- a/runxmlconf.c
+++ b/runxmlconf.c
@@ -190,7 +190,7 @@ xmlconfTestInvalid(const char *id, const char *filename, int options) {
id, filename);
} else {
/* invalidity should be reported both in the context and in the document */
- if ((ctxt->valid != 0) || (doc->properties & XML_DOC_DTDVALID)) {
+ if ((ctxt->valid != 0) || (XML_DOC_GET_PROPERTIES(doc) & XML_DOC_DTDVALID)) {
test_log("test %s : %s failed to detect invalid document\n",
id, filename);
nb_errors++;
@@ -222,7 +222,7 @@ xmlconfTestValid(const char *id, const char *filename, int options) {
ret = 0;
} else {
/* validity should be reported both in the context and in the document */
- if ((ctxt->valid == 0) || ((doc->properties & XML_DOC_DTDVALID) == 0)) {
+ if ((ctxt->valid == 0) || ((XML_DOC_GET_PROPERTIES(doc) & XML_DOC_DTDVALID) == 0)) {
test_log("test %s : %s failed to validate a valid document\n",
id, filename);
nb_errors++;
diff --git a/tree.c b/tree.c
index f89e3cd..772ca62 100644
--- a/tree.c
+++ b/tree.c
@@ -1160,7 +1160,7 @@ xmlNewDoc(const xmlChar *version) {
cur->compression = -1; /* not initialized */
cur->doc = cur;
cur->parseFlags = 0;
- cur->properties = XML_DOC_USERBUILT;
+ XML_DOC_SET_PROPERTIES(cur, XML_DOC_USERBUILT);
/*
* The in memory encoding is always UTF8
* This field will never change and would
@@ -2077,7 +2077,7 @@ xmlFreeProp(xmlAttrPtr cur) {
xmlDeregisterNodeDefaultValue((xmlNodePtr)cur);
/* Check for ID removal -> leading to invalid references ! */
- if ((cur->doc != NULL) && (cur->atype == XML_ATTRIBUTE_ID)) {
+ if ((cur->doc != NULL) && (XML_ATTR_GET_ATYPE(cur) == XML_ATTRIBUTE_ID)) {
xmlRemoveID(cur->doc, cur);
}
if (cur->children != NULL) xmlFreeNodeList(cur->children);
@@ -2794,7 +2794,7 @@ xmlSetTreeDoc(xmlNodePtr tree, xmlDocPtr doc) {
if(tree->type == XML_ELEMENT_NODE) {
prop = tree->properties;
while (prop != NULL) {
- if (prop->atype == XML_ATTRIBUTE_ID) {
+ if (XML_ATTR_GET_ATYPE(prop) == XML_ATTRIBUTE_ID) {
xmlRemoveID(tree->doc, prop);
}
@@ -6836,9 +6836,9 @@ xmlSetNsProp(xmlNodePtr node, xmlNsPtr ns, const xmlChar *name,
/*
* Modify the attribute's value.
*/
- if (prop->atype == XML_ATTRIBUTE_ID) {
+ if (XML_ATTR_GET_ATYPE(prop) == XML_ATTRIBUTE_ID) {
xmlRemoveID(node->doc, prop);
- prop->atype = XML_ATTRIBUTE_ID;
+ XML_ATTR_SET_ATYPE(prop, XML_ATTRIBUTE_ID);
}
if (prop->children != NULL)
xmlFreeNodeList(prop->children);
@@ -6858,7 +6858,7 @@ xmlSetNsProp(xmlNodePtr node, xmlNsPtr ns, const xmlChar *name,
tmp = tmp->next;
}
}
- if (prop->atype == XML_ATTRIBUTE_ID)
+ if (XML_ATTR_GET_ATYPE(prop) == XML_ATTRIBUTE_ID)
xmlAddID(NULL, node->doc, value, prop);
return(prop);
}
@@ -9077,7 +9077,7 @@ ns_end:
if (cur->type == XML_ELEMENT_NODE) {
cur->psvi = NULL;
cur->line = 0;
- cur->extra = 0;
+ XML_NODE_CLEAR_EXTRA(cur);
/*
* Walk attributes.
*/
@@ -9093,11 +9093,11 @@ ns_end:
* Attributes.
*/
if ((sourceDoc != NULL) &&
- (((xmlAttrPtr) cur)->atype == XML_ATTRIBUTE_ID))
+ (XML_ATTR_GET_ATYPE((xmlAttrPtr) cur) == XML_ATTRIBUTE_ID))
{
xmlRemoveID(sourceDoc, (xmlAttrPtr) cur);
}
- ((xmlAttrPtr) cur)->atype = 0;
+ XML_ATTR_CLEAR_ATYPE((xmlAttrPtr) cur);
((xmlAttrPtr) cur)->psvi = NULL;
}
break;
@@ -9818,7 +9818,7 @@ xmlDOMWrapAdoptAttr(xmlDOMWrapCtxtPtr ctxt,
}
XML_TREE_ADOPT_STR(attr->name);
- attr->atype = 0;
+ XML_ATTR_CLEAR_ATYPE(attr);
attr->psvi = NULL;
/*
* Walk content.
diff --git a/valid.c b/valid.c
index abefdc5..ae4bb82 100644
--- a/valid.c
+++ b/valid.c
@@ -1736,7 +1736,7 @@ xmlScanIDAttributeDecl(xmlValidCtxtPtr ctxt, xmlElementPtr elem, int err) {
if (elem == NULL) return(0);
cur = elem->attributes;
while (cur != NULL) {
- if (cur->atype == XML_ATTRIBUTE_ID) {
+ if (XML_ATTR_GET_ATYPE(cur) == XML_ATTRIBUTE_ID) {
ret ++;
if ((ret > 1) && (err))
xmlErrValidNode(ctxt, (xmlNodePtr) elem, XML_DTD_MULTIPLE_ID,
@@ -2109,7 +2109,7 @@ xmlDumpAttributeDecl(xmlBufferPtr buf, xmlAttributePtr attr) {
xmlBufferWriteChar(buf, ":");
}
xmlBufferWriteCHAR(buf, attr->name);
- switch (attr->atype) {
+ switch (XML_ATTR_GET_ATYPE(attr)) {
case XML_ATTRIBUTE_CDATA:
xmlBufferWriteChar(buf, " CDATA");
break;
@@ -2582,7 +2582,7 @@ xmlAddID(xmlValidCtxtPtr ctxt, xmlDocPtr doc, const xmlChar *value,
return(NULL);
}
if (attr != NULL)
- attr->atype = XML_ATTRIBUTE_ID;
+ XML_ATTR_SET_ATYPE(attr, XML_ATTRIBUTE_ID);
return(ret);
}
@@ -2661,7 +2661,7 @@ xmlIsID(xmlDocPtr doc, xmlNodePtr elem, xmlAttrPtr attr) {
if ((fullelemname != felem) && (fullelemname != elem->name))
xmlFree(fullelemname);
- if ((attrDecl != NULL) && (attrDecl->atype == XML_ATTRIBUTE_ID))
+ if ((attrDecl != NULL) && (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_ID))
return(1);
}
return(0);
@@ -2702,7 +2702,7 @@ xmlRemoveID(xmlDocPtr doc, xmlAttrPtr attr) {
xmlHashRemoveEntry(table, ID, xmlFreeIDTableEntry);
xmlFree(ID);
- attr->atype = 0;
+ XML_ATTR_CLEAR_ATYPE(attr);
return(0);
}
@@ -2987,8 +2987,8 @@ xmlIsRef(xmlDocPtr doc, xmlNodePtr elem, xmlAttrPtr attr) {
elem->name, attr->name);
if ((attrDecl != NULL) &&
- (attrDecl->atype == XML_ATTRIBUTE_IDREF ||
- attrDecl->atype == XML_ATTRIBUTE_IDREFS))
+ (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_IDREF ||
+ XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_IDREFS))
return(1);
}
return(0);
@@ -3372,7 +3372,7 @@ xmlIsMixedElement(xmlDocPtr doc, const xmlChar *name) {
static int
xmlIsDocNameStartChar(xmlDocPtr doc, int c) {
- if ((doc == NULL) || (doc->properties & XML_DOC_OLD10) == 0) {
+ if ((doc == NULL) || (XML_DOC_GET_PROPERTIES(doc) & XML_DOC_OLD10) == 0) {
/*
* Use the new checks of production [4] [4a] amd [5] of the
* Update 5 of XML-1.0
@@ -3402,7 +3402,7 @@ xmlIsDocNameStartChar(xmlDocPtr doc, int c) {
static int
xmlIsDocNameChar(xmlDocPtr doc, int c) {
- if ((doc == NULL) || (doc->properties & XML_DOC_OLD10) == 0) {
+ if ((doc == NULL) || (XML_DOC_GET_PROPERTIES(doc) & XML_DOC_OLD10) == 0) {
/*
* Use the new checks of production [4] [4a] amd [5] of the
* Update 5 of XML-1.0
@@ -3952,7 +3952,7 @@ xmlValidCtxtNormalizeAttributeValue(xmlValidCtxtPtr ctxt, xmlDocPtr doc,
if (attrDecl == NULL)
return(NULL);
- if (attrDecl->atype == XML_ATTRIBUTE_CDATA)
+ if (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_CDATA)
return(NULL);
ret = xmlStrdup(value);
@@ -4014,7 +4014,7 @@ xmlValidNormalizeAttributeValue(xmlDocPtr doc, xmlNodePtr elem,
if (attrDecl == NULL)
return(NULL);
- if (attrDecl->atype == XML_ATTRIBUTE_CDATA)
+ if (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_CDATA)
return(NULL);
ret = xmlStrdup(value);
@@ -4029,7 +4029,7 @@ xmlValidateAttributeIdCallback(void *payload, void *data,
const xmlChar *name ATTRIBUTE_UNUSED) {
xmlAttributePtr attr = (xmlAttributePtr) payload;
int *count = (int *) data;
- if (attr->atype == XML_ATTRIBUTE_ID) (*count)++;
+ if (XML_ATTR_GET_ATYPE(attr) == XML_ATTRIBUTE_ID) (*count)++;
}
/**
@@ -4061,7 +4061,7 @@ xmlValidateAttributeDecl(xmlValidCtxtPtr ctxt, xmlDocPtr doc,
/* Attribute Default Legal */
/* Enumeration */
if (attr->defaultValue != NULL) {
- val = xmlValidateAttributeValueInternal(doc, attr->atype,
+ val = xmlValidateAttributeValueInternal(doc, XML_ATTR_GET_ATYPE(attr),
attr->defaultValue);
if (val == 0) {
xmlErrValidNode(ctxt, (xmlNodePtr) attr, XML_DTD_ATTRIBUTE_DEFAULT,
@@ -4072,7 +4072,7 @@ xmlValidateAttributeDecl(xmlValidCtxtPtr ctxt, xmlDocPtr doc,
}
/* ID Attribute Default */
- if ((attr->atype == XML_ATTRIBUTE_ID)&&
+ if ((XML_ATTR_GET_ATYPE(attr) == XML_ATTRIBUTE_ID)&&
(attr->def != XML_ATTRIBUTE_IMPLIED) &&
(attr->def != XML_ATTRIBUTE_REQUIRED)) {
xmlErrValidNode(ctxt, (xmlNodePtr) attr, XML_DTD_ID_FIXED,
@@ -4082,7 +4082,7 @@ xmlValidateAttributeDecl(xmlValidCtxtPtr ctxt, xmlDocPtr doc,
}
/* One ID per Element Type */
- if (attr->atype == XML_ATTRIBUTE_ID) {
+ if (XML_ATTR_GET_ATYPE(attr) == XML_ATTRIBUTE_ID) {
int nbId;
/* the trick is that we parse DtD as their own internal subset */
@@ -4341,9 +4341,9 @@ xmlValidateOneAttribute(xmlValidCtxtPtr ctxt, xmlDocPtr doc,
attr->name, elem->name, NULL);
return(0);
}
- attr->atype = attrDecl->atype;
+ XML_ATTR_SET_ATYPE(attr, attrDecl->atype);
- val = xmlValidateAttributeValueInternal(doc, attrDecl->atype, value);
+ val = xmlValidateAttributeValueInternal(doc, XML_ATTR_GET_ATYPE(attrDecl), value);
if (val == 0) {
xmlErrValidNode(ctxt, elem, XML_DTD_ATTRIBUTE_VALUE,
"Syntax of value for attribute %s of %s is not valid\n",
@@ -4362,19 +4362,19 @@ xmlValidateOneAttribute(xmlValidCtxtPtr ctxt, xmlDocPtr doc,
}
/* Validity Constraint: ID uniqueness */
- if (attrDecl->atype == XML_ATTRIBUTE_ID) {
+ if (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_ID) {
if (xmlAddID(ctxt, doc, value, attr) == NULL)
ret = 0;
}
- if ((attrDecl->atype == XML_ATTRIBUTE_IDREF) ||
- (attrDecl->atype == XML_ATTRIBUTE_IDREFS)) {
+ if ((XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_IDREF) ||
+ (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_IDREFS)) {
if (xmlAddRef(ctxt, doc, value, attr) == NULL)
ret = 0;
}
/* Validity Constraint: Notation Attributes */
- if (attrDecl->atype == XML_ATTRIBUTE_NOTATION) {
+ if (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_NOTATION) {
xmlEnumerationPtr tree = attrDecl->tree;
xmlNotationPtr nota;
@@ -4404,7 +4404,7 @@ xmlValidateOneAttribute(xmlValidCtxtPtr ctxt, xmlDocPtr doc,
}
/* Validity Constraint: Enumeration */
- if (attrDecl->atype == XML_ATTRIBUTE_ENUMERATION) {
+ if (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_ENUMERATION) {
xmlEnumerationPtr tree = attrDecl->tree;
while (tree != NULL) {
if (xmlStrEqual(tree->name, value)) break;
@@ -4429,7 +4429,7 @@ xmlValidateOneAttribute(xmlValidCtxtPtr ctxt, xmlDocPtr doc,
/* Extra check for the attribute value */
ret &= xmlValidateAttributeValue2(ctxt, doc, attr->name,
- attrDecl->atype, value);
+ XML_ATTR_GET_ATYPE(attrDecl), value);
return(ret);
}
@@ -4528,7 +4528,7 @@ xmlNodePtr elem, const xmlChar *prefix, xmlNsPtr ns, const xmlChar *value) {
return(0);
}
- val = xmlValidateAttributeValueInternal(doc, attrDecl->atype, value);
+ val = xmlValidateAttributeValueInternal(doc, XML_ATTR_GET_ATYPE(attrDecl), value);
if (val == 0) {
if (ns->prefix != NULL) {
xmlErrValidNode(ctxt, elem, XML_DTD_INVALID_DEFAULT,
@@ -4578,7 +4578,7 @@ xmlNodePtr elem, const xmlChar *prefix, xmlNsPtr ns, const xmlChar *value) {
#endif
/* Validity Constraint: Notation Attributes */
- if (attrDecl->atype == XML_ATTRIBUTE_NOTATION) {
+ if (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_NOTATION) {
xmlEnumerationPtr tree = attrDecl->tree;
xmlNotationPtr nota;
@@ -4620,7 +4620,7 @@ xmlNodePtr elem, const xmlChar *prefix, xmlNsPtr ns, const xmlChar *value) {
}
/* Validity Constraint: Enumeration */
- if (attrDecl->atype == XML_ATTRIBUTE_ENUMERATION) {
+ if (XML_ATTR_GET_ATYPE(attrDecl) == XML_ATTRIBUTE_ENUMERATION) {
xmlEnumerationPtr tree = attrDecl->tree;
while (tree != NULL) {
if (xmlStrEqual(tree->name, value)) break;
@@ -4658,10 +4658,10 @@ xmlNodePtr elem, const xmlChar *prefix, xmlNsPtr ns, const xmlChar *value) {
/* Extra check for the attribute value */
if (ns->prefix != NULL) {
ret &= xmlValidateAttributeValue2(ctxt, doc, ns->prefix,
- attrDecl->atype, value);
+ XML_ATTR_GET_ATYPE(attrDecl), value);
} else {
ret &= xmlValidateAttributeValue2(ctxt, doc, BAD_CAST "xmlns",
- attrDecl->atype, value);
+ XML_ATTR_GET_ATYPE(attrDecl), value);
}
return(ret);
@@ -6375,7 +6375,7 @@ xmlValidateRef(xmlRefPtr ref, xmlValidCtxtPtr ctxt,
while (IS_BLANK_CH(*cur)) cur++;
}
xmlFree(dup);
- } else if (attr->atype == XML_ATTRIBUTE_IDREF) {
+ } else if (XML_ATTR_GET_ATYPE(attr) == XML_ATTRIBUTE_IDREF) {
id = xmlGetID(ctxt->doc, name);
if (id == NULL) {
xmlErrValidNode(ctxt, attr->parent, XML_DTD_UNKNOWN_ID,
@@ -6383,7 +6383,7 @@ xmlValidateRef(xmlRefPtr ref, xmlValidCtxtPtr ctxt,
attr->name, name, NULL);
ctxt->valid = 0;
}
- } else if (attr->atype == XML_ATTRIBUTE_IDREFS) {
+ } else if (XML_ATTR_GET_ATYPE(attr) == XML_ATTRIBUTE_IDREFS) {
xmlChar *dup, *str = NULL, *cur, save;
dup = xmlStrdup(name);
@@ -6583,7 +6583,7 @@ xmlValidateAttributeCallback(void *payload, void *data,
if (cur == NULL)
return;
- switch (cur->atype) {
+ switch (XML_ATTR_GET_ATYPE(cur)) {
case XML_ATTRIBUTE_CDATA:
case XML_ATTRIBUTE_ID:
case XML_ATTRIBUTE_IDREF :
@@ -6598,7 +6598,7 @@ xmlValidateAttributeCallback(void *payload, void *data,
if (cur->defaultValue != NULL) {
ret = xmlValidateAttributeValue2(ctxt, ctxt->doc, cur->name,
- cur->atype, cur->defaultValue);
+ XML_ATTR_GET_ATYPE(cur), cur->defaultValue);
if ((ret == 0) && (ctxt->valid == 1))
ctxt->valid = 0;
}
@@ -6606,14 +6606,14 @@ xmlValidateAttributeCallback(void *payload, void *data,
xmlEnumerationPtr tree = cur->tree;
while (tree != NULL) {
ret = xmlValidateAttributeValue2(ctxt, ctxt->doc,
- cur->name, cur->atype, tree->name);
+ cur->name, XML_ATTR_GET_ATYPE(cur), tree->name);
if ((ret == 0) && (ctxt->valid == 1))
ctxt->valid = 0;
tree = tree->next;
}
}
}
- if (cur->atype == XML_ATTRIBUTE_NOTATION) {
+ if (XML_ATTR_GET_ATYPE(cur) == XML_ATTRIBUTE_NOTATION) {
doc = cur->doc;
if (cur->elem == NULL) {
xmlErrValid(ctxt, XML_ERR_INTERNAL_ERROR,
diff --git a/xmlreader.c b/xmlreader.c
index 5fdeb2b..5de168c 100644
--- a/xmlreader.c
+++ b/xmlreader.c
@@ -572,7 +572,7 @@ xmlTextReaderStartElement(void *ctx, const xmlChar *fullname,
if ((ctxt->node != NULL) && (ctxt->input != NULL) &&
(ctxt->input->cur != NULL) && (ctxt->input->cur[0] == '/') &&
(ctxt->input->cur[1] == '>'))
- ctxt->node->extra = NODE_IS_EMPTY;
+ XML_NODE_SET_EXTRA(ctxt->node, NODE_IS_EMPTY);
}
if (reader != NULL)
reader->state = XML_TEXTREADER_ELEMENT;
@@ -631,7 +631,7 @@ xmlTextReaderStartElementNs(void *ctx,
if ((ctxt->node != NULL) && (ctxt->input != NULL) &&
(ctxt->input->cur != NULL) && (ctxt->input->cur[0] == '/') &&
(ctxt->input->cur[1] == '>'))
- ctxt->node->extra = NODE_IS_EMPTY;
+ XML_NODE_SET_EXTRA(ctxt->node, NODE_IS_EMPTY);
}
if (reader != NULL)
reader->state = XML_TEXTREADER_ELEMENT;
@@ -1017,7 +1017,7 @@ skip_children:
xmlNodePtr tmp;
if (reader->entNr == 0) {
while ((tmp = node->last) != NULL) {
- if ((tmp->extra & NODE_IS_PRESERVED) == 0) {
+ if ((XML_NODE_GET_EXTRA(tmp) & NODE_IS_PRESERVED) == 0) {
xmlUnlinkNode(tmp);
xmlTextReaderFreeNode(reader, tmp);
} else
@@ -1265,7 +1265,7 @@ get_next_node:
if ((oldstate == XML_TEXTREADER_ELEMENT) &&
(reader->node->type == XML_ELEMENT_NODE) &&
(reader->node->children == NULL) &&
- ((reader->node->extra & NODE_IS_EMPTY) == 0)
+ ((XML_NODE_GET_EXTRA(reader->node) & NODE_IS_EMPTY) == 0)
#ifdef LIBXML_XINCLUDE_ENABLED
&& (reader->in_xinclude <= 0)
#endif
@@ -1279,7 +1279,7 @@ get_next_node:
xmlTextReaderValidatePop(reader);
#endif /* LIBXML_REGEXP_ENABLED */
if ((reader->preserves > 0) &&
- (reader->node->extra & NODE_IS_SPRESERVED))
+ (XML_NODE_GET_EXTRA(reader->node) & NODE_IS_SPRESERVED))
reader->preserves--;
reader->node = reader->node->next;
reader->state = XML_TEXTREADER_ELEMENT;
@@ -1295,7 +1295,7 @@ get_next_node:
(reader->node->prev != NULL) &&
(reader->node->prev->type != XML_DTD_NODE)) {
xmlNodePtr tmp = reader->node->prev;
- if ((tmp->extra & NODE_IS_PRESERVED) == 0) {
+ if ((XML_NODE_GET_EXTRA(tmp) & NODE_IS_PRESERVED) == 0) {
if (oldnode == tmp)
oldnode = NULL;
xmlUnlinkNode(tmp);
@@ -1308,7 +1308,7 @@ get_next_node:
if ((oldstate == XML_TEXTREADER_ELEMENT) &&
(reader->node->type == XML_ELEMENT_NODE) &&
(reader->node->children == NULL) &&
- ((reader->node->extra & NODE_IS_EMPTY) == 0)) {;
+ ((XML_NODE_GET_EXTRA(reader->node) & NODE_IS_EMPTY) == 0)) {;
reader->state = XML_TEXTREADER_END;
goto node_found;
}
@@ -1317,7 +1317,7 @@ get_next_node:
xmlTextReaderValidatePop(reader);
#endif /* LIBXML_REGEXP_ENABLED */
if ((reader->preserves > 0) &&
- (reader->node->extra & NODE_IS_SPRESERVED))
+ (XML_NODE_GET_EXTRA(reader->node) & NODE_IS_SPRESERVED))
reader->preserves--;
reader->node = reader->node->parent;
if ((reader->node == NULL) ||
@@ -1341,7 +1341,7 @@ get_next_node:
#endif
(reader->entNr == 0) &&
(oldnode->type != XML_DTD_NODE) &&
- ((oldnode->extra & NODE_IS_PRESERVED) == 0)) {
+ ((XML_NODE_GET_EXTRA(oldnode) & NODE_IS_PRESERVED) == 0)) {
xmlUnlinkNode(oldnode);
xmlTextReaderFreeNode(reader, oldnode);
}
@@ -1354,7 +1354,7 @@ get_next_node:
#endif
(reader->entNr == 0) &&
(reader->node->last != NULL) &&
- ((reader->node->last->extra & NODE_IS_PRESERVED) == 0)) {
+ ((XML_NODE_GET_EXTRA(reader->node->last) & NODE_IS_PRESERVED) == 0)) {
xmlNodePtr tmp = reader->node->last;
xmlUnlinkNode(tmp);
xmlTextReaderFreeNode(reader, tmp);
@@ -1536,7 +1536,7 @@ xmlTextReaderNext(xmlTextReaderPtr reader) {
return(xmlTextReaderRead(reader));
if (reader->state == XML_TEXTREADER_END || reader->state == XML_TEXTREADER_BACKTRACK)
return(xmlTextReaderRead(reader));
- if (cur->extra & NODE_IS_EMPTY)
+ if (XML_NODE_GET_EXTRA(cur) & NODE_IS_EMPTY)
return(xmlTextReaderRead(reader));
do {
ret = xmlTextReaderRead(reader);
@@ -2956,7 +2956,7 @@ xmlTextReaderIsEmptyElement(xmlTextReaderPtr reader) {
if (reader->in_xinclude > 0)
return(1);
#endif
- return((reader->node->extra & NODE_IS_EMPTY) != 0);
+ return((XML_NODE_GET_EXTRA(reader->node) & NODE_IS_EMPTY) != 0);
}
/**
@@ -3818,15 +3818,15 @@ xmlTextReaderPreserve(xmlTextReaderPtr reader) {
return(NULL);
if ((cur->type != XML_DOCUMENT_NODE) && (cur->type != XML_DTD_NODE)) {
- cur->extra |= NODE_IS_PRESERVED;
- cur->extra |= NODE_IS_SPRESERVED;
+ XML_NODE_ADD_EXTRA(cur, NODE_IS_PRESERVED);
+ XML_NODE_ADD_EXTRA(cur, NODE_IS_SPRESERVED);
}
reader->preserves++;
parent = cur->parent;;
while (parent != NULL) {
if (parent->type == XML_ELEMENT_NODE)
- parent->extra |= NODE_IS_PRESERVED;
+ XML_NODE_ADD_EXTRA(parent, NODE_IS_PRESERVED);
parent = parent->parent;
}
return(cur);
diff --git a/xmlschemas.c b/xmlschemas.c
index 428e3c8..1f54acc 100644
--- a/xmlschemas.c
+++ b/xmlschemas.c
@@ -5895,7 +5895,7 @@ xmlSchemaPValAttrNodeID(xmlSchemaParserCtxtPtr ctxt, xmlAttrPtr attr)
/*
* NOTE: the IDness might have already be declared in the DTD
*/
- if (attr->atype != XML_ATTRIBUTE_ID) {
+ if (XML_ATTR_GET_ATYPE(attr) != XML_ATTRIBUTE_ID) {
xmlIDPtr res;
xmlChar *strip;
@@ -5918,7 +5918,7 @@ xmlSchemaPValAttrNodeID(xmlSchemaParserCtxtPtr ctxt, xmlAttrPtr attr)
NULL, NULL, "Duplicate value '%s' of simple "
"type 'xs:ID'", value, NULL);
} else
- attr->atype = XML_ATTRIBUTE_ID;
+ XML_ATTR_SET_ATYPE(attr, XML_ATTRIBUTE_ID);
}
} else if (ret > 0) {
ret = XML_SCHEMAP_S4S_ATTR_INVALID_VALUE;
diff --git a/xmlschemastypes.c b/xmlschemastypes.c
index de95d94..76a7c87 100644
--- a/xmlschemastypes.c
+++ b/xmlschemastypes.c
@@ -2969,7 +2969,7 @@ xmlSchemaValAtomicType(xmlSchemaTypePtr type, const xmlChar * value,
/*
* NOTE: the IDness might have already be declared in the DTD
*/
- if (attr->atype != XML_ATTRIBUTE_ID) {
+ if (XML_ATTR_GET_ATYPE(attr) != XML_ATTRIBUTE_ID) {
xmlIDPtr res;
xmlChar *strip;
@@ -2982,7 +2982,7 @@ xmlSchemaValAtomicType(xmlSchemaTypePtr type, const xmlChar * value,
if (res == NULL) {
ret = 2;
} else {
- attr->atype = XML_ATTRIBUTE_ID;
+ XML_ATTR_SET_ATYPE(attr, XML_ATTRIBUTE_ID);
}
}
}
@@ -3007,7 +3007,7 @@ xmlSchemaValAtomicType(xmlSchemaTypePtr type, const xmlChar * value,
xmlFree(strip);
} else
xmlAddRef(NULL, node->doc, value, attr);
- attr->atype = XML_ATTRIBUTE_IDREF;
+ XML_ATTR_SET_ATYPE(attr, XML_ATTRIBUTE_IDREF);
}
goto done;
case XML_SCHEMAS_IDREFS:
@@ -3021,7 +3021,7 @@ xmlSchemaValAtomicType(xmlSchemaTypePtr type, const xmlChar * value,
(node->type == XML_ATTRIBUTE_NODE)) {
xmlAttrPtr attr = (xmlAttrPtr) node;
- attr->atype = XML_ATTRIBUTE_IDREFS;
+ XML_ATTR_SET_ATYPE(attr, XML_ATTRIBUTE_IDREFS);
}
goto done;
case XML_SCHEMAS_ENTITY:{
@@ -3052,7 +3052,7 @@ xmlSchemaValAtomicType(xmlSchemaTypePtr type, const xmlChar * value,
(node->type == XML_ATTRIBUTE_NODE)) {
xmlAttrPtr attr = (xmlAttrPtr) node;
- attr->atype = XML_ATTRIBUTE_ENTITY;
+ XML_ATTR_SET_ATYPE(attr, XML_ATTRIBUTE_ENTITY);
}
goto done;
}
@@ -3069,7 +3069,7 @@ xmlSchemaValAtomicType(xmlSchemaTypePtr type, const xmlChar * value,
(node->type == XML_ATTRIBUTE_NODE)) {
xmlAttrPtr attr = (xmlAttrPtr) node;
- attr->atype = XML_ATTRIBUTE_ENTITIES;
+ XML_ATTR_SET_ATYPE(attr, XML_ATTRIBUTE_ENTITIES);
}
goto done;
case XML_SCHEMAS_NOTATION:{
--
2.50.1

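The hunks above route every direct access to `attr->atype` and `node->extra` through `XML_ATTR_*`/`XML_NODE_*` accessor macros. A minimal sketch of how such macros could be defined — the macro names mirror the patch, but the struct layouts and definitions here are hypothetical stand-ins, not the real libxml2 ones:

```c
#include <assert.h>

typedef enum {
    XML_ATTRIBUTE_CDATA = 1,
    XML_ATTRIBUTE_ID    = 2,
    XML_ATTRIBUTE_IDREF = 3
} xmlAttributeType;

/* Hypothetical layouts: the type/flag values live in packed bit-fields, so
 * every read and write must go through an accessor instead of touching a
 * plain struct member directly. */
typedef struct {
    unsigned atype : 5;   /* attribute type, packed alongside other state */
} xmlAttrStub;

typedef struct {
    unsigned extra : 16;  /* reader flag bits such as NODE_IS_EMPTY */
} xmlNodeStub;

#define XML_ATTR_GET_ATYPE(a)     ((xmlAttributeType)(a)->atype)
#define XML_ATTR_SET_ATYPE(a, v)  ((void)((a)->atype = (v)))
#define XML_ATTR_CLEAR_ATYPE(a)   ((void)((a)->atype = 0))

#define XML_NODE_GET_EXTRA(n)     ((n)->extra)
#define XML_NODE_SET_EXTRA(n, v)  ((void)((n)->extra = (v)))
#define XML_NODE_ADD_EXTRA(n, v)  ((void)((n)->extra |= (v)))
#define XML_NODE_CLEAR_EXTRA(n)   ((void)((n)->extra = 0))
```

The point of the indirection is that the hundreds of call sites rewritten above stay unchanged if the storage layout of `atype`/`extra` changes again later.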

@@ -24,6 +24,7 @@ SRC_URI += "http://www.w3.org/XML/Test/xmlts20130923.tar;subdir=${BP};name=testt
file://CVE-2025-49794-CVE-2025-49796.patch \
file://CVE-2025-49795.patch \
file://CVE-2025-6170.patch \
file://CVE-2025-7425.patch \
"
SRC_URI[archive.sha256sum] = "c3d8c0c34aa39098f66576fe51969db12a5100b956233dc56506f7a8679be995"


@@ -0,0 +1,39 @@
From 8ebb2a68dfac02e7a83885587a9a5a203147ebbe Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Wed, 19 Nov 2025 13:23:38 +0100
Subject: [PATCH] iconv: fix erroneous input validation in EUC-KR decoder
as a result of incorrect bounds checking on the lead byte being
decoded, certain invalid inputs which should produce an encoding
error, such as "\xc8\x41", instead produced out-of-bounds loads from
the ksc table.
in a worst case, the loaded value may not be a valid unicode scalar
value, in which case, if the output encoding was UTF-8, wctomb would
return (size_t)-1, causing an overflow in the output pointer and
remaining buffer size which could clobber memory outside of the output
buffer.
bug report was submitted in private by Nick Wellnhofer on account of
potential security implications.
CVE: CVE-2025-26519
Upstream-Status: Backport [https://git.musl-libc.org/cgit/musl/commit/?id=e5adcd97b5196e29991b524237381a0202a60659]
Signed-off-by: Gyorgy Sarvari <skandigraun@gmail.com>
---
src/locale/iconv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/locale/iconv.c b/src/locale/iconv.c
index 3047c27b..1fb66bc8 100644
--- a/src/locale/iconv.c
+++ b/src/locale/iconv.c
@@ -495,7 +495,7 @@ size_t iconv(iconv_t cd, char **restrict in, size_t *restrict inb, char **restri
if (c >= 93 || d >= 94) {
c += (0xa1-0x81);
d += 0xa1;
- if (c >= 93 || c>=0xc6-0x81 && d>0x52)
+ if (c > 0xc6-0x81 || c==0xc6-0x81 && d>0x52)
goto ilseq;
if (d-'A'<26) d = d-'A';
else if (d-'a'<26) d = d-'a'+26;

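The one-line musl change tightens the bound on the EUC-KR lead byte. Per the surrounding code, after `c += (0xa1-0x81); d += 0xa1;` the variable `c` is the lead byte minus 0x81 and `d` is the raw trail byte; valid extended leads run only up to 0xc6, and for lead 0xc6 the trail must not exceed 0x52. A small sketch (under those assumptions, extracted from the patch context) comparing the old and new rejection predicates on the reported input "\xc8\x41":

```c
#include <assert.h>

/* c = lead byte - 0x81, d = raw trail byte, per the musl code after its
 * adjustments. Each function returns 1 if the byte pair must be rejected
 * as an encoding error. */
static int reject_old(unsigned c, unsigned d) {
    return c >= 93 || (c >= 0xc6 - 0x81 && d > 0x52);   /* buggy bound */
}
static int reject_new(unsigned c, unsigned d) {
    return c > 0xc6 - 0x81 || (c == 0xc6 - 0x81 && d > 0x52);  /* fixed */
}
```

With lead 0xc8 (`c = 0x47`), the old predicate's `c >= 93` test is never reached in time: 0x47 is below 93, and the second clause only fires for `d > 0x52`, so "\xc8\x41" slipped through to an out-of-bounds ksc table load.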

@@ -0,0 +1,38 @@
From 7e7052e17e900194a588d337ff4a8e646133afed Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Wed, 19 Nov 2025 13:27:15 +0100
Subject: [PATCH] iconv: harden UTF-8 output code path against input decoder
bugs
the UTF-8 output code was written assuming an invariant that iconv's
decoders only emit valid Unicode Scalar Values which wctomb can encode
successfully, thereby always returning a value between 1 and 4.
if this invariant is not satisfied, wctomb returns (size_t)-1, and the
subsequent adjustments to the output buffer pointer and remaining
output byte count overflow, moving the output position backwards,
potentially past the beginning of the buffer, without storing any
bytes.
CVE: CVE-2025-26519
Upstream-Status: Backport [https://git.musl-libc.org/cgit/musl/commit/?id=c47ad25ea3b484e10326f933e927c0bc8cded3da]
Signed-off-by: Gyorgy Sarvari <skandigraun@gmail.com>
---
src/locale/iconv.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/src/locale/iconv.c b/src/locale/iconv.c
index 1fb66bc8..fb1d3217 100644
--- a/src/locale/iconv.c
+++ b/src/locale/iconv.c
@@ -538,6 +538,10 @@ size_t iconv(iconv_t cd, char **restrict in, size_t *restrict inb, char **restri
if (*outb < k) goto toobig;
memcpy(*out, tmp, k);
} else k = wctomb_utf8(*out, c);
+ /* This failure condition should be unreachable, but
+ * is included to prevent decoder bugs from translating
+ * into advancement outside the output buffer range. */
+ if (k>4) goto ilseq;
*out += k;
*outb -= k;
break;

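The second hardening patch guards against the failure mode the commit message describes: a failed `wctomb` returns `(size_t)-1`, and the subsequent `*out += k; *outb -= k;` would then move the output cursor backwards through unsigned wraparound. A standalone illustration of that arithmetic (modelled with `uintptr_t` so the wrap is well-defined here; this is not the musl code itself, and it assumes `uintptr_t` and `size_t` share a width, as on typical targets):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Models the cursor update `*out += k; *outb -= k;` from iconv's UTF-8
 * output path, with the pointer replaced by a uintptr_t stand-in. */
static void advance(uintptr_t *out, size_t *outb, size_t k) {
    *out += k;
    *outb -= k;
}
```

Passing the error value `(size_t)-1` as `k` decrements the cursor by one byte and inflates the remaining-space counter, which is exactly why the patch adds the `if (k>4) goto ilseq;` backstop.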

@@ -14,7 +14,9 @@ SRC_URI = "git://git.etalabs.net/git/musl;branch=master;protocol=https \
file://0001-Make-dynamic-linker-a-relative-symlink-to-libc.patch \
file://0002-ldso-Use-syslibdir-and-libdir-as-default-pathes-to-l.patch \
file://0003-elf.h-add-typedefs-for-Elf64_Relr-and-Elf32_Relr.patch \
file://CVE-2025-26519-1.patch \
file://CVE-2025-26519-2.patch \
"
S = "${WORKDIR}/git"


@@ -66,5 +66,8 @@ SRC_URI = "\
file://CVE-2025-11414.patch \
file://CVE-2025-11412.patch \
file://CVE-2025-11413.patch \
file://0028-CVE-2025-11494.patch \
file://0029-CVE-2025-11839.patch \
file://0030-CVE-2025-11840.patch \
"
S = "${WORKDIR}/git"


@@ -0,0 +1,43 @@
From: "H.J. Lu" <hjl.tools@gmail.com>
Date: Tue, 30 Sep 2025 08:13:56 +0800
Upstream-Status: Backport [https://sourceware.org/git/?p=binutils-gdb.git;a=patch;h=b6ac5a8a5b82f0ae6a4642c8d7149b325f4cc60a]
CVE: CVE-2025-11494
Since x86 .eh_frame section may reference _GLOBAL_OFFSET_TABLE_, keep
_GLOBAL_OFFSET_TABLE_ if there is dynamic section and the output
.eh_frame section is non-empty.
PR ld/33499
* elfxx-x86.c (_bfd_x86_elf_late_size_sections): Keep
_GLOBAL_OFFSET_TABLE_ if there is dynamic section and the
output .eh_frame section is non-empty.
Signed-off-by: Deepesh Varatharajan <Deepesh.Varatharajan@windriver.com>
diff --git a/bfd/elfxx-x86.c b/bfd/elfxx-x86.c
index c054f7cd..ddc15945 100644
--- a/bfd/elfxx-x86.c
+++ b/bfd/elfxx-x86.c
@@ -2447,6 +2447,8 @@ _bfd_x86_elf_late_size_sections (bfd *output_bfd,
if (htab->elf.sgotplt)
{
+ asection *eh_frame;
+
/* Don't allocate .got.plt section if there are no GOT nor PLT
entries and there is no reference to _GLOBAL_OFFSET_TABLE_. */
if ((htab->elf.hgot == NULL
@@ -2459,7 +2461,11 @@ _bfd_x86_elf_late_size_sections (bfd *output_bfd,
&& (htab->elf.iplt == NULL
|| htab->elf.iplt->size == 0)
&& (htab->elf.igotplt == NULL
- || htab->elf.igotplt->size == 0))
+ || htab->elf.igotplt->size == 0)
+ && (!htab->elf.dynamic_sections_created
+ || (eh_frame = bfd_get_section_by_name (output_bfd,
+ ".eh_frame")) == NULL
+ || eh_frame->rawsize == 0))
{
htab->elf.sgotplt->size = 0;
/* Solaris requires to keep _GLOBAL_OFFSET_TABLE_ even if it


@@ -0,0 +1,32 @@
From 12ef7d5b7b02d0023db645d86eb9d0797bc747fe Mon Sep 17 00:00:00 2001
From: Nick Clifton <nickc@redhat.com>
Date: Mon, 3 Nov 2025 11:49:02 +0000
Subject: [PATCH] Remove call to abort in the DGB debug format printing code,
thus allowing the display of a fuzzed input file to complete without
triggering an abort.
PR 33448
---
binutils/prdbg.c | 1 -
1 file changed, 1 deletion(-)
Upstream-Status: Backport [https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=12ef7d5b7b02d0023db645d86eb9d0797bc747fe]
CVE: CVE-2025-11839
Signed-off-by: Yash Shinde <Yash.Shinde@windriver.com>
diff --git a/binutils/prdbg.c b/binutils/prdbg.c
index c239aeb1a79..5d405c48e3d 100644
--- a/binutils/prdbg.c
+++ b/binutils/prdbg.c
@@ -2449,7 +2449,6 @@ tg_tag_type (void *p, const char *name, unsigned int id,
t = "union class ";
break;
default:
- abort ();
return false;
}
--
2.43.7


@@ -0,0 +1,37 @@
From f6b0f53a36820da91eadfa9f466c22f92e4256e0 Mon Sep 17 00:00:00 2001
From: Alan Modra <amodra@gmail.com>
Date: Mon, 3 Nov 2025 09:03:37 +1030
Subject: [PATCH] PR 33455 SEGV in vfinfo at ldmisc.c:527
A reloc howto set up with EMPTY_HOWTO has a NULL name. More than one
place emitting diagnostics assumes a reloc howto won't have a NULL
name.
PR 33455
* coffcode.h (coff_slurp_reloc_table): Don't allow a howto with
a NULL name.
---
bfd/coffcode.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Upstream-Status: Backport [https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=f6b0f53a36820da91eadfa9f466c22f92e4256e0]
CVE: CVE-2025-11840
Signed-off-by: Yash Shinde <Yash.Shinde@windriver.com>
diff --git a/bfd/coffcode.h b/bfd/coffcode.h
index 1e5acc0032c..ce1e39131b4 100644
--- a/bfd/coffcode.h
+++ b/bfd/coffcode.h
@@ -5345,7 +5345,7 @@ coff_slurp_reloc_table (bfd * abfd, sec_ptr asect, asymbol ** symbols)
RTYPE2HOWTO (cache_ptr, &dst);
#endif /* RELOC_PROCESSING */
- if (cache_ptr->howto == NULL)
+ if (cache_ptr->howto == NULL || cache_ptr->howto->name == NULL)
{
_bfd_error_handler
/* xgettext:c-format */
--
2.43.7


@@ -7,6 +7,7 @@ SRC_URI += "file://OEToolchainConfig.cmake \
file://environment.d-cmake.sh \
file://0005-Disable-use-of-ext2fs-ext2_fs.h-by-cmake-s-internal-.patch \
file://0001-CMakeLists.txt-disable-USE_NGHTTP2.patch \
file://CVE-2025-9301.patch \
"
LICENSE:append = " & BSD-1-Clause & MIT & BSD-2-Clause & curl"


@@ -22,12 +22,15 @@ SRC_URI += "\
file://CVE-2025-47907.patch \
file://CVE-2025-47906.patch \
file://CVE-2025-58185.patch \
file://CVE-2025-58187.patch \
file://CVE-2025-58187-1.patch \
file://CVE-2025-58187-2.patch \
file://CVE-2025-58188.patch \
file://CVE-2025-58189.patch \
file://CVE-2025-47912.patch \
file://CVE-2025-61723.patch \
file://CVE-2025-61724.patch \
file://CVE-2025-61727.patch \
file://CVE-2025-61729.patch \
"
SRC_URI[main.sha256sum] = "012a7e1f37f362c0918c1dfa3334458ac2da1628c4b9cf4d9ca02db986e17d71"


@@ -0,0 +1,516 @@
From ca6a5545ba18844a97c88a90a385eb6335bb7526 Mon Sep 17 00:00:00 2001
From: Roland Shoemaker <roland@golang.org>
Date: Thu, 9 Oct 2025 13:35:24 -0700
Subject: [PATCH] [release-branch.go1.24] crypto/x509: rework fix for
CVE-2025-58187
In CL 709854 we enabled strict validation for a number of properties of
domain names (and their constraints). This caused significant breakage,
since we didn't previously disallow the creation of certificates which
contained these malformed domains.
Rollback a number of the properties we enforced, making domainNameValid
only enforce the same properties that domainToReverseLabels does. Since
this also undoes some of the DoS protections our initial fix enabled,
this change also adds caching of constraints in isValid (which perhaps
is the fix we should've initially chosen).
Updates #75835
Updates #75828
Fixes #75860
Change-Id: Ie6ca6b4f30e9b8a143692b64757f7bbf4671ed0e
Reviewed-on: https://go-review.googlesource.com/c/go/+/710735
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Damien Neil <dneil@google.com>
(cherry picked from commit 1cd71689f2ed8f07031a0cc58fc3586ca501839f)
Reviewed-on: https://go-review.googlesource.com/c/go/+/710879
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
Upstream-Status: Backport [https://github.com/golang/go/commit/ca6a5545ba18844a97c88a90a385eb6335bb7526]
CVE: CVE-2025-58187
Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
---
src/crypto/x509/name_constraints_test.go | 66 +++++++++++++++++--
src/crypto/x509/parser.go | 57 +++++++++++-----
src/crypto/x509/parser_test.go | 84 +++++++++++++++++++++---
src/crypto/x509/verify.go | 53 ++++++++++-----
src/crypto/x509/verify_test.go | 2 +-
5 files changed, 213 insertions(+), 49 deletions(-)
diff --git a/src/crypto/x509/name_constraints_test.go b/src/crypto/x509/name_constraints_test.go
index 9aaa6d7..78263fc 100644
--- a/src/crypto/x509/name_constraints_test.go
+++ b/src/crypto/x509/name_constraints_test.go
@@ -1456,7 +1456,63 @@ var nameConstraintsTests = []nameConstraintsTest{
expectedError: "incompatible key usage",
},
- // #77: if several EKUs are requested, satisfying any of them is sufficient.
+ // An invalid DNS SAN should be detected only at validation time so
+ // that we can process CA certificates in the wild that have invalid SANs.
+ // See https://github.com/golang/go/issues/23995
+
+ // #77: an invalid DNS or mail SAN will not be detected if name constraint
+ // checking is not triggered.
+ {
+ roots: make([]constraintsSpec, 1),
+ intermediates: [][]constraintsSpec{
+ {
+ {},
+ },
+ },
+ leaf: leafSpec{
+ sans: []string{"dns:this is invalid", "email:this @ is invalid"},
+ },
+ },
+
+ // #78: an invalid DNS SAN will be detected if any name constraint checking
+ // is triggered.
+ {
+ roots: []constraintsSpec{
+ {
+ bad: []string{"uri:"},
+ },
+ },
+ intermediates: [][]constraintsSpec{
+ {
+ {},
+ },
+ },
+ leaf: leafSpec{
+ sans: []string{"dns:this is invalid"},
+ },
+ expectedError: "cannot parse dnsName",
+ },
+
+ // #79: an invalid email SAN will be detected if any name constraint
+ // checking is triggered.
+ {
+ roots: []constraintsSpec{
+ {
+ bad: []string{"uri:"},
+ },
+ },
+ intermediates: [][]constraintsSpec{
+ {
+ {},
+ },
+ },
+ leaf: leafSpec{
+ sans: []string{"email:this @ is invalid"},
+ },
+ expectedError: "cannot parse rfc822Name",
+ },
+
+ // #80: if several EKUs are requested, satisfying any of them is sufficient.
{
roots: make([]constraintsSpec, 1),
intermediates: [][]constraintsSpec{
@@ -1471,7 +1527,7 @@ var nameConstraintsTests = []nameConstraintsTest{
requestedEKUs: []ExtKeyUsage{ExtKeyUsageClientAuth, ExtKeyUsageEmailProtection},
},
- // #78: EKUs that are not asserted in VerifyOpts are not required to be
+ // #81: EKUs that are not asserted in VerifyOpts are not required to be
// nested.
{
roots: make([]constraintsSpec, 1),
@@ -1490,7 +1546,7 @@ var nameConstraintsTests = []nameConstraintsTest{
},
},
- // #79: a certificate without SANs and CN is accepted in a constrained chain.
+ // #82: a certificate without SANs and CN is accepted in a constrained chain.
{
roots: []constraintsSpec{
{
@@ -1507,7 +1563,7 @@ var nameConstraintsTests = []nameConstraintsTest{
},
},
- // #80: a certificate without SANs and with a CN that does not parse as a
+ // #83: a certificate without SANs and with a CN that does not parse as a
// hostname is accepted in a constrained chain.
{
roots: []constraintsSpec{
@@ -1526,7 +1582,7 @@ var nameConstraintsTests = []nameConstraintsTest{
},
},
- // #81: a certificate with SANs and CN is accepted in a constrained chain.
+ // #84: a certificate with SANs and CN is accepted in a constrained chain.
{
roots: []constraintsSpec{
{
diff --git a/src/crypto/x509/parser.go b/src/crypto/x509/parser.go
index 9a3bcd6..f8beff7 100644
--- a/src/crypto/x509/parser.go
+++ b/src/crypto/x509/parser.go
@@ -378,14 +378,10 @@ func parseSANExtension(der cryptobyte.String) (dnsNames, emailAddresses []string
if err := isIA5String(email); err != nil {
return errors.New("x509: SAN rfc822Name is malformed")
}
- parsed, ok := parseRFC2821Mailbox(email)
- if !ok || (ok && !domainNameValid(parsed.domain, false)) {
- return errors.New("x509: SAN rfc822Name is malformed")
- }
emailAddresses = append(emailAddresses, email)
case nameTypeDNS:
name := string(data)
- if err := isIA5String(name); err != nil || (err == nil && !domainNameValid(name, false)) {
+ if err := isIA5String(name); err != nil {
return errors.New("x509: SAN dNSName is malformed")
}
dnsNames = append(dnsNames, string(name))
@@ -395,9 +391,12 @@ func parseSANExtension(der cryptobyte.String) (dnsNames, emailAddresses []string
return errors.New("x509: SAN uniformResourceIdentifier is malformed")
}
uri, err := url.Parse(uriStr)
- if err != nil || (err == nil && uri.Host != "" && !domainNameValid(uri.Host, false)) {
+ if err != nil {
return fmt.Errorf("x509: cannot parse URI %q: %s", uriStr, err)
}
+ if len(uri.Host) > 0 && !domainNameValid(uri.Host, false) {
+ return fmt.Errorf("x509: cannot parse URI %q: invalid domain", uriStr)
+ }
uris = append(uris, uri)
case nameTypeIP:
switch len(data) {
@@ -1176,36 +1175,58 @@ func ParseRevocationList(der []byte) (*RevocationList, error) {
return rl, nil
}
-// domainNameValid does minimal domain name validity checking. In particular it
-// enforces the following properties:
-// - names cannot have the trailing period
-// - names can only have a leading period if constraint is true
-// - names must be <= 253 characters
-// - names cannot have empty labels
-// - names cannot labels that are longer than 63 characters
-//
-// Note that this does not enforce the LDH requirements for domain names.
+// domainNameValid is an alloc-less version of the checks that
+// domainToReverseLabels does.
func domainNameValid(s string, constraint bool) bool {
- if len(s) == 0 && constraint {
+ // TODO(#75835): This function omits a number of checks which we
+ // really should be doing to enforce that domain names are valid names per
+ // RFC 1034. We previously enabled these checks, but this broke a
+ // significant number of certificates we previously considered valid, and we
+ // happily create via CreateCertificate (et al). We should enable these
+ // checks, but will need to gate them behind a GODEBUG.
+ //
+ // I have left the checks we previously enabled, noted with "TODO(#75835)" so
+ // that we can easily re-enable them once we unbreak everyone.
+
+ // TODO(#75835): this should only be true for constraints.
+ if len(s) == 0 {
return true
}
- if len(s) == 0 || (!constraint && s[0] == '.') || s[len(s)-1] == '.' || len(s) > 253 {
+
+ // Do not allow trailing period (FQDN format is not allowed in SANs or
+ // constraints).
+ if s[len(s)-1] == '.' {
return false
}
+
+ // TODO(#75835): domains must have at least one label, cannot have
+ // a leading empty label, and cannot be longer than 253 characters.
+ // if len(s) == 0 || (!constraint && s[0] == '.') || len(s) > 253 {
+ // return false
+ // }
+
lastDot := -1
if constraint && s[0] == '.' {
s = s[1:]
}
for i := 0; i <= len(s); i++ {
+ if i < len(s) && (s[i] < 33 || s[i] > 126) {
+ // Invalid character.
+ return false
+ }
if i == len(s) || s[i] == '.' {
labelLen := i
if lastDot >= 0 {
labelLen -= lastDot + 1
}
- if labelLen == 0 || labelLen > 63 {
+ if labelLen == 0 {
return false
}
+ // TODO(#75835): labels cannot be longer than 63 characters.
+ // if labelLen > 63 {
+ // return false
+ // }
lastDot = i
}
}
diff --git a/src/crypto/x509/parser_test.go b/src/crypto/x509/parser_test.go
index a6cdfb8..35f872a 100644
--- a/src/crypto/x509/parser_test.go
+++ b/src/crypto/x509/parser_test.go
@@ -5,6 +5,9 @@
package x509
import (
+ "crypto/ecdsa"
+ "crypto/elliptic"
+ "crypto/rand"
"encoding/asn1"
"strings"
"testing"
@@ -110,7 +113,31 @@ func TestDomainNameValid(t *testing.T) {
constraint bool
valid bool
}{
- {"empty name, name", "", false, false},
+ // TODO(#75835): these tests are for stricter name validation, which we
+ // had to disable. Once we reenable these strict checks, behind a
+ // GODEBUG, we should add them back in.
+ // {"empty name, name", "", false, false},
+ // {"254 char label, name", strings.Repeat("a.a", 84) + "aaa", false, false},
+ // {"254 char label, constraint", strings.Repeat("a.a", 84) + "aaa", true, false},
+ // {"253 char label, name", strings.Repeat("a.a", 84) + "aa", false, false},
+ // {"253 char label, constraint", strings.Repeat("a.a", 84) + "aa", true, false},
+ // {"64 char single label, name", strings.Repeat("a", 64), false, false},
+ // {"64 char single label, constraint", strings.Repeat("a", 64), true, false},
+ // {"64 char label, name", "a." + strings.Repeat("a", 64), false, false},
+ // {"64 char label, constraint", "a." + strings.Repeat("a", 64), true, false},
+
+ // TODO(#75835): these are the inverse of the tests above, they should be removed
+ // once the strict checking is enabled.
+ {"254 char label, name", strings.Repeat("a.a", 84) + "aaa", false, true},
+ {"254 char label, constraint", strings.Repeat("a.a", 84) + "aaa", true, true},
+ {"253 char label, name", strings.Repeat("a.a", 84) + "aa", false, true},
+ {"253 char label, constraint", strings.Repeat("a.a", 84) + "aa", true, true},
+ {"64 char single label, name", strings.Repeat("a", 64), false, true},
+ {"64 char single label, constraint", strings.Repeat("a", 64), true, true},
+ {"64 char label, name", "a." + strings.Repeat("a", 64), false, true},
+ {"64 char label, constraint", "a." + strings.Repeat("a", 64), true, true},
+
+ // Check we properly enforce properties of domain names.
{"empty name, constraint", "", true, true},
{"empty label, name", "a..a", false, false},
{"empty label, constraint", "a..a", true, false},
@@ -124,23 +151,60 @@ func TestDomainNameValid(t *testing.T) {
{"trailing period, constraint", "a.", true, false},
{"bare label, name", "a", false, true},
{"bare label, constraint", "a", true, true},
- {"254 char label, name", strings.Repeat("a.a", 84) + "aaa", false, false},
- {"254 char label, constraint", strings.Repeat("a.a", 84) + "aaa", true, false},
- {"253 char label, name", strings.Repeat("a.a", 84) + "aa", false, false},
- {"253 char label, constraint", strings.Repeat("a.a", 84) + "aa", true, false},
- {"64 char single label, name", strings.Repeat("a", 64), false, false},
- {"64 char single label, constraint", strings.Repeat("a", 64), true, false},
{"63 char single label, name", strings.Repeat("a", 63), false, true},
{"63 char single label, constraint", strings.Repeat("a", 63), true, true},
- {"64 char label, name", "a." + strings.Repeat("a", 64), false, false},
- {"64 char label, constraint", "a." + strings.Repeat("a", 64), true, false},
{"63 char label, name", "a." + strings.Repeat("a", 63), false, true},
{"63 char label, constraint", "a." + strings.Repeat("a", 63), true, true},
} {
t.Run(tc.name, func(t *testing.T) {
- if tc.valid != domainNameValid(tc.dnsName, tc.constraint) {
+ valid := domainNameValid(tc.dnsName, tc.constraint)
+ if tc.valid != valid {
t.Errorf("domainNameValid(%q, %t) = %v; want %v", tc.dnsName, tc.constraint, !tc.valid, tc.valid)
}
+ // Also check that we enforce the same properties as domainToReverseLabels
+ trimmedName := tc.dnsName
+ if tc.constraint && len(trimmedName) > 1 && trimmedName[0] == '.' {
+ trimmedName = trimmedName[1:]
+ }
+ _, revValid := domainToReverseLabels(trimmedName)
+ if valid != revValid {
+ t.Errorf("domainNameValid(%q, %t) = %t != domainToReverseLabels(%q) = %t", tc.dnsName, tc.constraint, valid, trimmedName, revValid)
+ }
})
}
}
+
+func TestRoundtripWeirdSANs(t *testing.T) {
+ // TODO(#75835): check that certificates we create with CreateCertificate that have malformed SAN values
+ // can be parsed by ParseCertificate. We should eventually restrict this, but for now we have to maintain
+ // this property as people have been relying on it.
+ k, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
+ if err != nil {
+ t.Fatal(err)
+ }
+ badNames := []string{
+ "baredomain",
+ "baredomain.",
+ strings.Repeat("a", 255),
+ strings.Repeat("a", 65) + ".com",
+ }
+ tmpl := &Certificate{
+ EmailAddresses: badNames,
+ DNSNames: badNames,
+ }
+ b, err := CreateCertificate(rand.Reader, tmpl, tmpl, &k.PublicKey, k)
+ if err != nil {
+ t.Fatal(err)
+ }
+ _, err = ParseCertificate(b)
+ if err != nil {
+ t.Fatalf("Couldn't roundtrip certificate: %v", err)
+ }
+}
+
+func FuzzDomainNameValid(f *testing.F) {
+ f.Fuzz(func(t *testing.T, data string) {
+ domainNameValid(data, false)
+ domainNameValid(data, true)
+ })
+}
diff --git a/src/crypto/x509/verify.go b/src/crypto/x509/verify.go
index 14cd23f..e670786 100644
--- a/src/crypto/x509/verify.go
+++ b/src/crypto/x509/verify.go
@@ -393,7 +393,7 @@ func domainToReverseLabels(domain string) (reverseLabels []string, ok bool) {
return reverseLabels, true
}
-func matchEmailConstraint(mailbox rfc2821Mailbox, constraint string) (bool, error) {
+func matchEmailConstraint(mailbox rfc2821Mailbox, constraint string, reversedDomainsCache map[string][]string, reversedConstraintsCache map[string][]string) (bool, error) {
// If the constraint contains an @, then it specifies an exact mailbox
// name.
if strings.Contains(constraint, "@") {
@@ -406,10 +406,10 @@ func matchEmailConstraint(mailbox rfc2821Mailbox, constraint string) (bool, erro
// Otherwise the constraint is like a DNS constraint of the domain part
// of the mailbox.
- return matchDomainConstraint(mailbox.domain, constraint)
+ return matchDomainConstraint(mailbox.domain, constraint, reversedDomainsCache, reversedConstraintsCache)
}
-func matchURIConstraint(uri *url.URL, constraint string) (bool, error) {
+func matchURIConstraint(uri *url.URL, constraint string, reversedDomainsCache map[string][]string, reversedConstraintsCache map[string][]string) (bool, error) {
// From RFC 5280, Section 4.2.1.10:
// “a uniformResourceIdentifier that does not include an authority
// component with a host name specified as a fully qualified domain
@@ -438,7 +438,7 @@ func matchURIConstraint(uri *url.URL, constraint string) (bool, error) {
return false, fmt.Errorf("URI with IP (%q) cannot be matched against constraints", uri.String())
}
- return matchDomainConstraint(host, constraint)
+ return matchDomainConstraint(host, constraint, reversedDomainsCache, reversedConstraintsCache)
}
func matchIPConstraint(ip net.IP, constraint *net.IPNet) (bool, error) {
@@ -455,16 +455,21 @@ func matchIPConstraint(ip net.IP, constraint *net.IPNet) (bool, error) {
return true, nil
}
-func matchDomainConstraint(domain, constraint string) (bool, error) {
+func matchDomainConstraint(domain, constraint string, reversedDomainsCache map[string][]string, reversedConstraintsCache map[string][]string) (bool, error) {
// The meaning of zero length constraints is not specified, but this
// code follows NSS and accepts them as matching everything.
if len(constraint) == 0 {
return true, nil
}
- domainLabels, ok := domainToReverseLabels(domain)
- if !ok {
- return false, fmt.Errorf("x509: internal error: cannot parse domain %q", domain)
+ domainLabels, found := reversedDomainsCache[domain]
+ if !found {
+ var ok bool
+ domainLabels, ok = domainToReverseLabels(domain)
+ if !ok {
+ return false, fmt.Errorf("x509: internal error: cannot parse domain %q", domain)
+ }
+ reversedDomainsCache[domain] = domainLabels
}
// RFC 5280 says that a leading period in a domain name means that at
@@ -478,9 +483,14 @@ func matchDomainConstraint(domain, constraint string) (bool, error) {
constraint = constraint[1:]
}
- constraintLabels, ok := domainToReverseLabels(constraint)
- if !ok {
- return false, fmt.Errorf("x509: internal error: cannot parse domain %q", constraint)
+ constraintLabels, found := reversedConstraintsCache[constraint]
+ if !found {
+ var ok bool
+ constraintLabels, ok = domainToReverseLabels(constraint)
+ if !ok {
+ return false, fmt.Errorf("x509: internal error: cannot parse domain %q", constraint)
+ }
+ reversedConstraintsCache[constraint] = constraintLabels
}
if len(domainLabels) < len(constraintLabels) ||
@@ -601,6 +611,19 @@ func (c *Certificate) isValid(certType int, currentChain []*Certificate, opts *V
}
}
+ // Each time we do constraint checking, we need to check the constraints in
+ // the current certificate against all of the names that preceded it. We
+ // reverse these names using domainToReverseLabels, which is a relatively
+ // expensive operation. Since we check each name against each constraint,
+ // this requires us to do N*C calls to domainToReverseLabels (where N is the
+ // total number of names that preceed the certificate, and C is the total
+ // number of constraints in the certificate). By caching the results of
+ // calling domainToReverseLabels, we can reduce that to N+C calls at the
+ // cost of keeping all of the parsed names and constraints in memory until
+ // we return from isValid.
+ reversedDomainsCache := map[string][]string{}
+ reversedConstraintsCache := map[string][]string{}
+
if (certType == intermediateCertificate || certType == rootCertificate) &&
c.hasNameConstraints() {
toCheck := []*Certificate{}
@@ -621,20 +644,20 @@ func (c *Certificate) isValid(certType int, currentChain []*Certificate, opts *V
if err := c.checkNameConstraints(&comparisonCount, maxConstraintComparisons, "email address", name, mailbox,
func(parsedName, constraint any) (bool, error) {
- return matchEmailConstraint(parsedName.(rfc2821Mailbox), constraint.(string))
+ return matchEmailConstraint(parsedName.(rfc2821Mailbox), constraint.(string), reversedDomainsCache, reversedConstraintsCache)
}, c.PermittedEmailAddresses, c.ExcludedEmailAddresses); err != nil {
return err
}
case nameTypeDNS:
name := string(data)
- if _, ok := domainToReverseLabels(name); !ok {
+ if !domainNameValid(name, false) {
return fmt.Errorf("x509: cannot parse dnsName %q", name)
}
if err := c.checkNameConstraints(&comparisonCount, maxConstraintComparisons, "DNS name", name, name,
func(parsedName, constraint any) (bool, error) {
- return matchDomainConstraint(parsedName.(string), constraint.(string))
+ return matchDomainConstraint(parsedName.(string), constraint.(string), reversedDomainsCache, reversedConstraintsCache)
}, c.PermittedDNSDomains, c.ExcludedDNSDomains); err != nil {
return err
}
@@ -648,7 +671,7 @@ func (c *Certificate) isValid(certType int, currentChain []*Certificate, opts *V
if err := c.checkNameConstraints(&comparisonCount, maxConstraintComparisons, "URI", name, uri,
func(parsedName, constraint any) (bool, error) {
- return matchURIConstraint(parsedName.(*url.URL), constraint.(string))
+ return matchURIConstraint(parsedName.(*url.URL), constraint.(string), reversedDomainsCache, reversedConstraintsCache)
}, c.PermittedURIDomains, c.ExcludedURIDomains); err != nil {
return err
}
diff --git a/src/crypto/x509/verify_test.go b/src/crypto/x509/verify_test.go
index 4a7d8da..ba5c392 100644
--- a/src/crypto/x509/verify_test.go
+++ b/src/crypto/x509/verify_test.go
@@ -1549,7 +1549,7 @@ var nameConstraintTests = []struct {
func TestNameConstraints(t *testing.T) {
for i, test := range nameConstraintTests {
- result, err := matchDomainConstraint(test.domain, test.constraint)
+ result, err := matchDomainConstraint(test.domain, test.constraint, map[string][]string{}, map[string][]string{})
if err != nil && !test.expectError {
t.Errorf("unexpected error for test #%d: domain=%s, constraint=%s, err=%s", i, test.domain, test.constraint, err)
--
2.43.0
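The relaxed validation the patch above lands in `domainNameValid` (printable ASCII only, no trailing dot, no empty labels, with the 253/63-character limits commented out pending a GODEBUG) can be sketched as a standalone function. This is an illustrative re-implementation under those assumptions, not the exact code from the Go tree:

```go
package main

import "fmt"

// domainNameValidSketch mirrors the patched, relaxed checks: reject a
// trailing dot, non-printable bytes, and empty labels; allow a leading
// dot only when validating a constraint. Length limits stay disabled.
func domainNameValidSketch(s string, constraint bool) bool {
	if len(s) == 0 {
		return true // permissive for now, per TODO(#75835) in the patch
	}
	if s[len(s)-1] == '.' {
		return false // FQDN form is not allowed in SANs or constraints
	}
	if constraint && s[0] == '.' {
		s = s[1:] // a leading dot is legal only in constraints
	}
	lastDot := -1
	for i := 0; i <= len(s); i++ {
		if i < len(s) && (s[i] < 33 || s[i] > 126) {
			return false // non-printable or non-ASCII byte
		}
		if i == len(s) || s[i] == '.' {
			if i-lastDot-1 == 0 {
				return false // empty label, e.g. "a..b"
			}
			lastDot = i
		}
	}
	return true
}

func main() {
	fmt.Println(domainNameValidSketch("a..a", false))         // empty label
	fmt.Println(domainNameValidSketch("a.", false))           // trailing dot
	fmt.Println(domainNameValidSketch(".example.com", true))  // ok as constraint
}
```

The single pass with `lastDot` is what makes the function alloc-less compared to `domainToReverseLabels`, which builds a label slice.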


@@ -0,0 +1,226 @@
From 04db77a423cac75bb82cc9a6859991ae9c016344 Mon Sep 17 00:00:00 2001
From: Roland Shoemaker <bracewell@google.com>
Date: Mon, 24 Nov 2025 08:46:08 -0800
Subject: [PATCH] [release-branch.go1.24] crypto/x509: excluded subdomain
constraints preclude wildcard SANs
When evaluating name constraints in a certificate chain, the presence of
an excluded subdomain constraint (e.g., excluding "test.example.com")
should preclude the use of a wildcard SAN (e.g., "*.example.com").
Fixes #76442
Fixes #76463
Fixes CVE-2025-61727
Change-Id: I42a0da010cb36d2ec9d1239ae3f61cf25eb78bba
Reviewed-on: https://go-review.googlesource.com/c/go/+/724401
Reviewed-by: Nicholas Husin <husin@google.com>
Reviewed-by: Daniel McCarney <daniel@binaryparadox.net>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Nicholas Husin <nsh@golang.org>
Reviewed-by: Neal Patel <nealpatel@google.com>
Upstream-Status: Backport [https://github.com/golang/go/commit/04db77a423cac75bb82cc9a6859991ae9c016344]
CVE: CVE-2025-61727
Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
---
src/crypto/x509/name_constraints_test.go | 34 ++++++++++++++++++++
src/crypto/x509/verify.go | 40 +++++++++++++++---------
src/crypto/x509/verify_test.go | 2 +-
3 files changed, 60 insertions(+), 16 deletions(-)
diff --git a/src/crypto/x509/name_constraints_test.go b/src/crypto/x509/name_constraints_test.go
index a5851845164d10..bc91b28401fce5 100644
--- a/src/crypto/x509/name_constraints_test.go
+++ b/src/crypto/x509/name_constraints_test.go
@@ -1624,6 +1624,40 @@ var nameConstraintsTests = []nameConstraintsTest{
},
expectedError: "URI with IP",
},
+ // #87: subdomain excluded constraints preclude wildcard names
+ {
+ roots: []constraintsSpec{
+ {
+ bad: []string{"dns:foo.example.com"},
+ },
+ },
+ intermediates: [][]constraintsSpec{
+ {
+ {},
+ },
+ },
+ leaf: leafSpec{
+ sans: []string{"dns:*.example.com"},
+ },
+ expectedError: "\"*.example.com\" is excluded by constraint \"foo.example.com\"",
+ },
+ // #88: wildcard names are not matched by subdomain permitted constraints
+ {
+ roots: []constraintsSpec{
+ {
+ ok: []string{"dns:foo.example.com"},
+ },
+ },
+ intermediates: [][]constraintsSpec{
+ {
+ {},
+ },
+ },
+ leaf: leafSpec{
+ sans: []string{"dns:*.example.com"},
+ },
+ expectedError: "\"*.example.com\" is not permitted",
+ },
}
func makeConstraintsCACert(constraints constraintsSpec, name string, key *ecdsa.PrivateKey, parent *Certificate, parentKey *ecdsa.PrivateKey) (*Certificate, error) {
diff --git a/src/crypto/x509/verify.go b/src/crypto/x509/verify.go
index bf7e7ec058db2b..9175fa4dc147a2 100644
--- a/src/crypto/x509/verify.go
+++ b/src/crypto/x509/verify.go
@@ -429,7 +429,7 @@ func domainToReverseLabels(domain string) (reverseLabels []string, ok bool) {
return reverseLabels, true
}
-func matchEmailConstraint(mailbox rfc2821Mailbox, constraint string, reversedDomainsCache map[string][]string, reversedConstraintsCache map[string][]string) (bool, error) {
+func matchEmailConstraint(mailbox rfc2821Mailbox, constraint string, excluded bool, reversedDomainsCache map[string][]string, reversedConstraintsCache map[string][]string) (bool, error) {
// If the constraint contains an @, then it specifies an exact mailbox
// name.
if strings.Contains(constraint, "@") {
@@ -442,10 +442,10 @@ func matchEmailConstraint(mailbox rfc2821Mailbox, constraint string, reversedDom
// Otherwise the constraint is like a DNS constraint of the domain part
// of the mailbox.
- return matchDomainConstraint(mailbox.domain, constraint, reversedDomainsCache, reversedConstraintsCache)
+ return matchDomainConstraint(mailbox.domain, constraint, excluded, reversedDomainsCache, reversedConstraintsCache)
}
-func matchURIConstraint(uri *url.URL, constraint string, reversedDomainsCache map[string][]string, reversedConstraintsCache map[string][]string) (bool, error) {
+func matchURIConstraint(uri *url.URL, constraint string, excluded bool, reversedDomainsCache map[string][]string, reversedConstraintsCache map[string][]string) (bool, error) {
// From RFC 5280, Section 4.2.1.10:
// “a uniformResourceIdentifier that does not include an authority
// component with a host name specified as a fully qualified domain
@@ -474,7 +474,7 @@ func matchURIConstraint(uri *url.URL, constraint string, reversedDomainsCache ma
return false, fmt.Errorf("URI with IP (%q) cannot be matched against constraints", uri.String())
}
- return matchDomainConstraint(host, constraint, reversedDomainsCache, reversedConstraintsCache)
+ return matchDomainConstraint(host, constraint, excluded, reversedDomainsCache, reversedConstraintsCache)
}
func matchIPConstraint(ip net.IP, constraint *net.IPNet) (bool, error) {
@@ -491,7 +491,7 @@ func matchIPConstraint(ip net.IP, constraint *net.IPNet) (bool, error) {
return true, nil
}
-func matchDomainConstraint(domain, constraint string, reversedDomainsCache map[string][]string, reversedConstraintsCache map[string][]string) (bool, error) {
+func matchDomainConstraint(domain, constraint string, excluded bool, reversedDomainsCache map[string][]string, reversedConstraintsCache map[string][]string) (bool, error) {
// The meaning of zero length constraints is not specified, but this
// code follows NSS and accepts them as matching everything.
if len(constraint) == 0 {
@@ -508,6 +508,11 @@ func matchDomainConstraint(domain, constraint string, reversedDomainsCache map[s
reversedDomainsCache[domain] = domainLabels
}
+ wildcardDomain := false
+ if len(domain) > 0 && domain[0] == '*' {
+ wildcardDomain = true
+ }
+
// RFC 5280 says that a leading period in a domain name means that at
// least one label must be prepended, but only for URI and email
// constraints, not DNS constraints. The code also supports that
@@ -534,6 +539,11 @@ func matchDomainConstraint(domain, constraint string, reversedDomainsCache map[s
return false, nil
}
+ if excluded && wildcardDomain && len(domainLabels) > 1 && len(constraintLabels) > 0 {
+ domainLabels = domainLabels[:len(domainLabels)-1]
+ constraintLabels = constraintLabels[:len(constraintLabels)-1]
+ }
+
for i, constraintLabel := range constraintLabels {
if !strings.EqualFold(constraintLabel, domainLabels[i]) {
return false, nil
@@ -553,7 +563,7 @@ func (c *Certificate) checkNameConstraints(count *int,
nameType string,
name string,
parsedName any,
- match func(parsedName, constraint any) (match bool, err error),
+ match func(parsedName, constraint any, excluded bool) (match bool, err error),
permitted, excluded any) error {
excludedValue := reflect.ValueOf(excluded)
@@ -565,7 +575,7 @@ func (c *Certificate) checkNameConstraints(count *int,
for i := 0; i < excludedValue.Len(); i++ {
constraint := excludedValue.Index(i).Interface()
- match, err := match(parsedName, constraint)
+ match, err := match(parsedName, constraint, true)
if err != nil {
return CertificateInvalidError{c, CANotAuthorizedForThisName, err.Error()}
}
@@ -587,7 +597,7 @@ func (c *Certificate) checkNameConstraints(count *int,
constraint := permittedValue.Index(i).Interface()
var err error
- if ok, err = match(parsedName, constraint); err != nil {
+ if ok, err = match(parsedName, constraint, false); err != nil {
return CertificateInvalidError{c, CANotAuthorizedForThisName, err.Error()}
}
@@ -679,8 +689,8 @@ func (c *Certificate) isValid(certType int, currentChain []*Certificate, opts *V
}
if err := c.checkNameConstraints(&comparisonCount, maxConstraintComparisons, "email address", name, mailbox,
- func(parsedName, constraint any) (bool, error) {
- return matchEmailConstraint(parsedName.(rfc2821Mailbox), constraint.(string), reversedDomainsCache, reversedConstraintsCache)
+ func(parsedName, constraint any, excluded bool) (bool, error) {
+ return matchEmailConstraint(parsedName.(rfc2821Mailbox), constraint.(string), excluded, reversedDomainsCache, reversedConstraintsCache)
}, c.PermittedEmailAddresses, c.ExcludedEmailAddresses); err != nil {
return err
}
@@ -692,8 +702,8 @@ func (c *Certificate) isValid(certType int, currentChain []*Certificate, opts *V
}
if err := c.checkNameConstraints(&comparisonCount, maxConstraintComparisons, "DNS name", name, name,
- func(parsedName, constraint any) (bool, error) {
- return matchDomainConstraint(parsedName.(string), constraint.(string), reversedDomainsCache, reversedConstraintsCache)
+ func(parsedName, constraint any, excluded bool) (bool, error) {
+ return matchDomainConstraint(parsedName.(string), constraint.(string), excluded, reversedDomainsCache, reversedConstraintsCache)
}, c.PermittedDNSDomains, c.ExcludedDNSDomains); err != nil {
return err
}
@@ -706,8 +716,8 @@ func (c *Certificate) isValid(certType int, currentChain []*Certificate, opts *V
}
if err := c.checkNameConstraints(&comparisonCount, maxConstraintComparisons, "URI", name, uri,
- func(parsedName, constraint any) (bool, error) {
- return matchURIConstraint(parsedName.(*url.URL), constraint.(string), reversedDomainsCache, reversedConstraintsCache)
+ func(parsedName, constraint any, excluded bool) (bool, error) {
+ return matchURIConstraint(parsedName.(*url.URL), constraint.(string), excluded, reversedDomainsCache, reversedConstraintsCache)
}, c.PermittedURIDomains, c.ExcludedURIDomains); err != nil {
return err
}
@@ -719,7 +729,7 @@ func (c *Certificate) isValid(certType int, currentChain []*Certificate, opts *V
}
if err := c.checkNameConstraints(&comparisonCount, maxConstraintComparisons, "IP address", ip.String(), ip,
- func(parsedName, constraint any) (bool, error) {
+ func(parsedName, constraint any, _ bool) (bool, error) {
return matchIPConstraint(parsedName.(net.IP), constraint.(*net.IPNet))
}, c.PermittedIPRanges, c.ExcludedIPRanges); err != nil {
return err
diff --git a/src/crypto/x509/verify_test.go b/src/crypto/x509/verify_test.go
index 60a4cea9146adf..6a394e46e94f5a 100644
--- a/src/crypto/x509/verify_test.go
+++ b/src/crypto/x509/verify_test.go
@@ -1352,7 +1352,7 @@ var nameConstraintTests = []struct {
func TestNameConstraints(t *testing.T) {
for i, test := range nameConstraintTests {
- result, err := matchDomainConstraint(test.domain, test.constraint, map[string][]string{}, map[string][]string{})
+ result, err := matchDomainConstraint(test.domain, test.constraint, false, map[string][]string{}, map[string][]string{})
if err != nil && !test.expectError {
t.Errorf("unexpected error for test #%d: domain=%s, constraint=%s, err=%s", i, test.domain, test.constraint, err)


@@ -0,0 +1,174 @@
From 3a842bd5c6aa8eefa13c0174de3ab361e50bd672 Mon Sep 17 00:00:00 2001
From: "Nicholas S. Husin" <nsh@golang.org>
Date: Mon, 24 Nov 2025 14:56:23 -0500
Subject: [PATCH] [release-branch.go1.24] crypto/x509: prevent
HostnameError.Error() from consuming excessive resource
Constructing HostnameError.Error() takes O(N^2) runtime due to using a
string concatenation in a loop. Additionally, there is no limit on how
many names are included in the error message. As a result, a malicious
attacker could craft a certificate with an infinite amount of names to
unfairly consume resource.
To remediate this, we will now use strings.Builder to construct the
error message, preventing O(N^2) runtime. When a certificate has 100 or
more names, we will also not print each name individually.
Thanks to Philippe Antoine (Catena cyber) for reporting this issue.
Updates #76445
Fixes #76460
Fixes CVE-2025-61729
Change-Id: I6343776ec3289577abc76dad71766c491c1a7c81
Reviewed-on: https://go-internal-review.googlesource.com/c/go/+/3000
Reviewed-by: Neal Patel <nealpatel@google.com>
Reviewed-by: Damien Neil <dneil@google.com>
Reviewed-on: https://go-internal-review.googlesource.com/c/go/+/3220
Reviewed-by: Roland Shoemaker <bracewell@google.com>
Reviewed-on: https://go-review.googlesource.com/c/go/+/725820
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
TryBot-Bypass: Dmitri Shuralyov <dmitshur@golang.org>
Auto-Submit: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Mark Freeman <markfreeman@google.com>
Upstream-Status: Backport [https://github.com/golang/go/commit/3a842bd5c6aa8eefa13c0174de3ab361e50bd672]
CVE: CVE-2025-61729
Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
---
src/crypto/x509/verify.go | 21 ++++++++++-----
src/crypto/x509/verify_test.go | 47 ++++++++++++++++++++++++++++++++++
2 files changed, 61 insertions(+), 7 deletions(-)
diff --git a/src/crypto/x509/verify.go b/src/crypto/x509/verify.go
index 3fdf65f..0ae8aef 100644
--- a/src/crypto/x509/verify.go
+++ b/src/crypto/x509/verify.go
@@ -100,31 +100,38 @@ type HostnameError struct {
func (h HostnameError) Error() string {
c := h.Certificate
+ maxNamesIncluded := 100
if !c.hasSANExtension() && matchHostnames(c.Subject.CommonName, h.Host) {
return "x509: certificate relies on legacy Common Name field, use SANs instead"
}
- var valid string
+ var valid strings.Builder
if ip := net.ParseIP(h.Host); ip != nil {
// Trying to validate an IP
if len(c.IPAddresses) == 0 {
return "x509: cannot validate certificate for " + h.Host + " because it doesn't contain any IP SANs"
}
+ if len(c.IPAddresses) >= maxNamesIncluded {
+ return fmt.Sprintf("x509: certificate is valid for %d IP SANs, but none matched %s", len(c.IPAddresses), h.Host)
+ }
for _, san := range c.IPAddresses {
- if len(valid) > 0 {
- valid += ", "
+ if valid.Len() > 0 {
+ valid.WriteString(", ")
}
- valid += san.String()
+ valid.WriteString(san.String())
}
} else {
- valid = strings.Join(c.DNSNames, ", ")
+ if len(c.DNSNames) >= maxNamesIncluded {
+ return fmt.Sprintf("x509: certificate is valid for %d names, but none matched %s", len(c.DNSNames), h.Host)
+ }
+ valid.WriteString(strings.Join(c.DNSNames, ", "))
}
- if len(valid) == 0 {
+ if valid.Len() == 0 {
return "x509: certificate is not valid for any names, but wanted to match " + h.Host
}
- return "x509: certificate is valid for " + valid + ", not " + h.Host
+ return "x509: certificate is valid for " + valid.String() + ", not " + h.Host
}
// UnknownAuthorityError results when the certificate issuer is unknown
diff --git a/src/crypto/x509/verify_test.go b/src/crypto/x509/verify_test.go
index df9b0a2..223c250 100644
--- a/src/crypto/x509/verify_test.go
+++ b/src/crypto/x509/verify_test.go
@@ -10,13 +10,16 @@ import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
+ "crypto/rsa"
"crypto/x509/pkix"
"encoding/asn1"
"encoding/pem"
"errors"
"fmt"
"internal/testenv"
+ "log"
"math/big"
+ "net"
"os/exec"
"reflect"
"runtime"
@@ -89,6 +92,26 @@ var verifyTests = []verifyTest{
errorCallback: expectHostnameError("certificate is valid for"),
},
+ {
+ name: "TooManyDNS",
+ leaf: generatePEMCertWithRepeatSAN(1677615892, 200, "fake.dns"),
+ roots: []string{generatePEMCertWithRepeatSAN(1677615892, 200, "fake.dns")},
+ currentTime: 1677615892,
+ dnsName: "www.example.com",
+ systemSkip: true, // does not chain to a system root
+
+ errorCallback: expectHostnameError("certificate is valid for 200 names, but none matched"),
+ },
+ {
+ name: "TooManyIPs",
+ leaf: generatePEMCertWithRepeatSAN(1677615892, 150, "4.3.2.1"),
+ roots: []string{generatePEMCertWithRepeatSAN(1677615892, 150, "4.3.2.1")},
+ currentTime: 1677615892,
+ dnsName: "1.2.3.4",
+ systemSkip: true, // does not chain to a system root
+
+ errorCallback: expectHostnameError("certificate is valid for 150 IP SANs, but none matched"),
+ },
{
name: "IPMissing",
leaf: googleLeaf,
@@ -595,6 +618,30 @@ func nameToKey(name *pkix.Name) string {
return strings.Join(name.Country, ",") + "/" + strings.Join(name.Organization, ",") + "/" + strings.Join(name.OrganizationalUnit, ",") + "/" + name.CommonName
}
+func generatePEMCertWithRepeatSAN(currentTime int64, count int, san string) string {
+ cert := Certificate{
+ NotBefore: time.Unix(currentTime, 0),
+ NotAfter: time.Unix(currentTime, 0),
+ }
+ if ip := net.ParseIP(san); ip != nil {
+ cert.IPAddresses = slices.Repeat([]net.IP{ip}, count)
+ } else {
+ cert.DNSNames = slices.Repeat([]string{san}, count)
+ }
+ privKey, err := rsa.GenerateKey(rand.Reader, 4096)
+ if err != nil {
+ log.Fatal(err)
+ }
+ certBytes, err := CreateCertificate(rand.Reader, &cert, &cert, &privKey.PublicKey, privKey)
+ if err != nil {
+ log.Fatal(err)
+ }
+ return string(pem.EncodeToMemory(&pem.Block{
+ Type: "CERTIFICATE",
+ Bytes: certBytes,
+ }))
+}
+
const gtsIntermediate = `-----BEGIN CERTIFICATE-----
MIIFljCCA36gAwIBAgINAgO8U1lrNMcY9QFQZjANBgkqhkiG9w0BAQsFADBHMQsw
CQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEU
--
2.43.0
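The CVE-2025-61729 fix combines two mitigations: `strings.Builder` turns the O(N^2) concatenation loop into a linear one, and a cap of 100 names short-circuits to a count-only message. A hedged sketch of the same pattern outside the stdlib (hypothetical `joinNames` helper, message text modeled on the patch):

```go
package main

import (
	"fmt"
	"strings"
)

// joinNames follows the patched HostnameError.Error: above the cap,
// report only the count; below it, join names with a strings.Builder
// so total work stays linear in the output size.
func joinNames(names []string, host string) string {
	const maxNamesIncluded = 100 // same cap as the patch
	if len(names) >= maxNamesIncluded {
		return fmt.Sprintf("certificate is valid for %d names, but none matched %s", len(names), host)
	}
	var b strings.Builder
	for _, n := range names {
		if b.Len() > 0 {
			b.WriteString(", ")
		}
		b.WriteString(n)
	}
	return "certificate is valid for " + b.String() + ", not " + host
}

func main() {
	fmt.Println(joinNames([]string{"a.example", "b.example"}, "c.example"))
	fmt.Println(joinNames(make([]string, 200), "c.example"))
}
```

The cap matters as much as the builder: even a linear join of attacker-controlled SANs is unbounded work, so past 100 names the error message carries only the count.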


@@ -0,0 +1,80 @@
From 3bf7db860ef730e828b68264e88210190120cacf Mon Sep 17 00:00:00 2001
From: Illia Volochii <illia.volochii@gmail.com>
Date: Fri, 5 Dec 2025 16:41:33 +0200
Subject: [PATCH] Merge commit from fork
* Add a hard-coded limit for the decompression chain
* Reuse new list
CVE: CVE-2025-66418
Upstream-Status: Backport
[https://github.com/urllib3/urllib3/commit/24d7b67eac89f94e11003424bcf0d8f7b72222a8]
Signed-off-by: Jiaying Song <jiaying.song.cn@windriver.com>
---
changelog/GHSA-gm62-xv2j-4w53.security.rst | 4 ++++
src/urllib3/response.py | 12 +++++++++++-
test/test_response.py | 10 ++++++++++
3 files changed, 25 insertions(+), 1 deletion(-)
create mode 100644 changelog/GHSA-gm62-xv2j-4w53.security.rst
diff --git a/changelog/GHSA-gm62-xv2j-4w53.security.rst b/changelog/GHSA-gm62-xv2j-4w53.security.rst
new file mode 100644
index 00000000..6646eaa3
--- /dev/null
+++ b/changelog/GHSA-gm62-xv2j-4w53.security.rst
@@ -0,0 +1,4 @@
+Fixed a security issue where an attacker could compose an HTTP response with
+virtually unlimited links in the ``Content-Encoding`` header, potentially
+leading to a denial of service (DoS) attack by exhausting system resources
+during decoding. The number of allowed chained encodings is now limited to 5.
diff --git a/src/urllib3/response.py b/src/urllib3/response.py
index a0273d65..b8e8565c 100644
--- a/src/urllib3/response.py
+++ b/src/urllib3/response.py
@@ -194,8 +194,18 @@ class MultiDecoder(ContentDecoder):
they were applied.
"""
+ # Maximum allowed number of chained HTTP encodings in the
+ # Content-Encoding header.
+ max_decode_links = 5
+
def __init__(self, modes: str) -> None:
- self._decoders = [_get_decoder(m.strip()) for m in modes.split(",")]
+ encodings = [m.strip() for m in modes.split(",")]
+ if len(encodings) > self.max_decode_links:
+ raise DecodeError(
+ "Too many content encodings in the chain: "
+ f"{len(encodings)} > {self.max_decode_links}"
+ )
+ self._decoders = [_get_decoder(e) for e in encodings]
def flush(self) -> bytes:
return self._decoders[0].flush()
diff --git a/test/test_response.py b/test/test_response.py
index c0062771..0e8abd93 100644
--- a/test/test_response.py
+++ b/test/test_response.py
@@ -581,6 +581,16 @@ class TestResponse:
assert r.read(9 * 37) == b"foobarbaz" * 37
assert r.read() == b""
+ def test_read_multi_decoding_too_many_links(self) -> None:
+ fp = BytesIO(b"foo")
+ with pytest.raises(
+ DecodeError, match="Too many content encodings in the chain: 6 > 5"
+ ):
+ HTTPResponse(
+ fp,
+ headers={"content-encoding": "gzip, deflate, br, zstd, gzip, deflate"},
+ )
+
def test_body_blob(self) -> None:
resp = HTTPResponse(b"foo")
assert resp.data == b"foo"
--
2.34.1
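The patch above caps the number of chained `Content-Encoding` values at 5 before any decoder is constructed. A minimal standalone sketch of the same check (the names below are illustrative, not urllib3's public API):

```python
MAX_DECODE_LINKS = 5  # mirrors urllib3's max_decode_links limit

def parse_content_encodings(header: str) -> list[str]:
    """Split a Content-Encoding header, rejecting overly long chains."""
    encodings = [part.strip() for part in header.split(",")]
    if len(encodings) > MAX_DECODE_LINKS:
        # Reject before building any decoder, so an attacker cannot
        # exhaust resources with an arbitrarily long encoding chain.
        raise ValueError(
            "Too many content encodings in the chain: "
            f"{len(encodings)} > {MAX_DECODE_LINKS}"
        )
    return encodings
```

The key point is that the limit is enforced in the constructor path, so a hostile header fails fast instead of allocating one decoder per listed encoding.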


@@ -0,0 +1,585 @@
From f25c0d11e1b640e3c7e0addb66a1ff50730be508 Mon Sep 17 00:00:00 2001
From: Illia Volochii <illia.volochii@gmail.com>
Date: Fri, 5 Dec 2025 16:40:41 +0200
Subject: [PATCH] Merge commit from fork
* Prevent decompression bomb for zstd in Python 3.14
* Add experimental `decompress_iter` for Brotli
* Update changes for Brotli
* Add `GzipDecoder.decompress_iter`
* Test https://github.com/python-hyper/brotlicffi/pull/207
* Pin Brotli
* Add `decompress_iter` to all decoders and make tests pass
* Pin brotlicffi to an official release
* Revert changes to response.py
* Add `max_length` parameter to all `decompress` methods
* Fix the `test_brotlipy` session
* Unset `_data` on gzip error
* Add a test for memory usage
* Test more methods
* Fix the test for `stream`
* Cover more lines with tests
* Add more coverage
* Make `read1` a bit more efficient
* Fix PyPy tests for Brotli
* Revert an unnecessarily moved check
* Add some comments
* Leave just one `self._obj.decompress` call in `GzipDecoder`
* Refactor test params
* Test reads with all data already in the decompressor
* Prevent needless copying of data decoded with `max_length`
* Rename the changed test
* Note that responses of unknown length should be streamed too
* Add a changelog entry
* Avoid returning a memory view from `BytesQueueBuffer`
* Add one more note to the changelog entry
CVE: CVE-2025-66471
Upstream-Status: Backport
[https://github.com/urllib3/urllib3/commit/c19571de34c47de3a766541b041637ba5f716ed7]
Signed-off-by: Jiaying Song <jiaying.song.cn@windriver.com>
---
docs/advanced-usage.rst | 3 +-
docs/user-guide.rst | 4 +-
pyproject.toml | 5 +-
src/urllib3/response.py | 278 ++++++++++++++++++++++++++++++++++------
4 files changed, 246 insertions(+), 44 deletions(-)
diff --git a/docs/advanced-usage.rst b/docs/advanced-usage.rst
index 36a51e67..a12c7143 100644
--- a/docs/advanced-usage.rst
+++ b/docs/advanced-usage.rst
@@ -66,7 +66,8 @@ When using ``preload_content=True`` (the default setting) the
response body will be read immediately into memory and the HTTP connection
will be released back into the pool without manual intervention.
-However, when dealing with large responses it's often better to stream the response
+However, when dealing with responses of large or unknown length,
+it's often better to stream the response
content using ``preload_content=False``. Setting ``preload_content`` to ``False`` means
that urllib3 will only read from the socket when data is requested.
diff --git a/docs/user-guide.rst b/docs/user-guide.rst
index 5c78c8af..1d9d0bbd 100644
--- a/docs/user-guide.rst
+++ b/docs/user-guide.rst
@@ -145,8 +145,8 @@ to a byte string representing the response content:
print(resp.data)
# b"\xaa\xa5H?\x95\xe9\x9b\x11"
-.. note:: For larger responses, it's sometimes better to :ref:`stream <stream>`
- the response.
+.. note:: For responses of large or unknown length, it's sometimes better to
+ :ref:`stream <stream>` the response.
Using io Wrappers with Response Content
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/pyproject.toml b/pyproject.toml
index 1fe82937..58a2c2db 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -40,8 +40,8 @@ dynamic = ["version"]
[project.optional-dependencies]
brotli = [
- "brotli>=1.0.9; platform_python_implementation == 'CPython'",
- "brotlicffi>=0.8.0; platform_python_implementation != 'CPython'"
+ "brotli>=1.2.0; platform_python_implementation == 'CPython'",
+ "brotlicffi>=1.2.0.0; platform_python_implementation != 'CPython'"
]
zstd = [
"zstandard>=0.18.0",
@@ -95,6 +95,7 @@ filterwarnings = [
'''default:ssl\.PROTOCOL_TLSv1_1 is deprecated:DeprecationWarning''',
'''default:ssl\.PROTOCOL_TLSv1_2 is deprecated:DeprecationWarning''',
'''default:ssl NPN is deprecated, use ALPN instead:DeprecationWarning''',
+ '''default:Brotli >= 1.2.0 is required to prevent decompression bombs\.:urllib3.exceptions.DependencyWarning''',
'''default:Async generator 'quart\.wrappers\.response\.DataBody\.__aiter__\.<locals>\._aiter' was garbage collected.*:ResourceWarning''', # https://github.com/pallets/quart/issues/301
'''default:unclosed file <_io\.BufferedWriter name='/dev/null'>:ResourceWarning''', # https://github.com/SeleniumHQ/selenium/issues/13328
]
diff --git a/src/urllib3/response.py b/src/urllib3/response.py
index b8e8565c..4304133e 100644
--- a/src/urllib3/response.py
+++ b/src/urllib3/response.py
@@ -49,6 +49,7 @@ from .connection import BaseSSLError, HTTPConnection, HTTPException
from .exceptions import (
BodyNotHttplibCompatible,
DecodeError,
+ DependencyWarning,
HTTPError,
IncompleteRead,
InvalidChunkLength,
@@ -68,7 +69,11 @@ log = logging.getLogger(__name__)
class ContentDecoder:
- def decompress(self, data: bytes) -> bytes:
+ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
+ raise NotImplementedError()
+
+ @property
+ def has_unconsumed_tail(self) -> bool:
raise NotImplementedError()
def flush(self) -> bytes:
@@ -78,30 +83,57 @@ class ContentDecoder:
class DeflateDecoder(ContentDecoder):
def __init__(self) -> None:
self._first_try = True
- self._data = b""
+ self._first_try_data = b""
+ self._unfed_data = b""
self._obj = zlib.decompressobj()
- def decompress(self, data: bytes) -> bytes:
- if not data:
+ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
+ data = self._unfed_data + data
+ self._unfed_data = b""
+ if not data and not self._obj.unconsumed_tail:
return data
+ original_max_length = max_length
+ if original_max_length < 0:
+ max_length = 0
+ elif original_max_length == 0:
+ # We should not pass 0 to the zlib decompressor because 0 is
+ # the default value that will make zlib decompress without a
+ # length limit.
+ # Data should be stored for subsequent calls.
+ self._unfed_data = data
+ return b""
+ # Subsequent calls always reuse `self._obj`. zlib requires
+ # passing the unconsumed tail if decompression is to continue.
if not self._first_try:
- return self._obj.decompress(data)
+ return self._obj.decompress(
+ self._obj.unconsumed_tail + data, max_length=max_length
+ )
- self._data += data
+ # First call tries with RFC 1950 ZLIB format.
+ self._first_try_data += data
try:
- decompressed = self._obj.decompress(data)
+ decompressed = self._obj.decompress(data, max_length=max_length)
if decompressed:
self._first_try = False
- self._data = None # type: ignore[assignment]
+ self._first_try_data = b""
return decompressed
+ # On failure, it falls back to RFC 1951 DEFLATE format.
except zlib.error:
self._first_try = False
self._obj = zlib.decompressobj(-zlib.MAX_WBITS)
try:
- return self.decompress(self._data)
+ return self.decompress(
+ self._first_try_data, max_length=original_max_length
+ )
finally:
- self._data = None # type: ignore[assignment]
+ self._first_try_data = b""
+
+ @property
+ def has_unconsumed_tail(self) -> bool:
+ return bool(self._unfed_data) or (
+ bool(self._obj.unconsumed_tail) and not self._first_try
+ )
def flush(self) -> bytes:
return self._obj.flush()
@@ -117,27 +149,61 @@ class GzipDecoder(ContentDecoder):
def __init__(self) -> None:
self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
self._state = GzipDecoderState.FIRST_MEMBER
+ self._unconsumed_tail = b""
- def decompress(self, data: bytes) -> bytes:
+ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
ret = bytearray()
- if self._state == GzipDecoderState.SWALLOW_DATA or not data:
+ if self._state == GzipDecoderState.SWALLOW_DATA:
return bytes(ret)
+
+ if max_length == 0:
+ # We should not pass 0 to the zlib decompressor because 0 is
+ # the default value that will make zlib decompress without a
+ # length limit.
+ # Data should be stored for subsequent calls.
+ self._unconsumed_tail += data
+ return b""
+
+ # zlib requires passing the unconsumed tail to the subsequent
+ # call if decompression is to continue.
+ data = self._unconsumed_tail + data
+ if not data and self._obj.eof:
+ return bytes(ret)
+
while True:
try:
- ret += self._obj.decompress(data)
+ ret += self._obj.decompress(
+ data, max_length=max(max_length - len(ret), 0)
+ )
except zlib.error:
previous_state = self._state
# Ignore data after the first error
self._state = GzipDecoderState.SWALLOW_DATA
+ self._unconsumed_tail = b""
if previous_state == GzipDecoderState.OTHER_MEMBERS:
# Allow trailing garbage acceptable in other gzip clients
return bytes(ret)
raise
- data = self._obj.unused_data
+
+ self._unconsumed_tail = data = (
+ self._obj.unconsumed_tail or self._obj.unused_data
+ )
+ if max_length > 0 and len(ret) >= max_length:
+ break
+
if not data:
return bytes(ret)
- self._state = GzipDecoderState.OTHER_MEMBERS
- self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
+ # When the end of a gzip member is reached, a new decompressor
+ # must be created for unused (possibly future) data.
+ if self._obj.eof:
+ self._state = GzipDecoderState.OTHER_MEMBERS
+ self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS)
+
+ return bytes(ret)
+
+ @property
+ def has_unconsumed_tail(self) -> bool:
+ return bool(self._unconsumed_tail)
def flush(self) -> bytes:
return self._obj.flush()
@@ -152,9 +218,35 @@ if brotli is not None:
def __init__(self) -> None:
self._obj = brotli.Decompressor()
if hasattr(self._obj, "decompress"):
- setattr(self, "decompress", self._obj.decompress)
+ setattr(self, "_decompress", self._obj.decompress)
else:
- setattr(self, "decompress", self._obj.process)
+ setattr(self, "_decompress", self._obj.process)
+
+ # Requires Brotli >= 1.2.0 for `output_buffer_limit`.
+ def _decompress(self, data: bytes, output_buffer_limit: int = -1) -> bytes:
+ raise NotImplementedError()
+
+ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
+ try:
+ if max_length > 0:
+ return self._decompress(data, output_buffer_limit=max_length)
+ else:
+ return self._decompress(data)
+ except TypeError:
+ # Fallback for Brotli/brotlicffi/brotlipy versions without
+ # the `output_buffer_limit` parameter.
+ warnings.warn(
+ "Brotli >= 1.2.0 is required to prevent decompression bombs.",
+ DependencyWarning,
+ )
+ return self._decompress(data)
+
+ @property
+ def has_unconsumed_tail(self) -> bool:
+ try:
+ return not self._obj.can_accept_more_data()
+ except AttributeError:
+ return False
def flush(self) -> bytes:
if hasattr(self._obj, "flush"):
@@ -168,16 +260,46 @@ if HAS_ZSTD:
def __init__(self) -> None:
self._obj = zstd.ZstdDecompressor().decompressobj()
- def decompress(self, data: bytes) -> bytes:
- if not data:
+ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
+ if not data and not self.has_unconsumed_tail:
return b""
- data_parts = [self._obj.decompress(data)]
- while self._obj.eof and self._obj.unused_data:
+ if self._obj.eof:
+ data = self._obj.unused_data + data
+ self._obj = zstd.ZstdDecompressor()
+ part = self._obj.decompress(data, max_length=max_length)
+ length = len(part)
+ data_parts = [part]
+ # Every loop iteration is supposed to read data from a separate frame.
+ # The loop breaks when:
+ # - enough data is read;
+ # - no more unused data is available;
+ # - end of the last read frame has not been reached (i.e.,
+ # more data has to be fed).
+ while (
+ self._obj.eof
+ and self._obj.unused_data
+ and (max_length < 0 or length < max_length)
+ ):
unused_data = self._obj.unused_data
- self._obj = zstd.ZstdDecompressor().decompressobj()
- data_parts.append(self._obj.decompress(unused_data))
+ if not self._obj.needs_input:
+ self._obj = zstd.ZstdDecompressor()
+ part = self._obj.decompress(
+ unused_data,
+ max_length=(max_length - length) if max_length > 0 else -1,
+ )
+ if part_length := len(part):
+ data_parts.append(part)
+ length += part_length
+ elif self._obj.needs_input:
+ break
return b"".join(data_parts)
+ @property
+ def has_unconsumed_tail(self) -> bool:
+ return not (self._obj.needs_input or self._obj.eof) or bool(
+ self._obj.unused_data
+ )
+
def flush(self) -> bytes:
ret = self._obj.flush() # note: this is a no-op
if not self._obj.eof:
@@ -210,10 +332,35 @@ class MultiDecoder(ContentDecoder):
def flush(self) -> bytes:
return self._decoders[0].flush()
- def decompress(self, data: bytes) -> bytes:
- for d in reversed(self._decoders):
- data = d.decompress(data)
- return data
+ def decompress(self, data: bytes, max_length: int = -1) -> bytes:
+ if max_length <= 0:
+ for d in reversed(self._decoders):
+ data = d.decompress(data)
+ return data
+
+ ret = bytearray()
+ # Every while loop iteration goes through all decoders once.
+ # It exits when enough data is read or no more data can be read.
+ # It is possible that the while loop iteration does not produce
+ # any data because we retrieve up to `max_length` from every
+ # decoder, and the amount of bytes may be insufficient for the
+ # next decoder to produce enough/any output.
+ while True:
+ any_data = False
+ for d in reversed(self._decoders):
+ data = d.decompress(data, max_length=max_length - len(ret))
+ if data:
+ any_data = True
+ # We should not break when no data is returned because
+ # next decoders may produce data even with empty input.
+ ret += data
+ if not any_data or len(ret) >= max_length:
+ return bytes(ret)
+ data = b""
+
+ @property
+ def has_unconsumed_tail(self) -> bool:
+ return any(d.has_unconsumed_tail for d in self._decoders)
def _get_decoder(mode: str) -> ContentDecoder:
@@ -246,9 +393,6 @@ class BytesQueueBuffer:
* self.buffer, which contains the full data
* the largest chunk that we will copy in get()
-
- The worst case scenario is a single chunk, in which case we'll make a full copy of
- the data inside get().
"""
def __init__(self) -> None:
@@ -270,6 +414,10 @@ class BytesQueueBuffer:
elif n < 0:
raise ValueError("n should be > 0")
+ if len(self.buffer[0]) == n and isinstance(self.buffer[0], bytes):
+ self._size -= n
+ return self.buffer.popleft()
+
fetched = 0
ret = io.BytesIO()
while fetched < n:
@@ -473,7 +621,11 @@ class BaseHTTPResponse(io.IOBase):
self._decoder = _get_decoder(content_encoding)
def _decode(
- self, data: bytes, decode_content: bool | None, flush_decoder: bool
+ self,
+ data: bytes,
+ decode_content: bool | None,
+ flush_decoder: bool,
+ max_length: int | None = None,
) -> bytes:
"""
Decode the data passed in and potentially flush the decoder.
@@ -486,9 +638,12 @@ class BaseHTTPResponse(io.IOBase):
)
return data
+ if max_length is None or flush_decoder:
+ max_length = -1
+
try:
if self._decoder:
- data = self._decoder.decompress(data)
+ data = self._decoder.decompress(data, max_length=max_length)
self._has_decoded_content = True
except self.DECODER_ERROR_CLASSES as e:
content_encoding = self.headers.get("content-encoding", "").lower()
@@ -953,6 +1108,14 @@ class HTTPResponse(BaseHTTPResponse):
elif amt is not None:
cache_content = False
+ if self._decoder and self._decoder.has_unconsumed_tail:
+ decoded_data = self._decode(
+ b"",
+ decode_content,
+ flush_decoder=False,
+ max_length=amt - len(self._decoded_buffer),
+ )
+ self._decoded_buffer.put(decoded_data)
if len(self._decoded_buffer) >= amt:
return self._decoded_buffer.get(amt)
@@ -960,7 +1123,11 @@ class HTTPResponse(BaseHTTPResponse):
flush_decoder = amt is None or (amt != 0 and not data)
- if not data and len(self._decoded_buffer) == 0:
+ if (
+ not data
+ and len(self._decoded_buffer) == 0
+ and not (self._decoder and self._decoder.has_unconsumed_tail)
+ ):
return data
if amt is None:
@@ -977,7 +1144,12 @@ class HTTPResponse(BaseHTTPResponse):
)
return data
- decoded_data = self._decode(data, decode_content, flush_decoder)
+ decoded_data = self._decode(
+ data,
+ decode_content,
+ flush_decoder,
+ max_length=amt - len(self._decoded_buffer),
+ )
self._decoded_buffer.put(decoded_data)
while len(self._decoded_buffer) < amt and data:
@@ -985,7 +1157,12 @@ class HTTPResponse(BaseHTTPResponse):
# For example, the GZ file header takes 10 bytes, we don't want to read
# it one byte at a time
data = self._raw_read(amt)
- decoded_data = self._decode(data, decode_content, flush_decoder)
+ decoded_data = self._decode(
+ data,
+ decode_content,
+ flush_decoder,
+ max_length=amt - len(self._decoded_buffer),
+ )
self._decoded_buffer.put(decoded_data)
data = self._decoded_buffer.get(amt)
@@ -1020,6 +1197,20 @@ class HTTPResponse(BaseHTTPResponse):
"Calling read1(decode_content=False) is not supported after "
"read1(decode_content=True) was called."
)
+ if (
+ self._decoder
+ and self._decoder.has_unconsumed_tail
+ and (amt is None or len(self._decoded_buffer) < amt)
+ ):
+ decoded_data = self._decode(
+ b"",
+ decode_content,
+ flush_decoder=False,
+ max_length=(
+ amt - len(self._decoded_buffer) if amt is not None else None
+ ),
+ )
+ self._decoded_buffer.put(decoded_data)
if len(self._decoded_buffer) > 0:
if amt is None:
return self._decoded_buffer.get_all()
@@ -1035,7 +1226,9 @@ class HTTPResponse(BaseHTTPResponse):
self._init_decoder()
while True:
flush_decoder = not data
- decoded_data = self._decode(data, decode_content, flush_decoder)
+ decoded_data = self._decode(
+ data, decode_content, flush_decoder, max_length=amt
+ )
self._decoded_buffer.put(decoded_data)
if decoded_data or flush_decoder:
break
@@ -1066,7 +1259,11 @@ class HTTPResponse(BaseHTTPResponse):
if self.chunked and self.supports_chunked_reads():
yield from self.read_chunked(amt, decode_content=decode_content)
else:
- while not is_fp_closed(self._fp) or len(self._decoded_buffer) > 0:
+ while (
+ not is_fp_closed(self._fp)
+ or len(self._decoded_buffer) > 0
+ or (self._decoder and self._decoder.has_unconsumed_tail)
+ ):
data = self.read(amt=amt, decode_content=decode_content)
if data:
@@ -1218,7 +1415,10 @@ class HTTPResponse(BaseHTTPResponse):
break
chunk = self._handle_chunk(amt)
decoded = self._decode(
- chunk, decode_content=decode_content, flush_decoder=False
+ chunk,
+ decode_content=decode_content,
+ flush_decoder=False,
+ max_length=amt,
)
if decoded:
yield decoded
--
2.34.1
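The CVE-2025-66471 patch threads a `max_length` bound through every decoder so a small compressed payload cannot expand into unbounded memory in one call. The zlib-level mechanism it relies on can be sketched in isolation: `zlib.decompressobj().decompress(data, max_length)` returns at most `max_length` bytes and parks the unconsumed input in `unconsumed_tail` for the next call (a simplified sketch, not urllib3's actual decoder classes):

```python
import zlib

def bounded_decompress_all(compressed: bytes, chunk_limit: int) -> bytes:
    """Decompress zlib data, bounded to chunk_limit output bytes per call.

    chunk_limit must be positive: zlib treats max_length == 0 as unlimited.
    """
    obj = zlib.decompressobj()
    out = bytearray()
    data = compressed
    while True:
        # At most chunk_limit bytes come back; input that was not yet
        # consumed waits in obj.unconsumed_tail and is fed back next round.
        out += obj.decompress(data, chunk_limit)
        data = obj.unconsumed_tail
        if not data:
            break
    return bytes(out)
```

A caller that instead drained each bounded chunk incrementally (as urllib3's `read(amt)` does) would never hold more than roughly `chunk_limit` decoded bytes at once, which is the decompression-bomb defence the patch adds.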


@@ -9,6 +9,8 @@ inherit pypi python_hatchling
SRC_URI += " \
file://CVE-2025-50181.patch \
file://CVE-2025-66418.patch \
file://CVE-2025-66471.patch \
"
RDEPENDS:${PN} += "\


@@ -0,0 +1,355 @@
From 9ab89c026aa9611c4b0b67c288b8303a480fe742 Mon Sep 17 00:00:00 2001
From: Łukasz Langa <lukasz@langa.pl>
Date: Fri, 31 Oct 2025 17:58:09 +0100
Subject: [PATCH] gh-136065: Fix quadratic complexity in os.path.expandvars()
(GH-134952) (GH-140845)
(cherry picked from commit f029e8db626ddc6e3a3beea4eff511a71aaceb5c)
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
CVE: CVE-2025-6075
Upstream-Status: Backport [https://github.com/python/cpython/commit/9ab89c026aa9611c4b0b67c288b8303a480fe742]
Signed-off-by: Praveen Kumar <praveen.kumar@windriver.com>
---
Lib/ntpath.py | 126 ++++++------------
Lib/posixpath.py | 43 +++---
Lib/test/test_genericpath.py | 14 ++
Lib/test/test_ntpath.py | 18 ++-
...-05-30-22-33-27.gh-issue-136065.bu337o.rst | 1 +
5 files changed, 92 insertions(+), 110 deletions(-)
create mode 100644 Misc/NEWS.d/next/Security/2025-05-30-22-33-27.gh-issue-136065.bu337o.rst
diff --git a/Lib/ntpath.py b/Lib/ntpath.py
index 1bef630..393d358 100644
--- a/Lib/ntpath.py
+++ b/Lib/ntpath.py
@@ -409,17 +409,23 @@ def expanduser(path):
# XXX With COMMAND.COM you can use any characters in a variable name,
# XXX except '^|<>='.
+_varpattern = r"'[^']*'?|%(%|[^%]*%?)|\$(\$|[-\w]+|\{[^}]*\}?)"
+_varsub = None
+_varsubb = None
+
def expandvars(path):
"""Expand shell variables of the forms $var, ${var} and %var%.
Unknown variables are left unchanged."""
path = os.fspath(path)
+ global _varsub, _varsubb
if isinstance(path, bytes):
if b'$' not in path and b'%' not in path:
return path
- import string
- varchars = bytes(string.ascii_letters + string.digits + '_-', 'ascii')
- quote = b'\''
+ if not _varsubb:
+ import re
+ _varsubb = re.compile(_varpattern.encode(), re.ASCII).sub
+ sub = _varsubb
percent = b'%'
brace = b'{'
rbrace = b'}'
@@ -428,94 +434,44 @@ def expandvars(path):
else:
if '$' not in path and '%' not in path:
return path
- import string
- varchars = string.ascii_letters + string.digits + '_-'
- quote = '\''
+ if not _varsub:
+ import re
+ _varsub = re.compile(_varpattern, re.ASCII).sub
+ sub = _varsub
percent = '%'
brace = '{'
rbrace = '}'
dollar = '$'
environ = os.environ
- res = path[:0]
- index = 0
- pathlen = len(path)
- while index < pathlen:
- c = path[index:index+1]
- if c == quote: # no expansion within single quotes
- path = path[index + 1:]
- pathlen = len(path)
- try:
- index = path.index(c)
- res += c + path[:index + 1]
- except ValueError:
- res += c + path
- index = pathlen - 1
- elif c == percent: # variable or '%'
- if path[index + 1:index + 2] == percent:
- res += c
- index += 1
- else:
- path = path[index+1:]
- pathlen = len(path)
- try:
- index = path.index(percent)
- except ValueError:
- res += percent + path
- index = pathlen - 1
- else:
- var = path[:index]
- try:
- if environ is None:
- value = os.fsencode(os.environ[os.fsdecode(var)])
- else:
- value = environ[var]
- except KeyError:
- value = percent + var + percent
- res += value
- elif c == dollar: # variable or '$$'
- if path[index + 1:index + 2] == dollar:
- res += c
- index += 1
- elif path[index + 1:index + 2] == brace:
- path = path[index+2:]
- pathlen = len(path)
- try:
- index = path.index(rbrace)
- except ValueError:
- res += dollar + brace + path
- index = pathlen - 1
- else:
- var = path[:index]
- try:
- if environ is None:
- value = os.fsencode(os.environ[os.fsdecode(var)])
- else:
- value = environ[var]
- except KeyError:
- value = dollar + brace + var + rbrace
- res += value
- else:
- var = path[:0]
- index += 1
- c = path[index:index + 1]
- while c and c in varchars:
- var += c
- index += 1
- c = path[index:index + 1]
- try:
- if environ is None:
- value = os.fsencode(os.environ[os.fsdecode(var)])
- else:
- value = environ[var]
- except KeyError:
- value = dollar + var
- res += value
- if c:
- index -= 1
+
+ def repl(m):
+ lastindex = m.lastindex
+ if lastindex is None:
+ return m[0]
+ name = m[lastindex]
+ if lastindex == 1:
+ if name == percent:
+ return name
+ if not name.endswith(percent):
+ return m[0]
+ name = name[:-1]
else:
- res += c
- index += 1
- return res
+ if name == dollar:
+ return name
+ if name.startswith(brace):
+ if not name.endswith(rbrace):
+ return m[0]
+ name = name[1:-1]
+
+ try:
+ if environ is None:
+ return os.fsencode(os.environ[os.fsdecode(name)])
+ else:
+ return environ[name]
+ except KeyError:
+ return m[0]
+
+ return sub(repl, path)
# Normalize a path, e.g. A//B, A/./B and A/foo/../B all become A\B.
diff --git a/Lib/posixpath.py b/Lib/posixpath.py
index 90a6f54..6306f14 100644
--- a/Lib/posixpath.py
+++ b/Lib/posixpath.py
@@ -314,42 +314,41 @@ def expanduser(path):
# This expands the forms $variable and ${variable} only.
# Non-existent variables are left unchanged.
-_varprog = None
-_varprogb = None
+_varpattern = r'\$(\w+|\{[^}]*\}?)'
+_varsub = None
+_varsubb = None
def expandvars(path):
"""Expand shell variables of form $var and ${var}. Unknown variables
are left unchanged."""
path = os.fspath(path)
- global _varprog, _varprogb
+ global _varsub, _varsubb
if isinstance(path, bytes):
if b'$' not in path:
return path
- if not _varprogb:
+ if not _varsubb:
import re
- _varprogb = re.compile(br'\$(\w+|\{[^}]*\})', re.ASCII)
- search = _varprogb.search
+ _varsubb = re.compile(_varpattern.encode(), re.ASCII).sub
+ sub = _varsubb
start = b'{'
end = b'}'
environ = getattr(os, 'environb', None)
else:
if '$' not in path:
return path
- if not _varprog:
+ if not _varsub:
import re
- _varprog = re.compile(r'\$(\w+|\{[^}]*\})', re.ASCII)
- search = _varprog.search
+ _varsub = re.compile(_varpattern, re.ASCII).sub
+ sub = _varsub
start = '{'
end = '}'
environ = os.environ
- i = 0
- while True:
- m = search(path, i)
- if not m:
- break
- i, j = m.span(0)
- name = m.group(1)
- if name.startswith(start) and name.endswith(end):
+
+ def repl(m):
+ name = m[1]
+ if name.startswith(start):
+ if not name.endswith(end):
+ return m[0]
name = name[1:-1]
try:
if environ is None:
@@ -357,13 +356,11 @@ def expandvars(path):
else:
value = environ[name]
except KeyError:
- i = j
+ return m[0]
else:
- tail = path[j:]
- path = path[:i] + value
- i = len(path)
- path += tail
- return path
+ return value
+
+ return sub(repl, path)
# Normalize a path, e.g. A//B, A/./B and A/foo/../B all become A/B.
diff --git a/Lib/test/test_genericpath.py b/Lib/test/test_genericpath.py
index 3eefb72..1cec587 100644
--- a/Lib/test/test_genericpath.py
+++ b/Lib/test/test_genericpath.py
@@ -7,6 +7,7 @@ import os
import sys
import unittest
import warnings
+from test import support
from test.support import is_emscripten
from test.support import os_helper
from test.support import warnings_helper
@@ -443,6 +444,19 @@ class CommonTest(GenericTest):
os.fsencode('$bar%s bar' % nonascii))
check(b'$spam}bar', os.fsencode('%s}bar' % nonascii))
+ @support.requires_resource('cpu')
+ def test_expandvars_large(self):
+ expandvars = self.pathmodule.expandvars
+ with os_helper.EnvironmentVarGuard() as env:
+ env.clear()
+ env["A"] = "B"
+ n = 100_000
+ self.assertEqual(expandvars('$A'*n), 'B'*n)
+ self.assertEqual(expandvars('${A}'*n), 'B'*n)
+ self.assertEqual(expandvars('$A!'*n), 'B!'*n)
+ self.assertEqual(expandvars('${A}A'*n), 'BA'*n)
+ self.assertEqual(expandvars('${'*10*n), '${'*10*n)
+
def test_abspath(self):
self.assertIn("foo", self.pathmodule.abspath("foo"))
with warnings.catch_warnings():
diff --git a/Lib/test/test_ntpath.py b/Lib/test/test_ntpath.py
index ced9dc4..f4d5063 100644
--- a/Lib/test/test_ntpath.py
+++ b/Lib/test/test_ntpath.py
@@ -7,6 +7,7 @@ import sys
import unittest
import warnings
from ntpath import ALLOW_MISSING
+from test import support
from test.support import cpython_only, os_helper
from test.support import TestFailed, is_emscripten
from test.support.os_helper import FakePath
@@ -58,7 +59,7 @@ def tester(fn, wantResult):
fn = fn.replace("\\", "\\\\")
gotResult = eval(fn)
if wantResult != gotResult and _norm(wantResult) != _norm(gotResult):
- raise TestFailed("%s should return: %s but returned: %s" \
+ raise support.TestFailed("%s should return: %s but returned: %s" \
%(str(fn), str(wantResult), str(gotResult)))
# then with bytes
@@ -74,7 +75,7 @@ def tester(fn, wantResult):
warnings.simplefilter("ignore", DeprecationWarning)
gotResult = eval(fn)
if _norm(wantResult) != _norm(gotResult):
- raise TestFailed("%s should return: %s but returned: %s" \
+ raise support.TestFailed("%s should return: %s but returned: %s" \
%(str(fn), str(wantResult), repr(gotResult)))
@@ -882,6 +883,19 @@ class TestNtpath(NtpathTestCase):
check('%spam%bar', '%sbar' % nonascii)
check('%{}%bar'.format(nonascii), 'ham%sbar' % nonascii)
+ @support.requires_resource('cpu')
+ def test_expandvars_large(self):
+ expandvars = ntpath.expandvars
+ with os_helper.EnvironmentVarGuard() as env:
+ env.clear()
+ env["A"] = "B"
+ n = 100_000
+ self.assertEqual(expandvars('%A%'*n), 'B'*n)
+ self.assertEqual(expandvars('%A%A'*n), 'BA'*n)
+ self.assertEqual(expandvars("''"*n + '%%'), "''"*n + '%')
+ self.assertEqual(expandvars("%%"*n), "%"*n)
+ self.assertEqual(expandvars("$$"*n), "$"*n)
+
def test_expanduser(self):
tester('ntpath.expanduser("test")', 'test')
diff --git a/Misc/NEWS.d/next/Security/2025-05-30-22-33-27.gh-issue-136065.bu337o.rst b/Misc/NEWS.d/next/Security/2025-05-30-22-33-27.gh-issue-136065.bu337o.rst
new file mode 100644
index 0000000..1d152bb
--- /dev/null
+++ b/Misc/NEWS.d/next/Security/2025-05-30-22-33-27.gh-issue-136065.bu337o.rst
@@ -0,0 +1 @@
+Fix quadratic complexity in :func:`os.path.expandvars`.
--
2.40.0
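The CPython patch above replaces a slice-and-concatenate scanning loop (quadratic on adversarial input) with a single `re.sub` pass using a replacement function. A simplified sketch of the POSIX-style version, handling only `$var` and `${var}` against a plain dict rather than `os.environ`:

```python
import re

# Same shape as the patch's _varpattern: the closing brace is optional
# so an unterminated "${..." still matches and can be left unchanged.
_VARPATTERN = re.compile(r'\$(\w+|\{[^}]*\}?)', re.ASCII)

def expandvars(path: str, env: dict) -> str:
    """Expand $var and ${var}; unknown variables are left unchanged."""
    if '$' not in path:
        return path

    def repl(m):
        name = m[1]
        if name.startswith('{'):
            if not name.endswith('}'):
                return m[0]          # unterminated ${... left as-is
            name = name[1:-1]
        try:
            return env[name]
        except KeyError:
            return m[0]              # unknown variable left unchanged

    return _VARPATTERN.sub(repl, path)
```

Because `re.sub` walks the string once and builds the result in one buffer, inputs like `'$A' * 100_000` expand in linear time, which is exactly what the new `test_expandvars_large` cases exercise.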


@@ -34,6 +34,7 @@ SRC_URI = "http://www.python.org/ftp/python/${PV}/Python-${PV}.tar.xz \
file://0001-test_deadlock-skip-problematic-test.patch \
file://0001-test_active_children-skip-problematic-test.patch \
file://0001-test_readline-skip-limited-history-test.patch \
file://CVE-2025-6075.patch \
"
SRC_URI:append:class-native = " \


@@ -42,6 +42,7 @@ SRC_URI = "https://download.qemu.org/${BPN}-${PV}.tar.xz \
file://qemu-guest-agent.init \
file://qemu-guest-agent.udev \
file://CVE-2024-8354.patch \
file://CVE-2025-12464.patch \
"
UPSTREAM_CHECK_REGEX = "qemu-(?P<pver>\d+(\.\d+)+)\.tar"


@@ -0,0 +1,70 @@
From a01344d9d78089e9e585faaeb19afccff2050abf Mon Sep 17 00:00:00 2001
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 28 Oct 2025 16:00:42 +0000
Subject: [PATCH] net: pad packets to minimum length in qemu_receive_packet()
In commits like 969e50b61a28 ("net: Pad short frames to minimum size
before sending from SLiRP/TAP") we switched away from requiring
network devices to handle short frames to instead having the net core
code do the padding of short frames out to the ETH_ZLEN minimum size.
We then dropped the code for handling short frames from the network
devices in a series of commits like 140eae9c8f7 ("hw/net: e1000:
Remove the logic of padding short frames in the receive path").
This missed one route where the device's receive code can still see a
short frame: if the device is in loopback mode and it transmits a
short frame via the qemu_receive_packet() function, this will be fed
back into its own receive code without being padded.
Add the padding logic to qemu_receive_packet().
This fixes a buffer overrun which can be triggered in the
e1000_receive_iov() logic via the loopback code path.
Other devices that use qemu_receive_packet() to implement loopback
are cadence_gem, dp8393x, lan9118, msf2-emac, pcnet, rtl8139
and sungem.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3043
Reviewed-by: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
CVE: CVE-2025-12464
Upstream-Status: Backport [https://gitlab.com/qemu-project/qemu/-/commit/a01344d9d7]
Signed-off-by: Kai Kang <kai.kang@windriver.com>
---
net/net.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/net/net.c b/net/net.c
index 27e0d27807..8aefdb3424 100644
--- a/net/net.c
+++ b/net/net.c
@@ -775,10 +775,20 @@ ssize_t qemu_send_packet(NetClientState *nc, const uint8_t *buf, int size)
ssize_t qemu_receive_packet(NetClientState *nc, const uint8_t *buf, int size)
{
+ uint8_t min_pkt[ETH_ZLEN];
+ size_t min_pktsz = sizeof(min_pkt);
+
if (!qemu_can_receive_packet(nc)) {
return 0;
}
+ if (net_peer_needs_padding(nc)) {
+ if (eth_pad_short_frame(min_pkt, &min_pktsz, buf, size)) {
+ buf = min_pkt;
+ size = min_pktsz;
+ }
+ }
+
return qemu_net_queue_receive(nc->incoming_queue, buf, size);
}
--
2.47.1
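The qemu fix pads loopback frames to the Ethernet minimum (ETH_ZLEN, 60 bytes excluding the 4-byte FCS) before they re-enter a device's receive path, closing the short-frame route that triggered the e1000 buffer overrun. The padding step itself is simple; a Python rendering of what the C helper `eth_pad_short_frame` does to the buffer:

```python
ETH_ZLEN = 60  # minimum Ethernet frame length, excluding the 4-byte FCS

def pad_short_frame(frame: bytes) -> bytes:
    """Zero-pad a short Ethernet frame up to the minimum length."""
    if len(frame) >= ETH_ZLEN:
        return frame
    return frame + b"\x00" * (ETH_ZLEN - len(frame))
```

Receive paths that assume at least ETH_ZLEN bytes (as the affected devices do after commit 140eae9c8f7 removed their own padding) then see a well-formed minimum-size frame even on the loopback route.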


@@ -0,0 +1,36 @@
From 797e17fc4a6f15e3b1756538a9f812b63942686f Mon Sep 17 00:00:00 2001
From: Andrew Tridgell <andrew@tridgell.net>
Date: Sat, 23 Aug 2025 17:26:53 +1000
Subject: [PATCH] fixed an invalid access to files array
this was found by Calum Hutton from Rapid7. It is a real bug, but
analysis shows it can't be leveraged into an exploit. Worth fixing
though.
Many thanks to Calum and Rapid7 for finding and reporting this
CVE: CVE-2025-10158
Upstream-Status: Backport
[https://github.com/RsyncProject/rsync/commit/797e17fc4a6f15e3b1756538a9f812b63942686f]
Signed-off-by: Adarsh Jagadish Kamini <adarsh.jagadish.kamini@est.tech>
---
sender.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sender.c b/sender.c
index 2bbff2fa..5528071e 100644
--- a/sender.c
+++ b/sender.c
@@ -262,6 +262,8 @@ void send_files(int f_in, int f_out)
if (ndx - cur_flist->ndx_start >= 0)
file = cur_flist->files[ndx - cur_flist->ndx_start];
+ else if (cur_flist->parent_ndx < 0)
+ exit_cleanup(RERR_PROTOCOL);
else
file = dir_flist->files[cur_flist->parent_ndx];
if (F_PATHNAME(file)) {
--
2.44.1
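The two added lines reject a negative `parent_ndx` before it is used as an array index; in C that index would silently read outside `dir_flist->files`. A minimal sketch of the same guard pattern (hypothetical names mirroring the C logic, not rsync's actual structures):

```python
def select_file(cur_files, cur_start, dir_files, parent_ndx, ndx):
    """Pick a file entry, rejecting a negative fallback index."""
    offset = ndx - cur_start
    if offset >= 0:
        return cur_files[offset]
    if parent_ndx < 0:
        # Before the fix, a negative parent_ndx reached the index
        # expression below unchecked — an invalid access in C.
        raise ValueError("protocol error: negative parent index")
    return dir_files[parent_ndx]
```

In the rsync patch the error branch calls `exit_cleanup(RERR_PROTOCOL)`, treating the out-of-range index as a protocol violation from the peer.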


@@ -27,6 +27,7 @@ SRC_URI = "https://download.samba.org/pub/${BPN}/src/${BP}.tar.gz \
file://CVE-2024-12087-0003.patch \
file://CVE-2024-12088.patch \
file://CVE-2024-12747.patch \
file://CVE-2025-10158.patch \
"
SRC_URI[sha256sum] = "4e7d9d3f6ed10878c58c5fb724a67dacf4b6aac7340b13e488fb2dc41346f2bb"


@@ -1,31 +0,0 @@
From 9907b76dad0777ee300de236dad4b559e07596ab Mon Sep 17 00:00:00 2001
From: Hiroshi SHIBATA <hsbt@ruby-lang.org>
Date: Fri, 21 Feb 2025 16:01:17 +0900
Subject: [PATCH] Use String#concat instead of String#+ for reducing cpu usage
Co-authored-by: "Yusuke Endoh" <mame@ruby-lang.org>
Upstream-Status: Backport [https://github.com/ruby/cgi/commit/9907b76dad0777ee300de236dad4b559e07596ab]
CVE: CVE-2025-27219
Signed-off-by: Ashish Sharma <asharma@mvista.com>
lib/cgi/cookie.rb | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/lib/cgi/cookie.rb b/lib/cgi/cookie.rb
index 9498e2f..1c4ef6a 100644
--- a/lib/cgi/cookie.rb
+++ b/lib/cgi/cookie.rb
@@ -190,9 +190,10 @@ def self.parse(raw_cookie)
values ||= ""
values = values.split('&').collect{|v| CGI.unescape(v,@@accept_charset) }
if cookies.has_key?(name)
- values = cookies[name].value + values
+ cookies[name].concat(values)
+ else
+ cookies[name] = Cookie.new(name, *values)
end
- cookies[name] = Cookie.new(name, *values)
end
cookies
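
The cookie fix above swaps String#+ (which allocates a fresh string for every append) for in-place concatenation. The same quadratic-versus-amortized distinction can be sketched in C with a geometrically growing buffer (names hypothetical):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical growable string buffer.  Doubling the capacity keeps the
 * total bytes copied across n appends O(n); reallocating to the exact
 * size on every append (the analogue of building a string with repeated
 * String#+) copies O(n^2) bytes overall. */
struct strbuf {
    char *data;
    size_t len, cap;
};

static void strbuf_append(struct strbuf *b, const char *s, size_t n)
{
    if (b->len + n + 1 > b->cap) {
        size_t cap = b->cap ? b->cap : 16;
        while (cap < b->len + n + 1)
            cap *= 2;
        b->data = realloc(b->data, cap);  /* error handling elided */
        b->cap = cap;
    }
    memcpy(b->data + b->len, s, n);
    b->len += n;
    b->data[b->len] = '\0';
}
```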


@@ -1,78 +0,0 @@
From cd1eb08076c8b8e310d4d553d427763f2577a1b6 Mon Sep 17 00:00:00 2001
From: Hiroshi SHIBATA <hsbt@ruby-lang.org>
Date: Fri, 21 Feb 2025 15:53:31 +0900
Subject: [PATCH] Escape/unescape unclosed tags as well
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
CVE: CVE-2025-27220
Upstream-Status: Backport [https://github.com/ruby/cgi/commit/cd1eb08076c8b8e310d4d553d427763f2577a1b6]
Signed-off-by: Divya Chellam <divya.chellam@windriver.com>
---
lib/cgi/util.rb | 4 ++--
test/cgi/test_cgi_util.rb | 18 ++++++++++++++++++
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/lib/cgi/util.rb b/lib/cgi/util.rb
index 4986e54..5f12eae 100644
--- a/lib/cgi/util.rb
+++ b/lib/cgi/util.rb
@@ -184,7 +184,7 @@ module CGI::Util
def escapeElement(string, *elements)
elements = elements[0] if elements[0].kind_of?(Array)
unless elements.empty?
- string.gsub(/<\/?(?:#{elements.join("|")})(?!\w)(?:.|\n)*?>/i) do
+ string.gsub(/<\/?(?:#{elements.join("|")})\b[^<>]*+>?/im) do
CGI.escapeHTML($&)
end
else
@@ -204,7 +204,7 @@ module CGI::Util
def unescapeElement(string, *elements)
elements = elements[0] if elements[0].kind_of?(Array)
unless elements.empty?
- string.gsub(/&lt;\/?(?:#{elements.join("|")})(?!\w)(?:.|\n)*?&gt;/i) do
+ string.gsub(/&lt;\/?(?:#{elements.join("|")})\b(?>[^&]+|&(?![gl]t;)\w+;)*(?:&gt;)?/im) do
unescapeHTML($&)
end
else
diff --git a/test/cgi/test_cgi_util.rb b/test/cgi/test_cgi_util.rb
index b0612fc..bff77f7 100644
--- a/test/cgi/test_cgi_util.rb
+++ b/test/cgi/test_cgi_util.rb
@@ -269,6 +269,14 @@ class CGIUtilTest < Test::Unit::TestCase
assert_equal("<BR>&lt;A HREF=&quot;url&quot;&gt;&lt;/A&gt;", escapeElement('<BR><A HREF="url"></A>', ["A", "IMG"]))
assert_equal("<BR>&lt;A HREF=&quot;url&quot;&gt;&lt;/A&gt;", escape_element('<BR><A HREF="url"></A>', "A", "IMG"))
assert_equal("<BR>&lt;A HREF=&quot;url&quot;&gt;&lt;/A&gt;", escape_element('<BR><A HREF="url"></A>', ["A", "IMG"]))
+
+ assert_equal("&lt;A &lt;A HREF=&quot;url&quot;&gt;&lt;/A&gt;", escapeElement('<A <A HREF="url"></A>', "A", "IMG"))
+ assert_equal("&lt;A &lt;A HREF=&quot;url&quot;&gt;&lt;/A&gt;", escapeElement('<A <A HREF="url"></A>', ["A", "IMG"]))
+ assert_equal("&lt;A &lt;A HREF=&quot;url&quot;&gt;&lt;/A&gt;", escape_element('<A <A HREF="url"></A>', "A", "IMG"))
+ assert_equal("&lt;A &lt;A HREF=&quot;url&quot;&gt;&lt;/A&gt;", escape_element('<A <A HREF="url"></A>', ["A", "IMG"]))
+
+ assert_equal("&lt;A &lt;A ", escapeElement('<A <A ', "A", "IMG"))
+ assert_equal("&lt;A &lt;A ", escapeElement('<A <A ', ["A", "IMG"]))
end
@@ -277,6 +285,16 @@ class CGIUtilTest < Test::Unit::TestCase
assert_equal('&lt;BR&gt;<A HREF="url"></A>', unescapeElement(escapeHTML('<BR><A HREF="url"></A>'), ["A", "IMG"]))
assert_equal('&lt;BR&gt;<A HREF="url"></A>', unescape_element(escapeHTML('<BR><A HREF="url"></A>'), "A", "IMG"))
assert_equal('&lt;BR&gt;<A HREF="url"></A>', unescape_element(escapeHTML('<BR><A HREF="url"></A>'), ["A", "IMG"]))
+
+ assert_equal('<A <A HREF="url"></A>', unescapeElement(escapeHTML('<A <A HREF="url"></A>'), "A", "IMG"))
+ assert_equal('<A <A HREF="url"></A>', unescapeElement(escapeHTML('<A <A HREF="url"></A>'), ["A", "IMG"]))
+ assert_equal('<A <A HREF="url"></A>', unescape_element(escapeHTML('<A <A HREF="url"></A>'), "A", "IMG"))
+ assert_equal('<A <A HREF="url"></A>', unescape_element(escapeHTML('<A <A HREF="url"></A>'), ["A", "IMG"]))
+
+ assert_equal('<A <A ', unescapeElement(escapeHTML('<A <A '), "A", "IMG"))
+ assert_equal('<A <A ', unescapeElement(escapeHTML('<A <A '), ["A", "IMG"]))
+ assert_equal('<A <A ', unescape_element(escapeHTML('<A <A '), "A", "IMG"))
+ assert_equal('<A <A ', unescape_element(escapeHTML('<A <A '), ["A", "IMG"]))
end
end
--
2.40.0


@@ -1,57 +0,0 @@
From 3675494839112b64d5f082a9068237b277ed1495 Mon Sep 17 00:00:00 2001
From: Hiroshi SHIBATA <hsbt@ruby-lang.org>
Date: Fri, 21 Feb 2025 16:29:36 +0900
Subject: [PATCH] Truncate userinfo with URI#join, URI#merge and URI#+
CVE: CVE-2025-27221
Upstream-Status: Backport [https://github.com/ruby/uri/commit/3675494839112b64d5f082a9068237b277ed1495]
Signed-off-by: Divya Chellam <divya.chellam@windriver.com>
---
lib/uri/generic.rb | 6 +++++-
test/uri/test_generic.rb | 11 +++++++++++
2 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/lib/uri/generic.rb b/lib/uri/generic.rb
index f3540a2..ecc78c5 100644
--- a/lib/uri/generic.rb
+++ b/lib/uri/generic.rb
@@ -1141,7 +1141,11 @@ module URI
end
# RFC2396, Section 5.2, 7)
- base.set_userinfo(rel.userinfo) if rel.userinfo
+ if rel.userinfo
+ base.set_userinfo(rel.userinfo)
+ else
+ base.set_userinfo(nil)
+ end
base.set_host(rel.host) if rel.host
base.set_port(rel.port) if rel.port
base.query = rel.query if rel.query
diff --git a/test/uri/test_generic.rb b/test/uri/test_generic.rb
index e661937..17ba2b6 100644
--- a/test/uri/test_generic.rb
+++ b/test/uri/test_generic.rb
@@ -164,6 +164,17 @@ class URI::TestGeneric < Test::Unit::TestCase
# must be empty string to identify as path-abempty, not path-absolute
assert_equal('', url.host)
assert_equal('http:////example.com', url.to_s)
+
+ # sec-2957667
+ url = URI.parse('http://user:pass@example.com').merge('//example.net')
+ assert_equal('http://example.net', url.to_s)
+ assert_nil(url.userinfo)
+ url = URI.join('http://user:pass@example.com', '//example.net')
+ assert_equal('http://example.net', url.to_s)
+ assert_nil(url.userinfo)
+ url = URI.parse('http://user:pass@example.com') + '//example.net'
+ assert_equal('http://example.net', url.to_s)
+ assert_nil(url.userinfo)
end
def test_parse_scheme_with_symbols
--
2.40.0


@@ -1,73 +0,0 @@
From 2789182478f42ccbb62197f952eb730e4f02bfc5 Mon Sep 17 00:00:00 2001
From: Hiroshi SHIBATA <hsbt@ruby-lang.org>
Date: Fri, 21 Feb 2025 18:16:28 +0900
Subject: [PATCH] Fix merger of URI with authority component
https://hackerone.com/reports/2957667
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
CVE: CVE-2025-27221
Upstream-Status: Backport [https://github.com/ruby/uri/commit/2789182478f42ccbb62197f952eb730e4f02bfc5]
Signed-off-by: Divya Chellam <divya.chellam@windriver.com>
---
lib/uri/generic.rb | 19 +++++++------------
test/uri/test_generic.rb | 7 +++++++
2 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/lib/uri/generic.rb b/lib/uri/generic.rb
index ecc78c5..2c0a88d 100644
--- a/lib/uri/generic.rb
+++ b/lib/uri/generic.rb
@@ -1133,21 +1133,16 @@ module URI
base.fragment=(nil)
# RFC2396, Section 5.2, 4)
- if !authority
- base.set_path(merge_path(base.path, rel.path)) if base.path && rel.path
- else
- # RFC2396, Section 5.2, 4)
- base.set_path(rel.path) if rel.path
+ if authority
+ base.set_userinfo(rel.userinfo)
+ base.set_host(rel.host)
+ base.set_port(rel.port || base.default_port)
+ base.set_path(rel.path)
+ elsif base.path && rel.path
+ base.set_path(merge_path(base.path, rel.path))
end
# RFC2396, Section 5.2, 7)
- if rel.userinfo
- base.set_userinfo(rel.userinfo)
- else
- base.set_userinfo(nil)
- end
- base.set_host(rel.host) if rel.host
- base.set_port(rel.port) if rel.port
base.query = rel.query if rel.query
base.fragment=(rel.fragment) if rel.fragment
diff --git a/test/uri/test_generic.rb b/test/uri/test_generic.rb
index 17ba2b6..1a70dd4 100644
--- a/test/uri/test_generic.rb
+++ b/test/uri/test_generic.rb
@@ -267,6 +267,13 @@ class URI::TestGeneric < Test::Unit::TestCase
assert_equal(u0, u1)
end
+ def test_merge_authority
+ u = URI.parse('http://user:pass@example.com:8080')
+ u0 = URI.parse('http://new.example.org/path')
+ u1 = u.merge('//new.example.org/path')
+ assert_equal(u0, u1)
+ end
+
def test_route
url = URI.parse('http://hoge/a.html').route_to('http://hoge/b.html')
assert_equal('b.html', url.to_s)
--
2.40.0
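
The rewritten RFC 2396 section 5.2 logic above replaces the whole authority (userinfo, host, port, path) when the reference supplies one, so base credentials can no longer survive a merge. A stripped-down, hypothetical model of that rule:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* When the relative reference carries its own authority ("//host"),
 * every authority component of the base -- including userinfo -- is
 * replaced rather than inherited, so credentials cannot leak into the
 * merged URI.  Types and names are invented for illustration. */
struct uri {
    const char *userinfo, *host, *path;
};

static struct uri uri_merge(struct uri base, struct uri rel,
                            int rel_has_authority)
{
    if (rel_has_authority) {
        base.userinfo = rel.userinfo;  /* may be NULL: drops base creds */
        base.host = rel.host;
        base.path = rel.path;
    }
    return base;
}
```

This mirrors the patch's test case: merging `http://user:pass@example.com` with `//example.net` must yield a URI with no userinfo.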


@@ -26,10 +26,6 @@ SRC_URI = "http://cache.ruby-lang.org/pub/ruby/${SHRT_VER}/ruby-${PV}.tar.gz \
file://0005-Mark-Gemspec-reproducible-change-fixing-784225-too.patch \
file://0006-Make-gemspecs-reproducible.patch \
file://0001-vm_dump.c-Define-REG_S1-and-REG_S2-for-musl-riscv.patch \
file://CVE-2025-27219.patch \
file://CVE-2025-27220.patch \
file://CVE-2025-27221-0001.patch \
file://CVE-2025-27221-0002.patch \
file://0007-Skip-test_rm_r_no_permissions-test-under-root.patch \
"
UPSTREAM_CHECK_URI = "https://www.ruby-lang.org/en/downloads/"
@@ -51,7 +47,7 @@ do_configure:prepend() {
DEPENDS:append:libc-musl = " libucontext"
SRC_URI[sha256sum] = "3781a3504222c2f26cb4b9eb9c1a12dbf4944d366ce24a9ff8cf99ecbce75196"
SRC_URI[sha256sum] = "b555baa467a306cfc8e6c6ed24d0d27b27e9a1bed1d91d95509859eac6b0e928"
PACKAGECONFIG ??= ""
PACKAGECONFIG += "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"


@@ -17,6 +17,8 @@ SRC_URI = "${GITHUB_BASE_URI}/download/v${PV}/cups-${PV}-source.tar.gz \
file://cups-volatiles.conf \
file://CVE-2025-58060.patch \
file://CVE-2025-58364.patch \
file://CVE-2025-58436.patch \
file://CVE-2025-61915.patch \
"
GITHUB_BASE_URI = "https://github.com/OpenPrinting/cups/releases"


@@ -0,0 +1,635 @@
From 7587d27139227397ab68cce554a112bb1190e6b6 Mon Sep 17 00:00:00 2001
From: Zdenek Dohnal <zdohnal@redhat.com>
Date: Mon, 13 Oct 2025 10:16:48 +0200
Subject: [PATCH] Fix unresponsive cupsd process caused by a slow client
If a client is very slow, it will slow the cupsd process for other clients.
The fix is a best effort without turning the cupsd scheduler into a
multithreaded process, which would be too complex and error-prone when
backporting to the 2.4.x series.
The fix for unencrypted communication is to continue processing only once
a whole line is available on input, with the waiting time guarded by a
timeout.
Encrypted communication now starts only after the whole client hello
packet has arrived, which conflicts with the optional upgrade to HTTPS via
methods other than OPTIONS, so this optional support, defined in
RFC 2817, section 3.1, is removed. Too slow or incomplete requests are
handled by the connection timeout.
Fixes CVE-2025-58436
CVE: CVE-2025-58436
Upstream-Status: Backport [https://github.com/OpenPrinting/cups/commit/5d414f1f91bd]
(cherry picked from commit 5d414f1f91bdca118413301b148f0b188eb1cdc6)
Signed-off-by: Deepak Rathore <deeratho@cisco.com>
---
cups/http-private.h | 7 +-
cups/http.c | 80 +++++++++++++-------
cups/tls-openssl.c | 15 +++-
scheduler/client.c | 178 ++++++++++++++++++++++++++++----------------
scheduler/client.h | 3 +
scheduler/select.c | 12 +++
6 files changed, 198 insertions(+), 97 deletions(-)
diff --git a/cups/http-private.h b/cups/http-private.h
index 5f77b8ef0..b8e200bf6 100644
--- a/cups/http-private.h
+++ b/cups/http-private.h
@@ -121,6 +121,7 @@ extern "C" {
* Constants...
*/
+# define _HTTP_MAX_BUFFER 32768 /* Size of read buffer */
# define _HTTP_MAX_SBUFFER 65536 /* Size of (de)compression buffer */
# define _HTTP_RESOLVE_DEFAULT 0 /* Just resolve with default options */
# define _HTTP_RESOLVE_STDERR 1 /* Log resolve progress to stderr */
@@ -232,8 +233,8 @@ struct _http_s /**** HTTP connection structure ****/
http_encoding_t data_encoding; /* Chunked or not */
int _data_remaining;/* Number of bytes left (deprecated) */
int used; /* Number of bytes used in buffer */
- char buffer[HTTP_MAX_BUFFER];
- /* Buffer for incoming data */
+ char _buffer[HTTP_MAX_BUFFER];
+ /* Old read buffer (deprecated) */
int _auth_type; /* Authentication in use (deprecated) */
unsigned char _md5_state[88]; /* MD5 state (deprecated) */
char nonce[HTTP_MAX_VALUE];
@@ -307,6 +308,8 @@ struct _http_s /**** HTTP connection structure ****/
/* Allocated field values */
*default_fields[HTTP_FIELD_MAX];
/* Default field values, if any */
+ char buffer[_HTTP_MAX_BUFFER];
+ /* Read buffer */
};
# endif /* !_HTTP_NO_PRIVATE */
diff --git a/cups/http.c b/cups/http.c
index 31a8be361..599703c7b 100644
--- a/cups/http.c
+++ b/cups/http.c
@@ -53,7 +53,7 @@ static http_t *http_create(const char *host, int port,
static void http_debug_hex(const char *prefix, const char *buffer,
int bytes);
#endif /* DEBUG */
-static ssize_t http_read(http_t *http, char *buffer, size_t length);
+static ssize_t http_read(http_t *http, char *buffer, size_t length, int timeout);
static ssize_t http_read_buffered(http_t *http, char *buffer, size_t length);
static ssize_t http_read_chunk(http_t *http, char *buffer, size_t length);
static int http_send(http_t *http, http_state_t request,
@@ -1200,7 +1200,7 @@ httpGets(char *line, /* I - Line to read into */
return (NULL);
}
- bytes = http_read(http, http->buffer + http->used, (size_t)(HTTP_MAX_BUFFER - http->used));
+ bytes = http_read(http, http->buffer + http->used, (size_t)(_HTTP_MAX_BUFFER - http->used), http->wait_value);
DEBUG_printf(("4httpGets: read " CUPS_LLFMT " bytes.", CUPS_LLCAST bytes));
@@ -1720,24 +1720,13 @@ httpPeek(http_t *http, /* I - HTTP connection */
ssize_t buflen; /* Length of read for buffer */
- if (!http->blocking)
- {
- while (!httpWait(http, http->wait_value))
- {
- if (http->timeout_cb && (*http->timeout_cb)(http, http->timeout_data))
- continue;
-
- return (0);
- }
- }
-
if ((size_t)http->data_remaining > sizeof(http->buffer))
buflen = sizeof(http->buffer);
else
buflen = (ssize_t)http->data_remaining;
DEBUG_printf(("2httpPeek: Reading %d bytes into buffer.", (int)buflen));
- bytes = http_read(http, http->buffer, (size_t)buflen);
+ bytes = http_read(http, http->buffer, (size_t)buflen, http->wait_value);
DEBUG_printf(("2httpPeek: Read " CUPS_LLFMT " bytes into buffer.",
CUPS_LLCAST bytes));
@@ -1758,9 +1747,9 @@ httpPeek(http_t *http, /* I - HTTP connection */
int zerr; /* Decompressor error */
z_stream stream; /* Copy of decompressor stream */
- if (http->used > 0 && ((z_stream *)http->stream)->avail_in < HTTP_MAX_BUFFER)
+ if (http->used > 0 && ((z_stream *)http->stream)->avail_in < _HTTP_MAX_BUFFER)
{
- size_t buflen = HTTP_MAX_BUFFER - ((z_stream *)http->stream)->avail_in;
+ size_t buflen = _HTTP_MAX_BUFFER - ((z_stream *)http->stream)->avail_in;
/* Number of bytes to copy */
if (((z_stream *)http->stream)->avail_in > 0 &&
@@ -2018,7 +2007,7 @@ httpRead2(http_t *http, /* I - HTTP connection */
if (bytes == 0)
{
- ssize_t buflen = HTTP_MAX_BUFFER - (ssize_t)((z_stream *)http->stream)->avail_in;
+ ssize_t buflen = _HTTP_MAX_BUFFER - (ssize_t)((z_stream *)http->stream)->avail_in;
/* Additional bytes for buffer */
if (buflen > 0)
@@ -2768,7 +2757,7 @@ int /* O - 1 to continue, 0 to stop */
_httpUpdate(http_t *http, /* I - HTTP connection */
http_status_t *status) /* O - Current HTTP status */
{
- char line[32768], /* Line from connection... */
+ char line[_HTTP_MAX_BUFFER], /* Line from connection... */
*value; /* Pointer to value on line */
http_field_t field; /* Field index */
int major, minor; /* HTTP version numbers */
@@ -2776,12 +2765,46 @@ _httpUpdate(http_t *http, /* I - HTTP connection */
DEBUG_printf(("_httpUpdate(http=%p, status=%p), state=%s", (void *)http, (void *)status, httpStateString(http->state)));
+ /* When doing non-blocking I/O, make sure we have a whole line... */
+ if (!http->blocking)
+ {
+ ssize_t bytes; /* Bytes "peeked" from connection */
+
+ /* See whether our read buffer is full... */
+ DEBUG_printf(("2_httpUpdate: used=%d", http->used));
+
+ if (http->used > 0 && !memchr(http->buffer, '\n', (size_t)http->used) && (size_t)http->used < sizeof(http->buffer))
+ {
+ /* No, try filling in more data... */
+ if ((bytes = http_read(http, http->buffer + http->used, sizeof(http->buffer) - (size_t)http->used, /*timeout*/0)) > 0)
+ {
+ DEBUG_printf(("2_httpUpdate: Read %d bytes.", (int)bytes));
+ http->used += (int)bytes;
+ }
+ }
+
+ /* Peek at the incoming data... */
+ if (!http->used || !memchr(http->buffer, '\n', (size_t)http->used))
+ {
+ /* Don't have a full line, tell the reader to try again when there is more data... */
+ DEBUG_puts("1_htttpUpdate: No newline in buffer yet.");
+ if ((size_t)http->used == sizeof(http->buffer))
+ *status = HTTP_STATUS_ERROR;
+ else
+ *status = HTTP_STATUS_CONTINUE;
+ return (0);
+ }
+
+ DEBUG_puts("2_httpUpdate: Found newline in buffer.");
+ }
+
/*
* Grab a single line from the connection...
*/
if (!httpGets(line, sizeof(line), http))
{
+ DEBUG_puts("1_httpUpdate: Error reading request line.");
*status = HTTP_STATUS_ERROR;
return (0);
}
@@ -4134,7 +4157,8 @@ http_debug_hex(const char *prefix, /* I - Prefix for line */
static ssize_t /* O - Number of bytes read or -1 on error */
http_read(http_t *http, /* I - HTTP connection */
char *buffer, /* I - Buffer */
- size_t length) /* I - Maximum bytes to read */
+ size_t length, /* I - Maximum bytes to read */
+ int timeout) /* I - Wait timeout */
{
ssize_t bytes; /* Bytes read */
@@ -4143,7 +4167,7 @@ http_read(http_t *http, /* I - HTTP connection */
if (!http->blocking || http->timeout_value > 0.0)
{
- while (!httpWait(http, http->wait_value))
+ while (!_httpWait(http, timeout, 1))
{
if (http->timeout_cb && (*http->timeout_cb)(http, http->timeout_data))
continue;
@@ -4246,7 +4270,7 @@ http_read_buffered(http_t *http, /* I - HTTP connection */
else
bytes = (ssize_t)length;
- DEBUG_printf(("8http_read: Grabbing %d bytes from input buffer.",
+ DEBUG_printf(("8http_read_buffered: Grabbing %d bytes from input buffer.",
(int)bytes));
memcpy(buffer, http->buffer, (size_t)bytes);
@@ -4256,7 +4280,7 @@ http_read_buffered(http_t *http, /* I - HTTP connection */
memmove(http->buffer, http->buffer + bytes, (size_t)http->used);
}
else
- bytes = http_read(http, buffer, length);
+ bytes = http_read(http, buffer, length, http->wait_value);
return (bytes);
}
@@ -4597,15 +4621,15 @@ http_set_timeout(int fd, /* I - File descriptor */
static void
http_set_wait(http_t *http) /* I - HTTP connection */
{
- if (http->blocking)
- {
- http->wait_value = (int)(http->timeout_value * 1000);
+ http->wait_value = (int)(http->timeout_value * 1000);
- if (http->wait_value <= 0)
+ if (http->wait_value <= 0)
+ {
+ if (http->blocking)
http->wait_value = 60000;
+ else
+ http->wait_value = 1000;
}
- else
- http->wait_value = 10000;
}
diff --git a/cups/tls-openssl.c b/cups/tls-openssl.c
index 9fcbe0af3..f746f4cba 100644
--- a/cups/tls-openssl.c
+++ b/cups/tls-openssl.c
@@ -215,12 +215,14 @@ cupsMakeServerCredentials(
// Save them...
if ((bio = BIO_new_file(keyfile, "wb")) == NULL)
{
+ DEBUG_printf(("1cupsMakeServerCredentials: Unable to create private key file '%s': %s", keyfile, strerror(errno)));
_cupsSetError(IPP_STATUS_ERROR_INTERNAL, strerror(errno), 0);
goto done;
}
if (!PEM_write_bio_PrivateKey(bio, pkey, NULL, NULL, 0, NULL, NULL))
{
+ DEBUG_puts("1cupsMakeServerCredentials: PEM_write_bio_PrivateKey failed.");
_cupsSetError(IPP_STATUS_ERROR_INTERNAL, _("Unable to write private key."), 1);
BIO_free(bio);
goto done;
@@ -230,12 +232,14 @@ cupsMakeServerCredentials(
if ((bio = BIO_new_file(crtfile, "wb")) == NULL)
{
+ DEBUG_printf(("1cupsMakeServerCredentials: Unable to create certificate file '%s': %s", crtfile, strerror(errno)));
_cupsSetError(IPP_STATUS_ERROR_INTERNAL, strerror(errno), 0);
goto done;
}
if (!PEM_write_bio_X509(bio, cert))
{
+ DEBUG_puts("1cupsMakeServerCredentials: PEM_write_bio_X509 failed.");
_cupsSetError(IPP_STATUS_ERROR_INTERNAL, _("Unable to write X.509 certificate."), 1);
BIO_free(bio);
goto done;
@@ -1082,10 +1086,10 @@ _httpTLSStart(http_t *http) // I - Connection to server
if (!cupsMakeServerCredentials(tls_keypath, cn, 0, NULL, time(NULL) + 3650 * 86400))
{
- DEBUG_puts("4_httpTLSStart: cupsMakeServerCredentials failed.");
+ DEBUG_printf(("4_httpTLSStart: cupsMakeServerCredentials failed: %s", cupsLastErrorString()));
http->error = errno = EINVAL;
http->status = HTTP_STATUS_ERROR;
- _cupsSetError(IPP_STATUS_ERROR_INTERNAL, _("Unable to create server credentials."), 1);
+// _cupsSetError(IPP_STATUS_ERROR_INTERNAL, _("Unable to create server credentials."), 1);
SSL_CTX_free(context);
_cupsMutexUnlock(&tls_mutex);
@@ -1346,14 +1350,17 @@ http_bio_read(BIO *h, // I - BIO data
http = (http_t *)BIO_get_data(h);
- if (!http->blocking)
+ if (!http->blocking || http->timeout_value > 0.0)
{
/*
* Make sure we have data before we read...
*/
- if (!_httpWait(http, 10000, 0))
+ while (!_httpWait(http, http->wait_value, 0))
{
+ if (http->timeout_cb && (*http->timeout_cb)(http, http->timeout_data))
+ continue;
+
#ifdef WIN32
http->error = WSAETIMEDOUT;
#else
diff --git a/scheduler/client.c b/scheduler/client.c
index 233f9017d..d495d9a75 100644
--- a/scheduler/client.c
+++ b/scheduler/client.c
@@ -34,11 +34,11 @@
static int check_if_modified(cupsd_client_t *con,
struct stat *filestats);
-static int compare_clients(cupsd_client_t *a, cupsd_client_t *b,
- void *data);
#ifdef HAVE_TLS
-static int cupsd_start_tls(cupsd_client_t *con, http_encryption_t e);
+static int check_start_tls(cupsd_client_t *con);
#endif /* HAVE_TLS */
+static int compare_clients(cupsd_client_t *a, cupsd_client_t *b,
+ void *data);
static char *get_file(cupsd_client_t *con, struct stat *filestats,
char *filename, size_t len);
static http_status_t install_cupsd_conf(cupsd_client_t *con);
@@ -360,14 +360,20 @@ cupsdAcceptClient(cupsd_listener_t *lis)/* I - Listener socket */
if (lis->encryption == HTTP_ENCRYPTION_ALWAYS)
{
/*
- * https connection; go secure...
+ * HTTPS connection, force TLS negotiation...
*/
- if (cupsd_start_tls(con, HTTP_ENCRYPTION_ALWAYS))
- cupsdCloseClient(con);
+ con->tls_start = time(NULL);
+ con->encryption = HTTP_ENCRYPTION_ALWAYS;
}
else
+ {
+ /*
+ * HTTP connection, but check for HTTPS negotiation on first data...
+ */
+
con->auto_ssl = 1;
+ }
#endif /* HAVE_TLS */
}
@@ -606,17 +612,46 @@ cupsdReadClient(cupsd_client_t *con) /* I - Client to read from */
con->auto_ssl = 0;
- if (recv(httpGetFd(con->http), buf, 1, MSG_PEEK) == 1 &&
- (!buf[0] || !strchr("DGHOPT", buf[0])))
+ if (recv(httpGetFd(con->http), buf, 5, MSG_PEEK) == 5 && buf[0] == 0x16 && buf[1] == 3 && buf[2])
{
/*
- * Encrypt this connection...
+ * Client hello record, encrypt this connection...
*/
- cupsdLogClient(con, CUPSD_LOG_DEBUG2, "Saw first byte %02X, auto-negotiating SSL/TLS session.", buf[0] & 255);
+ cupsdLogClient(con, CUPSD_LOG_DEBUG2, "Saw client hello record, auto-negotiating TLS session.");
+ con->tls_start = time(NULL);
+ con->encryption = HTTP_ENCRYPTION_ALWAYS;
+ }
+ }
- if (cupsd_start_tls(con, HTTP_ENCRYPTION_ALWAYS))
- cupsdCloseClient(con);
+ if (con->tls_start)
+ {
+ /*
+ * Try negotiating TLS...
+ */
+
+ int tls_status = check_start_tls(con);
+
+ if (tls_status < 0)
+ {
+ /*
+ * TLS negotiation failed, close the connection.
+ */
+
+ cupsdCloseClient(con);
+ return;
+ }
+ else if (tls_status == 0)
+ {
+ /*
+ * Nothing to do yet...
+ */
+
+ if ((time(NULL) - con->tls_start) > 5)
+ {
+ // Timeout, close the connection...
+ cupsdCloseClient(con);
+ }
return;
}
@@ -780,9 +815,7 @@ cupsdReadClient(cupsd_client_t *con) /* I - Client to read from */
* Parse incoming parameters until the status changes...
*/
- while ((status = httpUpdate(con->http)) == HTTP_STATUS_CONTINUE)
- if (!httpGetReady(con->http))
- break;
+ status = httpUpdate(con->http);
if (status != HTTP_STATUS_OK && status != HTTP_STATUS_CONTINUE)
{
@@ -944,11 +977,10 @@ cupsdReadClient(cupsd_client_t *con) /* I - Client to read from */
return;
}
- if (cupsd_start_tls(con, HTTP_ENCRYPTION_REQUIRED))
- {
- cupsdCloseClient(con);
- return;
- }
+ con->tls_start = time(NULL);
+ con->tls_upgrade = 1;
+ con->encryption = HTTP_ENCRYPTION_REQUIRED;
+ return;
#else
if (!cupsdSendError(con, HTTP_STATUS_NOT_IMPLEMENTED, CUPSD_AUTH_NONE))
{
@@ -987,32 +1019,11 @@ cupsdReadClient(cupsd_client_t *con) /* I - Client to read from */
if (!_cups_strcasecmp(httpGetField(con->http, HTTP_FIELD_CONNECTION),
"Upgrade") && !httpIsEncrypted(con->http))
{
-#ifdef HAVE_TLS
- /*
- * Do encryption stuff...
- */
-
- httpClearFields(con->http);
-
- if (!cupsdSendHeader(con, HTTP_STATUS_SWITCHING_PROTOCOLS, NULL,
- CUPSD_AUTH_NONE))
- {
- cupsdCloseClient(con);
- return;
- }
-
- if (cupsd_start_tls(con, HTTP_ENCRYPTION_REQUIRED))
- {
- cupsdCloseClient(con);
- return;
- }
-#else
if (!cupsdSendError(con, HTTP_STATUS_NOT_IMPLEMENTED, CUPSD_AUTH_NONE))
{
cupsdCloseClient(con);
return;
}
-#endif /* HAVE_TLS */
}
if ((status = cupsdIsAuthorized(con, NULL)) != HTTP_STATUS_OK)
@@ -2685,6 +2696,69 @@ check_if_modified(
}
+#ifdef HAVE_TLS
+/*
+ * 'check_start_tls()' - Start encryption on a connection.
+ */
+
+static int /* O - 0 to continue, 1 on success, -1 on error */
+check_start_tls(cupsd_client_t *con) /* I - Client connection */
+{
+ unsigned char chello[4096]; /* Client hello record */
+ ssize_t chello_bytes; /* Bytes read/peeked */
+ int chello_len; /* Length of record */
+
+
+ /*
+ * See if we have a good and complete client hello record...
+ */
+
+ if ((chello_bytes = recv(httpGetFd(con->http), (char *)chello, sizeof(chello), MSG_PEEK)) < 5)
+ return (0); /* Not enough bytes (yet) */
+
+ if (chello[0] != 0x016 || chello[1] != 3 || chello[2] == 0)
+ return (-1); /* Not a TLS Client Hello record */
+
+ chello_len = (chello[3] << 8) | chello[4];
+
+ if ((chello_len + 5) > chello_bytes)
+ return (0); /* Not enough bytes yet */
+
+ /*
+ * OK, we do, try negotiating...
+ */
+
+ con->tls_start = 0;
+
+ if (httpEncryption(con->http, con->encryption))
+ {
+ cupsdLogClient(con, CUPSD_LOG_ERROR, "Unable to encrypt connection: %s", cupsLastErrorString());
+ return (-1);
+ }
+
+ cupsdLogClient(con, CUPSD_LOG_DEBUG, "Connection now encrypted.");
+
+ if (con->tls_upgrade)
+ {
+ // Respond to the original OPTIONS command...
+ con->tls_upgrade = 0;
+
+ httpClearFields(con->http);
+ httpClearCookie(con->http);
+ httpSetField(con->http, HTTP_FIELD_CONTENT_LENGTH, "0");
+
+ if (!cupsdSendHeader(con, HTTP_STATUS_OK, NULL, CUPSD_AUTH_NONE))
+ {
+ cupsdCloseClient(con);
+ return (-1);
+ }
+ }
+
+ return (1);
+}
+#endif /* HAVE_TLS */
+
+
/*
* 'compare_clients()' - Compare two client connections.
*/
@@ -2705,28 +2779,6 @@ compare_clients(cupsd_client_t *a, /* I - First client */
}
-#ifdef HAVE_TLS
-/*
- * 'cupsd_start_tls()' - Start encryption on a connection.
- */
-
-static int /* O - 0 on success, -1 on error */
-cupsd_start_tls(cupsd_client_t *con, /* I - Client connection */
- http_encryption_t e) /* I - Encryption mode */
-{
- if (httpEncryption(con->http, e))
- {
- cupsdLogClient(con, CUPSD_LOG_ERROR, "Unable to encrypt connection: %s",
- cupsLastErrorString());
- return (-1);
- }
-
- cupsdLogClient(con, CUPSD_LOG_DEBUG, "Connection now encrypted.");
- return (0);
-}
-#endif /* HAVE_TLS */
-
-
/*
* 'get_file()' - Get a filename and state info.
*/
diff --git a/scheduler/client.h b/scheduler/client.h
index 9fe4e2ea6..2939ce997 100644
--- a/scheduler/client.h
+++ b/scheduler/client.h
@@ -53,6 +53,9 @@ struct cupsd_client_s
cups_lang_t *language; /* Language to use */
#ifdef HAVE_TLS
int auto_ssl; /* Automatic test for SSL/TLS */
+ time_t tls_start; /* Do TLS negotiation? */
+ int tls_upgrade; /* Doing TLS upgrade via OPTIONS? */
+ http_encryption_t encryption; /* Type of TLS negotiation */
#endif /* HAVE_TLS */
http_addr_t clientaddr; /* Client's server address */
char clientname[256];/* Client's server name for connection */
diff --git a/scheduler/select.c b/scheduler/select.c
index 2e64f2a7e..ac6205c51 100644
--- a/scheduler/select.c
+++ b/scheduler/select.c
@@ -408,6 +408,9 @@ cupsdDoSelect(long timeout) /* I - Timeout in seconds */
cupsd_in_select = 1;
+ // Prevent 100% CPU by releasing control before the kevent call...
+ usleep(1);
+
if (timeout >= 0 && timeout < 86400)
{
ktimeout.tv_sec = timeout;
@@ -452,6 +455,9 @@ cupsdDoSelect(long timeout) /* I - Timeout in seconds */
struct epoll_event *event; /* Current event */
+ // Prevent 100% CPU by releasing control before the epoll_wait call...
+ usleep(1);
+
if (timeout >= 0 && timeout < 86400)
nfds = epoll_wait(cupsd_epoll_fd, cupsd_epoll_events, MaxFDs,
timeout * 1000);
@@ -544,6 +550,9 @@ cupsdDoSelect(long timeout) /* I - Timeout in seconds */
}
}
+ // Prevent 100% CPU by releasing control before the poll call...
+ usleep(1);
+
if (timeout >= 0 && timeout < 86400)
nfds = poll(cupsd_pollfds, (nfds_t)count, timeout * 1000);
else
@@ -597,6 +606,9 @@ cupsdDoSelect(long timeout) /* I - Timeout in seconds */
cupsd_current_input = cupsd_global_input;
cupsd_current_output = cupsd_global_output;
+ // Prevent 100% CPU by releasing control before the select call...
+ usleep(1);
+
if (timeout >= 0 && timeout < 86400)
{
stimeout.tv_sec = timeout;
--
2.44.1
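
check_start_tls() above only proceeds once a complete TLS client hello record has been peeked from the socket. The header check and length decoding can be sketched in isolation (hypothetical helper name):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A TLS record starts with a content type byte (0x16 = handshake), a
 * major version byte (3 for TLS), a minor version byte (checked nonzero
 * here, as in the patch), then a big-endian 16-bit record length.
 * Returns the record length once the whole record is buffered, 0 when
 * more bytes are needed, -1 when the bytes are not a TLS handshake
 * record at all. */
static int classify_client_hello(const uint8_t *buf, size_t len)
{
    int record_len;

    if (len < 5)
        return 0;  /* record header not complete yet */
    if (buf[0] != 0x16 || buf[1] != 3 || buf[2] == 0)
        return -1; /* not a TLS handshake record */

    record_len = (buf[3] << 8) | buf[4];
    if ((size_t)(record_len + 5) > len)
        return 0;  /* record body still arriving */
    return record_len;
}
```

Because the scheduler peeks with MSG_PEEK, a "0 = not yet" result lets it return to the event loop without consuming bytes, and the 5-second tls_start timeout in the patch closes connections that never complete the record.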


@@ -0,0 +1,491 @@
From 3ff24bbe1d0e11a2edb5cac0ae421b8e95220651 Mon Sep 17 00:00:00 2001
From: Zdenek Dohnal <zdohnal@redhat.com>
Date: Fri, 21 Nov 2025 07:36:36 +0100
Subject: [PATCH] Fix various issues in cupsd
Various issues were found by @SilverPlate3, recognized as CVE-2025-61915:
- an out-of-bounds write when handling IPv6 addresses,
- a cupsd crash caused by a null dereference when the ErrorPolicy value is empty.
On top of that, Mike Sweet noticed a vulnerability via the domain socket,
exploitable locally if an attacker has access to the domain socket and knows
the username of a user within a group present in the CUPS system groups:
- a rewrite of cupsd.conf via PeerCred authorization over the domain socket.
The last vulnerability is fixed by introducing a PeerCred directive in
cups-files.conf, which controls whether PeerCred is enabled or disabled for
users in the CUPS system groups.
Fixes CVE-2025-61915
CVE: CVE-2025-61915
Upstream-Status: Backport [https://github.com/OpenPrinting/cups/commit/db8d560262c2]
(cherry picked from commit db8d560262c22a21ee1e55dfd62fa98d9359bcb0)
Signed-off-by: Deepak Rathore <deeratho@cisco.com>
---
conf/cups-files.conf.in | 3 ++
config-scripts/cups-defaults.m4 | 9 +++++
config.h.in | 7 ++++
configure | 22 ++++++++++
doc/help/man-cups-files.conf.html | 9 ++++-
man/cups-files.conf.5 | 17 ++++++--
scheduler/auth.c | 8 +++-
scheduler/auth.h | 7 ++++
scheduler/client.c | 2 +-
scheduler/conf.c | 60 ++++++++++++++++++++++++----
test/run-stp-tests.sh | 2 +-
vcnet/config.h | 7 ++++
xcode/CUPS.xcodeproj/project.pbxproj | 2 -
xcode/config.h | 7 ++++
14 files changed, 145 insertions(+), 17 deletions(-)
diff --git a/conf/cups-files.conf.in b/conf/cups-files.conf.in
index 27d8be96f..bc999e420 100644
--- a/conf/cups-files.conf.in
+++ b/conf/cups-files.conf.in
@@ -22,6 +22,9 @@
SystemGroup @CUPS_SYSTEM_GROUPS@
@CUPS_SYSTEM_AUTHKEY@
+# Are Unix domain socket peer credentials used for authorization?
+PeerCred @CUPS_PEER_CRED@
+
# User that is substituted for unauthenticated (remote) root accesses...
#RemoteRoot remroot
diff --git a/config-scripts/cups-defaults.m4 b/config-scripts/cups-defaults.m4
index 27e5bc472..b4f03d624 100644
--- a/config-scripts/cups-defaults.m4
+++ b/config-scripts/cups-defaults.m4
@@ -129,6 +129,15 @@ AC_ARG_WITH([log_level], AS_HELP_STRING([--with-log-level], [set default LogLeve
AC_SUBST([CUPS_LOG_LEVEL])
AC_DEFINE_UNQUOTED([CUPS_DEFAULT_LOG_LEVEL], ["$CUPS_LOG_LEVEL"], [Default LogLevel value.])
+dnl Default PeerCred
+AC_ARG_WITH([peer_cred], AS_HELP_STRING([--with-peer-cred], [set default PeerCred value (on/off/root-only), default=on]), [
+ CUPS_PEER_CRED="$withval"
+], [
+ CUPS_PEER_CRED="on"
+])
+AC_SUBST([CUPS_PEER_CRED])
+AC_DEFINE_UNQUOTED([CUPS_DEFAULT_PEER_CRED], ["$CUPS_PEER_CRED"], [Default PeerCred value.])
+
dnl Default AccessLogLevel
AC_ARG_WITH(access_log_level, [ --with-access-log-level set default AccessLogLevel value, default=none],
CUPS_ACCESS_LOG_LEVEL="$withval",
diff --git a/config.h.in b/config.h.in
index 6940b9604..222b3b5bf 100644
--- a/config.h.in
+++ b/config.h.in
@@ -86,6 +86,13 @@
#define CUPS_DEFAULT_ERROR_POLICY "stop-printer"
+/*
+ * Default PeerCred value...
+ */
+
+#define CUPS_DEFAULT_PEER_CRED "on"
+
+
/*
* Default MaxCopies value...
*/
diff --git a/configure b/configure
index f8147c9d6..f456c8588 100755
--- a/configure
+++ b/configure
@@ -672,6 +672,7 @@ CUPS_BROWSING
CUPS_SYNC_ON_CLOSE
CUPS_PAGE_LOG_FORMAT
CUPS_ACCESS_LOG_LEVEL
+CUPS_PEER_CRED
CUPS_LOG_LEVEL
CUPS_FATAL_ERRORS
CUPS_ERROR_POLICY
@@ -925,6 +926,7 @@ with_max_log_size
with_error_policy
with_fatal_errors
with_log_level
+with_peer_cred
with_access_log_level
enable_page_logging
enable_sync_on_close
@@ -1661,6 +1663,8 @@ Optional Packages:
--with-error-policy set default ErrorPolicy value, default=stop-printer
--with-fatal-errors set default FatalErrors value, default=config
--with-log-level set default LogLevel value, default=warn
+ --with-peer-cred set default PeerCred value (on/off/root-only),
+ default=on
--with-access-log-level set default AccessLogLevel value, default=none
--with-local-protocols set default BrowseLocalProtocols, default=""
--with-cups-user set default user for CUPS
@@ -11718,6 +11722,24 @@ printf "%s\n" "#define CUPS_DEFAULT_LOG_LEVEL \"$CUPS_LOG_LEVEL\"" >>confdefs.h
+# Check whether --with-peer_cred was given.
+if test ${with_peer_cred+y}
+then :
+ withval=$with_peer_cred;
+ CUPS_PEER_CRED="$withval"
+
+else $as_nop
+
+ CUPS_PEER_CRED="on"
+
+fi
+
+
+
+printf "%s\n" "#define CUPS_DEFAULT_PEER_CRED \"$CUPS_PEER_CRED\"" >>confdefs.h
+
+
+
# Check whether --with-access_log_level was given.
if test ${with_access_log_level+y}
then :
diff --git a/doc/help/man-cups-files.conf.html b/doc/help/man-cups-files.conf.html
index c0c775dec..5a9ddefeb 100644
--- a/doc/help/man-cups-files.conf.html
+++ b/doc/help/man-cups-files.conf.html
@@ -119,6 +119,13 @@ The default is "/var/log/cups/page_log".
<dt><a name="PassEnv"></a><b>PassEnv </b><i>variable </i>[ ... <i>variable </i>]
<dd style="margin-left: 5.0em">Passes the specified environment variable(s) to child processes.
Note: the standard CUPS filter and backend environment variables cannot be overridden using this directive.
+<dt><a name="PeerCred"></a><b>PeerCred off</b>
+<dd style="margin-left: 5.0em"><dt><b>PeerCred on</b>
+<dd style="margin-left: 5.0em"><dt><b>PeerCred root-only</b>
+<dd style="margin-left: 5.0em">Specifies whether peer credentials are used for authorization when communicating over the UNIX domain socket.
+When <b>on</b>, the peer credentials of any user are accepted for authorization.
+The value <b>off</b> disables the use of peer credentials entirely, while the value <b>root-only</b> allows peer credentials only for the root user.
+Note: for security reasons, the <b>on</b> setting is reduced to <b>root-only</b> for authorization of PUT requests.
<dt><a name="RemoteRoot"></a><b>RemoteRoot </b><i>username</i>
<dd style="margin-left: 5.0em">Specifies the username that is associated with unauthenticated accesses by clients claiming to be the root user.
The default is "remroot".
@@ -207,7 +214,7 @@ command is used instead.
<a href="man-subscriptions.conf.html?TOPIC=Man+Pages"><b>subscriptions.conf</b>(5),</a>
CUPS Online Help (<a href="http://localhost:631/help">http://localhost:631/help</a>)
<h2 class="title"><a name="COPYRIGHT">Copyright</a></h2>
-Copyright &copy; 2020-2023 by OpenPrinting.
+Copyright &copy; 2020-2025 by OpenPrinting.
</body>
</html>
diff --git a/man/cups-files.conf.5 b/man/cups-files.conf.5
index 8358b62a1..107072c3c 100644
--- a/man/cups-files.conf.5
+++ b/man/cups-files.conf.5
@@ -1,14 +1,14 @@
.\"
.\" cups-files.conf man page for CUPS.
.\"
-.\" Copyright © 2020-2024 by OpenPrinting.
+.\" Copyright © 2020-2025 by OpenPrinting.
.\" Copyright © 2007-2019 by Apple Inc.
.\" Copyright © 1997-2006 by Easy Software Products.
.\"
.\" Licensed under Apache License v2.0. See the file "LICENSE" for more
.\" information.
.\"
-.TH cups-files.conf 5 "CUPS" "2021-03-06" "OpenPrinting"
+.TH cups-files.conf 5 "CUPS" "2025-10-08" "OpenPrinting"
.SH NAME
cups\-files.conf \- file and directory configuration file for cups
.SH DESCRIPTION
@@ -166,6 +166,17 @@ The default is "/var/log/cups/page_log".
\fBPassEnv \fIvariable \fR[ ... \fIvariable \fR]
Passes the specified environment variable(s) to child processes.
Note: the standard CUPS filter and backend environment variables cannot be overridden using this directive.
+.\"#PeerCred
+.TP 5
+\fBPeerCred off\fR
+.TP 5
+\fBPeerCred on\fR
+.TP 5
+\fBPeerCred root-only\fR
+Specifies whether peer credentials are used for authorization when communicating over the UNIX domain socket.
+When \fBon\fR, the peer credentials of any user are accepted for authorization.
+The value \fBoff\fR disables the use of peer credentials entirely, while the value \fBroot-only\fR allows peer credentials only for the root user.
+Note: for security reasons, the \fBon\fR setting is reduced to \fBroot-only\fR for authorization of PUT requests.
.\"#RemoteRoot
.TP 5
\fBRemoteRoot \fIusername\fR
@@ -289,4 +300,4 @@ command is used instead.
.BR subscriptions.conf (5),
CUPS Online Help (http://localhost:631/help)
.SH COPYRIGHT
-Copyright \[co] 2020-2024 by OpenPrinting.
+Copyright \[co] 2020-2025 by OpenPrinting.
diff --git a/scheduler/auth.c b/scheduler/auth.c
index 3c9aa72aa..bd0d28a0e 100644
--- a/scheduler/auth.c
+++ b/scheduler/auth.c
@@ -398,7 +398,7 @@ cupsdAuthorize(cupsd_client_t *con) /* I - Client connection */
}
#endif /* HAVE_AUTHORIZATION_H */
#if defined(SO_PEERCRED) && defined(AF_LOCAL)
- else if (!strncmp(authorization, "PeerCred ", 9) &&
+ else if (PeerCred != CUPSD_PEERCRED_OFF && !strncmp(authorization, "PeerCred ", 9) &&
con->http->hostaddr->addr.sa_family == AF_LOCAL && con->best)
{
/*
@@ -441,6 +441,12 @@ cupsdAuthorize(cupsd_client_t *con) /* I - Client connection */
}
#endif /* HAVE_AUTHORIZATION_H */
+ if ((PeerCred == CUPSD_PEERCRED_ROOTONLY || httpGetState(con->http) == HTTP_STATE_PUT_RECV) && strcmp(authorization + 9, "root"))
+ {
+ cupsdLogClient(con, CUPSD_LOG_INFO, "User \"%s\" is not allowed to use peer credentials.", authorization + 9);
+ return;
+ }
+
if ((pwd = getpwnam(authorization + 9)) == NULL)
{
cupsdLogClient(con, CUPSD_LOG_ERROR, "User \"%s\" does not exist.", authorization + 9);
diff --git a/scheduler/auth.h b/scheduler/auth.h
index ee98e92c7..fdf71213f 100644
--- a/scheduler/auth.h
+++ b/scheduler/auth.h
@@ -50,6 +50,10 @@
#define CUPSD_AUTH_LIMIT_ALL 127 /* Limit all requests */
#define CUPSD_AUTH_LIMIT_IPP 128 /* Limit IPP requests */
+#define CUPSD_PEERCRED_OFF 0 /* Don't allow PeerCred authorization */
+#define CUPSD_PEERCRED_ON 1 /* Allow PeerCred authorization for all users */
+#define CUPSD_PEERCRED_ROOTONLY 2 /* Allow PeerCred authorization for root user */
+
#define IPP_ANY_OPERATION (ipp_op_t)0
/* Any IPP operation */
#define IPP_BAD_OPERATION (ipp_op_t)-1
@@ -105,6 +109,9 @@ typedef struct
VAR cups_array_t *Locations VALUE(NULL);
/* Authorization locations */
+VAR int PeerCred VALUE(CUPSD_PEERCRED_ON);
+ /* Allow PeerCred authorization? */
+
#ifdef HAVE_TLS
VAR http_encryption_t DefaultEncryption VALUE(HTTP_ENCRYPT_REQUIRED);
/* Default encryption for authentication */
diff --git a/scheduler/client.c b/scheduler/client.c
index d495d9a75..81db4aa52 100644
--- a/scheduler/client.c
+++ b/scheduler/client.c
@@ -2204,7 +2204,7 @@ cupsdSendHeader(
auth_size = sizeof(auth_str) - (size_t)(auth_key - auth_str);
#if defined(SO_PEERCRED) && defined(AF_LOCAL)
- if (httpAddrFamily(httpGetAddress(con->http)) == AF_LOCAL)
+ if (PeerCred != CUPSD_PEERCRED_OFF && httpAddrFamily(httpGetAddress(con->http)) == AF_LOCAL)
{
strlcpy(auth_key, ", PeerCred", auth_size);
auth_key += 10;
diff --git a/scheduler/conf.c b/scheduler/conf.c
index 3184d72f0..6accf0590 100644
--- a/scheduler/conf.c
+++ b/scheduler/conf.c
@@ -47,6 +47,7 @@ typedef enum
{
CUPSD_VARTYPE_INTEGER, /* Integer option */
CUPSD_VARTYPE_TIME, /* Time interval option */
+ CUPSD_VARTYPE_NULLSTRING, /* String option or NULL/empty string */
CUPSD_VARTYPE_STRING, /* String option */
CUPSD_VARTYPE_BOOLEAN, /* Boolean option */
CUPSD_VARTYPE_PATHNAME, /* File/directory name option */
@@ -69,7 +70,7 @@ static const cupsd_var_t cupsd_vars[] =
{
{ "AutoPurgeJobs", &JobAutoPurge, CUPSD_VARTYPE_BOOLEAN },
#ifdef HAVE_DNSSD
- { "BrowseDNSSDSubTypes", &DNSSDSubTypes, CUPSD_VARTYPE_STRING },
+ { "BrowseDNSSDSubTypes", &DNSSDSubTypes, CUPSD_VARTYPE_NULLSTRING },
#endif /* HAVE_DNSSD */
{ "BrowseWebIF", &BrowseWebIF, CUPSD_VARTYPE_BOOLEAN },
{ "Browsing", &Browsing, CUPSD_VARTYPE_BOOLEAN },
@@ -120,7 +121,7 @@ static const cupsd_var_t cupsd_vars[] =
{ "MaxSubscriptionsPerPrinter",&MaxSubscriptionsPerPrinter, CUPSD_VARTYPE_INTEGER },
{ "MaxSubscriptionsPerUser", &MaxSubscriptionsPerUser, CUPSD_VARTYPE_INTEGER },
{ "MultipleOperationTimeout", &MultipleOperationTimeout, CUPSD_VARTYPE_TIME },
- { "PageLogFormat", &PageLogFormat, CUPSD_VARTYPE_STRING },
+ { "PageLogFormat", &PageLogFormat, CUPSD_VARTYPE_NULLSTRING },
{ "PreserveJobFiles", &JobFiles, CUPSD_VARTYPE_TIME },
{ "PreserveJobHistory", &JobHistory, CUPSD_VARTYPE_TIME },
{ "ReloadTimeout", &ReloadTimeout, CUPSD_VARTYPE_TIME },
@@ -791,6 +792,13 @@ cupsdReadConfiguration(void)
IdleExitTimeout = 60;
#endif /* HAVE_ONDEMAND */
+ if (!strcmp(CUPS_DEFAULT_PEER_CRED, "off"))
+ PeerCred = CUPSD_PEERCRED_OFF;
+ else if (!strcmp(CUPS_DEFAULT_PEER_CRED, "root-only"))
+ PeerCred = CUPSD_PEERCRED_ROOTONLY;
+ else
+ PeerCred = CUPSD_PEERCRED_ON;
+
/*
* Setup environment variables...
*/
@@ -1831,7 +1839,7 @@ get_addr_and_mask(const char *value, /* I - String from config file */
family = AF_INET6;
- for (i = 0, ptr = value + 1; *ptr && i < 8; i ++)
+ for (i = 0, ptr = value + 1; *ptr && i >= 0 && i < 8; i ++)
{
if (*ptr == ']')
break;
@@ -1977,7 +1985,7 @@ get_addr_and_mask(const char *value, /* I - String from config file */
#ifdef AF_INET6
if (family == AF_INET6)
{
- if (i > 128)
+ if (i < 0 || i > 128)
return (0);
i = 128 - i;
@@ -2011,7 +2019,7 @@ get_addr_and_mask(const char *value, /* I - String from config file */
else
#endif /* AF_INET6 */
{
- if (i > 32)
+ if (i < 0 || i > 32)
return (0);
mask[0] = 0xffffffff;
@@ -2921,7 +2929,17 @@ parse_variable(
cupsdSetString((char **)var->ptr, temp);
break;
+ case CUPSD_VARTYPE_NULLSTRING :
+ cupsdSetString((char **)var->ptr, value);
+ break;
+
case CUPSD_VARTYPE_STRING :
+ if (!value)
+ {
+ cupsdLogMessage(CUPSD_LOG_ERROR, "Missing value for %s on line %d of %s.", line, linenum, filename);
+ return (0);
+ }
+
cupsdSetString((char **)var->ptr, value);
break;
}
@@ -3436,9 +3454,10 @@ read_cupsd_conf(cups_file_t *fp) /* I - File to read from */
line, value ? " " : "", value ? value : "", linenum,
ConfigurationFile, CupsFilesFile);
}
- else
- parse_variable(ConfigurationFile, linenum, line, value,
- sizeof(cupsd_vars) / sizeof(cupsd_vars[0]), cupsd_vars);
+ else if (!parse_variable(ConfigurationFile, linenum, line, value,
+ sizeof(cupsd_vars) / sizeof(cupsd_vars[0]), cupsd_vars) &&
+ (FatalErrors & CUPSD_FATAL_CONFIG))
+ return (0);
}
return (1);
@@ -3597,6 +3616,31 @@ read_cups_files_conf(cups_file_t *fp) /* I - File to read from */
break;
}
}
+ else if (!_cups_strcasecmp(line, "PeerCred") && value)
+ {
+ /*
+ * PeerCred {off,on,root-only}
+ */
+
+ if (!_cups_strcasecmp(value, "off"))
+ {
+ PeerCred = CUPSD_PEERCRED_OFF;
+ }
+ else if (!_cups_strcasecmp(value, "on"))
+ {
+ PeerCred = CUPSD_PEERCRED_ON;
+ }
+ else if (!_cups_strcasecmp(value, "root-only"))
+ {
+ PeerCred = CUPSD_PEERCRED_ROOTONLY;
+ }
+ else
+ {
+ cupsdLogMessage(CUPSD_LOG_ERROR, "Unknown PeerCred \"%s\" on line %d of %s.", value, linenum, CupsFilesFile);
+ if (FatalErrors & CUPSD_FATAL_CONFIG)
+ return (0);
+ }
+ }
else if (!_cups_strcasecmp(line, "PrintcapFormat") && value)
{
/*
diff --git a/test/run-stp-tests.sh b/test/run-stp-tests.sh
index 39b53c3e4..2089f7944 100755
--- a/test/run-stp-tests.sh
+++ b/test/run-stp-tests.sh
@@ -512,7 +512,7 @@ fi
cat >$BASE/cups-files.conf <<EOF
FileDevice yes
-Printcap
+Printcap $BASE/printcap
User $user
ServerRoot $BASE
StateDir $BASE
diff --git a/vcnet/config.h b/vcnet/config.h
index 7fc459217..76f5adbb7 100644
--- a/vcnet/config.h
+++ b/vcnet/config.h
@@ -169,6 +169,13 @@ typedef unsigned long useconds_t;
#define CUPS_DEFAULT_ERROR_POLICY "stop-printer"
+/*
+ * Default PeerCred value...
+ */
+
+#define CUPS_DEFAULT_PEER_CRED "on"
+
+
/*
* Default MaxCopies value...
*/
diff --git a/xcode/CUPS.xcodeproj/project.pbxproj b/xcode/CUPS.xcodeproj/project.pbxproj
index 597946440..54ac652a1 100644
--- a/xcode/CUPS.xcodeproj/project.pbxproj
+++ b/xcode/CUPS.xcodeproj/project.pbxproj
@@ -3433,7 +3433,6 @@
72220FB313330BCE00FCA411 /* mime.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = mime.c; path = ../scheduler/mime.c; sourceTree = "<group>"; };
72220FB413330BCE00FCA411 /* mime.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = mime.h; path = ../scheduler/mime.h; sourceTree = "<group>"; };
72220FB513330BCE00FCA411 /* type.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = type.c; path = ../scheduler/type.c; sourceTree = "<group>"; };
- 7226369B18AE6D19004ED309 /* org.cups.cups-lpd.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist.xml; name = "org.cups.cups-lpd.plist"; path = "../scheduler/org.cups.cups-lpd.plist"; sourceTree = SOURCE_ROOT; };
7226369C18AE6D19004ED309 /* org.cups.cupsd.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist.xml; name = org.cups.cupsd.plist; path = ../scheduler/org.cups.cupsd.plist; sourceTree = SOURCE_ROOT; };
7226369D18AE73BB004ED309 /* config.h.in */ = {isa = PBXFileReference; lastKnownFileType = text; name = config.h.in; path = ../config.h.in; sourceTree = "<group>"; };
722A24EE2178D00C000CAB20 /* debug-internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = "debug-internal.h"; path = "../cups/debug-internal.h"; sourceTree = "<group>"; };
@@ -5055,7 +5054,6 @@
isa = PBXGroup;
children = (
72E65BDC18DC852700097E89 /* Makefile */,
- 7226369B18AE6D19004ED309 /* org.cups.cups-lpd.plist */,
72E65BD518DC818400097E89 /* org.cups.cups-lpd.plist.in */,
7226369C18AE6D19004ED309 /* org.cups.cupsd.plist */,
72220F6913330B0C00FCA411 /* auth.c */,
diff --git a/xcode/config.h b/xcode/config.h
index e4a63f69d..366da777e 100644
--- a/xcode/config.h
+++ b/xcode/config.h
@@ -88,6 +88,13 @@
#define CUPS_DEFAULT_ERROR_POLICY "stop-printer"
+/*
+ * Default PeerCred value...
+ */
+
+#define CUPS_DEFAULT_PEER_CRED "on"
+
+
/*
* Default MaxCopies value...
*/
--
2.44.1
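For quick reference, the directive this patch introduces is set in cups-files.conf. A minimal sketch (accepted values per the man page text added above; the root-only choice below is purely illustrative, not the upstream default, which is "on"):

```text
# /etc/cups/cups-files.conf (excerpt)
# Are Unix domain socket peer credentials used for authorization?
# Accepted values: on (default), off, root-only
PeerCred root-only
```

Note that even with PeerCred on, the patch reduces PUT requests to root-only authorization for security reasons.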

@@ -0,0 +1,28 @@
From 53d2bc4f89fcbd7414b92bd242f6cdc901941f55 Mon Sep 17 00:00:00 2001
From: Tim Kientzle <kientzle@acm.org>
Date: Sat, 16 Aug 2025 10:27:11 -0600
Subject: [PATCH] Merge pull request #2696 from al3xtjames/mkstemp
Fix mkstemp path in setup_mac_metadata
(cherry picked from commit 892f33145093d1c9b962b6521a6480dfea66ae00)
Upstream-Status: Backport [https://github.com/libarchive/libarchive/commit/53d2bc4f89fcbd7414b92bd242f6cdc901941f55]
Signed-off-by: Peter Marko <peter.marko@siemens.com>
---
libarchive/archive_read_disk_entry_from_file.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/libarchive/archive_read_disk_entry_from_file.c b/libarchive/archive_read_disk_entry_from_file.c
index 19d04977..87389642 100644
--- a/libarchive/archive_read_disk_entry_from_file.c
+++ b/libarchive/archive_read_disk_entry_from_file.c
@@ -364,7 +364,7 @@ setup_mac_metadata(struct archive_read_disk *a,
tempdir = _PATH_TMP;
archive_string_init(&tempfile);
archive_strcpy(&tempfile, tempdir);
- archive_strcat(&tempfile, "tar.md.XXXXXX");
+ archive_strcat(&tempfile, "/tar.md.XXXXXX");
tempfd = mkstemp(tempfile.s);
if (tempfd < 0) {
archive_set_error(&a->archive, errno,

@@ -0,0 +1,186 @@
From 82e31ba4a9afcce0c7c19e591ccd8653196d84a0 Mon Sep 17 00:00:00 2001
From: Tim Kientzle <kientzle@acm.org>
Date: Mon, 13 Oct 2025 10:57:18 -0700
Subject: [PATCH] Merge pull request #2749 from KlaraSystems/des/tempdir
Unify temporary directory handling
(cherry picked from commit d207d816d065c79dc2cb992008c3ba9721c6a276)
Upstream-Status: Backport [https://github.com/libarchive/libarchive/commit/82e31ba4a9afcce0c7c19e591ccd8653196d84a0]
Signed-off-by: Peter Marko <peter.marko@siemens.com>
---
CMakeLists.txt | 6 ++-
configure.ac | 6 ++-
libarchive/archive_private.h | 1 +
.../archive_read_disk_entry_from_file.c | 14 +++----
libarchive/archive_read_disk_posix.c | 3 --
libarchive/archive_util.c | 38 ++++++++++++++++---
6 files changed, 49 insertions(+), 19 deletions(-)
diff --git a/CMakeLists.txt b/CMakeLists.txt
index f44adc77..fc9aca4e 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1455,15 +1455,19 @@ CHECK_FUNCTION_EXISTS_GLIBC(ftruncate HAVE_FTRUNCATE)
CHECK_FUNCTION_EXISTS_GLIBC(futimens HAVE_FUTIMENS)
CHECK_FUNCTION_EXISTS_GLIBC(futimes HAVE_FUTIMES)
CHECK_FUNCTION_EXISTS_GLIBC(futimesat HAVE_FUTIMESAT)
+CHECK_FUNCTION_EXISTS_GLIBC(getegid HAVE_GETEGID)
CHECK_FUNCTION_EXISTS_GLIBC(geteuid HAVE_GETEUID)
CHECK_FUNCTION_EXISTS_GLIBC(getgrgid_r HAVE_GETGRGID_R)
CHECK_FUNCTION_EXISTS_GLIBC(getgrnam_r HAVE_GETGRNAM_R)
CHECK_FUNCTION_EXISTS_GLIBC(getline HAVE_GETLINE)
+CHECK_FUNCTION_EXISTS_GLIBC(getpid HAVE_GETPID)
CHECK_FUNCTION_EXISTS_GLIBC(getpwnam_r HAVE_GETPWNAM_R)
CHECK_FUNCTION_EXISTS_GLIBC(getpwuid_r HAVE_GETPWUID_R)
-CHECK_FUNCTION_EXISTS_GLIBC(getpid HAVE_GETPID)
+CHECK_FUNCTION_EXISTS_GLIBC(getresgid HAVE_GETRESGID)
+CHECK_FUNCTION_EXISTS_GLIBC(getresuid HAVE_GETRESUID)
CHECK_FUNCTION_EXISTS_GLIBC(getvfsbyname HAVE_GETVFSBYNAME)
CHECK_FUNCTION_EXISTS_GLIBC(gmtime_r HAVE_GMTIME_R)
+CHECK_FUNCTION_EXISTS_GLIBC(issetugid HAVE_ISSETUGID)
CHECK_FUNCTION_EXISTS_GLIBC(lchflags HAVE_LCHFLAGS)
CHECK_FUNCTION_EXISTS_GLIBC(lchmod HAVE_LCHMOD)
CHECK_FUNCTION_EXISTS_GLIBC(lchown HAVE_LCHOWN)
diff --git a/configure.ac b/configure.ac
index aae0f381..a1a8f380 100644
--- a/configure.ac
+++ b/configure.ac
@@ -810,8 +810,10 @@ AC_CHECK_FUNCS([arc4random_buf chflags chown chroot ctime_r])
AC_CHECK_FUNCS([fchdir fchflags fchmod fchown fcntl fdopendir fnmatch fork])
AC_CHECK_FUNCS([fstat fstatat fstatfs fstatvfs ftruncate])
AC_CHECK_FUNCS([futimens futimes futimesat])
-AC_CHECK_FUNCS([geteuid getline getpid getgrgid_r getgrnam_r])
-AC_CHECK_FUNCS([getpwnam_r getpwuid_r getvfsbyname gmtime_r])
+AC_CHECK_FUNCS([getegid geteuid getline getpid getresgid getresuid])
+AC_CHECK_FUNCS([getgrgid_r getgrnam_r getpwnam_r getpwuid_r])
+AC_CHECK_FUNCS([getvfsbyname gmtime_r])
+AC_CHECK_FUNCS([issetugid])
AC_CHECK_FUNCS([lchflags lchmod lchown link linkat localtime_r lstat lutimes])
AC_CHECK_FUNCS([mbrtowc memmove memset])
AC_CHECK_FUNCS([mkdir mkfifo mknod mkstemp])
diff --git a/libarchive/archive_private.h b/libarchive/archive_private.h
index 050fc63c..3a926c68 100644
--- a/libarchive/archive_private.h
+++ b/libarchive/archive_private.h
@@ -158,6 +158,7 @@ int __archive_check_magic(struct archive *, unsigned int magic,
__LA_NORETURN void __archive_errx(int retvalue, const char *msg);
void __archive_ensure_cloexec_flag(int fd);
+int __archive_get_tempdir(struct archive_string *);
int __archive_mktemp(const char *tmpdir);
#if defined(_WIN32) && !defined(__CYGWIN__)
int __archive_mkstemp(wchar_t *templates);
diff --git a/libarchive/archive_read_disk_entry_from_file.c b/libarchive/archive_read_disk_entry_from_file.c
index 87389642..42af4034 100644
--- a/libarchive/archive_read_disk_entry_from_file.c
+++ b/libarchive/archive_read_disk_entry_from_file.c
@@ -338,7 +338,7 @@ setup_mac_metadata(struct archive_read_disk *a,
int ret = ARCHIVE_OK;
void *buff = NULL;
int have_attrs;
- const char *name, *tempdir;
+ const char *name;
struct archive_string tempfile;
(void)fd; /* UNUSED */
@@ -357,14 +357,12 @@ setup_mac_metadata(struct archive_read_disk *a,
if (have_attrs == 0)
return (ARCHIVE_OK);
- tempdir = NULL;
- if (issetugid() == 0)
- tempdir = getenv("TMPDIR");
- if (tempdir == NULL)
- tempdir = _PATH_TMP;
archive_string_init(&tempfile);
- archive_strcpy(&tempfile, tempdir);
- archive_strcat(&tempfile, "/tar.md.XXXXXX");
+ if (__archive_get_tempdir(&tempfile) != ARCHIVE_OK) {
+ ret = ARCHIVE_WARN;
+ goto cleanup;
+ }
+ archive_strcat(&tempfile, "tar.md.XXXXXX");
tempfd = mkstemp(tempfile.s);
if (tempfd < 0) {
archive_set_error(&a->archive, errno,
diff --git a/libarchive/archive_read_disk_posix.c b/libarchive/archive_read_disk_posix.c
index ba0046d7..54a8e661 100644
--- a/libarchive/archive_read_disk_posix.c
+++ b/libarchive/archive_read_disk_posix.c
@@ -1578,9 +1578,6 @@ setup_current_filesystem(struct archive_read_disk *a)
# endif
#endif
int r, xr = 0;
-#if !defined(HAVE_STRUCT_STATFS_F_NAMEMAX)
- long nm;
-#endif
t->current_filesystem->synthetic = -1;
t->current_filesystem->remote = -1;
diff --git a/libarchive/archive_util.c b/libarchive/archive_util.c
index 900abd0c..d048bbc9 100644
--- a/libarchive/archive_util.c
+++ b/libarchive/archive_util.c
@@ -443,11 +443,39 @@ __archive_mkstemp(wchar_t *template)
#else
static int
-get_tempdir(struct archive_string *temppath)
+__archive_issetugid(void)
{
- const char *tmp;
+#ifdef HAVE_ISSETUGID
+ return (issetugid());
+#elif HAVE_GETRESUID
+ uid_t ruid, euid, suid;
+ gid_t rgid, egid, sgid;
+ if (getresuid(&ruid, &euid, &suid) != 0)
+ return (-1);
+ if (ruid != euid || ruid != suid)
+ return (1);
+ if (getresgid(&ruid, &egid, &sgid) != 0)
+ return (-1);
+ if (rgid != egid || rgid != sgid)
+ return (1);
+#elif HAVE_GETEUID
+ if (geteuid() != getuid())
+ return (1);
+#if HAVE_GETEGID
+ if (getegid() != getgid())
+ return (1);
+#endif
+#endif
+ return (0);
+}
- tmp = getenv("TMPDIR");
+int
+__archive_get_tempdir(struct archive_string *temppath)
+{
+ const char *tmp = NULL;
+
+ if (__archive_issetugid() == 0)
+ tmp = getenv("TMPDIR");
if (tmp == NULL)
#ifdef _PATH_TMP
tmp = _PATH_TMP;
@@ -474,7 +502,7 @@ __archive_mktemp(const char *tmpdir)
archive_string_init(&temp_name);
if (tmpdir == NULL) {
- if (get_tempdir(&temp_name) != ARCHIVE_OK)
+ if (__archive_get_tempdir(&temp_name) != ARCHIVE_OK)
goto exit_tmpfile;
} else {
archive_strcpy(&temp_name, tmpdir);
@@ -536,7 +564,7 @@ __archive_mktempx(const char *tmpdir, char *template)
if (template == NULL) {
archive_string_init(&temp_name);
if (tmpdir == NULL) {
- if (get_tempdir(&temp_name) != ARCHIVE_OK)
+ if (__archive_get_tempdir(&temp_name) != ARCHIVE_OK)
goto exit_tmpfile;
} else
archive_strcpy(&temp_name, tmpdir);
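The temp-directory policy unified by this patch (honor TMPDIR only when the process is not set-uid/set-gid, otherwise fall back to the system default) can be sketched in Python. pick_tempdir is a hypothetical name; comparing real vs. effective ids stands in for the issetugid()/getresuid() probes the patch adds:

```python
import os

def pick_tempdir() -> str:
    """Return the temp directory, mirroring __archive_get_tempdir()."""
    # Honor TMPDIR only when the process is not set-uid/set-gid --
    # the same guard the patch implements via issetugid()/getresuid().
    if os.getuid() == os.geteuid() and os.getgid() == os.getegid():
        tmp = os.environ.get("TMPDIR")
        if tmp:
            return tmp
    return "/tmp"  # stands in for _PATH_TMP
```

The point of the guard is that a set-id binary must not trust an attacker-controlled TMPDIR inherited from the environment.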

@@ -0,0 +1,190 @@
From c3593848067cea3b41bc11eec15f391318675cb4 Mon Sep 17 00:00:00 2001
From: Tim Kientzle <kientzle@acm.org>
Date: Tue, 28 Oct 2025 17:13:18 -0700
Subject: [PATCH] Merge pull request #2753 from KlaraSystems/des/temp-files
Create temporary files in the target directory
(cherry picked from commit d2e861769c25470427656b36a14b535f17d47d03)
Upstream-Status: Backport [https://github.com/libarchive/libarchive/commit/c3593848067cea3b41bc11eec15f391318675cb4]
Signed-off-by: Peter Marko <peter.marko@siemens.com>
---
.../archive_read_disk_entry_from_file.c | 10 ++---
libarchive/archive_string.c | 20 ++++++++++
libarchive/archive_string.h | 4 ++
libarchive/archive_write_disk_posix.c | 20 ++++++----
libarchive/test/test_archive_string.c | 38 +++++++++++++++++++
5 files changed, 79 insertions(+), 13 deletions(-)
diff --git a/libarchive/archive_read_disk_entry_from_file.c b/libarchive/archive_read_disk_entry_from_file.c
index 42af4034..121af198 100644
--- a/libarchive/archive_read_disk_entry_from_file.c
+++ b/libarchive/archive_read_disk_entry_from_file.c
@@ -358,12 +358,10 @@ setup_mac_metadata(struct archive_read_disk *a,
return (ARCHIVE_OK);
archive_string_init(&tempfile);
- if (__archive_get_tempdir(&tempfile) != ARCHIVE_OK) {
- ret = ARCHIVE_WARN;
- goto cleanup;
- }
- archive_strcat(&tempfile, "tar.md.XXXXXX");
- tempfd = mkstemp(tempfile.s);
+ archive_strcpy(&tempfile, name);
+ archive_string_dirname(&tempfile);
+ archive_strcat(&tempfile, "/tar.XXXXXXXX");
+ tempfd = __archive_mkstemp(tempfile.s);
if (tempfd < 0) {
archive_set_error(&a->archive, errno,
"Could not open extended attribute file");
diff --git a/libarchive/archive_string.c b/libarchive/archive_string.c
index 3bb97833..740308b6 100644
--- a/libarchive/archive_string.c
+++ b/libarchive/archive_string.c
@@ -2039,6 +2039,26 @@ archive_strncat_l(struct archive_string *as, const void *_p, size_t n,
return (r);
}
+struct archive_string *
+archive_string_dirname(struct archive_string *as)
+{
+ /* strip trailing separators */
+ while (as->length > 1 && as->s[as->length - 1] == '/')
+ as->length--;
+ /* strip final component */
+ while (as->length > 0 && as->s[as->length - 1] != '/')
+ as->length--;
+ /* empty path -> cwd */
+ if (as->length == 0)
+ return (archive_strcat(as, "."));
+ /* strip separator(s) */
+ while (as->length > 1 && as->s[as->length - 1] == '/')
+ as->length--;
+ /* terminate */
+ as->s[as->length] = '\0';
+ return (as);
+}
+
#if HAVE_ICONV
/*
diff --git a/libarchive/archive_string.h b/libarchive/archive_string.h
index e8987867..d5f5c03a 100644
--- a/libarchive/archive_string.h
+++ b/libarchive/archive_string.h
@@ -192,6 +192,10 @@ void archive_string_vsprintf(struct archive_string *, const char *,
void archive_string_sprintf(struct archive_string *, const char *, ...)
__LA_PRINTF(2, 3);
+/* Equivalent to dirname(3) */
+struct archive_string *
+archive_string_dirname(struct archive_string *);
+
/* Translates from MBS to Unicode. */
/* Returns non-zero if conversion failed in any way. */
int archive_wstring_append_from_mbs(struct archive_wstring *dest,
diff --git a/libarchive/archive_write_disk_posix.c b/libarchive/archive_write_disk_posix.c
index 6fcf3929..cd256203 100644
--- a/libarchive/archive_write_disk_posix.c
+++ b/libarchive/archive_write_disk_posix.c
@@ -412,12 +412,14 @@ static ssize_t _archive_write_disk_data_block(struct archive *, const void *,
static int
la_mktemp(struct archive_write_disk *a)
{
+ struct archive_string *tmp = &a->_tmpname_data;
int oerrno, fd;
mode_t mode;
- archive_string_empty(&a->_tmpname_data);
- archive_string_sprintf(&a->_tmpname_data, "%s.XXXXXX", a->name);
- a->tmpname = a->_tmpname_data.s;
+ archive_strcpy(tmp, a->name);
+ archive_string_dirname(tmp);
+ archive_strcat(tmp, "/tar.XXXXXXXX");
+ a->tmpname = tmp->s;
fd = __archive_mkstemp(a->tmpname);
if (fd == -1)
@@ -4283,8 +4285,10 @@ create_tempdatafork(struct archive_write_disk *a, const char *pathname)
int tmpfd;
archive_string_init(&tmpdatafork);
- archive_strcpy(&tmpdatafork, "tar.md.XXXXXX");
- tmpfd = mkstemp(tmpdatafork.s);
+ archive_strcpy(&tmpdatafork, pathname);
+ archive_string_dirname(&tmpdatafork);
+ archive_strcat(&tmpdatafork, "/tar.XXXXXXXX");
+ tmpfd = __archive_mkstemp(tmpdatafork.s);
if (tmpfd < 0) {
archive_set_error(&a->archive, errno,
"Failed to mkstemp");
@@ -4363,8 +4367,10 @@ set_mac_metadata(struct archive_write_disk *a, const char *pathname,
* silly dance of writing the data to disk just so that
* copyfile() can read it back in again. */
archive_string_init(&tmp);
- archive_strcpy(&tmp, "tar.mmd.XXXXXX");
- fd = mkstemp(tmp.s);
+ archive_strcpy(&tmp, pathname);
+ archive_string_dirname(&tmp);
+ archive_strcat(&tmp, "/tar.XXXXXXXX");
+ fd = __archive_mkstemp(tmp.s);
if (fd < 0) {
archive_set_error(&a->archive, errno,
diff --git a/libarchive/test/test_archive_string.c b/libarchive/test/test_archive_string.c
index 30f7a800..bf822c0d 100644
--- a/libarchive/test/test_archive_string.c
+++ b/libarchive/test/test_archive_string.c
@@ -353,6 +353,43 @@ test_archive_string_sprintf(void)
archive_string_free(&s);
}
+static void
+test_archive_string_dirname(void)
+{
+ static struct pair { const char *str, *exp; } pairs[] = {
+ { "", "." },
+ { "/", "/" },
+ { "//", "/" },
+ { "///", "/" },
+ { "./", "." },
+ { ".", "." },
+ { "..", "." },
+ { "foo", "." },
+ { "foo/", "." },
+ { "foo//", "." },
+ { "foo/bar", "foo" },
+ { "foo/bar/", "foo" },
+ { "foo/bar//", "foo" },
+ { "foo//bar", "foo" },
+ { "foo//bar/", "foo" },
+ { "foo//bar//", "foo" },
+ { "/foo", "/" },
+ { "//foo", "/" },
+ { "//foo/", "/" },
+ { "//foo//", "/" },
+ { 0 },
+ };
+ struct pair *pair;
+ struct archive_string s;
+
+ archive_string_init(&s);
+ for (pair = pairs; pair->str; pair++) {
+ archive_strcpy(&s, pair->str);
+ archive_string_dirname(&s);
+ assertEqualString(pair->exp, s.s);
+ }
+}
+
DEFINE_TEST(test_archive_string)
{
test_archive_string_ensure();
@@ -364,6 +401,7 @@ DEFINE_TEST(test_archive_string)
test_archive_string_concat();
test_archive_string_copy();
test_archive_string_sprintf();
+ test_archive_string_dirname();
}
static const char *strings[] =
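The in-place normalization implemented by the new archive_string_dirname() can be sketched in Python (as_dirname is a hypothetical name); the expected values asserted below come directly from the pairs table in the new test above:

```python
def as_dirname(path: str) -> str:
    """dirname(3)-like truncation, mirroring archive_string_dirname()."""
    n = len(path)
    # strip trailing separators
    while n > 1 and path[n - 1] == '/':
        n -= 1
    # strip final component
    while n > 0 and path[n - 1] != '/':
        n -= 1
    # empty path -> cwd
    if n == 0:
        return "."
    # strip separator(s), but keep a lone leading "/"
    while n > 1 and path[n - 1] == '/':
        n -= 1
    return path[:n]
```

Unlike os.path.dirname, this collapses redundant separators and maps an empty result to ".", which is what the temp-file callers rely on.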

@@ -0,0 +1,28 @@
From 82b57a9740aa6d084edcf4592a3b8e49f63dec98 Mon Sep 17 00:00:00 2001
From: Tim Kientzle <kientzle@acm.org>
Date: Fri, 31 Oct 2025 22:07:19 -0700
Subject: [PATCH] Merge pull request #2768 from Commandoss/master
Fix for an out-of-bounds buffer overrun when using p[H_LEVEL_OFFSET]
(cherry picked from commit ce614c65246158bcb0dc1f9c1dce5a5af65f9827)
Upstream-Status: Backport [https://github.com/libarchive/libarchive/commit/82b57a9740aa6d084edcf4592a3b8e49f63dec98]
Signed-off-by: Peter Marko <peter.marko@siemens.com>
---
libarchive/archive_read_support_format_lha.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/libarchive/archive_read_support_format_lha.c b/libarchive/archive_read_support_format_lha.c
index 2a84ad9d..abf8b879 100644
--- a/libarchive/archive_read_support_format_lha.c
+++ b/libarchive/archive_read_support_format_lha.c
@@ -690,7 +690,7 @@ archive_read_format_lha_read_header(struct archive_read *a,
* a pathname and a symlink has '\' character, a directory
* separator in DOS/Windows. So we should convert it to '/'.
*/
- if (p[H_LEVEL_OFFSET] == 0)
+ if (lha->level == 0)
lha_replace_path_separator(lha, entry);
archive_entry_set_mode(entry, lha->mode);

@@ -0,0 +1,76 @@
From 3150539edb18690c2c5f81c37fd2d3a35c69ace5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?ARJANEN=20Lo=C3=AFc=20Jean=20David?= <ljd@luigiscorner.mu>
Date: Fri, 14 Nov 2025 20:34:48 +0100
Subject: [PATCH] Fix bsdtar zero-length pattern issue.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Uses the sed-like approach (also found in Java, .NET, and JavaScript) of advancing the string to be processed by one character when the match is zero-length.
Fixes libarchive/libarchive#2725 and solves libarchive/libarchive#2438.
CVE: CVE-2025-60753
Upstream-Status: Backport [https://github.com/libarchive/libarchive/commit/3150539edb18690c2c5f81c37fd2d3a35c69ace5]
Signed-off-by: Peter Marko <peter.marko@siemens.com>
---
tar/subst.c | 19 ++++++++++++-------
tar/test/test_option_s.c | 8 +++++++-
2 files changed, 19 insertions(+), 8 deletions(-)
diff --git a/tar/subst.c b/tar/subst.c
index 9747abb9..902a4d64 100644
--- a/tar/subst.c
+++ b/tar/subst.c
@@ -235,7 +235,9 @@ apply_substitution(struct bsdtar *bsdtar, const char *name, char **result,
(*result)[0] = 0;
}
- while (1) {
+ char isEnd = 0;
+ do {
+ isEnd = *name == '\0';
if (regexec(&rule->re, name, 10, matches, 0))
break;
@@ -290,12 +292,15 @@ apply_substitution(struct bsdtar *bsdtar, const char *name, char **result,
}
realloc_strcat(result, rule->result + j);
-
- name += matches[0].rm_eo;
-
- if (!rule->global)
- break;
- }
+ if (matches[0].rm_eo > 0) {
+ name += matches[0].rm_eo;
+ } else {
+ // We skip a character because the match is 0-length
+ // so we need to add it to the output
+ realloc_strncat(result, name, 1);
+ name += 1;
+ }
+ } while (rule->global && !isEnd); // Testing one step after because sed et al. run 0-length patterns a last time on the empty string at the end
}
if (got_match)
diff --git a/tar/test/test_option_s.c b/tar/test/test_option_s.c
index 564793b9..90b4c471 100644
--- a/tar/test/test_option_s.c
+++ b/tar/test/test_option_s.c
@@ -42,7 +42,13 @@ DEFINE_TEST(test_option_s)
systemf("%s -cf test1_2.tar -s /d1/d2/ in/d1/foo", testprog);
systemf("%s -xf test1_2.tar -C test1", testprog);
assertFileContents("foo", 3, "test1/in/d2/foo");
-
+ systemf("%s -cf test1_3.tar -s /o/#/g in/d1/foo", testprog);
+ systemf("%s -xf test1_3.tar -C test1", testprog);
+ assertFileContents("foo", 3, "test1/in/d1/f##");
+ // For the 0-length pattern check, remember that "test1/" isn't part of the string affected by the regexp
+ systemf("%s -cf test1_4.tar -s /f*/\\<~\\>/g in/d1/foo", testprog);
+ systemf("%s -xf test1_4.tar -C test1", testprog);
+ assertFileContents("foo", 3, "test1/<>i<>n<>/<>d<>1<>/<f><>o<>o<>");
/*
* Test 2: Basic substitution when extracting archive.
*/


@@ -38,6 +38,11 @@ SRC_URI = "http://libarchive.org/downloads/libarchive-${PV}.tar.gz \
file://CVE-2025-5918-0001.patch \
file://CVE-2025-5918-0002.patch \
file://CVE-2025-5918-0003.patch \
file://0001-Merge-pull-request-2696-from-al3xtjames-mkstemp.patch \
file://0001-Merge-pull-request-2749-from-KlaraSystems-des-tempdi.patch \
file://0001-Merge-pull-request-2753-from-KlaraSystems-des-temp-f.patch \
file://0001-Merge-pull-request-2768-from-Commandoss-master.patch \
file://CVE-2025-60753.patch \
"
UPSTREAM_CHECK_URI = "http://libarchive.org/"


@@ -0,0 +1,30 @@
From e40c14a3e007fac0e4f2e4164fdf14d1712355bd Mon Sep 17 00:00:00 2001
From: Sergei Trofimovich <slyich@gmail.com>
Date: Fri, 2 Aug 2024 22:44:21 +0100
Subject: [PATCH] SPIRV/SpvBuilder.h: add missing <cstdint> include
Without the change `glslang` build fails on upcoming `gcc-15` as:
In file included from /build/source/SPIRV/GlslangToSpv.cpp:45:
SPIRV/SpvBuilder.h:248:30: error: 'uint32_t' has not been declared
248 | Id makeDebugLexicalBlock(uint32_t line);
| ^~~~~~~~
Upstream-Status: Backport [https://github.com/KhronosGroup/glslang/commit/e40c14a3e007fac0e4f2e4164fdf14d1712355bd]
Signed-off-by: Gyorgy Sarvari <skandigraun@gmail.com>
---
SPIRV/SpvBuilder.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/SPIRV/SpvBuilder.h b/SPIRV/SpvBuilder.h
index b1ca6ce1..00b2e53a 100644
--- a/SPIRV/SpvBuilder.h
+++ b/SPIRV/SpvBuilder.h
@@ -56,6 +56,7 @@ namespace spv {
}
#include <algorithm>
+#include <cstdint>
#include <map>
#include <memory>
#include <set>


@@ -11,6 +11,7 @@ LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=2a2b5acd7bc4844964cfda45fe807dc3"
SRCREV = "a91631b260cba3f22858d6c6827511e636c2458a"
SRC_URI = "git://github.com/KhronosGroup/glslang.git;protocol=https;branch=main \
file://0001-generate-glslang-pkg-config.patch \
file://0001-SPIRV-SpvBuilder.h-add-missing-cstdint-include.patch \
"
PE = "1"
# These recipes need to be updated in lockstep with each other:


@@ -1,28 +0,0 @@
From cedc797e1a0850039a25b7e387b342e54fffcc97 Mon Sep 17 00:00:00 2001
From: Khem Raj <raj.khem@gmail.com>
Date: Mon, 17 Aug 2020 10:50:51 -0700
Subject: [PATCH] Avoid duplicate definitions of IOPortBase
This fixed build with gcc10/-fno-common
Fixes
compiler.h:528: multiple definition of `IOPortBase';
Upstream-Status: Pending
Signed-off-by: Khem Raj <raj.khem@gmail.com>
---
hw/xfree86/os-support/linux/lnx_video.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/xfree86/os-support/linux/lnx_video.c b/hw/xfree86/os-support/linux/lnx_video.c
index fd83022..1d0d96e 100644
--- a/hw/xfree86/os-support/linux/lnx_video.c
+++ b/hw/xfree86/os-support/linux/lnx_video.c
@@ -78,6 +78,7 @@ xf86OSInitVidMem(VidMemInfoPtr pVidMem)
/***************************************************************************/
/* I/O Permissions section */
/***************************************************************************/
+_X_EXPORT unsigned int IOPortBase; /* Memory mapped I/O port area */
#if defined(__powerpc__)
volatile unsigned char *ioBase = NULL;


@@ -0,0 +1,91 @@
From 359c9c0478406fe00e0d4c5d52bd9bf8c2ca4081 Mon Sep 17 00:00:00 2001
From: Olivier Fourdan <ofourdan@redhat.com>
Date: Wed, 2 Jul 2025 09:46:22 +0200
Subject: [PATCH 1/4] present: Fix use-after-free in present_create_notifies()
Using the Present extension, if an error occurs while processing and
adding the notifications after presenting a pixmap, the function
present_create_notifies() will clean up and remove the notifications
it added.
However, there are two different code paths that can lead to an error
creating the notify, one being before the notify is being added to the
list, and another one after the notify is added.
When the error occurs before it's been added, it removes the elements up
to the last added element, instead of the actual number of elements
which were added.
As a result, in case of error, as with an invalid window for example, it
leaves a dangling pointer to the last element, leading to a use after
free case later:
| Invalid write of size 8
| at 0x5361D5: present_clear_window_notifies (present_notify.c:42)
| by 0x534A56: present_destroy_window (present_screen.c:107)
| by 0x41E441: xwl_destroy_window (xwayland-window.c:1959)
| by 0x4F9EC9: compDestroyWindow (compwindow.c:622)
| by 0x51EAC4: damageDestroyWindow (damage.c:1592)
| by 0x4FDC29: DbeDestroyWindow (dbe.c:1291)
| by 0x4EAC55: FreeWindowResources (window.c:1023)
| by 0x4EAF59: DeleteWindow (window.c:1091)
| by 0x4DE59A: doFreeResource (resource.c:890)
| by 0x4DEFB2: FreeClientResources (resource.c:1156)
| by 0x4A9AFB: CloseDownClient (dispatch.c:3567)
| by 0x5DCC78: ClientReady (connection.c:603)
| Address 0x16126200 is 16 bytes inside a block of size 2,048 free'd
| at 0x4841E43: free (vg_replace_malloc.c:989)
| by 0x5363DD: present_destroy_notifies (present_notify.c:111)
| by 0x53638D: present_create_notifies (present_notify.c:100)
| by 0x5368E9: proc_present_pixmap_common (present_request.c:164)
| by 0x536A7D: proc_present_pixmap (present_request.c:189)
| by 0x536FA9: proc_present_dispatch (present_request.c:337)
| by 0x4A1E4E: Dispatch (dispatch.c:561)
| by 0x4B00F1: dix_main (main.c:284)
| by 0x42879D: main (stubmain.c:34)
| Block was alloc'd at
| at 0x48463F3: calloc (vg_replace_malloc.c:1675)
| by 0x5362A1: present_create_notifies (present_notify.c:81)
| by 0x5368E9: proc_present_pixmap_common (present_request.c:164)
| by 0x536A7D: proc_present_pixmap (present_request.c:189)
| by 0x536FA9: proc_present_dispatch (present_request.c:337)
| by 0x4A1E4E: Dispatch (dispatch.c:561)
| by 0x4B00F1: dix_main (main.c:284)
| by 0x42879D: main (stubmain.c:34)
To fix the issue, count and remove the actual number of notify elements
added in case of error.
CVE-2025-62229, ZDI-CAN-27238
This vulnerability was discovered by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
(cherry picked from commit 5a4286b13f631b66c20f5bc8db7b68211dcbd1d0)
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/2087>
CVE: CVE-2025-62229
Upstream-Status: Backport
Signed-off-by: Ross Burton <ross.burton@arm.com>
---
present/present_notify.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/present/present_notify.c b/present/present_notify.c
index 445954998..00b3b68bd 100644
--- a/present/present_notify.c
+++ b/present/present_notify.c
@@ -90,7 +90,7 @@ present_create_notifies(ClientPtr client, int num_notifies, xPresentNotify *x_no
if (status != Success)
goto bail;
- added = i;
+ added++;
}
return Success;
--
2.43.0


@@ -0,0 +1,63 @@
From a3d5c76ee8925ef9846c72e2327674b84e3fcdb3 Mon Sep 17 00:00:00 2001
From: Olivier Fourdan <ofourdan@redhat.com>
Date: Wed, 10 Sep 2025 15:55:06 +0200
Subject: [PATCH 2/4] xkb: Make the RT_XKBCLIENT resource private
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Currently, the resource is only available to the xkb.c source file.
In preparation for the next commit, to be able to free the resources
from XkbRemoveResourceClient(), make that variable private instead.
This is related to:
CVE-2025-62230, ZDI-CAN-27545
This vulnerability was discovered by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
(cherry picked from commit 99790a2c9205a52fbbec01f21a92c9b7f4ed1d8f)
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/2087>
CVE: CVE-2025-62230
Upstream-Status: Backport
Signed-off-by: Ross Burton <ross.burton@arm.com>
---
include/xkbsrv.h | 2 ++
xkb/xkb.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/xkbsrv.h b/include/xkbsrv.h
index fbb5427e1..b2766277c 100644
--- a/include/xkbsrv.h
+++ b/include/xkbsrv.h
@@ -58,6 +58,8 @@ THE USE OR PERFORMANCE OF THIS SOFTWARE.
#include "inputstr.h"
#include "events.h"
+extern RESTYPE RT_XKBCLIENT;
+
typedef struct _XkbInterest {
DeviceIntPtr dev;
ClientPtr client;
diff --git a/xkb/xkb.c b/xkb/xkb.c
index 5131bfcdf..26d965d48 100644
--- a/xkb/xkb.c
+++ b/xkb/xkb.c
@@ -51,7 +51,7 @@ int XkbKeyboardErrorCode;
CARD32 xkbDebugFlags = 0;
static CARD32 xkbDebugCtrls = 0;
-static RESTYPE RT_XKBCLIENT;
+RESTYPE RT_XKBCLIENT = 0;
/***====================================================================***/
--
2.43.0


@@ -0,0 +1,92 @@
From 32b12feb6f9f3d32532ff75c7434a7426b85e0c3 Mon Sep 17 00:00:00 2001
From: Olivier Fourdan <ofourdan@redhat.com>
Date: Wed, 10 Sep 2025 15:58:57 +0200
Subject: [PATCH 3/4] xkb: Free the XKB resource when freeing XkbInterest
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
XkbRemoveResourceClient() would free the XkbInterest data associated
with the device, but not the resource associated with it.
As a result, when the client terminates, the resource delete function
gets called and accesses already freed memory:
| Invalid read of size 8
| at 0x5BC0C0: XkbRemoveResourceClient (xkbEvents.c:1047)
| by 0x5B3391: XkbClientGone (xkb.c:7094)
| by 0x4DF138: doFreeResource (resource.c:890)
| by 0x4DFB50: FreeClientResources (resource.c:1156)
| by 0x4A9A59: CloseDownClient (dispatch.c:3550)
| by 0x5E0A53: ClientReady (connection.c:601)
| by 0x5E4FEF: ospoll_wait (ospoll.c:657)
| by 0x5DC834: WaitForSomething (WaitFor.c:206)
| by 0x4A1BA5: Dispatch (dispatch.c:491)
| by 0x4B0070: dix_main (main.c:277)
| by 0x4285E7: main (stubmain.c:34)
| Address 0x1893e278 is 184 bytes inside a block of size 928 free'd
| at 0x4842E43: free (vg_replace_malloc.c:989)
| by 0x49C1A6: CloseDevice (devices.c:1067)
| by 0x49C522: CloseOneDevice (devices.c:1193)
| by 0x49C6E4: RemoveDevice (devices.c:1244)
| by 0x5873D4: remove_master (xichangehierarchy.c:348)
| by 0x587921: ProcXIChangeHierarchy (xichangehierarchy.c:504)
| by 0x579BF1: ProcIDispatch (extinit.c:390)
| by 0x4A1D85: Dispatch (dispatch.c:551)
| by 0x4B0070: dix_main (main.c:277)
| by 0x4285E7: main (stubmain.c:34)
| Block was alloc'd at
| at 0x48473F3: calloc (vg_replace_malloc.c:1675)
| by 0x49A118: AddInputDevice (devices.c:262)
| by 0x4A0E58: AllocDevicePair (devices.c:2846)
| by 0x5866EE: add_master (xichangehierarchy.c:153)
| by 0x5878C2: ProcXIChangeHierarchy (xichangehierarchy.c:493)
| by 0x579BF1: ProcIDispatch (extinit.c:390)
| by 0x4A1D85: Dispatch (dispatch.c:551)
| by 0x4B0070: dix_main (main.c:277)
| by 0x4285E7: main (stubmain.c:34)
To avoid that issue, make sure to free the resources when freeing the
device XkbInterest data.
CVE-2025-62230, ZDI-CAN-27545
This vulnerability was discovered by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
(cherry picked from commit 10c94238bdad17c11707e0bdaaa3a9cd54c504be)
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/2087>
CVE: CVE-2025-62230
Upstream-Status: Backport
Signed-off-by: Ross Burton <ross.burton@arm.com>
---
xkb/xkbEvents.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/xkb/xkbEvents.c b/xkb/xkbEvents.c
index 0bbd66186..3d04ecf0c 100644
--- a/xkb/xkbEvents.c
+++ b/xkb/xkbEvents.c
@@ -1056,6 +1056,7 @@ XkbRemoveResourceClient(DevicePtr inDev, XID id)
autoCtrls = interest->autoCtrls;
autoValues = interest->autoCtrlValues;
client = interest->client;
+ FreeResource(interest->resource, RT_XKBCLIENT);
free(interest);
found = TRUE;
}
@@ -1067,6 +1068,7 @@ XkbRemoveResourceClient(DevicePtr inDev, XID id)
autoCtrls = victim->autoCtrls;
autoValues = victim->autoCtrlValues;
client = victim->client;
+ FreeResource(victim->resource, RT_XKBCLIENT);
free(victim);
found = TRUE;
}
--
2.43.0


@@ -0,0 +1,53 @@
From 364f06788f1de4edc0547c7f29d338e6deffc138 Mon Sep 17 00:00:00 2001
From: Olivier Fourdan <ofourdan@redhat.com>
Date: Wed, 10 Sep 2025 16:30:29 +0200
Subject: [PATCH 4/4] xkb: Prevent overflow in XkbSetCompatMap()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
The XkbCompatMap structure stores its "num_si" and "size_si" fields
using an unsigned short.
However, the function _XkbSetCompatMap() will store the sum of the
input data "firstSI" and "nSI" in both XkbCompatMap's "num_si" and
"size_si" without first checking if the sum overflows the maximum
unsigned short value, leading to a possible overflow.
To avoid the issue, check that the sum does not exceed the maximum
unsigned short value, and return a "BadValue" error otherwise.
CVE-2025-62231, ZDI-CAN-27560
This vulnerability was discovered by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
(cherry picked from commit 475d9f49acd0e55bc0b089ed77f732ad18585470)
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/2087>
CVE: CVE-2025-62231
Upstream-Status: Backport
Signed-off-by: Ross Burton <ross.burton@arm.com>
---
xkb/xkb.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/xkb/xkb.c b/xkb/xkb.c
index 26d965d48..137d70da2 100644
--- a/xkb/xkb.c
+++ b/xkb/xkb.c
@@ -2992,6 +2992,8 @@ _XkbSetCompatMap(ClientPtr client, DeviceIntPtr dev,
XkbSymInterpretPtr sym;
unsigned int skipped = 0;
+ if ((unsigned) (req->firstSI + req->nSI) > USHRT_MAX)
+ return BadValue;
if ((unsigned) (req->firstSI + req->nSI) > compat->size_si) {
compat->num_si = compat->size_si = req->firstSI + req->nSI;
compat->sym_interpret = reallocarray(compat->sym_interpret,
--
2.43.0


@@ -1,8 +1,11 @@
require xserver-xorg.inc
SRC_URI += "file://0001-xf86pciBus.c-use-Intel-ddx-only-for-pre-gen4-hardwar.patch \
file://0001-Avoid-duplicate-definitions-of-IOPortBase.patch \
"
file://0001-present-Fix-use-after-free-in-present_create_notifie.patch \
file://0002-xkb-Make-the-RT_XKBCLIENT-resource-private.patch \
file://0003-xkb-Free-the-XKB-resource-when-freeing-XkbInterest.patch \
file://0004-xkb-Prevent-overflow-in-XkbSetCompatMap.patch \
"
SRC_URI[sha256sum] = "c878d1930d87725d4a5bf498c24f4be8130d5b2646a9fd0f2994deff90116352"
# These extensions are now integrated into the server, so declare the migration


@@ -0,0 +1,89 @@
From 5a4286b13f631b66c20f5bc8db7b68211dcbd1d0 Mon Sep 17 00:00:00 2001
From: Olivier Fourdan <ofourdan@redhat.com>
Date: Wed, 2 Jul 2025 09:46:22 +0200
Subject: [PATCH] present: Fix use-after-free in present_create_notifies()
Using the Present extension, if an error occurs while processing and
adding the notifications after presenting a pixmap, the function
present_create_notifies() will clean up and remove the notifications
it added.
However, there are two different code paths that can lead to an error
creating the notify, one being before the notify is being added to the
list, and another one after the notify is added.
When the error occurs before it's been added, it removes the elements up
to the last added element, instead of the actual number of elements
which were added.
As a result, in case of error, as with an invalid window for example, it
leaves a dangling pointer to the last element, leading to a use after
free case later:
| Invalid write of size 8
| at 0x5361D5: present_clear_window_notifies (present_notify.c:42)
| by 0x534A56: present_destroy_window (present_screen.c:107)
| by 0x41E441: xwl_destroy_window (xwayland-window.c:1959)
| by 0x4F9EC9: compDestroyWindow (compwindow.c:622)
| by 0x51EAC4: damageDestroyWindow (damage.c:1592)
| by 0x4FDC29: DbeDestroyWindow (dbe.c:1291)
| by 0x4EAC55: FreeWindowResources (window.c:1023)
| by 0x4EAF59: DeleteWindow (window.c:1091)
| by 0x4DE59A: doFreeResource (resource.c:890)
| by 0x4DEFB2: FreeClientResources (resource.c:1156)
| by 0x4A9AFB: CloseDownClient (dispatch.c:3567)
| by 0x5DCC78: ClientReady (connection.c:603)
| Address 0x16126200 is 16 bytes inside a block of size 2,048 free'd
| at 0x4841E43: free (vg_replace_malloc.c:989)
| by 0x5363DD: present_destroy_notifies (present_notify.c:111)
| by 0x53638D: present_create_notifies (present_notify.c:100)
| by 0x5368E9: proc_present_pixmap_common (present_request.c:164)
| by 0x536A7D: proc_present_pixmap (present_request.c:189)
| by 0x536FA9: proc_present_dispatch (present_request.c:337)
| by 0x4A1E4E: Dispatch (dispatch.c:561)
| by 0x4B00F1: dix_main (main.c:284)
| by 0x42879D: main (stubmain.c:34)
| Block was alloc'd at
| at 0x48463F3: calloc (vg_replace_malloc.c:1675)
| by 0x5362A1: present_create_notifies (present_notify.c:81)
| by 0x5368E9: proc_present_pixmap_common (present_request.c:164)
| by 0x536A7D: proc_present_pixmap (present_request.c:189)
| by 0x536FA9: proc_present_dispatch (present_request.c:337)
| by 0x4A1E4E: Dispatch (dispatch.c:561)
| by 0x4B00F1: dix_main (main.c:284)
| by 0x42879D: main (stubmain.c:34)
To fix the issue, count and remove the actual number of notify elements
added in case of error.
CVE-2025-62229, ZDI-CAN-27238
This vulnerability was discovered by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/2086>
CVE: CVE-2025-62229
Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/5a4286b13f631b66c20f5bc8db7b68211dcbd1d0]
Signed-off-by: Yogita Urade <yogita.urade@windriver.com>
---
present/present_notify.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/present/present_notify.c b/present/present_notify.c
index 4459549..00b3b68 100644
--- a/present/present_notify.c
+++ b/present/present_notify.c
@@ -90,7 +90,7 @@ present_create_notifies(ClientPtr client, int num_notifies, xPresentNotify *x_no
if (status != Success)
goto bail;
- added = i;
+ added++;
}
return Success;
--
2.40.0


@@ -0,0 +1,60 @@
From 865089ca70840c0f13a61df135f7b44a9782a175 Mon Sep 17 00:00:00 2001
From: Olivier Fourdan <ofourdan@redhat.com>
Date: Wed, 10 Sep 2025 15:55:06 +0200
Subject: [PATCH] xkb: Make the RT_XKBCLIENT resource private
Currently, the resource is only available to the xkb.c source file.
In preparation for the next commit, to be able to free the resources
from XkbRemoveResourceClient(), make that variable private instead.
This is related to:
CVE-2025-62230, ZDI-CAN-27545
This vulnerability was discovered by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
(cherry picked from commit 99790a2c9205a52fbbec01f21a92c9b7f4ed1d8f)
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/2087>
CVE: CVE-2025-62230
Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/865089ca70840c0f13a61df135f7b44a9782a175]
Signed-off-by: Yogita Urade <yogita.urade@windriver.com>
---
include/xkbsrv.h | 2 ++
xkb/xkb.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/xkbsrv.h b/include/xkbsrv.h
index 21cd876..24fdfb4 100644
--- a/include/xkbsrv.h
+++ b/include/xkbsrv.h
@@ -58,6 +58,8 @@ THE USE OR PERFORMANCE OF THIS SOFTWARE.
#include "inputstr.h"
#include "events.h"
+extern RESTYPE RT_XKBCLIENT;
+
typedef struct _XkbInterest {
DeviceIntPtr dev;
ClientPtr client;
diff --git a/xkb/xkb.c b/xkb/xkb.c
index 3210ff9..b7877f5 100644
--- a/xkb/xkb.c
+++ b/xkb/xkb.c
@@ -51,7 +51,7 @@ int XkbKeyboardErrorCode;
CARD32 xkbDebugFlags = 0;
static CARD32 xkbDebugCtrls = 0;
-static RESTYPE RT_XKBCLIENT;
+RESTYPE RT_XKBCLIENT = 0;
/***====================================================================***/
--
2.40.0


@@ -0,0 +1,89 @@
From 87fe2553937a99fd914ad0cde999376a3adc3839 Mon Sep 17 00:00:00 2001
From: Olivier Fourdan <ofourdan@redhat.com>
Date: Wed, 10 Sep 2025 15:58:57 +0200
Subject: [PATCH] xkb: Free the XKB resource when freeing XkbInterest
XkbRemoveResourceClient() would free the XkbInterest data associated
with the device, but not the resource associated with it.
As a result, when the client terminates, the resource delete function
gets called and accesses already freed memory:
| Invalid read of size 8
| at 0x5BC0C0: XkbRemoveResourceClient (xkbEvents.c:1047)
| by 0x5B3391: XkbClientGone (xkb.c:7094)
| by 0x4DF138: doFreeResource (resource.c:890)
| by 0x4DFB50: FreeClientResources (resource.c:1156)
| by 0x4A9A59: CloseDownClient (dispatch.c:3550)
| by 0x5E0A53: ClientReady (connection.c:601)
| by 0x5E4FEF: ospoll_wait (ospoll.c:657)
| by 0x5DC834: WaitForSomething (WaitFor.c:206)
| by 0x4A1BA5: Dispatch (dispatch.c:491)
| by 0x4B0070: dix_main (main.c:277)
| by 0x4285E7: main (stubmain.c:34)
| Address 0x1893e278 is 184 bytes inside a block of size 928 free'd
| at 0x4842E43: free (vg_replace_malloc.c:989)
| by 0x49C1A6: CloseDevice (devices.c:1067)
| by 0x49C522: CloseOneDevice (devices.c:1193)
| by 0x49C6E4: RemoveDevice (devices.c:1244)
| by 0x5873D4: remove_master (xichangehierarchy.c:348)
| by 0x587921: ProcXIChangeHierarchy (xichangehierarchy.c:504)
| by 0x579BF1: ProcIDispatch (extinit.c:390)
| by 0x4A1D85: Dispatch (dispatch.c:551)
| by 0x4B0070: dix_main (main.c:277)
| by 0x4285E7: main (stubmain.c:34)
| Block was alloc'd at
| at 0x48473F3: calloc (vg_replace_malloc.c:1675)
| by 0x49A118: AddInputDevice (devices.c:262)
| by 0x4A0E58: AllocDevicePair (devices.c:2846)
| by 0x5866EE: add_master (xichangehierarchy.c:153)
| by 0x5878C2: ProcXIChangeHierarchy (xichangehierarchy.c:493)
| by 0x579BF1: ProcIDispatch (extinit.c:390)
| by 0x4A1D85: Dispatch (dispatch.c:551)
| by 0x4B0070: dix_main (main.c:277)
| by 0x4285E7: main (stubmain.c:34)
To avoid that issue, make sure to free the resources when freeing the
device XkbInterest data.
CVE-2025-62230, ZDI-CAN-27545
This vulnerability was discovered by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
(cherry picked from commit 10c94238bdad17c11707e0bdaaa3a9cd54c504be)
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/2087>
CVE: CVE-2025-62230
Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/87fe2553937a99fd914ad0cde999376a3adc3839]
Signed-off-by: Yogita Urade <yogita.urade@windriver.com>
---
xkb/xkbEvents.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/xkb/xkbEvents.c b/xkb/xkbEvents.c
index f8f65d4..7c669c9 100644
--- a/xkb/xkbEvents.c
+++ b/xkb/xkbEvents.c
@@ -1055,6 +1055,7 @@ XkbRemoveResourceClient(DevicePtr inDev, XID id)
autoCtrls = interest->autoCtrls;
autoValues = interest->autoCtrlValues;
client = interest->client;
+ FreeResource(interest->resource, RT_XKBCLIENT);
free(interest);
found = TRUE;
}
@@ -1066,6 +1067,7 @@ XkbRemoveResourceClient(DevicePtr inDev, XID id)
autoCtrls = victim->autoCtrls;
autoValues = victim->autoCtrlValues;
client = victim->client;
+ FreeResource(victim->resource, RT_XKBCLIENT);
free(victim);
found = TRUE;
}
--
2.40.0


@@ -0,0 +1,50 @@
From 3baad99f9c15028ed8c3e3d8408e5ec35db155aa Mon Sep 17 00:00:00 2001
From: Olivier Fourdan <ofourdan@redhat.com>
Date: Wed, 10 Sep 2025 16:30:29 +0200
Subject: [PATCH] xkb: Prevent overflow in XkbSetCompatMap()
The XkbCompatMap structure stores its "num_si" and "size_si" fields
using an unsigned short.
However, the function _XkbSetCompatMap() will store the sum of the
input data "firstSI" and "nSI" in both XkbCompatMap's "num_si" and
"size_si" without first checking if the sum overflows the maximum
unsigned short value, leading to a possible overflow.
To avoid the issue, check that the sum does not exceed the maximum
unsigned short value, and return a "BadValue" error otherwise.
CVE-2025-62231, ZDI-CAN-27560
This vulnerability was discovered by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
(cherry picked from commit 475d9f49acd0e55bc0b089ed77f732ad18585470)
Part-of: <https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/2087>
CVE: CVE-2025-62231
Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/3baad99f9c15028ed8c3e3d8408e5ec35db155aa]
Signed-off-by: Yogita Urade <yogita.urade@windriver.com>
---
xkb/xkb.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/xkb/xkb.c b/xkb/xkb.c
index b7877f5..4e585d1 100644
--- a/xkb/xkb.c
+++ b/xkb/xkb.c
@@ -2992,6 +2992,8 @@ _XkbSetCompatMap(ClientPtr client, DeviceIntPtr dev,
XkbSymInterpretPtr sym;
unsigned int skipped = 0;
+ if ((unsigned) (req->firstSI + req->nSI) > USHRT_MAX)
+ return BadValue;
if ((unsigned) (req->firstSI + req->nSI) > compat->size_si) {
compat->num_si = compat->size_si = req->firstSI + req->nSI;
compat->sym_interpret = reallocarray(compat->sym_interpret,
--
2.40.0


@@ -31,6 +31,10 @@ SRC_URI = "https://www.x.org/archive/individual/xserver/xwayland-${PV}.tar.xz \
file://CVE-2025-49178.patch \
file://CVE-2025-49179.patch \
file://CVE-2025-49180.patch \
file://CVE-2025-62229.patch \
file://CVE-2025-62230-0001.patch \
file://CVE-2025-62230-0002.patch \
file://CVE-2025-62231.patch \
"
SRC_URI[sha256sum] = "33ec7ff2687a59faaa52b9b09aa8caf118e7ecb6aed8953f526a625ff9f4bd90"


@@ -0,0 +1,111 @@
From 0fa3c0f698c2ca618a0fa44e10a822678df85373 Mon Sep 17 00:00:00 2001
From: Cosmin Truta <ctruta@gmail.com>
Date: Thu, 15 Feb 2024 21:53:24 +0200
Subject: [PATCH] chore: Clean up the spurious uses of `sizeof(png_byte)`; fix
the manual
By definition, `sizeof(png_byte)` is 1.
Remove all the occurrences of `sizeof(png_byte)` from the code, and fix
a related typo in the libpng manual.
Also update the main .editorconfig file to reflect the fix expected
by a FIXME note.
CVE: CVE-2025-64505
Upstream-Status: Backport [https://github.com/pnggroup/libpng/commit/0fa3c0f698c2ca618a0fa44e10a822678df85373]
Signed-off-by: Peter Marko <peter.marko@siemens.com>
---
libpng-manual.txt | 4 ++--
libpng.3 | 4 ++--
pngrtran.c | 17 +++++++----------
3 files changed, 11 insertions(+), 14 deletions(-)
diff --git a/libpng-manual.txt b/libpng-manual.txt
index eb24ef483..d2918ce31 100644
--- a/libpng-manual.txt
+++ b/libpng-manual.txt
@@ -1178,11 +1178,11 @@ where row_pointers is an array of pointers to the pixel data for each row:
If you know your image size and pixel size ahead of time, you can allocate
row_pointers prior to calling png_read_png() with
- if (height > PNG_UINT_32_MAX/(sizeof (png_byte)))
+ if (height > PNG_UINT_32_MAX / (sizeof (png_bytep)))
png_error(png_ptr,
"Image is too tall to process in memory");
- if (width > PNG_UINT_32_MAX/pixel_size)
+ if (width > PNG_UINT_32_MAX / pixel_size)
png_error(png_ptr,
"Image is too wide to process in memory");
diff --git a/libpng.3 b/libpng.3
index 57d06f2db..8875b219a 100644
--- a/libpng.3
+++ b/libpng.3
@@ -1697,11 +1697,11 @@ where row_pointers is an array of pointers to the pixel data for each row:
If you know your image size and pixel size ahead of time, you can allocate
row_pointers prior to calling png_read_png() with
- if (height > PNG_UINT_32_MAX/(sizeof (png_byte)))
+ if (height > PNG_UINT_32_MAX / (sizeof (png_bytep)))
png_error(png_ptr,
"Image is too tall to process in memory");
- if (width > PNG_UINT_32_MAX/pixel_size)
+ if (width > PNG_UINT_32_MAX / pixel_size)
png_error(png_ptr,
"Image is too wide to process in memory");
diff --git a/pngrtran.c b/pngrtran.c
index 74cca476b..041f9306c 100644
--- a/pngrtran.c
+++ b/pngrtran.c
@@ -441,7 +441,7 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
int i;
png_ptr->quantize_index = (png_bytep)png_malloc(png_ptr,
- (png_alloc_size_t)((png_uint_32)num_palette * (sizeof (png_byte))));
+ (png_alloc_size_t)num_palette);
for (i = 0; i < num_palette; i++)
png_ptr->quantize_index[i] = (png_byte)i;
}
@@ -458,7 +458,7 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
/* Initialize an array to sort colors */
png_ptr->quantize_sort = (png_bytep)png_malloc(png_ptr,
- (png_alloc_size_t)((png_uint_32)num_palette * (sizeof (png_byte))));
+ (png_alloc_size_t)num_palette);
/* Initialize the quantize_sort array */
for (i = 0; i < num_palette; i++)
@@ -592,11 +592,9 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
/* Initialize palette index arrays */
png_ptr->index_to_palette = (png_bytep)png_malloc(png_ptr,
- (png_alloc_size_t)((png_uint_32)num_palette *
- (sizeof (png_byte))));
+ (png_alloc_size_t)num_palette);
png_ptr->palette_to_index = (png_bytep)png_malloc(png_ptr,
- (png_alloc_size_t)((png_uint_32)num_palette *
- (sizeof (png_byte))));
+ (png_alloc_size_t)num_palette);
/* Initialize the sort array */
for (i = 0; i < num_palette; i++)
@@ -761,12 +759,11 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
size_t num_entries = ((size_t)1 << total_bits);
png_ptr->palette_lookup = (png_bytep)png_calloc(png_ptr,
- (png_alloc_size_t)(num_entries * (sizeof (png_byte))));
+ (png_alloc_size_t)(num_entries));
- distance = (png_bytep)png_malloc(png_ptr, (png_alloc_size_t)(num_entries *
- (sizeof (png_byte))));
+ distance = (png_bytep)png_malloc(png_ptr, (png_alloc_size_t)num_entries);
- memset(distance, 0xff, num_entries * (sizeof (png_byte)));
+ memset(distance, 0xff, num_entries);
for (i = 0; i < num_palette; i++)
{


@@ -0,0 +1,163 @@
From ea094764f3436e3c6524622724c2d342a3eff235 Mon Sep 17 00:00:00 2001
From: Cosmin Truta <ctruta@gmail.com>
Date: Sat, 8 Nov 2025 17:16:59 +0200
Subject: [PATCH] Fix a memory leak in function `png_set_quantize`; refactor
Release the previously-allocated array `quantize_index` before
reallocating it. This avoids leaking memory when the function
`png_set_quantize` is called multiple times on the same `png_struct`.
This function assumed single-call usage, but fuzzing revealed that
repeated calls would overwrite the pointers without freeing the
original allocations, leaking 256 bytes per call for `quantize_index`
and additional memory for `quantize_sort` when histogram-based
quantization is used.
Also remove the array `quantize_sort` from the list of `png_struct`
members and make it a local variable. This array is initialized,
used and released exclusively inside the function `png_set_quantize`.
Reported-by: Samsung-PENTEST <Samsung-PENTEST@users.noreply.github.com>
Analyzed-by: degrigis <degrigis@users.noreply.github.com>
Reviewed-by: John Bowler <jbowler@acm.org>
CVE: CVE-2025-64505
Upstream-Status: Backport [https://github.com/pnggroup/libpng/commit/ea094764f3436e3c6524622724c2d342a3eff235]
Signed-off-by: Peter Marko <peter.marko@siemens.com>
---
pngrtran.c | 43 +++++++++++++++++++++++--------------------
pngstruct.h | 1 -
2 files changed, 23 insertions(+), 21 deletions(-)
diff --git a/pngrtran.c b/pngrtran.c
index 1809db704..4632dd521 100644
--- a/pngrtran.c
+++ b/pngrtran.c
@@ -440,6 +440,12 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
{
int i;
+ /* Initialize the array to index colors.
+ *
+ * Be careful to avoid leaking memory. Applications are allowed to call
+ * this function more than once per png_struct.
+ */
+ png_free(png_ptr, png_ptr->quantize_index);
png_ptr->quantize_index = (png_bytep)png_malloc(png_ptr,
(png_alloc_size_t)num_palette);
for (i = 0; i < num_palette; i++)
@@ -454,15 +460,14 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
* Perhaps not the best solution, but good enough.
*/
- int i;
+ png_bytep quantize_sort;
+ int i, j;
- /* Initialize an array to sort colors */
- png_ptr->quantize_sort = (png_bytep)png_malloc(png_ptr,
+ /* Initialize the local array to sort colors. */
+ quantize_sort = (png_bytep)png_malloc(png_ptr,
(png_alloc_size_t)num_palette);
-
- /* Initialize the quantize_sort array */
for (i = 0; i < num_palette; i++)
- png_ptr->quantize_sort[i] = (png_byte)i;
+ quantize_sort[i] = (png_byte)i;
/* Find the least used palette entries by starting a
* bubble sort, and running it until we have sorted
@@ -474,19 +479,18 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
for (i = num_palette - 1; i >= maximum_colors; i--)
{
int done; /* To stop early if the list is pre-sorted */
- int j;
done = 1;
for (j = 0; j < i; j++)
{
- if (histogram[png_ptr->quantize_sort[j]]
- < histogram[png_ptr->quantize_sort[j + 1]])
+ if (histogram[quantize_sort[j]]
+ < histogram[quantize_sort[j + 1]])
{
png_byte t;
- t = png_ptr->quantize_sort[j];
- png_ptr->quantize_sort[j] = png_ptr->quantize_sort[j + 1];
- png_ptr->quantize_sort[j + 1] = t;
+ t = quantize_sort[j];
+ quantize_sort[j] = quantize_sort[j + 1];
+ quantize_sort[j + 1] = t;
done = 0;
}
}
@@ -498,18 +502,18 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
/* Swap the palette around, and set up a table, if necessary */
if (full_quantize != 0)
{
- int j = num_palette;
+ j = num_palette;
/* Put all the useful colors within the max, but don't
* move the others.
*/
for (i = 0; i < maximum_colors; i++)
{
- if ((int)png_ptr->quantize_sort[i] >= maximum_colors)
+ if ((int)quantize_sort[i] >= maximum_colors)
{
do
j--;
- while ((int)png_ptr->quantize_sort[j] >= maximum_colors);
+ while ((int)quantize_sort[j] >= maximum_colors);
palette[i] = palette[j];
}
@@ -517,7 +521,7 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
}
else
{
- int j = num_palette;
+ j = num_palette;
/* Move all the used colors inside the max limit, and
* develop a translation table.
@@ -525,13 +529,13 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
for (i = 0; i < maximum_colors; i++)
{
/* Only move the colors we need to */
- if ((int)png_ptr->quantize_sort[i] >= maximum_colors)
+ if ((int)quantize_sort[i] >= maximum_colors)
{
png_color tmp_color;
do
j--;
- while ((int)png_ptr->quantize_sort[j] >= maximum_colors);
+ while ((int)quantize_sort[j] >= maximum_colors);
tmp_color = palette[j];
palette[j] = palette[i];
@@ -569,8 +573,7 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
}
}
}
- png_free(png_ptr, png_ptr->quantize_sort);
- png_ptr->quantize_sort = NULL;
+ png_free(png_ptr, quantize_sort);
}
else
{
diff --git a/pngstruct.h b/pngstruct.h
index 084422bc1..fe5fa0415 100644
--- a/pngstruct.h
+++ b/pngstruct.h
@@ -413,7 +413,6 @@ struct png_struct_def
#ifdef PNG_READ_QUANTIZE_SUPPORTED
/* The following three members were added at version 1.0.14 and 1.2.4 */
- png_bytep quantize_sort; /* working sort array */
png_bytep index_to_palette; /* where the original index currently is
in the palette */
png_bytep palette_to_index; /* which original index points to this


@@ -0,0 +1,52 @@
From 6a528eb5fd0dd7f6de1c39d30de0e41473431c37 Mon Sep 17 00:00:00 2001
From: Cosmin Truta <ctruta@gmail.com>
Date: Sat, 8 Nov 2025 23:58:26 +0200
Subject: [PATCH] Fix a buffer overflow in `png_do_quantize`
Allocate the quantize_index array to PNG_MAX_PALETTE_LENGTH (256 bytes)
instead of num_palette bytes. This approach matches the allocation
pattern for `palette[]`, `trans_alpha[]` and `riffled_palette[]` which
were similarly oversized in libpng 1.2.1 to prevent buffer overflows
from malformed PNG files with out-of-range palette indices.
Out-of-range palette indices `index >= num_palette` will now read
identity-mapped values from the `quantize_index` array (where index N
maps to palette entry N). This prevents undefined behavior while
avoiding runtime bounds checking overhead in the performance-critical
pixel processing loop.
Reported-by: Samsung-PENTEST <Samsung-PENTEST@users.noreply.github.com>
Analyzed-by: degrigis <degrigis@users.noreply.github.com>
CVE: CVE-2025-64505
Upstream-Status: Backport [https://github.com/pnggroup/libpng/commit/6a528eb5fd0dd7f6de1c39d30de0e41473431c37]
Signed-off-by: Peter Marko <peter.marko@siemens.com>
---
pngrtran.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/pngrtran.c b/pngrtran.c
index 4632dd521..9c2475fde 100644
--- a/pngrtran.c
+++ b/pngrtran.c
@@ -441,14 +441,18 @@ png_set_quantize(png_structrp png_ptr, png_colorp palette,
int i;
/* Initialize the array to index colors.
+ *
+ * Ensure quantize_index can fit 256 elements (PNG_MAX_PALETTE_LENGTH)
+ * rather than num_palette elements. This is to prevent buffer overflows
+ * caused by malformed PNG files with out-of-range palette indices.
*
* Be careful to avoid leaking memory. Applications are allowed to call
* this function more than once per png_struct.
*/
png_free(png_ptr, png_ptr->quantize_index);
png_ptr->quantize_index = (png_bytep)png_malloc(png_ptr,
- (png_alloc_size_t)num_palette);
- for (i = 0; i < num_palette; i++)
+ PNG_MAX_PALETTE_LENGTH);
+ for (i = 0; i < PNG_MAX_PALETTE_LENGTH; i++)
png_ptr->quantize_index[i] = (png_byte)i;
}
