Compare commits

77 Commits

Author SHA1 Message Date
Richard Purdie
d3cda9a3e0 build-appliance-image: Update to langdale head revision
(From OE-Core rev: 9237ffc4feee2dd6ff5bdd672072509ef9e82f6d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 16:30:23 +00:00
Michael Opdenacker
80d22fc07f create-spdx.bbclass: remove unused SPDX_INCLUDE_PACKAGED
[YOCTO #14948]

(From OE-Core rev: 88ca1b07abf1a8641a0eb8382e9322349a150c98)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 89f1abd5e00807cf179ddf658f74d48119523b0c)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
ciarancourtney
c35857bd24 wic: swap partitions are not added to fstab
- Regression in 7aa678ce804c21dc1dc51b9be442671bc33c4041

(From OE-Core rev: 8fdb75c0f0f7458305ccae657cf2722520e00572)

Signed-off-by: Ciaran Courtney <ciaran.courtney@activeenergy.ie>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit f1243572ad6b6303fe562e4eb7a9826fd51ea3c3)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
Ross Burton
d8917f76bc sanity: check for GNU tar specifically
We need the system tar to be GNU tar, as we rely on --xattrs. Some
distributions may be using libarchive's tar binary, which is definitely
not as featureful, so check for this and abort early with a clear
message instead of later with mysterious errors.
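
A check along these lines could be implemented roughly as follows (a minimal
sketch of the idea, not the actual sanity.bbclass code):

    # Minimal sketch; the real sanity check is more thorough.
    import subprocess, sys

    first_line = subprocess.check_output(["tar", "--version"]).decode().splitlines()[0]
    if "GNU tar" not in first_line:
        sys.exit("System tar is not GNU tar (--xattrs support is required); aborting.")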

(From OE-Core rev: fd92cdc6d2b9b3b808503b3274860a7c301587cb)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 7dd2b1cd1bb10e67485dab8600c0787df6c2eee7)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
Alexander Kanavin
d5add7c5b7 quilt: backport a patch to address grep 3.8 failures
(From OE-Core rev: a46aad035d800193b740bad2431ce30fae736a23)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit b5001af5c711a373bd2f1ea108c8b597dd40faca)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
Bernhard Rosenkränzer
e65081b949 cmake-native: Fix host tool contamination
[v2 hopefully fixes the From: mangling by the ML, no functional changes]

Trying to build cmake-native on a host system where curl was built with cmake
(resulting in CURLConfig.cmake and friends, which do not use the same naming
schemes expected by cmake-native's build process, being installed to a system
wide cmake directory like /usr/lib64/cmake/CURL) results in undefined
references to all libcurl symbols.

The problem is that cmake-native sees and uses the system wide
/usr/lib64/cmake/CURL/CURLConfig.cmake, which defines CURL::libcurl and
CURL::curl as opposed to setting ${CURL_LIBRARIES} as expected by
cmake-native.

find_package(CURL) (cmake-native's CMakeLists.txt, line 478) succeeds, but
incorrectly uses the system wide CURLConfig.cmake, resulting in
CMAKE_CURL_LIBRARIES being set to an empty string (cmake-native's
CMakeLists.txt, line 484), causing the cmake-native build to miss -lcurl.

The simplest fix is to let cmake know the right value for
CURL_LIBRARIES. Making it -lcurl should always work with libcurl-native
in recipe-sysroot-native.
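
A minimal sketch of that kind of fix, assuming it is passed in through the
recipe's CMake arguments (the actual change may be shaped differently):

    # Hypothetical recipe fragment: tell cmake which curl library to link.
    EXTRA_OECMAKE:append:class-native = " -DCURL_LIBRARIES=-lcurl"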

[YOCTO #14951]

(From OE-Core rev: 62b117c382ffd65f6c5d808699b664f70ba6f2d8)

Signed-off-by: Bernhard Rosenkränzer <bero@baylibre.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 2659c735a464c956b4fca0894a5aed27a0fe7e37)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
Alexander Kanavin
027ec0ecf5 lttng-modules: upgrade 2.13.4 -> 2.13.5
2022-08-19 (National Potato Day) LTTng modules 2.13.5
	* Fix: incorrect stub prototypes when CONFIG_HAVE_SYSCALL_TRACEPOINTS=n
	* fix: mm/tracing: add 'accounted' entry into output of allocation tracepoints (v6.0)
	* fix: block: remove bdevname (v6.0)
	* fix: fs/jbd2: Fix the documentation of the jbd2_write_superblock() callers (v6.0)
	* fix: tie compaction probe build to CONFIG_COMPACTION
	* fix: net: skb: introduce kfree_skb_reason() (v5.15.58..v5.16)
	* fix: workqueue: Fix type of cpu in trace event (v5.19)
	* fix: fs: Remove flags parameter from aops->write_begin (v5.19)
	* fix: mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked() (v5.19)

(From OE-Core rev: cbb85f35d342ffd1c8a0f147f139a8d1a3084aae)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 335c60e76b341014bd69eaac0a4b281036a94916)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
Alexander Kanavin
54fb46c66e shadow: update 4.12.1 -> 4.12.3
4.12.2 changes
	* Address CVE-2013-4235
	* Fix uk manpages

4.12.3 changes
	* Revert the removal of subid_init as pointed out by Balint.
	* Address CVE-2013-4235 (TOCTTOU when copying directories)

(From OE-Core rev: 30fe8df131a3ef5efa5c35e69fce7b2d1bdc2f7d)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 5b9fc88d06f79e8dbd2375172689f2fbf3e2a8a3)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
Ross Burton
63e80a0233 sudo: backport fix for CVE-2022-43995
(From OE-Core rev: a41a5f310246dcd9dbdb4537d59bc0579c3b1052)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
Ross Burton
c689d5d4e3 pixman: backport fix for CVE-2022-44638
(From OE-Core rev: 23df4760ebc153c484d467e51b414910c570a6f8)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
Robert Joslyn
2ac597044a curl: Backport CVE fixes
Backport fixes for:
 - CVE-2022-32221 POST following PUT confusion
 - CVE-2022-35260 .netrc parser out-of-bounds access
 - CVE-2022-42915 HTTP proxy double-free
 - CVE-2022-42916 HSTS bypass via IDN

(From OE-Core rev: 724c8b65fe307af602b6bf7e3704dfb25bc51ee9)

Signed-off-by: Robert Joslyn <robert.joslyn@redrectangle.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-14 15:59:15 +00:00
Mark Asselstine
79434a17eb bitbake: bitbake: bitbake-layers: checkout layer(s) branch when clone exists
[YOCTO #7852]

Fixes 'bitbake-layers layerindex-fetch --branch kirkstone meta-arm'
not checking out the branch if the repo is already cloned and on a
different branch.

If a clone of a layer being added already exists, check what branch it
is on and, if necessary, attempt to switch to the given branch. If the
switch fails, the git error will be reported. We also warn if
there are uncommitted changes, as those might go unnoticed and
result in unexpected behavior.
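
The branch handling could look roughly like this (an illustrative sketch,
not the actual bitbake-layers code):

    # Illustrative sketch only; error handling simplified.
    import subprocess

    def ensure_branch(repodir, branch):
        head = subprocess.check_output(
            ["git", "-C", repodir, "rev-parse", "--abbrev-ref", "HEAD"]).decode().strip()
        if subprocess.check_output(["git", "-C", repodir, "status", "--porcelain"]):
            print("warning: uncommitted changes in %s" % repodir)
        if head != branch:
            # a failure here surfaces the underlying git error to the user
            subprocess.check_call(["git", "-C", repodir, "checkout", branch])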

(Bitbake rev: 138dd7883ee2c521900b29985b6d24a23d96563c)

Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d2cb388f58a37db2149fad34e4572d954e6e5441)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-10 14:43:30 +00:00
Justin Bronder
25f355e0ef bitbake: asyncrpc: serv: correct closed client socket detection
If the client socket is closed, asyncio.StreamReader.readline() will
return an empty bytes object, not None.

This prevents multiple tracebacks being logged by bitbake-hashserv each
time bitbake is started and performs a connection check.
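
In asyncio the end of a stream is signalled by an empty bytes object, so the
corrected check is roughly (illustrative sketch, not the actual asyncrpc code):

    # Illustrative sketch: detect a closed client socket correctly.
    import asyncio

    async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
        line = await reader.readline()
        if not line:  # b'' means EOF (socket closed); readline() never returns None
            writer.close()
            await writer.wait_closed()
            return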

(Bitbake rev: 4bdd9ba43f34a1473db31a6a3b10bd33e358fe3a)

Signed-off-by: Justin Bronder <jsbronder@cold-front.org>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2d07f252704dff7747fa1f9adf223a452806717f)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-10 14:43:30 +00:00
Ross Burton
186d179614 bitbake: fetch2/git: don't set core.fsyncobjectfiles=0
This git configuration variable is deprecated from 2.36.0 onwards, so git
warns in the logs for every git call.

Luckily the default value has always been false[1], so we can just remove
this.

[ YOCTO #14939 ]

[1] aafe9fbaf4

(Bitbake rev: 13f86aeb53cd73c03bfb2f00fe923b51ec8d1c73)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8ad310633e0c5d5593631c1196cbdde30147efce)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-10 14:43:30 +00:00
Michael Opdenacker
975e3fb53c bitbake: bitbake-user-manual: details about variable flags starting with underscore
Fixes [YOCTO #14140]

(Bitbake rev: 8a08e207854810b40b53946ec94065a6a560a7a5)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0f3e9d87168813ce49995ff04bccdce11c5f7b47)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-10 14:43:30 +00:00
Steve Sakoman
e881560619 poky.conf: bump version for 4.1.1
(From meta-yocto rev: e911b760d279774d8ab24529a2ffd82c02976feb)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:56 +00:00
Etienne Cordonnier
6054c58908 mirrors.bbclass: use shallow tarball for binutils-native
This is useful e.g. when using meta-clang, which introduces a dependency on
binutils-native, so a full tarball of binutils is fetched in addition to the
shallow tarball.

The original BB_GIT_SHALLOW lines were added because of https://www.mail-archive.com/yocto@lists.yoctoproject.org/msg08752.html
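
The added lines would be of this shape (a sketch; the exact lines in
mirrors.bbclass may differ):

    # Sketch: fetch binutils-native from a shallow clone too.
    BB_GIT_SHALLOW:pn-binutils-native = "1"
    BB_GIT_SHALLOW_DEPTH:pn-binutils-native = "1"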

(From OE-Core rev: 0eee57ef03908c04e1567889f72d7187b5c1f657)

Signed-off-by: Etienne Cordonnier <ecordonnier@snap.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bd83b8b502ae935c75b59aaf71bbb531c9771dcc)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Alexander Kanavin
cb9d9fd076 rust: install rustfmt for riscv32 as well
With the above rust arch fixes it builds just fine.

(From OE-Core rev: 655b9a0bbe07b33db8aa6ebf7c49f3d9074cc5e0)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f417ae30c79fac99e2549324ed351f6f63cc4a25)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Alexander Kanavin
7b401c7540 rust-target-config: match riscv target names with what rust expects
Official rust risc-v targets are prefixed with riscv32gc- and riscv64gc-:
https://doc.rust-lang.org/nightly/rustc/platform-support.html

In particular, crossbeam-utils makes important build-time decisions
about atomics based on those names, so we need to match ours
with the official targets.

On the other hand, the actual definitions for those targets do not
use the 'gc' suffix in the 'arch' and 'llvm-target' fields, so we
need to follow that too, to avoid cryptic mismatch errors from rust-llvm:
https://github.com/rust-lang/rust/blob/master/compiler/rustc_target/src/spec/riscv32gc_unknown_linux_gnu.rs
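
Conceptually, the triple carries the 'gc' suffix while the target-spec fields
do not; something like this (values illustrative, not the class content):

    # Illustrative sketch of the naming split for riscv64.
    rust_triple = "riscv64gc-unknown-linux-gnu"     # what crates match on
    target_spec = {
        "arch": "riscv64",                          # no 'gc' suffix here
        "llvm-target": "riscv64-unknown-linux-gnu"  # nor here, for rust-llvm
    }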

(From OE-Core rev: 2daa8d76369cd06e5c357e393e3145e08f3d6760)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1cfb9c8a59d98ccc9b0510cd28fb933f72fb6b6c)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Sean Anderson
62c4b68a11 kernel-fitimage: Use KERNEL_OUTPUT_DIR where appropriate
We have a specific variable for the path to the boot directory. Use it
instead of open-coding this path.
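
A sketch of the shape of the change, assuming KERNEL_OUTPUT_DIR carries its
usual kernel.bbclass default:

    # Assumption: this is the default set in kernel.bbclass.
    KERNEL_OUTPUT_DIR ?= "arch/${ARCH}/boot"
    # references like arch/${ARCH}/boot/fitImage then become:
    #   ${KERNEL_OUTPUT_DIR}/fitImage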

(From OE-Core rev: dda8017274e71daa7aa4d8a3a15e128df213b0de)

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 725b75e83bc2b2111f2ab5103b7e7f60d6d3f34e)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Sean Anderson
1b5b1ba8fb kernel: Clear SYSROOT_DIRS instead of replacing sysroot_stage_all
Replacing sysroot_stage_all with a no-op makes it difficult for
bbappends to stage files intentionally. Instead, just clear
SYSROOT_DIRS, allowing bbappends to easily add new directories.
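
A minimal sketch of the change, assuming the old no-op looked as shown:

    # before: staging disabled wholesale, hard for bbappends to override
    #   sysroot_stage_all() {
    #       :
    #   }
    # after: stage nothing by default, but let bbappends extend the list
    SYSROOT_DIRS = ""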

(From OE-Core rev: d9081df0dc62f733bef643340af678eeba74fe89)

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 849791e7086463a4c7c53c2c1ed9603a6c3a080d)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Sean Anderson
6bded7cb12 uboot-sign: Fix using wrong KEY_REQ_ARGS
When generating our SPL-verifying certificate, we use FIT_KEY_REQ_ARGS,
which is intended for the U-Boot-verifying certificate. Instead, use
UBOOT_FIT_KEY_REQ_ARGS.

Fixes: 0e6b0fefa0 ("u-boot: Use a different Key for SPL signing")
(From OE-Core rev: f01b15fcffd1a628a17caf1e94753c8cd09ea48f)

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit a2d939ccb182a1ad29280d236b9f9e1d09527af1)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Jose Quaresma
d18ec217b3 kernel-yocto: improve fatal error messages of symbol_why.py
Improve the fatal error message of the yocto-kernel-tools symbol_why.py
and shows the command that generate the error as it can help understand
the root cause of the error.

(From OE-Core rev: 97cb48ce09d80e5496e4f887a8cf02125c66c6c5)

Signed-off-by: Jose Quaresma <jose.quaresma@foundries.io>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 54ae08779071f2e97bff0ff6514ede3124312c3b)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Claus Stovgaard
0771c25330 gstreamer1.0-libav: fix errors with ffmpeg 5.x
Backport of a patch already present upstream to fix issues with invalid
characters for GLIB when combining gstreamer1.0-libav with ffmpeg 5.x.

Remove when gstreamer1.0-libav is upgraded to 1.21.1 or above.

(From OE-Core rev: 8a837dba82d6e665406c2ee0543ee0135fe2ae3a)

Signed-off-by: Claus Stovgaard <claus.stovgaard@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 703ff945557ad307bbe4ba0b0b7f1a2e5b4b847e)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Peter Kjellerstedt
f02d7f4547 externalsrc.bbclass: Remove a trailing slash from ${B}
The trailing slash in ${B} caused -fdebug-prefix-map=${B}=... to not
match as intended, resulting in ${TMPDIR} ending up in files in
${PN}-dbg when externalsrc was in use, which in turn triggered buildpath
QA warnings.
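
The change amounts to dropping the trailing slash from the ${B} assignment
(sketch, assuming the assignment in externalsrc.bbclass had this shape):

    # before:
    #   B = "${WORKDIR}/${BPN}-${PV}/"
    # after:
    B = "${WORKDIR}/${BPN}-${PV}"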

(From OE-Core rev: c7e94e74eceef0b22d09d80d0da6ddcd86d9b12e)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 9b5031ed5a0d102905fa75acc418246c23df6eef)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Sergei Zhmylev
d051fc188b wic: honor the SOURCE_DATE_EPOCH in case of updated fstab
If the user requested a binary-reproducible build, the SOURCE_DATE_EPOCH
environment variable must be honored. So forcefully set the mtime inside
all the routines that modify fstab, in case it is updated.
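
Forcing the mtime could look roughly like this (illustrative sketch, not the
actual wic code):

    # Illustrative sketch: pin fstab's mtime to SOURCE_DATE_EPOCH if set.
    import os

    def pin_mtime(path):
        sde = os.environ.get("SOURCE_DATE_EPOCH")
        if sde:
            t = int(sde)
            os.utime(path, (t, t))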

(From OE-Core rev: 4d3f43fe06186b6580395a161fdbc4470b8aab62)

Signed-off-by: Sergei Zhmylev <s.zhmylev@yadro.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 99719a3712a88dce8450994d995803e126e49115)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Martin Jansa
22c5e7fa3e externalsrc.bbclass: fix git repo detection
* fix issue introduced in:
  https://git.openembedded.org/openembedded-core/commit/?id=95fbac8dcad6c93f4c9737e9fe13e92ab6befa09

* it added a check that s_dir + git-dir (typically '.git') isn't
  the same as ${TOPDIR} + git-dir, but due to a copy-paste issue
  it was just comparing it with s_dir + git-dir again, causing
  most external repos (where git-dir is '.git') to be processed
  as regular directories (not taking advantage of git write-tree).

* normally this wouldn't be an issue, but for a big repo with a lot of
  files this added a lot of checksums in:
  d.setVarFlag('do_compile', 'file-checksums', '${@srctree_hash_files(d)}')

  and I mean *a lot*, e.g. in a chromium build it was 380227 paths,
  which still wouldn't be that bad, but the checksum processing in
  siggen.py isn't trivial, and just looping through all these
  checksums takes a very long time (over 1000 sec on a fast NVMe drive
  with a warm cache), and then
  https://git.openembedded.org/bitbake/commit/?id=b4975d2ecf615ac4c240808fbc5a3f879a93846b
  made the processing a bit more complicated and the loop in the
  get_taskhash() function took 6448 sec; to make things worse
  there was no output from bitbake during that time, so even with -DDD
  it looks like this:

  DEBUG: virtual/libgles2 resolved to: mesa (langdale/oe-core/meta/recipes-graphics/mesa/mesa_22.2.0.bb)
  Bitbake still alive (no events for 600s). Active tasks:
  Bitbake still alive (no events for 1200s). Active tasks:
  Bitbake still alive (no events for 1800s). Active tasks:
  Bitbake still alive (no events for 2400s). Active tasks:
  Bitbake still alive (no events for 3000s). Active tasks:
  Bitbake still alive (no events for 3600s). Active tasks:
  Bitbake still alive (no events for 4200s). Active tasks:
  Bitbake still alive (no events for 4800s). Active tasks:
  Bitbake still alive (no events for 5400s). Active tasks:
  Bitbake still alive (no events for 6000s). Active tasks:
  DEBUG: Starting bitbake-worker

  without -DDD it will get stuck for almost 2 hours in:
  "Initialising tasks..."
  before it finally writes an sstate summary like:
  "Sstate summary: Wanted 3102 Local 0 Mirrors 0 Missed 3102 Current 1483 (0% match, 32% complete)"

* fix the copy&paste typo so that git write-tree is used in most cases
  (a sketch of the corrected comparison follows below), but be aware
  that this issue still exists for huge local source trees not in git

[YOCTO #14942]
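
A sketch of the corrected comparison (names illustrative, not the actual
externalsrc.bbclass code):

    # Sketch of the fix: compare against TOPDIR, not against s_dir twice.
    import os

    def is_separate_git_repo(s_dir, topdir, git_dir=".git"):
        # the buggy version compared os.path.join(s_dir, git_dir) with
        # itself, so it never detected a usable standalone git work tree
        return os.path.join(s_dir, git_dir) != os.path.join(topdir, git_dir)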

(From OE-Core rev: 43d3a1a314cf4cab1b384ebf81e10610f18ed12c)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 9102e5a94b8146cb1da27afbe41d3db999a914ff)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Luca Boccassi
cd873bc5de systemd: add systemd-creds and systemd-cryptenroll to systemd-extra-utils
ERROR: systemd-1_251.4-r0 do_package: QA Issue: systemd: Files/directories were installed but not shipped in any package:
  /usr/bin/systemd-creds
  /usr/bin/systemd-cryptenroll
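
The packaging fix is of this shape (sketch; the exact recipe lines may
differ):

    # Sketch: ship the new tools in the existing extra-utils package.
    FILES:${PN}-extra-utils += "\
        ${bindir}/systemd-creds \
        ${bindir}/systemd-cryptenroll \
    "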

(From OE-Core rev: 34cdacb072644f4bd610c48a789e4001d374e190)

Signed-off-by: Luca Boccassi <luca.boccassi@microsoft.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit b3763dd26d324a7ce575586f306b8aec4b1103b3)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Keiya Nobuta
2f52d04e17 create-spdx: Remove ";name=..." for downloadLocation
(From OE-Core rev: e2258c34a7a587f67b233617613a12fe4549932a)

Signed-off-by: Keiya Nobuta <nobuta.keiya@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit bbecab53d1b27f3bb8c5882cb0ec39b04ef300a3)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Bruce Ashfield
3d58ac1ddd kern-tools: fix relative path processing
The previous fix for processing of paths with relative components broke
use cases that were a mix of patches and configuration fragments.

Update the SRCREV to include a simplified fix for relative paths, and
a cleanup patch from Jose:

[
  Author: Jose Quaresma <quaresma.jose@gmail.com>
  Date:   Thu Sep 29 16:37:23 2022 +0000

      scc: only look for error in scc_output_file if it has valid content

      When the process_file function fails, the output of the processed script is
      shown to the user; some parsing is performed as well to look for common
      errors so we can point to the right input file.

      This can only be done when scc_output_file has some valid content,
      otherwise invalid messages would be shown to the user.

      Signed-off-by: Jose Quaresma <jose.quaresma@foundries.io>
      Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>

  Author: Bruce Ashfield <bruce.ashfield@gmail.com>
  Date:   Wed Oct 5 19:13:33 2022 +0000

      spp: ensure that prefix check uses absolute paths

      The previous fix for this issue was too broad, and impacted
      all calls to the prefix check and removal. With this change,
      we only expand the input on scc/spp operations that may
      execute with relative paths.

      Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
]

(From OE-Core rev: d56e29947176976e172a3e731a6ae37df98af4bb)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 533720a1756454447341769c4a0969fce8d6f287)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Mingli Yu
e6428f3c1c grub: disable build on armv7ve/a with hardfp
The commit (75dbdea940 grub: Allow build on armv7ve/a with softfp)
enabled building on armv7ve/a with softfp, but it actually enabled
building on armv7ve/a with hardfp as well, resulting in the build
failure below:
 | checking for compile options to get strict alignment... -mno-unaligned-access
 | checking if compiler generates unaligned accesses... no
 | checking if C symbols get an underscore after compilation... no
 | checking whether target compiler is working... no
 | configure: error: cannot compile for the target

So update the check to disable building on armv7ve/a with hardfp.

(From OE-Core rev: 3d4bb6b1ba41e83c98e821ddf86e231daec029b1)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit f67b2880fc2cfb21f51216c63b5f24d0524b4278)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Thomas Perrot
db48ca5830 xserver-xorg: move some recommended dependencies in required
Otherwise, xserver will no longer start when NO_RECOMMENDATIONS = "1",
because dependencies in XSERVER_RRECOMMENDS are missing.
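
In recipe terms the move is of this shape (sketch with hypothetical package
names; the real list differs):

    # Hypothetical package names, for illustration only.
    RDEPENDS:${PN} += "xkbcomp xkeyboard-config"
    XSERVER_RRECOMMENDS:remove = "xkbcomp xkeyboard-config"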

(From OE-Core rev: c017175deed298f7fb3fff9181eb4379fcc436d7)

Signed-off-by: Thomas Perrot <thomas.perrot@bootlin.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit bc7bd3953f3896af0db036250cda34bc9ecbb3ac)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Vincent Davis Jr
23cf93f091 linux-firmware: package amdgpu firmware
Add packages for the firmware required by amdgpu kernel driver.
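
A sketch of the packaging shape (the actual recipe split differs in detail):

    # Sketch: split the amdgpu firmware into its own package.
    PACKAGES =+ "${PN}-amdgpu"
    FILES:${PN}-amdgpu = "${nonarch_base_libdir}/firmware/amdgpu/*"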

(From OE-Core rev: bb907ecbc0f513b83163db0985ae9ab3486389f4)

Signed-off-by: Vincent Davis Jr <vince@underview.tech>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 0d7aa21f120a756d1a4fc4ae0be3527b54a58247)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Christian Eggers
9921f0a250 linux-firmware: split rtl8761 firmware
Realtek Bluetooth devices require binary firmware files. Package them
separately in order to avoid installing the full linux-firmware package
on embedded devices.

Affected (end user) products (incomplete list):
- TP-Link UB500
- Logilink BT0054

(From OE-Core rev: 2772f356d4a8b8f31c34a3951814d04fb4f3decb)

Signed-off-by: Christian Eggers <ceggers@arri.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit fb44eb4feef54f2343c8186809a65dcb9b58a9b2)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Martin Jansa
cd89ca53ed vulkan-samples: add lfs=0 to SRC_URI to avoid git smudge errors in do_unpack
* we don't need other_lib/ios/Debug-iphoneos/libSDL2.a from
  https://github.com/KhronosGroup/KTX-Software.git so we can explicitly
  disable LFS here to avoid the do_unpack error; bitbake will then use
  GIT_LFS_SKIP_SMUDGE=1 to override the smudge setting in gitconfig,
  otherwise we would need a bitbake patch to fetch LFS objects from the
  submodules as well

* do_fetch won't fetch LFS objects without lfs being explicitly requested
  in SRC_URI; do_unpack might then run the git smudge filter when it is
  enabled in .gitconfig (or /etc/gitconfig) with:

[filter "lfs"]
       smudge = git-lfs smudge -- %f
       process = git-lfs filter-process
       required = true
       clean = git-lfs clean -- %f

  and do_unpack fails as in:
  http://errors.yoctoproject.org/Errors/Details/672888/
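
The SRC_URI change is of this shape (sketch; URL and branch details are
illustrative):

  SRC_URI = "gitsm://github.com/KhronosGroup/Vulkan-Samples.git;protocol=https;branch=main;lfs=0"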

Ubuntu's default /etc/gitconfig has this added automatically by the
git-lfs postinst:

  root@ljama:~# rm /etc/gitconfig
  root@ljama:~# git lfs install --skip-repo --system
  Git LFS initialized.
  root@ljama:~# cat /etc/gitconfig
  [filter "lfs"]
        clean = git-lfs clean -- %f
        smudge = git-lfs smudge -- %f
        process = git-lfs filter-process
        required = true
  root@ljama:~# cat /var/lib/dpkg/info/git-lfs.postinst

  set -e

  # Set up /etc/gitconfig for git-lfs. The --skip-repo option prevents failure if
  # / is a Git repository with existing non-git-lfs hooks.

  git lfs install --skip-repo --system > /dev/null 2>&1

according to
https://changelogs.ubuntu.com/changelogs/pool/universe/g/git-lfs/git-lfs_3.0.2-1/changelog
it was added in:

git-lfs (2.6.0-1) unstable; urgency=medium

  * New upstream release
  * Bump standards version to 4.2.1
  * Add postinst/prerm to set up/remove git-lfs gitconfig

FWIW: vulkan-samples still fails to build with DEBUG_BUILD enabled:
http://errors.yoctoproject.org/Errors/Details/672892/

(From OE-Core rev: 58f93fcc5364880f11f1d86e0a5a6c5712f6ca6a)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit b45b1f5dba02a626b7e9040d45198bd17dce4c99)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Adrian Freihofer
290ce3525f buildconf: compare abspath
We have something like ${TOPDIR}/../../poky/meta in the bblayers.conf
file. This does not work without normalizing the path for comparison.
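
Normalizing before comparing, roughly (illustrative sketch, not the actual
buildconf code):

    # Illustrative sketch: paths like ${TOPDIR}/../../poky/meta only
    # compare equal after normalization.
    import os

    def same_path(a, b):
        return os.path.abspath(a) == os.path.abspath(b)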

(From OE-Core rev: 803975aff35c9423f4bde4c0201d0f61242389e0)

Signed-off-by: Adrian Freihofer <adrian.freihofer@siemens.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit e0d45bcd34311ae248bac9378f46962198d148ef)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Bruce Ashfield
c3911e12f6 linux-yocto/5.19: update to v5.19.14
Updating to the latest korg -stable release that comprises
the following commits:

    30c780ac0f9f Linux 5.19.14
    b11cc6399c56 damon/sysfs: fix possible memleak on damon_sysfs_add_target
    381eae6b1dc3 x86/alternative: Fix race in try_get_desc()
    1e624467e41f x86/cacheinfo: Add a cpu_llc_shared_mask() UP variant
    b1bad76d6a18 KVM: x86: Hide IA32_PLATFORM_DCA_CAP[31:0] from the guest
    f45380a17be5 perf tests record: Fail the test if the 'errs' counter is not zero
    77d5e98fb6f0 perf test: Fix test case 87 ("perf record tests") for hybrid systems
    b781430cd770 net: ethernet: mtk_eth_soc: fix mask of RX_DMA_GET_SPORT{,_V2}
    c81bca132fc6 net: mscc: ocelot: fix tagged VLAN refusal while under a VLAN-unaware bridge
    79a55020c989 clk: imx93: drop of_match_ptr
    539cc4ac04f8 clk: iproc: Do not rely on node name for correct PLL setup
    61560f315371 drm/i915/gt: Perf_limit_reasons are only available for Gen11+
    b1dd83f321dc clk: imx: imx6sx: remove the SET_RATE_PARENT flag for QSPI clocks
    3e8d61faead0 vdpa/mlx5: Fix MQ to support non power of two num queues
    b3b8359fafb4 virtio-blk: Fix WARN_ON_ONCE in virtio_queue_rq()
    bffdf0421ba8 vdpa/ifcvf: fix the calculation of queuepair
    9f9687bfd884 ice: xsk: drop power of 2 ring size restriction for AF_XDP
    01c2475d0c21 ice: xsk: change batched Tx descriptor cleaning
    dcf42724aacb selftests: Fix the if conditions of in test_extra_filter()
    929a2f6e93a8 net: phy: Don't WARN for PHY_UP state in mdio_bus_phy_resume()
    07eb54aa93d7 net: stmmac: power up/down serdes in stmmac_open/release
    ec1a5138428f wifi: mac80211: fix memory corruption in minstrel_ht_update_rates()
    a84813338208 wifi: mac80211: fix regression with non-QoS drivers
    3afc354e1084 wifi: cfg80211: fix MCS divisor value
    e8027e26ad58 nvme: Fix IOC_PR_CLEAR and IOC_PR_RELEASE ioctls for nvme devices
    ab3abb72bec2 net/mlxbf_gige: Fix an IS_ERR() vs NULL bug in mlxbf_gige_mdio_probe
    8b1d17a8d8ba cxgb4: fix missing unlock on ETHOFLD desc collect fail path
    3687a0c03863 net: sched: act_ct: fix possible refcount leak in tcf_ct_init()
    75b276c0537e usbnet: Fix memory leak in usbnet_disconnect()
    1a39d83193c6 perf parse-events: Remove "not supported" hybrid cache events
    44ff610a3cd4 perf print-events: Fix "perf list" can not display the PMU prefix for some hybrid cache events
    c4a07387a4b0 perf parse-events: Break out tracepoint and printing
    9ac8b5bae9f2 gpio: mvebu: Fix check for pwm support on non-A8K platforms
    314df2265c04 Input: melfas_mip4 - fix return value check in mip4_probe()
    3cd81a694233 Revert "drm: bridge: analogix/dp: add panel prepare/unprepare in suspend/resume time"
    e126ad29ec71 net: macb: Fix ZynqMP SGMII non-wakeup source resume failure
    5c94fcc0e87f drm/bridge: lt8912b: fix corrupted image output
    6fe84153067b drm/bridge: lt8912b: set hdmi or dvi mode
    8d2b780e1ed6 drm/bridge: lt8912b: add vsync hsync
    18bf2334b0b3 ASoC: tas2770: Reinit regcache on reset
    a0977f22b8a7 arm64: dts: qcom: sm8350: fix UFS PHY serdes size
    2c8028dd3f8a clk: microchip: mpfs: make the rtc's ahb clock critical
    ab5081ce9f9c clk: microchip: mpfs: fix clk_cfg array bounds violation
    8e8516fe1a64 ASoC: imx-card: Fix refcount issue with of_node_put
    1317541f0dae soc: sunxi: sram: Fix debugfs info for A64 SRAM C
    450080540800 soc: sunxi: sram: Fix probe function ordering issues
    e4768a5b0a30 soc: sunxi: sram: Prevent the driver from being unbound
    44a9633e9e16 soc: sunxi: sram: Actually claim SRAM regions
    24d6230edfc2 ARM: dts: am5748: keep usb4_tm disabled
    6f1364939969 reset: imx7: Fix the iMX8MP PCIe PHY PERST support
    38d9f71a04c2 ARM: dts: am33xx: Fix MMCHS0 dma properties
    3cf3c17fd66f media: v4l2-compat-ioctl32.c: zero buffer passed to v4l2_compat_get_array_args()
    fa20a7dcd56b media: mediatek: vcodec: Drop platform_get_resource(IORESOURCE_IRQ)
    b901652568f3 media: rkvdec: Disable H.264 error detection
    3a35e67f6b29 media: dvb_vb2: fix possible out of bound access
    e88a7c1831d4 mm,hwpoison: check mm when killing accessing process
    fcc9261c2b5f mm/hugetlb: correct demote page offset logic
    f9cedf6b357e mm: bring back update_mmu_cache() to finish_fault()
    3094c01fb1e3 mm: fix madivse_pageout mishandling on non-LRU page
    edbaf99db91b mm/migrate_device.c: copy pte dirty bit to page
    e85ab5ae17bd mm/migrate_device.c: add missing flush_cache_page()
    ca7d59a4b5f3 mm/migrate_device.c: flush TLB while holding PTL
    82a00edf23e4 mm: fix dereferencing possible ERR_PTR
    a346ba002906 mm/page_isolation: fix isolate_single_pageblock() isolation behavior
    faee7721e795 mm: prevent page_frag_alloc() from corrupting the memory
    0cddc19ddb05 mm/page_alloc: fix race condition between build_all_zonelists and page allocation
    437484d936ac mm: gup: fix the fast GUP race against THP collapse
    c6e2a0587215 mmc: hsq: Fix data stomping during mmc recovery
    1209607a7133 mmc: moxart: fix 4-bit bus width and remove 8-bit bus width
    b41808bfa049 mptcp: fix unreleased socket in accept queue
    5368d1a17ec5 mptcp: factor out __mptcp_close() without socket lock
    16220537557a mm: fix BUG splat with kvmalloc + GFP_ATOMIC
    2c5a04961201 libata: add ATA_HORKAGE_NOLPM for Pioneer BDR-207M and BDR-205
    38d854c4a11c vduse: prevent uninitialized memory accesses
    4a1230f34f06 drm/amdgpu: Add amdgpu suspend-resume code path under SRIOV
    5af714ceebae drm/i915/gt: Restrict forced preemption to the active context
    193153f7cc2a powerpc/64s/radix: don't need to broadcast IPI for radix pmd collapse flush
    a129cce68908 Revert "firmware: arm_scmi: Add clock management to the SCMI power domain"
    b48abca42e0a net: mt7531: only do PLL once after the reset
    1f4ceb7daf36 mm/damon/dbgfs: fix memory leak when using debugfs_lookup()
    decf4f5c01a8 x86/uaccess: avoid check_object_size() in copy_from_user_nmi()
    9653cc040a7d ntfs: fix BUG_ON in ntfs_lookup_inode_by_name()
    d0ebc7ef65e3 ARM: dts: integrator: Tag PCI host with device_type
    76335c4156ed frontswap: don't call ->init if no ops are registered
    26a1ca1f9fbb x86/sgx: Do not fail on incomplete sanitization on premature stop of ksgxd
    19f89548ed86 wifi: mac80211: ensure vif queues are operational after start
    d87926a7448c clk: ingenic-tcu: Properly enable registers before accessing timers
    179fd43179a1 can: c_can: don't cache TX messages for C_CAN cores
    983dd7223a9b Input: snvs_pwrkey - fix SNVS_HPVIDR1 register address
    6aac871bca33 net: usb: qmi_wwan: Add new usb-id for Dell branded EM7455
    266ee6ee24ea thunderbolt: Explicitly reset plug events delay back to USB4 spec value
    d916978b6976 usb: typec: ucsi: Remove incorrect warning
    1d54281c91d7 uas: ignore UAS for Thinkplus chips
    be014a8d8925 usb-storage: Add Hiksemi USB3-FW to IGNORE_UAS
    4a66ab5bfaea uas: add no-uas quirk for Hiksemi usb_disk
    a3ed03b3ce4d counter: 104-quad-8: Fix skipped IRQ lines during events configuration
    036eeda2212a counter: 104-quad-8: Implement and utilize register structures
    c096ac781807 counter: 104-quad-8: Utilize iomap interface
    640e0b97dfa9 perf record: Fix cpu mask bit setting for mixed mmaps
    bcd04b006c78 tools/perf: Fix out of bound access to cpu mask array
    d948e6c57793 riscv: make t-head erratas depend on MMU
    1bae99844613 Linux 5.19.13
    781e43179640 Revert "drm/i915: Extract intel_edp_fixup_vbt_bpp()"
    da42e25ec54a Revert "drm/i915/pps: Split pps_init_delays() into distinct parts"
    5f86062caf4d Revert "drm/i915/bios: Split parse_driver_features() into two parts"
    3f2631ce3c8f Revert "drm/i915/bios: Split VBT parsing to global vs. panel specific parts"
    139d38c14725 Revert "drm/i915/bios: Split VBT data into per-panel vs. global parts"
    10c7b3919e6d Revert "drm/i915/dsi: filter invalid backlight and CABC ports"
    bef6a9b54730 Revert "drm/i915/dsi: fix dual-link DSI backlight and CABC ports for display 11+"
    9182c86a0456 Revert "drm/i915/display: Fix handling of enable_psr parameter"
    58df6af8cea3 Linux 5.19.12
    547262c5b373 ext4: make directory inode spreading reflect flexbg size
    cdefe8dd61c9 ext4: fixup possible uninitialized variable access in ext4_mb_choose_next_group_cr1()
    48a12961e800 Revert "block: freeze the queue earlier in del_gendisk"
    398a0fdb38d9 ext4: use buckets for cr 1 block scan instead of rbtree
    52e8d671393c ext4: use locality group preallocation for small closed files
    405a609430a6 ext4: avoid unnecessary spreading of allocations among groups
    b82d312ff30f ext4: make mballoc try target group first even with mb_optimize_scan
    17eb9845f20f ext4: limit the number of retries after discarding preallocations blocks
    2f5e9de15e4f ext4: fix bug in extents parsing when eh_entries == 0 and eh_depth > 0
    034ef0c47e31 devdax: Fix soft-reservation memory description
    27d5563e8f5f Makefile.debug: re-enable debug info for .S files
    6ba8627f72a4 Makefile.debug: set -g unconditional on CONFIG_DEBUG_INFO_SPLIT
    c4f8b89f3ffc certs: make system keyring depend on built-in x509 parser
    c2eab6faf82b drm/amdgpu: don't register a dirty callback for non-atomic
    7f0dcbb0e557 i2c: mux: harden i2c_mux_alloc() against integer overflows
    4925e5e94ae9 i2c: mlxbf: Fix frequency calculation
    3b5ab5fbe69e i2c: mlxbf: prevent stack overflow in mlxbf_i2c_smbus_start_transaction()
    5a7547ee0d24 i2c: mlxbf: incorrect base address passed during io write
    e46e177fd8ed i2c: imx: If pm_runtime_get_sync() returned 1 device access is possible
    c9245ea442a8 workqueue: don't skip lockdep work dependency in cancel_work_sync()
    60644dffac87 fsdax: Fix infinite loop in dax_iomap_rw()
    8054beba353b pmem: fix a name collision
    c62322e62662 gpio: mt7621: Make the irqchip immutable
    2d57e46fa45b drm/rockchip: Fix return type of cdn_dp_connector_mode_valid
    4822afcff82c drm/amd/display: Mark dml30's UseMinimumDCFCLK() as noinline for stack usage
    6f14c55dc8e7 drm/amd/display: Reduce number of arguments of dml31's CalculateFlipSchedule()
    8836e42e8b00 drm/amd/display: Reduce number of arguments of dml31's CalculateWatermarksAndDRAMSpeedChangeSupport()
    88e7896936e0 drm/amd/display: Limit user regamma to a valid value
    9757b3ad4498 drm/amdgpu: Skip reset error status for psp v13_0_0
    83dfcae61be8 drm/amdgpu: add HDP remap functionality to nbio 7.7
    386ca6720b34 drm/amdgpu: change the alignment size of TMR BO to 1M
    8442bc8426d1 drm/amdgpu: use dirty framebuffer helper
    444574f828cd drm/amd/pm: disable BACO entry/exit completely on several sienna cichlid cards
    a5de08013672 gpio: ixp4xx: Make irqchip immutable
    7718cac88524 drm/gma500: Fix (vblank) IRQs not working after suspend/resume
    55c077d97fa6 drm/gma500: Fix WARN_ON(lock->magic != lock) error
    a6ed7624bf4d drm/gma500: Fix BUG: sleeping function called from invalid context errors
    9812e9ed3419 Drivers: hv: Never allocate anything besides framebuffer from framebuffer memory region
    98756ca2584e block: Do not call blk_put_queue() if gendisk allocation fails
    2f092fd2ce24 block: call blk_mq_exit_queue from disk_release for never added disks
    47f57236ba40 blk-mq: fix error handling in __blk_mq_alloc_disk
    0d0f5ca7f241 drm/i915/display: Fix handling of enable_psr parameter
    650a2e79d176 s390/dasd: fix Oops in dasd_alias_get_start_dev due to missing pavgroup
    54be62deede4 phy: marvell: phy-mvebu-a3700-comphy: Remove broken reset support
    1e9571887f97 cgroup: cgroup_get_from_id() must check the looked-up kn is a directory
    a899ba61958e serial: tegra-tcu: Use uart_xmit_advance(), fixes icount.tx accounting
    f986bfe60020 serial: tegra: Use uart_xmit_advance(), fixes icount.tx accounting
    f387ca14c73f serial: Create uart_xmit_advance()
    dc4b06e21691 serial: fsl_lpuart: Reset prior to registration
    f3f5f26c53ef io_uring: ensure that cached task references are always put on exit
    b4b3bc3f8501 selftests: forwarding: add shebang for sch_red.sh
    32afa1f23e42 bnxt: prevent skb UAF after handing over to PTP worker
    0559d91ee3a2 net: sched: fix possible refcount leak in tc_new_tfilter()
    9fc7a9f0a6e9 net: sunhme: Fix packet reception for len < RX_COPY_THRESHOLD
    2c8e8ab53acf bonding: fix NULL deref in bond_rr_gen_slave_id
    6c537124ea61 net: phy: micrel: fix shared interrupt on LAN8814
    32ac8c92919c net/smc: Stop the CLC flow if no link to map buffers on
    56c167a564b6 ice: Fix ice_xdp_xmit() when XDP TX queue number is not sufficient
    b65c53369786 drm/mediatek: dsi: Move mtk_dsi_stop() call back to mtk_dsi_poweroff()
    6acb3e83b508 perf tools: Honor namespace when synthesizing build-ids
    ee7036166b91 perf kcore_copy: Do not check /proc/modules is unchanged
    e71a088d6a97 perf jit: Include program header in ELF files
    306c17dead99 perf stat: Fix cpu map index in bperf cgroup code
    98992697b30b perf stat: Fix BPF program section name
    031b4f40487e can: gs_usb: gs_can_open(): fix race dev->can.state condition
    18979d10300e gpio: tqmx86: fix uninitialized variable girq
    16189cccd46e net: sh_eth: Fix PHY state warning splat during system resume
    199ddf9d3726 net: ravb: Fix PHY state warning splat during system resume
    235c47f437a1 netfilter: nf_ct_ftp: fix deadlock when nat rewrite is needed
    38cf372b17f0 netfilter: ebtables: fix memory leak when blob is malformed
    985b031667c3 netfilter: nf_tables: fix percpu memory leak at nf_tables_addchain()
    8bcad2a93131 netfilter: nf_tables: fix nft_counters_enabled underflow at nf_tables_addchain()
    d8d9a6995858 ice: Fix interface being down after reset with link-down-on-close flag on
    c14cdf15cde3 ice: config netdev tc before setting queues number
    dff2fa324207 net/sched: taprio: make qdisc_leaf() see the per-netdev-queue pfifo child qdiscs
    c7c9c7eb305a net/sched: taprio: avoid disabling offload when it was never enabled
    68a5def1d2c8 ipv6: Fix crash when IPv6 is administratively disabled
    23022b74b1a2 net: enetc: deny offload of tc-based TSN features on VF interfaces
    2fdebdfcd98f net: enetc: move enetc_set_psfp() out of the common enetc_set_features()
    92f7d44de3be wireguard: netlink: avoid variable-sized memcpy on sockaddr
    3b263cc13340 wireguard: ratelimiter: disable timings test by default
    a4eadca702df sfc/siena: fix null pointer dereference in efx_hard_start_xmit
    b454f12cfedd sfc/siena: fix TX channel offset when using legacy interrupts
    c9ba2948db9d net: ipa: properly limit modem routing table use
    506638752e92 of: mdio: Add of_node_put() when breaking out of for_each_xx
    68197205b3f6 drm/hisilicon: Add depends on MMU
    6201c365a0ef gve: Fix GFP flags when allocing pages
    e969486525be bnxt_en: fix flags to check for supported fw version
    b3b952168ee1 sfc: fix null pointer dereference in efx_hard_start_xmit
    5f623a77cfc2 sfc: fix TX channel offset when using legacy interrupts
    b6bea8101f97 netdevsim: Fix hwstats debugfs file permissions
    0b145d3da801 i40e: Fix set max_tx_rate when it is lower than 1 Mbps
    ab1af66d4de9 i40e: Fix VF set max MTU size
    2ffdf364b845 iavf: Fix set max MTU size with port VLAN and jumbo frames
    36da184d2196 mlxbf_gige: clear MDIO gateway lock after read
    c3f9f3089ed5 iavf: Fix bad page state
    3b27f829b7f6 um: fix default console kernel parameter
    f8c3861243be MIPS: Loongson32: Fix PHY-mode being left unspecified
    7c1f2373be0a MIPS: lantiq: export clk_get_io() for lantiq_wdt.ko
    c673c6ceac53 mm/slab_common: fix possible double free of kmem_cache
    183b87c4d18d drm/panel: simple: Fix innolux_g121i1_l01 bus_format
    88b08afb0d80 net: team: Unsync device addresses on ndo_stop
    a4761e45c86c net: bonding: Unsync device addresses on ndo_stop
    b1b48d9e60cb net: bonding: Share lacpdu_mcast_addr definition
    38aa25adcd4d scsi: mpt3sas: Fix return value check of dma_get_required_mask()
    6a4236ed47f5 scsi: qla2xxx: Fix memory leak in __qlt_24xx_handle_abts()
    16b5647f1a55 arm64: dts: imx8mp-venice-gw74xx: fix port/phy validation
    af0c754d4f60 net: phy: aquantia: wait for the suspend/resume operations to finish
    26735f395b30 ARM: dts: lan966x: Fix the interrupt number for internal PHYs
    d5241ea15778 arm64: dts: imx8mp-venice-gw74xx: fix ksz9477 cpu port
    f675f5955ab8 arm64: dts: imx8mp-venice-gw74xx: fix CAN STBY polarity
    392bd6ce1ba9 drm/mediatek: Fix wrong dither settings
    fc8454d54478 arm64: dts: tqma8mqml: Include phy-imx8-pcie.h header
    31ce3c688ddc wifi: iwlwifi: Mark IWLMEI as broken
    9fe1e2da965a net: core: fix flow symmetric hash
    b583e6b25bf9 ipvlan: Fix out-of-bound bugs caused by unset skb->mac_header
    0ad4e4f4d1c4 iavf: Fix cached head and tail value for iavf_get_tx_pending
    7c945e5b4787 ice: Fix crash by keep old cfg when update TCs more than queues
    149979e87eb7 ice: Don't double unplug aux on peer initiated reset
    633c81c04496 netfilter: nfnetlink_osf: fix possible bogus match in nf_osf_find()
    510ea9eae5ee netfilter: nf_conntrack_irc: Tighten matching on DCC message
    f28e376e4c1e netfilter: nf_conntrack_sip: fix ct_sip_walk_headers
    5f394e885eaf arm64: dts: imx8mm-verdin: extend pmic voltages
    3e39beb4efa5 arm64: dts: rockchip: Remove 'enable-active-low' from rk3566-quartz64-a
    efd3a3e464c6 arm64: dts: rockchip: Remove 'enable-active-low' from rk3399-puma
    9350ed92dfe0 arm64: dts: rockchip: fix property for usb2 phy supply on rk3568-evb1-v10
    5e6d95bd6c9d arm64: dts: rockchip: fix property for usb2 phy supply on rock-3a
    a17df55bf6d5 dmaengine: ti: k3-udma-private: Fix refcount leak bug in of_xudma_dev_get()
    869c94dfd900 arm64: dts: imx8ulp: add #reset-cells for pcc
    f478a456a30d arm64: dts: imx8mn: remove GPU power domain reset
    124c330f4071 arm64: dts: rockchip: Set RK3399-Gru PCLK_EDP to 24 MHz
    9182be042c3e arm64: dts: imx8mm: Reverse CPLD_Dn GPIO label mapping on MX8Menlo
    164f2c710a78 drm/mediatek: dsi: Add atomic {destroy,duplicate}_state, reset callbacks
    87d4bdeacff8 arm64: dts: rockchip: Fix typo in lisense text for PX30.Core
    8a906e3a18bb arm64: dts: rockchip: Pull up wlan wake# on Gru-Bob
    6b8c338e1b88 arm64: dts: rockchip: Lower sd speed on quartz64-b
    daacedde25f0 firmware: arm_scmi: Fix the asynchronous reset requests
    8e65edf0d376 firmware: arm_scmi: Harden accesses to the reset domains
    e31fa6648542 batman-adv: Fix hang up with small MTU hard-interface
    117737acc4b3 vmlinux.lds.h: CFI: Reduce alignment of jump-table to function alignment
    bb6d99e27cbe arm64: topology: fix possible overflow in amu_fie_setup()
    42c7fc41020c perf/arm-cmn: Add more bits to child node address offset field
    7a764b44d346 KVM: x86: Inject #UD on emulated XSETBV if XSAVES isn't enabled
    eec722138aee KVM: x86: Always enable legacy FP/SSE in allowed user XFEATURES
    c5f118361297 KVM: x86: Reinstate kvm_vcpu_arch.guest_supported_xcr0
    df6cb39335cf mm: slub: fix flush_cpu_slab()/__free_slab() invocations in task context.
    02bcd951aa3c mm/slub: fix to return errno if kmalloc() fails
    cbaddace599e net: mana: Add rmb after checking owner bits
    7221020d79cc can: flexcan: flexcan_mailbox_read() fix return value for drop = true
    b6c2ad616dd4 kasan: call kasan_malloc() from __kmalloc_*track_caller()
    dc8864f4fd01 xen/xenbus: fix xenbus_setup_ring()
    f799e0568d6c drm/i915/gem: Really move i915_gem_context.link under ref protection
    92881e068ee1 drm/i915/gem: Flush contexts on driver release
    08ac12569010 riscv: fix RISCV_ISA_SVPBMT kconfig dependency warning
    558003a84a3c riscv: fix a nasty sigreturn bug...
    b1489043d3b9 gpiolib: cdev: Set lineevent_state::irq after IRQ register successfully
    41f857033c44 gpio: mockup: Fix potential resource leakage when register a chip
    af0bfabf06c7 gpio: mockup: fix NULL pointer dereference when removing debugfs
    74ce6f1e0f3b wifi: mt76: fix reading current per-tid starting sequence number for aggregation
    1dd2a948a178 efi: libstub: check Shim mode using MokSBStateRT
    96dc4e2c5283 efi: x86: Wipe setup_data on pure EFI boot
    7a27a04f4ef6 thunderbolt: Add support for Intel Maple Ridge single port controller
    d6f28143bccb usb: dwc3: core: leave default DMA if the controller does not support 64-bit DMA
    af830c831d40 media: flexcop-usb: fix endpoint type check
    53b48f0672d5 libperf evlist: Fix polling of system-wide events
    eecada16bcc4 btrfs: zoned: wait for extent buffer IOs before finishing a zone
    c338bea1fec5 btrfs: fix hang during unmount when stopping a space reclaim worker
    cf7769a47e65 btrfs: fix hang during unmount when stopping block group reclaim worker
    17244f71765d exfat: fix overflow for large capacity partition
    2e238bba8a7e iommu/vt-d: Check correct capability for sagaw determination
    ecec349af8c7 ALSA: hda/realtek: Add a quirk for HP OMEN 16 (8902) mute LED
    28e07bb27ba4 ALSA: hda/realtek: Enable 4-speaker output Dell Precision 5530 laptop
    1f65164bc605 ALSA: hda/realtek: Add quirk for ASUS GA503R laptop
    0632fb7f2158 ALSA: hda/realtek: Add pincfg for ASUS G533Z HP jack
    76b75705c941 ALSA: hda/realtek: Add pincfg for ASUS G513 HP jack
    2035227cd000 ALSA: hda/realtek: Re-arrange quirk table entries
    3637770602ac ALSA: hda/realtek: Enable 4-speaker output Dell Precision 5570 laptop
    73c4ae35ff11 ALSA: hda/realtek: Add quirk for Huawei WRT-WX9
    61c19a35f0d2 ALSA: hda: add Intel 5 Series / 3400 PCI DID
    d6cb6e424a60 ALSA: hda: Fix Nvidia dp infoframe
    6e91ec54e7f1 ALSA: hda: Fix hang at HD-audio codec unbinding due to refcount saturation
    abb050dabd7d ALSA: hda/tegra: set depop delay for tegra
    1c5a0a1f4d15 ALSA: core: Fix double-free at snd_card_new()
    e0e17c7bbdf4 Revert "ALSA: usb-audio: Split endpoint setups for hw_params and prepare"
    d744140498a3 USB: serial: option: add Quectel RM520N
    3db2ec3a6724 USB: serial: option: add Quectel BG95 0x0203 composition
    e82a8ff62709 USB: core: Fix RST error in hub.c
    fd0b4fd54892 drivers/base: Fix unsigned comparison to -1 in CPUMAP_FILE_MAX_BYTES
    2e7eb4c1e8af scsi: core: Fix a use-after-free
    d27b66257db1 block: simplify disk shutdown
    fdb28e968815 block: stop setting the nomerges flags in blk_cleanup_queue
    ab85cb5297f7 block: remove QUEUE_FLAG_DEAD
    633e819de9fa xfrm: fix XFRMA_LASTUSED comment
    2776911d4a98 Revert "usb: gadget: udc-xilinx: replace memcpy with memcpy_toio"
    8039621a78e5 Revert "usb: add quirks for Lenovo OneLink+ Dock"
    7c64dd4dbf90 smb3: use filemap_write_and_wait_range instead of filemap_write_and_wait
    c7ae5c403d68 usb: gadget: udc-xilinx: replace memcpy with memcpy_toio
    9b56515aeeff usb: add quirks for Lenovo OneLink+ Dock
    0cdde8460c30 smb3: fix temporary data corruption in insert range
    49523a473220 smb3: fix temporary data corruption in collapse range
    cc914c37e55f smb3: Move the flush out of smb2_copychunk_range() into its callers
    f6bb739e61eb drm/i915/dsi: fix dual-link DSI backlight and CABC ports for display 11+
    d9d2625dafe2 drm/i915/dsi: filter invalid backlight and CABC ports
    fc6aff984b1c drm/i915/bios: Split VBT data into per-panel vs. global parts
    2af21ae876cf drm/i915/bios: Split VBT parsing to global vs. panel specific parts
    5da3f1bfb88e drm/i915/bios: Split parse_driver_features() into two parts
    ad719d5cc7cb drm/i915/pps: Split pps_init_delays() into distinct parts
    a0f7cdd69ca3 drm/i915: Extract intel_edp_fixup_vbt_bpp()
    fcf22aefe871 Linux 5.19.11
    4d8637f1d672 Revert "iommu/vt-d: Fix possible recursive locking in intel_iommu_init()"
    36371c3adb7a ALSA: hda/sigmatel: Fix unused variable warning for beep power change
    ddd2edc276e0 ALSA: hda/sigmatel: Keep power up while beep is enabled
    99bc25748e39 cgroup: Add missing cpus_read_lock() to cgroup_attach_task_all()
    7051efc07d72 dt-bindings: apple,aic: Fix required item "apple,fiq-index" in affinity description
    20b3f49e9498 net: Find dst with sk's xfrm policy not ctl_sk
    e68db1a89fc9 drm/amdgpu: move nbio sdma_doorbell_range() into sdma code for vega
    9189056c223b drm/amdgpu: move nbio ih_doorbell_range() into ih code for vega
    989d23d88520 drm/amdgpu: Don't enable LTR if not supported
    e6189420e34f drm/amdgpu: make sure to init common IP before gmc
    dd52bde6767e drm/i915: Set correct domains values at _i915_vma_move_to_active
    871b9d5c68d8 drm/i915/gt: Fix perf limit reasons bit positions
    b31c81d633d8 tools/include/uapi: Fix <asm/errno.h> for parisc and xtensa
    ac12a96d1d35 parisc: Allow CONFIG_64BIT with ARCH=parisc
    46c716a31fcd blk-lib: fix blkdev_issue_secure_erase
    c2c7f67fd12d cifs: always initialize struct msghdr smb_msg completely
    eea8626615a0 cifs: don't send down the destination address to sendmsg for a SOCK_STREAM
    2c3f439480c0 cifs: revalidate mapping when doing direct writes
    d50c30b66f04 io_uring/msg_ring: check file type before putting
    6f5ceeb59d09 of/device: Fix up of_dma_configure_id() stub
    6ebcd3a8f5d2 parisc: ccio-dma: Add missing iounmap in error path in ccio_probe()
    248c48ced209 block: blk_queue_enter() / __bio_queue_enter() must return -EAGAIN for nowait
    d31efde8d45d drm/i915/guc: Cancel GuC engine busyness worker synchronously
    6731a2193bc8 drm/i915/guc: Don't update engine busyness stats too frequently
    b0dc9560acd2 drm/i915/vdsc: Set VDSC PIC_HEIGHT before using for DP DSC
    fc689a286139 drm/rockchip: vop2: Fix eDP/HDMI sync polarities
    ca52cf493f97 drm/meson: Fix OSD1 RGB to YCbCr coefficient
    e681b2df3ad4 drm/meson: Correct OSD1 global alpha value
    99ed392209cc drm/panel-edp: Fix delays for Innolux N116BCA-EA1
    c60087415670 Revert "SUNRPC: Remove unreachable error condition"
    4d9f296e78b0 NFSv4.2: Update mode bits after ALLOCATE and DEALLOCATE
    2f0a154b16ab gpio: mpc8xxx: Fix support for IRQ_TYPE_LEVEL_LOW flow_type in mpc85xx
    51e024dcaf08 NFSv4: Turn off open-by-filehandle and NFS re-export for NFSv4.0
    dce19409fb74 SUNRPC: Fix call completion races with call_decode()
    fe0a6a2369d8 pinctrl: sunxi: Fix name for A100 R_PIO
    4b1366bf4ed1 pinctrl: rockchip: Enhance support for IRQ_TYPE_EDGE_BOTH
    8e33176cd475 pinctrl: qcom: sc8180x: Fix wrong pin numbers
    50207584d3f5 pinctrl: qcom: sc8180x: Fix gpio_wakeirq_map
    2133f4513116 of: fdt: fix off-by-one error in unflatten_dt_nodes()
    b80678c1e00a Linux 5.19.10
    0541ab4d0330 Input: goodix - add compatible string for GT1158
    693ccecee083 RDMA/irdma: Use s/g array in post send only when its valid
    1989b17301f8 gpio: 104-idio-16: Make irq_chip immutable
    b240650a6600 gpio: 104-dio-48e: Make irq_chip immutable
    e18b2e3310f0 LoongArch: Fix arch_remove_memory() undefined build error
    6023efd94e54 LoongArch: Fix section mismatch due to acpi_os_ioremap()
    0b38a5072464 platform/x86: asus-wmi: Increase FAN_CURVE_BUF_LEN to 32
    fe5872fd1684 usb: storage: Add ASUS <0x0b05:0x1932> to IGNORE_UAS
    2fdf0a1ff474 platform/x86: acer-wmi: Acer Aspire One AOD270/Packard Bell Dot keymap fixes
    719b2021d778 perf/arm_pmu_platform: fix tests for platform_get_irq() failure
    3d513ebf8c3b net: dsa: hellcreek: Print warning only once
    c624b5659a28 drm/amd/amdgpu: skip ucode loading if ucode_size == 0
    f566cb9f4057 nvmet-tcp: fix unhandled tcp states in nvmet_tcp_state_change()
    e8d5aa9c67ed nvme-pci: add NVME_QUIRK_BOGUS_NID for Lexar NM610
    137f1493f151 drm/amd/pm: use vbios carried pptable for all SMU13.0.7 SKUs
    2052738ece42 drm/amdgpu: disable FRU access on special SIENNA CICHLID card
    12c20186d84e Input: iforce - add support for Boeder Force Feedback Wheel
    47e83e6ebf99 ieee802154: cc2520: add rc code in cc2520_tx()
    bc55c1677edb gpio: mockup: remove gpio debugfs when remove device
    35c0b78d0d42 r8152: add PID for the Lenovo OneLink+ Dock
    84d8959393a0 tg3: Disable tg3 device on system reboot to avoid triggering AER
    6b1bcd579fc5 Bluetooth: MGMT: Fix Get Device Flags
    fbb701e51ee2 hid: intel-ish-hid: ishtp: Fix ishtp client sending disordered message
    37c3dcfc4730 HID: ishtp-hid-clientHID: ishtp-hid-client: Fix comment typo
    65d983566887 dt-bindings: iio: gyroscope: bosch,bmg160: correct number of pins
    50a1ffa557cf kvm: x86: mmu: Always flush TLBs when enabling dirty logging
    c87f1f99e26e peci: cpu: Fix use-after-free in adev_release()
    f25a547e7c76 drm/msm/rd: Fix FIFO-full deadlock
    df01ac6582e1 platform/surface: aggregator_registry: Add support for Surface Laptop Go 2
    f05939158a41 Input: goodix - add support for GT1158
    37c81d9f1d1b ACPI: resource: skip IRQ override on AMD Zen platforms
    f26649e59b4f RDMA/mlx5: Fix UMR cleanup on error flow of driver init
    d8f7bff9a426 RDMA/mlx5: Add a umr recovery flow
    ada0ccc4a137 RDMA/mlx5: Rely on RoCE fw cap instead of devlink when setting profile
    ddc58af02675 net/mlx5: Use software VHCA id when it's supported
    630a75548b88 net/mlx5: Introduce ifc bits for using software vhca id
    3bd8fdde3826 iommu/vt-d: Fix kdump kernels boot failure with scalable mode

(From OE-Core rev: 4814d5d8e7ff674ca812048c54f2f3e74ba35000)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 396b40b0b1e52fc12c0e171734fba190edfaf671)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:14 +00:00
Bruce Ashfield
ad8bd886a4 linux-yocto/5.15: update to v5.15.72
Updating to the latest korg -stable release that comprises
the following commits:

    c68173b2012b Linux 5.15.72
    713fa3e4591f drm/i915/gem: Really move i915_gem_context.link under ref protection
    a00ed4e5d5ee x86/alternative: Fix race in try_get_desc()
    c3d4b8970c0d KVM: x86: Hide IA32_PLATFORM_DCA_CAP[31:0] from the guest
    ab5c5787ab5e clk: iproc: Do not rely on node name for correct PLL setup
    e748a084b51c clk: imx: imx6sx: remove the SET_RATE_PARENT flag for QSPI clocks
    19f4e1636626 fs: split off setxattr_copy and do_setxattr function from setxattr
    a0e3719e030a vdpa/ifcvf: fix the calculation of queuepair
    4755d9d2c9b0 selftests: Fix the if conditions of in test_extra_filter()
    c83a7606aa65 net: phy: Don't WARN for PHY_UP state in mdio_bus_phy_resume()
    a8cd7e1bc7cd net: stmmac: power up/down serdes in stmmac_open/release
    67c00bcf4231 wifi: mac80211: fix regression with non-QoS drivers
    520e434a082d nvme: Fix IOC_PR_CLEAR and IOC_PR_RELEASE ioctls for nvme devices
    e9d7d809022e net/mlxbf_gige: Fix an IS_ERR() vs NULL bug in mlxbf_gige_mdio_probe
    8b1b908507ce cxgb4: fix missing unlock on ETHOFLD desc collect fail path
    e99c7a61d89e net: sched: act_ct: fix possible refcount leak in tcf_ct_init()
    815381aeff95 usbnet: Fix memory leak in usbnet_disconnect()
    af91321b7372 gpio: mvebu: Fix check for pwm support on non-A8K platforms
    f592ccddac68 Input: melfas_mip4 - fix return value check in mip4_probe()
    ff982b1f325d Revert "drm: bridge: analogix/dp: add panel prepare/unprepare in suspend/resume time"
    bde7795794f4 drm/bridge: lt8912b: fix corrupted image output
    e103b0e83991 drm/bridge: lt8912b: set hdmi or dvi mode
    473f653a86ee drm/bridge: lt8912b: add vsync hsync
    6a12105d9d4f ASoC: tas2770: Reinit regcache on reset
    75ef73d7d2b3 arm64: dts: qcom: sm8350: fix UFS PHY serdes size
    5664dc84fc2e ASoC: imx-card: Fix refcount issue with of_node_put
    367403bc1cfe soc: sunxi: sram: Fix debugfs info for A64 SRAM C
    68d2f42cf4f6 soc: sunxi: sram: Fix probe function ordering issues
    2f82b5290078 soc: sunxi_sram: Make use of the helper function devm_platform_ioremap_resource()
    861adc2b2037 soc: sunxi: sram: Prevent the driver from being unbound
    8b07378ebe43 soc: sunxi: sram: Actually claim SRAM regions
    d50e0e2f3d94 ARM: dts: am5748: keep usb4_tm disabled
    c48e3db1df25 reset: imx7: Fix the iMX8MP PCIe PHY PERST support
    606229101290 ARM: dts: am33xx: Fix MMCHS0 dma properties
    bfe5dc2101ba swiotlb: max mapping size takes min align mask into account
    a6a3b6b11ac0 media: v4l2-compat-ioctl32.c: zero buffer passed to v4l2_compat_get_array_args()
    ab9d32844742 media: rkvdec: Disable H.264 error detection
    69379139ed78 media: dvb_vb2: fix possible out of bound access
    6287c9e00595 mm,hwpoison: check mm when killing accessing process
    f9aed3d8a029 mm: fix madivse_pageout mishandling on non-LRU page
    1299c1198878 mm/migrate_device.c: flush TLB while holding PTL
    e858f7ac7395 mm: fix dereferencing possible ERR_PTR
    d75ce115625e mm: prevent page_frag_alloc() from corrupting the memory
    23d17e2b04c7 mm/page_alloc: fix race condition between build_all_zonelists and page allocation
    fec2db7a434a mmc: hsq: Fix data stomping during mmc recovery
    4fef6e1fe07c mmc: moxart: fix 4-bit bus width and remove 8-bit bus width
    4f75d0cacd65 libata: add ATA_HORKAGE_NOLPM for Pioneer BDR-207M and BDR-205
    dc248ddf41ea vduse: prevent uninitialized memory accesses
    ea774829699a drm/amdgpu: Add amdgpu suspend-resume code path under SRIOV
    25759a7bc1f4 drm/i915/gt: Restrict forced preemption to the active context
    e0f576335d05 Revert "firmware: arm_scmi: Add clock management to the SCMI power domain"
    5de02ab84aec net: mt7531: only do PLL once after the reset
    56e3f8d56299 mm/damon/dbgfs: fix memory leak when using debugfs_lookup()
    149da9e60b8c ntfs: fix BUG_ON in ntfs_lookup_inode_by_name()
    dc8cdb988453 ARM: dts: integrator: Tag PCI host with device_type
    aa5c3aa3f197 x86/sgx: Do not fail on incomplete sanitization on premature stop of ksgxd
    476c188b9dbe clk: ingenic-tcu: Properly enable registers before accessing timers
    d134b0f7a9b9 can: c_can: don't cache TX messages for C_CAN cores
    6fff203793cb Input: snvs_pwrkey - fix SNVS_HPVIDR1 register address
    006a5085a3a8 net: usb: qmi_wwan: Add new usb-id for Dell branded EM7455
    81e759d71a6b thunderbolt: Explicitly reset plug events delay back to USB4 spec value
    85a70a259916 usb: typec: ucsi: Remove incorrect warning
    ac12a04c8e08 uas: ignore UAS for Thinkplus chips
    528aba78ee01 usb-storage: Add Hiksemi USB3-FW to IGNORE_UAS
    0a4e8f384e82 uas: add no-uas quirk for Hiksemi usb_disk
    8484a356cee8 cgroup: cgroup_get_from_id() must check the looked-up kn is a directory
    ae04dd5ef180 cgroup: reduce dependency on cgroup_mutex
    7a64e6dc6cb7 ALSA: hda/realtek: fix speakers and micmute on HP 855 G8
    6a3bee2ead9b ALSA: hda: Fix Nvidia dp infoframe
    f7392f93a2fb ALSA: hda: Fix hang at HD-audio codec unbinding due to refcount saturation
    de5deddfa7e7 ALSA: hda: Do disconnect jacks at codec unbind
    90c7e9b400c7 Linux 5.15.71
    214194610a18 ext4: use locality group preallocation for small closed files
    8a1ac4167dda ext4: avoid unnecessary spreading of allocations among groups
    fd8b82919549 ext4: make mballoc try target group first even with mb_optimize_scan
    21dada4ce19c ext4: limit the number of retries after discarding preallocations blocks
    be4df018c0be ext4: fix bug in extents parsing when eh_entries == 0 and eh_depth > 0
    90bc7b630c6c ext4: make directory inode spreading reflect flexbg size
    95d714d8ad3d devdax: Fix soft-reservation memory description
    27bf7a5d1198 NFSv4: Fixes for nfs4_inode_return_delegation()
    21b0301f2234 drm/amdgpu: don't register a dirty callback for non-atomic
    6eb08245da51 i2c: mlxbf: Fix frequency calculation
    dc2a0c587006 i2c: mlxbf: prevent stack overflow in mlxbf_i2c_smbus_start_transaction()
    621c6ab03ac3 i2c: mlxbf: incorrect base address passed during io write
    c242dbf2e36f i2c: imx: If pm_runtime_get_sync() returned 1 device access is possible
    c71ec39be45a workqueue: don't skip lockdep work dependency in cancel_work_sync()
    929ef155e1da fsdax: Fix infinite loop in dax_iomap_rw()
    9aac3819f099 drm/rockchip: Fix return type of cdn_dp_connector_mode_valid
    1c26968caf18 drm/amd/display: Mark dml30's UseMinimumDCFCLK() as noinline for stack usage
    492db4ffcff3 drm/amd/display: Reduce number of arguments of dml31's CalculateFlipSchedule()
    9539cfc74493 drm/amd/display: Reduce number of arguments of dml31's CalculateWatermarksAndDRAMSpeedChangeSupport()
    a541c0111818 drm/amd/display: Limit user regamma to a valid value
    33b128f790b6 drm/amdgpu: use dirty framebuffer helper
    f76d6f309a68 drm/amd/pm: disable BACO entry/exit completely on several sienna cichlid cards
    e5ae504c8623 drm/gma500: Fix BUG: sleeping function called from invalid context errors
    e07d9154bb81 Drivers: hv: Never allocate anything besides framebuffer from framebuffer memory region
    5f270b61ee8b drm/amd/amdgpu: fixing read wrong pf2vf data in SRIOV
    d3a67c21b18f s390/dasd: fix Oops in dasd_alias_get_start_dev due to missing pavgroup
    faf0e1b5d82b serial: tegra-tcu: Use uart_xmit_advance(), fixes icount.tx accounting
    0aada772fd16 serial: tegra: Use uart_xmit_advance(), fixes icount.tx accounting
    4c7e17270cab serial: Create uart_xmit_advance()
    4199425b1132 serial: fsl_lpuart: Reset prior to registration
    cc1504f6da2e KVM: x86/mmu: Fold rmap_recycle into rmap_add
    dddae48eabfb selftests: forwarding: add shebang for sch_red.sh
    08483e4c0c83 bnxt: prevent skb UAF after handing over to PTP worker
    f8162aed962b net: sched: fix possible refcount leak in tc_new_tfilter()
    bd29ca2b398c net: sunhme: Fix packet reception for len < RX_COPY_THRESHOLD
    ec3a6f4ffe55 bonding: fix NULL deref in bond_rr_gen_slave_id
    db145b8a04fc net/smc: Stop the CLC flow if no link to map buffers on
    5daef0042d2c drm/mediatek: dsi: Move mtk_dsi_stop() call back to mtk_dsi_poweroff()
    a08cba2f50d7 perf tools: Honor namespace when synthesizing build-ids
    1a83f39dc4e1 perf kcore_copy: Do not check /proc/modules is unchanged
    a3b923f449a3 perf jit: Include program header in ELF files
    39dc6ccdd5af perf stat: Fix BPF program section name
    c6d939639fe0 can: gs_usb: gs_can_open(): fix race dev->can.state condition
    e1676adedc17 net: sh_eth: Fix PHY state warning splat during system resume
    71200518bbbf net: ravb: Fix PHY state warning splat during system resume
    d5917b7af7ca netfilter: ebtables: fix memory leak when blob is malformed
    08d7524f366a netfilter: nf_tables: fix percpu memory leak at nf_tables_addchain()
    91aa52652f4b netfilter: nf_tables: fix nft_counters_enabled underflow at nf_tables_addchain()
    c721623efd09 net/sched: taprio: make qdisc_leaf() see the per-netdev-queue pfifo child qdiscs
    f58e43184226 net/sched: taprio: avoid disabling offload when it was never enabled
    510e703e4ed0 net: enetc: deny offload of tc-based TSN features on VF interfaces
    11eb9ed08856 net: enetc: move enetc_set_psfp() out of the common enetc_set_features()
    c60801e4e2b5 wireguard: netlink: avoid variable-sized memcpy on sockaddr
    3ebf690d1cde wireguard: ratelimiter: disable timings test by default
    c2dc533a7edb net: ipa: properly limit modem routing table use
    cbdab7d68f20 of: mdio: Add of_node_put() when breaking out of for_each_xx
    ca86577c10bc drm/hisilicon: Add depends on MMU
    68c4acee6328 drm/hisilicon/hibmc: Allow to be built if COMPILE_TEST is enabled
    8547c7bfc061 sfc: fix null pointer dereference in efx_hard_start_xmit
    360910b88d14 sfc: fix TX channel offset when using legacy interrupts
    bc750d7127a9 i40e: Fix set max_tx_rate when it is lower than 1 Mbps
    53220b99059a i40e: Fix VF set max MTU size
    7249a653fe5f iavf: Fix set max MTU size with port VLAN and jumbo frames
    030e0688b6b2 mlxbf_gige: clear MDIO gateway lock after read
    93859f6878e7 iavf: Fix bad page state
    e1dbe8a62098 um: fix default console kernel parameter
    7400e2edfc9e MIPS: Loongson32: Fix PHY-mode being left unspecified
    abea65fa7713 MIPS: lantiq: export clk_get_io() for lantiq_wdt.ko
    831cf63c043e drm/panel: simple: Fix innolux_g121i1_l01 bus_format
    408d5752b60f net: team: Unsync device addresses on ndo_stop
    f50265a4f3da net: bonding: Unsync device addresses on ndo_stop
    e6b277f7367e net: bonding: Share lacpdu_mcast_addr definition
    8b2ab46b6c63 scsi: mpt3sas: Fix return value check of dma_get_required_mask()
    89df49e561b4 scsi: qla2xxx: Fix memory leak in __qlt_24xx_handle_abts()
    5826a555f77c net: phy: aquantia: wait for the suspend/resume operations to finish
    4d2f1bc9067a net: core: fix flow symmetric hash
    8d06006c7eb7 ipvlan: Fix out-of-bound bugs caused by unset skb->mac_header
    dae9d2abe25b iavf: Fix cached head and tail value for iavf_get_tx_pending
    34447d64b8d2 ice: Don't double unplug aux on peer initiated reset
    816eab147e5c netfilter: nfnetlink_osf: fix possible bogus match in nf_osf_find()
    dc33ffbc361e netfilter: nf_conntrack_irc: Tighten matching on DCC message
    0606c5d5fefd netfilter: nf_conntrack_sip: fix ct_sip_walk_headers
    0babb5bc85ee arm64: dts: rockchip: Remove 'enable-active-low' from rk3399-puma
    dd5a6c5a0875 dmaengine: ti: k3-udma-private: Fix refcount leak bug in of_xudma_dev_get()
    1b0e46d970b4 arm64: dts: rockchip: Set RK3399-Gru PCLK_EDP to 24 MHz
    e352fea1d0fc drm/mediatek: dsi: Add atomic {destroy,duplicate}_state, reset callbacks
    43733b6c9fda arm64: dts: rockchip: Fix typo in lisense text for PX30.Core
    2929463a9eff arm64: dts: rockchip: Pull up wlan wake# on Gru-Bob
    166a332463b5 firmware: arm_scmi: Fix the asynchronous reset requests
    1f08a1b26cfc firmware: arm_scmi: Harden accesses to the reset domains
    9ec5a534d77c xfs: validate inode fork size against fork format
    5caa3a127953 xfs: fix xfs_ifree() error handling to not leak perag ref
    9e7b231687fd xfs: reorder iunlink remove operation in xfs_ifree
    28c7ef86b21b vmlinux.lds.h: CFI: Reduce alignment of jump-table to function alignment
    3c3edb82d67b arm64: topology: fix possible overflow in amu_fie_setup()
    2427a04bce86 KVM: x86: Inject #UD on emulated XSETBV if XSAVES isn't enabled
    61703b248be9 mm: slub: fix flush_cpu_slab()/__free_slab() invocations in task context.
    2d6e55e0c038 mm/slub: fix to return errno if kmalloc() fails
    71075d7d4632 net: mana: Add rmb after checking owner bits
    19aea370fd09 can: flexcan: flexcan_mailbox_read() fix return value for drop = true
    bf0197aea195 kasan: call kasan_malloc() from __kmalloc_*track_caller()
    c75288a4902b riscv: fix a nasty sigreturn bug...
    97da736cd11a gpiolib: cdev: Set lineevent_state::irq after IRQ register successfully
    9b26723e058f gpio: mockup: Fix potential resource leakage when register a chip
    18352095a0d5 gpio: mockup: fix NULL pointer dereference when removing debugfs
    2279e977405b wifi: mt76: fix reading current per-tid starting sequence number for aggregation
    b5bc5a274d54 efi: libstub: check Shim mode using MokSBStateRT
    ef43fee9f211 efi: x86: Wipe setup_data on pure EFI boot
    b173f1f8ef9e thunderbolt: Add support for Intel Maple Ridge single port controller
    65b13f951fe6 usb: dwc3: core: leave default DMA if the controller does not support 64-bit DMA
    7143f6cf58db media: flexcop-usb: fix endpoint type check
    d8a76a2e514f btrfs: fix hang during unmount when stopping a space reclaim worker
    46053262b5f5 btrfs: fix hang during unmount when stopping block group reclaim worker
    b02f86689a5a iommu/vt-d: Check correct capability for sagaw determination
    a963fe6d0eb6 ALSA: hda/realtek: Enable 4-speaker output Dell Precision 5530 laptop
    4b2fa20da623 ALSA: hda/realtek: Add quirk for ASUS GA503R laptop
    eb54e457c4ad ALSA: hda/realtek: Add pincfg for ASUS G533Z HP jack
    0898469913cd ALSA: hda/realtek: Add pincfg for ASUS G513 HP jack
    c6a746b4fca5 ALSA: hda/realtek: Re-arrange quirk table entries
    41e974cd6ecb ALSA: hda/realtek: Enable 4-speaker output Dell Precision 5570 laptop
    5421125bbda8 ALSA: hda/realtek: Add quirk for Huawei WRT-WX9
    84481d7a59a2 ALSA: hda: add Intel 5 Series / 3400 PCI DID
    04b5bd5702ab ALSA: hda/tegra: set depop delay for tegra
    e10425c5424b ALSA: core: Fix double-free at snd_card_new()
    10a8c5d7d393 Revert "ALSA: usb-audio: Split endpoint setups for hw_params and prepare"
    06c0204a6e80 USB: serial: option: add Quectel RM520N
    6cf9e8b7e67a USB: serial: option: add Quectel BG95 0x0203 composition
    369b008bbe36 USB: core: Fix RST error in hub.c
    d10d1e9d9f1e drivers/base: Fix unsigned comparison to -1 in CPUMAP_FILE_MAX_BYTES
    6eede01dfd0e Revert "usb: gadget: udc-xilinx: replace memcpy with memcpy_toio"
    c02431f43e12 Revert "usb: add quirks for Lenovo OneLink+ Dock"
    8de5e12f587b usb: gadget: udc-xilinx: replace memcpy with memcpy_toio
    2db7a7176c45 usb: add quirks for Lenovo OneLink+ Dock
    a72eee6d905e usb: dwc3: gadget: Avoid duplicate requests to enable Run/Stop
    f79a57d4091f usb: dwc3: gadget: Don't modify GEVNTCOUNT in pullup()
    1a9923999459 usb: dwc3: gadget: Refactor pullup()
    7604a210acbb usb: dwc3: gadget: Prevent repeat pullup()
    a0b5d22b0448 usb: dwc3: Issue core soft reset before enabling run/stop
    8d583ba79cde usb: dwc3: gadget: Avoid starting DWC3 gadget during UDC unbind
    167b18f25b96 staging: r8188eu: Add Rosewill USB-N150 Nano to device tables
    add40eda8258 staging: r8188eu: Remove support for devices with 8188FU chipset (0bda:f179)
    55653c548612 drm/amdgpu: make sure to init common IP before gmc
    25a90a11036b drm/amdgpu: Separate vf2pf work item init from virt data exchange
    3e98e33d345e Linux 5.15.70
    21f948cab866 ALSA: hda/sigmatel: Fix unused variable warning for beep power change
    5db17805b6ba cgroup: Add missing cpus_read_lock() to cgroup_attach_task_all()
    39b0235284c7 KVM: SEV: add cache flush to solve SEV cache incoherency issues
    d9bf46e74735 net: Find dst with sk's xfrm policy not ctl_sk
    ab5140c6ddd7 video: fbdev: pxa3xx-gcu: Fix integer overflow in pxa3xx_gcu_write
    9af7af862cb8 mksysmap: Fix the mismatch of 'L0' symbols in System.map
    2340f23c770d drm/panfrost: devfreq: set opp to the recommended one to configure regulator
    7e8df4920b2a MIPS: OCTEON: irq: Fix octeon_irq_force_ciu_mapping()
    af88da4c737a afs: Return -EAGAIN, not -EREMOTEIO, when a file already locked
    2dd0ae85fb3c net: usb: qmi_wwan: add Quectel RM520N
    a5e949e088bc ALSA: hda/tegra: Align BDL entry to 4KB boundary
    3d25aaf71fe0 ALSA: hda/sigmatel: Keep power up while beep is enabled
    d582756bfc71 wifi: mac80211_hwsim: check length for virtio packets
    17898c3b578a rxrpc: Fix calc of resend age
    1bbcd88c3c99 rxrpc: Fix local destruction being repeated
    87cd4c02bdb1 scsi: lpfc: Return DID_TRANSPORT_DISRUPTED instead of DID_REQUEUE
    f08a320b4b60 regulator: pfuze100: Fix the global-out-of-bounds access in pfuze100_regulator_probe()
    80c7be217ba7 ASoC: nau8824: Fix semaphore unbalance at error paths
    f1d57c4c99c2 arm64: dts: juno: Add missing MHU secure-irq
    59b756da49bf video: fbdev: i740fb: Error out if 'pixclock' equals zero
    899f4160b140 binder: remove inaccurate mmap_assert_locked()
    8c2bbfb0ded3 drm/amdgpu: move nbio sdma_doorbell_range() into sdma code for vega
    0a7d86f156fa drm/amdgpu: move nbio ih_doorbell_range() into ih code for vega
    dcef16f64969 drm/amdgpu: Don't enable LTR if not supported
    710ebf8f1a08 tools/include/uapi: Fix <asm/errno.h> for parisc and xtensa
    309e9f4a17cf parisc: Allow CONFIG_64BIT with ARCH=parisc
    9a72466fb61b cifs: always initialize struct msghdr smb_msg completely
    21c47a08f96a cifs: don't send down the destination address to sendmsg for a SOCK_STREAM
    e1aad8c56090 cifs: revalidate mapping when doing direct writes
    b04e0208d025 of/device: Fix up of_dma_configure_id() stub
    8fd27239ca92 parisc: ccio-dma: Add missing iounmap in error path in ccio_probe()
    5f285e4c47c3 block: blk_queue_enter() / __bio_queue_enter() must return -EAGAIN for nowait
    f86092d12fbb drm/meson: Fix OSD1 RGB to YCbCr coefficient
    d38eb1f37538 drm/meson: Correct OSD1 global alpha value
    89cfddd416ba gpio: mpc8xxx: Fix support for IRQ_TYPE_LEVEL_LOW flow_type in mpc85xx
    9a173db71a99 NFSv4: Turn off open-by-filehandle and NFS re-export for NFSv4.0
    cd358b2ee56f pinctrl: sunxi: Fix name for A100 R_PIO
    ca2b798e53d4 pinctrl: rockchip: Enhance support for IRQ_TYPE_EDGE_BOTH
    30fccb4fe449 pinctrl: qcom: sc8180x: Fix wrong pin numbers
    cbafdbb6f6ce pinctrl: qcom: sc8180x: Fix gpio_wakeirq_map
    ba6b9f7cc110 of: fdt: fix off-by-one error in unflatten_dt_nodes()
    c23065adf97f tty: serial: atmel: Preserve previous USART mode if RS485 disabled
    1d01d7beccba serial: atmel: remove redundant assignment in rs485_config
    f3450c33411b drm/tegra: vic: Fix build warning when CONFIG_PM=n
    820b689b4a7a Linux 5.15.69
    277674996dcf Input: goodix - add compatible string for GT1158
    b9b39f7332c5 RDMA/irdma: Use s/g array in post send only when its valid
    125c3ae8a936 usb: gadget: f_uac2: fix superspeed transfer
    fa7e0266c239 usb: gadget: f_uac2: clean up some inconsistent indenting
    07609e83c1b9 soc: fsl: select FSL_GUTS driver for DPIO
    3998dc50ebdc mm: Fix TLB flush for not-first PFNMAP mappings in unmap_region()
    cd698131ef5d usb: storage: Add ASUS <0x0b05:0x1932> to IGNORE_UAS
    6087747599ec platform/x86: acer-wmi: Acer Aspire One AOD270/Packard Bell Dot keymap fixes
    d4441b810bd8 perf/arm_pmu_platform: fix tests for platform_get_irq() failure
    55032fb14d4a net: dsa: hellcreek: Print warning only once
    985a5d3d491d drm/amd/amdgpu: skip ucode loading if ucode_size == 0
    a1347be8f0ff nvmet-tcp: fix unhandled tcp states in nvmet_tcp_state_change()
    3d380f9d1e2b Input: iforce - add support for Boeder Force Feedback Wheel
    b9682878abee ieee802154: cc2520: add rc code in cc2520_tx()
    3a10e8edee2b gpio: mockup: remove gpio debugfs when remove device
    b4ebcd6d48bc tg3: Disable tg3 device on system reboot to avoid triggering AER
    f715188c23fa hid: intel-ish-hid: ishtp: Fix ishtp client sending disordered message
    a86c8d1b36a9 HID: ishtp-hid-clientHID: ishtp-hid-client: Fix comment typo
    2e3aeb48995a dt-bindings: iio: gyroscope: bosch,bmg160: correct number of pins
    1b80691d5115 drm/msm/rd: Fix FIFO-full deadlock
    a9687a2dc7e1 platform/surface: aggregator_registry: Add support for Surface Laptop Go 2
    49801d5f8b67 Input: goodix - add support for GT1158
    709edbac4c45 iommu/vt-d: Fix kdump kernels boot failure with scalable mode
    90f922646f57 tracefs: Only clobber mode/uid/gid on remount if asked
    3c90af5a773a tracing: hold caller_addr to hardirq_{enable,disable}_ip
    64840a4a2d8e task_stack, x86/cea: Force-inline stack helpers
    0b009e5fd146 x86/mm: Force-inline __phys_addr_nodebug()
    f9571a969973 lockdep: Fix -Wunused-parameter for _THIS_IP_
    dee782da3937 ARM: dts: at91: sama7g5ek: specify proper regulator output ranges
    424ac5929d0a ARM: dts: at91: fix low limit for CPU regulator
    8be25fa7cfd6 ARM: dts: imx6qdl-kontron-samx6i: fix spi-flash compatible
    78eb5e326a0e ARM: dts: imx: align SPI NOR node name with dtschema
    3bb12efc5e4d ACPI: resource: skip IRQ override on AMD Zen platforms
    a68a734b19af NFS: Fix WARN_ON due to unionization of nfs_inode.nrequests

(From OE-Core rev: b4f0bc16db0a18baf9234171edce3206319a2c2d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit fbc8840580fe008c2deda50c0d2d5a98e9b6c564)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
wangmy
b0bf1ab118 lighttpd: upgrade 1.4.66 -> 1.4.67
Changelog:
=============
  * Update comment about TCP_INFO on OpenBSD
  * [mod_ajp13] fix crash with bad response headers (fixes #3170)
  * [core] handle RDHUP when collecting chunked body
  * [core] tweak streaming request body to backends
  * [core] handle ENOSPC with pwritev() (#3171)
  * [core] manually calculate off_t max (fixes #3171)
  * [autoconf] force large file support (#3171)
  * [multiple] quiet coverity warnings using casts
  * [meson] add license keyword to project declaration

(From OE-Core rev: d099203a342b8bbb35656b84c6488e8131cc8648)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7a399862bb2e1503fbffa18e7ec0767643f76132)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
Markus Volk
e57c26ea29 mesa: update 22.2.0 -> 22.2.2
Mesa 22.2.2 is a bug fix release which fixes bugs found since the 22.2.1 release.

New features

None

Bug fixes

radv: Crash in dEQP-VK.ray_query.misc.dynamic_indexing
glthread: radeonsi: offset textures in game starsector with glthread enabled
Crashing on Windows VM
Exanima renders with the wrong colors.
nouveau: tegra124: GL_OUT_OF_MEMORY error

Changes

freedreno: Fix graphic glitches on a4xx and a5xx
nir/lower_system_values: Fix cs_local_index_to_id with variable workgroups
pan/mdg: Lower PIPE_COMPUTE_CAP_MAX_THREADS_PER_BLOCK on Midgard
pan/mdg: Fix 16-bit alignment with spiller
nir: Fix nir_fmax_abs_vec_comp
gallium/vl: Add opaque rgb pixel formats
aco/spill: Fix spilling of Phi operands
tu: Reset whether there is DS resolve for dynamic subpass
gallivm: handle llvm coroutines for llvm > 15
nouveau: treat DRM_FORMAT_INVALID as implicit modifier
docs: Add sha256 sum for 22.2.1
.pick_status.json: Update to 243aa6b2ec0c2626b1333ba666a6d6d60ede8505
.pick_status.json: Update to c4482a3c1a973975eb27ac284a18bebca24f7876
.pick_status.json: Update to 3eed5931edf6e5f45378b013ca21f98f17af2b34
.pick_status.json: Update to b02e9ef35a0446019cda9473e4c355c7cc4bb24d
.pick_status.json: Mark 4c7a44413a07d3fb314f786e047bb7212c082a6c as denominated
.pick_status.json: Mark dbd022f2ab43ff0a9ecc05c61123467e25f109de as backported
turnip: Don’t use the dynamic color write enable during non-dynamic.
gallium/u_threaded_context: remove stale comment
r300: don’t use smooth line if not requested
r600/sfn: Always start a new CF after a KILL instruction
r600/sfn: don’t propagate registers into conditional test
virgl: Report CONSTANT_BUFFER_SIZE according to GL_MAX_UNIFORM_BLOCK_SIZE
vulkan/runtime: don’t lookup the pipeline disk cache if disabled
anv: initialization pipeline layout to 0s
anv: add missing tracepoint
clc/clover: Link clang statically when shared-llvm is disabled
zink: clamp line_stipple_factor to 1 if stipple is disabled
zink: unset rp_changed after initializing renderpass attachments
zink: disable fbfetch when flushing clears
vulkan/wsi: Add dep_libudev to idep dependencies
gallium/va: vaDeriveImage to check PIPE_VIDEO_SUPPORTS_CONTIGUOUS_PLANES_MAP
d3d12: Implement cap PIPE_VIDEO_SUPPORTS_CONTIGUOUS_PLANES_MAP
zink: fix invalid Offset set for variables which do not need an offset
zink: stop enabling minmax filtering when not supported
zink: fix isNan mismatch between NIR and SPIR-V
util/conf: enable init to zero workaround for Exanima
util/radeonsi: enable zerovram workaround for Exanima
radv: add radv_zero_vram workarounds for OpenGL games
glthread: fix matrix stack depth tracking
glthread: leave dlist dispatch in place for Begin/End
util: Turn -DWINDOWS_NO_FUTEX to be pre_args

- add a PACKAGECONFIG for perfetto support

(From OE-Core rev: a68121557f72ebccc92adaec0df2b43abe11869d)

Signed-off-by: Markus Volk <f_l_k@t-online.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit cbcaff0b4cc349706b9847f4262746b43adba209)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
Ross Burton
baccaad9a0 zlib: upgrade 1.2.12 -> 1.2.13
Changes in 1.2.13 (13 Oct 2022)
- Fix configure issue that discarded provided CC definition
- Correct incorrect inputs provided to the CRC functions
- Repair prototypes and exporting of new CRC functions
- Fix inflateBack to detect invalid input with distances too far
- Have infback() deliver all of the available output up to any error
- Fix a bug when getting a gzip header extra field with inflate()
- Fix bug in block type selection when Z_FIXED used
- Tighten deflateBound bounds
- Remove deleted assembler code references
- Various portability and appearance improvements

Drop a number of patches which have been merged upstream.

(From OE-Core rev: b7805c7daef0690e27d44aa18cf3946e3108abbf)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 115eb5326dc7f9256d58147b3655cd13d5994cfc)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
Ross Burton
c5c4cbb024 zlib: do out-of-tree builds
zlib supports out-of-tree builds, so do them.

(From OE-Core rev: 2cd077f6396efd940d873c5f7f0f7614d1626ac3)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit e8bf682e9ccf2ddce5149f01ba788ca813329221)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
wangmy
394054d7ca lttng-ust: upgrade 2.13.4 -> 2.13.5
Changelog:
==========
* Fix: bytecode validator: reject specialized load field/context ref instructions
* Fix: bytecode validator: reject specialized load instructions
* Fix: event notification capture: validate buffer length
* Fix: event notification capture error handling
* Fix: lttng-ust-comm: wait on wrong child process
* fix: 'make dist' without javah

(From OE-Core rev: d96afd6159b696dc18a7d6ab3731ad1ac258c98c)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 569d6c271bf782cb4a524603693adbbe3d020f92)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
wangmy
27e0f91aaa libsdl2: upgrade 2.24.0 -> 2.24.1
This is a stable bugfix release, with the following changes:

Windows

Only check to see if the ICC profile changes when the display changes or we gain focus
Fixed window resize handing when using the D3D12 renderer
Fixed Xbox controller detection on Windows XP

macOS

Fixed long delay in SDL_CloseAudioDevice()

Linux

Fixed crash in Wayland_HasScreenKeyboardSupport()

FreeBSD

Fixed building without GNU sort, but warn that dynamic libraries won't be found

Emscripten

Fixed infinite recursion related to mutexes on startup

OS/2

Fixes and improvements to SDL_LoadObject() functionality

0001-Disable-libunwind-in-native-OE-builds-by-not-looking.patch
refreshed for new version.

(From OE-Core rev: 3c686477cc7557060fd9152f7546f00099a630a2)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit aa45a2fad9ecd5d553c605dc6b3d4cd70d7d7776)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
wangmy
079bb45350 libksba: upgrade 1.6.0 -> 1.6.2
New upstream release fixing CVE-2022-3515

(From OE-Core rev: 8e453d64255ce6a01b193c3735bb0aefbaa6fb38)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 4bef6fc673de958dfbab80bcbc2e0159803b97ee)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
wangmy
2dd06fb636 wpebackend-fdo: upgrade 1.12.1 -> 1.14.0
Changelog:
==========
Fixed a crash caused by trying to deallocate already freed graphics buffers in certain situations.

(From OE-Core rev: d650490c7786edde665472a38eb68f6db1f6aa4d)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 0db4627fe8c6f8a0080248052dc06419774cba4f)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
wangmy
8ed9ff8919 numactl: upgrade 2.0.15 -> 2.0.16
Commits
5a99c6d: Revert "numademo: fix error on 32bit system" (Andi Kleen)
04da3af: fix the memory leak of numa_preferred api (luochenglcs) #139
86edd38: when preferred_many is not supported, fall back to preferred will (luochenglcs) #137
413a93f: add cut-release github workflow (#142) (LUCIANO FURTADO) #142
10285f1: Release numactl 2.0.16 (Filipe Brandenburger)

(From OE-Core rev: 5ab90209ef18876285bd62468e9cec7a9a80608d)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 6d9ed8d4b13c2d87dae482bbadef039de050bc9d)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
wangmy
86eaa373a7 libical: upgrade 3.0.14 -> 3.0.15
Changelog:
=========
 Add missing property parameters into libical-glib
 Fix CMake option USE_32BIT_TIME_T actually uses a 32-bit time_t value
 Fix icaltime_as_timet, which returned incorrect results for years >= 2100, to work properly between years 1902 and 10k.
 Fix x-property comma handling and escaping
 Built-in timezones updated to tzdata2022d (now with a VTIMEZONE for each time zone alias)
 Fix fuzzer issues
 Handle unreachable-code compile warnings with clang
 Ensure all vanew_foo() calls finish with (void*)0 (not 0)

(From OE-Core rev: 68e89fb36d43db7a655a3a73933e403bb0932ff3)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 6092ae3cbe0eaf006db615c6cc3f1692e1cc1df8)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
wangmy
b0b966ad07 libcap: upgrade 2.65 -> 2.66
RELEASE NOTES FOR 2.66

Fix documentation typos in cap_from_text.3 (Bug: 216514 reported by Paulo Andrade.)

Some getpcaps code clean up and a fix for PID argument parsing from Jakub Wilk.

Slightly more robust Makefiles to address an error with make -j48 test observed by Tomasz Kłoczko.

Include a simple Go program, captrace, to trace kernel capability validation checks

This program can be used to figure out what capabilities a program needs to operate.

captrace (a wrapper for bpftrace) uses BPF kprobes to monitor the kernel for capability checks and whether or not they succeed for the system, a specific PID or a program's direct execution.

Trim down the default file capabilities for contrib/sucap/su to those actually needed and set USER and HOME environment variables so bash doesn't complain about a sourcing error.

(From OE-Core rev: 21f57b4341d8520c1e7319b2b9a0616af61e0f68)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 9040e612084a561b1766bb86c9c002b811eea4c9)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
Tim Orling
fc5bc29d1b vim: upgrade 9.0.0614 -> 9.0.0820
Includes fixes for CVE-2022-3705
https://nvd.nist.gov/vuln/detail/CVE-2022-3705

For a short list of important changes, see:
https://www.arp242.net/vimlog/

(From OE-Core rev: 1b0ce402ef432cacb824a49aeb039732fe25dc9d)

Signed-off-by: Tim Orling <tim.orling@konsulko.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f6d917bd0f8810b5ed8d403ad25d59cda2fc9574)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-09 17:42:13 +00:00
Ed Tanous
8a3bbee311 openssl: Upgrade 3.0.5 -> 3.0.7
OpenSSL 3.0.5 is affected by a HIGH severity security vulnerability [1].

Upgrade the recipe to point to 3.0.7.

CVE-2022-3358 is reported fixed in 3.0.6, so drop the patch for that as
well.

[1] https://www.openssl.org/news/vulnerabilities.html

Fixes CVE-2022-3786 and CVE-2022-3602: X.509 Email Address Buffer Overflows
https://www.openssl.org/blog/blog/2022/11/01/email-address-overflows/

(From OE-Core rev: 48f9f92c547fac35ff398180a32a5b0829cd9fff)

Signed-off-by: Ed Tanous <edtanous@google.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a69ea1f7db96ec8b853573bd581438edd42ad6e0)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:49 +00:00
wangmy
b1b1c9232f gnutls: upgrade 3.7.7 -> 3.7.8
Changelog:
=========
** libgnutls: In FIPS140 mode, RSA signature verification is an approved
   operation if the key has modulus with known sizes (1024, 1280,
   1536, and 1792 bits), in addition to any modulus sizes larger than
   2048 bits, according to SP800-131A rev2.

** libgnutls: gnutls_session_channel_binding performs additional checks when
   GNUTLS_CB_TLS_EXPORTER is requested. According to RFC9266 4.2, the
   "tls-exporter" channel binding is only usable when the handshake is
   bound to a unique master secret (i.e., either TLS 1.3 or extended
   master secret extension is negotiated). Otherwise the function now
   returns error.

** libgnutls: usage of the following functions, which are designed to
   loosen restrictions imposed by allowlisting mode of configuration,
   has been additionally restricted. Invoking them is now only allowed
   if system-wide TLS priority string has not been initialized yet:
gnutls_digest_set_secure
gnutls_sign_set_secure
gnutls_sign_set_secure_for_certs
gnutls_protocol_set_enabled

(From OE-Core rev: a583ac20cc82ede59e1a4e30708cf5434b49ce37)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 858886aa07d0c2c2ef2489996cc8eca5fbe931fa)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:49 +00:00
Vyacheslav Yurkov
cc4b3a0040 overlayfs: Allow not used mount points
When the machine configuration defines a mount point which is not used in
any recipe, allow it to fall through and only report a note in the logs.
This can be expected behavior when a mount point is defined for several
machines, but not used in all of them.
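
A minimal sketch of the resulting behaviour, using hypothetical variable
names rather than the actual OE-Core class internals:

    # Mount points configured for the machine but claimed by no recipe
    # now produce a note instead of failing the build.
    configured_mounts = {"/data", "/mnt/overlay"}   # from machine configuration
    used_mounts = {"/data"}                         # collected from recipes

    for mnt in sorted(configured_mounts - used_mounts):
        print(f"NOTE: overlayfs mount point {mnt} is defined for the machine "
              "but not used by any recipe")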

(From OE-Core rev: c7c6b273656a3e2b8b959004b996e56d4086ce5e)

Signed-off-by: Vyacheslav Yurkov <Vyacheslav.Yurkov@bruker.com>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
(cherry picked from commit a9c604b5e0d943b5b5f7c8bdd5be730c2abcf866)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:49 +00:00
Joshua Watt
900420392d runqemu: Fix gl-es argument from causing other arguments to be ignored
The code to parse arguments was inadvertently skipping all arguments in
the elif block after gl-es if it was specified on the command line.
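
A hypothetical illustration of this class of bug (not the actual runqemu
source): testing the whole argument list instead of the current argument
makes every iteration take the gl-es branch, so the elif handlers below
it never run.

    def parse_buggy(args):
        opts = set()
        for arg in args:
            if "gl-es" in args:     # BUG: tests the list, not the current arg
                opts.add("gl-es")
            elif arg == "kvm":      # skipped whenever gl-es is present
                opts.add("kvm")
        return opts

    def parse_fixed(args):
        opts = set()
        for arg in args:
            if arg == "gl-es":      # test only the argument being processed
                opts.add("gl-es")
            elif arg == "kvm":
                opts.add("kvm")
        return opts

    assert parse_buggy(["gl-es", "kvm"]) == {"gl-es"}            # 'kvm' ignored
    assert parse_fixed(["gl-es", "kvm"]) == {"gl-es", "kvm"}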

(From OE-Core rev: dd1dcfada1fa46ecb8227c2852769b35026875d3)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 718bb8d56f6a24c86e67830a7d13af54df2ebb4e)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:49 +00:00
Joshua Watt
03f1b28c6d runqemu: Do not perturb script environment
Instead of changing the script environment to affect the child
processes, make a copy of the environment with modifications and pass
that to subprocess.

Specifically, when dri rendering is enabled, LD_PRELOAD was being passed
to all processes created by the script which resulted in other commands
(e.g. stty) exiting with a failure like:

 /bin/sh: symbol lookup error: sysroots-uninative/x86_64-linux/lib/librt.so.1: undefined symbol: __libc_unwind_link_get, version GLIBC_PRIVATE

Making a copy of the environment fixes this because the LD_PRELOAD is
now only passed to qemu itself.
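
A sketch of the approach, with an illustrative preload path (the real
script computes this from the build output):

    import os
    import subprocess

    # Modify a copy of the environment, not the script's own.
    env = os.environ.copy()
    env["LD_PRELOAD"] = "/path/to/libgl-preload.so"  # hypothetical path

    # Only the qemu child sees LD_PRELOAD ...
    subprocess.run(["qemu-system-x86_64", "--version"], env=env)

    # ... while other helpers inherit the untouched environment, so
    # commands like stty keep working.
    subprocess.run(["stty", "size"])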

(From OE-Core rev: 91c2449d4e873b2cec8777d71e218a12f899669d)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 2232599d330bd5f2a9e206b490196569ad855de8)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:49 +00:00
Jeremy Puhlman
692a8ab550 qemu-native: Add PACKAGECONFIG option for jack
With libjack-devel or jack-audio-connection-kit-devel, qemu-native
detects the library/header and tries to build with it. Since it's
missing from the sysroot, it fails to build.

 -O2 -fPIE -D_REENTRANT -Wno-undef -MD -MQ libcommon.fa.p/audio_jackaudio.c.o
-MF libcommon.fa.p/audio_jackaudio.c.o.d -o libcommon.fa.p/audio_jackaudio.c.o
-c ../qemu-6.2.0/audio/jackaudio.c
| ../qemu-6.2.0/audio/jackaudio.c:34:10: fatal error: jack/jack.h: No such file
or directory
|    34 | #include <jack/jack.h>
|       |          ^~~~~~~~~~~~~
| compilation terminated.

(From OE-Core rev: 7c8f23aa594175f2169df0d62051bf42d491a1bb)

Signed-off-by: Jeremy A. Puhlman <jpuhlman@mvista.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 27260be388f7f9f324ff405e7d8e254925b4ae90)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Jan-Simon Moeller
94270812fa buildtools-tarball: export certificates to python and curl
The custom path of the ca-certificates.crt within the buildtools-tarball requires more
environment variables to be exported, namely REQUESTS_CA_BUNDLE for the Python requests
library and CURL_CA_BUNDLE for curl.
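
A sketch of the effect, assuming a hypothetical buildtools install path
for the CA bundle:

    import os

    cacert = "/opt/buildtools/sysroots/x86_64/etc/ssl/certs/ca-certificates.crt"
    os.environ["REQUESTS_CA_BUNDLE"] = cacert   # honoured by python-requests
    os.environ["CURL_CA_BUNDLE"] = cacert       # honoured by curl/libcurl

    import requests  # third-party; reads REQUESTS_CA_BUNDLE per request
    print(requests.get("https://docs.yoctoproject.org").status_code)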

(From OE-Core rev: facafa0f76af9cbf80f862497b66c18b3fbfa60b)

Signed-off-by: Jan-Simon Moeller <jsmoeller@linuxfoundation.org>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
(cherry picked from commit 5c249db9de8ad8cfe0996ff4fee4c575a5ff1e34)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Kai Kang
570e56775b mesa: only apply patch to fix ALWAYS_INLINE for native
0001-nir-nir_opt_move-fix-ALWAYS_INLINE-compiler-error.patch is not
needed by target mesa any more. But it still fails to compile
mesa-native without this patch when DEBUG_BUILD is enabled on Ubuntu
18.04 with gcc 7.5.0:

| ../mesa-22.1.6/src/compiler/nir/nir_inline_helpers.h: In function ‘nir_opt_move_block’:
| ../mesa-22.1.6/src/compiler/nir/nir_opt_move.c:55:1: error: inlining failed in call to
    always_inline ‘src_is_ssa’: indirect function call with a yet undetermined callee
|  src_is_ssa(nir_src *src, void *state)
|  ^~~~~~~~~~

So only apply it for mesa-native.

(From OE-Core rev: f6fb2da56ef1f35b536ebf62a03e10bba59d8276)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit c6a6d0c2680799683d58968c2558a224f27caaa2)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
wangmy
0dfef83aa5 ifupdown: upgrade 0.8.37 -> 0.8.39
ifupdown (0.8.38)
  * Remove dependency on lsb-base (Closes: #1020604)
  * Remove pump support (no longer in Debian archive)
  * Fix error message when turning down VLAN interfaces. Thanks to Aleksandr
    Muravjov (Closes: #1007889)
  * Ship Ubuntu's integration scripts for systemd-resolved. Thanks to Luca
    Boccassi (Closes: #1016798)
  * Add rfkill support. Thanks to Sebastian Reichel <email address hidden>
    (Closes: #645559)

ifupdown (0.8.39)
  * Add execution permission on resolved scripts. Thanks to Vincent Lefèvre
    (Closes: #1021259)

(From OE-Core rev: 342fb3183fd1910b76c2bed242bf8b2ea179d217)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit f0462e3336c7134aeeb2684692732c187971b330)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
wangmy
adaa8ad2a5 mtools: upgrade 4.0.40 -> 4.0.41
disable-hardcoded-configs.patch
refreshed for new version

Changelog:
=========
- Made it possible again to have FAT32 filesystems with less
  than 0xfff5 clusters
- Make FAT32 entries 0 and 1 match what windows 10 does
- Misc source code and configure script cleanup

(From OE-Core rev: 9ac0de44f11123876a92f7d7819d5ff2c20475b7)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit b19127f0cd0e10c7180c138284b38c97fa9db7af)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Ross Burton
811f8a09eb pango: upgrade 1.50.9 -> 1.50.10
Overview of changes in 1.50.10, 16-09-2022
=========================================
- Avoid some unnecessary strdups
- Fix line height computations with a non-trivial CTM

(From OE-Core rev: 78dc0bf6384349c23a54f59d89988ad242125581)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
(cherry picked from commit 884ce27b9cee231e093fe53192d04133c437404e)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Teoh Jay Shen
72157834c6 vim: Upgrade 9.0.0598 -> 9.0.0614
Include fixes for CVE-2022-3352.

(From OE-Core rev: 9067e3a24bc5558af6a41f2c5e6f16c37116e3ed)

Signed-off-by: Teoh Jay Shen <jay.shen.teoh@intel.com>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
(cherry picked from commit 8aa707f80ae1cfe89d5e20ec1f1632a65149aed4)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
wangmy
25cfdd66e4 meson: upgrade 0.63.2 -> 0.63.3
(From OE-Core rev: fe33134efbe109b9f3bffa1b05fd6fed8860129c)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 3c87597dcde7676858f76c1066cd87195ecc8aef)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Liam Beguin
33711d546d meson: make wrapper options sub-command specific
The meson-wrapper adds setup options to facilitate cross-compilation.
The current options are exclusive to the setup sub-command and might
cause issues with other sub-commands.

Update the wrapper to make options sub-command specific.
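
A hedged rendering of the idea in Python (the actual wrapper is a shell
script, and MESON_CROSS_FILE here is a hypothetical variable): only the
setup sub-command receives the cross-compilation options, every other
sub-command is passed through unchanged.

    #!/usr/bin/env python3
    import os
    import sys

    args = sys.argv[1:]
    if args and args[0] == "setup":
        args[1:1] = ["--cross-file",
                     os.environ.get("MESON_CROSS_FILE", "meson.cross")]

    # Hand over to the real meson binary (name assumed for illustration).
    os.execvp("meson.real", ["meson.real"] + args)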

(From OE-Core rev: 4475250ee0d83cc90322f2fcd9ec8df7c05b6903)

Signed-off-by: Liam Beguin <liambeguin@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 7bcda141f2019862b4fb5d8dec7956cd8344b420)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Ross Burton
1c94f9d64b qemu: backport the fix for CVE-2022-3165
(From OE-Core rev: d63c5b210b50a2c332a5c309298ec13b510cc7c8)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d820389728b0f5e085954b4f995da2b2014acedf)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Qiu, Zheng
e6daf39c9b tiff: fix a typo for CVE-2022-2953.patch
The CVE number in the patch is a typo: CVE-2022-2053 is not related to
libtiff, so fix it.

(From OE-Core rev: 3ef84008bf729f74f1244e8b57451cdeb3a9e262)

Signed-off-by: Zheng Qiu <zheng.qiu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c9f76ef859b0b4edb83ac098816b625f52c78173)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Ross Burton
7ffb05dd16 tiff: fix a number of CVEs
Backport fixes from upstream for the following CVEs:
- CVE-2022-3599
- CVE-2022-3597
- CVE-2022-3626
- CVE-2022-3627
- CVE-2022-3570
- CVE-2022-3598

(From OE-Core rev: bfd6d135a555e854e30d45ea36b0cbd612e322df)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 722bbb88777cc3c7d1c8273f1279fc18ba33e87c)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Ross Burton
8074213da8 xserver-xorg: backport fixes for CVE-2022-3550 and CVE-2022-3551
(From OE-Core rev: 9163db79ec90ff4b8ecd189f5fb6e44e27b9e53b)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit e32401d8bf44afcca88af7e4c5948d2c28e1813f)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Ross Burton
f435cff54a xserver-xorg: ignore CVE-2022-3553 as it is XQuartz-specific
(From OE-Core rev: 2017ed15cc5b29319fe1b769c1fcfc5c2f799fd8)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 769576f36aac9652525beec5c7e8a4d26632b844)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Ross Burton
a6586821f0 libx11: apply the fix for CVE-2022-3554
(From OE-Core rev: 3a65a787d1b53f57cd0eedbf7a70ce6dcde0d148)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 5d30f124274d2822d72b56f84eb8c8ae64e31e0d)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Hitendra Prajapati
0bc04f5e6d openssl: CVE-2022-3358 Using a Custom Cipher with NID_undef may lead to NULL encryption
Upstream-Status: Backport [https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=5485c56679d7c49b96e8fc8ca708b0b7e7c03c4b]
Description:
	CVE-2022-3358 openssl: Using a Custom Cipher with NID_undef may lead to NULL encryption.
Affects "openssl < 3.0.6"

(From OE-Core rev: c28dc71f17133f6e4470fc0c1a552c743869b3ad)

Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit f98b2273c6f03f8f6029a7a409600ce290817e27)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-11-04 23:31:48 +00:00
Richard Purdie
6b9db5a99b bitbake: tests/fetch: Allow handling of a file:// url within a submodule
CVE-2022-39253 in git meant file:// urls within submodules were disabled. Add
a parameter to the commands in the tests to allow this to continue to work.

(Bitbake rev: 209f7ba352b60722830157054e3fc56cb9c693eb)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-10-26 23:02:11 +01:00
Mark Asselstine
6672cbe670 bitbake: tests: bb.tests.fetch.URLHandle: add 2 new tests
Add a test for special characters in user and password to qualify
decodeurl(), inspired by a bug report describing that '=' signs in a
password were problematic.

Add a second test to qualify decodeurl() as related to the change in
commit 628c4bf6c89b [fetch2/__init__: handle @ in package names].

Relates to [YOCTO #14476]
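
An illustrative check in the spirit of these tests, assuming bitbake's
lib/ directory is on sys.path so bb.fetch2 is importable:

    from bb.fetch2 import decodeurl

    scheme, host, path, user, pswd, params = decodeurl(
        "https://user:pass=word@example.com/repo.git")
    assert user == "user"
    assert pswd == "pass=word"   # the '=' must survive decoding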

(Bitbake rev: ee04cf09c7022168c035affa654773652a49793e)

Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-10-26 23:02:10 +01:00
Michael Opdenacker
c58059d282 bitbake: doc: bitbake-user-manual: expand description of BB_PRESSURE_MAX variables
(Bitbake rev: 72e9847dd578c3cbed52a9c16fea23ebbeef5046)

Signed-off-by: Paul Eggleton <paul.eggleton@microsoft.com>
Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-10-26 23:02:10 +01:00
Mark Hatle
ec08faf2e4 bitbake: utils/ply: Update md5 to better report errors with hashlib
In the case where hashlib is not available, the try would fail and fall
through resulting in a backtrace on the usage of the 'sig'.  The backtrace
itself was confusing and made it difficult to determine what went wrong.

Update the import to be in its own try block with an appropriate
message to indicate what went wrong.

Note, in the current version of ply all of this code has been restructured,
so this is not applicable upstream.

Additionally, some versions of hashlib don't appear to implement the
second FIPS related argument.  Detect this and support both versions.
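
A sketch of the detection described above: prefer the FIPS-related
usedforsecurity keyword, fall back when this hashlib predates it, and
fail with a clear message rather than a confusing backtrace if md5 is
unavailable entirely.

    import hashlib

    try:
        sig = hashlib.md5(usedforsecurity=False)
    except TypeError:
        sig = hashlib.md5()   # older hashlib without the second argument
    except ValueError as exc:
        raise SystemExit("md5 is not available in this Python: %s" % exc)

    sig.update(b"lexer table signature")
    print(sig.hexdigest())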

(Bitbake rev: 484ab42f440070c0369b81f5c69da860fa47a798)

Signed-off-by: Mark Hatle <mark.hatle@amd.com>
Signed-off-by: Mark Hatle <mark.hatle@kernel.crashing.org>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-10-26 23:02:10 +01:00
Johan Korsnes
7aa3ed5c37 bitbake: bitbake: user-manual: inform about spaces in :remove
Inform the reader that there should be no need for spaces in the value
when using the removal override `:remove`.

Considering why spaces are used in the other override operators, it
might seem obvious that they aren't needed for the removal operator.
But it seems like I'm not the first to be confused about this.

Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
Cc: Quentin Schulz <quentin.schulz@theobroma-systems.com>
Cc: Ross Burton <ross.burton@arm.com>
Cc: Nicolas Dechesne <nicolas.dechesne@linaro.org>
(Bitbake rev: 0a493a772f83436cbe909de93c157f4ab2d2d136)

Signed-off-by: Johan Korsnes <johan.korsnes@remarkable.no>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-10-26 23:02:10 +01:00
Richard Purdie
b6d633e7f3 openssl: Fix SSL_CERT_FILE to match ca-certs location
In OE-Core d6b15d1e70b99185cf245d829ada5b6fb99ec1af,
"openssl: export necessary env vars in SDK", the value added for
SSL_CERT_FILE was in conflict with the value used elsewhere, such as
in buildtools. This makes them match and fixes buildtools testsdk
failures.
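
A sketch of why the values must agree, using a hypothetical SDK path:
Python's ssl module (and OpenSSL-based tools generally) honour
SSL_CERT_FILE, so it has to point at the file the ca-certificates
recipe actually installs.

    import os
    import ssl

    os.environ["SSL_CERT_FILE"] = (
        "/opt/poky-sdk/sysroots/x86_64-pokysdk-linux"
        "/etc/ssl/certs/ca-certificates.crt")

    # create_default_context() consults SSL_CERT_FILE when loading the
    # default verification store.
    ctx = ssl.create_default_context()
    print(ssl.get_default_verify_paths().openssl_cafile_env)  # 'SSL_CERT_FILE'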

(From OE-Core rev: d40f7ddcfbdd5cb1d9f96271fefddf67e9044bb9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2022-10-26 23:01:18 +01:00
1698 changed files with 52974 additions and 57148 deletions

.gitignore

@@ -31,8 +31,4 @@ pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log
_toaster_clones/
downloads/
sstate-cache/
toaster.sqlite
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log

bitbake/README

@@ -13,8 +13,6 @@ Bitbake plain documentation can be found under the doc directory or its integrat
html version at the Yocto Project website:
https://docs.yoctoproject.org
Bitbake requires Python version 3.8 or newer.
Contributing
------------
@@ -36,21 +34,10 @@ Source code:
https://git.openembedded.org/bitbake/
Testing
-------
Testing:
Bitbake has a testsuite located in lib/bb/tests/ whose aim is to try and prevent regressions.
You can run this with "bitbake-selftest". In particular the fetcher is well covered since
it has so many corner cases. The datastore has many tests too. Testing with the testsuite is
recommended before submitting patches, particularly to the fetcher and datastore. We also
appreciate new test cases and may require them for more obscure issues.
To run the tests, "zstd" and "git" must be installed. Git must be correctly configured, in
particular the user.email and user.name values must be set.
The assumption is made that this testsuite is run from an initialized OpenEmbedded build
environment (i.e. `source oe-init-build-env` is used). If this is not the case, run the
testsuite as follows:
export PATH=$(pwd)/bin:$PATH
bin/bitbake-selftest

View File

@@ -25,9 +25,10 @@ except RuntimeError as exc:
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
__version__ = "2.4.0"
__version__ = "2.2.0"
if __name__ == "__main__":
if __version__ != bb.__version__:
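
The ``bb.utils.check_system_locale()`` call added here replaces the
inline encoding check being removed. A rough sketch of what such a
helper does, based only on the removed lines::

    import sys

    def check_system_locale():
        # Python cannot change the filesystem encoding after startup,
        # so refuse to run unless the locale already provides UTF-8.
        if sys.getfilesystemencoding() != "utf-8":
            sys.exit("Please use a locale setting which supports UTF-8 "
                     "(such as LANG=en_US.UTF-8).")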

View File

@@ -25,7 +25,6 @@ if __name__ == "__main__":
parser.add_argument('-u', '--unexpand', help='Do not expand the value (with --value)', action="store_true")
parser.add_argument('-f', '--flag', help='Specify a variable flag to query (with --value)', default=None)
parser.add_argument('--value', help='Only report the value, no history and no variable name', action="store_true")
parser.add_argument('-q', '--quiet', help='Silence bitbake server logging', action="store_true")
args = parser.parse_args()
if args.unexpand and not args.value:
@@ -36,7 +35,7 @@ if __name__ == "__main__":
print("--flag only makes sense with --value")
sys.exit(1)
with bb.tinfoil.Tinfoil(tracking=True, setup_logging=not args.quiet) as tinfoil:
with bb.tinfoil.Tinfoil(tracking=True) as tinfoil:
if args.recipe:
tinfoil.prepare(quiet=2)
d = tinfoil.parse_recipe(args.recipe)

View File

@@ -12,12 +12,11 @@ warnings.simplefilter("default")
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 11 or not sys.argv[1].startswith("decafbad"):
if len(sys.argv) != 10 or not sys.argv[1].startswith("decafbad"):
print("bitbake-server is meant for internal execution by bitbake itself, please don't use it standalone.")
sys.exit(1)
@@ -29,8 +28,7 @@ logfile = sys.argv[4]
lockname = sys.argv[5]
sockname = sys.argv[6]
timeout = float(sys.argv[7])
profile = bool(int(sys.argv[8]))
xmlrpcinterface = (sys.argv[9], int(sys.argv[10]))
xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
if xmlrpcinterface[0] == "None":
xmlrpcinterface = (None, xmlrpcinterface[1])
@@ -51,5 +49,5 @@ logger = logging.getLogger("BitBake")
handler = bb.event.LogHandler()
logger.addHandler(handler)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface, profile)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface)

View File

@@ -24,7 +24,8 @@ import subprocess
from multiprocessing import Lock
from threading import Thread
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -121,10 +122,11 @@ def worker_child_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
with bb.utils.lock_timeout(worker_pipe_lock):
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe_lock.acquire()
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
raise
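
The new code wraps the pipe write in ``bb.utils.lock_timeout()`` instead
of a bare acquire/release pair. A hedged sketch of such a context
manager (the timeout value and error handling are assumptions, not the
actual implementation)::

    import contextlib

    @contextlib.contextmanager
    def lock_timeout(lock, timeout=300):
        # Acquire with a timeout so a wedged lock holder cannot hang the
        # worker forever, and guarantee release even if the body raises.
        if not lock.acquire(timeout=timeout):
            raise OSError("Failed to acquire lock within %ds" % timeout)
        try:
            yield
        finally:
            lock.release()

Usage then mirrors the diff: ``with lock_timeout(worker_pipe_lock): ...``.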
@@ -143,16 +145,7 @@ def sigterm_handler(signum, frame):
os.killpg(0, signal.SIGTERM)
sys.exit()
def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
fn = runtask['fn']
task = runtask['task']
taskname = runtask['taskname']
taskhash = runtask['taskhash']
unihash = runtask['unihash']
appends = runtask['appends']
taskdepdata = runtask['taskdepdata']
quieterrors = runtask['quieterrors']
def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, extraconfigdata, quieterrors=False, dry_run_exec=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
@@ -164,7 +157,8 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
uid = os.getuid()
gid = os.getgid()
taskdep = runtask['taskdep']
taskdep = workerdata["taskdeps"][fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
umask = taskdep['umask'][taskname]
elif workerdata["umask"]:
@@ -176,24 +170,24 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
except TypeError:
pass
dry_run = cfg.dry_run or runtask['dry_run']
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
fakeroot = True
envvars = (runtask['fakerootenv'] or "").split()
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
fakedirs = (runtask['fakerootdirs'] or "").split()
fakedirs = (workerdata["fakerootdirs"][fn] or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
logger.debug2('Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
else:
envvars = (runtask['fakerootnoenv'] or "").split()
envvars = (workerdata["fakerootnoenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
@@ -244,6 +238,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
os.umask(umask)
try:
bb_cache = bb.cache.NoCache(databuilder)
(realfn, virtual, mc) = bb.cache.virtualfn2realfn(fn)
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
@@ -262,14 +257,13 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
bb.parse.siggen.set_taskhashes(workerdata["newhashes"])
ret = 0
the_data = databuilder.parseRecipe(fn, appends)
the_data = bb_cache.loadDataFull(fn, appends)
the_data.setVar('BB_TASKHASH', taskhash)
the_data.setVar('BB_UNIHASH', unihash)
bb.parse.siggen.setup_datacache_from_datastore(fn, the_data)
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
if not bb.utils.to_boolean(the_data.getVarFlag(taskname, 'network')):
if not the_data.getVarFlag(taskname, 'network', False):
if bb.utils.is_local_uid(uid):
logger.debug("Attempting to disable network for %s" % taskname)
bb.utils.disable_network(uid, gid)
@@ -481,15 +475,11 @@ class BitbakeWorker(object):
sys.exit(0)
def handle_runtask(self, data):
runtask = pickle.loads(data)
fn = runtask['fn']
task = runtask['task']
taskname = runtask['taskname']
fn, task, taskname, taskhash, unihash, quieterrors, appends, taskdepdata, dry_run_exec = pickle.loads(data)
workerlog_write("Handling runtask %s %s %s\n" % (task, fn, taskname))
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, self.extraconfigdata, runtask)
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, self.extraconfigdata, quieterrors, dry_run_exec)
self.build_pids[pid] = task
self.build_pipes[pid] = runQueueWorkerPipe(pipein, pipeout)
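
The theme of this file's changes is replacing a long positional
argument tuple with a single ``runtask`` dictionary. A small
illustration of why that is more robust (field values are hypothetical)::

    import pickle

    # A dict keyed by field name tolerates fields being added or
    # reordered; a positional tuple breaks if sender and receiver
    # disagree on the order or count of fields.
    runtask = {"fn": "example.bb", "task": 0, "taskname": "do_build"}
    decoded = pickle.loads(pickle.dumps(runtask))
    assert decoded["taskname"] == "do_build"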

View File

@@ -1,9 +0,0 @@
<footer>
<hr/>
<div role="contentinfo">
<p>&copy; Copyright {{ copyright }}
<br>Last updated on {{ last_updated }} from the <a href="https://git.openembedded.org/bitbake/">bitbake</a> git repository.
</p>
</div>
</footer>

View File

@@ -552,8 +552,8 @@ through dependency chains are more complex and are generally
accomplished with a Python function. The code in
``meta/lib/oe/sstatesig.py`` shows two examples of this and also
illustrates how you can insert your own policy into the system if so
desired. This file defines the basic signature generator
OpenEmbedded-Core uses: "OEBasicHash". By default, there
desired. This file defines the two basic signature generators
OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash". By default, there
is a dummy "noop" signature handler enabled in BitBake. This means that
behavior is unchanged from previous versions. ``OE-Core`` uses the
"OEBasicHash" signature handler by default through this setting in the
@@ -561,13 +561,14 @@ behavior is unchanged from previous versions. ``OE-Core`` uses the
BB_SIGNATURE_HANDLER ?= "OEBasicHash"
The main feature of the "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is that
it adds the task hash to the stamp files. Thanks to this, any metadata
change will change the task hash, automatically causing the task to be run
again. This removes the need to bump :term:`PR` values, and changes to
metadata automatically ripple across the build.
The "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is the same as the "OEBasic"
version but adds the task hash to the stamp files. This results in any
metadata change that changes the task hash, automatically causing the
task to be run again. This removes the need to bump
:term:`PR` values, and changes to metadata automatically
ripple across the build.
It is also worth noting that the end result of signature
It is also worth noting that the end result of these signature
generators is to make some dependency and hash information available to
the build. This information includes:
@@ -656,7 +657,7 @@ builds are when execute, bitbake also supports user defined
configuration of the `Python
logging <https://docs.python.org/3/library/logging.html>`__ facilities
through the :term:`BB_LOGCONFIG` variable. This
variable defines a JSON or YAML `logging
variable defines a json or yaml `logging
configuration <https://docs.python.org/3/library/logging.config.html>`__
that will be intelligently merged into the default configuration. The
logging configuration is merged using the following rules:
@@ -690,9 +691,9 @@ logging configuration is merged using the following rules:
adds a filter called ``BitBake.defaultFilter``, both filters will be
applied to the logger
As a first example, you can create a ``hashequiv.json`` user logging
configuration file to log all Hash Equivalence related messages of ``VERBOSE``
or higher priority to a file called ``hashequiv.log``::
As an example, consider the following user logging configuration file
which logs all Hash Equivalence related messages of VERBOSE or higher to
a file called ``hashequiv.log`` ::
{
"version": 1,
@@ -721,40 +722,3 @@ or higher priority to a file called ``hashequiv.log``::
}
}
}
Then set the :term:`BB_LOGCONFIG` variable in ``conf/local.conf``::
BB_LOGCONFIG = "hashequiv.json"
Another example is this ``warn.json`` file to log all ``WARNING`` and
higher priority messages to a ``warn.log`` file::
{
"version": 1,
"formatters": {
"warnlogFormatter": {
"()": "bb.msg.BBLogFormatter",
"format": "%(levelname)s: %(message)s"
}
},
"handlers": {
"warnlog": {
"class": "logging.FileHandler",
"formatter": "warnlogFormatter",
"level": "WARNING",
"filename": "warn.log"
}
},
"loggers": {
"BitBake": {
"handlers": ["warnlog"]
}
},
"@disable_existing_loggers": false
}
Note that BitBake's helper classes for structured logging are implemented in
``lib/bb/msg.py``.
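
For orientation, this is how a dictionary-style logging configuration
like the ones above is applied with the Python standard library;
BitBake's merging logic in ``lib/bb/msg.py`` is more involved, so treat
this as a plain-Python sketch::

    import logging
    import logging.config

    config = {
        "version": 1,
        "handlers": {
            "warnlog": {
                "class": "logging.FileHandler",
                "level": "WARNING",
                "filename": "warn.log",
            }
        },
        "loggers": {"BitBake": {"handlers": ["warnlog"]}},
        "disable_existing_loggers": False,
    }
    logging.config.dictConfig(config)
    logging.getLogger("BitBake").warning("this message goes to warn.log")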

View File

@@ -424,8 +424,8 @@ This fetcher supports the following parameters:
- *"nobranch":* Tells the fetcher to not check the SHA validation for
the branch when set to "1". The default is "0". Set this option for
the recipe that refers to the commit that is valid for any namespace
(branch, tag, ...) instead of the branch.
the recipe that refers to the commit that is valid for a tag instead
of the branch.
- *"bareclone":* Tells the fetcher to clone a bare clone into the
destination directory without checking out a working tree. Only the
@@ -740,7 +740,7 @@ Here is an example URL with both fetchers::
"
See :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/packages.html#creating-node-package-manager-npm-packages>`
</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
in the Yocto Project manual for details about using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
to automatically create a recipe from an NPM URL.
@@ -777,7 +777,7 @@ the package which has such dependencies, for example::
Such a file can automatically be generated using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
as described in the :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/packages.html#creating-node-package-manager-npm-packages>`
</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
section of the Yocto Project.
Other Fetchers

View File

@@ -18,32 +18,28 @@ it.
Obtaining BitBake
=================
See the :ref:`bitbake-user-manual/bitbake-user-manual-intro:obtaining bitbake` section for
See the :ref:`bitbake-user-manual/bitbake-user-manual-hello:obtaining bitbake` section for
information on how to obtain BitBake. Once you have the source code on
your machine, the BitBake directory appears as follows::
$ ls -al
total 108
drwxr-xr-x 9 fawkh 10000 4096 feb 24 12:10 .
drwx------ 36 fawkh 10000 4096 mar 2 17:00 ..
-rw-r--r-- 1 fawkh 10000 365 feb 24 12:10 AUTHORS
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 bin
-rw-r--r-- 1 fawkh 10000 16501 feb 24 12:10 ChangeLog
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 classes
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 conf
drwxr-xr-x 5 fawkh 10000 4096 feb 24 12:10 contrib
drwxr-xr-x 6 fawkh 10000 4096 feb 24 12:10 doc
drwxr-xr-x 8 fawkh 10000 4096 mar 2 16:26 .git
-rw-r--r-- 1 fawkh 10000 31 feb 24 12:10 .gitattributes
-rw-r--r-- 1 fawkh 10000 392 feb 24 12:10 .gitignore
drwxr-xr-x 13 fawkh 10000 4096 feb 24 12:11 lib
-rw-r--r-- 1 fawkh 10000 1224 feb 24 12:10 LICENSE
-rw-r--r-- 1 fawkh 10000 15394 feb 24 12:10 LICENSE.GPL-2.0-only
-rw-r--r-- 1 fawkh 10000 1286 feb 24 12:10 LICENSE.MIT
-rw-r--r-- 1 fawkh 10000 229 feb 24 12:10 MANIFEST.in
-rw-r--r-- 1 fawkh 10000 2413 feb 24 12:10 README
-rw-r--r-- 1 fawkh 10000 43 feb 24 12:10 toaster-requirements.txt
-rw-r--r-- 1 fawkh 10000 2887 feb 24 12:10 TODO
total 100
drwxrwxr-x. 9 wmat wmat 4096 Jan 31 13:44 .
drwxrwxr-x. 3 wmat wmat 4096 Feb 4 10:45 ..
-rw-rw-r--. 1 wmat wmat 365 Nov 26 04:55 AUTHORS
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 bin
drwxrwxr-x. 4 wmat wmat 4096 Jan 31 13:44 build
-rw-rw-r--. 1 wmat wmat 16501 Nov 26 04:55 ChangeLog
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 classes
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 conf
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 contrib
-rw-rw-r--. 1 wmat wmat 17987 Nov 26 04:55 COPYING
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 doc
-rw-rw-r--. 1 wmat wmat 69 Nov 26 04:55 .gitignore
-rw-rw-r--. 1 wmat wmat 849 Nov 26 04:55 HEADER
drwxrwxr-x. 5 wmat wmat 4096 Jan 31 13:44 lib
-rw-rw-r--. 1 wmat wmat 195 Nov 26 04:55 MANIFEST.in
-rw-rw-r--. 1 wmat wmat 2887 Nov 26 04:55 TODO
At this point, you should have BitBake cloned to a directory that
matches the previous listing except for dates and user names.
@@ -56,7 +52,7 @@ directory to where your local BitBake files are and run the following
command::
$ ./bin/bitbake --version
BitBake Build Tool Core version 2.3.1
BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0
The console output tells you what version
you are running.
@@ -134,8 +130,23 @@ Following is the complete "Hello World" example.
directory. Run the ``bitbake`` command and see what it does::
$ bitbake
ERROR: The BBPATH variable is not set and bitbake did not find a conf/bblayers.conf file in the expected location.
The BBPATH variable is not set and bitbake did not
find a conf/bblayers.conf file in the expected location.
Maybe you accidentally invoked bitbake from the wrong directory?
DEBUG: Removed the following variables from the environment:
GNOME_DESKTOP_SESSION_ID, XDG_CURRENT_DESKTOP,
GNOME_KEYRING_CONTROL, DISPLAY, SSH_AGENT_PID, LANG, no_proxy,
XDG_SESSION_PATH, XAUTHORITY, SESSION_MANAGER, SHLVL,
MANDATORY_PATH, COMPIZ_CONFIG_PROFILE, WINDOWID, EDITOR,
GPG_AGENT_INFO, SSH_AUTH_SOCK, GDMSESSION, GNOME_KEYRING_PID,
XDG_SEAT_PATH, XDG_CONFIG_DIRS, LESSOPEN, DBUS_SESSION_BUS_ADDRESS,
_, XDG_SESSION_COOKIE, DESKTOP_SESSION, LESSCLOSE, DEFAULTS_PATH,
UBUNTU_MENUPROXY, OLDPWD, XDG_DATA_DIRS, COLORTERM, LS_COLORS
The majority of this output is specific to environment variables that
are not directly relevant to BitBake. However, the very first
message regarding the :term:`BBPATH` variable and the
``conf/bblayers.conf`` file is relevant.
When you run BitBake, it begins looking for metadata files. The
:term:`BBPATH` variable is what tells BitBake where
@@ -168,14 +179,20 @@ Following is the complete "Hello World" example.
``bitbake`` command again::
$ bitbake
ERROR: Unable to parse /home/scott-lenovo/bitbake/lib/bb/parse/__init__.py
Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 127, in resolve_file(fn='conf/bitbake.conf', d=<bb.data_smart.DataSmart object at 0x7f22919a3df0>):
if not newfn:
> raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath))
fn = newfn
FileNotFoundError: [Errno 2] file conf/bitbake.conf not found in <projectdirectory>
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 173, in parse_config_file
return bb.parse.handle(fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 99, in handle
return h['handle'](fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 120, in handle
abs_fn = resolve_file(fn, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 117, in resolve_file
raise IOError("file %s not found in %s" % (fn, bbpath))
IOError: file conf/bitbake.conf not found in /home/scott-lenovo/hello
ERROR: Unable to parse conf/bitbake.conf: file conf/bitbake.conf not found in /home/scott-lenovo/hello
This sample output shows that BitBake could not find the
``conf/bitbake.conf`` file in the project directory. This file is
@@ -237,14 +254,18 @@ Following is the complete "Hello World" example.
exists, you can run the ``bitbake`` command again::
$ bitbake
ERROR: Unable to parse /home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py
Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 67, in inherit(files=['base'], fn='configuration INHERITs', lineno=0, d=<bb.data_smart.DataSmart object at 0x7fab6815edf0>):
if not os.path.exists(file):
> raise ParseError("Could not inherit file %s" % (file), fn, lineno)
bb.parse.ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 177, in _inherit
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 92, in inherit
include(fn, file, lineno, d, "inherit")
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 100, in include
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
In the sample output,
BitBake could not find the ``classes/base.bbclass`` file. You need
@@ -263,10 +284,7 @@ Following is the complete "Hello World" example.
$ mkdir classes
Move to the ``classes`` directory and then create the
``base.bbclass`` file by inserting this single line::
addtask build
``base.bbclass`` file by inserting this single line: addtask build
The minimal task that BitBake runs is the ``do_build`` task. This is
all the example needs in order to build the project. Of course, the
``base.bbclass`` can have much more depending on which build
@@ -310,19 +328,10 @@ Following is the complete "Hello World" example.
BBFILES += "${LAYERDIR}/*.bb"
BBFILE_COLLECTIONS += "mylayer"
BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
LAYERSERIES_CORENAMES = "hello_world_example"
LAYERSERIES_COMPAT_mylayer = "hello_world_example"
For information on these variables, click on :term:`BBFILES`,
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS`, :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
or :term:`LAYERSERIES_COMPAT` to go to the definitions in the glossary.
.. note::
We are setting both LAYERSERIES_CORENAMES and LAYERSERIES_COMPAT in this particular case, because we
are using bitbake without OpenEmbedded.
You should usually just use LAYERSERIES_COMPAT to specify the OE-Core versions for which your layer
is compatible, and add the meta-openembedded layer to your project.
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS` or :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
to go to the definitions in the glossary.
You need to create the recipe file next. Inside your layer at the
top-level, use an editor and create a recipe file named
@@ -380,14 +389,12 @@ Following is the complete "Hello World" example.
target::
$ bitbake printhello
Loading cache: 100% |
Loaded 0 entries from dependency cache.
Parsing recipes: 100% |##################################################################################|
Time: 00:00:00
Parsing of 1 .bb files complete (0 cached, 1 parsed). 1 targets, 0 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Initialising tasks: 100% |###############################################################################|
NOTE: No setscene tasks
NOTE: Executing Tasks
NOTE: Preparing RunQueue
NOTE: Executing RunQueue Tasks
********************
* *
* Hello, World! *

View File

@@ -319,10 +319,6 @@ The variable ``D`` becomes "dvaladditional data".
You must control all spacing when you use the override syntax.
.. note::
The overrides are applied in this order, ":append", ":prepend", ":remove".
It is also possible to append and prepend to shell functions and
BitBake-style Python functions. See the ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:shell functions`" and ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:bitbake-style python functions`"
sections for examples.
@@ -356,28 +352,6 @@ The variable ``FOO`` becomes
Like ":append" and ":prepend", ":remove" is applied at variable
expansion time.
.. note::
The overrides are applied in this order, ":append", ":prepend", ":remove".
This implies it is not possible to re-append previously removed strings.
However, one can undo a ":remove" by using an intermediate variable whose
content is passed to the ":remove" so that modifying the intermediate
variable equals to keeping the string in::
FOOREMOVE = "123 456 789"
FOO:remove = "${FOOREMOVE}"
...
FOOREMOVE = "123 789"
This expands to ``FOO:remove = "123 789"``.
.. note::
Override application order may not match variable parse history, i.e.
the output of ``bitbake -e`` may contain ":remove" before ":append",
but the result will be removed string, because ":remove" is handled
last.
Override Style Operation Advantages
-----------------------------------
@@ -1496,23 +1470,6 @@ functionality of the task:
directory listed is used as the current working directory for the
task.
- ``[file-checksums]``: Controls the file dependencies for a task. The
baseline file list is the set of files associated with
:term:`SRC_URI`. May be used to set additional dependencies on
files not associated with :term:`SRC_URI`.
The value set to the list is a file-boolean pair where the first
value is the file name and the second is whether or not it
physically exists on the filesystem. ::
do_configure[file-checksums] += "${MY_DIRPATH}/my-file.txt:True"
It is important to record any paths which the task looked at and
which didn't exist. This means that if these do exist at a later
time, the task can be rerun with the new additional files. The
"exists" True or False value after the path allows this to be
handled.
- ``[lockfiles]``: Specifies one or more lockfiles to lock while the
task executes. Only one task may hold a lockfile, and any task that
attempts to lock an already locked file will block until the lock is
@@ -1972,31 +1929,13 @@ looking at the source code of the ``bb`` module, which is in
the commonly used functions ``bb.utils.contains()`` and
``bb.utils.mkdirhier()``, which come with docstrings.
Extending Python Library Code
-----------------------------
If you wish to add your own Python library code (e.g. to provide
functions/classes you can use from Python functions in the metadata)
you can do so from any layer using the ``addpylib`` directive.
This directive is typically added to your layer configuration
(``conf/layer.conf``), although it will be handled in any ``.conf`` file.
Usage is of the form::
addpylib <directory> <namespace>
Where <directory> specifies the directory to add to the library path.
The specified <namespace> is imported automatically, and if the imported
module specifies an attribute named ``BBIMPORTS``, that list of
sub-modules is iterated and imported too.
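
A sketch of the module side of this mechanism, with hypothetical layer
paths and names::

    # lib/acme/__init__.py -- imported automatically after a layer's
    # conf file declares: addpylib ${LAYERDIR}/lib acme
    # BitBake iterates this list and imports acme.buildutils as well.
    BBIMPORTS = ["buildutils"]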
Testing and Debugging BitBake Python code
-----------------------------------------
The OpenEmbedded build system implements a convenient ``pydevshell`` target which
you can use to access the BitBake datastore and experiment with your own Python
code. See :yocto_docs:`Using a Python Development Shell
</dev-manual/python-development-shell.html#using-a-python-development-shell>` in the Yocto
</dev-manual/common-tasks.html#using-a-python-development-shell>` in the Yocto
Project manual for details.
Task Checksums and Setscene

View File

@@ -40,7 +40,8 @@ overview of their function and contents.
Azure Storage Shared Access Signature, when using the
:ref:`Azure Storage fetcher <bitbake-user-manual/bitbake-user-manual-fetching:fetchers>`
This variable can be defined to be used by the fetcher to authenticate
and gain access to non-public artifacts::
and gain access to non-public artifacts.
::
AZ_SAS = ""se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>""
@@ -99,26 +100,10 @@ overview of their function and contents.
the path of the build. BitBake's output should not (and usually does
not) depend on the directory in which it was built.
:term:`BB_CACHEDIR`
Specifies the code parser cache directory (distinct from :term:`CACHE`
and :term:`PERSISTENT_DIR` although they can be set to the same value
if desired). The default value is "${TOPDIR}/cache".
:term:`BB_CHECK_SSL_CERTS`
Specifies if SSL certificates should be checked when fetching. The default
value is ``1`` and certificates are not checked if the value is set to ``0``.
:term:`BB_HASH_CODEPARSER_VALS`
Specifies values for variables to use when populating the codeparser cache.
This can be used selectively to set dummy values for variables to avoid
the codeparser cache growing on every parse. Variables that would typically
be included are those where the value is not significant for where the
codeparser cache is used (i.e. when calculating variable dependencies for
code fragments.) The value is space-separated without quoting values, for
example::
BB_HASH_CODEPARSER_VALS = "T=/ WORKDIR=/ DATE=1234 TIME=1234"
:term:`BB_CONSOLELOG`
Specifies the path to a log file into which BitBake's user interface
writes output during the build.
@@ -359,14 +344,6 @@ overview of their function and contents.
For example usage, see :term:`BB_GIT_SHALLOW`.
:term:`BB_GLOBAL_PYMODULES`
Specifies the list of Python modules to place in the global namespace.
It is intended that only the core layer should set this and it is meant
to be a very small list, typically just ``os`` and ``sys``.
:term:`BB_GLOBAL_PYMODULES` is expected to be set before the first
``addpylib`` directive.
See also ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:extending python library code`".
:term:`BB_HASHCHECK_FUNCTION`
Specifies the name of the function to call during the "setscene" part
of the task's execution in order to validate the list of task hashes.
@@ -1014,7 +991,7 @@ overview of their function and contents.
``bblayers.conf`` configuration file.
To exclude a recipe from a world build using this variable, set the
variable to "1" in the recipe. Set it to "0" to add it back to world build.
variable to "1" in the recipe.
.. note::
@@ -1120,29 +1097,6 @@ overview of their function and contents.
variable is not available outside of ``layer.conf`` and references
are expanded immediately when parsing of the file completes.
:term:`LAYERSERIES_COMPAT`
Lists the versions of the OpenEmbedded-Core (OE-Core) for which
a layer is compatible. Using the :term:`LAYERSERIES_COMPAT` variable
allows the layer maintainer to indicate which combinations of the
layer and OE-Core can be expected to work. The variable gives the
system a way to detect when a layer has not been tested with new
releases of OE-Core (e.g. the layer is not maintained).
To specify the OE-Core versions for which a layer is compatible, use
this variable in your layer's ``conf/layer.conf`` configuration file.
For the list, use the Yocto Project release name (e.g. "kirkstone",
"mickledore"). To specify multiple OE-Core versions for the layer, use
a space-separated list::
LAYERSERIES_COMPAT_layer_root_name = "kirkstone mickledore"
.. note::
Setting :term:`LAYERSERIES_COMPAT` is required by the Yocto Project
Compatible version 2 standard.
The OpenEmbedded build system produces a warning if the variable
is not set for any given layer.
:term:`LAYERVERSION`
Optionally specifies the version of a layer as a single number. You
can use this variable within

View File

@@ -1,57 +1,61 @@
.. SPDX-License-Identifier: CC-BY-2.5
=================================
BitBake Supported Release Manuals
=================================
*****************************
Release Series 4.1 (langdale)
*****************************
- :yocto_docs:`BitBake 2.2 User Manual </bitbake/2.2/>`
*****************************
Release Series 4.0 (kirkstone)
*****************************
- :yocto_docs:`BitBake 2.0 User Manual </bitbake/2.0/>`
****************************
Release Series 3.1 (dunfell)
****************************
- :yocto_docs:`BitBake 1.46 User Manual </bitbake/1.46/>`
================================
BitBake Outdated Release Manuals
================================
===========================
Supported Release Manuals
===========================
******************************
Release Series 3.4 (honister)
******************************
- :yocto_docs:`BitBake 1.52 User Manual </bitbake/1.52/>`
- :yocto_docs:`3.4 BitBake User Manual </3.4/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.4.1 BitBake User Manual </3.4.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.4.2 BitBake User Manual </3.4.2/bitbake-user-manual/bitbake-user-manual.html>`
******************************
Release Series 3.3 (hardknott)
******************************
- :yocto_docs:`BitBake 1.50 User Manual </bitbake/1.50/>`
- :yocto_docs:`3.3 BitBake User Manual </3.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.1 BitBake User Manual </3.3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.2 BitBake User Manual </3.3.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.3 BitBake User Manual </3.3.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.4 BitBake User Manual </3.3.4/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.5 BitBake User Manual </3.3.5/bitbake-user-manual/bitbake-user-manual.html>`
*******************************
Release Series 3.2 (gatesgarth)
*******************************
- :yocto_docs:`BitBake 1.48 User Manual </bitbake/1.48/>`
*******************************************
Release Series 3.1 (dunfell first versions)
*******************************************
****************************
Release Series 3.1 (dunfell)
****************************
- :yocto_docs:`3.1 BitBake User Manual </3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.1 BitBake User Manual </3.1.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.2 BitBake User Manual </3.1.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.3 BitBake User Manual </3.1.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.4 BitBake User Manual </3.1.4/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.5 BitBake User Manual </3.1.5/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.6 BitBake User Manual </3.1.6/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.7 BitBake User Manual </3.1.7/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.8 BitBake User Manual </3.1.8/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.9 BitBake User Manual </3.1.9/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.10 BitBake User Manual </3.1.10/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.11 BitBake User Manual </3.1.11/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.12 BitBake User Manual </3.1.12/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.13 BitBake User Manual </3.1.13/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.14 BitBake User Manual </3.1.14/bitbake-user-manual/bitbake-user-manual.html>`
==========================
Outdated Release Manuals
==========================
*******************************
Release Series 3.2 (gatesgarth)
*******************************
- :yocto_docs:`3.2 BitBake User Manual </3.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.2.1 BitBake User Manual </3.2.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.2.2 BitBake User Manual </3.2.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.2.3 BitBake User Manual </3.2.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.2.4 BitBake User Manual </3.2.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 3.0 (zeus)

View File

@@ -9,11 +9,11 @@
# SPDX-License-Identifier: GPL-2.0-only
#
__version__ = "2.4.0"
__version__ = "2.2.0"
import sys
if sys.version_info < (3, 8, 0):
raise RuntimeError("Sorry, python 3.8.0 or later is required for this version of bitbake")
if sys.version_info < (3, 6, 0):
raise RuntimeError("Sorry, python 3.6.0 or later is required for this version of bitbake")
class BBHandledException(Exception):

View File

@@ -25,7 +25,6 @@ import bb
import bb.msg
import bb.process
import bb.progress
from io import StringIO
from bb import data, event, utils
bblogger = logging.getLogger('BitBake')
@@ -178,9 +177,7 @@ class StdoutNoopContextManager:
@property
def name(self):
if "name" in dir(sys.stdout):
return sys.stdout.name
return "<mem>"
return sys.stdout.name
def exec_func(func, d, dirs = None):
@@ -299,21 +296,9 @@ def exec_func_python(func, d, runfile, cwd=None):
lineno = int(d.getVarFlag(func, "lineno", False))
bb.methodpool.insert_method(func, text, fn, lineno - 1)
if verboseStdoutLogging:
sys.stdout.flush()
sys.stderr.flush()
currout = sys.stdout
currerr = sys.stderr
sys.stderr = sys.stdout = execio = StringIO()
comp = utils.better_compile(code, func, "exec_func_python() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_func_python() autogenerated")
finally:
if verboseStdoutLogging:
execio.flush()
logger.plain("%s" % execio.getvalue())
sys.stdout = currout
sys.stderr = currerr
execio.close()
# We want any stdout/stderr to be printed before any other log messages to make debugging
# more accurate. In some cases we seem to lose stdout/stderr entirely in logging tests without this.
sys.stdout.flush()
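
The removed lines implement a capture-and-replay pattern: point
``sys.stdout``/``sys.stderr`` at a ``StringIO`` buffer while the
function runs, then emit the captured text through the logger. The
pattern in isolation (the helper name is illustrative)::

    import sys
    from io import StringIO

    def run_captured(func, emit):
        # Temporarily redirect stdout/stderr into a buffer, then replay
        # the captured output via 'emit' so log ordering stays accurate.
        saved_out, saved_err = sys.stdout, sys.stderr
        sys.stdout = sys.stderr = buf = StringIO()
        try:
            func()
        finally:
            sys.stdout, sys.stderr = saved_out, saved_err
            emit(buf.getvalue())
            buf.close()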
@@ -456,11 +441,7 @@ exit $ret
if fakerootcmd:
cmd = [fakerootcmd, runfile]
# We only want to output to logger via LogTee if stdout is sys.__stdout__ (which will either
# be real stdout or subprocess PIPE or similar). In other cases we are being run "recursively",
# ie. inside another function, in which case stdout is already being captured so we don't
# want to Tee here as output would be printed twice, and out of order.
if verboseStdoutLogging and sys.stdout == sys.__stdout__:
if verboseStdoutLogging:
logfile = LogTee(logger, StdoutNoopContextManager())
else:
logfile = StdoutNoopContextManager()
@@ -591,6 +572,7 @@ def _task_data(fn, task, d):
localdata.setVar('BB_FILENAME', fn)
localdata.setVar('OVERRIDES', 'task-%s:%s' %
(task[3:].replace('_', '-'), d.getVar('OVERRIDES', False)))
localdata.finalize()
bb.data.expandKeys(localdata)
return localdata
@@ -791,7 +773,44 @@ def exec_task(fn, task, d, profile = False):
event.fire(failedevent, d)
return 1
def _get_cleanmask(taskname, mcfn):
def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
"""
Internal stamp helper function
Makes sure the stamp directory exists
Returns the stamp path+filename
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store, file_name will not be set
"""
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMP')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
if baseonly:
return stamp
if noextra:
extrainfo = ""
if not stamp:
return
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
stampdir = os.path.dirname(stamp)
if cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
return stamp
def stamp_cleanmask_internal(taskname, d, file_name):
"""
Internal stamp helper function to generate stamp cleaning mask
Returns the stamp path+filename
@@ -799,14 +818,27 @@ def _get_cleanmask(taskname, mcfn):
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store, file_name will not be set
"""
cleanmask = bb.parse.siggen.stampcleanmask_mcfn(taskname, mcfn)
taskflagname = taskname.replace("_setscene", "")
if cleanmask:
return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
return []
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
def clean_stamp_mcfn(task, mcfn):
cleanmask = _get_cleanmask(task, mcfn)
if file_name:
stamp = d.stampclean[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMPCLEAN')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
if not stamp:
return []
cleanmask = bb.parse.siggen.stampcleanmask(stamp, file_name, taskname, extrainfo)
return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
def clean_stamp(task, d, file_name = None):
cleanmask = stamp_cleanmask_internal(task, d, file_name)
for mask in cleanmask:
for name in glob.glob(mask):
# Preserve sigdata files in the stamps directory
@@ -816,46 +848,33 @@ def clean_stamp_mcfn(task, mcfn):
if name.endswith('.taint'):
continue
os.unlink(name)
return
def clean_stamp(task, d):
mcfn = d.getVar('BB_FILENAME')
clean_stamp_mcfn(task, mcfn)
def make_stamp_mcfn(task, mcfn):
basestamp = bb.parse.siggen.stampfile_mcfn(task, mcfn)
stampdir = os.path.dirname(basestamp)
if cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
clean_stamp_mcfn(task, mcfn)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if basestamp:
bb.utils.remove(basestamp)
open(basestamp, "w").close()
def make_stamp(task, d):
def make_stamp(task, d, file_name = None):
"""
Creates/updates a stamp for a given task
(d can be a data dict or dataCache)
"""
mcfn = d.getVar('BB_FILENAME')
clean_stamp(task, d, file_name)
make_stamp_mcfn(task, mcfn)
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if stamp:
bb.utils.remove(stamp)
open(stamp, "w").close()
# If we're in task context, write out a signature file for each task
# as it completes
if not task.endswith("_setscene"):
stampbase = bb.parse.siggen.stampfile_base(mcfn)
bb.parse.siggen.dump_sigtask(mcfn, task, stampbase, True)
if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
stampbase = stamp_internal(task, d, None, True)
file_name = d.getVar('BB_FILENAME')
bb.parse.siggen.dump_sigtask(file_name, task, stampbase, True)
def find_stale_stamps(task, mcfn):
current = bb.parse.siggen.stampfile_mcfn(task, mcfn)
current2 = bb.parse.siggen.stampfile_mcfn(task + "_setscene", mcfn)
cleanmask = _get_cleanmask(task, mcfn)
def find_stale_stamps(task, d, file_name=None):
current = stamp_internal(task, d, file_name)
current2 = stamp_internal(task + "_setscene", d, file_name)
cleanmask = stamp_cleanmask_internal(task, d, file_name)
found = []
for mask in cleanmask:
for name in glob.glob(mask):
@@ -869,14 +888,38 @@ def find_stale_stamps(task, mcfn):
found.append(name)
return found
def write_taint(task, d):
def del_stamp(task, d, file_name = None):
"""
Removes a stamp for a given task
(d can be a data dict or dataCache)
"""
stamp = stamp_internal(task, d, file_name)
bb.utils.remove(stamp)
def write_taint(task, d, file_name = None):
"""
Creates a "taint" file which will force the specified task and its
dependents to be re-run the next time by influencing the value of its
taskhash.
(d can be a data dict or dataCache)
"""
mcfn = d.getVar('BB_FILENAME')
bb.parse.siggen.invalidate_task(task, mcfn)
import uuid
if file_name:
taintfn = d.stamp[file_name] + '.' + task + '.taint'
else:
taintfn = d.getVar('STAMP') + '.' + task + '.taint'
bb.utils.mkdirhier(os.path.dirname(taintfn))
# The specific content of the taint file is not really important,
# we just need it to be random, so a random UUID is used
with open(taintfn, 'w') as taintf:
taintf.write(str(uuid.uuid4()))
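
The taint mechanism above only needs the file content to differ from
run to run. The core idea reduced to a standalone sketch (paths are
illustrative)::

    import os
    import uuid

    def write_taint_file(stamp, task):
        # Any random content works: the taint file feeds into the task
        # hash, so changing it forces the task and its dependents to
        # re-run.
        taintfn = "%s.%s.taint" % (stamp, task)
        os.makedirs(os.path.dirname(taintfn) or ".", exist_ok=True)
        with open(taintfn, "w") as taintf:
            taintf.write(str(uuid.uuid4()))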
def stampfile(taskname, d, file_name = None, noextra=False):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name, noextra=noextra)
def add_tasks(tasklist, d):
task_deps = d.getVar('_task_deps', False)

View File

@@ -28,7 +28,7 @@ import shutil
logger = logging.getLogger("BitBake.Cache")
__cache_version__ = "155"
__cache_version__ = "154"
def getCacheFile(path, filename, mc, data_hash):
mcspec = ''
@@ -105,7 +105,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.tasks = metadata.getVar('__BBTASKS', False)
self.basetaskhashes = metadata.getVar('__siggen_basehashes', False) or {}
self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)
self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}
@@ -216,7 +216,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
# Collect files we may need for possible world-dep
# calculations
if not bb.utils.to_boolean(self.not_world):
if not self.not_world:
cachedata.possible_world.append(fn)
#else:
# logger.debug2("EXCLUDE FROM WORLD: %s", fn)
@@ -238,106 +238,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootlogs[fn] = self.fakerootlogs
cachedata.extradepsfunc[fn] = self.extradepsfunc
class SiggenRecipeInfo(RecipeInfoCommon):
__slots__ = ()
classname = "SiggenRecipeInfo"
cachefile = "bb_cache_" + classname +".dat"
# we don't want to show this information in graph files so don't set cachefields
#cachefields = []
def __init__(self, filename, metadata):
self.siggen_gendeps = metadata.getVar("__siggen_gendeps", False)
self.siggen_varvals = metadata.getVar("__siggen_varvals", False)
self.siggen_taskdeps = metadata.getVar("__siggen_taskdeps", False)
@classmethod
def init_cacheData(cls, cachedata):
cachedata.siggen_taskdeps = {}
cachedata.siggen_gendeps = {}
cachedata.siggen_varvals = {}
def add_cacheData(self, cachedata, fn):
cachedata.siggen_gendeps[fn] = self.siggen_gendeps
cachedata.siggen_varvals[fn] = self.siggen_varvals
cachedata.siggen_taskdeps[fn] = self.siggen_taskdeps
# The siggen variable data is large and impacts:
# - bitbake's overall memory usage
# - the amount of data sent over IPC between parsing processes and the server
# - the size of the cache files on disk
# - the size of "sigdata" hash information files on disk
# The data consists of strings (some large) or frozenset lists of variables
# As such, we a) deduplicate the data here and b) pass references to the object at second
# access (e.g. over IPC or saving into pickle).
store = {}
save_map = {}
save_count = 1
restore_map = {}
restore_count = {}
@classmethod
def reset(cls):
# Needs to be called before starting new streamed data in a given process
# (e.g. writing out the cache again)
cls.save_map = {}
cls.save_count = 1
cls.restore_map = {}
@classmethod
def _save(cls, deps):
ret = []
if not deps:
return deps
for dep in deps:
fs = deps[dep]
if fs is None:
ret.append((dep, None, None))
elif fs in cls.save_map:
ret.append((dep, None, cls.save_map[fs]))
else:
cls.save_map[fs] = cls.save_count
ret.append((dep, fs, cls.save_count))
cls.save_count = cls.save_count + 1
return ret
@classmethod
def _restore(cls, deps, pid):
ret = {}
if not deps:
return deps
if pid not in cls.restore_map:
cls.restore_map[pid] = {}
map = cls.restore_map[pid]
for dep, fs, mapnum in deps:
if fs is None and mapnum is None:
ret[dep] = None
elif fs is None:
ret[dep] = map[mapnum]
else:
try:
fs = cls.store[fs]
except KeyError:
cls.store[fs] = fs
map[mapnum] = fs
ret[dep] = fs
return ret
def __getstate__(self):
ret = {}
for key in ["siggen_gendeps", "siggen_taskdeps", "siggen_varvals"]:
ret[key] = self._save(self.__dict__[key])
ret['pid'] = os.getpid()
return ret
def __setstate__(self, state):
pid = state['pid']
for key in ["siggen_gendeps", "siggen_taskdeps", "siggen_varvals"]:
setattr(self, key, self._restore(state[key], pid))
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
@@ -380,18 +280,75 @@ def variant2virtual(realfn, variant):
return "mc:" + elems[1] + ":" + realfn
return "virtual:" + variant + ":" + realfn
#
# Cooker calls cacheValid on its recipe list, then either calls loadCached
# from its main thread or parses from separate processes to generate an up to
# date cache
#
class Cache(object):
def parse_recipe(bb_data, bbfile, appends, mc=''):
"""
Parse a recipe
"""
bb_data.setVar("__BBMULTICONFIG", mc)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
bb.parse.cached_mtime_noerror(bbfile_loc)
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
return bb_data
class NoCache(object):
def __init__(self, databuilder):
self.databuilder = databuilder
self.data = databuilder.data
def loadDataFull(self, virtualfn, appends):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug("Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
def load_bbfile(self, bbfile, appends, virtonly = False, mc=None):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = virtualfn2realfn(bbfile)
bb_data = self.databuilder.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = parse_recipe(bb_data, bbfile, appends, mc)
return datastores
if mc is not None:
bb_data = self.databuilder.mcdata[mc].createCopy()
return parse_recipe(bb_data, bbfile, appends, mc)
bb_data = self.data.createCopy()
datastores = parse_recipe(bb_data, bbfile, appends)
for mc in self.databuilder.mcdata:
if not mc:
continue
bb_data = self.databuilder.mcdata[mc].createCopy()
newstores = parse_recipe(bb_data, bbfile, appends, mc)
for ns in newstores:
datastores["mc:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
class Cache(NoCache):
"""
BitBake Cache implementation
"""
def __init__(self, databuilder, mc, data_hash, caches_array):
self.databuilder = databuilder
self.data = databuilder.data
super().__init__(databuilder)
data = databuilder.data
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
@@ -399,7 +356,7 @@ class Cache(object):
self.mc = mc
self.logger = PrefixLoggerAdapter("Cache: %s: " % (mc if mc else "default"), logger)
self.caches_array = caches_array
self.cachedir = self.data.getVar("CACHE")
self.cachedir = data.getVar("CACHE")
self.clean = set()
self.checked = set()
self.depends_cache = {}
@@ -409,12 +366,20 @@ class Cache(object):
self.filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
if self.cachedir in [None, '']:
bb.fatal("Please ensure CACHE is set to the cache directory for BitBake to use")
self.has_cache = False
self.logger.info("Not using a cache. "
"Set CACHE = <directory> to enable.")
return
self.has_cache = True
def getCacheFile(self, cachefile):
return getCacheFile(self.cachedir, cachefile, self.mc, self.data_hash)
def prepare_cache(self, progress):
if not self.has_cache:
return 0
loaded = 0
self.cachefile = self.getCacheFile("bb_cache.dat")
@@ -453,6 +418,9 @@ class Cache(object):
return loaded
def cachesize(self):
if not self.has_cache:
return 0
cachesize = 0
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
@@ -518,7 +486,7 @@ class Cache(object):
"""Parse the specified filename, returning the recipe information"""
self.logger.debug("Parsing %s", filename)
infos = []
datastores = self.databuilder.parseRecipeVariants(filename, appends, mc=self.mc)
datastores = self.load_bbfile(filename, appends, mc=self.mc)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
@@ -540,19 +508,43 @@ class Cache(object):
return infos
def loadCached(self, filename, appends):
def load(self, filename, appends):
"""Obtain the recipe information for the specified filename,
using cached values.
"""
using cached values if available, otherwise parsing.
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
Note that if it does parse to obtain the info, it will not
automatically add the information to the cache or to your
CacheData. Use the add or add_info method to do so after
running this, or use loadData instead."""
cached = self.cacheValid(filename, appends)
if cached:
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
return self.parse(filename, appends, configdata, self.caches_array)
return infos
return cached, infos
def loadData(self, fn, appends, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends)
for virtualfn, info_array in infos:
if info_array[0].skipped:
self.logger.debug("Skipping %s: %s", virtualfn, info_array[0].skipreason)
skipped += 1
else:
self.add_info(virtualfn, info_array, cacheData, not cached)
virtuals += 1
return cached, skipped, virtuals
def cacheValid(self, fn, appends):
"""
@@ -561,6 +553,10 @@ class Cache(object):
"""
if fn not in self.checked:
self.cacheValidUpdate(fn, appends)
# Is cache enabled?
if not self.has_cache:
return False
if fn in self.clean:
return True
return False
@@ -570,6 +566,10 @@ class Cache(object):
Is the cache valid for fn?
Make thorough (slower) checks including timestamps.
"""
# Is cache enabled?
if not self.has_cache:
return False
self.checked.add(fn)
# File isn't in depends_cache
@@ -676,6 +676,10 @@ class Cache(object):
Save the cache
Called from the parser when complete (or exiting)
"""
if not self.has_cache:
return
if self.cacheclean:
self.logger.debug2("Cache is clean, not saving.")
return
@@ -696,7 +700,6 @@ class Cache(object):
p.dump(info)
del self.depends_cache
SiggenRecipeInfo.reset()
@staticmethod
def mtime(cachefile):
@@ -719,11 +722,26 @@ class Cache(object):
if watcher:
watcher(info_array[0].file_depends)
if not self.has_cache:
return
if (info_array[0].skipped or 'SRCREVINACTION' not in info_array[0].pv) and not info_array[0].nocache:
if parsed:
self.cacheclean = False
self.depends_cache[filename] = info_array
def add(self, file_name, data, cacheData, parsed=None):
"""
Save data we need into the cache
"""
realfn = virtualfn2realfn(file_name)[0]
info_array = []
for cache_class in self.caches_array:
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
class MulticonfigCache(Mapping):
def __init__(self, databuilder, data_hash, caches_array):
def progress(p):
@@ -760,7 +778,6 @@ class MulticonfigCache(Mapping):
loaded = 0
for c in self.__caches.values():
SiggenRecipeInfo.reset()
loaded += c.prepare_cache(progress)
previous_progress = current_progress
@@ -838,10 +855,11 @@ class MultiProcessCache(object):
self.cachedata = self.create_cachedata()
self.cachedata_extras = self.create_cachedata()
def init_cache(self, cachedir, cache_file_name=None):
if not cachedir:
def init_cache(self, d, cache_file_name=None):
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
@@ -872,10 +890,6 @@ class MultiProcessCache(object):
if not self.cachefile:
return
have_data = any(self.cachedata_extras)
if not have_data:
return
glf = bb.utils.lockfile(self.cachefile + ".lock", shared=True)
i = os.getpid()
@@ -910,8 +924,6 @@ class MultiProcessCache(object):
data = self.cachedata
have_data = False
for f in [y for y in os.listdir(os.path.dirname(self.cachefile)) if y.startswith(os.path.basename(self.cachefile) + '-')]:
f = os.path.join(os.path.dirname(self.cachefile), f)
try:
@@ -926,14 +938,12 @@ class MultiProcessCache(object):
os.unlink(f)
continue
have_data = True
self.merge_data(extradata, data)
os.unlink(f)
if have_data:
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(glf)

View File

@@ -27,7 +27,6 @@ import ast
import sys
import codegen
import logging
import inspect
import bb.pysh as pysh
import bb.utils, bb.data
import hashlib
@@ -59,39 +58,10 @@ def check_indent(codestr):
return codestr
modulecode_deps = {}
def add_module_functions(fn, functions, namespace):
fstat = os.stat(fn)
fixedhash = fn + ":" + str(fstat.st_size) + ":" + str(fstat.st_mtime)
for f in functions:
name = "%s.%s" % (namespace, f)
parser = PythonParser(name, logger)
try:
parser.parse_python(None, filename=fn, lineno=1, fixedhash=fixedhash+f)
#bb.warn("Cached %s" % f)
except KeyError:
lines, lineno = inspect.getsourcelines(functions[f])
src = "".join(lines)
parser.parse_python(src, filename=fn, lineno=lineno, fixedhash=fixedhash+f)
#bb.warn("Not cached %s" % f)
execs = parser.execs.copy()
# Expand internal module exec references
for e in parser.execs:
if e in functions:
execs.remove(e)
execs.add(namespace + "." + e)
modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy()]
#bb.warn("%s: %s\nRefs:%s Execs: %s %s %s" % (name, src, parser.references, parser.execs, parser.var_execs, parser.contains))
def update_module_dependencies(d):
for mod in modulecode_deps:
excludes = set((d.getVarFlag(mod, "vardepsexclude") or "").split())
if excludes:
modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3]]
# A custom getstate/setstate using tuples is actually worth 15% cachesize by
# avoiding duplication of the attribute names!
class SetCache(object):
def __init__(self):
self.setcache = {}
@@ -184,12 +154,12 @@ class CodeParserCache(MultiProcessCache):
self.shellcachelines[h] = cacheline
return cacheline
def init_cache(self, cachedir):
def init_cache(self, d):
# Check if we already have the caches
if self.pythoncache:
return
MultiProcessCache.init_cache(self, cachedir)
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
self.pythoncache = self.cachedata[0]
@@ -201,8 +171,8 @@ class CodeParserCache(MultiProcessCache):
codeparsercache = CodeParserCache()
def parser_cache_init(cachedir):
codeparsercache.init_cache(cachedir)
def parser_cache_init(d):
codeparsercache.init_cache(d)
def parser_cache_save():
codeparsercache.save_extras()
@@ -319,17 +289,11 @@ class PythonParser():
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
# For the python module code it is expensive to have the function text so it
# uses a different fixedhash to cache against. We can take the hit on obtaining the
# text if it isn't in the cache.
def parse_python(self, node, lineno=0, filename="<string>", fixedhash=None):
if not fixedhash and (not node or not node.strip()):
def parse_python(self, node, lineno=0, filename="<string>"):
if not node or not node.strip():
return
if fixedhash:
h = fixedhash
else:
h = bbhash(str(node))
h = bbhash(str(node))
if h in codeparsercache.pythoncache:
self.references = set(codeparsercache.pythoncache[h].refs)
@@ -347,9 +311,6 @@ class PythonParser():
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
return
if fixedhash and not node:
raise KeyError
# Need to parse so take the hit on the real log buffer
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, self._log)
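
The add_module_functions() change above keys the parser cache on file metadata rather than function text: fn:size:mtime plus the function name is enough to look up a previous parse, and inspect.getsourcelines() is only paid on a cache miss. A sketch of that idea, with parse() as a hypothetical stand-in for the real PythonParser work:

import inspect
import os

def fixed_key(fn, funcname):
    # Changes whenever the module file changes, without reading source.
    st = os.stat(fn)
    return "%s:%d:%s:%s" % (fn, st.st_size, st.st_mtime, funcname)

def parse_with_cache(cache, fn, funcname, func, parse):
    key = fixed_key(fn, funcname)
    if key in cache:
        return cache[key]                 # cheap hit: no source text needed
    lines, lineno = inspect.getsourcelines(func)   # expensive miss
    cache[key] = parse("".join(lines), lineno)
    return cache[key]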


@@ -51,17 +51,16 @@ class Command:
"""
A queue of asynchronous commands for bitbake
"""
def __init__(self, cooker, process_server):
def __init__(self, cooker):
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
self.remotedatastores = None
self.process_server = process_server
# Access with locking using process_server.{get/set/clear}_async_cmd()
# FIXME Add lock for this
self.currentAsyncCommand = None
def runCommand(self, commandline, process_server, ro_only=False):
def runCommand(self, commandline, ro_only = False):
command = commandline.pop(0)
# Ensure cooker is ready for commands
@@ -85,7 +84,7 @@ class Command:
if not hasattr(command_method, 'readonly') or not getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
self.cooker.process_inotify_updates_apply()
self.cooker.process_inotify_updates()
if getattr(command_method, 'needconfig', True):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
@@ -100,24 +99,24 @@ class Command:
return None, traceback.format_exc()
else:
return result, None
if self.currentAsyncCommand is not None:
return None, "Busy (%s in progress)" % self.currentAsyncCommand[0]
if command not in CommandsAsync.__dict__:
return None, "No such command"
if not process_server.set_async_cmd((command, commandline)):
return None, "Busy (%s in progress)" % self.process_server.get_async_cmd()[0]
self.cooker.idleCallBackRegister(self.runAsyncCommand, process_server)
self.currentAsyncCommand = (command, commandline)
self.cooker.idleCallBackRegister(self.cooker.runCommands, self.cooker)
return True, None
def runAsyncCommand(self, _, process_server, halt):
def runAsyncCommand(self):
try:
self.cooker.process_inotify_updates_apply()
self.cooker.process_inotify_updates()
if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
# updateCache will trigger a shutdown of the parser
# and then raise BBHandledException triggering an exit
self.cooker.updateCache()
return bb.server.process.idleFinish("Cooker in error state")
cmd = process_server.get_async_cmd()
if cmd is not None:
(command, options) = cmd
return False
if self.currentAsyncCommand is not None:
(command, options) = self.currentAsyncCommand
commandmethod = getattr(CommandsAsync, command)
needcache = getattr( commandmethod, "needcache" )
if needcache and self.cooker.state != bb.cooker.state.running:
@@ -127,21 +126,24 @@ class Command:
commandmethod(self.cmds_async, self, options)
return False
else:
return bb.server.process.idleFinish("Nothing to do, no async command?")
return False
except KeyboardInterrupt as exc:
return bb.server.process.idleFinish("Interrupted")
self.finishAsyncCommand("Interrupted")
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, str):
return bb.server.process.idleFinish(arg)
self.finishAsyncCommand(arg)
else:
return bb.server.process.idleFinish("Exited with %s" % arg)
self.finishAsyncCommand("Exited with %s" % arg)
return False
except Exception as exc:
import traceback
if isinstance(exc, bb.BBHandledException):
return bb.server.process.idleFinish("")
self.finishAsyncCommand("")
else:
return bb.server.process.idleFinish(traceback.format_exc())
self.finishAsyncCommand(traceback.format_exc())
return False
def finishAsyncCommand(self, msg=None, code=None):
if msg or msg == "":
@@ -150,8 +152,8 @@ class Command:
bb.event.fire(CommandExit(code), self.cooker.data)
else:
bb.event.fire(CommandCompleted(), self.cooker.data)
self.currentAsyncCommand = None
self.cooker.finishcommand()
self.process_server.clear_async_cmd()
def reset(self):
if self.remotedatastores:
@@ -164,12 +166,6 @@ class CommandsSync:
These must not influence any running synchronous command.
"""
def ping(self, command, params):
"""
Allow a UI to check the server is still alive
"""
return "Still alive!"
def stateShutdown(self, command, params):
"""
Trigger cooker 'shutdown' mode
@@ -568,10 +564,11 @@ class CommandsSync:
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = command.cooker.databuilder._parse_recipe(config_data, fn, appendfiles, mc)['']
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles, mc)['']
else:
# Use the standard path
envdata = command.cooker.databuilder.parseRecipe(fn, appendfiles)
parser = bb.cache.NoCache(command.cooker.databuilder)
envdata = parser.loadDataFull(fn, appendfiles)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
@@ -744,7 +741,7 @@ class CommandsAsync:
"""
event = params[0]
bb.event.fire(eval(event), command.cooker.data)
process_server.clear_async_cmd()
command.currentAsyncCommand = None
triggerEvent.needcache = False
def resetCooker(self, command, params):
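
The Command hunks above trade a bare currentAsyncCommand attribute (with its FIXME about locking) for set/get/clear accessors on the process server, making the busy check and the assignment one atomic step. A sketch of what such a lock-protected holder can look like; the names mirror the diff, but the real server-side implementation may differ:

import threading

class AsyncCmdHolder:
    def __init__(self):
        self._lock = threading.Lock()
        self._cmd = None

    def set_async_cmd(self, cmd):
        # Atomic test-and-set: returns False if a command is in progress.
        with self._lock:
            if self._cmd is not None:
                return False
            self._cmd = cmd
            return True

    def get_async_cmd(self):
        with self._lock:
            return self._cmd

    def clear_async_cmd(self):
        with self._lock:
            self._cmd = None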


@@ -80,7 +80,7 @@ class SkippedPackage:
class CookerFeatures(object):
_feature_list = [HOB_EXTRA_CACHES, BASEDATASTORE_TRACKING, SEND_SANITYEVENTS, RECIPE_SIGGEN_INFO] = list(range(4))
_feature_list = [HOB_EXTRA_CACHES, BASEDATASTORE_TRACKING, SEND_SANITYEVENTS] = list(range(3))
def __init__(self):
self._features=set()
@@ -149,7 +149,7 @@ class BBCooker:
Manages one bitbake build run
"""
def __init__(self, featureSet=None, server=None):
def __init__(self, featureSet=None, idleCallBackRegister=None):
self.recipecaches = None
self.eventlog = None
self.skiplist = {}
@@ -163,12 +163,7 @@ class BBCooker:
self.configuration = bb.cookerdata.CookerConfiguration()
self.process_server = server
self.idleCallBackRegister = None
self.waitIdle = None
if server:
self.idleCallBackRegister = server.register_idle_function
self.waitIdle = server.wait_for_idle
self.idleCallBackRegister = idleCallBackRegister
bb.debug(1, "BBCooker starting %s" % time.time())
sys.stdout.flush()
@@ -194,6 +189,12 @@ class BBCooker:
self.inotify_modified_files = []
def _process_inotify_updates(server, cooker, halt):
cooker.process_inotify_updates()
return 1.0
self.idleCallBackRegister(_process_inotify_updates, self)
# TOSTOP must not be set or our children will hang when they output
try:
fd = sys.stdout.fileno()
@@ -207,7 +208,7 @@ class BBCooker:
except UnsupportedOperation:
pass
self.command = bb.command.Command(self, self.process_server)
self.command = bb.command.Command(self)
self.state = state.initial
self.parser = None
@@ -219,8 +220,6 @@ class BBCooker:
bb.debug(1, "BBCooker startup complete %s" % time.time())
sys.stdout.flush()
self.inotify_threadlock = threading.Lock()
def init_configdata(self):
if not hasattr(self, "data"):
self.initConfigurationData()
@@ -229,40 +228,31 @@ class BBCooker:
self.handlePRServ()
def setupConfigWatcher(self):
with bb.utils.lock_timeout(self.inotify_threadlock):
if self.configwatcher:
self.configwatcher.close()
self.confignotifier = None
self.configwatcher = None
self.configwatcher = pyinotify.WatchManager()
self.configwatcher.bbseen = set()
self.configwatcher.bbwatchedfiles = set()
self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
if self.configwatcher:
self.configwatcher.close()
self.confignotifier = None
self.configwatcher = None
self.configwatcher = pyinotify.WatchManager()
self.configwatcher.bbseen = set()
self.configwatcher.bbwatchedfiles = set()
self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
def setupParserWatcher(self):
with bb.utils.lock_timeout(self.inotify_threadlock):
if self.watcher:
self.watcher.close()
self.notifier = None
self.watcher = None
self.watcher = pyinotify.WatchManager()
self.watcher.bbseen = set()
self.watcher.bbwatchedfiles = set()
self.notifier = pyinotify.Notifier(self.watcher, self.notifications)
if self.watcher:
self.watcher.close()
self.notifier = None
self.watcher = None
self.watcher = pyinotify.WatchManager()
self.watcher.bbseen = set()
self.watcher.bbwatchedfiles = set()
self.notifier = pyinotify.Notifier(self.watcher, self.notifications)
def process_inotify_updates(self):
with bb.utils.lock_timeout(self.inotify_threadlock):
for n in [self.confignotifier, self.notifier]:
if n and n.check_events(timeout=0):
# read notified events and enqueue them
n.read_events()
def process_inotify_updates_apply(self):
with bb.utils.lock_timeout(self.inotify_threadlock):
for n in [self.confignotifier, self.notifier]:
if n and n.check_events(timeout=0):
n.read_events()
n.process_events()
for n in [self.confignotifier, self.notifier]:
if n and n.check_events(timeout=0):
# read notified events and enqueue them
n.read_events()
n.process_events()
def config_notifications(self, event):
if event.maskname == "IN_Q_OVERFLOW":
@@ -339,21 +329,12 @@ class BBCooker:
providerlog.error("Root privilege is required to modify max_user_watches.")
raise
def handle_inotify_updates(self):
# reload files for which we got notifications
for p in self.inotify_modified_files:
bb.parse.update_cache(p)
if p in bb.parse.BBHandler.cached_statements:
del bb.parse.BBHandler.cached_statements[p]
self.inotify_modified_files = []
def sigterm_exception(self, signum, stackframe):
if signum == signal.SIGTERM:
bb.warn("Cooker received SIGTERM, shutting down...")
elif signum == signal.SIGHUP:
bb.warn("Cooker received SIGHUP, shutting down...")
self.state = state.forceshutdown
bb.event._should_exit.set()
def setFeatures(self, features):
# we only accept a new feature set if we're in state initial, so we can reset without problems
@@ -376,7 +357,6 @@ class BBCooker:
if mod not in self.orig_sysmodules:
del sys.modules[mod]
self.handle_inotify_updates()
self.setupConfigWatcher()
# Need to preserve BB_CONSOLELOG over resets
@@ -387,12 +367,12 @@ class BBCooker:
if CookerFeatures.BASEDATASTORE_TRACKING in self.featureset:
self.enableDataTracking()
caches_name_array = ['bb.cache:CoreRecipeInfo']
all_extra_cache_names = []
# We hardcode all known cache types in a single place, here.
if CookerFeatures.HOB_EXTRA_CACHES in self.featureset:
caches_name_array.append("bb.cache_extra:HobRecipeInfo")
if CookerFeatures.RECIPE_SIGGEN_INFO in self.featureset:
caches_name_array.append("bb.cache:SiggenRecipeInfo")
all_extra_cache_names.append("bb.cache_extra:HobRecipeInfo")
caches_name_array = ['bb.cache:CoreRecipeInfo'] + all_extra_cache_names
# At least CoreRecipeInfo will be loaded, so caches_array will never be empty!
# This is the entry point, no further check needed!
@@ -420,7 +400,9 @@ class BBCooker:
self.disableDataTracking()
for mc in self.databuilder.mcdata.values():
mc.renameVar("__depends", "__base_depends")
self.add_filewatch(mc.getVar("__base_depends", False), self.configwatcher)
mc.setVar("__bbclasstype", "recipe")
self.baseconfig_valid = True
self.parsecache_valid = False
@@ -454,8 +436,10 @@ class BBCooker:
upstream=upstream,
)
self.hashserv.serve_as_process()
self.data.setVar("BB_HASHSERVE", self.hashservaddr)
self.databuilder.origdata.setVar("BB_HASHSERVE", self.hashservaddr)
self.databuilder.data.setVar("BB_HASHSERVE", self.hashservaddr)
for mc in self.databuilder.mcdata:
self.databuilder.mcorigdata[mc].setVar("BB_HASHSERVE", self.hashservaddr)
self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.hashservaddr)
bb.parse.init_parser(self.data)
@@ -551,6 +535,15 @@ class BBCooker:
logger.debug("Base environment change, triggering reparse")
self.reset()
def runCommands(self, server, data, halt):
"""
Run any queued asynchronous command
This is done by the idle handler so it runs in true context rather than
tied to any UI.
"""
return self.command.runAsyncCommand()
def showVersions(self):
(latest_versions, preferred_versions, required) = self.findProviders()
@@ -624,7 +617,8 @@ class BBCooker:
if fn:
try:
envdata = self.databuilder.parseRecipe(fn, self.collections[mc].get_file_appends(fn))
bb_caches = bb.cache.MulticonfigCache(self.databuilder, self.data_hash, self.caches_array)
envdata = bb_caches[mc].loadDataFull(fn, self.collections[mc].get_file_appends(fn))
except Exception as e:
parselog.exception("Unable to read %s", fn)
raise
@@ -1455,12 +1449,10 @@ class BBCooker:
self.recipecaches[mc].rundeps[fn] = defaultdict(list)
self.recipecaches[mc].runrecs[fn] = defaultdict(list)
bb.parse.siggen.setup_datacache(self.recipecaches)
# Invalidate task for target if force mode active
if self.configuration.force:
logger.verbose("Invalidate task %s, %s", task, fn)
bb.parse.siggen.invalidate_task(task, fn)
bb.parse.siggen.invalidate_task(task, self.recipecaches[mc], fn)
# Setup taskdata structure
taskdata = {}
@@ -1474,7 +1466,6 @@ class BBCooker:
buildname = self.databuilder.mcdata[mc].getVar("BUILDNAME")
if fireevents:
bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.databuilder.mcdata[mc])
bb.event.enable_heartbeat()
# Execute the runqueue
runlist = [[mc, item, task, fn]]
@@ -1500,21 +1491,22 @@ class BBCooker:
failures += len(exc.args)
retval = False
except SystemExit as exc:
self.command.finishAsyncCommand(str(exc))
if quietlog:
bb.runqueue.logger.setLevel(rqloglevel)
return bb.server.process.idleFinish(str(exc))
return False
if not retval:
if fireevents:
bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, item, failures, interrupted), self.databuilder.mcdata[mc])
bb.event.disable_heartbeat()
self.command.finishAsyncCommand(msg)
# We trashed self.recipecaches above
self.parsecache_valid = False
self.configuration.limited_deps = False
bb.parse.siggen.reset(self.data)
if quietlog:
bb.runqueue.logger.setLevel(rqloglevel)
return bb.server.process.idleFinish(msg)
return False
if retval is True:
return True
return retval
@@ -1530,7 +1522,6 @@ class BBCooker:
msg = None
interrupted = 0
if halt or self.state == state.forceshutdown:
bb.event._should_exit.set()
rq.finish_runqueue(True)
msg = "Forced shutdown"
interrupted = 2
@@ -1545,16 +1536,16 @@ class BBCooker:
failures += len(exc.args)
retval = False
except SystemExit as exc:
return bb.server.process.idleFinish(str(exc))
self.command.finishAsyncCommand(str(exc))
return False
if not retval:
try:
for mc in self.multiconfigs:
bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, targets, failures, interrupted), self.databuilder.mcdata[mc])
finally:
bb.event.disable_heartbeat()
return bb.server.process.idleFinish(msg)
self.command.finishAsyncCommand(msg)
return False
if retval is True:
return True
return retval
@@ -1586,7 +1577,6 @@ class BBCooker:
for mc in self.multiconfigs:
bb.event.fire(bb.event.BuildStarted(buildname, ntargets), self.databuilder.mcdata[mc])
bb.event.enable_heartbeat()
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
if 'universe' in targets:
@@ -1623,7 +1613,12 @@ class BBCooker:
if self.state == state.running:
return
self.handle_inotify_updates()
# reload files for which we got notifications
for p in self.inotify_modified_files:
bb.parse.update_cache(p)
if p in bb.parse.BBHandler.cached_statements:
del bb.parse.BBHandler.cached_statements[p]
self.inotify_modified_files = []
if not self.baseconfig_valid:
logger.debug("Reloading base configuration data")
@@ -1761,28 +1756,22 @@ class BBCooker:
if hasattr(self, "data"):
bb.event.fire(CookerExit(), self.data)
def shutdown(self, force=False):
def shutdown(self, force = False):
if force:
self.state = state.forceshutdown
bb.event._should_exit.set()
else:
self.state = state.shutdown
if self.parser:
self.parser.shutdown(clean=False)
self.parser.shutdown(clean=not force)
self.parser.final_cleanup()
def finishcommand(self):
if hasattr(self.parser, 'shutdown'):
self.parser.shutdown(clean=False)
self.parser.final_cleanup()
self.state = state.initial
bb.event._should_exit.clear()
def reset(self):
if hasattr(bb.parse, "siggen"):
bb.parse.siggen.exit()
self.finishcommand()
self.initConfigurationData()
self.handlePRServ()
@@ -1794,9 +1783,8 @@ class BBCooker:
if hasattr(self, "data"):
self.databuilder.reset()
self.data = self.databuilder.data
# In theory tinfoil could have modified the base data before parsing,
# ideally need to track if anything did modify the datastore
self.parsecache_valid = False
self.baseconfig_valid = False
class CookerExit(bb.event.Event):
@@ -2104,29 +2092,29 @@ class Parser(multiprocessing.Process):
multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
pending = []
havejobs = True
try:
while havejobs or pending:
if self.quit.is_set():
while True:
try:
self.quit.get_nowait()
except queue.Empty:
pass
else:
break
job = None
try:
job = self.jobs.pop()
except IndexError:
havejobs = False
if job:
if pending:
result = pending.pop()
else:
try:
job = self.jobs.pop()
except IndexError:
break
result = self.parse(*job)
# Clear the siggen cache after parsing to control memory usage; it's huge
bb.parse.siggen.postparsing_clean_cache()
try:
self.results.put(result, timeout=0.25)
except queue.Full:
pending.append(result)
if pending:
try:
result = pending.pop()
self.results.put(result, timeout=0.05)
except queue.Full:
pending.append(result)
finally:
self.results.close()
self.results.join_thread()
@@ -2197,7 +2185,6 @@ class CookerParser(object):
self.num_processes = min(int(self.cfgdata.getVar("BB_NUMBER_PARSE_THREADS") or
multiprocessing.cpu_count()), self.toparse)
bb.cache.SiggenRecipeInfo.reset()
self.start()
self.haveshutdown = False
self.syncthread = None
@@ -2208,7 +2195,7 @@ class CookerParser(object):
if self.toparse:
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
self.parser_quit = multiprocessing.Event()
self.parser_quit = multiprocessing.Queue(maxsize=self.num_processes)
self.result_queue = multiprocessing.Queue()
def chunkify(lst,n):
@@ -2223,7 +2210,7 @@ class CookerParser(object):
self.results = itertools.chain(self.results, self.parse_generator())
def shutdown(self, clean=True, eventmsg="Parsing halted due to errors"):
def shutdown(self, clean=True):
if not self.toparse:
return
if self.haveshutdown:
@@ -2238,9 +2225,11 @@ class CookerParser(object):
bb.event.fire(event, self.cfgdata)
else:
bb.event.fire(bb.event.ParseError(eventmsg), self.cfgdata)
bb.error("Parsing halted due to errors, see error messages above")
for process in self.processes:
self.parser_quit.put(None)
# Clean up the queue before calling process.join(), otherwise there might be
# deadlocks.
while True:
@@ -2249,16 +2238,6 @@ class CookerParser(object):
except queue.Empty:
break
def sync_caches():
for c in self.bb_caches.values():
bb.cache.SiggenRecipeInfo.reset()
c.sync()
self.syncthread = threading.Thread(target=sync_caches, name="SyncThread")
self.syncthread.start()
self.parser_quit.set()
for process in self.processes:
process.join(0.5)
@@ -2279,9 +2258,18 @@ class CookerParser(object):
if hasattr(process, "close"):
process.close()
bb.codeparser.parser_cache_save()
self.parser_quit.close()
# Allow data left in the cancel queue to be discarded
self.parser_quit.cancel_join_thread()
def sync_caches():
for c in self.bb_caches.values():
c.sync()
sync = threading.Thread(target=sync_caches, name="SyncThread")
self.syncthread = sync
sync.start()
bb.codeparser.parser_cache_savemerge()
bb.cache.SiggenRecipeInfo.reset()
bb.fetch.fetcher_parse_done()
if self.cooker.configuration.profile:
profiles = []
@@ -2300,8 +2288,8 @@ class CookerParser(object):
def load_cached(self):
for mc, cache, filename, appends in self.fromcache:
infos = cache.loadCached(filename, appends)
yield False, mc, infos
cached, infos = cache.load(filename, appends)
yield not cached, mc, infos
def parse_generator(self):
empty = False
@@ -2356,7 +2344,7 @@ class CookerParser(object):
except bb.parse.ParseError as exc:
self.error += 1
logger.error(str(exc))
self.shutdown(clean=False, eventmsg=str(exc))
self.shutdown(clean=False)
return False
except bb.data_smart.ExpansionError as exc:
self.error += 1
@@ -2399,7 +2387,6 @@ class CookerParser(object):
return True
def reparse(self, filename):
bb.cache.SiggenRecipeInfo.reset()
to_reparse = set()
for mc in self.cooker.multiconfigs:
to_reparse.add((mc, filename, self.cooker.collections[mc].get_file_appends(filename)))
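
The Parser.run() rewrite above is essentially this loop: check a shared quit event each iteration, and buffer results in a local pending list so a full results queue throttles the worker instead of blocking it forever. A condensed, runnable sketch under those assumptions:

import queue

def worker_loop(jobs, results, quit_event, parse):
    # jobs: a list of work items; results: a multiprocessing.Queue;
    # quit_event: a multiprocessing.Event checked every iteration;
    # parse: job -> result (placeholder for the real parse step).
    pending = []
    while (jobs or pending) and not quit_event.is_set():
        if jobs:
            pending.append(parse(jobs.pop()))
        if pending:
            result = pending.pop()
            try:
                # Never block forever on a full queue; retry next loop.
                results.put(result, timeout=0.25)
            except queue.Full:
                pending.append(result)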


@@ -160,7 +160,12 @@ def catch_parse_error(func):
def wrapped(fn, *args):
try:
return func(fn, *args)
except Exception as exc:
except IOError as exc:
import traceback
parselog.critical(traceback.format_exc())
parselog.critical("Unable to parse %s: %s" % (fn, exc))
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as exc:
import traceback
bbdir = os.path.dirname(__file__) + os.sep
@@ -172,11 +177,14 @@ def catch_parse_error(func):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
raise bb.BBHandledException()
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
raise bb.BBHandledException()
return wrapped
@catch_parse_error
def parse_config_file(fn, data, include=True):
return bb.parse.handle(fn, data, include, baseconfig=True)
return bb.parse.handle(fn, data, include)
@catch_parse_error
def _inherit(bbclass, data):
@@ -255,7 +263,6 @@ class CookerDataBuilder(object):
self.mcdata = {}
def parseBaseConfiguration(self, worker=False):
mcdata = {}
data_hash = hashlib.sha256()
try:
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
@@ -263,6 +270,7 @@ class CookerDataBuilder(object):
if self.data.getVar("BB_WORKERCONTEXT", False) is None and not worker:
bb.fetch.fetcher_init(self.data)
bb.parse.init_parser(self.data)
bb.codeparser.parser_cache_init(self.data)
bb.event.fire(bb.event.ConfigParsed(), self.data)
@@ -280,25 +288,29 @@ class CookerDataBuilder(object):
bb.parse.init_parser(self.data)
data_hash.update(self.data.get_hash().encode('utf-8'))
mcdata[''] = self.data
self.mcdata[''] = self.data
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
if config[0].isdigit():
bb.fatal("Multiconfig name '%s' is invalid as multiconfigs cannot start with a digit" % config)
parsed_mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), parsed_mcdata)
mcdata[config] = parsed_mcdata
data_hash.update(parsed_mcdata.get_hash().encode('utf-8'))
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
data_hash.update(mcdata.get_hash().encode('utf-8'))
if multiconfig:
bb.event.fire(bb.event.MultiConfigParsed(mcdata), self.data)
bb.event.fire(bb.event.MultiConfigParsed(self.mcdata), self.data)
self.data_hash = data_hash.hexdigest()
except (SyntaxError, bb.BBHandledException):
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as e:
logger.error(str(e))
raise bb.BBHandledException()
except Exception:
logger.exception("Error parsing configuration files")
raise bb.BBHandledException()
bb.codeparser.update_module_dependencies(self.data)
# Handle obsolete variable names
d = self.data
@@ -319,23 +331,17 @@ class CookerDataBuilder(object):
if issues:
raise bb.BBHandledException()
for mc in mcdata:
mcdata[mc].renameVar("__depends", "__base_depends")
mcdata[mc].setVar("__bbclasstype", "recipe")
# Create a copy so we can reset at a later date when UIs disconnect
self.mcorigdata = mcdata
for mc in mcdata:
self.mcdata[mc] = bb.data.createCopy(mcdata[mc])
self.data = self.mcdata['']
self.origdata = self.data
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def reset(self):
# We may not have run parseBaseConfiguration() yet
if not hasattr(self, 'mcorigdata'):
if not hasattr(self, 'origdata'):
return
for mc in self.mcorigdata:
self.mcdata[mc] = bb.data.createCopy(self.mcorigdata[mc])
self.data = self.mcdata['']
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def _findLayerConf(self, data):
return findConfigFile("bblayers.conf", data)
@@ -356,11 +362,6 @@ class CookerDataBuilder(object):
data.setVar("TOPDIR", os.path.dirname(os.path.dirname(layerconf)))
data = parse_config_file(layerconf, data)
if not data.getVar("BB_CACHEDIR"):
data.setVar("BB_CACHEDIR", "${TOPDIR}/cache")
bb.codeparser.parser_cache_init(data.getVar("BB_CACHEDIR"))
layers = (data.getVar('BBLAYERS') or "").split()
broken_layers = []
@@ -382,8 +383,6 @@ class CookerDataBuilder(object):
parselog.critical("Please check BBLAYERS in %s" % (layerconf))
raise bb.BBHandledException()
layerseries = None
compat_entries = {}
for layer in layers:
parselog.debug2("Adding layer %s", layer)
if 'HOME' in approved and '~' in layer:
@@ -396,27 +395,8 @@ class CookerDataBuilder(object):
data.expandVarref('LAYERDIR')
data.expandVarref('LAYERDIR_RE')
# Sadly we can't have nice things.
# Some layers think they're going to be 'clever' and copy the values from
# another layer, e.g. using ${LAYERSERIES_COMPAT_core}. The whole point of
# this mechanism is to make it clear which releases a layer supports and
# show when a layer master branch is bitrotting and is unmaintained.
# We therefore avoid people doing this here.
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
for c in collections:
compat_entry = data.getVar("LAYERSERIES_COMPAT_%s" % c)
if compat_entry:
compat_entries[c] = set(compat_entry.split())
data.delVar("LAYERSERIES_COMPAT_%s" % c)
if not layerseries:
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
if layerseries:
data.delVar("LAYERSERIES_CORENAMES")
data.delVar('LAYERDIR_RE')
data.delVar('LAYERDIR')
for c in compat_entries:
data.setVar("LAYERSERIES_COMPAT_%s" % c, " ".join(sorted(compat_entries[c])))
bbfiles_dynamic = (data.getVar('BBFILES_DYNAMIC') or "").split()
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
@@ -435,15 +415,13 @@ class CookerDataBuilder(object):
if invalid:
bb.fatal("BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
collections_tmp = collections[:]
for c in collections:
collections_tmp.remove(c)
if c in collections_tmp:
bb.fatal("Found duplicated BBFILE_COLLECTIONS '%s', check bblayers.conf or layer.conf to fix it." % c)
compat = set()
if c in compat_entries:
compat = compat_entries[c]
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
if compat and not layerseries:
bb.fatal("No core layer found to work with layer '%s'. Missing entry in bblayers.conf?" % c)
if compat and not (compat & layerseries):
@@ -452,21 +430,16 @@ class CookerDataBuilder(object):
elif not compat and not data.getVar("BB_WORKERCONTEXT"):
bb.warn("Layer %s should set LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names it is compatible with." % (c, c))
data.setVar("LAYERSERIES_CORENAMES", " ".join(sorted(layerseries)))
if not data.getVar("BBPATH"):
msg = "The BBPATH variable is not set"
if not layerconf:
msg += (" and bitbake did not find a conf/bblayers.conf file in"
" the expected location.\nMaybe you accidentally"
" invoked bitbake from the wrong directory?")
bb.fatal(msg)
raise SystemExit(msg)
if not data.getVar("TOPDIR"):
data.setVar("TOPDIR", os.path.abspath(os.getcwd()))
if not data.getVar("BB_CACHEDIR"):
data.setVar("BB_CACHEDIR", "${TOPDIR}/cache")
bb.codeparser.parser_cache_init(data.getVar("BB_CACHEDIR"))
data = parse_config_file(os.path.join("conf", "bitbake.conf"), data)
@@ -493,54 +466,3 @@ class CookerDataBuilder(object):
return data
@staticmethod
def _parse_recipe(bb_data, bbfile, appends, mc=''):
bb_data.setVar("__BBMULTICONFIG", mc)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
bb.parse.cached_mtime_noerror(bbfile_loc)
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
return bb_data
def parseRecipeVariants(self, bbfile, appends, virtonly=False, mc=None):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = bb.cache.virtualfn2realfn(bbfile)
bb_data = self.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = self._parse_recipe(bb_data, bbfile, appends, mc)
return datastores
if mc is not None:
bb_data = self.mcdata[mc].createCopy()
return self._parse_recipe(bb_data, bbfile, appends, mc)
bb_data = self.data.createCopy()
datastores = self._parse_recipe(bb_data, bbfile, appends)
for mc in self.mcdata:
if not mc:
continue
bb_data = self.mcdata[mc].createCopy()
newstores = self._parse_recipe(bb_data, bbfile, appends, mc)
for ns in newstores:
datastores["mc:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
def parseRecipe(self, virtualfn, appends):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug("Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = bb.cache.virtualfn2realfn(virtualfn)
bb_data = self.parseRecipeVariants(virtualfn, appends, virtonly=True)
return bb_data[virtual]
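
The reset() and parseBaseConfiguration() hunks above keep one pristine datastore per multiconfig and rebuild the working copies from it when a UI disconnects. A toy version of that copy/reset dance, with copy.deepcopy standing in for BitBake's copy-on-write bb.data.createCopy():

import copy

class ConfigStore:
    def __init__(self, parsed_mcdata):
        # parsed_mcdata maps multiconfig name -> parsed datastore,
        # with '' for the default configuration. Never mutated again.
        self.mcorigdata = parsed_mcdata
        self.reset()

    def reset(self):
        self.mcdata = {mc: copy.deepcopy(d)
                       for mc, d in self.mcorigdata.items()}
        self.data = self.mcdata['']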


@@ -4,16 +4,14 @@ BitBake 'Data' implementations
Functions for interacting with the data structure used by the
BitBake build tools.
expandKeys and datastore iteration are the most expensive
operations. Updating overrides is now "on the fly" but still based
on the idea of the cookie monster introduced by zecke:
"At night the cookie monster came by and
The expandKeys and update_data are the most expensive
operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
applying the skills from the not yet passed 'Entwurf und
Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster datastore operations."
to have faster update_data and expandKeys operations.
This is a trade-off between speed and memory again but
the speed is more critical here.
@@ -28,6 +26,11 @@ the speed is more critical here.
import sys, os, re
import hashlib
if sys.argv[0][-5:] == "pydoc":
path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
path = os.path.dirname(os.path.dirname(sys.argv[0]))
sys.path.insert(0, path)
from itertools import groupby
from bb import data_smart
@@ -67,6 +70,10 @@ def keys(d):
"""Return a list of keys in d"""
return d.keys()
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def expand(s, d, varname = None):
"""Variable expansion using the data store"""
return d.expand(s, varname)
@@ -114,8 +121,8 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if d.getVarFlag(var, 'python', False) and func:
return False
export = bb.utils.to_boolean(d.getVarFlag(var, "export"))
unexport = bb.utils.to_boolean(d.getVarFlag(var, "unexport"))
export = d.getVarFlag(var, "export", False)
unexport = d.getVarFlag(var, "unexport", False)
if not all and not export and not unexport and not func:
return False
@@ -188,8 +195,8 @@ def emit_env(o=sys.__stdout__, d = init(), all=False):
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
bb.utils.to_boolean(d.getVarFlag(key, 'export')) and
not bb.utils.to_boolean(d.getVarFlag(key, 'unexport')))
d.getVarFlag(key, 'export', False) and
not d.getVarFlag(key, 'unexport', False))
def exported_vars(d):
k = list(exported_keys(d))
@@ -261,40 +268,13 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps -= seen
def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparsedata):
def handle_contains(value, contains, exclusions, d):
newvalue = []
if value:
newvalue.append(str(value))
for k in sorted(contains):
if k in exclusions or k in ignored_vars:
continue
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue.append("\n%s{%s} = Unset" % (k, item))
break
else:
newvalue.append("\n%s{%s} = Set" % (k, item))
return "".join(newvalue)
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
r2 = d.expandWithRefs(r, None)
value += "\n_remove of %s" % r
deps |= r2.references
deps = deps | (keys & r2.execs)
return value
def update_data(d):
"""Performs final steps upon the datastore, including application of overrides"""
d.finalize(parent = True)
def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
deps = set()
try:
if key in mod_funcs:
exclusions = set()
moddep = bb.codeparser.modulecode_deps[key]
value = handle_contains("", moddep[3], exclusions, d)
return frozenset((moddep[0] | keys & moddep[1]) - ignored_vars), value
if key[-1] == ']':
vf = key[:-1].split('[')
if vf[1] == "vardepvalueexclude":
@@ -302,24 +282,48 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
deps -= ignored_vars
return frozenset(deps), value
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
vardeps = varflags.get("vardeps")
exclusions = varflags.get("vardepsexclude", "").split()
def handle_contains(value, contains, exclusions, d):
newvalue = []
if value:
newvalue.append(str(value))
for k in sorted(contains):
if k in exclusions or k in ignored_vars:
continue
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue.append("\n%s{%s} = Unset" % (k, item))
break
else:
newvalue.append("\n%s{%s} = Set" % (k, item))
return "".join(newvalue)
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
r2 = d.expandWithRefs(r, None)
value += "\n_remove of %s" % r
deps |= r2.references
deps = deps | (keys & r2.execs)
return value
if "vardepvalue" in varflags:
value = varflags.get("vardepvalue")
elif varflags.get("func"):
if varflags.get("python"):
value = codeparsedata.getVarFlag(key, "_content", False)
value = d.getVarFlag(key, "_content", False)
parser = bb.codeparser.PythonParser(key, logger)
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
deps = deps | parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, exclusions, d)
else:
value, parsedvar = codeparsedata.getVarFlag(key, "_content", False, retparser=True)
value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
@@ -361,43 +365,36 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
deps |= set((vardeps or "").split())
deps -= set(exclusions)
deps -= ignored_vars
except bb.parse.SkipRecipe:
raise
except Exception as e:
bb.warn("Exception during build_dependencies for %s" % key)
raise
return frozenset(deps), value
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
def generate_dependencies(d, ignored_vars):
mod_funcs = set(bb.codeparser.modulecode_deps.keys())
keys = set(key for key in d if not key.startswith("__")) | mod_funcs
shelldeps = set(key for key in d.getVar("__exportlist", False) if bb.utils.to_boolean(d.getVarFlag(key, "export")) and not bb.utils.to_boolean(d.getVarFlag(key, "unexport")))
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS')
codeparserd = d.createCopy()
for forced in (d.getVar('BB_HASH_CODEPARSER_VALS') or "").split():
key, value = forced.split("=", 1)
codeparserd.setVar(key, value)
deps = {}
values = {}
tasklist = d.getVar('__BBTASKS', False) or []
for task in tasklist:
deps[task], values[task] = build_dependencies(task, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparserd)
deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, ignored_vars, d)
newdeps = deps[task]
seen = set()
while newdeps:
nextdeps = newdeps
nextdeps = newdeps - ignored_vars
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep not in deps:
deps[dep], values[dep] = build_dependencies(dep, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparserd)
deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, ignored_vars, d)
newdeps |= deps[dep]
newdeps -= seen
#print "For %s: %s" % (task, str(deps[task]))
@@ -416,6 +413,7 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
else:
data = [data]
gendeps[task] -= ignored_vars
newdeps = gendeps[task]
seen = set()
while newdeps:
@@ -423,6 +421,9 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep in ignored_vars:
continue
gendeps[dep] -= ignored_vars
newdeps |= gendeps[dep]
newdeps -= seen
@@ -434,7 +435,7 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
data.append(str(var))
k = fn + ":" + task
basehash[k] = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
taskdeps[task] = frozenset(seen)
taskdeps[task] = alldeps
return taskdeps, basehash
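
handle_contains() above reduces each tracked containment query to a single Set/Unset line in the signature input, so the hash only changes when the outcome of the query changes, not when the variable's full value does. A standalone sketch of that reduction:

def contains_signature(varname, items, value):
    # items: containment queries recorded during parsing, e.g. from
    # bb.utils.contains(varname, item, ...). Only the verdict is hashed.
    words = (value or "").split()
    out = []
    for item in sorted(items):
        state = "Set" if all(w in words for w in item.split()) else "Unset"
        out.append("\n%s{%s} = %s" % (varname, item, state))
    return "".join(out)

# e.g. contains_signature("DISTRO_FEATURES", {"x11 wayland"}, "x11 opengl")
# -> '\nDISTRO_FEATURES{x11 wayland} = Unset'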


@@ -29,7 +29,7 @@ logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = [":append", ":prepend", ":remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__expand_python_regexp__ = re.compile(r"\${@(?:{.*?}|.)+?}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')
@@ -92,11 +92,10 @@ def infer_caller_details(loginfo, parent = False, varval = True):
loginfo['func'] = func
class VariableParse:
def __init__(self, varname, d, unexpanded_value = None, val = None):
def __init__(self, varname, d, val = None):
self.varname = varname
self.d = d
self.value = val
self.unexpanded_value = unexpanded_value
self.references = set()
self.execs = set()
@@ -120,11 +119,6 @@ class VariableParse:
else:
code = match.group()[3:-1]
# Do not run code that contains one or more unexpanded variables;
# instead return the code with the characters we removed put back
if __expand_var_regexp__.findall(code):
return "${@" + code + "}"
if self.varname:
varname = 'Var <%s>' % self.varname
else:
@@ -448,9 +442,9 @@ class DataSmart(MutableMapping):
def expandWithRefs(self, s, varname):
if not isinstance(s, str): # sanity check
return VariableParse(varname, self, s, s)
return VariableParse(varname, self, s)
varparse = VariableParse(varname, self, s)
varparse = VariableParse(varname, self)
while s.find('${') != -1:
olds = s
@@ -482,19 +476,24 @@ class DataSmart(MutableMapping):
def expand(self, s, varname = None):
return self.expandWithRefs(s, varname).value
def finalize(self, parent = False):
return
def internal_finalize(self, parent = False):
"""Performs final steps upon the datastore, including application of overrides"""
self.overrides = None
def need_overrides(self):
if self.overrides is not None:
return
if self.inoverride:
return
override_stack = []
for count in range(5):
self.inoverride = True
# Can end up here recursively so setup dummy values
self.overrides = []
self.overridesset = set()
self.overrides = (self.getVar("OVERRIDES") or "").split(":") or []
override_stack.append(self.overrides)
self.overridesset = set(self.overrides)
self.inoverride = False
self.expand_cache = {}
@@ -504,7 +503,7 @@ class DataSmart(MutableMapping):
self.overrides = newoverrides
self.overridesset = set(self.overrides)
else:
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work. The list of failing override expansions: %s" % "\n".join(str(s) for s in overrride_stack))
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work.")
def initVar(self, var):
self.expand_cache = {}
@@ -515,18 +514,18 @@ class DataSmart(MutableMapping):
dest = self.dict
while dest:
if var in dest:
return dest[var]
return dest[var], self.overridedata.get(var, None)
if "_data" not in dest:
break
dest = dest["_data"]
return None
return None, self.overridedata.get(var, None)
def _makeShadowCopy(self, var):
if var in self.dict:
return
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
if local_var:
self.dict[var] = copy.copy(local_var)
@@ -634,7 +633,7 @@ class DataSmart(MutableMapping):
nextnew.update(vardata.references)
nextnew.update(vardata.contains.keys())
new = nextnew
self.overrides = None
self.internal_finalize(True)
def _setvar_update_overrides(self, var, **loginfo):
# aka pay the cookie monster
@@ -721,7 +720,7 @@ class DataSmart(MutableMapping):
if ':' in var:
override = var[var.rfind(':')+1:]
shortvar = var[:var.rfind(':')]
while override and __override_regexp__.match(override):
while override and override.islower():
try:
if shortvar in self.overridedata:
# Force CoW by recreating the list first
@@ -776,18 +775,13 @@ class DataSmart(MutableMapping):
return None
cachename = var + "[" + flag + "]"
if not expand and retparser and cachename in self.expand_cache:
return self.expand_cache[cachename].unexpanded_value, self.expand_cache[cachename]
if expand and cachename in self.expand_cache:
return self.expand_cache[cachename].value
local_var = self._findVar(var)
local_var, overridedata = self._findVar(var)
value = None
removes = set()
if flag == "_content" and not parsing:
overridedata = self.overridedata.get(var, None)
if flag == "_content" and not parsing and overridedata is not None:
if flag == "_content" and overridedata is not None and not parsing:
match = False
active = {}
self.need_overrides()
@@ -902,7 +896,7 @@ class DataSmart(MutableMapping):
def delVarFlag(self, var, flag, **loginfo):
self.expand_cache = {}
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
if not local_var:
return
if not var in self.dict:
@@ -945,7 +939,7 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
flags = {}
if local_var:


@@ -68,39 +68,29 @@ _catchall_handlers = {}
_eventfilter = None
_uiready = False
_thread_lock = threading.Lock()
_heartbeat_enabled = False
_should_exit = threading.Event()
_thread_lock_enabled = False
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
def enable_threadlock():
# Always needed now
return
global _thread_lock_enabled
_thread_lock_enabled = True
def disable_threadlock():
# Always needed now
return
def enable_heartbeat():
global _heartbeat_enabled
_heartbeat_enabled = True
def disable_heartbeat():
global _heartbeat_enabled
_heartbeat_enabled = False
#
# In long-running code, this function should be called periodically
# to check if we should exit due to an interruption (e.g. Ctrl+C from the UI)
#
def check_for_interrupts(d):
global _should_exit
if _should_exit.is_set():
bb.warn("Exiting due to interrupt.")
raise bb.BBHandledException()
global _thread_lock_enabled
_thread_lock_enabled = False
def execute_handler(name, handler, event, d):
event.data = d
addedd = False
if 'd' not in builtins:
builtins['d'] = d
addedd = True
try:
ret = handler(event, d)
ret = handler(event)
except (bb.parse.SkipRecipe, bb.BBHandledException):
raise
except Exception:
@@ -114,7 +104,8 @@ def execute_handler(name, handler, event, d):
raise
finally:
del event.data
if addedd:
del builtins['d']
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
@@ -189,30 +180,36 @@ def print_ui_queue():
def fire_ui_handlers(event, d):
global _thread_lock
global _thread_lock_enabled
if not _uiready:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
with bb.utils.lock_timeout(_thread_lock):
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
if not _ui_logfilters[h].filter(event):
continue
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
except:
errors.append(h)
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.acquire()
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
if not _ui_logfilters[h].filter(event):
continue
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
except:
errors.append(h)
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.release()
def fire(event, d):
"""Fire off an Event"""
@@ -256,12 +253,12 @@ def register(name, handler, mask=None, filename=None, lineno=None, data=None):
if handler is not None:
# handle string containing python code
if isinstance(handler, str):
tmp = "def %s(e, d):\n%s" % (name, handler)
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = bb.methodpool.compile_cache(tmp)
if not code:
if filename is None:
filename = "%s(e, d)" % name
filename = "%s(e)" % name
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
if lineno is not None:
ast.increment_lineno(code, lineno-1)
@@ -326,23 +323,21 @@ def set_eventfilter(func):
_eventfilter = func
def register_UIHhandler(handler, mainui=False):
with bb.utils.lock_timeout(_thread_lock):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
def unregister_UIHhandler(handlerNum, mainui=False):
if mainui:
global _uiready
_uiready = False
with bb.utils.lock_timeout(_thread_lock):
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
return
def get_uihandler():
@@ -856,11 +851,3 @@ class FindSigInfoResult(Event):
def __init__(self, result):
Event.__init__(self)
self.result = result
class ParseError(Event):
"""
Event to indicate parse failed
"""
def __init__(self, msg):
super().__init__()
self._msg = msg
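
The fire_ui_handlers() hunks above swap a manually acquired, flag-guarded lock for a "with bb.utils.lock_timeout(_thread_lock):" block. A minimal sketch of what such a helper can look like; the real bb.utils implementation may differ in its timeout and error handling:

import contextlib
import threading

@contextlib.contextmanager
def lock_timeout(lock, timeout=300):
    # Acquire with a timeout so a wedged holder becomes a visible
    # error instead of a silent deadlock.
    if not lock.acquire(timeout=timeout):
        raise RuntimeError("Timed out waiting for lock %s" % lock)
    try:
        yield
    finally:
        lock.release()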


@@ -469,7 +469,6 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
basename = os.path.basename(mirrortarball)
# Kill parameters, they make no sense for mirror tarballs
uri_decoded[5] = {}
uri_find_decoded[5] = {}
elif ud.localpath and ud.method.supports_checksum(ud):
basename = os.path.basename(ud.localpath)
if basename:
@@ -518,7 +517,7 @@ def fetcher_init(d):
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
_checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))
_checksum_cache.init_cache(d)
for m in methods:
if hasattr(m, "init"):
@@ -560,6 +559,7 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
file against those in the recipe each time, rather than only after
downloading. See https://bugzilla.yoctoproject.org/show_bug.cgi?id=5571.
"""
if ud.ignore_checksums or not ud.method.supports_checksum(ud):
return {}
@@ -604,7 +604,11 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
# If strict checking enabled and neither sum defined, raise error
if strict == "1":
raise NoChecksumError("\n".join(checksum_lines))
messages.append("No checksum specified for '%s', please add at " \
"least one to the recipe:" % ud.localpath)
messages.extend(checksum_lines)
logger.error("\n".join(messages))
raise NoChecksumError("Missing SRC_URI checksum", ud.url)
bb.event.fire(MissingChecksumEvent(ud.url, **checksum_event), d)
@@ -744,13 +748,10 @@ def subprocess_setup():
# SIGPIPE errors are known issues with gzip/bash
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def mark_recipe_nocache(d):
def get_autorev(d):
# Only skip caching the src rev in the autorev case
if d.getVar('BB_SRCREV_POLICY') != "cache":
d.setVar('BB_DONT_CACHE', '1')
def get_autorev(d):
mark_recipe_nocache(d)
d.setVar("__BBAUTOREV_SEEN", True)
return "AUTOINC"
def get_srcrev(d, method_name='sortable_revision'):
@@ -767,7 +768,7 @@ def get_srcrev(d, method_name='sortable_revision'):
that fetcher provides a method with the given name and the same signature as sortable_revision.
"""
d.setVar("__BBSRCREV_SEEN", "1")
d.setVar("__BBSEENSRCREV", "1")
recursion = d.getVar("__BBINSRCREV")
if recursion:
raise FetchError("There are recursive references in fetcher variables, likely through SRC_URI")
@@ -849,13 +850,10 @@ FETCH_EXPORT_VARS = ['HOME', 'PATH',
'DBUS_SESSION_BUS_ADDRESS',
'P4CONFIG',
'SSL_CERT_FILE',
'NODE_EXTRA_CA_CERTS',
'AWS_PROFILE',
'AWS_ACCESS_KEY_ID',
'AWS_SECRET_ACCESS_KEY',
'AWS_DEFAULT_REGION',
'GIT_CACHE_PATH',
'SSL_CERT_DIR']
'AWS_DEFAULT_REGION']
def get_fetcher_environment(d):
newenv = {}
@@ -1217,7 +1215,6 @@ def srcrev_internal_helper(ud, d, name):
if srcrev == "INVALID" or not srcrev:
raise FetchError("Please set a valid SRCREV for url %s (possible key names are %s, or use a ;rev=X URL parameter)" % (str(attempts), ud.url), ud.url)
if srcrev == "AUTOINC":
d.setVar("__BBAUTOREV_ACTED_UPON", True)
srcrev = ud.method.latest_revision(ud, d, name)
return srcrev
@@ -1290,13 +1287,18 @@ class FetchData(object):
if checksum_name in self.parm:
checksum_expected = self.parm[checksum_name]
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az", "crate"]:
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az"]:
checksum_expected = None
else:
checksum_expected = d.getVarFlag("SRC_URI", checksum_name)
setattr(self, "%s_expected" % checksum_id, checksum_expected)
for checksum_id in CHECKSUM_LIST:
configure_checksum(checksum_id)
self.ignore_checksums = False
self.names = self.parm.get("name",'default').split(',')
self.method = None
@@ -1318,11 +1320,6 @@ class FetchData(object):
if hasattr(self.method, "urldata_init"):
self.method.urldata_init(self, d)
for checksum_id in CHECKSUM_LIST:
configure_checksum(checksum_id)
self.ignore_checksums = False
if "localpath" in self.parm:
# if user sets localpath for file, use it instead.
self.localpath = self.parm["localpath"]
@@ -1723,7 +1720,6 @@ class Fetch(object):
network = self.d.getVar("BB_NO_NETWORK")
premirroronly = bb.utils.to_boolean(self.d.getVar("BB_FETCH_PREMIRRORONLY"))
checksum_missing_messages = []
for u in urls:
ud = self.ud[u]
ud.setup_localpath(self.d)
@@ -1735,6 +1731,7 @@ class Fetch(object):
try:
self.d.setVar("BB_NO_NETWORK", network)
if m.verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
done = True
elif m.try_premirror(ud, self.d):
@@ -1806,20 +1803,13 @@ class Fetch(object):
raise ChecksumError("Stale Error Detected")
except BBFetchException as e:
if isinstance(e, NoChecksumError):
(message, _) = e.args
checksum_missing_messages.append(message)
continue
elif isinstance(e, ChecksumError):
if isinstance(e, ChecksumError):
logger.error("Checksum failure fetching %s" % u)
raise
finally:
if ud.lockfile:
bb.utils.unlockfile(lf)
if checksum_missing_messages:
logger.error("Missing SRC_URI checksum, please add those to the recipe: \n%s", "\n".join(checksum_missing_messages))
raise BBFetchException("There were some missing checksums in the recipe")
def checkstatus(self, urls=None):
"""


@@ -33,7 +33,7 @@ class Crate(Wget):
return ud.type in ['crate']
def recommends_checksum(self, urldata):
return True
return False
def urldata_init(self, ud, d):
"""
@@ -56,10 +56,8 @@ class Crate(Wget):
if len(parts) < 5:
raise bb.fetch2.ParameterError("Invalid URL: Must be crate://HOST/NAME/VERSION", ud.url)
# version is expected to be the last token
# but ignore possible url parameters which will be used
# by the top fetcher class
version, _, _ = parts[len(parts) -1].partition(";")
# last field is version
version = parts[len(parts) - 1]
# second to last field is name
name = parts[len(parts) - 2]
# host (this is to allow custom crate registries to be specified
@@ -71,8 +69,7 @@ class Crate(Wget):
ud.url = "https://%s/%s/%s/download" % (host, name, version)
ud.parm['downloadfilename'] = "%s-%s.crate" % (name, version)
if 'name' not in ud.parm:
ud.parm['name'] = '%s-%s' % (name, version)
ud.parm['name'] = name
logger.debug2("Fetching %s to %s" % (ud.url, ud.parm['downloadfilename']))


@@ -44,8 +44,7 @@ Supported SRC_URI options are:
- nobranch
Don't check the SHA validation for the branch. Set this option for a recipe
referring to a commit which is valid in any namespace (branch, tag, ...)
instead of a branch.
referring to a commit which is valid in a tag instead of a branch.
The default is "0", set nobranch=1 if needed.
- usehead
@@ -367,13 +366,9 @@ class Git(FetchMethod):
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
# We do this since git will use a "-l" option automatically for local urls where possible,
# but it doesn't work when git/objects is a symlink; it only works when it is a directory.
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl_path = repourl[7:]
objects = os.path.join(repourl_path, 'objects')
if os.path.isdir(objects) and not os.path.islink(objects):
repourl = repourl_path
repourl = repourl[7:]
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, shlex.quote(repourl), ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
@@ -387,11 +382,7 @@ class Git(FetchMethod):
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=ud.clonedir)
if ud.nobranch:
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
else:
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/heads/*:refs/heads/* refs/tags/*:refs/tags/*" % (ud.basecmd, shlex.quote(repourl))
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
@@ -421,7 +412,8 @@ class Git(FetchMethod):
# It would be nice to just do this inline here by running 'git-lfs fetch'
# on the bare clonedir, but that operation requires a working copy on some
# releases of Git LFS.
with tempfile.TemporaryDirectory(dir=d.getVar('DL_DIR')) as tmpdir:
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
try:
# Do the checkout. This implicitly involves a Git LFS fetch.
Git.unpack(self, ud, tmpdir, d)
@@ -439,6 +431,8 @@ class Git(FetchMethod):
# downloaded.
if os.path.exists(os.path.join(tmpdir, "git", ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/git/.git" % tmpdir)
finally:
bb.utils.remove(tmpdir, recurse=True)
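Both sides of this hunk create and clean up a scratch directory under DL_DIR; the newer side simply lets the context manager do the cleanup. The equivalence, as a generic sketch:

import tempfile, shutil

# Context-manager form: the directory is removed automatically on exit.
with tempfile.TemporaryDirectory(dir="/tmp") as tmpdir:
    ...  # do the temporary checkout here

# Manual form, equivalent to the try/finally on the other side.
tmpdir = tempfile.mkdtemp(dir="/tmp")
try:
    ...  # do the temporary checkout here
finally:
    shutil.rmtree(tmpdir, ignore_errors=True)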
def build_mirror_data(self, ud, d):
@@ -661,6 +655,11 @@ class Git(FetchMethod):
Check if the repository has 'lfs' (large file) content
"""
if not ud.nobranch:
branchname = ud.branches[ud.names[0]]
else:
branchname = "master"
# The bare clonedir doesn't use the remote names; it has the branch immediately.
if wd == ud.clonedir:
refname = ud.branches[ud.names[0]]
@@ -737,11 +736,11 @@ class Git(FetchMethod):
"""
Compute the HEAD revision for the url
"""
if not d.getVar("__BBSRCREV_SEEN"):
if not d.getVar("__BBSEENSRCREV"):
raise bb.fetch2.FetchError("Recipe uses a floating tag/branch '%s' for repo '%s' without a fixed SRCREV yet doesn't call bb.fetch2.get_srcrev() (use SRCPV in PV for OE)." % (ud.unresolvedrev[name], ud.host+ud.path))
# Ensure we mark as not cached
bb.fetch2.mark_recipe_nocache(d)
bb.fetch2.get_autorev(d)
output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fallback to other form

bitbake/lib/bb/fetch2/gitsm.py

@@ -90,7 +90,7 @@ class GitSM(Git):
# Convert relative to absolute uri based on parent uri
if uris[m].startswith('..') or uris[m].startswith('./'):
newud = copy.copy(ud)
newud.path = os.path.normpath(os.path.join(newud.path, uris[m]))
newud.path = os.path.realpath(os.path.join(newud.path, uris[m]))
uris[m] = Git._get_repo_url(self, newud)
for module in submodules:
@@ -115,14 +115,13 @@ class GitSM(Git):
# This has to be a file reference
proto = "file"
url = "gitsm://" + uris[module]
if url.endswith("{}{}".format(ud.host, ud.path)):
if "{}{}".format(ud.host, ud.path) in url:
raise bb.fetch2.FetchError("Submodule refers to the parent repository. This will cause deadlock situation in current version of Bitbake." \
"Consider using git fetcher instead.")
url += ';protocol=%s' % proto
url += ";name=%s" % module
url += ";subpath=%s" % module
url += ";nobranch=1"
ld = d.createCopy()
# Not necessary to set SRC_URI, since we're passing the URI to
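On the relative-URI line above, the difference between the two sides is lexical versus filesystem resolution. A small illustration with made-up paths (purely hypothetical values):

import os

parent_path = "/srv/git/group/app.git"   # hypothetical parent repo path
rel_uri = "../libfoo.git"                # hypothetical submodule URI

# normpath: purely lexical, never touches the filesystem.
print(os.path.normpath(os.path.join(parent_path, rel_uri)))
# -> /srv/git/group/libfoo.git
# realpath: same joining, but also resolves any symlinks that exist on disk.
print(os.path.realpath(os.path.join(parent_path, rel_uri)))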

bitbake/lib/bb/fetch2/local.py

@@ -72,7 +72,7 @@ class Local(FetchMethod):
filespath = d.getVar('FILESPATH')
if filespath:
locations = filespath.split(":")
msg = "Unable to find file " + urldata.url + " anywhere to download to " + urldata.localpath + ". The paths that were searched were:\n " + "\n ".join(locations)
msg = "Unable to find file " + urldata.url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
raise FetchError(msg)
return True

bitbake/lib/bb/fetch2/npmsw.py

@@ -129,28 +129,10 @@ class NpmShrinkWrap(FetchMethod):
localpath = os.path.join(d.getVar("DL_DIR"), localfile)
# Handle local tarball and link sources
elif version.startswith("file"):
localpath = version[5:]
if not version.endswith(".tgz"):
unpack = False
# Handle git sources
elif version.startswith(("git", "bitbucket","gist")) or (
not version.endswith((".tgz", ".tar", ".tar.gz"))
and not version.startswith((".", "@", "/"))
and "/" in version
):
elif version.startswith("git"):
if version.startswith("github:"):
version = "git+https://github.com/" + version[len("github:"):]
elif version.startswith("gist:"):
version = "git+https://gist.github.com/" + version[len("gist:"):]
elif version.startswith("bitbucket:"):
version = "git+https://bitbucket.org/" + version[len("bitbucket:"):]
elif version.startswith("gitlab:"):
version = "git+https://gitlab.com/" + version[len("gitlab:"):]
elif not version.startswith(("git+","git:")):
version = "git+https://github.com/" + version
regex = re.compile(r"""
^
git\+
@@ -176,6 +158,12 @@ class NpmShrinkWrap(FetchMethod):
url = str(uri)
# Handle local tarball and link sources
elif version.startswith("file"):
localpath = version[5:]
if not version.endswith(".tgz"):
unpack = False
else:
raise ParameterError("Unsupported dependency: %s" % name, ud.url)
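The newer side of this hunk recognises the npm "github:", "gist:", "bitbucket:" and "gitlab:" shorthands (plus bare "owner/repo") and rewrites them to git+https URLs before the regex runs. A self-contained sketch of that expansion (hypothetical helper name):

def expand_npm_git_shorthand(version):
    # Map npm registry shorthands onto their canonical git+https hosts.
    for prefix, base in (("github:", "https://github.com/"),
                         ("gist:", "https://gist.github.com/"),
                         ("bitbucket:", "https://bitbucket.org/"),
                         ("gitlab:", "https://gitlab.com/")):
        if version.startswith(prefix):
            return "git+" + base + version[len(prefix):]
    if not version.startswith(("git+", "git:")):
        return "git+https://github.com/" + version   # bare owner/repo
    return version

print(expand_npm_git_shorthand("gitlab:owner/repo#v1.2.3"))
# -> git+https://gitlab.com/owner/repo#v1.2.3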
@@ -205,9 +193,7 @@ class NpmShrinkWrap(FetchMethod):
# This fetcher resolves multiple URIs from a shrinkwrap file and then
# forwards it to a proxy fetcher. The management of the donestamp file,
# the lockfile and the checksums are forwarded to the proxy fetcher.
shrinkwrap_urls = [dep["url"] for dep in ud.deps if dep["url"]]
if shrinkwrap_urls:
ud.proxy = Fetch(shrinkwrap_urls, data)
ud.proxy = Fetch([dep["url"] for dep in ud.deps if dep["url"]], data)
ud.needdonestamp = False
@staticmethod

bitbake/lib/bb/fetch2/sftp.py

@@ -103,7 +103,7 @@ class SFTP(FetchMethod):
if path[:3] == '/~/':
path = path[3:]
remote = '"%s%s:%s"' % (user, urlo.hostname, path)
remote = '%s%s:%s' % (user, urlo.hostname, path)
cmd = '%s %s %s %s' % (basecmd, port, remote, lpath)

bitbake/lib/bb/fetch2/wget.py

@@ -26,6 +26,7 @@ from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
from bb.utils import export_proxies
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
@@ -340,8 +341,7 @@ class Wget(FetchMethod):
opener = urllib.request.build_opener(*handlers)
try:
uri_base = ud.url.split(";")[0]
uri = "{}://{}{}".format(urllib.parse.urlparse(uri_base).scheme, ud.host, ud.path)
uri = ud.url.split(";")[0]
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
# Some servers (FusionForge, as used on Alioth) require that the
@@ -360,16 +360,23 @@ class Wget(FetchMethod):
try:
import netrc
auth_data = netrc.netrc().authenticators(urllib.parse.urlparse(uri).hostname)
if auth_data:
login, _, password = auth_data
add_basic_auth("%s:%s" % (login, password), r)
except (FileNotFoundError, netrc.NetrcParseError):
n = netrc.netrc()
login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
add_basic_auth("%s:%s" % (login, password), r)
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
pass
with opener.open(r, timeout=30) as response:
pass
except (urllib.error.URLError, ConnectionResetError, TimeoutError) as e:
except urllib.error.URLError as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed: %s" % e)
return False
except ConnectionResetError as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
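For the netrc block above: authenticators() returns a (login, account, password) tuple, or None when the host has no entry, and constructing netrc.netrc() raises if ~/.netrc is missing or malformed. A standalone sketch of the newer side's lookup (hypothetical helper name):

import netrc

def basic_auth_from_netrc(hostname):
    try:
        auth_data = netrc.netrc().authenticators(hostname)
    except (FileNotFoundError, netrc.NetrcParseError):
        return None              # no ~/.netrc, or it failed to parse
    if auth_data:
        login, _, password = auth_data
        return "%s:%s" % (login, password)
    return None                  # host not listed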
@@ -637,10 +644,10 @@ class Wget(FetchMethod):
# search for version matches on folders inside the path, like:
# "5.7" in http://download.gnome.org/sources/${PN}/5.7/${PN}-${PV}.tar.gz
dirver_regex = re.compile(r"(?P<dirver>[^/]*(\d+\.)*\d+([-_]r\d+)*)/")
m = dirver_regex.findall(path)
m = dirver_regex.search(path)
if m:
pn = d.getVar('PN')
dirver = m[-1][0]
dirver = m.group('dirver')
dirver_pn_regex = re.compile(r"%s\d?" % (re.escape(pn)))
if not dirver_pn_regex.search(dirver):
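The findall/search difference above: search() yields the first match object (with named groups available), while findall() returns a list of group tuples, so the last directory version comes out as m[-1][0]. Both styles on a sample path:

import re

dirver_regex = re.compile(r"(?P<dirver>[^/]*(\d+\.)*\d+([-_]r\d+)*)/")
path = "/sources/pkg/5.7/"   # hypothetical sample path

m = dirver_regex.search(path)
print(m.group('dirver'))                   # '5.7' (first match, named group)
print(dirver_regex.findall(path)[-1][0])   # '5.7' (last match's first group)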

bitbake/lib/bb/main.py

@@ -12,12 +12,11 @@
import os
import sys
import logging
import argparse
import optparse
import warnings
import fcntl
import time
import traceback
import datetime
import bb
from bb import event
@@ -44,18 +43,18 @@ def present_options(optionlist):
else:
return optionlist[0]
class BitbakeHelpFormatter(argparse.HelpFormatter):
def _get_help_string(self, action):
class BitbakeHelpFormatter(optparse.IndentedHelpFormatter):
def format_option(self, option):
# We need to do this here rather than in the text we supply to
# add_option() because we don't want to call list_extension_modules()
# on every execution (since it imports all of the modules)
# Note also that we modify option.help rather than the returned text
# - this is so that we don't have to re-format the text ourselves
if action.dest == 'ui':
if option.dest == 'ui':
valid_uis = list_extension_modules(bb.ui, 'main')
return action.help.replace('@CHOICES@', present_options(valid_uis))
option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
return action.help
return optparse.IndentedHelpFormatter.format_option(self, option)
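The argparse side hooks _get_help_string(), a private but conventional HelpFormatter extension point (also overridden by argparse.ArgumentDefaultsHelpFormatter), so the substitution happens lazily at --help time. A minimal standalone analogue with placeholder choices:

import argparse

class ChoicesHelpFormatter(argparse.HelpFormatter):
    def _get_help_string(self, action):
        # Substitute @CHOICES@ only when the help text is actually rendered.
        if action.dest == 'ui' and '@CHOICES@' in (action.help or ''):
            return action.help.replace('@CHOICES@', 'knotty, ncurses')
        return action.help

parser = argparse.ArgumentParser(formatter_class=ChoicesHelpFormatter)
parser.add_argument('-u', '--ui', default='knotty',
                    help='The user interface to use (@CHOICES@).')
parser.print_help()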
def list_extension_modules(pkg, checkattr):
"""
@@ -115,205 +114,180 @@ def _showwarning(message, category, filename, lineno, file=None, line=None):
warnings.showwarning = _showwarning
def create_bitbake_parser():
parser = argparse.ArgumentParser(
description="""\
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.
""",
formatter_class=BitbakeHelpFormatter,
allow_abbrev=False,
add_help=False, # help is manually added below in a specific argument group
)
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
general_group = parser.add_argument_group('General options')
task_group = parser.add_argument_group('Task control options')
exec_group = parser.add_argument_group('Execution control options')
logging_group = parser.add_argument_group('Logging/output control options')
server_group = parser.add_argument_group('Server options')
config_group = parser.add_argument_group('Configuration options')
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
general_group.add_argument("targets", nargs="*", metavar="recipename/target",
help="Execute the specified task (default is 'build') for these target "
"recipes (.bb files).")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
general_group.add_argument("-s", "--show-versions", action="store_true",
help="Show current and preferred versions of all recipes.")
parser.add_option("-k", "--continue", action="store_false", dest="halt", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
general_group.add_argument("-e", "--environment", action="store_true",
dest="show_environment",
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
general_group.add_argument("-g", "--graphviz", action="store_true", dest="dot_graph",
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
general_group.add_argument("-u", "--ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %(default)s).")
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
general_group.add_argument("--version", action="store_true",
help="Show programs version and exit.")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
general_group.add_argument('-h', '--help', action='help',
help='Show this help message and exit.')
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
task_group.add_argument("-f", "--force", action="store_true",
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
task_group.add_argument("-c", "--cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
task_group.add_argument("-C", "--clear-stamp", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
task_group.add_argument("--runall", action="append", default=[],
help="Run the specified task for any recipe in the taskgraph of the "
"specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--skip-setscene", action="store_true",
dest="skipsetscene", default=False,
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene")
task_group.add_argument("--runonly", action="append",
help="Run only the specified task within the taskgraph of the "
"specified targets (and any task dependencies those tasks may have).")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
task_group.add_argument("--no-setscene", action="store_true",
dest="nosetscene",
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
task_group.add_argument("--skip-setscene", action="store_true",
dest="skipsetscene",
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate any running bitbake server.")
task_group.add_argument("--setscene-only", action="store_true",
dest="setsceneonly",
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
exec_group.add_argument("-n", "--dry-run", action="store_true",
help="Don't execute, just go through the motions.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
exec_group.add_argument("-p", "--parse-only", action="store_true",
help="Quit after parsing the BB recipes.")
exec_group.add_argument("-k", "--continue", action="store_false", dest="halt",
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
exec_group.add_argument("-P", "--profile", action="store_true",
help="Profile the command and save reports.")
exec_group.add_argument("-S", "--dump-signatures", action="append",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
exec_group.add_argument("--revisions-changed", action="store_true",
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
exec_group.add_argument("-b", "--buildfile",
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
logging_group.add_argument("-D", "--debug", action="count", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
logging_group.add_argument("-l", "--log-domains", action="append", dest="debug_domains",
default=[],
help="Show debug logging for the specified logging domains.")
logging_group.add_argument("-v", "--verbose", action="store_true",
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
logging_group.add_argument("-q", "--quiet", action="count", default=0,
help="Output less log message data to the terminal. You can specify this "
"more than once.")
logging_group.add_argument("-w", "--write-log", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
server_group.add_argument("-B", "--bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
server_group.add_argument("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
server_group.add_argument("--remote-server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
server_group.add_argument("-m", "--kill-server", action="store_true",
help="Terminate any running bitbake server.")
server_group.add_argument("--token", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
server_group.add_argument("--observe-only", action="store_true",
help="Connect to a server as an observing-only client.")
server_group.add_argument("--status-only", action="store_true",
help="Check the status of the remote bitbake server.")
server_group.add_argument("--server-only", action="store_true",
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
config_group.add_argument("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
config_group.add_argument("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
config_group.add_argument("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing.")
parser.add_option("", "--runall", action="append", dest="runall",
help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--runonly", action="append", dest="runonly",
help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
return parser
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = create_bitbake_parser()
options = parser.parse_intermixed_args(argv[1:])
if options.version:
print("BitBake Build Tool Core version %s" % bb.__version__)
sys.exit(0)
options, targets = parser.parse_args(argv)
if options.quiet and options.verbose:
parser.error("options --quiet and --verbose are mutually exclusive")
@@ -345,7 +319,7 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
else:
options.xmlrpcinterface = (None, 0)
return options, options.targets
return options, targets[1:]
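parse_intermixed_args() is what lets the argparse version keep accepting targets and options in any order, the way optparse's parse_args() did. A small demonstration with hypothetical arguments:

import argparse

p = argparse.ArgumentParser(allow_abbrev=False)
p.add_argument("targets", nargs="*")
p.add_argument("-c", "--cmd")
# Positionals and options may be freely interleaved:
opts = p.parse_intermixed_args(["core-image-minimal", "-c", "fetch", "zlib"])
print(opts.targets, opts.cmd)   # ['core-image-minimal', 'zlib'] fetch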
def bitbake_main(configParams, configuration):
@@ -410,9 +384,6 @@ def bitbake_main(configParams, configuration):
return 1
def timestamp():
return datetime.datetime.now().strftime('%H:%M:%S.%f')
def setup_bitbake(configParams, extrafeatures=None):
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
@@ -420,11 +391,6 @@ def setup_bitbake(configParams, extrafeatures=None):
# In status only mode there are no logs and no UI
logger.addHandler(handler)
if configParams.dump_signatures:
if extrafeatures is None:
extrafeatures = []
extrafeatures.append(bb.cooker.CookerFeatures.RECIPE_SIGGEN_INFO)
if configParams.server_only:
featureset = []
ui_module = None
@@ -452,7 +418,7 @@ def setup_bitbake(configParams, extrafeatures=None):
retries = 8
while retries:
try:
topdir, lock, lockfile = lockBitbake()
topdir, lock = lockBitbake()
sockname = topdir + "/bitbake.sock"
if lock:
if configParams.status_only or configParams.kill_server:
@@ -463,22 +429,18 @@ def setup_bitbake(configParams, extrafeatures=None):
logger.info("Starting bitbake server...")
# Clear the event queue since we already displayed messages
bb.event.ui_queue = []
server = bb.server.process.BitBakeServer(lock, sockname, featureset, configParams.server_timeout, configParams.xmlrpcinterface, configParams.profile)
server = bb.server.process.BitBakeServer(lock, sockname, featureset, configParams.server_timeout, configParams.xmlrpcinterface)
else:
logger.info("Reconnecting to bitbake server...")
if not os.path.exists(sockname):
logger.info("Previous bitbake instance shutting down?, waiting to retry... (%s)" % timestamp())
procs = bb.server.process.get_lockfile_process_msg(lockfile)
if procs:
logger.info("Processes holding bitbake.lock (missing socket %s):\n%s" % (sockname, procs))
logger.info("Directory listing: %s" % (str(os.listdir(topdir))))
logger.info("Previous bitbake instance shutting down?, waiting to retry...")
i = 0
lock = None
# Wait for 5s or until we can get the lock
while not lock and i < 50:
time.sleep(0.1)
_, lock, _ = lockBitbake()
_, lock = lockBitbake()
i += 1
if lock:
bb.utils.unlockfile(lock)
@@ -497,9 +459,9 @@ def setup_bitbake(configParams, extrafeatures=None):
retries -= 1
tryno = 8 - retries
if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError, EOFError, SystemExit)):
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, timestamp()))
logger.info("Retrying server connection (#%d)..." % tryno)
else:
logger.info("Retrying server connection (#%d)... (%s, %s)" % (tryno, traceback.format_exc(), timestamp()))
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, traceback.format_exc()))
if not retries:
bb.fatal("Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log).")
@@ -528,5 +490,5 @@ def lockBitbake():
bb.error("Unable to find conf/bblayers.conf or conf/bitbake.conf. BBPATH is unset and/or not in a build directory?")
raise BBMainFatal
lockfile = topdir + "/bitbake.lock"
return topdir, bb.utils.lockfile(lockfile, False, False), lockfile
return topdir, bb.utils.lockfile(lockfile, False, False)

bitbake/lib/bb/parse/__init__.py

@@ -99,12 +99,12 @@ def supports(fn, data):
return 1
return 0
def handle(fn, data, include=0, baseconfig=False):
def handle(fn, data, include = 0):
"""Call the handler that is appropriate for this file"""
for h in handlers:
if h['supports'](fn, data):
with data.inchistory.include(fn):
return h['handle'](fn, data, include, baseconfig)
return h['handle'](fn, data, include)
raise ParseError("not a BitBake file", fn)
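The dispatch pattern above is a simple registry: each handler advertises a supports() predicate, and the first one that claims the file handles it. A generic sketch of the same shape:

handlers = []

def register(supports, handle):
    handlers.append({'supports': supports, 'handle': handle})

register(lambda fn: fn.endswith('.conf'), lambda fn: 'conf: %s' % fn)
register(lambda fn: fn.endswith(('.bb', '.bbclass')), lambda fn: 'bb: %s' % fn)

def handle(fn):
    # First handler whose predicate matches wins.
    for h in handlers:
        if h['supports'](fn):
            return h['handle'](fn)
    raise ValueError('not a BitBake file: %s' % fn)

print(handle('local.conf'))   # conf: local.conf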
def init(fn, data):

bitbake/lib/bb/parse/ast.py

@@ -9,7 +9,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import sys
import bb
from bb import methodpool
from bb.parse import logger
@@ -270,41 +269,6 @@ class BBHandlerNode(AstNode):
data.setVarFlag(h, "handler", 1)
data.setVar('__BBHANDLERS', bbhands)
class PyLibNode(AstNode):
def __init__(self, filename, lineno, libdir, namespace):
AstNode.__init__(self, filename, lineno)
self.libdir = libdir
self.namespace = namespace
def eval(self, data):
global_mods = (data.getVar("BB_GLOBAL_PYMODULES") or "").split()
for m in global_mods:
if m not in bb.utils._context:
bb.utils._context[m] = __import__(m)
libdir = data.expand(self.libdir)
if libdir not in sys.path:
sys.path.append(libdir)
try:
bb.utils._context[self.namespace] = __import__(self.namespace)
toimport = getattr(bb.utils._context[self.namespace], "BBIMPORTS", [])
for i in toimport:
bb.utils._context[self.namespace] = __import__(self.namespace + "." + i)
mod = getattr(bb.utils._context[self.namespace], i)
fn = getattr(mod, "__file__")
funcs = {}
for f in dir(mod):
if f.startswith("_"):
continue
fcall = getattr(mod, f)
if not callable(fcall):
continue
funcs[f] = fcall
bb.codeparser.add_module_functions(fn, funcs, "%s.%s" % (self.namespace, i))
except AttributeError as e:
bb.error("Error importing OE modules: %s" % str(e))
class InheritNode(AstNode):
def __init__(self, filename, lineno, classes):
AstNode.__init__(self, filename, lineno)
@@ -356,9 +320,6 @@ def handleDelTask(statements, filename, lineno, m):
def handleBBHandlers(statements, filename, lineno, m):
statements.append(BBHandlerNode(filename, lineno, m.group(1)))
def handlePyLib(statements, filename, lineno, m):
statements.append(PyLibNode(filename, lineno, m.group(1), m.group(2)))
def handleInherit(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritNode(filename, lineno, classes))
@@ -400,9 +361,6 @@ def finalize(fn, d, variant = None):
d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
if d.getVar('__BBAUTOREV_SEEN') and d.getVar('__BBSRCREV_SEEN') and not d.getVar("__BBAUTOREV_ACTED_UPON"):
bb.fatal("AUTOREV/SRCPV set too late for the fetcher to work properly, please set the variables earlier in parsing. Erroring instead of later obtuse build failures.")
bb.event.fire(bb.event.RecipeParsed(fn), d)
finally:
bb.event.set_handlers(saved_handlers)

bitbake/lib/bb/parse/parse_py/BBHandler.py

@@ -101,8 +101,8 @@ def get_statements(filename, absolute_filename, base_name):
cached_statements[absolute_filename] = statements
return statements
def handle(fn, d, include, baseconfig=False):
global __infunc__, __body__, __residue__, __classname__
def handle(fn, d, include):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__, __classname__
__body__ = []
__infunc__ = []
__classname__ = ""
@@ -154,7 +154,7 @@ def handle(fn, d, include, baseconfig=False):
return d
def feeder(lineno, s, fn, root, statements, eof=False):
global __inpython__, __infunc__, __body__, __residue__, __classname__
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, bb, __residue__, __classname__
# Check tabs in python functions:
# - def py_funcname(): covered by __inpython__
@@ -265,7 +265,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
ast.handleInherit(statements, fn, lineno, m)
return
return ConfHandler.feeder(lineno, s, fn, statements, conffile=False)
return ConfHandler.feeder(lineno, s, fn, statements)
# Add us to the handlers list
from .. import handlers

bitbake/lib/bb/parse/parse_py/ConfHandler.py

@@ -21,7 +21,7 @@ __config_regexp__ = re.compile( r"""
^
(?P<exp>export\s+)?
(?P<var>[a-zA-Z0-9\-_+.${}/~:]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]*)\])?
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
(?P<colon>:=) |
@@ -45,8 +45,7 @@ __include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]+)\]$" )
__addpylib_regexp__ = re.compile(r"addpylib\s+(.+)\s+(.+)" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )
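The practical effect of the flag-regexp change above is that '@' becomes legal inside a variable flag name after its first character. A quick check with a hypothetical flag name:

import re

new_flag = re.compile(r"\[([a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]*)\]")
old_flag = re.compile(r"\[([a-zA-Z0-9\-_+.]+)\]")

line = 'SRCREV[vardepvalue@copy] = "x"'   # hypothetical example line
print(bool(new_flag.search(line)))   # True: '@' allowed after the 1st char
print(bool(old_flag.search(line)))   # False: '@' was not in the old set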
def init(data):
return
@@ -103,12 +102,12 @@ def include_single_file(parentfn, fn, lineno, data, error_out):
# We have an issue where a UI might want to enforce particular settings such as
# an empty DISTRO variable. If configuration files do something like assigning
# a weak default, it turns out to be very difficult to filter out these changes,
# particularly when the weak default might appear half way though parsing a chain
# of configuration files. We therefore let the UIs hook into configuration file
# parsing. This turns out to be a hard problem to solve any other way.
confFilters = []
def handle(fn, data, include, baseconfig=False):
def handle(fn, data, include):
init(data)
if include == 0:
@@ -145,7 +144,7 @@ def handle(fn, data, include, baseconfig=False):
# skip comments
if s[0] == '#':
continue
feeder(lineno, s, abs_fn, statements, baseconfig=baseconfig)
feeder(lineno, s, abs_fn, statements)
# DONE WITH PARSING... time to evaluate
data.setVar('FILE', abs_fn)
@@ -158,9 +157,7 @@ def handle(fn, data, include, baseconfig=False):
return data
# baseconfig is set for the bblayers/layer.conf cookerdata config parsing
# The function is also used by BBHandler, conffile would be False
def feeder(lineno, s, fn, statements, baseconfig=False, conffile=True):
def feeder(lineno, s, fn, statements):
m = __config_regexp__.match(s)
if m:
groupd = m.groupdict()
@@ -192,11 +189,6 @@ def feeder(lineno, s, fn, statements, baseconfig=False, conffile=True):
ast.handleUnsetFlag(statements, fn, lineno, m)
return
m = __addpylib_regexp__.match(s)
if baseconfig and conffile and m:
ast.handlePyLib(statements, fn, lineno, m)
return
raise ParseError("unparsed line: '%s'" % s, fn, lineno);
# Add us to the handlers list

bitbake/lib/bb/persist_data.py

@@ -249,23 +249,4 @@ def persist(domain, d):
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_persist_data.sqlite3")
try:
return SQLTable(cachefile, domain)
except sqlite3.OperationalError:
# Sqlite fails to open database when its path is too long.
# After testing, 504 is the biggest path length that can be opened by
# sqlite.
# Note: This code is called before sanity.bbclass and its path length
# check
max_len = 504
if len(cachefile) > max_len:
logger.critical("The path of the cache file is too long "
"({0} chars > {1}) to be opened by sqlite! "
"Your cache file is \"{2}\"".format(
len(cachefile),
max_len,
cachefile))
sys.exit(1)
else:
raise
return SQLTable(cachefile, domain)
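The removed block worked around sqlite refusing to open databases with very long paths (504 characters was the measured limit), reporting the offending path instead of failing obscurely. A condensed standalone sketch of that guard:

import sqlite3
import sys

MAX_SQLITE_PATH = 504   # longest path observed to open successfully

def open_cache(cachefile):
    try:
        return sqlite3.connect(cachefile)
    except sqlite3.OperationalError:
        # Only translate the error when path length is the likely cause.
        if len(cachefile) > MAX_SQLITE_PATH:
            sys.exit("cache path too long (%d > %d chars): %s"
                     % (len(cachefile), MAX_SQLITE_PATH, cachefile))
        raise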

bitbake/lib/bb/runqueue.py

@@ -155,7 +155,7 @@ class RunQueueScheduler(object):
self.stamps = {}
for tid in self.rqdata.runtaskentries:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
self.stamps[tid] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
self.stamps[tid] = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn, noextra=True)
if tid in self.rq.runq_buildable:
self.buildable.append(tid)
@@ -198,20 +198,15 @@ class RunQueueScheduler(object):
curr_cpu_pressure = cpu_pressure_fds.readline().split()[4].split("=")[1]
curr_io_pressure = io_pressure_fds.readline().split()[4].split("=")[1]
curr_memory_pressure = memory_pressure_fds.readline().split()[4].split("=")[1]
exceeds_cpu_pressure = self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) > self.rq.max_cpu_pressure
exceeds_io_pressure = self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) > self.rq.max_io_pressure
exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) > self.rq.max_memory_pressure
now = time.time()
tdiff = now - self.prev_pressure_time
if tdiff > 1.0:
exceeds_cpu_pressure = self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) / tdiff > self.rq.max_cpu_pressure
exceeds_io_pressure = self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) / tdiff > self.rq.max_io_pressure
exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) / tdiff > self.rq.max_memory_pressure
if now - self.prev_pressure_time > 1.0:
self.prev_cpu_pressure = curr_cpu_pressure
self.prev_io_pressure = curr_io_pressure
self.prev_memory_pressure = curr_memory_pressure
self.prev_pressure_time = now
else:
exceeds_cpu_pressure = self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) > self.rq.max_cpu_pressure
exceeds_io_pressure = self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) > self.rq.max_io_pressure
exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) > self.rq.max_memory_pressure
return (exceeds_cpu_pressure or exceeds_io_pressure or exceeds_memory_pressure)
return False
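Context for the pressure hunk: the kernel PSI files expose cumulative "total=" microsecond counters, so the newer side divides the delta by the elapsed time to get a rate, rather than comparing raw deltas taken over a variable interval. A standalone reading sketch, assuming a Linux kernel with PSI enabled:

import time

def read_total(path="/proc/pressure/cpu"):
    with open(path) as f:
        # Line looks like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=12345"
        return float(f.readline().split()[4].split("=")[1])

prev, prev_time = read_total(), time.time()
time.sleep(1.1)
rate = (read_total() - prev) / (time.time() - prev_time)
print("cpu 'some' pressure, stall us per second:", rate)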
@@ -656,11 +651,8 @@ class RunQueueData:
# Nothing to do
return 0
bb.parse.siggen.setup_datacache(self.dataCaches)
self.init_progress_reporter.start()
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Step A - Work out a list of tasks to run
#
@@ -706,8 +698,6 @@ class RunQueueData:
frommc = mcdependency[1]
mcdep = mcdependency[2]
deptask = mcdependency[4]
if mcdep not in taskData:
bb.fatal("Multiconfig '%s' is referenced in multiconfig dependency '%s' but not enabled in BBMULTICONFIG?" % (mcdep, dep))
if mc == frommc:
fn = taskData[mcdep].build_targets[pn][0]
newdep = '%s:%s' % (fn,deptask)
@@ -809,7 +799,6 @@ class RunQueueData:
#self.dump_data()
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Resolve recursive 'recrdeptask' dependencies (Part B)
#
@@ -906,7 +895,6 @@ class RunQueueData:
self.runtaskentries[tid].depends.difference_update(recursivetasksselfref)
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
#self.dump_data()
@@ -945,7 +933,7 @@ class RunQueueData:
bb.debug(1, "Task %s is marked nostamp, cannot invalidate this task" % taskname)
else:
logger.verbose("Invalidate task %s, %s", taskname, fn)
bb.parse.siggen.invalidate_task(taskname, taskfn)
bb.parse.siggen.invalidate_task(taskname, self.dataCaches[mc], taskfn)
self.target_tids = []
for (mc, target, task, fn) in self.targets:
@@ -988,7 +976,6 @@ class RunQueueData:
mark_active(tid, 1)
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Step C - Prune all inactive tasks
#
@@ -1028,7 +1015,6 @@ class RunQueueData:
bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the recipes of the taskgraphs of the targets %s" % (str(self.cooker.configuration.runall), str(self.targets)))
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Handle runonly
if self.cooker.configuration.runonly:
@@ -1069,7 +1055,6 @@ class RunQueueData:
logger.verbose("Assign Weightings")
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Generate a list of reverse dependencies to ease future calculations
for tid in self.runtaskentries:
@@ -1077,7 +1062,6 @@ class RunQueueData:
self.runtaskentries[dep].revdeps.add(tid)
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Identify tasks at the end of dependency chains
# Error on circular dependency loops (length two)
@@ -1094,14 +1078,12 @@ class RunQueueData:
logger.verbose("Compute totals (have %s endpoint(s))", len(endpoints))
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Calculate task weights
# Check of higher length circular dependencies
self.runq_weight = self.calculate_task_weights(endpoints)
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Sanity Check - Check for multiple tasks building the same provider
for mc in self.dataCaches:
@@ -1202,7 +1184,6 @@ class RunQueueData:
self.init_progress_reporter.next_stage()
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Iterate over the task list looking for tasks with a 'setscene' function
self.runq_setscene_tids = set()
@@ -1215,7 +1196,6 @@ class RunQueueData:
self.runq_setscene_tids.add(tid)
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Invalidate task if force mode active
if self.cooker.configuration.force:
@@ -1232,7 +1212,6 @@ class RunQueueData:
invalidate_task(fn + ":" + st, True)
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
# Create and print to the logs a virtual/xxxx -> PN (fn) table
for mc in taskData:
@@ -1245,7 +1224,6 @@ class RunQueueData:
bb.parse.siggen.tasks_resolved(virtmap, virtpnmap, self.dataCaches[mc])
self.init_progress_reporter.next_stage()
bb.event.check_for_interrupts(self.cooker.data)
bb.parse.siggen.set_setscene_tasks(self.runq_setscene_tids)
@@ -1258,7 +1236,6 @@ class RunQueueData:
dealtwith.add(tid)
todeal.remove(tid)
self.prepare_task_hash(tid)
bb.event.check_for_interrupts(self.cooker.data)
bb.parse.siggen.writeout_file_checksum_cache()
@@ -1266,8 +1243,9 @@ class RunQueueData:
return len(self.runtaskentries)
def prepare_task_hash(self, tid):
bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
dc = bb.parse.siggen.get_data_caches(self.dataCaches, mc_from_tid(tid))
bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, dc)
self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, dc)
self.runtaskentries[tid].unihash = bb.parse.siggen.get_unihash(tid)
def dump_data(self):
@@ -1333,6 +1311,10 @@ class RunQueue:
workerpipe = runQueuePipe(worker.stdout, None, self.cfgData, self, rqexec, fakerootlogs=fakerootlogs)
workerdata = {
"taskdeps" : self.rqdata.dataCaches[mc].task_deps,
"fakerootenv" : self.rqdata.dataCaches[mc].fakerootenv,
"fakerootdirs" : self.rqdata.dataCaches[mc].fakerootdirs,
"fakerootnoenv" : self.rqdata.dataCaches[mc].fakerootnoenv,
"sigdata" : bb.parse.siggen.get_taskdata(),
"logdefaultlevel" : bb.msg.loggerDefaultLogLevel,
"build_verbose_shell" : self.cooker.configuration.build_verbose_shell,
@@ -1417,7 +1399,7 @@ class RunQueue:
if taskname is None:
taskname = tn
stampfile = bb.parse.siggen.stampfile_mcfn(taskname, taskfn)
stampfile = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn)
# If the stamp is missing, it's not current
if not os.access(stampfile, os.F_OK):
@@ -1429,7 +1411,7 @@ class RunQueue:
logger.debug2("%s.%s is nostamp\n", fn, taskname)
return False
if taskname.endswith("_setscene"):
if taskname != "do_setscene" and taskname.endswith("_setscene"):
return True
if cache is None:
@@ -1440,8 +1422,8 @@ class RunQueue:
for dep in self.rqdata.runtaskentries[tid].depends:
if iscurrent:
(mc2, fn2, taskname2, taskfn2) = split_tid_mcfn(dep)
stampfile2 = bb.parse.siggen.stampfile_mcfn(taskname2, taskfn2)
stampfile3 = bb.parse.siggen.stampfile_mcfn(taskname2 + "_setscene", taskfn2)
stampfile2 = bb.build.stampfile(taskname2, self.rqdata.dataCaches[mc2], taskfn2)
stampfile3 = bb.build.stampfile(taskname2 + "_setscene", self.rqdata.dataCaches[mc2], taskfn2)
t2 = get_timestamp(stampfile2)
t3 = get_timestamp(stampfile3)
if t3 and not t2:
@@ -1502,7 +1484,6 @@ class RunQueue:
"""
retval = True
bb.event.check_for_interrupts(self.cooker.data)
if self.state is runQueuePrepare:
# NOTE: if you add, remove or significantly refactor the stages of this
@@ -1531,7 +1512,7 @@ class RunQueue:
if not self.dm_event_handler_registered:
res = bb.event.register(self.dm_event_handler_name,
lambda x, y: self.dm.check(self) if self.state in [runQueueRunning, runQueueCleanUp] else False,
lambda x: self.dm.check(self) if self.state in [runQueueRunning, runQueueCleanUp] else False,
('bb.event.HeartbeatEvent',), data=self.cfgData)
self.dm_event_handler_registered = True
@@ -1628,28 +1609,29 @@ class RunQueue:
else:
self.rqexe.finish()
def _rq_dump_sigtid(self, tids):
for tid in tids:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
dataCaches = self.rqdata.dataCaches
bb.parse.siggen.dump_sigtask(taskfn, taskname, dataCaches[mc].stamp[taskfn], True)
def rq_dump_sigfn(self, fn, options):
bb_cache = bb.cache.NoCache(self.cooker.databuilder)
mc = bb.runqueue.mc_from_tid(fn)
the_data = bb_cache.loadDataFull(fn, self.cooker.collections[mc].get_file_appends(fn))
siggen = bb.parse.siggen
dataCaches = self.rqdata.dataCaches
siggen.dump_sigfn(fn, dataCaches, options)
def dump_signatures(self, options):
if bb.cooker.CookerFeatures.RECIPE_SIGGEN_INFO not in self.cooker.featureset:
bb.fatal("The dump signatures functionality needs the RECIPE_SIGGEN_INFO feature enabled")
fns = set()
bb.note("Reparsing files to collect dependency data")
bb.note("Writing task signature files")
for tid in self.rqdata.runtaskentries:
fn = fn_from_tid(tid)
fns.add(fn)
max_process = int(self.cfgData.getVar("BB_NUMBER_PARSE_THREADS") or os.cpu_count() or 1)
def chunkify(l, n):
return [l[i::n] for i in range(n)]
tids = chunkify(list(self.rqdata.runtaskentries), max_process)
# We cannot use the real multiprocessing.Pool easily due to some local data
# that can't be pickled. This is a cheap multi-process solution.
launched = []
while tids:
while fns:
if len(launched) < max_process:
p = Process(target=self._rq_dump_sigtid, args=(tids.pop(), ))
p = Process(target=self.rq_dump_sigfn, args=(fns.pop(), options))
p.start()
launched.append(p)
for q in launched:
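chunkify() above deals the task list out round-robin: the slice l[i::n] takes every n-th element starting at offset i, producing n near-equal work lists for the cheap process pool. For example:

def chunkify(l, n):
    return [l[i::n] for i in range(n)]

print(chunkify(list(range(10)), 3))
# -> [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]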
@@ -1961,7 +1943,8 @@ class RunQueueExecute:
try:
module = __import__(modname, fromlist=(name,))
except ImportError as exc:
bb.fatal("Unable to import scheduler '%s' from '%s': %s" % (name, modname, exc))
logger.critical("Unable to import scheduler '%s' from '%s': %s" % (name, modname, exc))
raise SystemExit(1)
else:
schedulers.add(getattr(module, name))
return schedulers
@@ -2157,33 +2140,21 @@ class RunQueueExecute:
startevent = sceneQueueTaskStarted(task, self.stats, self.rq)
bb.event.fire(startevent, self.cfgData)
taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
runtask = {
'fn' : taskfn,
'task' : task,
'taskname' : taskname,
'taskhash' : self.rqdata.get_task_hash(task),
'unihash' : self.rqdata.get_task_unihash(task),
'quieterrors' : True,
'appends' : self.cooker.collections[mc].get_file_appends(taskfn),
'taskdepdata' : self.sq_build_taskdepdata(task),
'dry_run' : False,
'taskdep': taskdep,
'fakerootenv' : self.rqdata.dataCaches[mc].fakerootenv[taskfn],
'fakerootdirs' : self.rqdata.dataCaches[mc].fakerootdirs[taskfn],
'fakerootnoenv' : self.rqdata.dataCaches[mc].fakerootnoenv[taskfn]
}
taskdepdata = self.sq_build_taskdepdata(task)
taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
taskhash = self.rqdata.get_task_hash(task)
unihash = self.rqdata.get_task_unihash(task)
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not self.cooker.configuration.dry_run:
if not mc in self.rq.fakeworker:
self.rq.start_fakeworker(self, mc)
self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collections[mc].get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
self.rq.fakeworker[mc].process.stdin.flush()
else:
self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collections[mc].get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
self.rq.worker[mc].process.stdin.flush()
self.build_stamps[task] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
self.build_stamps[task] = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn, noextra=True)
self.build_stamps2.append(self.build_stamps[task])
self.sq_running.add(task)
self.sq_live.add(task)
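Both sides of this hunk speak the same worker protocol: a pickled payload framed by <runtask> tags on the worker's stdin; the dict form simply names the fields that the old positional tuple carried. A local sketch of the framing (hypothetical payload values):

import pickle

runtask = {'fn': 'recipe.bb', 'task': 42, 'taskname': 'do_compile'}
msg = b"<runtask>" + pickle.dumps(runtask) + b"</runtask>"

# Receiver side: strip the framing, then unpickle.
payload = msg[len(b"<runtask>"):-len(b"</runtask>")]
print(pickle.loads(payload)['taskname'])   # do_compile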
@@ -2243,30 +2214,18 @@ class RunQueueExecute:
self.runq_running.add(task)
self.stats.taskActive()
if not (self.cooker.configuration.dry_run or self.rqdata.setscene_enforce):
bb.build.make_stamp_mcfn(taskname, taskfn)
bb.build.make_stamp(taskname, self.rqdata.dataCaches[mc], taskfn)
self.task_complete(task)
return True
else:
startevent = runQueueTaskStarted(task, self.stats, self.rq)
bb.event.fire(startevent, self.cfgData)
taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
runtask = {
'fn' : taskfn,
'task' : task,
'taskname' : taskname,
'taskhash' : self.rqdata.get_task_hash(task),
'unihash' : self.rqdata.get_task_unihash(task),
'quieterrors' : False,
'appends' : self.cooker.collections[mc].get_file_appends(taskfn),
'taskdepdata' : self.build_taskdepdata(task),
'dry_run' : self.rqdata.setscene_enforce,
'taskdep': taskdep,
'fakerootenv' : self.rqdata.dataCaches[mc].fakerootenv[taskfn],
'fakerootdirs' : self.rqdata.dataCaches[mc].fakerootdirs[taskfn],
'fakerootnoenv' : self.rqdata.dataCaches[mc].fakerootnoenv[taskfn]
}
taskdepdata = self.build_taskdepdata(task)
taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
taskhash = self.rqdata.get_task_hash(task)
unihash = self.rqdata.get_task_unihash(task)
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not (self.cooker.configuration.dry_run or self.rqdata.setscene_enforce):
if not mc in self.rq.fakeworker:
try:
@@ -2276,13 +2235,13 @@ class RunQueueExecute:
self.rq.state = runQueueFailed
self.stats.taskFailed()
return True
self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, False, self.cooker.collections[mc].get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
self.rq.fakeworker[mc].process.stdin.flush()
else:
self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, False, self.cooker.collections[mc].get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
self.rq.worker[mc].process.stdin.flush()
self.build_stamps[task] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
self.build_stamps[task] = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn, noextra=True)
self.build_stamps2.append(self.build_stamps[task])
self.runq_running.add(task)
self.stats.taskActive()
@@ -2297,7 +2256,7 @@ class RunQueueExecute:
if self.sq_deferred:
deferred_tid = list(self.sq_deferred.keys())[0]
blocking_tid = self.sq_deferred.pop(deferred_tid)
logger.warning("Runqueue deadlocked on deferred tasks, forcing task %s blocked by %s" % (deferred_tid, blocking_tid))
logger.warning("Runqeueue deadlocked on deferred tasks, forcing task %s blocked by %s" % (deferred_tid, blocking_tid))
return True
if self.failed_tids:
@@ -2454,7 +2413,8 @@ class RunQueueExecute:
if self.rqdata.runtaskentries[p].depends and not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
continue
orighash = self.rqdata.runtaskentries[tid].hash
newhash = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, self.rqdata.dataCaches)
dc = bb.parse.siggen.get_data_caches(self.rqdata.dataCaches, mc_from_tid(tid))
newhash = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, dc)
origuni = self.rqdata.runtaskentries[tid].unihash
newuni = bb.parse.siggen.get_unihash(tid)
# FIXME, need to check it can come from sstate at all for determinism?
@@ -2529,28 +2489,6 @@ class RunQueueExecute:
self.sq_buildable.remove(tid)
if tid in self.sq_running:
self.sq_running.remove(tid)
if tid in self.sqdata.outrightfail:
self.sqdata.outrightfail.remove(tid)
if tid in self.scenequeue_notcovered:
self.scenequeue_notcovered.remove(tid)
if tid in self.scenequeue_covered:
self.scenequeue_covered.remove(tid)
if tid in self.scenequeue_notneeded:
self.scenequeue_notneeded.remove(tid)
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
self.sqdata.stamps[tid] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
if tid in self.stampcache:
del self.stampcache[tid]
if tid in self.build_stamps:
del self.build_stamps[tid]
update_tasks.append(tid)
update_tasks2 = []
for tid in update_tasks:
harddepfail = False
for t in self.sqdata.sq_harddeps:
if tid in self.sqdata.sq_harddeps[t] and t in self.scenequeue_notcovered:
@@ -2562,25 +2500,42 @@ class RunQueueExecute:
if not self.sqdata.sq_revdeps[tid]:
self.sq_buildable.add(tid)
update_tasks2.append((tid, harddepfail, tid in self.sqdata.valid))
if tid in self.sqdata.outrightfail:
self.sqdata.outrightfail.remove(tid)
if tid in self.scenequeue_notcovered:
self.scenequeue_notcovered.remove(tid)
if tid in self.scenequeue_covered:
self.scenequeue_covered.remove(tid)
if tid in self.scenequeue_notneeded:
self.scenequeue_notneeded.remove(tid)
if update_tasks2:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
self.sqdata.stamps[tid] = bb.build.stampfile(taskname + "_setscene", self.rqdata.dataCaches[mc], taskfn, noextra=True)
if tid in self.stampcache:
del self.stampcache[tid]
if tid in self.build_stamps:
del self.build_stamps[tid]
update_tasks.append((tid, harddepfail, tid in self.sqdata.valid))
if update_tasks:
self.sqdone = False
for mc in sorted(self.sqdata.multiconfigs):
for tid in sorted([t[0] for t in update_tasks2]):
for tid in sorted([t[0] for t in update_tasks]):
if mc_from_tid(tid) != mc:
continue
h = pending_hash_index(tid, self.rqdata)
if h in self.sqdata.hashes and tid != self.sqdata.hashes[h]:
self.sq_deferred[tid] = self.sqdata.hashes[h]
bb.note("Deferring %s after %s" % (tid, self.sqdata.hashes[h]))
update_scenequeue_data([t[0] for t in update_tasks2], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
update_scenequeue_data([t[0] for t in update_tasks], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
for (tid, harddepfail, origvalid) in update_tasks2:
for (tid, harddepfail, origvalid) in update_tasks:
if tid in self.sqdata.valid and not origvalid:
hashequiv_logger.verbose("Setscene task %s became valid" % tid)
if harddepfail:
logger.debug2("%s has an unavailable hard dependency so skipping" % (tid))
self.sq_task_failoutright(tid)
if changed:
@@ -2855,8 +2810,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
realtid = tid + "_setscene"
idepends = rqdata.taskData[mc].taskentries[realtid].idepends
sqdata.stamps[tid] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
sqdata.stamps[tid] = bb.build.stampfile(taskname + "_setscene", rqdata.dataCaches[mc], taskfn, noextra=True)
for (depname, idependtask) in idepends:
if depname not in rqdata.taskData[mc].build_targets:
@@ -2935,7 +2889,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
found = {}
for tid in rqdata.runq_setscene_tids:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
stamps = bb.build.find_stale_stamps(taskname, taskfn)
stamps = bb.build.find_stale_stamps(taskname, rqdata.dataCaches[mc], taskfn)
if stamps:
if mc not in found:
found[mc] = {}
@@ -2951,7 +2905,7 @@ def check_setscene_stamps(tid, rqdata, rq, stampcache, noexecstamp=False):
taskdep = rqdata.dataCaches[mc].task_deps[taskfn]
if 'noexec' in taskdep and taskname in taskdep['noexec']:
bb.build.make_stamp_mcfn(taskname + "_setscene", taskfn)
bb.build.make_stamp(taskname + "_setscene", rqdata.dataCaches[mc], taskfn)
return True, False
if rq.check_stamp_task(tid, taskname + "_setscene", cache=stampcache):
@@ -2981,13 +2935,11 @@ def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, s
if noexec:
sqdata.noexec.add(tid)
sqrq.sq_task_skip(tid)
logger.debug2("%s is noexec so skipping setscene" % (tid))
continue
if stamppresent:
sqdata.stamppresent.add(tid)
sqrq.sq_task_skip(tid)
logger.debug2("%s has a valid stamp, skipping" % (tid))
continue
tocheck.add(tid)
@@ -3008,7 +2960,6 @@ def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, s
if tid in sqrq.sq_deferred:
continue
sqdata.outrightfail.add(tid)
logger.debug2("%s already handled (fallthrough), skipping" % (tid))
class TaskFailure(Exception):
"""

View File

@@ -28,7 +28,6 @@ import datetime
import pickle
import traceback
import gc
import stat
import bb.server.xmlrpcserver
from bb import daemonize
from multiprocessing import queues
@@ -42,39 +41,6 @@ def serverlog(msg):
print(str(os.getpid()) + " " + datetime.datetime.now().strftime('%H:%M:%S.%f') + " " + msg)
sys.stdout.flush()
#
# When we have lockfile issues, try to find information about which process is
# using the lockfile
#
def get_lockfile_process_msg(lockfile):
# Some systems may not have lsof available
procs = None
try:
procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
pass
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs is None:
# Fall back to fuser if lsof is unavailable
try:
procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
pass
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs:
return procs.decode("utf-8")
return None
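A standalone sketch of the lsof-then-fuser fallback above, assuming neither tool is guaranteed to be installed; the lockfile path is illustrative, not a real bitbake.lock location:

import errno
import subprocess

def who_holds(lockfile):
    # Try lsof first, then fall back to fuser; either may be missing.
    for tool in (["lsof", "-w", lockfile], ["fuser", "-v", lockfile]):
        try:
            out = subprocess.check_output(tool, stderr=subprocess.STDOUT)
            if out:
                return out.decode("utf-8")
        except subprocess.CalledProcessError:
            pass  # file already deleted, or no holder reported
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise  # tool exists but failed for another reason
            # tool not installed; try the next one
    return None

print(who_holds("/tmp/bitbake.lock") or "no holder information")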
class idleFinish():
def __init__(self, msg):
self.msg = msg
class ProcessServer():
profile_filename = "profile.log"
profile_processed_filename = "profile.log.processed"
@@ -92,19 +58,12 @@ class ProcessServer():
self.maxuiwait = 30
self.xmlrpc = False
self.idle = None
# Need a lock for _idlefuns changes
self._idlefuns = {}
self._idlefuncsLock = threading.Lock()
self.idle_cond = threading.Condition(self._idlefuncsLock)
self.bitbake_lock = lock
self.bitbake_lock_name = lockname
self.sock = sock
self.sockname = sockname
# The directory may be renamed. Cache the inode of the socket file
# so we can tell if things changed.
self.sockinode = os.stat(self.sockname)[stat.ST_INO]
self.server_timeout = server_timeout
self.timeout = self.server_timeout
@@ -113,9 +72,7 @@ class ProcessServer():
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
with bb.utils.lock_timeout(self._idlefuncsLock):
self._idlefuns[function] = data
serverlog("Registering idle function %s" % str(function))
self._idlefuns[function] = data
def run(self):
@@ -154,31 +111,6 @@ class ProcessServer():
return ret
def _idle_check(self):
return len(self._idlefuns) == 0 and self.cooker.command.currentAsyncCommand is None
def wait_for_idle(self, timeout=30):
# Wait for the idle loop to have cleared
with bb.utils.lock_timeout(self._idlefuncsLock):
return self.idle_cond.wait_for(self._idle_check, timeout) is not False
def set_async_cmd(self, cmd):
with bb.utils.lock_timeout(self._idlefuncsLock):
ret = self.idle_cond.wait_for(self._idle_check, 30)
if ret is False:
return False
self.cooker.command.currentAsyncCommand = cmd
return True
def clear_async_cmd(self):
with bb.utils.lock_timeout(self._idlefuncsLock):
self.cooker.command.currentAsyncCommand = None
self.idle_cond.notify_all()
def get_async_cmd(self):
with bb.utils.lock_timeout(self._idlefuncsLock):
return self.cooker.command.currentAsyncCommand
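The four methods above coordinate through a single condition variable: a new async command is only accepted once the idle loop has drained, and clearing a command wakes any waiter. A minimal standalone sketch of that handshake (names and timeout are illustrative):

import threading

lock = threading.Lock()
idle_cond = threading.Condition(lock)
current_cmd = None

def set_cmd(cmd, timeout=30):
    # Claim the async-command slot only once the server is idle.
    global current_cmd
    with lock:
        if not idle_cond.wait_for(lambda: current_cmd is None, timeout):
            return False  # never went idle within the timeout
        current_cmd = cmd
        return True

def clear_cmd():
    global current_cmd
    with lock:
        current_cmd = None
        idle_cond.notify_all()  # wake anyone blocked in set_cmd

assert set_cmd("parseFiles")
clear_cmd()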
def main(self):
self.cooker.pre_serve()
@@ -193,19 +125,14 @@ class ProcessServer():
fds.append(self.xmlrpc)
seendata = False
serverlog("Entering server connection loop")
serverlog("Lockfile is: %s\nSocket is %s (%s)" % (self.bitbake_lock_name, self.sockname, os.path.exists(self.sockname)))
def disconnect_client(self, fds):
serverlog("Disconnecting Client (socket: %s)" % os.path.exists(self.sockname))
serverlog("Disconnecting Client")
if self.controllersock:
fds.remove(self.controllersock)
self.controllersock.close()
self.controllersock = False
if self.haveui:
# Wait for the idle loop to have cleared (30s max)
if not self.wait_for_idle(30):
serverlog("Idle loop didn't finish queued commands after 30s, exiting.")
self.quit = True
fds.remove(self.command_channel)
bb.event.unregister_UIHhandler(self.event_handle, True)
self.command_channel_reply.writer.close()
@@ -217,7 +144,7 @@ class ProcessServer():
self.cooker.clientComplete()
self.haveui = False
ready = select.select(fds,[],[],0)[0]
if newconnections and not self.quit:
if newconnections:
serverlog("Starting new client")
conn = newconnections.pop(-1)
fds.append(conn)
@@ -289,8 +216,8 @@ class ProcessServer():
continue
try:
serverlog("Running command %s" % command)
self.command_channel_reply.send(self.cooker.command.runCommand(command, self))
serverlog("Command Completed (socket: %s)" % os.path.exists(self.sockname))
self.command_channel_reply.send(self.cooker.command.runCommand(command))
serverlog("Command Completed")
except Exception as e:
stack = traceback.format_exc()
serverlog('Exception in server main event loop running command %s (%s)' % (command, stack))
@@ -317,25 +244,16 @@ class ProcessServer():
ready = self.idle_commands(.1, fds)
if self.idle:
self.idle.join()
serverlog("Exiting (socket: %s)" % os.path.exists(self.sockname))
serverlog("Exiting")
# Remove the socket file so we don't get any more connections, avoiding races
# The build directory could have been renamed, so if the file isn't the one we created
# we shouldn't delete it.
try:
sockinode = os.stat(self.sockname)[stat.ST_INO]
if sockinode == self.sockinode:
os.unlink(self.sockname)
else:
serverlog("bitbake.sock inode mismatch (%s vs %s), not deleting." % (sockinode, self.sockinode))
except Exception as err:
serverlog("Removing socket file '%s' failed (%s)" % (self.sockname, err))
os.unlink(self.sockname)
except:
pass
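A sketch of the inode comparison above in isolation: unlink the socket only if it is still the file this server created, so a renamed build directory (where a new server may have created its own bitbake.sock) is left alone. Paths are illustrative:

import os
import stat
import tempfile

sockname = os.path.join(tempfile.mkdtemp(), "bitbake.sock")
open(sockname, "w").close()                      # stand-in for the real socket
created_inode = os.stat(sockname)[stat.ST_INO]   # cached when the file is created

# ... later, at shutdown:
if os.stat(sockname)[stat.ST_INO] == created_inode:
    os.unlink(sockname)                          # still ours, safe to remove
else:
    print("bitbake.sock inode mismatch, not deleting")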
self.sock.close()
try:
self.cooker.shutdown(True, idle=False)
self.cooker.shutdown(True)
self.cooker.notifier.stop()
self.cooker.confignotifier.stop()
except:
@@ -361,21 +279,20 @@ class ProcessServer():
except FileNotFoundError:
return None
lockcontents = get_lock_contents(lockfile)
serverlog("Original lockfile contents: " + str(lockcontents))
lock.close()
lock = None
while not lock:
i = 0
lock = None
if not os.path.exists(os.path.basename(lockfile)):
serverlog("Lockfile directory gone, exiting.")
return
while not lock and i < 30:
lock = bb.utils.lockfile(lockfile, shared=False, retry=False, block=False)
if not lock:
newlockcontents = get_lock_contents(lockfile)
if not newlockcontents[0].startswith((str(os.getpid()) + "\n", str(os.getpid()) + " ")):
if newlockcontents != lockcontents:
# A new server was started, the lockfile contents changed, we can exit
serverlog("Lockfile now contains different contents, exiting: " + str(newlockcontents))
return
@@ -389,98 +306,80 @@ class ProcessServer():
return
if not lock:
procs = get_lockfile_process_msg(lockfile)
# Some systems may not have lsof available
procs = None
try:
procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
continue
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs is None:
# Fall back to fuser if lsof is unavailable
try:
procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
continue
except OSError as e:
if e.errno != errno.ENOENT:
raise
msg = ["Delaying shutdown due to active processes which appear to be holding bitbake.lock"]
if procs:
msg.append(":\n%s" % procs)
msg.append(":\n%s" % str(procs.decode("utf-8")))
serverlog("".join(msg))
def idle_thread(self):
def remove_idle_func(function):
with bb.utils.lock_timeout(self._idlefuncsLock):
del self._idlefuns[function]
self.idle_cond.notify_all()
while not self.quit:
nextsleep = 0.1
fds = []
try:
self.cooker.process_inotify_updates()
except Exception as exc:
serverlog("Exception %s in inofify updates broke the idle_thread, exiting" % traceback.format_exc())
self.quit = True
with bb.utils.lock_timeout(self._idlefuncsLock):
items = list(self._idlefuns.items())
for function, data in items:
try:
retval = function(self, data, False)
if isinstance(retval, idleFinish):
serverlog("Removing idle function %s at idleFinish" % str(function))
remove_idle_func(function)
self.cooker.command.finishAsyncCommand(retval.msg)
nextsleep = None
elif retval is False:
serverlog("Removing idle function %s" % str(function))
remove_idle_func(function)
nextsleep = None
elif retval is True:
nextsleep = None
elif isinstance(retval, float) and nextsleep:
if (retval < nextsleep):
nextsleep = retval
elif nextsleep is None:
continue
else:
fds = fds + retval
except SystemExit:
raise
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running idle function')
remove_idle_func(function)
serverlog("Exception %s broke the idle_thread, exiting" % traceback.format_exc())
self.quit = True
# Create new heartbeat event?
now = time.time()
if bb.event._heartbeat_enabled and now >= self.next_heartbeat:
# We might have missed heartbeats. Just trigger once in
# that case and continue after the usual delay.
self.next_heartbeat += self.heartbeat_seconds
if self.next_heartbeat <= now:
self.next_heartbeat = now + self.heartbeat_seconds
if hasattr(self.cooker, "data"):
heartbeat = bb.event.HeartbeatEvent(now)
try:
bb.event.fire(heartbeat, self.cooker.data)
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running heartbeat function')
serverlog("Exception %s broke in idle_thread, exiting" % traceback.format_exc())
self.quit = True
if nextsleep and bb.event._heartbeat_enabled and now + nextsleep > self.next_heartbeat:
# Shorten the timeout so that we wake up in time for
# the heartbeat.
nextsleep = self.next_heartbeat - now
if nextsleep is not None:
select.select(fds,[],[],nextsleep)[0]
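The heartbeat block above caps catch-up after missed beats: it advances the deadline by one period, and only if that is still in the past does it resynchronise to now, so at most one event fires per wakeup. As a toy function (period and clock values are illustrative):

import time

heartbeat_seconds = 1.0
next_heartbeat = time.time()

def maybe_heartbeat(now):
    # Fire at most one heartbeat, resynchronising if we fell behind.
    global next_heartbeat
    if now < next_heartbeat:
        return False
    next_heartbeat += heartbeat_seconds
    if next_heartbeat <= now:                 # more than one period behind
        next_heartbeat = now + heartbeat_seconds
    return True                               # caller fires one HeartbeatEvent

maybe_heartbeat(time.time() + 5)              # a late wakeup still fires once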
def idle_commands(self, delay, fds=None):
nextsleep = delay
if not fds:
fds = []
if not self.idle:
self.idle = threading.Thread(target=self.idle_thread)
self.idle.start()
elif self.idle and not self.idle.is_alive():
serverlog("Idle thread terminated, main thread exiting too")
bb.error("Idle thread terminated, main thread exiting too")
self.quit = True
for function, data in list(self._idlefuns.items()):
try:
retval = function(self, data, False)
if retval is False:
del self._idlefuns[function]
nextsleep = None
elif retval is True:
nextsleep = None
elif isinstance(retval, float) and nextsleep:
if (retval < nextsleep):
nextsleep = retval
elif nextsleep is None:
continue
else:
fds = fds + retval
except SystemExit:
raise
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running idle function')
del self._idlefuns[function]
self.quit = True
# Create new heartbeat event?
now = time.time()
if now >= self.next_heartbeat:
# We might have missed heartbeats. Just trigger once in
# that case and continue after the usual delay.
self.next_heartbeat += self.heartbeat_seconds
if self.next_heartbeat <= now:
self.next_heartbeat = now + self.heartbeat_seconds
if hasattr(self.cooker, "data"):
heartbeat = bb.event.HeartbeatEvent(now)
try:
bb.event.fire(heartbeat, self.cooker.data)
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running heartbeat function')
self.quit = True
if nextsleep and now + nextsleep > self.next_heartbeat:
# Shorten the timeout so that we wake up in time for
# the heartbeat.
nextsleep = self.next_heartbeat - now
if nextsleep is not None:
if self.xmlrpc:
@@ -549,14 +448,13 @@ start_log_datetime_format = '%Y-%m-%d %H:%M:%S.%f'
class BitBakeServer(object):
def __init__(self, lock, sockname, featureset, server_timeout, xmlrpcinterface, profile):
def __init__(self, lock, sockname, featureset, server_timeout, xmlrpcinterface):
self.server_timeout = server_timeout
self.xmlrpcinterface = xmlrpcinterface
self.featureset = featureset
self.sockname = sockname
self.bitbake_lock = lock
self.profile = profile
self.readypipe, self.readypipein = os.pipe()
# Place the log in the builddirectory alongside the lock file
@@ -620,9 +518,9 @@ class BitBakeServer(object):
os.set_inheritable(self.bitbake_lock.fileno(), True)
os.set_inheritable(self.readypipein, True)
serverscript = os.path.realpath(os.path.dirname(__file__) + "/../../../bin/bitbake-server")
os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(int(self.profile)), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface, profile):
def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface):
import bb.cookerdata
import bb.cooker
@@ -634,7 +532,6 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
# Create server control socket
if os.path.exists(sockname):
serverlog("WARNING: removing existing socket file '%s'" % sockname)
os.unlink(sockname)
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
@@ -651,8 +548,7 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
writer = ConnectionWriter(readypipeinfd)
try:
featureset = []
cooker = bb.cooker.BBCooker(featureset, server)
cooker.configuration.profile = profile
cooker = bb.cooker.BBCooker(featureset, server.register_idle_function)
except bb.BBHandledException:
return None
writer.send("r")
@@ -771,7 +667,7 @@ class BBUIEventQueue:
self.t.start()
def getEvent(self):
with bb.utils.lock_timeout(self.eventQueueLock):
with self.eventQueueLock:
if len(self.eventQueue) == 0:
return None
@@ -786,7 +682,7 @@ class BBUIEventQueue:
return self.getEvent()
def queue_event(self, event):
with bb.utils.lock_timeout(self.eventQueueLock):
with self.eventQueueLock:
self.eventQueue.append(event)
self.eventQueueNotify.set()
@@ -822,7 +718,7 @@ class ConnectionReader(object):
return self.reader.poll(timeout)
def get(self):
with bb.utils.lock_timeout(self.rlock):
with self.rlock:
res = self.reader.recv_bytes()
return multiprocessing.reduction.ForkingPickler.loads(res)
@@ -843,7 +739,7 @@ class ConnectionWriter(object):
def _send(self, obj):
gc.disable()
with bb.utils.lock_timeout(self.wlock):
with self.wlock:
self.writer.send_bytes(obj)
gc.enable()
@@ -856,7 +752,7 @@ class ConnectionWriter(object):
# pthread_sigmask block/unblock would be nice but doesn't work, https://bugs.python.org/issue47139
process = multiprocessing.current_process()
if process and hasattr(process, "queue_signals"):
with bb.utils.lock_timeout(process.signal_threadlock):
with process.signal_threadlock:
process.queue_signals = True
self._send(obj)
process.queue_signals = False

View File

@@ -118,7 +118,7 @@ class BitBakeXMLRPCServerCommands():
"""
Run a cooker command on the server
"""
return self.server.cooker.command.runCommand(command, self.server.parent, self.server.readonly)
return self.server.cooker.command.runCommand(command, self.server.readonly)
def getEventHandle(self):
return self.event_handle

View File

@@ -14,7 +14,6 @@ import bb.data
import difflib
import simplediff
import json
import types
import bb.compress.zstd
from bb.checksum import FileChecksumCache
from bb import runqueue
@@ -26,13 +25,13 @@ hashequiv_logger = logging.getLogger('BitBake.SigGen.HashEquiv')
class SetEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, set) or isinstance(obj, frozenset):
if isinstance(obj, set):
return dict(_set_object=list(sorted(obj)))
return json.JSONEncoder.default(self, obj)
def SetDecoder(dct):
if '_set_object' in dct:
return frozenset(dct['_set_object'])
return set(dct['_set_object'])
return dct
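Both variants above follow the same round-trip shape; here is a self-contained sketch of the newer one, which accepts frozenset on encode and returns frozenset on decode (payload values are invented):

import json

class SetEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, (set, frozenset)):
            return dict(_set_object=sorted(obj))  # stable, JSON-friendly form
        return json.JSONEncoder.default(self, obj)

def SetDecoder(dct):
    if '_set_object' in dct:
        return frozenset(dct['_set_object'])
    return dct

encoded = json.dumps({"deps": frozenset({"b", "a"})}, cls=SetEncoder)
# encoded == '{"deps": {"_set_object": ["a", "b"]}}'
assert json.loads(encoded, object_hook=SetDecoder)["deps"] == frozenset({"a", "b"})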
def init(d):
@@ -54,6 +53,11 @@ class SignatureGenerator(object):
"""
name = "noop"
# If the derived class supports multiconfig datacaches, set this to True
# The default is False for backward compatibility with derived signature
# generators that do not understand multiconfig caches
supports_multiconfig_datacaches = False
def __init__(self, data):
self.basehash = {}
self.taskhash = {}
@@ -71,27 +75,6 @@ class SignatureGenerator(object):
def postparsing_clean_cache(self):
return
def setup_datacache(self, datacaches):
self.datacaches = datacaches
def setup_datacache_from_datastore(self, mcfn, d):
# In task context we have no cache so setup internal data structures
# from the fully parsed data store provided
mc = d.getVar("__BBMULTICONFIG", False) or ""
tasks = d.getVar('__BBTASKS', False)
self.datacaches = {}
self.datacaches[mc] = types.SimpleNamespace()
setattr(self.datacaches[mc], "stamp", {})
self.datacaches[mc].stamp[mcfn] = d.getVar('STAMP')
setattr(self.datacaches[mc], "stamp_extrainfo", {})
self.datacaches[mc].stamp_extrainfo[mcfn] = {}
for t in tasks:
flag = d.getVarFlag(t, "stamp-extra-info")
if flag:
self.datacaches[mc].stamp_extrainfo[mcfn][t] = flag
def get_unihash(self, tid):
return self.taskhash[tid]
@@ -106,51 +89,17 @@ class SignatureGenerator(object):
"""Write/update the file checksum cache onto disk"""
return
def stampfile_base(self, mcfn):
mc = bb.runqueue.mc_from_tid(mcfn)
return self.datacaches[mc].stamp[mcfn]
def stampfile_mcfn(self, taskname, mcfn, extrainfo=True):
mc = bb.runqueue.mc_from_tid(mcfn)
stamp = self.datacaches[mc].stamp[mcfn]
if not stamp:
return
stamp_extrainfo = ""
if extrainfo:
taskflagname = taskname
if taskname.endswith("_setscene"):
taskflagname = taskname.replace("_setscene", "")
stamp_extrainfo = self.datacaches[mc].stamp_extrainfo[mcfn].get(taskflagname) or ""
return self.stampfile(stamp, mcfn, taskname, stamp_extrainfo)
def stampfile(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def stampcleanmask_mcfn(self, taskname, mcfn):
mc = bb.runqueue.mc_from_tid(mcfn)
stamp = self.datacaches[mc].stamp[mcfn]
if not stamp:
return []
taskflagname = taskname
if taskname.endswith("_setscene"):
taskflagname = taskname.replace("_setscene", "")
stamp_extrainfo = self.datacaches[mc].stamp_extrainfo[mcfn].get(taskflagname) or ""
return self.stampcleanmask(stamp, mcfn, taskname, stamp_extrainfo)
def stampcleanmask(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def dump_sigtask(self, mcfn, task, stampbase, runtime):
def dump_sigtask(self, fn, task, stampbase, runtime):
return
def invalidate_task(self, task, mcfn):
mc = bb.runqueue.mc_from_tid(mcfn)
stamp = self.datacaches[mc].stamp[mcfn]
bb.utils.remove(stamp)
def invalidate_task(self, task, d, fn):
bb.build.del_stamp(task, d, fn)
def dump_sigs(self, dataCache, options):
return
@@ -179,6 +128,38 @@ class SignatureGenerator(object):
def set_setscene_tasks(self, setscene_tasks):
return
@classmethod
def get_data_caches(cls, dataCaches, mc):
"""
This function returns the datacaches that should be passed to signature
generator functions. If the signature generator supports multiconfig
caches, the entire dictionary of data caches is sent; otherwise a
special proxy is sent that supports both index access to all
multiconfigs and direct attribute access for the default multiconfig.
The proxy class allows code in this class itself to always use
multiconfig-aware code (to ease maintenance), but derived classes that
are unaware of multiconfig data caches can still access the default
multiconfig as expected.
Do not override this function in derived classes; it will be removed in
the future when support for multiconfig data caches is mandatory
"""
class DataCacheProxy(object):
def __init__(self):
pass
def __getitem__(self, key):
return dataCaches[key]
def __getattr__(self, name):
return getattr(dataCaches[mc], name)
if cls.supports_multiconfig_datacaches:
return dataCaches
return DataCacheProxy()
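An isolated illustration of the proxy behaviour described in the docstring (the cache contents here are invented): attribute access resolves against the default multiconfig entry, while index access still reaches any multiconfig explicitly.

import types

dataCaches = {
    "":    types.SimpleNamespace(stamp={"recipe.bb": "/stamps/recipe"}),
    "mc1": types.SimpleNamespace(stamp={"recipe.bb": "/mc1/stamps/recipe"}),
}
mc = ""

class DataCacheProxy(object):
    def __getitem__(self, key):
        return dataCaches[key]                # explicit multiconfig access
    def __getattr__(self, name):
        return getattr(dataCaches[mc], name)  # legacy single-config access

proxy = DataCacheProxy()
assert proxy.stamp["recipe.bb"] == "/stamps/recipe"
assert proxy["mc1"].stamp["recipe.bb"] == "/mc1/stamps/recipe"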
def exit(self):
return
@@ -191,9 +172,12 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.basehash = {}
self.taskhash = {}
self.unihash = {}
self.taskdeps = {}
self.runtaskdeps = {}
self.file_checksum_values = {}
self.taints = {}
self.gendeps = {}
self.lookupcache = {}
self.setscenetasks = set()
self.basehash_ignore_vars = set((data.getVar("BB_BASEHASH_IGNORE_VARS") or "").split())
self.taskhash_ignore_tasks = None
@@ -217,15 +201,15 @@ class SignatureGeneratorBasic(SignatureGenerator):
else:
self.twl = None
def _build_data(self, mcfn, d):
def _build_data(self, fn, d):
ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1')
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d, self.basehash_ignore_vars)
taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basehash_ignore_vars, mcfn)
taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basehash_ignore_vars, fn)
for task in tasklist:
tid = mcfn + ":" + task
tid = fn + ":" + task
if not ignore_mismatch and tid in self.basehash and self.basehash[tid] != basehash[tid]:
bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (tid, self.basehash[tid], basehash[tid]))
bb.error("The following commands may help:")
@@ -236,7 +220,11 @@ class SignatureGeneratorBasic(SignatureGenerator):
bb.error("%s -Sprintdiff\n" % cmd)
self.basehash[tid] = basehash[tid]
return taskdeps, gendeps, lookupcache
self.taskdeps[fn] = taskdeps
self.gendeps[fn] = gendeps
self.lookupcache[fn] = lookupcache
return taskdeps
def set_setscene_tasks(self, setscene_tasks):
self.setscenetasks = set(setscene_tasks)
@@ -244,41 +232,31 @@ class SignatureGeneratorBasic(SignatureGenerator):
def finalise(self, fn, d, variant):
mc = d.getVar("__BBMULTICONFIG", False) or ""
mcfn = fn
if variant or mc:
mcfn = bb.cache.realfn2virtual(fn, variant, mc)
fn = bb.cache.realfn2virtual(fn, variant, mc)
try:
taskdeps, gendeps, lookupcache = self._build_data(mcfn, d)
taskdeps = self._build_data(fn, d)
except bb.parse.SkipRecipe:
raise
except:
bb.warn("Error during finalise of %s" % mcfn)
bb.warn("Error during finalise of %s" % fn)
raise
#Slow but can be useful for debugging mismatched basehashes
#for task in self.taskdeps[mcfn]:
# self.dump_sigtask(mcfn, task, d.getVar("STAMP"), False)
#for task in self.taskdeps[fn]:
# self.dump_sigtask(fn, task, d.getVar("STAMP"), False)
basehashes = {}
for task in taskdeps:
basehashes[task] = self.basehash[mcfn + ":" + task]
d.setVar("BB_BASEHASH:task-%s" % task, self.basehash[fn + ":" + task])
d.setVar("__siggen_basehashes", basehashes)
d.setVar("__siggen_gendeps", gendeps)
d.setVar("__siggen_varvals", lookupcache)
d.setVar("__siggen_taskdeps", taskdeps)
def setup_datacache_from_datastore(self, mcfn, d):
super().setup_datacache_from_datastore(mcfn, d)
mc = bb.runqueue.mc_from_tid(mcfn)
for attr in ["siggen_varvals", "siggen_taskdeps", "siggen_gendeps"]:
if not hasattr(self.datacaches[mc], attr):
setattr(self.datacaches[mc], attr, {})
self.datacaches[mc].siggen_varvals[mcfn] = d.getVar("__siggen_varvals")
self.datacaches[mc].siggen_taskdeps[mcfn] = d.getVar("__siggen_taskdeps")
self.datacaches[mc].siggen_gendeps[mcfn] = d.getVar("__siggen_gendeps")
def postparsing_clean_cache(self):
#
# After parsing we can remove some things from memory to reduce our memory footprint
#
self.gendeps = {}
self.lookupcache = {}
self.taskdeps = {}
def rundep_check(self, fn, recipename, task, dep, depname, dataCaches):
# Return True if we should keep the dependency, False to drop it
@@ -301,33 +279,38 @@ class SignatureGeneratorBasic(SignatureGenerator):
def prep_taskhash(self, tid, deps, dataCaches):
(mc, _, task, mcfn) = bb.runqueue.split_tid_mcfn(tid)
(mc, _, task, fn) = bb.runqueue.split_tid_mcfn(tid)
self.basehash[tid] = dataCaches[mc].basetaskhash[tid]
self.runtaskdeps[tid] = []
self.file_checksum_values[tid] = []
recipename = dataCaches[mc].pkg_fn[mcfn]
recipename = dataCaches[mc].pkg_fn[fn]
self.tidtopn[tid] = recipename
for dep in sorted(deps, key=clean_basepath):
(depmc, _, _, depmcfn) = bb.runqueue.split_tid_mcfn(dep)
depname = dataCaches[depmc].pkg_fn[depmcfn]
if not self.rundep_check(mcfn, recipename, task, dep, depname, dataCaches):
if not self.supports_multiconfig_datacaches and mc != depmc:
# If the signature generator doesn't understand multiconfig
# data caches, any dependency not in the same multiconfig must
# be skipped for backward compatibility
continue
if not self.rundep_check(fn, recipename, task, dep, depname, dataCaches):
continue
if dep not in self.taskhash:
bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?" % dep)
self.runtaskdeps[tid].append(dep)
if task in dataCaches[mc].file_checksums[mcfn]:
if task in dataCaches[mc].file_checksums[fn]:
if self.checksum_cache:
checksums = self.checksum_cache.get_checksums(dataCaches[mc].file_checksums[mcfn][task], recipename, self.localdirsexclude)
checksums = self.checksum_cache.get_checksums(dataCaches[mc].file_checksums[fn][task], recipename, self.localdirsexclude)
else:
checksums = bb.fetch2.get_file_checksums(dataCaches[mc].file_checksums[mcfn][task], recipename, self.localdirsexclude)
checksums = bb.fetch2.get_file_checksums(dataCaches[mc].file_checksums[fn][task], recipename, self.localdirsexclude)
for (f,cs) in checksums:
self.file_checksum_values[tid].append((f,cs))
taskdep = dataCaches[mc].task_deps[mcfn]
taskdep = dataCaches[mc].task_deps[fn]
if 'nostamp' in taskdep and task in taskdep['nostamp']:
# Nostamp tasks need an implicit taint so that they force any dependent tasks to run
if tid in self.taints and self.taints[tid].startswith("nostamp:"):
@@ -338,7 +321,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
taint = str(uuid.uuid4())
self.taints[tid] = "nostamp:" + taint
taint = self.read_taint(mcfn, task, dataCaches[mc].stamp[mcfn])
taint = self.read_taint(fn, task, dataCaches[mc].stamp[fn])
if taint:
self.taints[tid] = taint
logger.warning("%s is tainted from a forced run" % tid)
@@ -349,19 +332,19 @@ class SignatureGeneratorBasic(SignatureGenerator):
data = self.basehash[tid]
for dep in self.runtaskdeps[tid]:
data += self.get_unihash(dep)
data = data + self.get_unihash(dep)
for (f, cs) in self.file_checksum_values[tid]:
if cs:
if "/./" in f:
data += "./" + f.split("/./")[1]
data += cs
data = data + "./" + f.split("/./")[1]
data = data + cs
if tid in self.taints:
if self.taints[tid].startswith("nostamp:"):
data += self.taints[tid][8:]
data = data + self.taints[tid][8:]
else:
data += self.taints[tid]
data = data + self.taints[tid]
h = hashlib.sha256(data.encode("utf-8")).hexdigest()
self.taskhash[tid] = h
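get_taskhash above composes the hash from the base hash, each dependency's unihash in order, the tracked file checksums (paths stored relative to any "/./" marker), and any taint, then takes sha256. A standalone sketch with invented inputs:

import hashlib

basehash = "aabb"                               # invented base hash
dep_unihashes = ["1111", "2222"]                # dependency unihashes, in order
file_checksums = [("/work/./src/main.c", "ccdd")]
taint = "nostamp:1234"                          # or None

data = basehash
for dep in dep_unihashes:
    data += dep
for f, cs in file_checksums:
    if cs:
        if "/./" in f:
            data += "./" + f.split("/./")[1]    # path relative to the marker
        data += cs
if taint:
    data += taint[8:] if taint.startswith("nostamp:") else taint

print(hashlib.sha256(data.encode("utf-8")).hexdigest())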
@@ -383,9 +366,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
def copy_unitaskhashes(self, targetdir):
self.unihash_cache.copyfile(targetdir)
def dump_sigtask(self, mcfn, task, stampbase, runtime):
tid = mcfn + ":" + task
mc = bb.runqueue.mc_from_tid(mcfn)
def dump_sigtask(self, fn, task, stampbase, runtime):
tid = fn + ":" + task
referencestamp = stampbase
if isinstance(runtime, str) and runtime.startswith("customfile"):
sigfile = stampbase
@@ -402,16 +385,16 @@ class SignatureGeneratorBasic(SignatureGenerator):
data['task'] = task
data['basehash_ignore_vars'] = self.basehash_ignore_vars
data['taskhash_ignore_tasks'] = self.taskhash_ignore_tasks
data['taskdeps'] = self.datacaches[mc].siggen_taskdeps[mcfn][task]
data['taskdeps'] = self.taskdeps[fn][task]
data['basehash'] = self.basehash[tid]
data['gendeps'] = {}
data['varvals'] = {}
data['varvals'][task] = self.datacaches[mc].siggen_varvals[mcfn][task]
for dep in self.datacaches[mc].siggen_taskdeps[mcfn][task]:
data['varvals'][task] = self.lookupcache[fn][task]
for dep in self.taskdeps[fn][task]:
if dep in self.basehash_ignore_vars:
continue
data['gendeps'][dep] = self.datacaches[mc].siggen_gendeps[mcfn][dep]
data['varvals'][dep] = self.datacaches[mc].siggen_varvals[mcfn][dep]
continue
data['gendeps'][dep] = self.gendeps[fn][dep]
data['varvals'][dep] = self.lookupcache[fn][dep]
if runtime and tid in self.taskhash:
data['runtaskdeps'] = self.runtaskdeps[tid]
@@ -427,7 +410,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
data['taskhash'] = self.taskhash[tid]
data['unihash'] = self.get_unihash(tid)
taint = self.read_taint(mcfn, task, referencestamp)
taint = self.read_taint(fn, task, referencestamp)
if taint:
data['taint'] = taint
@@ -458,6 +441,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
pass
raise err
def dump_sigfn(self, fn, dataCaches, options):
if fn in self.taskdeps:
for task in self.taskdeps[fn]:
tid = fn + ":" + task
mc = bb.runqueue.mc_from_tid(tid)
if tid not in self.taskhash:
continue
if dataCaches[mc].basetaskhash[tid] != self.basehash[tid]:
bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % tid)
bb.error("The mismatched hashes were %s and %s" % (dataCaches[mc].basetaskhash[tid], self.basehash[tid]))
self.dump_sigtask(fn, task, dataCaches[mc].stamp[fn], True)
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
@@ -468,11 +463,11 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
# If task is not in basehash, then error
return self.basehash[tid]
def stampfile(self, stampbase, mcfn, taskname, extrainfo, clean=False):
if taskname.endswith("_setscene"):
tid = mcfn + ":" + taskname[:-9]
def stampfile(self, stampbase, fn, taskname, extrainfo, clean=False):
if taskname != "do_setscene" and taskname.endswith("_setscene"):
tid = fn + ":" + taskname[:-9]
else:
tid = mcfn + ":" + taskname
tid = fn + ":" + taskname
if clean:
h = "*"
else:
@@ -480,23 +475,12 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
return ("%s.%s.%s.%s" % (stampbase, taskname, h, extrainfo)).rstrip('.')
def stampcleanmask(self, stampbase, mcfn, taskname, extrainfo):
return self.stampfile(stampbase, mcfn, taskname, extrainfo, clean=True)
def stampcleanmask(self, stampbase, fn, taskname, extrainfo):
return self.stampfile(stampbase, fn, taskname, extrainfo, clean=True)
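Putting the two methods above together on invented values: the basichash generator embeds the task hash in the stamp name, and the clean mask substitutes "*" so stale stamps can be matched by glob.

stampbase, taskname, extrainfo = "/stamps/recipe", "do_compile", ""
h = "3f9c0a"                                    # invented task hash
stamp = ("%s.%s.%s.%s" % (stampbase, taskname, h, extrainfo)).rstrip('.')
mask  = ("%s.%s.%s.%s" % (stampbase, taskname, "*", extrainfo)).rstrip('.')
assert stamp == "/stamps/recipe.do_compile.3f9c0a"
assert mask  == "/stamps/recipe.do_compile.*"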
def invalidate_task(self, task, mcfn):
bb.note("Tainting hash to force rebuild of task %s, %s" % (mcfn, task))
mc = bb.runqueue.mc_from_tid(mcfn)
stamp = self.datacaches[mc].stamp[mcfn]
taintfn = stamp + '.' + task + '.taint'
import uuid
bb.utils.mkdirhier(os.path.dirname(taintfn))
# The specific content of the taint file is not really important;
# we just need it to be random, so a random UUID is used
with open(taintfn, 'w') as taintf:
taintf.write(str(uuid.uuid4()))
def invalidate_task(self, task, d, fn):
bb.note("Tainting hash to force rebuild of task %s, %s" % (fn, task))
bb.build.write_taint(task, d, fn)
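Both invalidate_task variants above reduce to the same mechanism: write a "taint" file with unique content next to the stamp so the task hash changes and the task re-runs. A minimal sketch with an invented stamp path:

import os
import uuid

def taint_task(stamp, task):
    taintfn = "%s.%s.taint" % (stamp, task)
    os.makedirs(os.path.dirname(taintfn), exist_ok=True)
    with open(taintfn, "w") as taintf:
        taintf.write(str(uuid.uuid4()))  # content just needs to be unique

taint_task("/tmp/stamps/recipe", "do_compile")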
class SignatureGeneratorUniHashMixIn(object):
def __init__(self, data):
@@ -598,7 +582,7 @@ class SignatureGeneratorUniHashMixIn(object):
# A unique hash equal to the taskhash is not very interesting,
# so it is reported at debug level 2. If they differ, that
# is much more interesting, so it is reported at debug level 1
hashequiv_logger.bbdebug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
hashequiv_logger.debug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
else:
hashequiv_logger.debug2('No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
except ConnectionError as e:
@@ -615,8 +599,8 @@ class SignatureGeneratorUniHashMixIn(object):
unihash = d.getVar('BB_UNIHASH')
report_taskdata = d.getVar('SSTATE_HASHEQUIV_REPORT_TASKDATA') == '1'
tempdir = d.getVar('T')
mcfn = d.getVar('BB_FILENAME')
tid = mcfn + ':do_' + task
fn = d.getVar('BB_FILENAME')
tid = fn + ':do_' + task
key = tid + ':' + taskhash
if self.setscenetasks and tid not in self.setscenetasks:
@@ -675,7 +659,7 @@ class SignatureGeneratorUniHashMixIn(object):
if new_unihash != unihash:
hashequiv_logger.debug('Task %s unihash changed %s -> %s by server %s' % (taskhash, unihash, new_unihash, self.server))
bb.event.fire(bb.runqueue.taskUniHashUpdate(mcfn + ':do_' + task, new_unihash), d)
bb.event.fire(bb.runqueue.taskUniHashUpdate(fn + ':do_' + task, new_unihash), d)
self.set_unihash(tid, new_unihash)
d.setVar('BB_UNIHASH', new_unihash)
else:
@@ -735,12 +719,19 @@ class SignatureGeneratorTestEquivHash(SignatureGeneratorUniHashMixIn, SignatureG
self.server = data.getVar('BB_HASHSERVE')
self.method = "sstate_output_hash"
#
# Dummy class used for bitbake-selftest
#
class SignatureGeneratorTestMulticonfigDepends(SignatureGeneratorBasicHash):
name = "TestMulticonfigDepends"
supports_multiconfig_datacaches = True
def dump_this_task(outfile, d):
import bb.parse
mcfn = d.getVar("BB_FILENAME")
fn = d.getVar("BB_FILENAME")
task = "do_" + d.getVar("BB_CURRENTTASK")
referencestamp = bb.parse.siggen.stampfile_base(mcfn)
bb.parse.siggen.dump_sigtask(mcfn, task, outfile, "customfile:" + referencestamp)
referencestamp = bb.build.stamp_internal(task, d, None, True)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile:" + referencestamp)
def init_colors(enable_color):
"""Initialise colour dict for passing to compare_sigfiles()"""
@@ -1065,7 +1056,7 @@ def calc_basehash(sigdata):
basedata = ''
alldeps = sigdata['taskdeps']
for dep in sorted(alldeps):
for dep in alldeps:
basedata = basedata + dep
val = sigdata['varvals'][dep]
if val is not None:

View File

@@ -318,7 +318,7 @@ d.getVar(a(), False)
"filename": "example.bb",
})
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
@@ -365,7 +365,7 @@ esac
self.d.setVarFlags("FOO", {"func": True})
self.setEmptyVars(execs)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "inverted"] + execs))
@@ -375,7 +375,7 @@ esac
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "oe_libinstall")
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
@@ -384,7 +384,7 @@ esac
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "${@'oe_libinstall'}")
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
@@ -399,7 +399,7 @@ esac
# Check dependencies
self.d.setVar('ANOTHERVAR', expr)
self.d.setVar('TESTVAR', 'anothervalue testval testval2')
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(sorted(values.splitlines()),
sorted([expr,
'TESTVAR{anothervalue} = Set',
@@ -418,14 +418,14 @@ esac
self.d.setVar('ANOTHERVAR', varval)
self.d.setVar('TESTVAR', 'anothervalue testval testval2')
self.d.setVar('TESTVAR2', 'testval3')
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), set(["TESTVAR"]), self.d, self.d)
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(["TESTVAR"]), self.d)
self.assertEqual(sorted(values.splitlines()), sorted([varval]))
self.assertEqual(deps, set(["TESTVAR2"]))
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])
# Check that the vardepsexclude flag is handled by the contains functionality
self.d.setVarFlag('ANOTHERVAR', 'vardepsexclude', 'TESTVAR')
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(sorted(values.splitlines()), sorted([varval]))
self.assertEqual(deps, set(["TESTVAR2"]))
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])

View File

@@ -20,7 +20,7 @@ class ProgressWatcher:
def __init__(self):
self._reports = []
def handle_event(self, event, d):
def handle_event(self, event):
self._reports.append((event.progress, event.rate))
def reports(self):

View File

@@ -60,15 +60,6 @@ class DataExpansions(unittest.TestCase):
val = self.d.expand("${@5*12}")
self.assertEqual(str(val), "60")
def test_python_snippet_w_dict(self):
val = self.d.expand("${@{ 'green': 1, 'blue': 2 }['green']}")
self.assertEqual(str(val), "1")
def test_python_unexpanded_multi(self):
self.d.setVar("bar", "${unsetvar}")
val = self.d.expand("${@2*2},${foo},${@d.getVar('foo') + ' ${bar}'},${foo}")
self.assertEqual(str(val), "4,value_of_foo,${@d.getVar('foo') + ' ${unsetvar}'},value_of_foo")
def test_expand_in_python_snippet(self):
val = self.d.expand("${@'boo ' + '${foo}'}")
self.assertEqual(str(val), "boo value_of_foo")

View File

@@ -157,7 +157,7 @@ class EventHandlingTest(unittest.TestCase):
self._test_process.event_handler,
event,
None)
self._test_process.event_handler.assert_called_once_with(event, None)
self._test_process.event_handler.assert_called_once_with(event)
def test_fire_class_handlers(self):
""" Test fire_class_handlers method """
@@ -175,10 +175,10 @@ class EventHandlingTest(unittest.TestCase):
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
bb.event.fire_class_handlers(event2, None)
expected_event_handler1 = [call(event1, None)]
expected_event_handler2 = [call(event1, None),
call(event2, None),
call(event2, None)]
expected_event_handler1 = [call(event1)]
expected_event_handler2 = [call(event1),
call(event2),
call(event2)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected_event_handler1)
self.assertEqual(self._test_process.event_handler2.call_args_list,
@@ -205,7 +205,7 @@ class EventHandlingTest(unittest.TestCase):
bb.event.fire_class_handlers(event2, None)
bb.event.fire_class_handlers(event2, None)
expected_event_handler1 = []
expected_event_handler2 = [call(event1, None)]
expected_event_handler2 = [call(event1)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected_event_handler1)
self.assertEqual(self._test_process.event_handler2.call_args_list,
@@ -223,7 +223,7 @@ class EventHandlingTest(unittest.TestCase):
self.assertEqual(result, bb.event.Registered)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
expected = [call(event1, None), call(event2, None)]
expected = [call(event1), call(event2)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
@@ -237,7 +237,7 @@ class EventHandlingTest(unittest.TestCase):
self.assertEqual(result, bb.event.Registered)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
expected = [call(event1, None), call(event2, None), call(event1, None)]
expected = [call(event1), call(event2), call(event1)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
@@ -251,7 +251,7 @@ class EventHandlingTest(unittest.TestCase):
self.assertEqual(result, bb.event.Registered)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
expected = [call(event1, None), call(event2, None), call(event1, None), call(event2, None)]
expected = [call(event1), call(event2), call(event1), call(event2)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
@@ -359,10 +359,9 @@ class EventHandlingTest(unittest.TestCase):
event1 = bb.event.ConfigParsed()
bb.event.fire(event1, None)
expected = [call(event1, None)]
expected = [call(event1)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
expected = [call(event1)]
self.assertEqual(self._test_ui1.event.send.call_args_list,
expected)
@@ -451,9 +450,10 @@ class EventHandlingTest(unittest.TestCase):
and disable threadlocks tests """
bb.event.fire(bb.event.OperationStarted(), None)
def test_event_threadlock(self):
def test_enable_threadlock(self):
""" Test enable_threadlock method """
self._set_threadlock_test_mockups()
bb.event.enable_threadlock()
self._set_and_run_threadlock_test_workers()
# Calls to UI handlers should be in order as all the registered
# handlers for the event coming from the first worker should be
@@ -461,6 +461,20 @@ class EventHandlingTest(unittest.TestCase):
self.assertEqual(self._threadlock_test_calls,
["w1_ui1", "w1_ui2", "w2_ui1", "w2_ui2"])
def test_disable_threadlock(self):
""" Test disable_threadlock method """
self._set_threadlock_test_mockups()
bb.event.disable_threadlock()
self._set_and_run_threadlock_test_workers()
# Calls to UI handlers should be intertwined together. Thanks to the
# delay in the registered handlers for the event coming from the first
# worker, the event coming from the second worker starts being
# processed before finishing handling the first worker event.
self.assertEqual(self._threadlock_test_calls,
["w1_ui1", "w2_ui1", "w1_ui2", "w2_ui2"])
class EventClassesTest(unittest.TestCase):
""" Event classes test class """

View File

@@ -1,20 +0,0 @@
<!DOCTYPE html><html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"><meta name="viewport" content="width=device-width"><style type="text/css">body,html {background:#fff;font-family:"Bitstream Vera Sans","Lucida Grande","Lucida Sans Unicode",Lucidux,Verdana,Lucida,sans-serif;}tr:nth-child(even) {background:#f4f4f4;}th,td {padding:0.1em 0.5em;}th {text-align:left;font-weight:bold;background:#eee;border-bottom:1px solid #aaa;}#list {border:1px solid #aaa;width:100%;}a {color:#a33;}a:hover {color:#e33;}</style>
<title>Index of /sources/libxml2/2.10/</title>
</head><body><h1>Index of /sources/libxml2/2.10/</h1>
<table id="list"><thead><tr><th style="width:55%"><a href="?C=N&amp;O=A">File Name</a>&nbsp;<a href="?C=N&amp;O=D">&nbsp;&darr;&nbsp;</a></th><th style="width:20%"><a href="?C=S&amp;O=A">File Size</a>&nbsp;<a href="?C=S&amp;O=D">&nbsp;&darr;&nbsp;</a></th><th style="width:25%"><a href="?C=M&amp;O=A">Date</a>&nbsp;<a href="?C=M&amp;O=D">&nbsp;&darr;&nbsp;</a></th></tr></thead>
<tbody><tr><td class="link"><a href="../">Parent directory/</a></td><td class="size">-</td><td class="date">-</td></tr>
<tr><td class="link"><a href="LATEST-IS-2.10.3" title="LATEST-IS-2.10.3">LATEST-IS-2.10.3</a></td><td class="size">2.5 MiB</td><td class="date">2022-Oct-14 12:55</td></tr>
<tr><td class="link"><a href="libxml2-2.10.0.news" title="libxml2-2.10.0.news">libxml2-2.10.0.news</a></td><td class="size">7.1 KiB</td><td class="date">2022-Aug-17 11:55</td></tr>
<tr><td class="link"><a href="libxml2-2.10.0.sha256sum" title="libxml2-2.10.0.sha256sum">libxml2-2.10.0.sha256sum</a></td><td class="size">174 B</td><td class="date">2022-Aug-17 11:55</td></tr>
<tr><td class="link"><a href="libxml2-2.10.0.tar.xz" title="libxml2-2.10.0.tar.xz">libxml2-2.10.0.tar.xz</a></td><td class="size">2.6 MiB</td><td class="date">2022-Aug-17 11:55</td></tr>
<tr><td class="link"><a href="libxml2-2.10.1.news" title="libxml2-2.10.1.news">libxml2-2.10.1.news</a></td><td class="size">455 B</td><td class="date">2022-Aug-25 11:33</td></tr>
<tr><td class="link"><a href="libxml2-2.10.1.sha256sum" title="libxml2-2.10.1.sha256sum">libxml2-2.10.1.sha256sum</a></td><td class="size">174 B</td><td class="date">2022-Aug-25 11:33</td></tr>
<tr><td class="link"><a href="libxml2-2.10.1.tar.xz" title="libxml2-2.10.1.tar.xz">libxml2-2.10.1.tar.xz</a></td><td class="size">2.6 MiB</td><td class="date">2022-Aug-25 11:33</td></tr>
<tr><td class="link"><a href="libxml2-2.10.2.news" title="libxml2-2.10.2.news">libxml2-2.10.2.news</a></td><td class="size">309 B</td><td class="date">2022-Aug-29 14:56</td></tr>
<tr><td class="link"><a href="libxml2-2.10.2.sha256sum" title="libxml2-2.10.2.sha256sum">libxml2-2.10.2.sha256sum</a></td><td class="size">174 B</td><td class="date">2022-Aug-29 14:56</td></tr>
<tr><td class="link"><a href="libxml2-2.10.2.tar.xz" title="libxml2-2.10.2.tar.xz">libxml2-2.10.2.tar.xz</a></td><td class="size">2.5 MiB</td><td class="date">2022-Aug-29 14:56</td></tr>
<tr><td class="link"><a href="libxml2-2.10.3.news" title="libxml2-2.10.3.news">libxml2-2.10.3.news</a></td><td class="size">294 B</td><td class="date">2022-Oct-14 12:55</td></tr>
<tr><td class="link"><a href="libxml2-2.10.3.sha256sum" title="libxml2-2.10.3.sha256sum">libxml2-2.10.3.sha256sum</a></td><td class="size">174 B</td><td class="date">2022-Oct-14 12:55</td></tr>
<tr><td class="link"><a href="libxml2-2.10.3.tar.xz" title="libxml2-2.10.3.tar.xz">libxml2-2.10.3.tar.xz</a></td><td class="size">2.5 MiB</td><td class="date">2022-Oct-14 12:55</td></tr>
</tbody></table></body></html>

View File

@@ -1,40 +0,0 @@
<!DOCTYPE html><html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"><meta name="viewport" content="width=device-width"><style type="text/css">body,html {background:#fff;font-family:"Bitstream Vera Sans","Lucida Grande","Lucida Sans Unicode",Lucidux,Verdana,Lucida,sans-serif;}tr:nth-child(even) {background:#f4f4f4;}th,td {padding:0.1em 0.5em;}th {text-align:left;font-weight:bold;background:#eee;border-bottom:1px solid #aaa;}#list {border:1px solid #aaa;width:100%;}a {color:#a33;}a:hover {color:#e33;}</style>
<title>Index of /sources/libxml2/2.9/</title>
</head><body><h1>Index of /sources/libxml2/2.9/</h1>
<table id="list"><thead><tr><th style="width:55%"><a href="?C=N&amp;O=A">File Name</a>&nbsp;<a href="?C=N&amp;O=D">&nbsp;&darr;&nbsp;</a></th><th style="width:20%"><a href="?C=S&amp;O=A">File Size</a>&nbsp;<a href="?C=S&amp;O=D">&nbsp;&darr;&nbsp;</a></th><th style="width:25%"><a href="?C=M&amp;O=A">Date</a>&nbsp;<a href="?C=M&amp;O=D">&nbsp;&darr;&nbsp;</a></th></tr></thead>
<tbody><tr><td class="link"><a href="../">Parent directory/</a></td><td class="size">-</td><td class="date">-</td></tr>
<tr><td class="link"><a href="LATEST-IS-2.9.14" title="LATEST-IS-2.9.14">LATEST-IS-2.9.14</a></td><td class="size">3.0 MiB</td><td class="date">2022-May-02 12:03</td></tr>
<tr><td class="link"><a href="libxml2-2.9.0.sha256sum" title="libxml2-2.9.0.sha256sum">libxml2-2.9.0.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:27</td></tr>
<tr><td class="link"><a href="libxml2-2.9.0.tar.xz" title="libxml2-2.9.0.tar.xz">libxml2-2.9.0.tar.xz</a></td><td class="size">3.0 MiB</td><td class="date">2022-Feb-14 18:27</td></tr>
<tr><td class="link"><a href="libxml2-2.9.1.sha256sum" title="libxml2-2.9.1.sha256sum">libxml2-2.9.1.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:28</td></tr>
<tr><td class="link"><a href="libxml2-2.9.1.tar.xz" title="libxml2-2.9.1.tar.xz">libxml2-2.9.1.tar.xz</a></td><td class="size">3.0 MiB</td><td class="date">2022-Feb-14 18:28</td></tr>
<tr><td class="link"><a href="libxml2-2.9.10.sha256sum" title="libxml2-2.9.10.sha256sum">libxml2-2.9.10.sha256sum</a></td><td class="size">88 B</td><td class="date">2022-Feb-14 18:42</td></tr>
<tr><td class="link"><a href="libxml2-2.9.10.tar.xz" title="libxml2-2.9.10.tar.xz">libxml2-2.9.10.tar.xz</a></td><td class="size">3.2 MiB</td><td class="date">2022-Feb-14 18:42</td></tr>
<tr><td class="link"><a href="libxml2-2.9.11.sha256sum" title="libxml2-2.9.11.sha256sum">libxml2-2.9.11.sha256sum</a></td><td class="size">88 B</td><td class="date">2022-Feb-14 18:43</td></tr>
<tr><td class="link"><a href="libxml2-2.9.11.tar.xz" title="libxml2-2.9.11.tar.xz">libxml2-2.9.11.tar.xz</a></td><td class="size">3.2 MiB</td><td class="date">2022-Feb-14 18:43</td></tr>
<tr><td class="link"><a href="libxml2-2.9.12.sha256sum" title="libxml2-2.9.12.sha256sum">libxml2-2.9.12.sha256sum</a></td><td class="size">88 B</td><td class="date">2022-Feb-14 18:45</td></tr>
<tr><td class="link"><a href="libxml2-2.9.12.tar.xz" title="libxml2-2.9.12.tar.xz">libxml2-2.9.12.tar.xz</a></td><td class="size">3.2 MiB</td><td class="date">2022-Feb-14 18:45</td></tr>
<tr><td class="link"><a href="libxml2-2.9.13.news" title="libxml2-2.9.13.news">libxml2-2.9.13.news</a></td><td class="size">26.6 KiB</td><td class="date">2022-Feb-20 12:42</td></tr>
<tr><td class="link"><a href="libxml2-2.9.13.sha256sum" title="libxml2-2.9.13.sha256sum">libxml2-2.9.13.sha256sum</a></td><td class="size">174 B</td><td class="date">2022-Feb-20 12:42</td></tr>
<tr><td class="link"><a href="libxml2-2.9.13.tar.xz" title="libxml2-2.9.13.tar.xz">libxml2-2.9.13.tar.xz</a></td><td class="size">3.1 MiB</td><td class="date">2022-Feb-20 12:42</td></tr>
<tr><td class="link"><a href="libxml2-2.9.14.news" title="libxml2-2.9.14.news">libxml2-2.9.14.news</a></td><td class="size">1.0 KiB</td><td class="date">2022-May-02 12:03</td></tr>
<tr><td class="link"><a href="libxml2-2.9.14.sha256sum" title="libxml2-2.9.14.sha256sum">libxml2-2.9.14.sha256sum</a></td><td class="size">174 B</td><td class="date">2022-May-02 12:03</td></tr>
<tr><td class="link"><a href="libxml2-2.9.14.tar.xz" title="libxml2-2.9.14.tar.xz">libxml2-2.9.14.tar.xz</a></td><td class="size">3.0 MiB</td><td class="date">2022-May-02 12:03</td></tr>
<tr><td class="link"><a href="libxml2-2.9.2.sha256sum" title="libxml2-2.9.2.sha256sum">libxml2-2.9.2.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:30</td></tr>
<tr><td class="link"><a href="libxml2-2.9.2.tar.xz" title="libxml2-2.9.2.tar.xz">libxml2-2.9.2.tar.xz</a></td><td class="size">3.2 MiB</td><td class="date">2022-Feb-14 18:30</td></tr>
<tr><td class="link"><a href="libxml2-2.9.3.sha256sum" title="libxml2-2.9.3.sha256sum">libxml2-2.9.3.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:31</td></tr>
<tr><td class="link"><a href="libxml2-2.9.3.tar.xz" title="libxml2-2.9.3.tar.xz">libxml2-2.9.3.tar.xz</a></td><td class="size">3.2 MiB</td><td class="date">2022-Feb-14 18:31</td></tr>
<tr><td class="link"><a href="libxml2-2.9.4.sha256sum" title="libxml2-2.9.4.sha256sum">libxml2-2.9.4.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:33</td></tr>
<tr><td class="link"><a href="libxml2-2.9.4.tar.xz" title="libxml2-2.9.4.tar.xz">libxml2-2.9.4.tar.xz</a></td><td class="size">2.9 MiB</td><td class="date">2022-Feb-14 18:33</td></tr>
<tr><td class="link"><a href="libxml2-2.9.5.sha256sum" title="libxml2-2.9.5.sha256sum">libxml2-2.9.5.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:35</td></tr>
<tr><td class="link"><a href="libxml2-2.9.5.tar.xz" title="libxml2-2.9.5.tar.xz">libxml2-2.9.5.tar.xz</a></td><td class="size">3.0 MiB</td><td class="date">2022-Feb-14 18:35</td></tr>
<tr><td class="link"><a href="libxml2-2.9.6.sha256sum" title="libxml2-2.9.6.sha256sum">libxml2-2.9.6.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:36</td></tr>
<tr><td class="link"><a href="libxml2-2.9.6.tar.xz" title="libxml2-2.9.6.tar.xz">libxml2-2.9.6.tar.xz</a></td><td class="size">3.0 MiB</td><td class="date">2022-Feb-14 18:36</td></tr>
<tr><td class="link"><a href="libxml2-2.9.7.sha256sum" title="libxml2-2.9.7.sha256sum">libxml2-2.9.7.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:37</td></tr>
<tr><td class="link"><a href="libxml2-2.9.7.tar.xz" title="libxml2-2.9.7.tar.xz">libxml2-2.9.7.tar.xz</a></td><td class="size">3.0 MiB</td><td class="date">2022-Feb-14 18:37</td></tr>
<tr><td class="link"><a href="libxml2-2.9.8.sha256sum" title="libxml2-2.9.8.sha256sum">libxml2-2.9.8.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:39</td></tr>
<tr><td class="link"><a href="libxml2-2.9.8.tar.xz" title="libxml2-2.9.8.tar.xz">libxml2-2.9.8.tar.xz</a></td><td class="size">3.0 MiB</td><td class="date">2022-Feb-14 18:39</td></tr>
<tr><td class="link"><a href="libxml2-2.9.9.sha256sum" title="libxml2-2.9.9.sha256sum">libxml2-2.9.9.sha256sum</a></td><td class="size">87 B</td><td class="date">2022-Feb-14 18:40</td></tr>
<tr><td class="link"><a href="libxml2-2.9.9.tar.xz" title="libxml2-2.9.9.tar.xz">libxml2-2.9.9.tar.xz</a></td><td class="size">3.0 MiB</td><td class="date">2022-Feb-14 18:40</td></tr>
</tbody></table></body></html>

View File

@@ -1,19 +0,0 @@
<!DOCTYPE html><html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"><meta name="viewport" content="width=device-width"><style type="text/css">body,html {background:#fff;font-family:"Bitstream Vera Sans","Lucida Grande","Lucida Sans Unicode",Lucidux,Verdana,Lucida,sans-serif;}tr:nth-child(even) {background:#f4f4f4;}th,td {padding:0.1em 0.5em;}th {text-align:left;font-weight:bold;background:#eee;border-bottom:1px solid #aaa;}#list {border:1px solid #aaa;width:100%;}a {color:#a33;}a:hover {color:#e33;}</style>
<title>Index of /sources/libxml2/</title>
</head><body><h1>Index of /sources/libxml2/</h1>
<table id="list"><thead><tr><th style="width:55%"><a href="?C=N&amp;O=A">File Name</a>&nbsp;<a href="?C=N&amp;O=D">&nbsp;&darr;&nbsp;</a></th><th style="width:20%"><a href="?C=S&amp;O=A">File Size</a>&nbsp;<a href="?C=S&amp;O=D">&nbsp;&darr;&nbsp;</a></th><th style="width:25%"><a href="?C=M&amp;O=A">Date</a>&nbsp;<a href="?C=M&amp;O=D">&nbsp;&darr;&nbsp;</a></th></tr></thead>
<tbody><tr><td class="link"><a href="../">Parent directory/</a></td><td class="size">-</td><td class="date">-</td></tr>
<tr><td class="link"><a href="2.0/" title="2.0">2.0/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:04</td></tr>
<tr><td class="link"><a href="2.1/" title="2.1">2.1/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:04</td></tr>
<tr><td class="link"><a href="2.10/" title="2.10">2.10/</a></td><td class="size">-</td><td class="date">2022-Oct-14 12:55</td></tr>
<tr><td class="link"><a href="2.2/" title="2.2">2.2/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:04</td></tr>
<tr><td class="link"><a href="2.3/" title="2.3">2.3/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:05</td></tr>
<tr><td class="link"><a href="2.4/" title="2.4">2.4/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:05</td></tr>
<tr><td class="link"><a href="2.5/" title="2.5">2.5/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:05</td></tr>
<tr><td class="link"><a href="2.6/" title="2.6">2.6/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:05</td></tr>
<tr><td class="link"><a href="2.7/" title="2.7">2.7/</a></td><td class="size">-</td><td class="date">2022-Feb-14 18:24</td></tr>
<tr><td class="link"><a href="2.8/" title="2.8">2.8/</a></td><td class="size">-</td><td class="date">2022-Feb-14 18:26</td></tr>
<tr><td class="link"><a href="2.9/" title="2.9">2.9/</a></td><td class="size">-</td><td class="date">2022-May-02 12:04</td></tr>
<tr><td class="link"><a href="cache.json" title="cache.json">cache.json</a></td><td class="size">22.8 KiB</td><td class="date">2022-Oct-14 12:55</td></tr>
</tbody></table></body></html>

View File

@@ -785,7 +785,7 @@ class FetcherLocalTest(FetcherTest):
# Fetch and check revision
self.d.setVar("SRCREV", "AUTOINC")
self.d.setVar("__BBSRCREV_SEEN", "1")
self.d.setVar("__BBSEENSRCREV", "1")
url = "git://" + self.gitdir + ";branch=master;protocol=file;" + suffix
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
@@ -1401,9 +1401,6 @@ class FetchLatestVersionTest(FetcherTest):
# http://www.cmake.org/files/v2.8/cmake-2.8.12.1.tar.gz
("cmake", "/files/v2.8/cmake-2.8.12.1.tar.gz", "", "")
: "2.8.12.1",
# https://download.gnome.org/sources/libxml2/2.9/libxml2-2.9.14.tar.xz
("libxml2", "/software/libxml2/2.9/libxml2-2.9.14.tar.xz", "", "")
: "2.10.3",
#
# packages with versions only in current directory
#
@@ -1654,7 +1651,7 @@ class GitShallowTest(FetcherTest):
self.d.setVar('BB_GIT_SHALLOW', '1')
self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '0')
self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
self.d.setVar("__BBSRCREV_SEEN", "1")
self.d.setVar("__BBSEENSRCREV", "1")
def assertRefs(self, expected_refs, cwd=None):
if cwd is None:
@@ -2199,12 +2196,6 @@ class GitShallowTest(FetcherTest):
self.assertIn("fstests.doap", dir)
class GitLfsTest(FetcherTest):
def skipIfNoGitLFS():
import shutil
if not shutil.which('git-lfs'):
return unittest.skip('git-lfs not installed')
return lambda f: f
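(For context: this conditional-skip helper works because it is invoked at decoration time, returning either unittest.skip(...) or an identity decorator. A minimal self-contained sketch of the same pattern, using a hypothetical test class and only the standard library:)
import shutil
import unittest
def skipIfNoGitLFS():
    # Evaluated once when the class body is executed: return a real skip
    # decorator if git-lfs is absent, otherwise a pass-through decorator.
    if not shutil.which('git-lfs'):
        return unittest.skip('git-lfs not installed')
    return lambda f: f
class ExampleTest(unittest.TestCase):
    @skipIfNoGitLFS()
    def test_placeholder(self):
        self.assertTrue(True)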
def setUp(self):
FetcherTest.setUp(self)
@@ -2218,7 +2209,7 @@ class GitLfsTest(FetcherTest):
self.d.setVar('SRCREV', '${AUTOREV}')
self.d.setVar('AUTOREV', '${@bb.fetch2.get_autorev(d)}')
self.d.setVar("__BBSRCREV_SEEN", "1")
self.d.setVar("__BBSEENSRCREV", "1")
bb.utils.mkdirhier(self.srcdir)
self.git_init(cwd=self.srcdir)
@@ -2238,44 +2229,6 @@ class GitLfsTest(FetcherTest):
ud = fetcher.ud[uri]
return fetcher, ud
def get_real_git_lfs_file(self):
self.d.setVar('PATH', os.environ.get('PATH'))
fetcher, ud = self.fetch()
fetcher.unpack(self.d.getVar('WORKDIR'))
unpacked_lfs_file = os.path.join(self.d.getVar('WORKDIR'), 'git', "Cat_poster_1.jpg")
return unpacked_lfs_file
@skipIfNoGitLFS()
@skipIfNoNetwork()
def test_real_git_lfs_repo_succeeds_without_lfs_param(self):
self.d.setVar('SRC_URI', "git://gitlab.com/gitlab-examples/lfs.git;protocol=https;branch=master")
f = self.get_real_git_lfs_file()
self.assertTrue(os.path.exists(f))
self.assertEqual("c0baab607a97839c9a328b4310713307", bb.utils.md5_file(f))
@skipIfNoGitLFS()
@skipIfNoNetwork()
def test_real_git_lfs_repo_succeeds(self):
self.d.setVar('SRC_URI', "git://gitlab.com/gitlab-examples/lfs.git;protocol=https;branch=master;lfs=1")
f = self.get_real_git_lfs_file()
self.assertTrue(os.path.exists(f))
self.assertEqual("c0baab607a97839c9a328b4310713307", bb.utils.md5_file(f))
@skipIfNoGitLFS()
@skipIfNoNetwork()
def test_real_git_lfs_repo_succeeds(self):
self.d.setVar('SRC_URI', "git://gitlab.com/gitlab-examples/lfs.git;protocol=https;branch=master;lfs=0")
f = self.get_real_git_lfs_file()
# This is the actual non-smudged placeholder file on the repo if git-lfs does not run
lfs_file = (
'version https://git-lfs.github.com/spec/v1\n'
'oid sha256:34be66b1a39a1955b46a12588df9d5f6fc1da790e05cf01f3c7422f4bbbdc26b\n'
'size 11423554\n'
)
with open(f) as fh:
self.assertEqual(lfs_file, fh.read())
def test_lfs_enabled(self):
import shutil
@@ -2294,16 +2247,12 @@ class GitLfsTest(FetcherTest):
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
old_find_git_lfs = ud.method._find_git_lfs
try:
# If git-lfs cannot be found, the unpack should throw an error
with self.assertRaises(bb.fetch2.FetchError):
fetcher.download()
ud.method._find_git_lfs = lambda d: False
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
finally:
ud.method._find_git_lfs = old_find_git_lfs
# If git-lfs cannot be found, the unpack should throw an error
with self.assertRaises(bb.fetch2.FetchError):
fetcher.download()
ud.method._find_git_lfs = lambda d: False
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
def test_lfs_disabled(self):
import shutil
@@ -2318,21 +2267,17 @@ class GitLfsTest(FetcherTest):
fetcher, ud = self.fetch()
self.assertIsNotNone(ud.method._find_git_lfs)
old_find_git_lfs = ud.method._find_git_lfs
try:
# If git-lfs can be found, the unpack should be successful. A
# live copy of git-lfs is not required for this case, so
# unconditionally forge its presence.
ud.method._find_git_lfs = lambda d: True
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
# If git-lfs cannot be found, the unpack should be successful
# If git-lfs can be found, the unpack should be successful. A
# live copy of git-lfs is not required for this case, so
# unconditionally forge its presence.
ud.method._find_git_lfs = lambda d: True
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
ud.method._find_git_lfs = lambda d: False
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
finally:
ud.method._find_git_lfs = old_find_git_lfs
# If git-lfs cannot be found, the unpack should be successful
ud.method._find_git_lfs = lambda d: False
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
class GitURLWithSpacesTest(FetcherTest):
test_git_urls = {
@@ -2377,13 +2322,6 @@ class CrateTest(FetcherTest):
d = self.d
fetcher = bb.fetch2.Fetch(uris, self.d)
ud = fetcher.ud[fetcher.urls[0]]
self.assertIn("name", ud.parm)
self.assertEqual(ud.parm["name"], "glob-0.2.11")
self.assertIn("downloadfilename", ud.parm)
self.assertEqual(ud.parm["downloadfilename"], "glob-0.2.11.crate")
fetcher.download()
fetcher.unpack(self.tempdir)
self.assertEqual(sorted(os.listdir(self.tempdir)), ['cargo_home', 'download' , 'unpacked'])
@@ -2391,30 +2329,6 @@ class CrateTest(FetcherTest):
self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/glob-0.2.11/.cargo-checksum.json"))
self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/glob-0.2.11/src/lib.rs"))
@skipIfNoNetwork()
def test_crate_url_params(self):
uri = "crate://crates.io/aho-corasick/0.7.20;name=aho-corasick-renamed"
self.d.setVar('SRC_URI', uri)
uris = self.d.getVar('SRC_URI').split()
d = self.d
fetcher = bb.fetch2.Fetch(uris, self.d)
ud = fetcher.ud[fetcher.urls[0]]
self.assertIn("name", ud.parm)
self.assertEqual(ud.parm["name"], "aho-corasick-renamed")
self.assertIn("downloadfilename", ud.parm)
self.assertEqual(ud.parm["downloadfilename"], "aho-corasick-0.7.20.crate")
fetcher.download()
fetcher.unpack(self.tempdir)
self.assertEqual(sorted(os.listdir(self.tempdir)), ['cargo_home', 'download' , 'unpacked'])
self.assertEqual(sorted(os.listdir(self.tempdir + "/download")), ['aho-corasick-0.7.20.crate', 'aho-corasick-0.7.20.crate.done'])
self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/aho-corasick-0.7.20/.cargo-checksum.json"))
self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/aho-corasick-0.7.20/src/lib.rs"))
@skipIfNoNetwork()
def test_crate_url_multi(self):
@@ -2425,19 +2339,6 @@ class CrateTest(FetcherTest):
d = self.d
fetcher = bb.fetch2.Fetch(uris, self.d)
ud = fetcher.ud[fetcher.urls[0]]
self.assertIn("name", ud.parm)
self.assertEqual(ud.parm["name"], "glob-0.2.11")
self.assertIn("downloadfilename", ud.parm)
self.assertEqual(ud.parm["downloadfilename"], "glob-0.2.11.crate")
ud = fetcher.ud[fetcher.urls[1]]
self.assertIn("name", ud.parm)
self.assertEqual(ud.parm["name"], "time-0.1.35")
self.assertIn("downloadfilename", ud.parm)
self.assertEqual(ud.parm["downloadfilename"], "time-0.1.35.crate")
fetcher.download()
fetcher.unpack(self.tempdir)
self.assertEqual(sorted(os.listdir(self.tempdir)), ['cargo_home', 'download' , 'unpacked'])
@@ -2447,18 +2348,6 @@ class CrateTest(FetcherTest):
self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/time-0.1.35/.cargo-checksum.json"))
self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/time-0.1.35/src/lib.rs"))
@skipIfNoNetwork()
def test_crate_incorrect_cksum(self):
uri = "crate://crates.io/aho-corasick/0.7.20"
self.d.setVar('SRC_URI', uri)
self.d.setVarFlag("SRC_URI", "aho-corasick-0.7.20.sha256sum", hashlib.sha256("Invalid".encode("utf-8")).hexdigest())
uris = self.d.getVar('SRC_URI').split()
fetcher = bb.fetch2.Fetch(uris, self.d)
with self.assertRaisesRegexp(bb.fetch2.FetchError, "Fetcher failure for URL"):
fetcher.download()
class NPMTest(FetcherTest):
def skipIfNoNpm():
import shutil
@@ -2720,45 +2609,6 @@ class NPMTest(FetcherTest):
self.assertTrue(os.path.exists(os.path.join(self.sdir, 'node_modules', 'array-flatten', 'node_modules', 'content-type', 'package.json')))
self.assertTrue(os.path.exists(os.path.join(self.sdir, 'node_modules', 'array-flatten', 'node_modules', 'content-type', 'node_modules', 'cookie', 'package.json')))
@skipIfNoNpm()
@skipIfNoNetwork()
def test_npmsw_git(self):
swfile = self.create_shrinkwrap_file({
'dependencies': {
'cookie': {
'version': 'github:jshttp/cookie.git#aec1177c7da67e3b3273df96cf476824dbc9ae09',
'from': 'github:jshttp/cookie.git'
}
}
})
fetcher = bb.fetch.Fetch(['npmsw://' + swfile], self.d)
fetcher.download()
self.assertTrue(os.path.exists(os.path.join(self.dldir, 'git2', 'github.com.jshttp.cookie.git')))
swfile = self.create_shrinkwrap_file({
'dependencies': {
'cookie': {
'version': 'jshttp/cookie.git#aec1177c7da67e3b3273df96cf476824dbc9ae09',
'from': 'jshttp/cookie.git'
}
}
})
fetcher = bb.fetch.Fetch(['npmsw://' + swfile], self.d)
fetcher.download()
self.assertTrue(os.path.exists(os.path.join(self.dldir, 'git2', 'github.com.jshttp.cookie.git')))
swfile = self.create_shrinkwrap_file({
'dependencies': {
'nodejs': {
'version': 'gitlab:gitlab-examples/nodejs.git#892a1f16725e56cc3a2cb0d677be42935c8fc262',
'from': 'gitlab:gitlab-examples/nodejs'
}
}
})
fetcher = bb.fetch.Fetch(['npmsw://' + swfile], self.d)
fetcher.download()
self.assertTrue(os.path.exists(os.path.join(self.dldir, 'git2', 'gitlab.com.gitlab-examples.nodejs.git')))
@skipIfNoNpm()
@skipIfNoNetwork()
def test_npmsw_dev(self):
@@ -2968,7 +2818,7 @@ class GitSharedTest(FetcherTest):
super(GitSharedTest, self).setUp()
self.recipe_url = "git://git.openembedded.org/bitbake;branch=master"
self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
self.d.setVar("__BBSRCREV_SEEN", "1")
self.d.setVar("__BBSEENSRCREV", "1")
@skipIfNoNetwork()
def test_shared_unpack(self):
@@ -2999,7 +2849,7 @@ class FetchPremirroronlyLocalTest(FetcherTest):
os.mkdir(self.mirrordir)
self.reponame = "bitbake"
self.gitdir = os.path.join(self.tempdir, "git", self.reponame)
self.recipe_url = "git://git.fake.repo/bitbake;branch=master"
self.recipe_url = "git://git.fake.repo/bitbake"
self.d.setVar("BB_FETCH_PREMIRRORONLY", "1")
self.d.setVar("BB_NO_NETWORK", "1")
self.d.setVar("PREMIRRORS", self.recipe_url + " " + "file://{}".format(self.mirrordir) + " \n")
@@ -3083,50 +2933,6 @@ class FetchPremirroronlyNetworkTest(FetcherTest):
with self.assertRaises(bb.fetch2.NetworkAccess):
fetcher.download()
class FetchPremirroronlyMercurialTest(FetcherTest):
""" Test for premirrors with mercurial repos
the test covers also basic hg:// clone (see fetch_and_create_tarball
"""
def skipIfNoHg():
import shutil
if not shutil.which('hg'):
return unittest.skip('Mercurial not installed')
return lambda f: f
def setUp(self):
super(FetchPremirroronlyMercurialTest, self).setUp()
self.mirrordir = os.path.join(self.tempdir, "mirrors")
os.mkdir(self.mirrordir)
self.reponame = "libgnt"
self.clonedir = os.path.join(self.tempdir, "hg")
self.recipe_url = "hg://keep.imfreedom.org/libgnt;module=libgnt"
self.d.setVar("SRCREV", "53e8b422faaf")
self.mirrorname = "hg_libgnt_keep.imfreedom.org_.libgnt.tar.gz"
def fetch_and_create_tarball(self):
"""
Ask bitbake to download repo and prepare mirror tarball for us
"""
self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
fetcher.download()
mirrorfile = os.path.join(self.d.getVar("DL_DIR"), self.mirrorname)
self.assertTrue(os.path.exists(mirrorfile), "Mirror tarball {} has not been created".format(mirrorfile))
## moving tarball to mirror directory
os.rename(mirrorfile, os.path.join(self.mirrordir, self.mirrorname))
self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "0")
@skipIfNoNetwork()
@skipIfNoHg()
def test_premirror_mercurial(self):
self.fetch_and_create_tarball()
self.d.setVar("PREMIRRORS", self.recipe_url + " " + "file://{}".format(self.mirrordir) + " \n")
self.d.setVar("BB_FETCH_PREMIRRORONLY", "1")
self.d.setVar("BB_NO_NETWORK", "1")
fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
fetcher.download()
class FetchPremirroronlyBrokenTarball(FetcherTest):
def setUp(self):

View File

@@ -218,26 +218,3 @@ VAR = " \\
with self.assertRaises(bb.BBHandledException):
d = bb.parse.handle(f.name, self.d)['']
at_sign_in_var_flag = """
A[flag@.service] = "nonet"
B[flag@.target] = "ntb"
C[f] = "flag"
unset A[flag@.service]
"""
def test_parse_at_sign_in_var_flag(self):
f = self.parsehelper(self.at_sign_in_var_flag)
d = bb.parse.handle(f.name, self.d)['']
self.assertEqual(d.getVar("A"), None)
self.assertEqual(d.getVar("B"), None)
self.assertEqual(d.getVarFlag("A","flag@.service"), None)
self.assertEqual(d.getVarFlag("B","flag@.target"), "ntb")
self.assertEqual(d.getVarFlag("C","f"), "flag")
def test_parse_invalid_at_sign_in_var_flag(self):
invalid_at_sign = self.at_sign_in_var_flag.replace("B[f", "B[@f")
f = self.parsehelper(invalid_at_sign)
with self.assertRaises(bb.parse.ParseError):
d = bb.parse.handle(f.name, self.d)['']

View File

@@ -288,7 +288,7 @@ class RunQueueTests(unittest.TestCase):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
"BBMULTICONFIG" : "mc-1 mc_2",
"BB_SIGNATURE_HANDLER" : "basichash",
"BB_SIGNATURE_HANDLER" : "TestMulticonfigDepends",
"EXTRA_BBFILES": "${COREBASE}/recipes/fails-mc/*.bb",
}
tasks = self.run_bitbakecmd(["bitbake", "mc:mc-1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)

View File

@@ -10,7 +10,6 @@
import logging
import os
import sys
import time
import atexit
import re
from collections import OrderedDict, defaultdict
@@ -730,7 +729,6 @@ class Tinfoil:
ret = self.run_command('buildTargets', targets, task)
if handle_events:
lastevent = time.time()
result = False
# Borrowed from knotty, instead somewhat hackily we use the helper
# as the object to store "shutdown" on
@@ -743,7 +741,6 @@ class Tinfoil:
try:
event = self.wait_event(0.25)
if event:
lastevent = time.time()
if event_callback and event_callback(event):
continue
if helper.eventHandler(event):
@@ -776,7 +773,7 @@ class Tinfoil:
if isinstance(event, bb.command.CommandCompleted):
result = True
break
if isinstance(event, (bb.command.CommandFailed, bb.command.CommandExit)):
if isinstance(event, bb.command.CommandFailed):
self.logger.error(str(event))
result = False
break
@@ -788,13 +785,10 @@ class Tinfoil:
self.logger.error(str(event))
result = False
break
elif helper.shutdown > 1:
break
termfilter.updateFooter()
if time.time() > (lastevent + (3*60)):
if not self.run_command('ping', handle_events=False):
print("\nUnable to ping server and no events, closing down...\n")
return False
except KeyboardInterrupt:
termfilter.clearFooter()
if helper.shutdown == 1:

View File

@@ -625,38 +625,25 @@ def main(server, eventHandler, params, tf = TerminalFilter):
printintervaldelta = 10 * 60 # 10 minutes
printinterval = printintervaldelta
pinginterval = 1 * 60 # 1 minute
lastevent = lastprint = time.time()
lastprint = time.time()
termfilter = tf(main, helper, console_handlers, params.options.quiet)
atexit.register(termfilter.finish)
# shutdown levels
# 0 - normal operation
# 1 - no new task execution, let current running tasks finish
# 2 - interrupting currently executing tasks
# 3 - we're done, exit
while main.shutdown < 3:
while True:
try:
if (lastprint + printinterval) <= time.time():
termfilter.keepAlive(printinterval)
printinterval += printintervaldelta
event = eventHandler.waitEvent(0)
if event is None:
if (lastevent + pinginterval) <= time.time():
ret, error = server.runCommand(["ping"])
if error or not ret:
termfilter.clearFooter()
print("No reply after pinging server (%s, %s), exiting." % (str(error), str(ret)))
return_value = 3
main.shutdown = 3
lastevent = time.time()
if main.shutdown > 1:
break
if not parseprogress:
termfilter.updateFooter()
event = eventHandler.waitEvent(0.25)
if event is None:
continue
lastevent = time.time()
helper.eventHandler(event)
if isinstance(event, bb.runqueue.runQueueExitWait):
if not main.shutdown:
@@ -761,15 +748,15 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if event.error:
errors = errors + 1
logger.error(str(event))
main.shutdown = 3
main.shutdown = 2
continue
if isinstance(event, bb.command.CommandExit):
if not return_value:
return_value = event.exitcode
main.shutdown = 3
main.shutdown = 2
continue
if isinstance(event, (bb.command.CommandCompleted, bb.cooker.CookerExit)):
main.shutdown = 3
main.shutdown = 2
continue
if isinstance(event, bb.event.MultipleProviders):
logger.info(str(event))

View File

@@ -177,7 +177,7 @@ class gtkthread(threading.Thread):
quit = threading.Event()
def __init__(self, shutdown):
threading.Thread.__init__(self)
self.daemon = True
self.setDaemon(True)
self.shutdown = shutdown
if not Gtk.init_check()[0]:
sys.stderr.write("Gtk+ init failed. Make sure DISPLAY variable is set.\n")

View File

@@ -65,27 +65,35 @@ class BBUIEventQueue:
self.server = server
self.t = threading.Thread()
self.t.daemon = True
self.t.setDaemon(True)
self.t.run = self.startCallbackHandler
self.t.start()
def getEvent(self):
with bb.utils.lock_timeout(self.eventQueueLock):
if not self.eventQueue:
return None
item = self.eventQueue.pop(0)
if not self.eventQueue:
self.eventQueueNotify.clear()
return item
self.eventQueueLock.acquire()
if not self.eventQueue:
self.eventQueueLock.release()
return None
item = self.eventQueue.pop(0)
if not self.eventQueue:
self.eventQueueNotify.clear()
self.eventQueueLock.release()
return item
def waitEvent(self, delay):
self.eventQueueNotify.wait(delay)
return self.getEvent()
def queue_event(self, event):
with bb.utils.lock_timeout(self.eventQueueLock):
self.eventQueue.append(event)
self.eventQueueNotify.set()
self.eventQueueLock.acquire()
self.eventQueue.append(event)
self.eventQueueNotify.set()
self.eventQueueLock.release()
def send_event(self, event):
self.queue_event(pickle.loads(event))

View File

@@ -13,7 +13,6 @@ import errno
import logging
import bb
import bb.msg
import locale
import multiprocessing
import fcntl
import importlib
@@ -609,21 +608,6 @@ def preserved_envvars():
]
return v + preserved_envvars_exported()
def check_system_locale():
"""Make sure the required system locale are available and configured"""
default_locale = locale.getlocale(locale.LC_CTYPE)
try:
locale.setlocale(locale.LC_CTYPE, ("en_US", "UTF-8"))
except:
sys.exit("Please make sure locale 'en_US.UTF-8' is available on your system")
else:
locale.setlocale(locale.LC_CTYPE, default_locale)
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\n"
"Python can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
def filter_environment(good_vars):
"""
Create a pristine environment for bitbake. This will remove variables that
@@ -1008,9 +992,6 @@ def to_boolean(string, default=None):
if not string:
return default
if isinstance(string, int):
return string != 0
normalized = string.lower()
if normalized in ("y", "yes", "1", "true"):
return True
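(The isinstance guard matters because an int has no .lower() method; without it, passing a plain 0 or 1 would raise AttributeError. A sketch reconstructing to_boolean from the lines shown here — the False branch and the error message are assumptions, not quoted from the diff:)
def to_boolean(string, default=None):
    if not string:
        return default
    if isinstance(string, int):
        # Guard for int inputs, which would crash on .lower() below.
        return string != 0
    normalized = string.lower()
    if normalized in ("y", "yes", "1", "true"):
        return True
    if normalized in ("n", "no", "0", "false"):
        return False
    raise ValueError("Invalid value for to_boolean: %s" % string)
assert to_boolean(1) is True
assert to_boolean(0) is None   # 0 is falsy, so the default is returned
assert to_boolean("yes") is True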
@@ -1698,11 +1679,25 @@ def disable_network(uid=None, gid=None):
f.write("%s %s 1" % (gid, gid))
def export_proxies(d):
from bb.fetch2 import get_fetcher_environment
""" export common proxies variables from datastore to environment """
newenv = get_fetcher_environment(d)
for v in newenv:
os.environ[v] = newenv[v]
import os
variables = ['http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY',
'ftp_proxy', 'FTP_PROXY', 'no_proxy', 'NO_PROXY',
'GIT_PROXY_COMMAND']
exported = False
for v in variables:
if v in os.environ.keys():
exported = True
else:
v_proxy = d.getVar(v)
if v_proxy is not None:
os.environ[v] = v_proxy
exported = True
return exported
def load_plugins(logger, plugins, pluginpath):
def load_plugin(name):
@@ -1827,16 +1822,3 @@ def mkstemp(suffix=None, prefix=None, dir=None, text=False):
else:
prefix = tempfile.gettempprefix() + entropy
return tempfile.mkstemp(suffix=suffix, prefix=prefix, dir=dir, text=text)
# If we don't have a timeout of some kind and a process/thread exits badly (for example
# OOM killed) and held a lock, we'd just hang in the lock futex forever. It is better
# we exit at some point than hang. 5 minutes with no progress means we're probably deadlocked.
@contextmanager
def lock_timeout(lock):
held = lock.acquire(timeout=5*60)
try:
if not held:
os._exit(1)
yield held
finally:
lock.release()
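(Callers use the helper in place of a bare "with lock:", as the BBUIEventQueue hunk earlier in this compare shows. A self-contained sketch restating the pattern, with a toy event queue standing in for the BitBake data structures:)
import os
import threading
from contextlib import contextmanager
@contextmanager
def lock_timeout(lock):
    # Same shape as the helper above: exit hard instead of hanging
    # forever if the lock holder died without releasing the lock.
    held = lock.acquire(timeout=5 * 60)
    try:
        if not held:
            os._exit(1)
        yield held
    finally:
        lock.release()
event_queue_lock = threading.Lock()
event_queue = []
def queue_event(event):
    with lock_timeout(event_queue_lock):
        event_queue.append(event)
queue_event("example")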

View File

@@ -11,7 +11,6 @@ import shutil
import sys
import tempfile
from bb.cookerdata import findTopdir
import bb.utils
from bblayers.common import LayerPlugin
@@ -38,7 +37,7 @@ class ActionPlugin(LayerPlugin):
sys.stderr.write("Specified layer directory %s doesn't contain a conf/layer.conf file\n" % layerdir)
return 1
bblayers_conf = os.path.join(findTopdir(),'conf', 'bblayers.conf')
bblayers_conf = os.path.join('conf', 'bblayers.conf')
if not os.path.exists(bblayers_conf):
sys.stderr.write("Unable to find bblayers.conf\n")
return 1
@@ -66,7 +65,7 @@ class ActionPlugin(LayerPlugin):
def do_remove_layer(self, args):
"""Remove one or more layers from bblayers.conf."""
bblayers_conf = os.path.join(findTopdir() ,'conf', 'bblayers.conf')
bblayers_conf = os.path.join('conf', 'bblayers.conf')
if not os.path.exists(bblayers_conf):
sys.stderr.write("Unable to find bblayers.conf\n")
return 1

View File

@@ -29,12 +29,12 @@ class QueryPlugin(LayerPlugin):
def do_show_layers(self, args):
"""show current configured layers."""
logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(70), "priority"))
logger.plain('=' * 104)
logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
logger.plain('=' * 74)
for layer, _, regex, pri in self.tinfoil.cooker.bbfile_config_priorities:
layerdir = self.bbfile_collections.get(layer, None)
layername = layer
logger.plain("%s %s %s" % (layername.ljust(20), layerdir.ljust(70), pri))
layername = self.get_layer_name(layerdir)
logger.plain("%s %s %d" % (layername.ljust(20), layerdir.ljust(40), pri))
def version_str(self, pe, pv, pr = None):
verstr = "%s" % pv

View File

@@ -27,4 +27,4 @@ Data can be provided in XML, JSON and if installed YAML formats.
Use the django management command manage.py loaddata <your fixture file>
For further information see the Django command documentation at:
https://docs.djangoproject.com/en/3.2/ref/django-admin/#django-admin-loaddata
https://docs.djangoproject.com/en/1.8/ref/django-admin/#django-admin-loaddata

View File

@@ -35,19 +35,17 @@ verbose = False
# [Codename, Yocto Project Version, Release Date, Current Version, Support Level, Poky Version, BitBake branch]
current_releases = [
# Release slot #1
['Kirkstone','4.0','April 2022','4.0.8 (March 2023)','Stable - Long Term Support (until Apr. 2024)','','2.0'],
['Kirkstone','3.5','April 2022','','Future - Long Term Support (until Apr. 2024)','27.0','1.54'],
# ['Dunfell','3.1','April 2021','3.1.5 (March 2022)','Stable - Support for 13 months (until Apr. 2022)','23.0','1.46'],
# Release slot #2 'local'
['HEAD','HEAD','','Local Yocto Project','HEAD','','HEAD'],
# Release slot #3 'master'
['Master','master','','Yocto Project master','master','','master'],
# Release slot #4
['Mickledore','4.2','April 2023','4.2.0 (April 2023)','Support for 7 months (until October 2023)','','2.4'],
# ['Langdale','4.1','October 2022','4.1.2 (January 2023)','Support for 7 months (until May 2023)','','2.2'],
# ['Honister','3.4','October 2021','3.4.2 (February 2022)','Support for 7 months (until May 2022)','26.0','1.52'],
# ['Hardknott','3.3','April 2021','3.3.5 (March 2022)','Stable - Support for 13 months (until Apr. 2022)','25.0','1.50'],
# ['Gatesgarth','3.2','Oct 2020','3.2.4 (May 2021)','EOL','24.0','1.48'],
# Optional Release slot #5
['Dunfell','3.1','April 2020','3.1.23 (February 2023)','Stable - Long Term Support (until Apr. 2024)','23.0','1.46'],
['Honister','3.4','October 2021','3.4.2 (February 2022)','Support for 7 months (until May 2022)','26.0','1.52'],
# ['Gatesgarth','3.2','Oct 2020','3.2.4 (May 2021)','EOL','24.0','1.48'],
# Optional Release slot #4
['Hardknott','3.3','April 2021','3.3.5 (March 2022)','Stable - Support for 13 months (until Apr. 2022)','25.0','1.50'],
]
default_poky_layers = [

View File

@@ -10,7 +10,7 @@
<object model="orm.bitbakeversion" pk="1">
<field type="CharField" name="name">kirkstone</field>
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
<field type="CharField" name="branch">2.0</field>
<field type="CharField" name="branch">1.54</field>
</object>
<object model="orm.bitbakeversion" pk="2">
<field type="CharField" name="name">HEAD</field>
@@ -23,14 +23,14 @@
<field type="CharField" name="branch">master</field>
</object>
<object model="orm.bitbakeversion" pk="4">
<field type="CharField" name="name">mickledore</field>
<field type="CharField" name="name">honister</field>
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
<field type="CharField" name="branch">2.4</field>
<field type="CharField" name="branch">1.52</field>
</object>
<object model="orm.bitbakeversion" pk="5">
<field type="CharField" name="name">dunfell</field>
<field type="CharField" name="name">hardknott</field>
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
<field type="CharField" name="branch">1.46</field>
<field type="CharField" name="branch">1.50</field>
</object>
<!-- Releases available -->
@@ -56,18 +56,18 @@
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href=\"https://cgit.openembedded.org/openembedded-core/log/\"&gt;OpenEmbedded master&lt;/a&gt; branch.</field>
</object>
<object model="orm.release" pk="4">
<field type="CharField" name="name">mickledore</field>
<field type="CharField" name="description">Openembedded Mickledore</field>
<field type="CharField" name="name">honister</field>
<field type="CharField" name="description">Openembedded Honister</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">4</field>
<field type="CharField" name="branch_name">mickledore</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href=\"https://cgit.openembedded.org/openembedded-core/log/?h=mickledore\"&gt;OpenEmbedded Mickledore&lt;/a&gt; branch.</field>
<field type="CharField" name="branch_name">honister</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href=\"https://cgit.openembedded.org/openembedded-core/log/?h=honister\"&gt;OpenEmbedded Honister&lt;/a&gt; branch.</field>
</object>
<object model="orm.release" pk="5">
<field type="CharField" name="name">dunfell</field>
<field type="CharField" name="description">Openembedded Dunfell</field>
<field type="CharField" name="name">hardknott</field>
<field type="CharField" name="description">Openembedded Hardknott</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">5</field>
<field type="CharField" name="branch_name">dunfell</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href=\"https://cgit.openembedded.org/openembedded-core/log/?h=dunfell\"&gt;OpenEmbedded Dunfell&lt;/a&gt; branch.</field>
<field type="CharField" name="branch_name">hardknott</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href=\"https://cgit.openembedded.org/openembedded-core/log/?h=hardknott\"&gt;OpenEmbedded Hardknott&lt;/a&gt; branch.</field>
</object>
<!-- Default layers for each release -->

View File

@@ -26,15 +26,15 @@
<field type="CharField" name="dirpath">bitbake</field>
</object>
<object model="orm.bitbakeversion" pk="4">
<field type="CharField" name="name">mickledore</field>
<field type="CharField" name="name">honister</field>
<field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
<field type="CharField" name="branch">mickledore</field>
<field type="CharField" name="branch">honister</field>
<field type="CharField" name="dirpath">bitbake</field>
</object>
<object model="orm.bitbakeversion" pk="5">
<field type="CharField" name="name">dunfell</field>
<field type="CharField" name="name">hardknott</field>
<field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
<field type="CharField" name="branch">dunfell</field>
<field type="CharField" name="branch">hardknott</field>
<field type="CharField" name="dirpath">bitbake</field>
</object>
@@ -62,18 +62,18 @@
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/"&gt;Yocto Project Master branch&lt;/a&gt;.</field>
</object>
<object model="orm.release" pk="4">
<field type="CharField" name="name">mickledore</field>
<field type="CharField" name="description">Yocto Project 4.2 "Mickledore"</field>
<field type="CharField" name="name">honister</field>
<field type="CharField" name="description">Yocto Project 3.4 "Honister"</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">4</field>
<field type="CharField" name="branch_name">mickledore</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=mickledore"&gt;Yocto Project Mickledore branch&lt;/a&gt;.</field>
<field type="CharField" name="branch_name">honister</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=honister"&gt;Yocto Project Honister branch&lt;/a&gt;.</field>
</object>
<object model="orm.release" pk="5">
<field type="CharField" name="name">dunfell</field>
<field type="CharField" name="description">Yocto Project 3.1 "Dunfell"</field>
<field type="CharField" name="name">hardknott</field>
<field type="CharField" name="description">Yocto Project 3.3 "Hardknott"</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">5</field>
<field type="CharField" name="branch_name">dunfell</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=dunfell"&gt;Yocto Project Dunfell branch&lt;/a&gt;.</field>
<field type="CharField" name="branch_name">hardknott</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=hardknott"&gt;Yocto Project Hardknott branch&lt;/a&gt;.</field>
</object>
<!-- Default project layers for each release -->
@@ -177,14 +177,14 @@
<field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">4</field>
<field type="CharField" name="branch">mickledore</field>
<field type="CharField" name="branch">honister</field>
<field type="CharField" name="dirpath">meta</field>
</object>
<object model="orm.layer_version" pk="5">
<field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">5</field>
<field type="CharField" name="branch">dunfell</field>
<field type="CharField" name="branch">hardknott</field>
<field type="CharField" name="dirpath">meta</field>
</object>
@@ -222,14 +222,14 @@
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">4</field>
<field type="CharField" name="branch">mickledore</field>
<field type="CharField" name="branch">honister</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
<object model="orm.layer_version" pk="10">
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">5</field>
<field type="CharField" name="branch">dunfell</field>
<field type="CharField" name="branch">hardknott</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
@@ -267,14 +267,14 @@
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">4</field>
<field type="CharField" name="branch">mickledore</field>
<field type="CharField" name="branch">honister</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
<object model="orm.layer_version" pk="15">
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">5</field>
<field type="CharField" name="branch">dunfell</field>
<field type="CharField" name="branch">hardknott</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
</django-objects>

View File

@@ -40,7 +40,7 @@ class Spinner(threading.Thread):
""" A simple progress spinner to indicate download/parsing is happening"""
def __init__(self, *args, **kwargs):
super(Spinner, self).__init__(*args, **kwargs)
self.daemon = True
self.setDaemon(True)
self.signal = True
def run(self):

View File

@@ -24,7 +24,7 @@ class KillRunbuilds(threading.Thread):
""" Kill the runbuilds process after an amount of time """
def __init__(self, *args, **kwargs):
super(KillRunbuilds, self).__init__(*args, **kwargs)
self.daemon = True
self.setDaemon(True)
def run(self):
time.sleep(5)

View File

@@ -1,4 +1,3 @@
sphinx/__pycache__
_build/
Pipfile.lock
poky.yaml

View File

@@ -275,19 +275,6 @@ websites.
More information can be found here:
https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html.
For external links, we use this syntax:
`link text <link URL>`__
instead of:
`link text <link URL>`_
Both syntaxes work, but the latter also creates a "link text" reference
target which could conflict with other references with the same name.
So, only use this variant when you wish to make multiple references
to this link, reusing only the target name.
See https://stackoverflow.com/questions/27420317/restructured-text-rst-http-links-underscore-vs-use
Anchor (<#link>) links are forbidden as they are not checked by Sphinx during
the build and may be broken without anyone knowing about it.
@@ -357,16 +344,13 @@ The sphinx.ext.intersphinx extension is enabled by default
so that we can cross reference content from other Sphinx based
documentation projects, such as the BitBake manual.
References to the BitBake manual can directly be done:
References to the BitBake manual can be done:
- With a specific description instead of the section name:
:ref:`Azure Storage fetcher (az://) <bitbake-user-manual/bitbake-user-manual-fetching:fetchers>`
:ref:`Azure Storage fetcher (az://) <bitbake:bitbake-user-manual/bitbake-user-manual-fetching:fetchers>`
- With the section name:
:ref:`bitbake-user-manual/bitbake-user-manual-intro:usage and syntax` option
If you want to refer to an entire document (or chapter) in the BitBake manual,
you have to use the ":doc:" macro with the "bitbake:" prefix:
- :doc:`BitBake User Manual <bitbake:index>`
- :doc:`bitbake:bitbake-user-manual/bitbake-user-manual-metadata`" chapter
:ref:`bitbake:bitbake-user-manual/bitbake-user-manual-intro:usage and syntax` option
- Linking to the entire BitBake manual:
:doc:`BitBake User Manual <bitbake:index>`
Note that a reference to a variable (:term:`VARIABLE`) automatically points to
the BitBake manual if the variable is not described in the Reference Manual's Variable Glossary.
@@ -375,11 +359,6 @@ BitBake manual as follows:
:term:`bitbake:BB_NUMBER_PARSE_THREADS`
This would be the same if we had identical document filenames in
both the Yocto Project and BitBake manuals:
:ref:`bitbake:directory/file:section title`
Submitting documentation changes
================================

View File

@@ -5,7 +5,7 @@
<br> All Rights Reserved. Linux Foundation&reg; and Yocto Project&reg; are registered trademarks of the Linux Foundation.
<br>Linux&reg; is a registered trademark of Linus Torvalds.
<br>&copy; Copyright {{ copyright }}
<br>Last updated on {{ last_updated }} from the <a href="https://git.yoctoproject.org/yocto-docs/">yocto-docs</a> git repository.
<br>Last updated on {{ last_updated }}
</p>
</div>
</footer>

View File

@@ -1,5 +1,3 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
.. include:: <xhtml1-lat1.txt>
.. include:: <xhtml1-symbol.txt>
@@ -10,7 +8,7 @@
Permission is granted to copy, distribute and/or modify this document under the
terms of the `Creative Commons Attribution-Share Alike 2.0 UK: England & Wales
<https://creativecommons.org/licenses/by-sa/2.0/uk/>`__ as published by Creative
<https://creativecommons.org/licenses/by-sa/2.0/uk/>`_ as published by Creative
Commons.
To report any inaccuracies or problems with this (or any other Yocto Project)

View File

@@ -40,13 +40,7 @@ Compatible Linux Distribution
Make sure your :term:`Build Host` meets the
following requirements:
- At least &MIN_DISK_SPACE; Gbytes of free disk space, though
much more will help to run multiple builds and increase
performance by reusing build artifacts.
- At least &MIN_RAM; Gbytes of RAM, though a modern build host with as
much RAM and as many CPU cores as possible is strongly recommended to
maximize build performance.
- 50 Gbytes of free disk space
- Runs a supported Linux distribution (i.e. recent releases of Fedora,
openSUSE, CentOS, Debian, or Ubuntu). For a list of Linux
@@ -76,9 +70,11 @@ Build Host Packages
You must install essential host packages on your build host. The
following command installs the host packages based on an Ubuntu
distribution::
distribution:
$ sudo apt install &UBUNTU_HOST_PACKAGES_ESSENTIAL;
.. code-block:: shell
$ sudo apt install &UBUNTU_HOST_PACKAGES_ESSENTIAL;
.. note::
@@ -228,13 +224,13 @@ an entire Linux distribution, including the toolchain, from source.
Among other things, the script creates the :term:`Build Directory`, which is
``build`` in this case and is located in the :term:`Source Directory`. After
the script runs, your current working directory is set to the
:term:`Build Directory`. Later, when the build completes, the
:term:`Build Directory` contains all the files created during the build.
the script runs, your current working directory is set to the Build
Directory. Later, when the build completes, the Build Directory contains all the
files created during the build.
#. **Examine Your Local Configuration File:** When you set up the build
environment, a local configuration file named ``local.conf`` becomes
available in a ``conf`` subdirectory of the :term:`Build Directory`. For this
available in a ``conf`` subdirectory of the Build Directory. For this
example, the defaults are set to build for a ``qemux86`` target,
which is suitable for emulation. The package manager used is set to
the RPM package manager.
@@ -266,7 +262,7 @@ an entire Linux distribution, including the toolchain, from source.
For information on using the ``bitbake`` command, see the
:ref:`overview-manual/concepts:bitbake` section in the Yocto Project Overview and
Concepts Manual, or see
:ref:`bitbake-user-manual/bitbake-user-manual-intro:the bitbake command`
:ref:`bitbake:bitbake-user-manual/bitbake-user-manual-intro:the bitbake command`
in the BitBake User Manual.
#. **Simulate Your Image Using QEMU:** Once this particular image is
@@ -349,7 +345,9 @@ Follow these steps to add a hardware layer:
#. **Add Your Layer to the Layer Configuration File:** Before you can use
a layer during a build, you must add it to your ``bblayers.conf``
file, which is found in the :term:`Build Directory` ``conf`` directory.
file, which is found in the
:term:`Build Directory` ``conf``
directory.
Use the ``bitbake-layers add-layer`` command to add the layer to the
configuration file:
@@ -365,7 +363,7 @@ Follow these steps to add a hardware layer:
You can find
more information on adding layers in the
:ref:`dev-manual/layers:adding a layer using the \`\`bitbake-layers\`\` script`
:ref:`dev-manual/common-tasks:adding a layer using the \`\`bitbake-layers\`\` script`
section.
Completing these steps has added the ``meta-altera`` layer to your Yocto
@@ -400,7 +398,7 @@ The following commands run the tool to create a layer named
For more information
on layers and how to create them, see the
:ref:`dev-manual/layers:creating a general layer using the \`\`bitbake-layers\`\` script`
:ref:`dev-manual/common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`
section in the Yocto Project Development Tasks Manual.
Where To Go Next

View File

@@ -109,7 +109,8 @@ them to the "Dependencies" section.
Some layers function as a layer to hold other BSP layers. These layers
are known as ":term:`container layers <Container Layer>`". An example of
this type of layer is OpenEmbedded's :oe_git:`meta-openembedded </meta-openembedded>`
this type of layer is OpenEmbedded's
`meta-openembedded <https://github.com/openembedded/meta-openembedded>`__
layer. The ``meta-openembedded`` layer contains many ``meta-*`` layers.
In cases like this, you need to include the names of the actual layers
you want to work with, such as::
@@ -127,7 +128,7 @@ you want to work with, such as::
and so on.
For more information on layers, see the
":ref:`dev-manual/layers:understanding and creating layers`"
":ref:`dev-manual/common-tasks:understanding and creating layers`"
section of the Yocto Project Development Tasks Manual.
Preparing Your Build Host to Work With BSP Layers
@@ -463,7 +464,7 @@ requirements are handled with the ``COPYING.MIT`` file.
Licensing files can be MIT, BSD, GPLv*, and so forth. These files are
recommended for the BSP but are optional and totally up to the BSP
developer. For information on how to maintain license compliance, see
the ":ref:`dev-manual/licenses:maintaining open source license compliance during your product's lifecycle`"
the ":ref:`dev-manual/common-tasks:maintaining open source license compliance during your product's lifecycle`"
section in the Yocto Project Development Tasks Manual.
README File
@@ -589,7 +590,7 @@ filenames correspond to the values to which users have set the
These files define things such as the kernel package to use
(:term:`PREFERRED_PROVIDER` of
:ref:`virtual/kernel <dev-manual/new-recipe:using virtual providers>`),
:ref:`virtual/kernel <dev-manual/common-tasks:using virtual providers>`),
the hardware drivers to include in different types of images, any
special software components that are needed, any bootloader information,
and also any special image format requirements.
@@ -757,7 +758,7 @@ workflow.
OpenEmbedded build system knows about. For more information on
layers, see the ":ref:`overview-manual/yp-intro:the yocto project layer model`"
section in the Yocto Project Overview and Concepts Manual. You can also
reference the ":ref:`dev-manual/layers:understanding and creating layers`"
reference the ":ref:`dev-manual/common-tasks:understanding and creating layers`"
section in the Yocto Project Development Tasks Manual. For more
information on BSP layers, see the ":ref:`bsp-guide/bsp:bsp layers`"
section.
@@ -816,7 +817,7 @@ workflow.
key configuration files are configured appropriately: the
``conf/local.conf`` and the ``conf/bblayers.conf`` file. You must
make the OpenEmbedded build system aware of your new layer. See the
":ref:`dev-manual/layers:enabling your layer`"
":ref:`dev-manual/common-tasks:enabling your layer`"
section in the Yocto Project Development Tasks Manual for information
on how to let the build system know about your new layer.
@@ -845,7 +846,7 @@ Before looking at BSP requirements, you should consider the following:
layer that can be added to the Yocto Project. For guidelines on
creating a layer that meets these base requirements, see the
":ref:`bsp-guide/bsp:bsp layers`" section in this manual and the
":ref:`dev-manual/layers:understanding and creating layers`"
":ref:`dev-manual/common-tasks:understanding and creating layers`"
section in the Yocto Project Development Tasks Manual.
- The requirements in this section apply regardless of how you package
@@ -927,7 +928,7 @@ Yocto Project:
- The name and contact information for the BSP layer maintainer.
This is the person to whom patches and questions should be sent.
For information on how to find the right person, see the
":ref:`dev-manual/changes:submitting a change to the yocto project`"
":ref:`dev-manual/common-tasks:submitting a change to the yocto project`"
section in the Yocto Project Development Tasks Manual.
- Instructions on how to build the BSP using the BSP layer.
@@ -1013,7 +1014,7 @@ the following:
- Create a ``*.bbappend`` file for the modified recipe. For information on using
append files, see the
":ref:`dev-manual/layers:appending other layers metadata with your layer`"
":ref:`dev-manual/common-tasks:appending other layers metadata with your layer`"
section in the Yocto Project Development Tasks Manual.
- Ensure your directory structure in the BSP layer that supports your
@@ -1117,7 +1118,7 @@ list describes them in order of preference:
Specifying the matching license string signifies that you agree to
the license. Thus, the build system can build the corresponding
recipe and include the component in the image. See the
":ref:`dev-manual/licenses:enabling commercially licensed recipes`"
":ref:`dev-manual/common-tasks:enabling commercially licensed recipes`"
section in the Yocto Project Development Tasks Manual for details on
how to use these variables.
@@ -1169,7 +1170,7 @@ Use these steps to create a BSP layer:
``create-layer`` subcommand to create a new general layer. For
instructions on how to create a general layer using the
``bitbake-layers`` script, see the
":ref:`dev-manual/layers:creating a general layer using the \`\`bitbake-layers\`\` script`"
":ref:`dev-manual/common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`"
section in the Yocto Project Development Tasks Manual.
- *Create a Layer Configuration File:* Every layer needs a layer
@@ -1179,14 +1180,14 @@ Use these steps to create a BSP layer:
:yocto_git:`Source Repositories <>`. To get examples of what you need
in your configuration file, locate a layer (e.g. "meta-ti") and
examine the
:yocto_git:`local.conf </meta-ti/tree/meta-ti-bsp/conf/layer.conf>`
:yocto_git:`local.conf </meta-ti/tree/conf/layer.conf>`
file.
- *Create a Machine Configuration File:* Create a
``conf/machine/bsp_root_name.conf`` file. See
:yocto_git:`meta-yocto-bsp/conf/machine </poky/tree/meta-yocto-bsp/conf/machine>`
for sample ``bsp_root_name.conf`` files. There are other samples such as
:yocto_git:`meta-ti </meta-ti/tree/meta-ti-bsp/conf/machine>`
:yocto_git:`meta-ti </meta-ti/tree/conf/machine>`
and
:yocto_git:`meta-freescale </meta-freescale/tree/conf/machine>`
from other vendors that have more specific machine and tuning
@@ -1209,7 +1210,7 @@ BSP Layer Configuration Example
-------------------------------
The layer's ``conf`` directory contains the ``layer.conf`` configuration
file. In this example, the ``conf/layer.conf`` file is the following::
file. In this example, the ``conf/layer.conf`` is the following::
# We have a conf and classes directory, add to BBPATH
BBPATH .= ":${LAYERDIR}"
@@ -1229,7 +1230,7 @@ configuration files is to examine various files for BSP from the
:yocto_git:`Source Repositories <>`.
For a detailed description of this particular layer configuration file,
see ":ref:`step 3 <dev-manual/layers:creating your own layer>`"
see ":ref:`step 3 <dev-manual/common-tasks:creating your own layer>`"
in the discussion that describes how to create layers in the Yocto
Project Development Tasks Manual.
@@ -1356,7 +1357,7 @@ Project Reference Manual.
- :term:`EXTRA_IMAGECMD`:
Specifies additional options for image creation commands. In this
example, the "-lnp " option is used when creating the
:wikipedia:`JFFS2 <JFFS2>` image.
`JFFS2 <https://en.wikipedia.org/wiki/JFFS2>`__ image.
- :term:`WKS_FILE`: The location of
the :ref:`Wic kickstart <ref-manual/kickstart:openembedded kickstart (\`\`.wks\`\`) reference>` file used
@@ -1365,7 +1366,7 @@ Project Reference Manual.
- :term:`IMAGE_INSTALL`:
Specifies packages to install into an image through the
:ref:`ref-classes-image` class. Recipes
:ref:`image <ref-classes-image>` class. Recipes
use the :term:`IMAGE_INSTALL` variable.
- ``do_image_wic[depends]``: A task that is constructed during the

View File

@@ -106,7 +106,6 @@ extlinks = {
'oe_wiki': ('https://www.openembedded.org/wiki%s', None),
'oe_layerindex': ('https://layers.openembedded.org%s', None),
'oe_layer': ('https://layers.openembedded.org/layerindex/branch/master/layer%s', None),
'wikipedia': ('https://en.wikipedia.org/wiki/%s', None),
}
# Intersphinx config to use cross reference with BitBake user manual

View File

@@ -1,59 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Flashing Images Using ``bmaptool``
**********************************
A fast and easy way to flash an image to a bootable device is to use
Bmaptool, which is integrated into the OpenEmbedded build system.
Bmaptool is a generic tool that creates a file's block map (bmap) and
then uses that map to copy the file. As compared to traditional tools
such as dd or cp, Bmaptool can copy (or flash) large files like raw
system image files much faster.
.. note::
- If you are using Ubuntu or Debian distributions, you can install
the ``bmap-tools`` package using the following command and then
use the tool without specifying ``PATH`` even from the root
account::
$ sudo apt install bmap-tools
- If you are unable to install the ``bmap-tools`` package, you will
need to build Bmaptool before using it. Use the following command::
$ bitbake bmap-tools-native
Following is an example that shows how to flash a Wic image. Realize
that while this example uses a Wic image, you can use Bmaptool to flash
any type of image. Use these steps to flash an image using Bmaptool:
#. *Update your local.conf File:* You need to have the following set
in your ``local.conf`` file before building your image::
IMAGE_FSTYPES += "wic wic.bmap"
#. *Get Your Image:* Either have your image ready (pre-built with the
:term:`IMAGE_FSTYPES`
setting previously mentioned) or take the step to build the image::
$ bitbake image
#. *Flash the Device:* Flash the device with the image by using Bmaptool
depending on your particular setup. The following commands assume the
image resides in the :term:`Build Directory`'s ``deploy/images/`` area:
- If you have write access to the media, use this command form::
$ oe-run-native bmap-tools-native bmaptool copy build-directory/tmp/deploy/images/machine/image.wic /dev/sdX
- If you do not have write access to the media, set your permissions
first and then use the same command form::
$ sudo chmod 666 /dev/sdX
$ oe-run-native bmap-tools-native bmaptool copy build-directory/tmp/deploy/images/machine/image.wic /dev/sdX
For help on the ``bmaptool`` command, use the following command::
$ bmaptool --help

View File

@@ -1,409 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Maintaining Build Output Quality
********************************
Many factors can influence the quality of a build. For example, if you
upgrade a recipe to use a new version of an upstream software package or
you experiment with some new configuration options, subtle changes can
occur that you might not detect until later. Consider the case where
your recipe is using a newer version of an upstream package. In this
case, a new version of a piece of software might introduce an optional
dependency on another library, which is auto-detected. If that library
has already been built when the software is building, the software will
link to the built library and that library will be pulled into your
image along with the new software even if you did not want the library.
The :ref:`ref-classes-buildhistory` class helps you maintain the quality of
your build output. You can use the class to highlight unexpected and possibly
unwanted changes in the build output. When you enable build history, it records
information about the contents of each package and image and then commits that
information to a local Git repository where you can examine the information.
The remainder of this section describes the following:
- :ref:`How you can enable and disable build history <dev-manual/build-quality:enabling and disabling build history>`
- :ref:`How to understand what the build history contains <dev-manual/build-quality:understanding what the build history contains>`
- :ref:`How to limit the information used for build history <dev-manual/build-quality:using build history to gather image information only>`
- :ref:`How to examine the build history from both a command-line and web interface <dev-manual/build-quality:examining build history information>`
Enabling and Disabling Build History
====================================
Build history is disabled by default. To enable it, add the following
:term:`INHERIT` statement and set the :term:`BUILDHISTORY_COMMIT` variable to
"1" at the end of your ``conf/local.conf`` file found in the
:term:`Build Directory`::
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "1"
Enabling build history as
previously described causes the OpenEmbedded build system to collect
build output information and commit it as a single commit to a local
:ref:`overview-manual/development-environment:git` repository.
.. note::
Enabling build history increases your build times slightly,
particularly for images, and increases the amount of disk space used
during the build.
You can disable build history by removing the previous statements from
your ``conf/local.conf`` file.
Understanding What the Build History Contains
=============================================
Build history information is kept in ``${``\ :term:`TOPDIR`\ ``}/buildhistory``
in the :term:`Build Directory` as defined by the :term:`BUILDHISTORY_DIR`
variable. Here is an example abbreviated listing:
.. image:: figures/buildhistory.png
:align: center
:width: 50%
At the top level, there is a ``metadata-revs`` file that lists the
revisions of the repositories for the enabled layers when the build was
produced. The rest of the data splits into separate ``packages``,
``images`` and ``sdk`` directories, the contents of which are described
as follows.
Build History Package Information
---------------------------------
The history for each package contains a text file that has name-value
pairs with information about the package. For example,
``buildhistory/packages/i586-poky-linux/busybox/busybox/latest``
contains the following:
.. code-block:: none
PV = 1.22.1
PR = r32
RPROVIDES =
RDEPENDS = glibc (>= 2.20) update-alternatives-opkg
RRECOMMENDS = busybox-syslog busybox-udhcpc update-rc.d
PKGSIZE = 540168
FILES = /usr/bin/* /usr/sbin/* /usr/lib/busybox/* /usr/lib/lib*.so.* \
/etc /com /var /bin/* /sbin/* /lib/*.so.* /lib/udev/rules.d \
/usr/lib/udev/rules.d /usr/share/busybox /usr/lib/busybox/* \
/usr/share/pixmaps /usr/share/applications /usr/share/idl \
/usr/share/omf /usr/share/sounds /usr/lib/bonobo/servers
FILELIST = /bin/busybox /bin/busybox.nosuid /bin/busybox.suid /bin/sh \
/etc/busybox.links.nosuid /etc/busybox.links.suid
Most of these
name-value pairs correspond to variables used to produce the package.
The exceptions are ``FILELIST``, which is the actual list of files in
the package, and ``PKGSIZE``, which is the total size of files in the
package in bytes.
There is also a file that corresponds to the recipe from which the package
came (e.g. ``buildhistory/packages/i586-poky-linux/busybox/latest``):
.. code-block:: none
PV = 1.22.1
PR = r32
DEPENDS = initscripts kern-tools-native update-rc.d-native \
virtual/i586-poky-linux-compilerlibs virtual/i586-poky-linux-gcc \
virtual/libc virtual/update-alternatives
PACKAGES = busybox-ptest busybox-httpd busybox-udhcpd busybox-udhcpc \
busybox-syslog busybox-mdev busybox-hwclock busybox-dbg \
busybox-staticdev busybox-dev busybox-doc busybox-locale busybox
Finally, for those recipes fetched from a version control system (e.g.,
Git), there is a file that lists source revisions that are specified in
the recipe and the actual revisions used during the build. Listed
and actual revisions might differ when
:term:`SRCREV` is set to
${:term:`AUTOREV`}. Here is an
example, assuming
``buildhistory/packages/qemux86-poky-linux/linux-yocto/latest_srcrev``::
# SRCREV_machine = "38cd560d5022ed2dbd1ab0dca9642e47c98a0aa1"
SRCREV_machine = "38cd560d5022ed2dbd1ab0dca9642e47c98a0aa1"
# SRCREV_meta = "a227f20eff056e511d504b2e490f3774ab260d6f"
SRCREV_meta ="a227f20eff056e511d504b2e490f3774ab260d6f"
You can use the
``buildhistory-collect-srcrevs`` command with the ``-a`` option to
collect the stored :term:`SRCREV` values from build history and report them
in a format suitable for use in global configuration (e.g.,
``local.conf`` or a distro include file) to override floating
:term:`AUTOREV` values to a fixed set of revisions. Here is some example
output from this command::
$ buildhistory-collect-srcrevs -a
# all-poky-linux
SRCREV:pn-ca-certificates = "07de54fdcc5806bde549e1edf60738c6bccf50e8"
SRCREV:pn-update-rc.d = "8636cf478d426b568c1be11dbd9346f67e03adac"
# core2-64-poky-linux
SRCREV:pn-binutils = "87d4632d36323091e731eb07b8aa65f90293da66"
SRCREV:pn-btrfs-tools = "8ad326b2f28c044cb6ed9016d7c3285e23b673c8"
SRCREV_bzip2-tests:pn-bzip2 = "f9061c030a25de5b6829e1abf373057309c734c0"
SRCREV:pn-e2fsprogs = "02540dedd3ddc52c6ae8aaa8a95ce75c3f8be1c0"
SRCREV:pn-file = "504206e53a89fd6eed71aeaf878aa3512418eab1"
SRCREV_glibc:pn-glibc = "24962427071fa532c3c48c918e9d64d719cc8a6c"
SRCREV:pn-gnome-desktop-testing = "e346cd4ed2e2102c9b195b614f3c642d23f5f6e7"
SRCREV:pn-init-system-helpers = "dbd9197569c0935029acd5c9b02b84c68fd937ee"
SRCREV:pn-kmod = "b6ecfc916a17eab8f93be5b09f4e4f845aabd3d1"
SRCREV:pn-libnsl2 = "82245c0c58add79a8e34ab0917358217a70e5100"
SRCREV:pn-libseccomp = "57357d2741a3b3d3e8425889a6b79a130e0fa2f3"
SRCREV:pn-libxcrypt = "50cf2b6dd4fdf04309445f2eec8de7051d953abf"
SRCREV:pn-ncurses = "51d0fd9cc3edb975f04224f29f777f8f448e8ced"
SRCREV:pn-procps = "19a508ea121c0c4ac6d0224575a036de745eaaf8"
SRCREV:pn-psmisc = "5fab6b7ab385080f1db725d6803136ec1841a15f"
SRCREV:pn-ptest-runner = "bcb82804daa8f725b6add259dcef2067e61a75aa"
SRCREV:pn-shared-mime-info = "18e558fa1c8b90b86757ade09a4ba4d6a6cf8f70"
SRCREV:pn-zstd = "e47e674cd09583ff0503f0f6defd6d23d8b718d3"
# qemux86_64-poky-linux
SRCREV_machine:pn-linux-yocto = "20301aeb1a64164b72bc72af58802b315e025c9c"
SRCREV_meta:pn-linux-yocto = "2d38a472b21ae343707c8bd64ac68a9eaca066a0"
# x86_64-linux
SRCREV:pn-binutils-cross-x86_64 = "87d4632d36323091e731eb07b8aa65f90293da66"
SRCREV_glibc:pn-cross-localedef-native = "24962427071fa532c3c48c918e9d64d719cc8a6c"
SRCREV_localedef:pn-cross-localedef-native = "794da69788cbf9bf57b59a852f9f11307663fa87"
SRCREV:pn-debianutils-native = "de14223e5bffe15e374a441302c528ffc1cbed57"
SRCREV:pn-libmodulemd-native = "ee80309bc766d781a144e6879419b29f444d94eb"
SRCREV:pn-virglrenderer-native = "363915595e05fb252e70d6514be2f0c0b5ca312b"
SRCREV:pn-zstd-native = "e47e674cd09583ff0503f0f6defd6d23d8b718d3"
.. note::
Here are some notes on using the ``buildhistory-collect-srcrevs`` command:
- By default, only values where the :term:`SRCREV` was not hardcoded
(usually when :term:`AUTOREV` is used) are reported. Use the ``-a``
option to see all :term:`SRCREV` values.
- The output statements might not have any effect if overrides are
applied elsewhere in the build system configuration. Use the
``-f`` option to add the ``forcevariable`` override to each output
line if you need to work around this restriction (see the sketch
after these notes).
- The script does not apply any special handling when you build for
multiple machines; however, it does place a comment before each set
of values that specifies the triplet to which they belong, as
previously shown (e.g., ``i586-poky-linux``).
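For instance, with the ``-f`` option each output line gains the
``forcevariable`` override. A sketch, based on the output shown above::

   $ buildhistory-collect-srcrevs -a -f
   # all-poky-linux
   SRCREV:pn-ca-certificates:forcevariable = "07de54fdcc5806bde549e1edf60738c6bccf50e8"
   SRCREV:pn-update-rc.d:forcevariable = "8636cf478d426b568c1be11dbd9346f67e03adac"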
Build History Image Information
-------------------------------
The files produced for each image are as follows:
- ``image-files:`` A directory containing selected files from the root
filesystem. The files are defined by
:term:`BUILDHISTORY_IMAGE_FILES`.
- ``build-id.txt:`` Human-readable information about the build
configuration and metadata source revisions. This file contains the
full build header as printed by BitBake.
- ``*.dot:`` Dependency graphs for the image that are compatible with
``graphviz``.
- ``files-in-image.txt:`` A list of files in the image with
permissions, owner, group, size, and symlink information.
- ``image-info.txt:`` A text file containing name-value pairs with
information about the image. See the following listing example for
more information.
- ``installed-package-names.txt:`` A list of installed packages by name
only.
- ``installed-package-sizes.txt:`` A list of installed packages ordered
by size.
- ``installed-packages.txt:`` A list of installed packages with full
package filenames.
.. note::
Installed package information can be gathered and produced even if
package management is disabled for the final image.
Here is an example of ``image-info.txt``:
.. code-block:: none
DISTRO = poky
DISTRO_VERSION = 3.4+snapshot-a0245d7be08f3d24ea1875e9f8872aa6bbff93be
USER_CLASSES = buildstats
IMAGE_CLASSES = qemuboot qemuboot license_image
IMAGE_FEATURES = debug-tweaks
IMAGE_LINGUAS =
IMAGE_INSTALL = packagegroup-core-boot speex speexdsp
BAD_RECOMMENDATIONS =
NO_RECOMMENDATIONS =
PACKAGE_EXCLUDE =
ROOTFS_POSTPROCESS_COMMAND = write_package_manifest; license_create_manifest; cve_check_write_rootfs_manifest; ssh_allow_empty_password; ssh_allow_root_login; postinst_enable_logging; rootfs_update_timestamp; write_image_test_data; empty_var_volatile; sort_passwd; rootfs_reproducible;
IMAGE_POSTPROCESS_COMMAND = buildhistory_get_imageinfo ;
IMAGESIZE = 9265
Other than ``IMAGESIZE``,
which is the total size of the files in the image in Kbytes, the
name-value pairs are variables that may have influenced the content of
the image. This information is often useful when you are trying to
determine why a change in the package or file listings has occurred.
Using Build History to Gather Image Information Only
----------------------------------------------------
As you can see, build history produces image information, including
dependency graphs, so you can see why something was pulled into the
image. If you are just interested in this information and not interested
in collecting specific package or SDK information, you can enable
writing only image information without any history by adding the
following to your ``conf/local.conf`` file found in the
:term:`Build Directory`::
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "0"
BUILDHISTORY_FEATURES = "image"
Here, you set the
:term:`BUILDHISTORY_FEATURES`
variable to use the image feature only.
Build History SDK Information
-----------------------------
Build history collects information on the contents of SDKs (e.g.
``bitbake -c populate_sdk imagename``) similar to the information it
collects for images. Furthermore, this information differs depending on
whether an extensible or standard SDK is being produced.
The following list shows the files produced for SDKs:
- ``files-in-sdk.txt:`` A list of files in the SDK with permissions,
owner, group, size, and symlink information. This list includes both
the host and target parts of the SDK.
- ``sdk-info.txt:`` A text file containing name-value pairs with
information about the SDK. See the following listing example for more
information.
- ``sstate-task-sizes.txt:`` A text file containing name-value pairs
with information about task group sizes (e.g. the total size of all
:ref:`ref-tasks-populate_sysroot` tasks). The ``sstate-task-sizes.txt`` file exists
only when an extensible SDK is created.
- ``sstate-package-sizes.txt:`` A text file containing name-value pairs
with information for the shared-state packages and sizes in the SDK.
The ``sstate-package-sizes.txt`` file exists only when an extensible
SDK is created.
- ``sdk-files:`` A folder that contains copies of the files mentioned
in ``BUILDHISTORY_SDK_FILES`` if the files are present in the output.
Additionally, the default value of ``BUILDHISTORY_SDK_FILES`` is
specific to the extensible SDK although you can set it differently if
you would like to pull in specific files from the standard SDK.
The default files are ``conf/local.conf``, ``conf/bblayers.conf``,
``conf/auto.conf``, ``conf/locked-sigs.inc``, and
``conf/devtool.conf``. Thus, for an extensible SDK, these files get
copied into the ``sdk-files`` directory.
- The following information appears under each of the ``host`` and
``target`` directories for the portions of the SDK that run on the
host and on the target, respectively:
.. note::
The following files for the most part are empty when producing an
extensible SDK because this type of SDK is not constructed from
packages as is the standard SDK.
- ``depends.dot:`` Dependency graph for the SDK that is compatible
with ``graphviz``.
- ``installed-package-names.txt:`` A list of installed packages by
name only.
- ``installed-package-sizes.txt:`` A list of installed packages
ordered by size.
- ``installed-packages.txt:`` A list of installed packages with full
package filenames.
Here is an example of ``sdk-info.txt``:
.. code-block:: none
DISTRO = poky
DISTRO_VERSION = 1.3+snapshot-20130327
SDK_NAME = poky-glibc-i686-arm
SDK_VERSION = 1.3+snapshot
SDKMACHINE =
SDKIMAGE_FEATURES = dev-pkgs dbg-pkgs
BAD_RECOMMENDATIONS =
SDKSIZE = 352712
Other than ``SDKSIZE``, which is
the total size of the files in the SDK in Kbytes, the name-value pairs
are variables that might have influenced the content of the SDK. This
information is often useful when you are trying to determine why a
change in the package or file listings has occurred.
Examining Build History Information
-----------------------------------
You can examine build history output from the command line or from a web
interface.
To see any changes that have occurred (assuming you have
:term:`BUILDHISTORY_COMMIT` = "1"),
you can simply use any Git command that allows you to view the history
of a repository. Here is one method::
$ git log -p
You need to realize,
however, that this method does show changes that are not significant
(e.g. a package's size changing by a few bytes).
There is a command-line tool called ``buildhistory-diff``, though,
that queries the Git repository and prints just the differences that
might be significant in human-readable form. Here is an example::
$ poky/scripts/buildhistory-diff . HEAD^
Changes to images/qemux86_64/glibc/core-image-minimal (files-in-image.txt):
   /etc/anotherpkg.conf was added
   /sbin/anotherpkg was added
Changes to images/qemux86_64/glibc/core-image-minimal (installed-package-names.txt):
   anotherpkg was added
packages/qemux86_64-poky-linux/v86d: PACKAGES: added "v86d-extras"
 * PR changed from "r0" to "r1"
 * PV changed from "0.1.10" to "0.1.12"
packages/qemux86_64-poky-linux/v86d/v86d: PKGSIZE changed from 110579 to 144381 (+30%)
 * PR changed from "r0" to "r1"
 * PV changed from "0.1.10" to "0.1.12"
.. note::
The ``buildhistory-diff`` tool requires the ``GitPython``
package. Be sure to install it using ``pip3`` as follows::
$ pip3 install GitPython --user
Alternatively, you can install ``python3-git`` using the appropriate
distribution package manager (e.g. ``apt``, ``dnf``, or ``zypper``).
To see changes to the build history using a web interface, follow the
instructions in the ``README`` file
:yocto_git:`here </buildhistory-web/>`.
Here is a sample screenshot of the interface:
.. image:: figures/buildhistory-web.png
:width: 100%

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Building
********
This section describes various build procedures, such as the steps
needed for a simple build, building a target for multiple configurations,
generating an image for more than one machine, and so forth.
Building a Simple Image
=======================
In the development environment, you need to build an image whenever you
change hardware support, add or change system libraries, or add or
change services that have dependencies. There are several methods that allow
you to build an image within the Yocto Project. This section presents
the basic steps you need to build a simple image using BitBake from a
build host running Linux.
.. note::
- For information on how to build an image using
:term:`Toaster`, see the
:doc:`/toaster-manual/index`.
- For information on how to use ``devtool`` to build images, see the
":ref:`sdk-manual/extensible:using \`\`devtool\`\` in your sdk workflow`"
section in the Yocto Project Application Development and the
Extensible Software Development Kit (eSDK) manual.
- For a quick example on how to build an image using the
OpenEmbedded build system, see the
:doc:`/brief-yoctoprojectqs/index` document.
The build process creates an entire Linux distribution from source and
places it in your :term:`Build Directory` under ``tmp/deploy/images``. For
detailed information on the build process using BitBake, see the
":ref:`overview-manual/concepts:images`" section in the Yocto Project Overview
and Concepts Manual.
The following figure and list overview the build process:
.. image:: figures/bitbake-build-flow.png
:width: 100%
#. *Set up Your Host Development System to Support Development Using the
Yocto Project*: See the ":doc:`start`" section for options on how to get a
build host ready to use the Yocto Project.
#. *Initialize the Build Environment:* Initialize the build environment
by sourcing the build environment script (i.e.
:ref:`structure-core-script`)::
$ source oe-init-build-env [build_dir]
When you use the initialization script, the OpenEmbedded build system
uses ``build`` as the default :term:`Build Directory` in your current work
directory. You can use a `build_dir` argument with the script to
specify a different :term:`Build Directory`.
.. note::
A common practice is to use a different :term:`Build Directory` for
different targets; for example, ``~/build/x86`` for a ``qemux86``
target, and ``~/build/arm`` for a ``qemuarm`` target. In any
event, it's typically cleaner to locate the :term:`Build Directory`
somewhere outside of your source directory.
#. *Make Sure Your* ``local.conf`` *File is Correct*: Ensure the
``conf/local.conf`` configuration file, which is found in the
:term:`Build Directory`, is set up how you want it. This file defines many
aspects of the build environment including the target machine architecture
through the :term:`MACHINE` variable, the packaging format used during
the build (:term:`PACKAGE_CLASSES`), and a centralized tarball download
directory through the :term:`DL_DIR` variable (see the sketch after
these steps).
#. *Build the Image:* Build the image using the ``bitbake`` command::
$ bitbake target
.. note::
For information on BitBake, see the :doc:`bitbake:index`.
The target is the name of the recipe you want to build. Common
targets are the images in ``meta/recipes-core/images``,
``meta/recipes-sato/images``, and so forth all found in the
:term:`Source Directory`. Alternatively, the target
can be the name of a recipe for a specific piece of software such as
BusyBox. For more details about the images the OpenEmbedded build
system supports, see the
":ref:`ref-manual/images:Images`" chapter in the Yocto
Project Reference Manual.
As an example, the following command builds the
``core-image-minimal`` image::
$ bitbake core-image-minimal
Once an
image has been built, it often needs to be installed. The images and
kernels built by the OpenEmbedded build system are placed in the
:term:`Build Directory` in ``tmp/deploy/images``. For information on how to
run pre-built images such as ``qemux86`` and ``qemuarm``, see the
:doc:`/sdk-manual/index` manual. For
information about how to install these images, see the documentation
for your particular board or machine.
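Here is the sketch referenced in step 3: a minimal ``local.conf``
fragment using common default values. Adjust them for your own target
and environment::

   MACHINE ?= "qemux86-64"
   PACKAGE_CLASSES ?= "package_rpm"
   DL_DIR ?= "${TOPDIR}/downloads"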
Building Images for Multiple Targets Using Multiple Configurations
==================================================================
You can use a single ``bitbake`` command to build multiple images or
packages for different targets where each image or package requires a
different configuration (multiple configuration builds). The builds, in
this scenario, are sometimes referred to as "multiconfigs", and this
section uses that term throughout.
This section describes how to set up for multiple configuration builds
and how to account for cross-build dependencies between the
multiconfigs.
Setting Up and Running a Multiple Configuration Build
-----------------------------------------------------
To accomplish a multiple configuration build, you must define each
target's configuration separately using a parallel configuration file in
the :term:`Build Directory` or configuration directory within a layer, and you
must follow a required file hierarchy. Additionally, you must enable the
multiple configuration builds in your ``local.conf`` file.
Follow these steps to set up and execute multiple configuration builds:
- *Create Separate Configuration Files*: You need to create a single
configuration file for each build target (each multiconfig).
The configuration definitions are implementation dependent but often
each configuration file will define the machine and the
temporary directory BitBake uses for the build. Whether the same
temporary directory (:term:`TMPDIR`) can be shared will depend on what is
similar and what is different between the configurations. Multiple
:term:`MACHINE` targets can share the same :term:`TMPDIR` as long as the
rest of the configuration is the same; multiple :term:`DISTRO` settings
would need separate :term:`TMPDIR` directories.
For example, consider a scenario with two different multiconfigs for the same
:term:`MACHINE`: "qemux86" built
for two distributions such as "poky" and "poky-lsb". In this case,
you would need to use separate :term:`TMPDIR` directories.
Here is an example showing the minimal statements needed in a
configuration file for a "qemux86" target whose temporary build
directory is ``tmpmultix86``::
MACHINE = "qemux86"
TMPDIR = "${TOPDIR}/tmpmultix86"
The location for these multiconfig configuration files is specific.
They must reside in the current :term:`Build Directory` in a sub-directory of
``conf`` named ``multiconfig`` or within a layer's ``conf`` directory
under a directory named ``multiconfig``. Following is an example that defines
two configuration files for the "x86" and "arm" multiconfigs:
.. image:: figures/multiconfig_files.png
:align: center
:width: 50%
The usual :term:`BBPATH` search path is used to locate multiconfig files in
a similar way to other conf files.
- *Add the BitBake Multi-configuration Variable to the Local
Configuration File*: Use the
:term:`BBMULTICONFIG`
variable in your ``conf/local.conf`` configuration file to specify
each multiconfig. Continuing with the example from the previous
figure, the :term:`BBMULTICONFIG` variable needs to enable two
multiconfigs: "x86" and "arm" by specifying each configuration file::
BBMULTICONFIG = "x86 arm"
.. note::
A "default" configuration already exists by definition. This
configuration is named: "" (i.e. empty string) and is defined by
the variables coming from your ``local.conf``
file. Consequently, the previous example actually adds two
additional configurations to your build: "arm" and "x86" along
with "".
- *Launch BitBake*: Use the following BitBake command form to launch
the multiple configuration build::
$ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]
For the example in this section, the following command applies::
$ bitbake mc:x86:core-image-minimal mc:arm:core-image-sato mc::core-image-base
The previous BitBake command builds a ``core-image-minimal`` image
that is configured through the ``x86.conf`` configuration file, a
``core-image-sato`` image that is configured through the ``arm.conf``
configuration file and a ``core-image-base`` that is configured
through your ``local.conf`` configuration file.
.. note::
Support for multiple configuration builds in the Yocto Project &DISTRO;
(&DISTRO_NAME;) Release does not include Shared State (sstate)
optimizations. Consequently, if a build uses the same object twice
in, for example, two different :term:`TMPDIR`
directories, the build either loads from an existing sstate cache for
that build at the start or builds the object fresh.
Enabling Multiple Configuration Build Dependencies
--------------------------------------------------
Sometimes dependencies can exist between targets (multiconfigs) in a
multiple configuration build. For example, suppose that in order to
build a ``core-image-sato`` image for an "x86" multiconfig, the root
filesystem of an "arm" multiconfig must exist. This dependency is
essentially that the
:ref:`ref-tasks-image` task in the
``core-image-sato`` recipe depends on the completion of the
:ref:`ref-tasks-rootfs` task of the
``core-image-minimal`` recipe.
To enable dependencies in a multiple configuration build, you must
declare the dependencies in the recipe using the following statement
form::
task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"
To better show how to use this statement, consider the example scenario
from the first paragraph of this section. The following statement needs
to be added to the recipe that builds the ``core-image-sato`` image::
do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_rootfs"
In this example, the `from_multiconfig` is "x86". The `to_multiconfig` is "arm". The
task on which the :ref:`ref-tasks-image` task in the recipe depends is the
:ref:`ref-tasks-rootfs` task from the ``core-image-minimal`` recipe associated
with the "arm" multiconfig.
Once you set up this dependency, you can build the "x86" multiconfig
using a BitBake command as follows::
$ bitbake mc:x86:core-image-sato
This command executes all the tasks needed to create the
``core-image-sato`` image for the "x86" multiconfig. Because of the
dependency, BitBake also executes through the :ref:`ref-tasks-rootfs` task for the
"arm" multiconfig build.
Having a recipe depend on the root filesystem of another build might not
seem that useful. Consider this change to the statement in the
``core-image-sato`` recipe::
do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_image"
In this case, BitBake must
create the ``core-image-minimal`` image for the "arm" build since the
"x86" build depends on it.
Because "x86" and "arm" are enabled for multiple configuration builds
and have separate configuration files, BitBake places the artifacts for
each build in the respective temporary build directories (i.e.
:term:`TMPDIR`).
Building an Initial RAM Filesystem (Initramfs) Image
====================================================
An initial RAM filesystem (:term:`Initramfs`) image provides a temporary root
filesystem used for early system initialization, typically providing tools and
loading modules needed to locate and mount the final root filesystem.
Follow these steps to create an :term:`Initramfs` image:
#. *Create the Initramfs Image Recipe:* You can reference the
``core-image-minimal-initramfs.bb`` recipe found in the
``meta/recipes-core`` directory of the :term:`Source Directory`
as an example from which to work.
#. *Decide if You Need to Bundle the Initramfs Image Into the Kernel
Image:* If you want the :term:`Initramfs` image that is built to be bundled
in with the kernel image, set the :term:`INITRAMFS_IMAGE_BUNDLE`
variable to ``"1"`` in your ``local.conf`` configuration file and set the
:term:`INITRAMFS_IMAGE` variable in the recipe that builds the kernel image
(see the sketch after these steps).
Setting the :term:`INITRAMFS_IMAGE_BUNDLE` flag causes the :term:`Initramfs`
image to be unpacked into the ``${B}/usr/`` directory. The unpacked
:term:`Initramfs` image is then passed to the kernel's ``Makefile`` using the
:term:`CONFIG_INITRAMFS_SOURCE` variable, allowing the :term:`Initramfs`
image to be built into the kernel normally.
#. *Optionally Add Items to the Initramfs Image Through the Initramfs
Image Recipe:* If you add items to the :term:`Initramfs` image by way of its
recipe, you should use :term:`PACKAGE_INSTALL` rather than
:term:`IMAGE_INSTALL`. :term:`PACKAGE_INSTALL` gives more direct control of
what is added to the image as compared to the defaults you might not
necessarily want that are set by the :ref:`ref-classes-image`
or :ref:`ref-classes-core-image` classes.
#. *Build the Kernel Image and the Initramfs Image:* Build your kernel
image using BitBake. Because the :term:`Initramfs` image recipe is a
dependency of the kernel image, the :term:`Initramfs` image is built as well
and bundled with the kernel image if you used the
:term:`INITRAMFS_IMAGE_BUNDLE` variable described earlier.
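Here is the sketch referenced in step 2, showing the two variables
involved in bundling. The image choice is illustrative; any
:term:`Initramfs` image recipe works::

   # In conf/local.conf:
   INITRAMFS_IMAGE_BUNDLE = "1"

   # In the kernel recipe (or a .bbappend for it):
   INITRAMFS_IMAGE = "core-image-minimal-initramfs"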
Bundling an Initramfs Image From a Separate Multiconfig
-------------------------------------------------------
There may be a case where we want to build an :term:`Initramfs` image which does not
inherit the same distro policy as our main image, for example, we may want
our main image to use ``TCLIBC="glibc"``, but to use ``TCLIBC="musl"`` in our :term:`Initramfs`
image to keep a smaller footprint. However, by performing the steps mentioned
above, the :term:`Initramfs` image will inherit ``TCLIBC="glibc"`` without allowing us
to override it.
To achieve this, you need to perform some additional steps:
#. *Create a multiconfig for your Initramfs image:* You can perform the steps
on ":ref:`dev-manual/building:building images for multiple targets using multiple configurations`" to create a separate multiconfig.
For the sake of simplicity, let's assume this multiconfig file is called ``initramfscfg.conf`` and
contains the variables::
TMPDIR="${TOPDIR}/tmp-initramfscfg"
TCLIBC="musl"
#. *Set additional Initramfs variables on your main configuration:*
Additionally, on your main configuration (``local.conf``) you need to set the
variables::
INITRAMFS_MULTICONFIG = "initramfscfg"
INITRAMFS_DEPLOY_DIR_IMAGE = "${TOPDIR}/tmp-initramfscfg/deploy/images/${MACHINE}"
The variables :term:`INITRAMFS_MULTICONFIG` and :term:`INITRAMFS_DEPLOY_DIR_IMAGE`
are used to create a multiconfig dependency from the kernel to the :term:`INITRAMFS_IMAGE`
to be built coming from the ``initramfscfg`` multiconfig, and to let the
build system know where the :term:`INITRAMFS_IMAGE` will be located.
Building a system with such configuration will build the kernel using the
main configuration but the :ref:`ref-tasks-bundle_initramfs` task will grab the
selected :term:`INITRAMFS_IMAGE` from :term:`INITRAMFS_DEPLOY_DIR_IMAGE`
instead, resulting in a musl based :term:`Initramfs` image bundled in the kernel
but a glibc based main image.
The same approach can be used to avoid inheriting :term:`DISTRO_FEATURES` on
:term:`INITRAMFS_IMAGE`, or to build a different :term:`DISTRO` for it, such as ``poky-tiny``.
Building a Tiny System
======================
Very small distributions have some significant advantages such as
requiring less on-die or in-package memory (cheaper), better performance
through efficient cache usage, lower power requirements due to less
memory, faster boot times, and reduced development overhead. Some
real-world examples where a very small distribution gives you distinct
advantages are digital cameras, medical devices, and small headless
systems.
This section presents information that shows you how you can trim your
distribution to even smaller sizes than the ``poky-tiny`` distribution,
which is around 5 Mbytes and can be built out-of-the-box using the
Yocto Project.
Tiny System Overview
--------------------
The following list presents the overall steps you need to consider and
perform to create distributions with smaller root filesystems, achieve
faster boot times, maintain your critical functionality, and avoid
initial RAM disks:
- :ref:`Determine your goals and guiding principles
<dev-manual/building:goals and guiding principles>`
- :ref:`dev-manual/building:understand what contributes to your image size`
- :ref:`Reduce the size of the root filesystem
<dev-manual/building:trim the root filesystem>`
- :ref:`Reduce the size of the kernel <dev-manual/building:trim the kernel>`
- :ref:`dev-manual/building:remove package management requirements`
- :ref:`dev-manual/building:look for other ways to minimize size`
- :ref:`dev-manual/building:iterate on the process`
Goals and Guiding Principles
----------------------------
Before you can reach your destination, you need to know where you are
going. Here is an example list that you can use as a guide when creating
very small distributions:
- Determine how much space you need (e.g. a kernel that is 1 Mbyte or
less and a root filesystem that is 3 Mbytes or less).
- Find the areas that are currently taking 90% of the space and
concentrate on reducing those areas.
- Do not create any difficult "hacks" to achieve your goals.
- Leverage the device-specific options.
- Work in a separate layer so that you keep changes isolated. For
information on how to create layers, see the
":ref:`dev-manual/layers:understanding and creating layers`" section.
Understand What Contributes to Your Image Size
----------------------------------------------
It is easiest to have something to start with when creating your own
distribution. You can use the Yocto Project out-of-the-box to create the
``poky-tiny`` distribution. Ultimately, you will want to make changes in
your own distribution that are likely modeled after ``poky-tiny``.
.. note::
To use ``poky-tiny`` in your build, set the :term:`DISTRO` variable in your
``local.conf`` file to "poky-tiny" as described in the
":ref:`dev-manual/custom-distribution:creating your own distribution`"
section.
Understanding some memory concepts will help you reduce the system size.
Memory consists of static, dynamic, and temporary memory. Static memory
is the TEXT (code), DATA (initialized data in the code), and BSS
(uninitialized data) sections. Dynamic memory represents memory that is
allocated at runtime: stacks, hash tables, and so forth. Temporary
memory is recovered after the boot process. This memory consists of
memory used for decompressing the kernel and for the ``__init``
functions.
To help you see where you currently are with kernel and root filesystem
sizes, you can use two tools found in the :term:`Source Directory`
in the
``scripts/tiny/`` directory:
- ``ksize.py``: Reports component sizes for the kernel build objects.
- ``dirsize.py``: Reports component sizes for the root filesystem.
This next tool and command help you organize configuration fragments and
view file dependencies in a human-readable form:
- ``merge_config.sh``: Helps you manage configuration files and
fragments within the kernel. With this tool, you can merge individual
configuration fragments together. The tool allows you to make
overrides and warns you of any missing configuration options. The
tool is ideal for allowing you to iterate on configurations, create
minimal configurations, and create configuration files for different
machines without having to duplicate your process.
The ``merge_config.sh`` script is part of the Linux Yocto kernel Git
repositories (i.e. ``linux-yocto-3.14``, ``linux-yocto-3.10``,
``linux-yocto-3.8``, and so forth) in the ``scripts/kconfig``
directory.
For more information on configuration fragments, see the
":ref:`kernel-dev/common:creating configuration fragments`"
section in the Yocto Project Linux Kernel Development Manual.
- ``bitbake -u taskexp -g bitbake_target``: Using the BitBake command
with these options brings up a Dependency Explorer from which you can
view file dependencies. Understanding these dependencies allows you
to make informed decisions when cutting out various pieces of the
kernel and root filesystem.
Trim the Root Filesystem
------------------------
The root filesystem is made up of packages for booting, libraries, and
applications. To change things, you can configure how the packaging
happens, which changes the way you build them. You can also modify the
filesystem itself or select a different filesystem.
First, find out what is hogging your root filesystem by running the
``dirsize.py`` script from your root directory::
$ cd root-directory-of-image
$ dirsize.py 100000 > dirsize-100k.log
$ cat dirsize-100k.log
You can apply a filter to the script to ignore files
under a certain size. The previous example filters out any files below
100 Kbytes. The sizes reported by the tool are uncompressed, and thus
will be smaller by a relatively constant factor in a compressed root
filesystem. When you examine your log file, you can focus on areas of
the root filesystem that take up large amounts of memory.
You need to be sure that what you eliminate does not cripple the
functionality you need. One way to see how packages relate to each other
is by using the Dependency Explorer UI with the BitBake command::
$ cd image-directory
$ bitbake -u taskexp -g image
Use the interface to
select potential packages you wish to eliminate and see their dependency
relationships.
When deciding how to reduce the size, get rid of packages that result in
minimal impact on the feature set. For example, you might not need a VGA
display. Or, you might be able to get by with ``devtmpfs`` and ``mdev``
instead of ``udev``.
Use your ``local.conf`` file to make changes. For example, to eliminate
``udev`` and ``glib``, set the following in the local configuration
file::
VIRTUAL-RUNTIME_dev_manager = ""
Finally, you should consider exactly the type of root filesystem you
need to meet your needs while also reducing its size. For example,
consider ``cramfs``, ``squashfs``, ``ubifs``, ``ext2``, or an
:term:`Initramfs` using ``initramfs``. Be aware that ``ext3`` requires a 1
Mbyte journal. If you are okay with running read-only, you do not need
this journal.
.. note::
After each round of elimination, you need to rebuild your system and
then use the tools to see the effects of your reductions.
Trim the Kernel
---------------
The kernel is built by including policies for hardware-independent
aspects. What subsystems do you enable? For what architecture are you
building? Which drivers do you build by default?
.. note::
You can modify the kernel source if you want to help with boot time.
Run the ``ksize.py`` script from the top-level Linux build directory to
get an idea of what is making up the kernel::
$ cd top-level-linux-build-directory
$ ksize.py > ksize.log
$ cat ksize.log
When you examine the log, you will see how much space is taken up with
the built-in ``.o`` files for drivers, networking, core kernel files,
filesystem, sound, and so forth. The sizes reported by the tool are
uncompressed, and thus will be smaller by a relatively constant factor
in a compressed kernel image. Look to reduce the areas that are large,
keeping in mind the "90% rule" mentioned earlier.
To examine, or drill down, into any particular area, use the ``-d``
option with the script::
$ ksize.py -d > ksize.log
Using this option
breaks out the individual file information for each area of the kernel
(e.g. drivers, networking, and so forth).
Use your log file to see what you can eliminate from the kernel based on
features you can let go. For example, if you are not going to need
sound, you do not need any drivers that support sound.
After figuring out what to eliminate, you need to reconfigure the kernel
to reflect those changes during the next build. You could run
``menuconfig`` and make all your changes at once. However, that makes it
difficult to see the effects of your individual eliminations and also
makes it difficult to replicate the changes for perhaps another target
device. A better method is to start with no configurations using
``allnoconfig``, create configuration fragments for individual changes,
and then manage the fragments into a single configuration file using
``merge_config.sh``. The tool makes it easy for you to iterate using the
configuration change and build cycle.
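A minimal sketch of that flow, run from the top-level kernel source
directory; the fragment names are hypothetical::

   $ make allnoconfig
   $ scripts/kconfig/merge_config.sh .config no-sound.cfg no-vga.cfg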
Each time you make configuration changes, you need to rebuild the kernel
and check to see what impact your changes had on the overall size.
Remove Package Management Requirements
--------------------------------------
Packaging requirements add size to the image. One way to reduce the size
of the image is to remove all the packaging requirements from the image.
This reduction includes both removing the package manager and its unique
dependencies as well as removing the package management data itself.
To eliminate all the packaging requirements for an image, be sure that
"package-management" is not part of your
:term:`IMAGE_FEATURES`
statement for the image. When you remove this feature, you are removing
the package manager as well as its dependencies from the root
filesystem.
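As a sketch of the relevant configuration, the ``:remove`` override is
one way to guarantee the feature stays out of an image::

   IMAGE_FEATURES:remove = "package-management"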
Look for Other Ways to Minimize Size
------------------------------------
Depending on your particular circumstances, other areas that you can
trim likely exist. The key to finding these areas is through tools and
methods described here combined with experimentation and iteration. Here
are a couple of areas to experiment with:
- ``glibc``: In general, follow this process:
#. Remove ``glibc`` features from
:term:`DISTRO_FEATURES`
that you think you do not need.
#. Build your distribution.
#. If the build fails due to missing symbols in a package, determine
if you can reconfigure the package to not need those features. For
example, change the configuration to not support wide character
support as is done for ``ncurses``. Or, if support for those
characters is needed, determine what ``glibc`` features provide
the support and restore the configuration.
#. Rebuild and repeat the process.
- ``busybox``: For BusyBox, use a process similar to the one described for
``glibc``. A difference is that you will need to boot the resulting system
to see if you are able to do everything you expect from the running
system. You also need to be sure to integrate configuration fragments into
BusyBox because BusyBox handles its own core features and then allows
you to add configuration fragments on top.
Iterate on the Process
----------------------
If you have not reached your goals on system size, you need to iterate
on the process. The process is the same. Use the tools and see just what
is taking up 90% of the root filesystem and the kernel. Decide what you
can eliminate without limiting your device beyond what you need.
Depending on your system, a good place to look might be BusyBox, which
provides a stripped-down version of Unix tools in a single executable
file. You might be able to drop virtual terminal services or perhaps
IPv6.
Building Images for More than One Machine
=========================================
A common scenario developers face is creating images for several
different machines that use the same software environment. In this
situation, it is tempting to set the tunings and optimization flags for
each build specifically for the targeted hardware (i.e. "maxing out" the
tunings). Doing so can considerably add to build times and package feed
maintenance collectively for the machines. For example, selecting tunes
that are extremely specific to a CPU core used in a system might enable
some micro optimizations in GCC for that particular system but would
otherwise not gain you much of a performance difference across the other
systems as compared to using a more general tuning across all the builds
(e.g. setting :term:`DEFAULTTUNE`
specifically for each machine's build). Rather than "max out" each
build's tunings, you can take steps that cause the OpenEmbedded build
system to reuse software across the various machines where it makes
sense.
If build speed and package feed maintenance are considerations, the
points in this section can help you optimize your tunings with both
build times and package feed maintenance in mind.
- *Share the* :term:`Build Directory`\ *:* If at all possible, share the
:term:`TMPDIR` across builds. The Yocto Project supports switching between
different :term:`MACHINE` values in the same :term:`TMPDIR`. This practice
is well supported and regularly used by developers when building for
multiple machines. When you use the same :term:`TMPDIR` for multiple
machine builds, the OpenEmbedded build system can reuse the existing native
and often cross-recipes for multiple machines. Thus, build time decreases.
.. note::
If :term:`DISTRO` settings change, or fundamental configuration settings
such as the filesystem layout change, you need to work with a clean :term:`TMPDIR`.
Sharing :term:`TMPDIR` under these circumstances might work but since it is
not guaranteed, you should use a clean :term:`TMPDIR`.
- *Enable the Appropriate Package Architecture:* By default, the
OpenEmbedded build system enables three levels of package
architectures: "all", "tune" or "package", and "machine". Any given
recipe usually selects one of these package architectures (types) for
its output. Depending on what a given recipe creates packages for,
making sure you enable the appropriate package architecture can
directly impact the build time.
A recipe that just generates scripts can enable "all" architecture
because there are no binaries to build. To specifically enable "all"
architecture, be sure your recipe inherits the
:ref:`ref-classes-allarch` class.
This class is useful for "all" architectures because it configures
many variables so packages can be used across multiple architectures.
If your recipe needs to generate packages that are machine-specific
or when one of the build or runtime dependencies is already
machine-architecture dependent, which makes your recipe also
machine-architecture dependent, make sure your recipe enables the
"machine" package architecture through the
:term:`MACHINE_ARCH`
variable::
PACKAGE_ARCH = "${MACHINE_ARCH}"
When you do not
specifically enable a package architecture through
:term:`PACKAGE_ARCH`, the
OpenEmbedded build system defaults to the
:term:`TUNE_PKGARCH` setting::
PACKAGE_ARCH = "${TUNE_PKGARCH}"
- *Choose a Generic Tuning File if Possible:* Some tunes are more
generic and can run on multiple targets (e.g. an ``armv5`` set of
packages could run on ``armv6`` and ``armv7`` processors in most
cases). Similarly, ``i486`` binaries could work on ``i586`` and
higher processors. You should realize, however, that advances on
newer processor versions would not be used.
If you select the same tune for several different machines, the
OpenEmbedded build system reuses software previously built, thus
speeding up the overall build time. Realize that even though a new
sysroot for each machine is generated, the software is not recompiled
and only one package feed exists.
- *Manage Granular Level Packaging:* Sometimes there are cases where
injecting another level of package architecture beyond the three
higher levels noted earlier can be useful. For example, consider how
NXP (formerly Freescale) allows for the easy reuse of binary packages
in their layer
:yocto_git:`meta-freescale </meta-freescale/>`.
In this example, the
:yocto_git:`fsl-dynamic-packagearch </meta-freescale/tree/classes/fsl-dynamic-packagearch.bbclass>`
class shares GPU packages for i.MX53 boards because all boards share
the AMD GPU. The i.MX6-based boards can do the same because all
boards share the Vivante GPU. This class inspects the BitBake
datastore to identify if the package provides or depends on one of
the sub-architecture values. If so, the class sets the
:term:`PACKAGE_ARCH` value
based on the ``MACHINE_SUBARCH`` value. If the package does not
provide or depend on one of the sub-architecture values but it
matches a value in the machine-specific filter, it sets
:term:`MACHINE_ARCH`. This
behavior reduces the number of packages built and saves build time by
reusing binaries.
- *Use Tools to Debug Issues:* Sometimes you can run into situations
where software is being rebuilt when you think it should not be. For
example, the OpenEmbedded build system might not be using shared
state between machines when you think it should be. These types of
situations are usually due to references to machine-specific
variables such as :term:`MACHINE`,
:term:`SERIAL_CONSOLES`,
:term:`XSERVER`,
:term:`MACHINE_FEATURES`,
and so forth in code that is supposed to only be tune-specific or
when the recipe depends
(:term:`DEPENDS`,
:term:`RDEPENDS`,
:term:`RRECOMMENDS`,
:term:`RSUGGESTS`, and so forth)
on some other recipe that already has
:term:`PACKAGE_ARCH` defined
as "${MACHINE_ARCH}".
.. note::
Patches to fix any issues identified are most welcome as these
issues occasionally do occur.
For such cases, you can use some tools to help you sort out the
situation:
- ``state-diff-machines.sh``: You can find this tool in the
``scripts`` directory of the Source Repositories. See the comments
in the script for information on how to use the tool.
- *BitBake's "-S printdiff" Option:* Using this option causes
BitBake to try to establish the closest signature match it can
(e.g. in the shared state cache) and then run ``bitbake-diffsigs``
over the matches to determine the stamps and delta where these two
stamp trees diverge.
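For example, to ask BitBake why a given target would rebuild, you
could run the following (the target name is illustrative)::

   $ bitbake core-image-minimal -S printdiff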
Building Software from an External Source
=========================================
By default, the OpenEmbedded build system uses the :term:`Build Directory`
when building source code. The build process involves fetching the source
files, unpacking them, and then patching them if necessary before the build
takes place.
There are situations where you might want to build software from source
files that are external to and thus outside of the OpenEmbedded build
system. For example, suppose you have a project that includes a new BSP
with a heavily customized kernel. And, you want to minimize exposing the
build system to the development team so that they can focus on their
project and maintain everyone's workflow as much as possible. In this
case, you want a kernel source directory on the development machine
where the development occurs. You want the recipe's
:term:`SRC_URI` variable to point to
the external directory and use it as is, not copy it.
To build from software that comes from an external source, all you need to do
is inherit the :ref:`ref-classes-externalsrc` class and then set
the :term:`EXTERNALSRC` variable to point to your external source code. Here
are the statements to put in your ``local.conf`` file::
INHERIT += "externalsrc"
EXTERNALSRC:pn-myrecipe = "path-to-your-source-tree"
This next example shows how to accomplish the same thing by setting
:term:`EXTERNALSRC` in the recipe itself or in the recipe's append file::
EXTERNALSRC = "path"
EXTERNALSRC_BUILD = "path"
.. note::
In order for these settings to take effect, you must globally or
locally inherit the :ref:`ref-classes-externalsrc` class.
By default, :ref:`ref-classes-externalsrc` builds the source code in a
directory separate from the external source directory as specified by
:term:`EXTERNALSRC`. If you need
to have the source built in the same directory in which it resides, or
some other nominated directory, you can set
:term:`EXTERNALSRC_BUILD`
to point to that directory::
EXTERNALSRC_BUILD:pn-myrecipe = "path-to-your-source-tree"
Replicating a Build Offline
===========================
It can be useful to take a "snapshot" of upstream sources used in a
build and then use that "snapshot" later to replicate the build offline.
To do so, you need to first prepare and populate your downloads
directory with your "snapshot" of files. Once your downloads directory is
ready, you can use it at any time and from any machine to replicate your
build.
Follow these steps to populate your downloads directory:
#. *Create a Clean Downloads Directory:* Start with an empty downloads
directory (:term:`DL_DIR`). You
start with an empty downloads directory by either removing the files
in the existing directory or by setting :term:`DL_DIR` to point to either
an empty location or one that does not yet exist.
#. *Generate Tarballs of the Source Git Repositories:* Edit your
``local.conf`` configuration file as follows::
DL_DIR = "/home/your-download-dir/"
BB_GENERATE_MIRROR_TARBALLS = "1"
During
the fetch process in the next step, BitBake gathers the source files
and creates tarballs in the directory pointed to by :term:`DL_DIR`. See
the
:term:`BB_GENERATE_MIRROR_TARBALLS`
variable for more information.
#. *Populate Your Downloads Directory Without Building:* Use BitBake to
fetch your sources but inhibit the build::
$ bitbake target --runonly=fetch
The downloads directory (i.e. ``${DL_DIR}``) now has
a "snapshot" of the source files in the form of tarballs, which can
be used for the build.
#. *Optionally Remove Any Git or other SCM Subdirectories From the
Downloads Directory:* If you want, you can clean up your downloads
directory by removing any Git or other Source Control Management
(SCM) subdirectories such as ``${DL_DIR}/git2/*``. The tarballs
already contain these subdirectories.
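A sketch of such a cleanup, assuming the downloads directory used in
the earlier configuration example::

   $ rm -rf /home/your-download-dir/git2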
Once your downloads directory has everything it needs regarding source
files, you can create your "own-mirror" and build your target.
Understand that you can use the files to build the target offline from
any machine and at any time.
Follow these steps to build your target using the files in the downloads
directory:
#. *Using Local Files Only:* Inside your ``local.conf`` file, add the
:term:`SOURCE_MIRROR_URL` variable, inherit the
:ref:`ref-classes-own-mirrors` class, and set the
:term:`BB_NO_NETWORK` variable::
SOURCE_MIRROR_URL ?= "file:///home/your-download-dir/"
INHERIT += "own-mirrors"
BB_NO_NETWORK = "1"
The :term:`SOURCE_MIRROR_URL` and :ref:`ref-classes-own-mirrors`
class set up the system to use the downloads directory as your "own
mirror". Using the :term:`BB_NO_NETWORK` variable makes sure that
BitBake's fetching process in step 3 stays local, which means files
from your "own-mirror" are used.
#. *Start With a Clean Build:* You can start with a clean build by
removing the ``${``\ :term:`TMPDIR`\ ``}`` directory or using a new
:term:`Build Directory`.
#. *Build Your Target:* Use BitBake to build your target::
$ bitbake target
The build completes using the known local "snapshot" of source
files from your mirror. The resulting tarballs for your "snapshot" of
source files are in the downloads directory.
.. note::
The offline build does not work if recipes attempt to find the
latest version of software by setting
:term:`SRCREV` to
``${``\ :term:`AUTOREV`\ ``}``::
SRCREV = "${AUTOREV}"
When a recipe sets :term:`SRCREV` to
``${``\ :term:`AUTOREV`\ ``}``, the build system accesses the network in an
attempt to determine the latest version of software from the SCM.
Typically, recipes that use :term:`AUTOREV` are custom or modified
recipes. Recipes that reside in public repositories usually do not
use :term:`AUTOREV`.
If you do have recipes that use :term:`AUTOREV`, you can take steps to
still use the recipes in an offline build. Do the following:
#. Use a configuration generated by enabling :ref:`build
history <dev-manual/build-quality:maintaining build output quality>`.
#. Use the ``buildhistory-collect-srcrevs`` command to collect the
stored :term:`SRCREV` values from the build's history. For more
information on collecting these values, see the
":ref:`dev-manual/build-quality:build history package information`"
section.
#. Once you have the correct source revisions, you can modify
those recipes to set :term:`SRCREV` to specific versions of the
software.
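For example, a recipe could pin the revision collected from build
history like this (the revision shown is illustrative)::

   SRCREV = "38cd560d5022ed2dbd1ab0dca9642e47c98a0aa1"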

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Making Changes to the Yocto Project
***********************************
Because the Yocto Project is an open-source, community-based project,
you can effect changes to the project. This section presents procedures
that show you how to submit a defect against the project and how to
submit a change.
Submitting a Defect Against the Yocto Project
=============================================
Use the Yocto Project implementation of
`Bugzilla <https://www.bugzilla.org/about/>`__ to submit a defect (bug)
against the Yocto Project. For additional information on this
implementation of Bugzilla see the ":ref:`Yocto Project
Bugzilla <resources-bugtracker>`" section in the
Yocto Project Reference Manual. For more detail on any of the following
steps, see the Yocto Project
:yocto_wiki:`Bugzilla wiki page </Bugzilla_Configuration_and_Bug_Tracking>`.
Use the following general steps to submit a bug:
#. Open the Yocto Project implementation of :yocto_bugs:`Bugzilla <>`.
#. Click "File a Bug" to enter a new bug.
#. Choose the appropriate "Classification", "Product", and "Component"
for which the bug was found. Bugs for the Yocto Project fall into
one of several classifications, which in turn break down into
several products and components. For example, for a bug against the
``meta-intel`` layer, you would choose "Build System, Metadata &
Runtime", "BSPs", and "bsps-meta-intel", respectively.
#. Choose the "Version" of the Yocto Project for which you found the
bug (e.g. &DISTRO;).
#. Determine and select the "Severity" of the bug. The severity
indicates how the bug impacted your work.
#. Choose the "Hardware" that the bug impacts.
#. Choose the "Architecture" that the bug impacts.
#. Choose a "Documentation change" item for the bug. Fixing a bug might
or might not affect the Yocto Project documentation. If you are
unsure of the impact to the documentation, select "Don't Know".
#. Provide a brief "Summary" of the bug. Try to limit your summary to
just a line or two and be sure to capture the essence of the bug.
#. Provide a detailed "Description" of the bug. You should provide as
much detail as you can about the context, behavior, output, and so
forth that surrounds the bug. You can even attach supporting files
for output from logs by using the "Add an attachment" button.
#. Click the "Submit Bug" button submit the bug. A new Bugzilla number
is assigned to the bug and the defect is logged in the bug tracking
system.
Once you file a bug, the bug is processed by the Yocto Project Bug
Triage Team and further details concerning the bug are assigned (e.g.
priority and owner). You are the "Submitter" of the bug and any further
categorization, progress, or comments on the bug result in Bugzilla
sending you an automated email concerning the particular change or
progress to the bug.
Submitting a Change to the Yocto Project
========================================
Contributions to the Yocto Project and OpenEmbedded are very welcome.
Because the system is extremely configurable and flexible, we recognize
that developers will want to extend, configure or optimize it for their
specific uses.
The Yocto Project uses a mailing list and a patch-based workflow that is
similar to the Linux kernel but contains important differences. In
general, there is a mailing list through which you can submit patches. You
should send patches to the appropriate mailing list so that they can be
reviewed and merged by the appropriate maintainer. The specific mailing
list you need to use depends on the location of the code you are
changing. Each component (e.g. layer) should have a ``README`` file that
indicates where to send the changes and which process to follow.
You can send the patch to the mailing list using whichever approach you
feel comfortable with to generate the patch. Once sent, the patch is
usually reviewed by the community at large. If somebody has concerns
with the patch, they will usually voice their concern over the mailing
list. If a patch does not receive any negative reviews, the maintainer
of the affected layer typically takes the patch, tests it, and then
based on successful testing, merges the patch.
The "poky" repository, which is the Yocto Project's reference build
environment, is a hybrid repository that contains several individual
pieces (e.g. BitBake, Metadata, documentation, and so forth) built using
the combo-layer tool. The upstream location used for submitting changes
varies by component:
- *Core Metadata:* Send your patch to the
:oe_lists:`openembedded-core </g/openembedded-core>`
mailing list. For example, a change to anything under the ``meta`` or
``scripts`` directories should be sent to this mailing list.
- *BitBake:* For changes to BitBake (i.e. anything under the
``bitbake`` directory), send your patch to the
:oe_lists:`bitbake-devel </g/bitbake-devel>`
mailing list.
- *"meta-\*" trees:* These trees contain Metadata. Use the
:yocto_lists:`poky </g/poky>` mailing list.
- *Documentation*: For changes to the Yocto Project documentation, use the
:yocto_lists:`docs </g/docs>` mailing list.
For changes to other layers hosted in the Yocto Project source
repositories (i.e. ``yoctoproject.org``) and tools use the
:yocto_lists:`Yocto Project </g/yocto/>` general mailing list.
.. note::
Sometimes a layer's documentation specifies to use a particular
mailing list. If so, use that list.
For additional recipes that do not fit into the core Metadata, you
should determine which layer the recipe should go into and submit the
change in the manner recommended by the documentation (e.g. the
``README`` file) supplied with the layer. If in doubt, please ask on the
Yocto general mailing list or on the openembedded-devel mailing list.
You can also push a change upstream and request a maintainer to pull the
change into the component's upstream repository. You do this by pushing
to a contribution repository that is upstream. See the
":ref:`overview-manual/development-environment:git workflows and the yocto project`"
section in the Yocto Project Overview and Concepts Manual for additional
concepts on working in the Yocto Project development environment.
Maintainers commonly use ``-next`` branches to test submissions prior to
merging patches. Thus, you can get an idea of the status of a patch based on
whether the patch has been merged into one of these branches. The commonly
used testing branches for OpenEmbedded-Core are as follows:
- *openembedded-core "master-next" branch:* This branch is part of the
:oe_git:`openembedded-core </openembedded-core/>` repository and contains
proposed changes to the core metadata.
- *poky "master-next" branch:* This branch is part of the
:yocto_git:`poky </poky/>` repository and combines proposed
changes to BitBake, the core metadata and the poky distro.
Similarly, stable branches maintained by the project may have corresponding
``-next`` branches which collect proposed changes. For example, there are
``&DISTRO_NAME_NO_CAP;-next`` and ``&DISTRO_NAME_NO_CAP_MINUS_ONE;-next``
branches in both the "openembedded-core" and "poky" repositories.
Other layers may have similar testing branches but there is no formal
requirement or standard for these so please check the documentation for the
layers you are contributing to.
The following sections provide procedures for submitting a change.
Preparing Changes for Submission
--------------------------------
#. *Make Your Changes Locally:* Make your changes in your local Git
repository. You should make small, controlled, isolated changes.
Keeping changes small and isolated aids review, makes
merging/rebasing easier and keeps the change history clean should
anyone need to refer to it in future.
#. *Stage Your Changes:* Stage your changes by using the ``git add``
command on each file you changed.
#. *Commit Your Changes:* Commit the change by using the ``git commit``
command. Make sure your commit information follows standards by
following these accepted conventions:
- Be sure to include a "Signed-off-by:" line in the same style as
required by the Linux kernel. This can be done by using the
``git commit -s`` command. Adding this line signifies that you,
the submitter, have agreed to the Developer's Certificate of
Origin 1.1 as follows:
.. code-block:: none
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
- Provide a single-line summary of the change and, if more
explanation is needed, provide more detail in the body of the
commit. This summary is typically viewable in the "shortlist" of
changes. Thus, providing something short and descriptive that
gives the reader a summary of the change is useful when viewing a
list of many commits. You should prefix this short description
with the recipe name (if changing a recipe), or else with the
short form path to the file being changed.
- For the body of the commit message, provide detailed information
that describes what you changed, why you made the change, and the
approach you used. It might also be helpful if you mention how you
tested the change. Provide as much detail as you can in the body
of the commit message.
.. note::
You do not need to provide a more detailed explanation of a
change if the change is so minor that the single-line summary
provides all the information.
- If the change addresses a specific bug or issue that is associated
with a bug-tracking ID, include a reference to that ID in your
detailed description. For example, the Yocto Project uses a
specific convention for bug references --- any commit that addresses
a specific bug should use the following form for the detailed
description. Be sure to use the actual bug-tracking ID from
Bugzilla for bug-id::
Fixes [YOCTO #bug-id]
detailed description of change
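Putting these conventions together, a complete commit message might look
like the following. The recipe name, summary, and bug number are purely
illustrative::

   bzip2: fix build failure with gcc 12

   Backport an upstream patch to fix a build failure seen when compiling
   with gcc 12. Build-tested on qemux86-64.

   Fixes [YOCTO #12345]

   Signed-off-by: Your Name <you@example.com>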
Using Email to Submit a Patch
-----------------------------
Depending on the components changed, you need to submit the email to a
specific mailing list. For some guidance on which mailing list to use,
see the
:ref:`list <dev-manual/changes:submitting a change to the yocto project>`
at the beginning of this section. For a description of all the available
mailing lists, see the ":ref:`Mailing Lists <resources-mailinglist>`" section in the
Yocto Project Reference Manual.
Here is the general procedure on how to submit a patch through email
without using the scripts once the steps in
:ref:`dev-manual/changes:preparing changes for submission` have been followed:
#. *Format the Commit:* Format the commit into an email message. To
format commits, use the ``git format-patch`` command. When you
provide the command, you must include a revision list or a number of
patches as part of the command. For example, either of these two
commands takes your most recent single commit and formats it as an
email message in the current directory::
$ git format-patch -1
or ::
$ git format-patch HEAD~
After the command is run, the current directory contains a numbered
``.patch`` file for the commit.
If you provide several commits as part of the command, the
``git format-patch`` command produces a series of numbered files in
the current directory, one for each commit. If you have more than
one patch, you should also use the ``--cover-letter`` option with the
command, which generates a cover letter as the first "patch" in the
series. You can then edit the cover letter to provide a description
for the series of patches. For information on the
``git format-patch`` command, see ``GIT-FORMAT-PATCH(1)`` displayed
using the ``man git-format-patch`` command.
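For example, a hypothetical invocation that formats the three most
recent commits as a series with a cover letter would be::

   $ git format-patch --cover-letter -3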
.. note::
If you are or will be a frequent contributor to the Yocto Project
or to OpenEmbedded, you might consider requesting a contrib area
and the necessary associated rights.
#. *Send the patches via email:* Send the patches to the recipients and
relevant mailing lists by using the ``git send-email`` command.
.. note::
In order to use ``git send-email``, you must have the proper Git packages
installed on your host.
For Ubuntu, Debian, and Fedora the package is ``git-email``.
The ``git send-email`` command sends email by using a local or remote
Mail Transport Agent (MTA) such as ``msmtp``, ``sendmail``, or
through a direct ``smtp`` configuration in your Git ``~/.gitconfig``
file. If you are submitting patches through email only, it is very
important that you submit them without any whitespace or HTML
formatting that either you or your mailer introduces. The maintainer
that receives your patches needs to be able to save and apply them
directly from your emails. A good way to verify that what you are
sending can be applied by the maintainer is to do a dry run: send
the patches to yourself, then save and apply them just as the
maintainer would.
The ``git send-email`` command is the preferred method for sending
your patches using email since there is no risk of compromising
whitespace in the body of the message, which can occur when you use
your own mail client. The command also has several options that let
you specify recipients and perform further editing of the email
message. For information on how to use the ``git send-email``
command, see ``GIT-SEND-EMAIL(1)`` displayed using the
``man git-send-email`` command.
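For example, assuming the generated patch files are in the current
directory, a command along the following lines sends them to a mailing
list, with a copy to yourself for the dry-run check described above (the
addresses are illustrative)::

   $ git send-email --to=openembedded-core@lists.openembedded.org \
         --cc=you@example.com *.patch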
The Yocto Project uses a `Patchwork instance <https://patchwork.yoctoproject.org/>`__
to track the status of patches submitted to the various mailing lists and to
support automated patch testing. Each submitted patch is checked for common
mistakes and deviations from the expected patch format and submitters are
notified by patchtest if such mistakes are found. This process helps to
reduce the burden of patch review on maintainers.
.. note::
This system is imperfect and changes can sometimes get lost in the flow.
Asking about the status of a patch or change is reasonable if the change
has been idle for a while with no feedback.
Using Scripts to Push a Change Upstream and Request a Pull
----------------------------------------------------------
For larger patch series it is preferable to send a pull request which not
only includes the patch but also a pointer to a branch that can be pulled
from. This involves making a local branch for your changes, pushing this
branch to an accessible repository and then using the ``create-pull-request``
and ``send-pull-request`` scripts from openembedded-core to create and send a
patch series with a link to the branch for review.
Follow this procedure to push a change to an upstream "contrib" Git
repository once the steps in :ref:`dev-manual/changes:preparing changes for submission` have
been followed:
.. note::
You can find general Git information on how to push a change upstream
in the
`Git Community Book <https://git-scm.com/book/en/v2/Distributed-Git-Distributed-Workflows>`__.
#. *Push Your Commits to a "Contrib" Upstream:* If you have arranged for
permissions to push to an upstream contrib repository, push the
change to that repository::
$ git push upstream_remote_repo local_branch_name
For example, suppose you have permissions to push
into the upstream ``meta-intel-contrib`` repository and you are
working in a local branch named `your_name`\ ``/README``. The following
command pushes your local commits to the ``meta-intel-contrib``
upstream repository and puts the commit in a branch named
`your_name`\ ``/README``::
$ git push meta-intel-contrib your_name/README
#. *Determine Who to Notify:* Determine the maintainer or the mailing
list that you need to notify for the change.
Before submitting any change, you need to be sure who the maintainer
is or which mailing list you need to notify. Use one of these
methods to find out:
- *Maintenance File:* Examine the ``maintainers.inc`` file, which is
located in the :term:`Source Directory` at
``meta/conf/distro/include``, to see who is responsible for code.
- *Search by File:* Using :ref:`overview-manual/development-environment:git`, you can
enter the following command to bring up a short list of all
commits against a specific file::
git shortlog -- filename
Just provide the name of the file in which you are interested. The
information returned is not ordered by history but does include a
list of everyone who has committed, grouped by name. From the list,
you can see who is responsible for the bulk of the changes against
the file.
- *Examine the List of Mailing Lists:* For a list of the Yocto
Project and related mailing lists, see the ":ref:`Mailing
lists <resources-mailinglist>`" section in
the Yocto Project Reference Manual.
#. *Make a Pull Request:* Notify the maintainer or the mailing list that
you have pushed a change by making a pull request.
The Yocto Project provides two scripts that conveniently let you
generate and send pull requests to the Yocto Project. These scripts
are ``create-pull-request`` and ``send-pull-request``. You can find
these scripts in the ``scripts`` directory within the
:term:`Source Directory` (e.g.
``poky/scripts``).
Using these scripts correctly formats the requests without
introducing any whitespace or HTML formatting. The maintainer that
receives your patches either directly or through the mailing list
needs to be able to save and apply them directly from your emails.
Using these scripts is the preferred method for sending patches.
First, create the pull request. For example, the following command
runs the script, specifies the upstream repository in the contrib
directory into which you pushed the change, and provides a subject
line in the created patch files::
$ poky/scripts/create-pull-request -u meta-intel-contrib -s "Updated Manual Section Reference in README"
Running this script creates ``*.patch`` files in a folder named
``pull-``\ `PID` in the current directory. One of the patch files is a
cover letter.
Before running the ``send-pull-request`` script, you must edit the
cover letter patch to insert information about your change. After
editing the cover letter, send the pull request. For example, the
following command runs the script and specifies the patch directory
and email address. In this example, the email address is a mailing
list::
$ poky/scripts/send-pull-request -p ~/meta-intel/pull-10565 -t meta-intel@lists.yoctoproject.org
You need to follow the prompts as the script is interactive.
.. note::
For help on using these scripts, simply provide the ``-h``
argument as follows::
$ poky/scripts/create-pull-request -h
$ poky/scripts/send-pull-request -h
Responding to Patch Review
--------------------------
You may get feedback on your submitted patches from other community members
or from the automated patchtest service. If issues are identified in your
patch then it is usually necessary to address these before the patch will be
accepted into the project. In this case you should amend the patch according
to the feedback and submit an updated version to the relevant mailing list,
copying in the reviewers who provided feedback to the previous version of the
patch.
The patch should be amended using ``git commit --amend`` or perhaps ``git
rebase`` for more expert git users. You should also modify the ``[PATCH]``
tag in the email subject line when sending the revised patch to mark the new
iteration as ``[PATCH v2]``, ``[PATCH v3]``, etc as appropriate. This can be
done by passing the ``-v`` argument to ``git format-patch`` with a version
number.
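For example, the following hypothetical sequence amends the latest commit
and regenerates the patch as a second revision::

   $ git commit --amend
   $ git format-patch -v2 -1

This produces a file named ``v2-0001-*.patch`` whose subject line starts
with ``[PATCH v2]``.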
Lastly please ensure that you also test your revised changes. In particular
please don't just edit the patch file written out by ``git format-patch`` and
resend it.
Submitting Changes to Stable Release Branches
---------------------------------------------
The process for proposing changes to a Yocto Project stable branch differs
from the steps described above. Changes to a stable branch must address
identified bugs or CVEs and should be made carefully in order to avoid the
risk of introducing new bugs or breaking backwards compatibility. Typically
bug fixes must already be accepted into the master branch before they can be
backported to a stable branch unless the bug in question does not affect the
master branch or the fix on the master branch is unsuitable for backporting.
The list of stable branches along with the status and maintainer for each
branch can be obtained from the
:yocto_wiki:`Releases wiki page </Releases>`.
.. note::
Changes will not typically be accepted for branches which are marked as
End-Of-Life (EOL).
With this in mind, the steps to submit a change for a stable branch are as
follows:
#. *Identify the bug or CVE to be fixed:* This information should be
collected so that it can be included in your submission.
See :ref:`dev-manual/vulnerabilities:checking for vulnerabilities`
for details about CVE tracking.
#. *Check if the fix is already present in the master branch:* This will
result in the most straightforward path into the stable branch for the
fix.
#. *If the fix is present in the master branch --- submit a backport request
by email:* You should send an email to the relevant stable branch
maintainer and the mailing list with details of the bug or CVE to be
fixed, the commit hash on the master branch that fixes the issue and
the stable branches to which you would like the fix backported (see
the example email after this list).
#. *If the fix is not present in the master branch --- submit the fix to the
master branch first:* This will ensure that the fix passes through the
project's usual patch review and test processes before being accepted.
It will also ensure that bugs are not left unresolved in the master
branch itself. Once the fix is accepted in the master branch a backport
request can be submitted as above.
#. *If the fix is unsuitable for the master branch --- submit a patch
directly for the stable branch:* This method should be considered as a
last resort. It is typically necessary when the master branch is using
a newer version of the software which includes an upstream fix for the
issue or when the issue has been fixed on the master branch in a way
that introduces backwards incompatible changes. In this case follow the
steps in :ref:`dev-manual/changes:preparing changes for submission` and
:ref:`dev-manual/changes:using email to submit a patch` but modify the subject header of your patch
email to include the name of the stable branch which you are
targeting. This can be done using the ``--subject-prefix`` argument to
``git format-patch``, for example to submit a patch to the dunfell
branch use
``git format-patch --subject-prefix='&DISTRO_NAME_NO_CAP_MINUS_ONE;][PATCH' ...``.
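As an illustration of the backport request described in step 3 above,
such an email might look like the following; the commit hash, bug
number, and details are invented for the example::

   Subject: [&DISTRO_NAME_NO_CAP_MINUS_ONE;] Backport request for [YOCTO #12345]

   Please consider backporting commit abcdef123456 ("example-recipe: fix
   crash on startup") from master to the &DISTRO_NAME_NO_CAP_MINUS_ONE;
   branch. It fixes [YOCTO #12345], which affects that branch as well.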

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Creating Your Own Distribution
******************************
When you build an image using the Yocto Project and do not alter any
distribution :term:`Metadata`, you are
creating a Poky distribution. If you wish to gain more control over
package alternative selections, compile-time options, and other
low-level configurations, you can create your own distribution.
To create your own distribution, the basic steps consist of creating
your own distribution layer, creating your own distribution
configuration file, and then adding any needed code and Metadata to the
layer. The following steps provide some more detail:
- *Create a layer for your new distro:* Create your distribution layer
so that you can keep your Metadata and code for the distribution
separate. It is strongly recommended that you create and use your own
layer for configuration and code. Using your own layer as compared to
just placing configurations in a ``local.conf`` configuration file
makes it easier to reproduce the same build configuration when using
multiple build machines. See the
":ref:`dev-manual/layers:creating a general layer using the \`\`bitbake-layers\`\` script`"
section for information on how to quickly set up a layer.
- *Create the distribution configuration file:* The distribution
configuration file needs to be created in the ``conf/distro``
directory of your layer. You need to name it using your distribution
name (e.g. ``mydistro.conf``).
.. note::
The :term:`DISTRO` variable in your ``local.conf`` file determines the
name of your distribution.
You can split out parts of your configuration file into include files
and then "require" them from within your distribution configuration
file. Be sure to place the include files in the
``conf/distro/include`` directory of your layer. A common example
usage of include files would be to separate out the selection of
desired version and revisions for individual recipes.
Your configuration file needs to set the following required
variables (a minimal example configuration file is shown after
this list):
- :term:`DISTRO_NAME`
- :term:`DISTRO_VERSION`
These following variables are optional and you typically set them
from the distribution configuration file:
- :term:`DISTRO_FEATURES`
- :term:`DISTRO_EXTRA_RDEPENDS`
- :term:`DISTRO_EXTRA_RRECOMMENDS`
- :term:`TCLIBC`
.. tip::
If you want to base your distribution configuration file on the
very basic configuration from OE-Core, you can use
``conf/distro/defaultsetup.conf`` as a reference and just include
variables that differ as compared to ``defaultsetup.conf``.
Alternatively, you can create a distribution configuration file
from scratch using the ``defaultsetup.conf`` file or configuration files
from another distribution such as Poky as a reference.
- *Provide miscellaneous variables:* Be sure to define any other
variables for which you want to set a default value or which you want
to enforce as part of the distribution configuration. You can include
nearly any variable from the ``local.conf`` file. The variables you
use are not limited to the list in the previous bulleted item.
- *Point to Your distribution configuration file:* In your ``local.conf``
file in the :term:`Build Directory`, set your :term:`DISTRO` variable to
point to your distribution's configuration file. For example, if your
distribution's configuration file is named ``mydistro.conf``, then
you point to it as follows::
DISTRO = "mydistro"
- *Add more to the layer if necessary:* Use your layer to hold other
information needed for the distribution:
- Add recipes for installing distro-specific configuration files
that are not already installed by another recipe. If you have
distro-specific configuration files that are included by an
existing recipe, you should add an append file (``.bbappend``) for
those. For general information and recommendations on how to add
recipes to your layer, see the
":ref:`dev-manual/layers:creating your own layer`" and
":ref:`dev-manual/layers:following best practices when creating layers`"
sections.
- Add any image recipes that are specific to your distribution.
- Add a ``psplash`` append file for a branded splash screen. For
information on append files, see the
":ref:`dev-manual/layers:appending other layers metadata with your layer`"
section.
- Add any other append files to make custom changes that are
specific to individual recipes.
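Pulling these pieces together, here is a minimal sketch of what a
distribution configuration file might contain. The distribution name,
version, and optional settings are purely illustrative::

   # conf/distro/mydistro.conf --- illustrative values only
   DISTRO = "mydistro"
   DISTRO_NAME = "My Distribution"
   DISTRO_VERSION = "1.0"

   # Optional settings
   DISTRO_FEATURES:append = " wayland"
   TCLIBC = "glibc"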

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Creating a Custom Template Configuration Directory
**************************************************
If you are producing your own customized version of the build system for
use by other users, you might want to provide a custom build configuration
that includes all the necessary settings and layers (i.e. ``local.conf`` and
``bblayers.conf`` that are created in a new :term:`Build Directory`) and a custom
message that is shown when setting up the build. This can be done by
creating one or more template configuration directories in your
custom distribution layer.
This can be done by using ``bitbake-layers save-build-conf``::
$ bitbake-layers save-build-conf ../../meta-alex/ test-1
NOTE: Starting bitbake server...
NOTE: Configuration template placed into /srv/work/alex/meta-alex/conf/templates/test-1
Please review the files in there, and particularly provide a configuration description in /srv/work/alex/meta-alex/conf/templates/test-1/conf-notes.txt
You can try out the configuration with
TEMPLATECONF=/srv/work/alex/meta-alex/conf/templates/test-1 . /srv/work/alex/poky/oe-init-build-env build-try-test-1
The above command takes the config files from the currently active :term:`Build Directory` under ``conf``,
replaces site-specific paths in ``bblayers.conf`` with ``##OECORE##``-relative paths, and copies
the config files into a specified layer under a specified template name.
To use those saved templates as a starting point for a build, users should point
to one of them with the :term:`TEMPLATECONF` environment variable::
TEMPLATECONF=/srv/work/alex/meta-alex/conf/templates/test-1 . /srv/work/alex/poky/oe-init-build-env build-try-test-1
The OpenEmbedded build system uses the environment variable
:term:`TEMPLATECONF` to locate the directory from which it gathers
configuration information that ultimately ends up in the
:term:`Build Directory` ``conf`` directory.
If :term:`TEMPLATECONF` is not set, the default value is obtained
from the ``.templateconf`` file, which is read from the same directory
as the ``oe-init-build-env`` script. For the Poky reference distribution this
would be::
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf/templates/default}
If you look at a configuration template directory, you will
see the ``bblayers.conf.sample``, ``local.conf.sample``, and
``conf-notes.txt`` files. The build system uses these files to form the
respective ``bblayers.conf`` and ``local.conf`` files, and to show
users a note about the build they are setting up
when they run the ``oe-init-build-env`` setup script. These can be
edited further if needed to improve or change the build configurations
available to the users.

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Customizing Images
******************
You can customize images to satisfy particular requirements. This
section describes several methods and provides guidelines for each.
Customizing Images Using ``local.conf``
=======================================
Probably the easiest way to customize an image is to add a package by
way of the ``local.conf`` configuration file. Because it is limited to
local use, this method generally only allows you to add packages and is
not as flexible as creating your own customized image. When you add
packages using local variables this way, you need to realize that these
variable changes are in effect for every build and consequently affect
all images, which might not be what you require.
To add a package to your image using the local configuration file, use
the :term:`IMAGE_INSTALL` variable with the ``:append`` operator::
IMAGE_INSTALL:append = " strace"
Use of the syntax is important; specifically, the leading space
after the opening quote and before the package name, which is
``strace`` in this example. This space is required since the ``:append``
operator does not add the space.
Furthermore, you must use ``:append`` instead of the ``+=`` operator if
you want to avoid ordering issues. This is because ``:append``
unconditionally appends to the variable, avoiding ordering problems
caused by the variable being set in image recipes and ``.bbclass`` files
with operators like ``?=``. Using ``:append`` ensures the operation
takes effect.
As shown in its simplest use, ``IMAGE_INSTALL:append`` affects all
images. It is possible to extend the syntax so that the variable applies
to a specific image only. Here is an example::
IMAGE_INSTALL:append:pn-core-image-minimal = " strace"
This example adds ``strace`` to the ``core-image-minimal`` image only.
You can add packages using a similar approach through the
:term:`CORE_IMAGE_EXTRA_INSTALL` variable. If you use this variable, only
``core-image-*`` images are affected.
Customizing Images Using Custom ``IMAGE_FEATURES`` and ``EXTRA_IMAGE_FEATURES``
===============================================================================
Another method for customizing your image is to enable or disable
high-level image features by using the
:term:`IMAGE_FEATURES` and
:term:`EXTRA_IMAGE_FEATURES`
variables. Although the functions for both variables are nearly
equivalent, best practices dictate using :term:`IMAGE_FEATURES` from within
a recipe and using :term:`EXTRA_IMAGE_FEATURES` from within your
``local.conf`` file, which is found in the :term:`Build Directory`.
To understand how these features work, the best reference is
:ref:`meta/classes-recipe/image.bbclass <ref-classes-image>`.
This class lists out the available
:term:`IMAGE_FEATURES` of which most map to package groups while some, such
as ``debug-tweaks`` and ``read-only-rootfs``, resolve as general
configuration settings.
In summary, the file looks at the contents of the :term:`IMAGE_FEATURES`
variable and then maps or configures the feature accordingly. Based on
this information, the build system automatically adds the appropriate
packages or configurations to the
:term:`IMAGE_INSTALL` variable.
Effectively, you are enabling extra features by extending the class or
creating a custom class for use with specialized image ``.bb`` files.
Use the :term:`EXTRA_IMAGE_FEATURES` variable from within your local
configuration file. Using a separate area from which to enable features
with this variable helps you avoid overwriting the features in the image
recipe that are enabled with :term:`IMAGE_FEATURES`. The value of
:term:`EXTRA_IMAGE_FEATURES` is added to :term:`IMAGE_FEATURES` within
``meta/conf/bitbake.conf``.
To illustrate how you can use these variables to modify your image,
consider an example that selects the SSH server. The Yocto Project ships
with two SSH servers you can use with your images: Dropbear and OpenSSH.
Dropbear is a minimal SSH server appropriate for resource-constrained
environments, while OpenSSH is a well-known standard SSH server
implementation. By default, the ``core-image-sato`` image is configured
to use Dropbear. The ``core-image-full-cmdline`` and ``core-image-lsb``
images both include OpenSSH. The ``core-image-minimal`` image does not
contain an SSH server.
You can customize your image and change these defaults. Edit the
:term:`IMAGE_FEATURES` variable in your recipe or use the
:term:`EXTRA_IMAGE_FEATURES` in your ``local.conf`` file so that it
configures the image you are working with to include
``ssh-server-dropbear`` or ``ssh-server-openssh``.
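For example, one hypothetical way to include OpenSSH in all of your
images is to add the following to your ``local.conf`` file::

   EXTRA_IMAGE_FEATURES += "ssh-server-openssh"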
.. note::
See the ":ref:`ref-manual/features:image features`" section in the Yocto
Project Reference Manual for a complete list of image features that ship
with the Yocto Project.
Customizing Images Using Custom .bb Files
=========================================
You can also customize an image by creating a custom recipe that defines
additional software as part of the image. The following example shows
the form for the two lines you need::
IMAGE_INSTALL = "packagegroup-core-x11-base package1 package2"
inherit core-image
Defining the software using a custom recipe gives you total control over
the contents of the image. It is important to use the correct names of
packages in the :term:`IMAGE_INSTALL` variable. You must use the
OpenEmbedded notation and not the Debian notation for the names (e.g.
``glibc-dev`` instead of ``libc6-dev``).
The other method for creating a custom image is to base it on an
existing image. For example, if you want to create an image based on
``core-image-sato`` but add the additional package ``strace`` to the
image, copy the ``meta/recipes-sato/images/core-image-sato.bb`` to a new
``.bb`` and add the following line to the end of the copy::
IMAGE_INSTALL += "strace"
Customizing Images Using Custom Package Groups
==============================================
For complex custom images, the best approach for customizing an image is
to create a custom package group recipe that is used to build the image
or images. A good example of a package group recipe is
``meta/recipes-core/packagegroups/packagegroup-base.bb``.
If you examine that recipe, you see that the :term:`PACKAGES` variable lists
the package group packages to produce. The ``inherit packagegroup``
statement sets appropriate default values and automatically adds
``-dev``, ``-dbg``, and ``-ptest`` complementary packages for each
package specified in the :term:`PACKAGES` statement.
.. note::
The ``inherit packagegroup`` line should be located near the top of the
recipe, certainly before the :term:`PACKAGES` statement.
For each package you specify in :term:`PACKAGES`, you can use :term:`RDEPENDS`
and :term:`RRECOMMENDS` entries to provide a list of packages the parent
task package should contain. You can see examples of these further down
in the ``packagegroup-base.bb`` recipe.
Here is a short, fabricated example showing the same basic pieces for a
hypothetical packagegroup defined in ``packagegroup-custom.bb``, where
the variable :term:`PN` is the standard way to abbreviate the reference to
the full packagegroup name ``packagegroup-custom``::
DESCRIPTION = "My Custom Package Groups"
inherit packagegroup
PACKAGES = "\
${PN}-apps \
${PN}-tools \
"
RDEPENDS:${PN}-apps = "\
dropbear \
portmap \
psplash"
RDEPENDS:${PN}-tools = "\
oprofile \
oprofileui-server \
lttng-tools"
RRECOMMENDS:${PN}-tools = "\
kernel-module-oprofile"
In the previous example, two package group packages are created with
their dependencies and their recommended package dependencies listed:
``packagegroup-custom-apps`` and ``packagegroup-custom-tools``. To
build an image using these package group packages, you need to add
``packagegroup-custom-apps`` and/or ``packagegroup-custom-tools`` to
:term:`IMAGE_INSTALL`. For other forms of image dependencies see the other
areas of this section.
Customizing an Image Hostname
=============================
By default, the configured hostname (i.e. ``/etc/hostname``) in an image
is the same as the machine name. For example, if
:term:`MACHINE` equals "qemux86", the
configured hostname written to ``/etc/hostname`` is "qemux86".
You can customize this name by altering the value of the "hostname"
variable in the ``base-files`` recipe using either an append file or a
configuration file. Use the following in an append file::
hostname = "myhostname"
Use the following in a configuration file::
hostname:pn-base-files = "myhostname"
Changing the default value of the variable "hostname" can be useful in
certain situations. For example, suppose you need to do extensive
testing on an image and you would like to easily identify the image
under test from existing images with typical default hostnames. In this
situation, you could change the default hostname to "testme", which
results in all the images using the name "testme". Once testing is
complete and you do not need to rebuild the image for test any longer,
you can easily reset the default hostname.
Another point of interest is that if you unset the variable, the image
will have no default hostname in the filesystem. Here is an example that
unsets the variable in a configuration file::
hostname:pn-base-files = ""
Having no default hostname in the filesystem is suitable for
environments that use dynamic hostnames such as virtual machines.

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Using a Development Shell
*************************
When debugging certain commands or even when just editing packages,
``devshell`` can be a useful tool. When you invoke ``devshell``, all
tasks up to and including
:ref:`ref-tasks-patch` are run for the
specified target. Then, a new terminal is opened and you are placed in
``${``\ :term:`S`\ ``}``, the source
directory. In the new terminal, all the OpenEmbedded build-related
environment variables are still defined so you can use commands such as
``configure`` and ``make``. The commands execute just as if the
OpenEmbedded build system were executing them. Consequently, working
this way can be helpful when debugging a build or preparing software to
be used with the OpenEmbedded build system.
Following is an example that uses ``devshell`` on a target named
``matchbox-desktop``::
$ bitbake matchbox-desktop -c devshell
This command spawns a terminal with a shell prompt within the
OpenEmbedded build environment. The
:term:`OE_TERMINAL` variable
controls what type of shell is opened.
For spawned terminals, the following occurs:
- The ``PATH`` variable includes the cross-toolchain.
- The ``pkgconfig`` variables find the correct ``.pc`` files.
- The ``configure`` command finds the Yocto Project site files as well
as any other necessary files.
Within this environment, you can run configure or compile commands as if
they were being run by the OpenEmbedded build system itself. As noted
earlier, the working directory also automatically changes to the Source
Directory (:term:`S`).
To manually run a specific task using ``devshell``, run the
corresponding ``run.*`` script in the
``${``\ :term:`WORKDIR`\ ``}/temp``
directory (e.g., ``run.do_configure.``\ `pid`). If a task's script does
not exist, which would be the case if the task was skipped by way of the
sstate cache, you can create the task by first running it outside of the
``devshell``::
$ bitbake target -c task
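For example, a hypothetical invocation that (re)creates the run script
for the ``do_configure`` task of ``matchbox-desktop`` would be::

   $ bitbake matchbox-desktop -c configure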
.. note::
- Execution of a task's ``run.*`` script and BitBake's execution of
a task are identical. In other words, running the script re-runs
the task just as it would be run using the ``bitbake -c`` command.
- Any ``run.*`` file that does not have a ``.pid`` extension is a
symbolic link (symlink) to the most recent version of that file.
Remember that the ``devshell`` is a mechanism that allows you to get
into the BitBake task execution environment. As such, all commands
must be called just as BitBake would call them. That means you need to
provide the appropriate options for cross-compilation and so forth as
applicable.
When you are finished using ``devshell``, exit the shell or close the
terminal window.
.. note::
- It is worth remembering that when using ``devshell`` you need to
use the full compiler name such as ``arm-poky-linux-gnueabi-gcc``
instead of just using ``gcc``. The same applies to other
applications such as ``binutils``, ``libtool`` and so forth.
BitBake sets up environment variables such as :term:`CC` to assist
applications, such as ``make``, in finding the correct tools.
- It is also worth noting that ``devshell`` still works over X11
forwarding and similar situations.

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
.. _device-manager:
Selecting a Device Manager
**************************
The Yocto Project provides multiple ways to manage the device manager
(``/dev``):
- Persistent and Pre-Populated ``/dev``: For this case, the ``/dev``
directory is persistent and the required device nodes are created
during the build.
- Use ``devtmpfs`` with a Device Manager: For this case, the ``/dev``
directory is provided by the kernel as an in-memory file system and
is automatically populated by the kernel at runtime. Additional
configuration of device nodes is done in user space by a device
manager like ``udev`` or ``busybox-mdev``.
Using Persistent and Pre-Populated ``/dev``
===========================================
To use the static method for device population, you need to set the
:term:`USE_DEVFS` variable to "0"
as follows::
USE_DEVFS = "0"
The content of the resulting ``/dev`` directory is defined in a Device
Table file. The
:term:`IMAGE_DEVICE_TABLES`
variable defines the Device Table to use and should be set in the
machine or distro configuration file. Alternatively, you can set this
variable in your ``local.conf`` configuration file.
If you do not define the :term:`IMAGE_DEVICE_TABLES` variable, the default
``device_table-minimal.txt`` is used. Here is an example that selects a
custom device table instead::

   IMAGE_DEVICE_TABLES = "device_table-mymachine.txt"

The population is handled by the ``makedevs`` utility during image
creation.
Using ``devtmpfs`` and a Device Manager
=======================================
To use the dynamic method for device population, you need to use (or be
sure to set) the :term:`USE_DEVFS`
variable to "1", which is the default::
USE_DEVFS = "1"
With this
setting, the resulting ``/dev`` directory is populated by the kernel
using ``devtmpfs``. Make sure the corresponding kernel configuration
variable ``CONFIG_DEVTMPFS`` is set when you build your Linux
kernel.
All devices created by ``devtmpfs`` will be owned by ``root`` and have
permissions ``0600``.
To have more control over the device nodes, you can use a device manager
like ``udev`` or ``busybox-mdev``. You choose the device manager by
defining the ``VIRTUAL-RUNTIME_dev_manager`` variable in your machine or
distro configuration file. Alternatively, you can set this variable in
your ``local.conf`` configuration file::
VIRTUAL-RUNTIME_dev_manager = "udev"
# Some alternative values
# VIRTUAL-RUNTIME_dev_manager = "busybox-mdev"
# VIRTUAL-RUNTIME_dev_manager = "systemd"

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Conserving Disk Space
*********************
Conserving Disk Space During Builds
===================================
To help conserve disk space during builds, you can add the following
statement to your project's ``local.conf`` configuration file found in
the :term:`Build Directory`::
INHERIT += "rm_work"
Adding this statement deletes the work directory used for
building a recipe once the recipe is built. For more information on
"rm_work", see the :ref:`ref-classes-rm-work` class in the
Yocto Project Reference Manual.
When you inherit this class and build a ``core-image-sato`` image for a
``qemux86-64`` machine from an Ubuntu 22.04 x86-64 system, you end up with a
final disk usage of 22 Gbytes instead of &MIN_DISK_SPACE; Gbytes. However,
&MIN_DISK_SPACE_RM_WORK; Gbytes of initial free disk space are still needed to
create temporary files before they can be deleted.
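If you inherit the class but want to keep the work directories for a few
recipes that you are actively debugging, you can list them in the
:term:`RM_WORK_EXCLUDE` variable. Here is a hypothetical ``local.conf``
fragment (the recipe names are illustrative)::

   INHERIT += "rm_work"
   RM_WORK_EXCLUDE += "busybox glibc"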
Purging Duplicate Shared State Cache Files
==========================================
After multiple build iterations, the Shared State (sstate) cache can contain
duplicate cache files for a given package, while only the most recent one
is likely to be reusable. The following command purges all but the
newest sstate cache file for each package::
sstate-cache-management.sh --remove-duplicated --cache-dir=build/sstate-cache
This command will ask you to confirm the deletions it identifies.
.. note::
Only sstate cache files for the same package and the same
architecture are treated as duplicates; cache files for different
architectures are never considered duplicates of each other.
Run ``sstate-cache-management.sh`` for more details about this script.

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Efficiently Fetching Source Files During a Build
************************************************
The OpenEmbedded build system works with source files located through
the :term:`SRC_URI` variable. When
you build something using BitBake, a big part of the operation is
locating and downloading all the source tarballs. For images,
downloading all the source for various packages can take a significant
amount of time.
This section shows you how you can use mirrors to speed up fetching
source files and how you can pre-fetch files, both of which lead to more
efficient use of resources and time.
Setting up Effective Mirrors
============================
A good deal of the work in a Yocto Project build is simply downloading
all of the source tarballs. Maybe you have been working with another
build system for which you have built up a
sizable directory of source tarballs. Or, perhaps someone else has such
a directory for which you have read access. If so, you can save time by
adding statements to your configuration file so that the build process
checks local directories first for existing tarballs before checking the
Internet.
Here is an efficient way to set it up in your ``local.conf`` file::
SOURCE_MIRROR_URL ?= "file:///home/you/your-download-dir/"
INHERIT += "own-mirrors"
BB_GENERATE_MIRROR_TARBALLS = "1"
# BB_NO_NETWORK = "1"
In the previous example, the
:term:`BB_GENERATE_MIRROR_TARBALLS`
variable causes the OpenEmbedded build system to generate tarballs of
the Git repositories and store them in the
:term:`DL_DIR` directory. For
performance reasons, generating and storing these tarballs is not the
build system's default behavior.
You can also use the
:term:`PREMIRRORS` variable. For
an example, see the variable's glossary entry in the Yocto Project
Reference Manual.
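As a sketch, a ``PREMIRRORS`` entry that checks a hypothetical local
HTTP mirror before the upstream locations could look like this (the
mirror URL is illustrative)::

   PREMIRRORS:prepend = "\
       git://.*/.* http://local.mirror.example/source-mirror/ \
       ftp://.*/.* http://local.mirror.example/source-mirror/ \
       http://.*/.* http://local.mirror.example/source-mirror/ \
       https://.*/.* http://local.mirror.example/source-mirror/"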
Getting Source Files and Suppressing the Build
==============================================
Another technique you can use to ready yourself for a successive string
of build operations, is to pre-fetch all the source files without
actually starting a build. This technique lets you work through any
download issues and ultimately gathers all the source files into your
download directory :ref:`structure-build-downloads`,
which is located with :term:`DL_DIR`.
Use the following BitBake command form to fetch all the necessary
sources without starting the build::
$ bitbake target --runall=fetch
This
variation of the BitBake command guarantees that you have all the
sources for that BitBake target should you disconnect from the Internet
and want to do the build later offline.

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Using the Error Reporting Tool
******************************
The error reporting tool allows you to submit errors encountered during
builds to a central database. Outside of the build environment, you can
use a web interface to browse errors, view statistics, and query for
errors. The tool works using a client-server system where the client
portion is integrated with the installed Yocto Project
:term:`Source Directory` (e.g. ``poky``).
The server receives the information collected and saves it in a
database.
There is a live instance of the error reporting server at
https://errors.yoctoproject.org.
When you want to get help with build failures, you can submit all of the
information on the failure easily and then point to the URL in your bug
report or send an email to the mailing list.
.. note::
If you send error reports to this server, the reports become publicly
visible.
Enabling and Using the Tool
===========================
By default, the error reporting tool is disabled. You can enable it by
inheriting the :ref:`ref-classes-report-error` class by adding the
following statement to the end of your ``local.conf`` file in your
:term:`Build Directory`::
INHERIT += "report-error"
By default, the error reporting feature stores information in
``${``\ :term:`LOG_DIR`\ ``}/error-report``.
However, you can specify a directory to use by adding the following to
your ``local.conf`` file::
ERR_REPORT_DIR = "path"
Enabling error
reporting causes the build process to collect the errors and store them
in a file as previously described. When the build system encounters an
error, it includes a command as part of the console output. You can run
the command to send the error file to the server. For example, the
following command sends the errors to an upstream server::
$ send-error-report /home/brandusa/project/poky/build/tmp/log/error-report/error_report_201403141617.txt
In the previous example, the errors are sent to a public database
available at https://errors.yoctoproject.org, which is used by the
entire community. If you specify a particular server, you can send the
errors to a different database. Use the following command for more
information on available options::
$ send-error-report --help
When sending the error file, you are prompted to review the data being
sent as well as to provide a name and optional email address. Once you
satisfy these prompts, the command returns a link from the server that
corresponds to your entry in the database. For example, here is a
typical link: https://errors.yoctoproject.org/Errors/Details/9522/
Following the link takes you to a web interface where you can browse,
query the errors, and view statistics.
Disabling the Tool
==================
To disable the error reporting feature, simply remove or comment out the
following statement from the end of your ``local.conf`` file in your
:term:`Build Directory`::
INHERIT += "report-error"
Setting Up Your Own Error Reporting Server
==========================================
If you want to set up your own error reporting server, you can obtain
the code from the Git repository at :yocto_git:`/error-report-web/`.
Instructions on how to set it up are in the README document.

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Using an External SCM
*********************
If you're working on a recipe that pulls from an external Source Code
Manager (SCM), it is possible to have the OpenEmbedded build system
notice new recipe changes added to the SCM and then build the resulting
packages that depend on the new recipes by using the latest versions.
This only works for SCMs from which it is possible to get a sensible
revision number for changes. Currently, you can do this with Apache
Subversion (SVN), Git, and Bazaar (BZR) repositories.
To enable this behavior, the :term:`PV` of
the recipe needs to reference
:term:`SRCPV`. Here is an example::
PV = "1.2.3+git${SRCPV}"
Then, you can add the following to your
``local.conf``::
SRCREV:pn-PN = "${AUTOREV}"
:term:`PN` is the name of the recipe for
which you want to enable automatic source revision updating.
If you do not want to update your local configuration file, you can add
the following directly to the recipe to finish enabling the feature::
SRCREV = "${AUTOREV}"
The Yocto Project provides a distribution named ``poky-bleeding``, whose
configuration file contains the line::
require conf/distro/include/poky-floating-revisions.inc
This line pulls in the
listed include file that contains numerous lines of exactly that form::
#SRCREV:pn-opkg-native ?= "${AUTOREV}"
#SRCREV:pn-opkg-sdk ?= "${AUTOREV}"
#SRCREV:pn-opkg ?= "${AUTOREV}"
#SRCREV:pn-opkg-utils-native ?= "${AUTOREV}"
#SRCREV:pn-opkg-utils ?= "${AUTOREV}"
SRCREV:pn-gconf-dbus ?= "${AUTOREV}"
SRCREV:pn-matchbox-common ?= "${AUTOREV}"
SRCREV:pn-matchbox-config-gtk ?= "${AUTOREV}"
SRCREV:pn-matchbox-desktop ?= "${AUTOREV}"
SRCREV:pn-matchbox-keyboard ?= "${AUTOREV}"
SRCREV:pn-matchbox-panel-2 ?= "${AUTOREV}"
SRCREV:pn-matchbox-themes-extra ?= "${AUTOREV}"
SRCREV:pn-matchbox-terminal ?= "${AUTOREV}"
SRCREV:pn-matchbox-wm ?= "${AUTOREV}"
SRCREV:pn-settings-daemon ?= "${AUTOREV}"
SRCREV:pn-screenshot ?= "${AUTOREV}"
. . .
These lines allow you to
experiment with building a distribution that tracks the latest
development source for numerous packages.
.. note::
The ``poky-bleeding`` distribution is not tested on a regular basis. Keep
this in mind if you use it.

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Optionally Using an External Toolchain
**************************************
You might want to use an external toolchain as part of your development.
If this is the case, the fundamental steps you need to accomplish are as
follows:
- Understand where the installed toolchain resides. For cases where you
need to build the external toolchain, you would need to take separate
steps to build and install the toolchain.
- Make sure you add the layer that contains the toolchain to your
``bblayers.conf`` file through the
:term:`BBLAYERS` variable.
- Set the :term:`EXTERNAL_TOOLCHAIN` variable in your ``local.conf`` file
to the location in which you installed the toolchain.
The toolchain configuration is very flexible and customizable. It
is primarily controlled with the :term:`TCMODE` variable. This variable
controls which ``tcmode-*.inc`` file to include from the
``meta/conf/distro/include`` directory within the :term:`Source Directory`.
The default value of :term:`TCMODE` is "default", which tells the
OpenEmbedded build system to use its internally built toolchain (i.e.
``tcmode-default.inc``). However, other patterns are accepted. In
particular, "external-\*" refers to external toolchains. One example is
the Mentor Graphics Sourcery G++ Toolchain. Support for this toolchain resides
in the separate ``meta-sourcery`` layer at
https://github.com/MentorEmbedded/meta-sourcery/.
See its ``README`` file for details about how to use this layer.
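As a rough sketch, using such a layer usually comes down to a couple of
``local.conf`` additions along the following lines; the ``TCMODE`` value
and the installation path are illustrative and depend on the toolchain
layer you use::

   TCMODE = "external-sourcery"
   EXTERNAL_TOOLCHAIN = "/opt/toolchains/sourcery-g++"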
Another example of an external toolchain layer is
:yocto_git:`meta-arm-toolchain </meta-arm/tree/meta-arm-toolchain/>`,
which supports GNU toolchains released by ARM.
You can find further information by reading about the :term:`TCMODE` variable
in the Yocto Project Reference Manual's variable glossary.

.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Enabling GObject Introspection Support
**************************************
`GObject introspection <https://gi.readthedocs.io/en/latest/>`__
is the standard mechanism for accessing GObject-based software from
runtime environments. GObject is a feature of the GLib library that
provides an object framework for the GNOME desktop and related software.
GObject Introspection adds information to GObject that allows objects
created within it to be represented across different programming
languages. If you want to construct GStreamer pipelines using Python, or
control UPnP infrastructure using Javascript and GUPnP, GObject
introspection is the only way to do it.
This section describes the Yocto Project support for generating and
packaging GObject introspection data. GObject introspection data is a
description of the API provided by libraries built on top of the GLib
framework, and, in particular, that framework's GObject mechanism.
GObject Introspection Repository (GIR) files go to ``-dev`` packages,
``typelib`` files go to main packages as they are packaged together with
libraries that are introspected.
The data is generated when building such a library, by linking the
library with a small executable binary that asks the library to describe
itself, and then executing the binary and processing its output.
Generating this data in a cross-compilation environment is difficult
because the library is produced for the target architecture, but its
code needs to be executed on the build host. This problem is solved with
the OpenEmbedded build system by running the code through QEMU, which
allows precisely that. Unfortunately, QEMU does not always work
perfectly as mentioned in the ":ref:`dev-manual/gobject-introspection:known issues`"
section.
Enabling the Generation of Introspection Data
=============================================
Enabling the generation of introspection data (GIR files) in your
library package involves the following:
#. Inherit the :ref:`ref-classes-gobject-introspection` class.
#. Make sure introspection is not disabled anywhere in the recipe or
from anything the recipe includes. Also, make sure that
"gobject-introspection-data" is not in
:term:`DISTRO_FEATURES_BACKFILL_CONSIDERED`
and that "qemu-usermode" is not in
:term:`MACHINE_FEATURES_BACKFILL_CONSIDERED`.
If either of these features is backfilled, no introspection data will be generated.
#. Try to build the recipe. If you encounter build errors that look like
something is unable to find ``.so`` libraries, check where these
libraries are located in the source tree and add the following to the
recipe::
GIR_EXTRA_LIBS_PATH = "${B}/something/.libs"
.. note::
See recipes in the ``oe-core`` repository that use that
:term:`GIR_EXTRA_LIBS_PATH` variable as an example.
#. Look for any other errors, which probably mean that introspection
support in a package is not entirely standard, and thus breaks down
in a cross-compilation environment. For such cases, custom-made fixes
are needed. A good place to ask and receive help in these cases is
the :ref:`Yocto Project mailing
lists <resources-mailinglist>`.
.. note::
Using a library that no longer builds against the latest Yocto
Project release and prints introspection related errors is a good
candidate for the previous procedure.
Disabling the Generation of Introspection Data
==============================================
You might find that you do not want to generate introspection data. Or,
perhaps QEMU does not work on your build host and target architecture
combination. If so, you can use either of the following methods to
disable GIR file generations:
- Add the following to your distro configuration::
DISTRO_FEATURES_BACKFILL_CONSIDERED = "gobject-introspection-data"
Adding this statement disables generating introspection data using
QEMU but will still enable building introspection tools and libraries
(i.e. building them does not require the use of QEMU).
- Add the following to your machine configuration::
MACHINE_FEATURES_BACKFILL_CONSIDERED = "qemu-usermode"
Adding this statement disables the use of QEMU when building packages for your
machine. Currently, this feature is used only by introspection
recipes and has the same effect as the previously described option.
.. note::
Future releases of the Yocto Project might have other features
affected by this option.
If you disable introspection data, you can still obtain it through other
means such as copying the data from a suitable sysroot, or by generating
it on the target hardware. The OpenEmbedded build system does not
currently provide specific support for these techniques.
Testing that Introspection Works in an Image
============================================
Use the following procedure to test if generating introspection data is
working in an image:
#. Make sure that "gobject-introspection-data" is not in
:term:`DISTRO_FEATURES_BACKFILL_CONSIDERED`
and that "qemu-usermode" is not in
:term:`MACHINE_FEATURES_BACKFILL_CONSIDERED`.
#. Build ``core-image-sato``.
#. Launch a Terminal and then start Python in the terminal.
#. Enter the following in the terminal::
>>> from gi.repository import GLib
>>> GLib.get_host_name()
#. For something a little more advanced, see:
https://python-gtk-3-tutorial.readthedocs.io/en/latest/introduction.html
Known Issues
============
Here are known issues in GObject introspection support:
- ``qemu-ppc64`` immediately crashes. Consequently, you cannot build
introspection data on that architecture.
- x32 is not supported by QEMU. Consequently, introspection data is
disabled.
- musl causes transient GLib binaries to crash on assertion failures.
Consequently, generating introspection data is disabled.
- Because QEMU is not able to run the binaries correctly, introspection
is disabled for some specific packages under specific architectures
(e.g. ``gcr``, ``libsecret``, and ``webkit``).
- QEMU usermode might not work properly when running 64-bit binaries
under 32-bit host machines. In particular, "qemumips64" is known to
not work under i686.

Yocto Project Development Tasks Manual
**************************************
intro
start
layers
customizing-images
new-recipe
new-machine
upgrading-recipes
temporary-source-code
quilt.rst
development-shell
python-development-shell
building
speeding-up-build
libraries
prebuilt-libraries
x32-psabi
gobject-introspection
external-toolchain
wic
bmaptool
securing-images
custom-distribution
custom-template-configuration-directory
disk-space
packages
efficiently-fetching-sources
init-manager
device-manager
external-scm
read-only-rootfs
build-quality
runtime-testing
debugging
changes
licenses
vulnerabilities
sbom
error-reporting-tool
wayland
common-tasks
qemu
.. include:: /boilerplate.rst


@@ -1,162 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
.. _init-manager:
Selecting an Initialization Manager
***********************************
By default, the Yocto Project uses :wikipedia:`SysVinit <Init#SysV-style>` as
the initialization manager. There is also support for BusyBox init, a simpler
implementation, as well as support for :wikipedia:`systemd <Systemd>`, which
is a full replacement for init with parallel starting of services, reduced
shell overhead, increased security and resource limits for services, and other
features that are used by many distributions.
Within the system, SysVinit and BusyBox init treat system components as
services. These services are maintained as shell scripts stored in the
``/etc/init.d/`` directory.
SysVinit is more elaborate than BusyBox init and organizes services in
different run levels. This organization is maintained by putting links
to the services in the ``/etc/rcN.d/`` directories, where ``N`` is one
of the following options: "S", "0", "1", "2", "3", "4", "5", or "6".
.. note::
Each runlevel has a dependency on the previous runlevel. This
dependency allows the services to work properly.
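For example, on a SysVinit image the runlevel 5 directory might contain links
such as the following, where the "S" prefix and two-digit number control the
start order (the service names and numbers below are illustrative)::

   $ ls -l /etc/rc5.d
   S01networking -> ../init.d/networking
   S20syslog -> ../init.d/syslog
   S99rmnologin.sh -> ../init.d/rmnologin.sh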
Both SysVinit and BusyBox init are configured through the ``/etc/inittab``
file, with a very similar syntax, though of course BusyBox init features
are more limited.
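For instance, a minimal BusyBox ``/etc/inittab`` might contain entries such as
these, following the ``id:runlevels:action:process`` syntax (the exact entries
depend on your image)::

   ::sysinit:/etc/init.d/rcS
   ::respawn:/sbin/getty 115200 ttyS0
   ::ctrlaltdel:/sbin/reboot
   ::shutdown:/bin/umount -a -r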
In comparison, systemd treats components as units. Using units is a
broader concept as compared to using a service. A unit includes several
different types of entities. ``Service`` is one of the types of entities.
The runlevel concept in SysVinit corresponds to the concept of a target
in systemd, where target is also a type of supported unit.
In systems with SysVinit or BusyBox init, services load sequentially (i.e. one
by one) during init, and parallelization is not supported. With systemd,
services start in parallel, which can improve the startup performance of a
given service. However, systemd also provides more services by default,
therefore increasing the total system boot time. systemd also substantially
increases system size because of its multiple components and the extra
dependencies it pulls in.
By contrast, BusyBox init is the simplest and lightest solution. It also
comes with BusyBox mdev as its device manager, a lighter replacement for
:wikipedia:`udev <Udev>`, which SysVinit and systemd both use.
The ":ref:`device-manager`" chapter has more details about device managers.
Using SysVinit with udev
=========================
SysVinit with the udev device manager corresponds to the
default setting in Poky. This corresponds to setting::
INIT_MANAGER = "sysvinit"
Using BusyBox init with BusyBox mdev
====================================
BusyBox init with BusyBox mdev is the simplest and lightest solution
for small root filesystems. All you need is BusyBox, which most systems
have anyway::
INIT_MANAGER = "mdev-busybox"
Using systemd
=============
The last option is to use systemd together with the udev device
manager. This is the most powerful and versatile solution, especially
for more complex systems::
INIT_MANAGER = "systemd"
This will enable systemd and remove sysvinit components from the image.
See :yocto_git:`meta/conf/distro/include/init-manager-systemd.inc
</poky/tree/meta/conf/distro/include/init-manager-systemd.inc>` for exact
details on what this does.
Controlling systemd from the target command line
------------------------------------------------
Here is a quick reference for controlling systemd from the command line on the
target. Instead of opening and sometimes modifying files, most interaction
happens through the ``systemctl`` and ``journalctl`` commands:
- ``systemctl status``: show the status of all services
- ``systemctl status <service>``: show the status of one service
- ``systemctl [start|stop] <service>``: start or stop a service
- ``systemctl [enable|disable] <service>``: enable or disable a service at boot time
- ``systemctl list-units``: list all available units
- ``journalctl -a``: show all logs for all services
- ``journalctl -f``: show only the last log entries, and keep printing updates as they arrive
- ``journalctl -u <service>``: show only logs from a particular service
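For example, to inspect and then restart a hypothetical ``myapp`` service, you
could run::

   # systemctl status myapp
   # systemctl stop myapp
   # systemctl start myapp
   # journalctl -u myapp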
Using systemd-journald without a traditional syslog daemon
----------------------------------------------------------
Counter-intuitively, ``systemd-journald`` is not a syslog runtime or provider,
and the proper way to use ``systemd-journald`` as your sole logging mechanism is to
effectively disable syslog entirely by setting these variables in your distribution
configuration file::
VIRTUAL-RUNTIME_syslog = ""
VIRTUAL-RUNTIME_base-utils-syslog = ""
Doing so will prevent ``rsyslog`` / ``busybox-syslog`` from being pulled in by
default, leaving only ``systemd-journald``.
Summary
-------
The Yocto Project supports three different initialization managers, offering
increasing levels of complexity and functionality:
.. list-table::
:widths: 40 20 20 20
:header-rows: 1
* -
- BusyBox init
- SysVinit
- systemd
* - Size
- Small
- Small
- Big [#footnote-systemd-size]_
* - Complexity
- Small
- Medium
- High
* - Support for boot profiles
- No
- Yes ("runlevels")
- Yes ("targets")
* - Services defined as
- Shell scripts
- Shell scripts
- Description files
* - Starting services in parallel
- No
- No
- Yes
* - Setting service resource limits
- No
- No
- Yes
* - Support service isolation
- No
- No
- Yes
* - Integrated logging
- No
- No
- Yes
.. [#footnote-systemd-size] Using systemd increases the ``core-image-minimal``
image size by 160\% for ``qemux86-64`` on Mickledore (4.2), compared to SysVinit.


@@ -1,905 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Understanding and Creating Layers
*********************************
The OpenEmbedded build system supports organizing
:term:`Metadata` into multiple layers.
Layers allow you to isolate different types of customizations from each
other. For introductory information on the Yocto Project Layer Model,
see the
":ref:`overview-manual/yp-intro:the yocto project layer model`"
section in the Yocto Project Overview and Concepts Manual.
Creating Your Own Layer
=======================
.. note::
It is very easy to create your own layers to use with the OpenEmbedded
build system, as the Yocto Project ships with tools that speed up creating
layers. This section describes the steps you perform by hand to create
layers so that you can better understand them. For information about the
layer-creation tools, see the
":ref:`bsp-guide/bsp:creating a new bsp layer using the \`\`bitbake-layers\`\` script`"
section in the Yocto Project Board Support Package (BSP) Developer's
Guide and the ":ref:`dev-manual/layers:creating a general layer using the \`\`bitbake-layers\`\` script`"
section further down in this manual.
Follow these general steps to create your layer without using tools:
#. *Check Existing Layers:* Before creating a new layer, you should be
sure someone has not already created a layer containing the Metadata
you need. You can see the :oe_layerindex:`OpenEmbedded Metadata Index <>`
for a list of layers from the OpenEmbedded community that can be used in
the Yocto Project. You could find a layer that is identical or close
to what you need.
#. *Create a Directory:* Create the directory for your layer. When you
create the layer, be sure to create the directory in an area not
associated with the Yocto Project :term:`Source Directory`
(e.g. the cloned ``poky`` repository).
While not strictly required, prepend the name of the directory with
the string "meta-". For example::
meta-mylayer
meta-GUI_xyz
meta-mymachine
With rare exceptions, a layer's name follows this form::
meta-root_name
Following this layer naming convention can save
you trouble later when tools, components, or variables "assume" your
layer name begins with "meta-". A notable example is in configuration
files as shown in the following step where layer names without the
"meta-" string are appended to several variables used in the
configuration.
#. *Create a Layer Configuration File:* Inside your new layer folder,
you need to create a ``conf/layer.conf`` file. It is easiest to take
an existing layer configuration file and copy that to your layer's
``conf`` directory and then modify the file as needed.
The ``meta-yocto-bsp/conf/layer.conf`` file in the Yocto Project
:yocto_git:`Source Repositories </poky/tree/meta-yocto-bsp/conf>`
demonstrates the required syntax. For your layer, you need to replace
"yoctobsp" with a unique identifier for your layer (e.g. "machinexyz"
for a layer named "meta-machinexyz")::
# We have a conf and classes directory, add to BBPATH
BBPATH .= ":${LAYERDIR}"
# We have recipes-* directories, add to BBFILES
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "yoctobsp"
BBFILE_PATTERN_yoctobsp = "^${LAYERDIR}/"
BBFILE_PRIORITY_yoctobsp = "5"
LAYERVERSION_yoctobsp = "4"
LAYERSERIES_COMPAT_yoctobsp = "dunfell"
Following is an explanation of the layer configuration file:
- :term:`BBPATH`: Adds the layer's
root directory to BitBake's search path. Through the use of the
:term:`BBPATH` variable, BitBake locates class files (``.bbclass``),
configuration files, and files that are included with ``include``
and ``require`` statements. For these cases, BitBake uses the
first file that matches the name found in :term:`BBPATH`. This is
similar to the way the ``PATH`` variable is used for binaries. It
is recommended, therefore, that you use unique class and
configuration filenames in your custom layer.
- :term:`BBFILES`: Defines the
location for all recipes in the layer.
- :term:`BBFILE_COLLECTIONS`:
Establishes the current layer through a unique identifier that is
used throughout the OpenEmbedded build system to refer to the
layer. In this example, the identifier "yoctobsp" is the
representation for the container layer named "meta-yocto-bsp".
- :term:`BBFILE_PATTERN`:
Expands immediately during parsing to provide the directory of the
layer.
- :term:`BBFILE_PRIORITY`:
Establishes a priority to use for recipes in the layer when the
OpenEmbedded build finds recipes of the same name in different
layers.
- :term:`LAYERVERSION`:
Establishes a version number for the layer. You can use this
version number to specify this exact version of the layer as a
dependency when using the
:term:`LAYERDEPENDS`
variable.
- :term:`LAYERDEPENDS`:
Lists all layers on which this layer depends (if any).
- :term:`LAYERSERIES_COMPAT`:
Lists the :yocto_wiki:`Yocto Project </Releases>`
releases for which the current version is compatible. This
variable is a good way to indicate if your particular layer is
current.
#. *Add Content:* Depending on the type of layer, add the content. If
the layer adds support for a machine, add the machine configuration
in a ``conf/machine/`` file within the layer. If the layer adds
distro policy, add the distro configuration in a ``conf/distro/``
file within the layer. If the layer introduces new recipes, put the
recipes you need in ``recipes-*`` subdirectories within the layer
(see the example layout after this list).
.. note::
For an explanation of layer hierarchy that is compliant with the
Yocto Project, see the ":ref:`bsp-guide/bsp:example filesystem layout`"
section in the Yocto Project Board Support Package (BSP) Developer's Guide.
#. *Optionally Test for Compatibility:* If you want permission to use
the Yocto Project Compatibility logo with your layer or application
that uses your layer, perform the steps to apply for compatibility.
See the
":ref:`dev-manual/layers:making sure your layer is compatible with yocto project`"
section for more information.
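As an example of the resulting structure, a small layer that adds a machine
and a recipe might look like this (all names below are illustrative)::

   meta-mylayer/
       conf/
           layer.conf
           machine/
               mymachine.conf
       recipes-example/
           example/
               example_0.1.bb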
Following Best Practices When Creating Layers
=============================================
To create layers that are easier to maintain and that will not impact
builds for other machines, you should consider the information in the
following list:
- *Avoid "Overlaying" Entire Recipes from Other Layers in Your
Configuration:* In other words, do not copy an entire recipe into
your layer and then modify it. Rather, use an append file
(``.bbappend``) to override only those parts of the original recipe
you need to modify.
- *Avoid Duplicating Include Files:* Use append files (``.bbappend``)
for each recipe that uses an include file. Or, if you are introducing
a new recipe that requires the included file, use the path relative
to the original layer directory to refer to the file. For example,
use ``require recipes-core/``\ `package`\ ``/``\ `file`\ ``.inc`` instead
of ``require`` `file`\ ``.inc``. If you're finding you have to overlay
the include file, it could indicate a deficiency in the include file
in the layer to which it originally belongs. If this is the case, you
should try to address that deficiency instead of overlaying the
include file. For example, you could address this by getting the
maintainer of the include file to add a variable or variables to make
it easy to override the parts needing to be overridden.
- *Structure Your Layers:* Proper use of overrides within append files
and placement of machine-specific files within your layer can ensure
that a build is not using the wrong Metadata and negatively impacting
a build for a different machine. Following are some examples:
- *Modify Variables to Support a Different Machine:* Suppose you
have a layer named ``meta-one`` that adds support for building
machine "one". To do so, you use an append file named
``base-files.bbappend`` and create a dependency on "foo" by
altering the :term:`DEPENDS`
variable::
DEPENDS = "foo"
The dependency is created during any
build that includes the layer ``meta-one``. However, you might not
want this dependency for all machines. For example, suppose you
are building for machine "two" but your ``bblayers.conf`` file has
the ``meta-one`` layer included. During the build, the
``base-files`` for machine "two" will also have the dependency on
``foo``.
To make sure your changes apply only when building machine "one",
use a machine override with the :term:`DEPENDS` statement::
DEPENDS:one = "foo"
You should follow the same strategy when using ``:append``
and ``:prepend`` operations::
DEPENDS:append:one = " foo"
DEPENDS:prepend:one = "foo "
As an actual example, here's a
snippet from the generic kernel include file ``linux-yocto.inc``,
wherein the kernel compile and link options are adjusted in the
case of a subset of the supported architectures::
DEPENDS:append:aarch64 = " libgcc"
KERNEL_CC:append:aarch64 = " ${TOOLCHAIN_OPTIONS}"
KERNEL_LD:append:aarch64 = " ${TOOLCHAIN_OPTIONS}"
DEPENDS:append:nios2 = " libgcc"
KERNEL_CC:append:nios2 = " ${TOOLCHAIN_OPTIONS}"
KERNEL_LD:append:nios2 = " ${TOOLCHAIN_OPTIONS}"
DEPENDS:append:arc = " libgcc"
KERNEL_CC:append:arc = " ${TOOLCHAIN_OPTIONS}"
KERNEL_LD:append:arc = " ${TOOLCHAIN_OPTIONS}"
KERNEL_FEATURES:append:qemuall=" features/debug/printk.scc"
- *Place Machine-Specific Files in Machine-Specific Locations:* When
you have a base recipe, such as ``base-files.bb``, that contains a
:term:`SRC_URI` statement to a
file, you can use an append file to cause the build to use your
own version of the file. For example, an append file in your layer
at ``meta-one/recipes-core/base-files/base-files.bbappend`` could
extend :term:`FILESPATH` using :term:`FILESEXTRAPATHS` as follows::
FILESEXTRAPATHS:prepend := "${THISDIR}/${BPN}:"
The build for machine "one" will pick up your machine-specific file as
long as you have the file in
``meta-one/recipes-core/base-files/base-files/``. However, if you
are building for a different machine and the ``bblayers.conf``
file includes the ``meta-one`` layer and the location of your
machine-specific file is the first location where that file is
found according to :term:`FILESPATH`, builds for all machines will
also use that machine-specific file.
You can make sure that a machine-specific file is used for a
particular machine by putting the file in a subdirectory specific
to the machine. For example, rather than placing the file in
``meta-one/recipes-core/base-files/base-files/`` as shown above,
put it in ``meta-one/recipes-core/base-files/base-files/one/``.
Not only does this make sure the file is used only when building
for machine "one", but the build process locates the file more
quickly.
In summary, you need to place all files referenced from
:term:`SRC_URI` in a machine-specific subdirectory within the layer in
order to restrict those files to machine-specific builds (see the
example layout after this list).
- *Perform Steps to Apply for Yocto Project Compatibility:* If you want
permission to use the Yocto Project Compatibility logo with your
layer or application that uses your layer, perform the steps to apply
for compatibility. See the
":ref:`dev-manual/layers:making sure your layer is compatible with yocto project`"
section for more information.
- *Follow the Layer Naming Convention:* Store custom layers in a Git
repository that uses the ``meta-layer_name`` format.
- *Group Your Layers Locally:* Clone your repository alongside other
cloned ``meta`` directories from the :term:`Source Directory`.
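Continuing the ``meta-one`` example referenced above, the machine-specific
overlay for ``base-files`` would be laid out as follows (the overlaid file
name is illustrative)::

   meta-one/
       recipes-core/
           base-files/
               base-files.bbappend
               base-files/
                   one/
                       fstab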
Making Sure Your Layer is Compatible With Yocto Project
=======================================================
When you create a layer used with the Yocto Project, it is advantageous
to make sure that the layer interacts well with existing Yocto Project
layers (i.e. the layer is compatible with the Yocto Project). Ensuring
compatibility makes the layer easier for others in the Yocto Project
community to consume, and could grant you permission to use the Yocto
Project Compatible Logo.
.. note::
Only Yocto Project member organizations are permitted to use the
Yocto Project Compatible Logo. The logo is not available for general
use. For information on how to become a Yocto Project member
organization, see the :yocto_home:`Yocto Project Website <>`.
The Yocto Project Compatibility Program consists of a layer application
process that requests permission to use the Yocto Project Compatibility
Logo for your layer and application. The process consists of two parts:
#. Successfully passing a script (``yocto-check-layer``) that when run
against your layer, tests it against constraints based on experiences
of how layers have worked in the real world and where pitfalls have
been found. Getting a "PASS" result from the script is required for
successful compatibility registration.
#. Completion of an application acceptance form, which you can find at
:yocto_home:`/webform/yocto-project-compatible-registration`.
To be granted permission to use the logo, you need to satisfy the
following:
- Be able to check the box indicating that you got a "PASS" when
running the script against your layer.
- Answer "Yes" to the questions on the form or have an acceptable
explanation for any questions answered "No".
- Be a Yocto Project Member Organization.
The remainder of this section presents information on the registration
form and on the ``yocto-check-layer`` script.
Yocto Project Compatible Program Application
--------------------------------------------
Use the form to apply for your layer's approval. Upon successful
application, you can use the Yocto Project Compatibility Logo with your
layer and the application that uses your layer.
To access the form, use this link:
:yocto_home:`/webform/yocto-project-compatible-registration`.
Follow the instructions on the form to complete your application.
The application consists of the following sections:
- *Contact Information:* Provide your contact information as the fields
require. Along with your information, provide the released versions
of the Yocto Project for which your layer is compatible.
- *Acceptance Criteria:* Provide "Yes" or "No" answers for each of the
items in the checklist. There is space at the bottom of the form for
any explanations for items for which you answered "No".
- *Recommendations:* Provide answers for the questions regarding Linux
kernel use and build success.
``yocto-check-layer`` Script
----------------------------
The ``yocto-check-layer`` script provides you a way to assess how
compatible your layer is with the Yocto Project. You should run this
script prior to using the form to apply for compatibility as described
in the previous section. You need to achieve a "PASS" result in order to
have your application form successfully processed.
The script divides tests into three areas: COMMON, BSP, and DISTRO. For
example, given a distribution layer (DISTRO), the layer must pass both
the COMMON and DISTRO related tests. Furthermore, if your layer is a BSP
layer, the layer must pass the COMMON and BSP set of tests.
To execute the script, enter the following commands from your build
directory::
$ source oe-init-build-env
$ yocto-check-layer your_layer_directory
Be sure to provide the actual directory for your
layer as part of the command.
Entering the command causes the script to determine the type of layer
and then to execute a set of specific tests against the layer. The
following list provides an overview of the tests:
- ``common.test_readme``: Tests if a ``README`` file exists in the
layer and the file is not empty.
- ``common.test_parse``: Tests to make sure that BitBake can parse the
files without error (i.e. ``bitbake -p``).
- ``common.test_show_environment``: Tests that the global or per-recipe
environment is in order without errors (i.e. ``bitbake -e``).
- ``common.test_world``: Verifies that ``bitbake world`` works.
- ``common.test_signatures``: Tests to be sure that BSP and DISTRO
layers do not come with recipes that change signatures.
- ``common.test_layerseries_compat``: Verifies layer compatibility is
set properly.
- ``bsp.test_bsp_defines_machines``: Tests if a BSP layer has machine
configurations.
- ``bsp.test_bsp_no_set_machine``: Tests to ensure a BSP layer does not
set the machine when the layer is added.
- ``bsp.test_machine_world``: Verifies that ``bitbake world`` works
regardless of which machine is selected.
- ``bsp.test_machine_signatures``: Verifies that building for a
particular machine affects only the signature of tasks specific to
that machine.
- ``distro.test_distro_defines_distros``: Tests if a DISTRO layer has
distro configurations.
- ``distro.test_distro_no_set_distros``: Tests to ensure a DISTRO layer
does not set the distribution when the layer is added.
Enabling Your Layer
===================
Before the OpenEmbedded build system can use your new layer, you need to
enable it. To enable your layer, simply add your layer's path to the
:term:`BBLAYERS` variable in your ``conf/bblayers.conf`` file, which is
found in the :term:`Build Directory`. The following example shows how to
enable your new ``meta-mylayer`` layer (note how your new layer exists
outside of the official ``poky`` repository which you would have checked
out earlier)::
# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
/home/user/poky/meta \
/home/user/poky/meta-poky \
/home/user/poky/meta-yocto-bsp \
/home/user/mystuff/meta-mylayer \
"
BitBake parses each ``conf/layer.conf`` file from the top down as
specified in the :term:`BBLAYERS` variable within the ``conf/bblayers.conf``
file. During the processing of each ``conf/layer.conf`` file, BitBake
adds the recipes, classes and configurations contained within the
particular layer to the source directory.
Appending Other Layers Metadata With Your Layer
===============================================
A recipe that appends Metadata to another recipe is called a BitBake
append file. A BitBake append file uses the ``.bbappend`` file type
suffix, while the corresponding recipe to which Metadata is being
appended uses the ``.bb`` file type suffix.
You can use a ``.bbappend`` file in your layer to make additions or
changes to the content of another layer's recipe without having to copy
the other layer's recipe into your layer. Your ``.bbappend`` file
resides in your layer, while the main ``.bb`` recipe file to which you
are appending Metadata resides in a different layer.
Being able to append information to an existing recipe not only avoids
duplication, but also automatically applies recipe changes from a
different layer into your layer. If you were copying recipes, you would
have to manually merge changes as they occur.
When you create an append file, you must use the same root name as the
corresponding recipe file. For example, the append file
``someapp_3.1.bbappend`` must apply to ``someapp_3.1.bb``. This
means the original recipe and append filenames are version
number-specific. If the corresponding recipe is renamed to update to a
newer version, you must also rename and possibly update the
corresponding ``.bbappend`` as well. During the build process, BitBake
displays an error on starting if it detects a ``.bbappend`` file that
does not have a corresponding recipe with a matching name. See the
:term:`BB_DANGLINGAPPENDS_WARNONLY`
variable for information on how to handle this error.
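To reduce this renaming burden, BitBake also accepts the ``%`` wildcard
character in the version part of an append file name, immediately before the
``.bbappend`` suffix. For example, the following append file applies to any
version of ``someapp``::

   someapp_%.bbappend

Note that ``%`` only works at the end of the version string, so
``someapp_3.%.bbappend`` would match any ``someapp`` version starting
with "3.".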
Overlaying a File Using Your Layer
----------------------------------
As an example, consider the main formfactor recipe and a corresponding
formfactor append file both from the :term:`Source Directory`.
Here is the main
formfactor recipe, which is named ``formfactor_0.0.bb`` and located in
the "meta" layer at ``meta/recipes-bsp/formfactor``::
SUMMARY = "Device formfactor information"
DESCRIPTION = "A formfactor configuration file provides information about the \
target hardware for which the image is being built and information that the \
build system cannot obtain from other sources such as the kernel."
SECTION = "base"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
PR = "r45"
SRC_URI = "file://config file://machconfig"
S = "${WORKDIR}"
PACKAGE_ARCH = "${MACHINE_ARCH}"
INHIBIT_DEFAULT_DEPS = "1"
do_install() {
# Install file only if it has contents
install -d ${D}${sysconfdir}/formfactor/
install -m 0644 ${S}/config ${D}${sysconfdir}/formfactor/
if [ -s "${S}/machconfig" ]; then
install -m 0644 ${S}/machconfig ${D}${sysconfdir}/formfactor/
fi
}
In the main recipe, note the :term:`SRC_URI`
variable, which tells the OpenEmbedded build system where to find files
during the build.
Following is the append file, which is named ``formfactor_0.0.bbappend``
and is from the Raspberry Pi BSP Layer named ``meta-raspberrypi``. The
file is in the layer at ``recipes-bsp/formfactor``::
FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
By default, the build system uses the
:term:`FILESPATH` variable to
locate files. This append file extends the locations by setting the
:term:`FILESEXTRAPATHS`
variable. Setting this variable in the ``.bbappend`` file is the most
reliable and recommended method for adding directories to the search
path used by the build system to find files.
The statement in this example extends the directories to include
``${``\ :term:`THISDIR`\ ``}/${``\ :term:`PN`\ ``}``,
which resolves to a directory named ``formfactor`` in the same directory
in which the append file resides (i.e.
``meta-raspberrypi/recipes-bsp/formfactor``). This implies that you must
have the supporting directory structure set up that will contain any
files or patches you will be including from the layer.
Using the immediate expansion assignment operator ``:=`` is important
because of the reference to :term:`THISDIR`. The trailing colon character is
important as it ensures that items in the list remain colon-separated.
.. note::
BitBake automatically defines the :term:`THISDIR` variable. You should
never set this variable yourself. Using ":prepend" as part of the
:term:`FILESEXTRAPATHS` ensures your path will be searched prior to other
paths in the final list.
Also, not all append files add extra files. Many append files simply
add build options (e.g. enabling ``systemd`` support). For these cases, your
append file would not even use the :term:`FILESEXTRAPATHS` statement.
The end result of this ``.bbappend`` file is that on a Raspberry Pi, where
``rpi`` will exist in the list of :term:`OVERRIDES`, the file
``meta-raspberrypi/recipes-bsp/formfactor/formfactor/rpi/machconfig`` will be
used during :ref:`ref-tasks-fetch` and the test for a non-zero file size in
:ref:`ref-tasks-install` will return true, and the file will be installed.
Installing Additional Files Using Your Layer
--------------------------------------------
As another example, consider the main ``xserver-xf86-config`` recipe and a
corresponding ``xserver-xf86-config`` append file both from the :term:`Source
Directory`. Here is the main ``xserver-xf86-config`` recipe, which is named
``xserver-xf86-config_0.1.bb`` and located in the "meta" layer at
``meta/recipes-graphics/xorg-xserver``::
SUMMARY = "X.Org X server configuration file"
HOMEPAGE = "http://www.x.org"
SECTION = "x11/base"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
PR = "r33"
SRC_URI = "file://xorg.conf"
S = "${WORKDIR}"
CONFFILES:${PN} = "${sysconfdir}/X11/xorg.conf"
PACKAGE_ARCH = "${MACHINE_ARCH}"
ALLOW_EMPTY:${PN} = "1"
do_install () {
if test -s ${WORKDIR}/xorg.conf; then
install -d ${D}/${sysconfdir}/X11
install -m 0644 ${WORKDIR}/xorg.conf ${D}/${sysconfdir}/X11/
fi
}
Following is the append file, which is named ``xserver-xf86-config_%.bbappend``
and is from the Raspberry Pi BSP Layer named ``meta-raspberrypi``. The
file is in the layer at ``recipes-graphics/xorg-xserver``::
FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
SRC_URI:append:rpi = " \
file://xorg.conf.d/98-pitft.conf \
file://xorg.conf.d/99-calibration.conf \
"
do_install:append:rpi () {
PITFT="${@bb.utils.contains("MACHINE_FEATURES", "pitft", "1", "0", d)}"
if [ "${PITFT}" = "1" ]; then
install -d ${D}/${sysconfdir}/X11/xorg.conf.d/
install -m 0644 ${WORKDIR}/xorg.conf.d/98-pitft.conf ${D}/${sysconfdir}/X11/xorg.conf.d/
install -m 0644 ${WORKDIR}/xorg.conf.d/99-calibration.conf ${D}/${sysconfdir}/X11/xorg.conf.d/
fi
}
FILES:${PN}:append:rpi = " ${sysconfdir}/X11/xorg.conf.d/*"
Building off of the previous example, we once again are setting the
:term:`FILESEXTRAPATHS` variable. In this case we are also using
:term:`SRC_URI` to list additional source files to use when ``rpi`` is found in
the list of :term:`OVERRIDES`. The :ref:`ref-tasks-install` task will then perform a
check for an additional :term:`MACHINE_FEATURES` that if set will cause these
additional files to be installed. These additional files are listed in
:term:`FILES` so that they will be packaged.
Prioritizing Your Layer
=======================
Each layer is assigned a priority value. Priority values control which
layer takes precedence if there are recipe files with the same name in
multiple layers. For these cases, the recipe file from the layer with a
higher priority number takes precedence. Priority values also affect the
order in which multiple ``.bbappend`` files for the same recipe are
applied. You can either specify the priority manually, or allow the
build system to calculate it based on the layer's dependencies.
To specify the layer's priority manually, use the
:term:`BBFILE_PRIORITY`
variable and append the layer's root name::
BBFILE_PRIORITY_mylayer = "1"
.. note::
It is possible for a recipe with a lower version number
(:term:`PV`) in a layer that has a higher
priority to take precedence.
Also, the layer priority does not currently affect the precedence
order of ``.conf`` or ``.bbclass`` files. Future versions of BitBake
might address this.
Managing Layers
===============
You can use the BitBake layer management tool ``bitbake-layers`` to
provide a view into the structure of recipes across a multi-layer
project. Being able to generate output that reports on configured layers
with their paths and priorities and on ``.bbappend`` files and their
applicable recipes can help to reveal potential problems.
For help on the BitBake layer management tool, use the following
command::
$ bitbake-layers --help
The following list describes the available commands:
- ``help:`` Displays general help or help on a specified command.
- ``show-layers:`` Shows the current configured layers.
- ``show-overlayed:`` Lists overlayed recipes. A recipe is overlayed
when a recipe with the same name exists in another layer that has a
higher layer priority.
- ``show-recipes:`` Lists available recipes and the layers that
provide them.
- ``show-appends:`` Lists ``.bbappend`` files and the recipe files to
which they apply.
- ``show-cross-depends:`` Lists dependency relationships between
recipes that cross layer boundaries.
- ``add-layer:`` Adds a layer to ``bblayers.conf``.
- ``remove-layer:`` Removes a layer from ``bblayers.conf``
- ``flatten:`` Flattens the layer configuration into a separate
output directory. Flattening your layer configuration builds a
"flattened" directory that contains the contents of all layers, with
any overlayed recipes removed and any ``.bbappend`` files appended to
the corresponding recipes. You might have to perform some manual
cleanup of the flattened layer as follows:
- Non-recipe files (such as patches) are overwritten. The flatten
command shows a warning for these files.
- Anything beyond the normal layer setup has been added to the
``layer.conf`` file. Only the lowest priority layer's
``layer.conf`` is used.
- Overridden and appended items from ``.bbappend`` files need to be
cleaned up. The contents of each ``.bbappend`` end up in the
flattened recipe. However, if there are appended or changed
variable values, you need to tidy these up yourself. Consider the
following example. Here, the ``bitbake-layers`` command adds the
line ``#### bbappended ...`` so that you know where the following
lines originate::
...
DESCRIPTION = "A useful utility"
...
EXTRA_OECONF = "--enable-something"
...
#### bbappended from meta-anotherlayer ####
DESCRIPTION = "Customized utility"
EXTRA_OECONF += "--enable-somethingelse"
Ideally, you would tidy up the flattened recipe as follows::
...
DESCRIPTION = "Customized utility"
...
EXTRA_OECONF = "--enable-something --enable-somethingelse"
...
- ``layerindex-fetch``: Fetches a layer from a layer index, along
with its dependent layers, and adds the layers to the
``conf/bblayers.conf`` file.
- ``layerindex-show-depends``: Finds layer dependencies from the
layer index.
- ``save-build-conf``: Saves the currently active build configuration
(``conf/local.conf``, ``conf/bblayers.conf``) as a template into a layer.
This template can later be used for setting up builds via :term:`TEMPLATECONF`.
For information about saving and using configuration templates, see
":ref:`dev-manual/custom-template-configuration-directory:creating a custom template configuration directory`".
- ``create-layer``: Creates a basic layer.
- ``create-layers-setup``: Writes out a configuration file and/or a script that
can replicate the directory structure and revisions of the layers in a current build.
For more information, see ":ref:`dev-manual/layers:saving and restoring the layers setup`".
Creating a General Layer Using the ``bitbake-layers`` Script
============================================================
The ``bitbake-layers`` script with the ``create-layer`` subcommand
simplifies creating a new general layer.
.. note::
- For information on BSP layers, see the ":ref:`bsp-guide/bsp:bsp layers`"
section in the Yocto
Project Board Specific (BSP) Developer's Guide.
- In order to use a layer with the OpenEmbedded build system, you
need to add the layer to your ``bblayers.conf`` configuration
file. See the ":ref:`dev-manual/layers:adding a layer using the \`\`bitbake-layers\`\` script`"
section for more information.
The default mode of the script's operation with this subcommand is to
create a layer with the following:
- A layer priority of 6.
- A ``conf`` subdirectory that contains a ``layer.conf`` file.
- A ``recipes-example`` subdirectory that contains a further
subdirectory named ``example``, which contains an ``example.bb``
recipe file.
- A ``COPYING.MIT``, which is the license statement for the layer. The
script assumes you want to use the MIT license, which is typical for
most layers, for the contents of the layer itself.
- A ``README`` file, which is a file describing the contents of your
new layer.
In its simplest form, you can use the following command form to create a
layer. The command creates a layer whose name corresponds to
"your_layer_name" in the current directory::
$ bitbake-layers create-layer your_layer_name
As an example, the following command creates a layer named ``meta-scottrif``
in your home directory::
$ cd /usr/home
$ bitbake-layers create-layer meta-scottrif
NOTE: Starting bitbake server...
Add your new layer with 'bitbake-layers add-layer meta-scottrif'
If you want to set the priority of the layer to other than the default
value of "6", you can either use the ``--priority`` option or you
can edit the
:term:`BBFILE_PRIORITY` value
in the ``conf/layer.conf`` after the script creates it. Furthermore, if
you want to give the example recipe file some name other than the
default, you can use the ``--example-recipe-name`` option.
The easiest way to see how the ``bitbake-layers create-layer`` command
works is to experiment with the script. You can also read the usage
information by entering the following::
$ bitbake-layers create-layer --help
NOTE: Starting bitbake server...
usage: bitbake-layers create-layer [-h] [--priority PRIORITY]
[--example-recipe-name EXAMPLERECIPE]
layerdir
Create a basic layer
positional arguments:
layerdir Layer directory to create
optional arguments:
-h, --help show this help message and exit
--priority PRIORITY, -p PRIORITY
Layer directory to create
--example-recipe-name EXAMPLERECIPE, -e EXAMPLERECIPE
Filename of the example recipe
Adding a Layer Using the ``bitbake-layers`` Script
==================================================
Once you create your general layer, you must add it to your
``bblayers.conf`` file. Adding the layer to this configuration file
makes the OpenEmbedded build system aware of your layer so that it can
search it for metadata.
Add your layer by using the ``bitbake-layers add-layer`` command::
$ bitbake-layers add-layer your_layer_name
Here is an example that adds a
layer named ``meta-scottrif`` to the configuration file. Following the
command that adds the layer is another ``bitbake-layers`` command that
shows the layers that are in your ``bblayers.conf`` file::
$ bitbake-layers add-layer meta-scottrif
NOTE: Starting bitbake server...
Parsing recipes: 100% |##########################################################| Time: 0:00:49
Parsing of 1441 .bb files complete (0 cached, 1441 parsed). 2055 targets, 56 skipped, 0 masked, 0 errors.
$ bitbake-layers show-layers
NOTE: Starting bitbake server...
layer path priority
==========================================================================
meta /home/scottrif/poky/meta 5
meta-poky /home/scottrif/poky/meta-poky 5
meta-yocto-bsp /home/scottrif/poky/meta-yocto-bsp 5
workspace /home/scottrif/poky/build/workspace 99
meta-scottrif /home/scottrif/poky/build/meta-scottrif 6
Adding the layer to this file
enables the build system to locate the layer during the build.
.. note::
During a build, the OpenEmbedded build system looks in the layers
from the top of the list down to the bottom in that order.
Saving and restoring the layers setup
=====================================
Once you have a working build with the correct set of layers, it is beneficial
to capture the layer setup --- what they are, which repositories they come from
and which SCM revisions they're at --- into a configuration file, so that this
setup can be easily replicated later, perhaps on a different machine. Here's
how to do this::
$ bitbake-layers create-layers-setup /srv/work/alex/meta-alex/
NOTE: Starting bitbake server...
NOTE: Created /srv/work/alex/meta-alex/setup-layers.json
NOTE: Created /srv/work/alex/meta-alex/setup-layers
The tool requires a single argument that tells it where to place the output,
consisting of a JSON-formatted layer configuration and a ``setup-layers`` script
that can use that configuration to restore the layers in a different location,
or on a different host machine. The argument can point to a custom layer (which
is then deemed a "bootstrap" layer that needs to be checked out first), or to a
completely independent location.
The replication of the layers is performed by running the ``setup-layers`` script provided
above:
#. Clone the bootstrap layer or some other repository to obtain
the json config and the setup script that can use it.
#. Run the script directly with no options::
alex@Zen2:/srv/work/alex/my-build$ meta-alex/setup-layers
Note: not checking out source meta-alex, use --force-bootstraplayer-checkout to override.
Setting up source meta-intel, revision 15.0-hardknott-3.3-310-g0a96edae, branch master
Running 'git init -q /srv/work/alex/my-build/meta-intel'
Running 'git remote remove origin > /dev/null 2>&1; git remote add origin git://git.yoctoproject.org/meta-intel' in /srv/work/alex/my-build/meta-intel
Running 'git fetch -q origin || true' in /srv/work/alex/my-build/meta-intel
Running 'git checkout -q 0a96edae609a3f48befac36af82cf1eed6786b4a' in /srv/work/alex/my-build/meta-intel
Setting up source poky, revision 4.1_M1-372-g55483d28f2, branch akanavin/setup-layers
Running 'git init -q /srv/work/alex/my-build/poky'
Running 'git remote remove origin > /dev/null 2>&1; git remote add origin git://git.yoctoproject.org/poky' in /srv/work/alex/my-build/poky
Running 'git fetch -q origin || true' in /srv/work/alex/my-build/poky
Running 'git remote remove poky-contrib > /dev/null 2>&1; git remote add poky-contrib ssh://git@push.yoctoproject.org/poky-contrib' in /srv/work/alex/my-build/poky
Running 'git fetch -q poky-contrib || true' in /srv/work/alex/my-build/poky
Running 'git checkout -q 11db0390b02acac1324e0f827beb0e2e3d0d1d63' in /srv/work/alex/my-build/poky
.. note::
This will work to update an existing checkout as well.
.. note::
The script is self-sufficient and requires only python3
and git on the build machine.
.. note::
Both ``create-layers-setup`` and the generated ``setup-layers`` script provide
several additional options that customize their behavior; you are welcome to
study them via the ``--help`` command line parameter.


@@ -1,267 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Working With Libraries
**********************
Libraries are an integral part of your system. This section describes
some common practices you might find helpful when working with libraries
to build your system:
- :ref:`How to include static library files
<dev-manual/libraries:including static library files>`
- :ref:`How to use the Multilib feature to combine multiple versions of
library files into a single image
<dev-manual/libraries:combining multiple versions of library files into one image>`
- :ref:`How to install multiple versions of the same library in parallel on
the same system
<dev-manual/libraries:installing multiple versions of the same library>`
Including Static Library Files
==============================
If you are building a library and the library offers static linking, you
can control which static library files (``*.a`` files) get included in
the built library.
The :term:`PACKAGES` and
:term:`FILES:* <FILES>` variables in the
``meta/conf/bitbake.conf`` configuration file define how files installed
by the :ref:`ref-tasks-install` task are packaged. By default, the :term:`PACKAGES`
variable includes ``${PN}-staticdev``, which represents all static
library files.
.. note::
Some previously released versions of the Yocto Project defined the
static library files through ``${PN}-dev``.
Following is part of the BitBake configuration file, where you can see
how the static library files are defined::
PACKAGE_BEFORE_PN ?= ""
PACKAGES = "${PN}-src ${PN}-dbg ${PN}-staticdev ${PN}-dev ${PN}-doc ${PN}-locale ${PACKAGE_BEFORE_PN} ${PN}"
PACKAGES_DYNAMIC = "^${PN}-locale-.*"
FILES = ""
FILES:${PN} = "${bindir}/* ${sbindir}/* ${libexecdir}/* ${libdir}/lib*${SOLIBS} \
${sysconfdir} ${sharedstatedir} ${localstatedir} \
${base_bindir}/* ${base_sbindir}/* \
${base_libdir}/*${SOLIBS} \
${base_prefix}/lib/udev ${prefix}/lib/udev \
${base_libdir}/udev ${libdir}/udev \
${datadir}/${BPN} ${libdir}/${BPN}/* \
${datadir}/pixmaps ${datadir}/applications \
${datadir}/idl ${datadir}/omf ${datadir}/sounds \
${libdir}/bonobo/servers"
FILES:${PN}-bin = "${bindir}/* ${sbindir}/*"
FILES:${PN}-doc = "${docdir} ${mandir} ${infodir} ${datadir}/gtk-doc \
${datadir}/gnome/help"
SECTION:${PN}-doc = "doc"
FILES_SOLIBSDEV ?= "${base_libdir}/lib*${SOLIBSDEV} ${libdir}/lib*${SOLIBSDEV}"
FILES:${PN}-dev = "${includedir} ${FILES_SOLIBSDEV} ${libdir}/*.la \
${libdir}/*.o ${libdir}/pkgconfig ${datadir}/pkgconfig \
${datadir}/aclocal ${base_libdir}/*.o \
${libdir}/${BPN}/*.la ${base_libdir}/*.la \
${libdir}/cmake ${datadir}/cmake"
SECTION:${PN}-dev = "devel"
ALLOW_EMPTY:${PN}-dev = "1"
RDEPENDS:${PN}-dev = "${PN} (= ${EXTENDPKGV})"
FILES:${PN}-staticdev = "${libdir}/*.a ${base_libdir}/*.a ${libdir}/${BPN}/*.a"
SECTION:${PN}-staticdev = "devel"
RDEPENDS:${PN}-staticdev = "${PN}-dev (= ${EXTENDPKGV})"
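As a usage sketch, to ship the static archives produced by a recipe in your
image, you would install its ``-staticdev`` package (the recipe name below is
illustrative)::

   IMAGE_INSTALL:append = " zlib-staticdev"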
Combining Multiple Versions of Library Files into One Image
===========================================================
The build system offers the ability to build libraries with different
target optimizations or architecture formats and combine these together
into one system image. You can link different binaries in the image
against the different libraries as needed for specific use cases. This
feature is called "Multilib".
An example would be where you have most of a system compiled in 32-bit
mode using 32-bit libraries, but you have something large, like a
database engine, that needs to be a 64-bit application and uses 64-bit
libraries. Multilib allows you to get the best of both 32-bit and 64-bit
libraries.
While the Multilib feature is most commonly used for 32 and 64-bit
differences, the approach the build system uses facilitates different
target optimizations. You could compile some binaries to use one set of
libraries and other binaries to use a different set of libraries. The
libraries could differ in architecture, compiler options, or other
optimizations.
There are several examples in the ``meta-skeleton`` layer found in the
:term:`Source Directory`:
- :oe_git:`conf/multilib-example.conf </openembedded-core/tree/meta-skeleton/conf/multilib-example.conf>`
configuration file.
- :oe_git:`conf/multilib-example2.conf </openembedded-core/tree/meta-skeleton/conf/multilib-example2.conf>`
configuration file.
- :oe_git:`recipes-multilib/images/core-image-multilib-example.bb </openembedded-core/tree/meta-skeleton/recipes-multilib/images/core-image-multilib-example.bb>`
recipe
Preparing to Use Multilib
-------------------------
User-specific requirements drive the Multilib feature. Consequently,
there is no one "out-of-the-box" configuration that would
meet your needs.
In order to enable Multilib, you first need to ensure your recipe is
extended to support multiple libraries. Many standard recipes are
already extended and support multiple libraries. You can check in the
``meta/conf/multilib.conf`` configuration file in the
:term:`Source Directory` to see how this is
done using the
:term:`BBCLASSEXTEND` variable.
Eventually, all recipes will be covered and this list will not be
needed.
For the most part, the :ref:`Multilib <ref-classes-multilib*>`
class extension works automatically to
extend the package name from ``${PN}`` to ``${MLPREFIX}${PN}``, where
:term:`MLPREFIX` is the particular multilib (e.g. "lib32-" or "lib64-").
Standard variables such as
:term:`DEPENDS`,
:term:`RDEPENDS`,
:term:`RPROVIDES`,
:term:`RRECOMMENDS`,
:term:`PACKAGES`, and
:term:`PACKAGES_DYNAMIC` are
automatically extended by the system. If you are extending any manual
code in the recipe, you can use the ``${MLPREFIX}`` variable to ensure
those names are extended correctly.
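For instance, a manually written runtime dependency can be prefixed so that it
resolves to the matching multilib variant of the dependency (the package name
below is illustrative)::

   RDEPENDS:${PN} += "${MLPREFIX}glib-2.0"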
Using Multilib
--------------
After you have set up the recipes, you need to define the actual
combination of multiple libraries you want to build. You accomplish this
through your ``local.conf`` configuration file in the
:term:`Build Directory`. An example configuration would be as follows::
MACHINE = "qemux86-64"
require conf/multilib.conf
MULTILIBS = "multilib:lib32"
DEFAULTTUNE:virtclass-multilib-lib32 = "x86"
IMAGE_INSTALL:append = " lib32-glib-2.0"
This example enables an additional library named
``lib32`` alongside the normal target packages. When combining these
"lib32" alternatives, the example uses "x86" for tuning. For information
on this particular tuning, see
``meta/conf/machine/include/ia32/arch-ia32.inc``.
The example then includes ``lib32-glib-2.0`` in all the images, which
illustrates one method of including a multiple library dependency. You
can use a normal image build to include this dependency, for example::
$ bitbake core-image-sato
You can also build Multilib packages
specifically with a command like this::
$ bitbake lib32-glib-2.0
Additional Implementation Details
---------------------------------
There are generic implementation details as well as details that are specific to
package management systems. Following are implementation details
that exist regardless of the package management system:
- The typical convention used for the class extension code as used by
Multilib assumes that all package names specified in
:term:`PACKAGES` that contain
``${PN}`` have ``${PN}`` at the start of the name. When that
convention is not followed and ``${PN}`` appears in the middle or at the
end of a name, problems occur.
- The :term:`TARGET_VENDOR`
value under Multilib will be extended to "-vendormlmultilib" (e.g.
"-pokymllib32" for a "lib32" Multilib with Poky). The reason for this
slightly unwieldy contraction is that any "-" characters in the
vendor string presently break Autoconf's ``config.sub``, and other
separators are problematic for different reasons.
Here are the implementation details for the RPM Package Management System:
- A unique architecture is defined for the Multilib packages, along
with creating a unique deploy folder under ``tmp/deploy/rpm`` in the
:term:`Build Directory`. For example, consider ``lib32`` in a
``qemux86-64`` image. The possible architectures in the system are "all",
"qemux86_64", "lib32:qemux86_64", and "lib32:x86".
- The ``${MLPREFIX}`` variable is stripped from ``${PN}`` during RPM
packaging. The naming for a normal RPM package and a Multilib RPM
package in a ``qemux86-64`` system resolves to something similar to
``bash-4.1-r2.x86_64.rpm`` and ``bash-4.1.r2.lib32_x86.rpm``,
respectively.
- When installing a Multilib image, the RPM backend first installs the
base image and then installs the Multilib libraries.
- The build system relies on RPM to resolve the identical files in the
two (or more) Multilib packages.
Here are the implementation details for the IPK Package Management System:
- The ``${MLPREFIX}`` variable is not stripped from ``${PN}`` during IPK
packaging. The naming for a normal IPK package and a Multilib IPK
package in a ``qemux86-64`` system resolves to something like
``bash_4.1-r2.x86_64.ipk`` and ``lib32-bash_4.1-r2_x86.ipk``,
respectively.
- The IPK deploy folder is not modified with ``${MLPREFIX}`` because
packages with and without the Multilib feature can exist in the same
folder due to the ``${PN}`` differences.
- IPK defines a sanity check for Multilib installation using certain
rules for file comparison, overrides, and so forth.
Installing Multiple Versions of the Same Library
================================================
There are situations where you need to install and use multiple versions
of the same library on the same system at the same time. This
almost always happens when a library API changes and you have
multiple pieces of software that depend on the separate versions of the
library. To accommodate these situations, you can install multiple
versions of the same library in parallel on the same system.
The process is straightforward as long as the libraries use proper
versioning. With properly versioned libraries, all you need to do to
individually specify the libraries is create separate, appropriately
named recipes where the :term:`PN` part of
the name includes a portion that differentiates each library version
(e.g. the major part of the version number). Thus, instead of having a
single recipe that loads one version of a library (e.g. ``clutter``),
you provide multiple recipes that result in different versions of the
libraries you want. As an example, the following two recipes would allow
the two separate versions of the ``clutter`` library to co-exist on the
same system:
.. code-block:: none
clutter-1.6_1.6.20.bb
clutter-1.8_1.8.4.bb
Additionally, if
you have other recipes that depend on a given library, you need to use
the :term:`DEPENDS` variable to
create the dependency. Continuing with the same example, if you want to
have a recipe depend on the 1.8 version of the ``clutter`` library, use
the following in your recipe::
DEPENDS = "clutter-1.8"


@@ -1,522 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Working With Licenses
*********************
As mentioned in the ":ref:`overview-manual/development-environment:licensing`"
section in the Yocto Project Overview and Concepts Manual, open source
projects are open to the public and they consequently have different
licensing structures in place. This section describes the mechanism by
which the :term:`OpenEmbedded Build System`
tracks changes to
licensing text and covers how to maintain open source license compliance
during your project's lifecycle. The section also describes how to
enable commercially licensed recipes, which by default are disabled.
Tracking License Changes
========================
The license of an upstream project might change in the future. In order
to prevent these changes going unnoticed, the
:term:`LIC_FILES_CHKSUM`
variable tracks changes to the license text. The checksums are validated
at the end of the configure step, and if the checksums do not match, the
build will fail.
Specifying the ``LIC_FILES_CHKSUM`` Variable
--------------------------------------------
The :term:`LIC_FILES_CHKSUM` variable contains checksums of the license text
in the source code for the recipe. Following is an example of how to
specify :term:`LIC_FILES_CHKSUM`::
LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx \
file://licfile1.txt;beginline=5;endline=29;md5=yyyy \
file://licfile2.txt;endline=50;md5=zzzz \
..."
.. note::
- When using "beginline" and "endline", realize that line numbering
begins with one and not zero. Also, the included lines are
inclusive (i.e. lines five through and including 29 in the
previous example for ``licfile1.txt``).
- When a license check fails, the selected license text is included
as part of the QA message. Using this output, you can determine
the exact start and finish for the needed license text.
The build system uses the :term:`S`
variable as the default directory when searching files listed in
:term:`LIC_FILES_CHKSUM`. The previous example employs the default
directory.
Consider this next example::
LIC_FILES_CHKSUM = "file://src/ls.c;beginline=5;endline=16;\
md5=bb14ed3c4cda583abc85401304b5cd4e"
LIC_FILES_CHKSUM = "file://${WORKDIR}/license.html;md5=5c94767cedb5d6987c902ac850ded2c6"
The first line locates a file in ``${S}/src/ls.c`` and isolates lines
five through 16 as license text. The second line refers to a file in
:term:`WORKDIR`.
Note that :term:`LIC_FILES_CHKSUM` variable is mandatory for all recipes,
unless the :term:`LICENSE` variable is set to "CLOSED".
Explanation of Syntax
---------------------
As mentioned in the previous section, the :term:`LIC_FILES_CHKSUM` variable
lists all the important files that contain the license text for the
source code. It is possible to specify a checksum for an entire file, or
a specific section of a file (specified by beginning and ending line
numbers with the "beginline" and "endline" parameters, respectively).
The latter is useful for source files with a license notice header,
README documents, and so forth. If you do not use the "beginline"
parameter, then it is assumed that the text begins on the first line of
the file. Similarly, if you do not use the "endline" parameter, it is
assumed that the license text ends with the last line of the file.
The "md5" parameter stores the md5 checksum of the license text. If the
license text changes in any way as compared to this parameter then a
mismatch occurs. This mismatch triggers a build failure and notifies the
developer. Notification allows the developer to review and address the
license text changes. Also note that if a mismatch occurs during the
build, the correct md5 checksum is placed in the build log and can be
easily copied to the recipe.
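If you prefer to compute the value up front, here is a minimal sketch,
assuming the license text lives in a ``COPYING`` file at the top of the
source tree:

.. code-block:: shell

   # Compute the md5 checksum of the license file (path is hypothetical)
   $ md5sum COPYING
   b234ee4d69f5fce4486a80fdaf4a4263  COPYING

The resulting value is what goes into the "md5" parameter of
:term:`LIC_FILES_CHKSUM`; the checksum shown here is only illustrative.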
There is no limit to how many files you can specify using the
:term:`LIC_FILES_CHKSUM` variable. Generally, however, every project
requires a few specifications for license tracking. Many projects have a
"COPYING" file that stores the license information for all the source
code files. This practice allows you to just track the "COPYING" file as
long as it is kept up to date.
.. note::
- If you specify an empty or invalid "md5" parameter,
:term:`BitBake` returns an md5
mismatch error and displays the correct "md5" parameter value
during the build. The correct parameter is also captured in the
build log.
- If the whole file contains only license text, you do not need to
use the "beginline" and "endline" parameters.
Enabling Commercially Licensed Recipes
======================================
By default, the OpenEmbedded build system disables components that have
commercial or other special licensing requirements. Such requirements
are defined on a recipe-by-recipe basis through the
:term:`LICENSE_FLAGS` variable
definition in the affected recipe. For instance, the
``poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly`` recipe
contains the following statement::
LICENSE_FLAGS = "commercial"
Here is a
slightly more complicated example that contains both an explicit recipe
name and version (after variable expansion)::
LICENSE_FLAGS = "license_${PN}_${PV}"
In order for a component restricted by a
:term:`LICENSE_FLAGS` definition to be enabled and included in an image, it
needs to have a matching entry in the global
:term:`LICENSE_FLAGS_ACCEPTED`
variable, which is a variable typically defined in your ``local.conf``
file. For example, to enable the
``poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly`` package, you
could add either the string "commercial_gst-plugins-ugly" or the more
general string "commercial" to :term:`LICENSE_FLAGS_ACCEPTED`. See the
":ref:`dev-manual/licenses:license flag matching`" section for a full
explanation of how :term:`LICENSE_FLAGS` matching works. Here is the
example::
LICENSE_FLAGS_ACCEPTED = "commercial_gst-plugins-ugly"
Likewise, to additionally enable the package built from the recipe
containing ``LICENSE_FLAGS = "license_${PN}_${PV}"``, and assuming that
the actual recipe name was ``emgd_1.10.bb``, the following string would
enable that package as well as the original ``gst-plugins-ugly``
package::
LICENSE_FLAGS_ACCEPTED = "commercial_gst-plugins-ugly license_emgd_1.10"
As a convenience, you do not need to specify the
complete license string for every package. You can use
an abbreviated form, which consists of just the first portion or
portions of the license string before the initial underscore character
or characters. A partial string will match any license that contains the
given string as the first portion of its license. For example, the
following value will also match both of the packages
previously mentioned as well as any other packages that have licenses
starting with "commercial" or "license"::
LICENSE_FLAGS_ACCEPTED = "commercial license"
License Flag Matching
---------------------
License flag matching allows you to control what recipes the
OpenEmbedded build system includes in the build. Fundamentally, the
build system attempts to match :term:`LICENSE_FLAGS` strings found in
recipes against strings found in :term:`LICENSE_FLAGS_ACCEPTED`.
A match causes the build system to include a recipe in the
build, while failure to find a match causes the build system to exclude
a recipe.
In general, license flag matching is simple. However, understanding some
concepts will help you correctly and effectively use matching.
Before a flag defined by a particular recipe is tested against the
entries of :term:`LICENSE_FLAGS_ACCEPTED`, the expanded
string ``_${PN}`` is appended to the flag. This expansion makes each
:term:`LICENSE_FLAGS` value recipe-specific. After expansion, the
string is then matched against the entries. Thus, specifying
``LICENSE_FLAGS = "commercial"`` in recipe "foo", for example, results
in the string ``"commercial_foo"``. And, to create a match, that string
must appear among the entries of :term:`LICENSE_FLAGS_ACCEPTED`.
Judicious use of the :term:`LICENSE_FLAGS` strings and the contents of the
:term:`LICENSE_FLAGS_ACCEPTED` variable allows you a lot of flexibility for
including or excluding recipes based on licensing. For example, you can
broaden the matching capabilities by using license flags string subsets
in :term:`LICENSE_FLAGS_ACCEPTED`.
.. note::
When using a string subset, be sure to use the part of the expanded
string that precedes the appended underscore character (e.g.
``usethispart_1.3``, ``usethispart_1.4``, and so forth).
For example, simply specifying the string "commercial" in the
:term:`LICENSE_FLAGS_ACCEPTED` variable matches any expanded
:term:`LICENSE_FLAGS` definition that starts with the string
"commercial" such as "commercial_foo" and "commercial_bar", which
are the strings the build system automatically generates for
hypothetical recipes named "foo" and "bar" assuming those recipes simply
specify the following::
LICENSE_FLAGS = "commercial"
Thus, you can choose to exhaustively enumerate each license flag in the
list and allow only specific recipes into the image, or you can use a
string subset that causes a broader range of matches to allow a range of
recipes into the image.
This scheme works even if the :term:`LICENSE_FLAGS` string already has
``_${PN}`` appended. For example, the build system turns the license
flag "commercial_1.2_foo" into "commercial_1.2_foo_foo" and would match
both the general "commercial" and the specific "commercial_1.2_foo"
strings found in the :term:`LICENSE_FLAGS_ACCEPTED` variable, as expected.
Here are some other scenarios:
- You can specify a versioned string such as
"commercial_foo_1.2" in a "foo" recipe. The build system expands this
string to "commercial_foo_1.2_foo". Combine this license flag with a
:term:`LICENSE_FLAGS_ACCEPTED` variable that has the string
"commercial" and you match the flag along with any other flag that
starts with the string "commercial".
- Under the same circumstances, you can add "commercial_foo" in the
:term:`LICENSE_FLAGS_ACCEPTED` variable and the build system not only
matches "commercial_foo_1.2" but also matches any license flag with
the string "commercial_foo", regardless of the version.
- You can be very specific and use both the package and version parts
in the :term:`LICENSE_FLAGS_ACCEPTED` list (e.g.
"commercial_foo_1.2") to specifically match a versioned recipe.
Other Variables Related to Commercial Licenses
----------------------------------------------
There are other helpful variables related to commercial license handling,
defined in the
``poky/meta/conf/distro/include/default-distrovars.inc`` file::
COMMERCIAL_AUDIO_PLUGINS ?= ""
COMMERCIAL_VIDEO_PLUGINS ?= ""
If you want to enable these components, you can do so by making sure you have
statements similar to the following in your ``local.conf`` configuration file::
COMMERCIAL_AUDIO_PLUGINS = "gst-plugins-ugly-mad \
gst-plugins-ugly-mpegaudioparse"
COMMERCIAL_VIDEO_PLUGINS = "gst-plugins-ugly-mpeg2dec \
gst-plugins-ugly-mpegstream gst-plugins-bad-mpegvideoparse"
LICENSE_FLAGS_ACCEPTED = "commercial_gst-plugins-ugly commercial_gst-plugins-bad commercial_qmmp"
Of course, you could also create a matching list for those components using the
more general "commercial" string in the :term:`LICENSE_FLAGS_ACCEPTED` variable,
but that would also enable all the other packages with :term:`LICENSE_FLAGS`
containing "commercial", which you may or may not want::
LICENSE_FLAGS_ACCEPTED = "commercial"
Specifying audio and video plugins as part of the
:term:`COMMERCIAL_AUDIO_PLUGINS` and :term:`COMMERCIAL_VIDEO_PLUGINS` statements
(along with :term:`LICENSE_FLAGS_ACCEPTED`) includes the plugins or
components into built images, thus adding support for media formats or
components.
.. note::
GStreamer "ugly" and "bad" plugins are actually available through
open source licenses. However, the "ugly" ones can be subject to software
patents in some countries, making it necessary to pay licensing fees
to distribute them. The "bad" ones are just deemed unreliable by the
GStreamer community and should therefore be used with care.
Maintaining Open Source License Compliance During Your Product's Lifecycle
==========================================================================
One of the concerns for a development organization using open source
software is how to maintain compliance with various open source
licenses during the lifecycle of the product. While this section does
not provide legal advice or comprehensively cover all scenarios, it does
present methods that you can use to assist you in meeting the compliance
requirements during a software release.
With hundreds of different open source licenses that the Yocto Project
tracks, it is difficult to know the requirements of each and every
license. However, you can begin to cover the requirements of the major
FLOSS licenses by addressing three main areas of concern:
- Source code must be provided.
- License text for the software must be provided.
- Compilation scripts and modifications to the source code must be
provided.
There are other requirements beyond the scope of these three and the
methods described in this section (e.g. the mechanism through which
source code is distributed).
As different organizations have different methods of complying with open
source licensing, this section is not meant to imply that there is only
one single way to meet your compliance obligations, but rather to
describe one method of achieving compliance. The remainder of this
section describes methods supported to meet the previously mentioned
three requirements. Once you take steps to meet these requirements, and
prior to releasing images, sources, and the build system, you should
audit all artifacts to ensure completeness.
.. note::
The Yocto Project generates a license manifest during image creation
that is located in ``${DEPLOY_DIR}/licenses/``\ `image_name`\ ``-``\ `datestamp`
to assist with any audits.
Providing the Source Code
-------------------------
Compliance activities should begin before you generate the final image.
The first thing you should look at is the requirement that tops the list
for most compliance groups --- providing the source. The Yocto Project has
a few ways of meeting this requirement.
One of the easiest ways to meet this requirement is to provide the
entire :term:`DL_DIR` used by the
build. This method, however, has a few issues. The most obvious is the
size of the directory since it includes all sources used in the build
and not just the source used in the released image. It will include
toolchain source and other artifacts, which you would not generally
release. However, the more serious issue for most companies is
accidental release of proprietary software. The Yocto Project provides
an :ref:`ref-classes-archiver` class to help avoid some of these concerns.
Before you employ :term:`DL_DIR` or the :ref:`ref-classes-archiver` class, you
need to decide how you will provide source. The source
:ref:`ref-classes-archiver` class can generate tarballs and SRPMs and can
create them with various levels of compliance in mind.
One way of doing this (but certainly not the only way) is to release
just the source as a tarball. You can do this by adding the following to
the ``local.conf`` file found in the :term:`Build Directory`::
INHERIT += "archiver"
ARCHIVER_MODE[src] = "original"
During the creation of your
image, the source from all recipes that deploy packages to the image is
placed within subdirectories of ``DEPLOY_DIR/sources`` based on the
:term:`LICENSE` for each recipe.
Releasing the entire directory enables you to comply with requirements
concerning providing the unmodified source. It is important to note that
the size of the directory can get large.
A way to help mitigate the size issue is to only release tarballs for
licenses that require the release of source. Let us assume you are only
concerned with GPL code as identified by running the following script:
.. code-block:: shell

   #!/bin/bash
   # Script to archive a subset of packages matching specific license(s)
   # Source and license files are copied into sub-folders of the package folder
   # Must be run from the build folder
   src_release_dir="source-release"
   mkdir -p $src_release_dir
   for a in tmp/deploy/sources/*; do
      for d in $a/*; do
         # Get package name from path
         p=`basename $d`
         p=${p%-*}
         p=${p%-*}
         # Only archive GPL packages (update *GPL* regex for your license check)
         numfiles=`ls tmp/deploy/licenses/$p/*GPL* 2> /dev/null | wc -l`
         if [ $numfiles -ge 1 ]; then
            echo "Archiving $p"
            mkdir -p $src_release_dir/$p/source
            cp $d/* $src_release_dir/$p/source 2> /dev/null
            mkdir -p $src_release_dir/$p/license
            cp tmp/deploy/licenses/$p/* $src_release_dir/$p/license 2> /dev/null
         fi
      done
   done
At this point, you
could create a tarball from the ``source-release`` directory and
provide that to the end user. This method would be a step toward
achieving compliance with section 3a of GPLv2 and with section 6 of
GPLv3.
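As a final step, here is a hedged sketch of bundling that directory for
delivery (the archive name is arbitrary):

.. code-block:: shell

   # Create one tarball containing the per-package source and license trees
   $ tar -czf source-release.tar.gz source-release/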
Providing License Text
----------------------
One requirement that is often overlooked is inclusion of license text.
This requirement also needs to be dealt with prior to generating the
final image. Some licenses require the license text to accompany the
binary. You can achieve this by adding the following to your
``local.conf`` file::
COPY_LIC_MANIFEST = "1"
COPY_LIC_DIRS = "1"
LICENSE_CREATE_PACKAGE = "1"
Adding these statements to the
configuration file ensures that the licenses collected during package
generation are included on your image.
.. note::
Setting all three variables to "1" results in the image having two
copies of the same license file. One copy resides in
``/usr/share/common-licenses`` and the other resides in
``/usr/share/license``.
The reason for this behavior is that
:term:`COPY_LIC_DIRS` and
:term:`COPY_LIC_MANIFEST`
add a copy of the license when the image is built but do not offer a
path for adding licenses for newly installed packages to an image.
:term:`LICENSE_CREATE_PACKAGE`
adds a separate package and an upgrade path for adding licenses to an
image.
As the source :ref:`ref-classes-archiver` class has already archived the
original unmodified source that contains the license files, you would have
already met the requirements for inclusion of the license information
with source as defined by the GPL and other open source licenses.
Providing Compilation Scripts and Source Code Modifications
-----------------------------------------------------------
At this point, we have addressed all we need to prior to generating the
image. The next two requirements are addressed during the final
packaging of the release.
By releasing the version of the OpenEmbedded build system and the layers
used during the build, you will be providing both compilation scripts
and the source code modifications in one step.
If the deployment team has a :ref:`overview-manual/concepts:bsp layer`
and a distro layer, and those layers are used to patch, compile,
package, or modify (in any way)
any open source software included in your released images, you might be
required to release those layers under section 3 of GPLv2 or section 1
of GPLv3. One way of doing that is with a clean checkout of the version
of the Yocto Project and layers used during your build. Here is an
example:
.. code-block:: shell
# We built using the dunfell branch of the poky repo
$ git clone -b dunfell git://git.yoctoproject.org/poky
$ cd poky
# We built using the release_branch for our layers
$ git clone -b release_branch git://git.mycompany.com/meta-my-bsp-layer
$ git clone -b release_branch git://git.mycompany.com/meta-my-software-layer
# clean up the .git repos
$ find . -name ".git" -type d -exec rm -rf {} \;
One thing a development organization might want to consider for end-user
convenience is to modify
``meta-poky/conf/templates/default/bblayers.conf.sample`` to ensure that when
the end user utilizes the released build system to build an image, the
development organization's layers are included in the ``bblayers.conf`` file
automatically::
# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
##OEROOT##/meta \
##OEROOT##/meta-poky \
##OEROOT##/meta-yocto-bsp \
##OEROOT##/meta-mylayer \
"
Creating and
providing an archive of the :term:`Metadata`
layers (recipes, configuration files, and so forth) enables you to meet
your requirements to include the scripts to control compilation as well
as any modifications to the original source.
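One possible sketch of that final step, assuming the cleaned checkout from
the previous example sits in a ``poky`` directory:

.. code-block:: shell

   # Archive the build system and layers after the .git directories are removed
   $ tar -czf release-metadata.tar.gz poky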
Compliance Limitations with Executables Built from Static Libraries
-------------------------------------------------------------------
When package A is added to an image via the :term:`RDEPENDS` or :term:`RRECOMMENDS`
mechanisms, or is explicitly included in the image recipe with
:term:`IMAGE_INSTALL`, and depends on a statically linked library recipe B
(``DEPENDS += "B"``), package B will appear neither in the generated license
manifest nor in the generated source tarballs. This occurs because the
:ref:`ref-classes-license` and :ref:`ref-classes-archiver` classes assume that
only packages included via :term:`RDEPENDS` or :term:`RRECOMMENDS`
end up in the image.
As a result, potential obligations regarding license compliance for package B
may not be met.
The Yocto Project doesn't enable static libraries by default, in part because
of this issue. Before a solution to this limitation is found, you need to
keep in mind that if your root filesystem is built from static libraries,
you will need to manually ensure that your deliveries are compliant
with the licenses of these libraries.
Copying Non Standard Licenses
=============================
Some packages, such as the linux-firmware package, have many licenses
that are not in any way common. You can avoid adding a lot of these
license files, which apply only to a specific package, by using the
:term:`NO_GENERIC_LICENSE`
variable. Using this variable also avoids QA errors when you use a
non-common, non-CLOSED license in a recipe.
Here is an example that uses the ``LICENSE.Abilis.txt`` file as
the license from the fetched source::
NO_GENERIC_LICENSE[Firmware-Abilis] = "LICENSE.Abilis.txt"
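In the recipe itself, such a license is then referenced as usual through
:term:`LICENSE` and :term:`LIC_FILES_CHKSUM`. A minimal sketch (checksum
elided)::

   LICENSE = "Firmware-Abilis"
   LIC_FILES_CHKSUM = "file://LICENSE.Abilis.txt;md5=xxxx"
   NO_GENERIC_LICENSE[Firmware-Abilis] = "LICENSE.Abilis.txt"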

View File

@@ -1,118 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Adding a New Machine
********************
Adding a new machine to the Yocto Project is a straightforward process.
This section describes how to add machines that are similar to those
that the Yocto Project already supports.
.. note::
Although well within the capabilities of the Yocto Project, adding a
totally new architecture might require changes to ``gcc``/``glibc``
and to the site information, which is beyond the scope of this
manual.
For a complete example that shows how to add a new machine, see the
":ref:`bsp-guide/bsp:creating a new bsp layer using the \`\`bitbake-layers\`\` script`"
section in the Yocto Project Board Support Package (BSP) Developer's
Guide.
Adding the Machine Configuration File
=====================================
To add a new machine, you need to add a new machine configuration file
to the layer's ``conf/machine`` directory. This configuration file
provides details about the device you are adding.
The OpenEmbedded build system uses the root name of the machine
configuration file to reference the new machine. For example, given a
machine configuration file named ``crownbay.conf``, the build system
recognizes the machine as "crownbay".
The most important variables you must set in your machine configuration
file or include from a lower-level configuration file are as follows:
- :term:`TARGET_ARCH` (e.g. "arm")
- ``PREFERRED_PROVIDER_virtual/kernel``
- :term:`MACHINE_FEATURES` (e.g. "apm screen wifi")
You might also need these variables:
- :term:`SERIAL_CONSOLES` (e.g. "115200;ttyS0 115200;ttyS1")
- :term:`KERNEL_IMAGETYPE` (e.g. "zImage")
- :term:`IMAGE_FSTYPES` (e.g. "tar.gz jffs2")
You can find full details on these variables in the reference section.
You can leverage existing machine ``.conf`` files from
``meta-yocto-bsp/conf/machine/``.
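Putting these variables together, a minimal machine configuration file might
look like the following sketch (the machine name and all values are
hypothetical and depend on your hardware)::

   # conf/machine/mymachine.conf
   TARGET_ARCH = "arm"
   PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
   MACHINE_FEATURES = "apm screen wifi"
   SERIAL_CONSOLES = "115200;ttyS0"
   KERNEL_IMAGETYPE = "zImage"
   IMAGE_FSTYPES = "tar.gz jffs2"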
Adding a Kernel for the Machine
===============================
The OpenEmbedded build system needs to be able to build a kernel for the
machine. You need to either create a new kernel recipe for this machine,
or extend an existing kernel recipe. You can find several kernel recipe
examples in the Source Directory at ``meta/recipes-kernel/linux`` that
you can use as references.
If you are creating a new kernel recipe, normal recipe-writing rules
apply for setting up a :term:`SRC_URI`. Thus, you need to specify any
necessary patches and set :term:`S` to point at the source code. You need to
create a :ref:`ref-tasks-configure` task that configures the unpacked kernel with
a ``defconfig`` file. You can do this by using a ``make defconfig``
command or, more commonly, by copying in a suitable ``defconfig`` file
and then running ``make oldconfig``. By making use of ``inherit kernel``
and potentially some of the ``linux-*.inc`` files, most other
functionality is centralized and the defaults of the class normally work
well.
If you are extending an existing kernel recipe, it is usually a matter
of adding a suitable ``defconfig`` file. The file needs to be added into
a location similar to ``defconfig`` files used for other machines in a
given kernel recipe. A possible way to do this is by listing the file in
the :term:`SRC_URI` and adding the machine to the expression in
:term:`COMPATIBLE_MACHINE`::
COMPATIBLE_MACHINE = '(qemux86|qemumips)'
For more information on ``defconfig`` files, see the
":ref:`kernel-dev/common:changing the configuration`"
section in the Yocto Project Linux Kernel Development Manual.
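As a sketch, a ``.bbappend`` for an existing kernel recipe that adds a
machine-specific ``defconfig`` might look as follows (the file and machine
names are hypothetical)::

   # linux-yocto_%.bbappend
   FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
   SRC_URI += "file://defconfig"
   COMPATIBLE_MACHINE = '(qemux86|qemumips|mymachine)'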
Adding a Formfactor Configuration File
======================================
A formfactor configuration file provides information about the target
hardware for which the image is being built and information that the
build system cannot obtain from other sources such as the kernel. Some
examples of information contained in a formfactor configuration file
include framebuffer orientation, whether or not the system has a
keyboard, the positioning of the keyboard in relation to the screen, and
the screen resolution.
The build system uses reasonable defaults in most cases. However, if
customization is necessary, you need to create a ``machconfig`` file in
the ``meta/recipes-bsp/formfactor/files`` directory. This directory
contains directories for specific machines such as ``qemuarm`` and
``qemux86``. For information about the settings available and the
defaults, see the ``meta/recipes-bsp/formfactor/files/config`` file
found in the same area.
Following is an example for the "qemuarm" machine::
HAVE_TOUCHSCREEN=1
HAVE_KEYBOARD=1
DISPLAY_CAN_ROTATE=0
DISPLAY_ORIENTATION=0
#DISPLAY_WIDTH_PIXELS=640
#DISPLAY_HEIGHT_PIXELS=480
#DISPLAY_BPP=16
DISPLAY_DPI=150
DISPLAY_SUBPIXEL_ORDER=vrgb

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,209 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Working with Pre-Built Libraries
********************************
Introduction
============
Some library vendors do not release source code for their software but do
release pre-built binaries. When shared libraries are built, they should
be versioned (see `this article
<https://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html>`__
for some background), but sometimes this is not done.
To summarize, a versioned library must meet two conditions:
#. The filename must have the version appended, for example: ``libfoo.so.1.2.3``.
#. The library must have the ELF tag ``SONAME`` set to the major version
of the library, for example: ``libfoo.so.1``. You can check this by
running ``readelf -d filename | grep SONAME``.
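For example, checking a correctly versioned library might look like this
(the output shown is illustrative)::

   $ readelf -d libfoo.so.1.2.3 | grep SONAME
    0x000000000000000e (SONAME)             Library soname: [libfoo.so.1]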
This section shows how to deal with both versioned and unversioned
pre-built libraries.
Versioned Libraries
===================
In this example we work with pre-built libraries for the FT4222H USB I/O chip.
Libraries are built for several target architecture variants and packaged in
an archive as follows::
├── build-arm-hisiv300
│   └── libft4222.so.1.4.4.44
├── build-arm-v5-sf
│   └── libft4222.so.1.4.4.44
├── build-arm-v6-hf
│   └── libft4222.so.1.4.4.44
├── build-arm-v7-hf
│   └── libft4222.so.1.4.4.44
├── build-arm-v8
│   └── libft4222.so.1.4.4.44
├── build-i386
│   └── libft4222.so.1.4.4.44
├── build-i486
│   └── libft4222.so.1.4.4.44
├── build-mips-eglibc-hf
│   └── libft4222.so.1.4.4.44
├── build-pentium
│   └── libft4222.so.1.4.4.44
├── build-x86_64
│   └── libft4222.so.1.4.4.44
├── examples
│   ├── get-version.c
│   ├── i2cm.c
│   ├── spim.c
│   └── spis.c
├── ftd2xx.h
├── install4222.sh
├── libft4222.h
├── ReadMe.txt
└── WinTypes.h
To write a recipe to use such a library in your system:
- The vendor will probably have a proprietary license, so set
:term:`LICENSE_FLAGS` in your recipe.
- The vendor provides a tarball containing libraries so set :term:`SRC_URI`
appropriately.
- Set :term:`COMPATIBLE_HOST` so that the recipe cannot be used with an
unsupported architecture. In the following example, we only support the
32- and 64-bit variants of the ``x86`` architecture.
- As the vendor provides versioned libraries, we can use ``oe_soinstall``
from :ref:`ref-classes-utils` to install the shared library and create
symbolic links. If the vendor does not do this, we need to follow the
non-versioned library guidelines in the next section.
- As the vendor likely used :term:`LDFLAGS` different from those in your Yocto
Project build, disable the corresponding checks by adding ``ldflags``
to :term:`INSANE_SKIP`.
- The vendor will typically ship release builds without debugging symbols.
Avoid errors by preventing the packaging task from stripping out the symbols
and adding them to a separate debug package. This is done by setting the
``INHIBIT_`` flags shown below.
The complete recipe would look like this::
SUMMARY = "FTDI FT4222H Library"
SECTION = "libs"
LICENSE_FLAGS = "ftdi"
LICENSE = "CLOSED"
COMPATIBLE_HOST = "(i.86|x86_64).*-linux"
# Sources available as a .tgz file inside a .zip archive
# at https://ftdichip.com/wp-content/uploads/2021/01/libft4222-linux-1.4.4.44.zip
# Found on https://ftdichip.com/software-examples/ft4222h-software-examples/
# Since dealing with this particular type of archive is beyond the scope of
# this section, we use a local file.
SRC_URI = "file://libft4222-linux-${PV}.tgz"
S = "${WORKDIR}"
ARCH_DIR:x86-64 = "build-x86_64"
ARCH_DIR:i586 = "build-i386"
ARCH_DIR:i686 = "build-i386"
INSANE_SKIP:${PN} = "ldflags"
INHIBIT_PACKAGE_STRIP = "1"
INHIBIT_SYSROOT_STRIP = "1"
INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
do_install () {
install -m 0755 -d ${D}${libdir}
oe_soinstall ${S}/${ARCH_DIR}/libft4222.so.${PV} ${D}${libdir}
install -d ${D}${includedir}
install -m 0755 ${S}/*.h ${D}${includedir}
}
If the precompiled binaries are not statically linked and have dependencies on
other libraries, then by adding those libraries to :term:`DEPENDS`, the linking
can be examined and the appropriate :term:`RDEPENDS` automatically added.
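For instance, if inspection of the binary (e.g. with ``readelf -d``) showed a
dependency on ``libusb``, a hedged addition to the recipe above would be::

   DEPENDS += "libusb1"

The dependency named here is purely an assumption for illustration; use
whatever libraries the vendor binary actually links against.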
Non-Versioned Libraries
=======================
Some Background
---------------
Libraries in Linux systems are generally versioned so that it is possible
to have multiple versions of the same library installed, which eases upgrades
and support for older software. For example, suppose that in a versioned
library, an actual library is called ``libfoo.so.1.2``, a symbolic link named
``libfoo.so.1`` points to ``libfoo.so.1.2``, and a symbolic link named
``libfoo.so`` points to ``libfoo.so.1.2``. Given these conditions, when you
link a binary against a library, you typically provide the unversioned file
name (i.e. ``-lfoo`` to the linker). However, the linker follows the symbolic
link and actually links against the versioned filename. The unversioned symbolic
link is only used at development time. Consequently, the library is packaged
along with the headers in the development package ``${PN}-dev`` along with the
actual library and versioned symbolic links in ``${PN}``. Because versioned
libraries are far more common than unversioned libraries, the default packaging
rules assume versioned libraries.
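As an illustration, the on-disk layout for the hypothetical ``libfoo`` above
would be::

   libfoo.so -> libfoo.so.1.2      development symlink, packaged in ${PN}-dev
   libfoo.so.1 -> libfoo.so.1.2    runtime symlink, packaged in ${PN}
   libfoo.so.1.2                   actual library, packaged in ${PN}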
Yocto Library Packaging Overview
--------------------------------
It follows that packaging an unversioned library requires a bit of work in the
recipe. By default, ``libfoo.so`` gets packaged into ``${PN}-dev``, which
triggers a QA warning that a non-symlink library is in a ``-dev`` package,
and binaries in the same recipe link to the library in ``${PN}-dev``,
which triggers more QA warnings. To solve this problem, you need to package the
unversioned library into ``${PN}`` where it belongs. The following are the abridged
default :term:`FILES` variables in ``bitbake.conf``::
SOLIBS = ".so.*"
SOLIBSDEV = ".so"
FILES:${PN} = "... ${libdir}/lib*${SOLIBS} ..."
FILES_SOLIBSDEV ?= "... ${libdir}/lib*${SOLIBSDEV} ..."
FILES:${PN}-dev = "... ${FILES_SOLIBSDEV} ..."
:term:`SOLIBS` defines a pattern that matches real shared object libraries.
:term:`SOLIBSDEV` matches the development form (unversioned symlink). These two
variables are then used in ``FILES:${PN}`` and ``FILES:${PN}-dev``, which puts
the real libraries into ``${PN}`` and the unversioned symbolic link into ``${PN}-dev``.
To package unversioned libraries, you need to modify the variables in the recipe
as follows::
SOLIBS = ".so"
FILES_SOLIBSDEV = ""
The modifications cause the ``.so`` file to be the real library
and unset :term:`FILES_SOLIBSDEV` so that no libraries get packaged into
``${PN}-dev``. The changes are required because unless :term:`PACKAGES` is changed,
``${PN}-dev`` collects files before ``${PN}``. ``${PN}-dev`` must not collect any of
the files you want in ``${PN}``.
Finally, loadable modules, essentially unversioned libraries that are linked
at runtime using ``dlopen()`` instead of at build time, should generally be
installed in a private directory. However, if they are installed in ``${libdir}``,
then the modules can be treated as unversioned libraries.
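If you do install such modules into a private directory, you may need to
extend :term:`FILES` so they end up in the runtime package. A hedged sketch,
assuming an application named "myapp" with a hypothetical plugin directory::

   FILES:${PN} += "${libdir}/myapp/*.so"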
Example
-------
The example below installs an unversioned x86-64 pre-built library named
``libfoo.so``. The :term:`COMPATIBLE_HOST` variable limits the recipe to the
x86-64 architecture while the :term:`INSANE_SKIP`, :term:`INHIBIT_PACKAGE_STRIP`
and :term:`INHIBIT_SYSROOT_STRIP` variables are all set as in the above
versioned library example. The "magic" is setting the :term:`SOLIBS` and
:term:`FILES_SOLIBSDEV` variables as explained above::
SUMMARY = "libfoo sample recipe"
SECTION = "libs"
LICENSE = "CLOSED"
SRC_URI = "file://libfoo.so"
COMPATIBLE_HOST = "x86_64.*-linux"
INSANE_SKIP:${PN} = "ldflags"
INHIBIT_PACKAGE_STRIP = "1"
INHIBIT_SYSROOT_STRIP = "1"
SOLIBS = ".so"
FILES_SOLIBSDEV = ""
do_install () {
install -d ${D}${libdir}
install -m 0755 ${WORKDIR}/libfoo.so ${D}${libdir}
}

View File

@@ -1,50 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Using a Python Development Shell
********************************
Similar to working within a development shell as described in the
previous section, you can also spawn and work within an interactive
Python development shell. When debugging certain commands or even when
just editing packages, ``pydevshell`` can be a useful tool. When you
invoke the ``pydevshell`` task, all tasks up to and including
:ref:`ref-tasks-patch` are run for the
specified target. Then a new terminal is opened. Additionally, key
Python objects and code are available in the same way they are to
BitBake tasks, in particular, the data store 'd'. So, commands such as
the following are useful when exploring the data store and running
functions::
pydevshell> d.getVar("STAGING_DIR")
'/media/build1/poky/build/tmp/sysroots'
pydevshell> d.getVar("STAGING_DIR", False)
'${TMPDIR}/sysroots'
pydevshell> d.setVar("FOO", "bar")
pydevshell> d.getVar("FOO")
'bar'
pydevshell> d.delVar("FOO")
pydevshell> d.getVar("FOO")
pydevshell> bb.build.exec_func("do_unpack", d)
pydevshell>
See the ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:functions you can call from within python`"
section in the BitBake User Manual for details about available functions.
The commands execute just as if the OpenEmbedded build
system were executing them. Consequently, working this way can be
helpful when debugging a build or preparing software to be used with the
OpenEmbedded build system.
Following is an example that uses ``pydevshell`` on a target named
``matchbox-desktop``::
$ bitbake matchbox-desktop -c pydevshell
This command spawns a terminal and places you in an interactive Python
interpreter within the OpenEmbedded build environment. The
:term:`OE_TERMINAL` variable
controls what type of shell is opened.
When you are finished using ``pydevshell``, you can exit the shell
either by using Ctrl+d or closing the terminal window.

View File

@@ -44,13 +44,13 @@ To use QEMU, you need to have QEMU installed and initialized as well as
have the proper artifacts (i.e. image files and root filesystems)
available. Follow these general steps to run QEMU:
#. *Install QEMU:* QEMU is made available with the Yocto Project a
1. *Install QEMU:* QEMU is made available with the Yocto Project a
number of ways. One method is to install a Software Development Kit
(SDK). See ":ref:`sdk-manual/intro:the qemu emulator`" section in the
Yocto Project Application Development and the Extensible Software
Development Kit (eSDK) manual for information on how to install QEMU.
#. *Setting Up the Environment:* How you set up the QEMU environment
2. *Setting Up the Environment:* How you set up the QEMU environment
depends on how you installed QEMU:
- If you cloned the ``poky`` repository or you downloaded and
@@ -66,7 +66,7 @@ available. Follow these general steps to run QEMU:
. poky_sdk/environment-setup-core2-64-poky-linux
#. *Ensure the Artifacts are in Place:* You need to be sure you have a
3. *Ensure the Artifacts are in Place:* You need to be sure you have a
pre-built kernel that will boot in QEMU. You also need the target
root filesystem for your target machine's architecture:
@@ -84,7 +84,7 @@ available. Follow these general steps to run QEMU:
Extensible Software Development Kit (eSDK) manual for information on
how to extract a root filesystem.
#. *Run QEMU:* The basic ``runqemu`` command syntax is as follows::
4. *Run QEMU:* The basic ``runqemu`` command syntax is as follows::
$ runqemu [option ] [...]
@@ -99,13 +99,12 @@ available. Follow these general steps to run QEMU:
Here are some additional examples to help illustrate further QEMU:
- This example starts QEMU with MACHINE set to "qemux86-64".
Assuming a standard :term:`Build Directory`, ``runqemu``
Assuming a standard
:term:`Build Directory`, ``runqemu``
automatically finds the ``bzImage-qemux86-64.bin`` image file and
the ``core-image-minimal-qemux86-64-20200218002850.rootfs.ext4``
(assuming the current build created a ``core-image-minimal``
image)::
$ runqemu qemux86-64
image).
.. note::
@@ -113,31 +112,38 @@ available. Follow these general steps to run QEMU:
and uses the most recently built image according to the
timestamp.
::
$ runqemu qemux86-64
- This example produces the exact same results as the previous
example. This command, however, specifically provides the image
and root filesystem type::
and root filesystem type.
::
$ runqemu qemux86-64 core-image-minimal ext4
- This example specifies to boot an :term:`Initramfs` image and to
enable audio in QEMU. For this case, ``runqemu`` sets the internal
variable ``FSTYPE`` to ``cpio.gz``. Also, for audio to be enabled,
an appropriate driver must be installed (see the ``audio`` option
in :ref:`dev-manual/qemu:\`\`runqemu\`\` command-line options`
for more information)::
an appropriate driver must be installed (see the previous
description for the ``audio`` option for more information).
::
$ runqemu qemux86-64 ramfs audio
- This example does not provide enough information for QEMU to
launch. While the command does provide a root filesystem type, it
must also minimally provide a `MACHINE`, `KERNEL`, or `VM` option::
must also minimally provide a `MACHINE`, `KERNEL`, or `VM` option.
::
$ runqemu ext4
- This example specifies to boot a virtual machine image
(``.wic.vmdk`` file). From the ``.wic.vmdk``, ``runqemu``
determines the QEMU architecture (`MACHINE`) to be "qemux86-64" and
the root filesystem type to be "vmdk"::
the root filesystem type to be "vmdk".
::
$ runqemu /home/scott-lenovo/vm/core-image-minimal-qemux86-64.wic.vmdk
@@ -184,7 +190,7 @@ the system does not need root privileges to run. It uses a user space
NFS server to avoid that. Follow these steps to set up for running QEMU
using an NFS server.
#. *Extract a Root Filesystem:* Once you are able to run QEMU in your
1. *Extract a Root Filesystem:* Once you are able to run QEMU in your
environment, you can use the ``runqemu-extract-sdk`` script, which is
located in the ``scripts`` directory along with the ``runqemu``
script.
@@ -198,7 +204,7 @@ using an NFS server.
runqemu-extract-sdk ./tmp/deploy/images/qemux86-64/core-image-sato-qemux86-64.tar.bz2 test-nfs
#. *Start QEMU:* Once you have extracted the file system, you can run
2. *Start QEMU:* Once you have extracted the file system, you can run
``runqemu`` normally with the additional location of the file system.
You can then also make changes to the files within ``./test-nfs`` and
see those changes appear in the image in real time. Here is an
@@ -240,10 +246,11 @@ be a problem when QEMU is running with KVM enabled. Specifically,
software compiled with a certain CPU feature crashes when run on a CPU
under KVM that does not support that feature. To work around this
problem, you can override QEMU's runtime CPU setting by changing the
``QB_CPU_KVM`` variable in ``qemuboot.conf`` in the :term:`Build Directory`
``deploy/image`` directory. This setting specifies a ``-cpu`` option passed
into QEMU in the ``runqemu`` script. Running ``qemu -cpu help`` returns a
list of available supported CPU types.
``QB_CPU_KVM`` variable in ``qemuboot.conf`` in the
:term:`Build Directory` ``deploy/image``
directory. This setting specifies a ``-cpu`` option passed into QEMU in
the ``runqemu`` script. Running ``qemu -cpu help`` returns a list of
available supported CPU types.
QEMU Performance
================
@@ -323,7 +330,7 @@ Following is the command-line help output for the ``runqemu`` command::
Simplified QEMU command-line options can be passed with:
nographic - disable video console
serial - enable a serial console on /dev/ttyS0
slirp - enable user networking, no root privileges required
slirp - enable user networking, no root privileges is required
kvm - enable KVM when running x86/x86_64 (VT-capable CPU required)
kvm-vhost - enable KVM with vhost when running x86/x86_64 (VT-capable CPU required)
publicvnc - enable a VNC server open to all hosts
@@ -421,29 +428,6 @@ command line:
networking that does not need root access but also is not as easy to
use or comprehensive as the default.
Using ``slirp`` by default will forward the guest machine's
22 and 23 TCP ports to host machine's 2222 and 2323 ports
(or the next free ports). Specific forwarding rules can be configured
by setting ``QB_SLIRP_OPT`` as environment variable or in ``qemuboot.conf``
in the :term:`Build Directory` ``deploy/image`` directory.
Examples::
QB_SLIRP_OPT="-netdev user,id=net0,hostfwd=tcp::8080-:80"
QB_SLIRP_OPT="-netdev user,id=net0,hostfwd=tcp::8080-:80,hostfwd=tcp::2222-:22"
The first example forwards TCP port 80 from the emulated system to
port 8080 (or the next free port) on the host system,
allowing access to an http server running in QEMU from
``http://<host ip>:8080/``.
The second example does the same, but also forwards TCP port 22 on the
guest system to 2222 (or the next free port) on the host system,
allowing ssh access to the emulated system using
``ssh -p 2222 <user>@<host ip>``.
Keep in mind that proper configuration of firewall software is required.
- ``kvm``: Enables KVM when running "qemux86" or "qemux86-64" QEMU
architectures. For KVM to work, all the following conditions must be
met:

View File

@@ -1,89 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
Using Quilt in Your Workflow
****************************
`Quilt <https://savannah.nongnu.org/projects/quilt>`__ is a powerful tool
that allows you to capture source code changes without having a clean
source tree. This section outlines the typical workflow you can use to
modify source code, test changes, and then preserve the changes in the
form of a patch all using Quilt.
.. note::
With regard to preserving changes to source files, if you clean a
recipe or have :ref:`ref-classes-rm-work` enabled, the
:ref:`devtool workflow <sdk-manual/extensible:using \`\`devtool\`\` in your sdk workflow>`
as described in the Yocto Project Application Development and the
Extensible Software Development Kit (eSDK) manual is a safer
development flow than the flow that uses Quilt.
Follow these general steps:
#. *Find the Source Code:* Temporary source code used by the
OpenEmbedded build system is kept in the :term:`Build Directory`. See the
":ref:`dev-manual/temporary-source-code:finding temporary source code`" section to
learn how to locate the directory that has the temporary source code for a
particular package.
#. *Change Your Working Directory:* You need to be in the directory that
has the temporary source code. That directory is defined by the
:term:`S` variable.
#. *Create a New Patch:* Before modifying source code, you need to
create a new patch. To create a new patch file, use ``quilt new`` as
below::
$ quilt new my_changes.patch
#. *Notify Quilt and Add Files:* After creating the patch, you need to
notify Quilt about the files you plan to edit. You notify Quilt by
adding the files to the patch you just created::
$ quilt add file1.c file2.c file3.c
#. *Edit the Files:* Make your changes in the source code to the files
you added to the patch.
#. *Test Your Changes:* Once you have modified the source code, the
easiest way to test your changes is by calling the :ref:`ref-tasks-compile`
task as shown in the following example::
$ bitbake -c compile -f package
The ``-f`` or ``--force`` option forces the specified task to
execute. If you find problems with your code, you can just keep
editing and re-testing iteratively until things work as expected.
.. note::
All the modifications you make to the temporary source code disappear
once you run the :ref:`ref-tasks-clean` or :ref:`ref-tasks-cleanall`
tasks using BitBake (i.e. ``bitbake -c clean package`` and
``bitbake -c cleanall package``). Modifications will also disappear if
you use the :ref:`ref-classes-rm-work` feature as described in
the ":ref:`dev-manual/disk-space:conserving disk space during builds`"
section.
#. *Generate the Patch:* Once your changes work as expected, you need to
use Quilt to generate the final patch that contains all your
modifications::
$ quilt refresh
At this point, the
``my_changes.patch`` file has all your edits made to the ``file1.c``,
``file2.c``, and ``file3.c`` files.
You can find the resulting patch file in the ``patches/``
subdirectory of the source (:term:`S`) directory.
#. *Copy the Patch File:* For simplicity, copy the patch file into a
directory named ``files``, which you can create in the same directory
that holds the recipe (``.bb``) file or the append (``.bbappend``)
file. Placing the patch here guarantees that the OpenEmbedded build
system will find the patch. Next, add the patch into the :term:`SRC_URI`
of the recipe. Here is an example::
SRC_URI += "file://my_changes.patch"

Some files were not shown because too many files have changed in this diff