Compare commits

130 Commits

Author SHA1 Message Date
Richard Purdie
05a8aad57c documentation: prepare for 3.3.1 release
Include update to previous releases.

(From yocto-docs rev: eb19a2b5687f11c22c7fc26d3efabbf65adb572e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-17 09:47:53 +01:00
Richard Purdie
11e25e2fec build-appliance-image: Update to hardknott head revision
(From OE-Core rev: efce6334bf122a64f63d46c1c04e3dbffe298c51)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-17 09:42:25 +01:00
Richard Purdie
96e8fcd6a2 poky.conf: Bump version for 3.3.1 hardknott release
(From meta-yocto rev: 308d0262a8100d68d3f4e86b4f35ba05b5dc5356)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-17 09:40:39 +01:00
Richard Purdie
d3fd4f6154 puzzles: Upstream changed to main branch for development
(From OE-Core rev: 1cf4d3f44191c3fc2cb4d056b38f98fae4e8b8e1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 943402b25212408a4ddcfa8a146b645509e138dd)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Yann Dirson
2ab37d09cf linux-firmware: include all relevant files in -bcm4356
This currently catches the .clb_blob and .vamrs,rock960.txt files;
other .txt files may come in future upstream releases.

(From OE-Core rev: 68647eccaf817287df17d5a247b3caf7df9f6840)

Signed-off-by: Yann Dirson <yann@blade-group.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e332738a8aae0914c58b40faae8b9d7a82fd6a95)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Anuj Mittal
32f185b0cf lsb-release: fix reproducibility failure
Make sure help2man output is reproducible. Fixes:

| .\"·DO·NOT·MODIFY·THIS·FILE!··It·was·generated·by·help2man·1.022.	.\"·DO·NOT·MODIFY·THIS·FILE!··It·was·generated·by·help2man·1.022.
| .TH·FSG·"1"·"April·2021"·"FSG·lsb_release·v1.4"·FSG	.TH·FSG·"1"·"May·2021"·"FSG·lsb_release·v1.4"·FSG
| .SH·NAME	.SH·NAME

(From OE-Core rev: e73898b59eb79d20082963e629ce6f8cc75103c9)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 49371207a7f1fe3d3feb7b8b9aabb62b43ae34d1)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
zhengruoqin
81aabb718e ruby: upgrade 3.0.0 -> 3.0.1
(From OE-Core rev: 9fde0b5121b6cda894ef761a526fa4feced02d5f)

Signed-off-by: Zheng Ruoqin <zhengrq.fnst@fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b6949a028fd31bd04ed0478fb34a58b971f31e1f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Kai Kang
7ef7ee5247 grub2.inc: remove '-O2' from CFLAGS
Grub fails to boot after upgrading grub to 2.06. According to the
description in

https://bugzilla.yoctoproject.org/show_bug.cgi?id=14367

the failure was introduced by a commit that fixes a CVE. So remove the
'-O2' option from CFLAGS, rather than reverting that commit, to avoid
the failure.

[YOCTO #14367]

CC: Tony Battersby <tonyb@cybernetics.com>
(From OE-Core rev: 7520bd4f72d550052774042c542a3d3ee874b363)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 69805629b8f47fd46a37b7c5cc435982e2ac3d1d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Romain Naour
d39350cf1f dejagnu: needs expect at runtime
runtest returns an error due to missing expect on the target.
Add expect as a runtime dependency.

(From OE-Core rev: 9dc044fdbd20085dfa99fd4a7189763365334ede)

Signed-off-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d9a3a08edc1efcbe7b02e80be98370792d3c6cc2)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Peter Kjellerstedt
6b5baa4a29 libcap: Configure Make variables correctly without a horrible hack
Occasionally, the build would fail with:

  make[2]: execvp: mkdir: Argument list too long

This turned out to be due to a hacky solution used in the recipe to
modify the Makefile, which resulted in one more $(BUILD_CFLAGS) being
added to the immediately expanded BUILD_CFLAGS Make variable each time
do_configure was executed. After a couple of times, this led to an
environment with a 140 kB BUILD_CFLAGS by the time mkdir was executed,
which resulted in the E2BIG error.

(From OE-Core rev: 44900610bea76ab8983a899599f78790f6c5f659)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 116e6b61c585c6f0f7ae6f010bd490bb39914348)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Vinícius Ossanes Aquino
900aae50c9 lttng-modules: backport patches to fix build against 5.12+ kernel
Add the following patches from the stable-2.12 branch of the lttng
repository to fix errors when building lttng-modules against 5.12+
kernels, since they are not present in the 2.12.5 release:

- 17cd2dc9 fix: block: add a disk_uevent helper (v5.12)
- 127135b6 fix backport: block: add a disk_uevent helper (v5.12)
- 853d5903 fix: mm, tracing: kfree event name mismatching with
provider kmem (v5.12)

(From OE-Core rev: 86bcab9e9f4ee5e06f7db8c75d4b983fd2be59d2)

Signed-off-by: Vinicius Aquino <vinicius.aquino@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2538ba2b3490e3599d9ccd637aa8486ea428f1b0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Bruce Ashfield
337fa11044 linux-yocto/5.4: qemuppc32: reduce serial shutdown issues
Integrating the following commit(s) to linux-yocto/5.4:

    qemuppc32: reduce serial issues seen on shutdown

    Richard reported:

    We've been seeing a lot of the qemuppc shutdown issue and I decided to
    look into it. The really worrying thing in the local logs is that the
    serial ports are showing IRQ issues and becoming disabled because
    nobody would handle them.

    Errors like:

       [    9.194886] irq 36: nobody cared (try booting with the "irqpoll" option)
       [    9.198712] CPU: 0 PID: 127 Comm: bootlogd Not tainted
       [    9.202283] Call Trace:
       [    9.205611] [d1005f00] [c00a0da8] __report_bad_irq+0x50/0x138 (unreliable)
       [    9.209347] [d1005f30] [c00a0cc0] note_interrupt+0x324/0x378
       [    9.212855] [d1005f70] [c009d138] handle_irq_event+0xe8/0x104
       [    9.216353] [d1005fa0] [c00a1d9c] handle_fasteoi_irq+0xc0/0x29c
       [    9.219960] [d1005fc0] [c009b798] generic_handle_irq+0x40/0x5c
       [    9.223496] [d1005fd0] [c00075d0] __do_irq+0x58/0x188
       [    9.226948] [d1005ff0] [c0010040] call_do_irq+0x20/0x38
       [    9.230391] [d29eda60] [c0007788] do_IRQ+0x88/0xfc
       [    9.233860] [d29eda90] [c0016454] ret_from_except+0x0/0x14
       [    9.237288] --- interrupt: 501 at __setup_irq+0x3c4/0x838
       [    9.237288]     LR = __setup_irq+0x790/0x838
       [    9.244155] [d29edb88] [c009f0a4] request_threaded_irq+0x114/0x1c8
       [    9.247672] [d29edbb8] [c07a5a18] pmz_startup+0x17c/0x32c
       [    9.251203] [d29edbd8] [c07a1140] uart_port_startup+0x184/0x2f8
       [    9.254651] [d29edc08] [c07a1974] uart_port_activate+0x78/0xf4
       [    9.258141] [d29edc28] [c07839f8] tty_port_open+0xd4/0x170
       [    9.261579] [d29edc58] [c079db74] uart_open+0x2c/0x48
       [    9.265116] [d29edc68] [c077a288] tty_open+0x168/0x640
       [    9.268574] [d29edcd8] [c0280be8] chrdev_open+0x138/0x2a4
       [    9.272123] [d29edd18] [c027421c] do_dentry_open+0x228/0x410
       [    9.275643] [d29edd48] [c028e9f4] path_openat+0xb04/0xf28
       [    9.279184] [d29eddd8] [c02917e4] do_filp_open+0x120/0x164
       [    9.282535] [d29ede98] [c0276238] do_sys_openat2+0xd8/0x19c
       [    9.285790] [d29edee8] [c0276574] sys_openat+0x88/0xdc
       [    9.289096] [d29edf38] [c00160d8] ret_from_syscall+0x0/0x34
       [    9.292620] --- interrupt: c01 at 0xfec3738
       [    9.292620]     LR = 0xfec36e0
       [    9.299035] handlers:
       [    9.302312] [<7f7f7da8>] pmz_interrupt
       [    9.305541] Disabling IRQ #36

    (and the irqpoll option does not help)

    This is problematic as the shutdown test uses the serial interface to
    shut down the system. If the serial interface fails to log in or run
    the command, it's game over for the test.

    CONFIG_SERIAL_PMACZILOG_CONSOLE complicates that handling, but doesn't provide
    any output or capabilities that we need. So we disable it here, and
    reduce the chances of issues during shutdown.

(From OE-Core rev: aca5873e830d3b66f00cad4fa03982cc4ec5b445)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 42355cb73049ee7a4af0f539a2a5b7d4ee1abc65)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Alexander Kanavin
3b39e2801d linux-firmware: upgrade 20210208 -> 20210315
License-Update: additional firmware files, version changes

(From OE-Core rev: 132014a299053b84f79611827d8d0eb88fb91275)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2f10b9dbb4fb8ccb9a427883370fbbeb6f394551)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Chen Qi
2c96b17e0d db: update CVE_PRODUCT
Update CVE_PRODUCT to also include 'berkeley_db'. For example,
CVE-2020-2981 uses 'berkeley_db'.

(From OE-Core rev: b5004de05327c734d63cfac153ebf1542f9177c9)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ad799b109716ccd2f44dcf7a6a4cfcbd622ea661)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Richard Purdie
4634aca5c2 oeqa/qemurunner: Improve handling of run_serial for shutdown commands
When running a shutdown command, the serial port can close without the
command returning. This is seen as the socket being readable but having
no data. Change the way this case is handled in the code to avoid
tracebacks.
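
A minimal sketch of the pattern (illustrative only, not the actual
qemurunner code): a socket that select() reports readable but whose
recv() returns no data has been closed by the peer, and should be
treated as end-of-stream rather than as an error.

    import select

    def read_serial_once(sock, timeout=5):
        ready = select.select([sock], [], [], timeout)[0]
        if not ready:
            return None   # timeout: nothing to read yet
        data = sock.recv(1024)
        if not data:
            # Readable but empty: the serial port closed (e.g. during
            # shutdown). Return cleanly instead of raising.
            return b''
        return data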

(From OE-Core rev: a72572532b976a4c3e8fa68fe63f63e39399ee88)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 396a3ba884820d040c91f7592daf20ac28c49b5d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Richard Purdie
2f897b26bb oeqa/qemurunner: Fix binary vs str issue
The recent logging changes for qemurunner showed up as errors on the
autobuilder, where decode couldn't be called on the returned string.
Since the code returns binary data, return b'' instead of '' to match,
avoiding the tracebacks.

One of these cases was newly added; the other, from which it was
copied, has been there for a long time and was always broken.
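
For illustration (a hypothetical helper, not the actual patch): when
callers decode the returned value, the empty fallback must be bytes.

    def drain(sock):
        data = b''   # '' here would break callers doing data.decode()
        try:
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                data += chunk
        except BlockingIOError:
            pass
        return data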

(From OE-Core rev: 000feb98ff99e74d6118fc3f53330b8e975923d9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b8995b27db265b0a0b2d2ca595915f70f9f96e07)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Richard Purdie
6ad4febbcd oeqa/qemurunner: Improve logging thread exit handling for qemu shutdown test
Rather than totally disabling the logging, inform it that we're about
to exit so we can log messages cleanly over the exit too. This aids
debugging. It also avoids a race where the logging handler could still
error whilst shutting down.

Also remove a race window by notifying the handler of the shutdown
first, before triggering it. This removes a race window observed in
local testing.
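
A sketch of the ordering described above (hypothetical names, not the
qemurunner implementation): set the shutdown flag before triggering the
exit, so the handler never sees the exit as an error.

    import threading

    exiting = threading.Event()

    def stop_logging(trigger_exit):
        exiting.set()     # notify first...
        trigger_exit()    # ...then trigger: no race window in between

    def on_stream_error(exc):
        if exiting.is_set():
            return        # expected while shutting down; keep logging
        raise exc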

(From OE-Core rev: 7f931dce4484a2740b419b2d25830fc453748a0c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0e19f31a1005f94105e1cef252abfffcef2aafad)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Michael Opdenacker
6f1d6448c5 sanity.bbclass: mention CONNECTIVITY_CHECK_URIS in network failure message
This expands the error message shown when a network failure is
detected. It happens that some ISPs or networks block the default
example.com domain. Therefore, instead of having users disable the
network check, the message lets them know how to modify the test URL.

(From OE-Core rev: f54eaf65ff549a98ff98157d6b3aa48f9adc9ca5)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 62c94bb925543c1e1c5af3c751913d9f06d9597d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-15 17:18:21 +01:00
Yi Fan Yu
641848ab75 libevent: Increase ptest timing tolerance 50 ms -> 100 ms
Adjust the tolerance to a more reasonable time given the load on the
autobuilder and the high number (100) of events that some of the tests,
such as `common_timeout`, generate.

[YOCTO #14163]

(From OE-Core rev: d5d88c2293e8ebc958d1bce9af8f796024443be9)

Signed-off-by: Yi Fan Yu <yifan.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 38b36d2b90d570149e63816e68f457aea28a5092)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Joshua Watt
cc5c08c1bb classes/image: Use xargs to set file timestamps
Instead of having find directly invoke touch for each file in the root
file system, pass a list to xargs for batching (see the sketch after
the timing figures below). This significantly reduces the number of
times the touch program is invoked and speeds up the do_image task time:

    PKG           TASK      ABSDIFF  RELDIFF  CPUTIME1 -> CPUTIME2
    my-image      do_image   -45.3s   -94.2%     48.1s -> 2.8s

    Cumulative cputime:
      -44.3s    -92.3%    00:48.1 (48.1s) -> 00:03.7 (3.7s)
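
The batching idea, sketched in Python (the image class itself does this
with find and xargs in shell): spawn one touch per batch of files
instead of one per file.

    import subprocess

    def touch_batched(files, reference, batch=1000):
        # One process per 1000 files instead of one per file: the same
        # saving xargs provides over invoking touch from find directly.
        for i in range(0, len(files), batch):
            subprocess.run(['touch', '-r', reference] + files[i:i + batch],
                           check=True)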

(From OE-Core rev: caa63cae723b9025943f3d60dd8ae852fc52addc)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 15c65f90a3aa1e98c2beab2539403157df1fca08)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Bruce Ashfield
eb69aa8d9c linux-yocto/5.10: qemuppc32: reduce serial shutdown issues
Integrating the following commit(s) to linux-yocto/5.10:

    qemuppc32: reduce serial issues seen on shutdown

    Richard reported:

    We've been seeing a lot of the qemuppc shutdown issue and I decided to
    look into it. The really worrying thing in the local logs is that the
    serial ports are showing IRQ issues and becoming disabled because
    nobody would handle them.

    Errors like:

       [    9.194886] irq 36: nobody cared (try booting with the "irqpoll" option)
       [    9.198712] CPU: 0 PID: 127 Comm: bootlogd Not tainted 5.10.30-yocto-standard #1
       [    9.202283] Call Trace:
       [    9.205611] [d1005f00] [c00a0da8] __report_bad_irq+0x50/0x138 (unreliable)
       [    9.209347] [d1005f30] [c00a0cc0] note_interrupt+0x324/0x378
       [    9.212855] [d1005f70] [c009d138] handle_irq_event+0xe8/0x104
       [    9.216353] [d1005fa0] [c00a1d9c] handle_fasteoi_irq+0xc0/0x29c
       [    9.219960] [d1005fc0] [c009b798] generic_handle_irq+0x40/0x5c
       [    9.223496] [d1005fd0] [c00075d0] __do_irq+0x58/0x188
       [    9.226948] [d1005ff0] [c0010040] call_do_irq+0x20/0x38
       [    9.230391] [d29eda60] [c0007788] do_IRQ+0x88/0xfc
       [    9.233860] [d29eda90] [c0016454] ret_from_except+0x0/0x14
       [    9.237288] --- interrupt: 501 at __setup_irq+0x3c4/0x838
       [    9.237288]     LR = __setup_irq+0x790/0x838
       [    9.244155] [d29edb88] [c009f0a4] request_threaded_irq+0x114/0x1c8
       [    9.247672] [d29edbb8] [c07a5a18] pmz_startup+0x17c/0x32c
       [    9.251203] [d29edbd8] [c07a1140] uart_port_startup+0x184/0x2f8
       [    9.254651] [d29edc08] [c07a1974] uart_port_activate+0x78/0xf4
       [    9.258141] [d29edc28] [c07839f8] tty_port_open+0xd4/0x170
       [    9.261579] [d29edc58] [c079db74] uart_open+0x2c/0x48
       [    9.265116] [d29edc68] [c077a288] tty_open+0x168/0x640
       [    9.268574] [d29edcd8] [c0280be8] chrdev_open+0x138/0x2a4
       [    9.272123] [d29edd18] [c027421c] do_dentry_open+0x228/0x410
       [    9.275643] [d29edd48] [c028e9f4] path_openat+0xb04/0xf28
       [    9.279184] [d29eddd8] [c02917e4] do_filp_open+0x120/0x164
       [    9.282535] [d29ede98] [c0276238] do_sys_openat2+0xd8/0x19c
       [    9.285790] [d29edee8] [c0276574] sys_openat+0x88/0xdc
       [    9.289096] [d29edf38] [c00160d8] ret_from_syscall+0x0/0x34
       [    9.292620] --- interrupt: c01 at 0xfec3738
       [    9.292620]     LR = 0xfec36e0
       [    9.299035] handlers:
       [    9.302312] [<7f7f7da8>] pmz_interrupt
       [    9.305541] Disabling IRQ #36

    (and the irqpoll option does not help)

    This is problematic as the shutdown test uses the serial interface to
    shut down the system. If the serial interface fails to log in or run
    the command, it's game over for the test.

    CONFIG_SERIAL_PMACZILOG_CONSOLE complicates that handling, but doesn't provide
    any output or capabilities that we need. So we disable it here, and
    reduce the chances of issues during shutdown.

(From OE-Core rev: f91bb6a2a9591e28f37b1c8002dce1d053c33fd4)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bf2c6ea03d45742597275691b4c883044765c57e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Richard Purdie
d3865958fa lib/package_manager: Use shutil.copy instead of bb.utils.copyfile for intercepts
If scripts/postinst-intercepts is owned by root/root, the copyfile()
calls will fail due to chown issues. We don't care about the ownership
of these files, so use shutil.copy() instead, which won't perform any
chown.
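
The relevant standard-library behaviour (illustrative paths):
shutil.copy() copies file contents and permission bits but never calls
chown, so a root-owned source copies cleanly for an unprivileged user.

    import shutil

    # The destination is created as the current user; ownership is not
    # preserved, so no chown (and no permission failure) can occur.
    shutil.copy('scripts/postinst-intercepts/update_font_cache',
                'work/intercept_scripts/update_font_cache')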

(From OE-Core rev: f2c5f666140df29d97e2b1539e727d3609e9e4d2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1a03c70c282b3445b93a4c70ea6d40a1778750c5)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Alexander Kanavin
cc40e858d2 diffoscope: add native libraries to LD_LIBRARY_PATH
The reversal of the global setting in the previous commit necessitates
a local fix; otherwise, this happens:

  File "/home/pokybuild/yocto-worker/reproducible-debian/build/build-st-52142/tmp/work/x86_64-linux/diffoscope-native/172-r0/recipe-sysroot-native/usr/lib/python3.9/ctypes/__init__.py", line 392, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
AttributeError: nativepython3: undefined symbol: archive_errno

(From OE-Core rev: 73edf1b88f0997f7368bfdb59d3076f085c5da4e)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 87884d9938829d5ae5d250f483c749e00cd83322)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Alexander Kanavin
12e069303c Revert "oeqa: Set LD_LIBRARY_PATH when executing native commands"
LD_LIBRARY_PATH leaks into host executables too, and breaks them
as they are not uninative-enabled. E.g. on ubuntu 18.04 trying
to run host bash with a sysroot that was built on Fedora 33:

akanavin@ubuntu1804-ty-3:/home/pokybuild/yocto-worker/oe-selftest-ubuntu/build/build-st-24341/tmp/work/x86_64-linux/gnupg-native/2.3.1-r0/recipe-sysroot-native$ LD_LIBRARY_PATH=./usr/lib /bin/bash
/bin/bash: ./usr/lib/libtinfo.so.5: no version information available (required by /bin/bash)
/bin/bash: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by ./usr/lib/libtinfo.so.5)

This was seen e.g. here:
https://autobuilder.yoctoproject.org/typhoon/#/builders/87/builds/2090/steps/14/logs/stdio

(From OE-Core rev: efcc95f25843ed5aa825ebc55985eaf4660a498a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0e9850486b74a3de934527ca1077df001d3a8d22)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Anuj Mittal
1cbca26e04 qemu: fix CVE-2021-3392
(From OE-Core rev: 147bed3b6c591c2b20b4ac31f806ee153cc23322)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a0257aee7d80fc67c92877e2de1e4b98ece54174)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Ross Burton
e18e4cb59f oe-buildenv-internal: add BitBake's library to PYTHONPATH
There are many Python scripts in oe-core that want to use Tinfoil, and
right now they have to know where they are to work out where BitBake is
likely to be.

This is suboptimal as BitBake could be somewhere else, so this
approach doesn't scale to other layers at all.

Solve this by adding BITBAKEDIR/lib to PYTHONPATH in oe-buildenv-internal,
so that Python has BitBake on its search path once the build system is
configured.
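
With that in place, a script anywhere can import Tinfoil directly. A
minimal example (assuming a build environment initialised via
oe-init-build-env):

    import bb.tinfoil

    # bb is importable because oe-buildenv-internal put $BITBAKEDIR/lib
    # on PYTHONPATH.
    with bb.tinfoil.Tinfoil() as tinfoil:
        tinfoil.prepare(config_only=True)
        print(tinfoil.config_data.getVar('MACHINE'))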

(From OE-Core rev: c65fe0a000c1170d346ffcddf7c65fad53a55b36)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a48178f6d00e7f97a09f42d5a164204e9dcffa9f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Khem Raj
7843c046ad webkitgtk: Fix reproducibility in minibrowser
(From OE-Core rev: 283e6adb30a1946d4b870ab0f2d69c1b230a70e4)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8f08ca440b6c2ad3494808ffa4ec6091722c0339)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Khem Raj
75ea23a238 busybox: Fix reproducibility
This ensures that globbing results in the same order irrespective of
the shell in use.

(From OE-Core rev: b5bb7b5499b7a1ece9ef6592166709fecd5e6935)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fdeee94fa78f91613850500b209b75a6608241d0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Khem Raj
34161a0513 libjpeg-turbo: Use --reproducible option for nasm
This ensures that the nasm version and timestamps do not appear in
build outputs.

(From OE-Core rev: 66db1962e49e6d06d388e4df9b31fc8db5372a42)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2f69c00c4bc1de6cd518fd78f67ff3ca863392f3)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Jose Quaresma
a0ce4f3bba ptest-runner: libgcc must be installed for pthread_cancel to work
This only affects glibc systems and has been observed when running
core-image-minimal under runqemu with the gstreamer ptest-runner:

STOP: ptest-runner
libgcc_s.so.1 must be installed for pthread_cancel to work
Aborted

(From OE-Core rev: 0eeb4dd1e9dbbbe205ff9821a398c44d5769f798)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1cb679e6a4528a2cef16f65342d5e65adb14cb16)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Sakib Sajal
b75787562e qemu: fix CVE-2021-20263
virtiofs: drop remapped security.capability xattr as needed

(From OE-Core rev: 56f948329e2780ce8845646b0bb499d82e197d85)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7ad71de89dd60700cbaad2df1937bc3d743112da)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Sakib Sajal
44de79f8f5 qemu: fix CVE-2020-27821
memory: clamp cached translation in case it points to an MMIO region

(From OE-Core rev: 5240cce285d3baea513da0fc577b69e6f078a527)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit df92b3359743ed1837fa57df8035d121f5c5676b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Bruce Ashfield
77de7815a7 linux-yocto/5.4: update to v5.4.116
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    370636ffbb86 Linux 5.4.116
    e23967af130b bpf: Update selftests to reflect new error states
    ef4e68f0af04 bpf: Tighten speculative pointer arithmetic mask
    4dc6e55e282f bpf: Move sanitize_val_alu out of op switch
    876d1cec9369 bpf: Refactor and streamline bounds check into helper
    4158e5fea3b1 bpf: Improve verifier error messages for users
    15de0c537bf7 bpf: Rework ptr_limit into alu_limit and add common error path
    f7fbedc90909 bpf: Ensure off_reg has no mixed signed bounds for all types
    4a163b1c7053 bpf: Move off_reg into sanitize_ptr_alu
    19bfeb47e96b Linux 5.4.115
    af7099bad495 USB: CDC-ACM: fix poison/unpoison imbalance
    d7fad2ce15bd net: hso: fix NULL-deref on disconnect regression
    699017fe0de4 x86/crash: Fix crash_setup_memmap_entries() out-of-bounds access
    b3962b4e8334 ia64: tools: remove duplicate definition of ia64_mf() on ia64
    763cbe5e1ebb ia64: fix discontig.c section mismatches
    3dce9c4bb546 csky: change a Kconfig symbol name to fix e1000 build error
    892f6bc55746 cavium/liquidio: Fix duplicate argument
    2ccca124620e xen-netback: Check for hotplug-status existence before watching
    78687d6a3213 s390/entry: save the caller of psw_idle
    026490fac496 net: geneve: check skb is large enough for IPv4/IPv6 header
    caaf9371ecad ARM: dts: Fix swapped mmc order for omap3
    be60afbb9136 HID: wacom: Assign boolean values to a bool variable
    116ee59ef886 HID: alps: fix error return code in alps_input_configured()
    a4e2b91cea52 HID: google: add don USB id
    aefb6ac6ac11 perf auxtrace: Fix potential NULL pointer dereference
    39638289595b perf/x86/kvm: Fix Broadwell Xeon stepping in isolation_ucodes[]
    319a06e58ed7 perf/x86/intel/uncore: Remove uncore extra PCI dev HSWEP_PCI_PCU_3
    82808cc02681 locking/qrwlock: Fix ordering in queued_write_lock_slowpath()
    c6eb92b37af1 arm64: dts: allwinner: Revert SD card CD GPIO for Pine64-LTS
    37ee803d7ed7 pinctrl: lewisburg: Update number of pins in community
    dbb355960ef9 gpio: omap: Save and restore sysconfig
    835c8d688e1e s390/ptrace: return -ENOSYS when invalid syscall is supplied

(From OE-Core rev: b41af8ae8fce5b1c8d32cebcc85315517775a3cc)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 667352cc46429f3d8eca12cf93c26be2d26e5d74)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Bruce Ashfield
22f2abe250 linux-yocto/5.10: update to v5.10.34
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    0aa66717f684 Linux 5.10.34
    47d54b990103 mei: me: add Alder Lake P device id.
    2a442f11407e iwlwifi: Fix softirq/hardirq disabling in iwl_pcie_gen2_enqueue_hcmd()
    8bd8301ccc11 Linux 5.10.33
    8a661bad6cee USB: CDC-ACM: fix poison/unpoison imbalance
    90642ee9eb58 net: hso: fix NULL-deref on disconnect regression
    31720f9e87c0 x86/crash: Fix crash_setup_memmap_entries() out-of-bounds access
    bed21bed2e79 ia64: tools: remove duplicate definition of ia64_mf() on ia64
    ba0910ad1c57 ia64: fix discontig.c section mismatches
    f4a777bcc8d1 csky: change a Kconfig symbol name to fix e1000 build error
    393200a1b095 kasan: fix hwasan build for gcc
    f2b46286e326 cavium/liquidio: Fix duplicate argument
    1bfefd866195 xen-netback: Check for hotplug-status existence before watching
    509ae27a1874 arm64: kprobes: Restore local irqflag if kprobes is cancelled
    da99331fc6ce s390/entry: save the caller of psw_idle
    d33031a894d2 dmaengine: tegra20: Fix runtime PM imbalance on error
    66d0cf7dcaa1 net: geneve: check skb is large enough for IPv4/IPv6 header
    6ce64437224d ARM: dts: Fix swapped mmc order for omap3
    db010ba54a96 dmaengine: xilinx: dpdma: Fix race condition in done IRQ
    e8d9a93ec46e dmaengine: xilinx: dpdma: Fix descriptor issuing on video group
    eb2c81ee764d soc: qcom: geni: shield geni_icc_get() for ACPI boot
    8c4bfe30eb55 HID: wacom: Assign boolean values to a bool variable
    e913cbc952c3 HID cp2112: fix support for multiple gpiochips
    f691dc86411d HID: alps: fix error return code in alps_input_configured()
    079e32723f78 HID: google: add don USB id
    ffe249b4fc2c perf map: Fix error return code in maps__clone()
    4d0cfb3713bc perf auxtrace: Fix potential NULL pointer dereference
    ab112cc573cc perf/x86/kvm: Fix Broadwell Xeon stepping in isolation_ucodes[]
    6f8315e5d951 perf/x86/intel/uncore: Remove uncore extra PCI dev HSWEP_PCI_PCU_3
    82fa9ced35d8 locking/qrwlock: Fix ordering in queued_write_lock_slowpath()
    b642e493a9a0 bpf: Tighten speculative pointer arithmetic mask
    2982ea926b5c bpf: Refactor and streamline bounds check into helper
    f3c4b01689d3 bpf: Allow variable-offset stack access
    f79efcb0075a bpf: Permits pointers on stack for helper calls
    edc5d1601389 arm64: dts: allwinner: Revert SD card CD GPIO for Pine64-LTS
    83d93d05376a pinctrl: core: Show pin numbers for the controllers with base = 0
    fc2454cc0c4b block: return -EBUSY when there are open partitions in blkdev_reread_part
    2bbd8aafde36 pinctrl: lewisburg: Update number of pins in community
    a8cd07e4400d vdpa/mlx5: Set err = -ENOMEM in case dma_map_sg_attrs fails
    bf84ef2dd2cc KEYS: trusted: Fix TPM reservation for seal/unseal
    9857fccd653c gpio: omap: Save and restore sysconfig
    71777492b745 vhost-vdpa: protect concurrent access to vhost device iotlb

(From OE-Core rev: 5d5e21cfb052618d3a3dec2fd0b2bf74473755be)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2cfc4489c14f8d1ec2c6fc2aa411d158058f5aea)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Bruce Ashfield
3a7b618eee linux-yocto/5.4: update to v5.4.114
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    a7eb81c1d11a Linux 5.4.114
    3822683fd101 net: phy: marvell: fix detection of PHY on Topaz switches
    cec3b778f70f ARM: 9071/1: uprobes: Don't hook on thumb instructions
    4f0cda5e9e62 r8169: don't advertise pause in jumbo mode
    c5934da725bb r8169: tweak max read request size for newer chips also in jumbo mtu mode
    50b7a68664dc r8169: improve rtl_jumbo_config
    cbbd3e2a2e7c r8169: fix performance regression related to PCIe max read request size
    0243bb394186 r8169: simplify setting PCI_EXP_DEVCTL_NOSNOOP_EN
    c667953d6433 r8169: remove fiddling with the PCIe max read request size
    b14992c96274 arm64: dts: allwinner: Fix SD card CD GPIO for SOPine systems
    871b569a3e67 ARM: footbridge: fix PCI interrupt mapping
    9a7ac9afc8d7 gro: ensure frag0 meets IP header alignment
    fde195c03bff ibmvnic: remove duplicate napi_schedule call in open function
    c591bbaae545 ibmvnic: remove duplicate napi_schedule call in do_reset function
    c6acd7d19124 ibmvnic: avoid calling napi_disable() twice
    2bc14f5eca10 i40e: fix the panic when running bpf in xdpdrv mode
    51edda8a6334 net: ip6_tunnel: Unregister catch-all devices
    92f93a03cef0 net: sit: Unregister catch-all devices
    4fcbb1fa2703 net: davicom: Fix regulator not turned off on failed probe
    01fb1626b620 netfilter: nft_limit: avoid possible divide error in nft_limit_init
    e65cd80558e5 net: macb: fix the restore of cmp registers
    6449b405f99a netfilter: arp_tables: add pre_exit hook for table unregister
    ce23be37ecac netfilter: bridge: add pre_exit hooks for ebtable unregistration
    61ca5b653220 libnvdimm/region: Fix nvdimm_has_flush() to handle ND_REGION_ASYNC
    4ce8e86d125d netfilter: conntrack: do not print icmpv6 as unknown via /proc
    5f6c1a81713e scsi: libsas: Reset num_scatter if libata marks qc as NODATA
    7779f84e4677 riscv: Fix spelling mistake "SPARSEMEM" to "SPARSMEM"
    ec3bb712fb62 vfio/pci: Add missing range check in vfio_pci_mmap
    9e8c5e3d8279 arm64: alternatives: Move length validation in alternative_{insn, endif}
    b7d15166c1d1 arm64: fix inline asm in load_unaligned_zeropad()
    b9956950f23c readdir: make sure to verify directory entry for legacy interfaces too
    ff821c7ce913 dm verity fec: fix misaligned RS roots IO
    804607635cc1 HID: wacom: set EV_KEY and EV_ABS only for non-HID_GENERIC type of devices
    b428063fb310 Input: i8042 - fix Pegatron C15B ID entry
    995503dd6546 Input: s6sy761 - fix coordinate read bit shift
    7a2ac9ed8cf6 virt_wifi: Return micros for BSS TSF values
    bd7e90c82850 mac80211: clear sta->fast_rx when STA removed from 4-addr VLAN
    f666567a51fb pcnet32: Use pci_resource_len to validate PCI resource
    9e249bc38a48 net: ieee802154: forbid monitor for add llsec seclevel
    7a7899eaaeb8 net: ieee802154: stop dump llsec seclevels for monitors
    fc5f9c33edb5 net: ieee802154: forbid monitor for del llsec devkey
    63581374638b net: ieee802154: forbid monitor for add llsec devkey
    0d5ee2ee9ab2 net: ieee802154: stop dump llsec devkeys for monitors
    6c8caf78304f net: ieee802154: forbid monitor for del llsec dev
    c993c05b9d48 net: ieee802154: forbid monitor for add llsec dev
    f9d7088d385c net: ieee802154: stop dump llsec devs for monitors
    178ddee28d53 net: ieee802154: forbid monitor for del llsec key
    5d025404d513 net: ieee802154: forbid monitor for add llsec key
    d8b4f3a9d732 net: ieee802154: stop dump llsec keys for monitors
    e16998019358 scsi: scsi_transport_srp: Don't block target in SRP_PORT_LOST state
    f0268d35305d ASoC: fsl_esai: Fix TDM slot setup for I2S mode
    d60837aa64be drm/msm: Fix a5xx/a6xx timestamps
    01e86da75c18 ARM: omap1: fix building with clang IAS
    4f02dc4d360f ARM: keystone: fix integer overflow warning
    f3183866b3da neighbour: Disregard DEAD dst in neigh_update
    1cf8b48a4de2 ASoC: max98373: Added 30ms turn on/off time delay
    47d04c039915 arc: kernel: Return -EFAULT if copy_to_user() fails
    68bd0d8ab19e lockdep: Add a missing initialization hint to the "INFO: Trying to register non-static key" message
    6ffc9f854d23 ARM: dts: Fix moving mmc devices with aliases for omap4 & 5
    4609d27ca6e4 ARM: dts: Drop duplicate sha2md5_fck to fix clk_disable race
    09db44ad36b0 dmaengine: dw: Make it dependent to HAS_IOMEM
    5130cda3cb1f gpio: sysfs: Obey valid_mask
    2dce5702ef05 Input: nspire-keypad - enable interrupts only when opened
    6180d2274b17 net/sctp: fix race condition in sctp_destroy_sock
    304c21786b01 scsi: qla2xxx: Fix fabric scan hang
    ca0188d396cd scsi: qla2xxx: Fix stuck login session using prli_pend_timer
    c393c7f77cf8 scsi: qla2xxx: Add a shadow variable to hold disc_state history of fcport
    ad66dc6d8830 scsi: qla2xxx: Retry PLOGI on FC-NVMe PRLI failure
    8b5e82aea7b3 scsi: qla2xxx: Fix device connect issues in P2P configuration
    8eed34d3c444 scsi: qla2xxx: Dual FCP-NVMe target port support
    33beb0e6c244 Revert "scsi: qla2xxx: Fix stuck login session using prli_pend_timer"
    94ac0a8866c4 Revert "scsi: qla2xxx: Retry PLOGI on FC-NVMe PRLI failure"
    ab3bed80f9d3 Linux 5.4.113
    94371b6c5553 xen/events: fix setting irq affinity
    4ea6097986c4 perf map: Tighten snprintf() string precision to pass gcc check on some 32-bit arches
    d462247bb274 perf tools: Use %zd for size_t printf formats on 32-bit
    2715a4c0dc34 perf tools: Use %define api.pure full instead of %pure-parser
    799f02f0dfc4 driver core: Fix locking bug in deferred_probe_timeout_work_func()
    cc59b872f2e1 netfilter: x_tables: fix compat match/target pad out-of-bound write
    8119a2b42028 block: don't ignore REQ_NOWAIT for direct IO
    2d71bffbe9a0 riscv,entry: fix misaligned base for excp_vect_table
    90b71ae8e5cf idr test suite: Create anchor before launching throbber
    b9299c2bf554 idr test suite: Take RCU read lock in idr_find_test_1
    cde89079ce46 radix tree test suite: Register the main thread with the RCU library
    f5b60f26e36b block: only update parent bi_status when bio fail
    5b8f89685a9a drm/tegra: dc: Don't set PLL clock to 0Hz
    db162d8d7d08 gfs2: report "already frozen/thawed" errors
    3c89c7240412 drm/imx: imx-ldb: fix out of bounds array access warning
    e1ff1c6bbe4b KVM: arm64: Disable guest access to trace filter controls
    2012f9f75444 KVM: arm64: Hide system instruction access to Trace registers
    cc678e2f372e interconnect: core: fix error return code of icc_link_destroy()

(From OE-Core rev: b67f3e091f9cd40c6790bc7056eab29a5c4e4e97)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a24b8651365b333e903b317ad969ba8adfed28c4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Bruce Ashfield
9d41c4e5f9 perf: fix python-audit RDEPENDS
When doing the perf python3 conversion, the audit-python RDEPENDS
was caught up in the regex replacement and was incorrectly changed.

The audit recipe continues to produce a package called audit-python,
and it is that package we should have as an RDEPENDS.

(From OE-Core rev: 220725bbe835cb20feef6f21f036a9f10f689a30)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7eccb9c0c2ea00685451c44cb8faa96c4a2272fd)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Bruce Ashfield
93583d3ac9 linux-yocto/5.10: update to v5.10.32
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    aea70bd5a455 Linux 5.10.32
    6ac98ee9cb7c net: phy: marvell: fix detection of PHY on Topaz switches
    fbe6603e7cab bpf: Move sanitize_val_alu out of op switch
    7723d3243857 bpf: Improve verifier error messages for users
    55565c307908 bpf: Rework ptr_limit into alu_limit and add common error path
    496e2fabbbe3 arm64: mte: Ensure TIF_MTE_ASYNC_FAULT is set atomically
    cada2ed0bb70 ARM: 9071/1: uprobes: Don't hook on thumb instructions
    480d875f1242 bpf: Move off_reg into sanitize_ptr_alu
    589fd9684dfa bpf: Ensure off_reg has no mixed signed bounds for all types
    b2df20c0f19f r8169: don't advertise pause in jumbo mode
    154fb9cb3e6f r8169: tweak max read request size for newer chips also in jumbo mtu mode
    7f64753835a7 KVM: VMX: Don't use vcpu->run->internal.ndata as an array index
    c670ff84fac9 KVM: VMX: Convert vcpu_vmx.exit_reason to a union
    4f3ff11204ea bpf: Use correct permission flag for mixed signed bounds arithmetic
    8d7906c548aa arm64: dts: allwinner: h6: beelink-gs1: Remove ext. 32 kHz osc reference
    286c39d08664 arm64: dts: allwinner: Fix SD card CD GPIO for SOPine systems
    4f90db2e92d2 ARM: OMAP2+: Fix uninitialized sr_inst
    1fc087fdb98d ARM: footbridge: fix PCI interrupt mapping
    11a718ef953f ARM: 9069/1: NOMMU: Fix conversion for_each_membock() to for_each_mem_range()
    a13d4a1228ab ARM: OMAP2+: Fix warning for omap_init_time_of()
    9143158a6bd3 gro: ensure frag0 meets IP header alignment
    fd766f792a56 ch_ktls: do not send snd_una update to TCB in middle
    65bdd564b387 ch_ktls: tcb close causes tls connection failure
    5f3c278035c0 ch_ktls: fix device connection close
    8d5a9dbd2116 ch_ktls: Fix kernel panic
    976da1b08784 ibmvnic: remove duplicate napi_schedule call in open function
    008885a880dc ibmvnic: remove duplicate napi_schedule call in do_reset function
    685bc730e3a9 ibmvnic: avoid calling napi_disable() twice
    e154b5060aa1 ia64: tools: remove inclusion of ia64-specific version of errno.h header
    f8f01fc8c653 ia64: remove duplicate entries in generic_defconfig
    1aec111c944f ethtool: pause: make sure we init driver stats
    44ef38c0a2b3 i40e: fix the panic when running bpf in xdpdrv mode
    35d7491e2f77 net: Make tcp_allowed_congestion_control readonly in non-init netns
    76af8126a6e4 mm: ptdump: fix build failure
    33f3dab42ae2 net: ip6_tunnel: Unregister catch-all devices
    ea0340e632ba net: sit: Unregister catch-all devices
    154ac84d497a net: davicom: Fix regulator not turned off on failed probe
    e072247938a8 net/mlx5e: Fix setting of RS FEC mode
    dc1732baa9da netfilter: nft_limit: avoid possible divide error in nft_limit_init
    cda5507d234f net/mlx5e: fix ingress_ifindex check in mlx5e_flower_parse_meta
    40ed1d29f151 net: macb: fix the restore of cmp registers
    7f8e59c4c5e5 libbpf: Fix potential NULL pointer dereference
    7824d5a9935a netfilter: arp_tables: add pre_exit hook for table unregister
    4d26865974fb netfilter: bridge: add pre_exit hooks for ebtable unregistration
    eb82199e377a libnvdimm/region: Fix nvdimm_has_flush() to handle ND_REGION_ASYNC
    a2af8a0f38e4 ice: Fix potential infinite loop when using u8 loop counter
    783645e65b57 netfilter: conntrack: do not print icmpv6 as unknown via /proc
    394c81e36e49 netfilter: flowtable: fix NAT IPv6 offload mangling
    be07581aacae ixgbe: fix unbalanced device enable/disable in suspend/resume
    0ef9919a06a3 scsi: libsas: Reset num_scatter if libata marks qc as NODATA
    6a70ab9769cd riscv: Fix spelling mistake "SPARSEMEM" to "SPARSMEM"
    f66d695c06f4 vfio/pci: Add missing range check in vfio_pci_mmap
    e6177990e17d arm64: alternatives: Move length validation in alternative_{insn, endif}
    e2931f05eb32 arm64: fix inline asm in load_unaligned_zeropad()
    957f83a138f1 readdir: make sure to verify directory entry for legacy interfaces too
    2b8308741cf5 dm verity fec: fix misaligned RS roots IO
    18ba387261ea HID: wacom: set EV_KEY and EV_ABS only for non-HID_GENERIC type of devices
    dedf75aec8fc Input: i8042 - fix Pegatron C15B ID entry
    8b978750dcd2 Input: s6sy761 - fix coordinate read bit shift
    955da2b5cd98 lib: fix kconfig dependency on ARCH_WANT_FRAME_POINTERS
    024f9d048000 virt_wifi: Return micros for BSS TSF values
    cc413b375c6d mac80211: clear sta->fast_rx when STA removed from 4-addr VLAN
    2e08d9a56838 pcnet32: Use pci_resource_len to validate PCI resource
    248b9b61b951 net: ieee802154: forbid monitor for add llsec seclevel
    b97c7bc42d8d net: ieee802154: stop dump llsec seclevels for monitors
    ab9f9a1d5874 net: ieee802154: forbid monitor for del llsec devkey
    4846c2debb2c net: ieee802154: forbid monitor for add llsec devkey
    07714229e0e2 net: ieee802154: stop dump llsec devkeys for monitors
    4c1775d6ea86 net: ieee802154: forbid monitor for del llsec dev
    813b13155d14 net: ieee802154: forbid monitor for add llsec dev
    2f80452951b5 net: ieee802154: stop dump llsec devs for monitors
    08744a622faa net: ieee802154: forbid monitor for del llsec key
    7edf4d2baa8a net: ieee802154: forbid monitor for add llsec key
    c09075df5e4d net: ieee802154: stop dump llsec keys for monitors
    8b9485b651d4 iwlwifi: add support for Qu with AX201 device
    c836374bacfa scsi: scsi_transport_srp: Don't block target in SRP_PORT_LOST state
    d9fc084067f5 ASoC: fsl_esai: Fix TDM slot setup for I2S mode
    79ef0e6c0cf8 drm/msm: Fix a5xx/a6xx timestamps
    d61238aa6482 ARM: omap1: fix building with clang IAS
    505c48942f04 ARM: keystone: fix integer overflow warning
    0d0ad98bee39 neighbour: Disregard DEAD dst in neigh_update
    7a1cd9044da4 gpu/xen: Fix a use after free in xen_drm_drv_init
    bfb5a1523f17 ASoC: max98373: Added 30ms turn on/off time delay
    58d59d9ae56f ASoC: max98373: Changed amp shutdown register as volatile
    b2f8476193eb xfrm: BEET mode doesn't support fragments for inner packets
    806addaf8dfd iwlwifi: Fix softirq/hardirq disabling in iwl_pcie_enqueue_hcmd()
    b448a6a2fc5a arc: kernel: Return -EFAULT if copy_to_user() fails
    f12e8cf6b180 lockdep: Add a missing initialization hint to the "INFO: Trying to register non-static key" message
    a55de4f0d1d4 ARM: dts: Fix moving mmc devices with aliases for omap4 & 5
    9f399a9d7006 ARM: dts: Drop duplicate sha2md5_fck to fix clk_disable race
    f338b8fffd75 ACPI: x86: Call acpi_boot_table_init() after acpi_table_upgrade()
    e5eb9757fe4c dmaengine: idxd: fix wq cleanup of WQCFG registers
    4c59c5c8668e dmaengine: plx_dma: add a missing put_device() on error path
    ac030f5c5680 dmaengine: Fix a double free in dma_async_device_register
    56f9c04893fb dmaengine: dw: Make it dependent to HAS_IOMEM
    4ecf25595273 dmaengine: idxd: fix wq size store permission state
    db23b7b5ca3e dmaengine: idxd: fix opcap sysfs attribute output
    0e3f14755111 dmaengine: idxd: fix delta_rec and crc size field for completion record
    a5ad12d5d69c dmaengine: idxd: Fix clobbering of SWERR overflow bit on writeback
    f567fde02baa gpio: sysfs: Obey valid_mask
    dfed481e62e5 Input: nspire-keypad - enable interrupts only when opened
    b80ea54e1e71 mtd: rawnand: mtk: Fix WAITRDY break condition and timeout
    5a627026be4a net/sctp: fix race condition in sctp_destroy_sock
    65f1995ea1e9 Linux 5.10.31
    ceee49ca34bf xen/events: fix setting irq affinity
    9d9facd32d89 net: sfp: cope with SFPs that set both LOS normal and LOS inverted
    2a60ab2dab3d net: sfp: relax bitrate-derived mode check
    cd8ce27e6caa perf map: Tighten snprintf() string precision to pass gcc check on some 32-bit arches
    1f3b9000cb44 netfilter: x_tables: fix compat match/target pad out-of-bound write
    5402a67ac403 block: don't ignore REQ_NOWAIT for direct IO
    efa7b6e4017a riscv,entry: fix misaligned base for excp_vect_table
    6fbdce3cde97 io_uring: don't mark S_ISBLK async work as unbounded
    5d4600017bee null_blk: fix command timeout completion handling
    b1f6c6f39bd6 idr test suite: Create anchor before launching throbber
    9a7552daa93b idr test suite: Take RCU read lock in idr_find_test_1
    edd822b69241 radix tree test suite: Register the main thread with the RCU library
    1d2310d95fb8 block: only update parent bi_status when bio fail
    d99e22c0ea74 XArray: Fix splitting to non-zero orders
    9576dd89554e gpu: host1x: Use different lock classes for each client
    39af2f472f21 drm/tegra: dc: Don't set PLL clock to 0Hz
    e4a0956574c7 tools/kvm_stat: Add restart delay
    1dcb3ebc2416 ftrace: Check if pages were allocated before calling free_pages()
    6c6d58322079 gfs2: report "already frozen/thawed" errors
    870c8df1d192 drm/imx: imx-ldb: fix out of bounds array access warning
    5b50468a2d4d KVM: arm64: Disable guest access to trace filter controls
    fa0c0dce589d KVM: arm64: Hide system instruction access to Trace registers
    57fb08fb9a25 gfs2: Flag a withdraw if init_threads() fails
    9b57ecb01b43 interconnect: core: fix error return code of icc_link_destroy()

(From OE-Core rev: 848984a8678093790f9f03e7e62ab7fcb12346ac)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 91fcd094619e25d63a80231c3b776788504ce37b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Bruce Ashfield
3e1d3be9ef linux-yocto/5.10: qemuriscv32.cfg: RV32 only supports 1G physical memory
Integrating the following commit(s) to linux-yocto/5.10:

    a19886b00ea qemuriscv32.cfg: RV32 only supports 1G physical memory

(From OE-Core rev: 27f691faf496d67de99538ee19ce79edfb4cc192)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 894f5328d395872f69bd48c59518bbafb7cbd61e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Bruce Ashfield
01f537542e linux-yocto/5.10: aufs fixes
It was reported that aufs was behaving incorrectly on arm/x86. Although
we don't have an exact fix for the issues, the Wind River guys were
able to come up with a minimal patch set that fixes just the core
issue, versus a full aufs uprev.

We didn't have time to get this in before the release, but picking it
up in a dot release is sufficient, given that it took several months
for the issue to be noticed.

Integrating the following commit(s) to linux-yocto/5.10:

    a8808e541750 aufs: linux-v5.10-rc1, no more f_op->read() and ->write()
    cb1c41dac775 for aufs: linux-v5.10-rc1, no more vfs_(read|write)f_t
    a5805df6583f aufs: linux-v5.10-rc1, no more set_fs()
    64e145dcca8c Revert "aufs: initial port to v5.10"

(From OE-Core rev: 98ae1dd5c60a8f6ca30e80726c81f9fa0fc5d4cb)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c290adec4e27f5d7987193e9a0749082f3ed3e20)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Richard Purdie
230bfb1399 yocto-uninative: Update to 3.1 which includes a patchelf fix
(From OE-Core rev: 2f8edab7ccc80144a7575c8e95c463a161bf5c82)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1d9d38eb6b3621fed58a217eeb4de1816e3e6487)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
zhengruoqin
b150da374a wireless-regdb: upgrade 2020.11.20 -> 2021.04.21
(From OE-Core rev: b41c32d47b2fcb023ea4abd27af71366fd192236)

Signed-off-by: Zheng Ruoqin <zhengrq.fnst@fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit df540a630f87c02898f7ce5703f63e9c7bd2c156)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Christophe Chapuis
e5ab48e8b5 rootfs.py: find .ko.gz and .ko.xz kernel modules as well
* with the xz PACKAGECONFIG enabled in kmod and xz module compression
  enabled in the kernel, the do_rootfs task doesn't run depmod in the
  image, because it thinks there are no modules (see the sketch below):
  NOTE: No Kernel Modules found, not running depmod
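
A sketch of a check that also recognizes compressed modules
(illustrative, not the exact rootfs.py change):

    import re

    KERNEL_MODULE_RE = re.compile(r'.*\.ko(\.gz|\.xz)?$')

    def has_kernel_modules(filenames):
        # depmod should run if any module is present, compressed or not.
        return any(KERNEL_MODULE_RE.match(name) for name in filenames)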

(From OE-Core rev: 96a751b84d15480304b931264b9e5d07098c0a90)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Christophe Chapuis <chris.chapuis@gmail.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9c13ce05eae0f126eb150e48709e9bd06e9280fa)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Stefan Ghinea
b35658c7fc xserver-xorg: fix CVE-2021-3472
Insufficient checks on the lengths of the XInput extension
ChangeFeedbackControl request can lead to out of bounds memory accesses
in the X server.

References:
https://nvd.nist.gov/vuln/detail/CVE-2021-3472

Upstream patches:
7aaf54a188

(From OE-Core rev: 8fbf485f24711ab29972841ba52dcb9dcdabaffb)

Signed-off-by: Stefan Ghinea <stefan.ghinea@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6fec5fea942ce88e33e5cf4c2102d69ce25e7180)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Richard Purdie
e4068c5359 pybootchart/draw: Avoid divide by zero error
When disk stats aren't collected frequently enough, we see divide by
zero errors. The code already has a fallback path, so ensure we use it
for this case too.

[YOCTO #14360]
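
The guard involved, sketched with hypothetical names (not the
pybootchart diff): a zero-length sample interval must take the
fallback path.

    def disk_rate(prev_sample, curr_sample):
        interval = curr_sample.time - prev_sample.time
        if interval <= 0:
            # Samples too close together: use the fallback value rather
            # than dividing by zero.
            return 0.0
        return (curr_sample.bytes - prev_sample.bytes) / interval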

(From OE-Core rev: f9d9f0333bd7c590eb1307c429d43408abffeb00)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b71d30aef5dc2c360432c0dd4147859dd303ea48)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Richard Purdie
7e61f58125 patchelf: Fix alignment patch
The previous fix was in the right direction but needed to account
for the section alignment of the current section. Tweak the patch
to handle this.

(From OE-Core rev: 69e5a81ceeba3104ba5954dadc7c65cfa4b1be9b)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e464efc07a8997c43998a9c6a9544be11ab4f303)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
wangmy
865d46981a mesa: upgrade 21.0.2 -> 21.0.3
(From OE-Core rev: c0ecb7a67de478b402e1e915d51ca9bbeb662d6c)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a89ed8ce30a5830a0ac90aa633ec466b4e3a0ba1)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Richard Purdie
d06606e6dd patchelf: Fix note section alignment issues
Improved note section normalization was added to patchelf in recent
versions; however, it fails if there are two note sections which aren't
sized to match the section alignment. Tweak the code to account for
section alignment.

This fixes patchelf failures on the autobuilder, particularly for
ccache-native.

(From OE-Core rev: 8a051bf055623f1ef5ca94d9291162ac7ce871c6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fee8dde0d597b511b37d8dcf215e8355980d5f2b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Reto Schneider
344481289b license_image.bbclass: Fix symlink to generic license files
Link to the canonical filename of a license, as only that one exists.

Fixes commit 670fe71dd18ea675f35581db4a61fda137f8bf00
[license_image.bbclass: use canonical name for license files].

(From OE-Core rev: e24510fbb1439d56a278e2b5fc036d11a24e23df)

Signed-off-by: Reto Schneider <reto.schneider@husqvarnagroup.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 64b1ba978e079c345e1f7fbd1bf44052fc3dd857)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:29 +01:00
Reto Schneider
ba3ea82f68 license_image.bbclass: Detect broken symlinks
Find and report symlinks which point to a non-existing file.

(From OE-Core rev: afeefde357e468ba79570208bd67d097b9cb9ee1)

Signed-off-by: Reto Schneider <reto.schneider@husqvarnagroup.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 81809a1ffe67aade1b2ed66fe95044ffbf7d3df8)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-11 12:02:28 +01:00
Richard Purdie
58cbdaecf7 bitbake: runqueue: Handle deferred task rehashing in multiconfig builds
If the hash of a task changes and that task is a deferred task (e.g. in a multiconfig
build), we need to ensure that the hash change propagates through to all the tasks,
else the build will run multiple copies of the task, sometimes with oddly differing
results, as the outhashes of native tasks built in differing locations can confuse
things.

(Bitbake rev: b67476d4758915db7a5d9f58bc903ae7501a1774)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2db571324f755edc4981deecbcfdf0aaa5a97627)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-06 11:22:44 +01:00
Richard Purdie
8aad4c243a bitbake: runqueue: Fix multiconfig deferred task sstate validity caching issue
We were testing the validity of deferred tasks' setscene status "up front", which
is very unlikely to succeed and leads to cache invalidation issues. With the
change to rebuild the deferred task list, this status becomes out of sync. The
result was tasks being executed when they should not have been, leading to
unnecessary extra work for the build.

Instead, don't process validity status for deferred tasks and assume their
data will become available. If it doesn't, this will now result in a build
error as the setscene task will fail and the main task will run instead.

In theory we could try and track the state changes in the deferred list and
re-test validity then but I'm not sure it is worth the effort when the other
code path and errors in setscene tasks will give a pretty good idea of what
is happening anyway.

(Bitbake rev: e70cba8d5861d79ed0da9e760e618af8b759c8a9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit edcafac13b3b241b6687419e59018d21811507a1)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-06 11:22:44 +01:00
Chen Qi
2d5f42e6fa rsync: fix CVE-2020-14387
Backport patch to fix CVE-2020-14387.

(From OE-Core rev: 940111cefa459bc7a5fd9de1cf70b2040ffb5229)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5e7a536d07856630e4eb421614c8d823c67e0294)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:51 +01:00
Richard Purdie
70fb6c0d86 patchelf: Backport fix from upstream for note section overlap error
Backport a patch from upstream to fix an error:
patchelf: cannot normalize PT_NOTE segment: non-contiguous SHT_NOTE sections

seen on our ubuntu1604 autobuilder worker.

(From OE-Core rev: 738530b30c2538f7ecd151c0f0f5283075230bab)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 80e8f7d34d7032cc94b61bf155eac7648e6b6c74)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:51 +01:00
Chen Qi
49034c5cda weston: fix build failure due to race condition
wayland.c actually includes 'xdg-shell-client-protocol.h' instead of
the server one, so fix it. Otherwise, it's possible to get a build failure
due to a race condition.

(From OE-Core rev: 9147e34486d7d45365e590140c5f08aa4be367ee)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bd2a9a4d82f66f1ff414c392bcf234d8dbd5e553)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:51 +01:00
Alexander Kanavin
8197c42f57 meta/lib/oeqa/core/tests/cases/timeout.py: add a testcase for the previous fix
This is the sequence that didn't operate properly:

- a test case that skips and isn't executed
- a second test case that is skipped via a dependency decorator, and sets a timeout
- a third test case that takes longer than the timeout from the second
test case

Without the fix, the timeout is not cleared, and the third test case is
erroneously aborted. With the fix, the timeout is cleared and the third
test case is able to complete.

(From OE-Core rev: 4665008247cd4bd28da8c8b56c8c604e2e24d2cb)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 54ef07a9aa1af8f41cfb9a4802929c918efc43c8)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:51 +01:00
Alexander Kanavin
1ecf10dc33 oeqa: tear down oeqa decorators if one of them raises an exception in setup
Some of the decorators need proper cleanup, such as OETimeout
which sets a signal handler that needs to be cleared via teardown.
If this is not done, then the signal handler gets called later with unpredictable effects.

This can be seen if there's a test that is skipped via a decorator and sets a timeout
at the same time: the timeout isn't cleared, and is invoked later in a
completely unrelated context. The test case for this is added in the
next commit.
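
A minimal sketch of the shape of the fix, assuming hypothetical names
(Timeout, set_up_decorators); the point is that decorators already set up
must be torn down if a later decorator raises during setup:

    import signal

    class Timeout:                          # hypothetical OETimeout-like decorator
        def __init__(self, seconds):
            self.seconds = seconds
        def setUpDecorator(self):
            signal.signal(signal.SIGALRM, self._alarm)
            signal.alarm(self.seconds)      # must be cleared by teardown
        def tearDownDecorator(self):
            signal.alarm(0)                 # clear the pending alarm
        def _alarm(self, signum, frame):
            raise TimeoutError('test timed out')

    def set_up_decorators(decorators):
        done = []
        try:
            for d in decorators:
                d.setUpDecorator()
                done.append(d)
        except Exception:
            for d in reversed(done):        # unwind what was already set up
                d.tearDownDecorator()
            raise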

(From OE-Core rev: be45a8271c06ffbb5d97afd33bb15b1143b6cf8d)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f42a08e1aabf1ca57e0c09d69fb69cc717c7f156)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:51 +01:00
Kai Kang
509dd69bad cmake.bbclass: remove ${B} before cmake_do_configure
It is fragile to remove ${B} while the current directory is ${B} itself, and
it does fail when bitbake is called via a third-party wrapper script.

Use the 'cleandirs' varflag to remove ${B} first when building out of the
source tree.
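
A sketch of the idea as anonymous python in a .bbclass (the exact
expression in the class may differ):

    # Hedged sketch: let bitbake's 'cleandirs' varflag remove ${B} before
    # the task starts, rather than running 'rm -rf' from inside ${B}.
    python () {
        if d.getVar('S') != d.getVar('B'):
            d.setVarFlag('cmake_do_configure', 'cleandirs', '${B}')
    }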

(From OE-Core rev: db6a315e5f6de02e226e582f878a83c427fd87cc)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0fb6280432a36985590d9a714a5f11164aaebb51)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:51 +01:00
Kai Kang
81a90094bb kernel-yocto.bbclass: chdir to ${WORKDIR} for do_kernel_checkout
Task do_kernel_checkout chdirs to ${S} at the beginning, then removes ${S}
while it still resides in ${S}. This may make do_kernel_checkout fail when
bitbake is called by a third-party wrapper script. So chdir to ${WORKDIR}
by default for do_kernel_checkout; it will chdir to ${S} afterwards within
the task.

(From OE-Core rev: 51b03665de86c14f5b3887a60154b118c0d37aa3)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit cf0e3397d3f86c7ea1f3c66c50a44d6205f5921b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:50 +01:00
Richard Purdie
e7aece6988 yocto-check-layer: Avoid bug when iterating and autoadding dependencies
When iterating a layer with multiple components and auto-adding dependencies,
the tests can break, since layers are never removed and the order isn't
guaranteed to account for that.

Fix this by resetting the layer list back to the original list each time
before auto-adding the dependencies in each case.

This fixes scanning of meta-openembedded in particular where the sublayers
may not be added in order of minimal dependency.
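
A minimal sketch of the reset, with hypothetical helpers
(find_dependencies, run_tests):

    # Hedged sketch: start each component from the original layer list so
    # dependencies auto-added for one component don't leak into the next.
    orig_layers = list(layers)
    for component in components:
        test_layers = list(orig_layers)     # reset to the original list
        test_layers += find_dependencies(component, test_layers)
        run_tests(component, test_layers)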

(From OE-Core rev: 280596107b2744de63e6f34007324e5e2c857758)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bf1b467dacf345379cd5d84a1c9b3b0d844d5c91)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:50 +01:00
Stefan Ghinea
5cde220103 libssh2: fix build failure with option no-ecdsa
libssh2 fails at do_compile if
DEPRECATED_CRYPTO_FLAGS = "no-ecdsa" is set in the recipe:

../src/.libs/libssh2.so: undefined reference to
`LIBSSH2_KEX_METHOD_EC_SHA_HASH_CREATE_VERIFY'

References:
https://github.com/libssh2/libssh2/issues/549

Upstream patches:
1f76151c92

(From OE-Core rev: d70cf4cd57d61f7db7179673b211e631c944e0e6)

Signed-off-by: Stefan Ghinea <stefan.ghinea@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2bb146e7315f8080cb49a95212231ccb76a4a822)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:50 +01:00
Chen Qi
80c76a82ec glib-2.0: fix CVE-2021-28153
Backport patches to fix CVE-2021-28153.

(From OE-Core rev: 8a0aae46bc87c00fb4d32f6ce5567cc44cae6d34)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-05-04 22:57:50 +01:00
Khem Raj
aa8618b624 go: Use dl.google.com for SRC_URI
golang.org/dl is resolving to this anyway

(From OE-Core rev: 3357bbf0dad31306d5e16ad306d3e931042eec61)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8470e38ac1d9f9bb6d8a4ee43724af452d080057)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Sakib Sajal
5c1a29e6de qemu: fix CVE-2021-20257
(From OE-Core rev: 5b66ff7972951db973d12f3dae6ccecf3bc29e56)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 547ac986a74cfcae39b691ebb92aadc8436443ea)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Sakib Sajal
381aebe82f qemu: fix CVE-2021-3416
(From OE-Core rev: 7a3ce8a79a6c682e1b38f757eb68534e0ce5589d)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e2b5bc11d1b26b73b62e1a63cb75572793282dcb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Sakib Sajal
4de4010f3f qemu: fix CVE-2021-3409
(From OE-Core rev: e6fd06544018f37943d4758ea57206f994cd04d3)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e2fb8c15a64e1f5db678e8e95924da8c88a188c0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Sakib Sajal
a1f77137d2 qemu: fix CVE-2021-20221
(From OE-Core rev: e71b85d59c96a9aba06852dfdcd6ad5d9cdc4c35)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 59a44f8c70d4a026ae74e44b9d70100029c691b5)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Sakib Sajal
686f914733 qemu: fix CVE-2020-29443
(From OE-Core rev: 27cc6761ecd7dbe5b7972706f2a21cb3ee5eef3f)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 481e012de865ee232fa5a233e9f1d4fc7a2232ab)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Sakib Sajal
53390d2261 qemu: fix CVE-2021-20181
(From OE-Core rev: a993a379bb490efbbf507f5dccda5ab358e8afea)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c2f79065ef0684f2c0bdb92f1b03e690ab730b8c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Konrad Weihmann
7db4ec5372 cve-update-db-native: skip on empty cpe23Uri
Recently an entry in the NVD DB appeared that looks like this:
{'vulnerable': True, 'cpe_name': []}.
As no data besides the vulnerable flag is present, we would get
a KeyError exception on access.
Use the get method on the dictionary and return if no metadata is present.
Also quit if the length of the array after splitting is less than 6.
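
A minimal sketch of the described checks, wrapped in a hypothetical
parse_node() helper:

    # Hedged sketch: tolerate NVD match entries with no cpe23Uri metadata.
    def parse_node(node):
        cpe23 = node.get('cpe23Uri')        # .get() instead of node['cpe23Uri']
        if not cpe23:
            return None                     # no metadata, nothing to record
        fields = cpe23.split(':')
        if len(fields) < 6:
            return None                     # malformed CPE identifier
        return fields[3], fields[4], fields[5]   # vendor, product, version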

(From OE-Core rev: 650eaa56b83b5698ad7b95337607959e018ff6c0)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 00ce2796d97de2bc376b038d0ea7969088791d34)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Mingli Yu
6821ee2eed rpm: Upgrade to 4.16.1.3
Fixes some security vulnerabilities such as CVE-2021-3421 and
CVE-2021-20271.

Rebase 0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch
to avoid fuzz warnings.

(From OE-Core rev: 532698a83261e3ce53f03d5b063a6978a7592bd1)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
(cherry picked from commit 25fe972c4aa6ea640b1cdcd1624108f70e539586)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Richard Purdie
461c4d5821 runqemu: Ensure we cleanup snapshot files after image run
We need to cleanup snapshot files if we make a copy of them to ensure
the tmpfs doesn't run out of space. There is already NFS code needing
this so make it a generic code path.

(From OE-Core rev: 63f3c44f51cf36d3ac550ebb2292eb8e08d1b8d4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a3e0eec5a4785a0c4859455eb10b43aa832e606d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Changqing Li
de631334cc gdk-pixbuf: fix CVE-2021-20240
(From OE-Core rev: bd08e4d179979937604c196b4047f59c5499a960)

Signed-off-by: Changqing Li <changqing.li@windriver.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Changqing Li
a1970b56f9 cairo: fix CVE-2020-35492
(From OE-Core rev: 69d693c4800c43b62bc216d7c1763d17e19ed421)

Signed-off-by: Changqing Li <changqing.li@windriver.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Anders Wallin
5bfd931c2e lttng-tools: Fix path for test_python_logging
A " character was missing.

(From OE-Core rev: 73bc035151760ce6d07bb3541607544f71adae7e)

Signed-off-by: Anders Wallin <anders.wallin@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e1780ccfc89e9ff4e260276f28ffa0bb8e9b44e1)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:26 +01:00
Anders Wallin
043817eabd lttng-tools: Fix missing legacy test files
tests/regression/tools/save-load

(From OE-Core rev: 4d0e6ff408caeb6e57b5a347aa071d3afef98d4d)

Signed-off-by: Anders Wallin <anders.wallin@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2e892895e25d148b4c522e3a30bfb1bb4e9a9506)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Stefan Ghinea
855a1596f5 wpa-supplicant: fix CVE-2021-30004
In wpa_supplicant and hostapd 2.9, forging attacks may occur because
AlgorithmIdentifier parameters are mishandled in tls/pkcs1.c and
tls/x509v3.c.

References:
https://nvd.nist.gov/vuln/detail/CVE-2021-30004

Upstream patches:
https://w1.fi/cgit/hostap/commit/?id=a0541334a6394f8237a4393b7372693cd7e96f15

(From OE-Core rev: decf95ad84a38b86e4e9f86a78f76535f4f22d4f)

Signed-off-by: Stefan Ghinea <stefan.ghinea@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b32b671bf430b36a5547f8d822dbb760d6be47f7)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
hongxu
a2bf564b5f deb: apply postinstall on sdk
If postinstalls are not applied, some nativesdk commands cannot be found
in the SDK because update-alternatives in the postinst is not executed,
chroot for example:

$ which chroot
/sbin/chroot
$ which chroot.coreutils
path-to-sdk/sysroots/x86_64-wrlinuxsdk-linux/usr/bin/chroot.coreutils

After applying the fix:
$ which chroot
path-to-sdk/sysroots/x86_64-wrlinuxsdk-linux/usr/bin/chroot
$ which chroot.coreutils
path-to-sdk/sysroots/x86_64-wrlinuxsdk-linux/usr/bin/chroot.coreutils

(From OE-Core rev: 07aaa526c60c6d545ca856fc3d51606b669f641c)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2a9bf19502766baa4087456649d5471483d04f6a)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Douglas Royds
c20e75d07d externalsrc: Detect code changes in submodules
Further to 50ff9afb39, only detect code changes in submodules that are
subdirectories of the EXTERNALSRC directory.

The (undocumented) git submodule--helper returns a path
for each submodule relative to the top of the repo.
Don't add submodules that are not within our source subtree.

[YOCTO #14333]
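
A minimal sketch of the filtering, with hypothetical names (the class
itself drives git via subprocess):

    import os

    # Hedged sketch: submodule--helper reports paths relative to the top
    # of the repo; keep only submodules that exist under EXTERNALSRC.
    def submodules_within(externalsrc, submodule_paths):
        keep = []
        for rel in submodule_paths:
            candidate = os.path.join(externalsrc, rel)
            if os.path.exists(candidate):
                keep.append(candidate)
        return keep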

(From OE-Core rev: d233735891872b73e66cb3ce9f73b9af4d32a186)

Signed-off-by: Douglas Royds <douglas.royds@taitradio.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1c18225d3ef94a41fc073ae87c163b68e6d46571)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Douglas Royds
6708092ea4 Revert "externalsrc: Detect code changes in submodules"
This reverts commit 4525310d49d115a37705f04ac5c03d639e5e8f8c.

Further to 50ff9afb39, only detect code changes in submodules that are
subdirectories of the EXTERNALSRC directory.

The (undocumented) git submodule--helper returns a path
for each submodule relative to the top of the repo.
Don't add submodules that are not within our EXTERNALSRC subtree.

If we unpack one git repo inside another, like this:

    SRC_URI = "git://${GIT_SERVER}/repo1;name=repo1;destsuffix=repo1 \
               git://${GIT_SERVER}/repo2;name=repo2;destsuffix=repo1/repo2 \
               "

Git status reports, for repo1:

    Untracked files:
      (use "git add <file>..." to include in what will be committed)
	repo2/

If we run `devtool modify` on this recipe, do_patch runs with:

    PATCHTOOL = "git"
    PATCH_COMMIT_FUNCTIONS = "1"

The `patch_task_postfunc` (patch.bbclass, line 82) runs a `git add .` on the
top-level repo1, leaving the checkout in an invalid state. The following git
warning does not appear in the log:

    $ git add .
    warning: adding embedded git repository: repo2
    hint: You've added another git repository inside your current repository.
    hint: Clones of the outer repository will not contain the contents of
    hint: the embedded repository and will not know how to obtain it.
    hint: If you meant to add a submodule, use:
    hint:
    hint: 	git submodule add <url> repo2
    hint:
    hint: If you added this path by mistake, you can remove it from the
    hint: index with:
    hint:
    hint: 	git rm --cached repo2
    hint:
    hint: See "git help submodule" for more information.

    $ git submodule status
    fatal: no submodule mapping found in .gitmodules for path 'repo2'

No further git submodule commands can be run on the checkout.

We could enhance the `patch_task_postfunc` to look for any embedded git
checkouts and add them as submodules, but this seems unnecessary complexity for
an obscure edge-case. Although the git repo is left in an invalid state with
respect to the submodules, it still serves the purpose required by devtool:
To take further commits, and generate patch files from them.

We are still able to run these commands to examine any submodules,
where git submodule--helper reports paths relative to the top of the checkout:

    $ git ls-files --stage | grep ^160000
    160000 5feee12d6e974dd8c0614cf5b593380b046439a5 0   repo2

    $ git submodule--helper list
    160000 5feee12d6e974dd8c0614cf5b593380b046439a5 0   repo2

When a recipe sets EXTERNALSRC to a subdirectory of the git checkout, we test
for the existence of the reported submodule paths within the EXTERNALSRC
directory.

The latest versions of git submodule--helper accept a path to a subdirectory and
correctly report no submodules within that subdirectory. Regrettably, we still
support git versions that don't accept a path to a subdirectory.

[YOCTO #14333]

(From OE-Core rev: 4d961d6b794b389f8a2d062d5e7c0ae1ddc49e36)

Signed-off-by: Douglas Royds <douglas.royds@taitradio.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2055718fdd19f925e236d67823017323bbd92a4b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Ulrich Ölmann
8921498bed arch-armv6m.inc: fix access rights
(From OE-Core rev: f07b527676d2dba05559a972b1db885db050471d)

Signed-off-by: Ulrich Ölmann <u.oelmann@pengutronix.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2f7ebe444c2a78ef149b8c5f0f005ab23f24a176)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Mingli Yu
9a3439012d libxshmfence: Build fixes for riscv32
__NR_futex is not defined by newer architectures, e.g. riscv32, as
they only have the 64-bit variant of time_t. Glibc defines the SYS_futex
interface based on __NR_futex; since this is used in applications,
such applications fail to build for these newer architectures.

Define a fallback to alias __NR_futex to __NR_futex_time64 to make
SYS_futex keep working.

Reference: https://git.openembedded.org/openembedded-core/commit/?id=7a218adf9990f5e18d0b6a33eb34091969f979c7

(From OE-Core rev: 45fedd892d2263ac14ceae16f1f9c5ed2b312ff7)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 81599bf32135187b34726d41e9f619d22ca1bdd1)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Mingli Yu
4577cbed9a packagegroup-core-tools-testapps.bb: Remove kexec for riscv32
kexec is not yet ported to riscv32.

(From OE-Core rev: 77f2d0be675f7cbb539ef65507bb946ad9b295c7)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f1e7da7737b3d6df27cc5af002fd1eb0c202d0b4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Mingli Yu
c55a25d272 packagegroup-core-tools-profile: Remove valgrind for riscv32
valgrind is not yet ported to riscv32.

(From OE-Core rev: aeb9a929ef34e61820916227358061e9b0ef9724)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit df70bc4c60838af1dd7e7f31aba43e8d190def77)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Jonas Höppner
1fa9d0e19b ltp: fix empty ltp-dev package
Currently the headers are not installed and the ltp-dev package is
empty.

This patch adds an include-install make target invocation to the do_install
step to install the headers into the sysroot, which results in a working
ltp-dev package.

(From OE-Core rev: c4419fb58b6ab5f4fbdcd65e5b6d2e7742c8d66f)

Signed-off-by: Jonas Höppner <jonas.hoeppner@garz-fricke.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f6943da4444cd71053650be0c9212bc25ac53137)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Ross Burton
efcded9727 glslang: strip whitespace in pkgconfig file
Whilst pkg-config is fine with .pc files containing leading whitespace,
pkgconf is less forgiving.

(From OE-Core rev: bece9af0991776926004fc12c4d6ec542bc9957c)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 14bfe5f15f78c1bc049868633fd6fa19feb5a70c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
wangmy
4c15652bbd go: upgrade 1.16.2 -> 1.16.3
This is a bugfix release in the 1.16 series [1]

[1] https://github.com/golang/go/issues?q=milestone%3AGo1.16.3+label%3ACherryPickApproved

(From OE-Core rev: b4c312c72c180c26691af83c0df43384e533dca5)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 84188e7b78aa40b168b526fa5d681a8a21d3b77c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Saul Wold
47a3f7cdd5 pango: re-enable ptest
The run-ptest script got accidentally dropped from the SRC_URI during
a past update and ptest patch.

(From OE-Core rev: 9786f7f41e034c60f61a7c0e47755d672353e07f)

Signed-off-by: Saul Wold <saul.wold@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 4479f810c1a3ab2badf4f9610c309bc0e23e2a5f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Gavin Li
3db9d8dbb7 kmod: do not symlink config.guess/config.sub during autoreconf
I was encountering the following race condition on poky:

- automake-native does do_install.
- automake-native does do_populate_sysroot. This hardlinks config.guess
  and config.sub into ${D}.
- kmod-native does do_configure. This runs `autoreconf`, which runs
  `automake --add-missing` (symlinks config.guess/config.sub from
  recipe-sysroot-native to build dir), then runs `gnu-configize` (copies
  _its own_ config.guess/config.sub _on top_ of the already existing
  ones). Since the destinations already had symlinks, the copy would
  overwrite config.guess/config.sub in recipe-sysroot-native, which
  would in turn overwrite the same in ${D} due to being hardlinked.
- automake-native does do_package. The outhash is thus calculated on the
  clobbered config.guess/config.sub files.

With hash equivalency enabled, the different outhash produced a
different unihash, which kept me from reusing sstate between my laptop
and my build server. This race condition would happen only on the build
server (BB_NUMBER_THREADS = 32) but never on my laptop
(BB_NUMBER_THREADS = 6).

I didn't see the --install and --symlink flags being used by any other
recipe, so I removed them, and that fixed the issue.

(From OE-Core rev: fd12e5872813a4750ef2603a357170dd3f0f44e1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 89d675efd633b495daa4a3a57420b9c309497035)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Mingli Yu
3c1f5940fb libtool: make sure autoheader run before automake
When automake is used to generate Makefile.in from Makefile.am, the
following race can occur:
 | configure.ac:45: error: required file 'config-h.in' not found

This is because the file config-h.in is being updated by autoheader at
the same time, so make automake run after autoheader to avoid the race.

(From OE-Core rev: 55372f0b2d8c57954a704a967178c75d19e0af89)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1fc0a4a98e65db7efba8bb5cb835101ea5dd865b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Kevin Hao
d881748096 Revert "inittab: Add getty launch on hvc0 for qemuppc64"
This reverts commit ed69ef2016.

The console entry has already been added to /etc/inittab based on
SERIAL_CONSOLES, so drop this redundant entry.

(From OE-Core rev: 5dbe969f4fdcf3005c0a69e97e8753819ab066a4)

Signed-off-by: Kevin Hao <kexin.hao@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 633f0c6b74e3caa2bae52ca60c61b811b7b2215d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Kevin Hao
2afb0cb4df sysvinit-inittab/start_getty: Check /sys for the tty device existence
The hvc tty driver doesn't populate a file like /proc/tty/driver/serial,
so the current implementation of start_getty doesn't work for the hvc
console. By checking /sys/class/tty/ for the tty device's existence,
we can support more console types and also simplify the code.
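
The check amounts to a simple existence test; an illustrative Python
equivalent (the script itself is shell):

    import os

    # Hedged illustration: a tty is usable if the kernel exposes it here.
    def tty_exists(dev):                    # e.g. dev = 'hvc0' or 'ttyS0'
        return os.path.exists(os.path.join('/sys/class/tty', dev))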

(From OE-Core rev: ab7a1f14191e882439715e82f1636d7713e1da03)

Signed-off-by: Kevin Hao <kexin.hao@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 670ceef0f6584ece5ce4176610255226a6148570)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Kevin Hao
f2c9dd561b modutils-initscripts: Bail out when no module is installed
Fix the following warning when booting with a core-image-minimal rootfs:
  depmod: can't change directory to 'lib/modules/5.10.25-yocto-standard': No such file or directory

(From OE-Core rev: 1aa5d8231a7e4ee2f19afbe12aa49fc19c854062)

Signed-off-by: Kevin Hao <kexin.hao@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c34650400182a1104a5fbe03e54f5cea69eb1900)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Anthony Bagwell
3b0bc8961e systemd: upgrade 247.4 -> 247.6
(From OE-Core rev: 7580c864a4afdf72b34c94c694e590f087bf5298)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 63fbf39b8aa3d94ca2db719d1a53190045dbb86d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Ross Burton
9e33e0a215 insane: clean up some more warning messages
(From OE-Core rev: db27cb7aabb875d9a48b3b1e82f7cf4e44d15c8b)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2abe18682192e7b38b9af5a5043906f2f069648f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Richard Purdie
ee6c3be627 sanity: Add error check for '%' in build path
It has been reported that '%' characters in build paths break builds with
python exceptions, probably due to confusion with python string escaping.
Whilst it is probably fixable, showing the user a human-readable error is
better, given it currently doesn't work.

[YOCTO #14282]
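
A minimal sketch of the check's shape, with a hypothetical function name;
OE-Core sanity checks return an error string on failure:

    # Hedged sketch: refuse '%' in key build paths up front.
    def check_no_percent(d):
        tmpdir = d.getVar('TMPDIR')
        if '%' in tmpdir:
            return "Error: TMPDIR contains a '%' character, which breaks the build; please use a different path.\n"
        return None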

(From OE-Core rev: 000c12eeca6f6145ba9203c91ec1e67e4b5d8b6f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 31a3cf78452270131a657be45e76569515cff7ef)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Mingli Yu
2ddb6be223 groff: not ship /usr/bin/grap2graph
grap2graph, which converts a GRAP diagram into a cropped image, fails
to run as below:
 $ grap2graph
 /usr/bin/grap2graph: line 89: convert: command not found
 /usr/bin/grap2graph: warning: falling back to old '-crop 0x0' trim method
 /usr/bin/grap2graph: line 104: convert: command not found
 /usr/bin/grap2graph: line 103: grap: command not found

Considering that we don't often need to convert a GRAP diagram into
a cropped image, and that the ImageMagick recipe which provides the
convert command is in the meta-oe layer, don't ship the related files
to avoid confusion about the above runtime error.

(From OE-Core rev: b096417b9635c5a790616d20f0490bc15b9d7c0f)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 251be7279a475ee18c0c53fe9795bb37bffc2b45)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Jon Mason
80257e06f5 oeqa/runtime: space needed
Messages are currently being printed as:
	Test requires dropbear, oropenssh-sshd to be installed
but should be
	Test requires dropbear, or openssh-sshd to be installed
Adding the space after the 'or' corrects this.

(From OE-Core rev: f85c993bc4535dc42b89e87050d43c018c100f58)

Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 51596e0f8cebe1607ab64ffb018d51e815c0ee4b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Yanfei Xu
7c25e4c53c parselogs: ignore floppy error on qemu-system-x86 at boot stage
We can disable the floppy drive in the BIOS on real hardware, but an empty
floppy drive is connected by default on qemu-system-x86. Linux usually
detects the device and modprobes the matching floppy.ko at the boot stage.
Since we don't specify a floppy device in the qemu boot arguments, errors
about floppy reads come out.

It is harmless and normal, so we can ignore this error message on
qemux86.

Seen if kernel-modules is included in the image, which pulls in the
relevant kernel module.

https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg01402.html
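
A minimal sketch of the mechanism; the exact ignored string is assumed
here, not quoted from the patch:

    # Hedged sketch: parselogs keeps per-machine lists of log lines to
    # ignore, and the fix extends the qemux86-specific list.
    ignore_errors = {
        'qemux86': [
            'floppy: error',                # harmless: qemu's default empty floppy
        ],
    }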

(From OE-Core rev: 0e449143839f8de338b4a18fb27e8380d80e9b2f)

Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3359f23ee9351c70997d5e0a17d17d1e47d59623)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
wangmy
f5b9a8206d go: update SRC_URI to use https protocol
(From OE-Core rev: 3659a2dd7fc246f1f9e8b474ed45de5d18fd558f)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2a1eb731ed3bcb049192550e362b771c3a9ea6eb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
wangmy
4fb8f152ee mesa: upgrade 21.0.1 -> 21.0.2
(From OE-Core rev: c94987889ccf82746221574a41d7d27464254467)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 58ad359da1b05820ea3dc4ae3f789ccb8991fc32)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Khem Raj
f5d3d43422 systemd: Fix build on mips/musl
(From OE-Core rev: 84f452be1f6a4d1de276553815899c79a1f2cf63)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b4a0d8799af0a3d1b685dd7200b545fdb2c79d64)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Alejandro Enedino Hernandez Samaniego
c135ee79f6 python3: Improve logging, syntax and update deprecated modules to create_manifest
The imp module has been deprecated by upstream python; drop its usage
(imp.get_tag) in favor of sys.implementation.cache_tag (illustrated below).

Avoid incorrectly picking up dependencies from the running script and
the multiprocessing module.

Improve logging behavior of the create_manifest task:
- Use indentation.
- Log to the temp directory.
- Use a proper debug flag.
- Standardize syntax.
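
An illustration of the replacement call, assuming CPython:

    import sys

    # imp.get_tag() returned the bytecode cache tag; the non-deprecated
    # spelling of the same value is:
    cache_tag = sys.implementation.cache_tag    # e.g. 'cpython-39'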

(From OE-Core rev: 003d73d74791e5d7dcdeb4f29fc7b05e35b345ea)

Signed-off-by: Alejandro Enedino Hernandez Samaniego <alejandro@enedino.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a3ac339f5b8549a050308ba94c4ef9093f10e303)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Alejandro Enedino Hernandez Samaniego
d2f044b182 python3: Upgrade 3.9.2 -> 3.9.4
- Rebased patch 0001-test_locale.py-correct-the-test-output-format
  Maintainer needs to sign CLA and resubmit
- configure now explicitly requires autoconf-archive to be present

(From OE-Core rev: 8c1473189f4439d2462130b3cface95dc251fe24)

Signed-off-by: Alejandro Enedino Hernandez Samaniego <alejandro@enedino.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 34cb8f2a2ed36ad929dca9055c96f2f843656b8f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Wes Lindauer
d652d25f2a oeqa/runtime/cases: Only disable/enable for current boot
Previously doing a stop/start worked, but using a disable/enable does
not work on a read-only rootfs. Add a --runtime flag to systemctl so
that systemd only modifies the current boot and does not attempt to
write to the filesystem.

This also keeps the test from making a permanent (one could argue
policy) change to the running system being tested. i.e. What if the
image being tested had intentionally disabled the timesyncd service in
preference to using chrony or ntpd? The test shouldn't assume that the
user wants the timesyncd service enabled.
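
A minimal sketch of the pattern inside an oeqa runtime test (class and
service names illustrative):

    from oeqa.runtime.case import OERuntimeTestCase

    # Hedged sketch: --runtime keeps the change in /run, so it works on a
    # read-only rootfs and does not outlive the current boot.
    class SystemdRuntimeDisable(OERuntimeTestCase):
        def test_disable_enable(self):
            self.target.run('systemctl --runtime disable systemd-timesyncd')
            self.target.run('systemctl --runtime enable systemd-timesyncd')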

(From OE-Core rev: 49a6632aa789fca8085a91b5b7c749aef3db4e0e)

Signed-off-by: Wes Lindauer <wesley.lindauer@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 43dd83b6a325589368c980a3f17cab90935aaeb0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Randy MacLeod
d390ebc18b oe-time-dd-test.sh: increase timeout to 15 sec
With the previous timeout of 5 seconds, there would be
builds such as:
   https://autobuilder.yocto.io/pub/non-release/20210417-13/
which produced 17 files of top output, with top running 454 times;
that's a bit too much data to analyze for each run. By
increasing the timeout, we'll find the worst problems
first, fix them, and then we can decrease the timeout if needed.

(From OE-Core rev: 4f9921db882ed06e0902d34ae06a0eabff4ba86e)

Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 92b29a09b4c442597d212337b785afb76129ac7c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Alexander Kanavin
692b2ffa92 scripts/oe-debuginfod: correct several issues
Particularly:
- nesting subprocess.run() inside subprocess.check_output() does not work at all
(see the sketch after this list). How was this tested?
- -R and -U options can be combined; no need to separate the invocations based
on packaging format
- both exception handlers are unnecessary; we can simply print the hint if
invocation did not succeed
- to run debuginfod from its own sysroot, '-c addto_recipe_sysroot' for elfutils-native
must be executed
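
A short illustration of the first point (command names are placeholders):

    import subprocess

    # Broken: subprocess.run() returns a CompletedProcess object, which
    # check_output() cannot take as a command to execute.
    #   subprocess.check_output(subprocess.run(['some-tool', '--flag']))
    # Working: a single call that runs the command and captures its output.
    output = subprocess.check_output(['some-tool', '--flag'], text=True)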

(From OE-Core rev: 77deac8501990ac8071eb11d4bec6aec4be948b7)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9e57bf636ec63e74d56f1ac48b5a27c5b80f1877)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Khem Raj
1a3ffc03d5 ca-certificates: Fix openssl runtime cert dependencies
With commit dc778c70449ee5401b5a24ad18b22b88338c47c5, the dependency was
moved to openssl-bin, which in itself was a fine change, but the
dependency on openssl should have been kept as well. Dropping it
means the openssl binary won't be able to validate secure connections, as
the CApath files won't be installed, which in fact are required for
the openssl binaries to work. The following call, for example, fails:

$ openssl s_client -connect google.com:443

....
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 20 (unable to get local issuer certificate)
....

The local issuer certs are not found in the default location
/usr/lib/ssl-1.1/certs; this dir and its contents are installed by the openssl
package, therefore re-add the dependency on openssl.

(From OE-Core rev: 84afcdcb9d7ee24596bd3f8d808d30c9d558d918)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit eaf377315efc73d6ffe361372a873918b3bb3bf5)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Trevor Gamblin
b4bbdbd6dd nettle: upgrade 3.7.1 -> 3.7.2
Version 3.7.2 includes a fix for CVE-2021-20305.

(From OE-Core rev: 95f038986eb53c3e1ae1b5aac96e1f2b9a235e63)

Signed-off-by: Trevor Gamblin <trevor.gamblin@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 29f0ef2e32a9b55d8271fde240a4469070d57729)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:25 +01:00
Bruce Ashfield
ddcdddeadf linux-yocto/5.10: fix arm defconfig warnings
A recent fix to the kern-tools promoted some previously unseen
issues to warnings. This commit fixes them by tagging some BT
options as non-hardware so they won't generate warnings if they
don't appear in the final .config. These are sub BT options and
shouldn't warn when/if their controlling option is disabled by
a fragment.

    40a967b115f base: exclude some BT options as non-hardware

(From OE-Core rev: 70515be581fef634a96a72a892dcff3ec4f890c0)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fc7875ce3c68a253f8b8e5d8855c1814731b5a45)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Bruce Ashfield
04b23cf21e linux-yocto/5.4: fix arm defconfig warnings
A recent fix to the kern-tools promoted some previously unseen
issues to warnings. This commit fixes them by tagging some BT
options as non-hardware so they won't generate warnings if they
don't appear in the final .config. These are sub BT options and
shouldn't warn when/if their controlling option is disabled by
a fragment.

    d7fd0213b75 base: exclude some BT options as non-hardware

(From OE-Core rev: 32495969b2b37c2cd4f334339c4a57066da23873)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a86c8251905baf5bf4714f3db01cdfae02383839)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Bruce Ashfield
04aa3aa0a9 linux-yocto/5.4: update to v5.4.112
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    8f55ad4daf00 Linux 5.4.112
    ea42fd91d304 Revert "cifs: Set CIFS_MOUNT_USE_PREFIX_PATH flag on setting cifs_sb->prepath."
    7ee5bde3164c net: ieee802154: stop dump llsec params for monitors
    b4042ecc12cb net: ieee802154: forbid monitor for del llsec seclevel
    e82f8b7713ab net: ieee802154: forbid monitor for set llsec params
    948a2817f71d net: ieee802154: fix nl802154 del llsec devkey
    b3a105e15cd6 net: ieee802154: fix nl802154 add llsec key
    4097afd93df7 net: ieee802154: fix nl802154 del llsec dev
    7d32fc7964d6 net: ieee802154: fix nl802154 del llsec key
    8f4c815c74f4 net: ieee802154: nl-mac: fix check on panid
    38ea2b3ed00f net: mac802154: Fix general protection fault
    6e7098f56c83 drivers: net: fix memory leak in peak_usb_create_dev
    32e2f9a708e1 drivers: net: fix memory leak in atusb_probe
    0a790ad1358b net: tun: set tun->dev->addr_len during TUNSETLINK processing
    ed13df88c6d5 cfg80211: remove WARN_ON() in cfg80211_sme_connect
    628ac886dfba net: sched: bump refcount for new action in ACT replace mode
    3dbafee8426f dt-bindings: net: ethernet-controller: fix typo in NVMEM
    f4c5968da773 clk: socfpga: fix iomem pointer cast on 64-bit
    35ba6d9240ee RAS/CEC: Correct ce_add_elem()'s returned values
    f666ad4f8d87 RDMA/addr: Be strict with gid size
    44d03319fe77 RDMA/cxgb4: check for ipv6 address properly while destroying listener
    3ca5345db92c net/mlx5: Fix PBMC register mapping
    798d94a274fb net/mlx5: Fix placement of log_max_flow_counter
    9716aac17419 net: hns3: clear VF down state bit before request link status
    9dd7092d1a96 openvswitch: fix send of uninitialized stack memory in ct limit reply
    731abf396e37 net: openvswitch: conntrack: simplify the return expression of ovs_ct_limit_get_default_limit()
    d0aab59f0993 perf inject: Fix repipe usage
    d3343a35d108 s390/cpcmd: fix inline assembly register clobbering
    c88fa8d4f994 workqueue: Move the position of debug_work_activate() in __queue_work()
    14060454cdb9 clk: fix invalid usage of list cursor in unregister
    bedda47d5dce clk: fix invalid usage of list cursor in register
    b3717885865c net: macb: restore cmp registers on resume path
    c61fe6b7e21f scsi: ufs: core: Fix wrong Task Tag used in task management request UPIUs
    81fddc7be649 scsi: ufs: core: Fix task management request completion timeout
    f6abec1a3172 scsi: ufs: Use blk_{get,put}_request() to allocate and free TMFs
    a8d2d45c70c7 scsi: ufs: Avoid busy-waiting by eliminating tag conflicts
    c5efc9d26c84 scsi: ufs: Fix irq return code
    537a2449cc6f net: udp: Add support for getsockopt(..., ..., UDP_GRO, ..., ...);
    de8c5962bdae drm/msm: Set drvdata to NULL when msm_drm_init() fails
    e22ce1d21b42 i40e: Fix display statistics for veb_tc
    7c0d2372298f soc/fsl: qbman: fix conflicting alignment attributes
    c178e8a19937 net/rds: Fix a use after free in rds_message_map_pages
    73f88cc2bf5c net/mlx5: Don't request more than supported EQs
    029416e14be2 net/mlx5e: Fix ethtool indication of connector type
    1f3010fc3fe6 ASoC: sunxi: sun4i-codec: fill ASoC card owner
    db4600aa938c net: phy: broadcom: Only advertise EEE for supported modes
    6aa7d2621b19 nfp: flower: ignore duplicate merge hints from FW
    bbbee59f4f32 net/ncsi: Avoid channel_monitor hrtimer deadlock
    c66b672a231c ARM: dts: imx6: pbab01: Set vmmc supply for both SD interfaces
    c991ca6a2c79 net:tipc: Fix a double free in tipc_sk_mcast_rcv
    200c8453287f cxgb4: avoid collecting SGE_QBASE regs during traffic
    e9bdd3e45f0e gianfar: Handle error code at MAC address change
    516c436ff5d6 can: bcm/raw: fix msg_namelen values depending on CAN_REQUIRED_SIZE
    ca443546f8d4 arm64: dts: imx8mm/q: Fix pad control of SD1_DATA0
    840a181729ac sch_red: fix off-by-one checks in red_check_params()
    accb27006595 amd-xgbe: Update DMA coherency values
    e472f6814ceb hostfs: fix memory handling in follow_link()
    613f35568a5d hostfs: Use kasprintf() instead of fixed buffer formatting
    fec47d458add i40e: Fix kernel oops when i40e driver removes VF's
    c0aacaa0a8f2 i40e: Added Asym_Pause to supported link modes
    f819977ad42c xfrm: Fix NULL pointer dereference on policy lookup
    bac7e764e5d5 ASoC: wm8960: Fix wrong bclk and lrclk with pll enabled for some chips
    b32969aaed1c ASoC: SOF: Intel: HDA: fix core status verification
    99b4e9af8f00 ASoC: SOF: Intel: hda: remove unnecessary parentheses
    540ddeed5c51 esp: delete NETIF_F_SCTP_CRC bit from features for esp offload
    a128e07b472b net: xfrm: Localize sequence counter per network namespace
    34659399e713 regulator: bd9571mwv: Fix AVS and DVFS voltage range
    d78e99dd4960 xfrm: interface: fix ipv4 pmtu check to honor ip header df
    7977d5fe3d5b net: dsa: lantiq_gswip: Configure all remaining GSWIP_MII_CFG bits
    249908ed36a8 net: dsa: lantiq_gswip: Don't use PHY auto polling
    910e785ba8de virtio_net: Add XDP meta data support
    0534f1f1bc76 i2c: turn recovery error on init to debug
    cafced041915 usbip: synchronize event handler with sysfs code paths
    37168011d427 usbip: vudc synchronize sysfs code paths
    06fedcc6870e usbip: stub-dev synchronize sysfs code paths
    6a435364b608 usbip: add sysfs_lock to synchronize sysfs code paths
    b02bded94b91 net: let skb_orphan_partial wake-up waiters.
    fd8a95d56050 net-ipv6: bugfix - raw & sctp - switch to ipv6_can_nonlocal_bind()
    b5e7653ffdd1 net: hsr: Reset MAC header for Tx path
    a9311be5f617 mac80211: fix TXQ AC confusion
    5a4f39f19e6f net: sched: sch_teql: fix null-pointer dereference
    2f5edf14f62a i40e: Fix sparse error: 'vsi->netdev' could be null
    b31d91e9e8c8 i40e: Fix sparse warning: missing error code 'err'
    599200ad44e7 net: ensure mac header is set in virtio_net_hdr_to_skb()
    158a9b815c54 bpf, sockmap: Fix sk->prot unhash op reset
    0242251d6a97 ethernet/netronome/nfp: Fix a use after free in nfp_bpf_ctrl_msg_rx
    4a2933c88399 net: hso: fix null-ptr-deref during tty device unregistration
    ef2ccf84071f ice: Cleanup fltr list in case of allocation issues
    0df579b3de8c ice: Fix for dereference of NULL pointer
    1aecc5781101 ice: Increase control queue timeout
    9de1caa1103f batman-adv: initialize "struct batadv_tvlv_tt_vlan_data"->reserved field
    79407ae3475e ARM: dts: turris-omnia: configure LED[2]/INTn pin as interrupt pin
    9dfd74a8c015 parisc: avoid a warning on u8 cast for cmpxchg on u8 pointers
    957d0308aa36 parisc: parisc-agp requires SBA IOMMU driver
    507c2009dc4c fs: direct-io: fix missing sdio->boundary
    f495bedb001b ocfs2: fix deadlock between setattr and dio_end_io_write
    52999a66c0b3 nds32: flush_dcache_page: use page_mapping_file to avoid races with swapoff
    75fd54ea1b60 ia64: fix user_stack_pointer() for ptrace()
    7a92396bf8dd gcov: re-fix clang-11+ support
    c2b3cf2c70d6 drm/i915: Fix invalid access to ACPI _DSM objects
    0e8f850e26b2 net: dsa: lantiq_gswip: Let GSWIP automatically set the xMII clock
    6649b5eda131 net: ipv6: check for validity before dereferencing cfg->fc_nlinfo.nlh
    a09acbb53934 xen/evtchn: Change irq_info lock to raw_spinlock_t
    aa0cff2e0751 nfc: Avoid endless loops caused by repeated llcp_sock_connect()
    404daa4d62a3 nfc: fix memory leak in llcp_sock_connect()
    41bc58ba0945 nfc: fix refcount leak in llcp_sock_connect()
    c89903c9eff2 nfc: fix refcount leak in llcp_sock_bind()
    12289d9840d6 ASoC: intel: atom: Stop advertising non working S24LE support
    c99780f782aa ALSA: hda/realtek: Fix speaker amp setup on Acer Aspire E1
    da8f3cc5771e ALSA: aloop: Fix initialization of controls
    8732c2df9d15 counter: stm32-timer-cnt: fix ceiling miss-alignment with reload register

(From OE-Core rev: 6277683a2b42e981d5e9dc566c8c48db72038c74)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bd41c1b7170b4d27bebac0a4387cad070c41e03d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Bruce Ashfield
b354987235 linux-yocto-rt/5.10: update to -rt34
Integrating the following commit(s) to linux-yocto/5.10:

    ac98a75ef2bc net/xfrm: fixup 5.10.30 -stable merge

(From OE-Core rev: a55ddf4892af79b900a1f5adf0a95f39023a878d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2e7dd8afd0dbe7803170006297309b6699b98f34)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Bruce Ashfield
98af459189 linux-yocto/5.10: update to v5.10.30
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    1e798745fa8e Linux 5.10.30
    b451aed56348 Revert "net: sched: bump refcount for new action in ACT replace mode"
    a22115c3492f net: ieee802154: stop dump llsec params for monitors
    f872fb3feadd net: ieee802154: forbid monitor for del llsec seclevel
    a933bcbb1f7f net: ieee802154: forbid monitor for set llsec params
    0238c7b47f77 net: ieee802154: fix nl802154 del llsec devkey
    d06a96e72803 net: ieee802154: fix nl802154 add llsec key
    399f38c420ee net: ieee802154: fix nl802154 del llsec dev
    07699fcce052 net: ieee802154: fix nl802154 del llsec key
    8bfb45fa131d net: ieee802154: nl-mac: fix check on panid
    38731bbcd9f0 net: mac802154: Fix general protection fault
    9f51a42d81f6 drivers: net: fix memory leak in peak_usb_create_dev
    160ac0d55d52 drivers: net: fix memory leak in atusb_probe
    4d9117b7404a net: tun: set tun->dev->addr_len during TUNSETLINK processing
    26ab092615f5 cfg80211: remove WARN_ON() in cfg80211_sme_connect
    138a6e1dc35e gpiolib: Read "gpio-line-names" from a firmware node
    300368c59cf0 net: sched: bump refcount for new action in ACT replace mode
    982dd14fba0f dt-bindings: net: ethernet-controller: fix typo in NVMEM
    c65a000a236e lockdep: Address clang -Wformat warning printing for %hd
    4c4aa344edf4 clk: socfpga: fix iomem pointer cast on 64-bit
    674ddb52f94b RAS/CEC: Correct ce_add_elem()'s returned values
    253acf2e983b vdpa/mlx5: Fix wrong use of bit numbers
    0ddb34c2ccce vdpa/mlx5: should exclude header length and fcs from mtu
    5700c3d4abb2 RDMA/addr: Be strict with gid size
    e53ff6e59144 i40e: Fix parameters in aq_get_phy_register()
    999852207464 drm/vc4: crtc: Reduce PV fifo threshold on hvs4
    d8a0861e269d RDMA/qedr: Fix kernel panic when trying to access recv_cq
    3fa7ae3f3754 perf report: Fix wrong LBR block sorting
    7f40e9332898 RDMA/cxgb4: check for ipv6 address properly while destroying listener
    03ad6a2521a0 net/mlx5: Fix PBMC register mapping
    1312f11eb33d net/mlx5: Fix PPLM register mapping
    f92faf0bdd25 net/mlx5: Fix placement of log_max_flow_counter
    f780a0808827 net: hns3: clear VF down state bit before request link status
    f473789db536 tipc: increment the tmp aead refcnt before attaching it
    3292c4fc9ce2 can: mcp251x: fix support for half duplex SPI host controllers
    a96f1ed70927 iwlwifi: fix 11ax disabled bit in the regulatory capability flags
    363d610a9652 i2c: designware: Adjust bus_freq_hz when refuse high speed mode set
    cc5418973cc9 openvswitch: fix send of uninitialized stack memory in ct limit reply
    3e288c3a7d55 net: openvswitch: conntrack: simplify the return expression of ovs_ct_limit_get_default_limit()
    3b70c6f26364 perf inject: Fix repipe usage
    d9dc1b406cb9 s390/cpcmd: fix inline assembly register clobbering
    7943f749f0d2 workqueue: Move the position of debug_work_activate() in __queue_work()
    b3f29ed5dd4b clk: fix invalid usage of list cursor in unregister
    2307baac56af clk: fix invalid usage of list cursor in register
    d9c55b2d3368 net: macb: restore cmp registers on resume path
    af36da5becfb net: cls_api: Fix uninitialised struct field bo->unlocked_driver_cb
    ffd5f1e87c15 scsi: ufs: core: Fix wrong Task Tag used in task management request UPIUs
    ff9231ddfec8 scsi: ufs: core: Fix task management request completion timeout
    71ee255d0698 mptcp: forbit mcast-related sockopt on MPTCP sockets
    24bbfe89b1c7 net: udp: Add support for getsockopt(..., ..., UDP_GRO, ..., ...);
    a08d5d3bec53 drm/msm: Set drvdata to NULL when msm_drm_init() fails
    7290bf419894 RDMA/rtrs-clt: Close rtrs client conn before destroying rtrs clt session files
    49cfa2b20193 i40e: Fix display statistics for veb_tc
    e8c96b57a781 soc/fsl: qbman: fix conflicting alignment attributes
    553290002aa8 xdp: fix xdp_return_frame() kernel BUG throw for page_pool memory model
    4cfae7b23889 net/rds: Fix a use after free in rds_message_map_pages
    05bbe9d85a4c net/mlx5: Don't request more than supported EQs
    86530effd18f net/mlx5e: Fix ethtool indication of connector type
    bde64eac2379 net/mlx5e: Fix mapping of ct_label zero
    d65b66ca3334 ASoC: sunxi: sun4i-codec: fill ASoC card owner
    dcdf0876b040 I2C: JZ4780: Fix bug for Ingenic X1000.
    f295dfc831bc net: phy: broadcom: Only advertise EEE for supported modes
    7a896e189361 nfp: flower: ignore duplicate merge hints from FW
    6af631d1caf2 net: qrtr: Fix memory leak on qrtr_tx_wait failure
    dfe7805e6aa6 net/ncsi: Avoid channel_monitor hrtimer deadlock
    ae4a8d10ac8b ARM: dts: imx6: pbab01: Set vmmc supply for both SD interfaces
    e5e5ecc9d9fd net:tipc: Fix a double free in tipc_sk_mcast_rcv
    f273e3726e14 cxgb4: avoid collecting SGE_QBASE regs during traffic
    63a64c366ce0 net: dsa: Fix type was not set for devlink port
    ed613d96842e gianfar: Handle error code at MAC address change
    1eb5f4e00755 ethernet: myri10ge: Fix a use after free in myri10ge_sw_tso
    759b44d247c6 mlxsw: spectrum: Fix ECN marking in tunnel decapsulation
    d02b68a92905 can: isotp: fix msg_namelen values depending on CAN_REQUIRED_SIZE
    1d3837ca7335 can: bcm/raw: fix msg_namelen values depending on CAN_REQUIRED_SIZE
    58f8f1074039 xfrm: Provide private skb extensions for segmented and hw offloaded ESP packets
    bc0b89a9a28f arm64: dts: imx8mm/q: Fix pad control of SD1_DATA0
    d9670f5e77e5 drivers/net/wan/hdlc_fr: Fix a double free in pvc_xmit
    d38bce5adcd9 sch_red: fix off-by-one checks in red_check_params()
    985c9bb1b594 geneve: do not modify the shared tunnel info when PMTU triggers an ICMP reply
    f3bc1885746f vxlan: do not modify the shared tunnel info when PMTU triggers an ICMP reply
    f33f79703a4e amd-xgbe: Update DMA coherency values
    e5a3449ce16a hostfs: fix memory handling in follow_link()
    3cc4db1213a4 i40e: Fix kernel oops when i40e driver removes VF's
    9856607c9c29 i40e: Added Asym_Pause to supported link modes
    d4d4c6a4ca7c virtchnl: Fix layout of RSS structures
    95d58bf5ed43 xfrm: Fix NULL pointer dereference on policy lookup
    48a443026bb6 ASoC: wm8960: Fix wrong bclk and lrclk with pll enabled for some chips
    f6db9dbfa6b6 ASoC: SOF: Intel: HDA: fix core status verification
    ef4ddd1d6d93 esp: delete NETIF_F_SCTP_CRC bit from features for esp offload
    0224432a8fc1 net: xfrm: Localize sequence counter per network namespace
    1e6a3b41cf2a ARM: OMAP4: PM: update ROM return address for OSWR and OFF
    042b2cad81de ARM: OMAP4: Fix PMIC voltage domains for bionic
    1f51cb88e788 regulator: bd9571mwv: Fix AVS and DVFS voltage range
    b267688ce007 remoteproc: qcom: pil_info: avoid 64-bit division
    c7a175a24b0e xfrm: Use actual socket sk instead of skb socket for xfrm_output_resume
    3b74ce529ece xfrm: interface: fix ipv4 pmtu check to honor ip header df
    2d62d6980c2b ice: Recognize 860 as iSCSI port in CEE mode
    fd92e7aacc16 ice: Refactor DCB related variables out of the ice_port_info struct
    4a78ae127803 net: sched: fix err handler in tcf_action_init()
    3c7d3d188ca7 KVM: x86/mmu: preserve pending TLB flush across calls to kvm_tdp_mmu_zap_sp
    25fc773b21ce KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
    be2c527b5d39 KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
    0aa4dd9e5132 KVM: x86/mmu: Ensure TLBs are flushed when yielding during GFN range zap
    3c7a18440638 KVM: x86/mmu: Yield in TDU MMU iter even if no SPTES changed
    85f4ff2b06af KVM: x86/mmu: Ensure forward progress when yielding in TDP MMU iter
    1cd17c5c9b8a KVM: x86/mmu: Rename goal_gfn to next_last_level_gfn
    b4a3a0d27924 KVM: x86/mmu: Merge flush and non-flush tdp_mmu_iter_cond_resched
    8f90432d7f59 KVM: x86/mmu: change TDP MMU yield function returns to match cond_resched
    5ea9e6038d29 i2c: turn recovery error on init to debug
    efa869b68be9 percpu: make pcpu_nr_empty_pop_pages per chunk type
    c441949184a9 scsi: target: iscsi: Fix zero tag inside a trace event
    d8e7fa8509d7 scsi: pm80xx: Fix chip initialization failure
    0c47d8a55f7f driver core: Fix locking bug in deferred_probe_timeout_work_func()
    f06cb4641b15 usbip: synchronize event handler with sysfs code paths
    28dc9237fe83 usbip: vudc synchronize sysfs code paths
    513765b186c9 usbip: stub-dev synchronize sysfs code paths
    68be610c19a5 usbip: add sysfs_lock to synchronize sysfs code paths
    126ce97d39cf thunderbolt: Fix off by one in tb_port_find_retimer()
    256ece954961 thunderbolt: Fix a leak in tb_retimer_add()
    b830650c1a0c net: let skb_orphan_partial wake-up waiters.
    5d9216b85100 net-ipv6: bugfix - raw & sctp - switch to ipv6_can_nonlocal_bind()
    b82816d77875 net: hsr: Reset MAC header for Tx path
    9b9c910ccc19 mac80211: fix TXQ AC confusion
    cc357c29358d mac80211: fix time-is-after bug in mlme
    cc1a702e6ec0 cfg80211: check S1G beacon compat element length
    fea52345f422 nl80211: fix potential leak of ACL params
    42e4450e3790 nl80211: fix beacon head validation
    81692c6add7e net: sched: fix action overwrite reference counting
    cdcf3829f418 net: sched: sch_teql: fix null-pointer dereference
    422eda625516 vdpa/mlx5: Fix suspend/resume index restoration
    89e406e95278 i40e: Fix sparse errors in i40e_txrx.c
    12e1438a0946 i40e: Fix sparse error: uninitialized symbol 'ring'
    2472ba1c46b4 i40e: Fix sparse error: 'vsi->netdev' could be null
    792387118204 i40e: Fix sparse warning: missing error code 'err'
    f0b4c9acf5fe net: ensure mac header is set in virtio_net_hdr_to_skb()
    72c5de25ba83 bpf, sockmap: Fix incorrect fwd_alloc accounting
    00c01de1a994 bpf, sockmap: Fix sk->prot unhash op reset
    d921baabd964 bpf: Refcount task stack in bpf_get_task_stack
    caef7806141a libbpf: Only create rx and tx XDP rings when necessary
    4cc9177b099e libbpf: Restore umem state after socket create failure
    5aa7df172207 libbpf: Ensure umem pointer is non-NULL before dereferencing
    b52e88638f71 ethernet/netronome/nfp: Fix a use after free in nfp_bpf_ctrl_msg_rx
    d86046a77535 bpf: link: Refuse non-O_RDWR flags in BPF_OBJ_GET
    b7004ecafade bpf: Enforce that struct_ops programs be GPL-only
    3015db3de715 libbpf: Fix bail out from 'ringbuf_process_ring()' on error
    dc195928d7e4 net: hso: fix null-ptr-deref during tty device unregistration
    c2743e0a631c ice: fix memory leak of aRFS after resuming from suspend
    6bd4e822925d iwlwifi: pcie: properly set LTR workarounds on 22000 devices
    e5386e87f8aa ice: Cleanup fltr list in case of allocation issues
    9d1c342c5018 ice: Use port number instead of PF ID for WoL
    b69686110291 ice: Fix for dereference of NULL pointer
    4d73a6143d40 ice: remove DCBNL_DEVRESET bit from PF state
    286830a8469c ice: fix memory allocation call
    4686a26e9536 ice: prevent ice_open and ice_stop during reset
    ef7ed8c77d1c ice: Increase control queue timeout
    6590b7bfbc2b ice: Continue probe on link/PHY errors
    9a7bc0c40367 batman-adv: initialize "struct batadv_tvlv_tt_vlan_data"->reserved field
    d1173effc574 ARM: dts: turris-omnia: configure LED[2]/INTn pin as interrupt pin
    4941889535f3 parisc: avoid a warning on u8 cast for cmpxchg on u8 pointers
    597121792eb4 parisc: parisc-agp requires SBA IOMMU driver
    9b54dad28def of: property: fw_devlink: do not link ".*,nr-gpios"
    009c5665278b ethtool: fix incorrect datatype in set_eee ops
    3a675c1b507f fs: direct-io: fix missing sdio->boundary
    b1a5122554ae ocfs2: fix deadlock between setattr and dio_end_io_write
    4fabcf229477 nds32: flush_dcache_page: use page_mapping_file to avoid races with swapoff
    7d9da660affc ia64: fix user_stack_pointer() for ptrace()
    8e5bfafedf6d gcov: re-fix clang-11+ support
    43908139368e LOOKUP_MOUNTPOINT: we are cleaning "jumped" flag too late
    de427b662bfb IB/hfi1: Fix probe time panic when AIP is enabled with a buggy BIOS
    856f60e3e800 ACPI: processor: Fix build when CONFIG_ACPI_PROCESSOR=m
    8599a39adca8 drm/i915: Fix invalid access to ACPI _DSM objects
    bf991df9535e net: dsa: lantiq_gswip: Configure all remaining GSWIP_MII_CFG bits
    c4ae852ec940 net: dsa: lantiq_gswip: Don't use PHY auto polling
    ba39959bfebd net: dsa: lantiq_gswip: Let GSWIP automatically set the xMII clock
    40375bc3d0f9 net: ipv6: check for validity before dereferencing cfg->fc_nlinfo.nlh
    005c5afa9f85 xen/evtchn: Change irq_info lock to raw_spinlock_t
    a28124e8ad03 selinux: fix race between old and new sidtab
    fd75d73aa214 selinux: fix cond_list corruption when changing booleans
    4f29b08e238f selinux: make nslot handling in avtab more robust
    a12a2fa9a129 nfc: Avoid endless loops caused by repeated llcp_sock_connect()
    568ac94df580 nfc: fix memory leak in llcp_sock_connect()
    99b596199e84 nfc: fix refcount leak in llcp_sock_connect()
    6fb003e5ae18 nfc: fix refcount leak in llcp_sock_bind()
    117557711974 ASoC: intel: atom: Stop advertising non working S24LE support
    c4a6fb0e8389 ALSA: hda/conexant: Apply quirk for another HP ZBook G5 model
    6c9119de7ffe ALSA: hda/realtek: Fix speaker amp setup on Acer Aspire E1
    6efe4c1f4d17 ALSA: aloop: Fix initialization of controls
    4c933ff31f21 xfrm/compat: Cleanup WARN()s that can be user-triggered

(From OE-Core rev: ccbfd33bea75460cc97ffaeea78972ca3b6bd395)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit aec9a6d709f14decd65013434f13a26c57e9196f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Bruce Ashfield
e5f3493efd linux-yocto/5.4: update to v5.4.111
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    a49e5ea5e045 Linux 5.4.111
    45f540622d5b init/Kconfig: make COMPILE_TEST depend on HAS_IOMEM
    43dd03f08819 init/Kconfig: make COMPILE_TEST depend on !S390
    f5eb7e12a75d nvme-mpath: replace direct_make_request with generic_make_request
    6cce30548058 bpf, x86: Validate computation of branch displacements for x86-32
    a0b3927a07be bpf, x86: Validate computation of branch displacements for x86-64
    20c60bbc1c54 cifs: Silently ignore unknown oplock break handle
    754c82a6bf48 cifs: revalidate mapping when we open files for SMB1 POSIX
    e5991b4fcedb ia64: fix format strings for err_inject
    3e9292b39862 ia64: mca: allocate early mca with GFP_ATOMIC
    9b872bac1923 scsi: target: pscsi: Clean up after failure in pscsi_map_sg()
    e2db0e66139a x86/build: Turn off -fcf-protection for realmode targets
    0465098898ef platform/x86: thinkpad_acpi: Allow the FnLock LED to change state
    5a8c30e8acad netfilter: conntrack: Fix gre tunneling over ipv6
    e84a795b8a0b drm/msm: Ratelimit invalid-fence message
    daf5aaa8e6e0 drm/msm/adreno: a5xx_power: Don't apply A540 lm_setup to other GPUs
    6abe3dad0afe mac80211: choose first enabled channel for monitor
    37b51460b25a mISDN: fix crash in fritzpci
    901d39f7b2ce net: pxa168_eth: Fix a potential data race in pxa168_eth_remove
    dc7c4d30d6e0 net/mlx5e: Enforce minimum value check for ICOSQ size
    b0e2b3271236 bpf, x86: Use kvmalloc_array instead kmalloc_array in bpf_jit_comp
    e5868baa1e3c platform/x86: intel-hid: Support Lenovo ThinkPad X1 Tablet Gen 2
    422c68101110 bus: ti-sysc: Fix warning on unbind if reset is not deasserted
    bec7103b04a9 ARM: dts: am33xx: add aliases for mmc interfaces
    59c8e3329268 Linux 5.4.110
    cde4e338c2b2 drivers: video: fbcon: fix NULL dereference in fbcon_cursor()
    0ca13611d33f staging: rtl8192e: Change state information from u16 to u8
    f9974f189c67 staging: rtl8192e: Fix incorrect source in memcpy()
    fd5ce87aee48 usb: dwc2: Prevent core suspend when port connection flag is 0
    85e1752ae0ed usb: dwc2: Fix HPRT0.PrtSusp bit setting for HiKey 960 board.
    26d2284a0580 usb: gadget: udc: amd5536udc_pci fix null-ptr-dereference
    25c13ca8302f USB: cdc-acm: fix use-after-free after probe failure
    b5aedddb621e USB: cdc-acm: fix double free on probe failure
    7220bba3066e USB: cdc-acm: downgrade message to debug
    62da51d0e7b7 USB: cdc-acm: untangle a circular dependency between callback and softint
    7443350af8cb cdc-acm: fix BREAK rx code path adding necessary calls
    58cace45f84b usb: xhci-mtk: fix broken streams issue on 0.96 xHCI
    a22e35f7b4fb usb: musb: Fix suspend with devices connected for a64
    e94dec2765b5 USB: quirks: ignore remote wake-up on Fibocom L850-GL LTE modem
    2ecf5803557b usbip: vhci_hcd fix shift out-of-bounds in vhci_hub_control()
    5ecfad1efbc3 firewire: nosy: Fix a use-after-free bug in nosy_ioctl()
    58073dc536a6 extcon: Fix error handling in extcon_dev_register
    e3a3d5005e63 extcon: Add stubs for extcon_register_notifier_all() functions
    67ff75be1ab1 pinctrl: rockchip: fix restore error in resume
    c92e8a8ecb9d vfio/nvlink: Add missing SPAPR_TCE_IOMMU depends
    7f93d47677dd reiserfs: update reiserfs_xattrs_initialized() condition
    4dc52ce56d63 drm/amdgpu: check alignment on CPU page for bo map
    f9b3b70fd468 drm/amdgpu: fix offset calculation in amdgpu_vm_bo_clear_mappings()
    00bd9c22409e mm: fix race by making init_zero_pfn() early_initcall
    558ab52776c0 tracing: Fix stack trace event size
    07b19a118d2f PM: runtime: Fix ordering in pm_runtime_get_suppliers()
    72a667681cc4 PM: runtime: Fix race getting/putting suppliers at probe
    b6e7dbf0ed9c xtensa: move coprocessor_flush to the .text section
    c3715f06f9ad ALSA: hda/realtek: call alc_update_headset_mode() in hp_automute_hook
    09a08fd89996 ALSA: hda/realtek: fix a determine_headset_type issue for a Dell AIO
    3acbf473a885 ALSA: hda: Add missing sanity checks in PM prepare/complete callbacks
    65f92e40cc6d ALSA: hda: Re-add dropped snd_poewr_change_state() calls
    05dd1a4223c5 ALSA: usb-audio: Apply sample rate quirk to Logitech Connect
    42c83e3bca43 bpf: Remove MTU check in __bpf_skb_max_len
    aca623d79cb7 net: wan/lmc: unregister device when no matching device is found
    f22854911523 appletalk: Fix skb allocation size in loopback case
    4ff476b88135 net: ethernet: aquantia: Handle error cleanup of start on open
    ee898d95f446 ath10k: hold RCU lock when calling ieee80211_find_sta_by_ifaddr()
    0b8dfb61f29a brcmfmac: clear EAP/association status bits on linkdown events
    2d0e594c1316 can: tcan4x5x: fix max register value
    4ac1feff6ea6 net: introduce CAN specific pointer in the struct net_device
    23394679aa56 can: dev: move driver related infrastructure into separate subdir
    7ca4feb37e9e flow_dissector: fix TTL and TOS dissection on IPv4 fragments
    ee5055593d0e net: mvpp2: fix interrupt mask/unmask skip condition
    aa9345d10f0a ext4: do not iput inode under running transaction in ext4_rename()
    5e39a73e47ef locking/ww_mutex: Simplify use_ww_ctx & ww_ctx handling
    84bd602c14b7 thermal/core: Add NULL pointer check before using cooling device stats
    50c38f76b51d ASoC: rt5659: Update MCLK rate in set_sysclk()
    b6408fd7eb89 staging: comedi: cb_pcidas64: fix request_irq() warn
    b9fe8673b874 staging: comedi: cb_pcidas: fix request_irq() warn
    7390a1cdf304 scsi: qla2xxx: Fix broken #endif placement
    6e79f829e791 scsi: st: Fix a use after free in st_open()
    98052c40e3ac vhost: Fix vhost_vq_reset()
    57aa4f30911a powerpc: Force inlining of cpu_has_feature() to avoid build failure
    dcf4b6e710c7 NFSD: fix error handling in NFSv4.0 callbacks
    990a0fa1ccbb ASoC: cs42l42: Always wait at least 3ms after reset
    6d197691a1c5 ASoC: cs42l42: Fix mixer volume control
    aa74bf73937c ASoC: cs42l42: Fix channel width support
    47ae33d5b32b ASoC: cs42l42: Fix Bitclock polarity inversion
    5952cf385ceb ASoC: es8316: Simplify adc_pga_gain_tlv table
    381679aec216 ASoC: sgtl5000: set DAP_AVC_CTRL register to correct default value on probe
    57b8a192872a ASoC: rt5651: Fix dac- and adc- vol-tlv values being off by a factor of 10
    b75073a37c65 ASoC: rt5640: Fix dac- and adc- vol-tlv values being off by a factor of 10
    ca3f8dcd6d94 iomap: Fix negative assignment to unsigned sis->pages in iomap_swapfile_activate
    c899b8391a54 rpc: fix NULL dereference on kmalloc failure
    0e71c59b2450 fs: nfsd: fix kconfig dependency warning for NFSD_V4
    9b68d3ed8aa8 ext4: fix bh ref count on error paths
    721a6f64c0bc ext4: shrink race window in ext4_should_retry_alloc()
    05d891e76dde module: harden ELF info handling
    6a8df0821f67 module: avoid *goto*s in module_sig_check()
    d9b98ccdfed0 module: merge repetitive strings in module_sig_check()
    1a8c5fbe2f1d modsign: print module name along with error message
    120589bb0970 ipv6: weaken the v4mapped source check
    1225bb45c87b selinux: vsock: Set SID for socket returned by accept()

(From OE-Core rev: 5be4de51009c5164c058a46e9f143591e433ab97)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 199566a40671ac273028cb44d0bb4494be22c4aa)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Bruce Ashfield
0cafbb003a linux-yocto/5.10: update to v5.10.29
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    d8cf82b410b4 Linux 5.10.29
    cef13a04376b init/Kconfig: make COMPILE_TEST depend on HAS_IOMEM
    ba02635769f1 init/Kconfig: make COMPILE_TEST depend on !S390
    faa30969f66e bpf, x86: Validate computation of branch displacements for x86-32
    3edb8967d91e bpf, x86: Validate computation of branch displacements for x86-64
    f890246ae75c tools/resolve_btfids: Add /libbpf to .gitignore
    76983e244908 kbuild: Do not clean resolve_btfids if the output does not exist
    0945d67e5d43 kbuild: Add resolve_btfids clean to root clean target
    eff1e0465727 tools/resolve_btfids: Set srctree variable unconditionally
    f60c918b07b7 tools/resolve_btfids: Check objects before removing
    249719092447 tools/resolve_btfids: Build libbpf and libsubcmd in separate directories
    2934985086b9 math: Export mul_u64_u64_div_u64
    7345d4b2d421 io_uring: fix timeout cancel return code
    8f9049e70cd6 cifs: Silently ignore unknown oplock break handle
    fee111089cc9 cifs: revalidate mapping when we open files for SMB1 POSIX
    42498ee67296 ia64: fix format strings for err_inject
    bc30fdd598e3 ia64: mca: allocate early mca with GFP_ATOMIC
    b008489d8b86 selftests/vm: fix out-of-tree build
    47f8bc68ae95 scsi: target: pscsi: Clean up after failure in pscsi_map_sg()
    266d3106efbd ptp_qoriq: fix overflow in ptp_qoriq_adjfine() u64 calcalation
    f135b89e286b platform/x86: intel_pmc_core: Ignore GBE LTR on Tiger Lake platforms
    037950869be3 block: clear GD_NEED_PART_SCAN later in bdev_disk_changed
    7c73059bf849 x86/build: Turn off -fcf-protection for realmode targets
    6372aa9a78f8 drm/msm/disp/dpu1: icc path needs to be set before dpu runtime resume
    6deb9d9a84a2 kselftest/arm64: sve: Do not use non-canonical FFR register value
    bcd57b07fd90 platform/x86: thinkpad_acpi: Allow the FnLock LED to change state
    6304295c6190 net: ipa: fix init header command validation
    8a57256e0548 netfilter: nftables: skip hook overlap logic if flowtable is stale
    b0c795f4cc53 netfilter: conntrack: Fix gre tunneling over ipv6
    439c2c22fb85 drm/msm: Ratelimit invalid-fence message
    57e0546f01ca drm/msm/adreno: a5xx_power: Don't apply A540 lm_setup to other GPUs
    b9ec77ef36af drm/msm/dsi_pll_7nm: Fix variable usage for pll_lockdet_rate
    0a66bd60b1ce mac80211: choose first enabled channel for monitor
    7705c48b8695 mac80211: Check crypto_aead_encrypt for errors
    05878b681981 mISDN: fix crash in fritzpci
    4ca265610cc6 kunit: tool: Fix a python tuple typing error
    f0ed115feccc net: pxa168_eth: Fix a potential data race in pxa168_eth_remove
    4b4ce9895e64 net/mlx5e: Enforce minimum value check for ICOSQ size
    198afc3b0c01 bpf, x86: Use kvmalloc_array instead kmalloc_array in bpf_jit_comp
    107875a53868 platform/x86: intel-hid: Support Lenovo ThinkPad X1 Tablet Gen 2
    4c875e034dfb bus: ti-sysc: Fix warning on unbind if reset is not deasserted
    5c6f778e8f7d ARM: dts: am33xx: add aliases for mmc interfaces
    ecdfb9d70fb8 Linux 5.10.28
    7973a0dad073 bpf: Use NOP_ATOMIC5 instead of emit_nops(&prog, 5) for BPF_TRAMP_F_CALL_ORIG
    12b5f9dae410 Revert "kernel: freezer should treat PF_IO_WORKER like PF_KTHREAD for freezing"
    6ae5eaee1ea5 riscv: evaluate put_user() arg before enabling user access
    61f0c3e8098f drivers: video: fbcon: fix NULL dereference in fbcon_cursor()
    d06d0b3cf626 driver core: clear deferred probe reason on probe retry
    d29c38dd926d staging: rtl8192e: Change state information from u16 to u8
    538b96315375 staging: rtl8192e: Fix incorrect source in memcpy()
    84e5203fd277 soc: qcom-geni-se: Cleanup the code to remove proxy votes
    996a5782faef usb: dwc3: gadget: Clear DEP flags after stop transfers in ep disable
    1808ee421ce5 usb: dwc3: qcom: skip interconnect init for ACPI probe
    137dfed1552a usb: dwc2: Prevent core suspend when port connection flag is 0
    4e28aca96729 usb: dwc2: Fix HPRT0.PrtSusp bit setting for HiKey 960 board.
    77c0d6af858b usb: gadget: udc: amd5536udc_pci fix null-ptr-dereference
    6f86681691c2 USB: cdc-acm: fix use-after-free after probe failure
    64deff1f4e0f USB: cdc-acm: fix double free on probe failure
    439a27521112 USB: cdc-acm: downgrade message to debug
    511302531eb8 USB: cdc-acm: untangle a circular dependency between callback and softint
    e700e3aec303 cdc-acm: fix BREAK rx code path adding necessary calls
    9efa606a83e0 usb: xhci-mtk: fix broken streams issue on 0.96 xHCI
    1addcb1f77d6 usb: musb: Fix suspend with devices connected for a64
    15e61d9ae7ac USB: quirks: ignore remote wake-up on Fibocom L850-GL LTE modem
    4027d6e88fef usbip: vhci_hcd fix shift out-of-bounds in vhci_hub_control()
    c04adcc819d3 firewire: nosy: Fix a use-after-free bug in nosy_ioctl()
    2c7d85026324 video: hyperv_fb: Fix a double free in hvfb_probe
    a267a7e1c0ca usb: dwc3: pci: Enable dis_uX_susphy_quirk for Intel Merrifield
    bf4c643192b3 firmware: stratix10-svc: reset COMMAND_RECONFIG_FLAG_PARTIAL to 0
    3b681a1c43b6 extcon: Fix error handling in extcon_dev_register
    023d13952e9b extcon: Add stubs for extcon_register_notifier_all() functions
    0fe56e294cef pinctrl: rockchip: fix restore error in resume
    80ee9e02be3d vfio/nvlink: Add missing SPAPR_TCE_IOMMU depends
    d2308dd5119b drm/tegra: sor: Grab runtime PM reference across reset
    f552f95853f8 drm/tegra: dc: Restore coupling of display controllers
    77a8e6f792d5 drm/imx: fix memory leak when fails to init
    74612ecdf263 reiserfs: update reiserfs_xattrs_initialized() condition
    8c71f5b30955 drm/amdgpu: check alignment on CPU page for bo map
    78ceecd2ed45 drm/amdgpu: fix offset calculation in amdgpu_vm_bo_clear_mappings()
    28f901fe1634 drm/amdkfd: dqm fence memory corruption
    ec3e06e06f76 mm: fix race by making init_zero_pfn() early_initcall
    d88b557b9b73 s390/vdso: fix tod_steering_delta type
    b332265430c8 s390/vdso: copy tod_steering_delta value to vdso_data page
    f706acc9312b tracing: Fix stack trace event size
    cc038ab785a8 PM: runtime: Fix ordering in pm_runtime_get_suppliers()
    da2976cd711b PM: runtime: Fix race getting/putting suppliers at probe
    e6d8eb65532e KVM: SVM: ensure that EFER.SVME is set when running nested guest or on nested vmexit
    5f6625f5cd5c KVM: SVM: load control fields from VMCB12 before checking them
    6aaa3c2ebb4f xtensa: move coprocessor_flush to the .text section
    a3be911a5fee xtensa: fix uaccess-related livelock in do_page_fault
    bcd7999c03ed ALSA: hda/realtek: fix mute/micmute LEDs for HP 640 G8
    ee58eee4501f ALSA: hda/realtek: call alc_update_headset_mode() in hp_automute_hook
    f235ffa56b8e ALSA: hda/realtek: fix a determine_headset_type issue for a Dell AIO
    6d91f3afb632 ALSA: hda: Add missing sanity checks in PM prepare/complete callbacks
    b3116cda4e52 ALSA: hda: Re-add dropped snd_poewr_change_state() calls
    474d3d65784e ALSA: usb-audio: Apply sample rate quirk to Logitech Connect
    e525cd364c09 ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()
    cdd192a20b06 ACPI: tables: x86: Reserve memory occupied by ACPI tables
    fd38d4e6757b bpf: Remove MTU check in __bpf_skb_max_len
    ff64f33bc93b net: 9p: advance iov on empty read
    84877db1cdea net: wan/lmc: unregister device when no matching device is found
    33a6b3eea44b net: ipa: fix register write command validation
    44d76042c038 net: ipa: remove two unused register definitions
    c805f215e9c5 appletalk: Fix skb allocation size in loopback case
    f2294a707f63 net: ethernet: aquantia: Handle error cleanup of start on open
    7d3ffc0993fe ath10k: hold RCU lock when calling ieee80211_find_sta_by_ifaddr()
    221528c20e5e iwlwifi: pcie: don't disable interrupts for reg_lock
    f33d87047323 netdevsim: dev: Initialize FIB module after debugfs
    660bf76aec07 rtw88: coex: 8821c: correct antenna switch function
    b5777172cce2 ath11k: add ieee80211_unregister_hw to avoid kernel crash caused by NULL pointer
    731c4447e6db brcmfmac: clear EAP/association status bits on linkdown events
    4094194d103b can: tcan4x5x: fix max register value
    1a5751d58b14 net: introduce CAN specific pointer in the struct net_device
    9e35159c6e9a can: dev: move driver related infrastructure into separate subdir
    e3ccad57ac09 flow_dissector: fix TTL and TOS dissection on IPv4 fragments
    8fe47a33944f net: mvpp2: fix interrupt mask/unmask skip condition
    44c816c8b9ab io_uring: call req_set_fail_links() on short send[msg]()/recv[msg]() with MSG_WAITALL
    5038c1122e13 ext4: do not iput inode under running transaction in ext4_rename()
    eb8049d85a92 static_call: Align static_call_is_init() patching condition
    21c2bbc17b6b io_uring: imply MSG_NOSIGNAL for send[msg]()/recv[msg]() calls
    fa068ee3f37e nvmet-tcp: fix kmap leak when data digest in use
    3ac4aaff387b locking/ww_mutex: Fix acquire/release imbalance in ww_acquire_init()/ww_acquire_fini()
    905ef030bdf9 locking/ww_mutex: Simplify use_ww_ctx & ww_ctx handling
    1e2a75c24a48 thermal/core: Add NULL pointer check before using cooling device stats
    cf51b6145b9d ASoC: rt711: add snd_soc_component remove callback
    805645d89a20 ASoC: rt5659: Update MCLK rate in set_sysclk()
    7d4344fd3ee0 staging: comedi: cb_pcidas64: fix request_irq() warn
    e833d5716fbb staging: comedi: cb_pcidas: fix request_irq() warn
    4cd96a0de7a1 scsi: qla2xxx: Fix broken #endif placement
    3860814ef620 scsi: st: Fix a use after free in st_open()
    861fc287e036 io_uring: fix ->flags races by linked timeouts
    e1f8c95c1110 vhost: Fix vhost_vq_reset()
    7f6518ec6ee9 kernel: freezer should treat PF_IO_WORKER like PF_KTHREAD for freezing
    540a1ebf3c23 NFSD: fix error handling in NFSv4.0 callbacks
    73df108e3aec ASoC: cs42l42: Always wait at least 3ms after reset
    9b7b92c4b92d ASoC: cs42l42: Fix mixer volume control
    20b39eb99598 ASoC: cs42l42: Fix channel width support
    0d3753babfa7 ASoC: cs42l42: Fix Bitclock polarity inversion
    ed47acc0c888 ASoC: soc-core: Prevent warning if no DMI table is present
    294d4c2b4fda ASoC: es8316: Simplify adc_pga_gain_tlv table
    f134a436d766 ASoC: sgtl5000: set DAP_AVC_CTRL register to correct default value on probe
    b057d540ad2c ASoC: rt5651: Fix dac- and adc- vol-tlv values being off by a factor of 10
    ed4cdb772680 ASoC: rt5640: Fix dac- and adc- vol-tlv values being off by a factor of 10
    4bac395e0b8a ASoC: rt1015: fix i2c communication error
    4eff80b14014 iomap: Fix negative assignment to unsigned sis->pages in iomap_swapfile_activate
    5fb71b231c4e rpc: fix NULL dereference on kmalloc failure
    9e9aa1c03c33 fs: nfsd: fix kconfig dependency warning for NFSD_V4
    e178f362f095 ext4: fix bh ref count on error paths
    4b3139576a20 ext4: shrink race window in ext4_should_retry_alloc()
    1bfb046d29e3 virtiofs: Fail dax mount if device does not support it
    e21d2b92354b bpf: Fix fexit trampoline.
    68abc0115617 arm64: mm: correct the inside linear map range during hotplug check

(From OE-Core rev: 52a1231209190d0921a620343608a06b906b6a30)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 255ec8ff86d31c3464c30c26bdb15f01563b088e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Bruce Ashfield
852f67148e linux-yocto/5.10: BSP configuration fixes
Integrating the following commit(s) to linux-yocto/5.10; a sketch of the .scc/.cfg fragment format these BSP fixes work in follows the list:

    fa039db710c qemuppc64: Enable the RTC driver
    f6cfc23fbfc nxp-s32g2xx: add HSE UIO related configs to make hse demo work
    2b445fb1e0b firmware: fix CONFIG_FW_LOADER option mismatch warning
    60dde01d949 nxp-imx8: Correct DRM_TTM config and delete redundant config
    07119316ee5 xlnx: bsp: drop obsolete kernel options for xilinx-zynqmp and xilinx-zynq
    0cf78165f8e bcm-2xxx-rpi: update v5.10 kernel config for raspberrypi 4b platform
    9b5a9e46778 marvell-cn96xx: Add the preempt-rt support
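
Each of these lands in the kernel-cache metadata as an .scc feature plus an
accompanying .cfg fragment. A minimal sketch in that format, with the file
names and option values assumed for illustration rather than taken from the
actual qemuppc64 change:

    # rtc.scc -- hypothetical feature description
    define KFEATURE_DESCRIPTION "Enable the generic RTC driver"
    kconf hardware rtc.cfg

    # rtc.cfg -- hypothetical config fragment the .scc pulls in
    CONFIG_RTC_CLASS=y
    CONFIG_RTC_DRV_GENERIC=y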

(From OE-Core rev: 2ce5cf55ab534ea3959daeb3de181a51b313bdec)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6186f21b29e7a152d34c620e81878bf6eff6519d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
He Zhe
a7c05409a1 linux-yocto-dev: add features/scsi/scsi-debug.scc features/gpio/mockup.scc to KERNEL_FEATURES
Add features/scsi/scsi-debug.scc and features/gpio/mockup.scc to
KERNEL_FEATURES to meet ptest requirements, matching what was already
done for the other linux-yocto* recipes.
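
A minimal sketch of the kind of append this describes (recipe context
assumed, not quoted from the actual change; this release series still
uses the underscore append syntax):

    # hypothetical addition to the linux-yocto-dev recipe
    KERNEL_FEATURES_append = " features/scsi/scsi-debug.scc features/gpio/mockup.scc"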

(From OE-Core rev: 8e1d3f54cdba087d84a938ea1fc0e0b0b501b50b)

Signed-off-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fd27f302df886c27cb424191c27152ad9d0e8d80)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Bruce Ashfield
8494d268f7 linux-yocto/5.10: update to v5.10.27
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    472493c8a425 Linux 5.10.27
    3a1ca9bd4f5a xen-blkback: don't leak persistent grants from xen_blkbk_map()
    03a1c3253f25 can: peak_usb: Revert "can: peak_usb: add forgotten supported devices"
    f12d05f70282 nvme: fix the nsid value to print in nvme_validate_or_alloc_ns
    36478a9ec5af Revert "net: bonding: fix error return code of bond_neigh_init()"
    451ba16cc5b7 Revert "xen: fix p2m size in dom0 for disabled memory hotplug case"
    df61d3cff422 fs/ext4: fix integer overflow in s_log_groups_per_flex
    0229b5926dc9 ext4: add reclaim checks to xattr code
    25e809bf8bec mac80211: fix double free in ibss_leave
    39e1a35ea65a net: dsa: b53: VLAN filtering is global to all users
    d3b5a04b8ce5 r8169: fix DMA being used after buffer free if WoL is enabled
    8dc08a2962c8 can: dev: Move device back to init netns on owning netns delete
    24256b4d87eb ch_ktls: fix enum-conversion warning
    6f15c02ebbe9 fs/cachefiles: Remove wait_bit_key layout dependency
    002ea848d7fd mm/memcg: fix 5.10 backport of splitting page memcg
    2c163520e12b x86/mem_encrypt: Correct physical address calculation in __set_clr_pte_enc()
    c6c9bc4f261d locking/mutex: Fix non debug version of mutex_lock_io_nested()
    d4ce2a8f465d cifs: Adjust key sizes and key generation routines for AES256 encryption
    86cc799e1d9d smb3: fix cached file size problems in duplicate extents (reflink)
    2423511cc5ba scsi: mpt3sas: Fix error return code of mpt3sas_base_attach()
    6b977fea78de scsi: qedi: Fix error return code of qedi_alloc_global_queues()
    62bb066cdfb6 scsi: Revert "qla2xxx: Make sure that aborted commands are freed"
    fc062d21c011 block: recalculate segment count for multi-segment discards correctly
    dcf2dfc1614d io_uring: fix provide_buffers sign extension
    efb334c4e5ff perf synthetic events: Avoid write of uninitialized memory when generating PERF_RECORD_MMAP* records
    5febe60a8021 perf auxtrace: Fix auxtrace queue conflict
    4a5891992c68 ACPI: scan: Use unique number for instance_no
    2ba9964a9653 ACPI: scan: Rearrange memory allocation in acpi_device_add()
    c33f918758fa Revert "netfilter: x_tables: Update remaining dereference to RCU"
    de2e6b4e32d6 mm/mmu_notifiers: ensure range_end() is paired with range_start()
    42aa210795d8 dm table: Fix zoned model check and zone sectors check
    3fdebc2d8e79 netfilter: x_tables: Use correct memory barriers.
    520be4d1af9c Revert "netfilter: x_tables: Switch synchronization to RCU"
    87771c9b09bb net: phy: broadcom: Fix RGMII delays for BCM50160 and BCM50610M
    485335a637c8 net: phy: broadcom: Set proper 1000BaseX/SGMII interface mode for BCM54616S
    837a3ae33459 net: phy: broadcom: Avoid forward for bcm54xx_config_clock_delay()
    9a5267264fc2 net: phy: introduce phydev->port
    c4934e65c8bc net: axienet: Fix probe error cleanup
    3e08fd4a8298 net: axienet: Properly handle PCS/PMA PHY for 1000BaseX mode
    d65e7d0c7449 igb: avoid premature Rx buffer reuse
    c7eb3e12f18f net, bpf: Fix ip6ip6 crash with collect_md populated skbs
    0a245acbce89 net: Consolidate common blackhole dst ops
    33cd5f88b5bf bpf: Don't do bpf_cgroup_storage_set() for kuprobe/tp programs
    d95696f537d6 RDMA/cxgb4: Fix adapter LE hash errors while destroying ipv6 listening server
    b740e58324c8 xen/x86: make XEN_BALLOON_MEMORY_HOTPLUG_LIMIT depend on MEMORY_HOTPLUG
    889c56ea941e octeontx2-af: Fix memory leak of object buf
    558454ec5170 net: bridge: don't notify switchdev for local FDB addresses
    7d019b2d0f27 PM: EM: postpone creating the debugfs dir till fs_initcall
    08a5f812ad6c net/mlx5e: Fix error path for ethtool set-priv-flag
    624f0dc8f7f4 net/mlx5e: Offload tuple rewrite for non-CT flows
    c83207bb02d6 net/mlx5e: Allow to match on MPLS parameters only for MPLS over UDP
    0be13d01473a net/mlx5: Add back multicast stats for uplink representor
    65c021e73590 PM: runtime: Defer suspending suppliers
    3db5fc556515 arm64: kdump: update ppos when reading elfcorehdr
    447a011bb40d drm/msm: Fix suspend/resume on i.MX5
    c7552dee62a0 drm/msm: fix shutdown hook in case GPU components failed to bind
    0b7bc92c1986 can: isotp: tx-path: zero initialize outgoing CAN frames
    ccd5565feea3 bpf: Fix umd memory leak in copy_process()
    eeadce8811d3 libbpf: Fix BTF dump of pointer-to-array-of-struct
    7693b64ae508 selftests: forwarding: vxlan_bridge_1d: Fix vxlan ecn decapsulate value
    5ebb9947b488 selinux: vsock: Set SID for socket returned by accept()
    1e01729999c0 net: stmmac: dwmac-sun8i: Provide TX and RX fifo sizes
    961d9a6e47b9 r8152: limit the RX buffer size of RTL8153A for USB 2.0
    2330d46db081 igb: check timestamp validity
    421e0d731070 net: cdc-phonet: fix data-interface release on probe failure
    943e1583bf8a net: check all name nodes in __dev_alloc_name
    748a158359d7 octeontx2-af: fix infinite loop in unmapping NPC counter
    b553f45c76ec octeontx2-pf: Clear RSS enable flag on interace down
    11e94cfa9dd8 octeontx2-af: Fix irq free in rvu teardown
    da517ca38dc6 octeontx2-af: Remove TOS field from MKEX TX
    1055796ca031 octeontx2-af: Modify default KEX profile to extract TX packet fields
    f896ae2886d1 octeontx2-af: Formatting debugfs entry rsrc_alloc.
    5f64c4c550c8 ipv6: weaken the v4mapped source check
    9e48a3bc8ba2 ARM: dts: imx6ull: fix ubi filesystem mount failed
    b4c574e4b471 libbpf: Use SOCK_CLOEXEC when opening the netlink socket
    86e525bc04f2 libbpf: Fix error path in bpf_object__elf_init()
    4280132339ce netfilter: flowtable: Make sure GC works periodically in idle system
    186d8dc40a65 netfilter: nftables: allow to update flowtable flags
    4a741b4df032 netfilter: nftables: report EOPNOTSUPP on unsupported flowtable flags
    a96a8cb0500a net/sched: cls_flower: fix only mask bit check in the validate_ct_state
    6233c2d09633 ionic: linearize tso skb with too many frags
    7637048707e5 drm/msm/dsi: fix check-before-set in the 7nm dsi_pll code
    126aa8f23424 ftrace: Fix modify_ftrace_direct.
    29b8834cf828 nfp: flower: fix pre_tun mask id allocation
    47dae14b21f7 nfp: flower: add ipv6 bit to pre_tunnel control message
    259b0122dea5 nfp: flower: fix unsupported pre_tunnel flows
    aeff815e76ef selftests/net: fix warnings on reuseaddr_ports_exhausted
    bd63bd78d303 mac80211: Allow HE operation to be longer than expected.
    f865127b1d26 mac80211: fix rate mask reset
    48d0b548b49e can: m_can: m_can_rx_peripheral(): fix RX being blocked by errors
    afaca48e3017 can: m_can: m_can_do_rx_poll(): fix extraneous msg loss warning
    4fcf59c24990 can: c_can: move runtime PM enable/disable to c_can_platform
    524320e8034a can: c_can_pci: c_can_pci_remove(): fix use-after-free
    f9a5974b9719 can: kvaser_pciefd: Always disable bus load reporting
    af3e6c3dcf54 can: flexcan: flexcan_chip_freeze(): fix chip freeze for missing bitrate
    0cbadc0fb54c can: peak_usb: add forgotten supported devices
    3b3d9279be6c can: isotp: TX-path: ensure that CAN frame flags are initialized
    f88517dae95b can: isotp: isotp_setsockopt(): only allow to set low level TX flags for CAN-FD
    63f2a9bd3133 tcp: relookup sock for RST+ACK packets handled by obsolete req sock
    50f41f2e29ff tipc: better validate user input in tipc_nl_retrieve_key()
    ddeba5b39cca net: phylink: Fix phylink_err() function name error in phylink_major_config
    375f5169f231 net: hdlc_x25: Prevent racing between "x25_close" and "x25_xmit"/"x25_rx"
    ee39ee5f437c netfilter: ctnetlink: fix dump of the expect mask attribute
    d5380ceede6f selftests/bpf: Set gopt opt_class to 0 if get tunnel opt failed
    33cc382c5830 flow_dissector: fix byteorder of dissected ICMP ID
    fce6fb902189 net: qrtr: fix a kernel-infoleak in qrtr_recvmsg()
    6d3635ed12e7 net: ipa: terminate message handler arrays
    1701bd22b05d clk: qcom: gcc-sc7180: Use floor ops for the correct sdcc1 clk
    b50c46ef67d6 ftgmac100: Restart MAC HW once
    e64a5a5b8e93 net: phy: broadcom: Add power down exit reset state delay
    87378c850fee net/qlcnic: Fix a use after free in qlcnic_83xx_get_minidump_template
    648b62f10cec e1000e: Fix error handling in e1000_set_d0_lplu_state_82571
    8ed431fec355 e1000e: add rtnl_lock() to e1000_reset_task
    5994a096570f igc: Fix igc_ptp_rx_pktstamp()
    0963fadcf536 igc: Fix Supported Pause Frame Link Setting
    d5330d5cc3ad igc: Fix Pause Frame Advertising
    d85ffade499a igc: reinit_locked() should be called with rtnl_lock
    4c91fc60e3f6 net: dsa: bcm_sf2: Qualify phydev->dev_flags based on port
    f64270027928 net: sched: validate stab values
    400199d6e6f6 macvlan: macvlan_count_rx() needs to be aware of preemption
    2514c7ad115e drop_monitor: Perform cleanup upon probe registration failure
    7f041ee8effd ipv6: fix suspecious RCU usage warning
    61219de46413 net/mlx5e: Don't match on Geneve options in case option masks are all zero
    d0be25fa4f96 net/mlx5e: When changing XDP program without reset, take refs for XSK RQs
    60b5ff15b41d net/mlx5e: RX, Mind the MPWQE gaps when calculating offsets
    9857de932b30 libbpf: Fix INSTALL flag order
    f7c3d7615e6c bpf: Change inode_storage's lookup_elem return value from NULL to -EBADF
    926cde9eec67 veth: Store queue_mapping independently of XDP prog presence
    f47a9b2570ad soc: ti: omap-prm: Fix occasional abort on reset deassert for dra7 iva
    1f798907b435 ARM: OMAP2+: Fix smartreflex init regression after dropping legacy data
    965e6cb8d4c9 bus: omap_l3_noc: mark l3 irqs as IRQF_NO_THREAD
    921aae17bb0f dm ioctl: fix out of bounds array access when no devices
    d8b36c483d47 dm verity: fix DM_VERITY_OPTS_MAX value
    1e2d70d08ade drm/i915: Fix the GT fence revocation runtime PM logic
    da6a9b5b1799 drm/amdgpu: Add additional Sienna Cichlid PCI ID
    dc28098f40b4 drm/amdgpu/display: restore AUX_DPHY_TX_CONTROL for DCN2.x
    e02f765fa784 drm/amd/pm: workaround for audio noise issue
    f771b2b3eb2f drm/etnaviv: Use FOLL_FORCE for userptr
    546f7fcc451c integrity: double check iint_cache was initialized
    5f7b515df003 ARM: dts: at91-sama5d27_som1: fix phy address to 7
    2a0d35962ff1 ARM: dts: at91: sam9x60: fix mux-mask to match product's datasheet
    0b6cd8802d32 ARM: dts: at91: sam9x60: fix mux-mask for PA7 so it can be set to A, B and C
    1c103f512251 arm64: dts: ls1043a: mark crypto engine dma coherent
    4f35b64ba823 arm64: dts: ls1012a: mark crypto engine dma coherent
    3883f335b5ee arm64: dts: ls1046a: mark crypto engine dma coherent
    1ced45535d4b arm64: stacktrace: don't trace arch_stack_walk()
    53d3c8063590 ACPICA: Always create namespace nodes using acpi_ns_create_node()
    36fe73bd0af9 ACPI: video: Add missing callback back for Sony VPCEH3U1E
    1f5c9efad9fe gcov: fix clang-11+ support
    6e63cc1fe253 kasan: fix per-page tags for non-page_alloc pages
    fe03ccc3ce90 hugetlb_cgroup: fix imbalanced css_get and css_put pair for shared mappings
    269042e8ffed squashfs: fix xattr id and id lookup sanity checks
    61d72c5952c4 squashfs: fix inode lookup sanity checks
    1d215fcbc4ef z3fold: prevent reclaim/free race for headless pages
    e4642090734e psample: Fix user API breakage
    a4be7e4ed5d9 platform/x86: intel-vbtn: Stop reporting SW_DOCK events
    4f67d3e8c0ac netsec: restore phy power state after controller reset
    19c9967e495e selinux: fix variable scope issue in live sidtab conversion
    9731e08a3381 selinux: don't log MAC_POLICY_LOAD record on failed policy load
    3b87d0c5834b btrfs: fix sleep while in non-sleep context during qgroup removal
    771dfb3c531d KVM: x86: Protect userspace MSR filter with SRCU, and set atomically-ish
    394e4fd67946 static_call: Fix static_call_set_init()
    0fefb5f3e574 static_call: Fix the module key fixup
    a63068e93917 static_call: Allow module use without exposing static_call_key
    433cd7ca386c static_call: Pull some static_call declarations to the type headers
    533c293f737c ia64: fix ptrace(PTRACE_SYSCALL_INFO_EXIT) sign
    d76e207991c4 ia64: fix ia64_syscall_get_set_arguments() for break-based syscalls
    7077d5e7f074 mm/fork: clear PASID for new mm
    07feac84efc6 block: Suppress uevent for hidden device when removed
    9f704608010b nfs: we don't support removing system.nfs4_acl
    3dab008e23bd nvme-pci: add the DISABLE_WRITE_ZEROES quirk for a Samsung PM1725a
    8f0534c96ac8 nvme-rdma: Fix a use after free in nvmet_rdma_write_data_done
    c7b3f6db97c2 nvme-core: check ctrl css before setting up zns
    9083dc773d67 nvme-fc: return NVME_SC_HOST_ABORTED_CMD when a command has been aborted
    4d6aea29a795 nvme-fc: set NVME_REQ_CANCELLED in nvme_fc_terminate_exchange()
    7e62a89b51dd nvme: add NVME_REQ_CANCELLED flag in nvme_cancel_request()
    d8b17df7bf80 nvme: simplify error logic in nvme_validate_ns()
    b91230a0013f drm/radeon: fix AGP dependency
    35d4f0712828 drm/amdgpu: fb BO should be ttm_bo_type_device
    a255d14eb5dc drm/amd/display: Revert dram_clock_change_latency for DCN2.1
    d27b0964ade9 block: Fix REQ_OP_ZONE_RESET_ALL handling
    c9d1f6ad1e25 regulator: qcom-rpmh: Correct the pmic5_hfsmps515 buck
    6366a5bb888b kselftest: arm64: Fix exit code of sve-ptrace
    da5bc0c21c04 u64_stats,lockdep: Fix u64_stats_init() vs lockdep
    f89338395545 staging: rtl8192e: fix kconfig dependency on CRYPTO
    eb4154fb61e2 habanalabs: Call put_pid() when releasing control device
    f2b38f03a3f7 sparc64: Fix opcode filtering in handling of no fault loads
    58b34195b33f umem: fix error return code in mm_pci_probe()
    feaa91193ad3 kbuild: dummy-tools: fix inverted tests for gcc
    ede8be3ae078 kbuild: add image_name to no-sync-config-targets
    264bb27b9fe4 irqchip/ingenic: Add support for the JZ4760
    b684c380f0b9 cifs: change noisy error message to FYI
    758bca385a79 atm: idt77252: fix null-ptr-dereference
    f35954a3961b atm: uPD98402: fix incorrect allocation
    852143ed96e2 net: enetc: set MAC RX FIFO to recommended value
    697082b125b0 net: davicom: Use platform_get_irq_optional()
    e6946ef43848 net: wan: fix error return code of uhdlc_init()
    184dc037575c net: hisilicon: hns: fix error return code of hns_nic_clear_all_rx_fetch()
    9d1a5392aca1 NFS: Correct size calculation for create reply length
    2479c6b9ef36 nfs: fix PNFS_FLEXFILE_LAYOUT Kconfig default
    b48779c863c0 gpiolib: acpi: Add missing IRQF_ONESHOT
    9443aef16fca cpufreq: blacklist Arm Vexpress platforms in cpufreq-dt-platdev
    6d7dce3bdfc4 gfs2: fix use-after-free in trans_drain
    419ebba40dbf cifs: ask for more credit on async read/write code paths
    b8bfda6e08b8 gianfar: fix jumbo packets+napi+rx overrun crash
    2d0fba5a2e9f sun/niu: fix wrong RXMAC_BC_FRM_CNT_COUNT count
    81b1a8f14436 net: intel: iavf: fix error return code of iavf_init_get_resources()
    5f86016bdfa7 net: tehuti: fix error return code in bdx_probe()
    71b996c9b883 blk-cgroup: Fix the recursive blkg rwstat
    b171748b7953 scsi: ufs: ufs-qcom: Disable interrupt in reset path
    028210541b3c ixgbe: Fix memleak in ixgbe_configure_clsu32
    4dc123500c3b ALSA: hda: ignore invalid NHLT table
    18f27fc6bcc2 Revert "r8152: adjust the settings about MAC clock speed down for RTL8153"
    f8f6190094a3 atm: lanai: dont run lanai_dev_close if not open
    6f6e45947572 atm: eni: dont release is never initialized
    75e967a04d37 powerpc/4xx: Fix build errors from mfdcr()
    4a104e4d4d9d net: fec: ptp: avoid register access when ipg clock is disabled
    50c75680bdce net: stmmac: fix dma physical address of descriptor when display ring
    a9daba140178 mt76: fix tx skb error handling in mt76_dma_tx_queue_skb
    efb12c03fcd0 mm/memcg: set memcg when splitting page
    6143a1d193e9 mm/memcg: rename mem_cgroup_split_huge_fixup to split_page_memcg and add nr_pages argument
    856cd02bbdd4 Linux 5.10.26
    de1126ea44bb cifs: Fix preauth hash corruption
    21536d7b7e6f x86/apic/of: Fix CPU devicetree-node lookups
    95247d24c4d4 genirq: Disable interrupts for force threaded handlers
    80b2787789af firmware/efi: Fix a use after bug in efi_mem_reserve_persistent
    47ba0d4d2afb efi: use 32-bit alignment for efi_guid_t literals
    e5154ea8e48f static_call: Fix static_call_update() sanity check
    51ccdd25d7e5 MAINTAINERS: move the staging subsystem to lists.linux.dev
    4c9a74798ef1 MAINTAINERS: move some real subsystems off of the staging mailing list
    35ecf664fd6c ext4: fix rename whiteout with fast commit
    e8fa569465e5 ext4: fix potential error in ext4_do_update_inode
    6163a0662b79 ext4: do not try to set xattr into ea_inode if value is empty
    d130b802f98a ext4: stop inode update before return
    258db8e6ffdc ext4: find old entry again if failed to rename whiteout
    9689ecadf8a7 ext4: fix error handling in ext4_end_enable_verity()
    e4ea2a28d068 efivars: respect EFI_UNSUPPORTED return from firmware
    a548acde9608 x86: Introduce TS_COMPAT_RESTART to fix get_nr_restart_syscall()
    97c608959c27 x86: Move TS_COMPAT back to asm/thread_info.h
    4523e648b7b7 kernel, fs: Introduce and use set_restart_fn() and arch_set_restart_data()
    0e245256e34d x86/ioapic: Ignore IRQ2 again
    4fdf5f4ba61f perf/x86/intel: Fix unchecked MSR access error caused by VLBR_EVENT
    514ea597be8e perf/x86/intel: Fix a crash caused by zero PEBS status
    be1f58e58f76 PCI: rpadlpar: Fix potential drc_name corruption in store functions
    6d4e1fed18d0 counter: stm32-timer-cnt: fix ceiling miss-alignment with reload register
    cbc4c42dbec0 counter: stm32-timer-cnt: fix ceiling write max value
    dcdde25844d4 iio: hid-sensor-temperature: Fix issues of timestamp channel
    7de97c4bba51 iio: hid-sensor-prox: Fix scale not correct issue
    fd8efe16d867 iio: hid-sensor-humidity: Fix alignment issue of timestamp channel
    b477c121a287 iio: adc: adi-axi-adc: add proper Kconfig dependencies
    d894acab2844 iio: adc: ad7949: fix wrong ADC result due to incorrect bit mask
    533ee1e28455 iio: adc: ab8500-gpadc: Fix off by 10 to 3
    f8bfbd3917fa iio: gyro: mpu3050: Fix error handling in mpu3050_trigger_handler
    06c281c23ace iio: adis16400: Fix an error code in adis16400_initial_setup()
    531231485844 iio:adc:qcom-spmi-vadc: add default scale to LR_MUX2_BAT_ID channel
    3ce2e7b2d360 iio:adc:stm32-adc: Add HAS_IOMEM dependency
    6c3c90058b95 thunderbolt: Increase runtime PM reference count on DP tunnel discovery
    f4ca082e3f59 thunderbolt: Initialize HopID IDAs in tb_switch_alloc()
    c7bb96a37dd2 usb: dwc3: gadget: Prevent EP queuing while stopping transfers
    395d273f2998 usb: dwc3: gadget: Allow runtime suspend if UDC unbinded
    8b8a84234c38 usb: typec: tcpm: Invoke power_supply_changed for tcpm-source-psy-
    0ea3fb15a87e usb: typec: Remove vdo[3] part of tps6598x_rx_identity_reg struct
    0f882bcc6407 usb: gadget: configfs: Fix KASAN use-after-free
    22e85a6a35cc usbip: Fix incorrect double assignment to udc->ud.tcp_rx
    7046e5f7a2f6 usb-storage: Add quirk to defeat Kindle's automatic unload
    5a62d6d7afa0 powerpc: Force inlining of cpu_has_feature() to avoid build failure
    2bdef2b476e2 gfs2: bypass signal_our_withdraw if no journal
    a602e830ddaf gfs2: move freeze glock outside the make_fs_rw and _ro functions
    49787b1bba1f gfs2: Add common helper for holding and releasing the freeze glock
    db37238f3452 regulator: pca9450: Clear PRESET_EN bit to fix BUCK1/2/3 voltage setting
    cfbff8bd9efc regulator: pca9450: Enable system reset on WDOG_B assertion
    775691b94ce7 regulator: pca9450: Add SD_VSEL GPIO for LDO5
    9392b8219b62 net: bonding: fix error return code of bond_neigh_init()
    76f496681d6a io_uring: clear IOCB_WAITQ for non -EIOCBQUEUED return
    3c08f772ad0d io_uring: don't attempt IO reissue from the ring exit path
    40345b9c9d90 drm/amd/pm: fulfill the Polaris implementation for get_clock_by_type_with_latency()
    e8e99acd0830 s390/qeth: schedule TX NAPI on QAOB completion
    f3f6765fd0e8 ibmvnic: remove excessive irqsave
    96823c1e9997 media: cedrus: h264: Support profile controls
    1c20e9040f49 io_uring: fix inconsistent lock state
    e1a69079edc4 iwlwifi: Add a new card for MA family
    e7f6ebde21cf drm/amd/display: turn DPMS off on connector unplug
    559b842a64ff MIPS: compressed: fix build with enabled UBSAN
    8545519b1f51 net: phy: micrel: set soft_reset callback to genphy_soft_reset for KSZ8081
    33cafc7952a4 i40e: Fix endianness conversions
    41d4c889b274 powerpc/sstep: Fix darn emulation
    8a335142f1c5 powerpc/sstep: Fix load-store and update emulation
    8b4a797e86a0 RDMA/mlx5: Allow creating all QPs even when non RDMA profile is used
    bb38c1c03384 scsi: isci: Pass gfp_t flags in isci_port_bc_change_received()
    d74238028a11 scsi: isci: Pass gfp_t flags in isci_port_link_up()
    d9f5efd1afc4 scsi: isci: Pass gfp_t flags in isci_port_link_down()
    1eda358e37e5 scsi: mvsas: Pass gfp_t flags to libsas event notifiers
    58bdc321beb5 scsi: libsas: Introduce a _gfp() variant of event notifiers
    18c3c04e8e53 scsi: libsas: Remove notifier indirection
    29c5b80327b7 scsi: pm8001: Neaten debug logging macros and uses
    c4186c00adc1 scsi: pm80xx: Fix pm8001_mpi_get_nvmd_resp() race condition
    3e4b3770744d scsi: pm80xx: Make running_req atomic
    6075c84a98ce scsi: pm80xx: Make mpi_build_cmd locking consistent
    d802672c7f00 module: harden ELF info handling
    e2c8978a75e0 module: avoid *goto*s in module_sig_check()
    8587715b65fa module: merge repetitive strings in module_sig_check()
    c02a33f0fd28 RDMA/rtrs: Fix KASAN: stack-out-of-bounds bug
    904a52dd9e50 RDMA/rtrs: Introduce rtrs_post_send
    9e97c211b701 RDMA/rtrs-srv: Jump to dereg_mr label if allocate iu fails
    5abee8b1fc4f RDMA/rtrs: Remove unnecessary argument dir of rtrs_iu_free
    4ebd8f0c82a5 bpf: Declare __bpf_free_used_maps() unconditionally
    0e44f1e18398 serial: stm32: fix DMA initialization error handling
    5f8659adf7a2 tty: serial: stm32-usart: Remove set but unused 'cookie' variables
    20c0bd2b6579 ibmvnic: serialize access to work queue on remove
    f8ba6913c40a ibmvnic: add some debugs
    b4be6e6e2696 nvme-rdma: fix possible hang when failing to set io queues
    b3901ceb120d gpiolib: Assign fwnode to parent's if no primary one provided
    c5fe922eaf1a counter: stm32-timer-cnt: Report count function when SLAVE_MODE_DISABLED
    f854abe46b0e RISC-V: correct enum sbi_ext_rfence_fid
    359d8ff40a09 scsi: ufs: ufs-mediatek: Correct operator & -> &&
    38089ba4b20c scsi: myrs: Fix a double free in myrs_cleanup()
    eb9d08b34351 scsi: lpfc: Fix some error codes in debugfs
    e95c0d43509c riscv: Correct SPARSEMEM configuration
    04eb2b2fa12f cifs: fix allocation size on newly created files
    bb2e41e65c33 kbuild: Fix <linux/version.h> for empty SUBLEVEL or PATCHLEVEL again
    72714560fbc7 net/qrtr: fix __netdev_alloc_skb call
    6cae8095490c io_uring: ensure that SQPOLL thread is started for exit
    a7acb614287b pstore: Fix warning in pstore_kill_sb()
    5f7d470696ad i915/perf: Start hrtimer only if sampling the OA buffer
    cb14e99e886f sunrpc: fix refcount leak for rpc auth modules
    2ea2d3a79800 vhost_vdpa: fix the missing irq_bypass_unregister_producer() invocation
    3e5a1bb6ea20 vfio: IOMMU_API should be selected
    c2219627091c svcrdma: disable timeouts on rdma backchannel
    982b899ba672 NFSD: fix dest to src mount in inter-server COPY
    800369d61add NFSD: Repair misuse of sv_lock in 5.10.16-rt30.
    12628e7779f8 nfsd: don't abort copies early
    5ea0aa29ad4b nfsd: Don't keep looking up unhashed files in the nfsd file cache
    628f39a57a46 nvmet: don't check iosqes,iocqes for discovery controllers
    b4f911e3a982 nvme-tcp: fix a NULL deref when receiving a 0-length r2t PDU
    7089cdfce32f nvme-tcp: fix possible hang when failing to set io queues
    a83e5c6c35fa nvme-tcp: fix misuse of __smp_processor_id with preemption enabled
    fd9e2b999740 nvme: fix Write Zeroes limitations
    2d202085d2dd ALSA: usb-audio: Fix unintentional sign extension issue
    64195f022ae8 afs: Stop listxattr() from listing "afs.*" attributes
    78ba4793b084 afs: Fix accessing YFS xattrs on a non-YFS server
    07fa872bf79c ASoC: simple-card-utils: Do not handle device clock
    d1ab87e31761 ASoC: qcom: lpass-cpu: Fix lpass dai ids parse
    1ae54de79fba ASoC: codecs: wcd934x: add a sanity check in set channel map
    03079a0f1bf7 ASoC: qcom: sdm845: Fix array out of range on rx slim channels
    26b08c08a5f3 ASoC: qcom: sdm845: Fix array out of bounds access
    47a6cadb6cfd ASoC: SOF: intel: fix wrong poll bits in dsp power down
    b94b71a7a6f6 ASoC: SOF: Intel: unregister DMIC device on probe error
    4da5a9a73c4c ASoC: Intel: bytcr_rt5640: Fix HP Pavilion x2 10-p0XX OVCD current threshold
    118cfdc770cd ASoC: fsl_ssi: Fix TDM slot setup for I2S mode
    223dc51caa51 drm/amd/display: Correct algorithm for reversed gamma
    4daa70a80c68 vhost-vdpa: set v->config_ctx to NULL if eventfd_ctx_fdget() fails
    49ca3100fbaf vhost-vdpa: fix use-after-free of v->config_ctx
    2c8d6a9474f0 btrfs: fix slab cache flags for free space tree bitmap
    38ffe9eaeb7c btrfs: fix race when cloning extent buffer during rewind of an old root
    78486cf1f31e zonefs: fix to update .i_wr_refcnt correctly in zonefs_open_zone()
    9c1c5e81a002 zonefs: prevent use of seq files as swap file
    dfbdbf0f359a zonefs: Fix O_APPEND async write handling
    38c74f2f2318 s390/pci: fix leak of PCI device structure
    075e3034740c s390/pci: remove superfluous zdev->zbus check
    bd37d9b9c4fb s390/pci: refactor zpci_create_device()
    015916ca0266 s390/vtime: fix increased steal time accounting
    5c0a3a331dc5 Revert "PM: runtime: Update device status before letting suppliers suspend"
    68525e424175 ALSA: hda/realtek: fix mute/micmute LEDs for HP 850 G8
    f086deab2c64 ALSA: hda/realtek: fix mute/micmute LEDs for HP 440 G8
    7b00df1894c6 ALSA: hda/realtek: fix mute/micmute LEDs for HP 840 G8
    14af4bf8d481 ALSA: hda/realtek: Apply headset-mic quirks for Xiaomi Redmibook Air
    4c698a3b8fb7 ALSA: hda: generic: Fix the micmute led init state
    e6c7cdf0baf3 ALSA: hda/realtek: apply pin quirk for XiaomiNotebook Pro
    cd7b17ba8e4d ALSA: dice: fix null pointer dereference when node is disconnected
    422806f8d289 spi: cadence: set cqspi to the driver_data field of struct device
    f8d5ced57b07 ASoC: ak5558: Add MODULE_DEVICE_TABLE
    064a7289b445 ASoC: ak4458: Add MODULE_DEVICE_TABLE

(From OE-Core rev: 7724de7ba3aee83efc1d01e13c3365634ec6eb3c)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit cbb5c4392c63f896f204c0c15b0cfa7a364feed2)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Bruce Ashfield
73c71f3d77 linux-yocto/5.4: update to v5.4.109
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    4e85f8a712cd Linux 5.4.109
    057dd3e6986b xen-blkback: don't leak persistent grants from xen_blkbk_map()
    ce934540ff09 can: peak_usb: Revert "can: peak_usb: add forgotten supported devices"
    2638770e793b ext4: add reclaim checks to xattr code
    92b9e3deffb6 mac80211: fix double free in ibss_leave
    ae23957bd1fb net: qrtr: fix a kernel-infoleak in qrtr_recvmsg()
    f7a962970001 net: dsa: b53: VLAN filtering is global to all users
    f866d1fa48e4 can: dev: Move device back to init netns on owning netns delete
    dfd6627c83dd x86/mem_encrypt: Correct physical address calculation in __set_clr_pte_enc()
    f989059cd22a locking/mutex: Fix non debug version of mutex_lock_io_nested()
    1260d8dc2d66 scsi: mpt3sas: Fix error return code of mpt3sas_base_attach()
    d31747705762 scsi: qedi: Fix error return code of qedi_alloc_global_queues()
    063c3cfb264b scsi: Revert "qla2xxx: Make sure that aborted commands are freed"
    fdc61af371db block: recalculate segment count for multi-segment discards correctly
    8ce9f6efa655 perf auxtrace: Fix auxtrace queue conflict
    bc0b1a2036dd ACPI: scan: Use unique number for instance_no
    b382f9d61609 ACPI: scan: Rearrange memory allocation in acpi_device_add()
    cc578c3e612b Revert "netfilter: x_tables: Update remaining dereference to RCU"
    19a5fb4ceada netfilter: x_tables: Use correct memory barriers.
    c46cd29b89da Revert "netfilter: x_tables: Switch synchronization to RCU"
    e74d46e69a45 bpf: Don't do bpf_cgroup_storage_set() for kuprobe/tp programs
    01398e024ba6 RDMA/cxgb4: Fix adapter LE hash errors while destroying ipv6 listening server
    78aafa0240bc PM: EM: postpone creating the debugfs dir till fs_initcall
    f54b10114d63 net/mlx5e: Fix error path for ethtool set-priv-flag
    fa4addf30c2c PM: runtime: Defer suspending suppliers
    c82d289fe958 arm64: kdump: update ppos when reading elfcorehdr
    8bf90e000c10 drm/msm: fix shutdown hook in case GPU components failed to bind
    4fda26d2f7e1 libbpf: Fix BTF dump of pointer-to-array-of-struct
    4f71aacd6c92 selftests: forwarding: vxlan_bridge_1d: Fix vxlan ecn decapsulate value
    4ecf6d486e45 net: stmmac: dwmac-sun8i: Provide TX and RX fifo sizes
    1f103ca31c51 r8152: limit the RX buffer size of RTL8153A for USB 2.0
    048d0bf8ad19 net: cdc-phonet: fix data-interface release on probe failure
    ecc62c3b1b57 octeontx2-af: fix infinite loop in unmapping NPC counter
    7e9a48ceccae octeontx2-af: Fix irq free in rvu teardown
    e15823801229 libbpf: Use SOCK_CLOEXEC when opening the netlink socket
    7722378c4a0a nfp: flower: fix pre_tun mask id allocation
    060deac22f87 mac80211: fix rate mask reset
    52cc7bad1275 can: m_can: m_can_rx_peripheral(): fix RX being blocked by errors
    059c1996017d can: m_can: m_can_do_rx_poll(): fix extraneous msg loss warning
    e484616a9600 can: c_can: move runtime PM enable/disable to c_can_platform
    4f71965ee897 can: c_can_pci: c_can_pci_remove(): fix use-after-free
    42e49b3aa536 can: kvaser_pciefd: Always disable bus load reporting
    e3ca9fbfcdf5 can: flexcan: flexcan_chip_freeze(): fix chip freeze for missing bitrate
    fb4a6ac4851a can: peak_usb: add forgotten supported devices
    0a8046daba17 tcp: relookup sock for RST+ACK packets handled by obsolete req sock
    67319a8df5d3 netfilter: ctnetlink: fix dump of the expect mask attribute
    c4dd0b36cce4 selftests/bpf: Set gopt opt_class to 0 if get tunnel opt failed
    9d06cabe3bf4 ftgmac100: Restart MAC HW once
    81c591299da3 net/qlcnic: Fix a use after free in qlcnic_83xx_get_minidump_template
    d00db63edd0a e1000e: Fix error handling in e1000_set_d0_lplu_state_82571
    9f02a5658413 e1000e: add rtnl_lock() to e1000_reset_task
    71fa8051f2f4 igc: Fix Supported Pause Frame Link Setting
    35d8a780fa2b igc: Fix Pause Frame Advertising
    da8af444b325 net: dsa: bcm_sf2: Qualify phydev->dev_flags based on port
    267b79a11046 net: sched: validate stab values
    76909a298ebb macvlan: macvlan_count_rx() needs to be aware of preemption
    c6b6c7a92fe5 ipv6: fix suspecious RCU usage warning
    40fa14bbe3fe net/mlx5e: Don't match on Geneve options in case option masks are all zero
    e64e327c7fab libbpf: Fix INSTALL flag order
    53f1483984bf veth: Store queue_mapping independently of XDP prog presence
    f259a7fdeb12 bus: omap_l3_noc: mark l3 irqs as IRQF_NO_THREAD
    e6587d142d02 dm ioctl: fix out of bounds array access when no devices
    7b6944f18cec dm verity: fix DM_VERITY_OPTS_MAX value
    752589cd4ea8 integrity: double check iint_cache was initialized
    f3404a677770 ARM: dts: at91-sama5d27_som1: fix phy address to 7
    1815a24b9483 arm64: dts: ls1043a: mark crypto engine dma coherent
    7447c05e06c4 arm64: dts: ls1012a: mark crypto engine dma coherent
    b6f866bbf7ca arm64: dts: ls1046a: mark crypto engine dma coherent
    e980bd1f7f60 ACPI: video: Add missing callback back for Sony VPCEH3U1E
    431aaecd24ac gcov: fix clang-11+ support
    4748b6d56efe kasan: fix per-page tags for non-page_alloc pages
    037ecab65eb6 squashfs: fix xattr id and id lookup sanity checks
    79b8814d6765 squashfs: fix inode lookup sanity checks
    5b1abfe7d620 platform/x86: intel-vbtn: Stop reporting SW_DOCK events
    599cbcda68ee netsec: restore phy power state after controller reset
    8aa97ae0f5d9 ia64: fix ptrace(PTRACE_SYSCALL_INFO_EXIT) sign
    cb1504b30b6f ia64: fix ia64_syscall_get_set_arguments() for break-based syscalls
    37732ea82e09 block: Suppress uevent for hidden device when removed
    a2d07d077eb3 nfs: we don't support removing system.nfs4_acl
    eed4e1abc997 nvme-pci: add the DISABLE_WRITE_ZEROES quirk for a Samsung PM1725a
    5fc284999c4a nvme-fc: return NVME_SC_HOST_ABORTED_CMD when a command has been aborted
    526abcb05c61 nvme: add NVME_REQ_CANCELLED flag in nvme_cancel_request()
    8cdbee05b83f drm/radeon: fix AGP dependency
    5a0e3fcbeb5a drm/amdgpu: fb BO should be ttm_bo_type_device
    fc8e4af4c3ef drm/amd/display: Revert dram_clock_change_latency for DCN2.1
    6292d84c8af4 regulator: qcom-rpmh: Correct the pmic5_hfsmps515 buck
    c45182707277 u64_stats,lockdep: Fix u64_stats_init() vs lockdep
    f59604786a48 habanalabs: Call put_pid() when releasing control device
    694761bfdd76 sparc64: Fix opcode filtering in handling of no fault loads
    11efb0cda655 irqchip/ingenic: Add support for the JZ4760
    69423418c5eb cifs: change noisy error message to FYI
    981ba9c9a529 atm: idt77252: fix null-ptr-dereference
    6b2844ad7b17 atm: uPD98402: fix incorrect allocation
    40d0a9297f83 net: davicom: Use platform_get_irq_optional()
    b90de232a806 net: wan: fix error return code of uhdlc_init()
    0da0f199e767 net: hisilicon: hns: fix error return code of hns_nic_clear_all_rx_fetch()
    ab60e4f5eb3a NFS: Correct size calculation for create reply length
    785be28d360f nfs: fix PNFS_FLEXFILE_LAYOUT Kconfig default
    d605afb11945 gpiolib: acpi: Add missing IRQF_ONESHOT
    f6c1da94ddb3 cpufreq: blacklist Arm Vexpress platforms in cpufreq-dt-platdev
    1d2c9669135f cifs: ask for more credit on async read/write code paths
    ec7ce1e337ec gianfar: fix jumbo packets+napi+rx overrun crash
    7ef7d296b154 sun/niu: fix wrong RXMAC_BC_FRM_CNT_COUNT count
    d25f579ec557 net: intel: iavf: fix error return code of iavf_init_get_resources()
    d4dd6de6fc90 net: tehuti: fix error return code in bdx_probe()
    e224a789d4a6 ixgbe: Fix memleak in ixgbe_configure_clsu32
    537653a0698b ALSA: hda: ignore invalid NHLT table
    bd272f11a9d4 Revert "r8152: adjust the settings about MAC clock speed down for RTL8153"
    7a12167636bf atm: lanai: dont run lanai_dev_close if not open
    fb0067fcda6a atm: eni: dont release is never initialized
    614a4ba66854 powerpc/4xx: Fix build errors from mfdcr()
    45c1ca3e5784 net: fec: ptp: avoid register access when ipg clock is disabled
    d0f5726ab1df hugetlbfs: hugetlb_fault_mutex_hash() cleanup
    b90344f7d600 Linux 5.4.108
    819eb4d7a85e cifs: Fix preauth hash corruption
    cf113ffd620d x86/apic/of: Fix CPU devicetree-node lookups
    288be0ed9b36 genirq: Disable interrupts for force threaded handlers
    b8ebe853abca firmware/efi: Fix a use after bug in efi_mem_reserve_persistent
    31e17169a116 efi: use 32-bit alignment for efi_guid_t literals
    886dbe0e338b ext4: fix potential error in ext4_do_update_inode
    2f65ae3a7ee3 ext4: do not try to set xattr into ea_inode if value is empty
    474aab448436 ext4: find old entry again if failed to rename whiteout
    de2e1603c125 x86: Introduce TS_COMPAT_RESTART to fix get_nr_restart_syscall()
    076b60af926b x86: Move TS_COMPAT back to asm/thread_info.h
    27ddd2b59045 kernel, fs: Introduce and use set_restart_fn() and arch_set_restart_data()
    f546965c3aac x86/ioapic: Ignore IRQ2 again
    da326ba3b84a perf/x86/intel: Fix a crash caused by zero PEBS status
    51a2b19b554c PCI: rpadlpar: Fix potential drc_name corruption in store functions
    796fc331c3cf counter: stm32-timer-cnt: fix ceiling write max value
    850ca1c0130a iio: hid-sensor-temperature: Fix issues of timestamp channel
    31a2e804ad4a iio: hid-sensor-prox: Fix scale not correct issue
    3fa27c8749cf iio: hid-sensor-humidity: Fix alignment issue of timestamp channel
    4458ae8d4001 iio: adc: ad7949: fix wrong ADC result due to incorrect bit mask
    a605c095bb46 iio: gyro: mpu3050: Fix error handling in mpu3050_trigger_handler
    87163fbba6d2 iio: adis16400: Fix an error code in adis16400_initial_setup()
    ed0625334b94 iio:adc:qcom-spmi-vadc: add default scale to LR_MUX2_BAT_ID channel
    08414c498b4b iio:adc:stm32-adc: Add HAS_IOMEM dependency
    b0a595269e62 usb: typec: tcpm: Invoke power_supply_changed for tcpm-source-psy-
    4baade6fd6e5 usb: gadget: configfs: Fix KASAN use-after-free
    c92aebf2b0f3 USB: replace hardcode maximum usb string length by definition
    f89366164693 usbip: Fix incorrect double assignment to udc->ud.tcp_rx
    251949ec9d95 usb-storage: Add quirk to defeat Kindle's automatic unload
    81b56afc2841 nvme-rdma: fix possible hang when failing to set io queues
    b891d41d01f4 counter: stm32-timer-cnt: Report count function when SLAVE_MODE_DISABLED
    86fd6c0d22a5 scsi: myrs: Fix a double free in myrs_cleanup()
    eb46392d329a scsi: lpfc: Fix some error codes in debugfs
    1f925558e3f1 riscv: Correct SPARSEMEM configuration
    7db8f3be034d kbuild: Fix <linux/version.h> for empty SUBLEVEL or PATCHLEVEL again
    1dad483b1ebc net/qrtr: fix __netdev_alloc_skb call
    f0b09d547713 sunrpc: fix refcount leak for rpc auth modules
    3c57ea09365f vfio: IOMMU_API should be selected
    b439aac77360 svcrdma: disable timeouts on rdma backchannel
    d1ae8f16c223 NFSD: Repair misuse of sv_lock in 5.10.16-rt30.
    4c5fab560cb0 nfsd: Don't keep looking up unhashed files in the nfsd file cache
    49545a7b8b30 nvmet: don't check iosqes,iocqes for discovery controllers
    cf7d7728d8a5 nvme-tcp: fix a NULL deref when receiving a 0-length r2t PDU
    36a4f9164cf6 nvme-tcp: fix possible hang when failing to set io queues
    81c1dbe1070c nvme: fix Write Zeroes limitations
    6712b7fcef9d afs: Stop listxattr() from listing "afs.*" attributes
    c71b93323f37 ASoC: simple-card-utils: Do not handle device clock
    e029384c1835 ASoC: SOF: intel: fix wrong poll bits in dsp power down
    626a484d1ec2 ASoC: SOF: Intel: unregister DMIC device on probe error
    db3d39bcd66a ASoC: fsl_ssi: Fix TDM slot setup for I2S mode
    24c553371add btrfs: fix slab cache flags for free space tree bitmap
    5b3b99525c4f btrfs: fix race when cloning extent buffer during rewind of an old root
    a3e438db75fb ARM: 9044/1: vfp: use undef hook for VFP support detection
    a47b395d441d ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode
    34794bc0e768 s390/vtime: fix increased steal time accounting
    ba4342094d71 Revert "PM: runtime: Update device status before letting suppliers suspend"
    62cf220630a0 ALSA: hda/realtek: Apply headset-mic quirks for Xiaomi Redmibook Air
    613fd762d188 ALSA: hda: generic: Fix the micmute led init state
    5a5f85603e6e ALSA: hda/realtek: apply pin quirk for XiaomiNotebook Pro
    4d35c01a3645 ALSA: dice: fix null pointer dereference when node is disconnected
    d0fc0e7bfda2 ASoC: ak5558: Add MODULE_DEVICE_TABLE
    a592a4c2889e ASoC: ak4458: Add MODULE_DEVICE_TABLE

(From OE-Core rev: d40439f9de4bf0acbcc730f06395b7a75ece0415)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a6aecb7e564f067b786cdec5b2eedd7fc3f2f13d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Chen Qi
9b9ecfdca9 busybox: fix CVE-2021-28831
Backport patch to fix CVE-2021-28831.

(From OE-Core rev: 4d32f16caa3d1ca280af06b892803373e2ab4b7e)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e579dbd9a6b2472ca90f411c0b594da9e38c9aca)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Daniel Ammann
8248f857c0 archiver: Fix typos
(From OE-Core rev: dee125de5f6a4b42ecfae08688641ac783c096f5)

Signed-off-by: Daniel Ammann <daniel.ammann@bytesatwork.ch>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 36de56496bc07c321162555d603fac756297911a)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Khairul Rohaizzat Jamaluddin
04267a31cc qemu: Fix CVE-2020-35517
CVE:
CVE-2020-35517

(From OE-Core rev: 5b69726fdd959f41dc45019700360fcc164150a9)

Signed-off-by: Khairul Rohaizzat Jamaluddin <khairul.rohaizzat.jamaluddin@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 51376edb13eed748395ebe1e56081c092565be9b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Richard Purdie
4783bb12d2 oeqa/selftest: Ensure packages classes are set correctly for maintainers test
The dnf packages aren't parsed if rpm isn't in PACKAGE_CLASSES, which means
the maintainers test fails for OE-Core (where ipk is the default) but not
for poky (where the default is rpm).

Ensure PACKAGE_CLASSES is set so it works in all cases.

[YOCTO #14277]
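
For illustration, a minimal sketch of pinning PACKAGE_CLASSES from an
oe-selftest case (the class and assertion here are hypothetical; the real
change lives in the maintainers test itself):

    from oeqa.selftest.case import OESelftestTestCase
    from oeqa.utils.commands import get_bb_var

    class PackageClassesSketch(OESelftestTestCase):
        def test_maintainers_config(self):
            # Pin rpm packaging so the dnf package data is parsed
            # regardless of the distro default (ipk on OE-Core).
            self.write_config('PACKAGE_CLASSES = "package_rpm"')
            self.assertEqual(get_bb_var('PACKAGE_CLASSES'), 'package_rpm')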

(From OE-Core rev: 9fdfeba3ec11b6b547e033b65ca13f4f5061d770)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 842b11107363357ed933cfcf619f1cf23f0d841e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Richard Purdie
5db4f25ece pseudo: Upgrade to add trailing slashes ignore path fix
Pull in:
  client: strip trailing slashes when opening an ignored path

(From OE-Core rev: 141cd6342ff9ab8f684d81c3b7ba4cb3356bc33b)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9fb92bc13b8a78ef98798f14e728058feb180ba6)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Peter Budny
762fe99172 lib/oe/terminal: Fix tmux new-session on older tmux versions (<1.9)
`tmux new -c` fails on tmux older than 1.9, when that flag was added.
We can omit the flag for older versions of tmux, and the working
directory gets set even without it.
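
A minimal sketch of the version gate, assuming 'tmux -V' prints something
like "tmux 1.8" or "tmux 3.2a" (the helper name is illustrative; the real
change is in lib/oe/terminal.py):

    import re
    import subprocess

    def tmux_new_session_args(workdir):
        # 'new -c' only exists from tmux 1.9 onwards; older versions
        # inherit the caller's working directory anyway.
        out = subprocess.check_output(['tmux', '-V']).decode()
        m = re.search(r'(\d+)\.(\d+)', out)
        if m and (int(m.group(1)), int(m.group(2))) >= (1, 9):
            return ['tmux', 'new', '-c', workdir]
        return ['tmux', 'new']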

(From OE-Core rev: d049d7413b72c22388693b71c5901b2283f83df9)

Signed-off-by: Peter Budny <pbbudny@amazon.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c55c294be6f5119f4c58a4e7a0bc052904126569)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-30 14:37:24 +01:00
Mikko Rapeli
464472d851 bitbake: bitbake: tests/fetch: remove write protected files too
For some reason several git-annex files in Debian 10 buster
are read-only and removing them with "rm -rf" fails.

Fixes test failures like:

$ bitbake-selftest
...
rm: cannot remove '/tmp/tmpwmfn4w64/download/git2/tmp.tmpwmfn4w64.gitsource/annex/objects/f87/4d5/SHA256E-s0--e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/SHA256E-s0--e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855': Permission denied
rm: cannot remove '/tmp/tmpwmfn4w64/download/git2/tmp.tmpwmfn4w64.gitsource/annex/objects/f87/4d5/SHA256E-s0--e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/SHA256E-s0--e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855': Permission denied
EE..................................ssss.sssssssssssssss.sssss.......................................................................................................
======================================================================
ERROR: test_shallow_annex (bb.tests.fetch.GitShallowTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/builder/src/base/poky/bitbake/lib/bb/tests/fetch.py", line 1773, in test_shallow_annex
    fetcher, ud = self.fetch_shallow(uri)
  File "/home/builder/src/base/poky/bitbake/lib/bb/tests/fetch.py", line 1541, in fetch_shallow
    bb.utils.remove(ud.clonedir, recurse=True)
  File "/home/builder/src/base/poky/bitbake/lib/bb/utils.py", line 700, in remove
    subprocess.check_call(cmd + ['rm', '-rf'] + glob.glob(path))
  File "/usr/lib/python3.7/subprocess.py", line 347, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['rm', '-rf', '/tmp/tmpwmfn4w64/download/git2/tmp.tmpwmfn4w64.gitsource']' returned non-zero exit status 1.

Also, one "chmod" call was failing since the .git/annex subdirectory doesn't exist so just chmod
the whole temporary directory which should cover any directory name differences between
different git-annex versions. Fixes tests failing after chmod call:

Running 'export PSEUDO_DISABLED=1; unset _PYTHON_SYSCONFIGDATA_NAME; chmod u+w -R /tmp/tmpwmfn4w64/git//.git/annex' in /tmp/tmpwmfn4w64/git/
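
A sketch of the cleanup idea in Python (the helper is illustrative; the
test code itself shells out to "chmod u+w -R" and "rm -rf"):

    import os
    import shutil
    import stat

    def remove_readonly_tree(path):
        # git-annex marks its object files read-only, so make the whole
        # tree user-writable before attempting the recursive delete.
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                p = os.path.join(root, name)
                os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)
        shutil.rmtree(path)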

(Bitbake rev: 14c5f0735947307b9d69c57f7334fefaea7311b3)

Signed-off-by: Mikko Rapeli <mikko.rapeli@bmw.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7729ef2983c72867e99fad82d671069ba5cb32b2)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-27 15:22:58 +01:00
Niels Avonds
2742e73760 bitbake: fetch/gitsm: Fix crash when using git LFS and submodules
Gitsm fetcher crashes when cloning a repository that contains LFS files.
This happens because the unpack method is called during download, but the
submodules have not been downloaded yet at this point.

This issue was introduced in this
commit: 977b7268bf

[YOCTO #14283]

(Bitbake rev: 88d1d2b65a70081389a1c8f9b590a013a1cb4452)

Signed-off-by: Niels Avonds <niels@codebits.be>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 26caedc4d2e9b5a0f1d57f9291754a7f6c5e437e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-27 15:22:58 +01:00
Ross Burton
3f8758b54d bitbake: bitbake-server: ensure server timeout is a float
bitbake-server is spawned by process.py and passes the arguments it is
given to ProcessServer.  There's some type confusion here:

bitbake-server is called with a string representation of the timeout,
which may be None.  If the timeout is not set, pass 0 instead of None.

Inside bitbake-server a ProcessServer is created which expects the
timeout to be a float, not a string, so always float() the value.

[ YOCTO #14350 ]
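
A condensed sketch of the two sides of the fix (argument plumbing is
simplified; the real code is in lib/bb/server/process.py and
bin/bitbake-server):

    # Spawning side (process.py): never hand the string "None" to the
    # child process; an unset timeout becomes 0.
    server_timeout = None
    spawn_arg = str(server_timeout or 0)

    # Server side (bitbake-server): the value always arrives as a
    # string, so coerce it before constructing ProcessServer.
    timeout = float(spawn_arg)
    assert timeout == 0.0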

(Bitbake rev: f2cfb9f6710808ea37aecb6c34c62f92191e1d4b)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c93ae1f861208f6d39fd15c84fbcd0e2b54331f5)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-27 15:22:58 +01:00
Mikko Rapeli
b7cba05f82 bitbake: bitbake: tests/fetch: fix test execution without .gitconfig
A CI user validating changes does not have any git push rights or even a
.gitconfig file, so fix the tests to run in that environment by setting
user.name and user.email for the repository before committing changes.

Fixes errors like:

ERROR: test_that_unpack_throws_an_error_when_the_git_clone_nor_shallow_tarball_exist (bb.tests.fetch.GitShallowTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/builder/src/base/poky/bitbake/lib/bb/tests/fetch.py", line 2055, in test_that_unpack_throws_an_error_when_the_git_clone_nor_shallow_tarball_exist
    self.add_empty_file('a')
  File "/home/builder/src/base/poky/bitbake/lib/bb/tests/fetch.py", line 1562, in add_empty_file
    self.git(['commit', '-m', msg, path], cwd)
  File "/home/builder/src/base/poky/bitbake/lib/bb/tests/fetch.py", line 1553, in git
    return bb.process.run(cmd, cwd=cwd)[0]
  File "/home/builder/src/base/poky/bitbake/lib/bb/process.py", line 184, in run
    raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
bb.process.ExecutionError: Execution of 'git commit -m a a' failed with exit code 128:

*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.
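
A sketch of making each test repository self-contained (the helper name
is illustrative; the tests drive git through their own git() wrapper):

    import subprocess

    def init_test_repo(path):
        subprocess.check_call(['git', 'init', path])
        # A repo-local identity keeps commits working even when the CI
        # user has no ~/.gitconfig at all.
        subprocess.check_call(['git', '-C', path, 'config',
                               'user.email', 'you@example.com'])
        subprocess.check_call(['git', '-C', path, 'config',
                               'user.name', 'Your Name'])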

(Bitbake rev: 1e1d1187e602aa1ef50c23551eec07f1a0cd81ef)

Signed-off-by: Mikko Rapeli <mikko.rapeli@bmw.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 57c0811f1ee19b6619f4840a39e01e3cb98c34c4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-27 15:22:58 +01:00
Richard Purdie
a49e5c0f4f bitbake: runqueue: Fix deferred task issues
In a multiconfig situation there are circumstances where firstly, tasks
are deferred when they shouldn't be, then later, tasks can end up as
both covered and not covered.

This patch fixes two related issues. Firstly, the stamp validity checking
is done up front in the build and not reevaulated. When rebuilding the
deferred task list after scenequeue hash change updates, we need therefore
need to check if a task was in notcovered *or* covered when deciding to
defer it. This avoids strange logs like:

NOTE: Running setscene task X of Y (mc:initrfs_guest:/A/alsa-state.bb:do_deploy_source_date_epoch_setscene)
NOTE: Deferring mc:initrfs_guest:/A/alsa-state.bb:do_deploy_source_date_epoch after mc:host:/A/alsa-state.bb:do_deploy_source_date_epoch

where tasks have run but are then deferred.

Since we're recalculating the whole list, we also need to clear it before
iterating to rebuild it. By ensuring covered tasks aren't added to the
deferred queue, the covered + notcovered issue should also be avoided.
in the task deadlock forcing code.

[YOCTO #14342]
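
An illustrative reduction of the corrected deferral pass (names and
structure are simplified well beyond what runqueue.py actually does):

    def rebuild_deferred(candidates, covered, notcovered, defer_target):
        # Rebuild from scratch so stale entries from the previous pass
        # cannot survive.
        deferred = {}
        for tid in candidates:
            if tid in covered or tid in notcovered:
                # Stamp validity was decided up front; a task already
                # judged covered or notcovered is never deferred again.
                continue
            deferred[tid] = defer_target[tid]
        return deferred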

(Bitbake rev: 1ec855731800cf8e2bae2b1e7241640e0bad8aae)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3c8717fb9ee1114dd80fc1ad22ee6c9e312bdac7)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-04-27 15:22:58 +01:00
5601 changed files with 181216 additions and 285383 deletions

.gitignore

@@ -31,9 +31,4 @@ pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log
_toaster_clones/
downloads/
sstate-cache/
toaster.sqlite
.vscode/
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log


@@ -1,2 +1,2 @@
# Template settings
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf/templates/default}
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}


@@ -1,71 +0,0 @@
OpenEmbedded-Core and Yocto Project Maintainer Information
==========================================================
OpenEmbedded and Yocto Project work jointly together to maintain the metadata,
layers, tools and sub-projects that make up their ecosystems.
The projects operate through collaborative development. This currently takes
place on mailing lists for many components as the "pull request on github"
workflow works well for single or small numbers of maintainers but we have
a large number, all with different specialisms and benefit from the mailing
list review process. Changes therefore undergo peer review through mailing
lists in many cases.
This file aims to acknowledge people with specific skills/knowledge/interest
both to recognise their contributions but also empower them to help lead and
curate those components. Where we have people with specialist knowledge in
particular areas, during review patches/feedback from these people in these
areas would generally carry weight.
This file is maintained in OE-Core but may refer to components that are separate
to it if that makes sense in the context of maintainership. The README of specific
layers and components should ultimately be definitive about the patch process and
maintainership for the component.
Recipe Maintainers
------------------
See meta/conf/distro/include/maintainers.inc
Component/Subsystem Maintainers
-------------------------------
* Kernel (inc. linux-yocto, perf): Bruce Ashfield
* Reproducible Builds: Joshua Watt
* Toaster: David Reyna
* Hash-Equivalence: Joshua Watt
* Recipe upgrade infrastructure: Alex Kanavin
* Toolchain: Khem Raj
* ptest-runner: Aníbal Limón
* opkg: Alex Stewart
* devtool: Saul Wold
* eSDK: Saul Wold
* overlayfs: Vyacheslav Yurkov
Maintainers needed
------------------
* Pseudo
* Layer Index
* recipetool
* QA framework/automated testing
* error reporting system/web UI
* wic
* Patchwork
* Patchtest
* Matchbox
* Sato
* Autobuilder
Layer Maintainers needed
------------------------
* meta-gplv2 (ideally new strategy but active maintainer welcome)
Shadow maintainers/development needed
--------------------------------------
* toaster
* bitbake

Makefile (new file)

@@ -0,0 +1,35 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
DESTDIR = final

ifeq ($(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi),0)
$(error "The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed")
endif

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile.sphinx clean publish

publish: Makefile.sphinx html singlehtml
	rm -rf $(BUILDDIR)/$(DESTDIR)/
	mkdir -p $(BUILDDIR)/$(DESTDIR)/
	cp -r $(BUILDDIR)/html/* $(BUILDDIR)/$(DESTDIR)/
	cp $(BUILDDIR)/singlehtml/index.html $(BUILDDIR)/$(DESTDIR)/singleindex.html
	sed -i -e 's@index.html#@singleindex.html#@g' $(BUILDDIR)/$(DESTDIR)/singleindex.html

clean:
	@rm -rf $(BUILDDIR)

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile.sphinx
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

README.OE-Core (new file)

@@ -0,0 +1,29 @@
OpenEmbedded-Core
=================
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/
The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
http://yoctoproject.org/documentation
Contributing
------------
Please refer to
http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches.
Mailing list:
http://lists.openembedded.org/mailman/listinfo/openembedded-core
Source code:
http://git.openembedded.org/openembedded-core/


@@ -1,33 +0,0 @@
OpenEmbedded-Core
=================
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
https://www.openembedded.org/
The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
https://docs.yoctoproject.org/
Contributing
------------
Please refer to our contributor guide here: https://docs.yoctoproject.org/dev/contributor-guide/
for full details on how to submit changes.
As a quick guide, patches should be sent to openembedded-core@lists.openembedded.org
The git command to do that would be:
git send-email -M -1 --to openembedded-core@lists.openembedded.org
Mailing list:
https://lists.openembedded.org/g/openembedded-core
Source code:
https://git.openembedded.org/openembedded-core/

README.hardware (symbolic link)

@@ -0,0 +1 @@
meta-yocto-bsp/README.hardware


@@ -1 +0,0 @@
meta-yocto-bsp/README.hardware.md


@@ -1 +0,0 @@
README.poky.md

README.poky (symbolic link)

@@ -0,0 +1 @@
meta-poky/README.poky


@@ -1 +0,0 @@
meta-poky/README.poky.md


@@ -1,22 +0,0 @@
How to Report a Potential Vulnerability?
========================================
If you would like to report a public issue (for example, one with a released
CVE number), please report it using the
[https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Security Security Bugzilla]
If you are dealing with a not-yet released or urgent issue, please send a
message to security AT yoctoproject DOT org, including as many details as
possible: the layer or software module affected, the recipe and its version,
and any example code, if available.
Branches maintained with security fixes
---------------------------------------
See [https://wiki.yoctoproject.org/wiki/Stable_Release_and_LTS Stable release and LTS]
for detailed info regarding the policies and maintenance of Stable branches.
The [https://wiki.yoctoproject.org/wiki/Releases Release page] contains a list of all
releases of the Yocto Project. Versions in grey are no longer actively maintained with
security patches, but well-tested patches may still be accepted for them for
significant issues.


@@ -7,57 +7,29 @@ One of BitBake's main users, OpenEmbedded, takes this core and builds embedded Linux software
stacks using a task-oriented approach.
For information about Bitbake, see the OpenEmbedded website:
https://www.openembedded.org/
http://www.openembedded.org/
Bitbake plain documentation can be found under the doc directory or its integrated
html version at the Yocto Project website:
https://docs.yoctoproject.org
Bitbake requires Python version 3.8 or newer.
Contributing
------------
Please refer to our contributor guide here: https://docs.yoctoproject.org/contributor-guide/
for full details on how to submit changes.
As a quick guide, patches should be sent to bitbake-devel@lists.openembedded.org
The git command to do that would be:
Please refer to
http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches, just note that the latter documentation is intended
for OpenEmbedded (and its core) not bitbake patches (bitbake-devel@lists.openembedded.org)
but in general main guidelines apply. Once the commit(s) have been created, the way to send
the patch is through git-send-email. For example, to send the last commit (HEAD) on current
branch, type:
git send-email -M -1 --to bitbake-devel@lists.openembedded.org
If you're sending a patch related to the BitBake manual, make sure you copy
the Yocto Project documentation mailing list:
git send-email -M -1 --to bitbake-devel@lists.openembedded.org --cc docs@lists.yoctoproject.org
Mailing list:
https://lists.openembedded.org/g/bitbake-devel
http://lists.openembedded.org/mailman/listinfo/bitbake-devel
Source code:
https://git.openembedded.org/bitbake/
Testing
-------
Bitbake has a testsuite located in lib/bb/tests/ whichs aim to try and prevent regressions.
You can run this with "bitbake-selftest". In particular the fetcher is well covered since
it has so many corner cases. The datastore has many tests too. Testing with the testsuite is
recommended before submitting patches, particularly to the fetcher and datastore. We also
appreciate new test cases and may require them for more obscure issues.
To run the tests "zstd" and "git" must be installed.
The assumption is made that this testsuite is run from an initialized OpenEmbedded build
environment (i.e. `source oe-init-build-env` is used). If this is not the case, run the
testsuite as follows:
export PATH=$(pwd)/bin:$PATH
bin/bitbake-selftest
The testsuite can alternatively be executed using pytest, e.g. obtained from PyPI (in this
case, the PATH is configured automatically):
pytest
http://git.openembedded.org/bitbake/


@@ -1,24 +0,0 @@
How to Report a Potential Vulnerability?
========================================
If you would like to report a public issue (for example, one with a released
CVE number), please report it using the
[https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Security Security Bugzilla].
If you have a patch ready, submit it following the same procedure as any other
patch as described in README.md.
If you are dealing with a not-yet released or urgent issue, please send a
message to security AT yoctoproject DOT org, including as many details as
possible: the layer or software module affected, the recipe and its version,
and any example code, if available.
Branches maintained with security fixes
---------------------------------------
See [https://wiki.yoctoproject.org/wiki/Stable_Release_and_LTS Stable release and LTS]
for detailed info regarding the policies and maintenance of Stable branches.
The [https://wiki.yoctoproject.org/wiki/Releases Release page] contains a list of all
releases of the Yocto Project. Versions in grey are no longer actively maintained with
security patches, but well-tested patches may still be accepted for them for
significant issues.


@@ -12,8 +12,6 @@
import os
import sys
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
'lib'))
@@ -25,9 +23,10 @@ except RuntimeError as exc:
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
__version__ = "2.8.0"
__version__ = "1.50.0"
if __name__ == "__main__":
if __version__ != bb.__version__:


@@ -11,8 +11,6 @@
import os
import sys
import warnings
warnings.simplefilter("default")
import argparse
import logging
import pickle
@@ -28,7 +26,6 @@ logger = bb.msg.logger_create(myname)
is_dump = myname == 'bitbake-dumpsig'
def find_siginfo(tinfoil, pn, taskname, sigs=None):
result = None
tinfoil.set_event_mask(['bb.event.FindSigInfoResult',
@@ -54,7 +51,6 @@ def find_siginfo(tinfoil, pn, taskname, sigs=None):
sys.exit(2)
return result
def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
""" Find the most recent signature files for the specified PN/task """
@@ -63,25 +59,22 @@ def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
if sig1 and sig2:
sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
if not sigfiles:
if len(sigfiles) == 0:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
elif sig1 not in sigfiles:
elif not sig1 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig1))
sys.exit(1)
elif sig2 not in sigfiles:
elif not sig2 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
else:
sigfiles = find_siginfo(bbhandler, pn, taskname)
latestsigs = sorted(sigfiles.keys(), key=lambda h: sigfiles[h]['time'])[-2:]
if not latestsigs:
filedates = find_siginfo(bbhandler, pn, taskname)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-2:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
sig1 = latestsigs[0]
sig2 = latestsigs[1]
latestfiles = [sigfiles[sig1]['path'], sigfiles[sig2]['path']]
return latestfiles
@@ -92,14 +85,14 @@ def recursecb(key, hash1, hash2):
hashfiles = find_siginfo(tinfoil, key, None, hashes)
recout = []
if not hashfiles:
if len(hashfiles) == 0:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
elif hash1 not in hashfiles:
elif not hash1 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
elif hash2 not in hashfiles:
elif not hash2 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1]['path'], hashfiles[hash2]['path'], recursecb, color=color)
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
for change in out2:
for line in change.splitlines():
recout.append(' ' + line)
@@ -116,36 +109,36 @@ parser.add_argument('-D', '--debug',
if is_dump:
parser.add_argument("-t", "--task",
help="find the signature data file for the last run of the specified task",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
help="find the signature data file for the last run of the specified task",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("sigdatafile1",
help="Signature file to dump. Not used when using -t/--task.",
action="store", nargs='?', metavar="sigdatafile")
help="Signature file to dump. Not used when using -t/--task.",
action="store", nargs='?', metavar="sigdatafile")
else:
parser.add_argument('-c', '--color',
help='Colorize the output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
help='Colorize the output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
parser.add_argument('-d', '--dump',
help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
action='store_true')
help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
action='store_true')
parser.add_argument("-t", "--task",
help="find the signature data files for the last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
help="find the signature data files for the last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("-s", "--signature",
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
parser.add_argument("sigdatafile1",
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
parser.add_argument("sigdatafile2",
help="Second signature file to compare",
action="store", nargs='?')
help="Second signature file to compare",
action="store", nargs='?')
options = parser.parse_args()
if is_dump:
@@ -163,8 +156,7 @@ if options.taskargs:
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
if not options.dump and options.sigargs:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0],
options.sigargs[1])
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0], options.sigargs[1])
else:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1])
@@ -173,8 +165,7 @@ if options.taskargs:
output = bb.siggen.dump_sigfile(files[-1])
else:
if len(files) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (
options.taskargs[0], options.taskargs[1]))
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (options.taskargs[0], options.taskargs[1]))
sys.exit(1)
# Recurse into signature comparison


@@ -1,60 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2021 Richard Purdie
#
# SPDX-License-Identifier: GPL-2.0-only
#
import argparse
import io
import os
import sys
import warnings
warnings.simplefilter("default")
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.tinfoil
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Bitbake Query Variable")
parser.add_argument("variable", help="variable name to query")
parser.add_argument("-r", "--recipe", help="Recipe name to query", default=None, required=False)
parser.add_argument('-u', '--unexpand', help='Do not expand the value (with --value)', action="store_true")
parser.add_argument('-f', '--flag', help='Specify a variable flag to query (with --value)', default=None)
parser.add_argument('--value', help='Only report the value, no history and no variable name', action="store_true")
parser.add_argument('-q', '--quiet', help='Silence bitbake server logging', action="store_true")
parser.add_argument('--ignore-undefined', help='Suppress any errors related to undefined variables', action="store_true")
args = parser.parse_args()
if not args.value:
if args.unexpand:
sys.exit("--unexpand only makes sense with --value")
if args.flag:
sys.exit("--flag only makes sense with --value")
quiet = args.quiet or args.value
with bb.tinfoil.Tinfoil(tracking=True, setup_logging=not quiet) as tinfoil:
if args.recipe:
tinfoil.prepare(quiet=3 if quiet else 2)
d = tinfoil.parse_recipe(args.recipe)
else:
tinfoil.prepare(quiet=2, config_only=True)
d = tinfoil.config_data
value = None
if args.flag:
value = d.getVarFlag(args.variable, args.flag, expand=not args.unexpand)
if value is None and not args.ignore_undefined:
sys.exit(f"The flag '{args.flag}' is not defined for variable '{args.variable}'")
else:
value = d.getVar(args.variable, expand=not args.unexpand)
if value is None and not args.ignore_undefined:
sys.exit(f"The variable '{args.variable}' is not defined")
if args.value:
print(str(value if value is not None else ""))
else:
bb.data.emit_var(args.variable, d=d, all=True)


@@ -13,10 +13,6 @@ import pprint
import sys
import threading
import time
import warnings
import netrc
import json
warnings.simplefilter("default")
try:
import tqdm
@@ -38,42 +34,18 @@ except ImportError:
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import hashserv
import bb.asyncrpc
DEFAULT_ADDRESS = 'unix://./hashserve.sock'
METHOD = 'stress.test.method'
def print_user(u):
print(f"Username: {u['username']}")
if "permissions" in u:
print("Permissions: " + " ".join(u["permissions"]))
if "token" in u:
print(f"Token: {u['token']}")
def main():
def handle_get(args, client):
result = client.get_taskhash(args.method, args.taskhash, all_properties=True)
if not result:
return 0
print(json.dumps(result, sort_keys=True, indent=4))
return 0
def handle_get_outhash(args, client):
result = client.get_outhash(args.method, args.outhash, args.taskhash)
if not result:
return 0
print(json.dumps(result, sort_keys=True, indent=4))
return 0
def handle_stats(args, client):
if args.reset:
s = client.reset_stats()
else:
s = client.get_stats()
print(json.dumps(s, sort_keys=True, indent=4))
pprint.pprint(s)
return 0
def handle_stress(args, client):
@@ -82,24 +54,25 @@ def main():
nonlocal missed_hashes
nonlocal max_time
with hashserv.create_client(args.address) as client:
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))
client = hashserv.create_client(args.address)
start_time = time.perf_counter()
l = client.get_unihash(METHOD, taskhash.hexdigest())
elapsed = time.perf_counter() - start_time
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))
with lock:
if l:
found_hashes += 1
else:
missed_hashes += 1
start_time = time.perf_counter()
l = client.get_unihash(METHOD, taskhash.hexdigest())
elapsed = time.perf_counter() - start_time
max_time = max(elapsed, max_time)
pbar.update()
with lock:
if l:
found_hashes += 1
else:
missed_hashes += 1
max_time = max(elapsed, max_time)
pbar.update()
max_time = 0
found_hashes = 0
@@ -138,114 +111,12 @@ def main():
with lock:
pbar.update()
def handle_remove(args, client):
where = {k: v for k, v in args.where}
if where:
result = client.remove(where)
print("Removed %d row(s)" % (result["count"]))
else:
print("No query specified")
def handle_clean_unused(args, client):
result = client.clean_unused(args.max_age)
print("Removed %d rows" % (result["count"]))
return 0
def handle_refresh_token(args, client):
r = client.refresh_token(args.username)
print_user(r)
def handle_set_user_permissions(args, client):
r = client.set_user_perms(args.username, args.permissions)
print_user(r)
def handle_get_user(args, client):
r = client.get_user(args.username)
print_user(r)
def handle_get_all_users(args, client):
users = client.get_all_users()
print("{username:20}| {permissions}".format(username="Username", permissions="Permissions"))
print(("-" * 20) + "+" + ("-" * 20))
for u in users:
print("{username:20}| {permissions}".format(username=u["username"], permissions=" ".join(u["permissions"])))
def handle_new_user(args, client):
r = client.new_user(args.username, args.permissions)
print_user(r)
def handle_delete_user(args, client):
r = client.delete_user(args.username)
print_user(r)
def handle_get_db_usage(args, client):
usage = client.get_db_usage()
print(usage)
tables = sorted(usage.keys())
print("{name:20}| {rows:20}".format(name="Table name", rows="Rows"))
print(("-" * 20) + "+" + ("-" * 20))
for t in tables:
print("{name:20}| {rows:<20}".format(name=t, rows=usage[t]["rows"]))
print()
total_rows = sum(t["rows"] for t in usage.values())
print(f"Total rows: {total_rows}")
def handle_get_db_query_columns(args, client):
columns = client.get_db_query_columns()
print("\n".join(sorted(columns)))
def handle_gc_status(args, client):
result = client.gc_status()
if not result["mark"]:
print("No Garbage collection in progress")
return 0
print("Current Mark: %s" % result["mark"])
print("Total hashes to keep: %d" % result["keep"])
print("Total hashes to remove: %s" % result["remove"])
return 0
def handle_gc_mark(args, client):
where = {k: v for k, v in args.where}
result = client.gc_mark(args.mark, where)
print("New hashes marked: %d" % result["count"])
return 0
def handle_gc_sweep(args, client):
result = client.gc_sweep(args.mark)
print("Removed %d rows" % result["count"])
return 0
def handle_unihash_exists(args, client):
result = client.unihash_exists(args.unihash)
if args.quiet:
return 0 if result else 1
print("true" if result else "false")
return 0
parser = argparse.ArgumentParser(description='Hash Equivalence Client')
parser.add_argument('--address', default=DEFAULT_ADDRESS, help='Server address (default "%(default)s")')
parser.add_argument('--log', default='WARNING', help='Set logging level')
parser.add_argument('--login', '-l', metavar="USERNAME", help="Authenticate as USERNAME")
parser.add_argument('--password', '-p', metavar="TOKEN", help="Authenticate using token TOKEN")
parser.add_argument('--become', '-b', metavar="USERNAME", help="Impersonate user USERNAME (if allowed) when performing actions")
parser.add_argument('--no-netrc', '-n', action="store_false", dest="netrc", help="Do not use .netrc")
subparsers = parser.add_subparsers()
get_parser = subparsers.add_parser('get', help="Get the unihash for a taskhash")
get_parser.add_argument("method", help="Method to query")
get_parser.add_argument("taskhash", help="Task hash to query")
get_parser.set_defaults(func=handle_get)
get_outhash_parser = subparsers.add_parser('get-outhash', help="Get output hash information")
get_outhash_parser.add_argument("method", help="Method to query")
get_outhash_parser.add_argument("outhash", help="Output hash to query")
get_outhash_parser.add_argument("taskhash", help="Task hash to query")
get_outhash_parser.set_defaults(func=handle_get_outhash)
stats_parser = subparsers.add_parser('stats', help='Show server stats')
stats_parser.add_argument('--reset', action='store_true',
help='Reset server stats')
@@ -264,64 +135,6 @@ def main():
help='Include string in outhash')
stress_parser.set_defaults(func=handle_stress)
remove_parser = subparsers.add_parser('remove', help="Remove hash entries")
remove_parser.add_argument("--where", "-w", metavar="KEY VALUE", nargs=2, action="append", default=[],
help="Remove entries from table where KEY == VALUE")
remove_parser.set_defaults(func=handle_remove)
clean_unused_parser = subparsers.add_parser('clean-unused', help="Remove unused database entries")
clean_unused_parser.add_argument("max_age", metavar="SECONDS", type=int, help="Remove unused entries older than SECONDS old")
clean_unused_parser.set_defaults(func=handle_clean_unused)
refresh_token_parser = subparsers.add_parser('refresh-token', help="Refresh auth token")
refresh_token_parser.add_argument("--username", "-u", help="Refresh the token for another user (if authorized)")
refresh_token_parser.set_defaults(func=handle_refresh_token)
set_user_perms_parser = subparsers.add_parser('set-user-perms', help="Set new permissions for user")
set_user_perms_parser.add_argument("--username", "-u", help="Username", required=True)
set_user_perms_parser.add_argument("permissions", metavar="PERM", nargs="*", default=[], help="New permissions")
set_user_perms_parser.set_defaults(func=handle_set_user_permissions)
get_user_parser = subparsers.add_parser('get-user', help="Get user")
get_user_parser.add_argument("--username", "-u", help="Username")
get_user_parser.set_defaults(func=handle_get_user)
get_all_users_parser = subparsers.add_parser('get-all-users', help="List all users")
get_all_users_parser.set_defaults(func=handle_get_all_users)
new_user_parser = subparsers.add_parser('new-user', help="Create new user")
new_user_parser.add_argument("--username", "-u", help="Username", required=True)
new_user_parser.add_argument("permissions", metavar="PERM", nargs="*", default=[], help="New permissions")
new_user_parser.set_defaults(func=handle_new_user)
delete_user_parser = subparsers.add_parser('delete-user', help="Delete user")
delete_user_parser.add_argument("--username", "-u", help="Username", required=True)
delete_user_parser.set_defaults(func=handle_delete_user)
db_usage_parser = subparsers.add_parser('get-db-usage', help="Database Usage")
db_usage_parser.set_defaults(func=handle_get_db_usage)
db_query_columns_parser = subparsers.add_parser('get-db-query-columns', help="Show columns that can be used in database queries")
db_query_columns_parser.set_defaults(func=handle_get_db_query_columns)
gc_status_parser = subparsers.add_parser("gc-status", help="Show garbage collection status")
gc_status_parser.set_defaults(func=handle_gc_status)
gc_mark_parser = subparsers.add_parser('gc-mark', help="Mark hashes to be kept for garbage collection")
gc_mark_parser.add_argument("mark", help="Mark for this garbage collection operation")
gc_mark_parser.add_argument("--where", "-w", metavar="KEY VALUE", nargs=2, action="append", default=[],
help="Keep entries in table where KEY == VALUE")
gc_mark_parser.set_defaults(func=handle_gc_mark)
gc_sweep_parser = subparsers.add_parser('gc-sweep', help="Perform garbage collection and delete any entries that are not marked")
gc_sweep_parser.add_argument("mark", help="Mark for this garbage collection operation")
gc_sweep_parser.set_defaults(func=handle_gc_sweep)
unihash_exists_parser = subparsers.add_parser('unihash-exists', help="Check if a unihash is known to the server")
unihash_exists_parser.add_argument("--quiet", action="store_true", help="Don't print status. Instead, exit with 0 if unihash exists and 1 if it does not")
unihash_exists_parser.add_argument("unihash", help="Unihash to check")
unihash_exists_parser.set_defaults(func=handle_unihash_exists)
args = parser.parse_args()
logger = logging.getLogger('hashserv')
@@ -335,30 +148,11 @@ def main():
console.setLevel(level)
logger.addHandler(console)
login = args.login
password = args.password
if login is None and args.netrc:
try:
n = netrc.netrc()
auth = n.authenticators(args.address)
if auth is not None:
login, _, password = auth
except FileNotFoundError:
pass
except netrc.NetrcParseError as e:
sys.stderr.write(f"Error parsing {e.filename}:{e.lineno}: {e.msg}\n")
func = getattr(args, 'func', None)
if func:
try:
with hashserv.create_client(args.address, login, password) as client:
if args.become:
client.become_user(args.become)
return func(args, client)
except bb.asyncrpc.InvokeError as e:
print(f"ERROR: {e}")
return 1
client = hashserv.create_client(args.address)
return func(args, client)
return 0


@@ -10,162 +10,55 @@ import sys
import logging
import argparse
import sqlite3
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), "lib"))
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import hashserv
from hashserv.server import DEFAULT_ANON_PERMS
VERSION = "1.0.0"
DEFAULT_BIND = "unix://./hashserve.sock"
DEFAULT_BIND = 'unix://./hashserve.sock'
def main():
parser = argparse.ArgumentParser(
description="Hash Equivalence Reference Server. Version=%s" % VERSION,
formatter_class=argparse.RawTextHelpFormatter,
epilog="""
The bind address may take one of the following formats:
unix://PATH - Bind to unix domain socket at PATH
ws://ADDRESS:PORT - Bind to websocket on ADDRESS:PORT
ADDRESS:PORT - Bind to raw TCP socket on ADDRESS:PORT
parser = argparse.ArgumentParser(description='Hash Equivalence Reference Server. Version=%s' % VERSION,
epilog='''The bind address is the path to a unix domain socket if it is
prefixed with "unix://". Otherwise, it is an IP address
and port in form ADDRESS:PORT. To bind to all addresses, leave
the ADDRESS empty, e.g. "--bind :8686". To bind to a specific
IPv6 address, enclose the address in "[]", e.g.
"--bind [::1]:8686"'''
)
To bind to all addresses, leave the ADDRESS empty, e.g. "--bind :8686" or
"--bind ws://:8686". To bind to a specific IPv6 address, enclose the address in
"[]", e.g. "--bind [::1]:8686" or "--bind ws://[::1]:8686"
Note that the default Anonymous permissions are designed to not break existing
server instances when upgrading, but are not particularly secure defaults. If
you want to use authentication, it is recommended that you use "--anon-perms
@read" to only give anonymous users read access, or "--anon-perms @none" to
give un-authenticated users no access at all.
Setting "--anon-perms @all" or "--anon-perms @user-admin" is not allowed, since
this would allow anonymous users to manage all user accounts, which is a bad
idea.
If you are using user authentication, you should run your server in websockets
mode with an SSL terminating load balancer in front of it (as this server does
not implement SSL). Otherwise all usernames and passwords will be transmitted
in the clear. When configured this way, clients can connect using a secure
websocket, as in "wss://SERVER:PORT"
The following permissions are supported by the server:
@none - No permissions
@read - The ability to read equivalent hashes from the server
@report - The ability to report equivalent hashes to the server
@db-admin - Manage the hash database(s). This includes cleaning the
database, removing hashes, etc.
@user-admin - The ability to manage user accounts. This includes, creating
users, deleting users, resetting login tokens, and assigning
permissions.
@all - All possible permissions, including any that may be added
in the future
""",
)
parser.add_argument(
"-b",
"--bind",
default=os.environ.get("HASHSERVER_BIND", DEFAULT_BIND),
help='Bind address (default $HASHSERVER_BIND, "%(default)s")',
)
parser.add_argument(
"-d",
"--database",
default=os.environ.get("HASHSERVER_DB", "./hashserv.db"),
help='Database file (default $HASHSERVER_DB, "%(default)s")',
)
parser.add_argument(
"-l",
"--log",
default=os.environ.get("HASHSERVER_LOG_LEVEL", "WARNING"),
help='Set logging level (default $HASHSERVER_LOG_LEVEL, "%(default)s")',
)
parser.add_argument(
"-u",
"--upstream",
default=os.environ.get("HASHSERVER_UPSTREAM", None),
help="Upstream hashserv to pull hashes from ($HASHSERVER_UPSTREAM)",
)
parser.add_argument(
"-r",
"--read-only",
action="store_true",
help="Disallow write operations from clients ($HASHSERVER_READ_ONLY)",
)
parser.add_argument(
"--db-username",
default=os.environ.get("HASHSERVER_DB_USERNAME", None),
help="Database username ($HASHSERVER_DB_USERNAME)",
)
parser.add_argument(
"--db-password",
default=os.environ.get("HASHSERVER_DB_PASSWORD", None),
help="Database password ($HASHSERVER_DB_PASSWORD)",
)
parser.add_argument(
"--anon-perms",
metavar="PERM[,PERM[,...]]",
default=os.environ.get("HASHSERVER_ANON_PERMS", ",".join(DEFAULT_ANON_PERMS)),
help='Permissions to give anonymous users (default $HASHSERVER_ANON_PERMS, "%(default)s")',
)
parser.add_argument(
"--admin-user",
default=os.environ.get("HASHSERVER_ADMIN_USER", None),
help="Create default admin user with name ADMIN_USER ($HASHSERVER_ADMIN_USER)",
)
parser.add_argument(
"--admin-password",
default=os.environ.get("HASHSERVER_ADMIN_PASSWORD", None),
help="Create default admin user with password ADMIN_PASSWORD ($HASHSERVER_ADMIN_PASSWORD)",
)
parser.add_argument('-b', '--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
parser.add_argument('-d', '--database', default='./hashserv.db', help='Database file (default "%(default)s")')
parser.add_argument('-l', '--log', default='WARNING', help='Set logging level')
parser.add_argument('-u', '--upstream', help='Upstream hashserv to pull hashes from')
parser.add_argument('-r', '--read-only', action='store_true', help='Disallow write operations from clients')
args = parser.parse_args()
logger = logging.getLogger("hashserv")
logger = logging.getLogger('hashserv')
level = getattr(logging, args.log.upper(), None)
if not isinstance(level, int):
raise ValueError("Invalid log level: %s (Try ERROR/WARNING/INFO/DEBUG)" % args.log)
raise ValueError('Invalid log level: %s' % args.log)
logger.setLevel(level)
console = logging.StreamHandler()
console.setLevel(level)
logger.addHandler(console)
read_only = (os.environ.get("HASHSERVER_READ_ONLY", "0") == "1") or args.read_only
if "," in args.anon_perms:
anon_perms = args.anon_perms.split(",")
else:
anon_perms = args.anon_perms.split()
server = hashserv.create_server(
args.bind,
args.database,
upstream=args.upstream,
read_only=read_only,
db_username=args.db_username,
db_password=args.db_password,
anon_perms=anon_perms,
admin_username=args.admin_user,
admin_password=args.admin_password,
)
server = hashserv.create_server(args.bind, args.database, upstream=args.upstream, read_only=args.read_only)
server.serve_forever()
return 0
if __name__ == "__main__":
if __name__ == '__main__':
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)
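
Note: the `--anon-perms` handling above accepts either a comma-separated or a whitespace-separated permission list. A small sketch of that parsing logic, extracted for clarity:

# Sketch of the anon-perms parsing above: commas take precedence,
# otherwise the value is split on whitespace.
def parse_perms(value):
    if "," in value:
        return value.split(",")
    return value.split()

assert parse_perms("@read,@report") == ["@read", "@report"]
assert parse_perms("@read @report") == ["@read", "@report"]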

View File

@@ -14,8 +14,6 @@ import logging
import os
import sys
import argparse
import warnings
warnings.simplefilter("default")
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
@@ -68,11 +66,11 @@ def main():
registered = False
for plugin in plugins:
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if hasattr(plugin, 'register_commands'):
registered = True
plugin.register_commands(subparsers)
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if not registered:
logger.error("No commands registered - missing plugins?")

View File

@@ -1,83 +1,49 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import sys,logging
import argparse
import warnings
warnings.simplefilter("default")
import optparse
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), "lib"))
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))
import prserv
import prserv.serv
VERSION = "1.1.0"
__version__="1.0.0"
PRHOST_DEFAULT="0.0.0.0"
PRHOST_DEFAULT='0.0.0.0'
PRPORT_DEFAULT=8585
def main():
parser = argparse.ArgumentParser(
description="BitBake PR Server. Version=%s" % VERSION,
formatter_class=argparse.RawTextHelpFormatter)
parser = optparse.OptionParser(
version="Bitbake PR Service Core version %s, %%prog version %s" % (prserv.__version__, __version__),
usage = "%prog < --start | --stop > [options]")
parser.add_argument(
"-f",
"--file",
default="prserv.sqlite3",
help="database filename (default: prserv.sqlite3)",
)
parser.add_argument(
"-l",
"--log",
default="prserv.log",
help="log filename(default: prserv.log)",
)
parser.add_argument(
"--loglevel",
default="INFO",
help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
)
parser.add_argument(
"--start",
action="store_true",
help="start daemon",
)
parser.add_argument(
"--stop",
action="store_true",
help="stop daemon",
)
parser.add_argument(
"--host",
help="ip address to bind",
default=PRHOST_DEFAULT,
)
parser.add_argument(
"--port",
type=int,
default=PRPORT_DEFAULT,
help="port number (default: 8585)",
)
parser.add_argument(
"-r",
"--read-only",
action="store_true",
help="open database in read-only mode",
)
parser.add_option("-f", "--file", help="database filename(default: prserv.sqlite3)", action="store",
dest="dbfile", type="string", default="prserv.sqlite3")
parser.add_option("-l", "--log", help="log filename(default: prserv.log)", action="store",
dest="logfile", type="string", default="prserv.log")
parser.add_option("--loglevel", help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
action = "store", type="string", dest="loglevel", default = "INFO")
parser.add_option("--start", help="start daemon",
action="store_true", dest="start")
parser.add_option("--stop", help="stop daemon",
action="store_true", dest="stop")
parser.add_option("--host", help="ip address to bind", action="store",
dest="host", type="string", default=PRHOST_DEFAULT)
parser.add_option("--port", help="port number(default: 8585)", action="store",
dest="port", type="int", default=PRPORT_DEFAULT)
args = parser.parse_args()
prserv.init_logger(os.path.abspath(args.log), args.loglevel)
options, args = parser.parse_args(sys.argv)
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
if args.start:
ret=prserv.serv.start_daemon(args.file, args.host, args.port, os.path.abspath(args.log), args.read_only)
elif args.stop:
ret=prserv.serv.stop_daemon(args.host, args.port)
if options.start:
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile))
elif options.stop:
ret=prserv.serv.stop_daemon(options.host, options.port)
else:
ret=parser.print_help()
return ret
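
Note: the hunk above migrates the PR server from optparse to argparse. The practical difference is the return value of parsing: optparse yields an `(options, args)` pair while argparse returns a single namespace, sketched here with the `--port` option from above:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--port", type=int, default=8585)
args = parser.parse_args([])   # argparse: one namespace, not (options, args)
assert args.port == 8585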

View File

@@ -7,8 +7,6 @@
import os
import sys, logging
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import unittest
@@ -31,7 +29,6 @@ tests = ["bb.tests.codeparser",
"bb.tests.runqueue",
"bb.tests.siggen",
"bb.tests.utils",
"bb.tests.compression",
"hashserv.tests",
"layerindexlib.tests.layerindexobj",
"layerindexlib.tests.restapi",

View File

@@ -8,16 +8,14 @@
import os
import sys
import warnings
warnings.simplefilter("default")
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 11 or not sys.argv[1].startswith("decafbad"):
if len(sys.argv) != 10 or not sys.argv[1].startswith("decafbad"):
print("bitbake-server is meant for internal execution by bitbake itself, please don't use it standalone.")
sys.exit(1)
@@ -29,10 +27,11 @@ logfile = sys.argv[4]
lockname = sys.argv[5]
sockname = sys.argv[6]
timeout = float(sys.argv[7])
profile = bool(int(sys.argv[8]))
xmlrpcinterface = (sys.argv[9], int(sys.argv[10]))
xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
if xmlrpcinterface[0] == "None":
xmlrpcinterface = (None, xmlrpcinterface[1])
if timeout == "None":
timeout = None
# Replace standard fds with our own
with open('/dev/null', 'r') as si:
@@ -51,5 +50,5 @@ logger = logging.getLogger("BitBake")
handler = bb.event.LogHandler()
logger.addHandler(handler)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface, profile)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface)
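
Note: the "Replace standard fds with our own" step above follows the usual daemonisation pattern. A self-contained sketch of the stdin redirection (stdout/stderr typically go to a log file in the real server):

import os, sys

with open('/dev/null', 'r') as si:
    os.dup2(si.fileno(), sys.stdin.fileno())  # detach stdin from the terminal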

View File

@@ -1,14 +1,11 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import sys
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
from bb import fetch2
import logging
@@ -19,12 +16,11 @@ import signal
import pickle
import traceback
import queue
import shlex
import subprocess
from multiprocessing import Lock
from threading import Thread
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -91,19 +87,19 @@ def worker_fire_prepickled(event):
worker_thread_exit = False
def worker_flush(worker_queue):
worker_queue_int = bytearray()
worker_queue_int = b""
global worker_pipe, worker_thread_exit
while True:
try:
worker_queue_int.extend(worker_queue.get(True, 1))
worker_queue_int = worker_queue_int + worker_queue.get(True, 1)
except queue.Empty:
pass
while (worker_queue_int or not worker_queue.empty()):
try:
(_, ready, _) = select.select([], [worker_pipe], [], 1)
if not worker_queue.empty():
worker_queue_int.extend(worker_queue.get())
worker_queue_int = worker_queue_int + worker_queue.get()
written = os.write(worker_pipe, worker_queue_int)
worker_queue_int = worker_queue_int[written:]
except (IOError, OSError) as e:
@@ -121,10 +117,11 @@ def worker_child_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
with bb.utils.lock_timeout(worker_pipe_lock):
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe_lock.acquire()
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
raise
@@ -143,29 +140,15 @@ def sigterm_handler(signum, frame):
os.killpg(0, signal.SIGTERM)
sys.exit()
def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
fn = runtask['fn']
task = runtask['task']
taskname = runtask['taskname']
taskhash = runtask['taskhash']
unihash = runtask['unihash']
appends = runtask['appends']
layername = runtask['layername']
taskdepdata = runtask['taskdepdata']
quieterrors = runtask['quieterrors']
def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, extraconfigdata, quieterrors=False, dry_run_exec=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
envbackup = {}
fakeroot = False
fakeenv = {}
umask = None
uid = os.getuid()
gid = os.getgid()
taskdep = runtask['taskdep']
taskdep = workerdata["taskdeps"][fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
umask = taskdep['umask'][taskname]
elif workerdata["umask"]:
@@ -177,25 +160,24 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
except TypeError:
pass
dry_run = cfg.dry_run or runtask['dry_run']
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
fakeroot = True
envvars = (runtask['fakerootenv'] or "").split()
for key, value in (var.split('=',1) for var in envvars):
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
fakedirs = (runtask['fakerootdirs'] or "").split()
fakedirs = (workerdata["fakerootdirs"][fn] or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
logger.debug2('Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
else:
envvars = (runtask['fakerootnoenv'] or "").split()
for key, value in (var.split('=',1) for var in envvars):
envvars = (workerdata["fakerootnoenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
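
Note: one detail worth calling out in the hunk above is that the new code splits each fakeroot environment assignment with `split('=', 1)`, so values that themselves contain `=` survive intact. The variable name below is hypothetical:

# Why maxsplit=1 matters for the fakeroot env parsing above.
var = "PSEUDO_OPTS=--with=equals"   # hypothetical assignment
key, value = var.split('=', 1)
assert (key, value) == ("PSEUDO_OPTS", "--with=equals")
# var.split('=') would produce three fields and fail to unpack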
@@ -237,21 +219,19 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, sigterm_handler)
# No stdin & stdout
# stdout is used as a status report channel and must not be used by child processes.
dumbio = os.open(os.devnull, os.O_RDWR)
os.dup2(dumbio, sys.stdin.fileno())
os.dup2(dumbio, sys.stdout.fileno())
# No stdin
newsi = os.open(os.devnull, os.O_RDWR)
os.dup2(newsi, sys.stdin.fileno())
if umask is not None:
if umask:
os.umask(umask)
try:
bb_cache = bb.cache.NoCache(databuilder)
(realfn, virtual, mc) = bb.cache.virtualfn2realfn(fn)
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
the_data.setVar('BB_CURRENTTASK', taskname.replace("do_", ""))
if cfg.limited_deps:
the_data.setVar("BB_LIMITEDDEPS", "1")
the_data.setVar("BUILDNAME", workerdata["buildname"])
@@ -265,20 +245,12 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
bb.parse.siggen.set_taskhashes(workerdata["newhashes"])
ret = 0
the_data = databuilder.parseRecipe(fn, appends, layername)
the_data = bb_cache.loadDataFull(fn, appends)
the_data.setVar('BB_TASKHASH', taskhash)
the_data.setVar('BB_UNIHASH', unihash)
bb.parse.siggen.setup_datacache_from_datastore(fn, the_data)
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
if not bb.utils.to_boolean(the_data.getVarFlag(taskname, 'network')):
if bb.utils.is_local_uid(uid):
logger.debug("Attempting to disable network for %s" % taskname)
bb.utils.disable_network(uid, gid)
else:
logger.debug("Skipping disable network for %s since %s is not a local uid." % (taskname, uid))
# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
@@ -307,20 +279,10 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
if not quieterrors:
logger.critical(traceback.format_exc())
os._exit(1)
sys.stdout.flush()
sys.stderr.flush()
try:
if dry_run:
return 0
try:
ret = bb.build.exec_task(fn, taskname, the_data, cfg.profile)
finally:
if fakeroot:
fakerootcmd = shlex.split(the_data.getVar("FAKEROOTCMD"))
subprocess.run(fakerootcmd + ['-S'], check=True, stdout=subprocess.PIPE)
return ret
return bb.build.exec_task(fn, taskname, the_data, cfg.profile)
except:
os._exit(1)
if not profiling:
@@ -352,12 +314,12 @@ class runQueueWorkerPipe():
if pipeout:
pipeout.close()
bb.utils.nonblockingfd(self.input)
self.queue = bytearray()
self.queue = b""
def read(self):
start = len(self.queue)
try:
self.queue.extend(self.input.read(102400) or b"")
self.queue = self.queue + (self.input.read(102400) or b"")
except (OSError, IOError) as e:
if e.errno != errno.EAGAIN:
raise
@@ -385,7 +347,7 @@ class BitbakeWorker(object):
def __init__(self, din):
self.input = din
bb.utils.nonblockingfd(self.input)
self.queue = bytearray()
self.queue = b""
self.cookercfg = None
self.databuilder = None
self.data = None
@@ -419,7 +381,7 @@ class BitbakeWorker(object):
if len(r) == 0:
# EOF on pipe, server must have terminated
self.sigterm_exception(signal.SIGTERM, None)
self.queue.extend(r)
self.queue = self.queue + r
except (OSError, IOError):
pass
if len(self.queue):
@@ -439,35 +401,19 @@ class BitbakeWorker(object):
while self.process_waitpid():
continue
def handle_item(self, item, func):
opening_tag = b"<" + item + b">"
if not self.queue.startswith(opening_tag):
return
tag_len = len(opening_tag)
if len(self.queue) < tag_len + 4:
# we need to receive more data
return
header = self.queue[tag_len:tag_len + 4]
payload_len = int.from_bytes(header, 'big')
# closing tag has length (tag_len + 1)
if len(self.queue) < tag_len * 2 + 1 + payload_len:
# we need to receive more data
return
index = self.queue.find(b"</" + item + b">")
if index != -1:
try:
func(self.queue[(tag_len + 4):index])
except pickle.UnpicklingError:
workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
raise
self.queue = self.queue[(index + len(b"</") + len(item) + len(b">")):]
if self.queue.startswith(b"<" + item + b">"):
index = self.queue.find(b"</" + item + b">")
while index != -1:
func(self.queue[(len(item) + 2):index])
self.queue = self.queue[(index + len(item) + 3):]
index = self.queue.find(b"</" + item + b">")
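
Note: the new `handle_item` above switches from scanning for a closing tag to a length-prefixed frame: `<item>` is followed by a 4-byte big-endian payload length, then the payload and the closing tag. A round-trip sketch of that framing (illustrative only, not BitBake's code):

def encode_frame(item: bytes, payload: bytes) -> bytes:
    # <item> + 4-byte big-endian length + payload + </item>
    return (b"<" + item + b">" + len(payload).to_bytes(4, "big")
            + payload + b"</" + item + b">")

frame = encode_frame(b"event", b"hello")
tag_len = len(b"<event>")
payload_len = int.from_bytes(frame[tag_len:tag_len + 4], "big")
assert frame[tag_len + 4:tag_len + 4 + payload_len] == b"hello"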
def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)
self.databuilder = bb.cookerdata.CookerDataBuilder(self.cookercfg, worker=True)
self.databuilder.parseBaseConfiguration(worker=True)
self.databuilder.parseBaseConfiguration()
self.data = self.databuilder.data
def handle_extraconfigdata(self, data):
@@ -482,7 +428,6 @@ class BitbakeWorker(object):
for mc in self.databuilder.mcdata:
self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.workerdata["hashservaddr"])
self.databuilder.mcdata[mc].setVar("__bbclasstype", "recipe")
def handle_newtaskhashes(self, data):
self.workerdata["newhashes"] = pickle.loads(data)
@@ -500,15 +445,11 @@ class BitbakeWorker(object):
sys.exit(0)
def handle_runtask(self, data):
runtask = pickle.loads(data)
fn = runtask['fn']
task = runtask['task']
taskname = runtask['taskname']
fn, task, taskname, taskhash, unihash, quieterrors, appends, taskdepdata, dry_run_exec = pickle.loads(data)
workerlog_write("Handling runtask %s %s %s\n" % (task, fn, taskname))
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, self.extraconfigdata, runtask)
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, self.extraconfigdata, quieterrors, dry_run_exec)
self.build_pids[pid] = task
self.build_pipes[pid] = runQueueWorkerPipe(pipein, pipeout)
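
Note: the change above replaces a fixed-position pickled tuple with a pickled dict, which is easier to extend without breaking unpacking. A minimal round trip (field values are hypothetical):

import pickle

runtask = {'fn': 'example.bb', 'task': 0, 'taskname': 'do_compile'}
data = pickle.dumps(runtask)
assert pickle.loads(data)['taskname'] == 'do_compile'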
@@ -572,11 +513,9 @@ except BaseException as e:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
finally:
worker_thread_exit = True
worker_thread.join()
workerlog_write("exiting")
if not normalexit:
sys.exit(1)
worker_thread_exit = True
worker_thread.join()
workerlog_write("exitting")
sys.exit(0)

View File

@@ -1,7 +1,5 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -18,23 +16,19 @@ import itertools
import os
import subprocess
import sys
import warnings
warnings.simplefilter("default")
version = 1.0
git_cmd = ['git', '-c', 'safe.bareRepository=all']
def main():
if sys.version_info < (3, 4, 0):
sys.exit('Python 3.4 or greater is required')
git_dir = check_output(git_cmd + ['rev-parse', '--git-dir']).rstrip()
git_dir = check_output(['git', 'rev-parse', '--git-dir']).rstrip()
shallow_file = os.path.join(git_dir, 'shallow')
if os.path.exists(shallow_file):
try:
check_output(git_cmd + ['fetch', '--unshallow'])
check_output(['git', 'fetch', '--unshallow'])
except subprocess.CalledProcessError:
try:
os.unlink(shallow_file)
@@ -43,21 +37,21 @@ def main():
raise
args = process_args()
revs = check_output(git_cmd + ['rev-list'] + args.revisions).splitlines()
revs = check_output(['git', 'rev-list'] + args.revisions).splitlines()
make_shallow(shallow_file, args.revisions, args.refs)
ref_revs = check_output(git_cmd + ['rev-list'] + args.refs).splitlines()
ref_revs = check_output(['git', 'rev-list'] + args.refs).splitlines()
remaining_history = set(revs) & set(ref_revs)
for rev in remaining_history:
if check_output(git_cmd + ['rev-parse', '{}^@'.format(rev)]):
if check_output(['git', 'rev-parse', '{}^@'.format(rev)]):
sys.exit('Error: %s was not made shallow' % rev)
filter_refs(args.refs)
if args.shrink:
shrink_repo(git_dir)
subprocess.check_call(git_cmd + ['fsck', '--unreachable'])
subprocess.check_call(['git', 'fsck', '--unreachable'])
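
Note: the pattern running through this hunk prefixes every git invocation with a shared argument list so that `-c safe.bareRepository=all` applies uniformly. Sketched below, assuming a `check_output` helper that decodes to `str` (the helper body here is an assumption, not the script's exact code):

import subprocess

git_cmd = ['git', '-c', 'safe.bareRepository=all']

def check_output(cmd, input=None):
    return subprocess.check_output(cmd, universal_newlines=True, input=input)

head = check_output(git_cmd + ['rev-parse', 'HEAD']).rstrip()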
def process_args():
@@ -74,12 +68,12 @@ def process_args():
args = parser.parse_args()
if args.refs:
args.refs = check_output(git_cmd + ['rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
args.refs = check_output(['git', 'rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
else:
args.refs = get_all_refs(lambda r, t, tt: t == 'commit' or tt == 'commit')
args.refs = list(filter(lambda r: not r.endswith('/HEAD'), args.refs))
args.revisions = check_output(git_cmd + ['rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
args.revisions = check_output(['git', 'rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
return args
@@ -97,7 +91,7 @@ def make_shallow(shallow_file, revisions, refs):
def get_all_refs(ref_filter=None):
"""Return all the existing refs in this repository, optionally filtering the refs."""
ref_output = check_output(git_cmd + ['for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
ref_output = check_output(['git', 'for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
ref_split = [tuple(iter_extend(l.rsplit('\t'), 3)) for l in ref_output.splitlines()]
if ref_filter:
ref_split = (e for e in ref_split if ref_filter(*e))
@@ -115,7 +109,7 @@ def filter_refs(refs):
all_refs = get_all_refs()
to_remove = set(all_refs) - set(refs)
if to_remove:
check_output(['xargs', '-0', '-n', '1'] + git_cmd + ['update-ref', '-d', '--no-deref'],
check_output(['xargs', '-0', '-n', '1', 'git', 'update-ref', '-d', '--no-deref'],
input=''.join(l + '\0' for l in to_remove))
@@ -128,7 +122,7 @@ def follow_history_intersections(revisions, refs):
if rev in seen:
continue
parents = check_output(git_cmd + ['rev-parse', '%s^@' % rev]).splitlines()
parents = check_output(['git', 'rev-parse', '%s^@' % rev]).splitlines()
yield rev
seen.add(rev)
@@ -136,12 +130,12 @@ def follow_history_intersections(revisions, refs):
if not parents:
continue
check_refs = check_output(git_cmd + ['merge-base', '--independent'] + sorted(refs)).splitlines()
check_refs = check_output(['git', 'merge-base', '--independent'] + sorted(refs)).splitlines()
for parent in parents:
for ref in check_refs:
print("Checking %s vs %s" % (parent, ref))
try:
merge_base = check_output(git_cmd + ['merge-base', parent, ref]).rstrip()
merge_base = check_output(['git', 'merge-base', parent, ref]).rstrip()
except subprocess.CalledProcessError:
continue
else:
@@ -161,14 +155,14 @@ def iter_except(func, exception, start=None):
def shrink_repo(git_dir):
"""Shrink the newly shallow repository, removing the unreachable objects."""
subprocess.check_call(git_cmd + ['reflog', 'expire', '--expire-unreachable=now', '--all'])
subprocess.check_call(git_cmd + ['repack', '-ad'])
subprocess.check_call(['git', 'reflog', 'expire', '--expire-unreachable=now', '--all'])
subprocess.check_call(['git', 'repack', '-ad'])
try:
os.unlink(os.path.join(git_dir, 'objects', 'info', 'alternates'))
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
subprocess.check_call(git_cmd + ['prune', '--expire', 'now'])
subprocess.check_call(['git', 'prune', '--expire', 'now'])
if __name__ == '__main__':

View File

@@ -33,7 +33,7 @@ databaseCheck()
$MANAGE migrate --noinput || retval=1
if [ $retval -eq 1 ]; then
echo "Failed migrations, halting system start" 1>&2
echo "Failed migrations, aborting system start" 1>&2
return $retval
fi
# Make sure that checksettings can pick up any value for TEMPLATECONF
@@ -41,7 +41,7 @@ databaseCheck()
$MANAGE checksettings --traceback || retval=1
if [ $retval -eq 1 ]; then
printf "\nError while checking settings; exiting\n"
printf "\nError while checking settings; aborting\n"
return $retval
fi
@@ -84,7 +84,7 @@ webserverStartAll()
echo "Starting webserver..."
$MANAGE runserver --noreload "$ADDR_PORT" \
</dev/null >>${TOASTER_LOGS_DIR}/web.log 2>&1 \
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
& echo $! >${BUILDDIR}/.toastermain.pid
sleep 1
@@ -181,14 +181,6 @@ WEBSERVER=1
export TOASTER_BUILDSERVER=1
ADDR_PORT="localhost:8000"
TOASTERDIR=`dirname $BUILDDIR`
# ${BUILDDIR}/toaster_logs/ became the default location for toaster logs
# This is needed for the django-log-viewer implementation: https://pypi.org/project/django-log-viewer/
# If the directory does not exist, create it.
TOASTER_LOGS_DIR="${BUILDDIR}/toaster_logs/"
if [ ! -d $TOASTER_LOGS_DIR ]
then
mkdir $TOASTER_LOGS_DIR
fi
unset CMD
for param in $*; do
case $param in
@@ -256,7 +248,7 @@ fi
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
export TOASTER_DIR=$TOASTERDIR
export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS TOASTER_DIR"
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then
@@ -307,7 +299,7 @@ case $CMD in
export BITBAKE_UI='toasterui'
if [ $TOASTER_BUILDSERVER -eq 1 ] ; then
$MANAGE runbuilds \
</dev/null >>${TOASTER_LOGS_DIR}/toaster_runbuilds.log 2>&1 \
</dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
& echo $! >${BUILDDIR}/.runbuilds.pid
else
echo "Toaster build server not started."

View File

@@ -19,8 +19,6 @@ import sys
import json
import pickle
import codecs
import warnings
warnings.simplefilter("default")
from collections import namedtuple
@@ -30,23 +28,79 @@ sys.path.insert(0, join(dirname(dirname(abspath(__file__))), 'lib'))
import bb.cooker
from bb.ui import toasterui
from bb.ui import eventreplay
class EventPlayer:
"""Emulate a connection to a bitbake server."""
def __init__(self, eventfile, variables):
self.eventfile = eventfile
self.variables = variables
self.eventmask = []
def waitEvent(self, _timeout):
"""Read event from the file."""
line = self.eventfile.readline().strip()
if not line:
return
try:
event_str = json.loads(line)['vars'].encode('utf-8')
event = pickle.loads(codecs.decode(event_str, 'base64'))
event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
if event_name not in self.eventmask:
return
return event
except ValueError as err:
print("Failed loading ", line)
raise err
def runCommand(self, command_line):
"""Emulate running a command on the server."""
name = command_line[0]
if name == "getVariable":
var_name = command_line[1]
variable = self.variables.get(var_name)
if variable:
return variable['v'], None
return None, "Missing variable %s" % var_name
elif name == "getAllKeysWithFlags":
dump = {}
flaglist = command_line[1]
for key, val in self.variables.items():
try:
if not key.startswith("__"):
dump[key] = {
'v': val['v'],
'history' : val['history'],
}
for flag in flaglist:
dump[key][flag] = val[flag]
except Exception as err:
print(err)
return (dump, None)
elif name == 'setEventMask':
self.eventmask = command_line[-1]
return True, None
else:
raise Exception("Command %s not implemented" % command_line[0])
def getEventHandle(self):
"""
This method is called by toasterui.
The return value is passed to self.runCommand but not used there.
"""
pass
def main(argv):
with open(argv[-1]) as eventfile:
# load variables from the first line
variables = None
while line := eventfile.readline().strip():
try:
variables = json.loads(line)['allvariables']
break
except (KeyError, json.JSONDecodeError):
continue
if not variables:
sys.exit("Cannot find allvariables entry in event log file %s" % argv[-1])
eventfile.seek(0)
variables = json.loads(eventfile.readline().strip())['allvariables']
params = namedtuple('ConfigParams', ['observe_only'])(True)
player = eventreplay.EventPlayer(eventfile, variables)
player = EventPlayer(eventfile, variables)
return toasterui.main(player, player, params)
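
Note: the `waitEvent` method above expects each log line to be JSON whose `vars` field holds a base64-encoded pickle. A hypothetical round trip for that line format (the event class is a stand-in):

import codecs, json, pickle

class DummyEvent:   # stand-in for a real bb.event class
    pass

payload = codecs.encode(pickle.dumps(DummyEvent()), 'base64').decode('utf-8')
line = json.dumps({'vars': payload})

event_str = json.loads(line)['vars'].encode('utf-8')
event = pickle.loads(codecs.decode(event_str, 'base64'))
assert isinstance(event, DummyEvent)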

View File

@@ -1,7 +1,7 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2021 Joshua Watt <JPEWhacker@gmail.com>
#
#
# Dockerfile to build a bitbake hash equivalence server container
#
# From the root of the bitbake repository, run:
@@ -15,9 +15,5 @@ RUN apk add --no-cache python3
COPY bin/bitbake-hashserv /opt/bbhashserv/bin/
COPY lib/hashserv /opt/bbhashserv/lib/hashserv/
COPY lib/bb /opt/bbhashserv/lib/bb/
COPY lib/codegen.py /opt/bbhashserv/lib/codegen.py
COPY lib/ply /opt/bbhashserv/lib/ply/
COPY lib/bs4 /opt/bbhashserv/lib/bs4/
ENTRYPOINT ["/opt/bbhashserv/bin/bitbake-hashserv"]

View File

@@ -1,62 +0,0 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2022 Daniel Gomez <daniel@qtec.com>
#
# Dockerfile to build a bitbake PR service container
#
# From the root of the bitbake repository, run:
#
# docker build -f contrib/prserv/Dockerfile . -t prserv
#
# Running examples:
#
# 1. PR Service in RW mode, port 18585:
#
# docker run --detach --tty \
# --env PORT=18585 \
# --publish 18585:18585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#
# 2. PR Service in RO mode, default port (8585) and custom LOGFILE:
#
# docker run --detach --tty \
# --env DBMODE="--read-only" \
# --env LOGFILE=/var/lib/bbprserv/prservro.log \
# --publish 8585:8585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#
FROM alpine:3.14.4
RUN apk add --no-cache python3
COPY bin/bitbake-prserv /opt/bbprserv/bin/
COPY lib/prserv /opt/bbprserv/lib/prserv/
COPY lib/bb /opt/bbprserv/lib/bb/
COPY lib/codegen.py /opt/bbprserv/lib/codegen.py
COPY lib/ply /opt/bbprserv/lib/ply/
COPY lib/bs4 /opt/bbprserv/lib/bs4/
ENV PATH=$PATH:/opt/bbprserv/bin
RUN mkdir -p /var/lib/bbprserv
ENV DBFILE=/var/lib/bbprserv/prserv.sqlite3 \
LOGFILE=/var/lib/bbprserv/prserv.log \
LOGLEVEL=debug \
HOST=0.0.0.0 \
PORT=8585 \
DBMODE=""
ENTRYPOINT [ "/bin/sh", "-c", \
"bitbake-prserv \
--file=$DBFILE \
--log=$LOGFILE \
--loglevel=$LOGLEVEL \
--start \
--host=$HOST \
--port=$PORT \
$DBMODE \
&& tail -f $LOGFILE"]

View File

@@ -40,7 +40,7 @@ set cpo&vim
let s:maxoff = 50 " maximum number of lines to look backwards for ()
function! GetBBPythonIndent(lnum)
function GetPythonIndent(lnum)
" If this line is explicitly joined: If the previous line was also joined,
" line it up with that one, otherwise add two 'shiftwidth'
@@ -257,7 +257,7 @@ let b:did_indent = 1
setlocal indentkeys+=0\"
function! BitbakeIndent(lnum)
function BitbakeIndent(lnum)
if !has('syntax_items')
return -1
endif
@@ -315,7 +315,7 @@ function! BitbakeIndent(lnum)
endif
if index(["bbPyDefRegion", "bbPyFuncRegion"], name) != -1
let ret = GetBBPythonIndent(a:lnum)
let ret = GetPythonIndent(a:lnum)
" Should normally always be indented by at least one shiftwidth; but allow
" return of -1 (defer to autoindent) or -2 (force indent to 0)
if ret == 0

View File

@@ -20,7 +20,7 @@ fun! NewBBAppendTemplate()
set nopaste
" New bbappend template
0 put ='FILESEXTRAPATHS:prepend := \"${THISDIR}/${PN}:\"'
0 put ='FILESEXTRAPATHS_prepend := \"${THISDIR}/${PN}:\"'
2
if paste == 1

View File

@@ -51,9 +51,9 @@ syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ end=+'+
syn match bbExport "^export" nextgroup=bbIdentifier skipwhite
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbIdentifier "[a-zA-Z0-9\-_\.\/\+]\+" display contained
syn match bbVarDeref "${[a-zA-Z0-9\-_:\.\/\+]\+}" contained
syn match bbVarDeref "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)" contained nextgroup=bbVarValue
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+][${}a-zA-Z0-9\-_:\.\/\+]*\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbOverrideOperator,bbVarDeref nextgroup=bbVarEq
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn region bbVarPyValue start=+${@+ skip=+\\$+ end=+}+ contained contains=@python
@@ -63,14 +63,13 @@ syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*
" Includes and requires
syn keyword bbInclude inherit include require contained
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref
syn match bbIncludeLine "^\(inherit\|include\|require\)\s\+" contains=bbInclude nextgroup=bbIncludeRest
" Add taks and similar
syn keyword bbStatement addtask deltask addhandler after before EXPORT_FUNCTIONS contained
syn match bbStatementRest /[^\\]*$/ skipwhite contained contains=bbStatement,bbVarDeref,bbVarPyValue
syn region bbStatementRestCont start=/.*\\$/ end=/^[^\\]*$/ contained contains=bbStatement,bbVarDeref,bbVarPyValue,bbContinue keepend
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest,bbStatementRestCont
syn match bbStatementRest ".*$" skipwhite contained contains=bbStatement
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
" OE Important Functions
syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_compile do_stage do_install do_package contained
@@ -78,15 +77,13 @@ syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_comp
" Generic Functions
syn match bbFunction "\h[0-9A-Za-z_\-\.]*" display contained contains=bbOEFunctions
syn keyword bbOverrideOperator append prepend remove contained
" BitBake shell metadata
syn include @shell syntax/sh.vim
if exists("b:current_syntax")
unlet b:current_syntax
endif
syn keyword bbShFakeRootFlag fakeroot contained
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_:${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@shell
" Python value inside shell functions
@@ -94,7 +91,7 @@ syn region shDeref start=+${@+ skip=+\\$+ excludenl end=+}+ contained co
" BitBake python metadata
syn keyword bbPyFlag python contained
syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_:${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@python
" BitBake 'def'd python functions
@@ -123,9 +120,7 @@ hi def link bbPyFlag Type
hi def link bbPyDef Statement
hi def link bbStatement Statement
hi def link bbStatementRest Identifier
hi def link bbStatementRestCont Identifier
hi def link bbOEFunctions Special
hi def link bbVarPyValue PreProc
hi def link bbOverrideOperator Operator
let b:current_syntax = "bb"

View File

@@ -3,7 +3,7 @@
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= -W --keep-going -j auto
SPHINXOPTS ?= -j auto
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build

View File

@@ -8,12 +8,12 @@ Manual Organization
Folders exist for individual manuals as follows:
* bitbake-user-manual --- The BitBake User Manual
* bitbake-user-manual - The BitBake User Manual
Each folder is self-contained regarding content and figures.
If you want to find HTML versions of the BitBake manuals on the web,
go to https://www.openembedded.org/wiki/Documentation.
go to http://www.openembedded.org/wiki/Documentation.
Sphinx
======
@@ -47,8 +47,8 @@ To install all required packages run:
To build the documentation locally, run:
$ cd doc
$ make html
$ cd documentation
$ make -f Makefile.sphinx html
The resulting HTML index page will be _build/html/index.html, and you
can browse your own copy of the locally generated documentation with

View File

@@ -1,9 +0,0 @@
<footer>
<hr/>
<div role="contentinfo">
<p>&copy; Copyright {{ copyright }}
<br>Last updated on {{ last_updated }} from the <a href="https://git.openembedded.org/bitbake/">bitbake</a> git repository.
</p>
</div>
</footer>

View File

@@ -16,7 +16,7 @@ data, or simply return information about the execution environment.
This chapter describes BitBake's execution process from start to finish
when you use it to create an image. The execution process is launched
using the following command form::
using the following command form: ::
$ bitbake target
@@ -32,7 +32,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
your project's ``local.conf`` configuration file.
A common method to determine this value for your build host is to run
the following::
the following: ::
$ grep processor /proc/cpuinfo
@@ -40,7 +40,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
the number of processors, which takes into account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely shows
eight processors, which is the value you would then assign to
:term:`BB_NUMBER_THREADS`.
``BB_NUMBER_THREADS``.
A possibly simpler solution is that some Linux distributions (e.g.
Debian and Ubuntu) provide the ``ncpus`` command.
@@ -65,13 +65,13 @@ data itself is of various types:
The ``layer.conf`` files are used to construct key variables such as
:term:`BBPATH` and :term:`BBFILES`.
:term:`BBPATH` is used to search for configuration and class files under the
``conf`` and ``classes`` directories, respectively. :term:`BBFILES` is used
``BBPATH`` is used to search for configuration and class files under the
``conf`` and ``classes`` directories, respectively. ``BBFILES`` is used
to locate both recipe and recipe append files (``.bb`` and
``.bbappend``). If there is no ``bblayers.conf`` file, it is assumed the
user has set the :term:`BBPATH` and :term:`BBFILES` directly in the environment.
user has set the ``BBPATH`` and ``BBFILES`` directly in the environment.
Next, the ``bitbake.conf`` file is located using the :term:`BBPATH` variable
Next, the ``bitbake.conf`` file is located using the ``BBPATH`` variable
that was just constructed. The ``bitbake.conf`` file may also include
other configuration files using the ``include`` or ``require``
directives.
@@ -79,8 +79,8 @@ directives.
Prior to parsing configuration files, BitBake looks at certain
variables, including:
- :term:`BB_ENV_PASSTHROUGH`
- :term:`BB_ENV_PASSTHROUGH_ADDITIONS`
- :term:`BB_ENV_WHITELIST`
- :term:`BB_ENV_EXTRAWHITE`
- :term:`BB_PRESERVE_ENV`
- :term:`BB_ORIGENV`
- :term:`BITBAKE_UI`
@@ -104,7 +104,7 @@ BitBake first searches the current working directory for an optional
contain a :term:`BBLAYERS` variable that is a
space-delimited list of 'layer' directories. Recall that if BitBake
cannot find a ``bblayers.conf`` file, then it is assumed the user has
set the :term:`BBPATH` and :term:`BBFILES` variables directly in the
set the ``BBPATH`` and ``BBFILES`` variables directly in the
environment.
For each directory (layer) in this list, a ``conf/layer.conf`` file is
@@ -114,7 +114,7 @@ files automatically set up :term:`BBPATH` and other
variables correctly for a given build directory.
BitBake then expects to find the ``conf/bitbake.conf`` file somewhere in
the user-specified :term:`BBPATH`. That configuration file generally has
the user-specified ``BBPATH``. That configuration file generally has
include directives to pull in any other metadata such as files specific
to the architecture, the machine, the local environment, and so forth.
@@ -135,11 +135,11 @@ The ``base.bbclass`` file is always included. Other classes that are
specified in the configuration using the
:term:`INHERIT` variable are also included. BitBake
searches for class files in a ``classes`` subdirectory under the paths
in :term:`BBPATH` in the same way as configuration files.
in ``BBPATH`` in the same way as configuration files.
A good way to get an idea of the configuration files and the class files
used in your execution environment is to run the following BitBake
command::
command: ::
$ bitbake -e > mybb.log
@@ -155,7 +155,7 @@ execution environment.
pair of curly braces in a shell function, the closing curly brace
must not be located at the start of the line without leading spaces.
Here is an example that causes BitBake to produce a parsing error::
Here is an example that causes BitBake to produce a parsing error: ::
fakeroot create_shar() {
cat << "EOF" > ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
@@ -184,13 +184,13 @@ Locating and Parsing Recipes
During the configuration phase, BitBake will have set
:term:`BBFILES`. BitBake now uses it to construct a
list of recipes to parse, along with any append files (``.bbappend``) to
apply. :term:`BBFILES` is a space-separated list of available files and
supports wildcards. An example would be::
apply. ``BBFILES`` is a space-separated list of available files and
supports wildcards. An example would be: ::
BBFILES = "/path/to/bbfiles/*.bb /path/to/appends/*.bbappend"
BitBake parses each
recipe and append file located with :term:`BBFILES` and stores the values of
recipe and append file located with ``BBFILES`` and stores the values of
various variables into the datastore.
.. note::
@@ -201,18 +201,18 @@ For each file, a fresh copy of the base configuration is made, then the
recipe is parsed line by line. Any inherit statements cause BitBake to
find and then parse class files (``.bbclass``) using
:term:`BBPATH` as the search path. Finally, BitBake
parses in order any append files found in :term:`BBFILES`.
parses in order any append files found in ``BBFILES``.
One common convention is to use the recipe filename to define pieces of
metadata. For example, in ``bitbake.conf`` the recipe name and version
are used to set the variables :term:`PN` and
:term:`PV`::
:term:`PV`: ::
PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
In this example, a recipe called "something_1.2.3.bb" would set
:term:`PN` to "something" and :term:`PV` to "1.2.3".
``PN`` to "something" and ``PV`` to "1.2.3".
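
A quick illustrative sketch (not BitBake's implementation) of that filename convention::

   import os

   def pn_pv_from_filename(path):
       stem = os.path.basename(path)
       if stem.endswith(".bb"):
           stem = stem[:-3]
       pn, _, pv = stem.partition("_")
       return pn or "defaultpkgname", pv or "1.0"

   assert pn_pv_from_filename("something_1.2.3.bb") == ("something", "1.2.3")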
By the time parsing is complete for a recipe, BitBake has a list of
tasks that the recipe defines and a set of data consisting of keys and
@@ -228,7 +228,7 @@ and then reload it.
Where possible, subsequent BitBake commands reuse this cache of recipe
information. The validity of this cache is determined by first computing
a checksum of the base configuration data (see
:term:`BB_HASHCONFIG_IGNORE_VARS`) and
:term:`BB_HASHCONFIG_WHITELIST`) and
then checking if the checksum matches. If that checksum matches what is
in the cache and the recipe and class files have not changed, BitBake is
able to use the cache. BitBake then reloads the cached information about
@@ -238,7 +238,7 @@ Recipe file collections exist to allow the user to have multiple
repositories of ``.bb`` files that contain the same exact package. For
example, one could easily use them to make one's own local copy of an
upstream repository, but with custom modifications that one does not
want upstream. Here is an example::
want upstream. Here is an example: ::
BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"
@@ -260,21 +260,21 @@ Providers
Assuming BitBake has been instructed to execute a target and that all
the recipe files have been parsed, BitBake starts to figure out how to
build the target. BitBake looks through the :term:`PROVIDES` list for each
of the recipes. A :term:`PROVIDES` list is the list of names by which the
recipe can be known. Each recipe's :term:`PROVIDES` list is created
build the target. BitBake looks through the ``PROVIDES`` list for each
of the recipes. A ``PROVIDES`` list is the list of names by which the
recipe can be known. Each recipe's ``PROVIDES`` list is created
implicitly through the recipe's :term:`PN` variable and
explicitly through the recipe's :term:`PROVIDES`
variable, which is optional.
When a recipe uses :term:`PROVIDES`, that recipe's functionality can be
found under an alternative name or names other than the implicit :term:`PN`
When a recipe uses ``PROVIDES``, that recipe's functionality can be
found under an alternative name or names other than the implicit ``PN``
name. As an example, suppose a recipe named ``keyboard_1.0.bb``
contained the following::
contained the following: ::
PROVIDES += "fullkeyboard"
The :term:`PROVIDES`
The ``PROVIDES``
list for this recipe becomes "keyboard", which is implicit, and
"fullkeyboard", which is explicit. Consequently, the functionality found
in ``keyboard_1.0.bb`` can be found under two different names.
@@ -284,14 +284,14 @@ in ``keyboard_1.0.bb`` can be found under two different names.
Preferences
===========
The :term:`PROVIDES` list is only part of the solution for figuring out a
The ``PROVIDES`` list is only part of the solution for figuring out a
target's recipes. Because targets might have multiple providers, BitBake
needs to prioritize providers by determining provider preferences.
A common example in which a target has multiple providers is
"virtual/kernel", which is on the :term:`PROVIDES` list for each kernel
"virtual/kernel", which is on the ``PROVIDES`` list for each kernel
recipe. Each machine often selects the best kernel provider by using a
line similar to the following in the machine configuration file::
line similar to the following in the machine configuration file: ::
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
@@ -309,10 +309,10 @@ specify a particular version. You can influence the order by using the
:term:`DEFAULT_PREFERENCE` variable.
By default, files have a preference of "0". Setting
:term:`DEFAULT_PREFERENCE` to "-1" makes the recipe unlikely to be used
unless it is explicitly referenced. Setting :term:`DEFAULT_PREFERENCE` to
"1" makes it likely the recipe is used. :term:`PREFERRED_VERSION` overrides
any :term:`DEFAULT_PREFERENCE` setting. :term:`DEFAULT_PREFERENCE` is often used
``DEFAULT_PREFERENCE`` to "-1" makes the recipe unlikely to be used
unless it is explicitly referenced. Setting ``DEFAULT_PREFERENCE`` to
"1" makes it likely the recipe is used. ``PREFERRED_VERSION`` overrides
any ``DEFAULT_PREFERENCE`` setting. ``DEFAULT_PREFERENCE`` is often used
to mark newer and more experimental recipe versions until they have
undergone sufficient testing to be considered stable.
@@ -331,7 +331,7 @@ If the first recipe is named ``a_1.1.bb``, then the
Thus, if a recipe named ``a_1.2.bb`` exists, BitBake will choose 1.2 by
default. However, if you define the following variable in a ``.conf``
file that BitBake parses, you can change that preference::
file that BitBake parses, you can change that preference: ::
PREFERRED_VERSION_a = "1.1"
@@ -394,7 +394,7 @@ ready to run, those tasks have all their dependencies met, and the
thread threshold has not been exceeded.
It is worth noting that you can greatly speed up the build time by
properly setting the :term:`BB_NUMBER_THREADS` variable.
properly setting the ``BB_NUMBER_THREADS`` variable.
As each task completes, a timestamp is written to the directory
specified by the :term:`STAMP` variable. On subsequent
@@ -435,7 +435,7 @@ BitBake writes a shell script to
executes the script. The generated shell script contains all the
exported variables, and the shell functions with all variables expanded.
Output from the shell script goes to the file
``${``\ :term:`T`\ ``}/log.do_taskname.pid``. Looking at the expanded shell functions in
``${T}/log.do_taskname.pid``. Looking at the expanded shell functions in
the run file and the output in the log files is a useful debugging
technique.
@@ -477,7 +477,7 @@ changes because it should not affect the output for target packages. The
simplistic approach for excluding the working directory is to set it to
some fixed value and create the checksum for the "run" script. BitBake
goes one step better and uses the
:term:`BB_BASEHASH_IGNORE_VARS` variable
:term:`BB_HASHBASE_WHITELIST` variable
to define a list of variables that should never be included when
generating the signatures.
@@ -498,7 +498,7 @@ to the task.
Like the working directory case, situations exist where dependencies
should be ignored. For these cases, you can instruct the build process
to ignore a dependency by using a line like the following::
to ignore a dependency by using a line like the following: ::
PACKAGE_ARCHS[vardepsexclude] = "MACHINE"
@@ -508,7 +508,7 @@ even if it does reference it.
Equally, there are cases where we need to add dependencies BitBake is
not able to find. You can accomplish this by using a line like the
following::
following: ::
PACKAGE_ARCHS[vardeps] = "MACHINE"
@@ -523,7 +523,7 @@ it cannot figure out dependencies.
Thus far, this section has limited discussion to the direct inputs into
a task. Information based on direct inputs is referred to as the
"basehash" in the code. However, there is still the question of a task's
indirect inputs --- the things that were already built and present in the
indirect inputs - the things that were already built and present in the
build directory. The checksum (or signature) for a particular task needs
to add the hashes of all the tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision. However, the
@@ -534,11 +534,11 @@ At the code level, there are a variety of ways both the basehash and the
dependent task hashes can be influenced. Within the BitBake
configuration file, we can give BitBake some extra information to help
it construct the basehash. The following statement effectively results
in a list of global variable dependency excludes --- variables never
in a list of global variable dependency excludes - variables never
included in any checksum. This example uses variables from OpenEmbedded
to help illustrate the concept::
to help illustrate the concept: ::
BB_BASEHASH_IGNORE_VARS ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL \
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
@@ -552,22 +552,23 @@ through dependency chains are more complex and are generally
accomplished with a Python function. The code in
``meta/lib/oe/sstatesig.py`` shows two examples of this and also
illustrates how you can insert your own policy into the system if so
desired. This file defines the basic signature generator
OpenEmbedded-Core uses: "OEBasicHash". By default, there
desired. This file defines the two basic signature generators
OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash". By default, there
is a dummy "noop" signature handler enabled in BitBake. This means that
behavior is unchanged from previous versions. ``OE-Core`` uses the
"OEBasicHash" signature handler by default through this setting in the
``bitbake.conf`` file::
``bitbake.conf`` file: ::
BB_SIGNATURE_HANDLER ?= "OEBasicHash"
The main feature of the "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is that
it adds the task hash to the stamp files. Thanks to this, any metadata
change will change the task hash, automatically causing the task to be run
again. This removes the need to bump :term:`PR` values, and changes to
metadata automatically ripple across the build.
The "OEBasicHash" ``BB_SIGNATURE_HANDLER`` is the same as the "OEBasic"
version but adds the task hash to the stamp files. This results in any
metadata change that changes the task hash, automatically causing the
task to be run again. This removes the need to bump
:term:`PR` values, and changes to metadata automatically
ripple across the build.
It is also worth noting that the end result of signature
It is also worth noting that the end result of these signature
generators is to make some dependency and hash information available to
the build. This information includes:
@@ -577,7 +578,10 @@ the build. This information includes:
- ``BB_BASEHASH_``\ *filename:taskname*: The base hashes for each
dependent task.
- :term:`BB_TASKHASH`: The hash of the currently running task.
- ``BBHASHDEPS_``\ *filename:taskname*: The task dependencies for
each task.
- ``BB_TASKHASH``: The hash of the currently running task.
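
To make the idea concrete, here is an illustrative (non-BitBake) sketch of folding a task's basehash together with the hashes of its dependencies::

   import hashlib

   def task_hash(basehash, dep_hashes):
       h = hashlib.sha256(basehash.encode())
       for dep in sorted(dep_hashes):  # deterministic ordering
           h.update(dep.encode())
       return h.hexdigest()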
It is worth noting that BitBake's "-S" option lets you debug BitBake's
processing of signatures. The options passed to -S allow different
@@ -586,11 +590,10 @@ or possibly those defined in the metadata/signature handler itself. The
simplest parameter to pass is "none", which causes a set of signature
information to be written out into ``STAMPS_DIR`` corresponding to the
targets specified. The other currently available parameter is
"printdiff", which causes BitBake to try to establish the most recent
"printdiff", which causes BitBake to try to establish the closest
signature match it can (e.g. in the sstate cache) and then
compare the matched signatures to determine the stamps and delta
where these two stamp trees diverge. This can be used to determine why
tasks need to be re-run in situations where that is not expected.
``bitbake-diffsigs`` over the matches to determine the stamps and delta
where these two stamp trees diverge.
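For example, assuming a build has already been set up, a typical debugging
session could first dump signature data and then ask for the closest match
(the target name here is hypothetical)::

    $ bitbake -S none mytarget
    $ bitbake -S printdiff mytarget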
.. note::
@@ -645,6 +648,13 @@ compiled binary. To handle this, BitBake calls the
each successful setscene task to know whether or not it needs to obtain
the dependencies of that task.
Finally, after all the setscene tasks have executed, BitBake calls the
function listed in
:term:`BB_SETSCENE_VERIFY_FUNCTION2`
with the list of tasks BitBake thinks have been "covered". The metadata
can then ensure that this list is correct and can inform BitBake that it
wants specific tasks to be run regardless of the setscene result.
You can find more information on setscene metadata in the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:task checksums and setscene`
section.
@@ -657,7 +667,7 @@ builds are when they execute, BitBake also supports user defined
configuration of the `Python
logging <https://docs.python.org/3/library/logging.html>`__ facilities
through the :term:`BB_LOGCONFIG` variable. This
variable defines a JSON or YAML `logging
variable defines a json or yaml `logging
configuration <https://docs.python.org/3/library/logging.config.html>`__
that will be intelligently merged into the default configuration. The
logging configuration is merged using the following rules:
@@ -691,9 +701,9 @@ logging configuration is merged using the following rules:
adds a filter called ``BitBake.defaultFilter``, both filters will be
applied to the logger
As a first example, you can create a ``hashequiv.json`` user logging
configuration file to log all Hash Equivalence related messages of ``VERBOSE``
or higher priority to a file called ``hashequiv.log``::
As an example, consider the following user logging configuration file
which logs all Hash Equivalence related messages of VERBOSE or higher to
a file called ``hashequiv.log`` ::
{
"version": 1,
@@ -722,40 +732,3 @@ or higher priority to a file called ``hashequiv.log``::
}
}
}
Then set the :term:`BB_LOGCONFIG` variable in ``conf/local.conf``::
BB_LOGCONFIG = "hashequiv.json"
Another example is this ``warn.json`` file to log all ``WARNING`` and
higher priority messages to a ``warn.log`` file::
{
"version": 1,
"formatters": {
"warnlogFormatter": {
"()": "bb.msg.BBLogFormatter",
"format": "%(levelname)s: %(message)s"
}
},
"handlers": {
"warnlog": {
"class": "logging.FileHandler",
"formatter": "warnlogFormatter",
"level": "WARNING",
"filename": "warn.log"
}
},
"loggers": {
"BitBake": {
"handlers": ["warnlog"]
}
},
"@disable_existing_loggers": false
}
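This file is then enabled in the same way as the previous example::

    BB_LOGCONFIG = "warn.json"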
Note that BitBake's helper classes for structured logging are implemented in
``lib/bb/msg.py``.


@@ -27,7 +27,7 @@ and unpacking the files is often optionally followed by patching.
Patching, however, is not covered by this module.
The code to execute the first part of this process, a fetch, looks
something like the following::
something like the following: ::
src_uri = (d.getVar('SRC_URI') or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
@@ -37,7 +37,7 @@ This code sets up an instance of the fetch class. The instance uses a
space-separated list of URLs from the :term:`SRC_URI`
variable and then calls the ``download`` method to download the files.
The instantiation of the fetch class is usually followed by::
The instantiation of the fetch class is usually followed by: ::
rootdir = d.getVar('WORKDIR')
fetcher.unpack(rootdir)
@@ -51,7 +51,7 @@ This code unpacks the downloaded files to the specified by ``WORKDIR``.
examine the OpenEmbedded class file ``base.bbclass``.
The :term:`SRC_URI` and ``WORKDIR`` variables are not hardcoded into the
The ``SRC_URI`` and ``WORKDIR`` variables are not hardcoded into the
fetcher, since those fetcher methods can be (and are) called with
different variable names. In OpenEmbedded for example, the shared state
(sstate) code uses the fetch module to fetch the sstate files.
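Putting the two steps together, here is a minimal sketch of a fetch and
unpack sequence with error handling, assuming it runs in a context where
the datastore ``d`` is available::

    src_uri = (d.getVar('SRC_URI') or "").split()
    try:
        fetcher = bb.fetch2.Fetch(src_uri, d)
        fetcher.download()
        # Unpack the fetched sources into the working directory
        fetcher.unpack(d.getVar('WORKDIR'))
    except bb.fetch2.BBFetchException as e:
        bb.fatal(str(e))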
@@ -64,38 +64,38 @@ URLs by looking for source files in a specific search order:
:term:`PREMIRRORS` variable.
- *Source URI:* If pre-mirrors fail, BitBake uses the original URL (e.g.
from :term:`SRC_URI`).
from ``SRC_URI``).
- *Mirror Sites:* If fetch failures occur, BitBake next uses mirror
locations as defined by the :term:`MIRRORS` variable.
For each URL passed to the fetcher, the fetcher calls the submodule that
handles that particular URL type. This behavior can be the source of
some confusion when you are providing URLs for the :term:`SRC_URI` variable.
Consider the following two URLs::
some confusion when you are providing URLs for the ``SRC_URI`` variable.
Consider the following two URLs: ::
https://git.yoctoproject.org/git/poky;protocol=git
http://git.yoctoproject.org/git/poky;protocol=git
git://git.yoctoproject.org/git/poky;protocol=http
In the former case, the URL is passed to the ``wget`` fetcher, which does not
understand "git". Therefore, the latter case is the correct form since the Git
fetcher does know how to use HTTP as a transport.
Here are some examples that show commonly used mirror definitions::
Here are some examples that show commonly used mirror definitions: ::
PREMIRRORS ?= "\
bzr://.*/.\* http://somemirror.org/sources/ \
cvs://.*/.\* http://somemirror.org/sources/ \
git://.*/.\* http://somemirror.org/sources/ \
hg://.*/.\* http://somemirror.org/sources/ \
osc://.*/.\* http://somemirror.org/sources/ \
p4://.*/.\* http://somemirror.org/sources/ \
svn://.*/.\* http://somemirror.org/sources/"
bzr://.*/.\* http://somemirror.org/sources/ \\n \
cvs://.*/.\* http://somemirror.org/sources/ \\n \
git://.*/.\* http://somemirror.org/sources/ \\n \
hg://.*/.\* http://somemirror.org/sources/ \\n \
osc://.*/.\* http://somemirror.org/sources/ \\n \
p4://.*/.\* http://somemirror.org/sources/ \\n \
svn://.*/.\* http://somemirror.org/sources/ \\n"
MIRRORS =+ "\
ftp://.*/.\* http://somemirror.org/sources/ \
http://.*/.\* http://somemirror.org/sources/ \
https://.*/.\* http://somemirror.org/sources/"
ftp://.*/.\* http://somemirror.org/sources/ \\n \
http://.*/.\* http://somemirror.org/sources/ \\n \
https://.*/.\* http://somemirror.org/sources/ \\n"
It is useful to note that BitBake
supports cross-URLs. It is possible to mirror a Git repository on an
@@ -110,26 +110,26 @@ which is specified by the :term:`DL_DIR` variable.
File integrity is of key importance for reproducing builds. For
non-local archive downloads, the fetcher code can verify SHA-256 and MD5
checksums to ensure the archives have been downloaded correctly. You can
specify these checksums by using the :term:`SRC_URI` variable with the
appropriate varflags as follows::
specify these checksums by using the ``SRC_URI`` variable with the
appropriate varflags as follows: ::
SRC_URI[md5sum] = "value"
SRC_URI[sha256sum] = "value"
You can also specify the checksums as
parameters on the :term:`SRC_URI` as shown below::
parameters on the ``SRC_URI`` as shown below: ::
SRC_URI = "http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d"
If multiple URIs exist, you can specify the checksums either directly as
in the previous example, or you can name the URLs. The following syntax
shows how you name the URIs::
shows how you name the URIs: ::
SRC_URI = "http://example.com/foobar.tar.bz2;name=foo"
SRC_URI[foo.md5sum] = 4a8e0f237e961fd7785d19d07fdb994d
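For instance, a sketch with two named URIs (the URLs are hypothetical and
the checksums are placeholders) could look like this::

    SRC_URI = "http://example.com/foobar.tar.bz2;name=foo \
               http://example.com/baz.tar.bz2;name=baz"
    SRC_URI[foo.sha256sum] = "<sha256 string>"
    SRC_URI[baz.sha256sum] = "<sha256 string>"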
After a file has been downloaded and
has had its checksum checked, a ".done" stamp is placed in :term:`DL_DIR`.
has had its checksum checked, a ".done" stamp is placed in ``DL_DIR``.
BitBake uses this stamp during subsequent builds to avoid downloading or
comparing a checksum for the file again.
@@ -144,10 +144,6 @@ download without a checksum triggers an error message. The
make any attempted network access a fatal error, which is useful for
checking that mirrors are complete as well as other things.
If :term:`BB_CHECK_SSL_CERTS` is set to ``0`` then SSL certificate checking will
be disabled. This variable defaults to ``1`` so SSL certificates are normally
checked.
.. _bb-the-unpack:
The Unpack
@@ -167,8 +163,8 @@ govern the behavior of the unpack stage:
- *dos:* Applies to ``.zip`` and ``.jar`` files and specifies whether
to use DOS line ending conversion on text files.
- *striplevel:* Strip specified number of leading components (levels)
from file names on extraction.
- *basepath:* Instructs the unpack stage to strip the specified
directories from the source path when unpacking.
- *subdir:* Unpacks the specific URL to the specified subdirectory
within the root directory.
@@ -208,7 +204,7 @@ time the ``download()`` method is called.
If you specify a directory, the entire directory is unpacked.
Here are a couple of example URLs, the first relative and the second
absolute::
absolute: ::
SRC_URI = "file://relativefile.patch"
SRC_URI = "file:///Users/ich/very_important_software"
@@ -229,12 +225,7 @@ downloaded file is useful for avoiding collisions in
:term:`DL_DIR` when dealing with multiple files that
have the same name.
If a username and password are specified in the ``SRC_URI``, a Basic
Authorization header will be added to each request, including across redirects.
To instead limit the Authorization header to the first request, add
"redirectauth=0" to the list of parameters.
Some example URLs are as follows::
Some example URLs are as follows: ::
SRC_URI = "http://oe.handhelds.org/not_there.aac"
SRC_URI = "ftp://oe.handhelds.org/not_there_as_well.aac"
@@ -244,13 +235,15 @@ Some example URLs are as follows::
Because URL parameters are delimited by semi-colons, this can
introduce ambiguity when parsing URLs that also contain semi-colons,
for example::
for example:
::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
Such URLs should be modified by replacing semi-colons with '&'
characters::
characters:
::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47"
@@ -258,7 +251,8 @@ Some example URLs are as follows::
In most cases this should work. Treating semi-colons and '&' in
queries identically is recommended by the World Wide Web Consortium
(W3C). Note that due to the nature of the URL, you may have to
specify the name of the downloaded file as well::
specify the name of the downloaded file as well:
::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47;downloadfilename=myfile.bz2"
@@ -327,7 +321,7 @@ The supported parameters are as follows:
- *"port":* The port to which the CVS server connects.
Some example URLs are as follows::
Some example URLs are as follows: ::
SRC_URI = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
SRC_URI = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
@@ -369,7 +363,7 @@ The supported parameters are as follows:
username is different than the username used in the main URL, which
is passed to the subversion command.
Following are three examples using svn::
Following are three examples using svn: ::
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
@@ -396,19 +390,6 @@ This fetcher supports the following parameters:
protocol is "file". You can also use "http", "https", "ssh" and
"rsync".
.. note::
When ``protocol`` is "ssh", the URL expected in :term:`SRC_URI` differs
from the one that is typically passed to ``git clone`` command and provided
by the Git server to fetch from. For example, the URL returned by GitLab
server for ``mesa`` when cloning over SSH is
``git@gitlab.freedesktop.org:mesa/mesa.git``, however the expected URL in
:term:`SRC_URI` is the following::
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
Note that the ``:`` character changed to a ``/`` before the path to the project.
- *"nocheckout":* Tells the fetcher to not checkout source code when
unpacking when set to "1". Set this option for the URL where there is
a custom routine to checkout code. The default is "0".
@@ -424,17 +405,17 @@ This fetcher supports the following parameters:
- *"nobranch":* Tells the fetcher to not check the SHA validation for
the branch when set to "1". The default is "0". Set this option for
the recipe that refers to the commit that is valid for any namespace
(branch, tag, ...) instead of the branch.
the recipe that refers to the commit that is valid for a tag instead
of the branch.
- *"bareclone":* Tells the fetcher to clone a bare clone into the
destination directory without checking out a working tree. Only the
raw Git metadata is provided. This parameter implies the "nocheckout"
parameter as well.
- *"branch":* The branch(es) of the Git tree to clone. Unless
"nobranch" is set to "1", this is a mandatory parameter. The number of
branch parameters must match the number of name parameters.
- *"branch":* The branch(es) of the Git tree to clone. If unset, this
is assumed to be "master". The number of branch parameters must match
the number of name parameters.
- *"rev":* The revision to use for the checkout. The default is
"master".
@@ -455,35 +436,19 @@ This fetcher supports the following parameters:
parameter implies no branch and only works when the transfer protocol
is ``file://``.
Here are some example URLs::
Here are some example URLs: ::
SRC_URI = "git://github.com/fronteed/icheck.git;protocol=https;branch=${PV};tag=${PV}"
SRC_URI = "git://github.com/asciidoc/asciidoc-py;protocol=https;branch=main"
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
.. note::
When using ``git`` as the fetcher of the main source code of your software,
``S`` should be set accordingly::
S = "${WORKDIR}/git"
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
.. note::
Specifying passwords directly in ``git://`` urls is not supported.
There are several reasons: :term:`SRC_URI` is often written out to logs and
There are several reasons: ``SRC_URI`` is often written out to logs and
other places, and that could easily leak passwords; it is also all too
easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
and ``~/.ssh/config`` files can be used as alternatives.
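For instance, a hypothetical ``~/.netrc`` entry that supplies credentials
for a Git server could look like this::

    machine git.example.com
    login myuser
    password mytoken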
Using tags with the git fetcher may cause surprising behaviour. BitBake needs to
resolve the tag to a specific revision and to do that, it has to connect to and use
the upstream repository. This is because the revision the tags point at can change and
we've seen cases of this happening in well known public repositories. This can mean
many more network connections than expected and recipes may be reparsed at every build.
Source mirrors will also be bypassed as the upstream repository is the only source
of truth to resolve the revision accurately. For these reasons, whilst the fetcher
can support tags, we recommend being specific about revisions in recipes.
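For instance, instead of a tag, a recipe can pin an exact revision (the
repository is hypothetical and the commit id is a placeholder)::

    SRC_URI = "git://git.example.com/project.git;protocol=https;branch=main"
    SRCREV = "<commit-id>"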
.. _gitsm-fetcher:
@@ -519,7 +484,7 @@ repository.
To use this fetcher, make sure your recipe has proper
:term:`SRC_URI`, :term:`SRCREV`, and
:term:`PV` settings. Here is an example::
:term:`PV` settings. Here is an example: ::
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
@@ -528,7 +493,7 @@ To use this fetcher, make sure your recipe has proper
The fetcher uses the ``rcleartool`` or
``cleartool`` remote client, depending on which one is available.
Following are options for the :term:`SRC_URI` statement:
Following are options for the ``SRC_URI`` statement:
- *vob*: The name, which must include the prepending "/" character,
of the ClearCase VOB. This option is required.
@@ -541,7 +506,7 @@ Following are options for the :term:`SRC_URI` statement:
The module and vob options are combined to create the load rule in the
view config spec. As an example, consider the vob and module values from
the SRC_URI statement at the start of this section. Combining those values
results in the following::
results in the following: ::
load /example_vob/example_module
@@ -590,10 +555,10 @@ password if you do not wish to keep those values in a recipe itself. If
you choose not to use ``P4CONFIG``, or to explicitly set variables that
``P4CONFIG`` can contain, you can specify the ``P4PORT`` value, which is
the server's URL and port number, and you can specify a username and
password directly in your recipe within :term:`SRC_URI`.
password directly in your recipe within ``SRC_URI``.
Here is an example that relies on ``P4CONFIG`` to specify the server URL
and port, username, and password, and fetches the Head Revision::
and port, username, and password, and fetches the Head Revision: ::
SRC_URI = "p4://example-depot/main/source/..."
SRCREV = "${AUTOREV}"
@@ -601,7 +566,7 @@ and port, username, and password, and fetches the Head Revision::
S = "${WORKDIR}/p4"
Here is an example that specifies the server URL and port, username, and
password, and fetches a Revision based on a Label::
password, and fetches a Revision based on a Label: ::
P4PORT = "tcp:p4server.example.net:1666"
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
@@ -627,7 +592,7 @@ paths locally is desirable, the fetcher supports two parameters:
paths locally for the specified location, even in combination with the
``module`` parameter.
Here is an example use of the ``module`` parameter::
Here is an example use of the ``module`` parameter: ::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/..."
@@ -635,7 +600,7 @@ In this case, the content of the top-level directory ``source/`` will be fetched
to ``${P4DIR}``, including the directory itself. The top-level directory will
be accessible at ``${P4DIR}/source/``.
Here is an example use of the ``remotepath`` parameter::
Here is an example use of the ``remotepath`` parameter: ::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/...;remotepath=keep"
@@ -663,7 +628,7 @@ This fetcher supports the following parameters:
- *"manifest":* Name of the manifest file (default: ``default.xml``).
Here are some example URLs::
Here are some example URLs: ::
SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
@@ -686,143 +651,16 @@ Such functionality is set by the variable:
delegate access to resources, if this variable is set, the Az Fetcher will
use it when fetching artifacts from the cloud.
You can specify the AZ_SAS variable as shown below::
You can specify the AZ_SAS variable as shown below: ::
AZ_SAS = "se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>"
Here is an example URL::
Here is an example URL: ::
SRC_URI = "az://<azure-storage-account>.blob.core.windows.net/<foo_container>/<bar_file>"
It can also be used when setting mirrors definitions using the :term:`PREMIRRORS` variable.
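For instance, a sketch of a pre-mirror entry pointing at the hypothetical
container above::

    PREMIRRORS ?= "http://.*/.\* az://<azure-storage-account>.blob.core.windows.net/<foo_container>/"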
.. _gcp-fetcher:
GCP Fetcher (``gs://``)
--------------------------
This submodule fetches data from a
`Google Cloud Storage Bucket <https://cloud.google.com/storage/docs/buckets>`__.
It uses the `Google Cloud Storage Python Client <https://cloud.google.com/python/docs/reference/storage/latest>`__
to check the status of objects in the bucket and download them.
The use of the Python client makes it substantially faster than using command
line tools such as gsutil.
The fetcher requires the Google Cloud Storage Python Client to be installed, along
with the gsutil tool.
The fetcher requires that the machine has valid credentials for accessing the
chosen bucket. Instructions for authentication can be found in the
`Google Cloud documentation <https://cloud.google.com/docs/authentication/provide-credentials-adc#local-dev>`__.
If it is used from the OpenEmbedded build system, the fetcher can be used for
fetching sstate artifacts from a GCS bucket by specifying the
``SSTATE_MIRRORS`` variable as shown below::
SSTATE_MIRRORS ?= "\
file://.* gs://<bucket name>/PATH \
"
The fetcher can also be used in recipes::
SRC_URI = "gs://<bucket name>/<foo_container>/<bar_file>"
However, the checksum of the file should also be provided::
SRC_URI[sha256sum] = "<sha256 string>"
.. _crate-fetcher:
Crate Fetcher (``crate://``)
----------------------------
This submodule fetches code for
`Rust language "crates" <https://doc.rust-lang.org/reference/glossary.html?highlight=crate#crate>`__
corresponding to Rust libraries and programs to compile. Such crates are typically shared
on https://crates.io/ but this fetcher supports other crate registries too.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "crate://REGISTRY/NAME/VERSION"
Here is an example URL::
SRC_URI = "crate://crates.io/glob/0.2.11"
.. _npm-fetcher:
NPM Fetcher (``npm://``)
------------------------
This submodule fetches source code from an
`NPM <https://en.wikipedia.org/wiki/Npm_(software)>`__
JavaScript package registry.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "npm://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
This fetcher supports the following parameters:
- *"package":* The NPM package name. This is a mandatory parameter.
- *"version":* The NPM package version. This is a mandatory parameter.
- *"downloadfilename":* Specifies the filename used when storing the downloaded file.
- *"destsuffix":* Specifies the directory to use to unpack the package (default: ``npm``).
Note that the NPM fetcher only fetches the package source itself. The dependencies
can be fetched through the `npmsw-fetcher`_.
Here is an example URL with both fetchers::
SRC_URI = " \
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
"
See :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/packages.html#creating-node-package-manager-npm-packages>`
in the Yocto Project manual for details about using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
to automatically create a recipe from an NPM URL.
.. _npmsw-fetcher:
NPM shrinkwrap Fetcher (``npmsw://``)
-------------------------------------
This submodule fetches source code from an
`NPM shrinkwrap <https://docs.npmjs.com/cli/v8/commands/npm-shrinkwrap>`__
description file, which lists the dependencies
of an NPM package while locking their versions.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "npmsw://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
This fetcher supports the following parameters:
- *"dev":* Set this parameter to ``1`` to install "devDependencies".
- *"destsuffix":* Specifies the directory to use to unpack the dependencies
(``${S}`` by default).
Note that the shrinkwrap file can also be provided by the recipe for
the package which has such dependencies, for example::
SRC_URI = " \
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
"
Such a file can automatically be generated using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
as described in the :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/packages.html#creating-node-package-manager-npm-packages>`
section of the Yocto Project.
Other Fetchers
--------------
@@ -832,9 +670,9 @@ Fetch submodules also exist for the following:
- Mercurial (``hg://``)
- OSC (``osc://``)
- npm (``npm://``)
- S3 (``s3://``)
- Secure FTP (``sftp://``)
@@ -848,4 +686,4 @@ submodules. However, you might find the code helpful and readable.
Auto Revisions
==============
We need to document ``AUTOREV`` and :term:`SRCREV_FORMAT` here.
We need to document ``AUTOREV`` and ``SRCREV_FORMAT`` here.


@@ -18,32 +18,28 @@ it.
Obtaining BitBake
=================
See the :ref:`bitbake-user-manual/bitbake-user-manual-intro:obtaining bitbake` section for
See the :ref:`bitbake-user-manual/bitbake-user-manual-hello:obtaining bitbake` section for
information on how to obtain BitBake. Once you have the source code on
your machine, the BitBake directory appears as follows::
your machine, the BitBake directory appears as follows: ::
$ ls -al
total 108
drwxr-xr-x 9 fawkh 10000 4096 feb 24 12:10 .
drwx------ 36 fawkh 10000 4096 mar 2 17:00 ..
-rw-r--r-- 1 fawkh 10000 365 feb 24 12:10 AUTHORS
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 bin
-rw-r--r-- 1 fawkh 10000 16501 feb 24 12:10 ChangeLog
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 classes
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 conf
drwxr-xr-x 5 fawkh 10000 4096 feb 24 12:10 contrib
drwxr-xr-x 6 fawkh 10000 4096 feb 24 12:10 doc
drwxr-xr-x 8 fawkh 10000 4096 mar 2 16:26 .git
-rw-r--r-- 1 fawkh 10000 31 feb 24 12:10 .gitattributes
-rw-r--r-- 1 fawkh 10000 392 feb 24 12:10 .gitignore
drwxr-xr-x 13 fawkh 10000 4096 feb 24 12:11 lib
-rw-r--r-- 1 fawkh 10000 1224 feb 24 12:10 LICENSE
-rw-r--r-- 1 fawkh 10000 15394 feb 24 12:10 LICENSE.GPL-2.0-only
-rw-r--r-- 1 fawkh 10000 1286 feb 24 12:10 LICENSE.MIT
-rw-r--r-- 1 fawkh 10000 229 feb 24 12:10 MANIFEST.in
-rw-r--r-- 1 fawkh 10000 2413 feb 24 12:10 README
-rw-r--r-- 1 fawkh 10000 43 feb 24 12:10 toaster-requirements.txt
-rw-r--r-- 1 fawkh 10000 2887 feb 24 12:10 TODO
total 100
drwxrwxr-x. 9 wmat wmat 4096 Jan 31 13:44 .
drwxrwxr-x. 3 wmat wmat 4096 Feb 4 10:45 ..
-rw-rw-r--. 1 wmat wmat 365 Nov 26 04:55 AUTHORS
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 bin
drwxrwxr-x. 4 wmat wmat 4096 Jan 31 13:44 build
-rw-rw-r--. 1 wmat wmat 16501 Nov 26 04:55 ChangeLog
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 classes
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 conf
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 contrib
-rw-rw-r--. 1 wmat wmat 17987 Nov 26 04:55 COPYING
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 doc
-rw-rw-r--. 1 wmat wmat 69 Nov 26 04:55 .gitignore
-rw-rw-r--. 1 wmat wmat 849 Nov 26 04:55 HEADER
drwxrwxr-x. 5 wmat wmat 4096 Jan 31 13:44 lib
-rw-rw-r--. 1 wmat wmat 195 Nov 26 04:55 MANIFEST.in
-rw-rw-r--. 1 wmat wmat 2887 Nov 26 04:55 TODO
At this point, you should have BitBake cloned to a directory that
matches the previous listing except for dates and user names.
@@ -53,10 +49,10 @@ Setting Up the BitBake Environment
First, you need to be sure that you can run BitBake. Set your working
directory to where your local BitBake files are and run the following
command::
command: ::
$ ./bin/bitbake --version
BitBake Build Tool Core version 2.3.1
BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0
The console output tells you what version
you are running.
@@ -65,14 +61,14 @@ The recommended method to run BitBake is from a directory of your
choice. To be able to run BitBake from any directory, you need to add
the executable binary to your shell's environment
``PATH`` variable. First, look at your current ``PATH`` variable by
entering the following::
entering the following: ::
$ echo $PATH
Next, add the directory location
for the BitBake binary to the ``PATH``. Here is an example that adds the
``/home/scott-lenovo/bitbake/bin`` directory to the front of the
``PATH`` variable::
``PATH`` variable: ::
$ export PATH=/home/scott-lenovo/bitbake/bin:$PATH
@@ -103,7 +99,7 @@ discussion mailing list about the BitBake build tool.
This example was inspired by and drew heavily from
`Mailing List post - The BitBake equivalent of "Hello, World!"
<https://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
<http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
As stated earlier, the goal of this example is to eventually compile
"Hello World". However, it is unknown what BitBake needs and what you
@@ -120,7 +116,7 @@ Following is the complete "Hello World" example.
#. **Create a Project Directory:** First, set up a directory for the
"Hello World" project. Here is how you can do so in your home
directory::
directory: ::
$ mkdir ~/hello
$ cd ~/hello
@@ -131,26 +127,41 @@ Following is the complete "Hello World" example.
directory is a good way to isolate your project.
#. **Run BitBake:** At this point, you have nothing but a project
directory. Run the ``bitbake`` command and see what it does::
directory. Run the ``bitbake`` command and see what it does: ::
$ bitbake
ERROR: The BBPATH variable is not set and bitbake did not find a conf/bblayers.conf file in the expected location.
The BBPATH variable is not set and bitbake did not
find a conf/bblayers.conf file in the expected location.
Maybe you accidentally invoked bitbake from the wrong directory?
DEBUG: Removed the following variables from the environment:
GNOME_DESKTOP_SESSION_ID, XDG_CURRENT_DESKTOP,
GNOME_KEYRING_CONTROL, DISPLAY, SSH_AGENT_PID, LANG, no_proxy,
XDG_SESSION_PATH, XAUTHORITY, SESSION_MANAGER, SHLVL,
MANDATORY_PATH, COMPIZ_CONFIG_PROFILE, WINDOWID, EDITOR,
GPG_AGENT_INFO, SSH_AUTH_SOCK, GDMSESSION, GNOME_KEYRING_PID,
XDG_SEAT_PATH, XDG_CONFIG_DIRS, LESSOPEN, DBUS_SESSION_BUS_ADDRESS,
_, XDG_SESSION_COOKIE, DESKTOP_SESSION, LESSCLOSE, DEFAULTS_PATH,
UBUNTU_MENUPROXY, OLDPWD, XDG_DATA_DIRS, COLORTERM, LS_COLORS
The majority of this output is specific to environment variables that
are not directly relevant to BitBake. However, the very first
message regarding the ``BBPATH`` variable and the
``conf/bblayers.conf`` file is relevant.
When you run BitBake, it begins looking for metadata files. The
:term:`BBPATH` variable is what tells BitBake where
to look for those files. :term:`BBPATH` is not set and you need to set
it. Without :term:`BBPATH`, BitBake cannot find any configuration files
to look for those files. ``BBPATH`` is not set and you need to set
it. Without ``BBPATH``, BitBake cannot find any configuration files
(``.conf``) or recipe files (``.bb``) at all. BitBake also cannot
find the ``bitbake.conf`` file.
#. **Setting BBPATH:** For this example, you can set :term:`BBPATH` in
#. **Setting BBPATH:** For this example, you can set ``BBPATH`` in
the same manner that you set ``PATH`` earlier in the appendix. You
should realize, though, that it is much more flexible to set the
:term:`BBPATH` variable up in a configuration file for each project.
``BBPATH`` variable up in a configuration file for each project.
From your shell, enter the following commands to set and export the
:term:`BBPATH` variable::
``BBPATH`` variable: ::
$ BBPATH="projectdirectory"
$ export BBPATH
@@ -164,18 +175,24 @@ Following is the complete "Hello World" example.
("~") character as BitBake does not expand that character as the
shell would.
#. **Run BitBake:** Now that you have :term:`BBPATH` defined, run the
``bitbake`` command again::
#. **Run BitBake:** Now that you have ``BBPATH`` defined, run the
``bitbake`` command again: ::
$ bitbake
ERROR: Unable to parse /home/scott-lenovo/bitbake/lib/bb/parse/__init__.py
Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 127, in resolve_file(fn='conf/bitbake.conf', d=<bb.data_smart.DataSmart object at 0x7f22919a3df0>):
if not newfn:
> raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath))
fn = newfn
FileNotFoundError: [Errno 2] file conf/bitbake.conf not found in <projectdirectory>
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 173, in parse_config_file
return bb.parse.handle(fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 99, in handle
return h['handle'](fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 120, in handle
abs_fn = resolve_file(fn, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 117, in resolve_file
raise IOError("file %s not found in %s" % (fn, bbpath))
IOError: file conf/bitbake.conf not found in /home/scott-lenovo/hello
ERROR: Unable to parse conf/bitbake.conf: file conf/bitbake.conf not found in /home/scott-lenovo/hello
This sample output shows that BitBake could not find the
``conf/bitbake.conf`` file in the project directory. This file is
@@ -188,18 +205,18 @@ Following is the complete "Hello World" example.
recipe files. For this example, you need to create the file in your
project directory and define some key BitBake variables. For more
information on the ``bitbake.conf`` file, see
https://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
http://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
Use the following commands to create the ``conf`` directory in the
project directory::
project directory: ::
$ mkdir conf
From within the ``conf`` directory,
use some editor to create the ``bitbake.conf`` so that it contains
the following::
the following: ::
PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
TMPDIR = "${TOPDIR}/tmp"
CACHE = "${TMPDIR}/cache"
@@ -209,12 +226,12 @@ Following is the complete "Hello World" example.
.. note::
Without a value for :term:`PN`, the variables :term:`STAMP`, :term:`T`, and :term:`B` prevent more
than one recipe from working. You can fix this by either setting :term:`PN` to
Without a value for PN, the variables STAMP, T, and B prevent more
than one recipe from working. You can fix this by either setting PN to
have a value similar to what OpenEmbedded and BitBake use in the default
``bitbake.conf`` file (see previous example). Or, by manually updating each
recipe to set :term:`PN`. You will also need to include :term:`PN` as part of the :term:`STAMP`,
:term:`T`, and :term:`B` variable definitions in the ``local.conf`` file.
bitbake.conf file (see previous example). Or, by manually updating each
recipe to set PN. You will also need to include PN as part of the STAMP,
T, and B variable definitions in the local.conf file.
The ``TMPDIR`` variable establishes a directory that BitBake uses
for build output and intermediate files other than the cached
@@ -234,17 +251,21 @@ Following is the complete "Hello World" example.
glossary.
#. **Run BitBake:** After making sure that the ``conf/bitbake.conf`` file
exists, you can run the ``bitbake`` command again::
exists, you can run the ``bitbake`` command again: ::
$ bitbake
ERROR: Unable to parse /home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py
Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 67, in inherit(files=['base'], fn='configuration INHERITs', lineno=0, d=<bb.data_smart.DataSmart object at 0x7fab6815edf0>):
if not os.path.exists(file):
> raise ParseError("Could not inherit file %s" % (file), fn, lineno)
bb.parse.ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 177, in _inherit
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 92, in inherit
include(fn, file, lineno, d, "inherit")
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 100, in include
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
In the sample output,
BitBake could not find the ``classes/base.bbclass`` file. You need
@@ -257,23 +278,20 @@ Following is the complete "Hello World" example.
in the ``classes`` directory of the project (i.e. ``hello/classes``
in this example).
Create the ``classes`` directory as follows::
Create the ``classes`` directory as follows: ::
$ cd $HOME/hello
$ mkdir classes
Move to the ``classes`` directory and then create the
``base.bbclass`` file by inserting this single line::
addtask build
``base.bbclass`` file by inserting this single line: addtask build
The minimal task that BitBake runs is the ``do_build`` task. This is
all the example needs in order to build the project. Of course, the
``base.bbclass`` can have much more depending on which build
environments BitBake is supporting.
#. **Run BitBake:** After making sure that the ``classes/base.bbclass``
file exists, you can run the ``bitbake`` command again::
file exists, you can run the ``bitbake`` command again: ::
$ bitbake
Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.
@@ -296,7 +314,7 @@ Following is the complete "Hello World" example.
Minimally, you need a recipe file and a layer configuration file in
your layer. The configuration file needs to be in the ``conf``
directory inside the layer. Use these commands to set up the layer
and the ``conf`` directory::
and the ``conf`` directory: ::
$ cd $HOME
$ mkdir mylayer
@@ -304,29 +322,20 @@ Following is the complete "Hello World" example.
$ mkdir conf
Move to the ``conf`` directory and create a ``layer.conf`` file that has the
following::
following: ::
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/*.bb"
BBFILES += "${LAYERDIR}/\*.bb"
BBFILE_COLLECTIONS += "mylayer"
BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
LAYERSERIES_CORENAMES = "hello_world_example"
LAYERSERIES_COMPAT_mylayer = "hello_world_example"
BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
For information on these variables, click on :term:`BBFILES`,
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS`, :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
or :term:`LAYERSERIES_COMPAT` to go to the definitions in the glossary.
.. note::
We are setting both ``LAYERSERIES_CORENAMES`` and :term:`LAYERSERIES_COMPAT` in this particular case, because we
are using bitbake without OpenEmbedded.
You should usually just use :term:`LAYERSERIES_COMPAT` to specify the OE-Core versions for which your layer
is compatible, and add the meta-openembedded layer to your project.
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS` or :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
to go to the definitions in the glossary.
You need to create the recipe file next. Inside your layer at the
top-level, use an editor and create a recipe file named
``printhello.bb`` that has the following::
``printhello.bb`` that has the following: ::
DESCRIPTION = "Prints Hello World"
PN = 'printhello'
@@ -347,7 +356,7 @@ Following is the complete "Hello World" example.
follow the links to the glossary.
#. **Run BitBake With a Target:** Now that a BitBake target exists, run
the command and provide that target::
the command and provide that target: ::
$ cd $HOME/hello
$ bitbake printhello
@@ -367,7 +376,7 @@ Following is the complete "Hello World" example.
``hello/conf`` for this example).
Set your working directory to the ``hello/conf`` directory and then
create the ``bblayers.conf`` file so that it contains the following::
create the ``bblayers.conf`` file so that it contains the following: ::
BBLAYERS ?= " \
/home/<you>/mylayer \
@@ -377,17 +386,15 @@ Following is the complete "Hello World" example.
#. **Run BitBake With a Target:** Now that you have supplied the
``bblayers.conf`` file, run the ``bitbake`` command and provide the
target::
target: ::
$ bitbake printhello
Loading cache: 100% |
Loaded 0 entries from dependency cache.
Parsing recipes: 100% |##################################################################################|
Time: 00:00:00
Parsing of 1 .bb files complete (0 cached, 1 parsed). 1 targets, 0 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Initialising tasks: 100% |###############################################################################|
NOTE: No setscene tasks
NOTE: Executing Tasks
NOTE: Preparing RunQueue
NOTE: Executing RunQueue Tasks
********************
* *
* Hello, World! *


@@ -27,7 +27,7 @@ Linux software stacks using a task-oriented approach.
Conceptually, BitBake is similar to GNU Make in some regards but has
significant differences:
- BitBake executes tasks according to the provided metadata that builds up
- BitBake executes tasks according to provided metadata that builds up
the tasks. Metadata is stored in recipe (``.bb``) and related recipe
"append" (``.bbappend``) files, configuration (``.conf``) and
underlying include (``.inc``) files, and in class (``.bbclass``)
@@ -60,10 +60,11 @@ member Chris Larson split the project into two distinct pieces:
- OpenEmbedded, a metadata set utilized by BitBake
Today, BitBake is the primary basis of the
`OpenEmbedded <https://www.openembedded.org/>`__ project, which is being
used to build and maintain Linux distributions such as the `Poky
Reference Distribution <https://www.yoctoproject.org/software-item/poky/>`__,
developed under the umbrella of the `Yocto Project <https://www.yoctoproject.org>`__.
`OpenEmbedded <http://www.openembedded.org/>`__ project, which is being
used to build and maintain Linux distributions such as the `Angstrom
Distribution <http://www.angstrom-distribution.org/>`__, and which is
also being used as the build tool for Linux projects such as the `Yocto
Project <http://www.yoctoproject.org>`__.
Prior to BitBake, no other build tool adequately met the needs of an
aspiring embedded Linux distribution. All of the build systems used by
@@ -247,13 +248,13 @@ underlying, similarly-named recipe files.
When you name an append file, you can use the "``%``" wildcard character
to allow for matching recipe names. For example, suppose you have an
append file named as follows::
append file named as follows: ::
busybox_1.21.%.bbappend
That append file
would match any ``busybox_1.21.``\ x\ ``.bb`` version of the recipe. So,
the append file would match the following recipe names::
the append file would match the following recipe names: ::
busybox_1.21.1.bb
busybox_1.21.2.bb
@@ -289,7 +290,7 @@ You can obtain BitBake several different ways:
are using. The metadata is generally backwards compatible but not
forward compatible.
Here is an example that clones the BitBake repository::
Here is an example that clones the BitBake repository: ::
$ git clone git://git.openembedded.org/bitbake
@@ -297,7 +298,7 @@ You can obtain BitBake several different ways:
Git repository into a directory called ``bitbake``. Alternatively,
you can designate a directory after the ``git clone`` command if you
want to call the new directory something other than ``bitbake``. Here
is an example that names the directory ``bbdev``::
is an example that names the directory ``bbdev``: ::
$ git clone git://git.openembedded.org/bitbake bbdev
@@ -316,9 +317,9 @@ You can obtain BitBake several different ways:
method for getting BitBake. Cloning the repository makes it easier
to update as patches are added to the stable branches.
The following example downloads a snapshot of BitBake version 1.17.0::
The following example downloads a snapshot of BitBake version 1.17.0: ::
$ wget https://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
$ wget http://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
$ tar zxpvf bitbake-1.17.0.tar.gz
After extraction of the tarball using
@@ -346,7 +347,7 @@ execution examples.
Usage and syntax
----------------
Following is the usage and syntax for BitBake::
Following is the usage and syntax for BitBake: ::
$ bitbake -h
Usage: bitbake [options] [recipename/target recipe:do_task ...]
@@ -416,8 +417,8 @@ Following is the usage and syntax for BitBake::
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
-u UI, --ui=UI The user interface to use (knotty, ncurses, taskexp or
teamcity - default knotty).
-u UI, --ui=UI The user interface to use (knotty, ncurses or taskexp
- default knotty).
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
@@ -432,9 +433,6 @@ Following is the usage and syntax for BitBake::
Environment variable BB_SERVER_TIMEOUT.
--no-setscene Do not run any setscene tasks. sstate will be ignored
and everything needed, built.
--skip-setscene Skip setscene tasks if they would be executed. Tasks
previously restored from sstate will be kept, unlike
--no-setscene
--setscene-only Only run setscene tasks, don't run any real tasks.
--remote-server=REMOTE_SERVER
Connect to the specified server.
@@ -471,11 +469,11 @@ default task, which is "build". BitBake obeys inter-task dependencies
when doing so.
The following command runs the build task, which is the default task, on
the ``foo_1.0.bb`` recipe file::
the ``foo_1.0.bb`` recipe file: ::
$ bitbake -b foo_1.0.bb
The following command runs the clean task on the ``foo.bb`` recipe file::
The following command runs the clean task on the ``foo.bb`` recipe file: ::
$ bitbake -b foo.bb -c clean
@@ -499,13 +497,13 @@ functionality, or when there are multiple versions of a recipe.
The ``bitbake`` command, when not using "--buildfile" or "-b" only
accepts a "PROVIDES". You cannot provide anything else. By default, a
recipe file generally "PROVIDES" its "packagename" as shown in the
following example::
following example: ::
$ bitbake foo
This next example "PROVIDES" the
package name and also uses the "-c" option to tell BitBake to just
execute the ``do_clean`` task::
execute the ``do_clean`` task: ::
$ bitbake -c clean foo
@@ -516,7 +514,7 @@ The BitBake command line supports specifying different tasks for
individual targets when you specify multiple targets. For example,
suppose you had two targets (or recipes) ``myfirstrecipe`` and
``mysecondrecipe`` and you needed BitBake to run ``taskA`` for the first
recipe and ``taskB`` for the second recipe::
recipe and ``taskB`` for the second recipe: ::
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
@@ -536,13 +534,13 @@ current working directory:
- ``pn-buildlist``: Shows a simple list of targets that are to be
built.
To stop depending on common depends, use the ``-I`` depend option and
To stop depending on common depends, use the "-I" depend option and
BitBake omits them from the graph. Leaving this information out can
produce more readable graphs. This way, you can remove from the graph
:term:`DEPENDS` from inherited classes such as ``base.bbclass``.
``DEPENDS`` from inherited classes such as ``base.bbclass``.
Here are two examples that create dependency graphs. The second example
omits depends common in OpenEmbedded from the graph::
omits depends common in OpenEmbedded from the graph: ::
$ bitbake -g foo
@@ -566,7 +564,7 @@ for two separate targets:
.. image:: figures/bb_multiconfig_files.png
:align: center
The reason for this required file hierarchy is because the :term:`BBPATH`
The reason for this required file hierarchy is because the ``BBPATH``
variable is not constructed until the layers are parsed. Consequently,
using the configuration file as a pre-configuration file is not possible
unless it is located in the current working directory.
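For example, for the two multiconfigs used below, the configuration files
would typically live inside the build directory (the paths are shown for
illustration)::

    conf/multiconfig/target1.conf
    conf/multiconfig/target2.conf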
@@ -584,17 +582,17 @@ accomplished by setting the
configuration files for ``target1`` and ``target2`` defined in the build
directory. The following statement in the ``local.conf`` file both
enables BitBake to perform multiple configuration builds and specifies
the two extra multiconfigs::
the two extra multiconfigs: ::
BBMULTICONFIG = "target1 target2"
Once the target configuration files are in place and BitBake has been
enabled to perform multiple configuration builds, use the following
command form to start the builds::
command form to start the builds: ::
$ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]
Here is an example for two extra multiconfigs: ``target1`` and ``target2``::
Here is an example for two extra multiconfigs: ``target1`` and ``target2``: ::
$ bitbake mc::target mc:target1:target mc:target2:target
@@ -615,12 +613,12 @@ multiconfig.
To enable dependencies in a multiple configuration build, you must
declare the dependencies in the recipe using the following statement
form::
form: ::
task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"
To better show how to use this statement, consider an example with two
multiconfigs: ``target1`` and ``target2``::
multiconfigs: ``target1`` and ``target2``: ::
image_task[mcdepends] = "mc:target1:target2:image2:rootfs_task"
@@ -631,7 +629,7 @@ completion of the rootfs_task used to build out image2, which is
associated with the "target2" multiconfig.
Once you set up this dependency, you can build the "target1" multiconfig
using a BitBake command as follows::
using a BitBake command as follows: ::
$ bitbake mc:target1:image1
@@ -641,7 +639,7 @@ the ``rootfs_task`` for the "target2" multiconfig build.
Having a recipe depend on the root filesystem of another build might not
seem that useful. Consider this change to the statement in the image1
recipe::
recipe: ::
image_task[mcdepends] = "mc:target1:target2:image2:image_task"


@@ -1,91 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
================
Variable Context
================
|
Variables might only have an impact or can be used in certain contexts. Some
should only be used in global files like ``.conf``, while others are intended only
for local files like ``.bb``. This chapter aims to describe some important variable
contexts.
.. _ref-varcontext-configuration:
BitBake's own configuration
===========================
Variables starting with ``BB_`` usually configure the behaviour of BitBake itself.
For example, one could configure:
- System resources, like disk space to be used (:term:`BB_DISKMON_DIRS`),
or the number of tasks to be run in parallel by BitBake (:term:`BB_NUMBER_THREADS`).
- How the fetchers shall behave, e.g., :term:`BB_FETCH_PREMIRRORONLY` is used
by BitBake to determine if BitBake's fetcher shall search only
:term:`PREMIRRORS` for files.
Those variables are usually configured globally.
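For example, a minimal sketch of such global settings in a configuration
file (the values are illustrative)::

    BB_NUMBER_THREADS = "8"
    BB_FETCH_PREMIRRORONLY = "1"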
BitBake configuration
=====================
There are variables:
- Like :term:`B` or :term:`T`, that are used to specify directories used by
BitBake during the build of a particular recipe. Those variables are
specified in ``bitbake.conf``. Some, like :term:`B`, are quite often
overwritten in recipes.
- Starting with ``FAKEROOT``, to configure how the ``fakeroot`` command is
handled. Those are usually set by ``bitbake.conf`` and might get adapted in a
``bbclass``.
- Detailing where BitBake will store and fetch information from, for
data reuse between build runs like :term:`CACHE`, :term:`DL_DIR` or
:term:`PERSISTENT_DIR`. Those are usually global.
Layers and files
================
Variables starting with ``LAYER`` configure how BitBake handles layers.
Additionally, variables starting with ``BB`` configure how layers and files are
handled. For example:
- :term:`LAYERDEPENDS` is used to configure on which layers a given layer
depends.
- The configured layers are contained in :term:`BBLAYERS` and files in
:term:`BBFILES`.
Those variables are often used in the files ``layer.conf`` and ``bblayers.conf``.
Recipes and packages
====================
Variables handling recipes and packages can be split into:
- :term:`PN`, :term:`PV` or :term:`PF` for example, contain information about
the name or revision of a recipe or package. Usually, the default set in
``bitbake.conf`` is used, but those are from time to time overwritten in
recipes.
- :term:`SUMMARY`, :term:`DESCRIPTION`, :term:`LICENSE` or :term:`HOMEPAGE`
contain the expected information and should be set specifically for every
recipe.
- In recipes, variables are also used to control build and runtime
dependencies between recipes/packages with other recipes/packages. The
most common should be: :term:`PROVIDES`, :term:`RPROVIDES`, :term:`DEPENDS`,
and :term:`RDEPENDS`.
- There are further variables starting with ``SRC`` that specify the sources in
a recipe like :term:`SRC_URI` or :term:`SRCDATE`. Those are also usually set
in recipes.
- Which version or provider of a recipe should be given preference when
multiple recipes would provide the same item is controlled by variables
starting with ``PREFERRED_``. Those are normally set in the configuration
files of a ``MACHINE`` or ``DISTRO``.
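Pulling a few of these together, a hypothetical recipe header could look
like the following (all names and URLs are illustrative)::

    SUMMARY = "Example utility"
    HOMEPAGE = "https://example.com/exampleutil"
    LICENSE = "MIT"
    DEPENDS = "zlib"
    SRC_URI = "https://example.com/exampleutil-${PV}.tar.gz"

A preferred version would then normally be chosen from a ``MACHINE`` or
``DISTRO`` configuration file, e.g. ``PREFERRED_VERSION_exampleutil = "1.0"``.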


@@ -13,7 +13,6 @@ BitBake User Manual
bitbake-user-manual/bitbake-user-manual-intro
bitbake-user-manual/bitbake-user-manual-execution
bitbake-user-manual/bitbake-user-manual-metadata
bitbake-user-manual/bitbake-user-manual-ref-variables-context
bitbake-user-manual/bitbake-user-manual-fetching
bitbake-user-manual/bitbake-user-manual-ref-variables
bitbake-user-manual/bitbake-user-manual-hello


@@ -1,76 +1,32 @@
.. SPDX-License-Identifier: CC-BY-2.5
=================================
BitBake Supported Release Manuals
=================================
*******************************
Release Series 4.2 (mickledore)
*******************************
- :yocto_docs:`BitBake 2.4 User Manual </bitbake/2.4/>`
******************************
Release Series 4.0 (kirkstone)
******************************
- :yocto_docs:`BitBake 2.0 User Manual </bitbake/2.0/>`
=========================
Current Release Manuals
=========================
****************************
Release Series 3.1 (dunfell)
3.1 'dunfell' Release Series
****************************
- :yocto_docs:`BitBake 1.46 User Manual </bitbake/1.46/>`
================================
BitBake Outdated Release Manuals
================================
*****************************
Release Series 4.1 (langdale)
*****************************
- :yocto_docs:`BitBake 2.2 User Manual </bitbake/2.2/>`
******************************
Release Series 3.4 (honister)
******************************
- :yocto_docs:`BitBake 1.52 User Manual </bitbake/1.52/>`
******************************
Release Series 3.3 (hardknott)
******************************
- :yocto_docs:`BitBake 1.50 User Manual </bitbake/1.50/>`
*******************************
Release Series 3.2 (gatesgarth)
*******************************
- :yocto_docs:`BitBake 1.48 User Manual </bitbake/1.48/>`
*******************************************
Release Series 3.1 (dunfell first versions)
*******************************************
- :yocto_docs:`3.1 BitBake User Manual </3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.1 BitBake User Manual </3.1.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.2 BitBake User Manual </3.1.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.3 BitBake User Manual </3.1.3/bitbake-user-manual/bitbake-user-manual.html>`
==========================
Previous Release Manuals
==========================
*************************
Release Series 3.0 (zeus)
3.0 'zeus' Release Series
*************************
- :yocto_docs:`3.0 BitBake User Manual </3.0/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.1 BitBake User Manual </3.0.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.2 BitBake User Manual </3.0.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.3 BitBake User Manual </3.0.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.4 BitBake User Manual </3.0.4/bitbake-user-manual/bitbake-user-manual.html>`
****************************
Release Series 2.7 (warrior)
2.7 'warrior' Release Series
****************************
- :yocto_docs:`2.7 BitBake User Manual </2.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -80,7 +36,7 @@ Release Series 2.7 (warrior)
- :yocto_docs:`2.7.4 BitBake User Manual </2.7.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.6 (thud)
2.6 'thud' Release Series
*************************
- :yocto_docs:`2.6 BitBake User Manual </2.6/bitbake-user-manual/bitbake-user-manual.html>`
@@ -90,16 +46,16 @@ Release Series 2.6 (thud)
- :yocto_docs:`2.6.4 BitBake User Manual </2.6.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.5 (sumo)
2.5 'sumo' Release Series
*************************
- :yocto_docs:`2.5 Documentation </2.5>`
- :yocto_docs:`2.5.1 Documentation </2.5.1>`
- :yocto_docs:`2.5.2 Documentation </2.5.2>`
- :yocto_docs:`2.5.3 Documentation </2.5.3>`
- :yocto_docs:`2.5 BitBake User Manual </2.5/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.1 BitBake User Manual </2.5.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.2 BitBake User Manual </2.5.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.3 BitBake User Manual </2.5.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 2.4 (rocko)
2.4 'rocko' Release Series
**************************
- :yocto_docs:`2.4 BitBake User Manual </2.4/bitbake-user-manual/bitbake-user-manual.html>`
@@ -109,7 +65,7 @@ Release Series 2.4 (rocko)
- :yocto_docs:`2.4.4 BitBake User Manual </2.4.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.3 (pyro)
2.3 'pyro' Release Series
*************************
- :yocto_docs:`2.3 BitBake User Manual </2.3/bitbake-user-manual/bitbake-user-manual.html>`
@@ -119,7 +75,7 @@ Release Series 2.3 (pyro)
- :yocto_docs:`2.3.4 BitBake User Manual </2.3.4/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 2.2 (morty)
2.2 'morty' Release Series
**************************
- :yocto_docs:`2.2 BitBake User Manual </2.2/bitbake-user-manual/bitbake-user-manual.html>`
@@ -128,7 +84,7 @@ Release Series 2.2 (morty)
- :yocto_docs:`2.2.3 BitBake User Manual </2.2.3/bitbake-user-manual/bitbake-user-manual.html>`
****************************
Release Series 2.1 (krogoth)
2.1 'krogoth' Release Series
****************************
- :yocto_docs:`2.1 BitBake User Manual </2.1/bitbake-user-manual/bitbake-user-manual.html>`
@@ -137,7 +93,7 @@ Release Series 2.1 (krogoth)
- :yocto_docs:`2.1.3 BitBake User Manual </2.1.3/bitbake-user-manual/bitbake-user-manual.html>`
***************************
Release Series 2.0 (jethro)
2.0 'jethro' Release Series
***************************
- :yocto_docs:`1.9 BitBake User Manual </1.9/bitbake-user-manual/bitbake-user-manual.html>`
@@ -147,7 +103,7 @@ Release Series 2.0 (jethro)
- :yocto_docs:`2.0.3 BitBake User Manual </2.0.3/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 1.8 (fido)
1.8 'fido' Release Series
*************************
- :yocto_docs:`1.8 BitBake User Manual </1.8/bitbake-user-manual/bitbake-user-manual.html>`
@@ -155,7 +111,7 @@ Release Series 1.8 (fido)
- :yocto_docs:`1.8.2 BitBake User Manual </1.8.2/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 1.7 (dizzy)
1.7 'dizzy' Release Series
**************************
- :yocto_docs:`1.7 BitBake User Manual </1.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -164,7 +120,7 @@ Release Series 1.7 (dizzy)
- :yocto_docs:`1.7.3 BitBake User Manual </1.7.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 1.6 (daisy)
1.6 'daisy' Release Series
**************************
- :yocto_docs:`1.6 BitBake User Manual </1.6/bitbake-user-manual/bitbake-user-manual.html>`

View File

@@ -3,8 +3,6 @@
#
# Copyright (C) 2006 Tim Ansell
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Please Note:
# Be careful when using mutable types (i.e. dicts and lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.

View File

@@ -9,19 +9,12 @@
# SPDX-License-Identifier: GPL-2.0-only
#
__version__ = "2.8.0"
__version__ = "1.50.0"
import sys
if sys.version_info < (3, 8, 0):
raise RuntimeError("Sorry, python 3.8.0 or later is required for this version of bitbake")
if sys.version_info < (3, 5, 0):
raise RuntimeError("Sorry, python 3.5.0 or later is required for this version of bitbake")
if sys.version_info < (3, 10, 0):
# With Python 3.8 and 3.9, we see errors such as "libgcc_s.so.1 must be installed for pthread_cancel to work"
# https://stackoverflow.com/questions/64797838/libgcc-s-so-1-must-be-installed-for-pthread-cancel-to-work
# https://bugs.ams1.psf.io/issue42888
# so ensure libgcc_s is loaded early on
import ctypes
libgcc_s = ctypes.CDLL('libgcc_s.so.1')
class BBHandledException(Exception):
"""
@@ -36,7 +29,6 @@ class BBHandledException(Exception):
import os
import logging
from collections import namedtuple
class NullHandler(logging.Handler):
@@ -68,10 +60,6 @@ class BBLoggerMixin(object):
return
if loglevel < bb.msg.loggerDefaultLogLevel:
return
if not isinstance(level, int) or not isinstance(msg, str):
mainlogger.warning("Invalid arguments in bbdebug: %s" % repr((level, msg,) + args))
return self.log(loglevel, msg, *args, **kwargs)
def plain(self, msg, *args, **kwargs):
@@ -83,13 +71,6 @@ class BBLoggerMixin(object):
def verbnote(self, msg, *args, **kwargs):
return self.log(logging.INFO + 2, msg, *args, **kwargs)
def warnonce(self, msg, *args, **kwargs):
return self.log(logging.WARNING - 1, msg, *args, **kwargs)
def erroronce(self, msg, *args, **kwargs):
return self.log(logging.ERROR - 1, msg, *args, **kwargs)
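# In terms of the stock logging levels, the custom ones above sit at the
# following numeric values (a sketch of the mapping):
#   verbnote  = logging.INFO + 2     -> 22
#   warnonce  = logging.WARNING - 1  -> 29
#   erroronce = logging.ERROR - 1    -> 39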
Logger = logging.getLoggerClass()
class BBLogger(Logger, BBLoggerMixin):
def __init__(self, name, *args, **kwargs):
@@ -176,15 +157,9 @@ def verbnote(*args):
def warn(*args):
mainlogger.warning(''.join(args))
def warnonce(*args):
mainlogger.warnonce(''.join(args))
def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)
def erroronce(*args):
mainlogger.erroronce(''.join(args))
def fatal(*args, **kwargs):
mainlogger.critical(''.join(args), extra=kwargs)
raise BBHandledException()
@@ -228,14 +203,3 @@ def deprecate_import(current, modulename, fromlist, renames = None):
setattr(sys.modules[current], newname, newobj)
TaskData = namedtuple("TaskData", [
"pn",
"taskname",
"fn",
"deps",
"provides",
"taskhash",
"unihash",
"hashfn",
"taskhash_deps",
])

View File

@@ -1,215 +0,0 @@
#! /usr/bin/env python3
#
# Copyright 2023 by Garmin Ltd. or its subsidiaries
#
# SPDX-License-Identifier: MIT
import sys
import ctypes
import os
import errno
import pwd
import grp
libacl = ctypes.CDLL("libacl.so.1", use_errno=True)
ACL_TYPE_ACCESS = 0x8000
ACL_TYPE_DEFAULT = 0x4000
ACL_FIRST_ENTRY = 0
ACL_NEXT_ENTRY = 1
ACL_UNDEFINED_TAG = 0x00
ACL_USER_OBJ = 0x01
ACL_USER = 0x02
ACL_GROUP_OBJ = 0x04
ACL_GROUP = 0x08
ACL_MASK = 0x10
ACL_OTHER = 0x20
ACL_READ = 0x04
ACL_WRITE = 0x02
ACL_EXECUTE = 0x01
acl_t = ctypes.c_void_p
acl_entry_t = ctypes.c_void_p
acl_permset_t = ctypes.c_void_p
acl_perm_t = ctypes.c_uint
acl_tag_t = ctypes.c_int
libacl.acl_free.argtypes = [acl_t]
def acl_free(acl):
libacl.acl_free(acl)
libacl.acl_get_file.restype = acl_t
libacl.acl_get_file.argtypes = [ctypes.c_char_p, ctypes.c_uint]
def acl_get_file(path, typ):
acl = libacl.acl_get_file(os.fsencode(path), typ)
if acl is None:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err), str(path))
return acl
libacl.acl_get_entry.argtypes = [acl_t, ctypes.c_int, ctypes.c_void_p]
def acl_get_entry(acl, entry_id):
entry = acl_entry_t()
ret = libacl.acl_get_entry(acl, entry_id, ctypes.byref(entry))
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
if ret == 0:
return None
return entry
libacl.acl_get_tag_type.argtypes = [acl_entry_t, ctypes.c_void_p]
def acl_get_tag_type(entry_d):
tag = acl_tag_t()
ret = libacl.acl_get_tag_type(entry_d, ctypes.byref(tag))
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return tag.value
libacl.acl_get_qualifier.restype = ctypes.c_void_p
libacl.acl_get_qualifier.argtypes = [acl_entry_t]
def acl_get_qualifier(entry_d):
ret = libacl.acl_get_qualifier(entry_d)
if ret is None:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return ctypes.c_void_p(ret)
libacl.acl_get_permset.argtypes = [acl_entry_t, ctypes.c_void_p]
def acl_get_permset(entry_d):
permset = acl_permset_t()
ret = libacl.acl_get_permset(entry_d, ctypes.byref(permset))
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return permset
libacl.acl_get_perm.argtypes = [acl_permset_t, acl_perm_t]
def acl_get_perm(permset_d, perm):
ret = libacl.acl_get_perm(permset_d, perm)
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return bool(ret)
class Entry(object):
def __init__(self, tag, qualifier, mode):
self.tag = tag
self.qualifier = qualifier
self.mode = mode
def __str__(self):
typ = ""
qual = ""
if self.tag == ACL_USER:
typ = "user"
qual = pwd.getpwuid(self.qualifier).pw_name
elif self.tag == ACL_GROUP:
typ = "group"
qual = grp.getgrgid(self.qualifier).gr_name
elif self.tag == ACL_USER_OBJ:
typ = "user"
elif self.tag == ACL_GROUP_OBJ:
typ = "group"
elif self.tag == ACL_MASK:
typ = "mask"
elif self.tag == ACL_OTHER:
typ = "other"
r = "r" if self.mode & ACL_READ else "-"
w = "w" if self.mode & ACL_WRITE else "-"
x = "x" if self.mode & ACL_EXECUTE else "-"
return f"{typ}:{qual}:{r}{w}{x}"
class ACL(object):
def __init__(self, acl):
self.acl = acl
def __del__(self):
acl_free(self.acl)
def entries(self):
entry_id = ACL_FIRST_ENTRY
while True:
entry = acl_get_entry(self.acl, entry_id)
if entry is None:
break
permset = acl_get_permset(entry)
mode = 0
for m in (ACL_READ, ACL_WRITE, ACL_EXECUTE):
if acl_get_perm(permset, m):
mode |= m
qualifier = None
tag = acl_get_tag_type(entry)
if tag == ACL_USER or tag == ACL_GROUP:
qual = acl_get_qualifier(entry)
qualifier = ctypes.cast(qual, ctypes.POINTER(ctypes.c_int))[0]
yield Entry(tag, qualifier, mode)
entry_id = ACL_NEXT_ENTRY
@classmethod
def from_path(cls, path, typ):
acl = acl_get_file(path, typ)
return cls(acl)
def main():
import argparse
import pwd
import grp
from pathlib import Path
parser = argparse.ArgumentParser()
parser.add_argument("path", help="File Path", type=Path)
args = parser.parse_args()
acl = ACL.from_path(args.path, ACL_TYPE_ACCESS)
for entry in acl.entries():
print(str(entry))
return 0
if __name__ == "__main__":
sys.exit(main())
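# Programmatic use of the wrappers above, as a sketch (the path is
# hypothetical; libacl.so.1 and a POSIX-ACL-capable filesystem are assumed):
#
#   access = ACL.from_path("/tmp/example", ACL_TYPE_ACCESS)
#   for entry in access.entries():
#       print(entry)   # getfacl-style lines, e.g. "user::rw-" or "group:staff:r--"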

View File

@@ -1,16 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
from .client import AsyncClient, Client, ClientPool
from .serv import AsyncServer, AsyncServerConnection
from .connection import DEFAULT_MAX_CHUNK
from .exceptions import (
ClientError,
ServerError,
ConnectionClosedError,
InvokeError,
)

View File

@@ -1,313 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import abc
import asyncio
import json
import os
import socket
import sys
import re
import contextlib
from threading import Thread
from .connection import StreamConnection, WebsocketConnection, DEFAULT_MAX_CHUNK
from .exceptions import ConnectionClosedError, InvokeError
UNIX_PREFIX = "unix://"
WS_PREFIX = "ws://"
WSS_PREFIX = "wss://"
ADDR_TYPE_UNIX = 0
ADDR_TYPE_TCP = 1
ADDR_TYPE_WS = 2
def parse_address(addr):
if addr.startswith(UNIX_PREFIX):
return (ADDR_TYPE_UNIX, (addr[len(UNIX_PREFIX) :],))
elif addr.startswith(WS_PREFIX) or addr.startswith(WSS_PREFIX):
return (ADDR_TYPE_WS, (addr,))
else:
m = re.match(r"\[(?P<host>[^\]]*)\]:(?P<port>\d+)$", addr)
if m is not None:
host = m.group("host")
port = m.group("port")
else:
host, port = addr.split(":")
return (ADDR_TYPE_TCP, (host, int(port)))
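# The three address forms parse_address() accepts, sketched with hypothetical
# addresses:
#   parse_address("unix:///run/hashserv.sock") -> (ADDR_TYPE_UNIX, ("/run/hashserv.sock",))
#   parse_address("ws://localhost:8686")       -> (ADDR_TYPE_WS, ("ws://localhost:8686",))
#   parse_address("[::1]:8686")                -> (ADDR_TYPE_TCP, ("::1", 8686))
#   parse_address("localhost:8686")            -> (ADDR_TYPE_TCP, ("localhost", 8686))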
class AsyncClient(object):
def __init__(
self,
proto_name,
proto_version,
logger,
timeout=30,
server_headers=False,
headers={},
):
self.socket = None
self.max_chunk = DEFAULT_MAX_CHUNK
self.proto_name = proto_name
self.proto_version = proto_version
self.logger = logger
self.timeout = timeout
self.needs_server_headers = server_headers
self.server_headers = {}
self.headers = headers
async def connect_tcp(self, address, port):
async def connect_sock():
reader, writer = await asyncio.open_connection(address, port)
return StreamConnection(reader, writer, self.timeout, self.max_chunk)
self._connect_sock = connect_sock
async def connect_unix(self, path):
async def connect_sock():
# AF_UNIX has path length issues, so chdir here to work around them
cwd = os.getcwd()
try:
os.chdir(os.path.dirname(path))
# The socket must be opened synchronously so that the CWD doesn't get
# changed out from underneath us; we then pass it as a sock into asyncio
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
sock.connect(os.path.basename(path))
finally:
os.chdir(cwd)
reader, writer = await asyncio.open_unix_connection(sock=sock)
return StreamConnection(reader, writer, self.timeout, self.max_chunk)
self._connect_sock = connect_sock
async def connect_websocket(self, uri):
import websockets
async def connect_sock():
websocket = await websockets.connect(uri, ping_interval=None)
return WebsocketConnection(websocket, self.timeout)
self._connect_sock = connect_sock
async def setup_connection(self):
# Send headers
await self.socket.send("%s %s" % (self.proto_name, self.proto_version))
await self.socket.send(
"needs-headers: %s" % ("true" if self.needs_server_headers else "false")
)
for k, v in self.headers.items():
await self.socket.send("%s: %s" % (k, v))
# End of headers
await self.socket.send("")
self.server_headers = {}
if self.needs_server_headers:
while True:
line = await self.socket.recv()
if not line:
# End headers
break
tag, value = line.split(":", 1)
self.server_headers[tag.lower()] = value.strip()
async def get_header(self, tag, default):
await self.connect()
return self.server_headers.get(tag, default)
async def connect(self):
if self.socket is None:
self.socket = await self._connect_sock()
await self.setup_connection()
async def disconnect(self):
if self.socket is not None:
await self.socket.close()
self.socket = None
async def close(self):
await self.disconnect()
async def _send_wrapper(self, proc):
count = 0
while True:
try:
await self.connect()
return await proc()
except (
OSError,
ConnectionError,
ConnectionClosedError,
json.JSONDecodeError,
UnicodeDecodeError,
) as e:
self.logger.warning("Error talking to server: %s" % e)
if count >= 3:
if not isinstance(e, ConnectionError):
raise ConnectionError(str(e))
raise e
await self.close()
count += 1
def check_invoke_error(self, msg):
if isinstance(msg, dict) and "invoke-error" in msg:
raise InvokeError(msg["invoke-error"]["message"])
async def invoke(self, msg):
async def proc():
await self.socket.send_message(msg)
return await self.socket.recv_message()
result = await self._send_wrapper(proc)
self.check_invoke_error(result)
return result
async def ping(self):
return await self.invoke({"ping": {}})
async def __aenter__(self):
return self
async def __aexit__(self, exc_type, exc_value, traceback):
await self.close()
class Client(object):
def __init__(self):
self.client = self._get_async_client()
self.loop = asyncio.new_event_loop()
# Override any pre-existing loop.
# Without this, the PR server export selftest triggers a hang
# when running with Python 3.7. The drawback is that there is
# potential for issues if the PR and hash equiv (or some new)
# clients need to both be instantiated in the same process.
# This should be revisited if/when Python 3.9 becomes the
# minimum required version for BitBake, as it appears to be
# unnecessary (but harmless) there.
asyncio.set_event_loop(self.loop)
self._add_methods("connect_tcp", "ping")
@abc.abstractmethod
def _get_async_client(self):
pass
def _get_downcall_wrapper(self, downcall):
def wrapper(*args, **kwargs):
return self.loop.run_until_complete(downcall(*args, **kwargs))
return wrapper
def _add_methods(self, *methods):
for m in methods:
downcall = getattr(self.client, m)
setattr(self, m, self._get_downcall_wrapper(downcall))
def connect_unix(self, path):
self.loop.run_until_complete(self.client.connect_unix(path))
self.loop.run_until_complete(self.client.connect())
@property
def max_chunk(self):
return self.client.max_chunk
@max_chunk.setter
def max_chunk(self, value):
self.client.max_chunk = value
def disconnect(self):
self.loop.run_until_complete(self.client.close())
def close(self):
if self.loop:
self.loop.run_until_complete(self.client.close())
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
self.loop = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
return False
class ClientPool(object):
def __init__(self, max_clients):
self.avail_clients = []
self.num_clients = 0
self.max_clients = max_clients
self.loop = None
self.client_condition = None
@abc.abstractmethod
async def _new_client(self):
raise NotImplementedError("Must be implemented in derived class")
def close(self):
if self.client_condition:
self.client_condition = None
if self.loop:
self.loop.run_until_complete(self.__close_clients())
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
self.loop = None
def run_tasks(self, tasks):
if not self.loop:
self.loop = asyncio.new_event_loop()
thread = Thread(target=self.__thread_main, args=(tasks,))
thread.start()
thread.join()
@contextlib.asynccontextmanager
async def get_client(self):
async with self.client_condition:
if self.avail_clients:
client = self.avail_clients.pop()
elif self.num_clients < self.max_clients:
self.num_clients += 1
client = await self._new_client()
else:
while not self.avail_clients:
await self.client_condition.wait()
client = self.avail_clients.pop()
try:
yield client
finally:
async with self.client_condition:
self.avail_clients.append(client)
self.client_condition.notify()
def __thread_main(self, tasks):
async def process_task(task):
async with self.get_client() as client:
await task(client)
asyncio.set_event_loop(self.loop)
if not self.client_condition:
self.client_condition = asyncio.Condition()
tasks = [process_task(t) for t in tasks]
self.loop.run_until_complete(asyncio.gather(*tasks))
async def __close_clients(self):
for c in self.avail_clients:
await c.close()
self.avail_clients = []
self.num_clients = 0
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
return False
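# A minimal concrete subclass, sketching how Client is meant to be specialised.
# "PingClient", the protocol name/version and the address are hypothetical, and
# a matching server must already be listening:
import logging

class PingClient(Client):
    def _get_async_client(self):
        return AsyncClient("PING", "1.0", logging.getLogger("PingClient"))

with PingClient() as client:
    client.connect_tcp("localhost", 8686)  # records how to open the socket
    print(client.ping())                   # connects lazily, expects {"alive": True}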

View File

@@ -1,146 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import asyncio
import itertools
import json
from datetime import datetime
from .exceptions import ClientError, ConnectionClosedError
# The Python async server defaults to a 64K receive buffer, so we hardcode our
# maximum chunk size. It would be better if the client and server reported
# their maximum chunk sizes to each other, but that would slow down connection
# setup with a round-trip delay, so I'd rather not do that unless it proves
# necessary.
DEFAULT_MAX_CHUNK = 32 * 1024
def chunkify(msg, max_chunk):
if len(msg) < max_chunk - 1:
yield "".join((msg, "\n"))
else:
yield "".join((json.dumps({"chunk-stream": None}), "\n"))
args = [iter(msg)] * (max_chunk - 1)
for m in map("".join, itertools.zip_longest(*args, fillvalue="")):
yield "".join(itertools.chain(m, "\n"))
yield "\n"
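# How chunkify() frames messages, sketched with a tiny max_chunk so the
# chunking is visible (real connections use DEFAULT_MAX_CHUNK):
#   list(chunkify('{"ping": {}}', 64))  -> ['{"ping": {}}\n']
#   list(chunkify("x" * 100, 8))        -> ['{"chunk-stream": null}\n',
#                                           'xxxxxxx\n', ..., '\n']
# recv_message() below spots the "chunk-stream" marker and keeps reading lines
# until the terminating bare newline before re-joining and decoding the JSON.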
def json_serialize(obj):
if isinstance(obj, datetime):
return obj.isoformat()
raise TypeError("Type %s not serializable" % type(obj))
class StreamConnection(object):
def __init__(self, reader, writer, timeout, max_chunk=DEFAULT_MAX_CHUNK):
self.reader = reader
self.writer = writer
self.timeout = timeout
self.max_chunk = max_chunk
@property
def address(self):
return self.writer.get_extra_info("peername")
async def send_message(self, msg):
for c in chunkify(json.dumps(msg, default=json_serialize), self.max_chunk):
self.writer.write(c.encode("utf-8"))
await self.writer.drain()
async def recv_message(self):
l = await self.recv()
m = json.loads(l)
if not m:
return m
if "chunk-stream" in m:
lines = []
while True:
l = await self.recv()
if not l:
break
lines.append(l)
m = json.loads("".join(lines))
return m
async def send(self, msg):
self.writer.write(("%s\n" % msg).encode("utf-8"))
await self.writer.drain()
async def recv(self):
if self.timeout < 0:
line = await self.reader.readline()
else:
try:
line = await asyncio.wait_for(self.reader.readline(), self.timeout)
except asyncio.TimeoutError:
raise ConnectionError("Timed out waiting for data")
if not line:
raise ConnectionClosedError("Connection closed")
line = line.decode("utf-8")
if not line.endswith("\n"):
raise ConnectionError("Bad message %r" % (line))
return line.rstrip()
async def close(self):
self.reader = None
if self.writer is not None:
self.writer.close()
self.writer = None
class WebsocketConnection(object):
def __init__(self, socket, timeout):
self.socket = socket
self.timeout = timeout
@property
def address(self):
return ":".join(str(s) for s in self.socket.remote_address)
async def send_message(self, msg):
await self.send(json.dumps(msg, default=json_serialize))
async def recv_message(self):
m = await self.recv()
return json.loads(m)
async def send(self, msg):
import websockets.exceptions
try:
await self.socket.send(msg)
except websockets.exceptions.ConnectionClosed:
raise ConnectionClosedError("Connection closed")
async def recv(self):
import websockets.exceptions
try:
if self.timeout < 0:
return await self.socket.recv()
try:
return await asyncio.wait_for(self.socket.recv(), self.timeout)
except asyncio.TimeoutError:
raise ConnectionError("Timed out waiting for data")
except websockets.exceptions.ConnectionClosed:
raise ConnectionClosedError("Connection closed")
async def close(self):
if self.socket is not None:
await self.socket.close()
self.socket = None

View File

@@ -1,21 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
class ClientError(Exception):
pass
class InvokeError(Exception):
pass
class ServerError(Exception):
pass
class ConnectionClosedError(Exception):
pass

View File

@@ -1,391 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import abc
import asyncio
import json
import os
import signal
import socket
import sys
import multiprocessing
import logging
from .connection import StreamConnection, WebsocketConnection
from .exceptions import ClientError, ServerError, ConnectionClosedError, InvokeError
class ClientLoggerAdapter(logging.LoggerAdapter):
def process(self, msg, kwargs):
return f"[Client {self.extra['address']}] {msg}", kwargs
class AsyncServerConnection(object):
# If a handler returns this object (e.g. `return self.NO_RESPONSE`), no
# return message will automatically be sent back to the client
NO_RESPONSE = object()
def __init__(self, socket, proto_name, logger):
self.socket = socket
self.proto_name = proto_name
self.handlers = {
"ping": self.handle_ping,
}
self.logger = ClientLoggerAdapter(
logger,
{
"address": socket.address,
},
)
self.client_headers = {}
async def close(self):
await self.socket.close()
async def handle_headers(self, headers):
return {}
async def process_requests(self):
try:
self.logger.info("Client %r connected" % (self.socket.address,))
# Read protocol and version
client_protocol = await self.socket.recv()
if not client_protocol:
return
(client_proto_name, client_proto_version) = client_protocol.split()
if client_proto_name != self.proto_name:
self.logger.debug("Rejecting invalid protocol %s" % (client_proto_name))
return
self.proto_version = tuple(int(v) for v in client_proto_version.split("."))
if not self.validate_proto_version():
self.logger.debug(
"Rejecting invalid protocol version %s" % (client_proto_version)
)
return
# Read headers
self.client_headers = {}
while True:
header = await self.socket.recv()
if not header:
# Empty line. End of headers
break
tag, value = header.split(":", 1)
self.client_headers[tag.lower()] = value.strip()
if self.client_headers.get("needs-headers", "false") == "true":
for k, v in (await self.handle_headers(self.client_headers)).items():
await self.socket.send("%s: %s" % (k, v))
await self.socket.send("")
# Handle messages
while True:
d = await self.socket.recv_message()
if d is None:
break
try:
response = await self.dispatch_message(d)
except InvokeError as e:
await self.socket.send_message(
{"invoke-error": {"message": str(e)}}
)
break
if response is not self.NO_RESPONSE:
await self.socket.send_message(response)
except ConnectionClosedError as e:
self.logger.info(str(e))
except (ClientError, ConnectionError) as e:
self.logger.error(str(e))
finally:
await self.close()
async def dispatch_message(self, msg):
for k in self.handlers.keys():
if k in msg:
self.logger.debug("Handling %s" % k)
return await self.handlers[k](msg[k])
raise ClientError("Unrecognized command %r" % msg)
async def handle_ping(self, request):
return {"alive": True}
class StreamServer(object):
def __init__(self, handler, logger):
self.handler = handler
self.logger = logger
self.closed = False
async def handle_stream_client(self, reader, writer):
# writer.transport.set_write_buffer_limits(0)
socket = StreamConnection(reader, writer, -1)
if self.closed:
await socket.close()
return
await self.handler(socket)
async def stop(self):
self.closed = True
class TCPStreamServer(StreamServer):
def __init__(self, host, port, handler, logger):
super().__init__(handler, logger)
self.host = host
self.port = port
def start(self, loop):
self.server = loop.run_until_complete(
asyncio.start_server(self.handle_stream_client, self.host, self.port)
)
for s in self.server.sockets:
self.logger.debug("Listening on %r" % (s.getsockname(),))
# Newer python does this automatically. Do it manually here for
# maximum compatibility
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "[%s]:%d" % (name[0], name[1])
else:
self.address = "%s:%d" % (name[0], name[1])
return [self.server.wait_closed()]
async def stop(self):
await super().stop()
self.server.close()
def cleanup(self):
pass
class UnixStreamServer(StreamServer):
def __init__(self, path, handler, logger):
super().__init__(handler, logger)
self.path = path
def start(self, loop):
cwd = os.getcwd()
try:
# Work around path length limits in AF_UNIX
os.chdir(os.path.dirname(self.path))
self.server = loop.run_until_complete(
asyncio.start_unix_server(
self.handle_stream_client, os.path.basename(self.path)
)
)
finally:
os.chdir(cwd)
self.logger.debug("Listening on %r" % self.path)
self.address = "unix://%s" % os.path.abspath(self.path)
return [self.server.wait_closed()]
async def stop(self):
await super().stop()
self.server.close()
def cleanup(self):
os.unlink(self.path)
class WebsocketsServer(object):
def __init__(self, host, port, handler, logger):
self.host = host
self.port = port
self.handler = handler
self.logger = logger
def start(self, loop):
import websockets.server
self.server = loop.run_until_complete(
websockets.server.serve(
self.client_handler,
self.host,
self.port,
ping_interval=None,
)
)
for s in self.server.sockets:
self.logger.debug("Listening on %r" % (s.getsockname(),))
# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "ws://[%s]:%d" % (name[0], name[1])
else:
self.address = "ws://%s:%d" % (name[0], name[1])
return [self.server.wait_closed()]
async def stop(self):
self.server.close()
def cleanup(self):
pass
async def client_handler(self, websocket):
socket = WebsocketConnection(websocket, -1)
await self.handler(socket)
class AsyncServer(object):
def __init__(self, logger):
self.logger = logger
self.loop = None
self.run_tasks = []
def start_tcp_server(self, host, port):
self.server = TCPStreamServer(host, port, self._client_handler, self.logger)
def start_unix_server(self, path):
self.server = UnixStreamServer(path, self._client_handler, self.logger)
def start_websocket_server(self, host, port):
self.server = WebsocketsServer(host, port, self._client_handler, self.logger)
async def _client_handler(self, socket):
address = socket.address
try:
client = self.accept_client(socket)
await client.process_requests()
except Exception as e:
import traceback
self.logger.error(
"Error from client %s: %s" % (address, str(e)), exc_info=True
)
traceback.print_exc()
finally:
self.logger.debug("Client %s disconnected", address)
await socket.close()
@abc.abstractmethod
def accept_client(self, socket):
pass
async def stop(self):
self.logger.debug("Stopping server")
await self.server.stop()
def start(self):
tasks = self.server.start(self.loop)
self.address = self.server.address
return tasks
def signal_handler(self):
self.logger.debug("Got exit signal")
self.loop.create_task(self.stop())
def _serve_forever(self, tasks):
try:
self.loop.add_signal_handler(signal.SIGTERM, self.signal_handler)
self.loop.add_signal_handler(signal.SIGINT, self.signal_handler)
self.loop.add_signal_handler(signal.SIGQUIT, self.signal_handler)
signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGTERM])
self.loop.run_until_complete(asyncio.gather(*tasks))
self.logger.debug("Server shutting down")
finally:
self.server.cleanup()
def serve_forever(self):
"""
Serve requests in the current process
"""
self._create_loop()
tasks = self.start()
self._serve_forever(tasks)
self.loop.close()
def _create_loop(self):
# Create loop and override any loop that may have existed in
# a parent process. It is possible that the usecases of
# serve_forever might be constrained enough to allow using
# get_event_loop here, but better safe than sorry for now.
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
def serve_as_process(self, *, prefunc=None, args=(), log_level=None):
"""
Serve requests in a child process
"""
def run(queue):
# Create loop and override any loop that may have existed
# in a parent process. Without doing this and instead
# using get_event_loop, at the very minimum the hashserv
# unit tests will hang when running the second test.
# This happens since get_event_loop in the spawned server
# process for the second testcase ends up with the loop
# from the hashserv client created in the unit test process
# when running the first testcase. The problem is somewhat
# more general, though, as any potential use of asyncio in
# Cooker could create a loop that needs to be replaced in this
# new process.
self._create_loop()
try:
self.address = None
tasks = self.start()
finally:
# Always put the server address to wake up the parent task
queue.put(self.address)
queue.close()
if prefunc is not None:
prefunc(self, *args)
if log_level is not None:
self.logger.setLevel(log_level)
self._serve_forever(tasks)
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
queue = multiprocessing.Queue()
# Temporarily block SIGTERM. The server process will inherit this
# block which will ensure it doesn't receive the SIGTERM until the
# handler is ready for it
mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGTERM])
try:
self.process = multiprocessing.Process(target=run, args=(queue,))
self.process.start()
self.address = queue.get()
queue.close()
queue.join_thread()
return self.process
finally:
signal.pthread_sigmask(signal.SIG_SETMASK, mask)
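# The matching server-side sketch ("PING", the port and the class names are
# hypothetical); the stock "ping" handler in AsyncServerConnection is enough:
class PingServerConnection(AsyncServerConnection):
    def validate_proto_version(self):
        return self.proto_version == (1, 0)

class PingServer(AsyncServer):
    def accept_client(self, socket):
        return PingServerConnection(socket, "PING", self.logger)

server = PingServer(logging.getLogger("PingServer"))
server.start_tcp_server("localhost", 8686)
server.serve_forever()  # or serve_as_process() to run in a forked child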

View File

@@ -20,12 +20,10 @@ import itertools
import time
import re
import stat
import datetime
import bb
import bb.msg
import bb.process
import bb.progress
from io import StringIO
from bb import data, event, utils
bblogger = logging.getLogger('BitBake')
@@ -178,9 +176,7 @@ class StdoutNoopContextManager:
@property
def name(self):
if "name" in dir(sys.stdout):
return sys.stdout.name
return "<mem>"
return sys.stdout.name
def exec_func(func, d, dirs = None):
@@ -299,25 +295,9 @@ def exec_func_python(func, d, runfile, cwd=None):
lineno = int(d.getVarFlag(func, "lineno", False))
bb.methodpool.insert_method(func, text, fn, lineno - 1)
if verboseStdoutLogging:
sys.stdout.flush()
sys.stderr.flush()
currout = sys.stdout
currerr = sys.stderr
sys.stderr = sys.stdout = execio = StringIO()
comp = utils.better_compile(code, func, "exec_func_python() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_func_python() autogenerated")
comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated")
finally:
if verboseStdoutLogging:
execio.flush()
logger.plain("%s" % execio.getvalue())
sys.stdout = currout
sys.stderr = currerr
execio.close()
# We want any stdout/stderr to be printed before any other log messages to make debugging
# more accurate. In some cases we seem to lose stdout/stderr entirely in logging tests without this.
sys.stdout.flush()
sys.stderr.flush()
bb.debug(2, "Python function %s finished" % func)
if cwd and olddir:
@@ -456,11 +436,7 @@ exit $ret
if fakerootcmd:
cmd = [fakerootcmd, runfile]
# We only want to output to logger via LogTee if stdout is sys.__stdout__ (which will either
# be real stdout or subprocess PIPE or similar). In other cases we are being run "recursively",
# i.e. inside another function, in which case stdout is already being captured so we don't
# want to Tee here as output would be printed twice, and out of order.
if verboseStdoutLogging and sys.stdout == sys.__stdout__:
if verboseStdoutLogging:
logfile = LogTee(logger, StdoutNoopContextManager())
else:
logfile = StdoutNoopContextManager()
@@ -589,8 +565,10 @@ exit $ret
def _task_data(fn, task, d):
localdata = bb.data.createCopy(d)
localdata.setVar('BB_FILENAME', fn)
localdata.setVar('BB_CURRENTTASK', task[3:])
localdata.setVar('OVERRIDES', 'task-%s:%s' %
(task[3:].replace('_', '-'), d.getVar('OVERRIDES', False)))
localdata.finalize()
bb.data.expandKeys(localdata)
return localdata
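# For a hypothetical task name, the variables set above work out to:
#   task = "do_populate_sysroot"
#   BB_CURRENTTASK = "populate_sysroot"
#   OVERRIDES      = "task-populate-sysroot:<previous OVERRIDES value>"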
@@ -601,7 +579,7 @@ def _exec_task(fn, task, d, quieterr):
running it with its own local metadata, and with some useful variables set.
"""
if not d.getVarFlag(task, 'task', False):
event.fire(TaskInvalid(task, fn, d), d)
event.fire(TaskInvalid(task, d), d)
logger.error("No such task: %s" % task)
return 1
@@ -637,8 +615,7 @@ def _exec_task(fn, task, d, quieterr):
logorder = os.path.join(tempdir, 'log.task_order')
try:
with open(logorder, 'a') as logorderfile:
timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S.%f")
logorderfile.write('{0} {1} ({2}): {3}\n'.format(timestamp, task, os.getpid(), logbase))
logorderfile.write('{0} ({1}): {2}\n'.format(task, os.getpid(), logbase))
except OSError:
logger.exception("Opening log file '%s'", logorder)
pass
@@ -705,55 +682,47 @@ def _exec_task(fn, task, d, quieterr):
try:
try:
event.fire(TaskStarted(task, fn, logfn, flags, localdata), localdata)
except (bb.BBHandledException, SystemExit):
return 1
try:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
finally:
# Need to flush and close the logs before sending events where the
# UI may try to look at the logs.
sys.stdout.flush()
sys.stderr.flush()
except bb.BBHandledException:
event.fire(TaskFailed(task, fn, logfn, localdata, True), localdata)
return 1
except Exception as exc:
if quieterr:
event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
logger.error(str(exc))
event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
bblogger.removeHandler(handler)
bblogger.removeHandler(handler)
# Restore the backup fds
os.dup2(osi[0], osi[1])
os.dup2(oso[0], oso[1])
os.dup2(ose[0], ose[1])
# Restore the backup fds
os.dup2(osi[0], osi[1])
os.dup2(oso[0], oso[1])
os.dup2(ose[0], ose[1])
# Close the backup fds
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
logger.debug2("Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
except (Exception, SystemExit) as exc:
handled = False
if isinstance(exc, bb.BBHandledException):
handled = True
if quieterr:
if not handled:
logger.warning(repr(exc))
event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
# If the output is already on stdout, we've printed the information in the
# logs once already so don't duplicate
if verboseStdoutLogging or handled:
errprinted = True
if not handled:
logger.error(repr(exc))
event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
return 1
# Close the backup fds
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
logger.debug2("Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, fn, logfn, localdata), localdata)
if not localdata.getVarFlag(task, 'nostamp', False) and not localdata.getVarFlag(task, 'selfstamp', False):
@@ -791,7 +760,44 @@ def exec_task(fn, task, d, profile = False):
event.fire(failedevent, d)
return 1
def _get_cleanmask(taskname, mcfn):
def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
"""
Internal stamp helper function
Makes sure the stamp directory exists
Returns the stamp path+filename
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store and file_name will not be set.
"""
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMP')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
if baseonly:
return stamp
if noextra:
extrainfo = ""
if not stamp:
return
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
stampdir = os.path.dirname(stamp)
if cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
return stamp
def stamp_cleanmask_internal(taskname, d, file_name):
"""
Internal stamp helper function to generate stamp cleaning mask
Returns the stamp path+filename
@@ -799,14 +805,31 @@ def _get_cleanmask(taskname, mcfn):
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store and file_name will not be set.
"""
cleanmask = bb.parse.siggen.stampcleanmask_mcfn(taskname, mcfn)
taskflagname = taskname.replace("_setscene", "")
if cleanmask:
return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
return []
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
def clean_stamp_mcfn(task, mcfn):
cleanmask = _get_cleanmask(task, mcfn)
if file_name:
stamp = d.stampclean[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMPCLEAN')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
if not stamp:
return []
cleanmask = bb.parse.siggen.stampcleanmask(stamp, file_name, taskname, extrainfo)
return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
def make_stamp(task, d, file_name = None):
"""
Creates/updates a stamp for a given task
(d can be a data dict or dataCache)
"""
cleanmask = stamp_cleanmask_internal(task, d, file_name)
for mask in cleanmask:
for name in glob.glob(mask):
# Preserve sigdata files in the stamps directory
@@ -817,45 +840,24 @@ def clean_stamp_mcfn(task, mcfn):
continue
os.unlink(name)
def clean_stamp(task, d):
mcfn = d.getVar('BB_FILENAME')
clean_stamp_mcfn(task, mcfn)
def make_stamp_mcfn(task, mcfn):
basestamp = bb.parse.siggen.stampfile_mcfn(task, mcfn)
stampdir = os.path.dirname(basestamp)
if cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
clean_stamp_mcfn(task, mcfn)
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if basestamp:
bb.utils.remove(basestamp)
open(basestamp, "w").close()
def make_stamp(task, d):
"""
Creates/updates a stamp for a given task
"""
mcfn = d.getVar('BB_FILENAME')
make_stamp_mcfn(task, mcfn)
if stamp:
bb.utils.remove(stamp)
open(stamp, "w").close()
# If we're in task context, write out a signature file for each task
# as it completes
if not task.endswith("_setscene"):
stampbase = bb.parse.siggen.stampfile_base(mcfn)
bb.parse.siggen.dump_sigtask(mcfn, task, stampbase, True)
if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
stampbase = stamp_internal(task, d, None, True)
file_name = d.getVar('BB_FILENAME')
bb.parse.siggen.dump_sigtask(file_name, task, stampbase, True)
def find_stale_stamps(task, mcfn):
current = bb.parse.siggen.stampfile_mcfn(task, mcfn)
current2 = bb.parse.siggen.stampfile_mcfn(task + "_setscene", mcfn)
cleanmask = _get_cleanmask(task, mcfn)
def find_stale_stamps(task, d, file_name=None):
current = stamp_internal(task, d, file_name)
current2 = stamp_internal(task + "_setscene", d, file_name)
cleanmask = stamp_cleanmask_internal(task, d, file_name)
found = []
for mask in cleanmask:
for name in glob.glob(mask):
@@ -869,14 +871,38 @@ def find_stale_stamps(task, mcfn):
found.append(name)
return found
def write_taint(task, d):
def del_stamp(task, d, file_name = None):
"""
Removes a stamp for a given task
(d can be a data dict or dataCache)
"""
stamp = stamp_internal(task, d, file_name)
bb.utils.remove(stamp)
def write_taint(task, d, file_name = None):
"""
Creates a "taint" file which will force the specified task and its
dependents to be re-run the next time by influencing the value of its
taskhash.
(d can be a data dict or dataCache)
"""
mcfn = d.getVar('BB_FILENAME')
bb.parse.siggen.invalidate_task(task, mcfn)
import uuid
if file_name:
taintfn = d.stamp[file_name] + '.' + task + '.taint'
else:
taintfn = d.getVar('STAMP') + '.' + task + '.taint'
bb.utils.mkdirhier(os.path.dirname(taintfn))
# The specific content of the taint file is not really important;
# we just need it to be random, so a random UUID is used
with open(taintfn, 'w') as taintf:
taintf.write(str(uuid.uuid4()))
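# Net effect, sketched with hypothetical values: a file such as
#   ${STAMP}.do_compile.taint
# is (re)written with a fresh UUID; because the taint feeds into the taskhash,
# do_compile and its dependents are treated as out of date on the next run.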
def stampfile(taskname, d, file_name = None, noextra=False):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name, noextra=noextra)
def add_tasks(tasklist, d):
task_deps = d.getVar('_task_deps', False)
@@ -901,11 +927,6 @@ def add_tasks(tasklist, d):
task_deps[name] = {}
if name in flags:
deptask = d.expand(flags[name])
if name in ['noexec', 'fakeroot', 'nostamp']:
if deptask != '1':
bb.warn("In a future version of BitBake, setting the '{}' flag to something other than '1' "
"will result in the flag not being set. See YP bug #13808.".format(name))
task_deps[name][task] = deptask
getTask('mcdepends')
getTask('depends')
@@ -1004,8 +1025,6 @@ def tasksbetween(task_start, task_end, d):
def follow_chain(task, endtask, chain=None):
if not chain:
chain = []
if task in chain:
bb.fatal("Circular task dependencies as %s depends on itself via the chain %s" % (task, " -> ".join(chain)))
chain.append(task)
for othertask in tasks:
if othertask == task:

View File

@@ -19,16 +19,14 @@
import os
import logging
import pickle
from collections import defaultdict
from collections.abc import Mapping
from collections import defaultdict, Mapping
import bb.utils
from bb import PrefixLoggerAdapter
import re
import shutil
logger = logging.getLogger("BitBake.Cache")
__cache_version__ = "155"
__cache_version__ = "154"
def getCacheFile(path, filename, mc, data_hash):
mcspec = ''
@@ -55,12 +53,12 @@ class RecipeInfoCommon(object):
@classmethod
def pkgvar(cls, var, packages, metadata):
return dict((pkg, cls.depvar("%s:%s" % (var, pkg), metadata))
return dict((pkg, cls.depvar("%s_%s" % (var, pkg), metadata))
for pkg in packages)
@classmethod
def taskvar(cls, var, tasks, metadata):
return dict((task, cls.getvar("%s:task-%s" % (var, task), metadata))
return dict((task, cls.getvar("%s_task-%s" % (var, task), metadata))
for task in tasks)
@classmethod
@@ -105,7 +103,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.tasks = metadata.getVar('__BBTASKS', False)
self.basetaskhashes = metadata.getVar('__siggen_basehashes', False) or {}
self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)
self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}
@@ -216,7 +214,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
# Collect files we may need for possible world-dep
# calculations
if not bb.utils.to_boolean(self.not_world):
if not self.not_world:
cachedata.possible_world.append(fn)
#else:
# logger.debug2("EXCLUDE FROM WORLD: %s", fn)
@@ -238,113 +236,15 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootlogs[fn] = self.fakerootlogs
cachedata.extradepsfunc[fn] = self.extradepsfunc
class SiggenRecipeInfo(RecipeInfoCommon):
__slots__ = ()
classname = "SiggenRecipeInfo"
cachefile = "bb_cache_" + classname +".dat"
# we don't want to show this information in graph files so don't set cachefields
#cachefields = []
def __init__(self, filename, metadata):
self.siggen_gendeps = metadata.getVar("__siggen_gendeps", False)
self.siggen_varvals = metadata.getVar("__siggen_varvals", False)
self.siggen_taskdeps = metadata.getVar("__siggen_taskdeps", False)
@classmethod
def init_cacheData(cls, cachedata):
cachedata.siggen_taskdeps = {}
cachedata.siggen_gendeps = {}
cachedata.siggen_varvals = {}
def add_cacheData(self, cachedata, fn):
cachedata.siggen_gendeps[fn] = self.siggen_gendeps
cachedata.siggen_varvals[fn] = self.siggen_varvals
cachedata.siggen_taskdeps[fn] = self.siggen_taskdeps
# The siggen variable data is large and impacts:
# - bitbake's overall memory usage
# - the amount of data sent over IPC between parsing processes and the server
# - the size of the cache files on disk
# - the size of "sigdata" hash information files on disk
# The data consists of strings (some large) or frozenset lists of variables
# As such, we a) deduplicate the data here and b) pass references to the object on second
# access (e.g. over IPC or when saving it into a pickle).
store = {}
save_map = {}
save_count = 1
restore_map = {}
restore_count = {}
@classmethod
def reset(cls):
# Needs to be called before starting new streamed data in a given process
# (e.g. writing out the cache again)
cls.save_map = {}
cls.save_count = 1
cls.restore_map = {}
@classmethod
def _save(cls, deps):
ret = []
if not deps:
return deps
for dep in deps:
fs = deps[dep]
if fs is None:
ret.append((dep, None, None))
elif fs in cls.save_map:
ret.append((dep, None, cls.save_map[fs]))
else:
cls.save_map[fs] = cls.save_count
ret.append((dep, fs, cls.save_count))
cls.save_count = cls.save_count + 1
return ret
@classmethod
def _restore(cls, deps, pid):
ret = {}
if not deps:
return deps
if pid not in cls.restore_map:
cls.restore_map[pid] = {}
map = cls.restore_map[pid]
for dep, fs, mapnum in deps:
if fs is None and mapnum is None:
ret[dep] = None
elif fs is None:
ret[dep] = map[mapnum]
else:
try:
fs = cls.store[fs]
except KeyError:
cls.store[fs] = fs
map[mapnum] = fs
ret[dep] = fs
return ret
def __getstate__(self):
ret = {}
for key in ["siggen_gendeps", "siggen_taskdeps", "siggen_varvals"]:
ret[key] = self._save(self.__dict__[key])
ret['pid'] = os.getpid()
return ret
def __setstate__(self, state):
pid = state['pid']
for key in ["siggen_gendeps", "siggen_taskdeps", "siggen_varvals"]:
setattr(self, key, self._restore(state[key], pid))
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
mc = ""
if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
(_, mc, virtualfn) = virtualfn.split(':', 2)
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])
fn = virtualfn
cls = ""
@@ -367,7 +267,7 @@ def realfn2virtual(realfn, cls, mc):
def variant2virtual(realfn, variant):
"""
Convert a real filename + a variant to a virtual filename
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if variant == "":
return realfn
@@ -378,18 +278,96 @@ def variant2virtual(realfn, variant):
return "mc:" + elems[1] + ":" + realfn
return "virtual:" + variant + ":" + realfn
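# The naming convention these helpers translate, sketched with a hypothetical
# recipe path:
#   variant2virtual("/meta/recipes/foo.bb", "")          -> "/meta/recipes/foo.bb"
#   variant2virtual("/meta/recipes/foo.bb", "native")    -> "virtual:native:/meta/recipes/foo.bb"
#   variant2virtual("/meta/recipes/foo.bb", "mc:mach1")  -> "mc:mach1:/meta/recipes/foo.bb"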
#
# Cooker calls cacheValid on its recipe list, then either calls loadCached
# from its main thread or parses from separate processes to generate an
# up-to-date cache
#
class Cache(object):
def parse_recipe(bb_data, bbfile, appends, mc=''):
"""
Parse a recipe
"""
chdir_back = False
bb_data.setVar("__BBMULTICONFIG", mc)
# expand tmpdir to include this topdir
bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR') or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
bb.parse.cached_mtime_noerror(bbfile_loc)
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
class NoCache(object):
def __init__(self, databuilder):
self.databuilder = databuilder
self.data = databuilder.data
def loadDataFull(self, virtualfn, appends):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug("Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
def load_bbfile(self, bbfile, appends, virtonly = False, mc=None):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = virtualfn2realfn(bbfile)
bb_data = self.databuilder.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = parse_recipe(bb_data, bbfile, appends, mc)
return datastores
if mc is not None:
bb_data = self.databuilder.mcdata[mc].createCopy()
return parse_recipe(bb_data, bbfile, appends, mc)
bb_data = self.data.createCopy()
datastores = parse_recipe(bb_data, bbfile, appends)
for mc in self.databuilder.mcdata:
if not mc:
continue
bb_data = self.databuilder.mcdata[mc].createCopy()
newstores = parse_recipe(bb_data, bbfile, appends, mc)
for ns in newstores:
datastores["mc:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
class Cache(NoCache):
"""
BitBake Cache implementation
"""
def __init__(self, databuilder, mc, data_hash, caches_array):
self.databuilder = databuilder
self.data = databuilder.data
super().__init__(databuilder)
data = databuilder.data
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
@@ -397,7 +375,7 @@ class Cache(object):
self.mc = mc
self.logger = PrefixLoggerAdapter("Cache: %s: " % (mc if mc else "default"), logger)
self.caches_array = caches_array
self.cachedir = self.data.getVar("CACHE")
self.cachedir = data.getVar("CACHE")
self.clean = set()
self.checked = set()
self.depends_cache = {}
@@ -407,12 +385,20 @@ class Cache(object):
self.filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
if self.cachedir in [None, '']:
bb.fatal("Please ensure CACHE is set to the cache directory for BitBake to use")
self.has_cache = False
self.logger.info("Not using a cache. "
"Set CACHE = <directory> to enable.")
return
self.has_cache = True
def getCacheFile(self, cachefile):
return getCacheFile(self.cachedir, cachefile, self.mc, self.data_hash)
def prepare_cache(self, progress):
if not self.has_cache:
return 0
loaded = 0
self.cachefile = self.getCacheFile("bb_cache.dat")
@@ -451,6 +437,9 @@ class Cache(object):
return loaded
def cachesize(self):
if not self.has_cache:
return 0
cachesize = 0
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
@@ -512,11 +501,11 @@ class Cache(object):
return len(self.depends_cache)
def parse(self, filename, appends, layername):
def parse(self, filename, appends):
"""Parse the specified filename, returning the recipe information"""
self.logger.debug("Parsing %s", filename)
infos = []
datastores = self.databuilder.parseRecipeVariants(filename, appends, mc=self.mc, layername=layername)
datastores = self.load_bbfile(filename, appends, mc=self.mc)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
@@ -538,19 +527,43 @@ class Cache(object):
return infos
def loadCached(self, filename, appends):
def load(self, filename, appends):
"""Obtain the recipe information for the specified filename,
using cached values.
"""
using cached values if available, otherwise parsing.
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
Note that if it does parse to obtain the info, it will not
automatically add the information to the cache or to your
CacheData. Use the add or add_info method to do so after
running this, or use loadData instead."""
cached = self.cacheValid(filename, appends)
if cached:
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
return self.parse(filename, appends, configdata, self.caches_array)
return infos
return cached, infos
def loadData(self, fn, appends, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends)
for virtualfn, info_array in infos:
if info_array[0].skipped:
self.logger.debug("Skipping %s: %s", virtualfn, info_array[0].skipreason)
skipped += 1
else:
self.add_info(virtualfn, info_array, cacheData, not cached)
virtuals += 1
return cached, skipped, virtuals
def cacheValid(self, fn, appends):
"""
@@ -559,6 +572,10 @@ class Cache(object):
"""
if fn not in self.checked:
self.cacheValidUpdate(fn, appends)
# Is cache enabled?
if not self.has_cache:
return False
if fn in self.clean:
return True
return False
@@ -568,6 +585,10 @@ class Cache(object):
Is the cache valid for fn?
Make thorough (slower) checks including timestamps.
"""
# Is cache enabled?
if not self.has_cache:
return False
self.checked.add(fn)
# File isn't in depends_cache
@@ -618,7 +639,7 @@ class Cache(object):
for f in flist:
if not f:
continue
f, exist = f.rsplit(":", 1)
f, exist = f.split(":")
if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
self.logger.debug2("%s's file checksum list file %s changed",
fn, f)
@@ -674,6 +695,10 @@ class Cache(object):
Save the cache
Called from the parser when complete (or exiting)
"""
if not self.has_cache:
return
if self.cacheclean:
self.logger.debug2("Cache is clean, not saving.")
return
@@ -694,7 +719,6 @@ class Cache(object):
p.dump(info)
del self.depends_cache
SiggenRecipeInfo.reset()
@staticmethod
def mtime(cachefile):
@@ -717,11 +741,26 @@ class Cache(object):
if watcher:
watcher(info_array[0].file_depends)
if not self.has_cache:
return
if (info_array[0].skipped or 'SRCREVINACTION' not in info_array[0].pv) and not info_array[0].nocache:
if parsed:
self.cacheclean = False
self.depends_cache[filename] = info_array
def add(self, file_name, data, cacheData, parsed=None):
"""
Save data we need into the cache
"""
realfn = virtualfn2realfn(file_name)[0]
info_array = []
for cache_class in self.caches_array:
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
class MulticonfigCache(Mapping):
def __init__(self, databuilder, data_hash, caches_array):
def progress(p):
@@ -758,7 +797,6 @@ class MulticonfigCache(Mapping):
loaded = 0
for c in self.__caches.values():
SiggenRecipeInfo.reset()
loaded += c.prepare_cache(progress)
previous_progress = current_progress
@@ -836,10 +874,11 @@ class MultiProcessCache(object):
self.cachedata = self.create_cachedata()
self.cachedata_extras = self.create_cachedata()
def init_cache(self, cachedir, cache_file_name=None):
if not cachedir:
def init_cache(self, d, cache_file_name=None):
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
@@ -870,10 +909,6 @@ class MultiProcessCache(object):
if not self.cachefile:
return
have_data = any(self.cachedata_extras)
if not have_data:
return
glf = bb.utils.lockfile(self.cachefile + ".lock", shared=True)
i = os.getpid()
@@ -908,8 +943,6 @@ class MultiProcessCache(object):
data = self.cachedata
have_data = False
for f in [y for y in os.listdir(os.path.dirname(self.cachefile)) if y.startswith(os.path.basename(self.cachefile) + '-')]:
f = os.path.join(os.path.dirname(self.cachefile), f)
try:
@@ -924,14 +957,12 @@ class MultiProcessCache(object):
os.unlink(f)
continue
have_data = True
self.merge_data(extradata, data)
os.unlink(f)
if have_data:
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(glf)
@@ -987,11 +1018,3 @@ class SimpleCache(object):
p.dump([data, self.cacheversion])
bb.utils.unlockfile(glf)
def copyfile(self, target):
if not self.cachefile:
return
glf = bb.utils.lockfile(self.cachefile + ".lock")
shutil.copy(self.cachefile, target)
bb.utils.unlockfile(glf)


@@ -11,13 +11,10 @@ import os
import stat
import bb.utils
import logging
import re
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
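The lookbehind split is worth illustrating: it only breaks the list on whitespace that follows a ":True" or ":False" marker, so file names containing spaces survive intact. A minimal sketch with invented paths:

import re
filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
print(filelist_regex.split("/tmp/file one.txt:True /tmp/two.txt:False"))
# -> ['/tmp/file one.txt:True', '/tmp/two.txt:False']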
# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
@@ -53,7 +50,6 @@ class FileChecksumCache(MultiProcessCache):
MultiProcessCache.__init__(self)
def get_checksum(self, f):
f = os.path.normpath(f)
entry = self.cachedata[0].get(f)
cmtime = self.mtime_cache.cached_mtime(f)
if entry:
@@ -88,36 +84,22 @@ class FileChecksumCache(MultiProcessCache):
return None
return checksum
#
# Changing the format of file-checksums is problematic as both OE and Bitbake have
# knowledge of them. We need to encode a new piece of data, the portion of the path
# we care about from a checksum perspective. This means that files that change subdirectory
# are tracked by the task hashes. To do this, we do something horrible and put a "/./" into
# the path. The filesystem handles it but it gives us a marker to know which subsection
# of the path to cache.
#
def checksum_dir(pth):
# Handle directories recursively
if pth == "/":
bb.fatal("Refusing to checksum /")
pth = pth.rstrip("/")
dirchecksums = []
for root, dirs, files in os.walk(pth, topdown=True):
[dirs.remove(d) for d in list(dirs) if d in localdirsexclude]
for name in files:
fullpth = os.path.join(root, name).replace(pth, os.path.join(pth, "."))
fullpth = os.path.join(root, name)
checksum = checksum_file(fullpth)
if checksum:
dirchecksums.append((fullpth, checksum))
return dirchecksums
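One side of this hunk inserts the "/./" marker described in the comment block above. A minimal sketch (with a made-up path) of how a consumer can split such a path back into the checksummed prefix and the tracked sub-path:

import os.path
path = "/work/recipe/files/./subdir/patch.diff"
prefix, _, tracked = path.partition("/./")
# prefix == "/work/recipe/files", tracked == "subdir/patch.diff"
# the filesystem treats "/./" as a no-op, so the file remains readable:
print(os.path.normpath(path))  # /work/recipe/files/subdir/patch.diff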
checksums = []
for pth in filelist_regex.split(filelist):
if not pth:
continue
pth = pth.strip()
if not pth:
continue
for pth in filelist.split():
exist = pth.split(":")[1]
if exist == "False":
continue


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -27,7 +25,6 @@ import ast
import sys
import codegen
import logging
import inspect
import bb.pysh as pysh
import bb.utils, bb.data
import hashlib
@@ -59,45 +56,10 @@ def check_indent(codestr):
return codestr
modulecode_deps = {}
def add_module_functions(fn, functions, namespace):
import os
fstat = os.stat(fn)
fixedhash = fn + ":" + str(fstat.st_size) + ":" + str(fstat.st_mtime)
for f in functions:
name = "%s.%s" % (namespace, f)
parser = PythonParser(name, logger)
try:
parser.parse_python(None, filename=fn, lineno=1, fixedhash=fixedhash+f)
#bb.warn("Cached %s" % f)
except KeyError:
targetfn = inspect.getsourcefile(functions[f])
if fn != targetfn:
# Skip references to other modules outside this file
#bb.warn("Skipping %s" % name)
continue
lines, lineno = inspect.getsourcelines(functions[f])
src = "".join(lines)
parser.parse_python(src, filename=fn, lineno=lineno, fixedhash=fixedhash+f)
#bb.warn("Not cached %s" % f)
execs = parser.execs.copy()
# Expand internal module exec references
for e in parser.execs:
if e in functions:
execs.remove(e)
execs.add(namespace + "." + e)
modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy(), parser.extra]
#bb.warn("%s: %s\nRefs:%s Execs: %s %s %s" % (name, fn, parser.references, parser.execs, parser.var_execs, parser.contains))
def update_module_dependencies(d):
for mod in modulecode_deps:
excludes = set((d.getVarFlag(mod, "vardepsexclude") or "").split())
if excludes:
modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3], modulecode_deps[mod][4]]
# A custom getstate/setstate using tuples is actually worth 15% cachesize by
# avoiding duplication of the attribute names!
class SetCache(object):
def __init__(self):
self.setcache = {}
@@ -117,22 +79,21 @@ class SetCache(object):
codecache = SetCache()
class pythonCacheLine(object):
def __init__(self, refs, execs, contains, extra):
def __init__(self, refs, execs, contains):
self.refs = codecache.internSet(refs)
self.execs = codecache.internSet(execs)
self.contains = {}
for c in contains:
self.contains[c] = codecache.internSet(contains[c])
self.extra = extra
def __getstate__(self):
return (self.refs, self.execs, self.contains, self.extra)
return (self.refs, self.execs, self.contains)
def __setstate__(self, state):
(refs, execs, contains, extra) = state
self.__init__(refs, execs, contains, extra)
(refs, execs, contains) = state
self.__init__(refs, execs, contains)
def __hash__(self):
l = (hash(self.refs), hash(self.execs), hash(self.extra))
l = (hash(self.refs), hash(self.execs))
for c in sorted(self.contains.keys()):
l = l + (c, hash(self.contains[c]))
return hash(l)
@@ -161,7 +122,7 @@ class CodeParserCache(MultiProcessCache):
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
CACHE_VERSION = 12
CACHE_VERSION = 11
def __init__(self):
MultiProcessCache.__init__(self)
@@ -175,8 +136,8 @@ class CodeParserCache(MultiProcessCache):
self.pythoncachelines = {}
self.shellcachelines = {}
def newPythonCacheLine(self, refs, execs, contains, extra):
cacheline = pythonCacheLine(refs, execs, contains, extra)
def newPythonCacheLine(self, refs, execs, contains):
cacheline = pythonCacheLine(refs, execs, contains)
h = hash(cacheline)
if h in self.pythoncachelines:
return self.pythoncachelines[h]
@@ -191,12 +152,12 @@ class CodeParserCache(MultiProcessCache):
self.shellcachelines[h] = cacheline
return cacheline
def init_cache(self, cachedir):
def init_cache(self, d):
# Check if we already have the caches
if self.pythoncache:
return
MultiProcessCache.init_cache(self, cachedir)
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
self.pythoncache = self.cachedata[0]
@@ -208,8 +169,8 @@ class CodeParserCache(MultiProcessCache):
codeparsercache = CodeParserCache()
def parser_cache_init(cachedir):
codeparsercache.init_cache(cachedir)
def parser_cache_init(d):
codeparsercache.init_cache(d)
def parser_cache_save():
codeparsercache.save_extras()
@@ -234,10 +195,6 @@ class BufferedLogger(Logger):
self.target.handle(record)
self.buffer = []
class DummyLogger():
def flush(self):
return
class PythonParser():
getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
@@ -262,19 +219,19 @@ class PythonParser():
def visit_Call(self, node):
name = self.called_node_name(node.func)
if name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
if isinstance(node.args[0], ast.Constant) and isinstance(node.args[0].value, str):
varname = node.args[0].value
if name in self.containsfuncs and isinstance(node.args[1], ast.Constant):
if isinstance(node.args[0], ast.Str):
varname = node.args[0].s
if name in self.containsfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].add(node.args[1].value)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Constant):
self.contains[varname].add(node.args[1].s)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].update(node.args[1].value.split())
self.contains[varname].update(node.args[1].s.split())
elif name.endswith(self.getvarflags):
if isinstance(node.args[1], ast.Constant):
self.references.add('%s[%s]' % (varname, node.args[1].value))
if isinstance(node.args[1], ast.Str):
self.references.add('%s[%s]' % (varname, node.args[1].s))
else:
self.warn(node.func, node.args[1])
else:
@@ -282,8 +239,8 @@ class PythonParser():
else:
self.warn(node.func, node.args[0])
elif name and name.endswith(".expand"):
if isinstance(node.args[0], ast.Constant):
value = node.args[0].value
if isinstance(node.args[0], ast.Str):
value = node.args[0].s
d = bb.data.init()
parser = d.expandWithRefs(value, self.name)
self.references |= parser.references
@@ -293,8 +250,8 @@ class PythonParser():
self.contains[varname] = set()
self.contains[varname] |= parser.contains[varname]
elif name in self.execfuncs:
if isinstance(node.args[0], ast.Constant):
self.var_execs.add(node.args[0].value)
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and isinstance(node.func, (ast.Name, ast.Attribute)):
@@ -319,24 +276,16 @@ class PythonParser():
self.contains = {}
self.execs = set()
self.references = set()
self._log = log
# Defer init as expensive
self.log = DummyLogger()
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, log)
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
# For the python module code it is expensive to have the function text so it is
# uses a different fixedhash to cache against. We can take the hit on obtaining the
# text if it isn't in the cache.
def parse_python(self, node, lineno=0, filename="<string>", fixedhash=None):
if not fixedhash and (not node or not node.strip()):
def parse_python(self, node, lineno=0, filename="<string>"):
if not node or not node.strip():
return
if fixedhash:
h = fixedhash
else:
h = bbhash(str(node))
h = bbhash(str(node))
if h in codeparsercache.pythoncache:
self.references = set(codeparsercache.pythoncache[h].refs)
@@ -344,7 +293,6 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncache[h].contains:
self.contains[i] = set(codeparsercache.pythoncache[h].contains[i])
self.extra = codeparsercache.pythoncache[h].extra
return
if h in codeparsercache.pythoncacheextras:
@@ -353,15 +301,8 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncacheextras[h].contains:
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
self.extra = codeparsercache.pythoncacheextras[h].extra
return
if fixedhash and not node:
raise KeyError
# Need to parse so take the hit on the real log buffer
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, self._log)
# We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
node = "\n" * int(lineno) + node
code = compile(check_indent(str(node)), filename, "exec",
@@ -372,22 +313,15 @@ class PythonParser():
self.visit_Call(n)
self.execs.update(self.var_execs)
self.extra = None
if fixedhash:
self.extra = bbhash(str(node))
codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains, self.extra)
codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains)
class ShellParser():
def __init__(self, name, log):
self.funcdefs = set()
self.allexecs = set()
self.execs = set()
self._name = name
self._log = log
# Defer init as expensive
self.log = DummyLogger()
self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
self.unhandled_template = "unable to handle non-literal command '%s'"
self.unhandled_template = "while parsing %s, %s" % (name, self.unhandled_template)
@@ -406,9 +340,6 @@ class ShellParser():
self.execs = set(codeparsercache.shellcacheextras[h].execs)
return self.execs
# Need to parse so take the hit on the real log buffer
self.log = BufferedLogger('BitBake.Data.%s' % self._name, logging.DEBUG, self._log)
self._parse_shell(value)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)


@@ -20,7 +20,6 @@ Commands are queued in a CommandQueue
from collections import OrderedDict, defaultdict
import io
import bb.event
import bb.cooker
import bb.remotedata
@@ -51,32 +50,23 @@ class Command:
"""
A queue of asynchronous commands for bitbake
"""
def __init__(self, cooker, process_server):
def __init__(self, cooker):
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
self.remotedatastores = None
self.process_server = process_server
# Access with locking using process_server.{get/set/clear}_async_cmd()
# FIXME Add lock for this
self.currentAsyncCommand = None
def runCommand(self, commandline, process_server, ro_only=False):
def runCommand(self, commandline, ro_only = False):
command = commandline.pop(0)
# Ensure cooker is ready for commands
if command not in ["updateConfig", "setFeatures", "ping"]:
try:
self.cooker.init_configdata()
if not self.remotedatastores:
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
except (Exception, SystemExit) as exc:
import traceback
if isinstance(exc, bb.BBHandledException):
# We need to start returning real exceptions here. Until we do, we can't
# tell if an exception is an instance of bb.BBHandledException
return None, "bb.BBHandledException()\n" + traceback.format_exc()
return None, traceback.format_exc()
if command != "updateConfig" and command != "setFeatures":
self.cooker.init_configdata()
if not self.remotedatastores:
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
if hasattr(CommandsSync, command):
# Can run synchronous commands straight away
@@ -85,6 +75,7 @@ class Command:
if not hasattr(command_method, 'readonly') or not getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
self.cooker.process_inotify_updates()
if getattr(command_method, 'needconfig', True):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
@@ -99,23 +90,24 @@ class Command:
return None, traceback.format_exc()
else:
return result, None
if self.currentAsyncCommand is not None:
return None, "Busy (%s in progress)" % self.currentAsyncCommand[0]
if command not in CommandsAsync.__dict__:
return None, "No such command"
if not process_server.set_async_cmd((command, commandline)):
return None, "Busy (%s in progress)" % self.process_server.get_async_cmd()[0]
self.cooker.idleCallBackRegister(self.runAsyncCommand, process_server)
self.currentAsyncCommand = (command, commandline)
self.cooker.idleCallBackRegister(self.cooker.runCommands, self.cooker)
return True, None
def runAsyncCommand(self, _, process_server, halt):
def runAsyncCommand(self):
try:
self.cooker.process_inotify_updates()
if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
# updateCache will trigger a shutdown of the parser
# and then raise BBHandledException triggering an exit
self.cooker.updateCache()
return bb.server.process.idleFinish("Cooker in error state")
cmd = process_server.get_async_cmd()
if cmd is not None:
(command, options) = cmd
return False
if self.currentAsyncCommand is not None:
(command, options) = self.currentAsyncCommand
commandmethod = getattr(CommandsAsync, command)
needcache = getattr( commandmethod, "needcache" )
if needcache and self.cooker.state != bb.cooker.state.running:
@@ -125,21 +117,24 @@ class Command:
commandmethod(self.cmds_async, self, options)
return False
else:
return bb.server.process.idleFinish("Nothing to do, no async command?")
return False
except KeyboardInterrupt as exc:
return bb.server.process.idleFinish("Interrupted")
self.finishAsyncCommand("Interrupted")
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, str):
return bb.server.process.idleFinish(arg)
self.finishAsyncCommand(arg)
else:
return bb.server.process.idleFinish("Exited with %s" % arg)
self.finishAsyncCommand("Exited with %s" % arg)
return False
except Exception as exc:
import traceback
if isinstance(exc, bb.BBHandledException):
return bb.server.process.idleFinish("")
self.finishAsyncCommand("")
else:
return bb.server.process.idleFinish(traceback.format_exc())
self.finishAsyncCommand(traceback.format_exc())
return False
def finishAsyncCommand(self, msg=None, code=None):
if msg or msg == "":
@@ -148,8 +143,8 @@ class Command:
bb.event.fire(CommandExit(code), self.cooker.data)
else:
bb.event.fire(CommandCompleted(), self.cooker.data)
self.currentAsyncCommand = None
self.cooker.finishcommand()
self.process_server.clear_async_cmd()
def reset(self):
if self.remotedatastores:
@@ -162,14 +157,6 @@ class CommandsSync:
These must not influence any running synchronous command.
"""
def ping(self, command, params):
"""
Allow a UI to check the server is still alive
"""
return "Still alive!"
ping.needconfig = False
ping.readonly = True
def stateShutdown(self, command, params):
"""
Trigger cooker 'shutdown' mode
@@ -307,11 +294,6 @@ class CommandsSync:
return ret
getLayerPriorities.readonly = True
def revalidateCaches(self, command, params):
"""Called by UI clients when metadata may have changed"""
command.cooker.revalidateCaches()
parseConfiguration.needconfig = False
def getRecipes(self, command, params):
try:
mc = params[0]
@@ -518,17 +500,6 @@ class CommandsSync:
d = command.remotedatastores[dsindex].varhistory
return getattr(d, method)(*args, **kwargs)
def dataStoreConnectorVarHistCmdEmit(self, command, params):
dsindex = params[0]
var = params[1]
oval = params[2]
val = params[3]
d = command.remotedatastores[params[4]]
o = io.StringIO()
command.remotedatastores[dsindex].varhistory.emit(var, oval, val, o, d)
return o.getvalue()
def dataStoreConnectorIncHistCmd(self, command, params):
dsindex = params[0]
method = params[1]
@@ -550,8 +521,8 @@ class CommandsSync:
and return a datastore object representing the environment
for the recipe.
"""
virtualfn = params[0]
(fn, cls, mc) = bb.cache.virtualfn2realfn(virtualfn)
fn = params[0]
mc = bb.runqueue.mc_from_tid(fn)
appends = params[1]
appendlist = params[2]
if len(params) > 3:
@@ -566,7 +537,6 @@ class CommandsSync:
appendfiles = command.cooker.collections[mc].get_file_appends(fn)
else:
appendfiles = []
layername = command.cooker.collections[mc].calc_bbfile_priority(fn)[2]
# We are calling bb.cache locally here rather than on the server,
# but that's OK because it doesn't actually need anything from
# the server barring the global datastore (which we have a remote
@@ -574,10 +544,11 @@ class CommandsSync:
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = command.cooker.databuilder._parse_recipe(config_data, fn, appendfiles, mc, layername)[cls]
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles, mc)['']
else:
# Use the standard path
envdata = command.cooker.databuilder.parseRecipe(virtualfn, appendfiles, layername)
parser = bb.cache.NoCache(command.cooker.databuilder)
envdata = parser.loadDataFull(fn, appendfiles)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
@@ -676,16 +647,6 @@ class CommandsAsync:
command.finishAsyncCommand()
findFilesMatchingInDir.needcache = False
def testCookerCommandEvent(self, command, params):
"""
Dummy command used by OEQA selftest to test tinfoil without IO
"""
pattern = params[0]
command.cooker.testCookerCommandEvent(pattern)
command.finishAsyncCommand()
testCookerCommandEvent.needcache = False
def findConfigFilePath(self, command, params):
"""
Find the path of the requested configuration file
@@ -750,7 +711,7 @@ class CommandsAsync:
"""
event = params[0]
bb.event.fire(eval(event), command.cooker.data)
process_server.clear_async_cmd()
command.currentAsyncCommand = None
triggerEvent.needcache = False
def resetCooker(self, command, params):
@@ -777,14 +738,7 @@ class CommandsAsync:
(mc, pn) = bb.runqueue.split_mc(params[0])
taskname = params[1]
sigs = params[2]
bb.siggen.check_siggen_version(bb.siggen)
res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.databuilder.mcdata[mc])
bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.databuilder.mcdata[mc])
command.finishAsyncCommand()
findSigInfo.needcache = False
def getTaskSignatures(self, command, params):
res = command.cooker.getTaskSignatures(params[0], params[1])
bb.event.fire(bb.event.GetTaskSignatureResult(res), command.cooker.data)
command.finishAsyncCommand()
getTaskSignatures.needcache = True


@@ -1,196 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Helper library to implement streaming compression and decompression using an
# external process
#
# This library should be used directly by end users; a wrapper library for the
# specific compression tool should be created
import builtins
import io
import os
import subprocess
def open_wrap(
cls, filename, mode="rb", *, encoding=None, errors=None, newline=None, **kwargs
):
"""
Open a compressed file in binary or text mode.
Users should not call this directly. A specific compression library can use
this helper to provide its own "open" command
The filename argument can be an actual filename (a str or bytes object), or
an existing file object to read from or write to.
The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for
binary mode, or "rt", "wt", "xt" or "at" for text mode. The default mode is
"rb".
For binary mode, this function is equivalent to the cls constructor:
cls(filename, mode). In this case, the encoding, errors and newline
arguments must not be provided.
For text mode, a cls object is created, and wrapped in an
io.TextIOWrapper instance with the specified encoding, error handling
behavior, and line ending(s).
"""
if "t" in mode:
if "b" in mode:
raise ValueError("Invalid mode: %r" % (mode,))
else:
if encoding is not None:
raise ValueError("Argument 'encoding' not supported in binary mode")
if errors is not None:
raise ValueError("Argument 'errors' not supported in binary mode")
if newline is not None:
raise ValueError("Argument 'newline' not supported in binary mode")
file_mode = mode.replace("t", "")
if isinstance(filename, (str, bytes, os.PathLike, int)):
binary_file = cls(filename, file_mode, **kwargs)
elif hasattr(filename, "read") or hasattr(filename, "write"):
binary_file = cls(None, file_mode, fileobj=filename, **kwargs)
else:
raise TypeError("filename must be a str or bytes object, or a file")
if "t" in mode:
return io.TextIOWrapper(
binary_file, encoding, errors, newline, write_through=True
)
else:
return binary_file
class CompressionError(OSError):
pass
class PipeFile(io.RawIOBase):
"""
Class that implements generically piping to/from a compression program
Derived classes should add the function get_compress() and get_decompress()
that return the required commands. Input will be piped into stdin and the
(de)compressed output should be written to stdout, e.g.:
class FooFile(PipeCompressionFile):
def get_decompress(self):
return ["fooc", "--decompress", "--stdout"]
def get_compress(self):
return ["fooc", "--compress", "--stdout"]
"""
READ = 0
WRITE = 1
def __init__(self, filename=None, mode="rb", *, stderr=None, fileobj=None):
if "t" in mode or "U" in mode:
raise ValueError("Invalid mode: {!r}".format(mode))
if not "b" in mode:
mode += "b"
if mode.startswith("r"):
self.mode = self.READ
elif mode.startswith("w"):
self.mode = self.WRITE
else:
raise ValueError("Invalid mode %r" % mode)
if fileobj is not None:
self.fileobj = fileobj
else:
self.fileobj = builtins.open(filename, mode or "rb")
if self.mode == self.READ:
self.p = subprocess.Popen(
self.get_decompress(),
stdin=self.fileobj,
stdout=subprocess.PIPE,
stderr=stderr,
close_fds=True,
)
self.pipe = self.p.stdout
else:
self.p = subprocess.Popen(
self.get_compress(),
stdin=subprocess.PIPE,
stdout=self.fileobj,
stderr=stderr,
close_fds=True,
)
self.pipe = self.p.stdin
self.__closed = False
def _check_process(self):
if self.p is None:
return
returncode = self.p.wait()
if returncode:
raise CompressionError("Process died with %d" % returncode)
self.p = None
def close(self):
if self.closed:
return
self.pipe.close()
if self.p is not None:
self._check_process()
self.fileobj.close()
self.__closed = True
@property
def closed(self):
return self.__closed
def fileno(self):
return self.pipe.fileno()
def flush(self):
self.pipe.flush()
def isatty(self):
return self.pipe.isatty()
def readable(self):
return self.mode == self.READ
def writable(self):
return self.mode == self.WRITE
def readinto(self, b):
if self.mode != self.READ:
import errno
raise OSError(
errno.EBADF, "read() on write-only %s object" % self.__class__.__name__
)
size = self.pipe.readinto(b)
if size == 0:
self._check_process()
return size
def write(self, data):
if self.mode != self.WRITE:
import errno
raise OSError(
errno.EBADF, "write() on read-only %s object" % self.__class__.__name__
)
data = self.pipe.write(data)
if not data:
self._check_process()
return data
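Following the FooFile pattern from the docstring above, a runnable sketch of a subclass is straightforward; this one assumes the ubiquitous gzip tool rather than any compressor BitBake actually wraps:

import bb.compress._pipecompress

class GzipPipeFile(bb.compress._pipecompress.PipeFile):
    # hypothetical subclass piping through gzip
    def get_compress(self):
        return ["gzip", "-c"]
    def get_decompress(self):
        return ["gzip", "-d", "-c"]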


@@ -1,19 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import bb.compress._pipecompress
def open(*args, **kwargs):
return bb.compress._pipecompress.open_wrap(LZ4File, *args, **kwargs)
class LZ4File(bb.compress._pipecompress.PipeFile):
def get_compress(self):
return ["lz4c", "-z", "-c"]
def get_decompress(self):
return ["lz4c", "-d", "-c"]


@@ -1,30 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import bb.compress._pipecompress
import shutil
def open(*args, **kwargs):
return bb.compress._pipecompress.open_wrap(ZstdFile, *args, **kwargs)
class ZstdFile(bb.compress._pipecompress.PipeFile):
def __init__(self, *args, num_threads=1, compresslevel=3, **kwargs):
self.num_threads = num_threads
self.compresslevel = compresslevel
super().__init__(*args, **kwargs)
def _get_zstd(self):
if self.num_threads == 1 or not shutil.which("pzstd"):
return ["zstd"]
return ["pzstd", "-p", "%d" % self.num_threads]
def get_compress(self):
return self._get_zstd() + ["-c", "-%d" % self.compresslevel]
def get_decompress(self):
return self._get_zstd() + ["-d", "-c"]
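A hypothetical round trip through the wrapper above (assuming a zstd binary on PATH and bb.compress.zstd importable):

import bb.compress.zstd

with bb.compress.zstd.open("/tmp/demo.zst", "wt", encoding="utf-8") as f:
    f.write("hello\n")

with bb.compress.zstd.open("/tmp/demo.zst", "rt", encoding="utf-8") as f:
    print(f.read())  # hello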

File diff suppressed because it is too large


@@ -57,7 +57,7 @@ class ConfigParameters(object):
def updateToServer(self, server, environment):
options = {}
for o in ["halt", "force", "invalidate_stamp",
for o in ["abort", "force", "invalidate_stamp",
"dry_run", "dump_signatures",
"extra_assume_provided", "profile",
"prefile", "postfile", "server_timeout",
@@ -86,7 +86,7 @@ class ConfigParameters(object):
action['msg'] = "Only one target can be used with the --environment option."
elif self.options.buildfile and len(self.options.pkgs_to_build) > 0:
action['msg'] = "No target should be used with the --environment and --buildfile options."
elif self.options.pkgs_to_build:
elif len(self.options.pkgs_to_build) > 0:
action['action'] = ["showEnvironmentTarget", self.options.pkgs_to_build]
else:
action['action'] = ["showEnvironment", self.options.buildfile]
@@ -124,7 +124,7 @@ class CookerConfiguration(object):
self.prefile = []
self.postfile = []
self.cmd = None
self.halt = True
self.abort = True
self.force = False
self.profile = False
self.nosetscene = False
@@ -160,7 +160,12 @@ def catch_parse_error(func):
def wrapped(fn, *args):
try:
return func(fn, *args)
except Exception as exc:
except IOError as exc:
import traceback
parselog.critical(traceback.format_exc())
parselog.critical("Unable to parse %s: %s" % (fn, exc))
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as exc:
import traceback
bbdir = os.path.dirname(__file__) + os.sep
@@ -172,11 +177,14 @@ def catch_parse_error(func):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
raise bb.BBHandledException()
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
raise bb.BBHandledException()
return wrapped
@catch_parse_error
def parse_config_file(fn, data, include=True):
return bb.parse.handle(fn, data, include, baseconfig=True)
return bb.parse.handle(fn, data, include)
@catch_parse_error
def _inherit(bbclass, data):
@@ -202,7 +210,7 @@ def findConfigFile(configfile, data):
#
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# up to /. If that fails, bitbake would fall back to cwd.
# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
#
def findTopdir():
@@ -215,8 +223,11 @@ def findTopdir():
layerconf = findConfigFile("bblayers.conf", d)
if layerconf:
return os.path.dirname(os.path.dirname(layerconf))
return os.path.abspath(os.getcwd())
if bbpath:
bitbakeconf = bb.utils.which(bbpath, "conf/bitbake.conf")
if bitbakeconf:
return os.path.dirname(os.path.dirname(bitbakeconf))
return None
class CookerDataBuilder(object):
@@ -239,14 +250,10 @@ class CookerDataBuilder(object):
self.savedenv = bb.data.init()
for k in cookercfg.env:
self.savedenv.setVar(k, cookercfg.env[k])
if k in bb.data_smart.bitbake_renamed_vars:
bb.error('Shell environment variable %s has been renamed to %s' % (k, bb.data_smart.bitbake_renamed_vars[k]))
bb.fatal("Exiting to allow enviroment variables to be corrected")
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
self.basedata.setVar("BB_ORIGENV", self.savedenv)
self.basedata.setVar("__bbclasstype", "global")
if worker:
self.basedata.setVar("BB_WORKERCONTEXT", "1")
@@ -254,15 +261,15 @@ class CookerDataBuilder(object):
self.data = self.basedata
self.mcdata = {}
def parseBaseConfiguration(self, worker=False):
mcdata = {}
def parseBaseConfiguration(self):
data_hash = hashlib.sha256()
try:
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
if self.data.getVar("BB_WORKERCONTEXT", False) is None and not worker:
if self.data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(self.data)
bb.parse.init_parser(self.data)
bb.codeparser.parser_cache_init(self.data)
bb.event.fire(bb.event.ConfigParsed(), self.data)
@@ -280,62 +287,38 @@ class CookerDataBuilder(object):
bb.parse.init_parser(self.data)
data_hash.update(self.data.get_hash().encode('utf-8'))
mcdata[''] = self.data
self.mcdata[''] = self.data
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
if config[0].isdigit():
bb.fatal("Multiconfig name '%s' is invalid as multiconfigs cannot start with a digit" % config)
parsed_mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), parsed_mcdata)
mcdata[config] = parsed_mcdata
data_hash.update(parsed_mcdata.get_hash().encode('utf-8'))
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
data_hash.update(mcdata.get_hash().encode('utf-8'))
if multiconfig:
bb.event.fire(bb.event.MultiConfigParsed(mcdata), self.data)
bb.event.fire(bb.event.MultiConfigParsed(self.mcdata), self.data)
self.data_hash = data_hash.hexdigest()
except (SyntaxError, bb.BBHandledException):
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as e:
logger.error(str(e))
raise bb.BBHandledException()
bb.codeparser.update_module_dependencies(self.data)
# Handle obsolete variable names
d = self.data
renamedvars = d.getVarFlags('BB_RENAMED_VARIABLES') or {}
renamedvars.update(bb.data_smart.bitbake_renamed_vars)
issues = False
for v in renamedvars:
if d.getVar(v) != None or d.hasOverrides(v):
issues = True
loginfo = {}
history = d.varhistory.get_variable_refs(v)
for h in history:
for line in history[h]:
loginfo = {'file' : h, 'line' : line}
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
if not history:
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
if issues:
except Exception:
logger.exception("Error parsing configuration files")
raise bb.BBHandledException()
for mc in mcdata:
mcdata[mc].renameVar("__depends", "__base_depends")
mcdata[mc].setVar("__bbclasstype", "recipe")
# Create a copy so we can reset at a later date when UIs disconnect
self.mcorigdata = mcdata
for mc in mcdata:
self.mcdata[mc] = bb.data.createCopy(mcdata[mc])
self.data = self.mcdata['']
self.origdata = self.data
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def reset(self):
# We may not have run parseBaseConfiguration() yet
if not hasattr(self, 'mcorigdata'):
if not hasattr(self, 'origdata'):
return
for mc in self.mcorigdata:
self.mcdata[mc] = bb.data.createCopy(self.mcorigdata[mc])
self.data = self.mcdata['']
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def _findLayerConf(self, data):
return findConfigFile("bblayers.conf", data)
@@ -350,23 +333,15 @@ class CookerDataBuilder(object):
layerconf = self._findLayerConf(data)
if layerconf:
parselog.debug2("Found bblayers.conf (%s)", layerconf)
parselog.debug(2, "Found bblayers.conf (%s)", layerconf)
# By definition bblayers.conf is in conf/ of TOPDIR.
# We may have been called with cwd somewhere else so reset TOPDIR
data.setVar("TOPDIR", os.path.dirname(os.path.dirname(layerconf)))
data = parse_config_file(layerconf, data)
if not data.getVar("BB_CACHEDIR"):
data.setVar("BB_CACHEDIR", "${TOPDIR}/cache")
bb.codeparser.parser_cache_init(data.getVar("BB_CACHEDIR"))
layers = (data.getVar('BBLAYERS') or "").split()
broken_layers = []
if not layers:
bb.fatal("The bblayers.conf file doesn't contain any BBLAYERS definition")
data = bb.data.createCopy(data)
approved = bb.utils.approved_variables()
@@ -382,10 +357,8 @@ class CookerDataBuilder(object):
parselog.critical("Please check BBLAYERS in %s" % (layerconf))
raise bb.BBHandledException()
layerseries = None
compat_entries = {}
for layer in layers:
parselog.debug2("Adding layer %s", layer)
parselog.debug(2, "Adding layer %s", layer)
if 'HOME' in approved and '~' in layer:
layer = os.path.expanduser(layer)
if layer.endswith('/'):
@@ -396,27 +369,8 @@ class CookerDataBuilder(object):
data.expandVarref('LAYERDIR')
data.expandVarref('LAYERDIR_RE')
# Sadly we can't have nice things.
# Some layers think they're going to be 'clever' and copy the values from
# another layer, e.g. using ${LAYERSERIES_COMPAT_core}. The whole point of
# this mechanism is to make it clear which releases a layer supports and
# show when a layer master branch is bitrotting and is unmaintained.
# We therefore avoid people doing this here.
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
for c in collections:
compat_entry = data.getVar("LAYERSERIES_COMPAT_%s" % c)
if compat_entry:
compat_entries[c] = set(compat_entry.split())
data.delVar("LAYERSERIES_COMPAT_%s" % c)
if not layerseries:
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
if layerseries:
data.delVar("LAYERSERIES_CORENAMES")
data.delVar('LAYERDIR_RE')
data.delVar('LAYERDIR')
for c in compat_entries:
data.setVar("LAYERSERIES_COMPAT_%s" % c, " ".join(sorted(compat_entries[c])))
bbfiles_dynamic = (data.getVar('BBFILES_DYNAMIC') or "").split()
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
@@ -435,38 +389,26 @@ class CookerDataBuilder(object):
if invalid:
bb.fatal("BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
collections_tmp = collections[:]
for c in collections:
collections_tmp.remove(c)
if c in collections_tmp:
bb.fatal("Found duplicated BBFILE_COLLECTIONS '%s', check bblayers.conf or layer.conf to fix it." % c)
compat = set()
if c in compat_entries:
compat = compat_entries[c]
if compat and not layerseries:
bb.fatal("No core layer found to work with layer '%s'. Missing entry in bblayers.conf?" % c)
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
if compat and not (compat & layerseries):
bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
% (c, " ".join(layerseries), " ".join(compat)))
elif not compat and not data.getVar("BB_WORKERCONTEXT"):
bb.warn("Layer %s should set LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names it is compatible with." % (c, c))
data.setVar("LAYERSERIES_CORENAMES", " ".join(sorted(layerseries)))
if not data.getVar("BBPATH"):
msg = "The BBPATH variable is not set"
if not layerconf:
msg += (" and bitbake did not find a conf/bblayers.conf file in"
" the expected location.\nMaybe you accidentally"
" invoked bitbake from the wrong directory?")
bb.fatal(msg)
if not data.getVar("TOPDIR"):
data.setVar("TOPDIR", os.path.abspath(os.getcwd()))
if not data.getVar("BB_CACHEDIR"):
data.setVar("BB_CACHEDIR", "${TOPDIR}/cache")
bb.codeparser.parser_cache_init(data.getVar("BB_CACHEDIR"))
raise SystemExit(msg)
data = parse_config_file(os.path.join("conf", "bitbake.conf"), data)
@@ -479,7 +421,7 @@ class CookerDataBuilder(object):
for bbclass in bbclasses:
data = _inherit(bbclass, data)
# Normally we only register event handlers at the end of parsing .bb files
# Nomally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
@@ -493,54 +435,3 @@ class CookerDataBuilder(object):
return data
@staticmethod
def _parse_recipe(bb_data, bbfile, appends, mc, layername):
bb_data.setVar("__BBMULTICONFIG", mc)
bb_data.setVar("FILE_LAYERNAME", layername)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
bb.parse.cached_mtime_noerror(bbfile_loc)
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
return bb.parse.handle(bbfile, bb_data)
def parseRecipeVariants(self, bbfile, appends, virtonly=False, mc=None, layername=None):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = bb.cache.virtualfn2realfn(bbfile)
bb_data = self.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
return self._parse_recipe(bb_data, bbfile, appends, mc, layername)
if mc is not None:
bb_data = self.mcdata[mc].createCopy()
return self._parse_recipe(bb_data, bbfile, appends, mc, layername)
bb_data = self.data.createCopy()
datastores = self._parse_recipe(bb_data, bbfile, appends, '', layername)
for mc in self.mcdata:
if not mc:
continue
bb_data = self.mcdata[mc].createCopy()
newstores = self._parse_recipe(bb_data, bbfile, appends, mc, layername)
for ns in newstores:
datastores["mc:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
def parseRecipe(self, virtualfn, appends, layername):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug("Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = bb.cache.virtualfn2realfn(virtualfn)
datastores = self.parseRecipeVariants(virtualfn, appends, virtonly=True, layername=layername)
return datastores[virtual]


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -76,26 +74,26 @@ def createDaemon(function, logfile):
with open('/dev/null', 'r') as si:
os.dup2(si.fileno(), sys.stdin.fileno())
with open(logfile, 'a+') as so:
try:
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
except io.UnsupportedOperation:
sys.stdout = so
try:
so = open(logfile, 'a+')
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
except io.UnsupportedOperation:
sys.stdout = open(logfile, 'a+')
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two separate buffers
sys.stderr = sys.stdout
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two separate buffers
sys.stderr = sys.stdout
try:
function()
except Exception as e:
traceback.print_exc()
finally:
bb.event.print_ui_queue()
# os._exit() doesn't flush open files like os.exit() does. Manually flush
# stdout and stderr so that any logging output will be seen, particularly
# exception tracebacks.
sys.stdout.flush()
sys.stderr.flush()
os._exit(0)
try:
function()
except Exception as e:
traceback.print_exc()
finally:
bb.event.print_ui_queue()
# os._exit() doesn't flush open files like os.exit() does. Manually flush
# stdout and stderr so that any logging output will be seen, particularly
# exception tracebacks.
sys.stdout.flush()
sys.stderr.flush()
os._exit(0)


@@ -4,16 +4,14 @@ BitBake 'Data' implementations
Functions for interacting with the data structure used by the
BitBake build tools.
expandKeys and datastore iteration are the most expensive
operations. Updating overrides is now "on the fly" but still based
on the idea of the cookie monster introduced by zecke:
"At night the cookie monster came by and
The expandKeys and update_data are the most expensive
operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
applying the skills from the not yet passed 'Entwurf und
Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster datastore operations."
to have faster update_data and expandKeys operations.
This is a trade-off between speed and memory again but
the speed is more critical here.
@@ -28,6 +26,11 @@ the speed is more critical here.
import sys, os, re
import hashlib
if sys.argv[0][-5:] == "pydoc":
path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
path = os.path.dirname(os.path.dirname(sys.argv[0]))
sys.path.insert(0, path)
from itertools import groupby
from bb import data_smart
@@ -67,6 +70,10 @@ def keys(d):
"""Return a list of keys in d"""
return d.keys()
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def expand(s, d, varname = None):
"""Variable expansion using the data store"""
return d.expand(s, varname)
@@ -114,8 +121,8 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if d.getVarFlag(var, 'python', False) and func:
return False
export = bb.utils.to_boolean(d.getVarFlag(var, "export"))
unexport = bb.utils.to_boolean(d.getVarFlag(var, "unexport"))
export = d.getVarFlag(var, "export", False)
unexport = d.getVarFlag(var, "unexport", False)
if not all and not export and not unexport and not func:
return False
@@ -188,8 +195,8 @@ def emit_env(o=sys.__stdout__, d = init(), all=False):
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
bb.utils.to_boolean(d.getVarFlag(key, 'export')) and
not bb.utils.to_boolean(d.getVarFlag(key, 'unexport')))
d.getVarFlag(key, 'export', False) and
not d.getVarFlag(key, 'unexport', False))
def exported_vars(d):
k = list(exported_keys(d))
@@ -219,7 +226,7 @@ def emit_func(func, o=sys.__stdout__, d = init()):
deps = newdeps
seen |= deps
newdeps = set()
for dep in sorted(deps):
for dep in deps:
if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep))
@@ -261,72 +268,65 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps -= seen
def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparsedata):
def handle_contains(value, contains, exclusions, d):
newvalue = []
if value:
newvalue.append(str(value))
for k in sorted(contains):
if k in exclusions or k in ignored_vars:
continue
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue.append("\n%s{%s} = Unset" % (k, item))
break
else:
newvalue.append("\n%s{%s} = Set" % (k, item))
return "".join(newvalue)
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
r2 = d.expandWithRefs(r, None)
value += "\n_remove of %s" % r
deps |= r2.references
deps = deps | (keys & r2.execs)
value = handle_contains(value, r2.contains, exclusions, d)
return value
def update_data(d):
"""Performs final steps upon the datastore, including application of overrides"""
d.finalize(parent = True)
def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps = set()
try:
if key in mod_funcs:
exclusions = set()
moddep = bb.codeparser.modulecode_deps[key]
value = handle_contains(moddep[4], moddep[3], exclusions, d)
return frozenset((moddep[0] | keys & moddep[1]) - ignored_vars), value
if key[-1] == ']':
vf = key[:-1].split('[')
if vf[1] == "vardepvalueexclude":
return deps, ""
value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
deps -= ignored_vars
return frozenset(deps), value
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
vardeps = varflags.get("vardeps")
exclusions = varflags.get("vardepsexclude", "").split()
def handle_contains(value, contains, d):
newvalue = ""
for k in sorted(contains):
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue += "\n%s{%s} = Unset" % (k, item)
break
else:
newvalue += "\n%s{%s} = Set" % (k, item)
if not newvalue:
return value
if not value:
return newvalue
return value + newvalue
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
r2 = d.expandWithRefs(r, None)
value += "\n_remove of %s" % r
deps |= r2.references
deps = deps | (keys & r2.execs)
return value
if "vardepvalue" in varflags:
value = varflags.get("vardepvalue")
elif varflags.get("func"):
if varflags.get("python"):
value = codeparsedata.getVarFlag(key, "_content", False)
value = d.getVarFlag(key, "_content", False)
parser = bb.codeparser.PythonParser(key, logger)
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
deps = deps | parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, exclusions, d)
value = handle_contains(value, parser.contains, d)
else:
value, parsedvar = codeparsedata.getVarFlag(key, "_content", False, retparser=True)
value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, exclusions, d)
value = handle_contains(value, parsedvar.contains, d)
if hasattr(parsedvar, "removes"):
value = handle_remove(value, deps, parsedvar.removes, d)
if vardeps is None:
@@ -341,7 +341,7 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
value, parser = d.getVarFlag(key, "_content", False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, exclusions, d)
value = handle_contains(value, parser.contains, d)
if hasattr(parser, "removes"):
value = handle_remove(value, deps, parser.removes, d)
@@ -361,50 +361,43 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
deps |= set(varfdeps)
deps |= set((vardeps or "").split())
deps -= set(exclusions)
deps -= ignored_vars
deps -= set(varflags.get("vardepsexclude", "").split())
except bb.parse.SkipRecipe:
raise
except Exception as e:
bb.warn("Exception during build_dependencies for %s" % key)
raise
return frozenset(deps), value
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
def generate_dependencies(d, ignored_vars):
def generate_dependencies(d, whitelist):
mod_funcs = set(bb.codeparser.modulecode_deps.keys())
keys = set(key for key in d if not key.startswith("__")) | mod_funcs
shelldeps = set(key for key in d.getVar("__exportlist", False) if bb.utils.to_boolean(d.getVarFlag(key, "export")) and not bb.utils.to_boolean(d.getVarFlag(key, "unexport")))
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS')
codeparserd = d.createCopy()
for forced in (d.getVar('BB_HASH_CODEPARSER_VALS') or "").split():
key, value = forced.split("=", 1)
codeparserd.setVar(key, value)
deps = {}
values = {}
tasklist = d.getVar('__BBTASKS', False) or []
for task in tasklist:
deps[task], values[task] = build_dependencies(task, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparserd)
deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, d)
newdeps = deps[task]
seen = set()
while newdeps:
nextdeps = newdeps
nextdeps = newdeps - whitelist
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep not in deps:
deps[dep], values[dep] = build_dependencies(dep, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparserd)
deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, d)
newdeps |= deps[dep]
newdeps -= seen
#print "For %s: %s" % (task, str(deps[task]))
return tasklist, deps, values
def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
taskdeps = {}
basehash = {}
@@ -413,10 +406,9 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
if data is None:
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
data = []
else:
data = [data]
data = ''
gendeps[task] -= whitelist
newdeps = gendeps[task]
seen = set()
while newdeps:
@@ -424,24 +416,27 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep in whitelist:
continue
gendeps[dep] -= whitelist
newdeps |= gendeps[dep]
newdeps -= seen
alldeps = sorted(seen)
for dep in alldeps:
data.append(dep)
data = data + dep
var = lookupcache[dep]
if var is not None:
data.append(str(var))
data = data + str(var)
k = fn + ":" + task
basehash[k] = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
taskdeps[task] = frozenset(seen)
basehash[k] = hashlib.sha256(data.encode("utf-8")).hexdigest()
taskdeps[task] = alldeps
return taskdeps, basehash
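Stripped of the datastore plumbing, the base hash is a sha256 over the task's own value followed by each dependency name and its cached value, in sorted order. A toy illustration with invented variables:

import hashlib
data = "cc -o out in.c"  # the task's own value
for dep, value in (("CC", "gcc"), ("CFLAGS", "-O2")):  # sorted deps
    data = data + dep
    if value is not None:
        data = data + str(value)
print(hashlib.sha256(data.encode("utf-8")).hexdigest())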
def inherits_class(klass, d):
val = d.getVar('__inherit_cache', False) or []
needle = '/%s.bbclass' % klass
needle = os.path.join('classes', '%s.bbclass' % klass)
for v in val:
if v.endswith(needle):
return True


@@ -16,11 +16,8 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import builtins
import copy
import re
import sys
from collections.abc import MutableMapping
import copy, re, sys, traceback
from collections import MutableMapping
import logging
import hashlib
import bb, bb.codeparser
@@ -29,25 +26,13 @@ from bb.COW import COWDictBase
logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = [":append", ":prepend", ":remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__expand_python_regexp__ = re.compile(r"\${@(?:{.*?}|.)+?}")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~]+?}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')
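The two sides of this hunk differ only in the operator separator: colons on the newer side, underscores on hardknott. A quick check of the colon form against a typical override assignment:

import re
setvar_regexp = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
m = setvar_regexp.match("SRC_URI:append:qemux86")
print(m.group("base"), m.group("keyword"), m.group("add"))
# -> SRC_URI :append qemux86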
bitbake_renamed_vars = {
"BB_ENV_WHITELIST": "BB_ENV_PASSTHROUGH",
"BB_ENV_EXTRAWHITE": "BB_ENV_PASSTHROUGH_ADDITIONS",
"BB_HASHBASE_WHITELIST": "BB_BASEHASH_IGNORE_VARS",
"BB_HASHCONFIG_WHITELIST": "BB_HASHCONFIG_IGNORE_VARS",
"BB_HASHTASK_WHITELIST": "BB_TASKHASH_IGNORE_TASKS",
"BB_SETSCENE_ENFORCE_WHITELIST": "BB_SETSCENE_ENFORCE_IGNORE_TASKS",
"MULTI_PROVIDER_WHITELIST": "BB_MULTI_PROVIDER_ALLOWED",
"BB_STAMP_WHITELIST": "is a deprecated variable and support has been removed",
"BB_STAMP_POLICY": "is a deprecated variable and support has been removed",
}
def infer_caller_details(loginfo, parent = False, varval = True):
"""Save the caller the trouble of specifying everything."""
# Save effort.
@@ -95,11 +80,10 @@ def infer_caller_details(loginfo, parent = False, varval = True):
loginfo['func'] = func
class VariableParse:
def __init__(self, varname, d, unexpanded_value = None, val = None):
def __init__(self, varname, d, val = None):
self.varname = varname
self.d = d
self.value = val
self.unexpanded_value = unexpanded_value
self.references = set()
self.execs = set()
@@ -123,11 +107,6 @@ class VariableParse:
else:
code = match.group()[3:-1]
# Do not run code that contains one or more unexpanded variables
# instead return the code with the characters we removed put back
if __expand_var_regexp__.findall(code):
return "${@" + code + "}"
if self.varname:
varname = 'Var <%s>' % self.varname
else:
@@ -153,21 +132,16 @@ class VariableParse:
value = utils.better_eval(codeobj, DataContext(self.d), {'d' : self.d})
return str(value)
class DataContext(dict):
excluded = set([i for i in dir(builtins) if not i.startswith('_')] + ['oe'])
class DataContext(dict):
def __init__(self, metadata, **kwargs):
self.metadata = metadata
dict.__init__(self, **kwargs)
self['d'] = metadata
self.context = set(bb.utils.get_context())
def __missing__(self, key):
if key in self.excluded or key in self.context:
raise KeyError(key)
value = self.metadata.getVar(key)
if value is None:
if value is None or self.metadata.getVarFlag(key, 'func', False):
raise KeyError(key)
else:
return value
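DataContext is what lets inline python expansion fall back to the datastore: a name that is neither local nor excluded is looked up with getVar. A small sketch of the effect, with invented variables:

import bb.data
d = bb.data.init()
d.setVar("PN", "example")
d.setVar("GREETING", "${@'hello ' + PN}")
print(d.getVar("GREETING"))  # hello example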
@@ -177,7 +151,6 @@ class ExpansionError(Exception):
self.expression = expression
self.variablename = varname
self.exception = exception
self.varlist = [varname or expression or ""]
if varname:
if expression:
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
@@ -187,14 +160,8 @@ class ExpansionError(Exception):
self.msg = "Failure expanding expression %s which triggered exception %s: %s" % (expression, type(exception).__name__, exception)
Exception.__init__(self, self.msg)
self.args = (varname, expression, exception)
def addVar(self, varname):
if varname:
self.varlist.append(varname)
def __str__(self):
chain = "\nThe variable dependency chain for the failure is: " + " -> ".join(self.varlist)
return self.msg + chain
return self.msg
class IncludeHistory(object):
def __init__(self, parent = None, filename = '[TOP LEVEL]'):
@@ -272,9 +239,12 @@ class VariableHistory(object):
return
if 'op' not in loginfo or not loginfo['op']:
loginfo['op'] = 'set'
if 'detail' in loginfo:
loginfo['detail'] = str(loginfo['detail'])
if 'variable' not in loginfo or 'file' not in loginfo:
raise ValueError("record() missing variable or file.")
var = loginfo['variable']
if var not in self.variables:
self.variables[var] = []
if not isinstance(self.variables[var], list):
@@ -307,7 +277,7 @@ class VariableHistory(object):
for (r, override) in d.overridedata[var]:
for event in self.variable(r):
loginfo = event.copy()
if 'flag' in loginfo and not loginfo['flag'].startswith(("_", ":")):
if 'flag' in loginfo and not loginfo['flag'].startswith("_"):
continue
loginfo['variable'] = var
loginfo['op'] = 'override[%s]:%s' % (override, loginfo['op'])
@@ -333,8 +303,7 @@ class VariableHistory(object):
flag = '[%s] ' % (event['flag'])
else:
flag = ''
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % \
(event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', str(event['detail']))))
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % (event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', event['detail'])))
if len(history) > 1:
o.write("# pre-expansion value:\n")
o.write('# "%s"\n' % (commentVal))
@@ -360,16 +329,6 @@ class VariableHistory(object):
lines.append(line)
return lines
def get_variable_refs(self, var):
"""Return a dict of file/line references"""
var_history = self.variable(var)
refs = {}
for event in var_history:
if event['file'] not in refs:
refs[event['file']] = []
refs[event['file']].append(event['line'])
return refs
def get_variable_items_files(self, var):
"""
Use variable history to map items added to a list variable and
@@ -383,12 +342,12 @@ class VariableHistory(object):
for event in history:
if 'flag' in event:
continue
if event['op'] == ':remove':
if event['op'] == '_remove':
continue
if isset and event['op'] == 'set?':
continue
isset = True
items = d.expand(str(event['detail'])).split()
items = d.expand(event['detail']).split()
for item in items:
# This is a little crude but is belt-and-braces to avoid us
# having to handle every possible operation type specifically
@@ -404,23 +363,6 @@ class VariableHistory(object):
else:
self.variables[var] = []
def _print_rename_error(var, loginfo, renamedvars, fullvar=None):
info = ""
if "file" in loginfo:
info = " file: %s" % loginfo["file"]
if "line" in loginfo:
info += " line: %s" % loginfo["line"]
if fullvar and fullvar != var:
info += " referenced as: %s" % fullvar
if info:
info = " (%s)" % info.strip()
renameinfo = renamedvars[var]
if " " in renameinfo:
# A space signals a string to display instead of a rename
bb.erroronce('Variable %s %s%s' % (var, renameinfo, info))
else:
bb.erroronce('Variable %s has been renamed to %s%s' % (var, renameinfo, info))
class DataSmart(MutableMapping):
def __init__(self):
self.dict = {}
@@ -428,8 +370,6 @@ class DataSmart(MutableMapping):
self.inchistory = IncludeHistory()
self.varhistory = VariableHistory(self)
self._tracking = False
self._var_renames = {}
self._var_renames.update(bitbake_renamed_vars)
self.expand_cache = {}
@@ -451,9 +391,9 @@ class DataSmart(MutableMapping):
def expandWithRefs(self, s, varname):
if not isinstance(s, str): # sanity check
return VariableParse(varname, self, s, s)
return VariableParse(varname, self, s)
varparse = VariableParse(varname, self, s)
varparse = VariableParse(varname, self)
while s.find('${') != -1:
olds = s
@@ -463,17 +403,14 @@ class DataSmart(MutableMapping):
s = __expand_python_regexp__.sub(varparse.python_sub, s)
except SyntaxError as e:
# Likely unmatched brackets, just don't expand the expression
if e.msg != "EOL while scanning string literal" and not e.msg.startswith("unterminated string literal"):
if e.msg != "EOL while scanning string literal":
raise
if s == olds:
break
except ExpansionError as e:
e.addVar(varname)
except ExpansionError:
raise
except bb.parse.SkipRecipe:
raise
except bb.BBHandledException:
raise
except Exception as exc:
tb = sys.exc_info()[2]
raise ExpansionError(varname, s, exc).with_traceback(tb) from exc
@@ -485,19 +422,24 @@ class DataSmart(MutableMapping):
def expand(self, s, varname = None):
return self.expandWithRefs(s, varname).value
def finalize(self, parent = False):
return
def internal_finalize(self, parent = False):
"""Performs final steps upon the datastore, including application of overrides"""
self.overrides = None
def need_overrides(self):
if self.overrides is not None:
return
if self.inoverride:
return
overrride_stack = []
for count in range(5):
self.inoverride = True
# Can end up here recursively so setup dummy values
self.overrides = []
self.overridesset = set()
self.overrides = (self.getVar("OVERRIDES") or "").split(":") or []
overrride_stack.append(self.overrides)
self.overridesset = set(self.overrides)
self.inoverride = False
self.expand_cache = {}
@@ -507,7 +449,7 @@ class DataSmart(MutableMapping):
self.overrides = newoverrides
self.overridesset = set(self.overrides)
else:
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work. The list of failing override expansions: %s" % "\n".join(str(s) for s in overrride_stack))
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work.")
def initVar(self, var):
self.expand_cache = {}
@@ -518,44 +460,27 @@ class DataSmart(MutableMapping):
dest = self.dict
while dest:
if var in dest:
return dest[var]
return dest[var], self.overridedata.get(var, None)
if "_data" not in dest:
break
dest = dest["_data"]
return None
return None, self.overridedata.get(var, None)
def _makeShadowCopy(self, var):
if var in self.dict:
return
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
if local_var:
self.dict[var] = copy.copy(local_var)
else:
self.initVar(var)
def hasOverrides(self, var):
return var in self.overridedata
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
if not var.startswith("__anon_") and ("_append" in var or "_prepend" in var or "_remove" in var):
info = "%s" % var
if "file" in loginfo:
info += " file: %s" % loginfo["file"]
if "line" in loginfo:
info += " line: %s" % loginfo["line"]
bb.fatal("Variable %s contains an operation using the old override syntax. Please convert this layer/metadata before attempting to use with a newer bitbake." % info)
shortvar = var.split(":", 1)[0]
if shortvar in self._var_renames:
_print_rename_error(shortvar, loginfo, self._var_renames, fullvar=var)
# Mark that we have seen a renamed variable
self.setVar("_FAILPARSINGERRORHANDLED", True)
self.expand_cache = {}
parsing=False
if 'parsing' in loginfo:
@@ -584,7 +509,7 @@ class DataSmart(MutableMapping):
# pay the cookie monster
# more cookies for the cookie monster
if ':' in var:
if '_' in var:
self._setvar_update_overrides(base, **loginfo)
if base in self.overridevars:
@@ -595,27 +520,27 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
if not parsing:
if ":append" in self.dict[var]:
del self.dict[var][":append"]
if ":prepend" in self.dict[var]:
del self.dict[var][":prepend"]
if ":remove" in self.dict[var]:
del self.dict[var][":remove"]
if "_append" in self.dict[var]:
del self.dict[var]["_append"]
if "_prepend" in self.dict[var]:
del self.dict[var]["_prepend"]
if "_remove" in self.dict[var]:
del self.dict[var]["_remove"]
if var in self.overridedata:
active = []
self.need_overrides()
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
active.append(r)
elif ":" in o:
if set(o.split(":")).issubset(self.overridesset):
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
active.append(r)
for a in active:
self.delVar(a)
del self.overridedata[var]
# more cookies for the cookie monster
if ':' in var:
if '_' in var:
self._setvar_update_overrides(var, **loginfo)
# setting var
@@ -637,12 +562,12 @@ class DataSmart(MutableMapping):
nextnew.update(vardata.references)
nextnew.update(vardata.contains.keys())
new = nextnew
self.overrides = None
self.internal_finalize(True)
def _setvar_update_overrides(self, var, **loginfo):
# aka pay the cookie monster
override = var[var.rfind(':')+1:]
shortvar = var[:var.rfind(':')]
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and __override_regexp__.match(override):
if shortvar not in self.overridedata:
self.overridedata[shortvar] = []
@@ -651,9 +576,9 @@ class DataSmart(MutableMapping):
self.overridedata[shortvar] = list(self.overridedata[shortvar])
self.overridedata[shortvar].append([var, override])
override = None
if ":" in shortvar:
override = var[shortvar.rfind(':')+1:]
shortvar = var[:shortvar.rfind(':')]
if "_" in shortvar:
override = var[shortvar.rfind('_')+1:]
shortvar = var[:shortvar.rfind('_')]
if len(shortvar) == 0:
override = None
@@ -677,11 +602,10 @@ class DataSmart(MutableMapping):
self.varhistory.record(**loginfo)
self.setVar(newkey, val, ignore=True, parsing=True)
srcflags = self.getVarFlags(key, False, True) or {}
for i in srcflags:
if i not in (__setvar_keyword__):
for i in (__setvar_keyword__):
src = self.getVarFlag(key, i, False)
if src is None:
continue
src = srcflags[i]
dest = self.getVarFlag(newkey, i, False) or []
dest.extend(src)
@@ -693,7 +617,7 @@ class DataSmart(MutableMapping):
self.overridedata[newkey].append([v.replace(key, newkey), o])
self.renameVar(v, v.replace(key, newkey))
if ':' in newkey and val is None:
if '_' in newkey and val is None:
self._setvar_update_overrides(newkey, **loginfo)
loginfo['variable'] = key
@@ -705,12 +629,12 @@ class DataSmart(MutableMapping):
def appendVar(self, var, value, **loginfo):
loginfo['op'] = 'append'
self.varhistory.record(**loginfo)
self.setVar(var + ":append", value, ignore=True, parsing=True)
self.setVar(var + "_append", value, ignore=True, parsing=True)
def prependVar(self, var, value, **loginfo):
loginfo['op'] = 'prepend'
self.varhistory.record(**loginfo)
self.setVar(var + ":prepend", value, ignore=True, parsing=True)
self.setVar(var + "_prepend", value, ignore=True, parsing=True)
def delVar(self, var, **loginfo):
self.expand_cache = {}
@@ -721,10 +645,10 @@ class DataSmart(MutableMapping):
self.dict[var] = {}
if var in self.overridedata:
del self.overridedata[var]
if ':' in var:
override = var[var.rfind(':')+1:]
shortvar = var[:var.rfind(':')]
while override and __override_regexp__.match(override):
if '_' in var:
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and override.islower():
try:
if shortvar in self.overridedata:
# Force CoW by recreating the list first
@@ -733,23 +657,15 @@ class DataSmart(MutableMapping):
except ValueError as e:
pass
override = None
if ":" in shortvar:
override = var[shortvar.rfind(':')+1:]
shortvar = var[:shortvar.rfind(':')]
if "_" in shortvar:
override = var[shortvar.rfind('_')+1:]
shortvar = var[:shortvar.rfind('_')]
if len(shortvar) == 0:
override = None
def setVarFlag(self, var, flag, value, **loginfo):
self.expand_cache = {}
if var == "BB_RENAMED_VARIABLES":
self._var_renames[flag] = value
if var in self._var_renames:
_print_rename_error(var, loginfo, self._var_renames)
# Mark that we have seen a renamed variable
self.setVar("_FAILPARSINGERRORHANDLED", True)
if 'op' not in loginfo:
loginfo['op'] = "set"
loginfo['flag'] = flag
@@ -758,7 +674,7 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
self.dict[var][flag] = value
if flag == "_defaultval" and ':' in var:
if flag == "_defaultval" and '_' in var:
self._setvar_update_overrides(var, **loginfo)
if flag == "_defaultval" and var in self.overridevars:
self._setvar_update_overridevars(var, value)
@@ -779,27 +695,22 @@ class DataSmart(MutableMapping):
return None
cachename = var + "[" + flag + "]"
if not expand and retparser and cachename in self.expand_cache:
return self.expand_cache[cachename].unexpanded_value, self.expand_cache[cachename]
if expand and cachename in self.expand_cache:
return self.expand_cache[cachename].value
local_var = self._findVar(var)
local_var, overridedata = self._findVar(var)
value = None
removes = set()
if flag == "_content" and not parsing:
overridedata = self.overridedata.get(var, None)
if flag == "_content" and not parsing and overridedata is not None:
if flag == "_content" and overridedata is not None and not parsing:
match = False
active = {}
self.need_overrides()
for (r, o) in overridedata:
# FIXME What about double overrides both with "_" in the name?
# What about double overrides both with "_" in the name?
if o in self.overridesset:
active[o] = r
elif ":" in o:
if set(o.split(":")).issubset(self.overridesset):
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
active[o] = r
mod = True
@@ -807,10 +718,10 @@ class DataSmart(MutableMapping):
mod = False
for o in self.overrides:
for a in active.copy():
if a.endswith(":" + o):
if a.endswith("_" + o):
t = active[a]
del active[a]
active[a.replace(":" + o, "")] = t
active[a.replace("_" + o, "")] = t
mod = True
elif a == o:
match = active[a]
@@ -829,31 +740,31 @@ class DataSmart(MutableMapping):
value = copy.copy(local_var["_defaultval"])
if flag == "_content" and local_var is not None and ":append" in local_var and not parsing:
if flag == "_content" and local_var is not None and "_append" in local_var and not parsing:
if not value:
value = ""
self.need_overrides()
for (r, o) in local_var[":append"]:
for (r, o) in local_var["_append"]:
match = True
if o:
for o2 in o.split(":"):
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
if value is None:
value = ""
value = value + r
if flag == "_content" and local_var is not None and ":prepend" in local_var and not parsing:
if flag == "_content" and local_var is not None and "_prepend" in local_var and not parsing:
if not value:
value = ""
self.need_overrides()
for (r, o) in local_var[":prepend"]:
for (r, o) in local_var["_prepend"]:
match = True
if o:
for o2 in o.split(":"):
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
if value is None:
value = ""
value = r + value
parser = None
@@ -862,12 +773,12 @@ class DataSmart(MutableMapping):
if expand:
value = parser.value
if value and flag == "_content" and local_var is not None and ":remove" in local_var and not parsing:
if value and flag == "_content" and local_var is not None and "_remove" in local_var and not parsing:
self.need_overrides()
for (r, o) in local_var[":remove"]:
for (r, o) in local_var["_remove"]:
match = True
if o:
for o2 in o.split(":"):
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
@@ -880,7 +791,7 @@ class DataSmart(MutableMapping):
expanded_removes[r] = self.expand(r).split()
parser.removes = set()
val = []
val = ""
for v in __whitespace_split__.split(parser.value):
skip = False
for r in removes:
@@ -889,8 +800,8 @@ class DataSmart(MutableMapping):
skip = True
if skip:
continue
val.append(v)
parser.value = "".join(val)
val = val + v
parser.value = val
if expand:
value = parser.value
@@ -905,7 +816,7 @@ class DataSmart(MutableMapping):
def delVarFlag(self, var, flag, **loginfo):
self.expand_cache = {}
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
if not local_var:
return
if not var in self.dict:
@@ -948,12 +859,12 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
flags = {}
if local_var:
for i in local_var:
if i.startswith(("_", ":")) and not internalflags:
if i.startswith("_") and not internalflags:
continue
flags[i] = local_var[i]
if expand and i in expand:
@@ -994,7 +905,6 @@ class DataSmart(MutableMapping):
data.inchistory = self.inchistory.copy()
data._tracking = self._tracking
data._var_renames = self._var_renames
data.overrides = None
data.overridevars = copy.copy(self.overridevars)
@@ -1017,7 +927,7 @@ class DataSmart(MutableMapping):
value = self.getVar(variable, False)
for key in keys:
referrervalue = self.getVar(key, False)
if referrervalue and isinstance(referrervalue, str) and ref in referrervalue:
if referrervalue and ref in referrervalue:
self.setVar(key, referrervalue.replace(ref, value))
def localkeys(self):
@@ -1052,8 +962,8 @@ class DataSmart(MutableMapping):
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
overrides.add(var)
elif ":" in o:
if set(o.split(":")).issubset(self.overridesset):
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
overrides.add(var)
for k in keylist(self.dict):
@@ -1083,10 +993,10 @@ class DataSmart(MutableMapping):
d = self.createCopy()
bb.data.expandKeys(d)
config_ignore_vars = set((d.getVar("BB_HASHCONFIG_IGNORE_VARS") or "").split())
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST") or "").split())
keys = set(key for key in iter(d) if not key.startswith("__"))
for key in keys:
if key in config_ignore_vars:
if key in config_whitelist:
continue
value = d.getVar(key, False) or ""

View File

@@ -40,7 +40,7 @@ class HeartbeatEvent(Event):
"""Triggered at regular time intervals of 10 seconds. Other events can fire much more often
(runQueueTaskStarted when there are many short tasks) or not at all for long periods
of time (again runQueueTaskStarted, when there is just one long-running task), so this
event is more suitable for doing some task-independent work occasionally."""
event is more suitable for doing some task-independent work occassionally."""
def __init__(self, time):
Event.__init__(self)
self.time = time
@@ -68,39 +68,29 @@ _catchall_handlers = {}
_eventfilter = None
_uiready = False
_thread_lock = threading.Lock()
_heartbeat_enabled = False
_should_exit = threading.Event()
_thread_lock_enabled = False
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
def enable_threadlock():
# Always needed now
return
global _thread_lock_enabled
_thread_lock_enabled = True
def disable_threadlock():
# Always needed now
return
def enable_heartbeat():
global _heartbeat_enabled
_heartbeat_enabled = True
def disable_heartbeat():
global _heartbeat_enabled
_heartbeat_enabled = False
#
# In long running code, this function should be called periodically
# to check if we should exit due to an interruption (e.g. Ctrl+C from the UI)
#
def check_for_interrupts(d):
global _should_exit
if _should_exit.is_set():
bb.warn("Exiting due to interrupt.")
raise bb.BBHandledException()
global _thread_lock_enabled
_thread_lock_enabled = False
def execute_handler(name, handler, event, d):
event.data = d
addedd = False
if 'd' not in builtins:
builtins['d'] = d
addedd = True
try:
ret = handler(event, d)
ret = handler(event)
except (bb.parse.SkipRecipe, bb.BBHandledException):
raise
except Exception:
@@ -114,7 +104,8 @@ def execute_handler(name, handler, event, d):
raise
finally:
del event.data
if addedd:
del builtins['d']
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
@@ -141,14 +132,8 @@ def print_ui_queue():
if not _uiready:
from bb.msg import BBLogFormatter
# Flush any existing buffered content
try:
sys.stdout.flush()
except:
pass
try:
sys.stderr.flush()
except:
pass
sys.stdout.flush()
sys.stderr.flush()
stdout = logging.StreamHandler(sys.stdout)
stderr = logging.StreamHandler(sys.stderr)
formatter = BBLogFormatter("%(levelname)s: %(message)s")
@@ -189,30 +174,36 @@ def print_ui_queue():
def fire_ui_handlers(event, d):
global _thread_lock
global _thread_lock_enabled
if not _uiready:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
with bb.utils.lock_timeout(_thread_lock):
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
if not _ui_logfilters[h].filter(event):
continue
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
except:
errors.append(h)
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.acquire()
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
if not _ui_logfilters[h].filter(event):
continue
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
except:
errors.append(h)
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.release()
def fire(event, d):
"""Fire off an Event"""
@@ -256,16 +247,15 @@ def register(name, handler, mask=None, filename=None, lineno=None, data=None):
if handler is not None:
# handle string containing python code
if isinstance(handler, str):
tmp = "def %s(e, d):\n%s" % (name, handler)
# Inject empty lines to make code match lineno in filename
if lineno is not None:
tmp = "\n" * (lineno-1) + tmp
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = bb.methodpool.compile_cache(tmp)
if not code:
if filename is None:
filename = "%s(e, d)" % name
filename = "%s(e)" % name
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
if lineno is not None:
ast.increment_lineno(code, lineno-1)
code = compile(code, filename, "exec")
bb.methodpool.compile_cache_add(tmp, code)
except SyntaxError:
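(Editor's note) Both sides of this hunk solve the same problem, making tracebacks from string event handlers point at the right line of the metadata file: one side pads the source with blank lines before compiling, the other shifts the AST with ast.increment_lineno. A runnable sketch of the AST variant:

    import ast
    import traceback

    def compile_at(source, filename, lineno):
        # Parse first, shift the line numbers, then compile the adjusted AST.
        tree = compile(source, filename, "exec", ast.PyCF_ONLY_AST)
        ast.increment_lineno(tree, lineno - 1)
        return compile(tree, filename, "exec")

    code = compile_at("raise ValueError('boom')", "recipe.bb", 42)
    try:
        exec(code)
    except ValueError:
        traceback.print_exc()    # traceback reports recipe.bb, line 42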
@@ -327,23 +317,21 @@ def set_eventfilter(func):
_eventfilter = func
def register_UIHhandler(handler, mainui=False):
with bb.utils.lock_timeout(_thread_lock):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
def unregister_UIHhandler(handlerNum, mainui=False):
if mainui:
global _uiready
_uiready = False
with bb.utils.lock_timeout(_thread_lock):
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
return
def get_uihandler():
@@ -498,7 +486,7 @@ class BuildCompleted(BuildBase, OperationCompleted):
BuildBase.__init__(self, n, p, failures)
class DiskFull(Event):
"""Disk full case build halted"""
"""Disk full case build aborted"""
def __init__(self, dev, type, freespace, mountpoint):
Event.__init__(self)
self._dev = dev
@@ -776,7 +764,7 @@ class LogHandler(logging.Handler):
class MetadataEvent(Event):
"""
Generic event that target for OE-Core classes
to report information during asynchronous execution
to report information during asynchrous execution
"""
def __init__(self, eventtype, eventdata):
Event.__init__(self)
@@ -857,19 +845,3 @@ class FindSigInfoResult(Event):
def __init__(self, result):
Event.__init__(self)
self.result = result
class GetTaskSignatureResult(Event):
"""
Event to return results from GetTaskSignatures command
"""
def __init__(self, sig):
Event.__init__(self)
self.sig = sig
class ParseError(Event):
"""
Event to indicate parse failed
"""
def __init__(self, msg):
super().__init__()
self._msg = msg

View File

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

View File

@@ -1,57 +0,0 @@
There are expectations of users of the fetcher code. This file attempts to document
some of the constraints that are present. Some are obvious, some are less so. It is
documented in the context of how OE uses it but the API calls are generic.
a) network access for sources is only expected to happen in the do_fetch task.
This is not enforced or tested but is required so that we can:
i) audit the sources used (i.e. for license/manifest reasons)
ii) support offline builds with a suitable cache
iii) allow work to continue even with downtime upstream
iv) allow for changes upstream in incompatible ways
v) allow rebuilding of the software in X years time
b) network access is not expected in do_unpack task.
c) you can take DL_DIR and use it as a mirror for offline builds.
d) network access only occurs when explicitly configured in recipes
(e.g. use of AUTOREV, or use of git tags which change revision).
e) fetcher output is deterministic (i.e. if you fetch configuration XXX now it
will match in future exactly in a clean build with a new DL_DIR).
One specific pain point example is git tags. They can be replaced or changed,
so the git fetcher has to resolve them over the network. We use git revisions
where possible to avoid this and ensure determinism.
f) network access is expected to work with the standard linux proxy variables
so that access behind firewalls works (the fetcher sets these in the
environment but only in the do_fetch tasks).
g) access during parsing has to be minimal; a "git ls-remote" for an AUTOREV
git recipe might be ok but you can't expect to check out a git tree.
h) we need to provide revision information during parsing such that a version
for the recipe can be constructed.
i) versions are expected to be able to increase in a way which sorts, allowing
package feeds to operate (see the PR server, required for git revisions to sort).
j) an API to query for possible version upgrades of a URL is highly desirable to
allow our automated upgrade code to function (it is implied that this always
has network access).
k) Where fixes or changes to behaviour in the fetcher are made, we ask that
test cases are added (run with "bitbake-selftest bb.tests.fetch"). We do
have fairly extensive test coverage of the fetcher as it is the only way
to track all of its corner cases, though sadly it still doesn't give
complete coverage.
l) If using tools during parse time, they will have to be in ASSUME_PROVIDED
in OE's context as we can't build git-native, then parse a recipe and use
git ls-remote.
Not all fetchers support all features; autorev is optional and doesn't make
sense for some. Upgrade detection also means different things in different
contexts.
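(Editor's note) A hedged sketch of the contract this document describes, assuming a populated datastore d inside a BitBake task context; download() is the only step expected to touch the network, so unpack() must succeed even with BB_NO_NETWORK set:

    from bb.fetch2 import Fetch

    fetcher = Fetch(d.getVar("SRC_URI").split(), d)
    fetcher.download()                     # network access happens here (do_fetch)
    d.setVar("BB_NO_NETWORK", "1")
    fetcher.unpack(d.getVar("WORKDIR"))    # must work offline (do_unpack)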

View File

@@ -113,7 +113,7 @@ class MissingParameterError(BBFetchException):
self.args = (missing, url)
class ParameterError(BBFetchException):
"""Exception raised when a url cannot be processed due to invalid parameters."""
"""Exception raised when a url cannot be proccessed due to invalid parameters."""
def __init__(self, message, url):
msg = "URL: '%s' has invalid parameters. %s" % (url, message)
self.url = url
@@ -182,7 +182,7 @@ class URI(object):
Some notes about relative URIs: while it's specified that
a URI beginning with <scheme>:// should either be directly
followed by a hostname or a /, the old URI handling of the
fetch2 library did not conform to this. Therefore, this URI
fetch2 library did not comform to this. Therefore, this URI
class has some kludges to make sure that URIs are parsed in
a way conforming to bitbake's current usage. This URI class
supports the following:
@@ -199,7 +199,7 @@ class URI(object):
file://hostname/absolute/path.diff (would be IETF compliant)
Note that the last case only applies to a list of
explicitly allowed schemes (currently only file://), that requires
"whitelisted" schemes (currently only file://), that requires
its URIs to not have a network location.
"""
@@ -290,12 +290,12 @@ class URI(object):
def _param_str_split(self, string, elmdelim, kvdelim="="):
ret = collections.OrderedDict()
for k, v in [x.split(kvdelim, 1) if kvdelim in x else (x, None) for x in string.split(elmdelim) if x]:
for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim) if x]:
ret[k] = v
return ret
def _param_str_join(self, dict_, elmdelim, kvdelim="="):
return elmdelim.join([kvdelim.join([k, v]) if v else k for k, v in dict_.items()])
return elmdelim.join([kvdelim.join([k, v]) for k, v in dict_.items()])
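(Editor's note) One side of this hunk tolerates bare keys (a parameter with no '='); the other assumes every parameter is key=value. A runnable mimic of the tolerant version:

    import collections

    def param_str_split(string, elmdelim=";", kvdelim="="):
        ret = collections.OrderedDict()
        for x in string.split(elmdelim):
            if not x:
                continue
            # Bare keys (no kvdelim) map to None instead of raising.
            k, v = x.split(kvdelim, 1) if kvdelim in x else (x, None)
            ret[k] = v
        return ret

    print(param_str_split("protocol=https;nobranch=1;bareword"))
    # OrderedDict([('protocol', 'https'), ('nobranch', '1'), ('bareword', None)])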
@property
def hostport(self):
@@ -388,7 +388,7 @@ def decodeurl(url):
if s:
if not '=' in s:
raise MalformedUrl(url, "The URL: '%s' is invalid: parameter %s does not specify a value (missing '=')" % (url, s))
s1, s2 = s.split('=', 1)
s1, s2 = s.split('=')
p[s1] = s2
return type, host, urllib.parse.unquote(path), user, pswd, p
@@ -402,24 +402,24 @@ def encodeurl(decoded):
if not type:
raise MissingParameterError('type', "encoded from the data %s" % str(decoded))
url = ['%s://' % type]
url = '%s://' % type
if user and type != "file":
url.append("%s" % user)
url += "%s" % user
if pswd:
url.append(":%s" % pswd)
url.append("@")
url += ":%s" % pswd
url += "@"
if host and type != "file":
url.append("%s" % host)
url += "%s" % host
if path:
# Standardise path to ensure comparisons work
while '//' in path:
path = path.replace("//", "/")
url.append("%s" % urllib.parse.quote(path))
url += "%s" % urllib.parse.quote(path)
if p:
for parm in p:
url.append(";%s=%s" % (parm, p[parm]))
url += ";%s=%s" % (parm, p[parm])
return "".join(url)
return url
def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
if not ud.url or not uri_find or not uri_replace:
@@ -430,7 +430,6 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
uri_replace_decoded = list(decodeurl(uri_replace))
logger.debug2("For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
result_decoded = ['', '', '', '', '', {}]
# 0 - type, 1 - host, 2 - path, 3 - user, 4 - pswd, 5 - params
for loc, i in enumerate(uri_find_decoded):
result_decoded[loc] = uri_decoded[loc]
regexp = i
@@ -450,9 +449,6 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
for l in replacements:
uri_replace_decoded[loc][k] = uri_replace_decoded[loc][k].replace(l, replacements[l])
result_decoded[loc][k] = uri_replace_decoded[loc][k]
elif (loc == 3 or loc == 4) and uri_replace_decoded[loc]:
# User/password in the replacement is just a straight replacement
result_decoded[loc] = uri_replace_decoded[loc]
elif (re.match(regexp, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
result_decoded[loc] = ""
@@ -469,18 +465,10 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
basename = os.path.basename(mirrortarball)
# Kill parameters, they make no sense for mirror tarballs
uri_decoded[5] = {}
uri_find_decoded[5] = {}
elif ud.localpath and ud.method.supports_checksum(ud):
basename = os.path.basename(ud.localpath)
if basename:
uri_basename = os.path.basename(uri_decoded[loc])
# Prefix with a slash as a sentinel in case
# result_decoded[loc] does not contain one.
path = "/" + result_decoded[loc]
if uri_basename and basename != uri_basename and path.endswith("/" + uri_basename):
result_decoded[loc] = path[1:-len(uri_basename)] + basename
elif not path.endswith("/" + basename):
result_decoded[loc] = os.path.join(path[1:], basename)
if basename and not result_decoded[loc].endswith(basename):
result_decoded[loc] = os.path.join(result_decoded[loc], basename)
else:
return None
result = encodeurl(result_decoded)
@@ -518,7 +506,7 @@ def fetcher_init(d):
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
_checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))
_checksum_cache.init_cache(d)
for m in methods:
if hasattr(m, "init"):
@@ -546,7 +534,7 @@ def mirror_from_string(data):
bb.warn('Invalid mirror data %s, should have paired members.' % data)
return list(zip(*[iter(mirrors)]*2))
def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True):
def verify_checksum(ud, d, precomputed={}):
"""
verify the MD5 and SHA256 checksum for downloaded src
@@ -560,25 +548,20 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
file against those in the recipe each time, rather than only after
downloading. See https://bugzilla.yoctoproject.org/show_bug.cgi?id=5571.
"""
if ud.ignore_checksums or not ud.method.supports_checksum(ud):
return {}
if localpath is None:
localpath = ud.localpath
def compute_checksum_info(checksum_id):
checksum_name = getattr(ud, "%s_name" % checksum_id)
if checksum_id in precomputed:
checksum_data = precomputed[checksum_id]
else:
checksum_data = getattr(bb.utils, "%s_file" % checksum_id)(localpath)
checksum_data = getattr(bb.utils, "%s_file" % checksum_id)(ud.localpath)
checksum_expected = getattr(ud, "%s_expected" % checksum_id)
if checksum_expected == '':
checksum_expected = None
return {
"id": checksum_id,
"name": checksum_name,
@@ -598,13 +581,17 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
checksum_lines = ["SRC_URI[%s] = \"%s\"" % (ci["name"], ci["data"])]
# If no checksum has been provided
if fatal_nochecksum and ud.method.recommends_checksum(ud) and all(ci["expected"] is None for ci in checksum_infos):
if ud.method.recommends_checksum(ud) and all(ci["expected"] is None for ci in checksum_infos):
messages = []
strict = d.getVar("BB_STRICT_CHECKSUM") or "0"
# If strict checking enabled and neither sum defined, raise error
if strict == "1":
raise NoChecksumError("\n".join(checksum_lines))
messages.append("No checksum specified for '%s', please add at " \
"least one to the recipe:" % ud.localpath)
messages.extend(checksum_lines)
logger.error("\n".join(messages))
raise NoChecksumError("Missing SRC_URI checksum", ud.url)
bb.event.fire(MissingChecksumEvent(ud.url, **checksum_event), d)
@@ -625,8 +612,8 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
for ci in checksum_infos:
if ci["expected"] and ci["expected"] != ci["data"]:
messages.append("File: '%s' has %s checksum '%s' when '%s' was " \
"expected" % (localpath, ci["id"], ci["data"], ci["expected"]))
messages.append("File: '%s' has %s checksum %s when %s was " \
"expected" % (ud.localpath, ci["id"], ci["data"], ci["expected"]))
bad_checksum = ci["data"]
if bad_checksum:
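(Editor's note) The checksum helpers above are resolved by name with getattr(bb.utils, "%s_file" % checksum_id). A runnable sketch of the same dispatch-and-compare pattern outside BitBake, with a local sha256_file helper standing in for bb.utils:

    import hashlib

    def sha256_file(path):
        # Stream the file so large downloads aren't loaded into memory at once.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    HELPERS = {"sha256": sha256_file}

    def verify(path, checksum_id, expected):
        data = HELPERS[checksum_id](path)
        if expected and data != expected:
            raise ValueError("File: '%s' has %s checksum '%s' when '%s' was expected"
                             % (path, checksum_id, data, expected))
        return data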
@@ -744,16 +731,13 @@ def subprocess_setup():
# SIGPIPE errors are known issues with gzip/bash
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def mark_recipe_nocache(d):
def get_autorev(d):
# only not cache src rev in autorev case
if d.getVar('BB_SRCREV_POLICY') != "cache":
d.setVar('BB_DONT_CACHE', '1')
def get_autorev(d):
mark_recipe_nocache(d)
d.setVar("__BBAUTOREV_SEEN", True)
return "AUTOINC"
def _get_srcrev(d, method_name='sortable_revision'):
def get_srcrev(d, method_name='sortable_revision'):
"""
Return the revision string, usually for use in the version string (PV) of the current package
Most packages usually only have one SCM so we just pass on the call.
@@ -767,34 +751,23 @@ def _get_srcrev(d, method_name='sortable_revision'):
that fetcher provides a method with the given name and the same signature as sortable_revision.
"""
d.setVar("__BBSRCREV_SEEN", "1")
recursion = d.getVar("__BBINSRCREV")
if recursion:
raise FetchError("There are recursive references in fetcher variables, likely through SRC_URI")
d.setVar("__BBINSRCREV", True)
scms = []
revs = []
fetcher = Fetch(d.getVar('SRC_URI').split(), d)
urldata = fetcher.ud
for u in urldata:
if urldata[u].method.supports_srcrev():
scms.append(u)
if not scms:
d.delVar("__BBINSRCREV")
return "", revs
if len(scms) == 0:
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
if len(scms) == 1 and len(urldata[scms[0]].names) == 1:
autoinc, rev = getattr(urldata[scms[0]].method, method_name)(urldata[scms[0]], d, urldata[scms[0]].names[0])
revs.append(rev)
if len(rev) > 10:
rev = rev[:10]
d.delVar("__BBINSRCREV")
if autoinc:
return "AUTOINC+" + rev, revs
return rev, revs
return "AUTOINC+" + rev
return rev
#
# Multiple SCMs are in SRC_URI so we resort to SRCREV_FORMAT
@@ -810,7 +783,6 @@ def _get_srcrev(d, method_name='sortable_revision'):
ud = urldata[scm]
for name in ud.names:
autoinc, rev = getattr(ud.method, method_name)(ud, d, name)
revs.append(rev)
seenautoinc = seenautoinc or autoinc
if len(rev) > 10:
rev = rev[:10]
@@ -827,70 +799,12 @@ def _get_srcrev(d, method_name='sortable_revision'):
if seenautoinc:
format = "AUTOINC+" + format
d.delVar("__BBINSRCREV")
return format, revs
def get_hashvalue(d, method_name='sortable_revision'):
pkgv, revs = _get_srcrev(d, method_name=method_name)
return " ".join(revs)
def get_pkgv_string(d, method_name='sortable_revision'):
pkgv, revs = _get_srcrev(d, method_name=method_name)
return pkgv
def get_srcrev(d, method_name='sortable_revision'):
pkgv, revs = _get_srcrev(d, method_name=method_name)
if not pkgv:
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
return pkgv
return format
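(Editor's note) A small sketch of the revision formatting rules visible in this function: revisions longer than ten characters are truncated, and floating (autoinc) revisions gain an "AUTOINC+" prefix so PV keeps sorting:

    def format_rev(rev, autoinc):
        if len(rev) > 10:
            rev = rev[:10]
        return "AUTOINC+" + rev if autoinc else rev

    print(format_rev("0123456789abcdef", autoinc=True))   # AUTOINC+0123456789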
def localpath(url, d):
fetcher = bb.fetch2.Fetch([url], d)
return fetcher.localpath(url)
# Need to export PATH as binary could be in metadata paths
# rather than host provided
# Also include some other variables.
FETCH_EXPORT_VARS = ['HOME', 'PATH',
'HTTP_PROXY', 'http_proxy',
'HTTPS_PROXY', 'https_proxy',
'FTP_PROXY', 'ftp_proxy',
'FTPS_PROXY', 'ftps_proxy',
'NO_PROXY', 'no_proxy',
'ALL_PROXY', 'all_proxy',
'GIT_PROXY_COMMAND',
'GIT_SSH',
'GIT_SSH_COMMAND',
'GIT_SSL_CAINFO',
'GIT_SMART_HTTP',
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
'SOCKS5_USER', 'SOCKS5_PASSWD',
'DBUS_SESSION_BUS_ADDRESS',
'P4CONFIG',
'SSL_CERT_FILE',
'NODE_EXTRA_CA_CERTS',
'AWS_PROFILE',
'AWS_ACCESS_KEY_ID',
'AWS_SECRET_ACCESS_KEY',
'AWS_ROLE_ARN',
'AWS_WEB_IDENTITY_TOKEN_FILE',
'AWS_DEFAULT_REGION',
'AWS_SESSION_TOKEN',
'GIT_CACHE_PATH',
'REMOTE_CONTAINERS_IPC',
'SSL_CERT_DIR']
def get_fetcher_environment(d):
newenv = {}
origenv = d.getVar("BB_ORIGENV")
for name in bb.fetch2.FETCH_EXPORT_VARS:
value = d.getVar(name)
if not value and origenv:
value = origenv.getVar(name)
if value:
newenv[name] = value
return newenv
def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
"""
Run cmd returning the command output
@@ -899,7 +813,25 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
Optionally remove the files/directories listed in cleanup upon failure
"""
exportvars = FETCH_EXPORT_VARS
# Need to export PATH as binary could be in metadata paths
# rather than host provided
# Also include some other variables.
# FIXME: Should really include all export variables?
exportvars = ['HOME', 'PATH',
'HTTP_PROXY', 'http_proxy',
'HTTPS_PROXY', 'https_proxy',
'FTP_PROXY', 'ftp_proxy',
'FTPS_PROXY', 'ftps_proxy',
'NO_PROXY', 'no_proxy',
'ALL_PROXY', 'all_proxy',
'GIT_PROXY_COMMAND',
'GIT_SSH',
'GIT_SSL_CAINFO',
'GIT_SMART_HTTP',
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
'SOCKS5_USER', 'SOCKS5_PASSWD',
'DBUS_SESSION_BUS_ADDRESS',
'P4CONFIG']
if not cleanup:
cleanup = []
@@ -936,17 +868,14 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
(output, errors) = bb.process.run(cmd, log=log, shell=True, stderr=subprocess.PIPE, cwd=workdir)
success = True
except bb.process.NotFoundError as e:
error_message = "Fetch command %s not found" % (e.command)
error_message = "Fetch command %s" % (e.command)
except bb.process.ExecutionError as e:
if e.stdout:
output = "output:\n%s\n%s" % (e.stdout, e.stderr)
elif e.stderr:
output = "output:\n%s" % e.stderr
else:
if log:
output = "see logfile for output"
else:
output = "no output"
output = "no output"
error_message = "Fetch command %s failed with exit code %s, %s" % (e.command, e.exitcode, output)
except bb.process.CmdError as e:
error_message = "Fetch command %s could not be run:\n%s" % (e.command, e.msg)
@@ -1008,7 +937,6 @@ def build_mirroruris(origud, mirrors, ld):
try:
newud = FetchData(newuri, ld)
newud.ignore_checksums = True
newud.setup_localpath(ld)
except bb.fetch2.BBFetchException as e:
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
@@ -1118,8 +1046,7 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(str(e))
try:
if ud.method.cleanup_upon_failure():
ud.method.clean(ud, ld)
ud.method.clean(ud, ld)
except UnboundLocalError:
pass
return False
@@ -1130,8 +1057,6 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
def ensure_symlink(target, link_name):
if not os.path.exists(link_name):
dirname = os.path.dirname(link_name)
bb.utils.mkdirhier(dirname)
if os.path.islink(link_name):
# Broken symbolic link
os.unlink(link_name)
@@ -1215,11 +1140,11 @@ def srcrev_internal_helper(ud, d, name):
pn = d.getVar("PN")
attempts = []
if name != '' and pn:
attempts.append("SRCREV_%s:pn-%s" % (name, pn))
attempts.append("SRCREV_%s_pn-%s" % (name, pn))
if name != '':
attempts.append("SRCREV_%s" % name)
if pn:
attempts.append("SRCREV:pn-%s" % pn)
attempts.append("SRCREV_pn-%s" % pn)
attempts.append("SRCREV")
for a in attempts:
@@ -1244,7 +1169,6 @@ def srcrev_internal_helper(ud, d, name):
if srcrev == "INVALID" or not srcrev:
raise FetchError("Please set a valid SRCREV for url %s (possible key names are %s, or use a ;rev=X URL parameter)" % (str(attempts), ud.url), ud.url)
if srcrev == "AUTOINC":
d.setVar("__BBAUTOREV_ACTED_UPON", True)
srcrev = ud.method.latest_revision(ud, d, name)
return srcrev
@@ -1256,21 +1180,23 @@ def get_checksum_file_list(d):
SRC_URI as a space-separated string
"""
fetch = Fetch([], d, cache = False, localonly = True)
dl_dir = d.getVar('DL_DIR')
filelist = []
for u in fetch.urls:
ud = fetch.ud[u]
if ud and isinstance(ud.method, local.Local):
found = False
paths = ud.method.localfile_searchpaths(ud, d)
paths = ud.method.localpaths(ud, d)
for f in paths:
pth = ud.decodedurl
if os.path.exists(f):
found = True
if f.startswith(dl_dir):
# The local fetcher's behaviour is to return a path under DL_DIR if it couldn't find the file anywhere else
if os.path.exists(f):
bb.warn("Getting checksum for %s SRC_URI entry %s: file not found except in DL_DIR" % (d.getVar('PN'), os.path.basename(f)))
else:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: file could not be found" % (d.getVar('PN'), os.path.basename(f)))
filelist.append(f + ":" + str(os.path.exists(f)))
if not found:
bb.fatal(("Unable to get checksum for %s SRC_URI entry %s: file could not be found"
"\nThe following paths were searched:"
"\n%s") % (d.getVar('PN'), os.path.basename(f), '\n'.join(paths)))
return " ".join(filelist)
@@ -1317,13 +1243,18 @@ class FetchData(object):
if checksum_name in self.parm:
checksum_expected = self.parm[checksum_name]
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az", "crate", "gs"]:
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az"]:
checksum_expected = None
else:
checksum_expected = d.getVarFlag("SRC_URI", checksum_name)
setattr(self, "%s_expected" % checksum_id, checksum_expected)
for checksum_id in CHECKSUM_LIST:
configure_checksum(checksum_id)
self.ignore_checksums = False
self.names = self.parm.get("name",'default').split(',')
self.method = None
@@ -1345,11 +1276,6 @@ class FetchData(object):
if hasattr(self.method, "urldata_init"):
self.method.urldata_init(self, d)
for checksum_id in CHECKSUM_LIST:
configure_checksum(checksum_id)
self.ignore_checksums = False
if "localpath" in self.parm:
# if user sets localpath for file, use it instead.
self.localpath = self.parm["localpath"]
@@ -1429,9 +1355,6 @@ class FetchMethod(object):
Is localpath something that can be represented by a checksum?
"""
# We cannot compute checksums for None
if urldata.localpath is None:
return False
# We cannot compute checksums for directories
if os.path.isdir(urldata.localpath):
return False
@@ -1444,12 +1367,6 @@ class FetchMethod(object):
"""
return False
def cleanup_upon_failure(self):
"""
When a fetch fails, should clean() be called?
"""
return True
def verify_donestamp(self, ud, d):
"""
Verify the donestamp file
@@ -1517,33 +1434,30 @@ class FetchMethod(object):
cmd = None
if unpack:
tar_cmd = 'tar --extract --no-same-owner'
if 'striplevel' in urldata.parm:
tar_cmd += ' --strip-components=%s' % urldata.parm['striplevel']
if file.endswith('.tar'):
cmd = '%s -f %s' % (tar_cmd, file)
cmd = 'tar x --no-same-owner -f %s' % file
elif file.endswith('.tgz') or file.endswith('.tar.gz') or file.endswith('.tar.Z'):
cmd = '%s -z -f %s' % (tar_cmd, file)
cmd = 'tar xz --no-same-owner -f %s' % file
elif file.endswith('.tbz') or file.endswith('.tbz2') or file.endswith('.tar.bz2'):
cmd = 'bzip2 -dc %s | %s -f -' % (file, tar_cmd)
cmd = 'bzip2 -dc %s | tar x --no-same-owner -f -' % file
elif file.endswith('.gz') or file.endswith('.Z') or file.endswith('.z'):
cmd = 'gzip -dc %s > %s' % (file, efile)
elif file.endswith('.bz2'):
cmd = 'bzip2 -dc %s > %s' % (file, efile)
elif file.endswith('.txz') or file.endswith('.tar.xz'):
cmd = 'xz -dc %s | %s -f -' % (file, tar_cmd)
cmd = 'xz -dc %s | tar x --no-same-owner -f -' % file
elif file.endswith('.xz'):
cmd = 'xz -dc %s > %s' % (file, efile)
elif file.endswith('.tar.lz'):
cmd = 'lzip -dc %s | %s -f -' % (file, tar_cmd)
cmd = 'lzip -dc %s | tar x --no-same-owner -f -' % file
elif file.endswith('.lz'):
cmd = 'lzip -dc %s > %s' % (file, efile)
elif file.endswith('.tar.7z'):
cmd = '7z x -so %s | %s -f -' % (file, tar_cmd)
cmd = '7z x -so %s | tar x --no-same-owner -f -' % file
elif file.endswith('.7z'):
cmd = '7za x -y %s 1>/dev/null' % file
elif file.endswith('.tzst') or file.endswith('.tar.zst'):
cmd = 'zstd --decompress --stdout %s | %s -f -' % (file, tar_cmd)
cmd = 'zstd --decompress --stdout %s | tar x --no-same-owner -f -' % file
elif file.endswith('.zst'):
cmd = 'zstd --decompress --stdout %s > %s' % (file, efile)
elif file.endswith('.zip') or file.endswith('.jar'):
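(Editor's note) One side of this hunk builds the tar invocation once (including the striplevel parameter) and reuses it; the other spells the command out per suffix. A runnable mimic of the shared-command version, for a few of the suffixes above:

    def unpack_cmd(filename, striplevel=None):
        tar_cmd = "tar --extract --no-same-owner"
        if striplevel is not None:
            tar_cmd += " --strip-components=%s" % striplevel
        if filename.endswith(".tar"):
            return "%s -f %s" % (tar_cmd, filename)
        if filename.endswith((".tgz", ".tar.gz", ".tar.Z")):
            return "%s -z -f %s" % (tar_cmd, filename)
        if filename.endswith((".tbz", ".tbz2", ".tar.bz2")):
            return "bzip2 -dc %s | %s -f -" % (filename, tar_cmd)
        raise ValueError("no unpack rule for %s" % filename)

    print(unpack_cmd("source.tar.gz", striplevel=1))
    # tar --extract --no-same-owner --strip-components=1 -z -f source.tar.gz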
@@ -1576,7 +1490,7 @@ class FetchMethod(object):
raise UnpackError("Unable to unpack deb/ipk package - does not contain data.tar.* file", urldata.url)
else:
raise UnpackError("Unable to unpack deb/ipk package - could not list contents", urldata.url)
cmd = 'ar x %s %s && %s -p -f %s && rm %s' % (file, datafile, tar_cmd, datafile, datafile)
cmd = 'ar x %s %s && tar --no-same-owner -xpf %s && rm %s' % (file, datafile, datafile, datafile)
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
if 'subdir' in urldata.parm:
@@ -1592,7 +1506,6 @@ class FetchMethod(object):
unpackdir = rootdir
if not unpack or not cmd:
urldata.unpack_tracer.unpack("file-copy", unpackdir)
# If file == dest, then avoid any copies, as we already put the file into dest!
dest = os.path.join(unpackdir, os.path.basename(file))
if file != dest and not (os.path.exists(dest) and os.path.samefile(file, dest)):
@@ -1607,8 +1520,6 @@ class FetchMethod(object):
destdir = urlpath.rsplit("/", 1)[0] + '/'
bb.utils.mkdirhier("%s/%s" % (unpackdir, destdir))
cmd = 'cp -fpPRH "%s" "%s"' % (file, destdir)
else:
urldata.unpack_tracer.unpack("archive-extract", unpackdir)
if not cmd:
return
@@ -1700,61 +1611,12 @@ class FetchMethod(object):
"""
return []
class DummyUnpackTracer(object):
"""
Abstract API definition for a class that traces unpacked source files back
to their respective upstream SRC_URI entries, for software composition
analysis, license compliance and detailed SBOM generation purposes.
User may load their own unpack tracer class (instead of the dummy
one) by setting the BB_UNPACK_TRACER_CLASS config parameter.
"""
def start(self, unpackdir, urldata_dict, d):
"""
Start tracing the core Fetch.unpack process, using an index to map
unpacked files to each SRC_URI entry.
This method is called by Fetch.unpack and it may receive nested calls by
gitsm and npmsw fetchers, which expand SRC_URI entries by adding implicit
URLs and by recursively calling Fetch.unpack from new (nested) Fetch
instances.
"""
return
def start_url(self, url):
"""Start tracing url unpack process.
This method is called by Fetch.unpack before the fetcher-specific unpack
method starts, and it may receive nested calls by gitsm and npmsw
fetchers.
"""
return
def unpack(self, unpack_type, destdir):
"""
Set unpack_type and destdir for current url.
This method is called by the fetcher-specific unpack method after url
tracing started.
"""
return
def finish_url(self, url):
"""Finish tracing url unpack process and update the file index.
This method is called by Fetch.unpack after the fetcher-specific unpack
method finished its job, and it may receive nested calls by gitsm
and npmsw fetchers.
"""
return
def complete(self):
"""
Finish tracing the Fetch.unpack process, and check if all nested
Fetch.unpack calls (if any) have been completed; if so, save collected
metadata.
"""
return
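(Editor's note) A hedged sketch of a user-supplied tracer implementing the API documented above; per the importlib loading shown later in this diff, it would be selected with BB_UNPACK_TRACER_CLASS = "mytracer.PrintTracer" (the module and class names here are hypothetical):

    class PrintTracer:
        """Toy tracer: prints each unpack step instead of building an SBOM index."""
        def start(self, unpackdir, urldata_dict, d):
            print("unpack starting in", unpackdir)
        def start_url(self, url):
            print("unpacking", url)
        def unpack(self, unpack_type, destdir):
            print("  ->", unpack_type, destdir)
        def finish_url(self, url):
            print("finished", url)
        def complete(self):
            print("all unpacks complete")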
class Fetch(object):
def __init__(self, urls, d, cache = True, localonly = False, connection_cache = None):
if localonly and cache:
raise Exception("bb.fetch2.Fetch.__init__: cannot set cache and localonly at same time")
if not urls:
if len(urls) == 0:
urls = d.getVar("SRC_URI").split()
self.urls = urls
self.d = d
@@ -1769,30 +1631,10 @@ class Fetch(object):
if key in urldata_cache:
self.ud = urldata_cache[key]
# the unpack_tracer object needs to be made available to possible nested
# Fetch instances (when those are created by gitsm and npmsw fetchers)
# so we set it as a global variable
global unpack_tracer
try:
unpack_tracer
except NameError:
class_path = d.getVar("BB_UNPACK_TRACER_CLASS")
if class_path:
# use user-defined unpack tracer class
import importlib
module_name, _, class_name = class_path.rpartition(".")
module = importlib.import_module(module_name)
class_ = getattr(module, class_name)
unpack_tracer = class_()
else:
# fall back to the dummy/abstract class
unpack_tracer = DummyUnpackTracer()
for url in urls:
if url not in self.ud:
try:
self.ud[url] = FetchData(url, d, localonly)
self.ud[url].unpack_tracer = unpack_tracer
except NonLocalMethod:
if localonly:
self.ud[url] = None
@@ -1831,7 +1673,6 @@ class Fetch(object):
network = self.d.getVar("BB_NO_NETWORK")
premirroronly = bb.utils.to_boolean(self.d.getVar("BB_FETCH_PREMIRRORONLY"))
checksum_missing_messages = []
for u in urls:
ud = self.ud[u]
ud.setup_localpath(self.d)
@@ -1843,6 +1684,7 @@ class Fetch(object):
try:
self.d.setVar("BB_NO_NETWORK", network)
if m.verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
done = True
elif m.try_premirror(ud, self.d):
@@ -1863,9 +1705,7 @@ class Fetch(object):
self.d.setVar("BB_NO_NETWORK", "1")
firsterr = None
verified_stamp = False
if done:
verified_stamp = m.verify_donestamp(ud, self.d)
verified_stamp = m.verify_donestamp(ud, self.d)
if not done and (not verified_stamp or m.need_update(ud, self.d)):
try:
if not trusted_network(self.d, ud.url):
@@ -1895,7 +1735,7 @@ class Fetch(object):
logger.debug(str(e))
firsterr = e
# Remove any incomplete fetch
if not verified_stamp and m.cleanup_upon_failure():
if not verified_stamp:
m.clean(ud, self.d)
logger.debug("Trying MIRRORS")
mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
@@ -1914,28 +1754,17 @@ class Fetch(object):
raise ChecksumError("Stale Error Detected")
except BBFetchException as e:
if isinstance(e, NoChecksumError):
(message, _) = e.args
checksum_missing_messages.append(message)
continue
elif isinstance(e, ChecksumError):
if isinstance(e, ChecksumError):
logger.error("Checksum failure fetching %s" % u)
raise
finally:
if ud.lockfile:
bb.utils.unlockfile(lf)
if checksum_missing_messages:
logger.error("Missing SRC_URI checksum, please add those to the recipe: \n%s", "\n".join(checksum_missing_messages))
raise BBFetchException("There was some missing checksums in the recipe")
def checkstatus(self, urls=None):
"""
Check all URLs exist upstream.
Returns None if the URLs exist, raises FetchError if the check wasn't
successful but there wasn't an error (such as file not found), and
raises other exceptions in error cases.
Check all urls exist upstream
"""
if not urls:
@@ -1958,7 +1787,7 @@ class Fetch(object):
ret = m.try_mirrors(self, ud, self.d, mirrors, True)
if not ret:
raise FetchError("URL doesn't work", u)
raise FetchError("URL %s doesn't work" % u, u)
def unpack(self, root, urls=None):
"""
@@ -1968,8 +1797,6 @@ class Fetch(object):
if not urls:
urls = self.urls
unpack_tracer.start(root, self.ud, self.d)
for u in urls:
ud = self.ud[u]
ud.setup_localpath(self.d)
@@ -1977,15 +1804,11 @@ class Fetch(object):
if ud.lockfile:
lf = bb.utils.lockfile(ud.lockfile)
unpack_tracer.start_url(u)
ud.method.unpack(ud, root, self.d)
unpack_tracer.finish_url(u)
if ud.lockfile:
bb.utils.unlockfile(lf)
unpack_tracer.complete()
def clean(self, urls=None):
"""
Clean files that the fetcher gets or places
@@ -2086,8 +1909,6 @@ from . import clearcase
from . import npm
from . import npmsw
from . import az
from . import crate
from . import gcp
methods.append(local.Local())
methods.append(wget.Wget())
@@ -2108,5 +1929,3 @@ methods.append(clearcase.ClearCase())
methods.append(npm.Npm())
methods.append(npmsw.NpmShrinkWrap())
methods.append(az.Az())
methods.append(crate.Crate())
methods.append(gcp.GCP())

View File

@@ -1,141 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for crates.io
"""
# Copyright (C) 2016 Doug Goldstein
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import hashlib
import json
import os
import subprocess
import bb
from bb.fetch2 import logger, subprocess_setup, UnpackError
from bb.fetch2.wget import Wget
class Crate(Wget):
"""Class to fetch crates via wget"""
def _cargo_bitbake_path(self, rootdir):
return os.path.join(rootdir, "cargo_home", "bitbake")
def supports(self, ud, d):
"""
Check to see if a given url is for this fetcher
"""
return ud.type in ['crate']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
"""
Sets up to download the respective crate from crates.io
"""
if ud.type == 'crate':
self._crate_urldata_init(ud, d)
super(Crate, self).urldata_init(ud, d)
def _crate_urldata_init(self, ud, d):
"""
Sets up the download for a crate
"""
# URL syntax is: crate://NAME/VERSION
# break the URL apart by /
parts = ud.url.split('/')
if len(parts) < 5:
raise bb.fetch2.ParameterError("Invalid URL: Must be crate://HOST/NAME/VERSION", ud.url)
# version is expected to be the last token
# but ignore possible url parameters which will be used
# by the top fetcher class
version = parts[-1].split(";")[0]
# second to last field is name
name = parts[-2]
# host (this is to allow custom crate registries to be specified)
host = '/'.join(parts[2:-2])
# if using upstream just fix it up nicely
if host == 'crates.io':
host = 'crates.io/api/v1/crates'
ud.url = "https://%s/%s/%s/download" % (host, name, version)
ud.parm['downloadfilename'] = "%s-%s.crate" % (name, version)
if 'name' not in ud.parm:
ud.parm['name'] = '%s-%s' % (name, version)
logger.debug2("Fetching %s to %s" % (ud.url, ud.parm['downloadfilename']))
def unpack(self, ud, rootdir, d):
"""
Uses the crate to build the necessary paths for cargo to utilize it
"""
if ud.type == 'crate':
return self._crate_unpack(ud, rootdir, d)
else:
super(Crate, self).unpack(ud, rootdir, d)
def _crate_unpack(self, ud, rootdir, d):
"""
Unpacks a crate
"""
thefile = ud.localpath
# possible metadata we need to write out
metadata = {}
# change to the rootdir to unpack but save the old working dir
save_cwd = os.getcwd()
os.chdir(rootdir)
bp = d.getVar('BP')
if bp == ud.parm.get('name'):
cmd = "tar -xz --no-same-owner -f %s" % thefile
ud.unpack_tracer.unpack("crate-extract", rootdir)
else:
cargo_bitbake = self._cargo_bitbake_path(rootdir)
ud.unpack_tracer.unpack("cargo-extract", cargo_bitbake)
cmd = "tar -xz --no-same-owner -f %s -C %s" % (thefile, cargo_bitbake)
# ensure we've got these paths made
bb.utils.mkdirhier(cargo_bitbake)
# generate metadata necessary
with open(thefile, 'rb') as f:
# get the SHA256 of the original tarball
tarhash = hashlib.sha256(f.read()).hexdigest()
metadata['files'] = {}
metadata['package'] = tarhash
path = d.getVar('PATH')
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
bb.note("Unpacking %s to %s/" % (thefile, os.getcwd()))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
os.chdir(save_cwd)
if ret != 0:
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)
# if we have metadata to write out..
if len(metadata) > 0:
cratepath = os.path.splitext(os.path.basename(thefile))[0]
bbpath = self._cargo_bitbake_path(rootdir)
mdfile = '.cargo-checksum.json'
mdpath = os.path.join(bbpath, cratepath, mdfile)
with open(mdpath, "w") as f:
json.dump(metadata, f)

View File

@@ -1,102 +0,0 @@
"""
BitBake 'Fetch' implementation for Google Cloud Platform Storage.
Class for fetching files from Google Cloud Storage using the
Google Cloud Storage Python Client. The GCS Python Client must
be correctly installed, configured and authenticated prior to use.
Additionally, gsutil must be installed.
# Copyright (C) 2023, Snap Inc.
#
# Based in part on bb.fetch2.s3:
# Copyright (C) 2017 Andre McCurdy
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import bb
import urllib.parse, urllib.error
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class GCP(FetchMethod):
"""
Class to fetch urls via GCP's Python API.
"""
def __init__(self):
self.gcp_client = None
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with GCP.
"""
return ud.type in ['gs']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
ud.basecmd = "gsutil stat"
def get_gcp_client(self):
from google.cloud import storage
self.gcp_client = storage.Client(project=None)
def download(self, ud, d):
"""
Fetch urls using the GCP API.
Assumes localpath was called first.
"""
logger.debug2(f"Trying to download gs://{ud.host}{ud.path} to {ud.localpath}")
if self.gcp_client is None:
self.get_gcp_client()
bb.fetch2.check_network_access(d, ud.basecmd, f"gs://{ud.host}{ud.path}")
runfetchcmd("%s %s" % (ud.basecmd, f"gs://{ud.host}{ud.path}"), d)
# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")
blob = self.gcp_client.bucket(ud.host).blob(path)
blob.download_to_filename(ud.localpath)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the GCP API
# tool with a little healthy suspicion).
if not os.path.exists(ud.localpath):
raise FetchError(f"The GCP API returned success for gs://{ud.host}{ud.path} but {ud.localpath} doesn't exist?!")
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError(f"The downloaded file for gs://{ud.host}{ud.path} resulted in a zero size file?! Deleting and failing since this isn't right.")
return True
def checkstatus(self, fetch, ud, d):
"""
Check the status of a URL.
"""
logger.debug2(f"Checking status of gs://{ud.host}{ud.path}")
if self.gcp_client is None:
self.get_gcp_client()
bb.fetch2.check_network_access(d, ud.basecmd, f"gs://{ud.host}{ud.path}")
runfetchcmd("%s %s" % (ud.basecmd, f"gs://{ud.host}{ud.path}"), d)
# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")
if self.gcp_client.bucket(ud.host).blob(path).exists() == False:
raise FetchError(f"The GCP API reported that gs://{ud.host}{ud.path} does not exist")
else:
return True
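As a rough standalone sketch of the google-cloud-storage calls the class above relies on (bucket and object names are hypothetical, and ambient credentials are assumed):

from google.cloud import storage

client = storage.Client(project=None)  # picks up ambient credentials/configuration
blob = client.bucket("example-bucket").blob("path/to/archive.tar.gz")
if blob.exists():  # the check performed by checkstatus()
    blob.download_to_filename("archive.tar.gz")  # the transfer performed by download()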


@@ -44,27 +44,13 @@ Supported SRC_URI options are:
- nobranch
Don't check the SHA validation for branch. Set this option for the recipe
referring to commit which is valid in any namespace (branch, tag, ...)
instead of branch.
referring to commit which is valid in tag instead of branch.
The default is "0", set nobranch=1 if needed.
- subpath
Limit the checkout to a specific subpath of the tree.
By default, checkout the whole tree, set subpath=<path> if needed
- destsuffix
The name of the path in which to place the checkout.
By default, the path is git/, set destsuffix=<suffix> if needed
- usehead
For local git:// urls to use the current branch HEAD as the revision for use with
AUTOREV. Implies nobranch.
- lfs
Enable the checkout to use LFS for large files. This will download all LFS files
in the download step, as the unpack step does not have network access.
The default is "1", set lfs=0 to skip.
"""
# Copyright (C) 2005 Richard Purdie
@@ -78,21 +64,15 @@ import fnmatch
import os
import re
import shlex
import shutil
import subprocess
import tempfile
import bb
import bb.progress
from contextlib import contextmanager
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from bb.fetch2 import trusted_network
sha1_re = re.compile(r'^[0-9a-f]{40}$')
slash_re = re.compile(r"/+")
class GitProgressHandler(bb.progress.LineFilterProgressHandler):
"""Extract progress information from git output"""
def __init__(self, d):
@@ -150,9 +130,6 @@ class Git(FetchMethod):
def supports_checksum(self, urldata):
return False
def cleanup_upon_failure(self):
return False
def urldata_init(self, ud, d):
"""
init git specific variable within url data
@@ -164,11 +141,6 @@ class Git(FetchMethod):
ud.proto = 'file'
else:
ud.proto = "git"
if ud.host == "github.com" and ud.proto == "git":
# github stopped supporting git protocol
# https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git
ud.proto = "https"
bb.warn("URL: %s uses git protocol which is no longer supported by github. Please change to ;protocol=https in the url." % ud.url)
if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
@@ -192,18 +164,11 @@ class Git(FetchMethod):
ud.nocheckout = 1
ud.unresolvedrev = {}
branches = ud.parm.get("branch", "").split(',')
if branches == [""] and not ud.nobranch:
bb.warn("URL: %s does not set any branch parameter. The future default branch used by tools and repositories is uncertain and we will therefore soon require this is set in all git urls." % ud.url)
branches = ["master"]
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
ud.noshared = d.getVar("BB_GIT_NOSHARED") == "1"
ud.cloneflags = "-n"
if not ud.noshared:
ud.cloneflags += " -s"
ud.cloneflags = "-s -n"
if ud.bareclone:
ud.cloneflags += " --mirror"
@@ -262,7 +227,7 @@ class Git(FetchMethod):
for name in ud.names:
ud.unresolvedrev[name] = 'HEAD'
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c gc.autoDetach=false -c core.pager=cat -c safe.bareRepository=all"
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
@@ -271,20 +236,20 @@ class Git(FetchMethod):
ud.setup_revisions(d)
for name in ud.names:
# Ensure any revision that doesn't look like a SHA-1 is translated into one
if not sha1_re.match(ud.revisions[name] or ''):
# Ensure anything that doesn't look like a sha256 checksum/revision is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
if ud.revisions[name]:
ud.unresolvedrev[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.').replace(' ','_').replace('(', '_').replace(')', '_'))
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.').replace(' ','_'))
if gitsrcname.startswith('.'):
gitsrcname = gitsrcname[1:]
# For a rebaseable git repo, it is necessary to keep a mirror tar ball
# per revision, so that even if the revision disappears from the
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
# contain the revision
# contains the revision
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
@@ -328,10 +293,7 @@ class Git(FetchMethod):
return ud.clonedir
def need_update(self, ud, d):
return self.clonedir_need_update(ud, d) \
or self.shallow_tarball_need_update(ud) \
or self.tarball_need_update(ud) \
or self.lfs_need_update(ud, d)
return self.clonedir_need_update(ud, d) or self.shallow_tarball_need_update(ud) or self.tarball_need_update(ud)
def clonedir_need_update(self, ud, d):
if not os.path.exists(ud.clonedir):
@@ -343,15 +305,6 @@ class Git(FetchMethod):
return True
return False
def lfs_need_update(self, ud, d):
if self.clonedir_need_update(ud, d):
return True
for name in ud.names:
if not self._lfs_objects_downloaded(ud, d, name, ud.clonedir):
return True
return False
def clonedir_need_shallow_revs(self, ud, d):
for rev in ud.shallow_revs:
try:
@@ -371,16 +324,6 @@ class Git(FetchMethod):
# is not possible
if bb.utils.to_boolean(d.getVar("BB_FETCH_PREMIRRORONLY")):
return True
# If the url is not in trusted network, that is, BB_NO_NETWORK is set to 0
# and BB_ALLOWED_NETWORKS does not contain the host that ud.url uses, then
# we need to try premirrors first as using upstream is destined to fail.
if not trusted_network(d, ud.url):
return True
# the following check is to ensure incremental fetch in downloads, this is
# because the premirror might be old and does not contain the new rev required,
# and this will cause a total removal and new clone. So if we can reach to
# network, we prefer upstream over premirror, though the premirror might contain
# the new rev.
if os.path.exists(ud.clonedir):
return False
return True
@@ -394,54 +337,17 @@ class Git(FetchMethod):
if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
ud.localpath = ud.fullshallow
return
elif os.path.exists(ud.fullmirror) and self.need_update(ud, d):
if not os.path.exists(ud.clonedir):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
else:
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=tmpdir)
output = runfetchcmd("%s remote" % ud.basecmd, d, quiet=True, workdir=ud.clonedir)
if 'mirror' in output:
runfetchcmd("%s remote rm mirror" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote add --mirror=fetch mirror %s" % (ud.basecmd, tmpdir), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --update-head-ok --progress mirror " % (ud.basecmd)
runfetchcmd(fetch_cmd, d, workdir=ud.clonedir)
elif os.path.exists(ud.fullmirror) and not os.path.exists(ud.clonedir):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
repourl = self._get_repo_url(ud)
needs_clone = False
if os.path.exists(ud.clonedir):
# The directory may exist, but not be the top level of a bare git
# repository in which case it needs to be deleted and re-cloned.
try:
# Since clones can be bare, use --absolute-git-dir instead of --show-toplevel
output = runfetchcmd("LANG=C %s rev-parse --absolute-git-dir" % ud.basecmd, d, workdir=ud.clonedir)
toplevel = output.rstrip()
if not bb.utils.path_is_descendant(toplevel, ud.clonedir):
logger.warning("Top level directory '%s' is not a descendant of '%s'. Re-cloning", toplevel, ud.clonedir)
needs_clone = True
except bb.fetch2.FetchError as e:
logger.warning("Unable to get top level for %s (not a git directory?): %s", ud.clonedir, e)
needs_clone = True
except FileNotFoundError as e:
logger.warning("%s", e)
needs_clone = True
if needs_clone:
shutil.rmtree(ud.clonedir)
else:
needs_clone = True
# If the repo still doesn't exist, fallback to cloning it
if needs_clone:
# We do this since git will use a "-l" option automatically for local urls where possible,
# but it doesn't work when git/objects is a symlink, only works when it is a directory.
if not os.path.exists(ud.clonedir):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl_path = repourl[7:]
objects = os.path.join(repourl_path, 'objects')
if os.path.isdir(objects) and not os.path.islink(objects):
repourl = repourl_path
repourl = repourl[7:]
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, shlex.quote(repourl), ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
@@ -455,11 +361,7 @@ class Git(FetchMethod):
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=ud.clonedir)
if ud.nobranch:
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
else:
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/heads/*:refs/heads/* refs/tags/*:refs/tags/*" % (ud.basecmd, shlex.quote(repourl))
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
@@ -482,14 +384,15 @@ class Git(FetchMethod):
if missing_rev:
raise bb.fetch2.FetchError("Unable to find revision %s even from upstream" % missing_rev)
if self.lfs_need_update(ud, d):
if self._contains_lfs(ud, d, ud.clonedir) and self._need_lfs(ud):
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
# of all LFS blobs needed at the srcrev.
# of all LFS blobs needed at the the srcrev.
#
# It would be nice to just do this inline here by running 'git-lfs fetch'
# on the bare clonedir, but that operation requires a working copy on some
# releases of Git LFS.
with tempfile.TemporaryDirectory(dir=d.getVar('DL_DIR')) as tmpdir:
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
try:
# Do the checkout. This implicitly involves a Git LFS fetch.
Git.unpack(self, ud, tmpdir, d)
@@ -505,24 +408,12 @@ class Git(FetchMethod):
# Only do this if the unpack resulted in a .git/lfs directory being
# created; this only happens if at least one blob needed to be
# downloaded.
if os.path.exists(os.path.join(ud.destdir, ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/.git" % ud.destdir)
if os.path.exists(os.path.join(tmpdir, "git", ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/git/.git" % tmpdir)
finally:
bb.utils.remove(tmpdir, recurse=True)
def build_mirror_data(self, ud, d):
# Create as a temp file and move atomically into position to avoid races
@contextmanager
def create_atomic(filename):
fd, tfile = tempfile.mkstemp(dir=os.path.dirname(filename))
try:
yield tfile
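# Read the current umask non-destructively: set a throwaway value, then restore it.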
umask = os.umask(0o666)
os.umask(umask)
os.chmod(tfile, (0o666 & ~umask))
os.rename(tfile, filename)
finally:
os.close(fd)
if ud.shallow and ud.write_shallow_tarballs:
if not os.path.exists(ud.fullshallow):
if os.path.islink(ud.fullshallow):
@@ -533,8 +424,7 @@ class Git(FetchMethod):
self.clone_shallow_local(ud, shallowclone, d)
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullshallow) as tfile:
runfetchcmd("tar -czf %s ." % tfile, d, workdir=shallowclone)
runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
runfetchcmd("touch %s.done" % ud.fullshallow, d)
finally:
bb.utils.remove(tempdir, recurse=True)
@@ -543,11 +433,7 @@ class Git(FetchMethod):
os.unlink(ud.fullmirror)
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullmirror) as tfile:
mtime = runfetchcmd("{} log --all -1 --format=%cD".format(ud.basecmd), d,
quiet=True, workdir=ud.clonedir)
runfetchcmd("tar -czf %s --owner oe:0 --group oe:0 --mtime \"%s\" ."
% (tfile, mtime), d, workdir=ud.clonedir)
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
def clone_shallow_local(self, ud, dest, d):
@@ -609,31 +495,18 @@ class Git(FetchMethod):
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
subdir = ud.parm.get("subdir")
subpath = ud.parm.get("subpath")
readpathspec = ""
def_destsuffix = "git/"
if subpath:
readpathspec = ":%s" % subpath
def_destsuffix = "%s/" % os.path.basename(subpath.rstrip('/'))
if subdir:
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
if os.path.isabs(subdir):
if not os.path.realpath(subdir).startswith(os.path.realpath(destdir)):
raise bb.fetch2.UnpackError("subdir argument isn't a subdirectory of unpack root %s" % destdir, ud.url)
destdir = subdir
else:
destdir = os.path.join(destdir, subdir)
def_destsuffix = ""
subdir = ud.parm.get("subpath", "")
if subdir != "":
readpathspec = ":%s" % subdir
def_destsuffix = "%s/" % os.path.basename(subdir.rstrip('/'))
else:
readpathspec = ""
def_destsuffix = "git/"
destsuffix = ud.parm.get("destsuffix", def_destsuffix)
destdir = ud.destdir = os.path.join(destdir, destsuffix)
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
if not ud.bareclone:
ud.unpack_tracer.unpack("git", destdir)
need_lfs = self._need_lfs(ud)
@@ -643,12 +516,13 @@ class Git(FetchMethod):
source_found = False
source_error = []
clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
if clonedir_is_up_to_date:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
source_found = True
else:
source_error.append("clone directory not available or not up to date: " + ud.clonedir)
if not source_found:
clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
if clonedir_is_up_to_date:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
source_found = True
else:
source_error.append("clone directory not available or not up to date: " + ud.clonedir)
if not source_found:
if ud.shallow:
@@ -672,11 +546,9 @@ class Git(FetchMethod):
raise bb.fetch2.FetchError("Repository %s has LFS content, install git-lfs on host to download (or set lfs=0 to ignore it)" % (repourl))
elif not need_lfs:
bb.note("Repository %s has LFS content but it is not being fetched" % (repourl))
else:
runfetchcmd("%s lfs install --local" % ud.basecmd, d, workdir=destdir)
if not ud.nocheckout:
if subpath:
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
workdir=destdir)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
@@ -725,35 +597,6 @@ class Git(FetchMethod):
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
return output.split()[0] != "0"
def _lfs_objects_downloaded(self, ud, d, name, wd):
"""
Verifies whether the LFS objects for requested revisions have already been downloaded
"""
# Bail out early if this repository doesn't use LFS
if not self._need_lfs(ud) or not self._contains_lfs(ud, d, wd):
return True
# The Git LFS specification specifies ([1]) the LFS folder layout so it should be safe to check for file
# existence.
# [1] https://github.com/git-lfs/git-lfs/blob/main/docs/spec.md#intercepting-git
cmd = "%s lfs ls-files -l %s" \
% (ud.basecmd, ud.revisions[name])
output = runfetchcmd(cmd, d, quiet=True, workdir=wd).rstrip()
# Do not do any further matching if no objects are managed by LFS
if not output:
return True
# Match all lines beginning with the hexadecimal OID
oid_regex = re.compile("^(([a-fA-F0-9]{2})([a-fA-F0-9]{2})[A-Fa-f0-9]+)")
for line in output.split("\n"):
oid = re.search(oid_regex, line)
if not oid:
bb.warn("git lfs ls-files output '%s' did not match expected format." % line)
continue
if not os.path.exists(os.path.join(wd, "lfs", "objects", oid.group(2), oid.group(3), oid.group(1))):
return False
return True
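The existence check above relies on the two-level fan-out mandated by the LFS spec: an object with OID <oid> lives at lfs/objects/<oid[0:2]>/<oid[2:4]>/<oid>. A quick sketch with a hypothetical OID:

import os

oid = "4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393"  # hypothetical
print(os.path.join("lfs", "objects", oid[0:2], oid[2:4], oid))
# -> lfs/objects/4d/7a/4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393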
def _need_lfs(self, ud):
return ud.parm.get("lfs", "1") == "1"
@@ -762,11 +605,13 @@ class Git(FetchMethod):
Check if the repository has 'lfs' (large file) content
"""
if ud.nobranch:
# If no branch is specified, use the current git commit
refname = self._build_revision(ud, d, ud.names[0])
elif wd == ud.clonedir:
# The bare clonedir doesn't use the remote names; it has the branch immediately.
if not ud.nobranch:
branchname = ud.branches[ud.names[0]]
else:
branchname = "master"
# The bare clonedir doesn't use the remote names; it has the branch immediately.
if wd == ud.clonedir:
refname = ud.branches[ud.names[0]]
else:
refname = "origin/%s" % ud.branches[ud.names[0]]
@@ -809,6 +654,7 @@ class Git(FetchMethod):
Return a unique key for the url
"""
# Collapse adjacent slashes
slash_re = re.compile(r"/+")
return "git:" + ud.host + slash_re.sub(".", ud.path) + ud.unresolvedrev[name]
def _lsremote(self, ud, d, search):
@@ -841,12 +687,6 @@ class Git(FetchMethod):
"""
Compute the HEAD revision for the url
"""
if not d.getVar("__BBSRCREV_SEEN"):
raise bb.fetch2.FetchError("Recipe uses a floating tag/branch '%s' for repo '%s' without a fixed SRCREV yet doesn't call bb.fetch2.get_srcrev() (use SRCPV in PV for OE)." % (ud.unresolvedrev[name], ud.host+ud.path))
# Ensure we mark as not cached
bb.fetch2.mark_recipe_nocache(d)
output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fallback to other form
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:
@@ -871,42 +711,38 @@ class Git(FetchMethod):
"""
pupver = ('', '')
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or r"(?P<pver>([0-9][\.|_]?)+)")
try:
output = self._lsremote(ud, d, "refs/tags/*")
except (bb.fetch2.FetchError, bb.fetch2.NetworkAccess) as e:
bb.note("Could not list remote: %s" % str(e))
return pupver
rev_tag_re = re.compile(r"([0-9a-f]{40})\s+refs/tags/(.*)")
pver_re = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or r"(?P<pver>([0-9][\.|_]?)+)")
nonrel_re = re.compile(r"(alpha|beta|rc|final)+")
verstring = ""
revision = ""
for line in output.split("\n"):
if not line:
break
m = rev_tag_re.match(line)
if not m:
continue
(revision, tag) = m.groups()
tag_head = line.split("/")[-1]
# Ignore non-released branches
if nonrel_re.search(tag):
m = re.search(r"(alpha|beta|rc|final)+", tag_head)
if m:
continue
# search for version in the line
m = pver_re.search(tag)
if not m:
tag = tagregex.search(tag_head)
if tag is None:
continue
pver = m.group('pver').replace("_", ".")
tag = tag.group('pver')
tag = tag.replace("_", ".")
if verstring and bb.utils.vercmp(("0", pver, ""), ("0", verstring, "")) < 0:
if verstring and bb.utils.vercmp(("0", tag, ""), ("0", verstring, "")) < 0:
continue
verstring = pver
verstring = tag
revision = line.split()[0]
pupver = (verstring, revision)
return pupver


@@ -88,9 +88,9 @@ class GitSM(Git):
subrevision[m] = module_hash.split()[2]
# Convert relative to absolute uri based on parent uri
if uris[m].startswith('..') or uris[m].startswith('./'):
if uris[m].startswith('..'):
newud = copy.copy(ud)
newud.path = os.path.normpath(os.path.join(newud.path, uris[m]))
newud.path = os.path.realpath(os.path.join(newud.path, uris[m]))
uris[m] = Git._get_repo_url(self, newud)
for module in submodules:
@@ -115,21 +115,10 @@ class GitSM(Git):
# This has to be a file reference
proto = "file"
url = "gitsm://" + uris[module]
if url.endswith("{}{}".format(ud.host, ud.path)):
raise bb.fetch2.FetchError("Submodule refers to the parent repository. This will cause deadlock situation in current version of Bitbake." \
"Consider using git fetcher instead.")
url += ';protocol=%s' % proto
url += ";name=%s" % module
url += ";subpath=%s" % module
url += ";nobranch=1"
url += ";lfs=%s" % self._need_lfs(ud)
# Note that adding "user=" here to give credentials to the
# submodule is not supported. Since using SRC_URI to give git://
# URL a password is not supported, one have to use one of the
# recommended way (eg. ~/.netrc or SSH config) which does specify
# the user (See comment in git.py).
# So, we will not take patches adding "user=" support here.
ld = d.createCopy()
# Not necessary to set SRC_URI, since we're passing the URI to
@@ -151,6 +140,16 @@ class GitSM(Git):
if Git.need_update(self, ud, d):
return True
try:
# Check for the nugget dropped by the download operation
known_srcrevs = runfetchcmd("%s config --get-all bitbake.srcrev" % \
(ud.basecmd), d, workdir=ud.clonedir)
if ud.revisions[ud.names[0]] in known_srcrevs.split():
return False
except bb.fetch2.FetchError:
pass
need_update_list = []
def need_update_submodule(ud, url, module, modpath, workdir, d):
url += ";bareclone=1;nobranch=1"
@@ -173,8 +172,13 @@ class GitSM(Git):
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
if len(need_update_list) == 0:
# We already have the required commits of all submodules. Drop
# a nugget so we don't need to check again.
runfetchcmd("%s config --add bitbake.srcrev %s" % \
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
if need_update_list:
if len(need_update_list) > 0:
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
return True
@@ -205,6 +209,9 @@ class GitSM(Git):
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, download_submodule, d)
# Drop a nugget for the srcrev we've fetched (used by need_update)
runfetchcmd("%s config --add bitbake.srcrev %s" % \
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
def unpack(self, ud, destdir, d):
def unpack_submodules(ud, url, module, modpath, workdir, d):
@@ -218,10 +225,6 @@ class GitSM(Git):
try:
newfetch = Fetch([url], d, cache=False)
# modpath is needed by unpack tracer to calculate submodule
# checkout dir
new_ud = newfetch.ud[url]
new_ud.modpath = modpath
newfetch.unpack(root=os.path.dirname(os.path.join(repo_conf, 'modules', module)))
except Exception as e:
logger.error('gitsm: submodule unpack failed: %s %s' % (type(e).__name__, str(e)))
@@ -247,12 +250,10 @@ class GitSM(Git):
ret = self.process_submodules(ud, ud.destdir, unpack_submodules, d)
if not ud.bareclone and ret:
# All submodules should already be downloaded and configured in the tree. This simply
# sets up the configuration and checks out the files. The main project config should
# remain unmodified, and no download from the internet should occur. As such, lfs smudge
# should also be skipped as these files were already smudged in the fetch stage if lfs
# was enabled.
runfetchcmd("GIT_LFS_SKIP_SMUDGE=1 %s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
# All submodules should already be downloaded and configured in the tree. This simply sets
# up the configuration and checks out the files. The main project config should remain
# unmodified, and no download from the internet should occur.
runfetchcmd("%s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
def implicit_urldata(self, ud, d):
import shutil, subprocess, tempfile


@@ -242,7 +242,6 @@ class Hg(FetchMethod):
revflag = "-r %s" % ud.revision
subdir = ud.parm.get("destsuffix", ud.module)
codir = "%s/%s" % (destdir, subdir)
ud.unpack_tracer.unpack("hg", codir)
scmdata = ud.parm.get("scmdata", "")
if scmdata != "nokeep":


@@ -41,9 +41,9 @@ class Local(FetchMethod):
"""
Return the local filename of a given url assuming a successful fetch.
"""
return self.localfile_searchpaths(urldata, d)[-1]
return self.localpaths(urldata, d)[-1]
def localfile_searchpaths(self, urldata, d):
def localpaths(self, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""
@@ -51,14 +51,18 @@ class Local(FetchMethod):
path = urldata.decodedurl
newpath = path
if path[0] == "/":
logger.debug2("Using absolute %s" % (path))
return [path]
filespath = d.getVar('FILESPATH')
if filespath:
logger.debug2("Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
logger.debug2("Using %s for %s" % (newpath, path))
searched.extend(hist)
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
logger.debug2("Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
return searched
return searched
def need_update(self, ud, d):
@@ -74,7 +78,9 @@ class Local(FetchMethod):
filespath = d.getVar('FILESPATH')
if filespath:
locations = filespath.split(":")
msg = "Unable to find file " + urldata.url + " anywhere to download to " + urldata.localpath + ". The paths that were searched were:\n " + "\n ".join(locations)
locations.append(d.getVar("DL_DIR"))
msg = "Unable to find file " + urldata.url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
raise FetchError(msg)
return True


@@ -44,24 +44,17 @@ def npm_package(package):
"""Convert the npm package name to remove unsupported character"""
# Scoped package names (with the @) use the same naming convention
# as the 'npm pack' command.
name = re.sub("/", "-", package)
name = name.lower()
name = re.sub(r"[^\-a-z0-9]", "", name)
name = name.strip("-")
return name
if package.startswith("@"):
return re.sub("/", "-", package[1:])
return package
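Side by side, the two variants of npm_package shown in this hunk behave as follows (package name hypothetical; the functions are restated here only for illustration):

import re

def npm_package_new(package):
    # current variant: follow 'npm pack' naming for scoped packages
    name = re.sub("/", "-", package).lower()
    name = re.sub(r"[^\-a-z0-9]", "", name)
    return name.strip("-")

def npm_package_old(package):
    # previous variant: only strip the leading '@' and join with '-'
    if package.startswith("@"):
        return re.sub("/", "-", package[1:])
    return package

print(npm_package_new("@Scope/My_Package"))  # scope-mypackage
print(npm_package_old("@Scope/My_Package"))  # Scope-My_Package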
def npm_filename(package, version):
"""Get the filename of a npm package"""
return npm_package(package) + "-" + version + ".tgz"
def npm_localfile(package, version=None):
def npm_localfile(package, version):
"""Get the local filename of a npm package"""
if version is not None:
filename = npm_filename(package, version)
else:
filename = package
return os.path.join("npm2", filename)
return os.path.join("npm2", npm_filename(package, version))
def npm_integrity(integrity):
"""
@@ -76,52 +69,41 @@ def npm_unpack(tarball, destdir, d):
bb.utils.mkdirhier(destdir)
cmd = "tar --extract --gzip --file=%s" % shlex.quote(tarball)
cmd += " --no-same-owner"
cmd += " --delay-directory-restore"
cmd += " --strip-components=1"
runfetchcmd(cmd, d, workdir=destdir)
runfetchcmd("chmod -R +X '%s'" % (destdir), d, quiet=True, workdir=destdir)
class NpmEnvironment(object):
"""
Using an npm config file seems more reliable than using cli arguments.
This class allows creating a controlled environment for npm commands.
"""
def __init__(self, d, configs=[], npmrc=None):
def __init__(self, d, configs=None):
self.d = d
self.user_config = tempfile.NamedTemporaryFile(mode="w", buffering=1)
for key, value in configs:
self.user_config.write("%s=%s\n" % (key, value))
if npmrc:
self.global_config_name = npmrc
else:
self.global_config_name = "/dev/null"
def __del__(self):
if self.user_config:
self.user_config.close()
self.configs = configs
def run(self, cmd, args=None, configs=None, workdir=None):
"""Run npm command in a controlled environment"""
with tempfile.TemporaryDirectory() as tmpdir:
d = bb.data.createCopy(self.d)
d.setVar("PATH", d.getVar("PATH")) # PATH might contain $HOME - evaluate it before patching
d.setVar("HOME", tmpdir)
cfgfile = os.path.join(tmpdir, "npmrc")
if not workdir:
workdir = tmpdir
def _run(cmd):
cmd = "NPM_CONFIG_USERCONFIG=%s " % (self.user_config.name) + cmd
cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % (self.global_config_name) + cmd
cmd = "NPM_CONFIG_USERCONFIG=%s " % cfgfile + cmd
cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % cfgfile + cmd
return runfetchcmd(cmd, d, workdir=workdir)
if self.configs:
for key, value in self.configs:
_run("npm config set %s %s" % (key, shlex.quote(value)))
if configs:
bb.warn("Use of configs argument of NpmEnvironment.run() function"
" is deprecated. Please use args argument instead.")
for key, value in configs:
cmd += " --%s=%s" % (key, shlex.quote(value))
_run("npm config set %s %s" % (key, shlex.quote(value)))
if args:
for key, value in args:
@@ -160,12 +142,12 @@ class Npm(FetchMethod):
raise ParameterError("Invalid 'version' parameter", ud.url)
# Extract the 'registry' part of the url
ud.registry = re.sub(r"^npm://", "https://", ud.url.split(";")[0])
ud.registry = re.sub(r"^npm://", "http://", ud.url.split(";")[0])
# Using the 'downloadfilename' parameter as local filename
# or the npm package name.
if "downloadfilename" in ud.parm:
ud.localfile = npm_localfile(d.expand(ud.parm["downloadfilename"]))
ud.localfile = d.expand(ud.parm["downloadfilename"])
else:
ud.localfile = npm_localfile(ud.package, ud.version)
@@ -183,14 +165,14 @@ class Npm(FetchMethod):
def _resolve_proxy_url(self, ud, d):
def _npm_view():
args = []
args.append(("json", "true"))
args.append(("registry", ud.registry))
configs = []
configs.append(("json", "true"))
configs.append(("registry", ud.registry))
pkgver = shlex.quote(ud.package + "@" + ud.version)
cmd = ud.basecmd + " view %s" % pkgver
env = NpmEnvironment(d)
check_network_access(d, cmd, ud.registry)
view_string = env.run(cmd, args=args)
view_string = env.run(cmd, configs=configs)
if not view_string:
raise FetchError("Unavailable package %s" % pkgver, ud.url)
@@ -298,7 +280,6 @@ class Npm(FetchMethod):
destsuffix = ud.parm.get("destsuffix", "npm")
destdir = os.path.join(rootdir, destsuffix)
npm_unpack(ud.localpath, destdir, d)
ud.unpack_tracer.unpack("npm", destdir)
def clean(self, ud, d):
"""Clean any existing full or partial download"""


@@ -24,14 +24,11 @@ import bb
from bb.fetch2 import Fetch
from bb.fetch2 import FetchMethod
from bb.fetch2 import ParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import URI
from bb.fetch2.npm import npm_integrity
from bb.fetch2.npm import npm_localfile
from bb.fetch2.npm import npm_unpack
from bb.utils import is_semver
from bb.utils import lockfile
from bb.utils import unlockfile
def foreach_dependencies(shrinkwrap, callback=None, dev=False):
"""
@@ -41,9 +38,8 @@ def foreach_dependencies(shrinkwrap, callback=None, dev=False):
with:
name = the package name (string)
params = the package parameters (dictionary)
destdir = the destination of the package (string)
deptree = the package dependency tree (array of strings)
"""
# For handling old style dependencies entries in shrinkwrap files
def _walk_deps(deps, deptree):
for name in deps:
subtree = [*deptree, name]
@@ -53,22 +49,9 @@ def foreach_dependencies(shrinkwrap, callback=None, dev=False):
continue
elif deps[name].get("bundled", False):
continue
destsubdirs = [os.path.join("node_modules", dep) for dep in subtree]
destsuffix = os.path.join(*destsubdirs)
callback(name, deps[name], destsuffix)
callback(name, deps[name], subtree)
# packages entry means new style shrinkwrap file, else use dependencies
packages = shrinkwrap.get("packages", None)
if packages is not None:
for package in packages:
if package != "":
name = package.split('node_modules/')[-1]
package_infos = packages.get(package, {})
if dev == False and package_infos.get("dev", False):
continue
callback(name, package_infos, package)
else:
_walk_deps(shrinkwrap.get("dependencies", {}), [])
_walk_deps(shrinkwrap.get("dependencies", {}), [])
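To see what the new-style "packages" branch iterates over, here is a sketch with a hypothetical lockfile-v2 style dictionary:

# Hypothetical new-style (lockfile v2/v3) shrinkwrap content.
shrinkwrap = {
    "packages": {
        "": {"name": "app"},                                # root entry, skipped
        "node_modules/foo": {"version": "1.0.0"},
        "node_modules/foo/node_modules/bar": {"dev": True}, # dev dep, skipped unless dev=True
    }
}

for package, infos in shrinkwrap["packages"].items():
    if package == "" or infos.get("dev", False):
        continue
    # the name is whatever follows the last 'node_modules/' prefix
    print(package.split("node_modules/")[-1], "<-", package)
# prints: foo <- node_modules/foo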
class NpmShrinkWrap(FetchMethod):
"""Class to fetch all package from a shrinkwrap file"""
@@ -89,22 +72,19 @@ class NpmShrinkWrap(FetchMethod):
# Resolve the dependencies
ud.deps = []
def _resolve_dependency(name, params, destsuffix):
def _resolve_dependency(name, params, deptree):
url = None
localpath = None
extrapaths = []
unpack = True
destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
destsuffix = os.path.join(*destsubdirs)
integrity = params.get("integrity", None)
resolved = params.get("resolved", None)
version = params.get("version", None)
# Handle registry sources
if is_semver(version) and integrity:
# Handle duplicate dependencies without url
if not resolved:
return
if is_semver(version) and resolved and integrity:
localfile = npm_localfile(name, version)
uri = URI(resolved)
@@ -129,7 +109,7 @@ class NpmShrinkWrap(FetchMethod):
# Handle http tarball sources
elif version.startswith("http") and integrity:
localfile = npm_localfile(os.path.basename(version))
localfile = os.path.join("npm2", os.path.basename(version))
uri = URI(version)
uri.params["downloadfilename"] = localfile
@@ -141,28 +121,8 @@ class NpmShrinkWrap(FetchMethod):
localpath = os.path.join(d.getVar("DL_DIR"), localfile)
# Handle local tarball and link sources
elif version.startswith("file"):
localpath = version[5:]
if not version.endswith(".tgz"):
unpack = False
# Handle git sources
elif version.startswith(("git", "bitbucket","gist")) or (
not version.endswith((".tgz", ".tar", ".tar.gz"))
and not version.startswith((".", "@", "/"))
and "/" in version
):
if version.startswith("github:"):
version = "git+https://github.com/" + version[len("github:"):]
elif version.startswith("gist:"):
version = "git+https://gist.github.com/" + version[len("gist:"):]
elif version.startswith("bitbucket:"):
version = "git+https://bitbucket.org/" + version[len("bitbucket:"):]
elif version.startswith("gitlab:"):
version = "git+https://gitlab.com/" + version[len("gitlab:"):]
elif not version.startswith(("git+","git:")):
version = "git+https://github.com/" + version
elif version.startswith("git"):
regex = re.compile(r"""
^
git\+
@@ -188,17 +148,15 @@ class NpmShrinkWrap(FetchMethod):
url = str(uri)
# local tarball sources and local link sources are unsupported
else:
raise ParameterError("Unsupported dependency: %s" % name, ud.url)
# name is needed by unpack tracer for module mapping
ud.deps.append({
"name": name,
"url": url,
"localpath": localpath,
"extrapaths": extrapaths,
"destsuffix": destsuffix,
"unpack": unpack,
})
try:
@@ -219,23 +177,17 @@ class NpmShrinkWrap(FetchMethod):
# This fetcher resolves multiple URIs from a shrinkwrap file and then
# forwards it to a proxy fetcher. The management of the donestamp file,
# the lockfile and the checksums are forwarded to the proxy fetcher.
shrinkwrap_urls = [dep["url"] for dep in ud.deps if dep["url"]]
if shrinkwrap_urls:
ud.proxy = Fetch(shrinkwrap_urls, data)
ud.proxy = Fetch([dep["url"] for dep in ud.deps], data)
ud.needdonestamp = False
@staticmethod
def _foreach_proxy_method(ud, handle):
returns = []
# Check if there are dependencies before trying to fetch them
if len(ud.deps) > 0:
for proxy_url in ud.proxy.urls:
proxy_ud = ud.proxy.ud[proxy_url]
proxy_d = ud.proxy.d
proxy_ud.setup_localpath(proxy_d)
lf = lockfile(proxy_ud.lockfile)
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
unlockfile(lf)
for proxy_url in ud.proxy.urls:
proxy_ud = ud.proxy.ud[proxy_url]
proxy_d = ud.proxy.d
proxy_ud.setup_localpath(proxy_d)
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
return returns
def verify_donestamp(self, ud, d):
@@ -272,7 +224,6 @@ class NpmShrinkWrap(FetchMethod):
destsuffix = ud.parm.get("destsuffix")
if destsuffix:
destdir = os.path.join(rootdir, destsuffix)
ud.unpack_tracer.unpack("npm-shrinkwrap", destdir)
bb.utils.mkdirhier(destdir)
bb.utils.copyfile(ud.shrinkwrap_file,
@@ -286,16 +237,7 @@ class NpmShrinkWrap(FetchMethod):
for dep in manual:
depdestdir = os.path.join(destdir, dep["destsuffix"])
if dep["url"]:
npm_unpack(dep["localpath"], depdestdir, d)
else:
depsrcdir= os.path.join(destdir, dep["localpath"])
if dep["unpack"]:
npm_unpack(depsrcdir, depdestdir, d)
else:
bb.utils.mkdirhier(depdestdir)
cmd = 'cp -fpPRH "%s/." .' % (depsrcdir)
runfetchcmd(cmd, d, workdir=depdestdir)
npm_unpack(dep["localpath"], depdestdir, d)
def clean(self, ud, d):
"""Clean any existing full or partial download"""


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
"""
@@ -11,7 +9,6 @@ Based on the svn "Fetch" implementation.
import logging
import os
import re
import bb
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
@@ -39,7 +36,6 @@ class Osc(FetchMethod):
# Create paths to osc checkouts
oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc")
relpath = self._strip_leading_slashes(ud.path)
ud.oscdir = oscdir
ud.pkgdir = os.path.join(oscdir, ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
@@ -47,13 +43,13 @@ class Osc(FetchMethod):
ud.revision = ud.parm['rev']
else:
pv = d.getVar("PV", False)
rev = bb.fetch2.srcrev_internal_helper(ud, d, '')
rev = bb.fetch2.srcrev_internal_helper(ud, d)
if rev:
ud.revision = rev
else:
ud.revision = ""
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), relpath.replace('/', '.'), ud.revision))
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision))
def _buildosccommand(self, ud, d, command):
"""
@@ -63,49 +59,26 @@ class Osc(FetchMethod):
basecmd = d.getVar("FETCHCMD_osc") or "/usr/bin/env osc"
proto = ud.parm.get('protocol', 'https')
proto = ud.parm.get('protocol', 'ocs')
options = []
config = "-c %s" % self.generate_config(ud, d)
if getattr(ud, 'revision', ''):
if ud.revision:
options.append("-r %s" % ud.revision)
coroot = self._strip_leading_slashes(ud.path)
if command == "fetch":
osccmd = "%s %s -A %s://%s co %s/%s %s" % (basecmd, config, proto, ud.host, coroot, ud.module, " ".join(options))
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
elif command == "update":
osccmd = "%s %s -A %s://%s up %s" % (basecmd, config, proto, ud.host, " ".join(options))
elif command == "api_source":
osccmd = "%s %s -A %s://%s api source/%s/%s" % (basecmd, config, proto, ud.host, coroot, ud.module)
osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
else:
raise FetchError("Invalid osc command %s" % command, ud.url)
return osccmd
def _latest_revision(self, ud, d, name):
"""
Fetch latest revision for the given package
"""
api_source_cmd = self._buildosccommand(ud, d, "api_source")
output = runfetchcmd(api_source_cmd, d)
match = re.match(r'<directory ?.* rev="(\d+)".*>', output)
if match is None:
raise FetchError("Unable to parse osc response", ud.url)
return match.groups()[0]
def _revision_key(self, ud, d, name):
"""
Return a unique key for the url
"""
# Collapse adjacent slashes
slash_re = re.compile(r"/+")
rev = getattr(ud, 'revision', "latest")
return "osc:%s%s.%s.%s" % (ud.host, slash_re.sub(".", ud.path), name, rev)
def download(self, ud, d):
"""
Fetch url
@@ -113,7 +86,7 @@ class Osc(FetchMethod):
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(ud.moddir, os.R_OK):
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
@@ -141,23 +114,20 @@ class Osc(FetchMethod):
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(ud.oscdir, "oscrc")
if not os.path.exists(ud.oscdir):
bb.utils.mkdirhier(ud.oscdir)
config_path = os.path.join(d.getVar('OSCDIR'), "oscrc")
if (os.path.exists(config_path)):
os.remove(config_path)
f = open(config_path, 'w')
proto = ud.parm.get('protocol', 'https')
f.write("[general]\n")
f.write("apiurl = %s://%s\n" % (proto, ud.host))
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % d.getVar('WORKDIR'))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST"))
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s://%s]\n" % (proto, ud.host))
f.write("[%s]\n" % ud.host)
f.write("user = %s\n" % ud.parm["user"])
f.write("pass = %s\n" % ud.parm["pswd"])
f.close()


@@ -134,7 +134,7 @@ class Perforce(FetchMethod):
ud.setup_revisions(d)
ud.localfile = d.expand('%s_%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, cleanedmodule, ud.revision))
ud.localfile = d.expand('%s_%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, cleandedmodule, ud.revision))
def _buildp4command(self, ud, d, command, depot_filename=None):
"""


@@ -18,47 +18,10 @@ The aws tool must be correctly installed and configured prior to use.
import os
import bb
import urllib.request, urllib.parse, urllib.error
import re
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
def convertToBytes(value, unit):
value = float(value)
if (unit == "KiB"):
value = value*1024.0;
elif (unit == "MiB"):
value = value*1024.0*1024.0;
elif (unit == "GiB"):
value = value*1024.0*1024.0*1024.0;
return value
class S3ProgressHandler(bb.progress.LineFilterProgressHandler):
"""
Extract progress information from s3 cp output, e.g.:
Completed 5.1 KiB/8.8 GiB (12.0 MiB/s) with 1 file(s) remaining
"""
def __init__(self, d):
super(S3ProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def writeline(self, line):
percs = re.findall(r'^Completed (\d+.{0,1}\d*) (\w+)\/(\d+.{0,1}\d*) (\w+) (\(.+\)) with\s+', line)
if percs:
completed = (percs[-1][0])
completedUnit = (percs[-1][1])
total = (percs[-1][2])
totalUnit = (percs[-1][3])
completed = convertToBytes(completed, completedUnit)
total = convertToBytes(total, totalUnit)
progress = (completed/total)*100.0
rate = percs[-1][4]
self.update(progress, rate)
return False
return True
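Running the handler's regex over the sample line from the docstring shows the values it extracts; a quick check of the arithmetic (unit factors restated, only KiB/MiB/GiB shown):

import re

line = "Completed 5.1 KiB/8.8 GiB (12.0 MiB/s) with 1 file(s) remaining"
m = re.findall(r'^Completed (\d+.{0,1}\d*) (\w+)\/(\d+.{0,1}\d*) (\w+) (\(.+\)) with\s+', line)
completed, cunit, total, tunit, rate = m[-1]
units = {"KiB": 1024.0, "MiB": 1024.0 ** 2, "GiB": 1024.0 ** 3}
progress = float(completed) * units[cunit] / (float(total) * units[tunit]) * 100.0
print("%.6f%% at %s" % (progress, rate))  # 0.000055% at (12.0 MiB/s)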
class S3(FetchMethod):
"""Class to fetch urls via 'aws s3'"""
@@ -89,9 +52,7 @@ class S3(FetchMethod):
cmd = '%s cp s3://%s%s %s' % (ud.basecmd, ud.host, ud.path, ud.localpath)
bb.fetch2.check_network_access(d, cmd, ud.url)
progresshandler = S3ProgressHandler(d)
runfetchcmd(cmd, d, False, log=progresshandler)
runfetchcmd(cmd, d)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the aws cli


@@ -103,7 +103,7 @@ class SFTP(FetchMethod):
if path[:3] == '/~/':
path = path[3:]
remote = '"%s%s:%s"' % (user, urlo.hostname, path)
remote = '%s%s:%s' % (user, urlo.hostname, path)
cmd = '%s %s %s %s' % (basecmd, port, remote, lpath)


@@ -32,7 +32,6 @@ IETF secsh internet draft:
import re, os
from bb.fetch2 import check_network_access, FetchMethod, ParameterError, runfetchcmd
import urllib
__pattern__ = re.compile(r'''
@@ -41,9 +40,9 @@ __pattern__ = re.compile(r'''
( # Optional username/password block
(?P<user>\S+) # username
(:(?P<pass>\S+))? # colon followed by the password (optional)
)?
(?P<cparam>(;[^;]+)*)? # connection parameters block (optional)
@
)?
(?P<host>\S+?) # non-greedy match of the host
(:(?P<port>[0-9]+))? # colon followed by the port (optional)
/
@@ -71,7 +70,6 @@ class SSH(FetchMethod):
"git:// prefix with protocol=ssh", urldata.url)
m = __pattern__.match(urldata.url)
path = m.group('path')
path = urllib.parse.unquote(path)
host = m.group('host')
urldata.localpath = os.path.join(d.getVar('DL_DIR'),
os.path.basename(os.path.normpath(path)))
@@ -98,11 +96,6 @@ class SSH(FetchMethod):
fr += '@%s' % host
else:
fr = host
if path[0] != '~':
path = '/%s' % path
path = urllib.parse.unquote(path)
fr += ':%s' % path
cmd = 'scp -B -r %s %s %s/' % (
@@ -115,41 +108,3 @@ class SSH(FetchMethod):
runfetchcmd(cmd, d)
def checkstatus(self, fetch, urldata, d):
"""
Check the status of the url
"""
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
port = m.group('port')
user = m.group('user')
password = m.group('pass')
if port:
portarg = '-P %s' % port
else:
portarg = ''
if user:
fr = user
if password:
fr += ':%s' % password
fr += '@%s' % host
else:
fr = host
if path[0] != '~':
path = '/%s' % path
path = urllib.parse.unquote(path)
cmd = 'ssh -o BatchMode=true %s %s [ -f %s ]' % (
portarg,
fr,
path
)
check_network_access(d, cmd, urldata.url)
runfetchcmd(cmd, d)
return True


@@ -57,12 +57,7 @@ class Svn(FetchMethod):
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
# Whether to use the @REV peg-revision syntax in the svn command or not
ud.pegrevision = True
if 'nopegrevision' in ud.parm:
ud.pegrevision = False
ud.localfile = d.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ["0", "1"][ud.pegrevision]))
ud.localfile = d.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision))
def _buildsvncommand(self, ud, d, command):
"""
@@ -91,7 +86,7 @@ class Svn(FetchMethod):
if command == "info":
svncmd = "%s info %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
elif command == "log1":
svncmd = "%s log --limit 1 --quiet %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
svncmd = "%s log --limit 1 %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
@@ -103,8 +98,7 @@ class Svn(FetchMethod):
if ud.revision:
options.append("-r %s" % ud.revision)
if ud.pegrevision:
suffix = "@%s" % (ud.revision)
suffix = "@%s" % (ud.revision)
if command == "fetch":
transportuser = ud.parm.get("transportuser", "")


@@ -26,6 +26,7 @@ from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
from bb.utils import export_proxies
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
@@ -51,24 +52,18 @@ class WgetProgressHandler(bb.progress.LineFilterProgressHandler):
class Wget(FetchMethod):
"""Class to fetch urls via 'wget'"""
# CDNs like CloudFlare may do a 'browser integrity test' which can fail
# with the standard wget/urllib User-Agent, so pretend to be a modern
# browser.
user_agent = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
def check_certs(self, d):
"""
Should certificates be checked?
"""
return (d.getVar("BB_CHECK_SSL_CERTS") or "1") != "0"
"""Class to fetch urls via 'wget'"""
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with wget.
"""
return ud.type in ['http', 'https', 'ftp', 'ftps']
return ud.type in ['http', 'https', 'ftp']
def recommends_checksum(self, urldata):
return True
@@ -87,13 +82,7 @@ class Wget(FetchMethod):
if not ud.localfile:
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30"
if ud.type == 'ftp' or ud.type == 'ftps':
self.basecmd += " --passive-ftp"
if not self.check_certs(d):
self.basecmd += " --no-check-certificate"
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"
def _runwget(self, ud, d, command, quiet, workdir=None):
@@ -108,51 +97,32 @@ class Wget(FetchMethod):
fetchcmd = self.basecmd
dldir = os.path.realpath(d.getVar("DL_DIR"))
localpath = os.path.join(dldir, ud.localfile) + ".tmp"
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if 'downloadfilename' in ud.parm:
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile)
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if ud.user and ud.pswd:
fetchcmd += " --auth-no-challenge"
if ud.parm.get("redirectauth", "1") == "1":
# An undocumented feature of wget is that if the
# username/password are specified on the URI, wget will only
# send the Authorization header to the first host and not to
# any hosts that it is redirected to. With the increasing
# usage of temporary AWS URLs, this difference now matters as
# AWS will reject any request that has authentication both in
# the query parameters (from the redirect) and in the
# Authorization header.
fetchcmd += " --user=%s --password=%s" % (ud.user, ud.pswd)
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it.. trying again..
fetchcmd += " -c -P " + dldir + " '" + uri + "'"
fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % uri)
else:
fetchcmd += " -P " + dldir + " '" + uri + "'"
fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % uri)
self._runwget(ud, d, fetchcmd, False)
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(localpath):
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, localpath), uri)
if not os.path.exists(ud.localpath):
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
if os.path.getsize(localpath) == 0:
os.remove(localpath)
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (uri), uri)
# Try and verify any checksum now, meaning if it isn't correct, we don't remove the
# original file, which might be a race (imagine two recipes referencing the same
# source, one with an incorrect checksum)
bb.fetch2.verify_checksum(ud, d, localpath=localpath, fatal_nochecksum=False)
# Remove the ".tmp" and move the file into position atomically
# Our lock prevents multiple writers but mirroring code may grab incomplete files
os.rename(localpath, localpath[:-4])
return True
def checkstatus(self, fetch, ud, d, try_again=True):
@@ -239,7 +209,7 @@ class Wget(FetchMethod):
# We let the request fail and expect it to be
# tried once more ("try_again" in check_status()),
# with the dead connection removed from the cache.
# If it still fails, we give up, which can happen for bad
# If it still fails, we give up, which can happend for bad
# HTTP proxy settings.
fetch.connection_cache.remove_connection(h.host, h.port)
raise urllib.error.URLError(err)
@@ -312,76 +282,64 @@ class Wget(FetchMethod):
newreq = urllib.request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
newreq.get_method = req.get_method
return newreq
exported_proxies = export_proxies(d)
# We need to update the environment here as both the proxy and HTTPS
# handlers need variables set. The proxy needs http_proxy and friends to
# be set, and HTTPSHandler ends up calling into openssl to load the
# certificates. In buildtools configurations this will be looking at the
# wrong place for certificates by default: we set SSL_CERT_FILE to the
# right location in the buildtools environment script but as BitBake
# prunes the environment this is lost. When binaries are executed
# runfetchcmd ensures these values are in the environment, but this is
# pure Python so we need to update the environment.
#
# Avoid tramping the environment too much by using bb.utils.environment
# to scope the changes to the build_opener request, which is when the
# environment lookups happen.
newenv = bb.fetch2.get_fetcher_environment(d)
handlers = [FixedHTTPRedirectHandler, HTTPMethodFallback]
if exported_proxies:
handlers.append(urllib.request.ProxyHandler())
handlers.append(CacheHTTPHandler())
# Since Python 2.7.9 ssl cert validation is enabled by default
# see PEP-0476, this causes verification errors on some https servers
# so disable by default.
import ssl
if hasattr(ssl, '_create_unverified_context'):
handlers.append(urllib.request.HTTPSHandler(context=ssl._create_unverified_context()))
opener = urllib.request.build_opener(*handlers)
with bb.utils.environment(**newenv):
import ssl
try:
uri = ud.url.split(";")[0]
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
# Some servers (FusionForge, as used on Alioth) require that the
# optional Accept header is set.
r.add_header("Accept", "*/*")
r.add_header("User-Agent", self.user_agent)
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
authheader = "Basic %s" % encodeuser
r.add_header("Authorization", authheader)
if self.check_certs(d):
context = ssl.create_default_context()
else:
context = ssl._create_unverified_context()
handlers = [FixedHTTPRedirectHandler,
HTTPMethodFallback,
urllib.request.ProxyHandler(),
CacheHTTPHandler(),
urllib.request.HTTPSHandler(context=context)]
opener = urllib.request.build_opener(*handlers)
if ud.user and ud.pswd:
add_basic_auth(ud.user + ':' + ud.pswd, r)
try:
uri_base = ud.url.split(";")[0]
uri = "{}://{}{}".format(urllib.parse.urlparse(uri_base).scheme, ud.host, ud.path)
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
# Some servers (FusionForge, as used on Alioth) require that the
# optional Accept header is set.
r.add_header("Accept", "*/*")
r.add_header("User-Agent", self.user_agent)
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
authheader = "Basic %s" % encodeuser
r.add_header("Authorization", authheader)
if ud.user and ud.pswd:
add_basic_auth(ud.user + ':' + ud.pswd, r)
try:
import netrc
auth_data = netrc.netrc().authenticators(urllib.parse.urlparse(uri).hostname)
if auth_data:
login, _, password = auth_data
add_basic_auth("%s:%s" % (login, password), r)
except (FileNotFoundError, netrc.NetrcParseError):
pass
with opener.open(r, timeout=30) as response:
pass
except (urllib.error.URLError, ConnectionResetError, TimeoutError) as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed for %s: %s" % (uri,e))
return False
import netrc
n = netrc.netrc()
login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
add_basic_auth("%s:%s" % (login, password), r)
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
pass
with opener.open(r) as response:
pass
except urllib.error.URLError as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed: %s" % e)
return False
except ConnectionResetError as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed: %s" % e)
return False
return True
def _parse_path(self, regex, s):
@@ -514,7 +472,7 @@ class Wget(FetchMethod):
version_dir = ['', '', '']
version = ['', '', '']
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])*(\d+))")
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
s = dirver_regex.search(dirver)
if s:
version_dir[1] = s.group('ver')
@@ -590,7 +548,7 @@ class Wget(FetchMethod):
# src.rpm extension was added only for rpm package. Can be removed if the rpm
# package will always be considered as having to be manually upgraded
psuffix_regex = r"(tar\.\w+|tgz|zip|xz|rpm|bz2|orig\.tar\.\w+|src\.tar\.\w+|src\.tgz|svnr\d+\.tar\.\w+|stable\.tar\.\w+|src\.rpm)"
psuffix_regex = r"(tar\.gz|tgz|tar\.bz2|zip|xz|tar\.lz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"
# match name, version and archive type of a package
package_regex_comp = re.compile(r"(?P<name>%s?\.?v?)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s$)"
@@ -641,10 +599,10 @@ class Wget(FetchMethod):
# search for version matches on folders inside the path, like:
# "5.7" in http://download.gnome.org/sources/${PN}/5.7/${PN}-${PV}.tar.gz
dirver_regex = re.compile(r"(?P<dirver>[^/]*(\d+\.)*\d+([-_]r\d+)*)/")
m = dirver_regex.findall(path)
m = dirver_regex.search(path)
if m:
pn = d.getVar('PN')
dirver = m[-1][0]
dirver = m.group('dirver')
dirver_pn_regex = re.compile(r"%s\d?" % (re.escape(pn)))
if not dirver_pn_regex.search(dirver):


@@ -12,12 +12,11 @@
import os
import sys
import logging
import argparse
import optparse
import warnings
import fcntl
import time
import traceback
import datetime
import bb
from bb import event
@@ -44,18 +43,18 @@ def present_options(optionlist):
else:
return optionlist[0]
class BitbakeHelpFormatter(argparse.HelpFormatter):
def _get_help_string(self, action):
class BitbakeHelpFormatter(optparse.IndentedHelpFormatter):
def format_option(self, option):
# We need to do this here rather than in the text we supply to
# add_option() because we don't want to call list_extension_modules()
# on every execution (since it imports all of the modules)
# Note also that we modify option.help rather than the returned text
# - this is so that we don't have to re-format the text ourselves
if action.dest == 'ui':
if option.dest == 'ui':
valid_uis = list_extension_modules(bb.ui, 'main')
return action.help.replace('@CHOICES@', present_options(valid_uis))
option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
return action.help
return optparse.IndentedHelpFormatter.format_option(self, option)
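
Both formatter variants defer the @CHOICES@ substitution until help text is actually rendered, so list_extension_modules() (which imports every UI module) never runs on a normal invocation. A self-contained argparse sketch of that lazy-substitution pattern, with illustrative choices in place of the real module scan:

import argparse

class LazyChoicesFormatter(argparse.HelpFormatter):
    def _get_help_string(self, action):
        # Substitute when help is rendered, not at add_argument() time
        if action.dest == 'ui' and '@CHOICES@' in (action.help or ''):
            return action.help.replace('@CHOICES@', 'knotty, ncurses')
        return action.help

parser = argparse.ArgumentParser(formatter_class=LazyChoicesFormatter)
parser.add_argument("-u", "--ui", default="knotty",
                    help="The user interface to use (@CHOICES@ - default %(default)s).")
print(parser.format_help())  # shows the substituted choice list
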
def list_extension_modules(pkg, checkattr):
"""
@@ -113,209 +112,189 @@ def _showwarning(message, category, filename, lineno, file=None, line=None):
warnlog.warning(s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
def create_bitbake_parser():
parser = argparse.ArgumentParser(
description="""\
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.
""",
formatter_class=BitbakeHelpFormatter,
allow_abbrev=False,
add_help=False, # help is manually added below in a specific argument group
)
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
general_group = parser.add_argument_group('General options')
task_group = parser.add_argument_group('Task control options')
exec_group = parser.add_argument_group('Execution control options')
logging_group = parser.add_argument_group('Logging/output control options')
server_group = parser.add_argument_group('Server options')
config_group = parser.add_argument_group('Configuration options')
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
general_group.add_argument("targets", nargs="*", metavar="recipename/target",
help="Execute the specified task (default is 'build') for these target "
"recipes (.bb files).")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
general_group.add_argument("-s", "--show-versions", action="store_true",
help="Show current and preferred versions of all recipes.")
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
general_group.add_argument("-e", "--environment", action="store_true",
dest="show_environment",
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
general_group.add_argument("-g", "--graphviz", action="store_true", dest="dot_graph",
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
general_group.add_argument("-u", "--ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %(default)s).")
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
general_group.add_argument("--version", action="store_true",
help="Show programs version and exit.")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
general_group.add_argument('-h', '--help', action='help',
help='Show this help message and exit.')
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
task_group.add_argument("-f", "--force", action="store_true",
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
task_group.add_argument("-c", "--cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
task_group.add_argument("-C", "--clear-stamp", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
task_group.add_argument("--runall", action="append", default=[],
help="Run the specified task for any recipe in the taskgraph of the "
"specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--skip-setscene", action="store_true",
dest="skipsetscene", default=False,
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene")
task_group.add_argument("--runonly", action="append",
help="Run only the specified task within the taskgraph of the "
"specified targets (and any task dependencies those tasks may have).")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
task_group.add_argument("--no-setscene", action="store_true",
dest="nosetscene",
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
task_group.add_argument("--skip-setscene", action="store_true",
dest="skipsetscene",
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate any running bitbake server.")
task_group.add_argument("--setscene-only", action="store_true",
dest="setsceneonly",
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
exec_group.add_argument("-n", "--dry-run", action="store_true",
help="Don't execute, just go through the motions.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
exec_group.add_argument("-p", "--parse-only", action="store_true",
help="Quit after parsing the BB recipes.")
exec_group.add_argument("-k", "--continue", action="store_false", dest="halt",
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
exec_group.add_argument("-P", "--profile", action="store_true",
help="Profile the command and save reports.")
exec_group.add_argument("-S", "--dump-signatures", action="append",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means recursively compare the dumped signature with the most recent"
" one in a local build or sstate cache (can be used to find out why tasks re-run"
" when that is not expected)")
exec_group.add_argument("--revisions-changed", action="store_true",
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
exec_group.add_argument("-b", "--buildfile",
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
logging_group.add_argument("-D", "--debug", action="count", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
logging_group.add_argument("-l", "--log-domains", action="append", dest="debug_domains",
default=[],
help="Show debug logging for the specified logging domains.")
logging_group.add_argument("-v", "--verbose", action="store_true",
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
logging_group.add_argument("-q", "--quiet", action="count", default=0,
help="Output less log message data to the terminal. You can specify this "
"more than once.")
logging_group.add_argument("-w", "--write-log", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
server_group.add_argument("-B", "--bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
server_group.add_argument("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
server_group.add_argument("--remote-server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
server_group.add_argument("-m", "--kill-server", action="store_true",
help="Terminate any running bitbake server.")
server_group.add_argument("--token", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
server_group.add_argument("--observe-only", action="store_true",
help="Connect to a server as an observing-only client.")
server_group.add_argument("--status-only", action="store_true",
help="Check the status of the remote bitbake server.")
server_group.add_argument("--server-only", action="store_true",
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
config_group.add_argument("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
config_group.add_argument("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
config_group.add_argument("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing.")
parser.add_option("", "--runall", action="append", dest="runall",
help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--runonly", action="append", dest="runonly",
help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
return parser
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = create_bitbake_parser()
options = parser.parse_intermixed_args(argv[1:])
if options.version:
print("BitBake Build Tool Core version %s" % bb.__version__)
sys.exit(0)
options, targets = parser.parse_args(argv)
if options.quiet and options.verbose:
parser.error("options --quiet and --verbose are mutually exclusive")
@@ -347,7 +326,7 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
else:
options.xmlrpcinterface = (None, 0)
return options, options.targets
return options, targets[1:]
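
The parser swap also changes how targets are collected: optparse handed back leftover positional arguments, while the argparse version declares a targets positional and uses parse_intermixed_args() so recipe names may appear between options. A toy demonstration of that behaviour (flags chosen for illustration only):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("targets", nargs="*")
parser.add_argument("-c", "--cmd")

# Positionals can be interleaved with options, as on a bitbake command line
ns = parser.parse_intermixed_args(["core-image-minimal", "-c", "compile", "quilt-native"])
print(ns.targets)  # ['core-image-minimal', 'quilt-native']
print(ns.cmd)      # 'compile'
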
def bitbake_main(configParams, configuration):
@@ -412,9 +391,6 @@ def bitbake_main(configParams, configuration):
return 1
def timestamp():
return datetime.datetime.now().strftime('%H:%M:%S.%f')
def setup_bitbake(configParams, extrafeatures=None):
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
@@ -422,11 +398,6 @@ def setup_bitbake(configParams, extrafeatures=None):
# In status only mode there are no logs and no UI
logger.addHandler(handler)
if configParams.dump_signatures:
if extrafeatures is None:
extrafeatures = []
extrafeatures.append(bb.cooker.CookerFeatures.RECIPE_SIGGEN_INFO)
if configParams.server_only:
featureset = []
ui_module = None
@@ -454,7 +425,7 @@ def setup_bitbake(configParams, extrafeatures=None):
retries = 8
while retries:
try:
topdir, lock, lockfile = lockBitbake()
topdir, lock = lockBitbake()
sockname = topdir + "/bitbake.sock"
if lock:
if configParams.status_only or configParams.kill_server:
@@ -465,22 +436,18 @@ def setup_bitbake(configParams, extrafeatures=None):
logger.info("Starting bitbake server...")
# Clear the event queue since we already displayed messages
bb.event.ui_queue = []
server = bb.server.process.BitBakeServer(lock, sockname, featureset, configParams.server_timeout, configParams.xmlrpcinterface, configParams.profile)
server = bb.server.process.BitBakeServer(lock, sockname, featureset, configParams.server_timeout, configParams.xmlrpcinterface)
else:
logger.info("Reconnecting to bitbake server...")
if not os.path.exists(sockname):
logger.info("Previous bitbake instance shutting down?, waiting to retry... (%s)" % timestamp())
procs = bb.server.process.get_lockfile_process_msg(lockfile)
if procs:
logger.info("Processes holding bitbake.lock (missing socket %s):\n%s" % (sockname, procs))
logger.info("Directory listing: %s" % (str(os.listdir(topdir))))
logger.info("Previous bitbake instance shutting down?, waiting to retry...")
i = 0
lock = None
# Wait for 5s or until we can get the lock
while not lock and i < 50:
time.sleep(0.1)
_, lock, _ = lockBitbake()
_, lock = lockBitbake()
i += 1
if lock:
bb.utils.unlockfile(lock)
@@ -499,9 +466,9 @@ def setup_bitbake(configParams, extrafeatures=None):
retries -= 1
tryno = 8 - retries
if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError, EOFError, SystemExit)):
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, timestamp()))
logger.info("Retrying server connection (#%d)..." % tryno)
else:
logger.info("Retrying server connection (#%d)... (%s, %s)" % (tryno, traceback.format_exc(), timestamp()))
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, traceback.format_exc()))
if not retries:
bb.fatal("Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log).")
@@ -530,5 +497,5 @@ def lockBitbake():
bb.error("Unable to find conf/bblayers.conf or conf/bitbake.conf. BBPATH is unset and/or not in a build directory?")
raise BBMainFatal
lockfile = topdir + "/bitbake.lock"
return topdir, bb.utils.lockfile(lockfile, False, False), lockfile
return topdir, bb.utils.lockfile(lockfile, False, False)


@@ -76,12 +76,7 @@ def getDiskData(BBDirs):
return None
action = pathSpaceInodeRe.group(1)
if action == "ABORT":
# Emit a deprecation warning
logger.warnonce("The BB_DISKMON_DIRS \"ABORT\" action has been renamed to \"HALT\", update configuration")
action = "HALT"
if action not in ("HALT", "STOPTASKS", "WARN"):
if action not in ("ABORT", "STOPTASKS", "WARN"):
printErr("Unknown disk space monitor action: %s" % action)
return None
@@ -182,7 +177,7 @@ class diskMonitor:
# use them to avoid printing too many warning messages
self.preFreeS = {}
self.preFreeI = {}
# This is for STOPTASKS and HALT, to avoid printing the message
# This is for STOPTASKS and ABORT, to avoid printing the message
# repeatedly while waiting for the tasks to finish
self.checked = {}
for k in self.devDict:
@@ -224,8 +219,8 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
elif action == "HALT" and not self.checked[k]:
logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
elif action == "ABORT" and not self.checked[k]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
@@ -234,10 +229,9 @@ class diskMonitor:
freeInode = st.f_favail
if minInode and freeInode < minInode:
# Some filesystems use dynamic inodes so can't run out.
# This is reported by the inode count being 0 (btrfs) or the free
# inode count being -1 (cephfs).
if st.f_files == 0 or st.f_favail == -1:
# Some filesystems use dynamic inodes so can't run out
# (e.g. btrfs). This is reported by the inode count being 0.
if st.f_files == 0:
self.devDict[k][2] = None
continue
# Always show warning, the self.checked would always be False if the action is WARN
@@ -251,8 +245,8 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
elif action == "HALT" and not self.checked[k]:
logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
elif action == "ABORT" and not self.checked[k]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
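
The inode hunk is easy to misread: filesystems with dynamically allocated inodes report them in different ways, btrfs with an inode count of 0 and cephfs with a free-inode count of -1, and the more thorough variant skips monitoring in both cases. A sketch of that detection in isolation (the function name is an assumption, not bitbake API):

import os

def free_inodes(path):
    """Return the free inode count, or None if the filesystem cannot run out."""
    st = os.statvfs(path)
    # btrfs reports f_files == 0; cephfs reports f_favail == -1.
    # Both allocate inodes dynamically, so monitoring them is meaningless.
    if st.f_files == 0 or st.f_favail == -1:
        return None
    return st.f_favail
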


@@ -30,9 +30,7 @@ class BBLogFormatter(logging.Formatter):
PLAIN = logging.INFO + 1
VERBNOTE = logging.INFO + 2
ERROR = logging.ERROR
ERRORONCE = logging.ERROR - 1
WARNING = logging.WARNING
WARNONCE = logging.WARNING - 1
CRITICAL = logging.CRITICAL
levelnames = {
@@ -44,9 +42,7 @@ class BBLogFormatter(logging.Formatter):
PLAIN : '',
VERBNOTE: 'NOTE',
WARNING : 'WARNING',
WARNONCE : 'WARNING',
ERROR : 'ERROR',
ERRORONCE : 'ERROR',
CRITICAL: 'ERROR',
}
@@ -62,9 +58,7 @@ class BBLogFormatter(logging.Formatter):
PLAIN : BASECOLOR,
VERBNOTE: BASECOLOR,
WARNING : YELLOW,
WARNONCE : YELLOW,
ERROR : RED,
ERRORONCE : RED,
CRITICAL: RED,
}
@@ -127,22 +121,6 @@ class BBLogFilter(object):
return True
return False
class LogFilterShowOnce(logging.Filter):
def __init__(self):
self.seen_warnings = set()
self.seen_errors = set()
def filter(self, record):
if record.levelno == bb.msg.BBLogFormatter.WARNONCE:
if record.msg in self.seen_warnings:
return False
self.seen_warnings.add(record.msg)
if record.levelno == bb.msg.BBLogFormatter.ERRORONCE:
if record.msg in self.seen_errors:
return False
self.seen_errors.add(record.msg)
return True
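
LogFilterShowOnce is a plain logging.Filter keyed on the raw message text, which is what lets the WARNONCE/ERRORONCE levels fire once per unique message on each handler. A standalone equivalent that can be attached to any handler (the custom level number mirrors the one above but is otherwise arbitrary):

import logging

WARNONCE = logging.WARNING - 1
logging.addLevelName(WARNONCE, "WARNING")

class ShowOnceFilter(logging.Filter):
    def __init__(self):
        super().__init__()
        self.seen = set()

    def filter(self, record):
        if record.levelno == WARNONCE:
            if record.msg in self.seen:
                return False  # drop repeats of this exact message
            self.seen.add(record.msg)
        return True

logger = logging.getLogger("demo")
logger.setLevel(WARNONCE)
handler = logging.StreamHandler()
handler.addFilter(ShowOnceFilter())
logger.addHandler(handler)

for _ in range(3):
    logger.log(WARNONCE, "stamp mismatch, rebuilding")  # printed once
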
class LogFilterGEQLevel(logging.Filter):
def __init__(self, level):
self.strlevel = str(level)
@@ -228,9 +206,8 @@ def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers
"""Standalone logger creation function"""
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
console.addFilter(bb.msg.LogFilterShowOnce())
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if color == 'always' or (color == 'auto' and output.isatty() and os.environ.get('NO_COLOR', '') == ''):
if color == 'always' or (color == 'auto' and output.isatty()):
format.enable_color()
console.setFormatter(format)
if preserve_handlers:
@@ -316,17 +293,10 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):
# Convert all level parameters to integers in case users want to use the
# bitbake defined level names
for name, h in logconfig["handlers"].items():
for h in logconfig["handlers"].values():
if "level" in h:
h["level"] = bb.msg.stringToLevel(h["level"])
# Every handler needs its own instance of the once filter.
once_filter_name = name + ".showonceFilter"
logconfig.setdefault("filters", {})[once_filter_name] = {
"()": "bb.msg.LogFilterShowOnce",
}
h.setdefault("filters", []).append(once_filter_name)
for l in logconfig["loggers"].values():
if "level" in l:
l["level"] = bb.msg.stringToLevel(l["level"])


@@ -49,32 +49,20 @@ class SkipPackage(SkipRecipe):
__mtime_cache = {}
def cached_mtime(f):
if f not in __mtime_cache:
res = os.stat(f)
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
return __mtime_cache[f]
def cached_mtime_noerror(f):
if f not in __mtime_cache:
try:
res = os.stat(f)
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
return 0
return __mtime_cache[f]
def check_mtime(f, mtime):
try:
res = os.stat(f)
current_mtime = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = current_mtime
except OSError:
current_mtime = 0
return current_mtime == mtime
def update_mtime(f):
try:
res = os.stat(f)
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
if f in __mtime_cache:
del __mtime_cache[f]
@@ -111,12 +99,12 @@ def supports(fn, data):
return 1
return 0
def handle(fn, data, include=0, baseconfig=False):
def handle(fn, data, include = 0):
"""Call the handler that is appropriate for this file"""
for h in handlers:
if h['supports'](fn, data):
with data.inchistory.include(fn):
return h['handle'](fn, data, include, baseconfig)
return h['handle'](fn, data, include)
raise ParseError("not a BitBake file", fn)
def init(fn, data):
@@ -125,8 +113,6 @@ def init(fn, data):
return h['init'](data)
def init_parser(d):
if hasattr(bb.parse, "siggen"):
bb.parse.siggen.exit()
bb.parse.siggen = bb.siggen.init(d)
def resolve_file(fn, d):


@@ -9,7 +9,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import sys
import bb
from bb import methodpool
from bb.parse import logger
@@ -131,10 +130,6 @@ class DataNode(AstNode):
else:
val = groupd["value"]
if ":append" in key or ":remove" in key or ":prepend" in key:
if op in ["append", "prepend", "postdot", "predot", "ques"]:
bb.warn(key + " " + groupd[op] + " is not a recommended operator combination, please replace it.")
flag = None
if 'flag' in groupd and groupd['flag'] is not None:
flag = groupd['flag']
@@ -150,7 +145,7 @@ class DataNode(AstNode):
data.setVar(key, val, parsing=True, **loginfo)
class MethodNode(AstNode):
tr_tbl = str.maketrans('/.+-@%&~', '________')
tr_tbl = str.maketrans('/.+-@%&', '_______')
def __init__(self, filename, lineno, func_name, body, python, fakeroot):
AstNode.__init__(self, filename, lineno)
@@ -211,12 +206,10 @@ class ExportFuncsNode(AstNode):
def eval(self, data):
sentinel = " # Export function set\n"
for func in self.n:
calledfunc = self.classname + "_" + func
basevar = data.getVar(func, False)
if basevar and sentinel not in basevar:
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
continue
if data.getVar(func, False):
@@ -226,18 +219,19 @@ class ExportFuncsNode(AstNode):
for flag in [ "func", "python" ]:
if data.getVarFlag(calledfunc, flag, False):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag, False))
for flag in ["dirs", "cleandirs", "fakeroot"]:
for flag in [ "dirs" ]:
if data.getVarFlag(func, flag, False):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag, False))
data.setVarFlag(func, "filename", "autogenerated")
data.setVarFlag(func, "lineno", 1)
if data.getVarFlag(calledfunc, "python", False):
data.setVar(func, sentinel + " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
else:
if "-" in self.classname:
bb.fatal("The classname %s contains a dash character and is calling an sh function %s using EXPORT_FUNCTIONS. Since a dash is illegal in sh function names, this cannot work, please rename the class or don't use EXPORT_FUNCTIONS." % (self.classname, calledfunc))
data.setVar(func, sentinel + " " + calledfunc + "\n", parsing=True)
data.setVar(func, " " + calledfunc + "\n", parsing=True)
data.setVarFlag(func, 'export_func', '1')
class AddTaskNode(AstNode):
def __init__(self, filename, lineno, func, before, after):
@@ -271,41 +265,6 @@ class BBHandlerNode(AstNode):
data.setVarFlag(h, "handler", 1)
data.setVar('__BBHANDLERS', bbhands)
class PyLibNode(AstNode):
def __init__(self, filename, lineno, libdir, namespace):
AstNode.__init__(self, filename, lineno)
self.libdir = libdir
self.namespace = namespace
def eval(self, data):
global_mods = (data.getVar("BB_GLOBAL_PYMODULES") or "").split()
for m in global_mods:
if m not in bb.utils._context:
bb.utils._context[m] = __import__(m)
libdir = data.expand(self.libdir)
if libdir not in sys.path:
sys.path.append(libdir)
try:
bb.utils._context[self.namespace] = __import__(self.namespace)
toimport = getattr(bb.utils._context[self.namespace], "BBIMPORTS", [])
for i in toimport:
bb.utils._context[self.namespace] = __import__(self.namespace + "." + i)
mod = getattr(bb.utils._context[self.namespace], i)
fn = getattr(mod, "__file__")
funcs = {}
for f in dir(mod):
if f.startswith("_"):
continue
fcall = getattr(mod, f)
if not callable(fcall):
continue
funcs[f] = fcall
bb.codeparser.add_module_functions(fn, funcs, "%s.%s" % (self.namespace, i))
except AttributeError as e:
bb.error("Error importing OE modules: %s" % str(e))
class InheritNode(AstNode):
def __init__(self, filename, lineno, classes):
AstNode.__init__(self, filename, lineno)
@@ -314,16 +273,6 @@ class InheritNode(AstNode):
def eval(self, data):
bb.parse.BBHandler.inherit(self.classes, self.filename, self.lineno, data)
class InheritDeferredNode(AstNode):
def __init__(self, filename, lineno, classes):
AstNode.__init__(self, filename, lineno)
self.inherit = (classes, filename, lineno)
def eval(self, data):
inherits = data.getVar('__BBDEFINHERITS', False) or []
inherits.append(self.inherit)
data.setVar('__BBDEFINHERITS', inherits)
def handleInclude(statements, filename, lineno, m, force):
statements.append(IncludeNode(filename, lineno, m.group(1), force))
@@ -367,17 +316,10 @@ def handleDelTask(statements, filename, lineno, m):
def handleBBHandlers(statements, filename, lineno, m):
statements.append(BBHandlerNode(filename, lineno, m.group(1)))
def handlePyLib(statements, filename, lineno, m):
statements.append(PyLibNode(filename, lineno, m.group(1), m.group(2)))
def handleInherit(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritNode(filename, lineno, classes))
def handleInheritDeferred(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritDeferredNode(filename, lineno, classes))
def runAnonFuncs(d):
code = []
for funcname in d.getVar("__BBANONFUNCS", False) or []:
@@ -387,10 +329,6 @@ def runAnonFuncs(d):
def finalize(fn, d, variant = None):
saved_handlers = bb.event.get_handlers().copy()
try:
# Found renamed variables. Exit immediately
if d.getVar("_FAILPARSINGERRORHANDLED", False) == True:
raise bb.BBHandledException()
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
@@ -415,9 +353,6 @@ def finalize(fn, d, variant = None):
d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
if d.getVar('__BBAUTOREV_SEEN') and d.getVar('__BBSRCREV_SEEN') and not d.getVar("__BBAUTOREV_ACTED_UPON"):
bb.fatal("AUTOREV/SRCPV set too late for the fetcher to work properly, please set the variables earlier in parsing. Erroring instead of later obtuse build failures.")
bb.event.fire(bb.event.RecipeParsed(fn), d)
finally:
bb.event.set_handlers(saved_handlers)
@@ -444,14 +379,6 @@ def multi_finalize(fn, d):
logger.debug("Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
while True:
inherits = d.getVar('__BBDEFINHERITS', False) or []
if not inherits:
break
inherit, filename, lineno = inherits.pop(0)
d.setVar('__BBDEFINHERITS', inherits)
bb.parse.BBHandler.inherit(inherit, filename, lineno, d, deferred=True)
onlyfinalise = d.getVar("__ONLYFINALISE", False)
safe_d = d


@@ -19,9 +19,11 @@ from . import ConfHandler
from .. import resolve_file, ast, logger, ParseError
from .ConfHandler import include, init
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$:]+)?\s*\(\s*\)\s*{$" )
# For compatibility
bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
__inherit_def_regexp__ = re.compile(r"inherit_defer\s+(.+)" )
__export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
__addtask_regexp__ = re.compile(r"addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
__deltask_regexp__ = re.compile(r"deltask\s+(.+)")
@@ -34,7 +36,6 @@ __infunc__ = []
__inpython__ = False
__body__ = []
__classname__ = ""
__residue__ = []
cached_statements = {}
@@ -42,46 +43,31 @@ def supports(fn, d):
"""Return True if fn has a supported extension"""
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
def inherit(files, fn, lineno, d, deferred=False):
def inherit(files, fn, lineno, d):
__inherit_cache = d.getVar('__inherit_cache', False) or []
#if "${" in files and not deferred:
# bb.warn("%s:%s has non deferred conditional inherit" % (fn, lineno))
files = d.expand(files).split()
for file in files:
classtype = d.getVar("__bbclasstype", False)
origfile = file
for t in ["classes-" + classtype, "classes"]:
file = origfile
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join(t, '%s.bbclass' % file)
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join('classes', '%s.bbclass' % file)
if not os.path.isabs(file):
bbpath = d.getVar("BBPATH")
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
bb.parse.mark_dependency(d, af)
if abs_fn:
file = abs_fn
if os.path.exists(file):
break
if not os.path.exists(file):
raise ParseError("Could not inherit file %s" % (file), fn, lineno)
if not os.path.isabs(file):
bbpath = d.getVar("BBPATH")
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
bb.parse.mark_dependency(d, af)
if abs_fn:
file = abs_fn
if not file in __inherit_cache:
logger.debug("Inheriting %s (from %s:%d)" % (file, fn, lineno))
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
try:
bb.parse.handle(file, d, True)
except (IOError, OSError) as exc:
raise ParseError("Could not inherit file %s: %s" % (fn, exc.strerror), fn, lineno)
include(fn, file, lineno, d, "inherit")
__inherit_cache = d.getVar('__inherit_cache', False) or []
def get_statements(filename, absolute_filename, base_name):
global cached_statements, __residue__, __body__
global cached_statements
try:
return cached_statements[absolute_filename]
@@ -101,17 +87,12 @@ def get_statements(filename, absolute_filename, base_name):
# add a blank line to close out any python definition
feeder(lineno, "", filename, base_name, statements, eof=True)
if __residue__:
raise ParseError("Unparsed lines %s: %s" % (filename, str(__residue__)), filename, lineno)
if __body__:
raise ParseError("Unparsed lines from unclosed function %s: %s" % (filename, str(__body__)), filename, lineno)
if filename.endswith(".bbclass") or filename.endswith(".inc"):
cached_statements[absolute_filename] = statements
return statements
def handle(fn, d, include, baseconfig=False):
global __infunc__, __body__, __residue__, __classname__
def handle(fn, d, include):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__, __classname__
__body__ = []
__infunc__ = []
__classname__ = ""
@@ -163,7 +144,7 @@ def handle(fn, d, include, baseconfig=False):
return d
def feeder(lineno, s, fn, root, statements, eof=False):
global __inpython__, __infunc__, __body__, __residue__, __classname__
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, bb, __residue__, __classname__
# Check tabs in python functions:
# - def py_funcname(): covered by __inpython__
@@ -200,10 +181,10 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if s and s[0] == '#':
if len(__residue__) != 0 and __residue__[0][0] != "#":
bb.fatal("There is a comment on line %s of file %s:\n'''\n%s\n'''\nwhich is in the middle of a multiline expression. This syntax is invalid, please correct it." % (lineno, fn, s))
bb.fatal("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
if len(__residue__) != 0 and __residue__[0][0] == "#" and (not s or s[0] != "#"):
bb.fatal("There is a confusing multiline partially commented expression on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (lineno - len(__residue__), fn, "\n".join(__residue__)))
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
if s and s[-1] == '\\':
__residue__.append(s[:-1])
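
Around this hunk the feeder folds backslash-continued lines into __residue__, and one variant reports the line where a continuation started when a comment interrupts it, rather than the current line only. A simplified standalone folding loop showing the same bookkeeping (not the bitbake feeder itself):

def fold_continuations(lines):
    """Join backslash-continued lines; reject a comment inside a continuation."""
    logical, start, buf = [], None, ""
    for lineno, raw in enumerate(lines, 1):
        s = raw.rstrip()
        if buf:
            if s.startswith("#"):
                raise SyntaxError("comment inside multiline expression "
                                  "starting at line %d" % start)
            buf += s[:-1] if s.endswith("\\") else s
            if not s.endswith("\\"):
                logical.append((start, buf))
                buf = ""
            continue
        if s.endswith("\\"):
            start, buf = lineno, s[:-1]
        else:
            logical.append((lineno, s))
    return logical

print(fold_continuations(["FOO = bar \\", "      baz"]))  # [(1, 'FOO = bar       baz')]
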
@@ -274,12 +255,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
ast.handleInherit(statements, fn, lineno, m)
return
m = __inherit_def_regexp__.match(s)
if m:
ast.handleInheritDeferred(statements, fn, lineno, m)
return
return ConfHandler.feeder(lineno, s, fn, statements, conffile=False)
return ConfHandler.feeder(lineno, s, fn, statements)
# Add us to the handlers list
from .. import handlers


@@ -20,8 +20,8 @@ from bb.parse import ParseError, resolve_file, ast, logger, handle
__config_regexp__ = re.compile( r"""
^
(?P<exp>export\s+)?
(?P<var>[a-zA-Z0-9\-_+.${}/~:]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]*)\])?
(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
(?P<colon>:=) |
@@ -45,11 +45,13 @@ __include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]+)\]$" )
__addpylib_regexp__ = re.compile(r"addpylib\s+(.+)\s+(.+)" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )
def init(data):
return
topdir = data.getVar('TOPDIR', False)
if not topdir:
data.setVar('TOPDIR', os.getcwd())
def supports(fn, d):
return fn[-5:] == ".conf"
@@ -103,12 +105,12 @@ def include_single_file(parentfn, fn, lineno, data, error_out):
# We have an issue where a UI might want to enforce particular settings such as
# an empty DISTRO variable. If configuration files do something like assigning
# a weak default, it turns out to be very difficult to filter out these changes,
# particularly when the weak default might appear half way though parsing a chain
# of configuration files. We therefore let the UIs hook into configuration file
# parsing. This turns out to be a hard problem to solve any other way.
confFilters = []
def handle(fn, data, include, baseconfig=False):
def handle(fn, data, include):
init(data)
if include == 0:
@@ -126,26 +128,21 @@ def handle(fn, data, include, baseconfig=False):
s = f.readline()
if not s:
break
origlineno = lineno
origline = s
w = s.strip()
# skip empty lines
if not w:
continue
s = s.rstrip()
while s[-1] == '\\':
line = f.readline()
origline += line
s2 = line.rstrip()
s2 = f.readline().rstrip()
lineno = lineno + 1
if (not s2 or s2 and s2[0] != "#") and s[0] == "#" :
bb.fatal("There is a confusing multiline, partially commented expression starting on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (origlineno, fn, origline))
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
s = s[:-1] + s2
# skip comments
if s[0] == '#':
continue
feeder(lineno, s, abs_fn, statements, baseconfig=baseconfig)
feeder(lineno, s, abs_fn, statements)
# DONE WITH PARSING... time to evaluate
data.setVar('FILE', abs_fn)
@@ -153,14 +150,14 @@ def handle(fn, data, include, baseconfig=False):
if oldfile:
data.setVar('FILE', oldfile)
f.close()
for f in confFilters:
f(fn, data)
return data
# baseconfig is set for the bblayers/layer.conf cookerdata config parsing
# The function is also used by BBHandler, conffile would be False
def feeder(lineno, s, fn, statements, baseconfig=False, conffile=True):
def feeder(lineno, s, fn, statements):
m = __config_regexp__.match(s)
if m:
groupd = m.groupdict()
@@ -192,11 +189,6 @@ def feeder(lineno, s, fn, statements, baseconfig=False, conffile=True):
ast.handleUnsetFlag(statements, fn, lineno, m)
return
m = __addpylib_regexp__.match(s)
if baseconfig and conffile and m:
ast.handlePyLib(statements, fn, lineno, m)
return
raise ParseError("unparsed line: '%s'" % s, fn, lineno);
# Add us to the handlers list


@@ -12,14 +12,14 @@ currently, providing a key/value store accessed by 'domain'.
#
import collections
import collections.abc
import contextlib
import functools
import logging
import os.path
import sqlite3
import sys
from collections.abc import Mapping
import warnings
from collections import Mapping
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
@@ -29,7 +29,7 @@ if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
logger = logging.getLogger("BitBake.PersistData")
@functools.total_ordering
class SQLTable(collections.abc.MutableMapping):
class SQLTable(collections.MutableMapping):
class _Decorators(object):
@staticmethod
def retry(*, reconnect=True):
@@ -63,7 +63,7 @@ class SQLTable(collections.abc.MutableMapping):
"""
Decorator that starts a database transaction and creates a database
cursor for performing queries. If no exception is thrown, the
database results are committed. If an exception occurs, the database
database results are commited. If an exception occurs, the database
is rolled back. In all cases, the cursor is closed after the
function ends.
@@ -208,7 +208,7 @@ class SQLTable(collections.abc.MutableMapping):
def __lt__(self, other):
if not isinstance(other, Mapping):
raise NotImplementedError()
raise NotImplemented
return len(self) < len(other)
@@ -238,6 +238,55 @@ class SQLTable(collections.abc.MutableMapping):
def has_key(self, key):
return key in self
class PersistData(object):
"""Deprecated representation of the bitbake persistent data store"""
def __init__(self, d):
warnings.warn("Use of PersistData is deprecated. Please use "
"persist(domain, d) instead.",
category=DeprecationWarning,
stacklevel=2)
self.data = persist(d)
logger.debug("Using '%s' as the persistent data cache",
self.data.filename)
def addDomain(self, domain):
"""
Add a domain (pending deprecation)
"""
return self.data[domain]
def delDomain(self, domain):
"""
Removes a domain and all the data it contains
"""
del self.data[domain]
def getKeyValues(self, domain):
"""
Return a list of key + value pairs for a domain
"""
return list(self.data[domain].items())
def getValue(self, domain, key):
"""
Return the value of a key for a domain
"""
return self.data[domain][key]
def setValue(self, domain, key, value):
"""
Sets the value of a key for a domain
"""
self.data[domain][key] = value
def delValue(self, domain, key):
"""
Deletes a key/value pair
"""
del self.data[domain][key]
def persist(domain, d):
"""Convenience factory for SQLTable objects based upon metadata"""
import bb.utils
@@ -249,23 +298,4 @@ def persist(domain, d):
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_persist_data.sqlite3")
try:
return SQLTable(cachefile, domain)
except sqlite3.OperationalError:
# Sqlite fails to open database when its path is too long.
# After testing, 504 is the biggest path length that can be opened by
# sqlite.
# Note: This code is called before sanity.bbclass and its path length
# check
max_len = 504
if len(cachefile) > max_len:
logger.critical("The path of the cache file is too long "
"({0} chars > {1}) to be opened by sqlite! "
"Your cache file is \"{2}\"".format(
len(cachefile),
max_len,
cachefile))
sys.exit(1)
else:
raise
return SQLTable(cachefile, domain)
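
Every SQLTable method above runs through the transaction decorator described in its docstring: open a cursor, commit on success, roll back on any exception, and always close the cursor. The same pattern reduced to plain sqlite3 (table schema and names are illustrative):

import functools
import sqlite3

def transaction(func):
    """Run func inside a transaction: commit on success, roll back on error."""
    @functools.wraps(func)
    def wrapper(db, *args, **kwargs):
        cursor = db.cursor()
        try:
            result = func(db, cursor, *args, **kwargs)
            db.commit()
            return result
        except Exception:
            db.rollback()
            raise
        finally:
            cursor.close()
    return wrapper

@transaction
def set_value(db, cursor, key, value):
    cursor.execute("REPLACE INTO store (key, value) VALUES (?, ?)", (key, value))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE store (key TEXT PRIMARY KEY, value TEXT)")
set_value(db, "domain", "example")
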


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -62,7 +60,7 @@ class Popen(subprocess.Popen):
"close_fds": True,
"preexec_fn": subprocess_setup,
"stdout": subprocess.PIPE,
"stderr": subprocess.PIPE,
"stderr": subprocess.STDOUT,
"stdin": subprocess.PIPE,
"shell": False,
}
@@ -144,7 +142,7 @@ def _logged_communicate(pipe, log, input, extrafiles):
while pipe.poll() is None:
read_all_pipes(log, rin, outdata, errdata)
# Process closed, drain all pipes...
# Pocess closed, drain all pipes...
read_all_pipes(log, rin, outdata, errdata)
finally:
log.flush()
@@ -183,8 +181,5 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
stderr = stderr.decode("utf-8")
if pipe.returncode != 0:
if log:
# Don't duplicate the output in the exception if logging it
raise ExecutionError(cmd, pipe.returncode, None, None)
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
return stdout, stderr


@@ -94,15 +94,12 @@ class LineFilterProgressHandler(ProgressHandler):
while True:
breakpos = self._linebuffer.find('\n') + 1
if breakpos == 0:
# for the case when the line with progress ends with only '\r'
breakpos = self._linebuffer.find('\r') + 1
if breakpos == 0:
break
break
line = self._linebuffer[:breakpos]
self._linebuffer = self._linebuffer[breakpos:]
# Drop any line feeds and anything that precedes them
lbreakpos = line.rfind('\r') + 1
if lbreakpos and lbreakpos != breakpos:
if lbreakpos:
line = line[lbreakpos:]
if self.writeline(filter_color(line)):
super().write(line)
@@ -148,7 +145,7 @@ class MultiStageProgressReporter:
for tasks made up of python code spread across multiple
classes / functions - the progress reporter object can
be passed around or stored at the object level and calls
to next_stage() and update() made wherever needed.
to next_stage() and update() made whereever needed.
"""
def __init__(self, d, stage_weights, debug=False):
"""


@@ -94,7 +94,7 @@ def versionVariableMatch(cfgData, keyword, pn):
# pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
# hence we do this manually rather than use OVERRIDES
ver = cfgData.getVar("%s_VERSION:pn-%s" % (keyword, pn))
ver = cfgData.getVar("%s_VERSION_pn-%s" % (keyword, pn))
if not ver:
ver = cfgData.getVar("%s_VERSION_%s" % (keyword, pn))
if not ver:
@@ -133,7 +133,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
if required_v is not None:
if preferred_v is not None:
logger.warning("REQUIRED_VERSION and PREFERRED_VERSION for package %s%s are both set using REQUIRED_VERSION %s", pn, itemstr, required_v)
logger.warn("REQUIRED_VERSION and PREFERRED_VERSION for package %s%s are both set using REQUIRED_VERSION %s", pn, itemstr, required_v)
else:
logger.debug("REQUIRED_VERSION is set for package %s%s", pn, itemstr)
# REQUIRED_VERSION always takes precedence over PREFERRED_VERSION
@@ -173,7 +173,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
pv_str = '%s:%s' % (preferred_e, pv_str)
if preferred_file is None:
if not required:
logger.warning("preferred version %s of %s not available%s", pv_str, pn, itemstr)
logger.warn("preferred version %s of %s not available%s", pv_str, pn, itemstr)
available_vers = []
for file_set in pkg_pn:
for f in file_set:
@@ -185,7 +185,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
available_vers.append(ver_str)
if available_vers:
available_vers.sort()
logger.warning("versions of %s available: %s", pn, ' '.join(available_vers))
logger.warn("versions of %s available: %s", pn, ' '.join(available_vers))
if required:
logger.error("required version %s of %s not available%s", pv_str, pn, itemstr)
else:
@@ -396,8 +396,8 @@ def getRuntimeProviders(dataCache, rdepend):
return rproviders
# Only search dynamic packages if we can't find anything in other variables
for pat_key in dataCache.packages_dynamic:
pattern = pat_key.replace(r'+', r"\+")
for pattern in dataCache.packages_dynamic:
pattern = pattern.replace(r'+', r"\+")
if pattern in regexp_cache:
regexp = regexp_cache[pattern]
else:
@@ -408,7 +408,7 @@ def getRuntimeProviders(dataCache, rdepend):
raise
regexp_cache[pattern] = regexp
if regexp.match(rdepend):
rproviders += dataCache.packages_dynamic[pat_key]
rproviders += dataCache.packages_dynamic[pattern]
logger.debug("Assuming %s is a dynamic package, but it may not exist" % rdepend)
return rproviders
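
The fix in this hunk is subtle: the loop must escape '+' before compiling the pattern (package names like "gtk+" are common) while still indexing packages_dynamic with the original, unescaped key; one side of the hunk reuses the escaped string for the lookup, which diverges for such names. A condensed illustration with made-up pattern data:

import re

packages_dynamic = {"lib32-.*": ["lib32-providers"], "gtk+-locale-.*": ["gtk+"]}

def runtime_providers(rdepend):
    providers = []
    for pat_key in packages_dynamic:
        # Compile from an escaped copy, but index the dict with pat_key itself
        pattern = pat_key.replace(r'+', r"\+")
        if re.match(pattern, rdepend):
            providers += packages_dynamic[pat_key]
    return providers

print(runtime_providers("gtk+-locale-en"))  # ['gtk+']
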

File diff suppressed because it is too large


@@ -26,9 +26,6 @@ import errno
import re
import datetime
import pickle
import traceback
import gc
import stat
import bb.server.xmlrpcserver
from bb import daemonize
from multiprocessing import queues
@@ -38,46 +35,9 @@ logger = logging.getLogger('BitBake')
class ProcessTimeout(SystemExit):
pass
def currenttime():
return datetime.datetime.now().strftime('%H:%M:%S.%f')
def serverlog(msg):
print(str(os.getpid()) + " " + currenttime() + " " + msg)
#Seems a flush here triggers filesystem sync like behaviour and long hangs in the server
#sys.stdout.flush()
#
# When we have lockfile issues, try and find information about which process is
# using the lockfile
#
def get_lockfile_process_msg(lockfile):
# Some systems may not have lsof available
procs = None
try:
procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
pass
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs is None:
# Fall back to fuser if lsof is unavailable
try:
procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
pass
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs:
return procs.decode("utf-8")
return None
class idleFinish():
def __init__(self, msg):
self.msg = msg
print(str(os.getpid()) + " " + datetime.datetime.now().strftime('%H:%M:%S.%f') + " " + msg)
sys.stdout.flush()
class ProcessServer():
profile_filename = "profile.log"
@@ -96,19 +56,12 @@ class ProcessServer():
self.maxuiwait = 30
self.xmlrpc = False
self.idle = None
# Need a lock for _idlefuns changes
self._idlefuns = {}
self._idlefuncsLock = threading.Lock()
self.idle_cond = threading.Condition(self._idlefuncsLock)
self.bitbake_lock = lock
self.bitbake_lock_name = lockname
self.sock = sock
self.sockname = sockname
# It is possible the directory may be renamed. Cache the inode of the socket file
# so we can tell if things changed.
self.sockinode = os.stat(self.sockname)[stat.ST_INO]
self.server_timeout = server_timeout
self.timeout = self.server_timeout
@@ -117,9 +70,7 @@ class ProcessServer():
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
with bb.utils.lock_timeout(self._idlefuncsLock):
self._idlefuns[function] = data
serverlog("Registering idle function %s" % str(function))
self._idlefuns[function] = data
def run(self):
@@ -158,31 +109,6 @@ class ProcessServer():
return ret
def _idle_check(self):
return len(self._idlefuns) == 0 and self.cooker.command.currentAsyncCommand is None
def wait_for_idle(self, timeout=30):
# Wait for the idle loop to have cleared
with bb.utils.lock_timeout(self._idlefuncsLock):
return self.idle_cond.wait_for(self._idle_check, timeout) is not False
def set_async_cmd(self, cmd):
with bb.utils.lock_timeout(self._idlefuncsLock):
ret = self.idle_cond.wait_for(self._idle_check, 30)
if ret is False:
return False
self.cooker.command.currentAsyncCommand = cmd
return True
def clear_async_cmd(self):
with bb.utils.lock_timeout(self._idlefuncsLock):
self.cooker.command.currentAsyncCommand = None
self.idle_cond.notify_all()
def get_async_cmd(self):
with bb.utils.lock_timeout(self._idlefuncsLock):
return self.cooker.command.currentAsyncCommand
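
This block is the server's idle bookkeeping: one lock guards both the idle-function table and the current async command, and a Condition lets callers block until the server drains before accepting new work. The wait_for pattern in isolation (class and names are illustrative, not bitbake API):

import threading

class IdleTracker:
    def __init__(self):
        self._lock = threading.Lock()
        self._cond = threading.Condition(self._lock)
        self._pending = set()

    def add(self, name):
        with self._lock:
            self._pending.add(name)

    def done(self, name):
        with self._lock:
            self._pending.discard(name)
            self._cond.notify_all()  # wake any wait_for_idle() callers

    def wait_for_idle(self, timeout=30):
        with self._lock:
            # wait_for re-checks the predicate each time it is notified;
            # returns False if the timeout expires first
            return self._cond.wait_for(lambda: not self._pending, timeout)
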
def main(self):
self.cooker.pre_serve()
@@ -197,19 +123,14 @@ class ProcessServer():
fds.append(self.xmlrpc)
seendata = False
serverlog("Entering server connection loop")
serverlog("Lockfile is: %s\nSocket is %s (%s)" % (self.bitbake_lock_name, self.sockname, os.path.exists(self.sockname)))
def disconnect_client(self, fds):
serverlog("Disconnecting Client (socket: %s)" % os.path.exists(self.sockname))
serverlog("Disconnecting Client")
if self.controllersock:
fds.remove(self.controllersock)
self.controllersock.close()
self.controllersock = False
if self.haveui:
# Wait for the idle loop to have cleared (30s max)
if not self.wait_for_idle(30):
serverlog("Idle loop didn't finish queued commands after 30s, exiting.")
self.quit = True
fds.remove(self.command_channel)
bb.event.unregister_UIHhandler(self.event_handle, True)
self.command_channel_reply.writer.close()
@@ -221,12 +142,12 @@ class ProcessServer():
self.cooker.clientComplete()
self.haveui = False
ready = select.select(fds,[],[],0)[0]
if newconnections and not self.quit:
if newconnections:
serverlog("Starting new client")
conn = newconnections.pop(-1)
fds.append(conn)
self.controllersock = conn
elif not self.timeout and not ready:
elif self.timeout is None and not ready:
serverlog("No timeout, exiting.")
self.quit = True
@@ -293,14 +214,11 @@ class ProcessServer():
continue
try:
serverlog("Running command %s" % command)
reply = self.cooker.command.runCommand(command, self)
serverlog("Sending reply %s" % repr(reply))
self.command_channel_reply.send(reply)
serverlog("Command Completed (socket: %s)" % os.path.exists(self.sockname))
self.command_channel_reply.send(self.cooker.command.runCommand(command))
serverlog("Command Completed")
except Exception as e:
stack = traceback.format_exc()
serverlog('Exception in server main event loop running command %s (%s)' % (command, stack))
logger.exception('Exception in server main event loop running command %s (%s)' % (command, stack))
serverlog('Exception in server main event loop running command %s (%s)' % (command, str(e)))
logger.exception('Exception in server main event loop running command %s (%s)' % (command, str(e)))
if self.xmlrpc in ready:
self.xmlrpc.handle_requests()
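
Editor's note: the dispatch above runs each received command and replies on the channel; the newer variant logs the full traceback rather than just str(e), which would discard the stack. A hedged sketch of the same shape (run_command, channel_reply and log stand in for the bitbake objects):

    import traceback

    def handle_command(channel_reply, run_command, command, log):
        try:
            log("Running command %s" % (command,))
            reply = run_command(command)
            channel_reply.send(reply)
            log("Command completed")
        except Exception:
            # format_exc() preserves the stack; str(e) alone loses it.
            log("Exception running command %s:\n%s" % (command, traceback.format_exc()))
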
@@ -323,25 +241,19 @@ class ProcessServer():
ready = self.idle_commands(.1, fds)
if self.idle:
self.idle.join()
if len(threading.enumerate()) != 1:
serverlog("More than one thread left?: " + str(threading.enumerate()))
serverlog("Exiting (socket: %s)" % os.path.exists(self.sockname))
serverlog("Exiting")
# Remove the socket file so we don't get any more connections to avoid races
# The build directory could have been renamed so if the file isn't the one we created
# we shouldn't delete it.
try:
sockinode = os.stat(self.sockname)[stat.ST_INO]
if sockinode == self.sockinode:
os.unlink(self.sockname)
else:
serverlog("bitbake.sock inode mismatch (%s vs %s), not deleting." % (sockinode, self.sockinode))
except Exception as err:
serverlog("Removing socket file '%s' failed (%s)" % (self.sockname, err))
os.unlink(self.sockname)
except:
pass
self.sock.close()
try:
self.cooker.shutdown(True, idle=False)
self.cooker.shutdown(True)
self.cooker.notifier.stop()
self.cooker.confignotifier.stop()
except:
@@ -349,9 +261,6 @@ class ProcessServer():
self.cooker.post_serve()
if len(threading.enumerate()) != 1:
serverlog("More than one thread left?: " + str(threading.enumerate()))
# Flush logs before we release the lock
sys.stdout.flush()
sys.stderr.flush()
@@ -367,21 +276,20 @@ class ProcessServer():
except FileNotFoundError:
return None
lockcontents = get_lock_contents(lockfile)
serverlog("Original lockfile contents: " + str(lockcontents))
lock.close()
lock = None
while not lock:
i = 0
lock = None
if not os.path.exists(os.path.dirname(lockfile)):
serverlog("Lockfile directory gone, exiting.")
return
while not lock and i < 30:
lock = bb.utils.lockfile(lockfile, shared=False, retry=False, block=False)
if not lock:
newlockcontents = get_lock_contents(lockfile)
if not newlockcontents[0].startswith((f"{os.getpid()}\n", f"{os.getpid()} ")):
if newlockcontents != lockcontents:
# A new server was started, the lockfile contents changed, we can exit
serverlog("Lockfile now contains different contents, exiting: " + str(newlockcontents))
return
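
Editor's note: while waiting to reacquire bitbake.lock at shutdown, the server re-reads the lock file; the newer check treats "first line no longer starts with our pid" as evidence that a new server owns the lock, where the older code compared whole contents. A self-contained sketch of the pid-prefix test (plain file I/O instead of bitbake's lock helpers); note that startswith() takes a tuple of alternatives, not a list:

    import os

    def lock_taken_over(lockfile):
        """Return True if the lockfile's first line no longer names our pid."""
        try:
            with open(lockfile) as f:
                first = f.readline()
        except FileNotFoundError:
            return False
        # startswith() accepts a tuple of prefixes; a list raises TypeError.
        return not first.startswith((f"{os.getpid()}\n", f"{os.getpid()} "))
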
@@ -395,108 +303,75 @@ class ProcessServer():
return
if not lock:
procs = get_lockfile_process_msg(lockfile)
msg = ["Delaying shutdown due to active processes which appear to be holding bitbake.lock"]
if procs:
msg.append(":\n%s" % procs)
serverlog("".join(msg))
def idle_thread(self):
if self.cooker.configuration.profile:
try:
import cProfile as profile
except:
import profile
prof = profile.Profile()
ret = profile.Profile.runcall(prof, self.idle_thread_internal)
prof.dump_stats("profile-mainloop.log")
bb.utils.process_profilelog("profile-mainloop.log")
serverlog("Raw profiling information saved to profile-mainloop.log and processed statistics to profile-mainloop.log.processed")
else:
self.idle_thread_internal()
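
Editor's note: when profiling is enabled, idle_thread() above wraps the loop in a profiler, dumps raw stats, and post-processes them. bb.utils.process_profilelog() is bitbake-specific; a standard-library-only sketch of the same idea:

    import cProfile, pstats

    def run_profiled(func, logname="profile-mainloop.log"):
        prof = cProfile.Profile()
        ret = prof.runcall(func)
        prof.dump_stats(logname)          # raw, machine-readable stats
        stats = pstats.Stats(logname)
        stats.sort_stats("cumulative").print_stats(20)  # readable summary
        return ret
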
def idle_thread_internal(self):
def remove_idle_func(function):
with bb.utils.lock_timeout(self._idlefuncsLock):
del self._idlefuns[function]
self.idle_cond.notify_all()
while not self.quit:
nextsleep = 0.1
fds = []
with bb.utils.lock_timeout(self._idlefuncsLock):
items = list(self._idlefuns.items())
for function, data in items:
# Some systems may not have lsof available
procs = None
try:
retval = function(self, data, False)
if isinstance(retval, idleFinish):
serverlog("Removing idle function %s at idleFinish" % str(function))
remove_idle_func(function)
self.cooker.command.finishAsyncCommand(retval.msg)
nextsleep = None
elif retval is False:
serverlog("Removing idle function %s" % str(function))
remove_idle_func(function)
nextsleep = None
elif retval is True:
nextsleep = None
elif isinstance(retval, float) and nextsleep:
if (retval < nextsleep):
nextsleep = retval
elif nextsleep is None:
continue
else:
fds = fds + retval
except SystemExit:
raise
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running idle function')
remove_idle_func(function)
serverlog("Exception %s broke the idle_thread, exiting" % traceback.format_exc())
self.quit = True
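
Editor's note: each registered idle function communicates through its return value: an idleFinish marker or False removes it, True means poll again immediately, a float is a sleep hint, and anything else is taken as file descriptors to select() on. A sketch of that dispatch protocol (idleFinish mirrors the marker class used in the diff):

    import select

    class idleFinish:
        def __init__(self, msg):
            self.msg = msg

    def run_idle_funcs(funcs, server):
        nextsleep, fds = 0.1, []
        for function, data in list(funcs.items()):
            retval = function(server, data, False)
            if isinstance(retval, idleFinish) or retval is False:
                del funcs[function]          # done, stop calling it
                nextsleep = None
            elif retval is True:
                nextsleep = None             # busy: loop again without sleeping
            elif isinstance(retval, float) and nextsleep:
                nextsleep = min(nextsleep, retval)
            elif nextsleep is None:
                continue
            else:
                fds += retval                # fds to wait on
        if nextsleep is not None:
            select.select(fds, [], [], nextsleep)
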
# Create new heartbeat event?
now = time.time()
if bb.event._heartbeat_enabled and now >= self.next_heartbeat:
# We might have missed heartbeats. Just trigger once in
# that case and continue after the usual delay.
self.next_heartbeat += self.heartbeat_seconds
if self.next_heartbeat <= now:
self.next_heartbeat = now + self.heartbeat_seconds
if hasattr(self.cooker, "data"):
heartbeat = bb.event.HeartbeatEvent(now)
procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
continue
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs is None:
# Fall back to fuser if lsof is unavailable
try:
bb.event.fire(heartbeat, self.cooker.data)
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running heartbeat function')
serverlog("Exception %s broke in idle_thread, exiting" % traceback.format_exc())
self.quit = True
if nextsleep and bb.event._heartbeat_enabled and now + nextsleep > self.next_heartbeat:
# Shorten timeout so that we wake up in time for
# the heartbeat.
nextsleep = self.next_heartbeat - now
procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
continue
except OSError as e:
if e.errno != errno.ENOENT:
raise
if nextsleep is not None:
select.select(fds,[],[],nextsleep)[0]
msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
if procs:
msg += ":\n%s" % str(procs.decode("utf-8"))
serverlog(msg)
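
Editor's note: to explain a delayed shutdown the server asks the OS which processes still hold bitbake.lock, preferring lsof and falling back to fuser when lsof is absent, as the interleaved hunk above shows. A runnable sketch of that fallback chain:

    import errno, subprocess

    def lock_holders(lockfile):
        """Best-effort report of processes holding lockfile, or None."""
        for cmd in (["lsof", "-w", lockfile], ["fuser", "-v", lockfile]):
            try:
                return subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode()
            except subprocess.CalledProcessError:
                return None          # file deleted, or no holders
            except OSError as e:
                if e.errno != errno.ENOENT:
                    raise            # tool exists but failed unexpectedly
                # tool not installed: try the next one
        return None
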
def idle_commands(self, delay, fds=None):
nextsleep = delay
if not fds:
fds = []
if not self.idle:
self.idle = threading.Thread(target=self.idle_thread)
self.idle.start()
elif self.idle and not self.idle.is_alive():
serverlog("Idle thread terminated, main thread exiting too")
bb.error("Idle thread terminated, main thread exiting too")
self.quit = True
for function, data in list(self._idlefuns.items()):
try:
retval = function(self, data, False)
if retval is False:
del self._idlefuns[function]
nextsleep = None
elif retval is True:
nextsleep = None
elif isinstance(retval, float) and nextsleep:
if (retval < nextsleep):
nextsleep = retval
elif nextsleep is None:
continue
else:
fds = fds + retval
except SystemExit:
raise
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running idle function')
del self._idlefuns[function]
self.quit = True
# Create new heartbeat event?
now = time.time()
if now >= self.next_heartbeat:
# We might have missed heartbeats. Just trigger once in
# that case and continue after the usual delay.
self.next_heartbeat += self.heartbeat_seconds
if self.next_heartbeat <= now:
self.next_heartbeat = now + self.heartbeat_seconds
if hasattr(self.cooker, "data"):
heartbeat = bb.event.HeartbeatEvent(now)
bb.event.fire(heartbeat, self.cooker.data)
if nextsleep and now + nextsleep > self.next_heartbeat:
# Shorten timeout so that we wake up in time for
# the heartbeat.
nextsleep = self.next_heartbeat - now
if nextsleep is not None:
if self.xmlrpc:
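
Editor's note: the heartbeat logic has two subtleties: after missed beats the deadline is resynchronised to now + interval instead of firing a burst of catch-up events, and the select() timeout is clamped so the loop wakes in time for the next beat. A sketch of both:

    import time

    class Heartbeat:
        def __init__(self, seconds):
            self.seconds = seconds
            self.next = time.time() + seconds

        def maybe_fire(self, fire):
            now = time.time()
            if now >= self.next:
                self.next += self.seconds
                if self.next <= now:          # missed beats: fire once, resync
                    self.next = now + self.seconds
                fire(now)
            return now

        def clamp_sleep(self, now, nextsleep):
            # Never sleep past the next heartbeat deadline.
            if nextsleep and now + nextsleep > self.next:
                nextsleep = self.next - now
            return nextsleep
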
@@ -516,18 +391,12 @@ class ServerCommunicator():
self.recv = recv
def runCommand(self, command):
try:
self.connection.send(command)
except BrokenPipeError as e:
raise BrokenPipeError("bitbake-server might have died or been forcibly stopped, ie. OOM killed") from e
self.connection.send(command)
if not self.recv.poll(30):
logger.info("No reply from server in 30s (for command %s at %s)" % (command[0], currenttime()))
logger.info("No reply from server in 30s")
if not self.recv.poll(30):
raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server (60s at %s)" % currenttime())
try:
ret, exc = self.recv.get()
except EOFError as e:
raise EOFError("bitbake-server might have died or been forcibly stopped, ie. OOM killed") from e
raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server (60s)")
ret, exc = self.recv.get()
# Should probably turn all exceptions in exc back into exceptions?
# For now, at least handle BBHandledException
if exc and ("BBHandledException" in exc or "SystemExit" in exc):
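
Editor's note: on the client side, runCommand() polls twice for 30 s each: the first expiry only logs a warning (long-running commands are normal), the second raises ProcessTimeout, and BrokenPipeError/EOFError are rewrapped so an OOM-killed server yields a readable message. A sketch over multiprocessing pipe endpoints (ProcessTimeout is a stand-in exception here):

    class ProcessTimeout(Exception):
        pass

    def run_command(conn, recv, command, log):
        try:
            conn.send(command)
        except BrokenPipeError as e:
            raise BrokenPipeError("server died or was forcibly stopped (OOM?)") from e
        if not recv.poll(30):
            log("No reply from server in 30s (for command %s)" % command[0])
            if not recv.poll(30):
                raise ProcessTimeout("no reply from server after 60s")
        try:
            return recv.recv()
        except EOFError as e:
            raise EOFError("server died or was forcibly stopped (OOM?)") from e
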
@@ -560,7 +429,6 @@ class BitBakeProcessServerConnection(object):
self.socket_connection = sock
def terminate(self):
self.events.close()
self.socket_connection.close()
self.connection.connection.close()
self.connection.recv.close()
@@ -571,14 +439,13 @@ start_log_datetime_format = '%Y-%m-%d %H:%M:%S.%f'
class BitBakeServer(object):
def __init__(self, lock, sockname, featureset, server_timeout, xmlrpcinterface, profile):
def __init__(self, lock, sockname, featureset, server_timeout, xmlrpcinterface):
self.server_timeout = server_timeout
self.xmlrpcinterface = xmlrpcinterface
self.featureset = featureset
self.sockname = sockname
self.bitbake_lock = lock
self.profile = profile
self.readypipe, self.readypipein = os.pipe()
# Place the log in the builddirectory alongside the lock file
@@ -599,7 +466,7 @@ class BitBakeServer(object):
try:
r = ready.get()
except EOFError:
# Trap the child exiting/closing the pipe and error out
r = None
if not r or r[0] != "r":
ready.close()
@@ -642,9 +509,9 @@ class BitBakeServer(object):
os.set_inheritable(self.bitbake_lock.fileno(), True)
os.set_inheritable(self.readypipein, True)
serverscript = os.path.realpath(os.path.dirname(__file__) + "/../../../bin/bitbake-server")
os.execl(sys.executable, sys.executable, serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(int(self.profile)), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface, profile):
def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface):
import bb.cookerdata
import bb.cooker
@@ -656,7 +523,6 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
# Create server control socket
if os.path.exists(sockname):
serverlog("WARNING: removing existing socket file '%s'" % sockname)
os.unlink(sockname)
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
@@ -673,8 +539,7 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
writer = ConnectionWriter(readypipeinfd)
try:
featureset = []
cooker = bb.cooker.BBCooker(featureset, server)
cooker.configuration.profile = profile
cooker = bb.cooker.BBCooker(featureset, server.register_idle_function)
except bb.BBHandledException:
return None
writer.send("r")
@@ -684,7 +549,7 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
server.run()
finally:
# Flush any messages/errors to the logfile before exit
sys.stdout.flush()
sys.stderr.flush()
@@ -789,18 +654,23 @@ class BBUIEventQueue:
self.reader = ConnectionReader(readfd)
self.t = threading.Thread()
self.t.setDaemon(True)
self.t.run = self.startCallbackHandler
self.t.start()
def getEvent(self):
with bb.utils.lock_timeout(self.eventQueueLock):
if len(self.eventQueue) == 0:
return None
self.eventQueueLock.acquire()
item = self.eventQueue.pop(0)
if len(self.eventQueue) == 0:
self.eventQueueNotify.clear()
if len(self.eventQueue) == 0:
self.eventQueueLock.release()
return None
item = self.eventQueue.pop(0)
if len(self.eventQueue) == 0:
self.eventQueueNotify.clear()
self.eventQueueLock.release()
return item
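
Editor's note: both versions of getEvent() pop under a lock and clear a threading.Event once the queue empties; the newer code merely swaps the explicit acquire/release for a with-block (via bitbake's lock_timeout wrapper). A plain-threading sketch of the same structure:

    import threading

    class EventQueue:
        def __init__(self):
            self.queue = []
            self.lock = threading.Lock()
            self.notify = threading.Event()

        def get(self):
            with self.lock:
                if not self.queue:
                    return None
                item = self.queue.pop(0)
                if not self.queue:
                    self.notify.clear()   # nothing left to wake consumers for
                return item

        def put(self, event):
            with self.lock:
                self.queue.append(event)
                self.notify.set()

        def wait(self, delay):
            self.notify.wait(delay)
            return self.get()
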
def waitEvent(self, delay):
@@ -808,9 +678,10 @@ class BBUIEventQueue:
return self.getEvent()
def queue_event(self, event):
with bb.utils.lock_timeout(self.eventQueueLock):
self.eventQueue.append(event)
self.eventQueueNotify.set()
self.eventQueueLock.acquire()
self.eventQueue.append(event)
self.eventQueueNotify.set()
self.eventQueueLock.release()
def send_event(self, event):
self.queue_event(pickle.loads(event))
@@ -819,17 +690,13 @@ class BBUIEventQueue:
bb.utils.set_process_name("UIEventQueue")
while True:
try:
ready = self.reader.wait(0.25)
if ready:
event = self.reader.get()
self.queue_event(event)
except (EOFError, OSError, TypeError):
self.reader.wait()
event = self.reader.get()
self.queue_event(event)
except EOFError:
# Easiest way to exit is to close the file descriptor to cause an exit
break
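
Editor's note: the newer handler polls the pipe with a 0.25 s timeout instead of blocking forever, so close() can unblock and join the thread, and it tolerates OSError/TypeError from a pipe torn down mid-read. A sketch, assuming a reader exposing the ConnectionReader-style wait()/get() seen in the diff:

    import threading

    def start_reader(reader, on_event):
        def loop():
            while True:
                try:
                    if reader.wait(0.25):     # poll so close() can interrupt us
                        on_event(reader.get())
                except (EOFError, OSError):
                    break                     # fd closed: exit the thread
        t = threading.Thread(target=loop, daemon=True)
        t.start()
        return t
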
def close(self):
self.reader.close()
self.t.join()
class ConnectionReader(object):
@@ -844,7 +711,7 @@ class ConnectionReader(object):
return self.reader.poll(timeout)
def get(self):
with bb.utils.lock_timeout(self.rlock):
with self.rlock:
res = self.reader.recv_bytes()
return multiprocessing.reduction.ForkingPickler.loads(res)
@@ -863,31 +730,10 @@ class ConnectionWriter(object):
# Why bb.event needs this I have no idea
self.event = self
def _send(self, obj):
gc.disable()
with bb.utils.lock_timeout(self.wlock):
self.writer.send_bytes(obj)
gc.enable()
def send(self, obj):
obj = multiprocessing.reduction.ForkingPickler.dumps(obj)
# See notes/code in CookerParser
# We must not terminate holding this lock else processes will hang.
# For SIGTERM, raising afterwards avoids this.
# For SIGINT, we don't want to have written partial data to the pipe.
# pthread_sigmask block/unblock would be nice but doesn't work, https://bugs.python.org/issue47139
process = multiprocessing.current_process()
if process and hasattr(process, "queue_signals"):
with bb.utils.lock_timeout(process.signal_threadlock):
process.queue_signals = True
self._send(obj)
process.queue_signals = False
while len(process.signal_received) > 0:
sig = process.signal_received.pop()
process.handle_sig(sig, None)
else:
self._send(obj)
with self.wlock:
self.writer.send_bytes(obj)
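
Editor's note: the most delicate change above. While holding the pipe's write lock the process must not be killed by a signal handler, or peers would hang on a half-written pipe, so signals are queued during the send and replayed afterwards; gc is disabled so a finalizer cannot re-enter the writer. A simplified sketch of the defer/replay idea; queue_signals/signal_threadlock are attributes bitbake adds to its worker processes, modelled directly here:

    import gc, signal, threading

    class DeferringWriter:
        def __init__(self, writer):
            self.writer = writer
            self.wlock = threading.Lock()
            self.pending = []
            self.deferring = False
            signal.signal(signal.SIGTERM, self._handler)  # main thread only

        def _handler(self, sig, frame):
            if self.deferring:
                self.pending.append(sig)      # don't die mid-write
            else:
                raise SystemExit(sig)

        def send_bytes(self, data):
            gc.disable()                      # no finalizers re-entering the pipe
            with self.wlock:
                self.deferring = True
                try:
                    self.writer.send_bytes(data)
                finally:
                    self.deferring = False
            gc.enable()
            while self.pending:
                self._handler(self.pending.pop(), None)  # replay after the lock
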
def fileno(self):
return self.writer.fileno()


@@ -11,7 +11,6 @@ import hashlib
import time
import inspect
from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import bb.server.xmlrpcclient
import bb
@@ -118,7 +117,7 @@ class BitBakeXMLRPCServerCommands():
"""
Run a cooker command on the server
"""
return self.server.cooker.command.runCommand(command, self.server.parent, self.server.readonly)
return self.server.cooker.command.runCommand(command, self.server.readonly)
def getEventHandle(self):
return self.event_handle

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff