Compare commits


153 Commits

Author SHA1 Message Date
Richard Purdie
b53230c08d build-appliance-image: Update to honister head revision
(From OE-Core rev: 70384dd958c57d1da924a66cffa35f80eb60d4b0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-25 22:00:52 +00:00
Anuj Mittal
3837e8bb9f poky.conf: bump version for 3.4.1 honister release
(From meta-yocto rev: ee721e0fa7624c29979d9b7b3f41e9a76eedd453)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-25 22:00:04 +00:00
Michael Opdenacker
e3d86eb738 manuals: releases.rst: move gatesgarth to outdated releases section
(From yocto-docs rev: 67a7465375fb845c1853c0b988baa675c7a5d0e3)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Reported-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-24 21:14:09 +00:00
Andres Beltran
b4c64791a0 create-spdx: Fix key errors in do_create_runtime_spdx
Currently, the do_create_runtime_spdx task fails with a KeyError if a
dependency is not contained in the package providers dictionary. Add a
check before using "dep" as a key in "providers".
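
A minimal sketch of the kind of guard involved, in Python (names here
are illustrative, not the exact create-spdx code):

    def resolve_runtime_providers(deps, providers):
        # Skip dependencies with no entry in the providers dictionary
        # instead of indexing blindly, which is what raised the KeyError.
        resolved = []
        for dep in deps:
            if dep not in providers:
                continue
            resolved.append(providers[dep])
        return resolved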

(From OE-Core rev: ac9b387c5e19386ce3c5cd88b42dad24d25b0f70)

Signed-off-by: Andres Beltran <abeltran@linux.microsoft.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 140ce5ef5e8f10251091660e3ef76f315f409076)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-24 21:12:50 +00:00
Saul Wold
5bcb2b1732 create-spdx: Protect against None from LICENSE_PATH
If LICENSE_PATH is not set, then the split() will fail on a NoneType.
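
A minimal sketch of the defensive pattern, assuming the BitBake
datastore API (d.getVar() returns None for unset variables):

    def get_license_paths(d):
        # Fall back to "" so split() never runs on NoneType when
        # LICENSE_PATH is unset.
        return (d.getVar("LICENSE_PATH") or "").split()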

(From OE-Core rev: 123ee0fc0d1470427cc563f512f621e0172cc232)

Signed-off-by: Saul Wold <saul.wold@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d6260decae6d2654f6e058f12ca02d582a8ef5a4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-24 21:12:50 +00:00
Saul Wold
07c12415c6 create_spdx: ensure is_work_shared() is unique
There is a function with the same name, is_work_shared(), in the archiver
class; this causes a conflict when both classes are included. Use
"work-shared" as the check in WORKDIR to allow for other packages beyond
the kernel and gcc that use a common shared-work source directory.
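
A sketch of the idea in Python (the renamed function and the exact
condition used in the class may differ):

    def is_work_shared_spdx(d):
        # "work-shared" appears in WORKDIR for any recipe using a
        # common shared-work source directory, not just kernel/gcc.
        return "work-shared" in (d.getVar("WORKDIR") or "")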

(From OE-Core rev: 1d350fd2a0db57617fbc62eb1d65f3ffa2667551)

Signed-off-by: Saul Wold <saul.wold@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 18eab77ee65c73b17225e69c7ba446ab1c69fa92)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-24 21:12:50 +00:00
Richard Purdie
10a700c094 glibc: Backport fix for CVE-2021-43396
Backport the fix for CVE-2021-43396. It is disputed whether this is a
security issue; however, the fix applies easily, so we may as well take it.

(From OE-Core rev: 8d7a88bdee734df527a0ed954a25f27ac975071f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e8de9b01c6b305b2498c5f942397a49ae2af0cde)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-24 21:12:50 +00:00
Claus Stovgaard
fb09b37f2a cups: Fix missing installation of cups sysv init scripts
The packageconfig needs to be --disable-systemd, as documented in the
cups configure file. With the current value "--without-systemd", the
SYSTEMD_DIR variable ends up being set to "no".

This is caused by the --without-* section in the configure file, which
results in:

eval with_$ac_useropt=no ;;

$ac_useropt is "systemd", causing the variable $with_systemd to be set
to "no" because of the test below:

if test ${with_systemd+y}
then :
  withval=$with_systemd; SYSTEMD_DIR="$withval"
else $as_nop
  SYSTEMD_DIR=""
fi

The cups configure script tests whether SYSTEMD_DIR is empty to decide
if the init scripts need to be installed. A value of "no" means that no
init scripts are installed.

With --disable-systemd it works as expected, installing the init files.
That said, cups should really improve its configure script.

(From OE-Core rev: e2518c2eba8c6e486aee3273dc2cba9ab51ffb69)

Signed-off-by: Claus Stovgaard <clst@ambu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 967fdd2ba12f22d8e46600ff085833993a32cfeb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-24 21:12:50 +00:00
Richard Purdie
8178470ec6 bitbake: fetch2: Fix url remap issue and add testcase
Using "" as a target for .replace() is a really bad idea, as it duplicates
the replacement for every character in the string. Add a testcase which
triggered this and fix the code to return the correct result.
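
The pitfall is easy to reproduce in Python, where an empty target
string makes str.replace() insert the replacement around every
character:

    >>> "abc".replace("", "-")
    '-a-b-c-'

This is why an empty match string has to be special-cased before any
url remapping reaches .replace().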

(Bitbake rev: 44a83b373e1fc34c93cd4a6c6cf8b73b230c1520)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3af1ecf049d2eed56f6d319dc7df6eb4a3d4eebc)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-24 10:08:24 +00:00
Richard Purdie
c4f08fc43d bitbake: utils: Handle lockfile filenames that are too long for filesystems
The fetcher mirror code can go crazy creating lock filenames which exceed the
filesystem limits. When this happens, the code will loop/hang.

Handle the "filename too long" exception correctly, but also truncate
lockfile names to under 256 characters, since the worst-case outcome is
lockfile overlap and reduced parallelism.
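
A sketch of the truncation in Python (illustrative only; the actual
bitbake.utils code also handles the OSError raised for over-long names):

    import os

    def truncate_lockfile_name(path, limit=255):
        # Cap the file name component at the usual filesystem limit.
        # A truncated name can at worst overlap with another lock,
        # costing parallelism, never correctness.
        dirname, name = os.path.split(path)
        return os.path.join(dirname, name[:limit])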

(Bitbake rev: 64498ecb094b7911d10b07c098d5a966e79f95b3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 63baf3440b16e41ac6601de21ced94a94bdf1509)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-24 10:08:23 +00:00
Michael Opdenacker
243758b8f4 ref-manual: update system requirements
Assuming the same support status as on the development version.

(From yocto-docs rev: b608ee12e7ce5b379bffe2a6e0b84d289b84cffc)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-24 10:04:34 +00:00
Peter Kjellerstedt
b6b0af0889 insane.bbclass: Add a check for directories that are expected to be empty
The empty-dirs QA check verifies that all directories specified in
QA_EMPTY_DIRS are empty. It is possible to specify why a directory is
expected to be empty by defining QA_EMPTY_DIRS_RECOMMENDATION:<path>,
which will then be included in the error message if the directory is
not empty. If it is not specified for a directory, then "but it is
expected to be empty" will be used.

Compared to the corresponding patch for master, there are two
differences:

* "/var/volatile" is not added to QA_EMPTY_DIRS by default, and
* "empty-dirs" is not enabled in ERROR_QA (nor in WARN_QA).
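
A sketch in Python of what the check amounts to (the variable plumbing
through the datastore is omitted; names are illustrative):

    import os

    def check_empty_dirs(rootfs, empty_dirs, recommendations):
        # Flag any configured directory that contains files, using the
        # per-directory recommendation text when one was provided.
        messages = []
        for d in empty_dirs:
            path = os.path.join(rootfs, d.lstrip("/"))
            if os.path.isdir(path) and os.listdir(path):
                why = recommendations.get(d, "but it is expected to be empty")
                messages.append("%s is not empty, %s" % (d, why))
        return messages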

(From OE-Core rev: 9f3fbfc02ae6fadffbcc1bda1fa75dfe140d05c5)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Richard Purdie
9594c5893b mirrors: Add kernel.org sources mirror for downloads.yoctoproject.org
kernel.org now has a mirror of the downloads.yoctoproject.org sources
archive so include this in our mirrors list.

(From OE-Core rev: d7fe71c0fa0f368037b20d423c4c45d91c108a8c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f602b6c2046bbc52a95dcc68a754f1cbb2db6761)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Richard Purdie
ca1d3dee3c uninative: Add version to uninative tarball name
uninative works via hashes and doesn't need the version in the tarball
name, but having it does make things easier to inspect in DL_DIR. There
were reasons for omitting it, such as ease of publication of the build
tarballs, but we can handle those differently now, and the signature
issues from the early code are no longer a problem. From 3.4 onwards we
can use a versioned name.

[YOCTO #12970]

(From OE-Core rev: aca617aada3a06a6b460bf477541639f44681b32)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit dadba70d6a24d8ebb5576598efffa973151c7218)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Jon Mason
719728fe47 scripts/lib/wic/help.py: Update Fedora Kickstart URLs
The URLs describing Kickstart are no longer valid and do not redirect to
the correct location.  Update them with the correct location.

(From OE-Core rev: 4878c1180dc6df7012ae28afd9a84645cc094c0b)

Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e5ac75f93c8128b0761af5fee99e8603ddd1657d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Richard Purdie
2bfad06778 wpa-supplicant: Match package override to PACKAGES for pkg_postinst
In PACKAGES, ${PN} is used, so it makes sense for the pkg_postinst
variable override to match; otherwise it causes user confusion.

[YOCTO #14616]

(From OE-Core rev: 2e0bbd8edcdcc892e593848e618a9a00f6dac05f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ae9094d45bbfff377bd542939e12a8451a4959b6)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Richard Purdie
c04d6ccd4a scripts/oe-package-browser: Handle no packages being built
Give the user a proper error message if no packages have been built,
rather than a less friendly traceback.

[YOCTO #14619]

(From OE-Core rev: 879a176f7159d1b3f5a9dc2116017b4a08172468)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b14c176b7dd74b7d63ca0f72e6e00fbf209f5a0b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Richard Purdie
e8d40a2dab scripts/oe-package-browser: Fix after overrides change
After the overrides change, the format of pkgdata changed and this
usage of configparser no longer works. This change is a band-aid to make
things work, but the pkgdata format isn't very similar to ini files, so
this may need to be reimplemented in a better way in the long run.
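
A sketch of the kind of line-based parsing that suits pkgdata better
than configparser (assuming the flat "KEY: value" layout of pkgdata
files):

    def parse_pkgdata(path):
        # After the overrides change, keys themselves contain colons
        # (e.g. FILES:pkgname), so split on the first colon-space
        # rather than treating the data as an ini file.
        values = {}
        with open(path) as f:
            for line in f:
                key, sep, value = line.partition(": ")
                if sep:
                    values[key.strip()] = value.strip()
        return values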

[YOCTO #14619]

(From OE-Core rev: b27a11f4ddc0c10ff7e5fb447431bff1411a5417)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 25a8ec6e2891b71bc280aacaf5f62ecc4b0bd1d1)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Kai Kang
7884e05de9 convert-srcuri.py: use regex to check space in SRC_URI
There may be zero, one, or more spaces (including tabs) before a
backslash in SRC_URI. Use a regex to check and update. This helps avoid
malformed URIs such as in the open-iscsi-user recipe in meta-openstack:

SRC_URI = "git://github.com/open-iscsi/open-iscsi.git;protocol=https  ;branch=master \

And it helps to check more recipes, such as concurrent-ruby in the same
layer:

SRC_URI = "git://github.com/ruby-concurrency/concurrent-ruby.git;protocol=https;tag=v1.1.6\
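
A sketch of the regex approach in Python (the pattern and the inserted
parameter are illustrative, not the exact script code):

    import re

    line = 'SRC_URI = "git://github.com/example/repo.git;protocol=https  \\'
    # Insert the new parameter before any run of spaces/tabs (possibly
    # empty) that precedes the continuation backslash, keeping the URI
    # well-formed in both of the cases above.
    fixed = re.sub(r'([ \t]*\\)$', r';branch=master\1', line)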

(From OE-Core rev: f87a3aba3086cd3fd89274337f25fc1717d6c981)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a69a53573b1987ee5834a6fc27763f9bbf5fe5a4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Bruce Ashfield
7752c9e87b linux-yocto/5.10: update to v5.10.78
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    5040520482a5 Linux 5.10.78
    4c7c0243275b ALSA: usb-audio: Add Audient iD14 to mixer map quirk table
    f3eb44f496ef ALSA: usb-audio: Add Schiit Hel device to mixer map quirk table
    68765fc97762 Revert "wcn36xx: Disable bmps when encryption is disabled"
    f84b791d4c3b ARM: 9120/1: Revert "amba: make use of -1 IRQs warn"
    bbc920fb320f Revert "drm/ttm: fix memleak in ttm_transfered_destroy"
    6d67b2a73b8e mm: khugepaged: skip huge page collapse for special files
    5a7957491e31 Revert "usb: core: hcd: Add support for deferring roothub registration"
    50f46bd30949 Revert "xhci: Set HCD flag to defer primary roothub registration"
    d7fc85f61042 media: firewire: firedtv-avc: fix a buffer overflow in avc_ca_pmt()
    b93a70bf2b57 net: ethernet: microchip: lan743x: Fix skb allocation failure
    b9c85a71e1b4 vrf: Revert "Reset skb conntrack connection..."
    0382fdf9ae78 sfc: Fix reading non-legacy supported link modes
    748786564a35 Revert "io_uring: reinforce cancel on flush during exit"
    7b57c38d12ae scsi: core: Put LLD module refcnt after SCSI device is released

(From OE-Core rev: b57ee9fafb80034cf7cd2f870a741741c2a469cd)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7a7d1eed8e3d550ac9bfa301b26095100eeba111)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Bruce Ashfield
40cefcce5d linux-yocto/5.14: update to v5.14.17
Updating linux-yocto/5.14 to the latest korg -stable release that comprises
the following commits:

    3dfa869cb79d Linux 5.14.17
    b1dbd891bfe5 ALSA: usb-audio: Add Audient iD14 to mixer map quirk table
    570b5004f827 ALSA: usb-audio: Add Schiit Hel device to mixer map quirk table
    db6d7c4acca3 Revert "drm/i915/gt: Propagate change in error status to children on unhold"
    aac2f6861683 drm/amd/display: Revert "Directly retrain link from debugfs"
    77d029e1e218 drm/amdgpu: revert "Add autodump debugfs node for gpu reset v8"
    9f9e09a59c58 Revert "wcn36xx: Disable bmps when encryption is disabled"
    b9722a7369f8 ARM: 9120/1: Revert "amba: make use of -1 IRQs warn"
    e556fca311ce Revert "soc: imx: gpcv2: move reset assert after requesting domain power up"
    d6a60e6ada49 drm/i915: Remove memory frequency calculation
    7883e13c2494 drm/amdkfd: fix boot failure when iommu is disabled in Picasso.
    a82fa1213d12 Revert "usb: core: hcd: Add support for deferring roothub registration"
    0979b923ff3f Revert "xhci: Set HCD flag to defer primary roothub registration"
    02a476ca886d media: firewire: firedtv-avc: fix a buffer overflow in avc_ca_pmt()
    ec0c91e2ebb8 vrf: Revert "Reset skb conntrack connection..."
    6467b75cf9d1 sfc: Fix reading non-legacy supported link modes
    f30822c0b4c3 scsi: core: Put LLD module refcnt after SCSI device is released

(From OE-Core rev: 4ab85464b7c11099e1aa55a26816f250f564f383)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a9ac5a388d682bcf0aad59d1b8ae8334846dfcd9)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Bruce Ashfield
3e1a0f0d09 linux-yocto/5.10: update to v5.10.77
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    09df347cfd18 Linux 5.10.77
    fbb91dadb512 perf script: Check session->header.env.arch before using it
    6f416815c505 riscv: Fix asan-stack clang build
    7a4cf25d8329 riscv: fix misalgned trap vector base address
    acb8832f6a1c scsi: ufs: ufs-exynos: Correct timeout value setting registers
    8ecddaca7942 KVM: s390: preserve deliverable_mask in __airqs_kick_single_vcpu
    e11a7355fb98 KVM: s390: clear kicked_mask before sleeping again
    727e5deca802 lan743x: fix endianness when accessing descriptors
    a7112b8eeb14 sctp: add vtag check in sctp_sf_ootb
    c2442f721972 sctp: add vtag check in sctp_sf_do_8_5_1_E_sa
    14c1e02b11c2 sctp: add vtag check in sctp_sf_violation
    dad2486414b5 sctp: fix the processing for COOKIE_ECHO chunk
    8c50693d25e4 sctp: fix the processing for INIT_ACK chunk
    ad111d4435d8 sctp: use init_tag from inithdr for ABORT chunk
    4509000a2515 phy: phy_ethtool_ksettings_set: Lock the PHY while changing settings
    5b88bb9377ee phy: phy_start_aneg: Add an unlocked version
    81780b624d1c phy: phy_ethtool_ksettings_set: Move after phy_start_aneg
    258c5fea44cf phy: phy_ethtool_ksettings_get: Lock the phy for consistency
    58722323d4bc net/tls: Fix flipped sign in async_wait.err assignment
    44e8c93e1e49 net: nxp: lpc_eth.c: avoid hang when bringing interface down
    c2af2092c9bb net: ethernet: microchip: lan743x: Fix dma allocation failure by using dma_set_mask_and_coherent
    bfa6fbdb4e39 net: ethernet: microchip: lan743x: Fix driver crash when lan743x_pm_resume fails
    e81bed557fe7 mlxsw: pci: Recycle received packet upon allocation failure
    be98be1a17e9 nios2: Make NIOS2_DTB_SOURCE_BOOL depend on !COMPILE_TEST
    aead02927af3 gpio: xgs-iproc: fix parsing of ngpios property
    863a423ee07b RDMA/sa_query: Use strscpy_pad instead of memcpy to copy a string
    2b7c5eed19d3 net: Prevent infinite while loop in skb_tx_hash()
    04121b10cdf0 cfg80211: correct bridge/4addr mode check
    aed897e96b19 net-sysfs: initialize uid and gid before calling net_ns_get_ownership
    b0a2cd38553c net: batman-adv: fix error handling
    36e911a16b37 regmap: Fix possible double-free in regcache_rbtree_exit()
    e51371bd687e reset: brcmstb-rescal: fix incorrect polarity of status bit
    2cf7d935d6ba arm64: dts: allwinner: h5: NanoPI Neo 2: Fix ethernet node
    10e40fb2f508 RDMA/mlx5: Set user priority for DCT
    24fd8e2f027d octeontx2-af: Display all enabled PF VF rsrc_alloc entries.
    c63d7f2ca99a nvme-tcp: fix possible req->offset corruption
    32f3db20f126 nvme-tcp: fix data digest pointer calculation
    4286c72c5321 nvmet-tcp: fix data digest pointer calculation
    d98883f6c33e IB/hfi1: Fix abba locking issue with sc_disable()
    c3e17e58f571 IB/qib: Protect from buffer overflow in struct qib_user_sdma_pkt fields
    ee4908f909b3 bpf: Fix error usage of map_fd and fdget() in generic_map_update_batch()
    dd2260ec643d bpf: Fix potential race in tail call compatibility check
    15dec6d8f864 tcp_bpf: Fix one concurrency problem in the tcp_bpf_send_verdict function
    cac6b043cea3 riscv, bpf: Fix potential NULL dereference
    01599bf7cc2b cgroup: Fix memory leak caused by missing cgroup_bpf_offline
    eb3b6805e3e9 drm/amdgpu: fix out of bounds write
    c21b4002214c drm/ttm: fix memleak in ttm_transfered_destroy
    69a7fa5cb0de mm, thp: bail out early in collapse_file for writeback page
    8fb858b74ac5 net: lan78xx: fix division by zero in send path
    4c22227e39c7 cfg80211: fix management registrations locking
    fa29cec42c2d cfg80211: scan: fix RCU in cfg80211_add_nontrans_list()
    db1191a529e4 nvme-tcp: fix H2CData PDU send accounting (again)
    5043fbd294f5 ocfs2: fix race between searching chunks and release journal_head from buffer_head
    01169a43353d mmc: sdhci-esdhc-imx: clear the buffer_read_ready to reset standard tuning circuit
    ee3213b117ce mmc: sdhci: Map more voltage level to SDHCI_POWER_330
    a95a76fc01a0 mmc: dw_mmc: exynos: fix the finding clock sample value
    12a46f72f499 mmc: mediatek: Move cqhci init behind ungate clock
    44c2bc2a6bbe mmc: cqhci: clear HALT state after CQE enable
    efe934629fff mmc: vub300: fix control-message timeouts
    f3dec7e7ace3 net/tls: Fix flipped sign in tls_err_abort() calls
    c828115a14ea Revert "net: mdiobus: Fix memory leak in __mdiobus_register"
    11c0406b4c33 nfc: port100: fix using -ERRNO as command type mask
    0b1b3e086b0a tipc: fix size validations for the MSG_CRYPTO type
    5aa5bab57957 ata: sata_mv: Fix the error handling of mv_chip_id()
    9a52798dce73 pinctrl: amd: disable and mask interrupts on probe
    01c2881bb0e0 Revert "pinctrl: bcm: ns: support updated DT binding as syscon subnode"
    017718dfbb6f usbnet: fix error return code in usbnet_probe()
    693ecbe8f799 usbnet: sanity check for maxpacket
    b663890d8544 ext4: fix possible UAF when remounting r/o a mmp-protected file system
    d4d9c065988c arm64: Avoid premature usercopy failure
    e184a21b5ccc powerpc/bpf: Fix BPF_MOD when imm == 1
    3f2c12ec8a3f io_uring: don't take uring_lock during iowq cancel
    5a768b4d3e1a ARM: 9141/1: only warn about XIP address when not compile testing
    15b278f94bbb ARM: 9139/1: kprobes: fix arch_init_kprobes() prototype
    c06d7d9bfcf6 ARM: 9138/1: fix link warning with XIP + frame-pointer
    8a6af97c31be ARM: 9134/1: remove duplicate memcpy() definition
    6ad8bbc9d301 ARM: 9133/1: mm: proc-macros: ensure *_tlb_fns are 4B aligned
    3ceaa85c331d ARM: 9132/1: Fix __get_user_check failure with ARM KASAN images

(From OE-Core rev: 9929f2d2a4b60dc989b4b5a3dd8fad48b572d393)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d57bc7281015d09e2ff7a8a028dbf31559ff7331)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Bruce Ashfield
fbb7df5adb linux-yocto/5.14: update to v5.14.16
Updating linux-yocto/5.14 to the latest korg -stable release that comprises
the following commits:

    f63179c1e68c Linux 5.14.16
    e874c870dfd8 KVM: x86: Take srcu lock in post_kvm_run_save()
    9ab39a3d0cec KVM: SEV-ES: fix another issue with string I/O VMGEXITs
    eb0461c572e9 KVM: x86: switch pvclock_gtod_sync_lock to a raw spinlock
    10242cc2ad79 KVM: x86/xen: Fix kvm_xen_has_interrupt() sleeping in kvm_vcpu_block()
    669a7e147ee6 perf script: Check session->header.env.arch before using it
    e914237feb46 riscv: Fix asan-stack clang build
    4606bbb6b19c riscv: Do not re-populate shadow memory with kasan_populate_early_shadow
    7567abe63797 riscv: fix misalgned trap vector base address
    20bd764387ac scsi: ibmvfc: Fix up duplicate response detection
    f6782c0ca808 perf script: Fix PERF_SAMPLE_WEIGHT_STRUCT support
    04ced3822a66 scsi: ufs: ufs-exynos: Correct timeout value setting registers
    d748da838b21 KVM: s390: preserve deliverable_mask in __airqs_kick_single_vcpu
    4faa35ce98c7 KVM: s390: clear kicked_mask before sleeping again
    ae566351ca18 octeontx2-af: Check whether ipolicers exists
    45d9cd363786 virtio-ring: fix DMA metadata flags
    52a936b037b5 net: hns3: expand buffer len for some debugfs command
    efccb66bc917 net: hns3: add more string spaces for dumping packets number of queue info in debugfs
    e5c6ad377c07 bpf: Move BPF_MAP_TYPE for INODE_STORAGE and TASK_STORAGE outside of CONFIG_NET
    b341612b659d watchdog: sbsa: only use 32-bit accessors
    de709ec74f8b bpf: Use kvmalloc for map values in syscall
    0717c71deae6 sctp: add vtag check in sctp_sf_ootb
    1c255b5f68f4 sctp: add vtag check in sctp_sf_do_8_5_1_E_sa
    dd82b3a345ab sctp: add vtag check in sctp_sf_violation
    44ef3ecbc24a sctp: fix the processing for COOKIE_ECHO chunk
    7975f42f1038 sctp: fix the processing for INIT_ACK chunk
    6277d424ead2 sctp: fix the processing for INIT chunk
    332933f9ae0a sctp: use init_tag from inithdr for ABORT chunk
    44d44bf72591 RDMA/irdma: Do not hold qos mutex twice on QP resume
    6392f26fbe92 RDMA/irdma: Set VLAN in UD work completion correctly
    7762917173cc RDMA/irdma: Process extended CQ entries correctly
    7860484eeb90 phy: phy_ethtool_ksettings_set: Lock the PHY while changing settings
    37a1b9befb73 phy: phy_start_aneg: Add an unlocked version
    1f9c99e0bb5b phy: phy_ethtool_ksettings_set: Move after phy_start_aneg
    2191b1e8eb3d phy: phy_ethtool_ksettings_get: Lock the phy for consistency
    e2b4dd261720 net/tls: Fix flipped sign in async_wait.err assignment
    373f94d73651 net: ethernet: microchip: lan743x: Fix skb allocation failure
    228862acb549 net: hns3: fix data endian problem of some functions of debugfs
    20d88211706b net: hns3: fix pause config problem after autoneg disabled
    7cc73feb57f6 net: nxp: lpc_eth.c: avoid hang when bringing interface down
    d8774769d198 net: ethernet: microchip: lan743x: Fix dma allocation failure by using dma_set_mask_and_coherent
    69d3c7785ec4 net: ethernet: microchip: lan743x: Fix driver crash when lan743x_pm_resume fails
    18bd5e285a78 mlxsw: pci: Recycle received packet upon allocation failure
    960f4a54b909 nios2: Make NIOS2_DTB_SOURCE_BOOL depend on !COMPILE_TEST
    030f05812d81 gpio: xgs-iproc: fix parsing of ngpios property
    c653c522e521 RDMA/sa_query: Use strscpy_pad instead of memcpy to copy a string
    5f6995295f65 RDMA/mlx5: Initialize the ODP xarray when creating an ODP MR
    ed894f5439ab net: Prevent infinite while loop in skb_tx_hash()
    f435287d719b cfg80211: correct bridge/4addr mode check
    da279dac227a net-sysfs: initialize uid and gid before calling net_ns_get_ownership
    a8f7359259dd net: batman-adv: fix error handling
    50cc1462a668 regmap: Fix possible double-free in regcache_rbtree_exit()
    c9e39214fddf reset: brcmstb-rescal: fix incorrect polarity of status bit
    86f9394073d8 arm64: dts: allwinner: h5: NanoPI Neo 2: Fix ethernet node
    63a97a9f95f2 ice: check whether PTP is initialized in ice_ptp_release()
    ebd0edad1cdf RDMA/mlx5: Set user priority for DCT
    e83b3cce4722 ice: Respond to a NETDEV_UNREGISTER event for LAG
    f1e3cd1cc802 octeontx2-af: Fix possible null pointer dereference.
    98db2a8c14be octeontx2-af: Display all enabled PF VF rsrc_alloc entries.
    c7752ec9ad39 nvme-tcp: fix possible req->offset corruption
    7258a6eef5be nvme-tcp: fix data digest pointer calculation
    daa12f0c1d1b nvmet-tcp: fix data digest pointer calculation
    5d33bd6b4d4d IB/hfi1: Fix abba locking issue with sc_disable()
    0d4395477741 IB/qib: Protect from buffer overflow in struct qib_user_sdma_pkt fields
    6525bfbd546f bpf: Fix error usage of map_fd and fdget() in generic_map_update_batch()
    adb17f828177 bpf: Fix potential race in tail call compatibility check
    6f226ffe4458 tcp_bpf: Fix one concurrency problem in the tcp_bpf_send_verdict function
    e1b80a5ebe54 riscv, bpf: Fix potential NULL dereference
    b529f88d9388 cgroup: Fix memory leak caused by missing cgroup_bpf_offline
    b7ca59297fa3 Revert "watchdog: iTCO_wdt: Account for rebooting on second timeout"
    0a8b7eba95a0 drm/amd/display: Fix deadlock when falling back to v2 from v3
    a363d80566cc drm/amd/display: Fallback to clocks which meet requested voltage on DCN31
    aeadb0662478 drm/amd/display: Moved dccg init to after bios golden init
    5a5f1f070c3e drm/amd/display: Increase watermark latencies for DCN3.1
    85cf47160d0e drm/amd/display: increase Z9 latency to workaround underflow in Z9
    01f39421d590 drm/amd/display: Fix prefetch bandwidth calculation for DCN3.1
    b60efcaf5e8b drm/amd/display: Limit display scaling to up to true 4k for DCN 3.1
    c3ae5cf3e3ee drm/amdgpu: support B0&B1 external revision id for yellow carp
    d3ed72495a59 drm/amdgpu: fix out of bounds write
    9eb4bdd554fc drm/amdgpu: Fix even more out of bound writes from debugfs
    d87ac6054e3d drm/i915/dp: Skip the HW readout of DPCD on disabled encoders
    7650327e7174 drm/i915: Catch yet another unconditioal clflush
    0ed2dfb5f598 drm/i915: Convert unconditional clflush to drm_clflush_virt_range()
    132a3d998d67 drm/ttm: fix memleak in ttm_transfered_destroy
    15a4f2bdbdfd mac80211: mesh: fix HE operation element length check
    ce277959d77c arm64: dts: imx8mm-kontron: Make sure SOC and DRAM supply voltages are correct
    8c684aaceaf3 arm64: dts: imx8mm-kontron: Set lower limit of VDD_SNVS to 800 mV
    f5eaf91dd8af arm64: dts: imx8mm-kontron: Fix connection type for VSC8531 RGMII PHY
    da32086a0203 arm64: dts: imx8mm-kontron: Fix CAN SPI clock frequency
    d2bdcd23cba9 arm64: dts: imx8mm-kontron: Fix polarity of reg_rst_eth2
    5fcb6fce74ff mm: khugepaged: skip huge page collapse for special files
    5e669d8ab30a mm, thp: bail out early in collapse_file for writeback page
    6ac017254b59 mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
    8821fedc7f83 mm: hwpoison: remove the unnecessary THP check
    67979d186c51 drm/amd/display: Require immediate flip support for DCN3.1 planes
    75b1b172ae5a net: lan78xx: fix division by zero in send path
    3c897f39b71f cfg80211: fix management registrations locking
    2a000d137589 cfg80211: scan: fix RCU in cfg80211_add_nontrans_list()
    e6d02b0da2df ftrace/nds32: Update the proto for ftrace_trace_function to match ftrace_stub
    ea081b13b00e nvme-tcp: fix H2CData PDU send accounting (again)
    2e382600e885 ocfs2: fix race between searching chunks and release journal_head from buffer_head
    7335acd51f6b block: Fix partition check for host-aware zoned block devices
    10bcaafc5753 mmc: sdhci-esdhc-imx: clear the buffer_read_ready to reset standard tuning circuit
    78873d5a2717 mmc: sdhci-pci: Read card detect from ACPI for Intel Merrifield
    b572d6c18511 mmc: sdhci: Map more voltage level to SDHCI_POWER_330
    ac6f66f208a1 mmc: dw_mmc: exynos: fix the finding clock sample value
    b1ad3ecffaac mmc: tmio: reenable card irqs after the reset callback
    e1b94f0e744f mmc: mediatek: Move cqhci init behind ungate clock
    9106d68c8082 mmc: cqhci: clear HALT state after CQE enable
    aa2f3e425e22 mmc: vub300: fix control-message timeouts
    e41473543f75 net/tls: Fix flipped sign in tls_err_abort() calls
    8ba94a7f7b9f Revert "net: mdiobus: Fix memory leak in __mdiobus_register"
    836f40777d58 nfc: port100: fix using -ERRNO as command type mask
    e029c9828c5b tipc: fix size validations for the MSG_CRYPTO type
    43849df432c9 ata: sata_mv: Fix the error handling of mv_chip_id()
    66a1c8748068 pinctrl: amd: disable and mask interrupts on probe
    18f31a907c9f Revert "pinctrl: bcm: ns: support updated DT binding as syscon subnode"
    b5c410a4af7d usbnet: fix error return code in usbnet_probe()
    7e8b6a4f18ed usbnet: sanity check for maxpacket
    a350df591870 ARM: 9148/1: handle CONFIG_CPU_ENDIAN_BE32 in arch/arm/kernel/head.S
    351d0f587b4c ARM: 9141/1: only warn about XIP address when not compile testing
    a51d78193d21 ARM: 9139/1: kprobes: fix arch_init_kprobes() prototype
    4108f38c05bd ARM: 9138/1: fix link warning with XIP + frame-pointer
    6aa2d9cf81f9 ARM: 9134/1: remove duplicate memcpy() definition
    78a7a2694e69 ARM: 9133/1: mm: proc-macros: ensure *_tlb_fns are 4B aligned
    e108afbd38a5 ARM: 9132/1: Fix __get_user_check failure with ARM KASAN images

(From OE-Core rev: 01fe48bf2499c387cd5ed71489b33da7bc6a6ae0)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ae32a59571abec59cc9f19bf9289ec9472b3923b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Ross Burton
8dbdb9e1e2 vim: add patch number to CVE-2021-3778 patch
(From OE-Core rev: 851a5d697918247c05f7d59782f84c430771fd48)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 222be29051a3543ac63a0eb07019e90d44429b16)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Ross Burton
e8bdd45fe8 vim: fix CVE-2021-3796, CVE-2021-3872, and CVE-2021-3875
Backport patches from upstream to fix these CVEs.

(From OE-Core rev: 2ed29a813fa07a2e6d2637f7fc63d5e0066b6304)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b493eb4f9a6bb75a2f01a53b6c70762845bf79f9)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Kai Kang
f8ad42fc49 squashfs-tools: follow-up fix for CVE-2021-41072
Squash a follow-up fix for CVE-2021-41072 from upstream:
https://github.com/plougher/squashfs-tools/commit/19fcc93

(From OE-Core rev: 722c8fbe68a6236f9391eb0ded4c11efd6962de5)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 70709ff0741ed9fb9c111ef4b7aa2ee7432453f4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Richard Purdie
4b28378957 mirrors: Add uninative mirror on kernel.org
At the last NAS outage, we realised that we don't have good mirrors of
the uninative tarball if our main system can't be accessed. kernel.org
mirrors some Yocto Project data, so we've ensured uninative is there.
Add the appropriate mirror URL to make use of it.

(From OE-Core rev: 20d7be2f3b481bc9a2f034f84eff1c48a4a13d92)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1833cb0c5841afafb468b963b74b63366b09a134)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Chen Qi
c43f22e42e avahi: update CVE id fixed by local-ping.patch
CVE-2021-36217 is treated as a duplicate of CVE-2021-3502.
Update the local-ping.patch to mark it as resolving both.

(From OE-Core rev: efb82a8e56c9af7846b391a031511ab60d12ced4)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 4d75d6c39f1faeb38191b55f1fa9311b63fcfb29)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:41:39 +00:00
Richard Purdie
79964afc90 bitbake: cooker: Fix task-depends.dot for multiconfig targets
The right hand side of dependencies in the task dependency file generated
by bitbake -g was missing multiconfig prefixes, corrupting the data. Fix
this.

[YOCTO #14621]

(Bitbake rev: c1938abf51b57938a21948bb414ad0467e4368d9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1d5ca721040c5e39aefa11219f62710de6587701)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:35:41 +00:00
Richard Purdie
6108762d02 bitbake: cooker: Handle parsing results queue race
The previous fix introduced a race where the queue might not be empty
but all the parser processes have exited. Handle this correctly to avoid
occasional errors.

(Bitbake rev: 8eaddb92a5fd14de6b5995aa92a6eed03b90a252)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8e7f2b6500e26610f52d128b48ca0a09bf6fb2cb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:35:41 +00:00
Richard Purdie
fe1583b445 bitbake: cooker: Remove debug code, oops :(
(Bitbake rev: ae1bfbf9523e8f6155bb43ee3adba17af3ec9630)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 19291665fa8b6cc331290f2542af3e8e653203f1)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:35:41 +00:00
Richard Purdie
e180b85efb bitbake: cooker: Handle parse threads disappearing to avoid hangs
If one of the parse threads disappears during parsing for some reason, bitbake
currently hangs. Avoid this (and zombie threads hanging around) by joining()
threads which have exited.

(Bitbake rev: 920111a330be59e5be2068a8f1a9edcbc6c14402)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit dc86a533d951d13643ce446533370da804782afc)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-21 11:35:41 +00:00
Anuj Mittal
da5d1b540e glibc-version.inc: remove branch= from GLIBC_GIT_URI
GLIBC_GIT_URI is used along with branch=${SRCBRANCH}, so there is no
need to add it here.

(From OE-Core rev: d2cba06c27c87c64423636153c0f186c5f45b147)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 4c9cfe326913d28f82e6a91d1eeae55a6651f0f7)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Richard Purdie
7acf465dd5 bitbake.conf: Fix corruption of GNOME mirror url
The URL changes from the script accidentally corrupted this mirror
URL; fix it.

(From OE-Core rev: 299023686865e0f1f9cc1f585ba64767ba63f638)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a16dd60fb058ec2257eb1c6c0baa86e11e78cb42)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Richard Purdie
30fb5da7ee go-helloworld/glide: Fix urls
Handle github protocol changes not covered by the script due to variable indirection.

(From OE-Core rev: 88c7d6f8c0d603b4404ab73cd147aa0ba6d8afd1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3bb1cb476dbad1037522970af9afd69691a7033c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Richard Purdie
543567bb8a recipes: Update github.com urls to use https
Github has announced there will be no more git:// fetching from their servers:

https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git

and they're about to start having brownout periods to encourage people
to update. This runs the conversion script over OE-Core to update our
urls to use https instead of git.

(From OE-Core rev: 8b83eddda83327d25247bb9b61a049b0a8698a45)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b37b61e9a1e448a34957db9ae39285d21352552e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Richard Purdie
9c87e0204b scripts/convert-srcuri: Update SRC_URI conversion script to handle github url changes
Github are dropping support for git:// protocol fetching. Update the script
to learn about corner cases found in the previous conversion and
support remapping the github urls as needed too.

(From OE-Core rev: fc9209fa892b31b2226008bdaf474750c3b61f38)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e59fe8279b209f67ff79b9d6dbb69389a64db236)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Richard Purdie
7836a7c4d4 meta/scripts: Manual git url branch additions
Following the scripted conversion adding branches to git://
SRC_URI entries, add the remaining references, mainly in the selftests
and recipetool.

(From OE-Core rev: 467aa56b8773e8dd2e8e29936684606d5e291888)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5340c0d688036c1be6c938f05d8a8c1e3b49ec38)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Richard Purdie
5258dd0cd0 meta: Add explicit branch to git SRC_URIs
There is uncertainty about the default branch name in git going forward.
To try and cover the different possible outcomes, add branch names to all
git:// and gitsm:// SRC_URI entries.

This update was made with the script added to contrib in this patch which
aims to help others convert other layers.

(From OE-Core rev: 37b4f66fa23979cbfe82679a74ce21b11fc61557)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b51c405faf6f8c0365f7533bfaf470d79152a463)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Richard Purdie
aaa0c06d85 pseudo: Add fcntl64 wrapper
Add fcntl64 wrapper which hopefully fixes issues seen in findutils and the find
command in the libtool removal code when built with LFS compile flags on Gentoo.

(From OE-Core rev: f90e4b84d75d8dc4d5905784abe3298488127ff3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f26867fe4daec7299f59a82ae4a0d70cceb3e082)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Richard Purdie
b989276b40 libunistring: Add missing gperf-native dependency
(From OE-Core rev: 04d181a8cc90f73a36e2665087c030ec4c12b3b3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 73d3efbaeb2f412ab8d3491d2da3f3124fc009f3)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Tom Hochstein
59eed5965c bitbake.conf: Use wayland distro feature for native builds
The wayland-scanner is missing from SDKs with weston, but the weston build
requires wayland-scanner. Allow the distro feature in order to include
the wayland-scanner packages via nativesdk-packagegroup-sdk-host.bb.

(From OE-Core rev: 99ff8a3dbe5d3a68faf9241f4c334953cf9cc5b0)

Signed-off-by: Tom Hochstein <tom.hochstein@nxp.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 858cc6f257e22e39df83f4808ea27c6d12cd1b80)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Tom Hochstein
443816b4dd nativesdk-packagegroup-sdk-host.bb: Update host tools for wayland
The wayland-scanner host tool required to build weston is moved to the
wayland-tools package, so update the SDK host tools list accordingly.

Also, the weston build requires wayland-scanner.pc to find wayland-scanner,
so add wayland-dev.

(From OE-Core rev: adee9d40023b6197f121ec0cf1115ce229c2a26f)

Signed-off-by: Tom Hochstein <tom.hochstein@nxp.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 31ed91bdbb0ec05730fb98d7cc523bb46aca50e3)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Tom Hochstein
025b64f23e wayland: Fix wayland-tools packaging
There are some packaging problems due to the wayland-tools packaging
implementation. The wayland-tools package currently looks like this:

wayland-tools
└── usr
    ├── bin
    │   └── wayland-scanner
    └── share
        └── wayland
            ├── wayland.dtd
            ├── wayland-scanner.mk
            └── wayland.xml

The files wayland.dtd and wayland.xml belong in the main package,
while wayland-scanner.mk belongs in wayland-dev.

Fix the wayland.dtd and wayland.xml packaging by prepending the
wayland-tools package and dropping the main package FILES variable
override. The file wayland-scanner.mk is included in the main
package by default, and so must be explicitly added to wayland-dev.

(From OE-Core rev: 35d54049a94897626eafcd4922ca7ef25a76859c)

Signed-off-by: Tom Hochstein <tom.hochstein@nxp.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a31fbec45d24df5b74091940d0e0b2daf34d8492)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Peter Kjellerstedt
f57e4968b3 libx11-compose-data: Update LICENSE to better reflect reality
There are no traces of either the BSD-2-Clause license or the
BSD-4-Clause license being used in the code. There is one occurrence
of the BSD-1-Clause license. On the other hand, HPND and
HPND-sell-variant are all over the place.

(From OE-Core rev: b7fc3411dba82e87b626d110b3951a7dbf910f83)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b0f30792fd0ea41f1d1590dbe0452c956e018c82)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Peter Kjellerstedt
27d151c032 libx11: Update LICENSE to better reflect reality
There are no traces of either the BSD-2-Clause license or the
BSD-4-Clause license being used in the code. There is one occurrence
of the BSD-1-Clause license. On the other hand, HPND and
HPND-sell-variant are all over the place.

(From OE-Core rev: 3781b045366a280d33062e0dc9071dc194dd7bf5)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5cd90092e21ad245df40a60feed3598dd9c6b98b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Alexander Kanavin
dfca86c200 libpcre/libpcre2: correct SRC_URI
http://ftp.pcre.org is down; take the sources from the links on
http://www.pcre.org instead.

(From OE-Core rev: 1be81f77e3c479a1c11d1d5ea06653b596cbd00b)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 81ba0ba3e8d9c08b8dc69c24fb1d91446739229b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Khem Raj
5045ce3c04 kernel-devsrc: Add vdso.lds and other build files for riscv64 as well
These additional bits are needed on riscv64 as well.

Fixes:

    make[1]: *** No rule to make target 'arch/riscv/kernel/vdso/vdso.lds', needed by 'arch/riscv/kernel/vdso/vdso.so.dbg'.  Stop.
    make: *** [arch/riscv/Makefile:114: vdso_prepare] Error 2

(From OE-Core rev: b1e4b39d09a090bfb2bf656ce0eb053e579bf6a1)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 446972600ed51ca75a2a4e579cdc3e6dd2e05195)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:17 +00:00
Ross Burton
ed66d58ed6 meson: set objcopy in the cross and native toolchain files
(From OE-Core rev: 028d40076b704669cf7bf423385a4f11e0dd6f03)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0a589998e717ae3865f0db5abe6005ab4eee86d9)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Andres Beltran
ce68ec010f create-spdx: Set the Organization field via a variable
Currently, the "Organization" field for SBOMs is hard-coded in
create-spdx. Create a new variable SPDX_ORG to make this field more
generic.

(From OE-Core rev: e370039febe601127347da977ff9b7e5c7470315)

Signed-off-by: Andres Beltran <abeltran@linux.microsoft.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f239814f3f5d9bd54de54b0f2a5081067336e32b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Jose Quaresma
be28d98b3a sstate: another fix for touching files inside pseudo
This patch is a fixup for 676757f "sstate: fix touching files inside pseudo"

Running the 'id' command inside the sstate_unpack_package
function shows that this function runs inside pseudo:

 uid=0(root) gid=0(root) groups=0(root)

The checks [ -w ${SSTATE_PKG} ] and [ -O ${SSTATE_PKG}.siginfo ]
will therefore always return true, and the touch can fail when the real
user doesn't have permission or the filesystem is read-only.

As the documentation states:
- the file test operator "-w" checks whether the file has write
permission (for the user running the test).
- the file test operator "-O" checks whether you are the owner of the
file.

We can avoid these tests by running the touch and masking any errors it
returns.

(From OE-Core rev: 29fc85997ade490ae46ffca37ef8e1a56957c876)

(From OE-Core rev: 10e300e6b4c3935d3fd177478f07c429c9b8c735)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5b9210d66c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Bruce Ashfield
d6768d9d52 strace: fix build against 5.15 kernel/kernel-headers
Kernel 5.15 removed ipx.h from the uapi, but strace hasn't adjusted
its tests to account for the removal.

There is a WIP patch on the esyr/5.15 branch that solves the problem,
so we grab it here, adjust it for context, and fix our build problem.

When strace updates to 5.15, we can bump our version and drop this
patch.
Upstream-Status: Backport [commit cca828197c0e16c2599129114]

(From OE-Core rev: 1b47465688474cdba603578c1cbb768cfe699579)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a8c4ba727251e53494a4aec483fcc51982e6fb75)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Bruce Ashfield
4aa48aeab8 linux-yocto-rt/5.10: update to -rt54
Integrating the following commit(s) to linux-yocto-rt/5.10:

    f01089784fd6 Linux 5.10.73-rt54
    f34df8f3c666 Linux 5.10.65-rt53
    271c5e6e4064 Linux 5.10.59-rt52
    1a4bba4bc32c locking/rwsem-rt: Remove might_sleep() in __up_read()
    ff591a2bdcfb Linux 5.10.59-rt51
    8d185ac23c11 Linux 5.10.58-rt50
    2c0fd44153f5 Linux 5.10.56-rt49
    8b083d3c993c printk: Enhance the condition check of msleep in pr_flush()
    448cd29e3bc9 Linux 5.10.56-rt48

(From OE-Core rev: fd5980829646a1b0e3865d3ebf64feacc4bc1ee6)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7c7dc8f38cf1e874a7722389c95d895e10855d9a)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Bruce Ashfield
77e8fa92c2 linux-yocto/5.10: update to v5.10.76
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    378e85d1aeb5 Linux 5.10.76
    cfa79faf7e1f pinctrl: stm32: use valid pin identifier in stm32_pinctrl_resume()
    c56c801391c3 ARM: 9122/1: select HAVE_FUTEX_CMPXCHG
    d088db8637bb selftests: bpf: fix backported ASSERT_FALSE
    3a845fa00fd7 e1000e: Separate TGP board type from SPT
    021b6d11e590 tracing: Have all levels of checks prevent recursion
    3a0dc2e35a5d net: mdiobus: Fix memory leak in __mdiobus_register
    cfe9266213c4 bpf, test, cgroup: Use sk_{alloc,free} for test cases
    188907c25218 s390/pci: fix zpci_zdev_put() on reserve
    f18b90e9366f can: isotp: isotp_sendmsg(): fix TX buffer concurrent access in isotp_sendmsg()
    2304dfb548a4 scsi: core: Fix shost->cmd_per_lun calculation in scsi_add_host_with_dma()
    c58654f344dd net: hns3: fix for miscalculation of rx unused desc
    96fe5061291d sched/scs: Reset the shadow stack when idle_task_exit
    96f0aebf29be scsi: qla2xxx: Fix a memory leak in an error path of qla2x00_process_els()
    90c8e8c0829b scsi: iscsi: Fix set_param() handling
    0eb254479685 Input: snvs_pwrkey - add clk handling
    ea9c1f5d8a3a perf/x86/msr: Add Sapphire Rapids CPU support
    7a5a1f09c8b4 libperf tests: Fix test_stat_cpu
    e56a3e7ae353 ALSA: hda: avoid write to STATESTS if controller is in reset
    85c8d8c1609d platform/x86: intel_scu_ipc: Update timeout value in comment
    9f591cbdbed3 isdn: mISDN: Fix sleeping function called from invalid context
    ab4f542b515b ARM: dts: spear3xx: Fix gmac node
    15d3ad79885b net: stmmac: add support for dwmac 3.40a
    f9d16a428489 btrfs: deal with errors when checking if a dir entry exists during log replay
    369db2a91d5c ALSA: hda: intel: Allow repeatedly probing on codec configuration errors
    81d8e70cdce4 gcc-plugins/structleak: add makefile var for disabling structleak
    69078a94365a net: hns3: fix the max tx size according to user manual
    f40c2281d2c0 drm: mxsfb: Fix NULL pointer dereference crash on unload
    96835b68d7b3 net: bridge: mcast: use multicast_membership_interval for IGMPv3
    0e033cb40761 selftests: netfilter: remove stray bash debug line
    f8a6541345c2 netfilter: Kconfig: use 'default y' instead of 'm' for bool config option
    7f221ccbee4e isdn: cpai: check ctr->cnr to avoid array index out of bound
    77c0ef979e32 nfc: nci: fix the UAF of rf_conn_info object
    8f042315fcc4 KVM: nVMX: promptly process interrupts delivered while in guest mode
    b41fd8f5d2ad mm, slub: fix incorrect memcg slab count for bulk free
    568f906340b4 mm, slub: fix potential memoryleak in kmem_cache_open()
    48843dd23c7b mm, slub: fix mismatch between reconstructed freelist depth and cnt
    c5c2a80368e9 powerpc/idle: Don't corrupt back chain when going idle
    197ec50b2df1 KVM: PPC: Book3S HV: Make idle_kvm_start_guest() return 0 if it went to guest
    fbd724c49bea KVM: PPC: Book3S HV: Fix stack handling in idle_kvm_start_guest()
    9258f58432c5 powerpc64/idle: Fix SP offsets when saving GPRs
    3e16d9d525a7 net: dsa: mt7530: correct ds->num_ports
    16802fa4c33e audit: fix possible null-pointer dereference in audit_filter_rules
    0d867a359979 ASoC: DAPM: Fix missing kctl change notifications
    a2606acf418e ALSA: hda/realtek: Add quirk for Clevo PC50HS
    6411397b6d7a ALSA: usb-audio: Provide quirk for Sennheiser GSP670 Headset
    b721500c979b vfs: check fd has read access in kernel_read_file_from_fd()
    895ceeff31b1 elfcore: correct reference to CONFIG_UML
    3cda4bfffd4f userfaultfd: fix a race between writeprotect and exit_mmap()
    93be0eeea14c ocfs2: mount fails with buffer overflow in strlen
    f1b98569e81c ocfs2: fix data corruption after conversion from inline format
    1727e8688d2e ceph: fix handling of "meta" errors
    603d4bcc0fcd ceph: skip existing superblocks that are blocklisted or shut down when mounting
    d48db508f911 can: j1939: j1939_xtp_rx_rts_session_new(): abort TP less than 9 bytes
    5abc9b9d3ca5 can: j1939: j1939_xtp_rx_dat_one(): cancel session if receive TP.DT with error length
    864e77771a24 can: j1939: j1939_netdev_start(): fix UAF for rx_kref of j1939_priv
    ecfccb1c58c9 can: j1939: j1939_tp_rxtimer(): fix errant alert in j1939_tp_rxtimer
    053bc12df0d6 can: isotp: isotp_sendmsg(): add result check for wait_event_interruptible()
    0917fb04069a can: isotp: isotp_sendmsg(): fix return error on FC timeout on TX path
    28f28e4bc3a5 can: peak_pci: peak_pci_remove(): fix UAF
    9697ad6395f9 can: peak_usb: pcan_usb_fd_decode_status(): fix back to ERROR_ACTIVE state notification
    4758e92e75ca can: rcar_can: fix suspend/resume
    4a0928c3ebca net: enetc: fix ethtool counter name for PM0_TERR
    00ad7a015409 drm/panel: ilitek-ili9881c: Fix sync for Feixin K101-IM2BYL02 panel
    eccd00728b1a ice: Add missing E810 device ids
    6418508a3ac2 e1000e: Fix packet loss on Tiger Lake and later
    29f1bdcaa3dd net: stmmac: Fix E2E delay mechanism
    d36b15e3e7b5 net: hns3: disable sriov before unload hclge layer
    6a72e1d78a2f net: hns3: fix vf reset workqueue cannot exit
    32b860d364d2 net: hns3: schedule the polling again when allocation fails
    96c013f40c9b net: hns3: add limit ets dwrr bandwidth cannot be 0
    21f61d10435c net: hns3: reset DWRR of unused tc to zero
    53770a411559 powerpc/smp: do not decrement idle task preempt count in CPU offline
    81dbd898fb7b NIOS2: irqflags: rename a redefined register name
    6edf99b000d6 net: dsa: lantiq_gswip: fix register definition
    ef97219d5fec ipv6: When forwarding count rx stats on the orig netdev
    38d984e5e845 tcp: md5: Fix overlap between vrf and non-vrf keys
    c28bea6b876f lan78xx: select CRC32
    9c8943812dac netfilter: ipvs: make global sysctl readonly in non-init netns
    911e01990c70 netfilter: ip6t_rt: fix rt0_hdr parsing in rt_mt6
    69ea08c1b539 ice: fix getting UDP tunnel entry
    842fce43190c ASoC: wm8960: Fix clock configuration on slave mode
    39afed394cc6 dma-debug: fix sg checks in debug_dma_map_sg()
    2a670c323055 netfilter: xt_IDLETIMER: fix panic that occurs when timer_type has garbage value
    0f4308a164a9 NFSD: Keep existing listeners on portlist error
    546c04c85791 xtensa: xtfpga: Try software restart before simulating CPU reset
    bfef5d826276 xtensa: xtfpga: use CONFIG_USE_OF instead of CONFIG_OF
    d8284c981c1c drm/amdgpu/display: fix dependencies for DRM_AMD_DC_SI
    101e1bcb1147 xen/x86: prevent PVH type from getting clobbered
    a6285b1b2212 block: decode QUEUE_FLAG_HCTX_ACTIVE in debugfs output
    85c1827eeee7 ARM: dts: at91: sama5d2_som1_ek: disable ISC node by default
    5489c1bed5b8 arm: dts: vexpress-v2p-ca9: Fix the SMB unit-address
    f59da9f7efa7 io_uring: fix splice_fd_in checks backport typo
    b6f32897af19 xhci: add quirk for host controllers that don't update endpoint DCS
    b3b7f831a49b parisc: math-emu: Fix fall-through warnings

(From OE-Core rev: 512e0c418ff1a185289fd23ee21afa8eac75f992)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 61f8f7d18417334e3b13e4447f318107372dcfe0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Bruce Ashfield
a9ed4b6357 linux-yocto/5.14: update to v5.14.15
Updating linux-yocto/5.14 to the latest korg -stable release that comprises
the following commits:

    b46092c7497e Linux 5.14.15
    96e4ea34f6d7 pinctrl: stm32: use valid pin identifier in stm32_pinctrl_resume()
    4850e9e3c6a3 ARM: 9122/1: select HAVE_FUTEX_CMPXCHG
    80344938e468 e1000e: Separate TGP board type from SPT
    0c4e87ba11eb net: mdiobus: Fix memory leak in __mdiobus_register
    2f4356963624 bpf, test, cgroup: Use sk_{alloc,free} for test cases
    4e5d794a2743 s390/pci: fix zpci_zdev_put() on reserve
    e27170d5f2fc s390/pci: cleanup resources only if necessary
    2be38f02ec89 scsi: core: Fix shost->cmd_per_lun calculation in scsi_add_host_with_dma()
    ff261f9aa654 autofs: fix wait name hash calculation in autofs_wait()
    1009f098dfbe drm/kmb: Limit supported mode to 1080p
    217d42e8b835 drm/kmb: Enable alpha blended second plane
    c1ad040dbea8 net/mlx5: Lag, change multipath and bonding to be mutually exclusive
    d2ec7d208d8e net/mlx5: Lag, move lag destruction to a workqueue
    42b6431f1c17 net: hns3: fix for miscalculation of rx unused desc
    f1972f14f16e sched/scs: Reset the shadow stack when idle_task_exit
    c4813d308517 mm/thp: decrease nr_thps in file's mapping on THP split
    a7fbb56e6c94 scsi: qla2xxx: Fix a memory leak in an error path of qla2x00_process_els()
    c8c1b2183fb8 scsi: mpi3mr: Fix duplicate device entries when scanning through sysfs
    ce527668277c scsi: storvsc: Fix validation for unsolicited incoming packets
    08d82a9b65e7 scsi: iscsi: Fix set_param() handling
    6408a4c8da2f ASoC: codec: wcd938x: Add irq config support
    9eb2aaede632 Input: snvs_pwrkey - add clk handling
    9dd0389d77b9 perf/x86/msr: Add Sapphire Rapids CPU support
    11d6811cbde0 libperf tests: Fix test_stat_cpu
    65eec1fb58c1 libperf test evsel: Fix build error on !x86 architectures
    b6062308c510 spi-mux: Fix false-positive lockdep splats
    722ef19a161c spi: Fix deadlock when adding SPI controllers on SPI buses
    785d69099ef4 ALSA: hda: avoid write to STATESTS if controller is in reset
    3972b03ed085 platform/x86: intel_scu_ipc: Update timeout value in comment
    6659008140b4 platform/x86: intel_scu_ipc: Increase virtual timeout to 10s
    f5966ba53013 isdn: mISDN: Fix sleeping function called from invalid context
    ef24577a52ba ARM: dts: spear3xx: Fix gmac node
    834cc3fc2b99 net: stmmac: add support for dwmac 3.40a
    0c878175dd2f btrfs: deal with errors when checking if a dir entry exists during log replay
    051995bd0f42 ALSA: hda: intel: Allow repeatedly probing on codec configuration errors
    9906da162dc8 objtool: Update section header before relocations
    e73e72be194e objtool: Check for gelf_update_rel[a] failures
    515e03331255 bitfield: build kunit tests without structleak plugin
    3f66b6e01c82 thunderbolt: build kunit tests without structleak plugin
    d9f94a8ec35a device property: build kunit tests without structleak plugin
    2c793a67d71b iio/test-format: build kunit tests without structleak plugin
    930f561aae28 gcc-plugins/structleak: add makefile var for disabling structleak
    1d1af4da1c44 drm/msm/a6xx: Serialize GMU communication
    bbdd158b40b6 kunit: fix reference count leak in kfree_at_end
    dfcc47a1fe36 KVM: MMU: Reset mmu->pkru_mask to avoid stale data
    e647d75565ab net: hns3: fix the max tx size according to user manual
    b0e6db0656dd drm: mxsfb: Fix NULL pointer dereference crash on unload
    56a3d9637b77 KVM: SEV-ES: Set guest_state_protected after VMSA update
    d469678d6b50 net: bridge: mcast: use multicast_membership_interval for IGMPv3
    8f20259f186e selftests: netfilter: remove stray bash debug line
    057aef8df940 netfilter: Kconfig: use 'default y' instead of 'm' for bool config option
    cc20226e218a isdn: cpai: check ctr->cnr to avoid array index out of bound
    6197eb050cfa nfc: nci: fix the UAF of rf_conn_info object
    fb82d4dbee95 KVM: x86: remove unnecessary arguments from complete_emulator_pio_in
    66e46fe3f276 KVM: x86: split the two parts of emulator_pio_in
    9887c1668ada KVM: x86: check for interrupts before deciding whether to exit the fast path
    169577c8840e KVM: x86: leave vcpu->arch.pio.count alone in emulator_pio_in_out
    62a1a254ed83 KVM: SEV-ES: reduce ghcb_sa_len to 32 bits
    3f54362dc7d7 KVM: SEV-ES: go over the sev_pio_data buffer in multiple passes if needed
    4988e000b3a8 KVM: SEV-ES: fix length of string I/O
    727286b23f93 KVM: SEV-ES: keep INS functions together
    98c55c508df0 KVM: SEV-ES: clean up kvm_sev_es_ins/outs
    abcae3cd6272 KVM: SEV-ES: rename guest_ins_data to sev_pio_data
    6697ceb9f6cd KVM: SEV: Flush cache on non-coherent systems before RECEIVE_UPDATE_DATA
    495bd03b6ba5 KVM: nVMX: promptly process interrupts delivered while in guest mode
    dc94b8b3f28a mm, slub: fix incorrect memcg slab count for bulk free
    159d8cfbd042 mm, slub: fix potential use-after-free in slab_debugfs_fops
    42b81946e3ac mm, slub: fix potential memoryleak in kmem_cache_open()
    ec93d4a439c3 mm, slub: fix mismatch between reconstructed freelist depth and cnt
    65ede7bd9713 powerpc/idle: Don't corrupt back chain when going idle
    5a8c22e7fb66 KVM: PPC: Book3S HV: Make idle_kvm_start_guest() return 0 if it went to guest
    6d077c37c464 KVM: PPC: Book3S HV: Fix stack handling in idle_kvm_start_guest()
    e8735e2e306f ucounts: Fix signal ucount refcounting
    1eb825343d63 ucounts: Proper error handling in set_cred_ucounts
    f7f7e4dbc41c ucounts: Pair inc_rlimit_ucounts with dec_rlimit_ucoutns in commit_creds
    32880dcecb51 ucounts: Move get_ucounts from cred_alloc_blank to key_change_session_keyring
    04b938ff2d2c net: dsa: mt7530: correct ds->num_ports
    4e9e46a70020 audit: fix possible null-pointer dereference in audit_filter_rules
    b1a34f86b41f blk-cgroup: blk_cgroup_bio_start() should use irq-safe operations on blkg->iostat_cpu
    152f35191d12 ASoC: nau8824: Fix headphone vs headset, button-press detection no longer working
    a60ce083dcbf ASoC: DAPM: Fix missing kctl change notifications
    9da68a107d07 ALSA: hda/realtek: Add quirk for Clevo PC50HS
    896fc3ab9fc1 ALSA: usb-audio: Provide quirk for Sennheiser GSP670 Headset
    b77ba1e02345 mm/secretmem: fix NULL page->mapping dereference in page_is_secretmem()
    abe046ddf311 vfs: check fd has read access in kernel_read_file_from_fd()
    3681e4772c78 elfcore: correct reference to CONFIG_UML
    9ee4e9ae98f1 mm/mempolicy: do not allow illegal MPOL_F_NUMA_BALANCING | MPOL_LOCAL in mbind()
    149958ecd062 userfaultfd: fix a race between writeprotect and exit_mmap()
    6de91691768c mm/userfaultfd: selftests: fix memory corruption with thp enabled
    0e677ea5b739 ocfs2: mount fails with buffer overflow in strlen
    fa9b6b6c953e ocfs2: fix data corruption after conversion from inline format
    909c8482d8ac tracing: Have all levels of checks prevent recursion
    54dc25f4e31e ceph: fix handling of "meta" errors
    0ff7b35631ac ceph: skip existing superblocks that are blocklisted or shut down when mounting
    d832133cf228 can: j1939: j1939_xtp_rx_rts_session_new(): abort TP less than 9 bytes
    03ec23e55e3e can: j1939: j1939_xtp_rx_dat_one(): cancel session if receive TP.DT with error length
    6e8811707e2d can: j1939: j1939_netdev_start(): fix UAF for rx_kref of j1939_priv
    fb545be86c53 can: j1939: j1939_tp_rxtimer(): fix errant alert in j1939_tp_rxtimer
    013e7890663d can: isotp: isotp_sendmsg(): fix TX buffer concurrent access in isotp_sendmsg()
    a76abedd2be3 can: isotp: isotp_sendmsg(): add result check for wait_event_interruptible()
    1d12d110a820 can: isotp: isotp_sendmsg(): fix return error on FC timeout on TX path
    0e5afdc2315b can: peak_pci: peak_pci_remove(): fix UAF
    44c353a14375 can: peak_usb: pcan_usb_fd_decode_status(): fix back to ERROR_ACTIVE state notification
    e18b9f4d62c1 can: rcar_can: fix suspend/resume
    113f7a1f3421 net: enetc: make sure all traffic classes can send large frames
    284c68cae7e0 net: enetc: fix ethtool counter name for PM0_TERR
    eff55ddd240f drm/kmb: Enable ADV bridge after modeset
    c0179dbae96d drm/kmb: Corrected typo in handle_lcd_irq
    566ec004ed8b drm/kmb: Disable change of plane parameters
    2317b45fb3d2 drm/kmb: Remove clearing DPHY regs
    97cfc4b28c4d drm/kmb: Work around for higher system clock
    4dbf3d658540 drm/panel: ilitek-ili9881c: Fix sync for Feixin K101-IM2BYL02 panel
    3b194affe445 net/mlx5e: IPsec: Fix work queue entry ethernet segment checksum flags
    ea7a4d6132ce net/mlx5e: IPsec: Fix a misuse of the software parser's fields
    e867502bc4f6 ice: Add missing E810 device ids
    9b76c3fedb24 igc: Update I226_K device ID
    3300633367b6 e1000e: Fix packet loss on Tiger Lake and later
    95c0a0c5ec88 ptp: Fix possible memory leak in ptp_clock_register()
    d487413c020f net: stmmac: Fix E2E delay mechanism
    9b5a29f0acef net: hns3: disable sriov before unload hclge layer
    82a136c15a77 net: hns3: fix vf reset workqueue cannot exit
    9ee9191d3384 net: hns3: schedule the polling again when allocation fails
    6446be7c9090 net: hns3: add limit ets dwrr bandwidth cannot be 0
    dddafeda454a net: hns3: reset DWRR of unused tc to zero
    93fa0277ea4e net: hns3: Add configuration of TM QCN error event
    3ea0b497a7a2 powerpc/smp: do not decrement idle task preempt count in CPU offline
    0666c4cd67b1 net: dsa: Fix an error handling path in 'dsa_switch_parse_ports_of()'
    9ef9a287aab5 NIOS2: irqflags: rename a redefined register name
    8b523da74a22 net/sched: act_ct: Fix byte count on fragmented packets
    872b836a183d net: dsa: lantiq_gswip: fix register definition
    57deb5ffd8f6 hamradio: baycom_epp: fix build for UML
    c74f3c127e6c ipv6: When forwarding count rx stats on the orig netdev
    1dda424ef5c4 tcp: md5: Fix overlap between vrf and non-vrf keys
    9c281a1006f4 lan78xx: select CRC32
    5220cad0e69e sctp: fix transport encap_port update in sctp_vtag_verify
    ddffcd23d325 netfilter: ipvs: make global sysctl readonly in non-init netns
    cd3d0282dd3d netfilter: ip6t_rt: fix rt0_hdr parsing in rt_mt6
    aabf95dddb45 ice: Print the api_patch as part of the fw.mgmt.api
    75ebc3b08bbd ice: fix getting UDP tunnel entry
    777682e59840 ice: Avoid crash from unnecessary IDA free
    31b4517293bf ice: Fix failure to re-add LAN/RDMA Tx queues
    18b4fcfeab6d ASoC: wm8960: Fix clock configuration on slave mode
    2b7d598b9651 dma-debug: fix sg checks in debug_dma_map_sg()
    90c7c58aa2bd netfilter: nf_tables: skip netdev events generated on netns removal
    cae7cab804c9 netfilter: xt_IDLETIMER: fix panic that occurs when timer_type has garbage value
    78325fcb17f3 KVM: arm64: Release mmap_lock when using VM_SHARED with MTE
    b372264c66ef KVM: arm64: Fix host stage-2 PGD refcount
    649c2e76632d ASoC: cs4341: Add SPI device ID table
    efdcc26785de ASoC: pcm179x: Add missing entries SPI to device ID table
    242ce1c51b69 ASoC: fsl_xcvr: Fix channel swap issue with ARC
    41551810285e ASoC: pcm512x: Mend accesses to the I2S_1 and I2S_2 registers
    c8ecd221db9a powerpc/bpf: Emit stf barrier instruction sequences for BPF_NOSPEC
    663e89b7f4cc powerpc/security: Add a helper to query stf_barrier type
    23d127cf5e66 powerpc/bpf: Validate branch ranges
    e680ab91a034 powerpc/lib: Add helper to check if offset is within conditional branch range
    6fe5d304ca49 NFSD: Keep existing listeners on portlist error
    9ad89fcde18c xtensa: xtfpga: Try software restart before simulating CPU reset
    8287678c20b2 xtensa: xtfpga: use CONFIG_USE_OF instead of CONFIG_OF
    2602e9cc283a drm/amdgpu: init iommu after amdkfd device init
    68771262aab3 drm/amdgpu/display: fix dependencies for DRM_AMD_DC_SI
    537a7189df57 r8152: avoid to resubmit rx immediately
    7d8cdaffc518 xen/x86: prevent PVH type from getting clobbered
    28d084adc22d block: decode QUEUE_FLAG_HCTX_ACTIVE in debugfs output
    96a37f6acb6a ARM: dts: at91: sama5d2_som1_ek: disable ISC node by default
    470585caf603 arm: dts: vexpress-v2p-ca9: Fix the SMB unit-address
    79e3dc32f14f sh: pgtable-3level: fix cast to pointer from integer of different size
    d3f4c51c2a7f parisc: math-emu: Fix fall-through warnings
    4248f37f6cfc block/mq-deadline: Move dd_queued() to fix defined but not used warning

(From OE-Core rev: e745b0d7c2354405946ab669281224747d255a6c)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5c2c7e54bf17d28fe8b918ee8f053748b2b13e01)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Richard Purdie
520493bef5 opkg: Fix poor operator combination choice
Combining :append with += rarely makes sense. Improve it to use the standard
format (and tweak the implied spacing).
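
A minimal sketch of the pattern (the variable name is illustrative, not the one the recipe actually touches):

  # Confusing: combining the :append override with the "+=" operator
  EXAMPLE_VAR:append += "value"
  # Standard format, with an explicit leading space
  EXAMPLE_VAR:append = " value"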

(From OE-Core rev: 0ed0fd99153dd8a4560b6fbbbaa0decc60f79c5a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 768766dc007ebe9b4bc38d425584be03fbdb98c1)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Alexander Kanavin
ef62cff62e linux-firmware: upgrade 20210919 -> 20211027
License-Update: additional firmwares listed

(From OE-Core rev: 38d613df4854d269493d019d24606161d49b4659)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1ca3fb1c7f11e04bf8d8bf59901ddd60178cb13c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Manuel Leonhardt
b9a58411fc dpkg: Install dpkg-perl scripts to versioned perl directory
Install the dpkg-perl scripts to the versioned perl directory; otherwise
the following traceback happens when running, e.g., dpkg-architecture on
the target:

Can't locate Dpkg.pm in @INC (you may need to install the Dpkg module)
  (@INC contains: /usr/lib/perl5/site_perl/5.30.1/aarch64-linux
  /usr/lib/perl5/site_perl/5.30.1
  /usr/lib/perl5/vendor_perl/5.30.1/aarch64-linux
  /usr/lib/perl5/vendor_perl/5.30.1
  /usr/lib/perl5/5.30.1/aarch64-linux
  /usr/lib/perl5/5.30.1 .) at /usr/bin/dpkg-architecture line 25.

Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
(From OE-Core rev: 37030893cdabdce935defc6f468309d8cd275e53)

Signed-off-by: Manuel Leonhardt <mleonhardt@arri.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bdd4757ae057c7b3bfe27353fa25c4d7807a86ce)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Manuel Leonhardt
b0db46f667 sstate: Account for reserved characters when shortening sstate filenames
Previously, when shortening sstate filenames, the reserved
characters for .siginfo were not considered when siginfo=False,
resulting in differently shortened filenames for the sstate and siginfo
files. With this change, the filenames of the truncated sstate and
siginfo files have the same basename, just as is already the case for
untruncated filenames.

Making sure that the .siginfo files always have the filename of the
corresponding sstate file plus its .siginfo suffix, even when
truncated, makes it easier to manage the sstate cache and an sstate
mirror outside of Bitbake/Yocto.
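
The idea, as a minimal sketch with hypothetical names (not the actual
sstate.bbclass code):

  def shorten(basename, limit=255):
      # Reserve room for ".siginfo" even when siginfo=False, so both
      # files truncate to the same basename.
      reserved = len(".siginfo")
      if len(basename) + reserved > limit:
          basename = basename[:limit - reserved]
      return basename, basename + ".siginfo"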

(From OE-Core rev: c5fbe4b18446900525119038b8c4b284ace3a8d6)

Signed-off-by: Manuel Leonhardt <mleonhardt@arri.de>
Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c2e0e43b7123cf5149833e0072c8edaea3629112)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Kai Kang
fc46b14304 squashfs-tools: fix CVE-2021-41072
Backport the patch to fix CVE-2021-41072. Three ancestor commits are
backported too; otherwise it fails to compile.

CVE: CVE-2021-41072

Ref:
* https://nvd.nist.gov/vuln/detail/CVE-2021-41072

(From OE-Core rev: 329e893a36cf651bfd73abe8e50f173382e3b015)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-15 11:56:16 +00:00
Richard Purdie
6e02c340bf bitbake: runqueue: Fix runall option handling
The previous fix for runall option handling had a small bug in it: it
didn't clear the originally processed task list, which meant it was running
too many tasks. Fix this so the list is reset and rebuilt correctly.

(Bitbake rev: 693eec8edf8d3b2b01c53be6776213cccd797485)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 87c9e120897ed04dfc64d4752fc602f9bfcb8645)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-12 17:27:58 +00:00
Richard Purdie
903333da5b bitbake: runqueue: Fix runall option task deletion ordering issue
The runall option handling in runqueue was flawed as items deleted from the
main task list may be dependencies and hence cause index errors.

Rather than modify runtaskentries straight away, compute a new shortened list
and use that as an input to the second phase. This avoids the need to add tasks
back to the list, meaning delcount can be simplified to a simple counter.

The second use case in runonly doesn't re-add items so doesn't have this
issue.
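
The general pattern, as a hedged sketch ("requested" is a hypothetical
input; runtaskentries and delcount are the names the message refers to):

  # Build a filtered copy instead of deleting entries from the dict
  # while dependencies may still index into it; the number of removed
  # tasks then reduces to a simple counter.
  wanted = {tid: entry for tid, entry in runtaskentries.items()
            if tid in requested}
  delcount = len(runtaskentries) - len(wanted)
  runtaskentries = wanted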

(Bitbake rev: cc2e9c4800a8dfde24b3b5fa7184d0bb6398d4fe)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3428e3c54eb5cc03ff96f9cee6dc839afee7a419)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-12 17:27:58 +00:00
Richard Purdie
679e630732 bitbake: tests/fetch: Update pcre.org address after github changes
vcs.pcre.org was a redirect to github, which we use for subversion testing.
With the protocol changes at github and the removal of the redirect, use a
direct github address.

(Bitbake rev: 85eb90edb4b912b4befb10128d60d342d0525eb3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6230ca71eb7eb2a6db162e28a01727d00af5299b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-12 17:27:58 +00:00
Jose Quaresma
a48e0bb5ec bitbake: cooker: check if upstream hash equivalence server is available
When the user specifies an invalid upstream hash equivalence server in
BB_HASHSERVE_UPSTREAM, notify the user that we can't connect to the server.
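
A hedged sketch of such a reachability check (not the actual cooker code;
host:port parsing is simplified and unix-socket addresses are ignored):

  import socket

  def upstream_reachable(upstream, timeout=5):
      # upstream is "host:port"; report reachability so the user can be
      # warned instead of the failure surfacing much later
      host, port = upstream.rsplit(":", 1)
      try:
          socket.create_connection((host, int(port)), timeout=timeout).close()
          return True
      except (OSError, ValueError):
          return False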

(Bitbake rev: 7561fdc23f1aff370ead2abc5747c3a1c8b4ae4d)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit be45aeb9a84f30c28711e87e2d2a4a86320a8d94)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-12 17:27:58 +00:00
Richard Purdie
a498f39e5b bitbake: fetch: Handle mirror user/password replacements correctly
Username or password replacements in URIs were being appended rather than
replaced in mirror url remapping. Fix this and add a test case.

[YOCTO #13823]
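
The intended behaviour, shown as a hedged urllib-based sketch rather than
bitbake's own URI handling:

  from urllib.parse import urlsplit, urlunsplit

  def replace_creds(url, user, password):
      # Replace any existing credentials instead of appending to them
      parts = urlsplit(url)
      host = parts.hostname or ""
      if parts.port:
          host = "%s:%d" % (host, parts.port)
      netloc = "%s:%s@%s" % (user, password, host)
      return urlunsplit((parts.scheme, netloc, parts.path,
                         parts.query, parts.fragment))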

(Bitbake rev: 85e7af227a48faec65838dcb7e73b17344bb2a0d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 66ad58bb87e5158aced572be4f1d5726bc97fcce)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-12 17:27:58 +00:00
Richard Purdie
164c944983 bitbake: tests/fetch: Update github urls
(Bitbake rev: 5e9bb32f229d4beebf11b880841edd5a7417bb70)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 07fca7e3ab696ba985b3ef86ab9031d688bf2df2)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-12 17:27:58 +00:00
Quentin Schulz
98d101475a conf: update for release 3.4
conf.py:
* set version to 3.4

switchers.js:
* add 3.4 release
* update 'dev' to 3.5

(From yocto-docs rev: 063e21e1eaffa3e43119800bea50263e12b45f92)

Signed-off-by: Quentin Schulz <quentin.schulz@theobroma-systems.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-11 11:08:13 +00:00
Richard Purdie
9b66d8fb60 bitbake: fetch/wget: Add timeout for checkstatus calls (30s)
We had an issue where a webserver serving sstate had filesystem issues so
would accept connections but effectively not do anything with them. This
causes bitbake to hang whilst processing things like sstate objects inside
the checkstatus() calls. It can be replicated by setting up a server like:

socat -u TCP4-LISTEN:NNN,fork OPEN:/dev/null

and pointing SSTATE_MIRRORS in OE at that address.

Adding a timeout to the checkstatus calls of 30s means that whilst the
system will pause, it will then continue and not hang entirely. Since there
isn't a large transfer here, 30s should be a reasonable response time after
which we should fall back to building things ourselves.

[YOCTO #13716]
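
A hedged sketch of the kind of call involved (urllib accepts a timeout
directly; the URL is a placeholder, not the fetcher's code):

  import urllib.request

  # With a timeout, a server that accepts connections but never responds
  # raises an exception after 30s instead of hanging checkstatus() forever.
  with urllib.request.urlopen("https://sstate-mirror.example/obj",
                              timeout=30) as resp:
      ok = (resp.status == 200)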

(Bitbake rev: ba97caa58efe25bb62d2378fa52d21b6a6aa446c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-11 11:04:15 +00:00
Peter Kjellerstedt
72f8934284 qemu.inc: Remove empty egg-info directories before running meson
This is the same solution that has been applied to meson.bbclass to
allow building with meson after it has been updated to a new
version. It needs to be applied here as well since qemu uses meson
without inheriting meson.bbclass.

(From OE-Core rev: 3cbe3e6f932151800793854ad5d3569dc6f36ab1)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9d05227e910d3f374ba7a9763ff2584b9e40db61)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Peter Kjellerstedt
0979299bb9 meson.bbclass: Remove empty egg-info directories before running meson
sstate.bbclass no longer removes empty directories to avoid a race (see
commit 4f94d929 "sstate/staging: Handle directory creation race issue").
Unfortunately Python apparently treats an empty egg-info directory as if
the version it previously contained still exists and fails if a newer
version is required, which Meson does. To avoid this, make sure there
are no empty egg-info directories from previous versions left behind.
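
A hedged sketch of the cleanup (the real class may hook a different task
and path):

  do_configure:prepend() {
      # Empty *.egg-info directories make Python believe the old version
      # is still installed; remove them before meson runs.
      find ${STAGING_LIBDIR_NATIVE} -depth -type d -name '*.egg-info' -empty -delete 2>/dev/null || true
  }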

(From OE-Core rev: 0abc761e84ea25a4acc7633eb9b5c8ae73120116)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 47d9d90b4ec7d04d6f3f1a9b97c0ab7f1264a88e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Teoh Jay Shen
a10db5945e oeqa/runtime/parselogs: modified drm error in common errors list
Changed the following entry:

  from: [drm] Cannot find any crtc or sizes - going 1024x768
  to:   [drm] Cannot find any crtc or sizes

This expands the coverage of the failure to also cover the case when the fallback size is not set.
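
Roughly how such an entry sits in the test's error list, as a hedged
sketch (surrounding entries elided):

  common_errors = [
      # previously: '[drm] Cannot find any crtc or sizes - going 1024x768'
      '[drm] Cannot find any crtc or sizes',
  ]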

(From OE-Core rev: 51c6c16e2342a13874124d9364d92b340cb002ed)

Signed-off-by: Teoh Jay Shen <jay.shen.teoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0569fa735458512d6e15aa3315218ecbdf8510a3)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Saul Wold
b922f5cfa1 create-spdx: cross recipes are native also
Recipes that inherit cross should also be categorized as isNative
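
A hedged one-liner for the categorization, assuming the check uses
bb.data.inherits_class (the actual code may differ):

  is_native = bb.data.inherits_class("native", d) or \
              bb.data.inherits_class("cross", d)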

(From OE-Core rev: 9edd5e3eeec447a1d90ebbfc681c84d7047933ec)

Signed-off-by: Saul Wold <saul.wold@windriver.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit ee113e3894deb1cfb18622085a3fe0600e1ef01d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Saul Wold
ceb1f52dff create-spdx: add create_annotation function
This allows code reuse and future usage with relationship annotations

(From OE-Core rev: a56b50ada5d1aba57e901684af6a3761f74f6674)

Signed-off-by: Saul Wold <saul.wold@windriver.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 1f8fdb7dc9d02d0ee3c42674ca16e03f0ec18cba)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Saul Wold
4c9414b35d spdx.py: Add annotation to relationship
Having annotations on a relationship can provide additional information
about the relationship, such as how it was derived.

(From OE-Core rev: 37a29bd732cb917da4930ef624da72f5196732cc)

Signed-off-by: Saul Wold <saul.wold@windriver.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit d98585aa89e1d3819f8139a07fb7376ef89b37f8)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Alexander Kanavin
c7b2db1fa6 tzdata: update 2021d -> 2021e
(From OE-Core rev: f598f13dd642fc6451a3700ea77bef4a841e81c1)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 660f932c21fed410ad092ec610749e7090b6a324)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Alexander Kanavin
17be7973d6 tzdata: upgrade 2021a -> 2021d
(From OE-Core rev: 38da21f954899bb1a0dd05be87c8794d12b96b5a)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f171f4f528090fc108624de6049274aa4d4880eb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Alexander Kanavin
d70d12d140 stress-ng: convert to git, website is down
(From OE-Core rev: 464fba5a4ee320fb964fcaa378c899aa04ade558)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 0bc00868993d7093a70f29de9047f9ae0be33836)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Ahmed Hossam
2f3ee2aff2 go.bbclass: Allow adding parameters to go ldflags
Currently, there is no clean way to pass extra parameters to the go tool link
(which is invoked via the go build -ldflags flag): the append needs to happen
inside the quotes of the ldflags parameter.

See [YOCTO #14554].

Add a variable that allows adding extra parameters to -ldflags in the GO_LDFLAGS
variable; one of the main use cases is setting the application version.

For example, adding to the recipe something like
GO_EXTRA_LDFLAGS="-X main.Version=v1.0.0"
or
GO_EXTRA_LDFLAGS="-X main.Version=${PV}"

(From OE-Core rev: 4c0c5edbb561f2bd21bba979ed7553fb3b717116)

Signed-off-by: Ahmed Hossam <Ahmed.Hossam@opensynergy.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit eaa7a61dab9a1d7bb039f16abdd9aacb44faa595)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Hsia-Jun(Randy) Li
1eb9a963ff meson: install native file in sdk
Without a native environment file, find_program() can't locate native
programs inside the SDK.

That stops a Wayland compositor from using wayland-scanner.
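
For reference, a native file hands meson explicit host-tool locations; a
minimal hedged example (the path is illustrative):

  [binaries]
  # point meson at the SDK-provided host tools
  wayland-scanner = '/opt/sdk/sysroots/x86_64-pokysdk-linux/usr/bin/wayland-scanner'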

(From OE-Core rev: 2ea62c23bf9d37e46d3cd9aa7527c535994d4b77)

Signed-off-by: Hsia-Jun(Randy) Li <randy.li@synaptics.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c6aed1084006727e3baf70ab9d1f70d9d2d6c01f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Randy Li
ae397dcedc meson: move lang args to the right section
Since meson 0.56.0, <lang>_args and <lang>_link_args are regarded as
meson built-in options.
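
A hedged sketch of the machine-file change (the section placement is the
point; the values are illustrative):

  [built-in options]
  # before 0.56.0 these lived under [properties] as c_args / c_link_args
  c_args = ['-O2', '-pipe']
  c_link_args = ['-Wl,-O1']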

(From OE-Core rev: 07e2ace3e9208b1a0806cd0ab768059671974a1c)

Signed-off-by: Hsia-Jun(Randy) Li <randy.li@synaptics.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 50c8f654e9006a7c902dd76f75082d4f8d668d0c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Bruce Ashfield
b254cbfcff linux-yocto/5.10: update to v5.10.75
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    3a9842b42e42 Linux 5.10.75
    3e2873652163 net: dsa: mv88e6xxx: don't use PHY_DETECT on internal PHY's
    3593fa147c86 ionic: don't remove netdev->dev_addr when syncing uc list
    f33890d9bb59 net: mscc: ocelot: warn when a PTP IRQ is raised for an unknown skb
    9c546af181bc nfp: flow_offload: move flow_indr_dev_register from app init to app start
    6da9af2d2531 r8152: select CRC32 and CRYPTO/CRYPTO_HASH/CRYPTO_SHA256
    ecfd4fa15b06 qed: Fix missing error code in qed_slowpath_start()
    51f6e72ca656 mqprio: Correct stats in mqprio_dump_class_stats().
    fdaff7f9e806 platform/x86: intel_scu_ipc: Fix busy loop expiry time
    057ee6843bbb acpi/arm64: fix next_platform_timer() section mismatch error
    c6b2400095ba drm/msm/dsi: fix off by one in dsi_bus_clk_enable error handling
    2c5658717428 drm/msm/dsi: Fix an error code in msm_dsi_modeset_init()
    b28586fb04f3 drm/msm/a6xx: Track current ctx by seqno
    abd11864159b drm/msm/mdp5: fix cursor-related warnings
    91a340768b01 drm/msm: Fix null pointer dereference on pointer edp
    a7b45024f66f drm/edid: In connector_bad_edid() cap num_of_ext by num_blocks read
    d0f0e1710397 drm/panel: olimex-lcd-olinuxino: select CRC32
    a4a37e6516f8 spi: bcm-qspi: clear MSPI spifie interrupt during probe
    d9428f08e1c3 platform/mellanox: mlxreg-io: Fix read access of n-bytes size attributes
    c216cebdd245 platform/mellanox: mlxreg-io: Fix argument base in kstrtou32() call
    e59d839743b5 mlxsw: thermal: Fix out-of-bounds memory accesses
    7eef482db728 ata: ahci_platform: fix null-ptr-deref in ahci_platform_enable_regulators()
    116932c0e45e pata_legacy: fix a couple uninitialized variable bugs
    50cb95487c26 NFC: digital: fix possible memory leak in digital_in_send_sdd_req()
    3f2960b39f22 NFC: digital: fix possible memory leak in digital_tg_listen_mdaa()
    2f21f06a5e7a nfc: fix error handling of nfc_proto_register()
    ba39f55952a2 vhost-vdpa: Fix the wrong input in config_cb
    84e0f2fc662e ethernet: s2io: fix setting mac address during resume
    e19c10d6e07c net: encx24j600: check error in devm_regmap_init_encx24j600
    f2e1de075018 net: dsa: microchip: Added the condition for scheduling ksz_mib_read_work
    9053c5b4594c net: stmmac: fix get_hw_feature() on old hardware
    12da46cb6a90 net/mlx5e: Mutually exclude RX-FCS and RX-port-timestamp
    4f7bddf8c5c0 net/mlx5e: Fix memory leak in mlx5_core_destroy_cq() error path
    afb0c67dfdb5 net: korina: select CRC32
    33ca85010511 net: arc: select CRC32
    17a027aafd52 gpio: pca953x: Improve bias setting
    d84a69ac410f sctp: account stream padding length for reconf chunk
    6fecdb5b54a5 nvme-pci: Fix abort command id
    2d937cc12c14 ARM: dts: bcm2711-rpi-4-b: Fix pcie0's unit address formatting
    6e6082250b53 ARM: dts: bcm2711-rpi-4-b: fix sd_io_1v8_reg regulator states
    48613e687e28 ARM: dts: bcm2711: fix MDIO #address- and #size-cells
    6e6e3018d3ce ARM: dts: bcm2711-rpi-4-b: Fix usb's unit address
    76644f94595b tee: optee: Fix missing devices unregister during optee_remove
    07f885682486 iio: dac: ti-dac5571: fix an error code in probe()
    6c0024bcaadc iio: ssp_sensors: fix error code in ssp_print_mcu_debug()
    0fbc3cf7dd9a iio: ssp_sensors: add more range checking in ssp_parse_dataframe()
    abe5b13dd959 iio: adc: max1027: Fix the number of max1X31 channels
    41e84a4f25b6 iio: light: opt3001: Fixed timeout error when 0 lux
    e811506f609a iio: mtk-auxadc: fix case IIO_CHAN_INFO_PROCESSED
    1671cfd31b66 iio: adc: max1027: Fix wrong shift with 12-bit devices
    f931076d32b6 iio: adc128s052: Fix the error handling path of 'adc128_probe()'
    4425d059aa2e iio: adc: ad7793: Fix IRQ flag
    d078043a1775 iio: adc: ad7780: Fix IRQ flag
    a8177f0576fa iio: adc: ad7192: Add IRQ flag
    be8ef91d6166 driver core: Reject pointless SYNC_STATE_ONLY device links
    d5f13bbb5104 drivers: bus: simple-pm-bus: Add support for probing simple bus only devices
    b45923f66eb6 iio: adc: aspeed: set driver data when adc probe.
    ea947267eb6f powerpc/xive: Discard disabled interrupts in get_irqchip_state()
    9e46bdfb55a3 x86/Kconfig: Do not enable AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT automatically
    57e48886401b nvmem: Fix shift-out-of-bound (UBSAN) with byte size cells
    a7bd0dd3f2ed EDAC/armada-xp: Fix output of uncorrectable error counter
    92e6e08ca2b0 virtio: write back F_VERSION_1 before validate
    86e3ad8b759d misc: fastrpc: Add missing lock before accessing find_vma()
    3f0ca245a834 USB: serial: option: add prod. id for Quectel EG91
    ecad614b0c68 USB: serial: option: add Telit LE910Cx composition 0x1204
    bf26bc72dc59 USB: serial: option: add Quectel EC200S-CN module support
    d4b77900cffe USB: serial: qcserial: add EM9191 QDL support
    3147f5721588 Input: xpad - add support for another USB ID of Nacon GC-100
    9d89e2871167 usb: musb: dsps: Fix the probe error path
    3b4275140142 efi: Change down_interruptible() in virt_efi_reset_system() to down_trylock()
    5100dc4489ab efi/cper: use stack buffer for error record decoding
    2c5dd2a8af77 cb710: avoid NULL pointer subtraction
    d40e193abd07 xhci: Enable trust tx length quirk for Fresco FL11 USB controller
    dec944bb7079 xhci: Fix command ring pointer corruption while aborting a command
    dc3e0a20dbb9 xhci: guard accesses to ep_state in xhci_endpoint_reset()
    0ee66290f006 USB: xhci: dbc: fix tty registration race
    9f0d6c781cb5 mei: me: add Ice Lake-N device id.
    e4f7171c2395 x86/resctrl: Free the ctrlval arrays when domain_setup_mon_state() fails
    0e32a2b85c7d btrfs: fix abort logic in btrfs_replace_file_extents
    52924879ed45 btrfs: update refs for any root except tree log roots
    352349aa4948 btrfs: check for error when looking up inode during dir entry replay
    4ed68471bc37 btrfs: deal with errors when adding inode reference during log replay
    95d3aba5febe btrfs: deal with errors when replaying dir entry during log replay
    206868a5b6c1 btrfs: unlock newly allocated extent buffer after error
    e7e3ed5c92b6 drm/msm: Avoid potential overflow in timeout_to_jiffies()
    a31c33aa80a5 arm64/hugetlb: fix CMA gigantic page order for non-4K PAGE_SIZE
    0c97008859ca csky: Fixup regs.sr broken in ptrace
    5dab6e8f141a csky: don't let sigreturn play with priveleged bits of status register
    e3c37135c9ca clk: socfpga: agilex: fix duplicate s2f_user0_clk
    faba7916cdc0 s390: fix strrchr() implementation
    7ef43c0f68fb nds32/ftrace: Fix Error: invalid operands (*UND* and *UND* sections) for `^'
    c3bf276fd7c8 ALSA: hda/realtek: Fix the mic type detection issue for ASUS G551JW
    1099953b32c6 ALSA: hda/realtek: Fix for quirk to enable speaker output on the Lenovo 13s Gen2
    554a5027f536 ALSA: hda/realtek: Add quirk for TongFang PHxTxX1
    0fa256509b9f ALSA: hda/realtek - ALC236 headset MIC recording issue
    1e10c6bf15d2 ALSA: hda/realtek: Add quirk for Clevo X170KM-G
    8a5f01f4b01c ALSA: hda/realtek: Complete partial device name to avoid ambiguity
    c6e5290e6cc1 ALSA: hda - Enable headphone mic on Dell Latitude laptops with ALC3254
    9bb1659ac594 ALSA: hda/realtek: Enable 4-speaker output for Dell Precision 5560 laptop
    7680631ac7ab ALSA: seq: Fix a potential UAF by wrong private_free call order
    4aab156d302c ALSA: pcm: Workaround for a wrong offset in SYNC_PTR compat ioctl
    f077d699c1d2 ALSA: usb-audio: Add quirk for VF0770

(From OE-Core rev: 40a688f4b3398c1bfe1258be98c3ff7b74699094)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 08857198b40617d53701ac46d95d6d60dfbdb4af)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Bruce Ashfield
88e6b99201 linux-yocto/5.14: update to v5.14.14
Updating linux-yocto/5.14 to the latest korg -stable release that comprises
the following commits:

    fe024e004fa3 Linux 5.14.14
    9513ce07f05b ionic: don't remove netdev->dev_addr when syncing uc list
    6b55eadb0b1d net: dsa: felix: break at first CPU port during init and teardown
    9d2cec10ea9e net: mscc: ocelot: cross-check the sequence id from the timestamp FIFO with the skb PTP header
    23a6801c0585 net: mscc: ocelot: deny TX timestamping of non-PTP packets
    de32ef6d79dd net: mscc: ocelot: warn when a PTP IRQ is raised for an unknown skb
    3b4241817601 net: mscc: ocelot: avoid overflowing the PTP timestamp FIFO
    34fd7a2e375a net: mscc: ocelot: make use of all 63 PTP timestamp identifiers
    f7697d70d76b nfp: flow_offload: move flow_indr_dev_register from app init to app start
    9d162f541ba3 block/rnbd-clt-sysfs: fix a couple uninitialized variable bugs
    61616be89997 ice: fix locking for Tx timestamp tracking flush
    99eef638a327 r8152: select CRC32 and CRYPTO/CRYPTO_HASH/CRYPTO_SHA256
    821dca5635e2 qed: Fix missing error code in qed_slowpath_start()
    1a4554e94f0d mptcp: fix possible stall on recvmsg()
    4fd74935619f mqprio: Correct stats in mqprio_dump_class_stats().
    395218b5c7e0 platform/x86: intel_scu_ipc: Fix busy loop expiry time
    b4fb645a7412 acpi/arm64: fix next_platform_timer() section mismatch error
    6302ce26eceb drm/msm/dsi: fix off by one in dsi_bus_clk_enable error handling
    3c403c4c0580 drm/msm/dsi: Fix an error code in msm_dsi_modeset_init()
    f1457eea4ccd drm/msm/dsi: dsi_phy_14nm: Take ready-bit into account in poll_for_ready
    d59e44e7821a drm/msm/a3xx: fix error handling in a3xx_gpu_init()
    3962d626eb3e drm/msm/a4xx: fix error handling in a4xx_gpu_init()
    20cfa89cd7e1 drm/msm/a6xx: Track current ctx by seqno
    00ba7a3951f4 drm/msm/submit: fix overflow check on 64-bit architectures
    2d28dafbc88e drm/msm/mdp5: fix cursor-related warnings
    46c8ddede027 drm/msm: Fix null pointer dereference on pointer edp
    09f3946bb452 drm/edid: In connector_bad_edid() cap num_of_ext by num_blocks read
    8b0462c25eff drm/panel: olimex-lcd-olinuxino: select CRC32
    dc4f4acadabf spi: bcm-qspi: clear MSPI spifie interrupt during probe
    2a51f25a7ed9 spi: spidev: Add SPI ID table
    b461c8553474 platform/mellanox: mlxreg-io: Fix read access of n-bytes size attributes
    1da4f33681b5 platform/mellanox: mlxreg-io: Fix argument base in kstrtou32() call
    df8e58716afb mlxsw: thermal: Fix out-of-bounds memory accesses
    2d14f8a9f1b7 ata: ahci_platform: fix null-ptr-deref in ahci_platform_enable_regulators()
    55b033b82dde pata_legacy: fix a couple uninitialized variable bugs
    6432d7f1d1c3 NFC: digital: fix possible memory leak in digital_in_send_sdd_req()
    564249219e5b NFC: digital: fix possible memory leak in digital_tg_listen_mdaa()
    e005ba2235b6 nfc: fix error handling of nfc_proto_register()
    0b84e32840b7 vhost-vdpa: Fix the wrong input in config_cb
    2d902349653c ethernet: s2io: fix setting mac address during resume
    322c0e534963 net: encx24j600: check error in devm_regmap_init_encx24j600
    38eaccdcc811 net: dsa: fix spurious error message when unoffloaded port leaves bridge
    383239a33cf2 net: dsa: microchip: Added the condition for scheduling ksz_mib_read_work
    b1752d2f4fc2 net: dsa: mv88e6xxx: don't use PHY_DETECT on internal PHY's
    f71c73a1275c net: phy: Do not shutdown PHYs in READY state
    568feb737f5e net: stmmac: fix get_hw_feature() on old hardware
    947442b62090 net/mlx5e: Switchdev representors are not vlan challenged
    2f306483d547 net/mlx5e: Mutually exclude RX-FCS and RX-port-timestamp
    ed8aafea4fec net/mlx5e: Fix memory leak in mlx5_core_destroy_cq() error path
    0d9ddf515cde net/smc: improved fix wait on already cleared link
    844b62f61709 net: korina: select CRC32
    af9a33bfff34 net: arc: select CRC32
    81099749174e gpio: pca953x: Improve bias setting
    9025c92a6cc7 gpio: 74x164: Add SPI device ID table
    4f0bc44b9191 sctp: account stream padding length for reconf chunk
    5ccd69157a9a nvme-pci: Fix abort command id
    9036542c2bef clk: renesas: rzg2l: Fix clk status function
    abab28387755 ARM: dts: bcm2711-rpi-4-b: Fix pcie0's unit address formatting
    264e77ee3987 ARM: dts: bcm2711-rpi-4-b: fix sd_io_1v8_reg regulator states
    06560ba731e2 firmware: arm_ffa: Add missing remove callback to ffa_bus_type
    b2da1ae1941d firmware: arm_ffa: Fix __ffa_devices_unregister
    a0dfb710735d ARM: dts: bcm2711: fix MDIO #address- and #size-cells
    83fe15846c48 ARM: dts: bcm283x: Fix VEC address for BCM2711
    2a7374dd882d ARM: dts: bcm2711-rpi-4-b: Fix usb's unit address
    a009758b28f3 tee: optee: Fix missing devices unregister during optee_remove
    362d067a231d tracing: Fix missing osnoise tracer on max_latency
    ce5c6dd07473 iio: dac: ti-dac5571: fix an error code in probe()
    8d3fd8fdf2cb fpga: ice40-spi: Add SPI device ID table
    645e2c994b6a eeprom: at25: Add SPI ID table
    362fe6c8d5ab eeprom: 93xx46: fix MODULE_DEVICE_TABLE
    42c587653cb7 eeprom: 93xx46: Add SPI device ID table
    1a5ba478c41c Input: resistive-adc-touch - fix division by zero error on z1 == 0
    6ad4dc9602fa iio: ssp_sensors: fix error code in ssp_print_mcu_debug()
    af8aae7a1257 iio: ssp_sensors: add more range checking in ssp_parse_dataframe()
    3903e5404214 iio: adc: max1027: Fix the number of max1X31 channels
    43e399d862ef iio: accel: fxls8962af: return IRQ_HANDLED when fifo is flushed
    56e3bcdf6b9b iio: light: opt3001: Fixed timeout error when 0 lux
    07415de29ded iio: mtk-auxadc: fix case IIO_CHAN_INFO_PROCESSED
    04e03b907022 iio: adis16475: fix deadlock on frequency set
    06a6230a5683 iio: adc: max1027: Fix wrong shift with 12-bit devices
    45b54f7f6ae7 iio: adc128s052: Fix the error handling path of 'adc128_probe()'
    2c675f25eb35 iio: adis16480: fix devices that do not support sleep mode
    696eef458c31 iio: adc: ad7793: Fix IRQ flag
    c9e8c11b1a84 iio: adc: ad7780: Fix IRQ flag
    d8f72ea6ccfd iio: adc: ad7192: Add IRQ flag
    10dea2bc52e4 driver core: Reject pointless SYNC_STATE_ONLY device links
    e733c7a6f754 drivers: bus: simple-pm-bus: Add support for probing simple bus only devices
    11d6dbd807aa iio: adc: aspeed: set driver data when adc probe.
    74c078866ff4 powerpc/xive: Discard disabled interrupts in get_irqchip_state()
    202975c570d2 x86/Kconfig: Do not enable AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT automatically
    128f38289215 x86/fpu: Mask out the invalid MXCSR bits properly
    bce9adf0b5ea Revert "virtio-blk: Add validation for block size in config space"
    f2935e790419 virtio-blk: remove unneeded "likely" statements
    0e822e5413da nvmem: Fix shift-out-of-bound (UBSAN) with byte size cells
    eb1e9f2ec683 EDAC/armada-xp: Fix output of uncorrectable error counter
    2c2e626d9ba4 virtio: write back F_VERSION_1 before validate
    d22592f1fd8d misc: fastrpc: Add missing lock before accessing find_vma()
    6df4c42e0b60 USB: serial: option: add prod. id for Quectel EG91
    b39adce3afe1 USB: serial: option: add Telit LE910Cx composition 0x1204
    8372fb17ebf2 USB: serial: option: add Quectel EC200S-CN module support
    1e2c4a11a59b USB: serial: qcserial: add EM9191 QDL support
    96703298fc51 Input: xpad - add support for another USB ID of Nacon GC-100
    ff9249aab398 usb: musb: dsps: Fix the probe error path
    85c6f477b357 efi: Change down_interruptible() in virt_efi_reset_system() to down_trylock()
    3b7951e32193 efi/cper: use stack buffer for error record decoding
    746b00a48688 cb710: avoid NULL pointer subtraction
    2b6c75bf9202 xhci: Enable trust tx length quirk for Fresco FL11 USB controller
    e54abefe703a xhci: Fix command ring pointer corruption while aborting a command
    fa3093d37cce xhci: add quirk for host controllers that don't update endpoint DCS
    eacfdec26656 xhci: guard accesses to ep_state in xhci_endpoint_reset()
    db96c1d87c95 USB: xhci: dbc: fix tty registration race
    7c0af62f11c3 mei: hbm: drop hbm responses on early shutdown
    fe87a580929e mei: me: add Ice Lake-N device id.
    ce8f1faa8140 x86/resctrl: Free the ctrlval arrays when domain_setup_mon_state() fails
    0294b7ccb00b module: fix clang CFI with MODULE_UNLOAD=n
    0e309e1152fc btrfs: fix abort logic in btrfs_replace_file_extents
    f86531a3115f btrfs: update refs for any root except tree log roots
    5dbc0d798074 btrfs: check for error when looking up inode during dir entry replay
    439cce2df925 btrfs: deal with errors when adding inode reference during log replay
    790dbfcd43a0 btrfs: deal with errors when replaying dir entry during log replay
    0adda9f173f1 btrfs: unlock newly allocated extent buffer after error
    697ee8c3d3fa drm/msm: Avoid potential overflow in timeout_to_jiffies()
    2479f72f5328 drm/msm: Do not run snapshot on non-DPU devices
    95a9523afb3d drm/nouveau/fifo: Reinstate the correct engine bit programming
    0af9c042cd6e arm64/hugetlb: fix CMA gigantic page order for non-4K PAGE_SIZE
    f66b6d61f2e3 drm/fbdev: Clamp fbdev surface size if too large
    2c7820141702 csky: Fixup regs.sr broken in ptrace
    f8e8e5448c77 csky: don't let sigreturn play with priveleged bits of status register
    46f067744387 clk: socfpga: agilex: fix duplicate s2f_user0_clk
    d429630cde94 s390: fix strrchr() implementation
    8ca9745efe35 dm rq: don't queue request to blk-mq during DM suspend
    d856f5d13d65 ACPI: PM: Include alternate AMDI0005 id in special behaviour
    6e506f07c5b5 dm: fix mempool NULL pointer race when completing IO
    594a97f7617b nds32/ftrace: Fix Error: invalid operands (*UND* and *UND* sections) for `^'
    24262c6439c6 mtd: rawnand: qcom: Update code word value for raw read
    f7744bdec09f spi: atmel: Fix PDC transfer setup bug
    26a88eedfc88 platform/x86: amd-pmc: Add alternative acpi id for PMC controller
    1a707ec090e9 platform/x86: gigabyte-wmi: add support for B550 AORUS ELITE AX V2
    52d44bd028c1 ALSA: hda/realtek: Fix the mic type detection issue for ASUS G551JW
    8c5628cbb26e ALSA: hda/realtek: Fix for quirk to enable speaker output on the Lenovo 13s Gen2
    9a13d0f9c3d9 ALSA: hda/realtek: Add quirk for TongFang PHxTxX1
    f8d3c17e1c37 ALSA: hda/realtek - ALC236 headset MIC recording issue
    1f923b81f49e ALSA: hda/realtek: Add quirk for Clevo X170KM-G
    07015c2e0f35 ALSA: hda/realtek: Complete partial device name to avoid ambiguity
    a2fc31b3699a ALSA: hda - Enable headphone mic on Dell Latitude laptops with ALC3254
    72653bfc9b9d ALSA: hda/realtek: Enable 4-speaker output for Dell Precision 5560 laptop
    14137ae740cb ALSA: seq: Fix a potential UAF by wrong private_free call order
    dfd5633ae775 ALSA: usb-audio: Fix a missing error check in scarlett gen2 mixer
    1a98c3c68795 ALSA: pcm: Workaround for a wrong offset in SYNC_PTR compat ioctl
    ca3dccb96511 ALSA: usb-audio: Add quirk for VF0770

(From OE-Core rev: 3ac58e2c4ebc3b4ebccfccca587406dcf5a7aa28)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3471c208fe87e80e4e8d54bc3e24d8ea9c3f6b2a)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Bruce Ashfield
25e649f67e linux-yocto/5.14: common-pc: enable CONFIG_ATA_PIIX as built-in
Jacob Kroon reported that generic/custom x86 kernels would no
longer boot out of the box since the IDE options were removed
and the PATA migration happened.

To re-enable that use case, we grab the following kernel
configuration change:

    common-pc*/qemux86*: set CONFIG_ATA_PIIX as built-in

    Since the IDE options were made obsolete in the kernel, and the
    PATA driver is the replacement, we haven't had one of the commonly
    used qemu boot devices enabled in our kernel by default.

    We change CONFIG_ATA_PIIX to built-in, to re-enable use cases that
    boot from default qemu 'hardware'.

    Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
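
In config-fragment terms, "built-in" means the option is set to y rather
than m:

  CONFIG_ATA_PIIX=y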

Reported-by: Jacob Kroon <jacob.kroon@gmail.com>
(From OE-Core rev: 32f484d445eedadb9d1f2428398a4ec64ac7e4eb)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 341707513a7c3cfcd797f6631b8daf09ddf5bae8)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Bruce Ashfield
3fe7557392 linux-yocto/5.10: update to v5.10.74
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    77434fe5a077 Linux 5.10.74
    42b49f012b6a hwmon: (pmbus/ibm-cffps) max_power_out swap changes
    bb893f075431 sched: Always inline is_percpu_thread()
    bdae2a083436 perf/core: fix userpage->time_enabled of inactive events
    57c7ca3d5592 scsi: virtio_scsi: Fix spelling mistake "Unsupport" -> "Unsupported"
    d993d1e1c411 scsi: ses: Fix unsigned comparison with less than zero
    621ddffb70db drm/amdgpu: fix gart.bo pin_count leak
    a5ba615fbeb3 net: sun: SUNVNET_COMMON should depend on INET
    db868b45324d vboxfs: fix broken legacy mount signature checking
    42c871d38e3d mac80211: check return value of rhashtable_init
    bda06aff03a1 net: prevent user from passing illegal stab size
    3d68c7b0ab5b hwmon: (ltc2947) Properly handle errors when looking for the external clock
    194e8a4f0acd m68k: Handle arrivals of multiple signals correctly
    977aee58142a mac80211: Drop frames from invalid MAC address in ad-hoc mode
    9ec9a975ea37 netfilter: nf_nat_masquerade: defer conntrack walk to work queue
    5182d6db80bb netfilter: nf_nat_masquerade: make async masq_inet6_event handling generic
    bcb647c1e15d ASoC: SOF: loader: release_firmware() on load failure to avoid batching
    f6952b1e22c2 HID: wacom: Add new Intuos BT (CTL-4100WL/CTL-6100WL) device IDs
    ddc4ba737bcb netfilter: ip6_tables: zero-initialize fragment offset
    ddf026d6ae9a HID: apple: Fix logical maximum and usage maximum of Magic Keyboard JIS
    0bcfa99e8fae ASoC: Intel: sof_sdw: tag SoundWire BEs as non-atomic
    14cbfeeee41b ext4: correct the error path of ext4_write_inline_data_end()
    d7a15e1e4fd7 ext4: check and update i_disksize properly

(From OE-Core rev: 7615702d29bd1578416e3a965a794fa2aad3f88f)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8e863e9c57fc26e4158b6c10b04931976c54efb8)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Bruce Ashfield
4d2d21ef4d linux-yocto/5.14: update to v5.14.13
Updating linux-yocto/5.14 to the latest korg -stable release that comprises
the following commits:

    b9ed05407395 Linux 5.14.13
    d7c187ab28f6 hwmon: (pmbus/ibm-cffps) max_power_out swap changes
    e798dcd960a3 io_uring: kill fasync
    15571bb5bb64 sched: Always inline is_percpu_thread()
    643c519c36dc perf/core: fix userpage->time_enabled of inactive events
    15f69a666166 scsi: qla2xxx: Fix excessive messages during device logout
    cc07ecaf9a9c scsi: virtio_scsi: Fix spelling mistake "Unsupport" -> "Unsupported"
    21c2e89e7caa scsi: ses: Fix unsigned comparison with less than zero
    18d1c5ea3798 drm/amdgpu: fix gart.bo pin_count leak
    048389b85643 net: sun: SUNVNET_COMMON should depend on INET
    e36444b36ff0 vboxfs: fix broken legacy mount signature checking
    5c85a825615a net: bgmac-platform: handle mac-address deferral
    af13e6176b25 mac80211: check return value of rhashtable_init
    ebb25ff84341 net: prevent user from passing illegal stab size
    998e080844c9 hwmon: (ltc2947) Properly handle errors when looking for the external clock
    1d0996b0d2b3 m68k: Handle arrivals of multiple signals correctly
    4d38fb418f71 pinctrl: qcom: sc7280: Add PM suspend callbacks
    9a8a181ed97e mac80211: Drop frames from invalid MAC address in ad-hoc mode
    a3ea231aa3f0 netfilter: nf_nat_masquerade: defer conntrack walk to work queue
    36f822c301c7 netfilter: nf_nat_masquerade: make async masq_inet6_event handling generic
    6c3e84af3944 KVM: arm64: nvhe: Fix missing FORCE for hyp-reloc.S build rule
    1fd0252cad6b ASoC: SOF: loader: release_firmware() on load failure to avoid batching
    2dd40af15d19 HID: wacom: Add new Intuos BT (CTL-4100WL/CTL-6100WL) device IDs
    95cb145dcfc8 netfilter: ip6_tables: zero-initialize fragment offset
    f117530a10e0 HID: apple: Fix logical maximum and usage maximum of Magic Keyboard JIS
    13e6abfa0b1e ALSA: usb-audio: Unify mixer resume and reset_resume procedure
    cb315326664d ALSA: oxfw: fix transmission method for Loud models based on OXFW971
    3c13d6e6fc56 ASoC: Intel: sof_sdw: tag SoundWire BEs as non-atomic
    7c2893a12fc0 ext4: correct the error path of ext4_write_inline_data_end()
    501f3491d99e ext4: check and update i_disksize properly

(From OE-Core rev: d7eebc956d4fa4353475fda198a656619ae387e4)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a1028453439db361d5f77fa220d77c49bc7a1f82)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Bruce Ashfield
e11d7bf851 linux-yocto/5.14: update to v5.14.12
Updating linux-yocto/5.14 to the latest korg -stable release that comprises
the following commits:

    325225e2f9fa Linux 5.14.12
    58f0e59efa34 dsa: tag_dsa: Fix mask for trunked packets
    5dc24f3e0841 x86/hpet: Use another crystalball to evaluate HPET usability
    4e9ec1c65da9 x86/entry: Clear X86_FEATURE_SMAP when CONFIG_X86_SMAP=n
    2ba3e3026f4f x86/entry: Correct reference to intended CONFIG_64_BIT
    0723d4f8b179 x86/fpu: Restore the masking out of reserved MXCSR bits
    44976b5cb6af x86/sev: Return an error on a returned non-zero SW_EXITINFO1[31:0]
    6665c1c5770f x86/Kconfig: Correct reference to MWINCHIP3D
    1d4092c10125 x86/platform/olpc: Correct ifdef symbol to intended CONFIG_OLPC_XO15_SCI
    8ba6e4551011 pseries/eeh: Fix the kdump kernel crash during eeh_pseries_init
    da0cb12f1983 powerpc/32s: Fix kuap_kernel_restore()
    d7a8e38999fb powerpc/64s: Fix unrecoverable MCE calling async handler from NMI
    22ee1f15a72e powerpc/traps: do not enable irqs in _exception
    c835b3d1d636 powerpc/64s: fix program check interrupt emergency stack path
    6b77166ffee7 powerpc/bpf ppc32: Fix BPF_SUB when imm == 0x80000000
    b8601d47e87a powerpc/bpf ppc32: Do not emit zero extend instruction for 64-bit BPF_END
    491976e521c1 powerpc/bpf ppc32: Fix JMP32_JSET_K
    9a3e91f94473 powerpc/bpf ppc32: Fix ALU32 BPF_ARSH operation
    096d4c941f0e powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
    2d7781883b3e powerpc/bpf: Fix BPF_MOD when imm == 1
    a7ce57ca9407 objtool: Make .altinstructions section entry size consistent
    039a68957f81 objtool: Remove reloc symbol type checks in get_alt_entry()
    1642f51ac0d4 scsi: iscsi: Fix iscsi_task use after free
    412754da783d RISC-V: Include clone3() on rv32
    cf63b49349cc i2c: mlxcpld: Modify register setting for 400KHz frequency
    3655a1934519 i2c: mlxcpld: Fix criteria for frequency setting
    d590a410e472 bpf, s390: Fix potential memory leak about jit_data
    f344ad3060c4 riscv/vdso: make arch_setup_additional_pages wait for mmap_sem for write killable
    b8b60c1139c7 riscv/vdso: Move vdso data page up front
    309fd6f1e7cf riscv/vdso: Refactor asm/vdso.h
    ff26f96fe0a2 RISC-V: Fix VDSO build for !MMU
    363128071346 riscv: explicitly use symbol offsets for VDSO
    26e7025ef25a i2c: mediatek: Add OFFSET_EXT_CONF setting back
    90f1077c9184 i2c: acpi: fix resource leak in reconfiguration device addition
    d40c4da7318f powerpc/iommu: Report the correct most efficient DMA mask for PCI devices
    272b85c2fdb2 net: prefer socket bound to interface when not in VRF
    8d2a1e7fb90c iavf: fix double unlock of crit_lock
    75099439209d i40e: Fix freeing of uninitialized misc IRQ vector
    d6db5bcd1817 i40e: fix endless loop under rtnl
    1fad5d7f75f7 gve: report 64bit tx_bytes counter from gve_handle_report_stats()
    bcf4f5e4d33d gve: fix gve_get_stats()
    f4479f3bc861 rtnetlink: fix if_nlmsg_stats_size() under estimation
    f5cfed82e0f3 gve: Properly handle errors in gve_assign_qpl
    2044137a268a gve: Avoid freeing NULL pointer
    3e8df2cada21 gve: Correct available tx qpl check
    bb23ade18ad7 net: stmmac: trigger PCS EEE to turn off on link down
    940ee87907f0 net: pcs: xpcs: fix incorrect steps on disable EEE
    88c3610045ca drm/nouveau/debugfs: fix file release memory leak
    0b4e9fc14973 drm/nouveau/kms/nv50-: fix file release memory leak
    548f2ff8ea5e drm/nouveau: avoid a use-after-free when BO init fails
    23514c752f9b video: fbdev: gbefb: Only instantiate device when built for IP32
    ae7a72cd325c drm/panel: abt-y030xx067a: yellow tint fix
    e6b90dcda29b drm/nouveau/fifo/ga102: initialise chid on return from channel creation
    8228b3b3b5a2 drm/sun4i: dw-hdmi: Fix HDMI PHY clock setup
    ad0fca5a28b3 bus: ti-sysc: Use CLKDM_NOAUTO for dra7 dcan1 for errata i893
    37e2d7fe11ae perf jevents: Free the sys_event_tables list after processing entries
    72e9a1bf9b72 drm/amdgpu: handle the case of pci_channel_io_frozen only in amdgpu_pci_resume
    7e5ce6029b62 drm/amdkfd: fix a potential ttm->sg memory leak
    50002489a20c ARM: defconfig: gemini: Restore framebuffer
    942bde2caec2 netlink: annotate data races around nlk->bound
    464be37f127b net: pcs: xpcs: fix incorrect CL37 AN sequence
    6594158f24e1 net: sfp: Fix typo in state machine debug string
    7a1c1af34104 net/sched: sch_taprio: properly cancel timer from taprio_destroy()
    ba07883c780f net: bridge: fix under estimation in br_get_linkxstats_size()
    df7983fdbc83 net: bridge: use nla_total_size_64bit() in br_get_linkxstats_size()
    47afb35c4f87 afs: Fix afs_launder_page() to set correct start file position
    2eb0a5440068 netfs: Fix READ/WRITE confusion when calling iov_iter_xarray()
    cd4dcab5d20c drm/i915/bdb: Fix version check
    4e7c20e5166e drm/i915/tc: Fix TypeC port init/resume time sanitization
    185e4eeac58e drm/i915/jsl: Add W/A 1409054076 for JSL
    8eb67e815d5e drm/i915/audio: Use BIOS provided value for RKL HDA link
    a23d12eeb1ad ARM: imx6: disable the GIC CPU interface before calling stby-poweroff sequence
    94d64d44e41a dt-bindings: drm/bridge: ti-sn65dsi86: Fix reg value
    b07494f81da2 arm64: dts: ls1028a: fix eSDHC2 node
    26a949f2335b arm64: dts: imx8mm-kontron-n801x-som: do not allow to switch off buck2
    4350e1f61930 arm64: dts: imx8: change the spi-nor tx
    672285df5e0a ARM: dts: imx: change the spi-nor tx
    baa59a36ff1b ptp_pch: Load module automatically if ID matches
    9b5198c1e041 powerpc/fsl/dts: Fix phy-connection-type for fm1mac3
    6d1e04d8f044 netfilter: nf_tables: honor NLM_F_CREATE and NLM_F_EXCL in event notification
    96117e85b83b MIPS: Revert "add support for buggy MT7621S core detection"
    8efe947ea1ea net: stmmac: dwmac-rk: Fix ethernet on rk3399 based devices
    f1325381177c net: mscc: ocelot: fix VCAP filters remaining active after being deleted
    fb58cd799174 net_sched: fix NULL deref in fifo_set_limit()
    9e8e7504e098 libbpf: Fix memory leak in strset
    064c2616234a phy: mdio: fix memory leak
    8b6cd17219c3 libbpf: Fix segfault in light skeleton for objects without BTF
    2ca78aa65bc1 net/mlx5e: Fix the presented RQ index in PTP stats
    c0b1de56a40e net/mlx5: Fix setting number of EQs of SFs
    5ef55400217f net/mlx5: Fix length of irq_index in chars
    f1c4eaf49d5d net/mlx5: Avoid generating event after PPS out in Real time mode
    4f3369d3e5e8 net/mlx5: Force round second at 1PPS out start time
    ea0b8ffff565 net/mlx5: E-Switch, Fix double allocation of acl flow counter
    d7954cedb9e6 net/mlx5e: Keep the value for maximum number of channels in-sync
    35460565138f net/mlx5e: IPSEC RX, enable checksum complete
    3a1ac1e368be bpf: Fix integer overflow in prealloc_elems_and_freelist()
    0385744b240a soc: ti: omap-prm: Fix external abort for am335x pruss
    f419febd396e bpf, arm: Fix register clobbering in div/mod implementation
    34362a65c248 netfilter: nf_tables: reverse order in rule replacement expansion
    0b1891aa588a netfilter: nf_tables: add position handle in event notification
    3ece5c4bf601 netfilter: conntrack: fix boot failure with nf_conntrack.enable_hooks=1
    9039a8596370 iwlwifi: pcie: add configuration of a Wi-Fi adapter on Dell XPS 15
    8979fa2c43b0 xtensa: call irqchip_init only when CONFIG_USE_OF is selected
    c4a9836c9dd6 xtensa: use CONFIG_USE_OF instead of CONFIG_OF
    5be9d1335749 arm64: dts: qcom: pm8150: use qcom,pm8998-pon binding
    1c186680c89f ath5k: fix building with LEDS=m
    436f61a89655 PCI: hv: Fix sleep while in non-sleep context when removing child devices from the bus
    11fc74ddd63a ARM: dts: imx6qdl-pico: Fix Ethernet support
    871b9129ca6d ARM: dts: imx: Fix USB host power regulator polarity on M53Menlo
    d5cbf524d90c ARM: dts: imx: Add missing pinctrl-names for panel on M53Menlo
    64a64a031fc1 soc: qcom: mdt_loader: Drop PT_LOAD check on hash segment
    432d8185e9ff iwlwifi: mvm: Fix possible NULL dereference
    306b7fe278ac ARM: at91: pm: do not panic if ram controllers are not enabled
    55f37cc6ee05 Revert "arm64: dts: qcom: sc7280: Fixup the cpufreq node"
    5ceb465692d6 ARM: dts: qcom: apq8064: Use 27MHz PXO clock as DSI PLL reference
    457673bfee0b soc: qcom: socinfo: Fixed argument passed to platform_set_data()
    54607728e944 bus: ti-sysc: Add break in switch statement in sysc_init_soc()
    f1c7aa87c423 riscv: Flush current cpu icache before other cpus
    b514b752b626 scsi: ufs: core: Fix task management completion
    4a0775d0c030 ARM: dts: qcom: apq8064: use compatible which contains chipid
    d62956ddb915 ARM: dts: imx6dl-yapp4: Fix lp5562 LED driver probe
    05d9d419220b ARM: dts: omap3430-sdp: Fix NAND device node
    35c6691812b7 xen/balloon: fix cancelled balloon action
    f574ab3192eb SUNRPC: fix sign error causing rpcsec_gss drops
    ace054d4e523 nfsd4: Handle the NFSv4 READDIR 'dircount' hint being zero
    9228f2a0d1bc nfsd: fix error handling of register_pernet_subsys() in init_nfsd()
    d9f9dfb9040c ovl: fix IOCB_DIRECT if underlying fs doesn't support direct IO
    71b8b36187af ovl: fix missing negative dentry check in ovl_rename()
    b0ee6190e856 fbdev: simplefb: fix Kconfig dependencies
    897e427ef37c mmc: sdhci-of-at91: replace while loop with read_poll_timeout
    aa7c4ce94835 mmc: sdhci-of-at91: wait for calibration done before proceed
    266fd4b85ce3 mmc: meson-gx: do not use memcpy_to/fromio for dram-access-quirk
    527d377da38f xen/privcmd: fix error handling in mmap-resource processing
    c2a35a408070 drm/i915: Extend the async flip VT-d w/a to skl/bxt
    6dafefe60cb2 drm/i915: Fix runtime pm handling in i915_gem_shrink
    92c92e554553 drm/amd/display: Fix DCN3 B0 DP Alt Mapping
    1a9c5c132686 drm/amd/display: Fix detection of 4 lane for DPALT
    4fd24bff9fac drm/amd/display: Limit display scaling to up to 4k for DCN 3.1
    c43e26907d91 drm/nouveau/ga102-: support ttm buffer moves via copy engine
    e4c1d18cb951 drm/nouveau/kms/tu102-: delay enabling cursor until after assign_windows
    4df3adab896f drm/amdgpu: During s0ix don't wait to signal GFXOFF
    ec36503dffdd drm/amd/display: USB4 bring up set correct address
    4b55ade094de drm/amd/display: Fix B0 USB-C DP Alt mode
    3048656f5abf usb: typec: tipd: Remove dependency on "connector" child fwnode
    f5155225108f usb: typec: tcpm: handle SRC_STARTUP state if cc changes
    108d39a6b5a7 usb: typec: tcpci: don't handle vSafe0V event if it's not enabled
    267d19e300c1 USB: cdc-acm: fix break reporting
    aff426d4b887 USB: cdc-acm: fix racy tty buffer accesses
    09c4c413bc56 usb: gadget: f_uac2: fixed EP-IN wMaxPacketSize
    66dd03b10e1c usb: chipidea: ci_hdrc_imx: Also search for 'phys' phandle
    9b70e9acfceb usb: cdc-wdm: Fix check for WWAN
    d92e0c42cfee Partially revert "usb: Kconfig: using select for USB_COMMON dependency"
    924356b31dcb Linux 5.14.11
    add46a06b8d3 Revert "ARM: imx6q: drop of_platform_default_populate() from init_machine"
    cfd436c4b683 Revert "brcmfmac: use ISO3166 country code and 0 rev as fallback"
    86524ac0ddac libata: Add ATA_HORKAGE_NO_NCQ_ON_ATI for Samsung 860 and 870 SSD.
    2cef02f53d59 perf/x86: Reset destroy callback on event init failure
    12058756a220 KVM: x86: nSVM: restore int_vector in svm_clear_vintr
    b232ba59feb9 kvm: x86: Add AMD PMU MSRs to msrs_to_save_all[]
    9c827ab0cb09 KVM: x86: reset pdptrs_from_userspace when exiting smm
    ce64d61801d9 KVM: do not shrink halt_poll_ns below grow_start
    11e4acd09e3f selftests: KVM: Align SMCCC call with the spec in steal_time
    96320e3316f8 kasan: always respect CONFIG_KASAN_STACK
    7d434c5f4687 tools/vm/page-types: remove dependency on opt_file for idle page tracking
    004b8f8a6912 block: don't call rq_qos_ops->done_bio if the bio isn't tracked
    648f59a06b0e io_uring: allow conditional reschedule for intensive iterators
    1b5b6666e235 x86/insn, tools/x86: Fix undefined behavior due to potential unaligned accesses
    d022e4c48e16 smb3: correct smb3 ACL security descriptor
    629c6e725d10 irqchip/gic: Work around broken Renesas integration
    ab0a257d1591 scsi: ses: Retry failed Send/Receive Diagnostic commands
    cd402c666fe7 thermal/drivers/tsens: Fix wrong check for tzd in irq handlers
    7efa50dd020c nvme-fc: avoid race between time out and tear down
    70f57c93f10b nvme-fc: update hardware queues before using them
    2e4a7695c8df swiotlb-xen: ensure to issue well-formed XENMEM_exchange requests
    3ad674aa1742 Xen/gntdev: don't ignore kernel unmapping error
    95342046ba4e selftests: kvm: fix get_run_delay() ignoring fscanf() return warn
    80b7cc21401b selftests: kvm: move get_run_delay() into lib/test_util
    b6d7e8c09c40 selftests:kvm: fix get_trans_hugepagesz() ignoring fscanf() return warn
    b664df7bb40a selftests:kvm: fix get_warnings_count() ignoring fscanf() return warn
    2085e5ad67f4 selftests: be sure to make khdr before other targets
    656998200410 habanalabs/gaudi: fix LBW RR configuration
    6874cdba4daa habanalabs: fail collective wait when not supported
    1c806d5a425b habanalabs/gaudi: use direct MSI in single mode
    337f00a0bc62 usb: dwc2: check return value after calling platform_get_resource()
    6b5af31c50ac usb: testusb: Fix for showing the connection speed
    6a48e3f46ef4 scsi: elx: efct: Do not hold lock while calling fc_vport_terminate()
    e95f62013a11 scsi: sd: Free scsi_disk device via put_device()
    ac7d732b24f4 drm/amdkfd: fix svm_migrate_fini warning
    4c5a564bf968 drm/amdkfd: handle svm migrate init error
    3c2830d0cb6f ext2: fix sleeping in atomic bugs on error
    a3b450333d64 platform/x86: gigabyte-wmi: add support for B550I Aorus Pro AX
    3702afcf0aac sparc64: fix pci_iounmap() when CONFIG_PCI is not set
    e4cff35be8ff xen-netback: correct success/error reporting for the SKB-with-fraglist case
    0cfda0cc59d4 net: mdio: introduce a shutdown method to mdio device drivers
    7a08b2e1e477 btrfs: fix mount failure due to past and transient device flush error
    31e401cb05ac btrfs: replace BUG_ON() in btrfs_csum_one_bio() with proper error handling
    20282e53d6bd nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN
    5c1e84b7ae04 platform/x86: touchscreen_dmi: Update info for the Chuwi Hi10 Plus (CWI527) tablet
    77e6b00985f6 platform/x86: touchscreen_dmi: Add info for the Chuwi HiBook (CWI514) tablet
    bf4597f45f31 afs: Add missing vnode validation checks
    20137432e181 spi: rockchip: handle zero length transfers without timing out
    b133f076639b Linux 5.14.10
    81971ea5ec5c HID: amd_sfh: Fix potential NULL pointer dereference - take 2
    fe6f7b77796e objtool: print out the symbol type when complaining about it
    a7d4cb29f556 drivers: net: mhi: fix error path in mhi_net_newlink
    14492ff96387 netfilter: nf_tables: Fix oversized kvmalloc() calls
    7ea6f5848281 netfilter: conntrack: serialize hash resizes and cleanups
    4664318f73e4 KVM: x86: Handle SRCU initialization failure during page track init
    38c84dfafed5 crypto: aesni - xts_crypt() return if walk.nbytes is 0
    2b704864c92d HID: usbhid: free raw_report buffers in usbhid_stop
    24f3fc95b56b mm: don't allow oversized kvmalloc() calls
    3213f5f8d4ad netfilter: ipset: Fix oversized kvmalloc() calls
    708107b80aa6 HID: betop: fix slab-out-of-bounds Write in betop_probe
    eae2fce438f1 usb: hso: remove the bailout parameter
    47d791dbe1ba NIOS2: setup.c: drop unused variable 'dram_start'
    a7931aa81760 net: udp: annotate data race around udp_sk(sk)->corkflag
    aa3a4f5913a9 HID: u2fzero: ignore incomplete packets without data
    a4f316af25ba ext4: flush s_error_work before journal destroy in ext4_fill_super
    2021f187321c ext4: fix potential infinite loop in ext4_dx_readdir()
    27e10c5d31ff ext4: add error checking to ext4_ext_replay_set_iblocks()
    9bef6f6e2172 ext4: fix reserved space counter leakage
    a5a403aed8a0 ext4: limit the number of blocks in one ADD_RANGE TLV
    68a5ca234225 ext4: fix loff_t overflow in ext4_max_bitmap_size()
    811178f296b1 ipack: ipoctal: fix module reference leak
    382ef7ff1854 ipack: ipoctal: fix missing allocation-failure check
    fcd28f229175 ipack: ipoctal: fix tty-registration error handling
    4953ef80af5f ipack: ipoctal: fix tty registration race
    0a9c36a2e06a ipack: ipoctal: fix stack information leak
    ec889a8be77b debugfs: debugfs_create_file_size(): use IS_ERR to check for error
    e554f26ea453 driver core: fw_devlink: Improve handling of cyclic dependencies
    133578ac70a2 elf: don't use MAP_FIXED_NOREPLACE for elf interpreter mappings
    617f0ea5dfc4 nvme: add command id quirk for apple controllers
    bad1cb95af71 kvm: fix objtool relocation warning
    77744fa757b1 hwmon: (pmbus/mp2975) Add missed POUT attribute for page 1 mp2975 controller
    ec9331ef103f hwmon: (occ) Fix P10 VRM temp sensors
    9ea06d55278e sched/fair: Null terminate buffer when updating tunable_scaling
    fce08b03923e sched/fair: Add ancestors of unthrottled undecayed cfs_rq
    d42683c2b196 perf/x86/intel: Update event constraints for ICX
    3aa381480fbe objtool: Teach get_alt_entry() about more relocation types
    ec716aac7fe4 af_unix: fix races in sk_peer_pid and sk_peer_cred accesses
    97f1c1783c1b net: stmmac: fix EEE init issue when paired with EEE capable PHYs
    dab4677bdbff net: sched: flower: protect fl_walk() with rcu
    e88c502ef7be net: phy: bcm7xxx: Fixed indirect MMD operations
    4cdec1041cd3 net: hns3: disable firmware compatible features when uninstall PF
    3937b9c2961e net: hns3: fix always enable rx vlan filter problem after selftest
    fd519ae5a816 net: hns3: reconstruct function hns3_self_test
    851c0b9913b8 net: hns3: fix show wrong state when add existing uc mac address
    18e609791fa6 net: hns3: fix mixed flag HCLGE_FLAG_MQPRIO_ENABLE and HCLGE_FLAG_DCB_ENABLE
    8bcaeeefccfb net: hns3: don't rollback when destroy mqprio fail
    8d4ad0ab2874 net: hns3: remove tc enable checking
    3dac38bdce79 net: hns3: do not allow call hns3_nic_net_open repeatedly
    2744341dd52e ixgbe: Fix NULL pointer dereference in ixgbe_xdp_setup
    81369dce6d85 scsi: csiostor: Add module softdep on cxgb4
    7a73120f8eaf Revert "block, bfq: honor already-setup queue merges"
    27b9ff88f1f6 ionic: fix gathering of debug stats
    477e7f62b358 net: ks8851: fix link error
    9d561381e48c bpf, x86: Fix bpf mapping of atomic fetch implementation
    0157eb81e339 selftests, bpf: test_lwt_ip_encap: Really disable rp_filter
    54d54d2e02c7 selftests, bpf: Fix makefile dependencies on libbpf
    173dbe4fdb22 libbpf: Fix segfault in static linker for objects without BTF
    b822ce7334d5 bpf: Exempt CAP_BPF from checks against bpf_jit_limit
    b96fc31338ca RDMA/hns: Add the check of the CQE size of the user space
    8ba300a48a3b RDMA/hns: Fix the size setting error when copying CQE in clean_cq()
    714bfabe5f29 RDMA/hfi1: Fix kernel pointer leak
    d1db35d832a8 e100: fix buffer overrun in e100_get_regs
    474443c9982b e100: fix length calculation in e100_get_regs_len
    ed3617b8aeb4 dsa: mv88e6xxx: Include tagger overhead when setting MTU for DSA and CPU ports
    2c3c98b40e1f dsa: mv88e6xxx: Fix MTU definition
    eabd1e182225 dsa: mv88e6xxx: 6161: Use chip wide MAX MTU
    3027d7ba264f drm/i915: Remove warning from the rps worker
    406b3c0f64ab drm/i915/request: fix early tracepoints
    60edf381ca21 smsc95xx: fix stalled rx after link change
    bac85b1d0745 net: ipv4: Fix rtnexthop len when RTA_FLOW is present
    3636e045de1f net: enetc: fix the incorrect clearing of IF_MODE bits
    d4a6139e651f hwmon: (tmp421) fix rounding for negative values
    8776ad745092 hwmon: (tmp421) report /PVLD condition as fault
    0fe76b4171e4 RDMA/hns: Work around broken constant propagation in gcc 8
    62adc41df3b5 mptcp: allow changing the 'backup' bit when no sockets are open
    385cf9ac00c2 mptcp: don't return sockets in foreign netns
    8180611c238e sctp: break out if skb_header_pointer returns NULL in sctp_rcv_ootb
    734652b0a231 net: mdiobus: Set FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD for mdiobus parents
    7f9cb654462d driver core: fw_devlink: Add support for FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD
    ed2adf69e298 mac80211-hwsim: fix late beacon hrtimer handling
    35367a5b63d9 mac80211: mesh: fix potentially unaligned access
    997ee230e4f5 mac80211: limit injected vht mcs/nss in ieee80211_parse_tx_radiotap
    764a80c53dee mac80211: Fix ieee80211_amsdu_aggregate frag_tail bug
    2e46f261b28c Revert "mac80211: do not use low data rates for data frames with no ack flag"
    5f66dd17451d netfilter: log: work around missing softdep backend module
    f65c73d3aabb netfilter: nf_tables: unlink table before deleting it
    ec0eb6794804 RDMA/irdma: Report correct WC error when there are MW bind errors
    c3044d872d6d RDMA/irdma: Report correct WC error when transport retry counter is exceeded
    63a5c2119924 RDMA/irdma: Validate number of CQ entries on create CQ
    7dce0dc364c4 RDMA/irdma: Skip CQP ring during a reset
    aa85fb7bde55 hwmon: (mlxreg-fan) Return non-zero value when fan current state is enforced from sysfs
    dbe853968d4d bpf, mips: Validate conditional branch offsets
    e56a5146ef8c RDMA/cma: Fix listener leak in rdma_cma_listen_on_all() failure
    2288eafe2c4a IB/cma: Do not send IGMP leaves for sendonly Multicast groups
    67b07e7b490f bpf: Handle return value of BPF_PROG_TYPE_STRUCT_OPS prog
    473c59ab5de5 ipvs: check that ip_vs_conn_tab_bits is between 8 and 20
    ce1cccb000bd drm/i915/gvt: fix the usage of ww lock in gvt scheduler.
    8bb4ef3807d5 interconnect: qcom: sdm660: Correct NOC_QOS_PRIORITY shift and mask
    f3856fe1a057 interconnect: qcom: sdm660: Fix id of slv_cnoc_mnoc_cfg
    5c488a28b436 drm/amdgpu: correct initial cp_hqd_quantum for gfx9
    73bb3f4e877c drm/amdgpu: check tiling flags when creating FB on GFX8-
    0d77b5d94301 drm/amdgpu: force exit gfxoff on sdma resume for rmb s0ix
    be6f8fb11a24 drm/amd/display: Fix Display Flicker on embedded panels
    f43a2abf5dd7 drm/amd/display: Pass PCI deviceid into DC
    81a22172ba35 drm/amd/display: initialize backlight_ramping_override to false
    25011c9ec8e7 nbd: use shifts rather than multiplies
    03d884671572 RDMA/cma: Ensure rdma_addr_cancel() happens before issuing more requests
    d9ba5565c7f8 RDMA/cma: Do not change route.addr.src_addr.ss_family
    698c8a0a029b media: ir_toy: prevent device from hanging during transmit
    4ed5f2656691 mmc: renesas_sdhi: fix regression with hard reset on old SDHIs
    dd2ee266dd58 KVM: VMX: Fix a TSX_CTRL_CPUID_CLEAR field mask issue
    2cebb9aed993 KVM: nVMX: Fix nested bus lock VM exit
    efd7866e114d KVM: SVM: fix missing sev_decommission in sev_receive_start
    540dd9506ae0 KVM: SEV: Allow some commands for mirror VM
    d6e7fd7ece71 KVM: SEV: Acquire vcpu mutex when updating VMSA
    c9343f03e522 KVM: SEV: Pin guest memory for write for RECEIVE_UPDATE_DATA
    0c1a1c505432 KVM: SEV: Update svm_vm_copy_asid_from for SEV-ES
    5d522f759211 KVM: nVMX: Filter out all unsupported controls when eVMCS was activated
    17e96fe4a8ec KVM: x86: Swap order of CPUID entry "index" vs. "significant flag" checks
    3e7144429936 KVM: x86: Clear KVM's cached guest CR3 at RESET/INIT
    4639ee36e064 KVM: x86: nSVM: don't copy virt_ext from vmcb12
    99a9e9b80f19 KVM: x86: Fix stack-out-of-bounds memory access from ioapic_write_indirect()
    99a016076ed5 ptp: Fix ptp_kvm_getcrosststamp issue for x86 ptp_kvm
    81bfd6268fd3 x86/kvmclock: Move this_cpu_pvti into kvmclock.h
    9a75f445a4a1 platform/x86/intel: hid: Add DMI switches allow list
    27d3eb5616ee mac80211: fix use-after-free in CCMP/GCMP RX
    38b789c914b1 scsi: ufs: Fix illegal offset in UPIU event trace
    de6c8af17f53 gpio: pca953x: do not ignore i2c errors
    16887ae4e3de hwmon: (w83791d) Fix NULL pointer dereference by removing unnecessary structure field
    24af1fe376e2 hwmon: (w83792d) Fix NULL pointer dereference by removing unnecessary structure field
    746011193f44 hwmon: (w83793) Fix NULL pointer dereference by removing unnecessary structure field
    7635f8a7fc8a hwmon: (tmp421) handle I2C errors
    343307d050c1 fs-verity: fix signed integer overflow with i_size near S64_MAX
    2a0d1a8ff21c ACPI: NFIT: Use fallback node id when numa info in NFIT table is incorrect
    062055d4f23e ALSA: hda/realtek: Quirks to enable speaker output for Lenovo Legion 7i 15IMHG05, Yoga 7i 14ITL5/15ITL5, and 13s Gen2 laptops.
    c949aaec0208 ALSA: firewire-motu: fix truncated bytes in message tracepoints
    12d508014972 ALSA: rawmidi: introduce SNDRV_RAWMIDI_IOCTL_USER_PVERSION
    3327293839d0 scsi: ufs: ufs-pci: Fix Intel LKF link stability
    e130f2ed1da9 cpufreq: schedutil: Destroy mutex before kobject_put() frees the memory
    920e3c77f130 drm/amdgpu: stop scheduler when calling hw_fini (v2)
    8ba968ae672b drm/amdgpu: avoid over-handle of fence driver fini in s3 test (v2)
    05c8a9dca354 drm/amdgpu: adjust fence driver enable sequence
    8a88b1529a39 scsi: qla2xxx: Changes to support kdump kernel for NVMe BFS
    8d62aec52a8c cpufreq: schedutil: Use kobject release() method to free sugov_tunables
    699d926585da tty: Fix out-of-bound vmalloc access in imageblit
    7be199764d46 watchdog/sb_watchdog: fix compilation problem due to COMPILE_TEST
    a55e7c3f7e4d perf iostat: Fix Segmentation fault from NULL 'struct perf_counts_values *'
    af0bbcbba0d5 perf iostat: Use system-wide mode if the target cpu_list is unspecified
    018e7ce13f2d perf test: Fix DWARF unwind for optimized builds.
    283e4bee701d HID: amd_sfh: Fix potential NULL pointer dereference
    a3d0bfc22a99 kasan: fix Kconfig check of CC_HAS_WORKING_NOSANITIZE_ADDRESS
    5a309b91dd57 NIOS2: fix kconfig unmet dependency warning for SERIAL_CORE_CONSOLE
    a688abc484b5 m68k: Update ->thread.esp0 before calling syscall_trace() in ret_from_signal
    e450c422aa23 crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()
    0bfe74174132 s390/qeth: fix deadlock during failing recovery
    0184084365c4 s390/qeth: Fix deadlock in remove_discipline
    946aa1b742df net/mlx4_en: Resolve bad operstate value
    262468353f59 pinctrl: qcom: spmi-gpio: correct parent irqspec translation
    b1ca0c6353d4 ASoC: SOF: imx: imx8m: Bar index is only valid for IRAM and SRAM types
    5f589b073843 ASoC: SOF: imx: imx8: Bar index is only valid for IRAM and SRAM types
    a6bb576ead07 ASoC: SOF: Fix DSP oops stack dump output contents
    69c9494d1450 scsi: elx: efct: Fix void-pointer-to-enum-cast warning for efc_nport_topology
    0a0d0ce37578 ASoC: mediatek: common: handle NULL case in suspend/resume function
    9b5de0165d67 ASoC: fsl_xcvr: register platform component before registering cpu dai
    4916efd4385c ASoC: fsl_spdif: register platform component before registering cpu dai
    63ff9da3572a ASoC: fsl_micfil: register platform component before registering cpu dai
    b04db30f71bb ASoC: fsl_esai: register platform component before registering cpu dai
    799b9ffd7f5a ASoC: fsl_sai: register platform component before registering cpu dai
    ef074ff5a776 media: s5p-jpeg: rename JPEG marker constants to prevent build warnings
    add13fd5e07e media: cedrus: Fix SUNXI tile size calculation
    00426cf7effb media: hantro: Fix check for single irq

(From OE-Core rev: f282f90a44db3213b974d574a285acda97a10c1c)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ddf50256fa94f240d62719d74e144e68a2302797)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Bruce Ashfield
ef4175b82f linux-yocto/5.10: update to v5.10.73
Updating linux-yocto/5.10 to the latest korg -stable release that comprises
the following commits:

    0268aa579b1f Linux 5.10.73
    825c00c2ee14 x86/hpet: Use another crystalball to evaluate HPET usability
    f2447f6587b8 x86/entry: Clear X86_FEATURE_SMAP when CONFIG_X86_SMAP=n
    6bfe1f6fc876 x86/entry: Correct reference to intended CONFIG_64_BIT
    5d637bc6f98a x86/sev: Return an error on a returned non-zero SW_EXITINFO1[31:0]
    df121cf55003 x86/Kconfig: Correct reference to MWINCHIP3D
    d7c36115fb81 x86/platform/olpc: Correct ifdef symbol to intended CONFIG_OLPC_XO15_SCI
    f73ca4961d51 pseries/eeh: Fix the kdump kernel crash during eeh_pseries_init
    411b38fe68ba powerpc/64s: fix program check interrupt emergency stack path
    18a2a2cafcf9 powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
    a4037dded56b RISC-V: Include clone3() on rv32
    29fdb11ca88d bpf, s390: Fix potential memory leak about jit_data
    2c152d9da8fe riscv/vdso: make arch_setup_additional_pages wait for mmap_sem for write killable
    de834e12b96d i2c: mediatek: Add OFFSET_EXT_CONF setting back
    f86de018fd7a i2c: acpi: fix resource leak in reconfiguration device addition
    87990a60b45f powerpc/iommu: Report the correct most efficient DMA mask for PCI devices
    985cca1ad11e net: prefer socket bound to interface when not in VRF
    97aeed72af4f i40e: Fix freeing of uninitialized misc IRQ vector
    2dc768a98c9b i40e: fix endless loop under rtnl
    d3a07ca78ace gve: report 64bit tx_bytes counter from gve_handle_report_stats()
    35f6ddd934e6 gve: fix gve_get_stats()
    9a043022522e rtnetlink: fix if_nlmsg_stats_size() under estimation
    72c2a68f1d83 gve: Avoid freeing NULL pointer
    5d903a694b08 gve: Correct available tx qpl check
    f69556a42043 drm/nouveau/debugfs: fix file release memory leak
    65fff0a8efcd drm/nouveau/kms/nv50-: fix file release memory leak
    f86e19d918a8 drm/nouveau: avoid a use-after-free when BO init fails
    008224cdc126 video: fbdev: gbefb: Only instantiate device when built for IP32
    d2ccbaaa6615 drm/sun4i: dw-hdmi: Fix HDMI PHY clock setup
    18d2568cc7ff bus: ti-sysc: Use CLKDM_NOAUTO for dra7 dcan1 for errata i893
    40a84fcae2bf perf jevents: Tidy error handling
    628b31d96711 netlink: annotate data races around nlk->bound
    144715fbab1b net: sfp: Fix typo in state machine debug string
    3ec73ffeef54 net/sched: sch_taprio: properly cancel timer from taprio_destroy()
    60955b65bd6a net: bridge: fix under estimation in br_get_linkxstats_size()
    c480d15190eb net: bridge: use nla_total_size_64bit() in br_get_linkxstats_size()
    cb8880680bdf ARM: imx6: disable the GIC CPU interface before calling stby-poweroff sequence
    2b0035d1058a dt-bindings: drm/bridge: ti-sn65dsi86: Fix reg value
    10afd1597263 arm64: dts: ls1028a: add missing CAN nodes
    95ba03fb4cb1 ptp_pch: Load module automatically if ID matches
    442ea65d0ccb powerpc/fsl/dts: Fix phy-connection-type for fm1mac3
    acff2d182c07 net_sched: fix NULL deref in fifo_set_limit()
    0d2dd40a7be6 phy: mdio: fix memory leak
    6e6f79e39830 net/mlx5: E-Switch, Fix double allocation of acl flow counter
    d70cb6c77ad9 net/mlx5e: IPSEC RX, enable checksum complete
    064faa8e8a9b bpf: Fix integer overflow in prealloc_elems_and_freelist()
    d5f4b27c3cfc soc: ti: omap-prm: Fix external abort for am335x pruss
    1d8f4447e8c4 bpf, arm: Fix register clobbering in div/mod implementation
    29a19eaeb29d iwlwifi: pcie: add configuration of a Wi-Fi adapter on Dell XPS 15
    6b0132f73094 xtensa: call irqchip_init only when CONFIG_USE_OF is selected
    3d288ed98314 xtensa: use CONFIG_USE_OF instead of CONFIG_OF
    997bec509a83 arm64: dts: qcom: pm8150: use qcom,pm8998-pon binding
    fbca14abc111 ath5k: fix building with LEDS=m
    8aef3824e946 PCI: hv: Fix sleep while in non-sleep context when removing child devices from the bus
    d9b838ae390e ARM: dts: imx6qdl-pico: Fix Ethernet support
    9e99ad4194a5 ARM: dts: imx: Fix USB host power regulator polarity on M53Menlo
    2ba34cf0c16c ARM: dts: imx: Add missing pinctrl-names for panel on M53Menlo
    8f977e97b2b9 soc: qcom: mdt_loader: Drop PT_LOAD check on hash segment
    14f52004bda5 ARM: at91: pm: do not panic if ram controllers are not enabled
    d89a313a5739 ARM: dts: qcom: apq8064: Use 27MHz PXO clock as DSI PLL reference
    25ac88e601eb soc: qcom: socinfo: Fixed argument passed to platform_set_data()
    ab8073794be3 bus: ti-sysc: Add break in switch statement in sysc_init_soc()
    427faa29e06f riscv: Flush current cpu icache before other cpus
    05287407dedf ARM: dts: qcom: apq8064: use compatible which contains chipid
    ac06fe40e889 ARM: dts: imx6dl-yapp4: Fix lp5562 LED driver probe
    71d3ce62ac88 ARM: dts: omap3430-sdp: Fix NAND device node
    f9a855d1bcb2 xen/balloon: fix cancelled balloon action
    9aac782ab0ab SUNRPC: fix sign error causing rpcsec_gss drops
    8f174a208c4c nfsd4: Handle the NFSv4 READDIR 'dircount' hint being zero
    12d4b179022a nfsd: fix error handling of register_pernet_subsys() in init_nfsd()
    1bc2f315a215 ovl: fix IOCB_DIRECT if underlying fs doesn't support direct IO
    9763ffd4da21 ovl: fix missing negative dentry check in ovl_rename()
    1500f0c83670 mmc: sdhci-of-at91: replace while loop with read_poll_timeout
    3a0feae5f642 mmc: sdhci-of-at91: wait for calibration done before proceed
    e5cb3680b958 mmc: meson-gx: do not use memcpy_to/fromio for dram-access-quirk
    13d17cc717d5 xen/privcmd: fix error handling in mmap-resource processing
    de1e8bd36ab4 drm/nouveau/kms/tu102-: delay enabling cursor until after assign_windows
    1d4e9f27d20d usb: typec: tcpm: handle SRC_STARTUP state if cc changes
    feb3fe702a58 USB: cdc-acm: fix break reporting
    fc8b3e838bdf USB: cdc-acm: fix racy tty buffer accesses
    b3265b88e83b usb: chipidea: ci_hdrc_imx: Also search for 'phys' phandle
    16d728110bd7 Partially revert "usb: Kconfig: using select for USB_COMMON dependency"
    5aa003b38148 Linux 5.10.72
    387aecdab7fa libata: Add ATA_HORKAGE_NO_NCQ_ON_ATI for Samsung 860 and 870 SSD.
    02bf504bc32b perf/x86: Reset destroy callback on event init failure
    b56475c29bd8 KVM: x86: nSVM: restore int_vector in svm_clear_vintr
    ae34f26d4a84 kvm: x86: Add AMD PMU MSRs to msrs_to_save_all[]
    6d0ff9205999 KVM: do not shrink halt_poll_ns below grow_start
    b8add3f47ae7 selftests: KVM: Align SMCCC call with the spec in steal_time
    352b02562a3e tools/vm/page-types: remove dependency on opt_file for idle page tracking
    84778fd66d3d smb3: correct smb3 ACL security descriptor
    a7be240d1703 irqchip/gic: Work around broken Renesas integration
    8724a2a0e6d9 scsi: ses: Retry failed Send/Receive Diagnostic commands
    2e28f7dd3743 thermal/drivers/tsens: Fix wrong check for tzd in irq handlers
    7a670cfb0f4c nvme-fc: avoid race between time out and tear down
    c251d023ed22 nvme-fc: update hardware queues before using them
    c4506403e1f3 selftests:kvm: fix get_warnings_count() ignoring fscanf() return warn
    bcc4b4de63a4 selftests: be sure to make khdr before other targets
    6a4aaf1d84f7 habanalabs/gaudi: fix LBW RR configuration
    2754fa3b73df usb: dwc2: check return value after calling platform_get_resource()
    ed6574d48469 usb: testusb: Fix for showing the connection speed
    60df9f55562a scsi: sd: Free scsi_disk device via put_device()
    76c7063c7405 ext2: fix sleeping in atomic bugs on error
    b114f2d18e0f sparc64: fix pci_iounmap() when CONFIG_PCI is not set
    fdfb3bc87381 xen-netback: correct success/error reporting for the SKB-with-fraglist case
    a41938d07201 net: mdio: introduce a shutdown method to mdio device drivers
    63c89930d4b5 btrfs: fix mount failure due to past and transient device flush error
    50628b06e604 btrfs: replace BUG_ON() in btrfs_csum_one_bio() with proper error handling
    83050cc23909 nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN
    f986cf270284 platform/x86: touchscreen_dmi: Update info for the Chuwi Hi10 Plus (CWI527) tablet
    e5611503249f platform/x86: touchscreen_dmi: Add info for the Chuwi HiBook (CWI514) tablet
    2ababcd8c2ab spi: rockchip: handle zero length transfers without timing out
    5cd40b137cba Linux 5.10.71
    96f439a7eda6 netfilter: nf_tables: Fix oversized kvmalloc() calls
    e2d192301a0d netfilter: conntrack: serialize hash resizes and cleanups
    deb294941767 KVM: x86: Handle SRCU initialization failure during page track init
    f7ac4d24e161 HID: usbhid: free raw_report buffers in usbhid_stop
    57a269a1b12a mm: don't allow oversized kvmalloc() calls
    da5b8b9319f0 netfilter: ipset: Fix oversized kvmalloc() calls
    dedfc35a2de2 HID: betop: fix slab-out-of-bounds Write in betop_probe
    17ccc64e4fa5 crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()
    28f0fdbac0f5 usb: hso: remove the bailout parameter
    4ad4852b9adf ASoC: dapm: use component prefix when checking widget names
    5c3a90b6ff75 net: udp: annotate data race around udp_sk(sk)->corkflag
    a7f4c633ae12 HID: u2fzero: ignore incomplete packets without data
    3770e21f60fc ext4: fix potential infinite loop in ext4_dx_readdir()
    a63474dbf692 ext4: add error checking to ext4_ext_replay_set_iblocks()
    9ccf35492b08 ext4: fix reserved space counter leakage
    dc0942168ab3 ext4: limit the number of blocks in one ADD_RANGE TLV
    d11502fa2691 ext4: fix loff_t overflow in ext4_max_bitmap_size()
    7cea84867847 ipack: ipoctal: fix module reference leak
    843efca98e6a ipack: ipoctal: fix missing allocation-failure check
    67d1df661088 ipack: ipoctal: fix tty-registration error handling
    f46e5db92fa2 ipack: ipoctal: fix tty registration race
    5f6a309a6996 ipack: ipoctal: fix stack information leak
    3bef1b7242e0 debugfs: debugfs_create_file_size(): use IS_ERR to check for error
    15fd3954bca7 elf: don't use MAP_FIXED_NOREPLACE for elf interpreter mappings
    011b4de950d8 nvme: add command id quirk for apple controllers
    44c600a57d57 hwmon: (pmbus/mp2975) Add missed POUT attribute for page 1 mp2975 controller
    7fc5f60a01bb perf/x86/intel: Update event constraints for ICX
    3db53827a0e9 af_unix: fix races in sk_peer_pid and sk_peer_cred accesses
    d0d520c19e7e net: sched: flower: protect fl_walk() with rcu
    e63f6d8fe74a net: phy: bcm7xxx: Fixed indirect MMD operations
    071febc37e06 net: hns3: fix always enable rx vlan filter problem after selftest
    85e4f5d28d25 net: hns3: reconstruct function hns3_self_test
    8e89876c84b2 net: hns3: fix prototype warning
    d4a14faf7919 net: hns3: fix show wrong state when add existing uc mac address
    64dae9551f8a net: hns3: fix mixed flag HCLGE_FLAG_MQPRIO_ENABLE and HCLGE_FLAG_DCB_ENABLE
    8d3d27664ef4 net: hns3: keep MAC pause mode when multiple TCs are enabled
    f8ba689cb695 net: hns3: do not allow call hns3_nic_net_open repeatedly
    20f6c4a31a52 ixgbe: Fix NULL pointer dereference in ixgbe_xdp_setup
    16138cf938dc scsi: csiostor: Add module softdep on cxgb4
    0306a2c7df7e Revert "block, bfq: honor already-setup queue merges"
    1f2ca30fbde6 net: ks8851: fix link error
    f1dd6e10f077 selftests, bpf: test_lwt_ip_encap: Really disable rp_filter
    4967ae9ab44b selftests, bpf: Fix makefile dependencies on libbpf
    59efda5073ab bpf: Exempt CAP_BPF from checks against bpf_jit_limit
    f908072391a6 RDMA/hns: Fix inaccurate prints
    7e3eda32b881 e100: fix buffer overrun in e100_get_regs
    f2edf80cdd03 e100: fix length calculation in e100_get_regs_len
    c20a0ad7b6a0 dsa: mv88e6xxx: Include tagger overhead when setting MTU for DSA and CPU ports
    7b771b12229e dsa: mv88e6xxx: Fix MTU definition
    ee4d0495a65e dsa: mv88e6xxx: 6161: Use chip wide MAX MTU
    d35d95e8b9da drm/i915/request: fix early tracepoints
    8321738c6e5a smsc95xx: fix stalled rx after link change
    8de12ad9162c net: ipv4: Fix rtnexthop len when RTA_FLOW is present
    b22c5e2c8e03 net: enetc: fix the incorrect clearing of IF_MODE bits
    5ee40530b0a6 hwmon: (tmp421) fix rounding for negative values
    89d96f147d82 hwmon: (tmp421) report /PVLD condition as fault
    560271d09f78 mptcp: don't return sockets in foreign netns
    9c6591ae8e63 sctp: break out if skb_header_pointer returns NULL in sctp_rcv_ootb
    2c204cf594df mac80211-hwsim: fix late beacon hrtimer handling
    8576e72ac5d6 mac80211: mesh: fix potentially unaligned access
    1282bb00835f mac80211: limit injected vht mcs/nss in ieee80211_parse_tx_radiotap
    3748871e1215 mac80211: Fix ieee80211_amsdu_aggregate frag_tail bug
    76bbb482d33b hwmon: (mlxreg-fan) Return non-zero value when fan current state is enforced from sysfs
    c61736a994fe bpf, mips: Validate conditional branch offsets
    3f4e68902d2e RDMA/cma: Fix listener leak in rdma_cma_listen_on_all() failure
    62ba3c50104b IB/cma: Do not send IGMP leaves for sendonly Multicast groups
    d93f65586c59 bpf: Handle return value of BPF_PROG_TYPE_STRUCT_OPS prog
    12cbdaeeb5d4 ipvs: check that ip_vs_conn_tab_bits is between 8 and 20
    9f382e1edf90 drm/amdgpu: correct initial cp_hqd_quantum for gfx9
    c331fad63b6d drm/amd/display: Pass PCI deviceid into DC
    0a16c9751e0f RDMA/cma: Do not change route.addr.src_addr.ss_family
    31a13f039e15 media: ir_toy: prevent device from hanging during transmit
    249e5e5a501e KVM: rseq: Update rseq when processing NOTIFY_RESUME on xfer to KVM guest
    3778511dfc59 KVM: nVMX: Filter out all unsupported controls when eVMCS was activated
    4ed671e6bc62 KVM: x86: nSVM: don't copy virt_ext from vmcb12
    bebabb76ad9a KVM: x86: Fix stack-out-of-bounds memory access from ioapic_write_indirect()
    782122ae7db0 x86/kvmclock: Move this_cpu_pvti into kvmclock.h
    57de2dcb1874 mac80211: fix use-after-free in CCMP/GCMP RX
    201ba843fef5 scsi: ufs: Fix illegal offset in UPIU event trace
    bd4e446a6947 gpio: pca953x: do not ignore i2c errors
    516d90550390 hwmon: (w83791d) Fix NULL pointer dereference by removing unnecessary structure field
    1499bb2c3a87 hwmon: (w83792d) Fix NULL pointer dereference by removing unnecessary structure field
    7c4fd5de39f2 hwmon: (w83793) Fix NULL pointer dereference by removing unnecessary structure field
    196dabd96bbf hwmon: (tmp421) handle I2C errors
    23a6dfa10f03 fs-verity: fix signed integer overflow with i_size near S64_MAX
    d1d0016e4a7d ACPI: NFIT: Use fallback node id when numa info in NFIT table is incorrect
    e9edc7bc611a ALSA: hda/realtek: Quirks to enable speaker output for Lenovo Legion 7i 15IMHG05, Yoga 7i 14ITL5/15ITL5, and 13s Gen2 laptops.
    23115ca7d227 usb: cdns3: fix race condition before setting doorbell
    3945c481360c cpufreq: schedutil: Destroy mutex before kobject_put() frees the memory
    2193cf76f43a scsi: qla2xxx: Changes to support kdump kernel for NVMe BFS
    a7d4fc84404d cpufreq: schedutil: Use kobject release() method to free sugov_tunables
    d570c48dd37d tty: Fix out-of-bound vmalloc access in imageblit

(From OE-Core rev: 51ec225dcef75eb3e75e3bd3d143c2a6bb8e83ce)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c9697cc081208a91d21b0c41219dc1b30d772f13)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Richard Purdie
49e0ce0e29 staging: Fix autoconf-native rebuild failure
When rebuilds are triggered, autoconf-native can fail with:

| DEBUG: Executing shell function update_gnu_config
| install: cannot stat '[BUILDPATH]tmp/work/x86_64-linux/autoconf-native/2.71-r0/recipe-sysroot-native/usr/share/gnu-config/config.guess': No such file or directory

which is due to update_gnu_config running before extend_recipe_sysroot.
This only happens rarely: usually the prepare_recipe_sysroot function
has already set things up, and only when a task hash is invalidated does
the rebuild start from do_configure alone.

Fix the code to prepend this function instead of appending it, which
resolves the ordering issue.
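
A minimal sketch of the idea in datastore terms (illustrative, not the
verbatim class code; appendVarFlag/prependVarFlag are bitbake's calls
for editing task flags such as prefuncs):

    # Before: appended, so extend_recipe_sysroot ran after other
    # prefuncs such as update_gnu_config:
    #   d.appendVarFlag(taskname, 'prefuncs', ' extend_recipe_sysroot')
    # After: prepended, so the sysroot is populated first:
    d.prependVarFlag(taskname, 'prefuncs', 'extend_recipe_sysroot ')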

(From OE-Core rev: f79fa476c0d0d57ab5ce59728fdb9fff4cd54df1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b9535f513366536b13d0522058f517d2e04451b5)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Joshua Watt
5254f0af57 classes/populate_sdk_base: Add setscene tasks
do_populate_sdk was added to SSTATETASKS, but had no _setscene task
created to allow it to actually run from sstate. Add it so that SDKs can
be restored from sstate.

Note that, like do_image_complete, do_populate_sdk is marked with
SSTATE_SKIP_CREATION by default, so sstate is not used for it; adding
this task allows it to work if the user overrides that default.
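
A sketch of the addition, following the pattern the other sstate-backed
tasks use (sstate_setscene is the shared helper from sstate.bbclass):

    python do_populate_sdk_setscene () {
        sstate_setscene(d)
    }
    addtask do_populate_sdk_setscene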

(From OE-Core rev: 292cd79bfb9a9e62f1cb4afaef7d8c7f2c4aac98)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1f204592903a2fd9375b0f3c9c52e7dde0467460)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:41:00 +00:00
Ross Burton
d35822d09a strace: show test suite log on failure
If the tests fail, dump the log so we can see the failures.

(From OE-Core rev: b5e799b94d918ad908eab5a0daf6a0ee460d7581)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3154a65039831b1e041217707fdd6ca042f588fb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:40:59 +00:00
Alexander Kanavin
47e8cde01f waffle: convert to git, website is down
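In SRC_URI terms the change is roughly as follows (a sketch; paths are
illustrative, and the upstream repository is hosted on
gitlab.freedesktop.org):

    # Before: release tarball from the now-unreachable website
    SRC_URI = "http://waffle-gl.org/files/release/waffle-${PV}/waffle-${PV}.tar.xz"
    # After: fetch the same release from git
    SRC_URI = "git://gitlab.freedesktop.org/mesa/waffle.git;protocol=https;branch=master"
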
(From OE-Core rev: 767a12b4e2cd55faf91fa1b918b73e7562ec5bc5)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 28391c20044058e05a1bfdacc31a3e876828fb72)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:40:59 +00:00
Tim Orling
d74def94da python3-setuptools: _distutils/sysconfig fix
Add a patch to append STAGING_LIBDIR's python-sysconfigdata to sys.path
so that packages which set SETUPTOOLS_USE_DISTUTILS='local'
cross-compile properly with python3-setuptools-native.

Fixes:
ModuleNotFoundError: No module named '_sysconfigdata'

References:
https://setuptools.pypa.io/en/latest/deprecated/distutils-legacy.html#porting-from-distutils
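
The idea of the patch, roughly (a sketch, not the verbatim patch; the
STAGING_LIBDIR path is supplied by the build environment):

    import os
    import sys
    # Let the target's _sysconfigdata module be found when cross-compiling
    sys.path.append(os.path.join(os.environ['STAGING_LIBDIR'],
                                 'python-sysconfigdata'))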

(From OE-Core rev: 2f9a362bfebc83ea6459b5294a6fab3c77ea6cb2)

Signed-off-by: Tim Orling <timothy.t.orling@intel.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f6fb99c53f779966fc902a629d0a8bbd9f84c6be)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:40:59 +00:00
Richard Purdie
907ca04187 bitbake: Revert "parse/ast: Show errors for append/prepend/remove operators combined with +=/.="
This reverts commit ae2b34285f8b3a1a3067c5e9b5d29e32e68c75f1.

Accidentally applied to the wrong branch.

(Bitbake rev: 1ac73638c1504cf2aa7f13257396aad617f25e8f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:40:59 +00:00
Richard Purdie
e0218edf84 bitbake: parse/ast: Show errors for append/prepend/remove operators combined with +=/.=
Operations like XXX:append += "YYY" are almost always wrong and this
is a common mistake made in the metadata. Show warnings for these usages
with a view to making it a fatal error eventually.
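
For example (FOO is an illustrative variable name):

    # Warned about: an override-style append combined with +=
    FOO:append += "bar"

    # Almost always one of these was intended:
    FOO:append = " bar"
    FOO += "bar"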

(Bitbake rev: ae2b34285f8b3a1a3067c5e9b5d29e32e68c75f1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-08 23:40:59 +00:00
Chen Qi
fb2300c144 bitbake: fetch2: fix downloadfilename issue with premirror
The following commit to fix [Yocto #13039] caused a regression in
the behavior of PREMIRRORS.

  "bitbake: fetch2: fix premirror URI when downloadfilename defined"

Take meta-openembedded/meta-networking/recipes-protocols/freediameter/freediameter_1.4.0.bb
as an example.
SRC_URI = "\
    http://www.freediameter.net/hg/${fd_pkgname}/archive/${PV}.tar.gz;downloadfilename=${fd_pkgname}-${PV}.tar.gz \
    ...
"
With the above commit, it now tries to fetch 1.4.0.tar.gz instead of
freeDiameter-1.4.0.tar.gz. This makes https://downloads.yoctoproject.org/mirror/sources
not work for freediameter, as it holds freeDiameter-1.4.0.tar.gz.

The commit above tried to avoid fetching from an invalid URL such as:
https://<some_mirror>/1.4.0.tar.gz/freeDiameter-1.4.0.tar.gz.
Its solution was to make the basename 1.4.0.tar.gz, which caused the
regression.

This patch fixes that regression. For Yocto #13039, it now tries
to fetch from the URL: https://<some_mirror>/freeDiameter-1.4.0.tar.gz.
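
With a premirror configured along the usual lines, e.g. (illustrative
pattern):

    PREMIRRORS:prepend = "http://.*/.* https://downloads.yoctoproject.org/mirror/sources/ \n"

the fetcher now looks for freeDiameter-1.4.0.tar.gz on the mirror,
matching the file the mirror actually holds.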

(Bitbake rev: 78949cf3fd31d8a408e93af7e27bcf26ae7942f4)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 96c30007dc0b32eee2b15771daec7948bc9bfd97)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-03 12:30:51 +00:00
Chen Qi
1a344c3242 bitbake: tests/fetch.py: add test case to ensure downloadfilename is used for premirror
Add a test case test_fetch_premirror_use_downloadfilename_to_fetch to ensure
that 'downloadfilename' is used when fetching from premirror.

Although the two previous test cases,
test_fetch_premirror_specify_downloadfilename_regex_uri and
test_fetch_premirror_specify_downloadfilename_specific_uri, already
implicitly verify this, we still need an explicit case to guard
against regression.

(Bitbake rev: 057cbba6b7ade134e4fa3584b9e896be025a6f46)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 20aabc3d53f69949810ecf02295725db947ffef8)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-03 12:30:51 +00:00
Chen Qi
57e91d6136 bitbake: tests/fetch.py: fix premirror test cases
When downloadfilename is specified, it is used to fetch from premirror.
So fix the test cases accordingly.

(Bitbake rev: af573273e4a5b73550af9639da18906f13bfa1a9)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3b4d2e3b5024324058360a2a28f33c34114218d0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-03 12:30:51 +00:00
Richard Purdie
13a4588253 bitbake: fetch/git: Handle github dropping git:// support
github is dropping support for git protocol in Git urls. Add code to remap
this to https in a way that could be used in older bitbake versions.

(Bitbake rev: f19eefdaa5b43460f00d79d002f96112a6aa3c9a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-11-03 11:31:21 +00:00
Richard Purdie
43cfa130d9 bootchart2: Don't compile python modules
"make install" may attempt to compile the python modules but it uses the host python
and host paths which means the binaries are not reproducbile. Make things consistent.
If anyone needs compiling, it will beed to be fixed to be cross compile compatible.

(From OE-Core rev: 6ca6c9c12c93c6df7b18f49ebdbfb69433ff5158)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1189f95e05c80286e009e1ab46a603ee5b7ca239)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Jose Quaresma
7ff34cf041 sstate: fix touching files inside pseudo
Running the 'id' command inside the sstate_create_package function shows
that this function runs inside pseudo:

 uid=0(root) gid=0(root) groups=0(root)

The check before touching files, [ ! -w ${SSTATE_PKG} ], is therefore
ineffective: under pseudo the '-w' write test always succeeds, yet the
touch can still fail when the real user doesn't have permission or the
filesystem is read-only.

As the documentation says, the file test operator "-w" checks whether
the file has write permission for the user running the test.

We can avoid this test by running the touch unconditionally and masking
any errors it returns.
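
A sketch of the change in shell terms (SSTATE_PKG as used by
sstate.bbclass; the exact lines in the class may differ):

    # Before: under pseudo (root) the -w test always succeeds, so the
    # guard never skips the touch
    [ ! -w ${SSTATE_PKG} ] || touch ${SSTATE_PKG}
    # After: touch unconditionally and mask any failure
    touch ${SSTATE_PKG} 2>/dev/null || true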

(From OE-Core rev: 1092bb67737eff63c24c26c9f807bec5e6adffc9)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f6e7445c94443544e92fda97a017ce93393c5f84)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Pablo Saavedra Rodiño
41320cf8c3 mesa: upgrade 21.2.1 -> 21.2.4
Contains 'Make YUV formats we're going to emulate external-only' [1],
which was applied in 21.2.4. This fixes red label issues on video for
VC4, Freedreno and others.

Deletes meta/recipes-graphics/mesa/files/without-neon.patch [2], which
has been in Mesa since 21.2.

Release notes:

* 21.2.2: https://docs.mesa3d.org/relnotes/21.2.2.html
* 21.2.3: https://docs.mesa3d.org/relnotes/21.2.3.html
* 21.2.4: https://docs.mesa3d.org/relnotes/21.2.4.html

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13038
[2] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/12569

(From OE-Core rev: 5ac1121e4c3f559562037abf8ab736f4772173dd)

Signed-off-by: Pablo Saavedra <psaavedra@igalia.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 51fccaa16a3cb78ace077ba593b6cdde5e085528)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Kiran Surendran
e6ef65341e ffmpeg: fix CVE-2021-38114
backport from upstream

(From OE-Core rev: b9a3ca0f4f70ebdb58e59e94e917242b7e9d2111)

Signed-off-by: Kiran Surendran <kiran.surendran@windriver.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fe9cdf74f7ef3637ed7c600182f8a0ba40510d2a)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Yureka
91527514f3 systemd: add missing include for musl
Fixes "error: ‘FTW_ACTIONRETVAL’ undeclared (first use in this
function)" in src/shared/mount-setup.c.

(From OE-Core rev: 1c4c9f68f13d40bfea489fe27556b85e59255da8)

Signed-off-by: Yureka <yuka@yuka.dev>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7707d08bb10db5eb782a2476be58ebe4b8bba154)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Alexander Kanavin
4f97823c12 wireless-regdb: upgrade 2021.07.14 -> 2021.08.28
(From OE-Core rev: 15f5ad7b9844a80fecb6a35013e8645b6b68064f)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 00c590f50d6894089ff7ce8ad6e263431d9cc550)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Alexander Kanavin
2610a43530 linux-firmware: upgrade 20210818 -> 20210919
License-Update: additional files
(From OE-Core rev: 34b44c7c52b5e207a0affdff8c58fd912649d9ce)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8dac57dfed45a0d8a049473f2efc1711b56273a4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Alexander Kanavin
1eb0c9e32e ovmf: update 202105 -> 202108
(From OE-Core rev: 07644ddc782547fd790c45ce9820efe0fc87f871)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9e5d15aba7515952614f69e06d3d9b9316a77204)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Alexander Kanavin
1fb6bc3cbd ca-certificates: update 20210119 -> 20211016
(From OE-Core rev: 686db3483e7db36e9854862518c64ca4c6932442)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c479b8a810d966d7267af1b4dac38a46f55fc547)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Ross Burton
cfc03f903a testimage: fix unclosed testdata file
(From OE-Core rev: 950bafd0ce15309167336d30e0ced6f184284c81)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0c192a97e3e1c015a48667d6903cc07a8b2620e4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Khem Raj
792d4036e2 mesa: Enable svga for x86 only
Enable svga only on x86/x86_64, since some architectures, e.g. riscv64, do not
support it.
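
A sketch of how such a restriction can be expressed in the recipe using this
release's override syntax (the variable and override names below illustrate
the approach; they are not a quote of the exact change):

    # Only x86 targets get the svga (VMware) Gallium driver
    GALLIUMDRIVERS:append:x86 = ",svga"
    GALLIUMDRIVERS:append:x86-64 = ",svga"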

(From OE-Core rev: c45f98e424dffee97f6e43d9b04ea59d6d9b68b1)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d7d380a45ab0efedcba33baaae37589da4d25a2b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Richard Purdie
6af7e42820 oeqa: Update cleanup code to wait for hashserv exit
We sometimes see exceptions from code seeing the hashserv DB files
being removed at directory cleanup time. Add a check to ensure the
hashserv has written the database journal (and hence has likely exited)
before cleaning up.

This will hopefully avoid errors like:

Traceback (most recent call last):
  File "[...]/meta/lib/oeqa/sdk/buildtools-cases/build.py", line 30, in test_libc
    delay = delay - 1
  File "/usr/lib/python3.6/tempfile.py", line 948, in __exit__
    self.cleanup()
  File "/usr/lib/python3.6/tempfile.py", line 952, in cleanup
    _rmtree(self.name)
  File "/usr/lib/python3.6/shutil.py", line 486, in rmtree
    _rmtree_safe_fd(fd, path, onerror)
  File "/usr/lib/python3.6/shutil.py", line 424, in _rmtree_safe_fd
    _rmtree_safe_fd(dirfd, fullname, onerror)
  File "/usr/lib/python3.6/shutil.py", line 444, in _rmtree_safe_fd
    onerror(os.unlink, fullname, sys.exc_info())
  File "/usr/lib/python3.6/shutil.py", line 442, in _rmtree_safe_fd
    os.unlink(name, dir_fd=topfd)
FileNotFoundError: [Errno 2] No such file or directory: 'hashserv.db-wal'
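
A minimal sketch of the kind of wait this introduces (the function name and
timeout are illustrative, not the exact oeqa code):

    import os
    import time

    def wait_for_hashserv_exit(dbpath, timeout=30):
        # While the SQLite write-ahead log exists, the hash equivalence
        # server has not yet flushed and closed its database.
        delay = timeout
        while delay > 0 and os.path.exists(dbpath + "-wal"):
            time.sleep(0.5)
            delay -= 0.5

    wait_for_hashserv_exit("hashserv.db")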

(From OE-Core rev: 635833734b4c61e453ca9843a9fb5cecf3eb1c97)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0b07d9add687d78495176cda0f3011c10ffa4d4b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Ross Burton
095aa7adfd curl: fix CVE-2021-22945 through -22947
(From OE-Core rev: 2f9feadd518444a5c19892acfa9bfca38cb1c25b)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit cff6888f3b2b4bd0a42329b7f7c59b33c9d51265)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Jose Quaresma
b332b342b4 patch.bbclass: when the patch fails show more info on the fatal error
There are situations where the user has 'patchdir' defined
as a parameter on SRC_URI, but does not realize that the patch
is then applied relative to the recipe source dir 'S'.

- When the user has 'patchdir' defined, check that this directory exists.
- If the patch fails, show additional info to the user:
  - Import: show the striplevel
  - Resolver: show the expanded 'patchdir' to the user.

The next example is from opencv in the meta-oe layer; here the
patch is applied to the target directory ${WORKDIR}/git/contrib.

S = "${WORKDIR}/git"
SRCREV_FORMAT = "opencv_contrib"
SRC_URI = "git://github.com/opencv/opencv.git;name=opencv \
           git://github.com/opencv/opencv_contrib.git;destsuffix=contrib;name=contrib \
           file://0001-sfm-link-with-Glog_LIBS.patch;patchdir=../contrib \
           "

* When the patch fails there is no message that indicates the real reason.
  patchdir=../no-found-on-file-system

ERROR: opencv-4.5.2-r0 do_patch: Command Error: 'quilt --quiltrc /build/tmp/work/core2-64-poky-linux/opencv/4.5.2-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0  Output:
stdout: Applying patch 0001-sfm-link-with-Glog_LIBS.patch
can't find file to patch at input line 37
Perhaps you used the wrong -p or --strip option?

* The check of the patchdir will add a new fatal error
  when the user specifies a wrong path that doesn't exist.
  patchdir=../no-found-on-file-system

ERROR: opencv-4.5.2-r0 do_patch: Target directory '/build/tmp/work/core2-64-poky-linux/opencv/4.5.2-r0/git/../no-found-on-file-system' not found, patchdir '../no-found-on-file-system' is incorrect in patch file '0001-sfm-link-with-Glog_LIBS.patch'

* When we can't apply the patch but the patchdir exists,
  show the expanded patchdir in the fatal error.
  patchdir=../git

ERROR: opencv-4.5.2-r0 do_patch: Applying patch '0001-sfm-link-with-Glog_LIBS.patch' on target directory '/build/tmp/work/core2-64-poky-linux/opencv/4.5.2-r0/git/../git'
Command Error: 'quilt --quiltrc /build/tmp/work/core2-64-poky-linux/opencv/4.5.2-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0  Output:
stdout: Applying patch 0001-sfm-link-with-Glog_LIBS.patch
can't find file to patch at input line 37
Perhaps you used the wrong -p or --strip option?
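
A minimal sketch of the directory check described above (the helper name is
hypothetical and the message mirrors the error shown earlier; this is not the
exact patch.bbclass code):

    import os
    import bb

    def check_patchdir(srcdir, patchdir, patchname):
        # Resolve the user-supplied patchdir relative to S and fail early
        # with an explicit message instead of an opaque quilt error.
        target = os.path.join(srcdir, patchdir)
        if not os.path.isdir(target):
            bb.fatal("Target directory '%s' not found, patchdir '%s' is incorrect "
                     "in patch file '%s'" % (target, patchdir, patchname))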

(From OE-Core rev: caf21ee38f7a96af6c10e80f9422611e317b29d6)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit c44bc7c0fb8b7c2e44dd93607a3bfd9733e1df80)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Richard Purdie
8e140d5d12 linux-yocto-dev: Ensure DEPENDS matches recent 5.14 kernel changes
DEPENDS here should match what 5.14 is using.

(From OE-Core rev: adc33c4bb8a0f5c542cb1da3b986e89ecea75714)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 899fd41723f41fe0a0cc24373c326b88cb385fe9)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Ross Burton
d484fcd8df linux-yocto: add libmpc-native to DEPENDS
5.14 changed how the GCC plugins are built, which means building them
now depends on both GMP and MPC. We already depend on gmp-native,
so add libmpc-native as well.
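
In recipe terms the addition is roughly the following (a sketch; the real
recipe carries more entries in DEPENDS):

    # GCC plugin support in 5.14 links against both GMP and MPC on the host
    DEPENDS += "gmp-native libmpc-native"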

(From OE-Core rev: 0c15ed141ea3b23140d3aa4e6ae17ddee0947f3f)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit f242a6db0757b31c0d4eba5c362f616e1ace14d6)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Andres Beltran
6c3a8ae1f9 buildhistory: Fix package output files for SDKs
Currently, installed packages are listed for images in image-info.txt, but
not for SDKs in sdk-info.txt. Add TOOLCHAIN_HOST_TASK and
TOOLCHAIN_TARGET_TASK to the output variables in sdk-info.txt.

Moreover, package output files for the SDK host are empty because
PKGDATA_DIR defaults to the target directory. Fix this bug and create a new
variable called PKGDATA_DIR_SDK which stores the correct path for the SDK
host package data.
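
A sketch of the new variable's role (the default value shown here is an
assumption for illustration, not a quote of the change):

    # SDK host package data lives under the SDK sysroot name rather than
    # the target MACHINE directory that PKGDATA_DIR points at.
    PKGDATA_DIR_SDK ?= "${TMPDIR}/pkgdata/${SDK_SYS}"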

(From OE-Core rev: af7b5c664649d2c0d1b23eb1d553080b9d2a7864)

Signed-off-by: Andres Beltran <abeltran@linux.microsoft.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 82e6172c1df378dff4e503aa878501c08937b5bb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Alexander Kanavin
be011b75c0 lttng-tools: replace ad hoc ptest fixup with upstream fixes
(From OE-Core rev: e9613ecfcec8b606b05407b6199806df7ac18e9b)

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 87fd3080c86f6987e4403a2cb8263564f6e1ac4f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Richard Purdie
984e5e04aa libnewt: Use python3targetconfig to fix reproducibility issue
We're seeing pthread being linked sometimes and not others, leading to
non-reproducible target binaries. The reason is mixing the native python
config with the target one. We should use the target one.
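
The fix amounts to inheriting the target configuration class in the recipe,
roughly (a sketch; surrounding recipe content omitted):

    # Use the target Python's configuration instead of the native one
    inherit python3targetconfig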

(From OE-Core rev: 5d27faf68ff94519d6618351ce87a8b3818ba611)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3fe5101b335384ef83e96ccc58687fd631164075)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Richard Purdie
5fa3c638ed libxml2: Use python3targetconfig to fix reproducibility issue
We're seeing pthread being linked sometimes and not others, leading to
non-reproducible target binaries. The reason is mixing the native python
config with the target one. We should use the target one.

(From OE-Core rev: 0a390b5b36bbd1b2a3aefa74d03e8e40240c68fb)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1bc5378db760963e2ad46542f2907dd6a592eb66)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Oleksandr Kravchuk
b26b40bbf7 python3: update to 3.9.7
(From OE-Core rev: 5895b6a51b73735f081267ed6e6e2455c1d717ed)

Signed-off-by: Oleksandr Kravchuk <open.source@oleksandr-kravchuk.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 9612bb0639c13571e661f208aa7b28789953d9ec)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Sakib Sajal
a02cfe25a5 go: upgrade 1.16.7 -> 1.16.8
(From OE-Core rev: 18559ba281a2ea4f8334fcdd4fca427af802ea81)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 97a2f406635f51bad1ab070f018a6466209f257b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:26 +01:00
Andrej Valek
3bf9467bcb busybox: 1.34.0 -> 1.34.1
- update to next stable version 1.34.1

(From OE-Core rev: 12930a587dbce9057071f5ea177c649e524d950d)

Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 84c9bb0796aa4382cc08075ec2908aea81892f64)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
48efa6a229 gstreamer1.0: 1.18.4 -> 1.18.5
(From OE-Core rev: 8fc9f5ad560b8d530cbffacbd1191fba4c2b14d4)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit d325f0d31bb1cbe889c7303ac2999c4dae391b34)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
c53f1f413a gst-examples: 1.18.4 -> 1.18.5
(From OE-Core rev: 570573718fe34de28709301d9cbc322321e71f5f)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit b1bddc80dc172563b7cd469a8de6b9db2e6ad985)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
8f140e272a gst-devtools: 1.18.4 -> 1.18.5
(From OE-Core rev: 9de831baa373f71805c2c3fb20463ebb70215760)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit fe1345f72e41fe0fd0a8c69ac8e7cb7551666fcb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
fa13f88d19 gstreamer1.0-python: 1.18.4 -> 1.18.5
(From OE-Core rev: 9b436d2f69d1594e3efa33153226a269f09f3130)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 3c68529eb99c74de5a30520261f62a5544be9b39)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
11ff9d080a gstreamer1.0-omx: 1.18.4 -> 1.18.5
(From OE-Core rev: 0ede8a6546e54fd4e4c0ebe829e354a8bb4ae7c6)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 65ed3c4e6c0fbade647ec31a6a77f06ed4e97e7a)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
57703b457c gstreamer1.0-vaapi: 1.18.4 -> 1.18.5
(From OE-Core rev: f1f0d33097a939f573ea02d6ee2aff6c5df783fb)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit a46b9209b5f2f45b4206a7819e00c48795885093)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
eb7bcac465 gstreamer1.0-libav: 1.18.4 -> 1.18.5
(From OE-Core rev: 6290f239b73209ad3ba49cce56ac14b7fb225edc)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 6a52088c1938c197d8e89e10d8e6622fa4b41465)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
86d6c4cbfe gstreamer1.0-rtsp-server: 1.18.4 -> 1.18.5
(From OE-Core rev: b34d6d2be8bb5c7f8718ba6948502db808f09601)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 809db373816ed896048f551275589bac0f04ff92)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
9b197bef11 gstreamer1.0-plugins-ugly: 1.18.4 -> 1.18.5
(From OE-Core rev: 9d78656433bd9b1192dc5ca7a8eefa31ba6d97a3)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 09373e8c33cd0c585e146b55d9f7680832f2ad09)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
5d8c7e6ab6 gstreamer1.0-plugins-bad: 1.18.4 -> 1.18.5
(From OE-Core rev: ef13e5913b1b4a447c89ca3247face6178accaef)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 4e7789ecfdb1bd7afa6ff5be40f1d0e2a1a09e4c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
73ebd1f04d gstreamer1.0-plugins-good: 1.18.4 -> 1.18.5
Drop backport patches:
    * 0002-rtpjitterbuffer-Fix-parsing-of-the-mediaclk-direct-f.patch

    * 0003-Remove-volatile-from-static-vars-to-fix-build-with-g.patch
      a1bf3d8d54

(From OE-Core rev: 790cb8acafc89969a6d1468f7df0493b737f946a)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit b51d46790e582556a7230a1fe8f67375e785cc43)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Jose Quaresma
dc851c427b gstreamer1.0-plugins-base: 1.18.4 -> 1.18.5
Drop backport patches:
    * 4ef5c91697a141fea7317aff7f0f28e5a861db99.patch

(From OE-Core rev: d38a5626f20ed964d79e64f237aa2e3bdb5c7d69)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit de0ee4323a19a27b6bcef7cc791d0373c311ef22)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Alexandre Belloni
9d87c1c9d4 oeqa/selftest/sstatetests: fix typo ware -> were
(From OE-Core rev: 0c8d02830ebac3c8ba563a46d304b1ef2a282b9f)

Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit c94a9ece226b1d2012f5ee966b81bf607d954937)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Ralph Siemsen
291da72ce1 tar: filter CVEs using vendor name
Recently a number of CVEs have been logged against a nodejs project
called "node-tar". These appear as false positives against the GNU tar
being built by Yocto. Some of these have been manually excluded using
CVE_CHECK_WHITELIST.

To avoid this problem, use the vendor name (in addition to package name)
for filtering CVEs. The syntax for this is:
  CVE_PRODUCT = "vendor:package"
When not specified, the vendor defaults to "%" which matches anything.
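
For GNU tar this can look like the following recipe line (a sketch consistent
with the syntax above):

    # Match only CVEs filed against the gnu vendor's tar, not node-tar
    CVE_PRODUCT = "gnu:tar"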

(From OE-Core rev: d11e970c6e2482ad0b21994e4ec85ddf2aea1ede)

Signed-off-by: Ralph Siemsen <ralph.siemsen@linaro.org>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 45d1a0bea0c628f84a00d641a4d323491988106f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-29 11:17:25 +01:00
Richard Purdie
0b500dba7a bitbake: bitbake-worker: Add debug when unpickle fails
We occasionally see bitbake-worker failing, and from the logs an unpickle
error occurs. Add more debug output so we can investigate further the next
time it fails.

[YOCTO #14595]

(Bitbake rev: 692fa35f4c23722f3179502cb965960cc230e709)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fe8105cc06beca8240b76ea366a1eff5aa9c5412)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-26 23:27:20 +01:00
Richard Purdie
3c5a5bbc19 bitbake: tests/runqueue: Ensure hashserv exits before deleting files
We've seen races where the socket may be gone but the server is still writing
out its database. Handle that case too to avoid cleanup tracebacks.

[YOCTO #14440]

(Bitbake rev: 36b1b4c4fcee9dde628c7113203939730ab12ae5)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b9e4fb843cb9d3a4d4404af093a781fab5520465)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-26 23:27:20 +01:00
Richard Purdie
a9fdfc41ba bitbake: fetch2/perforce: Fix typo
(Bitbake rev: 20eae05fdd6cb7ace87ad005f72c256e2fddb3d0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-26 13:47:21 +01:00
Michael Opdenacker
8eb5dd8757 docs: poky.yaml: updates for 3.4
(From yocto-docs rev: 6b5fd186df147816e2769241c4d6b501b66126dc)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-22 22:15:30 +01:00
Michael Opdenacker
80f109a7da releases.rst: fix release number for 3.3.3
(From yocto-docs rev: 3bb6a0918f3755d8d25865b5b3d2bd711925714b)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-22 22:15:30 +01:00
Daiane Angolini
1b1369d52c ref-manual: Update how to set a useradd password
Partial fix for [YOCTO #14605]

(From yocto-docs rev: d9c7fba68ca7c901e9e7064fee2989d834d4684c)

Signed-off-by: Daiane Angolini <daiane.angolini@foundries.io>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-22 22:15:30 +01:00
Paul Eggleton
14d4106f56 migration-3.4: add some extra packaging notes
Add some notes on minor packaging changes that I missed earlier.

(From yocto-docs rev: 8cb799a04f9c160da28a7283c9bd8d10406f57af)

Signed-off-by: Paul Eggleton <paul.eggleton@microsoft.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-22 22:15:30 +01:00
Paul Eggleton
3507345f40 poky.yaml: fix lz4 package name for older Ubuntu versions
It turns out that for Ubuntu, lz4 is the valid package name for newer
versions (e.g. 20.04 LTS) but not for older ones (e.g. 18.04 LTS, where
the correct package is liblz4-tool). In 20.04 the lz4 package includes
liblz4-tool in its "provides", so it's best to list that one for now.
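
In practice this maps to the following host package install (an assumed apt
invocation for the releases mentioned):

    # liblz4-tool works on both: a real package on 18.04 LTS, and
    # provided by the lz4 package on 20.04 LTS and newer
    $ sudo apt install liblz4-tool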

(From yocto-docs rev: 222af72b9ee307d43a8463283e058c6ebb18fefc)

Signed-off-by: Paul Eggleton <paul.eggleton@microsoft.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-22 22:15:30 +01:00
Michael Opdenacker
42322ad2ce ref-manual: document TOOLCHAIN_HOST_TASK_ESDK
(From yocto-docs rev: d75c5450ecf56c8ac799a633ee9ac459e88f91fc)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
Michael Opdenacker
34d69cc1b3 test-manual: how to enable reproducible builds
(From yocto-docs rev: 2f6780b837b3c17bc7fd1d2d1420e2e893960a27)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
Michael Opdenacker
9bdb2db854 ref-manual: document "reproducible_build" class and SOURCE_DATE_EPOCH
(From yocto-docs rev: ab6d3dbf57cac560b9b142cf5becf11d3edf09b7)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
Michael Opdenacker
d03d151093 ref-manual: document BUILD_REPRODUCIBLE_BINARIES
(From yocto-docs rev: 9855ed9b35acede2f6e56509709000796a3927f3)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
Paul Eggleton
2abbaa3319 migration: tweak introduction section
Ensure we have a brief introductory section and tweak the general
migration considerations a little.

(From yocto-docs rev: c94aa8b9d828f9267a70deee05bdf483dc570101)

Signed-off-by: Paul Eggleton <paul.eggleton@microsoft.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
Paul Eggleton
f0428cea44 migration-3.4: add additional migration info
Add migration instructions gathered by combing the commits in this
release.

(From yocto-docs rev: b864f8570271df4e6cb47d21cb658d13ffd1d8f5)

Signed-off-by: Paul Eggleton <paul.eggleton@microsoft.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
Paul Eggleton
cc8df6ce5a poky.yaml: add lz4 and zstd to essential host packages
These are now required so update the corresponding distro-specific
lists used in the system requirements documentation.

(From yocto-docs rev: 1ddd56a98064015582a8c161a1b998c06ebcaf26)

Signed-off-by: Paul Eggleton <paul.eggleton@microsoft.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
Paul Eggleton
5c6455f1af ref-manual: remove meta class
This was recently removed so remove the reference to it.

(From yocto-docs rev: 46bfdb0b4ae2cb834589ef09436b120715663a31)

Signed-off-by: Paul Eggleton <paul.eggleton@microsoft.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
Paul Eggleton
c3223e3101 migration-3.4: tweak overrides change section
Minor grammar and readability improvements.

(From yocto-docs rev: 8e497bf7398042620e921645f85f5ccc59c4790a)

Signed-off-by: Paul Eggleton <paul.eggleton@microsoft.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
Richard Purdie
7f4efed145 bitbake: test/fetch: Update urls to match upstream branch name changes
(Bitbake rev: 036ad517921a68525a9b2564363b01332d668e4c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-10-20 20:20:46 +01:00
4146 changed files with 162899 additions and 187098 deletions

.gitignore
View File

@@ -31,8 +31,4 @@ pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log
_toaster_clones/
downloads/
sstate-cache/
toaster.sqlite
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log

View File

@@ -1,2 +1,2 @@
# Template settings
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf/templates/default}
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}

View File

@@ -53,6 +53,7 @@ Maintainers needed
* wic
* Patchwork
* Patchtest
* Prelink-cross
* Matchbox
* Sato
* Autobuilder

Makefile
View File

@@ -0,0 +1,35 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
DESTDIR = final
ifeq ($(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi),0)
$(error "The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed")
endif
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile.sphinx clean publish
publish: Makefile.sphinx html singlehtml
rm -rf $(BUILDDIR)/$(DESTDIR)/
mkdir -p $(BUILDDIR)/$(DESTDIR)/
cp -r $(BUILDDIR)/html/* $(BUILDDIR)/$(DESTDIR)/
cp $(BUILDDIR)/singlehtml/index.html $(BUILDDIR)/$(DESTDIR)/singleindex.html
sed -i -e 's@index.html#@singleindex.html#@g' $(BUILDDIR)/$(DESTDIR)/singleindex.html
clean:
@rm -rf $(BUILDDIR)
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile.sphinx
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

View File

@@ -6,28 +6,24 @@ of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
https://www.openembedded.org/
http://www.openembedded.org/
The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
https://docs.yoctoproject.org/
http://yoctoproject.org/documentation
Contributing
------------
Please refer to our contributor guide here: https://docs.yoctoproject.org/dev/contributor-guide/
for full details on how to submit changes.
As a quick guide, patches should be sent to openembedded-core@lists.openembedded.org
The git command to do that would be:
git send-email -M -1 --to openembedded-core@lists.openembedded.org
Please refer to
http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches.
Mailing list:
https://lists.openembedded.org/g/openembedded-core
http://lists.openembedded.org/mailman/listinfo/openembedded-core
Source code:
https://git.openembedded.org/openembedded-core/
http://git.openembedded.org/openembedded-core/

View File

@@ -13,24 +13,19 @@ Bitbake plain documentation can be found under the doc directory or its integrat
html version at the Yocto Project website:
https://docs.yoctoproject.org
Bitbake requires Python version 3.8 or newer.
Contributing
------------
Please refer to our contributor guide here: https://docs.yoctoproject.org/dev/contributor-guide/
for full details on how to submit changes.
As a quick guide, patches should be sent to bitbake-devel@lists.openembedded.org
The git command to do that would be:
Please refer to
https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches, just note that the latter documentation is intended
for OpenEmbedded (and its core) not bitbake patches (bitbake-devel@lists.openembedded.org)
but in general main guidelines apply. Once the commit(s) have been created, the way to send
the patch is through git-send-email. For example, to send the last commit (HEAD) on current
branch, type:
git send-email -M -1 --to bitbake-devel@lists.openembedded.org
If you're sending a patch related to the BitBake manual, make sure you copy
the Yocto Project documentation mailing list:
git send-email -M -1 --to bitbake-devel@lists.openembedded.org --cc docs@lists.yoctoproject.org
Mailing list:
https://lists.openembedded.org/g/bitbake-devel
@@ -39,25 +34,10 @@ Source code:
https://git.openembedded.org/bitbake/
Testing
-------
Testing:
Bitbake has a testsuite located in lib/bb/tests/ whichs aim to try and prevent regressions.
You can run this with "bitbake-selftest". In particular the fetcher is well covered since
it has so many corner cases. The datastore has many tests too. Testing with the testsuite is
recommended before submitting patches, particularly to the fetcher and datastore. We also
appreciate new test cases and may require them for more obscure issues.
To run the tests "zstd" and "git" must be installed.
The assumption is made that this testsuite is run from an initialized OpenEmbedded build
environment (i.e. `source oe-init-build-env` is used). If this is not the case, run the
testsuite as follows:
export PATH=$(pwd)/bin:$PATH
bin/bitbake-selftest
The testsuite can alternatively be executed using pytest, e.g. obtained from PyPI (in this
case, the PATH is configured automatically):
pytest

View File

@@ -25,9 +25,10 @@ except RuntimeError as exc:
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
__version__ = "2.6.0"
__version__ = "1.52.0"
if __name__ == "__main__":
if __version__ != bb.__version__:

View File

@@ -11,7 +11,6 @@
import os
import sys
import warnings
warnings.simplefilter("default")
import argparse
import logging
@@ -28,7 +27,6 @@ logger = bb.msg.logger_create(myname)
is_dump = myname == 'bitbake-dumpsig'
def find_siginfo(tinfoil, pn, taskname, sigs=None):
result = None
tinfoil.set_event_mask(['bb.event.FindSigInfoResult',
@@ -54,7 +52,6 @@ def find_siginfo(tinfoil, pn, taskname, sigs=None):
sys.exit(2)
return result
def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
""" Find the most recent signature files for the specified PN/task """
@@ -63,13 +60,13 @@ def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
if sig1 and sig2:
sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
if not sigfiles:
if len(sigfiles) == 0:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
elif sig1 not in sigfiles:
elif not sig1 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig1))
sys.exit(1)
elif sig2 not in sigfiles:
elif not sig2 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
@@ -89,11 +86,11 @@ def recursecb(key, hash1, hash2):
hashfiles = find_siginfo(tinfoil, key, None, hashes)
recout = []
if not hashfiles:
if len(hashfiles) == 0:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
elif hash1 not in hashfiles:
elif not hash1 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
elif hash2 not in hashfiles:
elif not hash2 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
@@ -113,36 +110,36 @@ parser.add_argument('-D', '--debug',
if is_dump:
parser.add_argument("-t", "--task",
help="find the signature data file for the last run of the specified task",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
help="find the signature data file for the last run of the specified task",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("sigdatafile1",
help="Signature file to dump. Not used when using -t/--task.",
action="store", nargs='?', metavar="sigdatafile")
help="Signature file to dump. Not used when using -t/--task.",
action="store", nargs='?', metavar="sigdatafile")
else:
parser.add_argument('-c', '--color',
help='Colorize the output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
help='Colorize the output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
parser.add_argument('-d', '--dump',
help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
action='store_true')
help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
action='store_true')
parser.add_argument("-t", "--task",
help="find the signature data files for the last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
help="find the signature data files for the last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("-s", "--signature",
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
parser.add_argument("sigdatafile1",
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
parser.add_argument("sigdatafile2",
help="Second signature file to compare",
action="store", nargs='?')
help="Second signature file to compare",
action="store", nargs='?')
options = parser.parse_args()
if is_dump:
@@ -160,8 +157,7 @@ if options.taskargs:
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
if not options.dump and options.sigargs:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0],
options.sigargs[1])
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0], options.sigargs[1])
else:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1])
@@ -170,8 +166,7 @@ if options.taskargs:
output = bb.siggen.dump_sigfile(files[-1])
else:
if len(files) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (
options.taskargs[0], options.taskargs[1]))
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (options.taskargs[0], options.taskargs[1]))
sys.exit(1)
# Recurse into signature comparison

View File

@@ -25,7 +25,6 @@ if __name__ == "__main__":
parser.add_argument('-u', '--unexpand', help='Do not expand the value (with --value)', action="store_true")
parser.add_argument('-f', '--flag', help='Specify a variable flag to query (with --value)', default=None)
parser.add_argument('--value', help='Only report the value, no history and no variable name', action="store_true")
parser.add_argument('-q', '--quiet', help='Silence bitbake server logging', action="store_true")
args = parser.parse_args()
if args.unexpand and not args.value:
@@ -36,7 +35,7 @@ if __name__ == "__main__":
print("--flag only makes sense with --value")
sys.exit(1)
with bb.tinfoil.Tinfoil(tracking=True, setup_logging=not args.quiet) as tinfoil:
with bb.tinfoil.Tinfoil(tracking=True) as tinfoil:
if args.recipe:
tinfoil.prepare(quiet=2)
d = tinfoil.parse_recipe(args.recipe)

View File

@@ -68,11 +68,11 @@ def main():
registered = False
for plugin in plugins:
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if hasattr(plugin, 'register_commands'):
registered = True
plugin.register_commands(subparsers)
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if not registered:
logger.error("No commands registered - missing plugins?")

View File

@@ -1,7 +1,5 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

View File

@@ -12,12 +12,11 @@ warnings.simplefilter("default")
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 11 or not sys.argv[1].startswith("decafbad"):
if len(sys.argv) != 10 or not sys.argv[1].startswith("decafbad"):
print("bitbake-server is meant for internal execution by bitbake itself, please don't use it standalone.")
sys.exit(1)
@@ -29,8 +28,7 @@ logfile = sys.argv[4]
lockname = sys.argv[5]
sockname = sys.argv[6]
timeout = float(sys.argv[7])
profile = bool(int(sys.argv[8]))
xmlrpcinterface = (sys.argv[9], int(sys.argv[10]))
xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
if xmlrpcinterface[0] == "None":
xmlrpcinterface = (None, xmlrpcinterface[1])
@@ -51,5 +49,5 @@ logger = logging.getLogger("BitBake")
handler = bb.event.LogHandler()
logger.addHandler(handler)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface, profile)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface)

View File

@@ -1,7 +1,5 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -24,7 +22,8 @@ import subprocess
from multiprocessing import Lock
from threading import Thread
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -121,10 +120,11 @@ def worker_child_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
with bb.utils.lock_timeout(worker_pipe_lock):
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe_lock.acquire()
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
raise
@@ -143,17 +143,7 @@ def sigterm_handler(signum, frame):
os.killpg(0, signal.SIGTERM)
sys.exit()
def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
fn = runtask['fn']
task = runtask['task']
taskname = runtask['taskname']
taskhash = runtask['taskhash']
unihash = runtask['unihash']
appends = runtask['appends']
layername = runtask['layername']
taskdepdata = runtask['taskdepdata']
quieterrors = runtask['quieterrors']
def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, extraconfigdata, quieterrors=False, dry_run_exec=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
@@ -162,10 +152,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
fakeenv = {}
umask = None
uid = os.getuid()
gid = os.getgid()
taskdep = runtask['taskdep']
taskdep = workerdata["taskdeps"][fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
umask = taskdep['umask'][taskname]
elif workerdata["umask"]:
@@ -177,24 +164,24 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
except TypeError:
pass
dry_run = cfg.dry_run or runtask['dry_run']
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
fakeroot = True
envvars = (runtask['fakerootenv'] or "").split()
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
fakedirs = (runtask['fakerootdirs'] or "").split()
fakedirs = (workerdata["fakerootdirs"][fn] or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
logger.debug2('Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
else:
envvars = (runtask['fakerootnoenv'] or "").split()
envvars = (workerdata["fakerootnoenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
@@ -245,11 +232,11 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
os.umask(umask)
try:
bb_cache = bb.cache.NoCache(databuilder)
(realfn, virtual, mc) = bb.cache.virtualfn2realfn(fn)
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
the_data.setVar('BB_CURRENTTASK', taskname.replace("do_", ""))
if cfg.limited_deps:
the_data.setVar("BB_LIMITEDDEPS", "1")
the_data.setVar("BUILDNAME", workerdata["buildname"])
@@ -263,20 +250,12 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
bb.parse.siggen.set_taskhashes(workerdata["newhashes"])
ret = 0
the_data = databuilder.parseRecipe(fn, appends, layername)
the_data = bb_cache.loadDataFull(fn, appends)
the_data.setVar('BB_TASKHASH', taskhash)
the_data.setVar('BB_UNIHASH', unihash)
bb.parse.siggen.setup_datacache_from_datastore(fn, the_data)
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
if not bb.utils.to_boolean(the_data.getVarFlag(taskname, 'network')):
if bb.utils.is_local_uid(uid):
logger.debug("Attempting to disable network for %s" % taskname)
bb.utils.disable_network(uid, gid)
else:
logger.debug("Skipping disable network for %s since %s is not a local uid." % (taskname, uid))
# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
@@ -449,7 +428,7 @@ class BitbakeWorker(object):
def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)
self.databuilder = bb.cookerdata.CookerDataBuilder(self.cookercfg, worker=True)
self.databuilder.parseBaseConfiguration(worker=True)
self.databuilder.parseBaseConfiguration()
self.data = self.databuilder.data
def handle_extraconfigdata(self, data):
@@ -464,7 +443,6 @@ class BitbakeWorker(object):
for mc in self.databuilder.mcdata:
self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.workerdata["hashservaddr"])
self.databuilder.mcdata[mc].setVar("__bbclasstype", "recipe")
def handle_newtaskhashes(self, data):
self.workerdata["newhashes"] = pickle.loads(data)
@@ -482,15 +460,11 @@ class BitbakeWorker(object):
sys.exit(0)
def handle_runtask(self, data):
runtask = pickle.loads(data)
fn = runtask['fn']
task = runtask['task']
taskname = runtask['taskname']
fn, task, taskname, taskhash, unihash, quieterrors, appends, taskdepdata, dry_run_exec = pickle.loads(data)
workerlog_write("Handling runtask %s %s %s\n" % (task, fn, taskname))
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, self.extraconfigdata, runtask)
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, self.extraconfigdata, quieterrors, dry_run_exec)
self.build_pids[pid] = task
self.build_pipes[pid] = runQueueWorkerPipe(pipein, pipeout)

View File

@@ -1,7 +1,5 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

View File

@@ -33,7 +33,7 @@ databaseCheck()
$MANAGE migrate --noinput || retval=1
if [ $retval -eq 1 ]; then
echo "Failed migrations, halting system start" 1>&2
echo "Failed migrations, aborting system start" 1>&2
return $retval
fi
# Make sure that checksettings can pick up any value for TEMPLATECONF
@@ -41,7 +41,7 @@ databaseCheck()
$MANAGE checksettings --traceback || retval=1
if [ $retval -eq 1 ]; then
printf "\nError while checking settings; exiting\n"
printf "\nError while checking settings; aborting\n"
return $retval
fi
@@ -248,7 +248,7 @@ fi
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
export TOASTER_DIR=$TOASTERDIR
export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS TOASTER_DIR"
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then

View File

@@ -1,7 +1,7 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2021 Joshua Watt <JPEWhacker@gmail.com>
#
#
# Dockerfile to build a bitbake hash equivalence server container
#
# From the root of the bitbake repository, run:
@@ -15,9 +15,5 @@ RUN apk add --no-cache python3
COPY bin/bitbake-hashserv /opt/bbhashserv/bin/
COPY lib/hashserv /opt/bbhashserv/lib/hashserv/
COPY lib/bb /opt/bbhashserv/lib/bb/
COPY lib/codegen.py /opt/bbhashserv/lib/codegen.py
COPY lib/ply /opt/bbhashserv/lib/ply/
COPY lib/bs4 /opt/bbhashserv/lib/bs4/
ENTRYPOINT ["/opt/bbhashserv/bin/bitbake-hashserv"]

View File

@@ -1,62 +0,0 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2022 Daniel Gomez <daniel@qtec.com>
#
# Dockerfile to build a bitbake PR service container
#
# From the root of the bitbake repository, run:
#
# docker build -f contrib/prserv/Dockerfile . -t prserv
#
# Running examples:
#
# 1. PR Service in RW mode, port 18585:
#
# docker run --detach --tty \
# --env PORT=18585 \
# --publish 18585:18585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#
# 2. PR Service in RO mode, default port (8585) and custom LOGFILE:
#
# docker run --detach --tty \
# --env DBMODE="--read-only" \
# --env LOGFILE=/var/lib/bbprserv/prservro.log \
# --publish 8585:8585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#
FROM alpine:3.14.4
RUN apk add --no-cache python3
COPY bin/bitbake-prserv /opt/bbprserv/bin/
COPY lib/prserv /opt/bbprserv/lib/prserv/
COPY lib/bb /opt/bbprserv/lib/bb/
COPY lib/codegen.py /opt/bbprserv/lib/codegen.py
COPY lib/ply /opt/bbprserv/lib/ply/
COPY lib/bs4 /opt/bbprserv/lib/bs4/
ENV PATH=$PATH:/opt/bbprserv/bin
RUN mkdir -p /var/lib/bbprserv
ENV DBFILE=/var/lib/bbprserv/prserv.sqlite3 \
LOGFILE=/var/lib/bbprserv/prserv.log \
LOGLEVEL=debug \
HOST=0.0.0.0 \
PORT=8585 \
DBMODE=""
ENTRYPOINT [ "/bin/sh", "-c", \
"bitbake-prserv \
--file=$DBFILE \
--log=$LOGFILE \
--loglevel=$LOGLEVEL \
--start \
--host=$HOST \
--port=$PORT \
$DBMODE \
&& tail -f $LOGFILE"]

View File

@@ -40,7 +40,7 @@ set cpo&vim
let s:maxoff = 50 " maximum number of lines to look backwards for ()
function! GetBBPythonIndent(lnum)
function GetPythonIndent(lnum)
" If this line is explicitly joined: If the previous line was also joined,
" line it up with that one, otherwise add two 'shiftwidth'
@@ -257,7 +257,7 @@ let b:did_indent = 1
setlocal indentkeys+=0\"
function! BitbakeIndent(lnum)
function BitbakeIndent(lnum)
if !has('syntax_items')
return -1
endif
@@ -315,7 +315,7 @@ function! BitbakeIndent(lnum)
endif
if index(["bbPyDefRegion", "bbPyFuncRegion"], name) != -1
let ret = GetBBPythonIndent(a:lnum)
let ret = GetPythonIndent(a:lnum)
" Should normally always be indented by at least one shiftwidth; but allow
" return of -1 (defer to autoindent) or -2 (force indent to 0)
if ret == 0

View File

@@ -8,7 +8,7 @@ Manual Organization
Folders exist for individual manuals as follows:
* bitbake-user-manual --- The BitBake User Manual
* bitbake-user-manual - The BitBake User Manual
Each folder is self-contained regarding content and figures.
@@ -47,8 +47,8 @@ To install all required packages run:
To build the documentation locally, run:
$ cd doc
$ make html
$ cd documentation
$ make -f Makefile.sphinx html
The resulting HTML index page will be _build/html/index.html, and you
can browse your own copy of the locally generated documentation with

View File

@@ -1,9 +0,0 @@
<footer>
<hr/>
<div role="contentinfo">
<p>&copy; Copyright {{ copyright }}
<br>Last updated on {{ last_updated }} from the <a href="https://git.openembedded.org/bitbake/">bitbake</a> git repository.
</p>
</div>
</footer>

View File

@@ -79,8 +79,8 @@ directives.
Prior to parsing configuration files, BitBake looks at certain
variables, including:
- :term:`BB_ENV_PASSTHROUGH`
- :term:`BB_ENV_PASSTHROUGH_ADDITIONS`
- :term:`BB_ENV_WHITELIST`
- :term:`BB_ENV_EXTRAWHITE`
- :term:`BB_PRESERVE_ENV`
- :term:`BB_ORIGENV`
- :term:`BITBAKE_UI`
@@ -228,7 +228,7 @@ and then reload it.
Where possible, subsequent BitBake commands reuse this cache of recipe
information. The validity of this cache is determined by first computing
a checksum of the base configuration data (see
:term:`BB_HASHCONFIG_IGNORE_VARS`) and
:term:`BB_HASHCONFIG_WHITELIST`) and
then checking if the checksum matches. If that checksum matches what is
in the cache and the recipe and class files have not changed, BitBake is
able to use the cache. BitBake then reloads the cached information about
@@ -435,7 +435,7 @@ BitBake writes a shell script to
executes the script. The generated shell script contains all the
exported variables, and the shell functions with all variables expanded.
Output from the shell script goes to the file
``${``\ :term:`T`\ ``}/log.do_taskname.pid``. Looking at the expanded shell functions in
``${T}/log.do_taskname.pid``. Looking at the expanded shell functions in
the run file and the output in the log files is a useful debugging
technique.
@@ -477,7 +477,7 @@ changes because it should not affect the output for target packages. The
simplistic approach for excluding the working directory is to set it to
some fixed value and create the checksum for the "run" script. BitBake
goes one step better and uses the
:term:`BB_BASEHASH_IGNORE_VARS` variable
:term:`BB_HASHBASE_WHITELIST` variable
to define a list of variables that should never be included when
generating the signatures.
@@ -523,7 +523,7 @@ it cannot figure out dependencies.
Thus far, this section has limited discussion to the direct inputs into
a task. Information based on direct inputs is referred to as the
"basehash" in the code. However, there is still the question of a task's
indirect inputs --- the things that were already built and present in the
indirect inputs - the things that were already built and present in the
build directory. The checksum (or signature) for a particular task needs
to add the hashes of all the tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision. However, the
@@ -534,11 +534,11 @@ At the code level, there are a variety of ways both the basehash and the
dependent task hashes can be influenced. Within the BitBake
configuration file, we can give BitBake some extra information to help
it construct the basehash. The following statement effectively results
in a list of global variable dependency excludes --- variables never
in a list of global variable dependency excludes - variables never
included in any checksum. This example uses variables from OpenEmbedded
to help illustrate the concept::
BB_BASEHASH_IGNORE_VARS ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL \
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
@@ -552,8 +552,8 @@ through dependency chains are more complex and are generally
accomplished with a Python function. The code in
``meta/lib/oe/sstatesig.py`` shows two examples of this and also
illustrates how you can insert your own policy into the system if so
desired. This file defines the basic signature generator
OpenEmbedded-Core uses: "OEBasicHash". By default, there
desired. This file defines the two basic signature generators
OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash". By default, there
is a dummy "noop" signature handler enabled in BitBake. This means that
behavior is unchanged from previous versions. ``OE-Core`` uses the
"OEBasicHash" signature handler by default through this setting in the
@@ -561,13 +561,14 @@ behavior is unchanged from previous versions. ``OE-Core`` uses the
BB_SIGNATURE_HANDLER ?= "OEBasicHash"
The main feature of the "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is that
it adds the task hash to the stamp files. Thanks to this, any metadata
change will change the task hash, automatically causing the task to be run
again. This removes the need to bump :term:`PR` values, and changes to
metadata automatically ripple across the build.
The "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is the same as the "OEBasic"
version but adds the task hash to the stamp files. This results in any
metadata change that changes the task hash, automatically causing the
task to be run again. This removes the need to bump
:term:`PR` values, and changes to metadata automatically
ripple across the build.
It is also worth noting that the end result of signature
It is also worth noting that the end result of these signature
generators is to make some dependency and hash information available to
the build. This information includes:
@@ -656,7 +657,7 @@ builds are when execute, bitbake also supports user defined
configuration of the `Python
logging <https://docs.python.org/3/library/logging.html>`__ facilities
through the :term:`BB_LOGCONFIG` variable. This
variable defines a JSON or YAML `logging
variable defines a json or yaml `logging
configuration <https://docs.python.org/3/library/logging.config.html>`__
that will be intelligently merged into the default configuration. The
logging configuration is merged using the following rules:
@@ -690,9 +691,9 @@ logging configuration is merged using the following rules:
adds a filter called ``BitBake.defaultFilter``, both filters will be
applied to the logger
As a first example, you can create a ``hashequiv.json`` user logging
configuration file to log all Hash Equivalence related messages of ``VERBOSE``
or higher priority to a file called ``hashequiv.log``::
As an example, consider the following user logging configuration file
which logs all Hash Equivalence related messages of VERBOSE or higher to
a file called ``hashequiv.log`` ::
{
"version": 1,
@@ -721,40 +722,3 @@ or higher priority to a file called ``hashequiv.log``::
}
}
}
Then set the :term:`BB_LOGCONFIG` variable in ``conf/local.conf``::
BB_LOGCONFIG = "hashequiv.json"
Another example is this ``warn.json`` file to log all ``WARNING`` and
higher priority messages to a ``warn.log`` file::
{
"version": 1,
"formatters": {
"warnlogFormatter": {
"()": "bb.msg.BBLogFormatter",
"format": "%(levelname)s: %(message)s"
}
},
"handlers": {
"warnlog": {
"class": "logging.FileHandler",
"formatter": "warnlogFormatter",
"level": "WARNING",
"filename": "warn.log"
}
},
"loggers": {
"BitBake": {
"handlers": ["warnlog"]
}
},
"@disable_existing_loggers": false
}
Note that BitBake's helper classes for structured logging are implemented in
``lib/bb/msg.py``.


@@ -84,18 +84,18 @@ fetcher does know how to use HTTP as a transport.
Here are some examples that show commonly used mirror definitions::
PREMIRRORS ?= "\
bzr://.*/.\* http://somemirror.org/sources/ \
cvs://.*/.\* http://somemirror.org/sources/ \
git://.*/.\* http://somemirror.org/sources/ \
hg://.*/.\* http://somemirror.org/sources/ \
osc://.*/.\* http://somemirror.org/sources/ \
p4://.*/.\* http://somemirror.org/sources/ \
svn://.*/.\* http://somemirror.org/sources/"
bzr://.*/.\* http://somemirror.org/sources/ \\n \
cvs://.*/.\* http://somemirror.org/sources/ \\n \
git://.*/.\* http://somemirror.org/sources/ \\n \
hg://.*/.\* http://somemirror.org/sources/ \\n \
osc://.*/.\* http://somemirror.org/sources/ \\n \
p4://.*/.\* http://somemirror.org/sources/ \\n \
svn://.*/.\* http://somemirror.org/sources/ \\n"
MIRRORS =+ "\
ftp://.*/.\* http://somemirror.org/sources/ \
http://.*/.\* http://somemirror.org/sources/ \
https://.*/.\* http://somemirror.org/sources/"
ftp://.*/.\* http://somemirror.org/sources/ \\n \
http://.*/.\* http://somemirror.org/sources/ \\n \
https://.*/.\* http://somemirror.org/sources/ \\n"
It is useful to note that BitBake
supports cross-URLs. It is possible to mirror a Git repository on an
@@ -167,9 +167,6 @@ govern the behavior of the unpack stage:
- *dos:* Applies to ``.zip`` and ``.jar`` files and specifies whether
to use DOS line ending conversion on text files.
- *striplevel:* Strip specified number of leading components (levels)
from file names on extraction
- *subdir:* Unpacks the specific URL to the specified subdirectory
within the root directory.
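These two parameters can be combined on a single entry. Here is a minimal sketch, assuming a hypothetical archive URL and subdirectory name::

SRC_URI = "https://example.com/releases/foo-1.0.tar.gz;striplevel=1;subdir=foo-sources"

This would strip the top-level directory of the tarball and unpack the remaining files into ``foo-sources``.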
@@ -229,11 +226,6 @@ downloaded file is useful for avoiding collisions in
:term:`DL_DIR` when dealing with multiple files that
have the same name.
If a username and password are specified in the ``SRC_URI``, a Basic
Authorization header will be added to each request, including across redirects.
To instead limit the Authorization header to the first request, add
"redirectauth=0" to the list of parameters.
Some example URLs are as follows::
SRC_URI = "http://oe.handhelds.org/not_there.aac"
@@ -396,19 +388,6 @@ This fetcher supports the following parameters:
protocol is "file". You can also use "http", "https", "ssh" and
"rsync".
.. note::
When ``protocol`` is "ssh", the URL expected in :term:`SRC_URI` differs
from the one that is typically passed to the ``git clone`` command and provided
by the Git server to fetch from. For example, the URL returned by GitLab
server for ``mesa`` when cloning over SSH is
``git@gitlab.freedesktop.org:mesa/mesa.git``, however the expected URL in
:term:`SRC_URI` is the following::
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
Note the ``:`` character changed to a ``/`` before the path to the project.
- *"nocheckout":* Tells the fetcher to not checkout source code when
unpacking when set to "1". Set this option for the URL where there is
a custom routine to checkout code. The default is "0".
@@ -424,17 +403,17 @@ This fetcher supports the following parameters:
- *"nobranch":* Tells the fetcher to not check the SHA validation for
the branch when set to "1". The default is "0". Set this option for
the recipe that refers to the commit that is valid for any namespace
(branch, tag, ...) instead of the branch.
the recipe that refers to the commit that is valid for a tag instead
of the branch.
- *"bareclone":* Tells the fetcher to clone a bare clone into the
destination directory without checking out a working tree. Only the
raw Git metadata is provided. This parameter implies the "nocheckout"
parameter as well.
- *"branch":* The branch(es) of the Git tree to clone. Unless
"nobranch" is set to "1", this is a mandatory parameter. The number of
branch parameters must match the number of name parameters.
- *"branch":* The branch(es) of the Git tree to clone. If unset, this
is assumed to be "master". The number of branch parameters must match
the number of name parameters.
- *"rev":* The revision to use for the checkout. The default is
"master".
@@ -457,9 +436,8 @@ This fetcher supports the following parameters:
Here are some example URLs::
SRC_URI = "git://github.com/fronteed/icheck.git;protocol=https;branch=${PV};tag=${PV}"
SRC_URI = "git://github.com/asciidoc/asciidoc-py;protocol=https;branch=main"
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
.. note::
@@ -476,14 +454,6 @@ Here are some example URLs::
easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
and ``~/.ssh/config`` files can be used as alternatives.
Using tags with the git fetcher may cause surprising behaviour. BitBake needs to
resolve the tag to a specific revision and, to do that, it has to connect to and use
the upstream repository. This is because the revision the tags point at can change, and
we've seen cases of this happening in well-known public repositories. This can mean
many more network connections than expected and recipes may be reparsed at every build.
Source mirrors will also be bypassed as the upstream repository is the only source
of truth to resolve the revision accurately. For these reasons, whilst the fetcher
can support tags, we recommend being specific about revisions in recipes.
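For instance, rather than pinning to a tag, the commit the tag currently points at can be recorded explicitly (the repository URL below is hypothetical)::

SRC_URI = "git://git.example.com/myproject;protocol=https;branch=main"
SRCREV = "0123456789abcdef0123456789abcdef01234567"

With a fixed :term:`SRCREV`, BitBake has no need to contact the upstream repository to resolve the revision, so mirrors keep working and the reparsing issue described above is avoided.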
.. _gitsm-fetcher:
@@ -696,133 +666,6 @@ Here is an example URL::
It can also be used when setting mirrors definitions using the :term:`PREMIRRORS` variable.
.. _gcp-fetcher:
GCP Fetcher (``gs://``)
--------------------------
This submodule fetches data from a
`Google Cloud Storage Bucket <https://cloud.google.com/storage/docs/buckets>`__.
It uses the `Google Cloud Storage Python Client <https://cloud.google.com/python/docs/reference/storage/latest>`__
to check the status of objects in the bucket and download them.
The use of the Python client makes it substantially faster than using command
line tools such as gsutil.
The fetcher requires the Google Cloud Storage Python Client to be installed, along
with the gsutil tool.
The fetcher requires that the machine has valid credentials for accessing the
chosen bucket. Instructions for authentication can be found in the
`Google Cloud documentation <https://cloud.google.com/docs/authentication/provide-credentials-adc#local-dev>`__.
If it is used from the OpenEmbedded build system, the fetcher can be used for
fetching sstate artifacts from a GCS bucket by specifying the
``SSTATE_MIRRORS`` variable as shown below::
SSTATE_MIRRORS ?= "\
file://.* gs://<bucket name>/PATH \
"
The fetcher can also be used in recipes::
SRC_URI = "gs://<bucket name>/<foo_container>/<bar_file>"
However, the checksum of the file should also be provided::
SRC_URI[sha256sum] = "<sha256 string>"
.. _crate-fetcher:
Crate Fetcher (``crate://``)
----------------------------
This submodule fetches code for
`Rust language "crates" <https://doc.rust-lang.org/reference/glossary.html?highlight=crate#crate>`__
corresponding to Rust libraries and programs to compile. Such crates are typically shared
on https://crates.io/ but this fetcher supports other crate registries too.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "crate://REGISTRY/NAME/VERSION"
Here is an example URL::
SRC_URI = "crate://crates.io/glob/0.2.11"
.. _npm-fetcher:
NPM Fetcher (``npm://``)
------------------------
This submodule fetches source code from an
`NPM <https://en.wikipedia.org/wiki/Npm_(software)>`__
Javascript package registry.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "npm://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
This fetcher supports the following parameters:
- *"package":* The NPM package name. This is a mandatory parameter.
- *"version":* The NPM package version. This is a mandatory parameter.
- *"downloadfilename":* Specifies the filename used when storing the downloaded file.
- *"destsuffix":* Specifies the directory to use to unpack the package (default: ``npm``).
Note that the NPM fetcher only fetches the package source itself. The dependencies
can be fetched through the `npmsw-fetcher`_.
Here is an example URL with both fetchers::
SRC_URI = " \
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
"
See :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/packages.html#creating-node-package-manager-npm-packages>`
in the Yocto Project manual for details about using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
to automatically create a recipe from an NPM URL.
.. _npmsw-fetcher:
NPM shrinkwrap Fetcher (``npmsw://``)
-------------------------------------
This submodule fetches source code from an
`NPM shrinkwrap <https://docs.npmjs.com/cli/v8/commands/npm-shrinkwrap>`__
description file, which lists the dependencies
of an NPM package while locking their versions.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "npmsw://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
This fetcher supports the following parameters:
- *"dev":* Set this parameter to ``1`` to install "devDependencies".
- *"destsuffix":* Specifies the directory to use to unpack the dependencies
(``${S}`` by default).
Note that the shrinkwrap file can also be provided by the recipe for
the package which has such dependencies, for example::
SRC_URI = " \
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
"
Such a file can automatically be generated using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
as described in the :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/packages.html#creating-node-package-manager-npm-packages>`
section of the Yocto Project manual.
Other Fetchers
--------------
@@ -832,9 +675,9 @@ Fetch submodules also exist for the following:
- Mercurial (``hg://``)
- OSC (``osc://``)
- npm (``npm://``)
- S3 (``s3://``)
- OSC (``osc://``)
- Secure FTP (``sftp://``)


@@ -18,32 +18,28 @@ it.
Obtaining BitBake
=================
See the :ref:`bitbake-user-manual/bitbake-user-manual-intro:obtaining bitbake` section for
See the :ref:`bitbake-user-manual/bitbake-user-manual-hello:obtaining bitbake` section for
information on how to obtain BitBake. Once you have the source code on
your machine, the BitBake directory appears as follows::
$ ls -al
total 108
drwxr-xr-x 9 fawkh 10000 4096 feb 24 12:10 .
drwx------ 36 fawkh 10000 4096 mar 2 17:00 ..
-rw-r--r-- 1 fawkh 10000 365 feb 24 12:10 AUTHORS
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 bin
-rw-r--r-- 1 fawkh 10000 16501 feb 24 12:10 ChangeLog
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 classes
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 conf
drwxr-xr-x 5 fawkh 10000 4096 feb 24 12:10 contrib
drwxr-xr-x 6 fawkh 10000 4096 feb 24 12:10 doc
drwxr-xr-x 8 fawkh 10000 4096 mar 2 16:26 .git
-rw-r--r-- 1 fawkh 10000 31 feb 24 12:10 .gitattributes
-rw-r--r-- 1 fawkh 10000 392 feb 24 12:10 .gitignore
drwxr-xr-x 13 fawkh 10000 4096 feb 24 12:11 lib
-rw-r--r-- 1 fawkh 10000 1224 feb 24 12:10 LICENSE
-rw-r--r-- 1 fawkh 10000 15394 feb 24 12:10 LICENSE.GPL-2.0-only
-rw-r--r-- 1 fawkh 10000 1286 feb 24 12:10 LICENSE.MIT
-rw-r--r-- 1 fawkh 10000 229 feb 24 12:10 MANIFEST.in
-rw-r--r-- 1 fawkh 10000 2413 feb 24 12:10 README
-rw-r--r-- 1 fawkh 10000 43 feb 24 12:10 toaster-requirements.txt
-rw-r--r-- 1 fawkh 10000 2887 feb 24 12:10 TODO
total 100
drwxrwxr-x. 9 wmat wmat 4096 Jan 31 13:44 .
drwxrwxr-x. 3 wmat wmat 4096 Feb 4 10:45 ..
-rw-rw-r--. 1 wmat wmat 365 Nov 26 04:55 AUTHORS
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 bin
drwxrwxr-x. 4 wmat wmat 4096 Jan 31 13:44 build
-rw-rw-r--. 1 wmat wmat 16501 Nov 26 04:55 ChangeLog
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 classes
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 conf
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 contrib
-rw-rw-r--. 1 wmat wmat 17987 Nov 26 04:55 COPYING
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 doc
-rw-rw-r--. 1 wmat wmat 69 Nov 26 04:55 .gitignore
-rw-rw-r--. 1 wmat wmat 849 Nov 26 04:55 HEADER
drwxrwxr-x. 5 wmat wmat 4096 Jan 31 13:44 lib
-rw-rw-r--. 1 wmat wmat 195 Nov 26 04:55 MANIFEST.in
-rw-rw-r--. 1 wmat wmat 2887 Nov 26 04:55 TODO
At this point, you should have BitBake cloned to a directory that
matches the previous listing except for dates and user names.
@@ -56,7 +52,7 @@ directory to where your local BitBake files are and run the following
command::
$ ./bin/bitbake --version
BitBake Build Tool Core version 2.3.1
BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0
The console output tells you what version
you are running.
@@ -134,8 +130,23 @@ Following is the complete "Hello World" example.
directory. Run the ``bitbake`` command and see what it does::
$ bitbake
ERROR: The BBPATH variable is not set and bitbake did not find a conf/bblayers.conf file in the expected location.
The BBPATH variable is not set and bitbake did not
find a conf/bblayers.conf file in the expected location.
Maybe you accidentally invoked bitbake from the wrong directory?
DEBUG: Removed the following variables from the environment:
GNOME_DESKTOP_SESSION_ID, XDG_CURRENT_DESKTOP,
GNOME_KEYRING_CONTROL, DISPLAY, SSH_AGENT_PID, LANG, no_proxy,
XDG_SESSION_PATH, XAUTHORITY, SESSION_MANAGER, SHLVL,
MANDATORY_PATH, COMPIZ_CONFIG_PROFILE, WINDOWID, EDITOR,
GPG_AGENT_INFO, SSH_AUTH_SOCK, GDMSESSION, GNOME_KEYRING_PID,
XDG_SEAT_PATH, XDG_CONFIG_DIRS, LESSOPEN, DBUS_SESSION_BUS_ADDRESS,
_, XDG_SESSION_COOKIE, DESKTOP_SESSION, LESSCLOSE, DEFAULTS_PATH,
UBUNTU_MENUPROXY, OLDPWD, XDG_DATA_DIRS, COLORTERM, LS_COLORS
The majority of this output is specific to environment variables that
are not directly relevant to BitBake. However, the very first
message regarding the :term:`BBPATH` variable and the
``conf/bblayers.conf`` file is relevant.
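As a minimal sketch of one way to resolve this (the project path is hypothetical), :term:`BBPATH` can be exported in the external environment before running BitBake again::

$ export BBPATH="${HOME}/hello"
$ bitbake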
When you run BitBake, it begins looking for metadata files. The
:term:`BBPATH` variable is what tells BitBake where
@@ -168,14 +179,20 @@ Following is the complete "Hello World" example.
``bitbake`` command again::
$ bitbake
ERROR: Unable to parse /home/scott-lenovo/bitbake/lib/bb/parse/__init__.py
Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 127, in resolve_file(fn='conf/bitbake.conf', d=<bb.data_smart.DataSmart object at 0x7f22919a3df0>):
if not newfn:
> raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath))
fn = newfn
FileNotFoundError: [Errno 2] file conf/bitbake.conf not found in <projectdirectory>
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 173, in parse_config_file
return bb.parse.handle(fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 99, in handle
return h['handle'](fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 120, in handle
abs_fn = resolve_file(fn, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 117, in resolve_file
raise IOError("file %s not found in %s" % (fn, bbpath))
IOError: file conf/bitbake.conf not found in /home/scott-lenovo/hello
ERROR: Unable to parse conf/bitbake.conf: file conf/bitbake.conf not found in /home/scott-lenovo/hello
This sample output shows that BitBake could not find the
``conf/bitbake.conf`` file in the project directory. This file is
@@ -209,12 +226,12 @@ Following is the complete "Hello World" example.
.. note::
Without a value for :term:`PN`, the variables :term:`STAMP`, :term:`T`, and :term:`B`, prevent more
than one recipe from working. You can fix this by either setting :term:`PN` to
Without a value for PN, the variables STAMP, T, and B, prevent more
than one recipe from working. You can fix this by either setting PN to
have a value similar to what OpenEmbedded and BitBake use in the default
``bitbake.conf`` file (see previous example). Or, by manually updating each
recipe to set :term:`PN`. You will also need to include :term:`PN` as part of the :term:`STAMP`,
:term:`T`, and :term:`B` variable definitions in the ``local.conf`` file.
bitbake.conf file (see previous example). Or, by manually updating each
recipe to set PN. You will also need to include PN as part of the STAMP,
T, and B variable definitions in the local.conf file.
The ``TMPDIR`` variable establishes a directory that BitBake uses
for build output and intermediate files other than the cached
@@ -237,14 +254,18 @@ Following is the complete "Hello World" example.
exists, you can run the ``bitbake`` command again::
$ bitbake
ERROR: Unable to parse /home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py
Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 67, in inherit(files=['base'], fn='configuration INHERITs', lineno=0, d=<bb.data_smart.DataSmart object at 0x7fab6815edf0>):
if not os.path.exists(file):
> raise ParseError("Could not inherit file %s" % (file), fn, lineno)
bb.parse.ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 177, in _inherit
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 92, in inherit
include(fn, file, lineno, d, "inherit")
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 100, in include
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
In the sample output,
BitBake could not find the ``classes/base.bbclass`` file. You need
@@ -263,10 +284,7 @@ Following is the complete "Hello World" example.
$ mkdir classes
Move to the ``classes`` directory and then create the
``base.bbclass`` file by inserting this single line::
addtask build
The minimal task that BitBake runs is the ``do_build`` task. This is
all the example needs in order to build the project. Of course, the
``base.bbclass`` can have much more depending on which build
@@ -310,19 +328,10 @@ Following is the complete "Hello World" example.
BBFILES += "${LAYERDIR}/*.bb"
BBFILE_COLLECTIONS += "mylayer"
BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
LAYERSERIES_CORENAMES = "hello_world_example"
LAYERSERIES_COMPAT_mylayer = "hello_world_example"
For information on these variables, click on :term:`BBFILES`,
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS`, :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
or :term:`LAYERSERIES_COMPAT` to go to the definitions in the glossary.
.. note::
We are setting both ``LAYERSERIES_CORENAMES`` and :term:`LAYERSERIES_COMPAT` in this particular case, because we
are using bitbake without OpenEmbedded.
You should usually just use :term:`LAYERSERIES_COMPAT` to specify the OE-Core versions for which your layer
is compatible, and add the meta-openembedded layer to your project.
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS` or :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
to go to the definitions in the glossary.
You need to create the recipe file next. Inside your layer at the
top-level, use an editor and create a recipe file named
@@ -380,14 +389,12 @@ Following is the complete "Hello World" example.
target::
$ bitbake printhello
Loading cache: 100% |
Loaded 0 entries from dependency cache.
Parsing recipes: 100% |##################################################################################|
Time: 00:00:00
Parsing of 1 .bb files complete (0 cached, 1 parsed). 1 targets, 0 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Initialising tasks: 100% |###############################################################################|
NOTE: No setscene tasks
NOTE: Executing Tasks
NOTE: Preparing RunQueue
NOTE: Executing RunQueue Tasks
********************
* *
* Hello, World! *


@@ -61,9 +61,10 @@ member Chris Larson split the project into two distinct pieces:
Today, BitBake is the primary basis of the
`OpenEmbedded <https://www.openembedded.org/>`__ project, which is being
used to build and maintain Linux distributions such as the `Poky
Reference Distribution <https://www.yoctoproject.org/software-item/poky/>`__,
developed under the umbrella of the `Yocto Project <https://www.yoctoproject.org>`__.
used to build and maintain Linux distributions such as the `Angstrom
Distribution <http://www.angstrom-distribution.org/>`__, and which is
also being used as the build tool for Linux projects such as the `Yocto
Project <https://www.yoctoproject.org>`__.
Prior to BitBake, no other build tool adequately met the needs of an
aspiring embedded Linux distribution. All of the build systems used by
@@ -536,7 +537,7 @@ current working directory:
- ``pn-buildlist``: Shows a simple list of targets that are to be
built.
To stop depending on common depends, use the ``-I`` depend option and
To stop depending on common depends, use the "-I" depend option and
BitBake omits them from the graph. Leaving this information out can
produce more readable graphs. This way, you can remove from the graph
:term:`DEPENDS` from inherited classes such as ``base.bbclass``.

View File

@@ -104,15 +104,15 @@ Line Joining
Outside of :ref:`functions <bitbake-user-manual/bitbake-user-manual-metadata:functions>`,
BitBake joins any line ending in
a backslash character ("\\") with the following line before parsing
statements. The most common use for the "\\" character is to split
a backslash character ("\") with the following line before parsing
statements. The most common use for the "\" character is to split
variable assignments over multiple lines, as in the following example::
FOO = "bar \
baz \
qaz"
Both the "\\" character and the newline
Both the "\" character and the newline
character that follow it are removed when joining lines. Thus, no
newline characters end up in the value of ``FOO``.
@@ -125,7 +125,7 @@ Consider this additional example where the two assignments both assign
.. note::
BitBake does not interpret escape sequences like "\\n" in variable
BitBake does not interpret escape sequences like "\n" in variable
values. For these to have an effect, the value must be passed to some
utility that interprets escape sequences, such as
``printf`` or ``echo -n``.
@@ -159,7 +159,7 @@ behavior::
C = "qux"
*At this point, ${A} equals "qux bar baz"*
B = "norf"
*At this point, ${A} equals "norf baz"*
Contrast this behavior with the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:immediate variable
@@ -195,45 +195,22 @@ value. However, if ``A`` is not set, the variable is set to "aval".
Setting a weak default value (??=)
----------------------------------
The weak default value of a variable is the value which that variable
will expand to if no value has been assigned to it via any of the other
assignment operators. The "??=" operator takes effect immediately, replacing
any previously defined weak default value. Here is an example::
It is possible to use a "weaker" assignment than in the previous section
by using the "??=" operator. This assignment behaves identical to "?="
except that the assignment is made at the end of the parsing process
rather than immediately. Consequently, when multiple "??=" assignments
exist, the last one is used. Also, any "=" or "?=" assignment will
override the value set with "??=". Here is an example::
W ??= "x"
A := "${W}" # Immediate variable expansion
W ??= "y"
B := "${W}" # Immediate variable expansion
W ??= "z"
C = "${W}"
W ?= "i"
A ??= "somevalue"
A ??= "someothervalue"
After parsing we will have::
If ``A`` is set before the above statements are
parsed, the variable retains its value. If ``A`` is not set, the
variable is set to "someothervalue".
A = "x"
B = "y"
C = "i"
W = "i"
Appending and prepending non-override style will not substitute the weak
default value, which means that after parsing::
W ??= "x"
W += "y"
we will have::
W = " y"
On the other hand, override-style appends/prepends/removes are applied after
any active weak default value has been substituted::
W ??= "x"
W:append = "y"
After parsing we will have::
W = "xy"
Again, this assignment is a "lazy" or "weak" assignment because it does
not occur until the end of the parsing process.
Immediate variable expansion (:=)
---------------------------------
@@ -319,10 +296,6 @@ The variable ``D`` becomes "dvaladditional data".
You must control all spacing when you use the override syntax.
.. note::
The overrides are applied in this order, ":append", ":prepend", ":remove".
It is also possible to append and prepend to shell functions and
BitBake-style Python functions. See the ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:shell functions`" and ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:bitbake-style python functions`"
sections for examples.
@@ -334,8 +307,7 @@ Removal (Override Style Syntax)
You can remove values from lists using the removal override style
syntax. Specifying a value for removal causes all occurrences of that
value to be removed from the variable. Unlike ":append" and ":prepend",
there is no need to add a leading or trailing space to the value.
value to be removed from the variable.
When you use this syntax, BitBake expects one or more strings.
Surrounding spaces and spacing are preserved. Here is an example::
@@ -356,28 +328,6 @@ The variable ``FOO`` becomes
Like ":append" and ":prepend", ":remove" is applied at variable
expansion time.
.. note::
The overrides are applied in this order, ":append", ":prepend", ":remove".
This implies it is not possible to re-append previously removed strings.
However, one can undo a ":remove" by using an intermediate variable whose
content is passed to the ":remove" so that modifying the intermediate
variable equals to keeping the string in::
FOOREMOVE = "123 456 789"
FOO:remove = "${FOOREMOVE}"
...
FOOREMOVE = "123 789"
This expands to ``FOO:remove = "123 789"``.
.. note::
Override application order may not match variable parse history, i.e.
the output of ``bitbake -e`` may contain ":remove" before ":append",
but the string will still be removed because ":remove" is handled
last.
Override Style Operation Advantages
-----------------------------------
@@ -448,12 +398,6 @@ documentation to a BitBake variable as follows::
CACHE[doc] = "The directory holding the cache of the metadata."
.. note::
Variable flag names starting with an underscore (``_``) character
are allowed but are ignored by ``d.getVarFlags("VAR")``
in Python code. Such flag names are used internally by BitBake.
Inline Python Variable Expansion
--------------------------------
@@ -566,8 +510,8 @@ variable.
.. note::
Overrides can only use lower-case characters, digits and dashes.
In particular, colons are not permitted in override names as they are used to
Overrides can only use lower-case characters. Additionally,
underscores are not permitted in override names as they are used to
separate overrides from each other and from the variable name.
- *Selecting a Variable:* The :term:`OVERRIDES` variable is a
@@ -579,14 +523,14 @@ variable.
OVERRIDES = "architecture:os:machine"
TEST = "default"
TEST:os = "osspecific"
TEST:nooverride = "othercondvalue"
TEST_os = "osspecific"
TEST_nooverride = "othercondvalue"
In this example, the :term:`OVERRIDES`
variable lists three overrides: "architecture", "os", and "machine".
The variable ``TEST`` by itself has a default value of "default". You
select the os-specific version of the ``TEST`` variable by appending
the "os" override to the variable (i.e. ``TEST:os``).
the "os" override to the variable (i.e. ``TEST_os``).
To better understand this, consider a practical example that assumes
an OpenEmbedded metadata-based Linux kernel recipe file. The
@@ -623,7 +567,7 @@ variable.
- *Setting a Variable for a Single Task:* BitBake supports setting a
variable just for the duration of a single task. Here is an example::
FOO:task-configure = "val 1"
FOO_task-configure = "val 1"
FOO:task-compile = "val 2"
In the
@@ -641,16 +585,6 @@ variable.
EXTRA_OEMAKE:prepend:task-compile = "${PARALLEL_MAKE} "
.. note::
Before BitBake 1.52 (Honister 3.4), the syntax for :term:`OVERRIDES`
used ``_`` instead of ``:``, so you will still find a lot of documentation
using ``_append``, ``_prepend``, and ``_remove``, for example.
For details, see the
:yocto_docs:`Overrides Syntax Changes </migration-guides/migration-3.4.html#override-syntax-changes>`
section in the Yocto Project manual migration notes.
Key Expansion
-------------
@@ -960,7 +894,7 @@ Regardless of the type of function, you can only define them in class
Shell Functions
---------------
Functions written in shell script are executed either directly as
functions, tasks, or both. They can also be called by other shell
functions. Here is an example shell function definition::
@@ -1010,7 +944,7 @@ Running ``do_foo`` prints the following::
Overrides and override-style operators can be applied to any shell
function, not just :ref:`tasks <bitbake-user-manual/bitbake-user-manual-metadata:tasks>`.
You can use the ``bitbake -e recipename`` command to view the final
assembled function after all overrides have been applied.
BitBake-Style Python Functions
@@ -1062,7 +996,7 @@ Running ``do_foo`` prints the following::
recipename do_foo: second
recipename do_foo: third
You can use the ``bitbake -e recipename`` command to view
the final assembled function after all overrides have been applied.
Python Functions
@@ -1409,8 +1343,8 @@ the build machine cannot influence the build.
.. note::
By default, BitBake cleans the environment to include only those
things exported or listed in its passthrough list to ensure that the
build environment is reproducible and consistent. You can prevent this
things exported or listed in its whitelist to ensure that the build
environment is reproducible and consistent. You can prevent this
"cleaning" by setting the :term:`BB_PRESERVE_ENV` variable.
Consequently, if you do want something to get passed into the build task
@@ -1418,14 +1352,14 @@ environment, you must take these two steps:
#. Tell BitBake to load what you want from the environment into the
datastore. You can do so through the
:term:`BB_ENV_PASSTHROUGH` and
:term:`BB_ENV_PASSTHROUGH_ADDITIONS` variables. For
:term:`BB_ENV_WHITELIST` and
:term:`BB_ENV_EXTRAWHITE` variables. For
example, assume you want to prevent the build system from accessing
your ``$HOME/.ccache`` directory. The following command adds the
environment variable ``CCACHE_DIR`` to BitBake's passthrough
list to allow that variable into the datastore::
your ``$HOME/.ccache`` directory. The following command "whitelists"
the environment variable ``CCACHE_DIR`` causing BitBake to allow that
variable into the datastore::
export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS CCACHE_DIR"
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE CCACHE_DIR"
#. Tell BitBake to export what you have loaded into the datastore to the
task environment of every running task. Loading something from the
@@ -1442,7 +1376,7 @@ environment, you must take these two steps:
A side effect of the previous steps is that BitBake records the
variable as a dependency of the build process in things like the
setscene checksums. If doing so results in unnecessary rebuilds of
tasks, you can also flag the variable so that the setscene code
tasks, you can whitelist the variable so that the setscene code
ignores the dependency when it creates checksums.
Sometimes, it is useful to be able to obtain information from the
@@ -1496,35 +1430,12 @@ functionality of the task:
directory listed is used as the current working directory for the
task.
- ``[file-checksums]``: Controls the file dependencies for a task. The
baseline file list is the set of files associated with
:term:`SRC_URI`. May be used to set additional dependencies on
files not associated with :term:`SRC_URI`.
Each value in the list is a file-boolean pair, where the first
element is the file name and the second indicates whether or not the
file physically exists on the filesystem::
do_configure[file-checksums] += "${MY_DIRPATH}/my-file.txt:True"
It is important to record any paths which the task looked at and
which didn't exist. This means that if these do exist at a later
time, the task can be rerun with the new additional files. The
"exists" True or False value after the path allows this to be
handled.
- ``[lockfiles]``: Specifies one or more lockfiles to lock while the
task executes. Only one task may hold a lockfile, and any task that
attempts to lock an already locked file will block until the lock is
released. You can use this variable flag to accomplish mutual
exclusion.
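A minimal sketch, using a made-up lock file path, serializing tasks that touch a shared resource::

do_compile[lockfiles] = "${TMPDIR}/my-shared-resource.lock"

Any other task naming the same lock file will block until this task releases it.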
- ``[network]``: When set to "1", allows a task to access the network. By
default, only the ``do_fetch`` task is granted network access. Recipes
shouldn't access the network outside of ``do_fetch`` as it usually
undermines fetcher source mirroring, image and licence manifests, software
auditing and supply chain security.
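For instance, a hypothetical test task that genuinely needs connectivity could be granted access explicitly::

do_run_tests[network] = "1"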
- ``[noexec]``: When set to "1", marks the task as being empty, with
no execution required. You can use the ``[noexec]`` flag to set up
tasks as dependency placeholders, or to disable tasks defined
@@ -1737,8 +1648,8 @@ user interfaces:
.. _variants-class-extension-mechanism:
Variants --- Class Extension Mechanism
======================================
Variants - Class Extension Mechanism
====================================
BitBake supports multiple incarnations of a recipe file via the
:term:`BBCLASSEXTEND` variable.
@@ -1978,33 +1889,6 @@ looking at the source code of the ``bb`` module, which is in
the commonly used functions ``bb.utils.contains()`` and
``bb.utils.mkdirhier()``, which come with docstrings.
Extending Python Library Code
-----------------------------
If you wish to add your own Python library code (e.g. to provide
functions/classes you can use from Python functions in the metadata)
you can do so from any layer using the ``addpylib`` directive.
This directive is typically added to your layer configuration
(``conf/layer.conf``), although it will be handled in any ``.conf`` file.
Usage is of the form::
addpylib <directory> <namespace>
Where <directory> specifies the directory to add to the library path.
The specified <namespace> is imported automatically, and if the imported
module specifies an attribute named ``BBIMPORTS``, that list of
sub-modules is iterated and imported too.
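A sketch of how this might look in a layer's ``conf/layer.conf`` (the directory and namespace are illustrative)::

addpylib ${LAYERDIR}/lib mylayerlib

Python code in the metadata could then call functions from the ``mylayerlib`` namespace without further setup.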
Testing and Debugging BitBake Python code
-----------------------------------------
The OpenEmbedded build system implements a convenient ``pydevshell`` target which
you can use to access the BitBake datastore and experiment with your own Python
code. See :yocto_docs:`Using a Python Development Shell
</dev-manual/python-development-shell.html#using-a-python-development-shell>` in the Yocto
Project manual for details.
Task Checksums and Setscene
===========================
@@ -2037,6 +1921,12 @@ The following list describes related variables:
Specifies a function BitBake calls that determines whether BitBake
requires a setscene dependency to be met.
- :term:`BB_STAMP_POLICY`: Defines the mode
for comparing timestamps of stamp files.
- :term:`BB_STAMP_WHITELIST`: Lists stamp
files that are looked at when the stamp policy is "whitelist".
- :term:`BB_TASKHASH`: Within an executing task,
this variable holds the hash of the task as returned by the currently
enabled signature generator.

View File

@@ -40,7 +40,8 @@ overview of their function and contents.
Azure Storage Shared Access Signature, when using the
:ref:`Azure Storage fetcher <bitbake-user-manual/bitbake-user-manual-fetching:fetchers>`
This variable can be defined to be used by the fetcher to authenticate
and gain access to non-public artifacts::
AZ_SAS = ""se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>""
@@ -92,33 +93,10 @@ overview of their function and contents.
fetcher does not attempt to use the host listed in :term:`SRC_URI` after
a successful fetch from the :term:`PREMIRRORS` occurs.
:term:`BB_BASEHASH_IGNORE_VARS`
Lists variables that are excluded from checksum and dependency data.
Variables that are excluded can therefore change without affecting
the checksum mechanism. A common example would be the variable for
the path of the build. BitBake's output should not (and usually does
not) depend on the directory in which it was built.
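For instance, an additional site-specific variable could be excluded from the checksums (the variable name here is hypothetical)::

BB_BASEHASH_IGNORE_VARS += "MY_DEPLOY_ROOT"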
:term:`BB_CACHEDIR`
Specifies the code parser cache directory (distinct from :term:`CACHE`
and :term:`PERSISTENT_DIR` although they can be set to the same value
if desired). The default value is "${TOPDIR}/cache".
:term:`BB_CHECK_SSL_CERTS`
Specifies if SSL certificates should be checked when fetching. The default
value is ``1`` and certificates are not checked if the value is set to ``0``.
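For example, checking could be switched off when fetching from an internal mirror with a self-signed certificate (use with care)::

BB_CHECK_SSL_CERTS = "0"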
:term:`BB_HASH_CODEPARSER_VALS`
Specifies values for variables to use when populating the codeparser cache.
This can be used selectively to set dummy values for variables to avoid
the codeparser cache growing on every parse. Variables that would typically
be included are those where the value is not significant for where the
codeparser cache is used (i.e. when calculating variable dependencies for
code fragments.) The value is space-separated without quoting values, for
example::
BB_HASH_CODEPARSER_VALS = "T=/ WORKDIR=/ DATE=1234 TIME=1234"
:term:`BB_CONSOLELOG`
Specifies the path to a log file into which BitBake's user interface
writes output during the build.
@@ -160,7 +138,7 @@ overview of their function and contents.
where:
<action> is:
HALT: Immediately halt the build when
ABORT: Immediately abort the build when
a threshold is broken.
STOPTASKS: Stop the build after the currently
executing tasks have finished when
@@ -191,13 +169,13 @@ overview of their function and contents.
Here are some examples::
BB_DISKMON_DIRS = "HALT,${TMPDIR},1G,100K WARN,${SSTATE_DIR},1G,100K"
BB_DISKMON_DIRS = "ABORT,${TMPDIR},1G,100K WARN,${SSTATE_DIR},1G,100K"
BB_DISKMON_DIRS = "STOPTASKS,${TMPDIR},1G"
BB_DISKMON_DIRS = "HALT,${TMPDIR},,100K"
BB_DISKMON_DIRS = "ABORT,${TMPDIR},,100K"
The first example works only if you also set the
:term:`BB_DISKMON_WARNINTERVAL`
variable. This example causes the build system to immediately halt
variable. This example causes the build system to immediately abort
when either the disk space in ``${TMPDIR}`` drops below 1 Gbyte or
the available free inodes drops below 100 Kbytes. Because two
directories are provided with the variable, the build system also
@@ -211,7 +189,7 @@ overview of their function and contents.
directory drops below 1 Gbyte. No disk monitoring occurs for the free
inodes in this case.
The final example immediately halts the build when the number of
The final example immediately aborts the build when the number of
free inodes in the ``${TMPDIR}`` directory drops below 100 Kbytes. No
disk space monitoring for the directory itself occurs in this case.
@@ -258,23 +236,23 @@ overview of their function and contents.
based on the interval occur each time a respective interval is
reached beyond the initial warning (i.e. 1 Gbytes and 100 Kbytes).
:term:`BB_ENV_PASSTHROUGH`
Specifies the internal list of variables to allow through from
the external environment into BitBake's datastore. If the value of
this variable is not specified (which is the default), the following
list is used: :term:`BBPATH`, :term:`BB_PRESERVE_ENV`,
:term:`BB_ENV_PASSTHROUGH`, and :term:`BB_ENV_PASSTHROUGH_ADDITIONS`.
:term:`BB_ENV_EXTRAWHITE`
Specifies an additional set of variables to allow through (whitelist)
from the external environment into BitBake's datastore. This list of
variables are on top of the internal list set in
:term:`BB_ENV_WHITELIST`.
.. note::
You must set this variable in the external environment in order
for it to work.
:term:`BB_ENV_PASSTHROUGH_ADDITIONS`
Specifies an additional set of variables to allow through from the
external environment into BitBake's datastore. This list of variables
are on top of the internal list set in
:term:`BB_ENV_PASSTHROUGH`.
:term:`BB_ENV_WHITELIST`
Specifies the internal whitelist of variables to allow through from
the external environment into BitBake's datastore. If the value of
this variable is not specified (which is the default), the following
list is used: :term:`BBPATH`, :term:`BB_PRESERVE_ENV`,
:term:`BB_ENV_WHITELIST`, and :term:`BB_ENV_EXTRAWHITE`.
.. note::
@@ -303,69 +281,12 @@ overview of their function and contents.
BB_GENERATE_MIRROR_TARBALLS = "1"
:term:`BB_GENERATE_SHALLOW_TARBALLS`
Setting this variable to "1" when :term:`BB_GIT_SHALLOW` is also set to
"1" causes bitbake to generate shallow mirror tarballs when fetching git
repositories. The number of commits included in the shallow mirror
tarballs is controlled by :term:`BB_GIT_SHALLOW_DEPTH`.
If both :term:`BB_GIT_SHALLOW` and :term:`BB_GENERATE_MIRROR_TARBALLS` are
enabled, bitbake will generate shallow mirror tarballs by default for git
repositories. This separate variable exists so that shallow tarball
generation can be enabled without needing to also enable normal mirror
generation if it is not desired.
For example usage, see :term:`BB_GIT_SHALLOW`.
:term:`BB_GIT_SHALLOW`
Setting this variable to "1" enables the support for fetching, using and
generating mirror tarballs of `shallow git repositories <https://riptutorial.com/git/example/4584/shallow-clone>`_.
The external `git-make-shallow <https://git.openembedded.org/bitbake/tree/bin/git-make-shallow>`_
script is used for shallow mirror tarball creation.
When :term:`BB_GIT_SHALLOW` is enabled, bitbake will attempt to fetch a shallow
mirror tarball. If the shallow mirror tarball cannot be fetched, it will
try to fetch the full mirror tarball and use that.
When a mirror tarball is not available, a full git clone will be performed
regardless of whether this variable is set or not. Support for shallow
clones is not currently implemented as git does not directly support
shallow cloning a particular git commit hash (it only supports cloning
from a tag or branch reference).
See also :term:`BB_GIT_SHALLOW_DEPTH` and
:term:`BB_GENERATE_SHALLOW_TARBALLS`.
Example usage::
BB_GIT_SHALLOW ?= "1"
# Keep only the top commit
BB_GIT_SHALLOW_DEPTH ?= "1"
# This defaults to enabled if both BB_GIT_SHALLOW and
# BB_GENERATE_MIRROR_TARBALLS are enabled
BB_GENERATE_SHALLOW_TARBALLS ?= "1"
:term:`BB_GIT_SHALLOW_DEPTH`
When used with :term:`BB_GENERATE_SHALLOW_TARBALLS`, this variable sets
the number of commits to include in generated shallow mirror tarballs.
With a depth of 1, only the commit referenced in :term:`SRCREV` is
included in the shallow mirror tarball. Increasing the depth includes
additional parent commits, working back through the commit history.
If this variable is unset, bitbake will default to a depth of 1 when
generating shallow mirror tarballs.
For example usage, see :term:`BB_GIT_SHALLOW`.
:term:`BB_GLOBAL_PYMODULES`
Specifies the list of Python modules to place in the global namespace.
It is intended that only the core layer should set this and it is meant
to be a very small list, typically just ``os`` and ``sys``.
:term:`BB_GLOBAL_PYMODULES` is expected to be set before the first
``addpylib`` directive.
See also ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:extending python library code`".
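A minimal sketch of what a core layer might set::

BB_GLOBAL_PYMODULES = "os sys"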
:term:`BB_HASHBASE_WHITELIST`
Lists variables that are excluded from checksum and dependency data.
Variables that are excluded can therefore change without affecting
the checksum mechanism. A common example would be the variable for
the path of the build. BitBake's output should not (and usually does
not) depend on the directory in which it was built.
:term:`BB_HASHCHECK_FUNCTION`
Specifies the name of the function to call during the "setscene" part
@@ -381,7 +302,7 @@ overview of their function and contents.
However, the more accurate the data returned, the more efficient the
build will be.
:term:`BB_HASHCONFIG_IGNORE_VARS`
:term:`BB_HASHCONFIG_WHITELIST`
Lists variables that are excluded from base configuration checksum,
which is used to determine if the cache can be reused.
@@ -397,35 +318,12 @@ overview of their function and contents.
Specifies the Hash Equivalence server to use.
If set to ``auto``, BitBake automatically starts its own server
over a UNIX domain socket. An option is to connect this server
to an upstream one, by setting :term:`BB_HASHSERVE_UPSTREAM`.
over a UNIX domain socket.
If set to ``unix://path``, BitBake will connect to an existing
hash server available over a UNIX domain socket.
If set to ``host:port``, BitBake will connect to a remote server on the
If set to ``host:port``, BitBake will use a remote server on the
specified host. This allows multiple clients to share the same
hash equivalence data.
The remote server can be started manually through
the ``bin/bitbake-hashserv`` script provided by BitBake,
which supports UNIX domain sockets too. This script also allows you
to start the server in read-only mode, to avoid accepting
equivalences that correspond to Shared State caches that are
only available on specific clients.
:term:`BB_HASHSERVE_UPSTREAM`
Specifies an upstream Hash Equivalence server.
This optional setting is only useful when a local Hash Equivalence
server is started (setting :term:`BB_HASHSERVE` to ``auto``),
and you wish the local server to query an upstream server for
Hash Equivalence data.
Example usage::
BB_HASHSERVE_UPSTREAM = "hashserv.yocto.io:8687"
:term:`BB_INVALIDCONF`
Used in combination with the ``ConfigParsed`` event to trigger
re-parsing the base metadata (i.e. all the recipes). The
@@ -449,19 +347,6 @@ overview of their function and contents.
If you want to force log files to take a specific name, you can set this
variable in a configuration file.
:term:`BB_MULTI_PROVIDER_ALLOWED`
Allows you to suppress BitBake warnings caused when building two
separate recipes that provide the same output.
BitBake normally issues a warning when building two different recipes
where each provides the same output. This scenario is usually
something the user does not want. However, cases do exist where it
makes sense, particularly in the ``virtual/*`` namespace. You can use
this variable to suppress BitBake's warnings.
To use the variable, list provider names (e.g. recipe names,
``virtual/kernel``, and so forth).
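For example, assuming several recipes intentionally provide the same virtual target, the warning could be suppressed with::

BB_MULTI_PROVIDER_ALLOWED += "virtual/kernel"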
:term:`BB_NICE_LEVEL`
Allows BitBake to run at a specific priority (i.e. nice level).
System permissions usually mean that BitBake can reduce its priority
@@ -488,9 +373,8 @@ overview of their function and contents.
:term:`BB_ORIGENV`
Contains a copy of the original external environment in which BitBake
was run. The copy is taken before any variable values configured to
pass through from the external environment are filtered into BitBake's
datastore.
was run. The copy is taken before any whitelisted variable values are
filtered into BitBake's datastore.
.. note::
@@ -498,72 +382,21 @@ overview of their function and contents.
queried using the normal datastore operations.
:term:`BB_PRESERVE_ENV`
Disables environment filtering and instead allows all variables through
from the external environment into BitBake's datastore.
Disables whitelisting and instead allows all variables through from
the external environment into BitBake's datastore.
.. note::
You must set this variable in the external environment in order
for it to work.
:term:`BB_PRESSURE_MAX_CPU`
Specifies a maximum CPU pressure threshold, above which BitBake's
scheduler will not start new tasks (providing there is at least
one active task). If no value is set, CPU pressure is not
monitored when starting tasks.
The pressure data is calculated based upon what Linux kernels since
version 4.20 expose under ``/proc/pressure``. The threshold represents
the difference in "total" pressure from the previous second. The
minimum value is 1.0 (extremely slow builds) and the maximum is
1000000 (a pressure value unlikely to ever be reached).
This threshold can be set in ``conf/local.conf`` as::
BB_PRESSURE_MAX_CPU = "500"
:term:`BB_PRESSURE_MAX_IO`
Specifies a maximum I/O pressure threshold, above which BitBake's
scheduler will not start new tasks (providing there is at least
one active task). If no value is set, I/O pressure is not
monitored when starting tasks.
The pressure data is calculated based upon what Linux kernels since
version 4.20 expose under ``/proc/pressure``. The threshold represents
the difference in "total" pressure from the previous second. The
minimum value is 1.0 (extremely slow builds) and the maximum is
1000000 (a pressure value unlikely to ever be reached).
At this point in time, experiments show that IO pressure tends to
be short-lived and regulating just the CPU with
:term:`BB_PRESSURE_MAX_CPU` can help to reduce it.
:term:`BB_PRESSURE_MAX_MEMORY`
Specifies a maximum memory pressure threshold, above which BitBake's
scheduler will not start new tasks (providing there is at least
one active task). If no value is set, memory pressure is not
monitored when starting tasks.
The pressure data is calculated based upon what Linux kernels since
version 4.20 expose under ``/proc/pressure``. The threshold represents
the difference in "total" pressure from the previous second. The
minimum value is 1.0 (extremely slow builds) and the maximum is
1000000 (a pressure value unlikely to ever be reached).
Memory pressure is experienced when time is spent swapping,
refaulting pages from the page cache or performing direct reclaim.
This is why memory pressure is rarely seen, but setting this variable
might be useful as a last resort to prevent OOM errors if they are
occurring during builds.
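As with :term:`BB_PRESSURE_MAX_CPU`, these thresholds can be sketched in ``conf/local.conf`` (the values are arbitrary starting points, not recommendations)::

BB_PRESSURE_MAX_IO = "500"
BB_PRESSURE_MAX_MEMORY = "500"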
:term:`BB_RUNFMT`
Specifies the name of the executable script files (i.e. run files)
saved into ``${``\ :term:`T`\ ``}``. By default, the
:term:`BB_RUNFMT` variable is undefined and the run filenames get
created using the following form::
run.{func}.{pid}
run.{task}.{pid}
If you want to force run files to take a specific name, you can set this
variable in a configuration file.
@@ -577,14 +410,14 @@ overview of their function and contents.
Selects the name of the scheduler to use for the scheduling of
BitBake tasks. Three options exist:
- *basic* --- the basic framework from which everything derives. Using
this option causes tasks to be ordered numerically as they are
parsed.
- *speed* --- executes tasks first that have more tasks depending on
them. The "speed" option is the default.
- *completion* --- causes the scheduler to try to complete a given
recipe once its build has started.
:term:`BB_SCHEDULERS`
@@ -631,12 +464,35 @@ overview of their function and contents.
The variable can be set using one of two policies:
- *cache* --- retains the value the system obtained previously rather
than querying the source control system each time.
- *clear* --- queries the source control system every time. With this
policy, there is no cache. The "clear" policy is the default.
:term:`BB_STAMP_POLICY`
Defines the mode used for how timestamps of stamp files are compared.
You can set the variable to one of the following modes:
- *perfile* - Timestamp comparisons are only made between timestamps
of a specific recipe. This is the default mode.
- *full* - Timestamp comparisons are made for all dependencies.
- *whitelist* - Identical to "full" mode except timestamp
comparisons are made for recipes listed in the
:term:`BB_STAMP_WHITELIST` variable.
.. note::
Stamp policies are largely obsolete with the introduction of
setscene tasks.
:term:`BB_STAMP_WHITELIST`
Lists files whose stamp file timestamps are compared when the stamp
policy mode is set to "whitelist". For information on stamp policies,
see the :term:`BB_STAMP_POLICY` variable.
:term:`BB_STRICT_CHECKSUM`
Sets a more strict checksum mechanism for non-local URLs. Setting
this variable to a value causes BitBake to report an error if it
@@ -760,7 +616,7 @@ overview of their function and contents.
This variable is useful in situations where the same recipe appears
in more than one layer. Setting this variable allows you to
prioritize a layer against other layers that contain the same recipe
--- effectively letting you control the precedence for the multiple
layers. The precedence established through this variable stands
regardless of a recipe's version (:term:`PV` variable).
For example, a layer that has a recipe with a higher :term:`PV` value but
@@ -822,7 +678,7 @@ overview of their function and contents.
"
This next example shows an error message that occurs because invalid
entries are found, which cause parsing to fail::
entries are found, which cause parsing to abort::
ERROR: BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:
/work/my-layer/bbappends/meta-security-isafw/*/*/*.bbappend
@@ -920,9 +776,9 @@ overview of their function and contents.
section.
:term:`BBPATH`
A colon-separated list used by BitBake to locate class (``.bbclass``)
and configuration (``.conf``) files. This variable is analogous to the
``PATH`` variable.
Used by BitBake to locate class (``.bbclass``) and configuration
(``.conf``) files. This variable is analogous to the ``PATH``
variable.
If you run BitBake from a directory outside of the build directory,
you must be sure to set :term:`BBPATH` to point to the build directory.
@@ -1014,7 +870,7 @@ overview of their function and contents.
``bblayers.conf`` configuration file.
To exclude a recipe from a world build using this variable, set the
variable to "1" in the recipe. Set it to "0" to add it back to world build.
variable to "1" in the recipe.
.. note::
@@ -1072,11 +928,6 @@ overview of their function and contents.
environment variable. The value is a colon-separated list of
directories that are searched left-to-right in order.
:term:`FILE_LAYERNAME`
During parsing and task execution, this is set to the name of the
layer containing the recipe file. Code can use this to identify which
layer a recipe is from.
:term:`GITDIR`
The directory in which a local copy of a Git repository is stored
when it is cloned.
@@ -1125,29 +976,6 @@ overview of their function and contents.
variable is not available outside of ``layer.conf`` and references
are expanded immediately when parsing of the file completes.
:term:`LAYERSERIES_COMPAT`
Lists the versions of the OpenEmbedded-Core (OE-Core) for which
a layer is compatible. Using the :term:`LAYERSERIES_COMPAT` variable
allows the layer maintainer to indicate which combinations of the
layer and OE-Core can be expected to work. The variable gives the
system a way to detect when a layer has not been tested with new
releases of OE-Core (e.g. the layer is not maintained).
To specify the OE-Core versions for which a layer is compatible, use
this variable in your layer's ``conf/layer.conf`` configuration file.
For the list, use the Yocto Project release name (e.g. "kirkstone",
"mickledore"). To specify multiple OE-Core versions for the layer, use
a space-separated list::
LAYERSERIES_COMPAT_layer_root_name = "kirkstone mickledore"
.. note::
Setting :term:`LAYERSERIES_COMPAT` is required by the Yocto Project
Compatible version 2 standard.
The OpenEmbedded build system produces a warning if the variable
is not set for any given layer.
:term:`LAYERVERSION`
Optionally specifies the version of a layer as a single number. You
can use this variable within
@@ -1169,9 +997,22 @@ overview of their function and contents.
upstream source, and then locations specified by :term:`MIRRORS` in that
order.
:term:`MULTI_PROVIDER_WHITELIST`
Allows you to suppress BitBake warnings caused when building two
separate recipes that provide the same output.
BitBake normally issues a warning when building two different recipes
where each provides the same output. This scenario is usually
something the user does not want. However, cases do exist where it
makes sense, particularly in the ``virtual/*`` namespace. You can use
this variable to suppress BitBake's warnings.
To use the variable, list provider names (e.g. recipe names,
``virtual/kernel``, and so forth).
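For example, a hypothetical sketch suppressing the warning for two
recipes that both provide ``virtual/kernel``::

   MULTI_PROVIDER_WHITELIST += "virtual/kernel"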
:term:`OVERRIDES`
A colon-separated list that BitBake uses to control what variables are
overridden after BitBake parses recipes and configuration files.
BitBake uses :term:`OVERRIDES` to control what variables are overridden
after BitBake parses recipes and configuration files.
Following is a simple example that uses an overrides list based on
machine architectures: OVERRIDES = "arm:x86:mips:powerpc". You can
@@ -1282,10 +1123,10 @@ overview of their function and contents.
your configuration::
PREMIRRORS:prepend = "\
git://.*/.* http://downloads.yoctoproject.org/mirror/sources/ \
ftp://.*/.* http://downloads.yoctoproject.org/mirror/sources/ \
http://.*/.* http://downloads.yoctoproject.org/mirror/sources/ \
https://.*/.* http://downloads.yoctoproject.org/mirror/sources/"
git://.*/.* http://www.yoctoproject.org/sources/ \n \
ftp://.*/.* http://www.yoctoproject.org/sources/ \n \
http://.*/.* http://www.yoctoproject.org/sources/ \n \
https://.*/.* http://www.yoctoproject.org/sources/ \n"
These changes cause the build system to intercept Git, FTP, HTTP, and
HTTPS requests and direct them to the ``http://`` sources mirror. You can
@@ -1433,106 +1274,70 @@ overview of their function and contents.
The section in which packages should be categorized.
:term:`SRC_URI`
The list of source files --- local or remote. This variable tells
The list of source files - local or remote. This variable tells
BitBake which bits to pull for the build and how to pull them. For
example, if the recipe or append file needs to fetch a single tarball
from the Internet, the recipe or append file uses a :term:`SRC_URI`
entry that specifies that tarball. On the other hand, if the recipe or
append file needs to fetch a tarball, apply two patches, and include
a custom file, the recipe or append file needs an :term:`SRC_URI`
variable that specifies all those sources.
from the Internet, the recipe or append file uses a :term:`SRC_URI` entry
that specifies that tarball. On the other hand, if the recipe or
append file needs to fetch a tarball and include a custom file, the
recipe or append file needs an :term:`SRC_URI` variable that specifies
all those sources.
The following list explains the available URI protocols. URI
protocols are highly dependent on particular BitBake Fetcher
submodules. Depending on the fetcher BitBake uses, various URL
parameters are employed. For specifics on the supported Fetchers, see
the :ref:`bitbake-user-manual/bitbake-user-manual-fetching:fetchers`
section.
The following list explains the available URI protocols:
- ``az://``: Fetches files from an Azure Storage account using HTTPS.
- ``file://`` : Fetches files, which are usually files shipped
with the metadata, from the local machine. The path is relative to
the :term:`FILESPATH` variable.
- ``bzr://``: Fetches files from a Bazaar revision control
- ``bzr://`` : Fetches files from a Bazaar revision control
repository.
- ``ccrc://``: Fetches files from a ClearCase repository.
- ``cvs://``: Fetches files from a CVS revision control
- ``git://`` : Fetches files from a Git revision control
repository.
- ``file://``: Fetches files, which are usually files shipped
with the Metadata, from the local machine.
The path is relative to the :term:`FILESPATH`
variable. Thus, the build system searches, in order, from the
following directories, which are assumed to be subdirectories of
the directory in which the recipe file (``.bb``) or append file
(``.bbappend``) resides:
- ``${BPN}``: the base recipe name without any special suffix
or version numbers.
- ``${BP}`` - ``${BPN}-${PV}``: the base recipe name and
version but without any special package name suffix.
- ``files``: files within a directory, which is named ``files``
and is also alongside the recipe or append file.
- ``ftp://``: Fetches files from the Internet using FTP.
- ``git://``: Fetches files from a Git revision control
repository.
- ``gitsm://``: Fetches submodules from a Git revision control
repository.
- ``hg://``: Fetches files from a Mercurial (``hg``) revision
control repository.
- ``http://``: Fetches files from the Internet using HTTP.
- ``https://``: Fetches files from the Internet using HTTPS.
- ``npm://``: Fetches JavaScript modules from a registry.
- ``osc://``: Fetches files from an OSC (OpenSUSE Build service)
- ``osc://`` : Fetches files from an OSC (OpenSUSE Build service)
revision control repository.
- ``p4://``: Fetches files from a Perforce (``p4``) revision
- ``repo://`` : Fetches files from a repo (Git) repository.
- ``http://`` : Fetches files from the Internet using HTTP.
- ``https://`` : Fetches files from the Internet using HTTPS.
- ``ftp://`` : Fetches files from the Internet using FTP.
- ``cvs://`` : Fetches files from a CVS revision control
repository.
- ``hg://`` : Fetches files from a Mercurial (``hg``) revision
control repository.
- ``repo://``: Fetches files from a repo (Git) repository.
- ``ssh://``: Fetches files from a secure shell.
- ``svn://``: Fetches files from a Subversion (``svn``) revision
- ``p4://`` : Fetches files from a Perforce (``p4``) revision
control repository.
- ``ssh://`` : Fetches files from a secure shell.
- ``svn://`` : Fetches files from a Subversion (``svn``) revision
control repository.
- ``az://`` : Fetches files from an Azure Storage account using HTTPS.
Here are some additional options worth mentioning:
- ``downloadfilename``: Specifies the filename used when storing
the downloaded file.
- ``unpack`` : Controls whether or not to unpack the file if it is
an archive. The default action is to unpack the file.
- ``name``: Specifies a name to be used for association with
:term:`SRC_URI` checksums or :term:`SRCREV` when you have more than one
file or git repository specified in :term:`SRC_URI`. For example::
SRC_URI = "git://example.com/foo.git;branch=main;name=first \
git://example.com/bar.git;branch=main;name=second \
http://example.com/file.tar.gz;name=third"
SRCREV_first = "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15"
SRCREV_second = "e242ed3bffccdf271b7fbaf34ed72d089537b42f"
SRC_URI[third.sha256sum] = "13550350a8681c84c861aac2e5b440161c2b33a3e4f302ac680ca5b686de48de"
- ``subdir``: Places the file (or extracts its contents) into the
- ``subdir`` : Places the file (or extracts its contents) into the
specified subdirectory. This option is useful for unusual tarballs
or other archives that do not have their files already in a
subdirectory within the archive.
- ``subpath``: Limits the checkout to a specific subpath of the
tree when the Git fetcher is used.
- ``name`` : Specifies a name to be used for association with
:term:`SRC_URI` checksums when you have more than one file specified
in :term:`SRC_URI`.
- ``unpack``: Controls whether or not to unpack the file if it is
an archive. The default action is to unpack the file.
- ``downloadfilename`` : Specifies the filename used when storing
the downloaded file.
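As an illustration, a hypothetical :term:`SRC_URI` combining several of
these options (all URLs are placeholders)::

   SRC_URI = "http://example.com/download.php?id=4;downloadfilename=foo-1.0.tar.gz \
              http://example.com/tools.tar.gz;subdir=${BPN} \
              file://local.patch"

Here ``downloadfilename`` gives the query-style URL a sensible local
filename, and ``subdir`` unpacks the second archive into ``${BPN}``.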
:term:`SRCDATE`
The date of the source code used to build the package. This variable


@@ -1,76 +1,32 @@
.. SPDX-License-Identifier: CC-BY-2.5
=================================
BitBake Supported Release Manuals
=================================
*******************************
Release Series 4.2 (mickledore)
*******************************
- :yocto_docs:`BitBake 2.4 User Manual </bitbake/2.4/>`
******************************
Release Series 4.0 (kirkstone)
******************************
- :yocto_docs:`BitBake 2.0 User Manual </bitbake/2.0/>`
=========================
Current Release Manuals
=========================
****************************
Release Series 3.1 (dunfell)
3.1 'dunfell' Release Series
****************************
- :yocto_docs:`BitBake 1.46 User Manual </bitbake/1.46/>`
================================
BitBake Outdated Release Manuals
================================
*****************************
Release Series 4.1 (langdale)
*****************************
- :yocto_docs:`BitBake 2.2 User Manual </bitbake/2.2/>`
******************************
Release Series 3.4 (honister)
******************************
- :yocto_docs:`BitBake 1.52 User Manual </bitbake/1.52/>`
******************************
Release Series 3.3 (hardknott)
******************************
- :yocto_docs:`BitBake 1.50 User Manual </bitbake/1.50/>`
*******************************
Release Series 3.2 (gatesgarth)
*******************************
- :yocto_docs:`BitBake 1.48 User Manual </bitbake/1.48/>`
*******************************************
Release Series 3.1 (dunfell first versions)
*******************************************
- :yocto_docs:`3.1 BitBake User Manual </3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.1 BitBake User Manual </3.1.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.2 BitBake User Manual </3.1.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.3 BitBake User Manual </3.1.3/bitbake-user-manual/bitbake-user-manual.html>`
==========================
Previous Release Manuals
==========================
*************************
Release Series 3.0 (zeus)
3.0 'zeus' Release Series
*************************
- :yocto_docs:`3.0 BitBake User Manual </3.0/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.1 BitBake User Manual </3.0.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.2 BitBake User Manual </3.0.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.3 BitBake User Manual </3.0.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.4 BitBake User Manual </3.0.4/bitbake-user-manual/bitbake-user-manual.html>`
****************************
Release Series 2.7 (warrior)
2.7 'warrior' Release Series
****************************
- :yocto_docs:`2.7 BitBake User Manual </2.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -80,7 +36,7 @@ Release Series 2.7 (warrior)
- :yocto_docs:`2.7.4 BitBake User Manual </2.7.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.6 (thud)
2.6 'thud' Release Series
*************************
- :yocto_docs:`2.6 BitBake User Manual </2.6/bitbake-user-manual/bitbake-user-manual.html>`
@@ -90,16 +46,16 @@ Release Series 2.6 (thud)
- :yocto_docs:`2.6.4 BitBake User Manual </2.6.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.5 (sumo)
2.5 'sumo' Release Series
*************************
- :yocto_docs:`2.5 Documentation </2.5>`
- :yocto_docs:`2.5.1 Documentation </2.5.1>`
- :yocto_docs:`2.5.2 Documentation </2.5.2>`
- :yocto_docs:`2.5.3 Documentation </2.5.3>`
- :yocto_docs:`2.5 BitBake User Manual </2.5/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.1 BitBake User Manual </2.5.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.2 BitBake User Manual </2.5.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.3 BitBake User Manual </2.5.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 2.4 (rocko)
2.4 'rocko' Release Series
**************************
- :yocto_docs:`2.4 BitBake User Manual </2.4/bitbake-user-manual/bitbake-user-manual.html>`
@@ -109,7 +65,7 @@ Release Series 2.4 (rocko)
- :yocto_docs:`2.4.4 BitBake User Manual </2.4.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.3 (pyro)
2.3 'pyro' Release Series
*************************
- :yocto_docs:`2.3 BitBake User Manual </2.3/bitbake-user-manual/bitbake-user-manual.html>`
@@ -119,7 +75,7 @@ Release Series 2.3 (pyro)
- :yocto_docs:`2.3.4 BitBake User Manual </2.3.4/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 2.2 (morty)
2.2 'morty' Release Series
**************************
- :yocto_docs:`2.2 BitBake User Manual </2.2/bitbake-user-manual/bitbake-user-manual.html>`
@@ -128,7 +84,7 @@ Release Series 2.2 (morty)
- :yocto_docs:`2.2.3 BitBake User Manual </2.2.3/bitbake-user-manual/bitbake-user-manual.html>`
****************************
Release Series 2.1 (krogoth)
2.1 'krogoth' Release Series
****************************
- :yocto_docs:`2.1 BitBake User Manual </2.1/bitbake-user-manual/bitbake-user-manual.html>`
@@ -137,7 +93,7 @@ Release Series 2.1 (krogoth)
- :yocto_docs:`2.1.3 BitBake User Manual </2.1.3/bitbake-user-manual/bitbake-user-manual.html>`
***************************
Release Series 2.0 (jethro)
2.0 'jethro' Release Series
***************************
- :yocto_docs:`1.9 BitBake User Manual </1.9/bitbake-user-manual/bitbake-user-manual.html>`
@@ -147,7 +103,7 @@ Release Series 2.0 (jethro)
- :yocto_docs:`2.0.3 BitBake User Manual </2.0.3/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 1.8 (fido)
1.8 'fido' Release Series
*************************
- :yocto_docs:`1.8 BitBake User Manual </1.8/bitbake-user-manual/bitbake-user-manual.html>`
@@ -155,7 +111,7 @@ Release Series 1.8 (fido)
- :yocto_docs:`1.8.2 BitBake User Manual </1.8.2/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 1.7 (dizzy)
1.7 'dizzy' Release Series
**************************
- :yocto_docs:`1.7 BitBake User Manual </1.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -164,7 +120,7 @@ Release Series 1.7 (dizzy)
- :yocto_docs:`1.7.3 BitBake User Manual </1.7.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 1.6 (daisy)
1.6 'daisy' Release Series
**************************
- :yocto_docs:`1.6 BitBake User Manual </1.6/bitbake-user-manual/bitbake-user-manual.html>`


@@ -3,8 +3,6 @@
#
# Copyright (C) 2006 Tim Ansell
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Please Note:
# Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.


@@ -9,11 +9,11 @@
# SPDX-License-Identifier: GPL-2.0-only
#
__version__ = "2.6.0"
__version__ = "1.52.0"
import sys
if sys.version_info < (3, 8, 0):
raise RuntimeError("Sorry, python 3.8.0 or later is required for this version of bitbake")
if sys.version_info < (3, 6, 0):
raise RuntimeError("Sorry, python 3.6.0 or later is required for this version of bitbake")
class BBHandledException(Exception):
@@ -60,10 +60,6 @@ class BBLoggerMixin(object):
return
if loglevel < bb.msg.loggerDefaultLogLevel:
return
if not isinstance(level, int) or not isinstance(msg, str):
mainlogger.warning("Invalid arguments in bbdebug: %s" % repr((level, msg,) + args))
return self.log(loglevel, msg, *args, **kwargs)
def plain(self, msg, *args, **kwargs):
@@ -75,13 +71,6 @@ class BBLoggerMixin(object):
def verbnote(self, msg, *args, **kwargs):
return self.log(logging.INFO + 2, msg, *args, **kwargs)
def warnonce(self, msg, *args, **kwargs):
return self.log(logging.WARNING - 1, msg, *args, **kwargs)
def erroronce(self, msg, *args, **kwargs):
return self.log(logging.ERROR - 1, msg, *args, **kwargs)
Logger = logging.getLoggerClass()
class BBLogger(Logger, BBLoggerMixin):
def __init__(self, name, *args, **kwargs):
@@ -168,15 +157,9 @@ def verbnote(*args):
def warn(*args):
mainlogger.warning(''.join(args))
def warnonce(*args):
mainlogger.warnonce(''.join(args))
def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)
def erroronce(*args):
mainlogger.erroronce(''.join(args))
def fatal(*args, **kwargs):
mainlogger.critical(''.join(args), extra=kwargs)
raise BBHandledException()
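# Editorial sketch, not part of the diff: typical use of the module-level
# logging helpers above (assumes BitBake's lib/ directory is on sys.path;
# warnonce() exists only on the newer side of this diff).
import bb

bb.warn("suspicious value for FOO")   # emitted every time
bb.warnonce("FOO is deprecated")      # emitted once per unique message
try:
    bb.fatal("cannot continue")       # logs critical, then raises BBHandledException
except bb.BBHandledException:
    pass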


@@ -1,215 +0,0 @@
#! /usr/bin/env python3
#
# Copyright 2023 by Garmin Ltd. or its subsidiaries
#
# SPDX-License-Identifier: MIT
import sys
import ctypes
import os
import errno
import pwd
import grp
libacl = ctypes.CDLL("libacl.so.1", use_errno=True)
ACL_TYPE_ACCESS = 0x8000
ACL_TYPE_DEFAULT = 0x4000
ACL_FIRST_ENTRY = 0
ACL_NEXT_ENTRY = 1
ACL_UNDEFINED_TAG = 0x00
ACL_USER_OBJ = 0x01
ACL_USER = 0x02
ACL_GROUP_OBJ = 0x04
ACL_GROUP = 0x08
ACL_MASK = 0x10
ACL_OTHER = 0x20
ACL_READ = 0x04
ACL_WRITE = 0x02
ACL_EXECUTE = 0x01
acl_t = ctypes.c_void_p
acl_entry_t = ctypes.c_void_p
acl_permset_t = ctypes.c_void_p
acl_perm_t = ctypes.c_uint
acl_tag_t = ctypes.c_int
libacl.acl_free.argtypes = [acl_t]
def acl_free(acl):
libacl.acl_free(acl)
libacl.acl_get_file.restype = acl_t
libacl.acl_get_file.argtypes = [ctypes.c_char_p, ctypes.c_uint]
def acl_get_file(path, typ):
acl = libacl.acl_get_file(os.fsencode(path), typ)
if acl is None:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err), str(path))
return acl
libacl.acl_get_entry.argtypes = [acl_t, ctypes.c_int, ctypes.c_void_p]
def acl_get_entry(acl, entry_id):
entry = acl_entry_t()
ret = libacl.acl_get_entry(acl, entry_id, ctypes.byref(entry))
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
if ret == 0:
return None
return entry
libacl.acl_get_tag_type.argtypes = [acl_entry_t, ctypes.c_void_p]
def acl_get_tag_type(entry_d):
tag = acl_tag_t()
ret = libacl.acl_get_tag_type(entry_d, ctypes.byref(tag))
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return tag.value
libacl.acl_get_qualifier.restype = ctypes.c_void_p
libacl.acl_get_qualifier.argtypes = [acl_entry_t]
def acl_get_qualifier(entry_d):
ret = libacl.acl_get_qualifier(entry_d)
if ret is None:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return ctypes.c_void_p(ret)
libacl.acl_get_permset.argtypes = [acl_entry_t, ctypes.c_void_p]
def acl_get_permset(entry_d):
permset = acl_permset_t()
ret = libacl.acl_get_permset(entry_d, ctypes.byref(permset))
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return permset
libacl.acl_get_perm.argtypes = [acl_permset_t, acl_perm_t]
def acl_get_perm(permset_d, perm):
ret = libacl.acl_get_perm(permset_d, perm)
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return bool(ret)
class Entry(object):
def __init__(self, tag, qualifier, mode):
self.tag = tag
self.qualifier = qualifier
self.mode = mode
def __str__(self):
typ = ""
qual = ""
if self.tag == ACL_USER:
typ = "user"
qual = pwd.getpwuid(self.qualifier).pw_name
elif self.tag == ACL_GROUP:
typ = "group"
qual = grp.getgrgid(self.qualifier).gr_name
elif self.tag == ACL_USER_OBJ:
typ = "user"
elif self.tag == ACL_GROUP_OBJ:
typ = "group"
elif self.tag == ACL_MASK:
typ = "mask"
elif self.tag == ACL_OTHER:
typ = "other"
r = "r" if self.mode & ACL_READ else "-"
w = "w" if self.mode & ACL_WRITE else "-"
x = "x" if self.mode & ACL_EXECUTE else "-"
return f"{typ}:{qual}:{r}{w}{x}"
class ACL(object):
def __init__(self, acl):
self.acl = acl
def __del__(self):
acl_free(self.acl)
def entries(self):
entry_id = ACL_FIRST_ENTRY
while True:
entry = acl_get_entry(self.acl, entry_id)
if entry is None:
break
permset = acl_get_permset(entry)
mode = 0
for m in (ACL_READ, ACL_WRITE, ACL_EXECUTE):
if acl_get_perm(permset, m):
mode |= m
qualifier = None
tag = acl_get_tag_type(entry)
if tag == ACL_USER or tag == ACL_GROUP:
qual = acl_get_qualifier(entry)
qualifier = ctypes.cast(qual, ctypes.POINTER(ctypes.c_int))[0]
yield Entry(tag, qualifier, mode)
entry_id = ACL_NEXT_ENTRY
@classmethod
def from_path(cls, path, typ):
acl = acl_get_file(path, typ)
return cls(acl)
def main():
import argparse
import pwd
import grp
from pathlib import Path
parser = argparse.ArgumentParser()
parser.add_argument("path", help="File Path", type=Path)
args = parser.parse_args()
acl = ACL.from_path(args.path, ACL_TYPE_ACCESS)
for entry in acl.entries():
print(str(entry))
return 0
if __name__ == "__main__":
sys.exit(main())
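# Editorial sketch, not part of the removed file: using the module above as
# a library rather than a command-line script (assumes it is importable as
# bb.acl inside a BitBake checkout).
from bb import acl

for entry in acl.ACL.from_path("/tmp", acl.ACL_TYPE_ACCESS).entries():
    print(entry)   # e.g. "user::rwx", "group::r-x", "other::r-x"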


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -30,4 +28,4 @@ def chunkify(msg, max_chunk):
from .client import AsyncClient, Client
from .serv import AsyncServer, AsyncServerConnection, ClientError, ServerError
from .serv import AsyncServer, AsyncServerConnection


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -31,17 +29,7 @@ class AsyncClient(object):
async def connect_unix(self, path):
async def connect_sock():
# AF_UNIX has path length issues so chdir here to workaround
cwd = os.getcwd()
try:
os.chdir(os.path.dirname(path))
# The socket must be opened synchronously so that CWD doesn't get
# changed out from underneath us so we pass as a sock into asyncio
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
sock.connect(os.path.basename(path))
finally:
os.chdir(cwd)
return await asyncio.open_unix_connection(sock=sock)
return await asyncio.open_unix_connection(path)
self._connect_sock = connect_sock
@@ -160,8 +148,14 @@ class Client(object):
setattr(self, m, self._get_downcall_wrapper(downcall))
def connect_unix(self, path):
self.loop.run_until_complete(self.client.connect_unix(path))
self.loop.run_until_complete(self.client.connect())
# AF_UNIX has path length issues so chdir here to workaround
cwd = os.getcwd()
try:
os.chdir(os.path.dirname(path))
self.loop.run_until_complete(self.client.connect_unix(os.path.basename(path)))
self.loop.run_until_complete(self.client.connect())
finally:
os.chdir(cwd)
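# Editorial note on the workaround above: sun_path in struct sockaddr_un is
# limited to roughly 108 bytes on Linux, so connecting to a socket via a long
# absolute path fails in deeply nested build directories. Changing into the
# socket's directory and connecting by basename keeps the address short.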
@property
def max_chunk(self):


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -42,7 +40,7 @@ class AsyncServerConnection(object):
# Read protocol and version
client_protocol = await self.reader.readline()
if not client_protocol:
if client_protocol is None:
return
(client_proto_name, client_proto_version) = client_protocol.decode('utf-8').rstrip().split()
@@ -59,7 +57,7 @@ class AsyncServerConnection(object):
# an empty line to signal the end of the headers
while True:
line = await self.reader.readline()
if not line:
if line is None:
return
line = line.decode('utf-8').rstrip()
@@ -153,13 +151,6 @@ class AsyncServer(object):
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "[%s]:%d" % (name[0], name[1])


@@ -20,12 +20,10 @@ import itertools
import time
import re
import stat
import datetime
import bb
import bb.msg
import bb.process
import bb.progress
from io import StringIO
from bb import data, event, utils
bblogger = logging.getLogger('BitBake')
@@ -178,9 +176,7 @@ class StdoutNoopContextManager:
@property
def name(self):
if "name" in dir(sys.stdout):
return sys.stdout.name
return "<mem>"
return sys.stdout.name
def exec_func(func, d, dirs = None):
@@ -299,21 +295,9 @@ def exec_func_python(func, d, runfile, cwd=None):
lineno = int(d.getVarFlag(func, "lineno", False))
bb.methodpool.insert_method(func, text, fn, lineno - 1)
if verboseStdoutLogging:
sys.stdout.flush()
sys.stderr.flush()
currout = sys.stdout
currerr = sys.stderr
sys.stderr = sys.stdout = execio = StringIO()
comp = utils.better_compile(code, func, "exec_func_python() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_func_python() autogenerated")
finally:
if verboseStdoutLogging:
execio.flush()
logger.plain("%s" % execio.getvalue())
sys.stdout = currout
sys.stderr = currerr
execio.close()
# We want any stdout/stderr to be printed before any other log messages to make debugging
# more accurate. In some cases we seem to lose stdout/stderr entirely in logging tests without this.
sys.stdout.flush()
@@ -456,11 +440,7 @@ exit $ret
if fakerootcmd:
cmd = [fakerootcmd, runfile]
# We only want to output to logger via LogTee if stdout is sys.__stdout__ (which will either
# be real stdout or subprocess PIPE or similar). In other cases we are being run "recursively",
# ie. inside another function, in which case stdout is already being captured so we don't
# want to Tee here as output would be printed twice, and out of order.
if verboseStdoutLogging and sys.stdout == sys.__stdout__:
if verboseStdoutLogging:
logfile = LogTee(logger, StdoutNoopContextManager())
else:
logfile = StdoutNoopContextManager()
@@ -589,8 +569,10 @@ exit $ret
def _task_data(fn, task, d):
localdata = bb.data.createCopy(d)
localdata.setVar('BB_FILENAME', fn)
localdata.setVar('BB_CURRENTTASK', task[3:])
localdata.setVar('OVERRIDES', 'task-%s:%s' %
(task[3:].replace('_', '-'), d.getVar('OVERRIDES', False)))
localdata.finalize()
bb.data.expandKeys(localdata)
return localdata
@@ -601,7 +583,7 @@ def _exec_task(fn, task, d, quieterr):
running it with its own local metadata, and with some useful variables set.
"""
if not d.getVarFlag(task, 'task', False):
event.fire(TaskInvalid(task, fn, d), d)
event.fire(TaskInvalid(task, d), d)
logger.error("No such task: %s" % task)
return 1
@@ -637,8 +619,7 @@ def _exec_task(fn, task, d, quieterr):
logorder = os.path.join(tempdir, 'log.task_order')
try:
with open(logorder, 'a') as logorderfile:
timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S.%f")
logorderfile.write('{0} {1} ({2}): {3}\n'.format(timestamp, task, os.getpid(), logbase))
logorderfile.write('{0} ({1}): {2}\n'.format(task, os.getpid(), logbase))
except OSError:
logger.exception("Opening log file '%s'", logorder)
pass
@@ -734,23 +715,19 @@ def _exec_task(fn, task, d, quieterr):
logger.debug2("Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
except bb.BBHandledException:
event.fire(TaskFailed(task, fn, logfn, localdata, True), localdata)
return 1
except (Exception, SystemExit) as exc:
handled = False
if isinstance(exc, bb.BBHandledException):
handled = True
if quieterr:
if not handled:
logger.warning(repr(exc))
event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
# If the output is already on stdout, we've printed the information in the
# logs once already so don't duplicate
if verboseStdoutLogging or handled:
if verboseStdoutLogging:
errprinted = True
if not handled:
logger.error(repr(exc))
logger.error(repr(exc))
event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
return 1
@@ -791,7 +768,44 @@ def exec_task(fn, task, d, profile = False):
event.fire(failedevent, d)
return 1
def _get_cleanmask(taskname, mcfn):
def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
"""
Internal stamp helper function
Makes sure the stamp directory exists
Returns the stamp path+filename
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store, file_name will not be set
"""
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMP')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
if baseonly:
return stamp
if noextra:
extrainfo = ""
if not stamp:
return
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
stampdir = os.path.dirname(stamp)
if cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
return stamp
def stamp_cleanmask_internal(taskname, d, file_name):
"""
Internal stamp helper function to generate stamp cleaning mask
Returns the stamp path+filename
@@ -799,14 +813,31 @@ def _get_cleanmask(taskname, mcfn):
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store, file_name will not be set
"""
cleanmask = bb.parse.siggen.stampcleanmask_mcfn(taskname, mcfn)
taskflagname = taskname.replace("_setscene", "")
if cleanmask:
return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
return []
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
def clean_stamp_mcfn(task, mcfn):
cleanmask = _get_cleanmask(task, mcfn)
if file_name:
stamp = d.stampclean[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMPCLEAN')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
if not stamp:
return []
cleanmask = bb.parse.siggen.stampcleanmask(stamp, file_name, taskname, extrainfo)
return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
def make_stamp(task, d, file_name = None):
"""
Creates/updates a stamp for a given task
(d can be a data dict or dataCache)
"""
cleanmask = stamp_cleanmask_internal(task, d, file_name)
for mask in cleanmask:
for name in glob.glob(mask):
# Preserve sigdata files in the stamps directory
@@ -817,45 +848,24 @@ def clean_stamp_mcfn(task, mcfn):
continue
os.unlink(name)
def clean_stamp(task, d):
mcfn = d.getVar('BB_FILENAME')
clean_stamp_mcfn(task, mcfn)
def make_stamp_mcfn(task, mcfn):
basestamp = bb.parse.siggen.stampfile_mcfn(task, mcfn)
stampdir = os.path.dirname(basestamp)
if cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
clean_stamp_mcfn(task, mcfn)
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if basestamp:
bb.utils.remove(basestamp)
open(basestamp, "w").close()
def make_stamp(task, d):
"""
Creates/updates a stamp for a given task
"""
mcfn = d.getVar('BB_FILENAME')
make_stamp_mcfn(task, mcfn)
if stamp:
bb.utils.remove(stamp)
open(stamp, "w").close()
# If we're in task context, write out a signature file for each task
# as it completes
if not task.endswith("_setscene"):
stampbase = bb.parse.siggen.stampfile_base(mcfn)
bb.parse.siggen.dump_sigtask(mcfn, task, stampbase, True)
if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
stampbase = stamp_internal(task, d, None, True)
file_name = d.getVar('BB_FILENAME')
bb.parse.siggen.dump_sigtask(file_name, task, stampbase, True)
def find_stale_stamps(task, mcfn):
current = bb.parse.siggen.stampfile_mcfn(task, mcfn)
current2 = bb.parse.siggen.stampfile_mcfn(task + "_setscene", mcfn)
cleanmask = _get_cleanmask(task, mcfn)
def find_stale_stamps(task, d, file_name=None):
current = stamp_internal(task, d, file_name)
current2 = stamp_internal(task + "_setscene", d, file_name)
cleanmask = stamp_cleanmask_internal(task, d, file_name)
found = []
for mask in cleanmask:
for name in glob.glob(mask):
@@ -869,14 +879,38 @@ def find_stale_stamps(task, mcfn):
found.append(name)
return found
def write_taint(task, d):
def del_stamp(task, d, file_name = None):
"""
Removes a stamp for a given task
(d can be a data dict or dataCache)
"""
stamp = stamp_internal(task, d, file_name)
bb.utils.remove(stamp)
def write_taint(task, d, file_name = None):
"""
Creates a "taint" file which will force the specified task and its
dependents to be re-run the next time by influencing the value of its
taskhash.
(d can be a data dict or dataCache)
"""
mcfn = d.getVar('BB_FILENAME')
bb.parse.siggen.invalidate_task(task, mcfn)
import uuid
if file_name:
taintfn = d.stamp[file_name] + '.' + task + '.taint'
else:
taintfn = d.getVar('STAMP') + '.' + task + '.taint'
bb.utils.mkdirhier(os.path.dirname(taintfn))
# The specific content of the taint file is not really important,
# we just need it to be random, so a random UUID is used
with open(taintfn, 'w') as taintf:
taintf.write(str(uuid.uuid4()))
def stampfile(taskname, d, file_name = None, noextra=False):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name, noextra=noextra)
def add_tasks(tasklist, d):
task_deps = d.getVar('_task_deps', False)


@@ -24,11 +24,10 @@ from collections.abc import Mapping
import bb.utils
from bb import PrefixLoggerAdapter
import re
import shutil
logger = logging.getLogger("BitBake.Cache")
__cache_version__ = "155"
__cache_version__ = "154"
def getCacheFile(path, filename, mc, data_hash):
mcspec = ''
@@ -105,7 +104,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.tasks = metadata.getVar('__BBTASKS', False)
self.basetaskhashes = metadata.getVar('__siggen_basehashes', False) or {}
self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)
self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}
@@ -216,7 +215,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
# Collect files we may need for possible world-dep
# calculations
if not bb.utils.to_boolean(self.not_world):
if not self.not_world:
cachedata.possible_world.append(fn)
#else:
# logger.debug2("EXCLUDE FROM WORLD: %s", fn)
@@ -238,106 +237,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootlogs[fn] = self.fakerootlogs
cachedata.extradepsfunc[fn] = self.extradepsfunc
class SiggenRecipeInfo(RecipeInfoCommon):
__slots__ = ()
classname = "SiggenRecipeInfo"
cachefile = "bb_cache_" + classname +".dat"
# we don't want to show this information in graph files so don't set cachefields
#cachefields = []
def __init__(self, filename, metadata):
self.siggen_gendeps = metadata.getVar("__siggen_gendeps", False)
self.siggen_varvals = metadata.getVar("__siggen_varvals", False)
self.siggen_taskdeps = metadata.getVar("__siggen_taskdeps", False)
@classmethod
def init_cacheData(cls, cachedata):
cachedata.siggen_taskdeps = {}
cachedata.siggen_gendeps = {}
cachedata.siggen_varvals = {}
def add_cacheData(self, cachedata, fn):
cachedata.siggen_gendeps[fn] = self.siggen_gendeps
cachedata.siggen_varvals[fn] = self.siggen_varvals
cachedata.siggen_taskdeps[fn] = self.siggen_taskdeps
# The siggen variable data is large and impacts:
# - bitbake's overall memory usage
# - the amount of data sent over IPC between parsing processes and the server
# - the size of the cache files on disk
# - the size of "sigdata" hash information files on disk
# The data consists of strings (some large) or frozenset lists of variables
# As such, we a) deduplicate the data here and b) pass references to the object at second
# access (e.g. over IPC or saving into pickle).
store = {}
save_map = {}
save_count = 1
restore_map = {}
restore_count = {}
@classmethod
def reset(cls):
# Needs to be called before starting new streamed data in a given process
# (e.g. writing out the cache again)
cls.save_map = {}
cls.save_count = 1
cls.restore_map = {}
@classmethod
def _save(cls, deps):
ret = []
if not deps:
return deps
for dep in deps:
fs = deps[dep]
if fs is None:
ret.append((dep, None, None))
elif fs in cls.save_map:
ret.append((dep, None, cls.save_map[fs]))
else:
cls.save_map[fs] = cls.save_count
ret.append((dep, fs, cls.save_count))
cls.save_count = cls.save_count + 1
return ret
@classmethod
def _restore(cls, deps, pid):
ret = {}
if not deps:
return deps
if pid not in cls.restore_map:
cls.restore_map[pid] = {}
map = cls.restore_map[pid]
for dep, fs, mapnum in deps:
if fs is None and mapnum is None:
ret[dep] = None
elif fs is None:
ret[dep] = map[mapnum]
else:
try:
fs = cls.store[fs]
except KeyError:
cls.store[fs] = fs
map[mapnum] = fs
ret[dep] = fs
return ret
def __getstate__(self):
ret = {}
for key in ["siggen_gendeps", "siggen_taskdeps", "siggen_varvals"]:
ret[key] = self._save(self.__dict__[key])
ret['pid'] = os.getpid()
return ret
def __setstate__(self, state):
pid = state['pid']
for key in ["siggen_gendeps", "siggen_taskdeps", "siggen_varvals"]:
setattr(self, key, self._restore(state[key], pid))
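# Editorial sketch, not part of the diff: the round trip performed by the
# _save()/_restore() interning above. Two tasks with equal dependency sets
# pickle the frozenset once; the second occurrence travels as a bare map
# number and is rebuilt as a reference on restore.
#
#   deps = {"do_compile": frozenset({"CC"}), "do_install": frozenset({"CC"})}
#   saved = SiggenRecipeInfo._save(deps)
#   # -> [("do_compile", frozenset({"CC"}), 1), ("do_install", None, 1)]
#   restored = SiggenRecipeInfo._restore(saved, os.getpid())
#   assert restored["do_compile"] is restored["do_install"]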
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
@@ -380,18 +279,96 @@ def variant2virtual(realfn, variant):
return "mc:" + elems[1] + ":" + realfn
return "virtual:" + variant + ":" + realfn
#
# Cooker calls cacheValid on its recipe list, then either calls loadCached
# from its main thread or parse from separate processes to generate an up to
# date cache
#
class Cache(object):
def parse_recipe(bb_data, bbfile, appends, mc=''):
"""
Parse a recipe
"""
chdir_back = False
bb_data.setVar("__BBMULTICONFIG", mc)
# expand tmpdir to include this topdir
bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR') or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
bb.parse.cached_mtime_noerror(bbfile_loc)
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
class NoCache(object):
def __init__(self, databuilder):
self.databuilder = databuilder
self.data = databuilder.data
def loadDataFull(self, virtualfn, appends):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug("Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
def load_bbfile(self, bbfile, appends, virtonly = False, mc=None):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = virtualfn2realfn(bbfile)
bb_data = self.databuilder.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = parse_recipe(bb_data, bbfile, appends, mc)
return datastores
if mc is not None:
bb_data = self.databuilder.mcdata[mc].createCopy()
return parse_recipe(bb_data, bbfile, appends, mc)
bb_data = self.data.createCopy()
datastores = parse_recipe(bb_data, bbfile, appends)
for mc in self.databuilder.mcdata:
if not mc:
continue
bb_data = self.databuilder.mcdata[mc].createCopy()
newstores = parse_recipe(bb_data, bbfile, appends, mc)
for ns in newstores:
datastores["mc:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
class Cache(NoCache):
"""
BitBake Cache implementation
"""
def __init__(self, databuilder, mc, data_hash, caches_array):
self.databuilder = databuilder
self.data = databuilder.data
super().__init__(databuilder)
data = databuilder.data
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
@@ -399,7 +376,7 @@ class Cache(object):
self.mc = mc
self.logger = PrefixLoggerAdapter("Cache: %s: " % (mc if mc else "default"), logger)
self.caches_array = caches_array
self.cachedir = self.data.getVar("CACHE")
self.cachedir = data.getVar("CACHE")
self.clean = set()
self.checked = set()
self.depends_cache = {}
@@ -409,12 +386,20 @@ class Cache(object):
self.filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
if self.cachedir in [None, '']:
bb.fatal("Please ensure CACHE is set to the cache directory for BitBake to use")
self.has_cache = False
self.logger.info("Not using a cache. "
"Set CACHE = <directory> to enable.")
return
self.has_cache = True
def getCacheFile(self, cachefile):
return getCacheFile(self.cachedir, cachefile, self.mc, self.data_hash)
def prepare_cache(self, progress):
if not self.has_cache:
return 0
loaded = 0
self.cachefile = self.getCacheFile("bb_cache.dat")
@@ -453,6 +438,9 @@ class Cache(object):
return loaded
def cachesize(self):
if not self.has_cache:
return 0
cachesize = 0
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
@@ -514,11 +502,11 @@ class Cache(object):
return len(self.depends_cache)
def parse(self, filename, appends, layername):
def parse(self, filename, appends):
"""Parse the specified filename, returning the recipe information"""
self.logger.debug("Parsing %s", filename)
infos = []
datastores = self.databuilder.parseRecipeVariants(filename, appends, mc=self.mc, layername=layername)
datastores = self.load_bbfile(filename, appends, mc=self.mc)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
@@ -540,19 +528,43 @@ class Cache(object):
return infos
def loadCached(self, filename, appends):
def load(self, filename, appends):
"""Obtain the recipe information for the specified filename,
using cached values.
"""
using cached values if available, otherwise parsing.
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
Note that if it does parse to obtain the info, it will not
automatically add the information to the cache or to your
CacheData. Use the add or add_info method to do so after
running this, or use loadData instead."""
cached = self.cacheValid(filename, appends)
if cached:
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
return self.parse(filename, appends, configdata, self.caches_array)
return infos
return cached, infos
def loadData(self, fn, appends, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends)
for virtualfn, info_array in infos:
if info_array[0].skipped:
self.logger.debug("Skipping %s: %s", virtualfn, info_array[0].skipreason)
skipped += 1
else:
self.add_info(virtualfn, info_array, cacheData, not cached)
virtuals += 1
return cached, skipped, virtuals
def cacheValid(self, fn, appends):
"""
@@ -561,6 +573,10 @@ class Cache(object):
"""
if fn not in self.checked:
self.cacheValidUpdate(fn, appends)
# Is cache enabled?
if not self.has_cache:
return False
if fn in self.clean:
return True
return False
@@ -570,6 +586,10 @@ class Cache(object):
Is the cache valid for fn?
Make thorough (slower) checks including timestamps.
"""
# Is cache enabled?
if not self.has_cache:
return False
self.checked.add(fn)
# File isn't in depends_cache
@@ -620,7 +640,7 @@ class Cache(object):
for f in flist:
if not f:
continue
f, exist = f.rsplit(":", 1)
f, exist = f.split(":")
if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
self.logger.debug2("%s's file checksum list file %s changed",
fn, f)
@@ -676,6 +696,10 @@ class Cache(object):
Save the cache
Called from the parser when complete (or exiting)
"""
if not self.has_cache:
return
if self.cacheclean:
self.logger.debug2("Cache is clean, not saving.")
return
@@ -696,7 +720,6 @@ class Cache(object):
p.dump(info)
del self.depends_cache
SiggenRecipeInfo.reset()
@staticmethod
def mtime(cachefile):
@@ -719,11 +742,26 @@ class Cache(object):
if watcher:
watcher(info_array[0].file_depends)
if not self.has_cache:
return
if (info_array[0].skipped or 'SRCREVINACTION' not in info_array[0].pv) and not info_array[0].nocache:
if parsed:
self.cacheclean = False
self.depends_cache[filename] = info_array
def add(self, file_name, data, cacheData, parsed=None):
"""
Save data we need into the cache
"""
realfn = virtualfn2realfn(file_name)[0]
info_array = []
for cache_class in self.caches_array:
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
class MulticonfigCache(Mapping):
def __init__(self, databuilder, data_hash, caches_array):
def progress(p):
@@ -760,7 +798,6 @@ class MulticonfigCache(Mapping):
loaded = 0
for c in self.__caches.values():
SiggenRecipeInfo.reset()
loaded += c.prepare_cache(progress)
previous_progress = current_progress
@@ -838,10 +875,11 @@ class MultiProcessCache(object):
self.cachedata = self.create_cachedata()
self.cachedata_extras = self.create_cachedata()
def init_cache(self, cachedir, cache_file_name=None):
if not cachedir:
def init_cache(self, d, cache_file_name=None):
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
@@ -872,10 +910,6 @@ class MultiProcessCache(object):
if not self.cachefile:
return
have_data = any(self.cachedata_extras)
if not have_data:
return
glf = bb.utils.lockfile(self.cachefile + ".lock", shared=True)
i = os.getpid()
@@ -910,8 +944,6 @@ class MultiProcessCache(object):
data = self.cachedata
have_data = False
for f in [y for y in os.listdir(os.path.dirname(self.cachefile)) if y.startswith(os.path.basename(self.cachefile) + '-')]:
f = os.path.join(os.path.dirname(self.cachefile), f)
try:
@@ -926,14 +958,12 @@ class MultiProcessCache(object):
os.unlink(f)
continue
have_data = True
self.merge_data(extradata, data)
os.unlink(f)
if have_data:
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(glf)
@@ -989,11 +1019,3 @@ class SimpleCache(object):
p.dump([data, self.cacheversion])
bb.utils.unlockfile(glf)
def copyfile(self, target):
if not self.cachefile:
return
glf = bb.utils.lockfile(self.cachefile + ".lock")
shutil.copy(self.cachefile, target)
bb.utils.unlockfile(glf)


@@ -11,13 +11,10 @@ import os
import stat
import bb.utils
import logging
import re
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
@@ -53,7 +50,6 @@ class FileChecksumCache(MultiProcessCache):
MultiProcessCache.__init__(self)
def get_checksum(self, f):
f = os.path.normpath(f)
entry = self.cachedata[0].get(f)
cmtime = self.mtime_cache.cached_mtime(f)
if entry:
@@ -88,36 +84,22 @@ class FileChecksumCache(MultiProcessCache):
return None
return checksum
#
# Changing the format of file-checksums is problematic as both OE and Bitbake have
# knowledge of them. We need to encode a new piece of data, the portion of the path
# we care about from a checksum perspective. This means that files that change subdirectory
# are tracked by the task hashes. To do this, we do something horrible and put a "/./" into
# the path. The filesystem handles it but it gives us a marker to know which subsection
# of the path to cache.
#
def checksum_dir(pth):
# Handle directories recursively
if pth == "/":
bb.fatal("Refusing to checksum /")
pth = pth.rstrip("/")
dirchecksums = []
for root, dirs, files in os.walk(pth, topdown=True):
[dirs.remove(d) for d in list(dirs) if d in localdirsexclude]
for name in files:
fullpth = os.path.join(root, name).replace(pth, os.path.join(pth, "."))
fullpth = os.path.join(root, name)
checksum = checksum_file(fullpth)
if checksum:
dirchecksums.append((fullpth, checksum))
return dirchecksums
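# Editorial sketch: with the "/./" marker spliced in above, a consumer can
# split a checksummed path into its base and the cache-relevant subpath:
#   base, _, sub = "/work/files/./scripts/run.sh".partition("/./")
#   # base == "/work/files", sub == "scripts/run.sh" (hypothetical path)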
checksums = []
for pth in filelist_regex.split(filelist):
if not pth:
continue
pth = pth.strip()
if not pth:
continue
for pth in filelist.split():
exist = pth.split(":")[1]
if exist == "False":
continue


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -27,7 +25,6 @@ import ast
import sys
import codegen
import logging
import inspect
import bb.pysh as pysh
import bb.utils, bb.data
import hashlib
@@ -59,39 +56,10 @@ def check_indent(codestr):
return codestr
modulecode_deps = {}
def add_module_functions(fn, functions, namespace):
fstat = os.stat(fn)
fixedhash = fn + ":" + str(fstat.st_size) + ":" + str(fstat.st_mtime)
for f in functions:
name = "%s.%s" % (namespace, f)
parser = PythonParser(name, logger)
try:
parser.parse_python(None, filename=fn, lineno=1, fixedhash=fixedhash+f)
#bb.warn("Cached %s" % f)
except KeyError:
lines, lineno = inspect.getsourcelines(functions[f])
src = "".join(lines)
parser.parse_python(src, filename=fn, lineno=lineno, fixedhash=fixedhash+f)
#bb.warn("Not cached %s" % f)
execs = parser.execs.copy()
# Expand internal module exec references
for e in parser.execs:
if e in functions:
execs.remove(e)
execs.add(namespace + "." + e)
modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy()]
#bb.warn("%s: %s\nRefs:%s Execs: %s %s %s" % (name, src, parser.references, parser.execs, parser.var_execs, parser.contains))
def update_module_dependencies(d):
for mod in modulecode_deps:
excludes = set((d.getVarFlag(mod, "vardepsexclude") or "").split())
if excludes:
modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3]]
# A custom getstate/setstate using tuples is actually worth 15% cachesize by
# avoiding duplication of the attribute names!
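# Editorial sketch (hypothetical class, not in the diff) of that trick:
#   class CacheLine:
#       def __getstate__(self):
#           return (self.refs, self.execs)   # a tuple: attribute names not pickled
#       def __setstate__(self, state):
#           self.refs, self.execs = state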
class SetCache(object):
def __init__(self):
self.setcache = {}
@@ -184,12 +152,12 @@ class CodeParserCache(MultiProcessCache):
self.shellcachelines[h] = cacheline
return cacheline
def init_cache(self, cachedir):
def init_cache(self, d):
# Check if we already have the caches
if self.pythoncache:
return
MultiProcessCache.init_cache(self, cachedir)
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
self.pythoncache = self.cachedata[0]
@@ -201,8 +169,8 @@ class CodeParserCache(MultiProcessCache):
codeparsercache = CodeParserCache()
def parser_cache_init(cachedir):
codeparsercache.init_cache(cachedir)
def parser_cache_init(d):
codeparsercache.init_cache(d)
def parser_cache_save():
codeparsercache.save_extras()
@@ -227,10 +195,6 @@ class BufferedLogger(Logger):
self.target.handle(record)
self.buffer = []
class DummyLogger():
def flush(self):
return
class PythonParser():
getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
@@ -312,24 +276,16 @@ class PythonParser():
self.contains = {}
self.execs = set()
self.references = set()
self._log = log
# Defer init as expensive
self.log = DummyLogger()
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, log)
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
# For the python module code it is expensive to have the function text so it
# uses a different fixedhash to cache against. We can take the hit on obtaining the
# text if it isn't in the cache.
def parse_python(self, node, lineno=0, filename="<string>", fixedhash=None):
if not fixedhash and (not node or not node.strip()):
def parse_python(self, node, lineno=0, filename="<string>"):
if not node or not node.strip():
return
if fixedhash:
h = fixedhash
else:
h = bbhash(str(node))
h = bbhash(str(node))
if h in codeparsercache.pythoncache:
self.references = set(codeparsercache.pythoncache[h].refs)
@@ -347,12 +303,6 @@ class PythonParser():
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
return
if fixedhash and not node:
raise KeyError
# Need to parse so take the hit on the real log buffer
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, self._log)
# We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
node = "\n" * int(lineno) + node
code = compile(check_indent(str(node)), filename, "exec",
@@ -371,11 +321,7 @@ class ShellParser():
self.funcdefs = set()
self.allexecs = set()
self.execs = set()
self._name = name
self._log = log
# Defer init as expensive
self.log = DummyLogger()
self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
self.unhandled_template = "unable to handle non-literal command '%s'"
self.unhandled_template = "while parsing %s, %s" % (name, self.unhandled_template)
@@ -394,9 +340,6 @@ class ShellParser():
self.execs = set(codeparsercache.shellcacheextras[h].execs)
return self.execs
# Need to parse so take the hit on the real log buffer
self.log = BufferedLogger('BitBake.Data.%s' % self._name, logging.DEBUG, self._log)
self._parse_shell(value)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)


@@ -51,21 +51,20 @@ class Command:
"""
A queue of asynchronous commands for bitbake
"""
def __init__(self, cooker, process_server):
def __init__(self, cooker):
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
self.remotedatastores = None
self.process_server = process_server
# Access with locking using process_server.{get/set/clear}_async_cmd()
# FIXME Add lock for this
self.currentAsyncCommand = None
def runCommand(self, commandline, process_server, ro_only=False):
def runCommand(self, commandline, ro_only = False):
command = commandline.pop(0)
# Ensure cooker is ready for commands
if command not in ["updateConfig", "setFeatures", "ping"]:
if command != "updateConfig" and command != "setFeatures":
try:
self.cooker.init_configdata()
if not self.remotedatastores:
@@ -85,8 +84,7 @@ class Command:
if not hasattr(command_method, 'readonly') or not getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
if command != "ping":
self.cooker.process_inotify_updates_apply()
self.cooker.process_inotify_updates()
if getattr(command_method, 'needconfig', True):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
@@ -101,24 +99,24 @@ class Command:
return None, traceback.format_exc()
else:
return result, None
if self.currentAsyncCommand is not None:
return None, "Busy (%s in progress)" % self.currentAsyncCommand[0]
if command not in CommandsAsync.__dict__:
return None, "No such command"
if not process_server.set_async_cmd((command, commandline)):
return None, "Busy (%s in progress)" % self.process_server.get_async_cmd()[0]
self.cooker.idleCallBackRegister(self.runAsyncCommand, process_server)
self.currentAsyncCommand = (command, commandline)
self.cooker.idleCallBackRegister(self.cooker.runCommands, self.cooker)
return True, None
def runAsyncCommand(self, _, process_server, halt):
def runAsyncCommand(self):
try:
self.cooker.process_inotify_updates_apply()
self.cooker.process_inotify_updates()
if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
# updateCache will trigger a shutdown of the parser
# and then raise BBHandledException triggering an exit
self.cooker.updateCache()
return bb.server.process.idleFinish("Cooker in error state")
cmd = process_server.get_async_cmd()
if cmd is not None:
(command, options) = cmd
return False
if self.currentAsyncCommand is not None:
(command, options) = self.currentAsyncCommand
commandmethod = getattr(CommandsAsync, command)
needcache = getattr( commandmethod, "needcache" )
if needcache and self.cooker.state != bb.cooker.state.running:
@@ -128,21 +126,24 @@ class Command:
commandmethod(self.cmds_async, self, options)
return False
else:
return bb.server.process.idleFinish("Nothing to do, no async command?")
return False
except KeyboardInterrupt as exc:
return bb.server.process.idleFinish("Interrupted")
self.finishAsyncCommand("Interrupted")
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, str):
return bb.server.process.idleFinish(arg)
self.finishAsyncCommand(arg)
else:
return bb.server.process.idleFinish("Exited with %s" % arg)
self.finishAsyncCommand("Exited with %s" % arg)
return False
except Exception as exc:
import traceback
if isinstance(exc, bb.BBHandledException):
return bb.server.process.idleFinish("")
self.finishAsyncCommand("")
else:
return bb.server.process.idleFinish(traceback.format_exc())
self.finishAsyncCommand(traceback.format_exc())
return False
def finishAsyncCommand(self, msg=None, code=None):
if msg or msg == "":
@@ -151,8 +152,8 @@ class Command:
bb.event.fire(CommandExit(code), self.cooker.data)
else:
bb.event.fire(CommandCompleted(), self.cooker.data)
self.currentAsyncCommand = None
self.cooker.finishcommand()
self.process_server.clear_async_cmd()
def reset(self):
if self.remotedatastores:
@@ -165,14 +166,6 @@ class CommandsSync:
These must not influence any running synchronous command.
"""
def ping(self, command, params):
"""
Allow a UI to check the server is still alive
"""
return "Still alive!"
ping.needconfig = False
ping.readonly = True
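As a minimal sketch (a hypothetical command, not part of bitbake), this is the shape a synchronous command takes: a CommandsSync method plus the attribute flags that runCommand() inspects via getattr():

def getMonotonicTime(self, command, params):
    """Hypothetical read-only query needing no parsed configuration"""
    import time
    return time.monotonic()
getMonotonicTime.needconfig = False
getMonotonicTime.readonly = True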
def stateShutdown(self, command, params):
"""
Trigger cooker 'shutdown' mode
@@ -564,7 +557,6 @@ class CommandsSync:
appendfiles = command.cooker.collections[mc].get_file_appends(fn)
else:
appendfiles = []
layername = command.cooker.collections[mc].calc_bbfile_priority(fn)[2]
# We are calling bb.cache locally here rather than on the server,
# but that's OK because it doesn't actually need anything from
# the server barring the global datastore (which we have a remote
@@ -572,10 +564,11 @@ class CommandsSync:
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = command.cooker.databuilder._parse_recipe(config_data, fn, appendfiles, mc, layername)['']
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles, mc)['']
else:
# Use the standard path
envdata = command.cooker.databuilder.parseRecipe(fn, appendfiles, layername)
parser = bb.cache.NoCache(command.cooker.databuilder)
envdata = parser.loadDataFull(fn, appendfiles)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
@@ -748,7 +741,7 @@ class CommandsAsync:
"""
event = params[0]
bb.event.fire(eval(event), command.cooker.data)
process_server.clear_async_cmd()
command.currentAsyncCommand = None
triggerEvent.needcache = False
def resetCooker(self, command, params):

View File

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Helper library to implement streaming compression and decompression using an
@@ -51,7 +49,7 @@ def open_wrap(
raise ValueError("Argument 'newline' not supported in binary mode")
file_mode = mode.replace("t", "")
if isinstance(filename, (str, bytes, os.PathLike, int)):
if isinstance(filename, (str, bytes, os.PathLike)):
binary_file = cls(filename, file_mode, **kwargs)
elif hasattr(filename, "read") or hasattr(filename, "write"):
binary_file = cls(None, file_mode, fileobj=filename, **kwargs)
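A minimal usage sketch of the wrapper, assuming it is exposed through bb.compress.zstd.open() as in bitbake's lib/bb/compress (the zstd tool must be available at runtime):

import bb.compress.zstd

# Text-mode round trip through the streaming compressor
with bb.compress.zstd.open("/tmp/demo.zst", "wt") as f:
    f.write("hello world\n")
with bb.compress.zstd.open("/tmp/demo.zst", "rt") as f:
    print(f.read())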

View File

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

View File

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

File diff suppressed because it is too large

View File

@@ -57,7 +57,7 @@ class ConfigParameters(object):
def updateToServer(self, server, environment):
options = {}
for o in ["halt", "force", "invalidate_stamp",
for o in ["abort", "force", "invalidate_stamp",
"dry_run", "dump_signatures",
"extra_assume_provided", "profile",
"prefile", "postfile", "server_timeout",
@@ -86,7 +86,7 @@ class ConfigParameters(object):
action['msg'] = "Only one target can be used with the --environment option."
elif self.options.buildfile and len(self.options.pkgs_to_build) > 0:
action['msg'] = "No target should be used with the --environment and --buildfile options."
elif self.options.pkgs_to_build:
elif len(self.options.pkgs_to_build) > 0:
action['action'] = ["showEnvironmentTarget", self.options.pkgs_to_build]
else:
action['action'] = ["showEnvironment", self.options.buildfile]
@@ -124,7 +124,7 @@ class CookerConfiguration(object):
self.prefile = []
self.postfile = []
self.cmd = None
self.halt = True
self.abort = True
self.force = False
self.profile = False
self.nosetscene = False
@@ -160,7 +160,12 @@ def catch_parse_error(func):
def wrapped(fn, *args):
try:
return func(fn, *args)
except Exception as exc:
except IOError as exc:
import traceback
parselog.critical(traceback.format_exc())
parselog.critical("Unable to parse %s: %s" % (fn, exc))
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as exc:
import traceback
bbdir = os.path.dirname(__file__) + os.sep
@@ -172,11 +177,14 @@ def catch_parse_error(func):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
raise bb.BBHandledException()
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
raise bb.BBHandledException()
return wrapped
@catch_parse_error
def parse_config_file(fn, data, include=True):
return bb.parse.handle(fn, data, include, baseconfig=True)
return bb.parse.handle(fn, data, include)
@catch_parse_error
def _inherit(bbclass, data):
@@ -202,7 +210,7 @@ def findConfigFile(configfile, data):
#
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# up to /. If that fails, bitbake would fall back to cwd.
# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
#
def findTopdir():
@@ -215,8 +223,11 @@ def findTopdir():
layerconf = findConfigFile("bblayers.conf", d)
if layerconf:
return os.path.dirname(os.path.dirname(layerconf))
return os.path.abspath(os.getcwd())
if bbpath:
bitbakeconf = bb.utils.which(bbpath, "conf/bitbake.conf")
if bitbakeconf:
return os.path.dirname(os.path.dirname(bitbakeconf))
return None
class CookerDataBuilder(object):
@@ -239,14 +250,10 @@ class CookerDataBuilder(object):
self.savedenv = bb.data.init()
for k in cookercfg.env:
self.savedenv.setVar(k, cookercfg.env[k])
if k in bb.data_smart.bitbake_renamed_vars:
bb.error('Shell environment variable %s has been renamed to %s' % (k, bb.data_smart.bitbake_renamed_vars[k]))
bb.fatal("Exiting to allow enviroment variables to be corrected")
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
self.basedata.setVar("BB_ORIGENV", self.savedenv)
self.basedata.setVar("__bbclasstype", "global")
if worker:
self.basedata.setVar("BB_WORKERCONTEXT", "1")
@@ -254,15 +261,15 @@ class CookerDataBuilder(object):
self.data = self.basedata
self.mcdata = {}
def parseBaseConfiguration(self, worker=False):
mcdata = {}
def parseBaseConfiguration(self):
data_hash = hashlib.sha256()
try:
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
if self.data.getVar("BB_WORKERCONTEXT", False) is None and not worker:
if self.data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(self.data)
bb.parse.init_parser(self.data)
bb.codeparser.parser_cache_init(self.data)
bb.event.fire(bb.event.ConfigParsed(), self.data)
@@ -280,62 +287,40 @@ class CookerDataBuilder(object):
bb.parse.init_parser(self.data)
data_hash.update(self.data.get_hash().encode('utf-8'))
mcdata[''] = self.data
self.mcdata[''] = self.data
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
if config[0].isdigit():
bb.fatal("Multiconfig name '%s' is invalid as multiconfigs cannot start with a digit" % config)
parsed_mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), parsed_mcdata)
mcdata[config] = parsed_mcdata
data_hash.update(parsed_mcdata.get_hash().encode('utf-8'))
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
data_hash.update(mcdata.get_hash().encode('utf-8'))
if multiconfig:
bb.event.fire(bb.event.MultiConfigParsed(mcdata), self.data)
bb.event.fire(bb.event.MultiConfigParsed(self.mcdata), self.data)
self.data_hash = data_hash.hexdigest()
except (SyntaxError, bb.BBHandledException):
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as e:
logger.error(str(e))
raise bb.BBHandledException()
bb.codeparser.update_module_dependencies(self.data)
# Handle obsolete variable names
d = self.data
renamedvars = d.getVarFlags('BB_RENAMED_VARIABLES') or {}
renamedvars.update(bb.data_smart.bitbake_renamed_vars)
issues = False
for v in renamedvars:
if d.getVar(v) != None or d.hasOverrides(v):
issues = True
loginfo = {}
history = d.varhistory.get_variable_refs(v)
for h in history:
for line in history[h]:
loginfo = {'file' : h, 'line' : line}
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
if not history:
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
if issues:
except Exception:
logger.exception("Error parsing configuration files")
raise bb.BBHandledException()
for mc in mcdata:
mcdata[mc].renameVar("__depends", "__base_depends")
mcdata[mc].setVar("__bbclasstype", "recipe")
# Create a copy so we can reset at a later date when UIs disconnect
self.mcorigdata = mcdata
for mc in mcdata:
self.mcdata[mc] = bb.data.createCopy(mcdata[mc])
self.data = self.mcdata['']
self.origdata = self.data
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def reset(self):
# We may not have run parseBaseConfiguration() yet
if not hasattr(self, 'mcorigdata'):
if not hasattr(self, 'origdata'):
return
for mc in self.mcorigdata:
self.mcdata[mc] = bb.data.createCopy(self.mcorigdata[mc])
self.data = self.mcdata['']
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def _findLayerConf(self, data):
return findConfigFile("bblayers.conf", data)
@@ -350,17 +335,12 @@ class CookerDataBuilder(object):
layerconf = self._findLayerConf(data)
if layerconf:
parselog.debug2("Found bblayers.conf (%s)", layerconf)
parselog.debug(2, "Found bblayers.conf (%s)", layerconf)
# By definition bblayers.conf is in conf/ of TOPDIR.
# We may have been called with cwd somewhere else so reset TOPDIR
data.setVar("TOPDIR", os.path.dirname(os.path.dirname(layerconf)))
data = parse_config_file(layerconf, data)
if not data.getVar("BB_CACHEDIR"):
data.setVar("BB_CACHEDIR", "${TOPDIR}/cache")
bb.codeparser.parser_cache_init(data.getVar("BB_CACHEDIR"))
layers = (data.getVar('BBLAYERS') or "").split()
broken_layers = []
@@ -382,10 +362,8 @@ class CookerDataBuilder(object):
parselog.critical("Please check BBLAYERS in %s" % (layerconf))
raise bb.BBHandledException()
layerseries = None
compat_entries = {}
for layer in layers:
parselog.debug2("Adding layer %s", layer)
parselog.debug(2, "Adding layer %s", layer)
if 'HOME' in approved and '~' in layer:
layer = os.path.expanduser(layer)
if layer.endswith('/'):
@@ -396,27 +374,8 @@ class CookerDataBuilder(object):
data.expandVarref('LAYERDIR')
data.expandVarref('LAYERDIR_RE')
# Sadly we can't have nice things.
# Some layers think they're going to be 'clever' and copy the values from
# another layer, e.g. using ${LAYERSERIES_COMPAT_core}. The whole point of
# this mechanism is to make it clear which releases a layer supports and
# show when a layer master branch is bitrotting and is unmaintained.
# We therefore avoid people doing this here.
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
for c in collections:
compat_entry = data.getVar("LAYERSERIES_COMPAT_%s" % c)
if compat_entry:
compat_entries[c] = set(compat_entry.split())
data.delVar("LAYERSERIES_COMPAT_%s" % c)
if not layerseries:
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
if layerseries:
data.delVar("LAYERSERIES_CORENAMES")
data.delVar('LAYERDIR_RE')
data.delVar('LAYERDIR')
for c in compat_entries:
data.setVar("LAYERSERIES_COMPAT_%s" % c, " ".join(sorted(compat_entries[c])))
bbfiles_dynamic = (data.getVar('BBFILES_DYNAMIC') or "").split()
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
@@ -435,15 +394,13 @@ class CookerDataBuilder(object):
if invalid:
bb.fatal("BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
collections_tmp = collections[:]
for c in collections:
collections_tmp.remove(c)
if c in collections_tmp:
bb.fatal("Found duplicated BBFILE_COLLECTIONS '%s', check bblayers.conf or layer.conf to fix it." % c)
compat = set()
if c in compat_entries:
compat = compat_entries[c]
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
if compat and not layerseries:
bb.fatal("No core layer found to work with layer '%s'. Missing entry in bblayers.conf?" % c)
if compat and not (compat & layerseries):
@@ -452,21 +409,13 @@ class CookerDataBuilder(object):
elif not compat and not data.getVar("BB_WORKERCONTEXT"):
bb.warn("Layer %s should set LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names it is compatible with." % (c, c))
data.setVar("LAYERSERIES_CORENAMES", " ".join(sorted(layerseries)))
if not data.getVar("BBPATH"):
msg = "The BBPATH variable is not set"
if not layerconf:
msg += (" and bitbake did not find a conf/bblayers.conf file in"
" the expected location.\nMaybe you accidentally"
" invoked bitbake from the wrong directory?")
bb.fatal(msg)
if not data.getVar("TOPDIR"):
data.setVar("TOPDIR", os.path.abspath(os.getcwd()))
if not data.getVar("BB_CACHEDIR"):
data.setVar("BB_CACHEDIR", "${TOPDIR}/cache")
bb.codeparser.parser_cache_init(data.getVar("BB_CACHEDIR"))
raise SystemExit(msg)
data = parse_config_file(os.path.join("conf", "bitbake.conf"), data)
@@ -479,7 +428,7 @@ class CookerDataBuilder(object):
for bbclass in bbclasses:
data = _inherit(bbclass, data)
# Normally we only register event handlers at the end of parsing .bb files
# Nomally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
@@ -493,55 +442,3 @@ class CookerDataBuilder(object):
return data
@staticmethod
def _parse_recipe(bb_data, bbfile, appends, mc, layername):
bb_data.setVar("__BBMULTICONFIG", mc)
bb_data.setVar("FILE_LAYERNAME", layername)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
bb.parse.cached_mtime_noerror(bbfile_loc)
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
return bb_data
def parseRecipeVariants(self, bbfile, appends, virtonly=False, mc=None, layername=None):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = bb.cache.virtualfn2realfn(bbfile)
bb_data = self.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = self._parse_recipe(bb_data, bbfile, appends, mc, layername)
return datastores
if mc is not None:
bb_data = self.mcdata[mc].createCopy()
return self._parse_recipe(bb_data, bbfile, appends, mc, layername)
bb_data = self.data.createCopy()
datastores = self._parse_recipe(bb_data, bbfile, appends, '', layername)
for mc in self.mcdata:
if not mc:
continue
bb_data = self.mcdata[mc].createCopy()
newstores = self._parse_recipe(bb_data, bbfile, appends, mc, layername)
for ns in newstores:
datastores["mc:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
def parseRecipe(self, virtualfn, appends, layername):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug("Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = bb.cache.virtualfn2realfn(virtualfn)
bb_data = self.parseRecipeVariants(virtualfn, appends, virtonly=True, layername=layername)
return bb_data[virtual]

View File

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -76,26 +74,26 @@ def createDaemon(function, logfile):
with open('/dev/null', 'r') as si:
os.dup2(si.fileno(), sys.stdin.fileno())
with open(logfile, 'a+') as so:
try:
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
except io.UnsupportedOperation:
sys.stdout = so
try:
so = open(logfile, 'a+')
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
except io.UnsupportedOperation:
sys.stdout = open(logfile, 'a+')
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two separate buffers
sys.stderr = sys.stdout
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two seperate buffers
sys.stderr = sys.stdout
try:
function()
except Exception as e:
traceback.print_exc()
finally:
bb.event.print_ui_queue()
# os._exit() doesn't flush open files like os.exit() does. Manually flush
# stdout and stderr so that any logging output will be seen, particularly
# exception tracebacks.
sys.stdout.flush()
sys.stderr.flush()
os._exit(0)
try:
function()
except Exception as e:
traceback.print_exc()
finally:
bb.event.print_ui_queue()
# os._exit() doesn't flush open files like os.exit() does. Manually flush
# stdout and stderr so that any logging output will be seen, particularly
# exception tracebacks.
sys.stdout.flush()
sys.stderr.flush()
os._exit(0)

View File

@@ -4,16 +4,14 @@ BitBake 'Data' implementations
Functions for interacting with the data structure used by the
BitBake build tools.
expandKeys and datastore iteration are the most expensive
operations. Updating overrides is now "on the fly" but still based
on the idea of the cookie monster introduced by zecke:
"At night the cookie monster came by and
The expandKeys and update_data are the most expensive
operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
applying the skills from the not yet passed 'Entwurf und
Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster datastore operations."
to have faster update_data and expandKeys operations.
This is a trade-off between speed and memory again but
the speed is more critical here.
@@ -28,6 +26,11 @@ the speed is more critical here.
import sys, os, re
import hashlib
if sys.argv[0][-5:] == "pydoc":
path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
path = os.path.dirname(os.path.dirname(sys.argv[0]))
sys.path.insert(0, path)
from itertools import groupby
from bb import data_smart
@@ -67,6 +70,10 @@ def keys(d):
"""Return a list of keys in d"""
return d.keys()
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def expand(s, d, varname = None):
"""Variable expansion using the data store"""
return d.expand(s, varname)
@@ -114,8 +121,8 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if d.getVarFlag(var, 'python', False) and func:
return False
export = bb.utils.to_boolean(d.getVarFlag(var, "export"))
unexport = bb.utils.to_boolean(d.getVarFlag(var, "unexport"))
export = d.getVarFlag(var, "export", False)
unexport = d.getVarFlag(var, "unexport", False)
if not all and not export and not unexport and not func:
return False
@@ -188,8 +195,8 @@ def emit_env(o=sys.__stdout__, d = init(), all=False):
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
bb.utils.to_boolean(d.getVarFlag(key, 'export')) and
not bb.utils.to_boolean(d.getVarFlag(key, 'unexport')))
d.getVarFlag(key, 'export', False) and
not d.getVarFlag(key, 'unexport', False))
def exported_vars(d):
k = list(exported_keys(d))
@@ -261,71 +268,65 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps -= seen
def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparsedata):
def handle_contains(value, contains, exclusions, d):
newvalue = []
if value:
newvalue.append(str(value))
for k in sorted(contains):
if k in exclusions or k in ignored_vars:
continue
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue.append("\n%s{%s} = Unset" % (k, item))
break
else:
newvalue.append("\n%s{%s} = Set" % (k, item))
return "".join(newvalue)
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
r2 = d.expandWithRefs(r, None)
value += "\n_remove of %s" % r
deps |= r2.references
deps = deps | (keys & r2.execs)
return value
def update_data(d):
"""Performs final steps upon the datastore, including application of overrides"""
d.finalize(parent = True)
def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps = set()
try:
if key in mod_funcs:
exclusions = set()
moddep = bb.codeparser.modulecode_deps[key]
value = handle_contains("", moddep[3], exclusions, d)
return frozenset((moddep[0] | keys & moddep[1]) - ignored_vars), value
if key[-1] == ']':
vf = key[:-1].split('[')
if vf[1] == "vardepvalueexclude":
return deps, ""
value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
deps -= ignored_vars
return frozenset(deps), value
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
vardeps = varflags.get("vardeps")
exclusions = varflags.get("vardepsexclude", "").split()
def handle_contains(value, contains, d):
newvalue = ""
for k in sorted(contains):
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue += "\n%s{%s} = Unset" % (k, item)
break
else:
newvalue += "\n%s{%s} = Set" % (k, item)
if not newvalue:
return value
if not value:
return newvalue
return value + newvalue
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
r2 = d.expandWithRefs(r, None)
value += "\n_remove of %s" % r
deps |= r2.references
deps = deps | (keys & r2.execs)
return value
if "vardepvalue" in varflags:
value = varflags.get("vardepvalue")
elif varflags.get("func"):
if varflags.get("python"):
value = codeparsedata.getVarFlag(key, "_content", False)
value = d.getVarFlag(key, "_content", False)
parser = bb.codeparser.PythonParser(key, logger)
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
deps = deps | parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, exclusions, d)
value = handle_contains(value, parser.contains, d)
else:
value, parsedvar = codeparsedata.getVarFlag(key, "_content", False, retparser=True)
value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, exclusions, d)
value = handle_contains(value, parsedvar.contains, d)
if hasattr(parsedvar, "removes"):
value = handle_remove(value, deps, parsedvar.removes, d)
if vardeps is None:
@@ -340,7 +341,7 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
value, parser = d.getVarFlag(key, "_content", False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, exclusions, d)
value = handle_contains(value, parser.contains, d)
if hasattr(parser, "removes"):
value = handle_remove(value, deps, parser.removes, d)
@@ -360,50 +361,43 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
deps |= set(varfdeps)
deps |= set((vardeps or "").split())
deps -= set(exclusions)
deps -= ignored_vars
deps -= set(varflags.get("vardepsexclude", "").split())
except bb.parse.SkipRecipe:
raise
except Exception as e:
bb.warn("Exception during build_dependencies for %s" % key)
raise
return frozenset(deps), value
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
def generate_dependencies(d, ignored_vars):
def generate_dependencies(d, whitelist):
mod_funcs = set(bb.codeparser.modulecode_deps.keys())
keys = set(key for key in d if not key.startswith("__")) | mod_funcs
shelldeps = set(key for key in d.getVar("__exportlist", False) if bb.utils.to_boolean(d.getVarFlag(key, "export")) and not bb.utils.to_boolean(d.getVarFlag(key, "unexport")))
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS')
codeparserd = d.createCopy()
for forced in (d.getVar('BB_HASH_CODEPARSER_VALS') or "").split():
key, value = forced.split("=", 1)
codeparserd.setVar(key, value)
deps = {}
values = {}
tasklist = d.getVar('__BBTASKS', False) or []
for task in tasklist:
deps[task], values[task] = build_dependencies(task, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparserd)
deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, d)
newdeps = deps[task]
seen = set()
while newdeps:
nextdeps = newdeps
nextdeps = newdeps - whitelist
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep not in deps:
deps[dep], values[dep] = build_dependencies(dep, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparserd)
deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, d)
newdeps |= deps[dep]
newdeps -= seen
#print "For %s: %s" % (task, str(deps[task]))
return tasklist, deps, values
def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
taskdeps = {}
basehash = {}
@@ -412,10 +406,9 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
if data is None:
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
data = []
else:
data = [data]
data = ''
gendeps[task] -= whitelist
newdeps = gendeps[task]
seen = set()
while newdeps:
@@ -423,24 +416,27 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep in whitelist:
continue
gendeps[dep] -= whitelist
newdeps |= gendeps[dep]
newdeps -= seen
alldeps = sorted(seen)
for dep in alldeps:
data.append(dep)
data = data + dep
var = lookupcache[dep]
if var is not None:
data.append(str(var))
data = data + str(var)
k = fn + ":" + task
basehash[k] = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
taskdeps[task] = frozenset(seen)
basehash[k] = hashlib.sha256(data.encode("utf-8")).hexdigest()
taskdeps[task] = alldeps
return taskdeps, basehash
def inherits_class(klass, d):
val = d.getVar('__inherit_cache', False) or []
needle = '/%s.bbclass' % klass
needle = os.path.join('classes', '%s.bbclass' % klass)
for v in val:
if v.endswith(needle):
return True

View File

@@ -16,10 +16,7 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import builtins
import copy
import re
import sys
import copy, re, sys, traceback
from collections.abc import MutableMapping
import logging
import hashlib
@@ -32,22 +29,10 @@ logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = [":append", ":prepend", ":remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__expand_python_regexp__ = re.compile(r"\${@(?:{.*?}|.)+?}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')
bitbake_renamed_vars = {
"BB_ENV_WHITELIST": "BB_ENV_PASSTHROUGH",
"BB_ENV_EXTRAWHITE": "BB_ENV_PASSTHROUGH_ADDITIONS",
"BB_HASHBASE_WHITELIST": "BB_BASEHASH_IGNORE_VARS",
"BB_HASHCONFIG_WHITELIST": "BB_HASHCONFIG_IGNORE_VARS",
"BB_HASHTASK_WHITELIST": "BB_TASKHASH_IGNORE_TASKS",
"BB_SETSCENE_ENFORCE_WHITELIST": "BB_SETSCENE_ENFORCE_IGNORE_TASKS",
"MULTI_PROVIDER_WHITELIST": "BB_MULTI_PROVIDER_ALLOWED",
"BB_STAMP_WHITELIST": "is a deprecated variable and support has been removed",
"BB_STAMP_POLICY": "is a deprecated variable and support has been removed",
}
def infer_caller_details(loginfo, parent = False, varval = True):
"""Save the caller the trouble of specifying everything."""
# Save effort.
@@ -95,11 +80,10 @@ def infer_caller_details(loginfo, parent = False, varval = True):
loginfo['func'] = func
class VariableParse:
def __init__(self, varname, d, unexpanded_value = None, val = None):
def __init__(self, varname, d, val = None):
self.varname = varname
self.d = d
self.value = val
self.unexpanded_value = unexpanded_value
self.references = set()
self.execs = set()
@@ -123,11 +107,6 @@ class VariableParse:
else:
code = match.group()[3:-1]
# Do not run code that contains one or more unexpanded variables
# instead return the code with the characters we removed put back
if __expand_var_regexp__.findall(code):
return "${@" + code + "}"
if self.varname:
varname = 'Var <%s>' % self.varname
else:
@@ -153,21 +132,16 @@ class VariableParse:
value = utils.better_eval(codeobj, DataContext(self.d), {'d' : self.d})
return str(value)
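A small sketch of what the unexpanded-variable check above does in the version that carries it, assuming a bare DataSmart store: inline python is only executed once every ${...} reference inside it has been expanded:

import bb.data_smart

d = bb.data_smart.DataSmart()
d.setVar("GREETING", "hello")
# Fully resolvable python expression: evaluated as usual
print(d.expand("${@d.getVar('GREETING')}"))       # hello
# Code still containing an unexpanded ${...}: returned verbatim, not run
print(d.expand("${@d.getVar('${UNSET_NAME}')}"))  # ${@d.getVar('${UNSET_NAME}')}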
class DataContext(dict):
excluded = set([i for i in dir(builtins) if not i.startswith('_')] + ['oe'])
class DataContext(dict):
def __init__(self, metadata, **kwargs):
self.metadata = metadata
dict.__init__(self, **kwargs)
self['d'] = metadata
self.context = set(bb.utils.get_context())
def __missing__(self, key):
if key in self.excluded or key in self.context:
raise KeyError(key)
value = self.metadata.getVar(key)
if value is None:
if value is None or self.metadata.getVarFlag(key, 'func', False):
raise KeyError(key)
else:
return value
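A short sketch of the lookup behaviour DataContext provides, assuming a plain DataSmart store: bare names inside ${@...} fall through __missing__ to the datastore:

import bb.data_smart

d = bb.data_smart.DataSmart()
d.setVar("COUNT", "3")
# COUNT is not a python name; __missing__ resolves it via getVar()
print(d.expand("${@int(COUNT) * 2}"))  # 6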
@@ -362,16 +336,6 @@ class VariableHistory(object):
lines.append(line)
return lines
def get_variable_refs(self, var):
"""Return a dict of file/line references"""
var_history = self.variable(var)
refs = {}
for event in var_history:
if event['file'] not in refs:
refs[event['file']] = []
refs[event['file']].append(event['line'])
return refs
def get_variable_items_files(self, var):
"""
Use variable history to map items added to a list variable and
@@ -406,23 +370,6 @@ class VariableHistory(object):
else:
self.variables[var] = []
def _print_rename_error(var, loginfo, renamedvars, fullvar=None):
info = ""
if "file" in loginfo:
info = " file: %s" % loginfo["file"]
if "line" in loginfo:
info += " line: %s" % loginfo["line"]
if fullvar and fullvar != var:
info += " referenced as: %s" % fullvar
if info:
info = " (%s)" % info.strip()
renameinfo = renamedvars[var]
if " " in renameinfo:
# A space signals a string to display instead of a rename
bb.erroronce('Variable %s %s%s' % (var, renameinfo, info))
else:
bb.erroronce('Variable %s has been renamed to %s%s' % (var, renameinfo, info))
class DataSmart(MutableMapping):
def __init__(self):
self.dict = {}
@@ -430,8 +377,6 @@ class DataSmart(MutableMapping):
self.inchistory = IncludeHistory()
self.varhistory = VariableHistory(self)
self._tracking = False
self._var_renames = {}
self._var_renames.update(bitbake_renamed_vars)
self.expand_cache = {}
@@ -453,9 +398,9 @@ class DataSmart(MutableMapping):
def expandWithRefs(self, s, varname):
if not isinstance(s, str): # sanity check
return VariableParse(varname, self, s, s)
return VariableParse(varname, self, s)
varparse = VariableParse(varname, self, s)
varparse = VariableParse(varname, self)
while s.find('${') != -1:
olds = s
@@ -487,19 +432,24 @@ class DataSmart(MutableMapping):
def expand(self, s, varname = None):
return self.expandWithRefs(s, varname).value
def finalize(self, parent = False):
return
def internal_finalize(self, parent = False):
"""Performs final steps upon the datastore, including application of overrides"""
self.overrides = None
def need_overrides(self):
if self.overrides is not None:
return
if self.inoverride:
return
overrride_stack = []
for count in range(5):
self.inoverride = True
# Can end up here recursively so setup dummy values
self.overrides = []
self.overridesset = set()
self.overrides = (self.getVar("OVERRIDES") or "").split(":") or []
overrride_stack.append(self.overrides)
self.overridesset = set(self.overrides)
self.inoverride = False
self.expand_cache = {}
@@ -509,7 +459,7 @@ class DataSmart(MutableMapping):
self.overrides = newoverrides
self.overridesset = set(self.overrides)
else:
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work. The list of failing override expansions: %s" % "\n".join(str(s) for s in overrride_stack))
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work.")
def initVar(self, var):
self.expand_cache = {}
@@ -520,44 +470,36 @@ class DataSmart(MutableMapping):
dest = self.dict
while dest:
if var in dest:
return dest[var]
return dest[var], self.overridedata.get(var, None)
if "_data" not in dest:
break
dest = dest["_data"]
return None
return None, self.overridedata.get(var, None)
def _makeShadowCopy(self, var):
if var in self.dict:
return
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
if local_var:
self.dict[var] = copy.copy(local_var)
else:
self.initVar(var)
def hasOverrides(self, var):
return var in self.overridedata
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
if not var.startswith("__anon_") and ("_append" in var or "_prepend" in var or "_remove" in var):
if "_append" in var or "_prepend" in var or "_remove" in var:
info = "%s" % var
if "file" in loginfo:
info += " file: %s" % loginfo["file"]
if "line" in loginfo:
info += " line: %s" % loginfo["line"]
if "filename" in loginfo:
info += " file: %s" % loginfo[filename]
if "lineno" in loginfo:
info += " line: %s" % loginfo[lineno]
bb.fatal("Variable %s contains an operation using the old override syntax. Please convert this layer/metadata before attempting to use with a newer bitbake." % info)
shortvar = var.split(":", 1)[0]
if shortvar in self._var_renames:
_print_rename_error(shortvar, loginfo, self._var_renames, fullvar=var)
# Mark that we have seen a renamed variable
self.setVar("_FAILPARSINGERRORHANDLED", True)
self.expand_cache = {}
parsing=False
if 'parsing' in loginfo:
@@ -639,7 +581,7 @@ class DataSmart(MutableMapping):
nextnew.update(vardata.references)
nextnew.update(vardata.contains.keys())
new = nextnew
self.overrides = None
self.internal_finalize(True)
def _setvar_update_overrides(self, var, **loginfo):
# aka pay the cookie monster
@@ -679,11 +621,10 @@ class DataSmart(MutableMapping):
self.varhistory.record(**loginfo)
self.setVar(newkey, val, ignore=True, parsing=True)
srcflags = self.getVarFlags(key, False, True) or {}
for i in srcflags:
if i not in (__setvar_keyword__):
for i in (__setvar_keyword__):
src = self.getVarFlag(key, i, False)
if src is None:
continue
src = srcflags[i]
dest = self.getVarFlag(newkey, i, False) or []
dest.extend(src)
@@ -726,7 +667,7 @@ class DataSmart(MutableMapping):
if ':' in var:
override = var[var.rfind(':')+1:]
shortvar = var[:var.rfind(':')]
while override and __override_regexp__.match(override):
while override and override.islower():
try:
if shortvar in self.overridedata:
# Force CoW by recreating the list first
@@ -744,14 +685,6 @@ class DataSmart(MutableMapping):
def setVarFlag(self, var, flag, value, **loginfo):
self.expand_cache = {}
if var == "BB_RENAMED_VARIABLES":
self._var_renames[flag] = value
if var in self._var_renames:
_print_rename_error(var, loginfo, self._var_renames)
# Mark that we have seen a renamed variable
self.setVar("_FAILPARSINGERRORHANDLED", True)
if 'op' not in loginfo:
loginfo['op'] = "set"
loginfo['flag'] = flag
@@ -781,18 +714,13 @@ class DataSmart(MutableMapping):
return None
cachename = var + "[" + flag + "]"
if not expand and retparser and cachename in self.expand_cache:
return self.expand_cache[cachename].unexpanded_value, self.expand_cache[cachename]
if expand and cachename in self.expand_cache:
return self.expand_cache[cachename].value
local_var = self._findVar(var)
local_var, overridedata = self._findVar(var)
value = None
removes = set()
if flag == "_content" and not parsing:
overridedata = self.overridedata.get(var, None)
if flag == "_content" and not parsing and overridedata is not None:
if flag == "_content" and overridedata is not None and not parsing:
match = False
active = {}
self.need_overrides()
@@ -882,7 +810,7 @@ class DataSmart(MutableMapping):
expanded_removes[r] = self.expand(r).split()
parser.removes = set()
val = []
val = ""
for v in __whitespace_split__.split(parser.value):
skip = False
for r in removes:
@@ -891,8 +819,8 @@ class DataSmart(MutableMapping):
skip = True
if skip:
continue
val.append(v)
parser.value = "".join(val)
val = val + v
parser.value = val
if expand:
value = parser.value
@@ -907,7 +835,7 @@ class DataSmart(MutableMapping):
def delVarFlag(self, var, flag, **loginfo):
self.expand_cache = {}
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
if not local_var:
return
if not var in self.dict:
@@ -950,7 +878,7 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
flags = {}
if local_var:
@@ -996,7 +924,6 @@ class DataSmart(MutableMapping):
data.inchistory = self.inchistory.copy()
data._tracking = self._tracking
data._var_renames = self._var_renames
data.overrides = None
data.overridevars = copy.copy(self.overridevars)
@@ -1019,7 +946,7 @@ class DataSmart(MutableMapping):
value = self.getVar(variable, False)
for key in keys:
referrervalue = self.getVar(key, False)
if referrervalue and isinstance(referrervalue, str) and ref in referrervalue:
if referrervalue and ref in referrervalue:
self.setVar(key, referrervalue.replace(ref, value))
def localkeys(self):
@@ -1085,10 +1012,10 @@ class DataSmart(MutableMapping):
d = self.createCopy()
bb.data.expandKeys(d)
config_ignore_vars = set((d.getVar("BB_HASHCONFIG_IGNORE_VARS") or "").split())
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST") or "").split())
keys = set(key for key in iter(d) if not key.startswith("__"))
for key in keys:
if key in config_ignore_vars:
if key in config_whitelist:
continue
value = d.getVar(key, False) or ""

View File

@@ -40,7 +40,7 @@ class HeartbeatEvent(Event):
"""Triggered at regular time intervals of 10 seconds. Other events can fire much more often
(runQueueTaskStarted when there are many short tasks) or not at all for long periods
of time (again runQueueTaskStarted, when there is just one long-running task), so this
event is more suitable for doing some task-independent work occasionally."""
event is more suitable for doing some task-independent work occassionally."""
def __init__(self, time):
Event.__init__(self)
self.time = time
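A sketch of consuming the event, assuming the bb.event.register() API shown later in this file and the two-argument handler signature of the newer code (handler name is hypothetical):

import bb.event

def on_heartbeat(e, d):
    # Fires roughly every 10 seconds while the server is busy
    print("heartbeat at", e.time)

bb.event.register("on_heartbeat", on_heartbeat, mask=["bb.event.HeartbeatEvent"])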
@@ -68,39 +68,29 @@ _catchall_handlers = {}
_eventfilter = None
_uiready = False
_thread_lock = threading.Lock()
_heartbeat_enabled = False
_should_exit = threading.Event()
_thread_lock_enabled = False
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
def enable_threadlock():
# Always needed now
return
global _thread_lock_enabled
_thread_lock_enabled = True
def disable_threadlock():
# Always needed now
return
def enable_heartbeat():
global _heartbeat_enabled
_heartbeat_enabled = True
def disable_heartbeat():
global _heartbeat_enabled
_heartbeat_enabled = False
#
# In long running code, this function should be called periodically
# to check if we should exit due to an interruption (e.g. Ctrl+C from the UI)
#
def check_for_interrupts(d):
global _should_exit
if _should_exit.is_set():
bb.warn("Exiting due to interrupt.")
raise bb.BBHandledException()
global _thread_lock_enabled
_thread_lock_enabled = False
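A sketch of the call pattern check_for_interrupts() expects from long-running metadata code (the function below is hypothetical):

import bb.event

def do_long_running_work(d):
    for n in range(1000000):
        # ... one unit of work ...
        if n % 1000 == 0:
            # Raises bb.BBHandledException once the UI has requested an exit
            bb.event.check_for_interrupts(d)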
def execute_handler(name, handler, event, d):
event.data = d
addedd = False
if 'd' not in builtins:
builtins['d'] = d
addedd = True
try:
ret = handler(event, d)
ret = handler(event)
except (bb.parse.SkipRecipe, bb.BBHandledException):
raise
except Exception:
@@ -114,7 +104,8 @@ def execute_handler(name, handler, event, d):
raise
finally:
del event.data
if addedd:
del builtins['d']
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
@@ -141,14 +132,8 @@ def print_ui_queue():
if not _uiready:
from bb.msg import BBLogFormatter
# Flush any existing buffered content
try:
sys.stdout.flush()
except:
pass
try:
sys.stderr.flush()
except:
pass
sys.stdout.flush()
sys.stderr.flush()
stdout = logging.StreamHandler(sys.stdout)
stderr = logging.StreamHandler(sys.stderr)
formatter = BBLogFormatter("%(levelname)s: %(message)s")
@@ -189,30 +174,36 @@ def print_ui_queue():
def fire_ui_handlers(event, d):
global _thread_lock
global _thread_lock_enabled
if not _uiready:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
with bb.utils.lock_timeout(_thread_lock):
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
if not _ui_logfilters[h].filter(event):
continue
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
except:
errors.append(h)
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.acquire()
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
if not _ui_logfilters[h].filter(event):
continue
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
except:
errors.append(h)
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.release()
def fire(event, d):
"""Fire off an Event"""
@@ -256,12 +247,12 @@ def register(name, handler, mask=None, filename=None, lineno=None, data=None):
if handler is not None:
# handle string containing python code
if isinstance(handler, str):
tmp = "def %s(e, d):\n%s" % (name, handler)
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = bb.methodpool.compile_cache(tmp)
if not code:
if filename is None:
filename = "%s(e, d)" % name
filename = "%s(e)" % name
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
if lineno is not None:
ast.increment_lineno(code, lineno-1)
@@ -326,23 +317,21 @@ def set_eventfilter(func):
_eventfilter = func
def register_UIHhandler(handler, mainui=False):
with bb.utils.lock_timeout(_thread_lock):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
def unregister_UIHhandler(handlerNum, mainui=False):
if mainui:
global _uiready
_uiready = False
with bb.utils.lock_timeout(_thread_lock):
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
return
def get_uihandler():
@@ -497,7 +486,7 @@ class BuildCompleted(BuildBase, OperationCompleted):
BuildBase.__init__(self, n, p, failures)
class DiskFull(Event):
"""Disk full case build halted"""
"""Disk full case build aborted"""
def __init__(self, dev, type, freespace, mountpoint):
Event.__init__(self)
self._dev = dev
@@ -775,7 +764,7 @@ class LogHandler(logging.Handler):
class MetadataEvent(Event):
"""
Generic event targeted at OE-Core classes
to report information during asynchronous execution
to report information during asynchrous execution
"""
def __init__(self, eventtype, eventdata):
Event.__init__(self)
@@ -856,11 +845,3 @@ class FindSigInfoResult(Event):
def __init__(self, result):
Event.__init__(self)
self.result = result
class ParseError(Event):
"""
Event to indicate parse failed
"""
def __init__(self, msg):
super().__init__()
self._msg = msg

View File

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

View File

@@ -1,57 +0,0 @@
There are expectations of users of the fetcher code. This file attempts to document
some of the constraints that are present. Some are obvious, some are less so. It is
documented in the context of how OE uses it but the API calls are generic.
a) network access for sources is only expected to happen in the do_fetch task.
This is not enforced or tested but is required so that we can:
i) audit the sources used (i.e. for license/manifest reasons)
ii) support offline builds with a suitable cache
iii) allow work to continue even with downtime upstream
iv) allow for changes upstream in incompatible ways
v) allow rebuilding of the software in X years time
b) network access is not expected in do_unpack task.
c) you can take DL_DIR and use it as a mirror for offline builds.
d) access to the network is only made when explicitly configured in recipes
(e.g. use of AUTOREV, or use of git tags which change revision).
e) fetcher output is deterministic (i.e. if you fetch configuration XXX now it
will match in future exactly in a clean build with a new DL_DIR).
One specific pain point example are git tags. They can be replaced and change
so the git fetcher has to resolve them with the network. We use git revisions
where possible to avoid this and ensure determinism.
f) network access is expected to work with the standard linux proxy variables
so that access behind firewalls works (the fetcher sets these in the
environment but only in the do_fetch tasks).
g) access during parsing has to be minimal, a "git ls-remote" for an AUTOREV
git recipe might be ok but you can't expect to checkout a git tree.
h) we need to provide revision information during parsing such that a version
for the recipe can be constructed.
i) versions are expected to be able to increase in a way which sorts allowing
package feeds to operate (see PR server required for git revisions to sort).
j) API to query for possible version upgrades of a url is highly desirable to
allow our automated upgrade code to function (it is implied this always
has network access).
k) Where fixes or changes to behaviour in the fetcher are made, we ask that
test cases are added (run with "bitbake-selftest bb.tests.fetch"). We do
have fairly extensive test coverage of the fetcher as it is the only way
to track all of its corner cases, though sadly it still doesn't give
complete coverage.
l) If using tools during parse time, they will have to be in ASSUME_PROVIDED
in OE's context as we can't build git-native, then parse a recipe and use
git ls-remote.
Not all fetchers support all features, autorev is optional and doesn't make
sense for some. Upgrade detection means different things in different contexts
too.
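A minimal sketch of the consumer-side API that these constraints shape, assuming a populated datastore d and the standard bb.fetch2 entry points:

import bb.fetch2

def fetch_and_unpack(d):
    # Network access happens here, i.e. in do_fetch (constraints a, f)
    fetcher = bb.fetch2.Fetch(d.getVar("SRC_URI").split(), d)
    fetcher.download()
    # Unpacking must then work offline (constraint b)
    fetcher.unpack(d.getVar("WORKDIR"))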

View File

@@ -113,7 +113,7 @@ class MissingParameterError(BBFetchException):
self.args = (missing, url)
class ParameterError(BBFetchException):
"""Exception raised when a url cannot be processed due to invalid parameters."""
"""Exception raised when a url cannot be proccessed due to invalid parameters."""
def __init__(self, message, url):
msg = "URL: '%s' has invalid parameters. %s" % (url, message)
self.url = url
@@ -182,7 +182,7 @@ class URI(object):
Some notes about relative URIs: while it's specified that
a URI beginning with <scheme>:// should either be directly
followed by a hostname or a /, the old URI handling of the
fetch2 library did not conform to this. Therefore, this URI
fetch2 library did not comform to this. Therefore, this URI
class has some kludges to make sure that URIs are parsed in
a way conforming to bitbake's current usage. This URI class
supports the following:
@@ -199,7 +199,7 @@ class URI(object):
file://hostname/absolute/path.diff (would be IETF compliant)
Note that the last case only applies to a list of
explicitly allowed schemes (currently only file://), that requires
"whitelisted" schemes (currently only file://), that requires
its URIs to not have a network location.
"""
@@ -388,7 +388,7 @@ def decodeurl(url):
if s:
if not '=' in s:
raise MalformedUrl(url, "The URL: '%s' is invalid: parameter %s does not specify a value (missing '=')" % (url, s))
s1, s2 = s.split('=', 1)
s1, s2 = s.split('=')
p[s1] = s2
return type, host, urllib.parse.unquote(path), user, pswd, p
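A round-trip sketch of decodeurl() and its inverse encodeurl(), whose diff follows; the tuple layout matches the return statement above:

from bb.fetch2 import decodeurl, encodeurl

decoded = decodeurl("https://example.com/src/pkg.tar.gz;striplevel=1")
(utype, host, path, user, pswd, params) = decoded
print(utype, host, path, params)  # https example.com /src/pkg.tar.gz {'striplevel': '1'}
print(encodeurl(decoded))         # https://example.com/src/pkg.tar.gz;striplevel=1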
@@ -402,24 +402,24 @@ def encodeurl(decoded):
if not type:
raise MissingParameterError('type', "encoded from the data %s" % str(decoded))
url = ['%s://' % type]
url = '%s://' % type
if user and type != "file":
url.append("%s" % user)
url += "%s" % user
if pswd:
url.append(":%s" % pswd)
url.append("@")
url += ":%s" % pswd
url += "@"
if host and type != "file":
url.append("%s" % host)
url += "%s" % host
if path:
# Standardise path to ensure comparisons work
while '//' in path:
path = path.replace("//", "/")
url.append("%s" % urllib.parse.quote(path))
url += "%s" % urllib.parse.quote(path)
if p:
for parm in p:
url.append(";%s=%s" % (parm, p[parm]))
url += ";%s=%s" % (parm, p[parm])
return "".join(url)
return url
def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
if not ud.url or not uri_find or not uri_replace:
@@ -469,18 +469,14 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
basename = os.path.basename(mirrortarball)
# Kill parameters, they make no sense for mirror tarballs
uri_decoded[5] = {}
uri_find_decoded[5] = {}
elif ud.localpath and ud.method.supports_checksum(ud):
basename = os.path.basename(ud.localpath)
if basename:
uri_basename = os.path.basename(uri_decoded[loc])
# Prefix with a slash as a sentinel in case
# result_decoded[loc] does not contain one.
path = "/" + result_decoded[loc]
if uri_basename and basename != uri_basename and path.endswith("/" + uri_basename):
result_decoded[loc] = path[1:-len(uri_basename)] + basename
elif not path.endswith("/" + basename):
result_decoded[loc] = os.path.join(path[1:], basename)
if uri_basename and basename != uri_basename and result_decoded[loc].endswith(uri_basename):
result_decoded[loc] = result_decoded[loc].replace(uri_basename, basename)
elif not result_decoded[loc].endswith(basename):
result_decoded[loc] = os.path.join(result_decoded[loc], basename)
else:
return None
result = encodeurl(result_decoded)
@@ -518,7 +514,7 @@ def fetcher_init(d):
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
_checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))
_checksum_cache.init_cache(d)
for m in methods:
if hasattr(m, "init"):
@@ -546,7 +542,7 @@ def mirror_from_string(data):
bb.warn('Invalid mirror data %s, should have paired members.' % data)
return list(zip(*[iter(mirrors)]*2))
def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True):
def verify_checksum(ud, d, precomputed={}):
"""
verify the MD5 and SHA256 checksum for downloaded src
@@ -560,19 +556,17 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
file against those in the recipe each time, rather than only after
downloading. See https://bugzilla.yoctoproject.org/show_bug.cgi?id=5571.
"""
if ud.ignore_checksums or not ud.method.supports_checksum(ud):
return {}
if localpath is None:
localpath = ud.localpath
def compute_checksum_info(checksum_id):
checksum_name = getattr(ud, "%s_name" % checksum_id)
if checksum_id in precomputed:
checksum_data = precomputed[checksum_id]
else:
checksum_data = getattr(bb.utils, "%s_file" % checksum_id)(localpath)
checksum_data = getattr(bb.utils, "%s_file" % checksum_id)(ud.localpath)
checksum_expected = getattr(ud, "%s_expected" % checksum_id)
@@ -598,13 +592,17 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
checksum_lines = ["SRC_URI[%s] = \"%s\"" % (ci["name"], ci["data"])]
# If no checksum has been provided
if fatal_nochecksum and ud.method.recommends_checksum(ud) and all(ci["expected"] is None for ci in checksum_infos):
if ud.method.recommends_checksum(ud) and all(ci["expected"] is None for ci in checksum_infos):
messages = []
strict = d.getVar("BB_STRICT_CHECKSUM") or "0"
# If strict checking enabled and neither sum defined, raise error
if strict == "1":
raise NoChecksumError("\n".join(checksum_lines))
messages.append("No checksum specified for '%s', please add at " \
"least one to the recipe:" % ud.localpath)
messages.extend(checksum_lines)
logger.error("\n".join(messages))
raise NoChecksumError("Missing SRC_URI checksum", ud.url)
bb.event.fire(MissingChecksumEvent(ud.url, **checksum_event), d)
@@ -626,7 +624,7 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
for ci in checksum_infos:
if ci["expected"] and ci["expected"] != ci["data"]:
messages.append("File: '%s' has %s checksum '%s' when '%s' was " \
"expected" % (localpath, ci["id"], ci["data"], ci["expected"]))
"expected" % (ud.localpath, ci["id"], ci["data"], ci["expected"]))
bad_checksum = ci["data"]
if bad_checksum:
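The per-algorithm helpers resolved via getattr(bb.utils, ...) above are plain functions; a standalone sketch of the same comparison for sha256:

import bb.utils

def check_sha256(localpath, expected):
    data = bb.utils.sha256_file(localpath)
    if expected and data != expected:
        raise ValueError("File: '%s' has sha256 checksum '%s' when '%s' was expected"
                         % (localpath, data, expected))
    return data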
@@ -744,16 +742,13 @@ def subprocess_setup():
# SIGPIPE errors are known issues with gzip/bash
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def mark_recipe_nocache(d):
def get_autorev(d):
# only not cache src rev in autorev case
if d.getVar('BB_SRCREV_POLICY') != "cache":
d.setVar('BB_DONT_CACHE', '1')
def get_autorev(d):
mark_recipe_nocache(d)
d.setVar("__BBAUTOREV_SEEN", True)
return "AUTOINC"
def _get_srcrev(d, method_name='sortable_revision'):
def get_srcrev(d, method_name='sortable_revision'):
"""
Return the revision string, usually for use in the version string (PV) of the current package
Most packages usually only have one SCM so we just pass on the call.
@@ -767,34 +762,29 @@ def _get_srcrev(d, method_name='sortable_revision'):
that fetcher provides a method with the given name and the same signature as sortable_revision.
"""
d.setVar("__BBSRCREV_SEEN", "1")
recursion = d.getVar("__BBINSRCREV")
if recursion:
raise FetchError("There are recursive references in fetcher variables, likely through SRC_URI")
d.setVar("__BBINSRCREV", True)
scms = []
revs = []
fetcher = Fetch(d.getVar('SRC_URI').split(), d)
urldata = fetcher.ud
for u in urldata:
if urldata[u].method.supports_srcrev():
scms.append(u)
if not scms:
d.delVar("__BBINSRCREV")
return "", revs
if len(scms) == 0:
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
if len(scms) == 1 and len(urldata[scms[0]].names) == 1:
autoinc, rev = getattr(urldata[scms[0]].method, method_name)(urldata[scms[0]], d, urldata[scms[0]].names[0])
revs.append(rev)
if len(rev) > 10:
rev = rev[:10]
d.delVar("__BBINSRCREV")
if autoinc:
return "AUTOINC+" + rev, revs
return rev, revs
return "AUTOINC+" + rev
return rev
#
# Multiple SCMs are in SRC_URI so we resort to SRCREV_FORMAT
@@ -810,7 +800,6 @@ def _get_srcrev(d, method_name='sortable_revision'):
ud = urldata[scm]
for name in ud.names:
autoinc, rev = getattr(ud.method, method_name)(ud, d, name)
revs.append(rev)
seenautoinc = seenautoinc or autoinc
if len(rev) > 10:
rev = rev[:10]
@@ -828,21 +817,7 @@ def _get_srcrev(d, method_name='sortable_revision'):
format = "AUTOINC+" + format
d.delVar("__BBINSRCREV")
return format, revs
def get_hashvalue(d, method_name='sortable_revision'):
pkgv, revs = _get_srcrev(d, method_name=method_name)
return " ".join(revs)
def get_pkgv_string(d, method_name='sortable_revision'):
pkgv, revs = _get_srcrev(d, method_name=method_name)
return pkgv
def get_srcrev(d, method_name='sortable_revision'):
pkgv, revs = _get_srcrev(d, method_name=method_name)
if not pkgv:
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
return pkgv
return format
def localpath(url, d):
fetcher = bb.fetch2.Fetch([url], d)
@@ -860,7 +835,6 @@ FETCH_EXPORT_VARS = ['HOME', 'PATH',
'ALL_PROXY', 'all_proxy',
'GIT_PROXY_COMMAND',
'GIT_SSH',
'GIT_SSH_COMMAND',
'GIT_SSL_CAINFO',
'GIT_SMART_HTTP',
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
@@ -868,24 +842,10 @@ FETCH_EXPORT_VARS = ['HOME', 'PATH',
'DBUS_SESSION_BUS_ADDRESS',
'P4CONFIG',
'SSL_CERT_FILE',
'NODE_EXTRA_CA_CERTS',
'AWS_PROFILE',
'AWS_ACCESS_KEY_ID',
'AWS_SECRET_ACCESS_KEY',
'AWS_DEFAULT_REGION',
'GIT_CACHE_PATH',
'SSL_CERT_DIR']
def get_fetcher_environment(d):
newenv = {}
origenv = d.getVar("BB_ORIGENV")
for name in bb.fetch2.FETCH_EXPORT_VARS:
value = d.getVar(name)
if not value and origenv:
value = origenv.getVar(name)
if value:
newenv[name] = value
return newenv
'AWS_DEFAULT_REGION']
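
get_fetcher_environment() builds a dict from FETCH_EXPORT_VARS, falling back to the original environment captured in BB_ORIGENV, and callers scope it around fetch commands via bb.utils.environment(). A minimal standalone sketch of that scoping pattern, using only the standard library (variable names and values here are illustrative, not the BitBake API):

    import os
    import subprocess
    from contextlib import contextmanager

    @contextmanager
    def scoped_environment(**overrides):
        # Save current values, apply overrides, restore them on exit.
        saved = {k: os.environ.get(k) for k in overrides}
        os.environ.update(overrides)
        try:
            yield
        finally:
            for k, v in saved.items():
                if v is None:
                    os.environ.pop(k, None)
                else:
                    os.environ[k] = v

    # Hypothetical result of collecting the export list from the datastore.
    newenv = {"GIT_SSH_COMMAND": "ssh -o BatchMode=yes"}
    with scoped_environment(**newenv):
        subprocess.run(["git", "--version"], check=False)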
def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
"""
@@ -1001,7 +961,6 @@ def build_mirroruris(origud, mirrors, ld):
try:
newud = FetchData(newuri, ld)
newud.ignore_checksums = True
newud.setup_localpath(ld)
except bb.fetch2.BBFetchException as e:
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
@@ -1122,8 +1081,6 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
def ensure_symlink(target, link_name):
if not os.path.exists(link_name):
dirname = os.path.dirname(link_name)
bb.utils.mkdirhier(dirname)
if os.path.islink(link_name):
# Broken symbolic link
os.unlink(link_name)
@@ -1236,7 +1193,6 @@ def srcrev_internal_helper(ud, d, name):
if srcrev == "INVALID" or not srcrev:
raise FetchError("Please set a valid SRCREV for url %s (possible key names are %s, or use a ;rev=X URL parameter)" % (str(attempts), ud.url), ud.url)
if srcrev == "AUTOINC":
d.setVar("__BBAUTOREV_ACTED_UPON", True)
srcrev = ud.method.latest_revision(ud, d, name)
return srcrev
@@ -1248,21 +1204,23 @@ def get_checksum_file_list(d):
SRC_URI as a space-separated string
"""
fetch = Fetch([], d, cache = False, localonly = True)
dl_dir = d.getVar('DL_DIR')
filelist = []
for u in fetch.urls:
ud = fetch.ud[u]
if ud and isinstance(ud.method, local.Local):
found = False
paths = ud.method.localfile_searchpaths(ud, d)
paths = ud.method.localpaths(ud, d)
for f in paths:
pth = ud.decodedurl
if os.path.exists(f):
found = True
if f.startswith(dl_dir):
# The local fetcher's behaviour is to return a path under DL_DIR if it couldn't find the file anywhere else
if os.path.exists(f):
bb.warn("Getting checksum for %s SRC_URI entry %s: file not found except in DL_DIR" % (d.getVar('PN'), os.path.basename(f)))
else:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: file could not be found" % (d.getVar('PN'), os.path.basename(f)))
filelist.append(f + ":" + str(os.path.exists(f)))
if not found:
bb.fatal(("Unable to get checksum for %s SRC_URI entry %s: file could not be found"
"\nThe following paths were searched:"
"\n%s") % (d.getVar('PN'), os.path.basename(f), '\n'.join(paths)))
return " ".join(filelist)
@@ -1309,13 +1267,18 @@ class FetchData(object):
if checksum_name in self.parm:
checksum_expected = self.parm[checksum_name]
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az", "crate", "gs"]:
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az"]:
checksum_expected = None
else:
checksum_expected = d.getVarFlag("SRC_URI", checksum_name)
setattr(self, "%s_expected" % checksum_id, checksum_expected)
for checksum_id in CHECKSUM_LIST:
configure_checksum(checksum_id)
self.ignore_checksums = False
self.names = self.parm.get("name",'default').split(',')
self.method = None
@@ -1337,11 +1300,6 @@ class FetchData(object):
if hasattr(self.method, "urldata_init"):
self.method.urldata_init(self, d)
for checksum_id in CHECKSUM_LIST:
configure_checksum(checksum_id)
self.ignore_checksums = False
if "localpath" in self.parm:
# if user sets localpath for file, use it instead.
self.localpath = self.parm["localpath"]
@@ -1421,9 +1379,6 @@ class FetchMethod(object):
Is localpath something that can be represented by a checksum?
"""
# We cannot compute checksums for None
if urldata.localpath is None:
return False
# We cannot compute checksums for directories
if os.path.isdir(urldata.localpath):
return False
@@ -1503,33 +1458,30 @@ class FetchMethod(object):
cmd = None
if unpack:
tar_cmd = 'tar --extract --no-same-owner'
if 'striplevel' in urldata.parm:
tar_cmd += ' --strip-components=%s' % urldata.parm['striplevel']
if file.endswith('.tar'):
cmd = '%s -f %s' % (tar_cmd, file)
cmd = 'tar x --no-same-owner -f %s' % file
elif file.endswith('.tgz') or file.endswith('.tar.gz') or file.endswith('.tar.Z'):
cmd = '%s -z -f %s' % (tar_cmd, file)
cmd = 'tar xz --no-same-owner -f %s' % file
elif file.endswith('.tbz') or file.endswith('.tbz2') or file.endswith('.tar.bz2'):
cmd = 'bzip2 -dc %s | %s -f -' % (file, tar_cmd)
cmd = 'bzip2 -dc %s | tar x --no-same-owner -f -' % file
elif file.endswith('.gz') or file.endswith('.Z') or file.endswith('.z'):
cmd = 'gzip -dc %s > %s' % (file, efile)
elif file.endswith('.bz2'):
cmd = 'bzip2 -dc %s > %s' % (file, efile)
elif file.endswith('.txz') or file.endswith('.tar.xz'):
cmd = 'xz -dc %s | %s -f -' % (file, tar_cmd)
cmd = 'xz -dc %s | tar x --no-same-owner -f -' % file
elif file.endswith('.xz'):
cmd = 'xz -dc %s > %s' % (file, efile)
elif file.endswith('.tar.lz'):
cmd = 'lzip -dc %s | %s -f -' % (file, tar_cmd)
cmd = 'lzip -dc %s | tar x --no-same-owner -f -' % file
elif file.endswith('.lz'):
cmd = 'lzip -dc %s > %s' % (file, efile)
elif file.endswith('.tar.7z'):
cmd = '7z x -so %s | %s -f -' % (file, tar_cmd)
cmd = '7z x -so %s | tar x --no-same-owner -f -' % file
elif file.endswith('.7z'):
cmd = '7za x -y %s 1>/dev/null' % file
elif file.endswith('.tzst') or file.endswith('.tar.zst'):
cmd = 'zstd --decompress --stdout %s | %s -f -' % (file, tar_cmd)
cmd = 'zstd --decompress --stdout %s | tar x --no-same-owner -f -' % file
elif file.endswith('.zst'):
cmd = 'zstd --decompress --stdout %s > %s' % (file, efile)
elif file.endswith('.zip') or file.endswith('.jar'):
@@ -1562,7 +1514,7 @@ class FetchMethod(object):
raise UnpackError("Unable to unpack deb/ipk package - does not contain data.tar.* file", urldata.url)
else:
raise UnpackError("Unable to unpack deb/ipk package - could not list contents", urldata.url)
cmd = 'ar x %s %s && %s -p -f %s && rm %s' % (file, datafile, tar_cmd, datafile, datafile)
cmd = 'ar x %s %s && tar --no-same-owner -xpf %s && rm %s' % (file, datafile, datafile, datafile)
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
if 'subdir' in urldata.parm:
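
One side of this hunk threads a shared tar_cmd (with an optional --strip-components taken from the striplevel URL parameter) through every archive type instead of repeating the tar invocation. A compact sketch of that dispatch, assuming GNU tar is available:

    import shlex

    def unpack_cmd(filename, striplevel=None):
        # Build the shared tar front-end, then pick a decompressor by suffix.
        tar = "tar --extract --no-same-owner"
        if striplevel is not None:
            tar += " --strip-components=%d" % striplevel
        if filename.endswith((".tgz", ".tar.gz", ".tar.Z")):
            return "%s -z -f %s" % (tar, shlex.quote(filename))
        if filename.endswith(".tar"):
            return "%s -f %s" % (tar, shlex.quote(filename))
        if filename.endswith((".txz", ".tar.xz")):
            return "xz -dc %s | %s -f -" % (shlex.quote(filename), tar)
        raise ValueError("unhandled archive type: %s" % filename)

    print(unpack_cmd("package-1.0.tar.gz", striplevel=1))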
@@ -1688,7 +1640,7 @@ class Fetch(object):
if localonly and cache:
raise Exception("bb.fetch2.Fetch.__init__: cannot set cache and localonly at same time")
if not urls:
if len(urls) == 0:
urls = d.getVar("SRC_URI").split()
self.urls = urls
self.d = d
@@ -1745,7 +1697,6 @@ class Fetch(object):
network = self.d.getVar("BB_NO_NETWORK")
premirroronly = bb.utils.to_boolean(self.d.getVar("BB_FETCH_PREMIRRORONLY"))
checksum_missing_messages = []
for u in urls:
ud = self.ud[u]
ud.setup_localpath(self.d)
@@ -1757,6 +1708,7 @@ class Fetch(object):
try:
self.d.setVar("BB_NO_NETWORK", network)
if m.verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
done = True
elif m.try_premirror(ud, self.d):
@@ -1777,9 +1729,7 @@ class Fetch(object):
self.d.setVar("BB_NO_NETWORK", "1")
firsterr = None
verified_stamp = False
if done:
verified_stamp = m.verify_donestamp(ud, self.d)
verified_stamp = m.verify_donestamp(ud, self.d)
if not done and (not verified_stamp or m.need_update(ud, self.d)):
try:
if not trusted_network(self.d, ud.url):
@@ -1828,28 +1778,17 @@ class Fetch(object):
raise ChecksumError("Stale Error Detected")
except BBFetchException as e:
if isinstance(e, NoChecksumError):
(message, _) = e.args
checksum_missing_messages.append(message)
continue
elif isinstance(e, ChecksumError):
if isinstance(e, ChecksumError):
logger.error("Checksum failure fetching %s" % u)
raise
finally:
if ud.lockfile:
bb.utils.unlockfile(lf)
if checksum_missing_messages:
logger.error("Missing SRC_URI checksum, please add those to the recipe: \n%s", "\n".join(checksum_missing_messages))
raise BBFetchException("There was some missing checksums in the recipe")
def checkstatus(self, urls=None):
"""
Check all URLs exist upstream.
Returns None if the URLs exist, raises FetchError if the check wasn't
successful but there wasn't an error (such as file not found), and
raises other exceptions in error cases.
Check all urls exist upstream
"""
if not urls:
@@ -1994,8 +1933,6 @@ from . import clearcase
from . import npm
from . import npmsw
from . import az
from . import crate
from . import gcp
methods.append(local.Local())
methods.append(wget.Wget())
@@ -2016,5 +1953,3 @@ methods.append(clearcase.ClearCase())
methods.append(npm.Npm())
methods.append(npmsw.NpmShrinkWrap())
methods.append(az.Az())
methods.append(crate.Crate())
methods.append(gcp.GCP())


@@ -1,98 +0,0 @@
"""
BitBake 'Fetch' implementation for Google Cloud Platform Storage.
Class for fetching files from Google Cloud Storage using the
Google Cloud Storage Python Client. The GCS Python Client must
be correctly installed, configured and authenticated prior to use.
Additionally, gsutil must also be installed.
"""
# Copyright (C) 2023, Snap Inc.
#
# Based in part on bb.fetch2.s3:
# Copyright (C) 2017 Andre McCurdy
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import bb
import urllib.parse, urllib.error
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
class GCP(FetchMethod):
"""
Class to fetch urls via GCP's Python API.
"""
def __init__(self):
self.gcp_client = None
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with GCP.
"""
return ud.type in ['gs']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
def get_gcp_client(self):
from google.cloud import storage
self.gcp_client = storage.Client(project=None)
def download(self, ud, d):
"""
Fetch urls using the GCP API.
Assumes localpath was called first.
"""
logger.debug2(f"Trying to download gs://{ud.host}{ud.path} to {ud.localpath}")
if self.gcp_client is None:
self.get_gcp_client()
bb.fetch2.check_network_access(d, "gsutil stat", ud.url)
# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")
blob = self.gcp_client.bucket(ud.host).blob(path)
blob.download_to_filename(ud.localpath)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the GCP API
# tool with a little healthy suspicion).
if not os.path.exists(ud.localpath):
raise FetchError(f"The GCP API returned success for gs://{ud.host}{ud.path} but {ud.localpath} doesn't exist?!")
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError(f"The downloaded file for gs://{ud.host}{ud.path} resulted in a zero size file?! Deleting and failing since this isn't right.")
return True
def checkstatus(self, fetch, ud, d):
"""
Check the status of a URL.
"""
logger.debug2(f"Checking status of gs://{ud.host}{ud.path}")
if self.gcp_client is None:
self.get_gcp_client()
bb.fetch2.check_network_access(d, "gsutil stat", ud.url)
# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")
if self.gcp_client.bucket(ud.host).blob(path).exists() == False:
raise FetchError(f"The GCP API reported that gs://{ud.host}{ud.path} does not exist")
else:
return True
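
For reference, a standalone sketch of the same google-cloud-storage calls this class depends on, assuming the client library is installed and authenticated; the bucket and object names below are placeholders:

    from google.cloud import storage

    client = storage.Client(project=None)
    blob = client.bucket("example-bucket").blob("path/to/artifact.tar.gz")
    if blob.exists():
        blob.download_to_filename("/tmp/artifact.tar.gz")
    else:
        print("gs://example-bucket/path/to/artifact.tar.gz does not exist")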


@@ -44,8 +44,7 @@ Supported SRC_URI options are:
- nobranch
Don't check the SHA validation for branch. set this option for the recipe
referring to commit which is valid in any namespace (branch, tag, ...)
instead of branch.
referring to commit which is valid in tag instead of branch.
The default is "0", set nobranch=1 if needed.
- usehead
@@ -65,7 +64,6 @@ import fnmatch
import os
import re
import shlex
import shutil
import subprocess
import tempfile
import bb
@@ -76,9 +74,6 @@ from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
sha1_re = re.compile(r'^[0-9a-f]{40}$')
slash_re = re.compile(r"/+")
class GitProgressHandler(bb.progress.LineFilterProgressHandler):
"""Extract progress information from git output"""
def __init__(self, d):
@@ -151,7 +146,6 @@ class Git(FetchMethod):
# github stopped supporting git protocol
# https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git
ud.proto = "https"
bb.warn("URL: %s uses git protocol which is no longer supported by github. Please change to ;protocol=https in the url." % ud.url)
if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
@@ -175,10 +169,7 @@ class Git(FetchMethod):
ud.nocheckout = 1
ud.unresolvedrev = {}
branches = ud.parm.get("branch", "").split(',')
if branches == [""] and not ud.nobranch:
bb.warn("URL: %s does not set any branch parameter. The future default branch used by tools and repositories is uncertain and we will therefore soon require this is set in all git urls." % ud.url)
branches = ["master"]
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
@@ -245,7 +236,7 @@ class Git(FetchMethod):
for name in ud.names:
ud.unresolvedrev[name] = 'HEAD'
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c gc.autoDetach=false -c core.pager=cat"
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0 -c gc.autoDetach=false"
write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
@@ -254,8 +245,8 @@ class Git(FetchMethod):
ud.setup_revisions(d)
for name in ud.names:
# Ensure any revision that doesn't look like a SHA-1 is translated into one
if not sha1_re.match(ud.revisions[name] or ''):
# Ensure anything that doesn't look like a sha256 checksum/revision is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
if ud.revisions[name]:
ud.unresolvedrev[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud, d, name)
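
Both variants implement the same test: a usable revision is exactly 40 lowercase hex characters, and anything else is kept as an unresolved rev to be looked up later. A quick illustration of the regex form:

    import re

    sha1_re = re.compile(r'^[0-9a-f]{40}$')
    for rev in ("a" * 40, "v5.15", "master"):
        state = "already a SHA-1" if sha1_re.match(rev) else "needs resolving via ls-remote"
        print("%s: %s" % (rev, state))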
@@ -264,10 +255,10 @@ class Git(FetchMethod):
if gitsrcname.startswith('.'):
gitsrcname = gitsrcname[1:]
# For a rebaseable git repo, it is necessary to keep a mirror tar ball
# per revision, so that even if the revision disappears from the
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
# contain the revision
# contains the revision
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
@@ -355,53 +346,17 @@ class Git(FetchMethod):
if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
ud.localpath = ud.fullshallow
return
elif os.path.exists(ud.fullmirror) and self.need_update(ud, d):
if not os.path.exists(ud.clonedir):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
else:
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=tmpdir)
fetch_cmd = "LANG=C %s fetch -f --progress %s " % (ud.basecmd, shlex.quote(tmpdir))
runfetchcmd(fetch_cmd, d, workdir=ud.clonedir)
elif os.path.exists(ud.fullmirror) and not os.path.exists(ud.clonedir):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
repourl = self._get_repo_url(ud)
needs_clone = False
if os.path.exists(ud.clonedir):
# The directory may exist, but not be the top level of a bare git
# repository in which case it needs to be deleted and re-cloned.
try:
# Since clones can be bare, use --absolute-git-dir instead of --show-toplevel
output = runfetchcmd("LANG=C %s rev-parse --absolute-git-dir" % ud.basecmd, d, workdir=ud.clonedir)
toplevel = os.path.abspath(output.rstrip())
abs_clonedir = os.path.abspath(ud.clonedir).rstrip('/')
# The top level Git directory must either be the clone directory
# or a child of the clone directory. Any ancestor directory of
# the clone directory is not valid as the Git directory (and
# probably belongs to some other unrelated repository), so a
# clone is required
if os.path.commonprefix([abs_clonedir, toplevel]) != abs_clonedir:
logger.warning("Top level directory '%s' doesn't match expected '%s'. Re-cloning", toplevel, ud.clonedir)
needs_clone = True
except bb.fetch2.FetchError as e:
logger.warning("Unable to get top level for %s (not a git directory?): %s", ud.clonedir, e)
needs_clone = True
if needs_clone:
shutil.rmtree(ud.clonedir)
else:
needs_clone = True
# If the repo still doesn't exist, fallback to cloning it
if needs_clone:
# We do this since git will use a "-l" option automatically for local urls where possible,
# but it doesn't work when git/objects is a symlink, only works when it is a directory.
if not os.path.exists(ud.clonedir):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl_path = repourl[7:]
objects = os.path.join(repourl_path, 'objects')
if os.path.isdir(objects) and not os.path.islink(objects):
repourl = repourl_path
repourl = repourl[7:]
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, shlex.quote(repourl), ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
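
The clone-directory validation above asks git itself where the repository actually lives; because the clone can be bare, --absolute-git-dir is used rather than --show-toplevel. A minimal standalone version of that check (assuming git >= 2.13 on PATH):

    import os
    import subprocess

    def is_valid_bare_clone(clonedir):
        # Ask git for the repository's git dir; this works for bare clones too.
        try:
            out = subprocess.check_output(
                ["git", "rev-parse", "--absolute-git-dir"],
                cwd=clonedir, text=True, stderr=subprocess.DEVNULL)
        except (subprocess.CalledProcessError, FileNotFoundError, NotADirectoryError):
            return False
        toplevel = os.path.abspath(out.strip())
        clonedir = os.path.abspath(clonedir).rstrip("/")
        # The git dir must be the clone directory itself or live underneath it.
        return os.path.commonprefix([clonedir, toplevel]) == clonedir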
@@ -415,11 +370,7 @@ class Git(FetchMethod):
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=ud.clonedir)
if ud.nobranch:
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
else:
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/heads/*:refs/heads/* refs/tags/*:refs/tags/*" % (ud.basecmd, shlex.quote(repourl))
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
@@ -444,12 +395,13 @@ class Git(FetchMethod):
if self._contains_lfs(ud, d, ud.clonedir) and self._need_lfs(ud):
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
# of all LFS blobs needed at the srcrev.
# of all LFS blobs needed at the the srcrev.
#
# It would be nice to just do this inline here by running 'git-lfs fetch'
# on the bare clonedir, but that operation requires a working copy on some
# releases of Git LFS.
with tempfile.TemporaryDirectory(dir=d.getVar('DL_DIR')) as tmpdir:
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
try:
# Do the checkout. This implicitly involves a Git LFS fetch.
Git.unpack(self, ud, tmpdir, d)
@@ -467,6 +419,8 @@ class Git(FetchMethod):
# downloaded.
if os.path.exists(os.path.join(tmpdir, "git", ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/git/.git" % tmpdir)
finally:
bb.utils.remove(tmpdir, recurse=True)
def build_mirror_data(self, ud, d):
@@ -504,10 +458,7 @@ class Git(FetchMethod):
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullmirror) as tfile:
mtime = runfetchcmd("git log --all -1 --format=%cD", d,
quiet=True, workdir=ud.clonedir)
runfetchcmd("tar -czf %s --owner oe:0 --group oe:0 --mtime \"%s\" ."
% (tfile, mtime), d, workdir=ud.clonedir)
runfetchcmd("tar -czf %s ." % tfile, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
def clone_shallow_local(self, ud, dest, d):
@@ -569,24 +520,13 @@ class Git(FetchMethod):
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
subdir = ud.parm.get("subdir")
subpath = ud.parm.get("subpath")
readpathspec = ""
def_destsuffix = "git/"
if subpath:
readpathspec = ":%s" % subpath
def_destsuffix = "%s/" % os.path.basename(subpath.rstrip('/'))
if subdir:
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
if os.path.isabs(subdir):
if not os.path.realpath(subdir).startswith(os.path.realpath(destdir)):
raise bb.fetch2.UnpackError("subdir argument isn't a subdirectory of unpack root %s" % destdir, ud.url)
destdir = subdir
else:
destdir = os.path.join(destdir, subdir)
def_destsuffix = ""
subdir = ud.parm.get("subpath", "")
if subdir != "":
readpathspec = ":%s" % subdir
def_destsuffix = "%s/" % os.path.basename(subdir.rstrip('/'))
else:
readpathspec = ""
def_destsuffix = "git/"
destsuffix = ud.parm.get("destsuffix", def_destsuffix)
destdir = ud.destdir = os.path.join(destdir, destsuffix)
@@ -601,12 +541,13 @@ class Git(FetchMethod):
source_found = False
source_error = []
clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
if clonedir_is_up_to_date:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
source_found = True
else:
source_error.append("clone directory not available or not up to date: " + ud.clonedir)
if not source_found:
clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
if clonedir_is_up_to_date:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
source_found = True
else:
source_error.append("clone directory not available or not up to date: " + ud.clonedir)
if not source_found:
if ud.shallow:
@@ -632,7 +573,7 @@ class Git(FetchMethod):
bb.note("Repository %s has LFS content but it is not being fetched" % (repourl))
if not ud.nocheckout:
if subpath:
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
workdir=destdir)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
@@ -689,6 +630,11 @@ class Git(FetchMethod):
Check if the repository has 'lfs' (large file) content
"""
if not ud.nobranch:
branchname = ud.branches[ud.names[0]]
else:
branchname = "master"
# The bare clonedir doesn't use the remote names; it has the branch immediately.
if wd == ud.clonedir:
refname = ud.branches[ud.names[0]]
@@ -733,6 +679,7 @@ class Git(FetchMethod):
Return a unique key for the url
"""
# Collapse adjacent slashes
slash_re = re.compile(r"/+")
return "git:" + ud.host + slash_re.sub(".", ud.path) + ud.unresolvedrev[name]
def _lsremote(self, ud, d, search):
@@ -765,12 +712,6 @@ class Git(FetchMethod):
"""
Compute the HEAD revision for the url
"""
if not d.getVar("__BBSRCREV_SEEN"):
raise bb.fetch2.FetchError("Recipe uses a floating tag/branch '%s' for repo '%s' without a fixed SRCREV yet doesn't call bb.fetch2.get_srcrev() (use SRCPV in PV for OE)." % (ud.unresolvedrev[name], ud.host+ud.path))
# Ensure we mark as not cached
bb.fetch2.mark_recipe_nocache(d)
output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fallback to other form
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:


@@ -88,9 +88,9 @@ class GitSM(Git):
subrevision[m] = module_hash.split()[2]
# Convert relative to absolute uri based on parent uri
if uris[m].startswith('..') or uris[m].startswith('./'):
if uris[m].startswith('..'):
newud = copy.copy(ud)
newud.path = os.path.normpath(os.path.join(newud.path, uris[m]))
newud.path = os.path.realpath(os.path.join(newud.path, uris[m]))
uris[m] = Git._get_repo_url(self, newud)
for module in submodules:
@@ -115,21 +115,10 @@ class GitSM(Git):
# This has to be a file reference
proto = "file"
url = "gitsm://" + uris[module]
if url.endswith("{}{}".format(ud.host, ud.path)):
raise bb.fetch2.FetchError("Submodule refers to the parent repository. This will cause deadlock situation in current version of Bitbake." \
"Consider using git fetcher instead.")
url += ';protocol=%s' % proto
url += ";name=%s" % module
url += ";subpath=%s" % module
url += ";nobranch=1"
url += ";lfs=%s" % self._need_lfs(ud)
# Note that adding "user=" here to give credentials to the
# submodule is not supported. Since using SRC_URI to give git://
# URL a password is not supported, one has to use one of the
# recommended ways (e.g. ~/.netrc or SSH config), which do specify
# the user (see the comment in git.py).
# So, we will not take patches adding "user=" support here.
ld = d.createCopy()
# Not necessary to set SRC_URI, since we're passing the URI to
@@ -174,7 +163,7 @@ class GitSM(Git):
else:
self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
if need_update_list:
if len(need_update_list) > 0:
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
return True
@@ -243,12 +232,10 @@ class GitSM(Git):
ret = self.process_submodules(ud, ud.destdir, unpack_submodules, d)
if not ud.bareclone and ret:
# All submodules should already be downloaded and configured in the tree. This simply
# sets up the configuration and checks out the files. The main project config should
# remain unmodified, and no download from the internet should occur. As such, lfs smudge
# should also be skipped as these files were already smudged in the fetch stage if lfs
# was enabled.
runfetchcmd("GIT_LFS_SKIP_SMUDGE=1 %s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
# All submodules should already be downloaded and configured in the tree. This simply sets
# up the configuration and checks out the files. The main project config should remain
# unmodified, and no download from the internet should occur.
runfetchcmd("%s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
def implicit_urldata(self, ud, d):
import shutil, subprocess, tempfile


@@ -41,9 +41,9 @@ class Local(FetchMethod):
"""
Return the local filename of a given url assuming a successful fetch.
"""
return self.localfile_searchpaths(urldata, d)[-1]
return self.localpaths(urldata, d)[-1]
def localfile_searchpaths(self, urldata, d):
def localpaths(self, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""
@@ -51,14 +51,18 @@ class Local(FetchMethod):
path = urldata.decodedurl
newpath = path
if path[0] == "/":
logger.debug2("Using absolute %s" % (path))
return [path]
filespath = d.getVar('FILESPATH')
if filespath:
logger.debug2("Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
logger.debug2("Using %s for %s" % (newpath, path))
searched.extend(hist)
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
logger.debug2("Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
return searched
return searched
def need_update(self, ud, d):
@@ -74,7 +78,9 @@ class Local(FetchMethod):
filespath = d.getVar('FILESPATH')
if filespath:
locations = filespath.split(":")
msg = "Unable to find file " + urldata.url + " anywhere to download to " + urldata.localpath + ". The paths that were searched were:\n " + "\n ".join(locations)
locations.append(d.getVar("DL_DIR"))
msg = "Unable to find file " + urldata.url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
raise FetchError(msg)
return True


@@ -44,24 +44,17 @@ def npm_package(package):
"""Convert the npm package name to remove unsupported character"""
# Scoped package names (with the @) use the same naming convention
# as the 'npm pack' command.
name = re.sub("/", "-", package)
name = name.lower()
name = re.sub(r"[^\-a-z0-9]", "", name)
name = name.strip("-")
return name
if package.startswith("@"):
return re.sub("/", "-", package[1:])
return package
def npm_filename(package, version):
"""Get the filename of a npm package"""
return npm_package(package) + "-" + version + ".tgz"
def npm_localfile(package, version=None):
def npm_localfile(package, version):
"""Get the local filename of a npm package"""
if version is not None:
filename = npm_filename(package, version)
else:
filename = package
return os.path.join("npm2", filename)
return os.path.join("npm2", npm_filename(package, version))
def npm_integrity(integrity):
"""
@@ -79,19 +72,23 @@ def npm_unpack(tarball, destdir, d):
cmd += " --delay-directory-restore"
cmd += " --strip-components=1"
runfetchcmd(cmd, d, workdir=destdir)
runfetchcmd("chmod -R +X '%s'" % (destdir), d, quiet=True, workdir=destdir)
runfetchcmd("chmod -R +X %s" % (destdir), d, quiet=True, workdir=destdir)
class NpmEnvironment(object):
"""
Using an npm config file seems more reliable than using CLI arguments.
This class allows creating a controlled environment for npm commands.
"""
def __init__(self, d, configs=[], npmrc=None):
def __init__(self, d, configs=None, npmrc=None):
self.d = d
self.user_config = tempfile.NamedTemporaryFile(mode="w", buffering=1)
for key, value in configs:
self.user_config.write("%s=%s\n" % (key, value))
if configs:
self.user_config = tempfile.NamedTemporaryFile(mode="w", buffering=1)
self.user_config_name = self.user_config.name
for key, value in configs:
self.user_config.write("%s=%s\n" % (key, value))
else:
self.user_config_name = "/dev/null"
if npmrc:
self.global_config_name = npmrc
@@ -106,14 +103,13 @@ class NpmEnvironment(object):
"""Run npm command in a controlled environment"""
with tempfile.TemporaryDirectory() as tmpdir:
d = bb.data.createCopy(self.d)
d.setVar("PATH", d.getVar("PATH")) # PATH might contain $HOME - evaluate it before patching
d.setVar("HOME", tmpdir)
if not workdir:
workdir = tmpdir
def _run(cmd):
cmd = "NPM_CONFIG_USERCONFIG=%s " % (self.user_config.name) + cmd
cmd = "NPM_CONFIG_USERCONFIG=%s " % (self.user_config_name) + cmd
cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % (self.global_config_name) + cmd
return runfetchcmd(cmd, d, workdir=workdir)
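
Both sides drive npm through config files rather than command-line flags. A self-contained sketch of that approach using the real NPM_CONFIG_USERCONFIG environment variable (the config key and value are placeholders, and npm itself must be installed for the final command to do anything):

    import os
    import subprocess
    import tempfile

    with tempfile.NamedTemporaryFile(mode="w", suffix=".npmrc") as user_config:
        # Write key=value pairs exactly as the class above does.
        user_config.write("cache=/tmp/npm-cache\n")
        user_config.flush()
        env = dict(os.environ, NPM_CONFIG_USERCONFIG=user_config.name)
        subprocess.run(["npm", "config", "get", "cache"], env=env, check=False)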
@@ -160,12 +156,12 @@ class Npm(FetchMethod):
raise ParameterError("Invalid 'version' parameter", ud.url)
# Extract the 'registry' part of the url
ud.registry = re.sub(r"^npm://", "https://", ud.url.split(";")[0])
ud.registry = re.sub(r"^npm://", "http://", ud.url.split(";")[0])
# Using the 'downloadfilename' parameter as local filename
# or the npm package name.
if "downloadfilename" in ud.parm:
ud.localfile = npm_localfile(d.expand(ud.parm["downloadfilename"]))
ud.localfile = d.expand(ud.parm["downloadfilename"])
else:
ud.localfile = npm_localfile(ud.package, ud.version)


@@ -30,8 +30,6 @@ from bb.fetch2.npm import npm_integrity
from bb.fetch2.npm import npm_localfile
from bb.fetch2.npm import npm_unpack
from bb.utils import is_semver
from bb.utils import lockfile
from bb.utils import unlockfile
def foreach_dependencies(shrinkwrap, callback=None, dev=False):
"""
@@ -41,9 +39,8 @@ def foreach_dependencies(shrinkwrap, callback=None, dev=False):
with:
name = the package name (string)
params = the package parameters (dictionary)
destdir = the destination of the package (string)
deptree = the package dependency tree (array of strings)
"""
# For handling old style dependencies entries in shrinkwrap files
def _walk_deps(deps, deptree):
for name in deps:
subtree = [*deptree, name]
@@ -53,22 +50,9 @@ def foreach_dependencies(shrinkwrap, callback=None, dev=False):
continue
elif deps[name].get("bundled", False):
continue
destsubdirs = [os.path.join("node_modules", dep) for dep in subtree]
destsuffix = os.path.join(*destsubdirs)
callback(name, deps[name], destsuffix)
callback(name, deps[name], subtree)
# packages entry means new style shrinkwrap file, else use dependencies
packages = shrinkwrap.get("packages", None)
if packages is not None:
for package in packages:
if package != "":
name = package.split('node_modules/')[-1]
package_infos = packages.get(package, {})
if dev == False and package_infos.get("dev", False):
continue
callback(name, package_infos, package)
else:
_walk_deps(shrinkwrap.get("dependencies", {}), [])
_walk_deps(shrinkwrap.get("dependencies", {}), [])
class NpmShrinkWrap(FetchMethod):
"""Class to fetch all package from a shrinkwrap file"""
@@ -89,10 +73,12 @@ class NpmShrinkWrap(FetchMethod):
# Resolve the dependencies
ud.deps = []
def _resolve_dependency(name, params, destsuffix):
def _resolve_dependency(name, params, deptree):
url = None
localpath = None
extrapaths = []
destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
destsuffix = os.path.join(*destsubdirs)
unpack = True
integrity = params.get("integrity", None)
@@ -100,11 +86,7 @@ class NpmShrinkWrap(FetchMethod):
version = params.get("version", None)
# Handle registry sources
if is_semver(version) and integrity:
# Handle duplicate dependencies without url
if not resolved:
return
if is_semver(version) and resolved and integrity:
localfile = npm_localfile(name, version)
uri = URI(resolved)
@@ -129,7 +111,7 @@ class NpmShrinkWrap(FetchMethod):
# Handle http tarball sources
elif version.startswith("http") and integrity:
localfile = npm_localfile(os.path.basename(version))
localfile = os.path.join("npm2", os.path.basename(version))
uri = URI(version)
uri.params["downloadfilename"] = localfile
@@ -141,28 +123,8 @@ class NpmShrinkWrap(FetchMethod):
localpath = os.path.join(d.getVar("DL_DIR"), localfile)
# Handle local tarball and link sources
elif version.startswith("file"):
localpath = version[5:]
if not version.endswith(".tgz"):
unpack = False
# Handle git sources
elif version.startswith(("git", "bitbucket","gist")) or (
not version.endswith((".tgz", ".tar", ".tar.gz"))
and not version.startswith((".", "@", "/"))
and "/" in version
):
if version.startswith("github:"):
version = "git+https://github.com/" + version[len("github:"):]
elif version.startswith("gist:"):
version = "git+https://gist.github.com/" + version[len("gist:"):]
elif version.startswith("bitbucket:"):
version = "git+https://bitbucket.org/" + version[len("bitbucket:"):]
elif version.startswith("gitlab:"):
version = "git+https://gitlab.com/" + version[len("gitlab:"):]
elif not version.startswith(("git+","git:")):
version = "git+https://github.com/" + version
elif version.startswith("git"):
regex = re.compile(r"""
^
git\+
@@ -188,6 +150,12 @@ class NpmShrinkWrap(FetchMethod):
url = str(uri)
# Handle local tarball and link sources
elif version.startswith("file"):
localpath = version[5:]
if not version.endswith(".tgz"):
unpack = False
else:
raise ParameterError("Unsupported dependency: %s" % name, ud.url)
@@ -217,23 +185,17 @@ class NpmShrinkWrap(FetchMethod):
# This fetcher resolves multiple URIs from a shrinkwrap file and then
# forwards it to a proxy fetcher. The management of the donestamp file,
# the lockfile and the checksums are forwarded to the proxy fetcher.
shrinkwrap_urls = [dep["url"] for dep in ud.deps if dep["url"]]
if shrinkwrap_urls:
ud.proxy = Fetch(shrinkwrap_urls, data)
ud.proxy = Fetch([dep["url"] for dep in ud.deps if dep["url"]], data)
ud.needdonestamp = False
@staticmethod
def _foreach_proxy_method(ud, handle):
returns = []
#Check if there are dependencies before try to fetch them
if len(ud.deps) > 0:
for proxy_url in ud.proxy.urls:
proxy_ud = ud.proxy.ud[proxy_url]
proxy_d = ud.proxy.d
proxy_ud.setup_localpath(proxy_d)
lf = lockfile(proxy_ud.lockfile)
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
unlockfile(lf)
for proxy_url in ud.proxy.urls:
proxy_ud = ud.proxy.ud[proxy_url]
proxy_d = ud.proxy.d
proxy_ud.setup_localpath(proxy_d)
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
return returns
def verify_donestamp(self, ud, d):


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
"""
@@ -11,7 +9,6 @@ Based on the svn "Fetch" implementation.
import logging
import os
import re
import bb
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
@@ -39,7 +36,6 @@ class Osc(FetchMethod):
# Create paths to osc checkouts
oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc")
relpath = self._strip_leading_slashes(ud.path)
ud.oscdir = oscdir
ud.pkgdir = os.path.join(oscdir, ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
@@ -47,13 +43,13 @@ class Osc(FetchMethod):
ud.revision = ud.parm['rev']
else:
pv = d.getVar("PV", False)
rev = bb.fetch2.srcrev_internal_helper(ud, d, '')
rev = bb.fetch2.srcrev_internal_helper(ud, d)
if rev:
ud.revision = rev
else:
ud.revision = ""
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), relpath.replace('/', '.'), ud.revision))
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision))
def _buildosccommand(self, ud, d, command):
"""
@@ -63,49 +59,26 @@ class Osc(FetchMethod):
basecmd = d.getVar("FETCHCMD_osc") or "/usr/bin/env osc"
proto = ud.parm.get('protocol', 'https')
proto = ud.parm.get('protocol', 'ocs')
options = []
config = "-c %s" % self.generate_config(ud, d)
if getattr(ud, 'revision', ''):
if ud.revision:
options.append("-r %s" % ud.revision)
coroot = self._strip_leading_slashes(ud.path)
if command == "fetch":
osccmd = "%s %s -A %s://%s co %s/%s %s" % (basecmd, config, proto, ud.host, coroot, ud.module, " ".join(options))
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
elif command == "update":
osccmd = "%s %s -A %s://%s up %s" % (basecmd, config, proto, ud.host, " ".join(options))
elif command == "api_source":
osccmd = "%s %s -A %s://%s api source/%s/%s" % (basecmd, config, proto, ud.host, coroot, ud.module)
osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
else:
raise FetchError("Invalid osc command %s" % command, ud.url)
return osccmd
def _latest_revision(self, ud, d, name):
"""
Fetch latest revision for the given package
"""
api_source_cmd = self._buildosccommand(ud, d, "api_source")
output = runfetchcmd(api_source_cmd, d)
match = re.match(r'<directory ?.* rev="(\d+)".*>', output)
if match is None:
raise FetchError("Unable to parse osc response", ud.url)
return match.groups()[0]
def _revision_key(self, ud, d, name):
"""
Return a unique key for the url
"""
# Collapse adjacent slashes
slash_re = re.compile(r"/+")
rev = getattr(ud, 'revision', "latest")
return "osc:%s%s.%s.%s" % (ud.host, slash_re.sub(".", ud.path), name, rev)
def download(self, ud, d):
"""
Fetch url
@@ -113,7 +86,7 @@ class Osc(FetchMethod):
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(ud.moddir, os.R_OK):
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
@@ -141,23 +114,20 @@ class Osc(FetchMethod):
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(ud.oscdir, "oscrc")
if not os.path.exists(ud.oscdir):
bb.utils.mkdirhier(ud.oscdir)
config_path = os.path.join(d.getVar('OSCDIR'), "oscrc")
if (os.path.exists(config_path)):
os.remove(config_path)
f = open(config_path, 'w')
proto = ud.parm.get('protocol', 'https')
f.write("[general]\n")
f.write("apiurl = %s://%s\n" % (proto, ud.host))
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % d.getVar('WORKDIR'))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST"))
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s://%s]\n" % (proto, ud.host))
f.write("[%s]\n" % ud.host)
f.write("user = %s\n" % ud.parm["user"])
f.write("pass = %s\n" % ud.parm["pswd"])
f.close()
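
The generated config is a plain INI-style file written field by field. A compact sketch of the same generation using a context manager instead of an explicit open()/close() pair (the host and credentials are placeholders):

    import os

    def write_oscrc(config_path, host, user, password, proto="https"):
        # Ensure the directory exists, then write the two INI sections.
        os.makedirs(os.path.dirname(config_path), exist_ok=True)
        with open(config_path, "w") as f:
            f.write("[general]\n")
            f.write("apiurl = %s://%s\n" % (proto, host))
            f.write("\n[%s://%s]\n" % (proto, host))
            f.write("user = %s\n" % user)
            f.write("pass = %s\n" % password)

    write_oscrc("/tmp/osc/oscrc", "api.opensuse.org", "builder", "secret")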


@@ -103,7 +103,7 @@ class SFTP(FetchMethod):
if path[:3] == '/~/':
path = path[3:]
remote = '"%s%s:%s"' % (user, urlo.hostname, path)
remote = '%s%s:%s' % (user, urlo.hostname, path)
cmd = '%s %s %s %s' % (basecmd, port, remote, lpath)


@@ -32,7 +32,6 @@ IETF secsh internet draft:
import re, os
from bb.fetch2 import check_network_access, FetchMethod, ParameterError, runfetchcmd
import urllib
__pattern__ = re.compile(r'''
@@ -41,9 +40,9 @@ __pattern__ = re.compile(r'''
( # Optional username/password block
(?P<user>\S+) # username
(:(?P<pass>\S+))? # colon followed by the password (optional)
)?
(?P<cparam>(;[^;]+)*)? # connection parameters block (optional)
@
)?
(?P<host>\S+?) # non-greedy match of the host
(:(?P<port>[0-9]+))? # colon followed by the port (optional)
/
@@ -71,7 +70,6 @@ class SSH(FetchMethod):
"git:// prefix with protocol=ssh", urldata.url)
m = __pattern__.match(urldata.url)
path = m.group('path')
path = urllib.parse.unquote(path)
host = m.group('host')
urldata.localpath = os.path.join(d.getVar('DL_DIR'),
os.path.basename(os.path.normpath(path)))
@@ -98,11 +96,6 @@ class SSH(FetchMethod):
fr += '@%s' % host
else:
fr = host
if path[0] != '~':
path = '/%s' % path
path = urllib.parse.unquote(path)
fr += ':%s' % path
cmd = 'scp -B -r %s %s %s/' % (
@@ -115,41 +108,3 @@ class SSH(FetchMethod):
runfetchcmd(cmd, d)
def checkstatus(self, fetch, urldata, d):
"""
Check the status of the url
"""
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
port = m.group('port')
user = m.group('user')
password = m.group('pass')
if port:
portarg = '-P %s' % port
else:
portarg = ''
if user:
fr = user
if password:
fr += ':%s' % password
fr += '@%s' % host
else:
fr = host
if path[0] != '~':
path = '/%s' % path
path = urllib.parse.unquote(path)
cmd = 'ssh -o BatchMode=true %s %s [ -f %s ]' % (
portarg,
fr,
path
)
check_network_access(d, cmd, urldata.url)
runfetchcmd(cmd, d)
return True


@@ -26,6 +26,7 @@ from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
from bb.utils import export_proxies
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
@@ -105,22 +106,13 @@ class Wget(FetchMethod):
fetchcmd = self.basecmd
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile) + ".tmp"
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if 'downloadfilename' in ud.parm:
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile)
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if ud.user and ud.pswd:
fetchcmd += " --auth-no-challenge"
if ud.parm.get("redirectauth", "1") == "1":
# An undocumented feature of wget is that if the
# username/password are specified on the URI, wget will only
# send the Authorization header to the first host and not to
# any hosts that it is redirected to. With the increasing
# usage of temporary AWS URLs, this difference now matters as
# AWS will reject any request that has authentication both in
# the query parameters (from the redirect) and in the
# Authorization header.
fetchcmd += " --user=%s --password=%s" % (ud.user, ud.pswd)
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
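
After this hunk, credentials are sent pre-emptively via --auth-no-challenge, with the --user/--password pair gated behind the redirectauth parameter so that redirected hosts (such as temporary AWS URLs) never see the Authorization header. A small sketch of that gating:

    def wget_auth_args(user, password, redirectauth="1"):
        # --auth-no-challenge makes wget send the header pre-emptively; the
        # credentials themselves are withheld when redirectauth is disabled.
        args = " --auth-no-challenge"
        if redirectauth == "1":
            args += " --user=%s --password=%s" % (user, password)
        return args

    print("wget" + wget_auth_args("builder", "secret", redirectauth="0"))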
@@ -131,15 +123,6 @@ class Wget(FetchMethod):
self._runwget(ud, d, fetchcmd, False)
# Try and verify any checksum now, meaning if it isn't correct, we don't remove the
# original file, which might be a race (imagine two recipes referencing the same
# source, one with an incorrect checksum)
bb.fetch2.verify_checksum(ud, d, localpath=localpath, fatal_nochecksum=False)
# Remove the ".tmp" and move the file into position atomically
# Our lock prevents multiple writers but mirroring code may grab incomplete files
os.rename(localpath, localpath[:-4])
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath):
@@ -235,7 +218,7 @@ class Wget(FetchMethod):
# We let the request fail and expect it to be
# tried once more ("try_again" in check_status()),
# with the dead connection removed from the cache.
# If it still fails, we give up, which can happen for bad
# If it still fails, we give up, which can happend for bad
# HTTP proxy settings.
fetch.connection_cache.remove_connection(h.host, h.port)
raise urllib.error.URLError(err)
@@ -322,7 +305,15 @@ class Wget(FetchMethod):
# Avoid tramping the environment too much by using bb.utils.environment
# to scope the changes to the build_opener request, which is when the
# environment lookups happen.
newenv = bb.fetch2.get_fetcher_environment(d)
newenv = {}
for name in bb.fetch2.FETCH_EXPORT_VARS:
value = d.getVar(name)
if not value:
origenv = d.getVar("BB_ORIGENV")
if origenv:
value = origenv.getVar(name)
if value:
newenv[name] = value
with bb.utils.environment(**newenv):
import ssl
@@ -340,8 +331,7 @@ class Wget(FetchMethod):
opener = urllib.request.build_opener(*handlers)
try:
uri_base = ud.url.split(";")[0]
uri = "{}://{}{}".format(urllib.parse.urlparse(uri_base).scheme, ud.host, ud.path)
uri = ud.url.split(";")[0]
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
# Some servers (FusionForge, as used on Alioth) require that the
@@ -360,16 +350,23 @@ class Wget(FetchMethod):
try:
import netrc
auth_data = netrc.netrc().authenticators(urllib.parse.urlparse(uri).hostname)
if auth_data:
login, _, password = auth_data
add_basic_auth("%s:%s" % (login, password), r)
except (FileNotFoundError, netrc.NetrcParseError):
n = netrc.netrc()
login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
add_basic_auth("%s:%s" % (login, password), r)
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
pass
with opener.open(r, timeout=30) as response:
pass
except (urllib.error.URLError, ConnectionResetError, TimeoutError) as e:
except urllib.error.URLError as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed: %s" % e)
return False
except ConnectionResetError as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
@@ -586,7 +583,7 @@ class Wget(FetchMethod):
# src.rpm extension was added only for rpm package. Can be removed if the rpm
# packaged will always be considered as having to be manually upgraded
psuffix_regex = r"(tar\.\w+|tgz|zip|xz|rpm|bz2|orig\.tar\.\w+|src\.tar\.\w+|src\.tgz|svnr\d+\.tar\.\w+|stable\.tar\.\w+|src\.rpm)"
psuffix_regex = r"(tar\.gz|tgz|tar\.bz2|zip|xz|tar\.lz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"
# match name, version and archive type of a package
package_regex_comp = re.compile(r"(?P<name>%s?\.?v?)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s$)"
@@ -637,10 +634,10 @@ class Wget(FetchMethod):
# search for version matches on folders inside the path, like:
# "5.7" in http://download.gnome.org/sources/${PN}/5.7/${PN}-${PV}.tar.gz
dirver_regex = re.compile(r"(?P<dirver>[^/]*(\d+\.)*\d+([-_]r\d+)*)/")
m = dirver_regex.findall(path)
m = dirver_regex.search(path)
if m:
pn = d.getVar('PN')
dirver = m[-1][0]
dirver = m.group('dirver')
dirver_pn_regex = re.compile(r"%s\d?" % (re.escape(pn)))
if not dirver_pn_regex.search(dirver):
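
The change between search() and findall() matters when the path contains several versioned directories: findall() yields every match, so the last tuple's first group is the innermost version directory. A quick demonstration:

    import re

    dirver_regex = re.compile(r"(?P<dirver>[^/]*(\d+\.)*\d+([-_]r\d+)*)/")
    path = "/sources/foo/1.2/5.7/"
    print(dirver_regex.findall(path))        # every match, as tuples of groups
    print(dirver_regex.findall(path)[-1][0]) # "5.7" - the innermost version dir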


@@ -12,12 +12,11 @@
import os
import sys
import logging
import argparse
import optparse
import warnings
import fcntl
import time
import traceback
import datetime
import bb
from bb import event
@@ -44,18 +43,18 @@ def present_options(optionlist):
else:
return optionlist[0]
class BitbakeHelpFormatter(argparse.HelpFormatter):
def _get_help_string(self, action):
class BitbakeHelpFormatter(optparse.IndentedHelpFormatter):
def format_option(self, option):
# We need to do this here rather than in the text we supply to
# add_option() because we don't want to call list_extension_modules()
# on every execution (since it imports all of the modules)
# Note also that we modify option.help rather than the returned text
# - this is so that we don't have to re-format the text ourselves
if action.dest == 'ui':
if option.dest == 'ui':
valid_uis = list_extension_modules(bb.ui, 'main')
return action.help.replace('@CHOICES@', present_options(valid_uis))
option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
return action.help
return optparse.IndentedHelpFormatter.format_option(self, option)
def list_extension_modules(pkg, checkattr):
"""
@@ -115,205 +114,180 @@ def _showwarning(message, category, filename, lineno, file=None, line=None):
warnings.showwarning = _showwarning
def create_bitbake_parser():
parser = argparse.ArgumentParser(
description="""\
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.
""",
formatter_class=BitbakeHelpFormatter,
allow_abbrev=False,
add_help=False, # help is manually added below in a specific argument group
)
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
general_group = parser.add_argument_group('General options')
task_group = parser.add_argument_group('Task control options')
exec_group = parser.add_argument_group('Execution control options')
logging_group = parser.add_argument_group('Logging/output control options')
server_group = parser.add_argument_group('Server options')
config_group = parser.add_argument_group('Configuration options')
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
general_group.add_argument("targets", nargs="*", metavar="recipename/target",
help="Execute the specified task (default is 'build') for these target "
"recipes (.bb files).")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
general_group.add_argument("-s", "--show-versions", action="store_true",
help="Show current and preferred versions of all recipes.")
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
general_group.add_argument("-e", "--environment", action="store_true",
dest="show_environment",
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
general_group.add_argument("-g", "--graphviz", action="store_true", dest="dot_graph",
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
general_group.add_argument("-u", "--ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %(default)s).")
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
general_group.add_argument("--version", action="store_true",
help="Show programs version and exit.")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
general_group.add_argument('-h', '--help', action='help',
help='Show this help message and exit.')
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
task_group.add_argument("-f", "--force", action="store_true",
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
task_group.add_argument("-c", "--cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
task_group.add_argument("-C", "--clear-stamp", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
task_group.add_argument("--runall", action="append", default=[],
help="Run the specified task for any recipe in the taskgraph of the "
"specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--skip-setscene", action="store_true",
dest="skipsetscene", default=False,
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene")
task_group.add_argument("--runonly", action="append",
help="Run only the specified task within the taskgraph of the "
"specified targets (and any task dependencies those tasks may have).")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
task_group.add_argument("--no-setscene", action="store_true",
dest="nosetscene",
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
task_group.add_argument("--skip-setscene", action="store_true",
dest="skipsetscene",
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate any running bitbake server.")
task_group.add_argument("--setscene-only", action="store_true",
dest="setsceneonly",
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
exec_group.add_argument("-n", "--dry-run", action="store_true",
help="Don't execute, just go through the motions.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
exec_group.add_argument("-p", "--parse-only", action="store_true",
help="Quit after parsing the BB recipes.")
exec_group.add_argument("-k", "--continue", action="store_false", dest="halt",
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
exec_group.add_argument("-P", "--profile", action="store_true",
help="Profile the command and save reports.")
exec_group.add_argument("-S", "--dump-signatures", action="append",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
exec_group.add_argument("--revisions-changed", action="store_true",
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
exec_group.add_argument("-b", "--buildfile",
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
logging_group.add_argument("-D", "--debug", action="count", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
logging_group.add_argument("-l", "--log-domains", action="append", dest="debug_domains",
default=[],
help="Show debug logging for the specified logging domains.")
logging_group.add_argument("-v", "--verbose", action="store_true",
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
logging_group.add_argument("-q", "--quiet", action="count", default=0,
help="Output less log message data to the terminal. You can specify this "
"more than once.")
logging_group.add_argument("-w", "--write-log", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
server_group.add_argument("-B", "--bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
server_group.add_argument("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
server_group.add_argument("--remote-server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
server_group.add_argument("-m", "--kill-server", action="store_true",
help="Terminate any running bitbake server.")
server_group.add_argument("--token", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
server_group.add_argument("--observe-only", action="store_true",
help="Connect to a server as an observing-only client.")
server_group.add_argument("--status-only", action="store_true",
help="Check the status of the remote bitbake server.")
server_group.add_argument("--server-only", action="store_true",
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
config_group.add_argument("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
config_group.add_argument("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
config_group.add_argument("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing.")
parser.add_option("", "--runall", action="append", dest="runall",
help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--runonly", action="append", dest="runonly",
help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
return parser
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = create_bitbake_parser()
options = parser.parse_intermixed_args(argv[1:])
if options.version:
print("BitBake Build Tool Core version %s" % bb.__version__)
sys.exit(0)
options, targets = parser.parse_args(argv)
if options.quiet and options.verbose:
parser.error("options --quiet and --verbose are mutually exclusive")
@@ -345,7 +319,7 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
else:
options.xmlrpcinterface = (None, 0)
return options, options.targets
return options, targets[1:]
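For context, the rework above replaces optparse with argparse; parse_intermixed_args() is what preserves optparse's old behaviour of accepting options interleaved with positional targets. A minimal standalone sketch (illustrative arguments only, not the full bitbake parser):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("targets", nargs="*")
parser.add_argument("-c", "--cmd")
parser.add_argument("-D", "--debug", action="count", default=0)

# Options may follow the positional targets, as optparse allowed:
opts = parser.parse_intermixed_args(["core-image-minimal", "-c", "compile", "-D"])
print(opts.targets, opts.cmd, opts.debug)  # ['core-image-minimal'] compile 1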
def bitbake_main(configParams, configuration):
@@ -410,9 +384,6 @@ def bitbake_main(configParams, configuration):
return 1
def timestamp():
return datetime.datetime.now().strftime('%H:%M:%S.%f')
def setup_bitbake(configParams, extrafeatures=None):
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
@@ -420,11 +391,6 @@ def setup_bitbake(configParams, extrafeatures=None):
# In status only mode there are no logs and no UI
logger.addHandler(handler)
if configParams.dump_signatures:
if extrafeatures is None:
extrafeatures = []
extrafeatures.append(bb.cooker.CookerFeatures.RECIPE_SIGGEN_INFO)
if configParams.server_only:
featureset = []
ui_module = None
@@ -452,7 +418,7 @@ def setup_bitbake(configParams, extrafeatures=None):
retries = 8
while retries:
try:
topdir, lock, lockfile = lockBitbake()
topdir, lock = lockBitbake()
sockname = topdir + "/bitbake.sock"
if lock:
if configParams.status_only or configParams.kill_server:
@@ -463,22 +429,18 @@ def setup_bitbake(configParams, extrafeatures=None):
logger.info("Starting bitbake server...")
# Clear the event queue since we already displayed messages
bb.event.ui_queue = []
server = bb.server.process.BitBakeServer(lock, sockname, featureset, configParams.server_timeout, configParams.xmlrpcinterface, configParams.profile)
server = bb.server.process.BitBakeServer(lock, sockname, featureset, configParams.server_timeout, configParams.xmlrpcinterface)
else:
logger.info("Reconnecting to bitbake server...")
if not os.path.exists(sockname):
logger.info("Previous bitbake instance shutting down?, waiting to retry... (%s)" % timestamp())
procs = bb.server.process.get_lockfile_process_msg(lockfile)
if procs:
logger.info("Processes holding bitbake.lock (missing socket %s):\n%s" % (sockname, procs))
logger.info("Directory listing: %s" % (str(os.listdir(topdir))))
logger.info("Previous bitbake instance shutting down?, waiting to retry...")
i = 0
lock = None
# Wait for 5s or until we can get the lock
while not lock and i < 50:
time.sleep(0.1)
_, lock, _ = lockBitbake()
_, lock = lockBitbake()
i += 1
if lock:
bb.utils.unlockfile(lock)
@@ -497,9 +459,9 @@ def setup_bitbake(configParams, extrafeatures=None):
retries -= 1
tryno = 8 - retries
if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError, EOFError, SystemExit)):
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, timestamp()))
logger.info("Retrying server connection (#%d)..." % tryno)
else:
logger.info("Retrying server connection (#%d)... (%s, %s)" % (tryno, traceback.format_exc(), timestamp()))
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, traceback.format_exc()))
if not retries:
bb.fatal("Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log).")
@@ -528,5 +490,5 @@ def lockBitbake():
bb.error("Unable to find conf/bblayers.conf or conf/bitbake.conf. BBPATH is unset and/or not in a build directory?")
raise BBMainFatal
lockfile = topdir + "/bitbake.lock"
return topdir, bb.utils.lockfile(lockfile, False, False), lockfile
return topdir, bb.utils.lockfile(lockfile, False, False)
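lockBitbake() relies on bb.utils.lockfile() in non-blocking mode; a short sketch of that call (path chosen for illustration, assuming bitbake's lib/ is on sys.path):

import bb.utils

# retry=False makes this non-blocking: a file object on success,
# None if another process already holds the lock.
lock = bb.utils.lockfile("/tmp/example-bitbake.lock", shared=False, retry=False)
if lock:
    bb.utils.unlockfile(lock)
else:
    print("lock already held by another process")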


@@ -76,12 +76,7 @@ def getDiskData(BBDirs):
return None
action = pathSpaceInodeRe.group(1)
if action == "ABORT":
# Emit a deprecation warning
logger.warnonce("The BB_DISKMON_DIRS \"ABORT\" action has been renamed to \"HALT\", update configuration")
action = "HALT"
if action not in ("HALT", "STOPTASKS", "WARN"):
if action not in ("ABORT", "STOPTASKS", "WARN"):
printErr("Unknown disk space monitor action: %s" % action)
return None
@@ -182,7 +177,7 @@ class diskMonitor:
# use them to avoid printing too many warning messages
self.preFreeS = {}
self.preFreeI = {}
# This is for STOPTASKS and HALT, to avoid printing the message
# This is for STOPTASKS and ABORT, to avoid printing the message
# repeatedly while waiting for the tasks to finish
self.checked = {}
for k in self.devDict:
@@ -224,8 +219,8 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
elif action == "HALT" and not self.checked[k]:
logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
elif action == "ABORT" and not self.checked[k]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
@@ -234,10 +229,9 @@ class diskMonitor:
freeInode = st.f_favail
if minInode and freeInode < minInode:
# Some filesystems use dynamic inodes so can't run out.
# This is reported by the inode count being 0 (btrfs) or the free
# inode count being -1 (cephfs).
if st.f_files == 0 or st.f_favail == -1:
# Some filesystems use dynamic inodes so can't run out
# (e.g. btrfs). This is reported by the inode count being 0.
if st.f_files == 0:
self.devDict[k][2] = None
continue
# Always show the warning; self.checked will always be False if the action is WARN
@@ -251,8 +245,8 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
elif action == "HALT" and not self.checked[k]:
logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
elif action == "ABORT" and not self.checked[k]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
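For reference, each BB_DISKMON_DIRS entry has the form <action>,<dir>,<min free space>,<min free inodes>; with the rename above, a conf snippet using the new action name might look like:

BB_DISKMON_DIRS = "\
    STOPTASKS,${TMPDIR},1G,100K \
    HALT,${TMPDIR},100M,1K \
    WARN,${SSTATE_DIR},1G,100K"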


@@ -30,9 +30,7 @@ class BBLogFormatter(logging.Formatter):
PLAIN = logging.INFO + 1
VERBNOTE = logging.INFO + 2
ERROR = logging.ERROR
ERRORONCE = logging.ERROR - 1
WARNING = logging.WARNING
WARNONCE = logging.WARNING - 1
CRITICAL = logging.CRITICAL
levelnames = {
@@ -44,9 +42,7 @@ class BBLogFormatter(logging.Formatter):
PLAIN : '',
VERBNOTE: 'NOTE',
WARNING : 'WARNING',
WARNONCE : 'WARNING',
ERROR : 'ERROR',
ERRORONCE : 'ERROR',
CRITICAL: 'ERROR',
}
@@ -62,9 +58,7 @@ class BBLogFormatter(logging.Formatter):
PLAIN : BASECOLOR,
VERBNOTE: BASECOLOR,
WARNING : YELLOW,
WARNONCE : YELLOW,
ERROR : RED,
ERRORONCE : RED,
CRITICAL: RED,
}
@@ -127,22 +121,6 @@ class BBLogFilter(object):
return True
return False
class LogFilterShowOnce(logging.Filter):
def __init__(self):
self.seen_warnings = set()
self.seen_errors = set()
def filter(self, record):
if record.levelno == bb.msg.BBLogFormatter.WARNONCE:
if record.msg in self.seen_warnings:
return False
self.seen_warnings.add(record.msg)
if record.levelno == bb.msg.BBLogFormatter.ERRORONCE:
if record.msg in self.seen_errors:
return False
self.seen_errors.add(record.msg)
return True
class LogFilterGEQLevel(logging.Filter):
def __init__(self, level):
self.strlevel = str(level)
@@ -228,7 +206,6 @@ def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers
"""Standalone logger creation function"""
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
console.addFilter(bb.msg.LogFilterShowOnce())
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if color == 'always' or (color == 'auto' and output.isatty()):
format.enable_color()
@@ -316,17 +293,10 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):
# Convert all level parameters to integers in case users want to use the
# bitbake defined level names
for name, h in logconfig["handlers"].items():
for h in logconfig["handlers"].values():
if "level" in h:
h["level"] = bb.msg.stringToLevel(h["level"])
# Every handler needs its own instance of the once filter.
once_filter_name = name + ".showonceFilter"
logconfig.setdefault("filters", {})[once_filter_name] = {
"()": "bb.msg.LogFilterShowOnce",
}
h.setdefault("filters", []).append(once_filter_name)
for l in logconfig["loggers"].values():
if "level" in l:
l["level"] = bb.msg.stringToLevel(l["level"])


@@ -99,12 +99,12 @@ def supports(fn, data):
return 1
return 0
def handle(fn, data, include=0, baseconfig=False):
def handle(fn, data, include = 0):
"""Call the handler that is appropriate for this file"""
for h in handlers:
if h['supports'](fn, data):
with data.inchistory.include(fn):
return h['handle'](fn, data, include, baseconfig)
return h['handle'](fn, data, include)
raise ParseError("not a BitBake file", fn)
def init(fn, data):
@@ -113,8 +113,6 @@ def init(fn, data):
return h['init'](data)
def init_parser(d):
if hasattr(bb.parse, "siggen"):
bb.parse.siggen.exit()
bb.parse.siggen = bb.siggen.init(d)
def resolve_file(fn, d):


@@ -9,7 +9,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import sys
import bb
from bb import methodpool
from bb.parse import logger
@@ -131,10 +130,6 @@ class DataNode(AstNode):
else:
val = groupd["value"]
if ":append" in key or ":remove" in key or ":prepend" in key:
if op in ["append", "prepend", "postdot", "predot", "ques"]:
bb.warn(key + " " + groupd[op] + " is not a recommended operator combination, please replace it.")
flag = None
if 'flag' in groupd and groupd['flag'] is not None:
flag = groupd['flag']
@@ -224,7 +219,7 @@ class ExportFuncsNode(AstNode):
for flag in [ "func", "python" ]:
if data.getVarFlag(calledfunc, flag, False):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag, False))
for flag in ["dirs", "cleandirs", "fakeroot"]:
for flag in [ "dirs" ]:
if data.getVarFlag(func, flag, False):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag, False))
data.setVarFlag(func, "filename", "autogenerated")
@@ -270,41 +265,6 @@ class BBHandlerNode(AstNode):
data.setVarFlag(h, "handler", 1)
data.setVar('__BBHANDLERS', bbhands)
class PyLibNode(AstNode):
def __init__(self, filename, lineno, libdir, namespace):
AstNode.__init__(self, filename, lineno)
self.libdir = libdir
self.namespace = namespace
def eval(self, data):
global_mods = (data.getVar("BB_GLOBAL_PYMODULES") or "").split()
for m in global_mods:
if m not in bb.utils._context:
bb.utils._context[m] = __import__(m)
libdir = data.expand(self.libdir)
if libdir not in sys.path:
sys.path.append(libdir)
try:
bb.utils._context[self.namespace] = __import__(self.namespace)
toimport = getattr(bb.utils._context[self.namespace], "BBIMPORTS", [])
for i in toimport:
bb.utils._context[self.namespace] = __import__(self.namespace + "." + i)
mod = getattr(bb.utils._context[self.namespace], i)
fn = getattr(mod, "__file__")
funcs = {}
for f in dir(mod):
if f.startswith("_"):
continue
fcall = getattr(mod, f)
if not callable(fcall):
continue
funcs[f] = fcall
bb.codeparser.add_module_functions(fn, funcs, "%s.%s" % (self.namespace, i))
except AttributeError as e:
bb.error("Error importing OE modules: %s" % str(e))
class InheritNode(AstNode):
def __init__(self, filename, lineno, classes):
AstNode.__init__(self, filename, lineno)
@@ -356,9 +316,6 @@ def handleDelTask(statements, filename, lineno, m):
def handleBBHandlers(statements, filename, lineno, m):
statements.append(BBHandlerNode(filename, lineno, m.group(1)))
def handlePyLib(statements, filename, lineno, m):
statements.append(PyLibNode(filename, lineno, m.group(1), m.group(2)))
def handleInherit(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritNode(filename, lineno, classes))
@@ -372,10 +329,6 @@ def runAnonFuncs(d):
def finalize(fn, d, variant = None):
saved_handlers = bb.event.get_handlers().copy()
try:
# Found renamed variables. Exit immediately
if d.getVar("_FAILPARSINGERRORHANDLED", False) == True:
raise bb.BBHandledException()
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
@@ -400,9 +353,6 @@ def finalize(fn, d, variant = None):
d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
if d.getVar('__BBAUTOREV_SEEN') and d.getVar('__BBSRCREV_SEEN') and not d.getVar("__BBAUTOREV_ACTED_UPON"):
bb.fatal("AUTOREV/SRCPV set too late for the fetcher to work properly, please set the variables earlier in parsing. Erroring instead of later obtuse build failures.")
bb.event.fire(bb.event.RecipeParsed(fn), d)
finally:
bb.event.set_handlers(saved_handlers)

View File

@@ -44,36 +44,23 @@ def inherit(files, fn, lineno, d):
__inherit_cache = d.getVar('__inherit_cache', False) or []
files = d.expand(files).split()
for file in files:
classtype = d.getVar("__bbclasstype", False)
origfile = file
for t in ["classes-" + classtype, "classes"]:
file = origfile
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join(t, '%s.bbclass' % file)
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join('classes', '%s.bbclass' % file)
if not os.path.isabs(file):
bbpath = d.getVar("BBPATH")
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
bb.parse.mark_dependency(d, af)
if abs_fn:
file = abs_fn
if os.path.exists(file):
break
if not os.path.exists(file):
raise ParseError("Could not inherit file %s" % (file), fn, lineno)
if not os.path.isabs(file):
bbpath = d.getVar("BBPATH")
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
bb.parse.mark_dependency(d, af)
if abs_fn:
file = abs_fn
if not file in __inherit_cache:
logger.debug("Inheriting %s (from %s:%d)" % (file, fn, lineno))
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
try:
bb.parse.handle(file, d, True)
except (IOError, OSError) as exc:
raise ParseError("Could not inherit file %s: %s" % (fn, exc.strerror), fn, lineno)
include(fn, file, lineno, d, "inherit")
__inherit_cache = d.getVar('__inherit_cache', False) or []
def get_statements(filename, absolute_filename, base_name):
@@ -101,8 +88,8 @@ def get_statements(filename, absolute_filename, base_name):
cached_statements[absolute_filename] = statements
return statements
def handle(fn, d, include, baseconfig=False):
global __infunc__, __body__, __residue__, __classname__
def handle(fn, d, include):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__, __classname__
__body__ = []
__infunc__ = []
__classname__ = ""
@@ -154,7 +141,7 @@ def handle(fn, d, include, baseconfig=False):
return d
def feeder(lineno, s, fn, root, statements, eof=False):
global __inpython__, __infunc__, __body__, __residue__, __classname__
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, bb, __residue__, __classname__
# Check tabs in python functions:
# - def py_funcname(): covered by __inpython__
@@ -191,10 +178,10 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if s and s[0] == '#':
if len(__residue__) != 0 and __residue__[0][0] != "#":
bb.fatal("There is a comment on line %s of file %s:\n'''\n%s\n'''\nwhich is in the middle of a multiline expression. This syntax is invalid, please correct it." % (lineno, fn, s))
bb.fatal("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
if len(__residue__) != 0 and __residue__[0][0] == "#" and (not s or s[0] != "#"):
bb.fatal("There is a confusing multiline partially commented expression on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (lineno - len(__residue__), fn, "\n".join(__residue__)))
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
if s and s[-1] == '\\':
__residue__.append(s[:-1])
@@ -265,7 +252,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
ast.handleInherit(statements, fn, lineno, m)
return
return ConfHandler.feeder(lineno, s, fn, statements, conffile=False)
return ConfHandler.feeder(lineno, s, fn, statements)
# Add us to the handlers list
from .. import handlers


@@ -21,7 +21,7 @@ __config_regexp__ = re.compile( r"""
^
(?P<exp>export\s+)?
(?P<var>[a-zA-Z0-9\-_+.${}/~:]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]*)\])?
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
(?P<colon>:=) |
@@ -45,11 +45,13 @@ __include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]+)\]$" )
__addpylib_regexp__ = re.compile(r"addpylib\s+(.+)\s+(.+)" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )
def init(data):
return
topdir = data.getVar('TOPDIR', False)
if not topdir:
data.setVar('TOPDIR', os.getcwd())
def supports(fn, d):
return fn[-5:] == ".conf"
@@ -103,12 +105,12 @@ def include_single_file(parentfn, fn, lineno, data, error_out):
# We have an issue where a UI might want to enforce particular settings such as
# an empty DISTRO variable. If configuration files do something like assigning
# a weak default, it turns out to be very difficult to filter out these changes,
# particularly when the weak default might appear halfway through parsing a chain
# of configuration files. We therefore let the UIs hook into configuration file
# parsing. This turns out to be a hard problem to solve any other way.
confFilters = []
def handle(fn, data, include, baseconfig=False):
def handle(fn, data, include):
init(data)
if include == 0:
@@ -126,26 +128,21 @@ def handle(fn, data, include, baseconfig=False):
s = f.readline()
if not s:
break
origlineno = lineno
origline = s
w = s.strip()
# skip empty lines
if not w:
continue
s = s.rstrip()
while s[-1] == '\\':
line = f.readline()
origline += line
s2 = line.rstrip()
s2 = f.readline().rstrip()
lineno = lineno + 1
if (not s2 or s2 and s2[0] != "#") and s[0] == "#" :
bb.fatal("There is a confusing multiline, partially commented expression starting on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (origlineno, fn, origline))
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
s = s[:-1] + s2
# skip comments
if s[0] == '#':
continue
feeder(lineno, s, abs_fn, statements, baseconfig=baseconfig)
feeder(lineno, s, abs_fn, statements)
# DONE WITH PARSING... time to evaluate
data.setVar('FILE', abs_fn)
@@ -153,14 +150,14 @@ def handle(fn, data, include, baseconfig=False):
if oldfile:
data.setVar('FILE', oldfile)
f.close()
for f in confFilters:
f(fn, data)
return data
# baseconfig is set for the bblayers/layer.conf cookerdata config parsing
# The function is also used by BBHandler, conffile would be False
def feeder(lineno, s, fn, statements, baseconfig=False, conffile=True):
def feeder(lineno, s, fn, statements):
m = __config_regexp__.match(s)
if m:
groupd = m.groupdict()
@@ -192,11 +189,6 @@ def feeder(lineno, s, fn, statements, baseconfig=False, conffile=True):
ast.handleUnsetFlag(statements, fn, lineno, m)
return
m = __addpylib_regexp__.match(s)
if baseconfig and conffile and m:
ast.handlePyLib(statements, fn, lineno, m)
return
raise ParseError("unparsed line: '%s'" % s, fn, lineno);
# Add us to the handlers list
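The addpylib directive parsed above takes a directory and a namespace; in releases that support it, a layer.conf line along these lines (path illustrative) registers a layer's Python modules:

# conf/layer.conf sketch: make ${LAYERDIR}/lib importable as namespace "oe"
addpylib ${LAYERDIR}/lib oe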


@@ -63,7 +63,7 @@ class SQLTable(collections.abc.MutableMapping):
"""
Decorator that starts a database transaction and creates a database
cursor for performing queries. If no exception is thrown, the
database results are committed. If an exception occurs, the database
database results are commited. If an exception occurs, the database
is rolled back. In all cases, the cursor is closed after the
function ends.
@@ -208,7 +208,7 @@ class SQLTable(collections.abc.MutableMapping):
def __lt__(self, other):
if not isinstance(other, Mapping):
raise NotImplementedError()
raise NotImplemented
return len(self) < len(other)
@@ -249,23 +249,4 @@ def persist(domain, d):
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_persist_data.sqlite3")
try:
return SQLTable(cachefile, domain)
except sqlite3.OperationalError:
# Sqlite fails to open database when its path is too long.
# After testing, 504 is the biggest path length that can be opened by
# sqlite.
# Note: This code is called before sanity.bbclass and its path length
# check
max_len = 504
if len(cachefile) > max_len:
logger.critical("The path of the cache file is too long "
"({0} chars > {1}) to be opened by sqlite! "
"Your cache file is \"{2}\"".format(
len(cachefile),
max_len,
cachefile))
sys.exit(1)
else:
raise
return SQLTable(cachefile, domain)
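persist() hands back a MutableMapping backed by sqlite, so callers use plain dict operations; a usage sketch ("BB_URI_HEADREVS" is a real fetcher domain, d is an assumed configured datastore):

import bb.persist_data

revs = bb.persist_data.persist("BB_URI_HEADREVS", d)  # d: a bb datastore, assumed
revs["git://example.com/repo"] = "deadbeef"
print(revs.get("git://example.com/repo"))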


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -62,7 +60,7 @@ class Popen(subprocess.Popen):
"close_fds": True,
"preexec_fn": subprocess_setup,
"stdout": subprocess.PIPE,
"stderr": subprocess.PIPE,
"stderr": subprocess.STDOUT,
"stdin": subprocess.PIPE,
"shell": False,
}
@@ -144,7 +142,7 @@ def _logged_communicate(pipe, log, input, extrafiles):
while pipe.poll() is None:
read_all_pipes(log, rin, outdata, errdata)
# Process closed, drain all pipes...
# Pocess closed, drain all pipes...
read_all_pipes(log, rin, outdata, errdata)
finally:
log.flush()


@@ -148,7 +148,7 @@ class MultiStageProgressReporter:
for tasks made up of python code spread across multiple
classes / functions - the progress reporter object can
be passed around or stored at the object level and calls
to next_stage() and update() made wherever needed.
to next_stage() and update() made whereever needed.
"""
def __init__(self, d, stage_weights, debug=False):
"""


@@ -396,8 +396,8 @@ def getRuntimeProviders(dataCache, rdepend):
return rproviders
# Only search dynamic packages if we can't find anything in other variables
for pat_key in dataCache.packages_dynamic:
pattern = pat_key.replace(r'+', r"\+")
for pattern in dataCache.packages_dynamic:
pattern = pattern.replace(r'+', r"\+")
if pattern in regexp_cache:
regexp = regexp_cache[pattern]
else:
@@ -408,7 +408,7 @@ def getRuntimeProviders(dataCache, rdepend):
raise
regexp_cache[pattern] = regexp
if regexp.match(rdepend):
rproviders += dataCache.packages_dynamic[pat_key]
rproviders += dataCache.packages_dynamic[pattern]
logger.debug("Assuming %s is a dynamic package, but it may not exist" % rdepend)
return rproviders
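The replace(r'+', r"\+") above exists because PACKAGES_DYNAMIC patterns are regexes while package names often contain a literal '+'; a standalone illustration:

import re

# Unescaped, the '+' in "gtk+3-locale-.*" is a quantifier and the
# pattern no longer matches real package names.
pattern = "^gtk+3-locale-.*".replace("+", r"\+")
regexp = re.compile(pattern)
print(bool(regexp.match("gtk+3-locale-en-gb")))  # True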

File diff suppressed because it is too large.


@@ -27,8 +27,6 @@ import re
import datetime
import pickle
import traceback
import gc
import stat
import bb.server.xmlrpcserver
from bb import daemonize
from multiprocessing import queues
@@ -38,46 +36,10 @@ logger = logging.getLogger('BitBake')
class ProcessTimeout(SystemExit):
pass
def currenttime():
return datetime.datetime.now().strftime('%H:%M:%S.%f')
def serverlog(msg):
print(str(os.getpid()) + " " + currenttime() + " " + msg)
print(str(os.getpid()) + " " + datetime.datetime.now().strftime('%H:%M:%S.%f') + " " + msg)
sys.stdout.flush()
#
# When we have lockfile issues, try and find information about which process is
# using the lockfile
#
def get_lockfile_process_msg(lockfile):
# Some systems may not have lsof available
procs = None
try:
procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
pass
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs is None:
# Fall back to fuser if lsof is unavailable
try:
procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
pass
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs:
return procs.decode("utf-8")
return None
class idleFinish():
def __init__(self, msg):
self.msg = msg
class ProcessServer():
profile_filename = "profile.log"
profile_processed_filename = "profile.log.processed"
@@ -95,19 +57,12 @@ class ProcessServer():
self.maxuiwait = 30
self.xmlrpc = False
self.idle = None
# Need a lock for _idlefuns changes
self._idlefuns = {}
self._idlefuncsLock = threading.Lock()
self.idle_cond = threading.Condition(self._idlefuncsLock)
self.bitbake_lock = lock
self.bitbake_lock_name = lockname
self.sock = sock
self.sockname = sockname
# It is possible the directory may be renamed. Cache the inode of the socket file
# so we can tell if things changed.
self.sockinode = os.stat(self.sockname)[stat.ST_INO]
self.server_timeout = server_timeout
self.timeout = self.server_timeout
@@ -116,9 +71,7 @@ class ProcessServer():
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
with bb.utils.lock_timeout(self._idlefuncsLock):
self._idlefuns[function] = data
serverlog("Registering idle function %s" % str(function))
self._idlefuns[function] = data
def run(self):
@@ -157,31 +110,6 @@ class ProcessServer():
return ret
def _idle_check(self):
return len(self._idlefuns) == 0 and self.cooker.command.currentAsyncCommand is None
def wait_for_idle(self, timeout=30):
# Wait for the idle loop to have cleared
with bb.utils.lock_timeout(self._idlefuncsLock):
return self.idle_cond.wait_for(self._idle_check, timeout) is not False
def set_async_cmd(self, cmd):
with bb.utils.lock_timeout(self._idlefuncsLock):
ret = self.idle_cond.wait_for(self._idle_check, 30)
if ret is False:
return False
self.cooker.command.currentAsyncCommand = cmd
return True
def clear_async_cmd(self):
with bb.utils.lock_timeout(self._idlefuncsLock):
self.cooker.command.currentAsyncCommand = None
self.idle_cond.notify_all()
def get_async_cmd(self):
with bb.utils.lock_timeout(self._idlefuncsLock):
return self.cooker.command.currentAsyncCommand
def main(self):
self.cooker.pre_serve()
@@ -196,19 +124,14 @@ class ProcessServer():
fds.append(self.xmlrpc)
seendata = False
serverlog("Entering server connection loop")
serverlog("Lockfile is: %s\nSocket is %s (%s)" % (self.bitbake_lock_name, self.sockname, os.path.exists(self.sockname)))
def disconnect_client(self, fds):
serverlog("Disconnecting Client (socket: %s)" % os.path.exists(self.sockname))
serverlog("Disconnecting Client")
if self.controllersock:
fds.remove(self.controllersock)
self.controllersock.close()
self.controllersock = False
if self.haveui:
# Wait for the idle loop to have cleared (30s max)
if not self.wait_for_idle(30):
serverlog("Idle loop didn't finish queued commands after 30s, exiting.")
self.quit = True
fds.remove(self.command_channel)
bb.event.unregister_UIHhandler(self.event_handle, True)
self.command_channel_reply.writer.close()
@@ -220,7 +143,7 @@ class ProcessServer():
self.cooker.clientComplete()
self.haveui = False
ready = select.select(fds,[],[],0)[0]
if newconnections and not self.quit:
if newconnections:
serverlog("Starting new client")
conn = newconnections.pop(-1)
fds.append(conn)
@@ -292,10 +215,8 @@ class ProcessServer():
continue
try:
serverlog("Running command %s" % command)
reply = self.cooker.command.runCommand(command, self)
serverlog("Sending reply %s" % repr(reply))
self.command_channel_reply.send(reply)
serverlog("Command Completed (socket: %s)" % os.path.exists(self.sockname))
self.command_channel_reply.send(self.cooker.command.runCommand(command))
serverlog("Command Completed")
except Exception as e:
stack = traceback.format_exc()
serverlog('Exception in server main event loop running command %s (%s)' % (command, stack))
@@ -322,25 +243,19 @@ class ProcessServer():
ready = self.idle_commands(.1, fds)
if self.idle:
self.idle.join()
if len(threading.enumerate()) != 1:
serverlog("More than one thread left?: " + str(threading.enumerate()))
serverlog("Exiting (socket: %s)" % os.path.exists(self.sockname))
serverlog("Exiting")
# Remove the socket file so we don't get any more connections to avoid races
# The build directory could have been renamed so if the file isn't the one we created
# we shouldn't delete it.
try:
sockinode = os.stat(self.sockname)[stat.ST_INO]
if sockinode == self.sockinode:
os.unlink(self.sockname)
else:
serverlog("bitbake.sock inode mismatch (%s vs %s), not deleting." % (sockinode, self.sockinode))
except Exception as err:
serverlog("Removing socket file '%s' failed (%s)" % (self.sockname, err))
os.unlink(self.sockname)
except:
pass
self.sock.close()
try:
self.cooker.shutdown(True, idle=False)
self.cooker.shutdown(True)
self.cooker.notifier.stop()
self.cooker.confignotifier.stop()
except:
@@ -348,9 +263,6 @@ class ProcessServer():
self.cooker.post_serve()
if len(threading.enumerate()) != 1:
serverlog("More than one thread left?: " + str(threading.enumerate()))
# Flush logs before we release the lock
sys.stdout.flush()
sys.stderr.flush()
@@ -366,21 +278,20 @@ class ProcessServer():
except FileNotFoundError:
return None
lockcontents = get_lock_contents(lockfile)
serverlog("Original lockfile contents: " + str(lockcontents))
lock.close()
lock = None
while not lock:
i = 0
lock = None
if not os.path.exists(os.path.dirname(lockfile)):
serverlog("Lockfile directory gone, exiting.")
return
while not lock and i < 30:
lock = bb.utils.lockfile(lockfile, shared=False, retry=False, block=False)
if not lock:
newlockcontents = get_lock_contents(lockfile)
if not newlockcontents[0].startswith((f"{os.getpid()}\n", f"{os.getpid()} ")):
if newlockcontents != lockcontents:
# A new server was started, the lockfile contents changed, we can exit
serverlog("Lockfile now contains different contents, exiting: " + str(newlockcontents))
return
@@ -394,98 +305,80 @@ class ProcessServer():
return
if not lock:
procs = get_lockfile_process_msg(lockfile)
msg = ["Delaying shutdown due to active processes which appear to be holding bitbake.lock"]
if procs:
msg.append(":\n%s" % procs)
serverlog("".join(msg))
def idle_thread(self):
def remove_idle_func(function):
with bb.utils.lock_timeout(self._idlefuncsLock):
del self._idlefuns[function]
self.idle_cond.notify_all()
while not self.quit:
nextsleep = 0.1
fds = []
try:
self.cooker.process_inotify_updates()
except Exception as exc:
serverlog("Exception %s in inofify updates broke the idle_thread, exiting" % traceback.format_exc())
self.quit = True
with bb.utils.lock_timeout(self._idlefuncsLock):
items = list(self._idlefuns.items())
for function, data in items:
# Some systems may not have lsof available
procs = None
try:
retval = function(self, data, False)
if isinstance(retval, idleFinish):
serverlog("Removing idle function %s at idleFinish" % str(function))
remove_idle_func(function)
self.cooker.command.finishAsyncCommand(retval.msg)
nextsleep = None
elif retval is False:
serverlog("Removing idle function %s" % str(function))
remove_idle_func(function)
nextsleep = None
elif retval is True:
nextsleep = None
elif isinstance(retval, float) and nextsleep:
if (retval < nextsleep):
nextsleep = retval
elif nextsleep is None:
continue
else:
fds = fds + retval
except SystemExit:
raise
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running idle function')
remove_idle_func(function)
serverlog("Exception %s broke the idle_thread, exiting" % traceback.format_exc())
self.quit = True
# Create new heartbeat event?
now = time.time()
if bb.event._heartbeat_enabled and now >= self.next_heartbeat:
# We might have missed heartbeats. Just trigger once in
# that case and continue after the usual delay.
self.next_heartbeat += self.heartbeat_seconds
if self.next_heartbeat <= now:
self.next_heartbeat = now + self.heartbeat_seconds
if hasattr(self.cooker, "data"):
heartbeat = bb.event.HeartbeatEvent(now)
procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
continue
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs is None:
# Fall back to fuser if lsof is unavailable
try:
bb.event.fire(heartbeat, self.cooker.data)
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running heartbeat function')
serverlog("Exception %s broke in idle_thread, exiting" % traceback.format_exc())
self.quit = True
if nextsleep and bb.event._heartbeat_enabled and now + nextsleep > self.next_heartbeat:
# Shorten the timeout so that we wake up in time for
# the heartbeat.
nextsleep = self.next_heartbeat - now
procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
# File was deleted?
continue
except OSError as e:
if e.errno != errno.ENOENT:
raise
if nextsleep is not None:
select.select(fds,[],[],nextsleep)[0]
msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
if procs:
msg += ":\n%s" % str(procs.decode("utf-8"))
serverlog(msg)
def idle_commands(self, delay, fds=None):
nextsleep = delay
if not fds:
fds = []
if not self.idle:
self.idle = threading.Thread(target=self.idle_thread)
self.idle.start()
elif self.idle and not self.idle.is_alive():
serverlog("Idle thread terminated, main thread exiting too")
bb.error("Idle thread terminated, main thread exiting too")
self.quit = True
for function, data in list(self._idlefuns.items()):
try:
retval = function(self, data, False)
if retval is False:
del self._idlefuns[function]
nextsleep = None
elif retval is True:
nextsleep = None
elif isinstance(retval, float) and nextsleep:
if (retval < nextsleep):
nextsleep = retval
elif nextsleep is None:
continue
else:
fds = fds + retval
except SystemExit:
raise
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running idle function')
del self._idlefuns[function]
self.quit = True
# Create new heartbeat event?
now = time.time()
if now >= self.next_heartbeat:
# We might have missed heartbeats. Just trigger once in
# that case and continue after the usual delay.
self.next_heartbeat += self.heartbeat_seconds
if self.next_heartbeat <= now:
self.next_heartbeat = now + self.heartbeat_seconds
if hasattr(self.cooker, "data"):
heartbeat = bb.event.HeartbeatEvent(now)
try:
bb.event.fire(heartbeat, self.cooker.data)
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running heartbeat function')
self.quit = True
if nextsleep and now + nextsleep > self.next_heartbeat:
# Shorten the timeout so that we wake up in time for
# the heartbeat.
nextsleep = self.next_heartbeat - now
if nextsleep is not None:
if self.xmlrpc:
@@ -507,9 +400,9 @@ class ServerCommunicator():
def runCommand(self, command):
self.connection.send(command)
if not self.recv.poll(30):
logger.info("No reply from server in 30s (for command %s at %s)" % (command[0], currenttime()))
logger.info("No reply from server in 30s")
if not self.recv.poll(30):
raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server (60s at %s)" % currenttime())
raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server (60s)")
ret, exc = self.recv.get()
# Should probably turn all exceptions in exc back into exceptions?
# For now, at least handle BBHandledException
@@ -543,7 +436,6 @@ class BitBakeProcessServerConnection(object):
self.socket_connection = sock
def terminate(self):
self.events.close()
self.socket_connection.close()
self.connection.connection.close()
self.connection.recv.close()
@@ -554,14 +446,13 @@ start_log_datetime_format = '%Y-%m-%d %H:%M:%S.%f'
class BitBakeServer(object):
def __init__(self, lock, sockname, featureset, server_timeout, xmlrpcinterface, profile):
def __init__(self, lock, sockname, featureset, server_timeout, xmlrpcinterface):
self.server_timeout = server_timeout
self.xmlrpcinterface = xmlrpcinterface
self.featureset = featureset
self.sockname = sockname
self.bitbake_lock = lock
self.profile = profile
self.readypipe, self.readypipein = os.pipe()
# Place the log in the builddirectory alongside the lock file
@@ -625,9 +516,9 @@ class BitBakeServer(object):
os.set_inheritable(self.bitbake_lock.fileno(), True)
os.set_inheritable(self.readypipein, True)
serverscript = os.path.realpath(os.path.dirname(__file__) + "/../../../bin/bitbake-server")
os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(int(self.profile)), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface, profile):
def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface):
import bb.cookerdata
import bb.cooker
@@ -639,7 +530,6 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
# Create server control socket
if os.path.exists(sockname):
serverlog("WARNING: removing existing socket file '%s'" % sockname)
os.unlink(sockname)
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
@@ -656,8 +546,7 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
writer = ConnectionWriter(readypipeinfd)
try:
featureset = []
cooker = bb.cooker.BBCooker(featureset, server)
cooker.configuration.profile = profile
cooker = bb.cooker.BBCooker(featureset, server.register_idle_function)
except bb.BBHandledException:
return None
writer.send("r")
@@ -667,7 +556,7 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
server.run()
finally:
# Flush any messages/errors to the logfile before exit
# Flush any ,essages/errors to the logfile before exit
sys.stdout.flush()
sys.stderr.flush()
@@ -772,18 +661,23 @@ class BBUIEventQueue:
self.reader = ConnectionReader(readfd)
self.t = threading.Thread()
self.t.daemon = True
self.t.run = self.startCallbackHandler
self.t.start()
def getEvent(self):
with bb.utils.lock_timeout(self.eventQueueLock):
if len(self.eventQueue) == 0:
return None
self.eventQueueLock.acquire()
item = self.eventQueue.pop(0)
if len(self.eventQueue) == 0:
self.eventQueueNotify.clear()
if len(self.eventQueue) == 0:
self.eventQueueLock.release()
return None
item = self.eventQueue.pop(0)
if len(self.eventQueue) == 0:
self.eventQueueNotify.clear()
self.eventQueueLock.release()
return item
def waitEvent(self, delay):
@@ -791,9 +685,10 @@ class BBUIEventQueue:
return self.getEvent()
def queue_event(self, event):
with bb.utils.lock_timeout(self.eventQueueLock):
self.eventQueue.append(event)
self.eventQueueNotify.set()
self.eventQueueLock.acquire()
self.eventQueue.append(event)
self.eventQueueNotify.set()
self.eventQueueLock.release()
def send_event(self, event):
self.queue_event(pickle.loads(event))
@@ -802,17 +697,13 @@ class BBUIEventQueue:
bb.utils.set_process_name("UIEventQueue")
while True:
try:
ready = self.reader.wait(0.25)
if ready:
event = self.reader.get()
self.queue_event(event)
except (EOFError, OSError, TypeError):
self.reader.wait()
event = self.reader.get()
self.queue_event(event)
except EOFError:
# Easiest way to exit is to close the file descriptor to cause an exit
break
def close(self):
self.reader.close()
self.t.join()
class ConnectionReader(object):
@@ -827,7 +718,7 @@ class ConnectionReader(object):
return self.reader.poll(timeout)
def get(self):
with bb.utils.lock_timeout(self.rlock):
with self.rlock:
res = self.reader.recv_bytes()
return multiprocessing.reduction.ForkingPickler.loads(res)
@@ -846,31 +737,10 @@ class ConnectionWriter(object):
# Why bb.event needs this I have no idea
self.event = self
def _send(self, obj):
gc.disable()
with bb.utils.lock_timeout(self.wlock):
self.writer.send_bytes(obj)
gc.enable()
def send(self, obj):
obj = multiprocessing.reduction.ForkingPickler.dumps(obj)
# See notes/code in CookerParser
# We must not terminate holding this lock else processes will hang.
# For SIGTERM, raising afterwards avoids this.
# For SIGINT, we don't want to have written partial data to the pipe.
# pthread_sigmask block/unblock would be nice but doesn't work, https://bugs.python.org/issue47139
process = multiprocessing.current_process()
if process and hasattr(process, "queue_signals"):
with bb.utils.lock_timeout(process.signal_threadlock):
process.queue_signals = True
self._send(obj)
process.queue_signals = False
while len(process.signal_received) > 0:
sig = process.signal_received.pop()
process.handle_sig(sig, None)
else:
self._send(obj)
with self.wlock:
self.writer.send_bytes(obj)
def fileno(self):
return self.writer.fileno()


@@ -11,7 +11,6 @@ import hashlib
import time
import inspect
from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import bb.server.xmlrpcclient
import bb
@@ -118,7 +117,7 @@ class BitBakeXMLRPCServerCommands():
"""
Run a cooker command on the server
"""
return self.server.cooker.command.runCommand(command, self.server.parent, self.server.readonly)
return self.server.cooker.command.runCommand(command, self.server.readonly)
def getEventHandle(self):
return self.event_handle


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -13,9 +11,6 @@ import pickle
import bb.data
import difflib
import simplediff
import json
import types
import bb.compress.zstd
from bb.checksum import FileChecksumCache
from bb import runqueue
import hashserv
@@ -24,17 +19,6 @@ import hashserv.client
logger = logging.getLogger('BitBake.SigGen')
hashequiv_logger = logging.getLogger('BitBake.SigGen.HashEquiv')
class SetEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, set) or isinstance(obj, frozenset):
return dict(_set_object=list(sorted(obj)))
return json.JSONEncoder.default(self, obj)
def SetDecoder(dct):
if '_set_object' in dct:
return frozenset(dct['_set_object'])
return dct
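The pair above round-trips sets through JSON by tagging them with "_set_object"; assuming the SetEncoder/SetDecoder definitions shown are in scope:

import json

data = {"deps": {"openssl", "zlib"}}
text = json.dumps(data, cls=SetEncoder, sort_keys=True)
restored = json.loads(text, object_hook=SetDecoder)
print(text)              # {"deps": {"_set_object": ["openssl", "zlib"]}}
print(restored["deps"])  # frozenset({'openssl', 'zlib'})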
def init(d):
siggens = [obj for obj in globals().values()
if type(obj) is type and issubclass(obj, SignatureGenerator)]
@@ -43,6 +27,7 @@ def init(d):
for sg in siggens:
if desired == sg.name:
return sg(d)
break
else:
logger.error("Invalid signature generator '%s', using default 'noop'\n"
"Available generators: %s", desired,
@@ -54,6 +39,11 @@ class SignatureGenerator(object):
"""
name = "noop"
# If the derived class supports multiconfig datacaches, set this to True
# The default is False for backward compatibility with derived signature
# generators that do not understand multiconfig caches
supports_multiconfig_datacaches = False
def __init__(self, data):
self.basehash = {}
self.taskhash = {}
@@ -71,27 +61,6 @@ class SignatureGenerator(object):
def postparsing_clean_cache(self):
return
def setup_datacache(self, datacaches):
self.datacaches = datacaches
def setup_datacache_from_datastore(self, mcfn, d):
# In task context we have no cache so setup internal data structures
# from the fully parsed data store provided
mc = d.getVar("__BBMULTICONFIG", False) or ""
tasks = d.getVar('__BBTASKS', False)
self.datacaches = {}
self.datacaches[mc] = types.SimpleNamespace()
setattr(self.datacaches[mc], "stamp", {})
self.datacaches[mc].stamp[mcfn] = d.getVar('STAMP')
setattr(self.datacaches[mc], "stamp_extrainfo", {})
self.datacaches[mc].stamp_extrainfo[mcfn] = {}
for t in tasks:
flag = d.getVarFlag(t, "stamp-extra-info")
if flag:
self.datacaches[mc].stamp_extrainfo[mcfn][t] = flag
def get_unihash(self, tid):
return self.taskhash[tid]
@@ -106,51 +75,17 @@ class SignatureGenerator(object):
"""Write/update the file checksum cache onto disk"""
return
def stampfile_base(self, mcfn):
mc = bb.runqueue.mc_from_tid(mcfn)
return self.datacaches[mc].stamp[mcfn]
def stampfile_mcfn(self, taskname, mcfn, extrainfo=True):
mc = bb.runqueue.mc_from_tid(mcfn)
stamp = self.datacaches[mc].stamp[mcfn]
if not stamp:
return
stamp_extrainfo = ""
if extrainfo:
taskflagname = taskname
if taskname.endswith("_setscene"):
taskflagname = taskname.replace("_setscene", "")
stamp_extrainfo = self.datacaches[mc].stamp_extrainfo[mcfn].get(taskflagname) or ""
return self.stampfile(stamp, mcfn, taskname, stamp_extrainfo)
def stampfile(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def stampcleanmask_mcfn(self, taskname, mcfn):
mc = bb.runqueue.mc_from_tid(mcfn)
stamp = self.datacaches[mc].stamp[mcfn]
if not stamp:
return []
taskflagname = taskname
if taskname.endswith("_setscene"):
taskflagname = taskname.replace("_setscene", "")
stamp_extrainfo = self.datacaches[mc].stamp_extrainfo[mcfn].get(taskflagname) or ""
return self.stampcleanmask(stamp, mcfn, taskname, stamp_extrainfo)
def stampcleanmask(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def dump_sigtask(self, mcfn, task, stampbase, runtime):
def dump_sigtask(self, fn, task, stampbase, runtime):
return
def invalidate_task(self, task, mcfn):
mc = bb.runqueue.mc_from_tid(mcfn)
stamp = self.datacaches[mc].stamp[mcfn]
bb.utils.remove(stamp)
def invalidate_task(self, task, d, fn):
bb.build.del_stamp(task, d, fn)
def dump_sigs(self, dataCache, options):
return
@@ -173,19 +108,40 @@ class SignatureGenerator(object):
def save_unitaskhashes(self):
return
def copy_unitaskhashes(self, targetdir):
return
def set_setscene_tasks(self, setscene_tasks):
return
def exit(self):
return
@classmethod
def get_data_caches(cls, dataCaches, mc):
"""
This function returns the datacaches that should be passed to signature
generator functions. If the signature generator supports multiconfig
caches, the entire dictionary of data caches is sent, otherwise a
special proxy is sent that supports both index access to all
multiconfigs, and also direct access for the default multiconfig.
def build_pnid(mc, pn, taskname):
if mc:
return "mc:" + mc + ":" + pn + ":" + taskname
return pn + ":" + taskname
The proxy class allows code in this class itself to always use
multiconfig aware code (to ease maintenance), but derived classes that
are unaware of multiconfig data caches can still access the default
multiconfig as expected.
Do not override this function in derived classes; it will be removed in
the future when support for multiconfig data caches is mandatory
"""
class DataCacheProxy(object):
def __init__(self):
pass
def __getitem__(self, key):
return dataCaches[key]
def __getattr__(self, name):
return getattr(dataCaches[mc], name)
if cls.supports_multiconfig_datacaches:
return dataCaches
return DataCacheProxy()
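The proxy's trick is that __getattr__ forwards to the default multiconfig while __getitem__ reaches any of them; a self-contained toy illustration (not the real caches):

from types import SimpleNamespace

dataCaches = {"": SimpleNamespace(pkg_fn={"zlib.bb": "zlib"}),
              "musl": SimpleNamespace(pkg_fn={"zlib.bb": "zlib-musl"})}

class DataCacheProxy:
    def __getitem__(self, key):   # index access: any multiconfig
        return dataCaches[key]
    def __getattr__(self, name):  # attribute access: default multiconfig
        return getattr(dataCaches[""], name)

proxy = DataCacheProxy()
print(proxy.pkg_fn["zlib.bb"])           # zlib
print(proxy["musl"].pkg_fn["zlib.bb"])   # zlib-musl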
class SignatureGeneratorBasic(SignatureGenerator):
"""
@@ -196,12 +152,15 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.basehash = {}
self.taskhash = {}
self.unihash = {}
self.taskdeps = {}
self.runtaskdeps = {}
self.file_checksum_values = {}
self.taints = {}
self.gendeps = {}
self.lookupcache = {}
self.setscenetasks = set()
self.basehash_ignore_vars = set((data.getVar("BB_BASEHASH_IGNORE_VARS") or "").split())
self.taskhash_ignore_tasks = None
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST") or "").split())
self.taskwhitelist = None
self.init_rundepcheck(data)
checksum_cache_file = data.getVar("BB_HASH_CHECKSUM_CACHE_FILE")
if checksum_cache_file:
@@ -216,21 +175,21 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.tidtopn = {}
def init_rundepcheck(self, data):
self.taskhash_ignore_tasks = data.getVar("BB_TASKHASH_IGNORE_TASKS") or None
if self.taskhash_ignore_tasks:
self.twl = re.compile(self.taskhash_ignore_tasks)
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST") or None
if self.taskwhitelist:
self.twl = re.compile(self.taskwhitelist)
else:
self.twl = None
def _build_data(self, mcfn, d):
def _build_data(self, fn, d):
ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1')
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d, self.basehash_ignore_vars)
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d, self.basewhitelist)
taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basehash_ignore_vars, mcfn)
taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basewhitelist, fn)
for task in tasklist:
tid = mcfn + ":" + task
tid = fn + ":" + task
if not ignore_mismatch and tid in self.basehash and self.basehash[tid] != basehash[tid]:
bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (tid, self.basehash[tid], basehash[tid]))
bb.error("The following commands may help:")
@@ -241,7 +200,11 @@ class SignatureGeneratorBasic(SignatureGenerator):
bb.error("%s -Sprintdiff\n" % cmd)
self.basehash[tid] = basehash[tid]
return taskdeps, gendeps, lookupcache
self.taskdeps[fn] = taskdeps
self.gendeps[fn] = gendeps
self.lookupcache[fn] = lookupcache
return taskdeps
def set_setscene_tasks(self, setscene_tasks):
self.setscenetasks = set(setscene_tasks)
@@ -249,47 +212,35 @@ class SignatureGeneratorBasic(SignatureGenerator):
def finalise(self, fn, d, variant):
mc = d.getVar("__BBMULTICONFIG", False) or ""
mcfn = fn
if variant or mc:
mcfn = bb.cache.realfn2virtual(fn, variant, mc)
fn = bb.cache.realfn2virtual(fn, variant, mc)
try:
taskdeps, gendeps, lookupcache = self._build_data(mcfn, d)
taskdeps = self._build_data(fn, d)
except bb.parse.SkipRecipe:
raise
except:
bb.warn("Error during finalise of %s" % mcfn)
bb.warn("Error during finalise of %s" % fn)
raise
basehashes = {}
for task in taskdeps:
basehashes[task] = self.basehash[mcfn + ":" + task]
d.setVar("__siggen_basehashes", basehashes)
d.setVar("__siggen_gendeps", gendeps)
d.setVar("__siggen_varvals", lookupcache)
d.setVar("__siggen_taskdeps", taskdeps)
#Slow but can be useful for debugging mismatched basehashes
#self.setup_datacache_from_datastore(mcfn, d)
#for task in taskdeps:
# self.dump_sigtask(mcfn, task, d.getVar("STAMP"), False)
#for task in self.taskdeps[fn]:
# self.dump_sigtask(fn, task, d.getVar("STAMP"), False)
def setup_datacache_from_datastore(self, mcfn, d):
super().setup_datacache_from_datastore(mcfn, d)
for task in taskdeps:
d.setVar("BB_BASEHASH:task-%s" % task, self.basehash[fn + ":" + task])
mc = bb.runqueue.mc_from_tid(mcfn)
for attr in ["siggen_varvals", "siggen_taskdeps", "siggen_gendeps"]:
if not hasattr(self.datacaches[mc], attr):
setattr(self.datacaches[mc], attr, {})
self.datacaches[mc].siggen_varvals[mcfn] = d.getVar("__siggen_varvals")
self.datacaches[mc].siggen_taskdeps[mcfn] = d.getVar("__siggen_taskdeps")
self.datacaches[mc].siggen_gendeps[mcfn] = d.getVar("__siggen_gendeps")
def postparsing_clean_cache(self):
#
# After parsing we can remove some things from memory to reduce our memory footprint
#
self.gendeps = {}
self.lookupcache = {}
self.taskdeps = {}
def rundep_check(self, fn, recipename, task, dep, depname, dataCaches):
# Return True if we should keep the dependency, False to drop it
# We only manipulate the dependencies for packages not in the ignore
# list
# We only manipulate the dependencies for packages not in the whitelist
if self.twl and not self.twl.search(recipename):
# then process the actual dependencies
if self.twl.search(depname):
@@ -307,37 +258,38 @@ class SignatureGeneratorBasic(SignatureGenerator):
def prep_taskhash(self, tid, deps, dataCaches):
(mc, _, task, mcfn) = bb.runqueue.split_tid_mcfn(tid)
(mc, _, task, fn) = bb.runqueue.split_tid_mcfn(tid)
self.basehash[tid] = dataCaches[mc].basetaskhash[tid]
self.runtaskdeps[tid] = []
self.file_checksum_values[tid] = []
recipename = dataCaches[mc].pkg_fn[mcfn]
recipename = dataCaches[mc].pkg_fn[fn]
self.tidtopn[tid] = recipename
# save hashfn for deps into siginfo?
for dep in deps:
(depmc, _, deptask, depmcfn) = bb.runqueue.split_tid_mcfn(dep)
dep_pn = dataCaches[depmc].pkg_fn[depmcfn]
if not self.rundep_check(mcfn, recipename, task, dep, dep_pn, dataCaches):
for dep in sorted(deps, key=clean_basepath):
(depmc, _, _, depmcfn) = bb.runqueue.split_tid_mcfn(dep)
depname = dataCaches[depmc].pkg_fn[depmcfn]
if not self.supports_multiconfig_datacaches and mc != depmc:
# If the signature generator doesn't understand multiconfig
# data caches, any dependency not in the same multiconfig must
# be skipped for backward compatibility
continue
if not self.rundep_check(fn, recipename, task, dep, depname, dataCaches):
continue
if dep not in self.taskhash:
bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?" % dep)
self.runtaskdeps[tid].append(dep)
dep_pnid = build_pnid(depmc, dep_pn, deptask)
self.runtaskdeps[tid].append((dep_pnid, dep))
if task in dataCaches[mc].file_checksums[mcfn]:
if task in dataCaches[mc].file_checksums[fn]:
if self.checksum_cache:
checksums = self.checksum_cache.get_checksums(dataCaches[mc].file_checksums[mcfn][task], recipename, self.localdirsexclude)
checksums = self.checksum_cache.get_checksums(dataCaches[mc].file_checksums[fn][task], recipename, self.localdirsexclude)
else:
checksums = bb.fetch2.get_file_checksums(dataCaches[mc].file_checksums[mcfn][task], recipename, self.localdirsexclude)
checksums = bb.fetch2.get_file_checksums(dataCaches[mc].file_checksums[fn][task], recipename, self.localdirsexclude)
for (f,cs) in checksums:
self.file_checksum_values[tid].append((f,cs))
taskdep = dataCaches[mc].task_deps[mcfn]
taskdep = dataCaches[mc].task_deps[fn]
if 'nostamp' in taskdep and task in taskdep['nostamp']:
# Nostamp tasks need an implicit taint so that they force any dependent tasks to run
if tid in self.taints and self.taints[tid].startswith("nostamp:"):
@@ -348,7 +300,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
taint = str(uuid.uuid4())
self.taints[tid] = "nostamp:" + taint
taint = self.read_taint(mcfn, task, dataCaches[mc].stamp[mcfn])
taint = self.read_taint(fn, task, dataCaches[mc].stamp[fn])
if taint:
self.taints[tid] = taint
logger.warning("%s is tainted from a forced run" % tid)
@@ -358,20 +310,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
def get_taskhash(self, tid, deps, dataCaches):
data = self.basehash[tid]
for dep in sorted(self.runtaskdeps[tid]):
data += self.get_unihash(dep[1])
for dep in self.runtaskdeps[tid]:
data = data + self.get_unihash(dep)
for (f, cs) in sorted(self.file_checksum_values[tid], key=clean_checksum_file_path):
for (f, cs) in self.file_checksum_values[tid]:
if cs:
if "/./" in f:
data += "./" + f.split("/./")[1]
data += cs
data = data + cs
if tid in self.taints:
if self.taints[tid].startswith("nostamp:"):
data += self.taints[tid][8:]
data = data + self.taints[tid][8:]
else:
data += self.taints[tid]
data = data + self.taints[tid]
h = hashlib.sha256(data.encode("utf-8")).hexdigest()
self.taskhash[tid] = h
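Condensed, the taskhash computed above is a SHA-256 over: the task's basehash, each runtime dependency's unihash (sorted in the newer variant), the file checksums (with paths relativised at any '/./' marker), and any taint, where "nostamp:" taints contribute only their UUID part. A self-contained sketch with made-up values:

    import hashlib

    # Hypothetical stand-ins for self.basehash[tid], the dependency unihashes
    # and the file checksum list used by get_taskhash() above.
    basehash = "aaaa"
    dep_unihashes = ["c1", "b2"]
    file_checksums = [("/work/src/./foo.c", "99"), ("/abs/bar.h", None)]
    taint = "nostamp:1234"

    data = basehash
    for uh in sorted(dep_unihashes):          # sorted for determinism
        data += uh
    for f, cs in file_checksums:
        if not cs:                            # entries without a checksum are skipped
            continue
        if "/./" in f:
            data += "./" + f.split("/./")[1]  # path relative to the '/./' marker
        data += cs
    data += taint[8:] if taint.startswith("nostamp:") else taint

    print(hashlib.sha256(data.encode("utf-8")).hexdigest())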
@@ -390,12 +340,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
def save_unitaskhashes(self):
self.unihash_cache.save(self.unitaskhashes)
def copy_unitaskhashes(self, targetdir):
self.unihash_cache.copyfile(targetdir)
def dump_sigtask(self, fn, task, stampbase, runtime):
def dump_sigtask(self, mcfn, task, stampbase, runtime):
tid = mcfn + ":" + task
mc = bb.runqueue.mc_from_tid(mcfn)
tid = fn + ":" + task
referencestamp = stampbase
if isinstance(runtime, str) and runtime.startswith("customfile"):
sigfile = stampbase
@@ -410,34 +357,29 @@ class SignatureGeneratorBasic(SignatureGenerator):
data = {}
data['task'] = task
data['basehash_ignore_vars'] = self.basehash_ignore_vars
data['taskhash_ignore_tasks'] = self.taskhash_ignore_tasks
data['taskdeps'] = self.datacaches[mc].siggen_taskdeps[mcfn][task]
data['basewhitelist'] = self.basewhitelist
data['taskwhitelist'] = self.taskwhitelist
data['taskdeps'] = self.taskdeps[fn][task]
data['basehash'] = self.basehash[tid]
data['gendeps'] = {}
data['varvals'] = {}
data['varvals'][task] = self.datacaches[mc].siggen_varvals[mcfn][task]
for dep in self.datacaches[mc].siggen_taskdeps[mcfn][task]:
if dep in self.basehash_ignore_vars:
data['varvals'][task] = self.lookupcache[fn][task]
for dep in self.taskdeps[fn][task]:
if dep in self.basewhitelist:
continue
data['gendeps'][dep] = self.datacaches[mc].siggen_gendeps[mcfn][dep]
data['varvals'][dep] = self.datacaches[mc].siggen_varvals[mcfn][dep]
data['gendeps'][dep] = self.gendeps[fn][dep]
data['varvals'][dep] = self.lookupcache[fn][dep]
if runtime and tid in self.taskhash:
data['runtaskdeps'] = [dep[0] for dep in sorted(self.runtaskdeps[tid])]
data['file_checksum_values'] = []
for f,cs in sorted(self.file_checksum_values[tid], key=clean_checksum_file_path):
if "/./" in f:
data['file_checksum_values'].append(("./" + f.split("/./")[1], cs))
else:
data['file_checksum_values'].append((os.path.basename(f), cs))
data['runtaskdeps'] = self.runtaskdeps[tid]
data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[tid]]
data['runtaskhashes'] = {}
for dep in self.runtaskdeps[tid]:
data['runtaskhashes'][dep[0]] = self.get_unihash(dep[1])
for dep in data['runtaskdeps']:
data['runtaskhashes'][dep] = self.get_unihash(dep)
data['taskhash'] = self.taskhash[tid]
data['unihash'] = self.get_unihash(tid)
taint = self.read_taint(mcfn, task, referencestamp)
taint = self.read_taint(fn, task, referencestamp)
if taint:
data['taint'] = taint
@@ -454,11 +396,11 @@ class SignatureGeneratorBasic(SignatureGenerator):
bb.error("Taskhash mismatch %s versus %s for %s" % (computed_taskhash, self.taskhash[tid], tid))
sigfile = sigfile.replace(self.taskhash[tid], computed_taskhash)
fd, tmpfile = bb.utils.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
try:
with bb.compress.zstd.open(fd, "wt", encoding="utf-8", num_threads=1) as f:
json.dump(data, f, sort_keys=True, separators=(",", ":"), cls=SetEncoder)
f.flush()
with os.fdopen(fd, "wb") as stream:
p = pickle.dump(data, stream, -1)
stream.flush()
os.chmod(tmpfile, 0o664)
bb.utils.rename(tmpfile, sigfile)
except (OSError, IOError) as err:
@@ -468,6 +410,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
pass
raise err
def dump_sigfn(self, fn, dataCaches, options):
if fn in self.taskdeps:
for task in self.taskdeps[fn]:
tid = fn + ":" + task
mc = bb.runqueue.mc_from_tid(tid)
if tid not in self.taskhash:
continue
if dataCaches[mc].basetaskhash[tid] != self.basehash[tid]:
bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % tid)
bb.error("The mismatched hashes were %s and %s" % (dataCaches[mc].basetaskhash[tid], self.basehash[tid]))
self.dump_sigtask(fn, task, dataCaches[mc].stamp[fn], True)
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
@@ -478,11 +432,11 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
# If task is not in basehash, then error
return self.basehash[tid]
def stampfile(self, stampbase, mcfn, taskname, extrainfo, clean=False):
if taskname.endswith("_setscene"):
tid = mcfn + ":" + taskname[:-9]
def stampfile(self, stampbase, fn, taskname, extrainfo, clean=False):
if taskname != "do_setscene" and taskname.endswith("_setscene"):
tid = fn + ":" + taskname[:-9]
else:
tid = mcfn + ":" + taskname
tid = fn + ":" + taskname
if clean:
h = "*"
else:
@@ -490,23 +444,12 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
return ("%s.%s.%s.%s" % (stampbase, taskname, h, extrainfo)).rstrip('.')
def stampcleanmask(self, stampbase, mcfn, taskname, extrainfo):
return self.stampfile(stampbase, mcfn, taskname, extrainfo, clean=True)
def stampcleanmask(self, stampbase, fn, taskname, extrainfo):
return self.stampfile(stampbase, fn, taskname, extrainfo, clean=True)
def invalidate_task(self, task, mcfn):
bb.note("Tainting hash to force rebuild of task %s, %s" % (mcfn, task))
mc = bb.runqueue.mc_from_tid(mcfn)
stamp = self.datacaches[mc].stamp[mcfn]
taintfn = stamp + '.' + task + '.taint'
import uuid
bb.utils.mkdirhier(os.path.dirname(taintfn))
# The specific content of the taint file is not really important;
# it just needs to be unique, so a random UUID is used
with open(taintfn, 'w') as taintf:
taintf.write(str(uuid.uuid4()))
def invalidate_task(self, task, d, fn):
bb.note("Tainting hash to force rebuild of task %s, %s" % (fn, task))
bb.build.write_taint(task, d, fn)
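Both invalidate_task() variants above boil down to the same trick: write a '<stamp>.<task>.taint' file whose content merely needs to be unique, which read_taint() later folds into the taskhash. A minimal sketch with hypothetical paths:

    import os, tempfile, uuid

    # Hypothetical stamp base; BitBake derives the real one from STAMP.
    stamp = os.path.join(tempfile.mkdtemp(), "mything-1.0-r0")
    taintfn = stamp + ".do_compile.taint"

    os.makedirs(os.path.dirname(taintfn), exist_ok=True)
    with open(taintfn, "w") as f:
        f.write(str(uuid.uuid4()))   # any unique value forces a new taskhash

    with open(taintfn) as f:
        print("taint:", f.read())    # what read_taint() would pick up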
class SignatureGeneratorUniHashMixIn(object):
def __init__(self, data):
@@ -525,18 +468,6 @@ class SignatureGeneratorUniHashMixIn(object):
self._client = hashserv.create_client(self.server)
return self._client
def reset(self, data):
if getattr(self, '_client', None) is not None:
self._client.close()
self._client = None
return super().reset(data)
def exit(self):
if getattr(self, '_client', None) is not None:
self._client.close()
self._client = None
return super().exit()
def get_stampfile_hash(self, tid):
if tid in self.taskhash:
# If a unique hash is reported, use it as the stampfile hash. This
@@ -608,7 +539,7 @@ class SignatureGeneratorUniHashMixIn(object):
# A unique hash equal to the taskhash is not very interesting,
# so it is reported at debug level 2. If they differ, that
# is much more interesting, so it is reported at debug level 1
hashequiv_logger.bbdebug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
hashequiv_logger.debug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
else:
hashequiv_logger.debug2('No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
except ConnectionError as e:
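The '(1, 2)[unihash == taskhash]' expression above is a terse level selector: bool subclasses int, so indexing the tuple with the comparison yields 2 when the hashes match and 1 when they differ. For example:

    unihash, taskhash = "abc", "abc"
    assert (1, 2)[unihash == taskhash] == 2   # equal hashes -> quieter debug level 2
    assert (1, 2)["abc" == "def"] == 1        # differing hashes -> debug level 1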
@@ -625,14 +556,14 @@ class SignatureGeneratorUniHashMixIn(object):
unihash = d.getVar('BB_UNIHASH')
report_taskdata = d.getVar('SSTATE_HASHEQUIV_REPORT_TASKDATA') == '1'
tempdir = d.getVar('T')
mcfn = d.getVar('BB_FILENAME')
tid = mcfn + ':do_' + task
fn = d.getVar('BB_FILENAME')
tid = fn + ':do_' + task
key = tid + ':' + taskhash
if self.setscenetasks and tid not in self.setscenetasks:
return
# This can happen if locked sigs are in action. Detect and just exit
# This can happen if locked sigs are in action. Detect and just abort
if taskhash != self.taskhash[tid]:
return
@@ -685,7 +616,7 @@ class SignatureGeneratorUniHashMixIn(object):
if new_unihash != unihash:
hashequiv_logger.debug('Task %s unihash changed %s -> %s by server %s' % (taskhash, unihash, new_unihash, self.server))
bb.event.fire(bb.runqueue.taskUniHashUpdate(mcfn + ':do_' + task, new_unihash), d)
bb.event.fire(bb.runqueue.taskUniHashUpdate(fn + ':do_' + task, new_unihash), d)
self.set_unihash(tid, new_unihash)
d.setVar('BB_UNIHASH', new_unihash)
else:
@@ -745,18 +676,19 @@ class SignatureGeneratorTestEquivHash(SignatureGeneratorUniHashMixIn, SignatureG
self.server = data.getVar('BB_HASHSERVE')
self.method = "sstate_output_hash"
def clean_checksum_file_path(file_checksum_tuple):
f, cs = file_checksum_tuple
if "/./" in f:
return "./" + f.split("/./")[1]
return f
#
# Dummy class used for bitbake-selftest
#
class SignatureGeneratorTestMulticonfigDepends(SignatureGeneratorBasicHash):
name = "TestMulticonfigDepends"
supports_multiconfig_datacaches = True
def dump_this_task(outfile, d):
import bb.parse
mcfn = d.getVar("BB_FILENAME")
fn = d.getVar("BB_FILENAME")
task = "do_" + d.getVar("BB_CURRENTTASK")
referencestamp = bb.parse.siggen.stampfile_base(mcfn)
bb.parse.siggen.dump_sigtask(mcfn, task, outfile, "customfile:" + referencestamp)
referencestamp = bb.build.stamp_internal(task, d, None, True)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile:" + referencestamp)
def init_colors(enable_color):
"""Initialise colour dict for passing to compare_sigfiles()"""
@@ -809,15 +741,38 @@ def list_inline_diff(oldlist, newlist, colors=None):
ret.append(item)
return '[%s]' % (', '.join(ret))
# Handle renamed fields
def handle_renames(data):
if 'basewhitelist' in data:
data['basehash_ignore_vars'] = data['basewhitelist']
del data['basewhitelist']
if 'taskwhitelist' in data:
data['taskhash_ignore_tasks'] = data['taskwhitelist']
del data['taskwhitelist']
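handle_renames() lets the comparison code read siginfo files written before the whitelist-to-ignore-vars rename. A runnable restatement, applied to a hypothetical old-format dict:

    def handle_renames(data):
        # map pre-rename keys onto their modern names
        if 'basewhitelist' in data:
            data['basehash_ignore_vars'] = data.pop('basewhitelist')
        if 'taskwhitelist' in data:
            data['taskhash_ignore_tasks'] = data.pop('taskwhitelist')

    old = {'basewhitelist': {'DATE', 'TIME'}, 'taskwhitelist': None}
    handle_renames(old)
    assert old == {'basehash_ignore_vars': {'DATE', 'TIME'},
                   'taskhash_ignore_tasks': None}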
def clean_basepath(basepath):
basepath, dir, recipe_task = basepath.rsplit("/", 2)
cleaned = dir + '/' + recipe_task
if basepath[0] == '/':
return cleaned
if basepath.startswith("mc:") and basepath.count(':') >= 2:
mc, mc_name, basepath = basepath.split(":", 2)
mc_suffix = ':mc:' + mc_name
else:
mc_suffix = ''
# The mc prefix has now been removed from basepath. Whatever comes next, if
# present, is the first suffix; ':/' (the start of the recipe path) marks its
# end. The form is 'virtual:a[:b[:c]]:/path...' (b and c being optional)
if basepath[0] != '/':
cleaned += ':' + basepath.split(':/', 1)[0]
return cleaned + mc_suffix
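Concretely, clean_basepath() keeps only the last two path components of a task identifier and re-attaches any 'virtual:' or 'mc:' decoration as a suffix. Assuming the definition above and hypothetical recipe paths:

    print(clean_basepath("/poky/meta/recipes-core/busybox/busybox_1.35.0.bb:do_fetch"))
    # -> busybox/busybox_1.35.0.bb:do_fetch
    print(clean_basepath("virtual:native:/poky/meta/recipes-core/busybox/busybox_1.35.0.bb:do_fetch"))
    # -> busybox/busybox_1.35.0.bb:do_fetch:virtual:native
    print(clean_basepath("mc:mymc:/poky/meta/recipes-core/busybox/busybox_1.35.0.bb:do_fetch"))
    # -> busybox/busybox_1.35.0.bb:do_fetch:mc:mymc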
def clean_basepaths(a):
b = {}
for x in a:
b[clean_basepath(x)] = a[x]
return b
def clean_basepaths_list(a):
b = []
for x in a:
b.append(clean_basepath(x))
return b
def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
output = []
@@ -839,21 +794,20 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
formatparams.update(values)
return formatstr.format(**formatparams)
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
a_data = json.load(f, object_hook=SetDecoder)
with bb.compress.zstd.open(b, "rt", encoding="utf-8", num_threads=1) as f:
b_data = json.load(f, object_hook=SetDecoder)
with open(a, 'rb') as f:
p1 = pickle.Unpickler(f)
a_data = p1.load()
with open(b, 'rb') as f:
p2 = pickle.Unpickler(f)
b_data = p2.load()
for data in [a_data, b_data]:
handle_renames(data)
def dict_diff(a, b, ignored_vars=set()):
def dict_diff(a, b, whitelist=set()):
sa = set(a.keys())
sb = set(b.keys())
common = sa & sb
changed = set()
for i in common:
if a[i] != b[i] and i not in ignored_vars:
if a[i] != b[i] and i not in whitelist:
changed.add(i)
added = sb - sa
removed = sa - sb
@@ -861,11 +815,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
def file_checksums_diff(a, b):
from collections import Counter
# Convert lists back to tuples
a = [(f[0], f[1]) for f in a]
b = [(f[0], f[1]) for f in b]
# Handle old siginfo format
if isinstance(a, dict):
a = [(os.path.basename(f), cs) for f, cs in a.items()]
if isinstance(b, dict):
b = [(os.path.basename(f), cs) for f, cs in b.items()]
# Compare lists, ensuring we can handle duplicate filenames if they exist
removedcount = Counter(a)
removedcount.subtract(b)
@@ -892,15 +846,15 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
removed = [x[0] for x in removed]
return changed, added, removed
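The Counter arithmetic above is a multiset difference, so duplicate filenames are handled per-occurrence rather than collapsing. A standalone sketch of the add/remove step, with hypothetical (filename, checksum) pairs:

    from collections import Counter

    a = [("foo.c", "11"), ("foo.c", "11"), ("bar.h", "22")]
    b = [("foo.c", "11"), ("bar.h", "33")]

    removedcount = Counter(a)
    removedcount.subtract(b)   # counts go negative for entries only in b
    removed = [x for x in removedcount if removedcount[x] > 0]
    added = [x for x in removedcount if removedcount[x] < 0]
    assert removed == [("foo.c", "11"), ("bar.h", "22")]   # one surplus foo.c, old bar.h
    assert added == [("bar.h", "33")]

(The surrounding code then pairs off same-named added/removed entries as 'changed'.)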
if 'basehash_ignore_vars' in a_data and a_data['basehash_ignore_vars'] != b_data['basehash_ignore_vars']:
output.append(color_format("{color_title}basehash_ignore_vars changed{color_default} from '%s' to '%s'") % (a_data['basehash_ignore_vars'], b_data['basehash_ignore_vars']))
if a_data['basehash_ignore_vars'] and b_data['basehash_ignore_vars']:
output.append("changed items: %s" % a_data['basehash_ignore_vars'].symmetric_difference(b_data['basehash_ignore_vars']))
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
output.append(color_format("{color_title}basewhitelist changed{color_default} from '%s' to '%s'") % (a_data['basewhitelist'], b_data['basewhitelist']))
if a_data['basewhitelist'] and b_data['basewhitelist']:
output.append("changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist']))
if 'taskhash_ignore_tasks' in a_data and a_data['taskhash_ignore_tasks'] != b_data['taskhash_ignore_tasks']:
output.append(color_format("{color_title}taskhash_ignore_tasks changed{color_default} from '%s' to '%s'") % (a_data['taskhash_ignore_tasks'], b_data['taskhash_ignore_tasks']))
if a_data['taskhash_ignore_tasks'] and b_data['taskhash_ignore_tasks']:
output.append("changed items: %s" % a_data['taskhash_ignore_tasks'].symmetric_difference(b_data['taskhash_ignore_tasks']))
if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
output.append(color_format("{color_title}taskwhitelist changed{color_default} from '%s' to '%s'") % (a_data['taskwhitelist'], b_data['taskwhitelist']))
if a_data['taskwhitelist'] and b_data['taskwhitelist']:
output.append("changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist']))
if a_data['taskdeps'] != b_data['taskdeps']:
output.append(color_format("{color_title}Task dependencies changed{color_default} from:\n%s\nto:\n%s") % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
@@ -908,7 +862,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
if a_data['basehash'] != b_data['basehash'] and not collapsed:
output.append(color_format("{color_title}basehash changed{color_default} from %s to %s") % (a_data['basehash'], b_data['basehash']))
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basehash_ignore_vars'] & b_data['basehash_ignore_vars'])
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
if changed:
for dep in sorted(changed):
output.append(color_format("{color_title}List of dependencies for variable %s changed from '{color_default}%s{color_title}' to '{color_default}%s{color_title}'") % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
@@ -948,9 +902,9 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
output.append(color_format("{color_title}Variable {var} value changed from '{color_default}{oldval}{color_title}' to '{color_default}{newval}{color_title}'{color_default}", var=dep, oldval=oldval, newval=newval))
if not 'file_checksum_values' in a_data:
a_data['file_checksum_values'] = []
a_data['file_checksum_values'] = {}
if not 'file_checksum_values' in b_data:
b_data['file_checksum_values'] = []
b_data['file_checksum_values'] = {}
changed, added, removed = file_checksums_diff(a_data['file_checksum_values'], b_data['file_checksum_values'])
if changed:
@@ -977,11 +931,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
a = a_data['runtaskdeps'][idx]
b = b_data['runtaskdeps'][idx]
if a_data['runtaskhashes'][a] != b_data['runtaskhashes'][b] and not collapsed:
changed.append("%s with hash %s\n changed to\n%s with hash %s" % (a, a_data['runtaskhashes'][a], b, b_data['runtaskhashes'][b]))
changed.append("%s with hash %s\n changed to\n%s with hash %s" % (clean_basepath(a), a_data['runtaskhashes'][a], clean_basepath(b), b_data['runtaskhashes'][b]))
if changed:
clean_a = a_data['runtaskdeps']
clean_b = b_data['runtaskdeps']
clean_a = clean_basepaths_list(a_data['runtaskdeps'])
clean_b = clean_basepaths_list(b_data['runtaskdeps'])
if clean_a != clean_b:
output.append(color_format("{color_title}runtaskdeps changed:{color_default}\n%s") % list_inline_diff(clean_a, clean_b, colors))
else:
@@ -1002,7 +956,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
#output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
bdep_found = True
if not bdep_found:
output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (dep, b[dep]))
output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (clean_basepath(dep), b[dep]))
if removed:
for dep in sorted(removed):
adep_found = False
@@ -1012,11 +966,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
#output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
adep_found = True
if not adep_found:
output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (dep, a[dep]))
output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (clean_basepath(dep), a[dep]))
if changed:
for dep in sorted(changed):
if not collapsed:
output.append(color_format("{color_title}Hash for task dependency %s changed{color_default} from %s to %s") % (dep, a[dep], b[dep]))
output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (clean_basepath(dep), a[dep], b[dep]))
if callable(recursecb):
recout = recursecb(dep, a[dep], b[dep])
if recout:
@@ -1026,7 +980,6 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
# If a dependent hash changed, might as well print the line above and then defer to the changes in
# that hash since in all likelihood, they're the same changes this task also saw.
output = [output[-1]] + recout
break
a_taint = a_data.get('taint', None)
b_taint = b_data.get('taint', None)
@@ -1048,7 +1001,7 @@ def calc_basehash(sigdata):
basedata = ''
alldeps = sigdata['taskdeps']
for dep in sorted(alldeps):
for dep in alldeps:
basedata = basedata + dep
val = sigdata['varvals'][dep]
if val is not None:
@@ -1064,8 +1017,6 @@ def calc_taskhash(sigdata):
for c in sigdata['file_checksum_values']:
if c[1]:
if "./" in c[0]:
data = data + c[0]
data = data + c[1]
if 'taint' in sigdata:
@@ -1080,33 +1031,32 @@ def calc_taskhash(sigdata):
def dump_sigfile(a):
output = []
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
a_data = json.load(f, object_hook=SetDecoder)
with open(a, 'rb') as f:
p1 = pickle.Unpickler(f)
a_data = p1.load()
handle_renames(a_data)
output.append("basewhitelist: %s" % (a_data['basewhitelist']))
output.append("basehash_ignore_vars: %s" % (sorted(a_data['basehash_ignore_vars'])))
output.append("taskhash_ignore_tasks: %s" % (sorted(a_data['taskhash_ignore_tasks'] or [])))
output.append("taskwhitelist: %s" % (a_data['taskwhitelist']))
output.append("Task dependencies: %s" % (sorted(a_data['taskdeps'])))
output.append("basehash: %s" % (a_data['basehash']))
for dep in sorted(a_data['gendeps']):
output.append("List of dependencies for variable %s is %s" % (dep, sorted(a_data['gendeps'][dep])))
for dep in a_data['gendeps']:
output.append("List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep]))
for dep in sorted(a_data['varvals']):
for dep in a_data['varvals']:
output.append("Variable %s value is %s" % (dep, a_data['varvals'][dep]))
if 'runtaskdeps' in a_data:
output.append("Tasks this task depends on: %s" % (sorted(a_data['runtaskdeps'])))
output.append("Tasks this task depends on: %s" % (a_data['runtaskdeps']))
if 'file_checksum_values' in a_data:
output.append("This task depends on the checksums of files: %s" % (sorted(a_data['file_checksum_values'])))
output.append("This task depends on the checksums of files: %s" % (a_data['file_checksum_values']))
if 'runtaskhashes' in a_data:
for dep in sorted(a_data['runtaskhashes']):
for dep in a_data['runtaskhashes']:
output.append("Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep]))
if 'taint' in a_data:

View File

@@ -39,7 +39,7 @@ class TaskData:
"""
BitBake Task Data implementation
"""
def __init__(self, halt = True, skiplist = None, allowincomplete = False):
def __init__(self, abort = True, skiplist = None, allowincomplete = False):
self.build_targets = {}
self.run_targets = {}
@@ -57,7 +57,7 @@ class TaskData:
self.failed_rdeps = []
self.failed_fns = []
self.halt = halt
self.abort = abort
self.allowincomplete = allowincomplete
self.skiplist = skiplist
@@ -328,7 +328,7 @@ class TaskData:
try:
self.add_provider_internal(cfgData, dataCache, item)
except bb.providers.NoProvider:
if self.halt:
if self.abort:
raise
self.remove_buildtarget(item)
@@ -451,12 +451,12 @@ class TaskData:
for target in self.build_targets:
if fn in self.build_targets[target]:
self.build_targets[target].remove(fn)
if not self.build_targets[target]:
if len(self.build_targets[target]) == 0:
self.remove_buildtarget(target, missing_list)
for target in self.run_targets:
if fn in self.run_targets[target]:
self.run_targets[target].remove(fn)
if not self.run_targets[target]:
if len(self.run_targets[target]) == 0:
self.remove_runtarget(target, missing_list)
def remove_buildtarget(self, target, missing_list=None):
@@ -479,7 +479,7 @@ class TaskData:
fn = tid.rsplit(":",1)[0]
self.fail_fn(fn, missing_list)
if self.halt and target in self.external_targets:
if self.abort and target in self.external_targets:
logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
raise bb.providers.NoProvider(target)
@@ -516,7 +516,7 @@ class TaskData:
self.add_provider_internal(cfgData, dataCache, target)
added = added + 1
except bb.providers.NoProvider:
if self.halt and target in self.external_targets and not self.allowincomplete:
if self.abort and target in self.external_targets and not self.allowincomplete:
raise
if not self.allowincomplete:
self.remove_buildtarget(target)
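Since this hunk renames the constructor keyword between releases (abort in the older code, halt in the newer), out-of-tree callers can probe for whichever name the running BitBake accepts. A defensive sketch, assuming it runs inside a BitBake environment:

    import inspect
    import bb.taskdata

    # Pick the keyword this BitBake's TaskData understands (abort -> halt rename).
    params = inspect.signature(bb.taskdata.TaskData.__init__).parameters
    kw = "halt" if "halt" in params else "abort"
    taskdata = bb.taskdata.TaskData(**{kw: True})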

View File

@@ -44,7 +44,6 @@ class VariableReferenceTest(ReferenceTest):
def parseExpression(self, exp):
parsedvar = self.d.expandWithRefs(exp, None)
self.references = parsedvar.references
self.execs = parsedvar.execs
def test_simple_reference(self):
self.setEmptyVars(["FOO"])
@@ -62,11 +61,6 @@ class VariableReferenceTest(ReferenceTest):
self.parseExpression("${@d.getVar('BAR') + 'foo'}")
self.assertReferences(set(["BAR"]))
def test_python_exec_reference(self):
self.parseExpression("${@eval('3 * 5')}")
self.assertReferences(set())
self.assertExecs(set(["eval"]))
class ShellReferenceTest(ReferenceTest):
def parseExpression(self, exp):
@@ -324,7 +318,7 @@ d.getVar(a(), False)
"filename": "example.bb",
})
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
@@ -371,7 +365,7 @@ esac
self.d.setVarFlags("FOO", {"func": True})
self.setEmptyVars(execs)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "inverted"] + execs))
@@ -381,7 +375,7 @@ esac
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "oe_libinstall")
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
@@ -390,7 +384,7 @@ esac
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "${@'oe_libinstall'}")
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
@@ -405,7 +399,7 @@ esac
# Check dependencies
self.d.setVar('ANOTHERVAR', expr)
self.d.setVar('TESTVAR', 'anothervalue testval testval2')
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(sorted(values.splitlines()),
sorted([expr,
'TESTVAR{anothervalue} = Set',
@@ -418,24 +412,6 @@ esac
# Check final value
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['anothervalue', 'yetanothervalue', 'lastone'])
def test_contains_vardeps_excluded(self):
# Check the ignored_vars option to build_dependencies is handled by contains functionality
varval = '${TESTVAR2} ${@bb.utils.filter("TESTVAR", "somevalue anothervalue", d)}'
self.d.setVar('ANOTHERVAR', varval)
self.d.setVar('TESTVAR', 'anothervalue testval testval2')
self.d.setVar('TESTVAR2', 'testval3')
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), set(["TESTVAR"]), self.d, self.d)
self.assertEqual(sorted(values.splitlines()), sorted([varval]))
self.assertEqual(deps, set(["TESTVAR2"]))
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])
# Check the vardepsexclude flag is handled by contains functionality
self.d.setVarFlag('ANOTHERVAR', 'vardepsexclude', 'TESTVAR')
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
self.assertEqual(sorted(values.splitlines()), sorted([varval]))
self.assertEqual(deps, set(["TESTVAR2"]))
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])
#Currently no wildcard support
#def test_vardeps_wildcards(self):
# self.d.setVar("oe_libinstall", "echo test")

View File

@@ -20,7 +20,7 @@ class ProgressWatcher:
def __init__(self):
self._reports = []
def handle_event(self, event, d):
def handle_event(self, event):
self._reports.append((event.progress, event.rate))
def reports(self):

View File

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

View File

@@ -1,8 +1,6 @@
#
# BitBake Tests for cooker.py
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

View File

@@ -60,15 +60,6 @@ class DataExpansions(unittest.TestCase):
val = self.d.expand("${@5*12}")
self.assertEqual(str(val), "60")
def test_python_snippet_w_dict(self):
val = self.d.expand("${@{ 'green': 1, 'blue': 2 }['green']}")
self.assertEqual(str(val), "1")
def test_python_unexpanded_multi(self):
self.d.setVar("bar", "${unsetvar}")
val = self.d.expand("${@2*2},${foo},${@d.getVar('foo') + ' ${bar}'},${foo}")
self.assertEqual(str(val), "4,value_of_foo,${@d.getVar('foo') + ' ${unsetvar}'},value_of_foo")
def test_expand_in_python_snippet(self):
val = self.d.expand("${@'boo ' + '${foo}'}")
self.assertEqual(str(val), "boo value_of_foo")
@@ -77,18 +68,6 @@ class DataExpansions(unittest.TestCase):
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")
self.assertEqual(str(val), "value_of_foo value_of_bar")
def test_python_snippet_function_reference(self):
self.d.setVar("TESTVAL", "testvalue")
self.d.setVar("testfunc", 'd.getVar("TESTVAL")')
context = bb.utils.get_context()
context["testfunc"] = lambda d: d.getVar("TESTVAL")
val = self.d.expand("${@testfunc(d)}")
self.assertEqual(str(val), "testvalue")
def test_python_snippet_builtin_metadata(self):
self.d.setVar("eval", "INVALID")
self.d.expand("${@eval('3')}")
def test_python_unexpanded(self):
self.d.setVar("bar", "${unsetvar}")
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")

View File

@@ -157,7 +157,7 @@ class EventHandlingTest(unittest.TestCase):
self._test_process.event_handler,
event,
None)
self._test_process.event_handler.assert_called_once_with(event, None)
self._test_process.event_handler.assert_called_once_with(event)
def test_fire_class_handlers(self):
""" Test fire_class_handlers method """
@@ -175,10 +175,10 @@ class EventHandlingTest(unittest.TestCase):
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
bb.event.fire_class_handlers(event2, None)
expected_event_handler1 = [call(event1, None)]
expected_event_handler2 = [call(event1, None),
call(event2, None),
call(event2, None)]
expected_event_handler1 = [call(event1)]
expected_event_handler2 = [call(event1),
call(event2),
call(event2)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected_event_handler1)
self.assertEqual(self._test_process.event_handler2.call_args_list,
@@ -205,7 +205,7 @@ class EventHandlingTest(unittest.TestCase):
bb.event.fire_class_handlers(event2, None)
bb.event.fire_class_handlers(event2, None)
expected_event_handler1 = []
expected_event_handler2 = [call(event1, None)]
expected_event_handler2 = [call(event1)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected_event_handler1)
self.assertEqual(self._test_process.event_handler2.call_args_list,
@@ -223,7 +223,7 @@ class EventHandlingTest(unittest.TestCase):
self.assertEqual(result, bb.event.Registered)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
expected = [call(event1, None), call(event2, None)]
expected = [call(event1), call(event2)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
@@ -237,7 +237,7 @@ class EventHandlingTest(unittest.TestCase):
self.assertEqual(result, bb.event.Registered)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
expected = [call(event1, None), call(event2, None), call(event1, None)]
expected = [call(event1), call(event2), call(event1)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
@@ -251,7 +251,7 @@ class EventHandlingTest(unittest.TestCase):
self.assertEqual(result, bb.event.Registered)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
expected = [call(event1,None), call(event2, None), call(event1, None), call(event2, None)]
expected = [call(event1), call(event2), call(event1), call(event2)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
@@ -359,10 +359,9 @@ class EventHandlingTest(unittest.TestCase):
event1 = bb.event.ConfigParsed()
bb.event.fire(event1, None)
expected = [call(event1, None)]
expected = [call(event1)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
expected = [call(event1)]
self.assertEqual(self._test_ui1.event.send.call_args_list,
expected)
@@ -451,9 +450,10 @@ class EventHandlingTest(unittest.TestCase):
and disable threadlocks tests """
bb.event.fire(bb.event.OperationStarted(), None)
def test_event_threadlock(self):
def test_enable_threadlock(self):
""" Test enable_threadlock method """
self._set_threadlock_test_mockups()
bb.event.enable_threadlock()
self._set_and_run_threadlock_test_workers()
# Calls to UI handlers should be in order as all the registered
# handlers for the event coming from the first worker should be
@@ -461,6 +461,20 @@ class EventHandlingTest(unittest.TestCase):
self.assertEqual(self._threadlock_test_calls,
["w1_ui1", "w1_ui2", "w2_ui1", "w2_ui2"])
def test_disable_threadlock(self):
""" Test disable_threadlock method """
self._set_threadlock_test_mockups()
bb.event.disable_threadlock()
self._set_and_run_threadlock_test_workers()
# Calls to UI handlers should be interleaved. Thanks to the
# delay in the registered handlers for the event coming from the first
# worker, the event coming from the second worker starts being
# processed before finishing handling the first worker event.
self.assertEqual(self._threadlock_test_calls,
["w1_ui1", "w2_ui1", "w1_ui2", "w2_ui2"])
class EventClassesTest(unittest.TestCase):
""" Event classes test class """

View File

@@ -1,59 +0,0 @@
[deleted test fixture: 59-line Apache directory-index HTML for /debian/pool/main/m/minicom on ftp.debian.org (fetcher test data); raw markup omitted]

View File

@@ -1,20 +0,0 @@
[deleted test fixture: 20-line directory-index HTML for /sources/libxml2/2.10/ (fetcher test data); raw markup omitted]

View File

@@ -1,40 +0,0 @@
[deleted test fixture: 40-line directory-index HTML for /sources/libxml2/2.9/ (fetcher test data); raw markup omitted]

View File

@@ -1,19 +0,0 @@
<!DOCTYPE html><html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"><meta name="viewport" content="width=device-width"><style type="text/css">body,html {background:#fff;font-family:"Bitstream Vera Sans","Lucida Grande","Lucida Sans Unicode",Lucidux,Verdana,Lucida,sans-serif;}tr:nth-child(even) {background:#f4f4f4;}th,td {padding:0.1em 0.5em;}th {text-align:left;font-weight:bold;background:#eee;border-bottom:1px solid #aaa;}#list {border:1px solid #aaa;width:100%;}a {color:#a33;}a:hover {color:#e33;}</style>
<title>Index of /sources/libxml2/</title>
</head><body><h1>Index of /sources/libxml2/</h1>
<table id="list"><thead><tr><th style="width:55%"><a href="?C=N&amp;O=A">File Name</a>&nbsp;<a href="?C=N&amp;O=D">&nbsp;&darr;&nbsp;</a></th><th style="width:20%"><a href="?C=S&amp;O=A">File Size</a>&nbsp;<a href="?C=S&amp;O=D">&nbsp;&darr;&nbsp;</a></th><th style="width:25%"><a href="?C=M&amp;O=A">Date</a>&nbsp;<a href="?C=M&amp;O=D">&nbsp;&darr;&nbsp;</a></th></tr></thead>
<tbody><tr><td class="link"><a href="../">Parent directory/</a></td><td class="size">-</td><td class="date">-</td></tr>
<tr><td class="link"><a href="2.0/" title="2.0">2.0/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:04</td></tr>
<tr><td class="link"><a href="2.1/" title="2.1">2.1/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:04</td></tr>
<tr><td class="link"><a href="2.10/" title="2.10">2.10/</a></td><td class="size">-</td><td class="date">2022-Oct-14 12:55</td></tr>
<tr><td class="link"><a href="2.2/" title="2.2">2.2/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:04</td></tr>
<tr><td class="link"><a href="2.3/" title="2.3">2.3/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:05</td></tr>
<tr><td class="link"><a href="2.4/" title="2.4">2.4/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:05</td></tr>
<tr><td class="link"><a href="2.5/" title="2.5">2.5/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:05</td></tr>
<tr><td class="link"><a href="2.6/" title="2.6">2.6/</a></td><td class="size">-</td><td class="date">2009-Jul-14 13:05</td></tr>
<tr><td class="link"><a href="2.7/" title="2.7">2.7/</a></td><td class="size">-</td><td class="date">2022-Feb-14 18:24</td></tr>
<tr><td class="link"><a href="2.8/" title="2.8">2.8/</a></td><td class="size">-</td><td class="date">2022-Feb-14 18:26</td></tr>
<tr><td class="link"><a href="2.9/" title="2.9">2.9/</a></td><td class="size">-</td><td class="date">2022-May-02 12:04</td></tr>
<tr><td class="link"><a href="cache.json" title="cache.json">cache.json</a></td><td class="size">22.8 KiB</td><td class="date">2022-Oct-14 12:55</td></tr>
</tbody></table></body></html>

File diff suppressed because it is too large

View File

@@ -119,7 +119,7 @@ EXTRA_OECONF:class-target = "b"
EXTRA_OECONF:append = " c"
"""
def test_parse_overrides2(self):
def test_parse_overrides(self):
f = self.parsehelper(self.overridetest2)
d = bb.parse.handle(f.name, self.d)['']
d.appendVar("EXTRA_OECONF", " d")
@@ -164,7 +164,6 @@ python () {
# become unset/disappear.
#
def test_parse_classextend_contamination(self):
self.d.setVar("__bbclasstype", "recipe")
cls = self.parsehelper(self.classextend_bbclass, suffix=".bbclass")
#clsname = os.path.basename(cls.name).replace(".bbclass", "")
self.classextend = self.classextend.replace("###CLASS###", cls.name)
@@ -186,60 +185,12 @@ deltask ${EMPTYVAR}
"""
def test_parse_addtask_deltask(self):
import sys
with self.assertLogs() as logs:
f = self.parsehelper(self.addtask_deltask)
d = bb.parse.handle(f.name, self.d)['']
output = "".join(logs.output)
self.assertTrue("addtask contained multiple 'before' keywords" in output)
self.assertTrue("addtask contained multiple 'after' keywords" in output)
self.assertTrue('addtask ignored: " do_patch"' in output)
#self.assertTrue('dependent task do_foo for do_patch does not exist' in output)
broken_multiline_comment = """
# First line of comment \\
# Second line of comment \\
"""
def test_parse_broken_multiline_comment(self):
f = self.parsehelper(self.broken_multiline_comment)
with self.assertRaises(bb.BBHandledException):
d = bb.parse.handle(f.name, self.d)['']
comment_in_var = """
VAR = " \\
SOMEVAL \\
# some comment \\
SOMEOTHERVAL \\
"
"""
def test_parse_comment_in_var(self):
f = self.parsehelper(self.comment_in_var)
with self.assertRaises(bb.BBHandledException):
d = bb.parse.handle(f.name, self.d)['']
at_sign_in_var_flag = """
A[flag@.service] = "nonet"
B[flag@.target] = "ntb"
C[f] = "flag"
unset A[flag@.service]
"""
def test_parse_at_sign_in_var_flag(self):
f = self.parsehelper(self.at_sign_in_var_flag)
f = self.parsehelper(self.addtask_deltask)
d = bb.parse.handle(f.name, self.d)['']
self.assertEqual(d.getVar("A"), None)
self.assertEqual(d.getVar("B"), None)
self.assertEqual(d.getVarFlag("A","flag@.service"), None)
self.assertEqual(d.getVarFlag("B","flag@.target"), "ntb")
self.assertEqual(d.getVarFlag("C","f"), "flag")
def test_parse_invalid_at_sign_in_var_flag(self):
invalid_at_sign = self.at_sign_in_var_flag.replace("B[f", "B[@f")
f = self.parsehelper(invalid_at_sign)
with self.assertRaises(bb.parse.ParseError):
d = bb.parse.handle(f.name, self.d)['']
stdout = sys.stdout.getvalue()
self.assertTrue("addtask contained multiple 'before' keywords" in stdout)
self.assertTrue("addtask contained multiple 'after' keywords" in stdout)
self.assertTrue('addtask ignored: " do_patch"' in stdout)
#self.assertTrue('dependent task do_foo for do_patch does not exist' in stdout)

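The parse-test hunks above contrast assertions against captured sys.stdout with unittest's assertLogs() context manager, which keeps the assertions working when BitBake routes messages through the logging module. A minimal standalone sketch of that pattern, with an illustrative test class and message text:

import logging
import unittest

class LogAssertionDemo(unittest.TestCase):
    # Sketch only: capture log records instead of scraping stdout,
    # then assert on the joined, formatted output as the tests above do.
    def test_warning_captured(self):
        with self.assertLogs() as logs:
            logging.getLogger("BitBake").warning(
                "addtask contained multiple 'before' keywords")
        output = "".join(logs.output)
        self.assertIn("multiple 'before' keywords", output)

if __name__ == "__main__":
    unittest.main()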
View File

@@ -12,6 +12,6 @@ STAMP = "${TMPDIR}/stamps/${PN}"
T = "${TMPDIR}/workdir/${PN}/temp"
BB_NUMBER_THREADS = "4"
BB_BASEHASH_IGNORE_VARS = "BB_CURRENT_MC BB_HASHSERVE TMPDIR TOPDIR SLOWTASKS SSTATEVALID FILE BB_CURRENTTASK"
BB_HASHBASE_WHITELIST = "BB_CURRENT_MC BB_HASHSERVE TMPDIR TOPDIR SLOWTASKS SSTATEVALID FILE"
include conf/multiconfig/${BB_CURRENT_MC}.conf

View File

@@ -29,14 +29,13 @@ class RunQueueTests(unittest.TestCase):
def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None, cleanup=False):
env = os.environ.copy()
env["BBPATH"] = os.path.realpath(os.path.join(os.path.dirname(__file__), "runqueue-tests"))
env["BB_ENV_PASSTHROUGH_ADDITIONS"] = "SSTATEVALID SLOWTASKS TOPDIR"
env["BB_ENV_EXTRAWHITE"] = "SSTATEVALID SLOWTASKS"
env["SSTATEVALID"] = sstatevalid
env["SLOWTASKS"] = slowtasks
env["TOPDIR"] = builddir
if extraenv:
for k in extraenv:
env[k] = extraenv[k]
env["BB_ENV_PASSTHROUGH_ADDITIONS"] = env["BB_ENV_PASSTHROUGH_ADDITIONS"] + " " + k
env["BB_ENV_EXTRAWHITE"] = env["BB_ENV_EXTRAWHITE"] + " " + k
try:
output = subprocess.check_output(cmd, env=env, stderr=subprocess.STDOUT,universal_newlines=True, cwd=builddir)
print(output)
@@ -59,8 +58,6 @@ class RunQueueTests(unittest.TestCase):
expected = ['a1:' + x for x in self.alltasks]
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_single_setscenevalid(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
@@ -71,8 +68,6 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot', 'a1:build']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_intermediate_setscenevalid(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
@@ -82,8 +77,6 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot_setscene', 'a1:build']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_intermediate_notcovered(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
@@ -93,8 +86,6 @@ class RunQueueTests(unittest.TestCase):
'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_all_setscenevalid(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
@@ -104,8 +95,6 @@ class RunQueueTests(unittest.TestCase):
'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_no_settasks(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1", "-c", "patch"]
@@ -114,8 +103,6 @@ class RunQueueTests(unittest.TestCase):
expected = ['a1:fetch', 'a1:unpack', 'a1:patch']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_mix_covered_notcovered(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1:do_patch", "a1:do_populate_sysroot"]
@@ -124,7 +111,6 @@ class RunQueueTests(unittest.TestCase):
expected = ['a1:fetch', 'a1:unpack', 'a1:patch', 'a1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
# Test targets with intermediate setscene tasks alongside a target with no intermediate setscene tasks
def test_mixed_direct_tasks_setscene_tasks(self):
@@ -136,8 +122,6 @@ class RunQueueTests(unittest.TestCase):
'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
# This test slows down the execution of do_package_setscene until after other real tasks have
# started running which tests for a bug where tasks were being lost from the buildable list of real
# tasks if they weren't in tasks_covered or tasks_notcovered
@@ -152,14 +136,12 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot', 'a1:build']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_setscene_ignore_tasks(self):
def test_setscenewhitelist(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
extraenv = {
"BB_SETSCENE_ENFORCE" : "1",
"BB_SETSCENE_ENFORCE_IGNORE_TASKS" : "a1:do_package_write_rpm a1:do_build"
"BB_SETSCENE_ENFORCE_WHITELIST" : "a1:do_package_write_rpm a1:do_build"
}
sstatevalid = "a1:do_package a1:do_package_qa a1:do_packagedata a1:do_package_write_ipk a1:do_populate_lic a1:do_populate_sysroot"
tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv)
@@ -167,8 +149,6 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot_setscene', 'a1:package_setscene']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
# Tests for problems with dependencies between setscene tasks
def test_no_setscenevalid_harddeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
@@ -182,8 +162,6 @@ class RunQueueTests(unittest.TestCase):
'd1:populate_sysroot', 'd1:build']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_no_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -194,8 +172,6 @@ class RunQueueTests(unittest.TestCase):
expected.remove('a1:package_qa')
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_single_a1_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -206,8 +182,6 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot'] + ['b1:' + x for x in self.alltasks]
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_single_b1_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -219,8 +193,6 @@ class RunQueueTests(unittest.TestCase):
expected.remove('b1:package')
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_intermediate_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -231,8 +203,6 @@ class RunQueueTests(unittest.TestCase):
expected.remove('b1:package')
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_all_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -243,8 +213,6 @@ class RunQueueTests(unittest.TestCase):
'b1:packagedata_setscene', 'b1:package_qa_setscene', 'b1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_multiconfig_setscene_optimise(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
@@ -264,8 +232,6 @@ class RunQueueTests(unittest.TestCase):
expected.remove(x)
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
def test_multiconfig_bbmask(self):
# This test validates that multiconfigs can independently mask off
# recipes they do not want with BBMASK. It works by having recipes
@@ -282,13 +248,11 @@ class RunQueueTests(unittest.TestCase):
cmd = ["bitbake", "mc:mc-1:fails-mc2", "mc:mc_2:fails-mc1"]
self.run_bitbakecmd(cmd, tempdir, "", extraenv=extraenv)
self.shutdown(tempdir)
def test_multiconfig_mcdepends(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
"BBMULTICONFIG" : "mc-1 mc_2",
"BB_SIGNATURE_HANDLER" : "basichash",
"BB_SIGNATURE_HANDLER" : "TestMulticonfigDepends",
"EXTRA_BBFILES": "${COREBASE}/recipes/fails-mc/*.bb",
}
tasks = self.run_bitbakecmd(["bitbake", "mc:mc-1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
@@ -314,8 +278,7 @@ class RunQueueTests(unittest.TestCase):
["mc_2:a1:%s" % t for t in rerun_tasks]
self.assertEqual(set(tasks), set(expected))
self.shutdown(tempdir)
@unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
def test_hashserv_single(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
@@ -341,6 +304,7 @@ class RunQueueTests(unittest.TestCase):
self.shutdown(tempdir)
@unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
def test_hashserv_double(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
@@ -365,6 +329,7 @@ class RunQueueTests(unittest.TestCase):
self.shutdown(tempdir)
@unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
def test_hashserv_multiple_setscene(self):
# Runs e1:do_package_setscene twice
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
@@ -396,6 +361,7 @@ class RunQueueTests(unittest.TestCase):
def shutdown(self, tempdir):
# Wait for the hashserve socket to disappear else we'll see races with the tempdir cleanup
while (os.path.exists(tempdir + "/hashserve.sock") or os.path.exists(tempdir + "cache/hashserv.db-wal") or os.path.exists(tempdir + "/bitbake.lock")):
while (os.path.exists(tempdir + "/hashserve.sock") or os.path.exists(tempdir + "cache/hashserv.db-wal")):
time.sleep(0.5)

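The runqueue-test hunks above track the rename of BB_ENV_EXTRAWHITE to BB_ENV_PASSTHROUGH_ADDITIONS, the variable naming extra environment variables BitBake should let through. A condensed sketch of the helper's passthrough logic (the function name is hypothetical):

import os
import subprocess

def run_with_passthrough(cmd, builddir, extraenv=None):
    # Copy the environment and tell BitBake which extra variables to
    # let through; every key supplied via extraenv is also appended to
    # the passthrough list, mirroring run_bitbakecmd() above.
    env = os.environ.copy()
    env["BB_ENV_PASSTHROUGH_ADDITIONS"] = "SSTATEVALID SLOWTASKS TOPDIR"
    for k, v in (extraenv or {}).items():
        env[k] = v
        env["BB_ENV_PASSTHROUGH_ADDITIONS"] += " " + k
    return subprocess.check_output(cmd, env=env, cwd=builddir,
                                   stderr=subprocess.STDOUT,
                                   universal_newlines=True)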
View File

@@ -17,12 +17,75 @@ import bb.siggen
class SiggenTest(unittest.TestCase):
def test_build_pnid(self):
tests = {
('', 'helloworld', 'do_sometask') : 'helloworld:do_sometask',
('XX', 'helloworld', 'do_sometask') : 'mc:XX:helloworld:do_sometask',
}
def test_clean_basepath_simple_target_basepath(self):
basepath = '/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask'
for t in tests:
self.assertEqual(bb.siggen.build_pnid(*t), tests[t])
actual_cleaned = bb.siggen.clean_basepath(basepath)
self.assertEqual(actual_cleaned, expected_cleaned)
def test_clean_basepath_basic_virtual_basepath(self):
basepath = 'virtual:something:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:virtual:something'
actual_cleaned = bb.siggen.clean_basepath(basepath)
self.assertEqual(actual_cleaned, expected_cleaned)
def test_clean_basepath_mc_basepath(self):
basepath = 'mc:somemachine:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:mc:somemachine'
actual_cleaned = bb.siggen.clean_basepath(basepath)
self.assertEqual(actual_cleaned, expected_cleaned)
def test_clean_basepath_virtual_long_prefix_basepath(self):
basepath = 'virtual:something:A:B:C:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:virtual:something:A:B:C'
actual_cleaned = bb.siggen.clean_basepath(basepath)
self.assertEqual(actual_cleaned, expected_cleaned)
def test_clean_basepath_mc_virtual_basepath(self):
basepath = 'mc:somemachine:virtual:something:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:virtual:something:mc:somemachine'
actual_cleaned = bb.siggen.clean_basepath(basepath)
self.assertEqual(actual_cleaned, expected_cleaned)
def test_clean_basepath_mc_virtual_long_prefix_basepath(self):
basepath = 'mc:X:virtual:something:C:B:A:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:virtual:something:C:B:A:mc:X'
actual_cleaned = bb.siggen.clean_basepath(basepath)
self.assertEqual(actual_cleaned, expected_cleaned)
# def test_clean_basepath_performance(self):
# input_basepaths = [
# 'mc:X:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
# 'mc:X:virtual:something:C:B:A:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
# 'virtual:something:C:B:A:/different/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
# 'virtual:something:A:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
# '/this/is/most/common/input/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
# '/and/should/be/tested/with/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
# '/more/weight/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
# ]
# time_start = time.time()
# i = 2000000
# while i >= 0:
# for basepath in input_basepaths:
# bb.siggen.clean_basepath(basepath)
# i -= 1
# elapsed = time.time() - time_start
# print('{} ({}s)'.format(self.id(), round(elapsed, 3)))
# self.assertTrue(False)

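The siggen-test hunk above swaps the clean_basepath() tests for a table-driven test of build_pnid(). A re-implementation inferred purely from that test table (the real function lives in bb.siggen):

def build_pnid(mc, pn, taskname):
    # With a multiconfig name the task id gains an "mc:<name>:" prefix;
    # otherwise it is just "<pn>:<taskname>".
    if mc:
        return "mc:%s:%s:%s" % (mc, pn, taskname)
    return "%s:%s" % (pn, taskname)

assert build_pnid('', 'helloworld', 'do_sometask') == 'helloworld:do_sometask'
assert build_pnid('XX', 'helloworld', 'do_sometask') == 'mc:XX:helloworld:do_sometask'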
View File

@@ -10,7 +10,6 @@
import logging
import os
import sys
import time
import atexit
import re
from collections import OrderedDict, defaultdict
@@ -449,7 +448,7 @@ class Tinfoil:
self.run_actions(config_params)
self.recipes_parsed = True
def run_command(self, command, *params, handle_events=True):
def run_command(self, command, *params):
"""
Run a command on the server (as implemented in bb.command).
Note that there are two types of command - synchronous and
@@ -469,7 +468,7 @@ class Tinfoil:
try:
result = self.server_connection.connection.runCommand(commandline)
finally:
while handle_events:
while True:
event = self.wait_event()
if not event:
break
@@ -494,7 +493,7 @@ class Tinfoil:
Wait for an event from the server for the specified time.
A timeout of 0 means don't wait if there are no events in the queue.
Returns the next event in the queue or None if the timeout was
reached. Note that in order to receive any events you will
reached. Note that in order to recieve any events you will
first need to set the internal event mask using set_event_mask()
(otherwise whatever event mask the UI set up will be in effect).
"""
@@ -730,7 +729,6 @@ class Tinfoil:
ret = self.run_command('buildTargets', targets, task)
if handle_events:
lastevent = time.time()
result = False
# Borrowed from knotty, instead somewhat hackily we use the helper
# as the object to store "shutdown" on
@@ -743,7 +741,6 @@ class Tinfoil:
try:
event = self.wait_event(0.25)
if event:
lastevent = time.time()
if event_callback and event_callback(event):
continue
if helper.eventHandler(event):
@@ -764,7 +761,7 @@ class Tinfoil:
if parseprogress:
parseprogress.update(event.progress)
else:
bb.warn("Got ProcessProgress event for something that never started?")
bb.warn("Got ProcessProgress event for someting that never started?")
continue
if isinstance(event, bb.event.ProcessFinished):
if self.quiet > 1:
@@ -776,7 +773,7 @@ class Tinfoil:
if isinstance(event, bb.command.CommandCompleted):
result = True
break
if isinstance(event, (bb.command.CommandFailed, bb.command.CommandExit)):
if isinstance(event, bb.command.CommandFailed):
self.logger.error(str(event))
result = False
break
@@ -788,13 +785,10 @@ class Tinfoil:
self.logger.error(str(event))
result = False
break
elif helper.shutdown > 1:
break
termfilter.updateFooter()
if time.time() > (lastevent + (3*60)):
if not self.run_command('ping', handle_events=False):
print("\nUnable to ping server and no events, closing down...\n")
return False
except KeyboardInterrupt:
termfilter.clearFooter()
if helper.shutdown == 1:

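One side of the tinfoil.py hunks above gives run_command() a handle_events parameter and makes build_targets() ping the server after roughly three minutes without events, so a dead server ends the build instead of hanging it. A sketch of that loop, with hypothetical callables standing in for the Tinfoil methods:

import time

def pump_events(wait_event, run_command, is_done, idle_timeout=3 * 60):
    # Remember when the last event arrived; after idle_timeout seconds
    # of silence, ping with handle_events=False so the ping itself
    # does not recurse into event handling.
    lastevent = time.time()
    while True:
        event = wait_event(0.25)
        if event:
            lastevent = time.time()
            if is_done(event):
                return True
            continue
        if time.time() > lastevent + idle_timeout:
            if not run_command('ping', handle_events=False):
                print("\nUnable to ping server and no events, closing down...\n")
                return False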
View File

@@ -45,7 +45,7 @@ from pprint import pformat
import logging
from datetime import datetime, timedelta
from django.db import transaction
from django.db import transaction, connection
# pylint: disable=invalid-name
@@ -227,12 +227,6 @@ class ORMWrapper(object):
build.completed_on = timezone.now()
build.outcome = outcome
build.save()
# We force a sync point here to force the outcome status commit,
# which resolves a race condition with the build completion takedown
transaction.set_autocommit(True)
transaction.set_autocommit(False)
signal_runbuilds()
def update_target_set_license_manifest(self, target, license_manifest_path):
@@ -489,14 +483,14 @@ class ORMWrapper(object):
# we already created the root directory, so ignore any
# entry for it
if not path:
if len(path) == 0:
continue
parent_path = "/".join(path.split("/")[:len(path.split("/")) - 1])
if not parent_path:
if len(parent_path) == 0:
parent_path = "/"
parent_obj = self._cached_get(Target_File, target = target_obj, path = parent_path, inodetype = Target_File.ITYPE_DIRECTORY)
Target_File.objects.create(
tf_obj = Target_File.objects.create(
target = target_obj,
path = path,
size = size,
@@ -561,7 +555,7 @@ class ORMWrapper(object):
parent_obj = Target_File.objects.get(target = target_obj, path = parent_path, inodetype = Target_File.ITYPE_DIRECTORY)
Target_File.objects.create(
tf_obj = Target_File.objects.create(
target = target_obj,
path = path,
size = size,
@@ -577,7 +571,7 @@ class ORMWrapper(object):
assert isinstance(build_obj, Build)
assert isinstance(target_obj, Target)
errormsg = []
errormsg = ""
for p in packagedict:
# Search name swtiches round the installed name vs package name
# by default installed name == package name
@@ -639,10 +633,10 @@ class ORMWrapper(object):
packagefile_objects.append(Package_File( package = packagedict[p]['object'],
path = targetpath,
size = targetfilesize))
if packagefile_objects:
if len(packagefile_objects):
Package_File.objects.bulk_create(packagefile_objects)
except KeyError as e:
errormsg.append(" stpi: Key error, package %s key %s \n" % (p, e))
errormsg += " stpi: Key error, package %s key %s \n" % ( p, e )
# save disk installed size
packagedict[p]['object'].installed_size = packagedict[p]['size']
@@ -679,13 +673,13 @@ class ORMWrapper(object):
logger.warning("Could not add dependency to the package %s "
"because %s is an unknown package", p, px)
if packagedeps_objs:
if len(packagedeps_objs) > 0:
Package_Dependency.objects.bulk_create(packagedeps_objs)
else:
logger.info("No package dependencies created")
if errormsg:
logger.warning("buildinfohelper: target_package_info could not identify recipes: \n%s", "".join(errormsg))
if len(errormsg) > 0:
logger.warning("buildinfohelper: target_package_info could not identify recipes: \n%s", errormsg)
def save_target_image_file_information(self, target_obj, file_name, file_size):
Target_Image_File.objects.create(target=target_obj,
@@ -773,7 +767,7 @@ class ORMWrapper(object):
packagefile_objects.append(Package_File( package = bp_object,
path = path,
size = package_info['FILES_INFO'][path] ))
if packagefile_objects:
if len(packagefile_objects):
Package_File.objects.bulk_create(packagefile_objects)
def _po_byname(p):
@@ -815,7 +809,7 @@ class ORMWrapper(object):
packagedeps_objs.append(Package_Dependency( package = bp_object,
depends_on = _po_byname(p), dep_type = Package_Dependency.TYPE_RCONFLICTS))
if packagedeps_objs:
if len(packagedeps_objs) > 0:
Package_Dependency.objects.bulk_create(packagedeps_objs)
return bp_object
@@ -832,7 +826,7 @@ class ORMWrapper(object):
desc = vardump[root_var]['doc']
if desc is None:
desc = ''
if desc:
if len(desc):
HelpText.objects.get_or_create(build=build_obj,
area=HelpText.VARIABLE,
key=k, text=desc)
@@ -852,7 +846,7 @@ class ORMWrapper(object):
file_name = vh['file'],
line_number = vh['line'],
operation = vh['op']))
if varhist_objects:
if len(varhist_objects):
VariableHistory.objects.bulk_create(varhist_objects)
@@ -899,6 +893,9 @@ class BuildInfoHelper(object):
self.task_order = 0
self.autocommit_step = 1
self.server = server
# we use manual transactions if the database doesn't autocommit on us
if not connection.features.autocommits_when_autocommit_is_off:
transaction.set_autocommit(False)
self.orm_wrapper = ORMWrapper()
self.has_build_history = has_build_history
self.tmp_dir = self.server.runCommand(["getVariable", "TMPDIR"])[0]
@@ -1062,6 +1059,27 @@ class BuildInfoHelper(object):
return recipe_info
def _get_path_information(self, task_object):
self._ensure_build()
assert isinstance(task_object, Task)
build_stats_format = "{tmpdir}/buildstats/{buildname}/{package}/"
build_stats_path = []
for t in self.internal_state['targets']:
buildname = self.internal_state['build'].build_name
pe, pv = task_object.recipe.version.split(":",1)
if len(pe) > 0:
package = task_object.recipe.name + "-" + pe + "_" + pv
else:
package = task_object.recipe.name + "-" + pv
build_stats_path.append(build_stats_format.format(tmpdir=self.tmp_dir,
buildname=buildname,
package=package))
return build_stats_path
################################
## external available methods to store information
@@ -1295,11 +1313,12 @@ class BuildInfoHelper(object):
task_information['outcome'] = Task.OUTCOME_FAILED
del self.internal_state['taskdata'][identifier]
# we force a sync point here, to get the progress bar to show
if self.autocommit_step % 3 == 0:
transaction.set_autocommit(True)
transaction.set_autocommit(False)
self.autocommit_step += 1
if not connection.features.autocommits_when_autocommit_is_off:
# we force a sync point here, to get the progress bar to show
if self.autocommit_step % 3 == 0:
transaction.set_autocommit(True)
transaction.set_autocommit(False)
self.autocommit_step += 1
self.orm_wrapper.get_update_task_object(task_information, True) # must exist
@@ -1385,7 +1404,7 @@ class BuildInfoHelper(object):
assert 'pn' in event._depgraph
assert 'tdepends' in event._depgraph
errormsg = []
errormsg = ""
# save layer version priorities
if 'layer-priorities' in event._depgraph.keys():
@@ -1477,7 +1496,7 @@ class BuildInfoHelper(object):
elif dep in self.internal_state['recipes']:
dependency = self.internal_state['recipes'][dep]
else:
errormsg.append(" stpd: KeyError saving recipe dependency for %s, %s \n" % (recipe, dep))
errormsg += " stpd: KeyError saving recipe dependency for %s, %s \n" % (recipe, dep)
continue
recipe_dep = Recipe_Dependency(recipe=target,
depends_on=dependency,
@@ -1518,8 +1537,8 @@ class BuildInfoHelper(object):
taskdeps_objects.append(Task_Dependency( task = target, depends_on = dep ))
Task_Dependency.objects.bulk_create(taskdeps_objects)
if errormsg:
logger.warning("buildinfohelper: dependency info not identify recipes: \n%s", "".join(errormsg))
if len(errormsg) > 0:
logger.warning("buildinfohelper: dependency info not identify recipes: \n%s", errormsg)
def store_build_package_information(self, event):
@@ -1599,7 +1618,7 @@ class BuildInfoHelper(object):
if 'backlog' in self.internal_state:
# if we have a backlog of events, do our best to save them here
if self.internal_state['backlog']:
if len(self.internal_state['backlog']):
tempevent = self.internal_state['backlog'].pop()
logger.debug("buildinfohelper: Saving stored event %s "
% tempevent)
@@ -1746,6 +1765,7 @@ class BuildInfoHelper(object):
buildname = self.server.runCommand(['getVariable', 'BUILDNAME'])[0]
machine = self.server.runCommand(['getVariable', 'MACHINE'])[0]
image_name = self.server.runCommand(['getVariable', 'IMAGE_NAME'])[0]
# location of the manifest files for this build;
# note that this file is only produced if an image is produced
@@ -1766,18 +1786,6 @@ class BuildInfoHelper(object):
# filter out anything which isn't an image target
image_targets = [target for target in targets if target.is_image]
if len(image_targets) > 0:
#if there are image targets retrieve image_name
image_name = self.server.runCommand(['getVariable', 'IMAGE_NAME'])[0]
if not image_name:
#When build target is an image and image_name is not found as an environment variable
logger.info("IMAGE_NAME not found, extracting from bitbake command")
cmd = self.server.runCommand(['getVariable','BB_CMDLINE'])[0]
#filter out tokens that are command line options
cmd = [token for token in cmd if not token.startswith('-')]
image_name = cmd[1].split(':', 1)[0] # remove everything after : in image name
logger.info("IMAGE_NAME found as : %s " % image_name)
for image_target in image_targets:
# this is set to True if we find at least one file relating to
# this target; if this remains False after the scan, we copy the
@@ -1982,6 +1990,8 @@ class BuildInfoHelper(object):
# Do not skip command line build events
self.store_log_event(tempevent,False)
if not connection.features.autocommits_when_autocommit_is_off:
transaction.set_autocommit(True)
# unset the brbe; this is to prevent subsequent command-line builds
# being incorrectly attached to the previous Toaster-triggered build;

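Among the buildinfohelper.py changes above, errormsg switches between string concatenation and a list that is joined once when logged. A tiny illustration of the list idiom, with invented package names:

# Accumulate messages in a list and join once at the end, so the
# "were there errors?" test is a plain truth check rather than a
# length comparison, and no intermediate strings are built.
errormsg = []
for p, e in [("pkg-a", "size"), ("pkg-b", "path")]:
    errormsg.append(" stpi: Key error, package %s key %s \n" % (p, e))
if errormsg:
    print("could not identify recipes: \n%s" % "".join(errormsg))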
View File

@@ -25,7 +25,7 @@ from itertools import groupby
from bb.ui import uihelper
featureSet = [bb.cooker.CookerFeatures.SEND_SANITYEVENTS, bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING]
featureSet = [bb.cooker.CookerFeatures.SEND_SANITYEVENTS]
logger = logging.getLogger("BitBake")
interactive = sys.stdout.isatty()
@@ -228,9 +228,7 @@ class TerminalFilter(object):
def keepAlive(self, t):
if not self.cuu:
print("Bitbake still alive (no events for %ds). Active tasks:" % t)
for t in self.helper.running_tasks:
print(t)
print("Bitbake still alive (%ds)" % t)
sys.stdout.flush()
def updateFooter(self):
@@ -252,68 +250,58 @@ class TerminalFilter(object):
return
tasks = []
for t in runningpids:
start_time = activetasks[t].get("starttime", None)
if start_time:
msg = "%s - %s (pid %s)" % (activetasks[t]["title"], self.elapsed(currenttime - start_time), activetasks[t]["pid"])
else:
msg = "%s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"])
progress = activetasks[t].get("progress", None)
if progress is not None:
pbar = activetasks[t].get("progressbar", None)
rate = activetasks[t].get("rate", None)
start_time = activetasks[t].get("starttime", None)
if not pbar or pbar.bouncing != (progress < 0):
if progress < 0:
pbar = BBProgress("0: %s" % msg, 100, widgets=[' ', progressbar.BouncingSlider(), ''], extrapos=3, resize_handler=self.sigwinch_handle)
pbar = BBProgress("0: %s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]), 100, widgets=[' ', progressbar.BouncingSlider(), ''], extrapos=3, resize_handler=self.sigwinch_handle)
pbar.bouncing = True
else:
pbar = BBProgress("0: %s" % msg, 100, widgets=[' ', progressbar.Percentage(), ' ', progressbar.Bar(), ''], extrapos=5, resize_handler=self.sigwinch_handle)
pbar = BBProgress("0: %s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]), 100, widgets=[' ', progressbar.Percentage(), ' ', progressbar.Bar(), ''], extrapos=5, resize_handler=self.sigwinch_handle)
pbar.bouncing = False
activetasks[t]["progressbar"] = pbar
tasks.append((pbar, msg, progress, rate, start_time))
tasks.append((pbar, progress, rate, start_time))
else:
tasks.append(msg)
start_time = activetasks[t].get("starttime", None)
if start_time:
tasks.append("%s - %s (pid %s)" % (activetasks[t]["title"], self.elapsed(currenttime - start_time), activetasks[t]["pid"]))
else:
tasks.append("%s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]))
if self.main.shutdown:
content = pluralise("Waiting for %s running task to finish",
"Waiting for %s running tasks to finish", len(activetasks))
if not self.quiet:
content += ':'
content = "Waiting for %s running tasks to finish:" % len(activetasks)
print(content)
else:
scene_tasks = "%s of %s" % (self.helper.setscene_current, self.helper.setscene_total)
cur_tasks = "%s of %s" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
content = ''
if not self.quiet:
msg = "Setscene tasks: %s" % scene_tasks
content += msg + "\n"
print(msg)
if self.quiet:
msg = "Running tasks (%s, %s)" % (scene_tasks, cur_tasks)
content = "Running tasks (%s of %s/%s of %s)" % (self.helper.setscene_current, self.helper.setscene_total, self.helper.tasknumber_current, self.helper.tasknumber_total)
elif not len(activetasks):
msg = "No currently running tasks (%s)" % cur_tasks
content = "No currently running tasks (%s of %s/%s of %s)" % (self.helper.setscene_current, self.helper.setscene_total, self.helper.tasknumber_current, self.helper.tasknumber_total)
else:
msg = "Currently %2s running tasks (%s)" % (len(activetasks), cur_tasks)
content = "Currently %2s running tasks (%s of %s/%s of %s)" % (len(activetasks), self.helper.setscene_current, self.helper.setscene_total, self.helper.tasknumber_current, self.helper.tasknumber_total)
maxtask = self.helper.tasknumber_total
if not self.main_progress or self.main_progress.maxval != maxtask:
widgets = [' ', progressbar.Percentage(), ' ', progressbar.Bar()]
self.main_progress = BBProgress("Running tasks", maxtask, widgets=widgets, resize_handler=self.sigwinch_handle)
self.main_progress.start(False)
self.main_progress.setmessage(msg)
progress = max(0, self.helper.tasknumber_current - 1)
content += self.main_progress.update(progress)
self.main_progress.setmessage(content)
progress = self.helper.tasknumber_current - 1
if progress < 0:
progress = 0
content = self.main_progress.update(progress)
print('')
lines = self.getlines(content)
if not self.quiet:
for tasknum, task in enumerate(tasks[:(self.rows - 1 - lines)]):
lines = 1 + int(len(content) / (self.columns + 1))
if self.quiet == 0:
for tasknum, task in enumerate(tasks[:(self.rows - 2)]):
if isinstance(task, tuple):
pbar, msg, progress, rate, start_time = task
pbar, progress, rate, start_time = task
if not pbar.start_time:
pbar.start(False)
if start_time:
pbar.start_time = start_time
pbar.setmessage('%s: %s' % (tasknum, msg))
pbar.setmessage('%s:%s' % (tasknum, pbar.msg.split(':', 1)[1]))
pbar.setextra(rate)
if progress > -1:
content = pbar.update(progress)
@@ -323,17 +311,11 @@ class TerminalFilter(object):
else:
content = "%s: %s" % (tasknum, task)
print(content)
lines = lines + self.getlines(content)
lines = lines + 1 + int(len(content) / (self.columns + 1))
self.footer_present = lines
self.lastpids = runningpids[:]
self.lastcount = self.helper.tasknumber_current
def getlines(self, content):
lines = 0
for line in content.split("\n"):
lines = lines + 1 + int(len(line) / (self.columns + 1))
return lines
def finish(self):
if self.stdinbackup:
fd = sys.stdin.fileno()
@@ -623,40 +605,26 @@ def main(server, eventHandler, params, tf = TerminalFilter):
warnings = 0
taskfailures = []
printintervaldelta = 10 * 60 # 10 minutes
printinterval = printintervaldelta
pinginterval = 1 * 60 # 1 minute
lastevent = lastprint = time.time()
printinterval = 5000
lastprint = time.time()
termfilter = tf(main, helper, console_handlers, params.options.quiet)
atexit.register(termfilter.finish)
# shutdown levels
# 0 - normal operation
# 1 - no new task execution, let current running tasks finish
# 2 - interrupting currently executing tasks
# 3 - we're done, exit
while main.shutdown < 3:
while True:
try:
if (lastprint + printinterval) <= time.time():
termfilter.keepAlive(printinterval)
printinterval += printintervaldelta
printinterval += 5000
event = eventHandler.waitEvent(0)
if event is None:
if (lastevent + pinginterval) <= time.time():
ret, error = server.runCommand(["ping"])
if error or not ret:
termfilter.clearFooter()
print("No reply after pinging server (%s, %s), exiting." % (str(error), str(ret)))
return_value = 3
main.shutdown = 3
lastevent = time.time()
if main.shutdown > 1:
break
if not parseprogress:
termfilter.updateFooter()
event = eventHandler.waitEvent(0.25)
if event is None:
continue
lastevent = time.time()
helper.eventHandler(event)
if isinstance(event, bb.runqueue.runQueueExitWait):
if not main.shutdown:
@@ -678,8 +646,8 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if isinstance(event, logging.LogRecord):
lastprint = time.time()
printinterval = printintervaldelta
if event.levelno >= bb.msg.BBLogFormatter.ERRORONCE:
printinterval = 5000
if event.levelno >= bb.msg.BBLogFormatter.ERROR:
errors = errors + 1
return_value = 1
elif event.levelno == bb.msg.BBLogFormatter.WARNING:
@@ -693,10 +661,10 @@ def main(server, eventHandler, params, tf = TerminalFilter):
continue
# Prefix task messages with recipe/task
if event.taskpid in helper.pidmap and event.levelno not in [bb.msg.BBLogFormatter.PLAIN, bb.msg.BBLogFormatter.WARNONCE, bb.msg.BBLogFormatter.ERRORONCE]:
if event.taskpid in helper.pidmap and event.levelno != bb.msg.BBLogFormatter.PLAIN:
taskinfo = helper.running_tasks[helper.pidmap[event.taskpid]]
event.msg = taskinfo['title'] + ': ' + event.msg
if hasattr(event, 'fn') and event.levelno not in [bb.msg.BBLogFormatter.WARNONCE, bb.msg.BBLogFormatter.ERRORONCE]:
if hasattr(event, 'fn'):
event.msg = event.fn + ': ' + event.msg
logging.getLogger(event.name).handle(event)
continue
@@ -761,15 +729,15 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if event.error:
errors = errors + 1
logger.error(str(event))
main.shutdown = 3
main.shutdown = 2
continue
if isinstance(event, bb.command.CommandExit):
if not return_value:
return_value = event.exitcode
main.shutdown = 3
main.shutdown = 2
continue
if isinstance(event, (bb.command.CommandCompleted, bb.cooker.CookerExit)):
main.shutdown = 3
main.shutdown = 2
continue
if isinstance(event, bb.event.MultipleProviders):
logger.info(str(event))
@@ -890,6 +858,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
state_force_shutdown()
main.shutdown = main.shutdown + 1
pass
except Exception as e:
import traceback
sys.stderr.write(traceback.format_exc())
@@ -906,11 +875,11 @@ def main(server, eventHandler, params, tf = TerminalFilter):
for failure in taskfailures:
summary += "\n %s" % failure
if warnings:
summary += pluralise("\nSummary: There was %s WARNING message.",
"\nSummary: There were %s WARNING messages.", warnings)
summary += pluralise("\nSummary: There was %s WARNING message shown.",
"\nSummary: There were %s WARNING messages shown.", warnings)
if return_value and errors:
summary += pluralise("\nSummary: There was %s ERROR message, returning a non-zero exit code.",
"\nSummary: There were %s ERROR messages, returning a non-zero exit code.", errors)
summary += pluralise("\nSummary: There was %s ERROR message shown, returning a non-zero exit code.",
"\nSummary: There were %s ERROR messages shown, returning a non-zero exit code.", errors)
if summary and params.options.quiet == 0:
print(summary)

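One side of the knotty.py hunks above factors the repeated footer-height arithmetic into a getlines() helper. A standalone copy of that calculation, with the terminal width passed in instead of read from self.columns:

def getlines(content, columns):
    # Each explicit line costs one row, plus one extra row every time
    # it wraps past the terminal width.
    lines = 0
    for line in content.split("\n"):
        lines = lines + 1 + int(len(line) / (columns + 1))
    return lines

assert getlines("abc", 80) == 1
assert getlines("a\nb", 80) == 2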
View File

@@ -177,7 +177,7 @@ class gtkthread(threading.Thread):
quit = threading.Event()
def __init__(self, shutdown):
threading.Thread.__init__(self)
self.daemon = True
self.setDaemon(True)
self.shutdown = shutdown
if not Gtk.init_check()[0]:
sys.stderr.write("Gtk+ init failed. Make sure DISPLAY variable is set.\n")

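The one-line change above swaps Thread.setDaemon(True), deprecated since Python 3.10, for a plain attribute assignment; both mark the thread as a daemon so it cannot block interpreter exit:

import threading

t = threading.Thread(target=lambda: None)
t.daemon = True   # preferred spelling; setDaemon() is deprecated
t.start()
t.join()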
View File

@@ -44,7 +44,7 @@ class BBUIEventQueue:
for count_tries in range(5):
ret = self.BBServer.registerEventHandler(self.host, self.port)
if isinstance(ret, collections.abc.Iterable):
if isinstance(ret, collections.Iterable):
self.EventHandle, error = ret
else:
self.EventHandle = ret
@@ -65,27 +65,35 @@ class BBUIEventQueue:
self.server = server
self.t = threading.Thread()
self.t.daemon = True
self.t.setDaemon(True)
self.t.run = self.startCallbackHandler
self.t.start()
def getEvent(self):
with bb.utils.lock_timeout(self.eventQueueLock):
if not self.eventQueue:
return None
item = self.eventQueue.pop(0)
if not self.eventQueue:
self.eventQueueNotify.clear()
return item
self.eventQueueLock.acquire()
if len(self.eventQueue) == 0:
self.eventQueueLock.release()
return None
item = self.eventQueue.pop(0)
if len(self.eventQueue) == 0:
self.eventQueueNotify.clear()
self.eventQueueLock.release()
return item
def waitEvent(self, delay):
self.eventQueueNotify.wait(delay)
return self.getEvent()
def queue_event(self, event):
with bb.utils.lock_timeout(self.eventQueueLock):
self.eventQueue.append(event)
self.eventQueueNotify.set()
self.eventQueueLock.acquire()
self.eventQueue.append(event)
self.eventQueueNotify.set()
self.eventQueueLock.release()
def send_event(self, event):
self.queue_event(pickle.loads(event))

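The uievent.py hunks above contrast manual acquire()/release() pairs with the bb.utils.lock_timeout() context manager, which guarantees the queue lock is released on every exit path. A simplified stand-in (the real helper, shown later in the utils.py diff, calls os._exit(1) rather than raising when the timeout expires):

import threading
from contextlib import contextmanager

@contextmanager
def lock_timeout(lock, timeout=5 * 60):
    # Acquire with a timeout so a lock held by a dead thread cannot
    # hang us forever; release unconditionally on exit.
    if not lock.acquire(timeout=timeout):
        raise RuntimeError("lock not acquired within timeout")
    try:
        yield
    finally:
        lock.release()

event_queue, queue_lock = [], threading.Lock()
with lock_timeout(queue_lock):
    event_queue.append("event")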
View File

@@ -13,12 +13,10 @@ import errno
import logging
import bb
import bb.msg
import locale
import multiprocessing
import fcntl
import importlib
import importlib.machinery
import importlib.util
from importlib import machinery
import itertools
import subprocess
import glob
@@ -28,11 +26,6 @@ import errno
import signal
import collections
import copy
import ctypes
import random
import socket
import struct
import tempfile
from subprocess import getstatusoutput
from contextlib import contextmanager
from ctypes import cdll
@@ -258,7 +251,7 @@ def explode_dep_versions(s):
"""
Take an RDEPENDS style string of format:
"DEPEND1 (optional version) DEPEND2 (optional version) ..."
skip null value and items appeared in dependency string multiple times
skip null value and items appeared in dependancy string multiple times
and return a dictionary of dependencies and versions.
"""
r = explode_dep_versions2(s)
@@ -386,7 +379,7 @@ def _print_exception(t, value, tb, realfile, text, context):
error.append("Exception: %s" % ''.join(exception))
# If the exception is from spawning a task, let's be helpful and display
# If the exception is from spwaning a task, let's be helpful and display
# the output (which hopefully includes stderr).
if isinstance(value, subprocess.CalledProcessError) and value.output:
error.append("Subprocess output:")
@@ -407,7 +400,7 @@ def better_exec(code, context, text = None, realfile = "<code>", pythonexception
code = better_compile(code, realfile, realfile)
try:
exec(code, get_context(), context)
except (bb.BBHandledException, bb.parse.SkipRecipe, bb.data_smart.ExpansionError, bb.process.ExecutionError):
except (bb.BBHandledException, bb.parse.SkipRecipe, bb.data_smart.ExpansionError):
# Error already shown so passthrough, no need for traceback
raise
except Exception as e:
@@ -434,14 +427,12 @@ def better_eval(source, locals, extraglobals = None):
return eval(source, ctx, locals)
@contextmanager
def fileslocked(files, *args, **kwargs):
def fileslocked(files):
"""Context manager for locking and unlocking file locks."""
locks = []
if files:
for lockfile in files:
l = bb.utils.lockfile(lockfile, *args, **kwargs)
if l is not None:
locks.append(l)
locks.append(bb.utils.lockfile(lockfile))
try:
yield
@@ -460,16 +451,13 @@ def lockfile(name, shared=False, retry=True, block=False):
consider the possibility of sending a signal to the process to break
out - at which point you want block=True rather than retry=True.
"""
basename = os.path.basename(name)
if len(basename) > 255:
root, ext = os.path.splitext(basename)
basename = root[:255 - len(ext)] + ext
if len(name) > 255:
root, ext = os.path.splitext(name)
name = root[:255 - len(ext)] + ext
dirname = os.path.dirname(name)
mkdirhier(dirname)
name = os.path.join(dirname, basename)
if not os.access(dirname, os.W_OK):
logger.error("Unable to acquire lock '%s', directory is not writable",
name)
@@ -548,12 +536,7 @@ def md5_file(filename):
Return the hex string representation of the MD5 checksum of filename.
"""
import hashlib
try:
sig = hashlib.new('MD5', usedforsecurity=False)
except TypeError:
# Some configurations don't appear to support two arguments
sig = hashlib.new('MD5')
return _hasher(sig, filename)
return _hasher(hashlib.md5(), filename)
def sha256_file(filename):
"""
@@ -604,26 +587,11 @@ def preserved_envvars():
v = [
'BBPATH',
'BB_PRESERVE_ENV',
'BB_ENV_PASSTHROUGH',
'BB_ENV_PASSTHROUGH_ADDITIONS',
'BB_ENV_WHITELIST',
'BB_ENV_EXTRAWHITE',
]
return v + preserved_envvars_exported()
def check_system_locale():
"""Make sure the required system locale are available and configured"""
default_locale = locale.getlocale(locale.LC_CTYPE)
try:
locale.setlocale(locale.LC_CTYPE, ("en_US", "UTF-8"))
except:
sys.exit("Please make sure locale 'en_US.UTF-8' is available on your system")
else:
locale.setlocale(locale.LC_CTYPE, default_locale)
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\n"
"Python can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
def filter_environment(good_vars):
"""
Create a pristine environment for bitbake. This will remove variables that
@@ -651,21 +619,21 @@ def filter_environment(good_vars):
def approved_variables():
"""
Determine and return the list of variables which are approved
Determine and return the list of whitelisted variables which are approved
to remain in the environment.
"""
if 'BB_PRESERVE_ENV' in os.environ:
return os.environ.keys()
approved = []
if 'BB_ENV_PASSTHROUGH' in os.environ:
approved = os.environ['BB_ENV_PASSTHROUGH'].split()
approved.extend(['BB_ENV_PASSTHROUGH'])
if 'BB_ENV_WHITELIST' in os.environ:
approved = os.environ['BB_ENV_WHITELIST'].split()
approved.extend(['BB_ENV_WHITELIST'])
else:
approved = preserved_envvars()
if 'BB_ENV_PASSTHROUGH_ADDITIONS' in os.environ:
approved.extend(os.environ['BB_ENV_PASSTHROUGH_ADDITIONS'].split())
if 'BB_ENV_PASSTHROUGH_ADDITIONS' not in approved:
approved.extend(['BB_ENV_PASSTHROUGH_ADDITIONS'])
if 'BB_ENV_EXTRAWHITE' in os.environ:
approved.extend(os.environ['BB_ENV_EXTRAWHITE'].split())
if 'BB_ENV_EXTRAWHITE' not in approved:
approved.extend(['BB_ENV_EXTRAWHITE'])
return approved
def clean_environment():
@@ -719,8 +687,8 @@ def remove(path, recurse=False, ionice=False):
return
if recurse:
for name in glob.glob(path):
if _check_unsafe_delete_path(name):
raise Exception('bb.utils.remove: called with dangerous path "%s" and recurse=True, refusing to delete!' % name)
if _check_unsafe_delete_path(path):
raise Exception('bb.utils.remove: called with dangerous path "%s" and recurse=True, refusing to delete!' % path)
# shutil.rmtree(name) would be ideal but its too slow
cmd = []
if ionice:
@@ -778,7 +746,7 @@ def movefile(src, dest, newmtime = None, sstat = None):
if not sstat:
sstat = os.lstat(src)
except Exception as e:
logger.warning("movefile: Stating source file failed...", e)
print("movefile: Stating source file failed...", e)
return None
destexists = 1
@@ -806,7 +774,7 @@ def movefile(src, dest, newmtime = None, sstat = None):
os.unlink(src)
return os.lstat(dest)
except Exception as e:
logger.warning("movefile: failed to properly create symlink:", dest, "->", target, e)
print("movefile: failed to properly create symlink:", dest, "->", target, e)
return None
renamefailed = 1
@@ -823,7 +791,7 @@ def movefile(src, dest, newmtime = None, sstat = None):
except Exception as e:
if e.errno != errno.EXDEV:
# Some random error.
logger.warning("movefile: Failed to move", src, "to", dest, e)
print("movefile: Failed to move", src, "to", dest, e)
return None
# Invalid cross-device-link 'bind' mounted or actually Cross-Device
@@ -835,13 +803,13 @@ def movefile(src, dest, newmtime = None, sstat = None):
bb.utils.rename(destpath + "#new", destpath)
didcopy = 1
except Exception as e:
logger.warning('movefile: copy', src, '->', dest, 'failed.', e)
print('movefile: copy', src, '->', dest, 'failed.', e)
return None
else:
#we don't yet handle special, so we need to fall back to /bin/mv
a = getstatusoutput("/bin/mv -f " + "'" + src + "' '" + dest + "'")
if a[0] != 0:
logger.warning("movefile: Failed to move special file:" + src + "' to '" + dest + "'", a)
print("movefile: Failed to move special file:" + src + "' to '" + dest + "'", a)
return None # failure
try:
if didcopy:
@@ -849,7 +817,7 @@ def movefile(src, dest, newmtime = None, sstat = None):
os.chmod(destpath, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
os.unlink(src)
except Exception as e:
logger.warning("movefile: Failed to chown/chmod/unlink", dest, e)
print("movefile: Failed to chown/chmod/unlink", dest, e)
return None
if newmtime:
@@ -1008,9 +976,6 @@ def to_boolean(string, default=None):
if not string:
return default
if isinstance(string, int):
return string != 0
normalized = string.lower()
if normalized in ("y", "yes", "1", "true"):
return True
@@ -1629,89 +1594,33 @@ def set_process_name(name):
except:
pass
def enable_loopback_networking():
# From bits/ioctls.h
SIOCGIFFLAGS = 0x8913
SIOCSIFFLAGS = 0x8914
SIOCSIFADDR = 0x8916
SIOCSIFNETMASK = 0x891C
# if.h
IFF_UP = 0x1
IFF_RUNNING = 0x40
# bits/socket.h
AF_INET = 2
# char ifr_name[IFNAMSIZ=16]
ifr_name = struct.pack("@16s", b"lo")
def netdev_req(fd, req, data = b""):
# Pad and add interface name
data = ifr_name + data + (b'\x00' * (16 - len(data)))
# Return all data after interface name
return fcntl.ioctl(fd, req, data)[16:]
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_IP) as sock:
fd = sock.fileno()
# struct sockaddr_in ifr_addr { unsigned short family; uint16_t sin_port ; uint32_t in_addr; }
req = struct.pack("@H", AF_INET) + struct.pack("=H4B", 0, 127, 0, 0, 1)
netdev_req(fd, SIOCSIFADDR, req)
# short ifr_flags
flags = struct.unpack_from('@h', netdev_req(fd, SIOCGIFFLAGS))[0]
flags |= IFF_UP | IFF_RUNNING
netdev_req(fd, SIOCSIFFLAGS, struct.pack('@h', flags))
# struct sockaddr_in ifr_netmask
req = struct.pack("@H", AF_INET) + struct.pack("=H4B", 0, 255, 0, 0, 0)
netdev_req(fd, SIOCSIFNETMASK, req)
def disable_network(uid=None, gid=None):
"""
Disable networking in the current process if the kernel supports it, else
just return after logging to debug. To do this we need to create a new user
namespace, then map back to the original uid/gid.
"""
libc = ctypes.CDLL('libc.so.6')
# From sched.h
# New user namespace
CLONE_NEWUSER = 0x10000000
# New network namespace
CLONE_NEWNET = 0x40000000
if uid is None:
uid = os.getuid()
if gid is None:
gid = os.getgid()
ret = libc.unshare(CLONE_NEWNET | CLONE_NEWUSER)
if ret != 0:
logger.debug("System doesn't support disabling network without admin privs")
return
with open("/proc/self/uid_map", "w") as f:
f.write("%s %s 1" % (uid, uid))
with open("/proc/self/setgroups", "w") as f:
f.write("deny")
with open("/proc/self/gid_map", "w") as f:
f.write("%s %s 1" % (gid, gid))
def export_proxies(d):
from bb.fetch2 import get_fetcher_environment
""" export common proxies variables from datastore to environment """
newenv = get_fetcher_environment(d)
for v in newenv:
os.environ[v] = newenv[v]
import os
variables = ['http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY',
'ftp_proxy', 'FTP_PROXY', 'no_proxy', 'NO_PROXY',
'GIT_PROXY_COMMAND']
exported = False
for v in variables:
if v in os.environ.keys():
exported = True
else:
v_proxy = d.getVar(v)
if v_proxy is not None:
os.environ[v] = v_proxy
exported = True
return exported
def load_plugins(logger, plugins, pluginpath):
def load_plugin(name):
logger.debug('Loading plugin %s' % name)
spec = importlib.machinery.PathFinder.find_spec(name, path=[pluginpath] )
if spec:
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
return mod
return spec.loader.load_module()
logger.debug('Loading plugins from %s...' % pluginpath)
@@ -1790,53 +1699,5 @@ def environment(**envvars):
for var in envvars:
if var in backup:
os.environ[var] = backup[var]
elif var in os.environ:
else:
del os.environ[var]
def is_local_uid(uid=''):
"""
Check whether uid is a local one or not.
Can't use pwd module since it gets all UIDs, not local ones only.
"""
if not uid:
uid = os.getuid()
with open('/etc/passwd', 'r') as f:
for line in f:
line_split = line.split(':')
if len(line_split) < 3:
continue
if str(uid) == line_split[2]:
return True
return False
def mkstemp(suffix=None, prefix=None, dir=None, text=False):
"""
Generates a unique filename, independent of time.
mkstemp() in glibc (at least) generates unique file names based on the
current system time. When combined with highly parallel builds, and
operating over NFS (e.g. shared sstate/downloads) this can result in
conflicts and race conditions.
This function adds additional entropy to the file name so that a collision
is independent of time and thus extremely unlikely.
"""
entropy = "".join(random.choices("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890", k=20))
if prefix:
prefix = prefix + entropy
else:
prefix = tempfile.gettempprefix() + entropy
return tempfile.mkstemp(suffix=suffix, prefix=prefix, dir=dir, text=text)
# If we don't have a timeout of some kind and a process/thread exits badly (for example
# OOM killed) and held a lock, we'd just hang in the lock futex forever. It is better
# we exit at some point than hang. 5 minutes with no progress means we're probably deadlocked.
@contextmanager
def lock_timeout(lock):
held = lock.acquire(timeout=5*60)
try:
if not held:
os._exit(1)
yield held
finally:
lock.release()

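The utils.py diff above ends with bb.utils.mkstemp(), which works around glibc deriving temporary-file names from the current time: under highly parallel builds over shared NFS that can collide. A condensed copy of the entropy trick:

import random
import string
import tempfile

def mkstemp_entropy(suffix=None, prefix=None, dir=None, text=False):
    # 20 random characters make the candidate name independent of the
    # clock, so a collision no longer depends on two builders calling
    # mkstemp() in the same instant.
    entropy = "".join(random.choices(string.ascii_letters + string.digits, k=20))
    prefix = (prefix or tempfile.gettempprefix()) + entropy
    return tempfile.mkstemp(suffix=suffix, prefix=prefix, dir=dir, text=text)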
View File

@@ -1,126 +0,0 @@
#! /usr/bin/env python3
#
# Copyright 2023 by Garmin Ltd. or its subsidiaries
#
# SPDX-License-Identifier: MIT
import sys
import ctypes
import os
import errno
libc = ctypes.CDLL("libc.so.6", use_errno=True)
fsencoding = sys.getfilesystemencoding()
libc.listxattr.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_size_t]
libc.llistxattr.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_size_t]
def listxattr(path, follow=True):
func = libc.listxattr if follow else libc.llistxattr
os_path = os.fsencode(path)
while True:
length = func(os_path, None, 0)
if length < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err), str(path))
if length == 0:
return []
arr = ctypes.create_string_buffer(length)
read_length = func(os_path, arr, length)
if read_length != length:
# Race!
continue
return [a.decode(fsencoding) for a in arr.raw.split(b"\x00") if a]
libc.getxattr.argtypes = [
ctypes.c_char_p,
ctypes.c_char_p,
ctypes.c_char_p,
ctypes.c_size_t,
]
libc.lgetxattr.argtypes = [
ctypes.c_char_p,
ctypes.c_char_p,
ctypes.c_char_p,
ctypes.c_size_t,
]
def getxattr(path, name, follow=True):
func = libc.getxattr if follow else libc.lgetxattr
os_path = os.fsencode(path)
os_name = os.fsencode(name)
while True:
length = func(os_path, os_name, None, 0)
if length < 0:
err = ctypes.get_errno()
if err == errno.ENODATA:
return None
raise OSError(err, os.strerror(err), str(path))
if length == 0:
return ""
arr = ctypes.create_string_buffer(length)
read_length = func(os_path, os_name, arr, length)
if read_length != length:
# Race!
continue
return arr.raw
def get_all_xattr(path, follow=True):
attrs = {}
names = listxattr(path, follow)
for name in names:
value = getxattr(path, name, follow)
if value is None:
# This can happen if a value is erased after listxattr is called,
# so ignore it
continue
attrs[name] = value
return attrs
def main():
import argparse
from pathlib import Path
parser = argparse.ArgumentParser()
parser.add_argument("path", help="File Path", type=Path)
args = parser.parse_args()
attrs = get_all_xattr(args.path)
for name, value in attrs.items():
try:
value = value.decode(fsencoding)
except UnicodeDecodeError:
pass
print(f"{name} = {value}")
return 0
if __name__ == "__main__":
sys.exit(main())

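The full file above is bitbake's ctypes-based extended-attribute helper; the retry loops handle the race where an attribute changes size between the length query and the actual read. A usage sketch, assuming the module is importable as bb.xattr (its path in the bitbake tree) and a Linux filesystem with xattr support:

import sys

from bb import xattr  # assumed import path; adjust if the file is elsewhere

path = sys.argv[1] if len(sys.argv) > 1 else "."
for name, value in xattr.get_all_xattr(path).items():
    print(f"{name} = {value}")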
View File

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

View File

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -11,7 +9,6 @@ import shutil
import sys
import tempfile
from bb.cookerdata import findTopdir
import bb.utils
from bblayers.common import LayerPlugin
@@ -38,7 +35,7 @@ class ActionPlugin(LayerPlugin):
sys.stderr.write("Specified layer directory %s doesn't contain a conf/layer.conf file\n" % layerdir)
return 1
bblayers_conf = os.path.join(findTopdir(),'conf', 'bblayers.conf')
bblayers_conf = os.path.join('conf', 'bblayers.conf')
if not os.path.exists(bblayers_conf):
sys.stderr.write("Unable to find bblayers.conf\n")
return 1
@@ -56,7 +53,7 @@ class ActionPlugin(LayerPlugin):
except (bb.tinfoil.TinfoilUIException, bb.BBHandledException):
# Restore the back up copy of bblayers.conf
shutil.copy2(backup, bblayers_conf)
bb.fatal("Parse failure with the specified layer added, exiting.")
bb.fatal("Parse failure with the specified layer added, aborting.")
else:
for item in notadded:
sys.stderr.write("Specified layer %s is already in BBLAYERS\n" % item)
@@ -66,7 +63,7 @@ class ActionPlugin(LayerPlugin):
def do_remove_layer(self, args):
"""Remove one or more layers from bblayers.conf."""
bblayers_conf = os.path.join(findTopdir() ,'conf', 'bblayers.conf')
bblayers_conf = os.path.join('conf', 'bblayers.conf')
if not os.path.exists(bblayers_conf):
sys.stderr.write("Unable to find bblayers.conf\n")
return 1

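The bblayers action-plugin hunks above switch the bblayers.conf path between a cwd-relative "conf/bblayers.conf" and one anchored at the build's top directory via bb.cookerdata.findTopdir(). A sketch of the TOPDIR-anchored lookup (the function name is hypothetical):

import os

def locate_bblayers_conf(topdir):
    # Anchor the config at the build's TOPDIR so the command works no
    # matter which directory it is invoked from.
    path = os.path.join(topdir, "conf", "bblayers.conf")
    if not os.path.exists(path):
        return None
    return path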
Some files were not shown because too many files have changed in this diff Show More