Compare commits


94 Commits

Author SHA1 Message Date
Richard Purdie
943ef2fad8 build-appliance-image: Update to gatesgarth head revision
(From OE-Core rev: d11ab9cb77bf91f939035417b757773a5d80242c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 16:25:30 +00:00
Richard Purdie
76dac9d657 build-appliance: Correct branch to gatesgarth
(From OE-Core rev: feb77e322fa13495550b98e3924d24df1560156d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 16:25:24 +00:00
Richard Purdie
333f24caec build-appliance-image: Update to gatesgarth head revision
(From OE-Core rev: e525592e83062ed9a9b2d3cb37c8dbbcfe8759a9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:59:37 +00:00
Anuj Mittal
e5bd9b93b4 poky.conf: bump version for 3.2.1 release
(From meta-yocto rev: be61a726ee0036402c460493df9532714903ea57)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:58:12 +00:00
Anuj Mittal
a4ff9dd2dc releases.rst: add gatesgarth to current releases
(From yocto-docs rev: b9d69c76561eb6708cd217126a5ed08b52315fa5)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:58:12 +00:00
Nicolas Dechesne
2d3224bf20 sphinx: releases: add link to 3.1.3
(From yocto-docs rev: 5e422dc364800d67ef5ee632b5c787265afd75f8)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bf395d0af044f6e9826a8235b760b2d285602b26)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:58:12 +00:00
Anuj Mittal
e6f6420d98 documentation: prepare for 3.2.1 release
Bump the current version to 3.2.1

(From yocto-docs rev: 1e46d6ffd3a193c24ddc07aaaad6f4769d12cc45)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:58:12 +00:00
He Zhe
f0b8b3a960 lttng-modules: Backport a patch to fix btrfs build failure
lttng-modules-2.12.3/probes/lttng-probe-btrfs.c:36:
lttng-modules-2.12.3/probes/../probes/lttng-tracepoint-event-impl.h:131:6:
error: conflicting types for 'trace_find_free_extent'

(From OE-Core rev: af428fa2432279d24cdf2a62f9dee91b30d46c3a)

Signed-off-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 42c791ab3815b47188fdd98998cdcb3d2c62ef20)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Alexander Kanavin
fef73fcd3a lttng-modules: update 2.12.2 -> 2.12.3
Drop a pile of backports.

(From OE-Core rev: d11a2157befcfe40517140988dd26bf0ed7240b6)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fba843f79ac6ad2636385de2bd63e90e08c04fcd)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Anuj Mittal
d12e2d67c9 distutils-common-base: fix LINKSHARED expansion
Add the missing $ so SECURITY_CFLAGS actually gets expanded.

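As a rough sketch (the '-shared' value below is illustrative, not the
actual recipe content), the one-character fix turns a literal brace
string into a real BitBake variable expansion:

    # Without the leading '$', BitBake treats {SECURITY_CFLAGS} as literal text
    LINKSHARED = "{SECURITY_CFLAGS} -shared"     # broken: never expanded
    LINKSHARED = "${SECURITY_CFLAGS} -shared"    # fixed: expanded by BitBake
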
(From OE-Core rev: 26bd176e221789e9592d71e8c469eb40f506029a)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6ed2f892ebb0b4e30a3bf167eac68027ea378a2d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Khem Raj
eeb98ec6ae binutils: Fix linker errors on chromium/ffmpeg on aarch64
ffmpeg in qtwebengine/chromium fails to build on aarch64:

ffmpeg/ffmpeg_internal/videodsp.o: in function `ff_prefetch_aarch64':
(.text+0x10): relocation truncated to fit: R_AARCH64_CONDBR19 against symbol `ff_prefetch_aarch64' defined in .text section in obj/third_party/ffmpeg/ffmpeg_internal/videodsp.o

Backport an upstream fix to handle this error, which is a regression in
binutils 2.35.

(From OE-Core rev: 658024f47b5f96d3f4e1813b4716e8981fbf2e47)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0a68def6b1f69b61096e58ae7778b61412dec4a2)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Richard Purdie
3f2bc0a2e1 e2fsprogs: Fix a ptest permissions determinism issue
When comparing builds built with different host umasks, this file jumped out.
The umask from do_compile was influencing ${D}, and since cp was used to add
the file, the result wasn't deterministic. Fix the file mode to ensure determinism.

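A minimal sketch of the pattern, with a hypothetical file and path:

    # sketch: force a fixed mode instead of inheriting whatever mode
    # the build host umask produced at compile time
    cp sample.conf ${D}${datadir}/e2fsprogs/
    chmod 0644 ${D}${datadir}/e2fsprogs/sample.conf
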
(From OE-Core rev: b99796ec9436b63e4fc7cb7d12c0c9bcceef5d4b)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 37f37f4a52de3711973b372160f23672b61ff6ad)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Richard Purdie
cbd023e0db fs-perms: Ensure /usr/src/debug/ file modes are correct
If files are copied into /usr/src/debug directly from WORKDIR (e.g. makedevs)
we'd get the permissions from the checkout which would depend on the host umask.

Avoid this and be deterministic by setting the file modes consistently. Core
code copies the files in so we're responsible for the permissions.

Unfortunately, to force this change to apply we need to invalidate both
the package tasks and the hash equivalence mappings, since the file mode
'corruption' already made it into the output hashes (both input options
were mapped to the output hashes).

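For illustration, an fs-perms.txt style entry pinning these modes might
look like the following (a sketch, not necessarily the exact line added):

    # <path> <mode> <uid> <gid> <walk> <fmode> <fuid> <fgid>
    /usr/src/debug 0755 root root true 0644 root root
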
(From OE-Core rev: 1f807da38b9d9aebdd86b3b5839305e03d9930e1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1f958bcd6c9cd12ec76d80586cba15f4d6ed17a7)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Stacy Gaikovaia
307146220b valgrind: helgrind: Intercept libc functions
The PTH_FUNC definition needs to be modified in order to
intercept POSIX thread functions in both libc and libpthread.
To handle this in helgrind, weak-alias the pthread functions in
glibc, and include a special case for musl.

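The weak-alias mechanism itself, in generic GCC C (a sketch of the
technique with hypothetical names, not valgrind's actual macro):

    /* a strong definition, plus a weak alias that resolves to it
       unless another strong definition of the aliased name exists */
    void intercepted_impl(void) { /* interceptor body */ }
    void pthread_name(void) __attribute__((weak, alias("intercepted_impl")));
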
See https://bugs.kde.org/show_bug.cgi?id=428909 for additional
discussion.

Upstream-Status: Submitted

(From OE-Core rev: 4c33ce1b1eca9aff0009bf71ce50f6398f7cd281)

Signed-off-by: Paul Floyd <paulf@free.fr>
Signed-off-by: Stacy Gaikovaia <Stacy.Gaikovaia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5da46a552d54de34a5243e1d90dcc6f52b7af746)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Fedor Ross
d754cd3a49 eudev: remove bashism to be compatible with dash
Remove 'echo -e' and replace it with 'printf'. In bash the builtin
'echo' has an option for interpreting backslash escapes, while in a
shell like dash the builtin 'echo' interprets backslash escapes by
default. Therefore the 'echo' in dash doesn't have the '-e' option.
Using 'printf' instead is safe with both bash and dash.

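A minimal sketch with a hypothetical message:

    # bash: 'echo -e' expands \n; dash: echo expands \n by default and
    # would print the literal '-e' as part of the output
    echo -e "line1\nline2"      # not portable between bash and dash
    printf "line1\nline2\n"     # portable: printf always interprets escapes
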
(From OE-Core rev: af5a68b545fda9013bbe8f07a2175a04e950d768)

Signed-off-by: Fedor Ross <fedor.ross@ifm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c747acca33f84879a1ebd0ef972c07f4d5dff8b7)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Fedor Ross
3d5309b736 sysvinit: remove bashism to be compatible with dash
Replace the equality operator '==' with '=' inside '[ ]' to be
compatible with both bash and dash.

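A minimal sketch (hypothetical variable and value):

    [ "$mode" == "start" ] && echo starting   # bashism: dash's [ rejects '=='
    [ "$mode" = "start" ] && echo starting    # POSIX: works in both shells
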
(From OE-Core rev: f3dbd50d3af6ff6ef6d2d5a64691c0861a19a733)

Signed-off-by: Fedor Ross <fedor.ross@ifm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b7f0ec6eafb35117eaf4eeef281162080f0ca79a)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Ross Burton
369b6e0192 gstreamer1.0-plugins-base: set CVE_PRODUCT
There are CVEs with the 'gst-plugins-base' product, so set that.

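In recipe terms this is a one-line assignment (a sketch matching the
product name above):

    CVE_PRODUCT = "gst-plugins-base"
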
(From OE-Core rev: 679964bf178e0bba9fc3e5f8064b1cd55bf159c0)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ec0f0e5995ab498f50ad51ceb361784247614982)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Ross Burton
e03e489758 gstreamer1.0-rtsp-server: set CVE_PRODUCT
There are CVEs with the 'gst-rtsp-server' product, so set that.

(From OE-Core rev: 096b1aa0727ee29adaf54b3133ebdaa71399a967)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit eb5cbdead78d092733e783b09528b208efccac3d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Ross Burton
321e17803e sqlite3: add CVE-2015-3717 to whitelist
As per https://groups.google.com/g/sqlite-dev/c/U7OjAbZO6LA, this issue
is believed to be either iOS-specific or fixed in 3.8.9.

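A sketch of the whitelist entry, assuming the CVE_CHECK_WHITELIST
variable used by cve-check in this release series:

    CVE_CHECK_WHITELIST += "CVE-2015-3717"
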
(From OE-Core rev: 2b68dc373895c2e609a5841841960c57ea457e22)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b781058267bd86bd979c50f4dfe8168c58dfa5a9)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Ross Burton
086ed4af2a python3: add CVE-2007-4559 to whitelist
This issue describes expected behaviour: do not use tarfile with
untrusted data.

(From OE-Core rev: 391ed53928db0df325798a0bce18ec6947e09ddd)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f4c22e83f2e68ff157da5ea1303acc2931d63f5f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Ross Burton
67ff1d9ffb cve-check: show real PN/PV
The output currently shows the remapped product and version fields,
which may not be the actual recipe name/version. As this report is about
recipes, use the real values.

(From OE-Core rev: 62e07072bbeeebfead34bbdb04e75cff1c4ef1e1)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 18827d7f40db4a4f92680bd59ca655cca373ad65)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Anuj Mittal
8de9b33e14 glib-2.0: RDEPEND on dbusmock only when GI_DATA_ENABLED is True
python3-dbusmock depends on pygobject unconditionally, and it's not going
to work if g-i (gobject-introspection) is disabled.

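A sketch of the conditional dependency (the exact package variable in
the recipe may differ):

    RDEPENDS_${PN}-ptest += "${@bb.utils.contains('GI_DATA_ENABLED', 'True', 'python3-dbusmock', '', d)}"
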
(From OE-Core rev: 881986b4032d893464dbcbd7e7e114b454af0a1b)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b70627e2818ded74be862ad8650e19bf1fe9bd43)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Bruce Ashfield
afe59c8e1d linux-yocto/5.4: update to v5.4.78
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    315443293a2d Linux 5.4.78
    9fda2e762498 Convert trailing spaces and periods in path components
    ebc24aeb8694 net: sch_generic: fix the missing new qdisc assignment bug
    c5cf5c7b585c perf/core: Fix race in the perf_mmap_close() function
    c6b1616f5472 perf scripting python: Avoid declaring function pointers with a visibility attribute
    b74fe3186471 x86/speculation: Allow IBPB to be conditionally enabled on CPUs with always-on STIBP
    6958fbd52e79 powerpc/603: Always fault when _PAGE_ACCESSED is not set
    5af9d48acbee drm/i915: Correctly set SFC capability for video engines
    6fcf4141b9a2 r8169: fix potential skb double free in an error path
    78f6fac0814e tipc: fix memory leak in tipc_topsrv_start()
    c59039a088bd net/x25: Fix null-ptr-deref in x25_connect
    7e332a5c0e2c net: Update window_clamp if SOCK_RCVBUF is set
    25786fb512f7 net: udp: fix UDP header access on Fast/frag0 UDP GRO
    016e70d176ff net/af_iucv: fix null pointer dereference on shutdown
    22ee23fe1cc9 IPv6: Set SIT tunnel hard_header_len to zero
    98901bff58d9 swiotlb: fix "x86: Don't panic if can not alloc buffer for swiotlb"
    2cd21fe5bcc4 pinctrl: amd: fix incorrect way to disable debounce filter
    fa76dd3c1df3 pinctrl: amd: use higher precision for 512 RtcClk
    c6a6168a31e1 drm/gma500: Fix out-of-bounds access to struct drm_device.vblank[]
    974e3a7002a0 don't dump the threads that had been already exiting when zapped.
    039c8dcd2b15 mmc: renesas_sdhi_core: Add missing tmio_mmc_host_free() at remove
    e1d706eeeaf7 mmc: sdhci-of-esdhc: Handle pulse width detection erratum for more SoCs
    2a6cba6d3d72 gpio: pcie-idio-24: Enable PEX8311 interrupts
    7b6790ae3a94 gpio: pcie-idio-24: Fix IRQ Enable Register value
    819bf3b0d969 gpio: pcie-idio-24: Fix irq mask when masking
    68dae71b7cde selinux: Fix error return code in sel_ib_pkey_sid_slow()
    33e53f2cac19 btrfs: fix potential overflow in cluster_pages_for_defrag on 32bit arch
    9de4ffb70150 ocfs2: initialize ip_next_orphan
    ac18b128cfd6 reboot: fix overflow parsing reboot cpu number
    fa6265f8fb9e Revert "kernel/reboot.c: convert simple_strtoul to kstrtoint"
    bd4d106f3122 mm/slub: fix panic in slab_alloc_node()
    84778a43ae59 jbd2: fix up sparse warnings in checkpoint code
    2192d905df0d futex: Don't enable IRQs unconditionally in put_pi_state()
    761fb6829238 mei: protect mei_cl_mtu from null dereference
    e2b2c390ec9e virtio: virtio_console: fix DMA memory allocation for rproc serial
    57626d77ef1e xhci: hisilicon: fix refercence leak in xhci_histb_probe
    cbad9668929c usb: cdc-acm: Add DISABLE_ECHO for Renesas USB Download mode
    f988e9c85cfb uio: Fix use-after-free in uio_unregister_device()
    1654bf2d9f0e thunderbolt: Add the missed ida_simple_remove() in ring_request_msix()
    06c1895fe71b thunderbolt: Fix memory leak if ida_simple_get() fails in enumerate_services()
    11c14da8d005 KVM: arm64: Don't hide ID registers from userspace
    2033dd885297 btrfs: dev-replace: fail mount if we don't have replace item with target device
    5af9630036ef btrfs: fix min reserved size calculation in merge_reloc_root
    8266c23124c1 btrfs: ref-verify: fix memory leak in btrfs_ref_tree_mod
    062c9b04f6eb ext4: unlock xattr_sem properly in ext4_inline_data_truncate()
    a6ca4c7ec44c ext4: correctly report "not supported" for {usr,grp}jquota when !CONFIG_QUOTA
    52e3a55bc253 erofs: derive atime instead of leaving it empty
    09b0d47b7952 perf: Fix get_recursion_context()
    70867a9dbf57 vrf: Fix fast path output packet handling with async Netfilter rules
    2ab9c76986e4 cosa: Add missing kfree in error path of cosa_write
    c0a6cc9e11f4 of/address: Fix of_node memory leak in of_dma_is_coherent
    f10d238aad93 xfs: fix a missing unlock on error in xfs_fs_map_blocks
    0e2ad69bd4b5 lan743x: fix "BUG: invalid wait context" when setting rx mode
    b45f52a20879 xfs: fix brainos in the refcount scrubber's rmap fragment processor
    7cbf708b1b9a xfs: fix rmap key and record comparison functions
    3bd97b33be41 xfs: set the unwritten bit in rmap lookup flags in xchk_bmap_get_rmapextents
    08e213bef291 xfs: fix flags argument to rmap lookup when converting shared file rmaps
    a8ee686597fb igc: Fix returning wrong statistics
    81dcfdb9a015 nbd: fix a block_device refcount leak in nbd_release
    c602ad2b52dc bpf: Zero-fill re-used per-cpu map element
    dfcb33773877 SUNRPC: Fix general protection fault in trace_rpc_xdr_overflow()
    b9e8f9d139bd net/mlx5: Fix deletion of duplicate rules
    e74e514c8cca pinctrl: aspeed: Fix GPI only function problem.
    d2e61c5202e6 bpf: Don't rely on GCC __attribute__((optimize)) to disable GCSE
    443ae3655f8c ARM: 9019/1: kprobes: Avoid fortify_panic() when copying optprobe template
    c0be7a34c889 pinctrl: intel: Set default bias in case no particular value given
    88ccabbd2066 mfd: sprd: Add wakeup capability for PMIC IRQ
    58953e87343d tick/common: Touch watchdog in tick_unfreeze() on all CPUs
    3322f7289e50 spi: bcm2835: remove use of uninitialized gpio flags variable
    572e545d80ea tpm_tis: Disable interrupts on ThinkPad T490s
    713a3a94bee0 i2c: sh_mobile: implement atomic transfers
    37a048d790c3 riscv: Set text_offset correctly for M-Mode
    6d8b43376990 selftests: proc: fix warning: _GNU_SOURCE redefined
    ab10b7def421 amd/amdgpu: Disable VCN DPG mode for Picasso
    4faa1fabc645 i2c: mediatek: move dma reset before i2c reset
    b66c7cdedd1e vfio/pci: Bypass IGD init in case of -ENODEV
    c6be53caf1c8 vfio: platform: fix reference leak in vfio_platform_open
    4d6f536e34d6 s390/smp: move rcu_cpu_starting() earlier
    984d77507439 iommu/amd: Increase interrupt remapping table limit to 512 entries
    a889cd3d350d nvme-tcp: avoid repeated request completion
    9d14f5225dbb nvme-rdma: avoid repeated request completion
    531b55cce9cd nvme-tcp: avoid race between time out and tear down
    d0e888a20dfd nvme-rdma: avoid race between time out and tear down
    0ca279c859d7 nvme: introduce nvme_sync_io_queues
    c473b3e56c1d scsi: mpt3sas: Fix timeouts observed while reenabling IRQ
    b61e157d9f64 scsi: scsi_dh_alua: Avoid crash during alua_bus_detach()
    bf1cedc12f58 tracing: Fix the checking of stackidx in __ftrace_trace_stack
    e57c04697030 cfg80211: regulatory: Fix inconsistent format argument
    a3f0db0d2320 cfg80211: initialize wdev data earlier
    67bb2e4d41de mac80211: fix use of skb payload instead of header
    c1cbb64c100d drm/amd/pm: do not use ixFEATURE_STATUS for checking smc running
    48083640a47b drm/amd/pm: perform SMC reset on suspend/hibernation
    f449b902badb drm/amdgpu: perform srbm soft reset always on SDMA resume
    7f6df0b085ce scsi: hpsa: Fix memory leak in hpsa_init_one()
    325455358e54 gfs2: check for live vs. read-only file system in gfs2_fitrim
    edeff05a1f10 gfs2: Add missing truncate_inode_pages_final for sd_aspace
    99dcfc517d17 gfs2: Free rd_bits later in gfs2_clear_rgrpd to fix use-after-free
    42eaa22aaf2e ALSA: hda: Reinstate runtime_allow() for all hda controllers
    0a4c091673ca ALSA: hda: Separate runtime and system suspend
    9b7e6b670df7 selftests: pidfd: fix compilation errors due to wait.h
    9110e2f2633d selftests/ftrace: check for do_sys_openat2 in user-memory test
    1737ea0c5775 usb: gadget: goku_udc: fix potential crashes in probe
    e60490354191 opp: Reduce the size of critical section in _opp_table_kref_release()
    fe2dc1093c61 usb: dwc3: pci: add support for the Intel Alder Lake-S
    e22142a9a2a9 ASoC: cs42l51: manage mclk shutdown delay
    0fc0befe0bfa ASoC: qcom: sdm845: set driver name correctly
    b668352c4aad ath9k_htc: Use appropriate rs_datalen type
    42501604363f KVM: x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally
    d2cef3bae14b KVM: arm64: ARM_SMCCC_ARCH_WORKAROUND_1 doesn't return SMCCC_RET_NOT_REQUIRED
    213e1238cacc random32: make prandom_u32() output unpredictable
    327af342ca9b tpm: efi: Don't create binary_bios_measurements file for an empty log
    0685eb84ad56 xfs: fix scrub flagging rtinherit even if there is no rt device
    2f6cbef32718 xfs: flush new eof page on truncate to avoid post-eof corruption
    66ce8bfad6f6 can: flexcan: flexcan_remove(): disable wakeup completely
    0b657367309e can: flexcan: remove FLEXCAN_QUIRK_DISABLE_MECR quirk for LS1021A
    56c56af0a3a1 can: peak_canfd: pucan_handle_can_rx(): fix echo management when loopback is on
    a23ee9956612 can: peak_usb: peak_usb_get_ts_time(): fix timestamp wrapping
    44b2c4beff8a can: peak_usb: add range checking in decode operations
    d6c34afab0ed can: xilinx_can: handle failure cases of pm_runtime_get_sync
    51920ca7519c can: ti_hecc: ti_hecc_probe(): add missed clk_disable_unprepare() in error path
    b9c4a9a07c4a can: j1939: j1939_sk_bind(): return failure if netdev is down
    0ab4c839409a can: j1939: swap addr and pgn in the send example
    5bde65abe166 can: can_create_echo_skb(): fix echo skb generation: always use skb_clone()
    183f1af506fe can: dev: __can_get_echo_skb(): fix real payload length return value for RTR frames
    ab46748bf988 can: dev: can_get_echo_skb(): prevent call to kfree_skb() in hard IRQ context
    3d0954767918 can: rx-offload: don't call kfree_skb() from IRQ context
    e201588fad54 afs: Fix warning due to unadvanced marshalling pointer
    9946509a027b iommu/vt-d: Fix a bug for PDP check in prq_event_thread
    2825a5bf3ca5 ALSA: hda: prevent undefined shift in snd_hdac_ext_bus_get_link()
    22901751d269 perf tools: Add missing swap for ino_generation
    b36f78fd48e9 perf trace: Fix segfault when trying to trace events by cgroup
    d261d0bd9066 powerpc/eeh_cache: Fix a possible debugfs deadlock
    1c8fe343a79d netfilter: ipset: Update byte and packet counters regardless of whether they match
    ad017cf5dace netfilter: nf_tables: missing validation from the abort path
    56907fa27b94 netfilter: use actual socket sk rather than skb sk when routing harder
    6234710dc634 xfs: set xefi_discard when creating a deferred agfl free log intent item
    933f911136e2 ASoC: codecs: wcd9335: Set digital gain range correctly
    5cb904da85ed net: xfrm: fix a race condition during allocing spi
    4e438ca1b629 hv_balloon: disable warning when floor reached
    bb2b60242c8e genirq: Let GENERIC_IRQ_IPI select IRQ_DOMAIN_HIERARCHY
    bb8c6bd53cc0 ASoC: Intel: kbl_rt5663_max98927: Fix kabylake_ssp_fixup function
    a8ec66026dd8 btrfs: reschedule when cloning lots of extents
    0ee771e96954 btrfs: sysfs: init devices outside of the chunk_mutex
    c58fa93b1409 btrfs: tracepoints: output proper root owner for trace_find_free_extent()
    e24516cf62f9 usb: dwc3: gadget: Reclaim extra TRBs after request completion
    ab031673e2ab usb: dwc3: gadget: Continue to process pending requests
    504cfb5e3bca PCI: qcom: Make sure PCIe is reset before init for rev 2.1.0
    9dfbc2f82ac8 KVM: arm64: Force PTE mapping on fault resulting in a device mapping
    95fda70d3955 nbd: don't update block size after device is started
    160777b19b86 time: Prevent undefined behaviour in timespec64_to_ns()
    5a39fb2f22fd drm/i915/gem: Flush coherency domains on first set-domain-ioctl
    2544d06afd8d Linux 5.4.77
    19f6d91bdad4 powercap: restrict energy meter to root access
    ec9c6b417e27 Linux 5.4.76
    c3d60c695712 arm64: dts: marvell: espressobin: Add ethernet switch aliases
    b7f7474b3921 perf/core: Fix a memory leak in perf_event_parse_addr_filter()
    21ab13af8c50 xfs: flush for older, xfs specific ioctls
    258d01b1577e PM: runtime: Resume the device earlier in __device_release_driver()
    37f75c6aa8dd PM: runtime: Drop pm_runtime_clean_up_links()
    874dfb5c6aa3 PM: runtime: Drop runtime PM references to supplier on link removal
    fbfca92c7840 ARC: stack unwinding: avoid indefinite looping
    d61edc06002f drm/panfrost: Fix a deadlock between the shrinker and madvise path
    b9d91fa92164 usb: mtu3: fix panic in mtu3_gadget_stop()
    b0d03a1bdb3c USB: Add NO_LPM quirk for Kingston flash drive
    290fcf3e0c0c usb: dwc3: ep0: Fix delay status handling
    86875e1d6426 tty: serial: fsl_lpuart: LS1021A has a FIFO size of 16 words, like LS1028A
    8febdfb5973d tty: serial: fsl_lpuart: add LS1028A support
    d5d3cca9d61f USB: serial: option: add Telit FN980 composition 0x1055
    7f7be9341b86 USB: serial: option: add LE910Cx compositions 0x1203, 0x1230, 0x1231
    b7f74775c2bb USB: serial: option: add Quectel EC200T module support
    9d34dbab6ef4 USB: serial: cyberjack: fix write-URB completion race
    62c4b2b21e3b serial: txx9: add missing platform_driver_unregister() on error in serial_txx9_init
    085fc4784e4b serial: 8250_mtk: Fix uart_get_baud_rate warning
    b33a1039564c s390/pkey: fix paes selftest failure with paes and pkey static build
    beeb658cfd35 fork: fix copy_process(CLONE_PARENT) race with the exiting ->real_parent
    642181fe3567 vt: Disable KD_FONT_OP_COPY
    cfd9d7137759 Revert "coresight: Make sysfs functional on topologies with per core sink"
    8ee6a0f25457 arm64/smp: Move rcu_cpu_starting() earlier
    eceb94287dbf drm/nouveau/gem: fix "refcount_t: underflow; use-after-free"
    7d0de6f87257 drm/nouveau/nouveau: fix the start/end range for migration
    4dab0fd40323 usb: cdns3: gadget: suspicious implicit sign extension
    937753df482c ACPI: NFIT: Fix comparison to '-ENXIO'
    16476c2b26ca drm/vc4: drv: Add error handding for bind
    a04cec1dd293 nvmet: fix a NULL pointer dereference when tracing the flush command
    8c9c03432500 nvme-rdma: handle unexpected nvme completion data length
    2fd9e60760ef vsock: use ns_capable_noaudit() on socket create
    2149aa583068 scsi: ibmvscsi: Fix potential race after loss of transport
    1247f4e29188 drm/amdgpu: add DID for navi10 blockchain SKU
    fd4fb5080725 scsi: core: Don't start concurrent async scan on same host
    3c52715ceaae blk-cgroup: Pre-allocate tree node on blkg_conf_prep
    f77756ea6641 blk-cgroup: Fix memleak on error path
    914fc5524261 drm/sun4i: frontend: Fix the scaler phase on A33
    f743f73f42a7 drm/sun4i: frontend: Reuse the ch0 phase for RGB formats
    6d7b41a67687 drm/sun4i: frontend: Rework a bit the phase data
    147e3743cf7a of: Fix reserved-memory overlap detection
    6e02c29e4ac4 x86/kexec: Use up-to-dated screen_info copy to fill boot params
    3283d4d78412 arm64: dts: meson: add missing g12 rng clock
    69e0e917c7c8 ARM: dts: sun4i-a10: fix cpu_alert temperature
    2716e78a6486 futex: Handle transient "ownerless" rtmutex state correctly
    ec5f524e0293 tracing: Fix out of bounds write in get_trace_buf
    9f6883fce694 spi: bcm2835: fix gpio cs level inversion
    f352cca84625 regulator: defer probe when trying to get voltage from unresolved supply
    a69af5baed80 ftrace: Handle tracing when switching between context
    3058420f40fb ftrace: Fix recursion check for NMI test
    cfaf010cf345 mtd: spi-nor: Don't copy self-pointing struct around
    aef59b5e5bdf ring-buffer: Fix recursion protection transitions between interrupt context
    2cd71743e7ff gfs2: Wake up when sd_glock_disposal becomes zero
    d2286457bd83 mm: always have io_remap_pfn_range() set pgprot_decrypted()
    1b8490d6b809 kthread_worker: prevent queuing delayed work from timer_fn when it is being canceled
    b1d16be4f2f4 lib/crc32test: remove extra local_irq_disable/enable
    c1f729c7dec0 mm: mempolicy: fix potential pte_unmap_unlock pte error
    f7c2913d606b ALSA: usb-audio: Add implicit feedback quirk for MODX
    26a871cf86cb ALSA: usb-audio: Add implicit feedback quirk for Qu-16
    a46e830d017e ALSA: usb-audio: add usb vendor id as DSD-capable for Khadas devices
    65457e345f3c ALSA: usb-audio: Add implicit feedback quirk for Zoom UAC-2
    72ce616ed55a ALSA: hda/realtek - Enable headphone for ASUS TM420
    f7d0f7242405 ALSA: hda/realtek - Fixed HP headset Mic can't be detected
    61402d61a2af Fonts: Replace discarded const qualifier
    e5ea79bb19f8 sfp: Fix error handing in sfp_probe()
    9b5458effeee sctp: Fix COMM_LOST/CANT_STR_ASSOC err reporting on big-endian platforms
    26ffb8916059 powerpc/vnic: Extend "failover pending" window
    92e65059beda net: usb: qmi_wwan: add Telit LE910Cx 0x1230 composition
    8e3c047f814b ip_tunnel: fix over-mtu packet send fail without TUNNEL_DONT_FRAGMENT flags
    ac343efb572c ionic: check port ptr before use
    6ef3bcc25a3e gianfar: Account for Tx PTP timestamp in the skb headroom
    5b66a5b6a9e2 gianfar: Replace skb_realloc_headroom with skb_cow_head for PTP
    7bf7b7c385a1 chelsio/chtls: fix always leaking ctrl_skb
    14d755a4815e chelsio/chtls: fix memory leaks caused by a race
    57bb59f9d8fb cadence: force nonlinear buffers to be cloned
    1695fca8a923 ptrace: fix task_join_group_stop() for the case when current is traced
    76e5bba75a63 tipc: fix use-after-free in tipc_bcast_get_mode
    ca16a42f5f0d arm64: Change .weak to SYM_FUNC_START_WEAK_PI for arch/arm64/lib/mem*.S
    d94589900d98 arm64: lib: Use modern annotations for assembly functions
    3e7050661d95 arm64: asm: Add new-style position independent function annotations
    840d8c9b3e5f linkage: Introduce new macros for assembler symbols
    1ca84322ab5b ASoC: Intel: Skylake: Add alternative topology binary name
    e05dfcff26e9 drm/i915: Drop runtime-pm assert from vgpu io accessors
    d321f127eb51 drm/i915/gt: Delay execlist processing for tgl
    5bcd18bf8082 drm/i915: Break up error capture compression loops with cond_resched()

(From OE-Core rev: 1dcfaba6c60805a3987a0bbdc8fbf61225a41dc1)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6063baedd741e1ae86a2c42cd2dc41899718a2d5)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Bruce Ashfield
f6434fde67 linux-yocto/5.8: ext4/tipc warning fixups
Integrating the following commit(s) to linux-yocto/5.8:

    3c5d210805d6 tipc: fix -Wstringop-truncation warnings
    cc89fd77c248 ext4: fix -Wstringop-truncation warnings

(From OE-Core rev: 234c8101d642120b08b369d305914b1560f140db)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 45a229f84fe71b251530bb182c1ad03a88f592a8)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Bruce Ashfield
e46465c718 linux-yocto/5.8: perf: Alias SYS_futex with SYS_futex_time64 on 32-bit arches with 64bit time_t
Integrating the following commit(s) to linux-yocto/5.8:

    52b840afae05 perf: Alias SYS_futex with SYS_futex_time64 on 32-bit arches with 64bit time_t

(From OE-Core rev: fbcd54a3db79e85aa1180523ca2903bf03ff7462)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 41135c844af1165b1e74e8e2654784f3cd4def8b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Bruce Ashfield
e4156f232b linux-yocto/5.4: perf: Alias SYS_futex with SYS_futex_time64 on 32-bit arches with 64bit time_t
Integrating the following commit(s) to linux-yocto/5.4:

    356914747645 perf: Alias SYS_futex with SYS_futex_time64 on 32-bit arches with 64bit time_t

(From OE-Core rev: 7c8b7ed2ece21b5473eca2144c8b9a01d0197475)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 73ee256e5c1194ec5d0843dee274d29cc0efe993)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Denys Zagorui
bfa254bd1a kernel-devsrc: improve reproducibility for arm64
.vdso-offsets.h.cmd contains the command that was used to produce vdso-offsets.h.
It breaks reproducibility because it has an absolute path in it. There is
no value in packaging such files, so they can be dropped.

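A sketch of the cleanup, assuming it runs from the staged kernel
source/build tree:

    # the saved command line in .cmd files embeds absolute host paths
    find . -name '.vdso-offsets.h.cmd' -delete
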
(From OE-Core rev: b627c00c624f9f9279c21ddd4d8aa9a8a592a8d3)

Signed-off-by: Denys Zagorui <dzagorui@cisco.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d31b4db24643b0867c654af34c684b4de2f8122b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Vyacheslav Yurkov
4315a12330 license_image.bbclass: use canonical name for license files
When copying license files to the image rootfs, i.e. to
/usr/share/common-licenses, the canonical name of a license should be
used, otherwise duplicated files end up in the common-licenses directory.

For example, the GPL-2.0 license, according to conf/license.conf, can be
referenced in recipes as GPL-2, GPLv2, and GPLv2.0. If a license name is
used directly, we end up with three files in the rootfs with the same
content. If a canonical name is used instead, then each license gets
copied only once.

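A sketch of the mapping step, using the canonical_license() helper from
meta/lib/oe/license.py:

    # sketch (bbclass Python context): map an alias to its canonical name
    # before copying, so every spelling lands on one file
    lic = oe.license.canonical_license(d, "GPLv2")   # -> "GPL-2.0" via SPDXLICENSEMAP
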
(From OE-Core rev: 0fda54af52dfb57598ea9409113d33dacb786dc1)

Signed-off-by: Vyacheslav Yurkov <Vyacheslav.Yurkov@bruker.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 670fe71dd18ea675f35581db4a61fda137f8bf00)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:07 +00:00
Lee Chee Yang
9b58e1d1a8 qemu: fix CVE-2020-24352
(From OE-Core rev: 12bee66a42a7c2a38789ddb37cb098bcbf0b3841)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Lee Chee Yang
f4ff33fd11 python3: fix CVE-2020-27619
(From OE-Core rev: 0edf9f32929c462b9b53f0cdc7e5ecf816fbb7b3)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Lee Chee Yang
f9f50c5638 libproxy: fix CVE-2020-26154
(From OE-Core rev: af85169a4dfb2fc4dc820409eb4a7756dc14e894)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Max Krummenacher
23eef02eff linux-firmware: rdepend on license for all nvidia packages
Fixes commit 0671d04978 ("linux-firmware: package nvidia firmware")

(From OE-Core rev: cbe3142a32363a45c9935b6ee748f217a699f6b8)

Signed-off-by: Max Krummenacher <max.krummenacher@toradex.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 59789dea33629a96f0fe5646eb684aa131e167bf)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Loic Domaigne
bef1f4761e rootfs_*.bbclass: fix missing vardeps for do_rootfs
As per lib/oe/rootfs.py and lib/oe/package_manager/???/__init__.py,
the PACKAGE_FEED baseurl is defined as the joined paths of
URIS/BASE_PATHS/ARCHS.

Therefore, the do_rootfs task should additionally depend on
PACKAGE_FEED_{BASE_PATHS,ARCHS} to properly retrigger a build if
these values change.

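In BitBake terms, a sketch of the added task flag:

    do_rootfs[vardeps] += "PACKAGE_FEED_URIS PACKAGE_FEED_BASE_PATHS PACKAGE_FEED_ARCHS"
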
(From OE-Core rev: 14165724d41a5d00384a9db60b49b37ac4f3b40f)

Signed-off-by: Loic Domaigne (ljd) <tech@domaigne.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e5329464f5ebad909c4c9bd27a718bbd8f4cc221)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Alistair
8b9bdf1d1e weston-init: Fix incorrect idle-time setting
(From OE-Core rev: c7cd893088bc82466bf1843c292731eb5992467b)

Signed-off-by: Alistair Francis <alistair@alistair23.me>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 84b3a6b7bd73ebad90865ee4351578c2109358fb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Wonmin Jung
1a4b81a392 kernel: Set proper LD in KERNEL_KCONFIG_COMMAND
With 'ld-is-gold' and Linux kernel 5.4 or later, the menuconfig
task for kernel recipes will fail with:

$ bitbake -c menuconfig virtual/kernel
...
scripts/kconfig/mconf  Kconfig
scripts/Kconfig.include:43:  gold linker 'x86_64-poky-linux-ld' not supported
/OE/build/tmp/work-shared/qemux86-64/kernel-source/scripts/kconfig/Makefile:29:
 recipe for target 'menuconfig' failed
make[2]: *** [menuconfig] Error 1
/OE/build/tmp/work-shared/qemux86-64/kernel-source/Makefile:606:
 recipe for target 'menuconfig' failed
make[1]: *** [menuconfig] Error 2
/OE/build/tmp/work-shared/qemux86-64/kernel-source/Makefile:185:
 recipe for target '__sub-make' failed
make: *** [__sub-make] Error 2
Command failed.

This is because the KERNEL_LD variable already set in
kernel-arch.bbclass isn't used by the do_menuconfig function of
cml1.bbclass.

To fix this issue, specify the LD variable when calling the kernel
menuconfig command through KERNEL_KCONFIG_COMMAND.

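A sketch of the idea (the exact command line in kernel.bbclass may
differ):

    KERNEL_KCONFIG_COMMAND = "make ${EXTRA_OEMAKE} LD='${KERNEL_LD}' O=${B} menuconfig"
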
(From OE-Core rev: 263e0c7a301fc11d3cf4ced4ffb911ebf6cb2f14)

Signed-off-by: Wonmin Jung <wonmin82@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1faf66ce0b1f8f5165277161e07e25e672370c3f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Bruce Ashfield
c111b692cc kernel: relocate copy of module.lds to module compilation task
There were two copies of this patch floating around, and the merged
variant has the copy in the wrong place.

module.lds is only created during modules_prepare, and that target is
not invoked during our main build of the kernel. We aren't about to
change the kernel build (there's no need), so we move the copy into
the compile_kernelmodules task. After that runs, we have module.lds
available to copy.

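A sketch of the relocated copy (the destination path is illustrative of
the shared-workdir layout, not the exact code):

    # inside do_compile_kernelmodules, after the module build has run
    if [ -e "${B}/scripts/module.lds" ]; then
        cp -f "${B}/scripts/module.lds" "${STAGING_KERNEL_BUILDDIR}/scripts/"
    fi
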
This has been tested against clean kernel + out-of-tree module
builds, and the dependencies ensure that the file is copied
before the out-of-tree module build starts.

(From OE-Core rev: d9e327063f63193186822d958706081d64ec8139)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7d94f9209ebaaf59ea001239a889dd7f928a0e7c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Bruce Ashfield
701e43727a kernel: provide module.lds for out of tree builds in v5.10+
The upstream commit 596b0474d3d ("kbuild: preprocess module linker
script") adds a dependency on module.lds for external module
building.

Since module.lds is generated as part of 'modules_prepare', we
must make it available with the other kernel artifacts in the
kernel shared workdir, otherwise out of tree builds fail.

This fixes errors like:

    | make[4]: *** No rule to make target 'scripts/module.lds', needed by
        'build/tmp/work/qemuarm64-poky-linux/cryptodev-module/1.11-r0/git/cryptodev.ko'.
        Stop.
    | make[4]: *** Waiting for unfinished jobs....

We also ensure that kernel-devsrc has a copy, to support on-target
module builds that are often prepared with 'make scripts prepare'.
Those targets won't regenerate it, so the build fails.
If 'make modules_prepare' is used, the file will be regenerated
and will overwrite our copy (as expected).

(From OE-Core rev: 27856184dee4b68254cb302b2294c115a46fcf16)

Signed-off-by: Pan, Kris <kris.pan@intel.com>
Signed-off-by: Lili Li <lili.li@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0fc66a0b64953aae38d0124b57615fffaec8de52)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Bruce Ashfield
dedca9ecb7 linux-yocto/5.4: update to v5.4.75
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    6e97ed6efa70 Linux 5.4.75
    6ce4da84e5f4 staging: octeon: Drop on uncorrectable alignment or FCS error
    b869f6b67274 staging: octeon: repair "fixed-link" support
    15506ee68893 staging: comedi: cb_pcidas: Allow 2-channel commands for AO subdevice
    4d934fe936fd staging: fieldbus: anybuss: jump to correct label in an error path
    8fd792948e76 KVM: arm64: Fix AArch32 handling of DBGD{CCINT,SCRext} and DBGVCR
    4cb29cdd5043 device property: Don't clear secondary pointer for shared primary firmware node
    26086875476f device property: Keep secondary firmware node secondary by type
    e793fc391351 ARM: s3c24xx: fix missing system reset
    2937774ef43a ARM: samsung: fix PM debug build with DEBUG_LL but !MMU
    0808ca98e67e arm: dts: mt7623: add missing pause for switchport
    f3d8023e0647 hil/parisc: Disable HIL driver when it gets stuck
    81190a9efde0 cachefiles: Handle readpage error correctly
    4bf2a744a4e7 arm64: berlin: Select DW_APB_TIMER_OF
    c2313d7818b9 tty: make FONTX ioctl use the tty pointer they were actually passed
    beb5d0dfc154 drm/amd/pm: increase mclk switch threshold to 200 us
    071b3300c951 mmc: sdhci: Use Auto CMD Auto Select only when v4_mode is true
    fb4e2a67e193 mmc: sdhci-of-esdhc: set timeout to max before tuning
    b7e1a637eae9 drm/ttm: fix eviction valuable range check.
    b60edf37d5d3 ext4: fix invalid inode checksum
    ae05fdc6d60a ext4: fix error handling code in add_new_gdb
    c0de3cf2f286 ext4: fix leaking sysfs kobject after failed mount
    b11e9dd66e3a vringh: fix __vringh_iov() when riov and wiov are different
    3cfbc13ab3f0 ring-buffer: Return 0 on success from ring_buffer_resize()
    0db6e7161e33 9P: Cast to loff_t before multiplying
    51135ffbb54d libceph: clear con->out_msg on Policy::stateful_server faults
    d4fdbedef767 ceph: promote to unsigned long long before shifting
    9cdccb4761e5 drm/amd/display: Fix kernel panic by dal_gpio_open() error
    d7e22dbc662d drm/amd/display: Don't invoke kgdb_breakpoint() unconditionally
    d1628cdacfb0 drm/amdgpu: increase the reserved VM size to 2MB
    adff3a805c97 drm/amd/display: Avoid MST manager resource leak.
    1e460aa7353d drm/amdkfd: Use same SQ prefetch setting as amdgpu
    d417026c4081 drm/amdgpu: correct the gpu reset handling for job != NULL case
    9887a48d49f0 drm/amd/display: Increase timeout for DP Disable
    987d3814c92c drm/amdgpu: don't map BO in reserved region
    2c58d5e0c754 i2c: imx: Fix external abort on interrupt in exit paths
    da3ccf5b2045 rtc: rx8010: don't modify the global rtc ops
    e17afa6d1de3 ia64: fix build error with !COREDUMP
    da3bb6fa23f1 ubi: check kthread_should_stop() after the setting of task state
    6d0beeebd15d ARC: perf: redo the pct irq missing in device-tree handling
    468811595833 perf python scripting: Fix printable strings in python3 scripts
    a99cbd20a5c5 ubifs: mount_ubifs: Release authentication resource in error handling path
    9ba6324ca9c4 ubifs: Don't parse authentication mount options in remount process
    748057df47b9 ubifs: Fix a memleak after dumping authentication mount options
    bc202c839b5d ubifs: journal: Make sure to not dirty twice for auth nodes
    a77927469760 ubifs: xattr: Fix some potential memory leaks while iterating entries
    213c836b2396 ubifs: dent: Fix some potential memory leaks while iterating entries
    c1ea3c4a4302 NFSD: Add missing NFSv2 .pc_func methods
    da86bb4c214f NFSv4.2: support EXCHGID4_FLAG_SUPP_FENCE_OPS 4.2 EXCHANGE_ID flag
    c342001cab7f NFSv4: Wait for stateid updates after CLOSE/OPEN_DOWNGRADE
    415043c3ec0d powerpc: Fix undetected data corruption with P9N DD2.1 VSX CI load emulation
    94e27f13694c powerpc/powermac: Fix low_sleep_handler with KUAP and KUEP
    61ed8c1b940d powerpc/powernv/elog: Fix race while processing OPAL error log event.
    7850dd0851a3 powerpc/memhotplug: Make lmb size 64bit
    3fa03b7f21a3 powerpc: Warn about use of smt_snooze_delay
    240baebeda09 powerpc/rtas: Restrict RTAS requests from userspace
    551bf7c4bc24 s390/stp: add locking to sysfs functions
    58a7dc5f521a MIPS: DEC: Restore bootmem reservation for firmware working memory area
    73597ab2a9b9 powerpc/drmem: Make lmb_size 64 bit
    829c0a9634b9 iio:gyro:itg3200: Fix timestamp alignment and prevent data leak.
    9f4f75df4b47 iio:adc:ti-adc12138 Fix alignment issue with timestamp
    96a5134423ae iio:adc:ti-adc0832 Fix alignment issue with timestamp
    a8c59abdbc6b iio: adc: gyroadc: fix leak of device node iterator
    ad877be5b983 iio:light:si1145: Fix timestamp alignment and prevent data leak.
    a4f02a81c7e6 dmaengine: dma-jz4780: Fix race in jz4780_dma_tx_status
    f707ccb2f10c udf: Fix memory leak when mounting
    93da9dcee2d2 HID: wacom: Avoid entering wacom_wac_pen_report for pad / battery
    87d398f348b8 vt: keyboard, extend func_buf_lock to readers
    eb4c460e2e06 vt: keyboard, simplify vt_kdgkbsent
    8c16ca600657 drm/i915: Force VT'd workarounds when running as a guest OS
    94478c1dc57d usb: host: fsl-mph-dr-of: check return of dma_set_mask()
    75d0d4ff5970 usb: typec: tcpm: reset hard_reset_count for any disconnect
    543432d078c0 usb: cdc-acm: fix cooldown mechanism
    2850f148cd7f usb: dwc3: gadget: END_TRANSFER before CLEAR_STALL command
    206dcd6ce82f usb: dwc3: gadget: Resume pending requests after CLEAR_STALL
    97224cdc0440 usb: dwc3: core: don't trigger runtime pm when remove driver
    726f638e7cd1 usb: dwc3: core: add phy cleanup for probe error handling
    f935b70cf724 usb: dwc3: gadget: Check MPS of the request length
    1c9e86c933ea usb: dwc3: ep0: Fix ZLP for OUT ep0 requests
    3468cbceb563 usb: dwc3: pci: Allow Elkhart Lake to utilize DSM method for PM functionality
    2600a131e1f6 usb: xhci: Workaround for S3 issue on AMD SNPS 3.0 xHC
    c964d386e849 btrfs: fix readahead hang and use-after-free after removing a device
    dfda50e882f5 btrfs: fix use-after-free on readahead extent after failure to create it
    834a61b2123b btrfs: tree-checker: validate number of chunk stripes and parity
    1cedc54ad3d4 btrfs: cleanup cow block on error
    d3ce2d0fb8b2 btrfs: tree-checker: fix false alert caused by legacy btrfs root item
    4b82b8aba08d btrfs: use kvzalloc() to allocate clone_roots in btrfs_ioctl_send()
    6ec4b82fc322 btrfs: send, recompute reference path after orphanization of a directory
    c2dcc9b03b7f btrfs: send, orphanize first all conflicting inodes when processing references
    e1cf034899b6 btrfs: reschedule if necessary when logging directory items
    223b462744b3 btrfs: improve device scanning messages
    c5f2a5091263 btrfs: qgroup: fix wrong qgroup metadata reserve for delayed inode
    1e2f16dd611b PM: runtime: Remove link state checks in rpm_get/put_supplier()
    a0bdb5b16392 scsi: qla2xxx: Fix crash on session cleanup with unload
    f0ef0e2299f5 scsi: mptfusion: Fix null pointer dereferences in mptscsih_remove()
    3fc2cbba4069 w1: mxc_w1: Fix timeout resolution problem leading to bus error
    a034ea12bdd4 acpi-cpufreq: Honor _PSD table setting on new AMD CPUs
    7f9d9a007e59 ACPI: EC: PM: Drop ec_no_wakeup check from acpi_ec_dispatch_gpe()
    0adf4dbae9c0 ACPI: EC: PM: Flush EC work unconditionally after wakeup
    e7f52fd6e0ef PCI/ACPI: Whitelist hotplug ports for D3 if power managed by ACPI
    6341984bef17 ACPI: debug: don't allow debugging when ACPI is disabled
    1a5f62a3c694 ACPI: video: use ACPI backlight for HP 635 Notebook
    9578d7381432 ACPI / extlog: Check for RDMSR failure
    5e25b44cc2eb ACPI: button: fix handling lid state changes when input device closed
    c75b77cb9f01 NFS: fix nfs_path in case of a rename retry
    f8a6a2ed4b7d fs: Don't invalidate page buffers in block_write_full_page()
    2f3cb993a6f2 media: uvcvideo: Fix uvc_ctrl_fixup_xu_info() not having any effect
    8ac92a5e5fd7 leds: bcm6328, bcm6358: use devres LED registering function
    a908e29705ee extcon: ptn5150: Fix usage of atomic GPIO with sleeping GPIO chips
    004fb028f22c spi: sprd: Release DMA channel also on probe deferral
    d789e1c5b1ce perf/x86/amd/ibs: Fix raw sample data accumulation
    2e2a324641f9 perf/x86/amd/ibs: Don't include randomized bits in get_ibs_op_count()
    f9a48ff99961 perf/x86/intel: Fix Ice Lake event constraint table
    3674b0445b70 selftests/x86/fsgsbase: Test PTRACE_PEEKUSER for GSBASE with invalid LDT GS
    2d1c48227780 seccomp: Make duplicate listener detection non-racy
    470c8c409e1c mmc: sdhci-acpi: AMDI0040: Set SDHCI_QUIRK2_PRESET_VALUE_BROKEN
    3f56e94b6f7c mmc: sdhci: Add LTR support for some Intel BYT based controllers
    b91d4797b3da md/raid5: fix oops during stripe resizing
    a7aa5d578fed nvme-rdma: fix crash when connect rejected
    c421c082088e sgl_alloc_order: fix memory leak
    742fd49cf811 nbd: make the config put is called before the notifying the waiter
    b71dbaf08f9f ARM: dts: s5pv210: remove dedicated 'audio-subsystem' node
    3ad1464467e7 ARM: dts: s5pv210: move PMU node out of clock controller
    8a9024f6e29f ARM: dts: s5pv210: move fixed clocks under root node
    8c1b47e8aa43 ARM: dts: s5pv210: remove DMA controller bus node name to fix dtschema warnings
    c6029d9bc68d memory: emif: Remove bogus debugfs error handling
    2f98e2843b69 ARM: dts: omap4: Fix sgx clock rate for 4430
    c70f909e7ad6 arm64: dts: renesas: ulcb: add full-pwr-cycle-in-suspend into eMMC nodes
    e2dca8845c37 cifs: handle -EINTR in cifs_setattr
    3c78eb161c26 gfs2: add validation checks for size of superblock
    9f7e4bfadfe9 gfs2: use-after-free in sysfs deregistration
    9b58c55ba81c KVM: PPC: Book3S HV: Do not allocate HPT for a nested guest
    d7d7920a7f66 ext4: Detect already used quota file early
    d01b63320799 drivers: watchdog: rdc321x_wdt: Fix race condition bugs
    229bdf0b1319 net: 9p: initialize sun_server.sun_path to have addr's value only when addr is valid
    660e2d9d1417 clk: ti: clockdomain: fix static checker warning
    f66125e1c4df rpmsg: glink: Use complete_all for open states
    dfcfccd05075 bnxt_en: Log unknown link speed appropriately.
    78452408bb3e md/bitmap: md_bitmap_get_counter returns wrong blocks
    4ebdad05129e btrfs: fix replace of seed device
    1f145a1193ea ARC: [dts] fix the errors detected by dtbs_check
    5759f38a63db drm/amd/display: HDMI remote sink need mode validation for Linux
    3ef6095d6587 power: supply: test_power: add missing newlines when printing parameters by sysfs
    cf5a6124f237 ACPI: HMAT: Fix handling of changes from ACPI 6.2 to ACPI 6.3
    37464a8a7f68 bus/fsl_mc: Do not rely on caller to provide non NULL mc_io
    0606a8df86fe drivers/net/wan/hdlc_fr: Correctly handle special skb->protocol values
    592cbc0a6a83 brcmfmac: Fix warning message after dongle setup failed
    cf9cc49cd881 ACPI: Add out of bounds and numa_off protections to pxm_to_node()
    5880a0d1c835 xfs: don't free rt blocks when we're doing a REMAP bunmapi call
    7551e2f4fddd can: flexcan: disable clocks during stop mode
    64129ad98b74 arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
    ea888a14ac6e SUNRPC: Mitigate cond_resched() in xprt_transmit()
    7f7f437277ac usb: xhci: omit duplicate actions when suspending a runtime suspended host.
    8fd52a21ab57 coresight: Make sysfs functional on topologies with per core sink
    2502107a9ccd uio: free uio id after uio file node is freed
    16b9e40d2989 USB: adutux: fix debugging
    65052761eeb9 cpufreq: sti-cpufreq: add stih418 support
    2eab702ee945 riscv: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO
    7762afa04fd4 samples/bpf: Fix possible deadlock in xdpsock
    58c80462e467 selftests/bpf: Define string const as global for test_sysctl_prog.c
    8f71fb76a312 media: uvcvideo: Fix dereference of out-of-bound list iterator
    4801ffdd6962 bpf: Permit map_ptr arithmetic with opcode add and offset 0
    f7f7b77ee507 kgdb: Make "kgdbcon" work properly with "kgdb_earlycon"
    77fa5e15c933 ia64: kprobes: Use generic kretprobe trampoline handler
    b3142fe7ff63 printk: reduce LOG_BUF_SHIFT range for H8300
    80685a94f7c4 arm64: topology: Stop using MPIDR for topology information
    7975367a005f drm/bridge/synopsys: dsi: add support for non-continuous HS clock
    d3fb88a51c04 mmc: via-sdmmc: Fix data race bug
    67e18c92e081 media: imx274: fix frame interval handling
    448e5004ad85 media: tw5864: check status of tw5864_frameinterval_get
    47ab020f3290 usb: typec: tcpm: During PR_SWAP, source caps should be sent only after tSwapSourceStart
    5472c5d1d505 media: platform: Improve queue set up flow for bug fixing
    3a8568806285 media: videodev2.h: RGB BT2020 and HSV are always full range
    ac437801e3c2 selftests/x86/fsgsbase: Reap a forgotten child
    581940d9b9c8 drm/brige/megachips: Add checking if ge_b850v3_lvds_init() is working correctly
    ed0bd7b12939 ath10k: fix VHT NSS calculation when STBC is enabled
    b30a5c8d9def ath10k: start recovery process when payload length exceeds max htc length for sdio
    759721fb5886 video: fbdev: pvr2fb: initialize variables
    b2844ba3d37c xfs: fix realtime bitmap/summary file truncation when growing rt volume
    a10ed3b55fed power: supply: bq27xxx: report "not charging" on all types
    036b0f4d7671 NFS4: Fix oops when copy_file_range is attempted with NFS4.0 source
    13081d5ddb58 ARM: 8997/2: hw_breakpoint: Handle inexact watchpoint addresses
    df5b07f2172a f2fs: handle errors of f2fs_get_meta_page_nofail
    15c7ec03ddb8 um: change sigio_spinlock to a mutex
    fb9b18150e3f s390/startup: avoid save_area_sync overflow
    9804eda4a975 f2fs: fix to check segment boundary during SIT page readahead
    1544dcb514ad f2fs: fix uninit-value in f2fs_lookup
    40b357f7436d f2fs: add trace exit in exception path
    2eab8974aea8 sparc64: remove mm_cpumask clearing to fix kthread_use_mm race
    7d59323cff67 powerpc: select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
    82e93f94ac65 mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race
    dc17b990ee90 powerpc/powernv/smp: Fix spurious DBG() warning
    2db759037152 futex: Fix incorrect should_fail_futex() handling
    87d9ac94c7e7 ata: sata_nv: Fix retrieving of active qcs
    da8e2fbe458c RDMA/qedr: Fix memory leak in iWARP CM
    d90dd1599cf3 mlxsw: core: Fix use-after-free in mlxsw_emad_trans_finish()
    f7e7de28d106 x86/unwind/orc: Fix inactive tasks with stack pointer in %sp on GCC 10 compiled kernels
    6937c143e3d3 firmware: arm_scmi: Add missing Rx size re-initialisation
    aedcfe9a02f8 firmware: arm_scmi: Fix ARCH_COLD_RESET
    85d9d02a49e2 xen/events: block rogue events for some time
    1d628c330fa6 xen/events: defer eoi in case of excessive number of events
    25c23f033457 xen/events: use a common cpu hotplug hook for event channels
    b7d6a66e2172 xen/events: switch user event channels to lateeoi model
    48b533aa838d xen/pciback: use lateeoi irq binding
    9396de462aa6 xen/pvcallsback: use lateeoi irq binding
    5441639a38df xen/scsiback: use lateeoi irq binding
    e6ea898e5602 xen/netback: use lateeoi irq binding
    ade6bd5af7f9 xen/blkback: use lateeoi irq binding
    df54eca9ae8a xen/events: add a new "late EOI" evtchn framework
    44a455e06d87 xen/events: fix race in evtchn_fifo_unmask()
    4bea575a1069 xen/events: add a proper barrier to 2-level uevent unmasking
    a01379671d67 xen/events: avoid removing an event channel while handling it
    b300b28b7814 Linux 5.4.74
    847c86d7f1d5 phy: marvell: comphy: Convert internal SMCC firmware return codes to errno
    aa3410cc232c misc: rtsx: do not setting OC_POWER_DOWN reg in rtsx_pci_init_ocp()
    a6db3aab9c40 openrisc: Fix issue with get_user for 64-bit values
    f73328c3192e crypto: x86/crc32c - fix building with clang ias
    29bbc9cb0b27 xen/gntdev.c: Mark pages as dirty
    8f640cd8ee60 ata: sata_rcar: Fix DMA boundary mask
    9f531583c1f0 PM: runtime: Fix timer_expires data type on 32-bit arches
    870d910e1afb serial: pl011: Fix lockdep splat when handling magic-sysrq interrupt
    44ef3b63c788 serial: qcom_geni_serial: To correct QUP Version detection logic
    c274d1f8baaf mtd: lpddr: Fix bad logic in print_drs_error
    bc67eeb9781b RDMA/addr: Fix race with netevent_callback()/rdma_addr_cancel()
    ebb0adcfbb1f cxl: Rework error message for incompatible slots
    125a229e52e7 p54: avoid accessing the data mapped to streaming DMA
    801863f634c4 evm: Check size of security.evm before using it
    dd2f800e9074 bpf: Fix comment for helper bpf_current_task_under_cgroup()
    860448e73ba2 fuse: fix page dereference after free
    4e1a23779bde ata: ahci: mvebu: Make SATA PHY optional for Armada 3720
    7aae7466f5db x86/xen: disable Firmware First mode for correctable memory errors
    47a4d5406389 arch/x86/amd/ibs: Fix re-arming IBS Fetch
    95daf621291c erofs: avoid duplicated permission check for "trusted." xattrs
    b8321829036f bnxt_en: Invoke cancel_delayed_work_sync() for PFs also.
    b1b5efe574cd bnxt_en: Fix regression in workqueue cleanup logic in bnxt_remove_one().
    aa4dba4e2226 bnxt_en: Re-write PCI BARs after PCI fatal error.
    5c86cda6a529 net: hns3: Clear the CMDQ registers before unmapping BAR region
    30d628ede582 tipc: fix memory leak caused by tipc_buf_append()
    8cc351a3d444 tcp: Prevent low rmem stalls with SO_RCVLOWAT.
    7740774940fc ravb: Fix bit fields checking in ravb_hwtstamp_get()
    4939183bb28c r8169: fix issue with forced threading in combination with shared interrupts
    f1493ab33679 net/sched: act_mpls: Add softdep on mpls_gso.ko
    4bffc9618caf netem: fix zero division in tabledist
    13a4843d3938 mlxsw: core: Fix memory leak on module removal
    c90459593f55 ibmvnic: fix ibmvnic_set_mac
    e781c67629ed gtp: fix an use-before-init in gtp_newlink()
    0ea202010b40 cxgb4: set up filter action after rewrites
    3a0d5b5358d1 chelsio/chtls: fix tls record info to user
    c5db8069776f chelsio/chtls: fix memory leaks in CPL handlers
    a5b9b28b22ba chelsio/chtls: fix deadlock issue
    c17d5aea3395 bnxt_en: Send HWRM_FUNC_RESET fw command unconditionally.
    72c17fadf3f8 bnxt_en: Check abort error state in bnxt_open_nic().
    8e1b40e57dca efivarfs: Replace invalid slashes with exclamation marks in dentries.
    c3019695f1d8 x86/PCI: Fix intel_mid_pci.c build error when ACPI is not enabled
    57a88e44b512 arm64: link with -z norelro regardless of CONFIG_RELOCATABLE
    7736c61080f1 arm64: Run ARCH_WORKAROUND_2 enabling code on all CPUs
    114c6930b351 arm64: Run ARCH_WORKAROUND_1 enabling code on all CPUs
    2dcb0c6c3818 scripts/setlocalversion: make git describe output more reliable
    c8a5496bc747 objtool: Support Clang non-section symbols in ORC generation
    a45c8c0a31a7 socket: don't clear SOCK_TSTAMP_NEW when SO_TIMESTAMPNS is disabled
    bded4de4a5e1 netfilter: nftables_offload: KASAN slab-out-of-bounds Read in nft_flow_rule_create
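
The set of -stable changes an update like this pulls in can be reproduced straight from a korg stable clone; a minimal sketch, assuming a local linux-stable checkout (the v5.4.75 upper bound is inferred from the "Linux 5.4.74" entry part-way down the list above, not confirmed by this log):

    # Clone the korg stable tree and list the commits in the release range
    git clone --branch linux-5.4.y \
        https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git linux-stable
    cd linux-stable
    # Non-merge commits between the two stable releases, newest first
    git log --no-merges --oneline v5.4.73..v5.4.75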

(From OE-Core rev: daa8aa8af31dc74ba9c916525db348a393fe4f1e)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 16dc22108fcf7e53750424b90c0aeb8dba2dc5e5)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Bruce Ashfield
d890775c90 linux-yocto/5.8: update to v5.8.18
Updating linux-yocto/5.8 to the latest korg -stable release, which comprises
the following commits (a version-pinning sketch follows the list):

    ab435ce49bd1 Linux 5.8.18
    4a5649e0d379 phy: marvell: comphy: Convert internal SMCC firmware return codes to errno
    b8049438969b misc: rtsx: do not setting OC_POWER_DOWN reg in rtsx_pci_init_ocp()
    ad9ee9ce9d68 openrisc: Fix issue with get_user for 64-bit values
    f594998331bc xen/gntdev.c: Mark pages as dirty
    67e326e4f5df ata: sata_rcar: Fix DMA boundary mask
    f6b94060a123 PM: runtime: Fix timer_expires data type on 32-bit arches
    53faca2f4ca3 serial: pl011: Fix lockdep splat when handling magic-sysrq interrupt
    e3f6c126a3f7 serial: qcom_geni_serial: To correct QUP Version detection logic
    8f924c0a5665 drm/i915/gem: Serialise debugfs i915_gem_objects with ctx->mutex
    241bd102e337 mtd: lpddr: Fix bad logic in print_drs_error
    5868beda60c8 RDMA/addr: Fix race with netevent_callback()/rdma_addr_cancel()
    a8069b80a1fb cxl: Rework error message for incompatible slots
    9f9dc704c8cd p54: avoid accessing the data mapped to streaming DMA
    9f4ef6a90c1b evm: Check size of security.evm before using it
    a42b1273af73 bpf: Fix comment for helper bpf_current_task_under_cgroup()
    07d54b8dc56e fuse: fix page dereference after free
    78453a7dbb1a ata: ahci: mvebu: Make SATA PHY optional for Armada 3720
    4752a1313463 PCI: aardvark: Fix initialization with old Marvell's Arm Trusted Firmware
    b9cc04b049d8 x86/xen: disable Firmware First mode for correctable memory errors
    ea4e8cf5072e x86/traps: Fix #DE Oops message regression
    085f6be2fe88 arch/x86/amd/ibs: Fix re-arming IBS Fetch
    b4818cfc3f9c erofs: avoid duplicated permission check for "trusted." xattrs
    3a9e7db9a40e net: protect tcf_block_unbind with block lock
    af5d5b8afd12 tipc: fix memory leak caused by tipc_buf_append()
    519366f64c27 tcp: Prevent low rmem stalls with SO_RCVLOWAT.
    9ceecfdba701 ravb: Fix bit fields checking in ravb_hwtstamp_get()
    fa67cc69a8c8 r8169: fix issue with forced threading in combination with shared interrupts
    62d9cec6f928 net/sched: act_mpls: Add softdep on mpls_gso.ko
    2bc5d5c373ef net: ipa: command payloads already mapped
    1336d288b353 net: hns3: Clear the CMDQ registers before unmapping BAR region
    7fb8fbceb0e3 netem: fix zero division in tabledist
    25259932e1bb mlxsw: core: Fix memory leak on module removal
    d6f6e3f97885 ibmvnic: fix ibmvnic_set_mac
    4606d3512043 ibmveth: Fix use of ibmveth in a bridge.
    b520e574fdbf gtp: fix an use-before-init in gtp_newlink()
    9921e777a347 cxgb4: set up filter action after rewrites
    b97638e0f3be chelsio/chtls: fix tls record info to user
    eb592f2ae478 chelsio/chtls: fix memory leaks in CPL handlers
    c3208dec446a chelsio/chtls: fix deadlock issue
    b334112f20b7 bnxt_en: Send HWRM_FUNC_RESET fw command unconditionally.
    f739fc7e1072 bnxt_en: Re-write PCI BARs after PCI fatal error.
    7fe9514cfe68 bnxt_en: Invoke cancel_delayed_work_sync() for PFs also.
    bfbbfb501e74 bnxt_en: Fix regression in workqueue cleanup logic in bnxt_remove_one().
    0b17de4d67bf bnxt_en: Check abort error state in bnxt_open_nic().
    c328793e21fb efivarfs: Replace invalid slashes with exclamation marks in dentries.
    61ececc85274 x86/copy_mc: Introduce copy_mc_enhanced_fast_string()
    a092869e0351 x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()
    18703f749e99 x86/PCI: Fix intel_mid_pci.c build error when ACPI is not enabled
    4b0a9591dd78 arm64: link with -z norelro regardless of CONFIG_RELOCATABLE
    dfaa0f7d0832 arm64: Run ARCH_WORKAROUND_2 enabling code on all CPUs
    0ccd5c2c60e0 arm64: Run ARCH_WORKAROUND_1 enabling code on all CPUs
    4720b25e4ca3 fs/kernel_read_file: Remove FIRMWARE_EFI_EMBEDDED enum
    8b23af0ef2f7 efi/arm64: libstub: Deal gracefully with EFI_RNG_PROTOCOL failure
    865013fcf4c3 scripts/setlocalversion: make git describe output more reliable
    6f4c9772e195 io_uring: Convert advanced XArray uses to the normal API
    f7b24bee5e6e io_uring: Fix XArray usage in io_uring_add_task_file
    efce965a49f1 io_uring: Fix use of XArray in __io_uring_files_cancel
    5ee3fea0c227 io_uring: no need to call xa_destroy() on empty xarray
    0ca6ce23f4f6 io-wq: fix use-after-free in io_wq_worker_running
    4863be653425 io_wq: Make io_wqe::lock a raw_spinlock_t
    b6a6d1df552b io_uring: reference ->nsproxy for file table commands
    511abceaf0a0 io_uring: don't rely on weak ->files references
    fdc84c9bf131 io_uring: enable task/files specific overflow flushing
    3de61f9bcc1c io_uring: return cancelation status from poll/timeout/files handlers
    f34e674fbe6d io_uring: unconditionally grab req->task
    bf0305989241 io_uring: stash ctx task reference for SQPOLL
    dd1acc182c85 io_uring: move dropping of files into separate helper
    cecf78cc0890 io_uring: allow timeout/poll/files killing to take task into account
    07463d7da999 io_uring: don't run task work on an exiting task
    6e1f770fbc0a netfilter: nftables_offload: KASAN slab-out-of-bounds Read in nft_flow_rule_create
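
For builds that need to stay on this kernel while validating the update, the 5.8 recipe can be pinned from local.conf; a minimal sketch, assuming the stock linux-yocto provider and recipe names:

    # Append to conf/local.conf: keep virtual/kernel on the 5.8 linux-yocto recipe
    cat >> conf/local.conf <<'EOF'
    PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
    PREFERRED_VERSION_linux-yocto = "5.8%"
    EOF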

(From OE-Core rev: ba9858ac4397958b0e693b687622923266c951c7)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8c81b83bfe7cb870eb12c93d0793cad27d1de162)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Bruce Ashfield
fd3e68b355 linux-yocto/5.8: config cleanup / warnings
Integrating the following commit(s) into linux-yocto/5.8 (a fragment-format sketch follows the list):

    d5ca337b7e9 bsp/mti-malta64: fix warning of CONFIG_SCSI_VIRTIO on qemumips64
    63c7a70c90f net/l2tp.cfg: fix CONFIG_PPPOL2TP mismatched warnings
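
Fragments such as net/l2tp.cfg are plain lists of Kconfig assignments, and the mismatch warning fires when the final .config disagrees with what a fragment requested. A minimal sketch of the fragment format; the exact contents of l2tp.cfg here are an assumption, not the fixed fragment itself:

    # Shape of a kernel-cache configuration fragment (values assumed for illustration)
    cat > net/l2tp.cfg <<'EOF'
    CONFIG_L2TP=m
    CONFIG_PPPOL2TP=m
    EOF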

(From OE-Core rev: f74584cfafccad63967ff8ae63bf3375f5e2c274)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bc51dcff0b23827fc05a6203c889154616f48014)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
678eafa74d linux-yocto/5.4: config cleanup / warnings
Integrating the following commit(s) into linux-yocto/5.4 (a configcheck sketch follows the list):

    eadca496e9f bsp/mti-malta64: fix warning of CONFIG_SCSI_VIRTIO on qemumips64
    203911bc035 net/l2tp.cfg: fix CONFIG_PPPOL2TP mismatched warnings
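
Warnings like these come from the kernel configuration audit, which can be re-run on demand; a sketch, assuming a linux-yocto-based kernel recipe:

    # Force a re-run of the configuration audit to report fragment/.config mismatches
    bitbake linux-yocto -c kernel_configcheck -f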

(From OE-Core rev: 33edfd487088b674b1e512eaa33c43542a9d1441)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e8df0a1f9607417f3f308b9ff852e287837b6cdf)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
c2014927f2 linux-yocto-dev: move to v5.10-rc
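
linux-yocto-dev tracks the development kernel and is opt-in; a minimal sketch of selecting it, assuming the default provider names:

    # Append to conf/local.conf: follow the -dev kernel (v5.10-rc at this point)
    cat >> conf/local.conf <<'EOF'
    PREFERRED_PROVIDER_virtual/kernel = "linux-yocto-dev"
    EOF
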
(From OE-Core rev: a8637f9f52a7541250dce4b1da1676b9894501f2)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a04e56631c4bc7fac58e2f157beea3423195ad8e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
c5b7872dab linux-yocto/5.4: update to v5.4.73
Updating linux-yocto/5.4 to the latest korg -stable release, which comprises
the following commits (a quick verification sketch follows the list):

    bde3f94035b0 Linux 5.4.73
    3c7ccd7d4ace usb: gadget: f_ncm: allow using NCM in SuperSpeed Plus gadgets.
    efb893a56cea eeprom: at25: set minimum read/write access stride to 1
    8011f45598cd usb: cdns3: gadget: free interrupt after gadget has deleted
    ed134662a62b USB: cdc-wdm: Make wdm_flush() interruptible and add wdm_fsync().
    2cc661ab2bde usb: cdc-acm: add quirk to blacklist ETAS ES58X devices
    1d2ce4350a01 tty: serial: fsl_lpuart: fix lpuart32_poll_get_char
    231146202650 tty: serial: lpuart: fix lpuart32_write usage
    a8a4b17bcc9d s390/qeth: don't let HW override the configured port role
    905f0d17a07f net: korina: cast KSEG0 address to pointer in kfree
    9bca56ad2f0a ath10k: check idx validity in __ath10k_htt_rx_ring_fill_n()
    18ec92b1ce29 dmaengine: dw: Activate FIFO-mode for memory peripherals only
    190bce292b73 dmaengine: dw: Add DMA-channels mask cell support
    bc94a025cfd2 scsi: ufs: ufs-qcom: Fix race conditions caused by ufs_qcom_testbus_config()
    e13f0d325a04 usb: core: Solve race condition in anchor cleanup functions
    5912b09c97cd brcm80211: fix possible memleak in brcmf_proto_msgbuf_attach
    36df67bd0097 scsi: smartpqi: Avoid crashing kernel for controller issues
    d00555d2255f ALSA: hda/ca0132 - Add new quirk ID for SoundBlaster AE-7.
    4529f9e5067c ALSA: hda/ca0132 - Add AE-7 microphone selection commands.
    752df39ed6e1 mwifiex: don't call del_timer_sync() on uninitialized timer
    045f29c16fcf reiserfs: Fix memory leak in reiserfs_parse_options()
    109f5845a60f ipvs: Fix uninit-value in do_ip_vs_set_ctl()
    8f8df766f75c Bluetooth: btusb: Fix memleak in btusb_mtk_submit_wmt_recv_urb
    4886c2cf3d91 tty: ipwireless: fix error handling
    e80b7ebcfda7 fbmem: add margin check to fb_check_caps()
    f14811c617b4 scsi: qedi: Fix list_del corruption while removing active I/O
    56b2fd0cbfb0 scsi: qedi: Protect active command list to avoid list corruption
    f8bf0bbee1cc scsi: qedf: Return SUCCESS if stale rport is encountered
    09e4f2271178 HID: ite: Add USB id match for Acer One S1003 keyboard dock
    f3c23dcff8fb Fix use after free in get_capset_info callback.
    a4638768b03d rtl8xxxu: prevent potential memory leak
    d5eb55b5f96f brcmsmac: fix memory leak in wlc_phy_attach_lcnphy
    061d2f3fce45 selftests/bpf: Fix test_sysctl_loop{1, 2} failure due to clang change
    d399015f191b scsi: qla2xxx: Warn if done() or free() are called on an already freed srb
    0bb4a0b5a0ec scsi: ibmvfc: Fix error return in ibmvfc_probe()
    ff9c607f0355 iomap: fix WARN_ON_ONCE() from unprivileged users
    6458e8e8689b drm/msm/a6xx: fix a potential overflow issue
    bab673eef853 Bluetooth: Only mark socket zapped after unlocking
    78a47ef68262 usb: ohci: Default to per-port over-current protection
    df01087859fa xfs: make sure the rt allocator doesn't run off the end
    09b63105d089 opp: Prevent memory leak in dev_pm_opp_attach_genpd()
    6ff3df752c06 reiserfs: only call unlock_new_inode() if I_NEW
    0e3f41b6bec0 misc: rtsx: Fix memory leak in rtsx_pci_probe
    3a8d86d8da1b bpf: Limit caller's stack depth 256 for subprogs with tailcalls
    6c3a1aabfcff drm/panfrost: add amlogic reset quirk callback
    a9990ed2d7ca ath9k: hif_usb: fix race condition between usb_get_urb() and usb_kill_anchored_urbs()
    85b757ca3005 can: flexcan: flexcan_chip_stop(): add error handling and propagate error value
    42e781da7b37 usb: dwc3: simple: add support for Hikey 970
    0e1fb72e27d7 USB: cdc-acm: handle broken union descriptors
    ca4261a249dd rtw88: increse the size of rx buffer size
    41ce99a3ef1a udf: Avoid accessing uninitialized data on failed inode read
    01d886b89eb8 udf: Limit sparing table size
    e9e791f5c39a usb: gadget: function: printer: fix use-after-free in __lock_acquire
    08045050c6bd usb: dwc3: Add splitdisable quirk for Hisilicon Kirin Soc
    821dcabafded misc: vop: add round_up(x,4) for vring_size to avoid kernel panic
    85efddd97b72 mic: vop: copy data to kernel space then write to io memory
    e93b629d347e scsi: target: core: Add CONTROL field for trace events
    7cb5830b775a scsi: mvumi: Fix error return in mvumi_io_attach()
    267edd6478f9 PM: hibernate: remove the bogus call to get_gendisk() in software_resume()
    9ff197703e25 mac80211: handle lack of sband->bitrates in rates
    c8b6ad0a8afb ip_gre: set dev->hard_header_len and dev->needed_headroom properly
    16281bdd202f ntfs: add check for mft record size in superblock
    05f9cc28a954 media: venus: core: Fix runtime PM imbalance in venus_probe
    0ce7ba162b35 fs: dlm: fix configfs memory leak
    ed99b3e5117d media: venus: fixes for list corruption
    4f6af5a3c0f4 media: saa7134: avoid a shift overflow
    cb475ba4400f mmc: sdio: Check for CISTPL_VERS_1 buffer size
    67806a68d52c media: uvcvideo: Ensure all probed info is returned to v4l2
    6827d62a86de x86/mce: Make mce_rdmsrl() panic on an inaccessible MSR
    7aa3f954cd91 media: media/pci: prevent memory leak in bttv_probe
    ad3825eedb16 media: bdisp: Fix runtime PM imbalance on error
    e1285a73c5fa media: platform: sti: hva: Fix runtime PM imbalance on error
    8d727e1d261a media: platform: s3c-camif: Fix runtime PM imbalance on error
    6b3f0742f531 media: vsp1: Fix runtime PM imbalance on error
    7db4c3dfee01 media: exynos4-is: Fix a reference count leak
    f36a80bc7512 media: exynos4-is: Fix a reference count leak due to pm_runtime_get_sync
    8babe11e46ba media: exynos4-is: Fix several reference count leaks due to pm_runtime_get_sync
    62f3bc07008d media: sti: Fix reference count leaks
    e4d4abe6e86f media: st-delta: Fix reference count leak in delta_run_work
    d310c7437cb8 media: ati_remote: sanity check for both endpoints
    b4325c738f8f media: firewire: fix memory leak
    d06ea207e90b x86/mce: Add Skylake quirk for patrol scrub reported errors
    624c2782b49d x86/asm: Replace __force_order with a memory clobber
    fce2779e1c6e crypto: ccp - fix error handling
    b3a0ed411008 block: ratelimit handle_bad_sector() message
    a47cecbd2816 md/bitmap: fix memory leak of temporary bitmap
    44e2bc80a6ec i2c: core: Restore acpi_walk_dep_device_list() getting called after registering the ACPI i2c devs
    f224b8be9e31 perf: correct SNOOPX field offset
    78e27678db4e sched/features: Fix !CONFIG_JUMP_LABEL case
    13153509d8f3 NTB: hw: amd: fix an issue about leak system resources
    abd19984441c nvmet: fix uninitialized work for zero kato
    5ef1279abc74 powerpc/pseries: Avoid using addr_to_pfn in real mode
    72ccbd1481cb powerpc/powernv/dump: Fix race while processing OPAL dump
    d21b8c8fbf89 lightnvm: fix out-of-bounds write to array devices->info[]
    b0b10fa454ea ARM: dts: meson8: remove two invalid interrupt lines from the GPU node
    7de30421d646 arm64: dts: zynqmp: Remove additional compatible string for i2c IPs
    64b8f8fbe939 ARM: OMAP2+: Restore MPU power domain if cpu_cluster_pm_enter() fails
    55a7acbc0495 soc: fsl: qbman: Fix return value on success
    c7ffa707e657 ARM: dts: owl-s500: Fix incorrect PPI interrupt specifiers
    d725df0e2bbb arm64: dts: actions: limit address range for pinctrl node
    449ad29d76f7 arm64: dts: renesas: r8a774c0: Fix MSIOF1 DMA channels
    845e4eefd3c4 arm64: dts: renesas: r8a77990: Fix MSIOF1 DMA channels
    b78cdf1b51fc arm64: dts: qcom: msm8916: Fix MDP/DSI interrupts
    1e61c8fda1bb arm64: dts: qcom: pm8916: Remove invalid reg size from wcd_codec
    975dafc038f0 arm64: dts: qcom: msm8916: Remove one more thermal trip point unit name
    08ece4ba2a6e arm64: dts: imx8mq: Add missing interrupts to GPC
    93c3898ee8df memory: fsl-corenet-cf: Fix handling of platform_get_irq() error
    c072b76699a4 memory: omap-gpmc: Fix build error without CONFIG_OF
    afb15453ca4c memory: omap-gpmc: Fix a couple off by ones
    8426055fc960 arm64: dts: allwinner: h5: remove Mali GPU PMU module
    ec65c6a90621 ARM: dts: sun8i: r40: bananapi-m2-ultra: Fix dcdc1 regulator
    46ac92161144 ARM: s3c24xx: fix mmc gpio lookup tables
    e118c1527ffe ARM: at91: pm: of_node_put() after its usage
    5c4c2f437cea ARM: dts: imx6sl: fix rng node
    c1430c876984 arm64: dts: meson: vim3: correct led polarity
    6dbdc81b2625 netfilter: nf_fwd_netdev: clear timestamp in forwarding path
    2f3839075a5f netfilter: ebtables: Fixes dropping of small packets in bridge nat
    4d1eec59628c netfilter: conntrack: connection timeout after re-register
    e6b7b40aced7 scsi: bfa: Fix error return in bfad_pci_init()
    48df327e4b04 KVM: x86: emulating RDPID failure shall return #UD rather than #GP
    ad87f31648ab Input: sun4i-ps2 - fix handling of platform_get_irq() error
    cb3b77359a26 Input: twl4030_keypad - fix handling of platform_get_irq() error
    2f967303cbdd Input: omap4-keypad - fix handling of platform_get_irq() error
    2106d1cbe1c2 Input: ep93xx_keypad - fix handling of platform_get_irq() error
    b205eef76388 Input: stmfts - fix a & vs && typo
    81e5e2c268e9 Input: imx6ul_tsc - clean up some errors in imx6ul_tsc_resume()
    6498597aeb4c SUNRPC: fix copying of multiple pages in gss_read_proxy_verf()
    e412625f38a4 clk: imx8mq: Fix usdhc parents order
    b4035b3d64b6 vfio iommu type1: Fix memory leak in vfio_iommu_type1_pin_pages
    f54d8a9e37b0 vfio/pci: Clear token on bypass registration failure
    f2f616f3e333 ext4: limit entries returned when counting fsmap records
    9c27185e12e8 svcrdma: fix bounce buffers for unaligned offsets and multiple pages
    120222811b2e watchdog: sp5100: Fix definition of EFCH_PM_DECODEEN3
    dbb9ef17777e watchdog: Use put_device on error
    a8bbb47d94af watchdog: Fix memleak in watchdog_cdev_register
    9a3ee7177f72 clk: bcm2835: add missing release if devm_clk_hw_register fails
    c10e3c919a69 clk: at91: clk-main: update key before writing AT91_CKGR_MOR
    1ed7508e684e module: statically initialize init section freeing data
    b213999028e6 clk: mediatek: add UART0 clock support
    56e68e2cd8fe clk: rockchip: Initialize hw to error to avoid undefined behavior
    72407e5aa058 pwm: img: Fix null pointer access in probe
    7e5155fdd061 clk: keystone: sci-clk: fix parsing assigned-clock data during probe
    5b8882b53b0c clk: qcom: gcc-sdm660: Fix wrong parent_map
    fddcf515454e vfio/pci: Decouple PCI_COMMAND_MEMORY bit checks from is_virtfn
    42f16b3add6c PCI/IOV: Mark VFs as not implementing PCI_COMMAND_MEMORY
    aafa4b4c38e8 rpmsg: smd: Fix a kobj leak in in qcom_smd_parse_edge()
    833f3c362f63 PCI: iproc: Set affinity mask on MSI interrupts
    bcb9394accb6 PCI: aardvark: Check for errors from pci_bridge_emul_init() call
    bf65e6c51ac4 clk: meson: g12a: mark fclk_div2 as critical
    423e65dcd594 i2c: rcar: Auto select RESET_CONTROLLER
    63bd88ba8865 mailbox: avoid timer start from callback
    fe1936208e3f rapidio: fix the missed put_device() for rio_mport_add_riodev
    bfab0711eb27 rapidio: fix error handling path
    c5df8ff043c3 ramfs: fix nommu mmap with gaps in the page cache
    410f50b41c14 lib/crc32.c: fix trivial typo in preprocessor condition
    a3a45516c70e mm/page_owner: change split_page_owner to take a count
    06727f797f45 RDMA/rxe: Handle skb_clone() failure in rxe_recv.c
    6fa4d484bada f2fs: wait for sysfs kobject removal before freeing f2fs_sb_info
    f08ae0c46198 selftests/powerpc: Fix eeh-basic.sh exit codes
    180cf2e5f722 maiblox: mediatek: Fix handling of platform_get_irq() error
    e7f0b9ab8b7d RDMA/rxe: Fix skb lifetime in rxe_rcv_mcast_pkt()
    7efb373881f7 IB/rdmavt: Fix sizeof mismatch
    bc2cba6b2d5a cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_reboot_notifier
    56c30ffe5fcd i3c: master: Fix error return in cdns_i3c_master_probe()
    ebe1a014d7ed powerpc/perf/hv-gpci: Fix starting index value
    271e53005a26 powerpc/perf: Exclude pmc5/6 from the irrelevant PMU group constraints
    dc1d4c658b9c RDMA/ipoib: Set rtnl_link_ops for ipoib interfaces
    c3a1c7b426b9 overflow: Include header file with SIZE_MAX declaration
    de47278648aa kdb: Fix pager search for multi-line strings
    626e2200f80b mtd: spinand: gigadevice: Add QE Bit
    8999f59944e3 mtd: spinand: gigadevice: Only one dummy byte in QUADIO
    2bb74bc921e0 mtd: rawnand: vf610: disable clk on error handling path in probe
    5e3782b1fae1 RDMA/hns: Fix missing sq_sig_type when querying QP
    eff57fbc2377 RDMA/hns: Fix the wrong value of rnr_retry when querying qp
    1e583b2948ae perf stat: Skip duration_time in setup_system_wide
    b79dd191680f i40iw: Add support to make destroy QP synchronous
    61ad14e24eba RDMA/mlx5: Disable IB_DEVICE_MEM_MGT_EXTENSIONS if IB_WR_REG_MR can't work
    4b1d559cc5c6 RDMA/hns: Set the unsupported wr opcode
    0ff75bfed10d perf intel-pt: Fix "context_switch event has no tid" error
    cee5080a0776 RDMA/cma: Consolidate the destruction of a cma_multicast in one place
    7c4fec28980d RDMA/cma: Remove dead code for kernel rdmacm multicast
    557c184df3c5 powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm
    148d4f4dc75e powerpc/tau: Disable TAU between measurements
    72407b8d08b3 powerpc/tau: Check processor type before enabling TAU interrupt
    68a8ec0b022f powerpc/tau: Remove duplicated set_thresholds() call
    c0578b423b5e powerpc/tau: Convert from timer to workqueue
    0305488040dc powerpc/tau: Use appropriate temperature sample interval
    a2087c04a2ac powerpc/book3s64/hash/4k: Support large linear mapping range with 4K
    8fd3154eb0ee RDMA/qedr: Fix inline size returned for iWARP
    97336c8296b5 RDMA/qedr: Fix return code if accept is called on a destroyed qp
    4c5f385ab49e RDMA/qedr: Fix use of uninitialized field
    e0a970d8f627 RDMA/qedr: Fix qp structure memory leak
    1738b03e34ad RDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz()
    85e40ba1c4a5 RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary
    b1712ec30dfb xfs: fix high key handling in the rt allocator's query_range function
    b005b448daf2 xfs: fix deadlock and streamline xfs_getfsmap performance
    adc3e2698637 xfs: limit entries returned when counting fsmap records
    2577720d35e2 ida: Free allocated bitmap in error path
    3789f5cfd600 arc: plat-hsdk: fix kconfig dependency warning when !RESET_CONTROLLER
    67c2e58b684e ARM: 9007/1: l2c: fix prefetch bits init in L2X0_AUX_CTRL using DT values
    baa7ea082f8e mtd: mtdoops: Don't write panic data twice
    b8d4f65c6ae2 RDMA/mlx5: Fix potential race between destroy and CQE poll
    935950e3190d pseries/drmem: don't cache node id in drmem_lmb struct
    eb327e98631e powerpc/pseries: explicitly reschedule during drmem_lmb list traversal
    937cdcc45aaa RDMA/umem: Fix signature of stub ib_umem_find_best_pgsz()
    a43f936da88f RDMA/hns: Add a check for current state before modifying QP
    4a5aaa1747a3 mtd: lpddr: fix excessive stack usage with clang
    1564884a4176 RDMA/ucma: Add missing locking around rdma_leave_multicast()
    cc8ebd76b10a RDMA/ucma: Fix locking for ctx->events_reported
    22d8bebf634a powerpc/icp-hv: Fix missing of_node_put() in success path
    d2575bf27279 powerpc/pseries: Fix missing of_node_put() in rng_init()
    4f74f179a335 IB/mlx4: Adjust delayed work when a dup is observed
    1fe669e9ad19 IB/mlx4: Fix starvation in paravirt mux/demux
    8d44d75812cf i3c: master add i3c_master_attach_boardinfo to preserve boardinfo
    e7f826cd20a6 selftests/ftrace: Change synthetic event name for inter-event-combined test
    17ed6448b00c fs: fix NULL dereference due to data race in prepend_path()
    91e4c12a3bf4 mm, oom_adj: don't loop through tasks in __set_oom_adj when not necessary
    9a1656f1d19b mm/memcg: fix device private memcg accounting
    04fabdfcbf5d mm/swapfile.c: fix potential memory leak in sys_swapon
    8194371c4d60 netfilter: nf_log: missing vlan offload tag and proto
    a6aaab712d6a net: korina: fix kfree of rx/tx descriptor array
    76c0e4b2a50f ipvs: clear skb->tstamp in forwarding path
    7c83fe15ecb1 mwifiex: fix double free
    91962ac35b48 platform/x86: mlx-platform: Remove PSU EEPROM configuration
    dddb49f4152a ipmi_si: Fix wrong return value in try_smi_init()
    b2a98fec2d1e scsi: be2iscsi: Fix a theoretical leak in beiscsi_create_eqs()
    9899e57bd714 scsi: target: tcmu: Fix warning: 'page' may be used uninitialized
    2fb431e69ad6 usb: dwc2: Fix INTR OUT transfers in DDMA mode.
    3fed2b5657e4 nl80211: fix non-split wiphy information
    6aa25d03dfb5 usb: gadget: u_ether: enable qmult on SuperSpeed Plus as well
    9af716ed41e4 usb: gadget: f_ncm: fix ncm_bitrate for SuperSpeed and above.
    2f002b5172b2 iwlwifi: mvm: split a print to avoid a WARNING in ROC
    1dbf9d994b12 mfd: sm501: Fix leaks in probe()
    df63949a2750 net: enic: Cure the enic api locking trainwreck
    7c48d6e80e70 iio: adc: stm32-adc: fix runtime autosuspend delay when slow polling
    cbe5109aa47b qtnfmac: fix resource leaks on unsupported iftype error return path
    1d3188378d9b ibmvnic: set up 200GBPS speed
    da012618c502 coresight: etm: perf: Fix warning caused by etm_setup_aux failure
    56365dbb3ec2 nl80211: fix OBSS PD min and max offset validation
    99e8886339fa nvmem: core: fix possibly memleak when use nvmem_cell_info_to_nvmem_cell()
    903bee2ebff1 HID: hid-input: fix stylus battery reporting
    1ad7f52fe668 ASoC: fsl_sai: Instantiate snd_soc_dai_driver
    56c1c45bb82d slimbus: qcom-ngd-ctrl: disable ngd in qmi server down callback
    5bfd32bb16dc slimbus: core: do not enter to clock pause mode in core
    9da3ff3368b7 slimbus: core: check get_addr before removing laddr ida
    b7e2b1fe04bf quota: clear padding in v2r1_mem2diskdqb()
    3fcd75ae29b5 usb: dwc2: Fix parameter type in function pointer prototype
    f70650083b9e ALSA: seq: oss: Avoid mutex lock for a long-time ioctl
    6f04266d084d misc: mic: scif: Fix error handling path
    a7bf4cf31f57 dmaengine: dmatest: Check list for emptiness before access its last entry
    4ca39ef88adc ath6kl: wmi: prevent a shift wrapping bug in ath6kl_wmi_delete_pstream_cmd()
    572a7d15f2d1 spi: omap2-mcspi: Improve performance waiting for CHSTAT
    98d0b2742fe0 net: dsa: rtl8366rb: Support all 4096 VLANs
    06ba92787790 ASoC: tlv320aic32x4: Fix bdiv clock rate derivation
    0f5203a88ca4 net: wilc1000: clean up resource in error path of init mon interface
    26751638ff09 net: dsa: rtl8366: Skip PVID setting if not requested
    11064fef1bb1 net: dsa: rtl8366: Refactor VLAN/PVID init
    09cb271bcbde net: dsa: rtl8366: Check validity of passed VLANs
    714ca2d03282 xhci: don't create endpoint debugfs entry before ring buffer is set.
    1a31fa71d979 coresight: etm4x: Handle unreachable sink in perf mode
    ed8b90d303cf drm: mxsfb: check framebuffer pitch
    c8bc46fc01e4 cpufreq: armada-37xx: Add missing MODULE_DEVICE_TABLE
    1122f2a7833c net: stmmac: use netif_tx_start|stop_all_queues() function
    148b49be7277 scsi: mpt3sas: Fix sync irqs
    e757a39c2d84 net/mlx5: Don't call timecounter cyc2time directly from 1PPS flow
    50185a14fe8e pinctrl: mcp23s08: Fix mcp23x17 precious range
    5e829cdd6d62 pinctrl: mcp23s08: Fix mcp23x17_regmap initialiser
    44a83bd3243b iomap: Clear page error before beginning a write
    82ef2b6a9b6c drm/panfrost: Ensure GPU quirks are always initialised
    a74f0f0a6265 drm/msm: Avoid div-by-zero in dpu_crtc_atomic_check()
    02bf8fbfb445 HID: roccat: add bounds checking in kone_sysfs_write_settings()
    4d861784f0eb ASoC: fsl: imx-es8328: add missing put_device() call in imx_es8328_probe()
    23159b4375a4 video: fbdev: radeon: Fix memleak in radeonfb_pci_register
    2370d94aed41 video: fbdev: sis: fix null ptr dereference
    67e65396cd56 video: fbdev: vga16fb: fix setting of pixclock because a pass-by-value error
    be700c52ae00 drivers/virt/fsl_hypervisor: Fix error handling path
    bf12e769ff2a pwm: lpss: Add range limit check for the base_unit register value
    34f326e702fd pwm: lpss: Fix off by one error in base_unit math in pwm_lpss_prepare()
    2b6fb30cb49d pty: do tty_flip_buffer_push without port->lock in pty_write
    bf94a8754f2a tty: hvcs: Don't NULL tty->driver_data until hvcs_cleanup()
    f3f79d92ca71 tty: serial: earlycon dependency
    2b150aa2e3ef binder: Remove bogus warning on failed same-process transaction
    48c121a74fb6 drm/crc-debugfs: Fix memleak in crc_control_write
    751c4cf0ee62 drm: panel: Fix bpc for OrtusTech COM43H4M85ULC panel
    d911c0e9fcf0 mm/error_inject: Fix allow_error_inject function signatures.
    ebc1d548a729 VMCI: check return value of get_user_pages_fast() for errors
    659da2df0c5d staging: emxx_udc: Fix passing of NULL to dma_alloc_coherent()
    f87f0236bdbb backlight: sky81452-backlight: Fix refcount imbalance on error
    517f0785cef9 scsi: csiostor: Fix wrong return value in csio_hw_prep_fw()
    a28b846431c6 scsi: qla2xxx: Fix wrong return value in qla_nvme_register_hba()
    835e3a595aa3 scsi: qla2xxx: Fix wrong return value in qlt_chk_unresolv_exchg()
    49fc81280f83 scsi: qla4xxx: Fix an error handling path in 'qla4xxx_get_host_stats()'
    58826ecb7385 drm/gma500: fix error check
    84b79c485356 staging: rtl8192u: Do not use GFP_KERNEL in atomic context
    dc432c231f4a mwifiex: Do not use GFP_KERNEL in atomic context
    7bf50ff5a32c brcmfmac: check ndev pointer
    eb4bb7e520a7 ASoC: qcom: lpass-cpu: fix concurrency issue
    cab19b7f827b ASoC: qcom: lpass-platform: fix memory leak
    0627ae9be941 wcn36xx: Fix reported 802.11n rx_highest rate wcn3660/wcn3680
    a3cf5b3ad12d ath10k: Fix the size used in a 'dma_free_coherent()' call in an error handling path
    9981ef0f9cfa ath9k: Fix potential out of bounds in ath9k_htc_txcompletion_cb()
    80ff60f046f4 ath6kl: prevent potential array overflow in ath6kl_add_new_sta()
    e2a1b94f7fd2 drm: panel: Fix bus format for OrtusTech COM43H4M85ULC panel
    0a5630dee31f drm/amd/display: Fix wrong return value in dm_update_plane_state()
    0d234d1135dc Bluetooth: hci_uart: Cancel init work before unregistering
    e99958ec096b drm/vkms: fix xrgb on compute crc
    0ae399b5da2a ath10k: provide survey info as accumulated data
    450d03435ca9 blk-mq: move cancel of hctx->run_work to the front of blk_exit_queue
    96bc5e4cb4c8 spi: spi-s3c64xx: Check return values
    a053db13b3e6 spi: spi-s3c64xx: swap s3c64xx_spi_set_cs() and s3c64xx_enable_datapath()
    fcf7bf406590 pinctrl: bcm: fix kconfig dependency warning when !GPIOLIB
    0120ec32a777 regulator: resolve supply after creating regulator
    cd68531d2981 media: ti-vpe: Fix a missing check and reference count leak
    5c4ffc07f92e media: stm32-dcmi: Fix a reference count leak
    a05590cc08e3 media: s5p-mfc: Fix a reference count leak
    0747ff17aa6c media: camss: Fix a reference count leak.
    28b21e02dce9 media: platform: fcp: Fix a reference count leak.
    4e954d4dea1e media: rockchip/rga: Fix a reference count leak.
    aa60f4ad0707 media: rcar-vin: Fix a reference count leak.
    55d01160af68 media: tc358743: cleanup tc358743_cec_isr
    de566409e3ad media: tc358743: initialize variable
    3c66762f0c64 media: mx2_emmaprp: Fix memleak in emmaprp_probe
    7fb271426a70 cypto: mediatek - fix leaks in mtk_desc_ring_alloc
    cc0f25040972 hwmon: (pmbus/max34440) Fix status register reads for MAX344{51,60,61}
    90e8f87c0b25 crypto: omap-sham - fix digcnt register handling with export/import
    0db26c777a25 media: rcar-csi2: Allocate v4l2_async_subdev dynamically
    7906b7a7ce1d media: rcar_drif: Allocate v4l2_async_subdev dynamically
    58e2bcb7fa43 media: rcar_drif: Fix fwnode reference leak when parsing DT
    79ec0578c7e0 media: i2c: ov5640: Enable data pins on poweron for DVP mode
    b2f8546056b3 media: i2c: ov5640: Separate out mipi configuration from s_power
    b9ccea540564 media: i2c: ov5640: Remain in power down for DVP mode unless streaming
    8409370ae02e media: omap3isp: Fix memleak in isp_probe
    79a41d2357c6 media: staging/intel-ipu3: css: Correctly reset some memory
    8bcc5c270771 media: uvcvideo: Silence shift-out-of-bounds warning
    8504250759f4 media: uvcvideo: Set media controller entity functions
    8b426d665a41 media: m5mols: Check function pointer in m5mols_sensor_power
    361a1b76b2d2 media: ov5640: Correct Bit Div register in clock tree diagram
    7052f4c5ab51 media: Revert "media: exynos4-is: Add missed check for pinctrl_lookup_state()"
    c6243d107c32 media: tuner-simple: fix regression in simple_set_radio_freq
    ac36f94d34df crypto: picoxcell - Fix potential race condition bug
    71444295839c crypto: ixp4xx - Fix the size used in a 'dma_free_coherent()' call
    3dd9ffbb6eda crypto: mediatek - Fix wrong return value in mtk_desc_ring_alloc()
    528acbf310ff crypto: algif_skcipher - EBUSY on aio should be an error
    d6623eea9abb x86/events/amd/iommu: Fix sizeof mismatch
    200f13d0d9a1 x86/nmi: Fix nmi_handle() duration miscalculation
    b257bb437dc3 perf/x86/intel/uncore: Reduce the number of CBOX counters
    e089a75b7786 perf/x86/intel/uncore: Update Ice Lake uncore units
    cfa97676cb44 sched/fair: Fix wrong cpu selecting from isolated domain
    500a98894821 drivers/perf: thunderx2_pmu: Fix memory resource error handling
    1731c693a62c drivers/perf: xgene_pmu: Fix uninitialized resource struct
    7e297c83e64d x86/fpu: Allow multiple bits in clearcpuid= parameter
    ab6bb1c1f1de perf/x86/intel/ds: Fix x86_pmu_stop warning for large PEBS
    9aee8216556e EDAC/ti: Fix handling of platform_get_irq() error
    64a9f5a30fbb EDAC/aspeed: Fix handling of platform_get_irq() error
    4d86328e42c3 EDAC/i5100: Fix error handling order in i5100_init_one()
    24543df3f491 crypto: caam/qi - add fallback for XTS with more than 8B IV
    66ec3755f791 crypto: algif_aead - Do not set MAY_BACKLOG on the async path
    68e3b25444cb ima: Don't ignore errors from crypto_shash_update()
    4a62024168c3 KVM: SVM: Initialize prev_ga_tag before use
    39ba2b6c3d11 KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
    413aeed19567 KVM: nVMX: Reload vmcs01 if getting vmcs12's pages fails
    f9ac2036344a KVM: nVMX: Reset the segment cache when stuffing guest segs
    a5513655cfee SMB3: Resolve data corruption of TCP server info fields
    aeaa30720d67 cifs: Return the error from crypt_message when enc/dec key not found.
    65604f3ea2f2 cifs: remove bogus debug code
    706538edacc6 ALSA: hda/realtek: Enable audio jacks of ASUS D700SA with ALC887
    5e19bf634c92 ALSA: hda/realtek - Add mute Led support for HP Elitebook 845 G7
    995a90e70429 ALSA: hda/realtek - set mic to auto detect on a HP AIO machine
    a40f49438a15 ALSA: hda/realtek - The front Mic on a HP machine doesn't work
    8df0ffe2f32c icmp: randomize the global rate limiter
    9fa95d101caf tcp: fix to update snd_wl1 in bulk receiver fast path
    c5e4e010f39e selftests: rtnetlink: load fou module for kci_test_encap_fou() test
    6f7c40767bf4 selftests: forwarding: Add missing 'rp_filter' configuration
    f93a27b0f301 r8169: fix operation under forced interrupt threading
    68db21094ee5 nfc: Ensure presence of NFC_ATTR_FIRMWARE_NAME attribute in nfc_genl_fw_download()
    2f58abe7708a nexthop: Fix performance regression in nexthop deletion
    d6d478290815 net/sched: act_tunnel_key: fix OOB write in case of IPv6 ERSPAN tunnels
    09ea22aa3681 net: Properly typecast int values to set sk_max_pacing_rate
    432336b3cf2a net: hdlc_raw_eth: Clear the IFF_TX_SKB_SHARING flag after calling ether_setup
    62d366f8e570 net: hdlc: In hdlc_rcv, check to make sure dev is an HDLC device
    1a3c8d6acbfc net: ftgmac100: Fix Aspeed ast2600 TX hang issue
    7a6a016c5281 ibmvnic: save changed mac address to adapter->mac_addr
    416eec363622 chelsio/chtls: correct function return and return type
    15110ce6e26f chelsio/chtls: correct netdevice for vlan interface
    fe97af291fee chelsio/chtls: fix socket lock
    750e81e2dbc0 nvme-pci: disable the write zeros command for Intel 600P/P3100
    a86bf1d8b19c ALSA: hda/hdmi: fix incorrect locking in hdmi_pcm_close
    17784cec2da4 ALSA: hda: fix jack detection with Realtek codecs when in D3
    8bedcbceaaa3 ALSA: bebob: potential info leak in hwdep_read()
    401d4d79a8ed binder: fix UAF when releasing todo list
    711c0471ef17 cxgb4: handle 4-tuple PEDIT to NAT mode translation
    5f269cb9e513 r8169: fix data corruption issue on RTL8402
    c5b868eecb4f net_sched: remove a redundant goto chain check
    ba05057bd056 net/ipv4: always honour route mtu during forwarding
    46a55a44cc75 net: j1939: j1939_session_fresh_new(): fix missing initialization of skbcnt
    25bd9ea1ae5b can: j1935: j1939_tp_tx_dat_new(): fix missing initialization of skbcnt
    b0342b87cad8 can: m_can_platform: don't call m_can_class_suspend in runtime suspend
    c4099221dbc0 socket: fix option SO_TIMESTAMPING_NEW
    7d31e5722cbf tipc: fix the skb_unshare() in tipc_buf_append()
    dd3f58f499d0 net: usb: qmi_wwan: add Cellient MPL200 card
    65033e39f728 net/tls: sendfile fails with ktls offload
    926210cd8158 net/smc: fix valid DMBE buffer sizes
    cdd3c52a983e net: fix pos incrementment in ipv6_route_seq_next
    f08752a4498b net: fec: Fix PHY init after phy_reset_after_clk_enable()
    9e70485b40c8 net: fec: Fix phy_device lookup for phy_reset_after_clk_enable()
    0b41975f7b78 mlx4: handle non-napi callers to napi_poll
    3392c9d8f9aa ipv4: Restore flowi4_oif update before call to xfrm_lookup_route
    b7d2587f726a ibmveth: Identify ingress large send packets.
    b809bead48a3 ibmveth: Switch order of ibmveth_helper calls.
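
A quick way to confirm the bump landed in a build is to inspect the recipe's expanded metadata; a sketch, assuming a configured build directory:

    # Show the kernel version the linux-yocto recipe now advertises
    bitbake -e linux-yocto | grep '^LINUX_VERSION='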

(From OE-Core rev: 914263fa624e6cce8580ba2c0a2dc7b903a3e9df)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 13cc1130b778f60330534804153abef4c4833ea4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
2691a54e91 linux-yocto/5.8: update to v5.8.17
Updating linux-yocto/5.8 to the latest korg -stable release that comprises
the following commits:

    33156ccb29d9 Linux 5.8.17
    05981710aa5e usb: gadget: f_ncm: allow using NCM in SuperSpeed Plus gadgets.
    5a30d4a5afcc eeprom: at25: set minimum read/write access stride to 1
    d33abbe3b327 usb: cdns3: gadget: free interrupt after gadget has deleted
    5a118fc75b65 USB: cdc-wdm: Make wdm_flush() interruptible and add wdm_fsync().
    2e1905ce84a1 usb: cdc-acm: add quirk to blacklist ETAS ES58X devices
    3f7ebf3355ac usb: gadget: bcm63xx_udc: fix up the error of undeclared usb_debug_root
    3d53646d781b tty: serial: fsl_lpuart: fix lpuart32_poll_get_char
    40254b8d0f8b tty: serial: lpuart: fix lpuart32_write usage
    6a8a92d5770b s390/qeth: don't let HW override the configured port role
    941895dc705d net: korina: cast KSEG0 address to pointer in kfree
    574079593732 ath10k: check idx validity in __ath10k_htt_rx_ring_fill_n()
    f8ea12647fa6 dmaengine: dw: Activate FIFO-mode for memory peripherals only
    e106dc6c4c4d dmaengine: dw: Add DMA-channels mask cell support
    b6dead6f20e9 drm/amd/display: Screen corruption on dual displays (DP+USB-C)
    0666c173a061 scsi: ufs: ufs-qcom: Fix race conditions caused by ufs_qcom_testbus_config()
    4360db24d35a usb: core: Solve race condition in anchor cleanup functions
    19bcbc2ee12f brcm80211: fix possible memleak in brcmf_proto_msgbuf_attach
    044d8bfb9028 scsi: smartpqi: Avoid crashing kernel for controller issues
    651984d53d54 ASoC: Intel: sof_rt5682: override quirk data for tgl_max98373_rt5682
    85f1ad8c8644 ASoC: SOF: Add topology filename override based on dmi data match
    54e4b6262ca7 ALSA: hda/ca0132 - Add new quirk ID for SoundBlaster AE-7.
    4597e6f214c1 ALSA: hda/ca0132 - Add AE-7 microphone selection commands.
    5fa4faf96e44 mwifiex: don't call del_timer_sync() on uninitialized timer
    047a51bba8dc s390/qeth: strictly order bridge address events
    a527bf9df3af reiserfs: Fix memory leak in reiserfs_parse_options()
    72720eaa6c33 ipvs: Fix uninit-value in do_ip_vs_set_ctl()
    2e2b67844504 Bluetooth: btusb: Fix memleak in btusb_mtk_submit_wmt_recv_urb
    97811d992adb tty: ipwireless: fix error handling
    ffe1b711045f fbmem: add margin check to fb_check_caps()
    98d29fc2c451 scsi: qedi: Fix list_del corruption while removing active I/O
    ee3fc1103a40 scsi: qedi: Protect active command list to avoid list corruption
    5bbd0a791b7c scsi: qedi: Mark all connections for recovery on link down event
    95d42ebebc2c scsi: qedf: Return SUCCESS if stale rport is encountered
    3f07687e959e HID: ite: Add USB id match for Acer One S1003 keyboard dock
    0c1943f203c2 Fix use after free in get_capset_info callback.
    4d779accb71b rtl8xxxu: prevent potential memory leak
    437ee0e6c677 brcmsmac: fix memory leak in wlc_phy_attach_lcnphy
    445359b32632 selftests/bpf: Fix test_sysctl_loop{1, 2} failure due to clang change
    5ecc5ea6e1a7 scsi: qla2xxx: Warn if done() or free() are called on an already freed srb
    d6447b6646ef scsi: ibmvfc: Fix error return in ibmvfc_probe()
    458a89fa9015 iomap: fix WARN_ON_ONCE() from unprivileged users
    e653923ad7f1 drm/msm/a6xx: fix a potential overflow issue
    1d8181746a36 Bluetooth: Only mark socket zapped after unlocking
    76925b9ea722 drm: fix double free for gbo in drm_gem_vram_init and drm_gem_vram_create
    c64d4179f8ae usb: ohci: Default to per-port over-current protection
    0c0476d096d6 xfs: make sure the rt allocator doesn't run off the end
    0c35ab58c587 opp: Prevent memory leak in dev_pm_opp_attach_genpd()
    c31de74b342a reiserfs: only call unlock_new_inode() if I_NEW
    af90d9faf01a misc: rtsx: Fix memory leak in rtsx_pci_probe
    7a40d2814425 bpf: Limit caller's stack depth 256 for subprogs with tailcalls
    cc618717afdd drm/panfrost: add support for vendor quirk
    c246a3325c75 drm/panfrost: add amlogic reset quirk callback
    8159f330f25e drm/panfrost: add Amlogic GPU integration quirks
    7f5972267295 ath9k: hif_usb: fix race condition between usb_get_urb() and usb_kill_anchored_urbs()
    8951e760c038 HID: multitouch: Lenovo X1 Tablet Gen3 trackpoint and buttons
    3eb0b62e57c3 can: flexcan: flexcan_chip_stop(): add error handling and propagate error value
    5d2dd06ad8db habanalabs: cast to u64 before shift > 31 bits
    375d81cf16bb usb: dwc3: simple: add support for Hikey 970
    c373f8d5098f USB: cdc-acm: handle broken union descriptors
    739048988f1b rtw88: increse the size of rx buffer size
    eacaacfe8bd0 udf: Avoid accessing uninitialized data on failed inode read
    9a3d398af87d udf: Limit sparing table size
    6a71fc5ca9f5 rtw88: pci: Power cycle device during shutdown
    34f026263889 usb: gadget: function: printer: fix use-after-free in __lock_acquire
    b9c15de08dfd usb: dwc3: Add splitdisable quirk for Hisilicon Kirin Soc
    e7eec8654168 misc: vop: add round_up(x,4) for vring_size to avoid kernel panic
    226b5887720b mic: vop: copy data to kernel space then write to io memory
    f96fba04992c scsi: target: core: Add CONTROL field for trace events
    d805c83716ef scsi: mvumi: Fix error return in mvumi_io_attach()
    9f1960911919 PM: hibernate: remove the bogus call to get_gendisk() in software_resume()
    6cc0a248bcfa bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI
    6afdaf29e4c2 libbpf: Close map fd if init map slots failed
    e1ec1c25b00e staging: wfx: fix handling of MMIC error
    858c56fa3741 mac80211: handle lack of sband->bitrates in rates
    148c3d23858d ip_gre: set dev->hard_header_len and dev->needed_headroom properly
    ec23aa8bb0e5 ntfs: add check for mft record size in superblock
    d5772580c109 media: venus: core: Fix runtime PM imbalance in venus_probe
    6ed15eebcb61 media: venus: core: Fix error handling in probe
    91cde7d5aa17 fs: dlm: fix configfs memory leak
    24f924dbf640 media: venus: fixes for list corruption
    6e5fdad5c10f media: atomisp: fix memleak in ia_css_stream_create
    93b6de835777 media: saa7134: avoid a shift overflow
    c0f64a9057e3 mmc: sdio: Check for CISTPL_VERS_1 buffer size
    60e8d95f72b5 media: uvcvideo: Ensure all probed info is returned to v4l2
    5b66aa6f52a1 x86/mce: Make mce_rdmsrl() panic on an inaccessible MSR
    9300f536c77e spi: fsi: Fix clock running too fast
    75d927fc5587 crypto: hisilicon - fixed memory allocation error
    cde267085992 x86/mce: Annotate mce_rd/wrmsrl() with noinstr
    71b3d6794ae7 media: media/pci: prevent memory leak in bttv_probe
    e4f08676d93c media: bdisp: Fix runtime PM imbalance on error
    bad248c1ec53 media: platform: sti: hva: Fix runtime PM imbalance on error
    59eb92867e9c media: platform: s3c-camif: Fix runtime PM imbalance on error
    9fa2286f1925 media: vsp1: Fix runtime PM imbalance on error
    2341407a05ea media: exynos4-is: Fix a reference count leak
    dcc6fbbab0dc media: exynos4-is: Fix a reference count leak due to pm_runtime_get_sync
    e7997018b45d media: exynos4-is: Fix several reference count leaks due to pm_runtime_get_sync
    30f5c4e91d14 media: sti: Fix reference count leaks
    236117a8bf3a media: st-delta: Fix reference count leak in delta_run_work
    fe8798e78292 media: ati_remote: sanity check for both endpoints
    49e06f165b9c media: firewire: fix memory leak
    ba3c07c18034 x86/mce: Add Skylake quirk for patrol scrub reported errors
    8336a00a5f4d x86/asm: Replace __force_order with a memory clobber
    5056a1b3f6fb crypto: ccp - fix error handling
    121ce5e30b64 x86/dumpstack: Fix misleading instruction pointer error message
    6337db2af4d1 block: ratelimit handle_bad_sector() message
    4c4b1a29c3d0 md/bitmap: fix memory leak of temporary bitmap
    44a58dd22c28 i2c: core: Restore acpi_walk_dep_device_list() getting called after registering the ACPI i2c devs
    c1c4b2d0dee1 perf: correct SNOOPX field offset
    c93a8cddf4d2 sched/features: Fix !CONFIG_JUMP_LABEL case
    62bb6c5a3cee ntb: intel: Fix memleak in intel_ntb_pci_probe
    06a3b0080eaa NTB: hw: amd: fix an issue about leak system resources
    990c91c323f3 KVM: ioapic: break infinite recursion on lazy EOI
    959d1d42f0b6 nvmet: fix uninitialized work for zero kato
    05eb719ac46a powerpc/pseries: Avoid using addr_to_pfn in real mode
    1eb1f681057b powerpc/powernv/dump: Fix race while processing OPAL dump
    cd85f97e424b lightnvm: fix out-of-bounds write to array devices->info[]
    bd396a2c1bc9 ARM: dts: meson8: remove two invalid interrupt lines from the GPU node
    68d2900fc0c8 arm64: dts: zynqmp: Remove additional compatible string for i2c IPs
    e1f385dfa255 drm/mediatek: reduce clear event
    632bf6c3b82b soc: mediatek: cmdq: add clear option in cmdq_pkt_wfe api
    fab5aff89c9e ARM: dts: iwg20d-q7-common: Fix touch controller probe failure
    a0b4366823d9 ARM: dts: stm32: Fix DH PDK2 display PWM channel
    abb56e08ed1d ARM: dts: stm32: Swap PHY reset GPIO and TSC2004 IRQ on DHCOM SOM
    937a5596d619 ARM: dts: stm32: Move ethernet PHY into DH SoM DT
    2e7e56a6af3f ARM: dts: stm32: lxa-mc1: Fix kernel warning about PHY delays
    f80f23f39e6b ARM: dts: stm32: Fix sdmmc2 pins on AV96
    1925f1fdf9a6 ARM: OMAP2+: Restore MPU power domain if cpu_cluster_pm_enter() fails
    fdb6b483eaaf soc: fsl: qbman: Fix return value on success
    342c29116aae ARM: dts: owl-s500: Fix incorrect PPI interrupt specifiers
    52c37b7f0e04 arm64: dts: actions: limit address range for pinctrl node
    251ab5b1f8e8 arm64: dts: mt8173: elm: Fix nor_flash node property
    6e4cd77c0235 arm64: dts: renesas: r8a774c0: Fix MSIOF1 DMA channels
    5c91fc9a6d16 arm64: dts: renesas: r8a77990: Fix MSIOF1 DMA channels
    70ca9a567129 dt-bindings: crypto: Specify that allwinner, sun8i-a33-crypto needs reset
    10c78d0a1a2f soc: qcom: apr: Fixup the error displayed on lookup failure
    e8bd4ce4e877 arm64: dts: qcom: msm8916: Fix MDP/DSI interrupts
    26a8ac2d6512 arm64: dts: qcom: pm8916: Remove invalid reg size from wcd_codec
    6747001ebcb5 arm64: dts: qcom: msm8916: Remove one more thermal trip point unit name
    64ca77e846b0 soc: qcom: pdr: Fixup array type of get_domain_list_resp message
    3ca890f0e5d2 arm64: dts: qcom: sc7180: Drop flags on mdss irqs
    d9aa6534e78b arm64: dts: imx8mq: Add missing interrupts to GPC
    6395b7702156 firmware: arm_scmi: Fix NULL pointer dereference in mailbox_chan_free
    afcd57ad541b memory: fsl-corenet-cf: Fix handling of platform_get_irq() error
    244c3ac190e3 arm64: dts: qcom: sc7180: Fix the LLCC base register size
    fe5a0679f7e7 memory: omap-gpmc: Fix build error without CONFIG_OF
    d69ca7a7dfa9 memory: omap-gpmc: Fix a couple off by ones
    cc0820957d0f arm64: dts: allwinner: h5: remove Mali GPU PMU module
    4f9e6b1be196 ARM: dts: sun8i: r40: bananapi-m2-ultra: Fix dcdc1 regulator
    9a3eb126861f ARM: s3c24xx: fix mmc gpio lookup tables
    ea25940ff19f ARM: at91: pm: of_node_put() after its usage
    ba11877a60f2 ARM: dts: imx6sl: fix rng node
    2c9966436d0e arm64: dts: meson: vim3: correct led polarity
    23e1e4451190 soc: xilinx: Fix error code in zynqmp_pm_probe()
    29e043f9016c netfilter: nf_fwd_netdev: clear timestamp in forwarding path
    735b4d75a1c7 netsec: ignore 'phy-mode' device property on ACPI systems
    51ba2945a8ef netfilter: ebtables: Fixes dropping of small packets in bridge nat
    ceb1eb6cbeaf netfilter: conntrack: connection timeout after re-register
    9dd95e294542 arm64: mm: use single quantity to represent the PA to VA translation
    4a0b1d0e70ac scsi: bfa: Fix error return in bfad_pci_init()
    bdde093c81f2 KVM: x86: emulating RDPID failure shall return #UD rather than #GP
    029525c89bf1 Input: sun4i-ps2 - fix handling of platform_get_irq() error
    e186019ad86f Input: twl4030_keypad - fix handling of platform_get_irq() error
    86f11d554a8c Input: omap4-keypad - fix handling of platform_get_irq() error
    d96fc374d241 Input: ep93xx_keypad - fix handling of platform_get_irq() error
    9b9746342d52 Input: stmfts - fix a & vs && typo
    0a721220eada Input: imx6ul_tsc - clean up some errors in imx6ul_tsc_resume()
    61b00bdcd281 Input: elants_i2c - fix typo for an attribute to show calibration count
    f81bd7468e3a platform/chrome: cros_ec_lightbar: Reduce ligthbar get version command
    565697e82267 SUNRPC: fix copying of multiple pages in gss_read_proxy_verf()
    f9fc8ae508e6 clk: imx8mq: Fix usdhc parents order
    7564d5bb2b11 vfio iommu type1: Fix memory leak in vfio_iommu_type1_pin_pages
    4f9ece8b888f vfio/pci: Clear token on bypass registration failure
    6d0590647b75 ext4: limit entries returned when counting fsmap records
    9ede401a6d21 ext4: disallow modifying DAX inode flag if inline_data has been set
    1da9c8a1784b ext4: discard preallocations before releasing group lock
    9cb6c6db999e ext4: fix dead loop in ext4_mb_new_blocks
    e38a4885c98f svcrdma: fix bounce buffers for unaligned offsets and multiple pages
    e8e81bf91992 watchdog: sp5100: Fix definition of EFCH_PM_DECODEEN3
    c3228ef8f8a3 watchdog: Use put_device on error
    f12e9c2f9708 watchdog: Fix memleak in watchdog_cdev_register
    e70232457bf1 kbuild: deb-pkg: do not build linux-headers package if CONFIG_MODULES=n
    9f94507374a3 clk: bcm2835: add missing release if devm_clk_hw_register fails
    2290bfef3bbe clk: at91: clk-main: update key before writing AT91_CKGR_MOR
    963fc20cf561 module: statically initialize init section freeing data
    28270e928bae clk: mediatek: add UART0 clock support
    cab8d1bde580 clk: rockchip: Initialize hw to error to avoid undefined behavior
    b6bd62dc59e7 PCI: hv: Fix hibernation in case interrupts are not re-created
    83cf3166bd72 remoteproc/mediatek: fix null pointer dereference on null scp pointer
    1642d9e7095c pwm: img: Fix null pointer access in probe
    8db3dfe46548 pwm: rockchip: Keep enabled PWMs running while probing
    ec87b61ac31a clk: keystone: sci-clk: fix parsing assigned-clock data during probe
    2e415af55c34 clk: qcom: gcc-sdm660: Fix wrong parent_map
    ed4ce310b712 vfio/type1: fix dirty bitmap calculation in vfio_dma_rw
    01bec5d78c05 vfio: fix a missed vfio group put in vfio_pin_pages
    a1e9faa0d7c5 vfio/pci: Decouple PCI_COMMAND_MEMORY bit checks from is_virtfn
    0cdb91a009fa s390/pci: Mark all VFs as not implementing PCI_COMMAND_MEMORY
    b40bd0d87d1a vfio: add a singleton check for vfio_group_pin_pages
    7e4f15f7c99b PCI/IOV: Mark VFs as not implementing PCI_COMMAND_MEMORY
    167b37558b7f rpmsg: Avoid double-free in mtk_rpmsg_register_device
    ce43542b46a5 rpmsg: smd: Fix a kobj leak in in qcom_smd_parse_edge()
    edd546b3222f PCI: iproc: Set affinity mask on MSI interrupts
    c1e465c1a4dc PCI: aardvark: Check for errors from pci_bridge_emul_init() call
    48cc5b57cc46 PCI: aardvark: Fix compilation on s390
    50c4627222c2 PCI: designware-ep: Fix the Header Type check
    4f515d03d4f9 clk: meson: g12a: mark fclk_div2 as critical
    66a5d399702c i2c: rcar: Auto select RESET_CONTROLLER
    d39ced9254b6 rtc: ds1307: Clear OSF flag on DS1388 when setting time
    5e2918d95f79 clk: meson: axg-audio: separate axg and g12a regmap tables
    0d921fec7e59 mailbox: avoid timer start from callback
    efa544eda19e rapidio: fix the missed put_device() for rio_mport_add_riodev
    8838ee6189c3 rapidio: fix error handling path
    0a80f93ccd61 ramfs: fix nommu mmap with gaps in the page cache
    8cc3277e8e28 lib/crc32.c: fix trivial typo in preprocessor condition
    546f36709441 mm/page_owner: change split_page_owner to take a count
    99d1a5c21305 RDMA/rxe: Handle skb_clone() failure in rxe_recv.c
    ab5faad5bd33 afs: Fix cell removal
    0b6392c7ad1d afs: Fix cell purging with aliases
    e44b8d2aa154 afs: Fix cell refcounting by splitting the usage counter
    45045b6253e9 afs: Fix rapid cell addition/removal by not using RCU on cells tree
    1ad93f42c484 f2fs: wait for sysfs kobject removal before freeing f2fs_sb_info
    a08401b32a3a selftests/powerpc: Fix eeh-basic.sh exit codes
    bb24e3cb31cd perf trace: Fix off by ones in memset() after realloc() in arches using libaudit
    c6a8b7714cd7 maiblox: mediatek: Fix handling of platform_get_irq() error
    66f6ea1e0ed3 um: time-travel: Fix IRQ handling in time_travel_handle_message()
    e3ee6ff237eb um: vector: Use GFP_ATOMIC under spin lock
    fe4b4e47125d f2fs: reject CASEFOLD inode flag without casefold feature
    982f2438ac82 RDMA/rxe: Fix skb lifetime in rxe_rcv_mcast_pkt()
    1407e22fb4ca IB/rdmavt: Fix sizeof mismatch
    aae2a43ace26 cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_reboot_notifier
    a2b19fdbf29b powerpc/papr_scm: Add PAPR command family to pass-through command-set
    0e486cc3f8a2 i3c: master: Fix error return in cdns_i3c_master_probe()
    69a4718cb2bc perf stat: Fix out of bounds CPU map access when handling armv8_pmu events
    a4682cb94495 powerpc/perf/hv-gpci: Fix starting index value
    8d1d0dfb9df8 powerpc/perf: Exclude pmc5/6 from the irrelevant PMU group constraints
    bef320194790 powerpc/64: fix irq replay pt_regs->softe value
    281c47bcad03 powerpc/64: fix irq replay missing preempt
    938e97b946ec RDMA/ipoib: Set rtnl_link_ops for ipoib interfaces
    ea879d9c818e overflow: Include header file with SIZE_MAX declaration
    1519018b8c89 kdb: Fix pager search for multi-line strings
    473fb9250371 mtd: rawnand: ams-delta: Fix non-OF build warning
    dfc293422070 mtd: spinand: gigadevice: Add QE Bit
    ab0328ef3f83 mtd: spinand: gigadevice: Only one dummy byte in QUADIO
    86cb4ae61b64 mtd: rawnand: vf610: disable clk on error handling path in probe
    fbb2d15c177f mtd: rawnand: stm32_fmc2: fix a buffer overflow
    86e185a733a8 mtd: hyperbus: hbmc-am654: Fix direct mapping setup flash access
    3b5f3adce906 RDMA/hns: Fix missing sq_sig_type when querying QP
    69accfaa1033 RDMA/hns: Fix configuration of ack_req_freq in QPC
    d56447a8cdbb RDMA/hns: Fix the wrong value of rnr_retry when querying qp
    42ae1aebaaac RDMA/hns: Solve the overflow of the calc_pg_sz()
    5c80a3655565 RDMA/hns: Add check for the validity of sl configuration
    939faf121632 perf stat: Skip duration_time in setup_system_wide
    45397023c8c2 i40iw: Add support to make destroy QP synchronous
    fd8da32da3ee RDMA/mlx5: Disable IB_DEVICE_MEM_MGT_EXTENSIONS if IB_WR_REG_MR can't work
    7486a981eb88 RDMA/mlx5: Make mkeys always owned by the kernel's PD when not enabled
    af393dd73c14 RDMA/mlx5: Use set_mkc_access_pd_addr_fields() in reg_create()
    27ca3de942d1 RDMA/hns: Set the unsupported wr opcode
    dc8b27028c1c RDMA/qedr: Fix resource leak in qedr_create_qp
    be825f704b2f perf intel-pt: Fix "context_switch event has no tid" error
    b8d1adbff983 RDMA/cma: Fix use after free race in roce multicast join
    9ef5b6658d6b RDMA/cma: Consolidate the destruction of a cma_multicast in one place
    e3b942c76b24 RDMA/cma: Remove dead code for kernel rdmacm multicast
    7d31a74bcc01 RDMA/cma: Combine cma_ndev_work with cma_work
    d1926d0b50f5 powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm
    95219c4004fd powerpc/kasan: Fix CONFIG_KASAN_VMALLOC for 8xx
    ebeafdd0f221 powerpc/tau: Disable TAU between measurements
    19d39d5d682a powerpc/tau: Check processor type before enabling TAU interrupt
    c348ab2f7276 powerpc/tau: Remove duplicated set_thresholds() call
    b61bb0da35fc powerpc/tau: Convert from timer to workqueue
    d7f12e732190 powerpc/tau: Use appropriate temperature sample interval
    1c441d9aef74 powerpc/book3s64/hash/4k: Support large linear mapping range with 4K
    990cf02eb297 powerpc/watchpoint: Add hw_len wherever missing
    0fea340b870f powerpc/watchpoint: Fix handling of vector instructions
    b99d4986bc69 powerpc/watchpoint: Fix quadword instruction handling on p10 predecessors
    6f64ff9f30d1 powerpc/pseries/svm: Allocate SWIOTLB buffer anywhere in memory
    049ab4efdf9a RDMA/qedr: Fix inline size returned for iWARP
    b1010144c1eb RDMA/qedr: Fix return code if accept is called on a destroyed qp
    b3939bfc71ec RDMA/qedr: Fix use of uninitialized field
    fbe513321c49 RDMA/qedr: Fix doorbell setting
    e947bbb26f70 RDMA/qedr: Fix qp structure memory leak
    10200a0a5d3a RDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz()
    59f07434b297 RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary
    7ac277a01f90 RDMA: Allow fail of destroy CQ
    7802648c1dad RDMA/core: Delete function indirection for alloc/free kernel CQ
    4a8e9dbc7fde RDMA/rtrs-srv: Incorporate ib_register_client into rtrs server init
    929cdbcce02f xfs: fix high key handling in the rt allocator's query_range function
    a6d831917953 nfs: add missing "posix" local_lock constant table definition
    6a5757946685 xfs: fix deadlock and streamline xfs_getfsmap performance
    29eedbf9e39d xfs: limit entries returned when counting fsmap records
    c32adb866dac ida: Free allocated bitmap in error path
    1e84d2a5c113 arc: plat-hsdk: fix kconfig dependency warning when !RESET_CONTROLLER
    bdb0da4659e3 m68knommu: include SDHC support only when hardware has it
    01d89b4a82a4 xfs: fix finobt btree block recovery ordering
    c85d7a847227 ARM: 9007/1: l2c: fix prefetch bits init in L2X0_AUX_CTRL using DT values
    93a6c893c4d6 tools feature: Add missing -lzstd to the fast path feature detection
    26b8aa1bec47 perf tools: Make GTK2 support opt-in
    a3872e54738b mtd: mtdoops: Don't write panic data twice
    0081545c66c1 RDMA/mlx5: Fix potential race between destroy and CQE poll
    2c9da663c149 pseries/drmem: don't cache node id in drmem_lmb struct
    b1cf3e9298de powerpc/pseries: explicitly reschedule during drmem_lmb list traversal
    78805c0d14f5 RDMA/umem: Fix signature of stub ib_umem_find_best_pgsz()
    9f101b8ad2fa RDMA/hns: Add a check for current state before modifying QP
    e91945de1531 mtd: lpddr: fix excessive stack usage with clang
    33c6484d377e RDMA/ucma: Add missing locking around rdma_leave_multicast()
    191627ddc46f RDMA/ucma: Fix locking for ctx->events_reported
    582da8e19991 rcutorture: Properly set rcu_fwds for OOM handling
    11539276e399 rcu/tree: Force quiescent state on callback overload
    3aee0ca521f0 powerpc/icp-hv: Fix missing of_node_put() in success path
    cc86827cef62 powerpc/pseries: Fix missing of_node_put() in rng_init()
    bcbeec5a9a19 IB/mlx4: Adjust delayed work when a dup is observed
    f735c10a4731 IB/mlx4: Fix starvation in paravirt mux/demux
    c5e25cf59765 i3c: master: add i3c_master_attach_boardinfo to preserve boardinfo
    549642f490d2 tracing: Handle synthetic event array field type checking correctly
    826adb405a53 selftests/ftrace: Change synthetic event name for inter-event-combined test
    3b82bd94e0ec fs: fix NULL dereference due to data race in prepend_path()
    7871c282d292 mm, oom_adj: don't loop through tasks in __set_oom_adj when not necessary
    349fc836d5d1 mm/memcg: fix device private memcg accounting
    b9e60476c04f mm/swapfile.c: fix potential memory leak in sys_swapon
    43edc7232737 netfilter: nf_log: missing vlan offload tag and proto
    ebd09f1ad811 net: korina: fix kfree of rx/tx descriptor array
    733dcb4149ff bpf, sockmap: Remove skb_orphan and let normal skb_kfree do cleanup
    4cdfe55c067b ipvs: clear skb->tstamp in forwarding path
    2566242742c9 drm/panfrost: increase readl_relaxed_poll_timeout values
    87ea06ea9f8d mwifiex: fix double free
    a0f38fd8303e platform/x86: mlx-platform: Remove PSU EEPROM configuration
    455ecbd43d3a tracing: Fix parse_synth_field() error handling
    4372729d5201 ipmi_si: Fix wrong return value in try_smi_init()
    caa0fa6b36ca dmaengine: ioat: Allocate correct size for descriptor chunk
    3cdf3cbc3b48 scsi: be2iscsi: Fix a theoretical leak in beiscsi_create_eqs()
    4c35763fbb0c scsi: target: tcmu: Fix warning: 'page' may be used uninitialized
    03504f955527 usb: dwc2: Fix INTR OUT transfers in DDMA mode.
    0ff11535a204 nl80211: fix non-split wiphy information
    cff51e84cb83 ocxl: fix kconfig dependency warning for OCXL
    4a87896b4e91 bus: mhi: core: Fix the building of MHI module
    e44e0bea8b7b usb: gadget: u_ether: enable qmult on SuperSpeed Plus as well
    665ed7027a67 usb: gadget: u_serial: clear suspended flag when disconnecting
    ec69e8c7686b usb: gadget: f_ncm: fix ncm_bitrate for SuperSpeed and above.
    da0922d0f8b5 iwlwifi: dbg: run init_cfg function once per driver load
    2b021c85c224 iwlwifi: dbg: remove no filter condition
    be0f631711f9 iwlwifi: mvm: split a print to avoid a WARNING in ROC
    d97c35bd05dd ASoC: wm_adsp: Pass full name to snd_ctl_notify
    1ab21ba36a84 mfd: sm501: Fix leaks in probe()
    2eb24b3bf835 net: enic: Cure the enic api locking trainwreck
    cd29df4df421 iio: adc: stm32-adc: fix runtime autosuspend delay when slow polling
    5975fa6e0519 iommu/qcom: add missing put_device() call in qcom_iommu_of_xlate()
    a13766e01768 pinctrl: aspeed: Use the right pinconf mask
    a30a515f2773 qtnfmac: fix resource leaks on unsupported iftype error return path
    148a2543ca50 selftests: Remove fmod_ret from test_overhead
    c2ebc88260ff bpf: disallow attaching modify_return tracing functions to other BPF programs
    7c37b28e0b37 ibmvnic: set up 200GBPS speed
    4829beb0ce79 coresight: etm4x: Fix save and restore of TRCVMIDCCTLR1 register
    ccc73e031de6 coresight: cti: Fix bug clearing sysfs links on callback
    79589b73fb25 coresight: cti: Fix remove sysfs link error
    9d645e979fdf coresight: etm: perf: Fix warning caused by etm_setup_aux failure
    4d3adf453eec iomap: Use kzalloc to allocate iomap_page
    f5758f108b61 nl80211: fix OBSS PD min and max offset validation
    b6ca9ea12055 hv: clocksource: Add notrace attribute to read_hv_sched_clock_*() functions
    70f1f999e24d nvmem: core: fix possible memleak when using nvmem_cell_info_to_nvmem_cell()
    b21749762534 tty: hvc: fix link error with CONFIG_SERIAL_CORE_CONSOLE=n
    f4e52bc14c84 HID: hid-input: fix stylus battery reporting
    aba2ee9e7425 ASoC: fsl_sai: Instantiate snd_soc_dai_driver
    184c5e17b926 slimbus: qcom-ngd-ctrl: disable ngd in qmi server down callback
    caf464017965 slimbus: core: do not enter clock pause mode in core
    4d11ab5f0904 slimbus: core: check get_addr before removing laddr ida
    9da861400bfd quota: clear padding in v2r1_mem2diskdqb()
    3efc30bcd162 mt76: mt7915: fix possible memory leak in mt7915_mcu_add_beacon
    6f0f3ad5a602 rtw88: Fix potential probe error handling race with wow firmware loading
    762f48374c26 rtw88: Fix probe error handling race with firmware loading
    e611c92ab330 usb: dwc2: Add missing cleanups when usb_add_gadget_udc() fails
    f9a314f5aa59 usb: dwc3: core: Properly default unspecified speed
    0cf8eb3b9858 usb: dwc2: Fix parameter type in function pointer prototype
    21b7dcfbf378 ALSA: seq: oss: Avoid mutex lock for a long-time ioctl
    a0229d675455 misc: mic: scif: Fix error handling path
    3eb24fb8582c ASoC: cros_ec_codec: fix kconfig dependency warning for SND_SOC_CROS_EC_CODEC
    ed848b21eb91 dmaengine: dmatest: Check list for emptiness before access its last entry
    2dbfe8f6b97c phy: rockchip-dphy-rx0: Include linux/delay.h
    e43acbf29d76 drm: rcar-du: Put reference to VSP device
    0e8f4263125f ath6kl: wmi: prevent a shift wrapping bug in ath6kl_wmi_delete_pstream_cmd()
    5569ffd9e497 ath11k: Add checked value for ath11k_ahb_remove
    ec71c634dcbd spi: omap2-mcspi: Improve performance waiting for CHSTAT
    c00cdd1b966a ASoC: tas2770: Fix unbalanced calls to pm_runtime
    46701b00ed9d ASoC: SOF: control: add size checks for ext_bytes control .put()
    e06a18b78b43 net: dsa: rtl8366rb: Support all 4096 VLANs
    a8091e02962a ASoC: tlv320aic32x4: Fix bdiv clock rate derivation
    63ed07138636 ASoC: tas2770: Fix error handling with update_bits
    6ce4b0c4f3d5 ASoC: tas2770: Fix required DT properties in the code
    92cc64394bc9 ASoC: tas2770: Add missing bias level power states
    304c38230dfd ASoC: tas2770: Fix calling reset in probe
    da374cb21045 net: wilc1000: clean up resource in error path of init mon interface
    a74a1c39af96 net: dsa: rtl8366: Skip PVID setting if not requested
    b8d304cdf951 net: dsa: rtl8366: Refactor VLAN/PVID init
    6aa894ff3372 net: dsa: rtl8366: Check validity of passed VLANs
    701c56f56837 xhci: don't create endpoint debugfs entry before ring buffer is set.
    98d66a3bb9c0 selftests/bpf: Fix endianness issue in test_sockopt_sk
    f130c8a0eeac selftests/bpf: Fix endianness issue in sk_assign
    a1aff5c4417e selftests: mptcp: interpret \n as a new line
    6c87ffcb2bff nvmem: core: fix missing of_node_put() in of_nvmem_device_get()
    3a0f17922776 coresight: etm4x: Fix issues on trcseqevr access
    0c97523e87a8 coresight: etm4x: Handle unreachable sink in perf mode
    abea9d776fe9 coresight: cti: Write registers directly in cti_enable_hw()
    3857796b8b49 coresight: etm4x: Fix issues within reset interface of sysfs
    efd00a5ed569 coresight: etm4x: Ensure default perf settings filter user/kernel
    435fd705a501 coresight: cti: remove pm_runtime_get_sync() from CPU hotplug
    0d0d70e1b1da coresight: cti: disclaim device only when it's claimed
    9fe394b41ba6 coresight: fix offset by one error in counting ports
    3c5c980ece55 coresight: etm4x: Fix etm4_count race by moving cpuhp callbacks to init
    8f319155ef51 ASoC: tlv320adcx140: Fix digital gain range
    7d3dcc5d26e1 ASoC: topology: disable size checks for bytes_ext controls if needed
    4a4778394419 ima: Fix NULL pointer dereference in ima_file_hash
    453ed3d7f990 drm: mxsfb: check framebuffer pitch
    dec5fabe7202 cpufreq: armada-37xx: Add missing MODULE_DEVICE_TABLE
    f3ceea270494 xfs: force the log after remapping a synchronous-writes file
    5e78a6fe2d85 net: stmmac: use netif_tx_start|stop_all_queues() function
    be17fb81e944 net: stmmac: Fix incorrect location to set real_num_rx|tx_queues
    f817cdd6d1fd scsi: mpt3sas: Fix sync irqs
    3c33f586d090 net/mlx5: Don't call timecounter cyc2time directly from 1PPS flow
    9ba9292375df net/mlx5: Fix uninitialized variable warning
    b60c22ea6623 drm/msm/adreno: fix probe without iommu
    37c857ec136c pinctrl: devicetree: Keep deferring even on timeout
    151d4913e81e pinctrl: mcp23s08: Fix mcp23x17 precious range
    bbcbd596e676 pinctrl: mcp23s08: Fix mcp23x17_regmap initialiser
    dc7285e0f1f8 Bluetooth: Re-order clearing suspend tasks
    8141ec5a8f5a selftests/lkdtm: Use "comm" instead of "diff" for dmesg
    7c38731efb2f iomap: Mark read blocks uptodate in write_begin
    d69930b3ec0b iomap: Clear page error before beginning a write
    039ee8a6363d drm/panfrost: Ensure GPU quirks are always initialised
    dc48ca171bdc drm/msm: Avoid div-by-zero in dpu_crtc_atomic_check()
    b7d539816d06 HID: roccat: add bounds checking in kone_sysfs_write_settings()
    25529f1f6003 scsi: ufs: ufs-mediatek: Fix HOST_PA_TACTIVATE quirk
    8c230b3b3668 ASoC: fsl: imx-es8328: add missing put_device() call in imx_es8328_probe()
    7a702a885270 video: fbdev: radeon: Fix memleak in radeonfb_pci_register
    53d19f4bb131 video: fbdev: sis: fix null ptr dereference
    33b1e23741cb video: fbdev: vga16fb: fix setting of pixclock because of a pass-by-value error
    d92db965ef66 ath11k: fix a double free and a memory leak
    c7072eda4093 drivers/virt/fsl_hypervisor: Fix error handling path
    38b319133226 pwm: lpss: Add range limit check for the base_unit register value
    25eb525f5bf9 pwm: lpss: Fix off by one error in base_unit math in pwm_lpss_prepare()
    04e819b2f765 pty: do tty_flip_buffer_push without port->lock in pty_write
    2e92899228ae tty: hvcs: Don't NULL tty->driver_data until hvcs_cleanup()
    45f20b6066c3 tty: serial: earlycon dependency
    5ec7b8a3b6e7 binder: Remove bogus warning on failed same-process transaction
    4f40c79cbe72 scsi: ufs: Make ufshcd_print_trs() consider UFSHCD_QUIRK_PRDT_BYTE_GRAN
    6852678afe96 selftests: vm: add fragment CONFIG_GUP_BENCHMARK
    e9f1340193b5 Bluetooth: Clear suspend tasks on unregister
    7a15bd2bae85 drm/crc-debugfs: Fix memleak in crc_control_write
    91c8e9e18580 samples/bpf: Fix to xdpsock to avoid recycling frames
    88b34c076be3 drm: panel: Fix bpc for OrtusTech COM43H4M85ULC panel
    71782955ade1 mm/error_inject: Fix allow_error_inject function signatures.
    9c5e9f50572e VMCI: check return value of get_user_pages_fast() for errors
    2e1356e81edd staging: emxx_udc: Fix passing of NULL to dma_alloc_coherent()
    ad5c72b65770 backlight: sky81452-backlight: Fix refcount imbalance on error
    39d464cdfe30 rtw88: don't treat NULL pointer as an array
    8976b0bf6d8b wilc1000: Fix memleak in wilc_bus_probe
    93feab00afca wilc1000: Fix memleak in wilc_sdio_probe
    2b87f9ce106e libbpf: Fix unintentional success return code in bpf_object__load
    6ff694ac40b9 scsi: csiostor: Fix wrong return value in csio_hw_prep_fw()
    d646554479f3 scsi: qla2xxx: Fix wrong return value in qla_nvme_register_hba()
    7e26ebb1a9d2 scsi: qla2xxx: Fix wrong return value in qlt_chk_unresolv_exchg()
    d1bfd5d44f4b scsi: qla2xxx: Fix the size used in a 'dma_free_coherent()' call
    66deb6aebe10 scsi: qla4xxx: Fix an error handling path in 'qla4xxx_get_host_stats()'
    34b42a17b99f drm/gma500: fix error check
    1b8b0d839d1b selftests/bpf: Fix test_vmlinux test to use bpf_probe_read_user()
    8135d168d84c drm/amd/display: fix potential integer overflow when shifting 32 bit variable bl_pwm
    c2f41d9b1d53 staging: rtl8192u: Do not use GFP_KERNEL in atomic context
    9959c2031233 mwifiex: Do not use GFP_KERNEL in atomic context
    027b25d74ffb brcmfmac: check ndev pointer
    e9e2a870a490 ath11k: Fix possible memleak in ath11k_qmi_init_service
    7d93d871e55b ASoC: qcom: lpass-cpu: fix concurrency issue
    41a33c66b6e6 ASoC: qcom: lpass-platform: fix memory leak
    d981fcece216 wcn36xx: Fix reported 802.11n rx_highest rate wcn3660/wcn3680
    2af670b21911 ath10k: Fix the size used in a 'dma_free_coherent()' call in an error handling path
    ef10e65b3d7e ath9k: Fix potential out of bounds in ath9k_htc_txcompletion_cb()
    7c81b8b6c0b3 ath6kl: prevent potential array overflow in ath6kl_add_new_sta()
    b395ec13f72b drm: panel: Fix bus format for OrtusTech COM43H4M85ULC panel
    31e3c7aefb96 drm/vkms: add missing platform_device_unregister() in vkms_init()
    199cb9d9336f drm/vgem: add missing platform_device_unregister() in vgem_init()
    2723170f9c1b drm/amd/display: Fix wrong return value in dm_update_plane_state()
    3fe978892ab4 Bluetooth: hci_uart: Cancel init work before unregistering
    0775947bf20b drm/vkms: fix xrgb on compute crc
    6a251056d920 ath10k: provide survey info as accumulated data
    1e2be69a0396 blk-mq: move cancel of hctx->run_work to the front of blk_exit_queue
    eb66ae00496f btrfs: add owner and fs_info to alloc_state io_tree
    6cc523c1ba7e hwmon: (bt1-pvt) Wait for the completion with timeout
    82f27fd04df6 hwmon: (bt1-pvt) Cache current update timeout
    f8896b1dc97f hwmon: (bt1-pvt) Test sensor power supply on probe
    283d31599577 spi: spi-s3c64xx: Check return values
    9c27047159fd spi: spi-s3c64xx: swap s3c64xx_spi_set_cs() and s3c64xx_enable_datapath()
    2d92aae41a06 pinctrl: bcm: fix kconfig dependency warning when !GPIOLIB
    96c6b5d57756 regulator: resolve supply after creating regulator
    539f606e1044 media: ti-vpe: Fix a missing check and reference count leak
    36ba112a7c8d media: stm32-dcmi: Fix a reference count leak
    344632d9b782 media: s5p-mfc: Fix a reference count leak
    00eff51ebd27 media: camss: Fix a reference count leak.
    445adb4113e8 media: platform: fcp: Fix a reference count leak.
    34b2032620a3 media: rockchip/rga: Fix a reference count leak.
    96b1dbdb92ad media: rcar-vin: Fix a reference count leak.
    0936f228c185 media: tc358743: cleanup tc358743_cec_isr
    e25e1421396d media: tc358743: initialize variable
    ffa1c6807c37 media: mx2_emmaprp: Fix memleak in emmaprp_probe
    19b283f0b3d4 crypto: sun8i-ce - handle endianness of t_common_ctl
    9748e867ac81 crypto: stm32/crc32 - Avoid lock if hardware is already used
    aee35828de88 crypto: mediatek - fix leaks in mtk_desc_ring_alloc
    abfdbdda990a hwmon: (w83627ehf) Fix a resource leak in probe
    20d16af9c0fb hwmon: (pmbus/max34440) Fix status register reads for MAX344{51,60,61}
    621368b5adfe crypto: omap-sham - fix digcnt register handling with export/import
    71452513b06b spi: dw-pci: free previously allocated IRQs if desc->setup() fails
    31a31b30b0f6 spi: fsi: Implement restricted size for certain controllers
    a2e41e4fcd8e spi: fsi: Fix use of the bneq+ sequencer instruction
    c2177e077841 spi: fsi: Handle 9 to 15 byte transfers lengths
    0f8c1ad5ed8f media: rcar-csi2: Allocate v4l2_async_subdev dynamically
    bd48c278ba33 media: rcar_drif: Allocate v4l2_async_subdev dynamically
    23b043e23923 media: rcar_drif: Fix fwnode reference leak when parsing DT
    c78cc511ff68 media: i2c: ov5640: Enable data pins on poweron for DVP mode
    d1bb697b085a media: i2c: ov5640: Separate out mipi configuration from s_power
    44046ac3fd90 media: i2c: ov5640: Remain in power down for DVP mode unless streaming
    2038c71aeea7 media: omap3isp: Fix memleak in isp_probe
    ae17eb2da566 media: staging/intel-ipu3: css: Correctly reset some memory
    fbd50e6e825f media: uvcvideo: Silence shift-out-of-bounds warning
    3eff11b54bac media: uvcvideo: Set media controller entity functions
    008efc8c2ec0 fscrypt: restrict IV_INO_LBLK_32 to ino_bits <= 32
    38cc20da3fd2 media: m5mols: Check function pointer in m5mols_sensor_power
    6cd272c1b1d3 media: ov5640: Correct Bit Div register in clock tree diagram
    3bc4af05a125 media: hantro: postproc: Fix motion vector space allocation
    841d6b2bb64a media: hantro: h264: Get the correct fallback reference buffer
    b076e6ad0081 media: Revert "media: exynos4-is: Add missed check for pinctrl_lookup_state()"
    2e35f75c9a14 crypto: ccree - fix runtime PM imbalance on error
    707041cc6852 media: tuner-simple: fix regression in simple_set_radio_freq
    1c1e39f91ffe media: vivid: Fix global-out-of-bounds read in precalculate_color()
    0ebbe42a9a4c crypto: picoxcell - Fix potential race condition bug
    5ec044fb819d crypto: ixp4xx - Fix the size used in a 'dma_free_coherent()' call
    df29e4415305 crypto: mediatek - Fix wrong return value in mtk_desc_ring_alloc()
    36c93e69cb80 crypto: algif_skcipher - EBUSY on aio should be an error
    ff57d46f868e perf/core: Fix race in the perf_mmap_close() function
    7e5248ec07bc perf/x86: Fix n_pair for cancelled txn
    2df4319976f9 pinctrl: qcom: Use return value from irq_set_wake() call
    9d371ffd8434 pinctrl: qcom: Set IRQCHIP_SET_TYPE_MASKED and IRQCHIP_MASK_ON_SUSPEND flags
    9a7d327326bd x86/events/amd/iommu: Fix sizeof mismatch
    5fd2c1240d75 x86/nmi: Fix nmi_handle() duration miscalculation
    6f9bc7071b53 perf/x86/intel/uncore: Fix the scale of the IMC free-running events
    32ce27005110 perf/x86/intel/uncore: Reduce the number of CBOX counters
    accdd0292919 perf/x86/intel/uncore: Update Ice Lake uncore units
    140596caef50 arm64: perf: Add missing ISB in armv8pmu_enable_counter()
    4792206af85f sched/fair: Use dst group while checking imbalance for NUMA balancer
    63829cb38a3c sched/fair: Fix wrong cpu selecting from isolated domain
    b75cbad81cfc drivers/perf: thunderx2_pmu: Fix memory resource error handling
    a071f86dd7c4 drivers/perf: xgene_pmu: Fix uninitialized resource struct
    e99cf7b5025a arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
    b45c14f9b0c6 x86/fpu: Allow multiple bits in clearcpuid= parameter
    4f596c780958 perf/x86/intel/ds: Fix x86_pmu_stop warning for large PEBS
    3b172044dc55 EDAC/ti: Fix handling of platform_get_irq() error
    0d0f50ecd85d EDAC/aspeed: Fix handling of platform_get_irq() error
    3a70ad440e20 EDAC/i5100: Fix error handling order in i5100_init_one()
    6411e8ea3086 microblaze: fix kbuild redundant file warning
    1b8e25772d8e sched/fair: Fix wrong negative conversion in find_energy_efficient_cpu()
    03e0226f1cfe RAS/CEC: Fix cec_init() prototype
    19212b1a2be3 crypto: caam/qi - add support for more XTS key lengths
    d0100d71efff crypto: caam/qi - add fallback for XTS with more than 8B IV
    b61aa1de53f4 crypto: algif_aead - Do not set MAY_BACKLOG on the async path
    dd5df0880122 ima: Don't ignore errors from crypto_shash_update()
    ee0e07130bd0 KVM: SVM: Initialize prev_ga_tag before use
    af216a426bcc KVM: x86: Intercept LA57 to inject #GP fault when it's reserved
    f7b5e3c6ab6e KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
    efd21b7274b0 KVM: nVMX: Reload vmcs01 if getting vmcs12's pages fails
    f7421220fd60 KVM: nVMX: Reset the segment cache when stuffing guest segs
    c5ec2a6618d3 KVM: nVMX: Morph notification vector IRQ on nested VM-Enter to pending PI
    dd6120a8e1f3 arm64: Make use of ARCH_WORKAROUND_1 even when KVM is not enabled
    cb6c316cd99a smb3: fix stat when special device file and mounted with modefromsid
    321cf0e88e25 smb3: do not try to cache root directory if dir leases not supported
    dd80b98bdf0a SMB3.1.1: Fix ids returned in POSIX query dir
    2ab6d3b441dd SMB3: Resolve data corruption of TCP server info fields
    55bf111d4e81 cifs: Return the error from crypt_message when enc/dec key not found.
    c5db0e593499 cifs: remove bogus debug code
    2d8b73fc38ae ALSA: hda/realtek: Enable audio jacks of ASUS D700SA with ALC887
    1fb41e21037e ALSA: hda/realtek - Add mute Led support for HP Elitebook 845 G7
    29050421372a ALSA: hda/realtek - set mic to auto detect on a HP AIO machine
    eba61e03eadf ALSA: hda/realtek - The front Mic on a HP machine doesn't work
    383fcddfbcaa ALSA: usb-audio: Line6 Pod Go interface requires static clock rate quirk
    70dcb923cc27 ALSA: hda - Fix the return value if cb func is already registered
    4e3c57b30473 ALSA: hda - Don't register a cb func if it is registered already
    618a54d780a5 net/sched: act_gate: Unlock ->tcfa_lock in tc_setup_flow_action()
    ed2c3b4a04c2 net: ethernet: mtk-star-emac: select REGMAP_MMIO
    9c70b53dda47 tcp: fix to update snd_wl1 in bulk receiver fast path
    e4d5d075c190 selftests: rtnetlink: load fou module for kci_test_encap_fou() test
    8ab1b9ef3974 selftests: forwarding: Add missing 'rp_filter' configuration
    11a3f1f851da r8169: fix operation under forced interrupt threading
    6c9e378d7579 nfc: Ensure presence of NFC_ATTR_FIRMWARE_NAME attribute in nfc_genl_fw_download()
    a81996aa6ee5 nexthop: Fix performance regression in nexthop deletion
    8672e0e1be10 net/sched: act_tunnel_key: fix OOB write in case of IPv6 ERSPAN tunnels
    e5b67266fb48 net/sched: act_ct: Fix adding udp port mangle operation
    f6bb7b012676 net: Properly typecast int values to set sk_max_pacing_rate
    08c6a8c61f9f net: hdlc_raw_eth: Clear the IFF_TX_SKB_SHARING flag after calling ether_setup
    6fe9d5ac3f76 net: hdlc: In hdlc_rcv, check to make sure dev is an HDLC device
    79a5e1726d4f net: ftgmac100: Fix Aspeed ast2600 TX hang issue
    7f0afe20abab mptcp: initialize mptcp_options_received's ahmac
    ec5c9273f731 icmp: randomize the global rate limiter
    ab91b97c5f92 ibmvnic: save changed mac address to adapter->mac_addr
    3f9420b4d3fc chelsio/chtls: fix writing freed memory
    d632d6da9724 chelsio/chtls: correct function return and return type
    ea95811a67e3 chelsio/chtls: Fix panic when listening on multiadapter
    8650467aa359 chelsio/chtls: fix panic when server is on ipv6
    e94a4b48d51b chelsio/chtls: correct netdevice for vlan interface
    958fc22dbc30 chelsio/chtls: fix socket lock
    eb7ee70b9226 tipc: fix incorrect setting window for bcast link
    a52c1d9114f1 tipc: re-configure queue limit for broadcast link
    760295f17597 ALSA: hda/hdmi: fix incorrect locking in hdmi_pcm_close
    2b7a2a0be104 ALSA: hda: fix jack detection with Realtek codecs when in D3
    f4b88ebd9b73 ALSA: bebob: potential info leak in hwdep_read()
    40d4418ea4db binder: fix UAF when releasing todo list
    dd5743391b5e r8169: fix data corruption issue on RTL8402
    7f1b0fa4805c net_sched: remove a redundant goto chain check
    f736e9e2f750 net/ipv4: always honour route mtu during forwarding
    7ef2b9748f88 net: j1939: j1939_session_fresh_new(): fix missing initialization of skbcnt
    3cda27a6e540 can: j1939: j1939_tp_tx_dat_new(): fix missing initialization of skbcnt
    46ebf7a3bdb0 can: m_can_platform: don't call m_can_class_suspend in runtime suspend
    575e9184885b socket: don't clear SOCK_TSTAMP_NEW when SO_TIMESTAMPNS is disabled
    d2bc51dbdecd socket: fix option SO_TIMESTAMPING_NEW
    a7d0ffde99d5 tipc: fix the skb_unshare() in tipc_buf_append()
    83e8af2ee339 net: usb: qmi_wwan: add Cellient MPL200 card
    01630fae60bd net/tls: sendfile fails with ktls offload
    91119131f8a8 net/smc: fix valid DMBE buffer sizes
    c0d0fad9bed7 net/smc: fix use-after-free of delayed events
    5e52ea477365 net: sched: Fix suspicious RCU usage while accessing tcf_tunnel_info
    b91a8c7486a3 net: mptcp: make DACK4/DACK8 usage consistent among all subflows
    a0f063a63afa net: ipa: skip suspend/resume activities if not set up
    8090c13d3e4b net: fix pos incrementment in ipv6_route_seq_next
    f17fe0c1addf net: fec: Fix PHY init after phy_reset_after_clk_enable()
    8a6ab151443c net: fec: Fix phy_device lookup for phy_reset_after_clk_enable()
    d6cc94152da1 net: dsa: microchip: fix race condition
    61d51568e43b mlx4: handle non-napi callers to napi_poll
    8536e300622a ipv4: Restore flowi4_oif update before call to xfrm_lookup_route
    bd0912cd125e ibmveth: Identify ingress large send packets.
    d673d278f59f ibmveth: Switch order of ibmveth_helper calls.
    68e3dec3c3e4 cxgb4: handle 4-tuple PEDIT to NAT mode translation

(From OE-Core rev: 75182dd3db60a78920aaff724f0c71e000a77260)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit eab49834f263a2727fa699050a8d01715f1e9d21)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
e2de476001 linux-yocto/5.4: update to v5.4.72
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    52f6ded2a377 Linux 5.4.72
    865b015e8d41 crypto: qat - check cipher length for aead AES-CBC-HMAC-SHA
    aa1167908ac4 crypto: bcm - Verify GCM/CCM key length in setkey
    564312e08892 xen/events: don't use chip_data for legacy IRQs
    041445d0d577 reiserfs: Fix oops during mount
    046616898a57 reiserfs: Initialize inode keys properly
    22ab9ca024a0 USB: serial: ftdi_sio: add support for FreeCalypso JTAG+UART adapters
    bfb1438e8c15 USB: serial: pl2303: add device-id for HP GC device
    aecf3a1c11dc staging: comedi: check validity of wMaxPacketSize of usb endpoints found
    8aff87284be6 USB: serial: option: Add Telit FT980-KS composition
    3c3eb734ef1f USB: serial: option: add Cellient MPL200 card
    b970578274e9 media: usbtv: Fix refcounting mixup
    6ad2e647d91f Bluetooth: Disconnect if E0 is used for Level 4
    21d2051d1f1c Bluetooth: Fix update of connection state in `hci_encrypt_cfm`
    ed6c361e3229 Bluetooth: Consolidate encryption handling in hci_encrypt_cfm
    155bf3fd4e8c Bluetooth: MGMT: Fix not checking if BT_HS is enabled
    66a14350de9a Bluetooth: L2CAP: Fix calling sk_filter on non-socket based channel
    0d9e9b6e1a26 Bluetooth: A2MP: Fix not initializing all members
    54f8badb9bc9 ACPI: Always build evged in
    30ddaa4c0c95 ARM: 8939/1: kbuild: use correct nm executable
    1bf467fdfeae btrfs: take overcommit into account in inc_block_group_ro
    39c5eb1482b2 btrfs: don't pass system_chunk into can_overcommit
    bc79abf4afea perf cs-etm: Move definition of 'traceid_list' global variable from header file

(From OE-Core rev: dffb8b856649d4280ac376d480c7935663f8bd7a)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5da55c543cf38ca1082bc160fd571b3c7c6a40ba)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
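In OE-Core, a -stable bump like the v5.4.72 update above is carried by the
linux-yocto recipe itself: the recipe's version string and pinned source
revisions are moved to the new kernel and kernel-cache commits. As a minimal
sketch of the variables such an update typically touches in
meta/recipes-kernel/linux/linux-yocto_5.4.bb (the revision values below are
placeholders for illustration, not the actual hashes from this commit):

    # Illustrative only -- a real update pins SRCREVs to actual commits
    # on the linux-yocto and yocto-kernel-cache branches.
    LINUX_VERSION ?= "5.4.72"
    SRCREV_machine ?= "<new linux-yocto tree commit>"       # placeholder
    SRCREV_meta ?= "<new yocto-kernel-cache commit>"        # placeholder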
Bruce Ashfield
45c8a7e583 linux-yocto/5.8: update to v5.8.16
Updating linux-yocto/5.8 to the latest korg -stable release that comprises
the following commits:

    c5464f4be19b Linux 5.8.16
    4cadc0dd5ce2 reiserfs: Fix oops during mount
    492f415bb105 reiserfs: Initialize inode keys properly
    27319196d104 USB: serial: ftdi_sio: add support for FreeCalypso JTAG+UART adapters
    56eff3982215 USB: serial: pl2303: add device-id for HP GC device
    e95645fd1e28 staging: comedi: check validity of wMaxPacketSize of usb endpoints found
    75ea7049c9c6 USB: serial: option: Add Telit FT980-KS composition
    a7f0e37b29f4 USB: serial: option: add Cellient MPL200 card
    d6efa7525a59 media: usbtv: Fix refcounting mixup
    1b7150e1c95e Bluetooth: Disconnect if E0 is used for Level 4
    9e473bae14f3 Bluetooth: MGMT: Fix not checking if BT_HS is enabled
    ffddc73458e8 Bluetooth: L2CAP: Fix calling sk_filter on non-socket based channel
    a350bfd9a93f Bluetooth: A2MP: Fix not initializing all members
    8fae48c4bf67 crypto: qat - check cipher length for aead AES-CBC-HMAC-SHA
    c4ab0a2944b8 crypto: bcm - Verify GCM/CCM key length in setkey

(From OE-Core rev: c80d6d89e90b119e8fa1b434c35c46448bb2934c)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 869f4a5edf70a88301646356c8d3faa55996e5a9)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
4d2fd8ddd3 linux-yocto/5.4: update to v5.4.71
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    85b0841aab15 Linux 5.4.71
    22e6625babfc net_sched: commit action insertions together
    a5de4ee6d055 net_sched: defer tcf_idr_insert() in tcf_action_init_1()
    dbb763107d3e net: usb: rtl8150: set random MAC address when set_ethernet_addr() fails
    6c9edf2d855a Input: ati_remote2 - add missing newlines when printing module parameters
    536c767b14e3 net/mlx5e: Fix driver's declaration to support GRE offload
    8dc5025c6a44 net/tls: race causes kernel panic
    a42dbd059ef6 net/core: check length before updating Ethertype in skb_mpls_{push,pop}
    e39c9eba9bef tcp: fix receive window update in tcp_add_backlog()
    2729afe17987 mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged
    d94c1505fa91 mmc: core: don't set limits.discard_granularity as 0
    760c7a948bea perf: Fix task_function_call() error handling
    b750f86a62d1 rxrpc: Fix server keyring leak
    ae1a085b4aac rxrpc: The server keyring isn't network-namespaced
    513dd1609c9d rxrpc: Fix some missing _bh annotations on locking conn->state_lock
    422f5c5d3ef9 rxrpc: Downgrade the BUG() for unsupported token type in rxrpc_read()
    7e1f39b5c1d5 rxrpc: Fix rxkad token xdr encoding
    9a52da3f61b4 net/mlx5e: Fix VLAN create flow
    6b9752d85e72 net/mlx5e: Fix VLAN cleanup flow
    47e83c69fe14 net/mlx5e: Add resiliency in Striding RQ mode for packets larger than MTU
    1e7a94724b78 net/mlx5: Fix request_irqs error flow
    073fff810206 net/mlx5: Avoid possible free of command entry while timeout comp handler
    0955c774f32d virtio-net: don't disable guest csum when disabling LRO
    15f84bdf6185 net: usb: ax88179_178a: fix missing stop entry in driver_info
    70877d04d41f r8169: fix RTL8168f/RTL8411 EPHY config
    7a96cbd74fcd mlxsw: spectrum_acl: Fix mlxsw_sp_acl_tcam_group_add()'s error path
    f3b35c3782ed mdio: fix mdio-thunder.c dependency & build error
    8d103b1f9ce5 bonding: set dev->needed_headroom in bond_setup_by_slave()
    3ce96a55b756 net: ethernet: cavium: octeon_mgmt: use phy_start and phy_stop
    e987ea087fd2 iavf: Fix incorrect adapter get in iavf_resume
    029ced5cce89 iavf: use generic power management
    84ab35eacdf2 xfrm: Use correct address family in xfrm_state_find
    4d3edb2e4d6e platform/x86: fix kconfig dependency warning for FUJITSU_LAPTOP
    dd2786a3e521 net: stmmac: removed enabling eee in EEE set callback
    e9a12de5a2be xfrm: clone whole lifetime_cur structure in xfrm_do_migrate
    7ea7436c406c xfrm: clone XFRMA_SEC_CTX in xfrm_do_migrate
    c1becfebe33e xfrm: clone XFRMA_REPLAY_ESN_VAL in xfrm_do_migrate
    0bea401a9a5a xfrm: clone XFRMA_SET_MARK in xfrm_do_migrate
    f825fd534f8b iommu/vt-d: Fix lockdep splat in iommu_flush_dev_iotlb()
    bdffb36bcd38 drm/amdgpu: prevent double kfree ttm->sg
    4034664a733e openvswitch: handle DNAT tuple collision
    f89128ad358e net: team: fix memory leak in __team_options_register
    003269d8d6de team: set dev->needed_headroom in team_setup_by_port()
    fb3681c20fbf sctp: fix sctp_auth_init_hmacs() error path
    040e3110d49c i2c: owl: Clear NACK and BUS error bits
    abe997f632d1 i2c: meson: fixup rate calculation with filter delay
    6db69c390622 i2c: meson: fix clock setting overwrite
    209549c1c0f0 cifs: Fix incomplete memory allocation on setxattr path
    0afdda28eb2b xfrmi: drop ignore_df check before updating pmtu
    49af88ac6534 nvme-tcp: check page by sendpage_ok() before calling kernel_sendpage()
    15cac17d9d39 tcp: use sendpage_ok() to detect misused .sendpage
    d23dd3864b4c net: introduce helper sendpage_ok() in include/linux/net.h
    5c62d335317c mm/khugepaged: fix filemap page_to_pgoff(page) != offset
    1317469fa05b macsec: avoid use-after-free in macsec_handle_frame()
    20f96fee81c6 nvme-core: put ctrl ref when module ref get fail
    c0f3c5386995 btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation
    e531fd7f8b3a btrfs: fix RWF_NOWAIT write not failing when we need to cow
    1f90600e259b btrfs: Ensure we trim ranges across block group boundary
    6a0f5da2db3b btrfs: volumes: Use more straightforward way to calculate map length
    5aefd1fa9f4d Btrfs: send, fix emission of invalid clone operations within the same file
    19d8412679f2 Btrfs: send, allow clone operations within the same file
    f02dc39bbb20 arm64: dts: stratix10: add status to qspi dts node
    e8e1d16e0b89 i2c: i801: Exclude device from suspend direct complete optimization
    2118c7ba5f2a perf top: Fix stdio interface input handling with glibc 2.28+
    2499c15115ac perf test session topology: Fix data path
    7c1847aa4932 driver core: Fix probe_count imbalance in really_probe()
    3fd2647f9d68 platform/x86: thinkpad_acpi: re-initialize ACPI buffer size when reuse
    da4cdc87dfeb platform/x86: intel-vbtn: Switch to an allow-list for SW_TABLET_MODE reporting
    6440fb9bda91 bpf: Prevent .BTF section elimination
    67a57230b4bf bpf: Fix sysfs export of empty BTF section
    9bd694ccfd44 platform/x86: thinkpad_acpi: initialize tp_nvram_state variable
    d101961ce588 platform/x86: intel-vbtn: Fix SW_TABLET_MODE always reporting 1 on the HP Pavilion 11 x360
    2293272345ff Platform: OLPC: Fix memleak in olpc_ec_probe
    ce8432912f1b usermodehelper: reset umask to default before executing user process
    920a61ddd3b5 vhost: Use vhost_get_used_size() in vhost_vring_set_addr()
    57b47abc1a4a vhost: Don't call access_ok() when using IOTLB
    456d77c1bdfa drm/nouveau/mem: guard against NULL pointer access in mem_del
    8ece83bf754f net: wireless: nl80211: fix out-of-bounds access in nl80211_del_key()
    ee413b2915bf io_uring: Fix double list add in io_queue_async_work()
    efb1cef27d59 io_uring: Fix remove irrelevant req from the task_list
    75524f753318 io_uring: Fix missing smp_mb() in io_cancel_async_work()
    d9e81b2fb372 io_uring: Fix resource leaking when kill the process
    4f46ef7bec86 Revert "ravb: Fixed to be able to unload modules"
    1b2fcd82c0ca fbcon: Fix global-out-of-bounds read in fbcon_get_font()
    f51ec3fd7128 Fonts: Support FONT_EXTRA_WORDS macros for built-in fonts
    eebe3685701b fbdev, newport_con: Move FONT_EXTRA_WORDS macros into linux/font.h
    d22f99d235e1 Linux 5.4.70
    253052b636e9 netfilter: ctnetlink: add a range check for l3/l4 protonum
    27423bb05e25 ep_create_wakeup_source(): dentry name can change under you...
    8e58bad666bb epoll: EPOLL_CTL_ADD: close the race in decision to take fast path
    099b7a1bc791 epoll: replace ->visited/visited_list with generation count
    8993da3d4d3a epoll: do not insert into poll queues until all sanity checks are done
    8db44b30d392 nvme: consolidate chunk_sectors settings
    03f4f85bbd7d nvme: Introduce nvme_lba_to_sect()
    34b939695f28 nvme: Cleanup and rename nvme_block_nr()
    9626c1a63703 mm: don't rely on system state to detect hot-plug operations
    42b7153dd6a6 mm: replace memmap_context by meminit_context
    2334b2d5a2bd block/diskstats: more accurate approximation of io_ticks for slow disks
    1d13c3a5000b random32: Restore __latent_entropy attribute on net_rand_state
    4faf2c3a97ec scripts/dtc: only append to HOST_EXTRACFLAGS instead of overwriting
    ea4c691b58d7 Input: trackpoint - enable Synaptics trackpoints
    21b9387253a7 i2c: cpm: Fix i2c_ram structure
    811ac052e264 gpio: aspeed: fix ast2600 bank properties
    f2a2380812c6 gpio/aspeed-sgpio: don't enable all interrupts by default
    8323d1e09037 gpio/aspeed-sgpio: enable access to all 80 input & output sgpios
    eddeff708c15 iommu/exynos: add missing put_device() call in exynos_iommu_of_xlate()
    08e66c0c1c0e clk: samsung: exynos4: mark 'chipid' clock as CLK_IGNORE_UNUSED
    0ded28e3c468 clk: tegra: Always program PLL_E when enabled
    2f37a1ef1e5d nfs: Fix security label length not being reset
    6c5a11ead942 pinctrl: mvebu: Fix i2c sda definition for 98DX3236
    ae68b15839b0 phy: ti: am654: Fix a leak in serdes_am654_probe()
    543ea1af5744 gpio: sprd: Clear interrupt when setting the type as edge
    8c03d0ef62dd nvme-fc: fail new connections to a deleted host or remote port
    2b217eafcf74 nvme-pci: fix NULL req in completion handler
    157ccdf7eb2c spi: fsl-espi: Only process interrupts for expected events
    8cc5eb809aa5 tools/io_uring: fix compile breakage
    4e4646c85e89 tracing: Make the space reserved for the pid wider
    a0fe7f705457 mac80211: do not allow bigger VHT MPDUs than the hardware supports
    355a710f0813 mac80211: Fix radiotap header channel flag for 6GHz band
    126e6099b8c1 drivers/net/wan/hdlc: Set skb->protocol before transmitting
    3ba3fc3e7ea6 drivers/net/wan/lapbether: Make skb->protocol consistent with the header
    89fd103fbbb0 fuse: fix the ->direct_IO() treatment of iov_iter
    44b4baf850bd nvme-core: get/put ctrl and transport module in nvme_dev_open/release()
    0bcc3480393b rndis_host: increase sleep time in the query-response loop
    f19ff011027b net: dec: de2104x: Increase receive ring size for Tulip
    e9af030ddd4b drm/sun4i: mixer: Extend regmap max_register
    985a56c58c4f drivers/net/wan/hdlc_fr: Add needed_headroom for PVC devices
    91d59157b103 libbpf: Remove arch-specific include path in Makefile
    688aa0e0aaf9 clocksource/drivers/timer-gx6605s: Fixup counter reload
    3d54a640e20c drm/amdgpu: restore proper ref count in amdgpu_display_crtc_set_config
    de21eb7f8cb0 memstick: Skip allocating card when removing host
    c524a17312d4 ftrace: Move RCU is watching check after recursion check
    5ac7065e0866 iio: adc: qcom-spmi-adc5: fix driver name
    ac3bf99fc26a Input: i8042 - add nopnp quirk for Acer Aspire 5 A515
    aee38af574a1 xfs: trim IO to found COW extent limit
    aed60a1746ba net: virtio_vsock: Enhance connection semantics
    215459ff3666 vsock/virtio: add transport parameter to the virtio_transport_reset_no_sock()
    14c79ef213c2 clk: socfpga: stratix10: fix the divider for the emac_ptp_free_clk
    79c8ebdce55c gpio: tc35894: fix up tc35894 interrupt configuration
    035f59ad4ba8 gpio: mockup: fix resource leak in error path
    b079337f697a gpio: siox: explicitly support only threaded irqs
    57bd08a301f7 USB: gadget: f_ncm: Fix NDP16 datagram validation
    23389cf97aa1 mmc: sdhci: Workaround broken command queuing on Intel GLK based IRBIS models
    09c826447cb0 btrfs: fix filesystem corruption after a device replace

(From OE-Core rev: d7fe2a96ae30eecdfddd5a46c3fb088e633afc5b)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8f9352782e610775efbb059fbfb5a6b997d2ec88)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
ea0af53e2a linux-yocto/5.8: update to v5.8.15
Updating linux-yocto/5.8 to the latest korg -stable release that comprises
the following commits:

    665c6ff082e2 Linux 5.8.15
    03b7311c2d35 net_sched: commit action insertions together
    1e02bbf908d3 net_sched: defer tcf_idr_insert() in tcf_action_init_1()
    b6a788af71ed net: qrtr: ns: Protect radix_tree_deref_slot() using rcu read locks
    691847cc626c net: usb: rtl8150: set random MAC address when set_ethernet_addr() fails
    624143319921 Input: ati_remote2 - add missing newlines when printing module parameters
    2cdb64863860 tty/vt: Do not warn when huge selection requested
    af2c68e241ba net/mlx5e: Fix driver's declaration to support GRE offload
    13e623dc2772 net/tls: race causes kernel panic
    d1a1891a5865 net: bridge: fdb: don't flush ext_learn entries
    54d2034e1d13 net/core: check length before updating Ethertype in skb_mpls_{push,pop}
    912721b3ad72 netlink: fix policy dump leak
    85355299d6fa tcp: fix receive window update in tcp_add_backlog()
    a4c5f912c926 mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged
    0d600018dde7 mm: validate inode in mapping_set_error()
    270974601ea5 mmc: core: don't set limits.discard_granularity as 0
    23030fd91348 perf: Fix task_function_call() error handling
    02b573f11b1c afs: Fix deadlock between writeback and truncate
    29c60e82c6a5 net: mscc: ocelot: divide watermark value by 60 when writing to SYS_ATOP
    9fd541ad02bd net: mscc: ocelot: extend watermark encoding function
    13c116784250 net: mscc: ocelot: split writes to pause frame enable bit and to thresholds
    43e89f7e3c98 net: mscc: ocelot: rename ocelot_board.c to ocelot_vsc7514.c
    78272109f44d rxrpc: Fix server keyring leak
    bf1235365637 rxrpc: The server keyring isn't network-namespaced
    0fb27a1f99c1 rxrpc: Fix some missing _bh annotations on locking conn->state_lock
    6343a701ca68 rxrpc: Downgrade the BUG() for unsupported token type in rxrpc_read()
    3a15888ff3df rxrpc: Fix rxkad token xdr encoding
    41d0598c0f43 net: mvneta: fix double free of txq->buf
    d5c6f130b6f0 vhost-vdpa: fix page pinning leakage in error path
    ec7257845d40 vhost-vdpa: fix vhost_vdpa_map() on error condition
    72d41c97e736 net: hinic: fix DEVLINK build errors
    a974b4bddae3 net: stmmac: Modify configuration method of EEE timers
    d0eb9588f724 net/mlx5e: Fix race condition on nhe->n pointer in neigh update
    eef0da156040 net/mlx5e: Fix VLAN create flow
    b6dc435f3603 net/mlx5e: Fix VLAN cleanup flow
    f2140d0c6b93 net/mlx5e: Fix return status when setting unsupported FEC mode
    96e80a346634 net/mlx5e: Add resiliency in Striding RQ mode for packets larger than MTU
    4dc4c132f27f net/mlx5: Fix request_irqs error flow
    91ddbc505218 net/mlx5: Add retry mechanism to the command entry index allocation
    963f9da02730 net/mlx5: poll cmd EQ in case of command timeout
    da87ea137373 net/mlx5: Avoid possible free of command entry while timeout comp handler
    eb50f5c289e6 net/mlx5: Fix a race when moving command interface to polling mode
    04f31610f34f pipe: Fix memory leaks in create_pipe_files()
    ce1dde198079 octeontx2-pf: Fix synchronization issue in mbox
    5cfc870ede16 octeontx2-pf: Fix the device state on error
    7778b8860228 octeontx2-pf: Fix TCP/UDP checksum offload for IPv6 frames
    921dfb5fec6b octeontx2-af: Fix enable/disable of default NPC entries
    b9f0dcfbfc07 net: phy: realtek: fix rtl8211e rx/tx delay config
    9d41929ceea9 virtio-net: don't disable guest csum when disabling LRO
    f5f8861d01d3 net: usb: ax88179_178a: fix missing stop entry in driver_info
    fb4fb78d23fc r8169: fix RTL8168f/RTL8411 EPHY config
    0ea7fe7c26ef mlxsw: spectrum_acl: Fix mlxsw_sp_acl_tcam_group_add()'s error path
    698075baae0b mdio: fix mdio-thunder.c dependency & build error
    c83ed7bb7469 bonding: set dev->needed_headroom in bond_setup_by_slave()
    665298cbd6bd net: ethernet: cavium: octeon_mgmt: use phy_start and phy_stop
    2cb43007e060 net: stmmac: Fix clock handling on remove path
    39d93de64749 vmxnet3: fix cksum offload issues for non-udp tunnels
    6ececc888c0c ice: fix memory leak in ice_vsi_setup
    c4b9b9d7eb10 ice: fix memory leak if register_netdev_fails
    33e948635e65 iavf: Fix incorrect adapter get in iavf_resume
    1e0cdecfb896 iavf: use generic power management
    13685508abf3 xfrm: Use correct address family in xfrm_state_find
    3e835221d670 net: dsa: felix: convert TAS link speed based on phylink speed
    24bc1ec457c8 hinic: fix wrong return value of mac-set cmd
    43b7d340cb3a hinic: add log in exception handling processes
    5f8c48c299bc platform/x86: fix kconfig dependency warning for FUJITSU_LAPTOP
    6d9886e6081b platform/x86: fix kconfig dependency warning for LG_LAPTOP
    046add2ce07c net: stmmac: removed enabling eee in EEE set callback
    ac25c357463b xsk: Do not discard packet when NETDEV_TX_BUSY
    38dd384ce429 xfrm: clone whole liftime_cur structure in xfrm_do_migrate
    8baab8024028 xfrm: clone XFRMA_SEC_CTX in xfrm_do_migrate
    3ab37554e6ce xfrm: clone XFRMA_REPLAY_ESN_VAL in xfrm_do_migrate
    958c224a99d3 xfrm: clone XFRMA_SET_MARK in xfrm_do_migrate
    954adf701189 iommu/vt-d: Fix lockdep splat in iommu_flush_dev_iotlb()
    31bc10ac6d01 btrfs: move btrfs_rm_dev_replace_free_srcdev outside of all locks
    b50aa502610f drm/amd/display: fix return value check for hdcp_work
    b02b690b4bb3 drm/amd/pm: Removed fixed clock in auto mode DPM
    9e184961ddb7 io_uring: fix potential ABBA deadlock in ->show_fdinfo()
    287d8f00338d btrfs: move btrfs_scratch_superblocks into btrfs_dev_replace_finishing
    cefd370cb723 drm/amdgpu: prevent double kfree ttm->sg
    9c6944b53f1d openvswitch: handle DNAT tuple collision
    0388ffce1059 net: team: fix memory leak in __team_options_register
    70af9c28d423 team: set dev->needed_headroom in team_setup_by_port()
    9360901e714d sctp: fix sctp_auth_init_hmacs() error path
    d63492ab001b i2c: owl: Clear NACK and BUS error bits
    08a1313bfca0 i2c: meson: fixup rate calculation with filter delay
    3531df70c312 i2c: meson: keep peripheral clock enabled
    fe6124585cfe i2c: meson: fix clock setting overwrite
    d681bce5bc03 cifs: Fix incomplete memory allocation on setxattr path
    80683929112b espintcp: restore IP CB before handing the packet to xfrm
    1427c13cc16f xfrmi: drop ignore_df check before updating pmtu
    c2a55388bada nvme-tcp: check page by sendpage_ok() before calling kernel_sendpage()
    f4abc5911a9e tcp: use sendpage_ok() to detect misused .sendpage
    854828e10e2d net: introduce helper sendpage_ok() in include/linux/net.h
    89bec0adbf50 mm/khugepaged: fix filemap page_to_pgoff(page) != offset
    f994c81fe4c5 gpiolib: Disable compat ->read() code in UML case
    987c12d56402 RISC-V: Make sure memblock reserves the memory containing DT
    659a68b11df3 macsec: avoid use-after-free in macsec_handle_frame()
    8c995b27d066 nvme-core: put ctrl ref when module ref get fail
    3113391293be platform/x86: thinkpad_acpi: re-initialize ACPI buffer size when reuse
    46a00e3e9275 platform/x86: intel-vbtn: Switch to an allow-list for SW_TABLET_MODE reporting
    402ee2f96fb9 r8169: consider that PHY reset may still be in progress after applying firmware
    a73bb4ddee83 bpf: Prevent .BTF section elimination
    bc33b9bb0757 bpf: Fix sysfs export of empty BTF section
    944e354acfc3 platform/x86: asus-wmi: Fix SW_TABLET_MODE always reporting 1 on many different models
    88ddba3ebc3c platform/x86: thinkpad_acpi: initialize tp_nvram_state variable
    b9c0333ac6c8 platform/x86: intel-vbtn: Fix SW_TABLET_MODE always reporting 1 on the HP Pavilion 11 x360
    6b010ed04d50 Platform: OLPC: Fix memleak in olpc_ec_probe
    6ad52d3ee278 splice: teach splice pipe reading about empty pipe buffers
    c679280057ee usermodehelper: reset umask to default before executing user process
    3d36be053e58 vhost: Use vhost_get_used_size() in vhost_vring_set_addr()
    3480587d9b9d vhost: Don't call access_ok() when using IOTLB
    145a5510ef6a block/scsi-ioctl: Fix kernel-infoleak in scsi_put_cdrom_generic_arg()
    128f5fe7c102 partitions/ibm: fix non-DASD devices
    ef29249b066f drm/nouveau/mem: guard against NULL pointer access in mem_del
    e82867e1c2b4 drm/nouveau/device: return error for unknown chipsets
    bc7382371b2d net: wireless: nl80211: fix out-of-bounds access in nl80211_del_key()
    82dfd230b0c0 exfat: fix use of uninitialized spinlock on error path
    6a4bf26a176d crypto: arm64: Use x16 with indirect branch to bti_c
    fc5b5ae8ac3c bpf: Fix scalar32_min_max_or bounds tracking
    849d01ef1894 Revert "ravb: Fixed to be able to unload modules"
    e57db2fee8b1 fbcon: Fix global-out-of-bounds read in fbcon_get_font()
    34873e40e8d8 Fonts: Support FONT_EXTRA_WORDS macros for built-in fonts
    3714c5596a9d fbdev, newport_con: Move FONT_EXTRA_WORDS macros into linux/font.h
    70b225d0a8ca Linux 5.8.14
    8eec10e1335d ep_create_wakeup_source(): dentry name can change under you...
    4306cae1d98a epoll: EPOLL_CTL_ADD: close the race in decision to take fast path
    a6a47119b527 epoll: replace ->visited/visited_list with generation count
    bdb43b31e65d epoll: do not insert into poll queues until all sanity checks are done
    5e6bc9b1f1ae scsi: sd: sd_zbc: Fix ZBC disk initialization
    a12f67b54771 scsi: sd: sd_zbc: Fix handling of host-aware ZBC disks
    ecd72c95c278 drm/i915/gvt: Fix port number for BDW on EDID region setup
    115b0aed8b74 gpiolib: Fix line event handling in syscall compatible mode
    b4b93f8c92bb random32: Restore __latent_entropy attribute on net_rand_state
    d4ff049a3463 pipe: remove pipe_wait() and fix wakeup race with splice
    f6e5c604d67b iommu/amd: Fix the overwritten field in IVMD header
    7af706248ce2 gpio: pca953x: Correctly initialize registers 6 and 7 for PCA957x
    b7d423041485 pinctrl: mediatek: check mtk_is_virt_gpio input parameter
    1b62e4935b0c pinctrl: qcom: sm8250: correct sdc2_clk
    5f040ac168f3 autofs: use __kernel_write() for the autofs pipe writing
    b06582ae5052 scripts/dtc: only append to HOST_EXTRACFLAGS instead of overwriting
    c53cd1877406 blk-mq: call commit_rqs while list empty but error happen
    a6141f191d83 Input: trackpoint - enable Synaptics trackpoints
    83884333497f i2c: npcm7xx: Clear LAST bit after a failed transaction.
    95b874d021f6 i2c: cpm: Fix i2c_ram structure
    f6ae5ac641a8 gpio: aspeed: fix ast2600 bank properties
    cf7f69852717 gpio/aspeed-sgpio: don't enable all interrupts by default
    7dc4222171ce gpio/aspeed-sgpio: enable access to all 80 input & output sgpios
    20d7a2cbc339 gpio: pca953x: Fix uninitialized pending variable
    c8a8adc7df57 iommu/exynos: add missing put_device() call in exynos_iommu_of_xlate()
    32b462c501ee scsi: target: Fix lun lookup for TARGET_SCF_LOOKUP_LUN_FROM_TAG case
    40e2e6c71ac1 clk: samsung: exynos4: mark 'chipid' clock as CLK_IGNORE_UNUSED
    f6e9c4310f5a dmaengine: dmatest: Prevent to run on misconfigured channel
    ec9002ead04b clk: tegra: Fix missing prototype for tegra210_clk_register_emc()
    ef3f3611b462 clk: tegra: Always program PLL_E when enabled
    63cd394fa3f0 pNFS/flexfiles: Ensure we initialise the mirror bsizes correctly on read
    ac376f2245bb NFSv4.2: fix client's attribute cache management for copy_file_range
    a98e3583bd8d nfs: Fix security label length not being reset
    6846eb762344 pinctrl: mvebu: Fix i2c sda definition for 98DX3236
    fdf8212f0260 phy: ti: am654: Fix a leak in serdes_am654_probe()
    9f6c717ffa47 gpio: sprd: Clear interrupt when setting the type as edge
    6bef7d4b4770 scripts/kallsyms: skip ppc compiler stub *.long_branch.* / *.plt_branch.*
    a50ea89d1ae5 nvme-fc: fail new connections to a deleted host or remote port
    7d2120bc38b9 nvme-pci: fix NULL req in completion handler
    189c154bc593 net: dsa: felix: fix some key offsets for IP4_TCP_UDP VCAP IS2 entries
    b23f9f0dc930 spi: fsl-espi: Only process interrupts for expected events
    cbbc927e0e62 cpuidle: psci: Fix suspicious RCU usage
    f833ed7a202b io_uring: mark statx/files_update/epoll_ctl as non-SQPOLL
    fc4b56ae9e76 tools/io_uring: fix compile breakage
    4ff709d00af4 tracing: Make the space reserved for the pid wider
    f2465c7d069c mac80211: do not allow bigger VHT MPDUs than the hardware supports
    9c72951f9e97 mac80211: Fix radiotap header channel flag for 6GHz band
    2dd5f2a99bf3 drivers/net/wan/hdlc: Set skb->protocol before transmitting
    3074634461c5 drivers/net/wan/lapbether: Make skb->protocol consistent with the header
    74e81de01e49 fuse: fix the ->direct_IO() treatment of iov_iter
    72adaf934802 nvme-core: get/put ctrl and transport module in nvme_dev_open/release()
    f3f3da8c1ff9 nvme-pci: disable the write zeros command for Intel 600P/P3100
    33701f04a59a rndis_host: increase sleep time in the query-response loop
    21f41dd7e883 net: dec: de2104x: Increase receive ring size for Tulip
    9c524f9df9c7 hv_netvsc: Cache the current data path to avoid duplicate call and message
    caac35688ac1 drm/sun4i: mixer: Extend regmap max_register
    b92f98f9307c Revert "wlcore: Adding suppoprt for IGTK key in wlcore driver"
    73fadce8c80b drivers/net/wan/hdlc_fr: Add needed_headroom for PVC devices
    1017b151fb4a libbpf: Remove arch-specific include path in Makefile
    9f183485e888 mt76: mt7915: use ieee80211_free_txskb to free tx skbs
    057c9ed4565b vboxsf: Fix the check for the old binary mount-arguments struct
    4a1db91e697a clocksource/drivers/timer-gx6605s: Fixup counter reload
    5d48f7b0ed06 xen/events: don't use chip_data for legacy IRQs
    e99ecd62bb9c drm/amdgpu: restore proper ref count in amdgpu_display_crtc_set_config
    b64a43b072c7 memstick: Skip allocating card when removing host
    13cee195a180 tracing: Fix trace_find_next_entry() accounting of temp buffer size
    7f5d5928b9cc ftrace: Move RCU is watching check after recursion check
    1f0038ad6eed iio: adc: qcom-spmi-adc5: fix driver name
    14f6276e202f Input: i8042 - add nopnp quirk for Acer Aspire 5 A515
    6901d792bc35 i2c: i801: Exclude device from suspend direct complete optimization
    7d29e9507663 scsi: iscsi: iscsi_tcp: Avoid holding spinlock while calling getpeername()
    c32f1ee1d6d0 clk: socfpga: stratix10: fix the divider for the emac_ptp_free_clk
    a77ae2f6d900 clk: samsung: Keep top BPLL mux on Exynos542x enabled
    9705d89518ae gpio: amd-fch: correct logic of GPIO_LINE_DIRECTION
    f67837215194 gpio: tc35894: fix up tc35894 interrupt configuration
    baeac67ee6e2 gpio: mockup: fix resource leak in error path
    cb2480639590 gpio: siox: explicitly support only threaded irqs
    5ae75e1e510d usbcore/driver: Accommodate usbip
    ab3edda370ee usbcore/driver: Fix incorrect downcast
    dc1e84d05a96 usbcore/driver: Fix specific driver selection
    36ec30f02a00 Revert "usbip: Implement a match function to fix usbip"
    9c69e3a769db USB: gadget: f_ncm: Fix NDP16 datagram validation
    26be1c145cfe mmc: sdhci: Workaround broken command queuing on Intel GLK based IRBIS models
    a8183e677fc1 btrfs: fix filesystem corruption after a device replace
    f2a5cb2f24ae io_uring: always delete double poll wait entry on match

(From OE-Core rev: d044bd0603c2e80c5529f468f077c21f0af1d827)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 20a986da54728af38cac4556d01e39ef4bd558d6)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Chee Yang Lee
2d342da2a3 bluez5: update to 5.55
Release note:
5a180f2ec9

(From OE-Core rev: 6ed12979194b8fb73d6f7365128b5451e580cdba)

Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c2895e3e4eabca64cbcc8682e72d25026df5e5f0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Richard Purdie
f1b304df93 bitbake: Add missing documentation Makefile
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-24 09:10:58 +00:00
Nathan Rossi
b569f2a414 diffstat: add nativesdk to BBCLASSEXTEND
The diffstat tool is part of HOSTTOOLS. To support hosts that do not
have it installed, it must be enabled for nativesdk so that
buildtools-tarball can provide it.
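
A minimal sketch of the change, assuming the standard BBCLASSEXTEND
mechanism (the recipe may list other class extensions as well):

    # diffstat recipe (sketch): allow building a nativesdk variant so
    # buildtools-tarball can ship the tool
    BBCLASSEXTEND = "native nativesdk"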

(From OE-Core rev: 3a4ac9d028e6d7840660bb9640614d92fd89246f)

Signed-off-by: Nathan Rossi <nathan@nathanrossi.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0ed002422bc46539f1d71ed19ee17358b6691bf0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Jose Quaresma
411f541288 gstreamer1.0: warn the user when something is wrong with GstBufferPool
This is not a critical bug fix, but it can be useful in some BSPs
with exotic drivers, such as the NVIDIA Tegra BSP.

(From OE-Core rev: b53a89f4e5457689b7cb38ed9b3d0885cfd47c12)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Mark Jonas
83477f0280 libbsd: Remove BSD-4-Clause from main package
libbsd contains a multitude of licenses. For (commercial) projects the
3rd clause of the BSD-4-Clause license can be problematic, but only a
few man pages use this license. This means the main package containing
the binary library itself is not covered by the BSD-4-Clause terms.

(From OE-Core rev: e822d8423fb836cc821b5c87d1b4f30477a313fd)

Signed-off-by: Mark Jonas <toertel@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9c3e3f83b5fb162d161a7b9773d426418a22c05f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Matt Madison
7e7893983f layer.conf: fix syntax error in PATH setting
Commit 05a87be51b44608ce4f77ac332df90a3cd2445ef introduced
a Python conditional expression when updating PATH that
generates syntax warnings in bitbake-cookerdaemon.log:

  Var <PATH[:=]>:1: SyntaxWarning: "is not" with a literal. Did you mean "!="?

Fix this by using the more appropriate '!=' comparison
operator.
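
A hedged before/after sketch of the construct involved (variable and
paths are illustrative, not the actual meta-poky expression):

    # before: identity comparison against a string literal warns on Python 3.8+
    PATH := "${@'/some/dir:' if d.getVar('FOO') is not '' else ''}${PATH}"
    # after: value comparison, no warning
    PATH := "${@'/some/dir:' if d.getVar('FOO') != '' else ''}${PATH}"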

(From OE-Core rev: b6c3950be8e4edbdde74b5819c974124e30680c7)

Signed-off-by: Matt Madison <matt@madison.systems>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2e753a12cf6bb98f9e0940e5ed6255ce8c538eed)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Khem Raj
e3a67d60cc gawk: Avoid using host ar during cross compile
(From OE-Core rev: 93178cea0e694cccd602ba965909f50f1b7159c7)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5bc83ca06d0d38a6eb9fcc0343d081021dafb2ce)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Khem Raj
23a0428069 lrzsz: Use Cross AR during compile
The current code hardcodes the archiver to be 'ar' from the build host.

(From OE-Core rev: 694202b05134bdef603b69667cd70a28bb311ccf)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 74ed1d10434213ad3fcf54ded49879090f979e1e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Denys Zagorui
b74901b816 binutils: reproducibility: reuse debug-prefix-map for stabs
The 32-bit powerpc Linux kernel widely uses the .stabs pseudo-op to
produce debugging information in stabs format. When building the
kernel with the Yocto build system for a 32-bit powerpc platform, the
resulting vmlinux contains absolute paths in the .stabstr section
that cannot be remapped with the -fdebug-prefix-map option.

Yocto uses the scripts/mkmakefile kernel build approach, which stores
all generated files outside of the kernel source tree. With this
approach each compiler invocation is given an absolute path to the
file being compiled, and that absolute path is recorded in the init
stab with no way to remap it.

Reuse the remap_debug_filename API to make the -fdebug-prefix-map
flag applicable to the init stab.
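
A sketch of the remapping this enables (paths illustrative; the
compiler driver forwards the map to the assembler, which processes
the .stabs directives):

    # After the patch, the init stab path is remapped like DWARF paths:
    gcc -gstabs -fdebug-prefix-map=/abs/build/dir=/usr/src/kernel -c init/main.c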

(From OE-Core rev: b4b79870d7946e58692adb68d1329955500d3c56)

Signed-off-by: Denys Zagorui <dzagorui@cisco.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 4dce4e01cfa153fb12cfd1684d36e0432bef6741)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Konrad Weihmann
010625f35a testimage: print results for interrupted runs
When a run is ended by the overall timeout, print the already executed
test cases to give a hint as to which test case might have made the
suite reach the global timeout. Nonetheless, make the test run exit
with an error.

(From OE-Core rev: 54a7e5feee2bec78f8d526b69076fd0e8e50e228)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2bcc643195a3b3c66d698fac8b7af037c08545ac)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Konrad Weihmann
0647439a0a oeqa/core/context: initialize _run_end_time
Initialize it with _run_start_time as the value. For partial results of
interrupted runs, this information might otherwise be missing for at
least one test case.
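
A minimal sketch of the initialization (attribute names taken from the
commit subject; the surrounding code is assumed):

    # oeqa/core/context.py (sketch)
    self._run_start_time = time.time()
    self._run_end_time = self._run_start_time  # sane value even if interrupted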

(From OE-Core rev: a91308482e1bb524df413d4342a9ebb472314663)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1c5e8baf57fa2a33b9ef507b11d9ea9acaa77238)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Konrad Weihmann
87a05c7316 oeqa/core/context: expose results as variable
Register a unittest handler for test results and expose it as the
variable 'result'. With this, even partial results from an interrupted
test suite run can be made available.

(From OE-Core rev: ba41688f7f0cb44293321df6c69fe47ac1804d63)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a97ae47525157871b6c098ffc352293e365a4335)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Steve Sakoman
5c33ee311c openssh: whitelist CVE-2014-9278
The OpenSSH server, as used in Fedora and Red Hat Enterprise
Linux 7 and when running in a Kerberos environment, allows remote
authenticated users to log in as another user when they are listed
in the .k5users file of that user, which might bypass intended
authentication requirements that would force a local login.

Whitelist the CVE since this issue is Red Hat specific.
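
In gatesgarth-era OE-Core this is expressed via the cve-check class;
a sketch of the recipe change, assuming that mechanism:

    # openssh recipe (sketch)
    # CVE-2014-9278 only applies to Red Hat's Kerberos/.k5users setup
    CVE_CHECK_WHITELIST += "CVE-2014-9278"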

(From OE-Core rev: b43201dd7459c2e408889fd8a81a52719308b5fe)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 309132e50d23b1e3f15ef8db1a101166b35f7ca4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Alexander Kanavin
3ad92d4d09 conf-notes.txt: mention more important images than just sato
(From OE-Core rev: b622ea5c6d2965feb68b760e96e9073c50441a02)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f89138e12c3021ed49aa7ccdf90543d2aaaad279)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Alexander Kanavin
5e5a7fd73d clutter-gst-3.0: do not call out to host gstreamer plugin scanner
This is host contamination and can also fail for all kinds of
reasons when running under usermode qemu.

(From OE-Core rev: 4088ef3f6e608031a4f951cce5cc30b0af867e75)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fb60d0920b660dffb346b2212dc6f8ba2a0b9fde)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Gratian Crisan
3269613984 kernel-module-split.bbclass: identify kernel modconf files as configuration files
Currently the modconf fragments representing the configuration for
kernel modules are written out to appropriate .conf files and added to
the FILES variable. However they are not identified as 'configuration
files' and installing a new version of a kernel module results in a
conflict and a failed installed because the respective .conf file is
already in place from a previous install.

Add the generated .conf files to the CONFFILES variable denoting their
true nature.
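
A sketch of the effect for one generated fragment (module and file
names hypothetical; the class computes these per module):

    # kernel-module-split.bbclass effect (sketch), gatesgarth-era syntax
    FILES_kernel-module-foo += "${sysconfdir}/modprobe.d/foo.conf"
    CONFFILES_kernel-module-foo += "${sysconfdir}/modprobe.d/foo.conf"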

(From OE-Core rev: eb42ef100c52b243eee55b950f3dc7d4010ea1f2)

Signed-off-by: Gratian Crisan <gratian.crisan@ni.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1a70a92d1f1006be115429a4262259c9084f484d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Richard Purdie
b955cbdcfb alsa-utils: Fix license to GPLv2 only
Parts of alsa-utils are v2 only, parts are v2 or later. The effect is
that the end result is GPLv2, and there seems little value in marking
everything as a mixture of both. Fix LICENSE to match reality.
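
A sketch of the metadata change (the previous value is an assumption):

    # alsa-utils recipe (sketch)
    # before: LICENSE = "GPLv2 & GPLv2+"
    LICENSE = "GPLv2"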

(From OE-Core rev: e14646de7fb45605de33fc0b797dad013ec20414)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a9a17a991174b732597e21045763ea851f486a01)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Richard Purdie
58e47e1b70 libdnf: Fix license as it contains 'or later' clause
The license headers are clear that the code is "or later", fix LICENSE
to match.

(From OE-Core rev: 01fd8b51074a91053f632b2932238e35c926045c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e565e0b908c71ad5106d1c6c73d269b819787e55)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Richard Purdie
bb0524e189 ptest-runner: Fix license as it contains 'or later' clause
The license headers are clear that the code is "or later", fix LICENSE
to match.

(From OE-Core rev: daa16f56f1596fa2987499d6b48b98f5b7aedca2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5f0b5cdfcb104ac50222a47652e090ad8770e49f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Diego Santa Cruz
7d58c8bed6 freetype: fix CVE-2020-15999, backport from 2.10.4
(From OE-Core rev: 95b928e68325218508cff8def10e72bbe0051c83)

Signed-off-by: Diego Santa Cruz <Diego.SantaCruz@spinetix.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Yongxin Liu
5232b03e22 grub: clean up CVE patches
Clean up several patches introduced in commit 6732918498 ("grub:fix
several CVEs in grub 2.04").

1) Add CVE tags to individual patches.
2) Rename upstream patches and prefix them with CVE tags.
3) Add a description referencing the upstream patch (see the header sketch below).
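
A sketch of the patch-header convention being applied (CVE identifier
and URL illustrative):

    # CVE-2020-XXXX-some-fix.patch header (sketch)
    Upstream-Status: Backport [upstream commit URL]
    CVE: CVE-2020-XXXX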

(From OE-Core rev: a1db1e71129c3e67ddd9dbef21e1c5eb31552e00)

Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bcb8b6719beaf6625e6b703e91958fe8afba5819)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Mingli Yu
e2312cd887 update_udev_hwdb: clean hwdb.bin
Steps to reproduce:
echo "IMAGE_INSTALL_append = \" udev-hwdb lib32-udev-hwdb\"" >> conf/local.conf

When install both udev-hwdb and lib32-udev-hwdb as above,
there comes below do_populate_sdk error:
 $ bitbake core-image-sato  -c populate_sdk
 ERROR: Task (/path/core-image-sato.bb:do_populate_sdk) failed with exit code '134'
 NOTE: Tasks Summary: Attempted 5554 tasks of which 0 didn't need to be rerun and 1 failed.

 $ cat /path/tmp/work/qemux86_64-poky-linux/core-image-sato/1.0-r5/pseudo/pseudo.log
 [snip]
 inode mismatch: '/path/tmp/work/qemux86_64-poky-linux/core-image-sato/1.0-r5/sdk/image/usr/local/oecore-x86_64/sysroots/core2-64-poky-linux/lib/udev/hwdb.bin' ino 427383040 in db, 427383042 in request.
 [snip]

This is because both udev-hwdb and lib32-udev-hwdb generate
${SDK_OUTPUT}/${SDKTARGETSYSROOT}/lib/udev/hwdb.bin during
do_populate_sdk, which triggers a pseudo error.

So clean hwdb.bin before regenerating it, to avoid the conflict and
fix the above do_populate_sdk error.
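
A sketch of the fix (function and path names follow the usual
udev-hwdb packaging; the exact recipe may differ):

    # udev-hwdb postinst (sketch)
    pkg_postinst_${PN}-hwdb () {
        # drop any previously generated hwdb.bin so multilib variants
        # do not conflict under pseudo during do_populate_sdk
        rm -f $D${nonarch_base_libdir}/udev/hwdb.bin
        udevadm hwdb --update --root $D${base_prefix}
    }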

(From OE-Core rev: c7472925feb53ce92c1799feba2b7a9104e3f38f)

(From OE-Core rev: 93e59a78da3dab56c91f423b2c0f29a8ebaf2700)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 994ca65e6f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Alexander Kanavin
f552970178 apt: remove host contamination with gtest
(From OE-Core rev: 41aa60cdb1e26617e1eeac95a6ffcdd6561c539f)

(From OE-Core rev: a76d66feae7050d5d59964108a065bc6251667eb)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 600cb136cd)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Yann E. MORIN
d59e28ea73 recipes-core/busybox: fixup licensing information
Commit 7d32417b4d (busybox: Correct the name of the bzip2 license)
changes the license from 'bzip2' to 'bzip2-1.0.6' on the rationale
that the 'bzip2 license was renamed from "bzip2" to "bzip2-1.0.6"
[...] to match the official SPDX identifier.'

Though the above is true for the bzip2 and pbzip2 packages, the bzip2
code bundled in busybox is a copy from the bzip2 1.0.4 version, not the
1.0.6 version.

As such, using bzip2-1.0.6 is wrong.

Unfortunately, there is no official SPDX license identifier for this
bzip2 1.0.4 version, so we just mimic the existing ones (bzip2-1.0.5
and bzip2-1.0.6) by using bzip2-1.0.4.

Also, there is a license file attached to that, so we add it to the
list.
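
A sketch of the resulting recipe metadata (checksums and the bundled
license path are assumptions):

    # busybox recipe (sketch)
    LICENSE = "GPLv2 & bzip2-1.0.4"
    LIC_FILES_CHKSUM = "file://LICENSE;md5=<existing checksum> \
                        file://archival/libarchive/bz/LICENSE;md5=<checksum>"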

(From OE-Core rev: 6238ee3ecd385cbadd8e75eb8b22a96d9cb13639)

(From OE-Core rev: fb590d12a0979e0db69e9d7b0cb605467f678000)

Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
Cc: Alexandre BELLONI <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0776bf6600)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Yann E. MORIN
61642ef429 common-licenses: add bzip2-1.0.4
The bzip2 license changes with each version; the changes are subtle, but
that makes it a different license every time:
  - copyright year
  - authorship identification and address
  - version of the release
  - date of the release

Although we currently only have bzip2 and pbzip2 packages, we're going
to need this license for busybox, which uses code from bzip2-1.0.4.

Add it, as copied from the upstream bzip2 git tree at tag 'bzip2-1.0.4'
(commit f10a33538e9bab6deb61779b3d8aae168824ef48).

(From OE-Core rev: f303c31b813f371737c9a9d7a93e9f920f84e75a)

(From OE-Core rev: e29fb3d418f3ac53e49a14b430f0ef6ef323375f)

Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Khem Raj <raj.khem@gmail.com>
Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f3f62ed09d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Khem Raj
7f6f1519b9 qemuboot.bbclass: Fix a typo
(From OE-Core rev: 2b5fb66344432390aa0cc199ad3f9ec2a4da26bb)

(From OE-Core rev: 2eb8cd12bdc4b6a83f8ab1ac6643821db5d8087c)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit aea9a37ae3)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Mark Jonas
528de6bc4f libsdl2: Fix directfb SDL_RenderFillRect
Refactoring of SDL2 internal API has broken SDL_RenderFillRect for
DirectFB. The problem has already been fixed upstream.

(From OE-Core rev: a7c8dfc1f9beebeb9da7f61b323d85fba82ec1cb)

(From OE-Core rev: 1eabecc8bcb459b0fe6b14c9a368cd1b4b6dd7dd)

Signed-off-by: Mark Jonas <toertel@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e956531526)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Mark Jonas
0ccf16fab3 libsdl2: Fix directfb syntax error
Build of libsdl2 with directfb is broken due to a spurious '}' and a
missing 'E' since version 2.0.12. The problem has already been fixed upstream.

(From OE-Core rev: 8963daba093c3c5e2c60e1e4e057862971b84cb0)

(From OE-Core rev: a2b4c03bbb1f340da2f0723336978b22f8203065)

Signed-off-by: Mark Jonas <toertel@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9e9871de01)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Chee Yang Lee
4e513e2b86 ruby: fix CVE-2020-25613
(From OE-Core rev: 4e02862b4fcfbf3a9cace8a35e355f156d26ed37)

(From OE-Core rev: a8875221054da40c66366f63d9f61940311b1fbc)

Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
1272d1b8fc gst-validate: Update 1.16.2 -> 1.16.3
(From OE-Core rev: a153bd3eeffa40554884d3a50cf6f78b57416749)

(From OE-Core rev: 88c3919e7cd46b16ec26fe4678bc2c59f7ceffb5)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
686396e3dc gstreamer1.0-python: Update 1.16.2 -> 1.16.3
(From OE-Core rev: dc9c8ca89e9d7429deac696c9995135706b9a548)

(From OE-Core rev: 74fb595b88671de668aff4beae0764d7af88b6c7)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
2fa7fde32f gstreamer1.0-omx: Update 1.16.2 -> 1.16.3
(From OE-Core rev: e091bfead5907cc13c237d7464c50efe8810d6cd)

(From OE-Core rev: d82aae6725545449edd5e4a8d04d67cf5168846a)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
72050b72e2 gstreamer1.0-rtsp-server: Update 1.16.2 -> 1.16.3
(From OE-Core rev: 75b4e0c2ad5827b5eea9e810fd03bcfc53582873)

(From OE-Core rev: eff91cfc5b203519f438f99920196eb2be227078)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
2fa97151cd gstreamer1.0-vaapi: Update 1.16.2 -> 1.16.3
(From OE-Core rev: 8a04f7326539980f83731846db3de4af9ee1a2f0)

(From OE-Core rev: 5a226e38d0add3b8e0298558946d317b9109c44b)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
e67a7af07c gstreamer1.0-libav: Update 1.16.2 -> 1.16.3
(From OE-Core rev: af7cf7c37b4ea30592529442c72f22309cb577c5)

(From OE-Core rev: 1cc05b37c302c393a8137a619225e66f16778a56)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
2306702899 gstreamer1.0-plugins-ugly: Update 1.16.2 -> 1.16.3
(From OE-Core rev: 0fec6a473695d9ae794593f7cea98d05ef959d7a)

(From OE-Core rev: ff6954c90a1aeda1a12a3414ae0901476a173cd1)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
f652c4d1b8 gstreamer1.0-plugins-bad: Update 1.16.2 -> 1.16.3
(From OE-Core rev: ee8e7a9fb8f3d29357598b2a533bb44da12d6099)

(From OE-Core rev: 9b716b146dc875bf55f1ad093dc95244a201d745)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
ca1ed50ab3 gstreamer1.0-plugins-good: Update 1.16.2 -> 1.16.3
(From OE-Core rev: 0c9cdf7961e0991c5d25f18954bbd8fe243df225)

(From OE-Core rev: 1693e87495b2ecf63397b396930b8934a1478b88)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
46db037b1f gstreamer1.0-plugins-base: Update 1.16.2 -> 1.16.3
(From OE-Core rev: c38eefb0693b771a97ab7dc15103cb5be6a003f7)

(From OE-Core rev: 77fdfb7f52f876c4530fdef77c17a540b60bf024)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
70761072f5 gstreamer1.0: Update 1.16.2 -> 1.16.3
(From OE-Core rev: d24f8ac481082cdb07f141508a2caf964167aec4)

(From OE-Core rev: 3ed1ccdf977b265dac2325095caa0e2b0764aa56)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
efa68c6490 gstreamer1.0: Fix reproducibility issue around libcap
Currently the gstreamer configuration depends on libcap and on whether
setcap is found on the host system.

Remove libcap from DEPENDS and only use it when the 'setcap'
PACKAGECONFIG is enabled.

    * 0004-capfix.patch
      Removed, as the same goals can be achieved with just the
      PACKAGECONFIG 'setcap' (see the sketch below)
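
A sketch of the configuration shape (the exact meson option string in
the recipe is not quoted here and may differ):

    # gstreamer1.0 recipe (sketch)
    PACKAGECONFIG[setcap] = ",,libcap libcap-native"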

(From OE-Core rev: 7691d3f963dc02570b5092db8f061c4d327b277a)

(From OE-Core rev: 3b186880c95e8ab120fee6304af52384b040aae1)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Nicolas Dechesne
3daa976efb poky.yaml: updates for 3.2
Update global variables for the 3.2 / Gatesgarth release.

(From yocto-docs rev: 7b699c26bfcf05666460746dd7a28eacbf98870c)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:04:18 +00:00
Nicolas Dechesne
4d35e4b168 poky.yaml: remove unused variables
There are plenty of variables in poky.yaml which are not used anywhere
in the docs, so let's remove them. We can always add the ones we need
back later.

Note that ORGEMAIL could be used in boilerplate.rst; however, that file
is included rather than parsed directly, and the yocto-vars.py
extension does not process it, so we cannot use a variable there.

(From yocto-docs rev: 3d58472daf118b25eda151bbf1a638905bba183a)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:04:18 +00:00
Nicolas Dechesne
dff89518bd conf: use bitbake 1.48 branch for intersphinx
We now publish the branch 1.48 of bitbake docs to
https://docs.yoctoproject.org/bitbake/1.48/

yocto-docs can refer to bitbake documentation using the intersphinx
extension. The gatesgarth docs should refer to the 1.48 branch of
bitbake, not the development branch.

(From yocto-docs rev: 09ae216a022b85fe1f03b55e6341e258c7215e20)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-09 14:40:32 +00:00
Nicolas Dechesne
cdae385f7d conf: update for release 3.2
conf.py:
* set version to 3.2

switchers.js
* add 3.2 release
* update 'dev' to 3.3

(From yocto-docs rev: eac8b251be5cd28ebec32345562c838dd5f43b00)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-09 13:16:18 +00:00
Robert P. J. Day
b7a7dde44a adt-manual: delete obsolete ADT manual, and related content
Since the ADT manual has long been superseded by the SDK manual,
remove the entire adt-manual directory, and the references to it in
the two top-level files "conf.py" and "poky.yaml".

(From yocto-docs rev: 64b2e83bddf6af0439ac7089ac95e60faa696cfc)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-06 15:14:01 +00:00
5674 changed files with 356033 additions and 321984 deletions

.gitignore vendored

@@ -30,10 +30,4 @@ hob-image-*.bb
pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log
_toaster_clones/
downloads/
sstate-cache/
toaster.sqlite
.vscode/
bitbake/lib/toaster/contrib/tts/.cache/*


@@ -1,2 +1,2 @@
# Template settings
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf/templates/default}
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}


@@ -1,71 +0,0 @@
OpenEmbedded-Core and Yocto Project Maintainer Information
==========================================================
OpenEmbedded and Yocto Project work jointly together to maintain the metadata,
layers, tools and sub-projects that make up their ecosystems.
The projects operate through collaborative development. This currently takes
place on mailing lists for many components as the "pull request on github"
workflow works well for single or small numbers of maintainers but we have
a large number, all with different specialisms and benefit from the mailing
list review process. Changes therefore undergo peer review through mailing
lists in many cases.
This file aims to acknowledge people with specific skills/knowledge/interest
both to recognise their contributions but also empower them to help lead and
curate those components. Where we have people with specialist knowledge in
particular areas, during review patches/feedback from these people in these
areas would generally carry weight.
This file is maintained in OE-Core but may refer to components that are separate
to it if that makes sense in the context of maintainership. The README of specific
layers and components should ultimately be definitive about the patch process and
maintainership for the component.
Recipe Maintainers
------------------
See meta/conf/distro/include/maintainers.inc
Component/Subsystem Maintainers
-------------------------------
* Kernel (inc. linux-yocto, perf): Bruce Ashfield
* Reproducible Builds: Joshua Watt
* Toaster: David Reyna
* Hash-Equivalence: Joshua Watt
* Recipe upgrade infrastructure: Alex Kanavin
* Toolchain: Khem Raj
* ptest-runner: Aníbal Limón
* opkg: Alex Stewart
* devtool: Saul Wold
* eSDK: Saul Wold
* overlayfs: Vyacheslav Yurkov
Maintainers needed
------------------
* Pseudo
* Layer Index
* recipetool
* QA framework/automated testing
* error reporting system/web UI
* wic
* Patchwork
* Patchtest
* Matchbox
* Sato
* Autobuilder
Layer Maintainers needed
------------------------
* meta-gplv2 (ideally new strategy but active maintainer welcome)
Shadow maintainers/development needed
--------------------------------------
* toaster
* bitbake

Makefile Normal file

@@ -0,0 +1,35 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
DESTDIR = final
ifeq ($(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi),0)
$(error "The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed")
endif
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile.sphinx clean publish

publish: Makefile.sphinx html singlehtml
	rm -rf $(BUILDDIR)/$(DESTDIR)/
	mkdir -p $(BUILDDIR)/$(DESTDIR)/
	cp -r $(BUILDDIR)/html/* $(BUILDDIR)/$(DESTDIR)/
	cp $(BUILDDIR)/singlehtml/index.html $(BUILDDIR)/$(DESTDIR)/singleindex.html
	sed -i -e 's@index.html#@singleindex.html#@g' $(BUILDDIR)/$(DESTDIR)/singleindex.html

clean:
	@rm -rf $(BUILDDIR)

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile.sphinx
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

README.OE-Core Normal file

@@ -0,0 +1,29 @@
OpenEmbedded-Core
=================
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/
The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
http://yoctoproject.org/documentation
Contributing
------------
Please refer to
http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches.
Mailing list:
http://lists.openembedded.org/mailman/listinfo/openembedded-core
Source code:
http://git.openembedded.org/openembedded-core/


@@ -1,33 +0,0 @@
OpenEmbedded-Core
=================
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
https://www.openembedded.org/
The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
https://docs.yoctoproject.org/
Contributing
------------
Please refer to our contributor guide here: https://docs.yoctoproject.org/dev/contributor-guide/
for full details on how to submit changes.
As a quick guide, patches should be sent to openembedded-core@lists.openembedded.org
The git command to do that would be:
git send-email -M -1 --to openembedded-core@lists.openembedded.org
Mailing list:
https://lists.openembedded.org/g/openembedded-core
Source code:
https://git.openembedded.org/openembedded-core/

README.hardware Symbolic link

@@ -0,0 +1 @@
meta-yocto-bsp/README.hardware


@@ -1 +0,0 @@
meta-yocto-bsp/README.hardware.md


@@ -1 +0,0 @@
README.poky.md

README.poky Symbolic link

@@ -0,0 +1 @@
meta-poky/README.poky


@@ -1 +0,0 @@
meta-poky/README.poky.md


@@ -1,22 +0,0 @@
How to Report a Potential Vulnerability?
========================================
If you would like to report a public issue (for example, one with a released
CVE number), please report it using the
[https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Security Security Bugzilla]
If you are dealing with a not-yet released or urgent issue, please send a
message to security AT yoctoproject DOT org, including as many details as
possible: the layer or software module affected, the recipe and its version,
and any example code, if available.
Branches maintained with security fixes
---------------------------------------
See [https://wiki.yoctoproject.org/wiki/Stable_Release_and_LTS Stable release and LTS]
for detailed info regarding the policies and maintenance of Stable branches.
The [https://wiki.yoctoproject.org/wiki/Releases Release page] contains a list of all
releases of the Yocto Project. Versions in grey are no longer actively maintained with
security patches, but well-tested patches may still be accepted for them for
significant issues.


@@ -7,57 +7,29 @@ One of BitBake's main users, OpenEmbedded, takes this core and builds embedded L
stacks using a task-oriented approach.
For information about Bitbake, see the OpenEmbedded website:
https://www.openembedded.org/
http://www.openembedded.org/
Bitbake plain documentation can be found under the doc directory or its integrated
html version at the Yocto Project website:
https://docs.yoctoproject.org
Bitbake requires Python version 3.8 or newer.
http://yoctoproject.org/documentation
Contributing
------------
Please refer to our contributor guide here: https://docs.yoctoproject.org/contributor-guide/
for full details on how to submit changes.
As a quick guide, patches should be sent to bitbake-devel@lists.openembedded.org
The git command to do that would be:
Please refer to
http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches, just note that the latter documentation is intended
for OpenEmbedded (and its core) not bitbake patches (bitbake-devel@lists.openembedded.org)
but in general main guidelines apply. Once the commit(s) have been created, the way to send
the patch is through git-send-email. For example, to send the last commit (HEAD) on current
branch, type:
git send-email -M -1 --to bitbake-devel@lists.openembedded.org
If you're sending a patch related to the BitBake manual, make sure you copy
the Yocto Project documentation mailing list:
git send-email -M -1 --to bitbake-devel@lists.openembedded.org --cc docs@lists.yoctoproject.org
Mailing list:
https://lists.openembedded.org/g/bitbake-devel
http://lists.openembedded.org/mailman/listinfo/bitbake-devel
Source code:
https://git.openembedded.org/bitbake/
Testing
-------
Bitbake has a testsuite located in lib/bb/tests/ whichs aim to try and prevent regressions.
You can run this with "bitbake-selftest". In particular the fetcher is well covered since
it has so many corner cases. The datastore has many tests too. Testing with the testsuite is
recommended before submitting patches, particularly to the fetcher and datastore. We also
appreciate new test cases and may require them for more obscure issues.
To run the tests "zstd" and "git" must be installed.
The assumption is made that this testsuite is run from an initialized OpenEmbedded build
environment (i.e. `source oe-init-build-env` is used). If this is not the case, run the
testsuite as follows:
export PATH=$(pwd)/bin:$PATH
bin/bitbake-selftest
The testsuite can alternatively be executed using pytest, e.g. obtained from PyPI (in this
case, the PATH is configured automatically):
pytest
http://git.openembedded.org/bitbake/


@@ -1,24 +0,0 @@
How to Report a Potential Vulnerability?
========================================
If you would like to report a public issue (for example, one with a released
CVE number), please report it using the
[https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Security Security Bugzilla].
If you have a patch ready, submit it following the same procedure as any other
patch as described in README.md.
If you are dealing with a not-yet released or urgent issue, please send a
message to security AT yoctoproject DOT org, including as many details as
possible: the layer or software module affected, the recipe and its version,
and any example code, if available.
Branches maintained with security fixes
---------------------------------------
See [https://wiki.yoctoproject.org/wiki/Stable_Release_and_LTS Stable release and LTS]
for detailed info regarding the policies and maintenance of Stable branches.
The [https://wiki.yoctoproject.org/wiki/Releases Release page] contains a list of all
releases of the Yocto Project. Versions in grey are no longer actively maintained with
security patches, but well-tested patches may still be accepted for them for
significant issues.


@@ -12,8 +12,6 @@
import os
import sys
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
'lib'))
@@ -25,9 +23,10 @@ except RuntimeError as exc:
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
__version__ = "2.8.0"
__version__ = "1.48.0"
if __name__ == "__main__":
if __version__ != bb.__version__:


@@ -11,8 +11,6 @@
import os
import sys
import warnings
warnings.simplefilter("default")
import argparse
import logging
import pickle
@@ -28,7 +26,6 @@ logger = bb.msg.logger_create(myname)
is_dump = myname == 'bitbake-dumpsig'
def find_siginfo(tinfoil, pn, taskname, sigs=None):
result = None
tinfoil.set_event_mask(['bb.event.FindSigInfoResult',
@@ -54,7 +51,6 @@ def find_siginfo(tinfoil, pn, taskname, sigs=None):
sys.exit(2)
return result
def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
""" Find the most recent signature files for the specified PN/task """
@@ -63,25 +59,22 @@ def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
if sig1 and sig2:
sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
if not sigfiles:
if len(sigfiles) == 0:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
elif sig1 not in sigfiles:
elif not sig1 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig1))
sys.exit(1)
elif sig2 not in sigfiles:
elif not sig2 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
else:
sigfiles = find_siginfo(bbhandler, pn, taskname)
latestsigs = sorted(sigfiles.keys(), key=lambda h: sigfiles[h]['time'])[-2:]
if not latestsigs:
filedates = find_siginfo(bbhandler, pn, taskname)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-2:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
sig1 = latestsigs[0]
sig2 = latestsigs[1]
latestfiles = [sigfiles[sig1]['path'], sigfiles[sig2]['path']]
return latestfiles
@@ -92,14 +85,14 @@ def recursecb(key, hash1, hash2):
hashfiles = find_siginfo(tinfoil, key, None, hashes)
recout = []
if not hashfiles:
if len(hashfiles) == 0:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
elif hash1 not in hashfiles:
elif not hash1 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
elif hash2 not in hashfiles:
elif not hash2 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1]['path'], hashfiles[hash2]['path'], recursecb, color=color)
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
for change in out2:
for line in change.splitlines():
recout.append(' ' + line)
@@ -116,36 +109,36 @@ parser.add_argument('-D', '--debug',
if is_dump:
parser.add_argument("-t", "--task",
help="find the signature data file for the last run of the specified task",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
help="find the signature data file for the last run of the specified task",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("sigdatafile1",
help="Signature file to dump. Not used when using -t/--task.",
action="store", nargs='?', metavar="sigdatafile")
help="Signature file to dump. Not used when using -t/--task.",
action="store", nargs='?', metavar="sigdatafile")
else:
parser.add_argument('-c', '--color',
help='Colorize the output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
help='Colorize the output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
parser.add_argument('-d', '--dump',
help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
action='store_true')
help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
action='store_true')
parser.add_argument("-t", "--task",
help="find the signature data files for the last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
help="find the signature data files for the last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("-s", "--signature",
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
parser.add_argument("sigdatafile1",
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
parser.add_argument("sigdatafile2",
help="Second signature file to compare",
action="store", nargs='?')
help="Second signature file to compare",
action="store", nargs='?')
options = parser.parse_args()
if is_dump:
@@ -163,8 +156,7 @@ if options.taskargs:
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
if not options.dump and options.sigargs:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0],
options.sigargs[1])
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0], options.sigargs[1])
else:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1])
@@ -173,8 +165,7 @@ if options.taskargs:
output = bb.siggen.dump_sigfile(files[-1])
else:
if len(files) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (
options.taskargs[0], options.taskargs[1]))
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (options.taskargs[0], options.taskargs[1]))
sys.exit(1)
# Recurse into signature comparison


@@ -1,60 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2021 Richard Purdie
#
# SPDX-License-Identifier: GPL-2.0-only
#
import argparse
import io
import os
import sys
import warnings
warnings.simplefilter("default")
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.tinfoil
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Bitbake Query Variable")
parser.add_argument("variable", help="variable name to query")
parser.add_argument("-r", "--recipe", help="Recipe name to query", default=None, required=False)
parser.add_argument('-u', '--unexpand', help='Do not expand the value (with --value)', action="store_true")
parser.add_argument('-f', '--flag', help='Specify a variable flag to query (with --value)', default=None)
parser.add_argument('--value', help='Only report the value, no history and no variable name', action="store_true")
parser.add_argument('-q', '--quiet', help='Silence bitbake server logging', action="store_true")
parser.add_argument('--ignore-undefined', help='Suppress any errors related to undefined variables', action="store_true")
args = parser.parse_args()
if not args.value:
if args.unexpand:
sys.exit("--unexpand only makes sense with --value")
if args.flag:
sys.exit("--flag only makes sense with --value")
quiet = args.quiet or args.value
with bb.tinfoil.Tinfoil(tracking=True, setup_logging=not quiet) as tinfoil:
if args.recipe:
tinfoil.prepare(quiet=3 if quiet else 2)
d = tinfoil.parse_recipe(args.recipe)
else:
tinfoil.prepare(quiet=2, config_only=True)
d = tinfoil.config_data
value = None
if args.flag:
value = d.getVarFlag(args.variable, args.flag, expand=not args.unexpand)
if value is None and not args.ignore_undefined:
sys.exit(f"The flag '{args.flag}' is not defined for variable '{args.variable}'")
else:
value = d.getVar(args.variable, expand=not args.unexpand)
if value is None and not args.ignore_undefined:
sys.exit(f"The variable '{args.variable}' is not defined")
if args.value:
print(str(value if value is not None else ""))
else:
bb.data.emit_var(args.variable, d=d, all=True)


@@ -13,10 +13,6 @@ import pprint
import sys
import threading
import time
import warnings
import netrc
import json
warnings.simplefilter("default")
try:
import tqdm
@@ -38,42 +34,18 @@ except ImportError:
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import hashserv
import bb.asyncrpc
DEFAULT_ADDRESS = 'unix://./hashserve.sock'
METHOD = 'stress.test.method'
def print_user(u):
print(f"Username: {u['username']}")
if "permissions" in u:
print("Permissions: " + " ".join(u["permissions"]))
if "token" in u:
print(f"Token: {u['token']}")
def main():
def handle_get(args, client):
result = client.get_taskhash(args.method, args.taskhash, all_properties=True)
if not result:
return 0
print(json.dumps(result, sort_keys=True, indent=4))
return 0
def handle_get_outhash(args, client):
result = client.get_outhash(args.method, args.outhash, args.taskhash)
if not result:
return 0
print(json.dumps(result, sort_keys=True, indent=4))
return 0
def handle_stats(args, client):
if args.reset:
s = client.reset_stats()
else:
s = client.get_stats()
print(json.dumps(s, sort_keys=True, indent=4))
pprint.pprint(s)
return 0
def handle_stress(args, client):
@@ -82,24 +54,25 @@ def main():
nonlocal missed_hashes
nonlocal max_time
with hashserv.create_client(args.address) as client:
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))
client = hashserv.create_client(args.address)
start_time = time.perf_counter()
l = client.get_unihash(METHOD, taskhash.hexdigest())
elapsed = time.perf_counter() - start_time
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))
with lock:
if l:
found_hashes += 1
else:
missed_hashes += 1
start_time = time.perf_counter()
l = client.get_unihash(METHOD, taskhash.hexdigest())
elapsed = time.perf_counter() - start_time
max_time = max(elapsed, max_time)
pbar.update()
with lock:
if l:
found_hashes += 1
else:
missed_hashes += 1
max_time = max(elapsed, max_time)
pbar.update()
max_time = 0
found_hashes = 0
@@ -138,114 +111,12 @@ def main():
with lock:
pbar.update()
def handle_remove(args, client):
where = {k: v for k, v in args.where}
if where:
result = client.remove(where)
print("Removed %d row(s)" % (result["count"]))
else:
print("No query specified")
def handle_clean_unused(args, client):
result = client.clean_unused(args.max_age)
print("Removed %d rows" % (result["count"]))
return 0
def handle_refresh_token(args, client):
r = client.refresh_token(args.username)
print_user(r)
def handle_set_user_permissions(args, client):
r = client.set_user_perms(args.username, args.permissions)
print_user(r)
def handle_get_user(args, client):
r = client.get_user(args.username)
print_user(r)
def handle_get_all_users(args, client):
users = client.get_all_users()
print("{username:20}| {permissions}".format(username="Username", permissions="Permissions"))
print(("-" * 20) + "+" + ("-" * 20))
for u in users:
print("{username:20}| {permissions}".format(username=u["username"], permissions=" ".join(u["permissions"])))
def handle_new_user(args, client):
r = client.new_user(args.username, args.permissions)
print_user(r)
def handle_delete_user(args, client):
r = client.delete_user(args.username)
print_user(r)
def handle_get_db_usage(args, client):
usage = client.get_db_usage()
print(usage)
tables = sorted(usage.keys())
print("{name:20}| {rows:20}".format(name="Table name", rows="Rows"))
print(("-" * 20) + "+" + ("-" * 20))
for t in tables:
print("{name:20}| {rows:<20}".format(name=t, rows=usage[t]["rows"]))
print()
total_rows = sum(t["rows"] for t in usage.values())
print(f"Total rows: {total_rows}")
def handle_get_db_query_columns(args, client):
columns = client.get_db_query_columns()
print("\n".join(sorted(columns)))
def handle_gc_status(args, client):
result = client.gc_status()
if not result["mark"]:
print("No Garbage collection in progress")
return 0
print("Current Mark: %s" % result["mark"])
print("Total hashes to keep: %d" % result["keep"])
print("Total hashes to remove: %s" % result["remove"])
return 0
def handle_gc_mark(args, client):
where = {k: v for k, v in args.where}
result = client.gc_mark(args.mark, where)
print("New hashes marked: %d" % result["count"])
return 0
def handle_gc_sweep(args, client):
result = client.gc_sweep(args.mark)
print("Removed %d rows" % result["count"])
return 0
def handle_unihash_exists(args, client):
result = client.unihash_exists(args.unihash)
if args.quiet:
return 0 if result else 1
print("true" if result else "false")
return 0
parser = argparse.ArgumentParser(description='Hash Equivalence Client')
parser.add_argument('--address', default=DEFAULT_ADDRESS, help='Server address (default "%(default)s")')
parser.add_argument('--log', default='WARNING', help='Set logging level')
parser.add_argument('--login', '-l', metavar="USERNAME", help="Authenticate as USERNAME")
parser.add_argument('--password', '-p', metavar="TOKEN", help="Authenticate using token TOKEN")
parser.add_argument('--become', '-b', metavar="USERNAME", help="Impersonate user USERNAME (if allowed) when performing actions")
parser.add_argument('--no-netrc', '-n', action="store_false", dest="netrc", help="Do not use .netrc")
subparsers = parser.add_subparsers()
get_parser = subparsers.add_parser('get', help="Get the unihash for a taskhash")
get_parser.add_argument("method", help="Method to query")
get_parser.add_argument("taskhash", help="Task hash to query")
get_parser.set_defaults(func=handle_get)
get_outhash_parser = subparsers.add_parser('get-outhash', help="Get output hash information")
get_outhash_parser.add_argument("method", help="Method to query")
get_outhash_parser.add_argument("outhash", help="Output hash to query")
get_outhash_parser.add_argument("taskhash", help="Task hash to query")
get_outhash_parser.set_defaults(func=handle_get_outhash)
stats_parser = subparsers.add_parser('stats', help='Show server stats')
stats_parser.add_argument('--reset', action='store_true',
help='Reset server stats')
@@ -264,64 +135,6 @@ def main():
help='Include string in outhash')
stress_parser.set_defaults(func=handle_stress)
remove_parser = subparsers.add_parser('remove', help="Remove hash entries")
remove_parser.add_argument("--where", "-w", metavar="KEY VALUE", nargs=2, action="append", default=[],
help="Remove entries from table where KEY == VALUE")
remove_parser.set_defaults(func=handle_remove)
clean_unused_parser = subparsers.add_parser('clean-unused', help="Remove unused database entries")
clean_unused_parser.add_argument("max_age", metavar="SECONDS", type=int, help="Remove unused entries older than SECONDS old")
clean_unused_parser.set_defaults(func=handle_clean_unused)
refresh_token_parser = subparsers.add_parser('refresh-token', help="Refresh auth token")
refresh_token_parser.add_argument("--username", "-u", help="Refresh the token for another user (if authorized)")
refresh_token_parser.set_defaults(func=handle_refresh_token)
set_user_perms_parser = subparsers.add_parser('set-user-perms', help="Set new permissions for user")
set_user_perms_parser.add_argument("--username", "-u", help="Username", required=True)
set_user_perms_parser.add_argument("permissions", metavar="PERM", nargs="*", default=[], help="New permissions")
set_user_perms_parser.set_defaults(func=handle_set_user_permissions)
get_user_parser = subparsers.add_parser('get-user', help="Get user")
get_user_parser.add_argument("--username", "-u", help="Username")
get_user_parser.set_defaults(func=handle_get_user)
get_all_users_parser = subparsers.add_parser('get-all-users', help="List all users")
get_all_users_parser.set_defaults(func=handle_get_all_users)
new_user_parser = subparsers.add_parser('new-user', help="Create new user")
new_user_parser.add_argument("--username", "-u", help="Username", required=True)
new_user_parser.add_argument("permissions", metavar="PERM", nargs="*", default=[], help="New permissions")
new_user_parser.set_defaults(func=handle_new_user)
delete_user_parser = subparsers.add_parser('delete-user', help="Delete user")
delete_user_parser.add_argument("--username", "-u", help="Username", required=True)
delete_user_parser.set_defaults(func=handle_delete_user)
db_usage_parser = subparsers.add_parser('get-db-usage', help="Database Usage")
db_usage_parser.set_defaults(func=handle_get_db_usage)
db_query_columns_parser = subparsers.add_parser('get-db-query-columns', help="Show columns that can be used in database queries")
db_query_columns_parser.set_defaults(func=handle_get_db_query_columns)
gc_status_parser = subparsers.add_parser("gc-status", help="Show garbage collection status")
gc_status_parser.set_defaults(func=handle_gc_status)
gc_mark_parser = subparsers.add_parser('gc-mark', help="Mark hashes to be kept for garbage collection")
gc_mark_parser.add_argument("mark", help="Mark for this garbage collection operation")
gc_mark_parser.add_argument("--where", "-w", metavar="KEY VALUE", nargs=2, action="append", default=[],
help="Keep entries in table where KEY == VALUE")
gc_mark_parser.set_defaults(func=handle_gc_mark)
gc_sweep_parser = subparsers.add_parser('gc-sweep', help="Perform garbage collection and delete any entries that are not marked")
gc_sweep_parser.add_argument("mark", help="Mark for this garbage collection operation")
gc_sweep_parser.set_defaults(func=handle_gc_sweep)
unihash_exists_parser = subparsers.add_parser('unihash-exists', help="Check if a unihash is known to the server")
unihash_exists_parser.add_argument("--quiet", action="store_true", help="Don't print status. Instead, exit with 0 if unihash exists and 1 if it does not")
unihash_exists_parser.add_argument("unihash", help="Unihash to check")
unihash_exists_parser.set_defaults(func=handle_unihash_exists)
args = parser.parse_args()
logger = logging.getLogger('hashserv')
@@ -335,30 +148,14 @@ def main():
console.setLevel(level)
logger.addHandler(console)
login = args.login
password = args.password
if login is None and args.netrc:
try:
n = netrc.netrc()
auth = n.authenticators(args.address)
if auth is not None:
login, _, password = auth
except FileNotFoundError:
pass
except netrc.NetrcParseError as e:
sys.stderr.write(f"Error parsing {e.filename}:{e.lineno}: {e.msg}\n")
func = getattr(args, 'func', None)
if func:
try:
with hashserv.create_client(args.address, login, password) as client:
if args.become:
client.become_user(args.become)
return func(args, client)
except bb.asyncrpc.InvokeError as e:
print(f"ERROR: {e}")
return 1
client = hashserv.create_client(args.address)
# Try to establish a connection to the server now to detect failures
# early
client.connect()
return func(args, client)
return 0
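A minimal sketch of exercising the same client API these handlers wrap, assuming the script's lib/ path setup so that hashserv imports; the address, mark name and hash value are placeholders, and only create_client(), unihash_exists(), gc_mark() and gc_sweep() are taken from the code above.
import hashserv
# placeholder address and hash; anonymous access is assumed to be permitted
with hashserv.create_client("unix://./hashserve.sock", None, None) as client:
    # same check as "bitbake-hashclient unihash-exists --quiet HASH"
    if client.unihash_exists("0123abcd"):
        print("unihash already known to the server")
    # mark-and-sweep GC, mirroring handle_gc_mark()/handle_gc_sweep()
    marked = client.gc_mark("mark-1", {"unihash": "0123abcd"})
    print("New hashes marked: %d" % marked["count"])
    swept = client.gc_sweep("mark-1")
    print("Removed %d rows" % swept["count"])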

View File

@@ -10,162 +10,53 @@ import sys
import logging
import argparse
import sqlite3
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), "lib"))
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import hashserv
from hashserv.server import DEFAULT_ANON_PERMS
VERSION = "1.0.0"
DEFAULT_BIND = "unix://./hashserve.sock"
DEFAULT_BIND = 'unix://./hashserve.sock'
def main():
parser = argparse.ArgumentParser(
description="Hash Equivalence Reference Server. Version=%s" % VERSION,
formatter_class=argparse.RawTextHelpFormatter,
epilog="""
The bind address may take one of the following formats:
unix://PATH - Bind to unix domain socket at PATH
ws://ADDRESS:PORT - Bind to websocket on ADDRESS:PORT
ADDRESS:PORT - Bind to raw TCP socket on ADDRESS:PORT
parser = argparse.ArgumentParser(description='Hash Equivalence Reference Server. Version=%s' % VERSION,
epilog='''The bind address is the path to a unix domain socket if it is
prefixed with "unix://". Otherwise, it is an IP address
and port in form ADDRESS:PORT. To bind to all addresses, leave
the ADDRESS empty, e.g. "--bind :8686". To bind to a specific
IPv6 address, enclose the address in "[]", e.g.
"--bind [::1]:8686"'''
)
To bind to all addresses, leave the ADDRESS empty, e.g. "--bind :8686" or
"--bind ws://:8686". To bind to a specific IPv6 address, enclose the address in
"[]", e.g. "--bind [::1]:8686" or "--bind ws://[::1]:8686"
Note that the default Anonymous permissions are designed to not break existing
server instances when upgrading, but are not particularly secure defaults. If
you want to use authentication, it is recommended that you use "--anon-perms
@read" to only give anonymous users read access, or "--anon-perms @none" to
give un-authenticated users no access at all.
Setting "--anon-perms @all" or "--anon-perms @user-admin" is not allowed, since
this would allow anonymous users to manage all user accounts, which is a bad
idea.
If you are using user authentication, you should run your server in websockets
mode with an SSL terminating load balancer in front of it (as this server does
not implement SSL). Otherwise all usernames and passwords will be transmitted
in the clear. When configured this way, clients can connect using a secure
websocket, as in "wss://SERVER:PORT"
The following permissions are supported by the server:
@none - No permissions
@read - The ability to read equivalent hashes from the server
@report - The ability to report equivalent hashes to the server
@db-admin - Manage the hash database(s). This includes cleaning the
database, removing hashes, etc.
@user-admin - The ability to manage user accounts. This includes, creating
users, deleting users, resetting login tokens, and assigning
permissions.
@all - All possible permissions, including any that may be added
in the future
""",
)
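The epilog above describes the bind formats and permission tokens; a hedged sketch of wiring them straight into hashserv.create_server(), mirroring the call near the end of this file (the bind address, database path and credentials are illustrative values only):
import hashserv
server = hashserv.create_server(
    "ws://[::1]:8686",        # websocket bind, as described in the epilog
    "./hashserv.db",
    anon_perms=["@read"],     # anonymous users may read but not report
    admin_username="admin",   # default admin user, created if configured
    admin_password="s3cret",
)
server.serve_forever()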
parser.add_argument(
"-b",
"--bind",
default=os.environ.get("HASHSERVER_BIND", DEFAULT_BIND),
help='Bind address (default $HASHSERVER_BIND, "%(default)s")',
)
parser.add_argument(
"-d",
"--database",
default=os.environ.get("HASHSERVER_DB", "./hashserv.db"),
help='Database file (default $HASHSERVER_DB, "%(default)s")',
)
parser.add_argument(
"-l",
"--log",
default=os.environ.get("HASHSERVER_LOG_LEVEL", "WARNING"),
help='Set logging level (default $HASHSERVER_LOG_LEVEL, "%(default)s")',
)
parser.add_argument(
"-u",
"--upstream",
default=os.environ.get("HASHSERVER_UPSTREAM", None),
help="Upstream hashserv to pull hashes from ($HASHSERVER_UPSTREAM)",
)
parser.add_argument(
"-r",
"--read-only",
action="store_true",
help="Disallow write operations from clients ($HASHSERVER_READ_ONLY)",
)
parser.add_argument(
"--db-username",
default=os.environ.get("HASHSERVER_DB_USERNAME", None),
help="Database username ($HASHSERVER_DB_USERNAME)",
)
parser.add_argument(
"--db-password",
default=os.environ.get("HASHSERVER_DB_PASSWORD", None),
help="Database password ($HASHSERVER_DB_PASSWORD)",
)
parser.add_argument(
"--anon-perms",
metavar="PERM[,PERM[,...]]",
default=os.environ.get("HASHSERVER_ANON_PERMS", ",".join(DEFAULT_ANON_PERMS)),
help='Permissions to give anonymous users (default $HASHSERVER_ANON_PERMS, "%(default)s")',
)
parser.add_argument(
"--admin-user",
default=os.environ.get("HASHSERVER_ADMIN_USER", None),
help="Create default admin user with name ADMIN_USER ($HASHSERVER_ADMIN_USER)",
)
parser.add_argument(
"--admin-password",
default=os.environ.get("HASHSERVER_ADMIN_PASSWORD", None),
help="Create default admin user with password ADMIN_PASSWORD ($HASHSERVER_ADMIN_PASSWORD)",
)
parser.add_argument('--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
parser.add_argument('--database', default='./hashserv.db', help='Database file (default "%(default)s")')
parser.add_argument('--log', default='WARNING', help='Set logging level')
args = parser.parse_args()
logger = logging.getLogger("hashserv")
logger = logging.getLogger('hashserv')
level = getattr(logging, args.log.upper(), None)
if not isinstance(level, int):
raise ValueError("Invalid log level: %s (Try ERROR/WARNING/INFO/DEBUG)" % args.log)
raise ValueError('Invalid log level: %s' % args.log)
logger.setLevel(level)
console = logging.StreamHandler()
console.setLevel(level)
logger.addHandler(console)
read_only = (os.environ.get("HASHSERVER_READ_ONLY", "0") == "1") or args.read_only
if "," in args.anon_perms:
anon_perms = args.anon_perms.split(",")
else:
anon_perms = args.anon_perms.split()
server = hashserv.create_server(
args.bind,
args.database,
upstream=args.upstream,
read_only=read_only,
db_username=args.db_username,
db_password=args.db_password,
anon_perms=anon_perms,
admin_username=args.admin_user,
admin_password=args.admin_password,
)
server = hashserv.create_server(args.bind, args.database)
server.serve_forever()
return 0
if __name__ == "__main__":
if __name__ == '__main__':
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)

View File

@@ -14,8 +14,6 @@ import logging
import os
import sys
import argparse
import warnings
warnings.simplefilter("default")
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
@@ -68,11 +66,11 @@ def main():
registered = False
for plugin in plugins:
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if hasattr(plugin, 'register_commands'):
registered = True
plugin.register_commands(subparsers)
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if not registered:
logger.error("No commands registered - missing plugins?")

View File

@@ -1,83 +1,49 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import sys,logging
import argparse
import warnings
warnings.simplefilter("default")
import optparse
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), "lib"))
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))
import prserv
import prserv.serv
VERSION = "1.1.0"
__version__="1.0.0"
PRHOST_DEFAULT="0.0.0.0"
PRHOST_DEFAULT='0.0.0.0'
PRPORT_DEFAULT=8585
def main():
parser = argparse.ArgumentParser(
description="BitBake PR Server. Version=%s" % VERSION,
formatter_class=argparse.RawTextHelpFormatter)
parser = optparse.OptionParser(
version="Bitbake PR Service Core version %s, %%prog version %s" % (prserv.__version__, __version__),
usage = "%prog < --start | --stop > [options]")
parser.add_argument(
"-f",
"--file",
default="prserv.sqlite3",
help="database filename (default: prserv.sqlite3)",
)
parser.add_argument(
"-l",
"--log",
default="prserv.log",
help="log filename(default: prserv.log)",
)
parser.add_argument(
"--loglevel",
default="INFO",
help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
)
parser.add_argument(
"--start",
action="store_true",
help="start daemon",
)
parser.add_argument(
"--stop",
action="store_true",
help="stop daemon",
)
parser.add_argument(
"--host",
help="ip address to bind",
default=PRHOST_DEFAULT,
)
parser.add_argument(
"--port",
type=int,
default=PRPORT_DEFAULT,
help="port number (default: 8585)",
)
parser.add_argument(
"-r",
"--read-only",
action="store_true",
help="open database in read-only mode",
)
parser.add_option("-f", "--file", help="database filename(default: prserv.sqlite3)", action="store",
dest="dbfile", type="string", default="prserv.sqlite3")
parser.add_option("-l", "--log", help="log filename(default: prserv.log)", action="store",
dest="logfile", type="string", default="prserv.log")
parser.add_option("--loglevel", help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
action = "store", type="string", dest="loglevel", default = "INFO")
parser.add_option("--start", help="start daemon",
action="store_true", dest="start")
parser.add_option("--stop", help="stop daemon",
action="store_true", dest="stop")
parser.add_option("--host", help="ip address to bind", action="store",
dest="host", type="string", default=PRHOST_DEFAULT)
parser.add_option("--port", help="port number(default: 8585)", action="store",
dest="port", type="int", default=PRPORT_DEFAULT)
args = parser.parse_args()
prserv.init_logger(os.path.abspath(args.log), args.loglevel)
options, args = parser.parse_args(sys.argv)
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
if args.start:
ret=prserv.serv.start_daemon(args.file, args.host, args.port, os.path.abspath(args.log), args.read_only)
elif args.stop:
ret=prserv.serv.stop_daemon(args.host, args.port)
if options.start:
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile))
elif options.stop:
ret=prserv.serv.stop_daemon(options.host, options.port)
else:
ret=parser.print_help()
return ret

View File

@@ -7,8 +7,6 @@
import os
import sys, logging
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import unittest
@@ -31,7 +29,6 @@ tests = ["bb.tests.codeparser",
"bb.tests.runqueue",
"bb.tests.siggen",
"bb.tests.utils",
"bb.tests.compression",
"hashserv.tests",
"layerindexlib.tests.layerindexobj",
"layerindexlib.tests.restapi",

View File

@@ -8,16 +8,14 @@
import os
import sys
import warnings
warnings.simplefilter("default")
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 11 or not sys.argv[1].startswith("decafbad"):
if len(sys.argv) != 10 or not sys.argv[1].startswith("decafbad"):
print("bitbake-server is meant for internal execution by bitbake itself, please don't use it standalone.")
sys.exit(1)
@@ -28,11 +26,12 @@ readypipeinfd = int(sys.argv[3])
logfile = sys.argv[4]
lockname = sys.argv[5]
sockname = sys.argv[6]
timeout = float(sys.argv[7])
profile = bool(int(sys.argv[8]))
xmlrpcinterface = (sys.argv[9], int(sys.argv[10]))
timeout = sys.argv[7]
xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
if xmlrpcinterface[0] == "None":
xmlrpcinterface = (None, xmlrpcinterface[1])
if timeout == "None":
timeout = None
# Replace standard fds with our own
with open('/dev/null', 'r') as si:
@@ -51,5 +50,5 @@ logger = logging.getLogger("BitBake")
handler = bb.event.LogHandler()
logger.addHandler(handler)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface, profile)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface)

View File

@@ -1,14 +1,11 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import sys
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
from bb import fetch2
import logging
@@ -19,12 +16,11 @@ import signal
import pickle
import traceback
import queue
import shlex
import subprocess
from multiprocessing import Lock
from threading import Thread
bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -91,19 +87,19 @@ def worker_fire_prepickled(event):
worker_thread_exit = False
def worker_flush(worker_queue):
worker_queue_int = bytearray()
worker_queue_int = b""
global worker_pipe, worker_thread_exit
while True:
try:
worker_queue_int.extend(worker_queue.get(True, 1))
worker_queue_int = worker_queue_int + worker_queue.get(True, 1)
except queue.Empty:
pass
while (worker_queue_int or not worker_queue.empty()):
try:
(_, ready, _) = select.select([], [worker_pipe], [], 1)
if not worker_queue.empty():
worker_queue_int.extend(worker_queue.get())
worker_queue_int = worker_queue_int + worker_queue.get()
written = os.write(worker_pipe, worker_queue_int)
worker_queue_int = worker_queue_int[written:]
except (IOError, OSError) as e:
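The rewritten worker_flush() accumulates pipe data in a bytearray and extends it in place instead of rebuilding an immutable bytes object on every append; a small illustration of the pattern (chunk contents and the written count are arbitrary):
queue = bytearray()
for chunk in (b"<event>", b"payload", b"</event>"):
    queue.extend(chunk)   # amortised O(1) append; b"" + chunk copies everything
written = 7               # pretend os.write() accepted 7 bytes
queue = queue[written:]   # slice off what was flushed, keep the remainder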
@@ -121,10 +117,9 @@ def worker_child_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
with bb.utils.lock_timeout(worker_pipe_lock):
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe_lock.acquire()
worker_pipe.write(data)
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
raise
@@ -143,59 +138,40 @@ def sigterm_handler(signum, frame):
os.killpg(0, signal.SIGTERM)
sys.exit()
def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
fn = runtask['fn']
task = runtask['task']
taskname = runtask['taskname']
taskhash = runtask['taskhash']
unihash = runtask['unihash']
appends = runtask['appends']
layername = runtask['layername']
taskdepdata = runtask['taskdepdata']
quieterrors = runtask['quieterrors']
def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, extraconfigdata, quieterrors=False, dry_run_exec=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
envbackup = {}
fakeroot = False
fakeenv = {}
umask = None
uid = os.getuid()
gid = os.getgid()
taskdep = runtask['taskdep']
taskdep = workerdata["taskdeps"][fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
umask = taskdep['umask'][taskname]
elif workerdata["umask"]:
umask = workerdata["umask"]
if umask:
# umask might come in as a number or text string..
try:
umask = int(umask, 8)
umask = int(taskdep['umask'][taskname],8)
except TypeError:
pass
umask = taskdep['umask'][taskname]
dry_run = cfg.dry_run or runtask['dry_run']
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
fakeroot = True
envvars = (runtask['fakerootenv'] or "").split()
for key, value in (var.split('=',1) for var in envvars):
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
fakedirs = (runtask['fakerootdirs'] or "").split()
fakedirs = (workerdata["fakerootdirs"][fn] or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
logger.debug2('Running %s:%s under fakeroot, fakedirs: %s' %
logger.debug(2, 'Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
else:
envvars = (runtask['fakerootnoenv'] or "").split()
for key, value in (var.split('=',1) for var in envvars):
envvars = (workerdata["fakerootnoenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
@@ -237,21 +213,19 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, sigterm_handler)
# No stdin & stdout
# stdout is used as a status report channel and must not be used by child processes.
dumbio = os.open(os.devnull, os.O_RDWR)
os.dup2(dumbio, sys.stdin.fileno())
os.dup2(dumbio, sys.stdout.fileno())
# No stdin
newsi = os.open(os.devnull, os.O_RDWR)
os.dup2(newsi, sys.stdin.fileno())
if umask is not None:
if umask:
os.umask(umask)
try:
bb_cache = bb.cache.NoCache(databuilder)
(realfn, virtual, mc) = bb.cache.virtualfn2realfn(fn)
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
the_data.setVar('BB_CURRENTTASK', taskname.replace("do_", ""))
if cfg.limited_deps:
the_data.setVar("BB_LIMITEDDEPS", "1")
the_data.setVar("BUILDNAME", workerdata["buildname"])
@@ -265,20 +239,12 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
bb.parse.siggen.set_taskhashes(workerdata["newhashes"])
ret = 0
the_data = databuilder.parseRecipe(fn, appends, layername)
the_data = bb_cache.loadDataFull(fn, appends)
the_data.setVar('BB_TASKHASH', taskhash)
the_data.setVar('BB_UNIHASH', unihash)
bb.parse.siggen.setup_datacache_from_datastore(fn, the_data)
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
if not bb.utils.to_boolean(the_data.getVarFlag(taskname, 'network')):
if bb.utils.is_local_uid(uid):
logger.debug("Attempting to disable network for %s" % taskname)
bb.utils.disable_network(uid, gid)
else:
logger.debug("Skipping disable network for %s since %s is not a local uid." % (taskname, uid))
# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
@@ -307,20 +273,10 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
if not quieterrors:
logger.critical(traceback.format_exc())
os._exit(1)
sys.stdout.flush()
sys.stderr.flush()
try:
if dry_run:
return 0
try:
ret = bb.build.exec_task(fn, taskname, the_data, cfg.profile)
finally:
if fakeroot:
fakerootcmd = shlex.split(the_data.getVar("FAKEROOTCMD"))
subprocess.run(fakerootcmd + ['-S'], check=True, stdout=subprocess.PIPE)
return ret
return bb.build.exec_task(fn, taskname, the_data, cfg.profile)
except:
os._exit(1)
if not profiling:
@@ -352,12 +308,12 @@ class runQueueWorkerPipe():
if pipeout:
pipeout.close()
bb.utils.nonblockingfd(self.input)
self.queue = bytearray()
self.queue = b""
def read(self):
start = len(self.queue)
try:
self.queue.extend(self.input.read(102400) or b"")
self.queue = self.queue + (self.input.read(102400) or b"")
except (OSError, IOError) as e:
if e.errno != errno.EAGAIN:
raise
@@ -365,9 +321,7 @@ class runQueueWorkerPipe():
end = len(self.queue)
index = self.queue.find(b"</event>")
while index != -1:
msg = self.queue[:index+8]
assert msg.startswith(b"<event>") and msg.count(b"<event>") == 1
worker_fire_prepickled(msg)
worker_fire_prepickled(self.queue[:index+8])
self.queue = self.queue[index+8:]
index = self.queue.find(b"</event>")
return (end > start)
@@ -385,7 +339,7 @@ class BitbakeWorker(object):
def __init__(self, din):
self.input = din
bb.utils.nonblockingfd(self.input)
self.queue = bytearray()
self.queue = b""
self.cookercfg = None
self.databuilder = None
self.data = None
@@ -419,7 +373,7 @@ class BitbakeWorker(object):
if len(r) == 0:
# EOF on pipe, server must have terminated
self.sigterm_exception(signal.SIGTERM, None)
self.queue.extend(r)
self.queue = self.queue + r
except (OSError, IOError):
pass
if len(self.queue):
@@ -439,35 +393,19 @@ class BitbakeWorker(object):
while self.process_waitpid():
continue
def handle_item(self, item, func):
opening_tag = b"<" + item + b">"
if not self.queue.startswith(opening_tag):
return
tag_len = len(opening_tag)
if len(self.queue) < tag_len + 4:
# we need to receive more data
return
header = self.queue[tag_len:tag_len + 4]
payload_len = int.from_bytes(header, 'big')
# closing tag has length (tag_len + 1)
if len(self.queue) < tag_len * 2 + 1 + payload_len:
# we need to receive more data
return
index = self.queue.find(b"</" + item + b">")
if index != -1:
try:
func(self.queue[(tag_len + 4):index])
except pickle.UnpicklingError:
workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
raise
self.queue = self.queue[(index + len(b"</") + len(item) + len(b">")):]
if self.queue.startswith(b"<" + item + b">"):
index = self.queue.find(b"</" + item + b">")
while index != -1:
func(self.queue[(len(item) + 2):index])
self.queue = self.queue[(index + len(item) + 3):]
index = self.queue.find(b"</" + item + b">")
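The new handle_item() framing is an opening tag, a 4-byte big-endian payload length, the pickled payload, then the closing tag; a hedged sketch of the matching sender side (frame_message and the sample payload are illustrative, not the server's actual code):
import pickle
def frame_message(item, payload):
    body = pickle.dumps(payload)
    header = len(body).to_bytes(4, "big")   # read back via int.from_bytes(header, 'big')
    return b"<" + item + b">" + header + body + b"</" + item + b">"
msg = frame_message(b"runtask", {"fn": "example.bb"})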
def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)
self.databuilder = bb.cookerdata.CookerDataBuilder(self.cookercfg, worker=True)
self.databuilder.parseBaseConfiguration(worker=True)
self.databuilder.parseBaseConfiguration()
self.data = self.databuilder.data
def handle_extraconfigdata(self, data):
@@ -482,7 +420,6 @@ class BitbakeWorker(object):
for mc in self.databuilder.mcdata:
self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.workerdata["hashservaddr"])
self.databuilder.mcdata[mc].setVar("__bbclasstype", "recipe")
def handle_newtaskhashes(self, data):
self.workerdata["newhashes"] = pickle.loads(data)
@@ -500,15 +437,11 @@ class BitbakeWorker(object):
sys.exit(0)
def handle_runtask(self, data):
runtask = pickle.loads(data)
fn = runtask['fn']
task = runtask['task']
taskname = runtask['taskname']
fn, task, taskname, taskhash, unihash, quieterrors, appends, taskdepdata, dry_run_exec = pickle.loads(data)
workerlog_write("Handling runtask %s %s %s\n" % (task, fn, taskname))
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, self.extraconfigdata, runtask)
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, self.extraconfigdata, quieterrors, dry_run_exec)
self.build_pids[pid] = task
self.build_pipes[pid] = runQueueWorkerPipe(pipein, pipeout)
@@ -572,11 +505,9 @@ except BaseException as e:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
finally:
worker_thread_exit = True
worker_thread.join()
workerlog_write("exiting")
if not normalexit:
sys.exit(1)
worker_thread_exit = True
worker_thread.join()
workerlog_write("exitting")
sys.exit(0)

View File

@@ -1,7 +1,5 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -18,23 +16,19 @@ import itertools
import os
import subprocess
import sys
import warnings
warnings.simplefilter("default")
version = 1.0
git_cmd = ['git', '-c', 'safe.bareRepository=all']
def main():
if sys.version_info < (3, 4, 0):
sys.exit('Python 3.4 or greater is required')
git_dir = check_output(git_cmd + ['rev-parse', '--git-dir']).rstrip()
git_dir = check_output(['git', 'rev-parse', '--git-dir']).rstrip()
shallow_file = os.path.join(git_dir, 'shallow')
if os.path.exists(shallow_file):
try:
check_output(git_cmd + ['fetch', '--unshallow'])
check_output(['git', 'fetch', '--unshallow'])
except subprocess.CalledProcessError:
try:
os.unlink(shallow_file)
@@ -43,21 +37,21 @@ def main():
raise
args = process_args()
revs = check_output(git_cmd + ['rev-list'] + args.revisions).splitlines()
revs = check_output(['git', 'rev-list'] + args.revisions).splitlines()
make_shallow(shallow_file, args.revisions, args.refs)
ref_revs = check_output(git_cmd + ['rev-list'] + args.refs).splitlines()
ref_revs = check_output(['git', 'rev-list'] + args.refs).splitlines()
remaining_history = set(revs) & set(ref_revs)
for rev in remaining_history:
if check_output(git_cmd + ['rev-parse', '{}^@'.format(rev)]):
if check_output(['git', 'rev-parse', '{}^@'.format(rev)]):
sys.exit('Error: %s was not made shallow' % rev)
filter_refs(args.refs)
if args.shrink:
shrink_repo(git_dir)
subprocess.check_call(git_cmd + ['fsck', '--unreachable'])
subprocess.check_call(['git', 'fsck', '--unreachable'])
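The git_cmd list introduced above threads -c safe.bareRepository=all through every git invocation by building each argv from one shared prefix; a hedged sketch of the pattern (the helper name is illustrative):
import subprocess
git_cmd = ['git', '-c', 'safe.bareRepository=all']
def git_output(args):
    # same shape as the check_output(git_cmd + [...]) calls in this script
    return subprocess.check_output(git_cmd + args, universal_newlines=True)
head = git_output(['rev-parse', 'HEAD']).rstrip()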
def process_args():
@@ -74,12 +68,12 @@ def process_args():
args = parser.parse_args()
if args.refs:
args.refs = check_output(git_cmd + ['rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
args.refs = check_output(['git', 'rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
else:
args.refs = get_all_refs(lambda r, t, tt: t == 'commit' or tt == 'commit')
args.refs = list(filter(lambda r: not r.endswith('/HEAD'), args.refs))
args.revisions = check_output(git_cmd + ['rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
args.revisions = check_output(['git', 'rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
return args
@@ -97,7 +91,7 @@ def make_shallow(shallow_file, revisions, refs):
def get_all_refs(ref_filter=None):
"""Return all the existing refs in this repository, optionally filtering the refs."""
ref_output = check_output(git_cmd + ['for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
ref_output = check_output(['git', 'for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
ref_split = [tuple(iter_extend(l.rsplit('\t'), 3)) for l in ref_output.splitlines()]
if ref_filter:
ref_split = (e for e in ref_split if ref_filter(*e))
@@ -115,7 +109,7 @@ def filter_refs(refs):
all_refs = get_all_refs()
to_remove = set(all_refs) - set(refs)
if to_remove:
check_output(['xargs', '-0', '-n', '1'] + git_cmd + ['update-ref', '-d', '--no-deref'],
check_output(['xargs', '-0', '-n', '1', 'git', 'update-ref', '-d', '--no-deref'],
input=''.join(l + '\0' for l in to_remove))
@@ -128,7 +122,7 @@ def follow_history_intersections(revisions, refs):
if rev in seen:
continue
parents = check_output(git_cmd + ['rev-parse', '%s^@' % rev]).splitlines()
parents = check_output(['git', 'rev-parse', '%s^@' % rev]).splitlines()
yield rev
seen.add(rev)
@@ -136,12 +130,12 @@ def follow_history_intersections(revisions, refs):
if not parents:
continue
check_refs = check_output(git_cmd + ['merge-base', '--independent'] + sorted(refs)).splitlines()
check_refs = check_output(['git', 'merge-base', '--independent'] + sorted(refs)).splitlines()
for parent in parents:
for ref in check_refs:
print("Checking %s vs %s" % (parent, ref))
try:
merge_base = check_output(git_cmd + ['merge-base', parent, ref]).rstrip()
merge_base = check_output(['git', 'merge-base', parent, ref]).rstrip()
except subprocess.CalledProcessError:
continue
else:
@@ -161,14 +155,14 @@ def iter_except(func, exception, start=None):
def shrink_repo(git_dir):
"""Shrink the newly shallow repository, removing the unreachable objects."""
subprocess.check_call(git_cmd + ['reflog', 'expire', '--expire-unreachable=now', '--all'])
subprocess.check_call(git_cmd + ['repack', '-ad'])
subprocess.check_call(['git', 'reflog', 'expire', '--expire-unreachable=now', '--all'])
subprocess.check_call(['git', 'repack', '-ad'])
try:
os.unlink(os.path.join(git_dir, 'objects', 'info', 'alternates'))
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
subprocess.check_call(git_cmd + ['prune', '--expire', 'now'])
subprocess.check_call(['git', 'prune', '--expire', 'now'])
if __name__ == '__main__':

View File

@@ -33,7 +33,7 @@ databaseCheck()
$MANAGE migrate --noinput || retval=1
if [ $retval -eq 1 ]; then
echo "Failed migrations, halting system start" 1>&2
echo "Failed migrations, aborting system start" 1>&2
return $retval
fi
# Make sure that checksettings can pick up any value for TEMPLATECONF
@@ -41,7 +41,7 @@ databaseCheck()
$MANAGE checksettings --traceback || retval=1
if [ $retval -eq 1 ]; then
printf "\nError while checking settings; exiting\n"
printf "\nError while checking settings; aborting\n"
return $retval
fi
@@ -84,7 +84,7 @@ webserverStartAll()
echo "Starting webserver..."
$MANAGE runserver --noreload "$ADDR_PORT" \
</dev/null >>${TOASTER_LOGS_DIR}/web.log 2>&1 \
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
& echo $! >${BUILDDIR}/.toastermain.pid
sleep 1
@@ -181,14 +181,6 @@ WEBSERVER=1
export TOASTER_BUILDSERVER=1
ADDR_PORT="localhost:8000"
TOASTERDIR=`dirname $BUILDDIR`
# ${BUILDDIR}/toaster_logs/ became the default location for toaster logs
# This is needed by the django-log-viewer integration: https://pypi.org/project/django-log-viewer/
# If the directory does not exist, create it.
TOASTER_LOGS_DIR="${BUILDDIR}/toaster_logs/"
if [ ! -d $TOASTER_LOGS_DIR ]
then
mkdir $TOASTER_LOGS_DIR
fi
unset CMD
for param in $*; do
case $param in
@@ -256,7 +248,7 @@ fi
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
export TOASTER_DIR=$TOASTERDIR
export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS TOASTER_DIR"
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then
@@ -307,7 +299,7 @@ case $CMD in
export BITBAKE_UI='toasterui'
if [ $TOASTER_BUILDSERVER -eq 1 ] ; then
$MANAGE runbuilds \
</dev/null >>${TOASTER_LOGS_DIR}/toaster_runbuilds.log 2>&1 \
</dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
& echo $! >${BUILDDIR}/.runbuilds.pid
else
echo "Toaster build server not started."

View File

@@ -19,8 +19,6 @@ import sys
import json
import pickle
import codecs
import warnings
warnings.simplefilter("default")
from collections import namedtuple
@@ -30,23 +28,79 @@ sys.path.insert(0, join(dirname(dirname(abspath(__file__))), 'lib'))
import bb.cooker
from bb.ui import toasterui
from bb.ui import eventreplay
class EventPlayer:
"""Emulate a connection to a bitbake server."""
def __init__(self, eventfile, variables):
self.eventfile = eventfile
self.variables = variables
self.eventmask = []
def waitEvent(self, _timeout):
"""Read event from the file."""
line = self.eventfile.readline().strip()
if not line:
return
try:
event_str = json.loads(line)['vars'].encode('utf-8')
event = pickle.loads(codecs.decode(event_str, 'base64'))
event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
if event_name not in self.eventmask:
return
return event
except ValueError as err:
print("Failed loading ", line)
raise err
def runCommand(self, command_line):
"""Emulate running a command on the server."""
name = command_line[0]
if name == "getVariable":
var_name = command_line[1]
variable = self.variables.get(var_name)
if variable:
return variable['v'], None
return None, "Missing variable %s" % var_name
elif name == "getAllKeysWithFlags":
dump = {}
flaglist = command_line[1]
for key, val in self.variables.items():
try:
if not key.startswith("__"):
dump[key] = {
'v': val['v'],
'history' : val['history'],
}
for flag in flaglist:
dump[key][flag] = val[flag]
except Exception as err:
print(err)
return (dump, None)
elif name == 'setEventMask':
self.eventmask = command_line[-1]
return True, None
else:
raise Exception("Command %s not implemented" % command_line[0])
def getEventHandle(self):
"""
This method is called by toasterui.
The return value is passed to self.runCommand but not used there.
"""
pass
def main(argv):
with open(argv[-1]) as eventfile:
# load variables from the first line
variables = None
while line := eventfile.readline().strip():
try:
variables = json.loads(line)['allvariables']
break
except (KeyError, json.JSONDecodeError):
continue
if not variables:
sys.exit("Cannot find allvariables entry in event log file %s" % argv[-1])
eventfile.seek(0)
variables = json.loads(eventfile.readline().strip())['allvariables']
params = namedtuple('ConfigParams', ['observe_only'])(True)
player = eventreplay.EventPlayer(eventfile, variables)
player = EventPlayer(eventfile, variables)
return toasterui.main(player, player, params)

View File

@@ -1,23 +0,0 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2021 Joshua Watt <JPEWhacker@gmail.com>
#
# Dockerfile to build a bitbake hash equivalence server container
#
# From the root of the bitbake repository, run:
#
# docker build -f contrib/hashserv/Dockerfile .
#
FROM alpine:3.13.1
RUN apk add --no-cache python3
COPY bin/bitbake-hashserv /opt/bbhashserv/bin/
COPY lib/hashserv /opt/bbhashserv/lib/hashserv/
COPY lib/bb /opt/bbhashserv/lib/bb/
COPY lib/codegen.py /opt/bbhashserv/lib/codegen.py
COPY lib/ply /opt/bbhashserv/lib/ply/
COPY lib/bs4 /opt/bbhashserv/lib/bs4/
ENTRYPOINT ["/opt/bbhashserv/bin/bitbake-hashserv"]

View File

@@ -1,62 +0,0 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2022 Daniel Gomez <daniel@qtec.com>
#
# Dockerfile to build a bitbake PR service container
#
# From the root of the bitbake repository, run:
#
# docker build -f contrib/prserv/Dockerfile . -t prserv
#
# Running examples:
#
# 1. PR Service in RW mode, port 18585:
#
# docker run --detach --tty \
# --env PORT=18585 \
# --publish 18585:18585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#
# 2. PR Service in RO mode, default port (8585) and custom LOGFILE:
#
# docker run --detach --tty \
# --env DBMODE="--read-only" \
# --env LOGFILE=/var/lib/bbprserv/prservro.log \
# --publish 8585:8585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#
FROM alpine:3.14.4
RUN apk add --no-cache python3
COPY bin/bitbake-prserv /opt/bbprserv/bin/
COPY lib/prserv /opt/bbprserv/lib/prserv/
COPY lib/bb /opt/bbprserv/lib/bb/
COPY lib/codegen.py /opt/bbprserv/lib/codegen.py
COPY lib/ply /opt/bbprserv/lib/ply/
COPY lib/bs4 /opt/bbprserv/lib/bs4/
ENV PATH=$PATH:/opt/bbprserv/bin
RUN mkdir -p /var/lib/bbprserv
ENV DBFILE=/var/lib/bbprserv/prserv.sqlite3 \
LOGFILE=/var/lib/bbprserv/prserv.log \
LOGLEVEL=debug \
HOST=0.0.0.0 \
PORT=8585 \
DBMODE=""
ENTRYPOINT [ "/bin/sh", "-c", \
"bitbake-prserv \
--file=$DBFILE \
--log=$LOGFILE \
--loglevel=$LOGLEVEL \
--start \
--host=$HOST \
--port=$PORT \
$DBMODE \
&& tail -f $LOGFILE"]

View File

@@ -40,7 +40,7 @@ set cpo&vim
let s:maxoff = 50 " maximum number of lines to look backwards for ()
function! GetBBPythonIndent(lnum)
function GetPythonIndent(lnum)
" If this line is explicitly joined: If the previous line was also joined,
" line it up with that one, otherwise add two 'shiftwidth'
@@ -257,7 +257,7 @@ let b:did_indent = 1
setlocal indentkeys+=0\"
function! BitbakeIndent(lnum)
function BitbakeIndent(lnum)
if !has('syntax_items')
return -1
endif
@@ -315,7 +315,7 @@ function! BitbakeIndent(lnum)
endif
if index(["bbPyDefRegion", "bbPyFuncRegion"], name) != -1
let ret = GetBBPythonIndent(a:lnum)
let ret = GetPythonIndent(a:lnum)
" Should normally always be indented by at least one shiftwidth; but allow
" return of -1 (defer to autoindent) or -2 (force indent to 0)
if ret == 0

View File

@@ -20,7 +20,7 @@ fun! NewBBAppendTemplate()
set nopaste
" New bbappend template
0 put ='FILESEXTRAPATHS:prepend := \"${THISDIR}/${PN}:\"'
0 put ='FILESEXTRAPATHS_prepend := \"${THISDIR}/${PN}:\"'
2
if paste == 1

View File

@@ -51,9 +51,9 @@ syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ end=+'+
syn match bbExport "^export" nextgroup=bbIdentifier skipwhite
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbIdentifier "[a-zA-Z0-9\-_\.\/\+]\+" display contained
syn match bbVarDeref "${[a-zA-Z0-9\-_:\.\/\+]\+}" contained
syn match bbVarDeref "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)" contained nextgroup=bbVarValue
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+][${}a-zA-Z0-9\-_:\.\/\+]*\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbOverrideOperator,bbVarDeref nextgroup=bbVarEq
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn region bbVarPyValue start=+${@+ skip=+\\$+ end=+}+ contained contains=@python
@@ -63,14 +63,13 @@ syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*
" Includes and requires
syn keyword bbInclude inherit include require contained
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref
syn match bbIncludeLine "^\(inherit\|include\|require\)\s\+" contains=bbInclude nextgroup=bbIncludeRest
" Add taks and similar
syn keyword bbStatement addtask deltask addhandler after before EXPORT_FUNCTIONS contained
syn match bbStatementRest /[^\\]*$/ skipwhite contained contains=bbStatement,bbVarDeref,bbVarPyValue
syn region bbStatementRestCont start=/.*\\$/ end=/^[^\\]*$/ contained contains=bbStatement,bbVarDeref,bbVarPyValue,bbContinue keepend
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest,bbStatementRestCont
syn match bbStatementRest ".*$" skipwhite contained contains=bbStatement
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
" OE Important Functions
syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_compile do_stage do_install do_package contained
@@ -78,15 +77,13 @@ syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_comp
" Generic Functions
syn match bbFunction "\h[0-9A-Za-z_\-\.]*" display contained contains=bbOEFunctions
syn keyword bbOverrideOperator append prepend remove contained
" BitBake shell metadata
syn include @shell syntax/sh.vim
if exists("b:current_syntax")
unlet b:current_syntax
endif
syn keyword bbShFakeRootFlag fakeroot contained
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_:${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@shell
" Python value inside shell functions
@@ -94,7 +91,7 @@ syn region shDeref start=+${@+ skip=+\\$+ excludenl end=+}+ contained co
" BitBake python metadata
syn keyword bbPyFlag python contained
syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_:${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@python
" BitBake 'def'd python functions
@@ -123,9 +120,7 @@ hi def link bbPyFlag Type
hi def link bbPyDef Statement
hi def link bbStatement Statement
hi def link bbStatementRest Identifier
hi def link bbStatementRestCont Identifier
hi def link bbOEFunctions Special
hi def link bbVarPyValue PreProc
hi def link bbOverrideOperator Operator
let b:current_syntax = "bb"

View File

@@ -3,7 +3,7 @@
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= -W --keep-going -j auto
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build

View File

@@ -8,12 +8,12 @@ Manual Organization
Folders exist for individual manuals as follows:
* bitbake-user-manual --- The BitBake User Manual
* bitbake-user-manual - The BitBake User Manual
Each folder is self-contained regarding content and figures.
If you want to find HTML versions of the BitBake manuals on the web,
go to https://www.openembedded.org/wiki/Documentation.
go to http://www.openembedded.org/wiki/Documentation.
Sphinx
======
@@ -47,8 +47,8 @@ To install all required packages run:
To build the documentation locally, run:
$ cd doc
$ make html
$ cd documentation
$ make -f Makefile.sphinx html
The resulting HTML index page will be _build/html/index.html, and you
can browse your own copy of the locally generated documentation with

View File

@@ -1,9 +0,0 @@
<footer>
<hr/>
<div role="contentinfo">
<p>&copy; Copyright {{ copyright }}
<br>Last updated on {{ last_updated }} from the <a href="https://git.openembedded.org/bitbake/">bitbake</a> git repository.
</p>
</div>
</footer>

View File

@@ -16,7 +16,7 @@ data, or simply return information about the execution environment.
This chapter describes BitBake's execution process from start to finish
when you use it to create an image. The execution process is launched
using the following command form::
using the following command form: ::
$ bitbake target
@@ -32,7 +32,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
your project's ``local.conf`` configuration file.
A common method to determine this value for your build host is to run
the following::
the following: ::
$ grep processor /proc/cpuinfo
@@ -40,7 +40,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
the number of processors, which takes into account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely shows
eight processors, which is the value you would then assign to
:term:`BB_NUMBER_THREADS`.
``BB_NUMBER_THREADS``.
A possibly simpler solution is that some Linux distributions (e.g.
Debian and Ubuntu) provide the ``ncpus`` command.
@@ -65,13 +65,13 @@ data itself is of various types:
The ``layer.conf`` files are used to construct key variables such as
:term:`BBPATH` and :term:`BBFILES`.
:term:`BBPATH` is used to search for configuration and class files under the
``conf`` and ``classes`` directories, respectively. :term:`BBFILES` is used
``BBPATH`` is used to search for configuration and class files under the
``conf`` and ``classes`` directories, respectively. ``BBFILES`` is used
to locate both recipe and recipe append files (``.bb`` and
``.bbappend``). If there is no ``bblayers.conf`` file, it is assumed the
user has set the :term:`BBPATH` and :term:`BBFILES` directly in the environment.
user has set the ``BBPATH`` and ``BBFILES`` directly in the environment.
Next, the ``bitbake.conf`` file is located using the :term:`BBPATH` variable
Next, the ``bitbake.conf`` file is located using the ``BBPATH`` variable
that was just constructed. The ``bitbake.conf`` file may also include
other configuration files using the ``include`` or ``require``
directives.
@@ -79,8 +79,8 @@ directives.
Prior to parsing configuration files, BitBake looks at certain
variables, including:
- :term:`BB_ENV_PASSTHROUGH`
- :term:`BB_ENV_PASSTHROUGH_ADDITIONS`
- :term:`BB_ENV_WHITELIST`
- :term:`BB_ENV_EXTRAWHITE`
- :term:`BB_PRESERVE_ENV`
- :term:`BB_ORIGENV`
- :term:`BITBAKE_UI`
@@ -104,7 +104,7 @@ BitBake first searches the current working directory for an optional
contain a :term:`BBLAYERS` variable that is a
space-delimited list of 'layer' directories. Recall that if BitBake
cannot find a ``bblayers.conf`` file, then it is assumed the user has
set the :term:`BBPATH` and :term:`BBFILES` variables directly in the
set the ``BBPATH`` and ``BBFILES`` variables directly in the
environment.
For each directory (layer) in this list, a ``conf/layer.conf`` file is
@@ -114,7 +114,7 @@ files automatically set up :term:`BBPATH` and other
variables correctly for a given build directory.
BitBake then expects to find the ``conf/bitbake.conf`` file somewhere in
the user-specified :term:`BBPATH`. That configuration file generally has
the user-specified ``BBPATH``. That configuration file generally has
include directives to pull in any other metadata such as files specific
to the architecture, the machine, the local environment, and so forth.
@@ -135,11 +135,11 @@ The ``base.bbclass`` file is always included. Other classes that are
specified in the configuration using the
:term:`INHERIT` variable are also included. BitBake
searches for class files in a ``classes`` subdirectory under the paths
in :term:`BBPATH` in the same way as configuration files.
in ``BBPATH`` in the same way as configuration files.
A good way to get an idea of the configuration files and the class files
used in your execution environment is to run the following BitBake
command::
command: ::
$ bitbake -e > mybb.log
@@ -155,7 +155,7 @@ execution environment.
pair of curly braces in a shell function, the closing curly brace
must not be located at the start of the line without leading spaces.
Here is an example that causes BitBake to produce a parsing error::
Here is an example that causes BitBake to produce a parsing error: ::
fakeroot create_shar() {
cat << "EOF" > ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
@@ -184,13 +184,13 @@ Locating and Parsing Recipes
During the configuration phase, BitBake will have set
:term:`BBFILES`. BitBake now uses it to construct a
list of recipes to parse, along with any append files (``.bbappend``) to
apply. :term:`BBFILES` is a space-separated list of available files and
supports wildcards. An example would be::
apply. ``BBFILES`` is a space-separated list of available files and
supports wildcards. An example would be: ::
BBFILES = "/path/to/bbfiles/*.bb /path/to/appends/*.bbappend"
BitBake parses each
recipe and append file located with :term:`BBFILES` and stores the values of
recipe and append file located with ``BBFILES`` and stores the values of
various variables into the datastore.
.. note::
@@ -201,18 +201,18 @@ For each file, a fresh copy of the base configuration is made, then the
recipe is parsed line by line. Any inherit statements cause BitBake to
find and then parse class files (``.bbclass``) using
:term:`BBPATH` as the search path. Finally, BitBake
parses in order any append files found in :term:`BBFILES`.
parses in order any append files found in ``BBFILES``.
One common convention is to use the recipe filename to define pieces of
metadata. For example, in ``bitbake.conf`` the recipe name and version
are used to set the variables :term:`PN` and
:term:`PV`::
:term:`PV`: ::
PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
In this example, a recipe called "something_1.2.3.bb" would set
:term:`PN` to "something" and :term:`PV` to "1.2.3".
``PN`` to "something" and ``PV`` to "1.2.3".
By the time parsing is complete for a recipe, BitBake has a list of
tasks that the recipe defines and a set of data consisting of keys and
@@ -228,7 +228,7 @@ and then reload it.
Where possible, subsequent BitBake commands reuse this cache of recipe
information. The validity of this cache is determined by first computing
a checksum of the base configuration data (see
:term:`BB_HASHCONFIG_IGNORE_VARS`) and
:term:`BB_HASHCONFIG_WHITELIST`) and
then checking if the checksum matches. If that checksum matches what is
in the cache and the recipe and class files have not changed, BitBake is
able to use the cache. BitBake then reloads the cached information about
@@ -238,14 +238,13 @@ Recipe file collections exist to allow the user to have multiple
repositories of ``.bb`` files that contain the same exact package. For
example, one could easily use them to make one's own local copy of an
upstream repository, but with custom modifications that one does not
want upstream. Here is an example::
want upstream. Here is an example: ::
BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"
BBFILE_PATTERN_upstream = "^/stuff/openembedded/"
BBFILE_PATTERN_local = "^/stuff/openembedded.modified/"
BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"
BBFILE_PRIORITY_upstream = "5" BBFILE_PRIORITY_local = "10"
.. note::
@@ -260,21 +259,21 @@ Providers
Assuming BitBake has been instructed to execute a target and that all
the recipe files have been parsed, BitBake starts to figure out how to
build the target. BitBake looks through the :term:`PROVIDES` list for each
of the recipes. A :term:`PROVIDES` list is the list of names by which the
recipe can be known. Each recipe's :term:`PROVIDES` list is created
build the target. BitBake looks through the ``PROVIDES`` list for each
of the recipes. A ``PROVIDES`` list is the list of names by which the
recipe can be known. Each recipe's ``PROVIDES`` list is created
implicitly through the recipe's :term:`PN` variable and
explicitly through the recipe's :term:`PROVIDES`
variable, which is optional.
When a recipe uses :term:`PROVIDES`, that recipe's functionality can be
found under an alternative name or names other than the implicit :term:`PN`
When a recipe uses ``PROVIDES``, that recipe's functionality can be
found under an alternative name or names other than the implicit ``PN``
name. As an example, suppose a recipe named ``keyboard_1.0.bb``
contained the following::
contained the following: ::
PROVIDES += "fullkeyboard"
The :term:`PROVIDES`
The ``PROVIDES``
list for this recipe becomes "keyboard", which is implicit, and
"fullkeyboard", which is explicit. Consequently, the functionality found
in ``keyboard_1.0.bb`` can be found under two different names.
@@ -284,14 +283,14 @@ in ``keyboard_1.0.bb`` can be found under two different names.
Preferences
===========
The :term:`PROVIDES` list is only part of the solution for figuring out a
The ``PROVIDES`` list is only part of the solution for figuring out a
target's recipes. Because targets might have multiple providers, BitBake
needs to prioritize providers by determining provider preferences.
A common example in which a target has multiple providers is
"virtual/kernel", which is on the :term:`PROVIDES` list for each kernel
"virtual/kernel", which is on the ``PROVIDES`` list for each kernel
recipe. Each machine often selects the best kernel provider by using a
line similar to the following in the machine configuration file::
line similar to the following in the machine configuration file: ::
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
@@ -309,10 +308,10 @@ specify a particular version. You can influence the order by using the
:term:`DEFAULT_PREFERENCE` variable.
By default, files have a preference of "0". Setting
:term:`DEFAULT_PREFERENCE` to "-1" makes the recipe unlikely to be used
unless it is explicitly referenced. Setting :term:`DEFAULT_PREFERENCE` to
"1" makes it likely the recipe is used. :term:`PREFERRED_VERSION` overrides
any :term:`DEFAULT_PREFERENCE` setting. :term:`DEFAULT_PREFERENCE` is often used
``DEFAULT_PREFERENCE`` to "-1" makes the recipe unlikely to be used
unless it is explicitly referenced. Setting ``DEFAULT_PREFERENCE`` to
"1" makes it likely the recipe is used. ``PREFERRED_VERSION`` overrides
any ``DEFAULT_PREFERENCE`` setting. ``DEFAULT_PREFERENCE`` is often used
to mark newer and more experimental recipe versions until they have
undergone sufficient testing to be considered stable.
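For instance, a recipe for a development snapshot might carry the following assignment so it is skipped unless explicitly requested (an illustrative snippet in the same style as the manual's other examples):
DEFAULT_PREFERENCE = "-1"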
@@ -331,7 +330,7 @@ If the first recipe is named ``a_1.1.bb``, then the
Thus, if a recipe named ``a_1.2.bb`` exists, BitBake will choose 1.2 by
default. However, if you define the following variable in a ``.conf``
file that BitBake parses, you can change that preference::
file that BitBake parses, you can change that preference: ::
PREFERRED_VERSION_a = "1.1"
@@ -394,7 +393,7 @@ ready to run, those tasks have all their dependencies met, and the
thread threshold has not been exceeded.
It is worth noting that you can greatly speed up the build time by
properly setting the :term:`BB_NUMBER_THREADS` variable.
properly setting the ``BB_NUMBER_THREADS`` variable.
As each task completes, a timestamp is written to the directory
specified by the :term:`STAMP` variable. On subsequent
@@ -435,7 +434,7 @@ BitBake writes a shell script to
executes the script. The generated shell script contains all the
exported variables, and the shell functions with all variables expanded.
Output from the shell script goes to the file
``${``\ :term:`T`\ ``}/log.do_taskname.pid``. Looking at the expanded shell functions in
``${T}/log.do_taskname.pid``. Looking at the expanded shell functions in
the run file and the output in the log files is a useful debugging
technique.
@@ -477,7 +476,7 @@ changes because it should not affect the output for target packages. The
simplistic approach for excluding the working directory is to set it to
some fixed value and create the checksum for the "run" script. BitBake
goes one step better and uses the
:term:`BB_BASEHASH_IGNORE_VARS` variable
to define a list of variables that should never be included when
generating the signatures.
@@ -498,7 +497,7 @@ to the task.
Like the working directory case, situations exist where dependencies
should be ignored. For these cases, you can instruct the build process
to ignore a dependency by using a line like the following::
PACKAGE_ARCHS[vardepsexclude] = "MACHINE"
@@ -508,7 +507,7 @@ even if it does reference it.
Equally, there are cases where we need to add dependencies BitBake is
not able to find. You can accomplish this by using a line like the
following::
PACKAGE_ARCHS[vardeps] = "MACHINE"
@@ -523,7 +522,7 @@ it cannot figure out dependencies.
Thus far, this section has limited discussion to the direct inputs into
a task. Information based on direct inputs is referred to as the
"basehash" in the code. However, there is still the question of a task's
indirect inputs --- the things that were already built and present in the
build directory. The checksum (or signature) for a particular task needs
to add the hashes of all the tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision. However, the
@@ -534,11 +533,11 @@ At the code level, there are a variety of ways both the basehash and the
dependent task hashes can be influenced. Within the BitBake
configuration file, we can give BitBake some extra information to help
it construct the basehash. The following statement effectively results
in a list of global variable dependency excludes --- variables never
included in any checksum. This example uses variables from OpenEmbedded
to help illustrate the concept::
BB_BASEHASH_IGNORE_VARS ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL \
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
@@ -552,22 +551,23 @@ through dependency chains are more complex and are generally
accomplished with a Python function. The code in
``meta/lib/oe/sstatesig.py`` shows two examples of this and also
illustrates how you can insert your own policy into the system if so
desired. This file defines the basic signature generator
OpenEmbedded-Core uses: "OEBasicHash". By default, there
is a dummy "noop" signature handler enabled in BitBake. This means that
behavior is unchanged from previous versions. ``OE-Core`` uses the
"OEBasicHash" signature handler by default through this setting in the
``bitbake.conf`` file::
BB_SIGNATURE_HANDLER ?= "OEBasicHash"
The main feature of the "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is that
it adds the task hash to the stamp files. Thanks to this, any metadata
change will change the task hash, automatically causing the task to be run
again. This removes the need to bump :term:`PR` values, and changes to
metadata automatically ripple across the build.
The "OEBasicHash" ``BB_SIGNATURE_HANDLER`` is the same as the "OEBasic"
version but adds the task hash to the stamp files. This results in any
metadata change that changes the task hash, automatically causing the
task to be run again. This removes the need to bump
:term:`PR` values, and changes to metadata automatically
ripple across the build.
It is also worth noting that the end result of signature
generators is to make some dependency and hash information available to
the build. This information includes:
@@ -577,7 +577,10 @@ the build. This information includes:
- ``BB_BASEHASH_``\ *filename:taskname*: The base hashes for each
dependent task.
- ``BBHASHDEPS_``\ *filename:taskname*: The task dependencies for
  each task.

- :term:`BB_TASKHASH`: The hash of the currently running task.
It is worth noting that BitBake's "-S" option lets you debug BitBake's
processing of signatures. The options passed to -S allow different
@@ -586,11 +589,10 @@ or possibly those defined in the metadata/signature handler itself. The
simplest parameter to pass is "none", which causes a set of signature
information to be written out into ``STAMPS_DIR`` corresponding to the
targets specified. The other currently available parameter is
"printdiff", which causes BitBake to try to establish the most recent
"printdiff", which causes BitBake to try to establish the closest
signature match it can (e.g. in the sstate cache) and then run
compare the matched signatures to determine the stamps and delta
where these two stamp trees diverge. This can be used to determine why
tasks need to be re-run in situations where that is not expected.
``bitbake-diffsigs`` over the matches to determine the stamps and delta
where these two stamp trees diverge.
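For example (target name hypothetical)::

   $ bitbake -S none mytarget       # write signature information into STAMPS_DIR
   $ bitbake -S printdiff mytarget  # explain why tasks would be re-run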
.. note::
@@ -645,6 +647,13 @@ compiled binary. To handle this, BitBake calls the
each successful setscene task to know whether or not it needs to obtain
the dependencies of that task.
Finally, after all the setscene tasks have executed, BitBake calls the
function listed in
:term:`BB_SETSCENE_VERIFY_FUNCTION2`
with the list of tasks BitBake thinks has been "covered". The metadata
can then ensure that this list is correct and can inform BitBake that it
wants specific tasks to be run regardless of the setscene result.
You can find more information on setscene metadata in the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:task checksums and setscene`
section.
@@ -657,7 +666,7 @@ builds are when execute, bitbake also supports user defined
configuration of the `Python
logging <https://docs.python.org/3/library/logging.html>`__ facilities
through the :term:`BB_LOGCONFIG` variable. This
variable defines a JSON or YAML `logging
configuration <https://docs.python.org/3/library/logging.config.html>`__
that will be intelligently merged into the default configuration. The
logging configuration is merged using the following rules:
@@ -691,9 +700,9 @@ logging configuration is merged using the following rules:
adds a filter called ``BitBake.defaultFilter``, both filters will be
applied to the logger
As a first example, you can create a ``hashequiv.json`` user logging
configuration file to log all Hash Equivalence related messages of ``VERBOSE``
or higher priority to a file called ``hashequiv.log``::
{
"version": 1,
@@ -722,40 +731,3 @@ or higher priority to a file called ``hashequiv.log``::
}
}
}
Then set the :term:`BB_LOGCONFIG` variable in ``conf/local.conf``::
BB_LOGCONFIG = "hashequiv.json"
Another example is this ``warn.json`` file to log all ``WARNING`` and
higher priority messages to a ``warn.log`` file::
{
"version": 1,
"formatters": {
"warnlogFormatter": {
"()": "bb.msg.BBLogFormatter",
"format": "%(levelname)s: %(message)s"
}
},
"handlers": {
"warnlog": {
"class": "logging.FileHandler",
"formatter": "warnlogFormatter",
"level": "WARNING",
"filename": "warn.log"
}
},
"loggers": {
"BitBake": {
"handlers": ["warnlog"]
}
},
"@disable_existing_loggers": false
}
Note that BitBake's helper classes for structured logging are implemented in
``lib/bb/msg.py``.
View File
@@ -27,7 +27,7 @@ and unpacking the files is often optionally followed by patching.
Patching, however, is not covered by this module.
The code to execute the first part of this process, a fetch, looks
something like the following::
src_uri = (d.getVar('SRC_URI') or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
@@ -37,7 +37,7 @@ This code sets up an instance of the fetch class. The instance uses a
space-separated list of URLs from the :term:`SRC_URI`
variable and then calls the ``download`` method to download the files.
The instantiation of the fetch class is usually followed by::
rootdir = l.getVar('WORKDIR')
fetcher.unpack(rootdir)
@@ -51,7 +51,7 @@ This code unpacks the downloaded files to the specified by ``WORKDIR``.
examine the OpenEmbedded class file ``base.bbclass``.
The :term:`SRC_URI` and ``WORKDIR`` variables are not hardcoded into the
fetcher, since those fetcher methods can be (and are) called with
different variable names. In OpenEmbedded for example, the shared state
(sstate) code uses the fetch module to fetch the sstate files.
@@ -64,38 +64,38 @@ URLs by looking for source files in a specific search order:
:term:`PREMIRRORS` variable.
- *Source URI:* If pre-mirrors fail, BitBake uses the original URL (e.g
from :term:`SRC_URI`).
- *Mirror Sites:* If fetch failures occur, BitBake next uses mirror
locations as defined by the :term:`MIRRORS` variable.
For each URL passed to the fetcher, the fetcher calls the submodule that
handles that particular URL type. This behavior can be the source of
some confusion when you are providing URLs for the :term:`SRC_URI` variable.
Consider the following two URLs::
https://git.yoctoproject.org/git/poky;protocol=git
git://git.yoctoproject.org/git/poky;protocol=http
In the former case, the URL is passed to the ``wget`` fetcher, which does not
understand "git". Therefore, the latter case is the correct form since the Git
fetcher does know how to use HTTP as a transport.
Here are some examples that show commonly used mirror definitions::
PREMIRRORS ?= "\
bzr://.*/.\* http://somemirror.org/sources/ \
cvs://.*/.\* http://somemirror.org/sources/ \
git://.*/.\* http://somemirror.org/sources/ \
hg://.*/.\* http://somemirror.org/sources/ \
osc://.*/.\* http://somemirror.org/sources/ \
p4://.*/.\* http://somemirror.org/sources/ \
svn://.*/.\* http://somemirror.org/sources/"
MIRRORS =+ "\
ftp://.*/.\* http://somemirror.org/sources/ \
http://.*/.\* http://somemirror.org/sources/ \
https://.*/.\* http://somemirror.org/sources/"
It is useful to note that BitBake
supports cross-URLs. It is possible to mirror a Git repository on an
@@ -110,26 +110,26 @@ which is specified by the :term:`DL_DIR` variable.
File integrity is of key importance for reproducing builds. For
non-local archive downloads, the fetcher code can verify SHA-256 and MD5
checksums to ensure the archives have been downloaded correctly. You can
specify these checksums by using the :term:`SRC_URI` variable with the
appropriate varflags as follows::
SRC_URI[md5sum] = "value"
SRC_URI[sha256sum] = "value"
You can also specify the checksums as
parameters on the :term:`SRC_URI` as shown below::
SRC_URI = "http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d"
If multiple URIs exist, you can specify the checksums either directly as
in the previous example, or you can name the URLs. The following syntax
shows how you name the URIs::
SRC_URI = "http://example.com/foobar.tar.bz2;name=foo"
SRC_URI[foo.md5sum] = 4a8e0f237e961fd7785d19d07fdb994d
After a file has been downloaded and
has had its checksum checked, a ".done" stamp is placed in :term:`DL_DIR`.
has had its checksum checked, a ".done" stamp is placed in ``DL_DIR``.
BitBake uses this stamp during subsequent builds to avoid downloading or
comparing a checksum for the file again.
@@ -144,10 +144,6 @@ download without a checksum triggers an error message. The
make any attempted network access a fatal error, which is useful for
checking that mirrors are complete as well as other things.
If :term:`BB_CHECK_SSL_CERTS` is set to ``0`` then SSL certificate checking will
be disabled. This variable defaults to ``1`` so SSL certificates are normally
checked.
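For example, to disable certificate checking, e.g. for a self-signed
local mirror, set the following in a configuration file::

   BB_CHECK_SSL_CERTS = "0"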
.. _bb-the-unpack:
The Unpack
----------
@@ -167,8 +163,8 @@ govern the behavior of the unpack stage:
- *dos:* Applies to ``.zip`` and ``.jar`` files and specifies whether
to use DOS line ending conversion on text files.
- *striplevel:* Strip specified number of leading components (levels)
from file names on extraction.
- *basepath:* Instructs the unpack stage to strip the specified
directories from the source path when unpacking.
- *subdir:* Unpacks the specific URL to the specified subdirectory
within the root directory.
@@ -208,7 +204,7 @@ time the ``download()`` method is called.
If you specify a directory, the entire directory is unpacked.
Here are a couple of example URLs, the first relative and the second
absolute::
SRC_URI = "file://relativefile.patch"
SRC_URI = "file:///Users/ich/very_important_software"
@@ -229,12 +225,7 @@ downloaded file is useful for avoiding collisions in
:term:`DL_DIR` when dealing with multiple files that
have the same name.
If a username and password are specified in the ``SRC_URI``, a Basic
Authorization header will be added to each request, including across redirects.
To instead limit the Authorization header to the first request, add
"redirectauth=0" to the list of parameters.
Some example URLs are as follows::
SRC_URI = "http://oe.handhelds.org/not_there.aac"
SRC_URI = "ftp://oe.handhelds.org/not_there_as_well.aac"
@@ -244,13 +235,15 @@ Some example URLs are as follows::
Because URL parameters are delimited by semi-colons, this can
introduce ambiguity when parsing URLs that also contain semi-colons,
for example::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
Such URLs should be modified by replacing semi-colons with '&'
characters::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47"
@@ -258,7 +251,8 @@ Some example URLs are as follows::
In most cases this should work. Treating semi-colons and '&' in
queries identically is recommended by the World Wide Web Consortium
(W3C). Note that due to the nature of the URL, you may have to
specify the name of the downloaded file as well::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47;downloadfilename=myfile.bz2"
@@ -327,7 +321,7 @@ The supported parameters are as follows:
- *"port":* The port to which the CVS server connects.
Some example URLs are as follows::
SRC_URI = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
SRC_URI = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
@@ -369,7 +363,7 @@ The supported parameters are as follows:
username is different than the username used in the main URL, which
is passed to the subversion command.
Following are three examples using svn::
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
@@ -396,19 +390,6 @@ This fetcher supports the following parameters:
protocol is "file". You can also use "http", "https", "ssh" and
"rsync".
.. note::
When ``protocol`` is "ssh", the URL expected in :term:`SRC_URI` differs
from the one that is typically passed to ``git clone`` command and provided
by the Git server to fetch from. For example, the URL returned by GitLab
server for ``mesa`` when cloning over SSH is
``git@gitlab.freedesktop.org:mesa/mesa.git``, however the expected URL in
:term:`SRC_URI` is the following::
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
Note that the ``:`` character is changed to a ``/`` before the path to the project.
- *"nocheckout":* Tells the fetcher to not checkout source code when
unpacking when set to "1". Set this option for the URL where there is
a custom routine to checkout code. The default is "0".
@@ -424,17 +405,17 @@ This fetcher supports the following parameters:
- *"nobranch":* Tells the fetcher to not check the SHA validation for
the branch when set to "1". The default is "0". Set this option for
the recipe that refers to the commit that is valid for any namespace
(branch, tag, ...) instead of the branch.
- *"bareclone":* Tells the fetcher to clone a bare clone into the
destination directory without checking out a working tree. Only the
raw Git metadata is provided. This parameter implies the "nocheckout"
parameter as well.
- *"branch":* The branch(es) of the Git tree to clone. Unless
"nobranch" is set to "1", this is a mandatory parameter. The number of
branch parameters must match the number of name parameters.
- *"branch":* The branch(es) of the Git tree to clone. If unset, this
is assumed to be "master". The number of branch parameters much match
the number of name parameters.
- *"rev":* The revision to use for the checkout. The default is
"master".
@@ -455,35 +436,10 @@ This fetcher supports the following parameters:
parameter implies no branch and only works when the transfer protocol
is ``file://``.
Here are some example URLs::
SRC_URI = "git://github.com/fronteed/icheck.git;protocol=https;branch=${PV};tag=${PV}"
SRC_URI = "git://github.com/asciidoc/asciidoc-py;protocol=https;branch=main"
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
.. note::
When using ``git`` as the fetcher of the main source code of your software,
``S`` should be set accordingly::
S = "${WORKDIR}/git"
.. note::
Specifying passwords directly in ``git://`` urls is not supported.
There are several reasons: :term:`SRC_URI` is often written out to logs and
other places, and that could easily leak passwords; it is also all too
easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
and ``~/.ssh/config`` files can be used as alternatives.
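For instance, a ``~/.netrc`` entry for a private Git server might look
like this (host and credentials hypothetical)::

   machine git.example.com
   login builduser
   password s3cr3t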
Using tags with the git fetcher may cause surprising behaviour. Bitbake needs to
resolve the tag to a specific revision and to do that, it has to connect to and use
the upstream repository. This is because the revision the tags point at can change and
we've seen cases of this happening in well known public repositories. This can mean
many more network connections than expected and recipes may be reparsed at every build.
Source mirrors will also be bypassed as the upstream repository is the only source
of truth to resolve the revision accurately. For these reasons, whilst the fetcher
can support tags, we recommend being specific about revisions in recipes.
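In practice this means pinning the exact commit rather than a tag; a
hypothetical sketch::

   SRC_URI = "git://git.example.com/project.git;protocol=https;branch=main"
   SRCREV = "b1a6e2c4a8de5b8f1c59f1c7a9f36a7f0e3d9b21"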
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
.. _gitsm-fetcher:
@@ -519,7 +475,7 @@ repository.
To use this fetcher, make sure your recipe has proper
:term:`SRC_URI`, :term:`SRCREV`, and
:term:`PV` settings. Here is an example::
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
@@ -528,7 +484,7 @@ To use this fetcher, make sure your recipe has proper
The fetcher uses the ``rcleartool`` or
``cleartool`` remote client, depending on which one is available.
Following are options for the :term:`SRC_URI` statement:
- *vob*: The name, which must include the prepending "/" character,
of the ClearCase VOB. This option is required.
@@ -541,7 +497,7 @@ Following are options for the :term:`SRC_URI` statement:
The module and vob options are combined to create the load rule in the
view config spec. As an example, consider the vob and module values from
the SRC_URI statement at the start of this section. Combining those values
results in the following::
load /example_vob/example_module
@@ -590,10 +546,10 @@ password if you do not wish to keep those values in a recipe itself. If
you choose not to use ``P4CONFIG``, or to explicitly set variables that
``P4CONFIG`` can contain, you can specify the ``P4PORT`` value, which is
the server's URL and port number, and you can specify a username and
password directly in your recipe within :term:`SRC_URI`.
Here is an example that relies on ``P4CONFIG`` to specify the server URL
and port, username, and password, and fetches the Head Revision::
SRC_URI = "p4://example-depot/main/source/..."
SRCREV = "${AUTOREV}"
@@ -601,7 +557,7 @@ and port, username, and password, and fetches the Head Revision::
S = "${WORKDIR}/p4"
Here is an example that specifies the server URL and port, username, and
password, and fetches a Revision based on a Label::
P4PORT = "tcp:p4server.example.net:1666"
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
@@ -627,7 +583,7 @@ paths locally is desirable, the fetcher supports two parameters:
paths locally for the specified location, even in combination with the
``module`` parameter.
Here is an example use of the ``module`` parameter::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/..."
@@ -635,7 +591,7 @@ In this case, the content of the top-level directory ``source/`` will be fetched
to ``${P4DIR}``, including the directory itself. The top-level directory will
be accessible at ``${P4DIR}/source/``.
Here is an example use of the ``remotepath`` parameter::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/...;remotepath=keep"
@@ -663,166 +619,11 @@ This fetcher supports the following parameters:
- *"manifest":* Name of the manifest file (default: ``default.xml``).
Here are some example URLs::
SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
.. _az-fetcher:
Az Fetcher (``az://``)
--------------------------
This submodule fetches data from an
`Azure Storage account <https://docs.microsoft.com/en-us/azure/storage/>`__.
It inherits its functionality from the HTTP wget fetcher, but modifies its
behavior to accommodate the usage of a
`Shared Access Signature (SAS) <https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview>`__
for non-public data.
Such functionality is set by the variable:
- :term:`AZ_SAS`: The Azure Storage Shared Access Signature provides secure
  delegated access to resources. If this variable is set, the Az Fetcher will
  use it when fetching artifacts from the cloud.
You can specify the AZ_SAS variable as shown below::
AZ_SAS = "se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>"
Here is an example URL::
SRC_URI = "az://<azure-storage-account>.blob.core.windows.net/<foo_container>/<bar_file>"
It can also be used when setting mirror definitions using the :term:`PREMIRRORS` variable.
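For instance, a hypothetical pre-mirror entry pointing at such a container
could look like::

   PREMIRRORS += "http://.*/.\* az://example.blob.core.windows.net/mirror/"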
.. _gcp-fetcher:
GCP Fetcher (``gs://``)
--------------------------
This submodule fetches data from a
`Google Cloud Storage Bucket <https://cloud.google.com/storage/docs/buckets>`__.
It uses the `Google Cloud Storage Python Client <https://cloud.google.com/python/docs/reference/storage/latest>`__
to check the status of objects in the bucket and download them.
The use of the Python client makes it substantially faster than using command
line tools such as gsutil.
The fetcher requires the Google Cloud Storage Python Client to be installed, along
with the gsutil tool.
The fetcher requires that the machine has valid credentials for accessing the
chosen bucket. Instructions for authentication can be found in the
`Google Cloud documentation <https://cloud.google.com/docs/authentication/provide-credentials-adc#local-dev>`__.
If it is used from the OpenEmbedded build system, the fetcher can be used for
fetching sstate artifacts from a GCS bucket by specifying the
``SSTATE_MIRRORS`` variable as shown below::
SSTATE_MIRRORS ?= "\
file://.* gs://<bucket name>/PATH \
"
The fetcher can also be used in recipes::
SRC_URI = "gs://<bucket name>/<foo_container>/<bar_file>"
However, the checksum of the file should also be provided::
SRC_URI[sha256sum] = "<sha256 string>"
.. _crate-fetcher:
Crate Fetcher (``crate://``)
----------------------------
This submodule fetches code for
`Rust language "crates" <https://doc.rust-lang.org/reference/glossary.html?highlight=crate#crate>`__
corresponding to Rust libraries and programs to compile. Such crates are typically shared
on https://crates.io/ but this fetcher supports other crate registries too.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "crate://REGISTRY/NAME/VERSION"
Here is an example URL::
SRC_URI = "crate://crates.io/glob/0.2.11"
.. _npm-fetcher:
NPM Fetcher (``npm://``)
------------------------
This submodule fetches source code from an
`NPM <https://en.wikipedia.org/wiki/Npm_(software)>`__
Javascript package registry.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "npm://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
This fetcher supports the following parameters:
- *"package":* The NPM package name. This is a mandatory parameter.
- *"version":* The NPM package version. This is a mandatory parameter.
- *"downloadfilename":* Specifies the filename used when storing the downloaded file.
- *"destsuffix":* Specifies the directory to use to unpack the package (default: ``npm``).
Note that the NPM fetcher only fetches the package source itself. The dependencies
can be fetched through the `npmsw-fetcher`_.
Here is an example URL with both fetchers::
SRC_URI = " \
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
"
See :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/packages.html#creating-node-package-manager-npm-packages>`
in the Yocto Project manual for details about using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
to automatically create a recipe from an NPM URL.
.. _npmsw-fetcher:
NPM shrinkwrap Fetcher (``npmsw://``)
-------------------------------------
This submodule fetches source code from an
`NPM shrinkwrap <https://docs.npmjs.com/cli/v8/commands/npm-shrinkwrap>`__
description file, which lists the dependencies
of an NPM package while locking their versions.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "npmsw://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
This fetcher supports the following parameters:
- *"dev":* Set this parameter to ``1`` to install "devDependencies".
- *"destsuffix":* Specifies the directory to use to unpack the dependencies
(``${S}`` by default).
Note that the shrinkwrap file can also be provided by the recipe for
the package which has such dependencies, for example::
SRC_URI = " \
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
"
Such a file can automatically be generated using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
as described in the :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/packages.html#creating-node-package-manager-npm-packages>`
section of the Yocto Project.
Other Fetchers
--------------
@@ -832,9 +633,9 @@ Fetch submodules also exist for the following:
- Mercurial (``hg://``)

- OSC (``osc://``)

- S3 (``s3://``)

- Secure FTP (``sftp://``)
@@ -848,4 +649,4 @@ submodules. However, you might find the code helpful and readable.
Auto Revisions
==============
We need to document ``AUTOREV`` and :term:`SRCREV_FORMAT` here.
View File
@@ -18,32 +18,28 @@ it.
Obtaining BitBake
=================
See the :ref:`bitbake-user-manual/bitbake-user-manual-intro:obtaining bitbake` section for
information on how to obtain BitBake. Once you have the source code on
your machine, the BitBake directory appears as follows::
$ ls -al
total 108
drwxr-xr-x 9 fawkh 10000 4096 feb 24 12:10 .
drwx------ 36 fawkh 10000 4096 mar 2 17:00 ..
-rw-r--r-- 1 fawkh 10000 365 feb 24 12:10 AUTHORS
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 bin
-rw-r--r-- 1 fawkh 10000 16501 feb 24 12:10 ChangeLog
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 classes
drwxr-xr-x 2 fawkh 10000 4096 feb 24 12:10 conf
drwxr-xr-x 5 fawkh 10000 4096 feb 24 12:10 contrib
drwxr-xr-x 6 fawkh 10000 4096 feb 24 12:10 doc
drwxr-xr-x 8 fawkh 10000 4096 mar 2 16:26 .git
-rw-r--r-- 1 fawkh 10000 31 feb 24 12:10 .gitattributes
-rw-r--r-- 1 fawkh 10000 392 feb 24 12:10 .gitignore
drwxr-xr-x 13 fawkh 10000 4096 feb 24 12:11 lib
-rw-r--r-- 1 fawkh 10000 1224 feb 24 12:10 LICENSE
-rw-r--r-- 1 fawkh 10000 15394 feb 24 12:10 LICENSE.GPL-2.0-only
-rw-r--r-- 1 fawkh 10000 1286 feb 24 12:10 LICENSE.MIT
-rw-r--r-- 1 fawkh 10000 229 feb 24 12:10 MANIFEST.in
-rw-r--r-- 1 fawkh 10000 2413 feb 24 12:10 README
-rw-r--r-- 1 fawkh 10000 43 feb 24 12:10 toaster-requirements.txt
-rw-r--r-- 1 fawkh 10000 2887 feb 24 12:10 TODO
At this point, you should have BitBake cloned to a directory that
matches the previous listing except for dates and user names.
@@ -53,10 +49,10 @@ Setting Up the BitBake Environment
First, you need to be sure that you can run BitBake. Set your working
directory to where your local BitBake files are and run the following
command::
$ ./bin/bitbake --version
BitBake Build Tool Core version 2.3.1
The console output tells you what version
you are running.
@@ -65,14 +61,14 @@ The recommended method to run BitBake is from a directory of your
choice. To be able to run BitBake from any directory, you need to add
the executable binary to your shell's environment
``PATH`` variable. First, look at your current ``PATH`` variable by
entering the following::
$ echo $PATH
Next, add the directory location
for the BitBake binary to the ``PATH``. Here is an example that adds the
``/home/scott-lenovo/bitbake/bin`` directory to the front of the
``PATH`` variable::
$ export PATH=/home/scott-lenovo/bitbake/bin:$PATH
@@ -103,7 +99,7 @@ discussion mailing list about the BitBake build tool.
This example was inspired by and drew heavily from
`Mailing List post - The BitBake equivalent of "Hello, World!"
<https://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
As stated earlier, the goal of this example is to eventually compile
"Hello World". However, it is unknown what BitBake needs and what you
@@ -120,7 +116,7 @@ Following is the complete "Hello World" example.
#. **Create a Project Directory:** First, set up a directory for the
"Hello World" project. Here is how you can do so in your home
directory::
$ mkdir ~/hello
$ cd ~/hello
@@ -131,26 +127,41 @@ Following is the complete "Hello World" example.
directory is a good way to isolate your project.
#. **Run BitBake:** At this point, you have nothing but a project
directory. Run the ``bitbake`` command and see what it does::
$ bitbake
ERROR: The BBPATH variable is not set and bitbake did not find a conf/bblayers.conf file in the expected location.
Maybe you accidentally invoked bitbake from the wrong directory?
When you run BitBake, it begins looking for metadata files. The
:term:`BBPATH` variable is what tells BitBake where
to look for those files. :term:`BBPATH` is not set and you need to set
it. Without :term:`BBPATH`, BitBake cannot find any configuration files
(``.conf``) or recipe files (``.bb``) at all. BitBake also cannot
find the ``bitbake.conf`` file.
#. **Setting BBPATH:** For this example, you can set :term:`BBPATH` in
the same manner that you set ``PATH`` earlier in the appendix. You
should realize, though, that it is much more flexible to set the
:term:`BBPATH` variable up in a configuration file for each project.
From your shell, enter the following commands to set and export the
:term:`BBPATH` variable::
$ BBPATH="projectdirectory"
$ export BBPATH
@@ -164,18 +175,24 @@ Following is the complete "Hello World" example.
("~") character as BitBake does not expand that character as the
shell would.
#. **Run BitBake:** Now that you have :term:`BBPATH` defined, run the
``bitbake`` command again::
$ bitbake
ERROR: Unable to parse /home/scott-lenovo/bitbake/lib/bb/parse/__init__.py
Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 127, in resolve_file(fn='conf/bitbake.conf', d=<bb.data_smart.DataSmart object at 0x7f22919a3df0>):
if not newfn:
> raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath))
fn = newfn
FileNotFoundError: [Errno 2] file conf/bitbake.conf not found in <projectdirectory>
This sample output shows that BitBake could not find the
``conf/bitbake.conf`` file in the project directory. This file is
@@ -188,18 +205,18 @@ Following is the complete "Hello World" example.
recipe files. For this example, you need to create the file in your
project directory and define some key BitBake variables. For more
information on the ``bitbake.conf`` file, see
https://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
Use the following commands to create the ``conf`` directory in the
project directory::
$ mkdir conf
From within the ``conf`` directory,
use some editor to create the ``bitbake.conf`` so that it contains
the following::
PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
TMPDIR = "${TOPDIR}/tmp"
CACHE = "${TMPDIR}/cache"
@@ -209,12 +226,12 @@ Following is the complete "Hello World" example.
.. note::
Without a value for :term:`PN`, the variables :term:`STAMP`, :term:`T`, and :term:`B` prevent more
than one recipe from working. You can fix this by either setting :term:`PN` to
have a value similar to what OpenEmbedded and BitBake use in the default
``bitbake.conf`` file (see previous example), or by manually updating each
recipe to set :term:`PN`. You will also need to include :term:`PN` as part of the :term:`STAMP`,
:term:`T`, and :term:`B` variable definitions in the ``local.conf`` file.
The ``TMPDIR`` variable establishes a directory that BitBake uses
for build output and intermediate files other than the cached
@@ -234,17 +251,21 @@ Following is the complete "Hello World" example.
glossary.
#. **Run BitBake:** After making sure that the ``conf/bitbake.conf`` file
exists, you can run the ``bitbake`` command again::
$ bitbake
ERROR: Unable to parse /home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py
Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 67, in inherit(files=['base'], fn='configuration INHERITs', lineno=0, d=<bb.data_smart.DataSmart object at 0x7fab6815edf0>):
if not os.path.exists(file):
> raise ParseError("Could not inherit file %s" % (file), fn, lineno)
bb.parse.ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
In the sample output,
BitBake could not find the ``classes/base.bbclass`` file. You need
@@ -257,23 +278,20 @@ Following is the complete "Hello World" example.
in the ``classes`` directory of the project (i.e ``hello/classes``
in this example).
Create the ``classes`` directory as follows::
$ cd $HOME/hello
$ mkdir classes
Move to the ``classes`` directory and then create the
``base.bbclass`` file by inserting this single line::
addtask build
The minimal task that BitBake runs is the ``do_build`` task. This is
all the example needs in order to build the project. Of course, the
``base.bbclass`` can have much more depending on which build
environments BitBake is supporting.
#. **Run BitBake:** After making sure that the ``classes/base.bbclass``
file exists, you can run the ``bitbake`` command again::
$ bitbake
Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.
@@ -296,7 +314,7 @@ Following is the complete "Hello World" example.
Minimally, you need a recipe file and a layer configuration file in
your layer. The configuration file needs to be in the ``conf``
directory inside the layer. Use these commands to set up the layer
and the ``conf`` directory::
$ cd $HOME
$ mkdir mylayer
@@ -304,29 +322,20 @@ Following is the complete "Hello World" example.
$ mkdir conf
Move to the ``conf`` directory and create a ``layer.conf`` file that has the
following::
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/*.bb"
BBFILES += "${LAYERDIR}/\*.bb"
BBFILE_COLLECTIONS += "mylayer"
BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
LAYERSERIES_CORENAMES = "hello_world_example"
LAYERSERIES_COMPAT_mylayer = "hello_world_example"
`BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
For information on these variables, click on :term:`BBFILES`,
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS`, :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
or :term:`LAYERSERIES_COMPAT` to go to the definitions in the glossary.
.. note::
We are setting both ``LAYERSERIES_CORENAMES`` and :term:`LAYERSERIES_COMPAT` in this particular case, because we
are using bitbake without OpenEmbedded.
You should usually just use :term:`LAYERSERIES_COMPAT` to specify the OE-Core versions for which your layer
is compatible, and add the meta-openembedded layer to your project.
You need to create the recipe file next. Inside your layer at the
top-level, use an editor and create a recipe file named
``printhello.bb`` that has the following::
DESCRIPTION = "Prints Hello World"
PN = 'printhello'
@@ -347,7 +356,7 @@ Following is the complete "Hello World" example.
follow the links to the glossary.
#. **Run BitBake With a Target:** Now that a BitBake target exists, run
the command and provide that target::
$ cd $HOME/hello
$ bitbake printhello
@@ -367,7 +376,7 @@ Following is the complete "Hello World" example.
``hello/conf`` for this example).
Set your working directory to the ``hello/conf`` directory and then
create the ``bblayers.conf`` file so that it contains the following::
BBLAYERS ?= " \
/home/<you>/mylayer \
@@ -377,17 +386,15 @@ Following is the complete "Hello World" example.
#. **Run BitBake With a Target:** Now that you have supplied the
``bblayers.conf`` file, run the ``bitbake`` command and provide the
target::
$ bitbake printhello
Loading cache: 100% |
Loaded 0 entries from dependency cache.
Parsing recipes: 100% |##################################################################################|
Time: 00:00:00
Parsing of 1 .bb files complete (0 cached, 1 parsed). 1 targets, 0 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Initialising tasks: 100% |###############################################################################|
NOTE: No setscene tasks
NOTE: Executing Tasks
NOTE: Preparing RunQueue
NOTE: Executing RunQueue Tasks
********************
* *
* Hello, World! *
View File
@@ -27,7 +27,7 @@ Linux software stacks using a task-oriented approach.
Conceptually, BitBake is similar to GNU Make in some regards but has
significant differences:
- BitBake executes tasks according to the provided metadata that builds up
the tasks. Metadata is stored in recipe (``.bb``) and related recipe
"append" (``.bbappend``) files, configuration (``.conf``) and
underlying include (``.inc``) files, and in class (``.bbclass``)
@@ -60,10 +60,11 @@ member Chris Larson split the project into two distinct pieces:
- OpenEmbedded, a metadata set utilized by BitBake
Today, BitBake is the primary basis of the
`OpenEmbedded <https://www.openembedded.org/>`__ project, which is being
used to build and maintain Linux distributions such as the `Poky
Reference Distribution <https://www.yoctoproject.org/software-item/poky/>`__,
developed under the umbrella of the `Yocto Project <https://www.yoctoproject.org>`__.
Prior to BitBake, no other build tool adequately met the needs of an
aspiring embedded Linux distribution. All of the build systems used by
@@ -247,13 +248,13 @@ underlying, similarly-named recipe files.
When you name an append file, you can use the "``%``" wildcard character
to allow for matching recipe names. For example, suppose you have an
append file named as follows::
busybox_1.21.%.bbappend
That append file
would match any ``busybox_1.21.``\ x\ ``.bb`` version of the recipe. So,
the append file would match the following recipe names::
busybox_1.21.1.bb
busybox_1.21.2.bb
@@ -289,7 +290,7 @@ You can obtain BitBake several different ways:
are using. The metadata is generally backwards compatible but not
forward compatible.
Here is an example that clones the BitBake repository::
$ git clone git://git.openembedded.org/bitbake
@@ -297,7 +298,7 @@ You can obtain BitBake several different ways:
Git repository into a directory called ``bitbake``. Alternatively,
you can designate a directory after the ``git clone`` command if you
want to call the new directory something other than ``bitbake``. Here
is an example that names the directory ``bbdev``::
$ git clone git://git.openembedded.org/bitbake bbdev
@@ -316,9 +317,9 @@ You can obtain BitBake several different ways:
method for getting BitBake. Cloning the repository makes it easier
to update as patches are added to the stable branches.
The following example downloads a snapshot of BitBake version 1.17.0::
$ wget https://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
$ tar zxpvf bitbake-1.17.0.tar.gz
After extraction of the tarball using
@@ -346,7 +347,7 @@ execution examples.
Usage and syntax
----------------
Following is the usage and syntax for BitBake::
$ bitbake -h
Usage: bitbake [options] [recipename/target recipe:do_task ...]
@@ -416,8 +417,8 @@ Following is the usage and syntax for BitBake::
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
-u UI, --ui=UI The user interface to use (knotty, ncurses, taskexp or
teamcity - default knotty).
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
@@ -432,9 +433,6 @@ Following is the usage and syntax for BitBake::
Environment variable BB_SERVER_TIMEOUT.
--no-setscene Do not run any setscene tasks. sstate will be ignored
and everything needed, built.
--skip-setscene Skip setscene tasks if they would be executed. Tasks
previously restored from sstate will be kept, unlike
--no-setscene
--setscene-only Only run setscene tasks, don't run any real tasks.
--remote-server=REMOTE_SERVER
Connect to the specified server.
@@ -471,11 +469,11 @@ default task, which is "build". BitBake obeys inter-task dependencies
when doing so.
The following command runs the build task, which is the default task, on
the ``foo_1.0.bb`` recipe file::
$ bitbake -b foo_1.0.bb
The following command runs the clean task on the ``foo.bb`` recipe file::
$ bitbake -b foo.bb -c clean
@@ -499,13 +497,13 @@ functionality, or when there are multiple versions of a recipe.
The ``bitbake`` command, when not using "--buildfile" or "-b" only
accepts a "PROVIDES". You cannot provide anything else. By default, a
recipe file generally "PROVIDES" its "packagename" as shown in the
following example::
$ bitbake foo
This next example "PROVIDES" the
package name and also uses the "-c" option to tell BitBake to just
execute the ``do_clean`` task::
$ bitbake -c clean foo
@@ -516,7 +514,7 @@ The BitBake command line supports specifying different tasks for
individual targets when you specify multiple targets. For example,
suppose you had two targets (or recipes) ``myfirstrecipe`` and
``mysecondrecipe`` and you needed BitBake to run ``taskA`` for the first
recipe and ``taskB`` for the second recipe::
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
@@ -536,13 +534,13 @@ current working directory:
- ``pn-buildlist``: Shows a simple list of targets that are to be
built.
To stop depending on common depends, use the ``-I`` depend option and
To stop depending on common depends, use the "-I" depend option and
BitBake omits them from the graph. Leaving this information out can
produce more readable graphs. This way, you can remove from the graph
:term:`DEPENDS` from inherited classes such as ``base.bbclass``.
Here are two examples that create dependency graphs. The second example
omits depends common in OpenEmbedded from the graph::
$ bitbake -g foo
@@ -566,7 +564,7 @@ for two separate targets:
.. image:: figures/bb_multiconfig_files.png
:align: center
The reason for this required file hierarchy is that the :term:`BBPATH`
variable is not constructed until the layers are parsed. Consequently,
using the configuration file as a pre-configuration file is not possible
unless it is located in the current working directory.
@@ -584,17 +582,17 @@ accomplished by setting the
configuration files for ``target1`` and ``target2`` defined in the build
directory. The following statement in the ``local.conf`` file both
enables BitBake to perform multiple configuration builds and specifies
the two extra multiconfigs::
BBMULTICONFIG = "target1 target2"
Once the target configuration files are in place and BitBake has been
enabled to perform multiple configuration builds, use the following
command form to start the builds::
$ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]
Here is an example for two extra multiconfigs: ``target1`` and ``target2``::
$ bitbake mc::target mc:target1:target mc:target2:target
@@ -615,12 +613,12 @@ multiconfig.
To enable dependencies in a multiple configuration build, you must
declare the dependencies in the recipe using the following statement
form::
task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"
To better show how to use this statement, consider an example with two
multiconfigs: ``target1`` and ``target2``::
image_task[mcdepends] = "mc:target1:target2:image2:rootfs_task"
@@ -631,7 +629,7 @@ completion of the rootfs_task used to build out image2, which is
associated with the "target2" multiconfig.
Once you set up this dependency, you can build the "target1" multiconfig
using a BitBake command as follows::
$ bitbake mc:target1:image1
@@ -641,7 +639,7 @@ the ``rootfs_task`` for the "target2" multiconfig build.
Having a recipe depend on the root filesystem of another build might not
seem that useful. Consider this change to the statement in the image1
recipe::
image_task[mcdepends] = "mc:target1:target2:image2:image_task"
View File
@@ -1,91 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
================
Variable Context
================
|
Variables might only have an impact or can be used in certain contexts. Some
should only be used in global files like ``.conf``, while others are intended only
for local files like ``.bb``. This chapter aims to describe some important variable
contexts.
.. _ref-varcontext-configuration:
BitBake's own configuration
===========================
Variables starting with ``BB_`` usually configure the behaviour of BitBake itself.
For example, one could configure:
- System resources, like disk space to be used (:term:`BB_DISKMON_DIRS`),
or the number of tasks to be run in parallel by BitBake (:term:`BB_NUMBER_THREADS`).
- How the fetchers shall behave, e.g., :term:`BB_FETCH_PREMIRRORONLY` is used
by BitBake to determine if BitBake's fetcher shall search only
:term:`PREMIRRORS` for files.
Those variables are usually configured globally.
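A minimal sketch of such global settings in a ``.conf`` file (values
purely illustrative)::

   BB_DISKMON_DIRS = "STOPTASKS,${TMPDIR},1G,100K"
   BB_FETCH_PREMIRRORONLY = "1"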
BitBake configuration
=====================
There are variables:
- Like :term:`B` or :term:`T`, that are used to specify directories used by
BitBake during the build of a particular recipe. Those variables are
specified in ``bitbake.conf``. Some, like :term:`B`, are quite often
overwritten in recipes.
- Starting with ``FAKEROOT``, to configure how the ``fakeroot`` command is
handled. Those are usually set by ``bitbake.conf`` and might get adapted in a
``bbclass``.
- Detailing where BitBake will store and fetch information from, for
data reuse between build runs like :term:`CACHE`, :term:`DL_DIR` or
:term:`PERSISTENT_DIR`. Those are usually global.
Layers and files
================
Variables starting with ``LAYER`` configure how BitBake handles layers.
Additionally, variables starting with ``BB`` configure how layers and files are
handled. For example:
- :term:`LAYERDEPENDS` is used to configure on which layers a given layer
depends.
- The configured layers are contained in :term:`BBLAYERS` and files in
:term:`BBFILES`.
Those variables are often used in the files ``layer.conf`` and ``bblayers.conf``.
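For instance, a layer's ``conf/layer.conf`` might declare (layer name
hypothetical)::

   BBFILE_COLLECTIONS += "mylayer"
   LAYERDEPENDS_mylayer = "core"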
Recipes and packages
====================
Variables handling recipes and packages can be split into:
- :term:`PN`, :term:`PV` or :term:`PF` for example, contain information about
the name or revision of a recipe or package. Usually, the default set in
``bitbake.conf`` is used, but those are from time to time overwritten in
recipes.
- :term:`SUMMARY`, :term:`DESCRIPTION`, :term:`LICENSE` or :term:`HOMEPAGE`
contain the expected information and should be set specifically for every
recipe.
- In recipes, variables are also used to control build-time and runtime
dependencies of recipes/packages on other recipes/packages. The
most common ones are :term:`PROVIDES`, :term:`RPROVIDES`, :term:`DEPENDS`,
and :term:`RDEPENDS`.
- There are further variables starting with ``SRC`` that specify the sources in
a recipe like :term:`SRC_URI` or :term:`SRCDATE`. Those are also usually set
in recipes.
- Which version or provider of a recipe is given preference when
multiple recipes provide the same item is controlled by variables
starting with ``PREFERRED_``. Those are normally set in the configuration
files of a ``MACHINE`` or ``DISTRO``.


@@ -14,7 +14,6 @@
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import sys
import datetime
current_version = "dev"


@@ -13,7 +13,6 @@ BitBake User Manual
bitbake-user-manual/bitbake-user-manual-intro
bitbake-user-manual/bitbake-user-manual-execution
bitbake-user-manual/bitbake-user-manual-metadata
bitbake-user-manual/bitbake-user-manual-ref-variables-context
bitbake-user-manual/bitbake-user-manual-fetching
bitbake-user-manual/bitbake-user-manual-ref-variables
bitbake-user-manual/bitbake-user-manual-hello


@@ -1,76 +1,32 @@
.. SPDX-License-Identifier: CC-BY-2.5
=================================
BitBake Supported Release Manuals
=================================
*******************************
Release Series 4.2 (mickledore)
*******************************
- :yocto_docs:`BitBake 2.4 User Manual </bitbake/2.4/>`
******************************
Release Series 4.0 (kirkstone)
******************************
- :yocto_docs:`BitBake 2.0 User Manual </bitbake/2.0/>`
=========================
Current Release Manuals
=========================
****************************
Release Series 3.1 (dunfell)
****************************
- :yocto_docs:`BitBake 1.46 User Manual </bitbake/1.46/>`
================================
BitBake Outdated Release Manuals
================================
*****************************
Release Series 4.1 (langdale)
*****************************
- :yocto_docs:`BitBake 2.2 User Manual </bitbake/2.2/>`
******************************
Release Series 3.4 (honister)
******************************
- :yocto_docs:`BitBake 1.52 User Manual </bitbake/1.52/>`
******************************
Release Series 3.3 (hardknott)
******************************
- :yocto_docs:`BitBake 1.50 User Manual </bitbake/1.50/>`
*******************************
Release Series 3.2 (gatesgarth)
*******************************
- :yocto_docs:`BitBake 1.48 User Manual </bitbake/1.48/>`
*******************************************
Release Series 3.1 (dunfell first versions)
*******************************************
- :yocto_docs:`3.1 BitBake User Manual </3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.1 BitBake User Manual </3.1.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.2 BitBake User Manual </3.1.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.3 BitBake User Manual </3.1.3/bitbake-user-manual/bitbake-user-manual.html>`
==========================
Previous Release Manuals
==========================
*************************
Release Series 3.0 (zeus)
*************************
- :yocto_docs:`3.0 BitBake User Manual </3.0/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.1 BitBake User Manual </3.0.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.2 BitBake User Manual </3.0.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.3 BitBake User Manual </3.0.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.4 BitBake User Manual </3.0.4/bitbake-user-manual/bitbake-user-manual.html>`
****************************
Release Series 2.7 (warrior)
****************************
- :yocto_docs:`2.7 BitBake User Manual </2.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -80,7 +36,7 @@ Release Series 2.7 (warrior)
- :yocto_docs:`2.7.4 BitBake User Manual </2.7.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.6 (thud)
*************************
- :yocto_docs:`2.6 BitBake User Manual </2.6/bitbake-user-manual/bitbake-user-manual.html>`
@@ -90,16 +46,16 @@ Release Series 2.6 (thud)
- :yocto_docs:`2.6.4 BitBake User Manual </2.6.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.5 (sumo)
*************************
- :yocto_docs:`2.5 BitBake User Manual </2.5/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.1 BitBake User Manual </2.5.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.2 BitBake User Manual </2.5.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.3 BitBake User Manual </2.5.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 2.4 (rocko)
**************************
- :yocto_docs:`2.4 BitBake User Manual </2.4/bitbake-user-manual/bitbake-user-manual.html>`
@@ -109,7 +65,7 @@ Release Series 2.4 (rocko)
- :yocto_docs:`2.4.4 BitBake User Manual </2.4.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.3 (pyro)
*************************
- :yocto_docs:`2.3 BitBake User Manual </2.3/bitbake-user-manual/bitbake-user-manual.html>`
@@ -119,7 +75,7 @@ Release Series 2.3 (pyro)
- :yocto_docs:`2.3.4 BitBake User Manual </2.3.4/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 2.2 (morty)
**************************
- :yocto_docs:`2.2 BitBake User Manual </2.2/bitbake-user-manual/bitbake-user-manual.html>`
@@ -128,7 +84,7 @@ Release Series 2.2 (morty)
- :yocto_docs:`2.2.3 BitBake User Manual </2.2.3/bitbake-user-manual/bitbake-user-manual.html>`
****************************
Release Series 2.1 (krogoth)
****************************
- :yocto_docs:`2.1 BitBake User Manual </2.1/bitbake-user-manual/bitbake-user-manual.html>`
@@ -137,7 +93,7 @@ Release Series 2.1 (krogoth)
- :yocto_docs:`2.1.3 BitBake User Manual </2.1.3/bitbake-user-manual/bitbake-user-manual.html>`
***************************
Release Series 2.0 (jethro)
***************************
- :yocto_docs:`1.9 BitBake User Manual </1.9/bitbake-user-manual/bitbake-user-manual.html>`
@@ -147,7 +103,7 @@ Release Series 2.0 (jethro)
- :yocto_docs:`2.0.3 BitBake User Manual </2.0.3/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 1.8 (fido)
*************************
- :yocto_docs:`1.8 BitBake User Manual </1.8/bitbake-user-manual/bitbake-user-manual.html>`
@@ -155,7 +111,7 @@ Release Series 1.8 (fido)
- :yocto_docs:`1.8.2 BitBake User Manual </1.8.2/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 1.7 (dizzy)
**************************
- :yocto_docs:`1.7 BitBake User Manual </1.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -164,7 +120,7 @@ Release Series 1.7 (dizzy)
- :yocto_docs:`1.7.3 BitBake User Manual </1.7.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 1.6 (daisy)
**************************
- :yocto_docs:`1.6 BitBake User Manual </1.6/bitbake-user-manual/bitbake-user-manual.html>`


@@ -3,8 +3,6 @@
#
# Copyright (C) 2006 Tim Ansell
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Please Note:
# Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.


@@ -9,34 +9,26 @@
# SPDX-License-Identifier: GPL-2.0-only
#
__version__ = "2.8.0"
__version__ = "1.48.0"
import sys
if sys.version_info < (3, 8, 0):
raise RuntimeError("Sorry, python 3.8.0 or later is required for this version of bitbake")
if sys.version_info < (3, 10, 0):
# With python 3.8 and 3.9, we see errors of "libgcc_s.so.1 must be installed for pthread_cancel to work"
# https://stackoverflow.com/questions/64797838/libgcc-s-so-1-must-be-installed-for-pthread-cancel-to-work
# https://bugs.ams1.psf.io/issue42888
# so ensure libgcc_s is loaded early on
import ctypes
libgcc_s = ctypes.CDLL('libgcc_s.so.1')
class BBHandledException(Exception):
"""
The big dilemma for generic bitbake code is what information to give the user
when an exception occurs. Any exception inheriting this base exception class
has already provided information to the user via some 'fired' message type such as
an explicitly fired event using bb.fire, or a bb.error message. If bitbake
encounters an exception derived from this class, no backtrace or other information
will be given to the user; it's assumed the earlier event provided the relevant information.
"""
pass
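# A hedged usage sketch of the contract described above (the helper and
# the variable check are hypothetical, not BitBake code): fire a
# user-visible error first, then raise BBHandledException so no backtrace
# is printed on top of the message that was already shown.
def _example_check_machine(d):
    if not d.getVar("MACHINE"):
        error("MACHINE is not set; aborting")  # bb.error() for external callers
        raise BBHandledException()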
import os
import logging
from collections import namedtuple
class NullHandler(logging.Handler):
@@ -50,28 +42,15 @@ class BBLoggerMixin(object):
def setup_bblogger(self, name):
if name.split(".")[0] == "BitBake":
self.debug = self._debug_helper
def _debug_helper(self, *args, **kwargs):
return self.bbdebug(1, *args, **kwargs)
def debug2(self, *args, **kwargs):
return self.bbdebug(2, *args, **kwargs)
def debug3(self, *args, **kwargs):
return self.bbdebug(3, *args, **kwargs)
def bbdebug(self, level, msg, *args, **kwargs):
loglevel = logging.DEBUG - level + 1
if not bb.event.worker_pid:
if self.name in bb.msg.loggerDefaultDomains and loglevel > (bb.msg.loggerDefaultDomains[self.name]):
return
if loglevel < bb.msg.loggerDefaultLogLevel:
return
if not isinstance(level, int) or not isinstance(msg, str):
mainlogger.warning("Invalid arguments in bbdebug: %s" % repr((level, msg,) + args))
return self.log(loglevel, msg, *args, **kwargs)
def plain(self, msg, *args, **kwargs):
@@ -83,13 +62,6 @@ class BBLoggerMixin(object):
def verbnote(self, msg, *args, **kwargs):
return self.log(logging.INFO + 2, msg, *args, **kwargs)
def warnonce(self, msg, *args, **kwargs):
return self.log(logging.WARNING - 1, msg, *args, **kwargs)
def erroronce(self, msg, *args, **kwargs):
return self.log(logging.ERROR - 1, msg, *args, **kwargs)
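# Sketch of the level arithmetic used above (illustrative only): bbdebug()
# maps BitBake debug level N onto logging.DEBUG - N + 1, so levels 1/2/3
# become 10/9/8, while verbnote, warnonce and erroronce sit at INFO+2,
# WARNING-1 and ERROR-1 respectively:
#
#     for n in (1, 2, 3):
#         print("debug level %d -> logging level %d" % (n, logging.DEBUG - n + 1))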
Logger = logging.getLoggerClass()
class BBLogger(Logger, BBLoggerMixin):
def __init__(self, name, *args, **kwargs):
@@ -156,7 +128,7 @@ def debug(lvl, *args):
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
mainlogger.bbdebug(lvl, ''.join(args))
def note(*args):
mainlogger.info(''.join(args))
@@ -176,15 +148,9 @@ def verbnote(*args):
def warn(*args):
mainlogger.warning(''.join(args))
def warnonce(*args):
mainlogger.warnonce(''.join(args))
def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)
def erroronce(*args):
mainlogger.erroronce(''.join(args))
def fatal(*args, **kwargs):
mainlogger.critical(''.join(args), extra=kwargs)
raise BBHandledException()
@@ -228,14 +194,3 @@ def deprecate_import(current, modulename, fromlist, renames = None):
setattr(sys.modules[current], newname, newobj)
TaskData = namedtuple("TaskData", [
"pn",
"taskname",
"fn",
"deps",
"provides",
"taskhash",
"unihash",
"hashfn",
"taskhash_deps",
])


@@ -1,215 +0,0 @@
#! /usr/bin/env python3
#
# Copyright 2023 by Garmin Ltd. or its subsidiaries
#
# SPDX-License-Identifier: MIT
import sys
import ctypes
import os
import errno
import pwd
import grp
libacl = ctypes.CDLL("libacl.so.1", use_errno=True)
ACL_TYPE_ACCESS = 0x8000
ACL_TYPE_DEFAULT = 0x4000
ACL_FIRST_ENTRY = 0
ACL_NEXT_ENTRY = 1
ACL_UNDEFINED_TAG = 0x00
ACL_USER_OBJ = 0x01
ACL_USER = 0x02
ACL_GROUP_OBJ = 0x04
ACL_GROUP = 0x08
ACL_MASK = 0x10
ACL_OTHER = 0x20
ACL_READ = 0x04
ACL_WRITE = 0x02
ACL_EXECUTE = 0x01
acl_t = ctypes.c_void_p
acl_entry_t = ctypes.c_void_p
acl_permset_t = ctypes.c_void_p
acl_perm_t = ctypes.c_uint
acl_tag_t = ctypes.c_int
libacl.acl_free.argtypes = [acl_t]
def acl_free(acl):
libacl.acl_free(acl)
libacl.acl_get_file.restype = acl_t
libacl.acl_get_file.argtypes = [ctypes.c_char_p, ctypes.c_uint]
def acl_get_file(path, typ):
acl = libacl.acl_get_file(os.fsencode(path), typ)
if acl is None:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err), str(path))
return acl
libacl.acl_get_entry.argtypes = [acl_t, ctypes.c_int, ctypes.c_void_p]
def acl_get_entry(acl, entry_id):
entry = acl_entry_t()
ret = libacl.acl_get_entry(acl, entry_id, ctypes.byref(entry))
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
if ret == 0:
return None
return entry
libacl.acl_get_tag_type.argtypes = [acl_entry_t, ctypes.c_void_p]
def acl_get_tag_type(entry_d):
tag = acl_tag_t()
ret = libacl.acl_get_tag_type(entry_d, ctypes.byref(tag))
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return tag.value
libacl.acl_get_qualifier.restype = ctypes.c_void_p
libacl.acl_get_qualifier.argtypes = [acl_entry_t]
def acl_get_qualifier(entry_d):
ret = libacl.acl_get_qualifier(entry_d)
if ret is None:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return ctypes.c_void_p(ret)
libacl.acl_get_permset.argtypes = [acl_entry_t, ctypes.c_void_p]
def acl_get_permset(entry_d):
permset = acl_permset_t()
ret = libacl.acl_get_permset(entry_d, ctypes.byref(permset))
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return permset
libacl.acl_get_perm.argtypes = [acl_permset_t, acl_perm_t]
def acl_get_perm(permset_d, perm):
ret = libacl.acl_get_perm(permset_d, perm)
if ret < 0:
err = ctypes.get_errno()
raise OSError(err, os.strerror(err))
return bool(ret)
class Entry(object):
def __init__(self, tag, qualifier, mode):
self.tag = tag
self.qualifier = qualifier
self.mode = mode
def __str__(self):
typ = ""
qual = ""
if self.tag == ACL_USER:
typ = "user"
qual = pwd.getpwuid(self.qualifier).pw_name
elif self.tag == ACL_GROUP:
typ = "group"
qual = grp.getgrgid(self.qualifier).gr_name
elif self.tag == ACL_USER_OBJ:
typ = "user"
elif self.tag == ACL_GROUP_OBJ:
typ = "group"
elif self.tag == ACL_MASK:
typ = "mask"
elif self.tag == ACL_OTHER:
typ = "other"
r = "r" if self.mode & ACL_READ else "-"
w = "w" if self.mode & ACL_WRITE else "-"
x = "x" if self.mode & ACL_EXECUTE else "-"
return f"{typ}:{qual}:{r}{w}{x}"
class ACL(object):
def __init__(self, acl):
self.acl = acl
def __del__(self):
acl_free(self.acl)
def entries(self):
entry_id = ACL_FIRST_ENTRY
while True:
entry = acl_get_entry(self.acl, entry_id)
if entry is None:
break
permset = acl_get_permset(entry)
mode = 0
for m in (ACL_READ, ACL_WRITE, ACL_EXECUTE):
if acl_get_perm(permset, m):
mode |= m
qualifier = None
tag = acl_get_tag_type(entry)
if tag == ACL_USER or tag == ACL_GROUP:
qual = acl_get_qualifier(entry)
qualifier = ctypes.cast(qual, ctypes.POINTER(ctypes.c_int))[0]
yield Entry(tag, qualifier, mode)
entry_id = ACL_NEXT_ENTRY
@classmethod
def from_path(cls, path, typ):
acl = acl_get_file(path, typ)
return cls(acl)
def main():
import argparse
import pwd
import grp
from pathlib import Path
parser = argparse.ArgumentParser()
parser.add_argument("path", help="File Path", type=Path)
args = parser.parse_args()
acl = ACL.from_path(args.path, ACL_TYPE_ACCESS)
for entry in acl.entries():
print(str(entry))
return 0
if __name__ == "__main__":
sys.exit(main())
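# A hedged usage sketch of the wrapper classes above (the path is made up;
# requires libacl.so.1 and a file carrying a POSIX ACL):
#
#     acl = ACL.from_path("/etc/hostname", ACL_TYPE_ACCESS)
#     for entry in acl.entries():
#         print(entry)   # e.g. "user::rw-", "group::r--", "other::r--"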


@@ -1,16 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
from .client import AsyncClient, Client, ClientPool
from .serv import AsyncServer, AsyncServerConnection
from .connection import DEFAULT_MAX_CHUNK
from .exceptions import (
ClientError,
ServerError,
ConnectionClosedError,
InvokeError,
)


@@ -1,313 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import abc
import asyncio
import json
import os
import socket
import sys
import re
import contextlib
from threading import Thread
from .connection import StreamConnection, WebsocketConnection, DEFAULT_MAX_CHUNK
from .exceptions import ConnectionClosedError, InvokeError
UNIX_PREFIX = "unix://"
WS_PREFIX = "ws://"
WSS_PREFIX = "wss://"
ADDR_TYPE_UNIX = 0
ADDR_TYPE_TCP = 1
ADDR_TYPE_WS = 2
def parse_address(addr):
if addr.startswith(UNIX_PREFIX):
return (ADDR_TYPE_UNIX, (addr[len(UNIX_PREFIX) :],))
elif addr.startswith(WS_PREFIX) or addr.startswith(WSS_PREFIX):
return (ADDR_TYPE_WS, (addr,))
else:
m = re.match(r"\[(?P<host>[^\]]*)\]:(?P<port>\d+)$", addr)
if m is not None:
host = m.group("host")
port = m.group("port")
else:
host, port = addr.split(":")
return (ADDR_TYPE_TCP, (host, int(port)))
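# Illustrative parses of the three address families handled above (the
# addresses themselves are made up):
#
#     parse_address("unix:///tmp/hashserv.sock") -> (ADDR_TYPE_UNIX, ("/tmp/hashserv.sock",))
#     parse_address("ws://localhost:8686")       -> (ADDR_TYPE_WS, ("ws://localhost:8686",))
#     parse_address("127.0.0.1:8686")            -> (ADDR_TYPE_TCP, ("127.0.0.1", 8686))
#     parse_address("[::1]:8686")                -> (ADDR_TYPE_TCP, ("::1", 8686))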
class AsyncClient(object):
def __init__(
self,
proto_name,
proto_version,
logger,
timeout=30,
server_headers=False,
headers={},
):
self.socket = None
self.max_chunk = DEFAULT_MAX_CHUNK
self.proto_name = proto_name
self.proto_version = proto_version
self.logger = logger
self.timeout = timeout
self.needs_server_headers = server_headers
self.server_headers = {}
self.headers = headers
async def connect_tcp(self, address, port):
async def connect_sock():
reader, writer = await asyncio.open_connection(address, port)
return StreamConnection(reader, writer, self.timeout, self.max_chunk)
self._connect_sock = connect_sock
async def connect_unix(self, path):
async def connect_sock():
# AF_UNIX has path length issues so chdir here to work around them
cwd = os.getcwd()
try:
os.chdir(os.path.dirname(path))
# The socket must be opened synchronously so that CWD doesn't get
# changed out from underneath us so we pass as a sock into asyncio
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
sock.connect(os.path.basename(path))
finally:
os.chdir(cwd)
reader, writer = await asyncio.open_unix_connection(sock=sock)
return StreamConnection(reader, writer, self.timeout, self.max_chunk)
self._connect_sock = connect_sock
async def connect_websocket(self, uri):
import websockets
async def connect_sock():
websocket = await websockets.connect(uri, ping_interval=None)
return WebsocketConnection(websocket, self.timeout)
self._connect_sock = connect_sock
async def setup_connection(self):
# Send headers
await self.socket.send("%s %s" % (self.proto_name, self.proto_version))
await self.socket.send(
"needs-headers: %s" % ("true" if self.needs_server_headers else "false")
)
for k, v in self.headers.items():
await self.socket.send("%s: %s" % (k, v))
# End of headers
await self.socket.send("")
self.server_headers = {}
if self.needs_server_headers:
while True:
line = await self.socket.recv()
if not line:
# End headers
break
tag, value = line.split(":", 1)
self.server_headers[tag.lower()] = value.strip()
async def get_header(self, tag, default):
await self.connect()
return self.server_headers.get(tag, default)
async def connect(self):
if self.socket is None:
self.socket = await self._connect_sock()
await self.setup_connection()
async def disconnect(self):
if self.socket is not None:
await self.socket.close()
self.socket = None
async def close(self):
await self.disconnect()
async def _send_wrapper(self, proc):
count = 0
while True:
try:
await self.connect()
return await proc()
except (
OSError,
ConnectionError,
ConnectionClosedError,
json.JSONDecodeError,
UnicodeDecodeError,
) as e:
self.logger.warning("Error talking to server: %s" % e)
if count >= 3:
if not isinstance(e, ConnectionError):
raise ConnectionError(str(e))
raise e
await self.close()
count += 1
def check_invoke_error(self, msg):
if isinstance(msg, dict) and "invoke-error" in msg:
raise InvokeError(msg["invoke-error"]["message"])
async def invoke(self, msg):
async def proc():
await self.socket.send_message(msg)
return await self.socket.recv_message()
result = await self._send_wrapper(proc)
self.check_invoke_error(result)
return result
async def ping(self):
return await self.invoke({"ping": {}})
async def __aenter__(self):
return self
async def __aexit__(self, exc_type, exc_value, traceback):
await self.close()
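# Minimal usage sketch, assuming a server speaking a hypothetical "example"
# protocol 1.0 is already listening on the (made-up) socket path below:
#
#     import logging
#
#     async def demo():
#         client = AsyncClient("example", "1.0", logging.getLogger("demo"))
#         await client.connect_unix("/tmp/example.sock")
#         async with client:
#             print(await client.ping())
#
#     asyncio.run(demo())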
class Client(object):
def __init__(self):
self.client = self._get_async_client()
self.loop = asyncio.new_event_loop()
# Override any pre-existing loop.
# Without this, the PR server export selftest triggers a hang
# when running with Python 3.7. The drawback is that there is
# potential for issues if the PR and hash equiv (or some new)
# clients need to both be instantiated in the same process.
# This should be revisited if/when Python 3.9 becomes the
# minimum required version for BitBake, as it seems not
# required (but harmless) with it.
asyncio.set_event_loop(self.loop)
self._add_methods("connect_tcp", "ping")
@abc.abstractmethod
def _get_async_client(self):
pass
def _get_downcall_wrapper(self, downcall):
def wrapper(*args, **kwargs):
return self.loop.run_until_complete(downcall(*args, **kwargs))
return wrapper
def _add_methods(self, *methods):
for m in methods:
downcall = getattr(self.client, m)
setattr(self, m, self._get_downcall_wrapper(downcall))
def connect_unix(self, path):
self.loop.run_until_complete(self.client.connect_unix(path))
self.loop.run_until_complete(self.client.connect())
@property
def max_chunk(self):
return self.client.max_chunk
@max_chunk.setter
def max_chunk(self, value):
self.client.max_chunk = value
def disconnect(self):
self.loop.run_until_complete(self.client.close())
def close(self):
if self.loop:
self.loop.run_until_complete(self.client.close())
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
self.loop = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
return False
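# Concrete synchronous clients are derived by supplying _get_async_client();
# a sketch with a hypothetical protocol name and socket path:
#
#     class ExampleClient(Client):
#         def _get_async_client(self):
#             return AsyncClient("example", "1.0", logging.getLogger("demo"))
#
#     with ExampleClient() as c:
#         c.connect_unix("/tmp/example.sock")
#         print(c.ping())   # the wrapper runs the coroutine to completion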
class ClientPool(object):
def __init__(self, max_clients):
self.avail_clients = []
self.num_clients = 0
self.max_clients = max_clients
self.loop = None
self.client_condition = None
@abc.abstractmethod
async def _new_client(self):
raise NotImplementedError("Must be implemented in derived class")
def close(self):
if self.client_condition:
self.client_condition = None
if self.loop:
self.loop.run_until_complete(self.__close_clients())
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
self.loop = None
def run_tasks(self, tasks):
if not self.loop:
self.loop = asyncio.new_event_loop()
thread = Thread(target=self.__thread_main, args=(tasks,))
thread.start()
thread.join()
@contextlib.asynccontextmanager
async def get_client(self):
async with self.client_condition:
if self.avail_clients:
client = self.avail_clients.pop()
elif self.num_clients < self.max_clients:
self.num_clients += 1
client = await self._new_client()
else:
while not self.avail_clients:
await self.client_condition.wait()
client = self.avail_clients.pop()
try:
yield client
finally:
async with self.client_condition:
self.avail_clients.append(client)
self.client_condition.notify()
def __thread_main(self, tasks):
async def process_task(task):
async with self.get_client() as client:
await task(client)
asyncio.set_event_loop(self.loop)
if not self.client_condition:
self.client_condition = asyncio.Condition()
tasks = [process_task(t) for t in tasks]
self.loop.run_until_complete(asyncio.gather(*tasks))
async def __close_clients(self):
for c in self.avail_clients:
await c.close()
self.avail_clients = []
self.num_clients = 0
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
return False
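# Pool usage sketch (hypothetical subclass and path): _new_client() is the
# only required override, and run_tasks() fans coroutines out over at most
# max_clients connections:
#
#     class ExamplePool(ClientPool):
#         async def _new_client(self):
#             client = AsyncClient("example", "1.0", logging.getLogger("demo"))
#             await client.connect_unix("/tmp/example.sock")
#             return client
#
#     async def ping_task(client):
#         await client.ping()
#
#     with ExamplePool(4) as pool:
#         pool.run_tasks([ping_task] * 8)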


@@ -1,146 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import asyncio
import itertools
import json
from datetime import datetime
from .exceptions import ClientError, ConnectionClosedError
# The Python async server defaults to a 64K receive buffer, so we hardcode our
# maximum chunk size. It would be better if the client and server reported to
# each other what the maximum chunk sizes were, but that will slow down the
# connection setup with a round trip delay so I'd rather not do that unless it
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024
def chunkify(msg, max_chunk):
if len(msg) < max_chunk - 1:
yield "".join((msg, "\n"))
else:
yield "".join((json.dumps({"chunk-stream": None}), "\n"))
args = [iter(msg)] * (max_chunk - 1)
for m in map("".join, itertools.zip_longest(*args, fillvalue="")):
yield "".join(itertools.chain(m, "\n"))
yield "\n"
def json_serialize(obj):
if isinstance(obj, datetime):
return obj.isoformat()
raise TypeError("Type %s not serializeable" % type(obj))
class StreamConnection(object):
def __init__(self, reader, writer, timeout, max_chunk=DEFAULT_MAX_CHUNK):
self.reader = reader
self.writer = writer
self.timeout = timeout
self.max_chunk = max_chunk
@property
def address(self):
return self.writer.get_extra_info("peername")
async def send_message(self, msg):
for c in chunkify(json.dumps(msg, default=json_serialize), self.max_chunk):
self.writer.write(c.encode("utf-8"))
await self.writer.drain()
async def recv_message(self):
l = await self.recv()
m = json.loads(l)
if not m:
return m
if "chunk-stream" in m:
lines = []
while True:
l = await self.recv()
if not l:
break
lines.append(l)
m = json.loads("".join(lines))
return m
async def send(self, msg):
self.writer.write(("%s\n" % msg).encode("utf-8"))
await self.writer.drain()
async def recv(self):
if self.timeout < 0:
line = await self.reader.readline()
else:
try:
line = await asyncio.wait_for(self.reader.readline(), self.timeout)
except asyncio.TimeoutError:
raise ConnectionError("Timed out waiting for data")
if not line:
raise ConnectionClosedError("Connection closed")
line = line.decode("utf-8")
if not line.endswith("\n"):
raise ConnectionError("Bad message %r" % (line))
return line.rstrip()
async def close(self):
self.reader = None
if self.writer is not None:
self.writer.close()
self.writer = None
class WebsocketConnection(object):
def __init__(self, socket, timeout):
self.socket = socket
self.timeout = timeout
@property
def address(self):
return ":".join(str(s) for s in self.socket.remote_address)
async def send_message(self, msg):
await self.send(json.dumps(msg, default=json_serialize))
async def recv_message(self):
m = await self.recv()
return json.loads(m)
async def send(self, msg):
import websockets.exceptions
try:
await self.socket.send(msg)
except websockets.exceptions.ConnectionClosed:
raise ConnectionClosedError("Connection closed")
async def recv(self):
import websockets.exceptions
try:
if self.timeout < 0:
return await self.socket.recv()
try:
return await asyncio.wait_for(self.socket.recv(), self.timeout)
except asyncio.TimeoutError:
raise ConnectionError("Timed out waiting for data")
except websockets.exceptions.ConnectionClosed:
raise ConnectionClosedError("Connection closed")
async def close(self):
if self.socket is not None:
await self.socket.close()
self.socket = None


@@ -1,21 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
class ClientError(Exception):
pass
class InvokeError(Exception):
pass
class ServerError(Exception):
pass
class ConnectionClosedError(Exception):
pass


@@ -1,391 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import abc
import asyncio
import json
import os
import signal
import socket
import sys
import multiprocessing
import logging
from .connection import StreamConnection, WebsocketConnection
from .exceptions import ClientError, ServerError, ConnectionClosedError, InvokeError
class ClientLoggerAdapter(logging.LoggerAdapter):
def process(self, msg, kwargs):
return f"[Client {self.extra['address']}] {msg}", kwargs
class AsyncServerConnection(object):
# If a handler returns this object (e.g. `return self.NO_RESPONSE`), no
# return message will automatically be sent back to the client
NO_RESPONSE = object()
def __init__(self, socket, proto_name, logger):
self.socket = socket
self.proto_name = proto_name
self.handlers = {
"ping": self.handle_ping,
}
self.logger = ClientLoggerAdapter(
logger,
{
"address": socket.address,
},
)
self.client_headers = {}
async def close(self):
await self.socket.close()
async def handle_headers(self, headers):
return {}
async def process_requests(self):
try:
self.logger.info("Client %r connected" % (self.socket.address,))
# Read protocol and version
client_protocol = await self.socket.recv()
if not client_protocol:
return
(client_proto_name, client_proto_version) = client_protocol.split()
if client_proto_name != self.proto_name:
self.logger.debug("Rejecting invalid protocol %s" % (self.proto_name))
return
self.proto_version = tuple(int(v) for v in client_proto_version.split("."))
if not self.validate_proto_version():
self.logger.debug(
"Rejecting invalid protocol version %s" % (client_proto_version)
)
return
# Read headers
self.client_headers = {}
while True:
header = await self.socket.recv()
if not header:
# Empty line. End of headers
break
tag, value = header.split(":", 1)
self.client_headers[tag.lower()] = value.strip()
if self.client_headers.get("needs-headers", "false") == "true":
for k, v in (await self.handle_headers(self.client_headers)).items():
await self.socket.send("%s: %s" % (k, v))
await self.socket.send("")
# Handle messages
while True:
d = await self.socket.recv_message()
if d is None:
break
try:
response = await self.dispatch_message(d)
except InvokeError as e:
await self.socket.send_message(
{"invoke-error": {"message": str(e)}}
)
break
if response is not self.NO_RESPONSE:
await self.socket.send_message(response)
except ConnectionClosedError as e:
self.logger.info(str(e))
except (ClientError, ConnectionError) as e:
self.logger.error(str(e))
finally:
await self.close()
async def dispatch_message(self, msg):
for k in self.handlers.keys():
if k in msg:
self.logger.debug("Handling %s" % k)
return await self.handlers[k](msg[k])
raise ClientError("Unrecognized command %r" % msg)
async def handle_ping(self, request):
return {"alive": True}
class StreamServer(object):
def __init__(self, handler, logger):
self.handler = handler
self.logger = logger
self.closed = False
async def handle_stream_client(self, reader, writer):
# writer.transport.set_write_buffer_limits(0)
socket = StreamConnection(reader, writer, -1)
if self.closed:
await socket.close()
return
await self.handler(socket)
async def stop(self):
self.closed = True
class TCPStreamServer(StreamServer):
def __init__(self, host, port, handler, logger):
super().__init__(handler, logger)
self.host = host
self.port = port
def start(self, loop):
self.server = loop.run_until_complete(
asyncio.start_server(self.handle_stream_client, self.host, self.port)
)
for s in self.server.sockets:
self.logger.debug("Listening on %r" % (s.getsockname(),))
# Newer python does this automatically. Do it manually here for
# maximum compatibility
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "[%s]:%d" % (name[0], name[1])
else:
self.address = "%s:%d" % (name[0], name[1])
return [self.server.wait_closed()]
async def stop(self):
await super().stop()
self.server.close()
def cleanup(self):
pass
class UnixStreamServer(StreamServer):
def __init__(self, path, handler, logger):
super().__init__(handler, logger)
self.path = path
def start(self, loop):
cwd = os.getcwd()
try:
# Work around path length limits in AF_UNIX
os.chdir(os.path.dirname(self.path))
self.server = loop.run_until_complete(
asyncio.start_unix_server(
self.handle_stream_client, os.path.basename(self.path)
)
)
finally:
os.chdir(cwd)
self.logger.debug("Listening on %r" % self.path)
self.address = "unix://%s" % os.path.abspath(self.path)
return [self.server.wait_closed()]
async def stop(self):
await super().stop()
self.server.close()
def cleanup(self):
os.unlink(self.path)
class WebsocketsServer(object):
def __init__(self, host, port, handler, logger):
self.host = host
self.port = port
self.handler = handler
self.logger = logger
def start(self, loop):
import websockets.server
self.server = loop.run_until_complete(
websockets.server.serve(
self.client_handler,
self.host,
self.port,
ping_interval=None,
)
)
for s in self.server.sockets:
self.logger.debug("Listening on %r" % (s.getsockname(),))
# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "ws://[%s]:%d" % (name[0], name[1])
else:
self.address = "ws://%s:%d" % (name[0], name[1])
return [self.server.wait_closed()]
async def stop(self):
self.server.close()
def cleanup(self):
pass
async def client_handler(self, websocket):
socket = WebsocketConnection(websocket, -1)
await self.handler(socket)
class AsyncServer(object):
def __init__(self, logger):
self.logger = logger
self.loop = None
self.run_tasks = []
def start_tcp_server(self, host, port):
self.server = TCPStreamServer(host, port, self._client_handler, self.logger)
def start_unix_server(self, path):
self.server = UnixStreamServer(path, self._client_handler, self.logger)
def start_websocket_server(self, host, port):
self.server = WebsocketsServer(host, port, self._client_handler, self.logger)
async def _client_handler(self, socket):
address = socket.address
try:
client = self.accept_client(socket)
await client.process_requests()
except Exception as e:
import traceback
self.logger.error(
"Error from client %s: %s" % (address, str(e)), exc_info=True
)
traceback.print_exc()
finally:
self.logger.debug("Client %s disconnected", address)
await socket.close()
@abc.abstractmethod
def accept_client(self, socket):
pass
async def stop(self):
self.logger.debug("Stopping server")
await self.server.stop()
def start(self):
tasks = self.server.start(self.loop)
self.address = self.server.address
return tasks
def signal_handler(self):
self.logger.debug("Got exit signal")
self.loop.create_task(self.stop())
def _serve_forever(self, tasks):
try:
self.loop.add_signal_handler(signal.SIGTERM, self.signal_handler)
self.loop.add_signal_handler(signal.SIGINT, self.signal_handler)
self.loop.add_signal_handler(signal.SIGQUIT, self.signal_handler)
signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGTERM])
self.loop.run_until_complete(asyncio.gather(*tasks))
self.logger.debug("Server shutting down")
finally:
self.server.cleanup()
def serve_forever(self):
"""
Serve requests in the current process
"""
self._create_loop()
tasks = self.start()
self._serve_forever(tasks)
self.loop.close()
def _create_loop(self):
# Create loop and override any loop that may have existed in
# a parent process. It is possible that the usecases of
# serve_forever might be constrained enough to allow using
# get_event_loop here, but better safe than sorry for now.
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
def serve_as_process(self, *, prefunc=None, args=(), log_level=None):
"""
Serve requests in a child process
"""
def run(queue):
# Create loop and override any loop that may have existed
# in a parent process. Without doing this and instead
# using get_event_loop, at the very minimum the hashserv
# unit tests will hang when running the second test.
# This happens since get_event_loop in the spawned server
# process for the second testcase ends up with the loop
# from the hashserv client created in the unit test process
# when running the first testcase. The problem is somewhat
# more general, though, as any potential use of asyncio in
# Cooker could create a loop that needs to replaced in this
# new process.
self._create_loop()
try:
self.address = None
tasks = self.start()
finally:
# Always put the server address to wake up the parent task
queue.put(self.address)
queue.close()
if prefunc is not None:
prefunc(self, *args)
if log_level is not None:
self.logger.setLevel(log_level)
self._serve_forever(tasks)
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
queue = multiprocessing.Queue()
# Temporarily block SIGTERM. The server process will inherit this
# block which will ensure it doesn't receive the SIGTERM until the
# handler is ready for it
mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGTERM])
try:
self.process = multiprocessing.Process(target=run, args=(queue,))
self.process.start()
self.address = queue.get()
queue.close()
queue.join_thread()
return self.process
finally:
signal.pthread_sigmask(signal.SIG_SETMASK, mask)
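# End-to-end wiring sketch (hypothetical names; the socket path is made up):
#
#     import logging
#
#     class ExampleServer(AsyncServer):
#         def accept_client(self, socket):
#             return _ExampleConnection(socket, self.logger)
#
#     server = ExampleServer(logging.getLogger("demo"))
#     server.start_unix_server("/tmp/example.sock")
#     server.serve_as_process()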


@@ -20,12 +20,10 @@ import itertools
import time
import re
import stat
import datetime
import bb
import bb.msg
import bb.process
import bb.progress
from io import StringIO
from bb import data, event, utils
bblogger = logging.getLogger('BitBake')
@@ -178,9 +176,7 @@ class StdoutNoopContextManager:
@property
def name(self):
if "name" in dir(sys.stdout):
return sys.stdout.name
return "<mem>"
def exec_func(func, d, dirs = None):
@@ -299,25 +295,9 @@ def exec_func_python(func, d, runfile, cwd=None):
lineno = int(d.getVarFlag(func, "lineno", False))
bb.methodpool.insert_method(func, text, fn, lineno - 1)
if verboseStdoutLogging:
sys.stdout.flush()
sys.stderr.flush()
currout = sys.stdout
currerr = sys.stderr
sys.stderr = sys.stdout = execio = StringIO()
comp = utils.better_compile(code, func, "exec_func_python() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_func_python() autogenerated")
comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated")
finally:
if verboseStdoutLogging:
execio.flush()
logger.plain("%s" % execio.getvalue())
sys.stdout = currout
sys.stderr = currerr
execio.close()
# We want any stdout/stderr to be printed before any other log messages to make debugging
# more accurate. In some cases we seem to lose stdout/stderr entirely in logging tests without this.
sys.stdout.flush()
sys.stderr.flush()
bb.debug(2, "Python function %s finished" % func)
if cwd and olddir:
@@ -456,11 +436,7 @@ exit $ret
if fakerootcmd:
cmd = [fakerootcmd, runfile]
# We only want to output to logger via LogTee if stdout is sys.__stdout__ (which will either
# be real stdout or subprocess PIPE or similar). In other cases we are being run "recursively",
# ie. inside another function, in which case stdout is already being captured so we don't
# want to Tee here as output would be printed twice, and out of order.
if verboseStdoutLogging and sys.stdout == sys.__stdout__:
logfile = LogTee(logger, StdoutNoopContextManager())
else:
logfile = StdoutNoopContextManager()
@@ -589,8 +565,10 @@ exit $ret
def _task_data(fn, task, d):
localdata = bb.data.createCopy(d)
localdata.setVar('BB_FILENAME', fn)
localdata.setVar('BB_CURRENTTASK', task[3:])
localdata.setVar('OVERRIDES', 'task-%s:%s' %
(task[3:].replace('_', '-'), d.getVar('OVERRIDES', False)))
localdata.finalize()
bb.data.expandKeys(localdata)
return localdata
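# Worked illustration (hypothetical task name): for task "do_compile",
# task[3:] is "compile", so the code above produces
#     OVERRIDES = "task-compile:<previous OVERRIDES>"
# and override-scoped assignments for that task apply while it runs.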
@@ -601,11 +579,11 @@ def _exec_task(fn, task, d, quieterr):
running it with its own local metadata, and with some useful variables set.
"""
if not d.getVarFlag(task, 'task', False):
event.fire(TaskInvalid(task, fn, d), d)
logger.error("No such task: %s" % task)
return 1
logger.debug("Executing task %s", task)
logger.debug(1, "Executing task %s", task)
localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T')
@@ -618,7 +596,7 @@ def _exec_task(fn, task, d, quieterr):
curnice = os.nice(0)
nice = int(nice) - curnice
newnice = os.nice(nice)
logger.debug("Renice to %s " % newnice)
logger.debug(1, "Renice to %s " % newnice)
ionice = localdata.getVar("BB_TASK_IONICE_LEVEL")
if ionice:
try:
@@ -637,8 +615,7 @@ def _exec_task(fn, task, d, quieterr):
logorder = os.path.join(tempdir, 'log.task_order')
try:
with open(logorder, 'a') as logorderfile:
timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S.%f")
logorderfile.write('{0} {1} ({2}): {3}\n'.format(timestamp, task, os.getpid(), logbase))
except OSError:
logger.exception("Opening log file '%s'", logorder)
pass
@@ -705,55 +682,47 @@ def _exec_task(fn, task, d, quieterr):
try:
try:
event.fire(TaskStarted(task, fn, logfn, flags, localdata), localdata)
except (bb.BBHandledException, SystemExit):
return 1
try:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
finally:
# Need to flush and close the logs before sending events where the
# UI may try to look at the logs.
sys.stdout.flush()
sys.stderr.flush()
except bb.BBHandledException:
event.fire(TaskFailed(task, fn, logfn, localdata, True), localdata)
return 1
except Exception as exc:
if quieterr:
event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
logger.error(str(exc))
event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
bblogger.removeHandler(handler)
# Restore the backup fds
os.dup2(osi[0], osi[1])
os.dup2(oso[0], oso[1])
os.dup2(ose[0], ose[1])
# Close the backup fds
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
logger.debug2("Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, fn, logfn, localdata), localdata)
if not localdata.getVarFlag(task, 'nostamp', False) and not localdata.getVarFlag(task, 'selfstamp', False):
@@ -791,7 +760,44 @@ def exec_task(fn, task, d, profile = False):
event.fire(failedevent, d)
return 1
def _get_cleanmask(taskname, mcfn):
    """
    Internal stamp helper function to generate stamp cleaning mask
    Returns the stamp path+filename
    """
    cleanmask = bb.parse.siggen.stampcleanmask_mcfn(taskname, mcfn)
    taskflagname = taskname.replace("_setscene", "")
    if cleanmask:
        return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
    return []

def clean_stamp_mcfn(task, mcfn):
    cleanmask = _get_cleanmask(task, mcfn)
    for mask in cleanmask:
        for name in glob.glob(mask):
            # Preserve sigdata files in the stamps directory
            if "sigdata" in name or "sigbasedata" in name:
                continue
            os.unlink(name)

def clean_stamp(task, d):
    mcfn = d.getVar('BB_FILENAME')
    clean_stamp_mcfn(task, mcfn)

def make_stamp_mcfn(task, mcfn):
    basestamp = bb.parse.siggen.stampfile_mcfn(task, mcfn)
    stampdir = os.path.dirname(basestamp)
    if cached_mtime_noerror(stampdir) == 0:
        bb.utils.mkdirhier(stampdir)
    clean_stamp_mcfn(task, mcfn)
    # Remove the file and recreate to force timestamp
    # change on broken NFS filesystems
    if basestamp:
        bb.utils.remove(basestamp)
        open(basestamp, "w").close()

def make_stamp(task, d):
    """
    Creates/updates a stamp for a given task
    """
    mcfn = d.getVar('BB_FILENAME')
    make_stamp_mcfn(task, mcfn)
    # If we're in task context, write out a signature file for each task
    # as it completes
    if not task.endswith("_setscene"):
        stampbase = bb.parse.siggen.stampfile_base(mcfn)
        bb.parse.siggen.dump_sigtask(mcfn, task, stampbase, True)

def find_stale_stamps(task, mcfn):
    current = bb.parse.siggen.stampfile_mcfn(task, mcfn)
    current2 = bb.parse.siggen.stampfile_mcfn(task + "_setscene", mcfn)
    cleanmask = _get_cleanmask(task, mcfn)
    found = []
    for mask in cleanmask:
        for name in glob.glob(mask):
            if "sigdata" in name or "sigbasedata" in name:
                continue
            if name.endswith('.taint'):
                continue
            if name == current or name == current2:
                continue
            logger.debug2("Stampfile %s does not match %s or %s" % (name, current, current2))
            found.append(name)
    return found

def write_taint(task, d):
    """
    Creates a "taint" file which will force the specified task and its
    dependents to be re-run the next time by influencing the value of its
    taskhash.
    """
    mcfn = d.getVar('BB_FILENAME')
    bb.parse.siggen.invalidate_task(task, mcfn)
def add_tasks(tasklist, d):
task_deps = d.getVar('_task_deps', False)
@@ -901,11 +910,6 @@ def add_tasks(tasklist, d):
task_deps[name] = {}
if name in flags:
deptask = d.expand(flags[name])
if name in ['noexec', 'fakeroot', 'nostamp']:
if deptask != '1':
bb.warn("In a future version of BitBake, setting the '{}' flag to something other than '1' "
"will result in the flag not being set. See YP bug #13808.".format(name))
task_deps[name][task] = deptask
getTask('mcdepends')
getTask('depends')
@@ -1004,8 +1008,6 @@ def tasksbetween(task_start, task_end, d):
def follow_chain(task, endtask, chain=None):
if not chain:
chain = []
if task in chain:
bb.fatal("Circular task dependencies as %s depends on itself via the chain %s" % (task, " -> ".join(chain)))
chain.append(task)
for othertask in tasks:
if othertask == task:


@@ -19,16 +19,14 @@
import os
import logging
import pickle
from collections import defaultdict
from collections.abc import Mapping
import bb.utils
from bb import PrefixLoggerAdapter
import re
import shutil
logger = logging.getLogger("BitBake.Cache")
__cache_version__ = "155"
__cache_version__ = "153"
def getCacheFile(path, filename, mc, data_hash):
mcspec = ''
@@ -55,12 +53,12 @@ class RecipeInfoCommon(object):
@classmethod
def pkgvar(cls, var, packages, metadata):
return dict((pkg, cls.depvar("%s:%s" % (var, pkg), metadata))
return dict((pkg, cls.depvar("%s_%s" % (var, pkg), metadata))
for pkg in packages)
@classmethod
def taskvar(cls, var, tasks, metadata):
return dict((task, cls.getvar("%s:task-%s" % (var, task), metadata))
return dict((task, cls.getvar("%s_task-%s" % (var, task), metadata))
for task in tasks)
@classmethod
@@ -96,7 +94,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
if not self.packages:
self.packages.append(self.pn)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.skipreason = self.getvar('__SKIPPED', metadata)
if self.skipreason:
@@ -105,7 +102,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.tasks = metadata.getVar('__BBTASKS', False)
self.basetaskhashes = metadata.getVar('__siggen_basehashes', False) or {}
self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)
self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}
@@ -123,12 +120,12 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.depends = self.depvar('DEPENDS', metadata)
self.rdepends = self.depvar('RDEPENDS', metadata)
self.rrecommends = self.depvar('RRECOMMENDS', metadata)
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
self.inherits = self.getvar('__inherit_cache', metadata, expand=False)
self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
self.fakerootlogs = self.getvar('FAKEROOTLOGS', metadata)
self.fakerootnoenv = self.getvar('FAKEROOTNOENV', metadata)
self.extradepsfunc = self.getvar('calculate_extra_depends', metadata)
@@ -166,7 +163,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv = {}
cachedata.fakerootnoenv = {}
cachedata.fakerootdirs = {}
cachedata.fakerootlogs = {}
cachedata.extradepsfunc = {}
def add_cacheData(self, cachedata, fn):
@@ -216,10 +212,10 @@ class CoreRecipeInfo(RecipeInfoCommon):
# Collect files we may need for possible world-dep
# calculations
if not bb.utils.to_boolean(self.not_world):
cachedata.possible_world.append(fn)
#else:
# logger.debug2("EXCLUDE FROM WORLD: %s", fn)
# create a collection of all targets for sanity checking
# tasks, such as upstream versions, license, and tools for
@@ -235,116 +231,17 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv[fn] = self.fakerootenv
cachedata.fakerootnoenv[fn] = self.fakerootnoenv
cachedata.fakerootdirs[fn] = self.fakerootdirs
cachedata.fakerootlogs[fn] = self.fakerootlogs
cachedata.extradepsfunc[fn] = self.extradepsfunc
class SiggenRecipeInfo(RecipeInfoCommon):
__slots__ = ()
classname = "SiggenRecipeInfo"
cachefile = "bb_cache_" + classname +".dat"
# we don't want to show this information in graph files so don't set cachefields
#cachefields = []
def __init__(self, filename, metadata):
self.siggen_gendeps = metadata.getVar("__siggen_gendeps", False)
self.siggen_varvals = metadata.getVar("__siggen_varvals", False)
self.siggen_taskdeps = metadata.getVar("__siggen_taskdeps", False)
@classmethod
def init_cacheData(cls, cachedata):
cachedata.siggen_taskdeps = {}
cachedata.siggen_gendeps = {}
cachedata.siggen_varvals = {}
def add_cacheData(self, cachedata, fn):
cachedata.siggen_gendeps[fn] = self.siggen_gendeps
cachedata.siggen_varvals[fn] = self.siggen_varvals
cachedata.siggen_taskdeps[fn] = self.siggen_taskdeps
# The siggen variable data is large and impacts:
# - bitbake's overall memory usage
# - the amount of data sent over IPC between parsing processes and the server
# - the size of the cache files on disk
# - the size of "sigdata" hash information files on disk
# The data consists of strings (some large) or frozenset lists of variables
# As such, we a) deduplicate the data here and b) pass references to the object at second
# access (e.g. over IPC or saving into pickle).
store = {}
save_map = {}
save_count = 1
restore_map = {}
restore_count = {}
@classmethod
def reset(cls):
# Needs to be called before starting new streamed data in a given process
# (e.g. writing out the cache again)
cls.save_map = {}
cls.save_count = 1
cls.restore_map = {}
@classmethod
def _save(cls, deps):
ret = []
if not deps:
return deps
for dep in deps:
fs = deps[dep]
if fs is None:
ret.append((dep, None, None))
elif fs in cls.save_map:
ret.append((dep, None, cls.save_map[fs]))
else:
cls.save_map[fs] = cls.save_count
ret.append((dep, fs, cls.save_count))
cls.save_count = cls.save_count + 1
return ret
@classmethod
def _restore(cls, deps, pid):
ret = {}
if not deps:
return deps
if pid not in cls.restore_map:
cls.restore_map[pid] = {}
map = cls.restore_map[pid]
for dep, fs, mapnum in deps:
if fs is None and mapnum is None:
ret[dep] = None
elif fs is None:
ret[dep] = map[mapnum]
else:
try:
fs = cls.store[fs]
except KeyError:
cls.store[fs] = fs
map[mapnum] = fs
ret[dep] = fs
return ret
def __getstate__(self):
ret = {}
for key in ["siggen_gendeps", "siggen_taskdeps", "siggen_varvals"]:
ret[key] = self._save(self.__dict__[key])
ret['pid'] = os.getpid()
return ret
def __setstate__(self, state):
pid = state['pid']
for key in ["siggen_gendeps", "siggen_taskdeps", "siggen_varvals"]:
setattr(self, key, self._restore(state[key], pid))
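# A hedged round-trip sketch of the helpers above (task names made up):
# identical frozensets travel once; later references carry only the
# integer map number.
def _example_dedup_roundtrip():
    deps = {"do_compile": frozenset({"CC"}), "do_install": frozenset({"CC"})}
    SiggenRecipeInfo.reset()
    wire = SiggenRecipeInfo._save(deps)
    # wire == [("do_compile", frozenset({"CC"}), 1), ("do_install", None, 1)]
    assert SiggenRecipeInfo._restore(wire, pid=1234) == deps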
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
mc = ""
if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
(_, mc, virtualfn) = virtualfn.split(':', 2)
fn = virtualfn
cls = ""
@@ -367,29 +264,107 @@ def realfn2virtual(realfn, cls, mc):
def variant2virtual(realfn, variant):
"""
Convert a real filename + a variant to a virtual filename
"""
if variant == "":
return realfn
if variant.startswith("mc:") and variant.count(':') >= 2:
if variant.startswith("mc:"):
elems = variant.split(":")
if elems[2]:
return "mc:" + elems[1] + ":virtual:" + ":".join(elems[2:]) + ":" + realfn
return "mc:" + elems[1] + ":" + realfn
return "virtual:" + variant + ":" + realfn
#
# Cooker calls cacheValid on its recipe list, then either calls loadCached
# from its main thread or parse() from separate processes to generate an
# up-to-date cache
#
class Cache(object):
def parse_recipe(bb_data, bbfile, appends, mc=''):
"""
Parse a recipe
"""
chdir_back = False
bb_data.setVar("__BBMULTICONFIG", mc)
# expand tmpdir to include this topdir
bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR') or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
bb.parse.cached_mtime_noerror(bbfile_loc)
# The ConfHandler first checks whether there is a TOPDIR and, if not,
# falls back to calling getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
class NoCache(object):
def __init__(self, databuilder):
self.databuilder = databuilder
self.data = databuilder.data
def loadDataFull(self, virtualfn, appends):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug(1, "Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
def load_bbfile(self, bbfile, appends, virtonly = False, mc=None):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = virtualfn2realfn(bbfile)
bb_data = self.databuilder.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = parse_recipe(bb_data, bbfile, appends, mc)
return datastores
if mc is not None:
bb_data = self.databuilder.mcdata[mc].createCopy()
return parse_recipe(bb_data, bbfile, appends, mc)
bb_data = self.data.createCopy()
datastores = parse_recipe(bb_data, bbfile, appends)
for mc in self.databuilder.mcdata:
if not mc:
continue
bb_data = self.databuilder.mcdata[mc].createCopy()
newstores = parse_recipe(bb_data, bbfile, appends, mc)
for ns in newstores:
datastores["mc:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
class Cache(NoCache):
"""
BitBake Cache implementation
"""
def __init__(self, databuilder, mc, data_hash, caches_array):
self.databuilder = databuilder
self.data = databuilder.data
super().__init__(databuilder)
data = databuilder.data
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
@@ -397,7 +372,7 @@ class Cache(object):
self.mc = mc
self.logger = PrefixLoggerAdapter("Cache: %s: " % (mc if mc else "default"), logger)
self.caches_array = caches_array
self.cachedir = self.data.getVar("CACHE")
self.cachedir = data.getVar("CACHE")
self.clean = set()
self.checked = set()
self.depends_cache = {}
@@ -407,17 +382,25 @@ class Cache(object):
self.filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
if self.cachedir in [None, '']:
bb.fatal("Please ensure CACHE is set to the cache directory for BitBake to use")
self.has_cache = False
self.logger.info("Not using a cache. "
"Set CACHE = <directory> to enable.")
return
self.has_cache = True
def getCacheFile(self, cachefile):
return getCacheFile(self.cachedir, cachefile, self.mc, self.data_hash)
def prepare_cache(self, progress):
if not self.has_cache:
return 0
loaded = 0
self.cachefile = self.getCacheFile("bb_cache.dat")
self.logger.debug("Cache dir: %s", self.cachedir)
self.logger.debug(1, "Cache dir: %s", self.cachedir)
bb.utils.mkdirhier(self.cachedir)
cache_ok = True
@@ -425,7 +408,7 @@ class Cache(object):
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
cache_exists = os.path.exists(cachefile)
self.logger.debug2("Checking if %s exists: %r", cachefile, cache_exists)
self.logger.debug(2, "Checking if %s exists: %r", cachefile, cache_exists)
cache_ok = cache_ok and cache_exists
cache_class.init_cacheData(self)
if cache_ok:
@@ -433,7 +416,7 @@ class Cache(object):
elif os.path.isfile(self.cachefile):
self.logger.info("Out of date cache found, rebuilding...")
else:
self.logger.debug("Cache file %s not found, building..." % self.cachefile)
self.logger.debug(1, "Cache file %s not found, building..." % self.cachefile)
# We don't use the symlink, it's just for debugging convenience
if self.mc:
@@ -451,6 +434,9 @@ class Cache(object):
return loaded
def cachesize(self):
if not self.has_cache:
return 0
cachesize = 0
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
@@ -463,11 +449,13 @@ class Cache(object):
return cachesize
def load_cachefile(self, progress):
cachesize = self.cachesize()
previous_progress = 0
previous_percent = 0
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
self.logger.debug('Loading cache file: %s' % cachefile)
self.logger.debug(1, 'Loading cache file: %s' % cachefile)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
# Check cache version information
@@ -512,11 +500,11 @@ class Cache(object):
return len(self.depends_cache)
def parse(self, filename, appends, layername):
def parse(self, filename, appends):
"""Parse the specified filename, returning the recipe information"""
self.logger.debug("Parsing %s", filename)
self.logger.debug(1, "Parsing %s", filename)
infos = []
datastores = self.databuilder.parseRecipeVariants(filename, appends, mc=self.mc, layername=layername)
datastores = self.load_bbfile(filename, appends, mc=self.mc)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
@@ -538,19 +526,43 @@ class Cache(object):
return infos
def loadCached(self, filename, appends):
def load(self, filename, appends):
"""Obtain the recipe information for the specified filename,
using cached values.
"""
using cached values if available, otherwise parsing.
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
Note that if it does parse to obtain the info, it will not
automatically add the information to the cache or to your
CacheData. Use the add or add_info method to do so after
running this, or use loadData instead."""
cached = self.cacheValid(filename, appends)
if cached:
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
return self.parse(filename, appends, configdata, self.caches_array)
return infos
return cached, infos
def loadData(self, fn, appends, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends)
for virtualfn, info_array in infos:
if info_array[0].skipped:
self.logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
skipped += 1
else:
self.add_info(virtualfn, info_array, cacheData, not cached)
virtuals += 1
return cached, skipped, virtuals
def cacheValid(self, fn, appends):
"""
@@ -559,6 +571,10 @@ class Cache(object):
"""
if fn not in self.checked:
self.cacheValidUpdate(fn, appends)
# Is cache enabled?
if not self.has_cache:
return False
if fn in self.clean:
return True
return False
@@ -568,25 +584,29 @@ class Cache(object):
Is the cache valid for fn?
Make thorough (slower) checks including timestamps.
"""
# Is cache enabled?
if not self.has_cache:
return False
self.checked.add(fn)
# File isn't in depends_cache
if not fn in self.depends_cache:
self.logger.debug2("%s is not cached", fn)
self.logger.debug(2, "%s is not cached", fn)
return False
mtime = bb.parse.cached_mtime_noerror(fn)
# Check file still exists
if mtime == 0:
self.logger.debug2("%s no longer exists", fn)
self.logger.debug(2, "%s no longer exists", fn)
self.remove(fn)
return False
info_array = self.depends_cache[fn]
# Check the file's timestamp
if mtime != info_array[0].timestamp:
self.logger.debug2("%s changed", fn)
self.logger.debug(2, "%s changed", fn)
self.remove(fn)
return False
@@ -597,13 +617,13 @@ class Cache(object):
fmtime = bb.parse.cached_mtime_noerror(f)
# Check if file still exists
if old_mtime != 0 and fmtime == 0:
self.logger.debug2("%s's dependency %s was removed",
self.logger.debug(2, "%s's dependency %s was removed",
fn, f)
self.remove(fn)
return False
if (fmtime != old_mtime):
self.logger.debug2("%s's dependency %s changed",
self.logger.debug(2, "%s's dependency %s changed",
fn, f)
self.remove(fn)
return False
@@ -618,16 +638,16 @@ class Cache(object):
for f in flist:
if not f:
continue
f, exist = f.rsplit(":", 1)
f, exist = f.split(":")
if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
self.logger.debug2("%s's file checksum list file %s changed",
self.logger.debug(2, "%s's file checksum list file %s changed",
fn, f)
self.remove(fn)
return False
if tuple(appends) != tuple(info_array[0].appends):
self.logger.debug2("appends for %s changed", fn)
self.logger.debug2("%s to %s" % (str(appends), str(info_array[0].appends)))
self.logger.debug(2, "appends for %s changed", fn)
self.logger.debug(2, "%s to %s" % (str(appends), str(info_array[0].appends)))
self.remove(fn)
return False
@@ -636,10 +656,10 @@ class Cache(object):
virtualfn = variant2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
self.logger.debug2("%s is not cached", virtualfn)
self.logger.debug(2, "%s is not cached", virtualfn)
invalid = True
elif len(self.depends_cache[virtualfn]) != len(self.caches_array):
self.logger.debug2("Extra caches missing for %s?" % virtualfn)
self.logger.debug(2, "Extra caches missing for %s?" % virtualfn)
invalid = True
# If any one of the variants is not present, mark as invalid for all
@@ -647,10 +667,10 @@ class Cache(object):
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
if virtualfn in self.clean:
self.logger.debug2("Removing %s from cache", virtualfn)
self.logger.debug(2, "Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
if fn in self.clean:
self.logger.debug2("Marking %s as not clean", fn)
self.logger.debug(2, "Marking %s as not clean", fn)
self.clean.remove(fn)
return False
@@ -663,10 +683,10 @@ class Cache(object):
Called from the parser in error cases
"""
if fn in self.depends_cache:
self.logger.debug("Removing %s from cache", fn)
self.logger.debug(1, "Removing %s from cache", fn)
del self.depends_cache[fn]
if fn in self.clean:
self.logger.debug("Marking %s as unclean", fn)
self.logger.debug(1, "Marking %s as unclean", fn)
self.clean.remove(fn)
def sync(self):
@@ -674,14 +694,18 @@ class Cache(object):
Save the cache
Called from the parser when complete (or exiting)
"""
if not self.has_cache:
return
if self.cacheclean:
self.logger.debug2("Cache is clean, not saving.")
self.logger.debug(2, "Cache is clean, not saving.")
return
for cache_class in self.caches_array:
cache_class_name = cache_class.__name__
cachefile = self.getCacheFile(cache_class.cachefile)
self.logger.debug2("Writing %s", cachefile)
self.logger.debug(2, "Writing %s", cachefile)
with open(cachefile, "wb") as f:
p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
p.dump(__cache_version__)
@@ -694,7 +718,6 @@ class Cache(object):
p.dump(info)
del self.depends_cache
SiggenRecipeInfo.reset()
@staticmethod
def mtime(cachefile):
@@ -717,11 +740,26 @@ class Cache(object):
if watcher:
watcher(info_array[0].file_depends)
if not self.has_cache:
return
if (info_array[0].skipped or 'SRCREVINACTION' not in info_array[0].pv) and not info_array[0].nocache:
if parsed:
self.cacheclean = False
self.depends_cache[filename] = info_array
def add(self, file_name, data, cacheData, parsed=None):
"""
Save data we need into the cache
"""
realfn = virtualfn2realfn(file_name)[0]
info_array = []
for cache_class in self.caches_array:
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
class MulticonfigCache(Mapping):
def __init__(self, databuilder, data_hash, caches_array):
def progress(p):
@@ -758,7 +796,6 @@ class MulticonfigCache(Mapping):
loaded = 0
for c in self.__caches.values():
SiggenRecipeInfo.reset()
loaded += c.prepare_cache(progress)
previous_progress = current_progress
@@ -779,6 +816,10 @@ class MulticonfigCache(Mapping):
for k in self.__caches:
yield k
def keys(self):
return self.__caches[key]
def init(cooker):
"""
The Objective: Cache the minimum amount of data possible yet get to the
@@ -836,14 +877,15 @@ class MultiProcessCache(object):
self.cachedata = self.create_cachedata()
self.cachedata_extras = self.create_cachedata()
def init_cache(self, cachedir, cache_file_name=None):
if not cachedir:
def init_cache(self, d, cache_file_name=None):
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
logger.debug("Using cache in '%s'", self.cachefile)
logger.debug(1, "Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")
@@ -870,10 +912,6 @@ class MultiProcessCache(object):
if not self.cachefile:
return
have_data = any(self.cachedata_extras)
if not have_data:
return
glf = bb.utils.lockfile(self.cachefile + ".lock", shared=True)
i = os.getpid()
@@ -908,8 +946,6 @@ class MultiProcessCache(object):
data = self.cachedata
have_data = False
for f in [y for y in os.listdir(os.path.dirname(self.cachefile)) if y.startswith(os.path.basename(self.cachefile) + '-')]:
f = os.path.join(os.path.dirname(self.cachefile), f)
try:
@@ -924,14 +960,12 @@ class MultiProcessCache(object):
os.unlink(f)
continue
have_data = True
self.merge_data(extradata, data)
os.unlink(f)
if have_data:
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(glf)
@@ -957,7 +991,7 @@ class SimpleCache(object):
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
logger.debug("Using cache in '%s'", self.cachefile)
logger.debug(1, "Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")
@@ -987,11 +1021,3 @@ class SimpleCache(object):
p.dump([data, self.cacheversion])
bb.utils.unlockfile(glf)
def copyfile(self, target):
if not self.cachefile:
return
glf = bb.utils.lockfile(self.cachefile + ".lock")
shutil.copy(self.cachefile, target)
bb.utils.unlockfile(glf)


@@ -11,13 +11,10 @@ import os
import stat
import bb.utils
import logging
import re
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
# mtime cache (non-persistent)
# based upon the assumption that files do not change during a bitbake run
class FileMtimeCache(object):
@@ -53,7 +50,6 @@ class FileChecksumCache(MultiProcessCache):
MultiProcessCache.__init__(self)
def get_checksum(self, f):
f = os.path.normpath(f)
entry = self.cachedata[0].get(f)
cmtime = self.mtime_cache.cached_mtime(f)
if entry:
@@ -88,36 +84,22 @@ class FileChecksumCache(MultiProcessCache):
return None
return checksum
#
# Changing the format of file-checksums is problematic as both OE and Bitbake have
# knowledge of them. We need to encode a new piece of data, the portion of the path
# we care about from a checksum perspective. This means that files that change subdirectory
# are tracked by the task hashes. To do this, we do something horrible and put a "/./" into
# the path. The filesystem handles it but it gives us a marker to know which subsection
# of the path to cache.
#
def checksum_dir(pth):
# Handle directories recursively
if pth == "/":
bb.fatal("Refusing to checksum /")
pth = pth.rstrip("/")
dirchecksums = []
for root, dirs, files in os.walk(pth, topdown=True):
[dirs.remove(d) for d in list(dirs) if d in localdirsexclude]
for name in files:
fullpth = os.path.join(root, name).replace(pth, os.path.join(pth, "."))
fullpth = os.path.join(root, name)
checksum = checksum_file(fullpth)
if checksum:
dirchecksums.append((fullpth, checksum))
return dirchecksums
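To make the "/./" marker concrete: with pth = "/work/src", the file "/work/src/sub/a.c" is recorded as "/work/src/./sub/a.c". A consumer can then recover the tracked portion of the path (a sketch; the consuming code is not part of this diff):

full = "/work/src/./sub/a.c"
base, _, rel = full.partition("/./")
# base == "/work/src"  - where the tree happens to live; not significant
# rel  == "sub/a.c"    - the subsection of the path that feeds the task hash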
checksums = []
for pth in filelist_regex.split(filelist):
if not pth:
continue
pth = pth.strip()
if not pth:
continue
for pth in filelist.split():
exist = pth.split(":")[1]
if exist == "False":
continue


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -27,7 +25,6 @@ import ast
import sys
import codegen
import logging
import inspect
import bb.pysh as pysh
import bb.utils, bb.data
import hashlib
@@ -59,45 +56,10 @@ def check_indent(codestr):
return codestr
modulecode_deps = {}
def add_module_functions(fn, functions, namespace):
import os
fstat = os.stat(fn)
fixedhash = fn + ":" + str(fstat.st_size) + ":" + str(fstat.st_mtime)
for f in functions:
name = "%s.%s" % (namespace, f)
parser = PythonParser(name, logger)
try:
parser.parse_python(None, filename=fn, lineno=1, fixedhash=fixedhash+f)
#bb.warn("Cached %s" % f)
except KeyError:
targetfn = inspect.getsourcefile(functions[f])
if fn != targetfn:
# Skip references to other modules outside this file
#bb.warn("Skipping %s" % name)
continue
lines, lineno = inspect.getsourcelines(functions[f])
src = "".join(lines)
parser.parse_python(src, filename=fn, lineno=lineno, fixedhash=fixedhash+f)
#bb.warn("Not cached %s" % f)
execs = parser.execs.copy()
# Expand internal module exec references
for e in parser.execs:
if e in functions:
execs.remove(e)
execs.add(namespace + "." + e)
modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy(), parser.extra]
#bb.warn("%s: %s\nRefs:%s Execs: %s %s %s" % (name, fn, parser.references, parser.execs, parser.var_execs, parser.contains))
def update_module_dependencies(d):
for mod in modulecode_deps:
excludes = set((d.getVarFlag(mod, "vardepsexclude") or "").split())
if excludes:
modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3], modulecode_deps[mod][4]]
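The "fixedhash" cache key used by add_module_functions() above deliberately avoids reading file contents; a minimal standalone sketch of the same key construction:

import os

def fixedhash_key(fn, funcname):
    # path, size and mtime identify the file cheaply without opening it;
    # appending the function name gives one cache entry per module function
    fstat = os.stat(fn)
    return fn + ":" + str(fstat.st_size) + ":" + str(fstat.st_mtime) + funcname

Only on a cache miss does the code fall back to inspect.getsourcelines() to obtain the actual function text.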
# A custom getstate/setstate using tuples is actually worth 15% cachesize by
# avoiding duplication of the attribute names!
class SetCache(object):
def __init__(self):
self.setcache = {}
@@ -117,22 +79,21 @@ class SetCache(object):
codecache = SetCache()
class pythonCacheLine(object):
def __init__(self, refs, execs, contains, extra):
def __init__(self, refs, execs, contains):
self.refs = codecache.internSet(refs)
self.execs = codecache.internSet(execs)
self.contains = {}
for c in contains:
self.contains[c] = codecache.internSet(contains[c])
self.extra = extra
def __getstate__(self):
return (self.refs, self.execs, self.contains, self.extra)
return (self.refs, self.execs, self.contains)
def __setstate__(self, state):
(refs, execs, contains, extra) = state
self.__init__(refs, execs, contains, extra)
(refs, execs, contains) = state
self.__init__(refs, execs, contains)
def __hash__(self):
l = (hash(self.refs), hash(self.execs), hash(self.extra))
l = (hash(self.refs), hash(self.execs))
for c in sorted(self.contains.keys()):
l = l + (c, hash(self.contains[c]))
return hash(l)
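A standalone before/after illustration of the tuple __getstate__/__setstate__ trick noted above: pickling the default __dict__ serialises the attribute-name strings with every object, while a tuple stores the values alone.

import pickle

class DictState:
    def __init__(self, refs, execs):
        self.refs, self.execs = refs, execs

class TupleState:
    def __init__(self, refs, execs):
        self.refs, self.execs = refs, execs
    def __getstate__(self):
        return (self.refs, self.execs)        # no "refs"/"execs" key strings
    def __setstate__(self, state):
        self.refs, self.execs = state

r, e = frozenset({"A"}), frozenset({"B"})
assert len(pickle.dumps(TupleState(r, e))) < len(pickle.dumps(DictState(r, e)))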
@@ -161,7 +122,7 @@ class CodeParserCache(MultiProcessCache):
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
CACHE_VERSION = 12
CACHE_VERSION = 11
def __init__(self):
MultiProcessCache.__init__(self)
@@ -175,8 +136,8 @@ class CodeParserCache(MultiProcessCache):
self.pythoncachelines = {}
self.shellcachelines = {}
def newPythonCacheLine(self, refs, execs, contains, extra):
cacheline = pythonCacheLine(refs, execs, contains, extra)
def newPythonCacheLine(self, refs, execs, contains):
cacheline = pythonCacheLine(refs, execs, contains)
h = hash(cacheline)
if h in self.pythoncachelines:
return self.pythoncachelines[h]
@@ -191,12 +152,12 @@ class CodeParserCache(MultiProcessCache):
self.shellcachelines[h] = cacheline
return cacheline
def init_cache(self, cachedir):
def init_cache(self, d):
# Check if we already have the caches
if self.pythoncache:
return
MultiProcessCache.init_cache(self, cachedir)
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
self.pythoncache = self.cachedata[0]
@@ -208,8 +169,8 @@ class CodeParserCache(MultiProcessCache):
codeparsercache = CodeParserCache()
def parser_cache_init(cachedir):
codeparsercache.init_cache(cachedir)
def parser_cache_init(d):
codeparsercache.init_cache(d)
def parser_cache_save():
codeparsercache.save_extras()
@@ -234,10 +195,6 @@ class BufferedLogger(Logger):
self.target.handle(record)
self.buffer = []
class DummyLogger():
def flush(self):
return
class PythonParser():
getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
@@ -255,26 +212,26 @@ class PythonParser():
funcstr = codegen.to_source(func)
argstr = codegen.to_source(arg)
except TypeError:
self.log.debug2('Failed to convert function and argument to source form')
self.log.debug(2, 'Failed to convert function and argument to source form')
else:
self.log.debug(self.unhandled_message % (funcstr, argstr))
self.log.debug(1, self.unhandled_message % (funcstr, argstr))
def visit_Call(self, node):
name = self.called_node_name(node.func)
if name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
if isinstance(node.args[0], ast.Constant) and isinstance(node.args[0].value, str):
varname = node.args[0].value
if name in self.containsfuncs and isinstance(node.args[1], ast.Constant):
if isinstance(node.args[0], ast.Str):
varname = node.args[0].s
if name in self.containsfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].add(node.args[1].value)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Constant):
self.contains[varname].add(node.args[1].s)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].update(node.args[1].value.split())
self.contains[varname].update(node.args[1].s.split())
elif name.endswith(self.getvarflags):
if isinstance(node.args[1], ast.Constant):
self.references.add('%s[%s]' % (varname, node.args[1].value))
if isinstance(node.args[1], ast.Str):
self.references.add('%s[%s]' % (varname, node.args[1].s))
else:
self.warn(node.func, node.args[1])
else:
@@ -282,8 +239,8 @@ class PythonParser():
else:
self.warn(node.func, node.args[0])
elif name and name.endswith(".expand"):
if isinstance(node.args[0], ast.Constant):
value = node.args[0].value
if isinstance(node.args[0], ast.Str):
value = node.args[0].s
d = bb.data.init()
parser = d.expandWithRefs(value, self.name)
self.references |= parser.references
@@ -293,8 +250,8 @@ class PythonParser():
self.contains[varname] = set()
self.contains[varname] |= parser.contains[varname]
elif name in self.execfuncs:
if isinstance(node.args[0], ast.Constant):
self.var_execs.add(node.args[0].value)
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and isinstance(node.func, (ast.Name, ast.Attribute)):
@@ -319,24 +276,16 @@ class PythonParser():
self.contains = {}
self.execs = set()
self.references = set()
self._log = log
# Defer init as expensive
self.log = DummyLogger()
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, log)
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
# For the python module code it is expensive to have the function text, so it
# uses a different fixedhash to cache against. We can take the hit on obtaining the
# text if it isn't in the cache.
def parse_python(self, node, lineno=0, filename="<string>", fixedhash=None):
if not fixedhash and (not node or not node.strip()):
def parse_python(self, node, lineno=0, filename="<string>"):
if not node or not node.strip():
return
if fixedhash:
h = fixedhash
else:
h = bbhash(str(node))
h = bbhash(str(node))
if h in codeparsercache.pythoncache:
self.references = set(codeparsercache.pythoncache[h].refs)
@@ -344,7 +293,6 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncache[h].contains:
self.contains[i] = set(codeparsercache.pythoncache[h].contains[i])
self.extra = codeparsercache.pythoncache[h].extra
return
if h in codeparsercache.pythoncacheextras:
@@ -353,15 +301,8 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncacheextras[h].contains:
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
self.extra = codeparsercache.pythoncacheextras[h].extra
return
if fixedhash and not node:
raise KeyError
# Need to parse so take the hit on the real log buffer
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, self._log)
# We can't adjust the line numbers for compile(), but we can pad with the correct number of blank lines instead
node = "\n" * int(lineno) + node
code = compile(check_indent(str(node)), filename, "exec",
@@ -372,22 +313,15 @@ class PythonParser():
self.visit_Call(n)
self.execs.update(self.var_execs)
self.extra = None
if fixedhash:
self.extra = bbhash(str(node))
codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains, self.extra)
codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains)
class ShellParser():
def __init__(self, name, log):
self.funcdefs = set()
self.allexecs = set()
self.execs = set()
self._name = name
self._log = log
# Defer init as expensive
self.log = DummyLogger()
self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
self.unhandled_template = "unable to handle non-literal command '%s'"
self.unhandled_template = "while parsing %s, %s" % (name, self.unhandled_template)
@@ -406,9 +340,6 @@ class ShellParser():
self.execs = set(codeparsercache.shellcacheextras[h].execs)
return self.execs
# Need to parse so take the hit on the real log buffer
self.log = BufferedLogger('BitBake.Data.%s' % self._name, logging.DEBUG, self._log)
self._parse_shell(value)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
@@ -519,7 +450,7 @@ class ShellParser():
cmd = word[1]
if cmd.startswith("$"):
self.log.debug(self.unhandled_template % cmd)
self.log.debug(1, self.unhandled_template % cmd)
elif cmd == "eval":
command = " ".join(word for _, word in words[1:])
self._parse_shell(command)

View File

@@ -20,7 +20,6 @@ Commands are queued in a CommandQueue
from collections import OrderedDict, defaultdict
import io
import bb.event
import bb.cooker
import bb.remotedata
@@ -51,32 +50,23 @@ class Command:
"""
A queue of asynchronous commands for bitbake
"""
def __init__(self, cooker, process_server):
def __init__(self, cooker):
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
self.remotedatastores = None
self.process_server = process_server
# Access with locking using process_server.{get/set/clear}_async_cmd()
# FIXME Add lock for this
self.currentAsyncCommand = None
def runCommand(self, commandline, process_server, ro_only=False):
def runCommand(self, commandline, ro_only = False):
command = commandline.pop(0)
# Ensure cooker is ready for commands
if command not in ["updateConfig", "setFeatures", "ping"]:
try:
self.cooker.init_configdata()
if not self.remotedatastores:
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
except (Exception, SystemExit) as exc:
import traceback
if isinstance(exc, bb.BBHandledException):
# We need to start returning real exceptions here. Until we do, we can't
# tell if an exception is an instance of bb.BBHandledException
return None, "bb.BBHandledException()\n" + traceback.format_exc()
return None, traceback.format_exc()
if command != "updateConfig" and command != "setFeatures":
self.cooker.init_configdata()
if not self.remotedatastores:
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
if hasattr(CommandsSync, command):
# Can run synchronous commands straight away
@@ -85,6 +75,7 @@ class Command:
if not hasattr(command_method, 'readonly') or not getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
self.cooker.process_inotify_updates()
if getattr(command_method, 'needconfig', True):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
@@ -99,23 +90,24 @@ class Command:
return None, traceback.format_exc()
else:
return result, None
if self.currentAsyncCommand is not None:
return None, "Busy (%s in progress)" % self.currentAsyncCommand[0]
if command not in CommandsAsync.__dict__:
return None, "No such command"
if not process_server.set_async_cmd((command, commandline)):
return None, "Busy (%s in progress)" % self.process_server.get_async_cmd()[0]
self.cooker.idleCallBackRegister(self.runAsyncCommand, process_server)
self.currentAsyncCommand = (command, commandline)
self.cooker.idleCallBackRegister(self.cooker.runCommands, self.cooker)
return True, None
def runAsyncCommand(self, _, process_server, halt):
def runAsyncCommand(self):
try:
self.cooker.process_inotify_updates()
if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
# updateCache will trigger a shutdown of the parser
# and then raise BBHandledException triggering an exit
self.cooker.updateCache()
return bb.server.process.idleFinish("Cooker in error state")
cmd = process_server.get_async_cmd()
if cmd is not None:
(command, options) = cmd
return False
if self.currentAsyncCommand is not None:
(command, options) = self.currentAsyncCommand
commandmethod = getattr(CommandsAsync, command)
needcache = getattr( commandmethod, "needcache" )
if needcache and self.cooker.state != bb.cooker.state.running:
@@ -125,21 +117,24 @@ class Command:
commandmethod(self.cmds_async, self, options)
return False
else:
return bb.server.process.idleFinish("Nothing to do, no async command?")
return False
except KeyboardInterrupt as exc:
return bb.server.process.idleFinish("Interrupted")
self.finishAsyncCommand("Interrupted")
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, str):
return bb.server.process.idleFinish(arg)
self.finishAsyncCommand(arg)
else:
return bb.server.process.idleFinish("Exited with %s" % arg)
self.finishAsyncCommand("Exited with %s" % arg)
return False
except Exception as exc:
import traceback
if isinstance(exc, bb.BBHandledException):
return bb.server.process.idleFinish("")
self.finishAsyncCommand("")
else:
return bb.server.process.idleFinish(traceback.format_exc())
self.finishAsyncCommand(traceback.format_exc())
return False
def finishAsyncCommand(self, msg=None, code=None):
if msg or msg == "":
@@ -148,8 +143,8 @@ class Command:
bb.event.fire(CommandExit(code), self.cooker.data)
else:
bb.event.fire(CommandCompleted(), self.cooker.data)
self.currentAsyncCommand = None
self.cooker.finishcommand()
self.process_server.clear_async_cmd()
def reset(self):
if self.remotedatastores:
@@ -162,14 +157,6 @@ class CommandsSync:
These must not influence any running synchronous command.
"""
def ping(self, command, params):
"""
Allow a UI to check the server is still alive
"""
return "Still alive!"
ping.needconfig = False
ping.readonly = True
def stateShutdown(self, command, params):
"""
Trigger cooker 'shutdown' mode
@@ -307,11 +294,6 @@ class CommandsSync:
return ret
getLayerPriorities.readonly = True
def revalidateCaches(self, command, params):
"""Called by UI clients when metadata may have changed"""
command.cooker.revalidateCaches()
parseConfiguration.needconfig = False
def getRecipes(self, command, params):
try:
mc = params[0]
@@ -518,17 +500,6 @@ class CommandsSync:
d = command.remotedatastores[dsindex].varhistory
return getattr(d, method)(*args, **kwargs)
def dataStoreConnectorVarHistCmdEmit(self, command, params):
dsindex = params[0]
var = params[1]
oval = params[2]
val = params[3]
d = command.remotedatastores[params[4]]
o = io.StringIO()
command.remotedatastores[dsindex].varhistory.emit(var, oval, val, o, d)
return o.getvalue()
def dataStoreConnectorIncHistCmd(self, command, params):
dsindex = params[0]
method = params[1]
@@ -550,8 +521,8 @@ class CommandsSync:
and return a datastore object representing the environment
for the recipe.
"""
virtualfn = params[0]
(fn, cls, mc) = bb.cache.virtualfn2realfn(virtualfn)
fn = params[0]
mc = bb.runqueue.mc_from_tid(fn)
appends = params[1]
appendlist = params[2]
if len(params) > 3:
@@ -566,7 +537,6 @@ class CommandsSync:
appendfiles = command.cooker.collections[mc].get_file_appends(fn)
else:
appendfiles = []
layername = command.cooker.collections[mc].calc_bbfile_priority(fn)[2]
# We are calling bb.cache locally here rather than on the server,
# but that's OK because it doesn't actually need anything from
# the server barring the global datastore (which we have a remote
@@ -574,10 +544,11 @@ class CommandsSync:
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = command.cooker.databuilder._parse_recipe(config_data, fn, appendfiles, mc, layername)[cls]
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles, mc)['']
else:
# Use the standard path
envdata = command.cooker.databuilder.parseRecipe(virtualfn, appendfiles, layername)
parser = bb.cache.NoCache(command.cooker.databuilder)
envdata = parser.loadDataFull(fn, appendfiles)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
@@ -676,16 +647,6 @@ class CommandsAsync:
command.finishAsyncCommand()
findFilesMatchingInDir.needcache = False
def testCookerCommandEvent(self, command, params):
"""
Dummy command used by OEQA selftest to test tinfoil without IO
"""
pattern = params[0]
command.cooker.testCookerCommandEvent(pattern)
command.finishAsyncCommand()
testCookerCommandEvent.needcache = False
def findConfigFilePath(self, command, params):
"""
Find the path of the requested configuration file
@@ -750,7 +711,7 @@ class CommandsAsync:
"""
event = params[0]
bb.event.fire(eval(event), command.cooker.data)
process_server.clear_async_cmd()
command.currentAsyncCommand = None
triggerEvent.needcache = False
def resetCooker(self, command, params):
@@ -777,14 +738,7 @@ class CommandsAsync:
(mc, pn) = bb.runqueue.split_mc(params[0])
taskname = params[1]
sigs = params[2]
bb.siggen.check_siggen_version(bb.siggen)
res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.databuilder.mcdata[mc])
bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.databuilder.mcdata[mc])
command.finishAsyncCommand()
findSigInfo.needcache = False
def getTaskSignatures(self, command, params):
res = command.cooker.getTaskSignatures(params[0], params[1])
bb.event.fire(bb.event.GetTaskSignatureResult(res), command.cooker.data)
command.finishAsyncCommand()
getTaskSignatures.needcache = True


@@ -1,196 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Helper library to implement streaming compression and decompression using an
# external process
#
# This library should not be used directly by end users; a wrapper library for the
# specific compression tool should be created instead
import builtins
import io
import os
import subprocess
def open_wrap(
cls, filename, mode="rb", *, encoding=None, errors=None, newline=None, **kwargs
):
"""
Open a compressed file in binary or text mode.
Users should not call this directly. A specific compression library can use
this helper to provide its own "open" command
The filename argument can be an actual filename (a str or bytes object), or
an existing file object to read from or write to.
The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for
binary mode, or "rt", "wt", "xt" or "at" for text mode. The default mode is
"rb".
For binary mode, this function is equivalent to the cls constructor:
cls(filename, mode). In this case, the encoding, errors and newline
arguments must not be provided.
For text mode, a cls object is created, and wrapped in an
io.TextIOWrapper instance with the specified encoding, error handling
behavior, and line ending(s).
"""
if "t" in mode:
if "b" in mode:
raise ValueError("Invalid mode: %r" % (mode,))
else:
if encoding is not None:
raise ValueError("Argument 'encoding' not supported in binary mode")
if errors is not None:
raise ValueError("Argument 'errors' not supported in binary mode")
if newline is not None:
raise ValueError("Argument 'newline' not supported in binary mode")
file_mode = mode.replace("t", "")
if isinstance(filename, (str, bytes, os.PathLike, int)):
binary_file = cls(filename, file_mode, **kwargs)
elif hasattr(filename, "read") or hasattr(filename, "write"):
binary_file = cls(None, file_mode, fileobj=filename, **kwargs)
else:
raise TypeError("filename must be a str or bytes object, or a file")
if "t" in mode:
return io.TextIOWrapper(
binary_file, encoding, errors, newline, write_through=True
)
else:
return binary_file
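A usage sketch for a wrapper module built on open_wrap() (the "foo" module is hypothetical; the lz4 and zstd wrappers later in this diff follow exactly this shape):

# 'foo' is a hypothetical wrapper module (cf. bb.compress.lz4/zstd below)
# text mode: the binary stream is wrapped in io.TextIOWrapper
with bb.compress.foo.open("data.foo", "wt", encoding="utf-8") as f:
    f.write("hello\n")

# binary mode: the cls instance is returned directly
with bb.compress.foo.open("data.foo", "rb") as f:
    data = f.read()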
class CompressionError(OSError):
pass
class PipeFile(io.RawIOBase):
"""
Class that implements generically piping to/from a compression program
Derived classes should add the function get_compress() and get_decompress()
that return the required commands. Input will be piped into stdin and the
(de)compressed output should be written to stdout, e.g.:
class FooFile(PipeFile):
def get_decompress(self):
return ["fooc", "--decompress", "--stdout"]
def get_compress(self):
return ["fooc", "--compress", "--stdout"]
"""
READ = 0
WRITE = 1
def __init__(self, filename=None, mode="rb", *, stderr=None, fileobj=None):
if "t" in mode or "U" in mode:
raise ValueError("Invalid mode: {!r}".format(mode))
if not "b" in mode:
mode += "b"
if mode.startswith("r"):
self.mode = self.READ
elif mode.startswith("w"):
self.mode = self.WRITE
else:
raise ValueError("Invalid mode %r" % mode)
if fileobj is not None:
self.fileobj = fileobj
else:
self.fileobj = builtins.open(filename, mode or "rb")
if self.mode == self.READ:
self.p = subprocess.Popen(
self.get_decompress(),
stdin=self.fileobj,
stdout=subprocess.PIPE,
stderr=stderr,
close_fds=True,
)
self.pipe = self.p.stdout
else:
self.p = subprocess.Popen(
self.get_compress(),
stdin=subprocess.PIPE,
stdout=self.fileobj,
stderr=stderr,
close_fds=True,
)
self.pipe = self.p.stdin
self.__closed = False
def _check_process(self):
if self.p is None:
return
returncode = self.p.wait()
if returncode:
raise CompressionError("Process died with %d" % returncode)
self.p = None
def close(self):
if self.closed:
return
self.pipe.close()
if self.p is not None:
self._check_process()
self.fileobj.close()
self.__closed = True
@property
def closed(self):
return self.__closed
def fileno(self):
return self.pipe.fileno()
def flush(self):
self.pipe.flush()
def isatty(self):
return self.pipe.isatty()
def readable(self):
return self.mode == self.READ
def writable(self):
return self.mode == self.WRITE
def readinto(self, b):
if self.mode != self.READ:
import errno
raise OSError(
errno.EBADF, "read() on write-only %s object" % self.__class__.__name__
)
size = self.pipe.readinto(b)
if size == 0:
self._check_process()
return size
def write(self, data):
if self.mode != self.WRITE:
import errno
raise OSError(
errno.EBADF, "write() on read-only %s object" % self.__class__.__name__
)
data = self.pipe.write(data)
if not data:
self._check_process()
return data


@@ -1,19 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import bb.compress._pipecompress
def open(*args, **kwargs):
return bb.compress._pipecompress.open_wrap(LZ4File, *args, **kwargs)
class LZ4File(bb.compress._pipecompress.PipeFile):
def get_compress(self):
return ["lz4c", "-z", "-c"]
def get_decompress(self):
return ["lz4c", "-d", "-c"]


@@ -1,30 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import bb.compress._pipecompress
import shutil
def open(*args, **kwargs):
return bb.compress._pipecompress.open_wrap(ZstdFile, *args, **kwargs)
class ZstdFile(bb.compress._pipecompress.PipeFile):
def __init__(self, *args, num_threads=1, compresslevel=3, **kwargs):
self.num_threads = num_threads
self.compresslevel = compresslevel
super().__init__(*args, **kwargs)
def _get_zstd(self):
if self.num_threads == 1 or not shutil.which("pzstd"):
return ["zstd"]
return ["pzstd", "-p", "%d" % self.num_threads]
def get_compress(self):
return self._get_zstd() + ["-c", "-%d" % self.compresslevel]
def get_decompress(self):
return self._get_zstd() + ["-d", "-c"]

File diff suppressed because it is too large


@@ -23,8 +23,8 @@ logger = logging.getLogger("BitBake")
parselog = logging.getLogger("BitBake.Parsing")
class ConfigParameters(object):
def __init__(self, argv=None):
self.options, targets = self.parseCommandLine(argv or sys.argv)
def __init__(self, argv=sys.argv):
self.options, targets = self.parseCommandLine(argv)
self.environment = self.parseEnvironment()
self.options.pkgs_to_build = targets or []
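Of the two __init__ signatures shown above, the argv=None form sidesteps a classic Python pitfall: a default of argv=sys.argv is evaluated once, when the function is defined, so it captures whatever sys.argv held at import time. A standalone illustration (not BitBake code):

import sys

def parse(argv=None):
    # resolve sys.argv at call time, not at definition time
    argv = argv or sys.argv
    return argv[1:]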
@@ -57,7 +57,7 @@ class ConfigParameters(object):
def updateToServer(self, server, environment):
options = {}
for o in ["halt", "force", "invalidate_stamp",
for o in ["abort", "force", "invalidate_stamp",
"dry_run", "dump_signatures",
"extra_assume_provided", "profile",
"prefile", "postfile", "server_timeout",
@@ -86,7 +86,7 @@ class ConfigParameters(object):
action['msg'] = "Only one target can be used with the --environment option."
elif self.options.buildfile and len(self.options.pkgs_to_build) > 0:
action['msg'] = "No target should be used with the --environment and --buildfile options."
elif self.options.pkgs_to_build:
elif len(self.options.pkgs_to_build) > 0:
action['action'] = ["showEnvironmentTarget", self.options.pkgs_to_build]
else:
action['action'] = ["showEnvironment", self.options.buildfile]
@@ -124,7 +124,7 @@ class CookerConfiguration(object):
self.prefile = []
self.postfile = []
self.cmd = None
self.halt = True
self.abort = True
self.force = False
self.profile = False
self.nosetscene = False
@@ -160,7 +160,12 @@ def catch_parse_error(func):
def wrapped(fn, *args):
try:
return func(fn, *args)
except Exception as exc:
except IOError as exc:
import traceback
parselog.critical(traceback.format_exc())
parselog.critical("Unable to parse %s: %s" % (fn, exc))
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as exc:
import traceback
bbdir = os.path.dirname(__file__) + os.sep
@@ -172,11 +177,14 @@ def catch_parse_error(func):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
raise bb.BBHandledException()
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
raise bb.BBHandledException()
return wrapped
@catch_parse_error
def parse_config_file(fn, data, include=True):
return bb.parse.handle(fn, data, include, baseconfig=True)
return bb.parse.handle(fn, data, include)
@catch_parse_error
def _inherit(bbclass, data):
@@ -201,8 +209,8 @@ def findConfigFile(configfile, data):
return None
#
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# up to /. If that fails, bitbake would fall back to cwd.
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
#
def findTopdir():
@@ -215,8 +223,11 @@ def findTopdir():
layerconf = findConfigFile("bblayers.conf", d)
if layerconf:
return os.path.dirname(os.path.dirname(layerconf))
return os.path.abspath(os.getcwd())
if bbpath:
bitbakeconf = bb.utils.which(bbpath, "conf/bitbake.conf")
if bitbakeconf:
return os.path.dirname(os.path.dirname(bitbakeconf))
return None
class CookerDataBuilder(object):
@@ -239,14 +250,10 @@ class CookerDataBuilder(object):
self.savedenv = bb.data.init()
for k in cookercfg.env:
self.savedenv.setVar(k, cookercfg.env[k])
if k in bb.data_smart.bitbake_renamed_vars:
bb.error('Shell environment variable %s has been renamed to %s' % (k, bb.data_smart.bitbake_renamed_vars[k]))
bb.fatal("Exiting to allow enviroment variables to be corrected")
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
self.basedata.setVar("BB_ORIGENV", self.savedenv)
self.basedata.setVar("__bbclasstype", "global")
if worker:
self.basedata.setVar("BB_WORKERCONTEXT", "1")
@@ -254,15 +261,15 @@ class CookerDataBuilder(object):
self.data = self.basedata
self.mcdata = {}
def parseBaseConfiguration(self, worker=False):
mcdata = {}
def parseBaseConfiguration(self):
data_hash = hashlib.sha256()
try:
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
if self.data.getVar("BB_WORKERCONTEXT", False) is None and not worker:
if self.data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(self.data)
bb.parse.init_parser(self.data)
bb.codeparser.parser_cache_init(self.data)
bb.event.fire(bb.event.ConfigParsed(), self.data)
@@ -280,62 +287,38 @@ class CookerDataBuilder(object):
bb.parse.init_parser(self.data)
data_hash.update(self.data.get_hash().encode('utf-8'))
mcdata[''] = self.data
self.mcdata[''] = self.data
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
if config[0].isdigit():
bb.fatal("Multiconfig name '%s' is invalid as multiconfigs cannot start with a digit" % config)
parsed_mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), parsed_mcdata)
mcdata[config] = parsed_mcdata
data_hash.update(parsed_mcdata.get_hash().encode('utf-8'))
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
data_hash.update(mcdata.get_hash().encode('utf-8'))
if multiconfig:
bb.event.fire(bb.event.MultiConfigParsed(mcdata), self.data)
bb.event.fire(bb.event.MultiConfigParsed(self.mcdata), self.data)
self.data_hash = data_hash.hexdigest()
except (SyntaxError, bb.BBHandledException):
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as e:
logger.error(str(e))
raise bb.BBHandledException()
bb.codeparser.update_module_dependencies(self.data)
# Handle obsolete variable names
d = self.data
renamedvars = d.getVarFlags('BB_RENAMED_VARIABLES') or {}
renamedvars.update(bb.data_smart.bitbake_renamed_vars)
issues = False
for v in renamedvars:
if d.getVar(v) != None or d.hasOverrides(v):
issues = True
loginfo = {}
history = d.varhistory.get_variable_refs(v)
for h in history:
for line in history[h]:
loginfo = {'file' : h, 'line' : line}
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
if not history:
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
if issues:
except Exception:
logger.exception("Error parsing configuration files")
raise bb.BBHandledException()
for mc in mcdata:
mcdata[mc].renameVar("__depends", "__base_depends")
mcdata[mc].setVar("__bbclasstype", "recipe")
# Create a copy so we can reset at a later date when UIs disconnect
self.mcorigdata = mcdata
for mc in mcdata:
self.mcdata[mc] = bb.data.createCopy(mcdata[mc])
self.data = self.mcdata['']
self.origdata = self.data
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def reset(self):
# We may not have run parseBaseConfiguration() yet
if not hasattr(self, 'mcorigdata'):
if not hasattr(self, 'origdata'):
return
for mc in self.mcorigdata:
self.mcdata[mc] = bb.data.createCopy(self.mcorigdata[mc])
self.data = self.mcdata['']
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def _findLayerConf(self, data):
return findConfigFile("bblayers.conf", data)
@@ -350,23 +333,15 @@ class CookerDataBuilder(object):
layerconf = self._findLayerConf(data)
if layerconf:
parselog.debug2("Found bblayers.conf (%s)", layerconf)
parselog.debug(2, "Found bblayers.conf (%s)", layerconf)
# By definition bblayers.conf is in conf/ of TOPDIR.
# We may have been called with cwd somewhere else so reset TOPDIR
data.setVar("TOPDIR", os.path.dirname(os.path.dirname(layerconf)))
data = parse_config_file(layerconf, data)
if not data.getVar("BB_CACHEDIR"):
data.setVar("BB_CACHEDIR", "${TOPDIR}/cache")
bb.codeparser.parser_cache_init(data.getVar("BB_CACHEDIR"))
layers = (data.getVar('BBLAYERS') or "").split()
broken_layers = []
if not layers:
bb.fatal("The bblayers.conf file doesn't contain any BBLAYERS definition")
data = bb.data.createCopy(data)
approved = bb.utils.approved_variables()
@@ -382,10 +357,8 @@ class CookerDataBuilder(object):
parselog.critical("Please check BBLAYERS in %s" % (layerconf))
raise bb.BBHandledException()
layerseries = None
compat_entries = {}
for layer in layers:
parselog.debug2("Adding layer %s", layer)
parselog.debug(2, "Adding layer %s", layer)
if 'HOME' in approved and '~' in layer:
layer = os.path.expanduser(layer)
if layer.endswith('/'):
@@ -396,27 +369,8 @@ class CookerDataBuilder(object):
data.expandVarref('LAYERDIR')
data.expandVarref('LAYERDIR_RE')
# Sadly we can't have nice things.
# Some layers think they're going to be 'clever' and copy the values from
# another layer, e.g. using ${LAYERSERIES_COMPAT_core}. The whole point of
# this mechanism is to make it clear which releases a layer supports and
# show when a layer master branch is bitrotting and is unmaintained.
# We therefore prevent people from doing this here.
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
for c in collections:
compat_entry = data.getVar("LAYERSERIES_COMPAT_%s" % c)
if compat_entry:
compat_entries[c] = set(compat_entry.split())
data.delVar("LAYERSERIES_COMPAT_%s" % c)
if not layerseries:
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
if layerseries:
data.delVar("LAYERSERIES_CORENAMES")
data.delVar('LAYERDIR_RE')
data.delVar('LAYERDIR')
for c in compat_entries:
data.setVar("LAYERSERIES_COMPAT_%s" % c, " ".join(sorted(compat_entries[c])))
bbfiles_dynamic = (data.getVar('BBFILES_DYNAMIC') or "").split()
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
@@ -435,38 +389,26 @@ class CookerDataBuilder(object):
if invalid:
bb.fatal("BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
collections_tmp = collections[:]
for c in collections:
collections_tmp.remove(c)
if c in collections_tmp:
bb.fatal("Found duplicated BBFILE_COLLECTIONS '%s', check bblayers.conf or layer.conf to fix it." % c)
compat = set()
if c in compat_entries:
compat = compat_entries[c]
if compat and not layerseries:
bb.fatal("No core layer found to work with layer '%s'. Missing entry in bblayers.conf?" % c)
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
if compat and not (compat & layerseries):
bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
% (c, " ".join(layerseries), " ".join(compat)))
elif not compat and not data.getVar("BB_WORKERCONTEXT"):
bb.warn("Layer %s should set LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names it is compatible with." % (c, c))
data.setVar("LAYERSERIES_CORENAMES", " ".join(sorted(layerseries)))
if not data.getVar("BBPATH"):
msg = "The BBPATH variable is not set"
if not layerconf:
msg += (" and bitbake did not find a conf/bblayers.conf file in"
" the expected location.\nMaybe you accidentally"
" invoked bitbake from the wrong directory?")
bb.fatal(msg)
if not data.getVar("TOPDIR"):
data.setVar("TOPDIR", os.path.abspath(os.getcwd()))
if not data.getVar("BB_CACHEDIR"):
data.setVar("BB_CACHEDIR", "${TOPDIR}/cache")
bb.codeparser.parser_cache_init(data.getVar("BB_CACHEDIR"))
raise SystemExit(msg)
data = parse_config_file(os.path.join("conf", "bitbake.conf"), data)
@@ -479,7 +421,7 @@ class CookerDataBuilder(object):
for bbclass in bbclasses:
data = _inherit(bbclass, data)
# Normally we only register event handlers at the end of parsing .bb files
# Nomally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
@@ -487,60 +429,9 @@ class CookerDataBuilder(object):
parselog.critical("Undefined event handler function '%s'" % var)
raise bb.BBHandledException()
handlerln = int(data.getVarFlag(var, "lineno", False))
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln, data)
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
return data
@staticmethod
def _parse_recipe(bb_data, bbfile, appends, mc, layername):
bb_data.setVar("__BBMULTICONFIG", mc)
bb_data.setVar("FILE_LAYERNAME", layername)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
bb.parse.cached_mtime_noerror(bbfile_loc)
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
return bb.parse.handle(bbfile, bb_data)
def parseRecipeVariants(self, bbfile, appends, virtonly=False, mc=None, layername=None):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = bb.cache.virtualfn2realfn(bbfile)
bb_data = self.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
return self._parse_recipe(bb_data, bbfile, appends, mc, layername)
if mc is not None:
bb_data = self.mcdata[mc].createCopy()
return self._parse_recipe(bb_data, bbfile, appends, mc, layername)
bb_data = self.data.createCopy()
datastores = self._parse_recipe(bb_data, bbfile, appends, '', layername)
for mc in self.mcdata:
if not mc:
continue
bb_data = self.mcdata[mc].createCopy()
newstores = self._parse_recipe(bb_data, bbfile, appends, mc, layername)
for ns in newstores:
datastores["mc:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
def parseRecipe(self, virtualfn, appends, layername):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug("Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = bb.cache.virtualfn2realfn(virtualfn)
datastores = self.parseRecipeVariants(virtualfn, appends, virtonly=True, layername=layername)
return datastores[virtual]


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -76,26 +74,26 @@ def createDaemon(function, logfile):
with open('/dev/null', 'r') as si:
os.dup2(si.fileno(), sys.stdin.fileno())
with open(logfile, 'a+') as so:
try:
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
except io.UnsupportedOperation:
sys.stdout = so
try:
so = open(logfile, 'a+')
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
except io.UnsupportedOperation:
sys.stdout = open(logfile, 'a+')
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two separate buffers
sys.stderr = sys.stdout
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two seperate buffers
sys.stderr = sys.stdout
try:
function()
except Exception as e:
traceback.print_exc()
finally:
bb.event.print_ui_queue()
# os._exit() doesn't flush open files like sys.exit() does. Manually flush
# stdout and stderr so that any logging output will be seen, particularly
# exception tracebacks.
sys.stdout.flush()
sys.stderr.flush()
os._exit(0)
try:
function()
except Exception as e:
traceback.print_exc()
finally:
bb.event.print_ui_queue()
# os._exit() doesn't flush open files like sys.exit() does. Manually flush
# stdout and stderr so that any logging output will be seen, particularly
# exception tracebacks.
sys.stdout.flush()
sys.stderr.flush()
os._exit(0)


@@ -4,16 +4,14 @@ BitBake 'Data' implementations
Functions for interacting with the data structure used by the
BitBake build tools.
expandKeys and datastore iteration are the most expensive
operations. Updating overrides is now "on the fly" but still based
on the idea of the cookie monster introduced by zecke:
"At night the cookie monster came by and
The expandKeys and update_data are the most expensive
operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
applying the skills from the not yet passed 'Entwurf und
Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster datastore operations."
to have faster update_data and expandKeys operations.
This is a trade-off between speed and memory again but
the speed is more critical here.
@@ -28,6 +26,11 @@ the speed is more critical here.
import sys, os, re
import hashlib
if sys.argv[0][-5:] == "pydoc":
path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
path = os.path.dirname(os.path.dirname(sys.argv[0]))
sys.path.insert(0, path)
from itertools import groupby
from bb import data_smart
@@ -67,6 +70,10 @@ def keys(d):
"""Return a list of keys in d"""
return d.keys()
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def expand(s, d, varname = None):
"""Variable expansion using the data store"""
return d.expand(s, varname)
@@ -114,8 +121,8 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if d.getVarFlag(var, 'python', False) and func:
return False
export = bb.utils.to_boolean(d.getVarFlag(var, "export"))
unexport = bb.utils.to_boolean(d.getVarFlag(var, "unexport"))
export = d.getVarFlag(var, "export", False)
unexport = d.getVarFlag(var, "unexport", False)
if not all and not export and not unexport and not func:
return False
@@ -188,8 +195,8 @@ def emit_env(o=sys.__stdout__, d = init(), all=False):
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
bb.utils.to_boolean(d.getVarFlag(key, 'export')) and
not bb.utils.to_boolean(d.getVarFlag(key, 'unexport')))
d.getVarFlag(key, 'export', False) and
not d.getVarFlag(key, 'unexport', False))
def exported_vars(d):
k = list(exported_keys(d))
@@ -219,7 +226,7 @@ def emit_func(func, o=sys.__stdout__, d = init()):
deps = newdeps
seen |= deps
newdeps = set()
for dep in sorted(deps):
for dep in deps:
if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep))
@@ -261,72 +268,65 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps -= seen
def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparsedata):
def handle_contains(value, contains, exclusions, d):
newvalue = []
if value:
newvalue.append(str(value))
for k in sorted(contains):
if k in exclusions or k in ignored_vars:
continue
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue.append("\n%s{%s} = Unset" % (k, item))
break
else:
newvalue.append("\n%s{%s} = Set" % (k, item))
return "".join(newvalue)
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
r2 = d.expandWithRefs(r, None)
value += "\n_remove of %s" % r
deps |= r2.references
deps = deps | (keys & r2.execs)
value = handle_contains(value, r2.contains, exclusions, d)
return value
def update_data(d):
"""Performs final steps upon the datastore, including application of overrides"""
d.finalize(parent = True)
def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps = set()
try:
if key in mod_funcs:
exclusions = set()
moddep = bb.codeparser.modulecode_deps[key]
value = handle_contains(moddep[4], moddep[3], exclusions, d)
return frozenset((moddep[0] | keys & moddep[1]) - ignored_vars), value
if key[-1] == ']':
vf = key[:-1].split('[')
if vf[1] == "vardepvalueexclude":
return deps, ""
value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
deps -= ignored_vars
return frozenset(deps), value
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
vardeps = varflags.get("vardeps")
exclusions = varflags.get("vardepsexclude", "").split()
def handle_contains(value, contains, d):
newvalue = ""
for k in sorted(contains):
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue += "\n%s{%s} = Unset" % (k, item)
break
else:
newvalue += "\n%s{%s} = Set" % (k, item)
if not newvalue:
return value
if not value:
return newvalue
return value + newvalue
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
r2 = d.expandWithRefs(r, None)
value += "\n_remove of %s" % r
deps |= r2.references
deps = deps | (keys & r2.execs)
return value
if "vardepvalue" in varflags:
value = varflags.get("vardepvalue")
elif varflags.get("func"):
if varflags.get("python"):
value = codeparsedata.getVarFlag(key, "_content", False)
value = d.getVarFlag(key, "_content", False)
parser = bb.codeparser.PythonParser(key, logger)
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
deps = deps | parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, exclusions, d)
value = handle_contains(value, parser.contains, d)
else:
value, parsedvar = codeparsedata.getVarFlag(key, "_content", False, retparser=True)
value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, exclusions, d)
value = handle_contains(value, parsedvar.contains, d)
if hasattr(parsedvar, "removes"):
value = handle_remove(value, deps, parsedvar.removes, d)
if vardeps is None:
@@ -341,7 +341,7 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
value, parser = d.getVarFlag(key, "_content", False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, exclusions, d)
value = handle_contains(value, parser.contains, d)
if hasattr(parser, "removes"):
value = handle_remove(value, deps, parser.removes, d)
@@ -361,50 +361,43 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
deps |= set(varfdeps)
deps |= set((vardeps or "").split())
deps -= set(exclusions)
deps -= ignored_vars
deps -= set(varflags.get("vardepsexclude", "").split())
except bb.parse.SkipRecipe:
raise
except Exception as e:
bb.warn("Exception during build_dependencies for %s" % key)
raise
return frozenset(deps), value
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
def generate_dependencies(d, ignored_vars):
def generate_dependencies(d, whitelist):
mod_funcs = set(bb.codeparser.modulecode_deps.keys())
keys = set(key for key in d if not key.startswith("__")) | mod_funcs
shelldeps = set(key for key in d.getVar("__exportlist", False) if bb.utils.to_boolean(d.getVarFlag(key, "export")) and not bb.utils.to_boolean(d.getVarFlag(key, "unexport")))
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS')
codeparserd = d.createCopy()
for forced in (d.getVar('BB_HASH_CODEPARSER_VALS') or "").split():
key, value = forced.split("=", 1)
codeparserd.setVar(key, value)
deps = {}
values = {}
tasklist = d.getVar('__BBTASKS', False) or []
for task in tasklist:
deps[task], values[task] = build_dependencies(task, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparserd)
deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, d)
newdeps = deps[task]
seen = set()
while newdeps:
nextdeps = newdeps
nextdeps = newdeps - whitelist
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep not in deps:
deps[dep], values[dep] = build_dependencies(dep, keys, mod_funcs, shelldeps, varflagsexcl, ignored_vars, d, codeparserd)
deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, d)
newdeps |= deps[dep]
newdeps -= seen
#print "For %s: %s" % (task, str(deps[task]))
return tasklist, deps, values
def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
taskdeps = {}
basehash = {}
@@ -413,10 +406,9 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
if data is None:
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
data = []
else:
data = [data]
data = ''
gendeps[task] -= whitelist
newdeps = gendeps[task]
seen = set()
while newdeps:
@@ -424,24 +416,27 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep in whitelist:
continue
gendeps[dep] -= whitelist
newdeps |= gendeps[dep]
newdeps -= seen
alldeps = sorted(seen)
for dep in alldeps:
data.append(dep)
data = data + dep
var = lookupcache[dep]
if var is not None:
data.append(str(var))
data = data + str(var)
k = fn + ":" + task
basehash[k] = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
taskdeps[task] = frozenset(seen)
basehash[k] = hashlib.sha256(data.encode("utf-8")).hexdigest()
taskdeps[task] = alldeps
return taskdeps, basehash
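Read in isolation, the hashing scheme above reduces to a few lines; a standalone sketch with made-up dependency names and values (the real inputs come from the datastore and lookup cache):
import hashlib
seen = {"CC", "CFLAGS"}                       # dependencies gathered for the task
lookupcache = {"CC": "gcc", "CFLAGS": "-O2"}  # cached variable values
data = []
for dep in sorted(seen):
    data.append(dep)
    var = lookupcache[dep]
    if var is not None:
        data.append(str(var))
basehash = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()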
def inherits_class(klass, d):
val = d.getVar('__inherit_cache', False) or []
needle = '/%s.bbclass' % klass
needle = os.path.join('classes', '%s.bbclass' % klass)
for v in val:
if v.endswith(needle):
return True


@@ -16,11 +16,8 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import builtins
import copy
import re
import sys
from collections.abc import MutableMapping
import copy, re, sys, traceback
from collections import MutableMapping
import logging
import hashlib
import bb, bb.codeparser
@@ -29,25 +26,13 @@ from bb.COW import COWDictBase
logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = [":append", ":prepend", ":remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__expand_python_regexp__ = re.compile(r"\${@(?:{.*?}|.)+?}")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~]+?}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')
bitbake_renamed_vars = {
"BB_ENV_WHITELIST": "BB_ENV_PASSTHROUGH",
"BB_ENV_EXTRAWHITE": "BB_ENV_PASSTHROUGH_ADDITIONS",
"BB_HASHBASE_WHITELIST": "BB_BASEHASH_IGNORE_VARS",
"BB_HASHCONFIG_WHITELIST": "BB_HASHCONFIG_IGNORE_VARS",
"BB_HASHTASK_WHITELIST": "BB_TASKHASH_IGNORE_TASKS",
"BB_SETSCENE_ENFORCE_WHITELIST": "BB_SETSCENE_ENFORCE_IGNORE_TASKS",
"MULTI_PROVIDER_WHITELIST": "BB_MULTI_PROVIDER_ALLOWED",
"BB_STAMP_WHITELIST": "is a deprecated variable and support has been removed",
"BB_STAMP_POLICY": "is a deprecated variable and support has been removed",
}
def infer_caller_details(loginfo, parent = False, varval = True):
"""Save the caller the trouble of specifying everything."""
# Save effort.
@@ -95,11 +80,10 @@ def infer_caller_details(loginfo, parent = False, varval = True):
loginfo['func'] = func
class VariableParse:
def __init__(self, varname, d, unexpanded_value = None, val = None):
def __init__(self, varname, d, val = None):
self.varname = varname
self.d = d
self.value = val
self.unexpanded_value = unexpanded_value
self.references = set()
self.execs = set()
@@ -123,11 +107,6 @@ class VariableParse:
else:
code = match.group()[3:-1]
# Do not run code that contains one or more unexpanded variables
# instead return the code with the characters we removed put back
if __expand_var_regexp__.findall(code):
return "${@" + code + "}"
if self.varname:
varname = 'Var <%s>' % self.varname
else:
@@ -153,21 +132,16 @@ class VariableParse:
value = utils.better_eval(codeobj, DataContext(self.d), {'d' : self.d})
return str(value)
class DataContext(dict):
excluded = set([i for i in dir(builtins) if not i.startswith('_')] + ['oe'])
class DataContext(dict):
def __init__(self, metadata, **kwargs):
self.metadata = metadata
dict.__init__(self, **kwargs)
self['d'] = metadata
self.context = set(bb.utils.get_context())
def __missing__(self, key):
if key in self.excluded or key in self.context:
raise KeyError(key)
value = self.metadata.getVar(key)
if value is None:
if value is None or self.metadata.getVarFlag(key, 'func', False):
raise KeyError(key)
else:
return value
@@ -177,7 +151,6 @@ class ExpansionError(Exception):
self.expression = expression
self.variablename = varname
self.exception = exception
self.varlist = [varname or expression or ""]
if varname:
if expression:
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
@@ -187,14 +160,8 @@ class ExpansionError(Exception):
self.msg = "Failure expanding expression %s which triggered exception %s: %s" % (expression, type(exception).__name__, exception)
Exception.__init__(self, self.msg)
self.args = (varname, expression, exception)
def addVar(self, varname):
if varname:
self.varlist.append(varname)
def __str__(self):
chain = "\nThe variable dependency chain for the failure is: " + " -> ".join(self.varlist)
return self.msg + chain
return self.msg
class IncludeHistory(object):
def __init__(self, parent = None, filename = '[TOP LEVEL]'):
@@ -310,7 +277,7 @@ class VariableHistory(object):
for (r, override) in d.overridedata[var]:
for event in self.variable(r):
loginfo = event.copy()
if 'flag' in loginfo and not loginfo['flag'].startswith(("_", ":")):
if 'flag' in loginfo and not loginfo['flag'].startswith("_"):
continue
loginfo['variable'] = var
loginfo['op'] = 'override[%s]:%s' % (override, loginfo['op'])
@@ -362,16 +329,6 @@ class VariableHistory(object):
lines.append(line)
return lines
def get_variable_refs(self, var):
"""Return a dict of file/line references"""
var_history = self.variable(var)
refs = {}
for event in var_history:
if event['file'] not in refs:
refs[event['file']] = []
refs[event['file']].append(event['line'])
return refs
def get_variable_items_files(self, var):
"""
Use variable history to map items added to a list variable and
@@ -385,7 +342,7 @@ class VariableHistory(object):
for event in history:
if 'flag' in event:
continue
if event['op'] == ':remove':
if event['op'] == '_remove':
continue
if isset and event['op'] == 'set?':
continue
@@ -406,23 +363,6 @@ class VariableHistory(object):
else:
self.variables[var] = []
def _print_rename_error(var, loginfo, renamedvars, fullvar=None):
info = ""
if "file" in loginfo:
info = " file: %s" % loginfo["file"]
if "line" in loginfo:
info += " line: %s" % loginfo["line"]
if fullvar and fullvar != var:
info += " referenced as: %s" % fullvar
if info:
info = " (%s)" % info.strip()
renameinfo = renamedvars[var]
if " " in renameinfo:
# A space signals a string to display instead of a rename
bb.erroronce('Variable %s %s%s' % (var, renameinfo, info))
else:
bb.erroronce('Variable %s has been renamed to %s%s' % (var, renameinfo, info))
class DataSmart(MutableMapping):
def __init__(self):
self.dict = {}
@@ -430,8 +370,6 @@ class DataSmart(MutableMapping):
self.inchistory = IncludeHistory()
self.varhistory = VariableHistory(self)
self._tracking = False
self._var_renames = {}
self._var_renames.update(bitbake_renamed_vars)
self.expand_cache = {}
@@ -453,9 +391,9 @@ class DataSmart(MutableMapping):
def expandWithRefs(self, s, varname):
if not isinstance(s, str): # sanity check
return VariableParse(varname, self, s, s)
return VariableParse(varname, self, s)
varparse = VariableParse(varname, self, s)
varparse = VariableParse(varname, self)
while s.find('${') != -1:
olds = s
@@ -465,17 +403,14 @@ class DataSmart(MutableMapping):
s = __expand_python_regexp__.sub(varparse.python_sub, s)
except SyntaxError as e:
# Likely unmatched brackets, just don't expand the expression
if e.msg != "EOL while scanning string literal" and not e.msg.startswith("unterminated string literal"):
if e.msg != "EOL while scanning string literal":
raise
if s == olds:
break
except ExpansionError as e:
e.addVar(varname)
except ExpansionError:
raise
except bb.parse.SkipRecipe:
raise
except bb.BBHandledException:
raise
except Exception as exc:
tb = sys.exc_info()[2]
raise ExpansionError(varname, s, exc).with_traceback(tb) from exc
@@ -487,19 +422,24 @@ class DataSmart(MutableMapping):
def expand(self, s, varname = None):
return self.expandWithRefs(s, varname).value
def finalize(self, parent = False):
return
def internal_finalize(self, parent = False):
"""Performs final steps upon the datastore, including application of overrides"""
self.overrides = None
def need_overrides(self):
if self.overrides is not None:
return
if self.inoverride:
return
overrride_stack = []
for count in range(5):
self.inoverride = True
# Can end up here recursively so setup dummy values
self.overrides = []
self.overridesset = set()
self.overrides = (self.getVar("OVERRIDES") or "").split(":") or []
overrride_stack.append(self.overrides)
self.overridesset = set(self.overrides)
self.inoverride = False
self.expand_cache = {}
@@ -509,7 +449,7 @@ class DataSmart(MutableMapping):
self.overrides = newoverrides
self.overridesset = set(self.overrides)
else:
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work. The list of failing override expansions: %s" % "\n".join(str(s) for s in overrride_stack))
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work.")
def initVar(self, var):
self.expand_cache = {}
@@ -520,44 +460,27 @@ class DataSmart(MutableMapping):
dest = self.dict
while dest:
if var in dest:
return dest[var]
return dest[var], self.overridedata.get(var, None)
if "_data" not in dest:
break
dest = dest["_data"]
return None
return None, self.overridedata.get(var, None)
def _makeShadowCopy(self, var):
if var in self.dict:
return
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
if local_var:
self.dict[var] = copy.copy(local_var)
else:
self.initVar(var)
def hasOverrides(self, var):
return var in self.overridedata
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
if not var.startswith("__anon_") and ("_append" in var or "_prepend" in var or "_remove" in var):
info = "%s" % var
if "file" in loginfo:
info += " file: %s" % loginfo["file"]
if "line" in loginfo:
info += " line: %s" % loginfo["line"]
bb.fatal("Variable %s contains an operation using the old override syntax. Please convert this layer/metadata before attempting to use with a newer bitbake." % info)
shortvar = var.split(":", 1)[0]
if shortvar in self._var_renames:
_print_rename_error(shortvar, loginfo, self._var_renames, fullvar=var)
# Mark that we have seen a renamed variable
self.setVar("_FAILPARSINGERRORHANDLED", True)
self.expand_cache = {}
parsing=False
if 'parsing' in loginfo:
@@ -586,7 +509,7 @@ class DataSmart(MutableMapping):
# pay the cookie monster
# more cookies for the cookie monster
if ':' in var:
if '_' in var:
self._setvar_update_overrides(base, **loginfo)
if base in self.overridevars:
@@ -597,27 +520,27 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
if not parsing:
if ":append" in self.dict[var]:
del self.dict[var][":append"]
if ":prepend" in self.dict[var]:
del self.dict[var][":prepend"]
if ":remove" in self.dict[var]:
del self.dict[var][":remove"]
if "_append" in self.dict[var]:
del self.dict[var]["_append"]
if "_prepend" in self.dict[var]:
del self.dict[var]["_prepend"]
if "_remove" in self.dict[var]:
del self.dict[var]["_remove"]
if var in self.overridedata:
active = []
self.need_overrides()
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
active.append(r)
elif ":" in o:
if set(o.split(":")).issubset(self.overridesset):
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
active.append(r)
for a in active:
self.delVar(a)
del self.overridedata[var]
# more cookies for the cookie monster
if ':' in var:
if '_' in var:
self._setvar_update_overrides(var, **loginfo)
# setting var
@@ -639,12 +562,12 @@ class DataSmart(MutableMapping):
nextnew.update(vardata.references)
nextnew.update(vardata.contains.keys())
new = nextnew
self.overrides = None
self.internal_finalize(True)
def _setvar_update_overrides(self, var, **loginfo):
# aka pay the cookie monster
override = var[var.rfind(':')+1:]
shortvar = var[:var.rfind(':')]
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and __override_regexp__.match(override):
if shortvar not in self.overridedata:
self.overridedata[shortvar] = []
@@ -653,9 +576,9 @@ class DataSmart(MutableMapping):
self.overridedata[shortvar] = list(self.overridedata[shortvar])
self.overridedata[shortvar].append([var, override])
override = None
if ":" in shortvar:
override = var[shortvar.rfind(':')+1:]
shortvar = var[:shortvar.rfind(':')]
if "_" in shortvar:
override = var[shortvar.rfind('_')+1:]
shortvar = var[:shortvar.rfind('_')]
if len(shortvar) == 0:
override = None
@@ -679,11 +602,10 @@ class DataSmart(MutableMapping):
self.varhistory.record(**loginfo)
self.setVar(newkey, val, ignore=True, parsing=True)
srcflags = self.getVarFlags(key, False, True) or {}
for i in srcflags:
if i not in (__setvar_keyword__):
for i in (__setvar_keyword__):
src = self.getVarFlag(key, i, False)
if src is None:
continue
src = srcflags[i]
dest = self.getVarFlag(newkey, i, False) or []
dest.extend(src)
@@ -695,7 +617,7 @@ class DataSmart(MutableMapping):
self.overridedata[newkey].append([v.replace(key, newkey), o])
self.renameVar(v, v.replace(key, newkey))
if ':' in newkey and val is None:
if '_' in newkey and val is None:
self._setvar_update_overrides(newkey, **loginfo)
loginfo['variable'] = key
@@ -707,12 +629,12 @@ class DataSmart(MutableMapping):
def appendVar(self, var, value, **loginfo):
loginfo['op'] = 'append'
self.varhistory.record(**loginfo)
self.setVar(var + ":append", value, ignore=True, parsing=True)
self.setVar(var + "_append", value, ignore=True, parsing=True)
def prependVar(self, var, value, **loginfo):
loginfo['op'] = 'prepend'
self.varhistory.record(**loginfo)
self.setVar(var + ":prepend", value, ignore=True, parsing=True)
self.setVar(var + "_prepend", value, ignore=True, parsing=True)
def delVar(self, var, **loginfo):
self.expand_cache = {}
@@ -723,10 +645,10 @@ class DataSmart(MutableMapping):
self.dict[var] = {}
if var in self.overridedata:
del self.overridedata[var]
if ':' in var:
override = var[var.rfind(':')+1:]
shortvar = var[:var.rfind(':')]
while override and __override_regexp__.match(override):
if '_' in var:
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and override.islower():
try:
if shortvar in self.overridedata:
# Force CoW by recreating the list first
@@ -735,23 +657,15 @@ class DataSmart(MutableMapping):
except ValueError as e:
pass
override = None
if ":" in shortvar:
override = var[shortvar.rfind(':')+1:]
shortvar = var[:shortvar.rfind(':')]
if "_" in shortvar:
override = var[shortvar.rfind('_')+1:]
shortvar = var[:shortvar.rfind('_')]
if len(shortvar) == 0:
override = None
def setVarFlag(self, var, flag, value, **loginfo):
self.expand_cache = {}
if var == "BB_RENAMED_VARIABLES":
self._var_renames[flag] = value
if var in self._var_renames:
_print_rename_error(var, loginfo, self._var_renames)
# Mark that we have seen a renamed variable
self.setVar("_FAILPARSINGERRORHANDLED", True)
if 'op' not in loginfo:
loginfo['op'] = "set"
loginfo['flag'] = flag
@@ -760,7 +674,7 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
self.dict[var][flag] = value
if flag == "_defaultval" and ':' in var:
if flag == "_defaultval" and '_' in var:
self._setvar_update_overrides(var, **loginfo)
if flag == "_defaultval" and var in self.overridevars:
self._setvar_update_overridevars(var, value)
@@ -781,27 +695,22 @@ class DataSmart(MutableMapping):
return None
cachename = var + "[" + flag + "]"
if not expand and retparser and cachename in self.expand_cache:
return self.expand_cache[cachename].unexpanded_value, self.expand_cache[cachename]
if expand and cachename in self.expand_cache:
return self.expand_cache[cachename].value
local_var = self._findVar(var)
local_var, overridedata = self._findVar(var)
value = None
removes = set()
if flag == "_content" and not parsing:
overridedata = self.overridedata.get(var, None)
if flag == "_content" and not parsing and overridedata is not None:
if flag == "_content" and overridedata is not None and not parsing:
match = False
active = {}
self.need_overrides()
for (r, o) in overridedata:
# FIXME What about double overrides both with "_" in the name?
# What about double overrides both with "_" in the name?
if o in self.overridesset:
active[o] = r
elif ":" in o:
if set(o.split(":")).issubset(self.overridesset):
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
active[o] = r
mod = True
@@ -809,10 +718,10 @@ class DataSmart(MutableMapping):
mod = False
for o in self.overrides:
for a in active.copy():
if a.endswith(":" + o):
if a.endswith("_" + o):
t = active[a]
del active[a]
active[a.replace(":" + o, "")] = t
active[a.replace("_" + o, "")] = t
mod = True
elif a == o:
match = active[a]
@@ -831,31 +740,31 @@ class DataSmart(MutableMapping):
value = copy.copy(local_var["_defaultval"])
if flag == "_content" and local_var is not None and ":append" in local_var and not parsing:
if flag == "_content" and local_var is not None and "_append" in local_var and not parsing:
if not value:
value = ""
self.need_overrides()
for (r, o) in local_var[":append"]:
for (r, o) in local_var["_append"]:
match = True
if o:
for o2 in o.split(":"):
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
if value is None:
value = ""
value = value + r
if flag == "_content" and local_var is not None and ":prepend" in local_var and not parsing:
if flag == "_content" and local_var is not None and "_prepend" in local_var and not parsing:
if not value:
value = ""
self.need_overrides()
for (r, o) in local_var[":prepend"]:
for (r, o) in local_var["_prepend"]:
match = True
if o:
for o2 in o.split(":"):
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
if value is None:
value = ""
value = r + value
parser = None
@@ -864,12 +773,12 @@ class DataSmart(MutableMapping):
if expand:
value = parser.value
if value and flag == "_content" and local_var is not None and ":remove" in local_var and not parsing:
if value and flag == "_content" and local_var is not None and "_remove" in local_var and not parsing:
self.need_overrides()
for (r, o) in local_var[":remove"]:
for (r, o) in local_var["_remove"]:
match = True
if o:
for o2 in o.split(":"):
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
@@ -882,7 +791,7 @@ class DataSmart(MutableMapping):
expanded_removes[r] = self.expand(r).split()
parser.removes = set()
val = []
val = ""
for v in __whitespace_split__.split(parser.value):
skip = False
for r in removes:
@@ -891,8 +800,8 @@ class DataSmart(MutableMapping):
skip = True
if skip:
continue
val.append(v)
parser.value = "".join(val)
val = val + v
parser.value = val
if expand:
value = parser.value
@@ -907,7 +816,7 @@ class DataSmart(MutableMapping):
def delVarFlag(self, var, flag, **loginfo):
self.expand_cache = {}
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
if not local_var:
return
if not var in self.dict:
@@ -950,12 +859,12 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
local_var = self._findVar(var)
local_var, _ = self._findVar(var)
flags = {}
if local_var:
for i in local_var:
if i.startswith(("_", ":")) and not internalflags:
if i.startswith("_") and not internalflags:
continue
flags[i] = local_var[i]
if expand and i in expand:
@@ -996,7 +905,6 @@ class DataSmart(MutableMapping):
data.inchistory = self.inchistory.copy()
data._tracking = self._tracking
data._var_renames = self._var_renames
data.overrides = None
data.overridevars = copy.copy(self.overridevars)
@@ -1019,7 +927,7 @@ class DataSmart(MutableMapping):
value = self.getVar(variable, False)
for key in keys:
referrervalue = self.getVar(key, False)
if referrervalue and isinstance(referrervalue, str) and ref in referrervalue:
if referrervalue and ref in referrervalue:
self.setVar(key, referrervalue.replace(ref, value))
def localkeys(self):
@@ -1054,8 +962,8 @@ class DataSmart(MutableMapping):
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
overrides.add(var)
elif ":" in o:
if set(o.split(":")).issubset(self.overridesset):
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
overrides.add(var)
for k in keylist(self.dict):
@@ -1085,10 +993,10 @@ class DataSmart(MutableMapping):
d = self.createCopy()
bb.data.expandKeys(d)
config_ignore_vars = set((d.getVar("BB_HASHCONFIG_IGNORE_VARS") or "").split())
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST") or "").split())
keys = set(key for key in iter(d) if not key.startswith("__"))
for key in keys:
if key in config_ignore_vars:
if key in config_whitelist:
continue
value = d.getVar(key, False) or ""
@@ -1097,7 +1005,7 @@ class DataSmart(MutableMapping):
else:
data.update({key:value})
varflags = d.getVarFlags(key, internalflags = True, expand=["vardepvalue"])
varflags = d.getVarFlags(key, internalflags = True)
if not varflags:
continue
for f in varflags:


@@ -40,7 +40,7 @@ class HeartbeatEvent(Event):
"""Triggered at regular time intervals of 10 seconds. Other events can fire much more often
(runQueueTaskStarted when there are many short tasks) or not at all for long periods
of time (again runQueueTaskStarted, when there is just one long-running task), so this
event is more suitable for doing some task-independent work occasionally."""
def __init__(self, time):
Event.__init__(self)
self.time = time
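A handler for this event is registered like any other class handler; a minimal sketch using the two-argument handler signature from the newer side of this diff (the handler name and body are illustrative):
import bb.event
def heartbeat_handler(event, d):
    # Fired roughly every 10 seconds, so keep the work here cheap.
    print("heartbeat at %s" % event.time)
bb.event.register("heartbeat_handler", heartbeat_handler,
                  mask=["bb.event.HeartbeatEvent"])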
@@ -68,39 +68,29 @@ _catchall_handlers = {}
_eventfilter = None
_uiready = False
_thread_lock = threading.Lock()
_heartbeat_enabled = False
_should_exit = threading.Event()
_thread_lock_enabled = False
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
def enable_threadlock():
# Always needed now
return
global _thread_lock_enabled
_thread_lock_enabled = True
def disable_threadlock():
# Always needed now
return
def enable_heartbeat():
global _heartbeat_enabled
_heartbeat_enabled = True
def disable_heartbeat():
global _heartbeat_enabled
_heartbeat_enabled = False
#
# In long running code, this function should be called periodically
# to check if we should exit due to an interruption (e.g. Ctrl+C from the UI)
#
def check_for_interrupts(d):
global _should_exit
if _should_exit.is_set():
bb.warn("Exiting due to interrupt.")
raise bb.BBHandledException()
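In practice that means a long-running loop in metadata code polls it, along these lines (the loop helpers are hypothetical):
while work_remaining():                  # hypothetical condition
    bb.event.check_for_interrupts(d)     # raises bb.BBHandledException on interrupt
    do_next_chunk()                      # hypothetical unit of work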
global _thread_lock_enabled
_thread_lock_enabled = False
def execute_handler(name, handler, event, d):
event.data = d
addedd = False
if 'd' not in builtins:
builtins['d'] = d
addedd = True
try:
ret = handler(event, d)
ret = handler(event)
except (bb.parse.SkipRecipe, bb.BBHandledException):
raise
except Exception:
@@ -114,7 +104,8 @@ def execute_handler(name, handler, event, d):
raise
finally:
del event.data
if addedd:
del builtins['d']
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
@@ -127,8 +118,6 @@ def fire_class_handlers(event, d):
if _eventfilter:
if not _eventfilter(name, handler, event, d):
continue
if d is not None and not name in (d.getVar("__BBHANDLERS_MC") or set()):
continue
execute_handler(name, handler, event, d)
ui_queue = []
@@ -141,14 +130,8 @@ def print_ui_queue():
if not _uiready:
from bb.msg import BBLogFormatter
# Flush any existing buffered content
try:
sys.stdout.flush()
except:
pass
try:
sys.stderr.flush()
except:
pass
sys.stdout.flush()
sys.stderr.flush()
stdout = logging.StreamHandler(sys.stdout)
stderr = logging.StreamHandler(sys.stderr)
formatter = BBLogFormatter("%(levelname)s: %(message)s")
@@ -189,30 +172,36 @@ def print_ui_queue():
def fire_ui_handlers(event, d):
global _thread_lock
global _thread_lock_enabled
if not _uiready:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
with bb.utils.lock_timeout(_thread_lock):
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
if not _ui_logfilters[h].filter(event):
continue
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
except:
errors.append(h)
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.acquire()
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
if not _ui_logfilters[h].filter(event):
continue
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
except:
errors.append(h)
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.release()
def fire(event, d):
"""Fire off an Event"""
@@ -238,34 +227,25 @@ def fire_from_worker(event, d):
fire_ui_handlers(event, d)
noop = lambda _: None
def register(name, handler, mask=None, filename=None, lineno=None, data=None):
def register(name, handler, mask=None, filename=None, lineno=None):
"""Register an Event handler"""
if data is not None and data.getVar("BB_CURRENT_MC"):
mc = data.getVar("BB_CURRENT_MC")
name = '%s%s' % (mc.replace('-', '_'), name)
# already registered
if name in _handlers:
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
bbhands_mc.add(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
return AlreadyRegistered
if handler is not None:
# handle string containing python code
if isinstance(handler, str):
tmp = "def %s(e, d):\n%s" % (name, handler)
# Inject empty lines to make code match lineno in filename
if lineno is not None:
tmp = "\n" * (lineno-1) + tmp
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = bb.methodpool.compile_cache(tmp)
if not code:
if filename is None:
filename = "%s(e, d)" % name
filename = "%s(e)" % name
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
if lineno is not None:
ast.increment_lineno(code, lineno-1)
code = compile(code, filename, "exec")
bb.methodpool.compile_cache_add(tmp, code)
except SyntaxError:
@@ -288,20 +268,10 @@ def register(name, handler, mask=None, filename=None, lineno=None, data=None):
_event_handler_map[m] = {}
_event_handler_map[m][name] = True
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
bbhands_mc.add(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
return Registered
def remove(name, handler, data=None):
def remove(name, handler):
"""Remove an Event handler"""
if data is not None:
if data.getVar("BB_CURRENT_MC"):
mc = data.getVar("BB_CURRENT_MC")
name = '%s%s' % (mc.replace('-', '_'), name)
_handlers.pop(name)
if name in _catchall_handlers:
_catchall_handlers.pop(name)
@@ -309,12 +279,6 @@ def remove(name, handler, data=None):
if name in _event_handler_map[event]:
_event_handler_map[event].pop(name)
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
if name in bbhands_mc:
bbhands_mc.remove(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
def get_handlers():
return _handlers
@@ -327,23 +291,21 @@ def set_eventfilter(func):
_eventfilter = func
def register_UIHhandler(handler, mainui=False):
with bb.utils.lock_timeout(_thread_lock):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
def unregister_UIHhandler(handlerNum, mainui=False):
if mainui:
global _uiready
_uiready = False
with bb.utils.lock_timeout(_thread_lock):
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
return
def get_uihandler():
@@ -498,7 +460,7 @@ class BuildCompleted(BuildBase, OperationCompleted):
BuildBase.__init__(self, n, p, failures)
class DiskFull(Event):
"""Disk full case build halted"""
"""Disk full case build aborted"""
def __init__(self, dev, type, freespace, mountpoint):
Event.__init__(self)
self._dev = dev
@@ -682,17 +644,6 @@ class ReachableStamps(Event):
Event.__init__(self)
self.stamps = stamps
class StaleSetSceneTasks(Event):
"""
An event listing setscene tasks which are 'stale' and will
be rerun. The metadata may use this to clean up stale data.
tasks is a mapping of tasks and matching stale stamps.
"""
def __init__(self, tasks):
Event.__init__(self)
self.tasks = tasks
class FilesMatchingFound(Event):
"""
Event when a list of files matching the supplied pattern has
@@ -776,7 +727,7 @@ class LogHandler(logging.Handler):
class MetadataEvent(Event):
"""
Generic event that target for OE-Core classes
to report information during asynchronous execution
"""
def __init__(self, eventtype, eventdata):
Event.__init__(self)
@@ -857,19 +808,3 @@ class FindSigInfoResult(Event):
def __init__(self, result):
Event.__init__(self)
self.result = result
class GetTaskSignatureResult(Event):
"""
Event to return results from GetTaskSignatures command
"""
def __init__(self, sig):
Event.__init__(self)
self.sig = sig
class ParseError(Event):
"""
Event to indicate parse failed
"""
def __init__(self, msg):
super().__init__()
self._msg = msg


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#


@@ -1,57 +0,0 @@
Users of the fetcher code are expected to observe certain constraints. This file
attempts to document some of them. Some are obvious, some are less so. They are
described in the context of how OE uses the fetcher, but the API calls are generic.
a) network access for sources is only expected to happen in the do_fetch task.
This is not enforced or tested but is required so that we can:
i) audit the sources used (i.e. for license/manifest reasons)
ii) support offline builds with a suitable cache
iii) allow work to continue even with downtime upstream
iv) allow for changes upstream in incompatible ways
v) allow rebuilding of the software in X years time
b) network access is not expected in do_unpack task.
c) you can take DL_DIR and use it as a mirror for offline builds.
d) access to the network is only made when explicitly configured in recipes
(e.g. use of AUTOREV, or use of git tags which change revision).
e) fetcher output is deterministic (i.e. if you fetch configuration XXX now it
will match in future exactly in a clean build with a new DL_DIR).
One specific pain point is git tags. They can be replaced and can change,
so the git fetcher has to resolve them over the network. We use git revisions
where possible to avoid this and ensure determinism.
f) network access is expected to work with the standard linux proxy variables
so that access behind firewalls works (the fetcher sets these in the
environment but only in the do_fetch tasks).
g) access during parsing has to be minimal; a "git ls-remote" for an AUTOREV
git recipe might be ok, but you can't expect to check out a git tree.
h) we need to provide revision information during parsing such that a version
for the recipe can be constructed.
i) versions are expected to increase in a way which sorts, allowing
package feeds to operate (see the PR server, required for git revisions to sort).
j) an API to query for possible version upgrades of a url is highly desirable
to allow our automated upgrade code to function (it is implied that this always
has network access).
k) Where fixes or changes to behaviour in the fetcher are made, we ask that
test cases are added (run with "bitbake-selftest bb.tests.fetch"). We do
have fairly extensive test coverage of the fetcher as it is the only way
to track all of its corner cases; sadly, it still doesn't give complete
coverage.
l) If using tools during parse time, they will have to be in ASSUME_PROVIDED
in OE's context as we can't build git-native, then parse a recipe and use
git ls-remote.
Not all fetchers support all features; autorev is optional and doesn't make
sense for some. Upgrade detection means different things in different contexts
too.
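Point (c) is worth a concrete illustration: with a populated DL_DIR, a fetch can be replayed offline through the normal API. A sketch, assuming `d` is a datastore with DL_DIR configured (the URL is an example only):
import bb.fetch2
urls = ["https://example.com/src/foo-1.0.tar.gz"]   # example URL only
fetcher = bb.fetch2.Fetch(urls, d)
fetcher.download()                   # normally done in do_fetch; may use the network
path = fetcher.localpath(urls[0])    # deterministic location under DL_DIR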


@@ -113,7 +113,7 @@ class MissingParameterError(BBFetchException):
self.args = (missing, url)
class ParameterError(BBFetchException):
"""Exception raised when a url cannot be processed due to invalid parameters."""
"""Exception raised when a url cannot be proccessed due to invalid parameters."""
def __init__(self, message, url):
msg = "URL: '%s' has invalid parameters. %s" % (url, message)
self.url = url
@@ -182,7 +182,7 @@ class URI(object):
Some notes about relative URIs: while it's specified that
a URI beginning with <scheme>:// should either be directly
followed by a hostname or a /, the old URI handling of the
fetch2 library did not conform to this. Therefore, this URI
class has some kludges to make sure that URIs are parsed in
a way conforming to bitbake's current usage. This URI class
supports the following:
@@ -199,7 +199,7 @@ class URI(object):
file://hostname/absolute/path.diff (would be IETF compliant)
Note that the last case only applies to a list of
explicitly allowed schemes (currently only file://), that requires
"whitelisted" schemes (currently only file://), that requires
its URIs to not have a network location.
"""
@@ -290,12 +290,12 @@ class URI(object):
def _param_str_split(self, string, elmdelim, kvdelim="="):
ret = collections.OrderedDict()
for k, v in [x.split(kvdelim, 1) if kvdelim in x else (x, None) for x in string.split(elmdelim) if x]:
for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim)]:
ret[k] = v
return ret
def _param_str_join(self, dict_, elmdelim, kvdelim="="):
return elmdelim.join([kvdelim.join([k, v]) if v else k for k, v in dict_.items()])
return elmdelim.join([kvdelim.join([k, v]) for k, v in dict_.items()])
@property
def hostport(self):
@@ -388,7 +388,7 @@ def decodeurl(url):
if s:
if not '=' in s:
raise MalformedUrl(url, "The URL: '%s' is invalid: parameter %s does not specify a value (missing '=')" % (url, s))
s1, s2 = s.split('=', 1)
s1, s2 = s.split('=')
p[s1] = s2
return type, host, urllib.parse.unquote(path), user, pswd, p
@@ -402,24 +402,24 @@ def encodeurl(decoded):
if not type:
raise MissingParameterError('type', "encoded from the data %s" % str(decoded))
url = ['%s://' % type]
url = '%s://' % type
if user and type != "file":
url.append("%s" % user)
url += "%s" % user
if pswd:
url.append(":%s" % pswd)
url.append("@")
url += ":%s" % pswd
url += "@"
if host and type != "file":
url.append("%s" % host)
url += "%s" % host
if path:
# Standardise path to ensure comparisons work
while '//' in path:
path = path.replace("//", "/")
url.append("%s" % urllib.parse.quote(path))
url += "%s" % urllib.parse.quote(path)
if p:
for parm in p:
url.append(";%s=%s" % (parm, p[parm]))
url += ";%s=%s" % (parm, p[parm])
return "".join(url)
return url
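The two helpers are intended as inverses; a round-trip sketch (example URL):
from bb.fetch2 import decodeurl, encodeurl
url = "http://example.com/pub/foo-1.0.tar.gz;striplevel=1"
decoded = decodeurl(url)    # (type, host, path, user, pswd, params)
rebuilt = encodeurl(decoded)
# For a simple URL like this the round trip reproduces the input string.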
def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
if not ud.url or not uri_find or not uri_replace:
@@ -428,9 +428,8 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
uri_decoded = list(decodeurl(ud.url))
uri_find_decoded = list(decodeurl(uri_find))
uri_replace_decoded = list(decodeurl(uri_replace))
logger.debug2("For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
logger.debug(2, "For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
result_decoded = ['', '', '', '', '', {}]
# 0 - type, 1 - host, 2 - path, 3 - user, 4- pswd, 5 - params
for loc, i in enumerate(uri_find_decoded):
result_decoded[loc] = uri_decoded[loc]
regexp = i
@@ -450,9 +449,6 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
for l in replacements:
uri_replace_decoded[loc][k] = uri_replace_decoded[loc][k].replace(l, replacements[l])
result_decoded[loc][k] = uri_replace_decoded[loc][k]
elif (loc == 3 or loc == 4) and uri_replace_decoded[loc]:
# User/password in the replacement is just a straight replacement
result_decoded[loc] = uri_replace_decoded[loc]
elif (re.match(regexp, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
result_decoded[loc] = ""
@@ -469,24 +465,16 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
basename = os.path.basename(mirrortarball)
# Kill parameters, they make no sense for mirror tarballs
uri_decoded[5] = {}
uri_find_decoded[5] = {}
elif ud.localpath and ud.method.supports_checksum(ud):
basename = os.path.basename(ud.localpath)
if basename:
uri_basename = os.path.basename(uri_decoded[loc])
# Prefix with a slash as a sentinel in case
# result_decoded[loc] does not contain one.
path = "/" + result_decoded[loc]
if uri_basename and basename != uri_basename and path.endswith("/" + uri_basename):
result_decoded[loc] = path[1:-len(uri_basename)] + basename
elif not path.endswith("/" + basename):
result_decoded[loc] = os.path.join(path[1:], basename)
if basename and not result_decoded[loc].endswith(basename):
result_decoded[loc] = os.path.join(result_decoded[loc], basename)
else:
return None
result = encodeurl(result_decoded)
if result == ud.url:
return None
logger.debug2("For url %s returning %s" % (ud.url, result))
logger.debug(2, "For url %s returning %s" % (ud.url, result))
return result
methods = []
@@ -511,14 +499,14 @@ def fetcher_init(d):
# When to drop SCM head revisions controlled by user policy
srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
if srcrev_policy == "cache":
logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs.clear()
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
_checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))
_checksum_cache.init_cache(d)
for m in methods:
if hasattr(m, "init"):
@@ -546,7 +534,7 @@ def mirror_from_string(data):
bb.warn('Invalid mirror data %s, should have paired members.' % data)
return list(zip(*[iter(mirrors)]*2))
def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True):
def verify_checksum(ud, d, precomputed={}):
"""
verify the MD5 and SHA256 checksum for downloaded src
@@ -560,25 +548,20 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
file against those in the recipe each time, rather than only after
downloading. See https://bugzilla.yoctoproject.org/show_bug.cgi?id=5571.
"""
if ud.ignore_checksums or not ud.method.supports_checksum(ud):
return {}
if localpath is None:
localpath = ud.localpath
def compute_checksum_info(checksum_id):
checksum_name = getattr(ud, "%s_name" % checksum_id)
if checksum_id in precomputed:
checksum_data = precomputed[checksum_id]
else:
checksum_data = getattr(bb.utils, "%s_file" % checksum_id)(localpath)
checksum_data = getattr(bb.utils, "%s_file" % checksum_id)(ud.localpath)
checksum_expected = getattr(ud, "%s_expected" % checksum_id)
if checksum_expected == '':
checksum_expected = None
return {
"id": checksum_id,
"name": checksum_name,
@@ -598,13 +581,17 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
checksum_lines = ["SRC_URI[%s] = \"%s\"" % (ci["name"], ci["data"])]
# If no checksum has been provided
if fatal_nochecksum and ud.method.recommends_checksum(ud) and all(ci["expected"] is None for ci in checksum_infos):
if ud.method.recommends_checksum(ud) and all(ci["expected"] is None for ci in checksum_infos):
messages = []
strict = d.getVar("BB_STRICT_CHECKSUM") or "0"
# If strict checking enabled and neither sum defined, raise error
if strict == "1":
raise NoChecksumError("\n".join(checksum_lines))
messages.append("No checksum specified for '%s', please add at " \
"least one to the recipe:" % ud.localpath)
messages.extend(checksum_lines)
logger.error("\n".join(messages))
raise NoChecksumError("Missing SRC_URI checksum", ud.url)
bb.event.fire(MissingChecksumEvent(ud.url, **checksum_event), d)
@@ -625,8 +612,8 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
for ci in checksum_infos:
if ci["expected"] and ci["expected"] != ci["data"]:
messages.append("File: '%s' has %s checksum '%s' when '%s' was " \
"expected" % (localpath, ci["id"], ci["data"], ci["expected"]))
messages.append("File: '%s' has %s checksum %s when %s was " \
"expected" % (ud.localpath, ci["id"], ci["data"], ci["expected"]))
bad_checksum = ci["data"]
if bad_checksum:
@@ -744,16 +731,13 @@ def subprocess_setup():
# SIGPIPE errors are known issues with gzip/bash
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def mark_recipe_nocache(d):
def get_autorev(d):
# only not cache src rev in autorev case
if d.getVar('BB_SRCREV_POLICY') != "cache":
d.setVar('BB_DONT_CACHE', '1')
def get_autorev(d):
mark_recipe_nocache(d)
d.setVar("__BBAUTOREV_SEEN", True)
return "AUTOINC"
def _get_srcrev(d, method_name='sortable_revision'):
def get_srcrev(d, method_name='sortable_revision'):
"""
Return the revision string, usually for use in the version string (PV) of the current package
Most packages usually only have one SCM so we just pass on the call.
@@ -767,34 +751,23 @@ def _get_srcrev(d, method_name='sortable_revision'):
that fetcher provides a method with the given name and the same signature as sortable_revision.
"""
d.setVar("__BBSRCREV_SEEN", "1")
recursion = d.getVar("__BBINSRCREV")
if recursion:
raise FetchError("There are recursive references in fetcher variables, likely through SRC_URI")
d.setVar("__BBINSRCREV", True)
scms = []
revs = []
fetcher = Fetch(d.getVar('SRC_URI').split(), d)
urldata = fetcher.ud
for u in urldata:
if urldata[u].method.supports_srcrev():
scms.append(u)
if not scms:
d.delVar("__BBINSRCREV")
return "", revs
if len(scms) == 0:
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
if len(scms) == 1 and len(urldata[scms[0]].names) == 1:
autoinc, rev = getattr(urldata[scms[0]].method, method_name)(urldata[scms[0]], d, urldata[scms[0]].names[0])
revs.append(rev)
if len(rev) > 10:
rev = rev[:10]
d.delVar("__BBINSRCREV")
if autoinc:
return "AUTOINC+" + rev, revs
return rev, revs
return "AUTOINC+" + rev
return rev
#
# Multiple SCMs are in SRC_URI so we resort to SRCREV_FORMAT
@@ -810,7 +783,6 @@ def _get_srcrev(d, method_name='sortable_revision'):
ud = urldata[scm]
for name in ud.names:
autoinc, rev = getattr(ud.method, method_name)(ud, d, name)
revs.append(rev)
seenautoinc = seenautoinc or autoinc
if len(rev) > 10:
rev = rev[:10]
@@ -827,70 +799,12 @@ def _get_srcrev(d, method_name='sortable_revision'):
if seenautoinc:
format = "AUTOINC+" + format
d.delVar("__BBINSRCREV")
return format, revs
def get_hashvalue(d, method_name='sortable_revision'):
pkgv, revs = _get_srcrev(d, method_name=method_name)
return " ".join(revs)
def get_pkgv_string(d, method_name='sortable_revision'):
pkgv, revs = _get_srcrev(d, method_name=method_name)
return pkgv
def get_srcrev(d, method_name='sortable_revision'):
pkgv, revs = _get_srcrev(d, method_name=method_name)
if not pkgv:
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
return pkgv
return format
def localpath(url, d):
fetcher = bb.fetch2.Fetch([url], d)
return fetcher.localpath(url)
# Need to export PATH as binary could be in metadata paths
# rather than host provided
# Also include some other variables.
FETCH_EXPORT_VARS = ['HOME', 'PATH',
'HTTP_PROXY', 'http_proxy',
'HTTPS_PROXY', 'https_proxy',
'FTP_PROXY', 'ftp_proxy',
'FTPS_PROXY', 'ftps_proxy',
'NO_PROXY', 'no_proxy',
'ALL_PROXY', 'all_proxy',
'GIT_PROXY_COMMAND',
'GIT_SSH',
'GIT_SSH_COMMAND',
'GIT_SSL_CAINFO',
'GIT_SMART_HTTP',
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
'SOCKS5_USER', 'SOCKS5_PASSWD',
'DBUS_SESSION_BUS_ADDRESS',
'P4CONFIG',
'SSL_CERT_FILE',
'NODE_EXTRA_CA_CERTS',
'AWS_PROFILE',
'AWS_ACCESS_KEY_ID',
'AWS_SECRET_ACCESS_KEY',
'AWS_ROLE_ARN',
'AWS_WEB_IDENTITY_TOKEN_FILE',
'AWS_DEFAULT_REGION',
'AWS_SESSION_TOKEN',
'GIT_CACHE_PATH',
'REMOTE_CONTAINERS_IPC',
'SSL_CERT_DIR']
def get_fetcher_environment(d):
newenv = {}
origenv = d.getVar("BB_ORIGENV")
for name in bb.fetch2.FETCH_EXPORT_VARS:
value = d.getVar(name)
if not value and origenv:
value = origenv.getVar(name)
if value:
newenv[name] = value
return newenv
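A sketch of how this helper might be used to run an external tool with only the passthrough variables set (the command is illustrative):
import subprocess
import bb.fetch2
env = bb.fetch2.get_fetcher_environment(d)   # FETCH_EXPORT_VARS, with BB_ORIGENV fallback
subprocess.run(["git", "ls-remote", "https://git.example.com/repo.git"],
               env=env, check=True)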
def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
"""
Run cmd returning the command output
@@ -899,7 +813,25 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
Optionally remove the files/directories listed in cleanup upon failure
"""
exportvars = FETCH_EXPORT_VARS
# Need to export PATH as binary could be in metadata paths
# rather than host provided
# Also include some other variables.
# FIXME: Should really include all export variables?
exportvars = ['HOME', 'PATH',
'HTTP_PROXY', 'http_proxy',
'HTTPS_PROXY', 'https_proxy',
'FTP_PROXY', 'ftp_proxy',
'FTPS_PROXY', 'ftps_proxy',
'NO_PROXY', 'no_proxy',
'ALL_PROXY', 'all_proxy',
'GIT_PROXY_COMMAND',
'GIT_SSH',
'GIT_SSL_CAINFO',
'GIT_SMART_HTTP',
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
'SOCKS5_USER', 'SOCKS5_PASSWD',
'DBUS_SESSION_BUS_ADDRESS',
'P4CONFIG']
if not cleanup:
cleanup = []
@@ -921,13 +853,18 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
# Ensure that a _PYTHON_SYSCONFIGDATA_NAME value set by a recipe
# (for example via python3native.bbclass since warrior) is not set for
# host Python (otherwise tools like git-make-shallow will fail)
cmd = 'unset _PYTHON_SYSCONFIGDATA_NAME; ' + cmd
# Disable pseudo as it may affect ssh, potentially causing it to hang.
cmd = 'export PSEUDO_DISABLED=1; ' + cmd
if workdir:
logger.debug("Running '%s' in %s" % (cmd, workdir))
logger.debug(1, "Running '%s' in %s" % (cmd, workdir))
else:
logger.debug("Running %s", cmd)
logger.debug(1, "Running %s", cmd)
success = False
error_message = ""
@@ -936,17 +873,14 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
(output, errors) = bb.process.run(cmd, log=log, shell=True, stderr=subprocess.PIPE, cwd=workdir)
success = True
except bb.process.NotFoundError as e:
error_message = "Fetch command %s not found" % (e.command)
error_message = "Fetch command %s" % (e.command)
except bb.process.ExecutionError as e:
if e.stdout:
output = "output:\n%s\n%s" % (e.stdout, e.stderr)
elif e.stderr:
output = "output:\n%s" % e.stderr
else:
if log:
output = "see logfile for output"
else:
output = "no output"
output = "no output"
error_message = "Fetch command %s failed with exit code %s, %s" % (e.command, e.exitcode, output)
except bb.process.CmdError as e:
error_message = "Fetch command %s could not be run:\n%s" % (e.command, e.msg)
@@ -971,7 +905,7 @@ def check_network_access(d, info, url):
elif not trusted_network(d, url):
raise UntrustedUrl(url, info)
else:
logger.debug("Fetcher accessed the network with the command %s" % info)
logger.debug(1, "Fetcher accessed the network with the command %s" % info)
def build_mirroruris(origud, mirrors, ld):
uris = []
@@ -997,7 +931,7 @@ def build_mirroruris(origud, mirrors, ld):
continue
if not trusted_network(ld, newuri):
logger.debug("Mirror %s not in the list of trusted networks, skipping" % (newuri))
logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" % (newuri))
continue
# Create a local copy of the mirrors minus the current line
@@ -1008,11 +942,10 @@ def build_mirroruris(origud, mirrors, ld):
try:
newud = FetchData(newuri, ld)
newud.ignore_checksums = True
newud.setup_localpath(ld)
except bb.fetch2.BBFetchException as e:
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(str(e))
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(1, str(e))
try:
# setup_localpath of file:// urls may fail, we should still see
# if mirrors of the url exist
@@ -1115,11 +1048,10 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
elif isinstance(e, NoChecksumError):
raise
else:
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(str(e))
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(1, str(e))
try:
if ud.method.cleanup_upon_failure():
ud.method.clean(ud, ld)
ud.method.clean(ud, ld)
except UnboundLocalError:
pass
return False
@@ -1130,8 +1062,6 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
def ensure_symlink(target, link_name):
if not os.path.exists(link_name):
dirname = os.path.dirname(link_name)
bb.utils.mkdirhier(dirname)
if os.path.islink(link_name):
# Broken symbolic link
os.unlink(link_name)
@@ -1215,11 +1145,11 @@ def srcrev_internal_helper(ud, d, name):
pn = d.getVar("PN")
attempts = []
if name != '' and pn:
attempts.append("SRCREV_%s:pn-%s" % (name, pn))
attempts.append("SRCREV_%s_pn-%s" % (name, pn))
if name != '':
attempts.append("SRCREV_%s" % name)
if pn:
attempts.append("SRCREV:pn-%s" % pn)
attempts.append("SRCREV_pn-%s" % pn)
attempts.append("SRCREV")
for a in attempts:
@@ -1244,7 +1174,6 @@ def srcrev_internal_helper(ud, d, name):
if srcrev == "INVALID" or not srcrev:
raise FetchError("Please set a valid SRCREV for url %s (possible key names are %s, or use a ;rev=X URL parameter)" % (str(attempts), ud.url), ud.url)
if srcrev == "AUTOINC":
d.setVar("__BBAUTOREV_ACTED_UPON", True)
srcrev = ud.method.latest_revision(ud, d, name)
return srcrev
@@ -1256,21 +1185,23 @@ def get_checksum_file_list(d):
SRC_URI as a space-separated string
"""
fetch = Fetch([], d, cache = False, localonly = True)
dl_dir = d.getVar('DL_DIR')
filelist = []
for u in fetch.urls:
ud = fetch.ud[u]
if ud and isinstance(ud.method, local.Local):
found = False
paths = ud.method.localfile_searchpaths(ud, d)
paths = ud.method.localpaths(ud, d)
for f in paths:
pth = ud.decodedurl
if os.path.exists(f):
found = True
if f.startswith(dl_dir):
# The local fetcher's behaviour is to return a path under DL_DIR if it couldn't find the file anywhere else
if os.path.exists(f):
bb.warn("Getting checksum for %s SRC_URI entry %s: file not found except in DL_DIR" % (d.getVar('PN'), os.path.basename(f)))
else:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: file could not be found" % (d.getVar('PN'), os.path.basename(f)))
filelist.append(f + ":" + str(os.path.exists(f)))
if not found:
bb.fatal(("Unable to get checksum for %s SRC_URI entry %s: file could not be found"
"\nThe following paths were searched:"
"\n%s") % (d.getVar('PN'), os.path.basename(f), '\n'.join(paths)))
return " ".join(filelist)
@@ -1317,13 +1248,18 @@ class FetchData(object):
if checksum_name in self.parm:
checksum_expected = self.parm[checksum_name]
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az", "crate", "gs"]:
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3"]:
checksum_expected = None
else:
checksum_expected = d.getVarFlag("SRC_URI", checksum_name)
setattr(self, "%s_expected" % checksum_id, checksum_expected)
for checksum_id in CHECKSUM_LIST:
configure_checksum(checksum_id)
self.ignore_checksums = False
self.names = self.parm.get("name",'default').split(',')
self.method = None
@@ -1345,11 +1281,6 @@ class FetchData(object):
if hasattr(self.method, "urldata_init"):
self.method.urldata_init(self, d)
for checksum_id in CHECKSUM_LIST:
configure_checksum(checksum_id)
self.ignore_checksums = False
if "localpath" in self.parm:
# if user sets localpath for file, use it instead.
self.localpath = self.parm["localpath"]
@@ -1429,9 +1360,6 @@ class FetchMethod(object):
Is localpath something that can be represented by a checksum?
"""
# We cannot compute checksums for None
if urldata.localpath is None:
return False
# We cannot compute checksums for directories
if os.path.isdir(urldata.localpath):
return False
@@ -1444,12 +1372,6 @@ class FetchMethod(object):
"""
return False
def cleanup_upon_failure(self):
"""
When a fetch fails, should clean() be called?
"""
return True
def verify_donestamp(self, ud, d):
"""
Verify the donestamp file
@@ -1517,35 +1439,28 @@ class FetchMethod(object):
cmd = None
if unpack:
tar_cmd = 'tar --extract --no-same-owner'
if 'striplevel' in urldata.parm:
tar_cmd += ' --strip-components=%s' % urldata.parm['striplevel']
if file.endswith('.tar'):
cmd = '%s -f %s' % (tar_cmd, file)
cmd = 'tar x --no-same-owner -f %s' % file
elif file.endswith('.tgz') or file.endswith('.tar.gz') or file.endswith('.tar.Z'):
cmd = '%s -z -f %s' % (tar_cmd, file)
cmd = 'tar xz --no-same-owner -f %s' % file
elif file.endswith('.tbz') or file.endswith('.tbz2') or file.endswith('.tar.bz2'):
cmd = 'bzip2 -dc %s | %s -f -' % (file, tar_cmd)
cmd = 'bzip2 -dc %s | tar x --no-same-owner -f -' % file
elif file.endswith('.gz') or file.endswith('.Z') or file.endswith('.z'):
cmd = 'gzip -dc %s > %s' % (file, efile)
elif file.endswith('.bz2'):
cmd = 'bzip2 -dc %s > %s' % (file, efile)
elif file.endswith('.txz') or file.endswith('.tar.xz'):
cmd = 'xz -dc %s | %s -f -' % (file, tar_cmd)
cmd = 'xz -dc %s | tar x --no-same-owner -f -' % file
elif file.endswith('.xz'):
cmd = 'xz -dc %s > %s' % (file, efile)
elif file.endswith('.tar.lz'):
cmd = 'lzip -dc %s | %s -f -' % (file, tar_cmd)
cmd = 'lzip -dc %s | tar x --no-same-owner -f -' % file
elif file.endswith('.lz'):
cmd = 'lzip -dc %s > %s' % (file, efile)
elif file.endswith('.tar.7z'):
cmd = '7z x -so %s | %s -f -' % (file, tar_cmd)
cmd = '7z x -so %s | tar x --no-same-owner -f -' % file
elif file.endswith('.7z'):
cmd = '7za x -y %s 1>/dev/null' % file
elif file.endswith('.tzst') or file.endswith('.tar.zst'):
cmd = 'zstd --decompress --stdout %s | %s -f -' % (file, tar_cmd)
elif file.endswith('.zst'):
cmd = 'zstd --decompress --stdout %s > %s' % (file, efile)
elif file.endswith('.zip') or file.endswith('.jar'):
try:
dos = bb.utils.to_boolean(urldata.parm.get('dos'), False)
@@ -1576,7 +1491,7 @@ class FetchMethod(object):
raise UnpackError("Unable to unpack deb/ipk package - does not contain data.tar.* file", urldata.url)
else:
raise UnpackError("Unable to unpack deb/ipk package - could not list contents", urldata.url)
cmd = 'ar x %s %s && %s -p -f %s && rm %s' % (file, datafile, tar_cmd, datafile, datafile)
cmd = 'ar x %s %s && tar --no-same-owner -xpf %s && rm %s' % (file, datafile, datafile, datafile)
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
if 'subdir' in urldata.parm:
@@ -1592,7 +1507,6 @@ class FetchMethod(object):
unpackdir = rootdir
if not unpack or not cmd:
urldata.unpack_tracer.unpack("file-copy", unpackdir)
# If file == dest, then avoid any copies, as we already put the file into dest!
dest = os.path.join(unpackdir, os.path.basename(file))
if file != dest and not (os.path.exists(dest) and os.path.samefile(file, dest)):
@@ -1607,8 +1521,6 @@ class FetchMethod(object):
destdir = urlpath.rsplit("/", 1)[0] + '/'
bb.utils.mkdirhier("%s/%s" % (unpackdir, destdir))
cmd = 'cp -fpPRH "%s" "%s"' % (file, destdir)
else:
urldata.unpack_tracer.unpack("archive-extract", unpackdir)
if not cmd:
return
@@ -1700,61 +1612,12 @@ class FetchMethod(object):
"""
return []
class DummyUnpackTracer(object):
"""
Abstract API definition for a class that traces unpacked source files back
to their respective upstream SRC_URI entries, for software composition
analysis, license compliance and detailed SBOM generation purposes.
Users may load their own unpack tracer class (instead of the dummy
one) by setting the BB_UNPACK_TRACER_CLASS config parameter.
"""
def start(self, unpackdir, urldata_dict, d):
"""
Start tracing the core Fetch.unpack process, using an index to map
unpacked files to each SRC_URI entry.
This method is called by Fetch.unpack and it may receive nested calls by
gitsm and npmsw fetchers, which expand SRC_URI entries by adding implicit
URLs and by recursively calling Fetch.unpack from new (nested) Fetch
instances.
"""
return
def start_url(self, url):
"""Start tracing url unpack process.
This method is called by Fetch.unpack before the fetcher-specific unpack
method starts, and it may receive nested calls by gitsm and npmsw
fetchers.
"""
return
def unpack(self, unpack_type, destdir):
"""
Set unpack_type and destdir for current url.
This method is called by the fetcher-specific unpack method after url
tracing started.
"""
return
def finish_url(self, url):
"""Finish tracing url unpack process and update the file index.
This method is called by Fetch.unpack after the fetcher-specific unpack
method finished its job, and it may receive nested calls by gitsm
and npmsw fetchers.
"""
return
def complete(self):
"""
Finish tracing the Fetch.unpack process, and check if all nested
Fetch.unpack calls (if any) have been completed; if so, save collected
metadata.
"""
return
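As a hedged sketch, a user-supplied tracer implementing this API could look like the class below; the module path "mytracing.tracer.LoggingUnpackTracer" used with BB_UNPACK_TRACER_CLASS is hypothetical.

# Selected with BB_UNPACK_TRACER_CLASS = "mytracing.tracer.LoggingUnpackTracer"
class LoggingUnpackTracer:
    """Toy tracer: records where each SRC_URI entry was unpacked."""
    def start(self, unpackdir, urldata_dict, d):
        self.unpackdir = unpackdir
        self.destdirs = {}
        self.current_url = None
    def start_url(self, url):
        self.current_url = url
    def unpack(self, unpack_type, destdir):
        self.destdirs[self.current_url] = (unpack_type, destdir)
    def finish_url(self, url):
        self.current_url = None
    def complete(self):
        for url, (unpack_type, destdir) in self.destdirs.items():
            print("%s unpacked (%s) into %s" % (url, unpack_type, destdir))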
class Fetch(object):
def __init__(self, urls, d, cache = True, localonly = False, connection_cache = None):
if localonly and cache:
raise Exception("bb.fetch2.Fetch.__init__: cannot set cache and localonly at same time")
if not urls:
if len(urls) == 0:
urls = d.getVar("SRC_URI").split()
self.urls = urls
self.d = d
@@ -1769,30 +1632,10 @@ class Fetch(object):
if key in urldata_cache:
self.ud = urldata_cache[key]
# the unpack_tracer object needs to be made available to possible nested
# Fetch instances (when those are created by gitsm and npmsw fetchers)
# so we set it as a global variable
global unpack_tracer
try:
unpack_tracer
except NameError:
class_path = d.getVar("BB_UNPACK_TRACER_CLASS")
if class_path:
# use user-defined unpack tracer class
import importlib
module_name, _, class_name = class_path.rpartition(".")
module = importlib.import_module(module_name)
class_ = getattr(module, class_name)
unpack_tracer = class_()
else:
# fall back to the dummy/abstract class
unpack_tracer = DummyUnpackTracer()
for url in urls:
if url not in self.ud:
try:
self.ud[url] = FetchData(url, d, localonly)
self.ud[url].unpack_tracer = unpack_tracer
except NonLocalMethod:
if localonly:
self.ud[url] = None
@@ -1831,7 +1674,6 @@ class Fetch(object):
network = self.d.getVar("BB_NO_NETWORK")
premirroronly = bb.utils.to_boolean(self.d.getVar("BB_FETCH_PREMIRRORONLY"))
checksum_missing_messages = []
for u in urls:
ud = self.ud[u]
ud.setup_localpath(self.d)
@@ -1843,10 +1685,11 @@ class Fetch(object):
try:
self.d.setVar("BB_NO_NETWORK", network)
if m.verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
done = True
elif m.try_premirror(ud, self.d):
logger.debug("Trying PREMIRRORS")
logger.debug(1, "Trying PREMIRRORS")
mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
done = m.try_mirrors(self, ud, self.d, mirrors)
if done:
@@ -1856,21 +1699,19 @@ class Fetch(object):
m.update_donestamp(ud, self.d)
except ChecksumError as e:
logger.warning("Checksum failure encountered with premirror download of %s - will attempt other sources." % u)
logger.debug(str(e))
logger.debug(1, str(e))
done = False
if premirroronly:
self.d.setVar("BB_NO_NETWORK", "1")
firsterr = None
verified_stamp = False
if done:
verified_stamp = m.verify_donestamp(ud, self.d)
verified_stamp = m.verify_donestamp(ud, self.d)
if not done and (not verified_stamp or m.need_update(ud, self.d)):
try:
if not trusted_network(self.d, ud.url):
raise UntrustedUrl(ud.url)
logger.debug("Trying Upstream")
logger.debug(1, "Trying Upstream")
m.download(ud, self.d)
if hasattr(m, "build_mirror_data"):
m.build_mirror_data(ud, self.d)
@@ -1885,19 +1726,19 @@ class Fetch(object):
except BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warning("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
logger.debug(str(e))
logger.debug(1, str(e))
if os.path.exists(ud.localpath):
rename_bad_checksum(ud, e.checksum)
elif isinstance(e, NoChecksumError):
raise
else:
logger.warning('Failed to fetch URL %s, attempting MIRRORS if available' % u)
logger.debug(str(e))
logger.debug(1, str(e))
firsterr = e
# Remove any incomplete fetch
if not verified_stamp and m.cleanup_upon_failure():
if not verified_stamp:
m.clean(ud, self.d)
logger.debug("Trying MIRRORS")
logger.debug(1, "Trying MIRRORS")
mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
done = m.try_mirrors(self, ud, self.d, mirrors)
@@ -1914,28 +1755,17 @@ class Fetch(object):
raise ChecksumError("Stale Error Detected")
except BBFetchException as e:
if isinstance(e, NoChecksumError):
(message, _) = e.args
checksum_missing_messages.append(message)
continue
elif isinstance(e, ChecksumError):
if isinstance(e, ChecksumError):
logger.error("Checksum failure fetching %s" % u)
raise
finally:
if ud.lockfile:
bb.utils.unlockfile(lf)
if checksum_missing_messages:
logger.error("Missing SRC_URI checksum, please add those to the recipe: \n%s", "\n".join(checksum_missing_messages))
raise BBFetchException("There were missing checksums in the recipe")
def checkstatus(self, urls=None):
"""
Check all URLs exist upstream.
Returns None if the URLs exist, raises FetchError if the check wasn't
successful but there wasn't an error (such as file not found), and
raises other exceptions in error cases.
Check all urls exist upstream
"""
if not urls:
@@ -1945,7 +1775,7 @@ class Fetch(object):
ud = self.ud[u]
ud.setup_localpath(self.d)
m = ud.method
logger.debug("Testing URL %s", u)
logger.debug(1, "Testing URL %s", u)
# First try checking uri, u, from PREMIRRORS
mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
ret = m.try_mirrors(self, ud, self.d, mirrors, True)
@@ -1958,7 +1788,7 @@ class Fetch(object):
ret = m.try_mirrors(self, ud, self.d, mirrors, True)
if not ret:
raise FetchError("URL doesn't work", u)
raise FetchError("URL %s doesn't work" % u, u)
def unpack(self, root, urls=None):
"""
@@ -1968,8 +1798,6 @@ class Fetch(object):
if not urls:
urls = self.urls
unpack_tracer.start(root, self.ud, self.d)
for u in urls:
ud = self.ud[u]
ud.setup_localpath(self.d)
@@ -1977,15 +1805,11 @@ class Fetch(object):
if ud.lockfile:
lf = bb.utils.lockfile(ud.lockfile)
unpack_tracer.start_url(u)
ud.method.unpack(ud, root, self.d)
unpack_tracer.finish_url(u)
if ud.lockfile:
bb.utils.unlockfile(lf)
unpack_tracer.complete()
def clean(self, urls=None):
"""
Clean files that the fetcher gets or places
@@ -2085,9 +1909,6 @@ from . import repo
from . import clearcase
from . import npm
from . import npmsw
from . import az
from . import crate
from . import gcp
methods.append(local.Local())
methods.append(wget.Wget())
@@ -2107,6 +1928,3 @@ methods.append(repo.Repo())
methods.append(clearcase.ClearCase())
methods.append(npm.Npm())
methods.append(npmsw.NpmShrinkWrap())
methods.append(az.Az())
methods.append(crate.Crate())
methods.append(gcp.GCP())
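The methods list is also the extension point for out-of-tree fetchers; a skeleton registration is sketched below (the myscheme:// URL type and class are invented for illustration).

from bb.fetch2 import FetchMethod, methods

class MyScheme(FetchMethod):
    """Skeleton fetcher for a made-up myscheme:// URL type."""
    def supports(self, ud, d):
        return ud.type in ["myscheme"]
    def download(self, ud, d):
        raise NotImplementedError("illustration only")

methods.append(MyScheme())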

View File: bitbake/lib/bb/fetch2/az.py

@@ -1,93 +0,0 @@
"""
BitBake 'Fetch' Azure Storage implementation
"""
# Copyright (C) 2021 Alejandro Hernandez Samaniego
#
# Based on bb.fetch2.wget:
# Copyright (C) 2003, 2004 Chris Larson
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import shlex
import os
import bb
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2.wget import Wget
class Az(Wget):
def supports(self, ud, d):
"""
Check to see if a given url can be fetched from Azure Storage
"""
return ud.type in ['az']
def checkstatus(self, fetch, ud, d, try_again=True):
# checkstatus discards parameters either way, we need to do this before adding the SAS
ud.url = ud.url.replace('az://','https://').split(';')[0]
az_sas = d.getVar('AZ_SAS')
if az_sas and az_sas not in ud.url:
ud.url += az_sas
return Wget.checkstatus(self, fetch, ud, d, try_again)
# Override download method, include retries
def download(self, ud, d, retries=3):
"""Fetch urls"""
# If we're reaching the account transaction limit we might be refused a connection,
# retrying allows us to avoid false negatives since the limit changes over time
fetchcmd = self.basecmd + ' --retry-connrefused --waitretry=5'
# We need to provide a localpath to avoid wget using the SAS
# ud.localfile either has the downloadfilename or ud.path
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile)
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if ud.user and ud.pswd:
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
# Check if a Shared Access Signature was given and use it
az_sas = d.getVar('AZ_SAS')
if az_sas:
azuri = '%s%s%s%s' % ('https://', ud.host, ud.path, az_sas)
else:
azuri = '%s%s%s' % ('https://', ud.host, ud.path)
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it... trying again.
fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % azuri)
else:
fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % azuri)
try:
self._runwget(ud, d, fetchcmd, False)
except FetchError as e:
# Azure fails on handshake sometimes when using wget after some stress, producing a
# FetchError from the fetcher; if the artifact exists, retrying should succeed
if 'Unable to establish SSL connection' in str(e):
logger.debug2('Unable to establish SSL connection: Retries remaining: %s, Retrying...' % retries)
self.download(ud, d, retries -1)
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath):
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (azuri, ud.localpath), azuri)
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (azuri), azuri)
return True
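A usage sketch for this fetcher through the Fetch API (account, container, blob and SAS values are invented; 'd' is the usual datastore):

from bb.fetch2 import Fetch

d.setVar("AZ_SAS", "?sv=2020-08-04&sig=REDACTED")  # hypothetical Shared Access Signature
fetcher = Fetch(["az://example.blob.core.windows.net/container/archive-1.0.tar.gz"], d)
fetcher.download()  # rewritten internally to an https:// wget fetch with the SAS appended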

View File: bitbake/lib/bb/fetch2/bzr.py

@@ -74,16 +74,16 @@ class Bzr(FetchMethod):
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug("BZR Update %s", ud.url)
logger.debug(1, "BZR Update %s", ud.url)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
runfetchcmd(bzrcmd, d, workdir=os.path.join(ud.pkgdir, os.path.basename(ud.path)))
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug("BZR Checkout %s", ud.url)
logger.debug(1, "BZR Checkout %s", ud.url)
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", bzrcmd)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d, workdir=ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
@@ -109,7 +109,7 @@ class Bzr(FetchMethod):
"""
Return the latest upstream revision number
"""
logger.debug2("BZR fetcher hitting network for %s", ud.url)
logger.debug(2, "BZR fetcher hitting network for %s", ud.url)
bb.fetch2.check_network_access(d, self._buildbzrcommand(ud, d, "revno"), ud.url)

View File: bitbake/lib/bb/fetch2/clearcase.py

@@ -70,7 +70,7 @@ class ClearCase(FetchMethod):
return ud.type in ['ccrc']
def debug(self, msg):
logger.debug("ClearCase: %s", msg)
logger.debug(1, "ClearCase: %s", msg)
def urldata_init(self, ud, d):
"""

View File: bitbake/lib/bb/fetch2/crate.py

@@ -1,141 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for crates.io
"""
# Copyright (C) 2016 Doug Goldstein
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import hashlib
import json
import os
import subprocess
import bb
from bb.fetch2 import logger, subprocess_setup, UnpackError
from bb.fetch2.wget import Wget
class Crate(Wget):
"""Class to fetch crates via wget"""
def _cargo_bitbake_path(self, rootdir):
return os.path.join(rootdir, "cargo_home", "bitbake")
def supports(self, ud, d):
"""
Check to see if a given url is for this fetcher
"""
return ud.type in ['crate']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
"""
Sets up to download the respective crate from crates.io
"""
if ud.type == 'crate':
self._crate_urldata_init(ud, d)
super(Crate, self).urldata_init(ud, d)
def _crate_urldata_init(self, ud, d):
"""
Sets up the download for a crate
"""
# URL syntax is: crate://HOST/NAME/VERSION
# break the URL apart by /
parts = ud.url.split('/')
if len(parts) < 5:
raise bb.fetch2.ParameterError("Invalid URL: Must be crate://HOST/NAME/VERSION", ud.url)
# version is expected to be the last token
# but ignore possible url parameters which will be used
# by the top fetcher class
version = parts[-1].split(";")[0]
# second to last field is name
name = parts[-2]
# host (this is to allow custom crate registries to be specified)
host = '/'.join(parts[2:-2])
# if using upstream just fix it up nicely
if host == 'crates.io':
host = 'crates.io/api/v1/crates'
ud.url = "https://%s/%s/%s/download" % (host, name, version)
ud.parm['downloadfilename'] = "%s-%s.crate" % (name, version)
if 'name' not in ud.parm:
ud.parm['name'] = '%s-%s' % (name, version)
logger.debug2("Fetching %s to %s" % (ud.url, ud.parm['downloadfilename']))
def unpack(self, ud, rootdir, d):
"""
Uses the crate to build the necessary paths for cargo to utilize it
"""
if ud.type == 'crate':
return self._crate_unpack(ud, rootdir, d)
else:
super(Crate, self).unpack(ud, rootdir, d)
def _crate_unpack(self, ud, rootdir, d):
"""
Unpacks a crate
"""
thefile = ud.localpath
# possible metadata we need to write out
metadata = {}
# change to the rootdir to unpack but save the old working dir
save_cwd = os.getcwd()
os.chdir(rootdir)
bp = d.getVar('BP')
if bp == ud.parm.get('name'):
cmd = "tar -xz --no-same-owner -f %s" % thefile
ud.unpack_tracer.unpack("crate-extract", rootdir)
else:
cargo_bitbake = self._cargo_bitbake_path(rootdir)
ud.unpack_tracer.unpack("cargo-extract", cargo_bitbake)
cmd = "tar -xz --no-same-owner -f %s -C %s" % (thefile, cargo_bitbake)
# ensure we've got these paths made
bb.utils.mkdirhier(cargo_bitbake)
# generate metadata necessary
with open(thefile, 'rb') as f:
# get the SHA256 of the original tarball
tarhash = hashlib.sha256(f.read()).hexdigest()
metadata['files'] = {}
metadata['package'] = tarhash
path = d.getVar('PATH')
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
bb.note("Unpacking %s to %s/" % (thefile, os.getcwd()))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
os.chdir(save_cwd)
if ret != 0:
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)
# if we have metadata to write out..
if len(metadata) > 0:
cratepath = os.path.splitext(os.path.basename(thefile))[0]
bbpath = self._cargo_bitbake_path(rootdir)
mdfile = '.cargo-checksum.json'
mdpath = os.path.join(bbpath, cratepath, mdfile)
with open(mdpath, "w") as f:
json.dump(metadata, f)
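A usage sketch of the crate://HOST/NAME/VERSION form described above (crate name and version are arbitrary; 'd' is the usual datastore):

from bb.fetch2 import Fetch

fetcher = Fetch(["crate://crates.io/glob/0.2.11"], d)
# urldata_init rewrites this to
#   https://crates.io/api/v1/crates/glob/0.2.11/download
# and stores the result in DL_DIR as glob-0.2.11.crate
fetcher.download()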

View File: bitbake/lib/bb/fetch2/cvs.py

@@ -109,7 +109,7 @@ class Cvs(FetchMethod):
cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)
# create module directory
logger.debug2("Fetch: checking for module directory")
logger.debug(2, "Fetch: checking for module directory")
moddir = os.path.join(ud.pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
@@ -123,7 +123,7 @@ class Cvs(FetchMethod):
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
workdir = ud.pkgdir
logger.debug("Running %s", cvscmd)
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd

View File: bitbake/lib/bb/fetch2/gcp.py

@@ -1,102 +0,0 @@
"""
BitBake 'Fetch' implementation for Google Cloud Platform Storage.
Class for fetching files from Google Cloud Storage using the
Google Cloud Storage Python Client. The GCS Python Client must
be correctly installed, configured and authenticated prior to use.
Additionally, gsutil must also be installed.
"""
# Copyright (C) 2023, Snap Inc.
#
# Based in part on bb.fetch2.s3:
# Copyright (C) 2017 Andre McCurdy
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import bb
import urllib.parse, urllib.error
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class GCP(FetchMethod):
"""
Class to fetch urls via GCP's Python API.
"""
def __init__(self):
self.gcp_client = None
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with GCP.
"""
return ud.type in ['gs']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
ud.basecmd = "gsutil stat"
def get_gcp_client(self):
from google.cloud import storage
self.gcp_client = storage.Client(project=None)
def download(self, ud, d):
"""
Fetch urls using the GCP API.
Assumes localpath was called first.
"""
logger.debug2(f"Trying to download gs://{ud.host}{ud.path} to {ud.localpath}")
if self.gcp_client is None:
self.get_gcp_client()
bb.fetch2.check_network_access(d, ud.basecmd, f"gs://{ud.host}{ud.path}")
runfetchcmd("%s %s" % (ud.basecmd, f"gs://{ud.host}{ud.path}"), d)
# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")
blob = self.gcp_client.bucket(ud.host).blob(path)
blob.download_to_filename(ud.localpath)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the GCP API
# tool with a little healthy suspicion).
if not os.path.exists(ud.localpath):
raise FetchError(f"The GCP API returned success for gs://{ud.host}{ud.path} but {ud.localpath} doesn't exist?!")
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError(f"The downloaded file for gs://{ud.host}{ud.path} resulted in a zero size file?! Deleting and failing since this isn't right.")
return True
def checkstatus(self, fetch, ud, d):
"""
Check the status of a URL.
"""
logger.debug2(f"Checking status of gs://{ud.host}{ud.path}")
if self.gcp_client is None:
self.get_gcp_client()
bb.fetch2.check_network_access(d, ud.basecmd, f"gs://{ud.host}{ud.path}")
runfetchcmd("%s %s" % (ud.basecmd, f"gs://{ud.host}{ud.path}"), d)
# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")
if self.gcp_client.bucket(ud.host).blob(path).exists() == False:
raise FetchError(f"The GCP API reported that gs://{ud.host}{ud.path} does not exist")
else:
return True
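A usage sketch (bucket and object names invented; 'd' is the usual datastore, and the google-cloud-storage client plus gsutil must be available as noted above):

from bb.fetch2 import Fetch

fetcher = Fetch(["gs://example-bucket/path/archive-1.0.tar.gz"], d)
fetcher.download()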

View File: bitbake/lib/bb/fetch2/git.py

@@ -44,27 +44,13 @@ Supported SRC_URI options are:
- nobranch
Don't check the SHA validation for the branch. Set this option for a recipe
referring to a commit which is valid in any namespace (branch, tag, ...)
instead of a branch.
referring to commit which is valid in tag instead of branch.
The default is "0", set nobranch=1 if needed.
- subpath
Limit the checkout to a specific subpath of the tree.
By default, checkout the whole tree, set subpath=<path> if needed
- destsuffix
The name of the path in which to place the checkout.
By default, the path is git/, set destsuffix=<suffix> if needed
- usehead
For local git:// urls to use the current branch HEAD as the revision for use with
AUTOREV. Implies nobranch.
- lfs
Enable the checkout to use LFS for large files. This will download all LFS files
in the download step, as the unpack step does not have network access.
The default is "1", set lfs=0 to skip.
"""
# Copyright (C) 2005 Richard Purdie
@@ -78,21 +64,15 @@ import fnmatch
import os
import re
import shlex
import shutil
import subprocess
import tempfile
import bb
import bb.progress
from contextlib import contextmanager
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from bb.fetch2 import trusted_network
sha1_re = re.compile(r'^[0-9a-f]{40}$')
slash_re = re.compile(r"/+")
class GitProgressHandler(bb.progress.LineFilterProgressHandler):
"""Extract progress information from git output"""
def __init__(self, d):
@@ -150,9 +130,6 @@ class Git(FetchMethod):
def supports_checksum(self, urldata):
return False
def cleanup_upon_failure(self):
return False
def urldata_init(self, ud, d):
"""
init git specific variable within url data
@@ -164,11 +141,6 @@ class Git(FetchMethod):
ud.proto = 'file'
else:
ud.proto = "git"
if ud.host == "github.com" and ud.proto == "git":
# github stopped supporting git protocol
# https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git
ud.proto = "https"
bb.warn("URL: %s uses git protocol which is no longer supported by github. Please change to ;protocol=https in the url." % ud.url)
if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
@@ -192,18 +164,11 @@ class Git(FetchMethod):
ud.nocheckout = 1
ud.unresolvedrev = {}
branches = ud.parm.get("branch", "").split(',')
if branches == [""] and not ud.nobranch:
bb.warn("URL: %s does not set any branch parameter. The future default branch used by tools and repositories is uncertain and we will therefore soon require this is set in all git urls." % ud.url)
branches = ["master"]
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
ud.noshared = d.getVar("BB_GIT_NOSHARED") == "1"
ud.cloneflags = "-n"
if not ud.noshared:
ud.cloneflags += " -s"
ud.cloneflags = "-s -n"
if ud.bareclone:
ud.cloneflags += " --mirror"
@@ -255,14 +220,9 @@ class Git(FetchMethod):
ud.shallow = False
if ud.usehead:
# When usehead is set let's associate 'HEAD' with the unresolved
# rev of this repository. This will get resolved into a revision
# later. If an actual revision happens to have also been provided
# then this setting will be overridden.
for name in ud.names:
ud.unresolvedrev[name] = 'HEAD'
ud.unresolvedrev['default'] = 'HEAD'
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c gc.autoDetach=false -c core.pager=cat -c safe.bareRepository=all"
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
@@ -271,20 +231,20 @@ class Git(FetchMethod):
ud.setup_revisions(d)
for name in ud.names:
# Ensure any revision that doesn't look like a SHA-1 is translated into one
if not sha1_re.match(ud.revisions[name] or ''):
# Ensure anything that doesn't look like a sha256 checksum/revision is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
if ud.revisions[name]:
ud.unresolvedrev[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.').replace(' ','_').replace('(', '_').replace(')', '_'))
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.').replace(' ','_'))
if gitsrcname.startswith('.'):
gitsrcname = gitsrcname[1:]
# For a rebaseable git repo, it is necessary to keep a mirror tar ball
# per revision, so that even if the revision disappears from the
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
# contain the revision
# contains the revision
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
@@ -328,10 +288,7 @@ class Git(FetchMethod):
return ud.clonedir
def need_update(self, ud, d):
return self.clonedir_need_update(ud, d) \
or self.shallow_tarball_need_update(ud) \
or self.tarball_need_update(ud) \
or self.lfs_need_update(ud, d)
return self.clonedir_need_update(ud, d) or self.shallow_tarball_need_update(ud) or self.tarball_need_update(ud)
def clonedir_need_update(self, ud, d):
if not os.path.exists(ud.clonedir):
@@ -343,15 +300,6 @@ class Git(FetchMethod):
return True
return False
def lfs_need_update(self, ud, d):
if self.clonedir_need_update(ud, d):
return True
for name in ud.names:
if not self._lfs_objects_downloaded(ud, d, name, ud.clonedir):
return True
return False
def clonedir_need_shallow_revs(self, ud, d):
for rev in ud.shallow_revs:
try:
@@ -371,16 +319,6 @@ class Git(FetchMethod):
# is not possible
if bb.utils.to_boolean(d.getVar("BB_FETCH_PREMIRRORONLY")):
return True
# If the url is not in trusted network, that is, BB_NO_NETWORK is set to 0
# and BB_ALLOWED_NETWORKS does not contain the host that ud.url uses, then
# we need to try premirrors first as using upstream is destined to fail.
if not trusted_network(d, ud.url):
return True
# the following check is to ensure incremental fetch in downloads, this is
# because the premirror might be old and does not contain the new rev required,
# and this will cause a total removal and new clone. So if we can reach the
# network, we prefer upstream over premirror, though the premirror might contain
# the new rev.
if os.path.exists(ud.clonedir):
return False
return True
@@ -394,54 +332,17 @@ class Git(FetchMethod):
if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
ud.localpath = ud.fullshallow
return
elif os.path.exists(ud.fullmirror) and self.need_update(ud, d):
if not os.path.exists(ud.clonedir):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
else:
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=tmpdir)
output = runfetchcmd("%s remote" % ud.basecmd, d, quiet=True, workdir=ud.clonedir)
if 'mirror' in output:
runfetchcmd("%s remote rm mirror" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote add --mirror=fetch mirror %s" % (ud.basecmd, tmpdir), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --update-head-ok --progress mirror " % (ud.basecmd)
runfetchcmd(fetch_cmd, d, workdir=ud.clonedir)
elif os.path.exists(ud.fullmirror) and not os.path.exists(ud.clonedir):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
repourl = self._get_repo_url(ud)
needs_clone = False
if os.path.exists(ud.clonedir):
# The directory may exist, but not be the top level of a bare git
# repository, in which case it needs to be deleted and re-cloned.
try:
# Since clones can be bare, use --absolute-git-dir instead of --show-toplevel
output = runfetchcmd("LANG=C %s rev-parse --absolute-git-dir" % ud.basecmd, d, workdir=ud.clonedir)
toplevel = output.rstrip()
if not bb.utils.path_is_descendant(toplevel, ud.clonedir):
logger.warning("Top level directory '%s' is not a descendant of '%s'. Re-cloning", toplevel, ud.clonedir)
needs_clone = True
except bb.fetch2.FetchError as e:
logger.warning("Unable to get top level for %s (not a git directory?): %s", ud.clonedir, e)
needs_clone = True
except FileNotFoundError as e:
logger.warning("%s", e)
needs_clone = True
if needs_clone:
shutil.rmtree(ud.clonedir)
else:
needs_clone = True
# If the repo still doesn't exist, fallback to cloning it
if needs_clone:
# We do this since git will use a "-l" option automatically for local urls where possible,
# but it doesn't work when git/objects is a symlink, only works when it is a directory.
if not os.path.exists(ud.clonedir):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl_path = repourl[7:]
objects = os.path.join(repourl_path, 'objects')
if os.path.isdir(objects) and not os.path.islink(objects):
repourl = repourl_path
repourl = repourl[7:]
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, shlex.quote(repourl), ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
@@ -455,11 +356,7 @@ class Git(FetchMethod):
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=ud.clonedir)
if ud.nobranch:
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
else:
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/heads/*:refs/heads/* refs/tags/*:refs/tags/*" % (ud.basecmd, shlex.quote(repourl))
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
@@ -482,47 +379,7 @@ class Git(FetchMethod):
if missing_rev:
raise bb.fetch2.FetchError("Unable to find revision %s even from upstream" % missing_rev)
if self.lfs_need_update(ud, d):
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
# of all LFS blobs needed at the srcrev.
#
# It would be nice to just do this inline here by running 'git-lfs fetch'
# on the bare clonedir, but that operation requires a working copy on some
# releases of Git LFS.
with tempfile.TemporaryDirectory(dir=d.getVar('DL_DIR')) as tmpdir:
# Do the checkout. This implicitly involves a Git LFS fetch.
Git.unpack(self, ud, tmpdir, d)
# Scoop up a copy of any stuff that Git LFS downloaded. Merge them into
# the bare clonedir.
#
# As this procedure is invoked repeatedly on incremental fetches as
# a recipe's SRCREV is bumped throughout its lifetime, this will
# result in a gradual accumulation of LFS blobs in <ud.clonedir>/lfs
# corresponding to all the blobs reachable from the different revs
# fetched across time.
#
# Only do this if the unpack resulted in a .git/lfs directory being
# created; this only happens if at least one blob needed to be
# downloaded.
if os.path.exists(os.path.join(ud.destdir, ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/.git" % ud.destdir)
def build_mirror_data(self, ud, d):
# Create as a temp file and move atomically into position to avoid races
@contextmanager
def create_atomic(filename):
fd, tfile = tempfile.mkstemp(dir=os.path.dirname(filename))
try:
yield tfile
umask = os.umask(0o666)
os.umask(umask)
os.chmod(tfile, (0o666 & ~umask))
os.rename(tfile, filename)
finally:
os.close(fd)
if ud.shallow and ud.write_shallow_tarballs:
if not os.path.exists(ud.fullshallow):
if os.path.islink(ud.fullshallow):
@@ -533,8 +390,7 @@ class Git(FetchMethod):
self.clone_shallow_local(ud, shallowclone, d)
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullshallow) as tfile:
runfetchcmd("tar -czf %s ." % tfile, d, workdir=shallowclone)
runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
runfetchcmd("touch %s.done" % ud.fullshallow, d)
finally:
bb.utils.remove(tempdir, recurse=True)
@@ -543,11 +399,7 @@ class Git(FetchMethod):
os.unlink(ud.fullmirror)
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullmirror) as tfile:
mtime = runfetchcmd("{} log --all -1 --format=%cD".format(ud.basecmd), d,
quiet=True, workdir=ud.clonedir)
runfetchcmd("tar -czf %s --owner oe:0 --group oe:0 --mtime \"%s\" ."
% (tfile, mtime), d, workdir=ud.clonedir)
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
def clone_shallow_local(self, ud, dest, d):
@@ -609,33 +461,20 @@ class Git(FetchMethod):
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
subdir = ud.parm.get("subdir")
subpath = ud.parm.get("subpath")
readpathspec = ""
def_destsuffix = "git/"
if subpath:
readpathspec = ":%s" % subpath
def_destsuffix = "%s/" % os.path.basename(subpath.rstrip('/'))
if subdir:
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
if os.path.isabs(subdir):
if not os.path.realpath(subdir).startswith(os.path.realpath(destdir)):
raise bb.fetch2.UnpackError("subdir argument isn't a subdirectory of unpack root %s" % destdir, ud.url)
destdir = subdir
else:
destdir = os.path.join(destdir, subdir)
def_destsuffix = ""
subdir = ud.parm.get("subpath", "")
if subdir != "":
readpathspec = ":%s" % subdir
def_destsuffix = "%s/" % os.path.basename(subdir.rstrip('/'))
else:
readpathspec = ""
def_destsuffix = "git/"
destsuffix = ud.parm.get("destsuffix", def_destsuffix)
destdir = ud.destdir = os.path.join(destdir, destsuffix)
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
if not ud.bareclone:
ud.unpack_tracer.unpack("git", destdir)
need_lfs = self._need_lfs(ud)
need_lfs = ud.parm.get("lfs", "1") == "1"
if not need_lfs:
ud.basecmd = "GIT_LFS_SKIP_SMUDGE=1 " + ud.basecmd
@@ -643,12 +482,13 @@ class Git(FetchMethod):
source_found = False
source_error = []
clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
if clonedir_is_up_to_date:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
source_found = True
else:
source_error.append("clone directory not available or not up to date: " + ud.clonedir)
if not source_found:
clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
if clonedir_is_up_to_date:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
source_found = True
else:
source_error.append("clone directory not available or not up to date: " + ud.clonedir)
if not source_found:
if ud.shallow:
@@ -672,11 +512,9 @@ class Git(FetchMethod):
raise bb.fetch2.FetchError("Repository %s has LFS content, install git-lfs on host to download (or set lfs=0 to ignore it)" % (repourl))
elif not need_lfs:
bb.note("Repository %s has LFS content but it is not being fetched" % (repourl))
else:
runfetchcmd("%s lfs install --local" % ud.basecmd, d, workdir=destdir)
if not ud.nocheckout:
if subpath:
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
workdir=destdir)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
@@ -725,54 +563,18 @@ class Git(FetchMethod):
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
return output.split()[0] != "0"
def _lfs_objects_downloaded(self, ud, d, name, wd):
"""
Verifies whether the LFS objects for requested revisions have already been downloaded
"""
# Bail out early if this repository doesn't use LFS
if not self._need_lfs(ud) or not self._contains_lfs(ud, d, wd):
return True
# The Git LFS specification defines ([1]) the LFS folder layout so it should be safe to check for file
# existence.
# [1] https://github.com/git-lfs/git-lfs/blob/main/docs/spec.md#intercepting-git
cmd = "%s lfs ls-files -l %s" \
% (ud.basecmd, ud.revisions[name])
output = runfetchcmd(cmd, d, quiet=True, workdir=wd).rstrip()
# Do not do any further matching if no objects are managed by LFS
if not output:
return True
# Match all lines beginning with the hexadecimal OID
oid_regex = re.compile("^(([a-fA-F0-9]{2})([a-fA-F0-9]{2})[A-Fa-f0-9]+)")
for line in output.split("\n"):
oid = re.search(oid_regex, line)
if not oid:
bb.warn("git lfs ls-files output '%s' did not match expected format." % line)
if not os.path.exists(os.path.join(wd, "lfs", "objects", oid.group(2), oid.group(3), oid.group(1))):
return False
return True
def _need_lfs(self, ud):
return ud.parm.get("lfs", "1") == "1"
def _contains_lfs(self, ud, d, wd):
"""
Check if the repository has 'lfs' (large file) content
"""
if ud.nobranch:
# If no branch is specified, use the current git commit
refname = self._build_revision(ud, d, ud.names[0])
elif wd == ud.clonedir:
# The bare clonedir doesn't use the remote names; it has the branch immediately.
refname = ud.branches[ud.names[0]]
if not ud.nobranch:
branchname = ud.branches[ud.names[0]]
else:
refname = "origin/%s" % ud.branches[ud.names[0]]
branchname = "master"
cmd = "%s grep lfs %s:.gitattributes | wc -l" % (
ud.basecmd, refname)
cmd = "%s grep lfs origin/%s:.gitattributes | wc -l" % (
ud.basecmd, ud.branches[ud.names[0]])
try:
output = runfetchcmd(cmd, d, quiet=True, workdir=wd)
@@ -793,11 +595,6 @@ class Git(FetchMethod):
"""
Return the repository URL
"""
# Note that we do not support passwords directly in the git urls. There are several
# reasons. SRC_URI can be written out to things like buildhistory and people don't
# want to leak passwords like that. Its also all too easy to share metadata without
# removing the password. ssh keys, ~/.netrc and ~/.ssh/config files can be used as
# alternatives so we will not take patches adding password support here.
if ud.user:
username = ud.user + '@'
else:
@@ -809,6 +606,7 @@ class Git(FetchMethod):
Return a unique key for the url
"""
# Collapse adjacent slashes
slash_re = re.compile(r"/+")
return "git:" + ud.host + slash_re.sub(".", ud.path) + ud.unresolvedrev[name]
def _lsremote(self, ud, d, search):
@@ -841,12 +639,6 @@ class Git(FetchMethod):
"""
Compute the HEAD revision for the url
"""
if not d.getVar("__BBSRCREV_SEEN"):
raise bb.fetch2.FetchError("Recipe uses a floating tag/branch '%s' for repo '%s' without a fixed SRCREV yet doesn't call bb.fetch2.get_srcrev() (use SRCPV in PV for OE)." % (ud.unresolvedrev[name], ud.host+ud.path))
# Ensure we mark as not cached
bb.fetch2.mark_recipe_nocache(d)
output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fallback to other form
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:
@@ -871,42 +663,38 @@ class Git(FetchMethod):
"""
pupver = ('', '')
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or r"(?P<pver>([0-9][\.|_]?)+)")
try:
output = self._lsremote(ud, d, "refs/tags/*")
except (bb.fetch2.FetchError, bb.fetch2.NetworkAccess) as e:
bb.note("Could not list remote: %s" % str(e))
return pupver
rev_tag_re = re.compile(r"([0-9a-f]{40})\s+refs/tags/(.*)")
pver_re = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or r"(?P<pver>([0-9][\.|_]?)+)")
nonrel_re = re.compile(r"(alpha|beta|rc|final)+")
verstring = ""
revision = ""
for line in output.split("\n"):
if not line:
break
m = rev_tag_re.match(line)
if not m:
continue
(revision, tag) = m.groups()
tag_head = line.split("/")[-1]
# Ignore non-released branches
if nonrel_re.search(tag):
m = re.search(r"(alpha|beta|rc|final)+", tag_head)
if m:
continue
# search for version in the line
m = pver_re.search(tag)
if not m:
tag = tagregex.search(tag_head)
if tag is None:
continue
pver = m.group('pver').replace("_", ".")
tag = tag.group('pver')
tag = tag.replace("_", ".")
if verstring and bb.utils.vercmp(("0", pver, ""), ("0", verstring, "")) < 0:
if verstring and bb.utils.vercmp(("0", tag, ""), ("0", verstring, "")) < 0:
continue
verstring = pver
verstring = tag
revision = line.split()[0]
pupver = (verstring, revision)
return pupver

View File: bitbake/lib/bb/fetch2/gitsm.py

@@ -78,7 +78,7 @@ class GitSM(Git):
module_hash = ""
if not module_hash:
logger.debug("submodule %s is defined, but is not initialized in the repository. Skipping", m)
logger.debug(1, "submodule %s is defined, but is not initialized in the repository. Skipping", m)
continue
submodules.append(m)
@@ -88,9 +88,9 @@ class GitSM(Git):
subrevision[m] = module_hash.split()[2]
# Convert relative to absolute uri based on parent uri
if uris[m].startswith('..') or uris[m].startswith('./'):
if uris[m].startswith('..'):
newud = copy.copy(ud)
newud.path = os.path.normpath(os.path.join(newud.path, uris[m]))
newud.path = os.path.realpath(os.path.join(newud.path, uris[m]))
uris[m] = Git._get_repo_url(self, newud)
for module in submodules:
@@ -115,21 +115,10 @@ class GitSM(Git):
# This has to be a file reference
proto = "file"
url = "gitsm://" + uris[module]
if url.endswith("{}{}".format(ud.host, ud.path)):
raise bb.fetch2.FetchError("Submodule refers to the parent repository. This will cause deadlock situation in current version of Bitbake." \
"Consider using git fetcher instead.")
url += ';protocol=%s' % proto
url += ";name=%s" % module
url += ";subpath=%s" % module
url += ";nobranch=1"
url += ";lfs=%s" % self._need_lfs(ud)
# Note that adding "user=" here to give credentials to the
# submodule is not supported. Since using SRC_URI to give git://
# URL a password is not supported, one has to use one of the
# recommended ways (e.g. ~/.netrc or SSH config) which do specify
# the user (See comment in git.py).
# So, we will not take patches adding "user=" support here.
ld = d.createCopy()
# Not necessary to set SRC_URI, since we're passing the URI to
@@ -151,6 +140,16 @@ class GitSM(Git):
if Git.need_update(self, ud, d):
return True
try:
# Check for the nugget dropped by the download operation
known_srcrevs = runfetchcmd("%s config --get-all bitbake.srcrev" % \
(ud.basecmd), d, workdir=ud.clonedir)
if ud.revisions[ud.names[0]] in known_srcrevs.split():
return False
except bb.fetch2.FetchError:
pass
need_update_list = []
def need_update_submodule(ud, url, module, modpath, workdir, d):
url += ";bareclone=1;nobranch=1"
@@ -173,9 +172,14 @@ class GitSM(Git):
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
if len(need_update_list) == 0:
# We already have the required commits of all submodules. Drop
# a nugget so we don't need to check again.
runfetchcmd("%s config --add bitbake.srcrev %s" % \
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
if need_update_list:
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
if len(need_update_list) > 0:
logger.debug(1, 'gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
return True
return False
@@ -205,6 +209,9 @@ class GitSM(Git):
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, download_submodule, d)
# Drop a nugget for the srcrev we've fetched (used by need_update)
runfetchcmd("%s config --add bitbake.srcrev %s" % \
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
def unpack(self, ud, destdir, d):
def unpack_submodules(ud, url, module, modpath, workdir, d):
@@ -218,10 +225,6 @@ class GitSM(Git):
try:
newfetch = Fetch([url], d, cache=False)
# modpath is needed by unpack tracer to calculate submodule
# checkout dir
new_ud = newfetch.ud[url]
new_ud.modpath = modpath
newfetch.unpack(root=os.path.dirname(os.path.join(repo_conf, 'modules', module)))
except Exception as e:
logger.error('gitsm: submodule unpack failed: %s %s' % (type(e).__name__, str(e)))
@@ -247,12 +250,10 @@ class GitSM(Git):
ret = self.process_submodules(ud, ud.destdir, unpack_submodules, d)
if not ud.bareclone and ret:
# All submodules should already be downloaded and configured in the tree. This simply
# sets up the configuration and checks out the files. The main project config should
# remain unmodified, and no download from the internet should occur. As such, lfs smudge
# should also be skipped as these files were already smudged in the fetch stage if lfs
# was enabled.
runfetchcmd("GIT_LFS_SKIP_SMUDGE=1 %s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
# All submodules should already be downloaded and configured in the tree. This simply sets
# up the configuration and checks out the files. The main project config should remain
# unmodified, and no download from the internet should occur.
runfetchcmd("%s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
def implicit_urldata(self, ud, d):
import shutil, subprocess, tempfile

View File: bitbake/lib/bb/fetch2/hg.py

@@ -150,7 +150,7 @@ class Hg(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
@@ -160,7 +160,7 @@ class Hg(FetchMethod):
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether need pull
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug("Running %s", updatecmd)
logger.debug(1, "Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d, workdir=ud.moddir)
except bb.fetch2.FetchError:
@@ -168,7 +168,7 @@ class Hg(FetchMethod):
pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
logger.debug("Running %s", pullcmd)
logger.debug(1, "Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d, workdir=ud.moddir)
try:
@@ -183,14 +183,14 @@ class Hg(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", fetchcmd)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d, workdir=ud.pkgdir)
# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if its on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug("Running %s", updatecmd)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d, workdir=ud.moddir)
def clean(self, ud, d):
@@ -242,15 +242,14 @@ class Hg(FetchMethod):
revflag = "-r %s" % ud.revision
subdir = ud.parm.get("destsuffix", ud.module)
codir = "%s/%s" % (destdir, subdir)
ud.unpack_tracer.unpack("hg", codir)
scmdata = ud.parm.get("scmdata", "")
if scmdata != "nokeep":
proto = ud.parm.get('protocol', 'http')
if not os.access(os.path.join(codir, '.hg'), os.R_OK):
logger.debug2("Unpack: creating new hg repository in '" + codir + "'")
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug2("Unpack: updating source in '" + codir + "'")
logger.debug(2, "Unpack: updating source in '" + codir + "'")
if ud.user and ud.pswd:
runfetchcmd("%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull %s" % (ud.basecmd, ud.user, ud.pswd, proto, ud.moddir), d, workdir=codir)
else:
@@ -260,5 +259,5 @@ class Hg(FetchMethod):
else:
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d, workdir=codir)
else:
logger.debug2("Unpack: extracting source to '" + codir + "'")
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d, workdir=ud.moddir)

View File: bitbake/lib/bb/fetch2/local.py

@@ -41,9 +41,9 @@ class Local(FetchMethod):
"""
Return the local filename of a given url assuming a successful fetch.
"""
return self.localfile_searchpaths(urldata, d)[-1]
return self.localpaths(urldata, d)[-1]
def localfile_searchpaths(self, urldata, d):
def localpaths(self, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""
@@ -51,14 +51,18 @@ class Local(FetchMethod):
path = urldata.decodedurl
newpath = path
if path[0] == "/":
logger.debug2("Using absolute %s" % (path))
return [path]
filespath = d.getVar('FILESPATH')
if filespath:
logger.debug2("Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
logger.debug(2, "Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
logger.debug2("Using %s for %s" % (newpath, path))
searched.extend(hist)
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
return searched
return searched
def need_update(self, ud, d):
@@ -74,7 +78,9 @@ class Local(FetchMethod):
filespath = d.getVar('FILESPATH')
if filespath:
locations = filespath.split(":")
msg = "Unable to find file " + urldata.url + " anywhere to download to " + urldata.localpath + ". The paths that were searched were:\n " + "\n ".join(locations)
locations.append(d.getVar("DL_DIR"))
msg = "Unable to find file " + urldata.url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
raise FetchError(msg)
return True
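
For reference, the search order the local fetcher implements above: absolute paths are used directly, each entry of the colon-separated FILESPATH is tried in turn, and DL_DIR is the final fallback (and, per the changed error message, is now listed among the searched locations). A standalone sketch of that lookup, with all paths invented for illustration:

    import os

    def find_local_file(path, filespath, dl_dir):
        """Return (match or None, every location that was searched)."""
        if path.startswith("/"):
            return (path if os.path.exists(path) else None), [path]
        searched = []
        for base in filespath.split(":"):
            candidate = os.path.join(base, path)
            searched.append(candidate)
            if os.path.exists(candidate):
                return candidate, searched
        fallback = os.path.join(dl_dir, path)  # DL_DIR fallback, as in the diff
        searched.append(fallback)
        return (fallback if os.path.exists(fallback) else None), searched

    print(find_local_file("defconfig", "/meta/recipes/files:/meta/recipes", "/downloads"))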


@@ -44,24 +44,17 @@ def npm_package(package):
"""Convert the npm package name to remove unsupported character"""
# Scoped package names (with the @) use the same naming convention
# as the 'npm pack' command.
name = re.sub("/", "-", package)
name = name.lower()
name = re.sub(r"[^\-a-z0-9]", "", name)
name = name.strip("-")
return name
if package.startswith("@"):
return re.sub("/", "-", package[1:])
return package
def npm_filename(package, version):
"""Get the filename of a npm package"""
return npm_package(package) + "-" + version + ".tgz"
def npm_localfile(package, version=None):
def npm_localfile(package, version):
"""Get the local filename of a npm package"""
if version is not None:
filename = npm_filename(package, version)
else:
filename = package
return os.path.join("npm2", filename)
return os.path.join("npm2", npm_filename(package, version))
def npm_integrity(integrity):
"""
@@ -76,52 +69,41 @@ def npm_unpack(tarball, destdir, d):
bb.utils.mkdirhier(destdir)
cmd = "tar --extract --gzip --file=%s" % shlex.quote(tarball)
cmd += " --no-same-owner"
cmd += " --delay-directory-restore"
cmd += " --strip-components=1"
runfetchcmd(cmd, d, workdir=destdir)
runfetchcmd("chmod -R +X '%s'" % (destdir), d, quiet=True, workdir=destdir)
class NpmEnvironment(object):
"""
Using an npm config file seems more reliable than using cli arguments.
This class allows creating a controlled environment for npm commands.
"""
def __init__(self, d, configs=[], npmrc=None):
def __init__(self, d, configs=None):
self.d = d
self.user_config = tempfile.NamedTemporaryFile(mode="w", buffering=1)
for key, value in configs:
self.user_config.write("%s=%s\n" % (key, value))
if npmrc:
self.global_config_name = npmrc
else:
self.global_config_name = "/dev/null"
def __del__(self):
if self.user_config:
self.user_config.close()
self.configs = configs
def run(self, cmd, args=None, configs=None, workdir=None):
"""Run npm command in a controlled environment"""
with tempfile.TemporaryDirectory() as tmpdir:
d = bb.data.createCopy(self.d)
d.setVar("PATH", d.getVar("PATH")) # PATH might contain $HOME - evaluate it before patching
d.setVar("HOME", tmpdir)
cfgfile = os.path.join(tmpdir, "npmrc")
if not workdir:
workdir = tmpdir
def _run(cmd):
cmd = "NPM_CONFIG_USERCONFIG=%s " % (self.user_config.name) + cmd
cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % (self.global_config_name) + cmd
cmd = "NPM_CONFIG_USERCONFIG=%s " % cfgfile + cmd
cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % cfgfile + cmd
return runfetchcmd(cmd, d, workdir=workdir)
if self.configs:
for key, value in self.configs:
_run("npm config set %s %s" % (key, shlex.quote(value)))
if configs:
bb.warn("Use of configs argument of NpmEnvironment.run() function"
" is deprecated. Please use args argument instead.")
for key, value in configs:
cmd += " --%s=%s" % (key, shlex.quote(value))
_run("npm config set %s %s" % (key, shlex.quote(value)))
if args:
for key, value in args:
@@ -160,12 +142,12 @@ class Npm(FetchMethod):
raise ParameterError("Invalid 'version' parameter", ud.url)
# Extract the 'registry' part of the url
ud.registry = re.sub(r"^npm://", "https://", ud.url.split(";")[0])
ud.registry = re.sub(r"^npm://", "http://", ud.url.split(";")[0])
# Using the 'downloadfilename' parameter as local filename
# or the npm package name.
if "downloadfilename" in ud.parm:
ud.localfile = npm_localfile(d.expand(ud.parm["downloadfilename"]))
ud.localfile = d.expand(ud.parm["downloadfilename"])
else:
ud.localfile = npm_localfile(ud.package, ud.version)
@@ -183,14 +165,14 @@ class Npm(FetchMethod):
def _resolve_proxy_url(self, ud, d):
def _npm_view():
args = []
args.append(("json", "true"))
args.append(("registry", ud.registry))
configs = []
configs.append(("json", "true"))
configs.append(("registry", ud.registry))
pkgver = shlex.quote(ud.package + "@" + ud.version)
cmd = ud.basecmd + " view %s" % pkgver
env = NpmEnvironment(d)
check_network_access(d, cmd, ud.registry)
view_string = env.run(cmd, args=args)
view_string = env.run(cmd, configs=configs)
if not view_string:
raise FetchError("Unavailable package %s" % pkgver, ud.url)
@@ -298,7 +280,6 @@ class Npm(FetchMethod):
destsuffix = ud.parm.get("destsuffix", "npm")
destdir = os.path.join(rootdir, destsuffix)
npm_unpack(ud.localpath, destdir, d)
ud.unpack_tracer.unpack("npm", destdir)
def clean(self, ud, d):
"""Clean any existing full or partial download"""


@@ -24,14 +24,11 @@ import bb
from bb.fetch2 import Fetch
from bb.fetch2 import FetchMethod
from bb.fetch2 import ParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import URI
from bb.fetch2.npm import npm_integrity
from bb.fetch2.npm import npm_localfile
from bb.fetch2.npm import npm_unpack
from bb.utils import is_semver
from bb.utils import lockfile
from bb.utils import unlockfile
def foreach_dependencies(shrinkwrap, callback=None, dev=False):
"""
@@ -41,9 +38,8 @@ def foreach_dependencies(shrinkwrap, callback=None, dev=False):
with:
name = the package name (string)
params = the package parameters (dictionary)
destdir = the destination of the package (string)
deptree = the package dependency tree (array of strings)
"""
# For handling old style dependency entries in shrinkwrap files
def _walk_deps(deps, deptree):
for name in deps:
subtree = [*deptree, name]
@@ -53,22 +49,9 @@ def foreach_dependencies(shrinkwrap, callback=None, dev=False):
continue
elif deps[name].get("bundled", False):
continue
destsubdirs = [os.path.join("node_modules", dep) for dep in subtree]
destsuffix = os.path.join(*destsubdirs)
callback(name, deps[name], destsuffix)
callback(name, deps[name], subtree)
# packages entry means new style shrinkwrap file, else use dependencies
packages = shrinkwrap.get("packages", None)
if packages is not None:
for package in packages:
if package != "":
name = package.split('node_modules/')[-1]
package_infos = packages.get(package, {})
if dev == False and package_infos.get("dev", False):
continue
callback(name, package_infos, package)
else:
_walk_deps(shrinkwrap.get("dependencies", {}), [])
_walk_deps(shrinkwrap.get("dependencies", {}), [])
class NpmShrinkWrap(FetchMethod):
"""Class to fetch all package from a shrinkwrap file"""
@@ -89,22 +72,19 @@ class NpmShrinkWrap(FetchMethod):
# Resolve the dependencies
ud.deps = []
def _resolve_dependency(name, params, destsuffix):
def _resolve_dependency(name, params, deptree):
url = None
localpath = None
extrapaths = []
unpack = True
destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
destsuffix = os.path.join(*destsubdirs)
integrity = params.get("integrity", None)
resolved = params.get("resolved", None)
version = params.get("version", None)
# Handle registry sources
if is_semver(version) and integrity:
# Handle duplicate dependencies without url
if not resolved:
return
if is_semver(version) and resolved and integrity:
localfile = npm_localfile(name, version)
uri = URI(resolved)
@@ -129,7 +109,7 @@ class NpmShrinkWrap(FetchMethod):
# Handle http tarball sources
elif version.startswith("http") and integrity:
localfile = npm_localfile(os.path.basename(version))
localfile = os.path.join("npm2", os.path.basename(version))
uri = URI(version)
uri.params["downloadfilename"] = localfile
@@ -141,28 +121,8 @@ class NpmShrinkWrap(FetchMethod):
localpath = os.path.join(d.getVar("DL_DIR"), localfile)
# Handle local tarball and link sources
elif version.startswith("file"):
localpath = version[5:]
if not version.endswith(".tgz"):
unpack = False
# Handle git sources
elif version.startswith(("git", "bitbucket","gist")) or (
not version.endswith((".tgz", ".tar", ".tar.gz"))
and not version.startswith((".", "@", "/"))
and "/" in version
):
if version.startswith("github:"):
version = "git+https://github.com/" + version[len("github:"):]
elif version.startswith("gist:"):
version = "git+https://gist.github.com/" + version[len("gist:"):]
elif version.startswith("bitbucket:"):
version = "git+https://bitbucket.org/" + version[len("bitbucket:"):]
elif version.startswith("gitlab:"):
version = "git+https://gitlab.com/" + version[len("gitlab:"):]
elif not version.startswith(("git+","git:")):
version = "git+https://github.com/" + version
elif version.startswith("git"):
regex = re.compile(r"""
^
git\+
@@ -188,17 +148,15 @@ class NpmShrinkWrap(FetchMethod):
url = str(uri)
# local tarball sources and local link sources are unsupported
else:
raise ParameterError("Unsupported dependency: %s" % name, ud.url)
# name is needed by unpack tracer for module mapping
ud.deps.append({
"name": name,
"url": url,
"localpath": localpath,
"extrapaths": extrapaths,
"destsuffix": destsuffix,
"unpack": unpack,
})
try:
@@ -219,23 +177,17 @@ class NpmShrinkWrap(FetchMethod):
# This fetcher resolves multiple URIs from a shrinkwrap file and then
# forwards it to a proxy fetcher. The management of the donestamp file,
# the lockfile and the checksums are forwarded to the proxy fetcher.
shrinkwrap_urls = [dep["url"] for dep in ud.deps if dep["url"]]
if shrinkwrap_urls:
ud.proxy = Fetch(shrinkwrap_urls, data)
ud.proxy = Fetch([dep["url"] for dep in ud.deps], data)
ud.needdonestamp = False
@staticmethod
def _foreach_proxy_method(ud, handle):
returns = []
# Check if there are dependencies before trying to fetch them
if len(ud.deps) > 0:
for proxy_url in ud.proxy.urls:
proxy_ud = ud.proxy.ud[proxy_url]
proxy_d = ud.proxy.d
proxy_ud.setup_localpath(proxy_d)
lf = lockfile(proxy_ud.lockfile)
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
unlockfile(lf)
for proxy_url in ud.proxy.urls:
proxy_ud = ud.proxy.ud[proxy_url]
proxy_d = ud.proxy.d
proxy_ud.setup_localpath(proxy_d)
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
return returns
def verify_donestamp(self, ud, d):
@@ -272,7 +224,6 @@ class NpmShrinkWrap(FetchMethod):
destsuffix = ud.parm.get("destsuffix")
if destsuffix:
destdir = os.path.join(rootdir, destsuffix)
ud.unpack_tracer.unpack("npm-shrinkwrap", destdir)
bb.utils.mkdirhier(destdir)
bb.utils.copyfile(ud.shrinkwrap_file,
@@ -286,16 +237,7 @@ class NpmShrinkWrap(FetchMethod):
for dep in manual:
depdestdir = os.path.join(destdir, dep["destsuffix"])
if dep["url"]:
npm_unpack(dep["localpath"], depdestdir, d)
else:
depsrcdir = os.path.join(destdir, dep["localpath"])
if dep["unpack"]:
npm_unpack(depsrcdir, depdestdir, d)
else:
bb.utils.mkdirhier(depdestdir)
cmd = 'cp -fpPRH "%s/." .' % (depsrcdir)
runfetchcmd(cmd, d, workdir=depdestdir)
npm_unpack(dep["localpath"], depdestdir, d)
def clean(self, ud, d):
"""Clean any existing full or partial download"""


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
"""
@@ -11,7 +9,6 @@ Based on the svn "Fetch" implementation.
import logging
import os
import re
import bb
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
@@ -39,7 +36,6 @@ class Osc(FetchMethod):
# Create paths to osc checkouts
oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc")
relpath = self._strip_leading_slashes(ud.path)
ud.oscdir = oscdir
ud.pkgdir = os.path.join(oscdir, ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
@@ -47,13 +43,13 @@ class Osc(FetchMethod):
ud.revision = ud.parm['rev']
else:
pv = d.getVar("PV", False)
rev = bb.fetch2.srcrev_internal_helper(ud, d, '')
rev = bb.fetch2.srcrev_internal_helper(ud, d)
if rev:
ud.revision = rev
else:
ud.revision = ""
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), relpath.replace('/', '.'), ud.revision))
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision))
def _buildosccommand(self, ud, d, command):
"""
@@ -63,61 +59,38 @@ class Osc(FetchMethod):
basecmd = d.getVar("FETCHCMD_osc") or "/usr/bin/env osc"
proto = ud.parm.get('protocol', 'https')
proto = ud.parm.get('protocol', 'ocs')
options = []
config = "-c %s" % self.generate_config(ud, d)
if getattr(ud, 'revision', ''):
if ud.revision:
options.append("-r %s" % ud.revision)
coroot = self._strip_leading_slashes(ud.path)
if command == "fetch":
osccmd = "%s %s -A %s://%s co %s/%s %s" % (basecmd, config, proto, ud.host, coroot, ud.module, " ".join(options))
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
elif command == "update":
osccmd = "%s %s -A %s://%s up %s" % (basecmd, config, proto, ud.host, " ".join(options))
elif command == "api_source":
osccmd = "%s %s -A %s://%s api source/%s/%s" % (basecmd, config, proto, ud.host, coroot, ud.module)
osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
else:
raise FetchError("Invalid osc command %s" % command, ud.url)
return osccmd
def _latest_revision(self, ud, d, name):
"""
Fetch latest revision for the given package
"""
api_source_cmd = self._buildosccommand(ud, d, "api_source")
output = runfetchcmd(api_source_cmd, d)
match = re.match(r'<directory ?.* rev="(\d+)".*>', output)
if match is None:
raise FetchError("Unable to parse osc response", ud.url)
return match.groups()[0]
def _revision_key(self, ud, d, name):
"""
Return a unique key for the url
"""
# Collapse adjacent slashes
slash_re = re.compile(r"/+")
rev = getattr(ud, 'revision', "latest")
return "osc:%s%s.%s.%s" % (ud.host, slash_re.sub(".", ud.path), name, rev)
def download(self, ud, d):
"""
Fetch url
"""
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(ud.moddir, os.R_OK):
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
logger.debug("Running %s", oscupdatecmd)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d, workdir=ud.moddir)
else:
@@ -125,7 +98,7 @@ class Osc(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", oscfetchcmd)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d, workdir=ud.pkgdir)
@@ -141,23 +114,20 @@ class Osc(FetchMethod):
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(ud.oscdir, "oscrc")
if not os.path.exists(ud.oscdir):
bb.utils.mkdirhier(ud.oscdir)
config_path = os.path.join(d.getVar('OSCDIR'), "oscrc")
if (os.path.exists(config_path)):
os.remove(config_path)
f = open(config_path, 'w')
proto = ud.parm.get('protocol', 'https')
f.write("[general]\n")
f.write("apiurl = %s://%s\n" % (proto, ud.host))
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % d.getVar('WORKDIR'))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST"))
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s://%s]\n" % (proto, ud.host))
f.write("[%s]\n" % ud.host)
f.write("user = %s\n" % ud.parm["user"])
f.write("pass = %s\n" % ud.parm["pswd"])
f.close()
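
_latest_revision above extracts the rev attribute from the <directory> element the OBS "api source" call returns. The same regex against an invented sample response:

    import re

    sample = '<directory name="curl" rev="42" vrev="7" srcmd5="0123abcd"/>'
    match = re.match(r'<directory ?.* rev="(\d+)".*>', sample)
    if match is None:
        raise RuntimeError("Unable to parse osc response")
    print(match.group(1))  # -> 42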


@@ -90,16 +90,16 @@ class Perforce(FetchMethod):
p4port = d.getVar('P4PORT')
if p4port:
logger.debug('Using recipe provided P4PORT: %s' % p4port)
logger.debug(1, 'Using recipe provided P4PORT: %s' % p4port)
ud.host = p4port
else:
logger.debug('Trying to use P4CONFIG to automatically set P4PORT...')
logger.debug(1, 'Trying to use P4CONFIG to automatically set P4PORT...')
ud.usingp4config = True
p4cmd = '%s info | grep "Server address"' % ud.basecmd
bb.fetch2.check_network_access(d, p4cmd, ud.url)
ud.host = runfetchcmd(p4cmd, d, True)
ud.host = ud.host.split(': ')[1].strip()
logger.debug('Determined P4PORT to be: %s' % ud.host)
logger.debug(1, 'Determined P4PORT to be: %s' % ud.host)
if not ud.host:
raise FetchError('Could not determine P4PORT from P4CONFIG')
@@ -119,7 +119,6 @@ class Perforce(FetchMethod):
cleanedpath = ud.path.replace('/...', '').replace('/', '.')
cleanedhost = ud.host.replace(':', '.')
cleanedmodule = ""
# Merge the path and module into the final depot location
if ud.module:
if ud.module.find('/') == 0:
@@ -134,7 +133,7 @@ class Perforce(FetchMethod):
ud.setup_revisions(d)
ud.localfile = d.expand('%s_%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, cleanedmodule, ud.revision))
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, ud.revision))
def _buildp4command(self, ud, d, command, depot_filename=None):
"""
@@ -208,7 +207,7 @@ class Perforce(FetchMethod):
for filename in p4fileslist:
item = filename.split(' - ')
lastaction = item[1].split()
logger.debug('File: %s Last Action: %s' % (item[0], lastaction[0]))
logger.debug(1, 'File: %s Last Action: %s' % (item[0], lastaction[0]))
if lastaction[0] == 'delete':
continue
filelist.append(item[0])
@@ -255,7 +254,7 @@ class Perforce(FetchMethod):
raise FetchError('Could not determine the latest perforce changelist')
tipcset = tip.split(' ')[1]
logger.debug('p4 tip found to be changelist %s' % tipcset)
logger.debug(1, 'p4 tip found to be changelist %s' % tipcset)
return tipcset
def sortable_revision(self, ud, d, name):


@@ -47,7 +47,7 @@ class Repo(FetchMethod):
"""Fetch url"""
if os.access(os.path.join(d.getVar("DL_DIR"), ud.localfile), os.R_OK):
logger.debug("%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
repodir = d.getVar("REPODIR") or (d.getVar("DL_DIR") + "/repo")


@@ -18,47 +18,10 @@ The aws tool must be correctly installed and configured prior to use.
import os
import bb
import urllib.request, urllib.parse, urllib.error
import re
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
def convertToBytes(value, unit):
value = float(value)
if (unit == "KiB"):
value = value*1024.0;
elif (unit == "MiB"):
value = value*1024.0*1024.0;
elif (unit == "GiB"):
value = value*1024.0*1024.0*1024.0;
return value
class S3ProgressHandler(bb.progress.LineFilterProgressHandler):
"""
Extract progress information from s3 cp output, e.g.:
Completed 5.1 KiB/8.8 GiB (12.0 MiB/s) with 1 file(s) remaining
"""
def __init__(self, d):
super(S3ProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def writeline(self, line):
percs = re.findall(r'^Completed (\d+.{0,1}\d*) (\w+)\/(\d+.{0,1}\d*) (\w+) (\(.+\)) with\s+', line)
if percs:
completed = (percs[-1][0])
completedUnit = (percs[-1][1])
total = (percs[-1][2])
totalUnit = (percs[-1][3])
completed = convertToBytes(completed, completedUnit)
total = convertToBytes(total, totalUnit)
progress = (completed/total)*100.0
rate = percs[-1][4]
self.update(progress, rate)
return False
return True
class S3(FetchMethod):
"""Class to fetch urls via 'aws s3'"""
@@ -89,9 +52,7 @@ class S3(FetchMethod):
cmd = '%s cp s3://%s%s %s' % (ud.basecmd, ud.host, ud.path, ud.localpath)
bb.fetch2.check_network_access(d, cmd, ud.url)
progresshandler = S3ProgressHandler(d)
runfetchcmd(cmd, d, False, log=progresshandler)
runfetchcmd(cmd, d)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the aws cli
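
The S3ProgressHandler added above turns `aws s3 cp` progress lines into percentages. A self-contained sketch of the same parse, reusing the sample line quoted in the docstring and the convertToBytes unit table:

    import re

    UNITS = {"KiB": 1024.0, "MiB": 1024.0 ** 2, "GiB": 1024.0 ** 3}

    def parse_s3_progress(line):
        m = re.match(r"^Completed ([\d.]+) (\w+)/([\d.]+) (\w+) \((.+)\) with\s+", line)
        if not m:
            return None
        done = float(m.group(1)) * UNITS.get(m.group(2), 1.0)
        total = float(m.group(3)) * UNITS.get(m.group(4), 1.0)
        return done / total * 100.0, m.group(5)

    print(parse_s3_progress("Completed 5.1 KiB/8.8 GiB (12.0 MiB/s) with 1 file(s) remaining"))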


@@ -103,7 +103,7 @@ class SFTP(FetchMethod):
if path[:3] == '/~/':
path = path[3:]
remote = '"%s%s:%s"' % (user, urlo.hostname, path)
remote = '%s%s:%s' % (user, urlo.hostname, path)
cmd = '%s %s %s %s' % (basecmd, port, remote, lpath)


@@ -32,7 +32,6 @@ IETF secsh internet draft:
import re, os
from bb.fetch2 import check_network_access, FetchMethod, ParameterError, runfetchcmd
import urllib
__pattern__ = re.compile(r'''
@@ -41,9 +40,9 @@ __pattern__ = re.compile(r'''
( # Optional username/password block
(?P<user>\S+) # username
(:(?P<pass>\S+))? # colon followed by the password (optional)
)?
(?P<cparam>(;[^;]+)*)? # connection parameters block (optional)
@
)?
(?P<host>\S+?) # non-greedy match of the host
(:(?P<port>[0-9]+))? # colon followed by the port (optional)
/
@@ -71,7 +70,6 @@ class SSH(FetchMethod):
"git:// prefix with protocol=ssh", urldata.url)
m = __pattern__.match(urldata.url)
path = m.group('path')
path = urllib.parse.unquote(path)
host = m.group('host')
urldata.localpath = os.path.join(d.getVar('DL_DIR'),
os.path.basename(os.path.normpath(path)))
@@ -98,11 +96,6 @@ class SSH(FetchMethod):
fr += '@%s' % host
else:
fr = host
if path[0] != '~':
path = '/%s' % path
path = urllib.parse.unquote(path)
fr += ':%s' % path
cmd = 'scp -B -r %s %s %s/' % (
@@ -115,41 +108,3 @@ class SSH(FetchMethod):
runfetchcmd(cmd, d)
def checkstatus(self, fetch, urldata, d):
"""
Check the status of the url
"""
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
port = m.group('port')
user = m.group('user')
password = m.group('pass')
if port:
portarg = '-P %s' % port
else:
portarg = ''
if user:
fr = user
if password:
fr += ':%s' % password
fr += '@%s' % host
else:
fr = host
if path[0] != '~':
path = '/%s' % path
path = urllib.parse.unquote(path)
cmd = 'ssh -o BatchMode=true %s %s [ -f %s ]' % (
portarg,
fr,
path
)
check_network_access(d, cmd, urldata.url)
runfetchcmd(cmd, d)
return True
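
For context, a reduced, illustrative version of the __pattern__ regex whose fragments appear above: it pulls the optional user/password block, host, optional port and path out of an ssh:// URI (the sample URL is invented, and the real pattern handles more cases):

    import re

    pattern = re.compile(
        r"ssh://((?P<user>[^:@;]+)(:(?P<pass>[^@;]+))?@)?"
        r"(?P<host>[^:/;]+)(:(?P<port>[0-9]+))?/(?P<path>.*)")

    m = pattern.match("ssh://anon:secret@example.com:2222/srv/tarballs/foo.tgz")
    print(m.group("user"), m.group("host"), m.group("port"), "/" + m.group("path"))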


@@ -57,12 +57,7 @@ class Svn(FetchMethod):
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
# Whether to use the @REV peg-revision syntax in the svn command or not
ud.pegrevision = True
if 'nopegrevision' in ud.parm:
ud.pegrevision = False
ud.localfile = d.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ["0", "1"][ud.pegrevision]))
ud.localfile = d.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision))
def _buildsvncommand(self, ud, d, command):
"""
@@ -91,7 +86,7 @@ class Svn(FetchMethod):
if command == "info":
svncmd = "%s info %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
elif command == "log1":
svncmd = "%s log --limit 1 --quiet %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
svncmd = "%s log --limit 1 %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
@@ -103,8 +98,7 @@ class Svn(FetchMethod):
if ud.revision:
options.append("-r %s" % ud.revision)
if ud.pegrevision:
suffix = "@%s" % (ud.revision)
suffix = "@%s" % (ud.revision)
if command == "fetch":
transportuser = ud.parm.get("transportuser", "")
@@ -122,7 +116,7 @@ class Svn(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
lf = bb.utils.lockfile(ud.svnlock)
@@ -135,7 +129,7 @@ class Svn(FetchMethod):
runfetchcmd(ud.basecmd + " upgrade", d, workdir=ud.moddir)
except FetchError:
pass
logger.debug("Running %s", svncmd)
logger.debug(1, "Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.moddir)
else:
@@ -143,7 +137,7 @@ class Svn(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", svncmd)
logger.debug(1, "Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.pkgdir)
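
The svn changes above add a pegrevision flag: the @REV peg-revision suffix is appended by default and suppressed by a nopegrevision URL parameter, which also feeds into the localfile name. A small sketch of that option handling (function name invented):

    def svn_checkout_args(module, revision, parm):
        pegrevision = "nopegrevision" not in parm
        options = ["-r %s" % revision] if revision else []
        suffix = "@%s" % revision if (revision and pegrevision) else ""
        return "co %s %s%s" % (" ".join(options), module, suffix)

    print(svn_checkout_args("trunk", "1234", {}))                      # co -r 1234 trunk@1234
    print(svn_checkout_args("trunk", "1234", {"nopegrevision": "1"}))  # co -r 1234 trunk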


@@ -26,6 +26,7 @@ from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
from bb.utils import export_proxies
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
@@ -52,23 +53,11 @@ class WgetProgressHandler(bb.progress.LineFilterProgressHandler):
class Wget(FetchMethod):
"""Class to fetch urls via 'wget'"""
# CDNs like CloudFlare may do a 'browser integrity test' which can fail
# with the standard wget/urllib User-Agent, so pretend to be a modern
# browser.
user_agent = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
def check_certs(self, d):
"""
Should certificates be checked?
"""
return (d.getVar("BB_CHECK_SSL_CERTS") or "1") != "0"
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with wget.
"""
return ud.type in ['http', 'https', 'ftp', 'ftps']
return ud.type in ['http', 'https', 'ftp']
def recommends_checksum(self, urldata):
return True
@@ -87,19 +76,13 @@ class Wget(FetchMethod):
if not ud.localfile:
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30"
if ud.type == 'ftp' or ud.type == 'ftps':
self.basecmd += " --passive-ftp"
if not self.check_certs(d):
self.basecmd += " --no-check-certificate"
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"
def _runwget(self, ud, d, command, quiet, workdir=None):
progresshandler = WgetProgressHandler(d)
logger.debug2("Fetching %s using command '%s'" % (ud.url, command))
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler, workdir=workdir)
@@ -108,51 +91,32 @@ class Wget(FetchMethod):
fetchcmd = self.basecmd
dldir = os.path.realpath(d.getVar("DL_DIR"))
localpath = os.path.join(dldir, ud.localfile) + ".tmp"
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if 'downloadfilename' in ud.parm:
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile)
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if ud.user and ud.pswd:
fetchcmd += " --auth-no-challenge"
if ud.parm.get("redirectauth", "1") == "1":
# An undocumented feature of wget is that if the
# username/password are specified on the URI, wget will only
# send the Authorization header to the first host and not to
# any hosts that it is redirected to. With the increasing
# usage of temporary AWS URLs, this difference now matters as
# AWS will reject any request that has authentication both in
# the query parameters (from the redirect) and in the
# Authorization header.
fetchcmd += " --user=%s --password=%s" % (ud.user, ud.pswd)
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it.. trying again..
fetchcmd += " -c -P " + dldir + " '" + uri + "'"
fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % uri)
else:
fetchcmd += " -P " + dldir + " '" + uri + "'"
fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % uri)
self._runwget(ud, d, fetchcmd, False)
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(localpath):
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, localpath), uri)
if not os.path.exists(ud.localpath):
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
if os.path.getsize(localpath) == 0:
os.remove(localpath)
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (uri), uri)
# Try and verify any checksum now, meaning if it isn't correct, we don't remove the
# original file, which might be a race (imagine two recipes referencing the same
# source, one with an incorrect checksum)
bb.fetch2.verify_checksum(ud, d, localpath=localpath, fatal_nochecksum=False)
# Remove the ".tmp" and move the file into position atomically
# Our lock prevents multiple writers but mirroring code may grab incomplete files
os.rename(localpath, localpath[:-4])
return True
def checkstatus(self, fetch, ud, d, try_again=True):
@@ -239,7 +203,7 @@ class Wget(FetchMethod):
# We let the request fail and expect it to be
# tried once more ("try_again" in check_status()),
# with the dead connection removed from the cache.
# If it still fails, we give up, which can happen for bad
# If it still fails, we give up, which can happend for bad
# HTTP proxy settings.
fetch.connection_cache.remove_connection(h.host, h.port)
raise urllib.error.URLError(err)
@@ -312,76 +276,56 @@ class Wget(FetchMethod):
newreq = urllib.request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
newreq.get_method = req.get_method
return newreq
exported_proxies = export_proxies(d)
# We need to update the environment here as both the proxy and HTTPS
# handlers need variables set. The proxy needs http_proxy and friends to
# be set, and HTTPSHandler ends up calling into openssl to load the
# certificates. In buildtools configurations this will be looking at the
# wrong place for certificates by default: we set SSL_CERT_FILE to the
# right location in the buildtools environment script but as BitBake
# prunes the environment this is lost. When binaries are executed
# runfetchcmd ensures these values are in the environment, but this is
# pure Python so we need to update the environment.
#
# Avoid trampling the environment too much by using bb.utils.environment
# to scope the changes to the build_opener request, which is when the
# environment lookups happen.
newenv = bb.fetch2.get_fetcher_environment(d)
handlers = [FixedHTTPRedirectHandler, HTTPMethodFallback]
if exported_proxies:
handlers.append(urllib.request.ProxyHandler())
handlers.append(CacheHTTPHandler())
# Since Python 2.7.9 ssl cert validation is enabled by default
# see PEP-0476, this causes verification errors on some https servers
# so disable by default.
import ssl
if hasattr(ssl, '_create_unverified_context'):
handlers.append(urllib.request.HTTPSHandler(context=ssl._create_unverified_context()))
opener = urllib.request.build_opener(*handlers)
with bb.utils.environment(**newenv):
import ssl
try:
uri = ud.url.split(";")[0]
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
# Some servers (FusionForge, as used on Alioth) require that the
# optional Accept header is set.
r.add_header("Accept", "*/*")
r.add_header("User-Agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12")
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
authheader = "Basic %s" % encodeuser
r.add_header("Authorization", authheader)
if self.check_certs(d):
context = ssl.create_default_context()
else:
context = ssl._create_unverified_context()
handlers = [FixedHTTPRedirectHandler,
HTTPMethodFallback,
urllib.request.ProxyHandler(),
CacheHTTPHandler(),
urllib.request.HTTPSHandler(context=context)]
opener = urllib.request.build_opener(*handlers)
if ud.user and ud.pswd:
add_basic_auth(ud.user + ':' + ud.pswd, r)
try:
uri_base = ud.url.split(";")[0]
uri = "{}://{}{}".format(urllib.parse.urlparse(uri_base).scheme, ud.host, ud.path)
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
# Some servers (FusionForge, as used on Alioth) require that the
# optional Accept header is set.
r.add_header("Accept", "*/*")
r.add_header("User-Agent", self.user_agent)
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
authheader = "Basic %s" % encodeuser
r.add_header("Authorization", authheader)
if ud.user and ud.pswd:
add_basic_auth(ud.user + ':' + ud.pswd, r)
try:
import netrc
auth_data = netrc.netrc().authenticators(urllib.parse.urlparse(uri).hostname)
if auth_data:
login, _, password = auth_data
add_basic_auth("%s:%s" % (login, password), r)
except (FileNotFoundError, netrc.NetrcParseError):
pass
with opener.open(r, timeout=30) as response:
pass
except (urllib.error.URLError, ConnectionResetError, TimeoutError) as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed for %s: %s" % (uri,e))
return False
import netrc
n = netrc.netrc()
login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
add_basic_auth("%s:%s" % (login, password), r)
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
pass
with opener.open(r) as response:
pass
except urllib.error.URLError as e:
if try_again:
logger.debug(2, "checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug(2, "checkstatus() urlopen failed: %s" % e)
return False
return True
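
checkstatus() above falls back to ~/.netrc credentials for its HEAD request. A hedged sketch of that lookup with plain urllib (exception handling follows the newer side of the diff; the sample URL is invented):

    import base64
    import netrc
    import urllib.parse
    import urllib.request

    def add_netrc_auth(request, uri):
        """Attach a Basic Authorization header if ~/.netrc knows the host."""
        try:
            auth = netrc.netrc().authenticators(urllib.parse.urlparse(uri).hostname)
        except (FileNotFoundError, netrc.NetrcParseError):
            return
        if auth:
            login, _, password = auth
            token = base64.b64encode(("%s:%s" % (login, password)).encode("utf-8")).decode("utf-8")
            request.add_header("Authorization", "Basic %s" % token)

    req = urllib.request.Request("https://example.com/src.tar.gz", method="HEAD")
    add_netrc_auth(req, req.full_url)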
def _parse_path(self, regex, s):
@@ -457,8 +401,9 @@ class Wget(FetchMethod):
"""
f = tempfile.NamedTemporaryFile()
with tempfile.TemporaryDirectory(prefix="wget-index-") as workdir, tempfile.NamedTemporaryFile(dir=workdir, prefix="wget-listing-") as f:
agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
fetchcmd = self.basecmd
fetchcmd += " -O " + f.name + " --user-agent='" + self.user_agent + "' '" + uri + "'"
fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
try:
self._runwget(ud, d, fetchcmd, True, workdir=workdir)
fetchresult = f.read()
@@ -514,7 +459,7 @@ class Wget(FetchMethod):
version_dir = ['', '', '']
version = ['', '', '']
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])*(\d+))")
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
s = dirver_regex.search(dirver)
if s:
version_dir[1] = s.group('ver')
@@ -590,7 +535,7 @@ class Wget(FetchMethod):
# src.rpm extension was added only for the rpm package. Can be removed if the rpm
# package will always be considered as having to be manually upgraded
psuffix_regex = r"(tar\.\w+|tgz|zip|xz|rpm|bz2|orig\.tar\.\w+|src\.tar\.\w+|src\.tgz|svnr\d+\.tar\.\w+|stable\.tar\.\w+|src\.rpm)"
psuffix_regex = r"(tar\.gz|tgz|tar\.bz2|zip|xz|tar\.lz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"
# match name, version and archive type of a package
package_regex_comp = re.compile(r"(?P<name>%s?\.?v?)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s$)"
@@ -641,10 +586,10 @@ class Wget(FetchMethod):
# search for version matches on folders inside the path, like:
# "5.7" in http://download.gnome.org/sources/${PN}/5.7/${PN}-${PV}.tar.gz
dirver_regex = re.compile(r"(?P<dirver>[^/]*(\d+\.)*\d+([-_]r\d+)*)/")
m = dirver_regex.findall(path)
m = dirver_regex.search(path)
if m:
pn = d.getVar('PN')
dirver = m[-1][0]
dirver = m.group('dirver')
dirver_pn_regex = re.compile(r"%s\d?" % (re.escape(pn)))
if not dirver_pn_regex.search(dirver):


@@ -12,12 +12,11 @@
import os
import sys
import logging
import argparse
import optparse
import warnings
import fcntl
import time
import traceback
import datetime
import bb
from bb import event
@@ -44,18 +43,18 @@ def present_options(optionlist):
else:
return optionlist[0]
class BitbakeHelpFormatter(argparse.HelpFormatter):
def _get_help_string(self, action):
class BitbakeHelpFormatter(optparse.IndentedHelpFormatter):
def format_option(self, option):
# We need to do this here rather than in the text we supply to
# add_option() because we don't want to call list_extension_modules()
# on every execution (since it imports all of the modules)
# Note also that we modify option.help rather than the returned text
# - this is so that we don't have to re-format the text ourselves
if action.dest == 'ui':
if option.dest == 'ui':
valid_uis = list_extension_modules(bb.ui, 'main')
return action.help.replace('@CHOICES@', present_options(valid_uis))
option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
return action.help
return optparse.IndentedHelpFormatter.format_option(self, option)
def list_extension_modules(pkg, checkattr):
"""
@@ -113,209 +112,186 @@ def _showwarning(message, category, filename, lineno, file=None, line=None):
warnlog.warning(s)
warnings.showwarning = _showwarning
def create_bitbake_parser():
parser = argparse.ArgumentParser(
description="""\
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.
""",
formatter_class=BitbakeHelpFormatter,
allow_abbrev=False,
add_help=False, # help is manually added below in a specific argument group
)
general_group = parser.add_argument_group('General options')
task_group = parser.add_argument_group('Task control options')
exec_group = parser.add_argument_group('Execution control options')
logging_group = parser.add_argument_group('Logging/output control options')
server_group = parser.add_argument_group('Server options')
config_group = parser.add_argument_group('Configuration options')
general_group.add_argument("targets", nargs="*", metavar="recipename/target",
help="Execute the specified task (default is 'build') for these target "
"recipes (.bb files).")
general_group.add_argument("-s", "--show-versions", action="store_true",
help="Show current and preferred versions of all recipes.")
general_group.add_argument("-e", "--environment", action="store_true",
dest="show_environment",
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
general_group.add_argument("-g", "--graphviz", action="store_true", dest="dot_graph",
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
general_group.add_argument("-u", "--ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %(default)s).")
general_group.add_argument("--version", action="store_true",
help="Show programs version and exit.")
general_group.add_argument('-h', '--help', action='help',
help='Show this help message and exit.')
task_group.add_argument("-f", "--force", action="store_true",
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
task_group.add_argument("-c", "--cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
task_group.add_argument("-C", "--clear-stamp", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
task_group.add_argument("--runall", action="append", default=[],
help="Run the specified task for any recipe in the taskgraph of the "
"specified target (even if it wouldn't otherwise have run).")
task_group.add_argument("--runonly", action="append",
help="Run only the specified task within the taskgraph of the "
"specified targets (and any task dependencies those tasks may have).")
task_group.add_argument("--no-setscene", action="store_true",
dest="nosetscene",
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
task_group.add_argument("--skip-setscene", action="store_true",
dest="skipsetscene",
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene.")
task_group.add_argument("--setscene-only", action="store_true",
dest="setsceneonly",
help="Only run setscene tasks, don't run any real tasks.")
exec_group.add_argument("-n", "--dry-run", action="store_true",
help="Don't execute, just go through the motions.")
exec_group.add_argument("-p", "--parse-only", action="store_true",
help="Quit after parsing the BB recipes.")
exec_group.add_argument("-k", "--continue", action="store_false", dest="halt",
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
exec_group.add_argument("-P", "--profile", action="store_true",
help="Profile the command and save reports.")
exec_group.add_argument("-S", "--dump-signatures", action="append",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means recursively compare the dumped signature with the most recent"
" one in a local build or sstate cache (can be used to find out why tasks re-run"
" when that is not expected)")
exec_group.add_argument("--revisions-changed", action="store_true",
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
exec_group.add_argument("-b", "--buildfile",
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
logging_group.add_argument("-D", "--debug", action="count", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
logging_group.add_argument("-l", "--log-domains", action="append", dest="debug_domains",
default=[],
help="Show debug logging for the specified logging domains.")
logging_group.add_argument("-v", "--verbose", action="store_true",
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
logging_group.add_argument("-q", "--quiet", action="count", default=0,
help="Output less log message data to the terminal. You can specify this "
"more than once.")
logging_group.add_argument("-w", "--write-log", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
server_group.add_argument("-B", "--bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
server_group.add_argument("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
server_group.add_argument("--remote-server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
server_group.add_argument("-m", "--kill-server", action="store_true",
help="Terminate any running bitbake server.")
server_group.add_argument("--token", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
server_group.add_argument("--observe-only", action="store_true",
help="Connect to a server as an observing-only client.")
server_group.add_argument("--status-only", action="store_true",
help="Check the status of the remote bitbake server.")
server_group.add_argument("--server-only", action="store_true",
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
config_group.add_argument("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
config_group.add_argument("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
config_group.add_argument("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing.")
return parser
warnings.filterwarnings("ignore")
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = create_bitbake_parser()
options = parser.parse_intermixed_args(argv[1:])
if options.version:
print("BitBake Build Tool Core version %s" % bb.__version__)
sys.exit(0)
def parseCommandLine(self, argv=sys.argv):
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--skip-setscene", action="store_true",
dest="skipsetscene", default=False,
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate any running bitbake server.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
parser.add_option("", "--runall", action="append", dest="runall",
help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--runonly", action="append", dest="runonly",
help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
options, targets = parser.parse_args(argv)
if options.quiet and options.verbose:
parser.error("options --quiet and --verbose are mutually exclusive")
@@ -347,7 +323,7 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
else:
options.xmlrpcinterface = (None, 0)
return options, options.targets
return options, targets[1:]
def bitbake_main(configParams, configuration):
@@ -412,9 +388,6 @@ def bitbake_main(configParams, configuration):
return 1
def timestamp():
return datetime.datetime.now().strftime('%H:%M:%S.%f')
def setup_bitbake(configParams, extrafeatures=None):
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
@@ -422,11 +395,6 @@ def setup_bitbake(configParams, extrafeatures=None):
# In status only mode there are no logs and no UI
logger.addHandler(handler)
if configParams.dump_signatures:
if extrafeatures is None:
extrafeatures = []
extrafeatures.append(bb.cooker.CookerFeatures.RECIPE_SIGGEN_INFO)
if configParams.server_only:
featureset = []
ui_module = None
@@ -454,7 +422,7 @@ def setup_bitbake(configParams, extrafeatures=None):
retries = 8
while retries:
try:
topdir, lock, lockfile = lockBitbake()
topdir, lock = lockBitbake()
sockname = topdir + "/bitbake.sock"
if lock:
if configParams.status_only or configParams.kill_server:
@@ -465,22 +433,18 @@ def setup_bitbake(configParams, extrafeatures=None):
logger.info("Starting bitbake server...")
# Clear the event queue since we already displayed messages
bb.event.ui_queue = []
server = bb.server.process.BitBakeServer(lock, sockname, featureset, configParams.server_timeout, configParams.xmlrpcinterface, configParams.profile)
server = bb.server.process.BitBakeServer(lock, sockname, featureset, configParams.server_timeout, configParams.xmlrpcinterface)
else:
logger.info("Reconnecting to bitbake server...")
if not os.path.exists(sockname):
logger.info("Previous bitbake instance shutting down?, waiting to retry... (%s)" % timestamp())
procs = bb.server.process.get_lockfile_process_msg(lockfile)
if procs:
logger.info("Processes holding bitbake.lock (missing socket %s):\n%s" % (sockname, procs))
logger.info("Directory listing: %s" % (str(os.listdir(topdir))))
logger.info("Previous bitbake instance shutting down?, waiting to retry...")
i = 0
lock = None
# Wait for 5s or until we can get the lock
while not lock and i < 50:
time.sleep(0.1)
_, lock, _ = lockBitbake()
_, lock = lockBitbake()
i += 1
if lock:
bb.utils.unlockfile(lock)
@@ -499,10 +463,10 @@ def setup_bitbake(configParams, extrafeatures=None):
retries -= 1
tryno = 8 - retries
if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError, EOFError, SystemExit)):
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, timestamp()))
logger.info("Retrying server connection (#%d)..." % tryno)
else:
logger.info("Retrying server connection (#%d)... (%s, %s)" % (tryno, traceback.format_exc(), timestamp()))
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, traceback.format_exc()))
if not retries:
bb.fatal("Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log).")
bb.event.print_ui_queue()
@@ -530,5 +494,5 @@ def lockBitbake():
bb.error("Unable to find conf/bblayers.conf or conf/bitbake.conf. BBPATH is unset and/or not in a build directory?")
raise BBMainFatal
lockfile = topdir + "/bitbake.lock"
return topdir, bb.utils.lockfile(lockfile, False, False), lockfile
return topdir, bb.utils.lockfile(lockfile, False, False)
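
The bulk of this file is the optparse-to-argparse migration. A minimal sketch of the new parser shape: grouped options, counted flags and intermixed positional targets (only a couple of the real options are reproduced):

    import argparse

    parser = argparse.ArgumentParser(allow_abbrev=False)
    logging_group = parser.add_argument_group("Logging/output control options")
    logging_group.add_argument("-D", "--debug", action="count", default=0)
    logging_group.add_argument("-q", "--quiet", action="count", default=0)
    parser.add_argument("targets", nargs="*", metavar="recipename/target")

    # parse_intermixed_args lets positionals and options interleave, replacing
    # the old optparse parse_args(argv) call plus the targets[1:] slice.
    options = parser.parse_intermixed_args(["-DD", "core-image-minimal"])
    print(options.debug, options.targets)  # -> 2 ['core-image-minimal']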


@@ -59,7 +59,7 @@ def getMountedDev(path):
pass
return None
def getDiskData(BBDirs):
def getDiskData(BBDirs, configuration):
"""Prepare disk data for disk space monitor"""
@@ -76,12 +76,7 @@ def getDiskData(BBDirs):
return None
action = pathSpaceInodeRe.group(1)
if action == "ABORT":
# Emit a deprecation warning
logger.warnonce("The BB_DISKMON_DIRS \"ABORT\" action has been renamed to \"HALT\", update configuration")
action = "HALT"
if action not in ("HALT", "STOPTASKS", "WARN"):
if action not in ("ABORT", "STOPTASKS", "WARN"):
printErr("Unknown disk space monitor action: %s" % action)
return None
@@ -173,7 +168,7 @@ class diskMonitor:
BBDirs = configuration.getVar("BB_DISKMON_DIRS") or None
if BBDirs:
self.devDict = getDiskData(BBDirs)
self.devDict = getDiskData(BBDirs, configuration)
if self.devDict:
self.spaceInterval, self.inodeInterval = getInterval(configuration)
if self.spaceInterval and self.inodeInterval:
@@ -182,7 +177,7 @@ class diskMonitor:
# use them to avoid printing too many warning messages
self.preFreeS = {}
self.preFreeI = {}
# This is for STOPTASKS and HALT, to avoid printing the message
# This is for STOPTASKS and ABORT, to avoid printing the message
# repeatedly while waiting for the tasks to finish
self.checked = {}
for k in self.devDict:
@@ -224,8 +219,8 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
elif action == "HALT" and not self.checked[k]:
logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
elif action == "ABORT" and not self.checked[k]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
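
These hunks in bb/monitordisk.py are the BB_DISKMON_DIRS rename: the action keyword "ABORT" became "HALT", with the newer code still accepting the old spelling but warning once and mapping it forward. The shape of that compatibility shim, as a hypothetical standalone helper:

    def normalize_action(action, warn=print):
        """Map the deprecated ABORT keyword onto HALT with a one-time warning."""
        if action == "ABORT":
            warn('The BB_DISKMON_DIRS "ABORT" action has been renamed to "HALT"')
            action = "HALT"
        if action not in ("HALT", "STOPTASKS", "WARN"):
            raise ValueError("Unknown disk space monitor action: %s" % action)
        return action
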
@@ -234,10 +229,9 @@ class diskMonitor:
freeInode = st.f_favail
if minInode and freeInode < minInode:
# Some filesystems use dynamic inodes so can't run out.
# This is reported by the inode count being 0 (btrfs) or the free
# inode count being -1 (cephfs).
if st.f_files == 0 or st.f_favail == -1:
# Some filesystems use dynamic inodes so can't run out
# (e.g. btrfs). This is reported by the inode count being 0.
if st.f_files == 0:
self.devDict[k][2] = None
continue
# Always show warning, the self.checked would always be False if the action is WARN
@@ -251,8 +245,8 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
elif action == "HALT" and not self.checked[k]:
logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
elif action == "ABORT" and not self.checked[k]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
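
The inode check on the newer side also learns about cephfs: filesystems with dynamic inodes cannot run out, and they advertise this as f_files == 0 (btrfs) or f_favail == -1 (cephfs), at which point the monitor stops tracking inodes for that mount. A self-contained illustration using os.statvfs():

    import os

    def has_fixed_inodes(path):
        st = os.statvfs(path)
        # btrfs reports f_files == 0; cephfs reports f_favail == -1
        return st.f_files != 0 and st.f_favail != -1

    print(has_fixed_inodes("/"))
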


@@ -30,9 +30,7 @@ class BBLogFormatter(logging.Formatter):
PLAIN = logging.INFO + 1
VERBNOTE = logging.INFO + 2
ERROR = logging.ERROR
ERRORONCE = logging.ERROR - 1
WARNING = logging.WARNING
WARNONCE = logging.WARNING - 1
CRITICAL = logging.CRITICAL
levelnames = {
@@ -44,9 +42,7 @@ class BBLogFormatter(logging.Formatter):
PLAIN : '',
VERBNOTE: 'NOTE',
WARNING : 'WARNING',
WARNONCE : 'WARNING',
ERROR : 'ERROR',
ERRORONCE : 'ERROR',
CRITICAL: 'ERROR',
}
@@ -62,9 +58,7 @@ class BBLogFormatter(logging.Formatter):
PLAIN : BASECOLOR,
VERBNOTE: BASECOLOR,
WARNING : YELLOW,
WARNONCE : YELLOW,
ERROR : RED,
ERRORONCE : RED,
CRITICAL: RED,
}
@@ -127,22 +121,6 @@ class BBLogFilter(object):
return True
return False
class LogFilterShowOnce(logging.Filter):
def __init__(self):
self.seen_warnings = set()
self.seen_errors = set()
def filter(self, record):
if record.levelno == bb.msg.BBLogFormatter.WARNONCE:
if record.msg in self.seen_warnings:
return False
self.seen_warnings.add(record.msg)
if record.levelno == bb.msg.BBLogFormatter.ERRORONCE:
if record.msg in self.seen_errors:
return False
self.seen_errors.add(record.msg)
return True
class LogFilterGEQLevel(logging.Filter):
def __init__(self, level):
self.strlevel = str(level)
@@ -228,9 +206,8 @@ def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers
"""Standalone logger creation function"""
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
console.addFilter(bb.msg.LogFilterShowOnce())
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if color == 'always' or (color == 'auto' and output.isatty() and os.environ.get('NO_COLOR', '') == ''):
if color == 'always' or (color == 'auto' and output.isatty()):
format.enable_color()
console.setFormatter(format)
if preserve_handlers:
@@ -301,7 +278,7 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):
with open(os.path.normpath(userconfigfile), 'r') as f:
if userconfigfile.endswith('.yml') or userconfigfile.endswith('.yaml'):
import yaml
userconfig = yaml.safe_load(f)
userconfig = yaml.load(f)
elif userconfigfile.endswith('.json') or userconfigfile.endswith('.cfg'):
import json
userconfig = json.load(f)
@@ -316,17 +293,10 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):
# Convert all level parameters to integers in case users want to use the
# bitbake defined level names
for name, h in logconfig["handlers"].items():
for h in logconfig["handlers"].values():
if "level" in h:
h["level"] = bb.msg.stringToLevel(h["level"])
# Every handler needs its own instance of the once filter.
once_filter_name = name + ".showonceFilter"
logconfig.setdefault("filters", {})[once_filter_name] = {
"()": "bb.msg.LogFilterShowOnce",
}
h.setdefault("filters", []).append(once_filter_name)
for l in logconfig["loggers"].values():
if "level" in l:
l["level"] = bb.msg.stringToLevel(l["level"])


@@ -49,32 +49,20 @@ class SkipPackage(SkipRecipe):
__mtime_cache = {}
def cached_mtime(f):
if f not in __mtime_cache:
res = os.stat(f)
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
return __mtime_cache[f]
def cached_mtime_noerror(f):
if f not in __mtime_cache:
try:
res = os.stat(f)
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
return 0
return __mtime_cache[f]
def check_mtime(f, mtime):
try:
res = os.stat(f)
current_mtime = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = current_mtime
except OSError:
current_mtime = 0
return current_mtime == mtime
def update_mtime(f):
try:
res = os.stat(f)
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
if f in __mtime_cache:
del __mtime_cache[f]
@@ -83,7 +71,7 @@ def update_mtime(f):
def update_cache(f):
if f in __mtime_cache:
logger.debug("Updating mtime cache for %s" % f)
logger.debug(1, "Updating mtime cache for %s" % f)
update_mtime(f)
def clear_cache():
@@ -111,12 +99,12 @@ def supports(fn, data):
return 1
return 0
def handle(fn, data, include=0, baseconfig=False):
def handle(fn, data, include = 0):
"""Call the handler that is appropriate for this file"""
for h in handlers:
if h['supports'](fn, data):
with data.inchistory.include(fn):
return h['handle'](fn, data, include, baseconfig)
return h['handle'](fn, data, include)
raise ParseError("not a BitBake file", fn)
def init(fn, data):
@@ -125,8 +113,6 @@ def init(fn, data):
return h['init'](data)
def init_parser(d):
if hasattr(bb.parse, "siggen"):
bb.parse.siggen.exit()
bb.parse.siggen = bb.siggen.init(d)
def resolve_file(fn, d):
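
The mtime cache in bb/parse widens its key on the newer side from a bare st_mtime to the (st_mtime_ns, st_size, st_ino) triple, so a file rewritten within the same second, or replaced by a different inode, is still seen as changed; the new check_mtime() compares against that triple. A minimal sketch of the stronger key:

    import os

    def stat_key(path):
        res = os.stat(path)
        # nanosecond mtime plus size and inode defeats same-second rewrites
        return (res.st_mtime_ns, res.st_size, res.st_ino)

    _cache = {}

    def changed(path):
        key = stat_key(path)
        dirty = _cache.get(path) != key
        _cache[path] = key
        return dirty
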


@@ -9,7 +9,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import sys
import bb
from bb import methodpool
from bb.parse import logger
@@ -35,7 +34,7 @@ class IncludeNode(AstNode):
Include the file and evaluate the statements
"""
s = data.expand(self.what_file)
logger.debug2("CONF %s:%s: including %s", self.filename, self.lineno, s)
logger.debug(2, "CONF %s:%s: including %s", self.filename, self.lineno, s)
# TODO: Cache those includes... maybe not here though
if self.force:
@@ -131,10 +130,6 @@ class DataNode(AstNode):
else:
val = groupd["value"]
if ":append" in key or ":remove" in key or ":prepend" in key:
if op in ["append", "prepend", "postdot", "predot", "ques"]:
bb.warn(key + " " + groupd[op] + " is not a recommended operator combination, please replace it.")
flag = None
if 'flag' in groupd and groupd['flag'] is not None:
flag = groupd['flag']
@@ -150,7 +145,7 @@ class DataNode(AstNode):
data.setVar(key, val, parsing=True, **loginfo)
class MethodNode(AstNode):
tr_tbl = str.maketrans('/.+-@%&~', '________')
tr_tbl = str.maketrans('/.+-@%&', '_______')
def __init__(self, filename, lineno, func_name, body, python, fakeroot):
AstNode.__init__(self, filename, lineno)
@@ -211,12 +206,10 @@ class ExportFuncsNode(AstNode):
def eval(self, data):
sentinel = " # Export function set\n"
for func in self.n:
calledfunc = self.classname + "_" + func
basevar = data.getVar(func, False)
if basevar and sentinel not in basevar:
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
continue
if data.getVar(func, False):
@@ -226,18 +219,19 @@ class ExportFuncsNode(AstNode):
for flag in [ "func", "python" ]:
if data.getVarFlag(calledfunc, flag, False):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag, False))
for flag in ["dirs", "cleandirs", "fakeroot"]:
for flag in [ "dirs" ]:
if data.getVarFlag(func, flag, False):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag, False))
data.setVarFlag(func, "filename", "autogenerated")
data.setVarFlag(func, "lineno", 1)
if data.getVarFlag(calledfunc, "python", False):
data.setVar(func, sentinel + " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
else:
if "-" in self.classname:
bb.fatal("The classname %s contains a dash character and is calling an sh function %s using EXPORT_FUNCTIONS. Since a dash is illegal in sh function names, this cannot work, please rename the class or don't use EXPORT_FUNCTIONS." % (self.classname, calledfunc))
data.setVar(func, sentinel + " " + calledfunc + "\n", parsing=True)
data.setVar(func, " " + calledfunc + "\n", parsing=True)
data.setVarFlag(func, 'export_func', '1')
class AddTaskNode(AstNode):
def __init__(self, filename, lineno, func, before, after):
@@ -271,41 +265,6 @@ class BBHandlerNode(AstNode):
data.setVarFlag(h, "handler", 1)
data.setVar('__BBHANDLERS', bbhands)
class PyLibNode(AstNode):
def __init__(self, filename, lineno, libdir, namespace):
AstNode.__init__(self, filename, lineno)
self.libdir = libdir
self.namespace = namespace
def eval(self, data):
global_mods = (data.getVar("BB_GLOBAL_PYMODULES") or "").split()
for m in global_mods:
if m not in bb.utils._context:
bb.utils._context[m] = __import__(m)
libdir = data.expand(self.libdir)
if libdir not in sys.path:
sys.path.append(libdir)
try:
bb.utils._context[self.namespace] = __import__(self.namespace)
toimport = getattr(bb.utils._context[self.namespace], "BBIMPORTS", [])
for i in toimport:
bb.utils._context[self.namespace] = __import__(self.namespace + "." + i)
mod = getattr(bb.utils._context[self.namespace], i)
fn = getattr(mod, "__file__")
funcs = {}
for f in dir(mod):
if f.startswith("_"):
continue
fcall = getattr(mod, f)
if not callable(fcall):
continue
funcs[f] = fcall
bb.codeparser.add_module_functions(fn, funcs, "%s.%s" % (self.namespace, i))
except AttributeError as e:
bb.error("Error importing OE modules: %s" % str(e))
class InheritNode(AstNode):
def __init__(self, filename, lineno, classes):
AstNode.__init__(self, filename, lineno)
@@ -314,16 +273,6 @@ class InheritNode(AstNode):
def eval(self, data):
bb.parse.BBHandler.inherit(self.classes, self.filename, self.lineno, data)
class InheritDeferredNode(AstNode):
def __init__(self, filename, lineno, classes):
AstNode.__init__(self, filename, lineno)
self.inherit = (classes, filename, lineno)
def eval(self, data):
inherits = data.getVar('__BBDEFINHERITS', False) or []
inherits.append(self.inherit)
data.setVar('__BBDEFINHERITS', inherits)
def handleInclude(statements, filename, lineno, m, force):
statements.append(IncludeNode(filename, lineno, m.group(1), force))
@@ -367,17 +316,10 @@ def handleDelTask(statements, filename, lineno, m):
def handleBBHandlers(statements, filename, lineno, m):
statements.append(BBHandlerNode(filename, lineno, m.group(1)))
def handlePyLib(statements, filename, lineno, m):
statements.append(PyLibNode(filename, lineno, m.group(1), m.group(2)))
def handleInherit(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritNode(filename, lineno, classes))
def handleInheritDeferred(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritDeferredNode(filename, lineno, classes))
def runAnonFuncs(d):
code = []
for funcname in d.getVar("__BBANONFUNCS", False) or []:
@@ -387,17 +329,13 @@ def runAnonFuncs(d):
def finalize(fn, d, variant = None):
saved_handlers = bb.event.get_handlers().copy()
try:
# Found renamed variables. Exit immediately
if d.getVar("_FAILPARSINGERRORHANDLED", False) == True:
raise bb.BBHandledException()
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
if not handlerfn:
bb.fatal("Undefined event handler function '%s'" % var)
handlerln = int(d.getVarFlag(var, "lineno", False))
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln, data=d)
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
bb.event.fire(bb.event.RecipePreFinalise(fn), d)
@@ -415,9 +353,6 @@ def finalize(fn, d, variant = None):
d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
if d.getVar('__BBAUTOREV_SEEN') and d.getVar('__BBSRCREV_SEEN') and not d.getVar("__BBAUTOREV_ACTED_UPON"):
bb.fatal("AUTOREV/SRCPV set too late for the fetcher to work properly, please set the variables earlier in parsing. Erroring instead of later obtuse build failures.")
bb.event.fire(bb.event.RecipeParsed(fn), d)
finally:
bb.event.set_handlers(saved_handlers)
@@ -441,17 +376,9 @@ def _create_variants(datastores, names, function, onlyfinalise):
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND") or "").split()
for append in appends:
logger.debug("Appending .bbappend file %s to %s", append, fn)
logger.debug(1, "Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
while True:
inherits = d.getVar('__BBDEFINHERITS', False) or []
if not inherits:
break
inherit, filename, lineno = inherits.pop(0)
d.setVar('__BBDEFINHERITS', inherits)
bb.parse.BBHandler.inherit(inherit, filename, lineno, d, deferred=True)
onlyfinalise = d.getVar("__ONLYFINALISE", False)
safe_d = d
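
bb/parse/ast.py picks up two new statement types on the newer side: addpylib (PyLibNode, which imports Python libraries into bb.utils._context and registers their functions with the code parser) and inherit_defer (InheritDeferredNode, which queues inherits in __BBDEFINHERITS for multi_finalize() to drain). A toy model of the deferred queue; the names here are illustrative:

    _deferred = []

    def inherit_defer(classes, filename, lineno):
        # queue the inherit during parsing instead of applying it immediately
        _deferred.append((classes, filename, lineno))

    def drain(apply_inherit):
        # multi_finalize() loops like this until the queue is empty, since
        # applying one inherit may queue further ones
        while _deferred:
            classes, filename, lineno = _deferred.pop(0)
            apply_inherit(classes, filename, lineno)
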


@@ -13,15 +13,17 @@
#
import re, bb, os
import bb.build, bb.utils, bb.data_smart
import bb.build, bb.utils
from . import ConfHandler
from .. import resolve_file, ast, logger, ParseError
from .ConfHandler import include, init
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$:]+)?\s*\(\s*\)\s*{$" )
# For compatibility
bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
__func_start_regexp__ = re.compile(r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
__inherit_def_regexp__ = re.compile(r"inherit_defer\s+(.+)" )
__export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
__addtask_regexp__ = re.compile(r"addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
__deltask_regexp__ = re.compile(r"deltask\s+(.+)")
@@ -34,7 +36,6 @@ __infunc__ = []
__inpython__ = False
__body__ = []
__classname__ = ""
__residue__ = []
cached_statements = {}
@@ -42,46 +43,31 @@ def supports(fn, d):
"""Return True if fn has a supported extension"""
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
def inherit(files, fn, lineno, d, deferred=False):
def inherit(files, fn, lineno, d):
__inherit_cache = d.getVar('__inherit_cache', False) or []
#if "${" in files and not deferred:
# bb.warn("%s:%s has non deferred conditional inherit" % (fn, lineno))
files = d.expand(files).split()
for file in files:
classtype = d.getVar("__bbclasstype", False)
origfile = file
for t in ["classes-" + classtype, "classes"]:
file = origfile
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join(t, '%s.bbclass' % file)
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join('classes', '%s.bbclass' % file)
if not os.path.isabs(file):
bbpath = d.getVar("BBPATH")
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
bb.parse.mark_dependency(d, af)
if abs_fn:
file = abs_fn
if os.path.exists(file):
break
if not os.path.exists(file):
raise ParseError("Could not inherit file %s" % (file), fn, lineno)
if not os.path.isabs(file):
bbpath = d.getVar("BBPATH")
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
bb.parse.mark_dependency(d, af)
if abs_fn:
file = abs_fn
if not file in __inherit_cache:
logger.debug("Inheriting %s (from %s:%d)" % (file, fn, lineno))
logger.debug(1, "Inheriting %s (from %s:%d)" % (file, fn, lineno))
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
try:
bb.parse.handle(file, d, True)
except (IOError, OSError) as exc:
raise ParseError("Could not inherit file %s: %s" % (fn, exc.strerror), fn, lineno)
include(fn, file, lineno, d, "inherit")
__inherit_cache = d.getVar('__inherit_cache', False) or []
def get_statements(filename, absolute_filename, base_name):
global cached_statements, __residue__, __body__
global cached_statements
try:
return cached_statements[absolute_filename]
@@ -101,17 +87,12 @@ def get_statements(filename, absolute_filename, base_name):
# add a blank line to close out any python definition
feeder(lineno, "", filename, base_name, statements, eof=True)
if __residue__:
raise ParseError("Unparsed lines %s: %s" % (filename, str(__residue__)), filename, lineno)
if __body__:
raise ParseError("Unparsed lines from unclosed function %s: %s" % (filename, str(__body__)), filename, lineno)
if filename.endswith(".bbclass") or filename.endswith(".inc"):
cached_statements[absolute_filename] = statements
return statements
def handle(fn, d, include, baseconfig=False):
global __infunc__, __body__, __residue__, __classname__
def handle(fn, d, include):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__, __classname__
__body__ = []
__infunc__ = []
__classname__ = ""
@@ -163,7 +144,7 @@ def handle(fn, d, include, baseconfig=False):
return d
def feeder(lineno, s, fn, root, statements, eof=False):
global __inpython__, __infunc__, __body__, __residue__, __classname__
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, bb, __residue__, __classname__
# Check tabs in python functions:
# - def py_funcname(): covered by __inpython__
@@ -200,10 +181,10 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if s and s[0] == '#':
if len(__residue__) != 0 and __residue__[0][0] != "#":
bb.fatal("There is a comment on line %s of file %s:\n'''\n%s\n'''\nwhich is in the middle of a multiline expression. This syntax is invalid, please correct it." % (lineno, fn, s))
bb.fatal("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
if len(__residue__) != 0 and __residue__[0][0] == "#" and (not s or s[0] != "#"):
bb.fatal("There is a confusing multiline partially commented expression on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (lineno - len(__residue__), fn, "\n".join(__residue__)))
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
if s and s[-1] == '\\':
__residue__.append(s[:-1])
@@ -252,10 +233,6 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if taskexpression.count(word) > 1:
logger.warning("addtask contained multiple '%s' keywords, only one is supported" % word)
# Check and warn for having task with exprssion as part of task name
for te in taskexpression:
if any( ( "%s_" % keyword ) in te for keyword in bb.data_smart.__setvar_keyword__ ):
raise ParseError("Task name '%s' contains a keyword which is not recommended/supported.\nPlease rename the task not to include the keyword.\n%s" % (te, ("\n".join(map(str, bb.data_smart.__setvar_keyword__)))), fn)
ast.handleAddTask(statements, fn, lineno, m)
return
@@ -274,12 +251,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
ast.handleInherit(statements, fn, lineno, m)
return
m = __inherit_def_regexp__.match(s)
if m:
ast.handleInheritDeferred(statements, fn, lineno, m)
return
return ConfHandler.feeder(lineno, s, fn, statements, conffile=False)
return ConfHandler.feeder(lineno, s, fn, statements)
# Add us to the handlers list
from .. import handlers
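
inherit() on the newer side of bb/parse/parse_py/BBHandler.py searches a classes-<classtype>/ directory before falling back to the plain classes/ layout, resolving each candidate through BBPATH. Roughly, the lookup order reduces to this sketch, with bb.utils.which() replaced by a plain path probe for illustration:

    import os

    def find_bbclass(name, bbpath, classtype="recipe"):
        # classes-<type>/ takes precedence over the generic classes/ directory
        for subdir in ("classes-" + classtype, "classes"):
            rel = os.path.join(subdir, "%s.bbclass" % name)
            for d in bbpath.split(":"):
                candidate = os.path.join(d, rel)
                if os.path.exists(candidate):
                    return candidate
        return None
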


@@ -20,8 +20,8 @@ from bb.parse import ParseError, resolve_file, ast, logger, handle
__config_regexp__ = re.compile( r"""
^
(?P<exp>export\s+)?
(?P<var>[a-zA-Z0-9\-_+.${}/~:]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]*)\])?
(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
(?P<colon>:=) |
@@ -45,11 +45,13 @@ __include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]+)\]$" )
__addpylib_regexp__ = re.compile(r"addpylib\s+(.+)\s+(.+)" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )
def init(data):
return
topdir = data.getVar('TOPDIR', False)
if not topdir:
data.setVar('TOPDIR', os.getcwd())
def supports(fn, d):
return fn[-5:] == ".conf"
@@ -93,7 +95,7 @@ def include_single_file(parentfn, fn, lineno, data, error_out):
if exc.errno == errno.ENOENT:
if error_out:
raise ParseError("Could not %s file %s" % (error_out, fn), parentfn, lineno)
logger.debug2("CONF file '%s' not found", fn)
logger.debug(2, "CONF file '%s' not found", fn)
else:
if error_out:
raise ParseError("Could not %s file %s: %s" % (error_out, fn, exc.strerror), parentfn, lineno)
@@ -103,12 +105,12 @@ def include_single_file(parentfn, fn, lineno, data, error_out):
# We have an issue where a UI might want to enforce particular settings such as
# an empty DISTRO variable. If configuration files do something like assigning
# a weak default, it turns out to be very difficult to filter out these changes,
# particularly when the weak default might appear half way though parsing a chain
# of configuration files. We therefore let the UIs hook into configuration file
# parsing. This turns out to be a hard problem to solve any other way.
confFilters = []
def handle(fn, data, include, baseconfig=False):
def handle(fn, data, include):
init(data)
if include == 0:
@@ -126,26 +128,21 @@ def handle(fn, data, include, baseconfig=False):
s = f.readline()
if not s:
break
origlineno = lineno
origline = s
w = s.strip()
# skip empty lines
if not w:
continue
s = s.rstrip()
while s[-1] == '\\':
line = f.readline()
origline += line
s2 = line.rstrip()
s2 = f.readline().rstrip()
lineno = lineno + 1
if (not s2 or s2 and s2[0] != "#") and s[0] == "#" :
bb.fatal("There is a confusing multiline, partially commented expression starting on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (origlineno, fn, origline))
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
s = s[:-1] + s2
# skip comments
if s[0] == '#':
continue
feeder(lineno, s, abs_fn, statements, baseconfig=baseconfig)
feeder(lineno, s, abs_fn, statements)
# DONE WITH PARSING... time to evaluate
data.setVar('FILE', abs_fn)
@@ -153,14 +150,14 @@ def handle(fn, data, include, baseconfig=False):
if oldfile:
data.setVar('FILE', oldfile)
f.close()
for f in confFilters:
f(fn, data)
return data
# baseconfig is set for the bblayers/layer.conf cookerdata config parsing
# The function is also used by BBHandler, conffile would be False
def feeder(lineno, s, fn, statements, baseconfig=False, conffile=True):
def feeder(lineno, s, fn, statements):
m = __config_regexp__.match(s)
if m:
groupd = m.groupdict()
@@ -192,11 +189,6 @@ def feeder(lineno, s, fn, statements, baseconfig=False, conffile=True):
ast.handleUnsetFlag(statements, fn, lineno, m)
return
m = __addpylib_regexp__.match(s)
if baseconfig and conffile and m:
ast.handlePyLib(statements, fn, lineno, m)
return
raise ParseError("unparsed line: '%s'" % s, fn, lineno);
# Add us to the handlers list
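
The ConfHandler regexps are relaxed on the newer side: ':' becomes legal inside variable names (for the VAR:append override syntax) and '@' inside flag names, and an addpylib directive is recognised while parsing base configuration. A cut-down, runnable version of the newer assignment pattern:

    import re

    assign = re.compile(
        r'^(?P<var>[a-zA-Z0-9\-_+.${}/~:]+?)'
        r'(\[(?P<flag>[a-zA-Z0-9\-_+.][a-zA-Z0-9\-_+.@]*)\])?'
        r'\s*=\s*"(?P<value>.*)"$')

    m = assign.match('SRC_URI:append = " file://fix.patch"')
    print(m.group("var"), "->", m.group("value"))
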


@@ -12,14 +12,14 @@ currently, providing a key/value store accessed by 'domain'.
#
import collections
import collections.abc
import contextlib
import functools
import logging
import os.path
import sqlite3
import sys
from collections.abc import Mapping
import warnings
from collections import Mapping
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
@@ -29,7 +29,7 @@ if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
logger = logging.getLogger("BitBake.PersistData")
@functools.total_ordering
class SQLTable(collections.abc.MutableMapping):
class SQLTable(collections.MutableMapping):
class _Decorators(object):
@staticmethod
def retry(*, reconnect=True):
@@ -63,7 +63,7 @@ class SQLTable(collections.abc.MutableMapping):
"""
Decorator that starts a database transaction and creates a database
cursor for performing queries. If no exception is thrown, the
database results are committed. If an exception occurs, the database
database results are commited. If an exception occurs, the database
is rolled back. In all cases, the cursor is closed after the
function ends.
@@ -208,7 +208,7 @@ class SQLTable(collections.abc.MutableMapping):
def __lt__(self, other):
if not isinstance(other, Mapping):
raise NotImplementedError()
raise NotImplemented
return len(self) < len(other)
@@ -238,6 +238,55 @@ class SQLTable(collections.abc.MutableMapping):
def has_key(self, key):
return key in self
class PersistData(object):
"""Deprecated representation of the bitbake persistent data store"""
def __init__(self, d):
warnings.warn("Use of PersistData is deprecated. Please use "
"persist(domain, d) instead.",
category=DeprecationWarning,
stacklevel=2)
self.data = persist(d)
logger.debug(1, "Using '%s' as the persistent data cache",
self.data.filename)
def addDomain(self, domain):
"""
Add a domain (pending deprecation)
"""
return self.data[domain]
def delDomain(self, domain):
"""
Removes a domain and all the data it contains
"""
del self.data[domain]
def getKeyValues(self, domain):
"""
Return a list of key + value pairs for a domain
"""
return list(self.data[domain].items())
def getValue(self, domain, key):
"""
Return the value of a key for a domain
"""
return self.data[domain][key]
def setValue(self, domain, key, value):
"""
Sets the value of a key for a domain
"""
self.data[domain][key] = value
def delValue(self, domain, key):
"""
Deletes a key/value pair
"""
del self.data[domain][key]
def persist(domain, d):
"""Convenience factory for SQLTable objects based upon metadata"""
import bb.utils
@@ -249,23 +298,4 @@ def persist(domain, d):
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_persist_data.sqlite3")
try:
return SQLTable(cachefile, domain)
except sqlite3.OperationalError:
# Sqlite fails to open database when its path is too long.
# After testing, 504 is the biggest path length that can be opened by
# sqlite.
# Note: This code is called before sanity.bbclass and its path length
# check
max_len = 504
if len(cachefile) > max_len:
logger.critical("The path of the cache file is too long "
"({0} chars > {1}) to be opened by sqlite! "
"Your cache file is \"{2}\"".format(
len(cachefile),
max_len,
cachefile))
sys.exit(1)
else:
raise
return SQLTable(cachefile, domain)
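
Besides dropping the deprecated PersistData wrapper and moving to collections.abc, the newer persist() in bb/persist_data.py wraps the SQLTable construction to diagnose one specific sqlite failure mode: paths longer than roughly 504 characters cannot be opened. A sketch of that open-with-diagnosis pattern; MAX_LEN mirrors the limit stated in the diff:

    import sqlite3
    import sys

    MAX_LEN = 504  # longest path sqlite was observed to open, per the diff

    def open_cache(cachefile):
        try:
            return sqlite3.connect(cachefile)
        except sqlite3.OperationalError:
            if len(cachefile) > MAX_LEN:
                # give a clear message instead of sqlite's generic error
                sys.exit("Cache file path too long for sqlite (%d > %d): %s"
                         % (len(cachefile), MAX_LEN, cachefile))
            raise
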


@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -62,7 +60,7 @@ class Popen(subprocess.Popen):
"close_fds": True,
"preexec_fn": subprocess_setup,
"stdout": subprocess.PIPE,
"stderr": subprocess.PIPE,
"stderr": subprocess.STDOUT,
"stdin": subprocess.PIPE,
"shell": False,
}
@@ -144,7 +142,7 @@ def _logged_communicate(pipe, log, input, extrafiles):
while pipe.poll() is None:
read_all_pipes(log, rin, outdata, errdata)
# Process closed, drain all pipes...
# Pocess closed, drain all pipes...
read_all_pipes(log, rin, outdata, errdata)
finally:
log.flush()
@@ -183,8 +181,5 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
stderr = stderr.decode("utf-8")
if pipe.returncode != 0:
if log:
# Don't duplicate the output in the exception if logging it
raise ExecutionError(cmd, pipe.returncode, None, None)
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
return stdout, stderr
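
Two behavioural changes land in bb/process.py: stderr is captured as its own pipe instead of being folded into stdout, and when the output is already being logged the ExecutionError no longer duplicates it. The stream-separation half is easy to demonstrate with stock subprocess:

    import subprocess

    proc = subprocess.run(["ls", "/nonexistent"],
                          stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE,  # the old side used STDOUT here
                          text=True)
    print("stdout:", repr(proc.stdout))
    print("stderr:", repr(proc.stderr))
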

Some files were not shown because too many files have changed in this diff.