Compare commits

94 Commits

Richard Purdie
943ef2fad8 build-appliance-image: Update to gatesgarth head revision
(From OE-Core rev: d11ab9cb77bf91f939035417b757773a5d80242c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 16:25:30 +00:00
Richard Purdie
76dac9d657 build-appliance: Correct branch to gatesgarth
(From OE-Core rev: feb77e322fa13495550b98e3924d24df1560156d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 16:25:24 +00:00
Richard Purdie
333f24caec build-appliance-image: Update to gatesgarth head revision
(From OE-Core rev: e525592e83062ed9a9b2d3cb37c8dbbcfe8759a9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:59:37 +00:00
Anuj Mittal
e5bd9b93b4 poky.conf: bump version for 3.2.1 release
(From meta-yocto rev: be61a726ee0036402c460493df9532714903ea57)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:58:12 +00:00
Anuj Mittal
a4ff9dd2dc releases.rst: add gatesgarth to current releases
(From yocto-docs rev: b9d69c76561eb6708cd217126a5ed08b52315fa5)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:58:12 +00:00
Nicolas Dechesne
2d3224bf20 sphinx: releases: add link to 3.1.3
(From yocto-docs rev: 5e422dc364800d67ef5ee632b5c787265afd75f8)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bf395d0af044f6e9826a8235b760b2d285602b26)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:58:12 +00:00
Anuj Mittal
e6f6420d98 documentation: prepare for 3.2.1 release
Bump the current version to 3.2.1

(From yocto-docs rev: 1e46d6ffd3a193c24ddc07aaaad6f4769d12cc45)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-07 08:58:12 +00:00
He Zhe
f0b8b3a960 lttng-modules: Backport a patch to fix btrfs build failure
lttng-modules-2.12.3/probes/lttng-probe-btrfs.c:36:
lttng-modules-2.12.3/probes/../probes/lttng-tracepoint-event-impl.h:131:6:
error: conflicting types for 'trace_find_free_extent'

(From OE-Core rev: af428fa2432279d24cdf2a62f9dee91b30d46c3a)

Signed-off-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 42c791ab3815b47188fdd98998cdcb3d2c62ef20)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Alexander Kanavin
fef73fcd3a lttng-modules: update 2.12.2 -> 2.12.3
Drop a pile of backports.

(From OE-Core rev: d11a2157befcfe40517140988dd26bf0ed7240b6)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fba843f79ac6ad2636385de2bd63e90e08c04fcd)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Anuj Mittal
d12e2d67c9 distutils-common-base: fix LINKSHARED expansion
Add the missing $ so SECURITY_CFLAGS actually gets expanded.

(From OE-Core rev: 26bd176e221789e9592d71e8c469eb40f506029a)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6ed2f892ebb0b4e30a3bf167eac68027ea378a2d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
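
To put the one-character fix above in context: in BitBake, a variable reference only expands when written with a leading '$'; without it the braces are passed through as literal text. A minimal sketch of the pattern (the SECURITY_CFLAGS value here is illustrative, not the actual recipe contents):

    SECURITY_CFLAGS = "-fstack-protector-strong"
    # broken: "{SECURITY_CFLAGS}" is literal text and never expands
    LINKSHARED = "{SECURITY_CFLAGS} -shared"
    # fixed: "${SECURITY_CFLAGS}" expands as intended
    LINKSHARED = "${SECURITY_CFLAGS} -shared"
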
Khem Raj
eeb98ec6ae binutils: Fix linker errors on chromium/ffmpeg on aarch64
ffmpeg in qtwebengine/chromium fails to build on aarch64

ffmpeg/ffmpeg_internal/videodsp.o: in function `ff_prefetch_aarch64':
(.text+0x10): relocation truncated to fit: R_AARCH64_CONDBR19 against symbol `ff_prefetch_aarch64' defined in .text section in obj/third_party/ffmpeg/ffmpeg_internal/videodsp.o

Backport an upstream fix to handle this error, which is a regression in
binutils 2.35.

(From OE-Core rev: 658024f47b5f96d3f4e1813b4716e8981fbf2e47)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0a68def6b1f69b61096e58ae7778b61412dec4a2)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Richard Purdie
3f2bc0a2e1 e2fsprogs: Fix a ptest permissions determinism issue
When comparing builds built with different host umasks, this file jumped out.
The umask from do_compile was influencing ${D}, and as cp was used to add the
file, it wasn't deterministic. Fix the file mode to ensure determinism.

(From OE-Core rev: b99796ec9436b63e4fc7cb7d12c0c9bcceef5d4b)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 37f37f4a52de3711973b372160f23672b61ff6ad)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
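
The general pattern behind the e2fsprogs fix above: 'cp' carries over the source file's mode, which reflects whatever umask the file was created under, while 'install -m' pins an explicit mode. A sketch of the idea with a hypothetical file name (not the actual recipe change):

    # non-deterministic: the resulting mode depends on the host umask
    cp ${S}/tests/expect.in ${D}${PTEST_PATH}/
    # deterministic: the mode is fixed regardless of the host umask
    install -m 0644 ${S}/tests/expect.in ${D}${PTEST_PATH}/
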
Richard Purdie
cbd023e0db fs-perms: Ensure /usr/src/debug/ file modes are correct
If files are copied into /usr/src/debug directly from WORKDIR (e.g. by makedevs),
we'd get the permissions from the checkout, which would depend on the host umask.

Avoid this and be deterministic by setting the file modes consistently. Core
code copies the files in so we're responsible for the permissions.

Unfortunately, to force this change to apply, we need to invalidate both
the package tasks and the hash equivalence mappings since the file mode
'corruption' already made it into the output hashes (both input options
were mapped to the same output hashes).

(From OE-Core rev: 1f807da38b9d9aebdd86b3b5839305e03d9930e1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1f958bcd6c9cd12ec76d80586cba15f4d6ed17a7)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
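
For reference, fs-perms entries take the form '<path> <mode> <uid> <gid> <walk> <fmode> <fuid> <fgid>', where <walk> applies <fmode> to the files beneath the path. A sketch of the kind of entry the fix above implies (hedged; see meta/files/fs-perms.txt for the actual line):

    /usr/src/debug 0755 root root true 0644 root root
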
Stacy Gaikovaia
307146220b valgrind: helgrind: Intercept libc functions
The PTH_FUNC definition needs to be modified in order to
intercept POSIX thread functions in both libc and libpthread.
To handle this in helgrind, weak-alias the pthread functions in glibc.
Include a special case for musl.

See https://bugs.kde.org/show_bug.cgi?id=428909 for additional
discussion.

Upstream-Status: Submitted

(From OE-Core rev: 4c33ce1b1eca9aff0009bf71ce50f6398f7cd281)

Signed-off-by: Paul Floyd <paulf@free.fr>
Signed-off-by: Stacy Gaikovaia <Stacy.Gaikovaia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5da46a552d54de34a5243e1d90dcc6f52b7af746)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Fedor Ross
d754cd3a49 eudev: remove bashism to be compatible with dash
Remove 'echo -e' and replace it with 'printf'. In bash the builtin
'echo' has an option for interpreting backslash escapes. In a shell like
dash the builtin 'echo' interprets backslash escapes by default, so
dash's 'echo' doesn't have the '-e' option. Using 'printf' instead is
safe with both bash and dash.

(From OE-Core rev: af5a68b545fda9013bbe8f07a2175a04e950d768)

Signed-off-by: Fedor Ross <fedor.ross@ifm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c747acca33f84879a1ebd0ef972c07f4d5dff8b7)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Fedor Ross
3d5309b736 sysvinit: remove bashism to be compatible with dash
Replace the equality operator '==' with '=' inside of '[]' to be
compatible with bash and dash.

(From OE-Core rev: f3dbd50d3af6ff6ef6d2d5a64691c0861a19a733)

Signed-off-by: Fedor Ross <fedor.ross@ifm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b7f0ec6eafb35117eaf4eeef281162080f0ca79a)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
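
The same class of fix in miniature: POSIX test(1) only defines a single '=' for string comparison; '==' is a bash extension:

    [ "$mode" == "fast" ]   # works in bash, errors under dash
    [ "$mode" = "fast" ]    # works in both
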
Ross Burton
369b6e0192 gstreamer1.0-plugins-base: set CVE_PRODUCT
There are CVEs with the 'gst-plugins-base' product, so set that.

(From OE-Core rev: 679964bf178e0bba9fc3e5f8064b1cd55bf159c0)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ec0f0e5995ab498f50ad51ceb361784247614982)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Ross Burton
e03e489758 gstreamer1.0-rtsp-server: set CVE_PRODUCT
There are CVEs with the 'gst-rtsp-server' product, so set that.

(From OE-Core rev: 096b1aa0727ee29adaf54b3133ebdaa71399a967)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit eb5cbdead78d092733e783b09528b208efccac3d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
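
Both CVE_PRODUCT fixes above boil down to a one-line assignment in the recipe, mapping it to the product name the CVE database uses, e.g. for the first of the two (a sketch based on the commit message):

    CVE_PRODUCT = "gst-plugins-base"
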
Ross Burton
321e17803e sqlite3: add CVE-2015-3717 to whitelist
As per https://groups.google.com/g/sqlite-dev/c/U7OjAbZO6LA this issue
is believed to be either iOS-specific or fixed in 3.8.9.

(From OE-Core rev: 2b68dc373895c2e609a5841841960c57ea457e22)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b781058267bd86bd979c50f4dfe8168c58dfa5a9)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Ross Burton
086ed4af2a python3: add CVE-2007-4559 to whitelist
This issue describes expected behaviour: do not use tarfile with
untrusted data.

(From OE-Core rev: 391ed53928db0df325798a0bce18ec6947e09ddd)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f4c22e83f2e68ff157da5ea1303acc2931d63f5f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
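
In gatesgarth-era OE-Core, whitelisting a CVE that does not apply is likewise a one-line recipe addition; a sketch matching the sqlite3 entry above (the python3 entry is analogous):

    CVE_CHECK_WHITELIST += "CVE-2015-3717"
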
Ross Burton
67ff1d9ffb cve-check: show real PN/PV
The output currently shows the remapped product and version fields,
which may not be the actual recipe name/version. As this report is about
recipes, use the real values.

(From OE-Core rev: 62e07072bbeeebfead34bbdb04e75cff1c4ef1e1)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 18827d7f40db4a4f92680bd59ca655cca373ad65)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Anuj Mittal
8de9b33e14 glib-2.0: RDEPEND on dbusmock only when GI_DATA_ENABLED is True
python3-dbusmock depends on pygobject unconditionally and it's not going
to work if g-i is disabled.

(From OE-Core rev: 881986b4032d893464dbcbd7e7e114b454af0a1b)

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b70627e2818ded74be862ad8650e19bf1fe9bd43)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
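
The usual way to express such a conditional runtime dependency is bb.utils.contains(); a sketch of the pattern, assuming the dependency hangs off the ptest package (the exact variable the commit touches is an assumption):

    RDEPENDS_${PN}-ptest_append = " ${@bb.utils.contains('GI_DATA_ENABLED', 'True', 'python3-dbusmock', '', d)}"
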
Bruce Ashfield
afe59c8e1d linux-yocto/5.4: update to v5.4.78
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    315443293a2d Linux 5.4.78
    9fda2e762498 Convert trailing spaces and periods in path components
    ebc24aeb8694 net: sch_generic: fix the missing new qdisc assignment bug
    c5cf5c7b585c perf/core: Fix race in the perf_mmap_close() function
    c6b1616f5472 perf scripting python: Avoid declaring function pointers with a visibility attribute
    b74fe3186471 x86/speculation: Allow IBPB to be conditionally enabled on CPUs with always-on STIBP
    6958fbd52e79 powerpc/603: Always fault when _PAGE_ACCESSED is not set
    5af9d48acbee drm/i915: Correctly set SFC capability for video engines
    6fcf4141b9a2 r8169: fix potential skb double free in an error path
    78f6fac0814e tipc: fix memory leak in tipc_topsrv_start()
    c59039a088bd net/x25: Fix null-ptr-deref in x25_connect
    7e332a5c0e2c net: Update window_clamp if SOCK_RCVBUF is set
    25786fb512f7 net: udp: fix UDP header access on Fast/frag0 UDP GRO
    016e70d176ff net/af_iucv: fix null pointer dereference on shutdown
    22ee23fe1cc9 IPv6: Set SIT tunnel hard_header_len to zero
    98901bff58d9 swiotlb: fix "x86: Don't panic if can not alloc buffer for swiotlb"
    2cd21fe5bcc4 pinctrl: amd: fix incorrect way to disable debounce filter
    fa76dd3c1df3 pinctrl: amd: use higher precision for 512 RtcClk
    c6a6168a31e1 drm/gma500: Fix out-of-bounds access to struct drm_device.vblank[]
    974e3a7002a0 don't dump the threads that had been already exiting when zapped.
    039c8dcd2b15 mmc: renesas_sdhi_core: Add missing tmio_mmc_host_free() at remove
    e1d706eeeaf7 mmc: sdhci-of-esdhc: Handle pulse width detection erratum for more SoCs
    2a6cba6d3d72 gpio: pcie-idio-24: Enable PEX8311 interrupts
    7b6790ae3a94 gpio: pcie-idio-24: Fix IRQ Enable Register value
    819bf3b0d969 gpio: pcie-idio-24: Fix irq mask when masking
    68dae71b7cde selinux: Fix error return code in sel_ib_pkey_sid_slow()
    33e53f2cac19 btrfs: fix potential overflow in cluster_pages_for_defrag on 32bit arch
    9de4ffb70150 ocfs2: initialize ip_next_orphan
    ac18b128cfd6 reboot: fix overflow parsing reboot cpu number
    fa6265f8fb9e Revert "kernel/reboot.c: convert simple_strtoul to kstrtoint"
    bd4d106f3122 mm/slub: fix panic in slab_alloc_node()
    84778a43ae59 jbd2: fix up sparse warnings in checkpoint code
    2192d905df0d futex: Don't enable IRQs unconditionally in put_pi_state()
    761fb6829238 mei: protect mei_cl_mtu from null dereference
    e2b2c390ec9e virtio: virtio_console: fix DMA memory allocation for rproc serial
    57626d77ef1e xhci: hisilicon: fix refercence leak in xhci_histb_probe
    cbad9668929c usb: cdc-acm: Add DISABLE_ECHO for Renesas USB Download mode
    f988e9c85cfb uio: Fix use-after-free in uio_unregister_device()
    1654bf2d9f0e thunderbolt: Add the missed ida_simple_remove() in ring_request_msix()
    06c1895fe71b thunderbolt: Fix memory leak if ida_simple_get() fails in enumerate_services()
    11c14da8d005 KVM: arm64: Don't hide ID registers from userspace
    2033dd885297 btrfs: dev-replace: fail mount if we don't have replace item with target device
    5af9630036ef btrfs: fix min reserved size calculation in merge_reloc_root
    8266c23124c1 btrfs: ref-verify: fix memory leak in btrfs_ref_tree_mod
    062c9b04f6eb ext4: unlock xattr_sem properly in ext4_inline_data_truncate()
    a6ca4c7ec44c ext4: correctly report "not supported" for {usr,grp}jquota when !CONFIG_QUOTA
    52e3a55bc253 erofs: derive atime instead of leaving it empty
    09b0d47b7952 perf: Fix get_recursion_context()
    70867a9dbf57 vrf: Fix fast path output packet handling with async Netfilter rules
    2ab9c76986e4 cosa: Add missing kfree in error path of cosa_write
    c0a6cc9e11f4 of/address: Fix of_node memory leak in of_dma_is_coherent
    f10d238aad93 xfs: fix a missing unlock on error in xfs_fs_map_blocks
    0e2ad69bd4b5 lan743x: fix "BUG: invalid wait context" when setting rx mode
    b45f52a20879 xfs: fix brainos in the refcount scrubber's rmap fragment processor
    7cbf708b1b9a xfs: fix rmap key and record comparison functions
    3bd97b33be41 xfs: set the unwritten bit in rmap lookup flags in xchk_bmap_get_rmapextents
    08e213bef291 xfs: fix flags argument to rmap lookup when converting shared file rmaps
    a8ee686597fb igc: Fix returning wrong statistics
    81dcfdb9a015 nbd: fix a block_device refcount leak in nbd_release
    c602ad2b52dc bpf: Zero-fill re-used per-cpu map element
    dfcb33773877 SUNRPC: Fix general protection fault in trace_rpc_xdr_overflow()
    b9e8f9d139bd net/mlx5: Fix deletion of duplicate rules
    e74e514c8cca pinctrl: aspeed: Fix GPI only function problem.
    d2e61c5202e6 bpf: Don't rely on GCC __attribute__((optimize)) to disable GCSE
    443ae3655f8c ARM: 9019/1: kprobes: Avoid fortify_panic() when copying optprobe template
    c0be7a34c889 pinctrl: intel: Set default bias in case no particular value given
    88ccabbd2066 mfd: sprd: Add wakeup capability for PMIC IRQ
    58953e87343d tick/common: Touch watchdog in tick_unfreeze() on all CPUs
    3322f7289e50 spi: bcm2835: remove use of uninitialized gpio flags variable
    572e545d80ea tpm_tis: Disable interrupts on ThinkPad T490s
    713a3a94bee0 i2c: sh_mobile: implement atomic transfers
    37a048d790c3 riscv: Set text_offset correctly for M-Mode
    6d8b43376990 selftests: proc: fix warning: _GNU_SOURCE redefined
    ab10b7def421 amd/amdgpu: Disable VCN DPG mode for Picasso
    4faa1fabc645 i2c: mediatek: move dma reset before i2c reset
    b66c7cdedd1e vfio/pci: Bypass IGD init in case of -ENODEV
    c6be53caf1c8 vfio: platform: fix reference leak in vfio_platform_open
    4d6f536e34d6 s390/smp: move rcu_cpu_starting() earlier
    984d77507439 iommu/amd: Increase interrupt remapping table limit to 512 entries
    a889cd3d350d nvme-tcp: avoid repeated request completion
    9d14f5225dbb nvme-rdma: avoid repeated request completion
    531b55cce9cd nvme-tcp: avoid race between time out and tear down
    d0e888a20dfd nvme-rdma: avoid race between time out and tear down
    0ca279c859d7 nvme: introduce nvme_sync_io_queues
    c473b3e56c1d scsi: mpt3sas: Fix timeouts observed while reenabling IRQ
    b61e157d9f64 scsi: scsi_dh_alua: Avoid crash during alua_bus_detach()
    bf1cedc12f58 tracing: Fix the checking of stackidx in __ftrace_trace_stack
    e57c04697030 cfg80211: regulatory: Fix inconsistent format argument
    a3f0db0d2320 cfg80211: initialize wdev data earlier
    67bb2e4d41de mac80211: fix use of skb payload instead of header
    c1cbb64c100d drm/amd/pm: do not use ixFEATURE_STATUS for checking smc running
    48083640a47b drm/amd/pm: perform SMC reset on suspend/hibernation
    f449b902badb drm/amdgpu: perform srbm soft reset always on SDMA resume
    7f6df0b085ce scsi: hpsa: Fix memory leak in hpsa_init_one()
    325455358e54 gfs2: check for live vs. read-only file system in gfs2_fitrim
    edeff05a1f10 gfs2: Add missing truncate_inode_pages_final for sd_aspace
    99dcfc517d17 gfs2: Free rd_bits later in gfs2_clear_rgrpd to fix use-after-free
    42eaa22aaf2e ALSA: hda: Reinstate runtime_allow() for all hda controllers
    0a4c091673ca ALSA: hda: Separate runtime and system suspend
    9b7e6b670df7 selftests: pidfd: fix compilation errors due to wait.h
    9110e2f2633d selftests/ftrace: check for do_sys_openat2 in user-memory test
    1737ea0c5775 usb: gadget: goku_udc: fix potential crashes in probe
    e60490354191 opp: Reduce the size of critical section in _opp_table_kref_release()
    fe2dc1093c61 usb: dwc3: pci: add support for the Intel Alder Lake-S
    e22142a9a2a9 ASoC: cs42l51: manage mclk shutdown delay
    0fc0befe0bfa ASoC: qcom: sdm845: set driver name correctly
    b668352c4aad ath9k_htc: Use appropriate rs_datalen type
    42501604363f KVM: x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally
    d2cef3bae14b KVM: arm64: ARM_SMCCC_ARCH_WORKAROUND_1 doesn't return SMCCC_RET_NOT_REQUIRED
    213e1238cacc random32: make prandom_u32() output unpredictable
    327af342ca9b tpm: efi: Don't create binary_bios_measurements file for an empty log
    0685eb84ad56 xfs: fix scrub flagging rtinherit even if there is no rt device
    2f6cbef32718 xfs: flush new eof page on truncate to avoid post-eof corruption
    66ce8bfad6f6 can: flexcan: flexcan_remove(): disable wakeup completely
    0b657367309e can: flexcan: remove FLEXCAN_QUIRK_DISABLE_MECR quirk for LS1021A
    56c56af0a3a1 can: peak_canfd: pucan_handle_can_rx(): fix echo management when loopback is on
    a23ee9956612 can: peak_usb: peak_usb_get_ts_time(): fix timestamp wrapping
    44b2c4beff8a can: peak_usb: add range checking in decode operations
    d6c34afab0ed can: xilinx_can: handle failure cases of pm_runtime_get_sync
    51920ca7519c can: ti_hecc: ti_hecc_probe(): add missed clk_disable_unprepare() in error path
    b9c4a9a07c4a can: j1939: j1939_sk_bind(): return failure if netdev is down
    0ab4c839409a can: j1939: swap addr and pgn in the send example
    5bde65abe166 can: can_create_echo_skb(): fix echo skb generation: always use skb_clone()
    183f1af506fe can: dev: __can_get_echo_skb(): fix real payload length return value for RTR frames
    ab46748bf988 can: dev: can_get_echo_skb(): prevent call to kfree_skb() in hard IRQ context
    3d0954767918 can: rx-offload: don't call kfree_skb() from IRQ context
    e201588fad54 afs: Fix warning due to unadvanced marshalling pointer
    9946509a027b iommu/vt-d: Fix a bug for PDP check in prq_event_thread
    2825a5bf3ca5 ALSA: hda: prevent undefined shift in snd_hdac_ext_bus_get_link()
    22901751d269 perf tools: Add missing swap for ino_generation
    b36f78fd48e9 perf trace: Fix segfault when trying to trace events by cgroup
    d261d0bd9066 powerpc/eeh_cache: Fix a possible debugfs deadlock
    1c8fe343a79d netfilter: ipset: Update byte and packet counters regardless of whether they match
    ad017cf5dace netfilter: nf_tables: missing validation from the abort path
    56907fa27b94 netfilter: use actual socket sk rather than skb sk when routing harder
    6234710dc634 xfs: set xefi_discard when creating a deferred agfl free log intent item
    933f911136e2 ASoC: codecs: wcd9335: Set digital gain range correctly
    5cb904da85ed net: xfrm: fix a race condition during allocing spi
    4e438ca1b629 hv_balloon: disable warning when floor reached
    bb2b60242c8e genirq: Let GENERIC_IRQ_IPI select IRQ_DOMAIN_HIERARCHY
    bb8c6bd53cc0 ASoC: Intel: kbl_rt5663_max98927: Fix kabylake_ssp_fixup function
    a8ec66026dd8 btrfs: reschedule when cloning lots of extents
    0ee771e96954 btrfs: sysfs: init devices outside of the chunk_mutex
    c58fa93b1409 btrfs: tracepoints: output proper root owner for trace_find_free_extent()
    e24516cf62f9 usb: dwc3: gadget: Reclaim extra TRBs after request completion
    ab031673e2ab usb: dwc3: gadget: Continue to process pending requests
    504cfb5e3bca PCI: qcom: Make sure PCIe is reset before init for rev 2.1.0
    9dfbc2f82ac8 KVM: arm64: Force PTE mapping on fault resulting in a device mapping
    95fda70d3955 nbd: don't update block size after device is started
    160777b19b86 time: Prevent undefined behaviour in timespec64_to_ns()
    5a39fb2f22fd drm/i915/gem: Flush coherency domains on first set-domain-ioctl
    2544d06afd8d Linux 5.4.77
    19f6d91bdad4 powercap: restrict energy meter to root access
    ec9c6b417e27 Linux 5.4.76
    c3d60c695712 arm64: dts: marvell: espressobin: Add ethernet switch aliases
    b7f7474b3921 perf/core: Fix a memory leak in perf_event_parse_addr_filter()
    21ab13af8c50 xfs: flush for older, xfs specific ioctls
    258d01b1577e PM: runtime: Resume the device earlier in __device_release_driver()
    37f75c6aa8dd PM: runtime: Drop pm_runtime_clean_up_links()
    874dfb5c6aa3 PM: runtime: Drop runtime PM references to supplier on link removal
    fbfca92c7840 ARC: stack unwinding: avoid indefinite looping
    d61edc06002f drm/panfrost: Fix a deadlock between the shrinker and madvise path
    b9d91fa92164 usb: mtu3: fix panic in mtu3_gadget_stop()
    b0d03a1bdb3c USB: Add NO_LPM quirk for Kingston flash drive
    290fcf3e0c0c usb: dwc3: ep0: Fix delay status handling
    86875e1d6426 tty: serial: fsl_lpuart: LS1021A has a FIFO size of 16 words, like LS1028A
    8febdfb5973d tty: serial: fsl_lpuart: add LS1028A support
    d5d3cca9d61f USB: serial: option: add Telit FN980 composition 0x1055
    7f7be9341b86 USB: serial: option: add LE910Cx compositions 0x1203, 0x1230, 0x1231
    b7f74775c2bb USB: serial: option: add Quectel EC200T module support
    9d34dbab6ef4 USB: serial: cyberjack: fix write-URB completion race
    62c4b2b21e3b serial: txx9: add missing platform_driver_unregister() on error in serial_txx9_init
    085fc4784e4b serial: 8250_mtk: Fix uart_get_baud_rate warning
    b33a1039564c s390/pkey: fix paes selftest failure with paes and pkey static build
    beeb658cfd35 fork: fix copy_process(CLONE_PARENT) race with the exiting ->real_parent
    642181fe3567 vt: Disable KD_FONT_OP_COPY
    cfd9d7137759 Revert "coresight: Make sysfs functional on topologies with per core sink"
    8ee6a0f25457 arm64/smp: Move rcu_cpu_starting() earlier
    eceb94287dbf drm/nouveau/gem: fix "refcount_t: underflow; use-after-free"
    7d0de6f87257 drm/nouveau/nouveau: fix the start/end range for migration
    4dab0fd40323 usb: cdns3: gadget: suspicious implicit sign extension
    937753df482c ACPI: NFIT: Fix comparison to '-ENXIO'
    16476c2b26ca drm/vc4: drv: Add error handding for bind
    a04cec1dd293 nvmet: fix a NULL pointer dereference when tracing the flush command
    8c9c03432500 nvme-rdma: handle unexpected nvme completion data length
    2fd9e60760ef vsock: use ns_capable_noaudit() on socket create
    2149aa583068 scsi: ibmvscsi: Fix potential race after loss of transport
    1247f4e29188 drm/amdgpu: add DID for navi10 blockchain SKU
    fd4fb5080725 scsi: core: Don't start concurrent async scan on same host
    3c52715ceaae blk-cgroup: Pre-allocate tree node on blkg_conf_prep
    f77756ea6641 blk-cgroup: Fix memleak on error path
    914fc5524261 drm/sun4i: frontend: Fix the scaler phase on A33
    f743f73f42a7 drm/sun4i: frontend: Reuse the ch0 phase for RGB formats
    6d7b41a67687 drm/sun4i: frontend: Rework a bit the phase data
    147e3743cf7a of: Fix reserved-memory overlap detection
    6e02c29e4ac4 x86/kexec: Use up-to-dated screen_info copy to fill boot params
    3283d4d78412 arm64: dts: meson: add missing g12 rng clock
    69e0e917c7c8 ARM: dts: sun4i-a10: fix cpu_alert temperature
    2716e78a6486 futex: Handle transient "ownerless" rtmutex state correctly
    ec5f524e0293 tracing: Fix out of bounds write in get_trace_buf
    9f6883fce694 spi: bcm2835: fix gpio cs level inversion
    f352cca84625 regulator: defer probe when trying to get voltage from unresolved supply
    a69af5baed80 ftrace: Handle tracing when switching between context
    3058420f40fb ftrace: Fix recursion check for NMI test
    cfaf010cf345 mtd: spi-nor: Don't copy self-pointing struct around
    aef59b5e5bdf ring-buffer: Fix recursion protection transitions between interrupt context
    2cd71743e7ff gfs2: Wake up when sd_glock_disposal becomes zero
    d2286457bd83 mm: always have io_remap_pfn_range() set pgprot_decrypted()
    1b8490d6b809 kthread_worker: prevent queuing delayed work from timer_fn when it is being canceled
    b1d16be4f2f4 lib/crc32test: remove extra local_irq_disable/enable
    c1f729c7dec0 mm: mempolicy: fix potential pte_unmap_unlock pte error
    f7c2913d606b ALSA: usb-audio: Add implicit feedback quirk for MODX
    26a871cf86cb ALSA: usb-audio: Add implicit feedback quirk for Qu-16
    a46e830d017e ALSA: usb-audio: add usb vendor id as DSD-capable for Khadas devices
    65457e345f3c ALSA: usb-audio: Add implicit feedback quirk for Zoom UAC-2
    72ce616ed55a ALSA: hda/realtek - Enable headphone for ASUS TM420
    f7d0f7242405 ALSA: hda/realtek - Fixed HP headset Mic can't be detected
    61402d61a2af Fonts: Replace discarded const qualifier
    e5ea79bb19f8 sfp: Fix error handing in sfp_probe()
    9b5458effeee sctp: Fix COMM_LOST/CANT_STR_ASSOC err reporting on big-endian platforms
    26ffb8916059 powerpc/vnic: Extend "failover pending" window
    92e65059beda net: usb: qmi_wwan: add Telit LE910Cx 0x1230 composition
    8e3c047f814b ip_tunnel: fix over-mtu packet send fail without TUNNEL_DONT_FRAGMENT flags
    ac343efb572c ionic: check port ptr before use
    6ef3bcc25a3e gianfar: Account for Tx PTP timestamp in the skb headroom
    5b66a5b6a9e2 gianfar: Replace skb_realloc_headroom with skb_cow_head for PTP
    7bf7b7c385a1 chelsio/chtls: fix always leaking ctrl_skb
    14d755a4815e chelsio/chtls: fix memory leaks caused by a race
    57bb59f9d8fb cadence: force nonlinear buffers to be cloned
    1695fca8a923 ptrace: fix task_join_group_stop() for the case when current is traced
    76e5bba75a63 tipc: fix use-after-free in tipc_bcast_get_mode
    ca16a42f5f0d arm64: Change .weak to SYM_FUNC_START_WEAK_PI for arch/arm64/lib/mem*.S
    d94589900d98 arm64: lib: Use modern annotations for assembly functions
    3e7050661d95 arm64: asm: Add new-style position independent function annotations
    840d8c9b3e5f linkage: Introduce new macros for assembler symbols
    1ca84322ab5b ASoC: Intel: Skylake: Add alternative topology binary name
    e05dfcff26e9 drm/i915: Drop runtime-pm assert from vgpu io accessors
    d321f127eb51 drm/i915/gt: Delay execlist processing for tgl
    5bcd18bf8082 drm/i915: Break up error capture compression loops with cond_resched()

(From OE-Core rev: 1dcfaba6c60805a3987a0bbdc8fbf61225a41dc1)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6063baedd741e1ae86a2c42cd2dc41899718a2d5)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Bruce Ashfield
f6434fde67 linux-yocto/5.8: ext4/tipc warning fixups
Integrating the following commit(s) to linux-yocto/5.8:

    3c5d210805d6 tipc: fix -Wstringop-truncation warnings
    cc89fd77c248 ext4: fix -Wstringop-truncation warnings

(From OE-Core rev: 234c8101d642120b08b369d305914b1560f140db)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 45a229f84fe71b251530bb182c1ad03a88f592a8)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Bruce Ashfield
e46465c718 linux-yocto/5.8: perf: Alias SYS_futex with SYS_futex_time64 on 32-bit arches with 64bit time_t
Integrating the following commit(s) to linux-yocto/5.8:

    52b840afae05 perf: Alias SYS_futex with SYS_futex_time64 on 32-bit arches with 64bit time_t

(From OE-Core rev: fbcd54a3db79e85aa1180523ca2903bf03ff7462)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 41135c844af1165b1e74e8e2654784f3cd4def8b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Bruce Ashfield
e4156f232b linux-yocto/5.4: perf: Alias SYS_futex with SYS_futex_time64 on 32-bit arches with 64bit time_t
Integrating the following commit(s) to linux-yocto/5.4:

    356914747645 perf: Alias SYS_futex with SYS_futex_time64 on 32-bit arches with 64bit time_t

(From OE-Core rev: 7c8b7ed2ece21b5473eca2144c8b9a01d0197475)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 73ee256e5c1194ec5d0843dee274d29cc0efe993)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
Denys Zagorui
bfa254bd1a kernel-devsrc: improve reproducibility for arm64
.vdso-offsets.h.cmd contains the command that was used to produce vdso-offsets.h.
It breaks reproducibility because it contains an absolute path. There is no
value in packaging such files, so they can be dropped.

(From OE-Core rev: b627c00c624f9f9279c21ddd4d8aa9a8a592a8d3)

Signed-off-by: Denys Zagorui <dzagorui@cisco.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d31b4db24643b0867c654af34c684b4de2f8122b)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:08 +00:00
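
In sketch form, the cleanup described above amounts to pruning the offending .cmd file while staging the sources (the exact recipe location and path are assumptions):

    # .vdso-offsets.h.cmd embeds the absolute build command line; drop it
    find ${D} -name '.vdso-offsets.h.cmd' -delete
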
Vyacheslav Yurkov
4315a12330 license_image.bbclass: use canonical name for license files
When copying license files to the image rootfs, i.e to
/usr/share/common-licenses, a canonical name of a license should be
used, otherwise duplicated files end up in common-licenses directory.

For example, GPL-2.0 license according to conf/license.conf can be
referenced in recipes as GPL-2, GPLv2, and GPLv2.0. If a license name is
used directly, we end up with three files in the rootfs with the same
content. If a canonical name used instead, then each license gets copied
only once.

(From OE-Core rev: 0fda54af52dfb57598ea9409113d33dacb786dc1)

Signed-off-by: Vyacheslav Yurkov <Vyacheslav.Yurkov@bruker.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 670fe71dd18ea675f35581db4a61fda137f8bf00)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-12-03 23:02:07 +00:00
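
license.bbclass already provides a canonical_license() helper for exactly this mapping; the gist of the change in sketch form (the surrounding variable names are assumptions):

    # map aliases such as GPL-2, GPLv2 and GPLv2.0 onto one canonical
    # name before copying, so each license file is written only once
    canonical = canonical_license(d, lic)
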
Lee Chee Yang
9b58e1d1a8 qemu: fix CVE-2020-24352
(From OE-Core rev: 12bee66a42a7c2a38789ddb37cb098bcbf0b3841)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Lee Chee Yang
f4ff33fd11 python3: fix CVE-2020-27619
(From OE-Core rev: 0edf9f32929c462b9b53f0cdc7e5ecf816fbb7b3)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Lee Chee Yang
f9f50c5638 libproxy: fix CVE-2020-26154
(From OE-Core rev: af85169a4dfb2fc4dc820409eb4a7756dc14e894)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Max Krummenacher
23eef02eff linux-firmware: rdepend on license for all nvidia packages
Fixes commit 0671d04978 ("linux-firmware: package nvidia firmware")

(From OE-Core rev: cbe3142a32363a45c9935b6ee748f217a699f6b8)

Signed-off-by: Max Krummenacher <max.krummenacher@toradex.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 59789dea33629a96f0fe5646eb684aa131e167bf)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Loic Domaigne
bef1f4761e rootfs_*.bbclass: fix missing vardeps for do_rootfs
As per lib/oe/rootfs.py and lib/oe/package_manager/???/__init__.py,
the PACKAGE_FEED baseurl is defined as the joined paths of:
URIS/BASE_PATHS/ARCHS

Therefore, the do_rootfs task should also depend on
PACKAGE_FEED_{BASE_PATHS,ARCHS} to properly retrigger a build if
their values change.

(From OE-Core rev: 14165724d41a5d00384a9db60b49b37ac4f3b40f)

Signed-off-by: Loic Domaigne (ljd) <tech@domaigne.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e5329464f5ebad909c4c9bd27a718bbd8f4cc221)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
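
The fix itself is a varflag on the task; a sketch of the likely form:

    do_rootfs[vardeps] += "PACKAGE_FEED_URIS PACKAGE_FEED_BASE_PATHS PACKAGE_FEED_ARCHS"
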
Alistair
8b9bdf1d1e weston-init: Fix incorrect idle-time setting
(From OE-Core rev: c7cd893088bc82466bf1843c292731eb5992467b)

Signed-off-by: Alistair Francis <alistair@alistair23.me>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 84b3a6b7bd73ebad90865ee4351578c2109358fb)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Wonmin Jung
1a4b81a392 kernel: Set proper LD in KERNEL_KCONFIG_COMMAND
With 'ld-is-gold' and Linux kernel 5.4 or later, the menuconfig
task for kernel recipes will fail with:

$ bitbake -c menuconfig virtual/kernel
...
scripts/kconfig/mconf  Kconfig
scripts/Kconfig.include:43:  gold linker 'x86_64-poky-linux-ld' not supported
/OE/build/tmp/work-shared/qemux86-64/kernel-source/scripts/kconfig/Makefile:29:
 recipe for target 'menuconfig' failed
make[2]: *** [menuconfig] Error 1
/OE/build/tmp/work-shared/qemux86-64/kernel-source/Makefile:606:
 recipe for target 'menuconfig' failed
make[1]: *** [menuconfig] Error 2
/OE/build/tmp/work-shared/qemux86-64/kernel-source/Makefile:185:
 recipe for target '__sub-make' failed
make: *** [__sub-make] Error 2
Command failed.

This is because the KERNEL_LD variable already set in
kernel-arch.bbclass isn't used by the do_menuconfig function of
cml1.bbclass.

To fix this issue, specify the LD variable when calling the kernel
menuconfig command through KERNEL_KCONFIG_COMMAND.

(From OE-Core rev: 263e0c7a301fc11d3cf4ced4ffb911ebf6cb2f14)

Signed-off-by: Wonmin Jung <wonmin82@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1faf66ce0b1f8f5165277161e07e25e672370c3f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
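
A sketch of the resulting command, assuming the stock kernel-arch.bbclass/kernel.bbclass variables (the exact quoting is an assumption):

    KERNEL_KCONFIG_COMMAND = "make ${KERNEL_EXTRA_ARGS} LD='${KERNEL_LD}' menuconfig"
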
Bruce Ashfield
c111b692cc kernel: relocate copy of module.lds to module compilation task
There were two copies of this patch floating around, and the merged
variant has the copy in the wrong place.

module.lds is only created during modules_prepare, and that target is
not invoked during our main build of the kernel. We aren't about to
change the kernel build (there's no need), so we move the copy into
the compile_kernelmodules task. After that runs, we have module.lds
available to copy.

This has been tested against clean kernel + out-of-tree module
builds, and the dependencies ensure that the file is copied
before the out-of-tree module build starts.

(From OE-Core rev: d9e327063f63193186822d958706081d64ec8139)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 7d94f9209ebaaf59ea001239a889dd7f928a0e7c)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
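
In sketch form, the relocated copy amounts to staging module.lds once the modules build has generated it (the destination path is an assumption):

    do_compile_kernelmodules_append() {
        # module.lds exists by now, so stage it for out-of-tree builds
        if [ -e "${B}/scripts/module.lds" ]; then
            cp "${B}/scripts/module.lds" "${STAGING_KERNEL_BUILDDIR}/scripts/"
        fi
    }
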
Bruce Ashfield
701e43727a kernel: provide module.lds for out of tree builds in v5.10+
The upstream commit 596b0474d3d [kbuild: preprocess module linker
script], adds a dependency on module.lds for external module
building.

Since module.lds is generated as part of 'modules_prepare', we
must make it available with the other kernel artifacts in the
kernel shared workdir, otherwise out of tree builds fail.

This fixes errors like:

    | make[4]: *** No rule to make target 'scripts/module.lds', needed by
        'build/tmp/work/qemuarm64-poky-linux/cryptodev-module/1.11-r0/git/cryptodev.ko'.
        Stop.
    | make[4]: *** Waiting for unfinished jobs....

We also ensure that kernel-devsrc has a copy to support on
target module builds that are often prepared with 'make scripts
prepare'. Those targets won't regenerate it, so the build fails.
If 'make modules_prepare' is used, the file will be regenerated
and overwrite our copy (as expected).

(From OE-Core rev: 27856184dee4b68254cb302b2294c115a46fcf16)

Signed-off-by: Pan, Kris <kris.pan@intel.com>
Signed-off-by: Lili Li <lili.li@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0fc66a0b64953aae38d0124b57615fffaec8de52)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Bruce Ashfield
dedca9ecb7 linux-yocto/5.4: update to v5.4.75
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    6e97ed6efa70 Linux 5.4.75
    6ce4da84e5f4 staging: octeon: Drop on uncorrectable alignment or FCS error
    b869f6b67274 staging: octeon: repair "fixed-link" support
    15506ee68893 staging: comedi: cb_pcidas: Allow 2-channel commands for AO subdevice
    4d934fe936fd staging: fieldbus: anybuss: jump to correct label in an error path
    8fd792948e76 KVM: arm64: Fix AArch32 handling of DBGD{CCINT,SCRext} and DBGVCR
    4cb29cdd5043 device property: Don't clear secondary pointer for shared primary firmware node
    26086875476f device property: Keep secondary firmware node secondary by type
    e793fc391351 ARM: s3c24xx: fix missing system reset
    2937774ef43a ARM: samsung: fix PM debug build with DEBUG_LL but !MMU
    0808ca98e67e arm: dts: mt7623: add missing pause for switchport
    f3d8023e0647 hil/parisc: Disable HIL driver when it gets stuck
    81190a9efde0 cachefiles: Handle readpage error correctly
    4bf2a744a4e7 arm64: berlin: Select DW_APB_TIMER_OF
    c2313d7818b9 tty: make FONTX ioctl use the tty pointer they were actually passed
    beb5d0dfc154 drm/amd/pm: increase mclk switch threshold to 200 us
    071b3300c951 mmc: sdhci: Use Auto CMD Auto Select only when v4_mode is true
    fb4e2a67e193 mmc: sdhci-of-esdhc: set timeout to max before tuning
    b7e1a637eae9 drm/ttm: fix eviction valuable range check.
    b60edf37d5d3 ext4: fix invalid inode checksum
    ae05fdc6d60a ext4: fix error handling code in add_new_gdb
    c0de3cf2f286 ext4: fix leaking sysfs kobject after failed mount
    b11e9dd66e3a vringh: fix __vringh_iov() when riov and wiov are different
    3cfbc13ab3f0 ring-buffer: Return 0 on success from ring_buffer_resize()
    0db6e7161e33 9P: Cast to loff_t before multiplying
    51135ffbb54d libceph: clear con->out_msg on Policy::stateful_server faults
    d4fdbedef767 ceph: promote to unsigned long long before shifting
    9cdccb4761e5 drm/amd/display: Fix kernel panic by dal_gpio_open() error
    d7e22dbc662d drm/amd/display: Don't invoke kgdb_breakpoint() unconditionally
    d1628cdacfb0 drm/amdgpu: increase the reserved VM size to 2MB
    adff3a805c97 drm/amd/display: Avoid MST manager resource leak.
    1e460aa7353d drm/amdkfd: Use same SQ prefetch setting as amdgpu
    d417026c4081 drm/amdgpu: correct the gpu reset handling for job != NULL case
    9887a48d49f0 drm/amd/display: Increase timeout for DP Disable
    987d3814c92c drm/amdgpu: don't map BO in reserved region
    2c58d5e0c754 i2c: imx: Fix external abort on interrupt in exit paths
    da3ccf5b2045 rtc: rx8010: don't modify the global rtc ops
    e17afa6d1de3 ia64: fix build error with !COREDUMP
    da3bb6fa23f1 ubi: check kthread_should_stop() after the setting of task state
    6d0beeebd15d ARC: perf: redo the pct irq missing in device-tree handling
    468811595833 perf python scripting: Fix printable strings in python3 scripts
    a99cbd20a5c5 ubifs: mount_ubifs: Release authentication resource in error handling path
    9ba6324ca9c4 ubifs: Don't parse authentication mount options in remount process
    748057df47b9 ubifs: Fix a memleak after dumping authentication mount options
    bc202c839b5d ubifs: journal: Make sure to not dirty twice for auth nodes
    a77927469760 ubifs: xattr: Fix some potential memory leaks while iterating entries
    213c836b2396 ubifs: dent: Fix some potential memory leaks while iterating entries
    c1ea3c4a4302 NFSD: Add missing NFSv2 .pc_func methods
    da86bb4c214f NFSv4.2: support EXCHGID4_FLAG_SUPP_FENCE_OPS 4.2 EXCHANGE_ID flag
    c342001cab7f NFSv4: Wait for stateid updates after CLOSE/OPEN_DOWNGRADE
    415043c3ec0d powerpc: Fix undetected data corruption with P9N DD2.1 VSX CI load emulation
    94e27f13694c powerpc/powermac: Fix low_sleep_handler with KUAP and KUEP
    61ed8c1b940d powerpc/powernv/elog: Fix race while processing OPAL error log event.
    7850dd0851a3 powerpc/memhotplug: Make lmb size 64bit
    3fa03b7f21a3 powerpc: Warn about use of smt_snooze_delay
    240baebeda09 powerpc/rtas: Restrict RTAS requests from userspace
    551bf7c4bc24 s390/stp: add locking to sysfs functions
    58a7dc5f521a MIPS: DEC: Restore bootmem reservation for firmware working memory area
    73597ab2a9b9 powerpc/drmem: Make lmb_size 64 bit
    829c0a9634b9 iio:gyro:itg3200: Fix timestamp alignment and prevent data leak.
    9f4f75df4b47 iio:adc:ti-adc12138 Fix alignment issue with timestamp
    96a5134423ae iio:adc:ti-adc0832 Fix alignment issue with timestamp
    a8c59abdbc6b iio: adc: gyroadc: fix leak of device node iterator
    ad877be5b983 iio:light:si1145: Fix timestamp alignment and prevent data leak.
    a4f02a81c7e6 dmaengine: dma-jz4780: Fix race in jz4780_dma_tx_status
    f707ccb2f10c udf: Fix memory leak when mounting
    93da9dcee2d2 HID: wacom: Avoid entering wacom_wac_pen_report for pad / battery
    87d398f348b8 vt: keyboard, extend func_buf_lock to readers
    eb4c460e2e06 vt: keyboard, simplify vt_kdgkbsent
    8c16ca600657 drm/i915: Force VT'd workarounds when running as a guest OS
    94478c1dc57d usb: host: fsl-mph-dr-of: check return of dma_set_mask()
    75d0d4ff5970 usb: typec: tcpm: reset hard_reset_count for any disconnect
    543432d078c0 usb: cdc-acm: fix cooldown mechanism
    2850f148cd7f usb: dwc3: gadget: END_TRANSFER before CLEAR_STALL command
    206dcd6ce82f usb: dwc3: gadget: Resume pending requests after CLEAR_STALL
    97224cdc0440 usb: dwc3: core: don't trigger runtime pm when remove driver
    726f638e7cd1 usb: dwc3: core: add phy cleanup for probe error handling
    f935b70cf724 usb: dwc3: gadget: Check MPS of the request length
    1c9e86c933ea usb: dwc3: ep0: Fix ZLP for OUT ep0 requests
    3468cbceb563 usb: dwc3: pci: Allow Elkhart Lake to utilize DSM method for PM functionality
    2600a131e1f6 usb: xhci: Workaround for S3 issue on AMD SNPS 3.0 xHC
    c964d386e849 btrfs: fix readahead hang and use-after-free after removing a device
    dfda50e882f5 btrfs: fix use-after-free on readahead extent after failure to create it
    834a61b2123b btrfs: tree-checker: validate number of chunk stripes and parity
    1cedc54ad3d4 btrfs: cleanup cow block on error
    d3ce2d0fb8b2 btrfs: tree-checker: fix false alert caused by legacy btrfs root item
    4b82b8aba08d btrfs: use kvzalloc() to allocate clone_roots in btrfs_ioctl_send()
    6ec4b82fc322 btrfs: send, recompute reference path after orphanization of a directory
    c2dcc9b03b7f btrfs: send, orphanize first all conflicting inodes when processing references
    e1cf034899b6 btrfs: reschedule if necessary when logging directory items
    223b462744b3 btrfs: improve device scanning messages
    c5f2a5091263 btrfs: qgroup: fix wrong qgroup metadata reserve for delayed inode
    1e2f16dd611b PM: runtime: Remove link state checks in rpm_get/put_supplier()
    a0bdb5b16392 scsi: qla2xxx: Fix crash on session cleanup with unload
    f0ef0e2299f5 scsi: mptfusion: Fix null pointer dereferences in mptscsih_remove()
    3fc2cbba4069 w1: mxc_w1: Fix timeout resolution problem leading to bus error
    a034ea12bdd4 acpi-cpufreq: Honor _PSD table setting on new AMD CPUs
    7f9d9a007e59 ACPI: EC: PM: Drop ec_no_wakeup check from acpi_ec_dispatch_gpe()
    0adf4dbae9c0 ACPI: EC: PM: Flush EC work unconditionally after wakeup
    e7f52fd6e0ef PCI/ACPI: Whitelist hotplug ports for D3 if power managed by ACPI
    6341984bef17 ACPI: debug: don't allow debugging when ACPI is disabled
    1a5f62a3c694 ACPI: video: use ACPI backlight for HP 635 Notebook
    9578d7381432 ACPI / extlog: Check for RDMSR failure
    5e25b44cc2eb ACPI: button: fix handling lid state changes when input device closed
    c75b77cb9f01 NFS: fix nfs_path in case of a rename retry
    f8a6a2ed4b7d fs: Don't invalidate page buffers in block_write_full_page()
    2f3cb993a6f2 media: uvcvideo: Fix uvc_ctrl_fixup_xu_info() not having any effect
    8ac92a5e5fd7 leds: bcm6328, bcm6358: use devres LED registering function
    a908e29705ee extcon: ptn5150: Fix usage of atomic GPIO with sleeping GPIO chips
    004fb028f22c spi: sprd: Release DMA channel also on probe deferral
    d789e1c5b1ce perf/x86/amd/ibs: Fix raw sample data accumulation
    2e2a324641f9 perf/x86/amd/ibs: Don't include randomized bits in get_ibs_op_count()
    f9a48ff99961 perf/x86/intel: Fix Ice Lake event constraint table
    3674b0445b70 selftests/x86/fsgsbase: Test PTRACE_PEEKUSER for GSBASE with invalid LDT GS
    2d1c48227780 seccomp: Make duplicate listener detection non-racy
    470c8c409e1c mmc: sdhci-acpi: AMDI0040: Set SDHCI_QUIRK2_PRESET_VALUE_BROKEN
    3f56e94b6f7c mmc: sdhci: Add LTR support for some Intel BYT based controllers
    b91d4797b3da md/raid5: fix oops during stripe resizing
    a7aa5d578fed nvme-rdma: fix crash when connect rejected
    c421c082088e sgl_alloc_order: fix memory leak
    742fd49cf811 nbd: make the config put is called before the notifying the waiter
    b71dbaf08f9f ARM: dts: s5pv210: remove dedicated 'audio-subsystem' node
    3ad1464467e7 ARM: dts: s5pv210: move PMU node out of clock controller
    8a9024f6e29f ARM: dts: s5pv210: move fixed clocks under root node
    8c1b47e8aa43 ARM: dts: s5pv210: remove DMA controller bus node name to fix dtschema warnings
    c6029d9bc68d memory: emif: Remove bogus debugfs error handling
    2f98e2843b69 ARM: dts: omap4: Fix sgx clock rate for 4430
    c70f909e7ad6 arm64: dts: renesas: ulcb: add full-pwr-cycle-in-suspend into eMMC nodes
    e2dca8845c37 cifs: handle -EINTR in cifs_setattr
    3c78eb161c26 gfs2: add validation checks for size of superblock
    9f7e4bfadfe9 gfs2: use-after-free in sysfs deregistration
    9b58c55ba81c KVM: PPC: Book3S HV: Do not allocate HPT for a nested guest
    d7d7920a7f66 ext4: Detect already used quota file early
    d01b63320799 drivers: watchdog: rdc321x_wdt: Fix race condition bugs
    229bdf0b1319 net: 9p: initialize sun_server.sun_path to have addr's value only when addr is valid
    660e2d9d1417 clk: ti: clockdomain: fix static checker warning
    f66125e1c4df rpmsg: glink: Use complete_all for open states
    dfcfccd05075 bnxt_en: Log unknown link speed appropriately.
    78452408bb3e md/bitmap: md_bitmap_get_counter returns wrong blocks
    4ebdad05129e btrfs: fix replace of seed device
    1f145a1193ea ARC: [dts] fix the errors detected by dtbs_check
    5759f38a63db drm/amd/display: HDMI remote sink need mode validation for Linux
    3ef6095d6587 power: supply: test_power: add missing newlines when printing parameters by sysfs
    cf5a6124f237 ACPI: HMAT: Fix handling of changes from ACPI 6.2 to ACPI 6.3
    37464a8a7f68 bus/fsl_mc: Do not rely on caller to provide non NULL mc_io
    0606a8df86fe drivers/net/wan/hdlc_fr: Correctly handle special skb->protocol values
    592cbc0a6a83 brcmfmac: Fix warning message after dongle setup failed
    cf9cc49cd881 ACPI: Add out of bounds and numa_off protections to pxm_to_node()
    5880a0d1c835 xfs: don't free rt blocks when we're doing a REMAP bunmapi call
    7551e2f4fddd can: flexcan: disable clocks during stop mode
    64129ad98b74 arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
    ea888a14ac6e SUNRPC: Mitigate cond_resched() in xprt_transmit()
    7f7f437277ac usb: xhci: omit duplicate actions when suspending a runtime suspended host.
    8fd52a21ab57 coresight: Make sysfs functional on topologies with per core sink
    2502107a9ccd uio: free uio id after uio file node is freed
    16b9e40d2989 USB: adutux: fix debugging
    65052761eeb9 cpufreq: sti-cpufreq: add stih418 support
    2eab702ee945 riscv: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO
    7762afa04fd4 samples/bpf: Fix possible deadlock in xdpsock
    58c80462e467 selftests/bpf: Define string const as global for test_sysctl_prog.c
    8f71fb76a312 media: uvcvideo: Fix dereference of out-of-bound list iterator
    4801ffdd6962 bpf: Permit map_ptr arithmetic with opcode add and offset 0
    f7f7b77ee507 kgdb: Make "kgdbcon" work properly with "kgdb_earlycon"
    77fa5e15c933 ia64: kprobes: Use generic kretprobe trampoline handler
    b3142fe7ff63 printk: reduce LOG_BUF_SHIFT range for H8300
    80685a94f7c4 arm64: topology: Stop using MPIDR for topology information
    7975367a005f drm/bridge/synopsys: dsi: add support for non-continuous HS clock
    d3fb88a51c04 mmc: via-sdmmc: Fix data race bug
    67e18c92e081 media: imx274: fix frame interval handling
    448e5004ad85 media: tw5864: check status of tw5864_frameinterval_get
    47ab020f3290 usb: typec: tcpm: During PR_SWAP, source caps should be sent only after tSwapSourceStart
    5472c5d1d505 media: platform: Improve queue set up flow for bug fixing
    3a8568806285 media: videodev2.h: RGB BT2020 and HSV are always full range
    ac437801e3c2 selftests/x86/fsgsbase: Reap a forgotten child
    581940d9b9c8 drm/brige/megachips: Add checking if ge_b850v3_lvds_init() is working correctly
    ed0bd7b12939 ath10k: fix VHT NSS calculation when STBC is enabled
    b30a5c8d9def ath10k: start recovery process when payload length exceeds max htc length for sdio
    759721fb5886 video: fbdev: pvr2fb: initialize variables
    b2844ba3d37c xfs: fix realtime bitmap/summary file truncation when growing rt volume
    a10ed3b55fed power: supply: bq27xxx: report "not charging" on all types
    036b0f4d7671 NFS4: Fix oops when copy_file_range is attempted with NFS4.0 source
    13081d5ddb58 ARM: 8997/2: hw_breakpoint: Handle inexact watchpoint addresses
    df5b07f2172a f2fs: handle errors of f2fs_get_meta_page_nofail
    15c7ec03ddb8 um: change sigio_spinlock to a mutex
    fb9b18150e3f s390/startup: avoid save_area_sync overflow
    9804eda4a975 f2fs: fix to check segment boundary during SIT page readahead
    1544dcb514ad f2fs: fix uninit-value in f2fs_lookup
    40b357f7436d f2fs: add trace exit in exception path
    2eab8974aea8 sparc64: remove mm_cpumask clearing to fix kthread_use_mm race
    7d59323cff67 powerpc: select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
    82e93f94ac65 mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race
    dc17b990ee90 powerpc/powernv/smp: Fix spurious DBG() warning
    2db759037152 futex: Fix incorrect should_fail_futex() handling
    87d9ac94c7e7 ata: sata_nv: Fix retrieving of active qcs
    da8e2fbe458c RDMA/qedr: Fix memory leak in iWARP CM
    d90dd1599cf3 mlxsw: core: Fix use-after-free in mlxsw_emad_trans_finish()
    f7e7de28d106 x86/unwind/orc: Fix inactive tasks with stack pointer in %sp on GCC 10 compiled kernels
    6937c143e3d3 firmware: arm_scmi: Add missing Rx size re-initialisation
    aedcfe9a02f8 firmware: arm_scmi: Fix ARCH_COLD_RESET
    85d9d02a49e2 xen/events: block rogue events for some time
    1d628c330fa6 xen/events: defer eoi in case of excessive number of events
    25c23f033457 xen/events: use a common cpu hotplug hook for event channels
    b7d6a66e2172 xen/events: switch user event channels to lateeoi model
    48b533aa838d xen/pciback: use lateeoi irq binding
    9396de462aa6 xen/pvcallsback: use lateeoi irq binding
    5441639a38df xen/scsiback: use lateeoi irq binding
    e6ea898e5602 xen/netback: use lateeoi irq binding
    ade6bd5af7f9 xen/blkback: use lateeoi irq binding
    df54eca9ae8a xen/events: add a new "late EOI" evtchn framework
    44a455e06d87 xen/events: fix race in evtchn_fifo_unmask()
    4bea575a1069 xen/events: add a proper barrier to 2-level uevent unmasking
    a01379671d67 xen/events: avoid removing an event channel while handling it
    b300b28b7814 Linux 5.4.74
    847c86d7f1d5 phy: marvell: comphy: Convert internal SMCC firmware return codes to errno
    aa3410cc232c misc: rtsx: do not setting OC_POWER_DOWN reg in rtsx_pci_init_ocp()
    a6db3aab9c40 openrisc: Fix issue with get_user for 64-bit values
    f73328c3192e crypto: x86/crc32c - fix building with clang ias
    29bbc9cb0b27 xen/gntdev.c: Mark pages as dirty
    8f640cd8ee60 ata: sata_rcar: Fix DMA boundary mask
    9f531583c1f0 PM: runtime: Fix timer_expires data type on 32-bit arches
    870d910e1afb serial: pl011: Fix lockdep splat when handling magic-sysrq interrupt
    44ef3b63c788 serial: qcom_geni_serial: To correct QUP Version detection logic
    c274d1f8baaf mtd: lpddr: Fix bad logic in print_drs_error
    bc67eeb9781b RDMA/addr: Fix race with netevent_callback()/rdma_addr_cancel()
    ebb0adcfbb1f cxl: Rework error message for incompatible slots
    125a229e52e7 p54: avoid accessing the data mapped to streaming DMA
    801863f634c4 evm: Check size of security.evm before using it
    dd2f800e9074 bpf: Fix comment for helper bpf_current_task_under_cgroup()
    860448e73ba2 fuse: fix page dereference after free
    4e1a23779bde ata: ahci: mvebu: Make SATA PHY optional for Armada 3720
    7aae7466f5db x86/xen: disable Firmware First mode for correctable memory errors
    47a4d5406389 arch/x86/amd/ibs: Fix re-arming IBS Fetch
    95daf621291c erofs: avoid duplicated permission check for "trusted." xattrs
    b8321829036f bnxt_en: Invoke cancel_delayed_work_sync() for PFs also.
    b1b5efe574cd bnxt_en: Fix regression in workqueue cleanup logic in bnxt_remove_one().
    aa4dba4e2226 bnxt_en: Re-write PCI BARs after PCI fatal error.
    5c86cda6a529 net: hns3: Clear the CMDQ registers before unmapping BAR region
    30d628ede582 tipc: fix memory leak caused by tipc_buf_append()
    8cc351a3d444 tcp: Prevent low rmem stalls with SO_RCVLOWAT.
    7740774940fc ravb: Fix bit fields checking in ravb_hwtstamp_get()
    4939183bb28c r8169: fix issue with forced threading in combination with shared interrupts
    f1493ab33679 net/sched: act_mpls: Add softdep on mpls_gso.ko
    4bffc9618caf netem: fix zero division in tabledist
    13a4843d3938 mlxsw: core: Fix memory leak on module removal
    c90459593f55 ibmvnic: fix ibmvnic_set_mac
    e781c67629ed gtp: fix an use-before-init in gtp_newlink()
    0ea202010b40 cxgb4: set up filter action after rewrites
    3a0d5b5358d1 chelsio/chtls: fix tls record info to user
    c5db8069776f chelsio/chtls: fix memory leaks in CPL handlers
    a5b9b28b22ba chelsio/chtls: fix deadlock issue
    c17d5aea3395 bnxt_en: Send HWRM_FUNC_RESET fw command unconditionally.
    72c17fadf3f8 bnxt_en: Check abort error state in bnxt_open_nic().
    8e1b40e57dca efivarfs: Replace invalid slashes with exclamation marks in dentries.
    c3019695f1d8 x86/PCI: Fix intel_mid_pci.c build error when ACPI is not enabled
    57a88e44b512 arm64: link with -z norelro regardless of CONFIG_RELOCATABLE
    7736c61080f1 arm64: Run ARCH_WORKAROUND_2 enabling code on all CPUs
    114c6930b351 arm64: Run ARCH_WORKAROUND_1 enabling code on all CPUs
    2dcb0c6c3818 scripts/setlocalversion: make git describe output more reliable
    c8a5496bc747 objtool: Support Clang non-section symbols in ORC generation
    a45c8c0a31a7 socket: don't clear SOCK_TSTAMP_NEW when SO_TIMESTAMPNS is disabled
    bded4de4a5e1 netfilter: nftables_offload: KASAN slab-out-of-bounds Read in nft_flow_rule_create

(From OE-Core rev: daa8aa8af31dc74ba9c916525db348a393fe4f1e)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 16dc22108fcf7e53750424b90c0aeb8dba2dc5e5)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
Bruce Ashfield
d890775c90 linux-yocto/5.8: update to v5.8.18
Updating linux-yocto/5.8 to the latest korg -stable release that comprises
the following commits:

    ab435ce49bd1 Linux 5.8.18
    4a5649e0d379 phy: marvell: comphy: Convert internal SMCC firmware return codes to errno
    b8049438969b misc: rtsx: do not setting OC_POWER_DOWN reg in rtsx_pci_init_ocp()
    ad9ee9ce9d68 openrisc: Fix issue with get_user for 64-bit values
    f594998331bc xen/gntdev.c: Mark pages as dirty
    67e326e4f5df ata: sata_rcar: Fix DMA boundary mask
    f6b94060a123 PM: runtime: Fix timer_expires data type on 32-bit arches
    53faca2f4ca3 serial: pl011: Fix lockdep splat when handling magic-sysrq interrupt
    e3f6c126a3f7 serial: qcom_geni_serial: To correct QUP Version detection logic
    8f924c0a5665 drm/i915/gem: Serialise debugfs i915_gem_objects with ctx->mutex
    241bd102e337 mtd: lpddr: Fix bad logic in print_drs_error
    5868beda60c8 RDMA/addr: Fix race with netevent_callback()/rdma_addr_cancel()
    a8069b80a1fb cxl: Rework error message for incompatible slots
    9f9dc704c8cd p54: avoid accessing the data mapped to streaming DMA
    9f4ef6a90c1b evm: Check size of security.evm before using it
    a42b1273af73 bpf: Fix comment for helper bpf_current_task_under_cgroup()
    07d54b8dc56e fuse: fix page dereference after free
    78453a7dbb1a ata: ahci: mvebu: Make SATA PHY optional for Armada 3720
    4752a1313463 PCI: aardvark: Fix initialization with old Marvell's Arm Trusted Firmware
    b9cc04b049d8 x86/xen: disable Firmware First mode for correctable memory errors
    ea4e8cf5072e x86/traps: Fix #DE Oops message regression
    085f6be2fe88 arch/x86/amd/ibs: Fix re-arming IBS Fetch
    b4818cfc3f9c erofs: avoid duplicated permission check for "trusted." xattrs
    3a9e7db9a40e net: protect tcf_block_unbind with block lock
    af5d5b8afd12 tipc: fix memory leak caused by tipc_buf_append()
    519366f64c27 tcp: Prevent low rmem stalls with SO_RCVLOWAT.
    9ceecfdba701 ravb: Fix bit fields checking in ravb_hwtstamp_get()
    fa67cc69a8c8 r8169: fix issue with forced threading in combination with shared interrupts
    62d9cec6f928 net/sched: act_mpls: Add softdep on mpls_gso.ko
    2bc5d5c373ef net: ipa: command payloads already mapped
    1336d288b353 net: hns3: Clear the CMDQ registers before unmapping BAR region
    7fb8fbceb0e3 netem: fix zero division in tabledist
    25259932e1bb mlxsw: core: Fix memory leak on module removal
    d6f6e3f97885 ibmvnic: fix ibmvnic_set_mac
    4606d3512043 ibmveth: Fix use of ibmveth in a bridge.
    b520e574fdbf gtp: fix an use-before-init in gtp_newlink()
    9921e777a347 cxgb4: set up filter action after rewrites
    b97638e0f3be chelsio/chtls: fix tls record info to user
    eb592f2ae478 chelsio/chtls: fix memory leaks in CPL handlers
    c3208dec446a chelsio/chtls: fix deadlock issue
    b334112f20b7 bnxt_en: Send HWRM_FUNC_RESET fw command unconditionally.
    f739fc7e1072 bnxt_en: Re-write PCI BARs after PCI fatal error.
    7fe9514cfe68 bnxt_en: Invoke cancel_delayed_work_sync() for PFs also.
    bfbbfb501e74 bnxt_en: Fix regression in workqueue cleanup logic in bnxt_remove_one().
    0b17de4d67bf bnxt_en: Check abort error state in bnxt_open_nic().
    c328793e21fb efivarfs: Replace invalid slashes with exclamation marks in dentries.
    61ececc85274 x86/copy_mc: Introduce copy_mc_enhanced_fast_string()
    a092869e0351 x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()
    18703f749e99 x86/PCI: Fix intel_mid_pci.c build error when ACPI is not enabled
    4b0a9591dd78 arm64: link with -z norelro regardless of CONFIG_RELOCATABLE
    dfaa0f7d0832 arm64: Run ARCH_WORKAROUND_2 enabling code on all CPUs
    0ccd5c2c60e0 arm64: Run ARCH_WORKAROUND_1 enabling code on all CPUs
    4720b25e4ca3 fs/kernel_read_file: Remove FIRMWARE_EFI_EMBEDDED enum
    8b23af0ef2f7 efi/arm64: libstub: Deal gracefully with EFI_RNG_PROTOCOL failure
    865013fcf4c3 scripts/setlocalversion: make git describe output more reliable
    6f4c9772e195 io_uring: Convert advanced XArray uses to the normal API
    f7b24bee5e6e io_uring: Fix XArray usage in io_uring_add_task_file
    efce965a49f1 io_uring: Fix use of XArray in __io_uring_files_cancel
    5ee3fea0c227 io_uring: no need to call xa_destroy() on empty xarray
    0ca6ce23f4f6 io-wq: fix use-after-free in io_wq_worker_running
    4863be653425 io_wq: Make io_wqe::lock a raw_spinlock_t
    b6a6d1df552b io_uring: reference ->nsproxy for file table commands
    511abceaf0a0 io_uring: don't rely on weak ->files references
    fdc84c9bf131 io_uring: enable task/files specific overflow flushing
    3de61f9bcc1c io_uring: return cancelation status from poll/timeout/files handlers
    f34e674fbe6d io_uring: unconditionally grab req->task
    bf0305989241 io_uring: stash ctx task reference for SQPOLL
    dd1acc182c85 io_uring: move dropping of files into separate helper
    cecf78cc0890 io_uring: allow timeout/poll/files killing to take task into account
    07463d7da999 io_uring: don't run task work on an exiting task
    6e1f770fbc0a netfilter: nftables_offload: KASAN slab-out-of-bounds Read in nft_flow_rule_create

(From OE-Core rev: ba9858ac4397958b0e693b687622923266c951c7)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8c81b83bfe7cb870eb12c93d0793cad27d1de162)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:58 +00:00
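
As context for what a version bump like the one above actually changes in the recipe: in the linux-yocto recipes the kernel version and the pinned source revisions are plain BitBake variables. A minimal sketch in the style of linux-yocto_5.8.bb follows; the SRCREV hashes are placeholders, not the revisions from this commit, and the real recipe also pins per-machine SRCREV_machine overrides:

    # Illustrative excerpt only -- hashes below are placeholders.
    LINUX_VERSION = "5.8.18"
    SRCREV_machine = "0000000000000000000000000000000000000000"
    SRCREV_meta    = "0000000000000000000000000000000000000000"
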
Bruce Ashfield
fd3e68b355 linux-yocto/5.8: config cleanup / warnings
Integrating the following commit(s):

    d5ca337b7e9 bsp/mti-malta64: fix warning of CONFIG_SCSI_VIRTIO on qemumips64
    63c7a70c90f net/l2tp.cfg: fix CONFIG_PPPOL2TP mismatched warnings

(From OE-Core rev: f74584cfafccad63967ff8ae63bf3375f5e2c274)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bc51dcff0b23827fc05a6203c889154616f48014)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
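
The config-audit warnings fixed above come from linux-yocto's kernel configuration fragments: each fragment is a small .cfg file of Kconfig assignments that is merged into the final .config, and a "mismatched" warning is reported when a requested value does not survive Kconfig dependency resolution. A minimal illustrative fragment, not the literal contents of the commits above:

    # net/l2tp.cfg -- illustrative only. CONFIG_PPPOL2TP depends on L2TP and
    # PPP; requesting it without those dependencies enabled is the kind of
    # thing that produces a mismatched-option warning.
    CONFIG_PPP=m
    CONFIG_L2TP=m
    CONFIG_PPPOL2TP=m
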
Bruce Ashfield
678eafa74d linux-yocto/5.4: config cleanup / warnings
Integrating the following commit(s):

    eadca496e9f bsp/mti-malta64: fix warning of CONFIG_SCSI_VIRTIO on qemumips64
    203911bc035 net/l2tp.cfg: fix CONFIG_PPPOL2TP mismatched warnings

(From OE-Core rev: 33edfd487088b674b1e512eaa33c43542a9d1441)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e8df0a1f9607417f3f308b9ff852e287837b6cdf)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
c2014927f2 linux-yocto-dev: move to v5.10-rc
(From OE-Core rev: a8637f9f52a7541250dce4b1da1676b9894501f2)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a04e56631c4bc7fac58e2f157beea3423195ad8e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
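
For builds that want to follow this v5.10-rc kernel, linux-yocto-dev is opt-in rather than the default. A typical selection in conf/local.conf looks like the following sketch:

    # Illustrative local.conf lines; linux-yocto-dev tracks the
    # in-development kernel and is not selected by default.
    PREFERRED_PROVIDER_virtual/kernel = "linux-yocto-dev"
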
Bruce Ashfield
c5b7872dab linux-yocto/5.4: update to v5.4.73
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    bde3f94035b0 Linux 5.4.73
    3c7ccd7d4ace usb: gadget: f_ncm: allow using NCM in SuperSpeed Plus gadgets.
    efb893a56cea eeprom: at25: set minimum read/write access stride to 1
    8011f45598cd usb: cdns3: gadget: free interrupt after gadget has deleted
    ed134662a62b USB: cdc-wdm: Make wdm_flush() interruptible and add wdm_fsync().
    2cc661ab2bde usb: cdc-acm: add quirk to blacklist ETAS ES58X devices
    1d2ce4350a01 tty: serial: fsl_lpuart: fix lpuart32_poll_get_char
    231146202650 tty: serial: lpuart: fix lpuart32_write usage
    a8a4b17bcc9d s390/qeth: don't let HW override the configured port role
    905f0d17a07f net: korina: cast KSEG0 address to pointer in kfree
    9bca56ad2f0a ath10k: check idx validity in __ath10k_htt_rx_ring_fill_n()
    18ec92b1ce29 dmaengine: dw: Activate FIFO-mode for memory peripherals only
    190bce292b73 dmaengine: dw: Add DMA-channels mask cell support
    bc94a025cfd2 scsi: ufs: ufs-qcom: Fix race conditions caused by ufs_qcom_testbus_config()
    e13f0d325a04 usb: core: Solve race condition in anchor cleanup functions
    5912b09c97cd brcm80211: fix possible memleak in brcmf_proto_msgbuf_attach
    36df67bd0097 scsi: smartpqi: Avoid crashing kernel for controller issues
    d00555d2255f ALSA: hda/ca0132 - Add new quirk ID for SoundBlaster AE-7.
    4529f9e5067c ALSA: hda/ca0132 - Add AE-7 microphone selection commands.
    752df39ed6e1 mwifiex: don't call del_timer_sync() on uninitialized timer
    045f29c16fcf reiserfs: Fix memory leak in reiserfs_parse_options()
    109f5845a60f ipvs: Fix uninit-value in do_ip_vs_set_ctl()
    8f8df766f75c Bluetooth: btusb: Fix memleak in btusb_mtk_submit_wmt_recv_urb
    4886c2cf3d91 tty: ipwireless: fix error handling
    e80b7ebcfda7 fbmem: add margin check to fb_check_caps()
    f14811c617b4 scsi: qedi: Fix list_del corruption while removing active I/O
    56b2fd0cbfb0 scsi: qedi: Protect active command list to avoid list corruption
    f8bf0bbee1cc scsi: qedf: Return SUCCESS if stale rport is encountered
    09e4f2271178 HID: ite: Add USB id match for Acer One S1003 keyboard dock
    f3c23dcff8fb Fix use after free in get_capset_info callback.
    a4638768b03d rtl8xxxu: prevent potential memory leak
    d5eb55b5f96f brcmsmac: fix memory leak in wlc_phy_attach_lcnphy
    061d2f3fce45 selftests/bpf: Fix test_sysctl_loop{1, 2} failure due to clang change
    d399015f191b scsi: qla2xxx: Warn if done() or free() are called on an already freed srb
    0bb4a0b5a0ec scsi: ibmvfc: Fix error return in ibmvfc_probe()
    ff9c607f0355 iomap: fix WARN_ON_ONCE() from unprivileged users
    6458e8e8689b drm/msm/a6xx: fix a potential overflow issue
    bab673eef853 Bluetooth: Only mark socket zapped after unlocking
    78a47ef68262 usb: ohci: Default to per-port over-current protection
    df01087859fa xfs: make sure the rt allocator doesn't run off the end
    09b63105d089 opp: Prevent memory leak in dev_pm_opp_attach_genpd()
    6ff3df752c06 reiserfs: only call unlock_new_inode() if I_NEW
    0e3f41b6bec0 misc: rtsx: Fix memory leak in rtsx_pci_probe
    3a8d86d8da1b bpf: Limit caller's stack depth 256 for subprogs with tailcalls
    6c3a1aabfcff drm/panfrost: add amlogic reset quirk callback
    a9990ed2d7ca ath9k: hif_usb: fix race condition between usb_get_urb() and usb_kill_anchored_urbs()
    85b757ca3005 can: flexcan: flexcan_chip_stop(): add error handling and propagate error value
    42e781da7b37 usb: dwc3: simple: add support for Hikey 970
    0e1fb72e27d7 USB: cdc-acm: handle broken union descriptors
    ca4261a249dd rtw88: increse the size of rx buffer size
    41ce99a3ef1a udf: Avoid accessing uninitialized data on failed inode read
    01d886b89eb8 udf: Limit sparing table size
    e9e791f5c39a usb: gadget: function: printer: fix use-after-free in __lock_acquire
    08045050c6bd usb: dwc3: Add splitdisable quirk for Hisilicon Kirin Soc
    821dcabafded misc: vop: add round_up(x,4) for vring_size to avoid kernel panic
    85efddd97b72 mic: vop: copy data to kernel space then write to io memory
    e93b629d347e scsi: target: core: Add CONTROL field for trace events
    7cb5830b775a scsi: mvumi: Fix error return in mvumi_io_attach()
    267edd6478f9 PM: hibernate: remove the bogus call to get_gendisk() in software_resume()
    9ff197703e25 mac80211: handle lack of sband->bitrates in rates
    c8b6ad0a8afb ip_gre: set dev->hard_header_len and dev->needed_headroom properly
    16281bdd202f ntfs: add check for mft record size in superblock
    05f9cc28a954 media: venus: core: Fix runtime PM imbalance in venus_probe
    0ce7ba162b35 fs: dlm: fix configfs memory leak
    ed99b3e5117d media: venus: fixes for list corruption
    4f6af5a3c0f4 media: saa7134: avoid a shift overflow
    cb475ba4400f mmc: sdio: Check for CISTPL_VERS_1 buffer size
    67806a68d52c media: uvcvideo: Ensure all probed info is returned to v4l2
    6827d62a86de x86/mce: Make mce_rdmsrl() panic on an inaccessible MSR
    7aa3f954cd91 media: media/pci: prevent memory leak in bttv_probe
    ad3825eedb16 media: bdisp: Fix runtime PM imbalance on error
    e1285a73c5fa media: platform: sti: hva: Fix runtime PM imbalance on error
    8d727e1d261a media: platform: s3c-camif: Fix runtime PM imbalance on error
    6b3f0742f531 media: vsp1: Fix runtime PM imbalance on error
    7db4c3dfee01 media: exynos4-is: Fix a reference count leak
    f36a80bc7512 media: exynos4-is: Fix a reference count leak due to pm_runtime_get_sync
    8babe11e46ba media: exynos4-is: Fix several reference count leaks due to pm_runtime_get_sync
    62f3bc07008d media: sti: Fix reference count leaks
    e4d4abe6e86f media: st-delta: Fix reference count leak in delta_run_work
    d310c7437cb8 media: ati_remote: sanity check for both endpoints
    b4325c738f8f media: firewire: fix memory leak
    d06ea207e90b x86/mce: Add Skylake quirk for patrol scrub reported errors
    624c2782b49d x86/asm: Replace __force_order with a memory clobber
    fce2779e1c6e crypto: ccp - fix error handling
    b3a0ed411008 block: ratelimit handle_bad_sector() message
    a47cecbd2816 md/bitmap: fix memory leak of temporary bitmap
    44e2bc80a6ec i2c: core: Restore acpi_walk_dep_device_list() getting called after registering the ACPI i2c devs
    f224b8be9e31 perf: correct SNOOPX field offset
    78e27678db4e sched/features: Fix !CONFIG_JUMP_LABEL case
    13153509d8f3 NTB: hw: amd: fix an issue about leak system resources
    abd19984441c nvmet: fix uninitialized work for zero kato
    5ef1279abc74 powerpc/pseries: Avoid using addr_to_pfn in real mode
    72ccbd1481cb powerpc/powernv/dump: Fix race while processing OPAL dump
    d21b8c8fbf89 lightnvm: fix out-of-bounds write to array devices->info[]
    b0b10fa454ea ARM: dts: meson8: remove two invalid interrupt lines from the GPU node
    7de30421d646 arm64: dts: zynqmp: Remove additional compatible string for i2c IPs
    64b8f8fbe939 ARM: OMAP2+: Restore MPU power domain if cpu_cluster_pm_enter() fails
    55a7acbc0495 soc: fsl: qbman: Fix return value on success
    c7ffa707e657 ARM: dts: owl-s500: Fix incorrect PPI interrupt specifiers
    d725df0e2bbb arm64: dts: actions: limit address range for pinctrl node
    449ad29d76f7 arm64: dts: renesas: r8a774c0: Fix MSIOF1 DMA channels
    845e4eefd3c4 arm64: dts: renesas: r8a77990: Fix MSIOF1 DMA channels
    b78cdf1b51fc arm64: dts: qcom: msm8916: Fix MDP/DSI interrupts
    1e61c8fda1bb arm64: dts: qcom: pm8916: Remove invalid reg size from wcd_codec
    975dafc038f0 arm64: dts: qcom: msm8916: Remove one more thermal trip point unit name
    08ece4ba2a6e arm64: dts: imx8mq: Add missing interrupts to GPC
    93c3898ee8df memory: fsl-corenet-cf: Fix handling of platform_get_irq() error
    c072b76699a4 memory: omap-gpmc: Fix build error without CONFIG_OF
    afb15453ca4c memory: omap-gpmc: Fix a couple off by ones
    8426055fc960 arm64: dts: allwinner: h5: remove Mali GPU PMU module
    ec65c6a90621 ARM: dts: sun8i: r40: bananapi-m2-ultra: Fix dcdc1 regulator
    46ac92161144 ARM: s3c24xx: fix mmc gpio lookup tables
    e118c1527ffe ARM: at91: pm: of_node_put() after its usage
    5c4c2f437cea ARM: dts: imx6sl: fix rng node
    c1430c876984 arm64: dts: meson: vim3: correct led polarity
    6dbdc81b2625 netfilter: nf_fwd_netdev: clear timestamp in forwarding path
    2f3839075a5f netfilter: ebtables: Fixes dropping of small packets in bridge nat
    4d1eec59628c netfilter: conntrack: connection timeout after re-register
    e6b7b40aced7 scsi: bfa: Fix error return in bfad_pci_init()
    48df327e4b04 KVM: x86: emulating RDPID failure shall return #UD rather than #GP
    ad87f31648ab Input: sun4i-ps2 - fix handling of platform_get_irq() error
    cb3b77359a26 Input: twl4030_keypad - fix handling of platform_get_irq() error
    2f967303cbdd Input: omap4-keypad - fix handling of platform_get_irq() error
    2106d1cbe1c2 Input: ep93xx_keypad - fix handling of platform_get_irq() error
    b205eef76388 Input: stmfts - fix a & vs && typo
    81e5e2c268e9 Input: imx6ul_tsc - clean up some errors in imx6ul_tsc_resume()
    6498597aeb4c SUNRPC: fix copying of multiple pages in gss_read_proxy_verf()
    e412625f38a4 clk: imx8mq: Fix usdhc parents order
    b4035b3d64b6 vfio iommu type1: Fix memory leak in vfio_iommu_type1_pin_pages
    f54d8a9e37b0 vfio/pci: Clear token on bypass registration failure
    f2f616f3e333 ext4: limit entries returned when counting fsmap records
    9c27185e12e8 svcrdma: fix bounce buffers for unaligned offsets and multiple pages
    120222811b2e watchdog: sp5100: Fix definition of EFCH_PM_DECODEEN3
    dbb9ef17777e watchdog: Use put_device on error
    a8bbb47d94af watchdog: Fix memleak in watchdog_cdev_register
    9a3ee7177f72 clk: bcm2835: add missing release if devm_clk_hw_register fails
    c10e3c919a69 clk: at91: clk-main: update key before writing AT91_CKGR_MOR
    1ed7508e684e module: statically initialize init section freeing data
    b213999028e6 clk: mediatek: add UART0 clock support
    56e68e2cd8fe clk: rockchip: Initialize hw to error to avoid undefined behavior
    72407e5aa058 pwm: img: Fix null pointer access in probe
    7e5155fdd061 clk: keystone: sci-clk: fix parsing assigned-clock data during probe
    5b8882b53b0c clk: qcom: gcc-sdm660: Fix wrong parent_map
    fddcf515454e vfio/pci: Decouple PCI_COMMAND_MEMORY bit checks from is_virtfn
    42f16b3add6c PCI/IOV: Mark VFs as not implementing PCI_COMMAND_MEMORY
    aafa4b4c38e8 rpmsg: smd: Fix a kobj leak in in qcom_smd_parse_edge()
    833f3c362f63 PCI: iproc: Set affinity mask on MSI interrupts
    bcb9394accb6 PCI: aardvark: Check for errors from pci_bridge_emul_init() call
    bf65e6c51ac4 clk: meson: g12a: mark fclk_div2 as critical
    423e65dcd594 i2c: rcar: Auto select RESET_CONTROLLER
    63bd88ba8865 mailbox: avoid timer start from callback
    fe1936208e3f rapidio: fix the missed put_device() for rio_mport_add_riodev
    bfab0711eb27 rapidio: fix error handling path
    c5df8ff043c3 ramfs: fix nommu mmap with gaps in the page cache
    410f50b41c14 lib/crc32.c: fix trivial typo in preprocessor condition
    a3a45516c70e mm/page_owner: change split_page_owner to take a count
    06727f797f45 RDMA/rxe: Handle skb_clone() failure in rxe_recv.c
    6fa4d484bada f2fs: wait for sysfs kobject removal before freeing f2fs_sb_info
    f08ae0c46198 selftests/powerpc: Fix eeh-basic.sh exit codes
    180cf2e5f722 maiblox: mediatek: Fix handling of platform_get_irq() error
    e7f0b9ab8b7d RDMA/rxe: Fix skb lifetime in rxe_rcv_mcast_pkt()
    7efb373881f7 IB/rdmavt: Fix sizeof mismatch
    bc2cba6b2d5a cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_reboot_notifier
    56c30ffe5fcd i3c: master: Fix error return in cdns_i3c_master_probe()
    ebe1a014d7ed powerpc/perf/hv-gpci: Fix starting index value
    271e53005a26 powerpc/perf: Exclude pmc5/6 from the irrelevant PMU group constraints
    dc1d4c658b9c RDMA/ipoib: Set rtnl_link_ops for ipoib interfaces
    c3a1c7b426b9 overflow: Include header file with SIZE_MAX declaration
    de47278648aa kdb: Fix pager search for multi-line strings
    626e2200f80b mtd: spinand: gigadevice: Add QE Bit
    8999f59944e3 mtd: spinand: gigadevice: Only one dummy byte in QUADIO
    2bb74bc921e0 mtd: rawnand: vf610: disable clk on error handling path in probe
    5e3782b1fae1 RDMA/hns: Fix missing sq_sig_type when querying QP
    eff57fbc2377 RDMA/hns: Fix the wrong value of rnr_retry when querying qp
    1e583b2948ae perf stat: Skip duration_time in setup_system_wide
    b79dd191680f i40iw: Add support to make destroy QP synchronous
    61ad14e24eba RDMA/mlx5: Disable IB_DEVICE_MEM_MGT_EXTENSIONS if IB_WR_REG_MR can't work
    4b1d559cc5c6 RDMA/hns: Set the unsupported wr opcode
    0ff75bfed10d perf intel-pt: Fix "context_switch event has no tid" error
    cee5080a0776 RDMA/cma: Consolidate the destruction of a cma_multicast in one place
    7c4fec28980d RDMA/cma: Remove dead code for kernel rdmacm multicast
    557c184df3c5 powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm
    148d4f4dc75e powerpc/tau: Disable TAU between measurements
    72407b8d08b3 powerpc/tau: Check processor type before enabling TAU interrupt
    68a8ec0b022f powerpc/tau: Remove duplicated set_thresholds() call
    c0578b423b5e powerpc/tau: Convert from timer to workqueue
    0305488040dc powerpc/tau: Use appropriate temperature sample interval
    a2087c04a2ac powerpc/book3s64/hash/4k: Support large linear mapping range with 4K
    8fd3154eb0ee RDMA/qedr: Fix inline size returned for iWARP
    97336c8296b5 RDMA/qedr: Fix return code if accept is called on a destroyed qp
    4c5f385ab49e RDMA/qedr: Fix use of uninitialized field
    e0a970d8f627 RDMA/qedr: Fix qp structure memory leak
    1738b03e34ad RDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz()
    85e40ba1c4a5 RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary
    b1712ec30dfb xfs: fix high key handling in the rt allocator's query_range function
    b005b448daf2 xfs: fix deadlock and streamline xfs_getfsmap performance
    adc3e2698637 xfs: limit entries returned when counting fsmap records
    2577720d35e2 ida: Free allocated bitmap in error path
    3789f5cfd600 arc: plat-hsdk: fix kconfig dependency warning when !RESET_CONTROLLER
    67c2e58b684e ARM: 9007/1: l2c: fix prefetch bits init in L2X0_AUX_CTRL using DT values
    baa7ea082f8e mtd: mtdoops: Don't write panic data twice
    b8d4f65c6ae2 RDMA/mlx5: Fix potential race between destroy and CQE poll
    935950e3190d pseries/drmem: don't cache node id in drmem_lmb struct
    eb327e98631e powerpc/pseries: explicitly reschedule during drmem_lmb list traversal
    937cdcc45aaa RDMA/umem: Fix signature of stub ib_umem_find_best_pgsz()
    a43f936da88f RDMA/hns: Add a check for current state before modifying QP
    4a5aaa1747a3 mtd: lpddr: fix excessive stack usage with clang
    1564884a4176 RDMA/ucma: Add missing locking around rdma_leave_multicast()
    cc8ebd76b10a RDMA/ucma: Fix locking for ctx->events_reported
    22d8bebf634a powerpc/icp-hv: Fix missing of_node_put() in success path
    d2575bf27279 powerpc/pseries: Fix missing of_node_put() in rng_init()
    4f74f179a335 IB/mlx4: Adjust delayed work when a dup is observed
    1fe669e9ad19 IB/mlx4: Fix starvation in paravirt mux/demux
    8d44d75812cf i3c: master add i3c_master_attach_boardinfo to preserve boardinfo
    e7f826cd20a6 selftests/ftrace: Change synthetic event name for inter-event-combined test
    17ed6448b00c fs: fix NULL dereference due to data race in prepend_path()
    91e4c12a3bf4 mm, oom_adj: don't loop through tasks in __set_oom_adj when not necessary
    9a1656f1d19b mm/memcg: fix device private memcg accounting
    04fabdfcbf5d mm/swapfile.c: fix potential memory leak in sys_swapon
    8194371c4d60 netfilter: nf_log: missing vlan offload tag and proto
    a6aaab712d6a net: korina: fix kfree of rx/tx descriptor array
    76c0e4b2a50f ipvs: clear skb->tstamp in forwarding path
    7c83fe15ecb1 mwifiex: fix double free
    91962ac35b48 platform/x86: mlx-platform: Remove PSU EEPROM configuration
    dddb49f4152a ipmi_si: Fix wrong return value in try_smi_init()
    b2a98fec2d1e scsi: be2iscsi: Fix a theoretical leak in beiscsi_create_eqs()
    9899e57bd714 scsi: target: tcmu: Fix warning: 'page' may be used uninitialized
    2fb431e69ad6 usb: dwc2: Fix INTR OUT transfers in DDMA mode.
    3fed2b5657e4 nl80211: fix non-split wiphy information
    6aa25d03dfb5 usb: gadget: u_ether: enable qmult on SuperSpeed Plus as well
    9af716ed41e4 usb: gadget: f_ncm: fix ncm_bitrate for SuperSpeed and above.
    2f002b5172b2 iwlwifi: mvm: split a print to avoid a WARNING in ROC
    1dbf9d994b12 mfd: sm501: Fix leaks in probe()
    df63949a2750 net: enic: Cure the enic api locking trainwreck
    7c48d6e80e70 iio: adc: stm32-adc: fix runtime autosuspend delay when slow polling
    cbe5109aa47b qtnfmac: fix resource leaks on unsupported iftype error return path
    1d3188378d9b ibmvnic: set up 200GBPS speed
    da012618c502 coresight: etm: perf: Fix warning caused by etm_setup_aux failure
    56365dbb3ec2 nl80211: fix OBSS PD min and max offset validation
    99e8886339fa nvmem: core: fix possibly memleak when use nvmem_cell_info_to_nvmem_cell()
    903bee2ebff1 HID: hid-input: fix stylus battery reporting
    1ad7f52fe668 ASoC: fsl_sai: Instantiate snd_soc_dai_driver
    56c1c45bb82d slimbus: qcom-ngd-ctrl: disable ngd in qmi server down callback
    5bfd32bb16dc slimbus: core: do not enter to clock pause mode in core
    9da3ff3368b7 slimbus: core: check get_addr before removing laddr ida
    b7e2b1fe04bf quota: clear padding in v2r1_mem2diskdqb()
    3fcd75ae29b5 usb: dwc2: Fix parameter type in function pointer prototype
    f70650083b9e ALSA: seq: oss: Avoid mutex lock for a long-time ioctl
    6f04266d084d misc: mic: scif: Fix error handling path
    a7bf4cf31f57 dmaengine: dmatest: Check list for emptiness before access its last entry
    4ca39ef88adc ath6kl: wmi: prevent a shift wrapping bug in ath6kl_wmi_delete_pstream_cmd()
    572a7d15f2d1 spi: omap2-mcspi: Improve performance waiting for CHSTAT
    98d0b2742fe0 net: dsa: rtl8366rb: Support all 4096 VLANs
    06ba92787790 ASoC: tlv320aic32x4: Fix bdiv clock rate derivation
    0f5203a88ca4 net: wilc1000: clean up resource in error path of init mon interface
    26751638ff09 net: dsa: rtl8366: Skip PVID setting if not requested
    11064fef1bb1 net: dsa: rtl8366: Refactor VLAN/PVID init
    09cb271bcbde net: dsa: rtl8366: Check validity of passed VLANs
    714ca2d03282 xhci: don't create endpoint debugfs entry before ring buffer is set.
    1a31fa71d979 coresight: etm4x: Handle unreachable sink in perf mode
    ed8b90d303cf drm: mxsfb: check framebuffer pitch
    c8bc46fc01e4 cpufreq: armada-37xx: Add missing MODULE_DEVICE_TABLE
    1122f2a7833c net: stmmac: use netif_tx_start|stop_all_queues() function
    148b49be7277 scsi: mpt3sas: Fix sync irqs
    e757a39c2d84 net/mlx5: Don't call timecounter cyc2time directly from 1PPS flow
    50185a14fe8e pinctrl: mcp23s08: Fix mcp23x17 precious range
    5e829cdd6d62 pinctrl: mcp23s08: Fix mcp23x17_regmap initialiser
    44a83bd3243b iomap: Clear page error before beginning a write
    82ef2b6a9b6c drm/panfrost: Ensure GPU quirks are always initialised
    a74f0f0a6265 drm/msm: Avoid div-by-zero in dpu_crtc_atomic_check()
    02bf8fbfb445 HID: roccat: add bounds checking in kone_sysfs_write_settings()
    4d861784f0eb ASoC: fsl: imx-es8328: add missing put_device() call in imx_es8328_probe()
    23159b4375a4 video: fbdev: radeon: Fix memleak in radeonfb_pci_register
    2370d94aed41 video: fbdev: sis: fix null ptr dereference
    67e65396cd56 video: fbdev: vga16fb: fix setting of pixclock because a pass-by-value error
    be700c52ae00 drivers/virt/fsl_hypervisor: Fix error handling path
    bf12e769ff2a pwm: lpss: Add range limit check for the base_unit register value
    34f326e702fd pwm: lpss: Fix off by one error in base_unit math in pwm_lpss_prepare()
    2b6fb30cb49d pty: do tty_flip_buffer_push without port->lock in pty_write
    bf94a8754f2a tty: hvcs: Don't NULL tty->driver_data until hvcs_cleanup()
    f3f79d92ca71 tty: serial: earlycon dependency
    2b150aa2e3ef binder: Remove bogus warning on failed same-process transaction
    48c121a74fb6 drm/crc-debugfs: Fix memleak in crc_control_write
    751c4cf0ee62 drm: panel: Fix bpc for OrtusTech COM43H4M85ULC panel
    d911c0e9fcf0 mm/error_inject: Fix allow_error_inject function signatures.
    ebc1d548a729 VMCI: check return value of get_user_pages_fast() for errors
    659da2df0c5d staging: emxx_udc: Fix passing of NULL to dma_alloc_coherent()
    f87f0236bdbb backlight: sky81452-backlight: Fix refcount imbalance on error
    517f0785cef9 scsi: csiostor: Fix wrong return value in csio_hw_prep_fw()
    a28b846431c6 scsi: qla2xxx: Fix wrong return value in qla_nvme_register_hba()
    835e3a595aa3 scsi: qla2xxx: Fix wrong return value in qlt_chk_unresolv_exchg()
    49fc81280f83 scsi: qla4xxx: Fix an error handling path in 'qla4xxx_get_host_stats()'
    58826ecb7385 drm/gma500: fix error check
    84b79c485356 staging: rtl8192u: Do not use GFP_KERNEL in atomic context
    dc432c231f4a mwifiex: Do not use GFP_KERNEL in atomic context
    7bf50ff5a32c brcmfmac: check ndev pointer
    eb4bb7e520a7 ASoC: qcom: lpass-cpu: fix concurrency issue
    cab19b7f827b ASoC: qcom: lpass-platform: fix memory leak
    0627ae9be941 wcn36xx: Fix reported 802.11n rx_highest rate wcn3660/wcn3680
    a3cf5b3ad12d ath10k: Fix the size used in a 'dma_free_coherent()' call in an error handling path
    9981ef0f9cfa ath9k: Fix potential out of bounds in ath9k_htc_txcompletion_cb()
    80ff60f046f4 ath6kl: prevent potential array overflow in ath6kl_add_new_sta()
    e2a1b94f7fd2 drm: panel: Fix bus format for OrtusTech COM43H4M85ULC panel
    0a5630dee31f drm/amd/display: Fix wrong return value in dm_update_plane_state()
    0d234d1135dc Bluetooth: hci_uart: Cancel init work before unregistering
    e99958ec096b drm/vkms: fix xrgb on compute crc
    0ae399b5da2a ath10k: provide survey info as accumulated data
    450d03435ca9 blk-mq: move cancel of hctx->run_work to the front of blk_exit_queue
    96bc5e4cb4c8 spi: spi-s3c64xx: Check return values
    a053db13b3e6 spi: spi-s3c64xx: swap s3c64xx_spi_set_cs() and s3c64xx_enable_datapath()
    fcf7bf406590 pinctrl: bcm: fix kconfig dependency warning when !GPIOLIB
    0120ec32a777 regulator: resolve supply after creating regulator
    cd68531d2981 media: ti-vpe: Fix a missing check and reference count leak
    5c4ffc07f92e media: stm32-dcmi: Fix a reference count leak
    a05590cc08e3 media: s5p-mfc: Fix a reference count leak
    0747ff17aa6c media: camss: Fix a reference count leak.
    28b21e02dce9 media: platform: fcp: Fix a reference count leak.
    4e954d4dea1e media: rockchip/rga: Fix a reference count leak.
    aa60f4ad0707 media: rcar-vin: Fix a reference count leak.
    55d01160af68 media: tc358743: cleanup tc358743_cec_isr
    de566409e3ad media: tc358743: initialize variable
    3c66762f0c64 media: mx2_emmaprp: Fix memleak in emmaprp_probe
    7fb271426a70 cypto: mediatek - fix leaks in mtk_desc_ring_alloc
    cc0f25040972 hwmon: (pmbus/max34440) Fix status register reads for MAX344{51,60,61}
    90e8f87c0b25 crypto: omap-sham - fix digcnt register handling with export/import
    0db26c777a25 media: rcar-csi2: Allocate v4l2_async_subdev dynamically
    7906b7a7ce1d media: rcar_drif: Allocate v4l2_async_subdev dynamically
    58e2bcb7fa43 media: rcar_drif: Fix fwnode reference leak when parsing DT
    79ec0578c7e0 media: i2c: ov5640: Enable data pins on poweron for DVP mode
    b2f8546056b3 media: i2c: ov5640: Separate out mipi configuration from s_power
    b9ccea540564 media: i2c: ov5640: Remain in power down for DVP mode unless streaming
    8409370ae02e media: omap3isp: Fix memleak in isp_probe
    79a41d2357c6 media: staging/intel-ipu3: css: Correctly reset some memory
    8bcc5c270771 media: uvcvideo: Silence shift-out-of-bounds warning
    8504250759f4 media: uvcvideo: Set media controller entity functions
    8b426d665a41 media: m5mols: Check function pointer in m5mols_sensor_power
    361a1b76b2d2 media: ov5640: Correct Bit Div register in clock tree diagram
    7052f4c5ab51 media: Revert "media: exynos4-is: Add missed check for pinctrl_lookup_state()"
    c6243d107c32 media: tuner-simple: fix regression in simple_set_radio_freq
    ac36f94d34df crypto: picoxcell - Fix potential race condition bug
    71444295839c crypto: ixp4xx - Fix the size used in a 'dma_free_coherent()' call
    3dd9ffbb6eda crypto: mediatek - Fix wrong return value in mtk_desc_ring_alloc()
    528acbf310ff crypto: algif_skcipher - EBUSY on aio should be an error
    d6623eea9abb x86/events/amd/iommu: Fix sizeof mismatch
    200f13d0d9a1 x86/nmi: Fix nmi_handle() duration miscalculation
    b257bb437dc3 perf/x86/intel/uncore: Reduce the number of CBOX counters
    e089a75b7786 perf/x86/intel/uncore: Update Ice Lake uncore units
    cfa97676cb44 sched/fair: Fix wrong cpu selecting from isolated domain
    500a98894821 drivers/perf: thunderx2_pmu: Fix memory resource error handling
    1731c693a62c drivers/perf: xgene_pmu: Fix uninitialized resource struct
    7e297c83e64d x86/fpu: Allow multiple bits in clearcpuid= parameter
    ab6bb1c1f1de perf/x86/intel/ds: Fix x86_pmu_stop warning for large PEBS
    9aee8216556e EDAC/ti: Fix handling of platform_get_irq() error
    64a9f5a30fbb EDAC/aspeed: Fix handling of platform_get_irq() error
    4d86328e42c3 EDAC/i5100: Fix error handling order in i5100_init_one()
    24543df3f491 crypto: caam/qi - add fallback for XTS with more than 8B IV
    66ec3755f791 crypto: algif_aead - Do not set MAY_BACKLOG on the async path
    68e3b25444cb ima: Don't ignore errors from crypto_shash_update()
    4a62024168c3 KVM: SVM: Initialize prev_ga_tag before use
    39ba2b6c3d11 KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
    413aeed19567 KVM: nVMX: Reload vmcs01 if getting vmcs12's pages fails
    f9ac2036344a KVM: nVMX: Reset the segment cache when stuffing guest segs
    a5513655cfee SMB3: Resolve data corruption of TCP server info fields
    aeaa30720d67 cifs: Return the error from crypt_message when enc/dec key not found.
    65604f3ea2f2 cifs: remove bogus debug code
    706538edacc6 ALSA: hda/realtek: Enable audio jacks of ASUS D700SA with ALC887
    5e19bf634c92 ALSA: hda/realtek - Add mute Led support for HP Elitebook 845 G7
    995a90e70429 ALSA: hda/realtek - set mic to auto detect on a HP AIO machine
    a40f49438a15 ALSA: hda/realtek - The front Mic on a HP machine doesn't work
    8df0ffe2f32c icmp: randomize the global rate limiter
    9fa95d101caf tcp: fix to update snd_wl1 in bulk receiver fast path
    c5e4e010f39e selftests: rtnetlink: load fou module for kci_test_encap_fou() test
    6f7c40767bf4 selftests: forwarding: Add missing 'rp_filter' configuration
    f93a27b0f301 r8169: fix operation under forced interrupt threading
    68db21094ee5 nfc: Ensure presence of NFC_ATTR_FIRMWARE_NAME attribute in nfc_genl_fw_download()
    2f58abe7708a nexthop: Fix performance regression in nexthop deletion
    d6d478290815 net/sched: act_tunnel_key: fix OOB write in case of IPv6 ERSPAN tunnels
    09ea22aa3681 net: Properly typecast int values to set sk_max_pacing_rate
    432336b3cf2a net: hdlc_raw_eth: Clear the IFF_TX_SKB_SHARING flag after calling ether_setup
    62d366f8e570 net: hdlc: In hdlc_rcv, check to make sure dev is an HDLC device
    1a3c8d6acbfc net: ftgmac100: Fix Aspeed ast2600 TX hang issue
    7a6a016c5281 ibmvnic: save changed mac address to adapter->mac_addr
    416eec363622 chelsio/chtls: correct function return and return type
    15110ce6e26f chelsio/chtls: correct netdevice for vlan interface
    fe97af291fee chelsio/chtls: fix socket lock
    750e81e2dbc0 nvme-pci: disable the write zeros command for Intel 600P/P3100
    a86bf1d8b19c ALSA: hda/hdmi: fix incorrect locking in hdmi_pcm_close
    17784cec2da4 ALSA: hda: fix jack detection with Realtek codecs when in D3
    8bedcbceaaa3 ALSA: bebob: potential info leak in hwdep_read()
    401d4d79a8ed binder: fix UAF when releasing todo list
    711c0471ef17 cxgb4: handle 4-tuple PEDIT to NAT mode translation
    5f269cb9e513 r8169: fix data corruption issue on RTL8402
    c5b868eecb4f net_sched: remove a redundant goto chain check
    ba05057bd056 net/ipv4: always honour route mtu during forwarding
    46a55a44cc75 net: j1939: j1939_session_fresh_new(): fix missing initialization of skbcnt
    25bd9ea1ae5b can: j1935: j1939_tp_tx_dat_new(): fix missing initialization of skbcnt
    b0342b87cad8 can: m_can_platform: don't call m_can_class_suspend in runtime suspend
    c4099221dbc0 socket: fix option SO_TIMESTAMPING_NEW
    7d31e5722cbf tipc: fix the skb_unshare() in tipc_buf_append()
    dd3f58f499d0 net: usb: qmi_wwan: add Cellient MPL200 card
    65033e39f728 net/tls: sendfile fails with ktls offload
    926210cd8158 net/smc: fix valid DMBE buffer sizes
    cdd3c52a983e net: fix pos incrementment in ipv6_route_seq_next
    f08752a4498b net: fec: Fix PHY init after phy_reset_after_clk_enable()
    9e70485b40c8 net: fec: Fix phy_device lookup for phy_reset_after_clk_enable()
    0b41975f7b78 mlx4: handle non-napi callers to napi_poll
    3392c9d8f9aa ipv4: Restore flowi4_oif update before call to xfrm_lookup_route
    b7d2587f726a ibmveth: Identify ingress large send packets.
    b809bead48a3 ibmveth: Switch order of ibmveth_helper calls.

(From OE-Core rev: 914263fa624e6cce8580ba2c0a2dc7b903a3e9df)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 13cc1130b778f60330534804153abef4c4833ea4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
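
A quick way to confirm which -stable release a configured build actually resolves to after an update like this one is to inspect the recipe environment; the output shown here is illustrative:

    $ bitbake -e virtual/kernel | grep '^LINUX_VERSION='
    LINUX_VERSION="5.4.73"
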
Bruce Ashfield
2691a54e91 linux-yocto/5.8: update to v5.8.17
Updating linux-yocto/5.8 to the latest korg -stable release that comprises
the following commits:

    33156ccb29d9 Linux 5.8.17
    05981710aa5e usb: gadget: f_ncm: allow using NCM in SuperSpeed Plus gadgets.
    5a30d4a5afcc eeprom: at25: set minimum read/write access stride to 1
    d33abbe3b327 usb: cdns3: gadget: free interrupt after gadget has deleted
    5a118fc75b65 USB: cdc-wdm: Make wdm_flush() interruptible and add wdm_fsync().
    2e1905ce84a1 usb: cdc-acm: add quirk to blacklist ETAS ES58X devices
    3f7ebf3355ac usb: gadget: bcm63xx_udc: fix up the error of undeclared usb_debug_root
    3d53646d781b tty: serial: fsl_lpuart: fix lpuart32_poll_get_char
    40254b8d0f8b tty: serial: lpuart: fix lpuart32_write usage
    6a8a92d5770b s390/qeth: don't let HW override the configured port role
    941895dc705d net: korina: cast KSEG0 address to pointer in kfree
    574079593732 ath10k: check idx validity in __ath10k_htt_rx_ring_fill_n()
    f8ea12647fa6 dmaengine: dw: Activate FIFO-mode for memory peripherals only
    e106dc6c4c4d dmaengine: dw: Add DMA-channels mask cell support
    b6dead6f20e9 drm/amd/display: Screen corruption on dual displays (DP+USB-C)
    0666c173a061 scsi: ufs: ufs-qcom: Fix race conditions caused by ufs_qcom_testbus_config()
    4360db24d35a usb: core: Solve race condition in anchor cleanup functions
    19bcbc2ee12f brcm80211: fix possible memleak in brcmf_proto_msgbuf_attach
    044d8bfb9028 scsi: smartpqi: Avoid crashing kernel for controller issues
    651984d53d54 ASoC: Intel: sof_rt5682: override quirk data for tgl_max98373_rt5682
    85f1ad8c8644 ASoC: SOF: Add topology filename override based on dmi data match
    54e4b6262ca7 ALSA: hda/ca0132 - Add new quirk ID for SoundBlaster AE-7.
    4597e6f214c1 ALSA: hda/ca0132 - Add AE-7 microphone selection commands.
    5fa4faf96e44 mwifiex: don't call del_timer_sync() on uninitialized timer
    047a51bba8dc s390/qeth: strictly order bridge address events
    a527bf9df3af reiserfs: Fix memory leak in reiserfs_parse_options()
    72720eaa6c33 ipvs: Fix uninit-value in do_ip_vs_set_ctl()
    2e2b67844504 Bluetooth: btusb: Fix memleak in btusb_mtk_submit_wmt_recv_urb
    97811d992adb tty: ipwireless: fix error handling
    ffe1b711045f fbmem: add margin check to fb_check_caps()
    98d29fc2c451 scsi: qedi: Fix list_del corruption while removing active I/O
    ee3fc1103a40 scsi: qedi: Protect active command list to avoid list corruption
    5bbd0a791b7c scsi: qedi: Mark all connections for recovery on link down event
    95d42ebebc2c scsi: qedf: Return SUCCESS if stale rport is encountered
    3f07687e959e HID: ite: Add USB id match for Acer One S1003 keyboard dock
    0c1943f203c2 Fix use after free in get_capset_info callback.
    4d779accb71b rtl8xxxu: prevent potential memory leak
    437ee0e6c677 brcmsmac: fix memory leak in wlc_phy_attach_lcnphy
    445359b32632 selftests/bpf: Fix test_sysctl_loop{1, 2} failure due to clang change
    5ecc5ea6e1a7 scsi: qla2xxx: Warn if done() or free() are called on an already freed srb
    d6447b6646ef scsi: ibmvfc: Fix error return in ibmvfc_probe()
    458a89fa9015 iomap: fix WARN_ON_ONCE() from unprivileged users
    e653923ad7f1 drm/msm/a6xx: fix a potential overflow issue
    1d8181746a36 Bluetooth: Only mark socket zapped after unlocking
    76925b9ea722 drm: fix double free for gbo in drm_gem_vram_init and drm_gem_vram_create
    c64d4179f8ae usb: ohci: Default to per-port over-current protection
    0c0476d096d6 xfs: make sure the rt allocator doesn't run off the end
    0c35ab58c587 opp: Prevent memory leak in dev_pm_opp_attach_genpd()
    c31de74b342a reiserfs: only call unlock_new_inode() if I_NEW
    af90d9faf01a misc: rtsx: Fix memory leak in rtsx_pci_probe
    7a40d2814425 bpf: Limit caller's stack depth 256 for subprogs with tailcalls
    cc618717afdd drm/panfrost: add support for vendor quirk
    c246a3325c75 drm/panfrost: add amlogic reset quirk callback
    8159f330f25e drm/panfrost: add Amlogic GPU integration quirks
    7f5972267295 ath9k: hif_usb: fix race condition between usb_get_urb() and usb_kill_anchored_urbs()
    8951e760c038 HID: multitouch: Lenovo X1 Tablet Gen3 trackpoint and buttons
    3eb0b62e57c3 can: flexcan: flexcan_chip_stop(): add error handling and propagate error value
    5d2dd06ad8db habanalabs: cast to u64 before shift > 31 bits
    375d81cf16bb usb: dwc3: simple: add support for Hikey 970
    c373f8d5098f USB: cdc-acm: handle broken union descriptors
    739048988f1b rtw88: increse the size of rx buffer size
    eacaacfe8bd0 udf: Avoid accessing uninitialized data on failed inode read
    9a3d398af87d udf: Limit sparing table size
    6a71fc5ca9f5 rtw88: pci: Power cycle device during shutdown
    34f026263889 usb: gadget: function: printer: fix use-after-free in __lock_acquire
    b9c15de08dfd usb: dwc3: Add splitdisable quirk for Hisilicon Kirin Soc
    e7eec8654168 misc: vop: add round_up(x,4) for vring_size to avoid kernel panic
    226b5887720b mic: vop: copy data to kernel space then write to io memory
    f96fba04992c scsi: target: core: Add CONTROL field for trace events
    d805c83716ef scsi: mvumi: Fix error return in mvumi_io_attach()
    9f1960911919 PM: hibernate: remove the bogus call to get_gendisk() in software_resume()
    6cc0a248bcfa bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI
    6afdaf29e4c2 libbpf: Close map fd if init map slots failed
    e1ec1c25b00e staging: wfx: fix handling of MMIC error
    858c56fa3741 mac80211: handle lack of sband->bitrates in rates
    148c3d23858d ip_gre: set dev->hard_header_len and dev->needed_headroom properly
    ec23aa8bb0e5 ntfs: add check for mft record size in superblock
    d5772580c109 media: venus: core: Fix runtime PM imbalance in venus_probe
    6ed15eebcb61 media: venus: core: Fix error handling in probe
    91cde7d5aa17 fs: dlm: fix configfs memory leak
    24f924dbf640 media: venus: fixes for list corruption
    6e5fdad5c10f media: atomisp: fix memleak in ia_css_stream_create
    93b6de835777 media: saa7134: avoid a shift overflow
    c0f64a9057e3 mmc: sdio: Check for CISTPL_VERS_1 buffer size
    60e8d95f72b5 media: uvcvideo: Ensure all probed info is returned to v4l2
    5b66aa6f52a1 x86/mce: Make mce_rdmsrl() panic on an inaccessible MSR
    9300f536c77e spi: fsi: Fix clock running too fast
    75d927fc5587 crypto: hisilicon - fixed memory allocation error
    cde267085992 x86/mce: Annotate mce_rd/wrmsrl() with noinstr
    71b3d6794ae7 media: media/pci: prevent memory leak in bttv_probe
    e4f08676d93c media: bdisp: Fix runtime PM imbalance on error
    bad248c1ec53 media: platform: sti: hva: Fix runtime PM imbalance on error
    59eb92867e9c media: platform: s3c-camif: Fix runtime PM imbalance on error
    9fa2286f1925 media: vsp1: Fix runtime PM imbalance on error
    2341407a05ea media: exynos4-is: Fix a reference count leak
    dcc6fbbab0dc media: exynos4-is: Fix a reference count leak due to pm_runtime_get_sync
    e7997018b45d media: exynos4-is: Fix several reference count leaks due to pm_runtime_get_sync
    30f5c4e91d14 media: sti: Fix reference count leaks
    236117a8bf3a media: st-delta: Fix reference count leak in delta_run_work
    fe8798e78292 media: ati_remote: sanity check for both endpoints
    49e06f165b9c media: firewire: fix memory leak
    ba3c07c18034 x86/mce: Add Skylake quirk for patrol scrub reported errors
    8336a00a5f4d x86/asm: Replace __force_order with a memory clobber
    5056a1b3f6fb crypto: ccp - fix error handling
    121ce5e30b64 x86/dumpstack: Fix misleading instruction pointer error message
    6337db2af4d1 block: ratelimit handle_bad_sector() message
    4c4b1a29c3d0 md/bitmap: fix memory leak of temporary bitmap
    44a58dd22c28 i2c: core: Restore acpi_walk_dep_device_list() getting called after registering the ACPI i2c devs
    c1c4b2d0dee1 perf: correct SNOOPX field offset
    c93a8cddf4d2 sched/features: Fix !CONFIG_JUMP_LABEL case
    62bb6c5a3cee ntb: intel: Fix memleak in intel_ntb_pci_probe
    06a3b0080eaa NTB: hw: amd: fix an issue about leak system resources
    990c91c323f3 KVM: ioapic: break infinite recursion on lazy EOI
    959d1d42f0b6 nvmet: fix uninitialized work for zero kato
    05eb719ac46a powerpc/pseries: Avoid using addr_to_pfn in real mode
    1eb1f681057b powerpc/powernv/dump: Fix race while processing OPAL dump
    cd85f97e424b lightnvm: fix out-of-bounds write to array devices->info[]
    bd396a2c1bc9 ARM: dts: meson8: remove two invalid interrupt lines from the GPU node
    68d2900fc0c8 arm64: dts: zynqmp: Remove additional compatible string for i2c IPs
    e1f385dfa255 drm/mediatek: reduce clear event
    632bf6c3b82b soc: mediatek: cmdq: add clear option in cmdq_pkt_wfe api
    fab5aff89c9e ARM: dts: iwg20d-q7-common: Fix touch controller probe failure
    a0b4366823d9 ARM: dts: stm32: Fix DH PDK2 display PWM channel
    abb56e08ed1d ARM: dts: stm32: Swap PHY reset GPIO and TSC2004 IRQ on DHCOM SOM
    937a5596d619 ARM: dts: stm32: Move ethernet PHY into DH SoM DT
    2e7e56a6af3f ARM: dts: stm32: lxa-mc1: Fix kernel warning about PHY delays
    f80f23f39e6b ARM: dts: stm32: Fix sdmmc2 pins on AV96
    1925f1fdf9a6 ARM: OMAP2+: Restore MPU power domain if cpu_cluster_pm_enter() fails
    fdb6b483eaaf soc: fsl: qbman: Fix return value on success
    342c29116aae ARM: dts: owl-s500: Fix incorrect PPI interrupt specifiers
    52c37b7f0e04 arm64: dts: actions: limit address range for pinctrl node
    251ab5b1f8e8 arm64: dts: mt8173: elm: Fix nor_flash node property
    6e4cd77c0235 arm64: dts: renesas: r8a774c0: Fix MSIOF1 DMA channels
    5c91fc9a6d16 arm64: dts: renesas: r8a77990: Fix MSIOF1 DMA channels
    70ca9a567129 dt-bindings: crypto: Specify that allwinner, sun8i-a33-crypto needs reset
    10c78d0a1a2f soc: qcom: apr: Fixup the error displayed on lookup failure
    e8bd4ce4e877 arm64: dts: qcom: msm8916: Fix MDP/DSI interrupts
    26a8ac2d6512 arm64: dts: qcom: pm8916: Remove invalid reg size from wcd_codec
    6747001ebcb5 arm64: dts: qcom: msm8916: Remove one more thermal trip point unit name
    64ca77e846b0 soc: qcom: pdr: Fixup array type of get_domain_list_resp message
    3ca890f0e5d2 arm64: dts: qcom: sc7180: Drop flags on mdss irqs
    d9aa6534e78b arm64: dts: imx8mq: Add missing interrupts to GPC
    6395b7702156 firmware: arm_scmi: Fix NULL pointer dereference in mailbox_chan_free
    afcd57ad541b memory: fsl-corenet-cf: Fix handling of platform_get_irq() error
    244c3ac190e3 arm64: dts: qcom: sc7180: Fix the LLCC base register size
    fe5a0679f7e7 memory: omap-gpmc: Fix build error without CONFIG_OF
    d69ca7a7dfa9 memory: omap-gpmc: Fix a couple off by ones
    cc0820957d0f arm64: dts: allwinner: h5: remove Mali GPU PMU module
    4f9e6b1be196 ARM: dts: sun8i: r40: bananapi-m2-ultra: Fix dcdc1 regulator
    9a3eb126861f ARM: s3c24xx: fix mmc gpio lookup tables
    ea25940ff19f ARM: at91: pm: of_node_put() after its usage
    ba11877a60f2 ARM: dts: imx6sl: fix rng node
    2c9966436d0e arm64: dts: meson: vim3: correct led polarity
    23e1e4451190 soc: xilinx: Fix error code in zynqmp_pm_probe()
    29e043f9016c netfilter: nf_fwd_netdev: clear timestamp in forwarding path
    735b4d75a1c7 netsec: ignore 'phy-mode' device property on ACPI systems
    51ba2945a8ef netfilter: ebtables: Fixes dropping of small packets in bridge nat
    ceb1eb6cbeaf netfilter: conntrack: connection timeout after re-register
    9dd95e294542 arm64: mm: use single quantity to represent the PA to VA translation
    4a0b1d0e70ac scsi: bfa: Fix error return in bfad_pci_init()
    bdde093c81f2 KVM: x86: emulating RDPID failure shall return #UD rather than #GP
    029525c89bf1 Input: sun4i-ps2 - fix handling of platform_get_irq() error
    e186019ad86f Input: twl4030_keypad - fix handling of platform_get_irq() error
    86f11d554a8c Input: omap4-keypad - fix handling of platform_get_irq() error
    d96fc374d241 Input: ep93xx_keypad - fix handling of platform_get_irq() error
    9b9746342d52 Input: stmfts - fix a & vs && typo
    0a721220eada Input: imx6ul_tsc - clean up some errors in imx6ul_tsc_resume()
    61b00bdcd281 Input: elants_i2c - fix typo for an attribute to show calibration count
    f81bd7468e3a platform/chrome: cros_ec_lightbar: Reduce ligthbar get version command
    565697e82267 SUNRPC: fix copying of multiple pages in gss_read_proxy_verf()
    f9fc8ae508e6 clk: imx8mq: Fix usdhc parents order
    7564d5bb2b11 vfio iommu type1: Fix memory leak in vfio_iommu_type1_pin_pages
    4f9ece8b888f vfio/pci: Clear token on bypass registration failure
    6d0590647b75 ext4: limit entries returned when counting fsmap records
    9ede401a6d21 ext4: disallow modifying DAX inode flag if inline_data has been set
    1da9c8a1784b ext4: discard preallocations before releasing group lock
    9cb6c6db999e ext4: fix dead loop in ext4_mb_new_blocks
    e38a4885c98f svcrdma: fix bounce buffers for unaligned offsets and multiple pages
    e8e81bf91992 watchdog: sp5100: Fix definition of EFCH_PM_DECODEEN3
    c3228ef8f8a3 watchdog: Use put_device on error
    f12e9c2f9708 watchdog: Fix memleak in watchdog_cdev_register
    e70232457bf1 kbuild: deb-pkg: do not build linux-headers package if CONFIG_MODULES=n
    9f94507374a3 clk: bcm2835: add missing release if devm_clk_hw_register fails
    2290bfef3bbe clk: at91: clk-main: update key before writing AT91_CKGR_MOR
    963fc20cf561 module: statically initialize init section freeing data
    28270e928bae clk: mediatek: add UART0 clock support
    cab8d1bde580 clk: rockchip: Initialize hw to error to avoid undefined behavior
    b6bd62dc59e7 PCI: hv: Fix hibernation in case interrupts are not re-created
    83cf3166bd72 remoteproc/mediatek: fix null pointer dereference on null scp pointer
    1642d9e7095c pwm: img: Fix null pointer access in probe
    8db3dfe46548 pwm: rockchip: Keep enabled PWMs running while probing
    ec87b61ac31a clk: keystone: sci-clk: fix parsing assigned-clock data during probe
    2e415af55c34 clk: qcom: gcc-sdm660: Fix wrong parent_map
    ed4ce310b712 vfio/type1: fix dirty bitmap calculation in vfio_dma_rw
    01bec5d78c05 vfio: fix a missed vfio group put in vfio_pin_pages
    a1e9faa0d7c5 vfio/pci: Decouple PCI_COMMAND_MEMORY bit checks from is_virtfn
    0cdb91a009fa s390/pci: Mark all VFs as not implementing PCI_COMMAND_MEMORY
    b40bd0d87d1a vfio: add a singleton check for vfio_group_pin_pages
    7e4f15f7c99b PCI/IOV: Mark VFs as not implementing PCI_COMMAND_MEMORY
    167b37558b7f rpmsg: Avoid double-free in mtk_rpmsg_register_device
    ce43542b46a5 rpmsg: smd: Fix a kobj leak in in qcom_smd_parse_edge()
    edd546b3222f PCI: iproc: Set affinity mask on MSI interrupts
    c1e465c1a4dc PCI: aardvark: Check for errors from pci_bridge_emul_init() call
    48cc5b57cc46 PCI: aardvark: Fix compilation on s390
    50c4627222c2 PCI: designware-ep: Fix the Header Type check
    4f515d03d4f9 clk: meson: g12a: mark fclk_div2 as critical
    66a5d399702c i2c: rcar: Auto select RESET_CONTROLLER
    d39ced9254b6 rtc: ds1307: Clear OSF flag on DS1388 when setting time
    5e2918d95f79 clk: meson: axg-audio: separate axg and g12a regmap tables
    0d921fec7e59 mailbox: avoid timer start from callback
    efa544eda19e rapidio: fix the missed put_device() for rio_mport_add_riodev
    8838ee6189c3 rapidio: fix error handling path
    0a80f93ccd61 ramfs: fix nommu mmap with gaps in the page cache
    8cc3277e8e28 lib/crc32.c: fix trivial typo in preprocessor condition
    546f36709441 mm/page_owner: change split_page_owner to take a count
    99d1a5c21305 RDMA/rxe: Handle skb_clone() failure in rxe_recv.c
    ab5faad5bd33 afs: Fix cell removal
    0b6392c7ad1d afs: Fix cell purging with aliases
    e44b8d2aa154 afs: Fix cell refcounting by splitting the usage counter
    45045b6253e9 afs: Fix rapid cell addition/removal by not using RCU on cells tree
    1ad93f42c484 f2fs: wait for sysfs kobject removal before freeing f2fs_sb_info
    a08401b32a3a selftests/powerpc: Fix eeh-basic.sh exit codes
    bb24e3cb31cd perf trace: Fix off by ones in memset() after realloc() in arches using libaudit
    c6a8b7714cd7 maiblox: mediatek: Fix handling of platform_get_irq() error
    66f6ea1e0ed3 um: time-travel: Fix IRQ handling in time_travel_handle_message()
    e3ee6ff237eb um: vector: Use GFP_ATOMIC under spin lock
    fe4b4e47125d f2fs: reject CASEFOLD inode flag without casefold feature
    982f2438ac82 RDMA/rxe: Fix skb lifetime in rxe_rcv_mcast_pkt()
    1407e22fb4ca IB/rdmavt: Fix sizeof mismatch
    aae2a43ace26 cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_reboot_notifier
    a2b19fdbf29b powerpc/papr_scm: Add PAPR command family to pass-through command-set
    0e486cc3f8a2 i3c: master: Fix error return in cdns_i3c_master_probe()
    69a4718cb2bc perf stat: Fix out of bounds CPU map access when handling armv8_pmu events
    a4682cb94495 powerpc/perf/hv-gpci: Fix starting index value
    8d1d0dfb9df8 powerpc/perf: Exclude pmc5/6 from the irrelevant PMU group constraints
    bef320194790 powerpc/64: fix irq replay pt_regs->softe value
    281c47bcad03 powerpc/64: fix irq replay missing preempt
    938e97b946ec RDMA/ipoib: Set rtnl_link_ops for ipoib interfaces
    ea879d9c818e overflow: Include header file with SIZE_MAX declaration
    1519018b8c89 kdb: Fix pager search for multi-line strings
    473fb9250371 mtd: rawnand: ams-delta: Fix non-OF build warning
    dfc293422070 mtd: spinand: gigadevice: Add QE Bit
    ab0328ef3f83 mtd: spinand: gigadevice: Only one dummy byte in QUADIO
    86cb4ae61b64 mtd: rawnand: vf610: disable clk on error handling path in probe
    fbb2d15c177f mtd: rawnand: stm32_fmc2: fix a buffer overflow
    86e185a733a8 mtd: hyperbus: hbmc-am654: Fix direct mapping setup flash access
    3b5f3adce906 RDMA/hns: Fix missing sq_sig_type when querying QP
    69accfaa1033 RDMA/hns: Fix configuration of ack_req_freq in QPC
    d56447a8cdbb RDMA/hns: Fix the wrong value of rnr_retry when querying qp
    42ae1aebaaac RDMA/hns: Solve the overflow of the calc_pg_sz()
    5c80a3655565 RDMA/hns: Add check for the validity of sl configuration
    939faf121632 perf stat: Skip duration_time in setup_system_wide
    45397023c8c2 i40iw: Add support to make destroy QP synchronous
    fd8da32da3ee RDMA/mlx5: Disable IB_DEVICE_MEM_MGT_EXTENSIONS if IB_WR_REG_MR can't work
    7486a981eb88 RDMA/mlx5: Make mkeys always owned by the kernel's PD when not enabled
    af393dd73c14 RDMA/mlx5: Use set_mkc_access_pd_addr_fields() in reg_create()
    27ca3de942d1 RDMA/hns: Set the unsupported wr opcode
    dc8b27028c1c RDMA/qedr: Fix resource leak in qedr_create_qp
    be825f704b2f perf intel-pt: Fix "context_switch event has no tid" error
    b8d1adbff983 RDMA/cma: Fix use after free race in roce multicast join
    9ef5b6658d6b RDMA/cma: Consolidate the destruction of a cma_multicast in one place
    e3b942c76b24 RDMA/cma: Remove dead code for kernel rdmacm multicast
    7d31a74bcc01 RDMA/cma: Combine cma_ndev_work with cma_work
    d1926d0b50f5 powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm
    95219c4004fd powerpc/kasan: Fix CONFIG_KASAN_VMALLOC for 8xx
    ebeafdd0f221 powerpc/tau: Disable TAU between measurements
    19d39d5d682a powerpc/tau: Check processor type before enabling TAU interrupt
    c348ab2f7276 powerpc/tau: Remove duplicated set_thresholds() call
    b61bb0da35fc powerpc/tau: Convert from timer to workqueue
    d7f12e732190 powerpc/tau: Use appropriate temperature sample interval
    1c441d9aef74 powerpc/book3s64/hash/4k: Support large linear mapping range with 4K
    990cf02eb297 powerpc/watchpoint: Add hw_len wherever missing
    0fea340b870f powerpc/watchpoint: Fix handling of vector instructions
    b99d4986bc69 powerpc/watchpoint: Fix quadword instruction handling on p10 predecessors
    6f64ff9f30d1 powerpc/pseries/svm: Allocate SWIOTLB buffer anywhere in memory
    049ab4efdf9a RDMA/qedr: Fix inline size returned for iWARP
    b1010144c1eb RDMA/qedr: Fix return code if accept is called on a destroyed qp
    b3939bfc71ec RDMA/qedr: Fix use of uninitialized field
    fbe513321c49 RDMA/qedr: Fix doorbell setting
    e947bbb26f70 RDMA/qedr: Fix qp structure memory leak
    10200a0a5d3a RDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz()
    59f07434b297 RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary
    7ac277a01f90 RDMA: Allow fail of destroy CQ
    7802648c1dad RDMA/core: Delete function indirection for alloc/free kernel CQ
    4a8e9dbc7fde RDMA/rtrs-srv: Incorporate ib_register_client into rtrs server init
    929cdbcce02f xfs: fix high key handling in the rt allocator's query_range function
    a6d831917953 nfs: add missing "posix" local_lock constant table definition
    6a5757946685 xfs: fix deadlock and streamline xfs_getfsmap performance
    29eedbf9e39d xfs: limit entries returned when counting fsmap records
    c32adb866dac ida: Free allocated bitmap in error path
    1e84d2a5c113 arc: plat-hsdk: fix kconfig dependency warning when !RESET_CONTROLLER
    bdb0da4659e3 m68knommu: include SDHC support only when hardware has it
    01d89b4a82a4 xfs: fix finobt btree block recovery ordering
    c85d7a847227 ARM: 9007/1: l2c: fix prefetch bits init in L2X0_AUX_CTRL using DT values
    93a6c893c4d6 tools feature: Add missing -lzstd to the fast path feature detection
    26b8aa1bec47 perf tools: Make GTK2 support opt-in
    a3872e54738b mtd: mtdoops: Don't write panic data twice
    0081545c66c1 RDMA/mlx5: Fix potential race between destroy and CQE poll
    2c9da663c149 pseries/drmem: don't cache node id in drmem_lmb struct
    b1cf3e9298de powerpc/pseries: explicitly reschedule during drmem_lmb list traversal
    78805c0d14f5 RDMA/umem: Fix signature of stub ib_umem_find_best_pgsz()
    9f101b8ad2fa RDMA/hns: Add a check for current state before modifying QP
    e91945de1531 mtd: lpddr: fix excessive stack usage with clang
    33c6484d377e RDMA/ucma: Add missing locking around rdma_leave_multicast()
    191627ddc46f RDMA/ucma: Fix locking for ctx->events_reported
    582da8e19991 rcutorture: Properly set rcu_fwds for OOM handling
    11539276e399 rcu/tree: Force quiescent state on callback overload
    3aee0ca521f0 powerpc/icp-hv: Fix missing of_node_put() in success path
    cc86827cef62 powerpc/pseries: Fix missing of_node_put() in rng_init()
    bcbeec5a9a19 IB/mlx4: Adjust delayed work when a dup is observed
    f735c10a4731 IB/mlx4: Fix starvation in paravirt mux/demux
    c5e25cf59765 i3c: master: add i3c_master_attach_boardinfo to preserve boardinfo
    549642f490d2 tracing: Handle synthetic event array field type checking correctly
    826adb405a53 selftests/ftrace: Change synthetic event name for inter-event-combined test
    3b82bd94e0ec fs: fix NULL dereference due to data race in prepend_path()
    7871c282d292 mm, oom_adj: don't loop through tasks in __set_oom_adj when not necessary
    349fc836d5d1 mm/memcg: fix device private memcg accounting
    b9e60476c04f mm/swapfile.c: fix potential memory leak in sys_swapon
    43edc7232737 netfilter: nf_log: missing vlan offload tag and proto
    ebd09f1ad811 net: korina: fix kfree of rx/tx descriptor array
    733dcb4149ff bpf, sockmap: Remove skb_orphan and let normal skb_kfree do cleanup
    4cdfe55c067b ipvs: clear skb->tstamp in forwarding path
    2566242742c9 drm/panfrost: increase readl_relaxed_poll_timeout values
    87ea06ea9f8d mwifiex: fix double free
    a0f38fd8303e platform/x86: mlx-platform: Remove PSU EEPROM configuration
    455ecbd43d3a tracing: Fix parse_synth_field() error handling
    4372729d5201 ipmi_si: Fix wrong return value in try_smi_init()
    caa0fa6b36ca dmaengine: ioat: Allocate correct size for descriptor chunk
    3cdf3cbc3b48 scsi: be2iscsi: Fix a theoretical leak in beiscsi_create_eqs()
    4c35763fbb0c scsi: target: tcmu: Fix warning: 'page' may be used uninitialized
    03504f955527 usb: dwc2: Fix INTR OUT transfers in DDMA mode.
    0ff11535a204 nl80211: fix non-split wiphy information
    cff51e84cb83 ocxl: fix kconfig dependency warning for OCXL
    4a87896b4e91 bus: mhi: core: Fix the building of MHI module
    e44e0bea8b7b usb: gadget: u_ether: enable qmult on SuperSpeed Plus as well
    665ed7027a67 usb: gadget: u_serial: clear suspended flag when disconnecting
    ec69e8c7686b usb: gadget: f_ncm: fix ncm_bitrate for SuperSpeed and above.
    da0922d0f8b5 iwlwifi: dbg: run init_cfg function once per driver load
    2b021c85c224 iwlwifi: dbg: remove no filter condition
    be0f631711f9 iwlwifi: mvm: split a print to avoid a WARNING in ROC
    d97c35bd05dd ASoC: wm_adsp: Pass full name to snd_ctl_notify
    1ab21ba36a84 mfd: sm501: Fix leaks in probe()
    2eb24b3bf835 net: enic: Cure the enic api locking trainwreck
    cd29df4df421 iio: adc: stm32-adc: fix runtime autosuspend delay when slow polling
    5975fa6e0519 iommu/qcom: add missing put_device() call in qcom_iommu_of_xlate()
    a13766e01768 pinctrl: aspeed: Use the right pinconf mask
    a30a515f2773 qtnfmac: fix resource leaks on unsupported iftype error return path
    148a2543ca50 selftests: Remove fmod_ret from test_overhead
    c2ebc88260ff bpf: disallow attaching modify_return tracing functions to other BPF programs
    7c37b28e0b37 ibmvnic: set up 200GBPS speed
    4829beb0ce79 coresight: etm4x: Fix save and restore of TRCVMIDCCTLR1 register
    ccc73e031de6 coresight: cti: Fix bug clearing sysfs links on callback
    79589b73fb25 coresight: cti: Fix remove sysfs link error
    9d645e979fdf coresight: etm: perf: Fix warning caused by etm_setup_aux failure
    4d3adf453eec iomap: Use kzalloc to allocate iomap_page
    f5758f108b61 nl80211: fix OBSS PD min and max offset validation
    b6ca9ea12055 hv: clocksource: Add notrace attribute to read_hv_sched_clock_*() functions
    70f1f999e24d nvmem: core: fix possible memleak when using nvmem_cell_info_to_nvmem_cell()
    b21749762534 tty: hvc: fix link error with CONFIG_SERIAL_CORE_CONSOLE=n
    f4e52bc14c84 HID: hid-input: fix stylus battery reporting
    aba2ee9e7425 ASoC: fsl_sai: Instantiate snd_soc_dai_driver
    184c5e17b926 slimbus: qcom-ngd-ctrl: disable ngd in qmi server down callback
    caf464017965 slimbus: core: do not enter to clock pause mode in core
    4d11ab5f0904 slimbus: core: check get_addr before removing laddr ida
    9da861400bfd quota: clear padding in v2r1_mem2diskdqb()
    3efc30bcd162 mt76: mt7915: fix possible memory leak in mt7915_mcu_add_beacon
    6f0f3ad5a602 rtw88: Fix potential probe error handling race with wow firmware loading
    762f48374c26 rtw88: Fix probe error handling race with firmware loading
    e611c92ab330 usb: dwc2: Add missing cleanups when usb_add_gadget_udc() fails
    f9a314f5aa59 usb: dwc3: core: Properly default unspecified speed
    0cf8eb3b9858 usb: dwc2: Fix parameter type in function pointer prototype
    21b7dcfbf378 ALSA: seq: oss: Avoid mutex lock for a long-time ioctl
    a0229d675455 misc: mic: scif: Fix error handling path
    3eb24fb8582c ASoC: cros_ec_codec: fix kconfig dependency warning for SND_SOC_CROS_EC_CODEC
    ed848b21eb91 dmaengine: dmatest: Check list for emptiness before access its last entry
    2dbfe8f6b97c phy: rockchip-dphy-rx0: Include linux/delay.h
    e43acbf29d76 drm: rcar-du: Put reference to VSP device
    0e8f4263125f ath6kl: wmi: prevent a shift wrapping bug in ath6kl_wmi_delete_pstream_cmd()
    5569ffd9e497 ath11k: Add checked value for ath11k_ahb_remove
    ec71c634dcbd spi: omap2-mcspi: Improve performance waiting for CHSTAT
    c00cdd1b966a ASoC: tas2770: Fix unbalanced calls to pm_runtime
    46701b00ed9d ASoC: SOF: control: add size checks for ext_bytes control .put()
    e06a18b78b43 net: dsa: rtl8366rb: Support all 4096 VLANs
    a8091e02962a ASoC: tlv320aic32x4: Fix bdiv clock rate derivation
    63ed07138636 ASoC: tas2770: Fix error handling with update_bits
    6ce4b0c4f3d5 ASoC: tas2770: Fix required DT properties in the code
    92cc64394bc9 ASoC: tas2770: Add missing bias level power states
    304c38230dfd ASoC: tas2770: Fix calling reset in probe
    da374cb21045 net: wilc1000: clean up resource in error path of init mon interface
    a74a1c39af96 net: dsa: rtl8366: Skip PVID setting if not requested
    b8d304cdf951 net: dsa: rtl8366: Refactor VLAN/PVID init
    6aa894ff3372 net: dsa: rtl8366: Check validity of passed VLANs
    701c56f56837 xhci: don't create endpoint debugfs entry before ring buffer is set.
    98d66a3bb9c0 selftests/bpf: Fix endianness issue in test_sockopt_sk
    f130c8a0eeac selftests/bpf: Fix endianness issue in sk_assign
    a1aff5c4417e selftests: mptcp: interpret \n as a new line
    6c87ffcb2bff nvmem: core: fix missing of_node_put() in of_nvmem_device_get()
    3a0f17922776 coresight: etm4x: Fix issues on trcseqevr access
    0c97523e87a8 coresight: etm4x: Handle unreachable sink in perf mode
    abea9d776fe9 coresight: cti: Write registers directly in cti_enable_hw()
    3857796b8b49 coresight: etm4x: Fix issues within reset interface of sysfs
    efd00a5ed569 coresight: etm4x: Ensure default perf settings filter user/kernel
    435fd705a501 coresight: cti: remove pm_runtime_get_sync() from CPU hotplug
    0d0d70e1b1da coresight: cti: disclaim device only when it's claimed
    9fe394b41ba6 coresight: fix offset by one error in counting ports
    3c5c980ece55 coresight: etm4x: Fix etm4_count race by moving cpuhp callbacks to init
    8f319155ef51 ASoC: tlv320adcx140: Fix digital gain range
    7d3dcc5d26e1 ASoC: topology: disable size checks for bytes_ext controls if needed
    4a4778394419 ima: Fix NULL pointer dereference in ima_file_hash
    453ed3d7f990 drm: mxsfb: check framebuffer pitch
    dec5fabe7202 cpufreq: armada-37xx: Add missing MODULE_DEVICE_TABLE
    f3ceea270494 xfs: force the log after remapping a synchronous-writes file
    5e78a6fe2d85 net: stmmac: use netif_tx_start|stop_all_queues() function
    be17fb81e944 net: stmmac: Fix incorrect location to set real_num_rx|tx_queues
    f817cdd6d1fd scsi: mpt3sas: Fix sync irqs
    3c33f586d090 net/mlx5: Don't call timecounter cyc2time directly from 1PPS flow
    9ba9292375df net/mlx5: Fix uninitialized variable warning
    b60c22ea6623 drm/msm/adreno: fix probe without iommu
    37c857ec136c pinctrl: devicetree: Keep deferring even on timeout
    151d4913e81e pinctrl: mcp23s08: Fix mcp23x17 precious range
    bbcbd596e676 pinctrl: mcp23s08: Fix mcp23x17_regmap initialiser
    dc7285e0f1f8 Bluetooth: Re-order clearing suspend tasks
    8141ec5a8f5a selftests/lkdtm: Use "comm" instead of "diff" for dmesg
    7c38731efb2f iomap: Mark read blocks uptodate in write_begin
    d69930b3ec0b iomap: Clear page error before beginning a write
    039ee8a6363d drm/panfrost: Ensure GPU quirks are always initialised
    dc48ca171bdc drm/msm: Avoid div-by-zero in dpu_crtc_atomic_check()
    b7d539816d06 HID: roccat: add bounds checking in kone_sysfs_write_settings()
    25529f1f6003 scsi: ufs: ufs-mediatek: Fix HOST_PA_TACTIVATE quirk
    8c230b3b3668 ASoC: fsl: imx-es8328: add missing put_device() call in imx_es8328_probe()
    7a702a885270 video: fbdev: radeon: Fix memleak in radeonfb_pci_register
    53d19f4bb131 video: fbdev: sis: fix null ptr dereference
    33b1e23741cb video: fbdev: vga16fb: fix setting of pixclock because of a pass-by-value error
    d92db965ef66 ath11k: fix a double free and a memory leak
    c7072eda4093 drivers/virt/fsl_hypervisor: Fix error handling path
    38b319133226 pwm: lpss: Add range limit check for the base_unit register value
    25eb525f5bf9 pwm: lpss: Fix off by one error in base_unit math in pwm_lpss_prepare()
    04e819b2f765 pty: do tty_flip_buffer_push without port->lock in pty_write
    2e92899228ae tty: hvcs: Don't NULL tty->driver_data until hvcs_cleanup()
    45f20b6066c3 tty: serial: earlycon dependency
    5ec7b8a3b6e7 binder: Remove bogus warning on failed same-process transaction
    4f40c79cbe72 scsi: ufs: Make ufshcd_print_trs() consider UFSHCD_QUIRK_PRDT_BYTE_GRAN
    6852678afe96 selftests: vm: add fragment CONFIG_GUP_BENCHMARK
    e9f1340193b5 Bluetooth: Clear suspend tasks on unregister
    7a15bd2bae85 drm/crc-debugfs: Fix memleak in crc_control_write
    91c8e9e18580 samples/bpf: Fix to xdpsock to avoid recycling frames
    88b34c076be3 drm: panel: Fix bpc for OrtusTech COM43H4M85ULC panel
    71782955ade1 mm/error_inject: Fix allow_error_inject function signatures.
    9c5e9f50572e VMCI: check return value of get_user_pages_fast() for errors
    2e1356e81edd staging: emxx_udc: Fix passing of NULL to dma_alloc_coherent()
    ad5c72b65770 backlight: sky81452-backlight: Fix refcount imbalance on error
    39d464cdfe30 rtw88: don't treat NULL pointer as an array
    8976b0bf6d8b wilc1000: Fix memleak in wilc_bus_probe
    93feab00afca wilc1000: Fix memleak in wilc_sdio_probe
    2b87f9ce106e libbpf: Fix unintentional success return code in bpf_object__load
    6ff694ac40b9 scsi: csiostor: Fix wrong return value in csio_hw_prep_fw()
    d646554479f3 scsi: qla2xxx: Fix wrong return value in qla_nvme_register_hba()
    7e26ebb1a9d2 scsi: qla2xxx: Fix wrong return value in qlt_chk_unresolv_exchg()
    d1bfd5d44f4b scsi: qla2xxx: Fix the size used in a 'dma_free_coherent()' call
    66deb6aebe10 scsi: qla4xxx: Fix an error handling path in 'qla4xxx_get_host_stats()'
    34b42a17b99f drm/gma500: fix error check
    1b8b0d839d1b selftests/bpf: Fix test_vmlinux test to use bpf_probe_read_user()
    8135d168d84c drm/amd/display: fix potential integer overflow when shifting 32 bit variable bl_pwm
    c2f41d9b1d53 staging: rtl8192u: Do not use GFP_KERNEL in atomic context
    9959c2031233 mwifiex: Do not use GFP_KERNEL in atomic context
    027b25d74ffb brcmfmac: check ndev pointer
    e9e2a870a490 ath11k: Fix possible memleak in ath11k_qmi_init_service
    7d93d871e55b ASoC: qcom: lpass-cpu: fix concurrency issue
    41a33c66b6e6 ASoC: qcom: lpass-platform: fix memory leak
    d981fcece216 wcn36xx: Fix reported 802.11n rx_highest rate wcn3660/wcn3680
    2af670b21911 ath10k: Fix the size used in a 'dma_free_coherent()' call in an error handling path
    ef10e65b3d7e ath9k: Fix potential out of bounds in ath9k_htc_txcompletion_cb()
    7c81b8b6c0b3 ath6kl: prevent potential array overflow in ath6kl_add_new_sta()
    b395ec13f72b drm: panel: Fix bus format for OrtusTech COM43H4M85ULC panel
    31e3c7aefb96 drm/vkms: add missing platform_device_unregister() in vkms_init()
    199cb9d9336f drm/vgem: add missing platform_device_unregister() in vgem_init()
    2723170f9c1b drm/amd/display: Fix wrong return value in dm_update_plane_state()
    3fe978892ab4 Bluetooth: hci_uart: Cancel init work before unregistering
    0775947bf20b drm/vkms: fix xrgb on compute crc
    6a251056d920 ath10k: provide survey info as accumulated data
    1e2be69a0396 blk-mq: move cancel of hctx->run_work to the front of blk_exit_queue
    eb66ae00496f btrfs: add owner and fs_info to alloc_state io_tree
    6cc523c1ba7e hwmon: (bt1-pvt) Wait for the completion with timeout
    82f27fd04df6 hwmon: (bt1-pvt) Cache current update timeout
    f8896b1dc97f hwmon: (bt1-pvt) Test sensor power supply on probe
    283d31599577 spi: spi-s3c64xx: Check return values
    9c27047159fd spi: spi-s3c64xx: swap s3c64xx_spi_set_cs() and s3c64xx_enable_datapath()
    2d92aae41a06 pinctrl: bcm: fix kconfig dependency warning when !GPIOLIB
    96c6b5d57756 regulator: resolve supply after creating regulator
    539f606e1044 media: ti-vpe: Fix a missing check and reference count leak
    36ba112a7c8d media: stm32-dcmi: Fix a reference count leak
    344632d9b782 media: s5p-mfc: Fix a reference count leak
    00eff51ebd27 media: camss: Fix a reference count leak.
    445adb4113e8 media: platform: fcp: Fix a reference count leak.
    34b2032620a3 media: rockchip/rga: Fix a reference count leak.
    96b1dbdb92ad media: rcar-vin: Fix a reference count leak.
    0936f228c185 media: tc358743: cleanup tc358743_cec_isr
    e25e1421396d media: tc358743: initialize variable
    ffa1c6807c37 media: mx2_emmaprp: Fix memleak in emmaprp_probe
    19b283f0b3d4 crypto: sun8i-ce - handle endianness of t_common_ctl
    9748e867ac81 crypto: stm32/crc32 - Avoid lock if hardware is already used
    aee35828de88 crypto: mediatek - fix leaks in mtk_desc_ring_alloc
    abfdbdda990a hwmon: (w83627ehf) Fix a resource leak in probe
    20d16af9c0fb hwmon: (pmbus/max34440) Fix status register reads for MAX344{51,60,61}
    621368b5adfe crypto: omap-sham - fix digcnt register handling with export/import
    71452513b06b spi: dw-pci: free previously allocated IRQs if desc->setup() fails
    31a31b30b0f6 spi: fsi: Implement restricted size for certain controllers
    a2e41e4fcd8e spi: fsi: Fix use of the bneq+ sequencer instruction
    c2177e077841 spi: fsi: Handle 9 to 15 byte transfers lengths
    0f8c1ad5ed8f media: rcar-csi2: Allocate v4l2_async_subdev dynamically
    bd48c278ba33 media: rcar_drif: Allocate v4l2_async_subdev dynamically
    23b043e23923 media: rcar_drif: Fix fwnode reference leak when parsing DT
    c78cc511ff68 media: i2c: ov5640: Enable data pins on poweron for DVP mode
    d1bb697b085a media: i2c: ov5640: Separate out mipi configuration from s_power
    44046ac3fd90 media: i2c: ov5640: Remain in power down for DVP mode unless streaming
    2038c71aeea7 media: omap3isp: Fix memleak in isp_probe
    ae17eb2da566 media: staging/intel-ipu3: css: Correctly reset some memory
    fbd50e6e825f media: uvcvideo: Silence shift-out-of-bounds warning
    3eff11b54bac media: uvcvideo: Set media controller entity functions
    008efc8c2ec0 fscrypt: restrict IV_INO_LBLK_32 to ino_bits <= 32
    38cc20da3fd2 media: m5mols: Check function pointer in m5mols_sensor_power
    6cd272c1b1d3 media: ov5640: Correct Bit Div register in clock tree diagram
    3bc4af05a125 media: hantro: postproc: Fix motion vector space allocation
    841d6b2bb64a media: hantro: h264: Get the correct fallback reference buffer
    b076e6ad0081 media: Revert "media: exynos4-is: Add missed check for pinctrl_lookup_state()"
    2e35f75c9a14 crypto: ccree - fix runtime PM imbalance on error
    707041cc6852 media: tuner-simple: fix regression in simple_set_radio_freq
    1c1e39f91ffe media: vivid: Fix global-out-of-bounds read in precalculate_color()
    0ebbe42a9a4c crypto: picoxcell - Fix potential race condition bug
    5ec044fb819d crypto: ixp4xx - Fix the size used in a 'dma_free_coherent()' call
    df29e4415305 crypto: mediatek - Fix wrong return value in mtk_desc_ring_alloc()
    36c93e69cb80 crypto: algif_skcipher - EBUSY on aio should be an error
    ff57d46f868e perf/core: Fix race in the perf_mmap_close() function
    7e5248ec07bc perf/x86: Fix n_pair for cancelled txn
    2df4319976f9 pinctrl: qcom: Use return value from irq_set_wake() call
    9d371ffd8434 pinctrl: qcom: Set IRQCHIP_SET_TYPE_MASKED and IRQCHIP_MASK_ON_SUSPEND flags
    9a7d327326bd x86/events/amd/iommu: Fix sizeof mismatch
    5fd2c1240d75 x86/nmi: Fix nmi_handle() duration miscalculation
    6f9bc7071b53 perf/x86/intel/uncore: Fix the scale of the IMC free-running events
    32ce27005110 perf/x86/intel/uncore: Reduce the number of CBOX counters
    accdd0292919 perf/x86/intel/uncore: Update Ice Lake uncore units
    140596caef50 arm64: perf: Add missing ISB in armv8pmu_enable_counter()
    4792206af85f sched/fair: Use dst group while checking imbalance for NUMA balancer
    63829cb38a3c sched/fair: Fix wrong cpu selecting from isolated domain
    b75cbad81cfc drivers/perf: thunderx2_pmu: Fix memory resource error handling
    a071f86dd7c4 drivers/perf: xgene_pmu: Fix uninitialized resource struct
    e99cf7b5025a arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
    b45c14f9b0c6 x86/fpu: Allow multiple bits in clearcpuid= parameter
    4f596c780958 perf/x86/intel/ds: Fix x86_pmu_stop warning for large PEBS
    3b172044dc55 EDAC/ti: Fix handling of platform_get_irq() error
    0d0f50ecd85d EDAC/aspeed: Fix handling of platform_get_irq() error
    3a70ad440e20 EDAC/i5100: Fix error handling order in i5100_init_one()
    6411e8ea3086 microblaze: fix kbuild redundant file warning
    1b8e25772d8e sched/fair: Fix wrong negative conversion in find_energy_efficient_cpu()
    03e0226f1cfe RAS/CEC: Fix cec_init() prototype
    19212b1a2be3 crypto: caam/qi - add support for more XTS key lengths
    d0100d71efff crypto: caam/qi - add fallback for XTS with more than 8B IV
    b61aa1de53f4 crypto: algif_aead - Do not set MAY_BACKLOG on the async path
    dd5df0880122 ima: Don't ignore errors from crypto_shash_update()
    ee0e07130bd0 KVM: SVM: Initialize prev_ga_tag before use
    af216a426bcc KVM: x86: Intercept LA57 to inject #GP fault when it's reserved
    f7b5e3c6ab6e KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
    efd21b7274b0 KVM: nVMX: Reload vmcs01 if getting vmcs12's pages fails
    f7421220fd60 KVM: nVMX: Reset the segment cache when stuffing guest segs
    c5ec2a6618d3 KVM: nVMX: Morph notification vector IRQ on nested VM-Enter to pending PI
    dd6120a8e1f3 arm64: Make use of ARCH_WORKAROUND_1 even when KVM is not enabled
    cb6c316cd99a smb3: fix stat when special device file and mounted with modefromsid
    321cf0e88e25 smb3: do not try to cache root directory if dir leases not supported
    dd80b98bdf0a SMB3.1.1: Fix ids returned in POSIX query dir
    2ab6d3b441dd SMB3: Resolve data corruption of TCP server info fields
    55bf111d4e81 cifs: Return the error from crypt_message when enc/dec key not found.
    c5db0e593499 cifs: remove bogus debug code
    2d8b73fc38ae ALSA: hda/realtek: Enable audio jacks of ASUS D700SA with ALC887
    1fb41e21037e ALSA: hda/realtek - Add mute Led support for HP Elitebook 845 G7
    29050421372a ALSA: hda/realtek - set mic to auto detect on a HP AIO machine
    eba61e03eadf ALSA: hda/realtek - The front Mic on a HP machine doesn't work
    383fcddfbcaa ALSA: usb-audio: Line6 Pod Go interface requires static clock rate quirk
    70dcb923cc27 ALSA: hda - Fix the return value if cb func is already registered
    4e3c57b30473 ALSA: hda - Don't register a cb func if it is registered already
    618a54d780a5 net/sched: act_gate: Unlock ->tcfa_lock in tc_setup_flow_action()
    ed2c3b4a04c2 net: ethernet: mtk-star-emac: select REGMAP_MMIO
    9c70b53dda47 tcp: fix to update snd_wl1 in bulk receiver fast path
    e4d5d075c190 selftests: rtnetlink: load fou module for kci_test_encap_fou() test
    8ab1b9ef3974 selftests: forwarding: Add missing 'rp_filter' configuration
    11a3f1f851da r8169: fix operation under forced interrupt threading
    6c9e378d7579 nfc: Ensure presence of NFC_ATTR_FIRMWARE_NAME attribute in nfc_genl_fw_download()
    a81996aa6ee5 nexthop: Fix performance regression in nexthop deletion
    8672e0e1be10 net/sched: act_tunnel_key: fix OOB write in case of IPv6 ERSPAN tunnels
    e5b67266fb48 net/sched: act_ct: Fix adding udp port mangle operation
    f6bb7b012676 net: Properly typecast int values to set sk_max_pacing_rate
    08c6a8c61f9f net: hdlc_raw_eth: Clear the IFF_TX_SKB_SHARING flag after calling ether_setup
    6fe9d5ac3f76 net: hdlc: In hdlc_rcv, check to make sure dev is an HDLC device
    79a5e1726d4f net: ftgmac100: Fix Aspeed ast2600 TX hang issue
    7f0afe20abab mptcp: initialize mptcp_options_received's ahmac
    ec5c9273f731 icmp: randomize the global rate limiter
    ab91b97c5f92 ibmvnic: save changed mac address to adapter->mac_addr
    3f9420b4d3fc chelsio/chtls: fix writing freed memory
    d632d6da9724 chelsio/chtls: correct function return and return type
    ea95811a67e3 chelsio/chtls: Fix panic when listen on multiadapter
    8650467aa359 chelsio/chtls: fix panic when server is on ipv6
    e94a4b48d51b chelsio/chtls: correct netdevice for vlan interface
    958fc22dbc30 chelsio/chtls: fix socket lock
    eb7ee70b9226 tipc: fix incorrect setting window for bcast link
    a52c1d9114f1 tipc: re-configure queue limit for broadcast link
    760295f17597 ALSA: hda/hdmi: fix incorrect locking in hdmi_pcm_close
    2b7a2a0be104 ALSA: hda: fix jack detection with Realtek codecs when in D3
    f4b88ebd9b73 ALSA: bebob: potential info leak in hwdep_read()
    40d4418ea4db binder: fix UAF when releasing todo list
    dd5743391b5e r8169: fix data corruption issue on RTL8402
    7f1b0fa4805c net_sched: remove a redundant goto chain check
    f736e9e2f750 net/ipv4: always honour route mtu during forwarding
    7ef2b9748f88 net: j1939: j1939_session_fresh_new(): fix missing initialization of skbcnt
    3cda27a6e540 can: j1939: j1939_tp_tx_dat_new(): fix missing initialization of skbcnt
    46ebf7a3bdb0 can: m_can_platform: don't call m_can_class_suspend in runtime suspend
    575e9184885b socket: don't clear SOCK_TSTAMP_NEW when SO_TIMESTAMPNS is disabled
    d2bc51dbdecd socket: fix option SO_TIMESTAMPING_NEW
    a7d0ffde99d5 tipc: fix the skb_unshare() in tipc_buf_append()
    83e8af2ee339 net: usb: qmi_wwan: add Cellient MPL200 card
    01630fae60bd net/tls: sendfile fails with ktls offload
    91119131f8a8 net/smc: fix valid DMBE buffer sizes
    c0d0fad9bed7 net/smc: fix use-after-free of delayed events
    5e52ea477365 net: sched: Fix suspicious RCU usage while accessing tcf_tunnel_info
    b91a8c7486a3 net: mptcp: make DACK4/DACK8 usage consistent among all subflows
    a0f063a63afa net: ipa: skip suspend/resume activities if not set up
    8090c13d3e4b net: fix pos incrementment in ipv6_route_seq_next
    f17fe0c1addf net: fec: Fix PHY init after phy_reset_after_clk_enable()
    8a6ab151443c net: fec: Fix phy_device lookup for phy_reset_after_clk_enable()
    d6cc94152da1 net: dsa: microchip: fix race condition
    61d51568e43b mlx4: handle non-napi callers to napi_poll
    8536e300622a ipv4: Restore flowi4_oif update before call to xfrm_lookup_route
    bd0912cd125e ibmveth: Identify ingress large send packets.
    d673d278f59f ibmveth: Switch order of ibmveth_helper calls.
    68e3dec3c3e4 cxgb4: handle 4-tuple PEDIT to NAT mode translation

(From OE-Core rev: 75182dd3db60a78920aaff724f0c71e000a77260)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit eab49834f263a2727fa699050a8d01715f1e9d21)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
e2de476001 linux-yocto/5.4: update to v5.4.72
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    52f6ded2a377 Linux 5.4.72
    865b015e8d41 crypto: qat - check cipher length for aead AES-CBC-HMAC-SHA
    aa1167908ac4 crypto: bcm - Verify GCM/CCM key length in setkey
    564312e08892 xen/events: don't use chip_data for legacy IRQs
    041445d0d577 reiserfs: Fix oops during mount
    046616898a57 reiserfs: Initialize inode keys properly
    22ab9ca024a0 USB: serial: ftdi_sio: add support for FreeCalypso JTAG+UART adapters
    bfb1438e8c15 USB: serial: pl2303: add device-id for HP GC device
    aecf3a1c11dc staging: comedi: check validity of wMaxPacketSize of usb endpoints found
    8aff87284be6 USB: serial: option: Add Telit FT980-KS composition
    3c3eb734ef1f USB: serial: option: add Cellient MPL200 card
    b970578274e9 media: usbtv: Fix refcounting mixup
    6ad2e647d91f Bluetooth: Disconnect if E0 is used for Level 4
    21d2051d1f1c Bluetooth: Fix update of connection state in `hci_encrypt_cfm`
    ed6c361e3229 Bluetooth: Consolidate encryption handling in hci_encrypt_cfm
    155bf3fd4e8c Bluetooth: MGMT: Fix not checking if BT_HS is enabled
    66a14350de9a Bluetooth: L2CAP: Fix calling sk_filter on non-socket based channel
    0d9e9b6e1a26 Bluetooth: A2MP: Fix not initializing all members
    54f8badb9bc9 ACPI: Always build evged in
    30ddaa4c0c95 ARM: 8939/1: kbuild: use correct nm executable
    1bf467fdfeae btrfs: take overcommit into account in inc_block_group_ro
    39c5eb1482b2 btrfs: don't pass system_chunk into can_overcommit
    bc79abf4afea perf cs-etm: Move definition of 'traceid_list' global variable from header file

(From OE-Core rev: dffb8b856649d4280ac376d480c7935663f8bd7a)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5da55c543cf38ca1082bc160fd571b3c7c6a40ba)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
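For reference, a per-release commit list like the ones in these updates can be regenerated directly from the korg stable tree. A minimal sketch, assuming a local linux-stable clone with the relevant tags fetched (only the v5.4.71/v5.4.72 tag names come from the entry above; the rest is standard git usage):

    # List the patches that make up the v5.4.72 stable release, newest
    # first, as 12-character abbreviated hash plus subject line.
    git log --no-merges --abbrev=12 --format='%h %s' v5.4.71..v5.4.72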
Bruce Ashfield
45c8a7e583 linux-yocto/5.8: update to v5.8.16
Updating linux-yocto/5.8 to the latest korg -stable release that comprises
the following commits:

    c5464f4be19b Linux 5.8.16
    4cadc0dd5ce2 reiserfs: Fix oops during mount
    492f415bb105 reiserfs: Initialize inode keys properly
    27319196d104 USB: serial: ftdi_sio: add support for FreeCalypso JTAG+UART adapters
    56eff3982215 USB: serial: pl2303: add device-id for HP GC device
    e95645fd1e28 staging: comedi: check validity of wMaxPacketSize of usb endpoints found
    75ea7049c9c6 USB: serial: option: Add Telit FT980-KS composition
    a7f0e37b29f4 USB: serial: option: add Cellient MPL200 card
    d6efa7525a59 media: usbtv: Fix refcounting mixup
    1b7150e1c95e Bluetooth: Disconnect if E0 is used for Level 4
    9e473bae14f3 Bluetooth: MGMT: Fix not checking if BT_HS is enabled
    ffddc73458e8 Bluetooth: L2CAP: Fix calling sk_filter on non-socket based channel
    a350bfd9a93f Bluetooth: A2MP: Fix not initializing all members
    8fae48c4bf67 crypto: qat - check cipher length for aead AES-CBC-HMAC-SHA
    c4ab0a2944b8 crypto: bcm - Verify GCM/CCM key length in setkey

(From OE-Core rev: c80d6d89e90b119e8fa1b434c35c46448bb2934c)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 869f4a5edf70a88301646356c8d3faa55996e5a9)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Bruce Ashfield
4d2fd8ddd3 linux-yocto/5.4: update to v5.4.71
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    85b0841aab15 Linux 5.4.71
    22e6625babfc net_sched: commit action insertions together
    a5de4ee6d055 net_sched: defer tcf_idr_insert() in tcf_action_init_1()
    dbb763107d3e net: usb: rtl8150: set random MAC address when set_ethernet_addr() fails
    6c9edf2d855a Input: ati_remote2 - add missing newlines when printing module parameters
    536c767b14e3 net/mlx5e: Fix driver's declaration to support GRE offload
    8dc5025c6a44 net/tls: race causes kernel panic
    a42dbd059ef6 net/core: check length before updating Ethertype in skb_mpls_{push,pop}
    e39c9eba9bef tcp: fix receive window update in tcp_add_backlog()
    2729afe17987 mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged
    d94c1505fa91 mmc: core: don't set limits.discard_granularity as 0
    760c7a948bea perf: Fix task_function_call() error handling
    b750f86a62d1 rxrpc: Fix server keyring leak
    ae1a085b4aac rxrpc: The server keyring isn't network-namespaced
    513dd1609c9d rxrpc: Fix some missing _bh annotations on locking conn->state_lock
    422f5c5d3ef9 rxrpc: Downgrade the BUG() for unsupported token type in rxrpc_read()
    7e1f39b5c1d5 rxrpc: Fix rxkad token xdr encoding
    9a52da3f61b4 net/mlx5e: Fix VLAN create flow
    6b9752d85e72 net/mlx5e: Fix VLAN cleanup flow
    47e83c69fe14 net/mlx5e: Add resiliency in Striding RQ mode for packets larger than MTU
    1e7a94724b78 net/mlx5: Fix request_irqs error flow
    073fff810206 net/mlx5: Avoid possible free of command entry while timeout comp handler
    0955c774f32d virtio-net: don't disable guest csum when disable LRO
    15f84bdf6185 net: usb: ax88179_178a: fix missing stop entry in driver_info
    70877d04d41f r8169: fix RTL8168f/RTL8411 EPHY config
    7a96cbd74fcd mlxsw: spectrum_acl: Fix mlxsw_sp_acl_tcam_group_add()'s error path
    f3b35c3782ed mdio: fix mdio-thunder.c dependency & build error
    8d103b1f9ce5 bonding: set dev->needed_headroom in bond_setup_by_slave()
    3ce96a55b756 net: ethernet: cavium: octeon_mgmt: use phy_start and phy_stop
    e987ea087fd2 iavf: Fix incorrect adapter get in iavf_resume
    029ced5cce89 iavf: use generic power management
    84ab35eacdf2 xfrm: Use correct address family in xfrm_state_find
    4d3edb2e4d6e platform/x86: fix kconfig dependency warning for FUJITSU_LAPTOP
    dd2786a3e521 net: stmmac: removed enabling eee in EEE set callback
    e9a12de5a2be xfrm: clone whole liftime_cur structure in xfrm_do_migrate
    7ea7436c406c xfrm: clone XFRMA_SEC_CTX in xfrm_do_migrate
    c1becfebe33e xfrm: clone XFRMA_REPLAY_ESN_VAL in xfrm_do_migrate
    0bea401a9a5a xfrm: clone XFRMA_SET_MARK in xfrm_do_migrate
    f825fd534f8b iommu/vt-d: Fix lockdep splat in iommu_flush_dev_iotlb()
    bdffb36bcd38 drm/amdgpu: prevent double kfree ttm->sg
    4034664a733e openvswitch: handle DNAT tuple collision
    f89128ad358e net: team: fix memory leak in __team_options_register
    003269d8d6de team: set dev->needed_headroom in team_setup_by_port()
    fb3681c20fbf sctp: fix sctp_auth_init_hmacs() error path
    040e3110d49c i2c: owl: Clear NACK and BUS error bits
    abe997f632d1 i2c: meson: fixup rate calculation with filter delay
    6db69c390622 i2c: meson: fix clock setting overwrite
    209549c1c0f0 cifs: Fix incomplete memory allocation on setxattr path
    0afdda28eb2b xfrmi: drop ignore_df check before updating pmtu
    49af88ac6534 nvme-tcp: check page by sendpage_ok() before calling kernel_sendpage()
    15cac17d9d39 tcp: use sendpage_ok() to detect misused .sendpage
    d23dd3864b4c net: introduce helper sendpage_ok() in include/linux/net.h
    5c62d335317c mm/khugepaged: fix filemap page_to_pgoff(page) != offset
    1317469fa05b macsec: avoid use-after-free in macsec_handle_frame()
    20f96fee81c6 nvme-core: put ctrl ref when module ref get fail
    c0f3c5386995 btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation
    e531fd7f8b3a btrfs: fix RWF_NOWAIT write not failing when we need to cow
    1f90600e259b btrfs: Ensure we trim ranges across block group boundary
    6a0f5da2db3b btrfs: volumes: Use more straightforward way to calculate map length
    5aefd1fa9f4d Btrfs: send, fix emission of invalid clone operations within the same file
    19d8412679f2 Btrfs: send, allow clone operations within the same file
    f02dc39bbb20 arm64: dts: stratix10: add status to qspi dts node
    e8e1d16e0b89 i2c: i801: Exclude device from suspend direct complete optimization
    2118c7ba5f2a perf top: Fix stdio interface input handling with glibc 2.28+
    2499c15115ac perf test session topology: Fix data path
    7c1847aa4932 driver core: Fix probe_count imbalance in really_probe()
    3fd2647f9d68 platform/x86: thinkpad_acpi: re-initialize ACPI buffer size when reused
    da4cdc87dfeb platform/x86: intel-vbtn: Switch to an allow-list for SW_TABLET_MODE reporting
    6440fb9bda91 bpf: Prevent .BTF section elimination
    67a57230b4bf bpf: Fix sysfs export of empty BTF section
    9bd694ccfd44 platform/x86: thinkpad_acpi: initialize tp_nvram_state variable
    d101961ce588 platform/x86: intel-vbtn: Fix SW_TABLET_MODE always reporting 1 on the HP Pavilion 11 x360
    2293272345ff Platform: OLPC: Fix memleak in olpc_ec_probe
    ce8432912f1b usermodehelper: reset umask to default before executing user process
    920a61ddd3b5 vhost: Use vhost_get_used_size() in vhost_vring_set_addr()
    57b47abc1a4a vhost: Don't call access_ok() when using IOTLB
    456d77c1bdfa drm/nouveau/mem: guard against NULL pointer access in mem_del
    8ece83bf754f net: wireless: nl80211: fix out-of-bounds access in nl80211_del_key()
    ee413b2915bf io_uring: Fix double list add in io_queue_async_work()
    efb1cef27d59 io_uring: Fix remove irrelevant req from the task_list
    75524f753318 io_uring: Fix missing smp_mb() in io_cancel_async_work()
    d9e81b2fb372 io_uring: Fix resource leaking when kill the process
    4f46ef7bec86 Revert "ravb: Fixed to be able to unload modules"
    1b2fcd82c0ca fbcon: Fix global-out-of-bounds read in fbcon_get_font()
    f51ec3fd7128 Fonts: Support FONT_EXTRA_WORDS macros for built-in fonts
    eebe3685701b fbdev, newport_con: Move FONT_EXTRA_WORDS macros into linux/font.h
    d22f99d235e1 Linux 5.4.70
    253052b636e9 netfilter: ctnetlink: add a range check for l3/l4 protonum
    27423bb05e25 ep_create_wakeup_source(): dentry name can change under you...
    8e58bad666bb epoll: EPOLL_CTL_ADD: close the race in decision to take fast path
    099b7a1bc791 epoll: replace ->visited/visited_list with generation count
    8993da3d4d3a epoll: do not insert into poll queues until all sanity checks are done
    8db44b30d392 nvme: consolidate chunk_sectors settings
    03f4f85bbd7d nvme: Introduce nvme_lba_to_sect()
    34b939695f28 nvme: Cleanup and rename nvme_block_nr()
    9626c1a63703 mm: don't rely on system state to detect hot-plug operations
    42b7153dd6a6 mm: replace memmap_context by meminit_context
    2334b2d5a2bd block/diskstats: more accurate approximation of io_ticks for slow disks
    1d13c3a5000b random32: Restore __latent_entropy attribute on net_rand_state
    4faf2c3a97ec scripts/dtc: only append to HOST_EXTRACFLAGS instead of overwriting
    ea4c691b58d7 Input: trackpoint - enable Synaptics trackpoints
    21b9387253a7 i2c: cpm: Fix i2c_ram structure
    811ac052e264 gpio: aspeed: fix ast2600 bank properties
    f2a2380812c6 gpio/aspeed-sgpio: don't enable all interrupts by default
    8323d1e09037 gpio/aspeed-sgpio: enable access to all 80 input & output sgpios
    eddeff708c15 iommu/exynos: add missing put_device() call in exynos_iommu_of_xlate()
    08e66c0c1c0e clk: samsung: exynos4: mark 'chipid' clock as CLK_IGNORE_UNUSED
    0ded28e3c468 clk: tegra: Always program PLL_E when enabled
    2f37a1ef1e5d nfs: Fix security label length not being reset
    6c5a11ead942 pinctrl: mvebu: Fix i2c sda definition for 98DX3236
    ae68b15839b0 phy: ti: am654: Fix a leak in serdes_am654_probe()
    543ea1af5744 gpio: sprd: Clear interrupt when setting the type as edge
    8c03d0ef62dd nvme-fc: fail new connections to a deleted host or remote port
    2b217eafcf74 nvme-pci: fix NULL req in completion handler
    157ccdf7eb2c spi: fsl-espi: Only process interrupts for expected events
    8cc5eb809aa5 tools/io_uring: fix compile breakage
    4e4646c85e89 tracing: Make the space reserved for the pid wider
    a0fe7f705457 mac80211: do not allow bigger VHT MPDUs than the hardware supports
    355a710f0813 mac80211: Fix radiotap header channel flag for 6GHz band
    126e6099b8c1 drivers/net/wan/hdlc: Set skb->protocol before transmitting
    3ba3fc3e7ea6 drivers/net/wan/lapbether: Make skb->protocol consistent with the header
    89fd103fbbb0 fuse: fix the ->direct_IO() treatment of iov_iter
    44b4baf850bd nvme-core: get/put ctrl and transport module in nvme_dev_open/release()
    0bcc3480393b rndis_host: increase sleep time in the query-response loop
    f19ff011027b net: dec: de2104x: Increase receive ring size for Tulip
    e9af030ddd4b drm/sun4i: mixer: Extend regmap max_register
    985a56c58c4f drivers/net/wan/hdlc_fr: Add needed_headroom for PVC devices
    91d59157b103 libbpf: Remove arch-specific include path in Makefile
    688aa0e0aaf9 clocksource/drivers/timer-gx6605s: Fixup counter reload
    3d54a640e20c drm/amdgpu: restore proper ref count in amdgpu_display_crtc_set_config
    de21eb7f8cb0 memstick: Skip allocating card when removing host
    c524a17312d4 ftrace: Move RCU is watching check after recursion check
    5ac7065e0866 iio: adc: qcom-spmi-adc5: fix driver name
    ac3bf99fc26a Input: i8042 - add nopnp quirk for Acer Aspire 5 A515
    aee38af574a1 xfs: trim IO to found COW extent limit
    aed60a1746ba net: virtio_vsock: Enhance connection semantics
    215459ff3666 vsock/virtio: add transport parameter to the virtio_transport_reset_no_sock()
    14c79ef213c2 clk: socfpga: stratix10: fix the divider for the emac_ptp_free_clk
    79c8ebdce55c gpio: tc35894: fix up tc35894 interrupt configuration
    035f59ad4ba8 gpio: mockup: fix resource leak in error path
    b079337f697a gpio: siox: explicitly support only threaded irqs
    57bd08a301f7 USB: gadget: f_ncm: Fix NDP16 datagram validation
    23389cf97aa1 mmc: sdhci: Workaround broken command queuing on Intel GLK based IRBIS models
    09c826447cb0 btrfs: fix filesystem corruption after a device replace

(From OE-Core rev: d7fe2a96ae30eecdfddd5a46c3fb088e633afc5b)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8f9352782e610775efbb059fbfb5a6b997d2ec88)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
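Conversely, to check whether a specific fix from one of these lists has already reached a given stable series, git can report which release tags contain it. A sketch under the same linux-stable-clone assumption (85b0841aab15 is the "Linux 5.4.71" release commit from the list above, used purely as an example):

    # Show every v5.4.* release tag whose history includes the commit.
    git tag --contains 85b0841aab15 'v5.4.*'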
Bruce Ashfield
ea0af53e2a linux-yocto/5.8: update to v5.8.15
Updating linux-yocto/5.8 to the latest korg -stable release that comprises
the following commits:

    665c6ff082e2 Linux 5.8.15
    03b7311c2d35 net_sched: commit action insertions together
    1e02bbf908d3 net_sched: defer tcf_idr_insert() in tcf_action_init_1()
    b6a788af71ed net: qrtr: ns: Protect radix_tree_deref_slot() using rcu read locks
    691847cc626c net: usb: rtl8150: set random MAC address when set_ethernet_addr() fails
    624143319921 Input: ati_remote2 - add missing newlines when printing module parameters
    2cdb64863860 tty/vt: Do not warn when huge selection requested
    af2c68e241ba net/mlx5e: Fix driver's declaration to support GRE offload
    13e623dc2772 net/tls: race causes kernel panic
    d1a1891a5865 net: bridge: fdb: don't flush ext_learn entries
    54d2034e1d13 net/core: check length before updating Ethertype in skb_mpls_{push,pop}
    912721b3ad72 netlink: fix policy dump leak
    85355299d6fa tcp: fix receive window update in tcp_add_backlog()
    a4c5f912c926 mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged
    0d600018dde7 mm: validate inode in mapping_set_error()
    270974601ea5 mmc: core: don't set limits.discard_granularity as 0
    23030fd91348 perf: Fix task_function_call() error handling
    02b573f11b1c afs: Fix deadlock between writeback and truncate
    29c60e82c6a5 net: mscc: ocelot: divide watermark value by 60 when writing to SYS_ATOP
    9fd541ad02bd net: mscc: ocelot: extend watermark encoding function
    13c116784250 net: mscc: ocelot: split writes to pause frame enable bit and to thresholds
    43e89f7e3c98 net: mscc: ocelot: rename ocelot_board.c to ocelot_vsc7514.c
    78272109f44d rxrpc: Fix server keyring leak
    bf1235365637 rxrpc: The server keyring isn't network-namespaced
    0fb27a1f99c1 rxrpc: Fix some missing _bh annotations on locking conn->state_lock
    6343a701ca68 rxrpc: Downgrade the BUG() for unsupported token type in rxrpc_read()
    3a15888ff3df rxrpc: Fix rxkad token xdr encoding
    41d0598c0f43 net: mvneta: fix double free of txq->buf
    d5c6f130b6f0 vhost-vdpa: fix page pinning leakage in error path
    ec7257845d40 vhost-vdpa: fix vhost_vdpa_map() on error condition
    72d41c97e736 net: hinic: fix DEVLINK build errors
    a974b4bddae3 net: stmmac: Modify configuration method of EEE timers
    d0eb9588f724 net/mlx5e: Fix race condition on nhe->n pointer in neigh update
    eef0da156040 net/mlx5e: Fix VLAN create flow
    b6dc435f3603 net/mlx5e: Fix VLAN cleanup flow
    f2140d0c6b93 net/mlx5e: Fix return status when setting unsupported FEC mode
    96e80a346634 net/mlx5e: Add resiliency in Striding RQ mode for packets larger than MTU
    4dc4c132f27f net/mlx5: Fix request_irqs error flow
    91ddbc505218 net/mlx5: Add retry mechanism to the command entry index allocation
    963f9da02730 net/mlx5: poll cmd EQ in case of command timeout
    da87ea137373 net/mlx5: Avoid possible free of command entry while timeout comp handler
    eb50f5c289e6 net/mlx5: Fix a race when moving command interface to polling mode
    04f31610f34f pipe: Fix memory leaks in create_pipe_files()
    ce1dde198079 octeontx2-pf: Fix synchronization issue in mbox
    5cfc870ede16 octeontx2-pf: Fix the device state on error
    7778b8860228 octeontx2-pf: Fix TCP/UDP checksum offload for IPv6 frames
    921dfb5fec6b octeontx2-af: Fix enable/disable of default NPC entries
    b9f0dcfbfc07 net: phy: realtek: fix rtl8211e rx/tx delay config
    9d41929ceea9 virtio-net: don't disable guest csum when disable LRO
    f5f8861d01d3 net: usb: ax88179_178a: fix missing stop entry in driver_info
    fb4fb78d23fc r8169: fix RTL8168f/RTL8411 EPHY config
    0ea7fe7c26ef mlxsw: spectrum_acl: Fix mlxsw_sp_acl_tcam_group_add()'s error path
    698075baae0b mdio: fix mdio-thunder.c dependency & build error
    c83ed7bb7469 bonding: set dev->needed_headroom in bond_setup_by_slave()
    665298cbd6bd net: ethernet: cavium: octeon_mgmt: use phy_start and phy_stop
    2cb43007e060 net: stmmac: Fix clock handling on remove path
    39d93de64749 vmxnet3: fix cksum offload issues for non-udp tunnels
    6ececc888c0c ice: fix memory leak in ice_vsi_setup
    c4b9b9d7eb10 ice: fix memory leak if register_netdev_fails
    33e948635e65 iavf: Fix incorrect adapter get in iavf_resume
    1e0cdecfb896 iavf: use generic power management
    13685508abf3 xfrm: Use correct address family in xfrm_state_find
    3e835221d670 net: dsa: felix: convert TAS link speed based on phylink speed
    24bc1ec457c8 hinic: fix wrong return value of mac-set cmd
    43b7d340cb3a hinic: add log in exception handling processes
    5f8c48c299bc platform/x86: fix kconfig dependency warning for FUJITSU_LAPTOP
    6d9886e6081b platform/x86: fix kconfig dependency warning for LG_LAPTOP
    046add2ce07c net: stmmac: removed enabling eee in EEE set callback
    ac25c357463b xsk: Do not discard packet when NETDEV_TX_BUSY
    38dd384ce429 xfrm: clone whole lifetime_cur structure in xfrm_do_migrate
    8baab8024028 xfrm: clone XFRMA_SEC_CTX in xfrm_do_migrate
    3ab37554e6ce xfrm: clone XFRMA_REPLAY_ESN_VAL in xfrm_do_migrate
    958c224a99d3 xfrm: clone XFRMA_SET_MARK in xfrm_do_migrate
    954adf701189 iommu/vt-d: Fix lockdep splat in iommu_flush_dev_iotlb()
    31bc10ac6d01 btrfs: move btrfs_rm_dev_replace_free_srcdev outside of all locks
    b50aa502610f drm/amd/display: fix return value check for hdcp_work
    b02b690b4bb3 drm/amd/pm: Removed fixed clock in auto mode DPM
    9e184961ddb7 io_uring: fix potential ABBA deadlock in ->show_fdinfo()
    287d8f00338d btrfs: move btrfs_scratch_superblocks into btrfs_dev_replace_finishing
    cefd370cb723 drm/amdgpu: prevent double kfree ttm->sg
    9c6944b53f1d openvswitch: handle DNAT tuple collision
    0388ffce1059 net: team: fix memory leak in __team_options_register
    70af9c28d423 team: set dev->needed_headroom in team_setup_by_port()
    9360901e714d sctp: fix sctp_auth_init_hmacs() error path
    d63492ab001b i2c: owl: Clear NACK and BUS error bits
    08a1313bfca0 i2c: meson: fixup rate calculation with filter delay
    3531df70c312 i2c: meson: keep peripheral clock enabled
    fe6124585cfe i2c: meson: fix clock setting overwrite
    d681bce5bc03 cifs: Fix incomplete memory allocation on setxattr path
    80683929112b espintcp: restore IP CB before handing the packet to xfrm
    1427c13cc16f xfrmi: drop ignore_df check before updating pmtu
    c2a55388bada nvme-tcp: check page by sendpage_ok() before calling kernel_sendpage()
    f4abc5911a9e tcp: use sendpage_ok() to detect misused .sendpage
    854828e10e2d net: introduce helper sendpage_ok() in include/linux/net.h
    89bec0adbf50 mm/khugepaged: fix filemap page_to_pgoff(page) != offset
    f994c81fe4c5 gpiolib: Disable compat ->read() code in UML case
    987c12d56402 RISC-V: Make sure memblock reserves the memory containing DT
    659a68b11df3 macsec: avoid use-after-free in macsec_handle_frame()
    8c995b27d066 nvme-core: put ctrl ref when module ref get fail
    3113391293be platform/x86: thinkpad_acpi: re-initialize ACPI buffer size when reused
    46a00e3e9275 platform/x86: intel-vbtn: Switch to an allow-list for SW_TABLET_MODE reporting
    402ee2f96fb9 r8169: consider that PHY reset may still be in progress after applying firmware
    a73bb4ddee83 bpf: Prevent .BTF section elimination
    bc33b9bb0757 bpf: Fix sysfs export of empty BTF section
    944e354acfc3 platform/x86: asus-wmi: Fix SW_TABLET_MODE always reporting 1 on many different models
    88ddba3ebc3c platform/x86: thinkpad_acpi: initialize tp_nvram_state variable
    b9c0333ac6c8 platform/x86: intel-vbtn: Fix SW_TABLET_MODE always reporting 1 on the HP Pavilion 11 x360
    6b010ed04d50 Platform: OLPC: Fix memleak in olpc_ec_probe
    6ad52d3ee278 splice: teach splice pipe reading about empty pipe buffers
    c679280057ee usermodehelper: reset umask to default before executing user process
    3d36be053e58 vhost: Use vhost_get_used_size() in vhost_vring_set_addr()
    3480587d9b9d vhost: Don't call access_ok() when using IOTLB
    145a5510ef6a block/scsi-ioctl: Fix kernel-infoleak in scsi_put_cdrom_generic_arg()
    128f5fe7c102 partitions/ibm: fix non-DASD devices
    ef29249b066f drm/nouveau/mem: guard against NULL pointer access in mem_del
    e82867e1c2b4 drm/nouveau/device: return error for unknown chipsets
    bc7382371b2d net: wireless: nl80211: fix out-of-bounds access in nl80211_del_key()
    82dfd230b0c0 exfat: fix use of uninitialized spinlock on error path
    6a4bf26a176d crypto: arm64: Use x16 with indirect branch to bti_c
    fc5b5ae8ac3c bpf: Fix scalar32_min_max_or bounds tracking
    849d01ef1894 Revert "ravb: Fixed to be able to unload modules"
    e57db2fee8b1 fbcon: Fix global-out-of-bounds read in fbcon_get_font()
    34873e40e8d8 Fonts: Support FONT_EXTRA_WORDS macros for built-in fonts
    3714c5596a9d fbdev, newport_con: Move FONT_EXTRA_WORDS macros into linux/font.h
    70b225d0a8ca Linux 5.8.14
    8eec10e1335d ep_create_wakeup_source(): dentry name can change under you...
    4306cae1d98a epoll: EPOLL_CTL_ADD: close the race in decision to take fast path
    a6a47119b527 epoll: replace ->visited/visited_list with generation count
    bdb43b31e65d epoll: do not insert into poll queues until all sanity checks are done
    5e6bc9b1f1ae scsi: sd: sd_zbc: Fix ZBC disk initialization
    a12f67b54771 scsi: sd: sd_zbc: Fix handling of host-aware ZBC disks
    ecd72c95c278 drm/i915/gvt: Fix port number for BDW on EDID region setup
    115b0aed8b74 gpiolib: Fix line event handling in syscall compatible mode
    b4b93f8c92bb random32: Restore __latent_entropy attribute on net_rand_state
    d4ff049a3463 pipe: remove pipe_wait() and fix wakeup race with splice
    f6e5c604d67b iommu/amd: Fix the overwritten field in IVMD header
    7af706248ce2 gpio: pca953x: Correctly initialize registers 6 and 7 for PCA957x
    b7d423041485 pinctrl: mediatek: check mtk_is_virt_gpio input parameter
    1b62e4935b0c pinctrl: qcom: sm8250: correct sdc2_clk
    5f040ac168f3 autofs: use __kernel_write() for the autofs pipe writing
    b06582ae5052 scripts/dtc: only append to HOST_EXTRACFLAGS instead of overwriting
    c53cd1877406 blk-mq: call commit_rqs while list empty but error happen
    a6141f191d83 Input: trackpoint - enable Synaptics trackpoints
    83884333497f i2c: npcm7xx: Clear LAST bit after a failed transaction.
    95b874d021f6 i2c: cpm: Fix i2c_ram structure
    f6ae5ac641a8 gpio: aspeed: fix ast2600 bank properties
    cf7f69852717 gpio/aspeed-sgpio: don't enable all interrupts by default
    7dc4222171ce gpio/aspeed-sgpio: enable access to all 80 input & output sgpios
    20d7a2cbc339 gpio: pca953x: Fix uninitialized pending variable
    c8a8adc7df57 iommu/exynos: add missing put_device() call in exynos_iommu_of_xlate()
    32b462c501ee scsi: target: Fix lun lookup for TARGET_SCF_LOOKUP_LUN_FROM_TAG case
    40e2e6c71ac1 clk: samsung: exynos4: mark 'chipid' clock as CLK_IGNORE_UNUSED
    f6e9c4310f5a dmaengine: dmatest: Prevent running on a misconfigured channel
    ec9002ead04b clk: tegra: Fix missing prototype for tegra210_clk_register_emc()
    ef3f3611b462 clk: tegra: Always program PLL_E when enabled
    63cd394fa3f0 pNFS/flexfiles: Ensure we initialise the mirror bsizes correctly on read
    ac376f2245bb NFSv4.2: fix client's attribute cache management for copy_file_range
    a98e3583bd8d nfs: Fix security label length not being reset
    6846eb762344 pinctrl: mvebu: Fix i2c sda definition for 98DX3236
    fdf8212f0260 phy: ti: am654: Fix a leak in serdes_am654_probe()
    9f6c717ffa47 gpio: sprd: Clear interrupt when setting the type as edge
    6bef7d4b4770 scripts/kallsyms: skip ppc compiler stub *.long_branch.* / *.plt_branch.*
    a50ea89d1ae5 nvme-fc: fail new connections to a deleted host or remote port
    7d2120bc38b9 nvme-pci: fix NULL req in completion handler
    189c154bc593 net: dsa: felix: fix some key offsets for IP4_TCP_UDP VCAP IS2 entries
    b23f9f0dc930 spi: fsl-espi: Only process interrupts for expected events
    cbbc927e0e62 cpuidle: psci: Fix suspicious RCU usage
    f833ed7a202b io_uring: mark statx/files_update/epoll_ctl as non-SQPOLL
    fc4b56ae9e76 tools/io_uring: fix compile breakage
    4ff709d00af4 tracing: Make the space reserved for the pid wider
    f2465c7d069c mac80211: do not allow bigger VHT MPDUs than the hardware supports
    9c72951f9e97 mac80211: Fix radiotap header channel flag for 6GHz band
    2dd5f2a99bf3 drivers/net/wan/hdlc: Set skb->protocol before transmitting
    3074634461c5 drivers/net/wan/lapbether: Make skb->protocol consistent with the header
    74e81de01e49 fuse: fix the ->direct_IO() treatment of iov_iter
    72adaf934802 nvme-core: get/put ctrl and transport module in nvme_dev_open/release()
    f3f3da8c1ff9 nvme-pci: disable the write zeros command for Intel 600P/P3100
    33701f04a59a rndis_host: increase sleep time in the query-response loop
    21f41dd7e883 net: dec: de2104x: Increase receive ring size for Tulip
    9c524f9df9c7 hv_netvsc: Cache the current data path to avoid duplicate call and message
    caac35688ac1 drm/sun4i: mixer: Extend regmap max_register
    b92f98f9307c Revert "wlcore: Adding suppoprt for IGTK key in wlcore driver"
    73fadce8c80b drivers/net/wan/hdlc_fr: Add needed_headroom for PVC devices
    1017b151fb4a libbpf: Remove arch-specific include path in Makefile
    9f183485e888 mt76: mt7915: use ieee80211_free_txskb to free tx skbs
    057c9ed4565b vboxsf: Fix the check for the old binary mount-arguments struct
    4a1db91e697a clocksource/drivers/timer-gx6605s: Fixup counter reload
    5d48f7b0ed06 xen/events: don't use chip_data for legacy IRQs
    e99ecd62bb9c drm/amdgpu: restore proper ref count in amdgpu_display_crtc_set_config
    b64a43b072c7 memstick: Skip allocating card when removing host
    13cee195a180 tracing: Fix trace_find_next_entry() accounting of temp buffer size
    7f5d5928b9cc ftrace: Move RCU is watching check after recursion check
    1f0038ad6eed iio: adc: qcom-spmi-adc5: fix driver name
    14f6276e202f Input: i8042 - add nopnp quirk for Acer Aspire 5 A515
    6901d792bc35 i2c: i801: Exclude device from suspend direct complete optimization
    7d29e9507663 scsi: iscsi: iscsi_tcp: Avoid holding spinlock while calling getpeername()
    c32f1ee1d6d0 clk: socfpga: stratix10: fix the divider for the emac_ptp_free_clk
    a77ae2f6d900 clk: samsung: Keep top BPLL mux on Exynos542x enabled
    9705d89518ae gpio: amd-fch: correct logic of GPIO_LINE_DIRECTION
    f67837215194 gpio: tc35894: fix up tc35894 interrupt configuration
    baeac67ee6e2 gpio: mockup: fix resource leak in error path
    cb2480639590 gpio: siox: explicitly support only threaded irqs
    5ae75e1e510d usbcore/driver: Accommodate usbip
    ab3edda370ee usbcore/driver: Fix incorrect downcast
    dc1e84d05a96 usbcore/driver: Fix specific driver selection
    36ec30f02a00 Revert "usbip: Implement a match function to fix usbip"
    9c69e3a769db USB: gadget: f_ncm: Fix NDP16 datagram validation
    26be1c145cfe mmc: sdhci: Workaround broken command queuing on Intel GLK based IRBIS models
    a8183e677fc1 btrfs: fix filesystem corruption after a device replace
    f2a5cb2f24ae io_uring: always delete double poll wait entry on match

(From OE-Core rev: d044bd0603c2e80c5529f468f077c21f0af1d827)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 20a986da54728af38cac4556d01e39ef4bd558d6)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Chee Yang Lee
2d342da2a3 bluez5: update to 5.55
Release note:
5a180f2ec9

(From OE-Core rev: 6ed12979194b8fb73d6f7365128b5451e580cdba)

Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c2895e3e4eabca64cbcc8682e72d25026df5e5f0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-29 00:07:57 +00:00
Richard Purdie
f1b304df93 bitbake: Add missing documentation Makefile
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-24 09:10:58 +00:00
Nathan Rossi
b569f2a414 diffstat: add nativesdk to BBCLASSEXTEND
The diffstat tool is part of HOSTTOOLS. To support hosts that do not
have it installed, it must be enabled for nativesdk so that
buildtools-tarball can provide it.
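
For illustration, the change amounts to a one-line addition in the
recipe (a sketch; the exact recipe contents may differ):

    BBCLASSEXTEND = "nativesdk"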

(From OE-Core rev: 3a4ac9d028e6d7840660bb9640614d92fd89246f)

Signed-off-by: Nathan Rossi <nathan@nathanrossi.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0ed002422bc46539f1d71ed19ee17358b6691bf0)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Jose Quaresma
411f541288 gstreamer1.0: warn the user when something is wrong with GstBufferPool
This is not a critical bug fix, but the warning can be useful on some
BSPs with exotic drivers, such as the NVIDIA Tegra BSP.

(From OE-Core rev: b53a89f4e5457689b7cb38ed9b3d0885cfd47c12)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Mark Jonas
83477f0280 libbsd: Remove BSD-4-Clause from main package
libbsd contains a multitude of licenses. For (commercial) projects the
3rd clause of the BSD-4-Clause license can be problematic, but only a
few man pages use this license. This means that the main package,
containing the binary library itself, is not covered by the
BSD-4-Clause license.

(From OE-Core rev: e822d8423fb836cc821b5c87d1b4f30477a313fd)

Signed-off-by: Mark Jonas <toertel@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9c3e3f83b5fb162d161a7b9773d426418a22c05f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Matt Madison
7e7893983f layer.conf: fix syntax error in PATH setting
Commit 05a87be51b44608ce4f77ac332df90a3cd2445ef introduced
a Python conditional expression when updating PATH that
generates syntax warnings in bitbake-cookerdaemon.log:

  Var <PATH[:=]>:1: SyntaxWarning: "is not" with a literal. Did you mean "!="?

Fix this by using the more appropriate '!=' comparison
operator.
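
A minimal Python illustration of the warning (names made up for the
example):

    path = "/usr/bin"
    if path is not "":    # SyntaxWarning: "is not" with a literal
        pass
    if path != "":        # correct: compare values, not identities
        pass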

(From OE-Core rev: b6c3950be8e4edbdde74b5819c974124e30680c7)

Signed-off-by: Matt Madison <matt@madison.systems>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2e753a12cf6bb98f9e0940e5ed6255ce8c538eed)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Khem Raj
e3a67d60cc gawk: Avoid using host ar during cross compile
(From OE-Core rev: 93178cea0e694cccd602ba965909f50f1b7159c7)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5bc83ca06d0d38a6eb9fcc0343d081021dafb2ce)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Khem Raj
23a0428069 lrzsz: Use Cross AR during compile
The current code hardcodes the archiver to the 'ar' from the build host.
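
A common shape for such a fix is to pass the cross archiver explicitly
to make (a sketch, not the exact change; EXTRA_OEMAKE is the standard
hook for extra make arguments):

    EXTRA_OEMAKE += "AR='${AR}'"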

(From OE-Core rev: 694202b05134bdef603b69667cd70a28bb311ccf)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 74ed1d10434213ad3fcf54ded49879090f979e1e)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Denys Zagorui
b74901b816 binutils: reproducibility: reuse debug-prefix-map for stabs
The 32-bit powerpc Linux kernel widely uses the .stabs pseudo-op to
produce debugging information in stabs format. When the kernel is
built with the Yocto build system for a 32-bit powerpc platform, the
resulting vmlinux contains absolute paths in the .stabstr section
that cannot be remapped with the -fdebug-prefix-map option.

Yocto uses the scripts/mkmakefile kernel build approach, which allows
all generated files to be stored outside of the kernel source tree.
With this approach each compiler invocation is performed with an
absolute path to the file being compiled, and this absolute path is
recorded in the init stab. There is no way to remap this path.

Reuse the remap_debug_filename API to make the -fdebug-prefix-map
flag applicable to the init stab.
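
For reference, -fdebug-prefix-map rewrites recorded paths at compile
time, e.g.:

    # record "." instead of the absolute build directory in debug info
    gcc -g -fdebug-prefix-map=/abs/path/to/build=. -c foo.c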

(From OE-Core rev: b4b79870d7946e58692adb68d1329955500d3c56)

Signed-off-by: Denys Zagorui <dzagorui@cisco.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 4dce4e01cfa153fb12cfd1684d36e0432bef6741)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Konrad Weihmann
010625f35a testimage: print results for interrupted runs
When a run is ended by the overall timeout, print the already executed
testcases to provide some hint as to which testcase might have made
the test suite reach the global timeout.
Nonetheless, make the test run exit with an error.

(From OE-Core rev: 54a7e5feee2bec78f8d526b69076fd0e8e50e228)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2bcc643195a3b3c66d698fac8b7af037c08545ac)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Konrad Weihmann
0647439a0a oeqa/core/context: initialize _run_end_time
with _run_start_time as the value. For partial results of interrupted
runs, this info might otherwise be missing for at least one testcase.

(From OE-Core rev: a91308482e1bb524df413d4342a9ebb472314663)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1c5e8baf57fa2a33b9ef507b11d9ea9acaa77238)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Konrad Weihmann
87a05c7316 oeqa/core/context: expose results as variable
register a unittest handler for test results and expose it as the
variable 'result'.
With this, even partial results from an interrupted test suite run
can be made available.
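
A rough sketch of the mechanism in plain unittest terms (class and
attribute names here are illustrative, not the actual oeqa ones):

    import unittest

    class RecordingResult(unittest.TextTestResult):
        # keep a record of every finished test so a partial run
        # can still be reported
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.finished = []

        def stopTest(self, test):
            super().stopTest(test)
            self.finished.append(test)

    # exposing the result object keeps it reachable even if run()
    # is interrupted part-way through
    runner = unittest.TextTestRunner(resultclass=RecordingResult)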

(From OE-Core rev: ba41688f7f0cb44293321df6c69fe47ac1804d63)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a97ae47525157871b6c098ffc352293e365a4335)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Steve Sakoman
5c33ee311c openssh: whitelist CVE-2014-9278
The OpenSSH server, as used in Fedora and Red Hat Enterprise
Linux 7 and when running in a Kerberos environment, allows remote
authenticated users to log in as another user when they are listed
in the .k5users file of that user, which might bypass intended
authentication requirements that would force a local login.

Whitelist the CVE since this issue is Red Hat specific.
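
The whitelist is a per-recipe variable, so the change amounts to
something like:

    # CVE-2014-9278 only applies to Red Hat's Kerberos setup
    CVE_CHECK_WHITELIST += "CVE-2014-9278"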

(From OE-Core rev: b43201dd7459c2e408889fd8a81a52719308b5fe)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 309132e50d23b1e3f15ef8db1a101166b35f7ca4)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Alexander Kanavin
3ad92d4d09 conf-notes.txt: mention more important images than just sato
(From OE-Core rev: b622ea5c6d2965feb68b760e96e9073c50441a02)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f89138e12c3021ed49aa7ccdf90543d2aaaad279)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:21 +00:00
Alexander Kanavin
5e5a7fd73d clutter-gst-3.0: do not call out to host gstreamer plugin scanner
This is host contamination and can also fail for all kinds of
reasons when running under usermode qemu.

(From OE-Core rev: 4088ef3f6e608031a4f951cce5cc30b0af867e75)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fb60d0920b660dffb346b2212dc6f8ba2a0b9fde)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Gratian Crisan
3269613984 kernel-module-split.bbclass: identify kernel modconf files as configuration files
Currently the modconf fragments representing the configuration for
kernel modules are written out to appropriate .conf files and added to
the FILES variable. However, they are not identified as 'configuration
files', and installing a new version of a kernel module results in a
conflict and a failed install because the respective .conf file is
already in place from a previous install.

Add the generated .conf files to the CONFFILES variable denoting their
true nature.
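
In bbclass terms the fix boils down to registering each generated
fragment in CONFFILES as well as FILES, roughly (a simplified sketch;
'd' is the BitBake datastore, 'pkg' and 'conf' are illustrative names):

    d.appendVar('FILES_' + pkg, ' ' + conf)
    # marking it as a config file lets upgrades replace it cleanly
    d.appendVar('CONFFILES_' + pkg, ' ' + conf)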

(From OE-Core rev: eb42ef100c52b243eee55b950f3dc7d4010ea1f2)

Signed-off-by: Gratian Crisan <gratian.crisan@ni.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1a70a92d1f1006be115429a4262259c9084f484d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Richard Purdie
b955cbdcfb alsa-utils: Fix license to GPLv2 only
Parts of alsa-utils are v2 only, parts are v2 or later. The net effect
is that the result is GPLv2, and there seems little value in marking
everything as a mixture of both. Fix LICENSE to match reality.

(From OE-Core rev: e14646de7fb45605de33fc0b797dad013ec20414)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a9a17a991174b732597e21045763ea851f486a01)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Richard Purdie
58e47e1b70 libdnf: Fix license as it contains 'or later' clause
The license headers are clear that the code is "or later", fix LICENSE
to match.

(From OE-Core rev: 01fd8b51074a91053f632b2932238e35c926045c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e565e0b908c71ad5106d1c6c73d269b819787e55)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Richard Purdie
bb0524e189 ptest-runner: Fix license as it contains 'or later' clause
The license headers are clear that the code is "or later", fix LICENSE
to match.

(From OE-Core rev: daa16f56f1596fa2987499d6b48b98f5b7aedca2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5f0b5cdfcb104ac50222a47652e090ad8770e49f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Diego Santa Cruz
7d58c8bed6 freetype: fix CVE-2020-15999, backport from 2.10.4
(From OE-Core rev: 95b928e68325218508cff8def10e72bbe0051c83)

Signed-off-by: Diego Santa Cruz <Diego.SantaCruz@spinetix.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-20 10:53:20 +00:00
Yongxin Liu
5232b03e22 grub: clean up CVE patches
Clean up several patches introduced in commit 6732918498 ("grub:fix
several CVEs in grub 2.04").

1) Add CVE tags to individual patches.
2) Rename upstream patches and prefix them with CVE tags.
3) Add a reference to the corresponding upstream patch in each patch description.

(From OE-Core rev: a1db1e71129c3e67ddd9dbef21e1c5eb31552e00)

Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit bcb8b6719beaf6625e6b703e91958fe8afba5819)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Mingli Yu
e2312cd887 update_udev_hwdb: clean hwdb.bin
Steps to reproduce:
echo "IMAGE_INSTALL_append = \" udev-hwdb lib32-udev-hwdb\"" >> conf/local.conf

When both udev-hwdb and lib32-udev-hwdb are installed as above,
the following do_populate_sdk error occurs:
 $ bitbake core-image-sato  -c populate_sdk
 ERROR: Task (/path/core-image-sato.bb:do_populate_sdk) failed with exit code '134'
 NOTE: Tasks Summary: Attempted 5554 tasks of which 0 didn't need to be rerun and 1 failed.

 $ cat /path/tmp/work/qemux86_64-poky-linux/core-image-sato/1.0-r5/pseudo/pseudo.log
 [snip]
 inode mismatch: '/path/tmp/work/qemux86_64-poky-linux/core-image-sato/1.0-r5/sdk/image/usr/local/oecore-x86_64/sysroots/core2-64-poky-linux/lib/udev/hwdb.bin' ino 427383040 in db, 427383042 in request.
 [snip]

This is because both udev-hwdb and lib32-udev-hwdb generate
${SDK_OUTPUT}/${SDKTARGETSYSROOT}/lib/udev/hwdb.bin during do_populate_sdk,
which triggers the pseudo error.

So clean hwdb.bin before regenerating it, to avoid the conflict and
fix the above do_populate_sdk error.
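
A sketch of the idea in postinst terms (function name and path are
illustrative, not the exact recipe change):

    pkg_postinst_udev-hwdb () {
        # drop any stale copy before regenerating, so multilib
        # variants do not race over the same file
        rm -f $D${nonarch_base_libdir}/udev/hwdb.bin
        udevadm hwdb --update --root=$D
    }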

(From OE-Core rev: c7472925feb53ce92c1799feba2b7a9104e3f38f)

(From OE-Core rev: 93e59a78da3dab56c91f423b2c0f29a8ebaf2700)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 994ca65e6f)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Alexander Kanavin
f552970178 apt: remove host contamination with gtest
(From OE-Core rev: 41aa60cdb1e26617e1eeac95a6ffcdd6561c539f)

(From OE-Core rev: a76d66feae7050d5d59964108a065bc6251667eb)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 600cb136cd)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Yann E. MORIN
d59e28ea73 recipes-core/busybox: fixup licensing information
Commit 7d32417b4d (busybox: Correct the name of the bzip2 license)
changes the license from 'bzip2' to 'bzip2-1.0.6' on the rationale
that the 'bzip2 license was renamed from "bzip2" to "bzip2-1.0.6"
[...] to match the official SPDX identifier.'

Though the above is true for the bzip2 and pbzip2 packages, the bzip2
code bundled in busybox is a copy from the bzip2 1.0.4 version, not the
1.0.6 version.

As such, using bzip2-1.0.6 is wrong.

Unfortunately, there is no official SPDX license identifier for this
bzip2 1.0.4 version, so we just mimic the existing ones (bzip2-1.0.5
and bzip2-1.0.6) by using bzip2-1.0.4.

Also, there is a license file attached to that code, so we add it to
the list.
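
The net effect on the recipe is along these lines (a sketch):

    LICENSE = "GPLv2 & bzip2-1.0.4"
    # plus the bundled license text added to LIC_FILES_CHKSUM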

(From OE-Core rev: 6238ee3ecd385cbadd8e75eb8b22a96d9cb13639)

(From OE-Core rev: fb590d12a0979e0db69e9d7b0cb605467f678000)

Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
Cc: Alexandre BELLONI <alexandre.belloni@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0776bf6600)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Yann E. MORIN
61642ef429 common-licenses: add bzip2-1.0.4
The bzip2 license changes with each version; the changes are subtle,
but they make it a different license every time:
  - copyright year
  - authorship identification and address
  - version of the release
  - date of the release

Although we currently only have bzip2 and pbzip2 packages, we're going
to need this license for busybox, which uses code from bzip2-1.0.4.

Add it, as copied from the upstream bzip2 git tree at tag 'bzip2-1.0.4'
(commit f10a33538e9bab6deb61779b3d8aae168824ef48).

(From OE-Core rev: f303c31b813f371737c9a9d7a93e9f920f84e75a)

(From OE-Core rev: e29fb3d418f3ac53e49a14b430f0ef6ef323375f)

Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Khem Raj <raj.khem@gmail.com>
Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f3f62ed09d)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Khem Raj
7f6f1519b9 qemuboot.bbclass: Fix a typo
(From OE-Core rev: 2b5fb66344432390aa0cc199ad3f9ec2a4da26bb)

(From OE-Core rev: 2eb8cd12bdc4b6a83f8ab1ac6643821db5d8087c)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit aea9a37ae3)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Mark Jonas
528de6bc4f libsdl2: Fix directfb SDL_RenderFillRect
Refactoring of the SDL2 internal API has broken SDL_RenderFillRect for
DirectFB. The problem has already been fixed upstream.

(From OE-Core rev: a7c8dfc1f9beebeb9da7f61b323d85fba82ec1cb)

(From OE-Core rev: 1eabecc8bcb459b0fe6b14c9a368cd1b4b6dd7dd)

Signed-off-by: Mark Jonas <toertel@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e956531526)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Mark Jonas
0ccf16fab3 libsdl2: Fix directfb syntax error
Build of libsdl2 with directfb has been broken since version 2.0.12 by
a spurious '}' and a missing 'E'. The problem has already been fixed
upstream.

(From OE-Core rev: 8963daba093c3c5e2c60e1e4e057862971b84cb0)

(From OE-Core rev: a2b4c03bbb1f340da2f0723336978b22f8203065)

Signed-off-by: Mark Jonas <toertel@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9e9871de01)
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Chee Yang Lee
4e513e2b86 ruby: fix CVE-2020-25613
(From OE-Core rev: 4e02862b4fcfbf3a9cace8a35e355f156d26ed37)

(From OE-Core rev: a8875221054da40c66366f63d9f61940311b1fbc)

Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
1272d1b8fc gst-validate: Update 1.16.2 -> 1.16.3
(From OE-Core rev: a153bd3eeffa40554884d3a50cf6f78b57416749)

(From OE-Core rev: 88c3919e7cd46b16ec26fe4678bc2c59f7ceffb5)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
686396e3dc gstreamer1.0-python: Update 1.16.2 -> 1.16.3
(From OE-Core rev: dc9c8ca89e9d7429deac696c9995135706b9a548)

(From OE-Core rev: 74fb595b88671de668aff4beae0764d7af88b6c7)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
2fa7fde32f gstreamer1.0-omx: Update 1.16.2 -> 1.16.3
(From OE-Core rev: e091bfead5907cc13c237d7464c50efe8810d6cd)

(From OE-Core rev: d82aae6725545449edd5e4a8d04d67cf5168846a)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
72050b72e2 gstreamer1.0-rtsp-server: Update 1.16.2 -> 1.16.3
(From OE-Core rev: 75b4e0c2ad5827b5eea9e810fd03bcfc53582873)

(From OE-Core rev: eff91cfc5b203519f438f99920196eb2be227078)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
2fa97151cd gstreamer1.0-vaapi: Update 1.16.2 -> 1.16.3
(From OE-Core rev: 8a04f7326539980f83731846db3de4af9ee1a2f0)

(From OE-Core rev: 5a226e38d0add3b8e0298558946d317b9109c44b)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
e67a7af07c gstreamer1.0-libav: Update 1.16.2 -> 1.16.3
(From OE-Core rev: af7cf7c37b4ea30592529442c72f22309cb577c5)

(From OE-Core rev: 1cc05b37c302c393a8137a619225e66f16778a56)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
2306702899 gstreamer1.0-plugins-ugly: Update 1.16.2 -> 1.16.3
(From OE-Core rev: 0fec6a473695d9ae794593f7cea98d05ef959d7a)

(From OE-Core rev: ff6954c90a1aeda1a12a3414ae0901476a173cd1)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
f652c4d1b8 gstreamer1.0-plugins-bad: Update 1.16.2 -> 1.16.3
(From OE-Core rev: ee8e7a9fb8f3d29357598b2a533bb44da12d6099)

(From OE-Core rev: 9b716b146dc875bf55f1ad093dc95244a201d745)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
ca1ed50ab3 gstreamer1.0-plugins-good: Update 1.16.2 -> 1.16.3
(From OE-Core rev: 0c9cdf7961e0991c5d25f18954bbd8fe243df225)

(From OE-Core rev: 1693e87495b2ecf63397b396930b8934a1478b88)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
46db037b1f gstreamer1.0-plugins-base: Update 1.16.2 -> 1.16.3
(From OE-Core rev: c38eefb0693b771a97ab7dc15103cb5be6a003f7)

(From OE-Core rev: 77fdfb7f52f876c4530fdef77c17a540b60bf024)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
70761072f5 gstreamer1.0: Update 1.16.2 -> 1.16.3
(From OE-Core rev: d24f8ac481082cdb07f141508a2caf964167aec4)

(From OE-Core rev: 3ed1ccdf977b265dac2325095caa0e2b0764aa56)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Jose Quaresma
efa68c6490 gstreamer1.0: Fix reproducibility issue around libcap
Currently the gstreamer configuration depends on libcap and on whether
setcap is found on the host system.

Remove libcap from DEPENDS and only use it when the 'setcap'
PACKAGECONFIG is enabled.

    * 0004-capfix.patch
      Removed, as the same goals can be achieved with the PACKAGECONFIG
      'setcap' alone
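
In recipe terms the dependency moves behind a PACKAGECONFIG knob,
roughly (a sketch of the mechanism, not the exact definition):

    # libcap is only pulled in when 'setcap' is explicitly enabled
    PACKAGECONFIG[setcap] = ",,libcap libcap-native"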

(From OE-Core rev: 7691d3f963dc02570b5092db8f061c4d327b277a)

(From OE-Core rev: 3b186880c95e8ab120fee6304af52384b040aae1)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:06:28 +00:00
Nicolas Dechesne
3daa976efb poky.yaml: updates for 3.2
Updates global variables for the 3.2 / Gatesgarth release.

(From yocto-docs rev: 7b699c26bfcf05666460746dd7a28eacbf98870c)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:04:18 +00:00
Nicolas Dechesne
4d35e4b168 poky.yaml: remove unused variables
There are plenty of variables in poky.yaml which are not used anywhere
in the docs, so let's remove them. We can always add the ones we need
later.

Note that ORGEMAIL could be used in boilerplate.rst; however, that file
is not parsed but included, and somehow the yocto-vars.py extension
does not process it, so we cannot use a variable there.

(From yocto-docs rev: 3d58472daf118b25eda151bbf1a638905bba183a)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-12 13:04:18 +00:00
Nicolas Dechesne
dff89518bd conf: use bitbake 1.48 branch for intersphinx
We now publish the branch 1.48 of bitbake docs to
https://docs.yoctoproject.org/bitbake/1.48/

yocto-docs can refer to bitbake documentation using the intersphinx
extension. The gatesgarth docs should refer to the 1.48 branch of
bitbake, not the development branch.
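
With the standard Sphinx mechanism this is an intersphinx_mapping
entry in conf.py, e.g.:

    intersphinx_mapping = {
        'bitbake': ('https://docs.yoctoproject.org/bitbake/1.48/', None),
    }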

(From yocto-docs rev: 09ae216a022b85fe1f03b55e6341e258c7215e20)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-09 14:40:32 +00:00
Nicolas Dechesne
cdae385f7d conf: update for release 3.2
conf.py:
* set version to 3.2

switchers.js
* add 3.2 release
* update 'dev' to 3.3

(From yocto-docs rev: eac8b251be5cd28ebec32345562c838dd5f43b00)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-09 13:16:18 +00:00
Robert P. J. Day
b7a7dde44a adt-manual: delete obsolete ADT manual, and related content
Since the ADT manual has long been superseded by the SDK manual,
remove the entire adt-manual directory, and the references to it in
the two top-level files "conf.py" and "poky.yaml".

(From yocto-docs rev: 64b2e83bddf6af0439ac7089ac95e60faa696cfc)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-11-06 15:14:01 +00:00
2015 changed files with 174163 additions and 64130 deletions

.gitignore (vendored)

@@ -30,5 +30,4 @@ hob-image-*.bb
pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log
bitbake/lib/toaster/contrib/tts/.cache/*


@@ -6,24 +6,24 @@ of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
https://www.openembedded.org/
http://www.openembedded.org/
The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
https://docs.yoctoproject.org/
http://yoctoproject.org/documentation
Contributing
------------
Please refer to
https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches.
Mailing list:
https://lists.openembedded.org/g/openembedded-core
http://lists.openembedded.org/mailman/listinfo/openembedded-core
Source code:
https://git.openembedded.org/openembedded-core/
http://git.openembedded.org/openembedded-core/


@@ -11,7 +11,7 @@ For information about Bitbake, see the OpenEmbedded website:
Bitbake plain documentation can be found under the doc directory or its integrated
html version at the Yocto Project website:
https://docs.yoctoproject.org
http://yoctoproject.org/documentation
Contributing
------------


@@ -26,7 +26,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
__version__ = "1.50.0"
__version__ = "1.48.0"
if __name__ == "__main__":
if __version__ != bb.__version__:


@@ -151,6 +151,9 @@ def main():
func = getattr(args, 'func', None)
if func:
client = hashserv.create_client(args.address)
# Try to establish a connection to the server now to detect failures
# early
client.connect()
return func(args, client)


@@ -30,11 +30,9 @@ def main():
"--bind [::1]:8686"'''
)
parser.add_argument('-b', '--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
parser.add_argument('-d', '--database', default='./hashserv.db', help='Database file (default "%(default)s")')
parser.add_argument('-l', '--log', default='WARNING', help='Set logging level')
parser.add_argument('-u', '--upstream', help='Upstream hashserv to pull hashes from')
parser.add_argument('-r', '--read-only', action='store_true', help='Disallow write operations from clients')
parser.add_argument('--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
parser.add_argument('--database', default='./hashserv.db', help='Database file (default "%(default)s")')
parser.add_argument('--log', default='WARNING', help='Set logging level')
args = parser.parse_args()
@@ -49,7 +47,7 @@ def main():
console.setLevel(level)
logger.addHandler(console)
server = hashserv.create_server(args.bind, args.database, upstream=args.upstream, read_only=args.read_only)
server = hashserv.create_server(args.bind, args.database)
server.serve_forever()
return 0


@@ -26,7 +26,7 @@ readypipeinfd = int(sys.argv[3])
logfile = sys.argv[4]
lockname = sys.argv[5]
sockname = sys.argv[6]
timeout = float(sys.argv[7])
timeout = sys.argv[7]
xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
if xmlrpcinterface[0] == "None":
xmlrpcinterface = (None, xmlrpcinterface[1])


@@ -16,8 +16,6 @@ import signal
import pickle
import traceback
import queue
import shlex
import subprocess
from multiprocessing import Lock
from threading import Thread
@@ -120,9 +118,7 @@ def worker_child_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
worker_pipe_lock.acquire()
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe.write(data)
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
@@ -147,27 +143,21 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
# a fork() or exec*() activates PSEUDO...
envbackup = {}
fakeroot = False
fakeenv = {}
umask = None
taskdep = workerdata["taskdeps"][fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
umask = taskdep['umask'][taskname]
elif workerdata["umask"]:
umask = workerdata["umask"]
if umask:
# umask might come in as a number or text string..
try:
umask = int(umask, 8)
umask = int(taskdep['umask'][taskname],8)
except TypeError:
pass
umask = taskdep['umask'][taskname]
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
fakeroot = True
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
@@ -177,7 +167,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
fakedirs = (workerdata["fakerootdirs"][fn] or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
logger.debug2('Running %s:%s under fakeroot, fakedirs: %s' %
logger.debug(2, 'Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
else:
envvars = (workerdata["fakerootnoenv"][fn] or "").split()
@@ -286,13 +276,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
try:
if dry_run:
return 0
try:
ret = bb.build.exec_task(fn, taskname, the_data, cfg.profile)
finally:
if fakeroot:
fakerootcmd = shlex.split(the_data.getVar("FAKEROOTCMD"))
subprocess.run(fakerootcmd + ['-S'], check=True, stdout=subprocess.PIPE)
return ret
return bb.build.exec_task(fn, taskname, the_data, cfg.profile)
except:
os._exit(1)
if not profiling:
@@ -337,9 +321,7 @@ class runQueueWorkerPipe():
end = len(self.queue)
index = self.queue.find(b"</event>")
while index != -1:
msg = self.queue[:index+8]
assert msg.startswith(b"<event>") and msg.count(b"<event>") == 1
worker_fire_prepickled(msg)
worker_fire_prepickled(self.queue[:index+8])
self.queue = self.queue[index+8:]
index = self.queue.find(b"</event>")
return (end > start)
@@ -523,11 +505,9 @@ except BaseException as e:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
finally:
worker_thread_exit = True
worker_thread.join()
workerlog_write("exiting")
if not normalexit:
sys.exit(1)
worker_thread_exit = True
worker_thread.join()
workerlog_write("exitting")
sys.exit(0)


@@ -1,19 +0,0 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2021 Joshua Watt <JPEWhacker@gmail.com>
#
# Dockerfile to build a bitbake hash equivalence server container
#
# From the root of the bitbake repository, run:
#
# docker build -f contrib/hashserv/Dockerfile .
#
FROM alpine:3.13.1
RUN apk add --no-cache python3
COPY bin/bitbake-hashserv /opt/bbhashserv/bin/
COPY lib/hashserv /opt/bbhashserv/lib/hashserv/
ENTRYPOINT ["/opt/bbhashserv/bin/bitbake-hashserv"]


@@ -3,7 +3,7 @@
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= -j auto
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build


@@ -244,8 +244,7 @@ want upstream. Here is an example: ::
BBFILE_COLLECTIONS = "upstream local"
BBFILE_PATTERN_upstream = "^/stuff/openembedded/"
BBFILE_PATTERN_local = "^/stuff/openembedded.modified/"
BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"
BBFILE_PRIORITY_upstream = "5" BBFILE_PRIORITY_local = "10"
.. note::


@@ -441,15 +441,6 @@ Here are some example URLs: ::
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
.. note::
Specifying passwords directly in ``git://`` urls is not supported.
There are several reasons: ``SRC_URI`` is often written out to logs and
other places, and that could easily leak passwords; it is also all too
easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
and ``~/.ssh/config`` files can be used as alternatives.
.. _gitsm-fetcher:
Git Submodule Fetcher (``gitsm://``)
@@ -633,34 +624,6 @@ Here are some example URLs: ::
SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
.. _az-fetcher:
Az Fetcher (``az://``)
--------------------------
This submodule fetches data from an
`Azure Storage account <https://docs.microsoft.com/en-us/azure/storage/>`__ ,
it inherits its functionality from the HTTP wget fetcher, but modifies its
behavior to accomodate the usage of a
`Shared Access Signature (SAS) <https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview>`__
for non-public data.
Such functionality is set by the variable:
- :term:`AZ_SAS`: The Azure Storage Shared Access Signature provides secure
delegate access to resources, if this variable is set, the Az Fetcher will
use it when fetching artifacts from the cloud.
You can specify the AZ_SAS variable as shown below: ::
AZ_SAS = "se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>"
Here is an example URL: ::
SRC_URI = "az://<azure-storage-account>.blob.core.windows.net/<foo_container>/<bar_file>"
It can also be used when setting mirrors definitions using the :term:`PREMIRRORS` variable.
Other Fetchers
--------------


@@ -1296,17 +1296,6 @@ For more information on task dependencies, see the
See the ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:variable flags`" section for information
on variable flags you can use with tasks.
.. note::
While it's infrequent, it's possible to define multiple tasks as
dependencies when calling ``addtask``. For example, here's a snippet
from the OpenEmbedded class file ``package_tar.bbclass``::
addtask package_write_tar before do_build after do_packagedata do_package
Note how the ``package_write_tar`` task has to wait until both of
``do_packagedata`` and ``do_package`` complete.
Deleting a Task
---------------
@@ -1580,7 +1569,7 @@ might have an interest in viewing:
events when each of the workers parse the base configuration or if
the server changes configuration and reparses. Any given datastore
only has one such event executed against it, however. If
:term:`BB_INVALIDCONF` is set in the datastore by the event
```BB_INVALIDCONF`` <#>`__ is set in the datastore by the event
handler, the configuration is reparsed and a new event triggered,
allowing the metadata to update configuration.


@@ -39,19 +39,6 @@ overview of their function and contents.
when specified allows for the Git binary from the host to be used
rather than building ``git-native``.
:term:`AZ_SAS`
Azure Storage Shared Access Signature, when using the
:ref:`Azure Storage fetcher <bitbake-user-manual/bitbake-user-manual-fetching:fetchers>`
This variable can be defined to be used by the fetcher to authenticate
and gain access to non-public artifacts.
::
AZ_SAS = ""se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>""
For more information see Microsoft's Azure Storage documentation at
https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview
:term:`B`
The directory in which BitBake executes functions during a recipe's
build process.
@@ -121,10 +108,6 @@ overview of their function and contents.
command line option). The task name specified should not include the
``do_`` prefix.
:term:`BB_DEFAULT_UMASK`
The default umask to apply to tasks if specified and no task specific
umask flag is set.
:term:`BB_DISKMON_DIRS`
Monitors disk space and available inodes during the build and allows
you to control the build based on these parameters.
@@ -270,6 +253,45 @@ overview of their function and contents.
``my-recipe.bb`` is executing, the ``BB_FILENAME`` variable contains
"/foo/path/my-recipe.bb".
:term:`BBFILES_DYNAMIC`
Activates content depending on presence of identified layers. You
identify the layers by the collections that the layers define.
Use the ``BBFILES_DYNAMIC`` variable to avoid ``.bbappend`` files whose
corresponding ``.bb`` file is in a layer that attempts to modify other
layers through ``.bbappend`` but does not want to introduce a hard
dependency on those other layers.
Additionally you can prefix the rule with "!" to add ``.bbappend`` and
``.bb`` files in case a layer is not present. Use this avoid hard
dependency on those other layers.
Use the following form for ``BBFILES_DYNAMIC``: ::
collection_name:filename_pattern
The following example identifies two collection names and two filename
patterns: ::
BBFILES_DYNAMIC += "\
clang-layer:${LAYERDIR}/bbappends/meta-clang/*/*/*.bbappend \
core:${LAYERDIR}/bbappends/openembedded-core/meta/*/*/*.bbappend \
"
When the collection name is prefixed with "!" it will add the file pattern in case
the layer is absent: ::
BBFILES_DYNAMIC += "\
!clang-layer:${LAYERDIR}/backfill/meta-clang/*/*/*.bb \
"
This next example shows an error message that occurs because invalid
entries are found, which cause parsing to abort: ::
ERROR: BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:
/work/my-layer/bbappends/meta-security-isafw/*/*/*.bbappend
/work/my-layer/bbappends/openembedded-core/meta/*/*/*.bbappend
:term:`BB_GENERATE_MIRROR_TARBALLS`
Causes tarballs of the Git repositories, including the Git metadata,
to be placed in the :term:`DL_DIR` directory. Anyone
@@ -645,45 +667,6 @@ overview of their function and contents.
For details on the syntax, see the documentation by following the
previous link.
:term:`BBFILES_DYNAMIC`
Activates content depending on presence of identified layers. You
identify the layers by the collections that the layers define.
Use the ``BBFILES_DYNAMIC`` variable to avoid ``.bbappend`` files whose
corresponding ``.bb`` file is in a layer that attempts to modify other
layers through ``.bbappend`` but does not want to introduce a hard
dependency on those other layers.
Additionally you can prefix the rule with "!" to add ``.bbappend`` and
``.bb`` files in case a layer is not present. Use this avoid hard
dependency on those other layers.
Use the following form for ``BBFILES_DYNAMIC``: ::
collection_name:filename_pattern
The following example identifies two collection names and two filename
patterns: ::
BBFILES_DYNAMIC += "\
clang-layer:${LAYERDIR}/bbappends/meta-clang/*/*/*.bbappend \
core:${LAYERDIR}/bbappends/openembedded-core/meta/*/*/*.bbappend \
"
When the collection name is prefixed with "!" it will add the file pattern in case
the layer is absent: ::
BBFILES_DYNAMIC += "\
!clang-layer:${LAYERDIR}/backfill/meta-clang/*/*/*.bb \
"
This next example shows an error message that occurs because invalid
entries are found, which cause parsing to abort: ::
ERROR: BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:
/work/my-layer/bbappends/meta-security-isafw/*/*/*.bbappend
/work/my-layer/bbappends/openembedded-core/meta/*/*/*.bbappend
:term:`BBINCLUDED`
Contains a space-separated list of all of all files that BitBake's
parser included during parsing of the current file.
@@ -1096,8 +1079,8 @@ overview of their function and contents.
PREFERRED_PROVIDER_aaa = "bbb"
:term:`PREFERRED_VERSION`
If there are multiple versions of a recipe available, this variable
determines which version should be given preference. You must always
If there are multiple versions of recipes available, this variable
determines which recipe should be given preference. You must always
suffix the variable with the :term:`PN` you want to
select, and you should set :term:`PV` accordingly for
precedence.
@@ -1117,10 +1100,6 @@ overview of their function and contents.
end of the string. You cannot use the wildcard character in any other
location of the string.
If a recipe with the specified version is not available, a warning
message will be shown. See :term:`REQUIRED_VERSION` if you want this
to be an error instead.
:term:`PREMIRRORS`
Specifies additional paths from which BitBake gets source code. When
the build system searches for source code, it first tries the local
@@ -1231,16 +1210,6 @@ overview of their function and contents.
The directory in which a local copy of a ``google-repo`` directory is
stored when it is synced.
:term:`REQUIRED_VERSION`
If there are multiple versions of a recipe available, this variable
determines which version should be given preference. ``REQUIRED_VERSION``
works in exactly the same manner as :term:`PREFERRED_VERSION`, except
that if the specified version is not available then an error message
is shown and the build fails immediately.
If both ``REQUIRED_VERSION`` and ``PREFERRED_VERSION`` are set for
the same recipe, the ``REQUIRED_VERSION`` value applies.
:term:`RPROVIDES`
A list of package name aliases that a package also provides. These
aliases are useful for satisfying runtime dependencies of other
@@ -1330,8 +1299,6 @@ overview of their function and contents.
- ``svn://`` : Fetches files from a Subversion (``svn``) revision
control repository.
- ``az://`` : Fetches files from an Azure Storage account using HTTPS.
Here are some additional options worth mentioning:
- ``unpack`` : Controls whether or not to unpack the file if it is


@@ -14,7 +14,6 @@
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import sys
import datetime
current_version = "dev"


@@ -9,7 +9,7 @@
# SPDX-License-Identifier: GPL-2.0-only
#
__version__ = "1.50.0"
__version__ = "1.48.0"
import sys
if sys.version_info < (3, 5, 0):
@@ -21,8 +21,8 @@ class BBHandledException(Exception):
The big dilemma for generic bitbake code is what information to give the user
when an exception occurs. Any exception inheriting this base exception class
has already provided information to the user via some 'fired' message type such as
an explicitly fired event using bb.fire, or a bb.error message. If bitbake
encounters an exception derived from this class, no backtrace or other information
an explicitly fired event using bb.fire, or a bb.error message. If bitbake
encounters an exception derived from this class, no backtrace or other information
will be given to the user, its assumed the earlier event provided the relevant information.
"""
pass
@@ -42,23 +42,14 @@ class BBLoggerMixin(object):
def setup_bblogger(self, name):
if name.split(".")[0] == "BitBake":
self.debug = self._debug_helper
def _debug_helper(self, *args, **kwargs):
return self.bbdebug(1, *args, **kwargs)
def debug2(self, *args, **kwargs):
return self.bbdebug(2, *args, **kwargs)
def debug3(self, *args, **kwargs):
return self.bbdebug(3, *args, **kwargs)
self.debug = self.bbdebug
def bbdebug(self, level, msg, *args, **kwargs):
loglevel = logging.DEBUG - level + 1
if not bb.event.worker_pid:
if self.name in bb.msg.loggerDefaultDomains and loglevel > (bb.msg.loggerDefaultDomains[self.name]):
return
if loglevel < bb.msg.loggerDefaultLogLevel:
if loglevel > bb.msg.loggerDefaultLogLevel:
return
return self.log(loglevel, msg, *args, **kwargs)
@@ -137,7 +128,7 @@ def debug(lvl, *args):
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
mainlogger.bbdebug(lvl, ''.join(args))
mainlogger.debug(lvl, ''.join(args))
def note(*args):
mainlogger.info(''.join(args))


@@ -298,10 +298,6 @@ def exec_func_python(func, d, runfile, cwd=None):
comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated")
finally:
# We want any stdout/stderr to be printed before any other log messages to make debugging
# more accurate. In some cases we seem to lose stdout/stderr entirely in logging tests without this.
sys.stdout.flush()
sys.stderr.flush()
bb.debug(2, "Python function %s finished" % func)
if cwd and olddir:
@@ -587,7 +583,7 @@ def _exec_task(fn, task, d, quieterr):
logger.error("No such task: %s" % task)
return 1
logger.debug("Executing task %s", task)
logger.debug(1, "Executing task %s", task)
localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T')
@@ -600,7 +596,7 @@ def _exec_task(fn, task, d, quieterr):
curnice = os.nice(0)
nice = int(nice) - curnice
newnice = os.nice(nice)
logger.debug("Renice to %s " % newnice)
logger.debug(1, "Renice to %s " % newnice)
ionice = localdata.getVar("BB_TASK_IONICE_LEVEL")
if ionice:
try:
@@ -698,16 +694,12 @@ def _exec_task(fn, task, d, quieterr):
except bb.BBHandledException:
event.fire(TaskFailed(task, fn, logfn, localdata, True), localdata)
return 1
except (Exception, SystemExit) as exc:
except Exception as exc:
if quieterr:
event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
# If the output is already on stdout, we've printed the information in the
# logs once already so don't duplicate
if verboseStdoutLogging:
errprinted = True
logger.error(repr(exc))
logger.error(str(exc))
event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
return 1
finally:
@@ -728,7 +720,7 @@ def _exec_task(fn, task, d, quieterr):
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
logger.debug2("Zero size logfn %s, removing", logfn)
logger.debug(2, "Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, fn, logfn, localdata), localdata)
@@ -862,23 +854,6 @@ def make_stamp(task, d, file_name = None):
file_name = d.getVar('BB_FILENAME')
bb.parse.siggen.dump_sigtask(file_name, task, stampbase, True)
def find_stale_stamps(task, d, file_name=None):
current = stamp_internal(task, d, file_name)
current2 = stamp_internal(task + "_setscene", d, file_name)
cleanmask = stamp_cleanmask_internal(task, d, file_name)
found = []
for mask in cleanmask:
for name in glob.glob(mask):
if "sigdata" in name or "sigbasedata" in name:
continue
if name.endswith('.taint'):
continue
if name == current or name == current2:
continue
logger.debug2("Stampfile %s does not match %s or %s" % (name, current, current2))
found.append(name)
return found
def del_stamp(task, d, file_name = None):
"""
Removes a stamp for a given task
@@ -1033,8 +1008,6 @@ def tasksbetween(task_start, task_end, d):
def follow_chain(task, endtask, chain=None):
if not chain:
chain = []
if task in chain:
bb.fatal("Circular task dependencies as %s depends on itself via the chain %s" % (task, " -> ".join(chain)))
chain.append(task)
for othertask in tasks:
if othertask == task:


@@ -19,15 +19,14 @@
import os
import logging
import pickle
from collections import defaultdict
from collections.abc import Mapping
from collections import defaultdict, Mapping
import bb.utils
from bb import PrefixLoggerAdapter
import re
logger = logging.getLogger("BitBake.Cache")
__cache_version__ = "154"
__cache_version__ = "153"
def getCacheFile(path, filename, mc, data_hash):
mcspec = ''
@@ -95,7 +94,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
if not self.packages:
self.packages.append(self.pn)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.skipreason = self.getvar('__SKIPPED', metadata)
if self.skipreason:
@@ -122,12 +120,12 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.depends = self.depvar('DEPENDS', metadata)
self.rdepends = self.depvar('RDEPENDS', metadata)
self.rrecommends = self.depvar('RRECOMMENDS', metadata)
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
self.inherits = self.getvar('__inherit_cache', metadata, expand=False)
self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
self.fakerootlogs = self.getvar('FAKEROOTLOGS', metadata)
self.fakerootnoenv = self.getvar('FAKEROOTNOENV', metadata)
self.extradepsfunc = self.getvar('calculate_extra_depends', metadata)
@@ -165,7 +163,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv = {}
cachedata.fakerootnoenv = {}
cachedata.fakerootdirs = {}
cachedata.fakerootlogs = {}
cachedata.extradepsfunc = {}
def add_cacheData(self, cachedata, fn):
@@ -218,7 +215,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
if not self.not_world:
cachedata.possible_world.append(fn)
#else:
# logger.debug2("EXCLUDE FROM WORLD: %s", fn)
# logger.debug(2, "EXCLUDE FROM WORLD: %s", fn)
# create a collection of all targets for sanity checking
# tasks, such as upstream versions, license, and tools for
@@ -234,7 +231,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv[fn] = self.fakerootenv
cachedata.fakerootnoenv[fn] = self.fakerootnoenv
cachedata.fakerootdirs[fn] = self.fakerootdirs
cachedata.fakerootlogs[fn] = self.fakerootlogs
cachedata.extradepsfunc[fn] = self.extradepsfunc
def virtualfn2realfn(virtualfn):
@@ -242,7 +238,7 @@ def virtualfn2realfn(virtualfn):
Convert a virtual file name to a real one + the associated subclass keyword
"""
mc = ""
if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
if virtualfn.startswith('mc:'):
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])
@@ -272,7 +268,7 @@ def variant2virtual(realfn, variant):
"""
if variant == "":
return realfn
if variant.startswith("mc:") and variant.count(':') >= 2:
if variant.startswith("mc:"):
elems = variant.split(":")
if elems[2]:
return "mc:" + elems[1] + ":virtual:" + ":".join(elems[2:]) + ":" + realfn
@@ -327,7 +323,7 @@ class NoCache(object):
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug("Parsing %s (full)" % virtualfn)
logger.debug(1, "Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
@@ -404,7 +400,7 @@ class Cache(NoCache):
self.cachefile = self.getCacheFile("bb_cache.dat")
self.logger.debug("Cache dir: %s", self.cachedir)
self.logger.debug(1, "Cache dir: %s", self.cachedir)
bb.utils.mkdirhier(self.cachedir)
cache_ok = True
@@ -412,7 +408,7 @@ class Cache(NoCache):
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
cache_exists = os.path.exists(cachefile)
self.logger.debug2("Checking if %s exists: %r", cachefile, cache_exists)
self.logger.debug(2, "Checking if %s exists: %r", cachefile, cache_exists)
cache_ok = cache_ok and cache_exists
cache_class.init_cacheData(self)
if cache_ok:
@@ -420,7 +416,7 @@ class Cache(NoCache):
elif os.path.isfile(self.cachefile):
self.logger.info("Out of date cache found, rebuilding...")
else:
self.logger.debug("Cache file %s not found, building..." % self.cachefile)
self.logger.debug(1, "Cache file %s not found, building..." % self.cachefile)
# We don't use the symlink, it's just for debugging convenience
if self.mc:
@@ -453,11 +449,13 @@ class Cache(NoCache):
return cachesize
def load_cachefile(self, progress):
cachesize = self.cachesize()
previous_progress = 0
previous_percent = 0
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
self.logger.debug('Loading cache file: %s' % cachefile)
self.logger.debug(1, 'Loading cache file: %s' % cachefile)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
# Check cache version information
@@ -504,7 +502,7 @@ class Cache(NoCache):
def parse(self, filename, appends):
"""Parse the specified filename, returning the recipe information"""
self.logger.debug("Parsing %s", filename)
self.logger.debug(1, "Parsing %s", filename)
infos = []
datastores = self.load_bbfile(filename, appends, mc=self.mc)
depends = []
@@ -558,7 +556,7 @@ class Cache(NoCache):
cached, infos = self.load(fn, appends)
for virtualfn, info_array in infos:
if info_array[0].skipped:
self.logger.debug("Skipping %s: %s", virtualfn, info_array[0].skipreason)
self.logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
skipped += 1
else:
self.add_info(virtualfn, info_array, cacheData, not cached)
@@ -594,21 +592,21 @@ class Cache(NoCache):
# File isn't in depends_cache
if not fn in self.depends_cache:
self.logger.debug2("%s is not cached", fn)
self.logger.debug(2, "%s is not cached", fn)
return False
mtime = bb.parse.cached_mtime_noerror(fn)
# Check file still exists
if mtime == 0:
self.logger.debug2("%s no longer exists", fn)
self.logger.debug(2, "%s no longer exists", fn)
self.remove(fn)
return False
info_array = self.depends_cache[fn]
# Check the file's timestamp
if mtime != info_array[0].timestamp:
self.logger.debug2("%s changed", fn)
self.logger.debug(2, "%s changed", fn)
self.remove(fn)
return False
@@ -619,13 +617,13 @@ class Cache(NoCache):
fmtime = bb.parse.cached_mtime_noerror(f)
# Check if file still exists
if old_mtime != 0 and fmtime == 0:
self.logger.debug2("%s's dependency %s was removed",
self.logger.debug(2, "%s's dependency %s was removed",
fn, f)
self.remove(fn)
return False
if (fmtime != old_mtime):
self.logger.debug2("%s's dependency %s changed",
self.logger.debug(2, "%s's dependency %s changed",
fn, f)
self.remove(fn)
return False
@@ -642,14 +640,14 @@ class Cache(NoCache):
continue
f, exist = f.split(":")
if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
self.logger.debug2("%s's file checksum list file %s changed",
self.logger.debug(2, "%s's file checksum list file %s changed",
fn, f)
self.remove(fn)
return False
if tuple(appends) != tuple(info_array[0].appends):
self.logger.debug2("appends for %s changed", fn)
self.logger.debug2("%s to %s" % (str(appends), str(info_array[0].appends)))
self.logger.debug(2, "appends for %s changed", fn)
self.logger.debug(2, "%s to %s" % (str(appends), str(info_array[0].appends)))
self.remove(fn)
return False
@@ -658,10 +656,10 @@ class Cache(NoCache):
virtualfn = variant2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
self.logger.debug2("%s is not cached", virtualfn)
self.logger.debug(2, "%s is not cached", virtualfn)
invalid = True
elif len(self.depends_cache[virtualfn]) != len(self.caches_array):
self.logger.debug2("Extra caches missing for %s?" % virtualfn)
self.logger.debug(2, "Extra caches missing for %s?" % virtualfn)
invalid = True
# If any one of the variants is not present, mark as invalid for all
@@ -669,10 +667,10 @@ class Cache(NoCache):
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
if virtualfn in self.clean:
self.logger.debug2("Removing %s from cache", virtualfn)
self.logger.debug(2, "Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
if fn in self.clean:
self.logger.debug2("Marking %s as not clean", fn)
self.logger.debug(2, "Marking %s as not clean", fn)
self.clean.remove(fn)
return False
@@ -685,10 +683,10 @@ class Cache(NoCache):
Called from the parser in error cases
"""
if fn in self.depends_cache:
self.logger.debug("Removing %s from cache", fn)
self.logger.debug(1, "Removing %s from cache", fn)
del self.depends_cache[fn]
if fn in self.clean:
self.logger.debug("Marking %s as unclean", fn)
self.logger.debug(1, "Marking %s as unclean", fn)
self.clean.remove(fn)
def sync(self):
@@ -701,13 +699,13 @@ class Cache(NoCache):
return
if self.cacheclean:
self.logger.debug2("Cache is clean, not saving.")
self.logger.debug(2, "Cache is clean, not saving.")
return
for cache_class in self.caches_array:
cache_class_name = cache_class.__name__
cachefile = self.getCacheFile(cache_class.cachefile)
self.logger.debug2("Writing %s", cachefile)
self.logger.debug(2, "Writing %s", cachefile)
with open(cachefile, "wb") as f:
p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
p.dump(__cache_version__)
@@ -818,6 +816,10 @@ class MulticonfigCache(Mapping):
for k in self.__caches:
yield k
def keys(self):
return self.__caches[key]
def init(cooker):
"""
The Objective: Cache the minimum amount of data possible yet get to the
@@ -883,7 +885,7 @@ class MultiProcessCache(object):
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
logger.debug("Using cache in '%s'", self.cachefile)
logger.debug(1, "Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")
@@ -989,7 +991,7 @@ class SimpleCache(object):
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
logger.debug("Using cache in '%s'", self.cachefile)
logger.debug(1, "Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")

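The virtualfn2realfn/variant2virtual hunks above are a hardening fix on the newer side: without the count(':') >= 2 guard, a malformed multiconfig name such as "mc:foo" makes variant2virtual index past the end of the split list. A minimal standalone sketch of the naming scheme and the guard, simplified from the functions shown (the real variant2virtual also handles multi-part class names, and the recipe path here is made up for illustration):

def virtualfn2realfn(virtualfn):
    # "mc:<config>:virtual:<class>:<file>" -> (file, class, config)
    mc = ""
    if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
        elems = virtualfn.split(':')
        mc = elems[1]
        virtualfn = ":".join(elems[2:])
    cls = ""
    if virtualfn.startswith('virtual:'):
        elems = virtualfn.split(':')
        cls = ":".join(elems[1:-1])
        virtualfn = elems[-1]
    return virtualfn, cls, mc

print(virtualfn2realfn("mc:musl:virtual:native:/recipes/foo.bb"))
# ('/recipes/foo.bb', 'native', 'musl')
print(virtualfn2realfn("mc:musl"))  # the guard leaves malformed names untouched
# ('mc:musl', '', '')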

@@ -212,9 +212,9 @@ class PythonParser():
funcstr = codegen.to_source(func)
argstr = codegen.to_source(arg)
except TypeError:
self.log.debug2('Failed to convert function and argument to source form')
self.log.debug(2, 'Failed to convert function and argument to source form')
else:
self.log.debug(self.unhandled_message % (funcstr, argstr))
self.log.debug(1, self.unhandled_message % (funcstr, argstr))
def visit_Call(self, node):
name = self.called_node_name(node.func)
@@ -450,7 +450,7 @@ class ShellParser():
cmd = word[1]
if cmd.startswith("$"):
self.log.debug(self.unhandled_template % cmd)
self.log.debug(1, self.unhandled_template % cmd)
elif cmd == "eval":
command = " ".join(word for _, word in words[1:])
self._parse_shell(command)

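Every debug2/debug(2, ...) pair in this compare is one mechanical change: newer BitBake gives its loggers dedicated debug2/debug3 methods instead of passing a numeric verbosity as the first argument. A plausible self-contained sketch of such a helper on top of stdlib logging (the class name and exact level offset are illustrative assumptions, not taken from this diff):

import logging

class BBLogger(logging.Logger):
    # One verbosity notch below DEBUG, replacing the old debug(2, msg) form.
    def debug2(self, msg, *args, **kwargs):
        if self.isEnabledFor(logging.DEBUG - 1):
            self._log(logging.DEBUG - 1, msg, args, **kwargs)

logging.addLevelName(logging.DEBUG - 1, "DEBUG2")
logging.setLoggerClass(BBLogger)

logging.basicConfig(level=logging.DEBUG - 1)
log = logging.getLogger("BitBake.Example")
log.debug2("Checking if %s exists: %r", "bb_cache.dat", True)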

@@ -73,9 +73,7 @@ class SkippedPackage:
self.pn = info.pn
self.skipreason = info.skipreason
self.provides = info.provides
self.rprovides = info.packages + info.rprovides
for package in info.packages:
self.rprovides += info.rprovides_pkg[package]
self.rprovides = info.rprovides
elif reason:
self.skipreason = reason
@@ -382,29 +380,14 @@ class BBCooker:
try:
self.prhost = prserv.serv.auto_start(self.data)
except prserv.serv.PRServiceConfigError as e:
bb.fatal("Unable to start PR Server, exiting")
bb.fatal("Unable to start PR Server, exitting")
if self.data.getVar("BB_HASHSERVE") == "auto":
# Create a new hash server bound to a unix domain socket
if not self.hashserv:
dbfile = (self.data.getVar("PERSISTENT_DIR") or self.data.getVar("CACHE")) + "/hashserv.db"
upstream = self.data.getVar("BB_HASHSERVE_UPSTREAM") or None
if upstream:
import socket
try:
sock = socket.create_connection(upstream.split(":"), 5)
sock.close()
except socket.error as e:
bb.warn("BB_HASHSERVE_UPSTREAM is not valid, unable to connect hash equivalence server at '%s': %s"
% (upstream, repr(e)))
self.hashservaddr = "unix://%s/hashserve.sock" % self.data.getVar("TOPDIR")
self.hashserv = hashserv.create_server(
self.hashservaddr,
dbfile,
sync=False,
upstream=upstream,
)
self.hashserv = hashserv.create_server(self.hashservaddr, dbfile, sync=False)
self.hashserv.process = multiprocessing.Process(target=self.hashserv.serve_forever)
self.hashserv.process.start()
self.data.setVar("BB_HASHSERVE", self.hashservaddr)
@@ -426,8 +409,6 @@ class BBCooker:
self.data.disableTracking()
def parseConfiguration(self):
self.updateCacheSync()
# Change nice level if we're asked to
nice = self.data.getVar("BB_NICE_LEVEL")
if nice:
@@ -458,7 +439,7 @@ class BBCooker:
continue
except AttributeError:
pass
logger.debug("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
logger.debug(1, "Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
print("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
clean = False
if hasattr(self.configuration, o):
@@ -485,17 +466,17 @@ class BBCooker:
for k in bb.utils.approved_variables():
if k in environment and k not in self.configuration.env:
logger.debug("Updating new environment variable %s to %s" % (k, environment[k]))
logger.debug(1, "Updating new environment variable %s to %s" % (k, environment[k]))
self.configuration.env[k] = environment[k]
clean = False
if k in self.configuration.env and k not in environment:
logger.debug("Updating environment variable %s (deleted)" % (k))
logger.debug(1, "Updating environment variable %s (deleted)" % (k))
del self.configuration.env[k]
clean = False
if k not in self.configuration.env and k not in environment:
continue
if environment[k] != self.configuration.env[k]:
logger.debug("Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
logger.debug(1, "Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
self.configuration.env[k] = environment[k]
clean = False
@@ -503,7 +484,7 @@ class BBCooker:
self.configuration.env = environment
if not clean:
logger.debug("Base environment change, triggering reparse")
logger.debug(1, "Base environment change, triggering reparse")
self.reset()
def runCommands(self, server, data, abort):
@@ -517,30 +498,22 @@ class BBCooker:
def showVersions(self):
(latest_versions, preferred_versions, required) = self.findProviders()
(latest_versions, preferred_versions) = self.findProviders()
logger.plain("%-35s %25s %25s %25s", "Recipe Name", "Latest Version", "Preferred Version", "Required Version")
logger.plain("%-35s %25s %25s %25s\n", "===========", "==============", "=================", "================")
logger.plain("%-35s %25s %25s", "Recipe Name", "Latest Version", "Preferred Version")
logger.plain("%-35s %25s %25s\n", "===========", "==============", "=================")
for p in sorted(self.recipecaches[''].pkg_pn):
preferred = preferred_versions[p]
pref = preferred_versions[p]
latest = latest_versions[p]
requiredstr = ""
preferredstr = ""
if required[p]:
if preferred[0] is not None:
requiredstr = preferred[0][0] + ":" + preferred[0][1] + '-' + preferred[0][2]
else:
bb.fatal("REQUIRED_VERSION of package %s not available" % p)
else:
preferredstr = preferred[0][0] + ":" + preferred[0][1] + '-' + preferred[0][2]
prefstr = pref[0][0] + ":" + pref[0][1] + '-' + pref[0][2]
lateststr = latest[0][0] + ":" + latest[0][1] + "-" + latest[0][2]
if preferred == latest:
preferredstr = ""
if pref == latest:
prefstr = ""
logger.plain("%-35s %25s %25s %25s", p, lateststr, preferredstr, requiredstr)
logger.plain("%-35s %25s %25s", p, lateststr, prefstr)
def showEnvironment(self, buildfile=None, pkgs_to_build=None):
"""
@@ -639,7 +612,7 @@ class BBCooker:
# Replace string such as "mc:*:bash"
# into "mc:A:bash mc:B:bash bash"
for k in targetlist:
if k.startswith("mc:") and k.count(':') >= 2:
if k.startswith("mc:"):
if wildcard:
bb.fatal('multiconfig conflict')
if k.split(":")[1] == "*":
@@ -673,7 +646,7 @@ class BBCooker:
for k in fulltargetlist:
origk = k
mc = ""
if k.startswith("mc:") and k.count(':') >= 2:
if k.startswith("mc:"):
mc = k.split(":")[1]
k = ":".join(k.split(":")[2:])
ktask = task
@@ -722,7 +695,7 @@ class BBCooker:
if depmc not in self.multiconfigs:
bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named configuration %s" % (k,depmc))
else:
logger.debug("Adding providers for multiconfig dependency %s" % l[3])
logger.debug(1, "Adding providers for multiconfig dependency %s" % l[3])
taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
seen.add(k)
new = True
@@ -815,9 +788,7 @@ class BBCooker:
for dep in rq.rqdata.runtaskentries[tid].depends:
(depmc, depfn, _, deptaskfn) = bb.runqueue.split_tid_mcfn(dep)
deppn = self.recipecaches[depmc].pkg_fn[deptaskfn]
if depmc:
depmc = "mc:" + depmc + ":"
depend_tree["tdepends"][dotname].append("%s%s.%s" % (depmc, deppn, bb.runqueue.taskname_from_tid(dep)))
depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, bb.runqueue.taskname_from_tid(dep)))
if taskfn not in seen_fns:
seen_fns.append(taskfn)
packages = []
@@ -1088,16 +1059,10 @@ class BBCooker:
if pn in self.recipecaches[mc].providers:
filenames = self.recipecaches[mc].providers[pn]
eligible, foundUnique = bb.providers.filterProviders(filenames, pn, self.databuilder.mcdata[mc], self.recipecaches[mc])
if eligible is not None:
filename = eligible[0]
else:
filename = None
filename = eligible[0]
return None, None, None, filename
elif pn in self.recipecaches[mc].pkg_pn:
(latest, latest_f, preferred_ver, preferred_file, required) = bb.providers.findBestProvider(pn, self.databuilder.mcdata[mc], self.recipecaches[mc], self.recipecaches[mc].pkg_pn)
if required and preferred_file is None:
return None, None, None, None
return (latest, latest_f, preferred_ver, preferred_file)
return bb.providers.findBestProvider(pn, self.databuilder.mcdata[mc], self.recipecaches[mc], self.recipecaches[mc].pkg_pn)
else:
return None, None, None, None
@@ -1586,7 +1551,7 @@ class BBCooker:
self.inotify_modified_files = []
if not self.baseconfig_valid:
logger.debug("Reloading base configuration data")
logger.debug(1, "Reloading base configuration data")
self.initConfigurationData()
self.handlePRServ()
@@ -2216,33 +2181,21 @@ class CookerParser(object):
yield not cached, mc, infos
def parse_generator(self):
empty = False
while self.processes or not empty:
for process in self.processes.copy():
if not process.is_alive():
process.join()
self.processes.remove(process)
while True:
if self.parsed >= self.toparse:
break
try:
result = self.result_queue.get(timeout=0.25)
except queue.Empty:
empty = True
pass
else:
empty = False
value = result[1]
if isinstance(value, BaseException):
raise value
else:
yield result
if not (self.parsed >= self.toparse):
raise bb.parse.ParseError("Not all recipes parsed, parser thread killed/died? Exiting.", None)
def parse_next(self):
result = []
parsed = None
@@ -2254,18 +2207,18 @@ class CookerParser(object):
except bb.BBHandledException as exc:
self.error += 1
logger.error('Failed to parse recipe: %s' % exc.recipe)
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
except ParsingFailure as exc:
self.error += 1
logger.error('Unable to parse %s: %s' %
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
except bb.parse.ParseError as exc:
self.error += 1
logger.error(str(exc))
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
except bb.data_smart.ExpansionError as exc:
self.error += 1
@@ -2274,7 +2227,7 @@ class CookerParser(object):
tb = list(itertools.dropwhile(lambda e: e.filename.startswith(bbdir), exc.traceback))
logger.error('ExpansionError during parsing %s', value.recipe,
exc_info=(etype, value, tb))
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
except Exception as exc:
self.error += 1
@@ -2286,7 +2239,7 @@ class CookerParser(object):
# Most likely, an exception occurred during raising an exception
import traceback
logger.error('Exception during parse: %s' % traceback.format_exc())
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
self.current += 1

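The "Replace string such as mc:*:bash" comment above describes the wildcard expansion applied to the target list before providers are resolved; the count(':') guards in the same hunks are the same malformed-name check as in cache.py. A standalone sketch of the expansion, assuming two multiconfigs named A and B are configured:

def expand_mc_targets(targets, multiconfigs):
    # "mc:*:<target>" becomes one entry per configured multiconfig
    # plus the plain default-configuration target.
    out = []
    for t in targets:
        if t.startswith("mc:") and t.count(":") >= 2 and t.split(":")[1] == "*":
            name = ":".join(t.split(":")[2:])
            out.extend("mc:%s:%s" % (mc, name) for mc in multiconfigs)
            out.append(name)
        else:
            out.append(t)
    return out

print(expand_mc_targets(["mc:*:bash", "gcc"], ["A", "B"]))
# ['mc:A:bash', 'mc:B:bash', 'bash', 'gcc']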

@@ -23,8 +23,8 @@ logger = logging.getLogger("BitBake")
parselog = logging.getLogger("BitBake.Parsing")
class ConfigParameters(object):
def __init__(self, argv=None):
self.options, targets = self.parseCommandLine(argv or sys.argv)
def __init__(self, argv=sys.argv):
self.options, targets = self.parseCommandLine(argv)
self.environment = self.parseEnvironment()
self.options.pkgs_to_build = targets or []
@@ -209,7 +209,7 @@ def findConfigFile(configfile, data):
return None
#
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
#
@@ -291,8 +291,6 @@ class CookerDataBuilder(object):
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
if config[0].isdigit():
bb.fatal("Multiconfig name '%s' is invalid as multiconfigs cannot start with a digit" % config)
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
@@ -344,9 +342,6 @@ class CookerDataBuilder(object):
layers = (data.getVar('BBLAYERS') or "").split()
broken_layers = []
if not layers:
bb.fatal("The bblayers.conf file doesn't contain any BBLAYERS definition")
data = bb.data.createCopy(data)
approved = bb.utils.approved_variables()
@@ -401,8 +396,6 @@ class CookerDataBuilder(object):
if c in collections_tmp:
bb.fatal("Found duplicated BBFILE_COLLECTIONS '%s', check bblayers.conf or layer.conf to fix it." % c)
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
if compat and not layerseries:
bb.fatal("No core layer found to work with layer '%s'. Missing entry in bblayers.conf?" % c)
if compat and not (compat & layerseries):
bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
% (c, " ".join(layerseries), " ".join(compat)))
@@ -436,7 +429,7 @@ class CookerDataBuilder(object):
parselog.critical("Undefined event handler function '%s'" % var)
raise bb.BBHandledException()
handlerln = int(data.getVarFlag(var, "lineno", False))
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln, data)
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))

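The ConfigParameters hunk is less about features than about a classic Python pitfall: a default of argv=sys.argv is evaluated once, when the def statement runs, so a caller that later rewrites sys.argv (a test harness, or a process embedding BitBake) keeps parsing the stale list. The argv=None form reads sys.argv at each call. A minimal demonstration:

import sys

def parse_bad(argv=sys.argv):
    # Default captured once, at definition time.
    return list(argv)

def parse_good(argv=None):
    # Default resolved at call time.
    return list(argv or sys.argv)

sys.argv = ["bitbake", "core-image-minimal"]
print(parse_bad())   # whatever sys.argv held when the function was defined
print(parse_good())  # ['bitbake', 'core-image-minimal']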

@@ -17,7 +17,7 @@ BitBake build tools.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import copy, re, sys, traceback
from collections.abc import MutableMapping
from collections import MutableMapping
import logging
import hashlib
import bb, bb.codeparser
@@ -28,7 +28,7 @@ logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~]+?}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')
@@ -403,7 +403,7 @@ class DataSmart(MutableMapping):
s = __expand_python_regexp__.sub(varparse.python_sub, s)
except SyntaxError as e:
# Likely unmatched brackets, just don't expand the expression
if e.msg != "EOL while scanning string literal" and not e.msg.startswith("unterminated string literal"):
if e.msg != "EOL while scanning string literal":
raise
if s == olds:
break
@@ -411,8 +411,6 @@ class DataSmart(MutableMapping):
raise
except bb.parse.SkipRecipe:
raise
except bb.BBHandledException:
raise
except Exception as exc:
tb = sys.exc_info()[2]
raise ExpansionError(varname, s, exc).with_traceback(tb) from exc
@@ -483,7 +481,6 @@ class DataSmart(MutableMapping):
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
var = var.replace(":", "_")
self.expand_cache = {}
parsing=False
if 'parsing' in loginfo:
@@ -592,8 +589,6 @@ class DataSmart(MutableMapping):
"""
Rename the variable key to newkey
"""
key = key.replace(":", "_")
newkey = newkey.replace(":", "_")
if key == newkey:
bb.warn("Calling renameVar with equivalent keys (%s) is invalid" % key)
return
@@ -642,7 +637,6 @@ class DataSmart(MutableMapping):
self.setVar(var + "_prepend", value, ignore=True, parsing=True)
def delVar(self, var, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
loginfo['detail'] = ""
@@ -670,7 +664,6 @@ class DataSmart(MutableMapping):
override = None
def setVarFlag(self, var, flag, value, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
if 'op' not in loginfo:
@@ -694,7 +687,6 @@ class DataSmart(MutableMapping):
self.dict["__exportlist"]["_content"].add(var)
def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False, retparser=False):
var = var.replace(":", "_")
if flag == "_content":
cachename = var
else:
@@ -822,7 +814,6 @@ class DataSmart(MutableMapping):
return value
def delVarFlag(self, var, flag, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
local_var, _ = self._findVar(var)
@@ -840,7 +831,6 @@ class DataSmart(MutableMapping):
del self.dict[var][flag]
def appendVarFlag(self, var, flag, value, **loginfo):
var = var.replace(":", "_")
loginfo['op'] = 'append'
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
@@ -848,7 +838,6 @@ class DataSmart(MutableMapping):
self.setVarFlag(var, flag, newvalue, ignore=True)
def prependVarFlag(self, var, flag, value, **loginfo):
var = var.replace(":", "_")
loginfo['op'] = 'prepend'
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
@@ -856,7 +845,6 @@ class DataSmart(MutableMapping):
self.setVarFlag(var, flag, newvalue, ignore=True)
def setVarFlags(self, var, flags, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
infer_caller_details(loginfo)
if not var in self.dict:
@@ -871,7 +859,6 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
var = var.replace(":", "_")
local_var, _ = self._findVar(var)
flags = {}
@@ -888,7 +875,6 @@ class DataSmart(MutableMapping):
def delVarFlags(self, var, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
if not var in self.dict:
self._makeShadowCopy(var)
@@ -1019,7 +1005,7 @@ class DataSmart(MutableMapping):
else:
data.update({key:value})
varflags = d.getVarFlags(key, internalflags = True, expand=["vardepvalue"])
varflags = d.getVarFlags(key, internalflags = True)
if not varflags:
continue
for f in varflags:

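The __expand_var_regexp__ hunk is the visible edge of the override-separator transition: the newer pattern admits ':' inside ${...} references, and the many var.replace(":", "_") lines above are the compatibility shim that maps new-style names onto old-style storage. Comparing the two patterns exactly as they appear in the hunk:

import re

new_re = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
old_re = re.compile(r"\${[a-zA-Z0-9\-_+./~]+?}")

s = "${PN}-extra ${SRC_URI:append}"
print(new_re.findall(s))  # ['${PN}', '${SRC_URI:append}']
print(old_re.findall(s))  # ['${PN}'] -- the colon form is not seen as one reference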

@@ -118,8 +118,6 @@ def fire_class_handlers(event, d):
if _eventfilter:
if not _eventfilter(name, handler, event, d):
continue
if d is not None and not name in (d.getVar("__BBHANDLERS_MC") or set()):
continue
execute_handler(name, handler, event, d)
ui_queue = []
@@ -229,19 +227,11 @@ def fire_from_worker(event, d):
fire_ui_handlers(event, d)
noop = lambda _: None
def register(name, handler, mask=None, filename=None, lineno=None, data=None):
def register(name, handler, mask=None, filename=None, lineno=None):
"""Register an Event handler"""
if data is not None and data.getVar("BB_CURRENT_MC"):
mc = data.getVar("BB_CURRENT_MC")
name = '%s%s' % (mc.replace('-', '_'), name)
# already registered
if name in _handlers:
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
bbhands_mc.add(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
return AlreadyRegistered
if handler is not None:
@@ -278,20 +268,10 @@ def register(name, handler, mask=None, filename=None, lineno=None, data=None):
_event_handler_map[m] = {}
_event_handler_map[m][name] = True
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
bbhands_mc.add(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
return Registered
def remove(name, handler, data=None):
def remove(name, handler):
"""Remove an Event handler"""
if data is not None:
if data.getVar("BB_CURRENT_MC"):
mc = data.getVar("BB_CURRENT_MC")
name = '%s%s' % (mc.replace('-', '_'), name)
_handlers.pop(name)
if name in _catchall_handlers:
_catchall_handlers.pop(name)
@@ -299,12 +279,6 @@ def remove(name, handler, data=None):
if name in _event_handler_map[event]:
_event_handler_map[event].pop(name)
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
if name in bbhands_mc:
bbhands_mc.remove(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
def get_handlers():
return _handlers
@@ -670,17 +644,6 @@ class ReachableStamps(Event):
Event.__init__(self)
self.stamps = stamps
class StaleSetSceneTasks(Event):
"""
An event listing setscene tasks which are 'stale' and will
be rerun. The metadata may use this to clean up stale data.
tasks is a mapping of tasks and matching stale stamps.
"""
def __init__(self, tasks):
Event.__init__(self)
self.tasks = tasks
class FilesMatchingFound(Event):
"""
Event when a list of files matching the supplied pattern has

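The event.py hunks remove the multiconfig-aware handler bookkeeping present on the newer side: a handler registered while parsing multiconfig 'foo-bar' is stored under a 'foo_bar'-prefixed name, and fire_class_handlers consults the datastore's __BBHANDLERS_MC set before running it. A toy sketch of just the name-spacing idea (structures simplified, names illustrative):

_handlers = {}

def register(name, handler, mc=""):
    # Prefix the handler name per multiconfig so two configurations can
    # register same-named handlers without clobbering each other.
    if mc:
        name = "%s%s" % (mc.replace("-", "_"), name)
    _handlers[name] = handler
    return name

register("build_started", lambda e: print("default config"))
register("build_started", lambda e: print("musl config"), mc="musl-tiny")
print(sorted(_handlers))  # ['build_started', 'musl_tinybuild_started']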

@@ -290,7 +290,7 @@ class URI(object):
def _param_str_split(self, string, elmdelim, kvdelim="="):
ret = collections.OrderedDict()
for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim) if x]:
for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim)]:
ret[k] = v
return ret
@@ -428,9 +428,8 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
uri_decoded = list(decodeurl(ud.url))
uri_find_decoded = list(decodeurl(uri_find))
uri_replace_decoded = list(decodeurl(uri_replace))
logger.debug2("For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
logger.debug(2, "For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
result_decoded = ['', '', '', '', '', {}]
# 0 - type, 1 - host, 2 - path, 3 - user, 4 - pswd, 5 - params
for loc, i in enumerate(uri_find_decoded):
result_decoded[loc] = uri_decoded[loc]
regexp = i
@@ -450,9 +449,6 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
for l in replacements:
uri_replace_decoded[loc][k] = uri_replace_decoded[loc][k].replace(l, replacements[l])
result_decoded[loc][k] = uri_replace_decoded[loc][k]
elif (loc == 3 or loc == 4) and uri_replace_decoded[loc]:
# User/password in the replacement is just a straight replacement
result_decoded[loc] = uri_replace_decoded[loc]
elif (re.match(regexp, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
result_decoded[loc] = ""
@@ -478,7 +474,7 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
result = encodeurl(result_decoded)
if result == ud.url:
return None
logger.debug2("For url %s returning %s" % (ud.url, result))
logger.debug(2, "For url %s returning %s" % (ud.url, result))
return result
methods = []
@@ -503,9 +499,9 @@ def fetcher_init(d):
# When to drop SCM head revisions controlled by user policy
srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
if srcrev_policy == "cache":
logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs.clear()
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
@@ -566,9 +562,6 @@ def verify_checksum(ud, d, precomputed={}):
checksum_expected = getattr(ud, "%s_expected" % checksum_id)
if checksum_expected == '':
checksum_expected = None
return {
"id": checksum_id,
"name": checksum_name,
@@ -619,7 +612,7 @@ def verify_checksum(ud, d, precomputed={}):
for ci in checksum_infos:
if ci["expected"] and ci["expected"] != ci["data"]:
messages.append("File: '%s' has %s checksum '%s' when '%s' was " \
messages.append("File: '%s' has %s checksum %s when %s was " \
"expected" % (ud.localpath, ci["id"], ci["data"], ci["expected"]))
bad_checksum = ci["data"]
@@ -860,13 +853,18 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
# Ensure that a _PYTHON_SYSCONFIGDATA_NAME value set by a recipe
# (for example via python3native.bbclass since warrior) is not set for
# host Python (otherwise tools like git-make-shallow will fail)
cmd = 'unset _PYTHON_SYSCONFIGDATA_NAME; ' + cmd
# Disable pseudo as it may affect ssh, potentially causing it to hang.
cmd = 'export PSEUDO_DISABLED=1; ' + cmd
if workdir:
logger.debug("Running '%s' in %s" % (cmd, workdir))
logger.debug(1, "Running '%s' in %s" % (cmd, workdir))
else:
logger.debug("Running %s", cmd)
logger.debug(1, "Running %s", cmd)
success = False
error_message = ""
@@ -875,7 +873,7 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
(output, errors) = bb.process.run(cmd, log=log, shell=True, stderr=subprocess.PIPE, cwd=workdir)
success = True
except bb.process.NotFoundError as e:
error_message = "Fetch command %s not found" % (e.command)
error_message = "Fetch command %s" % (e.command)
except bb.process.ExecutionError as e:
if e.stdout:
output = "output:\n%s\n%s" % (e.stdout, e.stderr)
@@ -907,7 +905,7 @@ def check_network_access(d, info, url):
elif not trusted_network(d, url):
raise UntrustedUrl(url, info)
else:
logger.debug("Fetcher accessed the network with the command %s" % info)
logger.debug(1, "Fetcher accessed the network with the command %s" % info)
def build_mirroruris(origud, mirrors, ld):
uris = []
@@ -933,7 +931,7 @@ def build_mirroruris(origud, mirrors, ld):
continue
if not trusted_network(ld, newuri):
logger.debug("Mirror %s not in the list of trusted networks, skipping" % (newuri))
logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" % (newuri))
continue
# Create a local copy of the mirrors minus the current line
@@ -946,8 +944,8 @@ def build_mirroruris(origud, mirrors, ld):
newud = FetchData(newuri, ld)
newud.setup_localpath(ld)
except bb.fetch2.BBFetchException as e:
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(str(e))
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(1, str(e))
try:
# setup_localpath of file:// urls may fail, we should still see
# if mirrors of the url exist
@@ -1050,8 +1048,8 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
elif isinstance(e, NoChecksumError):
raise
else:
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(str(e))
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(1, str(e))
try:
ud.method.clean(ud, ld)
except UnboundLocalError:
@@ -1250,7 +1248,7 @@ class FetchData(object):
if checksum_name in self.parm:
checksum_expected = self.parm[checksum_name]
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az"]:
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3"]:
checksum_expected = None
else:
checksum_expected = d.getVarFlag("SRC_URI", checksum_name)
@@ -1463,10 +1461,6 @@ class FetchMethod(object):
cmd = '7z x -so %s | tar x --no-same-owner -f -' % file
elif file.endswith('.7z'):
cmd = '7za x -y %s 1>/dev/null' % file
elif file.endswith('.tzst') or file.endswith('.tar.zst'):
cmd = 'zstd --decompress --stdout %s | tar x --no-same-owner -f -' % file
elif file.endswith('.zst'):
cmd = 'zstd --decompress --stdout %s > %s' % (file, efile)
elif file.endswith('.zip') or file.endswith('.jar'):
try:
dos = bb.utils.to_boolean(urldata.parm.get('dos'), False)
@@ -1695,7 +1689,7 @@ class Fetch(object):
if m.verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
done = True
elif m.try_premirror(ud, self.d):
logger.debug("Trying PREMIRRORS")
logger.debug(1, "Trying PREMIRRORS")
mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
done = m.try_mirrors(self, ud, self.d, mirrors)
if done:
@@ -1705,7 +1699,7 @@ class Fetch(object):
m.update_donestamp(ud, self.d)
except ChecksumError as e:
logger.warning("Checksum failure encountered with premirror download of %s - will attempt other sources." % u)
logger.debug(str(e))
logger.debug(1, str(e))
done = False
if premirroronly:
@@ -1717,7 +1711,7 @@ class Fetch(object):
try:
if not trusted_network(self.d, ud.url):
raise UntrustedUrl(ud.url)
logger.debug("Trying Upstream")
logger.debug(1, "Trying Upstream")
m.download(ud, self.d)
if hasattr(m, "build_mirror_data"):
m.build_mirror_data(ud, self.d)
@@ -1732,19 +1726,19 @@ class Fetch(object):
except BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warning("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
logger.debug(str(e))
logger.debug(1, str(e))
if os.path.exists(ud.localpath):
rename_bad_checksum(ud, e.checksum)
elif isinstance(e, NoChecksumError):
raise
else:
logger.warning('Failed to fetch URL %s, attempting MIRRORS if available' % u)
logger.debug(str(e))
logger.debug(1, str(e))
firsterr = e
# Remove any incomplete fetch
if not verified_stamp:
m.clean(ud, self.d)
logger.debug("Trying MIRRORS")
logger.debug(1, "Trying MIRRORS")
mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
done = m.try_mirrors(self, ud, self.d, mirrors)
@@ -1781,7 +1775,7 @@ class Fetch(object):
ud = self.ud[u]
ud.setup_localpath(self.d)
m = ud.method
logger.debug("Testing URL %s", u)
logger.debug(1, "Testing URL %s", u)
# First try checking uri, u, from PREMIRRORS
mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
ret = m.try_mirrors(self, ud, self.d, mirrors, True)
@@ -1915,7 +1909,6 @@ from . import repo
from . import clearcase
from . import npm
from . import npmsw
from . import az
methods.append(local.Local())
methods.append(wget.Wget())
@@ -1935,4 +1928,3 @@ methods.append(repo.Repo())
methods.append(clearcase.ClearCase())
methods.append(npm.Npm())
methods.append(npmsw.NpmShrinkWrap())
methods.append(az.Az())

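uri_replace above rewrites a failing URL against a find-regex/replacement pair taken from PREMIRRORS or MIRRORS; the dropped hunk additionally carried the replacement's user/password fields straight across. The real code first decodes each URL into type/host/path/user/password/parameter components, but the core idea reduces to a regex find-and-rewrite, roughly:

import re

def apply_mirror(url, find_regex, replacement):
    # A MIRRORS entry is "<find-regex> <replacement>"; when the regex
    # matches the original URL, fetch from the rewritten location instead.
    if re.match(find_regex, url):
        return re.sub(find_regex, replacement, url, count=1)
    return None

url = "git://example.org/myproject/repo.git;protocol=https"
print(apply_mirror(url, r"git://.*/.*", "http://downloads.example.com/mirror/"))
# http://downloads.example.com/mirror/
print(apply_mirror("ftp://other/file.tar.gz", r"git://.*/.*", "http://mirror/"))
# None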

@@ -1,93 +0,0 @@
"""
BitBake 'Fetch' Azure Storage implementation
"""
# Copyright (C) 2021 Alejandro Hernandez Samaniego
#
# Based on bb.fetch2.wget:
# Copyright (C) 2003, 2004 Chris Larson
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import shlex
import os
import bb
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2.wget import Wget
class Az(Wget):
def supports(self, ud, d):
"""
Check to see if a given url can be fetched from Azure Storage
"""
return ud.type in ['az']
def checkstatus(self, fetch, ud, d, try_again=True):
# checkstatus discards parameters either way, so we need to do this before adding the SAS
ud.url = ud.url.replace('az://','https://').split(';')[0]
az_sas = d.getVar('AZ_SAS')
if az_sas and az_sas not in ud.url:
ud.url += az_sas
return Wget.checkstatus(self, fetch, ud, d, try_again)
# Override download method, include retries
def download(self, ud, d, retries=3):
"""Fetch urls"""
# If we're reaching the account transaction limit we might be refused a connection,
# retrying allows us to avoid false negatives since the limit changes over time
fetchcmd = self.basecmd + ' --retry-connrefused --waitretry=5'
# We need to provide a localpath to avoid wget using the SAS
# ud.localfile either has the downloadfilename or ud.path
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile)
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if ud.user and ud.pswd:
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
# Check if a Shared Access Signature was given and use it
az_sas = d.getVar('AZ_SAS')
if az_sas:
azuri = '%s%s%s%s' % ('https://', ud.host, ud.path, az_sas)
else:
azuri = '%s%s%s' % ('https://', ud.host, ud.path)
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it... trying again.
fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % azuri)
else:
fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % azuri)
try:
self._runwget(ud, d, fetchcmd, False)
except FetchError as e:
# Azure fails on handshake sometimes when using wget after some stress, producing a
# FetchError from the fetcher; if the artifact exists, retrying should succeed
if 'Unable to establish SSL connection' in str(e):
logger.debug2('Unable to establish SSL connection: Retries remaining: %s, Retrying...' % retries)
self.download(ud, d, retries -1)
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath):
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (azuri, ud.localpath), azuri)
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (azuri), azuri)
return True

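The whole az.py module above is the Azure Storage fetcher added on the newer branch. Its checkstatus() shows the central trick: an az:// URL is plain HTTPS underneath, with an optional Shared Access Signature from AZ_SAS appended so private containers can be read. A standalone sketch of that URL construction (the account name and SAS value are made-up placeholders):

def az_to_https(url, az_sas=None):
    # Drop URL parameters, switch the scheme, then append the SAS if
    # one is configured and not already present.
    url = url.replace("az://", "https://").split(";")[0]
    if az_sas and az_sas not in url:
        url += az_sas
    return url

print(az_to_https("az://acct.blob.core.windows.net/pool/src.tar.gz;downloadfilename=src.tar.gz",
                  az_sas="?sv=2020-08-04&sig=EXAMPLE"))
# https://acct.blob.core.windows.net/pool/src.tar.gz?sv=2020-08-04&sig=EXAMPLE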

@@ -74,16 +74,16 @@ class Bzr(FetchMethod):
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug("BZR Update %s", ud.url)
logger.debug(1, "BZR Update %s", ud.url)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
runfetchcmd(bzrcmd, d, workdir=os.path.join(ud.pkgdir, os.path.basename(ud.path)))
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug("BZR Checkout %s", ud.url)
logger.debug(1, "BZR Checkout %s", ud.url)
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", bzrcmd)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d, workdir=ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
@@ -109,7 +109,7 @@ class Bzr(FetchMethod):
"""
Return the latest upstream revision number
"""
logger.debug2("BZR fetcher hitting network for %s", ud.url)
logger.debug(2, "BZR fetcher hitting network for %s", ud.url)
bb.fetch2.check_network_access(d, self._buildbzrcommand(ud, d, "revno"), ud.url)


@@ -70,7 +70,7 @@ class ClearCase(FetchMethod):
return ud.type in ['ccrc']
def debug(self, msg):
logger.debug("ClearCase: %s", msg)
logger.debug(1, "ClearCase: %s", msg)
def urldata_init(self, ud, d):
"""


@@ -109,7 +109,7 @@ class Cvs(FetchMethod):
cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)
# create module directory
logger.debug2("Fetch: checking for module directory")
logger.debug(2, "Fetch: checking for module directory")
moddir = os.path.join(ud.pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
@@ -123,7 +123,7 @@ class Cvs(FetchMethod):
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
workdir = ud.pkgdir
logger.debug("Running %s", cvscmd)
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd


@@ -68,7 +68,6 @@ import subprocess
import tempfile
import bb
import bb.progress
from contextlib import contextmanager
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
@@ -142,10 +141,6 @@ class Git(FetchMethod):
ud.proto = 'file'
else:
ud.proto = "git"
if ud.host == "github.com" and ud.proto == "git":
# github stopped supporting git protocol
# https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git
ud.proto = "https"
if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
@@ -225,12 +220,7 @@ class Git(FetchMethod):
ud.shallow = False
if ud.usehead:
# When usehead is set let's associate 'HEAD' with the unresolved
# rev of this repository. This will get resolved into a revision
# later. If an actual revision happens to have also been provided
# then this setting will be overridden.
for name in ud.names:
ud.unresolvedrev[name] = 'HEAD'
ud.unresolvedrev['default'] = 'HEAD'
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
@@ -389,50 +379,7 @@ class Git(FetchMethod):
if missing_rev:
raise bb.fetch2.FetchError("Unable to find revision %s even from upstream" % missing_rev)
if self._contains_lfs(ud, d, ud.clonedir) and self._need_lfs(ud):
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
# of all LFS blobs needed at the srcrev.
#
# It would be nice to just do this inline here by running 'git-lfs fetch'
# on the bare clonedir, but that operation requires a working copy on some
# releases of Git LFS.
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
try:
# Do the checkout. This implicitly involves a Git LFS fetch.
Git.unpack(self, ud, tmpdir, d)
# Scoop up a copy of any stuff that Git LFS downloaded. Merge them into
# the bare clonedir.
#
# As this procedure is invoked repeatedly on incremental fetches as
# a recipe's SRCREV is bumped throughout its lifetime, this will
# result in a gradual accumulation of LFS blobs in <ud.clonedir>/lfs
# corresponding to all the blobs reachable from the different revs
# fetched across time.
#
# Only do this if the unpack resulted in a .git/lfs directory being
# created; this only happens if at least one blob needed to be
# downloaded.
if os.path.exists(os.path.join(tmpdir, "git", ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/git/.git" % tmpdir)
finally:
bb.utils.remove(tmpdir, recurse=True)
def build_mirror_data(self, ud, d):
# Create as a temp file and move atomically into position to avoid races
@contextmanager
def create_atomic(filename):
fd, tfile = tempfile.mkstemp(dir=os.path.dirname(filename))
try:
yield tfile
umask = os.umask(0o666)
os.umask(umask)
os.chmod(tfile, (0o666 & ~umask))
os.rename(tfile, filename)
finally:
os.close(fd)
if ud.shallow and ud.write_shallow_tarballs:
if not os.path.exists(ud.fullshallow):
if os.path.islink(ud.fullshallow):
@@ -443,8 +390,7 @@ class Git(FetchMethod):
self.clone_shallow_local(ud, shallowclone, d)
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullshallow) as tfile:
runfetchcmd("tar -czf %s ." % tfile, d, workdir=shallowclone)
runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
runfetchcmd("touch %s.done" % ud.fullshallow, d)
finally:
bb.utils.remove(tempdir, recurse=True)
@@ -453,8 +399,7 @@ class Git(FetchMethod):
os.unlink(ud.fullmirror)
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullmirror) as tfile:
runfetchcmd("tar -czf %s ." % tfile, d, workdir=ud.clonedir)
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
def clone_shallow_local(self, ud, dest, d):
@@ -529,7 +474,7 @@ class Git(FetchMethod):
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
need_lfs = self._need_lfs(ud)
need_lfs = ud.parm.get("lfs", "1") == "1"
if not need_lfs:
ud.basecmd = "GIT_LFS_SKIP_SMUDGE=1 " + ud.basecmd
@@ -618,9 +563,6 @@ class Git(FetchMethod):
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
return output.split()[0] != "0"
def _need_lfs(self, ud):
return ud.parm.get("lfs", "1") == "1"
def _contains_lfs(self, ud, d, wd):
"""
Check if the repository has 'lfs' (large file) content
@@ -631,14 +573,8 @@ class Git(FetchMethod):
else:
branchname = "master"
# The bare clonedir doesn't use the remote names; it has the branch immediately.
if wd == ud.clonedir:
refname = ud.branches[ud.names[0]]
else:
refname = "origin/%s" % ud.branches[ud.names[0]]
cmd = "%s grep lfs %s:.gitattributes | wc -l" % (
ud.basecmd, refname)
cmd = "%s grep lfs origin/%s:.gitattributes | wc -l" % (
ud.basecmd, ud.branches[ud.names[0]])
try:
output = runfetchcmd(cmd, d, quiet=True, workdir=wd)
@@ -659,11 +595,6 @@ class Git(FetchMethod):
"""
Return the repository URL
"""
# Note that we do not support passwords directly in the git urls. There are several
# reasons. SRC_URI can be written out to things like buildhistory and people don't
# want to leak passwords like that. Its also all too easy to share metadata without
# removing the password. ssh keys, ~/.netrc and ~/.ssh/config files can be used as
# alternatives so we will not take patches adding password support here.
if ud.user:
username = ud.user + '@'
else:

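The create_atomic() context manager shown above (newer side only) is the standard mkstemp-plus-rename idiom: the mirror tarball is written to a temporary file in the destination directory and renamed into place, so a concurrent reader never observes a half-written tarball. A self-contained version of the same idiom:

import os
import tempfile
from contextlib import contextmanager

@contextmanager
def create_atomic(filename):
    # Create the temp file in the target directory (same filesystem,
    # so os.rename is atomic), fix permissions, then move into place.
    fd, tfile = tempfile.mkstemp(dir=os.path.dirname(filename) or ".")
    try:
        yield tfile
        umask = os.umask(0o666)
        os.umask(umask)
        os.chmod(tfile, (0o666 & ~umask))
        os.rename(tfile, filename)
    finally:
        os.close(fd)

with create_atomic("mirror.tar.gz") as tmp:
    with open(tmp, "w") as f:  # stand-in for the tar command in the diff
        f.write("tarball contents")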

@@ -78,7 +78,7 @@ class GitSM(Git):
module_hash = ""
if not module_hash:
logger.debug("submodule %s is defined, but is not initialized in the repository. Skipping", m)
logger.debug(1, "submodule %s is defined, but is not initialized in the repository. Skipping", m)
continue
submodules.append(m)
@@ -179,7 +179,7 @@ class GitSM(Git):
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
if len(need_update_list) > 0:
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
logger.debug(1, 'gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
return True
return False


@@ -150,7 +150,7 @@ class Hg(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
@@ -160,7 +160,7 @@ class Hg(FetchMethod):
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether need pull
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug("Running %s", updatecmd)
logger.debug(1, "Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d, workdir=ud.moddir)
except bb.fetch2.FetchError:
@@ -168,7 +168,7 @@ class Hg(FetchMethod):
pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
logger.debug("Running %s", pullcmd)
logger.debug(1, "Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d, workdir=ud.moddir)
try:
@@ -183,14 +183,14 @@ class Hg(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", fetchcmd)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d, workdir=ud.pkgdir)
# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug("Running %s", updatecmd)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d, workdir=ud.moddir)
def clean(self, ud, d):
@@ -247,9 +247,9 @@ class Hg(FetchMethod):
if scmdata != "nokeep":
proto = ud.parm.get('protocol', 'http')
if not os.access(os.path.join(codir, '.hg'), os.R_OK):
logger.debug2("Unpack: creating new hg repository in '" + codir + "'")
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug2("Unpack: updating source in '" + codir + "'")
logger.debug(2, "Unpack: updating source in '" + codir + "'")
if ud.user and ud.pswd:
runfetchcmd("%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull %s" % (ud.basecmd, ud.user, ud.pswd, proto, ud.moddir), d, workdir=codir)
else:
@@ -259,5 +259,5 @@ class Hg(FetchMethod):
else:
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d, workdir=codir)
else:
logger.debug2("Unpack: extracting source to '" + codir + "'")
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d, workdir=ud.moddir)


@@ -54,12 +54,12 @@ class Local(FetchMethod):
return [path]
filespath = d.getVar('FILESPATH')
if filespath:
logger.debug2("Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
logger.debug(2, "Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
searched.extend(hist)
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
logger.debug2("Defaulting to %s for %s" % (dldirfile, path))
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
return searched

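local.py above resolves a file:// URL by walking the colon-separated FILESPATH list and falling back to a DL_DIR location. A reduced sketch of that search order (paths are illustrative):

import os

def find_local(path, filespath, dl_dir):
    # First existing FILESPATH hit wins; otherwise hand back a DL_DIR
    # location for the fetcher to populate.
    for p in filespath.split(":"):
        candidate = os.path.join(p, path)
        if os.path.exists(candidate):
            return candidate
    return os.path.join(dl_dir, path)

print(find_local("defconfig", "/layer/recipes/files:/layer/files", "/dl"))
# /dl/defconfig (assuming neither layer path exists on this machine)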

@@ -29,8 +29,6 @@ from bb.fetch2.npm import npm_integrity
from bb.fetch2.npm import npm_localfile
from bb.fetch2.npm import npm_unpack
from bb.utils import is_semver
from bb.utils import lockfile
from bb.utils import unlockfile
def foreach_dependencies(shrinkwrap, callback=None, dev=False):
"""
@@ -189,9 +187,7 @@ class NpmShrinkWrap(FetchMethod):
proxy_ud = ud.proxy.ud[proxy_url]
proxy_d = ud.proxy.d
proxy_ud.setup_localpath(proxy_d)
lf = lockfile(proxy_ud.lockfile)
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
unlockfile(lf)
return returns
def verify_donestamp(self, ud, d):


@@ -84,13 +84,13 @@ class Osc(FetchMethod):
Fetch url
"""
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
logger.debug("Running %s", oscupdatecmd)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d, workdir=ud.moddir)
else:
@@ -98,7 +98,7 @@ class Osc(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", oscfetchcmd)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d, workdir=ud.pkgdir)


@@ -90,16 +90,16 @@ class Perforce(FetchMethod):
p4port = d.getVar('P4PORT')
if p4port:
logger.debug('Using recipe provided P4PORT: %s' % p4port)
logger.debug(1, 'Using recipe provided P4PORT: %s' % p4port)
ud.host = p4port
else:
logger.debug('Trying to use P4CONFIG to automatically set P4PORT...')
logger.debug(1, 'Trying to use P4CONFIG to automatically set P4PORT...')
ud.usingp4config = True
p4cmd = '%s info | grep "Server address"' % ud.basecmd
bb.fetch2.check_network_access(d, p4cmd, ud.url)
ud.host = runfetchcmd(p4cmd, d, True)
ud.host = ud.host.split(': ')[1].strip()
logger.debug('Determined P4PORT to be: %s' % ud.host)
logger.debug(1, 'Determined P4PORT to be: %s' % ud.host)
if not ud.host:
raise FetchError('Could not determine P4PORT from P4CONFIG')
@@ -119,7 +119,6 @@ class Perforce(FetchMethod):
cleanedpath = ud.path.replace('/...', '').replace('/', '.')
cleanedhost = ud.host.replace(':', '.')
cleanedmodule = ""
# Merge the path and module into the final depot location
if ud.module:
if ud.module.find('/') == 0:
@@ -134,7 +133,7 @@ class Perforce(FetchMethod):
ud.setup_revisions(d)
ud.localfile = d.expand('%s_%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, cleanedmodule, ud.revision))
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, ud.revision))
def _buildp4command(self, ud, d, command, depot_filename=None):
"""
@@ -208,7 +207,7 @@ class Perforce(FetchMethod):
for filename in p4fileslist:
item = filename.split(' - ')
lastaction = item[1].split()
logger.debug('File: %s Last Action: %s' % (item[0], lastaction[0]))
logger.debug(1, 'File: %s Last Action: %s' % (item[0], lastaction[0]))
if lastaction[0] == 'delete':
continue
filelist.append(item[0])
@@ -255,7 +254,7 @@ class Perforce(FetchMethod):
raise FetchError('Could not determine the latest perforce changelist')
tipcset = tip.split(' ')[1]
logger.debug('p4 tip found to be changelist %s' % tipcset)
logger.debug(1, 'p4 tip found to be changelist %s' % tipcset)
return tipcset
def sortable_revision(self, ud, d, name):


@@ -47,7 +47,7 @@ class Repo(FetchMethod):
"""Fetch url"""
if os.access(os.path.join(d.getVar("DL_DIR"), ud.localfile), os.R_OK):
logger.debug("%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
repodir = d.getVar("REPODIR") or (d.getVar("DL_DIR") + "/repo")


@@ -86,7 +86,7 @@ class Svn(FetchMethod):
if command == "info":
svncmd = "%s info %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
elif command == "log1":
svncmd = "%s log --limit 1 --quiet %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
svncmd = "%s log --limit 1 %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
@@ -116,7 +116,7 @@ class Svn(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
lf = bb.utils.lockfile(ud.svnlock)
@@ -129,7 +129,7 @@ class Svn(FetchMethod):
runfetchcmd(ud.basecmd + " upgrade", d, workdir=ud.moddir)
except FetchError:
pass
logger.debug("Running %s", svncmd)
logger.debug(1, "Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.moddir)
else:
@@ -137,7 +137,7 @@ class Svn(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", svncmd)
logger.debug(1, "Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.pkgdir)


@@ -52,12 +52,6 @@ class WgetProgressHandler(bb.progress.LineFilterProgressHandler):
class Wget(FetchMethod):
# CDNs like CloudFlare may do a 'browser integrity test' which can fail
# with the standard wget/urllib User-Agent, so pretend to be a modern
# browser.
user_agent = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
"""Class to fetch urls via 'wget'"""
def supports(self, ud, d):
"""
@@ -88,7 +82,7 @@ class Wget(FetchMethod):
progresshandler = WgetProgressHandler(d)
logger.debug2("Fetching %s using command '%s'" % (ud.url, command))
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler, workdir=workdir)
@@ -303,7 +297,7 @@ class Wget(FetchMethod):
# Some servers (FusionForge, as used on Alioth) require that the
# optional Accept header is set.
r.add_header("Accept", "*/*")
r.add_header("User-Agent", self.user_agent)
r.add_header("User-Agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12")
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
@@ -322,23 +316,15 @@ class Wget(FetchMethod):
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
pass
with opener.open(r, timeout=30) as response:
with opener.open(r) as response:
pass
except urllib.error.URLError as e:
if try_again:
logger.debug2("checkstatus: trying again")
logger.debug(2, "checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed: %s" % e)
return False
except ConnectionResetError as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed: %s" % e)
logger.debug(2, "checkstatus() urlopen failed: %s" % e)
return False
return True
@@ -415,8 +401,9 @@ class Wget(FetchMethod):
"""
f = tempfile.NamedTemporaryFile()
with tempfile.TemporaryDirectory(prefix="wget-index-") as workdir, tempfile.NamedTemporaryFile(dir=workdir, prefix="wget-listing-") as f:
agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
fetchcmd = self.basecmd
fetchcmd += " -O " + f.name + " --user-agent='" + self.user_agent + "' '" + uri + "'"
fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
try:
self._runwget(ud, d, fetchcmd, True, workdir=workdir)
fetchresult = f.read()
@@ -472,7 +459,7 @@ class Wget(FetchMethod):
version_dir = ['', '', '']
version = ['', '', '']
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])*(\d+))")
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
s = dirver_regex.search(dirver)
if s:
version_dir[1] = s.group('ver')
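
The two dirver_regex patterns differ only in * versus + on the (\d+[\.\-_])* group, i.e. whether a directory name whose version part is a single bare number still counts. The behavioural difference is easy to show:

import re

star = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])*(\d+))")
plus = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")

for dirver in ("v2.0", "2", "release-1_4_2"):
    s, p = star.search(dirver), plus.search(dirver)
    print(dirver,
          "star:", s.group('ver') if s else None,
          "plus:", p.group('ver') if p else None)
# v2.0           star: 2.0    plus: 2.0
# 2              star: 2      plus: None   <- '+' demands at least one separator
# release-1_4_2  star: 1_4_2  plus: 1_4_2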


@@ -119,181 +119,178 @@ warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def create_bitbake_parser():
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
def parseCommandLine(self, argv=sys.argv):
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--skip-setscene", action="store_true",
dest="skipsetscene", default=False,
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene")
parser.add_option("", "--skip-setscene", action="store_true",
dest="skipsetscene", default=False,
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate any running bitbake server.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate any running bitbake server.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
parser.add_option("", "--runall", action="append", dest="runall",
help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--runall", action="append", dest="runall",
help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--runonly", action="append", dest="runonly",
help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
return parser
parser.add_option("", "--runonly", action="append", dest="runonly",
help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = create_bitbake_parser()
options, targets = parser.parse_args(argv)
if options.quiet and options.verbose:
@@ -469,7 +466,7 @@ def setup_bitbake(configParams, extrafeatures=None):
logger.info("Retrying server connection (#%d)..." % tryno)
else:
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, traceback.format_exc()))
if not retries:
bb.fatal("Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log).")
bb.event.print_ui_queue()
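
The option block earlier in this file pairs each add_option call twice because the hunk hoists parser construction out of BitBakeConfigParameters.parseCommandLine into the module-level create_bitbake_parser() factory shown at its top, which parseCommandLine then calls. A compressed sketch of the same refactor with two hypothetical options, not BitBake's full set:

import optparse
import sys

def create_parser():
    # Module-level factory: the parser can now be built without a
    # ConfigParameters instance (handy for tests or doc generation).
    parser = optparse.OptionParser(usage="%prog [options] [targets ...]")
    parser.add_option("-k", "--continue", action="store_false", dest="abort",
                      default=True,
                      help="Continue as much as possible after an error.")
    parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
                      help="Increase the debug level; repeatable.")
    return parser

class ConfigParameters:
    def parseCommandLine(self, argv=sys.argv):
        options, targets = create_parser().parse_args(argv)
        return options, targets[1:]   # drop the program name, as above

print(ConfigParameters().parseCommandLine(["prog", "-DD", "core-image-minimal"]))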


@@ -59,7 +59,7 @@ def getMountedDev(path):
pass
return None
def getDiskData(BBDirs):
def getDiskData(BBDirs, configuration):
"""Prepare disk data for disk space monitor"""
@@ -168,7 +168,7 @@ class diskMonitor:
BBDirs = configuration.getVar("BB_DISKMON_DIRS") or None
if BBDirs:
self.devDict = getDiskData(BBDirs)
self.devDict = getDiskData(BBDirs, configuration)
if self.devDict:
self.spaceInterval, self.inodeInterval = getInterval(configuration)
if self.spaceInterval and self.inodeInterval:


@@ -278,7 +278,7 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):
with open(os.path.normpath(userconfigfile), 'r') as f:
if userconfigfile.endswith('.yml') or userconfigfile.endswith('.yaml'):
import yaml
userconfig = yaml.safe_load(f)
userconfig = yaml.load(f)
elif userconfigfile.endswith('.json') or userconfigfile.endswith('.cfg'):
import json
userconfig = json.load(f)
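
The yaml.load/yaml.safe_load pair is a security-relevant one-liner: with the default loader, yaml.load can instantiate arbitrary Python objects from tagged input, while safe_load only ever builds plain scalars, lists, and mappings. A small sketch:

import yaml

doc = """
version: 1
handlers: [console, file]
"""
config = yaml.safe_load(doc)   # plain dict/list/str/int only
print(config["handlers"])      # ['console', 'file']
# Equivalent explicit form: yaml.load(doc, Loader=yaml.SafeLoader).
# A bare yaml.load(doc) warns on modern PyYAML precisely because of the
# arbitrary-object risk with untrusted logging configs.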


@@ -71,7 +71,7 @@ def update_mtime(f):
def update_cache(f):
if f in __mtime_cache:
logger.debug("Updating mtime cache for %s" % f)
logger.debug(1, "Updating mtime cache for %s" % f)
update_mtime(f)
def clear_cache():


@@ -34,7 +34,7 @@ class IncludeNode(AstNode):
Include the file and evaluate the statements
"""
s = data.expand(self.what_file)
logger.debug2("CONF %s:%s: including %s", self.filename, self.lineno, s)
logger.debug(2, "CONF %s:%s: including %s", self.filename, self.lineno, s)
# TODO: Cache those includes... maybe not here though
if self.force:
@@ -97,7 +97,6 @@ class DataNode(AstNode):
def eval(self, data):
groupd = self.groupd
key = groupd["var"]
key = key.replace(":", "_")
loginfo = {
'variable': key,
'file': self.filename,
@@ -208,7 +207,6 @@ class ExportFuncsNode(AstNode):
def eval(self, data):
for func in self.n:
func = func.replace(":", "_")
calledfunc = self.classname + "_" + func
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
@@ -337,7 +335,7 @@ def finalize(fn, d, variant = None):
if not handlerfn:
bb.fatal("Undefined event handler function '%s'" % var)
handlerln = int(d.getVarFlag(var, "lineno", False))
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln, data=d)
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
bb.event.fire(bb.event.RecipePreFinalise(fn), d)
@@ -378,7 +376,7 @@ def _create_variants(datastores, names, function, onlyfinalise):
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND") or "").split()
for append in appends:
logger.debug("Appending .bbappend file %s to %s", append, fn)
logger.debug(1, "Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
onlyfinalise = d.getVar("__ONLYFINALISE", False)


@@ -13,7 +13,7 @@
#
import re, bb, os
import bb.build, bb.utils, bb.data_smart
import bb.build, bb.utils
from . import ConfHandler
from .. import resolve_file, ast, logger, ParseError
@@ -22,7 +22,7 @@ from .ConfHandler import include, init
# For compatibility
bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$:]+)?\s*\(\s*\)\s*{$" )
__func_start_regexp__ = re.compile(r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
__export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
__addtask_regexp__ = re.compile(r"addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
@@ -60,7 +60,7 @@ def inherit(files, fn, lineno, d):
file = abs_fn
if not file in __inherit_cache:
logger.debug("Inheriting %s (from %s:%d)" % (file, fn, lineno))
logger.debug(1, "Inheriting %s (from %s:%d)" % (file, fn, lineno))
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
include(fn, file, lineno, d, "inherit")
@@ -233,10 +233,6 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if taskexpression.count(word) > 1:
logger.warning("addtask contained multiple '%s' keywords, only one is supported" % word)
# Check and warn for having task with expression as part of task name
for te in taskexpression:
if any( ( "%s_" % keyword ) in te for keyword in bb.data_smart.__setvar_keyword__ ):
raise ParseError("Task name '%s' contains a keyword which is not recommended/supported.\nPlease rename the task not to include the keyword.\n%s" % (te, ("\n".join(map(str, bb.data_smart.__setvar_keyword__)))), fn)
ast.handleAddTask(statements, fn, lineno, m)
return


@@ -20,7 +20,7 @@ from bb.parse import ParseError, resolve_file, ast, logger, handle
__config_regexp__ = re.compile( r"""
^
(?P<exp>export\s+)?
(?P<var>[a-zA-Z0-9\-_+.${}/~:]+?)
(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
@@ -95,7 +95,7 @@ def include_single_file(parentfn, fn, lineno, data, error_out):
if exc.errno == errno.ENOENT:
if error_out:
raise ParseError("Could not %s file %s" % (error_out, fn), parentfn, lineno)
logger.debug2("CONF file '%s' not found", fn)
logger.debug(2, "CONF file '%s' not found", fn)
else:
if error_out:
raise ParseError("Could not %s file %s: %s" % (error_out, fn, exc.strerror), parentfn, lineno)
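
The only delta in the var group above is the ':' added to the character class, presumably so that colon-containing variable names (the newer override syntax, e.g. RDEPENDS:${PN}) get past the config parser; the DataNode/ExportFuncsNode hunks further up, with their replace(":", "_") shims, point the same way. Reduced to the name part only:

import re

with_colon = re.compile(r"^(export\s+)?[a-zA-Z0-9\-_+.${}/~:]+$")
without    = re.compile(r"^(export\s+)?[a-zA-Z0-9\-_+.${}/~]+$")

name = "RDEPENDS:${PN}"
print(bool(with_colon.match(name)))  # True
print(bool(without.match(name)))     # False -- ':' not in the class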


@@ -12,7 +12,6 @@ currently, providing a key/value store accessed by 'domain'.
#
import collections
import collections.abc
import contextlib
import functools
import logging
@@ -20,7 +19,7 @@ import os.path
import sqlite3
import sys
import warnings
from collections.abc import Mapping
from collections import Mapping
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
@@ -30,7 +29,7 @@ if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
logger = logging.getLogger("BitBake.PersistData")
@functools.total_ordering
class SQLTable(collections.abc.MutableMapping):
class SQLTable(collections.MutableMapping):
class _Decorators(object):
@staticmethod
def retry(*, reconnect=True):
@@ -249,7 +248,7 @@ class PersistData(object):
stacklevel=2)
self.data = persist(d)
logger.debug("Using '%s' as the persistent data cache",
logger.debug(1, "Using '%s' as the persistent data cache",
self.data.filename)
def addDomain(self, domain):
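
The collections.Mapping/collections.abc.Mapping pair (and the matching MutableMapping base for SQLTable) is a Python-compatibility fix: the ABC re-exports at the collections top level were deprecated since 3.3 and removed in 3.10, so only the collections.abc spelling keeps working. A minimal sketch of code written against the surviving import:

from collections.abc import Mapping

class ReadOnlyStore(Mapping):
    """The Mapping ABC fills in keys(), items(), get(), __contains__ for free."""
    def __init__(self, data):
        self._data = dict(data)
    def __getitem__(self, key):
        return self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

store = ReadOnlyStore({"domain": "pkgcache"})
print("domain" in store, store.get("missing", "n/a"))  # True n/a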


@@ -60,7 +60,7 @@ class Popen(subprocess.Popen):
"close_fds": True,
"preexec_fn": subprocess_setup,
"stdout": subprocess.PIPE,
"stderr": subprocess.PIPE,
"stderr": subprocess.STDOUT,
"stdin": subprocess.PIPE,
"shell": False,
}
@@ -181,8 +181,5 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
stderr = stderr.decode("utf-8")
if pipe.returncode != 0:
if log:
# Don't duplicate the output in the exception if logging it
raise ExecutionError(cmd, pipe.returncode, None, None)
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
return stdout, stderr
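
The stderr change in the Popen defaults (subprocess.PIPE versus subprocess.STDOUT) decides whether diagnostics are folded into stdout or kept as a separate stream, which is what lets the later run() hunk return and report stderr on its own. A quick illustration of the difference:

import subprocess

cmd = ["sh", "-c", "echo out; echo err >&2"]
merged = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
split = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(merged.stdout)                # b'out\nerr\n' -- streams interleaved
print(split.stdout, split.stderr)   # b'out\n' b'err\n' -- kept apart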


@@ -38,17 +38,16 @@ def findProviders(cfgData, dataCache, pkg_pn = None):
localdata = data.createCopy(cfgData)
bb.data.expandKeys(localdata)
required = {}
preferred_versions = {}
latest_versions = {}
for pn in pkg_pn:
(last_ver, last_file, pref_ver, pref_file, req) = findBestProvider(pn, localdata, dataCache, pkg_pn)
(last_ver, last_file, pref_ver, pref_file) = findBestProvider(pn, localdata, dataCache, pkg_pn)
preferred_versions[pn] = (pref_ver, pref_file)
latest_versions[pn] = (last_ver, last_file)
required[pn] = req
return (latest_versions, preferred_versions, required)
return (latest_versions, preferred_versions)
def allProviders(dataCache):
"""
@@ -60,6 +59,7 @@ def allProviders(dataCache):
all_providers[pn].append((ver, fn))
return all_providers
def sortPriorities(pn, dataCache, pkg_pn = None):
"""
Reorder pkg_pn by file priority and default preference
@@ -87,21 +87,6 @@ def sortPriorities(pn, dataCache, pkg_pn = None):
return tmp_pn
def versionVariableMatch(cfgData, keyword, pn):
"""
Return the value of the <keyword>_VERSION variable if set.
"""
# pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
# hence we do this manually rather than use OVERRIDES
ver = cfgData.getVar("%s_VERSION_pn-%s" % (keyword, pn))
if not ver:
ver = cfgData.getVar("%s_VERSION_%s" % (keyword, pn))
if not ver:
ver = cfgData.getVar("%s_VERSION" % keyword)
return ver
def preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
"""
Check if the version pe,pv,pr is the preferred one.
@@ -117,28 +102,19 @@ def preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
"""
Find the first provider in pkg_pn with REQUIRED_VERSION or PREFERRED_VERSION set.
Find the first provider in pkg_pn with a PREFERRED_VERSION set.
"""
preferred_file = None
preferred_ver = None
required = False
required_v = versionVariableMatch(cfgData, "REQUIRED", pn)
preferred_v = versionVariableMatch(cfgData, "PREFERRED", pn)
itemstr = ""
if item:
itemstr = " (for item %s)" % item
if required_v is not None:
if preferred_v is not None:
logger.warn("REQUIRED_VERSION and PREFERRED_VERSION for package %s%s are both set using REQUIRED_VERSION %s", pn, itemstr, required_v)
else:
logger.debug("REQUIRED_VERSION is set for package %s%s", pn, itemstr)
# REQUIRED_VERSION always takes precedence over PREFERRED_VERSION
preferred_v = required_v
required = True
# pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
# hence we do this manually rather than use OVERRIDES
preferred_v = cfgData.getVar("PREFERRED_VERSION_pn-%s" % pn)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION_%s" % pn)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION")
if preferred_v:
m = re.match(r'(\d+:)*(.*)(_.*)*', preferred_v)
@@ -171,9 +147,11 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
pv_str = preferred_v
if not (preferred_e is None):
pv_str = '%s:%s' % (preferred_e, pv_str)
itemstr = ""
if item:
itemstr = " (for item %s)" % item
if preferred_file is None:
if not required:
logger.warn("preferred version %s of %s not available%s", pv_str, pn, itemstr)
logger.info("preferred version %s of %s not available%s", pv_str, pn, itemstr)
available_vers = []
for file_set in pkg_pn:
for f in file_set:
@@ -185,16 +163,12 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
available_vers.append(ver_str)
if available_vers:
available_vers.sort()
logger.warn("versions of %s available: %s", pn, ' '.join(available_vers))
if required:
logger.error("required version %s of %s not available%s", pv_str, pn, itemstr)
logger.info("versions of %s available: %s", pn, ' '.join(available_vers))
else:
if required:
logger.debug("selecting %s as REQUIRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
else:
logger.debug("selecting %s as PREFERRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
logger.debug(1, "selecting %s as PREFERRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
return (preferred_ver, preferred_file)
return (preferred_ver, preferred_file, required)
def findLatestProvider(pn, cfgData, dataCache, file_set):
"""
@@ -215,6 +189,7 @@ def findLatestProvider(pn, cfgData, dataCache, file_set):
return (latest, latest_f)
def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
"""
If there is a PREFERRED_VERSION, find the highest-priority bbfile
@@ -223,16 +198,17 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
"""
sortpkg_pn = sortPriorities(pn, dataCache, pkg_pn)
# Find the highest priority provider with a REQUIRED_VERSION or PREFERRED_VERSION set
(preferred_ver, preferred_file, required) = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn, item)
# Find the highest priority provider with a PREFERRED_VERSION set
(preferred_ver, preferred_file) = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn, item)
# Find the latest version of the highest priority provider
(latest, latest_f) = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[0])
if not required and preferred_file is None:
if preferred_file is None:
preferred_file = latest_f
preferred_ver = latest
return (latest, latest_f, preferred_ver, preferred_file, required)
return (latest, latest_f, preferred_ver, preferred_file)
def _filterProviders(providers, item, cfgData, dataCache):
"""
@@ -256,15 +232,12 @@ def _filterProviders(providers, item, cfgData, dataCache):
pkg_pn[pn] = []
pkg_pn[pn].append(p)
logger.debug("providers for %s are: %s", item, list(sorted(pkg_pn.keys())))
logger.debug(1, "providers for %s are: %s", item, list(sorted(pkg_pn.keys())))
# First add REQUIRED_VERSIONS or PREFERRED_VERSIONS
# First add PREFERRED_VERSIONS
for pn in sorted(pkg_pn):
sortpkg_pn[pn] = sortPriorities(pn, dataCache, pkg_pn)
preferred_ver, preferred_file, required = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn[pn], item)
if required and preferred_file is None:
return eligible
preferred_versions[pn] = (preferred_ver, preferred_file)
preferred_versions[pn] = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn[pn], item)
if preferred_versions[pn][1]:
eligible.append(preferred_versions[pn][1])
@@ -275,8 +248,9 @@ def _filterProviders(providers, item, cfgData, dataCache):
preferred_versions[pn] = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[pn][0])
eligible.append(preferred_versions[pn][1])
if not eligible:
return eligible
if len(eligible) == 0:
logger.error("no eligible providers for %s", item)
return 0
# If pn == item, give it a slight default preference
# This means PREFERRED_PROVIDER_foobar defaults to foobar if available
@@ -292,6 +266,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
return eligible
def filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
@@ -316,7 +291,7 @@ def filterProviders(providers, item, cfgData, dataCache):
foundUnique = True
break
logger.debug("sorted providers for %s are: %s", item, eligible)
logger.debug(1, "sorted providers for %s are: %s", item, eligible)
return eligible, foundUnique
@@ -358,7 +333,7 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide)
#logger.debug("checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
#logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
@@ -374,7 +349,7 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
if numberPreferred > 1:
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s. You could set PREFERRED_RPROVIDER_%s" % (item, preferred, preferred_vars, item))
logger.debug("sorted runtime providers for %s are: %s", item, eligible)
logger.debug(1, "sorted runtime providers for %s are: %s", item, eligible)
return eligible, numberPreferred
@@ -409,10 +384,11 @@ def getRuntimeProviders(dataCache, rdepend):
regexp_cache[pattern] = regexp
if regexp.match(rdepend):
rproviders += dataCache.packages_dynamic[pattern]
logger.debug("Assuming %s is a dynamic package, but it may not exist" % rdepend)
logger.debug(1, "Assuming %s is a dynamic package, but it may not exist" % rdepend)
return rproviders
def buildWorldTargetList(dataCache, task=None):
"""
Build package list for "bitbake world"
@@ -420,22 +396,22 @@ def buildWorldTargetList(dataCache, task=None):
if dataCache.world_target:
return
logger.debug("collating packages for \"world\"")
logger.debug(1, "collating packages for \"world\"")
for f in dataCache.possible_world:
terminal = True
pn = dataCache.pkg_fn[f]
if task and task not in dataCache.task_deps[f]['tasks']:
logger.debug2("World build skipping %s as task %s doesn't exist", f, task)
logger.debug(2, "World build skipping %s as task %s doesn't exist", f, task)
terminal = False
for p in dataCache.pn_provides[pn]:
if p.startswith('virtual/'):
logger.debug2("World build skipping %s due to %s provider starting with virtual/", f, p)
logger.debug(2, "World build skipping %s due to %s provider starting with virtual/", f, p)
terminal = False
break
for pf in dataCache.providers[p]:
if dataCache.pkg_fn[pf] != pn:
logger.debug2("World build skipping %s due to both us and %s providing %s", f, pf, p)
logger.debug(2, "World build skipping %s due to both us and %s providing %s", f, pf, p)
terminal = False
break
if terminal:
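
Both REQUIRED_VERSION and PREFERRED_VERSION resolve through the same three-step lookup shown in versionVariableMatch above: the pn-override form first, then the name-suffixed form, then the bare variable. A toy version with a plain dict standing in for the datastore:

def version_variable_match(cfg, keyword, pn):
    # pn can contain '_' (e.g. gcc-cross-x86_64), so the lookup is done
    # manually rather than via OVERRIDES -- same reasoning as the comment above.
    for key in ("%s_VERSION_pn-%s" % (keyword, pn),
                "%s_VERSION_%s" % (keyword, pn),
                "%s_VERSION" % keyword):
        if cfg.get(key):
            return cfg[key]
    return None

cfg = {"PREFERRED_VERSION_pn-gcc": "11.%", "REQUIRED_VERSION_glibc": "2.31"}
print(version_variable_match(cfg, "PREFERRED", "gcc"))   # 11.%
print(version_variable_match(cfg, "REQUIRED", "glibc"))  # 2.31
print(version_variable_match(cfg, "REQUIRED", "gcc"))    # None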


@@ -38,7 +38,7 @@ def taskname_from_tid(tid):
return tid.rsplit(":", 1)[1]
def mc_from_tid(tid):
if tid.startswith('mc:') and tid.count(':') >= 2:
if tid.startswith('mc:'):
return tid.split(':')[1]
return ""
@@ -47,13 +47,13 @@ def split_tid(tid):
return (mc, fn, taskname)
def split_mc(n):
if n.startswith("mc:") and n.count(':') >= 2:
if n.startswith("mc:"):
_, mc, n = n.split(":", 2)
return (mc, n)
return ('', n)
def split_tid_mcfn(tid):
if tid.startswith('mc:') and tid.count(':') >= 2:
if tid.startswith('mc:'):
elems = tid.split(':')
mc = elems[1]
fn = ":".join(elems[2:-1])
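
The added count(':') >= 2 guard matters because the unguarded three-way unpack would raise ValueError on a string that starts with 'mc:' but carries no second colon. A quick check of the guarded helper:

def split_mc(n):
    if n.startswith("mc:") and n.count(':') >= 2:
        _, mc, n = n.split(":", 2)
        return (mc, n)
    return ('', n)

print(split_mc("mc:qemuarm:/recipes/gcc.bb:do_compile"))
# ('qemuarm', '/recipes/gcc.bb:do_compile')
print(split_mc("mc:oddball"))   # ('', 'mc:oddball') -- guard avoids ValueError
print(split_mc("plain.bb"))     # ('', 'plain.bb')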
@@ -85,19 +85,15 @@ class RunQueueStats:
"""
Holds statistics on the tasks handled by the associated runQueue
"""
def __init__(self, total, setscene_total):
def __init__(self, total):
self.completed = 0
self.skipped = 0
self.failed = 0
self.active = 0
self.setscene_active = 0
self.setscene_covered = 0
self.setscene_notcovered = 0
self.setscene_total = setscene_total
self.total = total
def copy(self):
obj = self.__class__(self.total, self.setscene_total)
obj = self.__class__(self.total)
obj.__dict__.update(self.__dict__)
return obj
@@ -116,13 +112,6 @@ class RunQueueStats:
def taskActive(self):
self.active = self.active + 1
def updateCovered(self, covered, notcovered):
self.setscene_covered = covered
self.setscene_notcovered = notcovered
def updateActiveSetscene(self, active):
self.setscene_active = active
# These values indicate the next step due to be run in the
# runQueue state machine
runQueuePrepare = 2
@@ -555,8 +544,8 @@ class RunQueueData:
for tid in self.runtaskentries:
if task_done[tid] is False or deps_left[tid] != 0:
problem_tasks.append(tid)
logger.debug2("Task %s is not buildable", tid)
logger.debug2("(Complete marker was %s and the remaining dependency count was %s)\n", task_done[tid], deps_left[tid])
logger.debug(2, "Task %s is not buildable", tid)
logger.debug(2, "(Complete marker was %s and the remaining dependency count was %s)\n", task_done[tid], deps_left[tid])
self.runtaskentries[tid].weight = weight[tid]
if problem_tasks:
@@ -654,7 +643,7 @@ class RunQueueData:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
#runtid = build_tid(mc, fn, taskname)
#logger.debug2("Processing %s,%s:%s", mc, fn, taskname)
#logger.debug(2, "Processing %s,%s:%s", mc, fn, taskname)
depends = set()
task_deps = self.dataCaches[mc].task_deps[taskfn]
@@ -926,36 +915,38 @@ class RunQueueData:
#
# Once all active tasks are marked, prune the ones we don't need.
delcount = {}
for tid in list(self.runtaskentries.keys()):
if tid not in runq_build:
delcount[tid] = self.runtaskentries[tid]
del self.runtaskentries[tid]
# Handle --runall
if self.cooker.configuration.runall:
# re-run the mark_active and then drop unused tasks from new list
reduced_tasklist = set(self.runtaskentries.keys())
for tid in list(self.runtaskentries.keys()):
if tid not in runq_build:
reduced_tasklist.remove(tid)
runq_build = {}
for task in self.cooker.configuration.runall:
if not task.startswith("do_"):
task = "do_{0}".format(task)
runall_tids = set()
for tid in reduced_tasklist:
for tid in list(self.runtaskentries):
wanttid = "{0}:{1}".format(fn_from_tid(tid), task)
if wanttid in delcount:
self.runtaskentries[wanttid] = delcount[wanttid]
if wanttid in self.runtaskentries:
runall_tids.add(wanttid)
for tid in list(runall_tids):
mark_active(tid, 1)
mark_active(tid,1)
if self.cooker.configuration.force:
invalidate_task(tid, False)
delcount = set()
for tid in list(self.runtaskentries.keys()):
if tid not in runq_build:
delcount.add(tid)
del self.runtaskentries[tid]
for tid in list(self.runtaskentries.keys()):
if tid not in runq_build:
delcount[tid] = self.runtaskentries[tid]
del self.runtaskentries[tid]
if self.cooker.configuration.runall:
if len(self.runtaskentries) == 0:
bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the recipes of the taskgraphs of the targets %s" % (str(self.cooker.configuration.runall), str(self.targets)))
@@ -969,16 +960,16 @@ class RunQueueData:
for task in self.cooker.configuration.runonly:
if not task.startswith("do_"):
task = "do_{0}".format(task)
runonly_tids = [k for k in self.runtaskentries.keys() if taskname_from_tid(k) == task]
runonly_tids = { k: v for k, v in self.runtaskentries.items() if taskname_from_tid(k) == task }
for tid in runonly_tids:
mark_active(tid, 1)
for tid in list(runonly_tids):
mark_active(tid,1)
if self.cooker.configuration.force:
invalidate_task(tid, False)
for tid in list(self.runtaskentries.keys()):
if tid not in runq_build:
delcount.add(tid)
delcount[tid] = self.runtaskentries[tid]
del self.runtaskentries[tid]
if len(self.runtaskentries) == 0:
@@ -1208,9 +1199,9 @@ class RunQueueData:
"""
Dump some debug information on the internal data structures
"""
logger.debug3("run_tasks:")
logger.debug(3, "run_tasks:")
for tid in self.runtaskentries:
logger.debug3(" %s: %s Deps %s RevDeps %s", tid,
logger.debug(3, " %s: %s Deps %s RevDeps %s", tid,
self.runtaskentries[tid].weight,
self.runtaskentries[tid].depends,
self.runtaskentries[tid].revdeps)
@@ -1247,11 +1238,10 @@ class RunQueue:
self.fakeworker = {}
def _start_worker(self, mc, fakeroot = False, rqexec = None):
logger.debug("Starting bitbake-worker")
logger.debug(1, "Starting bitbake-worker")
magic = "decafbad"
if self.cooker.configuration.profile:
magic = "decafbadbad"
fakerootlogs = None
if fakeroot:
magic = magic + "beef"
mcdata = self.cooker.databuilder.mcdata[mc]
@@ -1261,11 +1251,10 @@ class RunQueue:
for key, value in (var.split('=') for var in fakerootenv):
env[key] = value
worker = subprocess.Popen(fakerootcmd + ["bitbake-worker", magic], stdout=subprocess.PIPE, stdin=subprocess.PIPE, env=env)
fakerootlogs = self.rqdata.dataCaches[mc].fakerootlogs
else:
worker = subprocess.Popen(["bitbake-worker", magic], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
bb.utils.nonblockingfd(worker.stdout)
workerpipe = runQueuePipe(worker.stdout, None, self.cfgData, self, rqexec, fakerootlogs=fakerootlogs)
workerpipe = runQueuePipe(worker.stdout, None, self.cfgData, self, rqexec)
workerdata = {
"taskdeps" : self.rqdata.dataCaches[mc].task_deps,
@@ -1282,7 +1271,6 @@ class RunQueue:
"date" : self.cfgData.getVar("DATE"),
"time" : self.cfgData.getVar("TIME"),
"hashservaddr" : self.cooker.hashservaddr,
"umask" : self.cfgData.getVar("BB_DEFAULT_UMASK"),
}
worker.stdin.write(b"<cookerconfig>" + pickle.dumps(self.cooker.configuration) + b"</cookerconfig>")
@@ -1295,7 +1283,7 @@ class RunQueue:
def _teardown_worker(self, worker):
if not worker:
return
logger.debug("Teardown for bitbake-worker")
logger.debug(1, "Teardown for bitbake-worker")
try:
worker.process.stdin.write(b"<quit></quit>")
worker.process.stdin.flush()
@@ -1368,12 +1356,12 @@ class RunQueue:
# If the stamp is missing, it's not current
if not os.access(stampfile, os.F_OK):
logger.debug2("Stampfile %s not available", stampfile)
logger.debug(2, "Stampfile %s not available", stampfile)
return False
# If it's a 'nostamp' task, it's not current
taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
if 'nostamp' in taskdep and taskname in taskdep['nostamp']:
logger.debug2("%s.%s is nostamp\n", fn, taskname)
logger.debug(2, "%s.%s is nostamp\n", fn, taskname)
return False
if taskname != "do_setscene" and taskname.endswith("_setscene"):
@@ -1397,18 +1385,18 @@ class RunQueue:
continue
if fn == fn2 or (fulldeptree and fn2 not in stampwhitelist):
if not t2:
logger.debug2('Stampfile %s does not exist', stampfile2)
logger.debug(2, 'Stampfile %s does not exist', stampfile2)
iscurrent = False
break
if t1 < t2:
logger.debug2('Stampfile %s < %s', stampfile, stampfile2)
logger.debug(2, 'Stampfile %s < %s', stampfile, stampfile2)
iscurrent = False
break
if recurse and iscurrent:
if dep in cache:
iscurrent = cache[dep]
if not iscurrent:
logger.debug2('Stampfile for dependency %s:%s invalid (cached)' % (fn2, taskname2))
logger.debug(2, 'Stampfile for dependency %s:%s invalid (cached)' % (fn2, taskname2))
else:
iscurrent = self.check_stamp_task(dep, recurse=True, cache=cache)
cache[dep] = iscurrent
@@ -1478,7 +1466,7 @@ class RunQueue:
if not self.dm_event_handler_registered:
res = bb.event.register(self.dm_event_handler_name,
lambda x: self.dm.check(self) if self.state in [runQueueRunning, runQueueCleanUp] else False,
('bb.event.HeartbeatEvent',), data=self.cfgData)
('bb.event.HeartbeatEvent',))
self.dm_event_handler_registered = True
dump = self.cooker.configuration.dump_signatures
@@ -1517,7 +1505,7 @@ class RunQueue:
build_done = self.state is runQueueComplete or self.state is runQueueFailed
if build_done and self.dm_event_handler_registered:
bb.event.remove(self.dm_event_handler_name, None, data=self.cfgData)
bb.event.remove(self.dm_event_handler_name, None)
self.dm_event_handler_registered = False
if build_done and self.rqexe:
@@ -1744,7 +1732,8 @@ class RunQueueExecute:
self.holdoff_need_update = True
self.sqdone = False
self.stats = RunQueueStats(len(self.rqdata.runtaskentries), len(self.rqdata.runq_setscene_tids))
self.stats = RunQueueStats(len(self.rqdata.runtaskentries))
self.sq_stats = RunQueueStats(len(self.rqdata.runq_setscene_tids))
for mc in rq.worker:
rq.worker[mc].pipe.setrunqueueexec(self)
@@ -1772,7 +1761,7 @@ class RunQueueExecute:
for scheduler in schedulers:
if self.scheduler == scheduler.name:
self.sched = scheduler(self, self.rqdata)
logger.debug("Using runqueue scheduler '%s'", scheduler.name)
logger.debug(1, "Using runqueue scheduler '%s'", scheduler.name)
break
else:
bb.fatal("Invalid scheduler '%s'. Available schedulers: %s" %
@@ -1782,7 +1771,7 @@ class RunQueueExecute:
self.sqdata = SQData()
build_scenequeue_data(self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self)
def runqueue_process_waitpid(self, task, status, fakerootlog=None):
def runqueue_process_waitpid(self, task, status):
# self.build_stamps[pid] may not exist when use shared work directory.
if task in self.build_stamps:
@@ -1795,10 +1784,9 @@ class RunQueueExecute:
else:
self.sq_task_complete(task)
self.sq_live.remove(task)
self.stats.updateActiveSetscene(len(self.sq_live))
else:
if status != 0:
self.task_fail(task, status, fakerootlog=fakerootlog)
self.task_fail(task, status)
else:
self.task_complete(task)
return True
@@ -1829,7 +1817,7 @@ class RunQueueExecute:
def finish(self):
self.rq.state = runQueueCleanUp
active = self.stats.active + len(self.sq_live)
active = self.stats.active + self.sq_stats.active
if active > 0:
bb.event.fire(runQueueExitWait(active), self.cfgData)
self.rq.read_workers()
@@ -1862,7 +1850,7 @@ class RunQueueExecute:
return valid
def can_start_task(self):
active = self.stats.active + len(self.sq_live)
active = self.stats.active + self.sq_stats.active
can_start = active < self.number_tasks
return can_start
@@ -1911,13 +1899,7 @@ class RunQueueExecute:
break
if alldeps:
self.setbuildable(revdep)
logger.debug("Marking task %s as buildable", revdep)
for t in self.sq_deferred.copy():
if self.sq_deferred[t] == task:
logger.debug2("Deferred task %s now buildable" % t)
del self.sq_deferred[t]
update_scenequeue_data([t], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
logger.debug(1, "Marking task %s as buildable", revdep)
def task_complete(self, task):
self.stats.taskCompleted()
@@ -1925,31 +1907,14 @@ class RunQueueExecute:
self.task_completeoutright(task)
self.runq_tasksrun.add(task)
def task_fail(self, task, exitcode, fakerootlog=None):
def task_fail(self, task, exitcode):
"""
Called when a task has failed
Updates the state engine with the failure
"""
self.stats.taskFailed()
self.failed_tids.append(task)
fakeroot_log = ""
if fakerootlog and os.path.exists(fakerootlog):
with open(fakerootlog) as fakeroot_log_file:
fakeroot_failed = False
for line in reversed(fakeroot_log_file.readlines()):
for fakeroot_error in ['mismatch', 'error', 'fatal']:
if fakeroot_error in line.lower():
fakeroot_failed = True
if 'doing new pid setup and server start' in line:
break
fakeroot_log = line + fakeroot_log
if not fakeroot_failed:
fakeroot_log = None
bb.event.fire(runQueueTaskFailed(task, self.stats, exitcode, self.rq, fakeroot_log=fakeroot_log), self.cfgData)
bb.event.fire(runQueueTaskFailed(task, self.stats, exitcode, self.rq), self.cfgData)
if self.rqdata.taskData[''].abort:
self.rq.state = runQueueCleanUp
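
The task_fail branch above walks the pseudo log in reverse, accumulating lines back to the most recent 'doing new pid setup and server start' marker and attaching the excerpt only if one of the failure keywords appeared. Pulled out as a standalone helper (marker and keyword strings copied from the hunk):

def extract_pseudo_failure(logpath):
    excerpt = ""
    failed = False
    with open(logpath) as f:
        for line in reversed(f.readlines()):
            if any(kw in line.lower() for kw in ("mismatch", "error", "fatal")):
                failed = True
            if "doing new pid setup and server start" in line:
                break  # stop at the last server (re)start; marker line excluded
            excerpt = line + excerpt
    return excerpt if failed else None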
@@ -1964,8 +1929,8 @@ class RunQueueExecute:
def summarise_scenequeue_errors(self):
err = False
if not self.sqdone:
logger.debug('We could skip tasks %s', "\n".join(sorted(self.scenequeue_covered)))
completeevent = sceneQueueComplete(self.stats, self.rq)
logger.debug(1, 'We could skip tasks %s', "\n".join(sorted(self.scenequeue_covered)))
completeevent = sceneQueueComplete(self.sq_stats, self.rq)
bb.event.fire(completeevent, self.cfgData)
if self.sq_deferred:
logger.error("Scenequeue had deferred entries: %s" % pprint.pformat(self.sq_deferred))
@@ -1977,10 +1942,6 @@ class RunQueueExecute:
logger.error("Scenequeue had holdoff tasks: %s" % pprint.pformat(self.holdoff_tasks))
err = True
for tid in self.scenequeue_covered.intersection(self.scenequeue_notcovered):
# No task should end up in both covered and uncovered, that is a bug.
logger.error("Setscene task %s in both covered and notcovered." % tid)
for tid in self.rqdata.runq_setscene_tids:
if tid not in self.scenequeue_covered and tid not in self.scenequeue_notcovered:
err = True
@@ -2025,7 +1986,7 @@ class RunQueueExecute:
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values():
if nexttask not in self.sqdata.unskippable and len(self.sqdata.sq_revdeps[nexttask]) > 0 and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
if nexttask not in self.rqdata.target_tids:
logger.debug2("Skipping setscene for task %s" % nexttask)
logger.debug(2, "Skipping setscene for task %s" % nexttask)
self.sq_task_skip(nexttask)
self.scenequeue_notneeded.add(nexttask)
if nexttask in self.sq_deferred:
@@ -2038,26 +1999,28 @@ class RunQueueExecute:
if nexttask in self.sq_deferred:
if self.sq_deferred[nexttask] not in self.runq_complete:
continue
logger.debug("Task %s no longer deferred" % nexttask)
logger.debug(1, "Task %s no longer deferred" % nexttask)
del self.sq_deferred[nexttask]
valid = self.rq.validate_hashes(set([nexttask]), self.cooker.data, 0, False, summary=False)
if not valid:
logger.debug("%s didn't become valid, skipping setscene" % nexttask)
logger.debug(1, "%s didn't become valid, skipping setscene" % nexttask)
self.sq_task_failoutright(nexttask)
return True
else:
self.sqdata.outrightfail.remove(nexttask)
if nexttask in self.sqdata.outrightfail:
logger.debug2('No package found, so skipping setscene task %s', nexttask)
logger.debug(2, 'No package found, so skipping setscene task %s', nexttask)
self.sq_task_failoutright(nexttask)
return True
if nexttask in self.sqdata.unskippable:
logger.debug2("Setscene task %s is unskippable" % nexttask)
logger.debug(2, "Setscene task %s is unskippable" % nexttask)
task = nexttask
break
if task is not None:
(mc, fn, taskname, taskfn) = split_tid_mcfn(task)
taskname = taskname + "_setscene"
if self.rq.check_stamp_task(task, taskname_from_tid(task), recurse = True, cache=self.stampcache):
logger.debug2('Stamp for underlying task %s is current, so skipping setscene variant', task)
logger.debug(2, 'Stamp for underlying task %s is current, so skipping setscene variant', task)
self.sq_task_failoutright(task)
return True
@@ -2067,16 +2030,16 @@ class RunQueueExecute:
return True
if self.rq.check_stamp_task(task, taskname, cache=self.stampcache):
logger.debug2('Setscene stamp current task %s, so skip it and its dependencies', task)
logger.debug(2, 'Setscene stamp current task %s, so skip it and its dependencies', task)
self.sq_task_skip(task)
return True
if self.cooker.configuration.skipsetscene:
logger.debug2('No setscene tasks should be executed. Skipping %s', task)
logger.debug(2, 'No setscene tasks should be executed. Skipping %s', task)
self.sq_task_failoutright(task)
return True
startevent = sceneQueueTaskStarted(task, self.stats, self.rq)
startevent = sceneQueueTaskStarted(task, self.sq_stats, self.rq)
bb.event.fire(startevent, self.cfgData)
taskdepdata = self.sq_build_taskdepdata(task)
@@ -2097,7 +2060,7 @@ class RunQueueExecute:
self.build_stamps2.append(self.build_stamps[task])
self.sq_running.add(task)
self.sq_live.add(task)
self.stats.updateActiveSetscene(len(self.sq_live))
self.sq_stats.taskActive()
if self.can_start_task():
return True
@@ -2134,12 +2097,12 @@ class RunQueueExecute:
return True
if task in self.tasks_covered:
logger.debug2("Setscene covered task %s", task)
logger.debug(2, "Setscene covered task %s", task)
self.task_skip(task, "covered")
return True
if self.rq.check_stamp_task(task, taskname, cache=self.stampcache):
logger.debug2("Stamp current task %s", task)
logger.debug(2, "Stamp current task %s", task)
self.task_skip(task, "existing")
self.runq_tasksrun.add(task)
@@ -2187,7 +2150,7 @@ class RunQueueExecute:
if self.can_start_task():
return True
if self.stats.active > 0 or len(self.sq_live) > 0:
if self.stats.active > 0 or self.sq_stats.active > 0:
self.rq.read_workers()
return self.rq.active_fds()
@@ -2195,8 +2158,7 @@ class RunQueueExecute:
if self.sq_deferred:
tid = self.sq_deferred.pop(list(self.sq_deferred.keys())[0])
logger.warning("Runqueue deadlocked on deferred tasks, forcing task %s" % tid)
if tid not in self.runq_complete:
self.sq_task_failoutright(tid)
self.sq_task_failoutright(tid)
return True
if len(self.failed_tids) != 0:
@@ -2310,16 +2272,10 @@ class RunQueueExecute:
self.updated_taskhash_queue.remove((tid, unihash))
if unihash != self.rqdata.runtaskentries[tid].unihash:
# Make sure we rehash any other tasks with the same task hash that we're deferred against.
torehash = [tid]
for deftid in self.sq_deferred:
if self.sq_deferred[deftid] == tid:
torehash.append(deftid)
for hashtid in torehash:
hashequiv_logger.verbose("Task %s unihash changed to %s" % (hashtid, unihash))
self.rqdata.runtaskentries[hashtid].unihash = unihash
bb.parse.siggen.set_unihash(hashtid, unihash)
toprocess.add(hashtid)
hashequiv_logger.verbose("Task %s unihash changed to %s" % (tid, unihash))
self.rqdata.runtaskentries[tid].unihash = unihash
bb.parse.siggen.set_unihash(tid, unihash)
toprocess.add(tid)
# Work out all tasks which depend upon these
total = set()
@@ -2366,7 +2322,7 @@ class RunQueueExecute:
remapped = True
if not remapped:
#logger.debug("Task %s hash changes: %s->%s %s->%s" % (tid, orighash, newhash, origuni, newuni))
#logger.debug(1, "Task %s hash changes: %s->%s %s->%s" % (tid, orighash, newhash, origuni, newuni))
self.rqdata.runtaskentries[tid].hash = newhash
self.rqdata.runtaskentries[tid].unihash = newuni
changed.add(tid)
@@ -2381,7 +2337,7 @@ class RunQueueExecute:
for mc in self.rq.fakeworker:
self.rq.fakeworker[mc].process.stdin.write(b"<newtaskhashes>" + pickle.dumps(bb.parse.siggen.get_taskhashes()) + b"</newtaskhashes>")
hashequiv_logger.debug(pprint.pformat("Tasks changed:\n%s" % (changed)))
hashequiv_logger.debug(1, pprint.pformat("Tasks changed:\n%s" % (changed)))
for tid in changed:
if tid not in self.rqdata.runq_setscene_tids:
@@ -2400,7 +2356,7 @@ class RunQueueExecute:
# Check no tasks this covers are running
for dep in self.sqdata.sq_covered_tasks[tid]:
if dep in self.runq_running and dep not in self.runq_complete:
hashequiv_logger.debug2("Task %s is running which blocks setscene for %s from running" % (dep, tid))
hashequiv_logger.debug(2, "Task %s is running which blocks setscene for %s from running" % (dep, tid))
valid = False
break
if not valid:
@@ -2459,11 +2415,6 @@ class RunQueueExecute:
if update_tasks:
self.sqdone = False
for tid in [t[0] for t in update_tasks]:
h = pending_hash_index(tid, self.rqdata)
if h in self.sqdata.hashes and tid != self.sqdata.hashes[h]:
self.sq_deferred[tid] = self.sqdata.hashes[h]
bb.note("Deferring %s after %s" % (tid, self.sqdata.hashes[h]))
update_scenequeue_data([t[0] for t in update_tasks], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
for (tid, harddepfail, origvalid) in update_tasks:
@@ -2473,20 +2424,13 @@ class RunQueueExecute:
self.sq_task_failoutright(tid)
if changed:
self.stats.updateCovered(len(self.scenequeue_covered), len(self.scenequeue_notcovered))
self.holdoff_need_update = True
def scenequeue_updatecounters(self, task, fail=False):
for dep in sorted(self.sqdata.sq_deps[task]):
if fail and task in self.sqdata.sq_harddeps and dep in self.sqdata.sq_harddeps[task]:
if dep in self.scenequeue_covered or dep in self.scenequeue_notcovered:
# dependency could be already processed, e.g. noexec setscene task
continue
noexec, stamppresent = check_setscene_stamps(dep, self.rqdata, self.rq, self.stampcache)
if noexec or stamppresent:
continue
logger.debug2("%s was unavailable and is a hard dependency of %s so skipping" % (task, dep))
logger.debug(2, "%s was unavailable and is a hard dependency of %s so skipping" % (task, dep))
self.sq_task_failoutright(dep)
continue
if self.sqdata.sq_revdeps[dep].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
@@ -2507,7 +2451,6 @@ class RunQueueExecute:
new.add(dep)
next = new
self.stats.updateCovered(len(self.scenequeue_covered), len(self.scenequeue_notcovered))
self.holdoff_need_update = True
def sq_task_completeoutright(self, task):
@@ -2517,7 +2460,7 @@ class RunQueueExecute:
completed dependencies as buildable
"""
logger.debug('Found task %s which could be accelerated', task)
logger.debug(1, 'Found task %s which could be accelerated', task)
self.scenequeue_covered.add(task)
self.scenequeue_updatecounters(task)
@@ -2531,11 +2474,13 @@ class RunQueueExecute:
self.rq.state = runQueueCleanUp
def sq_task_complete(self, task):
bb.event.fire(sceneQueueTaskCompleted(task, self.stats, self.rq), self.cfgData)
self.sq_stats.taskCompleted()
bb.event.fire(sceneQueueTaskCompleted(task, self.sq_stats, self.rq), self.cfgData)
self.sq_task_completeoutright(task)
def sq_task_fail(self, task, result):
bb.event.fire(sceneQueueTaskFailed(task, self.stats, result, self), self.cfgData)
self.sq_stats.taskFailed()
bb.event.fire(sceneQueueTaskFailed(task, self.sq_stats, result, self), self.cfgData)
self.scenequeue_notcovered.add(task)
self.scenequeue_updatecounters(task, True)
self.sq_check_taskfail(task)
@@ -2543,6 +2488,8 @@ class RunQueueExecute:
def sq_task_failoutright(self, task):
self.sq_running.add(task)
self.sq_buildable.add(task)
self.sq_stats.taskSkipped()
self.sq_stats.taskCompleted()
self.scenequeue_notcovered.add(task)
self.scenequeue_updatecounters(task, True)
@@ -2550,6 +2497,8 @@ class RunQueueExecute:
self.sq_running.add(task)
self.sq_buildable.add(task)
self.sq_task_completeoutright(task)
self.sq_stats.taskSkipped()
self.sq_stats.taskCompleted()
def sq_build_taskdepdata(self, task):
def getsetscenedeps(tid):
@@ -2803,55 +2752,8 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
sqdata.stamppresent = set()
sqdata.valid = set()
sqdata.hashes = {}
sqrq.sq_deferred = {}
for mc in sorted(sqdata.multiconfigs):
for tid in sorted(sqdata.sq_revdeps):
if mc_from_tid(tid) != mc:
continue
h = pending_hash_index(tid, rqdata)
if h not in sqdata.hashes:
sqdata.hashes[h] = tid
else:
sqrq.sq_deferred[tid] = sqdata.hashes[h]
bb.note("Deferring %s after %s" % (tid, sqdata.hashes[h]))
update_scenequeue_data(sqdata.sq_revdeps, sqdata, rqdata, rq, cooker, stampcache, sqrq, summary=True)
# Compute a list of 'stale' sstate tasks where the current hash does not match the one
# in any stamp files. Pass the list out to metadata as an event.
found = {}
for tid in rqdata.runq_setscene_tids:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
stamps = bb.build.find_stale_stamps(taskname, rqdata.dataCaches[mc], taskfn)
if stamps:
if mc not in found:
found[mc] = {}
found[mc][tid] = stamps
for mc in found:
event = bb.event.StaleSetSceneTasks(found[mc])
bb.event.fire(event, cooker.databuilder.mcdata[mc])
def check_setscene_stamps(tid, rqdata, rq, stampcache, noexecstamp=False):
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
taskdep = rqdata.dataCaches[mc].task_deps[taskfn]
if 'noexec' in taskdep and taskname in taskdep['noexec']:
bb.build.make_stamp(taskname + "_setscene", rqdata.dataCaches[mc], taskfn)
return True, False
if rq.check_stamp_task(tid, taskname + "_setscene", cache=stampcache):
logger.debug2('Setscene stamp current for task %s', tid)
return False, True
if rq.check_stamp_task(tid, taskname, recurse = True, cache=stampcache):
logger.debug2('Normal stamp current for task %s', tid)
return False, True
return False, False
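check_setscene_stamps() folds three stamp states into a (noexec, stamppresent) pair; callers skip the task unless both are False. The three outcomes, as a runnable sketch needing no BitBake state:

    outcomes = [
        (True,  False, "noexec task; setscene stamp just written"),
        (False, True,  "setscene or normal stamp already valid"),
        (False, False, "no usable stamp; setscene task must run"),
    ]
    for noexec, stamppresent, why in outcomes:
        action = "skip" if (noexec or stamppresent) else "run"
        print(action, "-", why)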
def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, summary=True):
tocheck = set()
@@ -2861,17 +2763,25 @@ def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, s
sqdata.stamppresent.remove(tid)
if tid in sqdata.valid:
sqdata.valid.remove(tid)
if tid in sqdata.outrightfail:
sqdata.outrightfail.remove(tid)
noexec, stamppresent = check_setscene_stamps(tid, rqdata, rq, stampcache, noexecstamp=True)
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
if noexec:
taskdep = rqdata.dataCaches[mc].task_deps[taskfn]
if 'noexec' in taskdep and taskname in taskdep['noexec']:
sqdata.noexec.add(tid)
sqrq.sq_task_skip(tid)
bb.build.make_stamp(taskname + "_setscene", rqdata.dataCaches[mc], taskfn)
continue
if rq.check_stamp_task(tid, taskname + "_setscene", cache=stampcache):
logger.debug(2, 'Setscene stamp current for task %s', tid)
sqdata.stamppresent.add(tid)
sqrq.sq_task_skip(tid)
continue
if stamppresent:
if rq.check_stamp_task(tid, taskname, recurse = True, cache=stampcache):
logger.debug(2, 'Normal stamp current for task %s', tid)
sqdata.stamppresent.add(tid)
sqrq.sq_task_skip(tid)
continue
@@ -2880,20 +2790,28 @@ def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, s
sqdata.valid |= rq.validate_hashes(tocheck, cooker.data, len(sqdata.stamppresent), False, summary=summary)
for tid in tids:
if tid in sqdata.stamppresent:
continue
if tid in sqdata.valid:
continue
if tid in sqdata.noexec:
continue
if tid in sqrq.scenequeue_covered:
continue
if tid in sqrq.scenequeue_notcovered:
continue
if tid in sqrq.sq_deferred:
continue
sqdata.outrightfail.add(tid)
sqdata.hashes = {}
for mc in sorted(sqdata.multiconfigs):
for tid in sorted(sqdata.sq_revdeps):
if mc_from_tid(tid) != mc:
continue
if tid in sqdata.stamppresent:
continue
if tid in sqdata.valid:
continue
if tid in sqdata.noexec:
continue
if tid in sqrq.scenequeue_notcovered:
continue
sqdata.outrightfail.add(tid)
h = pending_hash_index(tid, rqdata)
if h not in sqdata.hashes:
sqdata.hashes[h] = tid
else:
sqrq.sq_deferred[tid] = sqdata.hashes[h]
bb.note("Deferring %s after %s" % (tid, sqdata.hashes[h]))
class TaskFailure(Exception):
"""
@@ -2957,16 +2875,12 @@ class runQueueTaskFailed(runQueueEvent):
"""
Event notifying a task failed
"""
def __init__(self, task, stats, exitcode, rq, fakeroot_log=None):
def __init__(self, task, stats, exitcode, rq):
runQueueEvent.__init__(self, task, stats, rq)
self.exitcode = exitcode
self.fakeroot_log = fakeroot_log
def __str__(self):
if self.fakeroot_log:
return "Task (%s) failed with exit code '%s' \nPseudo log:\n%s" % (self.taskstring, self.exitcode, self.fakeroot_log)
else:
return "Task (%s) failed with exit code '%s'" % (self.taskstring, self.exitcode)
return "Task (%s) failed with exit code '%s'" % (self.taskstring, self.exitcode)
class sceneQueueTaskFailed(sceneQueueEvent):
"""
@@ -3018,7 +2932,7 @@ class runQueuePipe():
"""
Abstraction for a pipe between a worker thread and the server
"""
def __init__(self, pipein, pipeout, d, rq, rqexec, fakerootlogs=None):
def __init__(self, pipein, pipeout, d, rq, rqexec):
self.input = pipein
if pipeout:
pipeout.close()
@@ -3027,7 +2941,6 @@ class runQueuePipe():
self.d = d
self.rq = rq
self.rqexec = rqexec
self.fakerootlogs = fakerootlogs
def setrunqueueexec(self, rqexec):
self.rqexec = rqexec
@@ -3073,11 +2986,7 @@ class runQueuePipe():
task, status = pickle.loads(self.queue[10:index])
except (ValueError, pickle.UnpicklingError, AttributeError, IndexError) as e:
bb.msg.fatal("RunQueue", "failed load pickle '%s': '%s'" % (e, self.queue[10:index]))
(_, _, _, taskfn) = split_tid_mcfn(task)
fakerootlog = None
if self.fakerootlogs and taskfn and taskfn in self.fakerootlogs:
fakerootlog = self.fakerootlogs[taskfn]
self.rqexec.runqueue_process_waitpid(task, status, fakerootlog=fakerootlog)
self.rqexec.runqueue_process_waitpid(task, status)
found = True
self.queue = self.queue[index+11:]
index = self.queue.find(b"</exitcode>")

View File


@@ -147,7 +147,7 @@ class ProcessServer():
conn = newconnections.pop(-1)
fds.append(conn)
self.controllersock = conn
elif not self.timeout and not ready:
elif self.timeout is None and not ready:
serverlog("No timeout, exiting.")
self.quit = True
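The two sides of this hunk detect an unset timeout differently: a truthiness test treats an explicit 0 like None, while an identity test does not. Standalone illustration:

    for timeout in (None, 0, 30):
        truthy_unset = not timeout        # True for both None and 0
        really_unset = timeout is None    # True only for None
        print(timeout, truthy_unset, really_unset)
    # None True True / 0 True False / 30 False False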
@@ -367,12 +367,7 @@ class ProcessServer():
self.next_heartbeat = now + self.heartbeat_seconds
if hasattr(self.cooker, "data"):
heartbeat = bb.event.HeartbeatEvent(now)
try:
bb.event.fire(heartbeat, self.cooker.data)
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running heartbeat function')
self.quit = True
bb.event.fire(heartbeat, self.cooker.data)
if nextsleep and now + nextsleep > self.next_heartbeat:
# Shorten timeout so that we wake up in time for
# the heartbeat.
@@ -471,7 +466,7 @@ class BitBakeServer(object):
try:
r = ready.get()
except EOFError:
# Trap the child exiting/closing the pipe and error out
# Trap the child exitting/closing the pipe and error out
r = None
if not r or r[0] != "r":
ready.close()
@@ -514,7 +509,7 @@ class BitBakeServer(object):
os.set_inheritable(self.bitbake_lock.fileno(), True)
os.set_inheritable(self.readypipein, True)
serverscript = os.path.realpath(os.path.dirname(__file__) + "/../../../bin/bitbake-server")
os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface):
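The only difference in the execl pair above is "self.server_timeout or 0", which normalises a missing timeout before it is stringified for the child process:

    >>> str(None or 0)
    '0'
    >>> str(None)
    'None'    # the child would otherwise have to parse the literal string "None"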
@@ -659,7 +654,7 @@ class BBUIEventQueue:
self.reader = ConnectionReader(readfd)
self.t = threading.Thread()
self.t.daemon = True
self.t.setDaemon(True)
self.t.run = self.startCallbackHandler
self.t.start()
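setDaemon() is the legacy spelling of the daemon attribute (attribute assignment is preferred; Python 3.10 deprecates the setter). Equivalent standalone form:

    import threading

    t = threading.Thread(target=lambda: None)
    t.daemon = True      # preferred attribute assignment
    # t.setDaemon(True)  # legacy setter, same effect
    t.start()
    t.join()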

View File


@@ -311,7 +311,13 @@ class SignatureGeneratorBasic(SignatureGenerator):
data = self.basehash[tid]
for dep in self.runtaskdeps[tid]:
data = data + self.get_unihash(dep)
if dep in self.unihash:
if self.unihash[dep] is None:
data = data + self.taskhash[dep]
else:
data = data + self.unihash[dep]
else:
data = data + self.get_unihash(dep)
for (f, cs) in self.file_checksum_values[tid]:
if cs:
@@ -541,7 +547,7 @@ class SignatureGeneratorUniHashMixIn(object):
# is much more interesting, so it is reported at debug level 1
hashequiv_logger.debug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
else:
hashequiv_logger.debug2('No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
hashequiv_logger.debug(2, 'No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
except hashserv.client.HashConnectionError as e:
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
@@ -615,12 +621,12 @@ class SignatureGeneratorUniHashMixIn(object):
new_unihash = data['unihash']
if new_unihash != unihash:
hashequiv_logger.debug('Task %s unihash changed %s -> %s by server %s' % (taskhash, unihash, new_unihash, self.server))
hashequiv_logger.debug(1, 'Task %s unihash changed %s -> %s by server %s' % (taskhash, unihash, new_unihash, self.server))
bb.event.fire(bb.runqueue.taskUniHashUpdate(fn + ':do_' + task, new_unihash), d)
self.set_unihash(tid, new_unihash)
d.setVar('BB_UNIHASH', new_unihash)
else:
hashequiv_logger.debug('Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
hashequiv_logger.debug(1, 'Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
except hashserv.client.HashConnectionError as e:
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
finally:
@@ -748,7 +754,7 @@ def clean_basepath(basepath):
if basepath[0] == '/':
return cleaned
if basepath.startswith("mc:") and basepath.count(':') >= 2:
if basepath.startswith("mc:"):
mc, mc_name, basepath = basepath.split(":", 2)
mc_suffix = ':mc:' + mc_name
else:
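One side of the clean_basepath hunk above guards the mc: branch with count(':') >= 2 so the three-way split cannot come up short. The split behaviour on hypothetical inputs:

    >>> "mc:qemux86:virtual/libc:do_configure".split(":", 2)
    ['mc', 'qemux86', 'virtual/libc:do_configure']
    >>> "mc:broken".split(":", 2)
    ['mc', 'broken']   # two fields only; a three-way unpack would raise ValueError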

View File


@@ -131,7 +131,7 @@ class TaskData:
for depend in dataCache.deps[fn]:
dependids.add(depend)
self.depids[fn] = list(dependids)
logger.debug2("Added dependencies %s for %s", str(dataCache.deps[fn]), fn)
logger.debug(2, "Added dependencies %s for %s", str(dataCache.deps[fn]), fn)
# Work out runtime dependencies
if not fn in self.rdepids:
@@ -149,9 +149,9 @@ class TaskData:
rreclist.append(rdepend)
rdependids.add(rdepend)
if rdependlist:
logger.debug2("Added runtime dependencies %s for %s", str(rdependlist), fn)
logger.debug(2, "Added runtime dependencies %s for %s", str(rdependlist), fn)
if rreclist:
logger.debug2("Added runtime recommendations %s for %s", str(rreclist), fn)
logger.debug(2, "Added runtime recommendations %s for %s", str(rreclist), fn)
self.rdepids[fn] = list(rdependids)
for dep in self.depids[fn]:
@@ -378,7 +378,7 @@ class TaskData:
for fn in eligible:
if fn in self.failed_fns:
continue
logger.debug2("adding %s to satisfy %s", fn, item)
logger.debug(2, "adding %s to satisfy %s", fn, item)
self.add_build_target(fn, item)
self.add_tasks(fn, dataCache)
@@ -431,7 +431,7 @@ class TaskData:
for fn in eligible:
if fn in self.failed_fns:
continue
logger.debug2("adding '%s' to satisfy runtime '%s'", fn, item)
logger.debug(2, "adding '%s' to satisfy runtime '%s'", fn, item)
self.add_runtime_target(fn, item)
self.add_tasks(fn, dataCache)
@@ -446,7 +446,7 @@ class TaskData:
return
if not missing_list:
missing_list = []
logger.debug("File '%s' is unbuildable, removing...", fn)
logger.debug(1, "File '%s' is unbuildable, removing...", fn)
self.failed_fns.append(fn)
for target in self.build_targets:
if fn in self.build_targets[target]:
@@ -526,7 +526,7 @@ class TaskData:
added = added + 1
except (bb.providers.NoRProvider, bb.providers.MultipleRProvider):
self.remove_runtarget(target)
logger.debug("Resolved " + str(added) + " extra dependencies")
logger.debug(1, "Resolved " + str(added) + " extra dependencies")
if added == 0:
break
# self.dump_data()
@@ -549,38 +549,38 @@ class TaskData:
"""
Dump some debug information on the internal data structures
"""
logger.debug3("build_names:")
logger.debug3(", ".join(self.build_targets))
logger.debug(3, "build_names:")
logger.debug(3, ", ".join(self.build_targets))
logger.debug3("run_names:")
logger.debug3(", ".join(self.run_targets))
logger.debug(3, "run_names:")
logger.debug(3, ", ".join(self.run_targets))
logger.debug3("build_targets:")
logger.debug(3, "build_targets:")
for target in self.build_targets:
targets = "None"
if target in self.build_targets:
targets = self.build_targets[target]
logger.debug3(" %s: %s", target, targets)
logger.debug(3, " %s: %s", target, targets)
logger.debug3("run_targets:")
logger.debug(3, "run_targets:")
for target in self.run_targets:
targets = "None"
if target in self.run_targets:
targets = self.run_targets[target]
logger.debug3(" %s: %s", target, targets)
logger.debug(3, " %s: %s", target, targets)
logger.debug3("tasks:")
logger.debug(3, "tasks:")
for tid in self.taskentries:
logger.debug3(" %s: %s %s %s",
logger.debug(3, " %s: %s %s %s",
tid,
self.taskentries[tid].idepends,
self.taskentries[tid].irdepends,
self.taskentries[tid].tdepends)
logger.debug3("dependency ids (per fn):")
logger.debug(3, "dependency ids (per fn):")
for fn in self.depids:
logger.debug3(" %s: %s", fn, self.depids[fn])
logger.debug(3, " %s: %s", fn, self.depids[fn])
logger.debug3("runtime dependency ids (per fn):")
logger.debug(3, "runtime dependency ids (per fn):")
for fn in self.rdepids:
logger.debug3(" %s: %s", fn, self.rdepids[fn])
logger.debug(3, " %s: %s", fn, self.rdepids[fn])
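The recurring logger.debug(N, msg) to logger.debugN(msg) churn throughout this compare reflects BitBake's move from a numeric level argument to dedicated helper methods. A generic sketch of such a helper on a stock logging.Logger; the DEBUG2 level value and the monkey-patch are assumptions, not the actual bb.msg implementation:

    import logging

    DEBUG2 = logging.DEBUG - 1                 # assumption: one step finer than DEBUG
    logging.addLevelName(DEBUG2, "DEBUG2")

    def debug2(self, msg, *args, **kwargs):
        if self.isEnabledFor(DEBUG2):
            self._log(DEBUG2, msg, args, **kwargs)

    logging.Logger.debug2 = debug2             # illustrative monkey-patch only

    logging.basicConfig(level=DEBUG2)
    logging.getLogger("demo").debug2("fine-grained message %s", 42)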

View File


@@ -111,9 +111,9 @@ ${D}${libdir}/pkgconfig/*.pc
self.assertExecs(set(["sed"]))
def test_parameter_expansion_modifiers(self):
# -,+ and : are also valid modifiers for parameter expansion, but are
# - and + are also valid modifiers for parameter expansion, but are
# valid characters in bitbake variable names, so are not included here
for i in ('=', '?', '#', '%', '##', '%%'):
for i in ('=', ':-', ':=', '?', ':?', ':+', '#', '%', '##', '%%'):
name = "foo%sbar" % i
self.parseExpression("${%s}" % name)
self.assertNotIn(name, self.references)

View File


@@ -31,7 +31,7 @@ class ColorCodeTests(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self._progress_watcher = ProgressWatcher()
bb.event.register("bb.build.TaskProgress", self._progress_watcher.handle_event, data=self.d)
bb.event.register("bb.build.TaskProgress", self._progress_watcher.handle_event)
def tearDown(self):
bb.event.remove("bb.build.TaskProgress", None)

View File


@@ -87,25 +87,6 @@ class URITest(unittest.TestCase):
},
'relative': False
},
# Check that trailing semicolons are handled correctly
"http://www.example.org/index.html?qparam1=qvalue1;param2=value2;" : {
'uri': 'http://www.example.org/index.html?qparam1=qvalue1;param2=value2',
'scheme': 'http',
'hostname': 'www.example.org',
'port': None,
'hostport': 'www.example.org',
'path': '/index.html',
'userinfo': '',
'username': '',
'password': '',
'params': {
'param2': 'value2'
},
'query': {
'qparam1': 'qvalue1'
},
'relative': False
},
"http://www.example.com:8080/index.html" : {
'uri': 'http://www.example.com:8080/index.html',
'scheme': 'http',
@@ -390,7 +371,6 @@ class FetcherTest(unittest.TestCase):
if os.environ.get("BB_TMPDIR_NOCLEAN") == "yes":
print("Not cleaning up %s. Please remove manually." % self.tempdir)
else:
bb.process.run('chmod u+rw -R %s' % self.tempdir)
bb.utils.prunedir(self.tempdir)
class MirrorUriTest(FetcherTest):
@@ -431,10 +411,6 @@ class MirrorUriTest(FetcherTest):
("git://someserver.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master", "git://someserver.org/bitbake;branch=master", "git://git.openembedded.org/bitbake;protocol=http")
: "git://git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master;protocol=http",
("git://user1@someserver.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master", "git://someserver.org/bitbake;branch=master", "git://user2@git.openembedded.org/bitbake;protocol=http")
: "git://user2@git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master;protocol=http",
#Renaming files doesn't work
#("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz") : "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz"
#("file://sstate-xyz.tgz", "file://.*/.*", "file:///somewhere/1234/sstate-cache") : "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
@@ -495,7 +471,7 @@ class GitDownloadDirectoryNamingTest(FetcherTest):
super(GitDownloadDirectoryNamingTest, self).setUp()
self.recipe_url = "git://git.openembedded.org/bitbake"
self.recipe_dir = "git.openembedded.org.bitbake"
self.mirror_url = "git://github.com/openembedded/bitbake.git;protocol=https"
self.mirror_url = "git://github.com/openembedded/bitbake.git"
self.mirror_dir = "github.com.openembedded.bitbake.git"
self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
@@ -543,7 +519,7 @@ class TarballNamingTest(FetcherTest):
super(TarballNamingTest, self).setUp()
self.recipe_url = "git://git.openembedded.org/bitbake"
self.recipe_tarball = "git2_git.openembedded.org.bitbake.tar.gz"
self.mirror_url = "git://github.com/openembedded/bitbake.git;protocol=https"
self.mirror_url = "git://github.com/openembedded/bitbake.git"
self.mirror_tarball = "git2_github.com.openembedded.bitbake.git.tar.gz"
self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '1')
@@ -577,7 +553,7 @@ class GitShallowTarballNamingTest(FetcherTest):
super(GitShallowTarballNamingTest, self).setUp()
self.recipe_url = "git://git.openembedded.org/bitbake"
self.recipe_tarball = "gitshallow_git.openembedded.org.bitbake_82ea737-1_master.tar.gz"
self.mirror_url = "git://github.com/openembedded/bitbake.git;protocol=https"
self.mirror_url = "git://github.com/openembedded/bitbake.git"
self.mirror_tarball = "gitshallow_github.com.openembedded.bitbake.git_82ea737-1_master.tar.gz"
self.d.setVar('BB_GIT_SHALLOW', '1')
@@ -678,62 +654,6 @@ class FetcherLocalTest(FetcherTest):
with self.assertRaises(bb.fetch2.UnpackError):
self.fetchUnpack(['file://a;subdir=/bin/sh'])
def test_local_gitfetch_usehead(self):
# Create dummy local Git repo
src_dir = tempfile.mkdtemp(dir=self.tempdir,
prefix='gitfetch_localusehead_')
src_dir = os.path.abspath(src_dir)
bb.process.run("git init", cwd=src_dir)
bb.process.run("git config user.email 'you@example.com'", cwd=src_dir)
bb.process.run("git config user.name 'Your Name'", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit'",
cwd=src_dir)
# Use other branch than master
bb.process.run("git checkout -b my-devel", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
cwd=src_dir)
stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
orig_rev = stdout[0].strip()
# Fetch and check revision
self.d.setVar("SRCREV", "AUTOINC")
url = "git://" + src_dir + ";protocol=file;usehead=1"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
stdout = bb.process.run("git rev-parse HEAD",
cwd=os.path.join(self.unpackdir, 'git'))
unpack_rev = stdout[0].strip()
self.assertEqual(orig_rev, unpack_rev)
def test_local_gitfetch_usehead_withname(self):
# Create dummy local Git repo
src_dir = tempfile.mkdtemp(dir=self.tempdir,
prefix='gitfetch_localusehead_')
src_dir = os.path.abspath(src_dir)
bb.process.run("git init", cwd=src_dir)
bb.process.run("git config user.email 'you@example.com'", cwd=src_dir)
bb.process.run("git config user.name 'Your Name'", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit'",
cwd=src_dir)
# Use other branch than master
bb.process.run("git checkout -b my-devel", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
cwd=src_dir)
stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
orig_rev = stdout[0].strip()
# Fetch and check revision
self.d.setVar("SRCREV", "AUTOINC")
url = "git://" + src_dir + ";protocol=file;usehead=1;name=newName"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
stdout = bb.process.run("git rev-parse HEAD",
cwd=os.path.join(self.unpackdir, 'git'))
unpack_rev = stdout[0].strip()
self.assertEqual(orig_rev, unpack_rev)
class FetcherNoNetworkTest(FetcherTest):
def setUp(self):
super().setUp()
@@ -924,21 +844,35 @@ class FetcherNetworkTest(FetcherTest):
self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
@skipIfNoNetwork()
def test_gitfetch_usehead(self):
# Since self.gitfetcher() sets SRCREV we expect this to override
# `usehead=1' and instead fetch the specified SRCREV. See
# test_local_gitfetch_usehead() for a positive use of the usehead
# feature.
url = "git://git.openembedded.org/bitbake;usehead=1"
self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
def test_gitfetch_localusehead(self):
# Create dummy local Git repo
src_dir = tempfile.mkdtemp(dir=self.tempdir,
prefix='gitfetch_localusehead_')
src_dir = os.path.abspath(src_dir)
bb.process.run("git init", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit'",
cwd=src_dir)
# Use other branch than master
bb.process.run("git checkout -b my-devel", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
cwd=src_dir)
stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
orig_rev = stdout[0].strip()
# Fetch and check revision
self.d.setVar("SRCREV", "AUTOINC")
url = "git://" + src_dir + ";protocol=file;usehead=1"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
stdout = bb.process.run("git rev-parse HEAD",
cwd=os.path.join(self.unpackdir, 'git'))
unpack_rev = stdout[0].strip()
self.assertEqual(orig_rev, unpack_rev)
@skipIfNoNetwork()
def test_gitfetch_usehead_withname(self):
# Since self.gitfetcher() sets SRCREV we expect this to override
# `usehead=1' and instead fetch the specified SRCREV. See
# test_local_gitfetch_usehead() for a positive use of the usehead
# feature.
url = "git://git.openembedded.org/bitbake;usehead=1;name=newName"
def test_gitfetch_remoteusehead(self):
url = "git://git.openembedded.org/bitbake;usehead=1"
self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
@skipIfNoNetwork()
@@ -989,7 +923,7 @@ class FetcherNetworkTest(FetcherTest):
def test_git_submodule_dbus_broker(self):
# The following external repositories have shown failures in fetch and unpack operations
# We want to avoid regressions!
url = "gitsm://github.com/bus1/dbus-broker;protocol=https;rev=fc874afa0992d0c75ec25acb43d344679f0ee7d2;branch=main"
url = "gitsm://github.com/bus1/dbus-broker;protocol=git;rev=fc874afa0992d0c75ec25acb43d344679f0ee7d2;branch=main"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
# Previous cwd has been deleted
@@ -1005,7 +939,7 @@ class FetcherNetworkTest(FetcherTest):
@skipIfNoNetwork()
def test_git_submodule_CLI11(self):
url = "gitsm://github.com/CLIUtils/CLI11;protocol=https;rev=bd4dc911847d0cde7a6b41dfa626a85aab213baf;branch=main"
url = "gitsm://github.com/CLIUtils/CLI11;protocol=git;rev=bd4dc911847d0cde7a6b41dfa626a85aab213baf"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
# Previous cwd has been deleted
@@ -1020,12 +954,12 @@ class FetcherNetworkTest(FetcherTest):
@skipIfNoNetwork()
def test_git_submodule_update_CLI11(self):
""" Prevent regression on update detection not finding missing submodule, or modules without needed commits """
url = "gitsm://github.com/CLIUtils/CLI11;protocol=https;rev=cf6a99fa69aaefe477cc52e3ef4a7d2d7fa40714;branch=main"
url = "gitsm://github.com/CLIUtils/CLI11;protocol=git;rev=cf6a99fa69aaefe477cc52e3ef4a7d2d7fa40714"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
# CLI11 that pulls in a newer nlohmann-json
url = "gitsm://github.com/CLIUtils/CLI11;protocol=https;rev=49ac989a9527ee9bb496de9ded7b4872c2e0e5ca;branch=main"
url = "gitsm://github.com/CLIUtils/CLI11;protocol=git;rev=49ac989a9527ee9bb496de9ded7b4872c2e0e5ca"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
# Previous cwd has been deleted
@@ -1039,7 +973,7 @@ class FetcherNetworkTest(FetcherTest):
@skipIfNoNetwork()
def test_git_submodule_aktualizr(self):
url = "gitsm://github.com/advancedtelematic/aktualizr;branch=master;protocol=https;rev=d00d1a04cc2366d1a5f143b84b9f507f8bd32c44"
url = "gitsm://github.com/advancedtelematic/aktualizr;branch=master;protocol=git;rev=d00d1a04cc2366d1a5f143b84b9f507f8bd32c44"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
# Previous cwd has been deleted
@@ -1059,7 +993,7 @@ class FetcherNetworkTest(FetcherTest):
""" Prevent regression on deeply nested submodules not being checked out properly, even though they were fetched. """
# This repository also has submodules where the module (name), path and url do not align
url = "gitsm://github.com/azure/iotedge.git;protocol=https;rev=d76e0316c6f324345d77c48a83ce836d09392699"
url = "gitsm://github.com/azure/iotedge.git;protocol=git;rev=d76e0316c6f324345d77c48a83ce836d09392699"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
# Previous cwd has been deleted
@@ -1117,7 +1051,7 @@ class SVNTest(FetcherTest):
bb.process.run("svn co %s svnfetch_co" % self.repo_url, cwd=self.tempdir)
# GitHub will emulate SVN. Use this to check if we're downloading...
bb.process.run("svn propset svn:externals 'bitbake https://github.com/PhilipHazel/pcre2.git' .",
bb.process.run("svn propset svn:externals 'bitbake svn://vcs.pcre.org/pcre2/code' .",
cwd=os.path.join(self.tempdir, 'svnfetch_co', 'trunk'))
bb.process.run("svn commit --non-interactive -m 'Add external'",
cwd=os.path.join(self.tempdir, 'svnfetch_co', 'trunk'))
@@ -1235,7 +1169,7 @@ class FetchLatestVersionTest(FetcherTest):
test_git_uris = {
# version pattern "X.Y.Z"
("mx-1.0", "git://github.com/clutter-project/mx.git;branch=mx-1.4;protocol=https", "9b1db6b8060bd00b121a692f942404a24ae2960f", "")
("mx-1.0", "git://github.com/clutter-project/mx.git;branch=mx-1.4", "9b1db6b8060bd00b121a692f942404a24ae2960f", "")
: "1.99.4",
# version pattern "vX.Y"
# mirror of git.infradead.org since network issues interfered with testing
@@ -1246,7 +1180,7 @@ class FetchLatestVersionTest(FetcherTest):
("presentproto", "git://git.yoctoproject.org/bbfetchtests-presentproto", "24f3a56e541b0a9e6c6ee76081f441221a120ef9", "")
: "1.0",
# version pattern "pkg_name-vX.Y.Z"
("dtc", "git://git.yoctoproject.org/bbfetchtests-dtc.git", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "")
("dtc", "git://git.qemu.org/dtc.git", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "")
: "1.4.0",
# combination version pattern
("sysprof", "git://gitlab.gnome.org/GNOME/sysprof.git;protocol=https", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
@@ -1262,9 +1196,9 @@ class FetchLatestVersionTest(FetcherTest):
: "0.4.3",
("build-appliance-image", "git://git.yoctoproject.org/poky", "b37dd451a52622d5b570183a81583cc34c2ff555", "(?P<pver>(([0-9][\.|_]?)+[0-9]))")
: "11.0.0",
("chkconfig-alternatives-native", "git://github.com/kergoth/chkconfig;branch=sysroot;protocol=https", "cd437ecbd8986c894442f8fce1e0061e20f04dee", "chkconfig\-(?P<pver>((\d+[\.\-_]*)+))")
("chkconfig-alternatives-native", "git://github.com/kergoth/chkconfig;branch=sysroot", "cd437ecbd8986c894442f8fce1e0061e20f04dee", "chkconfig\-(?P<pver>((\d+[\.\-_]*)+))")
: "1.3.59",
("remake", "git://github.com/rocky/remake.git;protocol=https", "f05508e521987c8494c92d9c2871aec46307d51d", "(?P<pver>(\d+\.(\d+\.)*\d*(\+dbg\d+(\.\d+)*)*))")
("remake", "git://github.com/rocky/remake.git", "f05508e521987c8494c92d9c2871aec46307d51d", "(?P<pver>(\d+\.(\d+\.)*\d*(\+dbg\d+(\.\d+)*)*))")
: "3.82+dbg0.9",
}
@@ -1354,10 +1288,13 @@ class FetchCheckStatusTest(FetcherTest):
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.2.tar.gz",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.3.tar.gz",
"https://yoctoproject.org/",
"https://docs.yoctoproject.org",
"https://yoctoproject.org/documentation",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.1.7.tar.gz",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.3.0.tar.gz",
"ftp://sourceware.org/pub/libffi/libffi-1.20.tar.gz",
"http://ftp.gnu.org/gnu/autoconf/autoconf-2.60.tar.gz",
"https://ftp.gnu.org/gnu/chess/gnuchess-5.08.tar.gz",
"https://ftp.gnu.org/gnu/gmp/gmp-4.0.tar.gz",
# GitHub releases are hosted on Amazon S3, which doesn't support HEAD
"https://github.com/kergoth/tslib/releases/download/1.1/tslib-1.1.tar.xz"
]
@@ -1396,8 +1333,6 @@ class GitMakeShallowTest(FetcherTest):
self.gitdir = os.path.join(self.tempdir, 'gitshallow')
bb.utils.mkdirhier(self.gitdir)
bb.process.run('git init', cwd=self.gitdir)
bb.process.run('git config user.email "you@example.com"', cwd=self.gitdir)
bb.process.run('git config user.name "Your Name"', cwd=self.gitdir)
def assertRefs(self, expected_refs):
actual_refs = self.git(['for-each-ref', '--format=%(refname)']).splitlines()
@@ -1521,8 +1456,6 @@ class GitShallowTest(FetcherTest):
bb.utils.mkdirhier(self.srcdir)
self.git('init', cwd=self.srcdir)
self.git('config user.email "you@example.com"', cwd=self.srcdir)
self.git('config user.name "Your Name"', cwd=self.srcdir)
self.d.setVar('WORKDIR', self.tempdir)
self.d.setVar('S', self.gitdir)
self.d.delVar('PREMIRRORS')
@@ -1604,7 +1537,6 @@ class GitShallowTest(FetcherTest):
# fetch and unpack, from the shallow tarball
bb.utils.remove(self.gitdir, recurse=True)
bb.process.run('chmod u+w -R "%s"' % ud.clonedir)
bb.utils.remove(ud.clonedir, recurse=True)
bb.utils.remove(ud.clonedir.replace('gitsource', 'gitsubmodule'), recurse=True)
@@ -1757,8 +1689,6 @@ class GitShallowTest(FetcherTest):
smdir = os.path.join(self.tempdir, 'gitsubmodule')
bb.utils.mkdirhier(smdir)
self.git('init', cwd=smdir)
self.git('config user.email "you@example.com"', cwd=smdir)
self.git('config user.name "Your Name"', cwd=smdir)
# Make this look like it was cloned from a remote...
self.git('config --add remote.origin.url "%s"' % smdir, cwd=smdir)
self.git('config --add remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"', cwd=smdir)
@@ -1789,8 +1719,6 @@ class GitShallowTest(FetcherTest):
smdir = os.path.join(self.tempdir, 'gitsubmodule')
bb.utils.mkdirhier(smdir)
self.git('init', cwd=smdir)
self.git('config user.email "you@example.com"', cwd=smdir)
self.git('config user.name "Your Name"', cwd=smdir)
# Make this look like it was cloned from a remote...
self.git('config --add remote.origin.url "%s"' % smdir, cwd=smdir)
self.git('config --add remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"', cwd=smdir)
@@ -1833,8 +1761,8 @@ class GitShallowTest(FetcherTest):
self.git('annex init', cwd=self.srcdir)
open(os.path.join(self.srcdir, 'c'), 'w').close()
self.git('annex add c', cwd=self.srcdir)
self.git('commit --author "Foo Bar <foo@bar>" -m annex-c -a', cwd=self.srcdir)
bb.process.run('chmod u+w -R %s' % self.srcdir)
self.git('commit -m annex-c -a', cwd=self.srcdir)
bb.process.run('chmod u+w -R %s' % os.path.join(self.srcdir, '.git', 'annex'))
uri = 'gitannex://%s;protocol=file;subdir=${S}' % self.srcdir
fetcher, ud = self.fetch_shallow(uri)
@@ -2048,7 +1976,7 @@ class GitShallowTest(FetcherTest):
@skipIfNoNetwork()
def test_bitbake(self):
self.git('remote add --mirror=fetch origin https://github.com/openembedded/bitbake', cwd=self.srcdir)
self.git('remote add --mirror=fetch origin git://github.com/openembedded/bitbake', cwd=self.srcdir)
self.git('config core.bare true', cwd=self.srcdir)
self.git('fetch', cwd=self.srcdir)
@@ -2109,8 +2037,6 @@ class GitLfsTest(FetcherTest):
bb.utils.mkdirhier(self.srcdir)
self.git('init', cwd=self.srcdir)
self.git('config user.email "you@example.com"', cwd=self.srcdir)
self.git('config user.name "Your Name"', cwd=self.srcdir)
with open(os.path.join(self.srcdir, '.gitattributes'), 'wt') as attrs:
attrs.write('*.mp3 filter=lfs -text')
self.git(['add', '.gitattributes'], cwd=self.srcdir)
@@ -2125,14 +2051,13 @@ class GitLfsTest(FetcherTest):
cwd = self.gitdir
return bb.process.run(cmd, cwd=cwd)[0]
def fetch(self, uri=None, download=True):
def fetch(self, uri=None):
uris = self.d.getVar('SRC_URI').split()
uri = uris[0]
d = self.d
fetcher = bb.fetch2.Fetch(uris, d)
if download:
fetcher.download()
fetcher.download()
ud = fetcher.ud[uri]
return fetcher, ud
@@ -2142,21 +2067,16 @@ class GitLfsTest(FetcherTest):
uri = 'git://%s;protocol=file;subdir=${S};lfs=1' % self.srcdir
self.d.setVar('SRC_URI', uri)
# Careful: suppress initial attempt at downloading until
# we know whether git-lfs is installed.
fetcher, ud = self.fetch(uri=None, download=False)
fetcher, ud = self.fetch()
self.assertIsNotNone(ud.method._find_git_lfs)
# If git-lfs can be found, the unpack should be successful. Only
# attempt this with the real live copy of git-lfs installed.
if ud.method._find_git_lfs(self.d):
fetcher.download()
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
# If git-lfs can be found, the unpack should be successful
ud.method._find_git_lfs = lambda d: True
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
# If git-lfs cannot be found, the unpack should throw an error
with self.assertRaises(bb.fetch2.FetchError):
fetcher.download()
ud.method._find_git_lfs = lambda d: False
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
@@ -2167,16 +2087,10 @@ class GitLfsTest(FetcherTest):
uri = 'git://%s;protocol=file;subdir=${S};lfs=0' % self.srcdir
self.d.setVar('SRC_URI', uri)
# In contrast to test_lfs_enabled(), allow the implicit download
# done by self.fetch() to occur here. The point of this test case
# is to verify that the fetcher can survive even if the source
# repository has Git LFS usage configured.
fetcher, ud = self.fetch()
self.assertIsNotNone(ud.method._find_git_lfs)
# If git-lfs can be found, the unpack should be successful. A
# live copy of git-lfs is not required for this case, so
# unconditionally forge its presence.
# If git-lfs can be found, the unpack should be successful
ud.method._find_git_lfs = lambda d: True
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))

View File


@@ -1 +1 @@
do_install[mcdepends] = "mc:mc-1:mc_2:a1:do_build"
do_install[mcdepends] = "mc:mc1:mc2:a1:do_build"

View File


@@ -1,5 +1,5 @@
python () {
if d.getVar("BB_CURRENT_MC") == "mc-1":
bb.fatal("Multiconfig is mc-1")
if d.getVar("BB_CURRENT_MC") == "mc1":
bb.fatal("Multiconfig is mc1")
}

View File


@@ -1,4 +1,4 @@
python () {
if d.getVar("BB_CURRENT_MC") == "mc_2":
bb.fatal("Multiconfig is mc_2")
if d.getVar("BB_CURRENT_MC") == "mc2":
bb.fatal("Multiconfig is mc2")
}

View File


@@ -216,66 +216,66 @@ class RunQueueTests(unittest.TestCase):
def test_multiconfig_setscene_optimise(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
"BBMULTICONFIG" : "mc-1 mc_2",
"BBMULTICONFIG" : "mc1 mc2",
"BB_SIGNATURE_HANDLER" : "basic"
}
cmd = ["bitbake", "b1", "mc:mc-1:b1", "mc:mc_2:b1"]
cmd = ["bitbake", "b1", "mc:mc1:b1", "mc:mc2:b1"]
setscenetasks = ['package_write_ipk_setscene', 'package_write_rpm_setscene', 'packagedata_setscene',
'populate_sysroot_setscene', 'package_qa_setscene']
sstatevalid = ""
tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv)
expected = ['a1:' + x for x in self.alltasks] + ['b1:' + x for x in self.alltasks] + \
['mc-1:b1:' + x for x in setscenetasks] + ['mc-1:a1:' + x for x in setscenetasks] + \
['mc_2:b1:' + x for x in setscenetasks] + ['mc_2:a1:' + x for x in setscenetasks] + \
['mc-1:b1:build', 'mc_2:b1:build']
for x in ['mc-1:a1:package_qa_setscene', 'mc_2:a1:package_qa_setscene', 'a1:build', 'a1:package_qa']:
['mc1:b1:' + x for x in setscenetasks] + ['mc1:a1:' + x for x in setscenetasks] + \
['mc2:b1:' + x for x in setscenetasks] + ['mc2:a1:' + x for x in setscenetasks] + \
['mc1:b1:build', 'mc2:b1:build']
for x in ['mc1:a1:package_qa_setscene', 'mc2:a1:package_qa_setscene', 'a1:build', 'a1:package_qa']:
expected.remove(x)
self.assertEqual(set(tasks), set(expected))
def test_multiconfig_bbmask(self):
# This test validates that multiconfigs can independently mask off
# recipes they do not want with BBMASK. It works by having recipes
# that will fail to parse for mc-1 and mc_2, then making each multiconfig
# that will fail to parse for mc1 and mc2, then making each multiconfig
# build the one that does parse. This ensures that the recipes are in
# each multiconfigs BBFILES, but each is masking only the one that
# doesn't parse
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
"BBMULTICONFIG" : "mc-1 mc_2",
"BBMULTICONFIG" : "mc1 mc2",
"BB_SIGNATURE_HANDLER" : "basic",
"EXTRA_BBFILES": "${COREBASE}/recipes/fails-mc/*.bb",
}
cmd = ["bitbake", "mc:mc-1:fails-mc2", "mc:mc_2:fails-mc1"]
cmd = ["bitbake", "mc:mc1:fails-mc2", "mc:mc2:fails-mc1"]
self.run_bitbakecmd(cmd, tempdir, "", extraenv=extraenv)
def test_multiconfig_mcdepends(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
"BBMULTICONFIG" : "mc-1 mc_2",
"BBMULTICONFIG" : "mc1 mc2",
"BB_SIGNATURE_HANDLER" : "TestMulticonfigDepends",
"EXTRA_BBFILES": "${COREBASE}/recipes/fails-mc/*.bb",
}
tasks = self.run_bitbakecmd(["bitbake", "mc:mc-1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
expected = ["mc-1:f1:%s" % t for t in self.alltasks] + \
["mc_2:a1:%s" % t for t in self.alltasks]
tasks = self.run_bitbakecmd(["bitbake", "mc:mc1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
expected = ["mc1:f1:%s" % t for t in self.alltasks] + \
["mc2:a1:%s" % t for t in self.alltasks]
self.assertEqual(set(tasks), set(expected))
# A rebuild does nothing
tasks = self.run_bitbakecmd(["bitbake", "mc:mc-1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
tasks = self.run_bitbakecmd(["bitbake", "mc:mc1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
self.assertEqual(set(tasks), set())
# Test that a signature change in the dependent task causes
# mcdepends to rebuild
tasks = self.run_bitbakecmd(["bitbake", "mc:mc_2:a1", "-c", "compile", "-f"], tempdir, "", extraenv=extraenv, cleanup=True)
expected = ["mc_2:a1:compile"]
tasks = self.run_bitbakecmd(["bitbake", "mc:mc2:a1", "-c", "compile", "-f"], tempdir, "", extraenv=extraenv, cleanup=True)
expected = ["mc2:a1:compile"]
self.assertEqual(set(tasks), set(expected))
rerun_tasks = self.alltasks[:]
for x in ("fetch", "unpack", "patch", "prepare_recipe_sysroot", "configure", "compile"):
rerun_tasks.remove(x)
tasks = self.run_bitbakecmd(["bitbake", "mc:mc-1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
expected = ["mc-1:f1:%s" % t for t in rerun_tasks] + \
["mc_2:a1:%s" % t for t in rerun_tasks]
tasks = self.run_bitbakecmd(["bitbake", "mc:mc1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
expected = ["mc1:f1:%s" % t for t in rerun_tasks] + \
["mc2:a1:%s" % t for t in rerun_tasks]
self.assertEqual(set(tasks), set(expected))
@unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
@@ -361,7 +361,7 @@ class RunQueueTests(unittest.TestCase):
def shutdown(self, tempdir):
# Wait for the hashserve socket to disappear else we'll see races with the tempdir cleanup
while (os.path.exists(tempdir + "/hashserve.sock") or os.path.exists(tempdir + "cache/hashserv.db-wal")):
while os.path.exists(tempdir + "/hashserve.sock"):
time.sleep(0.5)

View File


@@ -440,7 +440,7 @@ class Tinfoil:
to initialise Tinfoil and use it with config_only=True first and
then conditionally call this function to parse recipes later.
"""
config_params = TinfoilConfigParameters(config_only=False, quiet=self.quiet)
config_params = TinfoilConfigParameters(config_only=False)
self.run_actions(config_params)
self.recipes_parsed = True

View File


@@ -148,14 +148,14 @@ class ORMWrapper(object):
buildrequest = None
if brbe is not None:
# Toaster-triggered build
logger.debug("buildinfohelper: brbe is %s" % brbe)
logger.debug(1, "buildinfohelper: brbe is %s" % brbe)
br, _ = brbe.split(":")
buildrequest = BuildRequest.objects.get(pk=br)
prj = buildrequest.project
else:
# CLI build
prj = Project.objects.get_or_create_default_project()
logger.debug("buildinfohelper: project is not specified, defaulting to %s" % prj)
logger.debug(1, "buildinfohelper: project is not specified, defaulting to %s" % prj)
if buildrequest is not None:
# reuse existing Build object
@@ -171,7 +171,7 @@ class ORMWrapper(object):
completed_on=now,
build_name='')
logger.debug("buildinfohelper: build is created %s" % build)
logger.debug(1, "buildinfohelper: build is created %s" % build)
if buildrequest is not None:
buildrequest.build = build
@@ -906,7 +906,7 @@ class BuildInfoHelper(object):
self.project = None
logger.debug("buildinfohelper: Build info helper inited %s" % vars(self))
logger.debug(1, "buildinfohelper: Build info helper inited %s" % vars(self))
###################
@@ -1620,7 +1620,7 @@ class BuildInfoHelper(object):
# if we have a backlog of events, do our best to save them here
if len(self.internal_state['backlog']):
tempevent = self.internal_state['backlog'].pop()
logger.debug("buildinfohelper: Saving stored event %s "
logger.debug(1, "buildinfohelper: Saving stored event %s "
% tempevent)
self.store_log_event(tempevent,cli_backlog)
else:

View File


@@ -745,7 +745,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
continue
if isinstance(event, bb.runqueue.sceneQueueTaskStarted):
logger.info("Running setscene task %d of %d (%s)" % (event.stats.setscene_covered + event.stats.setscene_active + event.stats.setscene_notcovered + 1, event.stats.setscene_total, event.taskstring))
logger.info("Running setscene task %d of %d (%s)" % (event.stats.completed + event.stats.active + event.stats.failed + 1, event.stats.total, event.taskstring))
continue
if isinstance(event, bb.runqueue.runQueueTaskStarted):

View File


@@ -49,8 +49,8 @@ class BBUIHelper:
tid = event._fn + ":" + event._task
removetid(event.pid, tid)
self.failed_tasks.append( { 'title' : "%s %s" % (event._package, event._task)})
elif isinstance(event, bb.runqueue.runQueueTaskStarted) or isinstance(event, bb.runqueue.sceneQueueTaskStarted):
self.tasknumber_current = event.stats.completed + event.stats.active + event.stats.failed + event.stats.setscene_active + 1
elif isinstance(event, bb.runqueue.runQueueTaskStarted):
self.tasknumber_current = event.stats.completed + event.stats.active + event.stats.failed + 1
self.tasknumber_total = event.stats.total
self.needUpdate = True
elif isinstance(event, bb.build.TaskProgress):

View File


@@ -16,8 +16,7 @@ import bb.msg
import multiprocessing
import fcntl
import importlib
import importlib.machinery
import importlib.util
from importlib import machinery
import itertools
import subprocess
import glob
@@ -130,7 +129,6 @@ def vercmp(ta, tb):
return r
def vercmp_string(a, b):
""" Split version strings and compare them """
ta = split_version(a)
tb = split_version(b)
return vercmp(ta, tb)
@@ -249,12 +247,6 @@ def explode_dep_versions2(s, *, sort=True):
return r
def explode_dep_versions(s):
"""
Take an RDEPENDS style string of format:
"DEPEND1 (optional version) DEPEND2 (optional version) ..."
skip null values and items that appear in the dependency string multiple times,
and return a dictionary of dependencies and versions.
"""
r = explode_dep_versions2(s)
for d in r:
if not r[d]:
@@ -452,10 +444,6 @@ def lockfile(name, shared=False, retry=True, block=False):
consider the possibility of sending a signal to the process to break
out - at which point you want block=True rather than retry=True.
"""
if len(name) > 255:
root, ext = os.path.splitext(name)
name = root[:255 - len(ext)] + ext
dirname = os.path.dirname(name)
mkdirhier(dirname)
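The removed guard above truncates over-long lock names so open() cannot fail with ENAMETOOLONG, which is why a later hunk also stops treating that errno as fatal. What the truncation does, on a made-up name:

    import os

    name = "/tmp/" + "x" * 300 + ".lock"
    root, ext = os.path.splitext(name)
    name = root[:255 - len(ext)] + ext   # cap total length, keep the extension
    print(len(name), name[-5:])          # 255 .lock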
@@ -492,7 +480,7 @@ def lockfile(name, shared=False, retry=True, block=False):
return lf
lf.close()
except OSError as e:
if e.errno == errno.EACCES or e.errno == errno.ENAMETOOLONG:
if e.errno == errno.EACCES:
logger.error("Unable to acquire lock '%s', %s",
e.strerror, name)
sys.exit(1)
@@ -614,7 +602,7 @@ def filter_environment(good_vars):
os.environ["LC_ALL"] = "en_US.UTF-8"
if removed_vars:
logger.debug("Removed the following variables from the environment: %s", ", ".join(removed_vars.keys()))
logger.debug(1, "Removed the following variables from the environment: %s", ", ".join(removed_vars.keys()))
return removed_vars
@@ -704,7 +692,7 @@ def remove(path, recurse=False, ionice=False):
raise
def prunedir(topdir, ionice=False):
""" Delete everything reachable from the directory named in 'topdir'. """
# Delete everything reachable from the directory named in 'topdir'.
# CAUTION: This is dangerous!
if _check_unsafe_delete_path(topdir):
raise Exception('bb.utils.prunedir: called with dangerous path "%s", refusing to delete!' % topdir)
@@ -715,10 +703,8 @@ def prunedir(topdir, ionice=False):
# but that's possibly insane and suffixes is probably going to be small
#
def prune_suffix(var, suffixes, d):
"""
See if var ends with any of the suffixes listed and
remove it if found
"""
# See if var ends with any of the suffixes listed and
# remove it if found
for suffix in suffixes:
if suffix and var.endswith(suffix):
return var[:-len(suffix)]
@@ -970,10 +956,6 @@ def umask(new_mask):
os.umask(current_mask)
def to_boolean(string, default=None):
"""
Check input string and return boolean value True/False/None
depending upon the checks
"""
if not string:
return default
@@ -1017,23 +999,6 @@ def contains(variable, checkvalues, truevalue, falsevalue, d):
return falsevalue
def contains_any(variable, checkvalues, truevalue, falsevalue, d):
"""Check if a variable contains any values specified.
Arguments:
variable -- the variable name. This will be fetched and expanded (using
d.getVar(variable)) and then split into a set().
checkvalues -- if this is a string it is split on whitespace into a set(),
otherwise coerced directly into a set().
truevalue -- the value to return if checkvalues is a subset of variable.
falsevalue -- the value to return if variable is empty or if checkvalues is
not a subset of variable.
d -- the data store.
"""
val = d.getVar(variable)
if not val:
return falsevalue
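Per the removed docstring, contains_any() returns truevalue when any of checkvalues appears in the expanded, whitespace-split variable. An illustrative call, assuming the set-intersection semantics the name implies and made-up feature values:

    import bb.data, bb.utils

    d = bb.data.init()
    d.setVar("DISTRO_FEATURES", "x11 wayland opengl")
    print(bb.utils.contains_any("DISTRO_FEATURES", "wayland vulkan", "yes", "no", d))
    # "yes": {"wayland"} intersects; with "vulkan zstd" it would print "no"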
@@ -1595,8 +1560,8 @@ def set_process_name(name):
except:
pass
# export common proxies variables from datastore to environment
def export_proxies(d):
""" export common proxies variables from datastore to environment """
import os
variables = ['http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY',
@@ -1618,14 +1583,12 @@ def export_proxies(d):
def load_plugins(logger, plugins, pluginpath):
def load_plugin(name):
logger.debug('Loading plugin %s' % name)
logger.debug(1, 'Loading plugin %s' % name)
spec = importlib.machinery.PathFinder.find_spec(name, path=[pluginpath] )
if spec:
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
return mod
return spec.loader.load_module()
logger.debug('Loading plugins from %s...' % pluginpath)
logger.debug(1, 'Loading plugins from %s...' % pluginpath)
expanded = (glob.glob(os.path.join(pluginpath, '*' + ext))
for ext in python_extensions)

View File


@@ -50,10 +50,10 @@ class ActionPlugin(LayerPlugin):
if not (args.force or notadded):
try:
self.tinfoil.run_command('parseConfiguration')
except (bb.tinfoil.TinfoilUIException, bb.BBHandledException):
except bb.tinfoil.TinfoilUIException:
# Restore the back up copy of bblayers.conf
shutil.copy2(backup, bblayers_conf)
bb.fatal("Parse failure with the specified layer added, aborting.")
bb.fatal("Parse failure with the specified layer added")
else:
for item in notadded:
sys.stderr.write("Specified layer %s is already in BBLAYERS\n" % item)

View File


@@ -79,7 +79,7 @@ class LayerIndexPlugin(ActionPlugin):
branches = [args.branch]
else:
branches = (self.tinfoil.config_data.getVar('LAYERSERIES_CORENAMES') or 'master').split()
logger.debug('Trying branches: %s' % branches)
logger.debug(1, 'Trying branches: %s' % branches)
ignore_layers = []
if args.ignore:

View File


@@ -128,7 +128,7 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
sys.exit(1)
pkg_pn = self.tinfoil.cooker.recipecaches[mc].pkg_pn
(latest_versions, preferred_versions, required_versions) = self.tinfoil.find_providers(mc)
(latest_versions, preferred_versions) = self.tinfoil.find_providers(mc)
allproviders = self.tinfoil.get_all_providers(mc)
# Ensure we list skipped recipes

View File


@@ -3,7 +3,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import asyncio
from contextlib import closing
import re
import sqlite3
@@ -22,24 +21,6 @@ ADDR_TYPE_TCP = 1
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024
TABLE_DEFINITION = (
("method", "TEXT NOT NULL"),
("outhash", "TEXT NOT NULL"),
("taskhash", "TEXT NOT NULL"),
("unihash", "TEXT NOT NULL"),
("created", "DATETIME"),
# Optional fields
("owner", "TEXT"),
("PN", "TEXT"),
("PV", "TEXT"),
("PR", "TEXT"),
("task", "TEXT"),
("outhash_siginfo", "TEXT"),
)
TABLE_COLUMNS = tuple(name for name, _ in TABLE_DEFINITION)
def setup_database(database, sync=True):
db = sqlite3.connect(database)
db.row_factory = sqlite3.Row
@@ -48,10 +29,23 @@ def setup_database(database, sync=True):
cursor.execute('''
CREATE TABLE IF NOT EXISTS tasks_v2 (
id INTEGER PRIMARY KEY AUTOINCREMENT,
%s
method TEXT NOT NULL,
outhash TEXT NOT NULL,
taskhash TEXT NOT NULL,
unihash TEXT NOT NULL,
created DATETIME,
-- Optional fields
owner TEXT,
PN TEXT,
PV TEXT,
PR TEXT,
task TEXT,
outhash_siginfo TEXT,
UNIQUE(method, outhash, taskhash)
)
''' % " ".join("%s %s," % (name, typ) for name, typ in TABLE_DEFINITION))
''')
cursor.execute('PRAGMA journal_mode = WAL')
cursor.execute('PRAGMA synchronous = %s' % ('NORMAL' if sync else 'OFF'))
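One side of this hunk builds the CREATE TABLE body from TABLE_DEFINITION so the column list can be reused elsewhere (TABLE_COLUMNS is later used to filter rows copied from an upstream server). What the join produces, abbreviated to two columns:

    >>> cols = (("method", "TEXT NOT NULL"), ("outhash", "TEXT NOT NULL"))
    >>> " ".join("%s %s," % (name, typ) for name, typ in cols)
    'method TEXT NOT NULL, outhash TEXT NOT NULL,'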
@@ -94,10 +88,10 @@ def chunkify(msg, max_chunk):
yield "\n"
def create_server(addr, dbname, *, sync=True, upstream=None, read_only=False):
def create_server(addr, dbname, *, sync=True):
from . import server
db = setup_database(dbname, sync=sync)
s = server.Server(db, upstream=upstream, read_only=read_only)
s = server.Server(db)
(typ, a) = parse_address(addr)
if typ == ADDR_TYPE_UNIX:
@@ -119,15 +113,3 @@ def create_client(addr):
c.connect_tcp(*a)
return c
async def create_async_client(addr):
from . import client
c = client.AsyncClient()
(typ, a) = parse_address(addr)
if typ == ADDR_TYPE_UNIX:
await c.connect_unix(*a)
else:
await c.connect_tcp(*a)
return c

View File


@@ -3,231 +3,189 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import asyncio
import json
import logging
import socket
import os
from . import chunkify, DEFAULT_MAX_CHUNK, create_async_client
from . import chunkify, DEFAULT_MAX_CHUNK
logger = logging.getLogger("hashserv.client")
logger = logging.getLogger('hashserv.client')
class HashConnectionError(Exception):
pass
class AsyncClient(object):
class Client(object):
MODE_NORMAL = 0
MODE_GET_STREAM = 1
def __init__(self):
self._socket = None
self.reader = None
self.writer = None
self.mode = self.MODE_NORMAL
self.max_chunk = DEFAULT_MAX_CHUNK
async def connect_tcp(self, address, port):
async def connect_sock():
return await asyncio.open_connection(address, port)
def connect_tcp(self, address, port):
def connect_sock():
s = socket.create_connection((address, port))
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
return s
self._connect_sock = connect_sock
async def connect_unix(self, path):
async def connect_sock():
return await asyncio.open_unix_connection(path)
def connect_unix(self, path):
def connect_sock():
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# AF_UNIX has path length issues so chdir here to workaround
cwd = os.getcwd()
try:
os.chdir(os.path.dirname(path))
s.connect(os.path.basename(path))
finally:
os.chdir(cwd)
return s
self._connect_sock = connect_sock
async def connect(self):
if self.reader is None or self.writer is None:
(self.reader, self.writer) = await self._connect_sock()
def connect(self):
if self._socket is None:
self._socket = self._connect_sock()
self.writer.write("OEHASHEQUIV 1.1\n\n".encode("utf-8"))
await self.writer.drain()
self.reader = self._socket.makefile('r', encoding='utf-8')
self.writer = self._socket.makefile('w', encoding='utf-8')
self.writer.write('OEHASHEQUIV 1.1\n\n')
self.writer.flush()
# Restore mode if the socket is being re-created
cur_mode = self.mode
self.mode = self.MODE_NORMAL
await self._set_mode(cur_mode)
self._set_mode(cur_mode)
async def close(self):
self.reader = None
return self._socket
if self.writer is not None:
self.writer.close()
def close(self):
if self._socket is not None:
self._socket.close()
self._socket = None
self.reader = None
self.writer = None
async def _send_wrapper(self, proc):
def _send_wrapper(self, proc):
count = 0
while True:
try:
await self.connect()
return await proc()
except (
OSError,
HashConnectionError,
json.JSONDecodeError,
UnicodeDecodeError,
) as e:
logger.warning("Error talking to server: %s" % e)
self.connect()
return proc()
except (OSError, HashConnectionError, json.JSONDecodeError, UnicodeDecodeError) as e:
logger.warning('Error talking to server: %s' % e)
if count >= 3:
if not isinstance(e, HashConnectionError):
raise HashConnectionError(str(e))
raise e
await self.close()
self.close()
count += 1
async def send_message(self, msg):
async def get_line():
line = await self.reader.readline()
def send_message(self, msg):
def get_line():
line = self.reader.readline()
if not line:
raise HashConnectionError("Connection closed")
raise HashConnectionError('Connection closed')
line = line.decode("utf-8")
if not line.endswith("\n"):
raise HashConnectionError("Bad message %r" % message)
if not line.endswith('\n'):
raise HashConnectionError('Bad message %r' % message)
return line
async def proc():
def proc():
for c in chunkify(json.dumps(msg), self.max_chunk):
self.writer.write(c.encode("utf-8"))
await self.writer.drain()
self.writer.write(c)
self.writer.flush()
l = await get_line()
l = get_line()
m = json.loads(l)
if m and "chunk-stream" in m:
if 'chunk-stream' in m:
lines = []
while True:
l = (await get_line()).rstrip("\n")
l = get_line().rstrip('\n')
if not l:
break
lines.append(l)
m = json.loads("".join(lines))
m = json.loads(''.join(lines))
return m
return await self._send_wrapper(proc)
return self._send_wrapper(proc)
async def send_stream(self, msg):
async def proc():
self.writer.write(("%s\n" % msg).encode("utf-8"))
await self.writer.drain()
l = await self.reader.readline()
def send_stream(self, msg):
def proc():
self.writer.write("%s\n" % msg)
self.writer.flush()
l = self.reader.readline()
if not l:
raise HashConnectionError("Connection closed")
return l.decode("utf-8").rstrip()
raise HashConnectionError('Connection closed')
return l.rstrip()
return await self._send_wrapper(proc)
return self._send_wrapper(proc)
async def _set_mode(self, new_mode):
def _set_mode(self, new_mode):
if new_mode == self.MODE_NORMAL and self.mode == self.MODE_GET_STREAM:
r = await self.send_stream("END")
if r != "ok":
raise HashConnectionError("Bad response from server %r" % r)
r = self.send_stream('END')
if r != 'ok':
raise HashConnectionError('Bad response from server %r' % r)
elif new_mode == self.MODE_GET_STREAM and self.mode == self.MODE_NORMAL:
r = await self.send_message({"get-stream": None})
if r != "ok":
raise HashConnectionError("Bad response from server %r" % r)
r = self.send_message({'get-stream': None})
if r != 'ok':
raise HashConnectionError('Bad response from server %r' % r)
elif new_mode != self.mode:
raise Exception(
"Undefined mode transition %r -> %r" % (self.mode, new_mode)
)
raise Exception('Undefined mode transition %r -> %r' % (self.mode, new_mode))
self.mode = new_mode
async def get_unihash(self, method, taskhash):
await self._set_mode(self.MODE_GET_STREAM)
r = await self.send_stream("%s %s" % (method, taskhash))
def get_unihash(self, method, taskhash):
self._set_mode(self.MODE_GET_STREAM)
r = self.send_stream('%s %s' % (method, taskhash))
if not r:
return None
return r
async def report_unihash(self, taskhash, method, outhash, unihash, extra={}):
await self._set_mode(self.MODE_NORMAL)
def report_unihash(self, taskhash, method, outhash, unihash, extra={}):
self._set_mode(self.MODE_NORMAL)
m = extra.copy()
m["taskhash"] = taskhash
m["method"] = method
m["outhash"] = outhash
m["unihash"] = unihash
return await self.send_message({"report": m})
m['taskhash'] = taskhash
m['method'] = method
m['outhash'] = outhash
m['unihash'] = unihash
return self.send_message({'report': m})
async def report_unihash_equiv(self, taskhash, method, unihash, extra={}):
await self._set_mode(self.MODE_NORMAL)
def report_unihash_equiv(self, taskhash, method, unihash, extra={}):
self._set_mode(self.MODE_NORMAL)
m = extra.copy()
m["taskhash"] = taskhash
m["method"] = method
m["unihash"] = unihash
return await self.send_message({"report-equiv": m})
m['taskhash'] = taskhash
m['method'] = method
m['unihash'] = unihash
return self.send_message({'report-equiv': m})
async def get_taskhash(self, method, taskhash, all_properties=False):
await self._set_mode(self.MODE_NORMAL)
return await self.send_message(
{"get": {"taskhash": taskhash, "method": method, "all": all_properties}}
)
def get_taskhash(self, method, taskhash, all_properties=False):
self._set_mode(self.MODE_NORMAL)
return self.send_message({'get': {
'taskhash': taskhash,
'method': method,
'all': all_properties
}})
async def get_outhash(self, method, outhash, taskhash):
await self._set_mode(self.MODE_NORMAL)
return await self.send_message(
{"get-outhash": {"outhash": outhash, "taskhash": taskhash, "method": method}}
)
def get_stats(self):
self._set_mode(self.MODE_NORMAL)
return self.send_message({'get-stats': None})
async def get_stats(self):
await self._set_mode(self.MODE_NORMAL)
return await self.send_message({"get-stats": None})
async def reset_stats(self):
await self._set_mode(self.MODE_NORMAL)
return await self.send_message({"reset-stats": None})
async def backfill_wait(self):
await self._set_mode(self.MODE_NORMAL)
return (await self.send_message({"backfill-wait": None}))["tasks"]
class Client(object):
def __init__(self):
self.client = AsyncClient()
self.loop = asyncio.new_event_loop()
for call in (
"connect_tcp",
"close",
"get_unihash",
"report_unihash",
"report_unihash_equiv",
"get_taskhash",
"get_stats",
"reset_stats",
"backfill_wait",
):
downcall = getattr(self.client, call)
setattr(self, call, self._get_downcall_wrapper(downcall))
def _get_downcall_wrapper(self, downcall):
def wrapper(*args, **kwargs):
return self.loop.run_until_complete(downcall(*args, **kwargs))
return wrapper
def connect_unix(self, path):
# AF_UNIX has path length issues so chdir here to workaround
cwd = os.getcwd()
try:
os.chdir(os.path.dirname(path))
self.loop.run_until_complete(self.client.connect_unix(os.path.basename(path)))
self.loop.run_until_complete(self.client.connect())
finally:
os.chdir(cwd)
@property
def max_chunk(self):
return self.client.max_chunk
@max_chunk.setter
def max_chunk(self, value):
self.client.max_chunk = value
def reset_stats(self):
self._set_mode(self.MODE_NORMAL)
return self.send_message({'reset-stats': None})
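The Client class above bridges synchronous callers to the async implementation by running each coroutine to completion on a private event loop. A minimal, self-contained sketch of that sync-over-async pattern (EchoAsyncClient and its ping method are hypothetical stand-ins, not hashserv API):

import asyncio

class EchoAsyncClient:
    # Hypothetical stand-in for AsyncClient
    async def ping(self, payload):
        await asyncio.sleep(0)  # pretend this is a network round trip
        return payload

class EchoClient:
    def __init__(self):
        self.client = EchoAsyncClient()
        self.loop = asyncio.new_event_loop()
        # Build a blocking wrapper around every async downcall, as above
        for call in ("ping",):
            downcall = getattr(self.client, call)
            setattr(self, call, self._get_downcall_wrapper(downcall))

    def _get_downcall_wrapper(self, downcall):
        def wrapper(*args, **kwargs):
            return self.loop.run_until_complete(downcall(*args, **kwargs))
        return wrapper

print(EchoClient().ping("hello"))  # prints "hello", no await needed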


@@ -3,7 +3,7 @@
# SPDX-License-Identifier: GPL-2.0-only
#
from contextlib import closing, contextmanager
from contextlib import closing
from datetime import datetime
import asyncio
import json
@@ -12,9 +12,8 @@ import math
import os
import signal
import socket
import sys
import time
from . import chunkify, DEFAULT_MAX_CHUNK, create_async_client, TABLE_COLUMNS
from . import chunkify, DEFAULT_MAX_CHUNK
logger = logging.getLogger('hashserv.server')
@@ -112,95 +111,29 @@ class Stats(object):
class ClientError(Exception):
pass
class ServerError(Exception):
pass
def insert_task(cursor, data, ignore=False):
keys = sorted(data.keys())
query = '''INSERT%s INTO tasks_v2 (%s) VALUES (%s)''' % (
" OR IGNORE" if ignore else "",
', '.join(keys),
', '.join(':' + k for k in keys))
cursor.execute(query, data)
async def copy_from_upstream(client, db, method, taskhash):
d = await client.get_taskhash(method, taskhash, True)
if d is not None:
# Filter out unknown columns
d = {k: v for k, v in d.items() if k in TABLE_COLUMNS}
keys = sorted(d.keys())
with closing(db.cursor()) as cursor:
insert_task(cursor, d)
db.commit()
return d
async def copy_outhash_from_upstream(client, db, method, outhash, taskhash):
d = await client.get_outhash(method, outhash, taskhash)
if d is not None:
# Filter out unknown columns
d = {k: v for k, v in d.items() if k in TABLE_COLUMNS}
keys = sorted(d.keys())
with closing(db.cursor()) as cursor:
insert_task(cursor, d)
db.commit()
return d
class ServerClient(object):
FAST_QUERY = 'SELECT taskhash, method, unihash FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1'
ALL_QUERY = 'SELECT * FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1'
OUTHASH_QUERY = '''
-- Find tasks with a matching outhash (that is, tasks that
-- are equivalent)
SELECT * FROM tasks_v2 WHERE method=:method AND outhash=:outhash
-- If there is an exact match on the taskhash, return it.
-- Otherwise return the oldest matching outhash of any
-- taskhash
ORDER BY CASE WHEN taskhash=:taskhash THEN 1 ELSE 2 END,
created ASC
-- Only return one row
LIMIT 1
'''
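The ORDER BY CASE trick in OUTHASH_QUERY is what makes an exact taskhash match win over an older row with the same outhash. A minimal sketch with an in-memory SQLite table (a hypothetical subset of the tasks_v2 columns) showing that ordering:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks_v2 "
           "(method TEXT, taskhash TEXT, outhash TEXT, unihash TEXT, created INT)")
db.executemany("INSERT INTO tasks_v2 VALUES (?, ?, ?, ?, ?)", [
    ("m", "aaa", "out1", "uni-oldest", 1),  # older row, same outhash
    ("m", "bbb", "out1", "uni-exact", 2),   # newer row, exact taskhash below
])
row = db.execute(
    "SELECT unihash FROM tasks_v2 WHERE method=:method AND outhash=:outhash "
    "ORDER BY CASE WHEN taskhash=:taskhash THEN 1 ELSE 2 END, created ASC "
    "LIMIT 1",
    {"method": "m", "outhash": "out1", "taskhash": "bbb"}).fetchone()
print(row[0])  # "uni-exact"; with an unknown taskhash it would be "uni-oldest"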
def __init__(self, reader, writer, db, request_stats, backfill_queue, upstream, read_only):
def __init__(self, reader, writer, db, request_stats):
self.reader = reader
self.writer = writer
self.db = db
self.request_stats = request_stats
self.max_chunk = DEFAULT_MAX_CHUNK
self.backfill_queue = backfill_queue
self.upstream = upstream
self.handlers = {
'get': self.handle_get,
'get-outhash': self.handle_get_outhash,
'report': self.handle_report,
'report-equiv': self.handle_equivreport,
'get-stream': self.handle_get_stream,
'get-stats': self.handle_get_stats,
'reset-stats': self.handle_reset_stats,
'chunk-stream': self.handle_chunk,
}
if not read_only:
self.handlers.update({
'report': self.handle_report,
'report-equiv': self.handle_equivreport,
'reset-stats': self.handle_reset_stats,
'backfill-wait': self.handle_backfill_wait,
})
async def process_requests(self):
if self.upstream is not None:
self.upstream_client = await create_async_client(self.upstream)
else:
self.upstream_client = None
try:
self.addr = self.writer.get_extra_info('peername')
logger.debug('Client %r connected' % (self.addr,))
@@ -238,9 +171,6 @@ class ServerClient(object):
except ClientError as e:
logger.error(str(e))
finally:
if self.upstream_client is not None:
await self.upstream_client.close()
self.writer.close()
async def dispatch_message(self, msg):
@@ -309,34 +239,15 @@ class ServerClient(object):
if row is not None:
logger.debug('Found equivalent task %s -> %s', (row['taskhash'], row['unihash']))
d = {k: row[k] for k in row.keys()}
elif self.upstream_client is not None:
d = await copy_from_upstream(self.upstream_client, self.db, method, taskhash)
self.write_message(d)
else:
d = None
self.write_message(d)
async def handle_get_outhash(self, request):
with closing(self.db.cursor()) as cursor:
cursor.execute(self.OUTHASH_QUERY,
{k: request[k] for k in ('method', 'outhash', 'taskhash')})
row = cursor.fetchone()
if row is not None:
logger.debug('Found equivalent outhash %s -> %s', (row['outhash'], row['unihash']))
d = {k: row[k] for k in row.keys()}
else:
d = None
self.write_message(d)
self.write_message(None)
async def handle_get_stream(self, request):
self.write_message('ok')
while True:
upstream = None
l = await self.reader.readline()
if not l:
return
@@ -361,12 +272,6 @@ class ServerClient(object):
if row is not None:
msg = ('%s\n' % row['unihash']).encode('utf-8')
#logger.debug('Found equivalent task %s -> %s', (row['taskhash'], row['unihash']))
elif self.upstream_client is not None:
upstream = await self.upstream_client.get_unihash(method, taskhash)
if upstream:
msg = ("%s\n" % upstream).encode("utf-8")
else:
msg = "\n".encode("utf-8")
else:
msg = '\n'.encode('utf-8')
@@ -377,26 +282,25 @@ class ServerClient(object):
await self.writer.drain()
# Post to the backfill queue after writing the result to minimize
# the turn around time on a request
if upstream is not None:
await self.backfill_queue.put((method, taskhash))
async def handle_report(self, data):
with closing(self.db.cursor()) as cursor:
cursor.execute(self.OUTHASH_QUERY,
{k: data[k] for k in ('method', 'outhash', 'taskhash')})
cursor.execute('''
-- Find tasks with a matching outhash (that is, tasks that
-- are equivalent)
SELECT taskhash, method, unihash FROM tasks_v2 WHERE method=:method AND outhash=:outhash
-- If there is an exact match on the taskhash, return it.
-- Otherwise return the oldest matching outhash of any
-- taskhash
ORDER BY CASE WHEN taskhash=:taskhash THEN 1 ELSE 2 END,
created ASC
-- Only return one row
LIMIT 1
''', {k: data[k] for k in ('method', 'outhash', 'taskhash')})
row = cursor.fetchone()
if row is None and self.upstream_client:
# Try upstream
row = await copy_outhash_from_upstream(self.upstream_client,
self.db,
data['method'],
data['outhash'],
data['taskhash'])
# If no matching outhash was found, or one *was* found but it
# wasn't an exact match on the taskhash, a new entry for this
# taskhash should be added
@@ -420,7 +324,11 @@ class ServerClient(object):
if k in data:
insert_data[k] = data[k]
insert_task(cursor, insert_data)
cursor.execute('''INSERT INTO tasks_v2 (%s) VALUES (%s)''' % (
', '.join(sorted(insert_data.keys())),
', '.join(':' + k for k in sorted(insert_data.keys()))),
insert_data)
self.db.commit()
logger.info('Adding taskhash %s with unihash %s',
@@ -450,7 +358,11 @@ class ServerClient(object):
if k in data:
insert_data[k] = data[k]
insert_task(cursor, insert_data, ignore=True)
cursor.execute('''INSERT OR IGNORE INTO tasks_v2 (%s) VALUES (%s)''' % (
', '.join(sorted(insert_data.keys())),
', '.join(':' + k for k in sorted(insert_data.keys()))),
insert_data)
self.db.commit()
# Fetch the unihash that will be reported for the taskhash. If the
@@ -482,13 +394,6 @@ class ServerClient(object):
self.request_stats.reset()
self.write_message(d)
async def handle_backfill_wait(self, request):
d = {
'tasks': self.backfill_queue.qsize(),
}
await self.backfill_queue.join()
self.write_message(d)
def query_equivalent(self, method, taskhash, query):
# This is part of the inner loop and must be as fast as possible
try:
@@ -500,10 +405,7 @@ class ServerClient(object):
class Server(object):
def __init__(self, db, loop=None, upstream=None, read_only=False):
if upstream and read_only:
raise ServerError("Read-only hashserv cannot pull from an upstream server")
def __init__(self, db, loop=None):
self.request_stats = Stats()
self.db = db
@@ -514,14 +416,11 @@ class Server(object):
self.loop = loop
self.close_loop = False
self.upstream = upstream
self.read_only = read_only
self._cleanup_socket = None
def start_tcp_server(self, host, port):
self.server = self.loop.run_until_complete(
asyncio.start_server(self.handle_client, host, port)
asyncio.start_server(self.handle_client, host, port, loop=self.loop)
)
for s in self.server.sockets:
@@ -546,7 +445,7 @@ class Server(object):
# Work around path length limits in AF_UNIX
os.chdir(os.path.dirname(path))
self.server = self.loop.run_until_complete(
asyncio.start_unix_server(self.handle_client, os.path.basename(path))
asyncio.start_unix_server(self.handle_client, os.path.basename(path), loop=self.loop)
)
finally:
os.chdir(cwd)
@@ -559,7 +458,7 @@ class Server(object):
async def handle_client(self, reader, writer):
# writer.transport.set_write_buffer_limits(0)
try:
client = ServerClient(reader, writer, self.db, self.request_stats, self.backfill_queue, self.upstream, self.read_only)
client = ServerClient(reader, writer, self.db, self.request_stats)
await client.process_requests()
except Exception as e:
import traceback
@@ -568,60 +467,23 @@ class Server(object):
writer.close()
logger.info('Client disconnected')
@contextmanager
def _backfill_worker(self):
async def backfill_worker_task():
client = await create_async_client(self.upstream)
try:
while True:
item = await self.backfill_queue.get()
if item is None:
self.backfill_queue.task_done()
break
method, taskhash = item
await copy_from_upstream(client, self.db, method, taskhash)
self.backfill_queue.task_done()
finally:
await client.close()
async def join_worker(worker):
await self.backfill_queue.put(None)
await worker
if self.upstream is not None:
worker = asyncio.ensure_future(backfill_worker_task())
try:
yield
finally:
self.loop.run_until_complete(join_worker(worker))
else:
yield
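The backfill worker above is a standard asyncio queue/sentinel pattern: a task drains the queue, a None item signals shutdown, and queue.join() is what lets handle_backfill_wait block until pending work is finished. A minimal runnable sketch of the same pattern (names are illustrative, not hashserv API):

import asyncio

async def demo():
    queue = asyncio.Queue()

    async def worker():
        while True:
            item = await queue.get()
            if item is None:  # shutdown sentinel, mirrors join_worker()
                queue.task_done()
                break
            method, taskhash = item
            print("backfilling", method, taskhash)
            queue.task_done()

    task = asyncio.ensure_future(worker())
    await queue.put(("TestMethod", "35788efc"))
    await queue.join()     # what a backfill-wait style caller blocks on
    await queue.put(None)  # ask the worker to exit
    await task

asyncio.new_event_loop().run_until_complete(demo())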
def serve_forever(self):
def signal_handler():
self.loop.stop()
asyncio.set_event_loop(self.loop)
self.loop.add_signal_handler(signal.SIGTERM, signal_handler)
try:
self.backfill_queue = asyncio.Queue()
self.loop.run_forever()
except KeyboardInterrupt:
pass
self.loop.add_signal_handler(signal.SIGTERM, signal_handler)
self.server.close()
self.loop.run_until_complete(self.server.wait_closed())
logger.info('Server shutting down')
with self._backfill_worker():
try:
self.loop.run_forever()
except KeyboardInterrupt:
pass
if self.close_loop:
self.loop.close()
self.server.close()
self.loop.run_until_complete(self.server.wait_closed())
logger.info('Server shutting down')
finally:
if self.close_loop:
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
if self._cleanup_socket is not None:
self._cleanup_socket()
if self._cleanup_socket is not None:
self._cleanup_socket()


@@ -6,7 +6,6 @@
#
from . import create_server, create_client
from .client import HashConnectionError
import hashlib
import logging
import multiprocessing
@@ -17,68 +16,44 @@ import threading
import unittest
import socket
def _run_server(server, idx):
# logging.basicConfig(level=logging.DEBUG, filename='bbhashserv.log', filemode='w',
# format='%(levelname)s %(filename)s:%(lineno)d %(message)s')
sys.stdout = open('bbhashserv-%d.log' % idx, 'w')
sys.stderr = sys.stdout
server.serve_forever()
class HashEquivalenceTestSetup(object):
class TestHashEquivalenceServer(object):
METHOD = 'TestMethod'
server_index = 0
def start_server(self, dbpath=None, upstream=None, read_only=False):
self.server_index += 1
if dbpath is None:
dbpath = os.path.join(self.temp_dir.name, "db%d.sqlite" % self.server_index)
def cleanup_thread(thread):
thread.terminate()
thread.join()
server = create_server(self.get_server_addr(self.server_index),
dbpath,
upstream=upstream,
read_only=read_only)
server.dbpath = dbpath
server.thread = multiprocessing.Process(target=_run_server, args=(server, self.server_index))
server.thread.start()
self.addCleanup(cleanup_thread, server.thread)
def cleanup_client(client):
client.close()
client = create_client(server.address)
self.addCleanup(cleanup_client, client)
return (client, server)
def _run_server(self):
# logging.basicConfig(level=logging.DEBUG, filename='bbhashserv.log', filemode='w',
# format='%(levelname)s %(filename)s:%(lineno)d %(message)s')
self.server.serve_forever()
def setUp(self):
if sys.version_info < (3, 5, 0):
self.skipTest('Python 3.5 or later required')
self.temp_dir = tempfile.TemporaryDirectory(prefix='bb-hashserv')
self.addCleanup(self.temp_dir.cleanup)
self.dbfile = os.path.join(self.temp_dir.name, 'db.sqlite')
(self.client, self.server) = self.start_server()
self.server = create_server(self.get_server_addr(), self.dbfile)
self.server_thread = multiprocessing.Process(target=self._run_server)
self.server_thread.start()
self.client = create_client(self.server.address)
def assertClientGetHash(self, client, taskhash, unihash):
result = client.get_unihash(self.METHOD, taskhash)
self.assertEqual(result, unihash)
def tearDown(self):
# Shutdown server
s = getattr(self, 'server', None)
if s is not None:
self.server_thread.terminate()
self.server_thread.join()
self.client.close()
self.temp_dir.cleanup()
class HashEquivalenceCommonTests(object):
def test_create_hash(self):
# Simple test that hashes can be created
taskhash = '35788efcb8dfb0a02659d81cf2bfd695fb30faf9'
outhash = '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f'
unihash = 'f46d3fbb439bd9b921095da657a4de906510d2cd'
self.assertClientGetHash(self.client, taskhash, None)
result = self.client.get_unihash(self.METHOD, taskhash)
self.assertIsNone(result, msg='Found unexpected task, %r' % result)
result = self.client.report_unihash(taskhash, self.METHOD, outhash, unihash)
self.assertEqual(result['unihash'], unihash, 'Server returned bad unihash')
@@ -109,19 +84,22 @@ class HashEquivalenceCommonTests(object):
unihash = '218e57509998197d570e2c98512d0105985dffc9'
self.client.report_unihash(taskhash, self.METHOD, outhash, unihash)
self.assertClientGetHash(self.client, taskhash, unihash)
result = self.client.get_unihash(self.METHOD, taskhash)
self.assertEqual(result, unihash)
outhash2 = '0904a7fe3dc712d9fd8a74a616ddca2a825a8ee97adf0bd3fc86082c7639914d'
unihash2 = 'ae9a7d252735f0dafcdb10e2e02561ca3a47314c'
self.client.report_unihash(taskhash, self.METHOD, outhash2, unihash2)
self.assertClientGetHash(self.client, taskhash, unihash)
result = self.client.get_unihash(self.METHOD, taskhash)
self.assertEqual(result, unihash)
outhash3 = '77623a549b5b1a31e3732dfa8fe61d7ce5d44b3370f253c5360e136b852967b4'
unihash3 = '9217a7d6398518e5dc002ed58f2cbbbc78696603'
self.client.report_unihash(taskhash, self.METHOD, outhash3, unihash3)
self.assertClientGetHash(self.client, taskhash, unihash)
result = self.client.get_unihash(self.METHOD, taskhash)
self.assertEqual(result, unihash)
def test_huge_message(self):
# Simple test that hashes can be created
@@ -129,7 +107,8 @@ class HashEquivalenceCommonTests(object):
outhash = '3c979c3db45c569f51ab7626a4651074be3a9d11a84b1db076f5b14f7d39db44'
unihash = '90e9bc1d1f094c51824adca7f8ea79a048d68824'
self.assertClientGetHash(self.client, taskhash, None)
result = self.client.get_unihash(self.METHOD, taskhash)
self.assertIsNone(result, msg='Found unexpected task, %r' % result)
siginfo = "0" * (self.client.max_chunk * 4)
@@ -177,140 +156,16 @@ class HashEquivalenceCommonTests(object):
self.assertFalse(failures)
def test_upstream_server(self):
# Tests upstream server support. This is done by creating two servers
# that share a database file. The downstream server has its upstream
# set to the test server, whereas the side server doesn't. This allows
# verification that the hash requests are being proxied to the upstream
# server by verifying that they appear on the downstream client, but not
# the side client. It also verifies that the results are pulled into
# the downstream database by checking that the downstream and side servers
# match after the downstream is done waiting for all backfill tasks
(down_client, down_server) = self.start_server(upstream=self.server.address)
(side_client, side_server) = self.start_server(dbpath=down_server.dbpath)
def check_hash(taskhash, unihash, old_sidehash):
nonlocal down_client
nonlocal side_client
# check upstream server
self.assertClientGetHash(self.client, taskhash, unihash)
# Hash should *not* be present on the side server
self.assertClientGetHash(side_client, taskhash, old_sidehash)
# Hash should be present on the downstream server, since it
# will defer to the upstream server. This will trigger
# the backfill in the downstream server
self.assertClientGetHash(down_client, taskhash, unihash)
# After waiting for the downstream client to finish backfilling the
# task from the upstream server, it should appear in the side server
# since the database is populated
down_client.backfill_wait()
self.assertClientGetHash(side_client, taskhash, unihash)
# Basic report
taskhash = '8aa96fcffb5831b3c2c0cb75f0431e3f8b20554a'
outhash = 'afe240a439959ce86f5e322f8c208e1fedefea9e813f2140c81af866cc9edf7e'
unihash = '218e57509998197d570e2c98512d0105985dffc9'
self.client.report_unihash(taskhash, self.METHOD, outhash, unihash)
check_hash(taskhash, unihash, None)
# Duplicated taskhash with multiple output hashes and unihashes.
# All servers should agree with the originally reported hash
outhash2 = '0904a7fe3dc712d9fd8a74a616ddca2a825a8ee97adf0bd3fc86082c7639914d'
unihash2 = 'ae9a7d252735f0dafcdb10e2e02561ca3a47314c'
self.client.report_unihash(taskhash, self.METHOD, outhash2, unihash2)
check_hash(taskhash, unihash, unihash)
# Report an equivalent task. The sideload will originally report
# no unihash until backfilled
taskhash3 = "044c2ec8aaf480685a00ff6ff49e6162e6ad34e1"
unihash3 = "def64766090d28f627e816454ed46894bb3aab36"
self.client.report_unihash(taskhash3, self.METHOD, outhash, unihash3)
check_hash(taskhash3, unihash, None)
# Test that reporting a unihash in the downstream client isn't
# propagating to the upstream server
taskhash4 = "e3da00593d6a7fb435c7e2114976c59c5fd6d561"
outhash4 = "1cf8713e645f491eb9c959d20b5cae1c47133a292626dda9b10709857cbe688a"
unihash4 = "3b5d3d83f07f259e9086fcb422c855286e18a57d"
down_client.report_unihash(taskhash4, self.METHOD, outhash4, unihash4)
down_client.backfill_wait()
self.assertClientGetHash(down_client, taskhash4, unihash4)
self.assertClientGetHash(side_client, taskhash4, unihash4)
self.assertClientGetHash(self.client, taskhash4, None)
# Test that reporting a unihash in the downstream is able to find a
# match which was previously reported to the upstream server
taskhash5 = '35788efcb8dfb0a02659d81cf2bfd695fb30faf9'
outhash5 = '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f'
unihash5 = 'f46d3fbb439bd9b921095da657a4de906510d2cd'
result = self.client.report_unihash(taskhash5, self.METHOD, outhash5, unihash5)
taskhash6 = '35788efcb8dfb0a02659d81cf2bfd695fb30fafa'
unihash6 = 'f46d3fbb439bd9b921095da657a4de906510d2ce'
result = down_client.report_unihash(taskhash6, self.METHOD, outhash5, unihash6)
self.assertEqual(result['unihash'], unihash5, 'Server failed to copy unihash from upstream')
def test_ro_server(self):
(ro_client, ro_server) = self.start_server(dbpath=self.server.dbpath, read_only=True)
# Report a hash via the read-write server
taskhash = '35788efcb8dfb0a02659d81cf2bfd695fb30faf9'
outhash = '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f'
unihash = 'f46d3fbb439bd9b921095da657a4de906510d2cd'
result = self.client.report_unihash(taskhash, self.METHOD, outhash, unihash)
self.assertEqual(result['unihash'], unihash, 'Server returned bad unihash')
# Check the hash via the read-only server
self.assertClientGetHash(ro_client, taskhash, unihash)
# Ensure that reporting via the read-only server fails
taskhash2 = 'c665584ee6817aa99edfc77a44dd853828279370'
outhash2 = '3c979c3db45c569f51ab7626a4651074be3a9d11a84b1db076f5b14f7d39db44'
unihash2 = '90e9bc1d1f094c51824adca7f8ea79a048d68824'
with self.assertRaises(HashConnectionError):
ro_client.report_unihash(taskhash2, self.METHOD, outhash2, unihash2)
# Ensure that the database was not modified
self.assertClientGetHash(self.client, taskhash2, None)
class TestHashEquivalenceUnixServer(TestHashEquivalenceServer, unittest.TestCase):
def get_server_addr(self):
return "unix://" + os.path.join(self.temp_dir.name, 'sock')
class TestHashEquivalenceUnixServer(HashEquivalenceTestSetup, HashEquivalenceCommonTests, unittest.TestCase):
def get_server_addr(self, server_idx):
return "unix://" + os.path.join(self.temp_dir.name, 'sock%d' % server_idx)
class TestHashEquivalenceUnixServerLongPath(HashEquivalenceTestSetup, unittest.TestCase):
DEEP_DIRECTORY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/ccccccccccccccccccccccccccccccccccccccccccc"
def get_server_addr(self, server_idx):
os.makedirs(os.path.join(self.temp_dir.name, self.DEEP_DIRECTORY), exist_ok=True)
return "unix://" + os.path.join(self.temp_dir.name, self.DEEP_DIRECTORY, 'sock%d' % server_idx)
def test_long_sock_path(self):
# Simple test that hashes can be created
taskhash = '35788efcb8dfb0a02659d81cf2bfd695fb30faf9'
outhash = '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f'
unihash = 'f46d3fbb439bd9b921095da657a4de906510d2cd'
self.assertClientGetHash(self.client, taskhash, None)
result = self.client.report_unihash(taskhash, self.METHOD, outhash, unihash)
self.assertEqual(result['unihash'], unihash, 'Server returned bad unihash')
class TestHashEquivalenceTCPServer(HashEquivalenceTestSetup, HashEquivalenceCommonTests, unittest.TestCase):
def get_server_addr(self, server_idx):
class TestHashEquivalenceTCPServer(TestHashEquivalenceServer, unittest.TestCase):
def get_server_addr(self):
# Some hosts cause asyncio module to misbehave, when IPv6 is not enabled.
# If IPv6 is enabled, it should be safe to use localhost directly, in general
# case it is more reliable to resolve the IP address explicitly.
return socket.gethostbyname("localhost") + ":0"
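For example, resolving the name up front yields a concrete IPv4 address, and port 0 lets the OS assign a free port (a minimal illustration, not part of the test suite):

import socket

# Typically prints "127.0.0.1:0"; the explicit resolution sidesteps
# IPv6-related asyncio quirks and ":0" requests an OS-assigned port.
print(socket.gethostbyname("localhost") + ":0")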


@@ -94,7 +94,7 @@ class LayerIndex():
if not param:
continue
item = param.split('=', 1)
logger.debug(item)
logger.debug(1, item)
param_dict[item[0]] = item[1]
return param_dict
@@ -123,7 +123,7 @@ class LayerIndex():
up = urlparse(url)
if username:
logger.debug("Configuring authentication for %s..." % url)
logger.debug(1, "Configuring authentication for %s..." % url)
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, "%s://%s" % (up.scheme, up.netloc), username, password)
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
@@ -133,20 +133,20 @@ class LayerIndex():
urllib.request.install_opener(opener)
logger.debug("Fetching %s (%s)..." % (url, ["without authentication", "with authentication"][bool(username)]))
logger.debug(1, "Fetching %s (%s)..." % (url, ["without authentication", "with authentication"][bool(username)]))
try:
res = urlopen(Request(url, headers={'User-Agent': 'Mozilla/5.0 (bitbake/lib/layerindex)'}, unverifiable=True))
except urllib.error.HTTPError as e:
logger.debug("HTTP Error: %s: %s" % (e.code, e.reason))
logger.debug(" Requested: %s" % (url))
logger.debug(" Actual: %s" % (e.geturl()))
logger.debug(1, "HTTP Error: %s: %s" % (e.code, e.reason))
logger.debug(1, " Requested: %s" % (url))
logger.debug(1, " Actual: %s" % (e.geturl()))
if e.code == 404:
logger.debug("Request not found.")
logger.debug(1, "Request not found.")
raise LayerIndexFetchError(url, e)
else:
logger.debug("Headers:\n%s" % (e.headers))
logger.debug(1, "Headers:\n%s" % (e.headers))
raise LayerIndexFetchError(url, e)
except OSError as e:
error = 0
@@ -170,7 +170,7 @@ class LayerIndex():
raise LayerIndexFetchError(url, "Unable to fetch OSError exception: %s" % e)
finally:
logger.debug("...fetching %s (%s), done." % (url, ["without authentication", "with authentication"][bool(username)]))
logger.debug(1, "...fetching %s (%s), done." % (url, ["without authentication", "with authentication"][bool(username)]))
return res
@@ -205,14 +205,14 @@ The format of the indexURI:
if reload:
self.indexes = []
logger.debug('Loading: %s' % indexURI)
logger.debug(1, 'Loading: %s' % indexURI)
if not self.plugins:
raise LayerIndexException("No LayerIndex Plugins available")
for plugin in self.plugins:
# Check if the plugin was initialized
logger.debug('Trying %s' % plugin.__class__)
logger.debug(1, 'Trying %s' % plugin.__class__)
if not hasattr(plugin, 'type') or not plugin.type:
continue
try:
@@ -220,11 +220,11 @@ The format of the indexURI:
indexEnt = plugin.load_index(indexURI, load)
break
except LayerIndexPluginUrlError as e:
logger.debug("%s doesn't support %s" % (plugin.type, e.url))
logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
except NotImplementedError:
pass
else:
logger.debug("No plugins support %s" % indexURI)
logger.debug(1, "No plugins support %s" % indexURI)
raise LayerIndexException("No plugins support %s" % indexURI)
# Mark CONFIG data as something we've added...
@@ -255,19 +255,19 @@ will write out the individual elements split by layer and related components.
for plugin in self.plugins:
# Check if the plugin was initialized
logger.debug('Trying %s' % plugin.__class__)
logger.debug(1, 'Trying %s' % plugin.__class__)
if not hasattr(plugin, 'type') or not plugin.type:
continue
try:
plugin.store_index(indexURI, index)
break
except LayerIndexPluginUrlError as e:
logger.debug("%s doesn't support %s" % (plugin.type, e.url))
logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
except NotImplementedError:
logger.debug("Store not implemented in %s" % plugin.type)
logger.debug(1, "Store not implemented in %s" % plugin.type)
pass
else:
logger.debug("No plugins support %s" % indexURI)
logger.debug(1, "No plugins support %s" % indexURI)
raise LayerIndexException("No plugins support %s" % indexURI)
@@ -292,7 +292,7 @@ layerBranches set. If not, they are effectively blank.'''
the default configuration until the first vcs_url/branch match.'''
for index in self.indexes:
logger.debug(' searching %s' % index.config['DESCRIPTION'])
logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
layerBranch = index.find_vcs_url(vcs_url, [branch])
if layerBranch:
return layerBranch
@@ -304,7 +304,7 @@ layerBranches set. If not, they are effectively blank.'''
If a branch has not been specified, we will iterate over the branches in
the default configuration until the first collection/branch match.'''
logger.debug('find_collection: %s (%s) %s' % (collection, version, branch))
logger.debug(1, 'find_collection: %s (%s) %s' % (collection, version, branch))
if branch:
branches = [branch]
@@ -312,12 +312,12 @@ layerBranches set. If not, they are effectively blank.'''
branches = None
for index in self.indexes:
logger.debug(' searching %s' % index.config['DESCRIPTION'])
logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
layerBranch = index.find_collection(collection, version, branches)
if layerBranch:
return layerBranch
else:
logger.debug('Collection %s (%s) not found for branch (%s)' % (collection, version, branch))
logger.debug(1, 'Collection %s (%s) not found for branch (%s)' % (collection, version, branch))
return None
def find_layerbranch(self, name, branch=None):
@@ -408,7 +408,7 @@ layerBranches set. If not, they are effectively blank.'''
version=deplayerbranch.version
)
if rdeplayerbranch != deplayerbranch:
logger.debug('Replaced %s:%s:%s with %s:%s:%s' % \
logger.debug(1, 'Replaced %s:%s:%s with %s:%s:%s' % \
(deplayerbranch.index.config['DESCRIPTION'],
deplayerbranch.branch.name,
deplayerbranch.layer.name,
@@ -1121,7 +1121,7 @@ class LayerBranch(LayerIndexItemObj):
@property
def branch(self):
try:
logger.debug("Get branch object from branches[%s]" % (self.branch_id))
logger.debug(1, "Get branch object from branches[%s]" % (self.branch_id))
return self.index.branches[self.branch_id]
except KeyError:
raise AttributeError('Unable to find branches in index to map branch_id %s' % self.branch_id)
@@ -1149,7 +1149,7 @@ class LayerBranch(LayerIndexItemObj):
@actual_branch.setter
def actual_branch(self, value):
logger.debug("Set actual_branch to %s .. name is %s" % (value, self.branch.name))
logger.debug(1, "Set actual_branch to %s .. name is %s" % (value, self.branch.name))
if value != self.branch.name:
self._setattr('actual_branch', value, prop=False)
else:

View File

@@ -173,7 +173,7 @@ class CookerPlugin(layerindexlib.plugin.IndexPlugin):
else:
branches = ['HEAD']
logger.debug("Loading cooker data branches %s" % branches)
logger.debug(1, "Loading cooker data branches %s" % branches)
index = self._load_bblayers(branches=branches)
@@ -220,7 +220,7 @@ class CookerPlugin(layerindexlib.plugin.IndexPlugin):
required=required, layerbranch=layerBranchId,
dependency=depLayerBranch.layer_id)
logger.debug('%s requires %s' % (layerDependency.layer.name, layerDependency.dependency.name))
logger.debug(1, '%s requires %s' % (layerDependency.layer.name, layerDependency.dependency.name))
index.add_element("layerDependencies", [layerDependency])
return layerDependencyId


@@ -82,7 +82,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
def load_cache(path, index, branches=[]):
logger.debug('Loading json file %s' % path)
logger.debug(1, 'Loading json file %s' % path)
with open(path, 'rt', encoding='utf-8') as f:
pindex = json.load(f)
@@ -102,7 +102,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
if newpBranch:
index.add_raw_element('branches', layerindexlib.Branch, newpBranch)
else:
logger.debug('No matching branches (%s) in index file(s)' % branches)
logger.debug(1, 'No matching branches (%s) in index file(s)' % branches)
# No matching branches.. return nothing...
return
@@ -120,7 +120,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
load_cache(up.path, index, branches)
return index
logger.debug('Loading from dir %s...' % (up.path))
logger.debug(1, 'Loading from dir %s...' % (up.path))
for (dirpath, _, filenames) in os.walk(up.path):
for filename in filenames:
if not filename.endswith('.json'):
@@ -144,7 +144,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
def _get_json_response(apiurl=None, username=None, password=None, retry=True):
assert apiurl is not None
logger.debug("fetching %s" % apiurl)
logger.debug(1, "fetching %s" % apiurl)
up = urlparse(apiurl)
@@ -163,9 +163,9 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
parsed = json.loads(res.read().decode('utf-8'))
except ConnectionResetError:
if retry:
logger.debug("%s: Connection reset by peer. Retrying..." % url)
logger.debug(1, "%s: Connection reset by peer. Retrying..." % url)
parsed = _get_json_response(apiurl=up_stripped.geturl(), username=username, password=password, retry=False)
logger.debug("%s: retry successful.")
logger.debug(1, "%s: retry successful.")
else:
raise layerindexlib.LayerIndexFetchError('%s: Connection reset by peer. Is there a firewall blocking your connection?' % apiurl)
@@ -207,25 +207,25 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
if "*" not in branches:
filter = "?filter=name:%s" % "OR".join(branches)
logger.debug("Loading %s from %s" % (branches, index.apilinks['branches']))
logger.debug(1, "Loading %s from %s" % (branches, index.apilinks['branches']))
# The link won't include username/password, so pull it from the original url
pindex['branches'] = _get_json_response(index.apilinks['branches'] + filter,
username=up.username, password=up.password)
if not pindex['branches']:
logger.debug("No valid branches (%s) found at url %s." % (branch, url))
logger.debug(1, "No valid branches (%s) found at url %s." % (branch, url))
return index
index.add_raw_element("branches", layerindexlib.Branch, pindex['branches'])
# Load all of the layerItems (these can not be easily filtered)
logger.debug("Loading %s from %s" % ('layerItems', index.apilinks['layerItems']))
logger.debug(1, "Loading %s from %s" % ('layerItems', index.apilinks['layerItems']))
# The link won't include username/password, so pull it from the original url
pindex['layerItems'] = _get_json_response(index.apilinks['layerItems'],
username=up.username, password=up.password)
if not pindex['layerItems']:
logger.debug("No layers were found at url %s." % (url))
logger.debug(1, "No layers were found at url %s." % (url))
return index
index.add_raw_element("layerItems", layerindexlib.LayerItem, pindex['layerItems'])
@@ -235,13 +235,13 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
for branch in index.branches:
filter = "?filter=branch__name:%s" % index.branches[branch].name
logger.debug("Loading %s from %s" % ('layerBranches', index.apilinks['layerBranches']))
logger.debug(1, "Loading %s from %s" % ('layerBranches', index.apilinks['layerBranches']))
# The link won't include username/password, so pull it from the original url
pindex['layerBranches'] = _get_json_response(index.apilinks['layerBranches'] + filter,
username=up.username, password=up.password)
if not pindex['layerBranches']:
logger.debug("No valid layer branches (%s) found at url %s." % (branches or "*", url))
logger.debug(1, "No valid layer branches (%s) found at url %s." % (branches or "*", url))
return index
index.add_raw_element("layerBranches", layerindexlib.LayerBranch, pindex['layerBranches'])
@@ -256,7 +256,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
("distros", layerindexlib.Distro)]:
if lName not in load:
continue
logger.debug("Loading %s from %s" % (lName, index.apilinks[lName]))
logger.debug(1, "Loading %s from %s" % (lName, index.apilinks[lName]))
# The link won't include username/password, so pull it from the original url
pindex[lName] = _get_json_response(index.apilinks[lName] + filter,
@@ -283,7 +283,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
if up.scheme != 'file':
raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
logger.debug("Storing to %s..." % up.path)
logger.debug(1, "Storing to %s..." % up.path)
try:
layerbranches = index.layerBranches
@@ -299,12 +299,12 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
if getattr(index, objects)[obj].layerbranch_id == layerbranchid:
filtered.append(getattr(index, objects)[obj]._data)
except AttributeError:
logger.debug('No obj.layerbranch_id: %s' % objects)
logger.debug(1, 'No obj.layerbranch_id: %s' % objects)
# No simple filter method, just include it...
try:
filtered.append(getattr(index, objects)[obj]._data)
except AttributeError:
logger.debug('No obj._data: %s %s' % (objects, type(obj)))
logger.debug(1, 'No obj._data: %s %s' % (objects, type(obj)))
filtered.append(obj)
return filtered


@@ -72,7 +72,7 @@ class LayerIndexCookerTest(LayersTest):
def test_find_collection(self):
def _check(collection, expected):
self.logger.debug("Looking for collection %s..." % collection)
self.logger.debug(1, "Looking for collection %s..." % collection)
result = self.layerindex.find_collection(collection)
if expected:
self.assertIsNotNone(result, msg="Did not find %s when it shouldn't be there" % collection)
@@ -91,7 +91,7 @@ class LayerIndexCookerTest(LayersTest):
def test_find_layerbranch(self):
def _check(name, expected):
self.logger.debug("Looking for layerbranch %s..." % name)
self.logger.debug(1, "Looking for layerbranch %s..." % name)
result = self.layerindex.find_layerbranch(name)
if expected:
self.assertIsNotNone(result, msg="Did not find %s when it shouldn't be there" % collection)


@@ -57,11 +57,11 @@ class LayerIndexWebRestApiTest(LayersTest):
type in self.layerindex.indexes[0].config['local']:
continue
for id in getattr(self.layerindex.indexes[0], type):
self.logger.debug("type %s" % (type))
self.logger.debug(1, "type %s" % (type))
self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number not in reloaded index")
self.logger.debug("%s ? %s" % (getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id]))
self.logger.debug(1, "%s ? %s" % (getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id]))
self.assertEqual(getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id], msg="Reloaded contents different")
@@ -80,11 +80,11 @@ class LayerIndexWebRestApiTest(LayersTest):
type in self.layerindex.indexes[0].config['local']:
continue
for id in getattr(self.layerindex.indexes[0] ,type):
self.logger.debug("type %s" % (type))
self.logger.debug(1, "type %s" % (type))
self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number missing from reloaded data")
self.logger.debug("%s ? %s" % (getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id]))
self.logger.debug(1, "%s ? %s" % (getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id]))
self.assertEqual(getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id], msg="reloaded data does not match original")
@@ -111,14 +111,14 @@ class LayerIndexWebRestApiTest(LayersTest):
if dep.layer.name == 'meta-python':
break
else:
self.logger.debug("meta-python was not found")
self.logger.debug(1, "meta-python was not found")
raise self.failureException
# Only check the first element...
break
else:
# Empty list, this is bad.
self.logger.debug("Empty list of dependencies")
self.logger.debug(1, "Empty list of dependencies")
self.assertIsNotNone(first, msg="Empty list of dependencies")
# Last dep should be the requested item
@@ -128,7 +128,7 @@ class LayerIndexWebRestApiTest(LayersTest):
@skipIfNoNetwork()
def test_find_collection(self):
def _check(collection, expected):
self.logger.debug("Looking for collection %s..." % collection)
self.logger.debug(1, "Looking for collection %s..." % collection)
result = self.layerindex.find_collection(collection)
if expected:
self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection)
@@ -148,11 +148,11 @@ class LayerIndexWebRestApiTest(LayersTest):
@skipIfNoNetwork()
def test_find_layerbranch(self):
def _check(name, expected):
self.logger.debug("Looking for layerbranch %s..." % name)
self.logger.debug(1, "Looking for layerbranch %s..." % name)
for index in self.layerindex.indexes:
for layerbranchid in index.layerBranches:
self.logger.debug("Present: %s" % index.layerBranches[layerbranchid].layer.name)
self.logger.debug(1, "Present: %s" % index.layerBranches[layerbranchid].layer.name)
result = self.layerindex.find_layerbranch(name)
if expected:
self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection)


@@ -119,7 +119,7 @@ class BuildTest(unittest.TestCase):
if os.environ.get("TOASTER_TEST_USE_SSTATE_MIRROR"):
ProjectVariable.objects.get_or_create(
name="SSTATE_MIRRORS",
value="file://.* http://sstate.yoctoproject.org/PATH;downloadfilename=PATH",
value="file://.* http://autobuilder.yoctoproject.org/pub/sstate/PATH;downloadfilename=PATH",
project=project)
ProjectTarget.objects.create(project=project,


@@ -1,78 +0,0 @@
#!/usr/bin/env python3
# Copyright (C) 2020 Agilent Technologies, Inc.
# Author: Chris Laplante <chris.laplante@agilent.com>
# This sendemail-validate hook injects 'From: ' header lines into outgoing
# emails sent via 'git send-email', to ensure that accurate commit authorship
# information is present. It was created because some email servers
# (notably Microsoft Exchange / Office 360) seem to butcher outgoing patches,
# resulting in incorrect authorship.
# Current limitations:
# 1. Assumes one patch per email
# 2. Minimal error checking
#
# Installation:
# 1. Copy to .git/hooks/sendemail-validate
# 2. chmod +x .git/hooks/sendemail-validate
import enum
import re
import subprocess
import sys
class Subject(enum.IntEnum):
NOT_SEEN = 0
CONSUMING = 1
SEEN = 2
def make_from_line():
cmd = ["git", "var", "GIT_COMMITTER_IDENT"]
proc = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, universal_newlines=True)
regex = re.compile(r"^(.*>).*$")
match = regex.match(proc.stdout)
assert match is not None
return "From: {0}".format(match.group(1))
def main():
email = sys.argv[1]
with open(email, "r") as f:
email_lines = f.read().split("\n")
subject_seen = Subject.NOT_SEEN
first_body_line = None
for i, line in enumerate(email_lines):
if (subject_seen == Subject.NOT_SEEN) and line.startswith("Subject: "):
subject_seen = Subject.CONSUMING
continue
if subject_seen == Subject.CONSUMING:
if not line.strip():
subject_seen = Subject.SEEN
continue
if subject_seen == Subject.SEEN:
first_body_line = i
break
assert subject_seen == Subject.SEEN
assert first_body_line is not None
from_line = make_from_line()
# Only add FROM line if it is not already there
if email_lines[first_body_line] != from_line:
email_lines.insert(first_body_line, from_line)
email_lines.insert(first_body_line + 1, "")
with open(email, "w") as f:
f.write("\n".join(email_lines))
return 0
if __name__ == "__main__":
sys.exit(main())
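For illustration, this is roughly what make_from_line() extracts; the ident string below is a hypothetical example of `git var GIT_COMMITTER_IDENT` output:

import re

# Hypothetical committer ident: "Name <email> timestamp timezone"
ident = "Jane Developer <jane@example.com> 1607328000 +0000"
match = re.match(r"^(.*>).*$", ident)
assert match is not None
print("From: {0}".format(match.group(1)))
# -> From: Jane Developer <jane@example.com>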


@@ -1,3 +1 @@
_build/
Pipfile.lock
.vscode/


@@ -3,7 +3,7 @@
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= -j auto
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build


@@ -1,14 +0,0 @@
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
sphinx = "*"
sphinx-rtd-theme = "*"
pyyaml = "*"
[requires]
python_version = "3"


@@ -2,7 +2,7 @@ documentation
=============
This is the directory that contains the Yocto Project documentation. The Yocto
Project source repositories at https://git.yoctoproject.org/cgit.cgi have two
Project source repositories at http://git.yoctoproject.org/cgit.cgi have two
instances of the "documentation" directory. You should understand each of
these instances.
@@ -47,12 +47,12 @@ Folders exist for individual manuals as follows:
Each folder is self-contained regarding content and figures.
If you want to find HTML versions of the Yocto Project manuals on the web,
go to https://www.yoctoproject.org and click on the "Docs" tab. From
go to http://www.yoctoproject.org and click on the "Documentation" tab. From
there you have access to archived documentation from previous releases, current
documentation for the latest release, and "Docs in Progress" for the release
currently being developed.
In general, the Yocto Project site (https://www.yoctoproject.org) is a great
In general, the Yocto Project site (http://www.yoctoproject.org) is a great
reference for both information and downloads.
poky.yaml
@@ -91,13 +91,13 @@ Yocto Project documentation website
A new website has been created to host the Yocto Project
documentation, it can be found at: https://docs.yoctoproject.org/.
The entire Yocto Project documentation, as well as the BitBake manual,
The entire Yocto Project documentation, as well as the BitBake manual
is published on this website, including all previously released
versions. A version switcher was added, as a drop-down menu on the top
of the page to switch back and forth between the various versions of
the current active Yocto Project releases.
Transition pages have been added (as rst files) to show links to old
Transition pages have been added (as rst file) to show links to old
versions of the Yocto Project documentation with links to each manual
generated with DocBook.
@@ -109,7 +109,7 @@ obvious reasons, we will only support building the Yocto Project
documentation with Python3.
Sphinx might be available in your Linux distro packages repositories,
however it is not recommended to use distro packages, as they might be
however it is not recommend using distro packages, as they might be
old versions, especially if you are using an LTS version of your
distro. The recommended method to install Sphinx and all required
dependencies is to use the Python Package Index (pip).
@@ -127,13 +127,6 @@ The resulting HTML index page will be _build/html/index.html, and you
can browse your own copy of the locally generated documentation with
your browser.
Alternatively, you can use Pipenv to automatically install all required
dependencies in a virtual environment:
$ cd documentation
$ pipenv install
$ pipenv run make html
Sphinx theme and CSS customization
==================================
@@ -185,7 +178,7 @@ Sphinx has a glossary directive. From
https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#glossary:
This directive must contain a reST definition list with terms and
definitions. It's then possible to refer to each definition through the
definitions. The definitions will then be referencable with the
[https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-term
'term' role].
@@ -206,7 +199,7 @@ however there are important shortcomings. For example they cannot be
used/nested inside code-block sections.
A Sphinx extension was implemented to support variable substitutions
to mimic the DocBook based documentation behavior. Variable
to mimic the DocBook based documentation behavior. Variabes
substitutions are done while reading/parsing the .rst files. The
pattern for variables substitutions is the same as with DocBook,
e.g. `&VAR;`.
@@ -222,13 +215,13 @@ For example, the following .rst content will produce the 'expected'
content:
.. code-block::
$ mkdir poky-&DISTRO;
$ mkdir ~/poky-&DISTRO;
or
$ git clone &YOCTO_GIT_URL;/git/poky -b &DISTRO_NAME_NO_CAP;
Variables can be nested, like it was the case for DocBook:
YOCTO_HOME_URL : "https://www.yoctoproject.org"
YOCTO_HOME_URL : "http://www.yoctoproject.org"
YOCTO_DOCS_URL : "&YOCTO_HOME_URL;/docs"
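A minimal sketch of such a substitution pass, assuming a simple repeated regex replacement to handle nesting (this is illustrative only, not the actual Sphinx extension):

import re

variables = {
    "YOCTO_HOME_URL": "https://www.yoctoproject.org",
    "YOCTO_DOCS_URL": "&YOCTO_HOME_URL;/docs",
}

def expand(text, variables):
    pattern = re.compile(r"&(\w+);")
    while True:
        expanded = pattern.sub(lambda m: variables[m.group(1)], text)
        if expanded == text:  # no more substitutions, nesting resolved
            return expanded
        text = expanded

print(expand("See &YOCTO_DOCS_URL; for details", variables))
# -> See https://www.yoctoproject.org/docs for details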
Note directive
@@ -237,14 +230,14 @@ Note directive
Sphinx has a builtin 'note' directive that produces clean Note section
in the output file. There are various types of directives such as
"attention", "caution", "danger", "error", "hint", "important", "tip",
"warning", "admonition" that are supported, and additional directives
"warning", "admonition" that are supported, and additional directive
can be added as Sphinx extension if needed.
Figures
=======
The Yocto Project documentation has many figures/images. Sphinx has a
'figure' directive which is straightforward to use. To include a
'figure' directive which is straight forward to use. To include a
figure in the body of the documentation:
.. image:: figures/YP-flow-diagram.png
@@ -259,13 +252,10 @@ websites.
More information can be found here:
https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html.
Anchor (<#link>) links are forbidden as they are not checked by Sphinx during
the build and may be broken without knowing about it.
References
==========
The following extension is enabled by default:
The following extension is enabed by default:
sphinx.ext.autosectionlabel
(https://www.sphinx-doc.org/en/master/usage/extensions/autosectionlabel.html).
@@ -274,7 +264,7 @@ autosectionlabel_prefix_document is enabled by default, so that we can
insert references from any document.
For example, to insert an HTML link to a section from
documentation/manual/intro.rst, use:
documentaion/manual/intro.rst, use:
Please check this :ref:`manual/intro:Cross-References to Locations in the Same Document`
@@ -297,8 +287,7 @@ Extlinks
The sphinx.ext.extlinks extension is enabled by default
(https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html#use-the-external-links-extension),
and it is configured with the 'extlinks' definitions in
the 'documentation/conf.py' file:
and it is configured with:
'yocto_home': ('https://yoctoproject.org%s', None),
'yocto_wiki': ('https://wiki.yoctoproject.org%s', None),
@@ -310,10 +299,6 @@ the 'documentation/conf.py' file:
'yocto_git': ('https://git.yoctoproject.org%s', None),
'oe_home': ('https://www.openembedded.org%s', None),
'oe_lists': ('https://lists.openembedded.org%s', None),
'oe_git': ('https://git.openembedded.org%s', None),
'oe_wiki': ('https://www.openembedded.org/wiki%s', None),
'oe_layerindex': ('https://layers.openembedded.org%s', None),
'oe_layer': ('https://layers.openembedded.org/layerindex/branch/master/layer%s', None),
It creates convenient shortcuts which can be used throughout the
documentation rst files, as:
@@ -333,9 +318,3 @@ References to the bitbake manual can be done like this:
See the ":ref:`-D <bitbake:bitbake-user-manual/bitbake-user-manual-intro:usage and syntax>`" option
or
:term:`bitbake:BB_NUMBER_PARSE_THREADS`
Submitting documentation changes
================================
Please see the top level README file in this repository for details of where
to send patches.


@@ -1,19 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
=====================
BitBake Documentation
=====================
|
BitBake was originally a part of the OpenEmbedded project. It was inspired by
the Portage package management system used by the Gentoo Linux distribution. In
2004, the OpenEmbedded project was split into two distinct pieces:
- BitBake, a generic task executor
- OpenEmbedded, a metadata set utilized by BitBake
Today, BitBake is the primary build tool of OpenEmbedded based projects, such as
the Yocto Project.
The BitBake documentation can be found :doc:`here <bitbake:index>`.


@@ -8,11 +8,11 @@
Permission is granted to copy, distribute and/or modify this document under the
terms of the `Creative Commons Attribution-Share Alike 2.0 UK: England & Wales
<https://creativecommons.org/licenses/by-sa/2.0/uk/>`_ as published by Creative
<http://creativecommons.org/licenses/by-sa/2.0/uk/>`_ as published by Creative
Commons.
To report any inaccuracies or problems with this (or any other Yocto Project)
manual, or to send additions or changes, please send email/patches to the Yocto
Project documentation mailing list at ``docs@lists.yoctoproject.org`` or
log into the Freenode ``#yocto`` channel.
log into the freenode ``#yocto`` channel.


@@ -20,7 +20,7 @@ build a reference embedded OS called Poky.
(:term:`Build Host`) is not
a native Linux system, you can still perform these steps by using
CROss PlatformS (CROPS) and setting up a Poky container. See the
:ref:`dev-manual/start:setting up to use cross platforms (crops)`
:ref:`dev-manual/dev-manual-start:setting up to use cross platforms (crops)`
section
in the Yocto Project Development Tasks Manual for more
information.
@@ -34,12 +34,12 @@ build a reference embedded OS called Poky.
compatible but not officially supported nor validated with
WSLv2, if you still decide to use WSL please upgrade to WSLv2.
See the :ref:`dev-manual/start:setting up to use windows
See the :ref:`dev-manual/dev-manual-start:setting up to use windows
subsystem for linux (wslv2)` section in the Yocto Project Development
Tasks Manual for more information.
If you want more conceptual or background information on the Yocto
Project, see the :doc:`/overview-manual/index`.
Project, see the :doc:`../overview-manual/overview-manual`.
Compatible Linux Distribution
=============================
@@ -52,23 +52,23 @@ following requirements:
- Runs a supported Linux distribution (i.e. recent releases of Fedora,
openSUSE, CentOS, Debian, or Ubuntu). For a list of Linux
distributions that support the Yocto Project, see the
:ref:`ref-manual/system-requirements:supported linux distributions`
:ref:`ref-manual/ref-system-requirements:supported linux distributions`
section in the Yocto Project Reference Manual. For detailed
information on preparing your build host, see the
:ref:`dev-manual/start:preparing the build host`
:ref:`dev-manual/dev-manual-start:preparing the build host`
section in the Yocto Project Development Tasks Manual.
-
- Git &MIN_GIT_VERSION; or greater
- tar &MIN_TAR_VERSION; or greater
- Python &MIN_PYTHON_VERSION; or greater.
- gcc &MIN_GCC_VERSION; or greater.
- Git 1.8.3.1 or greater
- tar 1.28 or greater
- Python 3.5.0 or greater.
- gcc 5.0 or greater.
If your build host does not meet any of these three listed version
requirements, you can take steps to prepare the system so that you
can still use the Yocto Project. See the
:ref:`ref-manual/system-requirements:required git, tar, python and gcc versions`
:ref:`ref-manual/ref-system-requirements:required git, tar, python and gcc versions`
section in the Yocto Project Reference Manual for information.
Build Host Packages
@@ -85,7 +85,7 @@ distribution:
.. note::
For host package requirements on all supported Linux distributions,
see the :ref:`ref-manual/system-requirements:required packages for the build host`
see the :ref:`ref-manual/ref-system-requirements:required packages for the build host`
section in the Yocto Project Reference Manual.
Use Git to Clone Poky
@@ -106,61 +106,46 @@ commands to clone the Poky repository.
Resolving deltas: 100% (323116/323116), done.
Checking connectivity... done.
Go to :yocto_wiki:`Releases wiki page </Releases>`, and choose a release
codename (such as ``&DISTRO_NAME_NO_CAP;``), corresponding to either the
latest stable release or a Long Term Support release.
Then move to the ``poky`` directory and take a look at existing branches:
Move to the ``poky`` directory and take a look at the tags:
.. code-block:: shell
$ cd poky
$ git branch -a
.
.
.
remotes/origin/HEAD -> origin/master
remotes/origin/dunfell
remotes/origin/dunfell-next
.
.
.
remotes/origin/gatesgarth
remotes/origin/gatesgarth-next
.
.
.
remotes/origin/master
remotes/origin/master-next
$ git fetch --tags
$ git tag
1.1_M1.final
1.1_M1.rc1
1.1_M1.rc2
1.1_M2.final
1.1_M2.rc1
.
.
.
yocto-2.5
yocto-2.5.1
yocto-2.5.2
yocto-2.6
yocto-2.6.1
yocto-2.6.2
yocto-2.7
yocto_1.5_M5.rc8
For this example, check out the ``&DISTRO_NAME_NO_CAP;`` branch based on the
``&DISTRO_NAME;`` release:
For this example, check out the branch based on the
``&DISTRO_REL_TAG;`` release:
.. code-block:: shell
$ git checkout -t origin/&DISTRO_NAME_NO_CAP; -b my-&DISTRO_NAME_NO_CAP;
Branch 'my-&DISTRO_NAME_NO_CAP;' set up to track remote branch '&DISTRO_NAME_NO_CAP;' from 'origin'.
Switched to a new branch 'my-&DISTRO_NAME_NO_CAP;'
$ git checkout tags/&DISTRO_REL_TAG; -b my-&DISTRO_REL_TAG;
Switched to a new branch 'my-&DISTRO_REL_TAG;'
The previous Git checkout command creates a local branch named
``my-&DISTRO_NAME_NO_CAP;``. The files available to you in that branch
exactly match the repository's files in the ``&DISTRO_NAME_NO_CAP;``
release branch.
Note that you can regularly type the following command in the same directory
to keep your local files in sync with the release branch:
.. code-block:: shell
$ git pull
``my-&DISTRO_REL_TAG;``. The files available to you in that branch exactly
match the repository's files in the ``&DISTRO_NAME_NO_CAP;`` development
branch at the time of the Yocto Project &DISTRO_REL_TAG; release.
For more options and information about accessing Yocto Project related
repositories, see the
:ref:`dev-manual/start:locating yocto project source files`
:ref:`dev-manual/dev-manual-start:locating yocto project source files`
section in the Yocto Project Development Tasks Manual.
Building Your Image
@@ -180,18 +165,18 @@ an entire Linux distribution, including the toolchain, from source.
infrastructure resources and get that information. A good starting
point could also be to check your web browser settings. Finally,
you can find more information on the
":yocto_wiki:`Working Behind a Network Proxy </Working_Behind_a_Network_Proxy>`"
":yocto_wiki:`Working Behind a Network Proxy </wiki/Working_Behind_a_Network_Proxy>`"
page of the Yocto Project Wiki.
#. **Initialize the Build Environment:** From within the ``poky``
directory, run the :ref:`ref-manual/structure:\`\`oe-init-build-env\`\``
directory, run the :ref:`ref-manual/ref-structure:\`\`oe-init-build-env\`\``
environment
setup script to define Yocto Project's build environment on your
build host.
.. code-block:: shell
$ cd poky
$ cd ~/poky
$ source oe-init-build-env
You had no conf/local.conf file. This configuration file has therefore been
created for you with some default values. You may wish to edit it to, for
@@ -204,7 +189,7 @@ an entire Linux distribution, including the toolchain, from source.
The Yocto Project has extensive documentation about OE including a reference
manual which can be found at:
https://docs.yoctoproject.org
http://yoctoproject.org/documentation
For more information about OpenEmbedded see their website:
http://www.openembedded.org/
@@ -219,7 +204,7 @@ an entire Linux distribution, including the toolchain, from source.
meta-toolchain
meta-ide-support
You can also run generated QEMU images with a command like 'runqemu qemux86-64'
You can also run generated qemu images with a command like 'runqemu qemux86-64'
Among other things, the script creates the :term:`Build Directory`, which is
``build`` in this case and is located in the :term:`Source Directory`. After
@@ -259,9 +244,9 @@ an entire Linux distribution, including the toolchain, from source.
$ bitbake core-image-sato
For information on using the ``bitbake`` command, see the
:ref:`overview-manual/concepts:bitbake` section in the Yocto Project Overview and
:ref:`usingpoky-components-bitbake` section in the Yocto Project Overview and
Concepts Manual, or see the ":ref:`BitBake Command
<bitbake:bitbake-user-manual/bitbake-user-manual-intro:the bitbake command>`" section in the BitBake User Manual.
<bitbake:bitbake-user-manual-command>`" section in the BitBake User Manual.
#. **Simulate Your Image Using QEMU:** Once this particular image is
built, you can start QEMU, which is a Quick EMUlator that ships with
@@ -272,7 +257,7 @@ an entire Linux distribution, including the toolchain, from source.
$ runqemu qemux86-64
If you want to learn more about running QEMU, see the
:ref:`dev-manual/qemu:using the quick emulator (qemu)` chapter in
:ref:`dev-manual/dev-manual-qemu:using the quick emulator (qemu)` chapter in
the Yocto Project Development Tasks Manual.
#. **Exit QEMU:** Exit QEMU by either clicking on the shutdown icon or by typing
@@ -308,7 +293,7 @@ Follow these steps to add a hardware layer:
.. code-block:: shell
$ cd poky
$ cd ~/poky
$ git clone https://github.com/kraj/meta-altera.git
Cloning into 'meta-altera'...
remote: Counting objects: 25170, done.
@@ -352,7 +337,7 @@ Follow these steps to add a hardware layer:
.. code-block:: shell
$ cd poky/build
$ cd ~/poky/build
$ bitbake-layers add-layer ../meta-altera
NOTE: Starting bitbake server...
Parsing recipes: 100% |##################################################################| Time: 0:00:32
@@ -361,7 +346,7 @@ Follow these steps to add a hardware layer:
You can find
more information on adding layers in the
:ref:`dev-manual/common-tasks:adding a layer using the \`\`bitbake-layers\`\` script`
:ref:`dev-manual/dev-manual-common-tasks:adding a layer using the \`\`bitbake-layers\`\` script`
section.
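Before moving on, you can confirm the layer was registered (a quick check using
the standard ``bitbake-layers`` tooling; the paths and priorities shown in the
output depend on your setup):

.. code-block:: shell

   $ bitbake-layers show-layers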
Completing these steps has added the ``meta-altera`` layer to your Yocto
@@ -389,14 +374,14 @@ The following commands run the tool to create a layer named
.. code-block:: shell
$ cd poky
$ cd ~/poky
$ bitbake-layers create-layer meta-mylayer
NOTE: Starting bitbake server...
Add your new layer with 'bitbake-layers add-layer meta-mylayer'
For more information
on layers and how to create them, see the
:ref:`dev-manual/common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`
:ref:`dev-manual/dev-manual-common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`
section in the Yocto Project Development Tasks Manual.
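The generated layer contains a minimal skeleton. The exact contents can vary by
release, but a sketch of the expected layout is:

.. code-block:: shell

   $ tree meta-mylayer
   meta-mylayer/
   ├── COPYING.MIT
   ├── README
   ├── conf
   │   └── layer.conf
   └── recipes-example
       └── example
           └── example_0.1.bb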
Where To Go Next
@@ -412,14 +397,14 @@ information including the website, wiki pages, and user manuals:
Development Community into which you can tap.
- **Developer Screencast:** The `Getting Started with the Yocto Project -
New Developer Screencast Tutorial <https://vimeo.com/36450321>`__
New Developer Screencast Tutorial <http://vimeo.com/36450321>`__
provides a 30-minute video created for users unfamiliar with the
Yocto Project but familiar with Linux build hosts. While this
screencast is somewhat dated, the introductory and fundamental
concepts are useful for the beginner.
- **Yocto Project Overview and Concepts Manual:** The
:doc:`/overview-manual/index` is a great
:doc:`../overview-manual/overview-manual` is a great
place to start to learn about the Yocto Project. This manual
introduces you to the Yocto Project and its development environment.
The manual also provides conceptual information for various aspects

@@ -44,7 +44,7 @@ machine or platform name, which is "bsp_root_name" in the above form.
To help understand the BSP layer concept, consider the BSPs that the
Yocto Project supports and provides with each release. You can see the
layers in the
:ref:`overview-manual/development-environment:yocto project source repositories`
:ref:`overview-manual/overview-manual-development-environment:yocto project source repositories`
through
a web interface at :yocto_git:`/`. If you go to that interface,
you will find a list of repositories under "Yocto Metadata Layers".
@@ -72,7 +72,7 @@ For information on typical BSP development workflow, see the
section. For more
information on how to set up a local copy of source files from a Git
repository, see the
:ref:`dev-manual/start:locating yocto project source files`
:ref:`dev-manual/dev-manual-start:locating yocto project source files`
section in the Yocto Project Development Tasks Manual.
The BSP layer's base directory (``meta-bsp_root_name``) is the root
@@ -81,7 +81,7 @@ directory of that Layer. This directory is what you add to the
``conf/bblayers.conf`` file found in your
:term:`Build Directory`, which is
established after you run the OpenEmbedded build environment setup
script (i.e. :ref:`ref-manual/structure:\`\`oe-init-build-env\`\``).
script (i.e. :ref:`ref-manual/ref-structure:\`\`oe-init-build-env\`\``).
Adding the root directory allows the :term:`OpenEmbedded Build System`
to recognize the BSP
layer and from it build an image. Here is an example: ::
@@ -128,7 +128,7 @@ you want to work with, such as: ::
and so on.
For more information on layers, see the
":ref:`dev-manual/common-tasks:understanding and creating layers`"
":ref:`dev-manual/dev-manual-common-tasks:understanding and creating layers`"
section of the Yocto Project Development Tasks Manual.
Preparing Your Build Host to Work With BSP Layers
@@ -146,7 +146,7 @@ section.
:ref:`bsp-guide/bsp:example filesystem layout` section.
#. *Set Up the Build Environment:* Be sure you are set up to use BitBake
in a shell. See the ":ref:`dev-manual/start:preparing the build host`"
in a shell. See the ":ref:`dev-manual/dev-manual-start:preparing the build host`"
section in the Yocto Project Development Tasks Manual for information on how
to get a build host ready that is either a native Linux machine or a machine
that uses CROPS.
@@ -154,10 +154,10 @@ section.
#. *Clone the poky Repository:* You need to have a local copy of the
Yocto Project :term:`Source Directory` (i.e. a local
``poky`` repository). See the
":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`" and
":ref:`dev-manual/dev-manual-start:cloning the \`\`poky\`\` repository`" and
possibly the
":ref:`dev-manual/start:checking out by branch in poky`" or
":ref:`dev-manual/start:checking out by tag in poky`"
":ref:`dev-manual/dev-manual-start:checking out by branch in poky`" or
":ref:`dev-manual/dev-manual-start:checking out by tag in poky`"
sections
all in the Yocto Project Development Tasks Manual for information on
how to clone the ``poky`` repository and check out the appropriate
@@ -172,7 +172,8 @@ section.
#. *Optionally Clone the meta-intel BSP Layer:* If your hardware is
based on current Intel CPUs and devices, you can leverage this BSP
layer. For details on the ``meta-intel`` BSP layer, see the layer's
:yocto_git:`README </meta-intel/tree/README>` file.
`README <http://git.yoctoproject.org/cgit/cgit.cgi/meta-intel/tree/README>`__
file.
#. *Navigate to Your Source Directory:* Typically, you set up the
``meta-intel`` Git repository inside the :term:`Source Directory` (e.g.
@@ -205,7 +206,7 @@ section.
To see the available branch names in a cloned repository, use the ``git
branch -al`` command. See the
":ref:`dev-manual/start:checking out by branch in poky`"
":ref:`dev-manual/dev-manual-start:checking out by branch in poky`"
section in the Yocto Project Development Tasks Manual for more
information.
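For example (a hedged sketch; substitute the branch name matching the release
you are using), cloning and checking out ``meta-intel`` looks like:

.. code-block:: shell

   $ cd ~/poky
   $ git clone git://git.yoctoproject.org/meta-intel
   $ cd meta-intel
   $ git checkout -b my-&DISTRO_NAME_NO_CAP; remotes/origin/&DISTRO_NAME_NO_CAP;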
@@ -229,7 +230,7 @@ section.
#. *Initialize the Build Environment:* While in the root directory of
the Source Directory (i.e. ``poky``), run the
:ref:`ref-manual/structure:\`\`oe-init-build-env\`\`` environment
:ref:`ref-manual/ref-structure:\`\`oe-init-build-env\`\`` environment
setup script to define the OpenEmbedded build environment on your
build host. ::
@@ -240,6 +241,8 @@ section.
the script runs, your current working directory is set to the ``build``
directory.
.. _bsp-filelayout:
Example Filesystem Layout
=========================
@@ -250,10 +253,10 @@ standardization of software support for hardware.
The proposed form described in this section does have elements that are
specific to the OpenEmbedded build system. It is intended that
developers can use this structure with other build systems besides the
OpenEmbedded build system. It is also intended that it will be simple
OpenEmbedded build system. It is also intended that it will be be simple
to extract information and convert it to other formats if required. The
OpenEmbedded build system, through its standard :ref:`layers mechanism
<overview-manual/yp-intro:the yocto project layer model>`, can
<overview-manual/overview-manual-yp-intro:the yocto project layer model>`, can
directly accept the format described as a layer. The BSP layer captures
all the hardware-specific details in one place using a standard format,
which is useful for any person wishing to use the hardware platform
@@ -289,7 +292,7 @@ individual BSPs could differ. ::
meta-bsp_root_name/recipes-kernel/linux/linux-yocto_kernel_rev.bbappend
Below is an example of the Raspberry Pi BSP layer that is available from
the :yocto_git:`Source Repositories <>`:
the :yocto_git:`Source Respositories <>`:
.. code-block:: none
@@ -448,6 +451,8 @@ the :yocto_git:`Source Repositories <>`:
The following sections describe each part of the proposed BSP format.
.. _bsp-filelayout-license:
License Files
-------------
@@ -463,9 +468,11 @@ requirements are handled with the ``COPYING.MIT`` file.
Licensing files can be MIT, BSD, GPLv*, and so forth. These files are
recommended for the BSP but are optional and totally up to the BSP
developer. For information on how to maintain license compliance, see
the ":ref:`dev-manual/common-tasks:maintaining open source license compliance during your product's lifecycle`"
the ":ref:`dev-manual/dev-manual-common-tasks:maintaining open source license compliance during your product's lifecycle`"
section in the Yocto Project Development Tasks Manual.
.. _bsp-filelayout-readme:
README File
-----------
@@ -481,6 +488,8 @@ At a minimum, the ``README`` file must contain a list of dependencies,
such as the names of any other layers on which the BSP depends and the
name of the BSP maintainer with his or her contact information.
.. _bsp-filelayout-readme-sources:
README.sources File
-------------------
@@ -500,6 +509,8 @@ used to generate the images that ship with the BSP.
If the BSP's ``binary`` directory is missing or the directory has no images, an
existing ``README.sources`` file is meaningless and usually does not exist.
.. _bsp-filelayout-binary:
Pre-built User Binaries
-----------------------
@@ -523,6 +534,8 @@ hardware. Additionally, the
present to locate the sources used to build the images and provide
information on the Metadata.
.. _bsp-filelayout-layer:
Layer Configuration File
------------------------
@@ -573,6 +586,8 @@ This file simply makes :term:`BitBake` aware of the recipes and configuration
directories. The file must exist so that the OpenEmbedded build system can
recognize the BSP.
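A minimal sketch of such a file, following the common ``layer.conf`` pattern
(the collection name, priority, and compatibility value are placeholders you
must adapt to your layer):

.. code-block:: shell

   $ cat meta-bsp_root_name/conf/layer.conf
   # Add the layer's conf directory to BBPATH
   BBPATH .= ":${LAYERDIR}"
   # Make the layer's recipes visible to BitBake
   BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
               ${LAYERDIR}/recipes-*/*/*.bbappend"
   BBFILE_COLLECTIONS += "bsp_root_name"
   BBFILE_PATTERN_bsp_root_name = "^${LAYERDIR}/"
   BBFILE_PRIORITY_bsp_root_name = "6"
   LAYERSERIES_COMPAT_bsp_root_name = "&DISTRO_NAME_NO_CAP;"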
.. _bsp-filelayout-machine:
Hardware Configuration Options
------------------------------
@@ -589,7 +604,7 @@ filenames correspond to the values to which users have set the
These files define things such as the kernel package to use
(:term:`PREFERRED_PROVIDER` of
:ref:`virtual/kernel <dev-manual/common-tasks:using virtual providers>`),
:ref:`virtual/kernel <dev-manual/dev-manual-common-tasks:using virtual providers>`),
the hardware drivers to include in different types of images, any
special software components that are needed, any bootloader information,
and also any special image format requirements.
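As a hedged illustration, a minimal machine configuration file might set
variables such as the following (the names and values here are placeholders,
not a working BSP):

.. code-block:: shell

   $ cat meta-bsp_root_name/conf/machine/bsp_root_name.conf
   PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto"
   KERNEL_IMAGETYPE = "zImage"
   SERIAL_CONSOLES = "115200;ttyS0"
   IMAGE_FSTYPES += "tar.bz2 ext4"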
@@ -611,6 +626,8 @@ configuration file. For example, the Raspberry Pi BSP
include conf/machine/include/rpi-base.inc
.. _bsp-filelayout-misc-recipes:
Miscellaneous BSP-Specific Recipe Files
---------------------------------------
@@ -641,6 +658,8 @@ directory. Here is the ``machconfig`` file for the Raspberry Pi BSP: ::
``meta/recipes-bsp/formfactor/formfactor_0.0.bb``, which is found in
the :term:`Source Directory`.
.. _bsp-filelayout-recipes-graphics:
Display Support Files
---------------------
@@ -652,6 +671,8 @@ This optional directory contains recipes for the BSP if it has special
requirements for graphics support. All files that are needed for the BSP
to support a display are kept here.
.. _bsp-filelayout-kernel:
Linux Kernel Configuration
--------------------------
@@ -693,7 +714,7 @@ BSP settings to the kernel, thus configuring the kernel for your
particular BSP.
You can find more information on what your append file should contain in
the ":ref:`kernel-dev/common:creating the append file`" section
the ":ref:`kernel-dev/kernel-dev-common:creating the append file`" section
in the Yocto Project Linux Kernel Development Manual.
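A minimal sketch of such an append file (hypothetical paths and machine name;
this release series still uses the underscore override syntax):

.. code-block:: shell

   $ cat meta-bsp_root_name/recipes-kernel/linux/linux-yocto_%.bbappend
   FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
   COMPATIBLE_MACHINE_bsp_root_name = "bsp_root_name"
   SRC_URI += "file://bsp.cfg"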
An alternate scenario is when you create your own kernel recipe for the
@@ -726,7 +747,7 @@ workflow.
:align: center
#. *Set up Your Host Development System to Support Development Using the
Yocto Project*: See the ":ref:`dev-manual/start:preparing the build host`"
Yocto Project*: See the ":ref:`dev-manual/dev-manual-start:preparing the build host`"
section in the Yocto Project Development Tasks Manual for options on how to
get a system ready to use the Yocto Project.
@@ -754,9 +775,9 @@ workflow.
are kept. The key point for a layer is that it is an isolated area
that contains all the relevant information for the project that the
OpenEmbedded build system knows about. For more information on
layers, see the ":ref:`overview-manual/yp-intro:the yocto project layer model`"
layers, see the ":ref:`overview-manual/overview-manual-yp-intro:the yocto project layer model`"
section in the Yocto Project Overview and Concepts Manual. You can also
reference the ":ref:`dev-manual/common-tasks:understanding and creating layers`"
reference the ":ref:`dev-manual/dev-manual-common-tasks:understanding and creating layers`"
section in the Yocto Project Development Tasks Manual. For more
information on BSP layers, see the ":ref:`bsp-guide/bsp:bsp layers`"
section.
@@ -815,7 +836,7 @@ workflow.
key configuration files are configured appropriately: the
``conf/local.conf`` and the ``conf/bblayers.conf`` file. You must
make the OpenEmbedded build system aware of your new layer. See the
":ref:`dev-manual/common-tasks:enabling your layer`"
":ref:`dev-manual/dev-manual-common-tasks:enabling your layer`"
section in the Yocto Project Development Tasks Manual for information
on how to let the build system know about your new layer.
@@ -826,7 +847,7 @@ workflow.
The build process supports several types of images to satisfy
different needs. See the
":ref:`ref-manual/images:Images`" chapter in the Yocto
":ref:`ref-manual/ref-images:Images`" chapter in the Yocto
Project Reference Manual for information on supported images.
Requirements and Recommendations for Released BSPs
@@ -846,14 +867,14 @@ Before looking at BSP requirements, you should consider the following:
layer that can be added to the Yocto Project. For guidelines on
creating a layer that meets these base requirements, see the
":ref:`bsp-guide/bsp:bsp layers`" section in this manual and the
":ref:`dev-manual/common-tasks:understanding and creating layers`"
":ref:`dev-manual/dev-manual-common-tasks:understanding and creating layers`"
section in the Yocto Project Development Tasks Manual.
- The requirements in this section apply regardless of how you package
a BSP. You should consult the packaging and distribution guidelines
for your specific release process. For an example of packaging and
distribution requirements, see the ":yocto_wiki:`Third Party BSP Release
Process </Third_Party_BSP_Release_Process>`"
Process </wiki/Third_Party_BSP_Release_Process>`"
wiki page.
- The requirements for the BSP as it is made available to a developer
@@ -894,20 +915,20 @@ Yocto Project:
``recipes-*`` subdirectories specific to the recipe's function, or
within a subdirectory containing a set of closely-related recipes.
The recipes themselves should follow the general guidelines for
recipes used in the Yocto Project found in the ":oe_wiki:`OpenEmbedded
Style Guide </Styleguide>`".
recipes used in the Yocto Project found in the "`OpenEmbedded Style
Guide <http://openembedded.org/wiki/Styleguide>`__".
- *License File:* You must include a license file in the
``meta-bsp_root_name`` directory. This license covers the BSP
Metadata as a whole. You must specify which license to use since no
default license exists when one is not specified. See the
:yocto_git:`COPYING.MIT </meta-raspberrypi/tree/COPYING.MIT>`
:yocto_git:`COPYING.MIT </cgit.cgi/meta-raspberrypi/tree/COPYING.MIT>`
file for the Raspberry Pi BSP in the ``meta-raspberrypi`` BSP layer
as an example.
- *README File:* You must include a ``README`` file in the
``meta-bsp_root_name`` directory. See the
:yocto_git:`README.md </meta-raspberrypi/tree/README.md>`
:yocto_git:`README.md </cgit.cgi/meta-raspberrypi/tree/README.md>`
file for the Raspberry Pi BSP in the ``meta-raspberrypi`` BSP layer
as an example.
@@ -928,7 +949,7 @@ Yocto Project:
- The name and contact information for the BSP layer maintainer.
This is the person to whom patches and questions should be sent.
For information on how to find the right person, see the
":ref:`dev-manual/common-tasks:submitting a change to the yocto project`"
":ref:`dev-manual/dev-manual-common-tasks:submitting a change to the yocto project`"
section in the Yocto Project Development Tasks Manual.
- Instructions on how to build the BSP using the BSP layer.
@@ -1013,7 +1034,7 @@ If you plan on customizing a recipe for a particular BSP, you need to do
the following:
- Create a ``*.bbappend`` file for the modified recipe. For information on using
append files, see the ":ref:`dev-manual/common-tasks:using
append files, see the ":ref:`dev-manual/dev-manual-common-tasks:using
.bbappend files in your layer`" section in the Yocto Project Development
Tasks Manual.
@@ -1036,7 +1057,7 @@ the following:
to reside in a machine-specific directory.
Following is a specific example to help you better understand the
process. This example customizes a recipe by adding a
process. This example customizes customizes a recipe by adding a
BSP-specific configuration file named ``interfaces`` to the
``init-ifupdown_1.0.bb`` recipe for machine "xyz" where the BSP layer
also supports several other machines:
@@ -1118,7 +1139,7 @@ list describes them in order of preference:
Specifying the matching license string signifies that you agree to
the license. Thus, the build system can build the corresponding
recipe and include the component in the image. See the
":ref:`dev-manual/common-tasks:enabling commercially licensed recipes`"
":ref:`dev-manual/dev-manual-common-tasks:enabling commercially licensed recipes`"
section in the Yocto Project Development Tasks Manual for details on
how to use these variables.
@@ -1170,7 +1191,7 @@ Use these steps to create a BSP layer:
``create-layer`` subcommand to create a new general layer. For
instructions on how to create a general layer using the
``bitbake-layers`` script, see the
":ref:`dev-manual/common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`"
":ref:`dev-manual/dev-manual-common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`"
section in the Yocto Project Development Tasks Manual.
- *Create a Layer Configuration File:* Every layer needs a layer
@@ -1180,16 +1201,16 @@ Use these steps to create a BSP layer:
:yocto_git:`Source Repositories <>`. To get examples of what you need
in your configuration file, locate a layer (e.g. "meta-ti") and
examine the
:yocto_git:`local.conf </meta-ti/tree/conf/layer.conf>`
:yocto_git:`local.conf </cgit/cgit.cgi/meta-ti/tree/conf/layer.conf>`
file.
- *Create a Machine Configuration File:* Create a
``conf/machine/bsp_root_name.conf`` file. See
:yocto_git:`meta-yocto-bsp/conf/machine </poky/tree/meta-yocto-bsp/conf/machine>`
:yocto_git:`meta-yocto-bsp/conf/machine </cgit/cgit.cgi/poky/tree/meta-yocto-bsp/conf/machine>`
for sample ``bsp_root_name.conf`` files. Other samples such as
:yocto_git:`meta-ti </meta-ti/tree/conf/machine>`
:yocto_git:`meta-ti </cgit/cgit.cgi/meta-ti/tree/conf/machine>`
and
:yocto_git:`meta-freescale </meta-freescale/tree/conf/machine>`
:yocto_git:`meta-freescale </cgit/cgit.cgi/meta-freescale/tree/conf/machine>`
exist from other vendors that have more specific machine and tuning
examples.
@@ -1197,13 +1218,13 @@ Use these steps to create a BSP layer:
``recipes-kernel/linux`` by either using a kernel append file or a
new custom kernel recipe file (e.g. ``yocto-linux_4.12.bb``). The BSP
layers mentioned in the previous step also contain different kernel
examples. See the ":ref:`kernel-dev/common:modifying an existing recipe`"
examples. See the ":ref:`kernel-dev/kernel-dev-common:modifying an existing recipe`"
section in the Yocto Project Linux Kernel Development Manual for
information on how to create a custom kernel.
The remainder of this section provides a description of the Yocto
Project reference BSP for Beaglebone, which resides in the
:yocto_git:`meta-yocto-bsp </poky/tree/meta-yocto-bsp>`
:yocto_git:`meta-yocto-bsp </cgit/cgit.cgi/poky/tree/meta-yocto-bsp>`
layer.
BSP Layer Configuration Example
@@ -1230,7 +1251,7 @@ configuration files is to examine various files for BSP from the
:yocto_git:`Source Repositories <>`.
For a detailed description of this particular layer configuration file,
see ":ref:`step 3 <dev-manual/common-tasks:creating your own layer>`"
see ":ref:`step 3 <dev-manual/dev-manual-common-tasks:creating your own layer>`"
in the discussion that describes how to create layers in the Yocto
Project Development Tasks Manual.
@@ -1305,7 +1326,7 @@ the example reference machine configuration file for the BeagleBone
development boards. Realize that much more can be defined as part of a
machine's configuration file. In general, you can learn about related
variables that this example does not have by locating the variables in
the ":ref:`ref-manual/variables:variables glossary`" in the Yocto
the ":ref:`ref-manual/ref-variables:variables glossary`" in the Yocto
Project Reference Manual.
- :term:`PREFERRED_PROVIDER_virtual/xserver <PREFERRED_PROVIDER>`:
@@ -1360,7 +1381,7 @@ Project Reference Manual.
`JFFS2 <https://en.wikipedia.org/wiki/JFFS2>`__ image.
- :term:`WKS_FILE`: The location of
the :ref:`Wic kickstart <ref-manual/kickstart:openembedded kickstart (\`\`.wks\`\`) reference>` file used
the :ref:`Wic kickstart <ref-manual/ref-kickstart:openembedded kickstart (\`\`.wks\`\`) reference>` file used
by the OpenEmbedded build system to create a partitioned image
(image.wic).
@@ -1412,7 +1433,7 @@ Project Reference Manual.
.. note::
For more information on how the SPL variables are used, see the
:yocto_git:`u-boot.inc </poky/tree/meta/recipes-bsp/u-boot/u-boot.inc>`
:yocto_git:`u-boot.inc </cgit/cgit.cgi/poky/tree/meta/recipes-bsp/u-boot/u-boot.inc>`
include file.
- :term:`UBOOT_* <UBOOT_ENTRYPOINT>`: Defines
@@ -1456,7 +1477,7 @@ The ``meta-yocto-bsp/recipes-kernel/linux`` directory in the layer contains
metadata used to build the kernel. In this case, a kernel append file
(i.e. ``linux-yocto_5.0.bbappend``) is used to override an established
kernel recipe (i.e. ``linux-yocto_5.0.bb``), which is located in
:yocto_git:`/poky/tree/meta/recipes-kernel/linux`.
:yocto_git:`/cgit/cgit.cgi/poky/tree/meta/recipes-kernel/linux`.
Following is the contents of the append file: ::
@@ -16,8 +16,7 @@ import os
import sys
import datetime
current_version = "3.3.4"
bitbake_version = "1.50"
current_version = "3.2.1"
# String used in sidebar
version = 'Version: ' + current_version
@@ -34,9 +33,6 @@ author = 'The Linux Foundation'
# -- General configuration ---------------------------------------------------
# Prevent building with an outdated version of sphinx
needs_sphinx = "3.1"
# to load local extension from the folder 'sphinx'
sys.path.insert(0, os.path.abspath('sphinx'))
@@ -72,25 +68,21 @@ rst_prolog = """
# external links and substitutions
extlinks = {
'yocto_home': ('https://www.yoctoproject.org%s', None),
'yocto_wiki': ('https://wiki.yoctoproject.org/wiki%s', None),
'yocto_home': ('https://yoctoproject.org%s', None),
'yocto_wiki': ('https://wiki.yoctoproject.org%s', None),
'yocto_dl': ('https://downloads.yoctoproject.org%s', None),
'yocto_lists': ('https://lists.yoctoproject.org%s', None),
'yocto_bugs': ('https://bugzilla.yoctoproject.org%s', None),
'yocto_ab': ('https://autobuilder.yoctoproject.org%s', None),
'yocto_docs': ('https://docs.yoctoproject.org%s', None),
'yocto_git': ('https://git.yoctoproject.org/cgit/cgit.cgi%s', None),
'yocto_git': ('https://git.yoctoproject.org%s', None),
'oe_home': ('https://www.openembedded.org%s', None),
'oe_lists': ('https://lists.openembedded.org%s', None),
'oe_git': ('https://git.openembedded.org%s', None),
'oe_wiki': ('https://www.openembedded.org/wiki%s', None),
'oe_layerindex': ('https://layers.openembedded.org%s', None),
'oe_layer': ('https://layers.openembedded.org/layerindex/branch/master/layer%s', None),
}
# Intersphinx config to use cross reference with Bitbake user manual
intersphinx_mapping = {
'bitbake': ('https://docs.yoctoproject.org/bitbake/' + bitbake_version, None)
'bitbake': ('https://docs.yoctoproject.org/bitbake/1.48', None)
}
# -- Options for HTML output -------------------------------------------------
@@ -132,8 +124,3 @@ html_last_updated_fmt = '%b %d, %Y'
# Remove the trailing 'dot' in section numbers
html_secnumber_suffix = " "
latex_elements = {
'passoptionstopackages': '\PassOptionsToPackage{bookmarksdepth=5}{hyperref}',
'preamble': '\setcounter{tocdepth}{2}',
}
@@ -4,6 +4,8 @@
The Yocto Project Development Tasks Manual
******************************************
.. _dev-welcome:
Welcome
=======
@@ -31,13 +33,13 @@ This manual provides the following:
This manual does not provide the following:
- Redundant Step-by-step Instructions: For example, the
:doc:`/sdk-manual/index` manual contains detailed
:doc:`../sdk-manual/sdk-manual` manual contains detailed
instructions on how to install an SDK, which is used to develop
applications for target hardware.
- Reference or Conceptual Material: This type of material resides in an
appropriate reference manual. For example, system variables are
documented in the :doc:`/ref-manual/index`.
documented in the :doc:`../ref-manual/ref-manual`.
- Detailed Public Information Not Specific to the Yocto Project: For
example, exhaustive information on how to use the Source Control
@@ -52,7 +54,7 @@ supplemental information is recommended for full comprehension. For
introductory information on the Yocto Project, see the
:yocto_home:`Yocto Project Website <>`. If you want to build an image with no
knowledge of Yocto Project as a way of quickly testing it out, see the
:doc:`/brief-yoctoprojectqs/index` document.
:doc:`../brief-yoctoprojectqs/brief-yoctoprojectqs` document.
For a comprehensive list of links and other documentation, see the
":ref:`ref-manual/resources:links and related documentation`"
@@ -10,6 +10,8 @@ This chapter provides both procedures that show you how to use the Quick
EMUlator (QEMU) and other QEMU information helpful for development
purposes.
.. _qemu-dev-overview:
Overview
========
@@ -37,6 +39,8 @@ following references:
- `Documentation <https://wiki.qemu.org/Manual>`__\ *:* The QEMU user
manual.
.. _qemu-running-qemu:
Running QEMU
============
@@ -46,7 +50,7 @@ available. Follow these general steps to run QEMU:
1. *Install QEMU:* QEMU is made available with the Yocto Project in a
number of ways. One method is to install a Software Development Kit
(SDK). See ":ref:`sdk-manual/intro:the qemu emulator`" section in the
(SDK). See ":ref:`sdk-manual/sdk-intro:the qemu emulator`" section in the
Yocto Project Application Development and the Extensible Software
Development Kit (eSDK) manual for information on how to install QEMU.
@@ -58,7 +62,7 @@ available. Follow these general steps to run QEMU:
environment script (i.e. :ref:`structure-core-script`):
::
$ cd poky
$ cd ~/poky
$ source oe-init-build-env
- If you installed a cross-toolchain, you can run the script that
@@ -66,7 +70,7 @@ available. Follow these general steps to run QEMU:
the initialization script from the default ``poky_sdk`` directory:
::
. poky_sdk/environment-setup-core2-64-poky-linux
. ~/poky_sdk/environment-setup-core2-64-poky-linux
3. *Ensure the Artifacts are in Place:* You need to be sure you have a
pre-built kernel that will boot in QEMU. You also need the target
@@ -77,11 +81,11 @@ available. Follow these general steps to run QEMU:
your :term:`Build Directory`.
- If you have not built an image, you can go to the
:yocto_dl:`machines/qemu </releases/yocto/yocto-&DISTRO;/machines/qemu/>` area and download a
:yocto_dl:`machines/qemu </releases/yocto/yocto-3.1.2/machines/qemu/>` area and download a
pre-built image that matches your architecture and can be run on
QEMU.
See the ":ref:`sdk-manual/appendix-obtain:extracting the root filesystem`"
See the ":ref:`sdk-manual/sdk-appendix-obtain:extracting the root filesystem`"
section in the Yocto Project Application Development and the
Extensible Software Development Kit (eSDK) manual for information on
how to extract a root filesystem.
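One quick way to verify the artifacts are in place (filenames vary by image,
machine, and release; this is just an illustration) is to list the deploy
directory inside your Build Directory:

.. code-block:: shell

   $ ls tmp/deploy/images/qemux86-64/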
@@ -183,6 +187,8 @@ allow input of absolute coordinates. This default means that the mouse
can enter and leave the main window without the grab taking effect,
leading to a better user experience.
.. _qemu-running-under-a-network-file-system-nfs-server:
Running Under a Network File System (NFS) Server
================================================
@@ -237,6 +243,8 @@ using an NFS server.
runqemu-export-rootfs restart file-system-location
.. _qemu-kvm-cpu-compatibility:
QEMU CPU Compatibility Under KVM
================================
@@ -258,6 +266,8 @@ directory. This setting specifies a ``-cpu`` option passed into QEMU in
the ``runqemu`` script. Running ``qemu -cpu help`` returns a list of
available supported CPU types.
.. _qemu-dev-performance:
QEMU Performance
================
@@ -306,10 +316,12 @@ present, the toolchain is also automatically used.
tarball by using the ``runqemu-extract-sdk`` command. After
running the command, you must then point the ``runqemu`` script to
the extracted directory instead of a root filesystem image file.
See the
":ref:`dev-manual/qemu:running under a network file system (nfs) server`"
See the "`Running Under a Network File System (NFS)
Server <#qemu-running-under-a-network-file-system-nfs-server>`__"
section for more information.
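A sketch of that workflow, assuming a ``core-image-sato`` tarball for
``qemux86-64`` (check the ``runqemu`` help output on your release for the exact
invocation):

.. code-block:: shell

   $ runqemu-extract-sdk \
         tmp/deploy/images/qemux86-64/core-image-sato-qemux86-64.tar.bz2 \
         ~/qemux86-64-rootfs
   $ runqemu qemux86-64 ~/qemux86-64-rootfs nfs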
.. _qemu-dev-command-line-syntax:
QEMU Command-Line Syntax
========================
@@ -365,6 +377,8 @@ Following is the command-line help output for the ``runqemu`` command:
runqemu path/to/<image>-<machine>.wic
runqemu path/to/<image>-<machine>.wic.vmdk
.. _qemu-dev-runqemu-command-line-options:
``runqemu`` Command-Line Options
================================
@@ -452,7 +466,7 @@ command line:
or "qemux86-64" QEMU architectures. For KVM with VHOST to work, the
following conditions must be met:
- ``kvm`` option conditions defined above must be met.
- `kvm <#kvm-cond>`__ option conditions must be met.
- Your build host has to have the virtio net device, which is
``/dev/vhost-net``.
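A quick way to check these conditions on the build host (ordinary Linux
commands, not part of ``runqemu`` itself):

.. code-block:: shell

   $ ls -l /dev/vhost-net
   $ grep -cE 'vmx|svm' /proc/cpuinfo   # non-zero means hardware virtualization support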
@@ -7,10 +7,12 @@ Setting Up to Use the Yocto Project
This chapter provides guidance on how to prepare to use the Yocto
Project. You can learn about creating a team environment to develop
using the Yocto Project, how to set up a :ref:`build
host <dev-manual/start:preparing the build host>`, how to locate
host <dev-manual/dev-manual-start:preparing the build host>`, how to locate
Yocto Project source repositories, and how to create local Git
repositories.
.. _usingpoky-changes-collaborate:
Creating a Team Development Environment
=======================================
@@ -78,7 +80,7 @@ particular working environment and set of practices.
developing under the control of an SCM system that is compatible
with the OpenEmbedded build system is advisable. Of all of the SCMs
supported by BitBake, the Yocto Project team strongly recommends using
:ref:`overview-manual/development-environment:git`.
:ref:`overview-manual/overview-manual-development-environment:git`.
Git is a distributed system
that is easy to back up, allows you to work remotely, and then
connects back to the infrastructure.
@@ -165,7 +167,7 @@ particular working environment and set of practices.
- Highlights when commits break the build.
- Populates an :ref:`sstate
cache <overview-manual/concepts:shared state cache>` from which
cache <overview-manual/overview-manual-concepts:shared state cache>` from which
developers can pull rather than requiring local builds.
- Allows commit hook triggers, which trigger builds when commits
@@ -218,20 +220,20 @@ particular working environment and set of practices.
some best practices exist within the Yocto Project development
environment. Consider the following:
- Use :ref:`overview-manual/development-environment:git` as the source control
- Use :ref:`overview-manual/overview-manual-development-environment:git` as the source control
system.
- Maintain your Metadata in layers that make sense for your
situation. See the ":ref:`overview-manual/yp-intro:the yocto project layer model`"
situation. See the ":ref:`overview-manual/overview-manual-yp-intro:the yocto project layer model`"
section in the Yocto Project Overview and Concepts Manual and the
":ref:`dev-manual/common-tasks:understanding and creating layers`"
":ref:`dev-manual/dev-manual-common-tasks:understanding and creating layers`"
section for more information on layers.
- Separate the project's Metadata and code by using separate Git
repositories. See the ":ref:`overview-manual/development-environment:yocto project source repositories`"
repositories. See the ":ref:`overview-manual/overview-manual-development-environment:yocto project source repositories`"
section in the Yocto Project Overview and Concepts Manual for
information on these repositories. See the
":ref:`dev-manual/start:locating yocto project source files`"
information on these repositories. See the "`Locating Yocto
Project Source Files <#locating-yocto-project-source-files>`__"
section for information on how to set up local Git repositories
for related upstream Yocto Project Git repositories.
@@ -248,17 +250,19 @@ particular working environment and set of practices.
project to fix bugs or add features. If you do submit patches,
follow the project commit guidelines for writing good commit
messages. See the
":ref:`dev-manual/common-tasks:submitting a change to the yocto project`"
":ref:`dev-manual/dev-manual-common-tasks:submitting a change to the yocto project`"
section.
- Send changes to the core sooner than later as others are likely
to run into the same issues. For some guidance on mailing lists
to use, see the list in the
":ref:`dev-manual/common-tasks:submitting a change to the yocto project`"
":ref:`dev-manual/dev-manual-common-tasks:submitting a change to the yocto project`"
section. For a description
of the available mailing lists, see the ":ref:`resources-mailinglist`" section in
the Yocto Project Reference Manual.
.. _dev-preparing-the-build-host:
Preparing the Build Host
========================
@@ -288,7 +292,7 @@ Package (BSP) development and kernel development:
section in the Yocto Project Board Support Package (BSP) Developer's
Guide.
- *Kernel Development:* See the ":ref:`kernel-dev/common:preparing the build host to work on the kernel`"
- *Kernel Development:* See the ":ref:`kernel-dev/kernel-dev-common:preparing the build host to work on the kernel`"
section in the Yocto Project Linux Kernel Development Manual.
Setting Up a Native Linux Host
@@ -305,7 +309,7 @@ Project Build Host:
validation and their status, see the ":ref:`Supported Linux
Distributions <detailed-supported-distros>`"
section in the Yocto Project Reference Manual and the wiki page at
:yocto_wiki:`Distribution Support </Distribution_Support>`.
:yocto_wiki:`Distribution Support </wiki/Distribution_Support>`.
2. *Have Enough Free Disk Space:* Your system should have at least 50 Gbytes
of free disk space for building images.
@@ -314,18 +318,18 @@ Project Build Host:
should be able to run on any modern distribution that has the
following versions for Git, tar, Python and gcc.
- Git &MIN_GIT_VERSION; or greater
- Git 1.8.3.1 or greater
- tar &MIN_TAR_VERSION; or greater
- tar 1.28 or greater
- Python &MIN_PYTHON_VERSION; or greater.
- Python 3.5.0 or greater.
- gcc &MIN_GCC_VERSION; or greater.
- gcc 5.0 or greater.
If your build host does not meet any of these three listed version
requirements, you can take steps to prepare the system so that you
can still use the Yocto Project. See the
":ref:`ref-manual/system-requirements:required git, tar, python and gcc versions`"
":ref:`ref-manual/ref-system-requirements:required git, tar, python and gcc versions`"
section in the Yocto Project Reference Manual for information.
4. *Install Development Host Packages:* Required development host
@@ -334,20 +338,22 @@ Project Build Host:
is large if you want to be able to cover all cases.
For lists of required packages for all scenarios, see the
":ref:`ref-manual/system-requirements:required packages for the build host`"
":ref:`ref-manual/ref-system-requirements:required packages for the build host`"
section in the Yocto Project Reference Manual.
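A quick way to verify the version requirements from step 3 (ordinary shell
commands):

.. code-block:: shell

   $ git --version
   $ tar --version | head -n 1
   $ python3 --version
   $ gcc --version | head -n 1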
Once you have completed the previous steps, you are ready to continue
using a given development path on your native Linux machine. If you are
going to use BitBake, see the
":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`"
":ref:`dev-manual/dev-manual-start:cloning the \`\`poky\`\` repository`"
section. If you are going
to use the Extensible SDK, see the ":doc:`/sdk-manual/extensible`" Chapter in the Yocto
to use the Extensible SDK, see the ":doc:`../sdk-manual/sdk-extensible`" Chapter in the Yocto
Project Application Development and the Extensible Software Development
Kit (eSDK) manual. If you want to work on the kernel, see the :doc:`/kernel-dev/index`. If you are going to use
Toaster, see the ":doc:`/toaster-manual/setup-and-use`"
Kit (eSDK) manual. If you want to work on the kernel, see the :doc:`../kernel-dev/kernel-dev`. If you are going to use
Toaster, see the ":doc:`../toaster-manual/toaster-manual-setup-and-use`"
section in the Toaster User Manual.
.. _setting-up-to-use-crops:
Setting Up to Use CROss PlatformS (CROPS)
-----------------------------------------
@@ -440,14 +446,16 @@ as your Yocto Project build host:
Once you have a container set up, everything is in place to develop just
as if you were running on a native Linux machine. If you are going to
use the Poky container, see the
":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`"
":ref:`dev-manual/dev-manual-start:cloning the \`\`poky\`\` repository`"
section. If you are going to use the Extensible SDK container, see the
":doc:`/sdk-manual/extensible`" Chapter in the Yocto
":doc:`../sdk-manual/sdk-extensible`" Chapter in the Yocto
Project Application Development and the Extensible Software Development
Kit (eSDK) manual. If you are going to use the Toaster container, see
the ":doc:`/toaster-manual/setup-and-use`"
the ":doc:`../toaster-manual/toaster-manual-setup-and-use`"
section in the Toaster User Manual.
.. _setting-up-to-use-wsl:
Setting Up to Use Windows Subsystem For Linux (WSLv2)
-----------------------------------------------------
@@ -557,10 +565,10 @@ your Yocto Project build host:
Once you have WSLv2 set up, everything is in place to develop just as if
you were running on a native Linux machine. If you are going to use the
Extensible SDK container, see the ":doc:`/sdk-manual/extensible`" Chapter in the Yocto
Extensible SDK container, see the ":doc:`../sdk-manual/sdk-extensible`" Chapter in the Yocto
Project Application Development and the Extensible Software Development
Kit (eSDK) manual. If you are going to use the Toaster container, see
the ":doc:`/toaster-manual/setup-and-use`"
the ":doc:`../toaster-manual/toaster-manual-setup-and-use`"
section in the Toaster User Manual.
Locating Yocto Project Source Files
@@ -572,21 +580,21 @@ files you'll need to work with the Yocto Project.
.. note::
- For concepts and introductory information about Git as it is used
in the Yocto Project, see the ":ref:`overview-manual/development-environment:git`"
in the Yocto Project, see the ":ref:`overview-manual/overview-manual-development-environment:git`"
section in the Yocto Project Overview and Concepts Manual.
- For concepts on Yocto Project source repositories, see the
":ref:`overview-manual/development-environment:yocto project source repositories`"
":ref:`overview-manual/overview-manual-development-environment:yocto project source repositories`"
section in the Yocto Project Overview and Concepts Manual."
Accessing Source Repositories
-----------------------------
Working from a copy of the upstream :ref:`dev-manual/start:accessing source repositories` is the
Working from a copy of the upstream :ref:`dev-manual/dev-manual-start:accessing source repositories` is the
preferred method for obtaining and using a Yocto Project release. You
can view the Yocto Project Source Repositories at
:yocto_git:`/`. In particular, you can find the ``poky``
repository at :yocto_git:`/poky`.
repository at :yocto_git:`/cgit.cgi/poky`.
Use the following procedure to locate the latest upstream copy of the
``poky`` Git repository:
@@ -600,12 +608,12 @@ Use the following procedure to locate the latest upstream copy of the
3. *Find the URL Used to Clone the Repository:* At the bottom of the
page, note the URL used to clone that repository
(e.g. :yocto_git:`/poky`).
(e.g. :yocto_git:`/cgit.cgi/poky`).
.. note::
For information on cloning a repository, see the
":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`" section.
":ref:`dev-manual/dev-manual-start:cloning the \`\`poky\`\` repository`" section.
Accessing Index of Releases
---------------------------
@@ -655,7 +663,8 @@ The :yocto_home:`Yocto Project Website <>` uses a "DOWNLOADS" page
from which you can locate and download tarballs of any Yocto Project
release. Rather than Git repositories, these files represent snapshot
tarballs similar to the tarballs located in the Index of Releases
described in the ":ref:`dev-manual/start:accessing index of releases`" section.
described in the "`Accessing Index of
Releases <#accessing-index-of-releases>`__" section.
.. note::
@@ -677,7 +686,7 @@ described in the ":ref:`dev-manual/start:accessing index of releases`" section.
.. note::
For a "map" of Yocto Project releases to version numbers, see the
:yocto_wiki:`Releases </Releases>` wiki page.
:yocto_wiki:`Releases </wiki/Releases>` wiki page.
You can use the "RELEASE ARCHIVE" link to reveal a menu of all Yocto
Project releases.
@@ -721,7 +730,7 @@ files is referred to as the :term:`Source Directory`
in the Yocto Project documentation.
The preferred method of creating your Source Directory is by using
:ref:`overview-manual/development-environment:git` to clone a local copy of the upstream
:ref:`overview-manual/overview-manual-development-environment:git` to clone a local copy of the upstream
``poky`` repository. Working from a cloned copy of the upstream
repository allows you to contribute back into the Yocto Project or to
simply work with the latest software on a development branch. Because
@@ -758,16 +767,16 @@ Follow these steps to create a local version of the upstream
"master" branch, which results in a snapshot of the latest
development changes for "master". For information on how to check out
a specific development branch or on how to check out a local branch
based on a tag name, see the
":ref:`dev-manual/start:checking out by branch in poky`" and
":ref:`dev-manual/start:checking out by tag in poky`" sections, respectively.
based on a tag name, see the "`Checking Out By Branch in
Poky <#checking-out-by-branch-in-poky>`__" and `Checking Out By Tag
in Poky <#checkout-out-by-tag-in-poky>`__" sections, respectively.
Once the local repository is created, you can change to that
directory and check its status. Here, the single "master" branch
exists on your system and by default, it is checked out:
::
$ cd poky
$ cd ~/poky
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
@@ -800,7 +809,7 @@ and then specifically check out that development branch.
1. *Switch to the Poky Directory:* If you have a local poky Git
repository, switch to that directory. If you do not have the local
copy of poky, see the
":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`"
":ref:`dev-manual/dev-manual-start:cloning the \`\`poky\`\` repository`"
section.
2. *Determine Existing Branch Names:*
@@ -846,6 +855,8 @@ and then specifically check out that development branch.
master
* &DISTRO_NAME_NO_CAP;
.. _checkout-out-by-tag-in-poky:
Checking Out by Tag in Poky
---------------------------
@@ -863,7 +874,7 @@ similar to checking out by branch name except you use tag names.
1. *Switch to the Poky Directory:* If you have a local poky Git
repository, switch to that directory. If you do not have the local
copy of poky, see the
":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`"
":ref:`dev-manual/dev-manual-start:cloning the \`\`poky\`\` repository`"
section.
2. *Fetch the Tag Names:* To checkout the branch based on a tag name,
@@ -10,10 +10,10 @@ Yocto Project Development Tasks Manual
:caption: Table of Contents
:numbered:
intro
start
common-tasks
qemu
dev-manual-intro
dev-manual-start
dev-manual-common-tasks
dev-manual-qemu
history
.. include:: /boilerplate.rst
(Binary image file changed: 214 KiB before, 244 KiB after; content not shown.)
@@ -14,7 +14,7 @@ Welcome to the Yocto Project Documentation
:maxdepth: 1
:caption: Introduction and Overview
Quick Build <brief-yoctoprojectqs/index>
Quick Build <brief-yoctoprojectqs/brief-yoctoprojectqs>
what-i-wish-id-known
transitioning-to-a-custom-environment
Yocto Project Software Overview <https://www.yoctoproject.org/software-overview/>
@@ -25,16 +25,16 @@ Welcome to the Yocto Project Documentation
:maxdepth: 1
:caption: Manuals
Overview and Concepts Manual <overview-manual/index>
Reference Manual <ref-manual/index>
Board Support Package (BSP) Developer's guide <bsp-guide/index>
Development Tasks Manual <dev-manual/index>
Linux Kernel Development Manual <kernel-dev/index>
Profile and Tracing Manual <profile-manual/index>
Application Development and the Extensible SDK (eSDK) <sdk-manual/index>
Toaster Manual <toaster-manual/index>
Test Environment Manual <test-manual/index>
bitbake
Overview and Concepts Manual <overview-manual/overview-manual>
Reference Manual <ref-manual/ref-manual>
Board Support Package (BSP) Developer's guide <bsp-guide/bsp-guide>
Development Tasks Manual <dev-manual/dev-manual>
Linux Kernel Development Manual <kernel-dev/kernel-dev>
Profile and Tracing Manual <profile-manual/profile-manual>
Application Development and the Extensible SDK (eSDK) <sdk-manual/sdk-manual>
Toaster Manual <toaster-manual/toaster-manual>
Test Environment Manual <test-manual/test-manual>
Bitbake User Manual <https://docs.yoctoproject.org/bitbake>
.. toctree::
:maxdepth: 1
@@ -4,6 +4,8 @@
Working with Advanced Metadata (``yocto-kernel-cache``)
*******************************************************
.. _kernel-dev-advanced-overview:
Overview
========
@@ -16,7 +18,7 @@ complexity of the configuration and sources used to support multiple
BSPs and Linux kernel types.
Kernel Metadata exists in many places. One area in the
:ref:`overview-manual/development-environment:yocto project source repositories`
:ref:`overview-manual/overview-manual-development-environment:yocto project source repositories`
is the ``yocto-kernel-cache`` Git repository. You can find this repository
grouped under the "Yocto Linux Kernel" heading in the
:yocto_git:`Yocto Project Source Repositories <>`.
@@ -56,8 +58,8 @@ using the same BSP description. Multiple Corei7-based BSPs could share
the same "intel-corei7-64" value for ``KMACHINE``. It is important to
realize that ``KMACHINE`` is just for kernel mapping, while ``MACHINE``
is the machine type within a BSP Layer. Even with this distinction,
however, these two variables can hold the same value. See the
":ref:`kernel-dev/advanced:bsp descriptions`" section for more information.
however, these two variables can hold the same value. See the `BSP
Descriptions <#bsp-descriptions>`__ section for more information.
Every linux-yocto style recipe must also indicate the Linux kernel
source repository branch used to build the Linux kernel. The
@@ -87,7 +89,7 @@ Together with ``KMACHINE``, ``LINUX_KERNEL_TYPE`` defines the search
arguments used by the kernel tools to find the appropriate description
within the kernel Metadata with which to build out the sources and
configuration. The linux-yocto recipes define "standard", "tiny", and
"preempt-rt" kernel types. See the ":ref:`kernel-dev/advanced:kernel types`"
"preempt-rt" kernel types. See the "`Kernel Types <#kernel-types>`__"
section for more information on kernel types.
During the build, the kern-tools search for the BSP description file
@@ -123,8 +125,8 @@ the entries in ``KERNEL_FEATURES`` are dependent on their location
within the kernel Metadata itself. The examples here are taken from the
``yocto-kernel-cache`` repository. Each branch of this repository
contains "features" and "cfg" subdirectories at the top-level. For more
information, see the ":ref:`kernel-dev/advanced:kernel metadata syntax`"
section.
information, see the "`Kernel Metadata
Syntax <#kernel-metadata-syntax>`__" section.
Kernel Metadata Syntax
======================
@@ -148,7 +150,7 @@ Features aggregate sources in the form of patches and configuration
fragments into a modular reusable unit. You can use features to
implement conceptually separate kernel Metadata descriptions such as
pure configuration fragments, simple patches, complex features, and
kernel types. :ref:`kernel-dev/advanced:kernel types` define general kernel
kernel types. `Kernel types <#kernel-types>`__ define general kernel
features and policy to be reused in the BSPs.
BSPs define hardware-specific features and aggregate them with kernel
@@ -167,9 +169,10 @@ following Metadata file hierarchy is recommended:
ktypes/
patches/
The ``bsp`` directory contains the :ref:`kernel-dev/advanced:bsp descriptions`.
The remaining directories all contain "features". Separating ``bsp`` from the
rest of the structure aids conceptualizing intended usage.
The ``bsp`` directory contains the `BSP
descriptions <#bsp-descriptions>`__. The remaining directories all
contain "features". Separating ``bsp`` from the rest of the structure
aids conceptualizing intended usage.
Use these guidelines to help place your ``scc`` description files within
the structure:
@@ -197,12 +200,11 @@ contain "features" as far as the kernel tools are concerned.
Paths used in kernel Metadata files are relative to base, which is
either
:term:`FILESEXTRAPATHS` if
you are creating Metadata in
:ref:`recipe-space <kernel-dev/advanced:recipe-space metadata>`,
you are creating Metadata in `recipe-space <#recipe-space-metadata>`__,
or the top level of
:yocto_git:`yocto-kernel-cache </yocto-kernel-cache/tree/>`
if you are creating
:ref:`kernel-dev/advanced:metadata outside the recipe-space`.
:yocto_git:`yocto-kernel-cache </cgit/cgit.cgi/yocto-kernel-cache/tree/>`
if you are creating `Metadata outside of the
recipe-space <#metadata-outside-the-recipe-space>`__.
.. [1]
``scc`` stands for Series Configuration Control, but the naming has
@@ -243,7 +245,7 @@ two files: ``smp.scc`` and ``smp.cfg``. You can find these files in the
CONFIG_X86_BIGSMP=y
You can find general information on configuration
fragment files in the ":ref:`kernel-dev/common:creating configuration fragments`" section.
fragment files in the ":ref:`creating-config-fragments`" section.
Within the ``smp.scc`` file, the
:term:`KFEATURE_DESCRIPTION`
@@ -264,7 +266,7 @@ non-hardware fragment.
fragment.
As described in the
":ref:`kernel-dev/common:validating configuration`" section, you can
":ref:`kernel-dev/kernel-dev-common:validating configuration`" section, you can
use the following BitBake command to audit your configuration:
::
@@ -325,8 +327,8 @@ for the five patches in the directory.
You can create a typical ``.patch`` file using ``diff -Nurp`` or
``git format-patch`` commands. For information on how to create patches,
see the ":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
and ":ref:`kernel-dev/common:using traditional kernel development to patch the kernel`"
see the ":ref:`kernel-dev/kernel-dev-common:using \`\`devtool\`\` to patch the kernel`"
and ":ref:`kernel-dev/kernel-dev-common:using traditional kernel development to patch the kernel`"
sections.
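For example, to export the most recent commit from your kernel source tree as a
``.patch`` file (standard Git usage):

.. code-block:: shell

   $ git format-patch -1 -o ../patches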
Features
@@ -353,9 +355,9 @@ as how an additional feature description file is included with the
Typically, features are less granular than configuration fragments and
are more likely than configuration fragments and patches to be the types
of things you want to specify in the ``KERNEL_FEATURES`` variable of the
Linux kernel recipe. See the
":ref:`kernel-dev/advanced:using kernel metadata in a recipe`" section earlier
in the manual.
Linux kernel recipe. See the "`Using Kernel Metadata in a
Recipe <#using-kernel-metadata-in-a-recipe>`__" section earlier in the
manual.
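For instance (a sketch using the underscore override syntax current in this
release; the named feature must exist in your kernel Metadata for the build to
succeed):

.. code-block:: shell

   $ cat recipes-kernel/linux/linux-yocto_%.bbappend
   KERNEL_FEATURES_append = " features/netfilter/netfilter.scc"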
Kernel Types
------------
@@ -364,12 +366,12 @@ A kernel type defines a high-level kernel policy by aggregating
non-hardware configuration fragments with patches you want to use when
building a Linux kernel of a specific type (e.g. a real-time kernel).
Syntactically, kernel types are no different than features as described
in the ":ref:`kernel-dev/advanced:features`" section. The
in the "`Features <#features>`__" section. The
:term:`LINUX_KERNEL_TYPE`
variable in the kernel recipe selects the kernel type. For example, in
the ``linux-yocto_4.12.bb`` kernel recipe found in
``poky/meta/recipes-kernel/linux``, a
:ref:`require <bitbake:bitbake-user-manual/bitbake-user-manual-metadata:\`\`require\`\` directive>` directive
:ref:`require <bitbake:require-inclusion>` directive
includes the ``poky/meta/recipes-kernel/linux/linux-yocto.inc`` file,
which has the following statement that defines the default kernel type:
::
@@ -386,9 +388,9 @@ type as follows:
.. note::
You can find kernel recipes in the ``meta/recipes-kernel/linux`` directory
of the :ref:`overview-manual/development-environment:yocto project source repositories`
of the :ref:`overview-manual/overview-manual-development-environment:yocto project source repositories`
(e.g. ``poky/meta/recipes-kernel/linux/linux-yocto_4.12.bb``). See the
":ref:`kernel-dev/advanced:using kernel metadata in a recipe`"
":ref:`kernel-dev/kernel-dev-advanced:using kernel metadata in a recipe`"
section for more information.
Three kernel types ("standard", "tiny", and "preempt-rt") are supported
@@ -453,7 +455,7 @@ and ``patch`` commands, respectively.
It is not strictly necessary to create a kernel type ``.scc``
file. The Board Support Package (BSP) file can implicitly define the
kernel type using a ``define`` :term:`KTYPE` ``myktype`` line. See the
":ref:`kernel-dev/advanced:bsp descriptions`" section for more
":ref:`kernel-dev/kernel-dev-advanced:bsp descriptions`" section for more
information.
BSP Descriptions
@@ -469,12 +471,14 @@ supported kernel type.
For BSPs supported by the Yocto Project, the BSP description files
are located in the ``bsp`` directory of the ``yocto-kernel-cache``
repository organized under the "Yocto Linux Kernel" heading in the
:yocto_git:`Yocto Project Source Repositories <>`.
:yocto_git:`Yocto Project Source Repositories </>`.
This section overviews the BSP description structure, the aggregation
concepts, and presents a detailed example using a BSP supported by the
Yocto Project (i.e. BeagleBone Board). For complete information on BSP
layer file hierarchy, see the :doc:`/bsp-guide/index`.
layer file hierarchy, see the :doc:`../bsp-guide/bsp-guide`.
.. _bsp-description-file-overview:
Description Overview
~~~~~~~~~~~~~~~~~~~~
@@ -540,7 +544,7 @@ example, this is done using the following:
This file aggregates all the configuration
fragments, patches, and features that make up your standard kernel
policy. See the ":ref:`kernel-dev/advanced:kernel types`" section for more
policy. See the "`Kernel Types <#kernel-types>`__" section for more
information.
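A sketch of such an aggregation file, using hypothetical file names but the
``.scc`` directives shown throughout this chapter, might look like this:
::
# mybsp-standard.scc -- aggregate the standard kernel policy for a BSP
include ktypes/standard/standard.scc
include mybsp.scc
kconf hardware mybsp-extra.cfg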
To aggregate common configurations and features specific to the kernel
@@ -555,7 +559,7 @@ You can see that in the BeagleBone example with the following:
include beaglebone.scc
For information on how to break a complete ``.config`` file into the various
configuration fragments, see the ":ref:`kernel-dev/common:creating configuration fragments`" section.
configuration fragments, see the ":ref:`creating-config-fragments`" section.
Finally, if you have any configurations specific to the hardware that
are not in a ``*.scc`` file, you can include them as follows:
@@ -579,6 +583,8 @@ types of configurations. However, the Malta 32-bit board does
include mti-malta32.scc
kconf hardware mti-malta32-le.cfg
.. _bsp-description-file-example-minnow:
Example
~~~~~~~
@@ -696,7 +702,7 @@ good approach if you are working with Linux kernel sources you do not
control or if you just do not want to maintain a Linux kernel Git
repository on your own. For partial information on how you can define
kernel Metadata in the recipe-space, see the
":ref:`kernel-dev/common:modifying an existing recipe`" section.
":ref:`kernel-dev/kernel-dev-common:modifying an existing recipe`" section.
Conversely, if you are actively developing a kernel and are already
maintaining a Linux kernel Git repository of your own, you might find it
@@ -716,7 +722,7 @@ modifying
``oe-core/meta-skeleton/recipes-kernel/linux/linux-yocto-custom.bb`` to
a recipe in your layer, ``FILESEXTRAPATHS`` is typically set to
``${``\ :term:`THISDIR`\ ``}/${``\ :term:`PN`\ ``}``.
See the ":ref:`kernel-dev/common:modifying an existing recipe`"
See the ":ref:`kernel-dev/kernel-dev-common:modifying an existing recipe`"
section for more information.
Here is an example that shows a trivial tree of kernel Metadata stored
@@ -825,11 +831,11 @@ Given this scenario, you do not need to create any branches in the
source repository. Rather, you just take the static patches you need and
encapsulate them within a feature description. Once you have the feature
description, you simply include that into the BSP description as
described in the ":ref:`kernel-dev/advanced:bsp descriptions`" section.
described in the "`BSP Descriptions <#bsp-descriptions>`__" section.
You can find information on how to create patches and BSP descriptions
in the ":ref:`kernel-dev/advanced:patches`" and
":ref:`kernel-dev/advanced:bsp descriptions`" sections.
in the "`Patches <#patches>`__" and "`BSP
Descriptions <#bsp-descriptions>`__" sections.
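As a sketch, a feature description wrapping a couple of static patches and
a related configuration fragment could look like the following (all names
are illustrative):
::
define KFEATURE_DESCRIPTION "Fixes for my hypothetical device"
patch 0001-fix-device-probe.patch
patch 0002-fix-device-suspend.patch
kconf non-hardware my-device.cfg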
Machine Branches
----------------
@@ -919,6 +925,8 @@ after any ``branch`` commands:
include mybsp-hw.scc
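For example (names illustrative), a BSP description creates its machine
branch before including the hardware fragments, so that subsequent
``kconf`` and ``patch`` commands apply on that branch:
::
branch mybsp
include mybsp-hw.scc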
.. _scc-reference:
SCC Description File Reference
==============================


@@ -21,11 +21,11 @@ Preparing the Build Host to Work on the Kernel
Before you can do any kernel development, you need to be sure your build
host is set up to use the Yocto Project. For information on how to get
set up, see the ":doc:`/dev-manual/start`" section in
set up, see the ":doc:`../dev-manual/dev-manual-start`" section in
the Yocto Project Development Tasks Manual. Part of preparing the system
is creating a local Git repository of the
:term:`Source Directory` (``poky``) on your system. Follow the steps in the
":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`"
":ref:`dev-manual/dev-manual-start:cloning the \`\`poky\`\` repository`"
section in the Yocto Project Development Tasks Manual to set up your
Source Directory.
@@ -34,12 +34,12 @@ Source Directory.
Be sure you check out the appropriate development branch or you
create your local branch by checking out a specific tag to get the
desired version of Yocto Project. See the
":ref:`dev-manual/start:checking out by branch in poky`" and
":ref:`dev-manual/start:checking out by tag in poky`"
":ref:`dev-manual/dev-manual-start:checking out by branch in poky`" and
":ref:`dev-manual/dev-manual-start:checking out by tag in poky`"
sections in the Yocto Project Development Tasks Manual for more information.
Kernel development is best accomplished using
:ref:`devtool <sdk-manual/extensible:using \`\`devtool\`\` in your sdk workflow>`
:ref:`devtool <sdk-manual/sdk-extensible:using \`\`devtool\`\` in your sdk workflow>`
and not through traditional kernel workflow methods. The remainder of
this section provides information for both scenarios.
@@ -49,7 +49,7 @@ Getting Ready to Develop Using ``devtool``
Follow these steps to prepare to update the kernel image using
``devtool``. Completing this procedure leaves you with a clean kernel
image and ready to make modifications as described in the
":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
":ref:`kernel-dev/kernel-dev-common:using \`\`devtool\`\` to patch the kernel`"
section:
1. *Initialize the BitBake Environment:* Before building an extensible
@@ -57,13 +57,13 @@ section:
the build environment script (i.e. :ref:`structure-core-script`):
::
$ cd poky
$ cd ~/poky
$ source oe-init-build-env
.. note::
The previous commands assume the
:ref:`overview-manual/development-environment:yocto project source repositories`
:ref:`overview-manual/overview-manual-development-environment:yocto project source repositories`
(i.e. ``poky``) have been cloned using Git and the local repository is named
"poky".
@@ -74,7 +74,7 @@ section:
``MACHINE`` variable appropriately in your ``conf/local.conf`` file
found in the
:term:`Build Directory` (i.e.
``poky/build`` in this example).
``~/poky/build`` in this example).
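For example, to target the QEMU x86 machine used elsewhere in this
manual's examples, the ``conf/local.conf`` entry could be (value assumed
for illustration):
::
MACHINE ?= "qemux86"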
Also, since you are preparing to work on the kernel image, you need
to set the
@@ -94,7 +94,7 @@ section:
``bitbake-layers create-layer`` command as follows:
::
$ cd poky/build
$ cd ~/poky/build
$ bitbake-layers create-layer ../../meta-mylayer
NOTE: Starting bitbake server...
Add your new layer with 'bitbake-layers add-layer ../../meta-mylayer'
@@ -104,13 +104,13 @@ section:
For background information on working with common and BSP layers,
see the
":ref:`dev-manual/common-tasks:understanding and creating layers`"
":ref:`dev-manual/dev-manual-common-tasks:understanding and creating layers`"
section in the Yocto Project Development Tasks Manual and the
":ref:`bsp-guide/bsp:bsp layers`" section in the Yocto Project Board
Support (BSP) Developer's Guide, respectively. For information on how to
use the ``bitbake-layers create-layer`` command to quickly set up a layer,
see the
":ref:`dev-manual/common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`"
":ref:`dev-manual/dev-manual-common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`"
section in the Yocto Project Development Tasks Manual.
4. *Inform the BitBake Build Environment About Your Layer:* As directed
@@ -119,7 +119,7 @@ section:
``bblayers.conf`` file as follows:
::
$ cd poky/build
$ cd ~/poky/build
$ bitbake-layers add-layer ../../meta-mylayer
NOTE: Starting bitbake server...
$
@@ -128,7 +128,7 @@ section:
specifically for use with images to be run using QEMU:
::
$ cd poky/build
$ cd ~/poky/build
$ bitbake core-image-minimal -c populate_sdk_ext
Once
@@ -136,21 +136,21 @@ section:
``*.sh`` file) in the following directory:
::
poky/build/tmp/deploy/sdk
~/poky/build/tmp/deploy/sdk
For this example, the installer file is named
``poky-glibc-x86_64-core-image-minimal-i586-toolchain-ext-&DISTRO;.sh``.
``poky-glibc-x86_64-core-image-minimal-i586-toolchain-ext-DISTRO.sh``.
6. *Install the Extensible SDK:* Use the following command to install
the SDK. For this example, install the SDK in the default
``poky_sdk`` directory:
``~/poky_sdk`` directory:
::
$ cd poky/build/tmp/deploy/sdk
$ ./poky-glibc-x86_64-core-image-minimal-i586-toolchain-ext-&DISTRO;.sh
Poky (Yocto Project Reference Distro) Extensible SDK installer version &DISTRO;
$ cd ~/poky/build/tmp/deploy/sdk
$ ./poky-glibc-x86_64-core-image-minimal-i586-toolchain-ext-3.1.2.sh
Poky (Yocto Project Reference Distro) Extensible SDK installer version 3.1.2
============================================================================
Enter target directory for SDK (default: poky_sdk):
Enter target directory for SDK (default: ~/poky_sdk):
You are about to install the SDK to "/home/scottrif/poky_sdk". Proceed [Y/n]? Y
Extracting SDK......................................done
Setting it up...
@@ -175,7 +175,7 @@ section:
directed by the output from installing the SDK:
::
$ source poky_sdk/environment-setup-i586-poky-linux
$ source ~/poky_sdk/environment-setup-i586-poky-linux
"SDK environment now set up; additionally you may now run devtool to perform development tasks.
Run devtool --help for further details.
@@ -207,12 +207,12 @@ section:
building for actual hardware and not for emulation, you could flash
the image to a USB stick on ``/dev/sdd`` and boot your device. For an
example that uses a Minnowboard, see the
:yocto_wiki:`TipsAndTricks/KernelDevelopmentWithEsdk </TipsAndTricks/KernelDevelopmentWithEsdk>`
:yocto_wiki:`TipsAndTricks/KernelDevelopmentWithEsdk </wiki/TipsAndTricks/KernelDevelopmentWithEsdk>`
Wiki page.
At this point you have set up to start making modifications to the
kernel by using the extensible SDK. For a continued example, see the
":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
":ref:`kernel-dev/kernel-dev-common:using \`\`devtool\`\` to patch the kernel`"
section.
Getting Ready for Traditional Kernel Development
@@ -226,7 +226,7 @@ you will be editing these files.
Follow these steps to prepare to update the kernel image using
traditional kernel development flow with the Yocto Project. Completing
this procedure leaves you ready to make modifications to the kernel
source as described in the ":ref:`kernel-dev/common:using traditional kernel development to patch the kernel`"
source as described in the ":ref:`kernel-dev/kernel-dev-common:using traditional kernel development to patch the kernel`"
section:
1. *Initialize the BitBake Environment:* Before you can do anything
@@ -236,11 +236,11 @@ section:
Also, for this example, be sure that the local branch you have
checked out for ``poky`` is the Yocto Project &DISTRO_NAME; branch. If
you need to check out the &DISTRO_NAME; branch, see the
":ref:`dev-manual/start:checking out by branch in poky`"
":ref:`dev-manual/dev-manual-start:checking out by branch in poky`"
section in the Yocto Project Development Tasks Manual.
::
$ cd poky
$ cd ~/poky
$ git branch
master
* &DISTRO_NAME_NO_CAP;
@@ -249,7 +249,7 @@ section:
.. note::
The previous commands assume the
:ref:`overview-manual/development-environment:yocto project source repositories`
:ref:`overview-manual/overview-manual-development-environment:yocto project source repositories`
(i.e. ``poky``) have been cloned using Git and the local repository is named
"poky".
@@ -260,7 +260,7 @@ section:
``MACHINE`` variable appropriately in your ``conf/local.conf`` file
found in the
:term:`Build Directory` (i.e.
``poky/build`` in this example).
``~/poky/build`` in this example).
Also, since you are preparing to work on the kernel image, you need
to set the
@@ -280,7 +280,7 @@ section:
``bitbake-layers create-layer`` command as follows:
::
$ cd poky/build
$ cd ~/poky/build
$ bitbake-layers create-layer ../../meta-mylayer
NOTE: Starting bitbake server...
Add your new layer with 'bitbake-layers add-layer ../../meta-mylayer'
@@ -289,13 +289,13 @@ section:
For background information on working with common and BSP layers,
see the
":ref:`dev-manual/common-tasks:understanding and creating layers`"
":ref:`dev-manual/dev-manual-common-tasks:understanding and creating layers`"
section in the Yocto Project Development Tasks Manual and the
":ref:`bsp-guide/bsp:bsp layers`" section in the Yocto Project Board
Support (BSP) Developer's Guide, respectively. For information on how to
use the ``bitbake-layers create-layer`` command to quickly set up a layer,
see the
":ref:`dev-manual/common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`"
":ref:`dev-manual/dev-manual-common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`"
section in the Yocto Project Development Tasks Manual.
4. *Inform the BitBake Build Environment About Your Layer:* As directed
@@ -304,7 +304,7 @@ section:
``bblayers.conf`` file as follows:
::
$ cd poky/build
$ cd ~/poky/build
$ bitbake-layers add-layer ../../meta-mylayer
NOTE: Starting bitbake server ...
$
@@ -365,7 +365,8 @@ section:
At this point, you are ready to start making modifications to the kernel
using traditional kernel development steps. For a continued example, see
the ":ref:`kernel-dev/common:using traditional kernel development to patch the kernel`"
the "`Using Traditional Kernel Development to Patch the
Kernel <#using-traditional-kernel-development-to-patch-the-kernel>`__"
section.
Creating and Preparing a Layer
@@ -377,7 +378,7 @@ layer contains its own :term:`BitBake`
append files (``.bbappend``) and provides a convenient mechanism to
create your own recipe files (``.bb``) as well as store and use kernel
patch files. For background information on working with layers, see the
":ref:`dev-manual/common-tasks:understanding and creating layers`"
":ref:`dev-manual/dev-manual-common-tasks:understanding and creating layers`"
section in the Yocto Project Development Tasks Manual.
.. note::
@@ -385,7 +386,7 @@ section in the Yocto Project Development Tasks Manual.
The Yocto Project comes with many tools that simplify tasks you need
to perform. One such tool is the ``bitbake-layers create-layer``
command, which simplifies creating a new layer. See the
":ref:`dev-manual/common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`"
":ref:`dev-manual/dev-manual-common-tasks:creating a general layer using the \`\`bitbake-layers\`\` script`"
section in the Yocto Project Development Tasks Manual for
information on how to use this script to quickly set up a new layer.
@@ -397,6 +398,7 @@ home directory:
1. *Create Structure*: Create the layer's structure:
::
$ cd $HOME
$ mkdir meta-mylayer
$ mkdir meta-mylayer/conf
$ mkdir meta-mylayer/recipes-kernel
@@ -441,7 +443,7 @@ home directory:
The :term:`FILESEXTRAPATHS` and :term:`SRC_URI` statements
enable the OpenEmbedded build system to find patch files. For more
information on using append files, see the
":ref:`dev-manual/common-tasks:using .bbappend files in your layer`"
":ref:`dev-manual/dev-manual-common-tasks:using .bbappend files in your layer`"
section in the Yocto Project Development Tasks Manual.
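A minimal append file using those two statements might look like this
sketch (the patch file name is illustrative; gatesgarth-era override
syntax shown):
::
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-calibrate-add-printk-example.patch"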
Modifying an Existing Recipe
@@ -455,15 +457,15 @@ the :term:`Source Directory` in
Modifying an existing recipe can consist of the following:
- :ref:`kernel-dev/common:creating the append file`
- :ref:`kernel-dev/kernel-dev-common:creating the append file`
- :ref:`kernel-dev/common:applying patches`
- :ref:`kernel-dev/kernel-dev-common:applying patches`
- :ref:`kernel-dev/common:changing the configuration`
- :ref:`kernel-dev/kernel-dev-common:changing the configuration`
Before modifying an existing recipe, be sure that you have created a
minimal, custom layer from which you can work. See the
":ref:`kernel-dev/common:creating and preparing a layer`" section for
minimal, custom layer from which you can work. See the "`Creating and
Preparing a Layer <#creating-and-preparing-a-layer>`__" section for
information.
Creating the Append File
@@ -500,7 +502,7 @@ your layer in the following area:
.. note::
If you are working on a new machine Board Support Package (BSP), be
sure to refer to the :doc:`/bsp-guide/index`.
sure to refer to the :doc:`../bsp-guide/bsp-guide`.
As an example, consider the following append file used by the BSPs in
``meta-yocto-bsp``:
@@ -640,9 +642,9 @@ and applies the patches before building the kernel.
For a detailed example showing how to patch the kernel using
``devtool``, see the
":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
":ref:`kernel-dev/kernel-dev-common:using \`\`devtool\`\` to patch the kernel`"
and
":ref:`kernel-dev/common:using traditional kernel development to patch the kernel`"
":ref:`kernel-dev/kernel-dev-common:using traditional kernel development to patch the kernel`"
sections.
Changing the Configuration
@@ -709,7 +711,7 @@ Linux kernel, BitBake detects the change in the recipe and fetches and
applies the new configuration before building the kernel.
For a detailed example showing how to configure the kernel, see the
":ref:`kernel-dev/common:configuring the kernel`" section.
"`Configuring the Kernel <#configuring-the-kernel>`__" section.
Using an "In-Tree" ``defconfig`` File
--------------------------------------
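In short, an in-tree ``defconfig`` file is selected through the
:term:`KBUILD_DEFCONFIG` variable, normally scoped with a machine
override; a minimal sketch (machine and file names are placeholders):
::
KBUILD_DEFCONFIG_mymachine ?= "my_defconfig"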
@@ -767,7 +769,7 @@ the extensible SDK and ``devtool``.
Before attempting this procedure, be sure you have performed the
steps to get ready for updating the kernel as described in the
":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``"
":ref:`kernel-dev/kernel-dev-common:getting ready to develop using \`\`devtool\`\``"
section.
Patching the kernel involves changing or adding configurations to an
@@ -780,7 +782,7 @@ output at boot time through ``printk`` statements in the kernel's
``calibrate.c`` source code file. Applying the patch and booting the
modified image causes the added messages to appear on the emulator's
console. The example is a continuation of the setup procedure found in
the ":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``" Section.
the ":ref:`kernel-dev/kernel-dev-common:getting ready to develop using \`\`devtool\`\``" Section.
1. *Check Out the Kernel Source Files:* First you must use ``devtool``
to check out the kernel source code in its workspace. Be sure you are
@@ -789,7 +791,7 @@ the ":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``" Se
.. note::
See this step in the
":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``"
":ref:`kernel-dev/kernel-dev-common:getting ready to develop using \`\`devtool\`\``"
section for more information.
Use the following ``devtool`` command to check out the code:
@@ -817,12 +819,12 @@ the ":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``" Se
1. *Change the working directory*: In the previous step, the output
noted where you can find the source files (e.g.
``poky_sdk/workspace/sources/linux-yocto``). Change to where the
``~/poky_sdk/workspace/sources/linux-yocto``). Change to where the
kernel source code is before making your edits to the
``calibrate.c`` file:
::
$ cd poky_sdk/workspace/sources/linux-yocto
$ cd ~/poky_sdk/workspace/sources/linux-yocto
2. *Edit the source file*: Edit the ``init/calibrate.c`` file to have
the following changes:
@@ -860,7 +862,7 @@ the ":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``" Se
If the image you originally created resulted in a Wic file, you
can use an alternate method to create the new image with the
updated kernel. For an example, see the steps in the
:yocto_wiki:`TipsAndTricks/KernelDevelopmentWithEsdk </TipsAndTricks/KernelDevelopmentWithEsdk>`
:yocto_wiki:`TipsAndTricks/KernelDevelopmentWithEsdk </wiki/TipsAndTricks/KernelDevelopmentWithEsdk>`
Wiki Page.
::
@@ -894,7 +896,7 @@ the ":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``" Se
and use these Git commands to stage and commit your changes:
::
$ cd poky_sdk/workspace/sources/linux-yocto
$ cd ~/poky_sdk/workspace/sources/linux-yocto
$ git status
$ git add init/calibrate.c
$ git commit -m "calibrate: Add printk example"
@@ -910,7 +912,7 @@ the ":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``" Se
.. note::
See Step 3 of the
":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``"
":ref:`kernel-dev/kernel-dev-common:getting ready to develop using \`\`devtool\`\``"
section for information on setting up this layer.
Once the command
@@ -924,7 +926,7 @@ the ":ref:`kernel-dev/common:getting ready to develop using \`\`devtool\`\``" Se
set up to run BitBake:
::
$ cd poky/build
$ cd ~/poky/build
$ bitbake core-image-minimal
Using Traditional Kernel Development to Patch the Kernel
@@ -933,14 +935,14 @@ Using Traditional Kernel Development to Patch the Kernel
The steps in this procedure show you how you can patch the kernel using
traditional kernel development (i.e. not using ``devtool`` and the
extensible SDK as described in the
":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
":ref:`kernel-dev/kernel-dev-common:using \`\`devtool\`\` to patch the kernel`"
section).
.. note::
Before attempting this procedure, be sure you have performed the
steps to get ready for updating the kernel as described in the
":ref:`kernel-dev/common:getting ready for traditional kernel development`"
":ref:`kernel-dev/kernel-dev-common:getting ready for traditional kernel development`"
section.
Patching the kernel involves changing or adding configurations to an
@@ -953,14 +955,15 @@ emulator console output at boot time through ``printk`` statements in
the kernel's ``calibrate.c`` source code file. Applying the patch and
booting the modified image causes the added messages to appear on the
emulator's console. The example is a continuation of the setup procedure
found in the
":ref:`kernel-dev/common:getting ready for traditional kernel development`"
found in the "`Getting Ready for Traditional Kernel
Development <#getting-ready-for-traditional-kernel-development>`__"
Section.
1. *Edit the Source Files*: Prior to this step, you should have used Git
to create a local copy of the repository for your kernel. Assuming
you created the repository as directed in the
":ref:`kernel-dev/common:getting ready for traditional kernel development`"
you created the repository as directed in the "`Getting Ready for
Traditional Kernel
Development <#getting-ready-for-traditional-kernel-development>`__"
section, use the following commands to edit the ``calibrate.c`` file:
1. *Change the working directory*: You need to locate the source
@@ -1012,7 +1015,7 @@ Section.
to the following to your ``local.conf``:
::
$ cd poky/build/conf
$ cd ~/poky/build/conf
Add the following to the ``local.conf``:
::
@@ -1034,7 +1037,7 @@ Section.
you can now use BitBake to build the image:
::
$ cd poky/build
$ cd ~/poky/build
$ bitbake core-image-minimal
5. *Boot the image*: Boot the modified image in the QEMU emulator using
@@ -1042,7 +1045,7 @@ Section.
with no password:
::
$ cd poky/build
$ cd ~/poky/build
$ runqemu qemux86
6. *Look for Your Changes:* As QEMU booted, you might have seen your
@@ -1102,10 +1105,10 @@ Section.
The :term:`FILESEXTRAPATHS` and :term:`SRC_URI` statements
enable the OpenEmbedded build system to find the patch file.
For more information on append files and patches, see the
":ref:`kernel-dev/common:creating the append file`" and
":ref:`kernel-dev/common:applying patches`" sections. You can also see the
":ref:`dev-manual/common-tasks:using .bbappend files in your layer`"
For more information on append files and patches, see the "`Creating
the Append File <#creating-the-append-file>`__" and "`Applying
Patches <#applying-patches>`__" sections. You can also see the
":ref:`dev-manual/dev-manual-common-tasks:using .bbappend files in your layer`"
section in the Yocto Project Development Tasks Manual.
.. note::
@@ -1116,7 +1119,7 @@ Section.
the following sequence of commands:
::
$ cd poky/build
$ cd ~/poky/build
$ bitbake -c cleanall linux-yocto
$ bitbake core-image-minimal -c cleanall
$ bitbake core-image-minimal
@@ -1138,8 +1141,8 @@ configuration fragments, and how to interactively modify your
``.config`` file to create the leanest kernel configuration file
possible.
For more information on kernel configuration, see the
":ref:`kernel-dev/common:changing the configuration`" section.
For more information on kernel configuration, see the "`Changing the
Configuration <#changing-the-configuration>`__" section.
Using ``menuconfig``
---------------------
@@ -1169,7 +1172,7 @@ environment, you must do the following:
The following commands initialize the BitBake environment, run the
:ref:`ref-tasks-kernel_configme`
task, and launch ``menuconfig``. These commands assume the Source
Directory's top-level folder is ``poky``:
Directory's top-level folder is ``~/poky``:
::
$ cd poky
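$ source oe-init-build-env
# Run the kernel_configme task, then launch the menuconfig UI against the
# kernel recipe (a sketch of the usual sequence for this procedure):
$ bitbake linux-yocto -c kernel_configme -f
$ bitbake linux-yocto -c menuconfig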
@@ -1187,9 +1190,9 @@ the tool and save your changes to create an updated version of the
You can use the entire ``.config`` file as the ``defconfig`` file. For
information on ``defconfig`` files, see the
":ref:`kernel-dev/common:changing the configuration`",
":ref:`kernel-dev/common:using an "in-tree" \`\`defconfig\`\` file`",
and ":ref:`kernel-dev/common:creating a \`\`defconfig\`\` file`"
":ref:`kernel-dev/kernel-dev-common:changing the configuration`",
":ref:`kernel-dev/kernel-dev-common:using an "in-tree" \`\`defconfig\`\` file`",
and ":ref:`kernel-dev/kernel-dev-common:creating a \`\`defconfig\`\` file`"
sections.
Consider an example that configures the "CONFIG_SMP" setting for the
@@ -1295,8 +1298,10 @@ created to hold the configuration changes.
applies these on top of, and after applying, the existing ``defconfig`` file
configurations.
For more information on configuring the kernel, see the
":ref:`kernel-dev/common:changing the configuration`" section.
For more information on configuring the kernel, see the "`Changing the
Configuration <#changing-the-configuration>`__" section.
.. _creating-config-fragments:
Creating Configuration Fragments
--------------------------------
@@ -1317,7 +1322,7 @@ appear in the ``.config`` file, which is in the :term:`Build Directory`.
For more information about where the ``.config`` file is located, see the
example in the
":ref:`kernel-dev/common:using \`\`menuconfig\`\``"
":ref:`kernel-dev/kernel-dev-common:using \`\`menuconfig\`\``"
section.
It is simple to create a configuration fragment. One method is to use
@@ -1367,14 +1372,14 @@ steps:
$ bitbake linux-yocto -c diffconfig
The ``diffconfig`` command creates a file that is a list of Linux kernel
``CONFIG_`` assignments. See the
":ref:`kernel-dev/common:changing the configuration`" section for additional
``CONFIG_`` assignments. See the "`Changing the
Configuration <#changing-the-configuration>`__" section for additional
information on how to use the output as a configuration fragment.
.. note::
You can also use this method to create configuration fragments for a
BSP. See the ":ref:`kernel-dev/advanced:bsp descriptions`"
BSP. See the ":ref:`kernel-dev/kernel-dev-advanced:bsp descriptions`"
section for more information.
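A fragment produced this way is just a short list of ``CONFIG_``
assignments, for example (values illustrative):
::
CONFIG_SMP=y
CONFIG_HIGHMEM=y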
Where do you put your configuration fragment files? You can place these
@@ -1420,7 +1425,7 @@ when you override a policy configuration in a hardware configuration
fragment.
In order to run this task, you must have an existing ``.config`` file.
See the ":ref:`kernel-dev/common:using \`\`menuconfig\`\``" section for
See the ":ref:`kernel-dev/kernel-dev-common:using \`\`menuconfig\`\``" section for
information on how to create a configuration file.
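The audit task itself is run against the kernel recipe, as in the
following sketch:
::
$ bitbake linux-yocto -c kernel_configcheck -f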
Following is sample output from the ``do_kernel_configcheck`` task:
@@ -1493,7 +1498,7 @@ and
tasks until they produce no warnings.
For more information on how to use the ``menuconfig`` tool, see the
:ref:`kernel-dev/common:using \`\`menuconfig\`\`` section.
:ref:`kernel-dev/kernel-dev-common:using \`\`menuconfig\`\`` section.
Fine-Tuning the Kernel Configuration File
-----------------------------------------
@@ -1609,10 +1614,11 @@ source directory. Follow these steps to clean up the version string:
Depending on your particular kernel development workflow, the
commands you use to rebuild the kernel might differ. For information
on building the kernel image when using ``devtool``, see the
":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
":ref:`kernel-dev/kernel-dev-common:using \`\`devtool\`\` to patch the kernel`"
section. For
information on building the kernel image when using Bitbake, see the
":ref:`kernel-dev/common:using traditional kernel development to patch the kernel`"
"`Using Traditional Kernel Development to Patch the
Kernel <#using-traditional-kernel-development-to-patch-the-kernel>`__"
section.
Working With Your Own Sources
@@ -1730,9 +1736,8 @@ Here are some basic steps you can use to work with your own sources:
5. *Customize Your Recipe as Needed:* Provide further customizations to
your recipe as needed just as you would customize an existing
linux-yocto recipe. See the
":ref:`ref-manual/devtool-reference:modifying an existing recipe`" section
for information.
linux-yocto recipe. See the "`Modifying an Existing
Recipe <#modifying-an-existing-recipe>`__" section for information.
Working with Out-of-Tree Modules
================================
@@ -1912,7 +1917,7 @@ differences:
$ git show origin/standard/base..origin/standard/emenlow
Use this command to create individual patches for each change. Here is
an example that creates patch files for each commit and places them
in your ``Documents`` directory:
::
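# A command of this form generates one patch file per commit in the range
# shown above and writes the files to the given directory (path illustrative):
$ git format-patch -o $HOME/Documents origin/standard/base..origin/standard/emenlow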
@@ -1939,7 +1944,7 @@ Adding Recipe-Space Kernel Features
===================================
You can add kernel features in the
:ref:`recipe-space <kernel-dev/advanced:recipe-space metadata>`
:ref:`recipe-space <kernel-dev/kernel-dev-advanced:recipe-space metadata>`
by using the :term:`KERNEL_FEATURES`
variable and by specifying the feature's ``.scc`` file path in the
:term:`SRC_URI` statement. When you
@@ -1958,7 +1963,7 @@ OpenEmbedded build system searches all forms of kernel Metadata on the
``SRC_URI`` statement regardless of whether the Metadata is in the
"kernel-cache", system kernel Metadata, or a recipe-space Metadata (i.e.
part of the kernel recipe). See the
":ref:`kernel-dev/advanced:kernel metadata location`" section for
":ref:`kernel-dev/kernel-dev-advanced:kernel metadata location`" section for
additional information.
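Wiring a recipe-space feature into the build typically takes two
statements like the following sketch (feature name hypothetical;
gatesgarth-era append syntax):
::
SRC_URI_append = " file://myfeature.scc"
KERNEL_FEATURES_append = " myfeature.scc"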
When you specify the feature's ``.scc`` file on the ``SRC_URI``


@@ -4,6 +4,8 @@
Advanced Kernel Concepts
************************
.. _kernel-big-picture:
Yocto Project Kernel Development and Maintenance
================================================
@@ -35,7 +37,7 @@ Yocto Project Linux kernel that caters to specific embedded designer
needs for targeted hardware.
You can find a web interface to the Yocto Linux kernels in the
:ref:`overview-manual/development-environment:yocto project source repositories`
:ref:`overview-manual/overview-manual-development-environment:yocto project source repositories`
at :yocto_git:`/`. If you look at the interface, you will see to
the left a grouping of Git repositories titled "Yocto Linux Kernel".
Within this group, you will find several Linux Yocto kernels developed
@@ -71,7 +73,7 @@ and included with Yocto Project releases:
and configurations for the linux-yocto kernel tree. This repository
is useful when working on the linux-yocto kernel. For more
information on this "Advanced Kernel Metadata", see the
":doc:`/kernel-dev/advanced`" Chapter.
":doc:`kernel-dev-advanced`" Chapter.
- *linux-yocto-dev:* A development kernel based on the latest
upstream release candidate available.
@@ -160,7 +162,7 @@ implemented by the Yocto Project team using the Source Code Manager
- You can find documentation on Git at https://git-scm.com/doc. You can
also get an introduction to Git as it applies to the Yocto Project in the
":ref:`overview-manual/development-environment:git`" section in the Yocto Project
":ref:`overview-manual/overview-manual-development-environment:git`" section in the Yocto Project
Overview and Concepts Manual. The latter reference provides an
overview of Git and presents a minimal set of Git commands that
allows you to be functional using Git. You can use as much, or as
@@ -258,7 +260,7 @@ Yocto Linux kernel needed for any given set of requirements.
Yocto Linux kernels, but rather shows a single generic kernel just
for conceptual purposes. Also keep in mind that this structure
represents the
:ref:`overview-manual/development-environment:yocto project source repositories`
:ref:`overview-manual/overview-manual-development-environment:yocto project source repositories`
that are either pulled from during the build or established on the
host development system prior to the build by either cloning a
particular kernel's Git repository or by downloading and unpacking a
@@ -293,13 +295,13 @@ ways:
- *Files Accessed While using devtool:* ``devtool``, which is
available with the Yocto Project, is the preferred method by which to
modify the kernel. See the ":ref:`kernel-dev/intro:kernel modification workflow`" section.
modify the kernel. See the ":ref:`kernel-dev/kernel-dev-intro:kernel modification workflow`" section.
- *Cloned Repository:* If you are working in the kernel all the time,
you probably would want to set up your own local Git repository of
the Yocto Linux kernel tree. For information on how to clone a Yocto
Linux kernel Git repository, see the
":ref:`kernel-dev/common:preparing the build host to work on the kernel`"
":ref:`kernel-dev/kernel-dev-common:preparing the build host to work on the kernel`"
section.
- *Temporary Source Files from a Build:* If you just need to make some
@@ -327,11 +329,11 @@ source files used during the build.
Again, for additional information on the Yocto Project kernel's
architecture and its branching strategy, see the
":ref:`kernel-dev/concepts-appx:yocto linux kernel architecture and branching strategies`"
":ref:`kernel-dev/kernel-dev-concepts-appx:yocto linux kernel architecture and branching strategies`"
section. You can also reference the
":ref:`kernel-dev/common:using \`\`devtool\`\` to patch the kernel`"
":ref:`kernel-dev/kernel-dev-common:using \`\`devtool\`\` to patch the kernel`"
and
":ref:`kernel-dev/common:using traditional kernel development to patch the kernel`"
":ref:`kernel-dev/kernel-dev-common:using traditional kernel development to patch the kernel`"
sections for detailed examples that modify the kernel.
Determining Hardware and Non-Hardware Features for the Kernel Configuration Audit Phase
@@ -341,7 +343,7 @@ This section describes part of the kernel configuration audit phase that
most developers can ignore. For general information on kernel
configuration including ``menuconfig``, ``defconfig`` files, and
configuration fragments, see the
":ref:`kernel-dev/common:configuring the kernel`" section.
":ref:`kernel-dev/kernel-dev-common:configuring the kernel`" section.
During this part of the audit phase, the contents of the final
``.config`` file are compared against the fragments specified by the
