Compare commits


138 Commits

Author SHA1 Message Date
Richard Purdie
febbe2944c build-appliance-image: Update to dunfell head revision
(From OE-Core rev: 6fa967f194edd314c9026c80f8d93360ac6d9efa)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-08 21:45:13 +01:00
Richard Purdie
ff7dbd392a build-appliance: Update branch to point at dunfell
(From OE-Core rev: cad1b34fbdb3af04b527c27c8c84077eb695deb1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-08 21:45:05 +01:00
Richard Purdie
6cc48bdfeb build-appliance-image: Update to dunfell head revision
(From OE-Core rev: 2e4be161e65370708dfe85fe886843db857f5520)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-08 21:28:51 +01:00
Richard Purdie
7df25f49c3 poky.conf: Bump version for 3.1.1 dunfell release
(From meta-yocto rev: 9d7966cf2fb33d827c999fbb88872d8437d8f0c7)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-08 21:28:41 +01:00
Tim Orling
5890b72bd6 bitbake: toaster-requirements.txt: require Django 2.2
Django 2.2 was enabled in commit
9730f95686b2ac72cf1fa513c555f7c7787e2667.

Django 1.11 reached end of life on April 1, 2020.
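
For context, the corresponding pin in toaster-requirements.txt takes roughly
this shape (illustrative bounds, not copied verbatim from the commit):

    # toaster-requirements.txt -- illustrative pin
    Django>2.2,<2.3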

(Bitbake rev: 6cc09fa33131f71a3fd0e336ff07a4186b41bf8f)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ee15e78c6f9b59c221b1e43973ee4db20c5b443b)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:37:15 +01:00
Kai Kang
ee1bb639fb bitbake: bitbake-user-manual-metadata.xml: fix a minor error
In the '_remove' example in bitbake-user-manual-metadata.xml, there is
no 'jkl' in the original value of FOO2, so remove it from the expected result.
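
For readers unfamiliar with the operator, a minimal sketch of how '_remove'
behaves (illustrative values, not the manual's exact snippet):

    FOO2 = "abc def ghi"
    FOO2_remove = "def"
    # FOO2 now expands without "def"; a value such as "jkl" that was never
    # present cannot appear as removed in the documented result.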

(Bitbake rev: 324aaa7f8d6d83e1e00b8054dac44df561588be8)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 06b5cf0ab6c6e518ac780d081fab5546334c5c7d)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:37:15 +01:00
Jacob Kroon
6af527ca94 bitbake: doc: More explanation to tasks that recursively depend on themselves
(Bitbake rev: f92e19a3b3d89eb26eeb74b18ca01248767035b5)

Signed-off-by: Jacob Kroon <jacob.kroon@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c92a266c8e452833f2a590721aa1c2bd6fbeb2e0)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:37:15 +01:00
Jacob Kroon
81833e0ee8 bitbake: doc: Clarify how task dependencies relate to RDEPENDS
Clarify that BitBake knows how to map entries defined in the runtime
dependency namespace back to build-time dependencies (recipes) in
which tasks are defined.
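
A minimal sketch of what that mapping means in practice (recipe and package
names below are hypothetical):

    # foo.bb declares a runtime dependency on the package "bar"
    RDEPENDS_${PN} = "bar"
    # When building the task graph, BitBake resolves the runtime name "bar"
    # back to the recipe that provides it, so tasks in this recipe can end up
    # depending on tasks defined in bar's recipe, even though "bar" was named
    # in the runtime (package) namespace.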

(Bitbake rev: e4695176ffdc5eb959f71a08f77ff6a8e028ffa9)

Signed-off-by: Jacob Kroon <jacob.kroon@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit caf422435ad64aacbdab8a94da3115599dd0938b)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:37:15 +01:00
Robert P. J. Day
406b9a2434 bitbake: user manual: properly tag content as <replaceable>
Tag a couple of fields as replaceable to be consistent with the rest of
the manual.

(Bitbake rev: 25c5c79bbe814eaff03c72cc2680414a73cff7f4)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 647c13d4ae746a1bb9bd76ff318477dadb4d292f)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:37:15 +01:00
Robert P. J. Day
919d0e98f2 bitbake: docs: delete reference to obsolete recipe-depends.dot
Given that generation of recipe-depends.dot was removed:

  commit 4c484cc01e3eee7ab2ab0359fd680b4dbd31dc30
  Author: Chen Qi <Qi.Chen@windriver.com>
  Date:   Thu Aug 22 15:52:51 2019 +0800

      cooker.py: remove generation of recipe-depends.dot

      The information of recipe-depends.dot is misleading.

delete mention of it from the user manual.

(Bitbake rev: be367887b0a729ef01fc04f2b91368612ed92ed3)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2effbb6e10b07dc12e4ecdf449ca29fc20968c59)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:37:15 +01:00
Steve Sakoman
7fbc1d298f Documentation: Add 3.1.1 version updates missing from previous commit
(From yocto-docs rev: bd140f0f9988fbd55f428a8ba7a54ae68716a33a)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Joshua Watt
b72565b65f layer.conf: Bump OE-Core layer version
The commits 910ffaf5be ("pyelftools: Import from meta-python") and
a96f815c53 ("pycryptodome: Import from meta-python") moved recipes from
meta-python to oe-core. In order for this to be communicated to users,
bump the LAYERVERSION so that meta-python can key off it in its
LAYERDEPENDS.
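
A hedged sketch of the mechanism being relied on (the version value is
illustrative, not taken from the commit):

    # meta/conf/layer.conf (OE-Core): the advertised layer version is bumped
    LAYERVERSION_core = "12"
    # meta-python's conf/layer.conf can then key off this value, for example
    # through a versioned entry in its LAYERDEPENDS, to tell whether pyelftools
    # and pycryptodome have already moved into OE-Core.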

(From OE-Core rev: 4d4e69bc056bec4625b1cde0e1fc9d5e527c6a98)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2d503b27e7c88cee9a37c79c4605c77b11f230b6)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Joshua Watt
a05f552e21 python3-pyelftools: Upgrade 0.25 -> 0.26
(From OE-Core rev: b2306d00dc82cb780a439b569104c0f526e6e4d5)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0323e12624ef45e64e7a8ba6384c06e4d42df064)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Joshua Watt
f26ccdb13b python3-pycryptodome(x): Upgrade 3.9.4 -> 3.9.7
Also splits apart the SRC_URI checksums to make automatic upgrades
easier

(From OE-Core rev: 03b27d56272a4815ead04da08cfaa738b450ae59)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ae1f210546396b761ea86d9e32bf90c0867ff845)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Joshua Watt
cbdc6f9862 pyelftools: Import from meta-python
Imports the pyelftools recipe from meta-python, as of 7c02c7d41
("gnome-themes-extra: correct the recipe name").

This recipe is commonly used by other layers, so moving it into
OE-core helps to cut down on layer dependencies.

(From OE-Core rev: 0a8cdaa90f4dd2d09b0b471dafd868a4dcad4ed3)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 910ffaf5beed42936588c95b0c7c1b1ad67f99d3)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Joshua Watt
681c1ecfa0 pycryptodome: Import from meta-python
Imports the pycryptodome recipes from meta-python, as of 7c02c7d41
("gnome-themes-extra: correct the recipe name").

These recipes are commonly used by other layers, so moving them into
OE-core helps to cut down on layer dependencies.

(From OE-Core rev: 27798f3da506fcae19b74deb17ef199131cff405)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a96f815c53364b119b5743b8b7100eb5588d5cf5)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Bruce Ashfield
f6eea5bdaf linux-yocto/5.4: update to v5.4.43
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    e0d81ce76004 Linux 5.4.43
    b5100186021a sched/fair: Fix enqueue_task_fair() warning some more
    8b13f5657fa8 sched/fair: Fix reordering of enqueue/dequeue_task_fair()
    a2ad232aa6a9 sched/fair: Reorder enqueue/dequeue_task_fair path
    f4520daa3c5a tpm: check event log version before reading final events
    68b7b8183c12 rxrpc: Fix ack discard
    283eb0016f97 rxrpc: Trace discarded ACKs
    f2da8c0dfe81 iio: adc: stm32-dfsdm: fix device used to request dma
    7b5af65ca246 iio: adc: stm32-dfsdm: Use dma_request_chan() instead dma_request_slave_channel()
    692001a867b6 iio: adc: stm32-adc: fix device used to request dma
    8e8836b2b782 iio: adc: stm32-adc: Use dma_request_chan() instead dma_request_slave_channel()
    1084eee4dc5d x86/unwind/orc: Fix unwind_get_return_address_ptr() for inactive tasks
    860fe59783a9 flow_dissector: Drop BPF flow dissector prog ref on netns cleanup
    bd6f0c799f4d s390/kexec_file: fix initrd location for kdump kernel
    834d24ec3a0a rxrpc: Fix a memory leak in rxkad_verify_response()
    23ae6e3e8aeb rxrpc: Fix the excessive initial retransmission timeout
    c2a26769b43e kasan: disable branch tracing for core runtime
    86217fecc4b7 rapidio: fix an error in get_user_pages_fast() error handling
    689dacb2b09d device-dax: don't leak kernel memory to user space after unloading kmem
    9e451933bba9 s390/kaslr: add support for R_390_JMP_SLOT relocation type
    72f3241508ac s390/pci: Fix s390_mmio_read/write with MIO
    9c84884cd5dc ipack: tpci200: fix error return code in tpci200_register()
    46f47dda27bc mei: release me_cl object reference
    f505a3e24c6a tty: serial: add missing spin_lock_init for SiFive serial console
    a5b4b3f97de7 misc: rtsx: Add short delay after exit from ASPM
    e64b205035fb iio: adc: ti-ads8344: Fix channel selection
    9af65dc54b9b iio: dac: vf610: Fix an error handling path in 'vf610_dac_probe()'
    d54e5a4ff04c iio: sca3000: Remove an erroneous 'get_device()'
    56cff2ac7c9d staging: greybus: Fix uninitialized scalar variable
    a41e02cb4232 staging: kpc2000: fix error return code in kp2000_pcie_probe()
    dee81110a488 staging: iio: ad2s1210: Fix SPI reading
    76296dc723ef media: fdp1: Fix R-Car M3-N naming in debug message
    4adb7a2b3161 Revert "gfs2: Don't demote a glock until its revokes are written"
    bb6524537dc2 kbuild: Remove debug info from kallsyms linking
    ee71c590dd8d bpf: Avoid setting bpf insns pages read-only when prog is jited
    4c732e81bd4d powerpc/64s: Disable STRICT_KERNEL_RWX
    b67da9dbdb89 powerpc: Remove STRICT_KERNEL_RWX incompatibility with RELOCATABLE
    9bcfbd8ba2b5 drm/i915: Propagate error from completed fences
    5e171483e947 drm/i915/gvt: Init DPLL/DDI vreg for virtual display instead of inheritance.
    0e1d5f67253e vsprintf: don't obfuscate NULL and error pointers
    4b1b34621998 dmaengine: owl: Use correct lock in owl_dma_get_pchan()
    0fcbe108b01a dmaengine: dmatest: Restore default for channel
    57c32a52c3fe drm/etnaviv: Fix a leak in submit_pin_objects()
    432b103596bd dmaengine: tegra210-adma: Fix an error handling path in 'tegra_adma_probe()'
    870a45e0b507 apparmor: Fix aa_label refcnt leak in policy_update
    054934aa9faa apparmor: fix potential label refcnt leak in aa_change_profile
    97d817b9ef13 apparmor: Fix use-after-free in aa_audit_rule_init
    3b1e38dfbc9f drm/etnaviv: fix perfmon domain interation
    53683907ef68 arm64: Fix PTRACE_SYSEMU semantics
    96e56055a2f0 scsi: target: Put lun_ref at end of tmr processing
    818657105a0b scsi: qla2xxx: Do not log message when reading port speed via sysfs
    d54c5eff8795 ALSA: hda/realtek - Add more fixup entries for Clevo machines
    80f5822c2bf3 ALSA: hda/realtek - Fix silent output on Gigabyte X570 Aorus Xtreme
    1b17a0f98ad0 ALSA: pcm: fix incorrect hw_base increase
    a44cb2581718 ALSA: iec1712: Initialize STDSP24 properly when using the model=staudio option
    99e392a4979b KVM: x86: Fix pkru save/restore when guest CR4.PKE=0, move it to x86.c
    1c3d707d7d12 ALSA: hda/realtek: Enable headset mic of ASUS UX581LV with ALC295
    26a3a3053332 ALSA: hda/realtek - Enable headset mic of ASUS UX550GE with ALC295
    c5742497dcd2 ALSA: hda/realtek - Enable headset mic of ASUS GL503VM with ALC295
    2523e9010d2b ALSA: hda/realtek: Add quirk for Samsung Notebook
    6cc4dd44e207 ALSA: hda/realtek - Add HP new mute led supported for ALC236
    0d189b31c4d7 ALSA: hda/realtek - Add supported new mute Led for HP
    69d5dc286d05 ALSA: hda: Manage concurrent reg access more properly
    1efaaf74528c ALSA: hda: patch_realtek: fix empty macro usage in if block
    749e58bd2b09 ALSA: hda - constify and cleanup static NodeID tables
    02ebbd1da394 scripts/gdb: repair rb_first() and rb_last()
    9eff404a4382 ARM: futex: Address build warning
    67a5c3104d12 KVM: selftests: Fix build for evmcs.h
    4f48af814798 drm/amd/display: Prevent dpcd reads with passive dongles
    e0bb3075f91b iommu/amd: Call domain_flush_complete() in update_domain()
    e1efb9893bdd platform/x86: asus-nb-wmi: Do not load on Asus T100TA and T200TA
    c8d323578e36 USB: core: Fix misleading driver bug report
    42b32a43529e stmmac: fix pointer check after utilization in stmmac_interrupt
    b68d27c5fffd ceph: fix double unlock in handle_cap_export()
    df0df8ee2ac7 HID: quirks: Add HID_QUIRK_NO_INIT_REPORTS quirk for Dell K12A keyboard-dock
    8a5de4a391e4 gtp: set NLM_F_MULTI flag in gtp_genl_dump_pdp()
    7932168ec08e x86/apic: Move TSC deadline timer debug printk
    1ae9f1a62a50 ftrace/selftest: make unresolved cases cause failure if --fail-unresolved set
    2eac9d3dc23f ibmvnic: Skip fatal error reset after passive init
    f82a3013226e x86/mm/cpa: Flush direct map alias during cpa
    632db044ab99 HID: i2c-hid: reset Synaptics SYNA2393 on resume
    acd3efa17d96 scsi: ibmvscsi: Fix WARN_ON during event pool release
    6ef21295dc20 net/ena: Fix build warning in ena_xdp_set()
    d0db69f9d132 component: Silence bind error on -EPROBE_DEFER
    7a5f60dc3a67 aquantia: Fix the media type of AQC100 ethernet controller in the driver
    445437b417b6 vhost/vsock: fix packet delivery order to monitoring devices
    dcec6678c3b1 configfs: fix config_item refcnt leak in configfs_rmdir()
    2b52a61adb38 scsi: qla2xxx: Delete all sessions before unregister local nvme port
    d2430cb7f2d4 scsi: qla2xxx: Fix hang when issuing nvme disconnect-all in NPIV
    7b481b802a8f HID: alps: ALPS_1657 is too specific; use U1_UNICORN_LEGACY instead
    a08626f6e982 HID: alps: Add AUI1657 device ID
    68988c00b153 HID: multitouch: add eGalaxTouch P80H84 support
    cc6428803d22 gcc-common.h: Update for GCC 10
    3c140d22e3c2 net: drop_monitor: use IS_REACHABLE() to guard net_dm_hw_report()
    87863a7426b2 kbuild: avoid concurrency issue in parallel building dtbs and dtbs_check
    44fd02a3d719 mtd: Fix mtd not registered due to nvmem name collision
    496c7c61bd64 afs: Don't unlock fetched data pages until the op completes successfully
    17c9595cca71 ubi: Fix seq_file usage in detailed_erase_block_info debugfs file
    274cd3c7b5d3 i2c: mux: demux-pinctrl: Fix an error handling path in 'i2c_demux_pinctrl_probe()'
    dd540f2d7c2d evm: Fix a small race in init_desc()
    f96ab0d1f3ec iommu/amd: Fix over-read of ACPI UID from IVRS table
    33769c19feba i2c: fix missing pm_runtime_put_sync in i2c_device_probe
    9f885f17501d ubifs: remove broken lazytime support
    ac6f94d3be65 fix multiplication overflow in copy_fdtable()
    725b0bb0f94d mtd: spinand: Propagate ECC information to the MTD structure
    e3637eb6a351 ACPI: EC: PM: Avoid flushing EC work when EC GPE is inactive
    3be8ece11440 ubifs: fix wrong use of crypto_shash_descsize()
    48bbd44f5fa9 ima: Fix return value of ima_write_policy()
    1066327bf936 evm: Check also if *tfm is an error pointer in init_desc()
    4aedc534b608 ima: Set file->f_mode instead of file->f_flags in ima_calc_file_hash()
    ac46cea606d5 KVM: SVM: Fix potential memory leak in svm_cpu_init()
    1bed86cfe5cb i2c: dev: Fix the race between the release of i2c_dev and cdev

(From OE-Core rev: ef5af31f406076107402694f5d6afb27b240eba6)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9cd117dec502f40402ebd3a09ed3e8dba804ce2b)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Bruce Ashfield
d1efd89f88 linux-yocto: gather reproducibility configs into a fragment
Updating the meta SRCREV to pick up the following fix:

    commit 9e68afb48b16a447dcd3996ffa350f3e79e44257 (HEAD -> master)
    Author: Bruce Ashfield <bruce.ashfield@gmail.com>
    Date:   Thu May 28 11:22:22 2020 -0400

        features: add reproducibility fragement

        Creating an initial feature fragment that can be included when a
        reproducible kernel build is desired. This is currently only one
        option, but will have more in the future.

        Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
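
A hedged sketch of how such a fragment would typically be pulled into a
linux-yocto build (the fragment path is assumed, not taken from the commit):

    # conf/local.conf or a kernel .bbappend -- illustrative only
    KERNEL_FEATURES_append = " features/reproducibility/reproducibility.scc"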

(From OE-Core rev: 864cc7b3c349c94e34e3129053c2b22c8946c73d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit eaa34c96b60e703c96495e60650adc6d149603f1)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Bruce Ashfield
953a9d91be linux-yocto/5.4: temporarily revert IKHEADERS in standard kernels
We had a commit that enabled IKHEADERS, since bpf requires them on
target.

This is still causing incremental reproducibility errors during the
module compilation phase of the build.

We are temporarily turning this off so we can integrate -stable
and other related changes. A replacement feature, "reproducibility",
is also being added so this can be conditionally enabled while
we debug.
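
For reference, the kernel option in question and the sort of configuration
fragment a user could apply to turn it back on in the meantime (a sketch,
assuming a standard .cfg fragment added via SRC_URI in a bbappend):

    # ikheaders.cfg -- re-enable in-kernel headers, e.g. for bpf use on target
    CONFIG_IKHEADERS=y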

(From OE-Core rev: 85c481d13814b889a3d86044dcaac7d4eb685ade)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 5706788603b38ad4a0987e187a1c11c06f4d4e6c)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Bruce Ashfield
1152c0353f linux-yocto-rt/5.4: update to rt24
Integrating the following commit(s) to linux-yocto-rt/5.4:

    3d70f110c590 Linux 5.4.40-rt24
    6445e48533d9 Linux 5.4.39-rt23
    0a6ba32d4177 Linux 5.4.37-rt22
    35c686fb7671 Linux 5.4.34-rt21
    e54886570abd Linux 5.4.33-rt20
    307ba149ec47 v5.4.28-rt19
    8d488719e24a mm/compaction: Disable compact_unevictable_allowed on RT (Update)
    d1d2315e077c v5.4.28-rt18
    78028bc22d31 v5.4.26-rt17
    815bfc775961 swait: Remove the warning with more than two waiters
    b23b7f974955 powerpc: Fix lazy preemption for powerpc 32bit
    a79a552889de mm/page_alloc: Use migrate_disable() in drain_local_pages_wq()
    5e488daa19cb mm: Revert the DEFINE_PER_CPU_PAGEVEC implementation

(From OE-Core rev: 9e088d38fbae9a646ed5e608acbb3d3ce172303d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0123efae31dab8bce15e11fcee0b139a61b67cd6)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Bruce Ashfield
e2b8018a31 linux-yocto/5.4: update to v5.4.42
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    1cdaf895c99d Linux 5.4.42
    ecb3f529a554 bpf: Test_progs, fix test_get_stack_rawtp_err.c build
    aee43146cc10 selftest/bpf: fix backported test_select_reuseport selftest changes
    35d9107ad30b libbpf: Extract and generalize CPU mask parsing logic
    10cfaa7456d7 Makefile: disallow data races on gcc-10 as well
    9972e851b9f4 KVM: x86: Fix off-by-one error in kvm_vcpu_ioctl_x86_setup_mce
    9d2487643b4f bpf: Fix sk_psock refcnt leak when receiving message
    d41689a34a9d SUNRPC: Revert 241b1f419f0e ("SUNRPC: Remove xdr_buf_trim()")
    3a8efe589bb6 ARM: dts: r8a7740: Add missing extal2 to CPG node
    cd8ae9b73258 arm64: dts: renesas: r8a77980: Fix IPMMU VIP[01] nodes
    c580f2fe3270 ARM: dts: r8a73a4: Add missing CMT1 interrupts
    8972721aea41 arm64: dts: imx8mn: Change SDMA1 ahb clock for imx8mn
    764715615178 arm64: dts: rockchip: Rename dwc3 device nodes on rk3399 to make dtc happy
    64ad7ef3a6e5 arm64: dts: rockchip: Replace RK805 PMIC node name with "pmic" on rk3328 boards
    af518b5b77fd arm64: dts: meson-g12-common: fix dwc2 clock names
    9b9c52752a11 arm64: dts: meson-g12b-khadas-vim3: add missing frddr_a status property
    01febb33cb6e clk: Unlink clock if failed to prepare or enable
    e2084a8d5fee Revert "ALSA: hda/realtek: Fix pop noise on ALC225"
    5e553801462b usb: gadget: legacy: fix error return code in cdc_bind()
    7e5c1db8ad15 usb: gadget: legacy: fix error return code in gncm_bind()
    8228b6736964 usb: gadget: audio: Fix a missing error return value in audio_bind()
    8ef452001769 usb: gadget: net2272: Fix a memory leak in an error handling path in 'net2272_plat_probe()'
    9f65d776756e fanotify: fix merging marks masks with FAN_ONDIR
    20a6d2455cde dwc3: Remove check for HWO flag in dwc3_gadget_ep_reclaim_trb_sg()
    4f6815e429a8 clk: rockchip: fix incorrect configuration of rk3228 aclk_gpu* clocks
    553a2cbca7c3 exec: Move would_dump into flush_old_exec
    136353c5066c x86/unwind/orc: Fix error handling in __unwind_start()
    91b9ce04ff1f x86: Fix early boot crash on gcc-10, third try
    f8e370ccab35 cifs: fix leaked reference on requeued write
    4e06196336a1 powerpc/32s: Fix build failure with CONFIG_PPC_KUAP_DEBUG
    57aa19acfc22 drm/amd/display: add basic atomic check for cursor plane
    97e43314763d ARM: dts: imx6dl-yapp4: Fix Ursa board Ethernet connection
    215589310fa2 ARM: dts: imx27-phytec-phycard-s-rdk: Fix the I2C1 pinctrl entries
    e1409dc95410 ARM: dts: dra7: Fix bus_dma_limit for PCIe
    da55eeb3245a Make the "Reducing compressed framebufer size" message be DRM_INFO_ONCE()
    c6a1ce81b43e usb: xhci: Fix NULL pointer dereference when enqueuing trbs from urb sg list
    6bb054f006c3 USB: gadget: fix illegal array access in binding with UDC
    e6be4400ac34 usb: cdns3: gadget: prev_req->trb is NULL for ep0
    f1a9bed7969a usb: host: xhci-plat: keep runtime active when removing host
    b96a62f506ee usb: core: hub: limit HUB_QUIRK_DISABLE_AUTOSUSPEND to USB5534B
    93dda4f0e2ff ALSA: usb-audio: Add control message quirk delay for Kingston HyperX headset
    3fa58fc9f8c4 ALSA: rawmidi: Fix racy buffer resize under concurrent accesses
    04ccdf6b031d ALSA: hda/realtek - Add COEF workaround for ASUS ZenBook UX431DA
    c9709800eeeb ALSA: hda/realtek - Limit int mic boost for Thinkpad T530
    c737b7533596 USB: usbfs: fix mmap dma mismatch
    0432f7632a24 usb: usbfs: correct kernel->user page attribute mismatch
    dca0ae3900b3 gcc-10: avoid shadowing standard library 'free()' in crypto
    6cbb91bdd3a2 gcc-10: mark more functions __init to avoid section mismatch warnings
    7955081a3a65 gcc-10 warnings: fix low-hanging fruit
    dff2ce17934c gcc-10: disable 'restrict' warning for now
    b8e7b93333db gcc-10: disable 'stringop-overflow' warning for now
    9ba07a72fc5e gcc-10: disable 'array-bounds' warning for now
    a740b68fd169 gcc-10: disable 'zero-length-bounds' warning for now
    8f6a84167e86 Stop the ad-hoc games with -Wno-maybe-initialized
    ab638a49a9f3 net/rds: Use ERR_PTR for rds_message_alloc_sgs()
    b597815ce1e8 pnp: Use list_for_each_entry() instead of open coding
    d4e58131829f NFSv3: fix rpc receive buffer size for MOUNT call
    e26e2a3febcb mm, memcg: fix inconsistent oom event behavior
    46a22f3ea146 s390/ism: fix error return code in ism_probe()
    e1608af17030 hwmon: (da9052) Synchronize access with mfd
    6e7253dc4562 RDMA/iw_cxgb4: Fix incorrect function parameters
    08f187dbd223 RDMA/core: Fix double put of resource
    ee7ce7d7e7c7 IB/core: Fix potential NULL pointer dereference in pkey cache
    b491aeec55fe IB/mlx4: Test return value of calls to ib_get_cached_pkey
    eaad00390ff9 RDMA/rxe: Always return ERR_PTR from rxe_create_mmap_info()
    da532ce587c7 netfilter: nft_set_rbtree: Add missing expired checks
    1c235d0eb1f0 netfilter: nft_set_rbtree: Introduce and use nft_rbtree_interval_start()
    6259b1c1bca5 SUNRPC: Signalled ASYNC tasks need to exit
    d1538d8d6325 nfs: fix NULL deference in nfs4_get_valid_delegation
    ea7c4d9e542f arm64: fix the flush_icache_range arguments in machine_kexec
    1222b257654b drm/i915/gvt: Fix kernel oops for 3-level ppgtt guest
    a308d6e6861d netfilter: conntrack: avoid gcc-10 zero-length-bounds warning
    b526c01b38ae NFSv4: Fix fscache cookie aux_data to ensure change_attr is included
    021f5799de53 nfs: fscache: use timespec64 in inode auxdata
    ef8195ee1618 NFS: Fix fscache super_cookie index_key from changing after umount
    32b9de3e935d drm/amdgpu: force fbdev into vram
    e1b2b93243ca fork: prevent accidental access to clone3 features
    f256dea07774 gfs2: More gfs2_find_jhead fixes
    18541e49f70b mmc: block: Fix request completion in the CQE timeout path
    e8eb122b9f43 mmc: core: Fix recursive locking issue in CQE recovery path
    fdf547a591f5 mmc: core: Check request type before completing the request
    3a8bc2ae2f79 mmc: sdhci-pci-gli: Fix can not access GL9750 after reboot from Windows 10
    e0830bb37734 mmc: alcor: Fix a resource leak in the error path for ->probe()
    62f217e0a9c8 bpf, sockmap: bpf_tcp_ingress needs to subtract bytes from sg.size
    ce3193bf8964 bpf, sockmap: msg_pop_data can incorrecty set an sge length
    af1f11fe6667 drm/i915: Don't enable WaIncreaseLatencyIPCEnabled when IPC is disabled
    0d9bc7986366 i40iw: Fix error handling in i40iw_manage_arp_cache()
    95827ac65244 ALSA: firewire-lib: fix 'function sizeof not defined' error of tracepoints format
    5d47b3d6b4d2 bpf: Fix error return code in map_lookup_and_delete_elem()
    5b96668b63c0 pinctrl: cherryview: Add missing spinlock usage in chv_gpio_irq_handler
    aec927836c7d pinctrl: qcom: fix wrong write in update_dual_edge
    604ad1bb8aae pinctrl: baytrail: Enable pin configuration setting for GPIO chip
    960d609dd4dd pinctrl: sunrisepoint: Fix PAD lock register offset for SPT-H
    e529b8db9684 ACPI: EC: PM: Avoid premature returns from acpi_s2idle_wake()
    9e54afec08f7 IB/hfi1: Fix another case where pq is left on waitlist
    d942a6a18463 mmc: sdhci-pci-gli: Fix no irq handler from suspend
    171bf6ef038b gfs2: Another gfs2_walk_metadata fix
    87954aacd585 ALSA: hda/realtek - Fix S3 pop noise on Dell Wyse
    05aae468d31a ipc/util.c: sysvipc_find_ipc() incorrectly updates position index
    3c3ade92b62a drm/amdgpu: invalidate L2 before SDMA IBs (v2)
    938489ef2902 drm/amdgpu: simplify padding calculations (v2)
    eefe5e0bb7b7 drm/qxl: lost qxl_bo_kunmap_atomic_page in qxl_image_init_helper()
    94cce94badf7 drm/amd/display: Update downspread percent to match spreadsheet for DCN2.1
    f4164b29dc08 drm/amd/display: check if REFCLK_CNTL register is present
    65f3108cbb1d drm/amd/powerplay: avoid using pm_en before it is initialized revised
    8c5f11093ef4 ALSA: hda/hdmi: fix race in monitor detection during probe
    4d1a83cb5afe cpufreq: intel_pstate: Only mention the BIOS disabling turbo mode once
    d12d7bf92b08 selftests/ftrace: Check the first record for kprobe_args_type.tc
    2b313699e7a9 dmaengine: mmp_tdma: Reset channel error on release
    6c414ddee7f0 dmaengine: mmp_tdma: Do not ignore slave config validation errors
    de76c0d4a03c dmaengine: pch_dma.c: Avoid data race between probe and irq handler
    c096a8645e3f riscv: fix vdso build with lld
    2fffdf4dded1 umh: fix memory leak on execve failure
    44ee727013d5 r8169: re-establish support for RTL8401 chip version
    e03d3510f45c nfp: abm: fix error return code in nfp_abm_vnic_alloc()
    2fbd6eca3711 net: tcp: fix rx timestamp behavior for tcp_recvmsg
    fc800ec491c3 netprio_cgroup: Fix unlimited memory leak of v2 cgroups
    cab607a627cf net: ipv4: really enforce backoff for redirects
    d375d99f8902 net: dsa: loop: Add module soft dependency
    b2e8946250c3 hinic: fix a bug of ndo_stop
    d07987924a04 dpaa2-eth: prevent array underflow in update_cls_rule()
    84916465b0f0 virtio_net: fix lockdep warning on 32 bit
    23300d6a39d7 tcp: fix SO_RCVLOWAT hangs with fat skbs
    cb4f78986065 tcp: fix error recovery in tcp_zerocopy_receive()
    f152793058b5 Revert "ipv6: add mtu lock check in __ip6_rt_update_pmtu"
    5f93b45fa58c pppoe: only process PADT targeted at local interfaces
    ecb8356aafba net: stmmac: fix num_por initialization
    4300e210b005 net: phy: fix aneg restart in phy_ethtool_set_eee
    debcbc56fdfc netlabel: cope with NULL catmap
    60a4f2ce0596 net: fix a potential recursive NETDEV_FEAT_CHANGE
    97e860545e24 dpaa2-eth: properly handle buffer size restrictions
    425853cc1160 mmc: sdhci-acpi: Add SDHCI_QUIRK2_BROKEN_64_BIT_DMA for AMDI0040
    a761f65879e8 selftests/bpf: fix goto cleanup label not defined
    2d6d0ce4de03 scsi: sg: add sg_remove_request in sg_write
    7d8da6d7d90c net_sched: fix tcm_parent in tc filter dump
    e2824505a813 sun6i: dsi: fix gcc-4.8
    645b44b6b3b3 virtio-blk: handle block_device_operations callbacks after hot unplug
    fbe2c2c50914 drop_monitor: work around gcc-10 stringop-overflow warning
    23a0a0914a1e ftrace/selftests: workaround cgroup RT scheduling issues
    dbd667a322ac net: moxa: Fix a potential double 'free_irq()'
    2bcd4df42d5d net/sonic: Fix a resource leak in an error handling path in 'jazz_sonic_probe()'
    e15d3d42900a SUNRPC: Fix GSS privacy computation of auth->au_ralign
    3bf0794e7309 SUNRPC: Add "@len" parameter to gss_unwrap()
    3c605abef3ee gpio: pca953x: Fix pca953x_gpio_set_config
    163b48932571 KVM: arm: vgic: Synchronize the whole guest on GIC{D,R}_I{S,C}ACTIVER read
    7abefa3f9a4b net: phy: microchip_t1: add lan87xx_phy_init to initialize the lan87xx phy.
    a12f3ad8d952 shmem: fix possible deadlocks on shmlock_user_lock
    723090ae8ea6 net: dsa: Do not make user port errors fatal
    cbaf23699561 Linux 5.4.41
    9bd5a84ceba3 fanotify: merge duplicate events on parent and child
    4638e0ff0fa4 fsnotify: replace inode pointer with an object id
    03447528a390 bdi: add a ->dev_name field to struct backing_dev_info
    25390a31983c bdi: move bdi_dev_name out of line
    c1af2c13a4ac mm, memcg: fix error return value of mem_cgroup_css_alloc()
    1642f114ce2d scripts/decodecode: fix trapping instruction formatting
    2e86e3841c3c iommu/virtio: Reverse arguments to list_add
    1a31c4456af9 objtool: Fix stack offset tracking for indirect CFAs
    30a38059cdd4 netfilter: nf_osf: avoid passing pointer to local var
    4ccbd9c859dd netfilter: nat: never update the UDP checksum when it's 0
    634c950c624d arch/x86/kvm/svm/sev.c: change flag passed to GUP fast in sev_pin_memory()
    4cbb69b45cad KVM: x86: Fixes posted interrupt check for IRQs delivery modes
    db00b1d9d71a x86/unwind/orc: Fix premature unwind stoppage due to IRET frames
    c9473a0260b2 x86/unwind/orc: Fix error path for bad ORC entry type
    1b4bd44645ac x86/unwind/orc: Prevent unwinding before ORC initialization
    511261578b8b x86/unwind/orc: Don't skip the first frame for inactive tasks
    162e9f141d96 x86/entry/64: Fix unwind hints in rewind_stack_do_exit()
    16aace664b27 x86/entry/64: Fix unwind hints in kernel exit path
    07c4cd680c0b x86/entry/64: Fix unwind hints in register clearing code
    d8eb5a1cde35 batman-adv: Fix refcnt leak in batadv_v_ogm_process
    13f968c8b762 batman-adv: Fix refcnt leak in batadv_store_throughput_override
    b71348105899 batman-adv: Fix refcnt leak in batadv_show_throughput_override
    bee7e9da58ba batman-adv: fix batadv_nc_random_weight_tq
    34ca080088e2 iocost: protect iocg->abs_vdebt with iocg->waitq.lock
    d8c7f015d1a9 riscv: set max_pfn to the PFN of the last page
    480534e03061 coredump: fix crash when umh is disabled
    b8fe132bae66 staging: gasket: Check the return value of gasket_get_bar_index()
    53f453031a20 ceph: demote quotarealm lookup warning to a debug message
    3fd9f902c08a ceph: fix endianness bug when handling MDS session feature bits
    e991f7ded4e1 mm: limit boost_watermark on small zones
    4b49a9660d26 mm/page_alloc: fix watchdog soft lockups during set_zone_contiguous()
    ee922a2f6be9 eventpoll: fix missing wakeup for ovflist in ep_poll_callback
    5d77631de15a epoll: atomically remove wait entry on wake up
    1f3aa3e028c5 ipc/mqueue.c: change __do_notify() to bypass check_kill_permission()
    65f96f4b797e drm: ingenic-drm: add MODULE_DEVICE_TABLE
    0eae1647f145 arm64: hugetlb: avoid potential NULL dereference
    e983c6064a0a KVM: arm64: Fix 32bit PC wrap-around
    3ae9279d725a KVM: arm: vgic: Fix limit condition when writing to GICD_I[CS]ACTIVER
    152d97d0b26f KVM: VMX: Explicitly clear RFLAGS.CF and RFLAGS.ZF in VM-Exit RSB path
    3f23f781290b KVM: s390: Remove false WARN_ON_ONCE for the PQAP instruction
    eb0373fc3871 crypto: arch/nhpoly1305 - process in explicit 4k chunks
    8b166a6f6286 tracing: Add a vmalloc_sync_mappings() for safe measure
    72886ae16a75 USB: serial: garmin_gps: add sanity checking for data length
    4f4dc27c09cd usb: chipidea: msm: Ensure proper controller reset using role switch API
    2419a955172c USB: uas: add quirk for LaCie 2Big Quadra
    b60a086ec733 HID: wacom: Report 2nd-gen Intuos Pro S center button status over BT
    613045bfc63d HID: usbhid: Fix race between usbhid_close() and usbhid_stop()
    1017955fab5b Revert "HID: wacom: generic: read the number of expected touches on a per collection basis"
    a204d577be70 sctp: Fix bundling of SHUTDOWN with COOKIE-ACK
    3fc16b5b1947 HID: wacom: Read HID_DG_CONTACTMAX directly for non-generic devices
    0aeae7ad9450 net: mvpp2: cls: Prevent buffer overflow in mvpp2_ethtool_cls_rule_del()
    b2930c86ee2b net: mvpp2: prevent buffer overflow in mvpp22_rss_ctx()
    d595dd5ba909 net/mlx5: Fix command entry leak in Internal Error State
    11dd1d0ebfdd net/mlx5: Fix forced completion access non initialized command entry
    18cfbcdf1f41 net/mlx5: DR, On creation set CQ's arm_db member to right value
    6ab4dd433b61 bnxt_en: Fix VLAN acceleration handling in bnxt_fix_features().
    cf07e0ccffde bnxt_en: Return error when allocating zero size context memory.
    76737d877fab bnxt_en: Improve AER slot reset.
    ab1c944361b4 bnxt_en: Reduce BNXT_MSIX_VEC_MAX value to supported CQs per PF.
    2be3a9e71ce4 bnxt_en: Fix VF anti-spoof filter setup.
    a882d44e5bad tunnel: Propagate ECT(1) when decapsulating as recommended by RFC6040
    e9edd5a0f5f5 tipc: fix partial topology connection closure
    f2d581951775 sch_sfq: validate silly quantum values
    017242e3bdb3 sch_choke: avoid potential panic in choke_reset()
    66f7e30273ef nfp: abm: fix a memory leak bug
    8fc441d16183 net: usb: qmi_wwan: add support for DW5816e
    4107cd9a869f net/tls: Fix sk_psock refcnt leak when in tls_data_ready()
    a15ccc88e516 net/tls: Fix sk_psock refcnt leak in bpf_exec_tx_verdict()
    4124b1317f26 net: tc35815: Fix phydev supported/advertising mask
    7bbf73e918be net: stricter validation of untrusted gso packets
    b51b394f4ab1 net_sched: sch_skbprio: add message validation to skbprio_change()
    c78c166748e9 net/mlx4_core: Fix use of ENOSPC around mlx4_counter_alloc()
    57f6c4340aad net: macsec: preserve ingress frame ordering
    301d6eb32d81 net: macb: fix an issue about leak related system resources
    5ffd49c52bad net: dsa: Do not leave DSA master with NULL netdev_ops
    e781af2fdc2e neigh: send protocol value in neighbor create notification
    89469cf72fae mlxsw: spectrum_acl_tcam: Position vchunk in a vregion list properly
    5d7e1e23efb6 ipv6: Use global sernum for dst validation with nexthop objects
    45b6af95aae7 fq_codel: fix TCA_FQ_CODEL_DROP_BATCH_SIZE sanity checks
    429a89625693 dp83640: reverse arguments to list_add_tail
    6ee2fdf2ba4d devlink: fix return value after hitting end in region read
    b586a95e2606 tty: xilinx_uartps: Fix missing id assignment to the console
    8ca4302bc663 vt: fix unicode console freeing with a common interface
    f4d20b01eaf6 drm/amdgpu: drop redundant cg/pg ungate on runpm enter
    c973b108912a drm/amdgpu: move kfd suspend after ip_suspend_phase1
    8e16ede5b7a1 net: macb: Fix runtime PM refcounting
    eb6f88cd81ac tracing/kprobes: Fix a double initialization typo
    56fc76893f87 nvme: fix possible hang when ns scanning fails during error recovery
    fb1b41128c70 nvme: refactor nvme_identify_ns_descs error handling
    a5d53ad84eb5 USB: serial: qcserial: Add DW5816e support

(From OE-Core rev: 94b473a0c82d77b3a365cbeb6c99a3338b6c7524)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8c4b48a05f54520b4d5fcb5b0e6f74857ca4f1d2)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Bruce Ashfield
6f19690b73 kernel/reproducibility: kernel modules need SOURCE_DATE_EPOCH export
If CONFIG_IKHEADERS is set to =m, then reproducibility issues creep
into the modules build, since the variables we are setting for the
main kernel build are not present.

Since the source code must be available for a possible git query
on the timestamp, there didn't seem to be an easy way to move the
environment variable setting to a common routine. As such, we
duplicate the block of code that exports the required variables for
reproducible builds. There is a maintenance risk to this, but any
issues should be easy enough to catch.
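
A minimal sketch of the kind of exports being duplicated for the module
build (the derivation shown is an assumption; the exact block lives in the
kernel recipe/class):

    # Environment that kbuild honours for reproducible output
    export SOURCE_DATE_EPOCH=$(git -C ${S} log -1 --pretty=%ct 2>/dev/null || echo 0)
    export KBUILD_BUILD_TIMESTAMP="$(date -u -d @$SOURCE_DATE_EPOCH '+%Y-%m-%d')"
    export KBUILD_BUILD_USER="oe-user"
    export KBUILD_BUILD_HOST="oe-host"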

(From OE-Core rev: f511d78164581f80e7b8c592fe88ffbf38738150)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 82cdfcdccfedd320ebc0cdc778c7d4966198b96f)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Bruce Ashfield
6777db7ae7 linux-yocto/5.4: update to v5.4.40
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    f015b86259a5 Linux 5.4.40
    2852b559afdf PM / devfreq: Add missing locking while setting suspend_freq
    8e054bd6dfc4 udp: document udp_rcv_segment special case for looped packets
    2a03c23b2015 tools headers UAPI: Sync copy of arm64's asm/unistd.h with the kernel sources
    f11664da13b9 Revert "drm/amd/display: setting the DIG_MODE to the correct value."
    c365ff781540 mm/mremap: Add comment explaining the untagging behaviour of mremap()
    8f30c3687f09 libbpf: Fix readelf output parsing for Fedora
    88348bd1f696 cgroup, netclassid: remove double cond_resched
    702d710ffd83 mac80211: add ieee80211_is_any_nullfunc()
    468465fdef4d ACPI: PM: s2idle: Fix comment in acpi_s2idle_prepare_late()
    da283f9be924 platform/x86: GPD pocket fan: Fix error message when temp-limits are out of range
    03f235a5bd3a x86/kvm: fix a missing-prototypes "vmread_error"
    85701f4768a1 ALSA: hda: Match both PCI ID and SSID for driver blacklist
    b8b42c8dcf44 hexagon: define ioremap_uc
    f31c9e904f1d hexagon: clean up ioremap
    1bc508b2d16d mfd: intel-lpss: Use devm_ioremap_uc for MMIO
    78b19f56b952 lib: devres: add a helper function for ioremap_uc
    7a9b738c7511 Revert "software node: Simplify software_node_release() function"
    b8bb9c3192f9 drm/amdgpu: Fix oops when pp_funcs is unset in ACPI event
    3fb4c93dc761 sctp: Fix SHUTDOWN CTSN Ack in the peer restart case
    9da07c4aeaf7 drm/i915: Extend WaDisableDARBFClkGating to icl,ehl,tgl
    d8e0b58fa471 net: systemport: suppress warnings on failed Rx SKB allocations
    5c065ee4a07d net: bcmgenet: suppress warnings on failed Rx SKB allocations
    fd2c9e605269 mac80211: sta_info: Add lockdep condition for RCU list usage
    07fea3d3ef88 lib/mpi: Fix building for powerpc with clang
    bacf98ee0003 tracing: Fix memory leaks in trace_events_hist.c
    c46330d4dabf cifs: do not share tcons with DFS
    84778248e013 scripts/config: allow colons in option strings for sed
    b31e0bd4a97a cifs: protect updating server->dstaddr with a spinlock
    0560b7c3ba48 ASoC: rsnd: Fix "status check failed" spam for multi-SSI
    883d34cdefea ASoC: rsnd: Don't treat master SSI in multi SSI setup as parent
    15de2df38652 net: stmmac: Fix sub-second increment
    8d5a1ddaa9bb net: stmmac: fix enabling socfpga's ptp_ref_clock
    d3539ea43a37 wimax/i2400m: Fix potential urb refcnt leak
    f0d6b056bc18 drm/amdgpu: Correctly initialize thermal controller for GPUs with Powerplay table v0 (e.g Hawaii)
    a09ba140db2f remoteproc: qcom_q6v5_mss: fix a bug in q6v5_probe()
    b2978c307696 ASoC: codecs: hdac_hdmi: Fix incorrect use of list_for_each_entry
    f9c3a17786fd ASoC: rsnd: Fix HDMI channel mapping for multi-SSI mode
    26500b980bf8 ASoC: rsnd: Fix parent SSI start/stop in multi-SSI mode
    5087c7f4e7f2 usb: dwc3: gadget: Properly set maxpacket limit
    ab182c06fc22 ASoC: topology: Fix endianness issue
    ae975c8e1062 ASoC: sgtl5000: Fix VAG power-on handling
    3ea62d49613b selftests/ipc: Fix test failure seen after initial test run
    a5dec15686e9 ASoC: topology: Check return value of soc_tplg_dai_config
    fd8f4a3be50b ASoC: topology: Check return value of pcm_new_ver
    0d452c7e309c ASoC: topology: Check soc_tplg_add_route return value
    76336d4fa881 ASoC: topology: Check return value of soc_tplg_*_create
    db80b7cb17d9 ASoC: topology: Check return value of soc_tplg_create_tlv
    04da88c86c2e drm/bridge: analogix_dp: Split bind() into probe() and real bind()
    336c7260a788 vhost: vsock: kick send_pkt worker once device is started
    592465e6a54b Linux 5.4.39
    eeef0d9fd40d selinux: properly handle multiple messages in selinux_netlink_send()
    1de07eb54ab7 arm64: vdso: Add -fasynchronous-unwind-tables to cflags
    73162ca8156f dmaengine: dmatest: Fix process hang when reading 'wait' parameter
    c753a12c88e8 dmaengine: dmatest: Fix iteration non-stop logic
    d458565e3c02 nfs: Fix potential posix_acl refcnt leak in nfs3_set_acl
    779f155811eb nvme: prevent double free in nvme_alloc_ns() error handling
    57165a241302 Fix use after free in get_tree_bdev()
    c0be115eb22d ALSA: opti9xx: shut up gcc-10 range warning
    3af9be5f5c66 i2c: aspeed: Avoid i2c interrupt status clear race condition.
    501ecc8fc9e5 iommu/amd: Fix legacy interrupt remapping for x2APIC-enabled system
    a0000d228dd3 scsi: target/iblock: fix WRITE SAME zeroing
    de59f2fbe6ca iommu/qcom: Fix local_base status check
    205757f476e8 vfio/type1: Fix VA->PA translation for PFNMAP VMAs in vaddr_get_pfn()
    08e90b299d4e vfio: avoid possible overflow in vfio_iommu_type1_pin_pages
    44e2a98e2b58 i2c: iproc: generate stop event for slave writes
    92c99197815d RDMA/cm: Fix an error check in cm_alloc_id_priv()
    4c499dafdd63 RDMA/cm: Fix ordering of xa_alloc_cyclic() in ib_create_cm_id()
    169b8b62717a RDMA/core: Fix race between destroy and release FD object
    1e12524f09a1 RDMA/core: Prevent mixed use of FDs between shared ufiles
    b7b72a16c5b0 RDMA/siw: Fix potential siw_mem refcnt leak in siw_fastreg_mr()
    7665d88f9d0e RDMA/mlx4: Initialize ib_spec on the stack
    80ba1153bc25 RDMA/mlx5: Set GRH fields in query QP on RoCE
    1f5a2162516e scsi: qla2xxx: check UNLOADING before posting async work
    faa8daca0226 scsi: qla2xxx: set UNLOADING before waiting for session deletion
    4438f397ee4c ARM: dts: imx6qdl-sr-som-ti: indicate powering off wifi is safe
    100cf0ba5b5d dm multipath: use updated MPATHF_QUEUE_IO on mapping for bio-based mpath
    beed763ab934 dm writecache: fix data corruption when reloading the target
    969b9cb1209b dm verity fec: fix hash block number in verity_fec_decode
    c554ab856b66 PM: hibernate: Freeze kernel threads in software_resume()
    8fc24d1029fd PM: ACPI: Output correct message on target power state
    ca662b6014f3 IB/rdmavt: Always return ERR_PTR from rvt_create_mmap_info()
    16cc37b3dc17 dlmfs_file_write(): fix the bogosity in handling non-zero *ppos
    5049385407b4 Drivers: hv: vmbus: Fix Suspend-to-Idle for Generation-2 VM
    95dd3099171e i2c: amd-mp2-pci: Fix Oops in amd_mp2_pci_init() error handling
    ea63e38b29e7 ALSA: pcm: oss: Place the plugin buffer overflow checks correctly
    c867614f196a ALSA: line6: Fix POD HD500 audio playback
    c7577237c228 ALSA: hda/hdmi: fix without unlocked before return
    6426aa65f7ca ALSA: usb-audio: Correct a typo of NuPrime DAC-10 USB ID
    981b7194e82a ALSA: hda/realtek - Two front mics on a Lenovo ThinkCenter
    35a9399714db crypto: caam - fix the address of the last entry of S/G
    ca34751b5819 mmc: meson-mx-sdio: remove the broken ->card_busy() op
    9e3315116f7e mmc: meson-mx-sdio: Set MMC_CAP_WAIT_WHILE_BUSY
    80e99f42608d mmc: sdhci-msm: Enable host capabilities pertains to R1b response
    d8f7e15a65dd mmc: sdhci-pci: Fix eMMC driver strength for BYT-based controllers
    eed4792f9657 mmc: sdhci-xenon: fix annoying 1.8V regulator warning
    31ba94b893b6 mmc: cqhci: Avoid false "cqhci: CQE stuck on" by not open-coding timeout loop
    2b925c4600bf btrfs: transaction: Avoid deadlock due to bad initialization timing of fs_info::journal_info
    67bc5f667a18 btrfs: fix partial loss of prealloc extent past i_size after fsync
    636987650f6b btrfs: fix block group leak when removing fails
    a378abbb8e39 btrfs: fix transaction leak in btrfs_recover_relocation
    e5744821adc9 NFSv4.1: fix handling of backchannel binding in BIND_CONN_TO_SESSION
    6eb95b35fd39 drm/qxl: qxl_release use after free
    c465bc31ed9f drm/qxl: qxl_release leak in qxl_hw_surface_alloc()
    4441fb2ab0fc drm/qxl: qxl_release leak in qxl_draw_dirty_fb()
    f25335a83cf4 drm/amd/display: Fix green screen issue after suspend
    5ec7eb970df4 drm/edid: Fix off-by-one in DispID DTD pixel clock
    ffd99c012a2e dma-buf: Fix SET_NAME ioctl uapi

(From OE-Core rev: bb126c867adbe3eca3f30670e7b2e84bf98e97cf)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d2fdd473db5446b0e96ad4f774121129fbf94e0e)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:31 +01:00
Bruce Ashfield
8c2f48f17f linux-yocto/5.4: update to v5.4.38
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    9895e0ac338a Linux 5.4.38
    5a54c69c4ef8 Revert "ASoC: meson: axg-card: fix codec-to-codec link setup"
    527c60e8b7a8 Linux 5.4.37
    4e7fb753e803 ASoC: stm32: spdifrx: fix regmap status check
    4104faaeeda0 ASoC: soc-core: disable route checks for legacy devices
    8c472abaedc7 ext4: check for non-zero journal inum in ext4_calculate_overhead
    93af898b251f qed: Fix use after free in qed_chain_free
    f1610480602a net: use indirect call wrappers for skb_copy_datagram_iter()
    ec9cf8afcd64 Crypto: chelsio - Fixes a hang issue during driver registration
    b0946b45b85a qed: Fix race condition between scheduling and destroying the slowpath workqueue
    d15fc1470441 taprio: do not use BIT() in TCA_TAPRIO_ATTR_FLAG_* definitions
    f37079e9ea83 hwmon: (jc42) Fix name to have no illegal characters
    c7b6c51298bd blk-mq: Put driver tag in blk_mq_dispatch_rq_list() when no budget
    3e9299c28fc5 ext4: convert BUG_ON's to WARN_ON's in mballoc.c
    1e4281eba3ff ext4: increase wait time needed before reuse of deleted inode numbers
    0fe3908e6abc ext4: use matching invalidatepage in ext4_writepage
    1876e0e654b8 arm64: Delete the space separator in __emit_inst
    a719f7bf5c88 mac80211: fix channel switch trigger from unknown mesh peer
    9178430df3f7 net: stmmac: socfpga: Allow all RGMII modes
    daafdf87b898 ALSA: hda: call runtime_allow() for all hda controllers
    d9d4ea17d6d6 xen/xenbus: ensure xenbus_map_ring_valloc() returns proper grant status
    8c627d4b15de objtool: Support Clang non-section symbols in ORC dump
    820126d9a83d objtool: Fix CONFIG_UBSAN_TRAP unreachable warnings
    1cc2460dad82 scsi: target: tcmu: reset_ring should reset TCMU_DEV_BIT_BROKEN
    62d350eb31d1 scsi: target: fix PR IN / READ FULL STATUS for FC
    a323f69d00c6 ALSA: hda: Explicitly permit using autosuspend if runtime PM is supported
    bd074af53cb6 ALSA: hda: Keep the controller initialization even if no codecs found
    135e10232fb5 ALSA: hda: Release resources at error in delayed probe
    535ed3f01564 xfs: fix partially uninitialized structure in xfs_reflink_remap_extent
    ec6e5792d62d afs: Fix length of dump of bad YFSFetchStatus record
    16976275b929 signal: check sig before setting info in kill_pid_usb_asyncio
    f88761412b90 x86: hyperv: report value of misc_features
    b5da1152f716 net: fec: set GPR bit on suspend by DT configuration.
    316ad98983d9 libbpf: Initialize *nl_pid so gcc 10 is happy
    3c9bbe7f44f6 bpf, x86: Fix encoding for lower 8-bit registers in BPF_STX BPF_B
    ab6e8af64f39 xfs: clear PF_MEMALLOC before exiting xfsaild thread
    e5329fcdc907 mm: shmem: disable interrupt when acquiring info->lock in userfaultfd_copy path
    309a509dabd5 bpf, x86_32: Fix logic error in BPF_LDX zero-extension
    d0b8695703f2 bpf, x86_32: Fix clobbering of dst for BPF_JSET
    50c5d9146100 bpf, x86_32: Fix incorrect encoding in BPF_LDX zero-extension
    ca3a2ca4cfa2 um: ensure `make ARCH=um mrproper` removes arch/$(SUBARCH)/include/generated/
    9c5c94c5012c blk-iocost: Fix error on iocost_ioc_vrate_adj
    b9c31556c37b PM: sleep: core: Switch back to async_schedule_dev()
    36c436a8e46a netfilter: nat: fix error handling upon registering inet hook
    9578a8c157b4 perf/core: fix parent pid/tid in task exit events
    c04d01e918d8 sched/core: Fix reset-on-fork from RT with uclamp
    040287785f42 net/mlx5: Fix failing fw tracer allocation on s390
    a8b5611ffee3 s390/pci: do not set affinity for floating irqs
    6cfb8c2ada58 cpumap: Avoid warning when CONFIG_DEBUG_PER_CPU_MAPS is enabled
    05ae98547af9 ARM: dts: bcm283x: Disable dsi0 node
    764a7d0a2756 PCI: Move Apex Edge TPU class quirk to fix BAR assignment
    684dba87fdd7 PCI: Add ACS quirk for Zhaoxin Root/Downstream Ports
    17d166e9535c PCI: Add Zhaoxin Vendor ID
    d2481b5d1257 PCI: Unify ACS quirk desired vs provided checking
    981fd6ad2a5a PCI: Make ACS quirk implementations more uniform
    85a9e198f124 PCI: Add ACS quirk for Zhaoxin multi-function devices
    d2b631a136e8 PCI: Avoid ASMedia XHCI USB PME# from D0 defect
    246ff2a6f69e net/mlx5e: Get the latest values from counters in switchdev mode
    2292e4049097 net/mlx5e: Don't trigger IRQ multiple times on XSK wakeup to avoid WQ overruns
    b4284efb1e14 svcrdma: Fix leak of svc_rdma_recv_ctxt objects
    53dbb934dd4f svcrdma: Fix trace point use-after-free race
    ccd3b4bb9944 xfs: acquire superblock freeze protection on eofblocks scans
    62f1cb491552 net/cxgb4: Check the return from t4_query_params properly
    ce3460b90ed9 rxrpc: Fix DATA Tx to disable nofrag for UDP on AF_INET6 socket
    6cdded333de6 i2c: altera: use proper variable to hold errno
    db2426f86d44 bpf: Forbid XADD on spilled pointers for unprivileged users
    f1317a4a2b9b nfsd: memory corruption in nfsd4_lock()
    13b28f6b6778 drivers: soc: xilinx: fix firmware driver Kconfig dependency
    1157d97cfa2b ASoC: wm8960: Fix wrong clock after suspend & resume
    005aa9f0af9d ASoC: meson: axg-card: fix codec-to-codec link setup
    08865eb796c4 ASoC: tas571x: disable regulators on failed probe
    e9058b45556b ASoC: q6dsp6: q6afe-dai: add missing channels to MI2S DAIs
    37405f2963c7 s390/ftrace: fix potential crashes when switching tracers
    1f107e441bde counter: 104-quad-8: Add lock guards - generic interface
    db66fd5fef68 propagate_one(): mnt_set_mountpoint() needs mount_lock
    f9e41e4bbe61 iio:ad7797: Use correct attribute_group
    f581eff93958 afs: Fix to actually set AFS_SERVER_FL_HAVE_EPOCH
    c2bdc86ec8ac afs: Make record checking use TASK_UNINTERRUPTIBLE when appropriate
    9dcb1844f884 usb: gadget: udc: atmel: Fix vbus disconnect handling
    7155416143dd usb: gadget: udc: bdc: Remove unnecessary NULL checks in bdc_req_complete
    8f4cd6f0ea82 kbuild: fix DT binding schema rule again to avoid needless rebuilds
    7067a62563d2 usb: dwc3: gadget: Do link recovery for SS and SSP
    a74a5435a610 ASoC: stm32: sai: fix sai probe
    4a5c9ae67b12 printk: queue wake_up_klogd irq_work only if per-CPU areas are ready
    276224b7a147 ubifs: Fix ubifs_tnc_lookup() usage in do_kill_orphans()
    4d23f544a328 remoteproc: Fix wrong rvring index computation
    aa73bcc37686 Linux 5.4.36
    44d9eb0ebe8f s390/mm: fix page table upgrade vs 2ndary address mode accesses
    58b243cf2786 compat: ARM64: always include asm-generic/compat.h
    3160e84abaf7 powerpc/mm: Fix CONFIG_PPC_KUAP_DEBUG on PPC32
    b48331b52a28 powerpc/kuap: PPC_KUAP_DEBUG should depend on PPC_KUAP
    c4606876164c Revert "serial: uartps: Register own uart console and driver structures"
    02d32033b397 Revert "serial: uartps: Move Port ID to device data structure"
    bbc0423c8968 Revert "serial: uartps: Change uart ID port allocation"
    f7504efa6bf7 Revert "serial: uartps: Do not allow use aliases >= MAX_UART_INSTANCES"
    3e64d4db7b10 Revert "serial: uartps: Fix error path when alloc failed"
    6fcbf58b115c Revert "serial: uartps: Use the same dynamic major number for all ports"
    1bb43b4d8c32 Revert "serial: uartps: Fix uartps_major handling"
    3af0614df15c serial: sh-sci: Make sure status register SCxSR is read in correct sequence
    fceab238c534 xhci: Don't clear hub TT buffer on ep0 protocol stall
    54470b0bd16a xhci: prevent bus suspend if a roothub port detected a over-current condition
    f385e765ac93 xhci: Fix handling halted endpoint even if endpoint ring appears empty
    8dbfb11452c0 usb: typec: altmode: Fix typec_altmode_get_partner sometimes returning an invalid pointer
    740c93814783 usb: typec: tcpm: Ignore CC and vbus changes in PORT_RESET change
    11c2089767cd usb: f_fs: Clear OS Extended descriptor counts to zero in ffs_data_reset()
    bf996950d8de usb: dwc3: gadget: Fix request completion check
    a0f1f53ecd8d fpga: dfl: pci: fix return value of cci_pci_sriov_configure
    22432bcf066c UAS: fix deadlock in error handling and PM flushing work
    e1b656677f7d UAS: no use logging any details in case of ENODEV
    f4d1cf2ef83c cdc-acm: introduce a cool down
    892de572ea71 cdc-acm: close race betrween suspend() and acm_softint
    23d44059bc44 staging: vt6656: Power save stop wake_up_count wrap around.
    9f1a23cbef73 staging: vt6656: Fix pairwise key entry save.
    0bcc6585717e staging: vt6656: Fix drivers TBTT timing counter.
    74bbe9d99040 staging: vt6656: Fix calling conditions of vnt_set_bss_mode
    ec5ad5e1958c staging: vt6656: Don't set RCR_MULTICAST or RCR_BROADCAST by default.
    64882aa0c531 vt: don't use kmalloc() for the unicode screen buffer
    b027b30d1428 vt: don't hardcode the mem allocation upper bound
    8f8d7f07d951 staging: comedi: Fix comedi_device refcnt leak in comedi_open
    279dd75cec55 staging: comedi: dt2815: fix writing hi byte of analog output
    dba6465408b8 powerpc/setup_64: Set cache-line-size based on cache-block-size
    921b7b175605 ARM: imx: provide v7_cpu_resume() only on ARM_CPU_SUSPEND=y
    eabc107d20da cifs: fix uninitialised lease_key in open_shroot()
    562489ba1078 iwlwifi: mvm: fix inactive TID removal return value usage
    f1926b14bd8f iwlwifi: mvm: Do not declare support for ACK Enabled Aggregation
    c93fb506bfaf iwlwifi: mvm: limit maximum queue appropriately
    4025ac3d7fb7 iwlwifi: mvm: beacon statistics shouldn't go backwards
    222722be70de iwlwifi: pcie: actually release queue memory in TVQM
    7e69c9e6bbf3 SUNRPC: Fix backchannel RPC soft lockups
    d62d85260ac4 mac80211: populate debugfs only after cfg80211 init
    f67f3317ceb3 ASoC: dapm: fixup dapm kcontrol widget
    83f82fd5552c audit: check the length of userspace generated audit records
    20821047aca4 signal: Avoid corrupting si_pid and si_uid in do_notify_parent
    1b4e23a945bd usb-storage: Add unusual_devs entry for JMicron JMS566
    9de9003b255e tty: rocket, avoid OOB access
    f1c0d3243dbe tty: hvc: fix buffer overflow during hvc_alloc().
    52ca311e5f82 KVM: VMX: Enable machine check support for 32bit targets
    878127ac8b70 KVM: Check validity of resolved slot when searching memslots
    347125705f02 KVM: s390: Return last valid slot if approx index is out-of-bounds
    3fc644fd6100 tpm: ibmvtpm: retry on H_CLOSED in tpm_ibmvtpm_send()
    16244edc3bbe tpm: fix wrong return value in tpm_pcr_extend
    86f1c523d422 tpm/tpm_tis: Free IRQ if probing fails
    387039b25077 ALSA: usb-audio: Filter out unsupported sample rates on Focusrite devices
    d5cd82153629 ALSA: usb-audio: Fix usb audio refcnt leak when getting spdif
    dbb11f1d6d33 ALSA: hda/hdmi: Add module option to disable audio component binding
    1e1f9d36280f ALSA: hda/realtek - Add new codec supported for ALC245
    0939d06af06f ALSA: hda/realtek - Fix unexpected init_amp override
    16e373fe61cb ALSA: usx2y: Fix potential NULL dereference
    000515184f6f tools/vm: fix cross-compile build
    5126bdeaf980 mm/ksm: fix NULL pointer dereference when KSM zero page is enabled
    3c88e95cd167 mm/hugetlb: fix a addressing exception caused by huge_pte_offset
    a77daafc2e37 coredump: fix null pointer dereference on coredump
    fcfd63da5d82 staging: gasket: Fix incongruency in handling of sysfs entries creation
    f4f235309b5c vmalloc: fix remap_vmalloc_range() bounds checks
    3d15344e23c5 tty: serial: owl: add "much needed" clk_prepare_enable()
    4fbf19bbba6a USB: hub: Revert commit bd0e6c9614b9 ("usb: hub: try old enumeration scheme first for high speed devices")
    50ad463e20bf USB: hub: Fix handling of connect changes during sleep
    b48193a7c303 USB: core: Fix free-while-in-use bug in the USB S-Glibrary
    1d53402d89d7 USB: early: Handle AMD's spec-compliant identifiers, too
    8409f83e3e81 USB: Add USB_QUIRK_DELAY_CTRL_MSG and USB_QUIRK_DELAY_INIT for Corsair K70 RGB RAPIDFIRE
    b7758cd38b94 USB: sisusbvga: Change port variable from signed to unsigned
    557f3f549217 iio: xilinx-xadc: Make sure not exceed maximum samplerate
    b3e365a07016 iio: xilinx-xadc: Fix sequencer configuration for aux channels in simultaneous mode
    cf2849c9ef46 iio: xilinx-xadc: Fix clearing interrupt when enabling trigger
    6a956eb2e1a7 iio: xilinx-xadc: Fix ADC-B powerdown
    f83a969fcb0b iio: adc: ti-ads8344: properly byte swap value
    db168069b0d6 iio: adc: stm32-adc: fix sleep in atomic context
    02311bc13344 iio: st_sensors: rely on odr mask to know if odr can be set
    14952589c9d8 iio: core: remove extra semi-colon from devm_iio_device_register() macro
    12c02c473e86 ALSA: usb-audio: Add connector notifier delegation
    6ec99b94a3a0 ALSA: usb-audio: Add static mapping table for ALC1220-VB-based mobos
    23abb5f2faea ALSA: hda: Remove ASUS ROG Zenith from the blacklist
    419d8fb1630c KEYS: Avoid false positive ENOMEM error on key read
    b1bcb485dd6b vrf: Check skb for XFRM_TRANSFORMED flag
    dfbbb4557af4 xfrm: Always set XFRM_TRANSFORMED in xfrm{4,6}_output_finish
    ace87b487a5f geneve: use the correct nlattr array in NL_SET_ERR_MSG_ATTR
    b977fe1c9e80 vxlan: use the correct nlattr array in NL_SET_ERR_MSG_ATTR
    51c935f6c6ef net: dsa: b53: b53_arl_rw_op() needs to select IVL or SVL
    cb1a18a7d328 net: dsa: b53: Rework ARL bin logic
    2cc27f102dcd net: dsa: b53: Fix ARL register definitions
    1fae6eb0fc91 net: dsa: b53: Fix valid setting for MDB entries
    2537dc9e2c03 net: dsa: b53: Lookup VID in ARL searches when VLAN is enabled
    07856b2108cf vrf: Fix IPv6 with qdisc and xfrm
    755425c1b004 team: fix hang in team_mode_get()
    3f642d785a51 tcp: cache line align MAX_TCP_HEADER
    8a60fad4495d selftests: Fix suppress test in fib_tests.sh
    a3afaa5033f4 sched: etf: do not assume all sockets are full blown
    5a2ddf8e5a5d net/x25: Fix x25_neigh refcnt leak when receiving frame
    6885d58eb439 net: stmmac: dwmac-meson8b: Add missing boundary to RGMII TX clock array
    4acc0b18f7af net: openvswitch: ovs_ct_exit to be done under ovs_lock
    21b1a767eba6 net: netrom: Fix potential nr_neigh refcnt leak in nr_add_node
    befd63a980cc net/mlx4_en: avoid indirect call in TX completion
    49bbf322316c net: bcmgenet: correct per TX/RX ring statistics
    aa6a14bc4102 mlxsw: Fix some IS_ERR() vs NULL bugs
    d5ba4c22928f macvlan: fix null dereference in macvlan_device_event()
    70a37b9816f3 macsec: avoid to set wrong mtu
    2d197d8e1aa4 ipv6: fix restrict IPV6_ADDRFORM operation
    382f57b996aa ipv4: Update fib_select_default to handle nexthop objects
    3b759befd7f2 cxgb4: fix large delays in PTP synchronization
    d02f4242650d cxgb4: fix adapter crash due to wrong MC size
    91097eba10d3 PCI/PM: Add missing link delays required by the PCIe spec
    7720fd9c679e PCI/ASPM: Allow re-enabling Clock PM
    3340d011cff4 scsi: smartpqi: fix problem with unique ID for physical device
    d867f2757173 scsi: smartpqi: fix call trace in device discovery
    8a20fb1c9a49 scsi: smartpqi: fix controller lockup observed during force reboot
    3edd55247295 virtio-blk: improve virtqueue error to BLK_STS
    2390698b9dbd tracing/selftests: Turn off timeout setting
    ca958fe8af20 ASoC: SOF: trace: fix unconditional free in trace release
    01fad934f1bd PCI: pciehp: Prevent deadlock on disconnect
    39b9a0b3d24d libbpf: Fix readelf output parsing on powerpc with recent binutils
    b91ae5994725 PCI/PM: Add pcie_wait_for_link_delay()
    df38cda0144a drm/amd/display: Not doing optimize bandwidth if flip pending.
    2be21320076d xhci: Finetune host initiated USB3 rootport link suspend and resume
    ea6f7011c42d xhci: Wait until link state trainsits to U0 after setting USB_SS_PORT_LS_U0
    e650a264df6f xhci: Ensure link state is U3 after setting USB_SS_PORT_LS_U3
    bdb61374da1b ALSA: usb-audio: Add Pioneer DJ DJM-250MK2 quirk
    578aa47612f2 ASoC: Intel: bytcr_rt5640: Add quirk for MPMAN MPWIN895CL tablet
    632d9736d215 drm/amd/display: Calculate scaling ratios on every medium/full update
    16c370534d6c perf/core: Disable page faults when getting phys address
    41a3e446bc56 pwm: bcm2835: Dynamically allocate base
    53cdc935c912 pwm: renesas-tpu: Fix late Runtime PM enablement
    1bfb6423c6fc nvme: fix compat address handling in several ioctls
    de1263d4306e powerpc/pseries: Fix MCE handling on pseries
    107290a8f06b Revert "powerpc/64: irq_work avoid interrupt when called with hardware irqs enabled"
    1712911bfb34 loop: Better discard support for block devices
    ed61eec49a70 s390/cio: avoid duplicated 'ADD' uevents
    ad1187668ffe s390/cio: generate delayed uevent for vfio-ccw subchannels
    8652254e96a6 lib/raid6/test: fix build on distros whose /bin/sh is not bash
    e84ef75fa184 kconfig: qconf: Fix a few alignment issues
    cb5d9604038c ipc/util.c: sysvipc_find_ipc() should increase position index
    70638a74c52a selftests: kmod: fix handling test numbers above 9
    16846f6fcbcf kernel/gcov/fs.c: gcov_seq_next() should increase position index
    1edfff795d4f dma-direct: fix data truncation in dma_direct_get_required_mask()
    8300465623bf drm/amd/display: Update stream adjust in dc_stream_adjust_vmin_vmax
    da2c733a7180 nvme: fix deadlock caused by ANA update wrong locking
    90a33c23aad8 ASoC: Intel: atom: Take the drv->lock mutex before calling sst_send_slot_map()
    1310d9655be0 tools/test/nvdimm: Fix out of tree build
    713ad9b9d37a scsi: iscsi: Report unbind session event when the target has been removed
    f507ae6e33cb nvme-tcp: fix possible crash in write_zeroes processing
    a5f036adae09 pwm: rcar: Fix late Runtime PM enablement
    b71ac8086a7b ceph: don't skip updating wanted caps when cap is stale
    acbfccc6a3e3 ceph: return ceph_mdsc_do_request() errors from __get_parent()
    fb669262fdef scsi: libfc: If PRLI rejected, move rport to PLOGI state
    8427b05a7a1f scsi: lpfc: Fix crash in target side cable pulls hitting WAIT_FOR_UNREG
    0c5733a96261 scsi: lpfc: Fix crash after handling a pci error
    9d1062c4dd14 scsi: lpfc: Fix kasan slab-out-of-bounds error in lpfc_unreg_login
    66491dadd125 watchdog: reset last_hw_keepalive time at start
    7b709f1ba800 tools/testing/nvdimm: Fix compilation failure without CONFIG_DEV_DAX_PMEM_COMPAT
    810045068bda arm64: Silence clang warning on mismatched value/register sizes
    aa50d567ec4a arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space
    6de0c621191a arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419
    f2791551cedb arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419
    4b823bf7c2ca net, ip_tunnel: fix interface lookup with no key
    5811f24abd27 f2fs: fix to avoid memory leakage in f2fs_listxattr
    79ad14904152 ext4: fix extent_status fragmentation for plain files
    0c418786cb3a Linux 5.4.35
    a801a05ca714 bpf, test_verifier: switch bpf_get_stack's 0 s> r8 test
    8781011a302b bpf: Test_progs, add test to catch retval refine error handling
    37e1cdff90c1 bpf: Test_verifier, bpf_get_stack return value add <0
    3bd5bcafbbf3 bpf: fix buggy r0 retval refinement for tracing helpers
    f1afcf9488fc KEYS: Don't write out to userspace while holding key semaphore
    5d53bfdce008 mtd: phram: fix a double free issue in error path
    4191ebe1fc71 mtd: lpddr: Fix a double free in probe()
    7d4adb1d3c69 docs: Fix path to MTD command line partition parser
    318d5088fdfe mtd: spinand: Explicitly use MTD_OPS_RAW to write the bad block marker to OOB
    700bccb8e9a2 mtd: rawnand: free the nand_device object
    0c72ec11d8bd locktorture: Print ratio of acquisitions, not failures
    01c9e2a9fc5c tty: evh_bytechan: Fix out of bounds accesses
    f656649089a3 fbmem: Adjust indentation in fb_prepare_logo and fb_blank
    47e4d791d514 iio: si1133: read 24-bit signed integer for measurement
    a2a385aae551 ARM: dts: sunxi: Fix DE2 clocks register range
    7e141c307834 fbdev: potential information leak in do_fb_ioctl()
    f0938746879a dma-debug: fix displaying of dma allocation type
    bc69709c54df net: dsa: bcm_sf2: Fix overflow checks
    762d35aa906f drm/nouveau/gr/gp107,gp108: implement workaround for HW hanging during init
    a156e67acf6c f2fs: fix to wait all node page writeback
    f08e4e70b0ac iommu/amd: Fix the configuration of GCR3 table root pointer
    436af737c3c2 libnvdimm: Out of bounds read in __nd_ioctl()
    dcb122749f58 power: supply: axp288_fuel_gauge: Broaden vendor check for Intel Compute Sticks.
    760eecac993b csky: Fixup init_fpu compile warning with __init
    1500c7003146 sunrpc: Fix gss_unwrap_resp_integ() again
    ddb8812a21e1 ext2: fix debug reference to ext2_xattr_cache
    24191c8c9bd2 iommu/vt-d: Fix page request descriptor size
    a5a1d567a069 iommu/vt-d: Silence RCU-list debugging warning in dmar_find_atsr()
    21439dff919e ext2: fix empty body warnings when -Wextra is used
    d00041a48c3e SUNRPC: fix krb5p mount to provide large enough buffer in rq_rcvsize
    900cd0f6c688 iommu/vt-d: Fix mm reference leak
    9c01a49a7117 iommu/virtio: Fix freeing of incomplete domains
    475bec7063bc drm/vc4: Fix HDMI mode validation
    b58244c482ce um: falloc.h needs to be directly included for older libc
    6c3339269a8a ACPICA: Fixes for acpiExec namespace init file
    9f8b1216dac9 f2fs: fix NULL pointer dereference in f2fs_write_begin()
    57615a8561f0 csky: Fixup get wrong psr value from phyical reg
    c848e00e3b95 NFS: Fix memory leaks in nfs_pageio_stop_mirroring()
    2e03d3c569b6 drm/amdkfd: kfree the wrong pointer
    e907a0d09b34 csky: Fixup cpu speculative execution to IO area
    88591187bebc x86: ACPI: fix CPU hotplug deadlock
    a9282e58238d leds: core: Fix warning message when init_data
    ddf39dc2f7a3 drm/nouveau: workaround runpm fail by disabling PCI power management on certain intel bridges
    f24d8de03b72 KVM: s390: vsie: Fix possible race when shadowing region 3 tables
    3910babeac1a compiler.h: fix error in BUILD_BUG_ON() reporting
    b525f94f16e5 percpu_counter: fix a data race at vm_committed_as
    ffac60b8bc5f include/linux/swapops.h: correct guards for non_swap_entry()
    2a40eaab1fc4 drm/nouveau/svm: fix vma range check for migration
    f3955f1e58be drm/nouveau/svm: check for SVM initialized before migrating
    a825ce86ebed mm/hugetlb: fix build failure with HUGETLB_PAGE but not HUGEBTLBFS
    23e2519760f8 cifs: Allocate encryption header through kmalloc
    6ba010ea4856 um: ubd: Prevent buffer overrun on command completion
    b9f88c31b266 ext4: do not commit super on read-only bdev
    4078dceb1228 s390/cpum_sf: Fix wrong page count in error message
    fd80f4a6805c powerpc/maple: Fix declaration made after definition
    bee9bc3e0248 powerpc/prom_init: Pass the "os-term" message to hypervisor
    765052217847 btrfs: add RCU locks around block group initialization
    285f25c97f24 hibernate: Allow uswsusp to write to swap
    4753b111f003 s390/cpuinfo: fix wrong output when CPU0 is offline
    380d12904603 f2fs: Add a new CP flag to help fsck fix resize SPO issues
    066f1e4174f2 f2fs: Fix mount failure due to SPO after a successful online resize FS
    ea468f37370a NFS: direct.c: Fix memory leak of dreq when nfs_get_lock_context fails
    81b41f5ecc96 phy: uniphier-usb3ss: Add Pro5 support
    3e85d501828c f2fs: fix to show norecovery mount option
    ffbad91b66ce KVM: PPC: Book3S HV: Fix H_CEDE return code for nested guests
    ea410f2a1fc8 ARM: dts: rockchip: fix lvds-encoder ports subnode for rk3188-bqedison2qc
    59bafdc99440 NFSv4.2: error out when relink swapfile
    264e3f1597e8 NFSv4/pnfs: Return valid stateids in nfs_layout_find_inode_by_stateid()
    07cd4e8f745c NFS: alloc_nfs_open_context() must use the file cred when available
    66bfacd0f302 rtc: 88pm860x: fix possible race condition
    56aaa0e8c92a dma-coherent: fix integer overflow in the reserved-memory dma allocation
    960bf4e436ca soc: imx: gpc: fix power up sequencing
    1e7abaf24875 arm64: dts: clearfog-gt-8k: set gigabit PHY reset deassert delay
    d7b59cd020f7 arm64: tegra: Fix Tegra194 PCIe compatible string
    5615f66bfdfc arm64: tegra: Add PCIe endpoint controllers nodes for Tegra194
    540f9620f192 clk: tegra: Fix Tegra PMC clock out parents
    b7dee304aa0e power: supply: bq27xxx_battery: Silence deferred-probe error
    6a7721714835 arm64: dts: allwinner: a64: Fix display clock register range
    5d2861f840bb ARM: dts: rockchip: fix vqmmc-supply property name for rk3188-bqedison2qc
    1321fb4320e7 f2fs: fix the panic in do_checkpoint()
    6d4330391c49 net/mlx5e: Enforce setting of a single FEC mode
    0d03cbfdf364 clk: at91: usb: continue if clk_hw_round_rate() return zero
    04e43c7c664a clk: Don't cache errors from clk_ops::get_phase()
    83321ee302e3 drm/ttm: flush the fence on the bo after we individualize the reservation object
    94ebb1eea0e7 x86/Hyper-V: Free hv_panic_page when fail to register kmsg dump
    d662b44161e4 rbd: call rbd_dev_unprobe() after unwatching and flushing notifies
    88a57e387cf0 rbd: avoid a deadlock on header_rwsem when flushing notifies
    a362482b2325 block, bfq: invoke flush_idle_tree after reparent_active_queues in pd_offline
    839b7cd1d8bc block, bfq: make reparent_leaf_entity actually work only on leaf entities
    ad749ca022ad block, bfq: turn put_queue into release_process_ref in __bfq_bic_change_cgroup
    00d392873771 afs: Fix race between post-modification dir edit and readdir/d_revalidate
    42e343cf3285 afs: Fix afs_d_validate() to set the right directory version
    8c3e4ba0fa7a afs: Fix rename operation status delivery
    4eba6ec9644a afs: Fix decoding of inline abort codes from version 1 status records
    0604b60ef9d7 afs: Fix missing XDR advance in xdr_decode_{AFS,YFS}FSFetchStatus()
    4f7b1e892ed0 x86/Hyper-V: Report crash data in die() when panic_on_oops is set
    5097186b279a x86/Hyper-V: Report crash register data when sysctl_record_panic_msg is not set
    31ebf98817c6 x86/Hyper-V: Report crash register data or kmsg before running crash kernel
    1ed38a98478f x86/Hyper-V: Trigger crash enlightenment only once during system crash.
    9f38f7b46de0 x86/Hyper-V: Unload vmbus channel in hv panic callback
    4c2a34f9f448 of: overlay: kmemleak in dup_and_fixup_symbol_prop()
    93ef21bb1a72 of: unittest: kmemleak in of_unittest_overlay_high_level()
    a1371954ee49 of: unittest: kmemleak in of_unittest_platform_populate()
    dd3dd28241e0 of: unittest: kmemleak on changeset destroy
    25c9cdef5748 xsk: Add missing check on user supplied headroom size
    9244c79da15c ALSA: hda: Don't release card at firmware loading error
    182fa4d72a7c irqchip/mbigen: Free msi_desc on device teardown
    daefa51c4353 netfilter: nf_tables: report EOPNOTSUPP on unsupported flags/object type
    aea3873fb02c kbuild, btf: Fix dependencies for DEBUG_INFO_BTF
    e1e5c219f033 ARM: dts: imx6: Use gpc for FEC interrupt controller to fix wake on LAN.
    ed0a5355aa62 ALSA: hda: Honor PM disablement in PM freeze and thaw_noirq ops
    d8b667b45d72 scsi: sg: add sg_remove_request in sg_common_write
    d979eda8a72b objtool: Fix switch table detection in .text.unlikely
    2613535abd3b arm, bpf: Fix offset overflow for BPF_MEM BPF_DW
    d4adee8e8f2f arm, bpf: Fix bugs with ALU64 {RSH, ARSH} BPF_K shift by 0
    e7f6c25bafa6 xsk: Fix out of boundary write in __xsk_rcv_memcpy
    9a9eae78529c watchdog: sp805: fix restart handler
    41d097c83343 ext4: use non-movable memory for superblock readahead

(From OE-Core rev: dc05c81a0f74f7a7cc2852e5e66b871514b77817)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2503b1a55b3525ad8f97d3adafd442688dbd4397)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Trevor Gamblin
1bb406b092 qemuarm: check serial consoles vs /proc/consoles
Note that this patch affects qemuarm AND qemuarm64.

When booting a VM and during operation, the following message
periodically appears:

INIT: Id "hvc0" respawning too fast: disabled for 5 minutes

This is because hvc0 is specified in SERIAL_CONSOLES in qemuarm.conf
and qemuarm64.conf, but it is not in /proc/consoles and
SERIAL_CONSOLES_CHECK is not specified, leaving getty to attempt to
enable hvc0. Add SERIAL_CONSOLES_CHECK to both conf files so that
hvc0 isn't enabled if it hasn't been set there or in local.conf.

(From OE-Core rev: e2658a7d73b6f21939e644e533718cd05b288766)

Signed-off-by: Trevor Gamblin <trevor.gamblin@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 982b7f98b8423236cc986346379b1bde3694f131)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Mark Hatle
45d742b0e0 sstate.bbclass: When siginfo or sig files are missing, stop fetcher errors
Prior to fetching, the system checks if the sstate file is present
either locally or on the mirror.  If it is, then it goes to the fetch
stage.  Up to three files can be fetched: sstate, sstate.siginfo and
sstate.sig (if signature validation is enabled).

The previous pstaging_fetch function would iterate over these and, if
a download error occurred, would spew forth a great number of fetcher
failure messages as well as stop fetching the next item in the set.

This was resolved by adding a fetcher.checkstatus() call prior to
the download.  If the file isn't present, then the exception will
be triggered, and no fetcher failure messages will reach the user.

The exception handler is then modified to be a pass so that it will
loop and pull the rest of the files that are requested.

Additionally, a check for the existence of the .sig file was added
to sstate_installpkg to avoid an error trying to load the .sig
if it wasn't downloaded.
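
A minimal sketch of the check-then-fetch pattern described above
(illustrative only, not the actual sstate.bbclass code; the function
and variable names are assumptions, and it needs BitBake's python
context for the datastore 'd'):

    import bb.fetch2

    def pstaging_fetch_sketch(sstatefetch, d):
        # Candidate files: the sstate archive plus optional signature data.
        for suffix in ('', '.siginfo', '.sig'):
            srcuri = 'file://' + sstatefetch + suffix
            try:
                fetcher = bb.fetch2.Fetch([srcuri], d)
                # Probe the mirror first; a missing file raises here instead
                # of producing a stream of fetcher failure messages.
                fetcher.checkstatus()
                fetcher.download()
            except bb.fetch2.BBFetchException:
                # Not present on the mirror: skip it and try the next file.
                pass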

(From OE-Core rev: ec58532ab6fc6343144da67789c928c751d36c06)

Signed-off-by: Mark Hatle <mark.hatle@xilinx.com>
Signed-off-by: Mark Hatle <mark.hatle@kernel.crashing.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a9085140434e2d26c0bb75bb53fcb7f7c19ef86d)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Khem Raj
4c582f5cf1 make-mod-scripts: Fix a rare build race condition
There is a build break which rarely happens but is seen often enough
with 4.1 kernel based builds

/bin/sh: 1: scripts/basic/fixdep: Permission denied
scripts/Makefile.host:124: recipe for target 'scripts/dtc/srcpos.o' failed
make[3]: *** [scripts/dtc/srcpos.o] Error 126

This patch sequences the build targets so that it works reliably with
different kernel versions.

Dividing the target into 'scripts_basic' and 'scripts' is not
strictly necessary; it was simply what was used for testing on
kernel 4.1, which is quite an old kernel.

Perhaps just using 'scripts' is sufficient, but that has not been
tested, and it is not known whether it would reintroduce the build
race seen above.

(From OE-Core rev: 8a7da39c04fbab1280c464f39a791e4fbd1e7da9)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 55ac6e2d251287419138931aa0d0894cf1267787)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Konrad Weihmann
9aefaf5de9 qemurunner: fix ip fallback detection
When falling back from detecting the ip from /proc/./cmdline, the
output of runqemu is actually
'Network configuration: ip=192.168.7.2::192.168.7.1::255.255.255.0',
which doesn't match the given regex, leading to a run failure even
though the IP is detectable.
Fix the regex by inserting an optional 'ip=' prefix before the first IP.
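
A small illustration of the regex change (a sketch, not the exact
pattern used in qemurunner.py):

    import re

    # Both forms that runqemu may print for the network configuration line.
    lines = [
        "Network configuration: 192.168.7.2::192.168.7.1::255.255.255.0",
        "Network configuration: ip=192.168.7.2::192.168.7.1::255.255.255.0",
    ]

    # The optional 'ip=' prefix lets the same pattern match both outputs.
    pattern = re.compile(r"Network configuration: (?:ip=)?([0-9.]+)::")

    for line in lines:
        match = pattern.search(line)
        print(match.group(1) if match else "no match")  # 192.168.7.2 twice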

(From OE-Core rev: 9c2efe41d5d894094552c4bbc4180675a5aac751)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 75f2471d15fab024775c59cb70c54e3f25f9ae72)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Adrian Bunk
56035c1714 libubootenv: Remove the DEPENDS on mtd-utils
It was only used for pulling in zlib, but this is now
a direct dependency.

Also move the DEPENDS to a more common location in the file.

(From OE-Core rev: ce5500cc07da270322b67db5001fc1476b6bf2fe)

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a99fd8b705be3b8c70cb0f17f60b013d989d625c)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Lee Chee Yang
200e6be175 libexif: fix CVE-2020-13114
(From OE-Core rev: 2e497029ee00babbc50f3c1d99580230bc46155c)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Lee Chee Yang
2119777dab bind: fix CVE-2020-8616/7
fix CVE-2020-8616 and CVE-2020-8617

(From OE-Core rev: 8681058cce46b342c9895819e3a4bc0770934d86)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d0df831830e4c5f8df2343a45ea75c2ab4f57058)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Joe Slater
7f887005f9 terminal.py: do not stop searching for auto
If a terminal fails to spawn() we should continue looking.
gnome-terminal, in particular, can be present but not start.

(From OE-Core rev: 5ca00faa9c085fef1781b66561de461e9cc5b117)

Signed-off-by: Joe Slater <joe.slater@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6e4babdeee38d32002a4c9129e77466ae4156dd7)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Richard Purdie
53559bb82d resulttool/log: Add ability to dump ltp logs as well as ptest
Currently only ptest logs are accessible with the log command; this
adds support so that the ltp logs can be extracted too.

(From OE-Core rev: 0b513274a0ae722065cf1a605090000e854e2f81)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 64a2121a875ce128959ee0a62e310d5f91f87b0d)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Richard Purdie
212c92482c resulttool/report: Remove leftover debugging
I've long since wondered why there was some odd output in result reports;
remove the leftover debugging which was causing it.

(From OE-Core rev: 10d1d2ffa0906561d65886caee44652242139913)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 66e96bf70753933714ff8edcc13a1f35a052656f)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Robert Yang
7c65513587 archiver.bbclass: Fix duplicated SRC_URIs for do_ar_original
The urls argument of bb.fetch2.Fetch(urls, d) duplicates entries from SRC_URI, which caused errors like:

bb.data_smart.ExpansionError: Failure expanding variable SRCPV, expression was ${@bb.fetch2.get_srcrev(d)} which triggered exception FetchError: Fetcher failure: The SRCREV_FORMAT variable must be set when multiple SCMs are used.
The SCMs are:
git://github.com/docker/notary.git;destsuffix=git/src/github.com/docker/notary
git://github.com/docker/notary.git

The first one is from the original SRC_URI, the second one is from the
variable 'urls', so cleaning up SRC_URI before calling bb.fetch2.Fetch()
fixes the problem.
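
A minimal sketch of that idea, emptying SRC_URI on a copy of the
datastore before constructing the fetcher (illustrative; not the exact
do_ar_original code):

    import bb.data
    import bb.fetch2

    def fetch_originals_sketch(urls, d):
        # Work on a copy so the recipe's real SRC_URI is left untouched.
        localdata = bb.data.createCopy(d)
        # Clear SRC_URI so the fetcher only sees the urls passed in and
        # get_srcrev() does not see the same SCM twice.
        localdata.setVar('SRC_URI', '')
        fetcher = bb.fetch2.Fetch(urls, localdata)
        fetcher.download()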

(From OE-Core rev: a7f50876f95a9be9fe045af1e4efddfe53a983f5)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b244c4f3427cd07376d4b8f7d27e38735bcc90e7)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Gregor Zatko
6dc4f62bcc sanity.bbclass: Detect and fail if 'inherit' is used in conf file
The 'inherit' directive may not be used in conf files, as it is meant
for the inheritance of classes within recipes and classes.
The correct form in a conf file is INHERIT.

This commit adds:
- a sanity check to detect whether the wrong form is used
- a build failure if it is
- a message telling the user about the difference between the directives

[YOCTO #5426]
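
A rough sketch of the kind of check this adds (illustrative; the real
check lives in sanity.bbclass and its wording differs):

    import re

    def check_conf_for_inherit(conf_path):
        # 'inherit' (lower case) is a recipe/class directive; conf files
        # must use INHERIT instead, so flag any lower-case use as an error.
        errors = []
        with open(conf_path, errors='replace') as conf:
            for lineno, line in enumerate(conf, start=1):
                if re.match(r'\s*inherit\s', line):
                    errors.append(
                        "%s:%d: 'inherit' is only valid in recipes and "
                        "classes; use INHERIT in configuration files"
                        % (conf_path, lineno))
        return errors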

(From OE-Core rev: bc6e27aeed5d536d2b764949c307f260f78b7810)

Signed-off-by: Gregor Zatko <gzatko@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 07bf9b460fe97dec86439302a83bbefa8bac9d70)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Steve Sakoman
45c4dbf32e oeqa/concurrencytest: don't delete build directory for failed tests
(From OE-Core rev: be24033acee75526a3a66c12606ef67dcb52a7ee)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3d5aa170d2e88b852bd2a4452aab9311a24badef)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Joshua Watt
5b65a822d6 checklayer: Skip layers without a collection
As in other places in the file, skip layers that don't define a
collection when searching for a layer to resolve a dependency. Fixes
KeyError exceptions when attempting to access the layer collections
later.

(From OE-Core rev: ae65adf471a9ad04c6a44bf020a28f1006db106a)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 26090a2861ebe21224aaf89d7be0c0a89ca58e48)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Khem Raj
e9c66dd898 cve-check: Run it after do_fetch
Certain recipes, e.g. bash and readline (from meta-gplv2), download patches
instead of having them in the metadata, which can make cve_check fail:

ERROR: readline-5.2-r9 do_cve_check: File Not found: qemuarm/build/../downloads/readline52-001

This patch ensures that the download is done before the CVE scan runs.
These external patches may not contain the CVE tags it expects, but
this fixes the run failures seen above.

(From OE-Core rev: dbf143d79476e54e8da93101fc16eaedeec88362)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e406fcb6c609a0d2456d7da0d2406d2d9fa52dd2)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-05 21:36:30 +01:00
Steve Sakoman
6436d102f1 Documenation: Prepared for the 3.1.1 release
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-02 15:47:38 +01:00
Robert P. J. Day
d0df44077b ref-manual: delete long-unused comments in variable glossary
As these comments have been around since 2015 and apparently unused,
get rid of hundreds of them.

(From yocto-docs rev: 98687310b9e2d4cd3bd4c96e100877414dcf791c)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit afec5770a22ac51c956e87567bf39e71064e9f04)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-02 15:47:38 +01:00
Robert P. J. Day
9c7484d341 ref-manual: Remove long-dead PACKAGE_GROUP variable
This was, years ago, deprecated in favour of FEATURE_PACKAGES, so
remove all references, other than the entry in the migration section.

(From yocto-docs rev: 2deac02f283547f66d1f7a002f5bf07ddd449401)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-02 15:47:38 +01:00
Robert P. J. Day
9cff09000f ref-manual: typo "SSTATE_MIRROR" -> "SSTATE_MIRRORS"
(From yocto-docs rev: 7c4d45e50422fa6e6cb607b0df2abdbb41ffd6d1)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-02 15:47:38 +01:00
Robert P. J. Day
9d777c388d ref-manual: IMAGE_TYPES, add tar.zst, delete elf
Update list of legal IMAGE_TYPES to match what's in
image_types.bbclass.

(From yocto-docs rev: 5edeb02f5154c17902b74739052e46d4223473b3)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-02 15:47:38 +01:00
Robert P. J. Day
db51b0943b ref-manual: fix excessive command indentation
(From yocto-docs rev: e3a74639609e99357b23a5702c38a7a943713cdc)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-06-02 15:47:37 +01:00
Paul Barker
9c80490684 avahi: Don't advertise example services by default
The example service files are placed into /etc/avahi/services when we
run `make install` for avahi. This results in ssh and sftp-ssh services
being announced by default even if no ssh server is installed in an
image.

These example files should be moved away to another location such as
/usr/share/doc/avahi (taking inspiration from Arch Linux).

(From OE-Core rev: c88cf750f26f6786d6ba5b4f1f7e5d4f0c800e6e)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:29 +01:00
Adrian Bunk
df1424206a wireless-regdb: Upgrade 2019.06.03 -> 2020.04.29
(From OE-Core rev: 5b71a3f3d1bca6b52f53b97971131a6771618420)

(From OE-Core rev: 5472a4f30aa1c599ca85efc3f973f2fd9dbc4476)

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:29 +01:00
Adrian Bunk
4924f7f99f git: Upgrade 2.24.1 -> 2.24.3
This includes the fixes for CVE-2020-5260 and CVE-2020-11008.

(From OE-Core rev: 46da8ac6d25bb75c625c2da1d36cbc693a7d442d)

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:29 +01:00
Jan-Simon Moeller
99bb209520 file: add bzip2-replacement-native to DEPENDS to fix sstate issue
file-native, when built on a Debian 10 host, will embed a dependency on
'libbz2.so.1.0' (instead of 'libbz2.so.1'). This can cause issues
when sharing the sstate between hosts, e.g.:

 recipe-sysroot-native/usr/lib/rpm/rpmdeps:
      error while loading shared libraries: libbz2.so.1.0: \
        cannot open shared object file: No such file or directory

To avoid this situation, let's add bzip2-replacement-native to the
file recipe's DEPENDS_class-native.

Details in https://bugzilla.yoctoproject.org/show_bug.cgi?id=13915.

(From OE-Core rev: 5a2bc3bfa9e1a4f37b6e26a5c40a4a9c025d03f1)

Signed-off-by: Jan-Simon Moeller <dl9pf@gmx.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 4a996574464028bd5d57b90920d0887d1a81e9e9)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:29 +01:00
Alexander Kanavin
2acb49212e testresults.json: add duration of the tests as well
This is printed by testimage, but isn't actually saved.
It's a useful metric for tracking execution times.

(From OE-Core rev: 866c652c850d9e23300218fcbe0b9e4b3ade2ebf)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8fc19639f47b959a141dae231395bbababa644e1)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:29 +01:00
Quentin Schulz
b7fa39c31a base/insane: Check pkgs lics are subset of recipe lics only once
Move logic checking that all packages licenses are only a subset of
recipe licenses from base.bbclass to the insane.bbclass so that it's
evaluated only once, during do_package_qa.

As explained in the linked bugzilla entry, if a package license is not
part of the recipe license, the warning message gets shown an
unreasonable amount of time because it's evaluated every time a recipe
is parsed.

[YOCTO #10130]

This also makes it possible to silence this error with INSANE_SKIP.
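
A schematic sketch of the subset check being moved (the real code uses
OE's license parsing helpers and understands operators like & and |;
the flattened whitespace-separated lists below are an assumption for
illustration only):

    def package_licenses_are_subset(recipe_licenses, package_licenses):
        # Flag any license a package declares that the recipe does not;
        # with this change that becomes a QA issue INSANE_SKIP can silence.
        recipe_set = set(recipe_licenses.split())
        missing = [lic for lic in package_licenses.split()
                   if lic not in recipe_set]
        return (len(missing) == 0, missing)

    ok, missing = package_licenses_are_subset("GPLv2 MIT", "MIT BSD-3-Clause")
    print(ok, missing)   # False ['BSD-3-Clause']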

(From OE-Core rev: ae404ef230882e442e9390b314e1ce023fdbbd1b)

Signed-off-by: Quentin Schulz <quentin.schulz@streamunlimited.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 852408ed4be1f64c57e196688728b7ed223d3493)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:29 +01:00
Lee Chee Yang
0a8d17cdbe qemu: fix CVE-2020-11869
(From OE-Core rev: 1af607d9e635e7cf2f6cf3e4c6d05f1e2cb6acc9)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit 5f01d45266bbc0d0f1a32d10c0841326193cc9c1)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
Marek Vasut
6955683b99 libubootenv: Depend on zlib
libubootenv depends on zlib, as it calls at least crc32() from
there and links against it. Add the DEPENDS entry.

(From OE-Core rev: dc5babe9472ba7379edbb17b6cbac44604606b26)

Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Stefano Babic <sbabic@denx.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit db513f9ec59b7ac526b2cdc42b0eb2573e134bc4)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
zhengruoqin
4ebe10ffde make-mod-scripts: Fix dependence error.
Error:
 Problem: conflicting requests
  - nothing provides make-mod-scripts = 1.0-r0 needed by
make-mod-scripts-dev-1.0-r0

(From OE-Core rev: 354adf441ac880fe3759af594f9669c3a7eb3308)

Signed-off-by: Zheng Ruoqin <zhengrq.fnst@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit 41fff377b921070f371c0aa639e37c27c113ccb9)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
Aníbal Limón
575d59897e linux-firmware: Update to 20200122 -> 20200421
(From OE-Core rev: d53dd9460f17e3c1bb4e5a13d34a8977b25bf0f0)

Signed-off-by: Aníbal Limón <anibal.limon@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit 7f52475a2c84197c95928d65debf894bf59c90e7)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
Aníbal Limón
b370f02148 recipes-kernel/linux-firmware: Add adreno-a630 firmware package
(From OE-Core rev: 0e62fe015b457c18de158ba22641759a093daf45)

Signed-off-by: Aníbal Limón <anibal.limon@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit f3021d19eff3c9705cd91e407c042a1aa3b8e7d9)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
Aníbal Limón
da40da5fd0 recipes-kernel/linux-firmware: Add wlanmdsp.mbn to qcom-modem package
(From OE-Core rev: 717d73e5b23a49fa8ede847168445c1340147011)

Signed-off-by: Aníbal Limón <anibal.limon@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit 730999efcab0b8e49f1ade4e535a59f6d8e395e0)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
Mingli Yu
fcaba44e6a python3-setuptools: add the missing rdepends
Add the missing rdepends to fix the error below:
 # python3
 [snip]
 >>> import setuptools.lib2to3_ex
 [snip]
 ModuleNotFoundError: No module named 'lib2to3'
 ModuleNotFoundError: No module named 'pickle'

(From OE-Core rev: d19d1ccca3f86a59a72023727d3d804c2e9d18dc)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit be5c3c989d75290863cc7aef9949cf6e82d3070f)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
Paul Barker
95f51b7c07 archiver.bbclass: Make do_deploy_archives a recursive dependency
To ensure that archives are captured for all dependencies of a typical
bitbake build we add do_deploy_archives to the list of recursive
dependencies of do_build. Without this, archives may be missed for
recipes such as gcc-source which do not create packages or populate a
sysroot.

do_deploy_archives is also added to the recursive dependencies of
do_populate_sdk so that all sources required for an SDK can be captured.
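
The mechanism bitbake provides for this is the 'recrdeptask' task flag;
a minimal sketch of setting it from python follows (an illustration of
the idea, not the literal archiver.bbclass hunk):

    def add_archive_recursion_sketch(d):
        # 'recrdeptask' makes the named task part of the recursive
        # dependency walk, so do_deploy_archives runs for every recipe
        # that do_build and do_populate_sdk can reach, including ones
        # like gcc-source that package nothing.
        for task in ('do_build', 'do_populate_sdk'):
            d.appendVarFlag(task, 'recrdeptask', ' do_deploy_archives')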

(From OE-Core rev: 66a2e4bcafb3f8835bb21d73a9e78e7d9d15bbd3)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit e1feb6030cd8e77c553ec10a366cbeb7e902bada)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
Mingli Yu
8f45dc87d0 bison: fix the parallel build
Explicitly make the BUILT_SOURCES, which
are the generated headers such as stdio.h,
fcntl.h etc., dependencies of the
gl_LIBOBJS such as libbison_a-sprintf.o,
libbison_a-printf.o etc., to guarantee that the
BUILT_SOURCES are generated before starting to
compile EXTRA_lib_libbison_a_SOURCES such as
fprintf.c in a parallel build; otherwise the
following error may occur:
 | muscle-tab.c:(.text+0x77a): undefined reference to `rpl_sprintf'

It does the same for src_bison_OBJECTS and
lib_libbison_a_OBJECTS to make sure the BUILT_SOURCES
are generated before starting to compile src_bison_SOURCES,
which contains AnnotationList.c etc.

BTW, MOSTLYCLEANFILES also contains generated headers
that need to be created early in the build process,
so add it in as well to avoid the following error:
 | ./lib/uniwidth/width.c:21:10: fatal error: uniwidth.h: No such file or directory

[YOCTO #13825]

(From OE-Core rev: 99ddfee2a2434d282749e2062987067f70b0ef54)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit 43d74b11095092b13f94074785d0306484fabea6)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
Kai Kang
a7098495ce gcr: depends on gnupg-native
It fails to build gcr if no commmand gpg on build host:

| meson.build:44:0: ERROR: Program(s) ['gpg2', 'gpg'] not found or not executable

Add dependency gnupg-native to fix the error.

(From OE-Core rev: da7360247995d7c8e79dfcaa0c0761952a9013f1)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
(cherry picked from commit e4a6eda4c246b2bca059defed796bdab19a7ab5f)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:15:28 +01:00
Steve Sakoman
a94da86a1f poky: Add Ubuntu 20.04 as a supported distro
(From meta-yocto rev: 4ebb37d825ee6a0e713b8c5b59fabcae5d50aa93)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-28 18:08:58 +01:00
Konrad Weihmann
ed3bdd7fbc file: add PACKAGECONFIG for auto options
A few options of file configure are set to auto, which can lead to
unpredictable effects when something in the sysroot does provide
things that satisfy the autotools checks.
In the worst case this will lead to package-qa failures as libraries are
not set in RDEPENDS but configured for the tool.

To mitigate the chance of accidental configuration, set explicit options
via newly introduced PACKAGECONFIG options for bzip, lzma and zlib
support, where the default is just zlib, as it was before.

(From OE-Core rev: 5bfdb6bfbd6f1de10d415228e5a5ebe01a623e2a)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Yeoh Ee Peng
02f8d9bac2 selftest/imagefeatures: Enable sanity test for IMAGE_GEN_DEBUGFS
Add a new testcase to check IMAGE_GEN_DEBUGFS. The test makes
sure that the debug filesystem is created accordingly. It also checks
for debug symbols for some packages, as suggested by Ross Burton.

[YOCTO #10906]

(From OE-Core rev: 1038195fe9823c93cf20e2d256865171e0c915c4)

Signed-off-by: Humberto Ibarra <humberto.ibarra.lopez@intel.com>
Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Jacob Kroon
224bb0f837 pseudo: Fix enum typedef
'pseudo_access_t' is a type, so use typedef.

Fixes building pseudo with gcc 10 where -fno-common is the default.

(From OE-Core rev: 2093769199ecc3d1e474bea5ec6e8b193f6f8ea3)

Signed-off-by: Jacob Kroon <jacob.kroon@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Anton Eliasson
8a673aeaa4 meson.bbclass: Close the log file after reading
This fixes warnings like:

    WARNING: package-name-0.0.1-r0 do_configure: <string>:164: ResourceWarning:
    unclosed file <_io.TextIOWrapper
    name='/source_directory/build/tmp/work/arch/package-name/0.0.1-r0/package-name-0.0.1//meson-logs/meson-log.txt'
    mode='r' encoding='UTF-8'>
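
A minimal sketch of the pattern that avoids the warning, reading the
log through a context manager so the file object is always closed
(illustrative, not the actual meson.bbclass hunk):

    def read_meson_log(log_path):
        # 'with' guarantees the handle is closed even if reading fails,
        # which silences Python's unclosed-file ResourceWarning.
        with open(log_path, errors='replace') as log_file:
            return log_file.read()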

(From OE-Core rev: 789c008167e5fe94f781ab274d60b06eaa46ce25)

Signed-off-by: Anton Eliasson <anton.eliasson@axis.com>
Reviewed-by: Ola x Nilsson <ola.x.nilsson@axis.com>
Signed-off-by: Anton Eliasson <anton.eliasson@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Tim Orling
805a6a5b0c pypi.bbclass: use new pypi UPSTREAM_CHECK_URI
Upstream https://pypi.python.org/pypi/${PYPI_PACKAGE}/
redirects to https://pypi.org/project/${PYPI_PACKAGE}/

(From OE-Core rev: e5f3f961242d888f3f786af8f793bf1d247fdff0)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Konrad Weihmann
ba658dfdf3 pypi.bbclass: mind package suffix on version check
Some pypi packages have suffixes like dev, a0 or b1.
When doing a version check on these, the version gets falsely
identified as a major release version.
Add a terminating slash to rule out those false positives.
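
An illustration of why the terminating slash matters (a sketch; the
actual UPSTREAM_CHECK_REGEX used by pypi.bbclass may differ):

    import re

    links = [
        "/simple/foo/1.2.3/",       # a real release
        "/simple/foo/1.2.3.dev1/",  # a pre-release that should be ignored
    ]

    without_slash = re.compile(r"/foo/(?P<pver>\d+(\.\d+)+)")
    with_slash = re.compile(r"/foo/(?P<pver>\d+(\.\d+)+)/")

    for link in links:
        print(bool(without_slash.search(link)), bool(with_slash.search(link)))
    # Without the slash both links match; with it, only 1.2.3 does.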

(From OE-Core rev: 0603f6d9f2abfa67b99b1bc39228f6aa16a0370d)

Signed-off-by: Konrad Weihmann <kweihmann@outlook.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Alexander Kanavin
52623b59c0 webkitgtk: update to 2.28.2 to fix multiple CVE's
This latest stable release fixes: CVE-2020-10018, CVE-2020-11793,
CVE-2020-3885, CVE-2020-3894, CVE-2020-3895, CVE-2020-3897,
CVE-2020-3899, CVE-2020-3900, CVE-2020-3901, CVE-2020-3902

(From OE-Core rev: ea75624d2b36fc4c5fbbbb0b2f06ce8a5cadd214)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Joshua Watt
0f1efa7db0 bitbake.conf: Prevent git from detecting parent repo in recipe
Prevents git commands run in a recipe from moving up past ${WORKDIR}
when searching for a .git directory, and thus prevents them from
detecting the parent OE-core .git directory. Fixes several
reproducibility issues where recipes would use the OE-core version as
the recipe version due to git walking up the tree.

(From OE-Core rev: 02ecf3e2a98a614805f6f2574c2bf14162192d01)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Mingli Yu
68e9329c47 python3-setuptools: add the missing rdepends
Add the missing rdepends to fix the error below:
 # python3
 [snip]
 >>> import setuptools
 [snip]
 ModuleNotFoundError: No module named 'json'

(From OE-Core rev: 5733811eeba9fd88f4a621c705a2de61b197c3d7)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
jan
33763b8878 scripts/tiny/ksize: Fix for more recent kernels
In the past, kernel built object files were named 'built-in.o'.
Nowadays it is 'built-in.a'.

The script is modified to work with both. I do not expect that
built-in.a and built-in.o files will ever appear in the same
kernel build.
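
A minimal sketch of matching both object-archive names (illustrative;
not the script's actual code):

    import os

    def find_builtin_objects(kernel_build_dir):
        # Older kernels produce built-in.o, newer ones built-in.a;
        # accept both so the size accounting keeps working.
        matches = []
        for root, _, files in os.walk(kernel_build_dir):
            for name in files:
                if name in ("built-in.o", "built-in.a"):
                    matches.append(os.path.join(root, name))
        return matches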

(From OE-Core rev: 8a883c3b0773960908491c03c46e7ed320e41dc5)

Signed-off-by: Jan Vermaete <jan.vermaete@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Joe Slater
50f3180f92 wget: improve reproducible build
Modify DEBUG_PREFIX_MAP as used by sed to handle
whitespace correctly.

This modifies an existing patch.

(From OE-Core rev: fcd6c105bee1c689f06b46659779bddfad07d9c9)

Signed-off-by: Joe Slater <joe.slater@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Peter Kjellerstedt
ad17c36065 sstate.bbclass: Do not fail if files cannot be touched
It may be that a file is not allowed to be touched, e.g., if it is a
symbolic link into a global sstate cache served over NFS.

(From OE-Core rev: f528d6ffc9649536d21d625f65b415bfec7db258)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:24 +01:00
Yi Zhao
5f904d42b3 opkg-keyrings: check if opkg-key exists before run postinst
By default, the opkg-key command is not included in the opkg package because
it is only installed when gpg support is enabled. We'd better check that
it exists before running 'opkg-key populate' in pkg_postinst.

(From OE-Core rev: 174c27e4edea0af92f60779cf3f63d21f6bce6fe)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:23 +01:00
Alejandro Hernandez
4bf427dc0d connman: Include vpn-script in FILES
When vpnc support is included through PACKAGECONFIG, there
is now an extra vpn-script coming with the latest upgrade;
include that script in FILES so it gets packaged.

(From OE-Core rev: 8587149c49dd8d1e1a0a0b5cf81e458bfa88547e)

Signed-off-by: Alejandro Hernandez Samaniego <alejandro@enedino.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:23 +01:00
Robert P. J. Day
bddbb63878 documentation.conf: Add variables supported by features_check.bbclass
Add to documentation.conf all the new variables supported by
features_check.bbclass.

(From OE-Core rev: d82fc1d482c5be81cf88f9b37943bf9f4159ea56)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:23 +01:00
Joshua Watt
2c9338ad8f libnewt: Backport patch to fix reproducibility
Backports a patch from upstream to fix a reproducibility problem where
paths would be encoded in the binary.

Drops an obsolete patch that conflicted with the backport

(From OE-Core rev: b8f5114aabf6bbbc4adf5802a6707efaf18ba2ee)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:23 +01:00
Alexander Kanavin
c0d765f8bf re2c: correct upstream location
(From OE-Core rev: 89afb271b32ed3dbe9c899fbfd30f9a80af161da)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:23 +01:00
Alexander Kanavin
394b5fadf9 cdrtools-native: fix upstream version check
(From OE-Core rev: aecd1239123d6fd94fbae10b162cf8a57eeaf18d)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:23 +01:00
Alexander Kanavin
a94fe94007 meson: fix upstream version check
(From OE-Core rev: 6cfa86069dd2189782a3505ffdacc489ea5cc3f1)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-22 16:23:23 +01:00
Bruce Ashfield
33fdf03169 linux-yocto/5.4: update to v5.4.34
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    6ccc74c083c0 Linux 5.4.34
    b538aacc9400 x86/microcode/AMD: Increase microcode PATCH_MAX_SIZE
    856a74fd7e61 scsi: target: fix hang when multiple threads try to destroy the same iscsi session
    992e469b4c44 scsi: target: remove boilerplate code
    4b3380e007b2 x86/resctrl: Fix invalid attempt at removing the default resource group
    3652782e3a87 x86/resctrl: Preserve CDP enable over CPU hotplug
    6b5e8e7cbe24 irqchip/ti-sci-inta: Fix processing of masked irqs
    9d2759006e29 ext4: do not zeroout extents beyond i_disksize
    653b42530952 i2c: designware: platdrv: Remove DPM_FLAG_SMART_SUSPEND flag on BYT and CHT
    e2b80bf26956 drm/amdgpu: fix the hw hang during perform system reboot and reset
    251f13126e23 drm/amd/powerplay: force the trim of the mclk dpm_levels if OD is enabled
    eecd70c77ff3 net/mlx5e: Use preactivate hook to set the indirection table
    57f578bce415 net/mlx5e: Rename hw_modify to preactivate
    67284c11399f net/mlx5e: Encapsulate updating netdev queues into a function
    cae385538886 mac80211_hwsim: Use kstrndup() in place of kasprintf()
    a8ce3412e8a2 mac80211: fix race in ieee80211_register_hw()
    21350f28b226 nl80211: fix NL80211_ATTR_FTM_RESPONDER policy
    aa5b11bb333c btrfs: check commit root generation in should_ignore_root
    0026e356e51a tracing: Fix the race between registering 'snapshot' event trigger and triggering 'snapshot' operation
    a0aaafe7ce4b keys: Fix proc_keys_next to increase position index
    f32a339e0321 arm64: vdso: don't free unallocated pages
    5209e5f05bf2 ALSA: usb-audio: Check mapping at creating connector controls, too
    250db0305392 ALSA: usb-audio: Don't create jack controls for PCM terminals
    aae6e154680f ALSA: usb-audio: Don't override ignore_ctl_error value from the map
    9acfd1ac016a ALSA: usb-audio: Filter error from connector kctl ops, too
    0eb7bcf3ad32 ALSA: hda/realtek - Enable the headset mic on Asus FX505DT
    549a48900d8e ASoC: Intel: mrfld: return error codes when an error occurs
    86ec55651fd2 ASoC: Intel: mrfld: fix incorrect check on p->sink
    538b623fed6d usb: dwc3: gadget: Don't clear flags before transfer ended
    3bffb20603cd arm64: dts: librem5-devkit: add a vbus supply to usb0
    3a8dc1e91561 ARM: dts: imx7-colibri: fix muxing of usbc_det pin
    c2c5d07090d4 clk: at91: usb: use proper usbs_mask
    90c1f740ddf8 clk: at91: sam9x60: fix usb clock parents
    c874d9d116d8 ext4: fix incorrect inodes per group in error message
    dd7b410c9b01 ext4: fix incorrect group count in ext4_fill_super error message
    44c70ed66c93 net/bpfilter: remove superfluous testing message
    229563dc6b04 pwm: pca9685: Fix PWM/GPIO inter-operation
    0a4c06f0db06 perf report: Fix no branch type statistics report issue
    4542e583e2b8 acpi/nfit: improve bounds checking for 'func'
    5815a5d70def jbd2: improve comments about freeing data buffers whose page mapping is NULL
    8078d3af4af7 platform/chrome: cros_ec_rpmsg: Fix race with host event
    777c8c9f11a8 scsi: ufs: Fix ufshcd_hold() caused scheduling while atomic
    66458aa271b0 ovl: fix value of i_ino for lower hardlink corner case
    c85a7109f905 Revert "ACPI: EC: Do not clear boot_ec_is_ecdt in acpi_ec_add()"
    4f2fb2a1990a net: stmmac: dwmac-sunxi: Provide TX and RX fifo sizes
    a06a51d59292 net/mlx5e: Fix pfnum in devlink port attribute
    e25122586403 net/mlx5e: Fix nest_level for vlan pop action
    cb8892f52ec9 net/mlx5e: Add missing release firmware call
    34310505d404 net/mlx5: Fix frequent ioread PCI access during recovery
    1ff0732cf805 net: ethernet: mediatek: move mt7623 settings out off the mt7530
    f749a8bfdd38 net: dsa: mt7530: move mt7623 settings out off the mt7530
    bb54dcca3fb5 net: tun: record RX queue in skb before do_xdp_generic()
    f6b264f2a04c net: revert default NAPI poll timeout to 2 jiffies
    61260486790e net: qrtr: send msgs from local of same id as broadcast
    81dc4e9bff98 net: phy: micrel: use genphy_read_status for KSZ9131
    a9a851f0ec67 net: ipv6: do not consider routes via gateways for anycast address check
    22e56cb2f951 net: ipv4: devinet: Fix crash when add/del multicast IP with autojoin
    3ca854743110 net: dsa: mt7530: fix tagged frames pass-through in VLAN-unaware mode
    016e3531d5c1 l2tp: Allow management of tunnels and session in user namespace
    22ea267a9cd6 hsr: check protocol version in hsr_newlink()
    ced57064a085 amd-xgbe: Use __napi_schedule() in BH context
    dc4059d21d87 Linux 5.4.33
    484cc15ad00f scsi: lpfc: fix inlining of lpfc_sli4_cleanup_poll_list()
    8dead2c275e4 ASoC: stm32: sai: Add missing cleanup
    aed5ee6befcc efi/x86: Fix the deletion of variables in mixed mode
    0c839eee166a mfd: dln2: Fix sanity checking for endpoints
    b70eb420e96d bpf: Fix tnum constraints for 32-bit comparisons
    26711cc7e064 mmc: sdhci: Refactor sdhci_set_timeout()
    56a296657e4b mmc: sdhci: Convert sdhci_set_timeout_irq() to non-static
    c1f3e1d8d7e6 powerpc/kasan: Fix kasan_remap_early_shadow_ro()
    36b0b1f63994 drm/i915/icl+: Don't enable DDI IO power on a TypeC port in TBT mode
    bdac1d76a310 drm/amdgpu: fix gfx hang during suspend with video playback (v2)
    d1bbdf003c05 drm/dp_mst: Fix clearing payload state on topology disable
    7676e69c67e3 Revert "drm/dp_mst: Remove VCPI while disabling topology mgr"
    ba74ab0c29fc scsi: lpfc: Fix broken Credit Recovery after driver load
    33ebae4f3ba6 scsi: lpfc: Fix configuration of BB credit recovery in service parameters
    037b0b5521a4 scsi: lpfc: Fix Fabric hostname registration if system hostname changes
    f48e7593523e scsi: lpfc: Add registration for CPU Offline/Online events
    33344a7661a1 dm clone: Add missing casts to prevent overflows and data corruption
    2d7eb7ee36a3 dm clone: Fix handling of partial region discards
    dcf2f00b0869 dm clone: replace spin_lock_irqsave with spin_lock_irq
    fddfa591da8e dm zoned: remove duplicate nr_rnd_zones increase in dmz_init_zone()
    1ba26c2aedb4 arm64: Always force a branch protection mode when the compiler has one
    ba7581be850b powerpc: Make setjmp/longjmp signature standard
    3457b2232eaf scsi: mpt3sas: Fix kernel panic observed on soft HBA unplug
    e294f8a5ad31 powerpc/64: Prevent stack protection in early boot
    fc8755dc01d1 powerpc/kprobes: Ignore traps that happened in real mode
    ed6f6b2b39af powerpc/xive: Fix xmon support on the PowerNV platform
    1ab730b65946 powerpc/64: Setup a paca before parsing device tree etc.
    9240f83aa9c7 powerpc/xive: Use XIVE_BAD_IRQ instead of zero to catch non configured IPIs
    bd0fa144737c powerpc/hash64/devmap: Use H_PAGE_THP_HUGE when setting up huge devmap PTE entries
    81b9336ab20e powerpc/fsl_booke: Avoid creating duplicate tlb1 entry
    38aa7f32dfd8 powerpc/64/tm: Don't let userspace set regs->trap via sigreturn
    0abc07d23c51 xen/blkfront: fix memory allocation flags in blkfront_setup_indirect()
    5fdf01181cb8 ipmi: fix hung processes in __get_guid()
    d0b9bd4804a7 libata: Return correct status in sata_pmp_eh_recover_pm() when ATA_DFLAG_DETACH is set
    ec2c054e87a5 hfsplus: fix crash and filesystem corruption when deleting files
    af80e6f70f72 cpufreq: powernv: Fix use-after-free
    9cc4f52d34a2 kmod: make request_module() return an error when autoloading is disabled
    bf4fcd52742e clk: ingenic/TCU: Fix round_rate returning error
    9e8388fdf4de clk: ingenic/jz4770: Exit with error if CGU init failed
    7bcca67bdee8 ftrace/kprobe: Show the maxactive number on kprobe_events
    7dad5beb8dfd Input: i8042 - add Acer Aspire 5738z to nomux list
    efb9e9f723f5 s390/diag: fix display of diagnose call statistics
    453fb8b20db4 perf tools: Support Python 3.8+ in Makefile
    beb3ef51cfd8 ocfs2: no need try to truncate file beyond i_size
    47199f4b87eb fs/filesystems.c: downgrade user-reachable WARN_ONCE() to pr_warn_once()
    6772387e8201 ext4: fix a data race at inode->i_blocks
    699d2c4d667e NFS: Fix a page leak in nfs_destroy_unlinked_subrequests()
    6b64cbd05807 NFS: Fix use-after-free issues in nfs_pageio_add_request()
    98a817eda5bc nfsd: fsnotify on rmdir under nfsd/clients/
    27993365c009 powerpc/pseries: Avoid NULL pointer dereference when drmem is unavailable
    4e4c6760fe03 drm/amdgpu: unify fw_write_wait for new gfx9 asics
    45bc323b8102 drm/amdgpu/powerplay: using the FCLK DPM table to set the MCLK
    fe0ec6f90e4d drm: Remove PageReserved manipulation from drm_pci_alloc
    b716a5f5ec65 drm/etnaviv: rework perfmon query infrastructure
    463a2dddb4f9 drm/i915/gem: Flush all the reloc_gpu batch
    cda1eda28f1d vfio: platform: Switch to platform_get_irq_optional()
    b5eec37a3b85 selftests/powerpc: Add tlbie_test in .gitignore
    e1ec78f93042 selftests/vm: fix map_hugetlb length used for testing read and write
    336b96a68170 selftests: vm: drop dependencies on page flags from mlock2 tests
    20a62e9073f3 arm64: armv8_deprecated: Fix undef_hook mask for thumb setend
    3d66a67f7310 arm64: dts: ti: k3-am65: Add clocks to dwc3 nodes
    9d971b0059a2 ARM: dts: exynos: Fix polarity of the LCD SPI bus on UniversalC210 board
    e5b9c1027ee8 scsi: lpfc: Fix lpfc_io_buf resource leak in lpfc_get_scsi_buf_s4 error path
    73a122c2636d scsi: ufs: fix Auto-Hibern8 error detection
    0ad68e6212ad scsi: zfcp: fix missing erp_lock in port recovery trigger for point-to-point
    8179a260313e crypto: ccree - dec auth tag size from cryptlen map
    9135cd1b0f64 crypto: ccree - only try to map auth tag if needed
    a86744642789 crypto: ccree - protect against empty or NULL scatterlists
    f3f13f979448 crypto: caam - update xts sector size for large input length
    bc8413b626dd crypto: caam/qi2 - fix chacha20 data size error
    07378b099139 xarray: Fix early termination of xas_for_each_marked
    8f4c8e92bdac XArray: Fix xas_pause for large multi-index entries
    a1ffc47f22a8 dm clone metadata: Fix return type of dm_clone_nr_of_hydrated_regions()
    996f8f1ba72a dm clone: Add overflow check for number of regions
    2e703059348d dm verity fec: fix memory leak in verity_fec_dtr
    833309f3fb51 dm integrity: fix a crash with unusually large tag size
    bef0d2f5fdcb dm writecache: add cond_resched to avoid CPU hangs
    5c84ab9c96d7 mm, memcg: do not high throttle allocators based on wraparound
    935e87b20c56 arm64: dts: allwinner: h5: Fix PMU compatible
    1dbfae009525 sched/core: Remove duplicate assignment in sched_tick_remote()
    8b068046321f arm64: dts: allwinner: h6: Fix PMU compatible
    27dbb3633809 net: qualcomm: rmnet: Allow configuration updates to existing devices
    add09c86cd3e tools: gpio: Fix out-of-tree build regression
    a0f079ac13be powerpc/pseries: Drop pointless static qualifier in vpa_debugfs_init()
    e0ae9da3fb2f mmc: sdhci-of-esdhc: fix esdhc_reset() for different controller versions
    7661469ef56e io_uring: honor original task RLIMIT_FSIZE
    a181a74610e6 erofs: correct the remaining shrink objects
    433868b19ce0 crypto: mxs-dcp - fix scatterlist linearization for hash
    248414f50596 crypto: rng - Fix a refcounting bug in crypto_rng_reset()
    6b936b1872ba remoteproc: Fix NULL pointer dereference in rproc_virtio_notify
    5b677eddc547 remoteproc: qcom_q6v5_mss: Reload the mba region on coredump
    241f681d19e1 remoteproc: qcom_q6v5_mss: Don't reassign mpss region on shutdown
    87a9058d5552 btrfs: use nofs allocations for running delayed items
    0425813c2279 btrfs: fix missing semaphore unlock in btrfs_sync_file
    08e69ab983da btrfs: unset reloc control if we fail to recover
    098d3da1ad30 btrfs: fix missing file extent item for hole after ranged fsync
    b436fbff6fca btrfs: drop block from cache on error in relocation
    dd68ba0d7355 btrfs: set update the uuid generation as soon as possible
    441b83a84208 btrfs: reloc: clean dirty subvols if we fail to start a transaction
    1bd44cada415 Btrfs: fix crash during unmount due to race with delayed inode workers
    941dabde6c1a btrfs: Don't submit any btree write bio if the fs has errors
    0297b7f9842e mtd: spinand: Do not erase the block before writing a bad block marker
    4da7c98c3081 mtd: spinand: Stop using spinand->oobbuf for buffering bad block markers
    c138ad0741fc CIFS: Fix bug which the return value by asynchronous read is error
    9b35348318d1 smb3: fix performance regression with setting mtime
    40888c31aca3 KVM: VMX: fix crash cleanup when KVM wasn't used
    93a2b7368862 KVM: VMX: Add a trampoline to fix VMREAD error handling
    771b9374a529 KVM: x86: Gracefully handle __vmalloc() failure during VM allocation
    455f37affe13 KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec support
    bcd1d7462aba KVM: x86: Allocate new rmap and large page tracking when moving memslot
    0c7fb8c91c0f KVM: s390: vsie: Fix delivery of addressing exceptions
    654b70e84710 KVM: s390: vsie: Fix region 1 ASCE sanity shadow address checks
    2c5bfcda8791 KVM: nVMX: Properly handle userspace interrupt window request
    99a890ed7009 platform/x86: asus-wmi: Support laptops where the first battery is named BATT
    bd90b96e3486 x86/entry/32: Add missing ASM_CLAC to general_protection entry
    3dc06261a41f x86/tsc_msr: Make MSR derived TSC frequency more accurate
    41a7f842e312 x86/tsc_msr: Fix MSR_FSB_FREQ mask for Cherry Trail devices
    6c63cf15d066 x86/tsc_msr: Use named struct initializers
    5f2d04139aa5 signal: Extend exec_id to 64bits
    0a993df8d609 ath9k: Handle txpower changes even when TPC is disabled
    d941b33bdc68 PM: sleep: wakeup: Skip wakeup_source_sysfs_remove() if device is not there
    4fcbc35fab57 PM / Domains: Allow no domain-idle-states DT property in genpd when parsing
    5bd5307cd264 MIPS: OCTEON: irq: Fix potential NULL pointer dereference
    ed374eee8ce6 MIPS/tlbex: Fix LDDIR usage in setup_pw() for Loongson-3
    4acbbe98e06a pstore: pstore_ftrace_seq_next should increase position index
    38119a689766 io_uring: remove bogus RLIMIT_NOFILE check in file registration
    6124e10dbc4f irqchip/versatile-fpga: Apply clear-mask earlier
    3f3700c4697b genirq/debugfs: Add missing sanity checks to interrupt injection
    6ecc37daf64e cpu/hotplug: Ignore pm_wakeup_pending() for disable_nonboot_cpus()
    4b67e5afc2a0 KEYS: reaching the keys quotas correctly
    f7384f90ecc7 tpm: tpm2_bios_measurements_next should increase position index
    27544e1bdcc6 tpm: tpm1_bios_measurements_next should increase position index
    96e05bb57b40 tpm: Don't make log failures fatal
    524089fa70ef sched/fair: Fix enqueue_task_fair warning
    8b6f8619fc96 PCI: endpoint: Fix for concurrent memory allocation in OB address region
    96843346b201 PCI: qcom: Fix the fixup of PCI_VENDOR_ID_QCOM
    55b61a08bf86 PCI: Add boot interrupt quirk mechanism for Xeon chipsets
    72d52a779e99 PCI/ASPM: Clear the correct bits when enabling L1 substates
    463181e64f5f PCI: pciehp: Fix indefinite wait on sysfs requests
    c755ca32c8cd efi/x86: Add TPM related EFI tables to unencrypted mapping checks
    91bed1f1fb97 nvme-fc: Revert "add module to ops template to allow module references"
    0eb4d8b985be nvmet-tcp: fix maxh2cdata icresp parameter
    b3c7227ad4c6 thermal: devfreq_cooling: inline all stubs for CONFIG_DEVFREQ_THERMAL=n
    e7251a88d387 ACPI: PM: s2idle: Refine active GPEs check
    dd993e283bc3 ACPICA: Allow acpi_any_gpe_status_set() to skip one GPE
    1efd20ea57d4 acpi/x86: ignore unspecified bit positions in the ACPI global lock field
    52e6985f2c91 seccomp: Add missing compat_ioctl for notify
    15ae94fe2211 media: ti-vpe: cal: fix a kernel oops when unloading module
    3a59d985ceb1 media: ti-vpe: cal: fix disable_irqs to only the intended target
    46b0e2900ee2 media: hantro: Read be32 words starting at every fourth byte
    7ac962c5b730 media: venus: firmware: Ignore secure call error on first resume
    be9956bac91a ALSA: hda/realtek - Add quirk for MSI GL63
    09e7b678f3e0 ALSA: hda/realtek - Add quirk for Lenovo Carbon X1 8th gen
    f5462668ad94 ALSA: hda/realtek - Remove now-unnecessary XPS 13 headphone noise fixups
    a92931dea6b1 ALSA: hda/realtek - Set principled PC Beep configuration for ALC256
    0f18192b6924 ALSA: doc: Document PC Beep Hidden Register on Realtek ALC256
    3e7167475236 ALSA: hda/realtek - a fake key event is triggered by running shutup
    faea94956333 ALSA: hda/realtek: Enable mute LED on an HP system
    1dfcd70d1fcc ALSA: pcm: oss: Fix regression by buffer overflow fix
    e3ab9c5540e3 ALSA: ice1724: Fix invalid access for enumerated ctl items
    6a9ba565b41f ALSA: hda: Fix potential access overflow in beep helper
    f4f0a1f017e0 ALSA: hda: Add driver blacklist
    1ee0023c340e ALSA: usb-audio: Add mixer workaround for TRX40 and co
    78a92756fc2c usb: gadget: composite: Inform controller driver of self-powered
    a385ebdaa4dc usb: gadget: f_fs: Fix use after free issue as part of queue failure
    9a8b1ba9d41f ASoC: topology: use name_prefix for new kcontrol
    f467e054c03f ASoC: dpcm: allow start or stop during pause for backend
    af0b76f9f632 ASoC: dapm: connect virtual mux with default value
    803db8a07868 ASoC: fix regwmask
    acec0e9a916a btrfs: track reloc roots based on their commit root bytenr
    9632851a5326 btrfs: restart relocate_tree_blocks properly
    ddc25a38ab36 btrfs: remove a BUG_ON() from merge_reloc_roots()
    679885143c04 btrfs: qgroup: ensure qgroup_rescan_running is only set when the worker is at least queued
    b37de1b1e882 block, bfq: fix use-after-free in bfq_idle_slice_timer_body
    bd9afea9bde7 locking/lockdep: Avoid recursion in lockdep_count_{for,back}ward_deps()
    b9da72cb7019 spi: spi-fsl-dspi: Replace interruptible wait queue with a simple completion
    64a97384d4f4 firmware: fix a double abort case with fw_load_sysfs_fallback
    2d29a61a14fa md: check arrays is suspended in mddev_detach before call quiesce operations
    6420b2e5fa66 irqchip/gic-v4: Provide irq_retrigger to avoid circular locking dependency
    80e85ab88b3f usb: dwc3: core: add support for disabling SS instances in park mode
    b6257832dd45 media: i2c: ov5695: Fix power on and off sequences
    510b4e069508 block: Fix use-after-free issue accessing struct io_cq
    b9d5ced37ac7 genirq/irqdomain: Check pointer in irq_domain_alloc_irqs_hierarchy()
    bceda1dd4716 efi/x86: Ignore the memory attributes table on i386
    fc427b7a0266 x86/boot: Use unsigned comparison for addresses
    f6bb3ea812f0 cpufreq: imx6q: fix error handling
    c5bcaacd0640 gfs2: Don't demote a glock until its revokes are written
    46bbc5526dd7 gfs2: Do log_flush in gfs2_ail_empty_gl even if ail list is empty
    aa547b9dc20f pstore/platform: fix potential mem leak if pstore_init_fs failed
    347f091094ab libata: Remove extra scsi_host_put() in ata_scsi_add_hosts()
    288761c9f0a2 media: i2c: video-i2c: fix build errors due to 'imply hwmon'
    fb80a18584a4 block, bfq: move forward the getting of an extra ref in bfq_bfqq_move
    d1d846fb02a8 PCI/switchtec: Fix init_completion race condition with poll_wait()
    75434bcc6593 selftests/x86/ptrace_syscall_32: Fix no-vDSO segfault
    dd39eadc71d4 sched: Avoid scale real weight down to zero
    f7557078e16e media: allegro: fix type of gop_length in channel_create message
    2902207377f8 time/sched_clock: Expire timer in hardirq context
    3f755f5233a2 irqchip/versatile-fpga: Handle chained IRQs properly
    c8b81c33c5cb debugfs: Check module state before warning in {full/open}_proxy_open()
    fd66df97dce9 block: keep bdi->io_pages in sync with max_sectors_kb for stacked devices
    e88ee287fd82 dma-mapping: Fix dma_pgprot() for unencrypted coherent pages
    aa04e8d359d7 x86: Don't let pgprot_modify() change the page encryption bit
    ce7a61a0d57d ACPI: EC: Do not clear boot_ec_is_ecdt in acpi_ec_add()
    99e20a79d215 xhci: bail out early if driver can't accress host in resume
    61ed3dcad80c media: imx: imx7-media-csi: Fix video field handling
    dd051f1af594 media: imx: imx7_mipi_csis: Power off the source when stopping streaming
    502b83e73e35 null_blk: fix spurious IO errors after failed past-wp access
    38c1299f8c5c null_blk: Handle null_add_dev() failures properly
    becd9a906657 null_blk: Fix the null_add_dev() error path
    f9ee512dd913 firmware: arm_sdei: fix double-lock on hibernate with shared events
    7bf2c31ba0bb media: venus: hfi_parser: Ignore HEVC encoding for V1
    0d3d868b34af staging: wilc1000: avoid double unlocking of 'wilc->hif_cs' mutex
    d5bc44e6b0d4 cpufreq: imx6q: Fixes unwanted cpu overclocking on i.MX6ULL
    33dbe5867c39 media: rc: add keymap for Videostrong KII Pro
    a5ef462303e0 i2c: pca-platform: Use platform_irq_get_optional
    54d09aab81aa i2c: st: fix missing struct parameter description
    28f5b6ee1c2f qlcnic: Fix bad kzalloc null test
    d7f6f2b0be09 cfg80211: Do not warn on same channel at the end of CSA
    068168461e68 drm/scheduler: fix rare NULL ptr race
    f5429ec64f4f cxgb4/ptp: pass the sign of offset delta in FW CMD
    d2037f68ae03 selftests/net: add definition for SOL_DCCP to fix compilation errors for old libc
    9a3f55fc0f46 hinic: fix wrong value of MIN_SKB_LEN
    a8f9fe793001 hinic: fix wrong para of wait_for_completion_timeout
    243ebc24e01c hinic: fix out-of-order excution in arm cpu
    5edd115ba09e hinic: fix the bug of clearing event queue
    d63fac896335 hinic: fix a bug of waitting for IO stopped
    ad4ad8253f89 net: vxge: fix wrong __VA_ARGS__ usage
    b9c961998565 net: stmmac: platform: Fix misleading interrupt error msg
    f96f2c885eda rxrpc: Fix call interruptibility handling
    f8da7f442861 rxrpc: Abstract out the calculation of whether there's Tx space
    96860db5c09f soc: fsl: dpio: register dpio irq handlers after dpio create
    10e15e1b9297 Input: tm2-touchkey - add support for Coreriver TC360 variant
    ed1c4d2ca9da iwlwifi: mvm: Fix rate scale NSS configuration
    fd29a0242f86 bpf: Fix deadlock with rq_lock in bpf_send_signal()
    5c234312e805 ARM: dts: Fix dm814x Ethernet by changing to use rgmii-id mode
    d04ffa50f901 bus: sunxi-rsb: Return correct data when mixing 16-bit and 8-bit reads
    7092cc4590c0 ARM: dts: sun8i-a83t-tbs-a711: HM5065 doesn't like such a high voltage

(From OE-Core rev: 1a50634e56dfcb63eac0df1aa9cd7e6fb7bd470a)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-14 16:45:42 +01:00
Bruce Ashfield
28819f7bf0 linux-yocto/5.4: update to v5.4.32
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    bc844d58f697 Linux 5.4.32
    ad5676629a12 iommu/vt-d: Allow devices with RMRRs to use identity domain
    04ad505eed58 drm/i915: Fix ref->mutex deadlock in i915_active_wait()
    047affa0ef00 fbcon: fix null-ptr-deref in fbcon_switch
    d4083258db04 blk-mq: Keep set->nr_hw_queues and set->map[].nr_queues in sync
    d020ff5060a4 RDMA/cm: Update num_paths in cma_resolve_iboe_route error flow
    b40f1ae359f2 Bluetooth: RFCOMM: fix ODEBUG bug in rfcomm_dev_ioctl
    7f5432c2f446 RDMA/siw: Fix passive connection establishment
    09583e3f0402 RDMA/cma: Teach lockdep about the order of rtnl and lock
    51795bcf595d RDMA/ucma: Put a lock around every call to the rdma_cm layer
    ab6ee4330288 include/uapi/linux/swab.h: fix userspace breakage, use __BITS_PER_LONG for swap
    193490dbe5ba ceph: canonicalize server path in place
    56385788f7f6 ceph: remove the extra slashes in the server path
    7dafb2c6fb46 ARM: imx: only select ARM_ERRATA_814220 for ARMv7-A
    cf7005662673 ARM: imx: Enable ARM_ERRATA_814220 for i.MX6UL and i.MX7D
    4ac80b02f10d IB/mlx5: Replace tunnel mpls capability bits for tunnel_offloads
    ccc2b645de20 IB/hfi1: Fix memory leaks in sysfs registration and unregistration
    cd38d8b231f1 IB/hfi1: Call kobject_put() when kobject_init_and_add() fails
    9351dee1cc24 ASoC: jz4740-i2s: Fix divider written at incorrect offset in register
    e30a21c6fea5 platform/x86: intel_int0002_vgpio: Use acpi_register_wakeup_handler()
    9da847d65f37 ACPI: PM: Add acpi_[un]register_wakeup_handler()
    41a0cfa05c05 hwrng: imx-rngc - fix an error path
    dfa210cf9f94 tools/accounting/getdelays.c: fix netlink attribute length
    ea84a26ab633 slub: improve bit diffusion for freelist ptr obfuscation
    8b0f08036659 uapi: rename ext2_swab() to swab() and share globally in swab.h
    94d2d84bcafa usb: dwc3: gadget: Wrap around when skip TRBs
    170f88a47b9f random: always use batched entropy for get_random_u{32,64}
    5e331978200e s390: prevent leaking kernel address in BEAR
    74107d56d1e8 r8169: change back SG and TSO to be disabled by default
    5249653d971d mlxsw: spectrum_flower: Do not stop at FLOW_ACTION_VLAN_MANGLE
    671331c11c39 tun: Don't put_page() for all negative return values from XDP program
    fdb6a094ba41 slcan: Don't transmit uninitialized stack data in padding
    feed32e3d6fe net: stmmac: dwmac1000: fix out-of-bounds mac address reg setting
    049b9fa3ef65 net_sched: fix a missing refcnt in tcindex_init()
    1891d57f89aa net_sched: add a temporary refcnt for struct tcindex_data
    1189ba9eedac net: phy: micrel: kszphy_resume(): add delay after genphy_resume() before accessing PHY registers
    7d3d99f579e8 net: dsa: mt7530: fix null pointer dereferencing in port5 setup
    bce7ce18bd18 net: dsa: bcm_sf2: Ensure correct sub-node is parsed
    040f7a27583f net: dsa: bcm_sf2: Do not register slave MDIO bus with OF
    bbbdd7956bab ipv6: don't auto-add link-local address to lag ports
    77cf80793692 cxgb4: fix MPS index overwrite when setting MAC address
    3fcd53b1d859 net: phy: realtek: fix handling of RTL8105e-integrated PHY
    de850633a01f Linux 5.4.31
    c3f87e03f90f mm: mempolicy: require at least one nodeid for MPOL_PREFERRED
    c3d4e6fc4b37 padata: always acquire cpu_hotplug_lock before pinst->lock
    238112fcf391 net: Fix Tx hash bound checking
    15ee8da79ee3 i2c: i801: Do not add ICH_RES_IO_SMI for the iTCO_wdt device
    079c8da9e5ac watchdog: iTCO_wdt: Make ICH_RES_IO_SMI optional
    b42afa3475bf watchdog: iTCO_wdt: Export vendorsupport
    4ebd16641797 tcp: fix TFO SYNACK undo to avoid double-timestamp-undo
    a6b1820d3330 IB/hfi1: Ensure pq is not left on waitlist
    c4168080f1d4 rxrpc: Fix sendmsg(MSG_WAITALL) handling
    be8a3aecd21a iwlwifi: dbg: don't abort if sending DBGC_SUSPEND_RESUME fails
    b4190809a17b iwlwifi: yoyo: don't add TLV offset when reading FIFOs
    00e332e42bbe iwlwifi: consider HE capability when setting LDPC
    5f843cb77142 net/mlx5e: kTLS, Fix wrong value in record tracker enum
    ea26f82a0422 soc: mediatek: knows_txdone needs to be set in Mediatek CMDQ helper
    f6c8f128856b ALSA: hda/ca0132 - Add Recon3Di quirk to handle integrated sound on EVGA X99 Classified motherboard
    2892100bc85a Revert "dm: always call blk_queue_split() in dm_process_bio()"
    7c6ae8ae0ac5 power: supply: axp288_charger: Add special handling for HP Pavilion x2 10
    899c38d93000 extcon: axp288: Add wakeup support
    4d60b72514c2 nvmem: check for NULL reg_read and reg_write before dereferencing
    98b32db072e9 mei: me: add cedar fork device ids
    1843cba24aef coresight: do not use the BIT() macro in the UAPI header
    b5212116392e PCI: sysfs: Revert "rescan" file renames
    aa98c16a5b7c misc: pci_endpoint_test: Avoid using module parameter to determine irqtype
    a5d697c1e92d misc: pci_endpoint_test: Fix to support > 10 pci-endpoint-test devices
    82f6c72e5d4d misc: rtsx: set correct pcr_ops for rts522A
    cec4be18d136 brcmfmac: abort and release host after error
    625b940a28e0 padata: fix uninitialized return value in padata_replace()
    16696ee7b581 XArray: Fix xa_find_next for large multi-index entries
    4eb33cb9b566 net/mlx5e: kTLS, Fix TCP seq off-by-1 issue in TX resync flow
    8792e1ac5f48 tools/power turbostat: Fix 32-bit capabilities warning
    09116eeea6a5 tools/power turbostat: Fix missing SYS_LPI counter on some Chromebooks
    0ba0ce3cbb86 tools/power turbostat: Fix gcc build warnings
    7ebc1e53a46b drm/amdgpu: fix typo for vcn1 idle check
    d2faee42f9e7 initramfs: restore default compression behavior
    4a8ba74c1c64 drm/bochs: downgrade pci_request_region failure from error to warning
    f8abcff4fd0d drm/amd/display: Add link_rate quirk for Apple 15" MBP 2017
    205b5f80c74f kconfig: introduce m32-flag and m64-flag
    91358d0f36fa nvme-rdma: Avoid double freeing of async event data
    ad13e142e024 Linux 5.4.30
    9e62b6673d14 arm64: dts: ls1046ardb: set RGMII interfaces to RGMII_ID mode
    c399a50ae878 arm64: dts: ls1043a-rdb: correct RGMII delay mode to rgmii-id
    5aa29219206a ARM: dts: sun8i: r40: Move AHCI device node based on address order
    8f1199341837 ARM: dts: N900: fix onenand timings
    89ecba47b391 ARM: dts: imx6: phycore-som: fix arm and soc minimum voltage
    bb4ec20d1687 ARM: bcm2835-rpi-zero-w: Add missing pinctrl name
    e58eb564e1fc ARM: dts: oxnas: Fix clear-mask property
    a1081413e834 perf map: Fix off by one in strncpy() size argument
    451bf4d9592a arm64: alternative: fix build with clang integrated assembler
    693860e79552 libceph: fix alloc_msg_with_page_vector() memory leaks
    61bbc823a17a clk: ti: am43xx: Fix clock parent for RTC clock
    b2efabe3f88c clk: imx: Align imx sc clock parent msg structs to 4
    4a3c7e1c807f clk: imx: Align imx sc clock msg structs to 4
    08479b1391cb net: ks8851-ml: Fix IO operations, again
    62465fd66323 gpiolib: acpi: Add quirk to ignore EC wakeups on HP x2 10 CHT + AXP288 model
    877f28596da2 bpf: Explicitly memset some bpf info structures declared on the stack
    e92528a8984e bpf: Explicitly memset the bpf_attr structure
    d3e215554a6c platform/x86: pmc_atom: Add Lex 2I385SW to critclk_systems DMI table
    3f4ba176c623 vt: vt_ioctl: fix use-after-free in vt_in_use()
    acf0e9401931 vt: vt_ioctl: fix VT_DISALLOCATE freeing in-use virtual console
    d1b6ab26c850 vt: vt_ioctl: remove unnecessary console allocation checks
    c897e625f94b vt: switch vt_dont_switch to bool
    e7244ce86ceb vt: ioctl, switch VT_IS_IN_USE and VT_BUSY to inlines
    383c71b7314f vt: selection, introduce vc_is_sel
    125dd8c48b19 serial: sprd: Fix a dereference warning
    5b1bd4900fed mac80211: fix authentication with iwlwifi/mvm
    5863d2b27fb2 mac80211: Check port authorization in the ieee80211_tx_dequeue() case
    73fea3292b49 Linux 5.4.29
    f8c60f7a0051 net: Fix CONFIG_NET_CLS_ACT=n and CONFIG_NFT_FWD_NETDEV={y, m} build
    5f80d17c517d media: v4l2-core: fix a use-after-free bug of sd->devnode
    e7cd85f398cd media: xirlink_cit: add missing descriptor sanity checks
    4490085a9e2d media: stv06xx: add missing descriptor sanity checks
    d111431a4420 media: dib0700: fix rc endpoint lookup
    e4af1cf37b90 media: ov519: add missing endpoint sanity checks
    b25af84517de libfs: fix infoleak in simple_attr_read()
    dcf2d659add5 ahci: Add Intel Comet Lake H RAID PCI ID
    89d4acabb2f6 staging: wlan-ng: fix use-after-free Read in hfa384x_usbin_callback
    c44ea4fe738b staging: wlan-ng: fix ODEBUG bug in prism2sta_disconnect_usb
    0ec1ab1b15d2 staging: rtl8188eu: Add ASUS USB-N10 Nano B1 to device table
    fea3939c6ccc staging: kpc2000: prevent underflow in cpld_reconfigure()
    b958dea86c26 media: usbtv: fix control-message timeouts
    275316b63165 media: flexcop-usb: fix endpoint sanity check
    5102000134f4 usb: musb: fix crash with highmen PIO and usbmon
    f32219427ca1 USB: serial: io_edgeport: fix slab-out-of-bounds read in edge_interrupt_callback
    004b43fdfcf4 USB: cdc-acm: restore capability check order
    4003d59a00e2 USB: serial: option: add Wistron Neweb D19Q1
    d5fec27c54e7 USB: serial: option: add BroadMobi BM806U
    6eff944ff084 USB: serial: option: add support for ASKEY WWHC050
    8d62a8c7489a bpf: Undo incorrect __reg_bound_offset32 handling
    f23f37fe702f clocksource/drivers/hyper-v: Untangle stimers and timesync from clocksources
    791c420f4228 r8169: fix PHY driver check on platforms w/o module softdeps
    d8166d4b4203 vti6: Fix memory leak of skb if input policy check fails
    9c4f1506b477 ARM: dts: sun8i-a83t-tbs-a711: Fix USB OTG mode detection
    7f884cb145dc bpf, sockmap: Remove bucket->lock from sock_{hash|map}_free
    657559d632c2 bpf/btf: Fix BTF verification of enum members in struct/union
    188aae1f3d5f bpf: Initialize storage pointers to NULL to prevent freeing garbage pointer
    c68e1117f4e4 bpf, x32: Fix bug with JMP32 JSET BPF_X checking upper bits
    74617178d694 i2c: nvidia-gpu: Handle timeout correctly in gpu_i2c_check_status()
    6734a326cb13 netfilter: nft_fwd_netdev: allow to redirect to ifb via ingress
    5be3b97a1f18 netfilter: nft_fwd_netdev: validate family and chain type
    4e8bba9420e2 netfilter: flowtable: reload ip{v6}h in nf_flow_tuple_ip{v6}
    0bc1c7f6358c mac80211: set IEEE80211_TX_CTRL_PORT_CTRL_PROTO for nl80211 TX
    74fdc220e2f1 ieee80211: fix HE SPR size calculation
    eaca61f5f850 afs: Fix unpinned address list during probing
    455f5192a10d afs: Fix some tracing details
    c743855a0ebe afs: Fix client call Rx-phase signal handling
    21af83e17ffa xfrm: policy: Fix doulbe free in xfrm_policy_timer
    160c2ffa7016 xfrm: add the missing verify_sec_ctx_len check in xfrm_add_acquire
    a5c5cf6f24bb xfrm: fix uctx len check in verify_sec_ctx_len
    1b92d81d4cc2 RDMA/mlx5: Block delay drop to unprivileged users
    1babd2c979aa RDMA/mlx5: Fix access to wrong pointer while performing flush due to error
    9961c56955a4 RDMA/mlx5: Fix the number of hwcounters of a dynamic counter
    f8f90690df59 vti[6]: fix packet tx through bpf_redirect() in XinY cases
    c467570443bb xfrm: handle NETDEV_UNREGISTER for xfrm device
    86c7d38c2baf genirq: Fix reference leaks on irq affinity notifiers
    fe6010e47ddc afs: Fix handling of an abort from a service handler
    d9e974eea8f1 RDMA/core: Ensure security pkey modify is not lost
    768e582a9970 bpf: Fix cgroup ref leak in cgroup_bpf_inherit on out-of-memory
    0dcf81d2c12f gpiolib: acpi: Add quirk to ignore EC wakeups on HP x2 10 BYT + AXP288 model
    43d2a61ceb09 gpiolib: acpi: Rework honor_wakeup option into an ignore_wake option
    323a89bff42b gpiolib: acpi: Correct comment for HP x2 10 honor_wakeup quirk
    159aef18f05c mm: fork: fix kernel_stack memcg stats for various stack implementations
    cc5da743a456 mm/sparse: fix kernel crash with pfn_section_valid check
    238dd5ab0080 drivers/base/memory.c: indicate all memory blocks as removable
    da458bbfb6cf mm/swapfile.c: move inode_lock out of claim_swapfile
    33c8bc8aa7b2 mac80211: mark station unauthorized before key removal
    d6b1f3fc76c4 mac80211: drop data frames without key on encrypted links
    4a89bb3fca20 nl80211: fix NL80211_ATTR_CHANNEL_WIDTH attribute type
    b34e20c78f1c scsi: sd: Fix optimal I/O size for devices that change reported values
    35b34d264cb3 scripts/dtc: Remove redundant YYLOC global declaration
    683cf6637730 tools: Let O= makes handle a relative path with -C option
    2fe72de89cf7 rtlwifi: rtl8188ee: Fix regression due to commit d1d1a96bdb44
    a2d866c50a35 perf probe: Do not depend on dwfl_module_addrsym()
    5f2b792d3125 perf probe: Fix to delete multiple probe event
    94a4104bf10e x86/ioremap: Fix CONFIG_EFI=n build
    174da11b6474 ARM: dts: omap5: Add bus_dma_limit for L3 bus
    e41cd3b598ae ARM: dts: dra7: Add bus_dma_limit for L3 bus
    7cdaa5cd79ab ceph: fix memory leak in ceph_cleanup_snapid_map()
    ed24820d1b0c ceph: check POOL_FLAG_FULL/NEARFULL in addition to OSDMAP_FULL/NEARFULL
    44960e1c39d8 RDMA/mad: Do not crash if the rdma device does not have a umad interface
    34aa3d5b84d5 RDMA/nl: Do not permit empty devices names during RDMA_NLDEV_CMD_NEWLINK/SET
    9924d9fac61b gpiolib: Fix irq_disable() semantics
    10d5de234df4 RDMA/core: Fix missing error check on dev_set_name()
    b0a2af91cd78 IB/rdmavt: Free kernel completion queue when done
    99058b8beef5 Input: avoid BIT() macro usage in the serio.h UAPI header
    597d6fb4815c Input: synaptics - enable RMI on HP Envy 13-ad105ng
    381c88a6b948 Input: fix stale timestamp on key autorepeat events
    cd18a7f6a789 Input: raydium_i2c_ts - fix error codes in raydium_i2c_boot_trigger()
    d8f58a0f533a i2c: hix5hd2: add missed clk_disable_unprepare in remove
    65047f7538ba iwlwifi: mvm: fix non-ACPI function
    72a0cfeb513c iommu/vt-d: Populate debugfs if IOMMUs are detected
    cb17ed60ec39 iommu/vt-d: Fix debugfs register reads
    e5ea0d970f33 net: hns3: fix "tc qdisc del" failed issue
    24e72d55bc0b sxgbe: Fix off by one in samsung driver strncpy size arg
    753ea21f2ac3 dpaa_eth: Remove unnecessary boolean expression in dpaa_get_headroom
    27030150699b mac80211: Do not send mesh HWMP PREQ if HWMP is disabled
    5ecb28b15678 scsi: ipr: Fix softlockup when rescanning devices in petitboot
    ee3bc486643d s390/qeth: handle error when backing RX buffer
    8b6cccd9bd84 s390/qeth: don't reset default_out_queue
    f8de95a236f6 iommu/vt-d: Silence RCU-list debugging warnings
    957e6f437d02 drm/exynos: Fix cleanup of IOMMU related objects
    70e0a720038e drm/amdgpu: correct ROM_INDEX/DATA offset for VEGA20
    2e89e4e7f7e1 drm/amd/display: update soc bb for nv14
    8dab286ab527 fsl/fman: detect FMan erratum A050385
    406f1ac075fe arm64: dts: ls1043a: FMan erratum A050385
    c211a30c1846 dt-bindings: net: FMan erratum A050385
    b82e91ae6384 cgroup1: don't call release_agent when it is ""
    0cd633314661 drivers/of/of_mdio.c:fix of_mdiobus_register()
    dda4fca30906 cpupower: avoid multiple definition with gcc -fno-common
    7f9c2d71cfd3 nfs: add minor version to nfs_server_key for fscache
    b51274fabedc cgroup-v1: cgroup_pidlist_next should update position index
    74f554af848d net/mlx5e: Do not recover from a non-fatal syndrome
    f94d69e5f682 net/mlx5e: Fix ICOSQ recovery flow with Striding RQ
    bd81b9ba546a net/mlx5e: Fix missing reset of SW metadata in Striding RQ reset
    d8338b5f373a net/mlx5e: Enhance ICOSQ WQE info fields
    63a0fc3b0047 net/mlx5: DR, Fix postsend actions write length
    c3c9927d0a8f hsr: set .netnsok flag
    1a0fdef2d52d hsr: add restart routine into hsr_get_node_list()
    80aa1e38e16b hsr: use rcu_read_lock() in hsr_get_node_{list/status}()
    e4723e0a858e net: ip_gre: Accept IFLA_INFO_DATA-less configuration
    85aa84d3c587 net: ip_gre: Separate ERSPAN newlink / changelink callbacks
    62e3ffa4ea4e bnxt_en: Reset rings if ring reservation fails during open()
    0234e8ebb7f4 bnxt_en: Free context memory after disabling PCI in probe error path.
    797d6f91c399 bnxt_en: Return error if bnxt_alloc_ctx_mem() fails.
    ae4565168af3 bnxt_en: fix memory leaks in bnxt_dcbnl_ieee_getets()
    2ac37a531115 bnxt_en: Fix Priority Bytes and Packets counters in ethtool -S.
    53d0bf064c9f vxlan: check return value of gro_cells_init()
    a6ce82deba5c tcp: repair: fix TCP_QUEUE_SEQ implementation
    27cf5410a9e1 tcp: ensure skb->dev is NULL before leaving TCP stack
    c94b94626876 tcp: also NULL skb->dev when copy was needed
    49d2333f97f0 slcan: not call free_netdev before rtnl_unlock in slcan_open
    4cc2498b7ebb r8169: re-enable MSI on RTL8168c
    3428faf70c59 NFC: fdp: Fix a signedness bug in fdp_nci_send_patch()
    3d9cc478af25 net: stmmac: dwmac-rk: fix error path in rk_gmac_probe
    d23faf32e577 net_sched: keep alloc_hash updated after hash allocation
    5317abb870fe net_sched: hold rtnl lock in tcindex_partial_destroy_work()
    ff28c6195814 net_sched: cls_route: remove the right filter from hashtable
    a631b9668460 net/sched: act_ct: Fix leak of ct zone template on replace
    312805c93bf6 net: qmi_wwan: add support for ASKEY WWHC050
    522d2dc17967 net: phy: mdio-mux-bcm-iproc: check clk_prepare_enable() return value
    f806b9e84057 net: phy: mdio-bcm-unimac: Fix clock handling
    9fe154ee3fd5 net: phy: dp83867: w/a for fld detect threshold bootstrapping issue
    86137342fd4c net/packet: tpacket_rcv: avoid a producer race condition
    bb8c787be0e3 net: mvneta: Fix the case where the last poll did not process all rx
    a2a3baa29914 net: ena: Add PCI shutdown handler to allow safe kexec
    e586427a0abb net: dsa: tag_8021q: replace dsa_8021q_remove_header with __skb_vlan_pop
    0ec037c1353c net: dsa: mt7530: Change the LINK bit to reflect the link status
    60e975088be8 net: dsa: Fix duplicate frames flooded by learning
    7c6fe9b2af79 net: cbs: Fix software cbs to consider packet sending time
    712c39d9319a net/bpfilter: fix dprintf usage for /dev/kmsg
    85675064133e mlxsw: spectrum_mr: Fix list iteration in error path
    5a1a00f6ac32 mlxsw: pci: Only issue reset when system is ready
    6e75284e2480 macsec: restrict to ethernet devices
    51db2db8fe68 ipv4: fix a RCU-list lock in inet_dump_fib()
    b67aa57f4a9d hsr: fix general protection fault in hsr_addr_is_self()
    6fe31c7ce0ed geneve: move debug check after netdev unregister
    b5c9652ada33 cxgb4: fix Txq restart check during backpressure
    e92a0e7fba68 cxgb4: fix throughput drop during Tx backpressure
    b0ab8700283c ACPI: PM: s2idle: Rework ACPI events synchronization
    127882d10931 mmc: sdhci-tegra: Fix busy detection by enabling MMC_CAP_NEED_RSP_BUSY
    71d89344af0b mmc: sdhci-omap: Fix busy detection by enabling MMC_CAP_NEED_RSP_BUSY
    bf8b920f474e mmc: core: Respect MMC_CAP_NEED_RSP_BUSY for eMMC sleep command
    3b9b71adbec4 mmc: core: Respect MMC_CAP_NEED_RSP_BUSY for erase/trim/discard
    d9c4f387e22a mmc: core: Allow host controllers to require R1B for CMD6

(From OE-Core rev: bc1ae928b0aea450d703ab2bc415c20db4cfb407)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-14 16:45:42 +01:00
Bruce Ashfield
41ca2be4fc linux-yocto/5.4: update to v5.4.28
Updating linux-yocto/5.4 to the latest korg -stable release that comprises
the following commits:

    462afcd6e7ea Linux 5.4.28
    7b2cdbd67ff0 staging: greybus: loopback_test: fix potential path truncations
    8e79f440edb5 staging: greybus: loopback_test: fix potential path truncation
    58ffe6b0245e drm/bridge: dw-hdmi: fix AVI frame colorimetry
    c965a0299c61 nvmet-tcp: set MSG_MORE only if we actually have more to send
    d3eb4daa333f arm64: smp: fix crash_smp_send_stop() behaviour
    6080e0a9d107 arm64: smp: fix smp_send_stop() behaviour
    c61417fef99a ALSA: hda/realtek: Fix pop noise on ALC225
    163489b64361 futex: Unbreak futex hashing
    553d46b07dc4 futex: Fix inode life-time issue
    66f28e110565 x86/mm: split vmalloc_sync_all()
    9dfed456e1eb page-flags: fix a crash at SetPageError(THP_SWAP)
    32991c960d0b mm, slub: prevent kmalloc_node crashes and memory leaks
    623515739282 mm: slub: be more careful about the double cmpxchg of freelist
    8e709bbe41d6 epoll: fix possible lost wakeup on epoll_ctl() path
    69f434a05fb4 mm: do not allow MADV_PAGEOUT for CoW pages
    c3f54f0a68bf mm/hotplug: fix hot remove failure in SPARSEMEM|!VMEMMAP case
    61cfbcce9e09 mm, memcg: throttle allocators based on ancestral memory.high
    77c4bc4bf612 mm, memcg: fix corruption on 64-bit divisor in memory.high throttling
    ceca26903bd7 memcg: fix NULL pointer dereference in __mem_cgroup_usage_unregister_event
    2439259c32c8 stm class: sys-t: Fix the use of time_after()
    f7ef7a020f3b drm/lease: fix WARNING in idr_destroy
    b4e798cab8e9 drm/amd/amdgpu: Fix GPR read from debugfs (v2)
    eaa7fe20231a btrfs: fix log context list corruption after rename whiteout error
    039547fbd1e8 xhci: Do not open code __print_symbolic() in xhci trace events
    ac9d3279514c arm64: compat: Fix syscall number of compat_clock_getres
    70ca8a95df81 rtc: max8907: add missing select REGMAP_IRQ
    eba75a365f55 modpost: move the namespace field in Module.symvers last
    69a9b971406f intel_th: pci: Add Elkhart Lake CPU support
    3bdc0f68a170 intel_th: Fix user-visible error codes
    97097054a1f0 intel_th: msu: Fix the unexpected state warning
    07c70054ba24 staging/speakup: fix get_word non-space look-ahead
    35da67a8a50c staging: greybus: loopback_test: fix poll-mask build breakage
    fbe68a636982 staging: rtl8188eu: Add device id for MERCUSYS MW150US v2
    5f9579641df2 kbuild: Disable -Wpointer-to-enum-cast
    0f5be2f69e89 CIFS: fiemap: do not return EINVAL if get nothing
    48a9bc9534f3 mmc: sdhci-cadence: set SDHCI_QUIRK2_PRESET_VALUE_BROKEN for UniPhier
    8aafd5a0c63c mmc: sdhci-of-at91: fix cd-gpios for SAMA5D2
    0c4e0f0d2e51 mmc: rtsx_pci: Fix support for speed-modes that relies on tuning
    dbb328d1a87d iio: light: vcnl4000: update sampling periods for vcnl4040
    c3540b094edb iio: light: vcnl4000: update sampling periods for vcnl4200
    7ad22950caf5 iio: adc: at91-sama5d2_adc: fix differential channels in triggered mode
    4d71a4f76179 iio: adc: stm32-dfsdm: fix sleep in atomic context
    a79f53a2f5af iio: magnetometer: ak8974: Fix negative raw values in sysfs
    6387b4002357 iio: accel: adxl372: Set iio_chan BE
    3c69b794f96e iio: trigger: stm32-timer: disable master mode when stopping
    eb5f46b0cc55 iio: st_sensors: remap SMO8840 to LIS2DH12
    69399842e4a9 iio: chemical: sps30: fix missing triggered buffer dependency
    51d590fadc14 tty: fix compat TIOCGSERIAL checking wrong function ptr
    a754de70f6d6 tty: fix compat TIOCGSERIAL leaking uninitialized memory
    279cdccb6dc7 ALSA: pcm: oss: Remove WARNING from snd_pcm_plug_alloc() checks
    07ec940ceda5 ALSA: pcm: oss: Avoid plugin buffer overflow
    59e4624e664c ALSA: seq: oss: Fix running status after receiving sysex
    f439c2ece795 ALSA: seq: virmidi: Fix running status after receiving sysex
    e2f1c2d0b6db ALSA: hda/realtek - Enable the headset of Acer N50-600 with ALC662
    f0e819900968 ALSA: hda/realtek - Enable headset mic of Acer X2660G with ALC662
    2d994c9cefc4 ALSA: line6: Fix endless MIDI read loop
    64ab82cf614f USB: cdc-acm: fix rounding error in TIOCSSERIAL
    9ed83da8cd97 USB: cdc-acm: fix close_delay and closing_wait units in TIOCSSERIAL
    186b9564cf5e usb: typec: ucsi: displayport: Fix a potential race during registration
    ff1d876e9f4f usb: typec: ucsi: displayport: Fix NULL pointer dereference
    7b5aab752efc usb: xhci: apply XHCI_SUSPEND_DELAY to AMD XHCI controller 1022:145c
    6e1167db8d21 USB: serial: pl2303: add device-id for HP LD381
    ade2ca96e7a6 usb: host: xhci-plat: add a shutdown
    bace91138933 USB: serial: option: add ME910G1 ECM composition 0x110b
    2601053cafb4 usb: quirks: add NO_LPM quirk for RTL8153 based ethernet adapters
    d742e9874048 USB: Disable LPM on WD19's Realtek Hub
    712d9c2e92ea Revert "drm/fbdev: Fallback to non tiled mode if all tiles not present"
    c71986d18dea binderfs: use refcount for binder control devices too
    169bf660646a parse-maintainers: Mark as executable
    4db2f87e15c8 block, bfq: fix overwrite of bfq_group pointer in bfq_find_set_group()
    5d33ba6f385f xenbus: req->err should be updated before req->state
    7a79e217e3a5 xenbus: req->body should be updated before req->state
    25c3f96370a1 drm/amd/display: fix dcc swath size calculations on dcn1
    46c5b0d8dfbb drm/amd/display: Clear link settings on MST disable connector
    e53a333014a3 drm/amdgpu: clean wptr on wb when gpu recovery
    b557b2f00682 riscv: Fix range looking for kernel image memblock
    1c2106d2d9c1 riscv: Force flat memory model with no-mmu
    0bc9de1b1c1b spi: spi_register_controller(): free bus id on error paths
    af7dd05d7c8f ASoC: stm32: sai: manage rebind issue
    a3f349393eed riscv: avoid the PIC offset of static percpu data in module beyond 2G limits
    1804cdf99fdb dm integrity: use dm_bio_record and dm_bio_restore
    2e7e6de9ae38 dm bio record: save/restore bi_end_io and bi_integrity
    886a8fb13d0c altera-stapl: altera_get_note: prevent write beyond end of 'key'
    2c4e36033ace drivers/perf: arm_pmu_acpi: Fix incorrect checking of gicc pointer
    1002a094e066 drivers/perf: fsl_imx8_ddr: Correct the CLEAR bit definition
    0f6ae2cba3b8 drm/exynos: hdmi: don't leak enable HDMI_EN regulator if probe fails
    53138bea67b2 drm/exynos: dsi: fix workaround for the legacy clock name
    41f88dc1adcc drm/exynos: dsi: propagate error value and silence meaningless warning
    0c30297dddc0 spi/zynqmp: remove entry that causes a cs glitch
    b8ba4d74f9f3 spi: pxa2xx: Add CS control clock quirk
    416e1f433c70 ARM: dts: dra7: Add "dma-ranges" property to PCIe RC DT nodes
    74219d52d4e7 cifs: add missing mount option to /proc/mounts
    ddd8b3ed509a cifs: fix potential mismatch of UNC paths
    a7393e6f2ecf powerpc: Include .BTF section
    9eee3e21a59d spi: qup: call spi_qup_pm_resume_runtime before suspending
    1d4f214c8820 ARM: dts: dra7-l4: mark timer13-16 as pwm capable
    5f657e5303d3 phy: ti: gmii-sel: do not fail in case of gmii
    ee1245396b6e phy: ti: gmii-sel: fix set of copy-paste errors
    4d9020c3d802 drm/mediatek: Find the cursor plane instead of hard coding it
    61c895d0f726 spi: spi-omap2-mcspi: Support probe deferral for DMA channels
    f9f635c04769 locks: reinstate locks_delete_block optimization
    384e15fc4226 locks: fix a potential use-after-free problem when wakeup a waiter

(From OE-Core rev: ceadc52e8c7bd03ca45c342bdabfa770ac32bc71)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-14 16:45:42 +01:00
Khem Raj
cbff49a700 musl: Remove spurious unused patch
(From OE-Core rev: 2bd345826e23802ff3b9fcc77cdab88aee21d3ca)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-14 16:45:42 +01:00
Vyacheslav Yurkov
c288352cee os-release: sanitize required fields
Currently only the VERSION_ID field is sanitized, but os-release(5) has
more fields with the same requirement. Moreover, those fields come
unquoted in most distributions, because quotes are not needed for
values without whitespace.

(From OE-Core rev: ea39b2edecc00cc2340328893cdfbefed5d3b981)

Signed-off-by: Vyacheslav Yurkov <uvv.mail@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-14 16:45:42 +01:00
Sakib Sajal
e328ec317e sqlite: backport CVE fixes
Fixes CVE-2020-11655 and CVE-2020-11656

(From OE-Core rev: e63a38ca6ea95c0dbc79d5024c0cec31062d2e39)

Signed-off-by: Sakib Sajal <sakib.sajal@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-14 16:45:42 +01:00
Benjamin Fair
16d76aa636 util-linux: fix build error in kill
Backport patches from upstream to fix a build error in the kill utility.

Fixes:
| In file included from ../util-linux-2.35.1/misc-utils/kill.c:57:
| ../util-linux-2.35.1/include/pidfd-utils.h: In function ‘pidfd_open’:
| ../util-linux-2.35.1/include/pidfd-utils.h:19:17: error: ‘SYS_pidfd_open’ undeclared (first use in this function); did you mean ‘pidfd_open’?
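
The backported patches themselves are not reproduced in this log. For
illustration only, a common shape for this kind of fix is to define the
missing syscall number before it is used; a minimal, hypothetical C
sketch (434 is the generic pidfd_open syscall number, and the real
upstream patches remain authoritative):

    #include <sys/syscall.h>
    #include <unistd.h>

    /* Hypothetical sketch, not the actual backported patch: provide the
     * syscall number when older libc/kernel headers do not declare it. */
    #ifndef SYS_pidfd_open
    # define SYS_pidfd_open 434
    #endif

    static inline int pidfd_open(pid_t pid, unsigned int flags)
    {
            return syscall(SYS_pidfd_open, pid, flags);
    }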

(From OE-Core rev: 9620c4e6e0e184b2b3907c8f8da4b7b54b97354e)

Signed-off-by: Benjamin Fair <benjaminfair@google.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-14 16:45:42 +01:00
Pierre-Jean Texier
8b0767b6dc timezone: upgrade 2019c -> 2020a
See full changelog https://github.com/eggert/tz/blob/master/NEWS#L11

(From OE-Core rev: 365852b55b66ecbe8a8d5654a082a231ee345919)

Signed-off-by: Pierre-Jean Texier <pjtexier@koncepto.io>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-14 16:45:42 +01:00
Wang Mingyu
2be7ce47d6 icu: CVE-2020-10531
Security advisory

References:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10531

(From OE-Core rev: 12f0cbf348d5acb0a7913bb5dc98e7fccc5ec34f)

Signed-off-by: Wang Mingyu <wangmy@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Jan Luebbe
b20c048e04 openssl: upgrade 1.1.1f -> 1.1.1g
This also fixes CVE-2020-1967.

(From OE-Core rev: f0bd52e5b50a1742b767eefe0d9d67facbb6c53a)

Signed-off-by: Jan Luebbe <jlu@pengutronix.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Mingli Yu
5bb40837b5 iputils: Initialize libgcrypt
Initialize libgcrypt on first use, otherwise the warning below appears
when checking the status of the ninfod.service.
 # systemctl status ninfod.service
 * ninfod.service - Respond to IPv6 Node Information Queries
 Loaded: loaded (/lib/systemd/system/ninfod.service; enabled; vendor preset: enabled)
 Active: active (running) since Wed 2020-04-29 05:18:21 UTC; 36s ago
 Docs: man:ninfod(8)
 Main PID: 347 (ninfod)
 Tasks: 1 (limit: 9382)
 Memory: 1.2M
 CGroup: /system.slice/ninfod.service
 `-347 /sbin/ninfod -d

 Apr 29 05:18:21 intel-x86-64 systemd[1]: Started Respond to IPv6 Node Information Queries.
 Apr 29 05:18:24 intel-x86-64 ninfod[347]: Libgcrypt warning: missing initialization - please fix the application

Reference: 4f489a8c79
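
For illustration only, a hedged C sketch of libgcrypt's documented
initialization sequence (not the backported change itself, which is the
referenced upstream commit):

    #include <gcrypt.h>

    /* Sketch: initialize libgcrypt before first use so the
     * "missing initialization" warning is not emitted. */
    static void gcrypt_init_once(void)
    {
            if (!gcry_check_version(GCRYPT_VERSION))
                    return; /* real code should report the version mismatch */
            gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);
    }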

(From OE-Core rev: 8648c6497d1904b988059cbd72d1592caa8708d0)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Richard Purdie
323560d6ae gcc-target: Ensure buildtools-extended-tarball doesn't use arch=native
A nativesdk BBCLASSEXTEND was added to gcc-target without realising this
would pass arch=native through to it for x86-64. This heavily optimises
gcc output for the host it's running on, meaning it can't be reused via
sstate on other machines.

Add class-target overrides here to get the desired behaviour. All
targets have been covered for completeness.

(From OE-Core rev: 3fff2c9400f2f64cbc8cc450b5ab29505eacbdd1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Kai Kang
65aa038fa0 pseudo: add macro guard for seccomp
pseudo-native fails to compile on CentOS 7:

| ports/linux/pseudo_wrappers.c: In function ‘prctl’:
| ports/linux/pseudo_wrappers.c:129:14: error: ‘SECCOMP_SET_MODE_FILTER’ undeclared (first use in this function)
|    if (cmd == SECCOMP_SET_MODE_FILTER) {
|               ^

Add macro guard for the definition to avoid the failure.
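
As a rough illustration (not the actual patch in the recipe), such a
guard in C could look like the following, assuming the value from
<linux/seccomp.h>:

    /* Hypothetical sketch: provide the constant when building against
     * older kernel headers (e.g. CentOS 7) that predate it. */
    #ifndef SECCOMP_SET_MODE_FILTER
    #define SECCOMP_SET_MODE_FILTER 1
    #endif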

(From OE-Core rev: 9fff03afb8e67b360042e80fda8213a67472b9ec)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Andrew Geissler
4d041f2d04 libffi: fix v3.3 compile on ppc64le
The latest released version of libffi no longer compiles on ppc64le-based
machines. Some searching found a patch that fixed our issue but had not
been submitted upstream to libffi.

It has now been submitted upstream with this PR:
https://github.com/libffi/libffi/pull/561

(From OE-Core rev: ed7ce0d5e9009d80a79c39bb3d0d45de6e7721c0)

Signed-off-by: Andrew Geissler <geissonator@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Alexander Kanavin
c9b2bdd9c3 buildtools-extended-tarball: add libgomp-dev
This is needed in particular for newer versions of rpm,
which would otherwise fail to build due to the absence of the omp.h header.

(From OE-Core rev: a83904481cf85ad4a15209017ab04f690b7779ed)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Richard Purdie
e820c86fb9 oeqa/qemurunner: Clean up failure handling
If setting up the tap devices fails, runqemu errors out quickly;
however, stdout/stderr are not shown to the user and a SystemExit
traceback is shown instead. This could explain some long-unexplained
failures on the autobuilder.

Rework the error handling so SystemExit isn't used and the
standard log failure messages can be shown. The code will likely
need some restructuring to work effectively in the long run.

(From OE-Core rev: 83b8e66b66aa9848ed9c8761a21cb47c6443d0c6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Richard Purdie
9b335fa867 targetcontrol: Fix leaking log handler
We had a mystery failure on the autobuilder where runqemu appeared to
be failing as a logfile directory no longer existed. The key to
reproducing was running a runqemu where the image was deleted (as
devtool does), then running another runqemu test. E.g.:

'oe-selftest -r  devtool.DevtoolExtractTests.test_devtool_deploy_target wic.Wic2.test_qemu_efi'

The second test then tries to write to the logfile from the first test;
since that image directory was deleted, we get strange failures.

The fix is to remove the logging handler when qemu is stopped.

(From OE-Core rev: 924b020eacf111b4fd4d731b363084e254a3422d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Trevor Gamblin
5811ed9140 python3: fix CVE-2020-8492
CVE: CVE-2020-8492

(From OE-Core rev: c9ee462bb606b34ab31cfb90f84a5302d15135cf)

Signed-off-by: Trevor Gamblin <trevor.gamblin@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Richard Purdie
cdde3cfae3 staging: Fix overlapping file failures
If there are different providers of a file and they are switched while the
recipe isn't machine specific, we can get tracebacks due to the overlapping
files. The issue is that the previous provider isn't uninstalled since
the system can't tell whether some later task needs them.

By tracking which tasks we depend upon, the code can now choose to
uninstall more things since a later task can reinstall if/as needed.

The code here was there to protect against two different tasks
running in parallel, which is still protected against.

[YOCTO #13702]

(From OE-Core rev: 86f36e3f93cdb2f5882b72e736a770aa6f46100d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Richard Purdie
21684e6c83 oeqa/selftest: Add test for conflicting sysroot provider
sysroot-test depends on virtual/sysroot-test which we build for one machine,
switch machine, switch provider of virtual/sysroot-test and check that the
sysroot is correctly cleaned up. The files in the two providers overlap
so can cause errors if the sysroot code doesn't function correctly.

Strictly speaking, sysroot-test should be machine specific to avoid this;
however, the sysroot cleanup should also work.

This adds a test for bug:

[YOCTO #13702]

(From OE-Core rev: 31a8b4935e673aba8a1147c4a2fb510b1a8bc3ce)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Tim Orling
1246049541 scripts/install-buildtools: bump to 3.1 release by default
By default, use the extended buildtools installer from the
Yocto Project 3.1 "dunfell" release.

(From OE-Core rev: f1d4da322d607a17de6d9c291562b5fd1128fbc6)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Alex Kiernan
9a21a49065 run-postinsts: Set RemainAfterExit on systemd unit
run-postinsts is only expected to run once, during startup, but if
any dependency is pulled into a transaction, even once it has been
marked disabled, then it can be restarted.

This leads to occasional failures during QA if an ssh session starts
whilst the existing transaction is still running:

  Finished Run pending postinsts.
  run-postinsts.service: Succeeded.
  Condition check resulted in Commit a transient machine-id on disk being skipped.
  Condition check resulted in Bind mount volatile /srv being skipped.
  Condition check resulted in Bind mount volatile /var/spool being skipped.
  Condition check resulted in Bind mount volatile /var/lib being skipped.
  Condition check resulted in Bind mount volatile /var/cache being skipped.
  Condition check resulted in Platform Persistent Storage Archival being skipped.
  Condition check resulted in Rebuild Hardware Database being skipped.
  Starting Run pending postinsts...
  Condition check resulted in Kernel Configuration File System being skipped.
  Condition check resulted in FUSE Control File System being skipped.
  Condition check resulted in Load Kernel Modules being skipped.
  Condition check resulted in File System Check on Root Device being skipped.
  Condition check resulted in Huge Pages File System being skipped.
  Condition check resulted in Journal Audit Socket being skipped.
  dropbear@125-192.168.7.2:22-192.168.7.1:44226.service: Succeeded.
  Condition check resulted in Platform Persistent Storage Archival being skipped.
  Started SSH Per-Connection Server (192.168.7.1:44226).
  dropbear@124-192.168.7.2:22-192.168.7.1:44224.service: Succeeded.
  Started SSH Per-Connection Server (192.168.7.1:44224).
  Condition check resulted in Commit a transient machine-id on disk being skipped.
  Condition check resulted in Bind mount volatile /srv being skipped.
  Condition check resulted in Bind mount volatile /var/spool being skipped.
  Condition check resulted in Bind mount volatile /var/lib being skipped.
  Condition check resulted in Bind mount volatile /var/cache being skipped.
  Condition check resulted in Platform Persistent Storage Archival being skipped.
  Condition check resulted in Rebuild Hardware Database being skipped.
  Failed to start Run pending postinsts.
  run-postinsts.service: Failed with result 'start-limit-hit'.
  run-postinsts.service: Start request repeated too quickly.

Setting RemainAfterExit ensures that the unit remains active and is not
gratuitously restarted, unless done so explicitly using systemctl
restart.

(From OE-Core rev: 6e78fd580a8c6ed9d886b8431974baf6c988831c)

Signed-off-by: Alex Kiernan <alex.kiernan@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Alexander Kanavin
37d1220985 testimage.bbclass: correctly process SIGTERM
Python's unittest will not propagate exceptions outside
of itself, but rather will just catch and print them.

The working way to make it stop is to send a SIGINT
(e.g. simulate a ctrl-c press), which will make it exit
with a KeyboardInterrupt exception.

This also makes pressing ctrl-c twice from bitbake work
again (previously hanging instances of bitbake and qemu were
left around, and bitbake would no longer start until they
were killed manually).
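
A minimal sketch of the approach (hedged; not the exact testimage.bbclass
code):

  # Hypothetical sketch: translate SIGTERM into SIGINT so unittest
  # aborts with KeyboardInterrupt instead of swallowing the exception.
  import os
  import signal

  def sigterm_exception(signum, stackframe):
      os.kill(os.getpid(), signal.SIGINT)

  signal.signal(signal.SIGTERM, sigterm_exception)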

(From OE-Core rev: 72a19f5f0f4bc4472d13b29e46a5c1673977e37a)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Drew Moseley
447653f340 perl: Add missing dependency for tie-hash on carp.
(From OE-Core rev: 3d633f20a98edff434086aa59e8157990bd62f25)

Signed-off-by: Drew Moseley <drew.moseley@northern.tech>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Andrew Geissler
e8b26eb161 boost: revert 1.72.0 regression
https://www.boost.org/users/history/version_1_72_0.html documents a
"Known Issue" and has a revert patch for an issue that causes code to
fail to compile that includes the coroutine function. Without this
patch, code which includes the asymmetric_coroutine.hpp will fail to
compile.

(From OE-Core rev: b9998aa98052cc1c05f59d070677f74bd64c5a10)

Signed-off-by: Andrew Geissler <geissonator@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Joe Slater
79ae284741 vim: do not adjust script paths building for target
When cross-compiling, do not change scripts to use host
versions of perl and gawk.

Also, use INSANE_SKIP to suppress QA complaints if perl
or gawk are not on the target.

(From OE-Core rev: 9a96733e29daf84cca9212538f3fc5bd7bb144f4)

Signed-off-by: Joe Slater <joe.slater@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Domarys Correa
72a1526fd9 insane.bbclass: Add test for shebang line length
Shebang lines longer than 128 characters can give an error
depending on the operating system.
This implements a test that signals an error when locating a
faulty shebang.

YOCTO: #11053
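
A hedged sketch of what such a QA check can look like (helper name and
limit handling are illustrative, not the exact insane.bbclass code):

  # Hypothetical sketch of a shebang length check.
  SHEBANG_MAX_LEN = 128

  def check_shebang_len(path):
      with open(path, "rb") as f:
          first_line = f.readline()
      if first_line.startswith(b"#!") and len(first_line) > SHEBANG_MAX_LEN:
          return "%s: shebang exceeds %d characters" % (path, SHEBANG_MAX_LEN)
      return None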

(From OE-Core rev: 9ed54437b00aed1d41993f7658820d8adfb09282)

Signed-off-by: Domarys Correa <domarys.correa@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Khem Raj
df9e772bc7 binutils: Install PIC version of libiberty.a
Some architectures, e.g. MIPS, complain when linking apps which have shared
libs that are linked with libiberty.a. This fixes errors like the one below:

libiberty/../../libiberty/hashtab.c:285:(.text+0xf8): relocation R_MIPS_26 against `htab_create_typed_alloc' cannot be used when making a shared object; recompile with -fPIC

(From OE-Core rev: 4e64f0bc62fd81f91d75a1f46230fff7c71650e2)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Khem Raj
f21e3abbac binutils: Detect proper static-libstdc++ support when using clang
Fixes configure-time tests to ensure static-libstdc++ is enabled when
using clang.

(From OE-Core rev: 7e90a36e62ebddf287c2ef19e28f88426e061897)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Khem Raj
426be0f2ae gcc: Configure all gccs with --disable-install-libiberty
OE uses libiberty from binutils, since it is properly compiled as a PIC
archive so applications and other libraries needing libiberty can
properly link with it.

With this option applied, explicitly deleting the libiberty headers and
libraries in the install step is not required, since they won't get
installed in the first place.

(From OE-Core rev: b6f1def25cbb477549fad48e9586cef3ada2f9e5)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Khem Raj
9960536181 packagegroup-go-sdk-target: Add go to packagegroup
This ensures that the go compiler is installed into the image along with
the runtime.

(From OE-Core rev: a2371216d693d93c68f6e8aed5c41fd726c423b0)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Khem Raj
97b2996489 go: Rely on go-runtime to provide needed modules
The go compiler is including the go/src/cmd modules in the -dev package, which
conflicts with go-runtime-dev, which provides the exact same copy of this
module along with the other runtime modules. As a result, when both go-dev and
go-runtime-dev are included in an image, rootfs creation fails. So make go
depend on go-runtime and don't install the cmd module here explicitly.

(From OE-Core rev: 1ace1655f8ae08c07c8875be53b641e7c2564ded)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Khem Raj
b4024d1280 packagegroup-go-sdk-target: Enable on rv64
RISCV64 now supports golang (starting with dunfell), so limit
disabling it to rv32 only.

(From OE-Core rev: 284060ed28862f287fde628cc42742aafa5baef1)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Mingli Yu
4be0455aca pbzip2: Fix license warning
After the commit below was introduced, the LICENSE
field changed from BSD-4-Clause to bzip-1.0.6:
669600ef9b bzip2/pbzip2: Correct license information

But it should actually be bzip2-1.0.6, so
update it to fix the license warning below:
WARNING: pbzip2-native-1.1.13-r0 do_populate_lic: pbzip2-native: No generic license file exists for: bzip-1.0.6 in any provider

(From OE-Core rev: 1b0312ec6f546fce0610d08ba754f500f3df4147)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Peter Kjellerstedt
301153707c busybox: Correct the name of the bzip2 license
The common bzip2 license was renamed from "bzip2" to "bzip2-1.0.6" in
commit 669600ef to match the official SPDX identifier.

(From OE-Core rev: be67faad412c47fb739059bd401322271f2cd7c8)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Richard Purdie
20d7b287c7 bzip2/pbzip2: Correct license information
The license of pbzip2 looks slightly BSD like but is in fact the bzip2
license. The SPDX identifier for this is "bzip-1.0.6" since there is
another version of the bzip license out there.

To clear up all the confusion, use the SPDX license name and update
both recipes to refer to it. The copyright information is slightly
different between the codebases but the license looks the same.

(From OE-Core rev: 05fdae7687d22e9f3476c807a15906a1f80e4daa)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-07 13:03:30 +01:00
Changqing Li
3fc7afb010 parselogs.py: ignore pulseaudio startup warning messages
If the default syslog is set to rsyslog, we can see the messages below
in user.log:

[pulseaudio] authkey.c: Failed to open cookie file
[pulseaudio] authkey.c: Failed to load authentication key

These are only warnings emitted when the cookie file is not found, and
PulseAudio will create the file if it doesn't exist.

Refer to:
https://wiki.archlinux.org/index.php/PulseAudio/Configuration
https://lists.freedesktop.org/archives/pulseaudio-discuss/2014-December/022719.html
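
Conceptually the change just extends the test's ignore list; a hedged
sketch (list and helper names are assumed, not the parselogs.py code):

  # Hypothetical sketch: treat the pulseaudio cookie warnings as ignorable.
  ignore_errors_extra = [
      "[pulseaudio] authkey.c: Failed to open cookie file",
      "[pulseaudio] authkey.c: Failed to load authentication key",
  ]

  def is_ignorable(log_line, ignore_errors):
      return any(err in log_line for err in ignore_errors + ignore_errors_extra)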

(From OE-Core rev: 2cc3fac9cd1a0d77931c9e49dbe2941fa8619c51)

Signed-off-by: Changqing Li <changqing.li@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:31 +01:00
hongxu
7ad39607ac buildtools-tarball: add nativesdk-mtools for `wic ls'
On Ubuntu 18.04.1, `mdir' is not provided by default,
which causes `wic ls **.wic' to fail on FAT partitions

...
$ wic ls build/tmp-glibc/deploy/images/xilinx-zynqmp/wrlinux-image-std-xilinx-zynqmp.wic

ERROR: Can't find executable 'mdir'
...

Add nativesdk-mtools to buildtools-tarball and use buildtools
to provide mdir

(From OE-Core rev: 605c81ff90760cdf4a1247df777d5ce8e12d6f6f)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:31 +01:00
Yi Zhao
2032c02627 alsa-state: ignore 'No soundcards found' error in pkg_postinst
If there are no soundcards on the target (e.g. qemu), the pkg_postinst
function will report an error:
  alsactl: load_state:1735: No soundcards found...
  pkg_run_script: package "alsa-state" postinst script returned status 19.
  opkg_configure: alsa-state.postinst returned 19.

Pass the '-g' option to alsactl to ignore this error.

(From OE-Core rev: b2a3cf79cf564a76727bd7dbb21ba9b3d20cf5d4)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:31 +01:00
Tim Orling
885edd1721 python3-manifest.json: add pathlib to core
The pathlib module provides object-oriented filesystem paths.

It also provides a lot of handy utilities for checking on
paths. This seems to justify adding it to the core package
alongside os, sys, and the other *path libraries.

[YOCTO #13670]
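
For reference, the kind of usage this enables on target images (plain
stdlib pathlib; paths are only examples):

  from pathlib import Path

  hostname = Path("/etc") / "hostname"
  if hostname.exists():
      print(hostname.read_text().strip())
  print(sorted(p.name for p in Path("/etc").glob("*.conf")))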

(From OE-Core rev: 81bec2f08229723b550a0cc33d1c77f82432814d)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:31 +01:00
Pierre-Jean Texier
5895d5d16e ell: upgrade 0.30 -> 0.31
This is a bugfix release:

ver 0.31:
	Fix issue with verification of the second certificate in chain.
	Fix issue with handling trusted CA matching in verification.

(From OE-Core rev: c1892a1074560e27671975f4b9fb92468d9874da)

Signed-off-by: Pierre-Jean Texier <pjtexier@koncepto.io>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:31 +01:00
Wang Mingyu
fe91314542 gnutls: upgrade 3.6.12 -> 3.6.13
(From OE-Core rev: 41d9beb709713eb5a16bb31393717dce71db6018)

Signed-off-by: Wang Mingyu <wangmy@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:30 +01:00
Maxime Roussin-Bélanger
2aeba66c15 tzdata: remove exit 0 from pkg_postinst
The documentation says that if you exit 0 in a pkg_postinst it will be marked
as installed.
If you exit 0 before running the postinst-intercepts defer_to_first_boot, the
pkg_postinst_ontarget script will not be present on the target.

The "exit 0" in tzdata makes it difficult to have a bbappend with a
pkg_postinst_ontarget step when you have `INSTALL_TIMEZONE_FILE = 0`.

(From OE-Core rev: ebf675abd0a077bc9aa71acf62b0477a84e1f536)

Signed-off-by: Maxime Roussin-Bélanger <maxime.roussinbelanger@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:30 +01:00
Khem Raj
47cc12f9df ruby: Link with libucontext on musl
Coroutines in ruby 2.7+ need ucontext APIs which are not available in
musl, but an external library is available to provide them, so use it.

Use cached values for ac_cv_func_isnan and ac_cv_func_isinf, as these are
not detected correctly by configure on musl.

On ARM, drop the old arm32 implementation of coroutines, which is slow and
inefficient.

(From OE-Core rev: a2b1af47316a9f5c522db0c9feff1fbe0d39e022)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:30 +01:00
Khem Raj
1aeeecba56 libucontext: Bring in mips/mips64 support
License-Update: Updated copyright years [1]

Latest master 0.10.x+ has added support for mips/mips64, which should
help compile ruby on musl for these architectures

Switch SRC_URI to github upstream URI

Check for common arches before checking others in map_kernel_arch

Drop already upstreamed patches

[1] d31eaabbaf

(From OE-Core rev: 5dbb7d5bb9509dd455673a326c9191dec6f3092c)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:30 +01:00
Jeremy Puhlman
d165768b8e buildtools-extended-tarball: Add libstdc++.a
Builds like native-openjdk really want to link
some tools against the static version. Since,
when using the extended tarball, it is the only
place to get it, add the library.

(From OE-Core rev: dfeca4d1e2442192aa40c420648cae2914c30be5)

Signed-off-by: Jeremy Puhlman <jpuhlman@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:30 +01:00
Jeremy Puhlman
b13c3781b5 nativesdk-gcc-runtime: enable building libstdc++.a
(From OE-Core rev: 217e8f587792b2fe25aead085ddc533d4100cd7a)

Signed-off-by: Jeremy Puhlman <jpuhlman@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:30 +01:00
Jeremy Puhlman
b6601c3d38 qemu-system-native: Fix commented out PACKAGECONFIG
(From OE-Core rev: 2797779cb8b821d8bec8df999c6ebb86384c9686)

Signed-off-by: Jeremy A. Puhlman <jpuhlman@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:30 +01:00
Paul Barker
9a14631d66 kernel-yocto.bbclass: Fix deps when externalsrc is used
do_kernel_configme was recently removed from SRCTREECOVEREDTASKS so this
task still runs when externalsrc is used. This task normally runs after
do_patch but when externalsrc is used, do_patch is removed and this ordering
restriction does nothing. This allows bitbake to execute do_kernel_configme
too early, causing races with do_unpack.

This is fixed by adding a dependency on do_unpack when externalsrc is
used.
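
A hedged sketch of the kind of tweak described, expressed as anonymous
python in the class (the actual kernel-yocto.bbclass change may be worded
differently):

  python () {
      # Hypothetical sketch: when externalsrc is active, make sure
      # do_kernel_configme waits for do_unpack (the task 'deps' flag is a list).
      if bb.data.inherits_class('externalsrc', d):
          deps = d.getVarFlag('do_kernel_configme', 'deps', False) or []
          if 'do_unpack' not in deps:
              deps.append('do_unpack')
              d.setVarFlag('do_kernel_configme', 'deps', deps)
  }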

(From OE-Core rev: 75b47388fb18aaf58db311e570c009350d64084f)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:30 +01:00
Denys Dmytriyenko
613163d0da u-boot.inc: install u-boot-initial-env as ${PN}-initial-env in $D and $DEPLOYDIR
The common u-boot.inc can be used by multiple recipes in the same build for
different cores and/or multiple stages of the bootloader. Naming the initial-env
with a ${PN} prefix avoids clashes in deploy and rootfs between those recipes.

This fixes 69b3b093079c2ca2744d6c02747c5d1b5d3e7ecf that unconditionally
builds, installs and deploys u-boot-initial-env in the common u-boot.inc.

(From OE-Core rev: 78c55eac69dc4b6ae28d7e7911adb59430376b23)

Signed-off-by: Denys Dmytriyenko <denys@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-05-05 08:45:30 +01:00
Richard Purdie
5e44568d90 bitbake: data_smart: Handle hashing of datastores within datastores correctly
If there is a datastore within a datastore (e.g. BB_ORIGENV) then
get_hash() doesn't handle the contents correctly, using the memory
address instead of the contents.

This is a patch from dominik.jaeger@nokia.com which addresses
this problem. It has been low priority since we don't include
BB_ORIGENV anywhere this would cause an issue as standard.

[YOCTO #12473]
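
The gist of the fix, illustrated (not the actual data_smart.py code; names
are simplified and assumed):

  # Hypothetical sketch: hash a nested datastore by its contents rather
  # than by str(value), whose default repr embeds the memory address.
  import hashlib

  def value_for_hash(value):
      if hasattr(value, "getVar") and hasattr(value, "keys"):  # nested datastore
          return "".join("%s=%s" % (k, value.getVar(k, False))
                         for k in sorted(value.keys()))
      return str(value)

  def hash_value(value):
      return hashlib.sha256(value_for_hash(value).encode("utf-8")).hexdigest()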

(Bitbake rev: 1a8bcfc1eb89ccff834ba68fb514330b510976a2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-04-26 13:41:29 +01:00
Richard Purdie
9062a225aa bitbake: cache: Fix performance problem with large numbers of source files
Some companies are using large numbers of patch files in SRC_URI.
Rightly or wrongly, that exposes a performance problem where the code
does not handle the large string manipulations in a way which works
efficiently in python.

This is a modified version of a patch from z00539568
<zhangyifan46@huawei.com153340508@qq.com which addresses the performance
problem. I modified it to use a more advanced regex, retain the "*" check
and cache the regex.

[YOCTO #13824]
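
An illustration of the general technique (names and pattern are
illustrative, not the actual cache.py code):

  import re

  # Hypothetical sketch: compile the pattern once and reuse it, instead of
  # repeatedly re-splitting very large checksum strings per recipe.
  _entry_re = re.compile(r'(\S+):(True|False)(?:\s|$)')

  def iter_checksum_entries(checksums_string):
      for path, exists in _entry_re.findall(checksums_string):
          yield path, exists == 'True'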

(Bitbake rev: c07f374998903359ed55f263c86466d05aa39b68)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-04-26 13:41:29 +01:00
Paul Barker
d03a6d33de bitbake: fetch2/wget: Set User-Agent when checking status of a URL
When a website is behind a CDN like Cloudflare there may be a "Browser
Integrity Check" or other test applied to requests before they are
allowed through to the server. Downloading via wget passes these tests
as headers are set appropriately, however the Python urllib module may
fail these tests unless additional headers are set. This causes
Wget.checkstatus() to fail where Wget.download() would actually succeed.

For Cloudflare in particular a valid User-Agent is needed, it's easy to
add this to the headers in Wget.checkstatus(). The user agent string is
copied from Wget._fetch_index().
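
A hedged sketch of the kind of request this results in (header value and
helper name are illustrative, not the fetcher code):

  import urllib.request

  # Hypothetical sketch: send a browser-like User-Agent so CDN browser
  # integrity checks accept the status probe.
  USER_AGENT = "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0"

  def check_url_status(url):
      req = urllib.request.Request(url, method="HEAD",
                                   headers={"User-Agent": USER_AGENT})
      with urllib.request.urlopen(req, timeout=30):
          return True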

(Bitbake rev: 4679d3cdb9cdf23f3962aa61c599ad7474591f9f)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-04-26 13:41:29 +01:00
Hongxu Jia
14c7608311 bitbake: tinfoil: fix config_data mess up insane check while parsing multiple recipes
Since the commits [tinfoil: Simplify remote datastore connections][1] and
[tinfoil: Add back ability to parse on top of a datastore][2] were applied,
bitbake runs the parseRecipeFile command with the param config_data.dsindex
rather than config_data.

When calling tinfoil.parse_recipe_file() with one config_data (i.e. with the
same config_data.dsindex) to parse multiple recipes, the insane check gets
mixed up between them.

This broke update_layer.py in layerindex; here are the simplified reproduction steps:
[snip]
t= bb.tinfoil.Tinfoil()
t.prepare()
data = bb.data.createCopy(t.config_data)
fn = "path_to/oe-core/meta/recipes-graphics/images/core-image-clutter.bb"
t.parse_recipe_file(fn, appends=False, config_data=data)
fn = "path_to/oe-core/meta/recipes-graphics/packagegroups/packagegroup-core-x11-base.bb"
t.parse_recipe_file(fn, appends=False, config_data=data)

|  File "path_to/oe-core/meta/classes/insane.bbclass", line 1303,
in __anon_1304__path_to_oe_core_meta_classes_insane_bbclass
|    bb.fatal("Fatal QA errors found, failing task.")
[snip]

In the above failure, RDEPENDS is assigned `${PACKAGE_INSTALL} ${LINGUAS_INSTALL}
${IMAGE_INSTALL_DEBUGFS}' in core-image-clutter.bb, but it breaks the insane check
on packagegroup-core-x11-base.bb.

From the commit [remotedata: enable transporting datastore from the client to
the server][3], a new DataSmart is created to save receive_datastore's remote_data.
Similarly, making a copy of config_data (with a different config_data.dsindex)
fixes the issue.

[1] http://git.openembedded.org/bitbake/commit/?id=85e03a64dd0a4ebe71009ec4bdf4192c04a9786e
[2] http://git.openembedded.org/bitbake/commit/?id=4618da2094189e4d814b7d65672cb65c86c0626a
[3] http://git.openembedded.org/bitbake/commit/?id=784d2f1a024efe632fc9049ce5b78692d419d938
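
As a client-side illustration of the same idea (a hedged sketch of the
workaround, not the tinfoil fix itself), giving each parse its own copy of
the datastore avoids sharing one config_data.dsindex:

  import bb.tinfoil
  import bb.data

  # Hypothetical sketch: parse each recipe against a fresh copy of the
  # configuration data so QA state doesn't leak between recipes.
  recipes = [
      "path_to/oe-core/meta/recipes-graphics/images/core-image-clutter.bb",
      "path_to/oe-core/meta/recipes-graphics/packagegroups/packagegroup-core-x11-base.bb",
  ]

  with bb.tinfoil.Tinfoil() as t:
      t.prepare()
      for fn in recipes:
          data = bb.data.createCopy(t.config_data)
          t.parse_recipe_file(fn, appends=False, config_data=data)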

(Bitbake rev: a3074807974536e370289c25fddcb9ad93cbc137)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-04-26 13:41:29 +01:00
Mark Morton
289a63b5dc Updated patch for publishing
(From yocto-docs rev: ac352ad7f95db7eeacb53c2778caa31800bd7c26)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-04-26 13:40:32 +01:00
mmorton
8e0a2e5456 Added usrmerge to distro-features for Bug 13494
(From yocto-docs rev: 049832ea4c13b01c31911ad0a6f3e170781db6cd)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-04-26 13:40:32 +01:00
Robert P. J. Day
472aebce31 ref-manual: correct "script" dirname to "scripts"
(From yocto-docs rev: 43f042582675f89fcdf81c0cd2ac2602d4282cb3)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2020-04-26 13:40:32 +01:00
3242 changed files with 206615 additions and 190030 deletions

3
.gitignore vendored
View File

@@ -30,5 +30,4 @@ hob-image-*.bb
pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log
bitbake/lib/toaster/contrib/tts/.cache/*

View File

@@ -1,35 +0,0 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
DESTDIR = final
ifeq ($(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi),0)
$(error "The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed")
endif
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile.sphinx clean publish
publish: Makefile.sphinx html singlehtml
rm -rf $(BUILDDIR)/$(DESTDIR)/
mkdir -p $(BUILDDIR)/$(DESTDIR)/
cp -r $(BUILDDIR)/html/* $(BUILDDIR)/$(DESTDIR)/
cp $(BUILDDIR)/singlehtml/index.html $(BUILDDIR)/$(DESTDIR)/singleindex.html
sed -i -e 's@index.html#@singleindex.html#@g' $(BUILDDIR)/$(DESTDIR)/singleindex.html
clean:
@rm -rf $(BUILDDIR)
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile.sphinx
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

View File

@@ -11,7 +11,7 @@ For information about Bitbake, see the OpenEmbedded website:
Bitbake plain documentation can be found under the doc directory or its integrated
html version at the Yocto Project website:
https://docs.yoctoproject.org
http://yoctoproject.org/documentation
Contributing
------------

View File

@@ -26,7 +26,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
__version__ = "1.50.0"
__version__ = "1.46.0"
if __name__ == "__main__":
if __version__ != bb.__version__:

View File

@@ -151,6 +151,9 @@ def main():
func = getattr(args, 'func', None)
if func:
client = hashserv.create_client(args.address)
# Try to establish a connection to the server now to detect failures
# early
client.connect()
return func(args, client)

View File

@@ -30,11 +30,9 @@ def main():
"--bind [::1]:8686"'''
)
parser.add_argument('-b', '--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
parser.add_argument('-d', '--database', default='./hashserv.db', help='Database file (default "%(default)s")')
parser.add_argument('-l', '--log', default='WARNING', help='Set logging level')
parser.add_argument('-u', '--upstream', help='Upstream hashserv to pull hashes from')
parser.add_argument('-r', '--read-only', action='store_true', help='Disallow write operations from clients')
parser.add_argument('--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
parser.add_argument('--database', default='./hashserv.db', help='Database file (default "%(default)s")')
parser.add_argument('--log', default='WARNING', help='Set logging level')
args = parser.parse_args()
@@ -49,7 +47,7 @@ def main():
console.setLevel(level)
logger.addHandler(console)
server = hashserv.create_server(args.bind, args.database, upstream=args.upstream, read_only=args.read_only)
server = hashserv.create_server(args.bind, args.database)
server.serve_forever()
return 0

View File

@@ -14,6 +14,7 @@ import logging
import os
import sys
import argparse
import signal
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
@@ -25,6 +26,7 @@ import bb.msg
logger = bb.msg.logger_create('bitbake-layers', sys.stdout)
def main():
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
parser = argparse.ArgumentParser(
description="BitBake layers utility",
epilog="Use %(prog)s <subcommand> --help to get help on a specific command",

View File

@@ -18,7 +18,6 @@ except RuntimeError as exc:
sys.exit(str(exc))
tests = ["bb.tests.codeparser",
"bb.tests.color",
"bb.tests.cooker",
"bb.tests.cow",
"bb.tests.data",
@@ -27,7 +26,6 @@ tests = ["bb.tests.codeparser",
"bb.tests.parse",
"bb.tests.persist_data",
"bb.tests.runqueue",
"bb.tests.siggen",
"bb.tests.utils",
"hashserv.tests",
"layerindexlib.tests.layerindexobj",

View File

@@ -1,54 +0,0 @@
#!/usr/bin/env python3
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Copyright (C) 2020 Richard Purdie
#
import os
import sys
import warnings
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 10 or not sys.argv[1].startswith("decafbad"):
print("bitbake-server is meant for internal execution by bitbake itself, please don't use it standalone.")
sys.exit(1)
import bb.server.process
lockfd = int(sys.argv[2])
readypipeinfd = int(sys.argv[3])
logfile = sys.argv[4]
lockname = sys.argv[5]
sockname = sys.argv[6]
timeout = float(sys.argv[7])
xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
if xmlrpcinterface[0] == "None":
xmlrpcinterface = (None, xmlrpcinterface[1])
if timeout == "None":
timeout = None
# Replace standard fds with our own
with open('/dev/null', 'r') as si:
os.dup2(si.fileno(), sys.stdin.fileno())
so = open(logfile, 'a+')
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two seperate buffers
sys.stderr = sys.stdout
logger = logging.getLogger("BitBake")
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
logger.addHandler(handler)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface)

View File

@@ -16,8 +16,6 @@ import signal
import pickle
import traceback
import queue
import shlex
import subprocess
from multiprocessing import Lock
from threading import Thread
@@ -120,9 +118,7 @@ def worker_child_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
worker_pipe_lock.acquire()
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe.write(data)
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
@@ -147,27 +143,21 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
# a fork() or exec*() activates PSEUDO...
envbackup = {}
fakeroot = False
fakeenv = {}
umask = None
taskdep = workerdata["taskdeps"][fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
umask = taskdep['umask'][taskname]
elif workerdata["umask"]:
umask = workerdata["umask"]
if umask:
# umask might come in as a number or text string..
try:
umask = int(umask, 8)
umask = int(taskdep['umask'][taskname],8)
except TypeError:
pass
umask = taskdep['umask'][taskname]
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
fakeroot = True
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
@@ -177,7 +167,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
fakedirs = (workerdata["fakerootdirs"][fn] or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
logger.debug2('Running %s:%s under fakeroot, fakedirs: %s' %
logger.debug(2, 'Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
else:
envvars = (workerdata["fakerootnoenv"][fn] or "").split()
@@ -286,13 +276,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
try:
if dry_run:
return 0
try:
ret = bb.build.exec_task(fn, taskname, the_data, cfg.profile)
finally:
if fakeroot:
fakerootcmd = shlex.split(the_data.getVar("FAKEROOTCMD"))
subprocess.run(fakerootcmd + ['-S'], check=True, stdout=subprocess.PIPE)
return ret
return bb.build.exec_task(fn, taskname, the_data, cfg.profile)
except:
os._exit(1)
if not profiling:
@@ -337,9 +321,7 @@ class runQueueWorkerPipe():
end = len(self.queue)
index = self.queue.find(b"</event>")
while index != -1:
msg = self.queue[:index+8]
assert msg.startswith(b"<event>") and msg.count(b"<event>") == 1
worker_fire_prepickled(msg)
worker_fire_prepickled(self.queue[:index+8])
self.queue = self.queue[index+8:]
index = self.queue.find(b"</event>")
return (end > start)
@@ -431,9 +413,9 @@ class BitbakeWorker(object):
def handle_workerdata(self, data):
self.workerdata = pickle.loads(data)
bb.build.verboseShellLogging = self.workerdata["build_verbose_shell"]
bb.build.verboseStdoutLogging = self.workerdata["build_verbose_stdout"]
bb.msg.loggerDefaultLogLevel = self.workerdata["logdefaultlevel"]
bb.msg.loggerDefaultVerbose = self.workerdata["logdefaultverbose"]
bb.msg.loggerVerboseLogs = self.workerdata["logdefaultverboselogs"]
bb.msg.loggerDefaultDomains = self.workerdata["logdefaultdomain"]
for mc in self.databuilder.mcdata:
self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
@@ -523,11 +505,9 @@ except BaseException as e:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
finally:
worker_thread_exit = True
worker_thread.join()
workerlog_write("exiting")
if not normalexit:
sys.exit(1)
worker_thread_exit = True
worker_thread.join()
workerlog_write("exitting")
sys.exit(0)

519
bitbake/bin/bitdoc Executable file
View File

@@ -0,0 +1,519 @@
#!/usr/bin/env python3
#
# Copyright (C) 2005 Holger Hans Peter Freyther
#
# SPDX-License-Identifier: GPL-2.0-only
#
import optparse, os, sys
# bitbake
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(__file__), 'lib'))
import bb
import bb.parse
from string import split, join
__version__ = "0.0.2"
class HTMLFormatter:
"""
Simple class to help to generate some sort of HTML files. It is
quite inferior solution compared to docbook, gtkdoc, doxygen but it
should work for now.
We've a global introduction site (index.html) and then one site for
the list of keys (alphabetical sorted) and one for the list of groups,
one site for each key with links to the relations and groups.
index.html
all_keys.html
all_groups.html
groupNAME.html
keyNAME.html
"""
def replace(self, text, *pairs):
"""
From pydoc... almost identical at least
"""
while pairs:
(a, b) = pairs[0]
text = join(split(text, a), b)
pairs = pairs[1:]
return text
def escape(self, text):
"""
Escape string to be conform HTML
"""
return self.replace(text,
('&', '&amp;'),
('<', '&lt;' ),
('>', '&gt;' ) )
def createNavigator(self):
"""
Create the navgiator
"""
return """<table class="navigation" width="100%" summary="Navigation header" cellpadding="2" cellspacing="2">
<tr valign="middle">
<td><a accesskey="g" href="index.html">Home</a></td>
<td><a accesskey="n" href="all_groups.html">Groups</a></td>
<td><a accesskey="u" href="all_keys.html">Keys</a></td>
</tr></table>
"""
def relatedKeys(self, item):
"""
Create HTML to link to foreign keys
"""
if len(item.related()) == 0:
return ""
txt = "<p><b>See also:</b><br>"
txts = []
for it in item.related():
txts.append("""<a href="key%(it)s.html">%(it)s</a>""" % vars() )
return txt + ",".join(txts)
def groups(self, item):
"""
Create HTML to link to related groups
"""
if len(item.groups()) == 0:
return ""
txt = "<p><b>See also:</b><br>"
txts = []
for group in item.groups():
txts.append( """<a href="group%s.html">%s</a> """ % (group, group) )
return txt + ",".join(txts)
def createKeySite(self, item):
"""
Create a site for a key. It contains the header/navigator, a heading,
the description, links to related keys and to the groups.
"""
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Key %s</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2><span class="refentrytitle">%s</span></h2>
<div class="refsynopsisdiv">
<h2>Synopsis</h2>
<p>
%s
</p>
</div>
<div class="refsynopsisdiv">
<h2>Related Keys</h2>
<p>
%s
</p>
</div>
<div class="refsynopsisdiv">
<h2>Groups</h2>
<p>
%s
</p>
</div>
</body>
""" % (item.name(), self.createNavigator(), item.name(),
self.escape(item.description()), self.relatedKeys(item), self.groups(item))
def createGroupsSite(self, doc):
"""
Create the Group Overview site
"""
groups = ""
sorted_groups = sorted(doc.groups())
for group in sorted_groups:
groups += """<a href="group%s.html">%s</a><br>""" % (group, group)
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Group overview</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2>Available Groups</h2>
%s
</body>
""" % (self.createNavigator(), groups)
def createIndex(self):
"""
Create the index file
"""
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Bitbake Documentation</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2>Documentation Entrance</h2>
<a href="all_groups.html">All available groups</a><br>
<a href="all_keys.html">All available keys</a><br>
</body>
""" % self.createNavigator()
def createKeysSite(self, doc):
"""
Create Overview of all avilable keys
"""
keys = ""
sorted_keys = sorted(doc.doc_keys())
for key in sorted_keys:
keys += """<a href="key%s.html">%s</a><br>""" % (key, key)
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Key overview</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2>Available Keys</h2>
%s
</body>
""" % (self.createNavigator(), keys)
def createGroupSite(self, gr, items, _description = None):
"""
Create a site for a group:
Group the name of the group, items contain the name of the keys
inside this group
"""
groups = ""
description = ""
# create a section with the group descriptions
if _description:
description += "<h2 Description of Grozp %s</h2>" % gr
description += _description
items.sort(lambda x, y:cmp(x.name(), y.name()))
for group in items:
groups += """<a href="key%s.html">%s</a><br>""" % (group.name(), group.name())
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Group %s</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
%s
<div class="refsynopsisdiv">
<h2>Keys in Group %s</h2>
<pre class="synopsis">
%s
</pre>
</div>
</body>
""" % (gr, self.createNavigator(), description, gr, groups)
def createCSS(self):
"""
Create the CSS file
"""
return """.synopsis, .classsynopsis
{
background: #eeeeee;
border: solid 1px #aaaaaa;
padding: 0.5em;
}
.programlisting
{
background: #eeeeff;
border: solid 1px #aaaaff;
padding: 0.5em;
}
.variablelist
{
padding: 4px;
margin-left: 3em;
}
.variablelist td:first-child
{
vertical-align: top;
}
table.navigation
{
background: #ffeeee;
border: solid 1px #ffaaaa;
margin-top: 0.5em;
margin-bottom: 0.5em;
}
.navigation a
{
color: #770000;
}
.navigation a:visited
{
color: #550000;
}
.navigation .title
{
font-size: 200%;
}
div.refnamediv
{
margin-top: 2em;
}
div.gallery-float
{
float: left;
padding: 10px;
}
div.gallery-float img
{
border-style: none;
}
div.gallery-spacer
{
clear: both;
}
a
{
text-decoration: none;
}
a:hover
{
text-decoration: underline;
color: #FF0000;
}
"""
class DocumentationItem:
"""
A class to hold information about a configuration
item. It contains the key name, description, a list of related names,
and the group this item is contained in.
"""
def __init__(self):
self._groups = []
self._related = []
self._name = ""
self._desc = ""
def groups(self):
return self._groups
def name(self):
return self._name
def description(self):
return self._desc
def related(self):
return self._related
def setName(self, name):
self._name = name
def setDescription(self, desc):
self._desc = desc
def addGroup(self, group):
self._groups.append(group)
def addRelation(self, relation):
self._related.append(relation)
def sort(self):
self._related.sort()
self._groups.sort()
class Documentation:
"""
Holds the documentation... with mappings from key to items...
"""
def __init__(self):
self.__keys = {}
self.__groups = {}
def insert_doc_item(self, item):
"""
Insert the Doc Item into the internal list
of representation
"""
item.sort()
self.__keys[item.name()] = item
for group in item.groups():
if not group in self.__groups:
self.__groups[group] = []
self.__groups[group].append(item)
self.__groups[group].sort()
def doc_item(self, key):
"""
Return the DocumentationInstance describing the key
"""
try:
return self.__keys[key]
except KeyError:
return None
def doc_keys(self):
"""
Return the documented KEYS (names)
"""
return self.__keys.keys()
def groups(self):
"""
Return the names of available groups
"""
return self.__groups.keys()
def group_content(self, group_name):
"""
Return a list of keys/names that are in a specefic
group or the empty list
"""
try:
return self.__groups[group_name]
except KeyError:
return []
def parse_cmdline(args):
"""
Parse the CMD line and return the result as a n-tuple
"""
parser = optparse.OptionParser( version = "Bitbake Documentation Tool Core version %s, %%prog version %s" % (bb.__version__, __version__))
usage = """%prog [options]
Create a set of html pages (documentation) for a bitbake.conf....
"""
# Add the needed options
parser.add_option( "-c", "--config", help = "Use the specified configuration file as source",
action = "store", dest = "config", default = os.path.join("conf", "documentation.conf") )
parser.add_option( "-o", "--output", help = "Output directory for html files",
action = "store", dest = "output", default = "html/" )
parser.add_option( "-D", "--debug", help = "Increase the debug level",
action = "count", dest = "debug", default = 0 )
parser.add_option( "-v", "--verbose", help = "output more chit-char to the terminal",
action = "store_true", dest = "verbose", default = False )
options, args = parser.parse_args( sys.argv )
bb.msg.init_msgconfig(options.verbose, options.debug)
return options.config, options.output
def main():
"""
The main Method
"""
(config_file, output_dir) = parse_cmdline( sys.argv )
# right to let us load the file now
try:
documentation = bb.parse.handle( config_file, bb.data.init() )
except IOError:
bb.fatal( "Unable to open %s" % config_file )
except bb.parse.ParseError:
bb.fatal( "Unable to parse %s" % config_file )
if isinstance(documentation, dict):
documentation = documentation[""]
# Assuming we've the file loaded now, we will initialize the 'tree'
doc = Documentation()
# defined states
state_begin = 0
state_see = 1
state_group = 2
for key in bb.data.keys(documentation):
data = documentation.getVarFlag(key, "doc", False)
if not data:
continue
# The Documentation now starts
doc_ins = DocumentationItem()
doc_ins.setName(key)
tokens = data.split(' ')
state = state_begin
string= ""
for token in tokens:
token = token.strip(',')
if not state == state_see and token == "@see":
state = state_see
continue
elif not state == state_group and token == "@group":
state = state_group
continue
if state == state_begin:
string += " %s" % token
elif state == state_see:
doc_ins.addRelation(token)
elif state == state_group:
doc_ins.addGroup(token)
# set the description
doc_ins.setDescription(string)
doc.insert_doc_item(doc_ins)
# let us create the HTML now
bb.utils.mkdirhier(output_dir)
os.chdir(output_dir)
# Let us create the sites now. We do it in the following order
# Start with the index.html. It will point to sites explaining all
# keys and groups
html_slave = HTMLFormatter()
f = file('style.css', 'w')
print >> f, html_slave.createCSS()
f = file('index.html', 'w')
print >> f, html_slave.createIndex()
f = file('all_groups.html', 'w')
print >> f, html_slave.createGroupsSite(doc)
f = file('all_keys.html', 'w')
print >> f, html_slave.createKeysSite(doc)
# now for each group create the site
for group in doc.groups():
f = file('group%s.html' % group, 'w')
print >> f, html_slave.createGroupSite(group, doc.group_content(group))
# now for the keys
for key in doc.doc_keys():
f = file('key%s.html' % doc.doc_item(key).name(), 'w')
print >> f, html_slave.createKeySite(doc.doc_item(key))
if __name__ == "__main__":
main()

View File

@@ -1,89 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2020 Joshua Watt <JPEWhacker@gmail.com>
#
# SPDX-License-Identifier: MIT
import argparse
import os
import random
import shutil
import signal
import subprocess
import sys
import time
def try_unlink(path):
try:
os.unlink(path)
except:
pass
def main():
def cleanup():
shutil.rmtree("tmp/cache", ignore_errors=True)
try_unlink("bitbake-cookerdaemon.log")
try_unlink("bitbake.sock")
try_unlink("bitbake.lock")
parser = argparse.ArgumentParser(
description="Bitbake parser torture test",
epilog="""
A torture test for bitbake's parser. Repeatedly interrupts parsing until
bitbake decides to deadlock.
""",
)
args = parser.parse_args()
if not "BUILDDIR" in os.environ:
print(
"'BUILDDIR' not found in the environment. Did you initialize the build environment?"
)
return 1
os.chdir(os.environ["BUILDDIR"])
run_num = 0
while True:
if run_num % 100 == 0:
print("Calibrating wait time...")
cleanup()
start_time = time.monotonic()
r = subprocess.run(["bitbake", "-p"])
max_wait_time = time.monotonic() - start_time
if r.returncode != 0:
print("Calibration run exited with %d" % r.returncode)
return 1
print("Maximum wait time is %f seconds" % max_wait_time)
run_num += 1
wait_time = random.random() * max_wait_time
print("Run #%d" % run_num)
print("Will sleep for %f seconds" % wait_time)
cleanup()
with subprocess.Popen(["bitbake", "-p"]) as proc:
time.sleep(wait_time)
proc.send_signal(signal.SIGINT)
try:
proc.wait(45)
except subprocess.TimeoutExpired:
print("Run #%d: Waited too long. Possible deadlock!" % run_num)
proc.wait()
return 1
if proc.returncode == 0:
print("Exited successfully. Timeout too long?")
else:
print("Exited with %d" % proc.returncode)
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,19 +0,0 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2021 Joshua Watt <JPEWhacker@gmail.com>
#
# Dockerfile to build a bitbake hash equivalence server container
#
# From the root of the bitbake repository, run:
#
# docker build -f contrib/hashserv/Dockerfile .
#
FROM alpine:3.13.1
RUN apk add --no-cache python3
COPY bin/bitbake-hashserv /opt/bbhashserv/bin/
COPY lib/hashserv /opt/bbhashserv/lib/hashserv/
ENTRYPOINT ["/opt/bbhashserv/bin/bitbake-hashserv"]

View File

@@ -1,18 +0,0 @@
The MIT License (MIT)
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@@ -6,12 +6,12 @@
"
" This sets up the syntax highlighting for BitBake files, like .bb, .bbclass and .inc
if &compatible || version < 600 || exists("b:loaded_bitbake_plugin")
if &compatible || version < 600
finish
endif
" .bb, .bbappend and .bbclass
au BufNewFile,BufRead *.{bb,bbappend,bbclass} set filetype=bitbake
au BufNewFile,BufRead *.{bb,bbappend,bbclass} set filetype=bitbake
" .inc
au BufNewFile,BufRead *.inc set filetype=bitbake

View File

@@ -1,13 +1,2 @@
" Only do this when not done yet for this buffer
if exists("b:did_ftplugin")
finish
endif
" Don't load another plugin for this buffer
let b:did_ftplugin = 1
let b:undo_ftplugin = "setl cms< sts< sw< et< sua<"
setlocal commentstring=#\ %s
setlocal softtabstop=4 shiftwidth=4 expandtab
setlocal suffixesadd+=.bb,.bbclass
set sts=4 sw=4 et
set cms=#%s

14
bitbake/contrib/vim/plugin/newbb.vim Normal file → Executable file
View File

@@ -10,7 +10,7 @@
"
" Will try to use git to find the user name and email
if &compatible || v:version < 600 || exists("b:loaded_bitbake_plugin")
if &compatible || v:version < 600
finish
endif
@@ -25,7 +25,7 @@ endfun
fun! <SID>GetUserEmail()
let l:user_email = system("git config --get user.email")
if v:shell_error
return "unknown@user.org"
return "unknow@user.org"
else
return substitute(l:user_email, "\n", "", "")
endfun
@@ -41,10 +41,6 @@ fun! BBHeader()
endfun
fun! NewBBTemplate()
if line2byte(line('$') + 1) != -1
return
endif
let l:paste = &paste
set nopaste
@@ -52,7 +48,7 @@ fun! NewBBTemplate()
call BBHeader()
" New the bb template
put ='SUMMARY = \"\"'
put ='DESCRIPTION = \"\"'
put ='HOMEPAGE = \"\"'
put ='LICENSE = \"\"'
put ='SECTION = \"\"'
@@ -62,7 +58,7 @@ fun! NewBBTemplate()
" Go to the first place to edit
0
/^SUMMARY =/
/^DESCRIPTION =/
exec "normal 2f\""
if paste == 1
@@ -80,7 +76,7 @@ if v:progname =~ "vimdiff"
endif
augroup NewBB
au BufNewFile,BufReadPost *.bb
au BufNewFile *.bb
\ if g:bb_create_on_empty |
\ call NewBBTemplate() |
\ endif

View File

@@ -1,46 +0,0 @@
" Vim plugin file
" Purpose: Create a template for new bbappend file
" Author: Joshua Watt <JPEWhacker@gmail.com>
" Copyright: Copyright (C) 2017 Joshua Watt <JPEWhacker@gmail.com>
"
" This file is licensed under the MIT license, see COPYING.MIT in
" this source distribution for the terms.
"
if &compatible || v:version < 600 || exists("b:loaded_bitbake_plugin")
finish
endif
fun! NewBBAppendTemplate()
if line2byte(line('$') + 1) != -1
return
endif
let l:paste = &paste
set nopaste
" New bbappend template
0 put ='FILESEXTRAPATHS_prepend := \"${THISDIR}/${PN}:\"'
2
if paste == 1
set paste
endif
endfun
if !exists("g:bb_create_on_empty")
let g:bb_create_on_empty = 1
endif
" disable in case of vimdiff
if v:progname =~ "vimdiff"
let g:bb_create_on_empty = 0
endif
augroup NewBBAppend
au BufNewFile,BufReadPost *.bbappend
\ if g:bb_create_on_empty |
\ call NewBBAppendTemplate() |
\ endif
augroup END

View File

@@ -12,7 +12,7 @@
"
" It's an entirely new type, just has specific syntax in shell and python code
if &compatible || v:version < 600 || exists("b:loaded_bitbake_plugin")
if &compatible || v:version < 600
finish
endif
if exists("b:current_syntax")
@@ -58,8 +58,8 @@ syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbV
syn region bbVarPyValue start=+${@+ skip=+\\$+ end=+}+ contained contains=@python
" Vars metadata flags
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.+]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(:=\|=\|.=\|=.|+=\|=+\|?=\)\@=" contained contains=bbIdentifier nextgroup=bbVarEq
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(=\|+=\|=+\|?=\)\@=" contained contains=bbIdentifier nextgroup=bbVarEq
" Includes and requires
syn keyword bbInclude inherit include require contained
@@ -67,15 +67,15 @@ syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref
syn match bbIncludeLine "^\(inherit\|include\|require\)\s\+" contains=bbInclude nextgroup=bbIncludeRest
" Add taks and similar
syn keyword bbStatement addtask deltask addhandler after before EXPORT_FUNCTIONS contained
syn keyword bbStatement addtask addhandler after before EXPORT_FUNCTIONS contained
syn match bbStatementRest ".*$" skipwhite contained contains=bbStatement
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
syn match bbStatementLine "^\(addtask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
" OE Important Functions
syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_compile do_stage do_install do_package contained
" Generic Functions
syn match bbFunction "\h[0-9A-Za-z_\-\.]*" display contained contains=bbOEFunctions
syn match bbFunction "\h[0-9A-Za-z_-]*" display contained contains=bbOEFunctions
" BitBake shell metadata
syn include @shell syntax/sh.vim
@@ -83,7 +83,7 @@ if exists("b:current_syntax")
unlet b:current_syntax
endif
syn keyword bbShFakeRootFlag fakeroot contained
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([0-9A-Za-z_${}-]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@shell
" Python value inside shell functions
@@ -91,7 +91,7 @@ syn region shDeref start=+${@+ skip=+\\$+ excludenl end=+}+ contained co
" BitBake python metadata
syn keyword bbPyFlag python contained
syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn match bbPyFuncDef "^\(python\s\+\)\([0-9A-Za-z_${}-]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPyFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@python
" BitBake 'def'd python functions


@@ -1 +0,0 @@
_build/


@@ -1,35 +1,91 @@
# Minimal makefile for Sphinx documentation
# This is a single Makefile to handle all generated BitBake documents.
# The Makefile needs to live in the documentation directory and all figures used
# in any manuals must be .PNG files and live in the individual book's figures
# directory.
#
# The Makefile has these targets:
#
# pdf: generates a PDF version of a manual.
# html: generates an HTML version of a manual.
# tarball: creates a tarball for the doc files.
# validate: validates
# clean: removes files
#
# The Makefile generates an HTML version of every document. The
# variable DOC indicates the folder name for a given manual.
#
# To build a manual, you must invoke 'make' with the DOC argument.
#
# Examples:
#
# make DOC=bitbake-user-manual
# make pdf DOC=bitbake-user-manual
#
# The first example generates the HTML version of the User Manual.
# The second example generates the PDF version of the User Manual.
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= -j auto
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
DESTDIR = final
ifeq ($(DOC),bitbake-user-manual)
XSLTOPTS = --stringparam html.stylesheet bitbake-user-manual-style.css \
--stringparam chapter.autolabel 1 \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html tarball
TARFILES = bitbake-user-manual-style.css bitbake-user-manual.html figures/bitbake-title.png
MANUALS = $(DOC)/$(DOC).html
FIGURES = figures
STYLESHEET = $(DOC)/*.css
ifeq ($(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi),0)
$(error "The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed")
endif
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
##
# These URI should be rewritten by your distribution's xml catalog to
# match your localy installed XSL stylesheets.
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl
.PHONY: help Makefile clean publish
all: $(ALLPREQ)
publish: Makefile html singlehtml
rm -rf $(BUILDDIR)/$(DESTDIR)/
mkdir -p $(BUILDDIR)/$(DESTDIR)/
cp -r $(BUILDDIR)/html/* $(BUILDDIR)/$(DESTDIR)/
cp $(BUILDDIR)/singlehtml/index.html $(BUILDDIR)/$(DESTDIR)/singleindex.html
sed -i -e 's@index.html#@singleindex.html#@g' $(BUILDDIR)/$(DESTDIR)/singleindex.html
pdf:
ifeq ($(DOC),bitbake-user-manual)
@echo " "
@echo "********** Building."$(DOC)
@echo " "
cd $(DOC); ../tools/docbook-to-pdf $(DOC).xml ../template; cd ..
endif
html:
ifeq ($(DOC),bitbake-user-manual)
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
@echo " "
@echo "******** Building "$(DOC)
@echo " "
cd $(DOC); xsltproc $(XSLTOPTS) -o $(DOC).html $(DOC)-customization.xsl $(DOC).xml; cd ..
endif
tarball: html
@echo " "
@echo "******** Creating Tarball of document files"
@echo " "
cd $(DOC); tar -cvzf $(DOC).tgz $(TARFILES); cd ..
validate:
cd $(DOC); xmllint --postvalid --xinclude --noout $(DOC).xml; cd ..
publish:
@if test -f $(DOC)/$(DOC).html; \
then \
echo " "; \
echo "******** Publishing "$(DOC)".html"; \
echo " "; \
scp -r $(MANUALS) $(STYLESHEET) docs.yp:/var/www/www.yoctoproject.org-docs/$(VER)/$(DOC); \
cd $(DOC); scp -r $(FIGURES) docs.yp:/var/www/www.yoctoproject.org-docs/$(VER)/$(DOC); \
else \
echo " "; \
echo $(DOC)".html missing. Generate the file first then try again."; \
echo " "; \
fi
clean:
@rm -rf $(BUILDDIR)
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
rm -rf $(MANUALS); rm $(DOC)/$(DOC).tgz;


@@ -15,41 +15,25 @@ Each folder is self-contained regarding content and figures.
If you want to find HTML versions of the BitBake manuals on the web,
go to http://www.openembedded.org/wiki/Documentation.
Sphinx
======
Makefile
========
The BitBake documentation was migrated from the original DocBook
format to Sphinx based documentation for the Yocto Project 3.2
release.
The Makefile processes manual directories to create HTML, PDF,
tarballs, etc. Details on how the Makefile work are documented
inside the Makefile. See that file for more information.
Additional information related to the Sphinx migration, and guidelines
for developers willing to contribute to the BitBake documentation can
be found in the Yocto Project Documentation README file:
To build a manual, you run the make command and pass it the name
of the folder containing the manual's contents.
For example, the following command run from the documentation directory
creates an HTML and a PDF version of the BitBake User Manual.
The DOC variable specifies the manual you are making:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-docs/tree/documentation/README
$ make DOC=bitbake-user-manual
How to build the Yocto Project documentation
============================================
template
========
Contains various templates, fonts, and some old PNG files.
Sphinx is written in Python. While it might work with Python2, for
obvious reasons, we will only support building the BitBake
documentation with Python3.
Sphinx might be available in your Linux distro's package repositories;
however, it is not recommended to use distro packages, as they might be
old versions, especially if you are using an LTS version of your
distro. The recommended method to install Sphinx and all required
dependencies is to use the Python Package Index (pip).
To install all required packages run:
$ pip3 install sphinx sphinx_rtd_theme pyyaml
To build the documentation locally, run:
$ cd documentation
$ make -f Makefile.sphinx html
The resulting HTML index page will be _build/html/index.html, and you
can browse your own copy of the locally generated documentation with
your browser.
tools
=====
Contains a tool to convert the DocBook files to PDF format.


@@ -1,14 +0,0 @@
{% extends "!breadcrumbs.html" %}
{% block breadcrumbs %}
<li>
<span class="doctype_switcher_placeholder">{{ doctype or 'single' }}</span>
<span class="version_switcher_placeholder">{{ release }}</span>
</li>
<li> &raquo;</li>
{% for doc in parents %}
<li><a href="{{ doc.link|e }}">{{ doc.title }}</a> &raquo;</li>
{% endfor %}
<li>{{ title }}</li>
{% endblock %}


@@ -1,7 +0,0 @@
{% extends "!layout.html" %}
{% block extrabody %}
<div id="outdated-warning" style="text-align: center; background-color: #FFBABA; color: #6A0E0E;">
</div>
{% endblock %}


@@ -0,0 +1,29 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="http://downloads.yoctoproject.org/mirror/docbook-mirror/docbook-xsl-1.76.1/xhtml/docbook.xsl" />
<!--
<xsl:import href="../template/1.76.1/docbook-xsl-1.76.1/xhtml/docbook.xsl" />
<xsl:import href="http://docbook.sourceforge.net/release/xsl/1.76.1/xhtml/docbook.xsl" />
-->
<xsl:include href="../template/permalinks.xsl"/>
<xsl:include href="../template/section.title.xsl"/>
<xsl:include href="../template/component.title.xsl"/>
<xsl:include href="../template/division.title.xsl"/>
<xsl:include href="../template/formal.object.heading.xsl"/>
<xsl:include href="../template/gloss-permalinks.xsl"/>
<xsl:param name="html.stylesheet" select="'user-manual-style.css'" />
<xsl:param name="chapter.autolabel" select="1" />
<xsl:param name="section.autolabel" select="1" />
<xsl:param name="section.label.includes.component.label" select="1" />
<xsl:param name="appendix.autolabel">A</xsl:param>
<!-- <xsl:param name="generate.toc" select="'article nop'"></xsl:param> -->
</xsl:stylesheet>


@@ -1,734 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
=========
Execution
=========
|
The primary purpose for running BitBake is to produce some kind of
output such as a single installable package, a kernel, a software
development kit, or even a full, board-specific bootable Linux image,
complete with bootloader, kernel, and root filesystem. Of course, you
can execute the ``bitbake`` command with options that cause it to
execute single tasks, compile single recipe files, capture or clear
data, or simply return information about the execution environment.
This chapter describes BitBake's execution process from start to finish
when you use it to create an image. The execution process is launched
using the following command form: ::
$ bitbake target
For information on
the BitBake command and its options, see ":ref:`The BitBake Command
<bitbake-user-manual-command>`" section.
.. note::
Prior to executing BitBake, you should take advantage of available
parallel thread execution on your build host by setting the
:term:`BB_NUMBER_THREADS` variable in
your project's ``local.conf`` configuration file.
A common method to determine this value for your build host is to run
the following: ::
$ grep processor /proc/cpuinfo
This command returns
the number of processors, which takes into account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely shows
eight processors, which is the value you would then assign to
``BB_NUMBER_THREADS``.
A possibly simpler solution is that some Linux distributions (e.g.
Debian and Ubuntu) provide the ``ncpus`` command.
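For illustration only, a hypothetical ``local.conf`` entry for an
eight-processor build host might look like the following (the value is
an example, not a recommendation): ::

   # the value "8" is only an illustration for an eight-processor host
   BB_NUMBER_THREADS = "8"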
Parsing the Base Configuration Metadata
=======================================
The first thing BitBake does is parse base configuration metadata. Base
configuration metadata consists of your project's ``bblayers.conf`` file
to determine what layers BitBake needs to recognize, all necessary
``layer.conf`` files (one from each layer), and ``bitbake.conf``. The
data itself is of various types:
- **Recipes:** Details about particular pieces of software.
- **Class Data:** An abstraction of common build information (e.g. how to
build a Linux kernel).
- **Configuration Data:** Machine-specific settings, policy decisions,
and so forth. Configuration data acts as the glue to bind everything
together.
The ``layer.conf`` files are used to construct key variables such as
:term:`BBPATH` and :term:`BBFILES`.
``BBPATH`` is used to search for configuration and class files under the
``conf`` and ``classes`` directories, respectively. ``BBFILES`` is used
to locate both recipe and recipe append files (``.bb`` and
``.bbappend``). If there is no ``bblayers.conf`` file, it is assumed the
user has set the ``BBPATH`` and ``BBFILES`` directly in the environment.
Next, the ``bitbake.conf`` file is located using the ``BBPATH`` variable
that was just constructed. The ``bitbake.conf`` file may also include
other configuration files using the ``include`` or ``require``
directives.
Prior to parsing configuration files, BitBake looks at certain
variables, including:
- :term:`BB_ENV_WHITELIST`
- :term:`BB_ENV_EXTRAWHITE`
- :term:`BB_PRESERVE_ENV`
- :term:`BB_ORIGENV`
- :term:`BITBAKE_UI`
The first four variables in this list relate to how BitBake treats shell
environment variables during task execution. By default, BitBake cleans
the environment variables and provides tight control over the shell
execution environment. However, through the use of these first four
variables, you can apply your control regarding the environment
variables allowed to be used by BitBake in the shell during execution of
tasks. See the
":ref:`bitbake-user-manual/bitbake-user-manual-metadata:Passing Information Into the Build Task Environment`"
section and the information about these variables in the variable
glossary for more information on how they work and on how to use them.
The base configuration metadata is global and therefore affects all
recipes and tasks that are executed.
BitBake first searches the current working directory for an optional
``conf/bblayers.conf`` configuration file. This file is expected to
contain a :term:`BBLAYERS` variable that is a
space-delimited list of 'layer' directories. Recall that if BitBake
cannot find a ``bblayers.conf`` file, then it is assumed the user has
set the ``BBPATH`` and ``BBFILES`` variables directly in the
environment.
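As a purely illustrative sketch, a minimal ``conf/bblayers.conf``
listing two hypothetical layer directories could look like this (the
paths are made up for the example): ::

   # the layer paths below are placeholders
   BBLAYERS ?= " \
       /home/user/layers/meta-example \
       /home/user/layers/meta-another \
       "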
For each directory (layer) in this list, a ``conf/layer.conf`` file is
located and parsed with the :term:`LAYERDIR` variable
being set to the directory where the layer was found. The idea is these
files automatically set up :term:`BBPATH` and other
variables correctly for a given build directory.
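The following is a minimal, hypothetical ``conf/layer.conf`` sketch
showing the kind of setup such a file typically performs (the
collection name "example" is a placeholder): ::

   BBPATH .= ":${LAYERDIR}"
   BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
               ${LAYERDIR}/recipes-*/*/*.bbappend"

   # "example" is a placeholder collection name for this sketch
   BBFILE_COLLECTIONS += "example"
   BBFILE_PATTERN_example = "^${LAYERDIR}/"
   BBFILE_PRIORITY_example = "5"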
BitBake then expects to find the ``conf/bitbake.conf`` file somewhere in
the user-specified ``BBPATH``. That configuration file generally has
include directives to pull in any other metadata such as files specific
to the architecture, the machine, the local environment, and so forth.
Only variable definitions and include directives are allowed in BitBake
``.conf`` files. Some variables directly influence BitBake's behavior.
These variables might have been set from the environment depending on
the environment variables previously mentioned or set in the
configuration files. The ":ref:`bitbake-user-manual/bitbake-user-manual-ref-variables:Variables Glossary`"
chapter presents a full list of
variables.
After parsing configuration files, BitBake uses its rudimentary
inheritance mechanism, which is through class files, to inherit some
standard classes. BitBake parses a class when the inherit directive
responsible for getting that class is encountered.
The ``base.bbclass`` file is always included. Other classes that are
specified in the configuration using the
:term:`INHERIT` variable are also included. BitBake
searches for class files in a ``classes`` subdirectory under the paths
in ``BBPATH`` in the same way as configuration files.
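For example, a configuration file could pull an additional class into
the build globally with a line such as the following (the class name
is only an illustration): ::

   # the class name is shown only as an example
   INHERIT += "own-mirrors"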
A good way to get an idea of the configuration files and the class files
used in your execution environment is to run the following BitBake
command: ::
$ bitbake -e > mybb.log
Examining the top of the ``mybb.log``
shows you the many configuration files and class files used in your
execution environment.
.. note::
You need to be aware of how BitBake parses curly braces. If a recipe
uses a closing curly brace within the function and the character has
no leading spaces, BitBake produces a parsing error. If you use a
pair of curly braces in a shell function, the closing curly brace
must not be located at the start of the line without leading spaces.
Here is an example that causes BitBake to produce a parsing error: ::
fakeroot create_shar() {
cat << "EOF" > ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
usage()
{
echo "test"
###### The following "}" at the start of the line causes a parsing error ######
}
EOF
}
Writing the recipe this way avoids the error:
fakeroot create_shar() {
cat << "EOF" > ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
usage()
{
echo "test"
###### The following "}" with a leading space at the start of the line avoids the error ######
}
EOF
}
Locating and Parsing Recipes
============================
During the configuration phase, BitBake will have set
:term:`BBFILES`. BitBake now uses it to construct a
list of recipes to parse, along with any append files (``.bbappend``) to
apply. ``BBFILES`` is a space-separated list of available files and
supports wildcards. An example would be: ::
BBFILES = "/path/to/bbfiles/*.bb /path/to/appends/*.bbappend"
BitBake parses each
recipe and append file located with ``BBFILES`` and stores the values of
various variables into the datastore.
.. note::
Append files are applied in the order they are encountered in BBFILES.
For each file, a fresh copy of the base configuration is made, then the
recipe is parsed line by line. Any inherit statements cause BitBake to
find and then parse class files (``.bbclass``) using
:term:`BBPATH` as the search path. Finally, BitBake
parses in order any append files found in ``BBFILES``.
One common convention is to use the recipe filename to define pieces of
metadata. For example, in ``bitbake.conf`` the recipe name and version
are used to set the variables :term:`PN` and
:term:`PV`: ::
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
In this example, a recipe called "something_1.2.3.bb" would set
``PN`` to "something" and ``PV`` to "1.2.3".
By the time parsing is complete for a recipe, BitBake has a list of
tasks that the recipe defines and a set of data consisting of keys and
values as well as dependency information about the tasks.
BitBake does not need all of this information. It only needs a small
subset of the information to make decisions about the recipe.
Consequently, BitBake caches the values in which it is interested and
does not store the rest of the information. Experience has shown it is
faster to re-parse the metadata than to try and write it out to the disk
and then reload it.
Where possible, subsequent BitBake commands reuse this cache of recipe
information. The validity of this cache is determined by first computing
a checksum of the base configuration data (see
:term:`BB_HASHCONFIG_WHITELIST`) and
then checking if the checksum matches. If that checksum matches what is
in the cache and the recipe and class files have not changed, BitBake is
able to use the cache. BitBake then reloads the cached information about
the recipe instead of reparsing it from scratch.
Recipe file collections exist to allow the user to have multiple
repositories of ``.bb`` files that contain the same exact package. For
example, one could easily use them to make one's own local copy of an
upstream repository, but with custom modifications that one does not
want upstream. Here is an example: ::
BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"
BBFILE_PATTERN_upstream = "^/stuff/openembedded/"
BBFILE_PATTERN_local = "^/stuff/openembedded.modified/"
BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"
.. note::
The layers mechanism is now the preferred method of collecting code.
While the collections code remains, its main use is to set layer
priorities and to deal with overlap (conflicts) between layers.
.. _bb-bitbake-providers:
Providers
=========
Assuming BitBake has been instructed to execute a target and that all
the recipe files have been parsed, BitBake starts to figure out how to
build the target. BitBake looks through the ``PROVIDES`` list for each
of the recipes. A ``PROVIDES`` list is the list of names by which the
recipe can be known. Each recipe's ``PROVIDES`` list is created
implicitly through the recipe's :term:`PN` variable and
explicitly through the recipe's :term:`PROVIDES`
variable, which is optional.
When a recipe uses ``PROVIDES``, that recipe's functionality can be
found under an alternative name or names other than the implicit ``PN``
name. As an example, suppose a recipe named ``keyboard_1.0.bb``
contained the following: ::
PROVIDES += "fullkeyboard"
The ``PROVIDES``
list for this recipe becomes "keyboard", which is implicit, and
"fullkeyboard", which is explicit. Consequently, the functionality found
in ``keyboard_1.0.bb`` can be found under two different names.
.. _bb-bitbake-preferences:
Preferences
===========
The ``PROVIDES`` list is only part of the solution for figuring out a
target's recipes. Because targets might have multiple providers, BitBake
needs to prioritize providers by determining provider preferences.
A common example in which a target has multiple providers is
"virtual/kernel", which is on the ``PROVIDES`` list for each kernel
recipe. Each machine often selects the best kernel provider by using a
line similar to the following in the machine configuration file: ::
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
The default :term:`PREFERRED_PROVIDER` is the provider
with the same name as the target. BitBake iterates through each target
it needs to build and resolves them and their dependencies using this
process.
Understanding how providers are chosen is made complicated by the fact
that multiple versions might exist for a given provider. BitBake
defaults to the highest version of a provider. Version comparisons are
made using the same method as Debian. You can use the
:term:`PREFERRED_VERSION` variable to
specify a particular version. You can influence the order by using the
:term:`DEFAULT_PREFERENCE` variable.
By default, files have a preference of "0". Setting
``DEFAULT_PREFERENCE`` to "-1" makes the recipe unlikely to be used
unless it is explicitly referenced. Setting ``DEFAULT_PREFERENCE`` to
"1" makes it likely the recipe is used. ``PREFERRED_VERSION`` overrides
any ``DEFAULT_PREFERENCE`` setting. ``DEFAULT_PREFERENCE`` is often used
to mark newer and more experimental recipe versions until they have
undergone sufficient testing to be considered stable.
When there are multiple "versions" of a given recipe, BitBake defaults
to selecting the most recent version, unless otherwise specified. If the
recipe in question has a
:term:`DEFAULT_PREFERENCE` set lower than
the other recipes (default is 0), then it will not be selected. This
allows the person or persons maintaining the repository of recipe files
to specify their preference for the default selected version.
Additionally, the user can specify their preferred version.
If the first recipe is named ``a_1.1.bb``, then the
:term:`PN` variable will be set to "a", and the
:term:`PV` variable will be set to 1.1.
Thus, if a recipe named ``a_1.2.bb`` exists, BitBake will choose 1.2 by
default. However, if you define the following variable in a ``.conf``
file that BitBake parses, you can change that preference: ::
PREFERRED_VERSION_a = "1.1"
.. note::
It is common for a recipe to provide two versions -- a stable,
numbered (and preferred) version, and a version that is automatically
checked out from a source code repository that is considered more
"bleeding edge" but can be selected only explicitly.
For example, in the OpenEmbedded codebase, there is a standard,
versioned recipe file for BusyBox, ``busybox_1.22.1.bb``, but there
is also a Git-based version, ``busybox_git.bb``, which explicitly
contains the line ::
DEFAULT_PREFERENCE = "-1"
to ensure that the
numbered, stable version is always preferred unless the developer
selects otherwise.
.. _bb-bitbake-dependencies:
Dependencies
============
Each target BitBake builds consists of multiple tasks such as ``fetch``,
``unpack``, ``patch``, ``configure``, and ``compile``. For best
performance on multi-core systems, BitBake considers each task as an
independent entity with its own set of dependencies.
Dependencies are defined through several variables. You can find
information about variables BitBake uses in the
:doc:`bitbake-user-manual-ref-variables` near the end of this manual. At a
basic level, it is sufficient to know that BitBake uses the
:term:`DEPENDS` and
:term:`RDEPENDS` variables when calculating
dependencies.
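As a simple, hypothetical illustration of the two namespaces, a recipe
might declare a build-time dependency and a runtime dependency as
follows (the package names are placeholders): ::

   # "libexample" and "libexample-utils" are hypothetical names
   DEPENDS = "libexample"
   RDEPENDS_${PN} = "libexample-utils"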
For more information on how BitBake handles dependencies, see the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:Dependencies`
section.
.. _ref-bitbake-tasklist:
The Task List
=============
Based on the generated list of providers and the dependency information,
BitBake can now calculate exactly what tasks it needs to run and in what
order it needs to run them. The
:ref:`bitbake-user-manual/bitbake-user-manual-execution:executing tasks`
section has more information on how BitBake chooses which task to
execute next.
The build now starts with BitBake forking off threads up to the limit
set in the :term:`BB_NUMBER_THREADS`
variable. BitBake continues to fork threads as long as there are tasks
ready to run, those tasks have all their dependencies met, and the
thread threshold has not been exceeded.
It is worth noting that you can greatly speed up the build time by
properly setting the ``BB_NUMBER_THREADS`` variable.
As each task completes, a timestamp is written to the directory
specified by the :term:`STAMP` variable. On subsequent
runs, BitBake looks in the build directory within ``tmp/stamps`` and
does not rerun tasks that are already completed unless a timestamp is
found to be invalid. Currently, invalid timestamps are only considered
on a per recipe file basis. So, for example, if the configure stamp has
a timestamp greater than the compile timestamp for a given target, then
the compile task would rerun. Running the compile task again, however,
has no effect on other providers that depend on that target.
The exact format of the stamps is partly configurable. In modern
versions of BitBake, a hash is appended to the stamp so that if the
configuration changes, the stamp becomes invalid and the task is
automatically rerun. This hash, or signature used, is governed by the
signature policy that is configured (see the
:ref:`bitbake-user-manual/bitbake-user-manual-execution:checksums (signatures)`
section for information). It is also
possible to append extra metadata to the stamp using the
``[stamp-extra-info]`` task flag. For example, OpenEmbedded uses this
flag to make some tasks machine-specific.
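A hedged sketch of how such a flag could be attached to a task follows
(the task name and value are illustrative and not the exact
OpenEmbedded usage): ::

   # illustrative only; not the exact OpenEmbedded setting
   do_example_task[stamp-extra-info] = "${MACHINE}"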
.. note::
Some tasks are marked as "nostamp" tasks. No timestamp file is
created when these tasks are run. Consequently, "nostamp" tasks are
always rerun.
For more information on tasks, see the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:tasks` section.
Executing Tasks
===============
Tasks can be either a shell task or a Python task. For shell tasks,
BitBake writes a shell script to
``${``\ :term:`T`\ ``}/run.do_taskname.pid`` and then
executes the script. The generated shell script contains all the
exported variables, and the shell functions with all variables expanded.
Output from the shell script goes to the file
``${T}/log.do_taskname.pid``. Looking at the expanded shell functions in
the run file and the output in the log files is a useful debugging
technique.
For Python tasks, BitBake executes the task internally and logs
information to the controlling terminal. Future versions of BitBake will
write the functions to files similar to the way shell tasks are handled.
Logging will be handled in a way similar to shell tasks as well.
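For reference, a minimal illustration of the two task forms in a
recipe might look like the following (the task names and bodies are
placeholders only): ::

   # illustrative tasks; the names and bodies are placeholders
   do_example_shell() {
       echo "running a shell task"
   }
   addtask do_example_shell after do_configure before do_compile

   python do_example_python() {
       bb.note("running a python task")
   }
   addtask do_example_python after do_example_shell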
The order in which BitBake runs the tasks is controlled by its task
scheduler. It is possible to configure the scheduler and define custom
implementations for specific use cases. For more information, see these
variables that control the behavior:
- :term:`BB_SCHEDULER`
- :term:`BB_SCHEDULERS`
It is possible to have functions run before and after a task's main
function. This is done using the ``[prefuncs]`` and ``[postfuncs]``
flags of the task that lists the functions to run.
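A small, illustrative sketch (the function and task names are
hypothetical): ::

   # hypothetical task and function names
   do_example_task[prefuncs] += "example_before"
   do_example_task[postfuncs] += "example_after"

   python example_before() {
       bb.note("runs before the task's main function")
   }

   python example_after() {
       bb.note("runs after the task's main function")
   }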
.. _checksums:
Checksums (Signatures)
======================
A checksum is a unique signature of a task's inputs. The signature of a
task can be used to determine if a task needs to be run. Because it is a
change in a task's inputs that triggers running the task, BitBake needs
to detect all the inputs to a given task. For shell tasks, this turns
out to be fairly easy because BitBake generates a "run" shell script for
each task and it is possible to create a checksum that gives you a good
idea of when the task's data changes.
To complicate the problem, some things should not be included in the
checksum. First, there is the actual specific build path of a given task
- the working directory. It does not matter if the working directory
changes because it should not affect the output for target packages. The
simplistic approach for excluding the working directory is to set it to
some fixed value and create the checksum for the "run" script. BitBake
goes one step better and uses the
:term:`BB_HASHBASE_WHITELIST` variable
to define a list of variables that should never be included when
generating the signatures.
Another problem results from the "run" scripts containing functions that
might or might not get called. The incremental build solution contains
code that figures out dependencies between shell functions. This code is
used to prune the "run" scripts down to the minimum set, thereby
alleviating this problem and making the "run" scripts much more readable
as a bonus.
So far we have solutions for shell scripts. What about Python tasks? The
same approach applies even though these tasks are more difficult. The
process needs to figure out what variables a Python function accesses
and what functions it calls. Again, the incremental build solution
contains code that first figures out the variable and function
dependencies, and then creates a checksum for the data used as the input
to the task.
Like the working directory case, situations exist where dependencies
should be ignored. For these cases, you can instruct the build process
to ignore a dependency by using a line like the following: ::
PACKAGE_ARCHS[vardepsexclude] = "MACHINE"
This example ensures that the
``PACKAGE_ARCHS`` variable does not depend on the value of ``MACHINE``,
even if it does reference it.
Equally, there are cases where we need to add dependencies BitBake is
not able to find. You can accomplish this by using a line like the
following: ::
PACKAGE_ARCHS[vardeps] = "MACHINE"
This example explicitly
adds the ``MACHINE`` variable as a dependency for ``PACKAGE_ARCHS``.
Consider a case with in-line Python, for example, where BitBake is not
able to figure out dependencies. When running in debug mode (i.e. using
``-DDD``), BitBake produces output when it discovers something for which
it cannot figure out dependencies.
Thus far, this section has limited discussion to the direct inputs into
a task. Information based on direct inputs is referred to as the
"basehash" in the code. However, there is still the question of a task's
indirect inputs - the things that were already built and present in the
build directory. The checksum (or signature) for a particular task needs
to add the hashes of all the tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision. However, the
effect is to generate a master checksum that combines the basehash and
the hashes of the task's dependencies.
At the code level, there are a variety of ways both the basehash and the
dependent task hashes can be influenced. Within the BitBake
configuration file, we can give BitBake some extra information to help
it construct the basehash. The following statement effectively results
in a list of global variable dependency excludes - variables never
included in any checksum. This example uses variables from OpenEmbedded
to help illustrate the concept: ::
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL \
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
CCACHE_DIR EXTERNAL_TOOLCHAIN CCACHE CCACHE_DISABLE LICENSE_PATH SDKPKGSUFFIX"
The previous example excludes the work directory, which is part of
``TMPDIR``.
The rules for deciding which hashes of dependent tasks to include
through dependency chains are more complex and are generally
accomplished with a Python function. The code in
``meta/lib/oe/sstatesig.py`` shows two examples of this and also
illustrates how you can insert your own policy into the system if so
desired. This file defines the two basic signature generators
OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash". By default, there
is a dummy "noop" signature handler enabled in BitBake. This means that
behavior is unchanged from previous versions. ``OE-Core`` uses the
"OEBasicHash" signature handler by default through this setting in the
``bitbake.conf`` file: ::
BB_SIGNATURE_HANDLER ?= "OEBasicHash"
The "OEBasicHash" ``BB_SIGNATURE_HANDLER`` is the same as the "OEBasic"
version but adds the task hash to the stamp files. This results in any
metadata change that changes the task hash, automatically causing the
task to be run again. This removes the need to bump
:term:`PR` values, and changes to metadata automatically
ripple across the build.
It is also worth noting that the end result of these signature
generators is to make some dependency and hash information available to
the build. This information includes:
- ``BB_BASEHASH_task-``\ *taskname*: The base hashes for each task in the
recipe.
- ``BB_BASEHASH_``\ *filename:taskname*: The base hashes for each
dependent task.
- ``BBHASHDEPS_``\ *filename:taskname*: The task dependencies for
each task.
- ``BB_TASKHASH``: The hash of the currently running task.
It is worth noting that BitBake's "-S" option lets you debug BitBake's
processing of signatures. The options passed to -S allow different
debugging modes to be used, either using BitBake's own debug functions
or possibly those defined in the metadata/signature handler itself. The
simplest parameter to pass is "none", which causes a set of signature
information to be written out into ``STAMPS_DIR`` corresponding to the
targets specified. The other currently available parameter is
"printdiff", which causes BitBake to try to establish the closest
signature match it can (e.g. in the sstate cache) and then run
``bitbake-diffsigs`` over the matches to determine the stamps and delta
where these two stamp trees diverge.
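For example, invocations might look like the following (the target
name is a placeholder): ::

   # "some-target" is a placeholder recipe or target name
   $ bitbake -S none some-target
   $ bitbake -S printdiff some-target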
.. note::
It is likely that future versions of BitBake will provide other
signature handlers triggered through additional "-S" parameters.
You can find more information on checksum metadata in the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:task checksums and setscene`
section.
Setscene
========
The setscene process enables BitBake to handle "pre-built" artifacts.
The ability to handle and reuse these artifacts allows BitBake the
luxury of not having to build something from scratch every time.
Instead, BitBake can use, when possible, existing build artifacts.
BitBake needs to have reliable data indicating whether or not an
artifact is compatible. Signatures, described in the previous section,
provide an ideal way of representing whether an artifact is compatible.
If a signature is the same, an object can be reused.
If an object can be reused, the problem then becomes how to replace a
given task or set of tasks with the pre-built artifact. BitBake solves
the problem with the "setscene" process.
When BitBake is asked to build a given target, before building anything,
it first asks whether cached information is available for any of the
targets it's building, or any of the intermediate targets. If cached
information is available, BitBake uses this information instead of
running the main tasks.
BitBake first calls the function defined by the
:term:`BB_HASHCHECK_FUNCTION` variable
with a list of tasks and corresponding hashes it wants to build. This
function is designed to be fast and returns a list of the tasks for
which it believes it can obtain artifacts.
Next, for each of the tasks that were returned as possibilities, BitBake
executes a setscene version of the task that the possible artifact
covers. Setscene versions of a task have the string "_setscene" appended
to the task name. So, for example, the task with the name ``xxx`` has a
setscene task named ``xxx_setscene``. The setscene version of the task
executes and provides the necessary artifacts returning either success
or failure.
As previously mentioned, an artifact can cover more than one task. For
example, it is pointless to obtain a compiler if you already have the
compiled binary. To handle this, BitBake calls the
:term:`BB_SETSCENE_DEPVALID` function for
each successful setscene task to know whether or not it needs to obtain
the dependencies of that task.
Finally, after all the setscene tasks have executed, BitBake calls the
function listed in
:term:`BB_SETSCENE_VERIFY_FUNCTION2`
with the list of tasks BitBake thinks have been "covered". The metadata
can then ensure that this list is correct and can inform BitBake that it
wants specific tasks to be run regardless of the setscene result.
You can find more information on setscene metadata in the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:task checksums and setscene`
section.
Logging
=======
In addition to the standard command line option to control how verbose
builds are when executed, BitBake also supports user-defined
configuration of the `Python
logging <https://docs.python.org/3/library/logging.html>`__ facilities
through the :term:`BB_LOGCONFIG` variable. This
variable defines a JSON or YAML `logging
configuration <https://docs.python.org/3/library/logging.config.html>`__
that will be intelligently merged into the default configuration. The
logging configuration is merged using the following rules:
- The user defined configuration will completely replace the default
configuration if top level key ``bitbake_merge`` is set to the value
``False``. In this case, all other rules are ignored.
- The user configuration must have a top level ``version`` which must
match the value of the default configuration.
- Any keys defined in the ``handlers``, ``formatters``, or ``filters``
sections will be merged into the same section in the default configuration,
with user-specified keys replacing a default one if there
is a conflict. In practice, this means that if both the default
configuration and user configuration specify a handler named
``myhandler``, the user defined one will replace the default. To
prevent the user from inadvertently replacing a default handler,
formatter, or filter, all of the default ones are named with a prefix
of "``BitBake.``"
- If a logger is defined by the user with the key ``bitbake_merge`` set
to ``False``, that logger will be completely replaced by user
configuration. In this case, no other rules will apply to that
logger.
- All user defined ``filter`` and ``handlers`` properties for a given
logger will be merged with corresponding properties from the default
logger. For example, if the user configuration adds a filter called
``myFilter`` to the ``BitBake.SigGen``, and the default configuration
adds a filter called ``BitBake.defaultFilter``, both filters will be
applied to the logger.
As an example, consider the following user logging configuration file
which logs all Hash Equivalence related messages of VERBOSE or higher to
a file called ``hashequiv.log`` ::
{
"version": 1,
"handlers": {
"autobuilderlog": {
"class": "logging.FileHandler",
"formatter": "logfileFormatter",
"level": "DEBUG",
"filename": "hashequiv.log",
"mode": "w"
}
},
"formatters": {
"logfileFormatter": {
"format": "%(name)s: %(levelname)s: %(message)s"
}
},
"loggers": {
"BitBake.SigGen.HashEquiv": {
"level": "VERBOSE",
"handlers": ["autobuilderlog"]
},
"BitBake.RunQueue.HashEquiv": {
"level": "VERBOSE",
"handlers": ["autobuilderlog"]
}
}
}
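To make BitBake pick up such a configuration, the ``BB_LOGCONFIG``
variable would point at the file, for example from ``local.conf`` (the
file name and location are illustrative only): ::

   # the file name and location are illustrative only
   BB_LOGCONFIG = "${TOPDIR}/hashequiv-logging.json"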

File diff suppressed because it is too large.


@@ -1,689 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
=====================
File Download Support
=====================
|
BitBake's fetch module is a standalone piece of library code that deals
with the intricacies of downloading source code and files from remote
systems. Fetching source code is one of the cornerstones of building
software. As such, this module forms an important part of BitBake.
The current fetch module is called "fetch2" and refers to the fact that
it is the second major version of the API. The original version is
obsolete and has been removed from the codebase. Thus, in all cases,
"fetch" refers to "fetch2" in this manual.
The Download (Fetch)
====================
BitBake takes several steps when fetching source code or files. The
fetcher codebase deals with two distinct processes in order: obtaining
the files from somewhere (cached or otherwise) and then unpacking those
files into a specific location and perhaps in a specific way. Getting
and unpacking the files is often optionally followed by patching.
Patching, however, is not covered by this module.
The code to execute the first part of this process, a fetch, looks
something like the following: ::
src_uri = (d.getVar('SRC_URI') or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
fetcher.download()
This code sets up an instance of the fetch class. The instance uses a
space-separated list of URLs from the :term:`SRC_URI`
variable and then calls the ``download`` method to download the files.
The instantiation of the fetch class is usually followed by: ::
rootdir = l.getVar('WORKDIR')
fetcher.unpack(rootdir)
This code unpacks the downloaded files to the directory specified by ``WORKDIR``.
.. note::
For convenience, the naming in these examples matches the variables
used by OpenEmbedded. If you want to see the above code in action,
examine the OpenEmbedded class file ``base.bbclass``
.
The ``SRC_URI`` and ``WORKDIR`` variables are not hardcoded into the
fetcher, since those fetcher methods can be (and are) called with
different variable names. In OpenEmbedded for example, the shared state
(sstate) code uses the fetch module to fetch the sstate files.
When the ``download()`` method is called, BitBake tries to resolve the
URLs by looking for source files in a specific search order:
- *Pre-mirror Sites:* BitBake first uses pre-mirrors to try and find
source files. These locations are defined using the
:term:`PREMIRRORS` variable.
- *Source URI:* If pre-mirrors fail, BitBake uses the original URL (e.g
from ``SRC_URI``).
- *Mirror Sites:* If fetch failures occur, BitBake next uses mirror
locations as defined by the :term:`MIRRORS` variable.
For each URL passed to the fetcher, the fetcher calls the submodule that
handles that particular URL type. This behavior can be the source of
some confusion when you are providing URLs for the ``SRC_URI`` variable.
Consider the following two URLs: ::
http://git.yoctoproject.org/git/poky;protocol=git
git://git.yoctoproject.org/git/poky;protocol=http
In the former case, the URL is passed to the ``wget`` fetcher, which does not
understand "git". Therefore, the latter case is the correct form since the Git
fetcher does know how to use HTTP as a transport.
Here are some examples that show commonly used mirror definitions: ::
PREMIRRORS ?= "\
bzr://.*/.\* http://somemirror.org/sources/ \\n \
cvs://.*/.\* http://somemirror.org/sources/ \\n \
git://.*/.\* http://somemirror.org/sources/ \\n \
hg://.*/.\* http://somemirror.org/sources/ \\n \
osc://.*/.\* http://somemirror.org/sources/ \\n \
p4://.*/.\* http://somemirror.org/sources/ \\n \
svn://.*/.\* http://somemirror.org/sources/ \\n"
MIRRORS =+ "\
ftp://.*/.\* http://somemirror.org/sources/ \\n \
http://.*/.\* http://somemirror.org/sources/ \\n \
https://.*/.\* http://somemirror.org/sources/ \\n"
It is useful to note that BitBake
supports cross-URLs. It is possible to mirror a Git repository on an
HTTP server as a tarball. This is what the ``git://`` mapping in the
previous example does.
Since network accesses are slow, BitBake maintains a cache of files
downloaded from the network. Any source files that are not local (i.e.
downloaded from the Internet) are placed into the download directory,
which is specified by the :term:`DL_DIR` variable.
File integrity is of key importance for reproducing builds. For
non-local archive downloads, the fetcher code can verify SHA-256 and MD5
checksums to ensure the archives have been downloaded correctly. You can
specify these checksums by using the ``SRC_URI`` variable with the
appropriate varflags as follows: ::
SRC_URI[md5sum] = "value"
SRC_URI[sha256sum] = "value"
You can also specify the checksums as
parameters on the ``SRC_URI`` as shown below: ::
SRC_URI = "http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d"
If multiple URIs exist, you can specify the checksums either directly as
in the previous example, or you can name the URLs. The following syntax
shows how you name the URIs: ::
SRC_URI = "http://example.com/foobar.tar.bz2;name=foo"
SRC_URI[foo.md5sum] = 4a8e0f237e961fd7785d19d07fdb994d
After a file has been downloaded and
has had its checksum checked, a ".done" stamp is placed in ``DL_DIR``.
BitBake uses this stamp during subsequent builds to avoid downloading or
comparing a checksum for the file again.
.. note::
It is assumed that local storage is safe from data corruption. If
this were not the case, there would be bigger issues to worry about.
If :term:`BB_STRICT_CHECKSUM` is set, any
download without a checksum triggers an error message. The
:term:`BB_NO_NETWORK` variable can be used to
make any attempted network access a fatal error, which is useful for
checking that mirrors are complete as well as other things.
.. _bb-the-unpack:
The Unpack
==========
The unpack process usually immediately follows the download. For all
URLs except Git URLs, BitBake uses the common ``unpack`` method.
A number of parameters exist that you can specify within the URL to
govern the behavior of the unpack stage:
- *unpack:* Controls whether the URL components are unpacked. If set to
"1", which is the default, the components are unpacked. If set to
"0", the unpack stage leaves the file alone. This parameter is useful
when you want an archive to be copied in and not be unpacked.
- *dos:* Applies to ``.zip`` and ``.jar`` files and specifies whether
to use DOS line ending conversion on text files.
- *basepath:* Instructs the unpack stage to strip the specified
directories from the source path when unpacking.
- *subdir:* Unpacks the specific URL to the specified subdirectory
within the root directory.
The unpack call automatically decompresses and extracts files with ".Z",
".z", ".gz", ".xz", ".zip", ".jar", ".ipk", ".rpm". ".srpm", ".deb" and
".bz2" extensions as well as various combinations of tarball extensions.
As mentioned, the Git fetcher has its own unpack method that is
optimized to work with Git trees. Basically, this method works by
cloning the tree into the final directory. The process is completed
using references so that there is only one central copy of the Git
metadata needed.
.. _bb-fetchers:
Fetchers
========
As mentioned earlier, the URL prefix determines which fetcher submodule
BitBake uses. Each submodule can support different URL parameters, which
are described in the following sections.
.. _local-file-fetcher:
Local file fetcher (``file://``)
--------------------------------
This submodule handles URLs that begin with ``file://``. The filename
you specify within the URL can be either an absolute or relative path to
a file. If the filename is relative, the contents of the
:term:`FILESPATH` variable is used in the same way
``PATH`` is used to find executables. If the file cannot be found, it is
assumed that it is available in :term:`DL_DIR` by the
time the ``download()`` method is called.
If you specify a directory, the entire directory is unpacked.
Here are a couple of example URLs, the first relative and the second
absolute: ::
SRC_URI = "file://relativefile.patch"
SRC_URI = "file:///Users/ich/very_important_software"
.. _http-ftp-fetcher:
HTTP/FTP wget fetcher (``http://``, ``ftp://``, ``https://``)
-------------------------------------------------------------
This fetcher obtains files from web and FTP servers. Internally, the
fetcher uses the wget utility.
The executable and parameters used are specified by the
``FETCHCMD_wget`` variable, which defaults to sensible values. The
fetcher supports a parameter "downloadfilename" that allows the name of
the downloaded file to be specified. Specifying the name of the
downloaded file is useful for avoiding collisions in
:term:`DL_DIR` when dealing with multiple files that
have the same name.
Some example URLs are as follows: ::
SRC_URI = "http://oe.handhelds.org/not_there.aac"
SRC_URI = "ftp://oe.handhelds.org/not_there_as_well.aac"
SRC_URI = "ftp://you@oe.handhelds.org/home/you/secret.plan"
.. note::
Because URL parameters are delimited by semi-colons, this can
introduce ambiguity when parsing URLs that also contain semi-colons,
for example:
::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
Such URLs should be modified by replacing semi-colons with '&'
characters:
::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47"
In most cases this should work. Treating semi-colons and '&' in
queries identically is recommended by the World Wide Web Consortium
(W3C). Note that due to the nature of the URL, you may have to
specify the name of the downloaded file as well:
::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47;downloadfilename=myfile.bz2"
.. _cvs-fetcher:
CVS fetcher (``cvs://``)
-------------------------
This submodule handles checking out files from the CVS version control
system. You can configure it using a number of different variables:
- :term:`FETCHCMD_cvs <FETCHCMD>`: The name of the executable to use when running
the ``cvs`` command. This name is usually "cvs".
- :term:`SRCDATE`: The date to use when fetching the CVS source code. A
special value of "now" causes the checkout to be updated on every
build.
- :term:`CVSDIR`: Specifies where a temporary
checkout is saved. The location is often ``DL_DIR/cvs``.
- CVS_PROXY_HOST: The name to use as a "proxy=" parameter to the
``cvs`` command.
- CVS_PROXY_PORT: The port number to use as a "proxyport="
parameter to the ``cvs`` command.
As well as the standard username and password URL syntax, you can also
configure the fetcher with various URL parameters:
The supported parameters are as follows:
- *"method":* The protocol over which to communicate with the CVS
server. By default, this protocol is "pserver". If "method" is set to
"ext", BitBake examines the "rsh" parameter and sets ``CVS_RSH``. You
can use "dir" for local directories.
- *"module":* Specifies the module to check out. You must supply this
parameter.
- *"tag":* Describes which CVS TAG should be used for the checkout. By
default, the TAG is empty.
- *"date":* Specifies a date. If no "date" is specified, the
:term:`SRCDATE` of the configuration is used to
checkout a specific date. The special value of "now" causes the
checkout to be updated on every build.
- *"localdir":* Used to rename the module. Effectively, you are
renaming the output directory to which the module is unpacked. You
are forcing the module into a special directory relative to
:term:`CVSDIR`.
- *"rsh":* Used in conjunction with the "method" parameter.
- *"scmdata":* Causes the CVS metadata to be maintained in the tarball
the fetcher creates when set to "keep". The tarball is expanded into
the work directory. By default, the CVS metadata is removed.
- *"fullpath":* Controls whether the resulting checkout is at the
module level, which is the default, or is at deeper paths.
- *"norecurse":* Causes the fetcher to only checkout the specified
directory with no recurse into any subdirectories.
- *"port":* The port to which the CVS server connects.
Some example URLs are as follows: ::
SRC_URI = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
SRC_URI = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
.. _svn-fetcher:
Subversion (SVN) Fetcher (``svn://``)
-------------------------------------
This fetcher submodule fetches code from the Subversion source control
system. The executable used is specified by ``FETCHCMD_svn``, which
defaults to "svn". The fetcher's temporary working directory is set by
:term:`SVNDIR`, which is usually ``DL_DIR/svn``.
The supported parameters are as follows:
- *"module":* The name of the svn module to checkout. You must provide
this parameter. You can think of this parameter as the top-level
directory of the repository data you want.
- *"path_spec":* A specific directory in which to checkout the
specified svn module.
- *"protocol":* The protocol to use, which defaults to "svn". If
"protocol" is set to "svn+ssh", the "ssh" parameter is also used.
- *"rev":* The revision of the source code to checkout.
- *"scmdata":* Causes the ".svn" directories to be available during
compile-time when set to "keep". By default, these directories are
removed.
- *"ssh":* An optional parameter used when "protocol" is set to
"svn+ssh". You can use this parameter to specify the ssh program used
by svn.
- *"transportuser":* When required, sets the username for the
transport. By default, this parameter is empty. The transport
username is different than the username used in the main URL, which
is passed to the subversion command.
Following are three examples using svn: ::
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
SRC_URI = "svn://myrepos/proj1;module=trunk;protocol=http;path_spec=${MY_DIR}/proj1"
.. _git-fetcher:
Git Fetcher (``git://``)
------------------------
This fetcher submodule fetches code from the Git source control system.
The fetcher works by creating a bare clone of the remote into
:term:`GITDIR`, which is usually ``DL_DIR/git2``. This
bare clone is then cloned into the work directory during the unpack
stage when a specific tree is checked out. This is done using alternates
and by reference to minimize the amount of duplicate data on the disk
and make the unpack process fast. The executable used can be set with
``FETCHCMD_git``.
This fetcher supports the following parameters:
- *"protocol":* The protocol used to fetch the files. The default is
"git" when a hostname is set. If a hostname is not set, the Git
protocol is "file". You can also use "http", "https", "ssh" and
"rsync".
- *"nocheckout":* Tells the fetcher to not checkout source code when
unpacking when set to "1". Set this option for the URL where there is
a custom routine to checkout code. The default is "0".
- *"rebaseable":* Indicates that the upstream Git repository can be
rebased. You should set this parameter to "1" if revisions can become
detached from branches. In this case, the source mirror tarball is
done per revision, which has a loss of efficiency. Rebasing the
upstream Git repository could cause the current revision to disappear
from the upstream repository. This option reminds the fetcher to
preserve the local cache carefully for future use. The default value
for this parameter is "0".
- *"nobranch":* Tells the fetcher to not check the SHA validation for
the branch when set to "1". The default is "0". Set this option for
the recipe that refers to the commit that is valid for a tag instead
of the branch.
- *"bareclone":* Tells the fetcher to clone a bare clone into the
destination directory without checking out a working tree. Only the
raw Git metadata is provided. This parameter implies the "nocheckout"
parameter as well.
- *"branch":* The branch(es) of the Git tree to clone. If unset, this
is assumed to be "master". The number of branch parameters much match
the number of name parameters.
- *"rev":* The revision to use for the checkout. The default is
"master".
- *"tag":* Specifies a tag to use for the checkout. To correctly
resolve tags, BitBake must access the network. For that reason, tags
are often not used. As far as Git is concerned, the "tag" parameter
behaves effectively the same as the "rev" parameter.
- *"subpath":* Limits the checkout to a specific subpath of the tree.
By default, the whole tree is checked out.
- *"destsuffix":* The name of the path in which to place the checkout.
By default, the path is ``git/``.
- *"usehead":* Enables local ``git://`` URLs to use the current branch
HEAD as the revision for use with ``AUTOREV``. The "usehead"
parameter implies no branch and only works when the transfer protocol
is ``file://``.
Here are some example URLs: ::
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
.. note::
Specifying passwords directly in ``git://`` urls is not supported.
There are several reasons: ``SRC_URI`` is often written out to logs and
other places, and that could easily leak passwords; it is also all too
easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
and ``~/.ssh/config`` files can be used as alternatives.
.. _gitsm-fetcher:
Git Submodule Fetcher (``gitsm://``)
------------------------------------
This fetcher submodule inherits from the :ref:`Git
fetcher<bitbake-user-manual/bitbake-user-manual-fetching:git fetcher
(\`\`git://\`\`)>` and extends that fetcher's behavior by fetching a
repository's submodules. :term:`SRC_URI` is passed to the Git fetcher as
described in the :ref:`bitbake-user-manual/bitbake-user-manual-fetching:git
fetcher (\`\`git://\`\`)` section.
.. note::
You must clean a recipe when switching between '``git://``' and
'``gitsm://``' URLs.
The Git Submodules fetcher is not a complete fetcher implementation.
The fetcher has known issues where it does not use the normal source
mirroring infrastructure properly. Further, the submodule sources it
fetches are not visible to the licensing and source archiving
infrastructures.
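Aside from fetching the submodules, the ``gitsm://`` fetcher accepts the same
URL parameters as the Git fetcher. Here is a hedged example that assumes a
hypothetical repository URL: ::
SRC_URI = "gitsm://git.example.com/myproject.git;protocol=https;branch=master"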
.. _clearcase-fetcher:
ClearCase Fetcher (``ccrc://``)
-------------------------------
This fetcher submodule fetches code from a
`ClearCase <http://en.wikipedia.org/wiki/Rational_ClearCase>`__
repository.
To use this fetcher, make sure your recipe has proper
:term:`SRC_URI`, :term:`SRCREV`, and
:term:`PV` settings. Here is an example: ::
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
PV = "${@d.getVar("SRCREV", False).replace("/", "+")}"
The fetcher uses the ``rcleartool`` or
``cleartool`` remote client, depending on which one is available.
Following are options for the ``SRC_URI`` statement:
- *vob*: The name, which must include the prepending "/" character,
of the ClearCase VOB. This option is required.
- *module*: The module, which must include the prepending "/"
character, in the selected VOB.
.. note::
The module and vob options are combined to create the load rule in the
view config spec. As an example, consider the vob and module values from
the SRC_URI statement at the start of this section. Combining those values
results in the following: ::
load /example_vob/example_module
- *proto*: The protocol, which can be either ``http`` or ``https``.
By default, the fetcher creates a configuration specification. If you
want this specification written to an area other than the default, use
the ``CCASE_CUSTOM_CONFIG_SPEC`` variable in your recipe to define where
the specification is written.
.. note::
The SRCREV loses its functionality if you specify this variable. However,
SRCREV is still used to label the archive after a fetch even though it does
not define what is fetched.
Here are a couple of other behaviors worth mentioning:
- When using ``cleartool``, the login of ``cleartool`` is handled by
the system. The login requires no special steps.
- In order to use ``rcleartool`` with authenticated users, an
"rcleartool login" is necessary before using the fetcher.
.. _perforce-fetcher:
Perforce Fetcher (``p4://``)
----------------------------
This fetcher submodule fetches code from the
`Perforce <https://www.perforce.com/>`__ source control system. The
executable used is specified by ``FETCHCMD_p4``, which defaults to "p4".
The fetcher's temporary working directory is set by
:term:`P4DIR`, which defaults to "DL_DIR/p4".
The fetcher does not make use of a Perforce client; instead, it
relies on ``p4 files`` to retrieve a list of
files and ``p4 print`` to transfer the content
of those files locally.
To use this fetcher, make sure your recipe has proper
:term:`SRC_URI`, :term:`SRCREV`, and
:term:`PV` values. The p4 executable is able to use the
config file defined by your system's ``P4CONFIG`` environment variable
in order to define the Perforce server URL and port, username, and
password if you do not wish to keep those values in a recipe itself. If
you choose not to use ``P4CONFIG``, or to explicitly set variables that
``P4CONFIG`` can contain, you can specify the ``P4PORT`` value, which is
the server's URL and port number, and you can specify a username and
password directly in your recipe within ``SRC_URI``.
Here is an example that relies on ``P4CONFIG`` to specify the server URL
and port, username, and password, and fetches the Head Revision: ::
SRC_URI = "p4://example-depot/main/source/..."
SRCREV = "${AUTOREV}"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
Here is an example that specifies the server URL and port, username, and
password, and fetches a Revision based on a Label: ::
P4PORT = "tcp:p4server.example.net:1666"
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
SRCREV = "release-1.0"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
.. note::
You should always set S to "${WORKDIR}/p4" in your recipe.
By default, the fetcher strips the depot location from the local file paths. In
the above example, the content of ``example-depot/main/source/`` will be placed
in ``${WORKDIR}/p4``. For situations where preserving parts of the remote depot
paths locally is desirable, the fetcher supports two parameters:
- *"module":*
The top-level depot location or directory to fetch. The value of this
parameter can also point to a single file within the depot, in which case
the local file path will include the module path.
- *"remotepath":*
When used with the value "``keep``", the fetcher will mirror the full depot
paths locally for the specified location, even in combination with the
``module`` parameter.
Here is an example use of the ``module`` parameter: ::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/..."
In this case, the content of the top-level directory ``source/`` will be fetched
to ``${P4DIR}``, including the directory itself. The top-level directory will
be accessible at ``${P4DIR}/source/``.
Here is an example use of the ``remotepath`` parameter: ::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/...;remotepath=keep"
In this case, the content of the top-level directory ``source/`` will be fetched
to ``${P4DIR}``, but the complete depot paths will be mirrored locally. The
top-level directory will be accessible at
``${P4DIR}/example-depot/main/source/``.
.. _repo-fetcher:
Repo Fetcher (``repo://``)
--------------------------
This fetcher submodule fetches code from the ``google-repo`` source control
system. The fetcher works by initiating and syncing sources of the
repository into :term:`REPODIR`, which is usually
``${DL_DIR}/repo``.
This fetcher supports the following parameters:
- *"protocol":* Protocol to fetch the repository manifest (default:
git).
- *"branch":* Branch or tag of repository to get (default: master).
- *"manifest":* Name of the manifest file (default: ``default.xml``).
Here are some example URLs: ::
SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
.. _az-fetcher:
Az Fetcher (``az://``)
--------------------------
This submodule fetches data from an
`Azure Storage account <https://docs.microsoft.com/en-us/azure/storage/>`__.
It inherits its functionality from the HTTP wget fetcher, but modifies its
behavior to accommodate the use of a
`Shared Access Signature (SAS) <https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview>`__
for non-public data.
Such functionality is controlled by the following variable:
- :term:`AZ_SAS`: The Azure Storage Shared Access Signature provides secure
delegated access to resources. If this variable is set, the Az Fetcher
uses it when fetching artifacts from the cloud.
You can specify the AZ_SAS variable as shown below: ::
AZ_SAS = "se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>"
Here is an example URL: ::
SRC_URI = "az://<azure-storage-account>.blob.core.windows.net/<foo_container>/<bar_file>"
It can also be used when setting mirror definitions using the :term:`PREMIRRORS` variable.
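For example, here is a sketch of such a mirror definition; the storage
account and the ``<mirror_container>`` path are hypothetical placeholders
holding pre-fetched source archives: ::
PREMIRRORS ?= "\
git://.*/.*     az://<azure-storage-account>.blob.core.windows.net/<mirror_container>/ \n \
http://.*/.*    az://<azure-storage-account>.blob.core.windows.net/<mirror_container>/ \n \
https://.*/.*   az://<azure-storage-account>.blob.core.windows.net/<mirror_container>/ \n"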
Other Fetchers
--------------
Fetch submodules also exist for the following:
- Bazaar (``bzr://``)
- Mercurial (``hg://``)
- npm (``npm://``)
- OSC (``osc://``)
- Secure FTP (``sftp://``)
- Secure Shell (``ssh://``)
- Trees using Git Annex (``gitannex://``)
No documentation currently exists for these lesser-used fetcher
submodules. However, you might find the code helpful and readable.
Auto Revisions
==============
We need to document ``AUTOREV`` and ``SRCREV_FORMAT`` here.


@@ -0,0 +1,868 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter>
<title>File Download Support</title>
<para>
BitBake's fetch module is a standalone piece of library code
that deals with the intricacies of downloading source code
and files from remote systems.
Fetching source code is one of the cornerstones of building software.
As such, this module forms an important part of BitBake.
</para>
<para>
The current fetch module is called "fetch2" and refers to the
fact that it is the second major version of the API.
The original version is obsolete and has been removed from the codebase.
Thus, in all cases, "fetch" refers to "fetch2" in this
manual.
</para>
<section id='the-download-fetch'>
<title>The Download (Fetch)</title>
<para>
BitBake takes several steps when fetching source code or files.
The fetcher codebase deals with two distinct processes in order:
obtaining the files from somewhere (cached or otherwise)
and then unpacking those files into a specific location and
perhaps in a specific way.
Getting and unpacking the files is often optionally followed
by patching.
Patching, however, is not covered by this module.
</para>
<para>
The code to execute the first part of this process, a fetch,
looks something like the following:
<literallayout class='monospaced'>
src_uri = (d.getVar('SRC_URI') or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
fetcher.download()
</literallayout>
This code sets up an instance of the fetch class.
The instance uses a space-separated list of URLs from the
<link linkend='var-bb-SRC_URI'><filename>SRC_URI</filename></link>
variable and then calls the <filename>download</filename>
method to download the files.
</para>
<para>
The instantiation of the fetch class is usually followed by:
<literallayout class='monospaced'>
rootdir = d.getVar('WORKDIR')
fetcher.unpack(rootdir)
</literallayout>
This code unpacks the downloaded files to the location
specified by <filename>WORKDIR</filename>.
<note>
For convenience, the naming in these examples matches
the variables used by OpenEmbedded.
If you want to see the above code in action, examine
the OpenEmbedded class file <filename>base.bbclass</filename>.
</note>
The <filename>SRC_URI</filename> and <filename>WORKDIR</filename>
variables are not hardcoded into the fetcher, since those fetcher
methods can be (and are) called with different variable names.
In OpenEmbedded for example, the shared state (sstate) code uses
the fetch module to fetch the sstate files.
</para>
<para>
When the <filename>download()</filename> method is called,
BitBake tries to resolve the URLs by looking for source files
in a specific search order:
<itemizedlist>
<listitem><para><emphasis>Pre-mirror Sites:</emphasis>
BitBake first uses pre-mirrors to try and find source files.
These locations are defined using the
<link linkend='var-bb-PREMIRRORS'><filename>PREMIRRORS</filename></link>
variable.
</para></listitem>
<listitem><para><emphasis>Source URI:</emphasis>
If pre-mirrors fail, BitBake uses the original URL (e.g. from
<filename>SRC_URI</filename>).
</para></listitem>
<listitem><para><emphasis>Mirror Sites:</emphasis>
If fetch failures occur, BitBake next uses mirror locations as
defined by the
<link linkend='var-bb-MIRRORS'><filename>MIRRORS</filename></link>
variable.
</para></listitem>
</itemizedlist>
</para>
<para>
For each URL passed to the fetcher, the fetcher
calls the submodule that handles that particular URL type.
This behavior can be the source of some confusion when you
are providing URLs for the <filename>SRC_URI</filename>
variable.
Consider the following two URLs:
<literallayout class='monospaced'>
http://git.yoctoproject.org/git/poky;protocol=git
git://git.yoctoproject.org/git/poky;protocol=http
</literallayout>
In the former case, the URL is passed to the
<filename>wget</filename> fetcher, which does not
understand "git".
Therefore, the latter case is the correct form since the
Git fetcher does know how to use HTTP as a transport.
</para>
<para>
Here are some examples that show commonly used mirror
definitions:
<literallayout class='monospaced'>
PREMIRRORS ?= "\
bzr://.*/.* http://somemirror.org/sources/ \n \
cvs://.*/.* http://somemirror.org/sources/ \n \
git://.*/.* http://somemirror.org/sources/ \n \
hg://.*/.* http://somemirror.org/sources/ \n \
osc://.*/.* http://somemirror.org/sources/ \n \
p4://.*/.* http://somemirror.org/sources/ \n \
svn://.*/.* http://somemirror.org/sources/ \n"
MIRRORS =+ "\
ftp://.*/.* http://somemirror.org/sources/ \n \
http://.*/.* http://somemirror.org/sources/ \n \
https://.*/.* http://somemirror.org/sources/ \n"
</literallayout>
It is useful to note that BitBake supports
cross-URLs.
It is possible to mirror a Git repository on an HTTP
server as a tarball.
This is what the <filename>git://</filename> mapping in
the previous example does.
</para>
<para>
Since network accesses are slow, BitBake maintains a
cache of files downloaded from the network.
Any source files that are not local (i.e.
downloaded from the Internet) are placed into the download
directory, which is specified by the
<link linkend='var-bb-DL_DIR'><filename>DL_DIR</filename></link>
variable.
</para>
<para>
File integrity is of key importance for reproducing builds.
For non-local archive downloads, the fetcher code can verify
SHA-256 and MD5 checksums to ensure the archives have been
downloaded correctly.
You can specify these checksums by using the
<filename>SRC_URI</filename> variable with the appropriate
varflags as follows:
<literallayout class='monospaced'>
SRC_URI[md5sum] = "<replaceable>value</replaceable>"
SRC_URI[sha256sum] = "<replaceable>value</replaceable>"
</literallayout>
You can also specify the checksums as parameters on the
<filename>SRC_URI</filename> as shown below:
<literallayout class='monospaced'>
SRC_URI = "http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d"
</literallayout>
If multiple URIs exist, you can specify the checksums either
directly as in the previous example, or you can name the URLs.
The following syntax shows how you name the URIs:
<literallayout class='monospaced'>
SRC_URI = "http://example.com/foobar.tar.bz2;name=foo"
SRC_URI[foo.md5sum] = "4a8e0f237e961fd7785d19d07fdb994d"
</literallayout>
After a file has been downloaded and has had its checksum checked,
a ".done" stamp is placed in <filename>DL_DIR</filename>.
BitBake uses this stamp during subsequent builds to avoid
downloading or comparing a checksum for the file again.
<note>
It is assumed that local storage is safe from data corruption.
If this were not the case, there would be bigger issues to worry about.
</note>
</para>
<para>
If
<link linkend='var-bb-BB_STRICT_CHECKSUM'><filename>BB_STRICT_CHECKSUM</filename></link>
is set, any download without a checksum triggers an
error message.
The
<link linkend='var-bb-BB_NO_NETWORK'><filename>BB_NO_NETWORK</filename></link>
variable can be used to make any attempted network access a fatal
error, which is useful for checking that mirrors are complete
as well as other things.
</para>
</section>
<section id='bb-the-unpack'>
<title>The Unpack</title>
<para>
The unpack process usually immediately follows the download.
For all URLs except Git URLs, BitBake uses the common
<filename>unpack</filename> method.
</para>
<para>
A number of parameters exist that you can specify within the
URL to govern the behavior of the unpack stage:
<itemizedlist>
<listitem><para><emphasis>unpack:</emphasis>
Controls whether the URL components are unpacked.
If set to "1", which is the default, the components
are unpacked.
If set to "0", the unpack stage leaves the file alone.
This parameter is useful when you want an archive to be
copied in and not be unpacked.
</para></listitem>
<listitem><para><emphasis>dos:</emphasis>
Applies to <filename>.zip</filename> and
<filename>.jar</filename> files and specifies whether to
use DOS line ending conversion on text files.
</para></listitem>
<listitem><para><emphasis>basepath:</emphasis>
Instructs the unpack stage to strip the specified
directories from the source path when unpacking.
</para></listitem>
<listitem><para><emphasis>subdir:</emphasis>
Unpacks the specific URL to the specified subdirectory
within the root directory.
</para></listitem>
</itemizedlist>
The unpack call automatically decompresses and extracts files
with ".Z", ".z", ".gz", ".xz", ".zip", ".jar", ".ipk", ".rpm".
".srpm", ".deb" and ".bz2" extensions as well as various combinations
of tarball extensions.
</para>
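<para>
Here is a hedged sketch of two of these parameters in use; the URLs and
the <filename>mysources</filename> subdirectory are hypothetical:
<literallayout class='monospaced'>
SRC_URI = "http://example.com/sources.tar.gz;subdir=mysources"
SRC_URI = "http://example.com/installer.bin;unpack=0"
</literallayout>
The first URL is extracted into a <filename>mysources</filename>
subdirectory of the root directory, while the second is copied in
without being unpacked.
</para>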
<para>
As mentioned, the Git fetcher has its own unpack method that
is optimized to work with Git trees.
Basically, this method works by cloning the tree into the final
directory.
The process is completed using references so that there is
only one central copy of the Git metadata needed.
</para>
</section>
<section id='bb-fetchers'>
<title>Fetchers</title>
<para>
As mentioned earlier, the URL prefix determines which
fetcher submodule BitBake uses.
Each submodule can support different URL parameters,
which are described in the following sections.
</para>
<section id='local-file-fetcher'>
<title>Local file fetcher (<filename>file://</filename>)</title>
<para>
This submodule handles URLs that begin with
<filename>file://</filename>.
The filename you specify within the URL can be
either an absolute or relative path to a file.
If the filename is relative, the contents of the
<link linkend='var-bb-FILESPATH'><filename>FILESPATH</filename></link>
variable is used in the same way
<filename>PATH</filename> is used to find executables.
If the file cannot be found, it is assumed that it is available in
<link linkend='var-bb-DL_DIR'><filename>DL_DIR</filename></link>
by the time the <filename>download()</filename> method is called.
</para>
<para>
If you specify a directory, the entire directory is
unpacked.
</para>
<para>
Here are a couple of example URLs, the first relative and
the second absolute:
<literallayout class='monospaced'>
SRC_URI = "file://relativefile.patch"
SRC_URI = "file:///Users/ich/very_important_software"
</literallayout>
</para>
</section>
<section id='http-ftp-fetcher'>
<title>HTTP/FTP wget fetcher (<filename>http://</filename>, <filename>ftp://</filename>, <filename>https://</filename>)</title>
<para>
This fetcher obtains files from web and FTP servers.
Internally, the fetcher uses the wget utility.
</para>
<para>
The executable and parameters used are specified by the
<filename>FETCHCMD_wget</filename> variable, which defaults
to sensible values.
The fetcher supports a parameter "downloadfilename" that
allows the name of the downloaded file to be specified.
Specifying the name of the downloaded file is useful
for avoiding collisions in
<link linkend='var-bb-DL_DIR'><filename>DL_DIR</filename></link>
when dealing with multiple files that have the same name.
</para>
<para>
Some example URLs are as follows:
<literallayout class='monospaced'>
SRC_URI = "http://oe.handhelds.org/not_there.aac"
SRC_URI = "ftp://oe.handhelds.org/not_there_as_well.aac"
SRC_URI = "ftp://you@oe.handhelds.org/home/you/secret.plan"
</literallayout>
</para>
<note>
Because URL parameters are delimited by semi-colons, this can
introduce ambiguity when parsing URLs that also contain semi-colons,
for example:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
</literallayout>
Such URLs should be modified by replacing semi-colons with '&amp;' characters:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&amp;a=snapshot&amp;h=a5dd47"
</literallayout>
In most cases this should work. Treating semi-colons and '&amp;' in queries
identically is recommended by the World Wide Web Consortium (W3C).
Note that due to the nature of the URL, you may have to specify the name
of the downloaded file as well:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&amp;a=snapshot&amp;h=a5dd47;downloadfilename=myfile.bz2"
</literallayout>
</note>
</section>
<section id='cvs-fetcher'>
<title>CVS fetcher (<filename>cvs://</filename>)</title>
<para>
This submodule handles checking out files from the
CVS version control system.
You can configure it using a number of different variables:
<itemizedlist>
<listitem><para><emphasis><filename>FETCHCMD_cvs</filename>:</emphasis>
The name of the executable to use when running
the <filename>cvs</filename> command.
This name is usually "cvs".
</para></listitem>
<listitem><para><emphasis><filename>SRCDATE</filename>:</emphasis>
The date to use when fetching the CVS source code.
A special value of "now" causes the checkout to
be updated on every build.
</para></listitem>
<listitem><para><emphasis><link linkend='var-bb-CVSDIR'><filename>CVSDIR</filename></link>:</emphasis>
Specifies where a temporary checkout is saved.
The location is often <filename>DL_DIR/cvs</filename>.
</para></listitem>
<listitem><para><emphasis><filename>CVS_PROXY_HOST</filename>:</emphasis>
The name to use as a "proxy=" parameter to the
<filename>cvs</filename> command.
</para></listitem>
<listitem><para><emphasis><filename>CVS_PROXY_PORT</filename>:</emphasis>
The port number to use as a "proxyport=" parameter to
the <filename>cvs</filename> command.
</para></listitem>
</itemizedlist>
As well as the standard username and password URL syntax,
you can also configure the fetcher with various URL parameters:
</para>
<para>
The supported parameters are as follows:
<itemizedlist>
<listitem><para><emphasis>"method":</emphasis>
The protocol over which to communicate with the CVS
server.
By default, this protocol is "pserver".
If "method" is set to "ext", BitBake examines the
"rsh" parameter and sets <filename>CVS_RSH</filename>.
You can use "dir" for local directories.
</para></listitem>
<listitem><para><emphasis>"module":</emphasis>
Specifies the module to check out.
You must supply this parameter.
</para></listitem>
<listitem><para><emphasis>"tag":</emphasis>
Describes which CVS TAG should be used for
the checkout.
By default, the TAG is empty.
</para></listitem>
<listitem><para><emphasis>"date":</emphasis>
Specifies a date.
If no "date" is specified, the
<link linkend='var-bb-SRCDATE'><filename>SRCDATE</filename></link>
of the configuration is used to checkout a specific date.
The special value of "now" causes the checkout to be
updated on every build.
</para></listitem>
<listitem><para><emphasis>"localdir":</emphasis>
Used to rename the module.
Effectively, you are renaming the output directory
to which the module is unpacked.
You are forcing the module into a special
directory relative to
<link linkend='var-bb-CVSDIR'><filename>CVSDIR</filename></link>.
</para></listitem>
<listitem><para><emphasis>"rsh"</emphasis>
Used in conjunction with the "method" parameter.
</para></listitem>
<listitem><para><emphasis>"scmdata":</emphasis>
Causes the CVS metadata to be maintained in the tarball
the fetcher creates when set to "keep".
The tarball is expanded into the work directory.
By default, the CVS metadata is removed.
</para></listitem>
<listitem><para><emphasis>"fullpath":</emphasis>
Controls whether the resulting checkout is at the
module level, which is the default, or is at deeper
paths.
</para></listitem>
<listitem><para><emphasis>"norecurse":</emphasis>
Causes the fetcher to only checkout the specified
directory without recursing into any subdirectories.
</para></listitem>
<listitem><para><emphasis>"port":</emphasis>
The port to which the CVS server connects.
</para></listitem>
</itemizedlist>
Some example URLs are as follows:
<literallayout class='monospaced'>
SRC_URI = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
SRC_URI = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
</literallayout>
</para>
</section>
<section id='svn-fetcher'>
<title>Subversion (SVN) Fetcher (<filename>svn://</filename>)</title>
<para>
This fetcher submodule fetches code from the
Subversion source control system.
The executable used is specified by
<filename>FETCHCMD_svn</filename>, which defaults
to "svn".
The fetcher's temporary working directory is set by
<link linkend='var-bb-SVNDIR'><filename>SVNDIR</filename></link>,
which is usually <filename>DL_DIR/svn</filename>.
</para>
<para>
The supported parameters are as follows:
<itemizedlist>
<listitem><para><emphasis>"module":</emphasis>
The name of the svn module to checkout.
You must provide this parameter.
You can think of this parameter as the top-level
directory of the repository data you want.
</para></listitem>
<listitem><para><emphasis>"path_spec":</emphasis>
A specific directory in which to checkout the
specified svn module.
</para></listitem>
<listitem><para><emphasis>"protocol":</emphasis>
The protocol to use, which defaults to "svn".
If "protocol" is set to "svn+ssh", the "ssh"
parameter is also used.
</para></listitem>
<listitem><para><emphasis>"rev":</emphasis>
The revision of the source code to checkout.
</para></listitem>
<listitem><para><emphasis>"scmdata":</emphasis>
Causes the “.svn” directories to be available during
compile-time when set to "keep".
By default, these directories are removed.
</para></listitem>
<listitem><para><emphasis>"ssh":</emphasis>
An optional parameter used when "protocol" is set
to "svn+ssh".
You can use this parameter to specify the ssh
program used by svn.
</para></listitem>
<listitem><para><emphasis>"transportuser":</emphasis>
When required, sets the username for the transport.
By default, this parameter is empty.
The transport username is different than the username
used in the main URL, which is passed to the subversion
command.
</para></listitem>
</itemizedlist>
Following are three examples using svn:
<literallayout class='monospaced'>
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
SRC_URI = "svn://myrepos/proj1;module=trunk;protocol=http;path_spec=${MY_DIR}/proj1"
</literallayout>
</para>
</section>
<section id='git-fetcher'>
<title>Git Fetcher (<filename>git://</filename>)</title>
<para>
This fetcher submodule fetches code from the Git
source control system.
The fetcher works by creating a bare clone of the
remote into
<link linkend='var-bb-GITDIR'><filename>GITDIR</filename></link>,
which is usually <filename>DL_DIR/git2</filename>.
This bare clone is then cloned into the work directory during the
unpack stage when a specific tree is checked out.
This is done using alternates and by reference to
minimize the amount of duplicate data on the disk and
make the unpack process fast.
The executable used can be set with
<filename>FETCHCMD_git</filename>.
</para>
<para>
This fetcher supports the following parameters:
<itemizedlist>
<listitem><para><emphasis>"protocol":</emphasis>
The protocol used to fetch the files.
The default is "git" when a hostname is set.
If a hostname is not set, the Git protocol is "file".
You can also use "http", "https", "ssh" and "rsync".
</para></listitem>
<listitem><para><emphasis>"nocheckout":</emphasis>
Tells the fetcher to not checkout source code when
unpacking when set to "1".
Set this option for the URL where there is a custom
routine to checkout code.
The default is "0".
</para></listitem>
<listitem><para><emphasis>"rebaseable":</emphasis>
Indicates that the upstream Git repository can be rebased.
You should set this parameter to "1" if
revisions can become detached from branches.
In this case, the source mirror tarball is done per
revision, which has a loss of efficiency.
Rebasing the upstream Git repository could cause the
current revision to disappear from the upstream repository.
This option reminds the fetcher to preserve the local cache
carefully for future use.
The default value for this parameter is "0".
</para></listitem>
<listitem><para><emphasis>"nobranch":</emphasis>
Tells the fetcher to not check the SHA validation
for the branch when set to "1".
The default is "0".
Set this option for the recipe that refers to
the commit that is valid for a tag instead of
the branch.
</para></listitem>
<listitem><para><emphasis>"bareclone":</emphasis>
Tells the fetcher to clone a bare clone into the
destination directory without checking out a working tree.
Only the raw Git metadata is provided.
This parameter implies the "nocheckout" parameter as well.
</para></listitem>
<listitem><para><emphasis>"branch":</emphasis>
The branch(es) of the Git tree to clone.
If unset, this is assumed to be "master".
The number of branch parameters must match the number of
name parameters.
</para></listitem>
<listitem><para><emphasis>"rev":</emphasis>
The revision to use for the checkout.
The default is "master".
</para></listitem>
<listitem><para><emphasis>"tag":</emphasis>
Specifies a tag to use for the checkout.
To correctly resolve tags, BitBake must access the
network.
For that reason, tags are often not used.
As far as Git is concerned, the "tag" parameter behaves
effectively the same as the "rev" parameter.
</para></listitem>
<listitem><para><emphasis>"subpath":</emphasis>
Limits the checkout to a specific subpath of the tree.
By default, the whole tree is checked out.
</para></listitem>
<listitem><para><emphasis>"destsuffix":</emphasis>
The name of the path in which to place the checkout.
By default, the path is <filename>git/</filename>.
</para></listitem>
<listitem><para><emphasis>"usehead":</emphasis>
Enables local <filename>git://</filename> URLs to use the
current branch HEAD as the revision for use with
<filename>AUTOREV</filename>.
The "usehead" parameter implies no branch and only works
when the transfer protocol is
<filename>file://</filename>.
</para></listitem>
</itemizedlist>
Here are some example URLs:
<literallayout class='monospaced'>
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
</literallayout>
</para>
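<para>
As a further illustration, here is a hedged sketch that selects a named
branch; the repository URL is hypothetical, and in practice
<filename>SRCREV</filename> would usually be pinned to a specific commit
rather than <filename>${AUTOREV}</filename>:
<literallayout class='monospaced'>
SRC_URI = "git://git.example.com/myproject.git;protocol=https;branch=stable"
SRCREV = "${AUTOREV}"
</literallayout>
</para>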
</section>
<section id='gitsm-fetcher'>
<title>Git Submodule Fetcher (<filename>gitsm://</filename>)</title>
<para>
This fetcher submodule inherits from the
<link linkend='git-fetcher'>Git fetcher</link> and extends
that fetcher's behavior by fetching a repository's submodules.
<link linkend='var-bb-SRC_URI'><filename>SRC_URI</filename></link>
is passed to the Git fetcher as described in the
"<link linkend='git-fetcher'>Git Fetcher (<filename>git://</filename>)</link>"
section.
<note>
<title>Notes and Warnings</title>
<para>
You must clean a recipe when switching between
'<filename>git://</filename>' and
'<filename>gitsm://</filename>' URLs.
</para>
<para>
The Git Submodules fetcher is not a complete fetcher
implementation.
The fetcher has known issues where it does not use the
normal source mirroring infrastructure properly. Further,
the submodule sources it fetches are not visible to the
licensing and source archiving infrastructures.
</para>
</note>
</para>
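<para>
Aside from fetching the submodules, the fetcher accepts the same URL
parameters as the Git fetcher. Here is a hedged example that assumes a
hypothetical repository URL:
<literallayout class='monospaced'>
SRC_URI = "gitsm://git.example.com/myproject.git;protocol=https;branch=master"
</literallayout>
</para>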
</section>
<section id='clearcase-fetcher'>
<title>ClearCase Fetcher (<filename>ccrc://</filename>)</title>
<para>
This fetcher submodule fetches code from a
<ulink url='http://en.wikipedia.org/wiki/Rational_ClearCase'>ClearCase</ulink>
repository.
</para>
<para>
To use this fetcher, make sure your recipe has proper
<link linkend='var-bb-SRC_URI'><filename>SRC_URI</filename></link>,
<link linkend='var-bb-SRCREV'><filename>SRCREV</filename></link>, and
<link linkend='var-bb-PV'><filename>PV</filename></link> settings.
Here is an example:
<literallayout class='monospaced'>
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
PV = "${@d.getVar("SRCREV", False).replace("/", "+")}"
</literallayout>
The fetcher uses the <filename>rcleartool</filename> or
<filename>cleartool</filename> remote client, depending on
which one is available.
</para>
<para>
Following are options for the <filename>SRC_URI</filename>
statement:
<itemizedlist>
<listitem><para><emphasis><filename>vob</filename></emphasis>:
The name, which must include the
prepending "/" character, of the ClearCase VOB.
This option is required.
</para></listitem>
<listitem><para><emphasis><filename>module</filename></emphasis>:
The module, which must include the
prepending "/" character, in the selected VOB.
<note>
The <filename>module</filename> and <filename>vob</filename>
options are combined to create the <filename>load</filename> rule in
the view config spec.
As an example, consider the <filename>vob</filename> and
<filename>module</filename> values from the
<filename>SRC_URI</filename> statement at the start of this section.
Combining those values results in the following:
<literallayout class='monospaced'>
load /example_vob/example_module
</literallayout>
</note>
</para></listitem>
<listitem><para><emphasis><filename>proto</filename></emphasis>:
The protocol, which can be either <filename>http</filename> or
<filename>https</filename>.
</para></listitem>
</itemizedlist>
</para>
<para>
By default, the fetcher creates a configuration specification.
If you want this specification written to an area other than the default,
use the <filename>CCASE_CUSTOM_CONFIG_SPEC</filename> variable
in your recipe to define where the specification is written.
<note>
The <filename>SRCREV</filename> loses its functionality if you
specify this variable.
However, <filename>SRCREV</filename> is still used to label the
archive after a fetch even though it does not define what is
fetched.
</note>
</para>
<para>
Here are a couple of other behaviors worth mentioning:
<itemizedlist>
<listitem><para>
When using <filename>cleartool</filename>, the login of
<filename>cleartool</filename> is handled by the system.
The login requires no special steps.
</para></listitem>
<listitem><para>
In order to use <filename>rcleartool</filename> with authenticated
users, an "rcleartool login" is necessary before using the fetcher.
</para></listitem>
</itemizedlist>
</para>
</section>
<section id='perforce-fetcher'>
<title>Perforce Fetcher (<filename>p4://</filename>)</title>
<para>
This fetcher submodule fetches code from the
<ulink url='https://www.perforce.com/'>Perforce</ulink>
source control system.
The executable used is specified by
<filename>FETCHCMD_p4</filename>, which defaults
to "p4".
The fetcher's temporary working directory is set by
<link linkend='var-bb-P4DIR'><filename>P4DIR</filename></link>,
which defaults to "DL_DIR/p4".
</para>
<para>
To use this fetcher, make sure your recipe has proper
<link linkend='var-bb-SRC_URI'><filename>SRC_URI</filename></link>,
<link linkend='var-bb-SRCREV'><filename>SRCREV</filename></link>, and
<link linkend='var-bb-PV'><filename>PV</filename></link> values.
The p4 executable is able to use the config file defined by your
system's <filename>P4CONFIG</filename> environment variable in
order to define the Perforce server URL and port, username, and
password if you do not wish to keep those values in a recipe
itself.
If you choose not to use <filename>P4CONFIG</filename>,
or to explicitly set variables that <filename>P4CONFIG</filename>
can contain, you can specify the <filename>P4PORT</filename> value,
which is the server's URL and port number, and you can
specify a username and password directly in your recipe within
<filename>SRC_URI</filename>.
</para>
<para>
Here is an example that relies on <filename>P4CONFIG</filename>
to specify the server URL and port, username, and password, and
fetches the Head Revision:
<literallayout class='monospaced'>
SRC_URI = "p4://example-depot/main/source/..."
SRCREV = "${AUTOREV}"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
</literallayout>
</para>
<para>
Here is an example that specifies the server URL and port,
username, and password, and fetches a Revision based on a Label:
<literallayout class='monospaced'>
P4PORT = "tcp:p4server.example.net:1666"
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
SRCREV = "release-1.0"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
</literallayout>
<note>
You should always set <filename>S</filename>
to <filename>"${WORKDIR}/p4"</filename> in your recipe.
</note>
</para>
</section>
<section id='repo-fetcher'>
<title>Repo Fetcher (<filename>repo://</filename>)</title>
<para>
This fetcher submodule fetches code from the
<filename>google-repo</filename> source control system.
The fetcher works by initiating and syncing sources of the
repository into
<link linkend='var-bb-REPODIR'><filename>REPODIR</filename></link>,
which is usually
<link linkend='var-bb-DL_DIR'><filename>DL_DIR</filename></link><filename>/repo</filename>.
</para>
<para>
This fetcher supports the following parameters:
<itemizedlist>
<listitem><para>
<emphasis>"protocol":</emphasis>
Protocol to fetch the repository manifest (default: git).
</para></listitem>
<listitem><para>
<emphasis>"branch":</emphasis>
Branch or tag of repository to get (default: master).
</para></listitem>
<listitem><para>
<emphasis>"manifest":</emphasis>
Name of the manifest file (default: <filename>default.xml</filename>).
</para></listitem>
</itemizedlist>
Here are some example URLs:
<literallayout class='monospaced'>
SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
</literallayout>
</para>
</section>
<section id='other-fetchers'>
<title>Other Fetchers</title>
<para>
Fetch submodules also exist for the following:
<itemizedlist>
<listitem><para>
Bazaar (<filename>bzr://</filename>)
</para></listitem>
<listitem><para>
Mercurial (<filename>hg://</filename>)
</para></listitem>
<listitem><para>
npm (<filename>npm://</filename>)
</para></listitem>
<listitem><para>
OSC (<filename>osc://</filename>)
</para></listitem>
<listitem><para>
Secure FTP (<filename>sftp://</filename>)
</para></listitem>
<listitem><para>
Secure Shell (<filename>ssh://</filename>)
</para></listitem>
<listitem><para>
Trees using Git Annex (<filename>gitannex://</filename>)
</para></listitem>
</itemizedlist>
No documentation currently exists for these lesser-used
fetcher submodules.
However, you might find the code helpful and readable.
</para>
</section>
</section>
<section id='auto-revisions'>
<title>Auto Revisions</title>
<para>
We need to document <filename>AUTOREV</filename> and
<filename>SRCREV_FORMAT</filename> here.
</para>
</section>
</chapter>


@@ -1,415 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
===================
Hello World Example
===================
BitBake Hello World
===================
The simplest example commonly used to demonstrate any new programming
language or tool is the "`Hello
World <http://en.wikipedia.org/wiki/Hello_world_program>`__" example.
This appendix demonstrates, in tutorial form, Hello World within the
context of BitBake. The tutorial describes how to create a new project
and the applicable metadata files necessary to allow BitBake to build
it.
Obtaining BitBake
=================
See the :ref:`bitbake-user-manual/bitbake-user-manual-hello:obtaining bitbake` section for
information on how to obtain BitBake. Once you have the source code on
your machine, the BitBake directory appears as follows: ::
$ ls -al
total 100
drwxrwxr-x. 9 wmat wmat 4096 Jan 31 13:44 .
drwxrwxr-x. 3 wmat wmat 4096 Feb 4 10:45 ..
-rw-rw-r--. 1 wmat wmat 365 Nov 26 04:55 AUTHORS
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 bin
drwxrwxr-x. 4 wmat wmat 4096 Jan 31 13:44 build
-rw-rw-r--. 1 wmat wmat 16501 Nov 26 04:55 ChangeLog
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 classes
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 conf
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 contrib
-rw-rw-r--. 1 wmat wmat 17987 Nov 26 04:55 COPYING
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 doc
-rw-rw-r--. 1 wmat wmat 69 Nov 26 04:55 .gitignore
-rw-rw-r--. 1 wmat wmat 849 Nov 26 04:55 HEADER
drwxrwxr-x. 5 wmat wmat 4096 Jan 31 13:44 lib
-rw-rw-r--. 1 wmat wmat 195 Nov 26 04:55 MANIFEST.in
-rw-rw-r--. 1 wmat wmat 2887 Nov 26 04:55 TODO
At this point, you should have BitBake cloned to a directory that
matches the previous listing except for dates and user names.
Setting Up the BitBake Environment
==================================
First, you need to be sure that you can run BitBake. Set your working
directory to where your local BitBake files are and run the following
command: ::
$ ./bin/bitbake --version
BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0
The console output tells you what version
you are running.
The recommended method to run BitBake is from a directory of your
choice. To be able to run BitBake from any directory, you need to add
the directory containing the executable binary to your shell's environment
``PATH`` variable. First, look at your current ``PATH`` variable by
entering the following: ::
$ echo $PATH
Next, add the directory location
for the BitBake binary to the ``PATH``. Here is an example that adds the
``/home/scott-lenovo/bitbake/bin`` directory to the front of the
``PATH`` variable: ::
$ export PATH=/home/scott-lenovo/bitbake/bin:$PATH
You should now be able to enter the ``bitbake`` command from the command
line while working from any directory.
The Hello World Example
=======================
The overall goal of this exercise is to build a complete "Hello World"
example utilizing task and layer concepts. Because this is how modern
projects such as OpenEmbedded and the Yocto Project utilize BitBake, the
example provides an excellent starting point for understanding BitBake.
To help you understand how to use BitBake to build targets, the example
starts with nothing but the ``bitbake`` command, which causes BitBake to
fail and report problems. The example progresses by adding pieces to the
build to eventually conclude with a working, minimal "Hello World"
example.
While every attempt is made to explain what is happening during the
example, the descriptions cannot cover everything. You can find further
information throughout this manual. Also, you can actively participate
in the :oe_lists:`/g/bitbake-devel`
discussion mailing list about the BitBake build tool.
.. note::
This example was inspired by and drew heavily from
`Mailing List post - The BitBake equivalent of "Hello, World!"
<http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
As stated earlier, the goal of this example is to eventually compile
"Hello World". However, it is unknown what BitBake needs and what you
have to provide in order to achieve that goal. Recall that BitBake
utilizes three types of metadata files:
:ref:`bitbake-user-manual/bitbake-user-manual-intro:configuration files`,
:ref:`bitbake-user-manual/bitbake-user-manual-intro:classes`, and
:ref:`bitbake-user-manual/bitbake-user-manual-intro:recipes`.
But where do they go? How does BitBake find
them? BitBake's error messaging helps you answer these types of
questions and helps you better understand exactly what is going on.
Following is the complete "Hello World" example.
#. **Create a Project Directory:** First, set up a directory for the
"Hello World" project. Here is how you can do so in your home
directory: ::
$ mkdir ~/hello
$ cd ~/hello
This is the directory that
BitBake will use to do all of its work. You can use this directory
to keep all the metafiles needed by BitBake. Having a project
directory is a good way to isolate your project.
#. **Run BitBake:** At this point, you have nothing but a project
directory. Run the ``bitbake`` command and see what it does: ::
$ bitbake
The BBPATH variable is not set and bitbake did not
find a conf/bblayers.conf file in the expected location.
Maybe you accidentally invoked bitbake from the wrong directory?
DEBUG: Removed the following variables from the environment:
GNOME_DESKTOP_SESSION_ID, XDG_CURRENT_DESKTOP,
GNOME_KEYRING_CONTROL, DISPLAY, SSH_AGENT_PID, LANG, no_proxy,
XDG_SESSION_PATH, XAUTHORITY, SESSION_MANAGER, SHLVL,
MANDATORY_PATH, COMPIZ_CONFIG_PROFILE, WINDOWID, EDITOR,
GPG_AGENT_INFO, SSH_AUTH_SOCK, GDMSESSION, GNOME_KEYRING_PID,
XDG_SEAT_PATH, XDG_CONFIG_DIRS, LESSOPEN, DBUS_SESSION_BUS_ADDRESS,
_, XDG_SESSION_COOKIE, DESKTOP_SESSION, LESSCLOSE, DEFAULTS_PATH,
UBUNTU_MENUPROXY, OLDPWD, XDG_DATA_DIRS, COLORTERM, LS_COLORS
The majority of this output is specific to environment variables that
are not directly relevant to BitBake. However, the very first
message regarding the ``BBPATH`` variable and the
``conf/bblayers.conf`` file is relevant.
When you run BitBake, it begins looking for metadata files. The
:term:`BBPATH` variable is what tells BitBake where
to look for those files. ``BBPATH`` is not set and you need to set
it. Without ``BBPATH``, BitBake cannot find any configuration files
(``.conf``) or recipe files (``.bb``) at all. BitBake also cannot
find the ``bitbake.conf`` file.
#. **Setting BBPATH:** For this example, you can set ``BBPATH`` in
the same manner that you set ``PATH`` earlier in the appendix. You
should realize, though, that it is much more flexible to set the
``BBPATH`` variable up in a configuration file for each project.
From your shell, enter the following commands to set and export the
``BBPATH`` variable: ::
$ BBPATH="projectdirectory"
$ export BBPATH
Use your actual project directory in the command. BitBake uses that
directory to find the metadata it needs for your project.
.. note::
When specifying your project directory, do not use the tilde
("~") character as BitBake does not expand that character as the
shell would.
#. **Run BitBake:** Now that you have ``BBPATH`` defined, run the
``bitbake`` command again: ::
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 173, in parse_config_file
return bb.parse.handle(fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 99, in handle
return h['handle'](fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 120, in handle
abs_fn = resolve_file(fn, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 117, in resolve_file
raise IOError("file %s not found in %s" % (fn, bbpath))
IOError: file conf/bitbake.conf not found in /home/scott-lenovo/hello
ERROR: Unable to parse conf/bitbake.conf: file conf/bitbake.conf not found in /home/scott-lenovo/hello
This sample output shows that BitBake could not find the
``conf/bitbake.conf`` file in the project directory. This file is
the first thing BitBake must find in order to build a target. And,
since the project directory for this example is empty, you need to
provide a ``conf/bitbake.conf`` file.
#. **Creating conf/bitbake.conf:** The ``conf/bitbake.conf`` includes
a number of configuration variables BitBake uses for metadata and
recipe files. For this example, you need to create the file in your
project directory and define some key BitBake variables. For more
information on the ``bitbake.conf`` file, see
http://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
Use the following commands to create the ``conf`` directory in the
project directory: ::
$ mkdir conf
From within the ``conf`` directory,
use some editor to create the ``bitbake.conf`` so that it contains
the following: ::
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
TMPDIR = "${TOPDIR}/tmp"
CACHE = "${TMPDIR}/cache"
STAMP = "${TMPDIR}/${PN}/stamps"
T = "${TMPDIR}/${PN}/work"
B = "${TMPDIR}/${PN}"
.. note::
Without a value for ``PN``, the variables ``STAMP``, ``T``, and ``B`` prevent more
than one recipe from working. You can fix this either by setting ``PN`` to
a value similar to what OpenEmbedded and BitBake use in the default
``bitbake.conf`` file (see the previous example), or by manually updating each
recipe to set ``PN``. You will also need to include ``PN`` as part of the
``STAMP``, ``T``, and ``B`` variable definitions in the ``local.conf`` file.
The ``TMPDIR`` variable establishes a directory that BitBake uses
for build output and intermediate files other than the cached
information used by the
:ref:`bitbake-user-manual/bitbake-user-manual-execution:setscene`
process. Here, the ``TMPDIR`` directory is set to ``hello/tmp``.
.. tip::
You can always safely delete the tmp directory in order to rebuild a
BitBake target. The build process creates the directory for you when you
run BitBake.
For information about each of the other variables defined in this
example, check :term:`PN`, :term:`TOPDIR`, :term:`CACHE`, :term:`STAMP`,
:term:`T` or :term:`B` to take you to the definitions in the
glossary.
#. **Run BitBake:** After making sure that the ``conf/bitbake.conf`` file
exists, you can run the ``bitbake`` command again: ::
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 177, in _inherit
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 92, in inherit
include(fn, file, lineno, d, "inherit")
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 100, in include
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
In the sample output,
BitBake could not find the ``classes/base.bbclass`` file. You need
to create that file next.
#. **Creating classes/base.bbclass:** BitBake uses class files to
provide common code and functionality. The minimally required class
for BitBake is the ``classes/base.bbclass`` file. The ``base`` class
is implicitly inherited by every recipe. BitBake looks for the class
in the ``classes`` directory of the project (i.e. ``hello/classes``
in this example).
Create the ``classes`` directory as follows: ::
$ cd $HOME/hello
$ mkdir classes
Move to the ``classes`` directory and then create the
``base.bbclass`` file by inserting this single line: ::
addtask build
The minimal task that BitBake runs is the ``do_build`` task. This is
all the example needs in order to build the project. Of course, the
``base.bbclass`` can have much more depending on which build
environments BitBake is supporting.
#. **Run BitBake:** After making sure that the ``classes/base.bbclass``
file exists, you can run the ``bitbake`` command again: ::
$ bitbake
Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.
BitBake is finally reporting
no errors. However, you can see that it really does not have
anything to do. You need to create a recipe that gives BitBake
something to do.
#. **Creating a Layer:** While it is not really necessary for such a
small example, it is good practice to create a layer in which to
keep your code separate from the general metadata used by BitBake.
Thus, this example creates and uses a layer called "mylayer".
.. note::
You can find additional information on layers in the
":ref:`bitbake-user-manual/bitbake-user-manual-intro:Layers`" section.
Minimally, you need a recipe file and a layer configuration file in
your layer. The configuration file needs to be in the ``conf``
directory inside the layer. Use these commands to set up the layer
and the ``conf`` directory: ::
$ cd $HOME
$ mkdir mylayer
$ cd mylayer
$ mkdir conf
Move to the ``conf`` directory and create a ``layer.conf`` file that has the
following: ::
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/*.bb"
BBFILE_COLLECTIONS += "mylayer"
BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
For information on these variables, click on :term:`BBFILES`,
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS` or :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
to go to the definitions in the glossary.
You need to create the recipe file next. Inside your layer at the
top-level, use an editor and create a recipe file named
``printhello.bb`` that has the following: ::
DESCRIPTION = "Prints Hello World"
PN = 'printhello'
PV = '1'
python do_build() {
bb.plain("********************");
bb.plain("* *");
bb.plain("* Hello, World! *");
bb.plain("* *");
bb.plain("********************");
}
The recipe file simply provides
a description of the recipe, the name, version, and the ``do_build``
task, which prints out "Hello World" to the console. For more
information on :term:`DESCRIPTION`, :term:`PN` or :term:`PV`
follow the links to the glossary.
#. **Run BitBake With a Target:** Now that a BitBake target exists, run
the command and provide that target: ::
$ cd $HOME/hello
$ bitbake printhello
ERROR: no recipe files to build, check your BBPATH and BBFILES?
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
We have created the layer with the recipe and
the layer configuration file but it still seems that BitBake cannot
find the recipe. BitBake needs a ``conf/bblayers.conf`` that lists
the layers for the project. Without this file, BitBake cannot find
the recipe.
#. **Creating conf/bblayers.conf:** BitBake uses the
``conf/bblayers.conf`` file to locate layers needed for the project.
This file must reside in the ``conf`` directory of the project (i.e.
``hello/conf`` for this example).
Set your working directory to the ``hello/conf`` directory and then
create the ``bblayers.conf`` file so that it contains the following: ::
BBLAYERS ?= " \
/home/<you>/mylayer \
"
You need to provide your own information for ``you`` in the file.
#. **Run BitBake With a Target:** Now that you have supplied the
``bblayers.conf`` file, run the ``bitbake`` command and provide the
target: ::
$ bitbake printhello
Parsing recipes: 100% |##################################################################################|
Time: 00:00:00
Parsing of 1 .bb files complete (0 cached, 1 parsed). 1 targets, 0 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
NOTE: Preparing RunQueue
NOTE: Executing RunQueue Tasks
********************
* *
* Hello, World! *
* *
********************
NOTE: Tasks Summary: Attempted 1 tasks of which 0 didn't need to be rerun and all succeeded.
.. note::
After the first execution, re-running bitbake printhello again will not
result in a BitBake run that prints the same console output. The reason
for this is that the first time the printhello.bb recipe's do_build task
executes successfully, BitBake writes a stamp file for the task. Thus,
the next time you attempt to run the task using that same bitbake
command, BitBake notices the stamp and therefore determines that the task
does not need to be re-run. If you delete the tmp directory or run
bitbake -c clean printhello and then re-run the build, the "Hello,
World!" message will be printed again.


@@ -0,0 +1,513 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='hello-world-example'>
<title>Hello World Example</title>
<section id='bitbake-hello-world'>
<title>BitBake Hello World</title>
<para>
The simplest example commonly used to demonstrate any new
programming language or tool is the
"<ulink url="http://en.wikipedia.org/wiki/Hello_world_program">Hello World</ulink>"
example.
This appendix demonstrates, in tutorial form, Hello
World within the context of BitBake.
The tutorial describes how to create a new project
and the applicable metadata files necessary to allow
BitBake to build it.
</para>
</section>
<section id='example-obtaining-bitbake'>
<title>Obtaining BitBake</title>
<para>
See the
"<link linkend='obtaining-bitbake'>Obtaining BitBake</link>"
section for information on how to obtain BitBake.
Once you have the source code on your machine, the BitBake directory
appears as follows:
<literallayout class='monospaced'>
$ ls -al
total 100
drwxrwxr-x. 9 wmat wmat 4096 Jan 31 13:44 .
drwxrwxr-x. 3 wmat wmat 4096 Feb 4 10:45 ..
-rw-rw-r--. 1 wmat wmat 365 Nov 26 04:55 AUTHORS
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 bin
drwxrwxr-x. 4 wmat wmat 4096 Jan 31 13:44 build
-rw-rw-r--. 1 wmat wmat 16501 Nov 26 04:55 ChangeLog
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 classes
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 conf
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 contrib
-rw-rw-r--. 1 wmat wmat 17987 Nov 26 04:55 COPYING
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 doc
-rw-rw-r--. 1 wmat wmat 69 Nov 26 04:55 .gitignore
-rw-rw-r--. 1 wmat wmat 849 Nov 26 04:55 HEADER
drwxrwxr-x. 5 wmat wmat 4096 Jan 31 13:44 lib
-rw-rw-r--. 1 wmat wmat 195 Nov 26 04:55 MANIFEST.in
-rw-rw-r--. 1 wmat wmat 2887 Nov 26 04:55 TODO
</literallayout>
</para>
<para>
At this point, you should have BitBake cloned to
a directory that matches the previous listing except for
dates and user names.
</para>
</section>
<section id='setting-up-the-bitbake-environment'>
<title>Setting Up the BitBake Environment</title>
<para>
First, you need to be sure that you can run BitBake.
Set your working directory to where your local BitBake
files are and run the following command:
<literallayout class='monospaced'>
$ ./bin/bitbake --version
BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0
</literallayout>
The console output tells you what version you are running.
</para>
<para>
The recommended method to run BitBake is from a directory of your
choice.
To be able to run BitBake from any directory, you need to add the
executable binary to your shell's environment
<filename>PATH</filename> variable.
First, look at your current <filename>PATH</filename> variable
by entering the following:
<literallayout class='monospaced'>
$ echo $PATH
</literallayout>
Next, add the directory location for the BitBake binary to the
<filename>PATH</filename>.
Here is an example that adds the
<filename>/home/scott-lenovo/bitbake/bin</filename> directory
to the front of the <filename>PATH</filename> variable:
<literallayout class='monospaced'>
$ export PATH=/home/scott-lenovo/bitbake/bin:$PATH
</literallayout>
You should now be able to enter the <filename>bitbake</filename>
command from the command line while working from any directory.
</para>
</section>
<section id='the-hello-world-example'>
<title>The Hello World Example</title>
<para>
The overall goal of this exercise is to build a
complete "Hello World" example utilizing task and layer
concepts.
Because this is how modern projects such as OpenEmbedded and
the Yocto Project utilize BitBake, the example
provides an excellent starting point for understanding
BitBake.
</para>
<para>
To help you understand how to use BitBake to build targets,
the example starts with nothing but the <filename>bitbake</filename>
command, which causes BitBake to fail and report problems.
The example progresses by adding pieces to the build to
eventually conclude with a working, minimal "Hello World"
example.
</para>
<para>
While every attempt is made to explain what is happening during
the example, the descriptions cannot cover everything.
You can find further information throughout this manual.
Also, you can actively participate in the
<ulink url='http://lists.openembedded.org/mailman/listinfo/bitbake-devel'></ulink>
discussion mailing list about the BitBake build tool.
</para>
<note>
This example was inspired by and drew heavily from
<ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>.
</note>
<para>
As stated earlier, the goal of this example
is to eventually compile "Hello World".
However, it is unknown what BitBake needs and what you have
to provide in order to achieve that goal.
Recall that BitBake utilizes three types of metadata files:
<link linkend='configuration-files'>Configuration Files</link>,
<link linkend='classes'>Classes</link>, and
<link linkend='recipes'>Recipes</link>.
But where do they go?
How does BitBake find them?
BitBake's error messaging helps you answer these types of questions
and helps you better understand exactly what is going on.
</para>
<para>
Following is the complete "Hello World" example.
</para>
<orderedlist>
<listitem><para><emphasis>Create a Project Directory:</emphasis>
First, set up a directory for the "Hello World" project.
Here is how you can do so in your home directory:
<literallayout class='monospaced'>
$ mkdir ~/hello
$ cd ~/hello
</literallayout>
This is the directory that BitBake will use to do all of
its work.
You can use this directory to keep all the metafiles needed
by BitBake.
Having a project directory is a good way to isolate your
project.
</para></listitem>
<listitem><para><emphasis>Run BitBake:</emphasis>
At this point, you have nothing but a project directory.
Run the <filename>bitbake</filename> command and see what
it does:
<literallayout class='monospaced'>
$ bitbake
The BBPATH variable is not set and bitbake did not
find a conf/bblayers.conf file in the expected location.
Maybe you accidentally invoked bitbake from the wrong directory?
DEBUG: Removed the following variables from the environment:
GNOME_DESKTOP_SESSION_ID, XDG_CURRENT_DESKTOP,
GNOME_KEYRING_CONTROL, DISPLAY, SSH_AGENT_PID, LANG, no_proxy,
XDG_SESSION_PATH, XAUTHORITY, SESSION_MANAGER, SHLVL,
MANDATORY_PATH, COMPIZ_CONFIG_PROFILE, WINDOWID, EDITOR,
GPG_AGENT_INFO, SSH_AUTH_SOCK, GDMSESSION, GNOME_KEYRING_PID,
XDG_SEAT_PATH, XDG_CONFIG_DIRS, LESSOPEN, DBUS_SESSION_BUS_ADDRESS,
_, XDG_SESSION_COOKIE, DESKTOP_SESSION, LESSCLOSE, DEFAULTS_PATH,
UBUNTU_MENUPROXY, OLDPWD, XDG_DATA_DIRS, COLORTERM, LS_COLORS
</literallayout>
The majority of this output is specific to environment variables
that are not directly relevant to BitBake.
However, the very first message regarding the
<filename>BBPATH</filename> variable and the
<filename>conf/bblayers.conf</filename> file
is relevant.</para>
<para>
When you run BitBake, it begins looking for metadata files.
The
<link linkend='var-bb-BBPATH'><filename>BBPATH</filename></link>
variable is what tells BitBake where to look for those files.
<filename>BBPATH</filename> is not set and you need to set it.
Without <filename>BBPATH</filename>, BitBake cannot
find any configuration files (<filename>.conf</filename>)
or recipe files (<filename>.bb</filename>) at all.
BitBake also cannot find the <filename>bitbake.conf</filename>
file.
</para></listitem>
<listitem><para><emphasis>Setting <filename>BBPATH</filename>:</emphasis>
For this example, you can set <filename>BBPATH</filename>
in the same manner that you set <filename>PATH</filename>
earlier in the appendix.
You should realize, though, that it is much more flexible to set the
<filename>BBPATH</filename> variable up in a configuration
file for each project.</para>
<para>From your shell, enter the following commands to set and
export the <filename>BBPATH</filename> variable:
<literallayout class='monospaced'>
$ BBPATH="<replaceable>projectdirectory</replaceable>"
$ export BBPATH
</literallayout>
Use your actual project directory in the command.
BitBake uses that directory to find the metadata it needs for
your project.
<note>
When specifying your project directory, do not use the
tilde ("~") character as BitBake does not expand that character
as the shell would.
</note>
</para></listitem>
<listitem><para><emphasis>Run BitBake:</emphasis>
Now that you have <filename>BBPATH</filename> defined, run
the <filename>bitbake</filename> command again:
<literallayout class='monospaced'>
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 173, in parse_config_file
return bb.parse.handle(fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 99, in handle
return h['handle'](fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 120, in handle
abs_fn = resolve_file(fn, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 117, in resolve_file
raise IOError("file %s not found in %s" % (fn, bbpath))
IOError: file conf/bitbake.conf not found in /home/scott-lenovo/hello
ERROR: Unable to parse conf/bitbake.conf: file conf/bitbake.conf not found in /home/scott-lenovo/hello
</literallayout>
This sample output shows that BitBake could not find the
<filename>conf/bitbake.conf</filename> file in the project
directory.
This file is the first thing BitBake must find in order
to build a target.
And, since the project directory for this example is
empty, you need to provide a <filename>conf/bitbake.conf</filename>
file.
</para></listitem>
<listitem><para><emphasis>Creating <filename>conf/bitbake.conf</filename>:</emphasis>
The <filename>conf/bitbake.conf</filename> file includes a number of
configuration variables BitBake uses for metadata and recipe
files.
For this example, you need to create the file in your project directory
and define some key BitBake variables.
For more information on the <filename>bitbake.conf</filename> file,
see
<ulink url='http://git.openembedded.org/bitbake/tree/conf/bitbake.conf'></ulink>.
</para>
<para>Use the following commands to create the <filename>conf</filename>
directory in the project directory:
<literallayout class='monospaced'>
$ mkdir conf
</literallayout>
From within the <filename>conf</filename> directory, use
some editor to create the <filename>bitbake.conf</filename>
so that it contains the following:
<literallayout class='monospaced'>
<link linkend='var-bb-PN'>PN</link> = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
</literallayout>
<literallayout class='monospaced'>
TMPDIR = "${<link linkend='var-bb-TOPDIR'>TOPDIR</link>}/tmp"
<link linkend='var-bb-CACHE'>CACHE</link> = "${TMPDIR}/cache"
<link linkend='var-bb-STAMP'>STAMP</link> = "${TMPDIR}/${PN}/stamps"
<link linkend='var-bb-T'>T</link> = "${TMPDIR}/${PN}/work"
<link linkend='var-bb-B'>B</link> = "${TMPDIR}/${PN}"
</literallayout>
<note>
Without a value for <filename>PN</filename>, the
<filename>STAMP</filename>, <filename>T</filename>, and
<filename>B</filename> variable definitions prevent more
than one recipe from working. You can fix this either by
setting <filename>PN</filename> to have a value similar
to what OpenEmbedded and BitBake use in the default
<filename>bitbake.conf</filename> file (see the previous
example), or by manually updating each recipe to set
<filename>PN</filename>. You will also need to include
<filename>PN</filename> as part of the
<filename>STAMP</filename>, <filename>T</filename>, and
<filename>B</filename> variable definitions in the
<filename>local.conf</filename> file.
</note>
The <filename>TMPDIR</filename> variable establishes a directory
that BitBake uses for build output and intermediate files other
than the cached information used by the
<link linkend='setscene'>Setscene</link> process.
Here, the <filename>TMPDIR</filename> directory is set to
<filename>hello/tmp</filename>.
<note><title>Tip</title>
You can always safely delete the <filename>tmp</filename>
directory in order to rebuild a BitBake target.
The build process creates the directory for you
when you run BitBake.
</note></para>
<para>For information about each of the other variables defined in this
example, click on the links to take you to the definitions in
the glossary.
</para></listitem>
<listitem><para><emphasis>Run BitBake:</emphasis>
After making sure that the <filename>conf/bitbake.conf</filename>
file exists, you can run the <filename>bitbake</filename>
command again:
<literallayout class='monospaced'>
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 177, in _inherit
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 92, in inherit
include(fn, file, lineno, d, "inherit")
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 100, in include
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
</literallayout>
In the sample output, BitBake could not find the
<filename>classes/base.bbclass</filename> file.
You need to create that file next.
</para></listitem>
<listitem><para><emphasis>Creating <filename>classes/base.bbclass</filename>:</emphasis>
BitBake uses class files to provide common code and functionality.
The minimally required class for BitBake is the
<filename>classes/base.bbclass</filename> file.
The <filename>base</filename> class is implicitly inherited by
every recipe.
BitBake looks for the class in the <filename>classes</filename>
directory of the project (i.e. <filename>hello/classes</filename>
in this example).
</para>
<para>Create the <filename>classes</filename> directory as follows:
<literallayout class='monospaced'>
$ cd $HOME/hello
$ mkdir classes
</literallayout>
Move to the <filename>classes</filename> directory and then
create the <filename>base.bbclass</filename> file by inserting
this single line:
<literallayout class='monospaced'>
addtask build
</literallayout>
The minimal task that BitBake runs is the
<filename>do_build</filename> task.
This is all the example needs in order to build the project.
Of course, the <filename>base.bbclass</filename> can have much
more depending on which build environments BitBake is
supporting.
</para></listitem>
<listitem><para><emphasis>Run BitBake:</emphasis>
After making sure that the <filename>classes/base.bbclass</filename>
file exists, you can run the <filename>bitbake</filename>
command again:
<literallayout class='monospaced'>
$ bitbake
Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.
</literallayout>
BitBake is finally reporting no errors.
However, you can see that it really does not have anything
to do.
You need to create a recipe that gives BitBake something to do.
</para></listitem>
<listitem><para><emphasis>Creating a Layer:</emphasis>
While it is not really necessary for such a small example,
it is good practice to create a layer in which to keep your
code separate from the general metadata used by BitBake.
Thus, this example creates and uses a layer called "mylayer".
<note>
You can find additional information on layers in the
"<link linkend='layers'>Layers</link>" section.
</note></para>
<para>Minimally, you need a recipe file and a layer configuration
file in your layer.
The configuration file needs to be in the <filename>conf</filename>
directory inside the layer.
Use these commands to set up the layer and the <filename>conf</filename>
directory:
<literallayout class='monospaced'>
$ cd $HOME
$ mkdir mylayer
$ cd mylayer
$ mkdir conf
</literallayout>
Move to the <filename>conf</filename> directory and create a
<filename>layer.conf</filename> file that has the following:
<literallayout class='monospaced'>
BBPATH .= ":${<link linkend='var-bb-LAYERDIR'>LAYERDIR</link>}"
<link linkend='var-bb-BBFILES'>BBFILES</link> += "${LAYERDIR}/*.bb"
<link linkend='var-bb-BBFILE_COLLECTIONS'>BBFILE_COLLECTIONS</link> += "mylayer"
<link linkend='var-bb-BBFILE_PATTERN'>BBFILE_PATTERN_mylayer</link> := "^${LAYERDIR_RE}/"
</literallayout>
For information on these variables, click the links
to go to the definitions in the glossary.</para>
<para>You need to create the recipe file next.
Inside your layer at the top-level, use an editor and create
a recipe file named <filename>printhello.bb</filename> that
has the following:
<literallayout class='monospaced'>
<link linkend='var-bb-DESCRIPTION'>DESCRIPTION</link> = "Prints Hello World"
<link linkend='var-bb-PN'>PN</link> = 'printhello'
<link linkend='var-bb-PV'>PV</link> = '1'
python do_build() {
bb.plain("********************");
bb.plain("* *");
bb.plain("* Hello, World! *");
bb.plain("* *");
bb.plain("********************");
}
</literallayout>
The recipe file simply provides a description of the
recipe, the name, version, and the <filename>do_build</filename>
task, which prints out "Hello World" to the console.
For more information on these variables, follow the links
to the glossary.
</para></listitem>
<listitem><para><emphasis>Run BitBake With a Target:</emphasis>
Now that a BitBake target exists, run the command and provide
that target:
<literallayout class='monospaced'>
$ cd $HOME/hello
$ bitbake printhello
ERROR: no recipe files to build, check your BBPATH and BBFILES?
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
</literallayout>
We have created the layer with the recipe and the layer
configuration file but it still seems that BitBake cannot
find the recipe.
BitBake needs a <filename>conf/bblayers.conf</filename> that
lists the layers for the project.
Without this file, BitBake cannot find the recipe.
</para></listitem>
<listitem><para><emphasis>Creating <filename>conf/bblayers.conf</filename>:</emphasis>
BitBake uses the <filename>conf/bblayers.conf</filename> file
to locate layers needed for the project.
This file must reside in the <filename>conf</filename> directory
of the project (i.e. <filename>hello/conf</filename> for this
example).</para>
<para>Set your working directory to the <filename>hello/conf</filename>
directory and then create the <filename>bblayers.conf</filename>
file so that it contains the following:
<literallayout class='monospaced'>
BBLAYERS ?= " \
/home/&lt;you&gt;/mylayer \
"
</literallayout>
You need to provide your own information for
<filename>you</filename> in the file.
</para></listitem>
<listitem><para><emphasis>Run BitBake With a Target:</emphasis>
Now that you have supplied the <filename>bblayers.conf</filename>
file, run the <filename>bitbake</filename> command and provide
the target:
<literallayout class='monospaced'>
$ bitbake printhello
Parsing recipes: 100% |##################################################################################|
Time: 00:00:00
Parsing of 1 .bb files complete (0 cached, 1 parsed). 1 targets, 0 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
NOTE: Preparing RunQueue
NOTE: Executing RunQueue Tasks
********************
* *
* Hello, World! *
* *
********************
NOTE: Tasks Summary: Attempted 1 tasks of which 0 didn't need to be rerun and all succeeded.
</literallayout>
BitBake finds the <filename>printhello</filename> recipe and
successfully runs the task.
<note>
After the first execution, re-running
<filename>bitbake printhello</filename> again will not
result in a BitBake run that prints the same console
output.
The reason for this is that the first time the
<filename>printhello.bb</filename> recipe's
<filename>do_build</filename> task executes
successfully, BitBake writes a stamp file for the task.
Thus, the next time you attempt to run the task
using that same <filename>bitbake</filename> command,
BitBake notices the stamp and therefore determines
that the task does not need to be re-run.
If you delete the <filename>tmp</filename> directory
or run <filename>bitbake -c clean printhello</filename>
and then re-run the build, the "Hello, World!" message will
be printed again.
</note>
</para></listitem>
</orderedlist>
</section>
</appendix>


@@ -1,651 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
========
Overview
========
|
Welcome to the BitBake User Manual. This manual provides information on
the BitBake tool. The information attempts to be as independent as
possible regarding systems that use BitBake, such as OpenEmbedded and
the Yocto Project. In some cases, scenarios or examples within the
context of a build system are used in the manual to help with
understanding. For these cases, the manual clearly states the context.
.. _intro:
Introduction
============
Fundamentally, BitBake is a generic task execution engine that allows
shell and Python tasks to be run efficiently and in parallel while
working within complex inter-task dependency constraints. One of
BitBake's main users, OpenEmbedded, takes this core and builds embedded
Linux software stacks using a task-oriented approach.
Conceptually, BitBake is similar to GNU Make in some regards but has
significant differences:
- BitBake executes tasks according to provided metadata that builds up
the tasks. Metadata is stored in recipe (``.bb``) and related recipe
"append" (``.bbappend``) files, configuration (``.conf``) and
underlying include (``.inc``) files, and in class (``.bbclass``)
files. The metadata provides BitBake with instructions on what tasks
to run and the dependencies between those tasks.
- BitBake includes a fetcher library for obtaining source code from
various places such as local files, source control systems, or
websites.
- The instructions for each unit to be built (e.g. a piece of software)
are known as "recipe" files and contain all the information about the
unit (dependencies, source file locations, checksums, description and
so on).
- BitBake includes a client/server abstraction and can be used from a
command line or used as a service over XML-RPC and has several
different user interfaces.
History and Goals
=================
BitBake was originally a part of the OpenEmbedded project. It was
inspired by the Portage package management system used by the Gentoo
Linux distribution. On December 7, 2004, OpenEmbedded project team
member Chris Larson split the project into two distinct pieces:
- BitBake, a generic task executor
- OpenEmbedded, a metadata set utilized by BitBake
Today, BitBake is the primary basis of the
`OpenEmbedded <http://www.openembedded.org/>`__ project, which is being
used to build and maintain Linux distributions such as the `Angstrom
Distribution <http://www.angstrom-distribution.org/>`__, and which is
also being used as the build tool for Linux projects such as the `Yocto
Project <http://www.yoctoproject.org>`__.
Prior to BitBake, no other build tool adequately met the needs of an
aspiring embedded Linux distribution. All of the build systems used by
traditional desktop Linux distributions lacked important functionality,
and none of the ad hoc Buildroot-based systems, prevalent in the
embedded space, were scalable or maintainable.
Some important original goals for BitBake were:
- Handle cross-compilation.
- Handle inter-package dependencies (build time on target architecture,
build time on native architecture, and runtime).
- Support running any number of tasks within a given package,
including, but not limited to, fetching upstream sources, unpacking
them, patching them, configuring them, and so forth.
- Be Linux distribution agnostic for both build and target systems.
- Be architecture agnostic.
- Support multiple build and target operating systems (e.g. Cygwin, the
BSDs, and so forth).
- Be self-contained, rather than tightly integrated into the build
machine's root filesystem.
- Handle conditional metadata on the target architecture, operating
system, distribution, and machine.
- Make it easy to use the tools to supply local metadata and packages
against which to operate.
- Make it easy to use BitBake to collaborate between multiple projects for
their builds.
- Provide an inheritance mechanism to share common metadata between
many packages.
Over time it became apparent that some further requirements were
necessary:
- Handle variants of a base recipe (e.g. native, sdk, and multilib).
- Split metadata into layers and allow layers to enhance or override
other layers.
- Allow representation of a given set of input variables to a task as a
checksum. Based on that checksum, allow acceleration of builds with
prebuilt components.
BitBake satisfies all the original requirements and many more with
extensions being made to the basic functionality to reflect the
additional requirements. Flexibility and power have always been the
priorities. BitBake is highly extensible and supports embedded Python
code and execution of any arbitrary tasks.
.. _Concepts:
Concepts
========
BitBake is a program written in the Python language. At the highest
level, BitBake interprets metadata, decides what tasks are required to
run, and executes those tasks. Similar to GNU Make, BitBake controls how
software is built. GNU Make achieves its control through "makefiles",
while BitBake uses "recipes".
BitBake extends the capabilities of a simple tool like GNU Make by
allowing for the definition of much more complex tasks, such as
assembling entire embedded Linux distributions.
The remainder of this section introduces several concepts that should be
understood in order to better leverage the power of BitBake.
Recipes
-------
BitBake Recipes, which are denoted by the file extension ``.bb``, are
the most basic metadata files. These recipe files provide BitBake with
the following:
- Descriptive information about the package (author, homepage, license,
and so on)
- The version of the recipe
- Existing dependencies (both build and runtime dependencies)
- Where the source code resides and how to fetch it
- Whether the source code requires any patches, where to find them, and
how to apply them
- How to configure and compile the source code
- How to assemble the generated artifacts into one or more installable
packages
- Where on the target machine to install the package or packages
created
Within the context of BitBake, or any project utilizing BitBake as its
build system, files with the ``.bb`` extension are referred to as
recipes.
.. note::
The term "package" is also commonly used to describe recipes.
However, since the same word is used to describe packaged output from
a project, it is best to maintain a single descriptive term -
"recipes". Put another way, a single "recipe" file is quite capable
of generating a number of related but separately installable
"packages". In fact, that ability is fairly common.
Configuration Files
-------------------
Configuration files, which are denoted by the ``.conf`` extension,
define various configuration variables that govern the project's build
process. These files fall into several areas that define machine
configuration, distribution configuration, possible compiler tuning,
general common configuration, and user configuration. The main
configuration file is the sample ``bitbake.conf`` file, which is located
within the BitBake source tree ``conf`` directory.
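As an illustration only, a user configuration file such as ``conf/local.conf``
might do nothing more than set a couple of global variables; the values here
are made up: ::
BB_NUMBER_THREADS = "4"
TMPDIR = "${TOPDIR}/tmp"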
Classes
-------
Class files, which are denoted by the ``.bbclass`` extension, contain
information that is useful to share between metadata files. The BitBake
source tree currently comes with one class metadata file called
``base.bbclass``. You can find this file in the ``classes`` directory.
The ``base.bbclass`` class file is special since it is always included
automatically for all recipes and classes. This class contains
definitions for standard basic tasks such as fetching, unpacking,
configuring (empty by default), compiling (runs any Makefile present),
installing (empty by default) and packaging (empty by default). These
tasks are often overridden or extended by other classes added during the
project development process.
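As a sketch of how a class can share functionality, consider a hypothetical
``report.bbclass`` that adds an extra task; the class name and task are
invented for this example: ::
# report.bbclass -- shared by any recipe that inherits it
python do_report() {
    bb.plain("Finished building %s" % d.getVar('PN'))
}
addtask report after do_build
A recipe then gains the ``do_report`` task simply by containing the line
``inherit report``.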
Layers
------
Layers allow you to isolate different types of customizations from each
other. While you might find it tempting to keep everything in one layer
when working on a single project, the more modular your metadata, the
easier it is to cope with future changes.
To illustrate how you can use layers to keep things modular, consider
customizations you might make to support a specific target machine.
These types of customizations typically reside in a special layer,
rather than a general layer, called a Board Support Package (BSP) layer.
Furthermore, the machine customizations should be isolated from recipes
and metadata that support a new GUI environment, for example. This
situation gives you a couple of layers: one for the machine
configurations and one for the GUI environment. It is important to
understand, however, that the BSP layer can still make machine-specific
additions to recipes within the GUI environment layer without polluting
the GUI layer itself with those machine-specific changes. You can
accomplish this through a recipe that is a BitBake append
(``.bbappend``) file.
.. _append-bbappend-files:
Append Files
------------
Append files, which are files that have the ``.bbappend`` file
extension, extend or override information in an existing recipe file.
BitBake expects every append file to have a corresponding recipe file.
Furthermore, the append file and corresponding recipe file must use the
same root filename. The filenames can differ only in the file type
suffix used (e.g. ``formfactor_0.0.bb`` and
``formfactor_0.0.bbappend``).
Information in append files extends or overrides the information in the
underlying, similarly-named recipe files.
When you name an append file, you can use the "``%``" wildcard character
to allow for matching recipe names. For example, suppose you have an
append file named as follows: ::
busybox_1.21.%.bbappend
That append file
would match any ``busybox_1.21.``\ x\ ``.bb`` version of the recipe. So,
the append file would match the following recipe names: ::
busybox_1.21.1.bb
busybox_1.21.2.bb
busybox_1.21.3.bb
.. note::
The use of the " % " character is limited in that it only works directly in
front of the .bbappend portion of the append file's name. You cannot use the
wildcard character in any other location of the name.
If the ``busybox`` recipe was updated to ``busybox_1.3.0.bb``, the
append name would not match. However, if you named the append file
``busybox_1.%.bbappend``, then you would have a match.
In the most general case, you could name the append file something as
simple as ``busybox_%.bbappend`` to be entirely version independent.
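An append file uses the same metadata syntax as a recipe. As a purely
hypothetical illustration, a ``busybox_%.bbappend`` might do nothing more
than extend a variable set by the underlying recipe: ::
DESCRIPTION_append = " (with local modifications)"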
Obtaining BitBake
=================
You can obtain BitBake several different ways:
- **Cloning BitBake:** Using Git to clone the BitBake source code
repository is the recommended method for obtaining BitBake. Cloning
the repository makes it easy to get bug fixes and have access to
stable branches and the master branch. Once you have cloned BitBake,
you should use the latest stable branch for development since the
master branch is for BitBake development and might contain less
stable changes.
You usually need a version of BitBake that matches the metadata you
are using. The metadata is generally backwards compatible but not
forward compatible.
Here is an example that clones the BitBake repository: ::
$ git clone git://git.openembedded.org/bitbake
This command clones the BitBake
Git repository into a directory called ``bitbake``. Alternatively,
you can designate a directory after the ``git clone`` command if you
want to call the new directory something other than ``bitbake``. Here
is an example that names the directory ``bbdev``: ::
$ git clone git://git.openembedded.org/bitbake bbdev
- **Installation using your Distribution Package Management System:**
This method is not recommended because the BitBake version that is
provided by your distribution, in most cases, is several releases
behind a snapshot of the BitBake repository.
- **Taking a snapshot of BitBake:** Downloading a snapshot of BitBake
from the source code repository gives you access to a known branch or
release of BitBake.
.. note::
Cloning the Git repository, as described earlier, is the preferred
method for getting BitBake. Cloning the repository makes it easier
to update as patches are added to the stable branches.
The following example downloads a snapshot of BitBake version 1.17.0: ::
$ wget http://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
$ tar zxpvf bitbake-1.17.0.tar.gz
After extraction of the tarball using
the tar utility, you have a directory entitled ``bitbake-1.17.0``.
- **Using the BitBake that Comes With Your Build Checkout:** A final
possibility for getting a copy of BitBake is that it already comes
with your checkout of a larger BitBake-based build system, such as
Poky. Rather than manually checking out individual layers and gluing
them together yourself, you can check out an entire build system. The
checkout will already include a version of BitBake that has been
thoroughly tested for compatibility with the other components. For
information on how to check out a particular BitBake-based build
system, consult that build system's supporting documentation.
.. _bitbake-user-manual-command:
The BitBake Command
===================
The ``bitbake`` command is the primary interface to the BitBake tool.
This section presents the BitBake command syntax and provides several
execution examples.
Usage and syntax
----------------
Following is the usage and syntax for BitBake: ::
$ bitbake -h
Usage: bitbake [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-b BUILDFILE, --buildfile=BUILDFILE
Execute tasks from a specific .bb recipe directly.
WARNING: Does not handle any dependencies from other
recipes.
-k, --continue Continue as much as possible after an error. While the
target that failed and anything depending on it cannot
be built, as much as possible will be built before
stopping.
-f, --force Force the specified targets/task to run (invalidating
any existing stamp file).
-c CMD, --cmd=CMD Specify the task to execute. The exact options
available depend on the metadata. Some examples might
be 'compile' or 'populate_sysroot' or 'listtasks' may
give a list of the tasks available.
-C INVALIDATE_STAMP, --clear-stamp=INVALIDATE_STAMP
Invalidate the stamp for the specified task such as
'compile' and then run the default task for the
specified target(s).
-r PREFILE, --read=PREFILE
Read the specified file before bitbake.conf.
-R POSTFILE, --postread=POSTFILE
Read the specified file after bitbake.conf.
-v, --verbose Enable tracing of shell tasks (with 'set -x'). Also
print bb.note(...) messages to stdout (in addition to
writing them to ${T}/log.do_<task>).
-D, --debug Increase the debug level. You can specify this more
than once. -D sets the debug level to 1, where only
bb.debug(1, ...) messages are printed to stdout; -DD
sets the debug level to 2, where both bb.debug(1, ...)
and bb.debug(2, ...) messages are printed; etc.
Without -D, no debug messages are printed. Note that
-D only affects output to stdout. All debug messages
are written to ${T}/log.do_taskname, regardless of the
debug level.
-q, --quiet Output less log message data to the terminal. You can
specify this more than once.
-n, --dry-run Don't execute, just go through the motions.
-S SIGNATURE_HANDLER, --dump-signatures=SIGNATURE_HANDLER
Dump out the signature construction information, with
no task execution. The SIGNATURE_HANDLER parameter is
passed to the handler. Two common values are none and
printdiff but the handler may define more/less. none
means only dump the signature, printdiff means compare
the dumped signature with the cached one.
-p, --parse-only Quit after parsing the BB recipes.
-s, --show-versions Show current and preferred versions of all recipes.
-e, --environment Show the global or per-recipe environment complete
with information about where variables were
set/changed.
-g, --graphviz Save dependency tree information for the specified
targets in the dot syntax.
-I EXTRA_ASSUME_PROVIDED, --ignore-deps=EXTRA_ASSUME_PROVIDED
Assume these dependencies don't exist and are already
provided (equivalent to ASSUME_PROVIDED). Useful to
make dependency graphs more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
-u UI, --ui=UI The user interface to use (knotty, ncurses or taskexp
- default knotty).
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
floating revisions have changed or not.
--server-only Run bitbake without a UI, only starting a server
(cooker) process.
-B BIND, --bind=BIND The name/address for the bitbake xmlrpc server to bind
to.
-T SERVER_TIMEOUT, --idle-timeout=SERVER_TIMEOUT
Set timeout to unload bitbake server due to
inactivity, set to -1 means no unload, default:
Environment variable BB_SERVER_TIMEOUT.
--no-setscene Do not run any setscene tasks. sstate will be ignored
and everything needed, built.
--setscene-only Only run setscene tasks, don't run any real tasks.
--remote-server=REMOTE_SERVER
Connect to the specified server.
-m, --kill-server Terminate any running bitbake server.
--observe-only Connect to a server as an observing-only client.
--status-only Check the status of the remote bitbake server.
-w WRITEEVENTLOG, --write-log=WRITEEVENTLOG
Writes the event log of the build to a bitbake event
json file. Use '' (empty string) to assign the name
automatically.
--runall=RUNALL Run the specified task for any recipe in the taskgraph
of the specified target (even if it wouldn't otherwise
have run).
--runonly=RUNONLY Run only the specified task within the taskgraph of
the specified targets (and any task dependencies those
tasks may have).
.. _bitbake-examples:
Examples
--------
This section presents some examples showing how to use BitBake.
.. _example-executing-a-task-against-a-single-recipe:
Executing a Task Against a Single Recipe
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Executing tasks for a single recipe file is relatively simple. You
specify the file in question, and BitBake parses it and executes the
specified task. If you do not specify a task, BitBake executes the
default task, which is "build". BitBake obeys inter-task dependencies
when doing so.
The following command runs the build task, which is the default task, on
the ``foo_1.0.bb`` recipe file: ::
$ bitbake -b foo_1.0.bb
The following command runs the clean task on the ``foo.bb`` recipe file: ::
$ bitbake -b foo.bb -c clean
.. note::
The "-b" option explicitly does not handle recipe dependencies. Other
than for debugging purposes, it is instead recommended that you use
the syntax presented in the next section.
Executing Tasks Against a Set of Recipe Files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a number of additional complexities introduced when one wants
to manage multiple ``.bb`` files. Clearly there needs to be a way to
tell BitBake what files are available and, of those, which you want to
execute. There also needs to be a way for each recipe to express its
dependencies, both for build-time and runtime. There must be a way for
you to express recipe preferences when multiple recipes provide the same
functionality, or when there are multiple versions of a recipe.
The ``bitbake`` command, when not using "--buildfile" or "-b", only
accepts a "PROVIDES". You cannot provide anything else. By default, a
recipe file generally "PROVIDES" its "packagename" as shown in the
following example: ::
$ bitbake foo
This next example "PROVIDES" the
package name and also uses the "-c" option to tell BitBake to just
execute the ``do_clean`` task: ::
$ bitbake -c clean foo
Executing a List of Task and Recipe Combinations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The BitBake command line supports specifying different tasks for
individual targets when you specify multiple targets. For example,
suppose you had two targets (or recipes) ``myfirstrecipe`` and
``mysecondrecipe`` and you needed BitBake to run ``taskA`` for the first
recipe and ``taskB`` for the second recipe: ::
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
Generating Dependency Graphs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
BitBake is able to generate dependency graphs using the ``dot`` syntax.
You can convert these graphs into images using the ``dot`` tool from
`Graphviz <http://www.graphviz.org>`__.
When you generate a dependency graph, BitBake writes two files to the
current working directory:
- ``task-depends.dot``: Shows dependencies between tasks. These
dependencies match BitBake's internal task execution list.
- ``pn-buildlist``: Shows a simple list of targets that are to be
built.
To omit common dependencies from the graph, pass them to the "-I" option
and BitBake leaves them out. Leaving this information out can
produce more readable graphs. This way, you can remove ``DEPENDS``
from inherited classes such as ``base.bbclass`` from the graph.
Here are two examples that create dependency graphs. The second example
omits depends common in OpenEmbedded from the graph: ::
$ bitbake -g foo
$ bitbake -g -I virtual/kernel -I eglibc foo
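Once the ``task-depends.dot`` file has been written, the Graphviz ``dot``
tool can render it as an image, for example: ::
$ dot -Tpng task-depends.dot -o task-depends.png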
Executing a Multiple Configuration Build
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
BitBake is able to build multiple images or packages using a single
command where the different targets require different configurations
(multiple configuration builds). Each target, in this scenario, is
referred to as a "multiconfig".
To accomplish a multiple configuration build, you must define each
target's configuration separately using a parallel configuration file in
the build directory. The location for these multiconfig configuration
files is specific. They must reside in the current build directory in a
sub-directory of ``conf`` named ``multiconfig``. Following is an example
for two separate targets:
.. image:: figures/bb_multiconfig_files.png
:align: center
The reason for this required file hierarchy is that the ``BBPATH``
variable is not constructed until the layers are parsed. Consequently,
using the configuration file as a pre-configuration file is not possible
unless it is located in the current working directory.
Minimally, each configuration file must define the machine and the
temporary directory BitBake uses for the build. Suggested practice
dictates that you do not overlap the temporary directories used during
the builds.
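As a sketch, a hypothetical ``conf/multiconfig/target1.conf`` might contain
only those two settings; the machine name and directory below are invented: ::
MACHINE = "machine1"
TMPDIR = "${TOPDIR}/tmp-target1"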
Aside from separate configuration files for each target, you must also
enable BitBake to perform multiple configuration builds. Enabling is
accomplished by setting the
:term:`BBMULTICONFIG` variable in the
``local.conf`` configuration file. As an example, suppose you had
configuration files for ``target1`` and ``target2`` defined in the build
directory. The following statement in the ``local.conf`` file both
enables BitBake to perform multiple configuration builds and specifies
the two extra multiconfigs: ::
BBMULTICONFIG = "target1 target2"
Once the target configuration files are in place and BitBake has been
enabled to perform multiple configuration builds, use the following
command form to start the builds: ::
$ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]
Here is an example for two extra multiconfigs: ``target1`` and ``target2``: ::
$ bitbake mc::target mc:target1:target mc:target2:target
.. _bb-enabling-multiple-configuration-build-dependencies:
Enabling Multiple Configuration Build Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes dependencies can exist between targets (multiconfigs) in a
multiple configuration build. For example, suppose that in order to
build an image for a particular architecture, the root filesystem of
another build for a different architecture needs to exist. In other
words, the image for the first multiconfig depends on the root
filesystem of the second multiconfig. This dependency is essentially
that the task in the recipe that builds one multiconfig is dependent on
the completion of the task in the recipe that builds another
multiconfig.
To enable dependencies in a multiple configuration build, you must
declare the dependencies in the recipe using the following statement
form: ::
task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"
To better show how to use this statement, consider an example with two
multiconfigs: ``target1`` and ``target2``: ::
image_task[mcdepends] = "mc:target1:target2:image2:rootfs_task"
In this example, the
``from_multiconfig`` is "target1" and the ``to_multiconfig`` is "target2".
The image whose recipe contains ``image_task`` depends on the
completion of the ``rootfs_task`` used to build out ``image2``, which is
associated with the "target2" multiconfig.
Once you set up this dependency, you can build the "target1" multiconfig
using a BitBake command as follows: ::
$ bitbake mc:target1:image1
This command executes all the tasks needed to create ``image1`` for the "target1"
multiconfig. Because of the dependency, BitBake also executes through
the ``rootfs_task`` for the "target2" multiconfig build.
Having a recipe depend on the root filesystem of another build might not
seem that useful. Consider this change to the statement in the image1
recipe: ::
image_task[mcdepends] = "mc:target1:target2:image2:image_task"
In this case, BitBake must create ``image2`` for the "target2" build since
the "target1" build depends on it.
Because "target1" and "target2" are enabled for multiple configuration
builds and have separate configuration files, BitBake places the
artifacts for each build in the respective temporary build directories.


@@ -0,0 +1,891 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="bitbake-user-manual-intro">
<title>Overview</title>
<para>
Welcome to the BitBake User Manual.
This manual provides information on the BitBake tool.
The information attempts to be as independent as possible regarding
systems that use BitBake, such as OpenEmbedded and the
Yocto Project.
In some cases, scenarios or examples within the context of
a build system are used in the manual to help with understanding.
For these cases, the manual clearly states the context.
</para>
<section id="intro">
<title>Introduction</title>
<para>
Fundamentally, BitBake is a generic task execution
engine that allows shell and Python tasks to be run
efficiently and in parallel while working within
complex inter-task dependency constraints.
One of BitBake's main users, OpenEmbedded, takes this core
and builds embedded Linux software stacks using
a task-oriented approach.
</para>
<para>
Conceptually, BitBake is similar to GNU Make in
some regards but has significant differences:
<itemizedlist>
<listitem><para>
BitBake executes tasks according to provided
metadata that builds up the tasks.
Metadata is stored in recipe (<filename>.bb</filename>)
and related recipe "append" (<filename>.bbappend</filename>)
files, configuration (<filename>.conf</filename>) and
underlying include (<filename>.inc</filename>) files, and
in class (<filename>.bbclass</filename>) files.
The metadata provides
BitBake with instructions on what tasks to run and
the dependencies between those tasks.
</para></listitem>
<listitem><para>
BitBake includes a fetcher library for obtaining source
code from various places such as local files, source control
systems, or websites.
</para></listitem>
<listitem><para>
The instructions for each unit to be built (e.g. a piece
of software) are known as "recipe" files and
contain all the information about the unit
(dependencies, source file locations, checksums, description
and so on).
</para></listitem>
<listitem><para>
BitBake includes a client/server abstraction and can
be used from a command line or used as a service over
XML-RPC and has several different user interfaces.
</para></listitem>
</itemizedlist>
</para>
</section>
<section id="history-and-goals">
<title>History and Goals</title>
<para>
BitBake was originally a part of the OpenEmbedded project.
It was inspired by the Portage package management system
used by the Gentoo Linux distribution.
On December 7, 2004, OpenEmbedded project team member
Chris Larson split the project into two distinct pieces:
<itemizedlist>
<listitem><para>BitBake, a generic task executor</para></listitem>
<listitem><para>OpenEmbedded, a metadata set utilized by
BitBake</para></listitem>
</itemizedlist>
Today, BitBake is the primary basis of the
<ulink url="http://www.openembedded.org/">OpenEmbedded</ulink>
project, which is being used to build and maintain Linux
distributions such as the
<ulink url='http://www.angstrom-distribution.org/'>Angstrom Distribution</ulink>,
and which is also being used as the build tool for Linux projects
such as the
<ulink url='http://www.yoctoproject.org'>Yocto Project</ulink>.
</para>
<para>
Prior to BitBake, no other build tool adequately met the needs of
an aspiring embedded Linux distribution.
All of the build systems used by traditional desktop Linux
distributions lacked important functionality, and none of the
ad hoc Buildroot-based systems, prevalent in the
embedded space, were scalable or maintainable.
</para>
<para>
Some important original goals for BitBake were:
<itemizedlist>
<listitem><para>
Handle cross-compilation.
</para></listitem>
<listitem><para>
Handle inter-package dependencies (build time on
target architecture, build time on native
architecture, and runtime).
</para></listitem>
<listitem><para>
Support running any number of tasks within a given
package, including, but not limited to, fetching
upstream sources, unpacking them, patching them,
configuring them, and so forth.
</para></listitem>
<listitem><para>
Be Linux distribution agnostic for both build and
target systems.
</para></listitem>
<listitem><para>
Be architecture agnostic.
</para></listitem>
<listitem><para>
Support multiple build and target operating systems
(e.g. Cygwin, the BSDs, and so forth).
</para></listitem>
<listitem><para>
Be self-contained, rather than tightly
integrated into the build machine's root
filesystem.
</para></listitem>
<listitem><para>
Handle conditional metadata on the target architecture,
operating system, distribution, and machine.
</para></listitem>
<listitem><para>
Make it easy to use the tools to supply local metadata and packages
against which to operate.
</para></listitem>
<listitem><para>
Make it easy to use BitBake to collaborate between multiple
projects for their builds.
</para></listitem>
<listitem><para>
Provide an inheritance mechanism to share
common metadata between many packages.
</para></listitem>
</itemizedlist>
Over time it became apparent that some further requirements
were necessary:
<itemizedlist>
<listitem><para>
Handle variants of a base recipe (e.g. native, sdk,
and multilib).
</para></listitem>
<listitem><para>
Split metadata into layers and allow layers
to enhance or override other layers.
</para></listitem>
<listitem><para>
Allow representation of a given set of input variables
to a task as a checksum.
Based on that checksum, allow acceleration of builds
with prebuilt components.
</para></listitem>
</itemizedlist>
BitBake satisfies all the original requirements and many more
with extensions being made to the basic functionality to
reflect the additional requirements.
Flexibility and power have always been the priorities.
BitBake is highly extensible and supports embedded Python code and
execution of any arbitrary tasks.
</para>
</section>
<section id="Concepts">
<title>Concepts</title>
<para>
BitBake is a program written in the Python language.
At the highest level, BitBake interprets metadata, decides
what tasks are required to run, and executes those tasks.
Similar to GNU Make, BitBake controls how software is
built.
GNU Make achieves its control through "makefiles", while
BitBake uses "recipes".
</para>
<para>
BitBake extends the capabilities of a simple
tool like GNU Make by allowing for the definition of much more
complex tasks, such as assembling entire embedded Linux
distributions.
</para>
<para>
The remainder of this section introduces several concepts
that should be understood in order to better leverage
the power of BitBake.
</para>
<section id='recipes'>
<title>Recipes</title>
<para>
BitBake Recipes, which are denoted by the file extension
<filename>.bb</filename>, are the most basic metadata files.
These recipe files provide BitBake with the following:
<itemizedlist>
<listitem><para>Descriptive information about the
package (author, homepage, license, and so on)</para></listitem>
<listitem><para>The version of the recipe</para></listitem>
<listitem><para>Existing dependencies (both build
and runtime dependencies)</para></listitem>
<listitem><para>Where the source code resides and
how to fetch it</para></listitem>
<listitem><para>Whether the source code requires
any patches, where to find them, and how to apply
them</para></listitem>
<listitem><para>How to configure and compile the
source code</para></listitem>
<listitem><para>How to assemble the generated artifacts into
one or more installable packages</para></listitem>
<listitem><para>Where on the target machine to install the
package or packages created</para></listitem>
</itemizedlist>
</para>
<para>
Within the context of BitBake, or any project utilizing BitBake
as its build system, files with the <filename>.bb</filename>
extension are referred to as <firstterm>recipes</firstterm>.
<note>
The term "package" is also commonly used to describe recipes.
However, since the same word is used to describe packaged
output from a project, it is best to maintain a single
descriptive term - "recipes".
Put another way, a single "recipe" file is quite capable
of generating a number of related but separately installable
"packages".
In fact, that ability is fairly common.
</note>
</para>
</section>
<section id='configuration-files'>
<title>Configuration Files</title>
<para>
Configuration files, which are denoted by the
<filename>.conf</filename> extension, define
various configuration variables that govern the project's build
process.
These files fall into several areas that define
machine configuration, distribution configuration,
possible compiler tuning, general common
configuration, and user configuration.
The main configuration file is the sample
<filename>bitbake.conf</filename> file, which is
located within the BitBake source tree
<filename>conf</filename> directory.
</para>
</section>
<section id='classes'>
<title>Classes</title>
<para>
Class files, which are denoted by the
<filename>.bbclass</filename> extension, contain
information that is useful to share between metadata files.
The BitBake source tree currently comes with one class metadata file
called <filename>base.bbclass</filename>.
You can find this file in the
<filename>classes</filename> directory.
The <filename>base.bbclass</filename> class file is special since it
is always included automatically for all recipes
and classes.
This class contains definitions for standard basic tasks such
as fetching, unpacking, configuring (empty by default),
compiling (runs any Makefile present), installing (empty by
default) and packaging (empty by default).
These tasks are often overridden or extended by other classes
added during the project development process.
</para>
</section>
<section id='layers'>
<title>Layers</title>
<para>
Layers allow you to isolate different types of
customizations from each other.
While you might find it tempting to keep everything in one layer
when working on a single project, the more modular
your metadata, the easier it is to cope with future changes.
</para>
<para>
To illustrate how you can use layers to keep things modular,
consider customizations you might make to support a specific target machine.
These types of customizations typically reside in a special layer,
rather than a general layer, called a <firstterm>Board Support Package</firstterm> (BSP)
layer.
Furthermore, the machine customizations should be isolated from
recipes and metadata that support a new GUI environment, for
example.
This situation gives you a couple of layers: one for the machine
configurations and one for the GUI environment.
It is important to understand, however, that the BSP layer can still
make machine-specific additions to recipes within
the GUI environment layer without polluting the GUI layer itself
with those machine-specific changes.
You can accomplish this through a recipe that is a BitBake append
(<filename>.bbappend</filename>) file.
</para>
</section>
<section id='append-bbappend-files'>
<title>Append Files</title>
<para>
Append files, which are files that have the
<filename>.bbappend</filename> file extension, extend or
override information in an existing recipe file.
</para>
<para>
BitBake expects every append file to have a corresponding recipe file.
Furthermore, the append file and corresponding recipe file
must use the same root filename.
The filenames can differ only in the file type suffix used
(e.g. <filename>formfactor_0.0.bb</filename> and
<filename>formfactor_0.0.bbappend</filename>).
</para>
<para>
Information in append files extends or
overrides the information in the underlying,
similarly-named recipe files.
</para>
<para>
When you name an append file, you can use the
"<filename>%</filename>" wildcard character to allow for matching
recipe names.
For example, suppose you have an append file named
as follows:
<literallayout class='monospaced'>
busybox_1.21.%.bbappend
</literallayout>
That append file would match any <filename>busybox_1.21.</filename><replaceable>x</replaceable><filename>.bb</filename>
version of the recipe.
So, the append file would match the following recipe names:
<literallayout class='monospaced'>
busybox_1.21.1.bb
busybox_1.21.2.bb
busybox_1.21.3.bb
</literallayout>
<note><title>Important</title>
The use of the "<filename>%</filename>" character
is limited in that it only works directly in front of the
<filename>.bbappend</filename> portion of the append file's
name.
You cannot use the wildcard character in any other
location of the name.
</note>
If the <filename>busybox</filename> recipe was updated to
<filename>busybox_1.3.0.bb</filename>, the append name would not
match.
However, if you named the append file
<filename>busybox_1.%.bbappend</filename>, then you would have a match.
</para>
<para>
In the most general case, you could name the append file something as
simple as <filename>busybox_%.bbappend</filename> to be entirely
version independent.
</para>
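<para>
An append file uses the same syntax as a recipe.
As an illustrative sketch only, an append file might extend a
variable set by the underlying recipe or override it outright
(the file name and values shown here are hypothetical):
<literallayout class='monospaced'>
# busybox_1.21.%.bbappend -- illustrative sketch only
DESCRIPTION = "BusyBox with project-specific configuration"
SRC_URI += "file://my-extra.cfg"
</literallayout>
</para>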
</section>
</section>
<section id='obtaining-bitbake'>
<title>Obtaining BitBake</title>
<para>
You can obtain BitBake several different ways:
<itemizedlist>
<listitem><para><emphasis>Cloning BitBake:</emphasis>
Using Git to clone the BitBake source code repository
is the recommended method for obtaining BitBake.
Cloning the repository makes it easy to get bug fixes
and have access to stable branches and the master
branch.
Once you have cloned BitBake, you should use
the latest stable
branch for development since the master branch is for
BitBake development and might contain less stable changes.
</para>
<para>You usually need a version of BitBake
that matches the metadata you are using.
The metadata is generally backwards compatible but
not forward compatible.</para>
<para>Here is an example that clones the BitBake repository:
<literallayout class='monospaced'>
$ git clone git://git.openembedded.org/bitbake
</literallayout>
This command clones the BitBake Git repository into a
directory called <filename>bitbake</filename>.
Alternatively, you can
designate a directory after the
<filename>git clone</filename> command
if you want to call the new directory something
other than <filename>bitbake</filename>.
Here is an example that names the directory
<filename>bbdev</filename>:
<literallayout class='monospaced'>
$ git clone git://git.openembedded.org/bitbake bbdev
</literallayout></para></listitem>
<listitem><para><emphasis>Installation using your Distribution
Package Management System:</emphasis>
This method is not
recommended because the BitBake version that is
provided by your distribution, in most cases,
is several
releases behind a snapshot of the BitBake repository.
</para></listitem>
<listitem><para><emphasis>Taking a snapshot of BitBake:</emphasis>
Downloading a snapshot of BitBake from the
source code repository gives you access to a known
branch or release of BitBake.
<note>
Cloning the Git repository, as described earlier,
is the preferred method for getting BitBake.
Cloning the repository makes it easier to update as
patches are added to the stable branches.
</note></para>
<para>The following example downloads a snapshot of
BitBake version 1.17.0:
<literallayout class='monospaced'>
$ wget http://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
$ tar zxpvf bitbake-1.17.0.tar.gz
</literallayout>
After extraction of the tarball using the tar utility,
you have a directory entitled
<filename>bitbake-1.17.0</filename>.
</para></listitem>
<listitem><para><emphasis>Using the BitBake that Comes With Your
Build Checkout:</emphasis>
A final possibility for getting a copy of BitBake is that it
already comes with your checkout of a larger BitBake-based build
system, such as Poky.
Rather than manually checking out individual layers and
gluing them together yourself, you can check
out an entire build system.
The checkout will already include a version of BitBake that
has been thoroughly tested for compatibility with the other
components.
For information on how to check out a particular BitBake-based
build system, consult that build system's supporting documentation.
</para></listitem>
</itemizedlist>
</para>
</section>
<section id="bitbake-user-manual-command">
<title>The BitBake Command</title>
<para>
The <filename>bitbake</filename> command is the primary interface
to the BitBake tool.
This section presents the BitBake command syntax and provides
several execution examples.
</para>
<section id='usage-and-syntax'>
<title>Usage and syntax</title>
<para>
Following is the usage and syntax for BitBake:
<literallayout class='monospaced'>
$ bitbake -h
Usage: bitbake [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-b BUILDFILE, --buildfile=BUILDFILE
Execute tasks from a specific .bb recipe directly.
WARNING: Does not handle any dependencies from other
recipes.
-k, --continue Continue as much as possible after an error. While the
target that failed and anything depending on it cannot
be built, as much as possible will be built before
stopping.
-f, --force Force the specified targets/task to run (invalidating
any existing stamp file).
-c CMD, --cmd=CMD Specify the task to execute. The exact options
available depend on the metadata. Some examples might
be 'compile' or 'populate_sysroot' or 'listtasks' may
give a list of the tasks available.
-C INVALIDATE_STAMP, --clear-stamp=INVALIDATE_STAMP
Invalidate the stamp for the specified task such as
'compile' and then run the default task for the
specified target(s).
-r PREFILE, --read=PREFILE
Read the specified file before bitbake.conf.
-R POSTFILE, --postread=POSTFILE
Read the specified file after bitbake.conf.
-v, --verbose Enable tracing of shell tasks (with 'set -x'). Also
print bb.note(...) messages to stdout (in addition to
writing them to ${T}/log.do_&lt;task&gt;).
-D, --debug Increase the debug level. You can specify this more
than once. -D sets the debug level to 1, where only
bb.debug(1, ...) messages are printed to stdout; -DD
sets the debug level to 2, where both bb.debug(1, ...)
and bb.debug(2, ...) messages are printed; etc.
Without -D, no debug messages are printed. Note that
-D only affects output to stdout. All debug messages
are written to ${T}/log.do_taskname, regardless of the
debug level.
-q, --quiet Output less log message data to the terminal. You can
specify this more than once.
-n, --dry-run Don't execute, just go through the motions.
-S SIGNATURE_HANDLER, --dump-signatures=SIGNATURE_HANDLER
Dump out the signature construction information, with
no task execution. The SIGNATURE_HANDLER parameter is
passed to the handler. Two common values are none and
printdiff but the handler may define more/less. none
means only dump the signature, printdiff means compare
the dumped signature with the cached one.
-p, --parse-only Quit after parsing the BB recipes.
-s, --show-versions Show current and preferred versions of all recipes.
-e, --environment Show the global or per-recipe environment complete
with information about where variables were
set/changed.
-g, --graphviz Save dependency tree information for the specified
targets in the dot syntax.
-I EXTRA_ASSUME_PROVIDED, --ignore-deps=EXTRA_ASSUME_PROVIDED
Assume these dependencies don't exist and are already
provided (equivalent to ASSUME_PROVIDED). Useful to
make dependency graphs more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
-u UI, --ui=UI The user interface to use (knotty, ncurses or taskexp
- default knotty).
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
floating revisions have changed or not.
--server-only Run bitbake without a UI, only starting a server
(cooker) process.
-B BIND, --bind=BIND The name/address for the bitbake xmlrpc server to bind
to.
-T SERVER_TIMEOUT, --idle-timeout=SERVER_TIMEOUT
Set timeout to unload bitbake server due to
inactivity, set to -1 means no unload, default:
Environment variable BB_SERVER_TIMEOUT.
--no-setscene Do not run any setscene tasks. sstate will be ignored
and everything needed, built.
--setscene-only Only run setscene tasks, don't run any real tasks.
--remote-server=REMOTE_SERVER
Connect to the specified server.
-m, --kill-server Terminate any running bitbake server.
--observe-only Connect to a server as an observing-only client.
--status-only Check the status of the remote bitbake server.
-w WRITEEVENTLOG, --write-log=WRITEEVENTLOG
Writes the event log of the build to a bitbake event
json file. Use '' (empty string) to assign the name
automatically.
--runall=RUNALL Run the specified task for any recipe in the taskgraph
of the specified target (even if it wouldn't otherwise
have run).
--runonly=RUNONLY Run only the specified task within the taskgraph of
the specified targets (and any task dependencies those
tasks may have).
</literallayout>
</para>
</section>
<section id='bitbake-examples'>
<title>Examples</title>
<para>
This section presents some examples showing how to use BitBake.
</para>
<section id='example-executing-a-task-against-a-single-recipe'>
<title>Executing a Task Against a Single Recipe</title>
<para>
Executing tasks for a single recipe file is relatively simple.
You specify the file in question, and BitBake parses
it and executes the specified task.
If you do not specify a task, BitBake executes the default
task, which is "build".
BitBake obeys inter-task dependencies when doing
so.
</para>
<para>
The following command runs the build task, which is
the default task, on the <filename>foo_1.0.bb</filename>
recipe file:
<literallayout class='monospaced'>
$ bitbake -b foo_1.0.bb
</literallayout>
The following command runs the clean task on the
<filename>foo.bb</filename> recipe file:
<literallayout class='monospaced'>
$ bitbake -b foo.bb -c clean
</literallayout>
<note>
The "-b" option explicitly does not handle recipe
dependencies.
Other than for debugging purposes, it is
recommended that you use the syntax presented in the
next section instead.
</note>
</para>
</section>
<section id='executing-tasks-against-a-set-of-recipe-files'>
<title>Executing Tasks Against a Set of Recipe Files</title>
<para>
There are a number of additional complexities introduced
when one wants to manage multiple <filename>.bb</filename>
files.
Clearly there needs to be a way to tell BitBake what
files are available and, of those, which you
want to execute.
There also needs to be a way for each recipe
to express its dependencies, both for build-time and
runtime.
There must be a way for you to express recipe preferences
when multiple recipes provide the same functionality, or when
there are multiple versions of a recipe.
</para>
<para>
The <filename>bitbake</filename> command, when not using
"--buildfile" or "-b" only accepts a "PROVIDES".
You cannot provide anything else.
By default, a recipe file generally "PROVIDES" its
"packagename" as shown in the following example:
<literallayout class='monospaced'>
$ bitbake foo
</literallayout>
This next example "PROVIDES" the package name and also uses
the "-c" option to tell BitBake to just execute the
<filename>do_clean</filename> task:
<literallayout class='monospaced'>
$ bitbake -c clean foo
</literallayout>
</para>
</section>
<section id='executing-a-list-of-task-and-recipe-combinations'>
<title>Executing a List of Task and Recipe Combinations</title>
<para>
The BitBake command line supports specifying different
tasks for individual targets when you specify multiple
targets.
For example, suppose you had two targets (or recipes)
<filename>myfirstrecipe</filename> and
<filename>mysecondrecipe</filename> and you needed
BitBake to run <filename>taskA</filename> for the first
recipe and <filename>taskB</filename> for the second
recipe:
<literallayout class='monospaced'>
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
</literallayout>
</para>
</section>
<section id='generating-dependency-graphs'>
<title>Generating Dependency Graphs</title>
<para>
BitBake is able to generate dependency graphs using
the <filename>dot</filename> syntax.
You can convert these graphs into images using the
<filename>dot</filename> tool from
<ulink url='http://www.graphviz.org'>Graphviz</ulink>.
</para>
<para>
When you generate a dependency graph, BitBake writes two files
to the current working directory:
<itemizedlist>
<listitem><para>
<emphasis><filename>task-depends.dot</filename>:</emphasis>
Shows dependencies between tasks.
These dependencies match BitBake's internal task execution list.
</para></listitem>
<listitem><para>
<emphasis><filename>pn-buildlist</filename>:</emphasis>
Shows a simple list of targets that are to be built.
</para></listitem>
</itemizedlist>
</para>
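<para>
For example, assuming the <filename>dot</filename> tool is
installed, the following command converts the generated task
dependency graph into a PNG image:
<literallayout class='monospaced'>
$ dot -Tpng -o task-depends.png task-depends.dot
</literallayout>
</para>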
<para>
To stop depending on common depends, use the "-I" depend
option and BitBake omits them from the graph.
Leaving this information out can produce more readable graphs.
This way, you can remove from the graph
<filename>DEPENDS</filename> from inherited classes
such as <filename>base.bbclass</filename>.
</para>
<para>
Here are two examples that create dependency graphs.
The second example omits dependencies that are common in OpenEmbedded from
the graph:
<literallayout class='monospaced'>
$ bitbake -g foo
$ bitbake -g -I virtual/kernel -I eglibc foo
</literallayout>
</para>
</section>
<section id='executing-a-multiple-configuration-build'>
<title>Executing a Multiple Configuration Build</title>
<para>
BitBake is able to build multiple images or packages
using a single command where the different targets
require different configurations (multiple configuration
builds).
Each target, in this scenario, is referred to as a
"multiconfig".
</para>
<para>
To accomplish a multiple configuration build, you must
define each target's configuration separately using
a parallel configuration file in the build directory.
The location for these multiconfig configuration files
is specific.
They must reside in the current build directory in
a sub-directory of <filename>conf</filename> named
<filename>multiconfig</filename>.
Following is an example for two separate targets:
<imagedata fileref="figures/bb_multiconfig_files.png" align="center" width="4in" depth="3in" />
</para>
<para>
The reason for this required file hierarchy
is that the <filename>BBPATH</filename> variable
is not constructed until the layers are parsed.
Consequently, using the configuration file as a
pre-configuration file is not possible unless it is
located in the current working directory.
</para>
<para>
Minimally, each configuration file must define the
machine and the temporary directory BitBake uses
for the build.
Suggested practice dictates that you do not
overlap the temporary directories used during the
builds.
</para>
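<para>
As a sketch only, a multiconfig file for an OpenEmbedded-based
build might contain little more than the machine and the
temporary directory, with the values below serving as
placeholders:
<literallayout class='monospaced'>
# conf/multiconfig/target1.conf -- illustrative sketch only
MACHINE = "qemux86"
TMPDIR = "${TOPDIR}/tmp-target1"
</literallayout>
</para>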
<para>
Aside from separate configuration files for each
target, you must also enable BitBake to perform multiple
configuration builds.
Enabling is accomplished by setting the
<link linkend='var-bb-BBMULTICONFIG'><filename>BBMULTICONFIG</filename></link>
variable in the <filename>local.conf</filename>
configuration file.
As an example, suppose you had configuration files
for <filename>target1</filename> and
<filename>target2</filename> defined in the build
directory.
The following statement in the
<filename>local.conf</filename> file both enables
BitBake to perform multiple configuration builds and
specifies the two extra multiconfigs:
<literallayout class='monospaced'>
BBMULTICONFIG = "target1 target2"
</literallayout>
</para>
<para>
Once the target configuration files are in place and
BitBake has been enabled to perform multiple configuration
builds, use the following command form to start the
builds:
<literallayout class='monospaced'>
$ bitbake [mc:<replaceable>multiconfigname</replaceable>:]<replaceable>target</replaceable> [[[mc:<replaceable>multiconfigname</replaceable>:]<replaceable>target</replaceable>] ... ]
</literallayout>
Here is an example for two extra multiconfigs:
<filename>target1</filename> and
<filename>target2</filename>:
<literallayout class='monospaced'>
$ bitbake mc::<replaceable>target</replaceable> mc:target1:<replaceable>target</replaceable> mc:target2:<replaceable>target</replaceable>
</literallayout>
</para>
</section>
<section id='bb-enabling-multiple-configuration-build-dependencies'>
<title>Enabling Multiple Configuration Build Dependencies</title>
<para>
Sometimes dependencies can exist between targets
(multiconfigs) in a multiple configuration build.
For example, suppose that in order to build an image
for a particular architecture, the root filesystem of
another build for a different architecture needs to
exist.
In other words, the image for the first multiconfig depends
on the root filesystem of the second multiconfig.
This dependency is essentially that the task in the recipe
that builds one multiconfig is dependent on the
completion of the task in the recipe that builds
another multiconfig.
</para>
<para>
To enable dependencies in a multiple configuration
build, you must declare the dependencies in the recipe
using the following statement form:
<literallayout class='monospaced'>
<replaceable>task_or_package</replaceable>[mcdepends] = "mc:<replaceable>from_multiconfig</replaceable>:<replaceable>to_multiconfig</replaceable>:<replaceable>recipe_name</replaceable>:<replaceable>task_on_which_to_depend</replaceable>"
</literallayout>
To better show how to use this statement, consider an
example with two multiconfigs: <filename>target1</filename>
and <filename>target2</filename>:
<literallayout class='monospaced'>
<replaceable>image_task</replaceable>[mcdepends] = "mc:target1:target2:<replaceable>image2</replaceable>:<replaceable>rootfs_task</replaceable>"
</literallayout>
In this example, the
<replaceable>from_multiconfig</replaceable> is "target1" and
the <replaceable>to_multiconfig</replaceable> is "target2".
The image whose recipe contains
<replaceable>image_task</replaceable> depends on the
completion of the <replaceable>rootfs_task</replaceable>
used to build out <replaceable>image2</replaceable>, which
is associated with the "target2" multiconfig.
</para>
<para>
Once you set up this dependency, you can build the
"target1" multiconfig using a BitBake command as follows:
<literallayout class='monospaced'>
$ bitbake mc:target1:<replaceable>image1</replaceable>
</literallayout>
This command executes all the tasks needed to create
<replaceable>image1</replaceable> for the "target1"
multiconfig.
Because of the dependency, BitBake also executes through
the <replaceable>rootfs_task</replaceable> for the "target2"
multiconfig build.
</para>
<para>
Having a recipe depend on the root filesystem of another
build might not seem that useful.
Consider this change to the statement in the
<replaceable>image1</replaceable> recipe:
<literallayout class='monospaced'>
<replaceable>image_task</replaceable>[mcdepends] = "mc:target1:target2:<replaceable>image2</replaceable>:<replaceable>image_task</replaceable>"
</literallayout>
In this case, BitBake must create
<replaceable>image2</replaceable> for the "target2"
build since the "target1" build depends on it.
</para>
<para>
Because "target1" and "target2" are enabled for multiple
configuration builds and have separate configuration
files, BitBake places the artifacts for each build in the
respective temporary build directories.
</para>
</section>
</section>
</section>
</chapter>

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,984 @@
/*
Generic XHTML / DocBook XHTML CSS Stylesheet.
Browser wrangling and typographic design by
Oyvind Kolas / pippin@gimp.org
Customised for Poky by
Matthew Allum / mallum@o-hand.com
Thanks to:
Liam R. E. Quin
William Skaggs
Jakub Steiner
Structure
---------
The stylesheet is divided into the following sections:
Positioning
Margins, paddings, width, font-size, clearing.
Decorations
Borders, style
Colors
Colors
Graphics
Graphical backgrounds
Nasty IE tweaks
Workarounds needed to make it work in internet explorer,
currently makes the stylesheet non validating, but up until
this point it is validating.
Mozilla extensions
Transparency for footer
Rounded corners on boxes
*/
/*************** /
/ Positioning /
/ ***************/
body {
font-family: Verdana, Sans, sans-serif;
min-width: 640px;
width: 80%;
margin: 0em auto;
padding: 2em 5em 5em 5em;
color: #333;
}
h1,h2,h3,h4,h5,h6,h7 {
font-family: Arial, Sans;
color: #00557D;
clear: both;
}
h1 {
font-size: 2em;
text-align: left;
padding: 0em 0em 0em 0em;
margin: 2em 0em 0em 0em;
}
h2.subtitle {
margin: 0.10em 0em 3.0em 0em;
padding: 0em 0em 0em 0em;
font-size: 1.8em;
padding-left: 20%;
font-weight: normal;
font-style: italic;
}
h2 {
margin: 2em 0em 0.66em 0em;
padding: 0.5em 0em 0em 0em;
font-size: 1.5em;
font-weight: bold;
}
h3.subtitle {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
font-size: 142.14%;
text-align: right;
}
h3 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 140%;
font-weight: bold;
}
h4 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 120%;
font-weight: bold;
}
h5 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
}
h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
}
.authorgroup {
background-color: transparent;
background-repeat: no-repeat;
padding-top: 256px;
background-image: url("figures/bitbake-title.png");
background-position: left top;
margin-top: -256px;
padding-right: 50px;
margin-left: 0px;
text-align: right;
width: 740px;
}
h3.author {
margin: 0em 0em 0em 0em;
padding: 0em 0em 0em 0em;
font-weight: normal;
font-size: 100%;
color: #333;
clear: both;
}
.author tt.email {
font-size: 66%;
}
.titlepage hr {
width: 0em;
clear: both;
}
.revhistory {
padding-top: 2em;
clear: both;
}
.toc,
.list-of-tables,
.list-of-examples,
.list-of-figures {
padding: 1.33em 0em 2.5em 0em;
color: #00557D;
}
.toc p,
.list-of-tables p,
.list-of-figures p,
.list-of-examples p {
padding: 0em 0em 0em 0em;
padding: 0em 0em 0.3em;
margin: 1.5em 0em 0em 0em;
}
.toc p b,
.list-of-tables p b,
.list-of-figures p b,
.list-of-examples p b{
font-size: 100.0%;
font-weight: bold;
}
.toc dl,
.list-of-tables dl,
.list-of-figures dl,
.list-of-examples dl {
margin: 0em 0em 0.5em 0em;
padding: 0em 0em 0em 0em;
}
.toc dt {
margin: 0em 0em 0em 0em;
padding: 0em 0em 0em 0em;
}
.toc dd {
margin: 0em 0em 0em 2.6em;
padding: 0em 0em 0em 0em;
}
div.glossary dl,
div.variablelist dl {
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
font-weight: normal;
width: 20em;
text-align: right;
}
.variablelist dl dt {
margin-top: 0.5em;
}
.glossary dl dd,
.variablelist dl dd {
margin-top: -1em;
margin-left: 25.5em;
}
.glossary dd p,
.variablelist dd p {
margin-top: 0em;
margin-bottom: 1em;
}
div.calloutlist table td {
padding: 0em 0em 0em 0em;
margin: 0em 0em 0em 0em;
}
div.calloutlist table td p {
margin-top: 0em;
margin-bottom: 1em;
}
div p.copyright {
text-align: left;
}
div.legalnotice p.legalnotice-title {
margin-bottom: 0em;
}
p {
line-height: 1.5em;
margin-top: 0em;
}
dl {
padding-top: 0em;
}
hr {
border: solid 1px;
}
.mediaobject,
.mediaobjectco {
text-align: center;
}
img {
border: none;
}
ul {
padding: 0em 0em 0em 1.5em;
}
ul li {
padding: 0em 0em 0em 0em;
}
ul li p {
text-align: left;
}
table {
width: 100%;
}
th {
padding: 0.25em;
text-align: left;
font-weight: normal;
vertical-align: top;
}
td {
padding: 0.25em;
vertical-align: top;
}
p a[id] {
margin: 0px;
padding: 0px;
display: inline;
background-image: none;
}
a {
text-decoration: underline;
color: #444;
}
pre {
overflow: auto;
}
a:hover {
text-decoration: underline;
/*font-weight: bold;*/
}
/* This style defines how the permalink character
appears by itself and when hovered over with
the mouse. */
[alt='Permalink'] { color: #eee; }
[alt='Permalink']:hover { color: black; }
div.informalfigure,
div.informalexample,
div.informaltable,
div.figure,
div.table,
div.example {
margin: 1em 0em;
padding: 1em;
page-break-inside: avoid;
}
div.informalfigure p.title b,
div.informalexample p.title b,
div.informaltable p.title b,
div.figure p.title b,
div.example p.title b,
div.table p.title b{
padding-top: 0em;
margin-top: 0em;
font-size: 100%;
font-weight: normal;
}
.mediaobject .caption,
.mediaobject .caption p {
text-align: center;
font-size: 80%;
padding-top: 0.5em;
padding-bottom: 0.5em;
}
.epigraph {
padding-left: 55%;
margin-bottom: 1em;
}
.epigraph p {
text-align: left;
}
.epigraph .quote {
font-style: italic;
}
.epigraph .attribution {
font-style: normal;
text-align: right;
}
span.application {
font-style: italic;
}
.programlisting {
font-family: monospace;
font-size: 80%;
white-space: pre;
margin: 1.33em 0em;
padding: 1.33em;
}
.tip,
.warning,
.caution,
.note {
margin-top: 1em;
margin-bottom: 1em;
}
/* force full width of table within div */
.tip table,
.warning table,
.caution table,
.note table {
border: none;
width: 100%;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
padding: 0.8em 0.0em 0.0em 0.0em;
margin : 0em 0em 0em 0em;
}
.tip p,
.warning p,
.caution p,
.note p {
margin-top: 0.5em;
margin-bottom: 0.5em;
padding-right: 1em;
text-align: left;
}
.acronym {
text-transform: uppercase;
}
b.keycap,
.keycap {
padding: 0.09em 0.3em;
margin: 0em;
}
.itemizedlist li {
clear: none;
}
.filename {
font-size: medium;
font-family: Courier, monospace;
}
div.navheader, div.heading{
position: absolute;
left: 0em;
top: 0em;
width: 100%;
background-color: #cdf;
width: 100%;
}
div.navfooter, div.footing{
position: fixed;
left: 0em;
bottom: 0em;
background-color: #eee;
width: 100%;
}
div.navheader td,
div.navfooter td {
font-size: 66%;
}
div.navheader table th {
/*font-family: Georgia, Times, serif;*/
/*font-size: x-large;*/
font-size: 80%;
}
div.navheader table {
border-left: 0em;
border-right: 0em;
border-top: 0em;
width: 100%;
}
div.navfooter table {
border-left: 0em;
border-right: 0em;
border-bottom: 0em;
width: 100%;
}
div.navheader table td a,
div.navfooter table td a {
color: #777;
text-decoration: none;
}
/* normal text in the footer */
div.navfooter table td {
color: black;
}
div.navheader table td a:visited,
div.navfooter table td a:visited {
color: #444;
}
/* links in header and footer */
div.navheader table td a:hover,
div.navfooter table td a:hover {
text-decoration: underline;
background-color: transparent;
color: #33a;
}
div.navheader hr,
div.navfooter hr {
display: none;
}
.qandaset tr.question td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.qandaset tr.answer td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.answer td {
padding-bottom: 1.5em;
}
.emphasis {
font-weight: bold;
}
/************* /
/ decorations /
/ *************/
.titlepage {
}
.part .title {
}
.subtitle {
border: none;
}
/*
h1 {
border: none;
}
h2 {
border-top: solid 0.2em;
border-bottom: solid 0.06em;
}
h3 {
border-top: 0em;
border-bottom: solid 0.06em;
}
h4 {
border: 0em;
border-bottom: solid 0.06em;
}
h5 {
border: 0em;
}
*/
.programlisting {
border: solid 1px;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example {
border: 1px solid;
}
.tip,
.warning,
.caution,
.note {
border: 1px solid;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom: 1px solid;
}
.question td {
border-top: 1px solid black;
}
.answer {
}
b.keycap,
.keycap {
border: 1px solid;
}
div.navheader, div.heading{
border-bottom: 1px solid;
}
div.navfooter, div.footing{
border-top: 1px solid;
}
/********* /
/ colors /
/ *********/
body {
color: #333;
background: white;
}
a {
background: transparent;
}
a:hover {
background-color: #dedede;
}
h1,
h2,
h3,
h4,
h5,
h6,
h7,
h8 {
background-color: transparent;
}
hr {
border-color: #aaa;
}
.tip, .warning, .caution, .note {
border-color: #fff;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom-color: #fff;
}
.warning {
background-color: #f0f0f2;
}
.caution {
background-color: #f0f0f2;
}
.tip {
background-color: #f0f0f2;
}
.note {
background-color: #f0f0f2;
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
color: #044;
}
div.figure,
div.table,
div.example,
div.informalfigure,
div.informaltable,
div.informalexample {
border-color: #aaa;
}
pre.programlisting {
color: black;
background-color: #fff;
border-color: #aaa;
border-width: 2px;
}
.guimenu,
.guilabel,
.guimenuitem {
background-color: #eee;
}
b.keycap,
.keycap {
background-color: #eee;
border-color: #999;
}
div.navheader {
border-color: black;
}
div.navfooter {
border-color: black;
}
/*********** /
/ graphics /
/ ***********/
/*
body {
background-image: url("images/body_bg.jpg");
background-attachment: fixed;
}
.navheader,
.note,
.tip {
background-image: url("images/note_bg.jpg");
background-attachment: fixed;
}
.warning,
.caution {
background-image: url("images/warning_bg.jpg");
background-attachment: fixed;
}
.figure,
.informalfigure,
.example,
.informalexample,
.table,
.informaltable {
background-image: url("images/figure_bg.jpg");
background-attachment: fixed;
}
*/
h1,
h2,
h3,
h4,
h5,
h6,
h7{
}
/*
Example of how to stick an image as part of the title.
div.article .titlepage .title
{
background-image: url("figures/white-on-black.png");
background-position: center;
background-repeat: repeat-x;
}
*/
div.preface .titlepage .title,
div.colophon .title,
div.chapter .titlepage .title,
div.article .titlepage .title
{
}
div.section div.section .titlepage .title,
div.sect2 .titlepage .title {
background: none;
}
h1.title {
background-color: transparent;
background-repeat: no-repeat;
height: 256px;
text-indent: -9000px;
overflow:hidden;
}
h2.subtitle {
background-color: transparent;
text-indent: -9000px;
overflow:hidden;
width: 0px;
display: none;
}
/*************************************** /
/ pippin.gimp.org specific alterations /
/ ***************************************/
/*
div.heading, div.navheader {
color: #777;
font-size: 80%;
padding: 0;
margin: 0;
text-align: left;
position: absolute;
top: 0px;
left: 0px;
width: 100%;
height: 50px;
background: url('/gfx/heading_bg.png') transparent;
background-repeat: repeat-x;
background-attachment: fixed;
border: none;
}
div.heading a {
color: #444;
}
div.footing, div.navfooter {
border: none;
color: #ddd;
font-size: 80%;
text-align:right;
width: 100%;
padding-top: 10px;
position: absolute;
bottom: 0px;
left: 0px;
background: url('/gfx/footing_bg.png') transparent;
}
*/
/****************** /
/ nasty ie tweaks /
/ ******************/
/*
div.heading, div.navheader {
width:expression(document.body.clientWidth + "px");
}
div.footing, div.navfooter {
width:expression(document.body.clientWidth + "px");
margin-left:expression("-5em");
}
body {
padding:expression("4em 5em 0em 5em");
}
*/
/**************************************** /
/ mozilla vendor specific css extensions /
/ ****************************************/
/*
div.navfooter, div.footing{
-moz-opacity: 0.8em;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example,
.tip,
.warning,
.caution,
.note {
-moz-border-radius: 0.5em;
}
b.keycap,
.keycap {
-moz-border-radius: 0.3em;
}
*/
table tr td table tr td {
display: none;
}
hr {
display: none;
}
table {
border: 0em;
}
.photo {
float: right;
margin-left: 1.5em;
margin-bottom: 1.5em;
margin-top: 0em;
max-width: 17em;
border: 1px solid gray;
padding: 3px;
background: white;
}
.seperator {
padding-top: 2em;
clear: both;
}
#validators {
margin-top: 5em;
text-align: right;
color: #777;
}
@media print {
body {
font-size: 8pt;
}
.noprint {
display: none;
}
}
.tip,
.note {
background: #f0f0f2;
color: #333;
padding: 20px;
margin: 20px;
}
.tip h3,
.note h3 {
padding: 0em;
margin: 0em;
font-size: 2em;
font-weight: bold;
color: #333;
}
.tip a,
.note a {
color: #333;
text-decoration: underline;
}
.footnote {
font-size: small;
color: #333;
}
/* Changes the announcement text */
.tip h3,
.warning h3,
.caution h3,
.note h3 {
font-size:large;
color: #00557D;
}

View File

@@ -0,0 +1,88 @@
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book id='bitbake-user-manual' lang='en'
xmlns:xi="http://www.w3.org/2003/XInclude"
xmlns="http://docbook.org/ns/docbook"
>
<bookinfo>
<mediaobject>
<imageobject>
<imagedata fileref='figures/bitbake-title.png'
format='SVG'
align='left' scalefit='1' width='100%'/>
</imageobject>
</mediaobject>
<title>
BitBake User Manual
</title>
<authorgroup>
<author>
<firstname>Richard Purdie, Chris Larson, and </firstname> <surname>Phil Blundell</surname>
<affiliation>
<orgname>BitBake Community</orgname>
</affiliation>
<email>bitbake-devel@lists.openembedded.org</email>
</author>
</authorgroup>
<!--
# Add in some revision history if we want it here.
<revhistory>
<revision>
<revnumber>x.x</revnumber>
<date>dd month year</date>
<revremark>Some relevant comment</revremark>
</revision>
<revision>
<revnumber>x.x</revnumber>
<date>dd month year</date>
<revremark>Some relevant comment</revremark>
</revision>
<revision>
<revnumber>x.x</revnumber>
<date>dd month year</date>
<revremark>Some relevant comment</revremark>
</revision>
<revision>
<revnumber>x.x</revnumber>
<date>dd month year</date>
<revremark>Some relevant comment</revremark>
</revision>
</revhistory>
-->
<copyright>
<year>2004-2018</year>
<holder>Richard Purdie</holder>
<holder>Chris Larson</holder>
<holder>and Phil Blundell</holder>
</copyright>
<legalnotice>
<para>
This work is licensed under the Creative Commons Attribution License.
To view a copy of this license, visit
<ulink url="http://creativecommons.org/licenses/by/2.5/">http://creativecommons.org/licenses/by/2.5/</ulink>
or send a letter to Creative Commons, 444 Castro Street,
Suite 900, Mountain View, California 94041, USA.
</para>
</legalnotice>
</bookinfo>
<xi:include href="bitbake-user-manual-intro.xml"/>
<xi:include href="bitbake-user-manual-execution.xml"/>
<xi:include href="bitbake-user-manual-metadata.xml"/>
<xi:include href="bitbake-user-manual-fetching.xml"/>
<xi:include href="bitbake-user-manual-ref-variables.xml"/>
<xi:include href="bitbake-user-manual-hello.xml"/>
</book>

View File

@@ -0,0 +1,281 @@
/* Feuille de style DocBook du projet Traduc.org */
/* DocBook CSS stylesheet of the Traduc.org project */
/* (c) Jean-Philippe Guérard - 14 août 2004 */
/* (c) Jean-Philippe Gu<47>rard - 14 August 2004 */
/* Cette feuille de style est libre, vous pouvez la */
/* redistribuer et la modifier selon les termes de la Licence */
/* Art Libre. Vous trouverez un exemplaire de cette Licence sur */
/* http://tigreraye.org/Petit-guide-du-traducteur.html#licence-art-libre */
/* This work of art is free, you can redistribute it and/or */
/* modify it according to terms of the Free Art license. You */
/* will find a specimen of this license on the Copyleft */
/* Attitude web site: http://artlibre.org as well as on other */
/* sites. */
/* Please note that the French version of this licence as shown */
/* on http://tigreraye.org/Petit-guide-du-traducteur.html#licence-art-libre */
/* is the only official licence of this document. The English */
/* version is only provided to help you understand this licence. */
/* La dernière version de cette feuille de style est toujours */
/* disponible sur : http://tigreraye.org/style.css */
/* Elle est également disponible sur : */
/* http://www.traduc.org/docs/HOWTO/lecture/style.css */
/* The latest version of this stylesheet is available from: */
/* http://tigreraye.org/style.css */
/* It is also available on: */
/* http://www.traduc.org/docs/HOWTO/lecture/style.css */
/* N'hésitez pas à envoyer vos commentaires et corrections à */
/* Jean-Philippe Guérard <jean-philippe.guerard@tigreraye.org> */
/* Please send feedback and bug reports to */
/* Jean-Philippe Guérard <jean-philippe.guerard@tigreraye.org> */
/* $Id: style.css,v 1.14 2004/09/10 20:12:09 fevrier Exp fevrier $ */
/* Présentation générale du document */
/* Overall document presentation */
body {
/*
font-family: Apolline, "URW Palladio L", Garamond, jGaramond,
"Bitstream Cyberbit", "Palatino Linotype", serif;
*/
margin: 7%;
background-color: white;
}
/* Taille du texte */
/* Text size */
* { font-size: 100%; }
/* Gestion des textes mis en relief imbriqués */
/* Embedded emphasis */
em { font-style: italic; }
em em { font-style: normal; }
em em em { font-style: italic; }
/* Titres */
/* Titles */
h1 { font-size: 200%; font-weight: 900; }
h2 { font-size: 160%; font-weight: 900; }
h3 { font-size: 130%; font-weight: bold; }
h4 { font-size: 115%; font-weight: bold; }
h5 { font-size: 108%; font-weight: bold; }
h6 { font-weight: bold; }
/* Nom de famille en petites majuscules (uniquement en français) */
/* Last names in small caps (for French only) */
*[class~="surname"]:lang(fr) { font-variant: small-caps; }
/* Blocs de citation */
/* Quotation blocs */
div[class~="blockquote"] {
border: solid 2px #AAA;
padding: 5px;
margin: 5px;
}
div[class~="blockquote"] > table {
border: none;
}
/* Blocs littéraux : fond gris clair */
/* Literal blocs: light gray background */
*[class~="literallayout"] {
background: #f0f0f0;
padding: 5px;
margin: 5px;
}
/* Programmes et captures texte : fond bleu clair */
/* Listing and text screen snapshots: light blue background */
*[class~="programlisting"], *[class~="screen"] {
background: #f0f0ff;
padding: 5px;
margin: 5px;
}
/* Les textes à remplacer sont surlignés en vert pâle */
/* Replaceable text is highlighted in pale green */
*[class~="replaceable"] {
background-color: #98fb98;
font-style: normal; }
/* Tables : fonds gris clair & bords simples */
/* Tables: light gray background and solid borders */
*[class~="table"] *[class~="title"] { width:100%; border: 0px; }
table {
border: 1px solid #aaa;
border-collapse: collapse;
padding: 2px;
margin: 5px;
}
/* Listes simples en style table */
/* Simple lists in table presentation */
table[class~="simplelist"] {
background-color: #F0F0F0;
margin: 5px;
border: solid 1px #AAA;
}
table[class~="simplelist"] td {
border: solid 1px #AAA;
}
/* Les tables */
/* Tables */
*[class~="table"] table {
background-color: #F0F0F0;
border: solid 1px #AAA;
}
*[class~="informaltable"] table { background-color: #F0F0F0; }
th,td {
vertical-align: baseline;
text-align: left;
padding: 0.1em 0.3em;
empty-cells: show;
}
/* Alignement des colonnes */
/* Columns alignment */
td[align=center] , th[align=center] { text-align: center; }
td[align=right] , th[align=right] { text-align: right; }
td[align=left] , th[align=left] { text-align: left; }
td[align=justify] , th[align=justify] { text-align: justify; }
/* Pas de marge autour des images */
/* No inside margins for images */
img { border: 0; }
/* Les liens ne sont pas soulignés */
/* No underlines for links */
:link , :visited , :active { text-decoration: none; }
/* Prudence : cadre jaune et fond jaune clair */
/* Caution: yellow border and light yellow background */
*[class~="caution"] {
border: solid 2px yellow;
background-color: #ffffe0;
padding: 1em 6px 1em ;
margin: 5px;
}
*[class~="caution"] th {
vertical-align: middle
}
*[class~="caution"] table {
background-color: #ffffe0;
border: none;
}
/* Note importante : cadre jaune et fond jaune clair */
/* Important: yellow border and light yellow background */
*[class~="important"] {
border: solid 2px yellow;
background-color: #ffffe0;
padding: 1em 6px 1em;
margin: 5px;
}
*[class~="important"] th {
vertical-align: middle
}
*[class~="important"] table {
background-color: #ffffe0;
border: none;
}
/* Mise en évidence : texte légèrement plus grand */
/* Highlights: slightly larger texts */
*[class~="highlights"] {
font-size: 110%;
}
/* Note : cadre bleu et fond bleu clair */
/* Notes: blue border and light blue background */
*[class~="note"] {
border: solid 2px #7099C5;
background-color: #f0f0ff;
padding: 1em 6px 1em ;
margin: 5px;
}
*[class~="note"] th {
vertical-align: middle
}
*[class~="note"] table {
background-color: #f0f0ff;
border: none;
}
/* Astuce : cadre vert et fond vert clair */
/* Tip: green border and light green background */
*[class~="tip"] {
border: solid 2px #00ff00;
background-color: #f0ffff;
padding: 1em 6px 1em ;
margin: 5px;
}
*[class~="tip"] th {
vertical-align: middle;
}
*[class~="tip"] table {
background-color: #f0ffff;
border: none;
}
/* Avertissement : cadre rouge et fond rouge clair */
/* Warning: red border and light red background */
*[class~="warning"] {
border: solid 2px #ff0000;
background-color: #fff0f0;
padding: 1em 6px 1em ;
margin: 5px;
}
*[class~="warning"] th {
vertical-align: middle;
}
*[class~="warning"] table {
background-color: #fff0f0;
border: none;
}
/* Fin */
/* The End */

View File

@@ -1,101 +0,0 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import sys
import datetime
current_version = "dev"
# String used in sidebar
version = 'Version: ' + current_version
if current_version == 'dev':
version = 'Version: Current Development'
# Version seen in documentation_options.js and hence in js switchers code
release = current_version
# -- Project information -----------------------------------------------------
project = 'Bitbake'
copyright = '2004-%s, Richard Purdie, Chris Larson, and Phil Blundell' \
% datetime.datetime.now().year
author = 'Richard Purdie, Chris Larson, and Phil Blundell'
# external links and substitutions
extlinks = {
'yocto_docs': ('https://docs.yoctoproject.org%s', None),
'oe_lists': ('https://lists.openembedded.org%s', None),
}
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autosectionlabel',
'sphinx.ext.extlinks',
]
autosectionlabel_prefix_document = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# master document name. The default changed from contents to index, so better
# set it ourselves.
master_doc = 'index'
# create substitution for project configuration variables
rst_prolog = """
.. |project_name| replace:: %s
.. |copyright| replace:: %s
.. |author| replace:: %s
""" % (project, copyright, author)
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
try:
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
except ImportError:
sys.stderr.write("The Sphinx sphinx_rtd_theme HTML theme was not found.\
\nPlease make sure to install the sphinx_rtd_theme python package.\n")
sys.exit(1)
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['sphinx-static']
# Add custom CSS and JS files
html_css_files = ['theme_overrides.css']
html_js_files = ['switchers.js']
# Hide 'Created using Sphinx' text
html_show_sphinx = False
# Add 'Last updated' on each page
html_last_updated_fmt = '%b %d, %Y'
# Remove the trailing 'dot' in section numbers
html_secnumber_suffix = " "

View File

@@ -1,3 +0,0 @@
=====
Index
=====

View File

@@ -1,38 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
===================
BitBake User Manual
===================
|
.. toctree::
:caption: Table of Contents
:numbered:
bitbake-user-manual/bitbake-user-manual-intro
bitbake-user-manual/bitbake-user-manual-execution
bitbake-user-manual/bitbake-user-manual-metadata
bitbake-user-manual/bitbake-user-manual-fetching
bitbake-user-manual/bitbake-user-manual-ref-variables
bitbake-user-manual/bitbake-user-manual-hello
.. toctree::
:maxdepth: 1
:hidden:
genindex
releases
----
.. include:: <xhtml1-lat1.txt>
| BitBake Community
| Copyright |copy| |copyright|
| <bitbake-devel@lists.openembedded.org>
This work is licensed under the Creative Commons Attribution License. To view a
copy of this license, visit http://creativecommons.org/licenses/by/2.5/ or send
a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View,
California 94041, USA.

51
bitbake/doc/poky.ent Normal file
View File

@@ -0,0 +1,51 @@
<!ENTITY DISTRO "1.4">
<!ENTITY DISTRO_NAME "tbd">
<!ENTITY YOCTO_DOC_VERSION "1.4">
<!ENTITY POKYVERSION "8.0">
<!ENTITY YOCTO_POKY "poky-&DISTRO_NAME;-&POKYVERSION;">
<!ENTITY COPYRIGHT_YEAR "2010-2013">
<!ENTITY YOCTO_DL_URL "http://downloads.yoctoproject.org">
<!ENTITY YOCTO_HOME_URL "http://www.yoctoproject.org">
<!ENTITY YOCTO_LISTS_URL "http://lists.yoctoproject.org">
<!ENTITY YOCTO_BUGZILLA_URL "http://bugzilla.yoctoproject.org">
<!ENTITY YOCTO_WIKI_URL "https://wiki.yoctoproject.org">
<!ENTITY YOCTO_AB_URL "http://autobuilder.yoctoproject.org">
<!ENTITY YOCTO_GIT_URL "http://git.yoctoproject.org">
<!ENTITY YOCTO_ADTREPO_URL "http://adtrepo.yoctoproject.org">
<!ENTITY OE_HOME_URL "http://www.openembedded.org">
<!ENTITY OE_LISTS_URL "http://lists.linuxtogo.org/cgi-bin/mailman">
<!ENTITY OE_DOCS_URL "http://docs.openembedded.org">
<!ENTITY OH_HOME_URL "http://o-hand.com">
<!ENTITY BITBAKE_HOME_URL "http://developer.berlios.de/projects/bitbake/">
<!ENTITY YOCTO_DOCS_URL "&YOCTO_HOME_URL;/docs">
<!ENTITY YOCTO_SOURCES_URL "&YOCTO_HOME_URL;/sources/">
<!ENTITY YOCTO_AB_PORT_URL "&YOCTO_AB_URL;:8010">
<!ENTITY YOCTO_AB_NIGHTLY_URL "&YOCTO_AB_URL;/nightly/">
<!ENTITY YOCTO_POKY_URL "&YOCTO_DL_URL;/releases/poky/">
<!ENTITY YOCTO_RELEASE_DL_URL "&YOCTO_DL_URL;/releases/yocto/yocto-&DISTRO;">
<!ENTITY YOCTO_TOOLCHAIN_DL_URL "&YOCTO_RELEASE_DL_URL;/toolchain/">
<!ENTITY YOCTO_ADTINSTALLER_DL_URL "&YOCTO_RELEASE_DL_URL;/adt_installer">
<!ENTITY YOCTO_POKY_DL_URL "&YOCTO_RELEASE_DL_URL;/&YOCTO_POKY;.tar.bz2">
<!ENTITY YOCTO_MACHINES_DL_URL "&YOCTO_RELEASE_DL_URL;/machines">
<!ENTITY YOCTO_QEMU_DL_URL "&YOCTO_MACHINES_DL_URL;/qemu">
<!ENTITY YOCTO_PYTHON-i686_DL_URL "&YOCTO_DL_URL;/releases/miscsupport/python-nativesdk-standalone-i686.tar.bz2">
<!ENTITY YOCTO_PYTHON-x86_64_DL_URL "&YOCTO_DL_URL;/releases/miscsupport/python-nativesdk-standalone-x86_64.tar.bz2">
<!ENTITY YOCTO_DOCS_QS_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/yocto-project-qs/yocto-project-qs.html">
<!ENTITY YOCTO_DOCS_ADT_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/adt-manual/adt-manual.html">
<!ENTITY YOCTO_DOCS_REF_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/ref-manual/ref-manual.html">
<!ENTITY YOCTO_DOCS_BSP_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/bsp-guide/bsp-guide.html">
<!ENTITY YOCTO_DOCS_DEV_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/dev-manual/dev-manual.html">
<!ENTITY YOCTO_DOCS_KERNEL_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/kernel-manual/kernel-manual.html">
<!ENTITY YOCTO_ADTPATH_DIR "/opt/poky/&DISTRO;">
<!ENTITY YOCTO_POKY_TARBALL "&YOCTO_POKY;.tar.bz2">
<!ENTITY OE_INIT_PATH "&YOCTO_POKY;/oe-init-build-env">
<!ENTITY OE_INIT_FILE "oe-init-build-env">
<!ENTITY UBUNTU_HOST_PACKAGES_ESSENTIAL "gawk wget git-core diffstat unzip texinfo \
build-essential chrpath">
<!ENTITY FEDORA_HOST_PACKAGES_ESSENTIAL "gawk make wget tar bzip2 gzip python unzip perl patch \
diffutils diffstat git cpp gcc gcc-c++ eglibc-devel texinfo chrpath \
ccache">
<!ENTITY OPENSUSE_HOST_PACKAGES_ESSENTIAL "python gcc gcc-c++ git chrpath make wget python-xml \
diffstat texinfo python-curses">
<!ENTITY CENTOS_HOST_PACKAGES_ESSENTIAL "gawk make wget tar bzip2 gzip python unzip perl patch \
diffutils diffstat git cpp gcc gcc-c++ glibc-devel texinfo chrpath">

View File

@@ -1,130 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
=========================
Current Release Manuals
=========================
****************************
3.1 'dunfell' Release Series
****************************
- :yocto_docs:`3.1 BitBake User Manual </3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.1 BitBake User Manual </3.1.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.2 BitBake User Manual </3.1.2/bitbake-user-manual/bitbake-user-manual.html>`
==========================
Previous Release Manuals
==========================
*************************
3.0 'zeus' Release Series
*************************
- :yocto_docs:`3.0 BitBake User Manual </3.0/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.1 BitBake User Manual </3.0.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.2 BitBake User Manual </3.0.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.3 BitBake User Manual </3.0.3/bitbake-user-manual/bitbake-user-manual.html>`
****************************
2.7 'warrior' Release Series
****************************
- :yocto_docs:`2.7 BitBake User Manual </2.7/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.7.1 BitBake User Manual </2.7.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.7.2 BitBake User Manual </2.7.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.7.3 BitBake User Manual </2.7.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.7.4 BitBake User Manual </2.7.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
2.6 'thud' Release Series
*************************
- :yocto_docs:`2.6 BitBake User Manual </2.6/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.6.1 BitBake User Manual </2.6.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.6.2 BitBake User Manual </2.6.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.6.3 BitBake User Manual </2.6.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.6.4 BitBake User Manual </2.6.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
2.5 'sumo' Release Series
*************************
- :yocto_docs:`2.5 BitBake User Manual </2.5/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.1 BitBake User Manual </2.5.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.2 BitBake User Manual </2.5.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.3 BitBake User Manual </2.5.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
2.4 'rocko' Release Series
**************************
- :yocto_docs:`2.4 BitBake User Manual </2.4/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.4.1 BitBake User Manual </2.4.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.4.2 BitBake User Manual </2.4.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.4.3 BitBake User Manual </2.4.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.4.4 BitBake User Manual </2.4.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
2.3 'pyro' Release Series
*************************
- :yocto_docs:`2.3 BitBake User Manual </2.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.3.1 BitBake User Manual </2.3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.3.2 BitBake User Manual </2.3.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.3.3 BitBake User Manual </2.3.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.3.4 BitBake User Manual </2.3.4/bitbake-user-manual/bitbake-user-manual.html>`
**************************
2.2 'morty' Release Series
**************************
- :yocto_docs:`2.2 BitBake User Manual </2.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.2.1 BitBake User Manual </2.2.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.2.2 BitBake User Manual </2.2.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.2.3 BitBake User Manual </2.2.3/bitbake-user-manual/bitbake-user-manual.html>`
****************************
2.1 'krogoth' Release Series
****************************
- :yocto_docs:`2.1 BitBake User Manual </2.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.1.1 BitBake User Manual </2.1.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.1.2 BitBake User Manual </2.1.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.1.3 BitBake User Manual </2.1.3/bitbake-user-manual/bitbake-user-manual.html>`
***************************
2.0 'jethro' Release Series
***************************
- :yocto_docs:`1.9 BitBake User Manual </1.9/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.0 BitBake User Manual </2.0/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.0.1 BitBake User Manual </2.0.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.0.2 BitBake User Manual </2.0.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.0.3 BitBake User Manual </2.0.3/bitbake-user-manual/bitbake-user-manual.html>`
*************************
1.8 'fido' Release Series
*************************
- :yocto_docs:`1.8 BitBake User Manual </1.8/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.8.1 BitBake User Manual </1.8.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.8.2 BitBake User Manual </1.8.2/bitbake-user-manual/bitbake-user-manual.html>`
**************************
1.7 'dizzy' Release Series
**************************
- :yocto_docs:`1.7 BitBake User Manual </1.7/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.7.1 BitBake User Manual </1.7.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.7.2 BitBake User Manual </1.7.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.7.3 BitBake User Manual </1.7.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
1.6 'daisy' Release Series
**************************
- :yocto_docs:`1.6 BitBake User Manual </1.6/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.6.1 BitBake User Manual </1.6.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.6.2 BitBake User Manual </1.6.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.6.3 BitBake User Manual </1.6.3/bitbake-user-manual/bitbake-user-manual.html>`

View File

@@ -1,233 +0,0 @@
(function() {
'use strict';
var all_versions = {
'dev': 'dev (3.2)',
'3.1.2': '3.1.2',
'3.0.3': '3.0.3',
'2.7.4': '2.7.4',
};
var all_doctypes = {
'single': 'Individual Webpages',
'mega': "All-in-one 'Mega' Manual",
};
// Simple version comparison
// Return 1 if a > b
// Return -1 if a < b
// Return 0 if a == b
function ver_compare(a, b) {
if (a == "dev") {
return 1;
}
if (a === b) {
return 0;
}
var a_components = a.split(".");
var b_components = b.split(".");
var len = Math.min(a_components.length, b_components.length);
// loop while the components are equal
for (var i = 0; i < len; i++) {
// A bigger than B
if (parseInt(a_components[i]) > parseInt(b_components[i])) {
return 1;
}
// B bigger than A
if (parseInt(a_components[i]) < parseInt(b_components[i])) {
return -1;
}
}
// If one's a prefix of the other, the longer one is greater.
if (a_components.length > b_components.length) {
return 1;
}
if (a_components.length < b_components.length) {
return -1;
}
// Otherwise they are the same.
return 0;
}
function build_version_select(current_series, current_version) {
var buf = ['<select>'];
$.each(all_versions, function(version, title) {
var series = version.substr(0, 3);
if (series == current_series) {
if (version == current_version)
buf.push('<option value="' + version + '" selected="selected">' + title + '</option>');
else
buf.push('<option value="' + version + '">' + title + '</option>');
if (version != current_version)
buf.push('<option value="' + current_version + '" selected="selected">' + current_version + '</option>');
} else {
buf.push('<option value="' + version + '">' + title + '</option>');
}
});
buf.push('</select>');
return buf.join('');
}
function build_doctype_select(current_doctype) {
var buf = ['<select>'];
$.each(all_doctypes, function(doctype, title) {
if (doctype == current_doctype)
buf.push('<option value="' + doctype + '" selected="selected">' +
all_doctypes[current_doctype] + '</option>');
else
buf.push('<option value="' + doctype + '">' + title + '</option>');
});
if (!(current_doctype in all_doctypes)) {
// In case we're browsing a doctype that is not yet in all_doctypes.
buf.push('<option value="' + current_doctype + '" selected="selected">' +
current_doctype + '</option>');
all_doctypes[current_doctype] = current_doctype;
}
buf.push('</select>');
return buf.join('');
}
function navigate_to_first_existing(urls) {
// Navigate to the first existing URL in urls.
var url = urls.shift();
// Web browsers won't redirect file:// urls to file urls using ajax but
// its useful for local testing
if (url.startsWith("file://")) {
window.location.href = url;
return;
}
if (urls.length == 0) {
window.location.href = url;
return;
}
$.ajax({
url: url,
success: function() {
window.location.href = url;
},
error: function() {
navigate_to_first_existing(urls);
}
});
}
function get_docroot_url() {
var url = window.location.href;
var root = DOCUMENTATION_OPTIONS.URL_ROOT;
var urlarray = url.split('/');
// Trim off anything after '/'
urlarray.pop();
var depth = (root.match(/\.\.\//g) || []).length;
for (var i = 0; i < depth; i++) {
urlarray.pop();
}
return urlarray.join('/') + '/';
}
function on_version_switch() {
var selected_version = $(this).children('option:selected').attr('value');
var url = window.location.href;
var current_version = DOCUMENTATION_OPTIONS.VERSION;
var docroot = get_docroot_url()
var new_versionpath = selected_version + '/';
if (selected_version == "dev")
new_versionpath = '';
// dev versions have no version prefix
if (current_version == "dev") {
var new_url = docroot + new_versionpath + url.replace(docroot, "");
var fallback_url = docroot + new_versionpath;
} else {
var new_url = url.replace('/' + current_version + '/', '/' + new_versionpath);
var fallback_url = new_url.replace(url.replace(docroot, ""), "");
}
console.log(get_docroot_url())
console.log(url + " to url " + new_url);
console.log(url + " to fallback " + fallback_url);
if (new_url != url) {
navigate_to_first_existing([
new_url,
fallback_url,
'https://www.yoctoproject.org/docs/',
]);
}
}
function on_doctype_switch() {
var selected_doctype = $(this).children('option:selected').attr('value');
var url = window.location.href;
if (selected_doctype == 'mega') {
var docroot = get_docroot_url()
var current_version = DOCUMENTATION_OPTIONS.VERSION;
// Assume manuals before 3.2 are using old docbook mega-manual
if (ver_compare(current_version, "3.2") < 0) {
var new_url = docroot + "mega-manual/mega-manual.html";
} else {
var new_url = docroot + "singleindex.html";
}
} else {
var new_url = url.replace("singleindex.html", "index.html")
}
if (new_url != url) {
navigate_to_first_existing([
new_url,
'https://www.yoctoproject.org/docs/',
]);
}
}
// Returns the current doctype based upon the url
function doctype_segment_from_url(url) {
if (url.includes("singleindex") || url.includes("mega-manual"))
return "mega";
return "single";
}
$(document).ready(function() {
var release = DOCUMENTATION_OPTIONS.VERSION;
var current_doctype = doctype_segment_from_url(window.location.href);
var current_series = release.substr(0, 3);
var version_select = build_version_select(current_series, release);
$('.version_switcher_placeholder').html(version_select);
$('.version_switcher_placeholder select').bind('change', on_version_switch);
var doctype_select = build_doctype_select(current_doctype);
$('.doctype_switcher_placeholder').html(doctype_select);
$('.doctype_switcher_placeholder select').bind('change', on_doctype_switch);
if (ver_compare(release, "3.1") < 0) {
$('#outdated-warning').html('Version ' + release + ' of the project is now considered obsolete, please select and use a more recent version');
$('#outdated-warning').css('padding', '.5em');
} else if (release != "dev") {
$.each(all_versions, function(version, title) {
var series = version.substr(0, 3);
if (series == current_series && version != release) {
$('#outdated-warning').html('This document is for outdated version ' + release + ', you should select the latest release version in this series, ' + version + '.');
$('#outdated-warning').css('padding', '.5em');
}
});
}
});
})();
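
The deleted switcher script above orders releases with a simple component-wise comparison: ver_compare(a, b) returns 1 when a is newer, -1 when older, 0 when equal, and always treats "dev" as the newest release. A minimal Python sketch of the same ordering rule, for illustration only (the function and the sample versions below are not part of the patch):

    def ver_compare(a, b):
        # "dev" always sorts as the newest release
        if a == "dev":
            return 1
        if a == b:
            return 0
        a_parts = [int(x) for x in a.split(".")]
        b_parts = [int(x) for x in b.split(".")]
        for x, y in zip(a_parts, b_parts):
            if x != y:
                return 1 if x > y else -1
        # If one version is a prefix of the other, the longer one is newer
        return (len(a_parts) > len(b_parts)) - (len(a_parts) < len(b_parts))

    # e.g. 3.1.2 is newer than 3.0.3, and "dev" is newest of all
    assert ver_compare("3.1.2", "3.0.3") == 1
    assert ver_compare("dev", "3.1.2") == 1
    assert ver_compare("3.0", "3.0.3") == -1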


@@ -1,162 +0,0 @@
/*
SPDX-License-Identifier: CC-BY-2.0-UK
*/
body {
font-family: Verdana, Sans, sans-serif;
margin: 0em auto;
color: #333;
}
h1,h2,h3,h4,h5,h6,h7 {
font-family: Arial, Sans;
color: #00557D;
clear: both;
}
h1 {
font-size: 2em;
text-align: left;
padding: 0em 0em 0em 0em;
margin: 2em 0em 0em 0em;
}
h2.subtitle {
margin: 0.10em 0em 3.0em 0em;
padding: 0em 0em 0em 0em;
font-size: 1.8em;
padding-left: 20%;
font-weight: normal;
font-style: italic;
}
h2 {
margin: 2em 0em 0.66em 0em;
padding: 0.5em 0em 0em 0em;
font-size: 1.5em;
font-weight: bold;
}
h3.subtitle {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
font-size: 142.14%;
text-align: right;
}
h3 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 140%;
font-weight: bold;
}
h4 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 120%;
font-weight: bold;
}
h5 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
}
h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
}
em {
font-weight: bold;
}
.pre {
font-size: medium;
font-family: Courier, monospace;
}
.wy-nav-content a {
text-decoration: underline;
color: #444;
background: transparent;
}
.wy-nav-content a:hover {
text-decoration: underline;
background-color: #dedede;
}
.wy-nav-content a:visited {
color: #444;
}
[alt='Permalink'] { color: #eee; }
[alt='Permalink']:hover { color: black; }
@media screen {
/* content column
*
* RTD theme's default is 800px as max width for the content, but we have
* tables with tons of columns, which need the full width of the view-port.
*/
.wy-nav-content{max-width: none; }
/* inline literal: drop the borderbox, padding and red color */
code, .rst-content tt, .rst-content code {
color: inherit;
border: none;
padding: unset;
background: inherit;
font-size: 85%;
}
.rst-content tt.literal,.rst-content tt.literal,.rst-content code.literal {
color: inherit;
}
/* Admonition should be gray, not blue or green */
.rst-content .note .admonition-title,
.rst-content .tip .admonition-title,
.rst-content .warning .admonition-title,
.rst-content .caution .admonition-title,
.rst-content .important .admonition-title {
background: #f0f0f2;
color: #00557D;
}
.rst-content .note,
.rst-content .tip,
.rst-content .important,
.rst-content .warning,
.rst-content .caution {
background: #f0f0f2;
}
/* Remove the icon in front of note/tip element, and before the logo */
.icon-home:before, .rst-content .admonition-title:before {
display: none
}
/* a custom informalexample container is used in some doc */
.informalexample {
border: 1px solid;
border-color: #aaa;
margin: 1em 0em;
padding: 1em;
page-break-inside: avoid;
}
/* Remove the blue background in the top left corner, around the logo */
.wy-side-nav-search {
background: inherit;
}
}

bitbake/doc/template/Vera.xml (new vendored file; diff suppressed because one or more lines are too long)

bitbake/doc/template/VeraMoBd.xml (new vendored file; diff suppressed because one or more lines are too long)

bitbake/doc/template/VeraMono.xml (new vendored file; diff suppressed because one or more lines are too long)


@@ -0,0 +1,39 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml"
exclude-result-prefixes="d">
<xsl:template name="component.title">
<xsl:param name="node" select="."/>
<xsl:variable name="level">
<xsl:choose>
<xsl:when test="ancestor::d:section">
<xsl:value-of select="count(ancestor::d:section)+1"/>
</xsl:when>
<xsl:when test="ancestor::d:sect5">6</xsl:when>
<xsl:when test="ancestor::d:sect4">5</xsl:when>
<xsl:when test="ancestor::d:sect3">4</xsl:when>
<xsl:when test="ancestor::d:sect2">3</xsl:when>
<xsl:when test="ancestor::d:sect1">2</xsl:when>
<xsl:otherwise>1</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:element name="h{$level+1}" namespace="http://www.w3.org/1999/xhtml">
<xsl:attribute name="class">title</xsl:attribute>
<xsl:if test="$generate.id.attributes = 0">
<xsl:call-template name="anchor">
<xsl:with-param name="node" select="$node"/>
<xsl:with-param name="conditional" select="0"/>
</xsl:call-template>
</xsl:if>
<xsl:apply-templates select="$node" mode="object.title.markup">
<xsl:with-param name="allow-anchors" select="1"/>
</xsl:apply-templates>
<xsl:call-template name="permalink">
<xsl:with-param name="node" select="$node"/>
</xsl:call-template>
</xsl:element>
</xsl:template>
</xsl:stylesheet>

bitbake/doc/template/db-pdf.xsl (new vendored file, 64 lines)

@@ -0,0 +1,64 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/fo/docbook.xsl" />
<!-- check project-plan.sh for how this is generated, needed to tweak
the cover page
-->
<xsl:include href="/tmp/titlepage.xsl"/>
<!-- To force a page break in document, i.e per section add a
<?hard-pagebreak?> tag.
-->
<xsl:template match="processing-instruction('hard-pagebreak')">
<fo:block break-before='page' />
</xsl:template>
<!--Fix for default indent getting TOC all weird..
See http://sources.redhat.com/ml/docbook-apps/2005-q1/msg00455.html
FIXME: must be a better fix
-->
<xsl:param name="body.start.indent" select="'0'"/>
<!--<xsl:param name="title.margin.left" select="'0'"/>-->
<!-- stop long-ish header titles getting wrapped -->
<xsl:param name="header.column.widths">1 10 1</xsl:param>
<!-- customise headers and footers a little -->
<xsl:template name="head.sep.rule">
<xsl:if test="$header.rule != 0">
<xsl:attribute name="border-bottom-width">0.5pt</xsl:attribute>
<xsl:attribute name="border-bottom-style">solid</xsl:attribute>
<xsl:attribute name="border-bottom-color">#cccccc</xsl:attribute>
</xsl:if>
</xsl:template>
<xsl:template name="foot.sep.rule">
<xsl:if test="$footer.rule != 0">
<xsl:attribute name="border-top-width">0.5pt</xsl:attribute>
<xsl:attribute name="border-top-style">solid</xsl:attribute>
<xsl:attribute name="border-top-color">#cccccc</xsl:attribute>
</xsl:if>
</xsl:template>
<xsl:attribute-set name="header.content.properties">
<xsl:attribute name="color">#cccccc</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="footer.content.properties">
<xsl:attribute name="color">#cccccc</xsl:attribute>
</xsl:attribute-set>
<!-- general settings -->
<xsl:param name="fop1.extensions" select="1"></xsl:param>
<xsl:param name="paper.type" select="'A4'"></xsl:param>
<xsl:param name="section.autolabel" select="1"></xsl:param>
<xsl:param name="body.font.family" select="'verasans'"></xsl:param>
<xsl:param name="title.font.family" select="'verasans'"></xsl:param>
<xsl:param name="monospace.font.family" select="'veramono'"></xsl:param>
</xsl:stylesheet>

bitbake/doc/template/division.title.xsl (new vendored file, 25 lines)

@@ -0,0 +1,25 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml"
exclude-result-prefixes="d">
<xsl:template name="division.title">
<xsl:param name="node" select="."/>
<h1>
<xsl:attribute name="class">title</xsl:attribute>
<xsl:call-template name="anchor">
<xsl:with-param name="node" select="$node"/>
<xsl:with-param name="conditional" select="0"/>
</xsl:call-template>
<xsl:apply-templates select="$node" mode="object.title.markup">
<xsl:with-param name="allow-anchors" select="1"/>
</xsl:apply-templates>
<xsl:call-template name="permalink">
<xsl:with-param name="node" select="$node"/>
</xsl:call-template>
</h1>
</xsl:template>
</xsl:stylesheet>

bitbake/doc/template/fop-config.xml (new vendored file, 58 lines)

@@ -0,0 +1,58 @@
<fop version="1.0">
<!-- Strict user configuration -->
<strict-configuration>true</strict-configuration>
<!-- Strict FO validation -->
<strict-validation>true</strict-validation>
<!--
Set the baseDir so common/openedhand.svg references in plans still
work ok. Note, relative file references to current dir should still work.
-->
<base>../template</base>
<font-base>../template</font-base>
<!-- Source resolution in dpi (dots/pixels per inch) for determining the
size of pixels in SVG and bitmap images, default: 72dpi -->
<!-- <source-resolution>72</source-resolution> -->
<!-- Target resolution in dpi (dots/pixels per inch) for specifying the
target resolution for generated bitmaps, default: 72dpi -->
<!-- <target-resolution>72</target-resolution> -->
<!-- default page-height and page-width, in case
value is specified as auto -->
<default-page-settings height="11in" width="8.26in"/>
<!-- <use-cache>false</use-cache> -->
<renderers>
<renderer mime="application/pdf">
<fonts>
<font metrics-file="VeraMono.xml"
kerning="yes"
embed-url="VeraMono.ttf">
<font-triplet name="veramono" style="normal" weight="normal"/>
</font>
<font metrics-file="VeraMoBd.xml"
kerning="yes"
embed-url="VeraMoBd.ttf">
<font-triplet name="veramono" style="normal" weight="bold"/>
</font>
<font metrics-file="Vera.xml"
kerning="yes"
embed-url="Vera.ttf">
<font-triplet name="verasans" style="normal" weight="normal"/>
<font-triplet name="verasans" style="normal" weight="bold"/>
<font-triplet name="verasans" style="italic" weight="normal"/>
<font-triplet name="verasans" style="italic" weight="bold"/>
</font>
<auto-detect/>
</fonts>
</renderer>
</renderers>
</fop>


@@ -0,0 +1,21 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml"
exclude-result-prefixes="d">
<xsl:template name="formal.object.heading">
<xsl:param name="object" select="."/>
<xsl:param name="title">
<xsl:apply-templates select="$object" mode="object.title.markup">
<xsl:with-param name="allow-anchors" select="1"/>
</xsl:apply-templates>
</xsl:param>
<p class="title">
<b><xsl:copy-of select="$title"/></b>
<xsl:call-template name="permalink">
<xsl:with-param name="node" select="$object"/>
</xsl:call-template>
</p>
</xsl:template>
</xsl:stylesheet>


@@ -0,0 +1,14 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml">
<xsl:template match="glossentry/glossterm">
<xsl:apply-imports/>
<xsl:if test="$generate.permalink != 0">
<xsl:call-template name="permalink">
<xsl:with-param name="node" select=".."/>
</xsl:call-template>
</xsl:if>
</xsl:template>
</xsl:stylesheet>

bitbake/doc/template/permalinks.xsl (new vendored file, 25 lines)

@@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
xmlns="http://www.w3.org/1999/xhtml"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:param name="generate.permalink" select="1"/>
<xsl:param name="permalink.text"></xsl:param>
<xsl:template name="permalink">
<xsl:param name="node"/>
<xsl:if test="$generate.permalink != '0'">
<span class="permalink">
<a alt="Permalink" title="Permalink">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$node"/>
</xsl:call-template>
</xsl:attribute>
<xsl:copy-of select="$permalink.text"/>
</a>
</span>
</xsl:if>
</xsl:template>
</xsl:stylesheet>

bitbake/doc/template/section.title.xsl (new vendored file, 55 lines)

@@ -0,0 +1,55 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml" exclude-result-prefixes="d">
<xsl:template name="section.title">
<xsl:variable name="section"
select="(ancestor::section |
ancestor::simplesect|
ancestor::sect1|
ancestor::sect2|
ancestor::sect3|
ancestor::sect4|
ancestor::sect5)[last()]"/>
<xsl:variable name="renderas">
<xsl:choose>
<xsl:when test="$section/@renderas = 'sect1'">1</xsl:when>
<xsl:when test="$section/@renderas = 'sect2'">2</xsl:when>
<xsl:when test="$section/@renderas = 'sect3'">3</xsl:when>
<xsl:when test="$section/@renderas = 'sect4'">4</xsl:when>
<xsl:when test="$section/@renderas = 'sect5'">5</xsl:when>
<xsl:otherwise><xsl:value-of select="''"/></xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:variable name="level">
<xsl:choose>
<xsl:when test="$renderas != ''">
<xsl:value-of select="$renderas"/>
</xsl:when>
<xsl:otherwise>
<xsl:call-template name="section.level">
<xsl:with-param name="node" select="$section"/>
</xsl:call-template>
</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:call-template name="section.heading">
<xsl:with-param name="section" select="$section"/>
<xsl:with-param name="level" select="$level"/>
<xsl:with-param name="title">
<xsl:apply-templates select="$section" mode="object.title.markup">
<xsl:with-param name="allow-anchors" select="1"/>
</xsl:apply-templates>
<xsl:if test="$level &gt; 0">
<xsl:call-template name="permalink">
<xsl:with-param name="node" select="$section"/>
</xsl:call-template>
</xsl:if>
</xsl:with-param>
</xsl:call-template>
</xsl:template>
</xsl:stylesheet>

File diff suppressed because it is too large


@@ -0,0 +1,51 @@
#!/bin/sh
if [ -z "$1" -o -z "$2" ]; then
echo "usage: [-v] $0 <docbook file> <templatedir>"
echo
echo "*NOTE* you need xsltproc, fop and nwalsh docbook stylesheets"
echo " installed for this to work!"
echo
exit 0
fi
FO=`echo $1 | sed s/.xml/.fo/` || exit 1
PDF=`echo $1 | sed s/.xml/.pdf/` || exit 1
TEMPLATEDIR=$2
##
# These URIs should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI="http://docbook.sourceforge.net/release/xsl/current"
# Creates a temporary XSL stylesheet based on titlepage.xsl
xsltproc -o /tmp/titlepage.xsl \
--xinclude \
$XSL_BASE_URI/template/titlepage.xsl \
$TEMPLATEDIR/titlepage.templates.xml || exit 1
# Creates the file needed for FOP
xsltproc --xinclude \
--stringparam hyphenate false \
--stringparam formal.title.placement "figure after" \
--stringparam ulink.show 1 \
--stringparam body.font.master 9 \
--stringparam title.font.master 11 \
--stringparam draft.watermark.image "$TEMPLATEDIR/draft.png" \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--output $FO \
$TEMPLATEDIR/db-pdf.xsl \
$1 || exit 1
# Invokes the Java version of FOP. Uses the additional configuration file common/fop-config.xml
fop -c $TEMPLATEDIR/fop-config.xml -fo $FO -pdf $PDF || exit 1
rm -f $FO
rm -f /tmp/titlepage.xsl
echo
echo " #### Success! $PDF ready. ####"
echo


@@ -3,14 +3,13 @@
#
# Copyright (C) 2006 Tim Ansell
#
# Please Note:
#Please Note:
# Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.
#
import copy
ImmutableTypes = (
bool,
complex,
@@ -23,11 +22,9 @@ ImmutableTypes = (
MUTABLE = "__mutable__"
class COWMeta(type):
pass
class COWDictMeta(COWMeta):
__warn__ = False
__hasmutable__ = False
@@ -36,15 +33,12 @@ class COWDictMeta(COWMeta):
def __str__(cls):
# FIXME: I have magic numbers!
return "<COWDict Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) - 3)
__repr__ = __str__
def cow(cls):
class C(cls):
__count__ = cls.__count__ + 1
return C
copy = cow
__call__ = cow
@@ -76,9 +70,8 @@ class COWDictMeta(COWMeta):
return value
__getmarker__ = []
def __getreadonly__(cls, key, default=__getmarker__):
"""
"""\
Get a value (even if mutable) which you promise not to change.
"""
return cls.__getitem__(key, default, True)
@@ -145,29 +138,24 @@ class COWDictMeta(COWMeta):
def iterkeys(cls):
return cls.iter("keys")
def itervalues(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you aren't going to change any of the values call with True.", file=cls.__warn__)
print("Warning: If you arn't going to change any of the values call with True.", file=cls.__warn__)
return cls.iter("values", readonly)
def iteritems(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you aren't going to change any of the values call with True.", file=cls.__warn__)
print("Warning: If you arn't going to change any of the values call with True.", file=cls.__warn__)
return cls.iter("items", readonly)
class COWSetMeta(COWDictMeta):
def __str__(cls):
# FIXME: I have magic numbers!
return "<COWSet Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) - 3)
return "<COWSet Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) -3)
__repr__ = __str__
def cow(cls):
class C(cls):
__count__ = cls.__count__ + 1
return C
def add(cls, value):
@@ -185,11 +173,131 @@ class COWSetMeta(COWDictMeta):
def iteritems(cls):
raise TypeError("sets don't have 'items'")
# These are the actual classes you use!
class COWDictBase(metaclass=COWDictMeta):
class COWDictBase(object, metaclass = COWDictMeta):
__count__ = 0
class COWSetBase(metaclass=COWSetMeta):
class COWSetBase(object, metaclass = COWSetMeta):
__count__ = 0
if __name__ == "__main__":
import sys
COWDictBase.__warn__ = sys.stderr
a = COWDictBase()
print("a", a)
a['a'] = 'a'
a['b'] = 'b'
a['dict'] = {}
b = a.copy()
print("b", b)
b['c'] = 'b'
print()
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems():
print(x)
print()
b['dict']['a'] = 'b'
b['a'] = 'c'
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems():
print(x)
print()
try:
b['dict2']
except KeyError as e:
print("Okay!")
a['set'] = COWSetBase()
a['set'].add("o1")
a['set'].add("o1")
a['set'].add("o2")
print("a", a)
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
for x in b['set'].itervalues():
print(x)
print()
b['set'].add('o3')
print("a", a)
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
for x in b['set'].itervalues():
print(x)
print()
a['set2'] = set()
a['set2'].add("o1")
a['set2'].add("o1")
a['set2'].add("o2")
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems(readonly=True):
print(x)
print()
del b['b']
try:
print(b['b'])
except KeyError:
print("Yay! deleted key raises error")
if 'b' in b:
print("Boo!")
else:
print("Yay - has_key with delete works!")
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems(readonly=True):
print(x)
print()
b.__revertitem__('b')
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems(readonly=True):
print(x)
print()
b.__revertitem__('dict')
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems(readonly=True):
print(x)
print()
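
The __main__ demo above exercises BitBake's copy-on-write containers: copying a COWDictBase creates a child layer that shares unchanged entries with its parent, while writes are recorded only in the child. A minimal usage sketch, assuming bb.COW exposes COWDictBase as shown in this file:

    from bb.COW import COWDictBase

    base = COWDictBase()          # fresh copy-on-write layer
    base['a'] = 'original'

    child = base.copy()           # shares base's entries until written to
    child['a'] = 'changed'        # the write lands only in the child layer

    print(base['a'])              # -> 'original' (the parent layer is untouched)
    print(child['a'])             # -> 'changed'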


@@ -9,7 +9,7 @@
# SPDX-License-Identifier: GPL-2.0-only
#
__version__ = "1.50.0"
__version__ = "1.46.0"
import sys
if sys.version_info < (3, 5, 0):
@@ -21,8 +21,8 @@ class BBHandledException(Exception):
The big dilemma for generic bitbake code is what information to give the user
when an exception occurs. Any exception inheriting this base exception class
has already provided information to the user via some 'fired' message type such as
an explicitly fired event using bb.fire, or a bb.error message. If bitbake
encounters an exception derived from this class, no backtrace or other information
an explicitly fired event using bb.fire, or a bb.error message. If bitbake
encounters an exception derived from this class, no backtrace or other information
will be given to the user, its assumed the earlier event provided the relevant information.
"""
pass
@@ -35,30 +35,19 @@ class NullHandler(logging.Handler):
def emit(self, record):
pass
class BBLoggerMixin(object):
def __init__(self, *args, **kwargs):
# Does nothing to allow calling super() from derived classes
pass
def setup_bblogger(self, name):
Logger = logging.getLoggerClass()
class BBLogger(Logger):
def __init__(self, name):
if name.split(".")[0] == "BitBake":
self.debug = self._debug_helper
def _debug_helper(self, *args, **kwargs):
return self.bbdebug(1, *args, **kwargs)
def debug2(self, *args, **kwargs):
return self.bbdebug(2, *args, **kwargs)
def debug3(self, *args, **kwargs):
return self.bbdebug(3, *args, **kwargs)
self.debug = self.bbdebug
Logger.__init__(self, name)
def bbdebug(self, level, msg, *args, **kwargs):
loglevel = logging.DEBUG - level + 1
if not bb.event.worker_pid:
if self.name in bb.msg.loggerDefaultDomains and loglevel > (bb.msg.loggerDefaultDomains[self.name]):
return
if loglevel < bb.msg.loggerDefaultLogLevel:
if loglevel > bb.msg.loggerDefaultLogLevel:
return
return self.log(loglevel, msg, *args, **kwargs)
@@ -71,56 +60,16 @@ class BBLoggerMixin(object):
def verbnote(self, msg, *args, **kwargs):
return self.log(logging.INFO + 2, msg, *args, **kwargs)
Logger = logging.getLoggerClass()
class BBLogger(Logger, BBLoggerMixin):
def __init__(self, name, *args, **kwargs):
self.setup_bblogger(name)
super().__init__(name, *args, **kwargs)
logging.raiseExceptions = False
logging.setLoggerClass(BBLogger)
class BBLoggerAdapter(logging.LoggerAdapter, BBLoggerMixin):
def __init__(self, logger, *args, **kwargs):
self.setup_bblogger(logger.name)
super().__init__(logger, *args, **kwargs)
if sys.version_info < (3, 6):
# These properties were added in Python 3.6. Add them in older versions
# for compatibility
@property
def manager(self):
return self.logger.manager
@manager.setter
def manager(self, value):
self.logger.manager = value
@property
def name(self):
return self.logger.name
def __repr__(self):
logger = self.logger
level = logger.getLevelName(logger.getEffectiveLevel())
return '<%s %s (%s)>' % (self.__class__.__name__, logger.name, level)
logging.LoggerAdapter = BBLoggerAdapter
logger = logging.getLogger("BitBake")
logger.addHandler(NullHandler())
logger.setLevel(logging.DEBUG - 2)
mainlogger = logging.getLogger("BitBake.Main")
class PrefixLoggerAdapter(logging.LoggerAdapter):
def __init__(self, prefix, logger):
super().__init__(logger, {})
self.__msg_prefix = prefix
def process(self, msg, kwargs):
return "%s%s" %(self.__msg_prefix, msg), kwargs
# This has to be imported after the setLoggerClass, as the import of bb.msg
# can result in construction of the various loggers.
import bb.msg
@@ -137,7 +86,7 @@ def debug(lvl, *args):
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
mainlogger.bbdebug(lvl, ''.join(args))
mainlogger.debug(lvl, ''.join(args))
def note(*args):
mainlogger.info(''.join(args))


@@ -16,9 +16,7 @@ import os
import sys
import logging
import glob
import itertools
import time
import re
import stat
import bb
import bb.msg
@@ -29,9 +27,6 @@ from bb import data, event, utils
bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')
verboseShellLogging = False
verboseStdoutLogging = False
__mtime_cache = {}
def cached_mtime_noerror(f):
@@ -298,10 +293,6 @@ def exec_func_python(func, d, runfile, cwd=None):
comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated")
finally:
# We want any stdout/stderr to be printed before any other log messages to make debugging
# more accurate. In some cases we seem to lose stdout/stderr entirely in logging tests without this.
sys.stdout.flush()
sys.stderr.flush()
bb.debug(2, "Python function %s finished" % func)
if cwd and olddir:
@@ -312,60 +303,20 @@ def exec_func_python(func, d, runfile, cwd=None):
def shell_trap_code():
return '''#!/bin/sh\n
__BITBAKE_LAST_LINE=0
# Emit a useful diagnostic if something fails:
bb_sh_exit_handler() {
bb_exit_handler() {
ret=$?
if [ "$ret" != 0 ]; then
echo "WARNING: exit code $ret from a shell command."
fi
exit $ret
case $ret in
0) ;;
*) case $BASH_VERSION in
"") echo "WARNING: exit code $ret from a shell command.";;
*) echo "WARNING: ${BASH_SOURCE[0]}:${BASH_LINENO[0]} exit $ret from '$BASH_COMMAND'";;
esac
exit $ret
esac
}
bb_bash_exit_handler() {
ret=$?
{ set +x; } > /dev/null
trap "" DEBUG
if [ "$ret" != 0 ]; then
echo "WARNING: ${BASH_SOURCE[0]}:${__BITBAKE_LAST_LINE} exit $ret from '$1'"
echo "WARNING: Backtrace (BB generated script): "
for i in $(seq 1 $((${#FUNCNAME[@]} - 1))); do
if [ "$i" -eq 1 ]; then
echo -e "\t#$((i)): ${FUNCNAME[$i]}, ${BASH_SOURCE[$((i-1))]}, line ${__BITBAKE_LAST_LINE}"
else
echo -e "\t#$((i)): ${FUNCNAME[$i]}, ${BASH_SOURCE[$((i-1))]}, line ${BASH_LINENO[$((i-1))]}"
fi
done
fi
exit $ret
}
bb_bash_debug_handler() {
local line=${BASH_LINENO[0]}
# For some reason the DEBUG trap trips with lineno=1 when scripts exit; ignore it
if [ "$line" -eq 1 ]; then
return
fi
# Track the line number of commands as they execute. This is so we can have access to the failing line number
# in the EXIT trap. See http://gnu-bash.2382.n7.nabble.com/trap-echo-quot-trap-exit-on-LINENO-quot-EXIT-gt-wrong-linenumber-td3666.html
if [ "${FUNCNAME[1]}" != "bb_bash_exit_handler" ]; then
__BITBAKE_LAST_LINE=$line
fi
}
case $BASH_VERSION in
"") trap 'bb_sh_exit_handler' 0
set -e
;;
*) trap 'bb_bash_exit_handler "$BASH_COMMAND"' 0
trap '{ bb_bash_debug_handler; } 2>/dev/null' DEBUG
set -e
shopt -s extdebug
;;
esac
trap 'bb_exit_handler' 0
set -e
'''
def create_progress_handler(func, progress, logfile, d):
@@ -395,7 +346,7 @@ def create_progress_handler(func, progress, logfile, d):
cls_obj = functools.reduce(resolve, cls.split("."), bb.utils._context)
if not cls_obj:
# Fall-back on __builtins__
cls_obj = functools.reduce(resolve, cls.split("."), __builtins__)
cls_obj = functools.reduce(lambda x, y: x.get(y), cls.split("."), __builtins__)
if cls_obj:
return cls_obj(d, outfile=logfile, otherargs=otherargs)
bb.warn('%s: unknown custom progress handler in task progress varflag value "%s", ignoring' % (func, cls))
@@ -420,7 +371,7 @@ def exec_func_shell(func, d, runfile, cwd=None):
bb.data.emit_func(func, script, d)
if verboseShellLogging or bb.utils.to_boolean(d.getVar("BB_VERBOSE_LOGS", False)):
if bb.msg.loggerVerboseLogs:
script.write("set -x\n")
if cwd:
script.write("cd '%s'\n" % cwd)
@@ -440,20 +391,14 @@ exit $ret
if fakerootcmd:
cmd = [fakerootcmd, runfile]
if verboseStdoutLogging:
if bb.msg.loggerDefaultVerbose:
logfile = LogTee(logger, StdoutNoopContextManager())
else:
logfile = StdoutNoopContextManager()
progress = d.getVarFlag(func, 'progress')
if progress:
try:
logfile = create_progress_handler(func, progress, logfile, d)
except:
from traceback import format_exc
logger.error("Failed to create progress handler")
logger.error(format_exc())
raise
logfile = create_progress_handler(func, progress, logfile, d)
fifobuffer = bytearray()
def readfifo(data):
@@ -505,62 +450,6 @@ exit $ret
bb.debug(2, "Executing shell function %s" % func)
with open(os.devnull, 'r+') as stdin, logfile:
bb.process.run(cmd, shell=False, stdin=stdin, log=logfile, extrafiles=[(fifo,readfifo)])
except bb.process.ExecutionError as exe:
# Find the backtrace that the shell trap generated
backtrace_marker_regex = re.compile(r"WARNING: Backtrace \(BB generated script\)")
stdout_lines = (exe.stdout or "").split("\n")
backtrace_start_line = None
for i, line in enumerate(reversed(stdout_lines)):
if backtrace_marker_regex.search(line):
backtrace_start_line = len(stdout_lines) - i
break
# Read the backtrace frames, starting at the location we just found
backtrace_entry_regex = re.compile(r"#(?P<frameno>\d+): (?P<funcname>[^\s]+), (?P<file>.+?), line ("
r"?P<lineno>\d+)")
backtrace_frames = []
if backtrace_start_line:
for line in itertools.islice(stdout_lines, backtrace_start_line, None):
match = backtrace_entry_regex.search(line)
if match:
backtrace_frames.append(match.groupdict())
with open(runfile, "r") as script:
script_lines = [line.rstrip() for line in script.readlines()]
# For each backtrace frame, search backwards in the script (from the line number called out by the frame),
# to find the comment that emit_vars injected when it wrote the script. This will give us the metadata
# filename (e.g. .bb or .bbclass) and line number where the shell function was originally defined.
script_metadata_comment_regex = re.compile(r"# line: (?P<lineno>\d+), file: (?P<file>.+)")
better_frames = []
# Skip the very last frame since it's just the call to the shell task in the body of the script
for frame in backtrace_frames[:-1]:
# Check whether the frame corresponds to a function defined in the script vs external script.
if os.path.samefile(frame["file"], runfile):
# Search backwards from the frame lineno to locate the comment that BB injected
i = int(frame["lineno"]) - 1
while i >= 0:
match = script_metadata_comment_regex.match(script_lines[i])
if match:
# Calculate the relative line in the function itself
relative_line_in_function = int(frame["lineno"]) - i - 2
# Calculate line in the function as declared in the metadata
metadata_function_line = relative_line_in_function + int(match["lineno"])
better_frames.append("#{frameno}: {funcname}, {file}, line {lineno}".format(
frameno=frame["frameno"],
funcname=frame["funcname"],
file=match["file"],
lineno=metadata_function_line
))
break
i -= 1
else:
better_frames.append("#{frameno}: {funcname}, {file}, line {lineno}".format(**frame))
if better_frames:
better_frames = ("\t{0}".format(frame) for frame in better_frames)
exe.extra_message = "\nBacktrace (metadata-relative locations):\n{0}".format("\n".join(better_frames))
raise
finally:
os.unlink(fifopath)
@@ -587,7 +476,7 @@ def _exec_task(fn, task, d, quieterr):
logger.error("No such task: %s" % task)
return 1
logger.debug("Executing task %s", task)
logger.debug(1, "Executing task %s", task)
localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T')
@@ -600,7 +489,7 @@ def _exec_task(fn, task, d, quieterr):
curnice = os.nice(0)
nice = int(nice) - curnice
newnice = os.nice(nice)
logger.debug("Renice to %s " % newnice)
logger.debug(1, "Renice to %s " % newnice)
ionice = localdata.getVar("BB_TASK_IONICE_LEVEL")
if ionice:
try:
@@ -698,16 +587,12 @@ def _exec_task(fn, task, d, quieterr):
except bb.BBHandledException:
event.fire(TaskFailed(task, fn, logfn, localdata, True), localdata)
return 1
except (Exception, SystemExit) as exc:
except Exception as exc:
if quieterr:
event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
# If the output is already on stdout, we've printed the information in the
# logs once already so don't duplicate
if verboseStdoutLogging:
errprinted = True
logger.error(repr(exc))
logger.error(str(exc))
event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
return 1
finally:
@@ -728,7 +613,7 @@ def _exec_task(fn, task, d, quieterr):
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
logger.debug2("Zero size logfn %s, removing", logfn)
logger.debug(2, "Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, fn, logfn, localdata), localdata)
@@ -862,23 +747,6 @@ def make_stamp(task, d, file_name = None):
file_name = d.getVar('BB_FILENAME')
bb.parse.siggen.dump_sigtask(file_name, task, stampbase, True)
def find_stale_stamps(task, d, file_name=None):
current = stamp_internal(task, d, file_name)
current2 = stamp_internal(task + "_setscene", d, file_name)
cleanmask = stamp_cleanmask_internal(task, d, file_name)
found = []
for mask in cleanmask:
for name in glob.glob(mask):
if "sigdata" in name or "sigbasedata" in name:
continue
if name.endswith('.taint'):
continue
if name == current or name == current2:
continue
logger.debug2("Stampfile %s does not match %s or %s" % (name, current, current2))
found.append(name)
return found
def del_stamp(task, d, file_name = None):
"""
Removes a stamp for a given task
@@ -1033,8 +901,6 @@ def tasksbetween(task_start, task_end, d):
def follow_chain(task, endtask, chain=None):
if not chain:
chain = []
if task in chain:
bb.fatal("Circular task dependencies as %s depends on itself via the chain %s" % (task, " -> ".join(chain)))
chain.append(task)
for othertask in tasks:
if othertask == task:


@@ -19,20 +19,16 @@
import os
import logging
import pickle
from collections import defaultdict, Mapping
from collections import defaultdict
import bb.utils
from bb import PrefixLoggerAdapter
import re
logger = logging.getLogger("BitBake.Cache")
__cache_version__ = "154"
__cache_version__ = "152"
def getCacheFile(path, filename, mc, data_hash):
mcspec = ''
if mc:
mcspec = ".%s" % mc
return os.path.join(path, filename + mcspec + "." + data_hash)
def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)
# RecipeInfoCommon defines common data retrieving methods
# from meta data for caches. CoreRecipeInfo as well as other
@@ -94,7 +90,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
if not self.packages:
self.packages.append(self.pn)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.skipreason = self.getvar('__SKIPPED', metadata)
if self.skipreason:
@@ -121,12 +116,12 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.depends = self.depvar('DEPENDS', metadata)
self.rdepends = self.depvar('RDEPENDS', metadata)
self.rrecommends = self.depvar('RRECOMMENDS', metadata)
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
self.inherits = self.getvar('__inherit_cache', metadata, expand=False)
self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
self.fakerootlogs = self.getvar('FAKEROOTLOGS', metadata)
self.fakerootnoenv = self.getvar('FAKEROOTNOENV', metadata)
self.extradepsfunc = self.getvar('calculate_extra_depends', metadata)
@@ -164,7 +159,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv = {}
cachedata.fakerootnoenv = {}
cachedata.fakerootdirs = {}
cachedata.fakerootlogs = {}
cachedata.extradepsfunc = {}
def add_cacheData(self, cachedata, fn):
@@ -217,7 +211,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
if not self.not_world:
cachedata.possible_world.append(fn)
#else:
# logger.debug2("EXCLUDE FROM WORLD: %s", fn)
# logger.debug(2, "EXCLUDE FROM WORLD: %s", fn)
# create a collection of all targets for sanity checking
# tasks, such as upstream versions, license, and tools for
@@ -233,7 +227,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv[fn] = self.fakerootenv
cachedata.fakerootnoenv[fn] = self.fakerootnoenv
cachedata.fakerootdirs[fn] = self.fakerootdirs
cachedata.fakerootlogs[fn] = self.fakerootlogs
cachedata.extradepsfunc[fn] = self.extradepsfunc
def virtualfn2realfn(virtualfn):
@@ -241,7 +234,7 @@ def virtualfn2realfn(virtualfn):
Convert a virtual file name to a real one + the associated subclass keyword
"""
mc = ""
if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
if virtualfn.startswith('mc:'):
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])
@@ -271,7 +264,7 @@ def variant2virtual(realfn, variant):
"""
if variant == "":
return realfn
if variant.startswith("mc:") and variant.count(':') >= 2:
if variant.startswith("mc:"):
elems = variant.split(":")
if elems[2]:
return "mc:" + elems[1] + ":virtual:" + ":".join(elems[2:]) + ":" + realfn
@@ -326,12 +319,12 @@ class NoCache(object):
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug("Parsing %s (full)" % virtualfn)
logger.debug(1, "Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
def load_bbfile(self, bbfile, appends, virtonly = False, mc=None):
def load_bbfile(self, bbfile, appends, virtonly = False):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
@@ -344,10 +337,6 @@ class NoCache(object):
datastores = parse_recipe(bb_data, bbfile, appends, mc)
return datastores
if mc is not None:
bb_data = self.databuilder.mcdata[mc].createCopy()
return parse_recipe(bb_data, bbfile, appends, mc)
bb_data = self.data.createCopy()
datastores = parse_recipe(bb_data, bbfile, appends)
@@ -365,15 +354,14 @@ class Cache(NoCache):
"""
BitBake Cache implementation
"""
def __init__(self, databuilder, mc, data_hash, caches_array):
def __init__(self, databuilder, data_hash, caches_array):
super().__init__(databuilder)
data = databuilder.data
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
# need extra cache file dump/load support
self.mc = mc
self.logger = PrefixLoggerAdapter("Cache: %s: " % (mc if mc else "default"), logger)
self.caches_array = caches_array
self.cachedir = data.getVar("CACHE")
self.clean = set()
@@ -386,47 +374,31 @@ class Cache(NoCache):
if self.cachedir in [None, '']:
self.has_cache = False
self.logger.info("Not using a cache. "
"Set CACHE = <directory> to enable.")
logger.info("Not using a cache. "
"Set CACHE = <directory> to enable.")
return
self.has_cache = True
self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat", self.data_hash)
def getCacheFile(self, cachefile):
return getCacheFile(self.cachedir, cachefile, self.mc, self.data_hash)
def prepare_cache(self, progress):
if not self.has_cache:
return 0
loaded = 0
self.cachefile = self.getCacheFile("bb_cache.dat")
self.logger.debug("Cache dir: %s", self.cachedir)
logger.debug(1, "Cache dir: %s", self.cachedir)
bb.utils.mkdirhier(self.cachedir)
cache_ok = True
if self.caches_array:
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
cache_exists = os.path.exists(cachefile)
self.logger.debug2("Checking if %s exists: %r", cachefile, cache_exists)
cache_ok = cache_ok and cache_exists
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
cache_ok = cache_ok and os.path.exists(cachefile)
cache_class.init_cacheData(self)
if cache_ok:
loaded = self.load_cachefile(progress)
self.load_cachefile()
elif os.path.isfile(self.cachefile):
self.logger.info("Out of date cache found, rebuilding...")
logger.info("Out of date cache found, rebuilding...")
else:
self.logger.debug("Cache file %s not found, building..." % self.cachefile)
logger.debug(1, "Cache file %s not found, building..." % self.cachefile)
# We don't use the symlink, it's just for debugging convenience
if self.mc:
symlink = os.path.join(self.cachedir, "bb_cache.dat.%s" % self.mc)
else:
symlink = os.path.join(self.cachedir, "bb_cache.dat")
symlink = os.path.join(self.cachedir, "bb_cache.dat")
if os.path.exists(symlink):
bb.utils.remove(symlink)
try:
@@ -434,29 +406,22 @@ class Cache(NoCache):
except OSError:
pass
return loaded
def cachesize(self):
if not self.has_cache:
return 0
def load_cachefile(self):
cachesize = 0
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
try:
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
except FileNotFoundError:
pass
return cachesize
def load_cachefile(self, progress):
previous_progress = 0
previous_percent = 0
# Calculate the correct cachesize of all those cache files
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
self.logger.debug('Loading cache file: %s' % cachefile)
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
logger.debug(1, 'Loading cache file: %s' % cachefile)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
# Check cache version information
@@ -464,15 +429,15 @@ class Cache(NoCache):
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
self.logger.info('Invalid cache, rebuilding...')
return 0
logger.info('Invalid cache, rebuilding...')
return
if cache_ver != __cache_version__:
self.logger.info('Cache version mismatch, rebuilding...')
return 0
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
self.logger.info('Bitbake version mismatch, rebuilding...')
return 0
logger.info('Bitbake version mismatch, rebuilding...')
return
# Load the rest of the cache file
current_progress = 0
@@ -495,17 +460,29 @@ class Cache(NoCache):
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
progress(cachefile.tell() + previous_progress)
if current_progress > cachesize:
# we might have calculated incorrect total size because a file
# might've been written out just after we checked its size
cachesize = current_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
self.data)
previous_progress += current_progress
return len(self.depends_cache)
# Note: depends cache number is corresponding to the parsing file numbers.
# The same file has several caches, still regarded as one item in the cache
bb.event.fire(bb.event.CacheLoadCompleted(cachesize,
len(self.depends_cache)),
self.data)
def parse(self, filename, appends):
"""Parse the specified filename, returning the recipe information"""
self.logger.debug("Parsing %s", filename)
logger.debug(1, "Parsing %s", filename)
infos = []
datastores = self.load_bbfile(filename, appends, mc=self.mc)
datastores = self.load_bbfile(filename, appends)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
@@ -557,7 +534,7 @@ class Cache(NoCache):
cached, infos = self.load(fn, appends)
for virtualfn, info_array in infos:
if info_array[0].skipped:
self.logger.debug("Skipping %s: %s", virtualfn, info_array[0].skipreason)
logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
skipped += 1
else:
self.add_info(virtualfn, info_array, cacheData, not cached)
@@ -593,21 +570,21 @@ class Cache(NoCache):
# File isn't in depends_cache
if not fn in self.depends_cache:
self.logger.debug2("%s is not cached", fn)
logger.debug(2, "Cache: %s is not cached", fn)
return False
mtime = bb.parse.cached_mtime_noerror(fn)
# Check file still exists
if mtime == 0:
self.logger.debug2("%s no longer exists", fn)
logger.debug(2, "Cache: %s no longer exists", fn)
self.remove(fn)
return False
info_array = self.depends_cache[fn]
# Check the file's timestamp
if mtime != info_array[0].timestamp:
self.logger.debug2("%s changed", fn)
logger.debug(2, "Cache: %s changed", fn)
self.remove(fn)
return False
@@ -618,14 +595,14 @@ class Cache(NoCache):
fmtime = bb.parse.cached_mtime_noerror(f)
# Check if file still exists
if old_mtime != 0 and fmtime == 0:
self.logger.debug2("%s's dependency %s was removed",
fn, f)
logger.debug(2, "Cache: %s's dependency %s was removed",
fn, f)
self.remove(fn)
return False
if (fmtime != old_mtime):
self.logger.debug2("%s's dependency %s changed",
fn, f)
logger.debug(2, "Cache: %s's dependency %s changed",
fn, f)
self.remove(fn)
return False
@@ -637,18 +614,18 @@ class Cache(NoCache):
# Have to be careful about spaces and colons in filenames
flist = self.filelist_regex.split(fl)
for f in flist:
if not f:
if not f or "*" in f:
continue
f, exist = f.split(":")
if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
self.logger.debug2("%s's file checksum list file %s changed",
fn, f)
logger.debug(2, "Cache: %s's file checksum list file %s changed",
fn, f)
self.remove(fn)
return False
if tuple(appends) != tuple(info_array[0].appends):
self.logger.debug2("appends for %s changed", fn)
self.logger.debug2("%s to %s" % (str(appends), str(info_array[0].appends)))
if appends != info_array[0].appends:
logger.debug(2, "Cache: appends for %s changed", fn)
logger.debug(2, "%s to %s" % (str(appends), str(info_array[0].appends)))
self.remove(fn)
return False
@@ -657,10 +634,10 @@ class Cache(NoCache):
virtualfn = variant2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
self.logger.debug2("%s is not cached", virtualfn)
logger.debug(2, "Cache: %s is not cached", virtualfn)
invalid = True
elif len(self.depends_cache[virtualfn]) != len(self.caches_array):
self.logger.debug2("Extra caches missing for %s?" % virtualfn)
logger.debug(2, "Cache: Extra caches missing for %s?" % virtualfn)
invalid = True
# If any one of the variants is not present, mark as invalid for all
@@ -668,10 +645,10 @@ class Cache(NoCache):
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
if virtualfn in self.clean:
self.logger.debug2("Removing %s from cache", virtualfn)
logger.debug(2, "Cache: Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
if fn in self.clean:
self.logger.debug2("Marking %s as not clean", fn)
logger.debug(2, "Cache: Marking %s as not clean", fn)
self.clean.remove(fn)
return False
@@ -684,10 +661,10 @@ class Cache(NoCache):
Called from the parser in error cases
"""
if fn in self.depends_cache:
self.logger.debug("Removing %s from cache", fn)
logger.debug(1, "Removing %s from cache", fn)
del self.depends_cache[fn]
if fn in self.clean:
self.logger.debug("Marking %s as unclean", fn)
logger.debug(1, "Marking %s as unclean", fn)
self.clean.remove(fn)
def sync(self):
@@ -700,13 +677,12 @@ class Cache(NoCache):
return
if self.cacheclean:
self.logger.debug2("Cache is clean, not saving.")
logger.debug(2, "Cache is clean, not saving.")
return
for cache_class in self.caches_array:
cache_class_name = cache_class.__name__
cachefile = self.getCacheFile(cache_class.cachefile)
self.logger.debug2("Writing %s", cachefile)
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "wb") as f:
p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
p.dump(__cache_version__)
@@ -725,18 +701,8 @@ class Cache(NoCache):
return bb.parse.cached_mtime_noerror(cachefile)
def add_info(self, filename, info_array, cacheData, parsed=None, watcher=None):
if self.mc is not None:
(fn, cls, mc) = virtualfn2realfn(filename)
if mc:
self.logger.error("Unexpected multiconfig %s", filename)
return
vfn = realfn2virtual(fn, cls, self.mc)
else:
vfn = filename
if isinstance(info_array[0], CoreRecipeInfo) and (not info_array[0].skipped):
cacheData.add_from_recipeinfo(vfn, info_array)
cacheData.add_from_recipeinfo(filename, info_array)
if watcher:
watcher(info_array[0].file_depends)
@@ -761,61 +727,6 @@ class Cache(NoCache):
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
class MulticonfigCache(Mapping):
def __init__(self, databuilder, data_hash, caches_array):
def progress(p):
nonlocal current_progress
nonlocal previous_progress
nonlocal previous_percent
nonlocal cachesize
current_progress = previous_progress + p
if current_progress > cachesize:
# we might have calculated incorrect total size because a file
# might've been written out just after we checked its size
cachesize = current_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
databuilder.data)
cachesize = 0
current_progress = 0
previous_progress = 0
previous_percent = 0
self.__caches = {}
for mc, mcdata in databuilder.mcdata.items():
self.__caches[mc] = Cache(databuilder, mc, data_hash, caches_array)
cachesize += self.__caches[mc].cachesize()
bb.event.fire(bb.event.CacheLoadStarted(cachesize), databuilder.data)
loaded = 0
for c in self.__caches.values():
loaded += c.prepare_cache(progress)
previous_progress = current_progress
# Note: depends cache number is corresponding to the parsing file numbers.
# The same file has several caches, still regarded as one item in the cache
bb.event.fire(bb.event.CacheLoadCompleted(cachesize, loaded), databuilder.data)
def __len__(self):
return len(self.__caches)
def __getitem__(self, key):
return self.__caches[key]
def __contains__(self, key):
return key in self.__caches
def __iter__(self):
for k in self.__caches:
yield k
def init(cooker):
"""
@@ -882,7 +793,7 @@ class MultiProcessCache(object):
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
logger.debug("Using cache in '%s'", self.cachefile)
logger.debug(1, "Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")
@@ -988,7 +899,7 @@ class SimpleCache(object):
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
logger.debug("Using cache in '%s'", self.cachefile)
logger.debug(1, "Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")


@@ -212,9 +212,9 @@ class PythonParser():
funcstr = codegen.to_source(func)
argstr = codegen.to_source(arg)
except TypeError:
self.log.debug2('Failed to convert function and argument to source form')
self.log.debug(2, 'Failed to convert function and argument to source form')
else:
self.log.debug(self.unhandled_message % (funcstr, argstr))
self.log.debug(1, self.unhandled_message % (funcstr, argstr))
def visit_Call(self, node):
name = self.called_node_name(node.func)
@@ -450,7 +450,7 @@ class ShellParser():
cmd = word[1]
if cmd.startswith("$"):
self.log.debug(self.unhandled_template % cmd)
self.log.debug(1, self.unhandled_template % cmd)
elif cmd == "eval":
command = " ".join(word for _, word in words[1:])
self._parse_shell(command)


@@ -54,20 +54,13 @@ class Command:
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
self.remotedatastores = None
self.remotedatastores = bb.remotedata.RemoteDatastores(cooker)
# FIXME Add lock for this
self.currentAsyncCommand = None
def runCommand(self, commandline, ro_only = False):
command = commandline.pop(0)
# Ensure cooker is ready for commands
if command != "updateConfig" and command != "setFeatures":
self.cooker.init_configdata()
if not self.remotedatastores:
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
if hasattr(CommandsSync, command):
# Can run synchronous commands straight away
command_method = getattr(self.cmds_sync, command)
@@ -81,12 +74,8 @@ class Command:
result = command_method(self, commandline)
except CommandError as exc:
return None, exc.args[0]
except (Exception, SystemExit) as exc:
except (Exception, SystemExit):
import traceback
if isinstance(exc, bb.BBHandledException):
# We need to start returning real exceptions here. Until we do, we can't
# tell if an exception is an instance of bb.BBHandledException
return None, "bb.BBHandledException()\n" + traceback.format_exc()
return None, traceback.format_exc()
else:
return result, None
@@ -95,7 +84,7 @@ class Command:
if command not in CommandsAsync.__dict__:
return None, "No such command"
self.currentAsyncCommand = (command, commandline)
self.cooker.idleCallBackRegister(self.cooker.runCommands, self.cooker)
self.cooker.configuration.server_register_idlecallback(self.cooker.runCommands, self.cooker)
return True, None
def runAsyncCommand(self):
@@ -147,8 +136,13 @@ class Command:
self.cooker.finishcommand()
def reset(self):
if self.remotedatastores:
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
def split_mc_pn(pn):
if pn.startswith("multiconfig:"):
_, mc, pn = pn.split(":", 2)
return (mc, pn)
return ('', pn)
class CommandsSync:
"""
@@ -238,11 +232,7 @@ class CommandsSync:
def matchFile(self, command, params):
fMatch = params[0]
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.matchFile(fMatch, mc)
return command.cooker.matchFile(fMatch)
matchFile.needconfig = False
def getUIHandlerNum(self, command, params):
@@ -405,38 +395,22 @@ class CommandsSync:
def getSkippedRecipes(self, command, params):
# Return list sorted by reverse priority order
import bb.cache
def sortkey(x):
vfn, _ = x
realfn, _, mc = bb.cache.virtualfn2realfn(vfn)
return (-command.cooker.collections[mc].calc_bbfile_priority(realfn)[0], vfn)
skipdict = OrderedDict(sorted(command.cooker.skiplist.items(), key=sortkey))
skipdict = OrderedDict(sorted(command.cooker.skiplist.items(),
key=lambda x: (-command.cooker.collection.calc_bbfile_priority(bb.cache.virtualfn2realfn(x[0])[0]), x[0])))
return list(skipdict.items())
getSkippedRecipes.readonly = True
def getOverlayedRecipes(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.collections[mc].overlayed.items())
return list(command.cooker.collection.overlayed.items())
getOverlayedRecipes.readonly = True
def getFileAppends(self, command, params):
fn = params[0]
try:
mc = params[1]
except IndexError:
mc = ''
return command.cooker.collections[mc].get_file_appends(fn)
return command.cooker.collection.get_file_appends(fn)
getFileAppends.readonly = True
def getAllAppends(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.collections[mc].bbappends
return command.cooker.collection.bbappends
getAllAppends.readonly = True
def findProviders(self, command, params):
@@ -448,7 +422,7 @@ class CommandsSync:
findProviders.readonly = True
def findBestProvider(self, command, params):
(mc, pn) = bb.runqueue.split_mc(params[0])
(mc, pn) = split_mc_pn(params[0])
return command.cooker.findBestProvider(pn, mc)
findBestProvider.readonly = True
@@ -522,7 +496,6 @@ class CommandsSync:
for the recipe.
"""
fn = params[0]
mc = bb.runqueue.mc_from_tid(fn)
appends = params[1]
appendlist = params[2]
if len(params) > 3:
@@ -534,7 +507,7 @@ class CommandsSync:
if appendlist is not None:
appendfiles = appendlist
else:
appendfiles = command.cooker.collections[mc].get_file_appends(fn)
appendfiles = command.cooker.collection.get_file_appends(fn)
else:
appendfiles = []
# We are calling bb.cache locally here rather than on the server,
@@ -544,7 +517,7 @@ class CommandsSync:
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles, mc)['']
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles)['']
else:
# Use the standard path
parser = bb.cache.NoCache(command.cooker.databuilder)
@@ -735,10 +708,10 @@ class CommandsAsync:
"""
Find signature info files via the signature generator
"""
(mc, pn) = bb.runqueue.split_mc(params[0])
pn = params[0]
taskname = params[1]
sigs = params[2]
res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.databuilder.mcdata[mc])
bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.databuilder.mcdata[mc])
res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.data)
bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.data)
command.finishAsyncCommand()
findSigInfo.needcache = False

bitbake/lib/bb/compat.py Normal file

@@ -0,0 +1,10 @@
#
# SPDX-License-Identifier: GPL-2.0-only
#
"""Code pulled from future python versions, here for compatibility"""
from collections import MutableMapping, KeysView, ValuesView, ItemsView, OrderedDict
from functools import total_ordering
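The new compat module above re-exports a handful of names "pulled from future python versions". For reference, MutableMapping and the view classes have lived in collections.abc since Python 3.3, and importing them directly from collections stopped working in Python 3.10, which is presumably why later BitBake dropped this shim. A sketch of the usual fallback pattern (an assumption about common practice, not part of this file):

try:
    from collections.abc import MutableMapping, KeysView, ValuesView, ItemsView
except ImportError:
    # Very old interpreters kept these directly in collections.
    from collections import MutableMapping, KeysView, ValuesView, ItemsView
from collections import OrderedDict
from functools import total_ordering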


@@ -73,9 +73,7 @@ class SkippedPackage:
self.pn = info.pn
self.skipreason = info.skipreason
self.provides = info.provides
self.rprovides = info.packages + info.rprovides
for package in info.packages:
self.rprovides += info.rprovides_pkg[package]
self.rprovides = info.rprovides
elif reason:
self.skipreason = reason
@@ -150,18 +148,15 @@ class BBCooker:
Manages one bitbake build run
"""
def __init__(self, featureSet=None, idleCallBackRegister=None):
def __init__(self, configuration, featureSet=None):
self.recipecaches = None
self.eventlog = None
self.skiplist = {}
self.featureset = CookerFeatures()
if featureSet:
for f in featureSet:
self.featureset.setFeature(f)
self.configuration = bb.cookerdata.CookerConfiguration()
self.idleCallBackRegister = idleCallBackRegister
self.configuration = configuration
bb.debug(1, "BBCooker starting %s" % time.time())
sys.stdout.flush()
@@ -197,13 +192,25 @@ class BBCooker:
self.hashserv = None
self.hashservaddr = None
self.initConfigurationData()
bb.debug(1, "BBCooker parsed base configuration %s" % time.time())
sys.stdout.flush()
# we log all events to a file if so directed
if self.configuration.writeeventlog:
# register the log file writer as UI Handler
writer = EventWriter(self, self.configuration.writeeventlog)
EventLogWriteHandler = namedtuple('EventLogWriteHandler', ['event'])
bb.event.register_UIHhandler(EventLogWriteHandler(writer))
self.inotify_modified_files = []
def _process_inotify_updates(server, cooker, abort):
cooker.process_inotify_updates()
return 1.0
self.idleCallBackRegister(_process_inotify_updates, self)
self.configuration.server_register_idlecallback(_process_inotify_updates, self)
# TOSTOP must not be set or our children will hang when they output
try:
@@ -230,13 +237,6 @@ class BBCooker:
bb.debug(1, "BBCooker startup complete %s" % time.time())
sys.stdout.flush()
def init_configdata(self):
if not hasattr(self, "data"):
self.initConfigurationData()
bb.debug(1, "BBCooker parsed base configuration %s" % time.time())
sys.stdout.flush()
self.handlePRServ()
def process_inotify_updates(self):
for n in [self.confignotifier, self.notifier]:
if n.check_events(timeout=0):
@@ -322,7 +322,7 @@ class BBCooker:
for feature in features:
self.featureset.setFeature(feature)
bb.debug(1, "Features set %s (was %s)" % (original_featureset, list(self.featureset)))
if (original_featureset != list(self.featureset)) and self.state != state.error and hasattr(self, "data"):
if (original_featureset != list(self.featureset)) and self.state != state.error:
self.reset()
def initConfigurationData(self):
@@ -354,7 +354,7 @@ class BBCooker:
self.caches_array.append(getattr(module, cache_name))
except ImportError as exc:
logger.critical("Unable to import extra RecipeInfo '%s' from '%s': %s" % (cache_name, module_name, exc))
raise bb.BBHandledException()
sys.exit("FATAL: Failed to import extra cache class '%s'." % cache_name)
self.databuilder = bb.cookerdata.CookerDataBuilder(self.configuration, False)
self.databuilder.parseBaseConfiguration()
@@ -382,19 +382,14 @@ class BBCooker:
try:
self.prhost = prserv.serv.auto_start(self.data)
except prserv.serv.PRServiceConfigError as e:
bb.fatal("Unable to start PR Server, exiting")
bb.fatal("Unable to start PR Server, exitting")
if self.data.getVar("BB_HASHSERVE") == "auto":
# Create a new hash server bound to a unix domain socket
if not self.hashserv:
dbfile = (self.data.getVar("PERSISTENT_DIR") or self.data.getVar("CACHE")) + "/hashserv.db"
self.hashservaddr = "unix://%s/hashserve.sock" % self.data.getVar("TOPDIR")
self.hashserv = hashserv.create_server(
self.hashservaddr,
dbfile,
sync=False,
upstream=self.data.getVar("BB_HASHSERVE_UPSTREAM") or None,
)
self.hashserv = hashserv.create_server(self.hashservaddr, dbfile, sync=False)
self.hashserv.process = multiprocessing.Process(target=self.hashserv.serve_forever)
self.hashserv.process.start()
self.data.setVar("BB_HASHSERVE", self.hashservaddr)
@@ -416,7 +411,10 @@ class BBCooker:
self.data.disableTracking()
def parseConfiguration(self):
self.updateCacheSync()
# Set log file verbosity
verboselogs = bb.utils.to_boolean(self.data.getVar("BB_VERBOSE_LOGS", False))
if verboselogs:
bb.msg.loggerVerboseLogs = True
# Change nice level if we're asked to
nice = self.data.getVar("BB_NICE_LEVEL")
@@ -448,52 +446,27 @@ class BBCooker:
continue
except AttributeError:
pass
logger.debug("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
logger.debug(1, "Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
print("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
clean = False
if hasattr(self.configuration, o):
setattr(self.configuration, o, options[o])
if self.configuration.writeeventlog:
if self.eventlog and self.eventlog[0] != self.configuration.writeeventlog:
bb.event.unregister_UIHhandler(self.eventlog[1])
if not self.eventlog or self.eventlog[0] != self.configuration.writeeventlog:
# we log all events to a file if so directed
# register the log file writer as UI Handler
writer = EventWriter(self, self.configuration.writeeventlog)
EventLogWriteHandler = namedtuple('EventLogWriteHandler', ['event'])
self.eventlog = (self.configuration.writeeventlog, bb.event.register_UIHhandler(EventLogWriteHandler(writer)))
bb.msg.loggerDefaultLogLevel = self.configuration.default_loglevel
bb.msg.loggerDefaultDomains = self.configuration.debug_domains
if hasattr(self, "data"):
origenv = bb.data.init()
for k in environment:
origenv.setVar(k, environment[k])
self.data.setVar("BB_ORIGENV", origenv)
setattr(self.configuration, o, options[o])
for k in bb.utils.approved_variables():
if k in environment and k not in self.configuration.env:
logger.debug("Updating new environment variable %s to %s" % (k, environment[k]))
logger.debug(1, "Updating new environment variable %s to %s" % (k, environment[k]))
self.configuration.env[k] = environment[k]
clean = False
if k in self.configuration.env and k not in environment:
logger.debug("Updating environment variable %s (deleted)" % (k))
logger.debug(1, "Updating environment variable %s (deleted)" % (k))
del self.configuration.env[k]
clean = False
if k not in self.configuration.env and k not in environment:
continue
if environment[k] != self.configuration.env[k]:
logger.debug("Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
logger.debug(1, "Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
self.configuration.env[k] = environment[k]
clean = False
# Now update all the variables not in the datastore to match
self.configuration.env = environment
if not clean:
logger.debug("Base environment change, triggering reparse")
logger.debug(1, "Base environment change, triggering reparse")
self.reset()
def runCommands(self, server, data, abort):
@@ -507,30 +480,22 @@ class BBCooker:
def showVersions(self):
(latest_versions, preferred_versions, required) = self.findProviders()
(latest_versions, preferred_versions) = self.findProviders()
logger.plain("%-35s %25s %25s %25s", "Recipe Name", "Latest Version", "Preferred Version", "Required Version")
logger.plain("%-35s %25s %25s %25s\n", "===========", "==============", "=================", "================")
logger.plain("%-35s %25s %25s", "Recipe Name", "Latest Version", "Preferred Version")
logger.plain("%-35s %25s %25s\n", "===========", "==============", "=================")
for p in sorted(self.recipecaches[''].pkg_pn):
preferred = preferred_versions[p]
pref = preferred_versions[p]
latest = latest_versions[p]
requiredstr = ""
preferredstr = ""
if required[p]:
if preferred[0] is not None:
requiredstr = preferred[0][0] + ":" + preferred[0][1] + '-' + preferred[0][2]
else:
bb.fatal("REQUIRED_VERSION of package %s not available" % p)
else:
preferredstr = preferred[0][0] + ":" + preferred[0][1] + '-' + preferred[0][2]
prefstr = pref[0][0] + ":" + pref[0][1] + '-' + pref[0][2]
lateststr = latest[0][0] + ":" + latest[0][1] + "-" + latest[0][2]
if preferred == latest:
preferredstr = ""
if pref == latest:
prefstr = ""
logger.plain("%-35s %25s %25s %25s", p, lateststr, preferredstr, requiredstr)
logger.plain("%-35s %25s %25s", p, lateststr, prefstr)
def showEnvironment(self, buildfile=None, pkgs_to_build=None):
"""
@@ -560,7 +525,7 @@ class BBCooker:
self.parseConfiguration()
fn, cls, mc = bb.cache.virtualfn2realfn(buildfile)
fn = self.matchFile(fn, mc)
fn = self.matchFile(fn)
fn = bb.cache.realfn2virtual(fn, cls, mc)
elif len(pkgs_to_build) == 1:
mc = mc_base(pkgs_to_build[0])
@@ -576,8 +541,8 @@ class BBCooker:
if fn:
try:
bb_caches = bb.cache.MulticonfigCache(self.databuilder, self.data_hash, self.caches_array)
envdata = bb_caches[mc].loadDataFull(fn, self.collections[mc].get_file_appends(fn))
bb_cache = bb.cache.Cache(self.databuilder, self.data_hash, self.caches_array)
envdata = bb_cache.loadDataFull(fn, self.collection.get_file_appends(fn))
except Exception as e:
parselog.exception("Unable to read %s", fn)
raise
@@ -629,7 +594,7 @@ class BBCooker:
# Replace string such as "mc:*:bash"
# into "mc:A:bash mc:B:bash bash"
for k in targetlist:
if k.startswith("mc:") and k.count(':') >= 2:
if k.startswith("mc:"):
if wildcard:
bb.fatal('multiconfig conflict')
if k.split(":")[1] == "*":
@@ -661,9 +626,8 @@ class BBCooker:
current = 0
runlist = []
for k in fulltargetlist:
origk = k
mc = ""
if k.startswith("mc:") and k.count(':') >= 2:
if k.startswith("mc:"):
mc = k.split(":")[1]
k = ":".join(k.split(":")[2:])
ktask = task
@@ -671,10 +635,6 @@ class BBCooker:
k2 = k.split(":do_")
k = k2[0]
ktask = k2[1]
if mc not in self.multiconfigs:
bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named %s" % (origk, mc))
taskdata[mc].add_provider(localdata[mc], self.recipecaches[mc], k)
current += 1
if not ktask.startswith("do_"):
@@ -710,9 +670,9 @@ class BBCooker:
l = k.split(':')
depmc = l[2]
if depmc not in self.multiconfigs:
bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named configuration %s" % (k,depmc))
bb.fatal("Multiconfig dependency %s depends on nonexistent mc configuration %s" % (k,depmc))
else:
logger.debug("Adding providers for multiconfig dependency %s" % l[3])
logger.debug(1, "Adding providers for multiconfig dependency %s" % l[3])
taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
seen.add(k)
new = True
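The hunks above handle build targets of the form mc:<multiconfig>:<recipe>, optionally suffixed with :do_<task>, splitting them on ':' as shown. A standalone sketch of that string handling (the multiconfig name and default task below are illustrative, not taken from the diff):

def split_mc_target(target, default_task="build"):
    # Illustrative split of 'mc:<mc>:<recipe>[:do_<task>]' mirroring the
    # string handling in the hunks above.
    mc = ""
    if target.startswith("mc:") and target.count(":") >= 2:
        mc = target.split(":")[1]
        target = ":".join(target.split(":")[2:])
    task = default_task
    if ":do_" in target:
        target, task = target.split(":do_")
    return mc, target, task

print(split_mc_target("mc:qemuarm:bash:do_fetch"))  # ('qemuarm', 'bash', 'fetch')
print(split_mc_target("bash"))                      # ('', 'bash', 'build')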
@@ -969,33 +929,26 @@ class BBCooker:
logger.info("Task dependencies saved to 'task-depends.dot'")
def show_appends_with_no_recipes(self):
appends_without_recipes = {}
# Determine which bbappends haven't been applied
for mc in self.multiconfigs:
# First get list of recipes, including skipped
recipefns = list(self.recipecaches[mc].pkg_fn.keys())
recipefns.extend(self.skiplist.keys())
# Work out list of bbappends that have been applied
applied_appends = []
for fn in recipefns:
applied_appends.extend(self.collections[mc].get_file_appends(fn))
# First get list of recipes, including skipped
recipefns = list(self.recipecaches[''].pkg_fn.keys())
recipefns.extend(self.skiplist.keys())
appends_without_recipes[mc] = []
for _, appendfn in self.collections[mc].bbappends:
if not appendfn in applied_appends:
appends_without_recipes[mc].append(appendfn)
# Work out list of bbappends that have been applied
applied_appends = []
for fn in recipefns:
applied_appends.extend(self.collection.get_file_appends(fn))
msgs = []
for mc in sorted(appends_without_recipes.keys()):
if appends_without_recipes[mc]:
msgs.append('No recipes in %s available for:\n %s' % (mc if mc else 'default',
'\n '.join(appends_without_recipes[mc])))
appends_without_recipes = []
for _, appendfn in self.collection.bbappends:
if not appendfn in applied_appends:
appends_without_recipes.append(appendfn)
if msgs:
msg = "\n".join(msgs)
warn_only = self.databuilder.mcdata[mc].getVar("BB_DANGLINGAPPENDS_WARNONLY", \
False) or "no"
if appends_without_recipes:
msg = 'No recipes available for:\n %s' % '\n '.join(appends_without_recipes)
warn_only = self.data.getVar("BB_DANGLINGAPPENDS_WARNONLY", \
False) or "no"
if warn_only.lower() in ("1", "yes", "true"):
bb.warn(msg)
else:
@@ -1076,16 +1029,10 @@ class BBCooker:
if pn in self.recipecaches[mc].providers:
filenames = self.recipecaches[mc].providers[pn]
eligible, foundUnique = bb.providers.filterProviders(filenames, pn, self.databuilder.mcdata[mc], self.recipecaches[mc])
if eligible is not None:
filename = eligible[0]
else:
filename = None
filename = eligible[0]
return None, None, None, filename
elif pn in self.recipecaches[mc].pkg_pn:
(latest, latest_f, preferred_ver, preferred_file, required) = bb.providers.findBestProvider(pn, self.databuilder.mcdata[mc], self.recipecaches[mc], self.recipecaches[mc].pkg_pn)
if required and preferred_file is None:
return None, None, None, None
return (latest, latest_f, preferred_ver, preferred_file)
return bb.providers.findBestProvider(pn, self.databuilder.mcdata[mc], self.recipecaches[mc], self.recipecaches[mc].pkg_pn)
else:
return None, None, None, None
@@ -1150,7 +1097,7 @@ class BBCooker:
from bb import shell
except ImportError:
parselog.exception("Interactive mode not available")
raise bb.BBHandledException()
sys.exit(1)
else:
shell.start( self )
@@ -1302,15 +1249,15 @@ class BBCooker:
if siggen_cache:
bb.parse.siggen.checksum_cache.mtime_cache.clear()
def matchFiles(self, bf, mc=''):
def matchFiles(self, bf):
"""
Find the .bb files which match the expression in 'buildfile'.
"""
if bf.startswith("/") or bf.startswith("../"):
bf = os.path.abspath(bf)
self.collections = {mc: CookerCollectFiles(self.bbfile_config_priorities, mc)}
filelist, masked, searchdirs = self.collections[mc].collect_bbfiles(self.databuilder.mcdata[mc], self.databuilder.mcdata[mc])
self.collection = CookerCollectFiles(self.bbfile_config_priorities)
filelist, masked, searchdirs = self.collection.collect_bbfiles(self.data, self.data)
try:
os.stat(bf)
bf = os.path.abspath(bf)
@@ -1323,12 +1270,12 @@ class BBCooker:
matches.append(f)
return matches
def matchFile(self, buildfile, mc=''):
def matchFile(self, buildfile):
"""
Find the .bb file which matches the expression in 'buildfile'.
Raise an error if multiple files
"""
matches = self.matchFiles(buildfile, mc)
matches = self.matchFiles(buildfile)
if len(matches) != 1:
if matches:
msg = "Unable to match '%s' to a specific recipe file - %s matches found:" % (buildfile, len(matches))
@@ -1369,14 +1316,14 @@ class BBCooker:
task = "do_%s" % task
fn, cls, mc = bb.cache.virtualfn2realfn(buildfile)
fn = self.matchFile(fn, mc)
fn = self.matchFile(fn)
self.buildSetVars()
self.reset_mtime_caches()
bb_caches = bb.cache.MulticonfigCache(self.databuilder, self.data_hash, self.caches_array)
bb_cache = bb.cache.Cache(self.databuilder, self.data_hash, self.caches_array)
infos = bb_caches[mc].parse(fn, self.collections[mc].get_file_appends(fn))
infos = bb_cache.parse(fn, self.collection.get_file_appends(fn))
infos = dict(infos)
fn = bb.cache.realfn2virtual(fn, cls, mc)
@@ -1464,7 +1411,7 @@ class BBCooker:
return True
return retval
self.idleCallBackRegister(buildFileIdle, rq)
self.configuration.server_register_idlecallback(buildFileIdle, rq)
def buildTargets(self, targets, task):
"""
@@ -1535,7 +1482,7 @@ class BBCooker:
if 'universe' in targets:
rq.rqdata.warn_multi_bb = True
self.idleCallBackRegister(buildTargetsIdle, rq)
self.configuration.server_register_idlecallback(buildTargetsIdle, rq)
def getAllKeysWithFlags(self, flaglist):
@@ -1574,7 +1521,7 @@ class BBCooker:
self.inotify_modified_files = []
if not self.baseconfig_valid:
logger.debug("Reloading base configuration data")
logger.debug(1, "Reloading base configuration data")
self.initConfigurationData()
self.handlePRServ()
@@ -1586,7 +1533,6 @@ class BBCooker:
if self.state in (state.shutdown, state.forceshutdown, state.error):
if hasattr(self.parser, 'shutdown'):
self.parser.shutdown(clean=False, force = True)
self.parser.final_cleanup()
raise bb.BBHandledException()
if self.state != state.parsing:
@@ -1606,24 +1552,14 @@ class BBCooker:
for dep in self.configuration.extra_assume_provided:
self.recipecaches[mc].ignored_dependencies.add(dep)
self.collections = {}
mcfilelist = {}
total_masked = 0
searchdirs = set()
for mc in self.multiconfigs:
self.collections[mc] = CookerCollectFiles(self.bbfile_config_priorities, mc)
(filelist, masked, search) = self.collections[mc].collect_bbfiles(self.databuilder.mcdata[mc], self.databuilder.mcdata[mc])
mcfilelist[mc] = filelist
total_masked += masked
searchdirs |= set(search)
self.collection = CookerCollectFiles(self.bbfile_config_priorities)
(filelist, masked, searchdirs) = self.collection.collect_bbfiles(self.data, self.data)
# Add inotify watches for directories searched for bb/bbappend files
for dirent in searchdirs:
self.add_filewatch([[dirent]], dirs=True)
self.parser = CookerParser(self, mcfilelist, total_masked)
self.parser = CookerParser(self, filelist, masked)
self.parsecache_valid = True
self.state = state.parsing
@@ -1635,7 +1571,7 @@ class BBCooker:
self.show_appends_with_no_recipes()
self.handlePrefProviders()
for mc in self.multiconfigs:
self.recipecaches[mc].bbfile_priority = self.collections[mc].collection_priorities(self.recipecaches[mc].pkg_fn, self.parser.mcfilelist[mc], self.data)
self.recipecaches[mc].bbfile_priority = self.collection.collection_priorities(self.recipecaches[mc].pkg_fn, self.data)
self.state = state.running
# Send an event listing all stamps reachable after parsing
@@ -1656,7 +1592,7 @@ class BBCooker:
raise NothingToBuild
ignore = (self.data.getVar("ASSUME_PROVIDED") or "").split()
for pkg in pkgs_to_build.copy():
for pkg in pkgs_to_build:
if pkg in ignore:
parselog.warning("Explicit target \"%s\" is in ASSUME_PROVIDED, ignoring" % pkg)
if pkg.startswith("multiconfig:"):
@@ -1694,16 +1630,17 @@ class BBCooker:
return pkgs_to_build
def pre_serve(self):
# We now are in our own process so we can call this here.
# PRServ exits if its parent process exits
self.handlePRServ()
return
def post_serve(self):
self.shutdown(force=True)
prserv.serv.auto_shutdown()
if self.hashserv:
self.hashserv.process.terminate()
self.hashserv.process.join()
if hasattr(self, "data"):
bb.event.fire(CookerExit(), self.data)
bb.event.fire(CookerExit(), self.data)
def shutdown(self, force = False):
if force:
@@ -1713,7 +1650,6 @@ class BBCooker:
if self.parser:
self.parser.shutdown(clean=not force, force=force)
self.parser.final_cleanup()
def finishcommand(self):
self.state = state.initial
@@ -1727,9 +1663,8 @@ class BBCooker:
self.finishcommand()
self.extraconfigdata = {}
self.command.reset()
if hasattr(self, "data"):
self.databuilder.reset()
self.data = self.databuilder.data
self.databuilder.reset()
self.data = self.databuilder.data
self.parsecache_valid = False
self.baseconfig_valid = False
@@ -1744,19 +1679,21 @@ class CookerExit(bb.event.Event):
class CookerCollectFiles(object):
def __init__(self, priorities, mc=''):
self.mc = mc
def __init__(self, priorities):
self.bbappends = []
# Priorities is a list of tupples, with the second element as the pattern.
# We need to sort the list with the longest pattern first, and so on to
# the shortest. This allows nested layers to be properly evaluated.
self.bbfile_config_priorities = sorted(priorities, key=lambda tup: tup[1], reverse=True)
def calc_bbfile_priority(self, filename):
def calc_bbfile_priority( self, filename, matched = None ):
for _, _, regex, pri in self.bbfile_config_priorities:
if regex.match(filename):
return pri, regex
return 0, None
if matched is not None:
if not regex in matched:
matched.add(regex)
return pri
return 0
def get_bbfiles(self):
"""Get list of default .bb files by reading out the current directory"""
@@ -1786,10 +1723,10 @@ class CookerCollectFiles(object):
collectlog.debug(1, "collecting .bb files")
files = (config.getVar( "BBFILES") or "").split()
config.setVar("BBFILES", " ".join(files))
# Sort files by priority
files.sort( key=lambda fileitem: self.calc_bbfile_priority(fileitem)[0] )
config.setVar("BBFILES_PRIORITIZED", " ".join(files))
files.sort( key=lambda fileitem: self.calc_bbfile_priority(fileitem) )
if not len(files):
files = self.get_bbfiles()
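calc_bbfile_priority() above walks the BBFILE_PATTERN regexes, which __init__ sorted with the longest pattern first, and returns the priority of the first match; collect_bbfiles() then sorts BBFILES by that value. A self-contained sketch with hypothetical layer paths and priorities:

import re

# Hypothetical (collection, pattern, compiled regex, priority) tuples in the
# same shape as self.bbfile_config_priorities.
priorities = sorted([
    ("core",  "^/layers/meta/",            re.compile(r"^/layers/meta/"),            5),
    ("inner", "^/layers/meta/meta-inner/", re.compile(r"^/layers/meta/meta-inner/"), 7),
], key=lambda tup: tup[1], reverse=True)   # most specific pattern first

def calc_bbfile_priority(filename):
    # First matching regex wins, so a nested layer is tried before its parent.
    for _, _, regex, pri in priorities:
        if regex.match(filename):
            return pri
    return 0

files = ["/layers/meta/meta-inner/recipes/b.bb", "/layers/meta/recipes/a.bb"]
files.sort(key=calc_bbfile_priority)
print(files)  # a.bb (priority 5) now sorts before b.bb (priority 7)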
@@ -1909,67 +1846,43 @@ class CookerCollectFiles(object):
(bbappend, filename) = b
if (bbappend == f) or ('%' in bbappend and bbappend.startswith(f[:bbappend.index('%')])):
filelist.append(filename)
return tuple(filelist)
return filelist
def collection_priorities(self, pkgfns, fns, d):
# Return the priorities of the entries in pkgfns
# Also check that all the regexes in self.bbfile_config_priorities are used
# (but to do that we need to ensure skipped recipes aren't counted, nor
# collections in BBFILE_PATTERN_IGNORE_EMPTY)
def collection_priorities(self, pkgfns, d):
priorities = {}
seen = set()
matched = set()
matched_regex = set()
unmatched_regex = set()
for _, _, regex, _ in self.bbfile_config_priorities:
unmatched_regex.add(regex)
# Calculate priorities for each file
matched = set()
for p in pkgfns:
realfn, cls, mc = bb.cache.virtualfn2realfn(p)
priorities[p], regex = self.calc_bbfile_priority(realfn)
if regex in unmatched_regex:
matched_regex.add(regex)
unmatched_regex.remove(regex)
seen.add(realfn)
if regex:
matched.add(realfn)
priorities[p] = self.calc_bbfile_priority(realfn, matched)
if unmatched_regex:
# Account for bbappend files
unmatched = set()
for _, _, regex, pri in self.bbfile_config_priorities:
if not regex in matched:
unmatched.add(regex)
# Don't show the warning if the BBFILE_PATTERN did match .bbappend files
def find_bbappend_match(regex):
for b in self.bbappends:
(bbfile, append) = b
seen.add(append)
if regex.match(append):
# If the bbappend is matched by already "matched set", return False
for matched_regex in matched:
if matched_regex.match(append):
return False
return True
return False
# Account for skipped recipes
seen.update(fns)
seen.difference_update(matched)
def already_matched(fn):
for regex in matched_regex:
if regex.match(fn):
return True
return False
for unmatch in unmatched_regex.copy():
for fn in seen:
if unmatch.match(fn):
# If the bbappend or file was already matched by another regex, skip it
# e.g. for a layer within a layer, the outer regex could match, the inner
# regex may match nothing and we should warn about that
if already_matched(fn):
continue
unmatched_regex.remove(unmatch)
break
for unmatch in unmatched.copy():
if find_bbappend_match(unmatch):
unmatched.remove(unmatch)
for collection, pattern, regex, _ in self.bbfile_config_priorities:
if regex in unmatched_regex:
if regex in unmatched:
if d.getVar('BBFILE_PATTERN_IGNORE_EMPTY_%s' % collection) != '1':
collectlog.warning("No bb files in %s matched BBFILE_PATTERN_%s '%s'" % (self.mc if self.mc else 'default',
collection, pattern))
collectlog.warning("No bb files matched BBFILE_PATTERN_%s '%s'" % (collection, pattern))
return priorities
@@ -2018,8 +1931,7 @@ class Parser(multiprocessing.Process):
except queue.Empty:
pass
else:
self.results.close()
self.results.join_thread()
self.results.cancel_join_thread()
break
if pending:
@@ -2028,8 +1940,6 @@ class Parser(multiprocessing.Process):
try:
job = self.jobs.pop()
except IndexError:
self.results.close()
self.results.join_thread()
break
result = self.parse(*job)
# Clear the siggen cache after parsing to control memory usage, its huge
@@ -2039,7 +1949,7 @@ class Parser(multiprocessing.Process):
except queue.Full:
pending.append(result)
def parse(self, mc, cache, filename, appends):
def parse(self, filename, appends):
try:
origfilter = bb.event.LogHandler.filter
# Record the filename we're parsing into any events generated
@@ -2053,7 +1963,7 @@ class Parser(multiprocessing.Process):
bb.event.set_class_handlers(self.handlers.copy())
bb.event.LogHandler.filter = parse_filter
return True, mc, cache.parse(filename, appends)
return True, self.bb_cache.parse(filename, appends)
except Exception as exc:
tb = sys.exc_info()[2]
exc.recipe = filename
@@ -2068,8 +1978,8 @@ class Parser(multiprocessing.Process):
bb.event.LogHandler.filter = origfilter
class CookerParser(object):
def __init__(self, cooker, mcfilelist, masked):
self.mcfilelist = mcfilelist
def __init__(self, cooker, filelist, masked):
self.filelist = filelist
self.cooker = cooker
self.cfgdata = cooker.data
self.cfghash = cooker.data_hash
@@ -2083,31 +1993,28 @@ class CookerParser(object):
self.skipped = 0
self.virtuals = 0
self.total = len(filelist)
self.current = 0
self.process_names = []
self.bb_caches = bb.cache.MulticonfigCache(self.cfgbuilder, self.cfghash, cooker.caches_array)
self.fromcache = set()
self.willparse = set()
for mc in self.cooker.multiconfigs:
for filename in self.mcfilelist[mc]:
appends = self.cooker.collections[mc].get_file_appends(filename)
if not self.bb_caches[mc].cacheValid(filename, appends):
self.willparse.add((mc, self.bb_caches[mc], filename, appends))
else:
self.fromcache.add((mc, self.bb_caches[mc], filename, appends))
self.total = len(self.fromcache) + len(self.willparse)
self.toparse = len(self.willparse)
self.bb_cache = bb.cache.Cache(self.cfgbuilder, self.cfghash, cooker.caches_array)
self.fromcache = []
self.willparse = []
for filename in self.filelist:
appends = self.cooker.collection.get_file_appends(filename)
if not self.bb_cache.cacheValid(filename, appends):
self.willparse.append((filename, appends))
else:
self.fromcache.append((filename, appends))
self.toparse = self.total - len(self.fromcache)
self.progress_chunk = int(max(self.toparse / 100, 1))
self.num_processes = min(int(self.cfgdata.getVar("BB_NUMBER_PARSE_THREADS") or
multiprocessing.cpu_count()), self.toparse)
multiprocessing.cpu_count()), len(self.willparse))
self.start()
self.haveshutdown = False
self.syncthread = None
def start(self):
self.results = self.load_cached()
@@ -2115,9 +2022,7 @@ class CookerParser(object):
if self.toparse:
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
def init():
signal.signal(signal.SIGTERM, signal.SIG_DFL)
signal.signal(signal.SIGHUP, signal.SIG_DFL)
signal.signal(signal.SIGINT, signal.SIG_IGN)
Parser.bb_cache = self.bb_cache
bb.utils.set_process_name(multiprocessing.current_process().name)
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
@@ -2127,7 +2032,7 @@ class CookerParser(object):
def chunkify(lst,n):
return [lst[i::n] for i in range(n)]
self.jobs = chunkify(list(self.willparse), self.num_processes)
self.jobs = chunkify(self.willparse, self.num_processes)
for i in range(0, self.num_processes):
parser = Parser(self.jobs[i], self.result_queue, self.parser_quit, init, self.cooker.configuration.profile)
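chunkify() above distributes the recipes to be parsed round-robin across the worker processes, one bucket per process:

def chunkify(lst, n):
    # Element i ends up in bucket i % n.
    return [lst[i::n] for i in range(n)]

print(chunkify(list(range(7)), 3))
# [[0, 3, 6], [1, 4], [2, 5]]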
@@ -2151,9 +2056,12 @@ class CookerParser(object):
self.total)
bb.event.fire(event, self.cfgdata)
for process in self.processes:
self.parser_quit.put(None)
for process in self.processes:
self.parser_quit.put(None)
else:
self.parser_quit.cancel_join_thread()
for process in self.processes:
self.parser_quit.put(None)
# Cleanup the queue before call process.join(), otherwise there might be
# deadlocks.
@@ -2170,17 +2078,9 @@ class CookerParser(object):
else:
process.join()
self.parser_quit.close()
# Allow data left in the cancel queue to be discarded
self.parser_quit.cancel_join_thread()
def sync_caches():
for c in self.bb_caches.values():
c.sync()
sync = threading.Thread(target=sync_caches, name="SyncThread")
self.syncthread = sync
sync = threading.Thread(target=self.bb_cache.sync)
sync.start()
multiprocessing.util.Finalize(None, sync.join, exitpriority=-100)
bb.codeparser.parser_cache_savemerge()
bb.fetch.fetcher_parse_done()
if self.cooker.configuration.profile:
@@ -2194,14 +2094,10 @@ class CookerParser(object):
bb.utils.process_profilelog(profiles, pout = pout)
print("Processed parsing statistics saved to %s" % (pout))
def final_cleanup(self):
if self.syncthread:
self.syncthread.join()
def load_cached(self):
for mc, cache, filename, appends in self.fromcache:
cached, infos = cache.load(filename, appends)
yield not cached, mc, infos
for filename, appends in self.fromcache:
cached, infos = self.bb_cache.load(filename, appends)
yield not cached, infos
def parse_generator(self):
while True:
@@ -2223,25 +2119,25 @@ class CookerParser(object):
result = []
parsed = None
try:
parsed, mc, result = next(self.results)
parsed, result = next(self.results)
except StopIteration:
self.shutdown()
return False
except bb.BBHandledException as exc:
self.error += 1
logger.error('Failed to parse recipe: %s' % exc.recipe)
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
except ParsingFailure as exc:
self.error += 1
logger.error('Unable to parse %s: %s' %
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
except bb.parse.ParseError as exc:
self.error += 1
logger.error(str(exc))
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
except bb.data_smart.ExpansionError as exc:
self.error += 1
@@ -2250,7 +2146,7 @@ class CookerParser(object):
tb = list(itertools.dropwhile(lambda e: e.filename.startswith(bbdir), exc.traceback))
logger.error('ExpansionError during parsing %s', value.recipe,
exc_info=(etype, value, tb))
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
except Exception as exc:
self.error += 1
@@ -2262,7 +2158,7 @@ class CookerParser(object):
# Most likely, an exception occurred during raising an exception
import traceback
logger.error('Exception during parse: %s' % traceback.format_exc())
self.shutdown(clean=False, force=True)
self.shutdown(clean=False)
return False
self.current += 1
@@ -2279,16 +2175,13 @@ class CookerParser(object):
if info_array[0].skipped:
self.skipped += 1
self.cooker.skiplist[virtualfn] = SkippedPackage(info_array[0])
self.bb_caches[mc].add_info(virtualfn, info_array, self.cooker.recipecaches[mc],
(fn, cls, mc) = bb.cache.virtualfn2realfn(virtualfn)
self.bb_cache.add_info(virtualfn, info_array, self.cooker.recipecaches[mc],
parsed=parsed, watcher = self.cooker.add_filewatch)
return True
def reparse(self, filename):
to_reparse = set()
for mc in self.cooker.multiconfigs:
to_reparse.add((mc, filename, self.cooker.collections[mc].get_file_appends(filename)))
for mc, filename, appends in to_reparse:
infos = self.bb_caches[mc].parse(filename, appends)
for vfn, info_array in infos:
self.cooker.recipecaches[mc].add_from_recipeinfo(vfn, info_array)
infos = self.bb_cache.parse(filename, self.cooker.collection.get_file_appends(filename))
for vfn, info_array in infos:
(fn, cls, mc) = bb.cache.virtualfn2realfn(vfn)
self.cooker.recipecaches[mc].add_from_recipeinfo(vfn, info_array)


@@ -23,8 +23,8 @@ logger = logging.getLogger("BitBake")
parselog = logging.getLogger("BitBake.Parsing")
class ConfigParameters(object):
def __init__(self, argv=None):
self.options, targets = self.parseCommandLine(argv or sys.argv)
def __init__(self, argv=sys.argv):
self.options, targets = self.parseCommandLine(argv)
self.environment = self.parseEnvironment()
self.options.pkgs_to_build = targets or []
@@ -58,18 +58,11 @@ class ConfigParameters(object):
def updateToServer(self, server, environment):
options = {}
for o in ["abort", "force", "invalidate_stamp",
"dry_run", "dump_signatures",
"extra_assume_provided", "profile",
"prefile", "postfile", "server_timeout",
"nosetscene", "setsceneonly", "skipsetscene",
"runall", "runonly", "writeeventlog"]:
"verbose", "debug", "dry_run", "dump_signatures",
"debug_domains", "extra_assume_provided", "profile",
"prefile", "postfile", "server_timeout"]:
options[o] = getattr(self.options, o)
options['build_verbose_shell'] = self.options.verbose
options['build_verbose_stdout'] = self.options.verbose
options['default_loglevel'] = bb.msg.loggerDefaultLogLevel
options['debug_domains'] = bb.msg.loggerDefaultDomains
ret, error = server.runCommand(["updateConfig", options, environment, sys.argv])
if error:
raise Exception("Unable to update the server configuration with local parameters: %s" % error)
@@ -118,11 +111,11 @@ class CookerConfiguration(object):
"""
def __init__(self):
self.debug_domains = bb.msg.loggerDefaultDomains
self.default_loglevel = bb.msg.loggerDefaultLogLevel
self.debug_domains = []
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.debug = 0
self.cmd = None
self.abort = True
self.force = False
@@ -132,21 +125,34 @@ class CookerConfiguration(object):
self.skipsetscene = False
self.invalidate_stamp = False
self.dump_signatures = []
self.build_verbose_shell = False
self.build_verbose_stdout = False
self.dry_run = False
self.tracking = False
self.xmlrpcinterface = []
self.server_timeout = None
self.writeeventlog = False
self.server_only = False
self.limited_deps = False
self.runall = []
self.runonly = []
self.env = {}
def setConfigParameters(self, parameters):
for key in self.__dict__.keys():
if key in parameters.options.__dict__:
setattr(self, key, parameters.options.__dict__[key])
self.env = parameters.environment.copy()
def setServerRegIdleCallback(self, srcb):
self.server_register_idlecallback = srcb
def __getstate__(self):
state = {}
for key in self.__dict__.keys():
state[key] = getattr(self, key)
if key == "server_register_idlecallback":
state[key] = None
else:
state[key] = getattr(self, key)
return state
def __setstate__(self,state):
@@ -164,7 +170,7 @@ def catch_parse_error(func):
import traceback
parselog.critical(traceback.format_exc())
parselog.critical("Unable to parse %s: %s" % (fn, exc))
raise bb.BBHandledException()
sys.exit(1)
except bb.data_smart.ExpansionError as exc:
import traceback
@@ -176,10 +182,10 @@ def catch_parse_error(func):
if not fn.startswith(bbdir):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
raise bb.BBHandledException()
sys.exit(1)
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
raise bb.BBHandledException()
sys.exit(1)
return wrapped
@catch_parse_error
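catch_parse_error() wraps the configuration parsing helpers so that a parse failure is reported once and then either terminates the process (sys.exit(1)) or raises bb.BBHandledException(), depending on which side of the hunk is read. A minimal standalone sketch of a decorator in that style, using a stand-in exception class rather than BitBake's own:

import functools

class HandledException(Exception):
    """Stand-in for bb.BBHandledException: the error was already reported."""

def catch_parse_error(func):
    @functools.wraps(func)
    def wrapped(fn, *args):
        try:
            return func(fn, *args)
        except SyntaxError as exc:
            print("Unable to parse %s: %s" % (fn, exc))
            # Raising a 'handled' exception lets callers unwind cleanly
            # instead of the decorator calling sys.exit(1) itself.
            raise HandledException() from exc
    return wrapped

@catch_parse_error
def parse_config(fn, text):
    compile(text, fn, "exec")

parse_config("demo.conf", "VALUE = 1")        # parses fine
try:
    parse_config("demo.conf", "VALUE = = 1")  # SyntaxError, reported once
except HandledException:
    pass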
@@ -209,7 +215,7 @@ def findConfigFile(configfile, data):
return None
#
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
#
@@ -291,8 +297,6 @@ class CookerDataBuilder(object):
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
if config[0].isdigit():
bb.fatal("Multiconfig name '%s' is invalid as multiconfigs cannot start with a digit" % config)
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
@@ -302,13 +306,13 @@ class CookerDataBuilder(object):
self.data_hash = data_hash.hexdigest()
except (SyntaxError, bb.BBHandledException):
raise bb.BBHandledException()
raise bb.BBHandledException
except bb.data_smart.ExpansionError as e:
logger.error(str(e))
raise bb.BBHandledException()
raise bb.BBHandledException
except Exception:
logger.exception("Error parsing configuration files")
raise bb.BBHandledException()
raise bb.BBHandledException
# Create a copy so we can reset at a later date when UIs disconnect
self.origdata = self.data
@@ -344,9 +348,6 @@ class CookerDataBuilder(object):
layers = (data.getVar('BBLAYERS') or "").split()
broken_layers = []
if not layers:
bb.fatal("The bblayers.conf file doesn't contain any BBLAYERS definition")
data = bb.data.createCopy(data)
approved = bb.utils.approved_variables()
@@ -360,7 +361,7 @@ class CookerDataBuilder(object):
for layer in broken_layers:
parselog.critical(" %s", layer)
parselog.critical("Please check BBLAYERS in %s" % (layerconf))
raise bb.BBHandledException()
sys.exit(1)
for layer in layers:
parselog.debug(2, "Adding layer %s", layer)
@@ -386,13 +387,10 @@ class CookerDataBuilder(object):
invalid.append(entry)
continue
l, f = parts
invert = l[0] == "!"
if invert:
l = l[1:]
if (l in collections and not invert) or (l not in collections and invert):
if l in collections:
data.appendVar("BBFILES", " " + f)
if invalid:
bb.fatal("BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
bb.fatal("BBFILES_DYNAMIC entries must be of the form <collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
collections_tmp = collections[:]
@@ -401,8 +399,6 @@ class CookerDataBuilder(object):
if c in collections_tmp:
bb.fatal("Found duplicated BBFILE_COLLECTIONS '%s', check bblayers.conf or layer.conf to fix it." % c)
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
if compat and not layerseries:
bb.fatal("No core layer found to work with layer '%s'. Missing entry in bblayers.conf?" % c)
if compat and not (compat & layerseries):
bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
% (c, " ".join(layerseries), " ".join(compat)))
@@ -434,9 +430,9 @@ class CookerDataBuilder(object):
handlerfn = data.getVarFlag(var, "filename", False)
if not handlerfn:
parselog.critical("Undefined event handler function '%s'" % var)
raise bb.BBHandledException()
sys.exit(1)
handlerln = int(data.getVarFlag(var, "lineno", False))
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln, data)
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))


@@ -14,8 +14,6 @@ import sys
import io
import traceback
import bb
def createDaemon(function, logfile):
"""
Detach a process from the controlling terminal and run it in the


@@ -161,12 +161,6 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
return True
if func:
# Write a comment indicating where the shell function came from (line number and filename) to make it easier
# for the user to diagnose task failures. This comment is also used by build.py to determine the metadata
# location of shell functions.
o.write("# line: {0}, file: {1}\n".format(
d.getVarFlag(var, "lineno", False),
d.getVarFlag(var, "filename", False)))
# NOTE: should probably check for unbalanced {} within the var
val = val.rstrip('\n')
o.write("%s() {\n%s\n}\n" % (varExpanded, val))


@@ -28,7 +28,7 @@ logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~]+?}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')
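The only difference between the two __expand_var_regexp__ lines above is that one of them also accepts ':' inside a ${...} reference. The effect on matching, for an illustrative input string:

import re

new_re = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")  # allows ':' in the name
old_re = re.compile(r"\${[a-zA-Z0-9\-_+./~]+?}")   # does not

s = "${PN}-${PV} and ${FOO:append}"
print(new_re.findall(s))  # ['${PN}', '${PV}', '${FOO:append}']
print(old_re.findall(s))  # ['${PN}', '${PV}']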
@@ -189,7 +189,7 @@ class IncludeHistory(object):
if self.current.parent:
self.current = self.current.parent
else:
bb.warn("Include log: Tried to finish '%s' at top level." % self.filename)
bb.warn("Include log: Tried to finish '%s' at top level." % filename)
return False
def emit(self, o, level = 0):
@@ -411,8 +411,6 @@ class DataSmart(MutableMapping):
raise
except bb.parse.SkipRecipe:
raise
except bb.BBHandledException:
raise
except Exception as exc:
tb = sys.exc_info()[2]
raise ExpansionError(varname, s, exc).with_traceback(tb) from exc
@@ -483,7 +481,6 @@ class DataSmart(MutableMapping):
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
var = var.replace(":", "_")
self.expand_cache = {}
parsing=False
if 'parsing' in loginfo:
@@ -592,8 +589,6 @@ class DataSmart(MutableMapping):
"""
Rename the variable key to newkey
"""
key = key.replace(":", "_")
newkey = newkey.replace(":", "_")
if key == newkey:
bb.warn("Calling renameVar with equivalent keys (%s) is invalid" % key)
return
@@ -642,7 +637,6 @@ class DataSmart(MutableMapping):
self.setVar(var + "_prepend", value, ignore=True, parsing=True)
def delVar(self, var, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
loginfo['detail'] = ""
@@ -670,7 +664,6 @@ class DataSmart(MutableMapping):
override = None
def setVarFlag(self, var, flag, value, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
if 'op' not in loginfo:
@@ -694,7 +687,6 @@ class DataSmart(MutableMapping):
self.dict["__exportlist"]["_content"].add(var)
def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False, retparser=False):
var = var.replace(":", "_")
if flag == "_content":
cachename = var
else:
@@ -822,7 +814,6 @@ class DataSmart(MutableMapping):
return value
def delVarFlag(self, var, flag, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
local_var, _ = self._findVar(var)
@@ -840,7 +831,6 @@ class DataSmart(MutableMapping):
del self.dict[var][flag]
def appendVarFlag(self, var, flag, value, **loginfo):
var = var.replace(":", "_")
loginfo['op'] = 'append'
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
@@ -848,7 +838,6 @@ class DataSmart(MutableMapping):
self.setVarFlag(var, flag, newvalue, ignore=True)
def prependVarFlag(self, var, flag, value, **loginfo):
var = var.replace(":", "_")
loginfo['op'] = 'prepend'
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
@@ -856,7 +845,6 @@ class DataSmart(MutableMapping):
self.setVarFlag(var, flag, newvalue, ignore=True)
def setVarFlags(self, var, flags, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
infer_caller_details(loginfo)
if not var in self.dict:
@@ -871,7 +859,6 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
var = var.replace(":", "_")
local_var, _ = self._findVar(var)
flags = {}
@@ -888,7 +875,6 @@ class DataSmart(MutableMapping):
def delVarFlags(self, var, **loginfo):
var = var.replace(":", "_")
self.expand_cache = {}
if not var in self.dict:
self._makeShadowCopy(var)
@@ -1019,7 +1005,7 @@ class DataSmart(MutableMapping):
else:
data.update({key:value})
varflags = d.getVarFlags(key, internalflags = True, expand=["vardepvalue"])
varflags = d.getVarFlags(key, internalflags = True)
if not varflags:
continue
for f in varflags:


@@ -10,17 +10,17 @@ BitBake build tools.
# SPDX-License-Identifier: GPL-2.0-only
#
import ast
import atexit
import collections
import logging
import pickle
import sys
import threading
import pickle
import logging
import atexit
import traceback
import ast
import threading
import bb.exceptions
import bb.utils
import bb.compat
import bb.exceptions
# This is the pid for which we should generate the event. This is set when
# the runqueue forks off.
@@ -56,7 +56,7 @@ def set_class_handlers(h):
_handlers = h
def clean_class_handlers():
return collections.OrderedDict()
return bb.compat.OrderedDict()
# Internal
_handlers = clean_class_handlers()
@@ -118,8 +118,6 @@ def fire_class_handlers(event, d):
if _eventfilter:
if not _eventfilter(name, handler, event, d):
continue
if d is not None and not name in (d.getVar("__BBHANDLERS_MC") or set()):
continue
execute_handler(name, handler, event, d)
ui_queue = []
@@ -229,19 +227,11 @@ def fire_from_worker(event, d):
fire_ui_handlers(event, d)
noop = lambda _: None
def register(name, handler, mask=None, filename=None, lineno=None, data=None):
def register(name, handler, mask=None, filename=None, lineno=None):
"""Register an Event handler"""
if data is not None and data.getVar("BB_CURRENT_MC"):
mc = data.getVar("BB_CURRENT_MC")
name = '%s%s' % (mc.replace('-', '_'), name)
# already registered
if name in _handlers:
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
bbhands_mc.add(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
return AlreadyRegistered
if handler is not None:
@@ -278,20 +268,10 @@ def register(name, handler, mask=None, filename=None, lineno=None, data=None):
_event_handler_map[m] = {}
_event_handler_map[m][name] = True
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
bbhands_mc.add(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
return Registered
def remove(name, handler, data=None):
def remove(name, handler):
"""Remove an Event handler"""
if data is not None:
if data.getVar("BB_CURRENT_MC"):
mc = data.getVar("BB_CURRENT_MC")
name = '%s%s' % (mc.replace('-', '_'), name)
_handlers.pop(name)
if name in _catchall_handlers:
_catchall_handlers.pop(name)
@@ -299,12 +279,6 @@ def remove(name, handler, data=None):
if name in _event_handler_map[event]:
_event_handler_map[event].pop(name)
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
if name in bbhands_mc:
bbhands_mc.remove(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
def get_handlers():
return _handlers
@@ -415,10 +389,6 @@ class RecipeEvent(Event):
class RecipePreFinalise(RecipeEvent):
""" Recipe Parsing Complete but not yet finalised"""
class RecipePostKeyExpansion(RecipeEvent):
""" Recipe Parsing Complete but not yet finalised"""
class RecipeTaskPreProcess(RecipeEvent):
"""
Recipe Tasks about to be finalised
@@ -670,17 +640,6 @@ class ReachableStamps(Event):
Event.__init__(self)
self.stamps = stamps
class StaleSetSceneTasks(Event):
"""
An event listing setscene tasks which are 'stale' and will
be rerun. The metadata may use to clean up stale data.
tasks is a mapping of tasks and matching stale stamps.
"""
def __init__(self, tasks):
Event.__init__(self)
self.tasks = tasks
class FilesMatchingFound(Event):
"""
Event when a list of files matching the supplied pattern has


@@ -290,7 +290,7 @@ class URI(object):
def _param_str_split(self, string, elmdelim, kvdelim="="):
ret = collections.OrderedDict()
for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim) if x]:
for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim)]:
ret[k] = v
return ret
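The changed line above filters empty segments with "if x"; without the guard, a leading or trailing delimiter produces an empty string whose split() result cannot be unpacked into k, v. Behaviour of the helper with the guard in place, as a standalone function:

import collections

def param_str_split(string, elmdelim, kvdelim="="):
    ret = collections.OrderedDict()
    # The 'if x' filter drops empty segments from a leading/trailing delimiter.
    for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim) if x]:
        ret[k] = v
    return ret

print(param_str_split("protocol=https;branch=master;", ";"))
# OrderedDict([('protocol', 'https'), ('branch', 'master')])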
@@ -428,7 +428,7 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
uri_decoded = list(decodeurl(ud.url))
uri_find_decoded = list(decodeurl(uri_find))
uri_replace_decoded = list(decodeurl(uri_replace))
logger.debug2("For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
logger.debug(2, "For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
result_decoded = ['', '', '', '', '', {}]
for loc, i in enumerate(uri_find_decoded):
result_decoded[loc] = uri_decoded[loc]
@@ -474,7 +474,7 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
result = encodeurl(result_decoded)
if result == ud.url:
return None
logger.debug2("For url %s returning %s" % (ud.url, result))
logger.debug(2, "For url %s returning %s" % (ud.url, result))
return result
methods = []
@@ -499,9 +499,9 @@ def fetcher_init(d):
# When to drop SCM head revisions controlled by user policy
srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
if srcrev_policy == "cache":
logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs.clear()
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
@@ -562,9 +562,6 @@ def verify_checksum(ud, d, precomputed={}):
checksum_expected = getattr(ud, "%s_expected" % checksum_id)
if checksum_expected == '':
checksum_expected = None
return {
"id": checksum_id,
"name": checksum_name,
@@ -615,7 +612,7 @@ def verify_checksum(ud, d, precomputed={}):
for ci in checksum_infos:
if ci["expected"] and ci["expected"] != ci["data"]:
messages.append("File: '%s' has %s checksum '%s' when '%s' was " \
messages.append("File: '%s' has %s checksum %s when %s was " \
"expected" % (ud.localpath, ci["id"], ci["data"], ci["expected"]))
bad_checksum = ci["data"]
@@ -856,13 +853,18 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
# Ensure that a _PYTHON_SYSCONFIGDATA_NAME value set by a recipe
# (for example via python3native.bbclass since warrior) is not set for
# host Python (otherwise tools like git-make-shallow will fail)
cmd = 'unset _PYTHON_SYSCONFIGDATA_NAME; ' + cmd
# Disable pseudo as it may affect ssh, potentially causing it to hang.
cmd = 'export PSEUDO_DISABLED=1; ' + cmd
if workdir:
logger.debug("Running '%s' in %s" % (cmd, workdir))
logger.debug(1, "Running '%s' in %s" % (cmd, workdir))
else:
logger.debug("Running %s", cmd)
logger.debug(1, "Running %s", cmd)
success = False
error_message = ""
@@ -871,7 +873,7 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
(output, errors) = bb.process.run(cmd, log=log, shell=True, stderr=subprocess.PIPE, cwd=workdir)
success = True
except bb.process.NotFoundError as e:
error_message = "Fetch command %s not found" % (e.command)
error_message = "Fetch command %s" % (e.command)
except bb.process.ExecutionError as e:
if e.stdout:
output = "output:\n%s\n%s" % (e.stdout, e.stderr)
@@ -903,7 +905,7 @@ def check_network_access(d, info, url):
elif not trusted_network(d, url):
raise UntrustedUrl(url, info)
else:
logger.debug("Fetcher accessed the network with the command %s" % info)
logger.debug(1, "Fetcher accessed the network with the command %s" % info)
def build_mirroruris(origud, mirrors, ld):
uris = []
@@ -929,7 +931,7 @@ def build_mirroruris(origud, mirrors, ld):
continue
if not trusted_network(ld, newuri):
logger.debug("Mirror %s not in the list of trusted networks, skipping" % (newuri))
logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" % (newuri))
continue
# Create a local copy of the mirrors minus the current line
@@ -942,8 +944,8 @@ def build_mirroruris(origud, mirrors, ld):
newud = FetchData(newuri, ld)
newud.setup_localpath(ld)
except bb.fetch2.BBFetchException as e:
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(str(e))
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(1, str(e))
try:
# setup_localpath of file:// urls may fail, we should still see
# if mirrors of the url exist
@@ -1046,8 +1048,8 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
elif isinstance(e, NoChecksumError):
raise
else:
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(str(e))
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(1, str(e))
try:
ud.method.clean(ud, ld)
except UnboundLocalError:
@@ -1193,6 +1195,8 @@ def get_checksum_file_list(d):
paths = ud.method.localpaths(ud, d)
for f in paths:
pth = ud.decodedurl
if '*' in pth:
f = os.path.join(os.path.abspath(f), pth)
if f.startswith(dl_dir):
# The local fetcher's behaviour is to return a path under DL_DIR if it couldn't find the file anywhere else
if os.path.exists(f):
@@ -1246,7 +1250,7 @@ class FetchData(object):
if checksum_name in self.parm:
checksum_expected = self.parm[checksum_name]
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az"]:
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3"]:
checksum_expected = None
else:
checksum_expected = d.getVarFlag("SRC_URI", checksum_name)
@@ -1361,6 +1365,9 @@ class FetchMethod(object):
# We cannot compute checksums for directories
if os.path.isdir(urldata.localpath):
return False
if urldata.localpath.find("*") != -1:
return False
return True
def recommends_checksum(self, urldata):
@@ -1423,6 +1430,11 @@ class FetchMethod(object):
iterate = False
file = urldata.localpath
# Localpath can't deal with 'dir/*' entries, so it converts them to '.',
# but it must be corrected back for local files copying
if urldata.basename == '*' and file.endswith('/.'):
file = '%s/%s' % (file.rstrip('/.'), urldata.path)
try:
unpack = bb.utils.to_boolean(urldata.parm.get('unpack'), True)
except ValueError as exc:
@@ -1459,10 +1471,6 @@ class FetchMethod(object):
cmd = '7z x -so %s | tar x --no-same-owner -f -' % file
elif file.endswith('.7z'):
cmd = '7za x -y %s 1>/dev/null' % file
elif file.endswith('.tzst') or file.endswith('.tar.zst'):
cmd = 'zstd --decompress --stdout %s | tar x --no-same-owner -f -' % file
elif file.endswith('.zst'):
cmd = 'zstd --decompress --stdout %s > %s' % (file, efile)
elif file.endswith('.zip') or file.endswith('.jar'):
try:
dos = bb.utils.to_boolean(urldata.parm.get('dos'), False)
@@ -1522,7 +1530,7 @@ class FetchMethod(object):
if urlpath.find("/") != -1:
destdir = urlpath.rsplit("/", 1)[0] + '/'
bb.utils.mkdirhier("%s/%s" % (unpackdir, destdir))
cmd = 'cp -fpPRH "%s" "%s"' % (file, destdir)
cmd = 'cp -fpPRH %s %s' % (file, destdir)
if not cmd:
return
@@ -1605,15 +1613,10 @@ class FetchMethod(object):
"""
if os.path.exists(ud.localpath):
return True
if ud.localpath.find("*") != -1:
return True
return False
def implicit_urldata(self, ud, d):
"""
Get a list of FetchData objects for any implicit URLs that will also
be downloaded when we fetch the given URL.
"""
return []
class Fetch(object):
def __init__(self, urls, d, cache = True, localonly = False, connection_cache = None):
if localonly and cache:
@@ -1691,7 +1694,7 @@ class Fetch(object):
if m.verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
done = True
elif m.try_premirror(ud, self.d):
logger.debug("Trying PREMIRRORS")
logger.debug(1, "Trying PREMIRRORS")
mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
done = m.try_mirrors(self, ud, self.d, mirrors)
if done:
@@ -1701,7 +1704,7 @@ class Fetch(object):
m.update_donestamp(ud, self.d)
except ChecksumError as e:
logger.warning("Checksum failure encountered with premirror download of %s - will attempt other sources." % u)
logger.debug(str(e))
logger.debug(1, str(e))
done = False
if premirroronly:
@@ -1713,7 +1716,7 @@ class Fetch(object):
try:
if not trusted_network(self.d, ud.url):
raise UntrustedUrl(ud.url)
logger.debug("Trying Upstream")
logger.debug(1, "Trying Upstream")
m.download(ud, self.d)
if hasattr(m, "build_mirror_data"):
m.build_mirror_data(ud, self.d)
@@ -1728,19 +1731,19 @@ class Fetch(object):
except BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warning("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
logger.debug(str(e))
logger.debug(1, str(e))
if os.path.exists(ud.localpath):
rename_bad_checksum(ud, e.checksum)
elif isinstance(e, NoChecksumError):
raise
else:
logger.warning('Failed to fetch URL %s, attempting MIRRORS if available' % u)
logger.debug(str(e))
logger.debug(1, str(e))
firsterr = e
# Remove any incomplete fetch
if not verified_stamp:
m.clean(ud, self.d)
logger.debug("Trying MIRRORS")
logger.debug(1, "Trying MIRRORS")
mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
done = m.try_mirrors(self, ud, self.d, mirrors)
@@ -1777,7 +1780,7 @@ class Fetch(object):
ud = self.ud[u]
ud.setup_localpath(self.d)
m = ud.method
logger.debug("Testing URL %s", u)
logger.debug(1, "Testing URL %s", u)
# First try checking uri, u, from PREMIRRORS
mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
ret = m.try_mirrors(self, ud, self.d, mirrors, True)
@@ -1839,24 +1842,6 @@ class Fetch(object):
if ud.lockfile:
bb.utils.unlockfile(lf)
def expanded_urldata(self, urls=None):
"""
Get an expanded list of FetchData objects covering both the given
URLS and any additional implicit URLs that are added automatically by
the appropriate FetchMethod.
"""
if not urls:
urls = self.urls
urldata = []
for url in urls:
ud = self.ud[url]
urldata.append(ud)
urldata += ud.method.implicit_urldata(ud, self.d)
return urldata
class FetchConnectionCache(object):
"""
A class which represents a container for socket connections.
@@ -1911,7 +1896,6 @@ from . import repo
from . import clearcase
from . import npm
from . import npmsw
from . import az
methods.append(local.Local())
methods.append(wget.Wget())
@@ -1931,4 +1915,3 @@ methods.append(repo.Repo())
methods.append(clearcase.ClearCase())
methods.append(npm.Npm())
methods.append(npmsw.NpmShrinkWrap())
methods.append(az.Az())
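A large share of the hunks in this compare only swap the logging call convention: one side uses plain calls (logger.debug(msg), logger.debug2(msg)), the other the older BitBake convention in which debug() takes a numeric debug level as its first argument and maps it onto progressively lower numeric log levels. A minimal sketch of that level mapping, assuming a simplified logger rather than the real bb.msg.BBLogger:

import logging

class LevelledLogger(logging.Logger):
    def debug(self, level, msg, *args, **kwargs):
        # level 1 -> logging.DEBUG (10), level 2 -> 9, level 3 -> 8, ...
        self.log(logging.DEBUG - level + 1, msg, *args, **kwargs)

logging.setLoggerClass(LevelledLogger)
logger = logging.getLogger("BitBake.Fetcher")
logging.basicConfig(level=logging.DEBUG - 2)     # show debug levels 1-3
logger.debug(1, "Trying PREMIRRORS")
logger.debug(2, "Fetch: checking for module directory '%s'", "/path/to/moddir")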

View File

@@ -1,93 +0,0 @@
"""
BitBake 'Fetch' Azure Storage implementation
"""
# Copyright (C) 2021 Alejandro Hernandez Samaniego
#
# Based on bb.fetch2.wget:
# Copyright (C) 2003, 2004 Chris Larson
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import shlex
import os
import bb
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2.wget import Wget
class Az(Wget):
def supports(self, ud, d):
"""
Check to see if a given url can be fetched from Azure Storage
"""
return ud.type in ['az']
def checkstatus(self, fetch, ud, d, try_again=True):
# checkstatus discards parameters either way; we need to do this before adding the SAS
ud.url = ud.url.replace('az://','https://').split(';')[0]
az_sas = d.getVar('AZ_SAS')
if az_sas and az_sas not in ud.url:
ud.url += az_sas
return Wget.checkstatus(self, fetch, ud, d, try_again)
# Override download method, include retries
def download(self, ud, d, retries=3):
"""Fetch urls"""
# If we're reaching the account transaction limit we might be refused a connection,
# retrying allows us to avoid false negatives since the limit changes over time
fetchcmd = self.basecmd + ' --retry-connrefused --waitretry=5'
# We need to provide a localpath to avoid wget using the SAS
# ud.localfile either has the downloadfilename or ud.path
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile)
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if ud.user and ud.pswd:
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
# Check if a Shared Access Signature was given and use it
az_sas = d.getVar('AZ_SAS')
if az_sas:
azuri = '%s%s%s%s' % ('https://', ud.host, ud.path, az_sas)
else:
azuri = '%s%s%s' % ('https://', ud.host, ud.path)
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it... trying again.
fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % azuri)
else:
fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % azuri)
try:
self._runwget(ud, d, fetchcmd, False)
except FetchError as e:
# Azure fails on handshake sometimes when using wget after some stress, producing a
# FetchError from the fetcher; if the artifact exists, retrying should succeed
if 'Unable to establish SSL connection' in str(e):
logger.debug2('Unable to establish SSL connection: Retries remaining: %s, Retrying...' % retries)
self.download(ud, d, retries -1)
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath):
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (azuri, ud.localpath), azuri)
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (azuri), azuri)
return True
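The Azure fetcher above rewrites az:// URLs to https:// and, when the AZ_SAS variable is set, appends that Shared Access Signature to the URI it hands to wget. A small illustration of the URI assembly only; host, path and token values are made up:

def build_az_uri(host, path, az_sas=None):
    # AZ_SAS, when set, is a query string such as "?sv=2020-08-04&ss=b&sig=..."
    return "https://%s%s%s" % (host, path, az_sas or "")

print(build_az_uri("example.blob.core.windows.net", "/container/file.tar.gz"))
print(build_az_uri("example.blob.core.windows.net", "/container/file.tar.gz",
                   "?sv=2020-08-04&sig=REDACTED"))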

View File

@@ -74,16 +74,16 @@ class Bzr(FetchMethod):
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug("BZR Update %s", ud.url)
logger.debug(1, "BZR Update %s", ud.url)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
runfetchcmd(bzrcmd, d, workdir=os.path.join(ud.pkgdir, os.path.basename(ud.path)))
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug("BZR Checkout %s", ud.url)
logger.debug(1, "BZR Checkout %s", ud.url)
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", bzrcmd)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d, workdir=ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
@@ -109,7 +109,7 @@ class Bzr(FetchMethod):
"""
Return the latest upstream revision number
"""
logger.debug2("BZR fetcher hitting network for %s", ud.url)
logger.debug(2, "BZR fetcher hitting network for %s", ud.url)
bb.fetch2.check_network_access(d, self._buildbzrcommand(ud, d, "revno"), ud.url)

View File

@@ -70,7 +70,7 @@ class ClearCase(FetchMethod):
return ud.type in ['ccrc']
def debug(self, msg):
logger.debug("ClearCase: %s", msg)
logger.debug(1, "ClearCase: %s", msg)
def urldata_init(self, ud, d):
"""

View File

@@ -51,10 +51,6 @@ class Cvs(FetchMethod):
ud.localfile = d.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath))
pkg = d.getVar('PN')
cvsdir = d.getVar("CVSDIR") or (d.getVar("DL_DIR") + "/cvs")
ud.pkgdir = os.path.join(cvsdir, pkg)
def need_update(self, ud, d):
if (ud.date == "now"):
return True
@@ -109,8 +105,11 @@ class Cvs(FetchMethod):
cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)
# create module directory
logger.debug2("Fetch: checking for module directory")
moddir = os.path.join(ud.pkgdir, localdir)
logger.debug(2, "Fetch: checking for module directory")
pkg = d.getVar('PN')
cvsdir = d.getVar("CVSDIR") or (d.getVar("DL_DIR") + "/cvs")
pkgdir = os.path.join(cvsdir, pkg)
moddir = os.path.join(pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + ud.url)
@@ -121,9 +120,9 @@ class Cvs(FetchMethod):
else:
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
workdir = ud.pkgdir
logger.debug("Running %s", cvscmd)
bb.utils.mkdirhier(pkgdir)
workdir = pkgdir
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd
@@ -141,7 +140,7 @@ class Cvs(FetchMethod):
# tar them up to a defined filename
workdir = None
if 'fullpath' in ud.parm:
workdir = ud.pkgdir
workdir = pkgdir
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir)
else:
workdir = os.path.dirname(os.path.realpath(moddir))
@@ -152,6 +151,9 @@ class Cvs(FetchMethod):
def clean(self, ud, d):
""" Clean CVS Files and tarballs """
bb.utils.remove(ud.pkgdir, True)
pkg = d.getVar('PN')
pkgdir = os.path.join(d.getVar("CVSDIR"), pkg)
bb.utils.remove(pkgdir, True)
bb.utils.remove(ud.localpath)

View File

@@ -63,12 +63,10 @@ import errno
import fnmatch
import os
import re
import shlex
import subprocess
import tempfile
import bb
import bb.progress
from contextlib import contextmanager
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
@@ -142,10 +140,6 @@ class Git(FetchMethod):
ud.proto = 'file'
else:
ud.proto = "git"
if ud.host == "github.com" and ud.proto == "git":
# github stopped supporting git protocol
# https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git
ud.proto = "https"
if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
@@ -225,12 +219,7 @@ class Git(FetchMethod):
ud.shallow = False
if ud.usehead:
# When usehead is set let's associate 'HEAD' with the unresolved
# rev of this repository. This will get resolved into a revision
# later. If an actual revision happens to have also been provided
# then this setting will be overridden.
for name in ud.names:
ud.unresolvedrev[name] = 'HEAD'
ud.unresolvedrev['default'] = 'HEAD'
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
@@ -247,7 +236,7 @@ class Git(FetchMethod):
ud.unresolvedrev[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.').replace(' ','_'))
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.'))
if gitsrcname.startswith('.'):
gitsrcname = gitsrcname[1:]
@@ -353,7 +342,7 @@ class Git(FetchMethod):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl = repourl[7:]
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, shlex.quote(repourl), ud.clonedir)
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, repourl, ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
progresshandler = GitProgressHandler(d)
@@ -365,8 +354,8 @@ class Git(FetchMethod):
if "origin" in output:
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --prune --progress %s refs/*:refs/*" % (ud.basecmd, repourl)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
@@ -389,50 +378,7 @@ class Git(FetchMethod):
if missing_rev:
raise bb.fetch2.FetchError("Unable to find revision %s even from upstream" % missing_rev)
if self._contains_lfs(ud, d, ud.clonedir) and self._need_lfs(ud):
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
# of all LFS blobs needed at the srcrev.
#
# It would be nice to just do this inline here by running 'git-lfs fetch'
# on the bare clonedir, but that operation requires a working copy on some
# releases of Git LFS.
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
try:
# Do the checkout. This implicitly involves a Git LFS fetch.
Git.unpack(self, ud, tmpdir, d)
# Scoop up a copy of any stuff that Git LFS downloaded. Merge them into
# the bare clonedir.
#
# As this procedure is invoked repeatedly on incremental fetches as
# a recipe's SRCREV is bumped throughout its lifetime, this will
# result in a gradual accumulation of LFS blobs in <ud.clonedir>/lfs
# corresponding to all the blobs reachable from the different revs
# fetched across time.
#
# Only do this if the unpack resulted in a .git/lfs directory being
# created; this only happens if at least one blob needed to be
# downloaded.
if os.path.exists(os.path.join(tmpdir, "git", ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/git/.git" % tmpdir)
finally:
bb.utils.remove(tmpdir, recurse=True)
def build_mirror_data(self, ud, d):
# Create as a temp file and move atomically into position to avoid races
@contextmanager
def create_atomic(filename):
fd, tfile = tempfile.mkstemp(dir=os.path.dirname(filename))
try:
yield tfile
umask = os.umask(0o666)
os.umask(umask)
os.chmod(tfile, (0o666 & ~umask))
os.rename(tfile, filename)
finally:
os.close(fd)
if ud.shallow and ud.write_shallow_tarballs:
if not os.path.exists(ud.fullshallow):
if os.path.islink(ud.fullshallow):
@@ -443,8 +389,7 @@ class Git(FetchMethod):
self.clone_shallow_local(ud, shallowclone, d)
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullshallow) as tfile:
runfetchcmd("tar -czf %s ." % tfile, d, workdir=shallowclone)
runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
runfetchcmd("touch %s.done" % ud.fullshallow, d)
finally:
bb.utils.remove(tempdir, recurse=True)
@@ -453,8 +398,7 @@ class Git(FetchMethod):
os.unlink(ud.fullmirror)
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullmirror) as tfile:
runfetchcmd("tar -czf %s ." % tfile, d, workdir=ud.clonedir)
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
def clone_shallow_local(self, ud, dest, d):
@@ -529,10 +473,7 @@ class Git(FetchMethod):
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
need_lfs = self._need_lfs(ud)
if not need_lfs:
ud.basecmd = "GIT_LFS_SKIP_SMUDGE=1 " + ud.basecmd
need_lfs = ud.parm.get("lfs", "1") == "1"
source_found = False
source_error = []
@@ -560,12 +501,12 @@ class Git(FetchMethod):
raise bb.fetch2.UnpackError("No up to date source found: " + "; ".join(source_error), ud.url)
repourl = self._get_repo_url(ud)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=destdir)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir)
if self._contains_lfs(ud, d, destdir):
if need_lfs and not self._find_git_lfs(d):
raise bb.fetch2.FetchError("Repository %s has LFS content, install git-lfs on host to download (or set lfs=0 to ignore it)" % (repourl))
elif not need_lfs:
else:
bb.note("Repository %s has LFS content but it is not being fetched" % (repourl))
if not ud.nocheckout:
@@ -618,28 +559,12 @@ class Git(FetchMethod):
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
return output.split()[0] != "0"
def _need_lfs(self, ud):
return ud.parm.get("lfs", "1") == "1"
def _contains_lfs(self, ud, d, wd):
"""
Check if the repository has 'lfs' (large file) content
"""
if not ud.nobranch:
branchname = ud.branches[ud.names[0]]
else:
branchname = "master"
# The bare clonedir doesn't use the remote names; it has the branch immediately.
if wd == ud.clonedir:
refname = ud.branches[ud.names[0]]
else:
refname = "origin/%s" % ud.branches[ud.names[0]]
cmd = "%s grep lfs %s:.gitattributes | wc -l" % (
ud.basecmd, refname)
cmd = "%s grep lfs HEAD:.gitattributes | wc -l" % (
ud.basecmd)
try:
output = runfetchcmd(cmd, d, quiet=True, workdir=wd)
if int(output) > 0:
@@ -659,11 +584,6 @@ class Git(FetchMethod):
"""
Return the repository URL
"""
# Note that we do not support passwords directly in the git urls. There are several
# reasons. SRC_URI can be written out to things like buildhistory and people don't
# want to leak passwords like that. Its also all too easy to share metadata without
# removing the password. ssh keys, ~/.netrc and ~/.ssh/config files can be used as
# alternatives so we will not take patches adding password support here.
if ud.user:
username = ud.user + '@'
else:
@@ -694,7 +614,7 @@ class Git(FetchMethod):
try:
repourl = self._get_repo_url(ud)
cmd = "%s ls-remote %s %s" % \
(ud.basecmd, shlex.quote(repourl), search)
(ud.basecmd, repourl, search)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd, repourl)
output = runfetchcmd(cmd, d, True)
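One difference in the build_mirror_data hunks above is whether the mirror and shallow tarballs are written through a small create_atomic() context manager, which writes to a temporary name and renames into place so a concurrent reader never sees a half-written tarball. A simplified, self-contained sketch of the same pattern (the original also adjusts permissions from the umask before renaming):

import os, tempfile
from contextlib import contextmanager

@contextmanager
def create_atomic(filename):
    fd, tfile = tempfile.mkstemp(dir=os.path.dirname(filename) or ".")
    os.close(fd)
    try:
        yield tfile                  # caller writes to the temporary path
        os.rename(tfile, filename)   # atomic replacement on the same filesystem
    finally:
        if os.path.exists(tfile):    # only left behind if the caller raised
            os.unlink(tfile)

with create_atomic("mirror.tar.gz") as tfile:
    with open(tfile, "wb") as f:
        f.write(b"tarball contents")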

View File

@@ -78,7 +78,7 @@ class GitSM(Git):
module_hash = ""
if not module_hash:
logger.debug("submodule %s is defined, but is not initialized in the repository. Skipping", m)
logger.debug(1, "submodule %s is defined, but is not initialized in the repository. Skipping", m)
continue
submodules.append(m)
@@ -143,43 +143,12 @@ class GitSM(Git):
try:
# Check for the nugget dropped by the download operation
known_srcrevs = runfetchcmd("%s config --get-all bitbake.srcrev" % \
(ud.basecmd), d, workdir=ud.clonedir)
(ud.basecmd), d, workdir=ud.clonedir)
if ud.revisions[ud.names[0]] in known_srcrevs.split():
return False
if ud.revisions[ud.names[0]] not in known_srcrevs.split():
return True
except bb.fetch2.FetchError:
pass
need_update_list = []
def need_update_submodule(ud, url, module, modpath, workdir, d):
url += ";bareclone=1;nobranch=1"
try:
newfetch = Fetch([url], d, cache=False)
new_ud = newfetch.ud[url]
if new_ud.method.need_update(new_ud, d):
need_update_list.append(modpath)
except Exception as e:
logger.error('gitsm: submodule update check failed: %s %s' % (type(e).__name__, str(e)))
need_update_result = True
# If we're using a shallow mirror tarball it needs to be unpacked
# temporarily so that we can examine the .gitmodules file
if ud.shallow and os.path.exists(ud.fullshallow) and not os.path.exists(ud.clonedir):
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
self.process_submodules(ud, tmpdir, need_update_submodule, d)
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
if len(need_update_list) == 0:
# We already have the required commits of all submodules. Drop
# a nugget so we don't need to check again.
runfetchcmd("%s config --add bitbake.srcrev %s" % \
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
if len(need_update_list) > 0:
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
# No srcrev nuggets, so this is new and needs to be updated
return True
return False
@@ -194,6 +163,9 @@ class GitSM(Git):
try:
newfetch = Fetch([url], d, cache=False)
newfetch.download()
# Drop a nugget to add each of the srcrevs we've fetched (used by need_update)
runfetchcmd("%s config --add bitbake.srcrev %s" % \
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=workdir)
except Exception as e:
logger.error('gitsm: submodule download failed: %s %s' % (type(e).__name__, str(e)))
raise
@@ -209,9 +181,6 @@ class GitSM(Git):
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, download_submodule, d)
# Drop a nugget for the srcrev we've fetched (used by need_update)
runfetchcmd("%s config --add bitbake.srcrev %s" % \
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
def unpack(self, ud, destdir, d):
def unpack_submodules(ud, url, module, modpath, workdir, d):
@@ -254,24 +223,3 @@ class GitSM(Git):
# up the configuration and checks out the files. The main project config should remain
# unmodified, and no download from the internet should occur.
runfetchcmd("%s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
def implicit_urldata(self, ud, d):
import shutil, subprocess, tempfile
urldata = []
def add_submodule(ud, url, module, modpath, workdir, d):
url += ";bareclone=1;nobranch=1"
newfetch = Fetch([url], d, cache=False)
urldata.extend(newfetch.expanded_urldata())
# If we're using a shallow mirror tarball it needs to be unpacked
# temporarily so that we can examine the .gitmodules file
if ud.shallow and os.path.exists(ud.fullshallow) and ud.method.need_update(ud, d):
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
subprocess.check_call("tar -xzf %s" % ud.fullshallow, cwd=tmpdir, shell=True)
self.process_submodules(ud, tmpdir, add_submodule, d)
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, add_submodule, d)
return urldata
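The gitsm hunks above revolve around a small bookkeeping trick: once the submodules for a revision have been fetched, that revision is recorded as a "nugget" under bitbake.srcrev in the bare clone's git config, and need_update() later reads the list back with --get-all to decide whether another network fetch is needed. A sketch of that round trip with plain git commands; the clone path and revision are placeholders:

import subprocess

def record_srcrev(clonedir, srcrev):
    subprocess.run(["git", "config", "--add", "bitbake.srcrev", srcrev],
                   cwd=clonedir, check=True)

def srcrev_already_fetched(clonedir, srcrev):
    # 'git config --get-all' exits non-zero when the key is unset, so no check=True
    out = subprocess.run(["git", "config", "--get-all", "bitbake.srcrev"],
                         cwd=clonedir, capture_output=True, text=True)
    return srcrev in out.stdout.split()

# record_srcrev("/path/to/clonedir", "0123abc...")
# srcrev_already_fetched("/path/to/clonedir", "0123abc...")  -> True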

View File

@@ -150,7 +150,7 @@ class Hg(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
@@ -160,7 +160,7 @@ class Hg(FetchMethod):
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether need pull
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug("Running %s", updatecmd)
logger.debug(1, "Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d, workdir=ud.moddir)
except bb.fetch2.FetchError:
@@ -168,7 +168,7 @@ class Hg(FetchMethod):
pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
logger.debug("Running %s", pullcmd)
logger.debug(1, "Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d, workdir=ud.moddir)
try:
@@ -183,14 +183,14 @@ class Hg(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", fetchcmd)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d, workdir=ud.pkgdir)
# Even when we clone (fetch), we still need to update as hg's clone
# won't check out the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug("Running %s", updatecmd)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d, workdir=ud.moddir)
def clean(self, ud, d):
@@ -247,9 +247,9 @@ class Hg(FetchMethod):
if scmdata != "nokeep":
proto = ud.parm.get('protocol', 'http')
if not os.access(os.path.join(codir, '.hg'), os.R_OK):
logger.debug2("Unpack: creating new hg repository in '" + codir + "'")
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug2("Unpack: updating source in '" + codir + "'")
logger.debug(2, "Unpack: updating source in '" + codir + "'")
if ud.user and ud.pswd:
runfetchcmd("%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull %s" % (ud.basecmd, ud.user, ud.pswd, proto, ud.moddir), d, workdir=codir)
else:
@@ -259,5 +259,5 @@ class Hg(FetchMethod):
else:
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d, workdir=codir)
else:
logger.debug2("Unpack: extracting source to '" + codir + "'")
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d, workdir=ud.moddir)

View File

@@ -17,7 +17,7 @@ import os
import urllib.request, urllib.parse, urllib.error
import bb
import bb.utils
from bb.fetch2 import FetchMethod, FetchError, ParameterError
from bb.fetch2 import FetchMethod, FetchError
from bb.fetch2 import logger
class Local(FetchMethod):
@@ -33,8 +33,6 @@ class Local(FetchMethod):
ud.basename = os.path.basename(ud.decodedurl)
ud.basepath = ud.decodedurl
ud.needdonestamp = False
if "*" in ud.decodedurl:
raise bb.fetch2.ParameterError("file:// urls using globbing are no longer supported. Please place the files in a directory and reference that instead.", ud.url)
return
def localpath(self, urldata, d):
@@ -54,18 +52,26 @@ class Local(FetchMethod):
return [path]
filespath = d.getVar('FILESPATH')
if filespath:
logger.debug2("Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
logger.debug(2, "Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
searched.extend(hist)
if (not newpath or not os.path.exists(newpath)) and path.find("*") != -1:
# For expressions using '*', best we can do is take the first directory in FILESPATH that exists
newpath, hist = bb.utils.which(filespath, ".", history=True)
searched.extend(hist)
logger.debug(2, "Searching for %s in path: %s" % (path, newpath))
return searched
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
logger.debug2("Defaulting to %s for %s" % (dldirfile, path))
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
return searched
return searched
def need_update(self, ud, d):
if ud.url.find("*") != -1:
return False
if os.path.exists(ud.localpath):
return False
return True
@@ -89,6 +95,9 @@ class Local(FetchMethod):
"""
Check the status of the url
"""
if urldata.localpath.find("*") != -1:
logger.info("URL %s looks like a glob and was therefore not checked.", urldata.url)
return True
if os.path.exists(urldata.localpath):
return True
return False
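The local fetcher hunks above differ mainly in glob handling: one side resolves a file://* URL by searching every FILESPATH directory (falling back to the first existing directory for '*' expressions), the other rejects globs outright with a ParameterError. A simplified stand-in for the search loop that bb.utils.which(..., history=True) performs here; the paths are illustrative:

import os

def which_with_history(filespath, relpath):
    tried = []
    for directory in filespath.split(":"):
        candidate = os.path.join(directory, relpath)
        tried.append(candidate)
        if os.path.exists(candidate):
            return candidate, tried
    return "", tried

# which_with_history("/layer/recipes-core/busybox/files:/layer/files", "defconfig")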

View File

@@ -29,8 +29,6 @@ from bb.fetch2.npm import npm_integrity
from bb.fetch2.npm import npm_localfile
from bb.fetch2.npm import npm_unpack
from bb.utils import is_semver
from bb.utils import lockfile
from bb.utils import unlockfile
def foreach_dependencies(shrinkwrap, callback=None, dev=False):
"""
@@ -189,9 +187,7 @@ class NpmShrinkWrap(FetchMethod):
proxy_ud = ud.proxy.ud[proxy_url]
proxy_d = ud.proxy.d
proxy_ud.setup_localpath(proxy_d)
lf = lockfile(proxy_ud.lockfile)
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
unlockfile(lf)
return returns
def verify_donestamp(self, ud, d):

View File

@@ -8,15 +8,12 @@ Based on the svn "Fetch" implementation.
"""
import logging
import os
import bb
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
logger = logging.getLogger(__name__)
class Osc(FetchMethod):
"""Class to fetch a module or modules from Opensuse build server
repositories."""
@@ -84,13 +81,13 @@ class Osc(FetchMethod):
Fetch url
"""
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
logger.debug("Running %s", oscupdatecmd)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d, workdir=ud.moddir)
else:
@@ -98,7 +95,7 @@ class Osc(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", oscfetchcmd)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d, workdir=ud.pkgdir)

View File

@@ -1,20 +1,6 @@
"""
BitBake 'Fetch' implementation for perforce
Supported SRC_URI options are:
- module
The top-level location to fetch while preserving the remote paths
The value of module can point to either a directory or a file. The result,
in both cases, is that the fetcher will preserve all file paths starting
from the module path. That is, the top-level directory in the module value
will also be the top-level directory in P4DIR.
- remotepath
If the value "keep" is given, the full depot location of each file is
preserved in P4DIR. This option overrides the effect of the module option.
"""
# Copyright (C) 2003, 2004 Chris Larson
@@ -31,36 +17,6 @@ from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class PerforceProgressHandler (bb.progress.BasicProgressHandler):
"""
Implements basic progress information for perforce, based on the number of
files to be downloaded.
The p4 print command will print one line per file, therefore it can be used
to "count" the number of files already completed and give an indication of
the progress.
"""
def __init__(self, d, num_files):
self._num_files = num_files
self._count = 0
super(PerforceProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(-1)
def write(self, string):
self._count = self._count + 1
percent = int(100.0 * float(self._count) / float(self._num_files))
# In case something goes wrong, we try to preserve our sanity
if percent > 100:
percent = 100
self.update(percent)
super(PerforceProgressHandler, self).write(string)
class Perforce(FetchMethod):
""" Class to fetch from perforce repositories """
def supports(self, ud, d):
@@ -90,51 +46,31 @@ class Perforce(FetchMethod):
p4port = d.getVar('P4PORT')
if p4port:
logger.debug('Using recipe provided P4PORT: %s' % p4port)
logger.debug(1, 'Using recipe provided P4PORT: %s' % p4port)
ud.host = p4port
else:
logger.debug('Trying to use P4CONFIG to automatically set P4PORT...')
logger.debug(1, 'Trying to use P4CONFIG to automatically set P4PORT...')
ud.usingp4config = True
p4cmd = '%s info | grep "Server address"' % ud.basecmd
bb.fetch2.check_network_access(d, p4cmd, ud.url)
ud.host = runfetchcmd(p4cmd, d, True)
ud.host = ud.host.split(': ')[1].strip()
logger.debug('Determined P4PORT to be: %s' % ud.host)
logger.debug(1, 'Determined P4PORT to be: %s' % ud.host)
if not ud.host:
raise FetchError('Could not determine P4PORT from P4CONFIG')
# Fetcher options
ud.module = ud.parm.get('module')
ud.keepremotepath = (ud.parm.get('remotepath', '') == 'keep')
if ud.path.find('/...') >= 0:
ud.pathisdir = True
else:
ud.pathisdir = False
# Avoid using the "/..." syntax in SRC_URI when a module value is given
if ud.pathisdir and ud.module:
raise FetchError('SRC_URI depot path cannot end in /... when a module value is given')
cleanedpath = ud.path.replace('/...', '').replace('/', '.')
cleanedhost = ud.host.replace(':', '.')
cleanedmodule = ""
# Merge the path and module into the final depot location
if ud.module:
if ud.module.find('/') == 0:
raise FetchError('module cannot begin with /')
ud.path = os.path.join(ud.path, ud.module)
# Append the module path to the local pkg name
cleanedmodule = ud.module.replace('/...', '').replace('/', '.')
cleanedpath += '--%s' % cleanedmodule
ud.pkgdir = os.path.join(ud.dldir, cleanedhost, cleanedpath)
ud.setup_revisions(d)
ud.localfile = d.expand('%s_%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, cleanedmodule, ud.revision))
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, ud.revision))
def _buildp4command(self, ud, d, command, depot_filename=None):
"""
@@ -159,20 +95,10 @@ class Perforce(FetchMethod):
pathnrev = '%s' % (ud.path)
if depot_filename:
if ud.keepremotepath:
# preserve everything, remove the leading //
filename = depot_filename.lstrip('/')
elif ud.module:
# remove everything up to the module path
modulepath = ud.module.rstrip('/...')
filename = depot_filename[depot_filename.rfind(modulepath):]
elif ud.pathisdir:
# Remove leading (visible) path to obtain the filepath
if ud.pathisdir: # Remove leading path to obtain filename
filename = depot_filename[len(ud.path)-1:]
else:
# Remove everything, except the filename
filename = depot_filename[depot_filename.rfind('/'):]
filename = filename[:filename.find('#')] # Remove trailing '#rev'
if command == 'changes':
@@ -208,7 +134,7 @@ class Perforce(FetchMethod):
for filename in p4fileslist:
item = filename.split(' - ')
lastaction = item[1].split()
logger.debug('File: %s Last Action: %s' % (item[0], lastaction[0]))
logger.debug(1, 'File: %s Last Action: %s' % (item[0], lastaction[0]))
if lastaction[0] == 'delete':
continue
filelist.append(item[0])
@@ -224,12 +150,10 @@ class Perforce(FetchMethod):
bb.utils.remove(ud.pkgdir, True)
bb.utils.mkdirhier(ud.pkgdir)
progresshandler = PerforceProgressHandler(d, len(filelist))
for afile in filelist:
p4fetchcmd = self._buildp4command(ud, d, 'print', afile)
bb.fetch2.check_network_access(d, p4fetchcmd, ud.url)
runfetchcmd(p4fetchcmd, d, workdir=ud.pkgdir, log=progresshandler)
runfetchcmd(p4fetchcmd, d, workdir=ud.pkgdir)
runfetchcmd('tar -czf %s p4' % (ud.localpath), d, cleanup=[ud.localpath], workdir=ud.pkgdir)
@@ -255,7 +179,7 @@ class Perforce(FetchMethod):
raise FetchError('Could not determine the latest perforce changelist')
tipcset = tip.split(' ')[1]
logger.debug('p4 tip found to be changelist %s' % tipcset)
logger.debug(1, 'p4 tip found to be changelist %s' % tipcset)
return tipcset
def sortable_revision(self, ud, d, name):

View File

@@ -47,7 +47,7 @@ class Repo(FetchMethod):
"""Fetch url"""
if os.access(os.path.join(d.getVar("DL_DIR"), ud.localfile), os.R_OK):
logger.debug("%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
repodir = d.getVar("REPODIR") or (d.getVar("DL_DIR") + "/repo")

View File

@@ -31,7 +31,8 @@ IETF secsh internet draft:
#
import re, os
from bb.fetch2 import check_network_access, FetchMethod, ParameterError, runfetchcmd
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
__pattern__ = re.compile(r'''
@@ -64,7 +65,7 @@ class SSH(FetchMethod):
def urldata_init(self, urldata, d):
if 'protocol' in urldata.parm and urldata.parm['protocol'] == 'git':
raise ParameterError(
raise bb.fetch2.ParameterError(
"Invalid protocol - if you wish to fetch from a git " +
"repository using ssh, you need to use " +
"git:// prefix with protocol=ssh", urldata.url)
@@ -104,7 +105,7 @@ class SSH(FetchMethod):
dldir
)
check_network_access(d, cmd, urldata.url)
bb.fetch2.check_network_access(d, cmd, urldata.url)
runfetchcmd(cmd, d)

View File

@@ -86,7 +86,7 @@ class Svn(FetchMethod):
if command == "info":
svncmd = "%s info %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
elif command == "log1":
svncmd = "%s log --limit 1 --quiet %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
svncmd = "%s log --limit 1 %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
@@ -116,7 +116,7 @@ class Svn(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
lf = bb.utils.lockfile(ud.svnlock)
@@ -129,7 +129,7 @@ class Svn(FetchMethod):
runfetchcmd(ud.basecmd + " upgrade", d, workdir=ud.moddir)
except FetchError:
pass
logger.debug("Running %s", svncmd)
logger.debug(1, "Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.moddir)
else:
@@ -137,7 +137,7 @@ class Svn(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", svncmd)
logger.debug(1, "Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.pkgdir)

View File

@@ -52,12 +52,6 @@ class WgetProgressHandler(bb.progress.LineFilterProgressHandler):
class Wget(FetchMethod):
# CDNs like CloudFlare may do a 'browser integrity test' which can fail
# with the standard wget/urllib User-Agent, so pretend to be a modern
# browser.
user_agent = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
"""Class to fetch urls via 'wget'"""
def supports(self, ud, d):
"""
@@ -88,7 +82,7 @@ class Wget(FetchMethod):
progresshandler = WgetProgressHandler(d)
logger.debug2("Fetching %s using command '%s'" % (ud.url, command))
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler, workdir=workdir)
@@ -214,7 +208,10 @@ class Wget(FetchMethod):
fetch.connection_cache.remove_connection(h.host, h.port)
raise urllib.error.URLError(err)
else:
r = h.getresponse()
try:
r = h.getresponse(buffering=True)
except TypeError: # buffering kw not supported
r = h.getresponse()
# Pick apart the HTTPResponse object to get the addinfourl
# object initialized properly.
@@ -303,7 +300,7 @@ class Wget(FetchMethod):
# Some servers (FusionForge, as used on Alioth) require that the
# optional Accept header is set.
r.add_header("Accept", "*/*")
r.add_header("User-Agent", self.user_agent)
r.add_header("User-Agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12")
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
@@ -326,19 +323,11 @@ class Wget(FetchMethod):
pass
except urllib.error.URLError as e:
if try_again:
logger.debug2("checkstatus: trying again")
logger.debug(2, "checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed: %s" % e)
return False
except ConnectionResetError as e:
if try_again:
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed: %s" % e)
logger.debug(2, "checkstatus() urlopen failed: %s" % e)
return False
return True
@@ -415,8 +404,9 @@ class Wget(FetchMethod):
"""
f = tempfile.NamedTemporaryFile()
with tempfile.TemporaryDirectory(prefix="wget-index-") as workdir, tempfile.NamedTemporaryFile(dir=workdir, prefix="wget-listing-") as f:
agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
fetchcmd = self.basecmd
fetchcmd += " -O " + f.name + " --user-agent='" + self.user_agent + "' '" + uri + "'"
fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
try:
self._runwget(ud, d, fetchcmd, True, workdir=workdir)
fetchresult = f.read()
@@ -472,7 +462,7 @@ class Wget(FetchMethod):
version_dir = ['', '', '']
version = ['', '', '']
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])*(\d+))")
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
s = dirver_regex.search(dirver)
if s:
version_dir[1] = s.group('ver')
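The only functional change in the dirver_regex hunk above is '*' versus '+' on the "digits plus separator" group: with '*' a bare number such as "v5" still yields a version, while '+' requires at least two dot/dash/underscore-separated components. A quick check, runnable as-is:

import re

star = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])*(\d+))")
plus = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")

for dirver in ("v5", "1.0.2", "beta"):
    m1, m2 = star.search(dirver), plus.search(dirver)
    print(dirver, m1.group("ver") if m1 else None, m2.group("ver") if m2 else None)
# v5     -> 5 with '*', None with '+' (a lone number has no separator)
# 1.0.2  -> 1.0.2 with both
# beta   -> None with both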

View File

@@ -119,181 +119,178 @@ warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def create_bitbake_parser():
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
def parseCommandLine(self, argv=sys.argv):
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--skip-setscene", action="store_true",
dest="skipsetscene", default=False,
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene")
parser.add_option("", "--skip-setscene", action="store_true",
dest="skipsetscene", default=False,
help="Skip setscene tasks if they would be executed. Tasks previously "
"restored from sstate will be kept, unlike --no-setscene")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate any running bitbake server.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate any running bitbake server.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
parser.add_option("", "--runall", action="append", dest="runall",
help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--runall", action="append", dest="runall",
help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--runonly", action="append", dest="runonly",
help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
return parser
parser.add_option("", "--runonly", action="append", dest="runonly",
help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = create_bitbake_parser()
options, targets = parser.parse_args(argv)
if options.quiet and options.verbose:
@@ -347,6 +344,8 @@ def bitbake_main(configParams, configuration):
except:
pass
configuration.setConfigParameters(configParams)
if configParams.server_only and configParams.remote_server:
raise BBMainException("FATAL: The '--server-only' option conflicts with %s.\n" %
("the BBSERVER environment variable" if "BBSERVER" in os.environ \
@@ -358,13 +357,13 @@ def bitbake_main(configParams, configuration):
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level > configParams.debug:
configParams.debug = level
if level > configuration.debug:
configuration.debug = level
bb.msg.init_msgconfig(configParams.verbose, configParams.debug,
configParams.debug_domains)
bb.msg.init_msgconfig(configParams.verbose, configuration.debug,
configuration.debug_domains)
server_connection, ui_module = setup_bitbake(configParams)
server_connection, ui_module = setup_bitbake(configParams, configuration)
# No server connection
if server_connection is None:
if configParams.status_only:
@@ -391,7 +390,7 @@ def bitbake_main(configParams, configuration):
return 1
def setup_bitbake(configParams, extrafeatures=None):
def setup_bitbake(configParams, configuration, extrafeatures=None):
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
if not configParams.status_only:
@@ -432,11 +431,11 @@ def setup_bitbake(configParams, extrafeatures=None):
logger.info("bitbake server is not running.")
lock.close()
return None, None
# we start a server with a given featureset
# we start a server with a given configuration
logger.info("Starting bitbake server...")
# Clear the event queue since we already displayed messages
bb.event.ui_queue = []
server = bb.server.process.BitBakeServer(lock, sockname, featureset, configParams.server_timeout, configParams.xmlrpcinterface)
server = bb.server.process.BitBakeServer(lock, sockname, configuration, featureset)
else:
logger.info("Reconnecting to bitbake server...")
@@ -459,17 +458,15 @@ def setup_bitbake(configParams, extrafeatures=None):
break
except BBMainFatal:
raise
except (Exception, bb.server.process.ProcessTimeout, SystemExit) as e:
# SystemExit does not inherit from the Exception class, needs to be included explicitly
except (Exception, bb.server.process.ProcessTimeout) as e:
if not retries:
raise
retries -= 1
tryno = 8 - retries
if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError, EOFError, SystemExit)):
if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError, EOFError)):
logger.info("Retrying server connection (#%d)..." % tryno)
else:
logger.info("Retrying server connection (#%d)... (%s)" % (tryno, traceback.format_exc()))
if not retries:
bb.fatal("Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log).")
bb.event.print_ui_queue()
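For readers skimming the setup_bitbake hunk above: both sides retry the server connection a fixed number of times, but one side also retries on SystemExit (which does not inherit from Exception, hence the explicit listing). A minimal sketch of the retry pattern with stand-in names, not the actual bitbake code:

    import logging
    import traceback

    logger = logging.getLogger("BitBake.Example")

    def connect_with_retries(connect, retries=8):
        while True:
            try:
                return connect()
            # SystemExit does not inherit from Exception, so it must be listed explicitly.
            except (Exception, SystemExit) as e:
                if not retries:
                    raise
                retries -= 1
                tryno = 8 - retries
                if isinstance(e, (TimeoutError, BrokenPipeError, EOFError, SystemExit)):
                    logger.info("Retrying server connection (#%d)..." % tryno)
                else:
                    logger.info("Retrying server connection (#%d)... (%s)" % (tryno, traceback.format_exc()))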


@@ -59,7 +59,7 @@ def getMountedDev(path):
pass
return None
def getDiskData(BBDirs):
def getDiskData(BBDirs, configuration):
"""Prepare disk data for disk space monitor"""
@@ -168,7 +168,7 @@ class diskMonitor:
BBDirs = configuration.getVar("BB_DISKMON_DIRS") or None
if BBDirs:
self.devDict = getDiskData(BBDirs)
self.devDict = getDiskData(BBDirs, configuration)
if self.devDict:
self.spaceInterval, self.inodeInterval = getInterval(configuration)
if self.spaceInterval and self.inodeInterval:
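As a hedged illustration of the data getDiskData() consumes (the exact value is site configuration, not part of this diff): BB_DISKMON_DIRS holds space-separated "action,directory,minimum-space,minimum-inodes" entries, for example:

    # Hypothetical conf value, parsed here purely to show the expected shape:
    bbdirs = "STOPTASKS,${TMPDIR},1G,100K ABORT,${TMPDIR},100M,1K"
    for entry in bbdirs.split():
        action, path, min_space, min_inodes = entry.split(",")
        print(action, path, min_space, min_inodes)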


@@ -14,7 +14,6 @@ import sys
import copy
import logging
import logging.config
import os
from itertools import groupby
import bb
import bb.event
@@ -147,12 +146,18 @@ class LogFilterLTLevel(logging.Filter):
#
loggerDefaultLogLevel = BBLogFormatter.NOTE
loggerDefaultVerbose = False
loggerVerboseLogs = False
loggerDefaultDomains = {}
def init_msgconfig(verbose, debug, debug_domains=None):
"""
Set default verbosity and debug levels config the logger
"""
bb.msg.loggerDefaultVerbose = verbose
if verbose:
bb.msg.loggerVerboseLogs = True
if debug:
bb.msg.loggerDefaultLogLevel = BBLogFormatter.DEBUG - debug + 1
elif verbose:
@@ -275,10 +280,10 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):
logconfig = copy.deepcopy(defaultconfig)
if userconfigfile:
with open(os.path.normpath(userconfigfile), 'r') as f:
with open(userconfigfile, 'r') as f:
if userconfigfile.endswith('.yml') or userconfigfile.endswith('.yaml'):
import yaml
userconfig = yaml.safe_load(f)
userconfig = yaml.load(f)
elif userconfigfile.endswith('.json') or userconfigfile.endswith('.cfg'):
import json
userconfig = json.load(f)
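Two details differ in the setLoggingConfig() hunk above: whether the path is normalised before opening, and whether YAML user configs are read with yaml.safe_load or yaml.load (safe_load refuses to instantiate arbitrary Python objects from the file). A minimal sketch of feeding a JSON user config through the stdlib, assuming the file is a standard logging dictConfig document:

    import json
    import logging.config

    userconfig = json.loads('{"version": 1, "loggers": {"BitBake.Example": {"level": "DEBUG"}}}')
    logging.config.dictConfig(userconfig)
    print(logging.getLogger("BitBake.Example").getEffectiveLevel())   # 10 (DEBUG)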


@@ -61,9 +61,17 @@ class _NamedTupleABCMeta(ABCMeta):
return ABCMeta.__new__(mcls, name, bases, namespace)
class _NamedTupleABC(metaclass=_NamedTupleABCMeta):
'''The abstract base class + mix-in for named tuples.'''
_fields = abstractproperty()
exec(
# Python 2.x metaclass declaration syntax
"""class _NamedTupleABC(object):
'''The abstract base class + mix-in for named tuples.'''
__metaclass__ = _NamedTupleABCMeta
_fields = abstractproperty()""" if version_info[0] < 3 else
# Python 3.x metaclass declaration syntax
"""class _NamedTupleABC(metaclass=_NamedTupleABCMeta):
'''The abstract base class + mix-in for named tuples.'''
_fields = abstractproperty()"""
)
_namedtuple.abc = _NamedTupleABC
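The hunk above shows two ways of declaring the same abstract base class: a plain Python 3 metaclass declaration, and an exec()-based form that also supports Python 2's __metaclass__ syntax. A small sketch of the distinction, with stand-in names:

    from abc import ABCMeta, abstractproperty

    class _ExampleMeta(ABCMeta):
        pass

    # Python 3 spelling, usable directly once Python 2 support is dropped:
    class ExampleABC(metaclass=_ExampleMeta):
        _fields = abstractproperty()

    # Python 2 instead needed "__metaclass__ = _ExampleMeta" inside the class body,
    # which is why the dual-version code builds the class source as a string and
    # exec()s the variant matching the running interpreter.
    print(type(ExampleABC).__name__)   # _ExampleMeta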


@@ -71,7 +71,7 @@ def update_mtime(f):
def update_cache(f):
if f in __mtime_cache:
logger.debug("Updating mtime cache for %s" % f)
logger.debug(1, "Updating mtime cache for %s" % f)
update_mtime(f)
def clear_cache():
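Several hunks in this compare swap between two logging call styles. A sketch of the difference (the numeric first argument is a BitBake-specific extension, not the stdlib signature; deeper verbosity in the other style goes through BitBake's logger.debug2() helper, visible in later hunks):

    import logging
    logger = logging.getLogger("BitBake.Example")

    # Plain stdlib-style call:
    logger.debug("Updating mtime cache for %s" % "example.bb")

    # BitBake-specific signature taking a numeric debug depth as the first argument:
    # logger.debug(1, "Updating mtime cache for %s" % "example.bb")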


@@ -34,7 +34,7 @@ class IncludeNode(AstNode):
Include the file and evaluate the statements
"""
s = data.expand(self.what_file)
logger.debug2("CONF %s:%s: including %s", self.filename, self.lineno, s)
logger.debug(2, "CONF %s:%s: including %s", self.filename, self.lineno, s)
# TODO: Cache those includes... maybe not here though
if self.force:
@@ -97,7 +97,6 @@ class DataNode(AstNode):
def eval(self, data):
groupd = self.groupd
key = groupd["var"]
key = key.replace(":", "_")
loginfo = {
'variable': key,
'file': self.filename,
@@ -208,7 +207,6 @@ class ExportFuncsNode(AstNode):
def eval(self, data):
for func in self.n:
func = func.replace(":", "_")
calledfunc = self.classname + "_" + func
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
@@ -246,14 +244,12 @@ class AddTaskNode(AstNode):
bb.build.addtask(self.func, self.before, self.after, data)
class DelTaskNode(AstNode):
def __init__(self, filename, lineno, tasks):
def __init__(self, filename, lineno, func):
AstNode.__init__(self, filename, lineno)
self.tasks = tasks
self.func = func
def eval(self, data):
tasks = data.expand(self.tasks).split()
for task in tasks:
bb.build.deltask(task, data)
bb.build.deltask(self.func, data)
class BBHandlerNode(AstNode):
def __init__(self, filename, lineno, fns):
@@ -309,7 +305,7 @@ def handleAddTask(statements, filename, lineno, m):
statements.append(AddTaskNode(filename, lineno, func, before, after))
def handleDelTask(statements, filename, lineno, m):
func = m.group(1)
func = m.group("func")
if func is None:
return
@@ -337,14 +333,11 @@ def finalize(fn, d, variant = None):
if not handlerfn:
bb.fatal("Undefined event handler function '%s'" % var)
handlerln = int(d.getVarFlag(var, "lineno", False))
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln, data=d)
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
bb.event.fire(bb.event.RecipePreFinalise(fn), d)
bb.data.expandKeys(d)
bb.event.fire(bb.event.RecipePostKeyExpansion(fn), d)
runAnonFuncs(d)
tasklist = d.getVar('__BBTASKS', False) or []
@@ -378,7 +371,7 @@ def _create_variants(datastores, names, function, onlyfinalise):
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND") or "").split()
for append in appends:
logger.debug("Appending .bbappend file %s to %s", append, fn)
logger.debug(1, "Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
onlyfinalise = d.getVar("__ONLYFINALISE", False)
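One line in the DataNode hunk above, key = key.replace(":", "_"), appears on only one side of the compare. My reading (an assumption, not stated in the diff) is that it maps variable names written with the colon override separator onto the underscore form, e.g.:

    key = "RDEPENDS:${PN}"
    print(key.replace(":", "_"))   # RDEPENDS_${PN}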


@@ -13,7 +13,7 @@
#
import re, bb, os
import bb.build, bb.utils, bb.data_smart
import bb.build, bb.utils
from . import ConfHandler
from .. import resolve_file, ast, logger, ParseError
@@ -22,11 +22,11 @@ from .ConfHandler import include, init
# For compatibility
bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$:]+)?\s*\(\s*\)\s*{$" )
__func_start_regexp__ = re.compile(r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
__export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
__addtask_regexp__ = re.compile(r"addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
__deltask_regexp__ = re.compile(r"deltask\s+(.+)")
__deltask_regexp__ = re.compile(r"deltask\s+(?P<func>\w+)(?P<ignores>.*)")
__addhandler_regexp__ = re.compile(r"addhandler\s+(.+)" )
__def_regexp__ = re.compile(r"def\s+(\w+).*:" )
__python_func_regexp__ = re.compile(r"(\s+.*)|(^$)|(^#)" )
@@ -60,7 +60,7 @@ def inherit(files, fn, lineno, d):
file = abs_fn
if not file in __inherit_cache:
logger.debug("Inheriting %s (from %s:%d)" % (file, fn, lineno))
logger.debug(1, "Inheriting %s (from %s:%d)" % (file, fn, lineno))
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
include(fn, file, lineno, d, "inherit")
@@ -233,15 +233,14 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if taskexpression.count(word) > 1:
logger.warning("addtask contained multiple '%s' keywords, only one is supported" % word)
# Check and warn for having a task with an expression as part of the task name
for te in taskexpression:
if any( ( "%s_" % keyword ) in te for keyword in bb.data_smart.__setvar_keyword__ ):
raise ParseError("Task name '%s' contains a keyword which is not recommended/supported.\nPlease rename the task not to include the keyword.\n%s" % (te, ("\n".join(map(str, bb.data_smart.__setvar_keyword__)))), fn)
ast.handleAddTask(statements, fn, lineno, m)
return
m = __deltask_regexp__.match(s)
if m:
# Check and warn "for deltask task1 task2"
if m.group('ignores'):
logger.warning('deltask ignored: "%s"' % m.group('ignores'))
ast.handleDelTask(statements, fn, lineno, m)
return
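The two __deltask_regexp__ patterns earlier in this file differ in what they accept after the keyword; the feeder code above warns when the stricter pattern leaves trailing text unused. A runnable comparison, written purely as an illustration:

    import re

    loose = re.compile(r"deltask\s+(.+)")                           # whole rest of the line
    strict = re.compile(r"deltask\s+(?P<func>\w+)(?P<ignores>.*)")  # one task, the rest ignored

    line = "deltask do_package do_package_qa"
    print(loose.match(line).group(1))                               # do_package do_package_qa
    m = strict.match(line)
    print(m.group("func"), "ignores", repr(m.group("ignores")))     # do_package ignores ' do_package_qa'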


@@ -20,7 +20,7 @@ from bb.parse import ParseError, resolve_file, ast, logger, handle
__config_regexp__ = re.compile( r"""
^
(?P<exp>export\s+)?
(?P<var>[a-zA-Z0-9\-_+.${}/~:]+?)
(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
@@ -95,7 +95,7 @@ def include_single_file(parentfn, fn, lineno, data, error_out):
if exc.errno == errno.ENOENT:
if error_out:
raise ParseError("Could not %s file %s" % (error_out, fn), parentfn, lineno)
logger.debug2("CONF file '%s' not found", fn)
logger.debug(2, "CONF file '%s' not found", fn)
else:
if error_out:
raise ParseError("Could not %s file %s: %s" % (error_out, fn, exc.strerror), parentfn, lineno)


@@ -12,14 +12,14 @@ currently, providing a key/value store accessed by 'domain'.
#
import collections
import contextlib
import functools
import logging
import os.path
import sqlite3
import sys
import warnings
from bb.compat import total_ordering
from collections import Mapping
import sqlite3
import contextlib
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
@@ -28,7 +28,7 @@ if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
logger = logging.getLogger("BitBake.PersistData")
@functools.total_ordering
@total_ordering
class SQLTable(collections.MutableMapping):
class _Decorators(object):
@staticmethod
@@ -248,7 +248,7 @@ class PersistData(object):
stacklevel=2)
self.data = persist(d)
logger.debug("Using '%s' as the persistent data cache",
logger.debug(1, "Using '%s' as the persistent data cache",
self.data.filename)
def addDomain(self, domain):


@@ -7,7 +7,6 @@ import signal
import subprocess
import errno
import select
import bb
logger = logging.getLogger('BitBake.Process')
@@ -42,7 +41,6 @@ class ExecutionError(CmdError):
self.exitcode = exitcode
self.stdout = stdout
self.stderr = stderr
self.extra_message = None
def __str__(self):
message = ""
@@ -53,7 +51,7 @@ class ExecutionError(CmdError):
if message:
message = ":\n" + message
return (CmdError.__str__(self) +
" with exit code %s" % self.exitcode + message + (self.extra_message or ""))
" with exit code %s" % self.exitcode + message)
class Popen(subprocess.Popen):
defaults = {
@@ -181,8 +179,5 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
stderr = stderr.decode("utf-8")
if pipe.returncode != 0:
if log:
# Don't duplicate the output in the exception if logging it
raise ExecutionError(cmd, pipe.returncode, None, None)
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
return stdout, stderr
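The run() hunk above differs in whether the captured stdout/stderr are attached to the raised ExecutionError when a log is already receiving the output. A hedged sketch of the idea, with made-up names:

    class CommandFailed(Exception):
        def __init__(self, cmd, exitcode, output=None):
            msg = "%s failed with exit code %d" % (cmd, exitcode)
            if output:
                msg += ":\n" + output      # only embed output if it is not already logged
            super().__init__(msg)

    def report_failure(cmd, exitcode, stdout, stderr, log=None):
        if log:
            raise CommandFailed(cmd, exitcode)   # avoid duplicating output that went to the log
        raise CommandFailed(cmd, exitcode, (stdout or "") + (stderr or ""))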


@@ -14,27 +14,7 @@ import bb.event
import bb.build
from bb.build import StdoutNoopContextManager
# from https://stackoverflow.com/a/14693789/221061
ANSI_ESCAPE_REGEX = re.compile(r'\x1B\[[0-?]*[ -/]*[@-~]')
def filter_color(string):
"""
Filter ANSI escape codes out of |string|, return new string
"""
return ANSI_ESCAPE_REGEX.sub('', string)
def filter_color_n(string):
"""
Filter ANSI escape codes out of |string|, returns tuple of
(new string, # of ANSI codes removed)
"""
return ANSI_ESCAPE_REGEX.subn('', string)
class ProgressHandler:
class ProgressHandler(object):
"""
Base class that can pretend to be a file object well enough to be
used to build objects to intercept console output and determine the
@@ -75,7 +55,6 @@ class ProgressHandler:
self._lastevent = ts
self._progress = progress
class LineFilterProgressHandler(ProgressHandler):
"""
A ProgressHandler variant that provides the ability to filter out
@@ -87,7 +66,7 @@ class LineFilterProgressHandler(ProgressHandler):
"""
def __init__(self, d, outfile=None):
self._linebuffer = ''
super().__init__(d, outfile)
super(LineFilterProgressHandler, self).__init__(d, outfile)
def write(self, string):
self._linebuffer += string
@@ -101,44 +80,41 @@ class LineFilterProgressHandler(ProgressHandler):
lbreakpos = line.rfind('\r') + 1
if lbreakpos:
line = line[lbreakpos:]
if self.writeline(filter_color(line)):
super().write(line)
if self.writeline(line):
super(LineFilterProgressHandler, self).write(line)
def writeline(self, line):
return True
class BasicProgressHandler(ProgressHandler):
def __init__(self, d, regex=r'(\d+)%', outfile=None):
super().__init__(d, outfile)
super(BasicProgressHandler, self).__init__(d, outfile)
self._regex = re.compile(regex)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def write(self, string):
percs = self._regex.findall(filter_color(string))
percs = self._regex.findall(string)
if percs:
progress = int(percs[-1])
self.update(progress)
super().write(string)
super(BasicProgressHandler, self).write(string)
class OutOfProgressHandler(ProgressHandler):
def __init__(self, d, regex, outfile=None):
super().__init__(d, outfile)
super(OutOfProgressHandler, self).__init__(d, outfile)
self._regex = re.compile(regex)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def write(self, string):
nums = self._regex.findall(filter_color(string))
nums = self._regex.findall(string)
if nums:
progress = (float(nums[-1][0]) / float(nums[-1][1])) * 100
self.update(progress)
super().write(string)
super(OutOfProgressHandler, self).write(string)
class MultiStageProgressReporter:
class MultiStageProgressReporter(object):
"""
Class which allows reporting progress without the caller
having to know where they are in the overall sequence. Useful
@@ -223,7 +199,6 @@ class MultiStageProgressReporter:
value is considered to be out of stage_total, otherwise it should
be a percentage value from 0 to 100.
"""
progress = None
if self._stage_total:
stage_progress = (float(stage_progress) / self._stage_total) * 100
if self._stage < 0:
@@ -232,10 +207,9 @@ class MultiStageProgressReporter:
progress = self._base_progress + (stage_progress * self._stage_weights[self._stage])
else:
progress = self._base_progress
if progress:
if progress > 100:
progress = 100
self._fire_progress(progress)
if progress > 100:
progress = 100
self._fire_progress(progress)
def finish(self):
if self._finished:
@@ -256,7 +230,6 @@ class MultiStageProgressReporter:
out.append('Up to finish: %d' % stage_weight)
bb.warn('Stage times:\n %s' % '\n '.join(out))
class MultiStageProcessProgressReporter(MultiStageProgressReporter):
"""
Version of MultiStageProgressReporter intended for use with
@@ -265,7 +238,7 @@ class MultiStageProcessProgressReporter(MultiStageProgressReporter):
def __init__(self, d, processname, stage_weights, debug=False):
self._processname = processname
self._started = False
super().__init__(d, stage_weights, debug)
MultiStageProgressReporter.__init__(self, d, stage_weights, debug)
def start(self):
if not self._started:
@@ -282,14 +255,13 @@ class MultiStageProcessProgressReporter(MultiStageProgressReporter):
MultiStageProgressReporter.finish(self)
bb.event.fire(bb.event.ProcessFinished(self._processname), self._data)
class DummyMultiStageProcessProgressReporter(MultiStageProgressReporter):
"""
MultiStageProcessProgressReporter that takes the calls and does nothing
with them (to avoid a bunch of "if progress_reporter:" checks)
"""
def __init__(self):
super().__init__(None, [])
MultiStageProcessProgressReporter.__init__(self, "", None, [])
def _fire_progress(self, taskprogress, rate=None):
pass
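One side of the progress.py hunks adds ANSI colour filtering before matching progress output; the regex is shown in the hunk itself. A runnable check of what it strips:

    import re

    ANSI_ESCAPE_REGEX = re.compile(r'\x1B\[[0-?]*[ -/]*[@-~]')

    def filter_color(string):
        return ANSI_ESCAPE_REGEX.sub('', string)

    print(filter_color("\x1b[32mdownloading: 42%\x1b[0m"))   # downloading: 42%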

Some files were not shown because too many files have changed in this diff.