Compare commits


83 Commits

Author SHA1 Message Date
Ross Burton
0ba0a534a3 glibc: add workaround for faccessat2 being blocked by seccomp filters
Older seccomp-based filters used in container frameworks will block faccessat2
calls as it's a relatively new syscall.  This isn't a big problem with
glibc <2.33 but 2.33 will call faccessat2 itself, get EPERM, and then be confused
about what to do as EPERM isn't an expected error code.

(From OE-Core rev: 4d6ad6d611834c2648d6bf9791cb8140967e2529)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-12 23:32:16 +00:00
Yoann Congal
f13c22358a npm.bbclass: avoid building target nodejs for native npm recipes
The current recipe unconditionally RDEPENDS on nodejs (the target one).
When building the -native variant of a "BBCLASSEXTEND native" recipe,
the target nodejs is unnecessarily built.

This patch fixes this by pulling in the nodejs RDEPENDS only when building for the target.
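
A minimal sketch of the idea in (pre-honister) BitBake override syntax; the
exact line in npm.bbclass may differ:

  # the target nodejs is only a runtime dependency of the target variant;
  # class-native and class-nativesdk variants skip it
  RDEPENDS_${PN}_append_class-target = " nodejs"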

(From OE-Core rev: 92a9a86df9e3bcffb13d2f8b5dcbe7822170f734)

Signed-off-by: Yoann Congal <yoann.congal@smile.fr>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-12 23:32:16 +00:00
Khem Raj
fa7db24367 security_flags.inc: Add same O<level> as in SELECTED_OPTIMIZATION
Adding -O can be troublesome in some packages where it may override the
-O<n> specified by CFLAGS. This can be due to configure processing
CFLAGS and munging it into new values in Makefiles, which are
constructed from the CC and CFLAGS passed by the bitbake environment. Problems
arise if the sequence is altered, which seems to be the case in some
packages e.g. ncurses, where the value from the CC variable is added last
and thus overrides the -O<n> coming from CFLAGS.

Therefore grok the value from SELECTED_OPTIMIZATION and append the
appropriate -O<level> flag to lcl_maybe_fortify so the level does not
change inadvertently.

Since we do not use -O0 anymore there is no point in checking for
DEBUG_BUILD: it uses -Og now, which works fine with
-D_FORTIFY_SOURCE=2, so check for optlevel -O0 instead.
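
An illustrative Python sketch (not the exact metadata code) of grokking the
level out of SELECTED_OPTIMIZATION:

  import re

  def selected_opt_level(selected_optimization):
      # the last -O flag on a compiler command line wins
      matches = re.findall(r"-O[0-9a-zA-Z]*", selected_optimization)
      return matches[-1] if matches else ""

  selected_opt_level("-fexpensive-optimizations -O2")  # -> "-O2"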

(From OE-Core rev: 9571a18f7d15b3bffafc2e277ab90a21d6763697)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-12 23:32:16 +00:00
Khem Raj
3c0919821b tcf-agent: Fix build on riscv32
LCL_STOP_SERVICES needs tcf/cpudefs-mdep.h ported

(From OE-Core rev: ed5e0de938469a7fa4e6cd725d9e0c8325d890d3)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-12 23:32:16 +00:00
akuster
d41e3e7d4c connman: update to 1.39
A bug-fix-only release that includes two security fixes:

CVE-2021-26675
CVE-2021-26676

Changelog:
- Fix issue with scanning state synchronization and iwd.
- Fix issue with invalid key with 4-way handshake offloading.
- Fix issue with DNS proxy length checks to prevent buffer overflow.
- Fix issue with DHCP leaking stack data via uninitialized variable.

[Yocto #14231]

(From OE-Core rev: eb20fd47d738f469f7bbeb4b8d85040f9163722b)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-12 23:32:16 +00:00
Richard Purdie
064e203987 pseudo: Update for rename and faccessat fixes
Pull in:

  ports/rename/renameat: Avoid race when renaming files
  ports/unix: Add faccessat and faccessat2
  ports/access.c: Use EACCES, not EPERM

which includes a fix for rename race issues causing pseudo aborts.

(From OE-Core rev: 330c232e4f756296331f9026e91ac26fd45f0315)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-12 23:29:51 +00:00
Dorinda
9294bc4bb4 oe-pkgdata-util: Check if environment script is initialized
Tinfoil doesn't behave well if the environment is not initialized; this check ensures a proper error log in that case.

[YOCTO #12096]

(From OE-Core rev: e88073e16f1b4cfd0f97c81a988640a84adad674)

Signed-off-by: Dorinda Bassey <dorindabassey@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:55 +00:00
Robert Rosengren
c2580d3a78 mpg123: Add support for FPU-less targets
Support added to configure mpg123 for FPU-less targets. Building for
fixed-point arithmetic increases performance on such devices.

(From OE-Core rev: 55a65571d19407befd3c2d152680573d7318c279)

Signed-off-by: Robert Rosengren <robert.rosengren@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:55 +00:00
Richard Purdie
4094b0bd50 opkg: Fix patch glitches
The original patch contained some text which shouldn't have been there
and used brackets in configure which isn't a great idea. Tweak the patch
to resolve this.

(From OE-Core rev: 63cbf187fe189c99645fe3afee8a6361a9a32cdc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:55 +00:00
Wang Mingyu
7a89d2d2ed parted: upgrade 3.3 -> 3.4
0001-Move-python-helper-scripts-used-only-in-tests-to-Pyt.patch
0001-libparted-fs-add-sourcedir-lib-to-include-paths.patch
0002-tests-use-skip_-rather-than-skip_test_-which-is-unde.patch
removed since they are included in 3.4

Add python3-core to RDEPENDS_parted-ptest
since /usr/lib/parted/ptest/tests/msdos-overlap contained in package parted-ptest requires /usr/bin/python3

(From OE-Core rev: c7d7e5f8177cdebd580ca7ff8f4412596567aff1)

Signed-off-by: Wang Mingyu <wangmy@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:55 +00:00
akuster
ede47ac147 documentation.conf: add both CVE_CHECK_LAYER_*
(From OE-Core rev: cac7f890f6b223006c1c290e76b9d575b729d87d)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:55 +00:00
Richard Purdie
0b9711efcb Fix up bitbake logging compatibility
Bitbake changed the debug() logging call to make it compatible with
standard python logging by no longer including a debug level as the
first argument. Fix up the few places this was being used.

Tweaked version of a patch from Joshua Watt.

(From OE-Core rev: 5aecb6df67b876aa12eec54998f209d084579599)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:55 +00:00
Richard Purdie
7f4d057585 sanity.conf: Increase minimum bitbake version due to logging function change
(From OE-Core rev: 8ddfab7b185dbba171afce80260e5638eb06a769)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:55 +00:00
Richard Purdie
e1691ae855 bitbake: bitbake: Bump release to 1.49.1
(Bitbake rev: 9f23fa605c542a705d00c6c263491899d55bb0d9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Joshua Watt
75f87db413 bitbake: logging: Make bitbake logger compatible with python logger
The bitbake logger overrode the definition of the debug() logging call
to include a debug level, but this causes problems with code that may
be using standard python logging, since the extra argument is
interpreted differently.

Instead, change the bitbake loggers debug() call to match the python
logger call and add a debug2() and debug3() API to replace calls that
were logging to a different debug level.
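
In practice the migration is mechanical, as the diffs below show; calls that
passed an explicit level move to the matching helper:

  # old bitbake-specific form: first argument is the debug level
  logger.debug(1, "Executing task %s", task)
  logger.debug(2, "Zero size logfn %s, removing", logfn)

  # new form: standard python signature for level 1, helpers for deeper levels
  logger.debug("Executing task %s", task)
  logger.debug2("Zero size logfn %s, removing", logfn)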

[RP: Small fix to ensure bb.debug calls bbdebug()]
(Bitbake rev: f68682a79d83e6399eb403f30a1f113516575f51)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Richard Purdie
7283a0b3b6 bitbake: bblayers/action: When adding layers, catch BBHandledException
When adding a layer, a parse error can occur, raising BBHandledException.
Catch this and show an error, aborting the layer add to meet user expectations.

[YOCTO #14054]

(Bitbake rev: ceddb5b3d229b83c172656053cd29aeb521fcce0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Richard Purdie
df90345d33 bitbake: cooker: Ensure reparsing is handled correctly
From tinfoil, if you edit bblayers.conf and break it, then call
parseConfiguration (e.g. by adding a bad layer with bitbake-layers),
the system doesn't show any parse error even though it should.

Add in a call to the updateCache function so that things really
are reparsed when requested.

Partially fixes [YOCTO #14054]

(Bitbake rev: e655f9361b9c3b77906b8e06b5cc76bc5180640e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Tomasz Dziendzielski
a30a06d9c1 bitbake: BBHandler: Don't classify shell functions whose names start with "python*" as python functions
If a shell function name starts with 'python' or 'fakeroot', the parser wrongly
assumes it's a python/fakeroot function.

[YOCTO #14204]

Use regex lookahead assertions to check if 'python' expression is
followed by whitespace or '(' and if 'fakeroot' is followed by
whitespace.
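
A small self-contained illustration of the lookahead idea (simplified from
the real BBHandler pattern):

  import re

  # 'python'/'fakeroot' only count as keywords when the (?=...) lookahead
  # sees the right delimiter after them
  keyword = re.compile(r"python(?=\s|\()|fakeroot(?=\s)")

  assert keyword.match("python do_foo() {")        # real python function
  assert keyword.match("fakeroot do_install() {")  # real fakeroot function
  assert not keyword.match("python_foo() {")       # plain shell function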

(Bitbake rev: b07b226d5d1b3acd3f76d8365bc8002293365999)

Signed-off-by: Tomasz Dziendzielski <tomasz.dziendzielski@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Paul Barker
40ecec326e bitbake: hashserv: Add get-outhash message
The get-outhash message can be sent via the get_outhash client method.
This works in a similar way to the get message but looks up a db entry
by outhash rather than by taskhash. It is intended to be used as a
read-only form of the report message.

As both handle_get_outhash and handle_report use the same query string
we can factor this out.
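
A hedged sketch of the client side; the method name is from the description
above, but the exact signature here is an assumption:

  import hashserv

  client = hashserv.create_client("localhost:8686")
  # look up an existing entry by outhash; unlike report, this is read-only.
  # method/outhash/taskhash are placeholders for previously reported values
  entry = client.get_outhash(method, outhash, taskhash)  # signature assumed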

(Bitbake rev: dc19606ada29a4d8afde4fcecd8ec986b47b867e)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Paul Barker
73160aac06 bitbake: hashserv: server: Support searching upstream for outhash
Use the new get-outhash message to perform a read-only query against an
upstream server (if present) when a reported taskhash/outhash
combination is not found in the current database. If a matching entry is
found upstream it is copied into the current database so it can be found
by future queries.

(Bitbake rev: 2be4f7f0d2ccb09917398289e8140e1467e84bb2)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Paul Barker
94f34b951b bitbake: hashserv: Add short forms of remaining command line arguments
Short form arguments are added for convenience.

(Bitbake rev: 921199a4923ce383b27e23c9b7e34eb785c8bae3)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Paul Barker
44176bd385 bitbake: hashserv: Support upstream command line argument
The hashserv server already implements support for pulling hash data
from another "upstream" server. Add the -u/--upstream argument to the
bitbake-hashserv app to expose this functionality to users.

(Bitbake rev: 8de510f1de35e581bcd5858edf23619c6a4cf923)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Paul Barker
3b559bb16d bitbake: hashserv: Support read-only server
The -r/--read-only argument is added to the bitbake-hashserv app. If this
argument is given then clients may only perform read operations against
the server. The read-only mode is implemented by simply not installing
handlers for write operations; this keeps the permission model simple
and reduces the risk of accidentally allowing write operations.

As a sqlite database can be safely opened by multiple processes in
parallel, it's possible to start two hashserv instances against a single
database if you wish to export both a read-only port and a read-write
port.
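
For example (using the -r/--read-only and other options added in the
bitbake-hashserv diff below), one database can back both instances:

  # trusted builders report hashes to the read-write instance
  bitbake-hashserv --bind 127.0.0.1:8687 --database ./hashserv.db
  # a second instance exports the same database read-only
  bitbake-hashserv --bind 127.0.0.1:8686 --database ./hashserv.db --read-only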

(Bitbake rev: 492bb02eb0e071c792407ac3113f92492da1a9cc)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-10 23:48:16 +00:00
Alexander Kanavin
f5188da2f1 u-boot: upgrade 2020.10 -> 2021.01
tools/binman/binman needs python3-setuptools now.

(From OE-Core rev: cb896051a7e7b25c02fb40aa8a422d3e5580dd34)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 11:22:54 +00:00
Lee Chee Yang
34ea1433fc wic: debug mode to keep tmp directory
Files in the wic tmp directory can be useful for debugging, so do not remove
the tmp directory when wic create is run in debug mode (-D or --debug).
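
For example (image name is a placeholder):

  wic create wictestdisk -e core-image-minimal -D

With -D/--debug the temporary working directory is now left in place for
inspection instead of being deleted.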

Also update wic.Wic.test_debug_short and wic.Wic.test_debug_long to
check for the tmp directory.

[YOCTO#14216]

(From OE-Core rev: a122e2418b67d38f691edcf8dd846c167d6b4fa9)

Signed-off-by: Lee Chee Yang <Chee.Yang.Lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:11 +00:00
Khem Raj
7747c98bf2 autoconf: Fix typo for prefuncs
(From OE-Core rev: c64b06296b378e99cde489583c97b7d7edba4f88)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:11 +00:00
Chee Yang Lee
d6b4d57e62 initrdscripts: init-install-efi.sh install extra files for ESP
Currently the install script copy only few hard coded item while
setting up target ESP, kernel artifacts, all .efi in EFI/BOOT,
grub & boot cfg and loader.conf.
While ESP can be much complex, eg: contain multiple initrd.

Add a ESP folder to carry any other files to setup onto ESP.

(From OE-Core rev: 6eaca9cf20c42501fba27dea3a6446bad948e859)

Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:11 +00:00
akuster
73befa8f41 cve-check: add include/exclude layers
There are times when excluding or including a layer
may be desired. This provides the framework for that via
two variables. The default is all layers in bblayers.

CVE_CHECK_LAYER_INCLUDELIST
CVE_CHECK_LAYER_EXCLUDELIST
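
For example, in local.conf (layer/collection names here are placeholders):

  CVE_CHECK_LAYER_INCLUDELIST = "core meta-mylayer"
  CVE_CHECK_LAYER_EXCLUDELIST = "meta-vendor-bsp"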

(From OE-Core rev: 5fdde65ef58b4c1048839e4f9462b34bab36fc22)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
akuster
29e280f7ee cve-check.bbclass: add layer to cve log
Let's include which layer a package belongs to and
add it to the CVE logs.

(From OE-Core rev: 00d965bb42dc427749a4c3985af56ceffff80457)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
zhengruoqin
93775123c5 python3-packaging: upgrade 20.8 -> 20.9
20.9 - 2021-01-29
~~~~~~~~~~~~~~~~~

* Run [isort](https://pypi.org/project/isort/) over the code base (:issue:`377`)
* Add support for the ``macosx_10_*_universal2`` platform tags (:issue:`379`)
* Introduce ``packaging.utils.parse_wheel_filename()`` and ``parse_sdist_filename()``
  (:issue:`387` and :issue:`389`)

(From OE-Core rev: 6199c71030d527c57a1cb8496a377afb503d7670)

Signed-off-by: Zheng Ruoqin <zhengrq.fnst@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Wang Mingyu
11d23de584 libdrm: upgrade 2.4.103 -> 2.4.104
Add 0001-meson-Also-search-for-rst2man.py.patch to fix a bug where the program rst2man cannot be found.

Add dependency python3-docutils-native to manpages.

(From OE-Core rev: 600b75cef2807ccb3f89f24c77d02cd6e0a99e1f)

Signed-off-by: Wang Mingyu <wangmy@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Wang Mingyu
485af44c14 bind: upgrade 9.16.10 -> 9.16.11
Rename the directory of patches.
License-Update: Copyright year updated to 2021.

(From OE-Core rev: 316f9602c633fdf52009b4567ccf598d1c716acd)

Signed-off-by: Wang Mingyu <wangmy@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
c1335d558f spirv-tools: correct version check
(From OE-Core rev: e4ef9eaea1e05975bd09b838e6ba35cc56da37d6)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
2f82cf5a42 shaderc: correct version check
(From OE-Core rev: 4e22a84e0482d8c56942acd0243c94f20484ffef)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
9b106a2f0b at: correct upstream version check
(From OE-Core rev: 0e2dfa9f7904db32a14c09b1d451382a4c91f85d)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
b85d2a236c tar: update 1.32 -> 1.33
Drop musl fix as upstream fixed the issue.

(From OE-Core rev: 9ac95af964876752e7dae819f5b678ae4b510064)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
95204ac052 libhandy: upgrade 1.0.2 -> 1.0.3
(From OE-Core rev: 97acf2c86b7496385eabf57d5e21dae835a45e6b)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
bf34676143 dpkg: update 1.20.5 -> 1.20.7.1
(From OE-Core rev: b13ebb89b63a8a7d1c5d688c72c4aa4f54088963)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
d8fe22e05f vulkan-samples: update to latest revision
Drop patch merged upstream.

(From OE-Core rev: 4ca7c5435a379160fb9ac2d2d9d7aa5550632f65)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
e789354511 ruby: update 2.7.2 -> 3.0.0
Drop 0001-Modify-shebang-of-libexec-y2racc-and-libexec-racc2y.patch
as files removed upstream.

License-Update: formatting

Drop autoconf270.patch, as no longer needed with 3.0.0
(I verified against master-next which has the new autoconf).

(From OE-Core rev: 8fbf04053845aac24e0c0f1395051b60294e02a3)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
612c8b8b12 python3-setuptools: update 51.0.0 -> 52.0.0
easy_install script removed upstream:
https://github.com/pypa/setuptools/blob/v52.0.0/CHANGES.rst

Tarballs are now provided instead of zip files.

License-Update: formatting

(From OE-Core rev: 131105f94c8de1f087e8bd6e3e76a5c38962ae7d)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
cdd670b3cf gptfdisk: update 1.0.5 -> 1.0.6
(From OE-Core rev: 124416ee6ff3228101f7b4423b6a5581a096cae1)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
Alexander Kanavin
35c4fbe550 distcc: update 3.3.3 -> 3.3.5
(From OE-Core rev: e7521584b4acfc1ffa612f0167cef53eab967bcc)

Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-09 08:56:10 +00:00
zhengruoqin
6131fb0e1b sqlite3: upgrade 3.34.0 -> 3.34.1
(From OE-Core rev: d26f5601d0cfe15cf9ef953e33e5e36e1b58e915)

Signed-off-by: Zheng Ruoqin <zhengrq.fnst@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-08 14:47:16 +00:00
Khem Raj
9c7f1052f0 systemd: Fix build on musl
include "missing_stdlib.h" is needed for strndupa()

(From OE-Core rev: 87c9ed35fce8c9358d8a5dda20ece0a46cbff325)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-08 14:47:16 +00:00
Oleksandr Kravchuk
c43b253bc5 ell: update to 0.37
Changelog:
- Fix issue with D-Bus filter messages with no interfaces set.
- Add support for PKCS#12 certification loading.

(From OE-Core rev: a522b528170291264a1dd5293840bec7cdfa7311)

Signed-off-by: Oleksandr Kravchuk <open.source@oleksandr-kravchuk.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-08 14:47:16 +00:00
Oleksandr Kravchuk
bf64a62bfd inetutils: update to 2.0
Removed upstreamed patches and refreshed a few others.

(From OE-Core rev: a21e8fdf1b66961ddae5929d393daa08800bb748)

Signed-off-by: Oleksandr Kravchuk <open.source@oleksandr-kravchuk.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-08 14:47:16 +00:00
Jose Quaresma
c5d80f154d selftest/reproducible: remove spirv-tools-dev from exclusion list
(From OE-Core rev: ecb156fa391b29c6b317abb7bb126a36d709be6a)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-08 14:47:16 +00:00
Jose Quaresma
f5ef0e12d0 spirv-tools: fix reproducible
- remove build host path in the cmake dev file to fix spirv-tools-dev reproducibility
  https://autobuilder.yocto.io/pub/repro-fail/oe-reproducible-20210125-8161_obd/packages/diff-html/

(From OE-Core rev: 7795a919f127b5fde5eb2049ec4e1e22f16bfee7)

Signed-off-by: Jose Quaresma <quaresma.jose@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-08 14:47:16 +00:00
Khem Raj
525493e3ef security_flags.inc: Use -O with -D_FORTIFY_SOURCE
The compiler can only use fortify options when some level of optimization is
enabled, otherwise it ends up emitting warnings:

warning: _FORTIFY_SOURCE requires compiling with optimization (-O) [-W#warnings]

This is usually OK, since -O<level> would be added via CFLAGS to the
compiler cmdline in normal compile stages. However, during configure
there are problems when CC, CPP and CXX are probed alone in configure tests,
which results in the above warning; this confuses the configure results, and
autotools 2.70+ detects it as an error, e.g.

configure:17292: error: C preprocessor "riscv32-yoe-linux-clang -target riscv32-yoe-linux      -mlittle-endian -mno-relax -Qunused-arguments -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/mnt/b/yoe/master/build/tmp/work/riscv32-yoe-linux/ndpi/3.4-r0/recipe-sysroot -E" fails sanity check
See `config.log' for more details

Therefore adding a -O (which actually is -O1) to lcl_maybe_fortify
means we can properly pass these configure tests, and the real -O<level> will
still override the -O added here, so overall behavior improves.

(From OE-Core rev: b6113dd68caa46d56cf3c8293119f2b9d8b137fd)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-08 14:47:16 +00:00
Michael Halstead
88cb39cc3c yocto-uninative.inc: version 2.11 updates glibc to 2.33
Support glibc 2.33.

(From OE-Core rev: 5c7f963d395aa4a94d78c37883488baac471ea43)

Signed-off-by: Michael Halstead <mhalstead@linuxfoundation.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Ross Burton
20f926f086 autotools: no need to depend on gnu-config
autoconf 2.70 onwards installs its own copies of config.guess/config.sub
which we keep up to date when autoconf builds, so there's no need to
depend on gnu-config for those files.

(From OE-Core rev: 332145c34b4aac2e74a713070af25414e1fd8c9c)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Ross Burton
bea9c8b988 autotools: remove intltoolize logic
autoconf 2.70 now invokes intltoolize, so there's no need to do it again
in this class.

(From OE-Core rev: e24ac6605aeaae42475d3f753dc9452093af5a14)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Ross Burton
66dab7e17b autotools: disable gtkdocize for now
This breaks kmod, so for now we can continue to do it ourselves.

(From OE-Core rev: 628e0263e3bb768ea771d0e0260fdb18e16c871e)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Ross Burton
bdad4025c7 autoconf: upgrade to 2.71
After too many years, autoconf has made a new release.  On the whole it
is compatible with previous releases, but some macros are more specific
about what they expose so minor tweaks to configure.ac may be required.

autoconf also now invokes intltoolize, gtkdocize, and copies
config.sub/guess, so there is less work for autotools.bbclass to do.

Drop a number of patches which have been integrated upstream:
- AC_HEADER_MAJOR-port-to-glibc-2.25.patch
- add_musl_config.patch
- autoconf-replace-w-option-in-shebangs-with-modern-use-warnings.patch
- autoreconf-gnuconfigize.patch
- check-automake-cross-warning.patch
- config_site.patch
- fix_path_xtra.patch
- performance.patch

- man-host-perl.patch
Don't use the target perl path when building documentation at build time.

- no-man.patch
Don't build documentation in native builds to avoid further build
dependencies.

(From OE-Core rev: f5dd2e0acbb0aa4079c51aaeab8c26e743a4c714)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Ross Burton
1dbef3ce98 autotools: don't warn about obsolete usage
New autoconf warns about obsolete macro usage, but there is quite
a lot of obsolete usage in the wild which isn't really our problem.

(From OE-Core rev: a152b5a37aec247b0540b82ad6c9bdc20c532d21)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Ross Burton
c9299cea83 autoconf: merge .bb and .inc files
These files are split for historical reasons, so merge them to make
maintaining them easier.

The .bb and .inc files had differing LICENSE assignments. Current autoconf is
GPLv3+.

(From OE-Core rev: 192f635fa6964213e771c0b1443b2c15863b3d57)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Ross Burton
915a6afc3c gnu-config: update to latest commit
Update gnu-config to the latest upstream commit.

(From OE-Core rev: 37c088759218909acbd06a3a935c7ff99ff2fcd5)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Richard Purdie
1729dcf527 apr: Fix to work with autoconf 2.70
Fix an issue with autoconf 2.70 where duplicate macro includes
caused configure failures.

(From OE-Core rev: 4e5d7c86a8a5e752df451d988861a86236e8c8ff)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Ross Burton
e1ab06f14b unfs3: fix build with new autoconf
(From OE-Core rev: d6327189d2e86f0647a2cf11bc3dc3effa51a55d)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Ross Burton
43421f57bc Revert "lrzsz: Fix to work with autoconf 2.70"
This change was only needed with 2.70; it is not needed with 2.71.

This reverts commit 36aef08dcd.

(From OE-Core rev: 37362d8bdbec17a676af41b13683efd17c0cef50)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Michael Halstead
987aa92017 uninative: Upgrade to 2.10
Final glibc 2.32 based uninative.

(From OE-Core rev: 8b5d932a42ce9e3e801837bea9cf319c455d9ae5)

Signed-off-by: Michael Halstead <mhalstead@linuxfoundation.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-07 22:26:42 +00:00
Richard Purdie
2ffbb020fe bitbake: bitbake-worker: Try and avoid potential short write events issues
We're seeing occasional issues where builds fail as events were written from the
worker children in the form <event>partial data<event>full event</event>.

This causes failures as the bitbake server can't parse that and exits. This could
be due to short writes to the worker event pipe, which we weren't checking. Check
this and loop accordingly. Also add some asserts to detect other potential causes.
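
The core of the fix (visible in the bitbake-worker diff below) is simply to
retry until the whole pickled event has been written:

  data = b"<event>" + pickle.dumps(event) + b"</event>"
  while(len(data)):
      written = worker_pipe.write(data)
      data = data[written:]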

Thanks to Joshua Watt for help in spotting the issue.

[YOCTO #14181]

(Bitbake rev: a9451746a4bd7ccedf4c72cd03ad4ff0ab0143aa)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:12:00 +00:00
Paul Barker
adcd9608a7 bitbake: hashserv: client: Fix handling of null responses
If the server returns an empty response ("null" in json), this cannot
be iterated to check for the presence of the "chunk-stream" key.

(Bitbake rev: bf75370bcd6d02ed08cd959eec6190196b792515)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:12:00 +00:00
Paul Barker
a8a468efac bitbake: bitbake-hashclient: Remove obsolete call to client.connect
The connect function was previously removed from the hashserv Client
class but the bitbake-hashclient app was not updated. The client is
connected during hashserv.create_client() anyway so no separate connect
call is needed.

(Bitbake rev: 99bdb236bceeffa0083a0fa529280b217c1d310d)

Signed-off-by: Paul Barker <pbarker@konsulko.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:12:00 +00:00
Tomasz Dziendzielski
1d2fe91db5 bitbake: lib/bb: Don't treat mc recipe (Midnight Commander) as a multiconfig target
When we run `devtool build mc`, the recipe's task dependencies are expanded
to "mc:do_populate_sysroot", where "mc" is treated as the multiconfig marker
and "do_populate_sysroot" as the multiconfig name.

| ERROR: Multiconfig dependency mc:do_populate_sysroot depends on
| nonexistent multiconfig configuration named do_populate_sysroot

(Bitbake rev: 3ce4b2caccfe608a54dff159459f3687ea610597)

Signed-off-by: Tomasz Dziendzielski <tomasz.dziendzielski@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:12:00 +00:00
Mike Looijmans
2fcbd0f115 license_image.bbclass: Don't attempt to symlink to the same file
Sometimes (that is, in all my builds) the lic_manifest_dir and
lic_manifest_symlink_dir end up pointing to the same file, resulting
in an error like this:
  Exception: FileExistsError: [Errno 17] File exists: '/.../tmp-glibc/deploy/licenses/my-image-tdkz15' -> '/.../tmp-glibc/deploy/licenses/my-image-tdkz15'

First check to see if this is the case before attempting to create
the link.
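
A minimal sketch of the check (variable names approximate the class code):

  import os

  # only create the symlink when source and target don't already
  # resolve to the same path
  if os.path.realpath(lic_manifest_dir) != os.path.realpath(link_name):
      os.symlink(lic_manifest_dir, link_name)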

(From OE-Core rev: 50f83fb542065eaf7a20ac07b63ae06441ada180)

Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:03:00 +00:00
Martin Jansa
c0ec68956c image_types.bbclass: tar: use posix format instead of gnu
* gnu isn't compatible with --xattrs used e.g. here:
d3a832f66e/classes/image_types_ostree.bbclass (L16)
causing do_image_tar to fail with:

| tar: --xattrs can be used only on POSIX archives
| Try 'tar --help' or 'tar --usage' for more information.

* https://www.gnu.org/software/tar/manual/html_chapter/tar_8.html
  says about posix format:

  This is the most flexible and feature-rich format.
  It does not impose any restrictions on file sizes or file name lengths.
  This format is quite recent, so not all tar implementations are able to handle it properly.
  However, this format is designed in such a way that any tar implementation able to read `ustar'
  archives will be able to read most `posix' archives as well, with the only exception that any
  additional information (such as long file names etc.) will in such case be extracted as plain
  text files along with the files it refers to.

  This archive format will be the default format for future versions of GNU tar.

  and:

  The default format for GNU tar is defined at compilation time.
  You may check it by running tar --help, and examining the last lines of its output.
  Usually, GNU tar is configured to create archives in `gnu' format, however, future version will switch to `posix'.

* I've compared tar on centos7 and ubuntu-18.04:

bash-4.2$ cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)

bash-4.2$ tar --version
tar (GNU tar) 1.26
...

bash-4.2$ tar --help | tail -n 5
*This* tar defaults to:
--format=gnu -f- -b20 --quoting-style=escape --rmt-command=/etc/rmt
--rsh-command=/usr/bin/ssh
...

bitbake@e0ee76f81c2f:/$ grep VERSION /etc/os-release
VERSION="18.04.5 LTS (Bionic Beaver)"
VERSION_ID="18.04"
VERSION_CODENAME=bionic

bitbake@e0ee76f81c2f:/$ tar --version
tar (GNU tar) 1.29
...

bitbake@e0ee76f81c2f:/$ tar --help | tail -n 5
...
*This* tar defaults to:
--format=gnu -f- -b20 --quoting-style=escape --rmt-command=/usr/lib/tar/rmt
--rsh-command=/usr/bin/rsh

Both support posix format (as pax POSIX 1003.1-2001). But the centos7 version is
already too old anyway, because it doesn't support --sort=name used since:
https://git.openembedded.org/openembedded-core/commit/?id=4fa68626bbcfd9795577e1426c27d00f4d9d1c17
and
https://git.openembedded.org/openembedded-core/commit/?id=f19e43dec63a86c200e04ba14393583588550380
says that 1.28 is the minimum version now and
https://git.openembedded.org/openembedded-core/commit/?id=7a66434cf11b7f051699b774e4fccd6738351368
recommends using install-buildtools for hosts with tar < 1.28.

On the other side latest tumbleweed from:
https://hub.docker.com/r/opensuse/tumbleweed
with tar-1.33 already defaults to posix format:

b99dbb3d86dd:/ # head -n 3 /etc/os-release
NAME="openSUSE Tumbleweed"
ID="opensuse-tumbleweed"

b99dbb3d86dd:/ # tar --version
tar (GNU tar) 1.33
...

b99dbb3d86dd:/ # tar --help | tail -n 3
*This* tar defaults to:
--format=posix -f- -b20 --quoting-style=escape --rmt-command=/usr/bin/rmt
--rsh-command=/usr/bin/ssh

I've packaged a sample rootfs directory with both tars and the resulting size is
identical (with --format=gnu as well as --format=posix).

with ubuntu:
tar --sort=name --format=gnu --numeric-owner -cf rootfs.ubuntu.gnu.tar -C rootfs .
tar --xattrs --xattrs-include=* --sort=name --format=posix --numeric-owner -cf rootfs.ubuntu.posix.tar -C rootfs .
tumbleweed:
tar --sort=name --format=gnu --numeric-owner -cf rootfs.tumbleweed.gnu.tar -C rootfs .
tar --xattrs --xattrs-include=* --sort=name --format=posix --numeric-owner -cf rootfs.tumbleweed.posix.tar -C rootfs .
centos7 (without --sort=name):
tar --format=gnu --numeric-owner -cf rootfs.centos7.gnu.tar -C rootfs .
tar --xattrs --xattrs-include=* --format=posix --numeric-owner -cf rootfs.centos7.posix.tar -C rootfs .

size is identical:
-rw-r--r-- 1 mjansa mjansa 2487480320 Feb  5 09:19 rootfs.ubuntu.gnu.tar
-rw-r--r-- 1 mjansa mjansa 2487480320 Feb  5 10:17 rootfs.centos7.gnu.tar
-rw-r--r-- 1 mjansa mjansa 2487480320 Feb  5 10:26 rootfs.tumbleweed.gnu.tar
-rw-r--r-- 1 mjansa mjansa 2579875840 Feb  5 10:15 rootfs.ubuntu.posix.tar
-rw-r--r-- 1 mjansa mjansa 2579875840 Feb  5 10:16 rootfs.centos7.posix.tar
-rw-r--r-- 1 mjansa mjansa 2579875840 Feb  5 10:26 rootfs.tumbleweed.posix.tar

but md5s aren't:
5e3880283379dd773ac054e20562fdea  rootfs.centos7.gnu.tar
abeaf992c780aa780a27be01365d26f5  rootfs.centos7.posix.tar
0c6ee59d87ab56583293262de110bca4  rootfs.tumbleweed.gnu.tar
1555bc7276eaba924bf82a13a010fd6d  rootfs.tumbleweed.posix.tar
553d802bba351e273191bd5b2a621b66  rootfs.ubuntu.gnu.tar
b6d7b43b30174686f6625ba3c7aefdc6  rootfs.ubuntu.posix.tar

diffoscope shows some differences when using gnu format:

$ diffoscope rootfs.tumbleweed.gnu.tar rootfs.ubuntu.gnu.tar
...
-00239890: 3030 3000 3030 3737 3637 0020 4b00 0000  000.007767. K...
+00239890: 3030 3000 3031 3135 3737 0020 4b00 0000  000.011577. K...
...
-00239900: 0075 7374 6172 2020 0000 0000 0000 0000  .ustar  ........
+00239900: 0075 7374 6172 2020 0072 6f6f 7400 0000  .ustar  .root...
...
-00239920: 0000 0000 0000 0000 0000 0000 0000 0000  ................
+00239920: 0000 0000 0000 0000 0072 6f6f 7400 0000  .........root...

with posix format there are also some differences shown by diffoscope:

$ diffoscope rootfs.tumbleweed.posix.tar rootfs.ubuntu.posix.tar
 016a4c00: 2e2f 7573 722f 6269 6e2f 5061 7848 6561  ./usr/bin/PaxHea
-016a4c10: 6465 7273 2f63 6861 7474 722e 6532 6673  ders/chattr.e2fs
-016a4c20: 7072 6f67 7300 0000 0000 0000 0000 0000  progs...........
+016a4c10: 6465 7273 2e32 322f 6368 6174 7472 2e65  ders.22/chattr.e
+016a4c20: 3266 7370 726f 6773 0000 0000 0000 0000  2fsprogs........
...
 03937000: 2e2f 7573 722f 6269 6e2f 5061 7848 6561  ./usr/bin/PaxHea
-03937010: 6465 7273 2f63 6f6e 7461 696e 6572 642d  ders/containerd-
-03937020: 6374 7200 0000 0000 0000 0000 0000 0000  ctr.............
+03937010: 6465 7273 2e32 322f 636f 6e74 6169 6e65  ders.22/containe
+03937020: 7264 2d63 7472 0000 0000 0000 0000 0000  rd-ctr..........

So I cannot really say which format is better for reproducible tar
archives from different distros, but posix at least supports xattrs
and is the format of the future.

(From OE-Core rev: 3ecea58f2a3382d9f4b410d6ad7089111334cb6f)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:03:00 +00:00
saloni
b74de5d19a libcroco: Added CVE
Added a fix for the below CVE:
CVE-2020-12825
Link: CVE-2020-12825 [6eb257e5c7]
Link: https://gitlab.gnome.org/Archive/libcroco/-/issues/8

(From OE-Core rev: f8cee7386c556e1c5adb07a0aee385642b7a5568)

Signed-off-by: Saloni Jain <Saloni.Jain@kpit.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:03:00 +00:00
saloni
51b95cf1e7 libgcrypt: Whitelisted CVEs
Whitelisted below CVEs:

1. CVE-2018-12433
Link: https://security-tracker.debian.org/tracker/CVE-2018-12433
Link: https://nvd.nist.gov/vuln/detail/CVE-2018-12433
CVE-2018-12433 is marked disputed and ignored by NVD as it does
not impact crypt libraries for any distro and hence can be safely
whitelisted.

2. CVE-2018-12438
Link: https://security-tracker.debian.org/tracker/CVE-2018-12438
Link: https://ubuntu.com/security/CVE-2018-12438
CVE-2018-12438 was reported as affecting openjdk crypt libraries,
but there are no details available on which openjdk versions are
affected; it does not directly affect libgcrypt or any specific
yocto distribution, hence it can be whitelisted.

(From OE-Core rev: 2943efe3f56d394308f9364b439c25f6a7613288)

Signed-off-by: Saloni Jain <Saloni.Jain@kpit.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:03:00 +00:00
Wang Mingyu
b535616586 e2fsprogs: upgrade 1.45.6 -> 1.45.7
0001-fix-up-check-for-hardlinks-always-false-if-inode-0xF.patch
removed since it is included in 1.45.7

(From OE-Core rev: f51835e022731d1c0e8e18209e48f1a718048977)

Signed-off-by: Wang Mingyu <wangmy@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:03:00 +00:00
Joshua Watt
071f23ad79 bash: Disable bracketed input by default
Bash 5.1 enabled bracketed input mode by default, but this causes a lot
of problems with automated testing as it can inject a lot of control
sequences into non-interactive output. Disable it to clean up the output
and preserve the pre-5.1 behavior.
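
The commit changes bash's compiled-in default; for reference, the same
readline knob can also be toggled per user:

  # in ~/.inputrc or /etc/inputrc
  set enable-bracketed-paste off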

(From OE-Core rev: 6c1cb7e274050f1ccb817b8ee34d0f61f34c95e3)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:03:00 +00:00
Chen Qi
e6be41a204 systemd: change /bin/nologin to /sbin/nologin
Our nologin path is /sbin/nologin instead of /bin/nologin.

(From OE-Core rev: cd7f55e960e759d946d8b619b0a306e610f66356)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:03:00 +00:00
zhengruoqin
9000f80336 mc: upgrade 4.8.25 -> 4.8.26
Fix the do_compile error:
| ../../../mc-4.8.26/lib/tty/tty-ncurses.c: In function 'tty_colorize_area':
| ../../../mc-4.8.26/lib/tty/tty-ncurses.c:557:5: error: unknown type name 'cchar_t'; did you mean 'wchar_t'?
Add -DNCURSES_WIDECHAR=1 when building with musl.
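
A hedged sketch in old-style override syntax (the exact variable used by the
recipe may differ):

  # force the wide-character ncurses API on, which defines cchar_t
  CFLAGS_append_libc-musl = " -DNCURSES_WIDECHAR=1"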

(From OE-Core rev: 5be29caca3d06dd3d2bab4c76588f509f1268199)

Signed-off-by: Zheng Ruoqin <zhengrq.fnst@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:03:00 +00:00
Andreas Müller
7ee0c2c8cb openssl: re-enable whirlpool
* disabling it breaks KDE's qca and dependent packages
* it is not deprecated yet: OpenSSL 3.0 (currently alpha) will deprecate whirlpool [1]

[1] https://www.openssl.org/news/changelog.html#openssl-30

(From OE-Core rev: bc02baadeee477b10eceae62985af4f4c323506e)

Signed-off-by: Andreas Müller <schnitzeltony@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-06 09:03:00 +00:00
Khem Raj
8e3b42f442 glibc: Require full ISA support for x86-64 level marker
(From OE-Core rev: 7f40096fabd4d8a1b67e96aabca6a15637501222)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-05 12:00:54 +00:00
Khem Raj
51cf448849 glibc: Enable cet
Enable Intel Control-flow Enforcement Technology (CET) instrumentation
support

This helps with overcoming the error:
/lib/libc.so.6: CPU ISA level is lower than required

(From OE-Core rev: c864e0e496ab1a4176d7a1673d8fc5b300ae68cf)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-05 12:00:54 +00:00
Khem Raj
7b8df042d0 glibc: Upgrade to 2.33
Drop backported patches

(From OE-Core rev: aa87638cf4f2bef66df92f961c7814f6b482fd3d)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-05 12:00:54 +00:00
Richard Purdie
77d9b5f02a pseudo: Update to work with glibc 2.33
Update to a pseudo version which contains some header fixes for
glibc 2.33.

(From OE-Core rev: c897ac317926b132547578b1f6bd347fe5677dfc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-05 12:00:54 +00:00
Richard Purdie
fa905948e5 openssh: Backport a fix to work with glibc 2.33 on some platforms
This fixes openssh failing to work on qemux86 with glibc 2.33 due to
seccomp and the fact new syscalls are used. Also likely fixes issues
on other platforms.

(From OE-Core rev: 22f8ce6e6d998c0539a40b2776b1a2abb4f44bb3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-05 12:00:54 +00:00
Mingli Yu
03bf6b3ec0 qemu: make ptest rework
After qemu upgraded to 5.2.0 in commit [1], it also switched
to meson, and the previous logic which introduced the testsuites
changed in [2], resulting in the below error when running the ptest test:
 # ./run-ptest
 for f in ; do \
     nf=$(echo $f | sed 's/tests\//\.\//g'); \
     $nf; \
 done

So refactor the ptest code to make it work again.

[1] https://git.openembedded.org/openembedded-core/commit/?id=181c635567aafb9b4787d8d6d0bcd4a615ceae80
[2] https://git.qemu.org/?p=qemu.git;a=commitdiff;h=279588d4deea2694ebe9ceb29dfdc5c08a7c4e27

(From OE-Core rev: a5c1290e8a24b844f0ba62df270f976096394d87)

Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-05 08:16:32 +00:00
Lee Chee Yang
c05385c887 wic/selftest: test_permissions also test bitbake image
The existing test case test_permissions uses the wic command as a standalone
tool to create a wic image and checks that wic image for permissions.

Add extra steps to the test case to also check an image built
using bitbake do_image_wic.

(From OE-Core rev: 551ce73a90757ba43501fe5cf9ac84a7b77de549)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-05 08:16:32 +00:00
Yi Fan Yu
4b5b18c369 glib-2.0: add workaround to fix codegen.py.test failing
Add a patch to remove an unnecessary print statement
in test-codegen.py that causes the ptest-runner to fail.

Root cause is suspected to be in ptest-runner.

[YOCTO #14170]

Upstream-Status: Inappropriate [other]
This is a workaround.

(From OE-Core rev: afc9ba7d546f3f2e60fb6f46f740dc925542df16)

Signed-off-by: Yi Fan Yu <yifan.yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2021-02-05 08:16:32 +00:00
202 changed files with 1627 additions and 2648 deletions

View File: bitbake/bin/bitbake

@@ -26,7 +26,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
__version__ = "1.49.0"
__version__ = "1.49.1"
if __name__ == "__main__":
if __version__ != bb.__version__:

View File: bitbake/bin/bitbake-hashclient

@@ -151,9 +151,6 @@ def main():
func = getattr(args, 'func', None)
if func:
client = hashserv.create_client(args.address)
-# Try to establish a connection to the server now to detect failures
-# early
-client.connect()
return func(args, client)

View File: bitbake/bin/bitbake-hashserv

@@ -30,9 +30,11 @@ def main():
"--bind [::1]:8686"'''
)
-parser.add_argument('--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
-parser.add_argument('--database', default='./hashserv.db', help='Database file (default "%(default)s")')
-parser.add_argument('--log', default='WARNING', help='Set logging level')
+parser.add_argument('-b', '--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
+parser.add_argument('-d', '--database', default='./hashserv.db', help='Database file (default "%(default)s")')
+parser.add_argument('-l', '--log', default='WARNING', help='Set logging level')
+parser.add_argument('-u', '--upstream', help='Upstream hashserv to pull hashes from')
+parser.add_argument('-r', '--read-only', action='store_true', help='Disallow write operations from clients')
args = parser.parse_args()
@@ -47,7 +49,7 @@ def main():
console.setLevel(level)
logger.addHandler(console)
-server = hashserv.create_server(args.bind, args.database)
+server = hashserv.create_server(args.bind, args.database, upstream=args.upstream, read_only=args.read_only)
server.serve_forever()
return 0

View File: bitbake/bin/bitbake-worker

@@ -118,7 +118,9 @@ def worker_child_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
worker_pipe_lock.acquire()
-worker_pipe.write(data)
+while(len(data)):
+    written = worker_pipe.write(data)
+    data = data[written:]
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
@@ -167,7 +169,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
fakedirs = (workerdata["fakerootdirs"][fn] or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
-logger.debug(2, 'Running %s:%s under fakeroot, fakedirs: %s' %
+logger.debug2('Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
else:
envvars = (workerdata["fakerootnoenv"][fn] or "").split()
@@ -321,7 +323,9 @@ class runQueueWorkerPipe():
end = len(self.queue)
index = self.queue.find(b"</event>")
while index != -1:
-worker_fire_prepickled(self.queue[:index+8])
+msg = self.queue[:index+8]
+assert msg.startswith(b"<event>") and msg.count(b"<event>") == 1
+worker_fire_prepickled(msg)
self.queue = self.queue[index+8:]
index = self.queue.find(b"</event>")
return (end > start)

View File: bitbake/lib/bb/__init__.py

@@ -9,7 +9,7 @@
# SPDX-License-Identifier: GPL-2.0-only
#
__version__ = "1.49.0"
__version__ = "1.49.1"
import sys
if sys.version_info < (3, 5, 0):
@@ -21,8 +21,8 @@ class BBHandledException(Exception):
The big dilemma for generic bitbake code is what information to give the user
when an exception occurs. Any exception inheriting this base exception class
has already provided information to the user via some 'fired' message type such as
-an explicitly fired event using bb.fire, or a bb.error message. If bitbake
-encounters an exception derived from this class, no backtrace or other information
+an explicitly fired event using bb.fire, or a bb.error message. If bitbake
+encounters an exception derived from this class, no backtrace or other information
will be given to the user, its assumed the earlier event provided the relevant information.
"""
pass
@@ -42,7 +42,16 @@ class BBLoggerMixin(object):
def setup_bblogger(self, name):
if name.split(".")[0] == "BitBake":
-self.debug = self.bbdebug
+self.debug = self._debug_helper
+
+def _debug_helper(self, *args, **kwargs):
+    return self.bbdebug(1, *args, **kwargs)
+
+def debug2(self, *args, **kwargs):
+    return self.bbdebug(2, *args, **kwargs)
+
+def debug3(self, *args, **kwargs):
+    return self.bbdebug(3, *args, **kwargs)
def bbdebug(self, level, msg, *args, **kwargs):
loglevel = logging.DEBUG - level + 1
@@ -128,7 +137,7 @@ def debug(lvl, *args):
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
-mainlogger.debug(lvl, ''.join(args))
+mainlogger.bbdebug(lvl, ''.join(args))
def note(*args):
mainlogger.info(''.join(args))

View File: bitbake/lib/bb/build.py

@@ -583,7 +583,7 @@ def _exec_task(fn, task, d, quieterr):
logger.error("No such task: %s" % task)
return 1
logger.debug(1, "Executing task %s", task)
logger.debug("Executing task %s", task)
localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T')
@@ -596,7 +596,7 @@ def _exec_task(fn, task, d, quieterr):
curnice = os.nice(0)
nice = int(nice) - curnice
newnice = os.nice(nice)
logger.debug(1, "Renice to %s " % newnice)
logger.debug("Renice to %s " % newnice)
ionice = localdata.getVar("BB_TASK_IONICE_LEVEL")
if ionice:
try:
@@ -720,7 +720,7 @@ def _exec_task(fn, task, d, quieterr):
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
logger.debug(2, "Zero size logfn %s, removing", logfn)
logger.debug2("Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, fn, logfn, localdata), localdata)

View File: bitbake/lib/bb/cache.py

@@ -215,7 +215,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
if not self.not_world:
cachedata.possible_world.append(fn)
#else:
-# logger.debug(2, "EXCLUDE FROM WORLD: %s", fn)
+# logger.debug2("EXCLUDE FROM WORLD: %s", fn)
# create a collection of all targets for sanity checking
# tasks, such as upstream versions, license, and tools for
@@ -238,7 +238,7 @@ def virtualfn2realfn(virtualfn):
Convert a virtual file name to a real one + the associated subclass keyword
"""
mc = ""
-if virtualfn.startswith('mc:'):
+if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])
@@ -268,7 +268,7 @@ def variant2virtual(realfn, variant):
"""
if variant == "":
return realfn
if variant.startswith("mc:"):
if variant.startswith("mc:") and variant.count(':') >= 2:
elems = variant.split(":")
if elems[2]:
return "mc:" + elems[1] + ":virtual:" + ":".join(elems[2:]) + ":" + realfn
@@ -323,7 +323,7 @@ class NoCache(object):
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug(1, "Parsing %s (full)" % virtualfn)
logger.debug("Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
@@ -400,7 +400,7 @@ class Cache(NoCache):
self.cachefile = self.getCacheFile("bb_cache.dat")
self.logger.debug(1, "Cache dir: %s", self.cachedir)
self.logger.debug("Cache dir: %s", self.cachedir)
bb.utils.mkdirhier(self.cachedir)
cache_ok = True
@@ -408,7 +408,7 @@ class Cache(NoCache):
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
cache_exists = os.path.exists(cachefile)
self.logger.debug(2, "Checking if %s exists: %r", cachefile, cache_exists)
self.logger.debug2("Checking if %s exists: %r", cachefile, cache_exists)
cache_ok = cache_ok and cache_exists
cache_class.init_cacheData(self)
if cache_ok:
@@ -416,7 +416,7 @@ class Cache(NoCache):
elif os.path.isfile(self.cachefile):
self.logger.info("Out of date cache found, rebuilding...")
else:
self.logger.debug(1, "Cache file %s not found, building..." % self.cachefile)
self.logger.debug("Cache file %s not found, building..." % self.cachefile)
# We don't use the symlink, its just for debugging convinience
if self.mc:
@@ -453,7 +453,7 @@ class Cache(NoCache):
for cache_class in self.caches_array:
cachefile = self.getCacheFile(cache_class.cachefile)
-self.logger.debug(1, 'Loading cache file: %s' % cachefile)
+self.logger.debug('Loading cache file: %s' % cachefile)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
# Check cache version information
@@ -500,7 +500,7 @@ class Cache(NoCache):
def parse(self, filename, appends):
"""Parse the specified filename, returning the recipe information"""
self.logger.debug(1, "Parsing %s", filename)
self.logger.debug("Parsing %s", filename)
infos = []
datastores = self.load_bbfile(filename, appends, mc=self.mc)
depends = []
@@ -554,7 +554,7 @@ class Cache(NoCache):
cached, infos = self.load(fn, appends)
for virtualfn, info_array in infos:
if info_array[0].skipped:
self.logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
self.logger.debug("Skipping %s: %s", virtualfn, info_array[0].skipreason)
skipped += 1
else:
self.add_info(virtualfn, info_array, cacheData, not cached)
@@ -590,21 +590,21 @@ class Cache(NoCache):
# File isn't in depends_cache
if not fn in self.depends_cache:
self.logger.debug(2, "%s is not cached", fn)
self.logger.debug2("%s is not cached", fn)
return False
mtime = bb.parse.cached_mtime_noerror(fn)
# Check file still exists
if mtime == 0:
self.logger.debug(2, "%s no longer exists", fn)
self.logger.debug2("%s no longer exists", fn)
self.remove(fn)
return False
info_array = self.depends_cache[fn]
# Check the file's timestamp
if mtime != info_array[0].timestamp:
self.logger.debug(2, "%s changed", fn)
self.logger.debug2("%s changed", fn)
self.remove(fn)
return False
@@ -615,13 +615,13 @@ class Cache(NoCache):
fmtime = bb.parse.cached_mtime_noerror(f)
# Check if file still exists
if old_mtime != 0 and fmtime == 0:
self.logger.debug(2, "%s's dependency %s was removed",
self.logger.debug2("%s's dependency %s was removed",
fn, f)
self.remove(fn)
return False
if (fmtime != old_mtime):
self.logger.debug(2, "%s's dependency %s changed",
self.logger.debug2("%s's dependency %s changed",
fn, f)
self.remove(fn)
return False
@@ -638,14 +638,14 @@ class Cache(NoCache):
continue
f, exist = f.split(":")
if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
self.logger.debug(2, "%s's file checksum list file %s changed",
self.logger.debug2("%s's file checksum list file %s changed",
fn, f)
self.remove(fn)
return False
if tuple(appends) != tuple(info_array[0].appends):
self.logger.debug(2, "appends for %s changed", fn)
self.logger.debug(2, "%s to %s" % (str(appends), str(info_array[0].appends)))
self.logger.debug2("appends for %s changed", fn)
self.logger.debug2("%s to %s" % (str(appends), str(info_array[0].appends)))
self.remove(fn)
return False
@@ -654,10 +654,10 @@ class Cache(NoCache):
virtualfn = variant2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
self.logger.debug(2, "%s is not cached", virtualfn)
self.logger.debug2("%s is not cached", virtualfn)
invalid = True
elif len(self.depends_cache[virtualfn]) != len(self.caches_array):
self.logger.debug(2, "Extra caches missing for %s?" % virtualfn)
self.logger.debug2("Extra caches missing for %s?" % virtualfn)
invalid = True
# If any one of the variants is not present, mark as invalid for all
@@ -665,10 +665,10 @@ class Cache(NoCache):
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
if virtualfn in self.clean:
self.logger.debug(2, "Removing %s from cache", virtualfn)
self.logger.debug2("Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
if fn in self.clean:
self.logger.debug(2, "Marking %s as not clean", fn)
self.logger.debug2("Marking %s as not clean", fn)
self.clean.remove(fn)
return False
@@ -681,10 +681,10 @@ class Cache(NoCache):
Called from the parser in error cases
"""
if fn in self.depends_cache:
self.logger.debug(1, "Removing %s from cache", fn)
self.logger.debug("Removing %s from cache", fn)
del self.depends_cache[fn]
if fn in self.clean:
self.logger.debug(1, "Marking %s as unclean", fn)
self.logger.debug("Marking %s as unclean", fn)
self.clean.remove(fn)
def sync(self):
@@ -697,13 +697,13 @@ class Cache(NoCache):
return
if self.cacheclean:
self.logger.debug(2, "Cache is clean, not saving.")
self.logger.debug2("Cache is clean, not saving.")
return
for cache_class in self.caches_array:
cache_class_name = cache_class.__name__
cachefile = self.getCacheFile(cache_class.cachefile)
self.logger.debug(2, "Writing %s", cachefile)
self.logger.debug2("Writing %s", cachefile)
with open(cachefile, "wb") as f:
p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
p.dump(__cache_version__)
@@ -879,7 +879,7 @@ class MultiProcessCache(object):
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
logger.debug(1, "Using cache in '%s'", self.cachefile)
logger.debug("Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")
@@ -985,7 +985,7 @@ class SimpleCache(object):
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
logger.debug(1, "Using cache in '%s'", self.cachefile)
logger.debug("Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")

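The hunks above, and the bulk of the hunks that follow, are a mechanical migration of BitBake's logging calls: logger.debug(2, msg) becomes logger.debug2(msg), logger.debug(3, msg) becomes logger.debug3(msg), and logger.debug(1, msg) simply drops its leading level argument. The helper methods themselves are defined elsewhere in the series; a minimal sketch of the idea, using hypothetical class and logger names rather than BitBake's actual plumbing:

import logging

class DebugLevelLogger(logging.Logger):
    # Stand-in for BitBake's logger class: map the old numeric debug
    # levels onto distinct stdlib log levels so call sites can drop
    # the leading integer argument.
    def debug2(self, msg, *args, **kwargs):
        self.log(logging.DEBUG - 1, msg, *args, **kwargs)

    def debug3(self, msg, *args, **kwargs):
        self.log(logging.DEBUG - 2, msg, *args, **kwargs)

logging.setLoggerClass(DebugLevelLogger)
logger = logging.getLogger("BitBake.Example")
logger.debug2("%s is not cached", "/path/to/recipe.bb")  # old: logger.debug(2, ...)

The call-site change is what these diffs show; the exact level mapping inside BitBake differs, but the intent is the same: the verbosity level moves from an argument into the method name.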

@@ -411,6 +411,8 @@ class BBCooker:
self.data.disableTracking()
def parseConfiguration(self):
self.updateCacheSync()
# Change nice level if we're asked to
nice = self.data.getVar("BB_NICE_LEVEL")
if nice:
@@ -441,7 +443,7 @@ class BBCooker:
continue
except AttributeError:
pass
logger.debug(1, "Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
logger.debug("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
print("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
clean = False
if hasattr(self.configuration, o):
@@ -468,17 +470,17 @@ class BBCooker:
for k in bb.utils.approved_variables():
if k in environment and k not in self.configuration.env:
logger.debug(1, "Updating new environment variable %s to %s" % (k, environment[k]))
logger.debug("Updating new environment variable %s to %s" % (k, environment[k]))
self.configuration.env[k] = environment[k]
clean = False
if k in self.configuration.env and k not in environment:
logger.debug(1, "Updating environment variable %s (deleted)" % (k))
logger.debug("Updating environment variable %s (deleted)" % (k))
del self.configuration.env[k]
clean = False
if k not in self.configuration.env and k not in environment:
continue
if environment[k] != self.configuration.env[k]:
logger.debug(1, "Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
logger.debug("Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
self.configuration.env[k] = environment[k]
clean = False
@@ -486,7 +488,7 @@ class BBCooker:
self.configuration.env = environment
if not clean:
logger.debug(1, "Base environment change, triggering reparse")
logger.debug("Base environment change, triggering reparse")
self.reset()
def runCommands(self, server, data, abort):
@@ -614,7 +616,7 @@ class BBCooker:
# Replace string such as "mc:*:bash"
# into "mc:A:bash mc:B:bash bash"
for k in targetlist:
if k.startswith("mc:"):
if k.startswith("mc:") and k.count(':') >= 2:
if wildcard:
bb.fatal('multiconfig conflict')
if k.split(":")[1] == "*":
@@ -648,7 +650,7 @@ class BBCooker:
for k in fulltargetlist:
origk = k
mc = ""
if k.startswith("mc:"):
if k.startswith("mc:") and k.count(':') >= 2:
mc = k.split(":")[1]
k = ":".join(k.split(":")[2:])
ktask = task
@@ -697,7 +699,7 @@ class BBCooker:
if depmc not in self.multiconfigs:
bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named configuration %s" % (k,depmc))
else:
logger.debug(1, "Adding providers for multiconfig dependency %s" % l[3])
logger.debug("Adding providers for multiconfig dependency %s" % l[3])
taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
seen.add(k)
new = True
@@ -1553,7 +1555,7 @@ class BBCooker:
self.inotify_modified_files = []
if not self.baseconfig_valid:
logger.debug(1, "Reloading base configuration data")
logger.debug("Reloading base configuration data")
self.initConfigurationData()
self.handlePRServ()

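Besides the logging migration, the cooker.py hunks above tighten the multiconfig target check: k.startswith("mc:") alone is no longer enough, a second colon must be present before the name is split as mc:&lt;config&gt;:&lt;target&gt;. The same guard is applied below in runqueue.py (mc_from_tid, split_mc, split_tid_mcfn) and in siggen.py's clean_basepath. The new split_mc() behaviour, restated as a standalone sketch with a few worked cases:

def split_mc(n):
    # Only treat the name as multiconfig when both the "mc:" prefix and
    # a second colon are present; a bare "mc:foo" now falls through as
    # an ordinary target name instead of being mis-split.
    if n.startswith("mc:") and n.count(":") >= 2:
        _, mc, n = n.split(":", 2)
        return (mc, n)
    return ("", n)

assert split_mc("mc:musl:bash") == ("musl", "bash")
assert split_mc("mc:bash") == ("", "mc:bash")  # no second colon
assert split_mc("bash") == ("", "bash")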

@@ -428,7 +428,7 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
uri_decoded = list(decodeurl(ud.url))
uri_find_decoded = list(decodeurl(uri_find))
uri_replace_decoded = list(decodeurl(uri_replace))
logger.debug(2, "For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
logger.debug2("For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
result_decoded = ['', '', '', '', '', {}]
for loc, i in enumerate(uri_find_decoded):
result_decoded[loc] = uri_decoded[loc]
@@ -474,7 +474,7 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
result = encodeurl(result_decoded)
if result == ud.url:
return None
logger.debug(2, "For url %s returning %s" % (ud.url, result))
logger.debug2("For url %s returning %s" % (ud.url, result))
return result
methods = []
@@ -499,9 +499,9 @@ def fetcher_init(d):
# When to drop SCM head revisions controlled by user policy
srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
if srcrev_policy == "cache":
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs.clear()
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
@@ -857,9 +857,9 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
cmd = 'export PSEUDO_DISABLED=1; ' + cmd
if workdir:
logger.debug(1, "Running '%s' in %s" % (cmd, workdir))
logger.debug("Running '%s' in %s" % (cmd, workdir))
else:
logger.debug(1, "Running %s", cmd)
logger.debug("Running %s", cmd)
success = False
error_message = ""
@@ -900,7 +900,7 @@ def check_network_access(d, info, url):
elif not trusted_network(d, url):
raise UntrustedUrl(url, info)
else:
logger.debug(1, "Fetcher accessed the network with the command %s" % info)
logger.debug("Fetcher accessed the network with the command %s" % info)
def build_mirroruris(origud, mirrors, ld):
uris = []
@@ -926,7 +926,7 @@ def build_mirroruris(origud, mirrors, ld):
continue
if not trusted_network(ld, newuri):
logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" % (newuri))
logger.debug("Mirror %s not in the list of trusted networks, skipping" % (newuri))
continue
# Create a local copy of the mirrors minus the current line
@@ -939,8 +939,8 @@ def build_mirroruris(origud, mirrors, ld):
newud = FetchData(newuri, ld)
newud.setup_localpath(ld)
except bb.fetch2.BBFetchException as e:
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(1, str(e))
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(str(e))
try:
# setup_localpath of file:// urls may fail, we should still see
# if mirrors of the url exist
@@ -1043,8 +1043,8 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
elif isinstance(e, NoChecksumError):
raise
else:
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(1, str(e))
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(str(e))
try:
ud.method.clean(ud, ld)
except UnboundLocalError:
@@ -1688,7 +1688,7 @@ class Fetch(object):
if m.verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
done = True
elif m.try_premirror(ud, self.d):
logger.debug(1, "Trying PREMIRRORS")
logger.debug("Trying PREMIRRORS")
mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
done = m.try_mirrors(self, ud, self.d, mirrors)
if done:
@@ -1698,7 +1698,7 @@ class Fetch(object):
m.update_donestamp(ud, self.d)
except ChecksumError as e:
logger.warning("Checksum failure encountered with premirror download of %s - will attempt other sources." % u)
logger.debug(1, str(e))
logger.debug(str(e))
done = False
if premirroronly:
@@ -1710,7 +1710,7 @@ class Fetch(object):
try:
if not trusted_network(self.d, ud.url):
raise UntrustedUrl(ud.url)
logger.debug(1, "Trying Upstream")
logger.debug("Trying Upstream")
m.download(ud, self.d)
if hasattr(m, "build_mirror_data"):
m.build_mirror_data(ud, self.d)
@@ -1725,19 +1725,19 @@ class Fetch(object):
except BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warning("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
logger.debug(1, str(e))
logger.debug(str(e))
if os.path.exists(ud.localpath):
rename_bad_checksum(ud, e.checksum)
elif isinstance(e, NoChecksumError):
raise
else:
logger.warning('Failed to fetch URL %s, attempting MIRRORS if available' % u)
logger.debug(1, str(e))
logger.debug(str(e))
firsterr = e
# Remove any incomplete fetch
if not verified_stamp:
m.clean(ud, self.d)
logger.debug(1, "Trying MIRRORS")
logger.debug("Trying MIRRORS")
mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
done = m.try_mirrors(self, ud, self.d, mirrors)
@@ -1774,7 +1774,7 @@ class Fetch(object):
ud = self.ud[u]
ud.setup_localpath(self.d)
m = ud.method
logger.debug(1, "Testing URL %s", u)
logger.debug("Testing URL %s", u)
# First try checking uri, u, from PREMIRRORS
mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
ret = m.try_mirrors(self, ud, self.d, mirrors, True)

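The "Trying PREMIRRORS", "Trying Upstream" and "Trying MIRRORS" messages in the hunks above trace the fetcher's fallback order. A condensed, hypothetical rendering of that control flow (the real Fetch.download() also handles donestamps, checksum failures and cleanup, all omitted here):

def download_with_fallback(try_premirrors, try_upstream, try_mirrors, logger):
    # Premirrors are consulted first, then the upstream URL (subject to
    # the trusted-network check), then the regular mirrors.
    logger.debug("Trying PREMIRRORS")
    if try_premirrors():
        return True
    logger.debug("Trying Upstream")
    if try_upstream():
        return True
    logger.debug("Trying MIRRORS")
    return try_mirrors()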

@@ -74,16 +74,16 @@ class Bzr(FetchMethod):
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug(1, "BZR Update %s", ud.url)
logger.debug("BZR Update %s", ud.url)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
runfetchcmd(bzrcmd, d, workdir=os.path.join(ud.pkgdir, os.path.basename(ud.path)))
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug(1, "BZR Checkout %s", ud.url)
logger.debug("BZR Checkout %s", ud.url)
bb.utils.mkdirhier(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
logger.debug("Running %s", bzrcmd)
runfetchcmd(bzrcmd, d, workdir=ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
@@ -109,7 +109,7 @@ class Bzr(FetchMethod):
"""
Return the latest upstream revision number
"""
logger.debug(2, "BZR fetcher hitting network for %s", ud.url)
logger.debug2("BZR fetcher hitting network for %s", ud.url)
bb.fetch2.check_network_access(d, self._buildbzrcommand(ud, d, "revno"), ud.url)


@@ -70,7 +70,7 @@ class ClearCase(FetchMethod):
return ud.type in ['ccrc']
def debug(self, msg):
logger.debug(1, "ClearCase: %s", msg)
logger.debug("ClearCase: %s", msg)
def urldata_init(self, ud, d):
"""


@@ -109,7 +109,7 @@ class Cvs(FetchMethod):
cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)
# create module directory
logger.debug(2, "Fetch: checking for module directory")
logger.debug2("Fetch: checking for module directory")
moddir = os.path.join(ud.pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
@@ -123,7 +123,7 @@ class Cvs(FetchMethod):
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
workdir = ud.pkgdir
logger.debug(1, "Running %s", cvscmd)
logger.debug("Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd


@@ -78,7 +78,7 @@ class GitSM(Git):
module_hash = ""
if not module_hash:
logger.debug(1, "submodule %s is defined, but is not initialized in the repository. Skipping", m)
logger.debug("submodule %s is defined, but is not initialized in the repository. Skipping", m)
continue
submodules.append(m)
@@ -179,7 +179,7 @@ class GitSM(Git):
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
if len(need_update_list) > 0:
logger.debug(1, 'gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
return True
return False


@@ -150,7 +150,7 @@ class Hg(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
@@ -160,7 +160,7 @@ class Hg(FetchMethod):
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether need pull
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug(1, "Running %s", updatecmd)
logger.debug("Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d, workdir=ud.moddir)
except bb.fetch2.FetchError:
@@ -168,7 +168,7 @@ class Hg(FetchMethod):
pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
logger.debug(1, "Running %s", pullcmd)
logger.debug("Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d, workdir=ud.moddir)
try:
@@ -183,14 +183,14 @@ class Hg(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
logger.debug("Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d, workdir=ud.pkgdir)
# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if its on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug(1, "Running %s", updatecmd)
logger.debug("Running %s", updatecmd)
runfetchcmd(updatecmd, d, workdir=ud.moddir)
def clean(self, ud, d):
@@ -247,9 +247,9 @@ class Hg(FetchMethod):
if scmdata != "nokeep":
proto = ud.parm.get('protocol', 'http')
if not os.access(os.path.join(codir, '.hg'), os.R_OK):
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
logger.debug2("Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug(2, "Unpack: updating source in '" + codir + "'")
logger.debug2("Unpack: updating source in '" + codir + "'")
if ud.user and ud.pswd:
runfetchcmd("%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull %s" % (ud.basecmd, ud.user, ud.pswd, proto, ud.moddir), d, workdir=codir)
else:
@@ -259,5 +259,5 @@ class Hg(FetchMethod):
else:
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d, workdir=codir)
else:
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
logger.debug2("Unpack: extracting source to '" + codir + "'")
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d, workdir=ud.moddir)


@@ -54,12 +54,12 @@ class Local(FetchMethod):
return [path]
filespath = d.getVar('FILESPATH')
if filespath:
logger.debug(2, "Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
logger.debug2("Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
searched.extend(hist)
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
logger.debug2("Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
return searched


@@ -84,13 +84,13 @@ class Osc(FetchMethod):
Fetch url
"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
logger.debug(1, "Running %s", oscupdatecmd)
logger.debug("Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d, workdir=ud.moddir)
else:
@@ -98,7 +98,7 @@ class Osc(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
logger.debug("Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d, workdir=ud.pkgdir)


@@ -90,16 +90,16 @@ class Perforce(FetchMethod):
p4port = d.getVar('P4PORT')
if p4port:
logger.debug(1, 'Using recipe provided P4PORT: %s' % p4port)
logger.debug('Using recipe provided P4PORT: %s' % p4port)
ud.host = p4port
else:
logger.debug(1, 'Trying to use P4CONFIG to automatically set P4PORT...')
logger.debug('Trying to use P4CONFIG to automatically set P4PORT...')
ud.usingp4config = True
p4cmd = '%s info | grep "Server address"' % ud.basecmd
bb.fetch2.check_network_access(d, p4cmd, ud.url)
ud.host = runfetchcmd(p4cmd, d, True)
ud.host = ud.host.split(': ')[1].strip()
logger.debug(1, 'Determined P4PORT to be: %s' % ud.host)
logger.debug('Determined P4PORT to be: %s' % ud.host)
if not ud.host:
raise FetchError('Could not determine P4PORT from P4CONFIG')
@@ -208,7 +208,7 @@ class Perforce(FetchMethod):
for filename in p4fileslist:
item = filename.split(' - ')
lastaction = item[1].split()
logger.debug(1, 'File: %s Last Action: %s' % (item[0], lastaction[0]))
logger.debug('File: %s Last Action: %s' % (item[0], lastaction[0]))
if lastaction[0] == 'delete':
continue
filelist.append(item[0])
@@ -255,7 +255,7 @@ class Perforce(FetchMethod):
raise FetchError('Could not determine the latest perforce changelist')
tipcset = tip.split(' ')[1]
logger.debug(1, 'p4 tip found to be changelist %s' % tipcset)
logger.debug('p4 tip found to be changelist %s' % tipcset)
return tipcset
def sortable_revision(self, ud, d, name):


@@ -47,7 +47,7 @@ class Repo(FetchMethod):
"""Fetch url"""
if os.access(os.path.join(d.getVar("DL_DIR"), ud.localfile), os.R_OK):
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
logger.debug("%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
repodir = d.getVar("REPODIR") or (d.getVar("DL_DIR") + "/repo")


@@ -116,7 +116,7 @@ class Svn(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
lf = bb.utils.lockfile(ud.svnlock)
@@ -129,7 +129,7 @@ class Svn(FetchMethod):
runfetchcmd(ud.basecmd + " upgrade", d, workdir=ud.moddir)
except FetchError:
pass
logger.debug(1, "Running %s", svncmd)
logger.debug("Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.moddir)
else:
@@ -137,7 +137,7 @@ class Svn(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug(1, "Running %s", svncmd)
logger.debug("Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.pkgdir)


@@ -88,7 +88,7 @@ class Wget(FetchMethod):
progresshandler = WgetProgressHandler(d)
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
logger.debug2("Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler, workdir=workdir)
@@ -326,11 +326,11 @@ class Wget(FetchMethod):
pass
except urllib.error.URLError as e:
if try_again:
logger.debug(2, "checkstatus: trying again")
logger.debug2("checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug(2, "checkstatus() urlopen failed: %s" % e)
logger.debug2("checkstatus() urlopen failed: %s" % e)
return False
return True


@@ -71,7 +71,7 @@ def update_mtime(f):
def update_cache(f):
if f in __mtime_cache:
logger.debug(1, "Updating mtime cache for %s" % f)
logger.debug("Updating mtime cache for %s" % f)
update_mtime(f)
def clear_cache():


@@ -34,7 +34,7 @@ class IncludeNode(AstNode):
Include the file and evaluate the statements
"""
s = data.expand(self.what_file)
logger.debug(2, "CONF %s:%s: including %s", self.filename, self.lineno, s)
logger.debug2("CONF %s:%s: including %s", self.filename, self.lineno, s)
# TODO: Cache those includes... maybe not here though
if self.force:
@@ -376,7 +376,7 @@ def _create_variants(datastores, names, function, onlyfinalise):
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND") or "").split()
for append in appends:
logger.debug(1, "Appending .bbappend file %s to %s", append, fn)
logger.debug("Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
onlyfinalise = d.getVar("__ONLYFINALISE", False)


@@ -22,7 +22,7 @@ from .ConfHandler import include, init
# For compatibility
bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
__func_start_regexp__ = re.compile(r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
__export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
__addtask_regexp__ = re.compile(r"addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
@@ -60,7 +60,7 @@ def inherit(files, fn, lineno, d):
file = abs_fn
if not file in __inherit_cache:
logger.debug(1, "Inheriting %s (from %s:%d)" % (file, fn, lineno))
logger.debug("Inheriting %s (from %s:%d)" % (file, fn, lineno))
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
include(fn, file, lineno, d, "inherit")

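The BBHandler hunk above is more than a log-call rename: __func_start_regexp__ gains lookahead assertions so that "python" and "fakeroot" only act as keywords when followed by a delimiter (whitespace, or "(" in the python case). Previously a shell function whose name merely started with one of those words was mis-split into keyword plus remainder. Both patterns below are copied verbatim from the hunk; the comments show the resulting groups:

import re

OLD = re.compile(r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$")
NEW = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$")

line = "python_do_foo() {"
print(OLD.match(line).group("py", "func"))   # ('python', '_do_foo'): keyword swallowed
print(NEW.match(line).group("py", "func"))   # (None, 'python_do_foo'): one plain name

line = "python do_foo() {"
print(NEW.match(line).group("py", "func"))   # ('python', 'do_foo'): keyword still recognised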

@@ -95,7 +95,7 @@ def include_single_file(parentfn, fn, lineno, data, error_out):
if exc.errno == errno.ENOENT:
if error_out:
raise ParseError("Could not %s file %s" % (error_out, fn), parentfn, lineno)
logger.debug(2, "CONF file '%s' not found", fn)
logger.debug2("CONF file '%s' not found", fn)
else:
if error_out:
raise ParseError("Could not %s file %s: %s" % (error_out, fn, exc.strerror), parentfn, lineno)


@@ -248,7 +248,7 @@ class PersistData(object):
stacklevel=2)
self.data = persist(d)
logger.debug(1, "Using '%s' as the persistent data cache",
logger.debug("Using '%s' as the persistent data cache",
self.data.filename)
def addDomain(self, domain):


@@ -165,7 +165,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
available_vers.sort()
logger.warn("versions of %s available: %s", pn, ' '.join(available_vers))
else:
logger.debug(1, "selecting %s as PREFERRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
logger.debug("selecting %s as PREFERRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
return (preferred_ver, preferred_file)
@@ -232,7 +232,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
pkg_pn[pn] = []
pkg_pn[pn].append(p)
logger.debug(1, "providers for %s are: %s", item, list(sorted(pkg_pn.keys())))
logger.debug("providers for %s are: %s", item, list(sorted(pkg_pn.keys())))
# First add PREFERRED_VERSIONS
for pn in sorted(pkg_pn):
@@ -291,7 +291,7 @@ def filterProviders(providers, item, cfgData, dataCache):
foundUnique = True
break
logger.debug(1, "sorted providers for %s are: %s", item, eligible)
logger.debug("sorted providers for %s are: %s", item, eligible)
return eligible, foundUnique
@@ -333,7 +333,7 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide)
#logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
#logger.debug("checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
@@ -349,7 +349,7 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
if numberPreferred > 1:
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s. You could set PREFERRED_RPROVIDER_%s" % (item, preferred, preferred_vars, item))
logger.debug(1, "sorted runtime providers for %s are: %s", item, eligible)
logger.debug("sorted runtime providers for %s are: %s", item, eligible)
return eligible, numberPreferred
@@ -384,7 +384,7 @@ def getRuntimeProviders(dataCache, rdepend):
regexp_cache[pattern] = regexp
if regexp.match(rdepend):
rproviders += dataCache.packages_dynamic[pattern]
logger.debug(1, "Assuming %s is a dynamic package, but it may not exist" % rdepend)
logger.debug("Assuming %s is a dynamic package, but it may not exist" % rdepend)
return rproviders
@@ -396,22 +396,22 @@ def buildWorldTargetList(dataCache, task=None):
if dataCache.world_target:
return
logger.debug(1, "collating packages for \"world\"")
logger.debug("collating packages for \"world\"")
for f in dataCache.possible_world:
terminal = True
pn = dataCache.pkg_fn[f]
if task and task not in dataCache.task_deps[f]['tasks']:
logger.debug(2, "World build skipping %s as task %s doesn't exist", f, task)
logger.debug2("World build skipping %s as task %s doesn't exist", f, task)
terminal = False
for p in dataCache.pn_provides[pn]:
if p.startswith('virtual/'):
logger.debug(2, "World build skipping %s due to %s provider starting with virtual/", f, p)
logger.debug2("World build skipping %s due to %s provider starting with virtual/", f, p)
terminal = False
break
for pf in dataCache.providers[p]:
if dataCache.pkg_fn[pf] != pn:
logger.debug(2, "World build skipping %s due to both us and %s providing %s", f, pf, p)
logger.debug2("World build skipping %s due to both us and %s providing %s", f, pf, p)
terminal = False
break
if terminal:


@@ -38,7 +38,7 @@ def taskname_from_tid(tid):
return tid.rsplit(":", 1)[1]
def mc_from_tid(tid):
if tid.startswith('mc:'):
if tid.startswith('mc:') and tid.count(':') >= 2:
return tid.split(':')[1]
return ""
@@ -47,13 +47,13 @@ def split_tid(tid):
return (mc, fn, taskname)
def split_mc(n):
if n.startswith("mc:"):
if n.startswith("mc:") and n.count(':') >= 2:
_, mc, n = n.split(":", 2)
return (mc, n)
return ('', n)
def split_tid_mcfn(tid):
if tid.startswith('mc:'):
if tid.startswith('mc:') and tid.count(':') >= 2:
elems = tid.split(':')
mc = elems[1]
fn = ":".join(elems[2:-1])
@@ -544,8 +544,8 @@ class RunQueueData:
for tid in self.runtaskentries:
if task_done[tid] is False or deps_left[tid] != 0:
problem_tasks.append(tid)
logger.debug(2, "Task %s is not buildable", tid)
logger.debug(2, "(Complete marker was %s and the remaining dependency count was %s)\n", task_done[tid], deps_left[tid])
logger.debug2("Task %s is not buildable", tid)
logger.debug2("(Complete marker was %s and the remaining dependency count was %s)\n", task_done[tid], deps_left[tid])
self.runtaskentries[tid].weight = weight[tid]
if problem_tasks:
@@ -643,7 +643,7 @@ class RunQueueData:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
#runtid = build_tid(mc, fn, taskname)
#logger.debug(2, "Processing %s,%s:%s", mc, fn, taskname)
#logger.debug2("Processing %s,%s:%s", mc, fn, taskname)
depends = set()
task_deps = self.dataCaches[mc].task_deps[taskfn]
@@ -1199,9 +1199,9 @@ class RunQueueData:
"""
Dump some debug information on the internal data structures
"""
logger.debug(3, "run_tasks:")
logger.debug3("run_tasks:")
for tid in self.runtaskentries:
logger.debug(3, " %s: %s Deps %s RevDeps %s", tid,
logger.debug3(" %s: %s Deps %s RevDeps %s", tid,
self.runtaskentries[tid].weight,
self.runtaskentries[tid].depends,
self.runtaskentries[tid].revdeps)
@@ -1238,7 +1238,7 @@ class RunQueue:
self.fakeworker = {}
def _start_worker(self, mc, fakeroot = False, rqexec = None):
logger.debug(1, "Starting bitbake-worker")
logger.debug("Starting bitbake-worker")
magic = "decafbad"
if self.cooker.configuration.profile:
magic = "decafbadbad"
@@ -1283,7 +1283,7 @@ class RunQueue:
def _teardown_worker(self, worker):
if not worker:
return
logger.debug(1, "Teardown for bitbake-worker")
logger.debug("Teardown for bitbake-worker")
try:
worker.process.stdin.write(b"<quit></quit>")
worker.process.stdin.flush()
@@ -1356,12 +1356,12 @@ class RunQueue:
# If the stamp is missing, it's not current
if not os.access(stampfile, os.F_OK):
logger.debug(2, "Stampfile %s not available", stampfile)
logger.debug2("Stampfile %s not available", stampfile)
return False
# If it's a 'nostamp' task, it's not current
taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
if 'nostamp' in taskdep and taskname in taskdep['nostamp']:
logger.debug(2, "%s.%s is nostamp\n", fn, taskname)
logger.debug2("%s.%s is nostamp\n", fn, taskname)
return False
if taskname != "do_setscene" and taskname.endswith("_setscene"):
@@ -1385,18 +1385,18 @@ class RunQueue:
continue
if fn == fn2 or (fulldeptree and fn2 not in stampwhitelist):
if not t2:
logger.debug(2, 'Stampfile %s does not exist', stampfile2)
logger.debug2('Stampfile %s does not exist', stampfile2)
iscurrent = False
break
if t1 < t2:
logger.debug(2, 'Stampfile %s < %s', stampfile, stampfile2)
logger.debug2('Stampfile %s < %s', stampfile, stampfile2)
iscurrent = False
break
if recurse and iscurrent:
if dep in cache:
iscurrent = cache[dep]
if not iscurrent:
logger.debug(2, 'Stampfile for dependency %s:%s invalid (cached)' % (fn2, taskname2))
logger.debug2('Stampfile for dependency %s:%s invalid (cached)' % (fn2, taskname2))
else:
iscurrent = self.check_stamp_task(dep, recurse=True, cache=cache)
cache[dep] = iscurrent
@@ -1761,7 +1761,7 @@ class RunQueueExecute:
for scheduler in schedulers:
if self.scheduler == scheduler.name:
self.sched = scheduler(self, self.rqdata)
logger.debug(1, "Using runqueue scheduler '%s'", scheduler.name)
logger.debug("Using runqueue scheduler '%s'", scheduler.name)
break
else:
bb.fatal("Invalid scheduler '%s'. Available schedulers: %s" %
@@ -1899,7 +1899,7 @@ class RunQueueExecute:
break
if alldeps:
self.setbuildable(revdep)
logger.debug(1, "Marking task %s as buildable", revdep)
logger.debug("Marking task %s as buildable", revdep)
def task_complete(self, task):
self.stats.taskCompleted()
@@ -1929,7 +1929,7 @@ class RunQueueExecute:
def summarise_scenequeue_errors(self):
err = False
if not self.sqdone:
logger.debug(1, 'We could skip tasks %s', "\n".join(sorted(self.scenequeue_covered)))
logger.debug('We could skip tasks %s', "\n".join(sorted(self.scenequeue_covered)))
completeevent = sceneQueueComplete(self.sq_stats, self.rq)
bb.event.fire(completeevent, self.cfgData)
if self.sq_deferred:
@@ -1986,7 +1986,7 @@ class RunQueueExecute:
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values():
if nexttask not in self.sqdata.unskippable and len(self.sqdata.sq_revdeps[nexttask]) > 0 and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
if nexttask not in self.rqdata.target_tids:
logger.debug(2, "Skipping setscene for task %s" % nexttask)
logger.debug2("Skipping setscene for task %s" % nexttask)
self.sq_task_skip(nexttask)
self.scenequeue_notneeded.add(nexttask)
if nexttask in self.sq_deferred:
@@ -1999,28 +1999,28 @@ class RunQueueExecute:
if nexttask in self.sq_deferred:
if self.sq_deferred[nexttask] not in self.runq_complete:
continue
logger.debug(1, "Task %s no longer deferred" % nexttask)
logger.debug("Task %s no longer deferred" % nexttask)
del self.sq_deferred[nexttask]
valid = self.rq.validate_hashes(set([nexttask]), self.cooker.data, 0, False, summary=False)
if not valid:
logger.debug(1, "%s didn't become valid, skipping setscene" % nexttask)
logger.debug("%s didn't become valid, skipping setscene" % nexttask)
self.sq_task_failoutright(nexttask)
return True
else:
self.sqdata.outrightfail.remove(nexttask)
if nexttask in self.sqdata.outrightfail:
logger.debug(2, 'No package found, so skipping setscene task %s', nexttask)
logger.debug2('No package found, so skipping setscene task %s', nexttask)
self.sq_task_failoutright(nexttask)
return True
if nexttask in self.sqdata.unskippable:
logger.debug(2, "Setscene task %s is unskippable" % nexttask)
logger.debug2("Setscene task %s is unskippable" % nexttask)
task = nexttask
break
if task is not None:
(mc, fn, taskname, taskfn) = split_tid_mcfn(task)
taskname = taskname + "_setscene"
if self.rq.check_stamp_task(task, taskname_from_tid(task), recurse = True, cache=self.stampcache):
logger.debug(2, 'Stamp for underlying task %s is current, so skipping setscene variant', task)
logger.debug2('Stamp for underlying task %s is current, so skipping setscene variant', task)
self.sq_task_failoutright(task)
return True
@@ -2030,12 +2030,12 @@ class RunQueueExecute:
return True
if self.rq.check_stamp_task(task, taskname, cache=self.stampcache):
logger.debug(2, 'Setscene stamp current task %s, so skip it and its dependencies', task)
logger.debug2('Setscene stamp current task %s, so skip it and its dependencies', task)
self.sq_task_skip(task)
return True
if self.cooker.configuration.skipsetscene:
logger.debug(2, 'No setscene tasks should be executed. Skipping %s', task)
logger.debug2('No setscene tasks should be executed. Skipping %s', task)
self.sq_task_failoutright(task)
return True
@@ -2097,12 +2097,12 @@ class RunQueueExecute:
return True
if task in self.tasks_covered:
logger.debug(2, "Setscene covered task %s", task)
logger.debug2("Setscene covered task %s", task)
self.task_skip(task, "covered")
return True
if self.rq.check_stamp_task(task, taskname, cache=self.stampcache):
logger.debug(2, "Stamp current task %s", task)
logger.debug2("Stamp current task %s", task)
self.task_skip(task, "existing")
self.runq_tasksrun.add(task)
@@ -2322,7 +2322,7 @@ class RunQueueExecute:
remapped = True
if not remapped:
#logger.debug(1, "Task %s hash changes: %s->%s %s->%s" % (tid, orighash, newhash, origuni, newuni))
#logger.debug("Task %s hash changes: %s->%s %s->%s" % (tid, orighash, newhash, origuni, newuni))
self.rqdata.runtaskentries[tid].hash = newhash
self.rqdata.runtaskentries[tid].unihash = newuni
changed.add(tid)
@@ -2337,7 +2337,7 @@ class RunQueueExecute:
for mc in self.rq.fakeworker:
self.rq.fakeworker[mc].process.stdin.write(b"<newtaskhashes>" + pickle.dumps(bb.parse.siggen.get_taskhashes()) + b"</newtaskhashes>")
hashequiv_logger.debug(1, pprint.pformat("Tasks changed:\n%s" % (changed)))
hashequiv_logger.debug(pprint.pformat("Tasks changed:\n%s" % (changed)))
for tid in changed:
if tid not in self.rqdata.runq_setscene_tids:
@@ -2356,7 +2356,7 @@ class RunQueueExecute:
# Check no tasks this covers are running
for dep in self.sqdata.sq_covered_tasks[tid]:
if dep in self.runq_running and dep not in self.runq_complete:
hashequiv_logger.debug(2, "Task %s is running which blocks setscene for %s from running" % (dep, tid))
hashequiv_logger.debug2("Task %s is running which blocks setscene for %s from running" % (dep, tid))
valid = False
break
if not valid:
@@ -2430,7 +2430,7 @@ class RunQueueExecute:
for dep in sorted(self.sqdata.sq_deps[task]):
if fail and task in self.sqdata.sq_harddeps and dep in self.sqdata.sq_harddeps[task]:
logger.debug(2, "%s was unavailable and is a hard dependency of %s so skipping" % (task, dep))
logger.debug2("%s was unavailable and is a hard dependency of %s so skipping" % (task, dep))
self.sq_task_failoutright(dep)
continue
if self.sqdata.sq_revdeps[dep].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
@@ -2460,7 +2460,7 @@ class RunQueueExecute:
completed dependencies as buildable
"""
logger.debug(1, 'Found task %s which could be accelerated', task)
logger.debug('Found task %s which could be accelerated', task)
self.scenequeue_covered.add(task)
self.scenequeue_updatecounters(task)
@@ -2775,13 +2775,13 @@ def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, s
continue
if rq.check_stamp_task(tid, taskname + "_setscene", cache=stampcache):
logger.debug(2, 'Setscene stamp current for task %s', tid)
logger.debug2('Setscene stamp current for task %s', tid)
sqdata.stamppresent.add(tid)
sqrq.sq_task_skip(tid)
continue
if rq.check_stamp_task(tid, taskname, recurse = True, cache=stampcache):
logger.debug(2, 'Normal stamp current for task %s', tid)
logger.debug2('Normal stamp current for task %s', tid)
sqdata.stamppresent.add(tid)
sqrq.sq_task_skip(tid)
continue


@@ -541,7 +541,7 @@ class SignatureGeneratorUniHashMixIn(object):
# is much more interesting, so it is reported at debug level 1
hashequiv_logger.debug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
else:
hashequiv_logger.debug(2, 'No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
hashequiv_logger.debug2('No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
except hashserv.client.HashConnectionError as e:
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
@@ -615,12 +615,12 @@ class SignatureGeneratorUniHashMixIn(object):
new_unihash = data['unihash']
if new_unihash != unihash:
hashequiv_logger.debug(1, 'Task %s unihash changed %s -> %s by server %s' % (taskhash, unihash, new_unihash, self.server))
hashequiv_logger.debug('Task %s unihash changed %s -> %s by server %s' % (taskhash, unihash, new_unihash, self.server))
bb.event.fire(bb.runqueue.taskUniHashUpdate(fn + ':do_' + task, new_unihash), d)
self.set_unihash(tid, new_unihash)
d.setVar('BB_UNIHASH', new_unihash)
else:
hashequiv_logger.debug(1, 'Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
hashequiv_logger.debug('Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
except hashserv.client.HashConnectionError as e:
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
finally:
@@ -748,7 +748,7 @@ def clean_basepath(basepath):
if basepath[0] == '/':
return cleaned
if basepath.startswith("mc:"):
if basepath.startswith("mc:") and basepath.count(':') >= 2:
mc, mc_name, basepath = basepath.split(":", 2)
mc_suffix = ':mc:' + mc_name
else:


@@ -131,7 +131,7 @@ class TaskData:
for depend in dataCache.deps[fn]:
dependids.add(depend)
self.depids[fn] = list(dependids)
logger.debug(2, "Added dependencies %s for %s", str(dataCache.deps[fn]), fn)
logger.debug2("Added dependencies %s for %s", str(dataCache.deps[fn]), fn)
# Work out runtime dependencies
if not fn in self.rdepids:
@@ -149,9 +149,9 @@ class TaskData:
rreclist.append(rdepend)
rdependids.add(rdepend)
if rdependlist:
logger.debug(2, "Added runtime dependencies %s for %s", str(rdependlist), fn)
logger.debug2("Added runtime dependencies %s for %s", str(rdependlist), fn)
if rreclist:
logger.debug(2, "Added runtime recommendations %s for %s", str(rreclist), fn)
logger.debug2("Added runtime recommendations %s for %s", str(rreclist), fn)
self.rdepids[fn] = list(rdependids)
for dep in self.depids[fn]:
@@ -378,7 +378,7 @@ class TaskData:
for fn in eligible:
if fn in self.failed_fns:
continue
logger.debug(2, "adding %s to satisfy %s", fn, item)
logger.debug2("adding %s to satisfy %s", fn, item)
self.add_build_target(fn, item)
self.add_tasks(fn, dataCache)
@@ -431,7 +431,7 @@ class TaskData:
for fn in eligible:
if fn in self.failed_fns:
continue
logger.debug(2, "adding '%s' to satisfy runtime '%s'", fn, item)
logger.debug2("adding '%s' to satisfy runtime '%s'", fn, item)
self.add_runtime_target(fn, item)
self.add_tasks(fn, dataCache)
@@ -446,7 +446,7 @@ class TaskData:
return
if not missing_list:
missing_list = []
logger.debug(1, "File '%s' is unbuildable, removing...", fn)
logger.debug("File '%s' is unbuildable, removing...", fn)
self.failed_fns.append(fn)
for target in self.build_targets:
if fn in self.build_targets[target]:
@@ -526,7 +526,7 @@ class TaskData:
added = added + 1
except (bb.providers.NoRProvider, bb.providers.MultipleRProvider):
self.remove_runtarget(target)
logger.debug(1, "Resolved " + str(added) + " extra dependencies")
logger.debug("Resolved " + str(added) + " extra dependencies")
if added == 0:
break
# self.dump_data()
@@ -549,38 +549,38 @@ class TaskData:
"""
Dump some debug information on the internal data structures
"""
logger.debug(3, "build_names:")
logger.debug(3, ", ".join(self.build_targets))
logger.debug3("build_names:")
logger.debug3(", ".join(self.build_targets))
logger.debug(3, "run_names:")
logger.debug(3, ", ".join(self.run_targets))
logger.debug3("run_names:")
logger.debug3(", ".join(self.run_targets))
logger.debug(3, "build_targets:")
logger.debug3("build_targets:")
for target in self.build_targets:
targets = "None"
if target in self.build_targets:
targets = self.build_targets[target]
logger.debug(3, " %s: %s", target, targets)
logger.debug3(" %s: %s", target, targets)
logger.debug(3, "run_targets:")
logger.debug3("run_targets:")
for target in self.run_targets:
targets = "None"
if target in self.run_targets:
targets = self.run_targets[target]
logger.debug(3, " %s: %s", target, targets)
logger.debug3(" %s: %s", target, targets)
logger.debug(3, "tasks:")
logger.debug3("tasks:")
for tid in self.taskentries:
logger.debug(3, " %s: %s %s %s",
logger.debug3(" %s: %s %s %s",
tid,
self.taskentries[tid].idepends,
self.taskentries[tid].irdepends,
self.taskentries[tid].tdepends)
logger.debug(3, "dependency ids (per fn):")
logger.debug3("dependency ids (per fn):")
for fn in self.depids:
logger.debug(3, " %s: %s", fn, self.depids[fn])
logger.debug3(" %s: %s", fn, self.depids[fn])
logger.debug(3, "runtime dependency ids (per fn):")
logger.debug3("runtime dependency ids (per fn):")
for fn in self.rdepids:
logger.debug(3, " %s: %s", fn, self.rdepids[fn])
logger.debug3(" %s: %s", fn, self.rdepids[fn])


@@ -148,14 +148,14 @@ class ORMWrapper(object):
buildrequest = None
if brbe is not None:
# Toaster-triggered build
logger.debug(1, "buildinfohelper: brbe is %s" % brbe)
logger.debug("buildinfohelper: brbe is %s" % brbe)
br, _ = brbe.split(":")
buildrequest = BuildRequest.objects.get(pk=br)
prj = buildrequest.project
else:
# CLI build
prj = Project.objects.get_or_create_default_project()
logger.debug(1, "buildinfohelper: project is not specified, defaulting to %s" % prj)
logger.debug("buildinfohelper: project is not specified, defaulting to %s" % prj)
if buildrequest is not None:
# reuse existing Build object
@@ -171,7 +171,7 @@ class ORMWrapper(object):
completed_on=now,
build_name='')
logger.debug(1, "buildinfohelper: build is created %s" % build)
logger.debug("buildinfohelper: build is created %s" % build)
if buildrequest is not None:
buildrequest.build = build
@@ -906,7 +906,7 @@ class BuildInfoHelper(object):
self.project = None
logger.debug(1, "buildinfohelper: Build info helper inited %s" % vars(self))
logger.debug("buildinfohelper: Build info helper inited %s" % vars(self))
###################
@@ -1620,7 +1620,7 @@ class BuildInfoHelper(object):
# if we have a backlog of events, do our best to save them here
if len(self.internal_state['backlog']):
tempevent = self.internal_state['backlog'].pop()
logger.debug(1, "buildinfohelper: Saving stored event %s "
logger.debug("buildinfohelper: Saving stored event %s "
% tempevent)
self.store_log_event(tempevent,cli_backlog)
else:


@@ -609,7 +609,7 @@ def filter_environment(good_vars):
os.environ["LC_ALL"] = "en_US.UTF-8"
if removed_vars:
logger.debug(1, "Removed the following variables from the environment: %s", ", ".join(removed_vars.keys()))
logger.debug("Removed the following variables from the environment: %s", ", ".join(removed_vars.keys()))
return removed_vars
@@ -1613,12 +1613,12 @@ def export_proxies(d):
def load_plugins(logger, plugins, pluginpath):
def load_plugin(name):
logger.debug(1, 'Loading plugin %s' % name)
logger.debug('Loading plugin %s' % name)
spec = importlib.machinery.PathFinder.find_spec(name, path=[pluginpath] )
if spec:
return spec.loader.load_module()
logger.debug(1, 'Loading plugins from %s...' % pluginpath)
logger.debug('Loading plugins from %s...' % pluginpath)
expanded = (glob.glob(os.path.join(pluginpath, '*' + ext))
for ext in python_extensions)


@@ -50,10 +50,10 @@ class ActionPlugin(LayerPlugin):
if not (args.force or notadded):
try:
self.tinfoil.run_command('parseConfiguration')
except bb.tinfoil.TinfoilUIException:
except (bb.tinfoil.TinfoilUIException, bb.BBHandledException):
# Restore the back up copy of bblayers.conf
shutil.copy2(backup, bblayers_conf)
bb.fatal("Parse failure with the specified layer added")
bb.fatal("Parse failure with the specified layer added, aborting.")
else:
for item in notadded:
sys.stderr.write("Specified layer %s is already in BBLAYERS\n" % item)


@@ -79,7 +79,7 @@ class LayerIndexPlugin(ActionPlugin):
branches = [args.branch]
else:
branches = (self.tinfoil.config_data.getVar('LAYERSERIES_CORENAMES') or 'master').split()
logger.debug(1, 'Trying branches: %s' % branches)
logger.debug('Trying branches: %s' % branches)
ignore_layers = []
if args.ignore:


@@ -94,10 +94,10 @@ def chunkify(msg, max_chunk):
yield "\n"
def create_server(addr, dbname, *, sync=True, upstream=None):
def create_server(addr, dbname, *, sync=True, upstream=None, read_only=False):
from . import server
db = setup_database(dbname, sync=sync)
s = server.Server(db, upstream=upstream)
s = server.Server(db, upstream=upstream, read_only=read_only)
(typ, a) = parse_address(addr)
if typ == ADDR_TYPE_UNIX:


@@ -99,7 +99,7 @@ class AsyncClient(object):
l = await get_line()
m = json.loads(l)
if "chunk-stream" in m:
if m and "chunk-stream" in m:
lines = []
while True:
l = (await get_line()).rstrip("\n")
@@ -170,6 +170,12 @@ class AsyncClient(object):
{"get": {"taskhash": taskhash, "method": method, "all": all_properties}}
)
async def get_outhash(self, method, outhash, taskhash):
await self._set_mode(self.MODE_NORMAL)
return await self.send_message(
{"get-outhash": {"outhash": outhash, "taskhash": taskhash, "method": method}}
)
async def get_stats(self):
await self._set_mode(self.MODE_NORMAL)
return await self.send_message({"get-stats": None})


@@ -112,6 +112,9 @@ class Stats(object):
class ClientError(Exception):
pass
class ServerError(Exception):
pass
def insert_task(cursor, data, ignore=False):
keys = sorted(data.keys())
query = '''INSERT%s INTO tasks_v2 (%s) VALUES (%s)''' % (
@@ -127,6 +130,18 @@ async def copy_from_upstream(client, db, method, taskhash):
d = {k: v for k, v in d.items() if k in TABLE_COLUMNS}
keys = sorted(d.keys())
with closing(db.cursor()) as cursor:
insert_task(cursor, d)
db.commit()
return d
async def copy_outhash_from_upstream(client, db, method, outhash, taskhash):
d = await client.get_outhash(method, outhash, taskhash)
if d is not None:
# Filter out unknown columns
d = {k: v for k, v in d.items() if k in TABLE_COLUMNS}
keys = sorted(d.keys())
with closing(db.cursor()) as cursor:
insert_task(cursor, d)
@@ -137,8 +152,22 @@ async def copy_from_upstream(client, db, method, taskhash):
class ServerClient(object):
FAST_QUERY = 'SELECT taskhash, method, unihash FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1'
ALL_QUERY = 'SELECT * FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1'
OUTHASH_QUERY = '''
-- Find tasks with a matching outhash (that is, tasks that
-- are equivalent)
SELECT * FROM tasks_v2 WHERE method=:method AND outhash=:outhash
-- If there is an exact match on the taskhash, return it.
-- Otherwise return the oldest matching outhash of any
-- taskhash
ORDER BY CASE WHEN taskhash=:taskhash THEN 1 ELSE 2 END,
created ASC
-- Only return one row
LIMIT 1
'''
def __init__(self, reader, writer, db, request_stats, backfill_queue, upstream):
def __init__(self, reader, writer, db, request_stats, backfill_queue, upstream, read_only):
self.reader = reader
self.writer = writer
self.db = db
@@ -149,15 +178,20 @@ class ServerClient(object):
self.handlers = {
'get': self.handle_get,
'report': self.handle_report,
'report-equiv': self.handle_equivreport,
'get-outhash': self.handle_get_outhash,
'get-stream': self.handle_get_stream,
'get-stats': self.handle_get_stats,
'reset-stats': self.handle_reset_stats,
'chunk-stream': self.handle_chunk,
'backfill-wait': self.handle_backfill_wait,
}
if not read_only:
self.handlers.update({
'report': self.handle_report,
'report-equiv': self.handle_equivreport,
'reset-stats': self.handle_reset_stats,
'backfill-wait': self.handle_backfill_wait,
})
async def process_requests(self):
if self.upstream is not None:
self.upstream_client = await create_async_client(self.upstream)
@@ -282,6 +316,21 @@ class ServerClient(object):
self.write_message(d)
async def handle_get_outhash(self, request):
with closing(self.db.cursor()) as cursor:
cursor.execute(self.OUTHASH_QUERY,
{k: request[k] for k in ('method', 'outhash', 'taskhash')})
row = cursor.fetchone()
if row is not None:
logger.debug('Found equivalent outhash %s -> %s', (row['outhash'], row['unihash']))
d = {k: row[k] for k in row.keys()}
else:
d = None
self.write_message(d)
async def handle_get_stream(self, request):
self.write_message('ok')
@@ -335,23 +384,19 @@ class ServerClient(object):
async def handle_report(self, data):
with closing(self.db.cursor()) as cursor:
cursor.execute('''
-- Find tasks with a matching outhash (that is, tasks that
-- are equivalent)
SELECT taskhash, method, unihash FROM tasks_v2 WHERE method=:method AND outhash=:outhash
-- If there is an exact match on the taskhash, return it.
-- Otherwise return the oldest matching outhash of any
-- taskhash
ORDER BY CASE WHEN taskhash=:taskhash THEN 1 ELSE 2 END,
created ASC
-- Only return one row
LIMIT 1
''', {k: data[k] for k in ('method', 'outhash', 'taskhash')})
cursor.execute(self.OUTHASH_QUERY,
{k: data[k] for k in ('method', 'outhash', 'taskhash')})
row = cursor.fetchone()
if row is None and self.upstream_client:
# Try upstream
row = await copy_outhash_from_upstream(self.upstream_client,
self.db,
data['method'],
data['outhash'],
data['taskhash'])
# If no matching outhash was found, or one *was* found but it
# wasn't an exact match on the taskhash, a new entry for this
# taskhash should be added
@@ -455,7 +500,10 @@ class ServerClient(object):
class Server(object):
def __init__(self, db, loop=None, upstream=None):
def __init__(self, db, loop=None, upstream=None, read_only=False):
if upstream and read_only:
raise ServerError("Read-only hashserv cannot pull from an upstream server")
self.request_stats = Stats()
self.db = db
@@ -467,6 +515,7 @@ class Server(object):
self.close_loop = False
self.upstream = upstream
self.read_only = read_only
self._cleanup_socket = None
@@ -510,7 +559,7 @@ class Server(object):
async def handle_client(self, reader, writer):
# writer.transport.set_write_buffer_limits(0)
try:
client = ServerClient(reader, writer, self.db, self.request_stats, self.backfill_queue, self.upstream)
client = ServerClient(reader, writer, self.db, self.request_stats, self.backfill_queue, self.upstream, self.read_only)
await client.process_requests()
except Exception as e:
import traceback

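Taken together, the hashserv hunks above add a read-only server mode: create_server() grows a read_only keyword, the write-side handlers (report, report-equiv, reset-stats, backfill-wait) are only registered when writes are allowed, and combining read_only=True with an upstream raises the new ServerError. A hypothetical invocation, with placeholder socket and database paths:

from hashserv import create_server

# Serve an existing equivalence database without accepting writes:
# clients can still query hashes, but report requests are rejected
# because those handlers are never registered.
server = create_server("unix://./hashserve.sock", "./hashserv.sqlite",
                       upstream=None,  # an upstream plus read_only=True raises ServerError
                       read_only=True)
server.serve_forever()  # assumes the Server.serve_forever() entry point used by bitbake-hashserv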

@@ -6,6 +6,7 @@
#
from . import create_server, create_client
from .client import HashConnectionError
import hashlib
import logging
import multiprocessing
@@ -29,7 +30,7 @@ class HashEquivalenceTestSetup(object):
server_index = 0
def start_server(self, dbpath=None, upstream=None):
def start_server(self, dbpath=None, upstream=None, read_only=False):
self.server_index += 1
if dbpath is None:
dbpath = os.path.join(self.temp_dir.name, "db%d.sqlite" % self.server_index)
@@ -38,7 +39,10 @@ class HashEquivalenceTestSetup(object):
thread.terminate()
thread.join()
server = create_server(self.get_server_addr(self.server_index), dbpath, upstream=upstream)
server = create_server(self.get_server_addr(self.server_index),
dbpath,
upstream=upstream,
read_only=read_only)
server.dbpath = dbpath
server.thread = multiprocessing.Process(target=_run_server, args=(server, self.server_index))
@@ -242,6 +246,43 @@ class HashEquivalenceCommonTests(object):
self.assertClientGetHash(side_client, taskhash4, unihash4)
self.assertClientGetHash(self.client, taskhash4, None)
# Test that reporting a unihash in the downstream is able to find a
# match which was previously reported to the upstream server
taskhash5 = '35788efcb8dfb0a02659d81cf2bfd695fb30faf9'
outhash5 = '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f'
unihash5 = 'f46d3fbb439bd9b921095da657a4de906510d2cd'
result = self.client.report_unihash(taskhash5, self.METHOD, outhash5, unihash5)
taskhash6 = '35788efcb8dfb0a02659d81cf2bfd695fb30fafa'
unihash6 = 'f46d3fbb439bd9b921095da657a4de906510d2ce'
result = down_client.report_unihash(taskhash6, self.METHOD, outhash5, unihash6)
self.assertEqual(result['unihash'], unihash5, 'Server failed to copy unihash from upstream')
def test_ro_server(self):
(ro_client, ro_server) = self.start_server(dbpath=self.server.dbpath, read_only=True)
# Report a hash via the read-write server
taskhash = '35788efcb8dfb0a02659d81cf2bfd695fb30faf9'
outhash = '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f'
unihash = 'f46d3fbb439bd9b921095da657a4de906510d2cd'
result = self.client.report_unihash(taskhash, self.METHOD, outhash, unihash)
self.assertEqual(result['unihash'], unihash, 'Server returned bad unihash')
# Check the hash via the read-only server
self.assertClientGetHash(ro_client, taskhash, unihash)
# Ensure that reporting via the read-only server fails
taskhash2 = 'c665584ee6817aa99edfc77a44dd853828279370'
outhash2 = '3c979c3db45c569f51ab7626a4651074be3a9d11a84b1db076f5b14f7d39db44'
unihash2 = '90e9bc1d1f094c51824adca7f8ea79a048d68824'
with self.assertRaises(HashConnectionError):
ro_client.report_unihash(taskhash2, self.METHOD, outhash2, unihash2)
# Ensure that the database was not modified
self.assertClientGetHash(self.client, taskhash2, None)
class TestHashEquivalenceUnixServer(HashEquivalenceTestSetup, HashEquivalenceCommonTests, unittest.TestCase):
def get_server_addr(self, server_idx):


@@ -94,7 +94,7 @@ class LayerIndex():
if not param:
continue
item = param.split('=', 1)
logger.debug(1, item)
logger.debug(item)
param_dict[item[0]] = item[1]
return param_dict
@@ -123,7 +123,7 @@ class LayerIndex():
up = urlparse(url)
if username:
logger.debug(1, "Configuring authentication for %s..." % url)
logger.debug("Configuring authentication for %s..." % url)
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, "%s://%s" % (up.scheme, up.netloc), username, password)
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
@@ -133,20 +133,20 @@ class LayerIndex():
urllib.request.install_opener(opener)
logger.debug(1, "Fetching %s (%s)..." % (url, ["without authentication", "with authentication"][bool(username)]))
logger.debug("Fetching %s (%s)..." % (url, ["without authentication", "with authentication"][bool(username)]))
try:
res = urlopen(Request(url, headers={'User-Agent': 'Mozilla/5.0 (bitbake/lib/layerindex)'}, unverifiable=True))
except urllib.error.HTTPError as e:
logger.debug(1, "HTTP Error: %s: %s" % (e.code, e.reason))
logger.debug(1, " Requested: %s" % (url))
logger.debug(1, " Actual: %s" % (e.geturl()))
logger.debug("HTTP Error: %s: %s" % (e.code, e.reason))
logger.debug(" Requested: %s" % (url))
logger.debug(" Actual: %s" % (e.geturl()))
if e.code == 404:
logger.debug(1, "Request not found.")
logger.debug("Request not found.")
raise LayerIndexFetchError(url, e)
else:
logger.debug(1, "Headers:\n%s" % (e.headers))
logger.debug("Headers:\n%s" % (e.headers))
raise LayerIndexFetchError(url, e)
except OSError as e:
error = 0
@@ -170,7 +170,7 @@ class LayerIndex():
raise LayerIndexFetchError(url, "Unable to fetch OSError exception: %s" % e)
finally:
logger.debug(1, "...fetching %s (%s), done." % (url, ["without authentication", "with authentication"][bool(username)]))
logger.debug("...fetching %s (%s), done." % (url, ["without authentication", "with authentication"][bool(username)]))
return res
@@ -205,14 +205,14 @@ The format of the indexURI:
if reload:
self.indexes = []
logger.debug(1, 'Loading: %s' % indexURI)
logger.debug('Loading: %s' % indexURI)
if not self.plugins:
raise LayerIndexException("No LayerIndex Plugins available")
for plugin in self.plugins:
# Check if the plugin was initialized
logger.debug(1, 'Trying %s' % plugin.__class__)
logger.debug('Trying %s' % plugin.__class__)
if not hasattr(plugin, 'type') or not plugin.type:
continue
try:
@@ -220,11 +220,11 @@ The format of the indexURI:
indexEnt = plugin.load_index(indexURI, load)
break
except LayerIndexPluginUrlError as e:
logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
logger.debug("%s doesn't support %s" % (plugin.type, e.url))
except NotImplementedError:
pass
else:
logger.debug(1, "No plugins support %s" % indexURI)
logger.debug("No plugins support %s" % indexURI)
raise LayerIndexException("No plugins support %s" % indexURI)
# Mark CONFIG data as something we've added...
@@ -255,19 +255,19 @@ will write out the individual elements split by layer and related components.
for plugin in self.plugins:
# Check if the plugin was initialized
logger.debug(1, 'Trying %s' % plugin.__class__)
logger.debug('Trying %s' % plugin.__class__)
if not hasattr(plugin, 'type') or not plugin.type:
continue
try:
plugin.store_index(indexURI, index)
break
except LayerIndexPluginUrlError as e:
logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
logger.debug("%s doesn't support %s" % (plugin.type, e.url))
except NotImplementedError:
logger.debug(1, "Store not implemented in %s" % plugin.type)
logger.debug("Store not implemented in %s" % plugin.type)
pass
else:
logger.debug(1, "No plugins support %s" % indexURI)
logger.debug("No plugins support %s" % indexURI)
raise LayerIndexException("No plugins support %s" % indexURI)
@@ -292,7 +292,7 @@ layerBranches set. If not, they are effectively blank.'''
the default configuration until the first vcs_url/branch match.'''
for index in self.indexes:
logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
logger.debug(' searching %s' % index.config['DESCRIPTION'])
layerBranch = index.find_vcs_url(vcs_url, [branch])
if layerBranch:
return layerBranch
@@ -304,7 +304,7 @@ layerBranches set. If not, they are effectively blank.'''
If a branch has not been specified, we will iterate over the branches in
the default configuration until the first collection/branch match.'''
logger.debug(1, 'find_collection: %s (%s) %s' % (collection, version, branch))
logger.debug('find_collection: %s (%s) %s' % (collection, version, branch))
if branch:
branches = [branch]
@@ -312,12 +312,12 @@ layerBranches set. If not, they are effectively blank.'''
branches = None
for index in self.indexes:
logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
logger.debug(' searching %s' % index.config['DESCRIPTION'])
layerBranch = index.find_collection(collection, version, branches)
if layerBranch:
return layerBranch
else:
logger.debug(1, 'Collection %s (%s) not found for branch (%s)' % (collection, version, branch))
logger.debug('Collection %s (%s) not found for branch (%s)' % (collection, version, branch))
return None
def find_layerbranch(self, name, branch=None):
@@ -408,7 +408,7 @@ layerBranches set. If not, they are effectively blank.'''
version=deplayerbranch.version
)
if rdeplayerbranch != deplayerbranch:
logger.debug(1, 'Replaced %s:%s:%s with %s:%s:%s' % \
logger.debug('Replaced %s:%s:%s with %s:%s:%s' % \
(deplayerbranch.index.config['DESCRIPTION'],
deplayerbranch.branch.name,
deplayerbranch.layer.name,
@@ -1121,7 +1121,7 @@ class LayerBranch(LayerIndexItemObj):
@property
def branch(self):
try:
logger.debug(1, "Get branch object from branches[%s]" % (self.branch_id))
logger.debug("Get branch object from branches[%s]" % (self.branch_id))
return self.index.branches[self.branch_id]
except KeyError:
raise AttributeError('Unable to find branches in index to map branch_id %s' % self.branch_id)
@@ -1149,7 +1149,7 @@ class LayerBranch(LayerIndexItemObj):
@actual_branch.setter
def actual_branch(self, value):
logger.debug(1, "Set actual_branch to %s .. name is %s" % (value, self.branch.name))
logger.debug("Set actual_branch to %s .. name is %s" % (value, self.branch.name))
if value != self.branch.name:
self._setattr('actual_branch', value, prop=False)
else:

View File

@@ -173,7 +173,7 @@ class CookerPlugin(layerindexlib.plugin.IndexPlugin):
else:
branches = ['HEAD']
logger.debug(1, "Loading cooker data branches %s" % branches)
logger.debug("Loading cooker data branches %s" % branches)
index = self._load_bblayers(branches=branches)
@@ -220,7 +220,7 @@ class CookerPlugin(layerindexlib.plugin.IndexPlugin):
required=required, layerbranch=layerBranchId,
dependency=depLayerBranch.layer_id)
logger.debug(1, '%s requires %s' % (layerDependency.layer.name, layerDependency.dependency.name))
logger.debug('%s requires %s' % (layerDependency.layer.name, layerDependency.dependency.name))
index.add_element("layerDependencies", [layerDependency])
return layerDependencyId

View File

@@ -82,7 +82,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
def load_cache(path, index, branches=[]):
logger.debug(1, 'Loading json file %s' % path)
logger.debug('Loading json file %s' % path)
with open(path, 'rt', encoding='utf-8') as f:
pindex = json.load(f)
@@ -102,7 +102,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
if newpBranch:
index.add_raw_element('branches', layerindexlib.Branch, newpBranch)
else:
logger.debug(1, 'No matching branches (%s) in index file(s)' % branches)
logger.debug('No matching branches (%s) in index file(s)' % branches)
# No matching branches.. return nothing...
return
@@ -120,7 +120,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
load_cache(up.path, index, branches)
return index
logger.debug(1, 'Loading from dir %s...' % (up.path))
logger.debug('Loading from dir %s...' % (up.path))
for (dirpath, _, filenames) in os.walk(up.path):
for filename in filenames:
if not filename.endswith('.json'):
@@ -144,7 +144,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
def _get_json_response(apiurl=None, username=None, password=None, retry=True):
assert apiurl is not None
logger.debug(1, "fetching %s" % apiurl)
logger.debug("fetching %s" % apiurl)
up = urlparse(apiurl)
@@ -163,9 +163,9 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
parsed = json.loads(res.read().decode('utf-8'))
except ConnectionResetError:
if retry:
logger.debug(1, "%s: Connection reset by peer. Retrying..." % url)
logger.debug("%s: Connection reset by peer. Retrying..." % url)
parsed = _get_json_response(apiurl=up_stripped.geturl(), username=username, password=password, retry=False)
logger.debug(1, "%s: retry successful.")
logger.debug("%s: retry successful.")
else:
raise layerindexlib.LayerIndexFetchError('%s: Connection reset by peer. Is there a firewall blocking your connection?' % apiurl)
@@ -207,25 +207,25 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
if "*" not in branches:
filter = "?filter=name:%s" % "OR".join(branches)
logger.debug(1, "Loading %s from %s" % (branches, index.apilinks['branches']))
logger.debug("Loading %s from %s" % (branches, index.apilinks['branches']))
# The link won't include username/password, so pull it from the original url
pindex['branches'] = _get_json_response(index.apilinks['branches'] + filter,
username=up.username, password=up.password)
if not pindex['branches']:
logger.debug(1, "No valid branches (%s) found at url %s." % (branch, url))
logger.debug("No valid branches (%s) found at url %s." % (branch, url))
return index
index.add_raw_element("branches", layerindexlib.Branch, pindex['branches'])
# Load all of the layerItems (these can not be easily filtered)
logger.debug(1, "Loading %s from %s" % ('layerItems', index.apilinks['layerItems']))
logger.debug("Loading %s from %s" % ('layerItems', index.apilinks['layerItems']))
# The link won't include username/password, so pull it from the original url
pindex['layerItems'] = _get_json_response(index.apilinks['layerItems'],
username=up.username, password=up.password)
if not pindex['layerItems']:
logger.debug(1, "No layers were found at url %s." % (url))
logger.debug("No layers were found at url %s." % (url))
return index
index.add_raw_element("layerItems", layerindexlib.LayerItem, pindex['layerItems'])
@@ -235,13 +235,13 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
for branch in index.branches:
filter = "?filter=branch__name:%s" % index.branches[branch].name
logger.debug(1, "Loading %s from %s" % ('layerBranches', index.apilinks['layerBranches']))
logger.debug("Loading %s from %s" % ('layerBranches', index.apilinks['layerBranches']))
# The link won't include username/password, so pull it from the original url
pindex['layerBranches'] = _get_json_response(index.apilinks['layerBranches'] + filter,
username=up.username, password=up.password)
if not pindex['layerBranches']:
logger.debug(1, "No valid layer branches (%s) found at url %s." % (branches or "*", url))
logger.debug("No valid layer branches (%s) found at url %s." % (branches or "*", url))
return index
index.add_raw_element("layerBranches", layerindexlib.LayerBranch, pindex['layerBranches'])
@@ -256,7 +256,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
("distros", layerindexlib.Distro)]:
if lName not in load:
continue
logger.debug(1, "Loading %s from %s" % (lName, index.apilinks[lName]))
logger.debug("Loading %s from %s" % (lName, index.apilinks[lName]))
# The link won't include username/password, so pull it from the original url
pindex[lName] = _get_json_response(index.apilinks[lName] + filter,
@@ -283,7 +283,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
if up.scheme != 'file':
raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
logger.debug(1, "Storing to %s..." % up.path)
logger.debug("Storing to %s..." % up.path)
try:
layerbranches = index.layerBranches
@@ -299,12 +299,12 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
if getattr(index, objects)[obj].layerbranch_id == layerbranchid:
filtered.append(getattr(index, objects)[obj]._data)
except AttributeError:
logger.debug(1, 'No obj.layerbranch_id: %s' % objects)
logger.debug('No obj.layerbranch_id: %s' % objects)
# No simple filter method, just include it...
try:
filtered.append(getattr(index, objects)[obj]._data)
except AttributeError:
logger.debug(1, 'No obj._data: %s %s' % (objects, type(obj)))
logger.debug('No obj._data: %s %s' % (objects, type(obj)))
filtered.append(obj)
return filtered

View File

@@ -72,7 +72,7 @@ class LayerIndexCookerTest(LayersTest):
def test_find_collection(self):
def _check(collection, expected):
self.logger.debug(1, "Looking for collection %s..." % collection)
self.logger.debug("Looking for collection %s..." % collection)
result = self.layerindex.find_collection(collection)
if expected:
self.assertIsNotNone(result, msg="Did not find %s when it shouldn't be there" % collection)
@@ -91,7 +91,7 @@ class LayerIndexCookerTest(LayersTest):
def test_find_layerbranch(self):
def _check(name, expected):
self.logger.debug(1, "Looking for layerbranch %s..." % name)
self.logger.debug("Looking for layerbranch %s..." % name)
result = self.layerindex.find_layerbranch(name)
if expected:
self.assertIsNotNone(result, msg="Did not find %s when it shouldn't be there" % collection)

View File

@@ -57,11 +57,11 @@ class LayerIndexWebRestApiTest(LayersTest):
type in self.layerindex.indexes[0].config['local']:
continue
for id in getattr(self.layerindex.indexes[0], type):
self.logger.debug(1, "type %s" % (type))
self.logger.debug("type %s" % (type))
self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number not in reloaded index")
self.logger.debug(1, "%s ? %s" % (getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id]))
self.logger.debug("%s ? %s" % (getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id]))
self.assertEqual(getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id], msg="Reloaded contents different")
@@ -80,11 +80,11 @@ class LayerIndexWebRestApiTest(LayersTest):
type in self.layerindex.indexes[0].config['local']:
continue
for id in getattr(self.layerindex.indexes[0] ,type):
self.logger.debug(1, "type %s" % (type))
self.logger.debug("type %s" % (type))
self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number missing from reloaded data")
self.logger.debug(1, "%s ? %s" % (getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id]))
self.logger.debug("%s ? %s" % (getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id]))
self.assertEqual(getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id], msg="reloaded data does not match original")
@@ -111,14 +111,14 @@ class LayerIndexWebRestApiTest(LayersTest):
if dep.layer.name == 'meta-python':
break
else:
self.logger.debug(1, "meta-python was not found")
self.logger.debug("meta-python was not found")
raise self.failureException
# Only check the first element...
break
else:
# Empty list, this is bad.
self.logger.debug(1, "Empty list of dependencies")
self.logger.debug("Empty list of dependencies")
self.assertIsNotNone(first, msg="Empty list of dependencies")
# Last dep should be the requested item
@@ -128,7 +128,7 @@ class LayerIndexWebRestApiTest(LayersTest):
@skipIfNoNetwork()
def test_find_collection(self):
def _check(collection, expected):
self.logger.debug(1, "Looking for collection %s..." % collection)
self.logger.debug("Looking for collection %s..." % collection)
result = self.layerindex.find_collection(collection)
if expected:
self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection)
@@ -148,11 +148,11 @@ class LayerIndexWebRestApiTest(LayersTest):
@skipIfNoNetwork()
def test_find_layerbranch(self):
def _check(name, expected):
self.logger.debug(1, "Looking for layerbranch %s..." % name)
self.logger.debug("Looking for layerbranch %s..." % name)
for index in self.layerindex.indexes:
for layerbranchid in index.layerBranches:
self.logger.debug(1, "Present: %s" % index.layerBranches[layerbranchid].layer.name)
self.logger.debug("Present: %s" % index.layerBranches[layerbranchid].layer.name)
result = self.layerindex.find_layerbranch(name)
if expected:
self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection)

View File

@@ -17,7 +17,7 @@ def autotools_dep_prepend(d):
and not d.getVar('INHIBIT_DEFAULT_DEPS'):
deps += 'libtool-cross '
return deps + 'gnu-config-native '
return deps
DEPENDS_prepend = "${@autotools_dep_prepend(d)} "
@@ -30,7 +30,7 @@ inherit siteinfo
export CONFIG_SITE
acpaths ?= "default"
EXTRA_AUTORECONF = "--exclude=autopoint"
EXTRA_AUTORECONF = "--exclude=autopoint --exclude=gtkdocize"
export lt_cv_sys_lib_dlsearch_path_spec = "${libdir} ${base_libdir}"
@@ -215,21 +215,13 @@ autotools_do_configure() {
PRUNE_M4="$PRUNE_M4 gettext.m4 iconv.m4 lib-ld.m4 lib-link.m4 lib-prefix.m4 nls.m4 po.m4 progtest.m4"
fi
mkdir -p m4
if grep -q "^[[:space:]]*[AI][CT]_PROG_INTLTOOL" $CONFIGURE_AC; then
if ! echo "${DEPENDS}" | grep -q intltool-native; then
bbwarn "Missing DEPENDS on intltool-native"
fi
PRUNE_M4="$PRUNE_M4 intltool.m4"
bbnote Executing intltoolize --copy --force --automake
intltoolize --copy --force --automake
fi
for i in $PRUNE_M4; do
find ${S} -ignore_readdir_race -name $i -delete
done
bbnote Executing ACLOCAL=\"$ACLOCAL\" autoreconf -Wcross --verbose --install --force ${EXTRA_AUTORECONF} $acpaths
ACLOCAL="$ACLOCAL" autoreconf -Wcross --verbose --install --force ${EXTRA_AUTORECONF} $acpaths || die "autoreconf execution failed."
ACLOCAL="$ACLOCAL" autoreconf -Wcross -Wno-obsolete --verbose --install --force ${EXTRA_AUTORECONF} $acpaths || die "autoreconf execution failed."
cd $olddir
fi
if [ -e ${CONFIGURE_SCRIPT} ]; then

View File

@@ -53,6 +53,13 @@ CVE_CHECK_PN_WHITELIST ?= ""
#
CVE_CHECK_WHITELIST ?= ""
# Layers to be excluded
CVE_CHECK_LAYER_EXCLUDELIST ??= ""
# Layers to be included
CVE_CHECK_LAYER_INCLUDELIST ??= ""
# set to "alphabetical" for version using single alphabetical character as increament release
CVE_VERSION_SUFFIX ??= ""
@@ -334,7 +341,20 @@ def cve_write_data(d, patched, unpatched, whitelisted, cve_data):
CVE manifest if enabled.
"""
cve_file = d.getVar("CVE_CHECK_LOG")
fdir_name = d.getVar("FILE_DIRNAME")
layer = fdir_name.split("/")[-3]
include_layers = d.getVar("CVE_CHECK_LAYER_INCLUDELIST").split()
exclude_layers = d.getVar("CVE_CHECK_LAYER_EXCLUDELIST").split()
if exclude_layers and layer in exclude_layers:
return
if include_layers and layer not in include_layers:
return
nvd_link = "https://web.nvd.nist.gov/view/vuln/detail?vulnId="
write_string = ""
unpatched_cves = []
@@ -344,6 +364,7 @@ def cve_write_data(d, patched, unpatched, whitelisted, cve_data):
is_patched = cve in patched
if is_patched and (d.getVar("CVE_CHECK_REPORT_PATCHED") != "1"):
continue
write_string += "LAYER: %s\n" % layer
write_string += "PACKAGE NAME: %s\n" % d.getVar("PN")
write_string += "PACKAGE VERSION: %s%s\n" % (d.getVar("EXTENDPE"), d.getVar("PV"))
write_string += "CVE: %s\n" % cve

View File

@@ -110,7 +110,7 @@ IMAGE_CMD_squashfs-lz4 = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAM
IMAGE_CMD_TAR ?= "tar"
# ignore return code 1 "file changed as we read it" as other tasks(e.g. do_image_wic) may be hardlinking rootfs
IMAGE_CMD_tar = "${IMAGE_CMD_TAR} --sort=name --format=gnu --numeric-owner -cf ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.tar -C ${IMAGE_ROOTFS} . || [ $? -eq 1 ]"
IMAGE_CMD_tar = "${IMAGE_CMD_TAR} --sort=name --format=posix --numeric-owner -cf ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.tar -C ${IMAGE_ROOTFS} . || [ $? -eq 1 ]"
do_image_cpio[cleandirs] += "${WORKDIR}/cpio_append"
IMAGE_CMD_cpio () {

View File

@@ -210,7 +210,8 @@ def license_deployed_manifest(d):
os.unlink(lic_manifest_symlink_dir)
# create the image dir symlink
os.symlink(lic_manifest_dir, lic_manifest_symlink_dir)
if lic_manifest_dir != lic_manifest_symlink_dir:
os.symlink(lic_manifest_dir, lic_manifest_symlink_dir)
def get_deployed_dependencies(d):
"""

View File

@@ -20,7 +20,7 @@
inherit python3native
DEPENDS_prepend = "nodejs-native "
RDEPENDS_${PN}_prepend = "nodejs "
RDEPENDS_${PN}_append_class-target = " nodejs"
NPM_INSTALL_DEV ?= "0"

View File

@@ -10,7 +10,9 @@ GCCPIE ?= "--enable-default-pie"
# _FORTIFY_SOURCE requires -O1 or higher, so disable in debug builds as they use
# -O0 which then results in a compiler warning.
lcl_maybe_fortify ?= "${@oe.utils.conditional('DEBUG_BUILD','1','','-D_FORTIFY_SOURCE=2',d)}"
OPTLEVEL = "${@bb.utils.filter('SELECTED_OPTIMIZATION', '-O0 -O1 -O2 -O3 -Ofast -Og -Os -Oz -O', d)}"
lcl_maybe_fortify ?= "${@oe.utils.conditional('OPTLEVEL','-O0','','${OPTLEVEL} -D_FORTIFY_SOURCE=2',d)}"
# Error on use of format strings that represent possible security problems
SECURITY_STRINGFORMAT ?= "-Wformat -Wformat-security -Werror=format-security"
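
(To make the new expression above easier to follow: bb.utils.filter() keeps only the words of the named variable that also appear in the given list, and oe.utils.conditional() picks one of two strings depending on a variable's value. A rough Python illustration under those assumptions — the SELECTED_OPTIMIZATION value below is made up for the example:)

def filter_words(value, allowed):
    # Keep only the -O words actually present in SELECTED_OPTIMIZATION.
    allowed = set(allowed.split())
    return " ".join(w for w in value.split() if w in allowed)

selected_optimization = "-O2 -pipe -g -feliminate-unused-debug-types"
optlevel = filter_words(selected_optimization,
                        "-O0 -O1 -O2 -O3 -Ofast -Og -Os -Oz -O")

# OPTLEVEL becomes "-O2", so lcl_maybe_fortify expands to
# "-O2 -D_FORTIFY_SOURCE=2"; had OPTLEVEL been "-O0", it would expand to "".
fortify = "" if optlevel == "-O0" else ("%s -D_FORTIFY_SOURCE=2" % optlevel)
print(fortify)
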

View File

@@ -20,7 +20,7 @@ GCCVERSION ?= "10.%"
SDKGCCVERSION ?= "${GCCVERSION}"
BINUVERSION ?= "2.36%"
GDBVERSION ?= "10.%"
GLIBCVERSION ?= "2.32"
GLIBCVERSION ?= "2.33"
LINUXLIBCVERSION ?= "5.10%"
QEMUVERSION ?= "5.2%"
GOVERSION ?= "1.15%"

View File

@@ -6,9 +6,9 @@
# to the distro running on the build machine.
#
UNINATIVE_MAXGLIBCVERSION = "2.32"
UNINATIVE_MAXGLIBCVERSION = "2.33"
UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/2.9/"
UNINATIVE_CHECKSUM[aarch64] ?= "9f25a667aee225b1dd65c4aea73e01983e825b1cb9b56937932a1ee328b45f81"
UNINATIVE_CHECKSUM[i686] ?= "cae5d73245d95b07cf133b780ba3f6c8d0adca3ffc4e7e7fab999961d5e24d36"
UNINATIVE_CHECKSUM[x86_64] ?= "d07916b95c419c81541a19c8ef0ed8cbd78ae18437ff28a4c8a60ef40518e423"
UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/2.11/"
UNINATIVE_CHECKSUM[aarch64] ?= "fa703e25c26eaebb1afd895337b92a24cc5077818e093af74912e53846a117fe"
UNINATIVE_CHECKSUM[i686] ?= "638901c990ffbe716a34400134a2ad49a1c3104e3b48cdafd6fcd28e9b133294"
UNINATIVE_CHECKSUM[x86_64] ?= "047ddd78d6b5cabd2a102120e27755a9eaa1d5724c6a8f4007daa3f10ecb6871"

View File

@@ -123,6 +123,8 @@ CONFLICT_MACHINE_FEATURES[doc] = "When a recipe inherits the features_check clas
CORE_IMAGE_EXTRA_INSTALL[doc] = "Specifies the list of packages to be added to the image. You should only set this variable in the conf/local.conf file in the Build Directory."
COREBASE[doc] = "Specifies the parent directory of the OpenEmbedded Core Metadata layer (i.e. meta)."
CONF_VERSION[doc] = "Tracks the version of local.conf. Increased each time build/conf/ changes incompatibly."
CVE_CHECK_LAYER_EXCLUDELIST[doc] = "Defines which layers to exclude from cve-check scanning"
CVE_CHECK_LAYER_INCLUDELIST[doc] = "Defines which layers to include during cve-check scanning"
#D

View File

@@ -3,7 +3,7 @@
# See sanity.bbclass
#
# Expert users can confirm their sanity with "touch conf/sanity.conf"
BB_MIN_VERSION = "1.47.0"
BB_MIN_VERSION = "1.49.1"
SANITY_ABIFILE = "${TMPDIR}/abi_version"

View File

@@ -185,7 +185,7 @@ class Custom(Terminal):
Terminal.__init__(self, sh_cmd, title, env, d)
logger.warning('Custom terminal was started.')
else:
logger.debug(1, 'No custom terminal (OE_TERMINAL_CUSTOMCMD) set')
logger.debug('No custom terminal (OE_TERMINAL_CUSTOMCMD) set')
raise UnsupportedTerminal('OE_TERMINAL_CUSTOMCMD not set')
@@ -216,7 +216,7 @@ def spawn_preferred(sh_cmd, title=None, env=None, d=None):
def spawn(name, sh_cmd, title=None, env=None, d=None):
"""Spawn the specified terminal, by name"""
logger.debug(1, 'Attempting to spawn terminal "%s"', name)
logger.debug('Attempting to spawn terminal "%s"', name)
try:
terminal = Registry.registry[name]
except KeyError:

View File

@@ -65,7 +65,6 @@ exclude_packages = [
'quilt-ptest',
'rsync',
'ruby',
'spirv-tools-dev',
'swig',
'syslinux-misc',
'systemd-bootchart',

View File

@@ -318,6 +318,7 @@ class Wic(WicTestCase):
"--image-name=core-image-minimal "
"-D -o %s" % self.resultdir)
self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
self.assertEqual(1, len(glob(self.resultdir + "tmp.wic*")))
def test_debug_long(self):
"""Test --debug option"""
@@ -325,6 +326,7 @@ class Wic(WicTestCase):
"--image-name=core-image-minimal "
"--debug -o %s" % self.resultdir)
self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
self.assertEqual(1, len(glob(self.resultdir + "tmp.wic*")))
def test_skip_build_check_short(self):
"""Test -s option"""
@@ -588,6 +590,9 @@ part / --source rootfs --fstype=ext4 --include-path %s --include-path core-imag
def test_permissions(self):
"""Test permissions are respected"""
# prepare wicenv and rootfs
bitbake('core-image-minimal core-image-minimal-mtdutils -c do_rootfs_wicenv')
oldpath = os.environ['PATH']
os.environ['PATH'] = get_bb_var("PATH", "wic-tools")
@@ -621,6 +626,19 @@ part /etc --source rootfs --fstype=ext4 --change-directory=etc
res = runCmd("debugfs -R 'ls -p' %s 2>/dev/null" % (part))
self.assertEqual(True, files_own_by_root(res.output))
config = 'IMAGE_FSTYPES += "wic"\nWKS_FILE = "%s"\n' % wks_file
self.append_config(config)
bitbake('core-image-minimal')
tmpdir = os.path.join(get_bb_var('WORKDIR', 'core-image-minimal'),'build-wic')
# check each partition for permission
for part in glob(os.path.join(tmpdir, 'temp-*.direct.p*')):
res = runCmd("debugfs -R 'ls -p' %s 2>/dev/null" % (part))
self.assertTrue(files_own_by_root(res.output)
,msg='Files permission incorrect using wks set "%s"' % test)
# clean config and result directory for next cases
self.remove_config(config)
rmtree(self.resultdir, ignore_errors=True)
finally:

View File

@@ -43,28 +43,12 @@ def make_logger_bitbake_compatible(logger):
import logging
"""
Bitbake logger redifines debug() in order to
set a level within debug, this breaks compatibility
with vainilla logging, so we neeed to redifine debug()
method again also add info() method with INFO + 1 level.
We need to raise the log level of the info output so unittest
messages are visible on the console.
"""
def _bitbake_log_debug(*args, **kwargs):
lvl = logging.DEBUG
if isinstance(args[0], int):
lvl = args[0]
msg = args[1]
args = args[2:]
else:
msg = args[0]
args = args[1:]
logger.log(lvl, msg, *args, **kwargs)
def _bitbake_log_info(msg, *args, **kwargs):
logger.log(logging.INFO + 1, msg, *args, **kwargs)
logger.debug = _bitbake_log_debug
logger.info = _bitbake_log_info
return logger

View File

@@ -117,7 +117,7 @@ def extract_packages(d, needed_packages):
extract = package.get('extract', True)
if extract:
#logger.debug(1, 'Extracting %s' % pkg)
#logger.debug('Extracting %s' % pkg)
dst_dir = os.path.join(extracted_path, pkg)
# Same package used for more than one test,
# don't need to extract again.
@@ -130,7 +130,7 @@ def extract_packages(d, needed_packages):
shutil.rmtree(pkg_dir)
else:
#logger.debug(1, 'Copying %s' % pkg)
#logger.debug('Copying %s' % pkg)
_copy_package(d, pkg)
def _extract_in_tmpdir(d, pkg):

View File

@@ -3,10 +3,9 @@ Update autotools infrastructure (including gettext) to modern versions.
Upstream-Status: Pending
Signed-off-by: Phil Blundell <pb@pbcl.net>
Index: lrzsz-0.12.20/configure.in
===================================================================
--- lrzsz-0.12.20.orig/configure.in
+++ lrzsz-0.12.20/configure.in
diff -uprN clean/lrzsz-0.12.20/configure.in lrzsz-0.12.20/configure.in
--- clean/lrzsz-0.12.20/configure.in 1998-12-30 07:50:07.000000000 +0000
+++ lrzsz-0.12.20/configure.in 2019-11-25 16:22:37.000000000 +0000
@@ -92,7 +92,6 @@ AC_PROG_RANLIB
AC_ISC_POSIX
AC_AIX
@@ -15,7 +14,7 @@ Index: lrzsz-0.12.20/configure.in
AC_C_CONST
AC_C_INLINE
@@ -253,18 +252,14 @@ ihave$lookup_facility
@@ -253,18 +252,13 @@ ihave$lookup_facility
fi
@@ -25,7 +24,6 @@ Index: lrzsz-0.12.20/configure.in
-AM_GNU_GETTEXT
+AM_GNU_GETTEXT([external])
+AM_GNU_GETTEXT_VERSION([0.21])
-AC_DEFINE_UNQUOTED(LOCALEDIR,"$prefix/$DATADIRNAME")
-AC_LINK_FILES($nls_cv_header_libgt, $nls_cv_header_intl)
@@ -38,10 +36,9 @@ Index: lrzsz-0.12.20/configure.in
+[
chmod +x debian/rules;
test -z "$CONFIG_HEADERS" || echo timestamp > stamp-h])
Index: lrzsz-0.12.20/intl/bindtextdom.c
===================================================================
--- lrzsz-0.12.20.orig/intl/bindtextdom.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/bindtextdom.c lrzsz-0.12.20/intl/bindtextdom.c
--- clean/lrzsz-0.12.20/intl/bindtextdom.c 1998-04-26 14:22:36.000000000 +0100
+++ lrzsz-0.12.20/intl/bindtextdom.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,199 +0,0 @@
-/* Implementation of the bindtextdomain(3) function
- Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -242,10 +239,9 @@ Index: lrzsz-0.12.20/intl/bindtextdom.c
-/* Alias for function name in GNU C Library. */
-weak_alias (__bindtextdomain, bindtextdomain);
-#endif
Index: lrzsz-0.12.20/intl/cat-compat.c
===================================================================
--- lrzsz-0.12.20.orig/intl/cat-compat.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/cat-compat.c lrzsz-0.12.20/intl/cat-compat.c
--- clean/lrzsz-0.12.20/intl/cat-compat.c 1998-04-26 14:22:37.000000000 +0100
+++ lrzsz-0.12.20/intl/cat-compat.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,262 +0,0 @@
-/* Compatibility code for gettext-using-catgets interface.
- Copyright (C) 1995, 1997 Free Software Foundation, Inc.
@@ -509,10 +505,9 @@ Index: lrzsz-0.12.20/intl/cat-compat.c
- return dest - 1;
-}
-#endif
Index: lrzsz-0.12.20/intl/ChangeLog
===================================================================
--- lrzsz-0.12.20.orig/intl/ChangeLog
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/ChangeLog lrzsz-0.12.20/intl/ChangeLog
--- clean/lrzsz-0.12.20/intl/ChangeLog 1998-04-26 14:22:35.000000000 +0100
+++ lrzsz-0.12.20/intl/ChangeLog 1970-01-01 01:00:00.000000000 +0100
@@ -1,1022 +0,0 @@
-1997-09-06 02:10 Ulrich Drepper <drepper@cygnus.com>
-
@@ -1536,10 +1531,9 @@ Index: lrzsz-0.12.20/intl/ChangeLog
- which allow to use the X/Open catgets function with an interface
- like the Uniforum gettext function. For system which does not
- have neither of those a complete implementation is provided.
Index: lrzsz-0.12.20/intl/dcgettext.c
===================================================================
--- lrzsz-0.12.20.orig/intl/dcgettext.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/dcgettext.c lrzsz-0.12.20/intl/dcgettext.c
--- clean/lrzsz-0.12.20/intl/dcgettext.c 1998-04-26 14:22:36.000000000 +0100
+++ lrzsz-0.12.20/intl/dcgettext.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,593 +0,0 @@
-/* Implementation of the dcgettext(3) function
- Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -2134,10 +2128,9 @@ Index: lrzsz-0.12.20/intl/dcgettext.c
- return dest - 1;
-}
-#endif
Index: lrzsz-0.12.20/intl/dgettext.c
===================================================================
--- lrzsz-0.12.20.orig/intl/dgettext.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/dgettext.c lrzsz-0.12.20/intl/dgettext.c
--- clean/lrzsz-0.12.20/intl/dgettext.c 1998-04-26 14:20:52.000000000 +0100
+++ lrzsz-0.12.20/intl/dgettext.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,59 +0,0 @@
-/* dgettext.c -- implementation of the dgettext(3) function
- Copyright (C) 1995 Software Foundation, Inc.
@@ -2198,10 +2191,9 @@ Index: lrzsz-0.12.20/intl/dgettext.c
-/* Alias for function name in GNU C Library. */
-weak_alias (__dgettext, dgettext);
-#endif
Index: lrzsz-0.12.20/intl/explodename.c
===================================================================
--- lrzsz-0.12.20.orig/intl/explodename.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/explodename.c lrzsz-0.12.20/intl/explodename.c
--- clean/lrzsz-0.12.20/intl/explodename.c 1998-04-26 14:22:37.000000000 +0100
+++ lrzsz-0.12.20/intl/explodename.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,181 +0,0 @@
-/* Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
- Contributed by Ulrich Drepper <drepper@gnu.ai.mit.edu>, 1995.
@@ -2384,10 +2376,9 @@ Index: lrzsz-0.12.20/intl/explodename.c
-
- return mask;
-}
Index: lrzsz-0.12.20/intl/finddomain.c
===================================================================
--- lrzsz-0.12.20.orig/intl/finddomain.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/finddomain.c lrzsz-0.12.20/intl/finddomain.c
--- clean/lrzsz-0.12.20/intl/finddomain.c 1998-04-26 14:22:36.000000000 +0100
+++ lrzsz-0.12.20/intl/finddomain.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,189 +0,0 @@
-/* Handle list of needed message catalogs
- Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -2578,10 +2569,9 @@ Index: lrzsz-0.12.20/intl/finddomain.c
-
- return retval;
-}
Index: lrzsz-0.12.20/intl/gettext.c
===================================================================
--- lrzsz-0.12.20.orig/intl/gettext.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/gettext.c lrzsz-0.12.20/intl/gettext.c
--- clean/lrzsz-0.12.20/intl/gettext.c 1998-04-26 14:22:36.000000000 +0100
+++ lrzsz-0.12.20/intl/gettext.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,70 +0,0 @@
-/* Implementation of gettext(3) function
- Copyright (C) 1995, 1997 Free Software Foundation, Inc.
@@ -2653,10 +2643,9 @@ Index: lrzsz-0.12.20/intl/gettext.c
-/* Alias for function name in GNU C Library. */
-weak_alias (__gettext, gettext);
-#endif
Index: lrzsz-0.12.20/intl/gettext.h
===================================================================
--- lrzsz-0.12.20.orig/intl/gettext.h
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/gettext.h lrzsz-0.12.20/intl/gettext.h
--- clean/lrzsz-0.12.20/intl/gettext.h 1998-04-26 14:22:35.000000000 +0100
+++ lrzsz-0.12.20/intl/gettext.h 1970-01-01 01:00:00.000000000 +0100
@@ -1,105 +0,0 @@
-/* Internal header for GNU gettext internationalization functions
- Copyright (C) 1995, 1997 Free Software Foundation, Inc.
@@ -2763,10 +2752,9 @@ Index: lrzsz-0.12.20/intl/gettext.h
-/* @@ begin of epilog @@ */
-
-#endif /* gettext.h */
Index: lrzsz-0.12.20/intl/gettextP.h
===================================================================
--- lrzsz-0.12.20.orig/intl/gettextP.h
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/gettextP.h lrzsz-0.12.20/intl/gettextP.h
--- clean/lrzsz-0.12.20/intl/gettextP.h 1998-04-26 14:22:35.000000000 +0100
+++ lrzsz-0.12.20/intl/gettextP.h 1970-01-01 01:00:00.000000000 +0100
@@ -1,73 +0,0 @@
-/* Header describing internals of gettext library
- Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -2841,10 +2829,9 @@ Index: lrzsz-0.12.20/intl/gettextP.h
-/* @@ begin of epilog @@ */
-
-#endif /* gettextP.h */
Index: lrzsz-0.12.20/intl/hash-string.h
===================================================================
--- lrzsz-0.12.20.orig/intl/hash-string.h
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/hash-string.h lrzsz-0.12.20/intl/hash-string.h
--- clean/lrzsz-0.12.20/intl/hash-string.h 1998-04-26 14:22:36.000000000 +0100
+++ lrzsz-0.12.20/intl/hash-string.h 1970-01-01 01:00:00.000000000 +0100
@@ -1,63 +0,0 @@
-/* Implements a string hashing function.
- Copyright (C) 1995, 1997 Free Software Foundation, Inc.
@@ -2909,10 +2896,9 @@ Index: lrzsz-0.12.20/intl/hash-string.h
- }
- return hval;
-}
Index: lrzsz-0.12.20/intl/intl-compat.c
===================================================================
--- lrzsz-0.12.20.orig/intl/intl-compat.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/intl-compat.c lrzsz-0.12.20/intl/intl-compat.c
--- clean/lrzsz-0.12.20/intl/intl-compat.c 1998-04-26 14:20:52.000000000 +0100
+++ lrzsz-0.12.20/intl/intl-compat.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,76 +0,0 @@
-/* intl-compat.c - Stub functions to call gettext functions from GNU gettext
- Library.
@@ -2990,10 +2976,9 @@ Index: lrzsz-0.12.20/intl/intl-compat.c
-{
- return textdomain__ (domainname);
-}
Index: lrzsz-0.12.20/intl/l10nflist.c
===================================================================
--- lrzsz-0.12.20.orig/intl/l10nflist.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/l10nflist.c lrzsz-0.12.20/intl/l10nflist.c
--- clean/lrzsz-0.12.20/intl/l10nflist.c 1998-04-26 14:22:37.000000000 +0100
+++ lrzsz-0.12.20/intl/l10nflist.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,409 +0,0 @@
-/* Handle list of needed message catalogs
- Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -3404,10 +3389,9 @@ Index: lrzsz-0.12.20/intl/l10nflist.c
- return dest - 1;
-}
-#endif
Index: lrzsz-0.12.20/intl/libgettext.h
===================================================================
--- lrzsz-0.12.20.orig/intl/libgettext.h
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/libgettext.h lrzsz-0.12.20/intl/libgettext.h
--- clean/lrzsz-0.12.20/intl/libgettext.h 1998-04-26 14:22:36.000000000 +0100
+++ lrzsz-0.12.20/intl/libgettext.h 1970-01-01 01:00:00.000000000 +0100
@@ -1,182 +0,0 @@
-/* Message catalogs for internationalization.
- Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -3591,10 +3575,9 @@ Index: lrzsz-0.12.20/intl/libgettext.h
-#endif
-
-#endif
Index: lrzsz-0.12.20/intl/linux-msg.sed
===================================================================
--- lrzsz-0.12.20.orig/intl/linux-msg.sed
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/linux-msg.sed lrzsz-0.12.20/intl/linux-msg.sed
--- clean/lrzsz-0.12.20/intl/linux-msg.sed 1998-04-26 14:20:52.000000000 +0100
+++ lrzsz-0.12.20/intl/linux-msg.sed 1970-01-01 01:00:00.000000000 +0100
@@ -1,100 +0,0 @@
-# po2msg.sed - Convert Uniforum style .po file to Linux style .msg file
-# Copyright (C) 1995 Free Software Foundation, Inc.
@@ -3696,10 +3679,9 @@ Index: lrzsz-0.12.20/intl/linux-msg.sed
- tb
-}
-d
Index: lrzsz-0.12.20/intl/loadinfo.h
===================================================================
--- lrzsz-0.12.20.orig/intl/loadinfo.h
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/loadinfo.h lrzsz-0.12.20/intl/loadinfo.h
--- clean/lrzsz-0.12.20/intl/loadinfo.h 1998-04-26 14:20:52.000000000 +0100
+++ lrzsz-0.12.20/intl/loadinfo.h 1970-01-01 01:00:00.000000000 +0100
@@ -1,58 +0,0 @@
-#ifndef PARAMS
-# if __STDC__
@@ -3759,10 +3741,9 @@ Index: lrzsz-0.12.20/intl/loadinfo.h
- const char **special,
- const char **sponsor,
- const char **revision));
Index: lrzsz-0.12.20/intl/loadmsgcat.c
===================================================================
--- lrzsz-0.12.20.orig/intl/loadmsgcat.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/loadmsgcat.c lrzsz-0.12.20/intl/loadmsgcat.c
--- clean/lrzsz-0.12.20/intl/loadmsgcat.c 1998-04-26 14:22:37.000000000 +0100
+++ lrzsz-0.12.20/intl/loadmsgcat.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,199 +0,0 @@
-/* Load needed message catalogs
- Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -3963,10 +3944,9 @@ Index: lrzsz-0.12.20/intl/loadmsgcat.c
- translations invalid. */
- ++_nl_msg_cat_cntr;
-}
Index: lrzsz-0.12.20/intl/localealias.c
===================================================================
--- lrzsz-0.12.20.orig/intl/localealias.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/localealias.c lrzsz-0.12.20/intl/localealias.c
--- clean/lrzsz-0.12.20/intl/localealias.c 1998-04-26 14:22:37.000000000 +0100
+++ lrzsz-0.12.20/intl/localealias.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,378 +0,0 @@
-/* Handle aliases for locale names
- Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -4346,10 +4326,9 @@ Index: lrzsz-0.12.20/intl/localealias.c
- return c1 - c2;
-#endif
-}
Index: lrzsz-0.12.20/intl/Makefile.in
===================================================================
--- lrzsz-0.12.20.orig/intl/Makefile.in
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/Makefile.in lrzsz-0.12.20/intl/Makefile.in
--- clean/lrzsz-0.12.20/intl/Makefile.in 1998-04-26 14:22:35.000000000 +0100
+++ lrzsz-0.12.20/intl/Makefile.in 1970-01-01 01:00:00.000000000 +0100
@@ -1,214 +0,0 @@
-# Makefile for directory with message catalog handling in GNU NLS Utilities.
-# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -4565,10 +4544,9 @@ Index: lrzsz-0.12.20/intl/Makefile.in
-# Tell versions [3.59,3.63) of GNU make not to export all variables.
-# Otherwise a system limit (for SysV at least) may be exceeded.
-.NOEXPORT:
Index: lrzsz-0.12.20/intl/po2tbl.sed.in
===================================================================
--- lrzsz-0.12.20.orig/intl/po2tbl.sed.in
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/po2tbl.sed.in lrzsz-0.12.20/intl/po2tbl.sed.in
--- clean/lrzsz-0.12.20/intl/po2tbl.sed.in 1998-04-26 14:20:52.000000000 +0100
+++ lrzsz-0.12.20/intl/po2tbl.sed.in 1970-01-01 01:00:00.000000000 +0100
@@ -1,102 +0,0 @@
-# po2tbl.sed - Convert Uniforum style .po file to lookup table for catgets
-# Copyright (C) 1995 Free Software Foundation, Inc.
@@ -4672,10 +4650,9 @@ Index: lrzsz-0.12.20/intl/po2tbl.sed.in
- s/0*\(.*\)/int _msg_tbl_length = \1;/p
-}
-d
Index: lrzsz-0.12.20/intl/textdomain.c
===================================================================
--- lrzsz-0.12.20.orig/intl/textdomain.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/textdomain.c lrzsz-0.12.20/intl/textdomain.c
--- clean/lrzsz-0.12.20/intl/textdomain.c 1998-04-26 14:22:37.000000000 +0100
+++ lrzsz-0.12.20/intl/textdomain.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,106 +0,0 @@
-/* Implementation of the textdomain(3) function
- Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
@@ -4783,16 +4760,14 @@ Index: lrzsz-0.12.20/intl/textdomain.c
-/* Alias for function name in GNU C Library. */
-weak_alias (__textdomain, textdomain);
-#endif
Index: lrzsz-0.12.20/intl/VERSION
===================================================================
--- lrzsz-0.12.20.orig/intl/VERSION
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/VERSION lrzsz-0.12.20/intl/VERSION
--- clean/lrzsz-0.12.20/intl/VERSION 1998-04-26 14:22:37.000000000 +0100
+++ lrzsz-0.12.20/intl/VERSION 1970-01-01 01:00:00.000000000 +0100
@@ -1 +0,0 @@
-GNU gettext library from gettext-0.10.32
Index: lrzsz-0.12.20/intl/xopen-msg.sed
===================================================================
--- lrzsz-0.12.20.orig/intl/xopen-msg.sed
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/intl/xopen-msg.sed lrzsz-0.12.20/intl/xopen-msg.sed
--- clean/lrzsz-0.12.20/intl/xopen-msg.sed 1998-04-26 14:20:52.000000000 +0100
+++ lrzsz-0.12.20/intl/xopen-msg.sed 1970-01-01 01:00:00.000000000 +0100
@@ -1,104 +0,0 @@
-# po2msg.sed - Convert Uniforum style .po file to X/Open style .msg file
-# Copyright (C) 1995 Free Software Foundation, Inc.
@@ -4898,10 +4873,9 @@ Index: lrzsz-0.12.20/intl/xopen-msg.sed
- tb
-}
-d
Index: lrzsz-0.12.20/lib/Makefile.am
===================================================================
--- lrzsz-0.12.20.orig/lib/Makefile.am
+++ lrzsz-0.12.20/lib/Makefile.am
diff -uprN clean/lrzsz-0.12.20/lib/Makefile.am lrzsz-0.12.20/lib/Makefile.am
--- clean/lrzsz-0.12.20/lib/Makefile.am 1998-12-27 16:25:26.000000000 +0000
+++ lrzsz-0.12.20/lib/Makefile.am 2019-11-25 16:22:34.000000000 +0000
@@ -1,6 +1,4 @@
noinst_LIBRARIES=libzmodem.a
-CFLAGS=@CFLAGS@
@@ -4909,10 +4883,9 @@ Index: lrzsz-0.12.20/lib/Makefile.am
EXTRA_DIST = alloca.c ansi2knr.1 ansi2knr.c \
getopt.c getopt1.c mkdir.c mktime.c \
Index: lrzsz-0.12.20/Makefile.am
===================================================================
--- lrzsz-0.12.20.orig/Makefile.am
+++ lrzsz-0.12.20/Makefile.am
diff -uprN clean/lrzsz-0.12.20/Makefile.am lrzsz-0.12.20/Makefile.am
--- clean/lrzsz-0.12.20/Makefile.am 1998-12-30 11:19:40.000000000 +0000
+++ lrzsz-0.12.20/Makefile.am 2019-11-26 11:47:29.000000000 +0000
@@ -1,5 +1,5 @@
-SUBDIRS = lib intl src po man testsuite
-EXTRA_DIST = check.lrzsz COMPATABILITY README.cvs README.isdn4linux \
@@ -4935,10 +4908,9 @@ Index: lrzsz-0.12.20/Makefile.am
+
+ACLOCAL_AMFLAGS = -I m4
Index: lrzsz-0.12.20/po/cat-id-tbl.c
===================================================================
--- lrzsz-0.12.20.orig/po/cat-id-tbl.c
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/po/cat-id-tbl.c lrzsz-0.12.20/po/cat-id-tbl.c
--- clean/lrzsz-0.12.20/po/cat-id-tbl.c 1998-12-29 09:24:24.000000000 +0000
+++ lrzsz-0.12.20/po/cat-id-tbl.c 1970-01-01 01:00:00.000000000 +0100
@@ -1,234 +0,0 @@
-/* Automatically generated by po2tbl.sed from lrzsz.pot. */
-
@@ -5174,10 +5146,10 @@ Index: lrzsz-0.12.20/po/cat-id-tbl.c
-};
-
-int _msg_tbl_length = 136;
Index: lrzsz-0.12.20/po/de.po
===================================================================
--- lrzsz-0.12.20.orig/po/de.po
+++ lrzsz-0.12.20/po/de.po
Binary files clean/lrzsz-0.12.20/po/de.gmo and lrzsz-0.12.20/po/de.gmo differ
diff -uprN clean/lrzsz-0.12.20/po/de.po lrzsz-0.12.20/po/de.po
--- clean/lrzsz-0.12.20/po/de.po 1998-12-30 16:31:46.000000000 +0000
+++ lrzsz-0.12.20/po/de.po 2019-11-26 11:42:07.000000000 +0000
@@ -6,10 +6,12 @@
msgid ""
msgstr ""
@@ -5412,10 +5384,9 @@ Index: lrzsz-0.12.20/po/de.po
#: src/lrz.c:2079
msgid "rzfile: reached stop time"
msgstr "rzfile: Abbruchzeit erreicht"
Index: lrzsz-0.12.20/po/lrzsz.pot
===================================================================
--- lrzsz-0.12.20.orig/po/lrzsz.pot
+++ lrzsz-0.12.20/po/lrzsz.pot
diff -uprN clean/lrzsz-0.12.20/po/lrzsz.pot lrzsz-0.12.20/po/lrzsz.pot
--- clean/lrzsz-0.12.20/po/lrzsz.pot 1998-12-30 07:50:00.000000000 +0000
+++ lrzsz-0.12.20/po/lrzsz.pot 2019-11-26 11:39:12.000000000 +0000
@@ -1,24 +1,27 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR Free Software Foundation, Inc.
@@ -5647,10 +5618,9 @@ Index: lrzsz-0.12.20/po/lrzsz.pot
#: src/lrz.c:2079
msgid "rzfile: reached stop time"
msgstr ""
Index: lrzsz-0.12.20/po/Makevars
===================================================================
--- /dev/null
+++ lrzsz-0.12.20/po/Makevars
diff -uprN clean/lrzsz-0.12.20/po/Makevars lrzsz-0.12.20/po/Makevars
--- clean/lrzsz-0.12.20/po/Makevars 1970-01-01 01:00:00.000000000 +0100
+++ lrzsz-0.12.20/po/Makevars 2019-11-25 18:09:52.000000000 +0000
@@ -0,0 +1,78 @@
+# Makefile variables for PO directory in any package using GNU gettext.
+
@@ -5730,22 +5700,19 @@ Index: lrzsz-0.12.20/po/Makevars
+# "no". Set this to no if the POT file and PO files are maintained
+# externally.
+DIST_DEPENDS_ON_UPDATE_PO = yes
Index: lrzsz-0.12.20/po/stamp-cat-id
===================================================================
--- lrzsz-0.12.20.orig/po/stamp-cat-id
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/po/stamp-cat-id lrzsz-0.12.20/po/stamp-cat-id
--- clean/lrzsz-0.12.20/po/stamp-cat-id 1998-12-30 07:50:01.000000000 +0000
+++ lrzsz-0.12.20/po/stamp-cat-id 1970-01-01 01:00:00.000000000 +0100
@@ -1 +0,0 @@
-timestamp
Index: lrzsz-0.12.20/po/stamp-po
===================================================================
--- /dev/null
+++ lrzsz-0.12.20/po/stamp-po
diff -uprN clean/lrzsz-0.12.20/po/stamp-po lrzsz-0.12.20/po/stamp-po
--- clean/lrzsz-0.12.20/po/stamp-po 1970-01-01 01:00:00.000000000 +0100
+++ lrzsz-0.12.20/po/stamp-po 2019-11-26 11:42:09.000000000 +0000
@@ -0,0 +1 @@
+timestamp
Index: lrzsz-0.12.20/src/Makefile.am
===================================================================
--- lrzsz-0.12.20.orig/src/Makefile.am
+++ lrzsz-0.12.20/src/Makefile.am
diff -uprN clean/lrzsz-0.12.20/src/Makefile.am lrzsz-0.12.20/src/Makefile.am
--- clean/lrzsz-0.12.20/src/Makefile.am 1998-12-28 08:38:47.000000000 +0000
+++ lrzsz-0.12.20/src/Makefile.am 2019-11-25 16:22:49.000000000 +0000
@@ -2,13 +2,11 @@ bin_PROGRAMS=lrz lsz
lrz_SOURCES=lrz.c timing.c zperr.c zreadline.c crctab.c rbsb.c zm.c protname.c tcp.c lsyslog.c canit.c
lsz_SOURCES=lsz.c timing.c zperr.c zreadline.c crctab.c rbsb.c zm.c protname.c tcp.c lsyslog.c canit.c
@@ -5762,10 +5729,9 @@ Index: lrzsz-0.12.20/src/Makefile.am
EXTRA_DIST = ansi2knr.1 ansi2knr.c lrzszbug.in
INCLUDES = -I.. -I$(srcdir) -I$(top_srcdir)/src -I../intl -I$(top_srcdir)/lib
#DEFS = -DLOCALEDIR=\"$(localedir)\" -DOS=\"@host_os@\" -DCPU=\"@host_cpu@\"
Index: lrzsz-0.12.20/src/zglobal.h
===================================================================
--- lrzsz-0.12.20.orig/src/zglobal.h
+++ lrzsz-0.12.20/src/zglobal.h
diff -uprN clean/lrzsz-0.12.20/src/zglobal.h lrzsz-0.12.20/src/zglobal.h
--- clean/lrzsz-0.12.20/src/zglobal.h 1998-12-29 12:34:59.000000000 +0000
+++ lrzsz-0.12.20/src/zglobal.h 2019-11-25 16:32:42.000000000 +0000
@@ -180,9 +180,6 @@ struct termios;
#if HAVE_LOCALE_H
# include <locale.h>
@@ -5776,9 +5742,8 @@ Index: lrzsz-0.12.20/src/zglobal.h
#if ENABLE_NLS
# include <libintl.h>
Index: lrzsz-0.12.20/stamp-h.in
===================================================================
--- lrzsz-0.12.20.orig/stamp-h.in
+++ /dev/null
diff -uprN clean/lrzsz-0.12.20/stamp-h.in lrzsz-0.12.20/stamp-h.in
--- clean/lrzsz-0.12.20/stamp-h.in 1998-12-30 07:51:07.000000000 +0000
+++ lrzsz-0.12.20/stamp-h.in 1970-01-01 01:00:00.000000000 +0100
@@ -1 +0,0 @@
-timestamp

View File

@@ -12,7 +12,7 @@ PE = "1"
# We use the revision in order to avoid having to fetch it from the
# repo during parse
SRCREV = "050acee119b3757fee3bd128f55d720fdd9bb890"
SRCREV = "c4fddedc48f336eabc4ce3f74940e6aa372de18c"
SRC_URI = "git://git.denx.de/u-boot.git \
"

View File

@@ -1,5 +0,0 @@
require u-boot-common.inc
require u-boot.inc
DEPENDS += "bc-native dtc-native"

View File

@@ -0,0 +1,4 @@
require u-boot-common.inc
require u-boot.inc
DEPENDS += "bc-native dtc-native python3-setuptools-native"

View File

@@ -3,7 +3,7 @@ HOMEPAGE = "https://www.isc.org/bind/"
SECTION = "console/network"
LICENSE = "MPL-2.0"
LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=4673dc07337cace3b93c65e9ffe57b60"
LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=ef10b4de6371115dcecdc38ca2af4561"
DEPENDS = "openssl libcap zlib libuv"
@@ -19,7 +19,7 @@ SRC_URI = "https://ftp.isc.org/isc/bind9/${PV}/${BPN}-${PV}.tar.xz \
file://0001-avoid-start-failure-with-bind-user.patch \
"
SRC_URI[sha256sum] = "bc47fc019c6205e6a6bfb839c544a1472321df0537ba905b846a4cbffe3362b3"
SRC_URI[sha256sum] = "0111f64dd7d8f515cfa129e181cce96ff82070d1b27f11a21f6856110d0699c1"
UPSTREAM_CHECK_URI = "https://ftp.isc.org/isc/bind9/"
# stay at 9.16 follow the ESV versions divisible by 4

View File

@@ -9,8 +9,7 @@ SRC_URI = "${KERNELORG_MIRROR}/linux/network/${BPN}/${BP}.tar.xz \
SRC_URI_append_libc-musl = " file://0002-resolve-musl-does-not-implement-res_ninit.patch"
SRC_URI[md5sum] = "1ed8745354c7254bdfd4def54833ee94"
SRC_URI[sha256sum] = "cb30aca97c2f79ccaed8802aa2909ac5100a3969de74c0af8a9d73b85fc4932b"
SRC_URI[sha256sum] = "9f62a7169b7491c670a1ff2e335b0d966308fb2f62e285c781105eb90f181af3"
RRECOMMENDS_${PN} = "connman-conf"
RCONFLICTS_${PN} = "networkmanager"

View File

@@ -1,31 +0,0 @@
Upstream-Status: Pending
Subject: rcp: fix to work with large files
When we copy file by rcp command, if the file > 2GB, it will fail.
The cause is that it used incorrect data type on file size in sink() of rcp.
Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
---
src/rcp.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/rcp.c b/src/rcp.c
index 21f55b6..bafa35f 100644
--- a/src/rcp.c
+++ b/src/rcp.c
@@ -876,9 +876,9 @@ sink (int argc, char *argv[])
enum
{ YES, NO, DISPLAYED } wrerr;
BUF *bp;
- off_t i, j;
+ off_t i, j, size;
int amt, count, exists, first, mask, mode, ofd, omode;
- int setimes, size, targisdir, wrerrno;
+ int setimes, targisdir, wrerrno;
char ch, *cp, *np, *targ, *vect[1], buf[BUFSIZ];
const char *why;
--
1.9.1

View File

@@ -1,17 +1,22 @@
Upstream: http://www.mail-archive.com/bug-inetutils@gnu.org/msg02103.html
From c7c27ba763c613f83c1561e56448b49315c271c5 Mon Sep 17 00:00:00 2001
From: Jackie Huang <jackie.huang@windriver.com>
Date: Wed, 6 Mar 2019 09:36:11 -0500
Subject: [PATCH] Upstream:
http://www.mail-archive.com/bug-inetutils@gnu.org/msg02103.html
Upstream-Status: Pending
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
---
ping/ping_common.h | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/ping/ping_common.h b/ping/ping_common.h
index 1dfd1b5..3bfbd12 100644
index 65e3e60..3e84db0 100644
--- a/ping/ping_common.h
+++ b/ping/ping_common.h
@@ -17,10 +17,14 @@
@@ -18,10 +18,14 @@
You should have received a copy of the GNU General Public License
along with this program. If not, see `http://www.gnu.org/licenses/'. */
@@ -26,7 +31,7 @@ index 1dfd1b5..3bfbd12 100644
#include <icmp.h>
#include <error.h>
#include <progname.h>
@@ -62,7 +66,12 @@ struct ping_stat
@@ -63,7 +67,12 @@ struct ping_stat
want to follow the traditional behaviour of ping. */
#define DEFAULT_PING_COUNT 0
@@ -39,9 +44,9 @@ index 1dfd1b5..3bfbd12 100644
#define PING_TIMING(s) ((s) >= sizeof (struct timeval))
#define PING_DATALEN (64 - PING_HEADER_LEN) /* default data length */
@@ -74,13 +83,20 @@ struct ping_stat
(t).tv_usec = ((i)%PING_PRECISION)*(1000000/PING_PRECISION) ;\
} while (0)
@@ -78,13 +87,20 @@ struct ping_stat
#define PING_MIN_USER_INTERVAL (200000/PING_PRECISION)
+#ifdef HAVE_IPV6
/* FIXME: Adjust IPv6 case for options and their consumption. */
@@ -60,7 +65,7 @@ index 1dfd1b5..3bfbd12 100644
typedef int (*ping_efp) (int code,
void *closure,
@@ -89,13 +105,17 @@ typedef int (*ping_efp) (int code,
@@ -93,13 +109,17 @@ typedef int (*ping_efp) (int code,
struct ip * ip, icmphdr_t * icmp, int datalen);
union event {
@@ -78,6 +83,3 @@ index 1dfd1b5..3bfbd12 100644
};
typedef struct ping_data PING;
--
2.8.3

View File

@@ -1,20 +1,21 @@
From 552a7d64ad4a7188a9b7cd89933ae7caf7ebfe90 Mon Sep 17 00:00:00 2001
From f7f785c21306010b2367572250b2822df5bc7728 Mon Sep 17 00:00:00 2001
From: Mike Frysinger <vapier at gentoo.org>
Date: Thu, 18 Nov 2010 16:59:14 -0500
Subject: [PATCH gnulib] printf-parse: pull in features.h for __GLIBC__
Subject: [PATCH] printf-parse: pull in features.h for __GLIBC__
Upstream-Status: Pending
Signed-off-by: Mike Frysinger <vapier at gentoo.org>
---
lib/printf-parse.h | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
lib/printf-parse.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lib/printf-parse.h b/lib/printf-parse.h
index 67a4a2a..3bd6152 100644
index e7d0f82..d7b4534 100644
--- a/lib/printf-parse.h
+++ b/lib/printf-parse.h
@@ -25,6 +25,9 @@
@@ -28,6 +28,9 @@
#include "printf-args.h"
@@ -24,6 +25,3 @@ index 67a4a2a..3bd6152 100644
/* Flags */
#define FLAG_GROUP 1 /* ' flag */
--
1.7.3.2

View File

@@ -1,8 +1,19 @@
From 9089c6eafbf5903174dce87b68476e35db80beb9 Mon Sep 17 00:00:00 2001
From: Martin Jansa <martin.jansa@gmail.com>
Date: Wed, 6 Mar 2019 09:36:11 -0500
Subject: [PATCH] inetutils: Import version 1.9.4
Upstream-Status: Pending
--- inetutils-1.8/lib/wchar.in.h
+++ inetutils-1.8/lib/wchar.in.h
@@ -70,6 +70,9 @@
---
lib/wchar.in.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lib/wchar.in.h b/lib/wchar.in.h
index cdda680..043866a 100644
--- a/lib/wchar.in.h
+++ b/lib/wchar.in.h
@@ -77,6 +77,9 @@
/* The include_next requires a split double-inclusion guard. */
#if @HAVE_WCHAR_H@
# @INCLUDE_NEXT@ @NEXT_WCHAR_H@

View File

@@ -1,4 +1,10 @@
inetutils: define PATH_PROCNET_DEV if not already defined
From 101130f422dd5c01a1459645d7b2a5b8d19720ab Mon Sep 17 00:00:00 2001
From: Martin Jansa <martin.jansa@gmail.com>
Date: Wed, 6 Mar 2019 09:36:11 -0500
Subject: [PATCH] inetutils: define PATH_PROCNET_DEV if not already defined
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
this prevents the following compilation error :
system/linux.c:401:15: error: 'PATH_PROCNET_DEV' undeclared (first use in this function)
@@ -9,11 +15,16 @@ this patch comes from :
Upstream-Status: Inappropriate [not author]
Signed-of-by: Eric Bénard <eric@eukrea.com>
---
diff -Naur inetutils-1.9.orig/ifconfig/system/linux.c inetutils-1.9/ifconfig/system/linux.c
--- inetutils-1.9.orig/ifconfig/system/linux.c 2012-01-04 16:31:36.000000000 -0500
+++ inetutils-1.9/ifconfig/system/linux.c 2012-01-04 16:40:53.000000000 -0500
@@ -49,6 +49,10 @@
ifconfig/system/linux.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/ifconfig/system/linux.c b/ifconfig/system/linux.c
index e453b46..4268ca9 100644
--- a/ifconfig/system/linux.c
+++ b/ifconfig/system/linux.c
@@ -53,6 +53,10 @@
#include "../ifconfig.h"

View File

@@ -1,15 +1,24 @@
From 684e45b34a33186bb17bcee0b01814c549a60bf6 Mon Sep 17 00:00:00 2001
From: Kai Kang <kai.kang@windriver.com>
Date: Wed, 6 Mar 2019 09:36:11 -0500
Subject: [PATCH] inetutils: Import version 1.9.4
Only check security/pam_appl.h which is provided by package libpam when pam is
enabled.
Upstream-Status: Pending
Signed-off-by: Kai Kang <kai.kang@windriver.com>
---
configure.ac | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/configure.ac b/configure.ac
index b35e672..e78a751 100644
index 86136fb..b220319 100644
--- a/configure.ac
+++ b/configure.ac
@@ -195,6 +195,19 @@ fi
@@ -183,6 +183,19 @@ AC_SUBST(LIBUTIL)
# See if we have libpam.a. Investigate PAM versus Linux-PAM.
if test "$with_pam" = yes ; then
@@ -29,8 +38,8 @@ index b35e672..e78a751 100644
AC_CHECK_LIB(dl, dlopen, LIBDL=-ldl)
AC_CHECK_LIB(pam, pam_authenticate, LIBPAM=-lpam)
if test "$ac_cv_lib_pam_pam_authenticate" = yes ; then
@@ -587,7 +600,7 @@ AC_HEADER_DIRENT
AC_CHECK_HEADERS([arpa/nameser.h errno.h fcntl.h features.h \
@@ -620,7 +633,7 @@ AC_HEADER_DIRENT
AC_CHECK_HEADERS([arpa/nameser.h arpa/tftp.h errno.h fcntl.h features.h \
glob.h memory.h netinet/ether.h netinet/in_systm.h \
netinet/ip.h netinet/ip_icmp.h netinet/ip_var.h \
- security/pam_appl.h shadow.h \

View File

@@ -1,17 +0,0 @@
Upstream-Status: Pending
remove m4_esyscmd function
Signed-off-by: Chunrong Guo <b40290@freescale.com>
--- inetutils-1.9.1/configure.ac 2012-01-06 22:05:05.000000000 +0800
+++ inetutils-1.9.1/configure.ac 2012-11-12 14:01:11.732957019 +0800
@@ -20,8 +20,7 @@
AC_PREREQ(2.59)
-AC_INIT([GNU inetutils],
- m4_esyscmd([build-aux/git-version-gen .tarball-version 's/inetutils-/v/;s/_/./g']),
+AC_INIT([GNU inetutils],[1.9.4],
[bug-inetutils@gnu.org])
AC_CONFIG_SRCDIR([src/inetd.c])

View File

@@ -10,8 +10,7 @@ LICENSE = "GPLv3"
LIC_FILES_CHKSUM = "file://COPYING;md5=0c7051aef9219dc7237f206c5c4179a7"
SRC_URI = "${GNU_MIRROR}/inetutils/inetutils-${PV}.tar.gz \
file://version.patch \
SRC_URI = "${GNU_MIRROR}/inetutils/inetutils-${PV}.tar.xz \
file://inetutils-1.8-0001-printf-parse-pull-in-features.h-for-__GLIBC__.patch \
file://inetutils-1.8-0003-wchar.patch \
file://rexec.xinetd.inetutils \
@@ -21,13 +20,9 @@ SRC_URI = "${GNU_MIRROR}/inetutils/inetutils-${PV}.tar.gz \
file://tftpd.xinetd.inetutils \
file://inetutils-1.9-PATH_PROCNET_DEV.patch \
file://inetutils-only-check-pam_appl.h-when-pam-enabled.patch \
file://0001-rcp-fix-to-work-with-large-files.patch \
file://fix-buffer-fortify-tfpt.patch \
file://0001-ftpd-telnetd-Fix-multiple-definitions-of-errcatch-an.patch \
"
SRC_URI[md5sum] = "04852c26c47cc8c6b825f2b74f191f52"
SRC_URI[sha256sum] = "be8f75eff936b8e41b112462db51adf689715658a1b09e0d6b05d11ec92cc616"
SRC_URI[md5sum] = "5e1018502cd131ed8e42339f6b5c98aa"
inherit autotools gettext update-alternatives texinfo

View File

@@ -0,0 +1,28 @@
From 0f90440ca70abab947acbd77795e9f130967956c Mon Sep 17 00:00:00 2001
From: Darren Tucker <dtucker@dtucker.net>
Date: Fri, 20 Nov 2020 13:37:54 +1100
Subject: [PATCH] Add new pselect6_time64 syscall on ARM.
This is apparently needed on armhfp/armv7hl. bz#3232, patch from
jjelen at redhat.com.
---
sandbox-seccomp-filter.c | 3 +++
1 file changed, 3 insertions(+)
Upstream-Status: Backport
[fixes issues on 32bit IA and probably other 32 bit platforms too with glibc 2.33]
diff --git a/sandbox-seccomp-filter.c b/sandbox-seccomp-filter.c
index e0768c063..5065ae7ef 100644
--- a/sandbox-seccomp-filter.c
+++ b/sandbox-seccomp-filter.c
@@ -267,6 +267,9 @@ static const struct sock_filter preauth_insns[] = {
#ifdef __NR_pselect6
SC_ALLOW(__NR_pselect6),
#endif
+#ifdef __NR_pselect6_time64
+ SC_ALLOW(__NR_pselect6_time64),
+#endif
#ifdef __NR_read
SC_ALLOW(__NR_read),
#endif
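
The sandbox here is a classic seccomp-BPF allowlist: each SC_ALLOW entry compares the loaded syscall number against one __NR_* constant and returns SECCOMP_RET_ALLOW on a match, which is why every new time64 variant of an already-allowed syscall has to be added explicitly on 32-bit ABIs. A hedged, self-contained sketch of that pattern (names and the default action are illustrative, not OpenSSH's exact code):

    #include <stddef.h>
    #include <sys/syscall.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    /* Compare the syscall number with nr; on a match, allow it. */
    #define ALLOW_SYSCALL(nr) \
        BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, (nr), 0, 1), \
        BPF_STMT(BPF_RET + BPF_K, SECCOMP_RET_ALLOW)

    static const struct sock_filter example_insns[] = {
        /* Load the syscall number from the seccomp data block. */
        BPF_STMT(BPF_LD + BPF_W + BPF_ABS, offsetof(struct seccomp_data, nr)),
    #ifdef __NR_pselect6
        ALLOW_SYSCALL(__NR_pselect6),
    #endif
    #ifdef __NR_pselect6_time64
        ALLOW_SYSCALL(__NR_pselect6_time64),  /* 64-bit time_t variant on 32-bit ABIs */
    #endif
        /* Anything not matched above gets the sandbox's default action. */
        BPF_STMT(BPF_RET + BPF_K, SECCOMP_RET_TRAP),
    };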


@@ -24,6 +24,7 @@ SRC_URI = "http://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-${PV}.tar
file://fix-potential-signed-overflow-in-pointer-arithmatic.patch \
file://sshd_check_keys \
file://add-test-support-for-busybox.patch \
file://0f90440ca70abab947acbd77795e9f130967956c.patch \
"
SRC_URI[sha256sum] = "5a01d22e407eb1c05ba8a8f7c654d388a13e9f226e4ed33bd38748dafa1d2b24"


@@ -65,7 +65,8 @@ CFLAGS_append_class-nativesdk = " -DOPENSSLDIR=/not/builtin -DENGINESDIR=/not/bu
# rc2 (mailx)
# psk (qt5)
# srp (libest)
DEPRECATED_CRYPTO_FLAGS = "no-ssl no-idea no-rc5 no-md2 no-camellia no-mdc2 no-scrypt no-seed no-siphash no-sm2 no-sm3 no-sm4 no-whirlpool"
# whirlpool (qca)
DEPRECATED_CRYPTO_FLAGS = "no-ssl no-idea no-rc5 no-md2 no-camellia no-mdc2 no-scrypt no-seed no-siphash no-sm2 no-sm3 no-sm4"
do_configure () {
os=${HOST_OS}
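
The algorithms taken off DEPRECATED_CRYPTO_FLAGS stay compiled into OpenSSL because the recipes listed in the comments still use them; keeping whirlpool, for example, means a consumer such as qca can still resolve the digest through OpenSSL's EVP interface. A hedged sketch of such a consumer-side use (illustrative only, not taken from qca):

    #include <stdio.h>
    #include <openssl/evp.h>

    int main(void)
    {
        /* EVP_whirlpool() is only declared when OpenSSL was built
           without no-whirlpool / OPENSSL_NO_WHIRLPOOL. */
        const EVP_MD *md = EVP_whirlpool();
        printf("whirlpool digest size: %d\n", EVP_MD_size(md));
        return 0;
    }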


@@ -16,7 +16,7 @@ inherit autotools pkgconfig
SRC_URI = "https://mirrors.edge.kernel.org/pub/linux/libs/${BPN}/${BPN}-${PV}.tar.xz \
file://0001-pem.c-do-not-use-rawmemchr.patch \
"
SRC_URI[sha256sum] = "2f99e743a235b1c834b19112e4e0283d02da93b863899381466cde47bf159cf6"
SRC_URI[sha256sum] = "30027a2043bbe2faca7849946bb2ed7d5e48c1b9d2638bfa8f5fdef3093c4784"
do_configure_prepend () {
mkdir -p ${S}/build-aux


@@ -0,0 +1,38 @@
From 6178df5658045a6253ef806e018fe80d99a8f5fb Mon Sep 17 00:00:00 2001
From: Yi Fan Yu <yifan.yu@windriver.com>
Date: Mon, 1 Feb 2021 16:10:28 -0500
Subject: [PATCH] tests/codegen.py: removing unecessary print statement
A huge amount of output (boilerplate code) is
printed to the console screen.
This is not critical to the test results.
This causes intermittent test failures when another process
has to parse its output.
The root cause is in ptest-runner; this is a workaround.
Upstream-Status: Inappropriate [other]
Signed-off-by: Yi Fan Yu <yifan.yu@windriver.com>
---
gio/tests/codegen.py | 1 -
1 file changed, 1 deletion(-)
diff --git a/gio/tests/codegen.py b/gio/tests/codegen.py
index 51de0ede4..cfa4db42e 100644
--- a/gio/tests/codegen.py
+++ b/gio/tests/codegen.py
@@ -250,7 +250,6 @@ class TestCodegen(unittest.TestCase):
result = Result(info, out, err, subs)
- print('Output:', result.out)
return result
def runCodegenWithInterface(self, interface_contents, *args):
--
2.29.2


@@ -17,6 +17,7 @@ SRC_URI = "${GNOME_MIRROR}/glib/${SHRT_VER}/glib-${PV}.tar.xz \
file://0001-meson-Run-atomics-test-on-clang-as-well.patch \
file://0001-gio-tests-resources.c-comment-out-a-build-host-only-.patch \
file://0001-gio-tests-codegen.py-bump-timeout-to-100-seconds.patch \
file://0001-tests-codegen.py-removing-unecessary-print-statement.patch \
"
SRC_URI_append_class-native = " file://relocate-modules.patch"


@@ -22,4 +22,4 @@ ARM_INSTRUCTION_SET_armv6 = "arm"
#
COMPATIBLE_HOST_libc-musl_class-target = "null"
PV = "2.32"
PV = "2.33"


@@ -1,6 +1,6 @@
SRCBRANCH ?= "release/2.32/master"
PV = "2.32"
SRCREV_glibc ?= "760e1d287825fa91d4d5a0cc921340c740d803e2"
SRCBRANCH ?= "release/2.33/master"
PV = "2.33"
SRCREV_glibc ?= "9826b03b747b841f5fc6de2054bf1ef3f5c4bdf3"
SRCREV_localedef ?= "bd644c9e6f3e20c5504da1488448173c69c56c28"
GLIBC_GIT_URI ?= "git://sourceware.org/git/glibc.git"


@@ -1,7 +1,7 @@
From 5db90855621a81d02f1434d5602cefea8c45de1c Mon Sep 17 00:00:00 2001
From d1f1671034a222417f9a829dcaa4f0c3d4f8954d Mon Sep 17 00:00:00 2001
From: Jason Wessel <jason.wessel@windriver.com>
Date: Sat, 7 Dec 2019 09:59:22 -0800
Subject: [PATCH 01/29] localedef: Add hardlink resolver from util-linux
Subject: [PATCH] localedef: Add hardlink resolver from util-linux
The hard link resolver that is built into localedef cannot be run in
parallel. It will search sibling directories (which are being processed
@@ -1128,6 +1128,3 @@ index 0000000000..0129a85e2e
+}
+
+#endif
--
2.27.0
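
The point of importing a standalone hardlink resolver is that deduplication can then run once, after all locales have been generated, instead of racing with localedef jobs that are still writing sibling directories. A hedged sketch of the core operation such a resolver performs (illustrative only, not the imported util-linux code):

    #include <unistd.h>

    /* Replace 'dup' with a hard link to 'keep' once the caller has verified
       that the two files are byte-identical. */
    static int hardlink_duplicate(const char *keep, const char *dup)
    {
        if (unlink(dup) != 0)
            return -1;
        return link(keep, dup);
    }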


@@ -1,7 +1,7 @@
From ab022ce3c1c01fd6c850f541a33efd0cacabe052 Mon Sep 17 00:00:00 2001
From 14d256e2db009f8bac9a265e8393d7ed25050df9 Mon Sep 17 00:00:00 2001
From: Jason Wessel <jason.wessel@windriver.com>
Date: Sat, 7 Dec 2019 10:01:37 -0800
Subject: [PATCH 02/29] localedef: fix-ups hardlink to make it compile
Subject: [PATCH] localedef: fix-ups hardlink to make it compile
Upstream-Status: Pending
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
@@ -236,6 +236,3 @@ index 63615896b0..726e6dd948 100644
}
continue;
}
--
2.27.0


@@ -1,8 +1,7 @@
From de4322ef6d4dc9fc3ee9b69af1c10edbc64a66a3 Mon Sep 17 00:00:00 2001
From 32a4b8ae046fe4bb1b19f61378d079d44deaede7 Mon Sep 17 00:00:00 2001
From: Khem Raj <raj.khem@gmail.com>
Date: Wed, 18 Mar 2015 01:48:24 +0000
Subject: [PATCH 03/29] nativesdk-glibc: Look for host system ld.so.cache as
well
Subject: [PATCH] nativesdk-glibc: Look for host system ld.so.cache as well
Upstream-Status: Inappropriate [embedded specific]
@@ -31,10 +30,10 @@ Signed-off-by: Khem Raj <raj.khem@gmail.com>
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/elf/dl-load.c b/elf/dl-load.c
index e39980fb19..565b039b23 100644
index 9e2089cfaa..ad01674027 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -2160,6 +2160,14 @@ _dl_map_object (struct link_map *loader, const char *name,
@@ -2175,6 +2175,14 @@ _dl_map_object (struct link_map *loader, const char *name,
}
}
@@ -42,14 +41,14 @@ index e39980fb19..565b039b23 100644
+ if (fd == -1
+ && ((l = loader ?: GL(dl_ns)[nsid]._ns_loaded) == NULL
+ || __builtin_expect (!(l->l_flags_1 & DF_1_NODEFLIB), 1))
+ && rtld_search_dirs.dirs != (void *) -1)
+ fd = open_path (name, namelen, mode & __RTLD_SECURE, &rtld_search_dirs,
+ && __rtld_search_dirs.dirs != (void *) -1)
+ fd = open_path (name, namelen, mode & __RTLD_SECURE, &__rtld_search_dirs,
+ &realname, &fb, l, LA_SER_DEFAULT, &found_other_class);
+ /* Finally try ld.so.cache */
#ifdef USE_LDCONFIG
if (fd == -1
&& (__glibc_likely ((mode & __RTLD_SECURE) == 0)
@@ -2218,14 +2226,6 @@ _dl_map_object (struct link_map *loader, const char *name,
@@ -2233,14 +2241,6 @@ _dl_map_object (struct link_map *loader, const char *name,
}
#endif
@@ -57,13 +56,10 @@ index e39980fb19..565b039b23 100644
- if (fd == -1
- && ((l = loader ?: GL(dl_ns)[nsid]._ns_loaded) == NULL
- || __glibc_likely (!(l->l_flags_1 & DF_1_NODEFLIB)))
- && rtld_search_dirs.dirs != (void *) -1)
- fd = open_path (name, namelen, mode, &rtld_search_dirs,
- && __rtld_search_dirs.dirs != (void *) -1)
- fd = open_path (name, namelen, mode, &__rtld_search_dirs,
- &realname, &fb, l, LA_SER_DEFAULT, &found_other_class);
-
/* Add another newline when we are tracing the library loading. */
if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_LIBS))
_dl_debug_printf ("\n");
--
2.27.0


@@ -1,8 +1,7 @@
From 258c44e4ecffd830cb89d0016d45b2bac765f559 Mon Sep 17 00:00:00 2001
From aa8393bff257e4badfd208b88473ead175c69362 Mon Sep 17 00:00:00 2001
From: Khem Raj <raj.khem@gmail.com>
Date: Wed, 18 Mar 2015 01:50:00 +0000
Subject: [PATCH 04/29] nativesdk-glibc: Fix buffer overrun with a relocated
SDK
Subject: [PATCH] nativesdk-glibc: Fix buffer overrun with a relocated SDK
When ld-linux-*.so.2 is relocated to a path that is longer than the
original fixed location, the dynamic loader will crash in open_path
@@ -22,10 +21,10 @@ Signed-off-by: Khem Raj <raj.khem@gmail.com>
1 file changed, 12 insertions(+)
diff --git a/elf/dl-load.c b/elf/dl-load.c
index 565b039b23..e1b3486549 100644
index ad01674027..f455207e79 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -1860,7 +1860,19 @@ open_path (const char *name, size_t namelen, int mode,
@@ -1871,7 +1871,19 @@ open_path (const char *name, size_t namelen, int mode,
given on the command line when rtld is run directly. */
return -1;
@@ -45,6 +44,3 @@ index 565b039b23..e1b3486549 100644
do
{
struct r_search_path_elem *this_dir = *dirs;
--
2.27.0
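
The failure mode described above is the classic one of a buffer sized from path lengths fixed at link time: once the relocated SDK path is longer, building a candidate library path writes past the end of it. A hedged sketch of the general defensive pattern for this class of bug (illustrative only; the actual glibc hunk is not reproduced here):

    #include <stdio.h>
    #include <string.h>

    /* Build "<dir>/<name>" into a fixed-size buffer, refusing to write if the
       (possibly relocated, now longer) directory no longer fits. */
    static int build_candidate(char *buf, size_t bufsz,
                               const char *dir, const char *name)
    {
        if (strlen(dir) + 1 + strlen(name) + 1 > bufsz)
            return -1;                  /* would overrun; caller must handle it */
        sprintf(buf, "%s/%s", dir, name);
        return 0;
    }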


@@ -1,8 +1,7 @@
From 19cd858f5f04a6ac584fbd89a2fbc51791263b85 Mon Sep 17 00:00:00 2001
From 3ea08e491a8494ff03e598b5e0fc2d8131e75da9 Mon Sep 17 00:00:00 2001
From: Khem Raj <raj.khem@gmail.com>
Date: Wed, 18 Mar 2015 01:51:38 +0000
Subject: [PATCH 05/29] nativesdk-glibc: Raise the size of arrays containing dl
paths
Subject: [PATCH] nativesdk-glibc: Raise the size of arrays containing dl paths
This patch puts the dynamic loader path in the binaries, SYSTEM_DIRS strings
and lengths as well as ld.so.cache path in the dynamic loader to specific
@@ -18,20 +17,21 @@ Signed-off-by: Khem Raj <raj.khem@gmail.com>
---
elf/dl-cache.c | 4 ++++
elf/dl-load.c | 4 ++--
elf/dl-usage.c | 6 ++++--
elf/interp.c | 2 +-
elf/ldconfig.c | 3 +++
elf/rtld.c | 5 +++--
elf/rtld.c | 1 +
iconv/gconv_conf.c | 2 +-
sysdeps/generic/dl-cache.h | 4 ----
7 files changed, 14 insertions(+), 10 deletions(-)
8 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/elf/dl-cache.c b/elf/dl-cache.c
index 93d185e788..e115b18756 100644
index 32f3bef5ea..71f3a82dc0 100644
--- a/elf/dl-cache.c
+++ b/elf/dl-cache.c
@@ -133,6 +133,10 @@ do \
while (0)
@@ -359,6 +359,10 @@ search_cache (const char *string_table, uint32_t string_table_size,
return best;
}
+const char LD_SO_CACHE[4096] __attribute__ ((section (".ldsocache"))) =
+ SYSCONFDIR "/ld.so.cache";
@@ -41,10 +41,10 @@ index 93d185e788..e115b18756 100644
_dl_cache_libcmp (const char *p1, const char *p2)
{
diff --git a/elf/dl-load.c b/elf/dl-load.c
index e1b3486549..5226d0c4fa 100644
index f455207e79..a144e24fcf 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -111,8 +111,8 @@ static size_t max_capstrlen attribute_relro;
@@ -115,8 +115,8 @@ enum { ncapstr = 1, max_capstrlen = 0 };
gen-trusted-dirs.awk. */
#include "trusted-dirs.h"
@@ -55,8 +55,39 @@ index e1b3486549..5226d0c4fa 100644
{
SYSTEM_DIRS_LEN
};
diff --git a/elf/dl-usage.c b/elf/dl-usage.c
index 6e26818bd7..f09e8b93e5 100644
--- a/elf/dl-usage.c
+++ b/elf/dl-usage.c
@@ -25,6 +25,8 @@
#include <dl-procinfo.h>
#include <dl-hwcaps.h>
+extern const char LD_SO_CACHE[4096] __attribute__ ((section (".ldsocache")));
+
void
_dl_usage (const char *argv0, const char *wrong_option)
{
@@ -244,7 +246,7 @@ setting environment variables (which would be inherited by subprocesses).\n\
--list list all dependencies and how they are resolved\n\
--verify verify that given object really is a dynamically linked\n\
object we can handle\n\
- --inhibit-cache Do not use " LD_SO_CACHE "\n\
+ --inhibit-cache Do not use %s\n\
--library-path PATH use given PATH instead of content of the environment\n\
variable LD_LIBRARY_PATH\n\
--glibc-hwcaps-prepend LIST\n\
@@ -266,7 +268,7 @@ setting environment variables (which would be inherited by subprocesses).\n\
\n\
This program interpreter self-identifies as: " RTLD "\n\
",
- argv0);
+ argv0, LD_SO_CACHE);
print_search_path_for_help (state);
print_hwcaps_subdirectories (state);
print_legacy_hwcap_directories ();
diff --git a/elf/interp.c b/elf/interp.c
index 331cc1df48..885b2d9476 100644
index 91966702ca..dc86c20e83 100644
--- a/elf/interp.c
+++ b/elf/interp.c
@@ -18,5 +18,5 @@
@@ -67,10 +98,10 @@ index 331cc1df48..885b2d9476 100644
+const char __invoke_dynamic_linker__[4096] __attribute__ ((section (".interp")))
= RUNTIME_LINKER;
diff --git a/elf/ldconfig.c b/elf/ldconfig.c
index 0c090dca15..6bb6e0fe72 100644
index 28ed637a29..5d38a60c5d 100644
--- a/elf/ldconfig.c
+++ b/elf/ldconfig.c
@@ -171,6 +171,9 @@ static struct argp argp =
@@ -176,6 +176,9 @@ static struct argp argp =
options, parse_opt, NULL, doc, NULL, more_help, NULL
};
@@ -81,10 +112,10 @@ index 0c090dca15..6bb6e0fe72 100644
a platform. */
static int
diff --git a/elf/rtld.c b/elf/rtld.c
index 5b882163fa..db407b5d8b 100644
index 596b6ac3d9..1ccd33f668 100644
--- a/elf/rtld.c
+++ b/elf/rtld.c
@@ -217,6 +217,7 @@ dso_name_valid_for_suid (const char *p)
@@ -185,6 +185,7 @@ dso_name_valid_for_suid (const char *p)
}
return *p != '\0';
}
@@ -92,24 +123,8 @@ index 5b882163fa..db407b5d8b 100644
static void
audit_list_init (struct audit_list *list)
@@ -1286,13 +1287,13 @@ of this helper program; chances are you did not intend to run this program.\n\
--list list all dependencies and how they are resolved\n\
--verify verify that given object really is a dynamically linked\n\
object we can handle\n\
- --inhibit-cache Do not use " LD_SO_CACHE "\n\
+ --inhibit-cache Do not use %s\n\
--library-path PATH use given PATH instead of content of the environment\n\
variable LD_LIBRARY_PATH\n\
--inhibit-rpath LIST ignore RUNPATH and RPATH information in object names\n\
in LIST\n\
--audit LIST use objects named in LIST as auditors\n\
- --preload LIST preload objects named in LIST\n");
+ --preload LIST preload objects named in LIST\n", LD_SO_CACHE);
++_dl_skip_args;
--_dl_argc;
diff --git a/iconv/gconv_conf.c b/iconv/gconv_conf.c
index 735bd1f2d5..25100ba666 100644
index 682f949834..7eed87bc9d 100644
--- a/iconv/gconv_conf.c
+++ b/iconv/gconv_conf.c
@@ -36,7 +36,7 @@
@@ -122,10 +137,10 @@ index 735bd1f2d5..25100ba666 100644
/* Type to represent search path. */
struct path_elem
diff --git a/sysdeps/generic/dl-cache.h b/sysdeps/generic/dl-cache.h
index 6b310e9e15..3877311df4 100644
index 964d50a486..94bf68ca9d 100644
--- a/sysdeps/generic/dl-cache.h
+++ b/sysdeps/generic/dl-cache.h
@@ -27,10 +27,6 @@
@@ -34,10 +34,6 @@
((flags) == 1 || (flags) == _DL_CACHE_DEFAULT_ID)
#endif
@@ -136,6 +151,3 @@ index 6b310e9e15..3877311df4 100644
#ifndef add_system_dir
# define add_system_dir(dir) add_dir (dir)
#endif
--
2.27.0
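
The mechanism visible in the hunks above is to reserve fixed 4096-byte arrays in dedicated sections (.interp, .ldsocache) so an SDK relocation tool can later rewrite the embedded paths in place without resizing the binary. A minimal sketch of that technique using GCC section attributes (illustrative value and name, not the patch itself):

    /* Reserve a full 4096-byte, named section for the path so a post-link
       tool can patch in a longer relocated path without moving other data. */
    #define RELOC_PATH_MAX 4096

    const char example_cache_path[RELOC_PATH_MAX]
        __attribute__ ((section (".ldsocache"))) = "/etc/ld.so.cache";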


@@ -1,7 +1,7 @@
From bd0486cab67c3441210aed48caab67418610a765 Mon Sep 17 00:00:00 2001
From 19e3e45eb1838ee80af13c3d27fcff446773211e Mon Sep 17 00:00:00 2001
From: Khem Raj <raj.khem@gmail.com>
Date: Thu, 31 Dec 2015 14:35:35 -0800
Subject: [PATCH 06/29] nativesdk-glibc: Allow 64 bit atomics for x86
Subject: [PATCH] nativesdk-glibc: Allow 64 bit atomics for x86
The fix consists of allowing 64-bit atomic ops for x86.
This should be safe for i586 and newer CPUs.
@@ -17,11 +17,11 @@ Signed-off-by: Khem Raj <raj.khem@gmail.com>
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/sysdeps/x86/atomic-machine.h b/sysdeps/x86/atomic-machine.h
index bb49648374..aa08d3c0a7 100644
index 695222e4fa..9d39bfdbd5 100644
--- a/sysdeps/x86/atomic-machine.h
+++ b/sysdeps/x86/atomic-machine.h
@@ -58,15 +58,14 @@ typedef uintmax_t uatomic_max_t;
#endif
@@ -52,15 +52,14 @@ typedef uintmax_t uatomic_max_t;
#define LOCK_PREFIX "lock;"
#define USE_ATOMIC_COMPILER_BUILTINS 1
+# define __HAVE_64B_ATOMICS 1
@@ -37,6 +37,3 @@ index bb49648374..aa08d3c0a7 100644
# define SP_REG "esp"
# define SEG_REG "gs"
# define BR_CONSTRAINT "r"
--
2.27.0
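
With __HAVE_64B_ATOMICS set to 1, glibc is free to use 8-byte atomic operations even on 32-bit x86; the claim that this is safe on i586 and newer rests on those CPUs providing cmpxchg8b, which the compiler's atomic builtins map to. A hedged sketch of such an operation (illustrative, not glibc code):

    #include <stdint.h>

    /* On 32-bit x86 (i586 and newer) this compiles down to lock cmpxchg8b. */
    static int bump_if_unchanged(volatile uint64_t *counter, uint64_t expected)
    {
        return __atomic_compare_exchange_n(counter, &expected, expected + 1,
                                           0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    }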

Some files were not shown because too many files have changed in this diff.